This week's Rubrik IPO filing includes a notable disclosure, tucked between the headcount and cost breakdown sections, that sheds light on how the data management company is thinking about the risks associated with generative AI and emerging technologies: Rubrik has quietly established a governance committee to oversee how artificial intelligence is implemented in its business.
According to the Form S-1, the new AI governance committee includes managers from Rubrik's engineering, product, legal, and information security teams. The filing states that these teams will work together to assess the potential legal, security, and business risks associated with the use of generative AI tools and to "consider steps that can be taken to mitigate any such risks."
To be clear, Rubrik is not inherently an AI business. Its sole AI product, a chatbot called Ruby that it launched in November 2023, is built on Microsoft and OpenAI APIs. But like many other companies, Rubrik (along with its current and future investors) is anticipating a future in which AI plays an increasingly large role in its business. Here's why AI governance could become the new normal.
Increased regulatory scrutiny
While some companies are taking the lead by adopting AI best practices, others will be forced to do so by regulations such as the EU AI Act.
The landmark law, dubbed the "world's first comprehensive AI law," is expected to take effect across the region later this year. It bans certain AI use cases deemed to pose "unacceptable risk" and defines other "high risk" applications. The legislation also lays out governance rules intended to mitigate risks such as bias and discrimination. This risk-based approach is likely to be widely adopted by companies looking for a sensible path to AI adoption.
Eduardo Ustaran, a privacy and data protection lawyer and partner at Hogan Lovells International LLP, expects the EU AI Act and its myriad obligations to amplify the need for AI governance, and in turn for committees. "Apart from its strategic role in devising and overseeing an AI governance program, from an operational perspective, an AI governance committee is a key tool in addressing and minimizing risks," he said. "This is because a properly established and resourced committee should be able to collectively anticipate the full range of risks and work with the business to address them before they materialize. In a sense, an AI governance committee will serve as the foundation for all other governance efforts and provide much-needed reassurance to avoid compliance gaps."
In a recent policy paper on the corporate governance implications of the EU AI Act, ESG and compliance consultant Katarina Miller concurred, recommending that companies establish AI governance committees as a compliance measure.
Legal scrutiny
Compliance isn't merely about pleasing regulators. Once the EU AI Act is in force, "the penalties for non-compliance with the AI Act are significant," British-American law firm Norton Rose Fulbright noted.
Its scope also extends beyond Europe. "Companies operating outside the EU may be subject to the provisions of the AI Act if they carry out AI-related activities involving EU users or data," the law firm warned. If it is anything like the GDPR, the legislation will have international ramifications, especially as cooperation between the EU and the U.S. on AI increases.
AI tools can also land a company in trouble beyond AI-specific legislation. Rubrik declined to comment to TechCrunch, likely because of its IPO quiet period, but the company's filing notes that its AI governance committee is assessing a broad range of risks.
The risks under consideration include the use of generative AI tools involving sensitive information; personal data and privacy; customer data and contractual obligations; open source software; copyright and other intellectual property rights; transparency; the accuracy and reliability of output; and security.
To be fair, Rubrik's desire to cover its legal bases could stem from any number of reasons. For example, the committee may also exist to show that the company is responsibly anticipating issues, which matters given that Rubrik has previously dealt with a data leak and hack, as well as intellectual property litigation.
A question of optics
Of course, companies won't look at AI solely through the lens of risk prevention. There are opportunities that they and their customers don't want to miss. That's one reason generative AI tools are being rolled out despite obvious flaws such as "hallucinations" (the tendency to fabricate information).
It will be a fine balance for companies to strike. On one hand, touting the use of AI can boost a company's valuation, regardless of how real that use is or what difference it makes to the bottom line. On the other hand, companies need to stay level-headed about the potential risks.
"We are currently at a critical point in the evolution of AI, and the future of AI depends in large part on whether the public trusts AI systems and the companies that use them," Adomas Siudika, a privacy advisor at OneTrust, a privacy and security software provider, wrote in a blog post on the topic.
Establishing an AI governance committee is likely to be at least one way to try to help on the trust front.