It doesn't take long for GenAI to spout falsehoods.
Last week, chatbots from Microsoft and Google declared a Super Bowl winner before the game had even started. The real problems begin, though, when GenAI's hallucinations become harmful: endorsing torture, reinforcing ethnic and racial stereotypes, and writing persuasive conspiracy theories.
From established players like Nvidia and Salesforce to startups like CalypsoAI, a growing number of vendors offer products they claim can mitigate unwanted and harmful content from GenAI. But they're black boxes: without testing each one independently, it's impossible to know how these hallucination-fighting products compare, or whether they actually live up to their claims.
Shreya Rajpal saw this as a big problem and founded a company called Guardrails AI to solve it.
“Most organizations…are struggling with the same set of challenges around responsibly deploying AI applications, and struggle to figure out what is the best and most efficient solution,” Rajpal told TechCrunch in an email interview. “They often end up reinventing the wheel in terms of managing the set of risks that are important to them.”
Rajpal points to research suggesting that complexity, and the risk that comes with it, is the biggest barrier standing in the way of organizations adopting GenAI.
A recent poll from Cnvrg.io, an Intel subsidiary, found that compliance and privacy, reliability, high implementation costs, and a lack of technical skills were common concerns among the roughly quarter of companies deploying GenAI apps. In a separate survey from Riskonnect, a risk management software provider, more than half of executives said they worried about employees making decisions based on inaccurate information from GenAI tools.
Rajpal previously worked at self-driving startup Drive.ai and, after Apple acquired Drive.ai, in Apple's special projects group. She co-founded Guardrails with Diego Oppenheimer, Safeer Mohiuddin, and Zayd Simjee. Oppenheimer previously led Algorithmia, a machine learning operations platform, while Mohiuddin and Simjee held tech and engineering leadership roles at AWS.
In some ways, what Guardrails offers isn't so different from what's already on the market. The startup's platform acts as a wrapper around GenAI models, specifically open source and proprietary text-generating models (such as OpenAI's GPT-4), ostensibly making them more trustworthy, reliable, and secure.
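Guardrails' own API isn't shown in this article, but the wrapper idea itself is simple: intercept a model's output and run checks on it before anything reaches the user. Here's a minimal, hypothetical sketch of that pattern in Python; `call_model`, the email-redaction rule, and every other name here are illustrative stand-ins, not Guardrails code.

```python
import re

def call_model(prompt: str) -> str:
    # Stand-in for a real GenAI call (e.g., a request to a hosted text model).
    # Returns canned text so the example runs without an API key.
    return "Sure! Contact me at jane.doe@example.com for details."

# Toy check: treat any email address in the output as sensitive information.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def guarded_generate(prompt: str) -> str:
    """Wrap the raw model call with a post-hoc output check."""
    output = call_model(prompt)
    # A real guardrail might retry, raise, or route to a human instead of redacting.
    return EMAIL_RE.sub("[REDACTED EMAIL]", output)

print(guarded_generate("Summarize the support ticket."))
# -> "Sure! Contact me at [REDACTED EMAIL] for details."
```

A production wrapper would also validate inputs and decide, per check, whether to redact, retry the model, or fail loudly.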
But what sets Guardrails apart is its open source business model (the platform's codebase is available on GitHub, free to use) and its crowdsourced approach.
Through a marketplace called the Guardrails Hub, Guardrails lets developers submit modular components called “validators,” which probe GenAI models for specific behavioral, compliance, and performance metrics. Validators can be deployed, repurposed, and reused by other developers and Guardrails customers, serving as the building blocks for custom GenAI model-moderation solutions.
“With the Hub, our goal is to create an open forum to share knowledge and find the most effective methods [of furthering] AI adoption: it's not just about deploying AI, it's also about building a reusable set of guardrails that any organization can deploy,” Rajpal said.
Validators in the Guardrails Hub range from simple rule-based checks to algorithms that detect and mitigate problems in models. There are currently around 50, spanning hallucination and policy-violation detectors to filters for sensitive information and unsafe code.
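To make the building-block idea concrete, here's a small, hypothetical sketch of what a modular validator interface could look like. This is not the Hub's actual interface: the `ValidationResult` type, the toy profanity list, and the `max_length` factory are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ValidationResult:
    passed: bool
    reason: str = ""

# A "validator" here is simply a named check from model output to a result.
Validator = Callable[[str], ValidationResult]

def no_profanity(text: str) -> ValidationResult:
    banned = {"darn", "heck"}  # toy word list for illustration
    hits = [w for w in text.lower().split() if w in banned]
    return ValidationResult(passed=not hits,
                            reason=f"banned words: {hits}" if hits else "")

def max_length(limit: int) -> Validator:
    """A parameterized validator: the same check, with a per-organization threshold."""
    def check(text: str) -> ValidationResult:
        ok = len(text) <= limit
        return ValidationResult(passed=ok,
                                reason="" if ok else f"output exceeds {limit} characters")
    return check

def run_validators(output: str, validators: list[Validator]) -> list[ValidationResult]:
    # Run every check against a model's output and collect the verdicts.
    return [validate(output) for validate in validators]

results = run_validators("A short, clean reply.", [no_profanity, max_length(200)])
print(all(r.passed for r in results))  # True
```

The parameterized `max_length` check hints at the customization Rajpal describes next: a shared validator can ship with sensible defaults while letting each organization tune thresholds to its own policies.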
“Most companies do broad, uniform checks for things like profanity and personally identifiable information,” Rajpal said. “But there's no universal definition of what constitutes acceptable use for a specific organization or team. There are organization-specific risks that need to be tracked; for example, communication policies can differ across organizations. With the Hub, people will be able to use the solutions we've created right away, or take them as a powerful starting point to customize further to their specific needs.”
The model guardrail hub is an interesting idea. But the skeptic in me wonders why developers would bother contributing to a platform, and one that is still in its infancy, without the promise of some form of compensation.
Rajpal is optimistic that developers will contribute regardless, whether out of a selfless desire to help build a “safer” GenAI or simply for the recognition.
“The Hub allows developers to see the types of risks other enterprises are encountering and the guardrails they're putting in place to address and mitigate those risks,” she added. “The validators are an open source implementation of those guardrails that organizations can apply to their own use cases.”
Guardrails AI doesn't yet charge for its services or software, but the company recently raised $7.5 million in a seed round led by Zetta Venture Partners. Rajpal says the proceeds will go toward expanding Guardrails' six-person team and additional open source projects.
“We talk to so many people, including enterprises, small startups, and individual developers, who are stuck on shipping GenAI applications because they lack the assurance and risk mitigation they need,” she continued. “This is a new problem that hasn't existed at this scale, because of the emergence of ChatGPT and foundation models everywhere. We want to be the ones to solve it.”