Generative AI makes things up. It can be biased. Sometimes it spews out harmful text. So can it really be called “safe”?
Rick Caccia, CEO of WitnessAI, believes it's possible.
“Protecting AI models is a real problem, and one of particular concern to AI researchers, but it's different from protecting their use,” Caccia, formerly senior vice president of marketing at Palo Alto Networks, told TechCrunch in an interview. “I think of it like a sports car: a more powerful engine, meaning the model, buys you nothing without good brakes and steering. The controls are just as important as the engine.”
There is certainly demand for such controls among businesses: while they are cautiously optimistic about generative AI's productivity-boosting potential, they are concerned about the technology's limitations.
An IBM poll found that 51% of CEOs are hiring for generative AI-related roles that didn't exist until this year. Yet according to research from Riskonnect, only 9% of companies say they are prepared to deal with threats arising from the use of generative AI, including privacy and intellectual property threats.
WitnessAI's platform intercepts activity between employees and the custom generative AI models their employer uses (models along the lines of Meta's Llama 3, rather than models gated behind APIs like OpenAI's GPT-4), and applies risk-mitigating policies and safeguards.
“One of the promises of enterprise AI is that it unlocks and democratizes enterprise data to employees, empowering them to do their jobs better,” Caccia said. “But unlocking too much sensitive data, or having it leaked or stolen, is a problem.”
WitnessAI sells access to several modules, each focused on tackling a different form of generative AI risk. One lets organizations implement rules that prevent staff on particular teams from using generative AI-powered tools in ways they're not supposed to (for example, asking about pre-release earnings reports or pasting in internal codebases). Another redacts proprietary and sensitive information from the prompts sent to models, and implements techniques to shield models from attacks that might force them off script.
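WitnessAI hasn't published implementation details, but the intercept-and-screen idea can be illustrated in a few lines. The sketch below is a hypothetical gateway, not WitnessAI's actual product: it checks each prompt against per-team rules and redacts sensitive-looking strings before anything is forwarded to a model. All team names, rules, and patterns are invented for illustration.

```python
# Hypothetical sketch of an AI-activity gateway in the spirit of what
# WitnessAI describes: intercept each prompt, apply team-based policy
# rules, and redact sensitive material before it reaches the model.
# None of these rules or names come from WitnessAI; they are illustrative.
import re

# Example team policies: regexes a given team's prompts must not match.
BLOCKED_PATTERNS = {
    "sales": [re.compile(r"pre-release (earnings|revenue)", re.I)],
    "engineering": [re.compile(r"BEGIN INTERNAL SOURCE", re.I)],
}

# Example redaction rules applied to every prompt (credential-like strings, IDs).
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[REDACTED-KEY]"),
]

def screen_prompt(team: str, prompt: str) -> str:
    """Raise if the prompt violates the team's policy; otherwise return
    a redacted copy that is safe to forward to the model."""
    for pattern in BLOCKED_PATTERNS.get(team, []):
        if pattern.search(prompt):
            raise PermissionError(f"Prompt blocked by policy for team '{team}'")
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

# Usage: the gateway screens the prompt before calling the actual model.
safe = screen_prompt("engineering", "Summarize this. api_key=abc123")
print(safe)  # "Summarize this. [REDACTED-KEY]"
```

In a real deployment the rules would presumably come from a policy engine and an identity provider rather than hard-coded tables, but the control point is the same: the prompt is screened in flight, before the model ever sees it.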
“We believe the best way to help companies is to define the problem in a way that makes sense, such as the secure deployment of AI, and then sell a solution that addresses it,” Caccia said. “CISOs want to protect the business, and WitnessAI helps them do that by ensuring data protection, preventing prompt injection, and enforcing identity-based policies. Privacy officers want to ensure that existing and upcoming regulations are being followed, and we give them visibility and a way to report on activity and risk.”
There is one caveat to WitnessAI from a privacy perspective, however: all data passes through its platform before reaching a model. The company is transparent about this, and even offers tools to monitor which models employees access, the questions they ask those models, and the answers they receive. But this could create its own privacy risks.
In response to questions about WitnessAI's privacy policy, Caccia said the platform is “isolated” and encrypted to prevent customer secrets from being exposed to the public.
“We've built a millisecond-latency platform with regulatory separation built right in, a unique, isolated design that protects enterprise AI activity in a way that's fundamentally different from typical multi-tenant software-as-a-service offerings,” he said. “We create a separate instance of the platform for each customer and encrypt it with their keys. Their AI activity data is isolated to them; we can't see it.”
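To make the isolation claim concrete, here is a minimal sketch, assuming nothing about WitnessAI's internals, of the single-tenant pattern Caccia describes: activity records encrypted with a key the customer controls, so the vendor stores only ciphertext. It uses the Python `cryptography` package's Fernet primitive; the class and method names are invented for illustration.

```python
# Illustrative sketch (not WitnessAI's code) of per-customer encryption:
# each customer's activity log is encrypted with a key only that
# customer holds, so the operator of the store sees only ciphertext.
from cryptography.fernet import Fernet

class TenantActivityStore:
    """One instance per customer; the key is assumed to be supplied and
    controlled by the customer, never retained by the vendor."""

    def __init__(self, customer_key: bytes):
        self._cipher = Fernet(customer_key)
        self._records: list[bytes] = []  # ciphertext only

    def log(self, prompt: str, response: str) -> None:
        # Encrypt the prompt/response pair before it is ever stored.
        record = f"{prompt}\t{response}".encode()
        self._records.append(self._cipher.encrypt(record))

    def export(self, customer_key: bytes) -> list[str]:
        # Only a holder of the customer's key can decrypt the records.
        cipher = Fernet(customer_key)
        return [cipher.decrypt(r).decode() for r in self._records]

# Usage: the customer generates and keeps the key.
key = Fernet.generate_key()
store = TenantActivityStore(key)
store.log("Draft a memo", "Here is a memo...")
print(store.export(key))
```

If the decryption key never leaves the customer, even an operator with full database access sees only ciphertext, which is the property the “invisible to us” claim depends on.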
That may ease customers' concerns. For workers worried about the surveillance potential of WitnessAI's platform, it's a tougher call.
Surveys have shown that people generally dislike having their workplace activity monitored, regardless of the reason, and believe it hurts company morale. Nearly a third of respondents to a Forbes survey said they would consider quitting their job if their employer monitored their online activity and communications.
However, Caccia claims that interest in WitnessAI's platform remains strong, with a pipeline of 25 early enterprise users in the proof-of-concept stage. (It won't be publicly available until Q3.) And in a vote of confidence from VCs, WitnessAI has raised $27.5 million from Ballistic Ventures (which incubated WitnessAI) and GV, Google's corporate venture arm.
The plan is to use the funding to grow WitnessAI's 18-person team to 40 by the end of the year. Growth will certainly be key to beating back WitnessAI's rivals in the nascent space of model compliance and governance solutions, including tech giants like AWS, Google, and Salesforce as well as startups such as CalypsoAI.
“We've built our plan to get well into 2026 even with no sales at all, but we already have nearly 20 times the pipeline needed to hit our sales targets this year,” Caccia said. “This is our first funding round and public launch, but secure AI enablement and use is a new field, and all of our capabilities are being developed for this new market.”