AI agents are supposed to make work easier. But they're also creating a whole new category of security nightmares.
As companies deploy AI-powered chatbots, agents, and co-pilots across their operations, they face new risks. How do you empower your employees and AI agents to use powerful AI tools without accidentally leaking sensitive data, violating compliance rules, or opening the door to prompt injection attacks? Witness AI raised $58 million to tackle that problem and build what it calls the "Enterprise AI Trust Layer."
Today on TechCrunch's Equity podcast, Rebecca Bellan joins Barmak Meftah, co-founder and partner at Ballistic Ventures, and Rick Caccia, CEO of Witness AI, to discuss what enterprises are really concerned about, why AI security could become an $800 billion to $1.2 trillion market by 2031, and what happens when AI agents start conversations with other AI agents without human supervision.
Listen to the full episode to hear more about:

- How companies accidentally leak sensitive data through "shadow AI."
- What CISOs are concerned about now, how the issue has evolved over the past 18 months, and what it will look like in the year ahead.
- Why traditional cybersecurity approaches won't work for AI agents.
- Real-world examples of AI agents engaging in fraudulent behavior, such as threatening employees.
Subscribe to Equity on YouTube, Apple Podcasts, Overcast, Spotify, and all the casts. You can also follow Equity on X and Threads at @EquityPod.

