AI poses a dilemma for businesses. If you don't implement it, you could miss out on productivity gains and other potential benefits. But if you do it incorrectly, you can expose your business and your customers to unmitigated risks. Enter a new wave of "security for AI" startups, founded on the premise that threats such as jailbreaks and prompt injection cannot be ignored.
Mindgard, a British university spin-off, is one of them, as are Israeli startup Noma and US-based competitors HiddenLayer and Protect AI. "AI is still software, so all the cyber risks that you've probably heard of apply to AI as well," says CEO and CTO Professor Peter Garraghan (pictured above, right). But, he added, the "opaque nature and inherently random behavior of neural networks and systems" also justifies a new approach.
For Mindgard, that approach is Dynamic Application Security Testing for AI (DAST-AI), which targets vulnerabilities that can only be detected at runtime. This involves continuous, automated red teaming: simulating attacks drawn from Mindgard's threat library. For example, it can test whether an image classifier is robust against adversarial inputs.
In that and more, Mindgard's technology owes a debt to Garraghan's background as a professor and researcher focused on AI security, a field that is evolving rapidly. ChatGPT didn't exist when he entered it, but he sensed that NLP and image models would face new threats, he told TechCrunch.
Since then, what seemed like a vision of the future has become a reality in a fast-growing sector, but threats, like LLMs themselves, continue to change. Garraghan believes the company's continued relationship with Lancaster University will help it keep pace: Mindgard will automatically own the intellectual property produced by an additional 18 PhD researchers over the next few years. "No other company in the world would get a deal like this."
Despite its research roots, Mindgard already operates as a commercial product, more precisely a SaaS platform, with co-founder Steve Street leading the charge as COO and CRO. (Early co-founder Neeraj Suri, who was involved on the research side, is no longer with the company.)
Businesses that already rely on red teamers and pen testers are Mindgard's natural customers, but the company also works with AI startups that need to demonstrate to their customers that they take AI risk prevention seriously, Garraghan said.
Since many of these prospects are based in the US, the company has added an American element to its cap table. After raising a £3 million seed round in 2023, Mindgard has now announced a new $8 million round led by Boston-based .406 Ventures, with participation from existing investors IQ Capital and Lakestar.
The funding will help with "team building, product development, R&D, and everything you'd expect from a startup," as well as expansion into the United States. Former Next DLP CMO Fergal Glynn, who was recently appointed VP of Marketing, is based in Boston. However, the company plans to keep its R&D and engineering operations in London.
Mindgard's team is relatively small at 15 people and is expected to stay lean, reaching only 20 to 25 by the end of next year. That's because AI security "hasn't even hit its prime yet." But when AI is deployed everywhere, and security threats follow, Mindgard intends to be ready. Garraghan said: "We founded this company to do good in the world, and the good here is that people can trust and use AI safely."