Lakera, a Swiss startup developing technology to protect generative AI applications from malicious prompts and other threats, has raised $20 million in a Series A round led by European venture capital firm Atomico.
Generative AI has emerged as a leading figure in the burgeoning AI movement, driven by popular apps such as ChatGPT, but it remains a concern in enterprise environments, mainly due to issues around security and data privacy.
For context, large-scale language models (LLMs) are the engine behind generative AI, enabling machines to understand and generate text like humans. But if you want such an application to write poetry or summarize a legal contract, it needs instructions to guide the output. However, these “prompts” can be constructed in a way that tricks the application into doing things it's not supposed to, like leaking sensitive data used for training or granting unauthorized access to private systems. Such “prompt injection” is a real and growing concern, and one that Lakera specifically aims to address.
quick response
Founded in Zurich in 2021, Lakera promises to protect organizations from security weaknesses in LLMs such as data leaks and prompt injections, and officially launched last October with $10 million in funding. It works with any LLM, including OpenAI's GPT-X, Google's Bard, Meta's LLaMA, and Anthropic's Claude.
Lakera is essentially pitched as a “low-latency AI application firewall” that protects traffic going to and from generative AI applications.
The company's first product, Lakera Guard, is built on a database that collects insights from a variety of sources, including publicly available “open source” datasets like those hosted on Hugging Face, in-house machine learning research, and even an intriguing interactive game called Gandalf, which tricks users into revealing secret passwords.
Gandalf by Lakera Image courtesy of Lakera
The game gets more sophisticated (and therefore more difficult to “hack”) as the levels progress, but these interactions have allowed Lakera to build what it calls a “prompt injection taxonomy” that breaks such attacks down into categories.
“We're AI-first, building proprietary models that detect malicious attacks like prompt injection in real time,” Lakera co-founder and CEO David Haber explained to TechCrunch. “Our models continuously learn what malicious interactions look like from large volumes of generative AI interactions. As a result, our detection models are continually improved and evolving to adapt to the new threat landscape.”
Laker guards in action Image courtesy of Lakera
Lakera says that by integrating its applications with the Lakera Guard API, businesses can better protect against malicious prompts, but the company has also developed specialized models that scan prompts and application output for harmful content, with dedicated detectors for hate speech, sexual content, violence and profanity.
“These detectors are particularly useful for public-facing applications such as chatbots, but they have use in other settings as well,” Haber says.
Similar to the Prompt Defense toolset, companies can integrate Lakera’s content moderation capabilities with a single line of code and access a centralized policy control dashboard to fine-tune the thresholds they set for different types of content.
Lakera Guard content moderation controls Image credit: Lakera
With $20 million fresh in the bank, Lakera is gearing up to expand its global footprint, particularly in the U.S. The company already has a number of fairly high-profile clients in North America, including U.S.-based AI startup Respell and Canadian mega-unicorn Cohere.
“Large enterprises, SaaS companies and AI model providers are racing to deploy secure AI applications,” says Haber. “Financial services organizations are early adopters because they understand the security and compliance risks, but interest is growing across the industry. Most companies realize they need to embed GenAI into their core business processes to stay competitive.”
Lakera's Series A round was led by lead sponsor Atomico, along with participation from Dropbox's VC arm, Citi Ventures and Redalpine.