ARTICLE AD
Lakera, a Swiss startup that’s building technology to protect generative AI applications from malicious prompts and other threats, has raised $20 million in a Series A round led by European venture capital firm, Atomico.
Generative AI has emerged as the poster child of the burgeoning AI movement, driven by popular apps such as ChatGPT. But it remains a cause for concern within enterprise settings, largely due to issues around security and data privacy.
For context, large language models (LLMs) are the engines behind generative AI and enable machines to understand and generate text just like a human. But whether you want such an application to write a poem or summarize a legal contract, it needs instructions to guide its output. These “prompts,” however, can be constructed in such a way as to trick the application into doing something it’s not supposed to, such as divulging confidential data that was used to train it, or give unauthorized access to private systems. Such “prompt injections” are a real and growing concern, and are specifically what Lakera is setting out to address.
Prompt response
Founded out of Zurich in 2021, Lakera officially launched last October with $10 million in funding, with the express promise to protect organizations from LLM security weaknesses such as data leakage or prompt injections. It works with any LLM, including OpenAI’s GPT-X, Google’s Bard, Meta’s LLaMA, and Anthropic’s Claude.
At its core, Lakera is pitched as a “low-latency AI application firewall” that secures traffic into and out of generative AI applications.
The company’s inaugural product, Lakera Guard, is built on a database that collates insights from myriad sources, including publicly available “open source” data sets such as those hosted on Hugging Face, in-house machine learning research, and a curious interactive game it developed called Gandalf, which invites users to attempt to trick it into revealing a secret password.
Lakera’s GandalfImage Credits: LakeraThe game gets more sophisticated (and thus more difficult to “hack”) as the levels progress. But these interactions have enabled Lakera to build what it calls a “prompt injection taxonomy” that separates such attacks into categories.
“We are AI-first, building our own models to detect malicious attacks such as prompt injections in real time,” Lakera’s co-founder and CEO David Haber explained to TechCrunch. “Our models continuously learn from large amounts of generative AI interactions what malicious interactions look like. As a result, our detector models continuously improve and evolve with the emerging threat landscape.”
Laker Guard in actionImage Credits: LakeraLakera says that by integrating their application with the Lakera Guard API, companies can better safeguard against malicious prompts. However, the company has also developed specialized models that scan prompts and application outputs for toxic content, with dedicated detectors for hate speech, sexual content, violence and profanities.
“These detectors are particularly useful for publicly-facing applications, for example chatbots, but are used in other settings as well,” Haber said.
Similar to its prompt defense toolset, companies can integrate Lakera’s content moderation smarts with a single line of code, and can get access to a centralized policy control dashboard to fine-tune the thresholds they want to set according to the content type.
Lakera Guard content moderation controlsImage Credits: LakeraWith a fresh $20 million in the bank, Lakera is now primed to expand its global presence, particularly in the U.S. The company already claims a number of fairly high-profile customers in North America, including U.S.-based AI startup Respell as well as Canadian mega-unicorn Cohere.
“Large enterprises, SaaS companies and AI model providers are all racing to roll out secure AI applications,” Haber said. “Financial services organizations understand the security and compliance risks and are early adopters, but we are seeing interest across industries. Most companies know they need to incorporate GenAI into their core business processes to stay competitive.”
Aside from lead backer Atomico, Lakera’s Series A round included participation from Dropbox’s VC arm, Citi Ventures and Redalpine.