While lawmakers in most countries are still debating how to put guardrails around artificial intelligence, the European Union is ahead of the curve by passing a risk-based framework to regulate AI apps earlier this year.
The law came into force in August, but the details of the pan-EU AI governance regime are still being worked out; a code of practice, for example, is still being drawn up. Even so, the compliance countdown has already begun, with the law's staggered provisions set to apply to makers of AI apps and models over the coming months and years.
The next challenge is assessing whether and how AI models are meeting their legal obligations. Large language models (LLMs) and other so-called foundation or general-purpose AI models power most AI apps, so it makes sense to focus evaluation efforts on this layer of the AI stack.
Enter LatticeFlow AI, a spinout from public research university ETH Zurich that is focused on AI risk management and compliance.
On Wednesday, the company published what it touted as the first technical interpretation of the EU AI Act, meaning it has sought to map regulatory requirements to technical ones, alongside an open-source LLM validation framework that draws on this work, which the company calls Compl-AI ("compl-ai"… see what they did there!).
The AI model evaluation initiative, which they also bill as "the first regulation-oriented LLM benchmarking suite," is the result of a long-term collaboration between the Swiss Federal Institute of Technology and Bulgaria's Institute for Computer Science, Artificial Intelligence and Technology (INSAIT), per LatticeFlow.
AI model creators can use the Compl-AI site to request an evaluation of whether their technology complies with the requirements of the EU AI Act.
LatticeFlow has also published model evaluations of several mainstream LLMs, including various versions and sizes of Meta's Llama models and OpenAI's GPT, along with an EU AI Act compliance leaderboard for Big AI.
The latter ranks the performance of models from Anthropic, Google, OpenAI, Meta, Mistral, and others against legal requirements on a scale of 0 (no compliance) to 1 (full compliance).
Evaluations are marked N/A where data is lacking or where the model maker does not make the capability available. (Note: at the time of writing there were also some negative scores, which LatticeFlow said were due to a bug in the Hugging Face interface.)
LatticeFlow's framework evaluates LLM responses across 27 benchmarks, including "Harmful Completion of Innocuous Text," "Biased Answers," "Following Harmful Instructions," "Truthfulness," and "Commonsense Reasoning," to name a few of the categories it uses for the evaluations. Each model therefore gets a range of scores in each column (or else N/A).
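To make the scoring scheme concrete, here is a minimal sketch of how per-benchmark results on the 0 (no compliance) to 1 (full compliance) scale might be rolled up into per-category leaderboard scores, with missing results surfaced as N/A. The benchmark names, category grouping, and simple-mean aggregation below are illustrative assumptions, not LatticeFlow's actual Compl-AI implementation.

```python
from statistics import mean

# Hypothetical per-benchmark results on the 0 (no compliance) to 1 (full
# compliance) scale described above; None marks a benchmark that could not be run.
results = {
    "biased_answers": 0.82,
    "following_harmful_instructions": 0.97,
    "truthfulness": 0.61,
    "commonsense_reasoning": 0.68,
    "watermark_reliability": None,
}

# Illustrative grouping of benchmarks into leaderboard categories (assumed, not
# taken from the Compl-AI framework).
categories = {
    "fairness": ["biased_answers"],
    "safety": ["following_harmful_instructions"],
    "capability": ["truthfulness", "commonsense_reasoning"],
    "transparency": ["watermark_reliability"],
}

def category_score(benchmarks: list[str]) -> str:
    """Average the available benchmark scores; report N/A if none were run."""
    scores = [results[b] for b in benchmarks if results.get(b) is not None]
    return f"{mean(scores):.2f}" if scores else "N/A"

for name, benchmarks in categories.items():
    print(f"{name}: {category_score(benchmarks)}")
```

Run against these made-up numbers, the sketch prints a score per category and "N/A" for transparency, mirroring how gaps show up on the leaderboard when a capability is not exposed by the model maker.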
AI compliance landscape varies
So how did the major LLMs fare? There is no overall model score, so performance varies depending on exactly what is being evaluated, but there were some notable highs and lows across the various benchmarks.
For example, every model performed strongly on not following harmful instructions; and while they all did relatively well on not giving biased answers, their reasoning and general knowledge scores were far more mixed.
Elsewhere, recommendation consistency, which the framework uses as a measure of fairness, was particularly poor across all models, with none scoring above the midpoint (and most scoring well below it).
Other areas such as training data suitability, watermark reliability and robustness appear to be essentially unevaluated given the number of results marked N/A.
LatticeFlow notes that model compliance is more difficult to assess in certain areas, such as hot-button issues like copyright and privacy, so it is not claiming to have all the answers.
In a paper detailing their work on the framework, the scientists involved in the project note that most of the smaller models they evaluated (13B parameters or fewer) "scored poorly in terms of technical robustness and safety."
They also found that “nearly all of the models examined struggle to achieve high levels of diversity, nondiscrimination, and equity.”
"These shortcomings are primarily due to model providers placing a disproportionate emphasis on improving their models' capabilities at the expense of other important aspects highlighted by the EU AI Act's regulatory requirements," they added, suggesting that as compliance deadlines start to bite, LLM makers will be forced to shift their focus to those areas of concern, "leading to a more balanced development of LLMs."
LatticeFlow's framework is necessarily a work in progress, given that no one yet knows exactly what will be required to comply with the EU AI Act. It is also only one interpretation of how the law's requirements can be translated into technical outputs that can be benchmarked and compared. But it is an interesting start to what will need to be an ongoing effort to probe powerful automation technologies and steer their developers toward safer utility.
"While this framework is a first step toward a fully compliance-centered assessment of the EU AI Act, it is designed to be easily updated as the Act is updated and the various working groups make progress," Petar Tsankov, CEO of LatticeFlow, told TechCrunch. "The EU Commission supports this, and we look forward to the community and industry continuing to develop the framework toward a full and comprehensive AI Act assessment platform."
Summarizing the main takeaways so far, Tsankov said it is clear that AI models are “primarily optimized for functionality rather than compliance.” He also warned of a “significant performance gap,” noting that some high-performance models may be on par with lower-performance models when it comes to compliance.
Cyber-resilience (at the model level) and fairness are areas of particular concern, Tsankov said, with many models scoring below 50% in the former area.
"While Anthropic and OpenAI have successfully tuned their (closed) models to score against jailbreaks and prompt injections, open-source vendors like Mistral have placed less emphasis on this," he said.
And since “most models” perform similarly poorly on fairness benchmarks, he suggested this should be a priority for future research.
Regarding the challenges of benchmarking LLM performance in areas such as copyright and privacy, Tsankov explained: "When it comes to copyright, the challenge is that current benchmarks only check for copyrighted books. There are two major limitations to this approach: (i) it does not account for potential violations involving material other than those specific books, and (ii) it relies on quantifying model memorization, which is notoriously difficult."
"We face similar challenges with privacy: the benchmark only checks whether a model has memorized certain personal information."
LatticeFlow wants its free, open-source framework to be adopted and improved by the broader AI research community.
"We invite AI researchers, developers, and regulators to join us in driving this evolving project forward," said Martin Vechev, professor at ETH Zurich and founder and scientific director of INSAIT, who is also involved in the research, in a statement. "We encourage other research groups and experts to contribute by refining the AI Act mapping, adding new benchmarks, and extending this open-source framework.
"The methodology can also be extended to evaluate AI models against future regulatory acts beyond the EU AI Act, making it a valuable tool for organizations operating across different jurisdictions."