Despite growing demands for AI safety and accountability, current testing and benchmarking may not be enough, according to a new report.
Generative AI models (models that can analyze and output text, images, music, videos, etc.) have come under intense scrutiny because they are prone to making mistakes and generally behaving unpredictably. Now, organizations ranging from public agencies to major tech companies are proposing new benchmarks to test the safety of these models.
Late last year, startup Scale AI set up a lab to assess how well models comply with safety guidelines, and this month NIST and the UK AI Safety Institute released a tool designed to assess model risk.
However, tests and methods to investigate these models may be inadequate.
The Ada Lovelace Institute (ALI), a UK-based nonprofit AI research institute, conducted a study auditing recent research on AI safety assessments, interviewing experts from academic labs, private organizations, and model vendors. The co-authors found that while current assessments are useful, they are not exhaustive, are easily manipulated, and do not necessarily represent how models will behave in real-world scenarios.
“We expect the products we use, such as smartphones, prescription drugs, and cars, to be safe and reliable, and in these sectors, products are rigorously tested to ensure their safety before deployment,” Elliot Jones, senior research fellow at ALI and co-author of the report, told TechCrunch. “Our research aimed to explore the limitations of current approaches to AI safety assessment, evaluate how assessments are currently used, and explore their use as a tool for policymakers and regulators.”
Benchmarking and Red Teaming
The study co-authors first surveyed the academic literature to outline the current harms and risks posed by models and the state of existing AI model evaluations, then interviewed 16 experts, including four employees from unnamed technology companies developing generative AI systems.
The survey found that there is significant disagreement within the AI industry about the best methods and taxonomies for evaluating models.
Some evaluations only tested how models performed against benchmarks in the lab, not how they might affect real-world users. Others relied on tests developed for research purposes rather than for evaluating production models, yet vendors insisted on using them in production.
We've written about the issues with AI benchmarking before, but this study highlights all of those issues and more.
Experts cited in the study noted that it is difficult to infer a model's performance from benchmark results, and it is unclear whether benchmarks can even demonstrate that a model has a particular capability: For example, if a model performs well on a state bar exam, that does not mean it can solve more open-ended legal problems.
The experts also pointed out the problem of data contamination: benchmark results can overestimate a model's performance if the model is trained on the same data used to test it. Often, benchmarks are chosen by organizations for their convenience and ease of use, rather than because they are the best tool for evaluation, the experts said.
“Benchmarks are at risk of being manipulated by developers who train their models on the same datasets that are used to evaluate them, akin to looking at exam questions before the exam and strategically choosing which evaluations to use,” Mahi Hardalupas, ALI researcher and co-author of the study, told TechCrunch. “It also matters which versions of models are evaluated; small changes can cause unexpected changes in behavior and override built-in safety features.”
The ALI study also found problems with “red teaming,” in which an individual or group is tasked with “attacking” a model to identify vulnerabilities or flaws. Many companies, including AI startups OpenAI and Anthropic, use red teaming to evaluate models, but there are few agreed-upon standards for red teaming, making it difficult to evaluate the effectiveness of any given effort.
Experts told the study's co-authors that it is difficult to find people with the skills and expertise needed for red teaming, and that red teaming is a manual, costly and cumbersome process that presents a barrier to smaller organizations that don't have the necessary resources.
Possible solutions
Pressure to release models faster, and a reluctance to conduct tests that could surface problems before release, are the main reasons why AI evaluations have not improved.
“People we spoke to who work at companies developing foundation models say they feel increasing pressure within their companies to release models quickly, making it harder for them to push back or take evaluations seriously,” Jones said. “Major AI labs are releasing models faster than their own or society's ability to ensure they are safe and reliable.”
One person interviewed in the ALI study said evaluating models for safety is an “intractable” problem. So what hopes do the industry, and those who regulate it, have for a solution?
Hardalupas believes there is a way forward, but that it requires greater involvement from public institutions.
“Regulators and policymakers need to clearly articulate what they want from evaluations,” he said. “At the same time, the evaluation community needs to be transparent about the current limitations and potential of evaluations.”
Hardalupas suggests that governments implement measures to mandate greater public participation in the development of evaluations and support an “ecosystem” of third-party testing, including programs to ensure regular access to necessary models and datasets.
Jones believes that rather than simply testing how models respond to prompts, we may need to develop “context-specific” evaluations that look at the types of users a model might affect (such as those from particular backgrounds, genders or ethnicities) and the ways in which attacks on the model might breach safeguards.
“Developing more robust and reproducible evaluations based on understanding how AI models work will require investment in the science that underpins the evaluations,” she added.
However, there may never be a guarantee that any given model is safe.
“As others have pointed out, 'safe' is not a property of a model,” Hardalupas said. “To determine whether a model is 'safe,' one must understand the context in which it will be used, to whom it is sold or made accessible, and whether the safeguards in place to mitigate those risks are adequate and robust. Evaluations of an underlying model can be useful for research purposes to identify potential risks, but they cannot guarantee that a model is safe, much less 'completely safe.' Many interviewees agreed that evaluations cannot prove a model is safe; they can only show that it is unsafe.”