On Tuesday, startup Anthropic released a family of generative AI models that it claims achieves best-in-class performance. Just a few days later, rival Inflection AI announced a model it claims is qualitatively comparable to some of the most capable models, including OpenAI's GPT-4.
Anthropic and Inflection are by no means the first AI companies to claim that their models match or beat the competition by some objective measure. Google made the same claim when it released its Gemini model, and OpenAI made similar claims for GPT-4 and its predecessors GPT-3, GPT-2, and GPT-1. The list goes on.
But what metrics are they talking about? When a vendor says its model achieves state-of-the-art performance or quality, what exactly does that mean? Perhaps more importantly, do models that technically “perform” better than others actually feel noticeably improved to the people using them?
As for the last question, it's unlikely.
The reason, or rather the problem, lies in the benchmarks that AI companies use to quantify the strengths and weaknesses of their models.
The benchmarks most commonly used for AI models today, especially for the models powering chatbots such as OpenAI's ChatGPT and Anthropic's Claude, do a poor job of capturing how the average person actually interacts with the models being tested. For example, one benchmark cited by Anthropic in its recent announcement, GPQA (the “Graduate-Level Google-Proof Q&A Benchmark”), contains hundreds of PhD-level biology, physics, and chemistry questions. Yet most people use chatbots for tasks like answering emails, writing cover letters, and talking about how they feel.
Jesse Dodge, a scientist at the Allen Institute for AI, an AI research nonprofit, said the industry has reached an “evaluation crisis.”
“Benchmarks tend to be static and narrowly focused on evaluating a single capability, such as a model's factuality in a single domain or its ability to solve multiple-choice questions in mathematical reasoning,” Dodge said in an interview with TechCrunch. “Many of the benchmarks used for evaluation are more than three years old, from a time when AI systems were mostly used for research and didn't have many actual users. And people are now using generative AI in ways that are very creative.”
That's not to say the most popular benchmarks are completely useless. Someone out there is surely asking ChatGPT PhD-level math questions. But as generative AI models are increasingly positioned as mass-market, “do-it-all” systems, old benchmarks are becoming less applicable.
David Widder, a postdoctoral researcher at Cornell University who studies AI and ethics, points out that many of the skills common benchmarks test, from solving elementary-level math problems to identifying whether a piece of writing contains an anachronism, will never be relevant to the majority of users.
“Older AI systems were often built to solve a particular problem in context (e.g., medical AI expert systems), which made a deeply contextual understanding of what constitutes good performance in that specific context more possible,” Widder told TechCrunch. “As systems are increasingly seen as ‘universal’, this becomes less possible, and more emphasis is placed on testing models on a variety of benchmarks across different fields.”
Mismatched use cases aside, there are questions about whether some benchmarks even properly measure what they are intended to measure.
An analysis of HellaSwag, a test designed to assess commonsense reasoning in models, found that more than a third of the test questions contained typos or “nonsensical” writing. Elsewhere, MMLU (short for “Massive Multitask Language Understanding”), a benchmark that vendors like Google, OpenAI, and Anthropic point to as evidence that their models can reason through logic problems, asks questions that can be solved by rote memorization.
“[Benchmarks like MMLU are] more about memorizing and associating two keywords together,” Widder said. “I can find [a relevant answer], but that doesn't mean I understand the causal mechanism, nor that I could extrapolate that understanding of the causal mechanism to actually solve new and complex problems in unexpected contexts. A model can't, either.”
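To make that critique concrete, here is a minimal sketch of how an MMLU-style multiple-choice benchmark is typically scored. The question data and the ask_model function are hypothetical placeholders for illustration, not any vendor's actual evaluation harness.

```python
# Minimal sketch of scoring an MMLU-style multiple-choice benchmark.
# The question item and ask_model() are hypothetical stand-ins.

questions = [
    {
        "prompt": "Which planet is known as the Red Planet?",
        "choices": ["A) Venus", "B) Mars", "C) Jupiter", "D) Mercury"],
        "answer": "B",
    },
    # ...a real benchmark has hundreds or thousands more items...
]

def ask_model(prompt: str, choices: list[str]) -> str:
    """Placeholder for a call to the model under test; returns one letter."""
    return "B"  # a model that simply memorized this item is scored as correct

def score(items: list[dict]) -> float:
    """Accuracy: fraction of items where the model picked the right letter."""
    correct = 0
    for item in items:
        prediction = ask_model(item["prompt"], item["choices"])
        correct += prediction == item["answer"]
    return correct / len(items)

print(f"Accuracy: {score(questions):.1%}")
```

The score only checks whether the model picked the right letter, so a model that memorized the answer key looks exactly like one that reasoned its way there, which is the gap Widder describes.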
So benchmarks are broken. But can they be fixed?
Dodge thinks so, but it will take more human involvement.
“The right way forward here is to combine evaluation benchmarks with human evaluation,” Dodge said, “hiring people to prompt the model with real user queries and rate how well it responds.”
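As a rough illustration of the kind of process Dodge describes, the sketch below pairs real user queries with paid human ratings and averages the scores. The get_model_response and collect_human_rating functions are assumed placeholders, not an existing evaluation tool.

```python
# Rough illustration of human evaluation over real user queries,
# in the spirit of what Dodge describes. Both helper functions are
# hypothetical placeholders.

from statistics import mean

real_user_queries = [
    "Help me rewrite this email to my landlord so it sounds firm but polite.",
    "Draft a cover letter for a junior data analyst role.",
]

def get_model_response(query: str) -> str:
    """Placeholder for calling the model under evaluation."""
    return "..."

def collect_human_rating(query: str, response: str) -> int:
    """Placeholder: a paid rater scores the response from 1 (bad) to 5 (great)."""
    return 4

ratings = []
for query in real_user_queries:
    response = get_model_response(query)
    ratings.append(collect_human_rating(query, response))

print(f"Mean human rating: {mean(ratings):.2f} over {len(ratings)} real queries")
```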
Widder, for his part, is less optimistic that current benchmarks can be improved to the point of being useful to the majority of generative AI users, even if the more obvious errors, like typos, are fixed. Instead, he thinks tests of models should focus on the downstream impacts of these models, and on whether those impacts, good or bad, are seen as desirable by the people affected.
“I would ask what specific contextual goals we want to be able to use AI models for, and assess whether they are, or would be, successful in those contexts,” he said. “And hopefully that process also includes assessing whether AI should be used in those situations.”