Here's why most AI benchmarks provide so little information

By TechBrunch | March 7, 2024 | 5 min read


On Tuesday, startup Anthropic released a family of generative AI models that it claims achieves best-in-class performance. Just a few days later, rival Inflection AI announced a model it claims is qualitatively comparable to some of the most capable models, including OpenAI's GPT-4.

Anthropic and Inflection are by no means the first AI companies to claim that their models meet or beat the competition by some objective measure. Google made the same claim when releasing its Gemini model, and OpenAI made similar claims for GPT-4 and its predecessors GPT-3, GPT-2, and GPT-1. The list goes on.

But what metrics are they talking about? When a vendor says its model achieves state-of-the-art performance or quality, what exactly does that mean? Perhaps more importantly, do models that technically “perform” better than others actually feel like a meaningful improvement to the people using them?

As for the last question, it's unlikely.

The reason, or rather the problem, lies in the benchmarks that AI companies use to quantify the strengths and weaknesses of their models.

The benchmarks most commonly used for AI models today, especially models that power chatbots such as OpenAI's ChatGPT and Anthropic's Claude, do a poor job of capturing how the average person actually interacts with the models being tested. For example, one benchmark cited by Anthropic in a recent announcement, GPQA (the “Graduate-Level Google-Proof Q&A Benchmark”), contains hundreds of PhD-level biology, physics, and chemistry questions, yet most people use chatbots for tasks like answering emails, writing cover letters, and talking about how they feel.

Jesse Dodge, a scientist at the Allen Institute for AI, an AI research nonprofit, said the industry has reached an “evaluation crisis.”

“Benchmarks are typically static and narrowly focused on evaluating a single capability, like a model's factuality in a single domain, or its ability to solve multiple-choice questions in mathematical reasoning,” Dodge said in an interview with TechCrunch. “Many of the benchmarks used for evaluation are more than three years old, from a time when AI systems were mostly used for research and didn't have many real users. On top of that, people use generative AI in very creative ways.”

That's not to say the most popular benchmarks are completely useless. Someone out there is surely asking ChatGPT PhD-level math questions. But as generative AI models are increasingly positioned as mass-market “do-it-all” systems, old benchmarks become less applicable.

David Widder, a postdoctoral researcher at Cornell University who studies AI and ethics, points out that many of the skills common benchmarks test, from solving elementary-level math problems to identifying whether a piece of writing contains an anachronism, will never be relevant to the majority of users.

“Older AI systems were often built to solve a particular problem in a particular context, such as medical AI expert systems, which made a deep, contextual understanding of what constitutes good performance in that specific context possible,” Widder told TechCrunch. “As systems are increasingly seen as ‘general purpose,’ that becomes less possible, so we see more and more emphasis on testing models against a variety of benchmarks across different fields.”

Inconsistencies with use cases aside, there are questions about whether some benchmarks adequately measure what they are intended to measure.

An analysis of HellaSwag, a test designed to evaluate commonsense reasoning in models, found that more than a third of its questions contained typos or “nonsensical” writing. Elsewhere, MMLU (short for “Massive Multitask Language Understanding”), a benchmark that vendors such as Google, OpenAI, and Anthropic point to as evidence that their models can reason through logic problems, asks questions that can be solved by rote memorization.
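
To see why a multiple-choice benchmark can reward recall as much as reasoning, it helps to look at how such suites are typically scored: the harness only checks whether the model returns the letter on the answer key. The sketch below is illustrative only; the sample item, the ask_model() stub, and the field names are assumptions rather than any particular benchmark's actual harness.

```python
# Illustrative sketch of how a multiple-choice benchmark in the style of
# MMLU is commonly scored. The harness only compares the model's letter
# choice against the answer key, so a memorized keyword association earns
# the same credit as genuine reasoning. Items and ask_model() are hypothetical.

QUESTIONS = [
    {
        "question": "A sample graduate-level chemistry question would go here.",
        "choices": {"A": "first option", "B": "second option",
                    "C": "third option", "D": "fourth option"},
        "answer": "C",
    },
    # A real benchmark contains hundreds or thousands of such items.
]

def ask_model(prompt: str) -> str:
    """Stand-in for a call to the model under test; expected to return a
    single letter ('A'-'D'). Replace with a real model or API call."""
    return "A"  # placeholder so the sketch runs end to end

def benchmark_accuracy(items) -> float:
    """Fraction of items where the model's letter matches the answer key."""
    correct = 0
    for item in items:
        prompt = item["question"] + "\n" + "\n".join(
            f"{letter}. {text}" for letter, text in item["choices"].items()
        )
        if ask_model(prompt).strip().upper() == item["answer"]:
            correct += 1
    return correct / len(items)

if __name__ == "__main__":
    print(f"benchmark accuracy: {benchmark_accuracy(QUESTIONS):.0%}")
```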

“[Benchmarks like MMLU are] more about memorizing and associating two keywords together,” Widder said. “I can find [a relevant] article and answer the question fairly quickly, but that doesn't mean I understand the causal mechanism, or that I could use an understanding of that causal mechanism to actually reason through and solve new, complex problems in unexpected contexts. A model can't either.”

So benchmarks are broken. But can they be fixed?

Dodge thinks so, with more human involvement.

“The right path forward here is to combine evaluation benchmarks with human evaluation: prompting a model with real user queries and then hiring people to rate how good the responses are,” she said.
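
As a rough illustration of the hybrid approach Dodge describes, the sketch below blends an automated benchmark score with human ratings collected on real user queries. The record layout, the 1-to-5 rating scale, and the weighting are assumptions made for the example, not a description of any existing evaluation pipeline.

```python
# Hypothetical sketch of combining an automated benchmark score with human
# ratings of a model's responses to real user queries, in the spirit of the
# approach Dodge describes. The data layout and weighting are assumptions.
from statistics import mean

# Ratings a hired annotator might assign (1 = poor, 5 = excellent) to the
# model's responses to queries sampled from real usage.
human_ratings = [
    {"query": "Draft a polite reply declining a meeting invite", "rating": 4},
    {"query": "Summarize this cover letter in two sentences", "rating": 5},
    {"query": "Help me phrase difficult feedback kindly", "rating": 3},
]

def combined_score(benchmark_accuracy: float, ratings: list[dict],
                   human_weight: float = 0.7) -> float:
    """Blend a 0-1 benchmark accuracy with normalized human ratings.

    human_weight is arbitrary here; in practice it would be chosen to
    reflect how much the human evaluation should count for a given use case.
    """
    human_score = mean(r["rating"] for r in ratings) / 5.0  # normalize to 0-1
    return human_weight * human_score + (1 - human_weight) * benchmark_accuracy

if __name__ == "__main__":
    print(f"combined score: {combined_score(0.82, human_ratings):.2f}")
```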

Widder, for his part, is less optimistic that current benchmarks can be improved to the point of being useful to the majority of generative AI users, even if the more obvious errors, like typos, are fixed. Instead, he thinks testing should focus on the downstream impacts of these models and on whether those impacts, good or bad, are seen as desirable by the people affected.

“I'd ask which specific contextual goals we want to be able to use AI models for, and assess whether or not they are, or would be, successful in those contexts,” he said. “And hopefully that process also includes assessing whether we should be using AI in those situations at all.”


