Here's why most AI benchmarks provide so little information

By TechBrunch | March 7, 2024 | 5 Mins Read

On Tuesday, startup Anthropic released a family of generative AI models that it claims achieves best-in-class performance. Just a few days later, rival Inflection AI announced a model it claims is qualitatively comparable to some of the most capable models, including OpenAI's GPT-4.

Anthropic and Inflection are by no means the first AI companies to claim that their models meet or beat the competition by some objective measure. Google made the same claim when releasing its Gemini model, and OpenAI made similar claims for GPT-4 and its predecessors GPT-3, GPT-2, and GPT-1. The list goes on.

But what metrics are they talking about? When a vendor says its model achieves state-of-the-art performance or quality, what exactly does that mean? Perhaps more importantly: do models that technically “perform” better than others actually feel tangibly better to the people using them?

As for the last question, it's unlikely.

The reason, or rather the problem, lies in the benchmarks that AI companies use to quantify the strengths and weaknesses of their models.

The benchmarks most commonly used for AI models today, especially for chatbots such as OpenAI's ChatGPT and Anthropic's Claude, do a poor job of capturing how the average person actually interacts with the model being tested. For example, one benchmark cited by Anthropic in its recent announcement, GPQA (the “Graduate-Level Google-Proof Q&A Benchmark”), consists of hundreds of PhD-level biology, physics, and chemistry questions, yet most people use chatbots for tasks like answering emails, writing cover letters, and talking about their feelings.

Jesse Dodge, a scientist at the Allen Institute for AI, an AI research nonprofit, said the industry has reached an “evaluation crisis.”

“Benchmarks are typically static and narrowly focused on evaluating a single capability, such as a model's factuality in a single domain, or its ability to solve multiple-choice mathematical reasoning questions,” Dodge said in an interview with TechCrunch. “Many of the benchmarks used for evaluation are more than three years old, from a time when AI systems were mostly used for research and didn't have many actual users. On top of that, people are now using generative AI in ways that are very creative.”

That's not to say the most popular benchmarks are completely useless. Someone is no doubt asking ChatGPT PhD-level math questions. But as generative AI models are increasingly positioned as mass-market, “do-it-all” systems, old benchmarks are becoming less applicable.

David Widder, a postdoctoral researcher at Cornell University who studies AI and ethics, points out that many of the skills common benchmarks test, from solving elementary-level math problems to identifying whether a sentence contains an anachronism, will never be relevant to the majority of users.

“Older AI systems were often built to solve a specific problem in a specific context (e.g., medical AI expert systems), which made a deeply contextual understanding of what constitutes good performance in that context more possible,” Widder told TechCrunch. “As systems are increasingly seen as ‘general purpose’, this becomes less possible, so we see a growing emphasis on testing models across a variety of benchmarks in different fields.”

Inconsistencies with use cases aside, there are questions about whether some benchmarks adequately measure what they are intended to measure.

An analysis of HellaSwag, a test designed to assess commonsense reasoning in models, found that more than a third of the test questions contained typos or “nonsensical” writing. Elsewhere, MMLU (short for “Massive Multitask Language Understanding”), a benchmark that vendors like Google, OpenAI, and Anthropic point to as evidence that their models can reason through logic problems, asks questions that can be solved through rote memorization.

“[Benchmarks like MMLU are] more about memorizing and associating two keywords together,” Widder said. “I can find [a relevant] article fairly quickly, but that doesn't mean I understand the causal mechanism, or that I could use that understanding of the causal mechanism to actually reason through and solve new and complex problems in unforeseen contexts. A model can't either.”
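
To make concrete what a multiple-choice benchmark like MMLU actually measures, here is a minimal sketch of the usual scoring procedure: each question carries four lettered options, the model is prompted to pick one letter, and the reported score is simply the fraction of questions where that letter matches the answer key. This is an illustration only; ask_model is a hypothetical stand-in for whatever model API is under test, not part of any real benchmark harness.

    # Minimal sketch of MMLU-style multiple-choice scoring (illustrative only).
    # ask_model is a hypothetical placeholder, not a real benchmark harness API.
    from typing import Callable

    def ask_model(prompt: str) -> str:
        """Placeholder: a real harness would call the model under test here."""
        return "A"  # stub reply so the sketch runs end to end

    def score_multiple_choice(questions: list[dict], model: Callable[[str], str]) -> float:
        """Accuracy: the share of questions where the model's letter matches the key."""
        correct = 0
        for q in questions:
            options = "\n".join(f"{letter}. {text}" for letter, text in zip("ABCD", q["choices"]))
            prompt = f"{q['question']}\n{options}\nAnswer with a single letter (A-D):"
            reply = model(prompt).strip().upper()
            predicted = reply[0] if reply and reply[0] in "ABCD" else ""
            if predicted == q["answer"]:
                correct += 1
        return correct / len(questions)

    sample = [{
        "question": "Which planet is closest to the Sun?",
        "choices": ["Mercury", "Venus", "Earth", "Mars"],
        "answer": "A",
    }]
    print(f"accuracy: {score_multiple_choice(sample, ask_model):.2f}")

Because scoring reduces to matching a single letter against a key, a model that has memorized question-and-answer pairs can post a high number without any of the causal reasoning Widder describes.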

So the benchmarks are broken. But can they be fixed?

Dodge thinks so, with more human involvement.

“The right path forward here is to combine evaluation benchmarks with human evaluation,” she said, “prompting a model with real user queries and then hiring people to rate how good the responses are.”
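
As a rough illustration of the kind of setup Dodge describes, the sketch below pairs a static benchmark number with human ratings of a model's responses to real user queries. It assumes a 1-to-5 rating scale; collect_human_rating is a purely hypothetical placeholder for whatever annotation tooling a team would actually use.

    # Rough sketch of pairing a static benchmark score with human ratings of
    # responses to real user queries, as Dodge suggests. Everything here is an
    # illustrative placeholder, not a real evaluation pipeline.
    from statistics import mean

    def collect_human_rating(query: str, response: str) -> int:
        """Placeholder: a real pipeline would show this pair to a paid rater (1-5 scale)."""
        return 4  # stub rating so the sketch runs

    def evaluate(respond, benchmark_accuracy: float, real_queries: list[str]) -> dict:
        """Report the benchmark number alongside the mean human rating on real queries."""
        ratings = [collect_human_rating(q, respond(q)) for q in real_queries]
        return {
            "benchmark_accuracy": benchmark_accuracy,
            "mean_human_rating": mean(ratings),
            "num_rated_queries": len(ratings),
        }

    fake_model = lambda q: f"(model response to: {q})"
    print(evaluate(fake_model, benchmark_accuracy=0.71, real_queries=[
        "Help me rewrite this email to my landlord.",
        "Draft a cover letter for a junior analyst role.",
    ]))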

Widder, for his part, is less optimistic that current benchmarks, even with fixes for the more obvious errors like typos, can be improved to the point where they are useful to the majority of generative AI model users. Instead, he thinks tests of models should focus on the downstream impacts of these models and on whether those impacts, good or bad, are perceived as desirable by those affected.

“I'd ask that we ask what specific contextual goals we want AI models to be usable for, and evaluate whether they are, or would be, successful in such contexts,” he said. “And hopefully that process also includes evaluating whether we should be using AI in those contexts at all.”


