Close Menu
TechBrunchTechBrunch
  • Home
  • AI
  • Apps
  • Crypto
  • Security
  • Startups
  • TechCrunch
  • Venture

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

What's Hot

Early AI investor Elad Gil found his next big bet: AI-powered rollup

June 1, 2025

TC Session: AI Trivi Account Down – Big Score with Tickets

June 1, 2025

4 days left: TC session: AI is almost in session

June 1, 2025
Facebook X (Twitter) Instagram
TechBrunchTechBrunch
  • Home
  • AI

    OpenAI seeks to extend human lifespans with the help of longevity startups

    January 17, 2025

    Farewell to the $200 million woolly mammoth and TikTok

    January 17, 2025

    Nord Security founder launches Nexos.ai to help enterprises move AI projects from pilot to production

    January 17, 2025

    Data proves it remains difficult for startups to raise capital, even though VCs invested $75 billion in the fourth quarter

    January 16, 2025

    Apple suspends AI notification summaries for news after generating false alerts

    January 16, 2025
  • Apps

    Google quietly released an app that allows you to download and run AI models locally

    May 31, 2025

    A guide to using editing, Meta's new Capcut Rival for Short-Form video editing

    May 31, 2025

    Automattic says it will start contributing to WordPress again after pause

    May 30, 2025

    The last day to vote to destroy the 2025 agenda lineup

    May 30, 2025

    6 days left: Ready for the truth about unfiltered AI in TC sessions: AI?

    May 30, 2025
  • Crypto

    GameStop bought $500 million in Bitcoin

    May 28, 2025

    Vote for the session you want to watch in 2025

    May 26, 2025

    Save $900 + 90% from 2 tickets to destroy 2025 in the last 24 hours

    May 25, 2025

    Only 3 days left to save up to $900 to destroy the 2025 pass

    May 23, 2025

    Starting from up to $900 from Ticep, 90% off +1 in 2025

    May 22, 2025
  • Security

    8 things we learned from WhatsApp vs. NSO Group Spyware Litigation

    May 30, 2025

    White House investigates how Trump's chief staff's phone was hacked

    May 30, 2025

    US government sanctions technology company involved in cyber fraud

    May 29, 2025

    Ten years later, the bootstrap Thinkst Canary will reach $20 million ARR without VC funding

    May 29, 2025

    Security Startup Horizon3.AI raises $100 million in new rounds

    May 28, 2025
  • Startups

    7 days left: Founders and VCs save over $300 on all stage passes

    March 24, 2025

    AI chip startup Furiosaai reportedly rejecting $800 million acquisition offer from Meta

    March 24, 2025

    20 Hottest Open Source Startups of 2024

    March 22, 2025

    Andrill may build a weapons factory in the UK

    March 21, 2025

    Startup Weekly: Wiz bets paid off at M&A Rich Week

    March 21, 2025
  • TechCrunch

    OpenSea takes a long-term view with a focus on UX despite NFT sales remaining low

    February 8, 2024

    AI will save software companies' growth dreams

    February 8, 2024

    B2B and B2C are not about who buys, but how you sell

    February 5, 2024

    It's time for venture capital to break away from fast fashion

    February 3, 2024

    a16z's Chris Dixon believes it's time to focus on blockchain use cases rather than speculation

    February 2, 2024
  • Venture

    Early AI investor Elad Gil found his next big bet: AI-powered rollup

    June 1, 2025

    TC Session: AI Trivi Account Down – Big Score with Tickets

    June 1, 2025

    4 days left: TC session: AI is almost in session

    June 1, 2025

    TC Session: AI Trivi Account Down – Next Shot in Big

    May 31, 2025

    The counted conversation begins in 5 days in a TC session: ai

    May 31, 2025
TechBrunchTechBrunch

There are significant limitations to evaluating the safety of AI models

TechBrunchBy TechBrunchAugust 4, 20246 Mins Read
Facebook Twitter Pinterest Telegram LinkedIn Tumblr WhatsApp Email
Share
Facebook Twitter LinkedIn Pinterest Telegram Email


Despite growing demands for AI safety and accountability, current testing and benchmarking may not be enough, according to a new report.

Generative AI models (models that can analyze and output text, images, music, videos, etc.) have come under intense scrutiny because they are prone to making mistakes and generally behaving unpredictably. Now, organizations ranging from public agencies to major tech companies are proposing new benchmarks to test the safety of these models.

Late last year, startup Scale AI set up a lab to assess how well models comply with safety guidelines, and this month NIST and the UK AI Safety Institute released a tool designed to assess the risk of models.

However, tests and methods to investigate these models may be inadequate.

The Ada Lovelace Institute (ALI), a UK-based nonprofit AI research institute, conducted a study to audit recent research on AI safety assessments, interviewing experts from academic labs, private organizations, and vendor modelers. The co-authors found that while current assessments are useful, they are not exhaustive, are easily manipulated, and do not necessarily represent how models will behave in real-world scenarios.

“We expect the products we use, such as smartphones, prescription drugs, and cars, to be safe and reliable, and in these sectors, products are rigorously tested to ensure their safety before deployment,” Elliot Jones, senior research fellow at ALI and co-author of the report, told TechCrunch. “Our research aimed to explore the limitations of current approaches to AI safety assessment, evaluate how assessments are currently used, and explore their use as a tool for policymakers and regulators.”

Benchmarking and Red Teaming

The study co-authors first surveyed the academic literature to outline the current harms and risks posed by models and the state of existing AI model evaluations, then interviewed 16 experts, including four employees from unnamed technology companies developing generative AI systems.

The survey found that there is significant disagreement within the AI ​​industry about the best methods and taxonomies for evaluating models.

Some evaluations only tested how the models matched lab benchmarks, not how the models would affect real-world users. Other evaluations utilized tests that were developed for research purposes rather than evaluating production models, but the vendors insisted on using these in production.

We've written about the issues with AI benchmarking before, but this study highlights all of those issues and more.

Experts cited in the study noted that it is difficult to infer a model's performance from benchmark results, and it is unclear whether benchmarks can even demonstrate that a model has a particular capability: For example, if a model performs well on a state bar exam, that does not mean it can solve more open-ended legal problems.

The experts also pointed out the problem of data contamination: benchmark results can overestimate a model's performance if the model is trained on the same data used to test it. Often, benchmarks are chosen by organizations for their convenience and ease of use, rather than because they are the best tool for evaluation, the experts said.

“Benchmarks are at risk of being manipulated by developers who train their models on the same datasets that are used to evaluate them, akin to looking at exam questions before the exam and strategically choosing which evaluations to use,” Mahi Hardalupas, ALI researcher and co-author of the study, told TechCrunch. “It also matters which versions of models are evaluated; small changes can cause unexpected changes in behavior and override built-in safety features.”

The ALI study also found problems with “red teaming,” in which an individual or group is tasked with “attacking” a model to identify vulnerabilities or flaws. Many companies, including AI startups OpenAI and Anthropic, use red teaming to evaluate models, but there are few agreed-upon standards for red teaming, making it difficult to evaluate the effectiveness of any given effort.

Experts told the study's co-authors that it is difficult to find people with the skills and expertise needed for red teaming, and that red teaming is a manual, costly and cumbersome process that presents a barrier to smaller organizations that don't have the necessary resources.

Possible solutions

Pressure to release models faster and a reluctance to conduct potentially problematic testing before release are the main reasons why AI evaluations have not improved.

“People we spoke to who work at companies developing foundational models say they feel increasing pressure within their companies to release models quickly, making it harder for them to push back or take evaluations seriously,” Jones said. “Major AI labs are releasing models faster than their own and society's ability to ensure they are safe and reliable.”

One person interviewed in the ALI study said evaluating models for safety is an “intractable” problem. So what hopes do the industry, and those who regulate it, have for a solution?

Mahi Hardarpas, a researcher at the ALI, believes there is a way forward but that it requires greater involvement from public institutions.

“Regulators and policymakers need to clearly articulate what they want from evaluations,” he said. “At the same time, the evaluation community needs to be transparent about the current limitations and potential of evaluations.”

Hardalpasso suggests the government implement measures to mandate greater public participation in the development of evaluations and support an “ecosystem” of third-party testing, including programs to ensure regular access to necessary models and datasets.

Jones believes that rather than simply testing how models respond to prompts, we may need to develop “context-specific” evaluations that look at the types of users a model might affect (such as those from particular backgrounds, genders or ethnicities) and the ways in which attacks on the model might breach safeguards.

“Developing more robust and reproducible evaluations based on understanding how AI models work will require investment in the science that underpins the evaluations,” she added.

However, there may never be any guarantee that the model is safe.

“As others have pointed out, 'safe' is not a property of a model,” Hardalpass says. “To determine whether a model is 'safe' one must understand the context in which the model will be used, to whom it is sold or accessible, and whether the safeguards in place to mitigate those risks are appropriate and robust. While an evaluation of an underlying model can be useful for research purposes to identify potential risks, it cannot guarantee that a model is safe, much less 'completely safe.' Many interviewees agreed that an evaluation cannot prove that a model is safe, only that it can show that it is unsafe.”



Source link

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email

Related Posts

OpenAI seeks to extend human lifespans with the help of longevity startups

January 17, 2025

Farewell to the $200 million woolly mammoth and TikTok

January 17, 2025

Nord Security founder launches Nexos.ai to help enterprises move AI projects from pilot to production

January 17, 2025

Data proves it remains difficult for startups to raise capital, even though VCs invested $75 billion in the fourth quarter

January 16, 2025

Apple suspends AI notification summaries for news after generating false alerts

January 16, 2025

Nvidia releases more tools and guardrails to help enterprises adopt AI agents

January 16, 2025

Leave A Reply Cancel Reply

Top Reviews
Editors Picks

7 days left: Founders and VCs save over $300 on all stage passes

March 24, 2025

AI chip startup Furiosaai reportedly rejecting $800 million acquisition offer from Meta

March 24, 2025

20 Hottest Open Source Startups of 2024

March 22, 2025

Andrill may build a weapons factory in the UK

March 21, 2025
About Us
About Us

Welcome to Tech Brunch, your go-to destination for cutting-edge insights, news, and analysis in the fields of Artificial Intelligence (AI), Cryptocurrency, Technology, and Startups. At Tech Brunch, we are passionate about exploring the latest trends, innovations, and developments shaping the future of these dynamic industries.

Our Picks

Early AI investor Elad Gil found his next big bet: AI-powered rollup

June 1, 2025

TC Session: AI Trivi Account Down – Big Score with Tickets

June 1, 2025

4 days left: TC session: AI is almost in session

June 1, 2025

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

© 2025 TechBrunch. Designed by TechBrunch.
  • Home
  • About Tech Brunch
  • Advertise with Tech Brunch
  • Contact us
  • DMCA Notice
  • Privacy Policy
  • Terms of Use

Type above and press Enter to search. Press Esc to cancel.