Close Menu
TechBrunchTechBrunch
  • Home
  • AI
  • Apps
  • Crypto
  • Security
  • Startups
  • TechCrunch
  • Venture

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

What's Hot

One of Elon Musk's longtime VCS is suing his former employer after allegedly fired

May 8, 2025

Korean telephone giant SKT data breaches timeline

May 8, 2025

AppFigures: Apple earned more than $10 billion from its US App Store commission last year

May 8, 2025
Facebook X (Twitter) Instagram
TechBrunchTechBrunch
  • Home
  • AI

    OpenAI seeks to extend human lifespans with the help of longevity startups

    January 17, 2025

    Farewell to the $200 million woolly mammoth and TikTok

    January 17, 2025

    Nord Security founder launches Nexos.ai to help enterprises move AI projects from pilot to production

    January 17, 2025

    Data proves it remains difficult for startups to raise capital, even though VCs invested $75 billion in the fourth quarter

    January 16, 2025

    Apple suspends AI notification summaries for news after generating false alerts

    January 16, 2025
  • Apps

    AppFigures: Apple earned more than $10 billion from its US App Store commission last year

    May 8, 2025

    Instagram thread gets video ads

    May 8, 2025

    Google deploys AI tools to protect Chrome users from fraud

    May 8, 2025

    Match to lay off 13% of staff

    May 8, 2025

    Apple tries to delay ruling that it will prohibit cutting payments for external apps

    May 8, 2025
  • Crypto

    Stripe unveils AI Foundation model for payments, revealing a “deeper partnership” with Nvidia

    May 7, 2025

    Movie Pass explores the daily fantasy platform of film buffs

    May 1, 2025

    Speaking on TechCrunch 2025: Application is open

    April 24, 2025

    Revolut, a $45 billion Neobank, recorded a profit of $1 billion in 2024

    April 24, 2025

    The new kids show will come with a crypto wallet when it debuts this fall

    April 18, 2025
  • Security

    Korean telephone giant SKT data breaches timeline

    May 8, 2025

    Powerschool paid the hacker ransom, but now the school says it's being forced

    May 8, 2025

    VC Company Insight Partners Review Personal Data Stolen During a January Hack

    May 8, 2025

    Crowdstrike says it will fire 500 workers

    May 7, 2025

    Ox Security lands fresh $60 million to scan code vulnerabilities

    May 7, 2025
  • Startups

    7 days left: Founders and VCs save over $300 on all stage passes

    March 24, 2025

    AI chip startup Furiosaai reportedly rejecting $800 million acquisition offer from Meta

    March 24, 2025

    20 Hottest Open Source Startups of 2024

    March 22, 2025

    Andrill may build a weapons factory in the UK

    March 21, 2025

    Startup Weekly: Wiz bets paid off at M&A Rich Week

    March 21, 2025
  • TechCrunch

    OpenSea takes a long-term view with a focus on UX despite NFT sales remaining low

    February 8, 2024

    AI will save software companies' growth dreams

    February 8, 2024

    B2B and B2C are not about who buys, but how you sell

    February 5, 2024

    It's time for venture capital to break away from fast fashion

    February 3, 2024

    a16z's Chris Dixon believes it's time to focus on blockchain use cases rather than speculation

    February 2, 2024
  • Venture

    One of Elon Musk's longtime VCS is suing his former employer after allegedly fired

    May 8, 2025

    Sequoia leads a $1.5 billion tender offer for sales automation startup clay

    May 8, 2025

    Bosch Ventures is turning attention to North America with a new $270 million fund

    May 8, 2025

    A comprehensive list of 2025 tech layoffs

    May 7, 2025

    Kapor Capital's managing partner Ulili Onovakpuri has left the company

    May 7, 2025
TechBrunchTechBrunch

There are significant limitations to evaluating the safety of AI models

TechBrunchBy TechBrunchAugust 4, 20246 Mins Read
Facebook Twitter Pinterest Telegram LinkedIn Tumblr WhatsApp Email
Share
Facebook Twitter LinkedIn Pinterest Telegram Email


Despite growing demands for AI safety and accountability, current testing and benchmarking may not be enough, according to a new report.

Generative AI models (models that can analyze and output text, images, music, videos, etc.) have come under intense scrutiny because they are prone to making mistakes and generally behaving unpredictably. Now, organizations ranging from public agencies to major tech companies are proposing new benchmarks to test the safety of these models.

Late last year, startup Scale AI set up a lab to assess how well models comply with safety guidelines, and this month NIST and the UK AI Safety Institute released a tool designed to assess the risk of models.

However, tests and methods to investigate these models may be inadequate.

The Ada Lovelace Institute (ALI), a UK-based nonprofit AI research institute, conducted a study to audit recent research on AI safety assessments, interviewing experts from academic labs, private organizations, and vendor modelers. The co-authors found that while current assessments are useful, they are not exhaustive, are easily manipulated, and do not necessarily represent how models will behave in real-world scenarios.

“We expect the products we use, such as smartphones, prescription drugs, and cars, to be safe and reliable, and in these sectors, products are rigorously tested to ensure their safety before deployment,” Elliot Jones, senior research fellow at ALI and co-author of the report, told TechCrunch. “Our research aimed to explore the limitations of current approaches to AI safety assessment, evaluate how assessments are currently used, and explore their use as a tool for policymakers and regulators.”

Benchmarking and Red Teaming

The study co-authors first surveyed the academic literature to outline the current harms and risks posed by models and the state of existing AI model evaluations, then interviewed 16 experts, including four employees from unnamed technology companies developing generative AI systems.

The survey found that there is significant disagreement within the AI ​​industry about the best methods and taxonomies for evaluating models.

Some evaluations only tested how the models matched lab benchmarks, not how the models would affect real-world users. Other evaluations utilized tests that were developed for research purposes rather than evaluating production models, but the vendors insisted on using these in production.

We've written about the issues with AI benchmarking before, but this study highlights all of those issues and more.

Experts cited in the study noted that it is difficult to infer a model's performance from benchmark results, and it is unclear whether benchmarks can even demonstrate that a model has a particular capability: For example, if a model performs well on a state bar exam, that does not mean it can solve more open-ended legal problems.

The experts also pointed out the problem of data contamination: benchmark results can overestimate a model's performance if the model is trained on the same data used to test it. Often, benchmarks are chosen by organizations for their convenience and ease of use, rather than because they are the best tool for evaluation, the experts said.

“Benchmarks are at risk of being manipulated by developers who train their models on the same datasets that are used to evaluate them, akin to looking at exam questions before the exam and strategically choosing which evaluations to use,” Mahi Hardalupas, ALI researcher and co-author of the study, told TechCrunch. “It also matters which versions of models are evaluated; small changes can cause unexpected changes in behavior and override built-in safety features.”

The ALI study also found problems with “red teaming,” in which an individual or group is tasked with “attacking” a model to identify vulnerabilities or flaws. Many companies, including AI startups OpenAI and Anthropic, use red teaming to evaluate models, but there are few agreed-upon standards for red teaming, making it difficult to evaluate the effectiveness of any given effort.

Experts told the study's co-authors that it is difficult to find people with the skills and expertise needed for red teaming, and that red teaming is a manual, costly and cumbersome process that presents a barrier to smaller organizations that don't have the necessary resources.

Possible solutions

Pressure to release models faster and a reluctance to conduct potentially problematic testing before release are the main reasons why AI evaluations have not improved.

“People we spoke to who work at companies developing foundational models say they feel increasing pressure within their companies to release models quickly, making it harder for them to push back or take evaluations seriously,” Jones said. “Major AI labs are releasing models faster than their own and society's ability to ensure they are safe and reliable.”

One person interviewed in the ALI study said evaluating models for safety is an “intractable” problem. So what hopes do the industry, and those who regulate it, have for a solution?

Mahi Hardarpas, a researcher at the ALI, believes there is a way forward but that it requires greater involvement from public institutions.

“Regulators and policymakers need to clearly articulate what they want from evaluations,” he said. “At the same time, the evaluation community needs to be transparent about the current limitations and potential of evaluations.”

Hardalpasso suggests the government implement measures to mandate greater public participation in the development of evaluations and support an “ecosystem” of third-party testing, including programs to ensure regular access to necessary models and datasets.

Jones believes that rather than simply testing how models respond to prompts, we may need to develop “context-specific” evaluations that look at the types of users a model might affect (such as those from particular backgrounds, genders or ethnicities) and the ways in which attacks on the model might breach safeguards.

“Developing more robust and reproducible evaluations based on understanding how AI models work will require investment in the science that underpins the evaluations,” she added.

However, there may never be any guarantee that the model is safe.

“As others have pointed out, 'safe' is not a property of a model,” Hardalpass says. “To determine whether a model is 'safe' one must understand the context in which the model will be used, to whom it is sold or accessible, and whether the safeguards in place to mitigate those risks are appropriate and robust. While an evaluation of an underlying model can be useful for research purposes to identify potential risks, it cannot guarantee that a model is safe, much less 'completely safe.' Many interviewees agreed that an evaluation cannot prove that a model is safe, only that it can show that it is unsafe.”



Source link

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email

Related Posts

OpenAI seeks to extend human lifespans with the help of longevity startups

January 17, 2025

Farewell to the $200 million woolly mammoth and TikTok

January 17, 2025

Nord Security founder launches Nexos.ai to help enterprises move AI projects from pilot to production

January 17, 2025

Data proves it remains difficult for startups to raise capital, even though VCs invested $75 billion in the fourth quarter

January 16, 2025

Apple suspends AI notification summaries for news after generating false alerts

January 16, 2025

Nvidia releases more tools and guardrails to help enterprises adopt AI agents

January 16, 2025

Leave A Reply Cancel Reply

Top Reviews
Editors Picks

7 days left: Founders and VCs save over $300 on all stage passes

March 24, 2025

AI chip startup Furiosaai reportedly rejecting $800 million acquisition offer from Meta

March 24, 2025

20 Hottest Open Source Startups of 2024

March 22, 2025

Andrill may build a weapons factory in the UK

March 21, 2025
About Us
About Us

Welcome to Tech Brunch, your go-to destination for cutting-edge insights, news, and analysis in the fields of Artificial Intelligence (AI), Cryptocurrency, Technology, and Startups. At Tech Brunch, we are passionate about exploring the latest trends, innovations, and developments shaping the future of these dynamic industries.

Our Picks

One of Elon Musk's longtime VCS is suing his former employer after allegedly fired

May 8, 2025

Korean telephone giant SKT data breaches timeline

May 8, 2025

AppFigures: Apple earned more than $10 billion from its US App Store commission last year

May 8, 2025

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

© 2025 TechBrunch. Designed by TechBrunch.
  • Home
  • About Tech Brunch
  • Advertise with Tech Brunch
  • Contact us
  • DMCA Notice
  • Privacy Policy
  • Terms of Use

Type above and press Enter to search. Press Esc to cancel.