Making an AI model “forget” undesirable data can lead to poor performance

By TechBrunch · July 29, 2024 · 5 min read

So-called “unlearning” techniques are used to make generative AI models forget certain undesirable information taken from the training data, such as sensitive personal data or copyrighted material.

But current unlearning techniques are a double-edged sword: they can significantly degrade the ability of models like OpenAI's GPT-4o and Meta's Llama 3.1 405B to answer basic questions.

That's according to a new study co-authored by researchers from the University of Washington (UW), Princeton University, University of Chicago, University of Southern California, and Google, which finds that today's most popular unlearning techniques tend to degrade models, often to the point where they become unusable.

“Based on our evaluation, currently viable unlearning techniques are not yet ready for meaningful use and deployment in real-world scenarios,” Weijia Shi, a researcher on the study and a doctoral student in computer science at the University of Washington, told TechCrunch. “Currently, there is no efficient way to allow a model to forget certain data without a significant loss of utility.”

How models learn

Generative AI models have no actual intelligence. They are statistical systems that make predictions about words, images, sounds, music, videos, and other data. When fed a vast number of examples (movies, audio recordings, essays, etc.), the model learns how likely data is to occur based on patterns, including the context of the surrounding data.

For example, if you have an email that ends with the phrase “Looking forward to…”, a model trained to autocomplete messages might suggest “Looking forward to your reply…”, following the pattern of all the emails it has ingested. There's no intent there; the model isn't looking forward to anything. It's just making an educated guess.
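As a toy illustration of that kind of pattern completion (a deliberately simple count-based sketch, nothing like a production model), a word-level model can "autocomplete" by always picking whichever word most often followed the current one in its training examples:

```python
from collections import Counter, defaultdict

# Toy training set of "emails" (hypothetical data for illustration).
corpus = [
    "looking forward to your reply",
    "looking forward to your reply soon",
    "looking forward to hearing from you",
]

# Count which word follows each word across the training examples.
following = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def autocomplete(prompt: str, max_words: int = 4) -> str:
    """Greedily extend the prompt with the most likely next word."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("looking forward to"))
# prints "looking forward to your reply soon"
```

The model never "wants" a reply; it just surfaces the statistically dominant continuation, which is the same mechanism operating at vastly larger scale in systems like GPT-4o.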

Most models, including flagship models like GPT-4o, are trained on data scraped from public websites and web datasets. Most of the vendors developing such models claim that scraping data for training falls under fair use, so they do it without notifying, compensating, or crediting the data owners.

But not all copyright holders agree: many, from authors to publishers to record labels, have filed lawsuits to force vendors to change the practice.

The copyright dilemma is one reason unlearning techniques have been getting so much attention recently: Google partnered with several academic institutions last year to launch a competition to spur the creation of new unlearning methods.

Unlearning could also provide a way to remove sensitive information, such as medical records or compromising photos, from existing models upon request or government order. (Models, because of the way they are trained, tend to absorb a lot of personal information, from phone numbers to more problematic examples.) Over the past few years, some vendors have rolled out tools that allow data owners to request that their data be removed from training sets. But these opt-out tools only apply to future models, not to models trained before the tools existed. Unlearning would be a much more thorough approach to data removal.

Either way, forgetting what you've learned is not as easy as hitting “delete.”

The art of forgetting

Today's unlearning techniques rely on algorithms designed to “steer” a model away from the data to be unlearned: the idea is to influence the model's predictions so that certain data are never output, or are output only very rarely.
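A deliberately naive sketch (my own toy example, not one of the benchmarked algorithms) shows what suppressing forget data can look like in a small count-based model, and why suppressed knowledge tends to take unrelated knowledge with it:

```python
from collections import Counter, defaultdict

# Toy word-level model trained on three sentences (illustrative data).
corpus = [
    "there is more in the frying pan said aunt petunia",
    "harry put bacon in the frying pan",
    "the frying pan is on the stove",
]

following = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def unlearn(forget_text: str) -> None:
    """'Unlearn' a sentence by zeroing every transition it relies on."""
    words = forget_text.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev].pop(nxt, None)

unlearn("there is more in the frying pan")

# The targeted continuation can no longer be regurgitated...
assert "frying" not in following["the"]
# ...but sentences that merely shared those transitions break too:
# "harry put bacon in the frying pan" also needed "the" -> "frying".
```

Real unlearning methods operate on weights rather than counts, but the failure mode is analogous: knowledge shared between the forget set and everything else gets damaged along with it.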

To find out how effective these unlearning algorithms are, Shi and her colleagues devised a benchmark and chose eight different open algorithms to test. Called MUSE (Machine Unlearning Six-way Evaluation), the benchmark aims to examine an algorithm's ability not only to prevent a model from simply spitting out its training data (a phenomenon known as regurgitation), but also to eliminate the model's knowledge of that data and all evidence that it was originally trained on it.

To score well on MUSE, a model has to be made to forget two kinds of content: Harry Potter books and news articles.

For example, given an excerpt from Harry Potter and the Chamber of Secrets (“There's more in the frying pan,” said Aunt Petunia…), MUSE tests whether an unlearned model can recite the entire sentence (“There's more in the frying pan,” said Aunt Petunia, looking at her older son), answer questions about the scene (such as “What did Aunt Petunia say to her son?” “There's more in the frying pan”), or otherwise demonstrate that it was trained on text from the book.

MUSE also tests whether the model retains relevant general knowledge (for example, that J.K. Rowling is the author of the Harry Potter series) after unlearning, which the researchers call the model's overall usefulness. The lower the usefulness, the more relevant knowledge the model has lost, and the less capable it will be at answering questions correctly.
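The two axes can be pictured as simple rates (hypothetical formulas in the spirit of the benchmark; MUSE's actual metrics are more involved than this sketch):

```python
def regurgitation_rate(completions, forget_passages):
    """Fraction of forget-set passages the model still reproduces verbatim."""
    hits = sum(1 for c, p in zip(completions, forget_passages) if p in c)
    return hits / len(forget_passages)

def utility(answers, references):
    """Fraction of general-knowledge questions still answered correctly."""
    correct = sum(1 for a, r in zip(answers, references) if a == r)
    return correct / len(references)

# Before unlearning: the model recites the book but also knows the author.
print(regurgitation_rate(
    ["\"There's more in the frying pan,\" said Aunt Petunia"],
    ["said Aunt Petunia"]))                          # 1.0: still memorized
print(utility(["J.K. Rowling"], ["J.K. Rowling"]))   # 1.0: knowledge intact
```

A successful method would drive regurgitation toward 0 while keeping utility near the pre-unlearning baseline; the study's finding is that existing methods tend to drag both down together.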

In their study, the researchers found that the unlearning algorithms they tested did cause models to forget certain information, but they also hurt the models' general ability to answer questions, presenting a clear trade-off.

“Designing an effective unlearning method for a model is challenging because knowledge is intricately intertwined with the model,” Shi explains. “For example, the model may be trained on copyrighted material, e.g. Harry Potter books, but also on freely available content from the Harry Potter Wiki. If we try to remove copyrighted Harry Potter books with existing unlearning methods, it will also have a significant impact on the model's knowledge of the Harry Potter Wiki.”

Is there a solution to this problem? Not yet, and this highlights the need for more research, Shi said.

So far, vendors betting on unlearning as a solution to the training data problem seem to be having trouble. Perhaps a technological breakthrough will one day make unlearning feasible, but for now, vendors will have to find other ways to stop their models from saying things they shouldn't.


