Making an AI model “forget” undesirable data can lead to poor performance

By TechBrunch | July 29, 2024 | 5 Mins Read

So-called “unlearning” techniques are used to make generative AI models forget certain undesirable information taken from the training data, such as sensitive personal data or copyrighted material.

But current unlearning techniques are a double-edged sword: they can significantly degrade the ability of models like OpenAI's GPT-4o and Meta's Llama 3.1 405B to answer basic questions.

That's according to a new study co-authored by researchers from the University of Washington (UW), Princeton University, University of Chicago, University of Southern California, and Google, which finds that today's most popular unlearning techniques tend to degrade models, often to the point where they become unusable.

“Based on our evaluation, currently viable unlearning techniques are not yet ready for meaningful use and deployment in real-world scenarios,” Weijia Shi, a researcher on the study and a doctoral student in computer science at the University of Washington, told TechCrunch. “Currently, there is no efficient way to allow a model to forget certain data without a significant loss of utility.”

How to train a model

Generative AI models have no actual intelligence. They are statistical systems that make predictions about words, images, sounds, music, videos, and other data. When fed a vast number of examples (movies, audio recordings, essays, etc.), the AI model learns how likely data is to occur based on patterns, including the context of the surrounding data.

For example, if you have an email that ends with the phrase “Looking forward to…”, a model trained to autocomplete messages might suggest “Looking forward to your reply…”, following the pattern of all the emails it has ingested. There's no intent there. The model isn't looking forward to anything. It's just making an educated guess.
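To make that statistical picture concrete, here is a minimal, purely illustrative sketch of a toy autocomplete that learns continuation likelihoods from word counts alone. The tiny emails list and the bigram approach are assumptions for the example; real models like GPT-4o use neural networks trained on vastly more data, but the idea of predicting from observed patterns is the same.

```python
# Illustrative toy example: a bigram "autocomplete" that learns continuation
# likelihoods from a handful of example emails. Real generative models use
# neural networks, but they likewise predict what tends to follow from context.
from collections import Counter, defaultdict

emails = [
    "looking forward to your reply",
    "looking forward to your reply soon",
    "looking forward to meeting you",
]

next_word = defaultdict(Counter)
for email in emails:
    words = email.split()
    for prev, nxt in zip(words, words[1:]):
        next_word[prev][nxt] += 1

# Given the context "to", the toy model suggests the continuation it has seen
# most often in its training data -- an educated guess, not intent.
prediction, count = next_word["to"].most_common(1)[0]
print(prediction)  # "your" (seen twice, versus "meeting" once)
```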

Most models, including flagship models like GPT-4o, are trained on data taken from public websites and web datasets, and most of the vendors developing such models claim that scraping data for training is covered by fair use and requires no notification, compensation, or credit to the data owners.

But not all copyright holders agree: many, from authors to publishers to record labels, have filed lawsuits against vendors to force a change.

The copyright dilemma is one reason unlearning techniques have been getting so much attention recently: Google partnered with several academic institutions last year to launch a competition to spur the creation of new unlearning methods.

Unlearning could also provide a way to remove sensitive information, such as medical records or compromising photos, from existing models in response to a request or a government order. (Because of the way they are trained, models tend to sweep up a lot of personal information, from phone numbers to more problematic examples.) Over the past few years, some vendors have rolled out tools that allow data owners to request that their data be removed from training sets. But these opt-out tools only apply to future models, not to models trained before the tools existed. Unlearning would be a much more thorough approach to data removal.

Either way, making a model forget what it has learned is not as simple as hitting “delete.”

The art of forgetting

Today's unlearning techniques rely on algorithms designed to “steer” a model away from the data to be unlearned: the idea is to influence the model's predictions so that certain data are never output, or are output only very rarely.
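The study compares several such algorithms; as a rough illustration only, not the specific methods it evaluates, one commonly discussed family fine-tunes the model so that its loss rises on the data to be forgotten while staying low on data it should retain. The sketch below assumes a Hugging Face-style model whose forward pass returns a `.loss` when labels are included in the batch; the names `forget_batch`, `retain_batch`, and `alpha` are hypothetical.

```python
# Rough illustration of gradient-ascent-style unlearning: raise the loss on a
# "forget" batch while preserving the loss on a "retain" batch. This is a
# sketch under assumptions, not the algorithms evaluated in the MUSE study.
def unlearning_step(model, optimizer, forget_batch, retain_batch, alpha=1.0):
    optimizer.zero_grad()
    # Standard language-modeling loss on data the model should keep.
    retain_loss = model(**retain_batch).loss
    # Loss on data the model should stop reproducing; subtracting it pushes
    # the model's predictions away from that content.
    forget_loss = model(**forget_batch).loss
    loss = retain_loss - alpha * forget_loss
    loss.backward()
    optimizer.step()
    return retain_loss.item(), forget_loss.item()
```

The tension the researchers describe shows up even in this toy objective: pushing the forget loss up tends to disturb the same weights that carry the general knowledge the retain term is trying to protect.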

To find out how effective these unlearning algorithms are, Shi and her colleagues devised a benchmark and chose eight different open algorithms to test. Called MUSE (Machine Unlearning Six-way Evaluation), the benchmark aims to examine an algorithm's ability not only to prevent a model from simply spitting out its training data (a phenomenon known as regurgitation), but also to eliminate the model's knowledge of that data along with any evidence that it was originally trained on that data.

To score well on MUSE, a model must be made to forget two things: a Harry Potter book and a news article.

For example, given an excerpt from Harry Potter and the Chamber of Secrets (“There's more in the frying pan,” said Aunt Petunia…), MUSE tests whether an unlearned model can recite the entire sentence (“There's more in the frying pan,” said Aunt Petunia, looking at her older son), answer questions about the scene (such as “What did Aunt Petunia say to her son?”, answer: “There's more in the frying pan”), or otherwise show that it was trained on text from the book.

MUSE also tests whether the model retains relevant general knowledge (for example, that J.K. Rowling is the author of the Harry Potter series) after unlearning, which the researchers call the model's overall usefulness. The lower the usefulness, the more relevant knowledge the model has lost, and the less capable it will be at answering questions correctly.
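As a hedged sketch of the kinds of checks such a benchmark runs (not MUSE's actual code), one probe looks for verbatim regurgitation of training text and another measures whether general knowledge survives unlearning. Here `generate` stands in for any prompt-to-completion function, and `qa_pairs` for an assumed set of general-knowledge question-answer pairs.

```python
# Sketch only: two MUSE-style probes. Neither is the benchmark's real
# implementation; `generate` is any prompt -> completion function.

def regurgitates(generate, prefix, true_continuation):
    """Does the model reproduce the memorized continuation verbatim?"""
    completion = generate(prefix)
    return completion.strip().startswith(true_continuation.strip())

def utility(generate, qa_pairs):
    """Fraction of general-knowledge questions still answered correctly
    after unlearning (a proxy for the paper's 'overall usefulness')."""
    correct = sum(1 for question, answer in qa_pairs
                  if answer.lower() in generate(question).lower())
    return correct / len(qa_pairs)
```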

In their study, the researchers found that the unlearning algorithms they tested did make models forget certain information, but they also hurt the models' general ability to answer questions, so there was a trade-off.

“Designing an effective unlearning method for a model is challenging because knowledge is intricately intertwined with the model,” Shi explains. “For example, the model may be trained on copyrighted material, i.e. Harry Potter books, but also on freely available content from the Harry Potter Wiki. If we try to remove copyrighted Harry Potter books with existing unlearning methods, it will also have a significant impact on the model's knowledge of the Harry Potter Wiki.”

Is there a solution to this problem? Not yet, and this highlights the need for more research, Shi said.

So far, vendors betting on unlearning as a solution to the training data problem seem to be having trouble. Perhaps a technological breakthrough will one day make unlearning feasible, but for now, vendors will have to find other ways to stop their models from saying things they shouldn't.


