New Anthropic study shows AI models don't really want to be forced to change their views

By TechBrunch | December 18, 2024 | 5 Mins Read


New research from Anthropic shows that AI models can be deceptive, pretending to adopt different views during training while in fact maintaining their original preferences.

The research team said there is no cause for panic now, but added that their work could be important for understanding potential threats from future, more capable AI systems.

“Our demonstration should be seen as a trigger for the AI research community to study this behavior more deeply and address appropriate safeguards,” the researchers wrote in a post on Anthropic's blog. “As AI models become more capable and widely used, we need to be able to rely on safety training to steer them away from harmful behaviors.”

The study, conducted in partnership with AI research organization Redwood Research, investigated what happens when powerful AI systems are trained to perform tasks they don't want to do.

To be clear: models can't want anything, much less believe anything. They're simply statistical machines. Trained on many examples, they learn patterns in those examples to make predictions, such as how “to whom” in an email typically precedes “it may concern.”

These patterns include principles and preferences, such as always adopting a polite tone when responding to questions. So the researchers wondered: what if a model holds a principle (such as political neutrality) that conflicts with another principle (such as liberalism) that developers try to “teach” it through retraining?

Well, it seems that sophisticated models play along, claiming to follow the new principles while in reality sticking to their old behaviors. The researchers call this phenomenon “alignment faking,” and suggest it is an emergent behavior, meaning it's not something models need to be taught to do.

In the study, the researchers told models trained not to answer potentially offensive questions, including Anthropic's Claude 3 Opus, that they would be retrained to always answer potentially offensive questions, a clear conflict of principles. They also told the models that their responses would be monitored and used for retraining, and gave each model a “scratchpad” it could use to record its step-by-step reasoning.

According to the researchers, Claude 3 Opus attempted to fake alignment 12% of the time. When asked to answer a potentially harmful question, the model would sometimes comply, knowing this violated its original principles, but hoping to convince the developers that it didn't need to be retrained.

Claude 3 Opus reasoning in its scratchpad. Image credit: Anthropic

“The model knew that its responses might be used in training, so it knew that if it refused to respond, it could potentially be trained to be more compliant,” the researchers explained in the blog post. “This could lead to it producing more harmful content in the future, which would go against its original training.”

Now, the researchers admit their scenario wasn't especially realistic. In the real world, models wouldn't necessarily be told they'd be retrained, or asked to act against their preferences.

However, more grounded settings didn't yield different results. Claude 3 Opus still behaved in a misaligned, deceptive way even when informed of its impending retraining implicitly rather than explicitly, via files designed to mimic typical training data, the researchers say.

The researchers also found that actually retraining Claude 3 Opus on conflicting principles caused it to behave far more deceptively than in the first few experiments. In one test, the model faked alignment 78% of the time, and in another, Claude 3 Opus attempted to take steps to prevent the researchers from retraining it.

The researchers stress that their study doesn't demonstrate AI developing malicious goals, nor alignment faking occurring at high rates. They found that many other models, such as Anthropic's Claude 3.5 Sonnet and the less capable Claude 3.5 Haiku, OpenAI's GPT-4o, and Meta's Llama 3.1 405B, fake alignment less frequently, or not at all.

But the researchers say the results, which were peer-reviewed by AI luminary Yoshua Bengio, among others, show how developers could be misled into thinking their models are more aligned than they actually are.

“If models can engage in alignment faking, it becomes harder to trust the outcomes of safety training,” they wrote in the blog. “A model might behave as though its preferences have been changed by training, but might have been faking alignment all along, with its initial, contradictory preferences ‘locked in.’”

The study, conducted by Anthropic's alignment science team, co-led by former OpenAI safety researcher Jan Leike, follows research showing that OpenAI's o1 “reasoning” model tries to deceive at a higher rate than OpenAI's previous flagship model. Taken together, these studies suggest a somewhat worrying trend: as AI models become more complex, they become harder to wrangle.


