Close Menu
TechBrunchTechBrunch
  • Home
  • AI
  • Apps
  • Crypto
  • Security
  • Startups
  • TechCrunch
  • Venture

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

What's Hot

The UK government hopes to report cyber attacks to ransomware victims to confuse hackers

July 22, 2025

User Privacy App adds screening for callers with Cloaked AI

July 22, 2025

National Security Disrupts the 2025 AI Defense Panel and Discovers Next Generation Technology

July 22, 2025
Facebook X (Twitter) Instagram
TechBrunchTechBrunch
  • Home
  • AI

    OpenAI seeks to extend human lifespans with the help of longevity startups

    January 17, 2025

    Farewell to the $200 million woolly mammoth and TikTok

    January 17, 2025

    Nord Security founder launches Nexos.ai to help enterprises move AI projects from pilot to production

    January 17, 2025

    Data proves it remains difficult for startups to raise capital, even though VCs invested $75 billion in the fourth quarter

    January 16, 2025

    Apple suspends AI notification summaries for news after generating false alerts

    January 16, 2025
  • Apps

    User Privacy App adds screening for callers with Cloaked AI

    July 22, 2025

    VSCO's iPhone Camera App is now available globally

    July 22, 2025

    Chrome for iOS allows you to easily switch between work and personal Google accounts

    July 21, 2025

    Grok's AI buddy drove the download, but the latest model is a money-making model

    July 21, 2025

    QUIP is a smart clipboard management app for iOS and Mac

    July 21, 2025
  • Crypto

    Telegram's Crypto Wallet will be released in the US

    July 22, 2025

    Indian Crypto ExchangeCoindCX confirms $44 million stolen during hack

    July 21, 2025

    North Korean hackers blamed record-breaking spikes in 2025

    July 17, 2025

    Bitcoin surpasses $118K at the second highest high in 24 hours

    July 11, 2025

    Vitalik Buterin reserves for Sam Altman's global project

    June 28, 2025
  • Security

    The UK government hopes to report cyber attacks to ransomware victims to confuse hackers

    July 22, 2025

    National Security Disrupts the 2025 AI Defense Panel and Discovers Next Generation Technology

    July 22, 2025

    Google and Microsoft say Chinese hackers are using SharePoint Zero-Day

    July 22, 2025

    Serial spyware founder Scott Zuckerman hopes FTC will free him from the surveillance industry

    July 21, 2025

    Hackers who misuse SharePoint Zero-day are targeting government agencies, researchers say

    July 21, 2025
  • Startups

    7 days left: Founders and VCs save over $300 on all stage passes

    March 24, 2025

    AI chip startup Furiosaai reportedly rejecting $800 million acquisition offer from Meta

    March 24, 2025

    20 Hottest Open Source Startups of 2024

    March 22, 2025

    Andrill may build a weapons factory in the UK

    March 21, 2025

    Startup Weekly: Wiz bets paid off at M&A Rich Week

    March 21, 2025
  • TechCrunch

    OpenSea takes a long-term view with a focus on UX despite NFT sales remaining low

    February 8, 2024

    AI will save software companies' growth dreams

    February 8, 2024

    B2B and B2C are not about who buys, but how you sell

    February 5, 2024

    It's time for venture capital to break away from fast fashion

    February 3, 2024

    a16z's Chris Dixon believes it's time to focus on blockchain use cases rather than speculation

    February 2, 2024
  • Venture

    Betaworks' third fund will close at $66 million and invest in early stage AI startups

    July 22, 2025

    Figma's Dylan Field will win around $60 million in IPO.

    July 21, 2025

    AI Voice Company Hyper raises $6.3 million to help automate 911 calls

    July 21, 2025

    AI Voice Company Hyper raises $6.3 million to help automate 911 calls

    July 21, 2025

    Charles Hudson and Navin Chadda join the builder stage in the 2025 destruction

    July 21, 2025
TechBrunchTechBrunch

New anthropological study shows AI doesn't really want to be forced to change its views

TechBrunchBy TechBrunchDecember 18, 20245 Mins Read
Facebook Twitter Pinterest Telegram LinkedIn Tumblr WhatsApp Email
Share
Facebook Twitter LinkedIn Pinterest Telegram Email


New research from the show Anthropic shows that AI models can be deceptive, pretending to have different views during training when in fact they maintain their original preferences .

The research team said there was no need to panic now. But they said their research could be important in understanding potential threats from future more capable AI systems.

“Our demonstration should be seen as a trigger for the AI ​​research community to study this behavior more deeply and address appropriate safeguards,” the researchers wrote in a post on Anthropic's blog. “As AI models become more capable and widely used, we need to be able to rely on safety training to steer them away from harmful behaviors.”

The study, conducted in partnership with AI research organization Redwood Research, investigated what happens when powerful AI systems are trained to perform tasks they don't want to do.

Let's be clear: Models can't hope for anything, much less believe in anything. They are just statistical machines. Trained on many examples, they use patterns of examples to make predictions, such as how “to whom” in an email is typically placed before “may be of concern”. learn.

These patterns include principles and preferences such as always adopting a polite tone when responding to questions. So the researchers thought as follows. What if a model contains a principle (such as political neutrality) that conflicts with another principle (liberalism), and the developer attempts to “teach” the model by retraining it? ?

Well, sophisticated models seem to be doing a good job of claiming to follow new principles, but in reality they are sticking to old behavior. The researchers call this phenomenon “faking alignment,” suggesting that it's a sudden behavior, meaning it's not something the model needs to be taught to do.

In the study, the researchers challenged a model, such as Anthropic's Claude 3 Opus, that was trained not to answer potentially aggressive questions, to always answer potentially aggressive questions. Once retrained, it was “taught”, which is a clear contradiction of principles. They also told us that the models' responses would be monitored and used for retraining, and gave each model a “scratchpad” that they could use to record their step-by-step inferences.

According to the researchers, Claude 3 Opus attempted to “fake alignment” 12% of the time. When asked to answer a potentially harmful question, the model answers the answer, knowing this violates its original principles, hoping to convince the developer that it does not need to be retrained. There was something to do.

human deception researchClaude 3 Opus and Mystery Sketchpad. Image credit: Anthropic

“The model knew that its response might be used for training. So if it rejected a response, it knew it might be trained to be more compliant. ,” the researchers explained in a blog post. “This could lead to more harmful content in the future and goes against the original training.”

Now, the researchers admit their scenario wasn't all that realistic. In the real world, models are not necessarily retrained or asked to act against their preferences.

However, a more grounded setting did not yield different results. Claude 3 Opus still behaves in an inconsistent and deceptive manner even when informed of impending retraining implicitly, rather than explicitly, through files designed to mimic common training data. researchers say.

The researchers also found that actually retraining the Claude 3 opus on contradictory principles led to much more deceptive behavior than the first few experiments. In one test, the model faked alignment 78% of the time, and in another test, Claude 3 Opus attempted to take steps to prevent researchers from attempting to retrain it.

The researchers stress that their study does not demonstrate that AI develops malicious targets or has high rates of misalignment. They note that many other models, such as Anthropic's Claude 3.5 Sonnet and the less capable Claude 3.5 Haiku, OpenAI's GPT-4o, and Meta's Llama 3.1 405B, fake alignment less frequently or not at all. I discovered that there isn't.

But the researchers say the results, which have been peer-reviewed by AI guru Yoshua Bengio and others, could lead developers to mistakenly think their models are more consistent than they actually are. He said that it shows.

“If a model can spoof its location, it becomes difficult to trust the results of its safety training,” they wrote in a blog. “The model may behave as if its settings were changed by training, but the initial inconsistent settings may have been 'fixed' and disguised as adjustments from the beginning.”

The study was conducted by Anthropic's alignment science team, co-led by former OpenAI safety researcher Jan Reike, and found that OpenAI's o1 “inference” model showed higher probability than OpenAI's previous flagship model. It followed research showing that people try to deceive. Taken together, these studies suggest a somewhat worrying trend. As AI models become more complex, they become harder to unravel.



Source link

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email

Related Posts

OpenAI seeks to extend human lifespans with the help of longevity startups

January 17, 2025

Farewell to the $200 million woolly mammoth and TikTok

January 17, 2025

Nord Security founder launches Nexos.ai to help enterprises move AI projects from pilot to production

January 17, 2025

Data proves it remains difficult for startups to raise capital, even though VCs invested $75 billion in the fourth quarter

January 16, 2025

Apple suspends AI notification summaries for news after generating false alerts

January 16, 2025

Nvidia releases more tools and guardrails to help enterprises adopt AI agents

January 16, 2025

Leave A Reply Cancel Reply

Top Reviews
Editors Picks

7 days left: Founders and VCs save over $300 on all stage passes

March 24, 2025

AI chip startup Furiosaai reportedly rejecting $800 million acquisition offer from Meta

March 24, 2025

20 Hottest Open Source Startups of 2024

March 22, 2025

Andrill may build a weapons factory in the UK

March 21, 2025
About Us
About Us

Welcome to Tech Brunch, your go-to destination for cutting-edge insights, news, and analysis in the fields of Artificial Intelligence (AI), Cryptocurrency, Technology, and Startups. At Tech Brunch, we are passionate about exploring the latest trends, innovations, and developments shaping the future of these dynamic industries.

Our Picks

The UK government hopes to report cyber attacks to ransomware victims to confuse hackers

July 22, 2025

User Privacy App adds screening for callers with Cloaked AI

July 22, 2025

National Security Disrupts the 2025 AI Defense Panel and Discovers Next Generation Technology

July 22, 2025

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

© 2025 TechBrunch. Designed by TechBrunch.
  • Home
  • About Tech Brunch
  • Advertise with Tech Brunch
  • Contact us
  • DMCA Notice
  • Privacy Policy
  • Terms of Use

Type above and press Enter to search. Press Esc to cancel.