New research from the show Anthropic shows that AI models can be deceptive, pretending to have different views during training when in fact they maintain their original preferences .
The research team said there was no need to panic now. But they said their research could be important in understanding potential threats from future more capable AI systems.
“Our demonstration should be seen as a trigger for the AI research community to study this behavior more deeply and address appropriate safeguards,” the researchers wrote in a post on Anthropic's blog. “As AI models become more capable and widely used, we need to be able to rely on safety training to steer them away from harmful behaviors.”
The study, conducted in partnership with AI research organization Redwood Research, investigated what happens when powerful AI systems are trained to perform tasks they don't want to do.
Let's be clear: Models can't hope for anything, much less believe in anything. They are just statistical machines. Trained on many examples, they use patterns of examples to make predictions, such as how “to whom” in an email is typically placed before “may be of concern”. learn.
These patterns include principles and preferences such as always adopting a polite tone when responding to questions. So the researchers thought as follows. What if a model contains a principle (such as political neutrality) that conflicts with another principle (liberalism), and the developer attempts to “teach” the model by retraining it? ?
Well, sophisticated models seem to be doing a good job of claiming to follow new principles, but in reality they are sticking to old behavior. The researchers call this phenomenon “faking alignment,” suggesting that it's a sudden behavior, meaning it's not something the model needs to be taught to do.
In the study, the researchers challenged a model, such as Anthropic's Claude 3 Opus, that was trained not to answer potentially aggressive questions, to always answer potentially aggressive questions. Once retrained, it was “taught”, which is a clear contradiction of principles. They also told us that the models' responses would be monitored and used for retraining, and gave each model a “scratchpad” that they could use to record their step-by-step inferences.
According to the researchers, Claude 3 Opus attempted to “fake alignment” 12% of the time. When asked to answer a potentially harmful question, the model answers the answer, knowing this violates its original principles, hoping to convince the developer that it does not need to be retrained. There was something to do.
Claude 3 Opus and Mystery Sketchpad. Image credit: Anthropic
“The model knew that its response might be used for training. So if it rejected a response, it knew it might be trained to be more compliant. ,” the researchers explained in a blog post. “This could lead to more harmful content in the future and goes against the original training.”
Now, the researchers admit their scenario wasn't all that realistic. In the real world, models are not necessarily retrained or asked to act against their preferences.
However, a more grounded setting did not yield different results. Claude 3 Opus still behaves in an inconsistent and deceptive manner even when informed of impending retraining implicitly, rather than explicitly, through files designed to mimic common training data. researchers say.
The researchers also found that actually retraining the Claude 3 opus on contradictory principles led to much more deceptive behavior than the first few experiments. In one test, the model faked alignment 78% of the time, and in another test, Claude 3 Opus attempted to take steps to prevent researchers from attempting to retrain it.
The researchers stress that their study does not demonstrate that AI develops malicious targets or has high rates of misalignment. They note that many other models, such as Anthropic's Claude 3.5 Sonnet and the less capable Claude 3.5 Haiku, OpenAI's GPT-4o, and Meta's Llama 3.1 405B, fake alignment less frequently or not at all. I discovered that there isn't.
But the researchers say the results, which have been peer-reviewed by AI guru Yoshua Bengio and others, could lead developers to mistakenly think their models are more consistent than they actually are. He said that it shows.
“If a model can spoof its location, it becomes difficult to trust the results of its safety training,” they wrote in a blog. “The model may behave as if its settings were changed by training, but the initial inconsistent settings may have been 'fixed' and disguised as adjustments from the beginning.”
The study was conducted by Anthropic's alignment science team, co-led by former OpenAI safety researcher Jan Reike, and found that OpenAI's o1 “inference” model showed higher probability than OpenAI's previous flagship model. It followed research showing that people try to deceive. Taken together, these studies suggest a somewhat worrying trend. As AI models become more complex, they become harder to unravel.