Contractors working on improving Google's Gemini AI are comparing its answers to the output produced by Anthropic's competing model, Claude, according to internal communications seen by TechCrunch.
When asked for comment by TechCrunch, Google declined to say whether it had permission to use Claude in its tests against Gemini.
As technology companies race to build better AI models, the performance of these models is often evaluated against competitors, typically by running their own models through industry benchmarks rather than by having contractors painstakingly evaluate their competitors' AI responses.
Contractors working on Gemini, tasked with rating the accuracy of the model's output, must score each response they see according to multiple criteria, such as truthfulness and verbosity. According to correspondence seen by TechCrunch, contractors get up to 30 minutes per prompt to determine whether Gemini's or Claude's answer is better.
Contractors recently began noticing references to Anthropic's Claude appearing on the internal Google platform they use to compare Gemini to other unnamed AI models, the correspondence shows. At least one of the outputs presented to Gemini's contractors, seen by TechCrunch, explicitly stated, “I am Claude, created by Anthropic.”
One internal chat showed contractors noticing that Claude's responses appeared to emphasize safety more than Gemini's. One contractor wrote that “Claude's safety settings are the strictest” of the AI models. In certain cases, Claude would not respond to prompts it considered unsafe, such as role-playing a different AI assistant. In another case, Claude avoided answering a prompt, while Gemini's response was flagged as a “serious safety violation” for including “nudity and bondage.”
Anthropic's commercial terms of service prohibit customers from accessing Claude to “build competitive products or services” or “train competitive AI models” without Anthropic's approval. Google is a major investor in Anthropic.
Sheila McNamara, a spokesperson for Google DeepMind, which runs Gemini, would not say in response to TechCrunch's questions whether Google had obtained Anthropic's approval to access Claude. An Anthropic spokesperson reached before publication did not comment by press time.
McNamara said DeepMind does “compare model outputs” for evaluation purposes but does not train Gemini on Anthropic models.
“Of course, in line with standard industry practice, in some cases we compare model outputs as part of our evaluation process,” McNamara said. “However, any suggestion that we have used Anthropic models to train Gemini is inaccurate.”
Last week, TechCrunch exclusively reported that Google contractors working on the company's AI products were being made to rate Gemini's responses in areas outside their expertise. In internal correspondence, contractors expressed concerns that Gemini could generate inaccurate information on highly sensitive topics such as healthcare.
You can securely submit tips to this reporter on Signal (+1 628-282-2811).