Earlier this week, DeepSeek, a well-funded Chinese AI research institute, released an “open” AI model that outperformed many rivals on popular benchmarks. The model, DeepSeek V3, is large but efficient and can easily handle text-based tasks like coding and essay writing.
It also seems to think that it's ChatGPT.
Posts on X and TechCrunch's own tests indicate that DeepSeek V3 identifies itself as ChatGPT, OpenAI's AI-powered chatbot platform. When asked for details, DeepSeek V3 claims to be a version of OpenAI's GPT-4 model released in June 2023.
This still reproduces as of today. In 5 out of 8 generations, DeepSeekV3 claims to be ChatGPT (v4), while claiming to be DeepSeekV3 only 3 times.
Gives you a rough idea of some of their training data distribution. https://t.co/Zk1KUppBQM pic.twitter.com/ptIByn0lcv
— Lucas Beyer (bl16) (@giffmana) December 27, 2024
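That tally is easy to check for yourself. Below is a minimal sketch in Python, assuming DeepSeek exposes an OpenAI-compatible chat endpoint at api.deepseek.com with a model named deepseek-chat; both details are assumptions here and worth verifying against DeepSeek's documentation:

```python
from collections import Counter

from openai import OpenAI  # DeepSeek's API is assumed OpenAI-compatible here

# Assumed endpoint and model name; verify before running.
client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_KEY")

tally = Counter()
for _ in range(8):  # mirror the 8 generations in the post above
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{"role": "user", "content": "Which model are you, exactly?"}],
    )
    answer = response.choices[0].message.content.lower()
    if "chatgpt" in answer or "gpt-4" in answer:
        tally["claims ChatGPT/GPT-4"] += 1
    elif "deepseek" in answer:
        tally["claims DeepSeek"] += 1
    else:
        tally["other"] += 1

print(tally)
```

Because the model samples its answers, the counts will vary from run to run, which is exactly why it identifies itself differently across generations.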
The delusion runs deep. Ask DeepSeek V3 a question about DeepSeek's own API, and it gives instructions for using OpenAI's API instead. DeepSeek V3 even tells some of the same jokes as GPT-4, right down to the punchlines.
So what's going on?
Models like ChatGPT and DeepSeek V3 are statistical systems. Trained on billions of examples, they learn patterns in those examples to make predictions, such as how “to whom” in an email typically precedes “it may concern.”
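As a toy illustration of that idea, and not a description of how ChatGPT or DeepSeek V3 actually work internally, a bigram model counts which word follows which in its training text and then predicts the most frequent successor:

```python
from collections import Counter, defaultdict

# Toy "training data": the kind of boilerplate an email model would see.
training_text = (
    "to whom it may concern "
    "to whom it may concern "
    "to whom this letter finds well"
).split()

# For each word, count which words follow it (a bigram model, a vastly
# simplified stand-in for the transformers behind ChatGPT and DeepSeek V3).
follows = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return follows[word].most_common(1)[0][0]

print(predict_next("whom"))  # -> "it": "whom" typically precedes "it may concern"
```

Scale that pattern-matching up by many orders of magnitude and you get the core intuition: the model reproduces whatever regularities dominate its training data, including, potentially, another chatbot's habits.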
DeepSeek hasn't revealed much about the source of its training data for DeepSeek V3. However, there is no shortage of public datasets containing text generated by GPT-4 via ChatGPT. If DeepSeek V3 was trained on these, the model may have memorized some of the GPT-4 outputs and regurgitated them verbatim.
“Obviously, the model has seen responses from ChatGPT at some point, but it's not clear where,” Mike Cook, a researcher at King's College London who specializes in AI, told TechCrunch. “It could be a ‘coincidence’… but unfortunately, we have seen examples of people trying to piggyback on another model's knowledge by training their own models directly on its outputs.”
Cook noted that training models on the outputs of competing AI systems can be “very bad” for model quality, because it can lead to hallucinated or misleading answers like those above. “Just as when we take copies of copies, we lose more and more information and connection to reality,” Cook said.
It may also violate the terms of use of those systems.
OpenAI's terms prohibit users of its products, including ChatGPT customers, from using the output to develop models that compete with OpenAI's own models.
OpenAI and DeepSeek did not respond to requests for comment. However, OpenAI CEO Sam Altman posted what appears to be a dig at DeepSeek and other competitors on Friday afternoon.
“It's (relatively) easy to copy what you know works,” Altman wrote. “It's very difficult to do something new, risky, and difficult when you don't know if it's going to work.”
To be sure, DeepSeek V3 is not the first model to misidentify itself. Google's Gemini and others sometimes claim to be competing models. Prompted in Chinese, for example, Gemini says it's Wenxin Yiyan, a chatbot from the Chinese company Baidu.
That's likely because AI debris litters the web, where AI companies source most of their training data. Content farms use AI to churn out clickbait, and bots flood Reddit and X. By some estimates, 90% of the web could be AI-generated by 2026.
This “contamination” makes it quite difficult to thoroughly filter AI-generated output out of a training dataset.
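To see why filtering is hard, consider the crude heuristics dataset builders often start with: blocklists of telltale chatbot phrases. Here is a minimal sketch; the phrase list is illustrative only, and real pipelines typically layer on trained classifiers and large-scale deduplication:

```python
import re

# Boilerplate phrases that tend to mark chatbot-generated text.
# Anything that doesn't use these phrases slips straight through.
AI_TELLTALES = [
    r"as an ai language model",
    r"as a large language model",
    r"i am chatgpt",
    r"trained by openai",
]
PATTERN = re.compile("|".join(AI_TELLTALES), re.IGNORECASE)

def looks_ai_generated(document: str) -> bool:
    """Flag documents containing telltale chatbot boilerplate."""
    return bool(PATTERN.search(document))

corpus = [
    "To whom it may concern: please find the report attached.",
    "As an AI language model trained by OpenAI, I cannot do that.",
]
clean = [doc for doc in corpus if not looks_ai_generated(doc)]
print(clean)  # only the first document survives
```

The obvious weakness is that most AI-generated text carries no such fingerprints, which is why contamination keeps seeping into training sets.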
It's certainly possible that DeepSeek trained DeepSeek V3 directly on text generated by ChatGPT. Google, after all, has been accused of doing the same.
Heidy Khlaaf, engineering director at the consulting firm Trail of Bits, said that the cost savings from “distilling” an existing model's knowledge can be attractive to developers, regardless of the risks.
“Even with internet data now awash in AI outputs, other models that accidentally trained on ChatGPT or GPT-4 outputs would not necessarily demonstrate outputs reminiscent of OpenAI's customized messages,” Khlaaf said. “If DeepSeek carried out distillation partially using OpenAI models, it would not be surprising.”
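For context, “distillation” in this setting often just means harvesting a stronger model's answers and fine-tuning a cheaper model on them. Here is a minimal sketch using OpenAI's Python client, with hypothetical prompts and file names; it illustrates the general technique, not anything DeepSeek is confirmed to have done:

```python
import json

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: collect "teacher" model responses to a pool of prompts.
prompts = [
    "Explain recursion in one paragraph.",
    "Write a haiku about the sea.",
]
records = []
for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4",  # the teacher model
        messages=[{"role": "user", "content": prompt}],
    )
    records.append(
        {"prompt": prompt, "completion": response.choices[0].message.content}
    )

# Step 2: dump the prompt/completion pairs as training data for a "student"
# model. Fine-tuning a competing model on a file like this is exactly what
# OpenAI's terms prohibit.
with open("distilled_pairs.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```

Done at scale, the student model inherits not just the teacher's knowledge but also its quirks, which would be consistent with a model telling GPT-4's jokes down to the punchlines.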
More likely, though, is that a great deal of ChatGPT/GPT-4 data made its way into the DeepSeek V3 training set. That means the model can't be trusted to self-identify, for one. More concerning is the possibility that, by uncritically absorbing and repeating GPT-4's outputs, DeepSeek V3 could exacerbate some of that model's biases and flaws.