Earlier this week, DeepSeek, a well-funded Chinese AI lab, released an “open” AI model that beats many rivals on popular benchmarks. The model, DeepSeek V3, is large but efficient, handling text-based tasks like coding and writing essays with ease.
It also seems to think it’s ChatGPT.
Posts on X — and TechCrunch’s own tests — show that DeepSeek V3 identifies itself as ChatGPT, OpenAI’s AI-powered chatbot platform. Asked to elaborate, DeepSeek V3 insists that it is a version of OpenAI’s GPT-4 model released in June 2023.
This actually reproduces as of today. In 5 out of 8 generations, DeepSeekV3 claims to be ChatGPT (v4), while claiming to be DeepSeekV3 only 3 times.
Gives you a rough idea of some of their training data distribution.
— Lucas Beyer (bl16) (@giffmana), December 27, 2024
The delusions run deep. If you ask DeepSeek V3 a question about DeepSeek’s API, it’ll give you instructions on how to use OpenAI’s API. DeepSeek V3 even tells some of the same jokes as GPT-4 — down to the punchlines.
So what’s going on?
Models like ChatGPT and DeepSeek V3 are statistical systems. Trained on billions of examples, they learn patterns in those examples to make predictions — like how “to whom” in an email typically precedes “it may concern.”
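To make that concrete, here's a deliberately toy sketch of learning next-word statistics from example text. (Production LLMs use neural networks trained on billions of tokens, not word counts; the corpus and phrases below are illustrative only.)

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "billions of examples."
corpus = [
    "to whom it may concern",
    "to whom it may concern",
    "to whom this letter finds",
]

# Count which word follows each word across the corpus.
following = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("whom"))  # -> "it", the most frequent follower in the corpus
```

A model trained this way has no notion of truth, only of what tends to follow what. If "I am ChatGPT" appears often enough in the training text, that's what gets predicted.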
DeepSeek hasn’t revealed much about the source of DeepSeek V3’s training data. But there’s no shortage of public datasets containing text generated by GPT-4 via ChatGPT. If DeepSeek V3 was trained on these, the model might’ve memorized some of GPT-4’s outputs and is now regurgitating them verbatim.
“Obviously, the model is seeing raw responses from ChatGPT at some point, but it’s not clear where that is,” Mike Cook, a research fellow at King’s College London specializing in AI, told TechCrunch. “It could be ‘accidental’ … but unfortunately, we have seen instances of people directly training their models on the outputs of other models to try and piggyback off their knowledge.”
Cook noted that the practice of training models on outputs from rival AI systems can be “very bad” for model quality, because it can lead to hallucinations and misleading answers like the above. “Like taking a photocopy of a photocopy, we lose more and more information and connection to reality,” Cook said.
It might also be against those systems’ terms of service.
OpenAI’s terms prohibit users of its products, including ChatGPT customers, from using outputs to develop models that compete with OpenAI’s own.
OpenAI and DeepSeek didn’t immediately respond to requests for comment. However, OpenAI CEO Sam Altman posted what appeared to be a dig at DeepSeek and other competitors on X Friday afternoon.
“It is (relatively) easy to copy something that you know works,” Altman wrote. “It is extremely hard to do something new, risky, and difficult when you don’t know if it will work.”
Granted, DeepSeek V3 is far from the first model to misidentify itself. Google’s Gemini and others sometimes claim to be competing models. For example, prompted in Mandarin, Gemini says that it’s Chinese company Baidu’s Wenxinyiyan chatbot.
And that’s because the web, which is where AI companies source the bulk of their training data, is becoming littered with AI slop. Content farms are using AI to create clickbait. Bots are flooding Reddit and X. By one estimate, 90% of the web could be AI-generated by 2026.
This “contamination,” if you will, has made it quite difficult to thoroughly filter AI outputs from training datasets.
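In practice, such filtering leans heavily on heuristics. The sketch below shows a deliberately naive version; the telltale phrases and threshold logic are illustrative, not any lab's actual pipeline:

```python
# Naive filter: flag documents containing telltale chatbot phrases.
# Real pipelines combine many signals (trained classifiers, dedup
# against known model outputs, provenance metadata), and even then
# plenty of AI-generated text slips through.
AI_TELLS = (
    "as an ai language model",
    "i cannot assist with that",
    "knowledge cutoff",
)

def looks_ai_generated(doc: str) -> bool:
    text = doc.lower()
    return any(phrase in text for phrase in AI_TELLS)

docs = [
    "As an AI language model, I cannot browse the internet.",
    "The quarterly report shows revenue grew 4% year over year.",
]
clean = [d for d in docs if not looks_ai_generated(d)]
print(clean)  # keeps only the second document
```

The obvious weakness is that AI text which doesn't use stock phrases sails straight through, which is exactly why contamination keeps accumulating.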
It’s certainly possible that DeepSeek trained DeepSeek V3 directly on ChatGPT-generated text. Google was once accused of doing the same, after all.
Heidy Khlaaf, engineering director at consulting firm Trail of Bits, said the cost savings from “distilling” an existing model’s knowledge can be attractive to developers, regardless of the risks.
“Even with internet data now brimming with AI outputs, other models that would accidentally train on ChatGPT or GPT-4 outputs would not necessarily demonstrate outputs reminiscent of OpenAI customized messages,” Khlaaf said. “If it is the case that DeepSeek carried out distillation partially using OpenAI models, it would not be surprising.”
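For context on what "distilling" means here: the classic white-box form, sketched below, matches a student model's output distribution to a teacher's logits. That variant assumes access to the teacher's logits, which OpenAI's API does not expose; what's alleged in DeepSeek's case would be the black-box variant, ordinary fine-tuning where the training text is simply the teacher's generated outputs.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label distillation (Hinton et al., 2015): push the
    student's output distribution toward the teacher's. The
    black-box variant alleged here is simpler still: supervised
    fine-tuning where the "labels" are the teacher's text outputs.
    """
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence, scaled by T^2 as in the original formulation.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature**2

# Toy usage: a batch of 4 examples over a 10-token vocabulary.
student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)  # no gradient: the teacher is frozen
loss = distillation_loss(student, teacher)
loss.backward()  # gradients flow to the student only
print(loss.item())
```

The economics Khlaaf describes follow directly: querying a strong teacher is far cheaper than assembling and cleaning a comparable training corpus from scratch.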
More likely, however, is that a lot of ChatGPT/GPT-4 data made its way into the DeepSeek V3 training set. That means the model can't be trusted to self-identify, for one. But more concerning is the possibility that DeepSeek V3, by uncritically absorbing and iterating on GPT-4's outputs, could exacerbate some of GPT-4's biases and flaws.
Kyle Wiggers is a senior reporter at TechCrunch with a special interest in artificial intelligence. His writing has appeared in VentureBeat and Digital Trends, as well as a range of gadget blogs including Android Police, Android Authority, Droid-Life, and XDA-Developers. He lives in Brooklyn with his partner, a piano educator, and dabbles in piano himself occasionally, if mostly unsuccessfully.