When we look at the mythological Ouroboros, it's natural to think, “Well, that won't last.” It's a powerful symbol of swallowing one's own tail, but difficult in practice. The same may be true for AI: new research suggests that AI may be at risk of “model collapse” after being trained several times on self-generated data.
In a paper published in Nature, researchers from the UK and Canada, led by Ilia Shumailov of the University of Oxford, show that today's machine learning models are fundamentally vulnerable to a syndrome they call “model collapse.” They write in the introduction to their paper:
We find that indiscriminately learning from data produced by other models causes “model collapse”, a degenerative process whereby, over time, models forget the true underlying data distribution…
Why and how does this happen? The process is actually quite easy to understand.
AI models are essentially pattern-matching systems: they learn patterns in their training data, then match a prompt against those patterns and fill in the most likely next points on the line. Whether you ask, “What's the recipe for a good snickerdoodle?” or “List the US presidents in order of age when they took office,” the model is essentially just returning the most likely continuation of a string of words. (Image generators are different, but in many ways similar.)
But the model gravitates towards the most common outputs: it won't give you a controversial snickerdoodle recipe, but the most popular, ordinary one. And if you ask an image generator to create an image of a dog, it won't return a rare breed it has only seen two images of in its training data; it will probably give you a golden retriever or a labrador.
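To make this concrete, here's a deliberately tiny sketch (my own toy illustration, not how any real model is built): it “trains” by counting which word follows which in a scrap of text, then answers a prompt by returning the most common continuation it has seen.

```python
# Toy illustration of "predict the most likely continuation".
# This is a made-up example for intuition, not a real language model.
from collections import Counter, defaultdict

training_text = (
    "a good snickerdoodle needs cream of tartar . "
    "a good snickerdoodle needs plenty of butter . "
    "a good snickerdoodle uses cinnamon sugar ."
).split()

# Count which word follows each word in the "training data".
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def continue_prompt(word: str) -> str:
    """Return the continuation seen most often after `word` during training."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(continue_prompt("snickerdoodle"))  # prints "needs": the most common pattern wins
```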
Now, combine these two tendencies with the fact that the web is being flooded with AI-generated content, and that new AI models are likely to ingest and train on that content – meaning they are going to see a lot of golden retrievers.
And once it's trained on this proliferation of golden retrievers (or half-baked blog spam, or fake faces, or generated songs), that becomes its new ground truth. It will learn to think that 90% of dogs are really golden retrievers, so when it's told to generate dogs, it will start to make an even higher percentage of golden retrievers – meaning it basically forgets what dogs are.
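You can watch that feedback loop in miniature with a toy simulation (again, my own sketch, not the paper's actual experiment): fit a simple Gaussian “model” to some data, sample a new dataset from the fitted model, retrain on those samples, and repeat. Over the generations the fitted spread drifts toward zero, which is to say the tails of the original distribution quietly disappear.

```python
# Toy sketch of recursive training on self-generated data (illustrative only).
# Each generation is "trained" (fit) on samples drawn from the previous
# generation's fitted model; the estimated spread tends to shrink over time,
# so rare values stop appearing. Exact numbers depend on the random seed.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=100)  # the original "real" data

for generation in range(201):
    mu, sigma = data.mean(), data.std()          # fit the model to the current data
    if generation % 40 == 0:
        print(f"generation {generation:3d}: fitted std = {sigma:.3f}")
    data = rng.normal(mu, sigma, size=100)       # the next generation sees only model output
```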
This fantastic illustration from an accompanying commentary article in Nature gives a visual of the process.
Image credit: Nature
Something similar happens with language models and other models that essentially favor the most common data in their training sets when producing an answer. To be honest, that's usually fine – it only becomes a problem when a model is faced with the sheer volume of data that is today's public web.
Essentially, as models keep eating each other's data, perhaps without us even realizing it, they will get increasingly weird and stupid until they finally break down. The researchers offer numerous examples and mitigations, but they go so far as to say that, at least in theory, model collapse is “inevitable.”
While it may not play out the way their experiment shows, the possibility should worry everyone in the AI field. The diversity and depth of training data is increasingly considered the most important factor in model quality. If data becomes scarce and models run the risk of collapsing, does that fundamentally limit today's AI? If it starts to happen, how will we know? And is there anything we can do to prevent or mitigate the problem?
The answer to at least that last question is probably yes, but that doesn't make these concerns any less real.
Qualitative and quantitative benchmarks on the sources and diversity of data would be helpful, but we're still a long way from standardizing them. Watermarking AI-generated data could help other AIs avoid it, but so far no one has found a good way to mark images that way (well, I have).
Indeed, companies may become less willing to share this information, instead seeking to hoard as much highly valuable original, human-generated data as possible to preserve what Shumailov et al. call a “first-mover advantage.”
[Model collapse] must be taken seriously if the benefits of training on large-scale data collected from the web are to be sustained. Indeed, as LLM-generated content shows up in data crawled from the Internet, the value of data about genuine human interactions with systems becomes even more pronounced.
…It may become increasingly difficult to train new versions of LLMs without access to data crawled from the Internet before the technology was adopted at scale, or without direct access to data generated by humans at scale.
Add this to the pile of potentially devastating challenges facing AI models, and to the arguments against today's methods producing tomorrow's superintelligences.