On Sunday, California Governor Gavin Newsom signed AB-2013, a bill requiring companies that develop generative AI systems to publish a summary of the data used to train them. The summaries must cover, among other things, who owns the data, how it was sourced or licensed, and whether it includes any copyrighted or personal information.
Few AI companies say whether they will comply.
TechCrunch reached out to major players in the AI space, including OpenAI, Anthropic, Microsoft, Google, Amazon, and Meta, as well as startups Stability AI, Midjourney, Udio, Suno, Runway, and Luma Labs. Fewer than half responded, and one vendor, Microsoft, explicitly declined to comment.
Stability, Runway, and OpenAI were the only companies that told TechCrunch they would comply with AB-2013.
“OpenAI complies with the laws of the jurisdictions in which we operate, including this region,” an OpenAI spokesperson said. A Stability spokesperson said the company “supports thoughtful regulation that protects the public and does not stifle innovation.”
To be fair, AB-2013's disclosure requirements won't take effect overnight. While they apply to systems released on or after January 2022 (ChatGPT and Stable Diffusion among them), companies have until January 2026 to begin publishing training data summaries. The law also applies only to systems made available to Californians, which leaves some wiggle room.
But there may be another reason why vendors are silent on the subject, and it has to do with how most generative AI systems are trained.
Training data is frequently sourced from the web. Vendors scrape vast quantities of images, songs, videos, and more from websites and train their systems on them.
A few years ago, it was standard practice for AI developers to list the sources of their training data, usually in technical documentation accompanying a model's release. Google, for example, once revealed that it trained an early version of Imagen, its family of image-generation models, on the publicly available LAION dataset. Many older papers mention The Pile, an open-source collection of training text that includes academic studies and codebases.
In today's cutthroat market, the composition of training datasets is considered a competitive advantage, and companies cite this as one of the main reasons for nondisclosure. But training data details can also paint a legal target on developers' backs. LAION links to copyrighted and privacy-violating images, while The Pile contains Books3, a library of pirated works by Stephen King and other authors.
There are already numerous lawsuits over the misuse of training data, and more are being filed every month.
Authors and publishers claim that OpenAI, Anthropic, and Meta used copyrighted books, some from Books3, for training. Music labels have sued Udio and Suno for allegedly training on songs without compensating the musicians who made them. And artists have filed class action lawsuits against Stability and Midjourney, alleging that their data scraping amounts to theft.
It's not hard to see how AB-2013 could pose a problem for vendors trying to fend off legal battles. The law mandates the publication of a host of potentially incriminating specifics about training datasets, including when each dataset was first used and whether data collection is ongoing.
AB-2013's scope is also quite broad. Any company that makes “substantial changes” to an AI system, that is, fine-tunes or retrains it, will likewise be forced to publish information about the training data it used to do so. The law has a few carve-outs, but they primarily apply to AI systems used in cybersecurity and defense, such as those for “operating aircraft in national airspace.”
Of course, many vendors believe that the doctrine known as fair use provides legal protection, and they assert this in court and in public statements. Some companies, such as Meta and Google, are changing their platform settings and terms of service to make more user data available for training.
Spurred by competitive pressures and a bet that fair use defenses will ultimately prevail, some companies have trained liberally on IP-protected data. A Reuters report revealed that Meta was at one point using copyrighted books for AI training despite warnings from its own lawyers. There is also evidence that Runway sourced Netflix and Disney films to train its video generation systems, and OpenAI reportedly transcribed YouTube videos without creators' knowledge to develop models such as GPT-4.
As I've written before, there is an outcome in which generative AI vendors emerge unscathed whether or not they disclose systems' training data. Courts may ultimately side with fair use proponents and decide that generative AI is sufficiently transformative, and not the plagiarism engine that The New York Times and other plaintiffs allege it to be.
In a more dramatic scenario, AB-2013 could lead vendors to withhold certain models from California, or to release versions of their models for Californians trained only on fair use and licensed datasets. Some vendors may decide that the safest course of action under AB-2013 is the one that avoids incriminating disclosures, and the litigation that could come with them.
Assuming the law isn't challenged or stayed, we'll have a clearer picture by AB-2013's deadline, just over a year from now.