Elon Musk's social media platform X has been the target of a series of privacy complaints after it used European Union users' data to train AI models without their consent.
Late last month, eagle-eyed social media users spotted a setting indicating that X had quietly begun processing regional users' posts to train its Grok AI chatbot. The discovery prompted the Irish Data Protection Commission (DPC), the watchdog that oversees X's compliance with the EU General Data Protection Regulation (GDPR), to express “consternation.”
GDPR allows for fines of up to 4% of annual global turnover if a violation is found, but stipulates that any use of personal data must have a valid legal basis. Nine complaints against X, filed with data protection authorities in Austria, Belgium, France, Greece, Ireland, Italy, the Netherlands, Poland and Spain, accuse the company of neglecting this step by using Europeans' posts to train its AI without their consent.
Max Schrems, chairman of privacy rights nonprofit noyb, which is supporting the complaints, said in a statement: “Over the past few years, we have seen countless examples of inefficient and incomplete enforcement by the DPC. In this case, we want to ensure that Twitter fully complies with EU law, which requires it to, at a minimum, ask users for consent.”
The DPC has already taken action over X's processing of user data to train its AI models, filing a lawsuit in the Irish High Court seeking an injunction to force the company to stop using the data. However, noyb argues that the DPC's actions so far are insufficient and that there is no way for X's users to force the company to delete “data that has already been captured.” In response, noyb has filed GDPR complaints in Ireland and seven other countries.
The complaints allege that X has no lawful basis for using the data of around 60 million people in the EU to train its AI without their consent. The platform appears to be relying on a legal basis called “legitimate interest” for its AI-related processing, but privacy experts say it needs to obtain consent.
“Companies that interact directly with users could simply give them a yes/no prompt before using their data. Companies do this regularly for many other things, so it's certainly possible to do it with AI training as well,” Schrems suggested.
In June, Meta paused similar plans to process user data for AI training after noyb filed GDPR complaints and regulators intervened.
X's move, however, appears to have gone unnoticed for weeks, as the company quietly used the data to train its AI without notifying users.
The DPC said X processed data of Europeans to train its AI models between May 7 and August 1.
X users can now opt out of the processing via a setting added to the web version of the platform (we believe in late July), but until then there was no way to block it. And of course, it's hard to opt out of your data being used if you don't know it's being fed into AI training in the first place.
This is important because the GDPR is explicitly intended to protect Europeans from unintended uses of their information that may affect their rights and freedoms.
In arguing against X's choice of legal basis, noyb points to a ruling from Europe's highest court last summer. The ruling concerned a competition complaint against Meta's use of people's data for ad targeting, in which the court found that the legitimate interest legal basis was not valid for that use case and that user consent must be obtained.
Noyb also noted that providers of generative AI systems often claim they are unable to comply with other core requirements of the GDPR, such as the right to be forgotten and the right to obtain a copy of one's personal data. These concerns also feature in other outstanding GDPR complaints against OpenAI's ChatGPT.