Elon Musk has agreed to stop using Europeans' X posts as AI training material for his Grok chatbot for now.
The Irish Data Protection Commission (DPC), which leads X's privacy oversight under the European Union's General Data Protection Regulation (GDPR), announced the development in a press release on Thursday, saying it welcomed the social media company's decision to “suspend the processing of personal data for the purposes of training its AI tool, Grok.”
It emerged earlier this week that the DPC had filed suit seeking an injunction against X over its processing of the data without consent, and Irish national broadcaster RTE also reported that the DPC plans to refer the matter to the European Data Protection Board (EDPB).
In a statement on X's sudden change of course, DPC Commissioner Des Hogan said: “My colleague Commissioner Dale Sunderland and I welcome X's agreement to suspend processing whilst the DPC, in conjunction with its EU/EEA [European Economic Area] peer regulators, continues to examine the extent to which the processing complies with the GDPR.
“One of our key roles as an independent regulator and rights-based organisation is to ensure the best outcomes for data subjects. Today's developments will help ensure the continued protection of the rights and freedoms of X users across the EU and EEA. We will continue to work with all data controllers to ensure that citizens' rights under the EU Charter of Fundamental Rights and the GDPR are upheld.”
The DPC has been asked a number of questions, including whether it will ensure that data already processed unlawfully is deleted.
What happens to AI models trained on unlawfully processed data, and how any violation should be remedied, are also open questions, and it remains unclear how privacy watchdogs will interpret the law in this area.
OpenAI's ChatGPT, a rival chatbot, drew early attention from some GDPR enforcers for processing Europeans' publicly available data to train its models: in January, Italy's privacy watchdog warned OpenAI over multiple suspected violations of the regulation.
But the EDPB task force set up to examine how the GDPR could apply to ChatGPT published only a preliminary report in May, leaving key issues, such as the lawfulness and fairness of the processing, unresolved.
Ordering the deletion of models trained on unlawfully processed data would represent a further leap that privacy watchdogs have yet to take.