The UK's data protection watchdog has confirmed that Microsoft-owned LinkedIn has stopped processing user data to train AI models for now.
Stephen Almond, executive director of regulatory risk at the UK Information Commissioner's Office, said in a statement on Friday: “We are pleased that LinkedIn has considered the concerns we raised about the company's approach to training AI models with information about UK users, and we welcome LinkedIn's confirmation that it has suspended such model training pending further consultation with the ICO.”
Eagle-eyed privacy experts have already noticed that LinkedIn quietly amended its privacy policy following backlash over its collection of people's information to train AI. The company added the UK to the list of European territories where it doesn't offer an opt-out, saying it doesn't process local user data for this purpose.
“At this time, we have not enabled training of Generative AI on member data from the European Economic Area, Switzerland, or the UK, and we have no plans to offer the setting to members in these regions until further notice,” LinkedIn general counsel Blake Lawit said in an updated version of the company's blog post first published on Sept. 18.
The professional social network had previously stated that it didn't process information about users in the European Union, the wider European Economic Area, or Switzerland, where the EU's General Data Protection Regulation (GDPR) or equivalent rules apply. But because UK data protection law is still based on the EU framework, privacy experts were quick to complain when it emerged that LinkedIn wasn't extending the same consideration to its UK users.
UK digital rights group Open Rights Group (ORG) expressed its anger at LinkedIn's actions by filing a new complaint with the ICO over AI processing of data without consent, but it also criticized regulators for failing to stop what it characterizes as yet another AI data theft.
Meta, which owns Facebook and Instagram, has lifted an earlier moratorium on processing its local users' data to train its AI, reverting to collecting information from UK users by default. That means users with UK-linked accounts will again have to actively opt out if they don't want Meta to use their personal data to power its algorithms.
The ICO has previously raised concerns about Meta's practices, but the regulator has so far chosen to stand by and watch as the ad tech giant resumes data collection.
In a statement Wednesday, ORG's legal and policy director, Mariano Delli Santi, warned of the power imbalance opt-out mechanisms create: burying an opt-out somewhere in the settings effectively lets powerful platforms do whatever they want with people's information. Instead, he argued, they should be required to obtain affirmative consent up front.
“Opt-out models have once again proven wholly inadequate to protect our rights. Ordinary citizens cannot be expected to monitor and track every online company that decides to use our data to train AI,” he wrote. “Opt-in consent is not only a legal requirement, but also a common-sense requirement.”
We have sent questions to the ICO and Microsoft and will update this report if we receive a response.