Britain's data protection watchdog has concluded a nearly year-long investigation into Snap's AI chatbot, My AI, saying it is satisfied the social media company has addressed concerns about risks to children's privacy. At the same time, the Information Commissioner's Office (ICO) issued a general warning to the industry to proactively assess the risks to people's rights before bringing generative AI tools to market.
Generative AI refers to a type of AI that can create content. In Snap's case, the technology powers a chatbot that can respond to users in human-like ways, such as by sending text messages and snaps, allowing the platform to offer automated interactions.
Snap's AI chatbot is powered by OpenAI's ChatGPT, but the social media company says it applies various safeguards to the application, including programmed guidelines and age consideration by default, intended to prevent children from being shown age-inappropriate content. Parental controls are also built in.
“Our investigation into 'MyAI' should serve as a warning to the industry,” Stephen Almond, the ICO's executive director of regulatory risk, said in a statement on Tuesday. “Organizations developing or using generative AI need to consider data protection from the outset, including rigorously assessing and mitigating risks to people's rights and freedoms before bringing their products to market.”
“We will continue to monitor organizations' risk assessments and use all enforcement powers, including fines, to protect the public from harm,” he added.
Back in October, the ICO sent Snap a preliminary enforcement notice over what it described at the time as a potential failure to "properly assess the privacy risks posed by its generative AI chatbot 'My AI.'"
That preliminary notice last fall appears to have been the only public rebuke of Snap. In theory, the regulator could impose a fine of up to 4% of a company's annual turnover if a breach of data protection law is confirmed.
Announcing the conclusion of the investigation on Tuesday, the ICO suggested the company had "taken significant steps to more thoroughly investigate the risks posed by 'My AI'" following its intervention. The ICO also said Snap was able to demonstrate that it had implemented "appropriate mitigation" in response to the concerns raised, but did not specify what additional measures, if any, the company had taken. (We have asked.)
Further details are likely to be released when the regulator's final decision is announced in the coming weeks.
"The ICO is satisfied that Snap has carried out a risk assessment on My AI that complies with data protection law. The ICO will continue to monitor the rollout of My AI and how it addresses emerging risks," the regulator added.
Following the conclusion of the investigation, a Snap spokesperson sent us a statement that read: "While we carefully assessed the risks posed by My AI, we accept that our assessment could have been more clearly documented, and we have made changes to our global procedures to reflect the ICO's constructive feedback. We welcome the ICO's conclusion that our risk assessment is fully compliant with UK data protection law, and look forward to continuing our constructive partnership."
Snap did not disclose the mitigation measures it had implemented in response to the ICO's intervention.
The UK regulator said generative AI remains an enforcement priority. It has published guidance for developers on AI and data protection rules, and has also launched a consultation seeking views on how privacy law should apply to the development and use of generative AI models.
Although the UK has yet to introduce formal legislation on AI, the government has chosen to rely on regulators such as the ICO to decide how the various existing rules apply to the technology. European Union lawmakers, by contrast, have just approved a risk-based framework for AI, which will include transparency obligations for AI chatbots.