The European Commission has sent a series of formal requests for information (RFIs) to Google, Meta, Microsoft, Snap, TikTok and X, asking how they are addressing risks associated with the use of generative AI.
The requests, which relate to Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube, and X, are being made under the bloc's relaunched e-commerce and online governance rulebook, the Digital Services Act (DSA). All eight platforms are designated as Very Large Online Platforms (VLOPs) under the regulation, meaning they are required to assess and mitigate systemic risk, in addition to complying with the rest of the rulebook.
In a press release Thursday, the Commission said it is asking the companies for more information on their respective mitigation measures for risks linked to generative AI on their services, including risks related to so-called “hallucinations,” where AI technology generates false information; the viral spread of deepfakes; and the automated manipulation of services that can mislead voters.
“The Commission is also requesting information and internal documents on the risk assessments and mitigation measures linked to the impact of generative AI on electoral processes, the dissemination of illegal content, the protection of fundamental rights, gender-based violence, the protection of minors, and mental well-being,” the Commission added, stressing that the questions concern “both the dissemination and creation of generative AI content.”
The EU also told journalists during a press briefing that it is planning a series of stress tests, to be carried out after Easter. These will test the platforms' readiness to deal with generative AI risks, such as the possibility of a flood of political deepfakes ahead of the European elections in June.
“We want to push the platforms to tell us whatever they are doing to be as best prepared as possible… for all incidents that we might be able to detect and will have to react to in the run-up to the elections,” a senior Commission official said, speaking on condition of anonymity.
The EU, which oversees VLOPs’ compliance with these Big Tech-specific DSA rules, has named election security as one of its enforcement priorities. It has recently been consulting on election security rules for VLOPs as it works to produce formal guidance in this area.
According to the Commission, today's requests are intended, in part, to feed into that guidance. Platforms have until April 3 to provide the information related to the protection of elections, which is labeled an “urgent” request. But the EU has said it hopes to finalize the election security guidelines sooner than that, by March 27.
The Commission said the cost of producing synthetic content is falling dramatically, amplifying the risk of misleading deepfakes being churned out during elections. That is why it is dialing up its attention on the major platforms with the scale to disseminate political deepfakes widely.
A tech industry accord to combat deceptive use of AI during elections, which came out of last month's Munich Security Conference with backing from the same platforms the Commission is now sending RFIs to, does not go far enough in the EU's view.
A Commission official said the forthcoming election security guidelines will be a “further step forward,” pointing to a trio of safeguards the Commission plans to leverage: the DSA's “clear due diligence rules,” which give it the power to target specific “risk situations”; more than five years of experience working with platforms through the (non-legally binding) Code of Practice on Disinformation, which the EU intends to become a Code of Conduct under the DSA; and incoming transparency labeling/AI model marking rules under the forthcoming AI Act.
The EU's goal is to create an “ecosystem of enforcement structures” that it can leverage in the run-up to elections, the official added.
The Commission's RFIs today also aim to address generative AI risks beyond voter manipulation, including the harms associated with deepfake pornography and other types of malicious synthetic content generation, whether the content produced is image, video, or audio. These requests reflect other priority areas for the EU's DSA enforcement against VLOPs, such as risks related to illegal content (e.g. hate speech) and child protection.
The platforms have until April 24 to provide their responses to these other generative AI RFIs.
Smaller platforms where misleading, malicious, or harmful deepfakes may be distributed, and smaller AI tool makers that can enable the production of synthetic media at low cost, are also on the EU's risk mitigation radar.
Such platforms and tools are not designated, so they do not fall under the Commission's explicit DSA oversight of VLOPs. But the EU's strategy for scaling its regulatory impact is to apply pressure indirectly: through the larger platforms (which may act as amplifiers and/or distribution channels in this context); through self-regulatory mechanisms such as the aforementioned Disinformation Code; and through the AI Pact, which is due to ramp up shortly, once the (hard law) AI Act is adopted (expected within the next few months).