The European Union on Tuesday published draft election security guidelines for the roughly two dozen large platforms with more than 45 million monthly active users in the region that are regulated under the Digital Services Act (DSA) and therefore have a legal obligation to mitigate systemic risks such as political deepfakes, while safeguarding fundamental rights like freedom of expression and privacy.
Targeted platforms include Facebook, Google Search, Instagram, LinkedIn, TikTok, YouTube, X, and more.
The Commission has identified elections as one of a handful of priority areas for DSA enforcement on so-called Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs). In addition to complying with the full online governance regime, this subset of DSA-regulated companies is required to identify and mitigate systemic risks, such as information manipulation targeting democratic processes.
With the election security guidelines, the bloc expects regulated tech giants to step up their efforts to protect democratic votes, including by deploying capable content moderation resources in the multiple official languages spoken across the region, and by ensuring they have enough staff on hand to respond effectively to risks arising from the flow of information on their platforms and to act on reports from third-party fact-checkers. Platforms risk hefty fines if they drop the ball.
That requires platforms to strike a tricky balance when moderating political content: for example, distinguishing between political satire, which merits protection as free expression, and malicious political disinformation, whose creators hope to influence voters and skew elections.
Content of the latter type falls under the DSA's classification of a systemic risk, which platforms are expected to swiftly spot and mitigate. The relevant EU standard requires them to put in place “reasonable, proportionate and effective” mitigation measures for risks related to electoral processes, as well as respecting other relevant provisions of the wide-ranging content moderation and governance regulation.
The Commission has moved quickly to produce the election guidelines, opening a consultation on the draft only last month. The sense of urgency in Brussels stems from the European Parliament elections scheduled for June, and officials said they will focus on stress-testing platforms' readiness next month. So the EU does not appear willing to leave platform compliance to chance, even with a hard law in place that puts tech giants at risk of huge fines if they fail to meet the Commission's expectations this time around.
User controls for algorithmic feeds
Central to the EU's election guidelines for mainstream social media firms and other major platforms is the idea that users should be offered meaningful choices over algorithmic and AI-powered recommender systems, giving them some control over the kind of content they see.
“Recommender systems can play an important role in shaping the information landscape and public opinion,” the guidance notes. “In order to mitigate the risks that such systems may pose in relation to electoral processes, [platform] providers should consider: (i.) ensuring that recommender systems are designed and adjusted in a way that takes due account of media diversity and pluralism and gives users meaningful choice and control over their feeds.”
Recommender systems should also include measures to down-rank election-targeted disinformation in what the guidance describes as a “clear and transparent manner”, for example by demoting posts from accounts repeatedly found to spread misinformation.
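To make the down-ranking idea concrete, here is a minimal illustrative sketch in Python. Nothing in it comes from the guidance itself: the scoring fields, penalty multipliers, and strike rule are all hypothetical, but they show how a simple, transparent demotion rule could work.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    base_score: float              # relevance score from the recommender
    fact_check_rating: str | None  # e.g. "false", "misleading", or None
    author_strike_count: int       # prior verified misinformation findings

# Hypothetical penalty multipliers; a real system would tune and publish these
FACT_CHECK_PENALTY = {"false": 0.1, "misleading": 0.5}

def adjusted_score(post: Post) -> float:
    """Demote posts flagged by fact-checkers or authored by accounts
    repeatedly found to spread misinformation."""
    score = post.base_score
    if post.fact_check_rating in FACT_CHECK_PENALTY:
        score *= FACT_CHECK_PENALTY[post.fact_check_rating]
    # Each prior strike halves reach, capped so accounts aren't zeroed out
    score *= 0.5 ** min(post.author_strike_count, 3)
    return score

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order candidate posts by their penalty-adjusted score."""
    return sorted(posts, key=adjusted_score, reverse=True)
```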
Platforms must also deploy mitigations to avoid the risk of their recommender systems spreading generative AI-based disinformation (a.k.a. political deepfakes). They should proactively assess their recommendation engines for risks related to electoral processes and roll out updates to mitigate what they find. The EU further recommends transparency around the design and functioning of AI-driven feeds, and urges platforms to engage in adversarial testing and red-teaming to sharpen their ability to spot and defuse risks.
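Building on the sketch above, a crude red-team-style check (again purely illustrative, reusing the Post and rank_feed definitions) might inject a flagged post with high raw engagement and verify that the ranking demotes it:

```python
def test_flagged_post_is_demoted() -> None:
    """Toy adversarial test: a fact-checked-false post with a higher
    raw score should still rank below a clean post."""
    clean = Post("a", base_score=0.80, fact_check_rating=None,
                 author_strike_count=0)
    fake = Post("b", base_score=0.95, fact_check_rating="false",
                author_strike_count=2)
    ranked = rank_feed([fake, clean])
    assert ranked[0].post_id == "a", "mitigation failed to demote flagged post"

test_flagged_post_is_demoted()
```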
The EU's advice on generative AI also recommends watermarking synthetic media, while noting the limits of technical feasibility.
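The guidance doesn't prescribe a particular watermarking technique. As one purely illustrative sketch (assuming the Pillow imaging library; the metadata field names are invented), a platform could attach a machine-readable disclosure to generated images, with the caveat that metadata labels are far weaker than pixel-level watermarks:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_synthetic_png(src_path: str, dst_path: str) -> None:
    """Embed a machine-readable AI-generation disclosure in a PNG's
    metadata. Caveat: metadata is trivially stripped on re-encode,
    one of the feasibility limits the guidance nods to; robust
    watermarks alter the pixels themselves."""
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("synthetic-media", "true")       # hypothetical field names
    meta.add_text("generator", "example-model-v1")
    image.save(dst_path, pnginfo=meta)

def is_labeled_synthetic(path: str) -> bool:
    """Check for the disclosure tag on a PNG."""
    return Image.open(path).text.get("synthetic-media") == "true"
```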
Among the recommended mitigations and best practices outlined in the 25-page draft guidance published today is an expectation that platforms will put processes in place for dialing up internal resources to focus on specific election threats, such as around upcoming election events, and for sharing relevant information and risk analyses.
Resourcing with local expertise
The guidance emphasizes the need for analysis of “local context-specific risks”, alongside the gathering of Member State-specific/national and regional information, to feed the work of the teams responsible for designing and calibrating risk mitigation measures. It also calls for “adequate content moderation resources” with local language capability and knowledge of national and regional contexts and specificities, a long-standing complaint in the EU about platforms' efforts to shrink disinformation risks.
Another recommendation is that platforms reinforce their internal processes and resourcing around each election event, including by standing up “a dedicated, clearly identifiable internal team” ahead of the electoral period, with resources proportionate to the risks identified for the election in question.
The guidance also explicitly recommends hiring staff with local expertise, including language knowledge; platforms, by contrast, have often sought to repurpose centralized resources rather than recruit dedicated local expertise.
“Teams should cover all relevant expertise, including in areas such as content moderation, fact-checking, threat disruption, hybrid threats, cybersecurity, disinformation and FIMI [foreign information manipulation and interference], fundamental rights and public participation, and cooperate with relevant external experts, such as the European Digital Media Observatory (EDMO) hubs and independent fact-checking organizations,” the EU writes.
The guidance leaves room for platforms to ramp up resourcing around specific election events and demobilize teams once a vote is over.
It notes that the period during which additional risk mitigation measures may be needed is likely to vary, depending on the level of risk and on EU Member States' own rules around elections (which can differ). But the Commission recommends that platforms have mitigations deployed and operational at least one to six months before an electoral period, and keep them running for at least one month after the election.
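As a back-of-the-envelope illustration (the 30-day month approximation and the default lead and tail times are ours, not the Commission's), that window could be computed like so:

```python
from datetime import date, timedelta

def mitigation_window(election_day: date,
                      lead_months: int = 6,
                      tail_months: int = 1) -> tuple[date, date]:
    """Approximate mitigation window per the draft guidance: stand up
    mitigations one to six months before the electoral period and keep
    them running for at least one month after. Months are approximated
    as 30 days purely for illustration."""
    start = election_day - timedelta(days=30 * lead_months)
    end = election_day + timedelta(days=30 * tail_months)
    return start, end

# European Parliament elections open June 6 (per the article)
print(mitigation_window(date(2024, 6, 6)))
```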
Unsurprisingly, the most intensive mitigations are expected in the period before election day, to address risks such as disinformation targeting voting procedures.
Hate speech in the frame
The EU's general advice is for platforms to draw on other existing guidelines, including the Code of Practice on Disinformation and the Code of Conduct on Countering Illegal Hate Speech Online, to identify best practices for mitigation measures. But it also stipulates that platforms should ensure users can access official information about the electoral process, such as by deploying banners, links and pop-ups designed to steer users toward authoritative sources of information about elections.
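As a toy sketch of what that might look like on a search surface (the keyword list, country codes, and URLs below are placeholders, not anything the guidance specifies):

```python
# Placeholder portals; a real deployment would point at each Member
# State's official electoral authority.
OFFICIAL_INFO = {
    "DE": "https://elections.example/de",
    "FR": "https://elections.example/fr",
}

ELECTION_KEYWORDS = {"election", "vote", "ballot", "polling station"}

def election_banner(query: str, country_code: str) -> str | None:
    """Return banner text steering users to official election info
    when a query looks election-related; otherwise None."""
    if any(kw in query.lower() for kw in ELECTION_KEYWORDS):
        url = OFFICIAL_INFO.get(country_code)
        if url:
            return f"Looking for official voting information? See {url}"
    return None

print(election_banner("how do I vote in june", "DE"))
```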
“The Commission recommends that, when mitigating systemic risks for electoral integrity, due consideration is also given to the impact of measures to tackle illegal content, such as public incitement to violence and hatred, to the extent that such illegal content may inhibit or silence voices in the democratic debate, in particular those of vulnerable groups and minorities,” the Commission writes.
“For example, the spread of violent extremist and terrorist ideologies, or of racism, in contexts such as FIMI targeting the LGBTIQ+ community, as well as forms of gendered disinformation and gender-based violence, can undermine open, democratic dialogue and debate, and further exacerbate division and polarization in society. In this respect, the Code of Conduct on Countering Illegal Hate Speech Online can be used as inspiration when considering appropriate action.”
The guidance also recommends running media literacy campaigns and rolling out measures aimed at giving users more contextual information, such as fact-checking labels; prompts and nudges; clear indications of official accounts; clear and non-deceptive labeling of accounts operated by Member States, third countries, and entities controlled or sponsored by third countries; tools and information to help users assess the trustworthiness of information sources; tools for assessing provenance; and processes to counter abuse of any of these procedures and tools. It reads like a list of things Elon Musk has dismantled since taking over Twitter (now X).
Notably, Musk has himself been accused of spreading hate speech on the platform he owns. And as of this writing, X remains under investigation by the EU for a variety of suspected DSA violations, including some related to content moderation requirements.
Transparency to enhance accountability
When it comes to political advertising, the guidance points to incoming transparency rules in this area and advises platforms to take steps now to prepare for the legally binding regulation, for example by clearly labeling political ads, providing information about the sponsors behind these paid political messages, maintaining a public repository of political ads, and putting systems in place to verify the identity of political advertisers.
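For illustration, a single repository entry might capture something like the following (the fields are a plausible guess at a shape, not the incoming regulation's actual schema):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PoliticalAdRecord:
    """One public-repository entry for a political ad. Illustrative
    only; the incoming EU regulation defines its own required fields."""
    ad_id: str
    sponsor_name: str                # the entity paying for the ad
    sponsor_identity_verified: bool  # checked before the ad is served
    first_shown: date
    last_shown: date
    spend_eur: float
    targeting_criteria: list[str] = field(default_factory=list)

# The repository itself would be publicly queryable
repository: list[PoliticalAdRecord] = []
```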
Elsewhere, the guidance also sets out how platforms should handle election risks related to influencers.
The guidance further demands that platforms have systems in place to demonetize disinformation and to provide “stable and reliable” data access to third parties vetting and investigating election risks. Such data access for studying election risks should be provided free of charge, the advice stipulates.
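A schematic sketch of gating such access (the vetting flag, dataset names, and return shape are hypothetical; the DSA's actual researcher-access regime is set out in its Article 40):

```python
from dataclasses import dataclass

@dataclass
class Researcher:
    organization: str
    vetted: bool  # e.g. approved through a formal vetting process

def fetch_election_risk_data(researcher: Researcher, dataset: str) -> dict:
    """Serve election-risk datasets to vetted third parties, free of
    charge, per the guidance. Dataset names and return shape are
    illustrative."""
    if not researcher.vetted:
        raise PermissionError("access limited to vetted researchers")
    return {"dataset": dataset, "cost_eur": 0.0, "records": []}
```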
More broadly, the guidance encourages platforms to cooperate with oversight bodies, civil society experts, and each other when it comes to sharing information about election security risks, and to establish dedicated communication channels for risk reporting.
To handle high-risk incidents, the advice recommends that platforms establish internal incident response mechanisms involving senior leadership, and map the other relevant stakeholders within the organization, to drive accountability for their responses and avoid the risk of buck-passing.
After an election, the EU suggests platforms review how they performed and publish the results, factoring in third-party assessments, rather than simply being left to mark their own homework and put a PR gloss on ongoing platform risks, as has tended to be their preference.
The election security guidelines themselves are not mandatory, but if platforms opt for an approach other than those recommended for tackling threats in this area, the Commission says they must be able to demonstrate that their alternative meets the bloc's standard.
Failing that, they risk being found in violation of the DSA, which carries fines of up to 6% of global annual turnover. So there is an incentive for platforms to get with the bloc's program on strengthening resources to tackle political disinformation and other information risks to elections, as a way to shrink their regulatory risk. But they will still need to actually implement the advice.
The EU guidance also contains more specific recommendations for the next European Parliament elections, which will be held from 6 to 9 June.
One final note: the election security guidelines remain in draft form at this stage. However, the Commission said it expects to formally adopt them in April, once all language versions are available.