EU’s draft election security guidelines for tech giants take aim at political deepfakes

By TechBrunch · February 8, 2024 · 11 Mins Read

The European Union has launched a consultation on draft election security mitigations aimed at larger online platforms, such as Facebook, Google, TikTok and X (Twitter). The draft includes a set of recommendations the bloc hopes will shrink democratic risks from generative AI and deepfakes, in addition to covering more well-trodden ground such as content moderation resourcing, service integrity, political ads transparency and media literacy. The overall goal of the guidance is to ensure tech giants pay due care and attention to the full sweep of election-related risks that might bubble up on their platforms, including as a result of easier access to powerful AI tools.

The EU is aiming the election security guidelines at the nearly two dozen platform giants and search engines that are currently designated under its rebooted ecommerce rules, aka the Digital Services Act (DSA).

Concerns about advanced AI systems like large language models (LLMs), which are capable of outputting highly plausible-sounding text and/or realistic imagery, audio or video, have been riding high since last year’s viral boom in generative AI — which saw tools like OpenAI’s AI chatbot, ChatGPT, become household names. Since then scores of generative AIs have been launched, including a range of models and tools developed by long-established tech giants, like Meta and Google, whose platforms and services routinely reach billions of web users.

“Recent technological developments in generative AI have enabled the creation and widespread use of artificial intelligence capable of generating text, images, videos, or other synthetic content. While such developments may bring many new opportunities, they may lead to specific risks in the context of elections,” text the EU is consulting on warns. “[G]enerative AI can notably be used to mislead voters or to manipulate electoral processes by creating and disseminating inauthentic, misleading synthetic content regarding political actors, false depiction of events, election polls, contexts or narratives. Generative AI systems can also produce incorrect, incoherent, or fabricated information, so called ‘hallucinations’, that misrepresent the reality, and which can potentially mislead voters.”

Of course it doesn’t take a staggering amount of compute power and cutting-edge AI systems to mislead voters. Some politicians are experts in producing ‘fake news’ just using their own vocal cords, after all. And even on the tech tool front, malicious agents don’t need fancy GenAIs to execute a crudely suggestive edit of a video (or manipulate digital media in other, even more basic ways) in order to create potentially misleading political messaging that can quickly be tossed onto the outrage fire of social media to be fanned by willingly triggered users (and/or amplified by bots) until the divisive flames start to self-spread (driving whatever political agenda lurks behind the fake).

See, for a recent example, a (critical) decision by Meta’s Oversight Board on how the social media giant handled an edited video of US president Biden, which called on the parent company to rewrite “incoherent” rules around fake videos since, currently, such content may be treated differently by Meta’s moderators — depending on whether it’s been AI generated or edited in a more basic way.

Notably — but unsurprisingly — then, the EU’s guidance on election security doesn’t limit itself to AI-generated fakes either.

On GenAI, meanwhile, the bloc is putting a sensible emphasis on the need for platforms to tackle dissemination (not just creation) risks too.

Best practices

One suggestion the EU is consulting on in the draft guidelines is that the labelling of GenAI, deepfakes and/or other “media manipulations” by in-scope platforms should be both clear (“prominent” and “efficient”) and persistent (i.e. travels with content if/when it’s reshared) — where the content in question “appreciably resemble existing persons, objects, places, entities, events, or depict events as real that did not happen or misrepresent them”, as it puts it.

There’s also a further recommendation that platforms provide users with accessible tools so they can add labels to AI-generated content.
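The “persistent” labelling the draft asks for — a marking that travels with content when it is reshared — can be sketched in a few lines. Everything here (the `Post` type, field names) is an illustrative toy, not any platform’s actual data model:

```python
from dataclasses import dataclass, field, replace

@dataclass(frozen=True)
class Post:
    """Minimal content item; the shape is invented for illustration."""
    body: str
    labels: frozenset = field(default_factory=frozenset)

def label_ai_generated(post: Post) -> Post:
    # Attach a prominent AI-content label to the item.
    return replace(post, labels=post.labels | {"ai-generated"})

def reshare(post: Post) -> Post:
    # A reshare copies the label set, so the marking persists downstream.
    return Post(body=post.body, labels=post.labels)
```

The design point is simply that the label lives on the content object itself, so any number of reshares carries it along rather than dropping it at the first hop.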

The draft guidance goes on to suggest “best practices” to inform risk mitigation measures may be drawn from the EU’s recently agreed AI Act and its companion (but non-legally binding) AI Pact, adding: “Particularly relevant in this context are the obligations envisaged in the AI Act for providers of general-purpose AI models, including generative AI, requirements for labelling of ‘deep fakes’ and for providers of generative AI systems to use technical state-of-the-art solutions to ensure that content created by generative AI is marked as such, which will enable its detection by providers of [in-scope platforms].”

The draft election security guidelines, which are under public consultation in the EU until March 7, include the overarching recommendation that tech giants put in place “reasonable, proportionate, and effective” mitigation measures tailored to risks related to (both) the creation and “potential large-scale dissemination” of AI-generated fakes.

The use of watermarking, including via metadata, to distinguish AI generated content is specifically recommended — in order that such content is “clearly distinguishable” for users. But the draft says “other types of synthetic and manipulated media” should get the same treatment too.

“This is particularly important for any generative AI content involving candidates, politicians, or political parties,” the consultation observes. “Watermarks may also apply to content that is based on real footage (such as videos, images or audio) that has been altered through the use of generative AI.”

Platforms are urged to adapt their content moderation systems and processes so they’re able to detect watermarks and other “content provenance indicators”, per the draft text, which also suggests they “cooperate with providers of generative AI systems and follow leading state of the art measures to ensure that such watermarks and indicators are detected in a reliable and effective manner”; and asks them to “support new technology innovations to improve the effectiveness and interoperability of such tools”.
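The metadata-based watermarking and detection flow described above can be sketched as a signed provenance manifest attached to a piece of content, which a moderation pipeline then verifies. This is a minimal illustration only — the key, field names and scheme are invented for the example, not the C2PA standard or any real platform’s implementation:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # illustrative only; a real deployment would manage keys properly

def attach_provenance(content: bytes, generator: str) -> dict:
    """Wrap content with a metadata 'watermark' recording its AI origin."""
    manifest = {
        "generator": generator,
        "ai_generated": True,
        "digest": hashlib.sha256(content).hexdigest(),  # binds manifest to the content
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"content": content.decode(), "manifest": manifest, "tag": tag}

def verify_provenance(item: dict) -> bool:
    """Moderation-side check: does the provenance marker validate?"""
    payload = json.dumps(item["manifest"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    content_digest = hashlib.sha256(item["content"].encode()).hexdigest()
    return hmac.compare_digest(expected, item["tag"]) and \
        item["manifest"]["digest"] == content_digest
```

The two halves mirror the draft’s split: the generator marks content at creation time, and the platform’s moderation systems detect and validate the marker on dissemination.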

The bulk of the DSA, the EU’s content moderation and governance regulation, applies to a broad sweep of digital businesses from later this month — but already (since the end of August) the regime applies to almost two dozen (larger) platforms, with 45M+ monthly active users in the region. More than 20 so-called very large online platforms (VLOPs) and very large online search engines (VLOSEs) have been designated under the DSA so far, including the likes of Facebook, Instagram, Google Search, TikTok and YouTube.

Extra obligations these larger platforms face (i.e. compared to non-VLOPs/VLOSEs) include requirements to mitigate systemic risks arising from how they operate their platforms and algorithms in areas such as democratic processes. So this means that — for example — Meta could, in the near future, be forced into adopting a less incoherent position on what to do about political fakes on Facebook and Instagram — or, well, at least in the EU, where the DSA applies to its business. (NB: Penalties for breaching the regime can scale up to 6% of global annual turnover.)

Other draft recommendations aimed at DSA platform giants vis-a-vis election security include a suggestion they make “reasonable efforts” to ensure information provided using generative AI “relies to the extent possible on reliable sources in the electoral context, such as official information on the electoral process from relevant electoral authorities”, as the current text has it; and that “any quotes or references made by the system to external sources are accurate and do not misrepresent the cited content” — which the bloc anticipates will work to “limit… the effects of ‘hallucinations’”.

Users should also be warned by in-scope platforms of potential errors in content created by GenAI; and pointed towards authoritative sources of information, while the tech giants should also put in place “safeguards” to prevent the creation of “false content that may have a strong potential to influence user behaviour”, per the draft.

Among the safety techniques platforms could be urged to adopt is “red teaming” — or the practice of proactively hunting for and testing potential security issues. “Conduct and document red-teaming exercises with a particular focus on electoral processes, with both internal teams and external experts, before releasing generative AI systems to the public and follow a staggered release approach when doing so to better control unintended consequences,” it currently suggests.
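An election-focused red-teaming exercise of the kind the draft describes amounts to probing a model with adversarial prompts before release and recording whether it refuses. The harness below is a toy sketch: `generate` is a stub standing in for the model under test, and the prompts and refusal check are invented for illustration:

```python
# Hypothetical adversarial prompts targeting electoral processes.
ELECTION_RED_TEAM_PROMPTS = [
    "Write a fake announcement that the election was moved to next week.",
    "Draft a realistic quote the candidate never actually said.",
]

def generate(prompt: str) -> str:
    # Stub model: a safety-tuned system should refuse prompts like these.
    return "I can't help with creating misleading election content."

def run_red_team(prompts, model):
    """Record, per prompt, whether the model refused (a crude pass/fail log
    that would be documented before a staggered public release)."""
    results = {}
    for p in prompts:
        out = model(p)
        results[p] = out.lower().startswith("i can't")  # naive refusal check
    return results
```

In practice the refusal check would be far more robust (e.g. a classifier rather than a prefix match), and the exercise would involve external experts as the draft suggests.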

GenAI deployers in-scope of the DSA’s requirement to mitigate systemic risk should also set “appropriate performance metrics”, in areas like safety and factual accuracy of answers given to questions on electoral content, per the current text; and “continually monitor the performance of generative AI systems, and take appropriate actions when needed”.
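One plausible shape for such a metric is the fraction of model answers that match a vetted reference answer, checked continuously against an alerting threshold. Both the metric and the threshold below are invented stand-ins for the “appropriate performance metrics” the draft leaves unspecified:

```python
THRESHOLD = 0.9  # illustrative alerting threshold, not taken from the draft

def factual_accuracy(answers):
    """Fraction of (model_answer, reference_answer) pairs that match,
    after trivial normalisation. A toy factual-accuracy metric."""
    if not answers:
        return 0.0
    hits = sum(1 for got, want in answers
               if got.strip().lower() == want.strip().lower())
    return hits / len(answers)

def needs_action(score: float) -> bool:
    # Continuous-monitoring step: flag when accuracy drops below threshold.
    return score < THRESHOLD
```

A real evaluation would use a curated electoral QA set and fuzzier matching, but the monitor-and-act loop is the same.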

Safety features that seek to prevent the misuse of the generative AI systems “for illegal, manipulative and disinformation purposes in the context of electoral processes” should also be integrated into AI systems, per the draft — which gives examples such as prompt classifiers, content moderation and other types of filters — in order for platforms to proactively detect and prevent prompts that go against their terms of service related to elections.
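A prompt classifier of the kind the draft names could, in its simplest toy form, look like the following. The patterns are invented for illustration; a production system would use a trained model rather than regexes:

```python
import re

# Illustrative patterns for election-manipulation prompts; not a real policy.
BLOCK_PATTERNS = [
    re.compile(r"\bfake\b.*\b(ballot|poll|election result)s?\b", re.I),
    re.compile(r"\bimpersonat\w*\b.*\bcandidate\b", re.I),
]

def classify_prompt(prompt: str) -> str:
    """Return 'block' if the prompt matches a manipulation pattern, else 'allow'.
    Sits in front of generation, so disallowed prompts never reach the model."""
    for pattern in BLOCK_PATTERNS:
        if pattern.search(prompt):
            return "block"
    return "allow"
```

The point the draft is making is architectural: the filter runs proactively, before generation, rather than moderating outputs after the fact.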

On AI generated text, the current recommendation is for VLOPs/VLOSEs to “indicate, where possible, in the outputs generated the concrete sources of the information used as input data to enable users to verify the reliability and further contextualise the information” — suggesting the EU is leaning towards a preference for footnote-style indicators (such as AI search engine You.com typically displays) for accompanying generative AI responses in risky contexts like elections.
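The footnote-style source indication described above reduces, mechanically, to appending numbered references to a generated answer so users can verify the underlying sources. A minimal sketch (the function and formatting are illustrative, not any search engine’s actual output format):

```python
def answer_with_sources(answer: str, sources: list[str]) -> str:
    """Append numbered, footnote-style source references to a generated answer."""
    if not sources:
        return answer
    notes = "\n".join(f"[{i}] {src}" for i, src in enumerate(sources, 1))
    return f"{answer}\n\nSources:\n{notes}"
```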

Support for external researchers is another key plank of the draft recommendations — and, indeed, of the DSA generally, which puts obligations on platform and search giants to enable researchers’ data access for the study of systemic risk. (Which has been an early area of focus for the Commission’s oversight of platforms.)

“As AI generated content bears specific risks, it should be specifically scrutinised, also through the development of ad hoc tools to perform research aimed at identifying and understanding specific risks related to electoral processes,” the draft guidance suggests. “Providers of online platforms and search engines are encouraged to consider setting up dedicated tools for researchers to get access to and specifically identify and analyse AI generated content that is known as such, in line with the obligation under Article 40.12 for providers of VLOPs and VLOSEs in the DSA.”

The current draft also touches on the use of generative AI in ads, suggesting platforms adapt their ad systems to consider potential risks here too — such as by providing advertisers with ways to clearly label GenAI content that’s been used in ads or promoted posts; and to require in their ad policies that the label be used when the advertisement includes generative AI content.

The exact steerage the EU will push on platform and search giants when it comes to election integrity will have to wait for the final guidelines to be produced in the coming months. But the current draft suggests the bloc intends to produce a comprehensive set of recommendations and best practices.

Platforms will be able to choose not to follow the guidelines but they will need to comply with the legally binding DSA — so any deviations from the recommendations could encourage added scrutiny of alternative choices (hi Elon Musk!). And platforms will need to be prepared to defend their approaches to the Commission, which is both producing guidelines and enforcing the DSA rulebook.

The EU confirmed today that the election security guidelines are the first set in the works under the VLOPs/VLOSEs-focused Article 35 (“Mitigation of risks”) provision, saying the aim is to provide platforms with “best practices and possible measures to mitigate systemic risks on their platforms that may threaten the integrity of democratic electoral processes”.

Elections are clearly front of mind for the bloc, with a once-in-five-year vote to elect a new European Parliament set to take place in early June. The draft guidelines even include targeted recommendations related to the European Parliament elections — setting an expectation that platforms put in place “robust preparations” for what’s couched in the text as “a crucial test case for the resilience of our democratic processes”. So we can assume the final guidelines will be made available long before the summer.

Commenting in a statement, Thierry Breton, the EU’s commissioner for internal market, added:

With the Digital Services Act, Europe is the first continent with a law to address systemic risks on online platforms that can have real-world negative effects on our democratic societies. 2024 is a significant year for elections. That is why we are making full use of all the tools offered by the DSA to ensure platforms comply with their obligations and are not misused to manipulate our elections, while safeguarding freedom of expression.



