Tech companies are pledging to fight election-related deepfakes as policymakers increase pressure.
Today at the Munich Security Conference, vendors including Microsoft, Meta, Google, Amazon, Adobe, and IBM signed an agreement signaling their intention to adopt a common framework for responding to AI-generated deepfakes intended to mislead voters. Thirteen other companies joined them in signing the accord, including AI startups OpenAI, Anthropic, Inflection AI, Eleven Labs, and Stability AI, social media platforms X (formerly Twitter), TikTok, and Snap, chipmaker Arm, and security firms McAfee and TrendMicro.
The signatories commit to sharing best practices with one another for detecting and labeling misleading political deepfakes as they are created and distributed on their platforms, and to delivering a “quick and appropriate response” when such deepfakes begin to spread. The companies added that they will pay special attention to context when responding to deepfakes, aiming to “[safeguard] educational, documentary, artistic, satirical, and political expression” while remaining transparent with users about their policies on deceptive election content.
The accord is essentially toothless, and some critics may say it amounts to little more than virtue signaling; its measures are voluntary. But the fanfare shows that the tech industry is wary of ending up in regulatory crosshairs over elections in a year when 49% of the world's population will head to the polls in national elections.
“The tech industry alone cannot protect elections from this new type of election fraud,” Microsoft Vice Chair and President Brad Smith said in a press release. “As we look to the future, it seems to those of us at Microsoft that we'll also need new forms of multi-stakeholder action… It is clear that the protection of elections [will require] that we all work together.”
There is no federal law in the United States prohibiting deepfakes, election-related or otherwise. However, 10 states have enacted laws criminalizing deepfakes, with Minnesota becoming the first state to target deepfakes used for political campaigns.
Elsewhere, federal agencies are taking what enforcement action they can to combat the spread of deepfakes.
This week, the FTC announced that it is considering amending an existing rule that prohibits the impersonation of businesses and government agencies so that it covers all consumers, including politicians. And the FCC moved to make AI-voiced robocalls illegal by reinterpreting a rule that prohibits artificial prerecorded voice message spam.
In the European Union, the bloc's AI Act would require all AI-generated content to be clearly labeled as such. The EU is also using its Digital Services Act to force the tech industry to curb deepfakes in various forms.
Meanwhile, deepfakes continue to proliferate. According to data from deepfake detection company Clarity, the number of deepfakes created has increased by 900% year over year.
Last month, an AI robocall imitating US President Joe Biden's voice tried to dissuade people from voting in the New Hampshire primary. And in November, just days before Slovakia's elections, an AI-generated audio recording impersonated a liberal candidate discussing plans to raise beer prices and rig the election.
In a recent YouGov poll, 85% of Americans said they were very or somewhat concerned about the spread of misleading video and audio deepfakes. A separate poll from The Associated Press-NORC Center for Public Affairs Research found that nearly 60% of adults think AI tools will increase the spread of false and misleading information during the 2024 U.S. election cycle.