Earlier this year, there was widespread concern that generative AI could be used to interfere in elections around the world and spread propaganda and disinformation. Now, at the end of the year, Meta says the technology had only a limited impact across Facebook, Instagram, and Threads, and claims those fears largely failed to materialize, at least on its platforms.
The company said its findings were based on key elections in the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico and Brazil.
“While there were confirmed or suspected uses of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to mitigate the risks around generative AI content,” the company said in a blog post. “During the major election cycles mentioned above, AI content ratings related to elections, politics, and social topics accounted for less than 1% of all fact-checked misinformation.”
Meta also says its Imagine AI image generator rejected 590,000 requests to create images of President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden in the month leading up to Election Day, in order to prevent people from creating election-related deepfakes.
The company also found that coordinated networks of accounts seeking to spread propaganda and disinformation gained only incremental productivity and content-generation benefits from using generative AI.
Meta says the use of AI did not hinder its ability to disrupt these covert influence campaigns, because it focuses on the behavior of the accounts rather than the content they post, regardless of whether that content was created with AI.
The tech giant also revealed that it took down around 20 new covert influence operations around the world to prevent foreign interference. According to Meta, most of the networks it disrupted had no real audience, and some used fake likes and followers to appear more popular than they actually were.
Meta went on to criticize other platforms, noting that false videos about the US election linked to Russia-based influence operations were frequently posted on X and Telegram.
“We will continue to review our policies and announce any changes in the coming months as we reflect on what we have learned from this remarkable year,” Meta wrote.