OpenAI said in a blog post on Friday that it had banned a group of ChatGPT accounts linked to an Iranian influence operation that was generating content about the U.S. presidential election. The operation produced AI-generated articles and social media posts that did not appear to reach a large audience, according to the company.
This is not the first time OpenAI has banned accounts with ties to state-affiliated entities using ChatGPT for malicious purposes: in May, the company blocked five campaigns that used ChatGPT to manipulate public opinion.
These episodes are reminiscent of state actors' attempts to use social media platforms like Facebook and Twitter to influence past election cycles. Now, similar groups (or the same groups) are using generative AI to flood social channels with disinformation. Like social media companies, OpenAI appears to be adopting a whack-a-mole approach, quickly banning accounts associated with these efforts as soon as it finds them.
OpenAI said its investigation into the group of accounts benefited from a Microsoft Threat Intelligence report published last week, which identified the group (which OpenAI calls Storm-2035) as part of a broader campaign to influence U.S. elections that has been operating since 2020.
According to Microsoft, Storm-2035 is an Iranian network with multiple sites designed to resemble news outlets that “actively engages with US voter groups on both ends of the political spectrum, pushing polarizing messages on issues such as US presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.” As with other operations of this kind, its strategy is not necessarily to promote specific policies, but to stoke dissent and conflict.
OpenAI identified five Storm-2035 website fronts with convincing domain names, such as “evenpolitics.com,” that presented themselves as both progressive and conservative news outlets. The group used ChatGPT to draft several long-form articles, including one claiming that “X is censoring Trump's tweets” — something Elon Musk's platform has never done (on the contrary, Musk has encouraged former President Donald Trump to get more involved on X).
Examples of fake news outlets featuring ChatGPT-generated content. Image credit: OpenAI
On social media, OpenAI identified 12 X accounts and one Instagram account controlled by the operation. The company said ChatGPT was used to rewrite various political comments that were then posted on these platforms. One of these tweets falsely claimed Kamala Harris was attributing “increasing immigration costs” to climate change, followed by “#DumpKamala.”
OpenAI said it saw no evidence that Storm-2035's articles were shared widely, noting that most of its social media posts received few to no likes, shares or comments. That is typical for such efforts, which can be spun up quickly and cheaply using AI tools like ChatGPT. Expect to see more notices like this as the election approaches and online partisan contention intensifies.