Microsoft and OpenAI have announced the creation of a $2 million fund to combat the growing risk that AI and deepfakes will be used to “deceive voters and undermine democracy.”
With a record 2 billion people heading to the polls across some 50 countries this year, there are concerns about the impact AI will have on voters, especially those in “vulnerable communities” who may be more susceptible to disinformation.
The rise of generative AI, including wildly popular chatbots such as ChatGPT, has created a new and significant threat landscape involving AI-generated “deepfakes” aimed at spreading disinformation. It doesn't help that these tools are now widely available, allowing anyone to create fake videos, photos, or audio of prominent political figures.
As recently as Monday, India's Election Commission asked political parties not to use deepfakes and similar disinformation in online campaigning before and after elections.
Against this backdrop, major technology companies, including Microsoft and OpenAI, have signed a voluntary pledge to combat such risks, committing to build a common framework for responding to deepfakes created deliberately to mislead voters.
Elsewhere, major AI companies are beginning to address these risks by building restrictions into their software. For example, Google has announced that it will no longer allow its Gemini AI chatbot to answer election-related questions, and Facebook's parent company Meta is likewise restricting election-related responses from its AI chatbot.
Earlier today, OpenAI announced a new deepfake detector for disinformation researchers, designed to help identify fake content generated by the company's DALL-E image generator. The company has also joined the steering committee of the Coalition for Content Provenance and Authenticity (C2PA), an industry body whose members already include Adobe, Microsoft, Google, and Intel.
The new “Societal Resilience Fund” forms part of this broader push for “responsible” AI. According to a blog post published by the companies today, Microsoft and OpenAI will use the fund to further AI education and “improve literacy” among voters and vulnerable communities. This includes issuing grants to a small number of organizations: Older Adults Technology Services (OATS), the Coalition for Content Provenance and Authenticity (C2PA), the International Institute for Democracy and Electoral Assistance (International IDEA), and the Partnership on AI (PAI).
According to Microsoft, these grants are aimed at increasing understanding of AI and its capabilities across society. For example, OATS reportedly plans to use its grant for a training program covering “fundamental aspects of AI” for people over the age of 50 in the United States.
“The launch of the Societal Resilience Fund is just one step in Microsoft and OpenAI's commitment to addressing the challenges and needs in the AI literacy and education space,” said Teresa Hutson, corporate VP of technology and corporate responsibility at Microsoft, in a blog post. “Microsoft and OpenAI remain committed to this work and will continue to collaborate with organizations and initiatives that share our goals and values.”