Disinformation is spreading at an alarming pace, thanks in large part to openly available AI tools. In a recent survey, 85% of people said they were worried about online disinformation, and the World Economic Forum named AI-driven disinformation the world's biggest risk.
High-profile examples of this year's disinformation campaigns include a bot network on X targeting U.S. federal elections and a voicemail deepfake of President Joe Biden that discouraged certain residents from voting. Overseas, candidates in South Asian countries have flooded the web with fake videos, images, and news articles. A deepfake of London Mayor Sadiq Khan even incited violence at a pro-Palestinian march.
So what can you do?
AI can help fight disinformation as well as create it, argues Pamela San Martín, co-chair of Meta's Oversight Board. Established in 2020, the board is a semi-autonomous body that reviews complaints about Meta's moderation decisions and issues recommendations on its content policies.
San Martín acknowledges that AI isn't perfect. For example, Meta's AI products have mistakenly flagged posts from the Auschwitz Museum as offensive and misclassified independent news sites as spam. But she is confident it will improve over time.
“Most social media content is moderated by automation, which uses AI either to flag certain content for human review or to take an ‘action’ on it: placing a warning screen over it, removing it, down-ranking it in the algorithms, and so on,” San Martín said last week during a panel on AI and disinformation at TechCrunch Disrupt 2024. “[The AI moderation models] can be made better, and if they are, they can become really useful in addressing [disinformation].”
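To make the flag-then-act pipeline San Martín describes concrete, here is a minimal sketch in Python. It assumes a toy keyword-based classifier and made-up thresholds; nothing here reflects Meta's actual models or policies. A high-confidence score triggers an automated action, a mid-confidence score routes the post to human review, and anything else passes through.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    ALLOW = auto()
    WARNING_SCREEN = auto()   # show an interstitial before the content
    DOWNRANK = auto()         # demote in the ranking algorithms
    REMOVE = auto()           # take the content down
    HUMAN_REVIEW = auto()     # flag for a human moderator

@dataclass
class Post:
    post_id: str
    text: str

def classify(post: Post) -> float:
    """Stand-in for an ML model scoring how likely a post violates policy.
    A real system would call a trained classifier; this keyword check is
    purely illustrative."""
    suspicious_terms = ("miracle cure", "rigged", "they don't want you to know")
    hits = sum(term in post.text.lower() for term in suspicious_terms)
    return min(1.0, 0.3 * hits)

def moderate(post: Post, act_threshold: float = 0.8,
             review_threshold: float = 0.4) -> Action:
    """Flag-then-act pipeline: high-confidence scores trigger an automated
    action; mid-confidence scores are routed to human review."""
    score = classify(post)
    if score >= act_threshold:
        return Action.REMOVE  # or WARNING_SCREEN / DOWNRANK, per policy
    if score >= review_threshold:
        return Action.HUMAN_REVIEW
    return Action.ALLOW

print(moderate(Post("p1", "This miracle cure is what they don't want you to know")))
```

In a production system the classifier would be a trained model and the thresholds tuned per policy area; the point is the split San Martín outlines between flagging for humans and acting automatically.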
Of course, AI has lowered the cost of sowing disinformation, so even upgraded moderation models may not be able to keep up.
Fellow panelist Imran Ahmed, CEO of the nonprofit Center for Countering Digital Hate, also noted that social feeds amplifying misinformation exacerbate its harms. Platforms like X effectively incentivize disinformation through revenue-sharing programs; the BBC reports that X is paying users thousands of dollars for well-performing posts that include conspiracy theories and AI-generated images.
“You have a perpetual bullshit machine,” Ahmed said. “That's quite worrying. I'm not sure that's something we should be producing in democracies that depend on some degree of truth.”
San Martín argued that the Oversight Board has driven some changes here, including pushing Meta to label misleading AI-generated content. The board has also recommended that Meta make it easier for users to report non-consensual sexual deepfake imagery, a growing problem.
But both Ahmed and fellow panelist Brandie Nonnecke, a professor at the University of California, Berkeley who studies the intersection of emerging technology and human rights, pushed back on the idea that oversight boards and self-regulation alone can stem the flow of disinformation.
“Fundamentally, self-regulation is not regulation, because the Oversight Board itself can't answer the five fundamental questions you should always ask of anyone who holds power,” Ahmed said. “What power do you have, who gave you that power, in whose interest do you wield it, to whom are you accountable, and how do we get rid of you if you're not doing a good job? If the answer to every one of those questions is [Meta], then you're not any kind of check or balance. You're merely a bit of PR spin.”
Ahmed's and Nonnecke's views aren't extreme ones. In a June analysis, the Brennan Center at New York University argued that the Oversight Board can influence only a fraction of Meta's decisions, because the company controls whether policy changes are enacted and doesn't grant access to its algorithms.
Meta has also privately threatened to pull support for the Oversight Board, underscoring the precarious nature of the board's operations. The board is funded by an irrevocable trust, to which Meta is the sole contributor.
Rather than self-regulation (which platforms like X are unlikely to embrace in the first place), Ahmed and Nonnecke see regulation as the solution to the disinformation dilemma. Nonnecke believes product liability tort, a doctrine that holds companies liable for injuries and damages caused by their “defective” products, is one way to hold platforms accountable.
Nonnecke also endorsed the idea of watermarking AI content so that it's easier to tell which content is AI-generated. (Watermarking, of course, comes with its own challenges.) And she suggested that payment providers could block purchases of disinformation of a sexual nature, and that web hosts could make it harder for bad actors to sign up for plans.
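As an illustration of the watermarking idea only, and not any production scheme such as C2PA or SynthID, here is a naive least-significant-bit sketch in Python using Pillow. The MARK tag and the scheme itself are hypothetical:

```python
from PIL import Image

MARK = "AI-GENERATED"  # hypothetical provenance tag

def embed_watermark(img: Image.Image, mark: str = MARK) -> Image.Image:
    """Hide `mark` in the least-significant bits of the red channel."""
    bits = "".join(f"{byte:08b}" for byte in mark.encode())
    out = img.convert("RGB").copy()
    px = out.load()
    for i, bit in enumerate(bits):
        x, y = i % out.width, i // out.width
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)
    return out

def read_watermark(img: Image.Image, length: int = len(MARK)) -> str:
    """Recover `length` bytes from the red channel's least-significant bits."""
    px = img.convert("RGB").load()
    bits = ""
    for i in range(length * 8):
        x, y = i % img.width, i // img.width
        bits += str(px[x, y][0] & 1)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode()

# Round-trip demo on a blank image
marked = embed_watermark(Image.new("RGB", (64, 64), "white"))
print(read_watermark(marked))  # -> "AI-GENERATED"
```

A mark like this is erased by re-encoding, resizing, or screenshots, which is exactly the kind of challenge the parenthetical above alludes to; real provenance schemes aim to survive such transformations.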
Policymakers in the United States hoping to rein in the industry have faced setbacks recently. In October, a federal judge blocked a California law that would have forced people who post AI deepfakes to take them down or face monetary penalties.
But Ahmed believes there is reason for optimism. He cited recent moves by AI companies like OpenAI to watermark AI-generated images, as well as content moderation laws like the UK's Online Safety Act.
“It's inevitable that there will be a need to regulate anything that can cause such harm to our democracy, to our health, to our societies, and to us as individuals,” Ahmed said. “I think there are many reasons to be hopeful.”