Many feared that the 2024 election would be influenced, and perhaps decided, by AI-generated disinformation. While there was some to be found, it was far less than anticipated. But don't let that fool you: the threat of disinformation is real. You just aren't the target.
At least, that's according to Oren Etzioni, a longtime AI researcher whose nonprofit, TrueMedia, keeps its finger on the pulse of generated disinformation.
“For lack of a better word, deepfakes are diverse,” he told TechCrunch in a recent interview. “Each serves its own purpose, and some we're more aware of than others. Let me put it this way: for everything you actually hear about, there are a hundred that were never intended for you. Maybe a thousand. What gets covered in the mainstream press is really just the tip of the iceberg.”
Most people, and Americans more than most, tend to assume that what they're experiencing is the same as what everyone else is experiencing. That isn't true, for many reasons. But in the case of disinformation campaigns, America is actually a hard target, given a relatively well-informed public, readily available factual information, and news outlets that (despite noise to the contrary) are still mostly trusted.
We tend to think of deepfakes as videos of Taylor Swift doing or saying something she wouldn't. But the truly dangerous deepfakes aren't those of celebrities or politicians; they're deepfakes of situations and people that can't be so easily identified and counteracted.
“The big thing that people don't grasp is the diversity. I saw one today of Iranian planes flying over Israel,” he said, describing something that didn't actually happen but that can't easily be disproven by someone who isn't on the ground there. “You don't see it because you're not on the Telegram channel or in a particular WhatsApp group. But millions of users are seeing it.”
TrueMedia offers a free service (via the web and an API) for identifying images, videos, audio, and other items as fake or real. It's not a simple task, and it can't be fully automated, but the team is slowly building a foundation of ground-truth material to feed back into the process.
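The article doesn't document TrueMedia's actual API, so the snippet below is only a hypothetical sketch of what querying such a verification service could look like. The endpoint URL, request fields, and response shape are all assumptions made for illustration, not TrueMedia's real interface.

```python
import requests

# Hypothetical endpoint -- TrueMedia's real API surface isn't documented
# in this article, so every name below is an illustrative assumption.
VERIFY_URL = "https://api.example-verifier.org/v1/verify"

def check_media(media_url: str, api_key: str) -> dict:
    """Submit a link to a piece of media and return the service's verdict."""
    resp = requests.post(
        VERIFY_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"url": media_url},  # e.g. a link to an image, video, or audio clip
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response shape: {"verdict": "fake" | "real" | "uncertain",
    #                          "confidence": 0.0-1.0}
    return resp.json()

if __name__ == "__main__":
    result = check_media("https://example.com/suspicious-clip.mp4", "YOUR_KEY")
    print(result["verdict"], result["confidence"])
```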
“Our primary mission is detection. The academic benchmarks [for evaluating fake media] were plowed under long ago,” Etzioni explained. “We train on media that people around the world have uploaded. We look at what the different vendors say about it and what our own models say about it, and we draw a conclusion. As a follow-up, a forensic team does a deeper, more extensive, and more time-consuming investigation, not on every item but on a significant fraction of them, so we have a ground truth. We don't assign a truth value unless we're quite confident; we can still be wrong, but we're substantially better than any other single solution.”
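The process Etzioni describes, combining verdicts from multiple vendors and in-house models and assigning a truth value only above a confidence bar, resembles a weighted ensemble. Here is a minimal sketch of that idea; the detector names, weights, and threshold are invented for illustration and are not TrueMedia's actual method.

```python
# Minimal sketch of a verdict-aggregation ensemble: several detectors each
# return a fake-probability, weighted by how much we trust them, and a
# verdict is issued only when confidence is high. All numbers are invented.

def aggregate(verdicts: dict[str, float], weights: dict[str, float],
              threshold: float = 0.9) -> str:
    """verdicts maps detector name -> P(fake); weights maps name -> trust."""
    total_weight = sum(weights[name] for name in verdicts)
    p_fake = sum(p * weights[name] for name, p in verdicts.items()) / total_weight

    if p_fake >= threshold:
        return "fake"
    if p_fake <= 1 - threshold:
        return "real"
    # Mirror the "don't assign a truth value unless we're quite sure" policy:
    # uncertain items would go to a forensic team for deeper review.
    return "needs forensic review"

verdicts = {"vendor_a": 0.97, "vendor_b": 0.88, "in_house_model": 0.95}
weights = {"vendor_a": 1.0, "vendor_b": 0.6, "in_house_model": 1.2}
print(aggregate(verdicts, weights))  # -> "fake"
```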
That detection work is in service of a larger goal: quantifying the problem, in three key ways Etzioni outlined.
How much is out there? “We don't know. There's no Google for this. We see various indications that it's widespread, but it's extremely difficult, perhaps impossible, to measure accurately.”

How many people see it? “This one is easier, because when Elon Musk shares something, you see ‘10 million people saw it.’ So the eyeball count is easily in the hundreds of millions. I see items every week that have been viewed millions of times.”

What impact did it have? “This is maybe the most important one. How many voters didn't go to the polls because of the fake Biden call? We're just not set up to measure that. The Slovakia one [a disinfo campaign targeting a presidential candidate there in February] came at the last minute. That may well have affected that election.”
All of this is a work in progress, he stressed, and some of it is just beginning. But you have to start somewhere.
“Let me make a bold prediction: over the next four years, we will become much more adept at measuring this,” he said. “Because we have to. Right now we're just trying to cope.”
As for some of the industry and technical attempts to make generated media easier to identify, such as watermarking images and text, they're harmless and perhaps even helpful, he said, but they don't come close to solving the problem.
“The way I'd put it is, don't bring a watermark to a gunfight.” These voluntary standards are useful in collaborative ecosystems where everyone has a reason to use them, but they offer little protection against malicious actors who want to avoid detection.
It all sounds pretty dire, and it is. But the most consequential election in recent history took place without much in the way of AI shenanigans. That's not because generative disinformation isn't commonplace, but because its purveyors didn't feel it was necessary to take part. Whether that scares you more or less than the alternative is up to you.