After a historic presidential debate filled with discussion about eating pets, Taylor Swift ended the night with a bang. The singer-songwriter, arguably the most influential figure in American pop culture, used the night of the debate to announce on Instagram that she would be voting for Kamala Harris for president.
Swift's support is monumental — she has enough political clout to get tens of thousands of Americans registered to vote simply by sharing a link — but more surprisingly, in her announcement, she also voiced concerns about AI deepfakes.
Swift posted on Instagram: “Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.”
Swift's statement turned more personal as she wrote about her experience of being deepfaked into endorsing a candidate for whom she had no intention of voting.
“Her statement was, in my opinion, very well thought out and persuasively written, but the section about AI gives her a personal stake in this election and the candidates' actions that nobody else has,” Linda Bloss-Baum, a professor in the Business and Entertainment Program at American University, told TechCrunch.
Celebrities, especially highly visible public figures like Swift, are particularly vulnerable to deepfakes because there are enough photos and videos of them online for advanced AI models to create convincing fake versions of them.
“One of the things I'm seeing a lot right now in my practice is a general increase in AI impersonations of people in ad pitches,” intellectual property and entertainment attorney Noah Downs told TechCrunch in August. These fake AI ad pitches have become so prevalent that even “Shark Tank” had to release a PSA warning fans about rampant scams impersonating the show's investors.
Meanwhile, the spread of non-consensual, AI-generated pornographic images of Swift has sparked debate among lawmakers seeking to enact laws banning harmful by-products of generative AI.
“Unfortunately, this is happening all too often to ordinary people whose names, images and likenesses have been deepfaked by AI products,” Bloss-Baum said.
But when a high-profile figure like Swift is involved, lawmakers may be paying more attention.
“As a longtime entertainment industry lobbyist, I can tell you that having celebrities go to Capitol Hill with you and tell your story gets you a lot more attention,” she said.
If deepfakes play a role in the election for the most influential seat in world politics, the stakes are a bit higher than an uncanny valley version of Lori Greiner peddling diet supplements. But as Election Day approaches, the US has few legislative tools to stop the spread of this misinformation on social media, where more voters than ever are getting their news.
“Unfortunately, AI is playing a bigger role in this election because of the pervasiveness of the technology,” Bloss-Baum said. “We've been victims of robocalls in the past, but the technology has gotten so advanced now that it's actually possible to make deepfaked calls where you don't necessarily know that the person calling is not the candidate.”
Bloss-Baum said that because Swift lives in Tennessee, she could potentially sue former President Trump under the state's ELVIS Act. But because the law is new, there is little legal precedent. Either way, Bloss-Baum believes that passing a federal law would give consumers and celebrities more power to protect themselves. She sees the bipartisan NO FAKES Act as especially promising, but any meaningful legislative change is unlikely before the election in early November.
“I believe it's a good thing for campaigns to use AI for data collection and analysis, but they need to be careful that the AI isn't misrepresenting candidates,” Bloss-Baum said.