Spurred by the growing threat of deepfakes, the FTC is amending existing rules that prohibit the impersonation of businesses and government agencies so that they cover the impersonation of individuals as well.
Depending on the final language and the public comments the FTC receives, the rule changes could make it unlawful for a GenAI platform to provide products or services that it knows, or has reason to know, are being used to harm consumers through impersonation.
“Scammers are using AI tools to impersonate a wider range of individuals with uncanny accuracy,” FTC Chair Lina Khan said in a press release. “With voice cloning and other AI-based scams on the rise, it is more important than ever to protect Americans from identity theft. The enhancements do just that, strengthening the FTC’s toolkit to combat AI-powered fraud that impersonates individuals.”
It's not just people like Taylor Swift who have to worry about deepfakes. Online romance scams using deepfakes are on the rise. Scammers also impersonate employees to extract cash from businesses.
In a recent YouGov poll, 85% of Americans said they were either very concerned or somewhat concerned about the spread of misleading video and audio deepfakes. A separate poll conducted by The Associated Press-NORC Center for Public Affairs Research found that nearly 60% of adults believe AI tools will increase the spread of false and misleading information during the 2024 U.S. election cycle.
Last week, my colleague Devin Coldewey covered the FCC's move to make AI-voiced robocalls illegal by reinterpreting an existing rule that prohibits artificial and prerecorded message spam. That rule change and today's FTC action are timely in light of the deepfaked President Biden robocall campaign designed to dissuade New Hampshire residents from voting.
There is no federal law outright banning deepfakes. High-profile victims, such as celebrities, can in theory turn to more traditional existing legal remedies to fight back, including copyright law, the right of publicity, and torts (invasion of privacy, intentional infliction of emotional distress, and so on). But this patchwork of laws can make litigation time-consuming and arduous.
In the absence of federal legislation, 10 states have enacted laws criminalizing deepfakes, though most target non-consensual pornography. As deepfake generation tools become increasingly sophisticated, these laws will no doubt be amended to encompass a broader range of deepfakes, and more state-level laws will no doubt be passed. (A case in point: Minnesota's law already targets deepfakes used in political campaigns.)