Hundreds of people in the artificial intelligence community have signed an open letter calling for strict regulation of AI-generated impersonations, or deepfakes. While it is unlikely to spur actual legislation (despite the House's new task force), the letter serves as a bellwether for how experts lean on this controversial issue.
At the time of publication, the letter had been signed by more than 500 people in and around the AI field. It declares that “deepfakes pose a growing threat to society, and governments must impose obligations across the supply chain to stop the spread of deepfakes.”
The signatories call for the full criminalization of deepfake child sexual abuse material (CSAM, also known as child pornography), regardless of whether the person depicted is real or fictitious. They also want criminal penalties for anyone who creates or spreads harmful deepfakes, and a requirement that developers prevent their products from being used to make harmful deepfakes in the first place, with penalties if their precautions prove inadequate.
Some of the more prominent signatories of this letter include:
- Jaron Lanier
- Frances Haugen
- Stuart Russell
- Andrew Yang
- Marietje Schaake
- Steven Pinker
- Gary Marcus
- Oren Etzioni
- Genevieve Smith
- Yoshua Bengio
- Dan Hendrycks
- Tim Wu
The letter was also signed by hundreds of academics from around the world and across many disciplines. For those curious, one person from OpenAI and two from Google DeepMind have signed on, but no one from Anthropic, Amazon, Apple, or Microsoft had at the time of writing (except Lanier, whose position at Microsoft is nonstandard). Interestingly, the signatories are sorted in the letter by “notability.”
This is not the first call for such measures; in fact, they were debated in the EU for years before being formally proposed earlier this month. Perhaps it is the EU's willingness to deliberate and follow through that galvanized these researchers, creators, and executives to speak out.
Or perhaps it is KOSA's slow march toward passage, and the bill's lack of protections against this type of abuse.
Or maybe it is the threat of AI-generated scam calls which, as we have already seen, can sway elections or swindle money out of unsuspecting people.
Or perhaps it is the task force announced yesterday, which has no particular agenda other than producing a report on what some AI-based threats are and how they might be legislatively restricted.
As you can see, there is no shortage of reasons for people in the AI community to be out here waving their arms around and asking, “shouldn't we be doing something?!”
Whether anyone will pay attention to this letter is anyone's guess. No one paid attention to the infamous letter calling for a “pause” on AI development, though this one is a bit more practical. And if Congress decides to take up the issue, an unlikely event given that this is an election year with a sharply divided legislature, lawmakers will have this list to draw on when taking the temperature of the worldwide AI academic and development community.