Meta isn't the only company grappling with the rise of AI-generated content and its impact on their platforms. In June, YouTube quietly implemented a policy change that lets users request the removal of AI-generated or other synthetic content that mimics their face or voice. Requests go through YouTube's existing privacy request process, and the change expands on the responsible AI agenda the company first announced in November.
Rather than having affected parties flag misleading content like deepfakes for removal, YouTube would prefer they request removal directly as a privacy violation. According to YouTube's recently updated help documentation on the topic, a first-party complaint is required, with a few exceptions, such as when the affected individual is a minor, lacks access to a computer, or is deceased.
However, submitting a takedown request does not guarantee removal; YouTube cautions that it will make its own decision on each complaint based on a variety of factors.
For example, the company may take into account whether the content is disclosed as synthetic or AI-created, whether it personally identifies the individual, and whether it is parody, satire, or otherwise of public interest value. YouTube said it may also consider whether the AI content features celebrities or other prominent figures, or depicts "sensitive behavior," such as criminal activity, violence, or endorsement of a product or political candidate. The latter is of particular concern in an election year, when AI-generated endorsements could influence votes.
YouTube also said it will give content uploaders 48 hours to respond to a complaint. If the uploader removes the content within that window, the complaint is closed; otherwise, YouTube will launch an investigation. The company warns that removal means taking the video down from the site entirely and, where applicable, stripping the individual's name and personal information from the video's title, description, and tags. Uploaders can also blur the faces of people appearing in the video. Simply making a video private in response to a removal request is not sufficient, however, since the video could be reverted to public status at any time.
The company didn't widely advertise the policy change, but in March it introduced tools in its Creator Studio that let creators disclose when realistic content was made with altered or synthetic media, such as generative AI. It also recently began testing a feature that lets users add crowdsourced notes to videos to provide additional context, such as whether a video is intended as parody or is misleading in some way.
YouTube isn't opposed to the use of AI and is already experimenting with generative AI itself, including a comment summarization feature and a conversational tool for asking questions about videos and getting recommendations. But the company has previously warned that simply labeling AI content as such doesn't necessarily protect it from removal, since all content must still comply with YouTube's Community Guidelines.
In the event of a privacy complaint regarding AI content, YouTube will not immediately penalize the original content creator.
“Creators, if you receive a privacy violation notice, please keep in mind that a privacy violation is separate from a Community Guidelines violation warning, and receiving a privacy violation warning does not automatically result in a strike,” a company representative shared this month on YouTube's community site, where the company provides updates directly to creators about new policies and features.