Vimeo has joined companies like TikTok, YouTube and Meta in implementing a way for creators to label AI-generated content. The video hosting service announced on Wednesday that creators must disclose to viewers when realistic content was created with AI.
New updates to Vimeo's terms of service and community guidelines will ensure that videos generated, synthesized, or manipulated by AI cannot be mistaken for real people, places, or events. This is a notable move for Vimeo, as it's becoming increasingly difficult to distinguish between authentic and fake content created by evolving generative AI tools.
Vimeo doesn't require creators to disclose clearly unrealistic content, such as animated content, videos with obvious visual effects, or videos that use AI only for minor creative assistance. However, videos that feature celebrities saying or doing things they did not actually do, or that use altered footage of real events or places, do require an AI content label.
Additionally, the company said AI content labels will begin appearing on videos that use Vimeo's AI toolset, including tools that can remove long pauses and conversational disruptions.
A clear label now appears at the bottom of a video to indicate that its creator has voluntarily disclosed the use of AI. When uploading or editing a video, creators can select the AI-generated content checkbox and specify whether AI was used for the audio, visuals, or both.
Vimeo currently leaves it up to creators to label AI-generated content, but the company is developing an automated system to detect AI-generated material and label it appropriately.
“Our long-term goal is to develop an automated labeling system that can reliably detect AI-generated content, providing greater transparency and reducing burden for creators,” CEO Philip Moyer wrote in an official blog post.
Moyer, who only joined the company in April this year, has previously spoken about Vimeo's stance on AI. In a separate blog post, he told users that Vimeo prohibits training AI models on videos hosted on the platform, in order to protect user-generated content from AI companies. Similarly, YouTube's Neal Mohan has made it clear that using videos on the platform to train models (including OpenAI's Sora) is a violation of the company's terms of service.