Google's AI research arm DeepMind says it is developing AI technology to generate soundtracks for videos.
In an official blog post, DeepMind said it believes a technology called V2A (short for “video to audio”) is a key piece of the AI-generated media puzzle. While many organizations, including DeepMind, are developing video-generation AI models, these models are unable to create sound effects that are synchronized with the video they generate.
“Video generation models are advancing at an astonishing pace, but many current systems can only produce silent outputs,” DeepMind wrote. “V2A technology [could] become a promising approach for bringing generated movies to life.”
DeepMind's V2A technology takes a description of a soundtrack (e.g., “jellyfish pulsating underwater, marine life, ocean”) paired with a video and creates music, sound effects, and even dialogue that match the characters and tone of the video, watermarked with DeepMind's anti-deepfake SynthID technology. DeepMind says the AI model powering V2A is a diffusion model trained on a combination of sounds, dialogue transcripts, and video clips.
According to DeepMind, “By training on video, audio, and additional annotations, our technology learns to associate specific audio events with different visual scenes while responding to information provided in the annotations and transcripts.”
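For a concrete mental model of the pipeline described above, here is a minimal, purely illustrative sketch of how a V2A-style interface could be driven. This is not DeepMind's API; the class, function, and file names are assumptions, and the model itself is stubbed out. It only shows the shape of the inputs (a video plus an optional text prompt) and the watermarking step DeepMind describes.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class V2ARequest:
    video_path: str               # raw video whose pixels condition the audio
    prompt: Optional[str] = None  # optional natural-language soundtrack description


def generate_soundtrack(request: V2ARequest) -> bytes:
    """Toy stand-in for a diffusion-based video-to-audio model (hypothetical).

    A real pipeline would roughly: (1) encode the video frames, (2) encode the
    optional text prompt, (3) run an iterative diffusion denoising loop
    conditioned on both, and (4) decode the result into a waveform, which
    DeepMind says is then watermarked with SynthID.
    """
    # Placeholder only: returns empty audio bytes instead of a real waveform.
    return b""


# Example invocation mirroring the prompt style quoted in the article.
audio = generate_soundtrack(
    V2ARequest(
        video_path="jellyfish.mp4",
        prompt="jellyfish pulsating underwater, marine life, ocean",
    )
)
```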
There's no word on whether the training data is copyrighted or whether the creators of the data were informed of DeepMind's work. We've reached out to DeepMind for more information and will update this post if we hear back.
AI-powered sound-generation tools are nothing new: Startup Stability AI released one just last week, and ElevenLabs launched one in May. Nor are models that create sound effects for videos. A Microsoft project can generate talking or singing videos from still images, and platforms like Pika and GenreX have trained models that take a video and infer appropriate music or sound effects for a given scene.
But DeepMind claims its V2A technology is unique in that it can understand the raw pixels of a video and automatically sync the generated sound with the video, optionally without a text description.
V2A isn't perfect, and DeepMind acknowledges as much: because the underlying model wasn't trained on many videos with artifacts or distortions, it doesn't generate particularly high-quality audio for them. And the audio it generates isn't all that convincing in general; my colleague Natasha Lomas described it as a “smorgasbord of stereotypical sounds,” and I wouldn't disagree.
For these reasons, and to prevent misuse, DeepMind says it won't be making the technology available to the public anytime soon.
“To ensure our V2A technology can have a positive impact on the creative community, we are gathering diverse perspectives and insights from leading creators and filmmakers, and using this valuable feedback to inform our ongoing research and development,” DeepMind said. “Prior to opening up access to the wider public, our V2A technology will undergo rigorous safety assessments and testing.”
DeepMind is pitching its V2A technology as an especially useful tool for archivists and those working with historical footage. But generative AI along these lines also threatens to upend the film and TV industry. It will take seriously strong labor protections to ensure that generative media tools don't eliminate jobs, or potentially entire professions.