Google DeepMind, Google's flagship AI research lab, wants to beat OpenAI at the video generation game. And it just might, at least for a while.
On Monday, DeepMind announced Veo 2, a next-generation video-generating AI and the successor to Veo, which powers a growing number of products across Google's portfolio. Veo 2 can create clips longer than two minutes at resolutions up to 4K (4096 x 2160 pixels).
Notably, that's more than four times the resolution, and over six times the duration, of what OpenAI's Sora can achieve.
Admittedly, it's a theoretical advantage for now. Veo 2 is currently available exclusively in VideoFX, Google's experimental video creation tool, where videos are capped at 720p resolution and eight seconds in length. (Sora can generate clips up to 1080p and 20 seconds long.)
VideoFX's Veo 2. Image credit: Google
VideoFX is still waitlisted, but Google says it will expand the number of users who can access it this week.
Eli Collins, DeepMind's vice president of product, also told TechCrunch that Google will make Veo 2 available via its Vertex AI developer platform “once the model is ready for use at scale.”
“Over the coming months, we’ll continue to iterate based on user feedback, and [we’ll] look to integrate Veo 2’s updated capabilities into compelling use cases across the Google ecosystem,” Collins said. “[We] plan to share more updates next year.”
Easier to control
Like Veo, Veo 2 can generate videos with a text prompt (e.g. “Cars on the freeway”) or with text and a reference image.
So what's new in Veo 2? According to DeepMind, the model can generate clips in a range of styles, has a stronger “understanding” of physics and camera controls, and produces “crisper” footage.
By crisper, DeepMind means that the textures and images in clips will be sharper, especially in scenes with a lot of movement. As for the improved camera controls, they let users more precisely position the virtual “camera” in videos Veo 2 produces, and move it to capture objects and people from different angles.
DeepMind also claims that Veo 2 can more realistically model motion, fluid dynamics (like coffee being poured into a mug), and properties of light (such as shadows and reflections). That extends to different lenses and cinematic effects, as well as “subtle” human expressions, DeepMind says.
A Veo 2 sample from Google. Note that compression artifacts were introduced when the clips were converted to GIFs. Image credit: Google
DeepMind shared a few select samples from Veo 2 with TechCrunch last week. For AI-generated video, they're quite good; exceptionally good, even. Veo 2 seems to have a solid grasp of refraction and tricky liquids like maple syrup, and a knack for emulating Pixar-style animation.
But despite DeepMind's claim that the model is less likely to hallucinate elements like extra fingers or “unexpected objects,” Veo 2 can't quite clear the uncanny valley.
Notice the lifeless eyes of this cartoon dog-like creature.
Image credit: Google
And note the bizarrely slick road in this footage, as well as the pedestrians in the background blending into one another and the buildings with physically impossible facades.
Image credit: Google
Collins acknowledged there's work to be done.
“Coherence and consistency are areas where there's room for growth,” he said. “Veo can consistently follow a prompt for a couple of minutes, but [it can’t] follow longer, more complex prompts. Similarly, character consistency can be a challenge. There’s also room to improve in generating intricate details, fast and complex motion, and continuing to push the boundaries of realism.”
Collins added that DeepMind continues to work with artists and producers to improve its video generation models and tools.
“Since we started developing Veo, we’ve been working with creators like Donald Glover, The Weeknd, and d4vd to really understand their creative process and how technology could help them realize their visions,” Collins said. “Our work with creators on Veo 1 informed the development of Veo 2, and we look forward to working with trusted testers and creators to get feedback on this new model.”
Safety and training
Veo 2 was trained on lots of videos. That's generally how AI models work: fed example after example of some form of data, the model picks up on patterns in that data that let it generate new data.
DeepMind won't say exactly where it sourced the videos to train Veo 2, but YouTube could be one source; Google owns YouTube, and DeepMind previously told TechCrunch that Google models like Veo “could” be trained on some YouTube content.
“Veo is trained on pairings of high-quality video and description,” Collins says. “A video-description pair is a video and an associated description of what happens within that video.”
Image credit: Google
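The pairing Collins describes can be pictured as a simple record type. This is only a sketch; the field names and example values below are hypothetical, not DeepMind's actual training schema.

```python
from dataclasses import dataclass

# Minimal sketch of a video-description pair: a clip plus a text
# description of what happens in it. Names are illustrative only.
@dataclass
class VideoDescriptionPair:
    video_path: str   # path or URI to the source clip
    description: str  # what happens within that video

sample = VideoDescriptionPair(
    video_path="clips/pour_coffee.mp4",
    description="Coffee is poured slowly into a white ceramic mug.",
)
```

In practice, a model trained on millions of such pairs learns to associate phrases in the descriptions with visual patterns in the corresponding clips.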
While DeepMind hosts tools through Google that let webmasters block the lab's bots from scraping training data from their websites, it doesn't offer a mechanism for creators to remove their works from existing training sets. The lab and its parent company maintain that training models on public data is fair use, meaning DeepMind believes it's under no obligation to ask permission from data owners.
Not all creators agree, particularly in light of research estimating that tens of thousands of film and TV jobs could be disrupted by AI in the coming years. Several AI companies, including the eponymous startup behind the popular AI art app Midjourney, are facing lawsuits accusing them of violating artists' rights by training on content without consent.
“We are committed to working with creators and partners to achieve common goals,” Collins said. “We continue to collaborate with the creative community and wider industry to gather insights and listen to feedback, including from those who use VideoFX.”
Because of the way today's generative models behave when trained, they carry certain risks, such as regurgitation, where a model produces a mirror copy of its training data. DeepMind's solution is prompt-level filters, including for violent, graphic, and explicit content.
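DeepMind hasn't detailed how its prompt-level filters work. As a rough illustration of the general technique, a keyword-based screen might look like the following; the categories, terms, and function name are all invented for this sketch, and production systems typically rely on learned classifiers rather than word lists.

```python
# Toy prompt-level filter: reject prompts containing terms from blocked
# categories. Purely illustrative; not DeepMind's implementation.
BLOCKED_CATEGORIES = {
    "violence": {"gore", "massacre"},
    "explicit": {"nsfw"},
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_categories) for a text prompt."""
    words = set(prompt.lower().split())
    hits = [cat for cat, terms in BLOCKED_CATEGORIES.items() if words & terms]
    return (not hits, hits)

allowed, hits = screen_prompt("a massacre scene")
# allowed is False; hits == ["violence"]
```

A real filter would also have to catch paraphrases and misspellings, which is why classifier models are the norm.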
Google's indemnification policy, which protects certain customers against claims of copyright infringement arising from the use of its products, will not apply to Veo 2 until it is generally available, Collins said.
Image credit: Google
To reduce the risk of deepfakes, DeepMind says it uses its proprietary watermarking technology, SynthID, to embed invisible markers in the frames that Veo 2 generates. However, like any watermarking technology, SynthID is not foolproof.
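Google hasn't published the specifics of how SynthID embeds its markers; it's a learned technique designed to survive compression and edits. As a toy illustration of the broader idea of hiding an invisible marker in image data, here is a classic least-significant-bit scheme, which is emphatically not SynthID's method.

```python
# Toy least-significant-bit (LSB) watermark: hide one bit in each pixel
# by overwriting its lowest bit. Changing the low bit alters a pixel
# value by at most 1, which is imperceptible to the eye. NOT how SynthID
# works; SynthID uses a learned, tamper-resistant embedding.
def embed_bits(pixels: list[int], bits: list[int]) -> list[int]:
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_bits(pixels: list[int], n: int) -> list[int]:
    return [p & 1 for p in pixels[:n]]

frame = [200, 201, 202, 203]  # grayscale pixel values for one toy "frame"
marked = embed_bits(frame, [1, 0, 1, 1])
assert extract_bits(marked, 4) == [1, 0, 1, 1]
```

The weakness of naive LSB marks, and the reason schemes like SynthID exist, is that re-encoding or resizing a video destroys the low bits entirely.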
Upgrading Imagen
In addition to Veo 2, Google DeepMind this morning announced an upgrade to Imagen 3, its commercial image generation model.
The new version of Imagen 3 is available to users of ImageFX, Google's image generation tool, starting today. DeepMind says it can create “brighter, better-composed” images in styles such as photorealism, impressionism, and animation.
“This upgrade [to Imagen 3] also follows prompts more closely, and renders richer details and textures,” DeepMind wrote in a blog post provided to TechCrunch.
Image credit: Google
Rolling out alongside the model is a UI update for ImageFX. As a user types a prompt, key terms in it become “chips,” accompanied by a drop-down menu of suggested related words. Users can use the chips to iterate on what they've written, or select from a row of auto-generated descriptors beneath the prompt.
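Google hasn't said how ImageFX generates the suggestions attached to each chip. As a minimal sketch of the interaction, a lookup-table version might look like this; the vocabulary and function name are invented for illustration, and the real feature presumably draws its suggestions from a model rather than a static table.

```python
# Sketch of ImageFX-style prompt "chips": detect key terms in a prompt
# and attach related descriptors to each. Hypothetical vocabulary only.
RELATED_TERMS = {
    "photorealistic": ["35mm film", "studio lighting", "shallow depth of field"],
    "impressionist": ["visible brushstrokes", "soft palette"],
}

def suggest_chips(prompt: str) -> dict[str, list[str]]:
    """Map each key term found in the prompt to suggested alternatives."""
    words = prompt.lower().split()
    return {w: RELATED_TERMS[w] for w in words if w in RELATED_TERMS}

chips = suggest_chips("a photorealistic portrait")
# one chip, keyed on "photorealistic", with its related descriptors
```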