Adobe announced the video generation capabilities of its Firefly AI platform ahead of Monday's Adobe MAX event. Starting today, users will be able to test Firefly's video generator for the first time on Adobe's website and explore Generative Extend, a new AI-powered video feature, in the Premiere Pro beta app.
On Firefly's website, users can try the text-to-video or image-to-video models, both of which can produce up to five seconds of AI-generated video. (The web beta is free to use, but there may be rate limits.)
Demo of Adobe's text-to-video model. Image credit: Adobe
Adobe said it has trained Firefly to create both animated content and photorealistic media, depending on the prompt. Firefly can also, at least in theory, render legible text within videos, something AI image generators have historically struggled with. The Firefly video web app includes settings to control camera pan, camera movement intensity, angle, and shot size.
In the Premiere Pro beta app, users can try out Firefly's Generative Extend feature to extend video clips by up to two seconds. The feature is designed to continue camera movement and subject movement, generating additional beats in a scene. It can also extend background audio, marking the first public release of the AI audio model Adobe has been developing behind closed doors. However, the audio extension won't re-create voices or music, likely to avoid copyright lawsuits from record labels.
In a demo shared with TechCrunch ahead of launch, Firefly's Generative Extend feature produced more impressive results, and seemed more practical, than the text-to-video model. The text-to-video and image-to-video models lack the sophistication and wow factor of Adobe's competitors in AI video, such as Runway's Gen-3 Alpha and OpenAI's Sora (though the latter has yet to ship). Adobe says it's focusing more on AI editing features than on AI video generation, since the former are more likely to satisfy its user base.
Here's how Generative Extend looks in Adobe Premiere (Adobe):
Adobe's AI capabilities must strike a delicate balance with creative audiences. On one hand, the company is trying to take the lead in a field crowded with AI startups and tech companies demonstrating impressive AI models. On the other hand, many creators aren't happy that the work they've been doing for decades with mice, keyboards, and styluses could soon be replaced by AI. That's why Adobe's first Firefly video feature, Generative Extend, uses AI to solve an existing problem for video editors (clips that aren't long enough) rather than generating new videos from scratch.
“Our audience is the most pixel-perfect audience on the planet,” Alexandru Costin, vice president of generative AI at Adobe, said in an interview with TechCrunch. “Rather than generating new assets, they want AI to help them extend, create variations of, and edit the assets they own. So for us, it's very important to do generative editing first and then generative creation.”
Easy edits backed by production-grade models: that's the recipe behind Adobe's early success with Firefly's image models in Photoshop. Adobe executives have previously said that Photoshop's Generative Fill is one of the most popular new features of the past decade, primarily because it complements and speeds up existing workflows. The company hopes to replicate that success with video.
Adobe strives to be considerate of creators, reportedly paying photographers and artists $3 for every minute of video they submit to train its Firefly AI models. Even so, many creators remain wary of AI tools, worried the technology will make them obsolete. (Adobe also announced on Monday an AI tool that automatically generates content for advertisers.)
Costin tells creators with these concerns that generative AI tools will create more demand for their work, not less. “If you think about the need for businesses to create personalized, hyper-personalized content for every user they interact with, the demand is endless.”
Adobe's head of AI says people should consider how past technological revolutions have benefited creatives, comparing the rise of AI tools to digital publishing and digital photography. He points out that those breakthroughs were also initially seen as threats, and warns that creators who reject AI risk being left behind.
“Leverage generative AI to level up, improve your skills, and become a creative professional who can use these tools to create 100x more content,” says Costin. “The demand for content is there, and now you can meet it without sacrificing your life; leverage the technology. This is the new digital literacy.”
Firefly also automatically embeds an “AI-generated” watermark in the metadata of videos it creates. Platforms such as Meta's Instagram and Facebook use identification tools to read these labels and mark media as AI-generated. The idea is that, as long as content carries the appropriate metadata watermark, platforms and individuals can use such AI-identification tools to determine what is authentic and what is not. By default, however, Adobe's videos carry no visible, human-readable label indicating they are AI-generated.
Adobe specifically designed Firefly to produce “commercially safe” media. The company says it does not train Firefly on images or videos that include drugs, nudity, violence, politicians, or copyrighted material. In theory, that means Firefly's video generator won't create “unsafe” videos. Now that Firefly's video models are freely accessible on the web, we'll see whether that holds up.