Users will have their first chance to try out Adobe's video-generating AI model in just a few months: Features powered by Adobe's Firefly Video model will be available in the Premiere Pro beta app and on the free website by the end of 2024, according to the company.
Adobe says the three features — Generative Extend, Text to Video, and Image to Video — are currently in private beta but will be generally available soon.
Generative Extend, which can extend an input video by two seconds, is expected to be built into the Premiere Pro beta app later this year. Firefly's Text to Video and Image to Video models, which create five-second videos from text prompts or input images, will also arrive on Firefly's dedicated website later this year (though Adobe says that length limit may eventually be extended).
Prompt: A cinematic close-up and detailed portrait of a reindeer in a snowy forest at sunset. The lighting is cinematic and gorgeously soft, with sunlight, golden backlighting and dreamy bokeh and lens flare.
Adobe's software has been popular among creatives for decades, but generative AI tools like this one could upend the very industry the company serves, for better or worse. Firefly is Adobe's response to a recent wave of generative video models, including OpenAI's Sora and Runway's Gen-3 Alpha. These tools have captivated audiences by generating, in minutes, clips that would take humans hours to create. However, these early models are generally considered too unpredictable for use in professional environments.
But Adobe believes control is what sets it apart: Ely Greenfield, Adobe's CTO of digital media, told TechCrunch that there's “huge demand” for Firefly AI tools that can complement or accelerate existing workflows.
For example, Firefly's Generative Fill feature, added to Adobe Photoshop last year, is “one of the most heavily used features we've introduced in the last decade,” Greenfield says.
Adobe hasn't revealed pricing for these AI video features. As with other Firefly tools, Adobe allocates Creative Cloud customers a certain number of “generation credits,” with one credit typically getting you one generation. Naturally, the more expensive plans offer more credits.
In a demo with TechCrunch, Greenfield showed off some features powered by Firefly, which will be available later this year.
Generative Extend picks up where the original video left off and can add two seconds of footage relatively seamlessly. The feature runs the last few frames of a scene through Firefly's video model to predict the next few seconds. For the scene's audio, Generative Extend recreates background noises like traffic and nature sounds, but doesn't recreate human voices or music. Greenfield says this is to comply with music industry licensing requirements.
In this clip, Generative Extend was used just after the lens flare, at around 0.8 seconds.
As an example, Greenfield shared a video clip of an astronaut gazing into space that had been fixed with the feature. Shortly after an unusual lens flare appeared on screen, he revealed that the clip had been extended, yet the camera pan and the objects in the scene remained consistent. I think this feature could be useful when a scene cuts off a little early and you need a few more moments for a transition or fade-out.
Firefly's Text to Video and Image to Video features are more familiar: they let you enter a text or image prompt to generate a video of up to five seconds. Users can access these AI video generators at firefly.adobe.com, though they're likely subject to rate limits (Adobe hasn't revealed any details).
Adobe also says that Firefly's Text to Video feature is notably good at rendering words with correct spelling, something AI video models typically struggle with.
Prompt: A macro detailed shot of water splashing and freezing to spell out the word “ICE”
When it comes to safety measures, Adobe has been cautious from the start. Greenfield said Firefly's video models block the generation of videos containing nudity, drugs, or alcohol. He added that Adobe's video models are not trained on public figures such as politicians and celebrities. The same certainly cannot be said of some of its competitors.