Users will get their first chance to try out Adobe’s AI model for video generation in just a couple of months. The company says features powered by Adobe’s Firefly Video model will become available before the end of 2024 in the Premiere Pro beta app and on a free website.
Adobe says three features – Generative Extend, Text to Video, and Image to Video – are currently in a private beta, but will be public soon.
Generative Extend, which lets you extend any input video by two seconds, will be embedded into the Premiere Pro beta app later this year. Firefly’s Text to Video and Image to Video models, which create five-second videos from text prompts or input images, will be available on Firefly’s dedicated website later this year as well. (The time limit may increase, Adobe noted.)
Prompt: cinematic closeup and detailed portrait of a reindeer in a snowy forest at sunset. the lighting is cinematic and gorgeous and soft and sun-kissed, with golden backlight and dreamy bokeh and lens flares

Adobe’s software has been a favorite among creatives for decades, but generative AI tools like these may upend the very industry the company serves, for better or worse. Firefly is Adobe’s answer to the recent wave of generative AI models, including OpenAI’s Sora and Runway’s Gen-3 Alpha. These tools have captivated audiences, producing clips in minutes that would have taken a human hours to create. However, these early tools are generally considered too unpredictable to use in professional settings.
But controllability is where Adobe thinks it can set itself apart. Adobe’s CTO of digital media, Ely Greenfield, tells TechCrunch there is a “huge appetite” for Firefly AI tools that complement or accelerate existing workflows.
For instance, Greenfield says Firefly’s generative fill feature, added to Adobe Photoshop last year, is “one of the most frequently used features we’ve introduced in the past decade.”
Adobe would not disclose the price of these AI video features. For other Firefly tools, Adobe allots Creative Cloud customers a certain number of “generative credits,” where one credit typically yields one generation result. More expensive plans, obviously, provide more credits.
In a demo with TechCrunch, Greenfield showcased the Firefly-powered features coming later this year.
Generative Extend picks up where the original video stops, adding an extra two seconds of footage in a relatively seamless way. The feature takes the last few frames of a scene and runs them through Firefly’s Video model to predict the next couple of seconds. For the scene’s audio, Generative Extend will recreate background noise, such as traffic or the sounds of nature, but not people’s voices or music. Greenfield says that’s to comply with licensing requirements from the music industry.
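Adobe hasn’t published implementation details or a programmatic API for Generative Extend, but the workflow Greenfield describes, conditioning a video model on a clip’s final frames to synthesize roughly two more seconds, maps to a familiar pattern. Here is a minimal, purely illustrative sketch in Python; the VideoModel class and its predict_next_frames method are hypothetical stand-ins, not Adobe code:

```python
from dataclasses import dataclass

# A "frame" here is just an opaque placeholder; a real pipeline would
# use decoded image tensors.
Frame = str

@dataclass
class VideoModel:
    """Hypothetical stand-in for a generative video model (not Adobe's API)."""
    fps: int = 24

    def predict_next_frames(self, context: list[Frame], n_frames: int) -> list[Frame]:
        # A real model would synthesize new frames conditioned on `context`.
        # This stub repeats the last frame so the sketch runs end to end.
        return [context[-1]] * n_frames

def generative_extend(clip: list[Frame], model: VideoModel,
                      seconds: float = 2.0, context_frames: int = 8) -> list[Frame]:
    """Extend `clip` by `seconds`, conditioning on its final frames."""
    context = clip[-context_frames:]  # the "last few frames in a scene"
    extension = model.predict_next_frames(context, int(seconds * model.fps))
    return clip + extension           # original footage plus the generated tail

# Example: a 1-second clip at 24 fps grows to 3 seconds.
clip = [f"frame_{i}" for i in range(24)]
extended = generative_extend(clip, VideoModel())
assert len(extended) == 24 + 48
```

Note that this sketch omits audio entirely; per the article, the ambient soundtrack is regenerated separately rather than extrapolated along with the frames.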
Generative Extend was used on this clip right around the 0:08 mark, just after the lens flare.

In one example, Greenfield showed a video clip of an astronaut looking out into space that had been modified with the feature. I was able to tell the moment it had been extended, just after an unusual lens flare appeared on screen, but the camera pan and objects in the scene stayed consistent. I could see it being useful when your scene ends a moment too soon, and you need to draw it out just a bit longer to transition or fade out.
Firefly’s Text to Video and Image to Video features are more familiar. They let you input a text or image prompt and get up to five seconds of video out. Users will be able to access these AI video generators on firefly.adobe.com, likely with rate limits (though Adobe did not specify).
Adobe also says Firefly’s Text to Video features are quite good at spelling words correctly, something AI video models tend to struggle with.
Prompt: macro detailed shot of water splashing and freezing to spell the word ICE

In terms of safeguards, Adobe is erring on the side of caution to start out. Greenfield says Firefly’s video models block the generation of videos that include nudity, drugs, and alcohol. Further, he added, Adobe’s video generation models are not trained on public figures, like politicians and celebrities. The same certainly can’t be said for some of the competition.