Adobe’s Firefly AI videos to arrive only when they are safe, amid an India push

Adobe confirms that Firefly’s generative video capabilities will find priority integration within the Premiere Pro platform. (Official image)


It has been clear for a while now that text-to-video will be the next significant chapter for generative artificial intelligence (AI). While most of these tools remain in limited access, the pace at which they are achieving realism makes the developments intriguing. Earlier this year, OpenAI gave the world its first glimpse of Sora, a tool whose early demos showed off realistic generations that, at first glance, would be difficult to identify as AI-made. Runway’s Gen-3 Alpha has followed a similar path. Now it is Adobe’s turn, confirming that its Firefly platform will add what it calls the Firefly Video Model later this year. OpenAI still hasn’t shared a timeline, though it may in the coming weeks.


Adobe confirms that Firefly’s generative video capabilities will find priority integration within the Premiere Pro platform, which underlines the company’s belief that AI will be ready for professional video content and editing workflows. “Just like our other Firefly generative AI models, editors can create with confidence knowing the Adobe Firefly Video Model is designed to be commercially safe and is only trained on content we have permission to use — never on Adobe users’ content,” says Ashley Still, Senior Vice President and General Manager of the Creative Product Group at Adobe.


The future of generative video is still unclear, both in how these tools will find wider acceptance and in how they will handle often complex prompts. Realism in demos is great; Sora impressed us, and Firefly’s video generation looks no less promising. Yet demos rely on extremely specific prompts written to highlight capabilities, and users’ prompts are rarely so clear and crisp in real-world use.

To that point, Adobe is pitching the editing capabilities too. The company believes prompts in the Firefly Video Model will be useful for filling gaps in a video edit by generating generic footage (also called B-roll), as well as for creating secondary perspectives on a video you share with Firefly. How about viewing a skyline through binoculars, or through a smartphone’s video camera? The Firefly Video Model will be able to generate a video from that perspective.


“With the Firefly Video Model you can leverage rich camera controls, like angle, motion and zoom to create the perfect perspective on your generated video,” adds Ashley Still. There will be three pillars to this: Generative Extend, Text to Video, and Image to Video, all intended to find relevance in the workflows of typical creatives and enterprise users.

Adobe insists the Firefly Video Model will only release as a beta test later this year, once it is “commercially safe”. An essential element of this is Content Credentials, an industry-wide approach to labelling AI-generated content that HT has covered in detail earlier. The labelling is meant to differentiate generations from real videos or photos. With realistic video generations, as is already the case with photos and audio (and often a mix of the two), such labels may be important to help distinguish reality from the artificial and prevent misuse.


Another aspect of this is how video generation models handle creating human faces, which may or may not bear a resemblance to actual, living people. These significant developments, as the tech company counters AI competition and battles rival creative workflow platforms, arrive ahead of Adobe’s annual MAX conference next month.

A development worth noting: Google’s new generative tools that use the Gemini models (also available on the new Pixel 9 phones) clearly do not generate human faces from any prompts, and will not apply magic edits to photos containing human faces; they will not, for instance, change the perspective of the background in a photo that has my face and a friend’s face in it. On the other hand, with an object such as a car, you can create backdrops that make it seem as though you’ve parked it with the New York skyline or Kensington Palace in the background.


Adobe has also added support for eight Indian languages to its versatile editing platform, Adobe Express. This should strengthen the platform’s relevance in the Indian market as competition with Canva’s Magic Studio increases. “With millions of active users, Adobe Express is seeing rapid adoption in India, and we’re excited to double down on this diverse market’s fast expanding content creation requirements by introducing user-interface and translation features in multiple Indian languages,” says Govind Balakrishnan, Senior Vice President, Adobe Express and Digital Media Services.

The company confirms that Express on the web will support Hindi, Tamil and Bengali, while the translate feature will support Hindi, Bengali, Gujarati, Kannada, Malayalam, Punjabi, Tamil and Telugu. Canva, too, added support for multiple Indian and global languages to its suite earlier this year, including for translating content as well as generations, also aimed at teams and business users.


