With AI as a foundation, Adobe gives its creative apps a broader feature suite

Adobe, at their annual creative conference called Adobe MAX, have announced a broad-spectrum functionality set that will now be bolted on to their popular array of apps. (Official image)


Miami, Florida: The underlying approach may not entirely be a surprise in this day and age, but the scale leaves little to chance. Tech giants Adobe, at their annual creative conference called Adobe MAX, have announced a broad-spectrum functionality set that will now be bolted on to their popular array of apps, including Adobe Express, Photoshop, and the enterprise-focused GenStudio, to name a few. The underlying thread, of course, is the Adobe Firefly family of generative artificial intelligence (AI) models, which now has generative video, as well as improvements across the board including image generation, vectors and design inputs.


“We want AI to bridge the digital divide, not widen it,” says Shantanu Narayen, CEO of Adobe, at the company’s annual MAX keynote. “The quality and controllability of these AI models is critical and given we have the domain experience, we made the strategic decision to invest in creating foundation models in the core creative categories,” Narayen details Adobe’s intent to continue to develop AI in-house.

These updates follow the Content Authenticity web app that was announced a few days ago. Now, Adobe is doubling down with the latest feature additions across apps, aimed at organisations and specific workflows.

“We need the development of industry standards to create attribution, through what we call Content Credentials. Since 2019, the Adobe-led Content Authenticity Initiative has been focused on promoting the widespread adoption of provenance standards through Content Credentials, and we now have over 3,700 members,” says Narayen.

“We’re giving the creative community a powerful new brush to paint the world by putting unprecedented power, precision and creative control in their hands. With the demand for content projected to grow as much as ten-fold, we’re empowering creators to scale the use of their content across marketing, HR and sales teams,” says David Wadhwani, president for digital media, at Adobe.

Also Read|To insulate reality from Gen AI, Adobe’s content authenticity web app takes shape

Adobe identifies competition on the radar

The announcements at Adobe MAX 2024 come at a time when the company wants to extend its advantage with graphics solutions and the entirety of the Creative Cloud suite, which doesn’t actually have direct competition in the same configuration, but does face theoretical competition for specific workflows and functionality.

Canva’s new Magic Studio, which underwent a revamp earlier this year and brought a wide range of AI functionality under one umbrella, does compete with some of Adobe’s software, including Express. Canva too switched focus to small businesses, enterprises and team users with the Magic Studio update. The Canva-owned Affinity programs also have the Designer platform for illustrators and designers, Photo for image editing, and Publisher for layouts and designs, which sees them competing with Adobe’s Illustrator, Photoshop and InDesign.

There’s focused competition on the horizon, which is why Adobe is making moves that utilise an early-mover advantage with the Firefly AI models. Pixelmator Pro can be a Photoshop and Lightroom alternative for those who use an Apple Mac or iPad for their workflows. Apple’s own Freeform whiteboard app can be an alternative to Figma, while Apple’s Final Cut Pro could be an alternative to Adobe’s Premiere Pro, particularly given its differential pricing approach.

Also Read | Discerning users can counter Gen AI’s potential for misuse: Adobe’s Andy Parsons

One more app gaining the sort of capabilities that professional workflows would find relevant is the web and mobile focused Adobe Express. Adobe is positioning this as a simplified value-addition in terms of functionality and collaboration across teams, with integration for InDesign as well as Lightroom files, in addition to Photoshop and Illustrator, making it easier to move work files between them. The simplification extends to a one-click animate option for adding sound or motion to a file. There are brand templates in play as well.

The Firefly flex: Polished, and learning new skills

The company says updates to the Firefly Image 3 Model means this generative AI model can now generate images based on prompts and inputs, as much as four times faster than before. “This model delivers improved photorealistic qualities, better understanding of complex text prompts and more variety across generative results,” explains Deepa Subramaniam, who is Vice President for Product Marketing, Creative Professional at Adobe, in a briefing of which HT was a part.

There are updates for the Adobe Vector Model, which Adobe hopes will give greater creative control to designers using the Illustrator software. There are enhancements too for Firefly Services and Custom Models for enterprise customers, with a specific focus on speed and scale for content creation. The company says these are being used by a number of global brands, including Deloitte, Gatorade, IBM, IPG Health and Mattel.

Also Read | Labels and watermarks become weapons of choice to identify AI images

Mattel, for instance, is using Firefly to design packaging for the 2024 Holiday Bestie edition Barbie toys, which are now on sale. Starting with the US, Gatorade customers will be able to design custom bottles using text prompts on the beverage maker’s website, which will then be printed on their bottles before shipping; Firefly is the basis for this tool.

“In the last couple of years, since the Gen AI revolution began, we at Adobe have been creating generative models to serve our creative community, across a variety of modalities. First model was in imaging, second model was about vectors and the next model was on design,” says Alexandru Costin, VP of Generative AI at Adobe, in a briefing to HT.

This variety of models has been crucial to how Adobe’s apps have evolved in the past year and a bit longer, a change that spans Creative Cloud and Adobe Express, as well as the Document Cloud and Experience Cloud for organisations. “This is to accelerate content creation and editing,” adds Costin. Firefly AI, the company confirms, has now been used to generate more than 13 billion images, across its presence as a web app and its integration within the Creative Cloud apps.

Also Read | Exclusive | Most cutting-edge tools use AI built in-house: Canva’s Cameron Adams

Creative Cloud updates: Features, across the board

Photoshop, Lightroom, Premiere Pro, Illustrator, InDesign, Frame.io, Acrobat and Express: it is clear that no creative platform in Adobe’s arsenal of apps is left behind. This means the “over 100 new Creative Cloud and Adobe Express features” the company talks about span video editing, image editing, design, photography and more.

“Our product innovation is driven by what we hear directly from our community,” says Subramaniam. She points out that the new set of updates are geared towards giving users more flexibility and precision with their workflows, without compromising speed and productivity.

Premiere Pro, the video editing suite, among other updates gets the Firefly-powered Generative Extend, in the beta testing stage. It builds on the newly released Firefly Video Model. “We are addressing AI in the way that our creative professional community wants. By integrating the Firefly Video Model directly into Premiere, it’ll solve real-world editing problems,” Subramaniam points out.

Also Read | Meta and Google, like OpenAI and Apple, persuade new users as AI space evolves

Generative Extend in Premiere Pro extends clips to cover gaps in footage, smooth out transitions, or hold on shots longer for better editing refinement. If there is a gap of a few seconds between two clips, or at least where you’d want them to be, click and drag the beginning or end of a video or audio clip to add what Adobe describes as an extension of the video file preceding or succeeding it, including background and tonality. HT is yet to test this in detail, but the implementations we have seen so far indicate a good level of video photorealism and audio matching.

The training in use here draws on everything the Adobe Firefly Video Model has learnt from its massive data sets. Since Generative Extend is still in the beta phase, it is limited to 1920×1080 or 1280×720 resolutions and frame rates between 12fps and 30fps. Those limits should expand with time.

“We have added new tools to our flagship applications like Photoshop, Illustrator, InDesign, Premiere Pro and Express, all driven by the power of generative AI, supporting creative professionals across all steps of their process and aiding in streamlining workflows and boosting productivity,” says Prativa Mohapatra, Vice President and Managing Director, Adobe India.

Photoshop is also betting big on further Firefly infusion, with the new Firefly Image 3 Model now the basis for functionality including the Generative Fill, Generative Expand, Generate Similar, Generate Background and Generate Image options, now generally available in the Photoshop app on desktop and on the web. There are new tools too, namely automatic image distraction removal, as well as Generative Workspace.

The former will specifically target unwanted and messy elements in a photo: people in the background, or wires next to an object, for example. While this functionality has been around in different forms across many apps from rival tech companies (Google Photos at a consumer level, or Canva’s Magic Eraser tool for consumers and businesses, for instance), Adobe is betting on the new Firefly Image 3 model’s ability to do it better.

Lightroom is making Generative Remove, which uses the Firefly AI models to identify and subtract distractions from a photo, generally available. Till now, it was available to a limited number of users as part of an early access stage. Adobe says the removals will be based on improved selection. The Lightroom editing process should also see performance improvements across most computing platforms and devices, alongside the addition of Quick Actions.


