Decoding Adobe’s new Adaptive Profile and Generative Extend editing tech

Unlike existing profiles such as Adobe Color or Adobe Landscape, the new Adobe Adaptive is image dependent. (Official image)


Miami, Florida: If there were a metric for it, this would be the most frenetic Adobe MAX keynote in history. But then again, Adobe had a lot to get through, considering its broad platform portfolio and the increasing infusion of artificial intelligence (AI) via the Firefly models. It is no mean feat to have convincingly stayed ahead of potential competition from OpenAI and Meta, which are yet to make good on the promise of a safe and reliable video generation model for consumers, something Adobe has achieved with the Firefly Video model. Amidst all the updates for Express, Photoshop, Premiere Pro, Illustrator, InDesign, Lightroom and of course Firefly AI, two caught our attention. An Adaptive Profile for photo edits, and the AI-reliant Generative Extend that now arrives with Premiere Pro, will significantly alter the way content is edited, by reducing the manual steps needed to achieve the same results.


The Adobe Adaptive Profile: Raw capabilities

If you are a Lightroom user, you’ll probably appreciate this more than those who aren’t. Think of Adobe Adaptive as a predefined set of adjustments for your photos, which is why it is called a profile. It is AI-based, of course, which means that unlike existing profiles such as Adobe Color or Adobe Landscape, the new Adobe Adaptive is image dependent. AI models analyse each photo you apply it to, and then adjust tones as well as colours. You could reach the same result by manually adjusting Exposure, Contrast, Highlights and Shadows, but this saves more than one step in that process.
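Adobe has not published how its model works, but the difference from a fixed profile can be sketched conceptually: a fixed profile applies the same numbers to every photo, while an image-dependent one derives its numbers from the photo itself. A minimal, illustrative sketch in Python (the statistics and thresholds below are ours, not Adobe’s tuning):

```python
import numpy as np

def propose_adjustments(image: np.ndarray) -> dict:
    """Suggest tone tweaks from simple luminance statistics.

    `image` is assumed to be a float array scaled to [0, 1]. A fixed profile
    would return constants here; an image-dependent one looks at the pixels.
    """
    luminance = image.mean(axis=-1)  # rough per-pixel brightness
    mean, std = float(luminance.mean()), float(luminance.std())
    return {
        "exposure":   0.5 - mean,    # push the midpoint toward middle grey
        "contrast":   0.2 - std,     # widen flat histograms, tame harsh ones
        "highlights": -max(0.0, float((luminance > 0.9).mean()) - 0.05),
        "shadows":    max(0.0, float((luminance < 0.1).mean()) - 0.05),
    }
```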

Results, of course, will vary depending on the sort of image it’s asked to play with, but my initial impressions indicate it is quite useful across a variety of photos. Pixel data from the original photo is maintained. The limitation at this time is that the Adobe Adaptive profile works with raw files from any camera (including smartphones), but does not yet support non-raw formats such as JPEG, TIFF and HEIF. That means most photography enthusiasts are out of the circle of trust, unless they decide to shoot in raw.
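In concrete terms, that restriction amounts to a simple gate in a photo workflow. A minimal sketch, assuming an illustrative subset of raw extensions (the function name is ours, not Adobe’s):

```python
from pathlib import Path

# Illustrative subset only; real raw support spans many more extensions.
RAW_EXTENSIONS = {".dng", ".cr3", ".nef", ".arw", ".raf"}

def adaptive_profile_applies(path: str) -> bool:
    """True if the file is raw, and so eligible for the Adaptive profile."""
    return Path(path).suffix.lower() in RAW_EXTENSIONS

print(adaptive_profile_applies("holiday.dng"))  # True
print(adaptive_profile_applies("holiday.jpg"))  # False: JPEG not yet supported
```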

This ties into the Adobe Camera Raw tool, which supports a number of smartphones as well as digital cameras. In the former category, recent additions include the Google Pixel 9 phones and the Samsung Galaxy S24 Ultra, though at the time of writing, the Apple iPhone 16 series is missing (the iPhone 15 series is present, however). Alongside these are a number of cameras from Leica, Hasselblad, Sony, Nikon, Canon and Kodak, to name a few.

Adobe says the AI in play here has been trained on thousands of hand-edited photos, covering people, pets, food, architecture, museum exhibits, cars, ships, airplanes and landscapes. Different types of artificial lighting as well as natural light are covered, including variations for times of day and different seasons.

What are Adobe’s future plans for this feature? Quite a few, actually. They say the intention is to support non-raw files soon. “We’re looking into mechanisms for exporting a single JPEG that contains both our SDR and HDR looks,” is the official line for now. Available through Camera Raw for the time being, Adobe Adaptive will eventually find integration across Adobe’s photography ecosystem, including Lightroom.

Generative Extend in Premiere Pro: AI, to save your edits

First things first, this is still in the beta testing phase, which means real-world results may not always be perfect; it is a work in progress. As the name suggests, Firefly’s new generative video model provides the basis for this: extend clips to cover gaps in footage, smooth out transitions, or hold on shots longer for better editing refinement. In Premiere Pro, suppose your editing timeline has a few seconds’ worth of gap between two clips, or at least between where you’d want them to be. “Just click and drag the beginning or end of a video or audio clip to add lifelike, photorealistic video and audio extensions,” is how Adobe summarises it.
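As a rough illustration of the editing problem being solved (the names and numbers here are ours, purely for the example), the footage to be generated is simply the distance between two clips on the timeline:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    start: float  # timeline position where the clip begins, in seconds
    end: float    # timeline position where the clip ends, in seconds

def gap_to_fill(left: Clip, right: Clip) -> float:
    """Seconds of generated footage needed to bridge the gap, if any."""
    return max(0.0, right.start - left.end)

# Example: two shots with a 1.5-second hole between them.
print(gap_to_fill(Clip(0.0, 10.0), Clip(11.5, 20.0)))  # 1.5
```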

The training in use here draws from everything the Adobe Firefly Video Model has learnt from its massive data sets. Since Generative Extend is still in beta, it is limited to 1920×1080 or 1280×720 resolutions and to frame rates between 12fps and 30fps. Those limits should expand with time.
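Those stated beta constraints are simple enough to express as a check; a minimal sketch (the function name is ours, not part of Premiere Pro):

```python
# Resolutions and frame-rate range below are the beta limits as stated.
SUPPORTED_RESOLUTIONS = {(1920, 1080), (1280, 720)}

def eligible_for_generative_extend(width: int, height: int, fps: float) -> bool:
    """True if a clip fits the current Generative Extend beta limits."""
    return (width, height) in SUPPORTED_RESOLUTIONS and 12 <= fps <= 30

print(eligible_for_generative_extend(1920, 1080, 24.0))  # True
print(eligible_for_generative_extend(3840, 2160, 24.0))  # False: no 4K yet
```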

A key takeaway is that sound effects which may have cut off too soon whilst recording can be extended. What Generative Extend cannot do at this time is generate dialogue or scenes with spoken words; if those were cropped during recording, they cannot be mended. Extended video scenes will not have any dialogue, though the background score’s music can be extended, if chosen.

All Generative Extend creations will include Content Credentials labelling, which Adobe is working hard to build an industry consensus around. “We continue to innovate ways to protect our customers through efforts including Content Credentials with attribution for creators and provenance of content,” they say. Content Credentials are attached to any and all Firefly generations, giving each piece an identifier that records when, how and where it was made, and whether AI was involved. That helps distinguish AI generations from real visuals.
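To make that concrete, here is a loose sketch of the kind of provenance record such labelling carries. The field names are illustrative, not Adobe’s schema; the actual standard underpinning Content Credentials is the C2PA manifest format:

```python
from dataclasses import dataclass

@dataclass
class ProvenanceRecord:
    created_at: str    # when the asset was made, e.g. an ISO 8601 timestamp
    tool: str          # how: the application that produced it
    location: str      # where, if the creator chooses to disclose it
    ai_involved: bool  # whether generative AI contributed

record = ProvenanceRecord(
    created_at="2024-10-14T10:00:00Z",
    tool="Adobe Premiere Pro (Generative Extend)",
    location="undisclosed",
    ai_involved=True,
)
print(record)
```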


