Generative AI in Premiere Pro

Just another entry in a long list of tools


1 Like

I like how all the demo shots are lockoffs or almost lockoffs with talent straight to camera.

We've messed with Runway and it's impressive what it can do when it works, but whether it works is a crapshoot.

3 Likes

Exactly my experience too!!

Generative AI is impressive when it works, but often only works on shots which would be quick to comp yourself anyway and that you would have full control of.

I've tried a few AI object removal tools on shots with a moving camera and some degree of parallax, and the results have not been good enough; it is harder to fix them than to just do the shots yourself using traditional methods.

I should also add that we are definitely avoiding any generative AI on actual shots due to the grey area around legality.

2 Likes

I was watching a video review of kitchen gadgets. The gadgets were cheap, each one purpose-built to solve one problem. Many, as the chef repeatedly pointed out, could be replaced with a knife. None of them required developing a skill. None of them will develop skills with use. They are buttons you push to solve a problem, usually poorly.

I had the same feeling when I saw the shot extender.

Maybe it's fine if people just drag out the shot a little and the AI manages to do fine with it, but that means no one will learn. AI will cover asses, fewer 'well, I'm never doing THAT again' lessons will be learned, and the state of the art will diminish.

Obviously I don't think AI will actually cover many asses. The tech seems very close to topping out, if it hasn't already.

9 Likes

I'm wondering about the usability of the feature in a typical offline/online workflow. I'm really interested in all these new tools and can't wait to play with them, but not without caution around repeatability and workflows.

That shot extender is definitely a cool concept, but my first thought was: if it's done on the ProRes proxy and not the camera RAW that will be used for the grade, what happens when it's time for the online?
Similarly, when that object removal gets done on the offline proxy, will the result be the same when you ultimately have to do it again with the graded footage?

1 Like

I even wonder whether generative AI can produce anything above 8-bit images. And could it create images in different colour spaces, such as ACES or LogC, to match the rest of the film?
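To make the 8-bit concern concrete, here is a toy numpy sketch (purely illustrative, not tied to any particular tool's output) of why an 8-bit render can't simply be promoted into a wider container later:

```python
import numpy as np

# A smooth ramp quantised to 8-bit vs 10-bit code values. Promoting the
# 8-bit version into a 10-bit container afterwards cannot invent the
# missing levels; the gaps are what read as banding in gradients.
ramp = np.linspace(0.0, 1.0, 4096)

as_8bit = np.round(ramp * 255) / 255
as_10bit = np.round(ramp * 1023) / 1023

print(np.unique(as_8bit).size)   # 256 distinct levels
print(np.unique(as_10bit).size)  # 1024 distinct levels
```

Those missing code values are exactly what shows up as banding once the footage gets pushed around in a grade.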

The shot extension would, I suspect, work similarly to the ML timewarp from Talosh, so you would think this could be accounted for.

1 Like

A question I'm very much interested in. In looking into it, I have found very scant information: only a single obscure tool that claims to restore the bit depth of videos, and largely no research papers or other statements on the topic. Some specific threads clarify that tools like Topaz.AI improve spatial resolution but will not modify bit depth.

That leads to three possibilities:

a) It's not an issue and not worth talking about (doesn't pass my sniff test)
b) It's likely to become a big problem, but the current early adopters have not identified it as a risk yet, and thus not much has been done (quite possible; other things have kept it from bubbling to the surface, or most tools are used only by the semi-pro community)
c) Some people know, but there are no easy answers, so nobody wants to be the one ruining the party (most likely)

I'll stay on the trail and share what I find over time.
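In the meantime, one quick check anyone can run: ask ffprobe (ships with ffmpeg) what pixel format a tool's output actually uses. A minimal Python sketch, where clip.mov is a placeholder path:

```python
import json
import subprocess

# Ask ffprobe how the clip is actually encoded.
# "clip.mov" is a placeholder; point it at the tool's output file.
result = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_entries", "stream=pix_fmt,bits_per_raw_sample",
     "-of", "json", "clip.mov"],
    capture_output=True, text=True, check=True,
)
stream = json.loads(result.stdout)["streams"][0]
# e.g. yuv420p means 8-bit; yuv422p10le means 10-bit
print(stream.get("pix_fmt"), stream.get("bits_per_raw_sample"))
```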

3 Likes

My thoughts exactly on the 8-bit-only images. Unfortunately, I think there are a lot of video creators who don't really think about the bit depth of images, especially when the end result just ends up on the insta-toks.

I think we'll see two camps. The high-end folks, who care very much about keeping the images as high quality as possible from camera to delivery.

And the second camp, which is just cranking stuff out as fast and cheap as possible.

Knowing how fast all this is moving, I'm sure these models will be able to deliver images better than 8-bit, that is, if the people building these tools think it's worth their time.

We didn't even start to talk about grain matching with these object removals :face_with_peeking_eye:
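For anyone curious what's involved, the classic fix is to re-grain the patched area from the surrounding plate. A toy numpy/scipy sketch of the idea (the function and arrays are hypothetical, not any tool's API):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regrain(plate, patch, mask):
    """Re-apply plate grain to a (typically grain-free) AI patch.

    plate, patch, mask: float32 arrays of the same shape, values in [0, 1];
    mask is 1.0 where the patch sits. Toy version for illustration only.
    """
    # estimate grain as the high-frequency residual of the plate
    grain = plate - gaussian_filter(plate, sigma=1.5)
    # in practice you'd sample grain from a flat reference area, since the
    # residual under the patch still contains edges of the removed object
    return np.where(mask > 0, np.clip(patch + grain, 0.0, 1.0), plate)
```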

1 Like

The problem is access to material. There is not enough that is public and unrestricted. Private and commercial entities will step in and build private models. But these will be very expensive and closely guarded. Your average person won't have access to them or be able to afford them, which will negate the cost/time savings everyone is hoping for.

2 Likes

So very true, plus of course the legality conversation around every model being built from "public" content.

But if Adobe wants the high-end folks to stay in their ecosystem to use these tools, it might be worth some of their billions to invest in capturing and creating material that can be used in these models.
They are offering to pay people for their content.

But if it's only training on H.264 files, does that even help the bit depth problem?

1 Like

It helps with the legality, not the other factors.

And it's not only 10-bit content, but full camera dynamic range (e.g. log files), so it can be cut with that type of material.

At NAB several people were talking about studio efforts to re-process their libraries for HDR. Massive effort. But that content has been tone mapped and detail lost. ML models trained on proper material could help restore some of this.

1 Like

I really think these features are for people who finish their projects in Premiere. I share an office with a company that does almost everything in Premiere. Very rarely do they send a project to online/grade/VFX outside Premiere; actually, I don't think they ever did before I moved into the office. They were not impressed by the demo video, but I can see them using some of these features if they are in trouble and need a bit of help but don't have the time or money to use me or another freelancer.

4 Likes