AI Tools in Production


Hello,
I'm very interested in the current crop of AI tools.
I wanted to hear about your experiences with the different tools and whether you use them just for fun or in production.
In our company we already use AI in many different places in our production pipeline, be it to generate patches for cleanup, for deepfakes, or for dubbing.
I'm excited to see where the journey takes us and believe we should take it seriously.

I look forward to your comments.

I’ve been using Midjourney to create bits and pieces that I need for sky replacements, set extensions, etc. It gives me plenty of options for things that I can’t draw and can easily modify.

I use Nuke’s CopyCat for a bunch of roto and cleanup tasks when I have a high volume of shots. It’s not perfect, but it can get me 90% of the way there if the volume of shots justifies the time to train, test, etc.

I’ve also been using Runway for really rough roto. I recently had some green screen shots where the talent had some green on their shirt. Runway gave me shitty (but totally adequate) mattes quickly.

Finally, I use @talosh’s amazing ML Timewarp tool, but I’m curious to try the new beta of Twixtor. If I can get similar results in an OFX it’s totally worth the money.

Does Twixtor work in the timeline now? I haven’t used it since the IRIX days when dinosaurs roamed the Earth.

I’ll letcha know!

Maybe ADSK has a surprise for us at NAB.

I suspect RevisionFX just has less to lose / less-scared lawyers.

I used Runway’s greenscreen to do rough roto until recently, when I realized that “Magic Mask” in Resolve provided a much better result, didn’t require me to send my footage out of house for processing, and didn’t require an additional subscription (we have a bazillion Resolve Studio licenses).

I have used Runway and Stable Diffusion to make comp elements a few times: trees, graffiti, park benches, brick walls, etc. Back in the old days, everybody scanned elements like this from art books and magazines, so I don’t think there’s too much of a difference.

I use Stable Diffusion all the time for cleanup and matte paintings. I usually access it through A1111, DiffusionBee (low-key an amazing SD toolset for Mac), and Invoke.ai.

Pika labs and Genmo for abstract design elements.

AVC Labs for upscaling video (great for anything with faces)

Runway for pretty much everything else

I tried using Runway for some matte painting/extension stuff, but I don’t think they like me very much. This is what they returned:

For rotoscoping, I often use the Roto Brush in After Effects, especially for more complex scenes with hair detail. Since it’s a bit heavy, I try to optimize it by downscaling the resolution and then scaling the matte back up within Flame.
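The downscale-then-upscale pattern is worth spelling out, since it applies to any heavy matte tool. A toy sketch in plain Python (nearest-neighbor resizes on a 2D list of floats — my own illustration of the general idea, not the actual Ae/Flame operations, which use proper filtering):

```python
def downscale(matte, factor):
    """Toy nearest-neighbor downscale of a matte (2D list of floats)."""
    return [row[::factor] for row in matte[::factor]]

def upscale(matte, factor):
    """Toy nearest-neighbor upscale back toward the original resolution."""
    out = []
    for row in matte:
        wide = [v for v in row for _ in range(factor)]   # repeat each column
        out.extend(list(wide) for _ in range(factor))    # repeat each row
    return out

# Toy 4x4 matte: a 2x2 white square in the top-left corner.
full_res = [[1.0, 1.0, 0.0, 0.0],
            [1.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 0.0, 0.0],
            [0.0, 0.0, 0.0, 0.0]]

small = downscale(full_res, 2)   # the heavy roto step would run on this
restored = upscale(small, 2)     # then bring the matte back to full res
```

The win is that the expensive roto step only sees a quarter (or less) of the pixels; mattes are usually soft-edged enough that the round trip costs little.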

I use Midjourney and Stable Diffusion to generate environment frames, passing the results through Magnific AI for upscaling and improving details and textures. I also use Photoshop (the web version when on Linux) for outpainting and cleanup.

For frame interpolation or morphing, I use TimewarpML, and the SimpleML tool when I need to train clean tracking points when dealing with numerous screen shots.
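For context on what these timewarp tools improve on: the naive non-ML baseline is a straight frame blend, which cross-dissolves pixels instead of moving them. A toy sketch in plain Python (frames as flat lists of pixel values — my own illustration, not TimewarpML's actual method):

```python
def blend_frames(frame_a, frame_b, t):
    """Naive in-between at time t in [0, 1]: per-pixel linear blend.

    This produces the classic ghosting / double-exposure look on motion.
    ML timewarp tools instead estimate motion so that pixels travel
    between the two frames rather than cross-dissolving.
    """
    return [(1.0 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

# Halfway in-between of two tiny one-dimensional "frames":
mid = blend_frames([0.0, 100.0], [100.0, 100.0], 0.5)  # [50.0, 100.0]
```

Anywhere a pixel changes value between the frames, the blend averages it instead of moving it, which is exactly the artifact the ML approaches avoid.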

I’ve been working on an extension to use multiple generative AI models directly inside After Effects. It’s called Ziframe (https://ziframe.com). I built it for myself to address some of the pain points of using AI while working in After Effects:

  • Constant context switching between Ae and the browser
  • The very tedious flow of generating an image with AI → downloading the file → moving it to the project folder → importing it into Ae → moving it to the timeline → making adjustments in After Effects → exporting the image → opening a video generation tool to make it move → etc.
  • Juggling between so many tools and subscriptions

Quick demo: https://youtu.be/82mRQL1_J0M

The idea of Ziframe is to have a clean extension inside After Effects for using multiple image and video models to generate, edit, upscale and extend assets.

It takes your input, produces the image or video for some credits, then saves the assets on disk next to your project and imports them into a new Ae composition. You can also create multiple things at the same time with the built-in queue.

That’s it. Just a simple tool to stay in the flow.

I hope this can help other people and improve your workflow as well. It’s still in early development, and I’d be happy to get feedback on it.
