Midjourney AI to Runway ML to Flame

I’ve been experimenting with AI video to create a proof of concept for a Noir film, and I’ve found that AI video generation commonly produces heavy artifacts and duplicate frames. This is my workflow for cleaning up clips from Runway ML.
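Flame makes the dupes easy to spot on the timeline, but if you want to pre-scan renders before bringing them in, here is a minimal Python/OpenCV sketch of duplicate-frame detection (the filename and threshold are placeholders, not part of my actual workflow):

```python
# Flag near-duplicate frames in a rendered clip with OpenCV.
# "clip.mov" and the 0.999 threshold are illustrative; tune per source.
import cv2

def find_duplicate_frames(path: str, threshold: float = 0.999) -> list[int]:
    """Return indices of frames that are near-identical to the previous frame."""
    cap = cv2.VideoCapture(path)
    dupes, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # Normalized cross-correlation of consecutive frames;
            # scores near 1.0 usually mean the generator held a frame.
            score = cv2.matchTemplate(gray, prev, cv2.TM_CCOEFF_NORMED)[0][0]
            if score >= threshold:
                dupes.append(idx)
        prev, idx = gray, idx + 1
    cap.release()
    return dupes

print(find_duplicate_frames("clip.mov"))
```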

6 Likes

Very cool!
One thought that might save a step: why not just “fit to fill” your shot into the master sequence? If you have IN and OUT marks set for both the master and the source, Timewarp should calculate the retime automatically. Then take a copy of the TW’d clip from your sequence, move it over into a reel, right-click to add TWML, and choose ML from Flame’s TW effect.
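For anyone who hasn’t used it, the retime fit-to-fill calculates is just the ratio of the two marked ranges. A toy sketch of that arithmetic (made-up frame counts, not Flame internals):

```python
# What "fit to fill" computes: the Timewarp speed that maps the
# source IN/OUT range onto the record IN/OUT range.
# The frame counts below are made-up examples.

def fit_to_fill_speed(source_frames: int, record_frames: int) -> float:
    """Speed (%) needed for source_frames to fill record_frames."""
    return 100.0 * source_frames / record_frames

# 48 source frames stretched to fill 96 record frames -> 50% speed.
print(fit_to_fill_speed(48, 96))   # 50.0
# 96 source frames squeezed into 48 -> 200% speed.
print(fit_to_fill_speed(96, 48))   # 200.0
```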

2 Likes

Very cool experiment. Thanks for sharing.

I feel like these tools are fantastic for pre-viz, but the results are still too unrealistic at the whole-shot level to be taken very seriously. I’ve had more luck with small components to fix issues, like filling in gaps in hair, etc.

1 Like

Hey Nick,

I’ll try doing it that way again; the issue I was having was that it rendered out without enough frames on the tail, even though I compensated for that before rendering.
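For what it’s worth, the tail shortfall is easy to sanity-check before rendering. Rough sketch with hypothetical numbers (just the arithmetic, not anything Flame exposes):

```python
# A constant-speed retime consumes record_frames * speed / 100 source
# frames; if the source clip is shorter than that, the render runs out
# of media at the tail. All numbers below are hypothetical.
import math

def source_frames_needed(record_frames: int, speed_pct: float) -> int:
    return math.ceil(record_frames * speed_pct / 100.0)

record, speed, source_len = 100, 120.0, 115   # hypothetical clip
needed = source_frames_needed(record, speed)  # 120 source frames needed
shortfall = max(0, needed - source_len)
print(f"need {needed}, have {source_len}, short {shortfall} at the tail")
```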

Hey Jan,

I agree 100%. I see this more as a workflow for proof-of-concept sequences on projects. I’ve found Midjourney super helpful for pitching projects to clients and for giving concept artists / production designers a starting reference.

~ AM

Hmm. I don’t have the same clips to work with here, but “TW from Flame’s TW effect (beta)” always matches my clip length 1:1, as long as the Flame TW soft FX are applied to the source clip.

1 Like