Project Archiving in the age of AI

An interesting topic just came up in another forum that’s worth a discussion / ponder.

As we all know, ML-based segmentation and all kinds of other tricks are fantastic time savers.

Except they’re not 100% deterministic, and ML is evolving at a rapid pace.

If you use a gMask, archive that batch, and open it 2 years later on another version of Flame, chances are it will be just fine. Like those Avid bins from 10 years ago that still load the timeline just like you left it.

But what about all those ML tools and masks? In the example that triggered this, it was Resolve: the next generation of Magic Mask doesn’t produce the same results as the previous version. So in that project you re-opened, suddenly your mask is no longer right. In that case they have a compatibility mode. But how many versions back will they maintain that?

The proper fix would actually be to render out that mask and save it as a matte. That bakes it down and preserves it just like the old days. But that’s more file space, and more time making and managing these renders.

And here ML was supposed to save us time, but potentially just shifted that time to another task??

May need a feature request to auto-bake an ML mask with fewer clicks.

Thoughts?

1 Like

I see what you’re saying about the big picture here. I’ve been thinking about this too. Similarly… what if you are two years down the road, the ML tool used was the “2025” version, and the current software has completely dropped that as an option?

I’m currently rendering out mattes from these ML tools because:

A. They are slowing everything down if you are constantly re-running them. I usually get a good result and I want to ‘cache’ it anyway.

B. Rendered out to EXR zip with a black and white matte is a very small storage footprint.
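To get a feel for why a black-and-white matte has such a small footprint: EXR’s ZIP compression is deflate-based, so running plain zlib over a synthetic matte gives a rough illustration of the ratio (a sketch of the principle, not an exact EXR file size):

```python
import zlib

# Synthetic matte: mostly black (0) with one white (255) region -- typical
# of a roto/segmentation matte. Sizes here illustrate deflate's behavior on
# flat data; a real EXR adds headers and per-scanline framing.
width, height = 1920, 1080
matte = bytearray(width * height)          # all black
for y in range(400, 600):                  # a 200x200 white square
    start = y * width + 800
    matte[start:start + 200] = b"\xff" * 200

raw_size = len(matte)
zip_size = len(zlib.compress(bytes(matte), level=6))
print(f"raw: {raw_size} bytes, deflate: {zip_size} bytes")
```

Long runs of identical pixels collapse to almost nothing, which is why zip-compressed mattes barely register on disk.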

2 Likes

Exactly. I think most/all those ML tools, especially those generating mattes, should have a ‘cache/persist’ feature built in, so we don’t have to do this manually. Also because it just clutters up the batch node tree.

Could be handled similarly to motion vector caches - except we’d need a way to import them 🙂

1 Like

I like the idea of a one-click “bake the ML matte” that has Flame just store it as a video clip.

Flame-generated multi-part, multi-channel, multi-compression, open EXR, openclip is the answer.
Mattes, motion, and all those good things.
But it’s non-trivial, otherwise it would already be part of flame.
I’m sure there’s some nuke/resolve/python workaround, but that requires effort, and possibly, a fuck to give.
If there’s a concerted effort to request ‘Flame-generated multi-part, multi-channel, multi-compression, open EXR, openclip’ then we might get somewhere…
Otherwise we can request a version-aware intelligent sidecar file that will generate and attach these passes on the fly in future flame…
Or not.
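For what it’s worth, even without native support, a version-aware sidecar could be approximated by hand today: write a small metadata file next to each baked matte recording which tool and model version produced it, and check it on re-open. A minimal sketch in Python, with hypothetical file names and a made-up metadata layout (not any vendor’s actual format):

```python
import json
import os
import tempfile

# Hypothetical sidecar stored next to a baked ML matte. If the recorded
# model version no longer matches the current one, the matte may drift
# and should be re-baked (or the baked render trusted over a re-run).
def write_sidecar(path, tool, model_version, source_clip):
    with open(path, "w") as f:
        json.dump({"tool": tool, "model_version": model_version,
                   "source_clip": source_clip}, f, indent=2)

def needs_rebake(path, current_version):
    with open(path) as f:
        meta = json.load(f)
    return meta["model_version"] != current_version

path = os.path.join(tempfile.gettempdir(), "sh010_magicmask.json")
write_sidecar(path, "magic_mask", "2025.1", "sh010_plate.exr")
print(needs_rebake(path, "2026.0"))  # True: model changed since the bake
```

The point isn’t the format; it’s that the archive carries enough information to know the mask can no longer be regenerated faithfully.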

If you’re working unmanaged, I’m a big fan of using the versioning system’s number and name to bake those mattes or precomps or uvs or depth maps into a single open clip stream.

So in the above, which is an example of denoise, I start with my write node fresh from the publish. I copy it and set the versioning logic to custom version. I set the version number to 100 for denoising pre-renders and then I change the version name to denoise.

When I render this I will have injected my denoise render into my open clip versioning such that it appears in the ribbon like so:

Using MVP’s Import Open Clip From Write Node Logik portal script, I can right click on the write node and immediately import that resulting denoise back into batch and I continue comping. This strategy works great for keeping disparate elements of a comp mixed into a single open-clip stream.
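The custom-version trick above could be sketched as a small naming helper, assuming a hypothetical path convention (the real layout comes from your publish template): pre-renders get a high version number (100+) and a descriptive name so they slot into the same open-clip stream without colliding with comp versions.

```python
# Illustrative only: mirrors the "custom version 100, named 'denoise'"
# convention described above. Real paths come from the publish template.
def prerender_path(shot, version, name, frame, ext="exr"):
    tag = f"v{version:03d}_{name}"
    return f"{shot}/{tag}/{shot}_{tag}.{frame:04d}.{ext}"

print(prerender_path("sh010", 100, "denoise", 1))
# sh010/v100_denoise/sh010_v100_denoise.0001.exr
```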

There’s another method as well if you want to keep those extra elements even more tightly interwoven. Change your main write-node from rgb-a to multichannel under format:

…and then increase the number of channels that you wish to store in the file. In this case I picked 4 channels to store various aspects of my comp. The comp, the handle paint work, the car-paint work and a random charger detail frame.

Next connect up all of those inputs into your write node. I’m not allergic to hiding mux connections so mine looks like this:

Back in the write node, name those extra channels something you can remember. You don’t need to get creative. To be honest you don’t have to do anything at all if you don’t want to–there are always default names assigned regardless of what you do:

…and then render. What’s nice about this is that it doesn’t destroy your openclip structure. Any open clips referencing that stream are none the wiser that you just packed the EXR they’re referencing with more channels than just rgba.

If you use the script for importing write-node openclips as I did above, you’ll get the same single rgba channel set node like we got from the denoise. Under the hood the extra channels are still all there, but the original openclip definition for the source references only a single channel set, so that’s a limitation.

But those channels ARE still there, and when you need access to them, you just right click on the multichannel write-node, use the Logik portal script Reveal Write File Path in Media Hub/Finder, and import the sequence back in–not as an openclip but as the actual multichannel exr sequence it is.

…at which point you have access to all of those channels back in batch, cached into an exr on disk and ready to go.

It sounds like it’s a pain, but it’s actually only a couple clicks–many of which revolve around connecting what you actually want to pre-render to the node. Once the initial setup is done it versions like every other write node and you’re always rendering the most current version of those elements (if you want to).

5 Likes

And then wrap it all up with Collect Media and Bob’s your uncle!

2 Likes

I love the idea of making the write node a multi-channel EXR and saving all the mattes and other assets along with the RGB. That’s a brilliant way of preserving all the processing for future reference. I’ll definitely adopt that going forward.

1 Like

Just mind your compression settings—I’m sure you already know this, but some flavors make more sense for storage, others for collaboration, and others still for pixel purity.

It’s a nifty trick. Another tip I’m sure you’re already aware of, but definitely experiment with a combine piped to a mux to route 4 different mattes (3 combined into an rgb in the combine node, which is passed to the front of the mux, with the 4th as alpha on the mux), then piped into a channel/layer set on the multichannel write, which of course you can then reimport—it’s the closest Flame has to a shuffle node and doubles as a reallllly handy caching process. Now if that could all happen under the hood…
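The combine-to-mux packing described above is essentially interleaving four single-channel mattes into one RGBA buffer and pulling them back apart on reimport. A numpy sketch of just the data layout (Flame does this on the GPU inside the combine/mux nodes; this only illustrates the idea):

```python
import numpy as np

# Four single-channel mattes (tiny 4x4 frames for illustration).
h, w = 4, 4
mattes = [np.random.rand(h, w).astype(np.float32) for _ in range(4)]

# "Combine + mux": pack three into R/G/B and the fourth into A.
packed = np.stack(mattes, axis=-1)               # shape (h, w, 4) == RGBA

# "Shuffle" on reimport: split the channels back out, unchanged.
r, g, b, a = (packed[..., i] for i in range(4))
assert all(np.array_equal(m, c) for m, c in zip(mattes, (r, g, b, a)))
```

Packing is lossless as long as the file compression is too, which is why the lossless zip/piz flavors matter for the matte write node.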

1 Like

Yes, was just playing with it in a batch from a recent job.

What seems to be a good setup is two write nodes.

batchname_fill: multi-channel exr, dwab compression - various named beauty passes (denoise, cached intermediates, and final result)

batchname_key: multi-channel exr, zip/piz compression - bundle 3 mattes via combine, 4th matte straight in, after that add four more with next channel.

All linked to mirror the batch iteration number (or custom if preferred).

Lossless compression is heavier, but it preserves matte detail where it matters; DWAB keeps the fill passes space-efficient.
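The two-write-node split above could be captured as a small config sketch (names and keys here are illustrative, not a Flame API): lossy DWAB for beauty passes, lossless zip for mattes.

```python
# Illustrative summary of the two-write-node pre-render scheme described
# above -- not a real Flame data structure.
prerender_writes = {
    "batchname_fill": {"format": "multi-channel exr",
                       "compression": "dwab",     # lossy, space-efficient
                       "channels": ["denoise", "intermediates", "comp"]},
    "batchname_key":  {"format": "multi-channel exr",
                       "compression": "zip",      # lossless, matte-safe
                       "channels": ["matte_rgb_bundle", "matte_a"]},
}

for name, cfg in prerender_writes.items():
    print(name, cfg["compression"])
```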

1 Like

That’s the ticket. The zip scanlines are very Nuke-friendly.