Flame 2025 native ML Timewarp

Has anyone got any thoughts on the native ML Timewarp in 2025, especially compared to @talosh’s? I’m mainly interested in the quality of the results rather than performance, since performance is so hardware-dependent. We’re planning to upgrade from 2024 to 2025 across all our Flames in the next few weeks and are wondering if there’s anything to look out for.
Thanks

I don’t have any major first-hand evidence, but I remember that when this feature was in beta there were quite a few comparisons being looked at, and it seemed at least on par.

And one caveat: I think the comparison was not tool vs. tool but model vs. model, i.e. the model trained on Autodesk’s data vs. the one trained on Vimeo90K.

I have run tests and found similar results.

One thing I would really like is the ability to run multiple channels through the native Timewarp and have the same inbetweens occur.

The reason is that colourists want individual mattes for multiple elements in a scene. If the director decides to apply a speed effect to an approved VFX shot that already has multiple mattes delivered, being able to retime those mattes the same way would be a very useful feature.

Or you have rendered CG and they want to retime the animation in the shot, so you need to adjust all your passes simultaneously so that they stay matched.

3 Likes

Totally makes sense.

Not sure how this could work, unfortunately, given the nature of ML tools. You would almost need an equivalent of motion vectors (figuratively speaking) from the main RGB pass, recording the pixel operations so that you could repeat them on the matte layer.

You can currently run a single matte channel through and it works.

I’m thinking about trying duplicates of the same MLTW node, using the same source as the front but passing different mattes through the matte channel of each node.

Interesting, and good experiment.

You can do this; just put your alternate channels into the Matte tab. Say you have three additional mattes: combine them into an RGB image, then feed that into the Matte tab. If you have more than three, add more nodes and repeat the same setup.
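For anyone doing the packing outside of Batch, here is a minimal sketch of the same idea in Python with NumPy. The array shapes and variable names are illustrative assumptions, not part of the Flame node setup described above:

```python
import numpy as np

# Hypothetical single-channel mattes, each a float array of shape (height, width).
matte_a = np.zeros((1080, 1920), dtype=np.float32)
matte_b = np.zeros((1080, 1920), dtype=np.float32)
matte_c = np.zeros((1080, 1920), dtype=np.float32)

# Pack the three mattes into the R, G and B channels of one image,
# mirroring the "combine them into RGB, then into the Matte tab" setup.
packed = np.stack((matte_a, matte_b, matte_c), axis=-1)  # (height, width, 3)

# After the timewarp, split the result back out into individual mattes.
matte_a_tw, matte_b_tw, matte_c_tw = (packed[..., i] for i in range(3))
```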

1 Like

The quality of the results isn’t quite as good, but that is more than made up for by the ability to use it in Batch. The speed and ease of iteration you get in Batch increase the use cases significantly.

3 Likes

Puzzlemattes :slight_smile:

I tried it and didn’t think it was as good.

This.

1 Like

Forgot to say, thanks everyone for these thoughts. Really useful to know. :+1:

This should not be a problem, as it just produces two optical flow passes and a matte to mix the warped images together. One can use the RGB to predict the flow and then warp everything else accordingly with no issue.
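For illustration, a minimal PyTorch sketch of that structure, assuming a model that outputs two flow fields toward the in-between time and an occlusion/blend matte. The function names, shapes and flow channel order are assumptions for the sketch, not ML Timewarp’s or Flame’s actual code:

```python
import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    """Sample img (N, C, H, W) at positions displaced by flow (N, 2, H, W), flow in pixels (x, y)."""
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=img.device),
                            torch.arange(w, device=img.device), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float()             # (2, H, W)
    coords = base.unsqueeze(0) + flow                       # (N, 2, H, W)
    grid_x = 2.0 * coords[:, 0] / (w - 1) - 1.0             # normalise to [-1, 1] for grid_sample
    grid_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)            # (N, H, W, 2)
    return F.grid_sample(img, grid, mode="bilinear", align_corners=True)

def interpolate(rgb0, rgb1, extra0, extra1, flow_t0, flow_t1, occ):
    """flow_t0/flow_t1 and occ are predicted from the RGB pair; reuse them on any extra channels."""
    rgb_t   = occ * backward_warp(rgb0, flow_t0) + (1 - occ) * backward_warp(rgb1, flow_t1)
    extra_t = occ * backward_warp(extra0, flow_t0) + (1 - occ) * backward_warp(extra1, flow_t1)
    return rgb_t, extra_t
```

Because the flows and the mix matte are computed once from the RGB, any extra channels warped this way stay pixel-aligned with the interpolated RGB.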

What kept me from doing it was the difficulty of reading/writing EXR files without dependency libraries.

I seem to have found a way to package and use OpenImageIO, so timewarping multiple channels should come in the next dev version of ML Timewarp.
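As a rough sketch of what multichannel EXR I/O could look like with OpenImageIO’s Python bindings: the file paths and the placeholder where the timewarp would run are made up for illustration, and this is not ML Timewarp’s actual code.

```python
import OpenImageIO as oiio

# Read every channel of a multichannel EXR into one float array.
src = oiio.ImageInput.open("/path/to/frame.0001.exr")
spec = src.spec()
pixels = src.read_image(0, 0, 0, spec.nchannels, "float")  # (height, width, nchannels)
src.close()

# ... run the RGB through the model, warp the remaining channels with the same flows ...
result = pixels  # placeholder for the timewarped frame

# Write the result back out as 16-bit half, keeping the original channel names.
out_spec = oiio.ImageSpec(spec.width, spec.height, spec.nchannels, "half")
out_spec.channelnames = spec.channelnames
dst = oiio.ImageOutput.create("/path/to/out.0001.exr")
dst.open("/path/to/out.0001.exr", out_spec)
dst.write_image(result)  # float data is converted to half on write
dst.close()
```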

Flame devs can definitely do the same, if it’s not there already.

2 Likes

I’ve been curious about something.
I prescribe an OpenClip workflow that uses 16-bit OpenEXR files in an ACES colorspace.
So when these files are used in Flame, they are soft-imported and governed by a .mio file.
If a Flame clip/sequence were to be processed with your tool, the OpenEXR files would not need to be exported, since they already exist.
When you surface from your workload, would you be interested in seeing whether we can combine this unmanaged-media workflow with your technology?
You can direct message me through this forum and we can take it offline.
Thanks again for all of your generous efforts.
You have positively affected the working lives of thousands of Flame artists.

4 Likes