It's a good day - Flame 2025.1 is here! 🔄🚀

On macOS, the inference is performed on the CPU (multi-threaded). On Linux, it is performed on the GPU (ML Engine Cache disabled) or on the Tensor cores (ML Engine Cache enabled). Therefore, you will get better performance on Linux.

We are only supporting CPU on macOS for now due to a technical limitation. We are working with our partners to remove this limitation for a future release.

4 Likes

Yes they are. That means when someone creates a Batch Setup using an Inference node, the model is actually saved in the setup, as with Matchbox shaders. It can then be loaded on another machine that doesn't have the model on its file system.

4 Likes

Could you explain to me what the benefits would be of rebuilding with another Frequency node rather than a Comp node?

There may be two benefits:

For occasional users of the node who aren't familiar with the rest of the nodes required to recombine (you really need to read the documentation to know which blend modes to use when), it may be more obvious if the Frequency node handles the rebuilding. So basically a usability improvement, but otherwise functionally equivalent. It may be more helpful for 3-band vs. 2-band.

Having the rebuild in the Frequency node could also make sure that the blend modes match the settings in the first node (assuming the data is linked) and are properly applied. It could avoid errors. A minor benefit in my mind.

Using the rebuild node does deprive you of access to the transparency in the comp node, unless it's exposed as a mix level in the rebuild section. That all just adds loads of complexity.

Generally speaking it's an existing design pattern for other nodes - like the splitter/combiner, etc.

In my mind - a nice usability improvement. But there may be more important things to do.

Exactly what @allklier said and just because I love clean setups.

So using it would be something fast and simple ( Freq. -< Retouch >- Freq. ) and not ( Freq. -< Retouch > Comp (check blending) > Comp (check blending) ).

Something nice to have, but nothing crucial to be without :smiley:

2 Likes

It would be great to compile a list of existing ML models we can use with the new Inference node. Is that in the documentation @fredwarren, or is it up to the user to test out stuff they discover on GitHub…

3 Likes

We have no intention of providing that list, and we encourage you to do your own research into how the models were trained and with which data sets, if that is something that can be sensitive for you.

6 Likes

Like I said elsewhere, this is just great.
Being able to custom train or build a model for a specific project, and have the comp team run inference in comp, is a step in the right direction in my opinion.
Congrats on the release @fredwarren and adsk team! :tada:

6 Likes

I tried the ONNX in Flame.
The results from the Nuke plugin are better.
I just installed Netron and Zetane, so I will begin a new adventure soon.

I haven't tried any of the models yet, but I'd be curious why the Nuke results are better. Do you mean that with the same model the Nuke results are superior?

Actually it's not exactly the same model; the ONNX model has a 512Ɨ512 pre-resize before it calculates the depth…
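If anyone wants to double-check that, the expected input resolution of an ONNX file can be read with onnxruntime outside of Flame. A minimal sketch, assuming the model file name mentioned later in this thread:

```python
# Minimal sketch: print the input tensor shape of an ONNX model to confirm
# whether it expects a fixed pre-resize (e.g. 512x512) before inference.
# The file name is just an example from this thread.
import onnxruntime as ort

session = ort.InferenceSession("depth_anything_v2_vits.onnx",
                               providers=["CPUExecutionProvider"])
for inp in session.get_inputs():
    # A fixed shape like [1, 3, 512, 512] means the frame is resized to that
    # resolution before the depth is calculated.
    print(inp.name, inp.shape, inp.type)
```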

1 Like

I tried this ONNX model with a clip in Flame.

I resized it to 1920 Ɨ 1080 since the Nuke license is a demo and restricted to those dimensions.

I used this tool in Nuke 15.

So, it looks like the underlying model can be exploited in different ways.

The Nuke render is faster, and the results are significantly better.

Since the underlying idea (Depth Anything) is the same, and my hardware is the same, it's all about the implementation, and by that I don't mean the Flame Inference node, I mean the ONNX model that I used.

I'm still knee-deep in the meaningless horror that is Python GUI, so I don't have more time to concentrate on it today, but I'll get back to it soon enough.

This page explains how to convert the model to ONNX but there is no precompiled version.
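For reference, that conversion is essentially a torch.onnx.export call. The sketch below is only an outline: the module path, config values, checkpoint name and input size are assumptions taken from the project's README, so follow its own conversion page for the exact recipe.

```python
# Rough outline of exporting a Depth Anything V2 checkpoint to ONNX.
# Module path, config values, checkpoint name and input size are assumptions
# taken from the project's README - not a verified recipe.
import torch
from depth_anything_v2.dpt import DepthAnythingV2  # assumed module path

model = DepthAnythingV2(encoder="vits", features=64, out_channels=[48, 96, 192, 384])
model.load_state_dict(torch.load("depth_anything_v2_vits.pth", map_location="cpu"))
model.eval()

dummy = torch.randn(1, 3, 518, 518)  # assumed input resolution (the thread mentions 512x512)
torch.onnx.export(model, dummy, "depth_anything_v2_vits.onnx",
                  input_names=["image"], output_names=["depth"],
                  opset_version=17)
```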

I tried the Depth Anything model as well. It seems to work, but my results only appear as a white frame. I fed it a few different 4K shots. Running on an M1 Max with 64 GB.

1 Like

High depth values… expose it down.
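To expand on that a little: the raw depth output sits well above 1.0, so it clips to white until you expose it down or normalize it. A quick numpy illustration of the idea (not Flame-specific, with made-up values):

```python
# Why an un-normalized depth map looks like a white frame: values far above 1.0
# clip to white when viewed directly, so expose down or normalize first.
import numpy as np

depth = np.random.uniform(0.0, 80.0, size=(1080, 1920)).astype(np.float32)  # stand-in depth values

clipped = np.clip(depth, 0.0, 1.0)  # what you see without adjustment: almost everything is 1.0 (white)
normalized = (depth - depth.min()) / (depth.max() - depth.min() + 1e-6)  # detail comes back

print(f"clipped mean: {clipped.mean():.3f}, normalized mean: {normalized.mean():.3f}")
```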

2 Likes

Ah yes, that was it, thank you!

1 Like

I haven't tried Flame 2025.1 myself yet though…

Depth Anything V2 for Nuke uses the Small model due to commercial licensing issues.

The current release contains the V2_Small model, which is the best model allowed commercially according to the original projectā€™s license terms. However, you can convert the higher performing model to .cat format by following the compile instructions below.

I think we should use ā€œdepth_anything_v2_vits.onnxā€ if we want to compare with DAv2 for NUKE.

I can't wait to try 2025.1 anyway!

@Hiroshi - yes, I was hoping that the ONNX existed for the Large Model but it appears that it is not readily available.

I haven't had time to learn Netron or Zetane yet, but as soon as I do I'll create an ONNX file.

The Nuke implementation is a one-click process.

The Flame Inference node did not produce an identical result from identical source media.

1 Like

Something that I think is worth mentioning…

At the Logik NAB event in April, the Dev Team gave a presentation, and there was a great deal of feedback from those in attendance that we need new AI/ML tools in Flame. @fredwarren outlined some of the legal challenges that Autodesk, like everyone else, is facing right now vis-Ć -vis who owns the training data, etc. The challenge seemed daunting, and the Dev Team did a great job of explaining the situation.

Three months later, the Dev Team managed to ship a new version of Flame with a new ML Timewarp tool that is based on validated training so anyone can use it. Thatā€™s just incredible, and speaks volumes to how hard the team works and how clearly they understand the playing field. I installed 2025.1 on Mac and Linux over the weekend. Seamless, flawless install. I used the new ML TW on a 6K grading job I was doing on Linux and the results are astounding.

Bravos all around on getting this out to us so quickly!! :clap:t2::clap:t2::clap:t2:

31 Likes

While I see the logic (trĆØs Flame), won't this result in our setups exploding in size? Let's say the model is 1 GB and you have it in 10 batchgroups, each with 10 iterations… that's 100 GB, which is bonkers.

I'd like to see an option to treat it as a read file, like you can with FBXs. Then it's up to you to make sure you archive the model somehow, but it keeps things manageable.

2 Likes

Great point.