On macOS, the inference is performed on the CPU (multi-threaded). On Linux, it is performed on the GPU (ML Engine Cache disabled) or on the Tensor cores (ML Engine Cache enabled). Therefore, you will get better performance on Linux.
We are only supporting CPU on macOS for now due to a technical limitation. We are working with our partners to remove this limitation for a future release.
Yes, they are. That means when someone creates a Batch Setup using an Inference node, the model is actually saved in the setup, like for Matchbox shaders. It can then be loaded on another machine that doesn't have the model on its file system.
For occasional users of the node who aren't familiar with the rest of the nodes required to recombine (you really need to read the documentation to know which blend modes to use when), it may be more obvious if the Frequency node handles the rebuilding. So basically a usability improvement, but otherwise functionally equivalent. It may be more helpful for 3-band than for 2-band.
Having the rebuild in the Frequency node could also make sure that the blend modes match the settings in the first node (assuming the data is linked) and are properly applied. It could avoid errors. A minor benefit in my mind.
Using the rebuild node does deprive you of access to the transparency in the comp node, unless it's exposed as a mix level in the rebuild section. That all just adds loads of complexity.
Generally speaking, it's an existing design pattern for other nodes, like the splitter/combiner, etc.
In my mind - a nice usability improvement. But there may be more important things to do.
Exactly what @allklier said and just because I love clean setups.
So using it would be something fast and simple ( Freq. -> Retouch -> Freq. ) and not ( Freq. -> Retouch -> Comp (check blending) -> Comp (check blending) ).
Something nice to have, therefore nothing crucial if we don't have it.
It would be great to compile a list of existing ML models we can use with the new Inference node. Is that in the documentation @fredwarren, or is that up to the user to test out stuff they discover on GitHub…
We have no intention of providing such a list, and we encourage you to do your own research into how the models were trained and on which data sets, if that is something that can be sensitive for you.
Like I said elsewhere, this is just great.
Being able to custom train or build a model for a specific project, and have the comp team run inference in comp, is a step in the right direction in my opinion.
Congrats on the release @fredwarren and adsk team!
I haven't tried any of the models yet, but I'd be curious why the Nuke results are better. Do you mean that with the same model the Nuke results are superior?
So, it looks like the underlying model can be exploited in different ways.
The Nuke render is faster, and the results are significantly better.
Since the underlying idea (Depth Anything) is the same, and my hardware is the same, it's all about the implementation, and by that I don't mean the Flame Inference node, I mean the ONNX model that I used.
I'm still knee-deep in the meaningless horror that is Python GUI, so I don't have more time to concentrate on it today, but I'll get back to it soon enough.
I tried the Depth Anything model as well; it seems to work, but my result only appears to be a white frame. I fed it a few different 4K shots. Running on an M1 Max with 64 GB.
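One possible cause (an assumption here, not a confirmed diagnosis) is that monocular depth models output raw, un-normalized depth values well above 1.0, which will display as a solid white frame. A quick way to check is to run the same ONNX file outside Flame and normalize the output yourself; the sketch below uses onnxruntime with a dummy frame, and the model path and 518x518 input size are placeholders to adjust for your file.

```python
# Quick sanity check outside Flame (assumption: the white frame comes from
# un-normalized depth values; model path and 518x518 size are placeholders).
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("depth_anything_v2_vits.onnx",
                            providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
print("model input:", inp.name, inp.shape)

# Dummy NCHW float32 frame in the 0-1 range, matching the assumed input layout.
frame = np.random.rand(1, 3, 518, 518).astype(np.float32)
depth = sess.run(None, {inp.name: frame})[0]

print("raw depth range:", depth.min(), depth.max())  # values >> 1.0 read as white
# Normalize to 0-1 so the result is viewable as an image.
depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-6)
```

If the raw range printed there is something like 0 to 20, the model itself is fine and the fix is just a normalize/scale step after the node.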
I haven't tried Flame 2025.1 myself yet though…
Depth Anything V2 for Nuke uses the Small model due to commercial licensing issues.
The current release contains the V2_Small model, which is the best model allowed commercially according to the original project's license terms. However, you can convert the higher-performing model to .cat format by following the compile instructions below.
I think we should use "depth_anything_v2_vits.onnx" if we want to compare with DAv2 for Nuke.
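For anyone who wants to generate that ONNX file themselves rather than downloading one, here is a rough export sketch. It assumes the upstream Depth-Anything-V2 repo and the ViT-S checkpoint are available locally; the class name and config values are taken from that repo, but treat the file names and settings as placeholders and double-check against the current code.

```python
# Rough export sketch (assumptions: the upstream Depth-Anything-V2 repo is on
# PYTHONPATH and the ViT-S checkpoint has been downloaded; adjust names/paths).
import torch
from depth_anything_v2.dpt import DepthAnythingV2  # from the upstream repo

# Config values for the 'vits' (Small) encoder, as published in the repo.
model = DepthAnythingV2(encoder="vits", features=64,
                        out_channels=[48, 96, 192, 384])
model.load_state_dict(torch.load("depth_anything_v2_vits.pth",
                                 map_location="cpu"))
model.eval()

# Fixed-size export at the 518x518 resolution the model was trained on.
dummy = torch.randn(1, 3, 518, 518)
torch.onnx.export(model, dummy, "depth_anything_v2_vits.onnx",
                  input_names=["image"], output_names=["depth"],
                  opset_version=17)
```

That gives you the same Small (ViT-S) variant the Nuke plugin ships with, which makes the comparison between the two implementations more of an apples-to-apples test.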
At the Logik NAB event in April, the Dev Team gave a presentation, and there was a great deal of feedback from those in attendance that we need new AI/ML tools in Flame. @fredwarren outlined some of the legal challenges that Autodesk, like everyone else, is facing right now vis-a-vis who owns the training data, etc. The challenge seemed daunting, and the Dev Team did a great job at explaining the situation.
Three months later, the Dev Team managed to ship a new version of Flame with a new ML Timewarp tool that is based on validated training so anyone can use it. Thatās just incredible, and speaks volumes to how hard the team works and how clearly they understand the playing field. I installed 2025.1 on Mac and Linux over the weekend. Seamless, flawless install. I used the new ML TW on a 6K grading job I was doing on Linux and the results are astounding.
Bravos all around on getting this out to us so quickly!!
I'd like to see an option to treat it as a read file, like you can with FBXs. Then it's up to you to make sure you archive the model somehow, but it keeps things manageable.