Flame’s AI-powered toolset continues to grow with a new ML mode in Timewarp for time remapping and an ML Inference node for applying custom ONNX models to clips. Frequency node and colour-coding updates, as well as more enhancements requested by you, are also part of this update.
Please beware and do your own testing. During the beta, installing 2025.1 in our infrastructure broke licensing, and we were unable to run ANY version of Flame. This has been known for months and is as yet unresolved; Dev knows about it. Do your own testing. This may or may not be specific to our infrastructure (Linux).
You shouldn’t expect the performance to be as fast as the other modes, since new frames need to be generated. Please don’t compare the speed to the other modes in Flame, but to other ML retiming solutions out there.
The performance also highly depends on the speed you have set. Please have a look at the video on the Flame Learning Channel. Jeff explains why very well.
Thanks for chiming in, Fred. That is exactly what I meant…apologies if it did not come across in my post.
I meant “how fast is a Mac Studio vs a Linux box with a late-model supported Quadro at rendering the same ML Timewarp?”
On earlier versions of Talosh’s TimewarpML, Linux could render a frame in 5 seconds that took a minute+ on a new Mac. In my recent testing, ADSK-supplied ML tools were quite performant on the Mac Studio. I am curious if this is still the case with the new Timewarp.
@ALan - no
you have to download the ONNX files and apply them in the Inference node (and hope that they work on your system).
The Nuke pathway is easy since someone already did the heavy lifting.
I can do it over the next couple of days once I’ve finished some other stuff, or @cristhiancordoba can share his setup and spare us the duplicated labor.
Not in this case…maybe other models need it but this one was one-click go.
I found that whoever compiled this model embedded input resizing to 518x518. This is a flag you can use in the standalone PyTorch version if you have a small GPU, but it is not mandatory. So we’ll need to figure out how to bypass that pre-resizing. I tried creating a JSON with the inference builder, but it didn’t work. Maybe it will need a better PyTorch-to-ONNX conversion.
I am surprised at the super features that have been added. I told the IT dude directly to install the version for me.
I’m really looking forward to trying everything out!
Great update, looking forward to playing around with the Inference node. Only sad that the Frequency node doesn’t have a rebuild function, like Lumps does, so you need to add comp nodes or a userbin preset every time.
It’s worth acknowledging that with 2025.1 ADSK has listened to our (at times quite harsh) feedback on the need to pick up ML features, and particularly to implement an official version of MLTW.
Folks had some strong words over the last two years about ADSK relying on others to implement such important features. And folks were equally outspoken about the lack of progress on any type of ML feature once awareness of legal issues made big companies pause and fall behind the open-source community.
Both aspects have made significant progress with 2025.1 and I’m sure we’ll hear more detail in upcoming presentations.
Feedback matters. Feedback can trigger change. Roadmaps can be adapted based on feedback.
So keep giving feedback. Be to the point and describe the opportunity. And be professional.