It's a good day - Flame 2025.1 is here! 🔥🚀

Hi Logik! :wave:

It’s a good day - Flame 2025.1 is ready for you.

Flame’s AI-powered toolset continues to grow with a new ML mode in Timewarp for time remapping and an ML Inference node for applying custom ONNX models to clips. Frequency node and colour-coding updates, plus more enhancements requested by you, are also part of this update.

Learn more: Help

Thank you for your continuous feedback, which greatly enhances the tool!

23 Likes

Looking forward to seeing the vids showing this new stuff!

4 Likes

Machine Learning Timewarp node in batch…

7 Likes

Please beware and do your own testing. During beta, installing 2025.1 in our infrastructure broke licensing, and we were unable to run ANY version of Flame. This has been known for months and is as yet unresolved. Dev knows about it. Do your own testing. This may or may not be specific to our infrastructure (Linux).

5 Likes

All five videos made by @Jeff for 2025.1 Update are now available in the Flame Learning Channel.

11 Likes

How are ML Timewarp speeds on the Mac vs Linux? A little slower or a whole lot slower?

You shouldn’t expect the performance to be as fast as the other modes, since frames need to be created. Please do not compare the speed to the other modes in Flame, but to other ML retiming tools out there.

The performance also highly depends on the speed you have set. Please have a look at the video on the Flame Learning Channel. Jeff explains why very well.

1 Like

Thanks for chiming in, Fred. That is exactly what I meant…apologies if it did not come across in my post.

I meant “how fast is a Mac Studio vs a Linux box with a late-model supported Quadro at rendering the same ML Timewarp?”

On earlier versions of Talosh’s TimewarpML, Linux could render a frame in 5 seconds that took a minute+ on a new Mac. In my recent testing, ADSK-supplied ML tools were quite performant on the Mac Studio. I am curious if this is still the case with the new Timewarp.

The ability to run ONNX models inside of Flame is such a nice addition! I just tested Depth Anything V2 and it works right away!

@ALan @Edusanjo I think you guys were looking for this some days ago!

20 Likes

@cristhiancordoba - could you provide a batch group setup so that people can try it? My own tests have been with Nuke.

1 Like

Are the models embedded into the Batch like Matchbox are?

@ALan - no.
You have to download the ONNX files and apply them in the Inference node (and hope that they work on your system).
The Nuke pathway is easy since someone already did the heavy lifting.
I can do it over the next couple of days once I’ve finished some other stuff, or @cristhiancordoba can share his setup and reduce the repetition of labor.

Just download the model here and apply it in the Inference node: https://github.com/fabio-sim/Depth-Anything-ONNX/releases/download/v2.0.0/depth_anything_v2_vitl.onnx
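One thing worth knowing if you try this: monocular depth models like Depth Anything typically output unnormalized inverse-depth (disparity) values, so the raw result can look blown out or crushed until you normalize it. Here is a minimal, hypothetical sketch of the usual per-frame min-max normalization in plain Python (illustrative only, not Flame-specific; the sample values are made up):

```python
def normalize_depth(values):
    """Min-max normalize raw depth/disparity values to the 0-1 range.

    Monocular depth models usually emit unnormalized values, so a
    per-frame (or per-shot) normalization is common before grading
    or using the result as a matte.
    """
    lo, hi = min(values), max(values)
    if hi == lo:
        # Flat frame: avoid division by zero, return all zeros
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical raw disparity samples from one frame
raw = [12.4, 87.1, 45.0, 3.2]
print(normalize_depth(raw))
```

Per-frame normalization can flicker across a shot, so normalizing over the whole clip's min/max is often the safer choice for retiming or keying work.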

2 Likes

no sidecar json or inference builder?

Not in this case…maybe other models need it, but this one was one-click go.

I found that whoever compiled this model embedded input resizing to 518x518. That is a flag you can use in the standalone PyTorch version if you have a small GPU, but it is not mandatory. So we’ll need to figure out how to bypass that pre-resizing. I’ve tried creating a JSON with the Inference Builder, but it didn’t work. Maybe it will need a better PyTorch-to-ONNX conversion.
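A detail worth flagging for anyone attempting that re-export: if I understand the backbone correctly, Depth Anything's ViT works on 14-pixel patches (518 = 14 x 37), so even an export without the baked-in resize would presumably still need input dimensions that are multiples of 14. The patch size of 14 is my assumption based on the ViT-L/14 backbone, not something I've verified against this particular ONNX file. A small stdlib-only sketch of the rounding you'd apply to an arbitrary resolution:

```python
PATCH = 14  # assumed ViT patch size; note 518 = 14 * 37

def snap_to_patch(width, height, patch=PATCH):
    """Round dimensions to the nearest multiple of the patch size.

    Transformer backbones tile the image into patch x patch squares,
    so input resolutions must divide evenly by the patch size.
    """
    def snap(n):
        return max(patch, round(n / patch) * patch)
    return snap(width), snap(height)

# e.g. an HD frame snapped to patch-friendly dimensions
print(snap_to_patch(1920, 1080))  # -> (1918, 1078)
```

So a re-exported model with dynamic input axes would likely still want a resize node (or a preprocessing step) that snaps the clip to these dimensions before inference.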

2 Likes

@cristhiancordoba - lovely!
Thank you!

I am surprised at the super features that have been added. I told the IT dude directly to install the version for me.
I’m really looking forward to trying everything out! :nerd_face:

2 Likes

Great update, looking forward to playing around with the Inference node. It’s only a shame that the Frequency node doesn’t have a rebuild function like Lumps does, so you need to add comp nodes or a userbin preset every time.

1 Like

It’s worth acknowledging that with 2025.1 ADSK has listened to our (at times quite harsh) feedback on the need to pick up ML features, and in particular to implement an official version of MLTW.

Folks had some strong words over the last two years about ADSK relying on others to implement such important features. And folks were equally outspoken about the lack of progress on any type of ML features once awareness of legal issues made big companies pause and fall behind the open source community.

Both aspects have made significant progress with 2025.1 and I’m sure we’ll hear more detail in upcoming presentations.

Feedback matters. Feedback can trigger change. Roadmaps can be adapted based on feedback.

So keep giving feedback. Be to the point and describe the opportunity. And be professional.

Thank you @fredwarren, @Slabrie and teams.

24 Likes

Well said @allklier