Flame Machine Learning Timewarp, now on Linux and Mac

@talosh if you ever need machines for development, I’ve got a few on Teradici or Parsec you can use.


Let us know how you go, @johnag, and make some notes for us. You have a fairly beefy machine, right?


What kind of results are folks seeing with the new version on M2? I tried it on a loaded Studio a few weeks ago on a UHD clip and got about 1 frame/min render times. Pretty rough compared to my p6000. I checked “use Metal” or whatever it was labeled as well. The new GUI looked cool, though! :slight_smile:

Hi guys, here is a newer release for linux: Release v0.5.0 dev 005 · talosh/flameTimewarpML · GitHub


I’ve done some tests before with torch 2.0.1, and roughly, Metal was 2 times faster than CPU. That is still way slower than CUDA, though CUDA has been around for years whilst Metal support is very new in torch, so chances are things will improve.
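For anyone wiring this ranking into their own scripts, here is a minimal sketch. `pick_device` is a hypothetical helper, not part of flameTimewarpML; the availability checks shown in the comment are the standard PyTorch calls.

```python
def pick_device(cuda_ok, mps_ok):
    """Pick the fastest available torch device, following the ranking
    from the thread: CUDA > Metal (MPS, roughly 2x CPU) > CPU."""
    if cuda_ok:
        return "cuda"
    if mps_ok:
        return "mps"
    return "cpu"

# With PyTorch installed you would call it like this:
#   import torch
#   device = pick_device(torch.cuda.is_available(),
#                        torch.backends.mps.is_available())
#   model = model.to(device)
```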

Hello Talosh,
First, thank you so much for all your hard work on this tool. It is beyond useful and amazing. I am using v0.5.0 and it’s great. I was wondering if you will be adding the other options, like fluid morph cut, in future versions? Also, it’s not really an issue for me because our storage media drive has a lot of space, but it was very useful to be able to select the location of the render so as not to accidentally fill up a drive. I use this tool more than most of the other tools within flame. Really appreciate you.


Hi Vahe, thank you! I’ll be adding other options in the near future, with fluid morph probably being the first.

I will have a look into what is possible with writing sequences directly to the file system. The point of moving to wiretap was to have fewer dependencies, and maintaining EXR python bindings might be difficult across platforms and flame versions.

At the same time, due to a bug in the current wiretap python bindings, I’m using a workaround: writing an uncompressed EXR purely in python and reading it back with wiretap to create frame buffer data that can be written to the framestore correctly. So technically it would be possible to write a sequence of uncompressed EXRs somewhere else as well.
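For anyone curious what “writing an uncompressed EXR purely in python” involves at the lowest level, here is a heavily simplified sketch. The function name is mine, and it emits only the fixed 8-byte preamble that every EXR file starts with, not a usable image:

```python
import struct

def write_exr_preamble(path):
    """Write only the fixed 8-byte OpenEXR preamble (magic + version).

    This is NOT a complete file: a real uncompressed EXR also needs the
    header attributes (channels, dataWindow, displayWindow,
    compression = NO_COMPRESSION, ...), a scanline offset table, and the
    pixel data itself. The point is just to show that the file begins
    with a fixed little-endian magic number and a version field.
    """
    with open(path, "wb") as f:
        f.write(struct.pack("<I", 20000630))  # OpenEXR magic: bytes 76 2f 31 01
        f.write(struct.pack("<I", 2))         # version 2, no flag bits set
```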

At the moment I’m focused on streamlining some parts of that interface to make it more efficient and faster, and on experimenting with adding other models, for generative prompts or “before → after” bespoke learning using the same interface concept.

There are also some experiments under way to create and train a potentially better timewarp model (though that is a very lengthy process on my current GPU).


Is the “fill/remove Duplicate Frames” feature available on flame 2024?

Hey @talosh, I’ll bet we can help. Do you need access to a faster GPU?

What do you need?

Not yet, but it will be there


There’s a training script that runs in the console and uses the GPU. Basically, ssh to a linux box with an Nvidia card should be enough. I’m currently using a P5000 with 16GB, so any RTX or A-series card should be faster.
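A hedged sketch of the kind of preflight check you might run on such a box before kicking off training. The helper and its 16 GB threshold are my own illustration, not part of the training script; the `nvidia-smi` query flags used are standard, and the `run`/`which` parameters exist only so the function can be exercised without real hardware.

```python
import shutil
import subprocess

def gpu_preflight(min_vram_gb=16, run=subprocess.run, which=shutil.which):
    """Check that an NVIDIA GPU with enough VRAM is visible on this box.

    Hypothetical helper: returns (ok, message). It looks for nvidia-smi
    and parses the reported total memory (MiB) of GPU 0.
    """
    if which("nvidia-smi") is None:
        return False, "nvidia-smi not found: no NVIDIA driver on this box"
    out = run(["nvidia-smi", "--query-gpu=memory.total",
               "--format=csv,noheader,nounits"],
              capture_output=True, text=True).stdout
    vram_gb = int(out.splitlines()[0]) / 1024  # MiB -> GiB
    if vram_gb < min_vram_gb:
        return False, f"only {vram_gb:.0f} GB VRAM, want >= {min_vram_gb}"
    return True, f"{vram_gb:.0f} GB VRAM available"
```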


email me at randy@logik.tv and I can give you remote access to an A6000 on a Threadripper Pro.


Would google-colab help?

Great! Really looking forward to it! :smiley: