Flame Machine Learning Timewarp, now on Linux and Mac

What kind of results are folks seeing with the new version on M2? I tried it on a loaded Studio a few weeks ago on a UHD clip and got about 1 frame/min render times. Pretty rough compared to my P6000. I checked “use Metal” or whatever it was labeled as well. The new GUI looked cool, though! :slight_smile:

Hi guys, here is a newer release for Linux: Release v0.5.0 dev 005 · talosh/flameTimewarpML · GitHub

3 Likes

I’ve done some tests before with torch 2.0.1 and roughly Metal was about 2 times faster than CPU. This is way slower than CUDA, though CUDA has been around for years whilst Metal support is very new in torch, so chances are things will improve.
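If anyone wants to compare backends on their own machine, here is a minimal generic torch sketch (not part of flameTimewarpML) that picks up whichever of CPU, CUDA and MPS (Metal) is available and times the same work on each:

```python
# Generic backend comparison sketch - not from the plugin itself.
import time
import torch

devices = ['cpu']
if torch.cuda.is_available():
    devices.append('cuda')
if torch.backends.mps.is_available():          # Metal backend on Apple Silicon
    devices.append('mps')

for dev in devices:
    x = torch.randn(2048, 2048, device=dev)
    start = time.time()
    for _ in range(20):
        x = torch.tanh(x @ x)                  # a simple compute-heavy op
    if dev == 'cuda':
        torch.cuda.synchronize()               # wait for async GPU work before stopping the clock
    elif dev == 'mps':
        torch.mps.synchronize()
    print(f'{dev}: {time.time() - start:.3f}s')
```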

Hello Talosh,
First, thank you so much for all your hard work on this tool. It is beyond useful and amazing. I am using the v0.5.0 and its great. I was wondering if you will be adding the other options like fluid morph cut in future versions? Also, its not really an issue with me because our storage media drive has a lot of space, but it was very useful to select where the location of the render goes not to accidentally fill up a drive. I use this tool more than most of the other tools within flame. Really appreciate you.

2 Likes

Hi Vahe, thank you! I’ll be adding other options in the near future, with fluid morph probably being the first.

I will have a look into what is possible with writing sequences directly to the filesystem. The point of moving to wiretap was to have fewer dependencies, as maintaining EXR python bindings might be difficult across platforms and flame versions.

At the same time, due to a bug in the current wiretap python bindings, I’m using a workaround of writing an uncompressed EXR purely in python and reading it back with wiretap to create frame buffer data that can be written to the framestore correctly. So technically it would be possible to have a sequence of uncompressed EXRs written somewhere else as well.
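For anyone curious what “an uncompressed EXR purely in python” involves, here is a minimal illustrative sketch using only struct and numpy (32-bit float RGB, single-part scanline, no compression). It is just an outline of the idea, not the code used in the plugin:

```python
# Minimal illustrative pure-Python uncompressed EXR writer - not the plugin's code.
import struct
import numpy as np

def write_uncompressed_exr(path, rgb):
    """Write a single-part, uncompressed, scanline FLOAT RGB EXR. rgb: (H, W, 3) float array."""
    h, w, _ = rgb.shape
    channels = ['B', 'G', 'R']                               # EXR channel lists are alphabetical
    planes = {'R': rgb[:, :, 0], 'G': rgb[:, :, 1], 'B': rgb[:, :, 2]}

    def attr(name, type_name, value):
        # header attribute: name\0 type\0 size(int32) value
        return name.encode() + b'\0' + type_name.encode() + b'\0' + \
               struct.pack('<i', len(value)) + value

    # channel list entry: name\0, pixel type (2 = FLOAT), pLinear + 3 reserved bytes, x/y sampling
    chlist = b''
    for c in channels:
        chlist += c.encode() + b'\0' + struct.pack('<i', 2) + b'\0\0\0\0' + struct.pack('<ii', 1, 1)
    chlist += b'\0'

    box = struct.pack('<iiii', 0, 0, w - 1, h - 1)
    header = attr('channels', 'chlist', chlist)
    header += attr('compression', 'compression', b'\0')      # 0 = no compression
    header += attr('dataWindow', 'box2i', box)
    header += attr('displayWindow', 'box2i', box)
    header += attr('lineOrder', 'lineOrder', b'\0')          # 0 = increasing Y
    header += attr('pixelAspectRatio', 'float', struct.pack('<f', 1.0))
    header += attr('screenWindowCenter', 'v2f', struct.pack('<ff', 0.0, 0.0))
    header += attr('screenWindowWidth', 'float', struct.pack('<f', 1.0))
    header += b'\0'                                          # end of header

    magic = struct.pack('<II', 20000630, 2)                  # magic number + version 2
    first_chunk = len(magic) + len(header) + 8 * h           # after the scanline offset table
    line_bytes = 8 + w * 4 * len(channels)                   # y + size fields + pixel data

    with open(path, 'wb') as f:
        f.write(magic + header)
        for y in range(h):                                   # offset table: one uint64 per scanline
            f.write(struct.pack('<Q', first_chunk + y * line_bytes))
        for y in range(h):                                   # scanline chunks
            f.write(struct.pack('<ii', y, w * 4 * len(channels)))
            for c in channels:                               # per-channel pixel data for this line
                f.write(np.ascontiguousarray(planes[c][y], dtype='<f4').tobytes())

# usage: write_uncompressed_exr('/tmp/test.exr', np.random.rand(64, 64, 3).astype(np.float32))
```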

At the moment I’m focused on streamlining some parts of that interface to make it more effective and fast, and experimenting with adding other models - for generative prompts or “before → after” bespoke learning using the same interface concept.

At the same time there are some experiments to create and train a potentially better timewarp model as well (though it is a very lengthy process on my current GPU).

2 Likes

Is the “fill/remove Duplicate Frames” feature available on flame 2024?

Hey @talosh, I’ll bet we can help. Do you need access to a faster GPU?

What do you need?

Not yet, but it will be there

1 Like

There’s a training script that runs in the console and uses the GPU. Basically, ssh to a linux box with an Nvidia card should be enough. I’m currently using a P5000 with 16GB, so any RTX or A-series card should be faster.
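For anyone offering up a box, a quick generic torch check like this (nothing specific to the training script itself) shows what card and how much VRAM it would get:

```python
# Generic check of the CUDA device visible to torch on a remote machine.
import torch

if not torch.cuda.is_available():
    print('No CUDA device visible to torch')
else:
    props = torch.cuda.get_device_properties(torch.cuda.current_device())
    print(f'GPU: {props.name}')
    print(f'VRAM: {props.total_memory / 1024**3:.1f} GB')
    print(f'Compute capability: {props.major}.{props.minor}')
```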

1 Like

email me at randy@logik.tv and I can give you remote access to an A6000 on a Threadripper Pro.

2 Likes

Would Google Colab help?

Great! Really looking forward to it! :smiley:

Colab has some data and time limits, and when I was looking into it a while ago there was not much of a performance difference, but that might change. I’ll have a look.

I noticed the mention of a training script, which brings up an interesting question: which model is ML Timewarp using? Is it a 3rd party model or is it a bespoke model? In either case, do we know what image sources it was trained on?

Now that we’re past the honeymoon phase of the latest batch of AI tools, there’s increasing awareness that the training data source is an area of concern, primarily from a legal liability standpoint but also in other respects, anytime you use these commercially. ML Timewarp is not the same as Runway ML, in that it isn’t generating recognizable new content, but the same rules do still apply to a degree.

This maybe should be a separate thread from the tech details of the ML Timewarp tool… Happy to move it…

Hi Alklier, sure we can do a separate thread for tech and ML details.

Version 0.4.x has been based purely on RIFE and has been trained on the Vimeo90K dataset.
With version 0.5 I’m experimenting with adding RAFT with optimised weights from Autoflow (https://github.com/google-research/opticalflow-autoflow - you can check the datasets they’ve been using to train) in order to get the first forward and backward passes fed into a modified RIFE model. I’m also trying to reduce warping artefacts by using a MultiResUnet inserted in between the RIFE multi-res passes.
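To make the shape of that pipeline a bit more concrete, here is a purely illustrative PyTorch sketch. The tiny conv stacks are hypothetical stand-ins for RAFT, the RIFE passes and the MultiResUnet, not the actual v0.5 code:

```python
# Illustrative pipeline shape only - every module here is a hypothetical placeholder.
import torch
import torch.nn as nn

class TinyFlowNet(nn.Module):
    # stand-in for a RAFT-style estimator: two RGB frames in, 2-channel flow out
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 2, 3, padding=1))
    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=1))

class TinyRefiner(nn.Module):
    # stand-in for a U-Net style refiner inserted in between passes
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, x):
        return x + self.net(x)                  # residual refinement

class TinyPass(nn.Module):
    # stand-in for one RIFE-style pass: frames + flows in, blended frame out
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(10, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, a, b, flow):
        return self.net(torch.cat([a, b, flow], dim=1))

class InterpolationSketch(nn.Module):
    def __init__(self, num_passes=3):
        super().__init__()
        self.flow_net = TinyFlowNet()
        self.passes = nn.ModuleList([TinyPass() for _ in range(num_passes)])
        self.refiner = TinyRefiner()
    def forward(self, frame0, frame1, t=0.5):
        # forward and backward flow estimates feed the interpolation passes
        flow_fwd = self.flow_net(frame0, frame1)
        flow_bwd = self.flow_net(frame1, frame0)
        flow = torch.cat([flow_fwd * t, flow_bwd * (1.0 - t)], dim=1)
        result = None
        for p in self.passes:
            result = p(frame0, frame1, flow)
            result = self.refiner(result)       # clean up warping artefacts between passes
        return result

# quick shape check with random frames
frames = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
print(InterpolationSketch()(*frames).shape)     # torch.Size([1, 3, 64, 64])
```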

The base data for training it is still Vimeo90K, and I have two more sources of my own. One is a lot of 35mm film scans that I shot myself, mostly in India and Nepal but in some other places as well, back in 2003 - 2008.
Another is parts from a Ukrainian period biopic I was editing and partly finishing, shot on Alexa, which I have the producer’s permission to use for that task (though it has never been put into a proper contract). This data is currently mostly used for validation.

The code is open source and everyone is invited to participate.

4 Likes

I’ve been experimenting with MultiRes and ACC U-Nets recently in terms of bespoke training for different purposes such as cleanups, beauty or artefact removal, and I’m thinking of adding some simple GUI to it in order to be able to feed training images directly from Flame. Potentially it can be used later to fine-tune Timewarp models as well, or even have them fully re-trained using your custom data.

At the moment it’s quite hard-coded and terminal based, but it’s been quite useful in some cases. Feel free to look here for updates:

https://github.com/talosh/flameSimpleML
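The basic “before → after” training idea looks roughly like this. Everything below (the synthetic paired dataset, the tiny network, the loop) is a hypothetical illustration of the concept, not the actual flameSimpleML code:

```python
# Illustrative "before -> after" paired training sketch - not the repo's code.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class PairedClipDataset(Dataset):
    """Pairs of 'before' (source) and 'after' (cleaned/graded) frames.

    Here the pairs are synthesised; in practice they would be matching frame
    sequences exported from Flame or pulled via wiretap.
    """
    def __init__(self, length=64, size=128):
        self.length, self.size = length, size
    def __len__(self):
        return self.length
    def __getitem__(self, idx):
        before = torch.rand(3, self.size, self.size)
        after = before.clamp(0.05, 0.95)       # stand-in for the artist's 'after' version
        return before, after

# a tiny residual conv net standing in for the MultiRes / ACC U-Net
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = model.to(device)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loader = DataLoader(PairedClipDataset(), batch_size=4, shuffle=True)

for epoch in range(2):                          # kept short for the sketch
    for before, after in loader:
        before, after = before.to(device), after.to(device)
        pred = before + model(before)           # learn the residual 'before -> after' change
        loss = nn.functional.l1_loss(pred, after)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
    print(f'epoch {epoch}: L1 loss {loss.item():.4f}')
```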

4 Likes

Could Pybox perhaps help you rapid prototype something out? Not that I know what that would entail…

I’m not familiar with the PyBox API actually, and I’m not sure if it allows getting frames other than the current frame. I thought it might be useful for fluid-morph transitions; maybe it is easier to have it as a node.

With that copycat-like thing it should be possible to just have before/after clips exported somewhere, or pull them using wiretap for training.

Copy that. Only mentioned it because at some point I believe @tpo posted a video with his workflow for training where he was using pybox. I’m sure you’ve seen it, and admittedly it was a long while back I watched it but I’ll post a link to that thread here just in case it’s useful.

I like Tiago’s rig - 8x A100 80GB GPUs, quite impressive )
I’ll try to look into the pybox API to see what’s possible.

2 Likes