FYI, Baselight 6 has just implemented the very same RIFE model that has been at the core of v0.4
Hi guys, for newer GPUs and drivers on Linux you might need to use a newer version of PyTorch for dev v0.5.0 to work. Please refer to this:
https://github.com/talosh/flameTimewarpML/issues/87
and let me know if it works for you.
If it does, I might need to update the bundled PyTorch install in the new release.
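If you want to verify that a newer PyTorch build actually sees your GPU before digging into drivers, a minimal sanity check run from the same Python environment the tool uses might look like this (generic PyTorch API, nothing specific to flameTimewarpML):

```python
# Sanity check: does this PyTorch build see the CUDA GPU?
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("Built against CUDA:", torch.version.cuda)
```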
This is big news for facilities like ours, which use a lot of Flame/Baselight workflows. Thanks for sharing @talosh
flameSimpleML - Flame Machine Learning Source/Target tool with bespoke training:
Hi guys, I’ve put several scripts together in a package called flameSimpleML:
https://github.com/talosh/flameSimpleML/releases/tag/v0.0.1
This is a “copycat”-style model plus scripts that let you train it on your own “source/target” data.
The training script is command-line only at the moment. To train your model, create a folder somewhere for your dataset and then create two folders inside it named “source” and “target”. Export your training data there as uncompressed exr sequences. The script assumes that the “source” and “target” sequences have the same dimensions and number of frames, and that the exr’s are uncompressed.
When you run the script it will create a third “preview” folder within the dataset folder, where you can monitor the progress of the model (hopefully) getting smarter.
Training does not take a lot of GPU RAM and can be run in the background, so one can continue to use Flame.
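To make the dataset layout concrete, here is a rough sketch of setting it up from Python (the dataset path is just a placeholder; the tool itself doesn’t require this script):

```python
# Sketch of the dataset layout described above; the path is a placeholder.
import os

dataset = os.path.expanduser("~/my_dataset")  # placeholder location
for sub in ("source", "target"):
    os.makedirs(os.path.join(dataset, sub), exist_ok=True)

# After exporting the uncompressed EXR sequences, check both sides line up.
src = sorted(f for f in os.listdir(os.path.join(dataset, "source")) if f.endswith(".exr"))
tgt = sorted(f for f in os.listdir(os.path.join(dataset, "target")) if f.endswith(".exr"))
assert len(src) == len(tgt), f"frame count mismatch: {len(src)} vs {len(tgt)}"
```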
One of the simple tests I’ve been using while writing it is to give it the same sequence in colour as the target and in black and white as the source, and teach it to colourise frames.
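If you want to try that colourise test yourself, one way to generate the black-and-white source from the colour target is a quick Rec.709 luma conversion. This sketch assumes imageio with an EXR-capable backend is installed, which is not bundled with the tool:

```python
# Build a bw "source" sequence from a colour "target" sequence (Rec.709 luma).
# Assumes imageio with an EXR-capable backend; the dataset path is a placeholder.
import os
import numpy as np
import imageio.v3 as iio

dataset = os.path.expanduser("~/my_dataset")  # placeholder location
for name in sorted(os.listdir(os.path.join(dataset, "target"))):
    if not name.endswith(".exr"):
        continue
    rgb = iio.imread(os.path.join(dataset, "target", name))[..., :3]
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722], dtype=rgb.dtype)
    bw = np.repeat(luma[..., None], 3, axis=-1)  # keep 3 channels for the model
    iio.imwrite(os.path.join(dataset, "source", name), bw)
```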
Trained model data is saved every 1000 iterations into your “homefolder/flameSimpleML_models/” as a .pth file.
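If you’re curious what ends up in those checkpoints, you can peek at one with PyTorch; the file name below is only an example, not a guaranteed naming scheme:

```python
# Inspect a saved checkpoint; the file name is only an example.
import os
import torch

path = os.path.expanduser("~/flameSimpleML_models/model_001000.pth")  # example name
state = torch.load(path, map_location="cpu")
print(type(state))
if isinstance(state, dict):
    print(list(state.keys())[:10])  # first few keys of the saved dict
```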
To apply a model, select it from the menu and navigate to that folder to load it. It is possible to use “F1 / F4” to compare before / after.
This is a very first release and I’ve been testing it mostly on Linux with Flame 2023.3, plus some very limited testing on a Mac Mini M2 with the Flame 2025 tech preview. It might work on Intel Macs if you sort the PyTorch and NumPy dependencies out (give me a shout if you would like to try).
(it has a separate thread here: flameSimpleML - Flame Machine Learning Source/Target tool with bespoke training)
Hey @talosh, I’ve tried the new “flameTimewarpML.v0.5.0_dev_004” on my Mac Pro 2019 and it was very slow. Your previous release was flying in comparison.
On my Intel Mac (28-core, 240 GB of RAM, 2x AMD Radeon Pro Vega II) it seemed like I needed to manually install TensorFlow with Homebrew using the instructions on the TensorFlow site.
After that it worked on 2023.3.2, but on 2024.2.1 I would get a “TensorFlow unable to load” error message (see attached). Then on the Beta neither version of TimewarpML even shows up (won’t say more than that since it’s the Beta / mum’s the word).
From what I can tell, TensorFlow is not actually using the GPU on the Mac, so for whatever reason it becomes super slow.
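For anyone who wants to confirm that, a quick check from the Python environment the tool runs in will show whether TensorFlow registers a GPU at all (standard TensorFlow API, nothing specific to TimewarpML):

```python
# Does TensorFlow see any GPU on this machine? An empty list means CPU only.
import tensorflow as tf

print("TensorFlow:", tf.__version__)
print("GPUs:", tf.config.list_physical_devices("GPU"))
```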
Also, I think using Wiretap makes it very tricky to work smoothly in Flame while TimewarpML is running. Flame becomes sticky and slow to respond at times, and the UI window gets in the way / has to be moved to the side.
One of the things that made it very nice on my Mac was that it would essentially run in the background and I could keep on working while rendering the timewarps.
Hi Ben,
I’ve moved away from TensorFlow in dev 005, and I plan to move away from Wiretap back to exr export / processing / import in the next dev release.
I don’t have access to an Intel Mac at the moment, but on an M2 machine the new PyTorch 2.1 seems to perform way faster than 2.0 did. I’ll try to arrange access to an Intel Mac to check it there.
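For reference, the equivalent check on the PyTorch side for Apple silicon is whether the Metal (MPS) backend is available; this is standard PyTorch API (1.12+), shown here only as a sanity check:

```python
# Check whether PyTorch can use Apple's Metal (MPS) backend.
import torch

print("PyTorch:", torch.__version__)
print("MPS available:", torch.backends.mps.is_available())
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print("Using device:", device)
```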
Got it, thanks. I’ve PM’d you about testing it, FYI.
Nuke implementation…
Do you guys know if there are any copyright concerns with the dataset used for RIFE model training?
@milanesa If you chase the chain back you land here: GitHub - megvii-research/ECCV2022-RIFE: ECCV2022 - Real-Time Intermediate Flow Estimation for Video Frame Interpolation which states that this has been trained on the Vimeo90K data set.
I did a search to see if anyone has made statements for or against its license status. From a pure common-sense point of view I’d say it’s in a gray zone and may not clear every lawyer’s hurdle.
It’s a collection of 90K videos that MIT assembled for research by downloading them from Vimeo. The list is publicly available. I’ve checked some of the videos myself out of curiosity and it’s a pretty random assortment going back 10+ years. I’m not sure that merely not making your video on Vimeo private 10 years ago equates to permission to use it in AI training. But that’s the older and grumpy person in me, not the young and eager entrepreneur.
Having said this, this data set is so widely used that it would be a major earthquake if it suddenly became a hot potato. For something small you’re probably good to use it. If you need it at work for something more high-profile, I’d check with the legal team.
If you remember ADSK’s comments that they now need to show proof of training data being ‘license free’, you know where this is headed.
Thanks @allklier definitely a potential nightmare for a big company.
We can’t change the past here, but flameSimpleML is a first step to potentially changing this in the future by letting users train models on their own datasets. I also plan to prepare some free dataset as a starting point, and I’ve had some talks with fellow producers about licensing a more elaborate model trained on their data.
Amen!
Having a weird issue with the ML fluid morph tool. Everything is as it should be with the two clips feeding the tool, but I get the error “not enough frames in the incoming sequence: 0 given”. There are indeed frames going out and each clip is the same length. It’s been a while since I last used it; I was on CentOS before but am now on Rocky Linux. The timewarp works as expected. Any ideas?
hmmmm, the source clips are 6054x3192. When I resize down to HD the fluid morph works. I have an RTX A6000 card. Is there a resolution limit for the fluid morph?
Could you try to make copies of the clips and hard-commit them?
Hard committing seems to work! Thanks for the suggestion.
I’m running into the same issue (Rocky / 2024.2.1 / Linux)… just a ‘simple’ 1/2 slowdown with a basic clip gives the same error… any suggestions would be great. Thanks
edit: using 0.4.4
edit2: interestingly, if I generate some noise and use that, it works…
edit3: I think it has to do with the ‘new’ way of handling start frames. If I match my clip out, set the start frame to 01 and smart-replace it back into my clip, it all works…
Hi all,
I’m curious if anyone has been able to get TWML to work more efficiently on Mac? We’re running Mac Studios with the M2 Ultra chip and Flame 2024.2.1. I’ve installed TWML version 0.4.3, which is working, but it’s really under-utilizing the CPU. Any suggestions would be greatly appreciated. Thank you!
Yup, that’s a thing. All these tools are optimized for Nvidia. So 2-4 fps on Linux is normal, whereas it’s 40-60 SECONDS PER FRAME on the CPU side for Mac.
It’s one of the big differentiators for those choosing Mac vs Linux.
Hi Randy, I totally understand. We’re stuck in the Mac environment for the time being, and I was hoping there was a workaround?