exciting stuff!
Hi Andrew,
It is about half-way through, and my focus recently has been on building a training part that lets you train and fine-tune the timewarp model on a specific shot or set of shots. There have already been a couple of positive use cases where a shot went from “undoable” to a sort of “one-hour paint” just by setting it to learn over the weekend on a conventional Pascal P6000 card.
I would love to get the Mac version released ASAP, but given the limited time I can spend on it alongside a full-time job, I have to focus on the practical tasks I need to solve at work, if that makes sense.
If there’s anybody in the community who is actually doing production work on Mac and has some time and basic Python skills, please give me a shout so I can add you as a contributor to the GitHub repo and you can help push the Mac version further.
In my experience it takes really using it in production, catching an issue here and there on real-life stuff, and making changes to handle it better, if that makes sense.
So if there’s anybody interested in contributing, please give me a shout.
I understand completely. And I very much appreciate everything you’ve done for our community. I’ll reach out if we have someone on the team that wants to pick this up. Thanks again!
Hi guys, those on Apple Silicon Macs - could you please check if it works for you?
Drop the whole folder into the python scripts folder.
There are two models - flownet4 and flownet4_lite, could you please check both if possible.
P.S. If the second frame is jumping - I have this as well and couldn’t understand why. It only happens on Mac using the Metal backend; CPU and Linux CUDA don’t have it with exactly the same code.
I tried it on my Mac, this is the message I got:
It’s likely because of the quarantine flag that Apple puts on everything that didn’t come from the App Store or similar.
Looks like this can be the fix: https://discussions.apple.com/thread/253714860?sortBy=best
xattr -c <path/to/application.app>
Maybe try to run it on the Python binary from the console first; it lives in a hidden folder, at flameTimewarpML/packages/.miniconda3/appenv/bin/python
Alternatively it is also possible to setup python environment from scratch as described in Readme:
https://github.com/talosh/flameTimewarpML/blob/main/README.md
Hi guys, here is v0.4.5 dev 001 for Linux and Mac
Big thanks to @MikeV for creating a custom export dialog
Here are some things to check and test:
- New Flownet4 and faster Flownet4_lite models with ratio support based on RIFE v4.15
- Support for per-shot fine-tuning and training models on your own data
- Support for negative values and values over 1
- Script for batch re-timing multiple shots at a constant speed
- Support for Flame 2025
- Support for MacOS Metal backend for faster rendering
Please note that the model state files supplied are WIP and have been taken from actual training cycles, but they are likely to be in a good place in terms of quality and can be used as a starting point for custom fine-tuning and training.
“Fluidmorph” and “Interpolate Frames” are yet to be implemented but it should not take too long.
Training on Mac is yet to be tested, but it is likely to work with current or future versions of Pytorch.
Please check the README and let me know if there are any issues.
Anyone around with an Intel Mac, please use the custom installation from the readme to build a Python env with PyTorch 2.2.2.
Does this still require updated libraries to work on the newer RTX cards?
The Python env that comes pre-installed has PyTorch 2.2.2 and CUDA 11.8, and it should be fine for RTX and Ampere.
It also works backwards for me with driver 460.91.03 and CUDA 11.2 on a P6000 (Flame 2023.2 / 2023.3 DKUs).
Not sure if Ada Lovelace would strictly require CUDA 12 at the moment.
If there’s someone out there on the latest Ada RTX, please check if CUDA 11 is fine.
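If anyone wants a quick way to confirm that the bundled PyTorch / CUDA combination actually sees the card, a tiny check like this should do (just generic PyTorch calls, nothing specific to flameTimewarpML):

```python
# Generic PyTorch sanity check - run it with the bundled python binary.
import torch

print(torch.__version__, torch.version.cuda)   # expecting something like 2.2.2 / 11.8
print(torch.cuda.is_available())               # should print True on a working setup
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))       # e.g. your Ada / Ampere card name
    x = torch.rand(1024, 1024, device='cuda')
    print((x @ x).sum().item())                # small matmul to actually exercise the GPU
```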
This is amazing… Looking forward to playing with this. Question… for training… would a good dataset be something shot at, say, 200 fps, then hard TW it to 25 fps and maybe even faster… How about motion blur, avoid or not?
Thanks!
Giving it slowmo shots for general training would be better but is not a necessity. Say there are 3 frames: you give it frame 1 and frame 3 and ask it to get as close as possible to frame 2 at a ratio of 0.5, and it will try to match it by warping frames 1 and 3 and mixing them together with a matte it generates.
If the window is 5 frames, it can take frames 1 and 5 and try to match frame 2 at a ratio of 0.25, frame 3 at 0.5 and frame 4 at 0.75. This is the default window size at the moment, and it will also create all possible combinations, using, say, frames 1 and 4 with frame 2 at a ratio of 0.33 and frame 3 at 0.66, plus all their reverse counterparts.
This 5-frame window then slides 1 frame forward and creates more samples until the end of the shot.
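Roughly, that sampling scheme could be sketched like this (an illustration only, with made-up names; not the actual training code):

```python
# Illustrative sketch of the sliding-window sampling described above.
# For every sub-span inside the window, the first and last frames are the
# inputs, each interior frame is a target, and the ratio is its position
# within that span; reversed pairs are added as well.
def window_samples(frames, window=5):
    samples = []
    for start in range(len(frames) - window + 1):
        win = frames[start:start + window]
        for a in range(window - 2):                      # span start
            for b in range(a + 2, window):               # span end
                for t in range(a + 1, b):                # target inside the span
                    ratio = (t - a) / (b - a)
                    samples.append((win[a], win[b], win[t], ratio))        # forward
                    samples.append((win[b], win[a], win[t], 1.0 - ratio))  # reverse counterpart
    return samples

# For frames [1, 2, 3, 4, 5] this yields (1, 3, 2, 0.5), (1, 5, 2, 0.25),
# (1, 5, 3, 0.5), (1, 4, 2, 0.33...), (1, 4, 3, 0.66...) and so on,
# plus their reversed counterparts.
```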
There are two magic words for a shot path, “fast” and “slow”. Everything that has “fast” in its path will be limited to a 3-frame window size, and if “slow” is there it will be given a 9-frame window.
So generally, put all the fast-motion shots in a folder named “fast” and the slowmo ones in a folder named “slow”; it can live anywhere in the tree.
The important thing is that each shot lives in its own folder and there are no cuts in the sequence within a single folder.
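A rough illustration of the “fast” / “slow” path rule (a hypothetical helper, not the actual train script code):

```python
# Hypothetical helper showing the path-based window size rule described above.
def window_size_for(shot_path, default=5):
    path = shot_path.lower()
    if 'fast' in path:
        return 3        # fast-motion shots: small window
    if 'slow' in path:
        return 9        # slowmo shots: large window
    return default      # everything else: default 5-frame window

# e.g. window_size_for('/dataset/fast/whip_pan_001')     -> 3
#      window_size_for('/dataset/slow/200fps_plate_007') -> 9
```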
It only supports uncompressed EXRs at the moment and expects them to be in linear colourspace if you would like to use generalization. Generalization will flip / rotate / scale the frames, flip colour channels and change exposure and colour balance; its level can be controlled by the --generalize flag of the train script. Setting it to 1 will only leave slight scaling and horizontal flips, and setting it to 0 will disable any augmentation at all.
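And a rough way to picture how the --generalize level could gate those augmentations (again hypothetical names only, not the project’s implementation):

```python
# Hypothetical mapping from --generalize level to the augmentations described above.
def augmentations_for(generalize_level):
    if generalize_level == 0:
        return []                                      # no augmentation at all
    augs = ['slight_scale', 'horizontal_flip']         # level 1 keeps only these
    if generalize_level > 1:
        augs += ['flip', 'rotate', 'scale',
                 'flip_colour_channels',
                 'exposure_shift', 'colour_balance_shift']
    return augs

# e.g. augmentations_for(1) -> ['slight_scale', 'horizontal_flip']
```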
You can play with it and check the progress in the “preview” folder; I usually go there with Flame and press R occasionally to check the results.
There seems to be a small bug in handling hermite curves that might lead to slightly incorrect frame values. I’ll make a fix in dev002; in the meantime, here is the file with the correction to replace:
flameTimewarpML_inference.py (60.8 KB)
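For context, Flame animation curves (timewarp included) can use Hermite keyframe interpolation; a generic cubic Hermite evaluation between two keyframes looks roughly like this (a textbook formula, not the project’s actual curve-handling code):

```python
# Generic cubic Hermite interpolation between two keyframe values v0 and v1
# with tangents m0 and m1, for a normalised parameter t in [0, 1].
def hermite(v0, v1, m0, m1, t):
    t2 = t * t
    t3 = t2 * t
    h00 = 2 * t3 - 3 * t2 + 1
    h10 = t3 - 2 * t2 + t
    h01 = -2 * t3 + 3 * t2
    h11 = t3 - t2
    return h00 * v0 + h10 * m0 + h01 * v1 + h11 * m1
```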
@talosh Perfect, that’s fixed it.
Thanks Andriy
Hello Genius,
You are most likely aware of this but I only read it today.
It would seem that Hugging Face are donating $10,000,000.00 of GPU activity to developers that need it.
Thank you, good to know!
At the moment, thanks to Tom Balkwil from DirtyLooks, I’ve got a VM with an A100 to do the testing and training, and it is part of their re-usable energy initiative to heat a swimming pool for kids in east London.
https://www.tvbeurope.com/sustainability/post-production-house-dirty-looks-renders-enough-heat-to-power-public-swimming-pool
You are a special and valued human being.
Thank you.
Thank you very much! I’ve missed having fluid morph.
Small note: your install instructions need to be updated to combine the dev002 tars instead of dev001.
What are the differences between 0.4.4 , 0.4.5 and 0.5?
The main difference between v0.4.4 and v0.4.5 is that 0.4.5 allows you to fine-tune the model or train it from scratch on your own data. I’ve already used this on some complicated cases, and it has been able to pull a difficult shot to something very close to final by running for about a day or so on a P6000.
Under the hood, 0.4.5 also has very recent PyTorch and Python installs, and some bits have been re-written to remove legacy dependencies.
0.5 was initially an attempt to create a better visual interface and to use Wiretap, which would allow skipping the render and export pass. That might be re-visited after v0.4.5 is out.