Flame Machine Learning Timewarp, now on Linux and Mac

First you need to remove the miniconda3 folder in ~/flameTimewarpML,
then run /bin/sh ~/flameTimewarpML/bundle/miniconda.package/Miniconda3-Linux-x86_64.sh -b -p ~/flameTimewarpML/miniconda3

This way you will get a fresh miniconda3

Init it with konsole -e /usr/bin/bash --rcfile <(echo '. ~/.bashrc; eval "$(~/flameTimewarpML/miniconda3/bin/conda shell.bash hook)"; conda activate')

Then navigate to the bundle folder and run pip3 install -r requirements.txt; that should install the packages matching your driver version.

As a fallback, you can manually add .detach() in two places in create_slowmo.py and run CPU-only.

change "mid = (((mid[0]).cpu().numpy().transpose(1, 2, 0)))" to "mid = (((mid[0]).cpu().detach().numpy().transpose(1, 2, 0)))" around line 219
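If editing by hand feels error-prone, the same substitution can be applied with a small script. This is just a convenience sketch based on the edit described above; the file location is an assumption and you should keep a backup either way:

```python
# Sketch: insert .detach() between .cpu() and .numpy() in create_slowmo.py,
# mirroring the manual edits described above. The path is an assumption.
from pathlib import Path

def patch_detach(source: str) -> str:
    """Insert .detach() before .numpy() wherever it is missing.
    Safe to run twice: once patched, the old pattern no longer matches."""
    return source.replace(".cpu().numpy()", ".cpu().detach().numpy()")

if __name__ == "__main__":
    script = Path.home() / "flameTimewarpML" / "bundle" / "create_slowmo.py"
    if script.exists():
        text = script.read_text()
        script.with_suffix(".py.bak").write_text(text)  # keep a backup copy
        script.write_text(patch_detach(text))
```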

Btw, it's a good case for creating an auto-fallback. Thank you for sharing the info!

Hi guys, there's an update that brings flameTimewarpML to macOS (CPU-only). Should you have any issues, please let me know.

Did all that (not the CPU-only thing) but it still doesn't work. I'll see if I can have the CUDA drivers updated to 10.2 and let you know.

Try changing line 221 as well, putting .detach() in front of .numpy() in the same way.

Will do that in 2 hours. Have a shot to comp and export to client for early afternoon. :wink:

Wow, this looks promising.
Does anyone know if this also works in a networked Linux Flame environment?
In our facility the shared/python folder is on the network, so multiple Flames use those Python scripts. Normally this doesn't cause any issues, but because of the additional packages I have to install, I'm afraid it won't work…

no worries, I’ll add it as an auto-fallback option for the future release

This should work, but everyone will get a dialog box asking to unpack the bundle. Instead you can try to restrict it to the Flame user and put it in /opt/Autodesk/user/<flame_user_name>/python. I can't check it on Linux right now, but it works for me on Mac; let me know if it does on your side.

Ok, couldn't wait to try that. And it seems to work!! I don't have the error message. I'm now running it on an 80-frame clip at 3200x1800 with a 1/2 TW just to check; it's already at 14%. It should take roughly 5 minutes on a P6000 card.

Ok, for everyone to know (as I obviously didn't): the detach() part is there so it runs on the CPU rather than the GPU (to avoid GPU problems). So it ran at 5 s/frame, whereas with the GPU it should run at x frames/sec.

Oh hell yeah I’m trying this today!

Welcome @talosh !!!

great work, thank you!

Just tried it on a Mac on a shot I TW'd recently: much, much better than Flame, and better than Resolve as well. Maybe I need to set something differently, but a 21-frame shot TW'd with the 1/2 setting rendered as 41 frames.

Thanks again!

It adds intermediate frames between the originals and starts and ends with the same frames as the original sequence. Say you need to add one frame per gap between two original frames: 21 frames have 20 gaps, so 20 synthetic frames are added, which results in a 41-frame-long sequence. Had it been 1/4, 3 frames per gap would have been added, and so on.
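The arithmetic above generalizes: for n original frames at a 1/k setting, each of the n-1 gaps receives k-1 synthetic frames. A quick sanity check (a sketch for illustration, not part of the tool):

```python
def interpolated_length(n_frames: int, speed_denominator: int) -> int:
    """Output length when each gap between original frames receives
    (speed_denominator - 1) synthetic frames, keeping the first and
    last frames of the original sequence."""
    gaps = n_frames - 1
    return n_frames + gaps * (speed_denominator - 1)

print(interpolated_length(21, 2))  # 21 frames at 1/2 -> 41
print(interpolated_length(21, 4))  # 21 frames at 1/4 -> 81
```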

So for all you Mac users out there, the install is pretty straightforward. It installs fine on a 16-core Mac Pro running Big Sur with Flame 2021.2. Performance on a 3.5K clip, rendering 1/2 over 119 frames, is about 40 minutes. So on the CPU-only Mac side the performance isn't amazing, but it would still be faster than manually painting and revealing from multiple versions of timewarps to remove tearing and nasty shiz.

That seems similar to what I got when I tested it. I tried a 35-frame HD clip set to 1/2. It took 5 minutes on my 16" MBP. What I thought was interesting is that the CPU meter in iStat never got above 15-20%.

I'm afraid the code was never built to use multiprocessing at all; it's just supposed to run on a single GPU, and in this case it seems to be running on a single CPU core instead. I'll have a look at whether adding several CPU workers improves performance.

That’d be amazing @talosh. I’ve got access to several flavors of Mac Pros and could help test if need be.

@randy @andymilkis could you guys (on Mac) save a backup of create_slowmo.py in ~/Documents/flameTimewarpML/bundle and replace it with this one? I have quickly wrapped it in torch.multiprocessing, and it should now fire off batches sized to your physical core count minus 2.

create_slowmo.py (9.4 KB)

Please watch your memory in Activity Monitor. I have 32 GB, and it's about 3 GB per thread at 1/2 and 6-7 GB at 1/4; I'll probably have to make it more memory-efficient at slower speeds.
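For reference, the "physical cores minus 2" sizing, capped by available memory, could be computed along these lines. This is only a sketch: flameTimewarpML's actual code may differ, os.cpu_count() reports logical cores (so halving it to estimate physical cores assumes two threads per core), and the 3 GB-per-worker figure is taken from the post above:

```python
import os

def worker_count(per_worker_gb: float = 3.0, total_ram_gb: float = 32.0) -> int:
    """Estimate a safe number of parallel workers: roughly physical
    cores minus 2, further capped by how many fit in available RAM."""
    logical = os.cpu_count() or 1
    physical_estimate = max(1, logical // 2)  # assumes 2 threads per core
    by_cores = max(1, physical_estimate - 2)
    by_memory = max(1, int(total_ram_gb // per_worker_gb))
    return min(by_cores, by_memory)

print(worker_count())                 # default: 1/2 speed, 32 GB machine
print(worker_count(per_worker_gb=6.5))  # 1/4 speed uses ~6-7 GB per worker
```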
