Flame Machine Learning Timewarp, now on Linux and Mac

@talosh love this tool so much. Thank you for creating this for us. Literally a gamechanger.

Release 0.4.3 (on Linux) seems to automatically resize footage to 1920 x 1080 for some reason. Or am I doing something wrong somehow…?

Hi Jan! There is nothing in the code I'm aware of that would even attempt to change the size, so this is not really expected. The changes between 0.4.2 and 0.4.3 were very minimal: purely about python3 compatibility of the code running within Flame, plus a slight improvement to the hard commit logic when the env variable is set to do it. Probably the first place to look is the import settings in Flame for EXR sequences - is there anything related to this? Another step might be to drop in 0.4.2 instead and see if it changes anything. Let me know the results, thank you!

Hey @talosh,

good news is: I finally got it running in our shared environment after some initial problems.
Bad news is: When I run it on linear footage, the highlights get clamped. I read this was fixed in version 0.4.1, but I’m afraid it’s still happening. I’m running 0.4.3. :confused:

1 Like

Hi Clauss, I've double-checked the code in 0.4.3 and the clamping fix is there; nothing has changed since then. Are you using Linux or Mac, and if on Linux, does it behave the same with CPU mode enabled?

Sorry. I know this is a dumb question… I've installed (terminal looked ok error-wise) and restarted - now what do I do? The timewarp tool doesn't look any different, and there aren't any new nodes in the batch/action/matchbox/ML node bins… Did I install correctly? I'm on macOS Mojave. :pray:

Rt click on a clip on the desktop or in a library. You should see the Flame Timewarp ML option

1 Like

Thankyou! Got it. :+1:

Hey Andriy,

after some further investigation I noticed, that it’s not (directly) related to the tool:
My clip had a resize soft effect applied and was part of a 10-bit timeline (the clip itself being 16-bit fp, ACES2065-1). When I did the simple 50% TW, the resize got rendered before the export and the clipping was already applied at that point inside of Flame.
When I did the smarter TW option with the TW soft effect, the script (I guess) did a sequence publish instead of a simple export and the highlights were preserved.
Same behaviour on Linux and Mac. The CPU-only option did nothing, btw - it got stuck at the point where the miniconda env shows up with the progress bar.

So it’s not a bug, but more a workflow thing we have to carefully watch out for when using the tool. :slight_smile:

1 Like

@talosh Is there anything preventing this from being able to utilize a 2nd GPU if it's present?

Also, what's the best way to update the script? I downloaded the latest release and overwrote 4.2, but since it's already installed nothing was unpacked or cleaned up, meaning the source .py file is still 1+ GB. Is it better to download the source code and copy out the core script from there?

Hi Kyle, this should not be difficult at all - as far as I know PyTorch has a distributed processing layer. It is just something that is difficult for me to check, because I have no Flame machines with more than one GPU nearby.
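
For what it's worth, one simple way to try a dual-GPU run would be torch.nn.DataParallel, which splits each batch across the visible cards. This is just a sketch with a dummy model, not the actual interpolation model or loop from the script:

import torch
import torch.nn as nn

# dummy stand-in model - the real TimewarpML interpolation network is different
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)

if torch.cuda.device_count() > 1:
    # DataParallel scatters each input batch across all visible GPUs
    model = nn.DataParallel(model, device_ids=list(range(torch.cuda.device_count())))

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

# a dummy batch of two HD frames; with two GPUs each card processes one frame
frames = torch.randn(2, 3, 1080, 1920, device=device)
with torch.no_grad():
    out = model(frames)
print(out.shape)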

Regarding the second thing with updating - this should not be the case; moving from 4.2 to 4.3 should install things again and prompt for the script to be cleaned up. Are you using some centralized env-variable-driven setup or is it just a plain single-workstation layout?

I’d be more than happy to test the dual GPU setup if you’d like.

We’re using a centralized install. I ended up downloading the source and copying over the main script. Seems to work ok for what it’s worth. That said I know you said it was just some code changes and nothing to do with the model, etc.

1 Like

Yes, 0.4.3 is just Flame 2022 compatible, whereas 0.4.2 is not.

@talosh do you still need help running this on the Mac side?

Hi Randy,
If it is about 2022 on Mac - it would be good to check if it works smoothly there.

With regards to the Mac GPU tests - I think it is fine; that thing with ncnn + Vulkan seems to be running on Mac, and if you have a graphics card it is using it. The thing is, even if it is faster than using CPUs it does not come close to CUDA performance, and the whole hassle of creating a separate Mac backend written in C++ and maintaining it alongside the Python one on Linux is probably not worth it. There have been some indications that a Metal backend is coming to PyTorch - there are experimental options and you can compile it to work on iPhone now. So the hope is that as the M1 chips get stronger it will just come to PyTorch on Mac as a backend eventually and will run the same as on Linux.
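
When/if that Metal backend lands, the device pick on the Python side would probably look something like this - just a sketch against stock PyTorch (the mps check exists in newer releases), not anything from the current TimewarpML code:

import torch

# prefer CUDA on Linux, fall back to Metal (MPS) on Apple Silicon, then CPU
if torch.cuda.is_available():
    device = torch.device('cuda')
elif getattr(torch.backends, 'mps', None) is not None and torch.backends.mps.is_available():
    device = torch.device('mps')
else:
    device = torch.device('cpu')

print('running inference on', device)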

2 Likes

@talosh , can you please refresh my memory on how to adjust the ml_timewarp.py file to increase the amount of cores/ram the script uses?

Hi Randy, it is currently a hardcoded multiplier used to estimate it depending on the given image size.
It is in the file “inference_common.py” at line 20:

thread_ram = megapixels * 2.4

Decreasing this value would increase the number of threads in CPU mode. Alan has been playing with it recently and found that it works fine for him with values as low as 0.5 on Linux.
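
The way the multiplier feeds into the thread count is roughly along these lines - a sketch only, using psutil for the RAM check and assuming thread_ram is expressed in gigabytes; the real inference_common.py may differ:

import psutil

# rough idea only - not the actual code from inference_common.py
width, height = 3840, 2160                      # size of the frames being processed
megapixels = (width * height) / (10 ** 6)
thread_ram = megapixels * 2.4                   # lowering 2.4 (Alan went as low as 0.5) allows more threads
free_ram_gb = psutil.virtual_memory().available / (1024 ** 3)
max_threads = max(1, int(free_ram_gb / thread_ram))
print('estimated CPU threads:', max_threads)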
Here is our discussion in full:

Hope that helps!

@talosh Thank you for creating MLtools for us!
When I use this tool, I get the following error message.

initializing Timewarp ML...
usage: inference_sequence.py [-h] [--input INPUT] [--output OUTPUT]
                             [--model MODEL] [--UHD] [--exp EXP] [--cpu]
inference_sequence.py: error: unrecognized arguments: 605_0003_033/source 605_0003_033

I have 4 Mac Flames in my studio and 3 of them show the same message.
Is there a way to use it correctly? Thank you!

1 Like

I would guess that due to the space in your filename (between source and the number) the number gets interpreted as an argument. So avoid having any spaces in your filenames for the moment.
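
To illustrate with a made-up path containing a space: the shell hands argparse two tokens instead of one, and the leftover token is exactly what inference_sequence.py reports as unrecognized (a sketch, not the actual parser from the script):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--output')

# '/jobs/shot 605_0003_033' arrives from the shell as two separate tokens
args, leftover = parser.parse_known_args(['--output', '/jobs/shot', '605_0003_033'])
print(leftover)   # ['605_0003_033'] - parse_args() would abort with "unrecognized arguments"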

1 Like

Thank you for your comment.
But I'm not using a space.

--output /Users/rooma02/Desktop/ML/A_TWML2_2021 605_0053_FB6

What about the name of your clip inside flame? The path from your last post doesn’t look like the path in your first post. :thinking:
Somewhere there is a space! :grin:

1 Like