Flame Machine Learning Timewarp, now on Linux and Mac

Annnddd it’s working great on Mac now… multi-threaded CPU support works well. Not as fast as a bunch of GPUs, but definitely doable.

Thanks for the great work and explanation.

Fabulous work @talosh. Sounds like you’ve been working on this for a while. What else do you think is possible with this?

It’s literally been about a week, Randy, I’m quite a newbie. We’ve been exploring options for the job I’m on now, which has lots of 30 to 24 conversions and fluid morphs. So this thing came up because of that in the middle of last week, and I managed to amend it to read and write EXRs, and then it became this script.
Thinking of a sort of roadmap: apart from the multi-CPU optimization, which will probably come first, I’d like to do a fluid morph module - it is heavily manual at the moment but it has already been giving good results in production.

Then it should be an actual Timewarp, not only slowmo, so the plan is to make it able to render actual Flame Timewarp setups.

Another idea is to try some sort of DSP tuning, like high-pass or flicker removal, as the input to the warp - so basically having two inputs and warping one sequence driven by another.

And some sort of “train as you go” button, so when it sees an uncertain area it would try to refine itself against the actual images it is currently warping.

Wow. Absolutely brilliant. How do you plan on getting the 30 to 24? Manually entered decimals like 1.25?

We’re just up-fps-ing it to 120 and then timewarping it back to the timeline’s start-to-end frame range, with the timewarp set to nearest.
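For a rough sense of why 120 works as the in-between rate, here is the frame mapping as a small sketch (the function name below is just for illustration, not part of the script). 120 is the least common multiple of 30 and 24, so every 24 fps output frame lands exactly on one of the interpolated 120 fps frames and the nearest-frame timewarp never has to blend anything:

SRC_FPS, MID_FPS, DST_FPS = 30, 120, 24

def nearest_mid_frame(dst_frame):
    # 120 fps frame index that a 24 fps output frame snaps to.
    t = dst_frame / DST_FPS            # output frame time in seconds
    return round(t * MID_FPS)          # exact multiples of 5, since 120 % 24 == 0

# Every (MID_FPS // SRC_FPS) = 4th frame at 120 fps is an original 30 fps frame;
# the rest come from the interpolation.
for f in range(5):
    print(f, '->', nearest_mid_frame(f))   # 0 -> 0, 1 -> 5, 2 -> 10, 3 -> 15, 4 -> 20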

Ignore me, it was the wrong parameter.

Holy cow. Thank you @talosh! This is so good. I just ran it on a shot I had to do a few weeks ago that was a real nasty paint/patch that took hours. The ML version has some super minor areas to clean up but man… crazy.

Can you explain the steps a little bit further?
Much appreciated!
I mostly have to do 25 -> 24 for cinema
or 25 -> 30 for converting European to US standards.

Just tried this. VERY impressive! :slight_smile: … By the way… for some reason I can only select 1/2 at the moment. When I try to select something else the menu goes blank and it doesn’t run. I have to exit and restart Flame to get it to work again. Other than that… very impressive. Kudos!!

Having a similar error. On CentOS 7.4

RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 23.65 GiB total capacity; 914.62 MiB already allocated; 9.75 MiB free; 958.00 MiB reserved in total by PyTorch)
Press Enter to continue…Exception ignored in thread started by: <function build_read_buffer at 0x7f5ab0d16f70>
Traceback (most recent call last):
  File "/opt/Autodesk/chengeveld/flameTimewarpML/bundle/create_slowmo.py", line 139, in build_read_buffer
    frame_data = cv2.imread(os.path.join(user_args.img, frame), cv2.IMREAD_COLOR | cv2.IMREAD_ANYDEPTH)[:, :, ::-1].copy()
TypeError: 'NoneType' object is not subscriptable
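Side note on the TypeError at the end: it is a knock-on failure rather than the real problem - cv2.imread() returns None when it cannot read a file, and the script slices the result without checking. A defensive version of that read, purely as a sketch (the helper name is mine, not from the script):

import os
import cv2

def read_frame(img_dir, frame_name):
    # cv2.imread() returns None on failure instead of raising,
    # so guard before the channel-reversal slice.
    frame_path = os.path.join(img_dir, frame_name)
    frame_data = cv2.imread(frame_path, cv2.IMREAD_COLOR | cv2.IMREAD_ANYDEPTH)
    if frame_data is None:
        raise IOError('unable to read frame: %s' % frame_path)
    return frame_data[:, :, ::-1].copy()   # BGR -> RGB

That would report the missing or unreadable frame directly instead of a NoneType traceback.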

What version of CUDA drivers do you have installed?

CUDA Version: 11.0

Thanks Ton, I will have a look into it once I have time. I might need to send you a custom script with some debug info around this menu.

Hi Hengy, it looks like you’re running out of memory - it might be Flame that is using it as well. Check if you can run it right after a fresh start of Flame, and possibly at a lower resolution first. The plan is to implement a CPU/GPU mode switch on Linux in the next release, so one could get around low GPU memory by trading off some speed.
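The rough idea for that switch - just a sketch, with run_inference() and model standing in for the real calls in the script - is to try the GPU first and drop to the CPU if PyTorch runs out of VRAM:

import torch

def interpolate_with_fallback(model, frames, run_inference):
    # A CUDA out-of-memory error surfaces as a RuntimeError; if that happens,
    # release the cached VRAM and retry the same work on the CPU.
    try:
        return run_inference(model.to('cuda'), frames, device='cuda')
    except RuntimeError as err:
        if 'out of memory' not in str(err).lower():
            raise
        torch.cuda.empty_cache()
        return run_inference(model.to('cpu'), frames, device='cpu')

Slower, of course, but it would let heavier resolutions finish instead of failing.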

This conversion is a timewarp, and the resulting frames hit somewhere in between the original frames. It is not currently possible to render a frame at an arbitrary ratio between two frames using this engine, as it is trained to predict the middle frame, but one can get closer to the goal by doing halves until it gets close enough, within some threshold. The plan is to implement this in one of the future releases, so one could just right-click on a timeline segment with a timewarp effect and get the result back.

Currently this script can only fill in between with more frames at powers of two (2**ratio), which is not exactly optimal for this scenario but works. So you can, for example, render a 1/8th version of a clip, then bring it back to the timeline by setting in/out accordingly and doing a fit-to-fill (alt-shift-R); it will create a new timewarp, but because of the additional frames the motion will become less stuttery. Watch out for the artifacts though. Hope that helps.
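The halving trick, sketched out below - interpolate_middle() is just a stand-in for the model call, which only knows how to predict the frame exactly halfway between two others:

def frame_at_ratio(frame_a, frame_b, t, interpolate_middle, threshold=1.0 / 16):
    # Bisect towards the requested ratio t (0 < t < 1) by repeatedly asking
    # the model for the middle of whichever half still contains t.
    lo, hi = 0.0, 1.0
    img_lo, img_hi = frame_a, frame_b
    while (hi - lo) > threshold:
        mid = (lo + hi) / 2.0
        img_mid = interpolate_middle(img_lo, img_hi)
        if t < mid:
            hi, img_hi = mid, img_mid
        else:
            lo, img_lo = mid, img_mid
    # Return whichever bracket end landed closest to the requested ratio.
    return img_lo if (t - lo) <= (hi - t) else img_hi

With a threshold of 1/16 that is at most four extra passes through the model per output frame, which is the kind of loop a future right-click-on-a-segment version could run.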

958.00 MiB reserved in total by PyTorch - this value seems a bit too low to me; normally it takes around 3-6 GB of VRAM. Could you run nvidia-smi to check how much VRAM you have left?
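If it helps, something like this prints total/used/free VRAM per GPU (just a convenience wrapper around nvidia-smi’s standard memory query flags; Flame and anything else on the GPU counts towards the used figure):

import subprocess

print(subprocess.run(
    ['nvidia-smi', '--query-gpu=memory.total,memory.used,memory.free',
     '--format=csv'],
    capture_output=True, text=True, check=True,
).stdout)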

That was it. Converted to HD and it’s running now. Thank you.

Looks terrific. And pretty fast. Need to see how large I can go before running out of memory.

The CPU-only mode will definitely help with large resolutions; it is going to be in the next release.
