With hard-committing I could not find a way to simply tell Flame to do the hard import from Python, so it is a bit of a workaround. Specifically, it goes through these steps:
soft-import clip
open it as a sequence
apply an Image soft-effect to it
render that soft effect
execute “Hard Commit” shortcut
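The steps above could be sketched roughly as below. This is a hedged illustration only: the call names (`import_clips`, `open_as_sequence`, `add_effect`, `render`, `execute_shortcut`) are hypothetical stand-ins for whatever the Flame Python API exposes in a given version, so the API object is passed in rather than imported.

```python
def hard_import(flame_api, clip_path):
    """Sketch of the soft-import + hard-commit workaround.

    flame_api is a stand-in for the Flame Python module; the method
    names here are assumptions, not the confirmed Flame API.
    """
    # 1. soft-import the clip
    clip = flame_api.import_clips(clip_path)
    # 2. open it as a sequence
    seq = flame_api.open_as_sequence(clip)
    # 3. apply an Image soft-effect to it
    seq.add_effect("Image")
    # 4. render that soft effect
    seq.render()
    # 5. execute the "Hard Commit" shortcut to consolidate the media
    flame_api.execute_shortcut("Hard Commit")
    return seq
```

Wrapping the API object like this also makes the sequence of steps easy to verify with a stub when debugging which step fails.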
Can you see any of these additional steps happening? It would be useful to understand which step is failing in this case.
This is an interesting thread, and I am seeing if this ML timewarp can help with a slowdown a client needs on some water footage, i.e. fast-moving caustics, which are always a pain to slow down.
I can start a new thread, but is anyone aware of a good up-rezzing training for sharpening footage? I have some bird footage that is 1080, but we need it a bit sharper as it's getting scaled up for framing purposes. Any experience with doing the ol' 'enhance'… 'enhance'… trick?
Theo
We ran it through ML timewarp. It's brilliant. It made fast-moving water (think Miami blue water) at 50% speed look fantastic. Well done, much props!! It didn't feel warpy and elastic. It's great!
Hi @talosh - Just wanted to commend you on this - amazing! I have it working on my MBP (Mojave, Flame 2020.3.1), but am struggling with an error on my iMac Pro (Catalina, Flame 2020.3.1). Any help would be much appreciated, to hopefully benefit from a faster processor!
I was unable to use both GPUs at the same time, only to select which one to use.
I'll try in the future to write a script that forces the use of both.
CPU performance was a disaster and took over 1300 sec for the same task (it probably uses different CPU settings compared to those within the Flame application).
From within the Flame application, with those settings and model v2.3:
Free RAM: 97.2 Gb available - Image size: 2224 x 1548
Peak memory usage estimation: 6.9 Gb per CPU thread
Limiting threads to 14 CPU worker threads (of 36 available) to prevent RAM from overflowing
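The thread cap in that log appears to be simply free RAM divided by the estimated peak usage per thread, clamped to the available cores. A minimal sketch of that calculation (the 97.2 Gb, 6.9 Gb/thread, and 36-core figures come from the log above; the function name is mine):

```python
def max_worker_threads(free_ram_gb, peak_gb_per_thread, cpu_count):
    """Cap CPU worker threads so the estimated peak memory usage
    (threads * peak per thread) stays within the free RAM."""
    by_ram = int(free_ram_gb // peak_gb_per_thread)
    # Never exceed the available cores, and always allow at least one thread
    return max(1, min(by_ram, cpu_count))

print(max_worker_threads(97.2, 6.9, 36))  # -> 14, matching the log
```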
Things are not so easy to test. It seems that for short sequences (100-200 frames) it is similar to, or even a bit faster than, the GPU render from the "stand alone" app, but beyond that frame range it starts to really slow down. For long shots it takes a really long time, and the more time passes, the slower it goes.
Maybe with some tweaking of the RAM and CPU usage it could work better.
In conclusion…
The advantage I see in the stand-alone application with the new backend is that I can use an eGPU just for the TW task and keep working in Flame without using the CPU or internal GPU, which could be useful for long sequences.
Even with one GPU, it seems to have a more predictable rendering speed.
For sequences (commercial shots, for example), which are usually not that long, I don't see much of a difference in performance.
I'll keep testing to see if GPUs are really more consistent in their results compared to the CPU.
Hi Christofer, would you be able to send an archive with your clip and TW setup? It could be just a dummy or noise clip of the same length and resolution if there's a problem with sending over the actual footage.
The module that does timewarps from Flame uses the Ruby code from Julik Tarkhanov to parse Flame animation channels, and from the log it looks like it fails when parsing back the result (it expects the frame to be a round integer but was given 1.5 instead).
I'd need to move this parser into Python code somehow anyway, so this might be a good reason to start )
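A quick way to spot this kind of setup before it hits the parser is to check the keyframe frame numbers for non-integer values. A minimal sketch, assuming the channel comes back as a list of (frame, value) pairs; the function name and data shape are illustrative, not the actual parser's API:

```python
def find_non_integer_keyframes(keyframes, tol=1e-6):
    """Return the frame numbers that are not (near-)integers.

    keyframes: iterable of (frame, value) pairs, as a channel parser
    might hand them back from a Flame animation setup.
    """
    bad = []
    for frame, _value in keyframes:
        # Treat anything further than tol from a whole number as suspect
        if abs(frame - round(frame)) > tol:
            bad.append(frame)
    return bad

# A timewarp channel with one non-integer keyframe, like the 1.5
# that tripped up the parser:
print(find_non_integer_keyframes([(1.0, 0.0), (1.5, 12.0), (10.0, 100.0)]))  # -> [1.5]
```

Running Simplify on the channel in Flame (as mentioned below) is one way to snap such keyframes back onto whole frames.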
Indeed, it was a janky timewarp with non-integer keyframes, and a quick Simplify solved it. Thanks, AAF. I'll put it up on a link shortly, @talosh, so you can have a look. Thanks so much for all of your work and support with this.
Just recently installed CentOS 7.6 on a z8. Any caveats to the install? Just rebuilt the system after unrelated 7.4 issues and don’t want to wreak havoc on my IT guy.