Yes, that’s correct Andriy, that was using the Timewarp from Flame’s setup option.
The inference_flame_tw.py did the trick and it looks like it can now handle longer shots.
thanks!
I’m just getting in on this. Is there a how-to video for installing on macOS? Thanks!
All you need to do is put the .py file in /opt/Autodesk/shared/python and launch/relaunch flame. The first time it will unpack/install what it needs. It will give you a prompt to let you know when it’s finished.
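If you want to do it from a terminal it’s just one copy, something along these lines (I’m assuming the downloaded script is called flameTimewarpML.py; use whatever filename your release actually came with):

sudo cp ~/Downloads/flameTimewarpML.py /opt/Autodesk/shared/python/
# relaunch flame; on the first run it unpacks what it needs and prompts when it's done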
Hey Talosh,
I’ve got your ML Timewarp working - great results! My only issue is clipping in the highlights every second frame. I’m feeding it a 16-bit ACES-tagged clip and literally every second frame is flattening the highlights. Hopefully I haven’t missed something in the thread about it and I’m not wasting your time?
Is there a limitation on the range of values it can handle?
Thanks
Talosh fixed that this morning. You might have to pull the latest version from master if he has not yet cut a release.
Hi Drew, well spotted, thank you!
This should be fixed now in v0.4.1
Thank you! I’ll install and make full use of it on Monday. Going to save me a huge amount of artefact cleanup
It works so well! Amazing!
Working great now! Thanks Talosh
This is just amazing. I just had a shot where I tried half speed, but the results were weird and playback was choppy. Realised that it was duplicate frames in the source causing it, and then I saw your tool can also take care of that. Fantastic. Thanks so much.
Hi Talosh!
Thanks for your hard work. It’s simply amazing. And thanks for taking up my suggestion to customize the installation folder. Very, very grateful.
Now, I have a problem. Releases previous to 4.0 worked, but 4.0 and 4.2 of TimewarpML do not. The installation appears to be OK. I set up the installation folder, and the folder to export the clips. I select the clip, launch TimewarpML, the clip is exported and nothing more happens.
The terminal window shows:
> Executing command: echo "/run/media/root/STONELINUX/ML_EXPORTS/testclip">/run/media/root/STONELINUX/ML_TIMEWARP/bundle/locks/4EF5C10A0E4DE73744DB08245460274030219432.lock
[flameTimewarpML] Executing command: konsole -e /bin/bash -c 'eval "$(/run/media/root/STONELINUX/ML_TIMEWARP/miniconda3/bin/conda shell.bash hook)"; conda activate; cd /run/media/root/STONELINUX/ML_TIMEWARP/bundle; echo "Received 1 clip to process, press Ctrl+C to cancel"; trap exit SIGINT SIGTERM; python3 /run/media/root/STONELINUX/ML_TIMEWARP/bundle/inference_sequence.py --input /run/media/root/STONELINUX/ML_EXPORTS/testclip/source --output /run/media/root/STONELINUX/ML_EXPORTS/testclip --model /run/media/root/STONELINUX/ML_TIMEWARP/bundle/trained_models/default/v2.4.model --exp=1;
Checking the logs, I can see in the miniconda log:
'PREFIX=/run/media/root/STONELINUX/ML_TIMEWARP/miniconda3
Unpacking payload ...
/run/media/root/STONELINUX/ML_TIMEWARP/bundle/miniconda.package/Miniconda3-latest-Linux-x86_64.sh: line 412: /run/media/root/STONELINUX/ML_TIMEWARP/miniconda3/conda.exe: Permission denied
/run/media/root/STONELINUX/ML_TIMEWARP/bundle/miniconda.package/Miniconda3-latest-Linux-x86_64.sh: line 414: /run/media/root/STONELINUX/ML_TIMEWARP/miniconda3/conda.exe: Permission denied
Of course conda.exe has full read/write permissions and is marked as an executable file.
No idea about what’s happening.
Hi Kily, if you press and hold CTRL while you click the “Create” button in the flame dialog, the konsole window will not close itself and could potentially say a bit more.
It looks like there is something with miniconda installation, could you confirm that if you delete or move away the whole folder and do a clean install of v4.0 it works?
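To move it out of the way, something along these lines should do (path taken from your log above):

mv /run/media/root/STONELINUX/ML_TIMEWARP /run/media/root/STONELINUX/ML_TIMEWARP.old
# then do a clean install of v4.0 and see if it completes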
There’s a tickbox in v4.2 that should allow you to keep miniconda as is and just update the TWML scripts.
I did update miniconda and some of the libraries in v4.1, just wondering what version of CentOS and DKU/nvidia/cuda driver you are on?
*** MacOS: 5min test needed ***
Hi guys, those of you on various Macs, could you please download the attached and run a small test to check how the thing performs across the various MacOS species? This can potentially make it faster on Mac while we’re waiting for pytorch to create a full Metal backend.
You’d need to run it from commandline:
cd ~/Downloads/ncnn_test; ./r-ncnn -m rife-v2.4/ -0 099239.jpg -1 099240.jpg -o test.jpg
Could you please attach test.jpg and DM it to me with the console output and your MacOS version?
(ignore expected_result.jpg)
Thank you!
Amazing. This thing is better than Resolve’s optical flow.
In somewhat related news, the installation works for me, but I occasionally get this error, though not while using the actual ML script.
…here it is on a login screen after a reboot, presumably from a background reactor task that initiated after a hang and reboot.
That makes sense 'cause there’s no user and it does not know where to put the preferences in this case. I’ll have a look into it, thank you for letting me know!
I found the problem, but not the solution.
But first, the answers to your questions:
- I always do a clean installation. I delete the ML installation folder and the preference folder inside the .config folder.
- Versions 4.0 and 4.2 fail in the same way.
Version 3.0 worked when I tried it a few weeks ago. Thinking about the difference, I realized that I had installed 3.0 on my system HD (home folder, no custom path option). I still have enough room on my system disk for this 4.2 version, so I tried… and bingo. Everything works. Conda’s log shows a complete, successful installation (or something like it), and the script works perfectly from flame.
So the problem is something related to permissions. Now I remember a weird problem with nuke and its cache files when I set up a folder different from the default /var/tmp and chose a folder on this same volume. That volume is one partition of my raid, used for flame’s media, formatted with xfs, with read/write permissions for all users. It works perfectly otherwise. I feel I need a linux guru for this problem.
EDIT --> it works! I was able to fix the problem. The fstab mount options for this volume included the “user” option. Wikipedia dixit: the “user” option implies “noexec”, which blocks the execution of binaries. Fixed it, rebooted… and TimewarpML is working now… I’m very happy !!! hahahah
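In case it helps anyone else, the change is a one-liner in /etc/fstab (the device and mount point below are placeholders, not my actual raid): putting “exec” after “user” overrides the implied “noexec”, or you can drop “user” altogether if only root needs to mount it.

# /etc/fstab - illustrative entry; device and mount point are placeholders
/dev/md0    /mnt/flame_media    xfs    defaults,user,exec    0 0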
Just getting into these great tools. Saw the demo on Logik Live. Awesome. Installed as instructed. Can’t get the tools to work with my NVidia RTX8000. Only works with CPU. Have CUDA 11.0 and Flame 2021.2. Using release 0.4.2.
The error when trying with the GPU is "('cuDNN error: CUDNN_STATUS_INTERNAL_ERROR\n'…"
Any ideas? The CPU option is usable, but it is super sluggish on our Z840. Not sure if I am just doing it wrong.
Hi Mark, the pytorch ML library bundled with the tool is built for Cuda 10.2, and I’m running it on Nvidia Driver Version: 450.57, CUDA Version: 11.0 with an M6000. It is possible to update it to cuda11, but before doing that could you double-check the driver and cuda version in nvidia-smi? There are different builds for 11.0 and 11.1.
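For reference, it’s just the header of the nvidia-smi table (the numbers below are the ones from my box):

nvidia-smi
# the first line of the table reports something like:
# | NVIDIA-SMI 450.57    Driver Version: 450.57    CUDA Version: 11.0 |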
Hey @talosh,
Setting FLAMETWML_HARDCOMMIT=True
with 0.4.2 doesn’t seem to be having any effect on 2021.1 on Linux. The script cleans up the original export, but the imported result is still softlinked and not removed from the filesystem.
There aren’t any errors about it in the terminal unfortunately.
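For what it’s worth, this is how I’m setting it - exported in the shell before starting flame from that same shell, on the assumption that the script picks it up from flame’s environment:

export FLAMETWML_HARDCOMMIT=True
# start flame from this same shell so the variable is inherited by the python hooks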