Thank you @talosh and @cnoellert for this! It’s soooo good.
Is it working with Linear/ACEScg in the latest version?
Our current workaround is to flip it to ARRI Alexa LogC, do the TW, and flip back to ACEScg.
Hi William,
TimewarpML uses the PyTorch library (https://pytorch.org) to process images with neural networks.
It is technically open source but mostly driven by Facebook. Unfortunately, at the moment there is no M1 Neural Engine backend implemented.
But we have a quote from this thread (https://github.com/pytorch/pytorch/issues/47702):
We plan to get the M1 GPU supported. @albanD, @ezyang and a few core-devs have been looking into it. I can’t confirm/deny the involvement of any other folks right now.
Apparently there is a way to install a native arm-compiled version (with 1.9.0), and it might be faster than the emulated one.
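If it helps, here is a rough way to check which build you actually have (just a sketch; it assumes PyTorch is importable from the same Python the tool uses):

```python
import platform
import torch

# Under Rosetta 2 emulation platform.machine() reports 'x86_64';
# a native arm64 build of Python/PyTorch reports 'arm64'.
print('Python arch:', platform.machine())
print('PyTorch version:', torch.__version__)

# torch 1.9.0 has no Apple GPU / Neural Engine backend, so inference
# runs on the CPU either way; the 'mps' backend only arrived later.
print('CUDA available:', torch.cuda.is_available())
if hasattr(torch.backends, 'mps'):
    print('MPS available:', torch.backends.mps.is_available())
```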
I don’t have any M1 machines around to really test it, but here is some info that Google gives us:
Great, thanks. For whatever reason I thought PyTorch was natively M1 supported and was just curious, but thanks for following up; I will be interested to see how it goes.
@talosh, first: fantastic work, it does a great job.
I do have some questions about running the script.
It always defaults to a certain path to do its work. Can that be modified in the menu?
Whenever I run it, it’s locked to that path. What defines that path? I was trying to figure out in the script where it’s being defined.
The other question: sometimes the files seem to render, but they don’t populate in the reel.
I have found the files; reading through the board, I think there was something that addresses that.
Hi Brian,
This is not an expected behaviour and suggests some issues with the script reading or writing its preferences.
The first thing to check is probably whether there are sufficient permissions to read and write there.
Not sure what platform you are on, but on Linux the prefs should be in the ~/.config folder, and on Mac it is ~/Library/Preferences.
I think in the current version the lock files responsible for getting things imported back are placed there as well
(that is something I’d like to change in the new one, but it is still in progress).
So could you please check if there’s anything related to TimewarpML in that folder and whether it is generally readable/writable?
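If it helps, something like this quick sketch (assuming the locations above; the actual file names may differ) would show whether anything TimewarpML-related there is readable and writable:

```python
import os

# Prefs / lock-file location per platform, as described above.
prefs_dir = os.path.expanduser(
    '~/Library/Preferences' if os.uname().sysname == 'Darwin' else '~/.config'
)

for name in sorted(os.listdir(prefs_dir)):
    if 'timewarp' not in name.lower():
        continue
    path = os.path.join(prefs_dir, name)
    print(path,
          '| readable:', os.access(path, os.R_OK),
          '| writable:', os.access(path, os.W_OK))
```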
I’ll be honest @finnjaeger
This thread has gotten a little too big for me to manage.
I am still stuck on Flame 2020, so when I finally get Python 3 I am going to have more than 330 messages to wade through.
@talosh I can’t thank you enough for the work you have done. I hope you will forgive me if I DM you in a couple of months, when I finally update, to pick your brain.
Once again, thank you. But is there any way you can slow the flow of questions in this thread, @randy? I will need to get up to speed at some point.
Hmm. It’s a big thread, yes. But I’m not sure how slowing anything would be helpful. There really isn’t that much you need to read. The information is here if and when you need it. All you gotta do is find the latest download on the GitHub, install it, and off you go. The vast majority of the posts in this thread are older questions about Mac and CPU-only setups, relevant only to older versions of the tool. And when some of us had installation problems we posted here, but those are unlikely to happen to you. I’m super proud that there’s a single place on the entirety of the internet where sysadmins and artists can go to find out about it. If in 1,000 years digital archaeologists were going through the rubble of the internet, they’d likely look at this thread and nod approvingly. The search is your friend. The thread is a year old but activity has slowed substantially, so when the late adopters join the party they can wade through the empty champagne bottles and Polaroid selfies to see what they’ve been missing.
Andy,
Question: is there a way you can replace this link, or add a solution to it, that points to the release page on GitHub?
That way, when people search in the future (like I did), they can click right through to the latest and greatest! This tool is way too cool to not have everyone using it!
I’ve found it pretty robust with any colour space, so it seems it is not that dependent on it.
Though it is always good to try things out.
Regarding the EXRs: it currently works by exporting EXRs and importing them back, regardless of the input clip. If that is something you’d like to change, please let me know.
I have found no way as yet to get a clip’s tagged colour space in Python, so there’s no way for me to tag it back. If someone knows it is possible, please point me in the right direction in the API )
If you are using an RTX 5000, RTX 6000 or RTX 8000 (Nvidia’s earlier Turing-generation cards), then you can ignore this.
To consolidate and organize the information for those of you on CentOS 8.2 using Ampere GPUs such as the RTX A4000, RTX A5000 and/or RTX A6000, these are the instructions to get TWML up and running.
This is after a fresh install of the Adsk CentOS 8.2 kickstart, plus a fresh install of Flame and TWML 0.4.3: you’ll get an error message when attempting to launch the tool in Flame.
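My guess (an assumption on my part, not confirmed) is that the torch build bundled with 0.4.3 predates CUDA 11.x and so has no sm_86 kernels for the Ampere cards. You can check from the tool’s Python environment with something like this sketch:

```python
import torch

# Ampere cards (RTX A4000 / A5000 / A6000) need sm_86 kernels.
# If 'sm_86' is missing from the compiled arch list, the bundled torch
# build is too old for these GPUs and needs a CUDA 11.x build such as 1.9.0.
print('torch', torch.__version__, '| built against CUDA', torch.version.cuda)
print('Compiled arch list:', torch.cuda.get_arch_list())
if torch.cuda.is_available():
    print('GPU:', torch.cuda.get_device_name(0))
```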
One interesting thing, @talosh, about this CentOS 8.2 / torch 1.9.0 setup on Flame 2022.1 is that after export the Flame export menu doesn’t ‘release’, effectively keeping the TWML process as a ‘foreground’ process.
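In case it is useful for debugging, one generic pattern (purely a sketch, not how the script actually launches its work) that keeps an export hook from blocking the UI is to start the heavy process fully detached:

```python
import subprocess

# Hypothetical command standing in for whatever TWML actually runs.
cmd = ['/bin/sleep', '60']

# Detach from the parent's session/process group and drop stdio, so the
# export hook can return immediately and the UI is released.
subprocess.Popen(
    cmd,
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,
)
```

With `start_new_session=True` the child no longer belongs to the caller’s process group, so nothing is left waiting on it in the foreground.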