Hi Ben, it looks like it is hitting the OS limit on the maximum number of open files, probably due to a bug in the OpenCV library that doesn’t close files in time. It should be possible to fix this by increasing the limit.
launchctl limit maxfiles - should show the current limits
ulimit -n [value] should set the open-files limit to a given value for the current session, until reboot (note it is -n for open files; -u limits the number of processes).
Could you please check whether increasing the max open files value helps here? If so, I can try to add it as a command to the script.
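A minimal check-and-raise sequence might look like this (sketch only; on macOS the hard limit may report as unlimited, in which case pick an explicit number instead):

```shell
# On macOS, `launchctl limit maxfiles` shows the system-wide soft/hard limits.
# Show the per-session soft limit for open files (note: -n, not -u; -u is max processes)
ulimit -n
# Raise the soft limit up to the hard limit, for this shell session only
ulimit -n "$(ulimit -Hn)"
ulimit -n
```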
Really awesome work @talosh. Do you think it would be possible to allow us to specify the temp directory as well as include an option to automatically remove the files that are exported and then rendered (which would also mean caching the result as opposed to soft-importing it)?
If one isn’t on top of it I could see it becoming a problem in terms of disk space.
I tried, but with no change in the result, though my command line skills are very limited to say the least.
Below is what I tried; the 4sec 9frm test, for example, still fails.
Was:
benvaccaro@Bens-Mac-Pro ~ % launchctl limit maxfiles
maxfiles 256 unlimited
changed to the below:
benvaccaro@Bens-Mac-Pro ~ % launchctl limit maxfiles
maxfiles 65536 200000
benvaccaro@Bens-Mac-Pro ~ % ulimit -u
512
But I'm not sure if I did this correctly, as now I can't seem to change the "ulimit -u" result.
thanks very much for the quick update and please excuse my late reply!
I had to adjust init_env to respect the env vars. Currently the path to miniconda is still hardcoded in there. I changed it like this:
After this it all works fine on macOS. But I can't get it running on Linux. Installation went smoothly, the menu shows up in Flame, I can export the clip, but then it's stuck and nothing happens anymore (see screenshot). Any ideas what's going on? As there are no errors printed, I've no idea what's missing…
when timewarped by 50% the result is 1 frame shorter than what I would expect
it would be great if the rendered sequence would pick up the name and metadata from the original clip
it would be really great if we had an env var for the render location, too! This way we could dynamically change it to the current project, and renders would always end up in the correct project on the SAN and not in some local dir.
Hi Kyle, I think the temp directory is already selectable in the dialog before you actually start the export.
As for hard-import, I thought about it as well but could not find an option within Flame's Python API to do it. As a workaround, it is possible for the script to drop a dummy effect on the imported clip and hard-commit it. The code for this is in fact already there, just commented out at the moment. If you want to test it I can probably create a version with some sort of a toggle for this behaviour; I'll let you know when there's something to test.
Very true. I suppose I'm more after a way to standardize it across different machines so it's not up to the artist to manage the data, i.e. a variable that defines a common place where these files will be kept.
That said, if they were automatically cleaned-up then this wouldn’t really be needed.
(Regarding the 50% slowdown being 1 frame shorter than expected… this ML tool creates in-between frames, which means it may in fact be 1 frame shorter than what we are used to.)
Hi Ben, as far as I can see it happens when you use a Timewarp from a Flame setup, right? If so, could you please replace the file named inference_flame_tw.py in the bundle folder with the attached one and check whether it fixes the issue?
Hi Kyle,
I think there might be more cases depending on the pipeline, e.g. placing the files in the shot's work folder.
Though if you'd like to stop artists from selecting the work files location, it is very easy to achieve.
At around line 1038 in flameTimewarpML.py the location is actually set, so by modifying this one can set it to an env variable or whatever the pipeline requires, say:
self.working_folder = os.getenv('FLAMETWML_WORK_FOLDER')
Then just search for the lines that say:
vbox.addLayout(hbox_workfolder)
and comment those lines out. That will remove the selector from the dialog boxes of the tools.
If you guys think it should be done this way, I can easily modify the code for the next version so it does this automatically when the env variable is set.
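As a quick sketch of what that override could look like (the fallback path below is just an illustrative assumption, not the script's actual default):

```python
import os

# Read the pipeline-defined work folder from the environment;
# fall back to a hypothetical default when the variable is not set.
working_folder = os.getenv('FLAMETWML_WORK_FOLDER')
if not working_folder:
    working_folder = '/var/tmp/flameTimewarpML'  # assumed default for illustration
print(working_folder)
```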
Hi Claus, glad it worked at least on Mac.
The file you're editing looks like the init_env file in the bundle. It is not involved in the actual tool; it is just there to make it easy to init the environment and check / install dependency libs, though it is good to double-check that the env variables work there as well.
The Flame part looks slightly suspicious to me because it is in CPU Proc mode, whereas on Linux the default should be GPU processing. Is that something you set manually by pressing the 'CPU Proc' button? If not, it might be that you have a different Nvidia driver with a different CUDA version linked to it, and if so it might be necessary to specify the CUDA version to pip3 during the pytorch installation. The default CUDA version pytorch is linked against is v10.2.
Could you please check the output of nvidia-smi on the Linux machine and post the driver and CUDA version?
yes, it’s the file from the bundle. I followed your instructions on GitHub and wasn’t sure at this point if the script was only used for the initial setup or also by the tool itself.
I didn't enable the CPU button. Here is the output from nvidia-smi:
So the CUDA version is 10.0 and not 10.2. As this installation should (hopefully) work for all workstations, I assume it would be best to upgrade the driver rather than mess around with the pytorch installation shared in the repository, right? What do you think?
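For what it's worth, the CUDA version the driver supports can also be pulled out of nvidia-smi's header line programmatically; a small sketch (the sample string below is illustrative text, not real output from this machine):

```python
import re

def cuda_version(smi_header: str) -> str:
    """Extract the CUDA version from an nvidia-smi header line, if present."""
    match = re.search(r'CUDA Version:\s*([\d.]+)', smi_header)
    return match.group(1) if match else 'unknown'

# Illustrative header text; newer drivers print the supported CUDA version here.
sample = "| NVIDIA-SMI 440.33.01   Driver Version: 440.33.01   CUDA Version: 10.2 |"
print(cuda_version(sample))  # → 10.2
```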
Regarding the working folder / env setup:
Personally I would prefer to have an env var set up the path, but still let the user change it if needed (so don't hide that part in the GUI). I love automation, but in my world the user should always be able to change things if necessary. But maybe, if one wants to hide it from the user, that could be another setting done via an env var?
I think for all centralised installations it is really handy to do all settings via env vars and not change anything in the code directly, as it makes updating much easier.
Hi Claus,
I see, will try to do it with two settings over env variables.
As for the CUDA version: the pytorch version that supports 10.0 is quite old.
If you want to give it a quick try, first uninstall the current torch version with:
pip3 uninstall torch
and then try:
pip3 install torch==1.2.0
This should be done from within the miniconda env shell, and it might work with this version as well.
I wonder what Flame version you're using with this driver. I'm testing on 2020.2 and not sure about earlier versions in terms of Flame's Python API compatibility.
Hi Ton,
I think that's a good question for the guy who is actually writing the RIFE-2020 algorithm we're using here. @andymilkis is probably collecting some questions to ask him at one of the Logik events, and I think this is a good one to add. As far as I know, the answer is both yes and no: there is an optical flow part that generates forward and backward vectors, but the actual generated frame is made by another network that takes those frames and vectors as input and creates the image. It is not currently possible to get a vector output out of this network; whether it is possible in general is something we can try to find out.
There are several open-source projects that implement optical flow via machine learning, and I've been playing with GitHub - princeton-vl/RAFT to run it within the same environment. It should be easy to add it as a module for generating motion vectors; it is just different from RIFE.
The general thing with optical flow and ML is that it needs "ground truth" optical flow to be able to train and evaluate the results, and this is not easily available for real footage. Most of the open-source projects use the Sintel dataset, which is an open-source 3D movie made in Blender with the vectors rendered, so there is something to train against:
Wow… this is really interesting… I was so amazed about the handling of edges by your ML timewarp that I started dreaming about having motionvectors at a similar level, that’d be beyond useful!!
The version that supports env variables for setting the work folder is now on the main branch on GitHub.
Setting FLAMETWML_DEFAULT_WORK_FOLDER will set it as the default and still let users change it.
With FLAMETWML_WORK_FOLDER set, the ability to change it is blocked.
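For example, in a facility wrapper script (the paths here are made up for illustration):

```shell
# Default that artists can still override in the dialog
export FLAMETWML_DEFAULT_WORK_FOLDER="/mnt/san/projects/current/twml"
# Or lock the location entirely (uncomment to hide the selector):
# export FLAMETWML_WORK_FOLDER="/mnt/san/projects/current/twml"
echo "$FLAMETWML_DEFAULT_WORK_FOLDER"
```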