Flame Machine Learning Timewarp, now on Linux and Mac

I just noticed this… I can’t seem to get past this step. Is this the problem?

Output log to: '/opt/Autodesk/log/slateDiskCacheCleaner.log'.
Registered thread 'CoAsyncLogger' [ 123145548845056 ]
Connected to DLmpd on 127.0.0.1 as 0:0
Connected to DLmpd on 127.0.0.1 as 0:20
Registered thread 'CoDiskCache Cache Clean-Up Thread' [ 123145549905920 ]
Bmd Audio : ERROR applying the interface: Bmd error for Cannot get audio input value.: (
[flameTimewarpML] [flameAppFramework] waking up
[flameTimewarpML] preferences loaded from /Users/randymcentee/Library/Preferences/flameTimewarpML/dxs-flame-01.local/flameTimewarpML.randy_3.uncomp_test.prefs
[flameTimewarpML] preferences loaded from /Users/randymcentee/Library/Preferences/flameTimewarpML/dxs-flame-01.local/flameTimewarpML.randy_3.prefs
[flameTimewarpML] preferences loaded from /Users/randymcentee/Library/Preferences/flameTimewarpML/dxs-flame-01.local/flameTimewarpML.prefs
[flameTimewarpML] script file: /opt/Autodesk/shared/python/flameTimewarpML-macos.v0.3.0/flameTimewarpML.py
PYTHON : flameTimewarpML initializing
[flameTimewarpML] bundle_id: 65df23be03066d189419b6164b3a0e12f96c2887 size 516Mb
[flameTimewarpML] creating new bundle folder: /Users/randymcentee/Documents/flameTimewarpML/bundle
[flameTimewarpML] unpacking payload: /Users/randymcentee/Documents/flameTimewarpML/bundle.tar
[flameTimewarpML] Executing command: tar xf /Users/randymcentee/Documents/flameTimewarpML/bundle.tar -C /Users/randymcentee/Documents/flameTimewarpML/
[flameTimewarpML] exit status 0
[flameTimewarpML] cleaning up /Users/randymcentee/Documents/flameTimewarpML/bundle.tar
[flameTimewarpML] bundle extracted to /Users/randymcentee/Documents/flameTimewarpML/bundle
[flameTimewarpML] extracting bundle took 3.31023001671 sec
[flameTimewarpML] installing Miniconda3…
[flameTimewarpML] installing into /Users/randymcentee/Documents/flameTimewarpML/miniconda3
[flameTimewarpML] Executing command: /bin/sh /Users/randymcentee/Documents/flameTimewarpML/bundle/miniconda.package/Miniconda3-latest-MacOSX-x86_64.sh -b -p /Users/randymcentee/Documents/flameTimewarpML/miniconda3 2>&1 | tee > /Users/randymcentee/Documents/flameTimewarpML/miniconda_install.log
^C^CTraceback (most recent call last):
  File "/opt/Autodesk/flame_2021.2/bin/startApplication", line 631, in <module>
    main()
  File "/opt/Autodesk/flame_2021.2/bin/startApplication", line 598, in main
    exitStatus = runApplication(homeDirectory, appname)
  File "/opt/Autodesk/flame_2021.2/bin/startApplication", line 314, in runApplication
    exitStatus = startApplication(programPath, vrefPath)
  File "/opt/Autodesk/flame_2021.2/bin/startApplication", line 327, in startApplication
    exitStatus = exec_cmd_redirect_output(execString, vrefPath)
  File "/opt/Autodesk/flame_2021.2/bin/startApplication", line 462, in exec_cmd_redirect_output
    exitStatus = child.wait()
  File "/opt/Autodesk/python/2021.2/lib/python2.7/subprocess.py", line 1099, in wait
    pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0)
  File "/opt/Autodesk/python/2021.2/lib/python2.7/subprocess.py", line 125, in _eintr_retry_call
    return func(*args)
KeyboardInterrupt
randymcentee@dxs-flame-01 ~ %
randymcentee@dxs-flame-01 ~ %
randymcentee@dxs-flame-01 ~ % cd to /Users/randymcentee/Documents/flameTimewarpML/bundle
cd: string not in pwd: to
randymcentee@dxs-flame-01 ~ % cd /Users/randymcentee/Documents/flameTimewarpML/bundle
randymcentee@dxs-flame-01 bundle % ls
benchmark inference_dpframes.py inference_sequence.py miniconda.package source_export.xml
bin inference_flame_tw.py inference_video.py model train.py
dataset.py inference_fluidmorph.py init_env model_cpu trained_models
flame_channel_parser inference_img.py locks requirements.txt
randymcentee@dxs-flame-01 bundle % /bin/bash --rcfile <(echo ‘. ~/.bashrc; eval “$(…/miniconda3/bin/conda shell.bash hook)”; conda activate’)
zsh: no such file or directory: …/miniconda3/bin/conda
zsh: command not found: “”
zsh: command not found: conda
bash: ‘.: command not found

The default interactive shell is now zsh.
To update your account to use zsh, please run chsh -s /bin/zsh.
For more details, please visit https://support.apple.com/kb/HT208050.
bash-3.2$

Somehow there are three dots before miniconda3 when it should be two; probably my bad, or some smart formatting on the forum. I was just copy/pasting from my shell: “$(…/miniconda3/bin/
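For reference, the intended line, with two dots and plain quotes, would be:

/bin/bash --rcfile <(echo '. ~/.bashrc; eval "$(../miniconda3/bin/conda shell.bash hook)"; conda activate')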

Is there a way for me to do a TeamViewer session or something on your machine so I can check it quickly? If so, just DM me; it might be faster to do it this way.


Thank you @talosh for teamviewering in. Looks like for some reason the Miniconda install is hanging, but a manual install works. So, if anybody out there experiences this, please post here. I’m on Big Sur, and 0.1 and 0.2 were working fine, but something about my machine isn’t happy. @miles, you are on Big Sur and got 0.3 working fine, right?

Hi guys, here is the new beta version of flameTimewarpML (0.4.0.beta.025); it would be great if you could test it and report any issues.

The main thing there is the new AI model, which usually gives better results, and problematic areas appear less soft (the old version is still available if needed). Model versions are in sync with the upstream RIFE-2020 models, so the previous model is v1.8.

Another new thing to check is the new install logic, which lets you choose the install location and adds an option to either keep or delete the files used during install, as well as to strip the bundle from the Python script itself to save space and make custom modifications easier. There is a tick box to skip installing miniconda3 and dependencies automatically (@randy’s case, where the miniconda shell installer gets stuck for a mysterious reason on Big Sur). That allows you to take care of these parts manually if needed.

Paths with spaces in them are not yet supported, sorry.

Mac - https://we.tl/t-iiiZ1L4dI7
Linux - https://we.tl/t-qicGyPlmfu


Going to install and try this version now, thanks.
To note, @randy et al., I had no issues installing the previous version on Big Sur 11.2.1 (Flame 2021.2.1).


Nice one @BenV. I was good with the 0.1 and 0.2 versions, but something broke for me in the 0.3 version.

I know many others are fine on Big Sur as well, so this may be something particular to my machine.


Am I going mad, or was there a demo of fluid morph in action somewhere here?
If someone could point me to one or the other I would be very thankful 🙏

There are two. Tim Farrell uploaded a BFX/batch based one in the Logik portal, and the ML TW tool has one as well.

Thanks, and sorry, I wasn’t clear about what I was searching for: there was a screen recording of @talosh’s fluid morph in use somewhere. That’s what I’m looking for.

Hi Chris, the recording is here:

And the actual ‘plugin’ can be downloaded from here:

Hope that helps!


Hey @talosh!
Chris just asked me to add your awesome tool into our facilities repository.
I’m not sure if it’s me or if there’s something wrong with your upload (v0.4.0 beta for OSX): I tried to download via GitHub and WeTransfer, and there is only one super large (1 GB) *.py file, which seems to be corrupt. Am I missing something?

I saw that you now offer the option to install into a custom location, great! One more question to this:
As far as I could see from a quick look at your code on GitHub, the user is prompted with a dialog for the install location; this path is saved in a preference file in the user folder and later used to check whether an install is present, right?
At our facility we have all scripts and plugins in a centralised location. If I wanted to add your script to it, I’d have to mess around with the preference file on each machine, right? What do you think about checking for an env var (let’s call it “BUNDLE_LOCATION”) first, before checking the preference file for the location? That way I could easily point to the repository as the bundle’s location to distribute it across all workstations.
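Roughly what I have in mind, as a minimal sketch (the variable and function names here are just made up):

import os

def get_bundle_location(prefs):
    # Check a hypothetical BUNDLE_LOCATION env var first, then fall back
    # to the path saved in the per-user preference file.
    return os.getenv('BUNDLE_LOCATION') or prefs.get('bundle_location')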

What do you think? Thanks in advance!
Cheers,
Claus


Hi Claus, good point; checking an env variable can easily be done for that.
Currently the huge .py file is the actual script plus an encoded tar archive; the archive gets unpacked and then stripped from the .py file, so it becomes just a script of about 120 KB.
You can also get one without the heavy bundle attached, directly from the GitHub sources.
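Just to illustrate the idea, a rough sketch of such a self-stripping script (the marker and encoding here are made up, not the actual flameTimewarpML format):

import base64

# Split this file at a payload marker, write the embedded archive out,
# then overwrite the file with the code part only (~120 KB).
with open(__file__) as f:
    code, sep, payload = f.read().partition('# __PAYLOAD__\n')
if sep:
    with open('bundle.tar', 'wb') as f:
        f.write(base64.b64decode(payload))
    with open(__file__, 'w') as f:
        f.write(code)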

You can point at any custom path during installation as long as it does not contain spaces.

Another option for now is to install it on one machine so you have the bundle and miniconda3 folders, and then use the non-bundled .py file and modify it by adding something like this at around line 144:

self.bundle_location = '/Volumes/my_custom_path'
self.prefs_global['bundle_location'] = self.bundle_location

Make sure the path you’re using does not contain spaces.
Let me know what you think.

Hi again Claus, another question: do you have a single location for Python files and a single location for the bundle install? And are those locations used by both Linux and Mac machines?

Hi Talosh,

We also have a central, single folder for all hooks. An easy out-of-the-box config for this would be much appreciated. Your work is great!!!

Alan

Sure, I’ll try to come up with something when I have time. The script in Flame and the bundle folder should be fine to share across platforms, but there should be separate miniconda3 env folders for Mac and for Linux.
At the moment it looks like there should be three settings at the top of the Flame .py script:
TWML_BUNDLE_LOCATION = ''
TWML_MINICONDA_LINUX = ''
TWML_MINICONDA_MAC = ''

If set, they would override prefs, and an env variable of the same name would override both prefs and the in-file settings. This way the bundle could just be copied from source as-is, and then miniconda and the dependencies could be installed manually in shared locations for both systems.
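As a sketch of that precedence (the helper name here is made up):

import os

def resolve_location(name, in_file_value, prefs):
    # An env variable of the same name overrides the in-file setting,
    # which in turn overrides the saved preference.
    return os.getenv(name) or in_file_value or prefs.get(name)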

@Alan, @claussteinmassl, do you guys think something like that would work for you?

Downloaded 4.0, installed without a hitch on Mac Mojave 10.14.6 with Flame 2020.3.1; running smoothly so far!


Hey Andriy,

thanks for your feedback!
Ok, I think the compressed bundle inside the .py file is the reason I thought the file was corrupt.
If I download the source code directly from GitHub: is there anything I need to compile, or is it just a simple copy and paste? We sync our repository across different offices every night, so keeping file changes to a minimum is required. Therefore I would prefer to have all files separate, so if a new version comes out I don’t have to transfer 1 GB again, just what’s necessary.

Our repository is the same for all operating systems. The structure (a bit simplified) is like this:

repository
    nuke_globals
    python_globals
        site_packages_ab
        site_packages_cd
        …
    flame_globals
        plugins
            linux
                plugin_a
                plugin_b
            darwin
                plugin_a
                plugin_b
        python
            inhouse
                script_a
                script_b
                …
            external
                script_a
                script_b
                …

In general we try to share common python stuff between Nuke and Flame, therefore the python_globals directory. But as this often means some extra work, I think it’s also fine if everything is just in one place next to each other.
I think having those three env variables you suggested would be perfect!

Thank you very much!


Hi Claus, I’ve updated GitHub with the code that supports env variables and put together some step-by-step instructions for centralized installation.
Make sure to take the sources directly from the GitHub main branch, as no release has been made yet. Could you please check and let me know if it works for you?


To explain better, the whole thing is made of three parts:

  1. A Python 3.8 environment with some dependency libraries installed (most notably PyTorch and NumPy). That’s where the 1 GB comes from (mostly from PyTorch on CUDA).

  2. A set of command-line tools that are run in that Python 3.8 environment with the dependency libs to digest input EXR image sequences and produce the result. It contains the pre-trained AI model files as well. This set can also be used independently, outside of Flame, if needed.

  3. The flameTimewarpML.py script that runs inside Flame’s Python 2.7 environment and takes care of getting images out of Flame, initializing the Python 3.8 environment, running the command-line tool within that environment and, once finished, getting the result back into Flame. There is some logic for saving settings as preferences and unpacking the bundle if needed, and there is logic that can detect whether the bundle is attached to the actual script and get rid of it by overwriting itself with the code only, dropping the bundle.

So with this recent update one can just point the Flame script to the environment and the set of command-line tools, either with env variables or by editing the top of the file directly. In that case no bundle file has to be attached and no version check is performed, allowing manual setup and tweaks if needed.
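As a rough sketch of that hand-off on the Flame side (the paths and the exact invocation here are illustrative, not the actual code; arguments to the tool are omitted):

import subprocess

# Activate the bundled Miniconda env and run one of the CLI tools
# from Flame's Python 2.7 side.
miniconda = '/Users/me/Documents/flameTimewarpML/miniconda3'
bundle = '/Users/me/Documents/flameTimewarpML/bundle'
cmd = ('eval "$(' + miniconda + '/bin/conda shell.bash hook)" && '
       'conda activate && python ' + bundle + '/inference_flame_tw.py')
subprocess.call(cmd, shell=True, executable='/bin/bash')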


Used 4.0 on my job this week with pretty amazing results, thanks!

I have run into an issue though: anything over about 4 seconds fails towards the end.
I just did a little more testing and it failed at 4 sec 9 frames (23.98 fps), but so far it’s been good up to 4 sec on the dot.

Here’s one that failed:

Initializing TimewarpML from Flame setup…
Trained model loaded: /Users/benvaccaro/Documents/flameTimewarpML/bundle/trained_models/default/v2.2.model

Free RAM: 211.6 Gb available
Image size: 2048 x 1080
Peak memory usage estimation: 4.4 Gb per CPU thread
Using 14 CPU worker threads (of 16 available)
rendering 160 frames to /Users/benvaccaro/Movies/Work/ML_tw/V1-0035_2K_4sec9frm_TWML_2021FEB19_1139_0E7
65%|████████████████████████▋ | 104/160 [10:41<04:25, 4.74s/frame]Exception ignored in thread started by: <function clear_write_buffer at 0x7faa7032e9d0>
Traceback (most recent call last):
  File "/Users/benvaccaro/Documents/flameTimewarpML/bundle/inference_flame_tw.py", line 73, in clear_write_buffer
    p.start()
  File "/Users/benvaccaro/Documents/flameTimewarpML/miniconda3/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/Users/benvaccaro/Documents/flameTimewarpML/miniconda3/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/Users/benvaccaro/Documents/flameTimewarpML/miniconda3/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/Users/benvaccaro/Documents/flameTimewarpML/miniconda3/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/Users/benvaccaro/Documents/flameTimewarpML/miniconda3/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/Users/benvaccaro/Documents/flameTimewarpML/miniconda3/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 58, in _launch
    self.pid = util.spawnv_passfds(spawn.get_executable(),
  File "/Users/benvaccaro/Documents/flameTimewarpML/miniconda3/lib/python3.8/multiprocessing/util.py", line 450, in spawnv_passfds
    errpipe_read, errpipe_write = os.pipe()
OSError: [Errno 24] Too many open files
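For what it’s worth, [Errno 24] usually means the process ran out of file descriptors (each spawned worker costs a few pipes, and the macOS default soft limit is only 256). A possible workaround sketch, untested here, would be to raise the limit before the workers spawn:

import resource

# Raise the soft open-file limit towards the hard limit (capped at 4096).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (min(4096, hard), hard))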