ONNX models (Inference)


Anyone who has tried the new Inference node (Flame 2024.1):
– any ONNX models to recommend?
– where do you go to check out available models?

This is all new to me.

2 Likes

@bryanb - here is the homepage.

It’s quite a deep rabbit hole and not everything is applicable to flame.

3 Likes

I’m also headed down that same rabbit hole (but I have no clue). @tpo posted a General Improvement a few days ago with a request for the Inference Node to handle RGB inputs (fl-03327), which also has some interesting links: one to a Frame.io presentation that’s just bonkers, and another to one of the ONNX links from huggingface.co. Which gets an error in Flame…

Here’s the link to Hugging Face, which seems to be the place to find this stuff… Maybe there are more sites I’ll find once I really dig into this one.

I’ve tried a few ONNX files and get the same error, but, again, I’m just scratching the surface: downloading any .onnx file I can find and trying to load it in the node, usually resulting in an error message.

I love the idea of generating mattes the way these examples show, but honestly need a lot more time reading up on this stuff to figure out how to leverage them inside that node.

I watched @Jeff’s FLC video on the node, and as usual, it’s informative and very lucid, but I’m left with more questions than answers…

3 Likes

It’s all new to me too, but I have found this site and am trying my hand at it.

I am also in the process of finding a reasonable upscaler

3 Likes

I also struggle to get any ONNX models to load; the node just errors out,

and in the terminal log: ErrorMsg: Failed to load model because protobuf parsing failed

Update: make sure to install Git LFS before you clone from Hugging Face, or else the .onnx files will be empty…

Hugging Face uses Git LFS for large files, so you need to make sure you download the actual model, not just the pointer file.

Solution:

  1. Install Git LFS (if not installed)
  • On macOS:
brew install git-lfs
  • On Linux:
sudo apt install git-lfs  # Debian/Ubuntu  
sudo dnf install git-lfs  # Fedora  
  2. Enable Git LFS (only needed once)
git lfs install
  3. Clone the Hugging Face model repo with LFS files
git clone https://huggingface.co/your-model-repo
  4. Manually pull the ONNX model if you already cloned but only got the pointer file:
git lfs pull
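That "protobuf parsing failed" error usually means the node was handed an un-pulled LFS pointer instead of real weights. As a quick sketch (in Python, with a placeholder file path), you can tell the two apart because a pointer is a tiny text file beginning with the LFS spec line, while real ONNX weights are binary protobuf:

```python
def is_lfs_pointer(path):
    """Return True if the file is a Git LFS pointer, not real model data."""
    with open(path, "rb") as f:
        head = f.read(64)
    # LFS pointer files start with this spec line; ONNX weights are binary.
    return head.startswith(b"version https://git-lfs.github.com/spec/")

# Usage: if is_lfs_pointer("model.onnx") is True, run `git lfs pull`
# in the repo to fetch the actual weights before loading them in Flame.
```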

But yeah… not having any luck; this isn’t fun.

All the models I have downloaded from the Cattery worked out of the box in Nuke.

I’ve just got onto 2025 and I’m not having much luck either. Feels like another doomed new feature.

Just to test, grab some of the files off the LogikPortal. DepthAnythingV2 is a good test case, as it’s a pretty stable model.

There are still a lot of variables around ONNX models (as opposed to tools that use models internally), as you need to feed the model the image in the exact color space, channel order, etc. that it was trained on, or, even if it runs, it will output gibberish on the other end. Also, these models are extremely memory hungry and have not been optimized for your average machine like the models Boris or ADSK embed.
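To make the "exact color space, channel order" point concrete, here is a minimal preprocessing sketch in Python/NumPy. Everything specific in it is an assumption for illustration: the 518×518 input size, the ImageNet-style mean/std, and NCHW layout are common for vision models, but each model documents its own requirements.

```python
import numpy as np

def preprocess(frame_rgb, size=(518, 518),
               mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
    """Turn an HxWx3 RGB frame (float, 0..1) into a 1x3xHxW float32 tensor.

    size/mean/std are illustrative ImageNet-style values, NOT what any
    particular model requires; a mismatch here is exactly what turns a
    working model into gibberish output.
    """
    h, w = size
    # Crude nearest-neighbour resize, just for the sketch.
    ys = np.linspace(0, frame_rgb.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, frame_rgb.shape[1] - 1, w).astype(int)
    img = frame_rgb[ys][:, xs]
    img = (img - np.array(mean)) / np.array(std)   # per-channel normalize
    # HWC -> CHW, then add the batch dimension the model expects.
    return img.transpose(2, 0, 1)[None].astype(np.float32)
```

With onnxruntime the resulting tensor would be fed to the session keyed by the input name; channel order (RGB vs BGR) and value range are the usual culprits when a model runs but produces garbage.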

The same is true for the Cattery models btw. I’ve run through all the available ones the other day, and half of them are hard to use or do not produce usable results for some of the same reasons.

While the thought of loading late-breaking models off Hugging Face and just plugging them into Batch is intriguing, I think the reality will be that folks with applicable experience need to test and massage them, then publish via LogikPortal for daily consumption.

The good thing is that we do have the platform, so this can eventually become an extension of Flame the way Matchbox shaders did. Which also required a few folks to roll up their sleeves and do the dishes first.