Everyone should subscribe to this guy’s channel if you’re interested in VFX AI inference.

This is his latest video. He’s a Nuke guy based in Spain.

12 Likes

Alex is awesome. I worked with him at Framestore for a few years. Really talented artist. Have been loving his channel too.

The SAM models are very interesting.

I’ve started getting up to speed with ONNX conversions and used the sam-vit model as my test case. Some progress, more work to do.

One takeaway is that the current Inference node is a major step forward, but as pointed out in another thread, it may need more flexible input/output/multi-channel handling.

The default sam-vit model comes with a mask input, which errors out the Inference node. Trying to see if there’s a model tweak to get around it for now.
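If anyone else hits the same wall, this is the kind of workaround I’ve been trying: wrap the model so the mask input disappears from the exported graph. A minimal sketch only; `FakeSam` is a stand-in for whatever checkpoint you’re converting, and feeding an all-zero mask assumes the model tolerates an empty mask:

```python
import torch

class FakeSam(torch.nn.Module):  # stand-in for the real sam-vit checkpoint
    def forward(self, image, mask):
        # Fake matte output just so the sketch runs end to end.
        return (image.mean(dim=1, keepdim=True) + mask).clamp(0, 1)

class NoMaskWrapper(torch.nn.Module):
    """Hide the mask input so the exported graph is RGB-in / matte-out."""
    def __init__(self, sam):
        super().__init__()
        self.sam = sam

    def forward(self, image):
        blank = torch.zeros_like(image[:, :1])  # all-zero stand-in mask
        return self.sam(image, blank)

wrapped = NoMaskWrapper(FakeSam()).eval()  # substitute the real model here
dummy = torch.randn(1, 3, 1024, 1024)
torch.onnx.export(wrapped, dummy, "sam_vit_no_mask.onnx",
                  input_names=["rgb"], output_names=["matte"],
                  opset_version=17)
```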

2 Likes

Same in my case… I’m doing some PyTorch conversions for a few models. I’ve already converted the Depth Anything models with high-res input, so the output is great! Some image segmentation models are also pending release…
I’ve noticed that converting these models at high resolution takes a lot of RAM (128 GB+), but it works great, so now I understand why so many models are compiled at low res. It’s a bit of a pain to get these conversions to work, but I think it’s a great opportunity to learn and get some stuff out to the community.
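For anyone curious what the RAM-hungry step looks like, it’s roughly this export pattern. A hedged sketch: `DummyNet` stands in for the real Depth Anything checkpoint, and the resolution is just an example:

```python
import torch

class DummyNet(torch.nn.Module):  # stand-in for the real Depth Anything model
    def forward(self, x):
        return x.mean(dim=1, keepdim=True)  # fake single-channel "depth"

model = DummyNet().eval()  # load the actual checkpoint here instead
dummy = torch.randn(1, 3, 1554, 2786)  # example high-res input size

with torch.no_grad():
    torch.onnx.export(
        model, dummy, "depth_anything_highres.onnx",
        input_names=["rgb"], output_names=["depth"], opset_version=17,
        # No dynamic_axes: the resolution is baked into the graph, so each
        # target size needs its own export pass (and the RAM that goes with it).
    )
```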

Will release the Depth Anything v2 models for Flame tomorrow.

I have tested a lot of ONNX models inside of Flame, but many, many of them are multichannel models, so we really need ADSK to implement multichannel ONNX inference to make them work. So please upvote Tiago’s request in order to really take advantage of this new feature in Flame.

7 Likes

For those interested in this who haven’t already done it, you can add your vote to this Improvement request: FI-03327

3 Likes

@allklier, try generating the ONNX model with one RGB & one matte input (and one matte output) to allow the Inference node to load it. The ONNX model available on the vit-matte GitHub has a single input with a dynamic batch size, so the Inference node is currently not able to interpret it properly.
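In export terms, that suggestion might look roughly like the sketch below. `FakeMatteNet` is a stand-in, not the actual vit-matte code; the key points are the fixed batch of 1 and the named single-channel matte input and output:

```python
import torch

class FakeMatteNet(torch.nn.Module):  # stand-in for the real vit-matte weights
    def forward(self, rgb, matte_in):
        # Fake refinement: modulate a luma estimate by the incoming matte.
        return (rgb.mean(dim=1, keepdim=True) * matte_in).clamp(0, 1)

model = FakeMatteNet().eval()
rgb = torch.randn(1, 3, 1024, 1024)       # batch fixed at 1
matte_in = torch.randn(1, 1, 1024, 1024)  # single-channel matte input

torch.onnx.export(
    model, (rgb, matte_in), "vit_matte_flame.onnx",
    input_names=["rgb", "matte_in"], output_names=["matte_out"],
    opset_version=17,
    # Deliberately no dynamic_axes: a dynamic batch dimension is exactly
    # what the Inference node currently cannot interpret.
)
```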

That being said, we will do our best to make the Inference node channel handling more flexible in a future release.

8 Likes

Man… just downloaded the cattery after watching this and it’s shockingly good. And despite the disclaimers, I’m finding it very temporally consistent on hair detail on all the shots I’ve tried it on so far. Exciting stuff!!

1 Like

@randy, do you hear this? People want to compile and share high-quality ONNX models, and they need a lot of RAM.

Are your spidey senses tingling like mine are?

Is it just CPU and RAM you need for the compilation? Man, I have this spare Mac Pro 2019 sitting around that one could throw many TB of RAM into, if that would make things amazing?!

We found this one, but it errors when trying to build the JSON file.

Haven’t gone too deep into this yet, so maybe it’s a quick fix?
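One hedged debugging suggestion: before blaming the JSON step, dump what the ONNX graph actually declares, since dynamic or unnamed dimensions are a common reason downstream tooling chokes. This assumes the Python `onnx` package is installed:

```python
import onnx

model = onnx.load("model.onnx")  # substitute the model that errors
for kind, tensors in (("input", model.graph.input),
                      ("output", model.graph.output)):
    for t in tensors:
        dims = [d.dim_param or d.dim_value
                for d in t.type.tensor_type.shape.dim]
        print(kind, t.name, dims)  # string entries mean dynamic dims
```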

Ooh… ummm… maybe? Whatcha need? Remote access to a box with tons of RAM? That’s it? Mac? Linux?

Not me… others. But apparently we need RAM to do high-res models.

Cristhian and I ended up engaging a bare-metal VPS through Vultr in their South Africa datacenter that had 768 GB of RAM to convert Depth Anything with a very high Input Size setting. We generated two versions: one for people with 24 GB GPUs and another for 48 GB GPUs. He will be posting them to the Logik Portal.

10 Likes

Yes! But I would like someone with a 48 GB GPU to test the Depth Anything models we made yesterday before posting them on the Logik Portal. Need to make sure they work on a 48 GB GPU and that they work well. Alan has not been able to run 2025.1 due to licensing issues, so if there is someone with a 48 GB GPU on 2025.1, please reach out to test it.

1 Like

@cristhiancordoba - sorry brother, I’m in the cheap seats here - only 8 GB in this device, and the even older M6000 has only 24 GB.

I have tons of RAM, but that’s only useful for training.

I think @randy might have a gpuber?

Cool stuff, Alan.
The bigger input size model is super helpful.

I will be able to test this tomorrow after my upgrade. Running an A6000… If you send me the models to test, I can work on it right after I upgrade…

So, regarding macOS, can someone give me an ELI5?

From what I understand, it’s running on the Mac’s CPU?

Because, I mean… we have 192 GB of shared RAM in the Mac Studio; that’s a HELL of a lot more VRAM than any RTX 6000 Ada… what’s missing so we can actually use that power?

INSANE
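One way to answer that empirically, outside Flame itself: ask onnxruntime which execution providers its build exposes. If only CPUExecutionProvider shows up, unified memory isn’t being used as VRAM at all. Whether Flame’s Inference node routes through these providers on macOS is an assumption here, not something confirmed:

```python
import onnxruntime as ort

available = ort.get_available_providers()
print(available)  # hope for 'CoreMLExecutionProvider' on a Mac build

# Prefer CoreML when the build offers it, otherwise fall back to CPU.
wanted = [p for p in ("CoreMLExecutionProvider", "CPUExecutionProvider")
          if p in available]
session = ort.InferenceSession("model.onnx", providers=wanted)  # any converted model
print(session.get_providers())  # the providers the session actually loaded
```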