Magic mask in inference ONNX

hey all. Has anyone seen an ONNX model out there that will do what Resolve's Magic Mask does? I have no idea where to look for ONNX models.

I think in this case the closest thing to Magic Mask is the BiRefNet model posted by @allklier. Even though it is not as accurate as Magic Mask, it is very good at following the shape and silhouette.


Great, thanks. Where is this Logik Portal everyone mentions?

logik-portal

logik-portal forum topic


I still have to upload that to the portal now that the legal disclaimers have been addressed. In the meantime, the Dropbox link in my original post works.

https://www.dropbox.com/scl/fo/xa2xqls5a7ivflr6kouk5/ANCa2h3Muf2QRxwuGGkcJ8g?rlkey=n9ad26flf5l7x0tjyhf3lvgeb&st=m1ud62pc&dl=0


Thanks. Quick question, as I know nothing about Python: do these go into the inference models folder?

No Python required for using the models. You can save them anywhere, though I think that folder opens by default. When you add the node it prompts you for the file location.


Hi, sorry to bother you, but I can't use your models. I'm getting this message:

*ErrorMsg: Non-zero status code returned while running MatMul node. Name:'/bb/layers.0/blocks.0/mlp/fc1/MatMul' Status Message: /home/sansrem/Downloads/onnx/onnxruntime-1.18.0/onnxruntime/core/framework/bfc_arena.cc:376 void onnxruntime::BFCArena::AllocateRawInternal(size_t, bool, onnxruntime::Stream, bool, onnxruntime::WaitNotificationFn) Failed to allocate memory for requested buffer of size 1811939328

I'm totally new to this stuff.
Any ideas?
Thanks for any reply. :wink:
P

Hi @piotr,

It looks like your system doesn't have enough VRAM. Are you on Linux or Mac? Which of the models did you try? And if on Linux, what GPU do you have?
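For what it's worth, the buffer size in errors like that is reported in bytes, so you can work out how big the single failed allocation was. A quick Python sketch using the number from the message quoted above:

```python
# Convert the failed ONNX Runtime allocation size (bytes) into GiB.
# The number comes from the BFCArena error message quoted above.
requested_bytes = 1811939328

gib = requested_bytes / (1024 ** 3)
print(f"Failed allocation: {gib:.2f} GiB for a single MatMul buffer")
```

That's roughly 1.7 GiB for one intermediate buffer; with everything else the model already holds resident, the arena simply runs out of free VRAM.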

Linux, GPU: RTX A5000, any of the BiRefNet models.

Copy.

Even the one with _1024?

I was discussing this with @cristhiancordoba yesterday. When I did these models originally I used my Mac Studio (my Linux system is stuck on 2025 for a project for a bit longer, and my other system only has an old GPU in it). On the Mac with its unified memory they do work. But he mentioned that the 2K and 3K models were too big for 24 GB of VRAM. The 1K model worked for him, though.
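To give a sense of why the bigger models blow past 24 GB: activation memory in these networks scales roughly with the number of input pixels, so going from 1K to 2K input roughly quadruples it. A purely illustrative Python sketch (the base figure is a made-up assumption, not a measurement):

```python
# Illustrative only: activation memory for vision models grows roughly
# with the number of input pixels, so doubling resolution ~quadruples it.
# base_mem_gb is a hypothetical figure, not a measured one.
base_res = 1024      # the "1K" model input size
base_mem_gb = 6.0    # assumed working set for the 1K model (hypothetical)

for res in (1024, 2048, 3072):
    scale = (res / base_res) ** 2
    print(f"{res}px model: ~{base_mem_gb * scale:.0f} GB working set (rough estimate)")
```

Under that rough scaling, a 2K model needs about 4x and a 3K model about 9x the 1K working set, which is consistent with the 2K/3K versions not fitting in 24 GB while the 1K one does.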

This is a work in progress. We discovered some other issues with this model, where some pre- and post-processing is required, and are testing two matchboxes in collaboration with the ADSK devs to handle this (the json gain is not sufficient). There also seems to be a channel misalignment that needs further investigation. I finally found some time yesterday to get back to this and look into these things.
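For anyone curious what kind of pre- and post-processing is typically involved: BiRefNet-style models generally expect input normalized with the ImageNet mean/std, and their raw output needs a sigmoid to become a usable 0-1 matte. A rough NumPy sketch of the idea (the constants and tensor layout are assumptions about this class of models, not what the matchboxes in development actually do):

```python
import numpy as np

# ImageNet mean/std commonly used by BiRefNet-style models. Assumption:
# the exact values and layout your ONNX export expects may differ.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(rgb):
    """rgb: HxWx3 float32 in [0,1] -> 1x3xHxW normalized NCHW tensor."""
    x = (rgb - MEAN) / STD
    return np.transpose(x, (2, 0, 1))[None, ...]

def postprocess(logits):
    """Map raw model output back to a [0,1] matte via a sigmoid."""
    return 1.0 / (1.0 + np.exp(-logits))

frame = np.random.rand(64, 64, 3).astype(np.float32)
tensor = preprocess(frame)
print(tensor.shape)  # (1, 3, 64, 64)
```

Without a gain/normalization step like this on the way in, and a sigmoid-style squash on the way out, the matte looks wrong even when the model itself runs fine, which matches what we're seeing in Flame.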

With an A5000 on Linux, I would try the 1K versions. They're not as detailed in Flame as what you see elsewhere, but for a loose matte with some post-processing they might still work.

The original author of these models has now posted his own ONNX versions at 1K resolution, but the results are similar to what we see with my conversion. The problem isn't so much with the file as with the fact that the Inference node is very new and we're discovering new complexities with all the different models out there.

I would expect it to take a few iterations of Flame for the dust to settle and everything to work seamlessly.

Unfortunately no shortcuts I’m afraid.

The more people willing to test and experiment though, the better.


Sign me up for testing :wink:
