This is worth checking out for awesome roto that is miles better than RunwayML and Resolve, IMHO
Looks very cool!
Hmmm, Sammie-Roto says it needs Python 3.10; my Rocky 9.5 has 3.9, so I need to figure that out.
/opt/Autodesk/python/$VERSION/bin/python
should be 3.11.5
It has its own python embedded.
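Since Flame ships its own interpreter, a quick way to check whether whichever Python you're pointing at meets Sammie-Roto's stated 3.10 requirement is a small version gate. A minimal sketch — the version numbers come from this thread, and the function name is mine:

```python
import sys

# Sammie-Roto states it needs Python 3.10+; Rocky 9.5 ships 3.9, while
# Flame's bundled interpreter (/opt/Autodesk/python/$VERSION/bin/python)
# should be 3.11.5 per the thread.
REQUIRED = (3, 10)

def meets_requirement(version=None, required=REQUIRED):
    """Return True if the (major, minor, ...) tuple satisfies `required`."""
    if version is None:
        version = sys.version_info
    return tuple(version[:2]) >= required

if __name__ == "__main__":
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
          f"{'OK' if meets_requirement() else 'too old for Sammie-Roto'}")
```

Run it with each candidate interpreter to see which one qualifies.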
@joelosis thx for sharing! looks really promising…
It’s been solid for me; I’ve been really impressed with the mattes at higher res.
@joelosis What is the highest res you’ve worked on with this tool?
Arri open gate, using the source and hi-res options; my Threadripper desktop with 128 GB of RAM and a GTX Titan with 24 GB would almost max out, though. In a few tests using open gate as the source, dropping the setting from highest to the HD matte output was acceptable 90% of the time. The biggest help I have found is feeding it source footage with minimal compression steps; that’s what gave me great mattes.
I just tried it on a ~4.5K plate, and it ran out of GPU RAM at the matting stage when set to Full on an A6000 Ada.
Anything over 6 seconds and it will eat up whatever hardware is there; again, the HD processing res has been more than acceptable for me when using 4K source, etc.
Also, on the Windows side it would max out my 24 GB of VRAM but use around 80 GB of my system RAM when using source.
Yeah, thanks. That’s the issue with these tools, they are memory hogs… I’ve been using this tool for a while through Pinokio, and I was always getting OoM when using Full HD and above. I thought that maybe there was some memory optimization or quantization in this Sammie-Roto tool. That’s why I asked.
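A back-of-the-envelope way to see why HD fits where 4K OOMs: activation memory in these models grows roughly with pixel count, so halving each dimension cuts the footprint about 4x. This is purely an illustration — the layer count and dtype size below are invented, not Sammie-Roto's actual allocator:

```python
# Rough, illustrative activation-memory estimate. The constants are made up
# for the example; the point is the quadratic scaling with resolution.
def approx_activation_mb(width, height, channels=3, dtype_bytes=2, layers=40):
    """Very rough per-frame activation footprint in MB."""
    return width * height * channels * dtype_bytes * layers / 1e6

uhd = approx_activation_mb(3840, 2160)
hd = approx_activation_mb(1920, 1080)
print(f"UHD ~{uhd:.0f} MB vs HD ~{hd:.0f} MB per frame ({uhd / hd:.0f}x)")
```

Whatever the real constants are, the 4x ratio between UHD and HD holds, which matches the reports above of Full/source settings exhausting VRAM while HD stays workable.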
Thanks to both Alan & Joel.
One thing I found really useful was doing this at lower rez, and then upscaling it with a TensorRT upscaler (really, really fast) using the remacri4x upscale model (all of this in ComfyUI).
My process has been: anything that is less than 4 or so seconds I will usually take my chances at full res, but if it’s a longer shot I’ll switch it to HD.
Very promising!
Curious to try downscaling a 4K plate to HD, generating the matte and then using the new ML Resize node to apply the matte back to the 4K plate.
I’ll give it a shot one day
still looking for a better app than Resolve for ML roto
Thanks for sharing your insights
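The "matte at HD, apply at 4K" idea above can be sketched as: generate the matte on a downscaled plate, then resize only the single-channel matte back up. Nearest-neighbour is shown here just to keep the sketch dependency-free; in practice Flame's ML Resize node or a TensorRT upscaler like remacri4x would do a far better job on the matte edges:

```python
# Upscale a 2-D matte (list of rows of floats in 0..1) by an integer factor.
# Nearest-neighbour only, for illustration; a learned upscaler would be
# used on the matte in a real pipeline.
def upscale_matte(matte, factor):
    out = []
    for row in matte:
        # Repeat each pixel horizontally, then repeat the row vertically.
        wide = [v for v in row for _ in range(factor)]
        out.extend([list(wide) for _ in range(factor)])
    return out

hd_matte = [[0.0, 1.0],
            [1.0, 0.0]]
uhd_matte = upscale_matte(hd_matte, 2)  # 2x2 matte -> 4x4
```

The RGB plate is never resized; the full-res original is keyed with the upscaled matte, which is why the round trip costs so little quality.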
Wow, cool stuff. I just added mps support (Apple silicon) for MatAnyone in ComfyUI. It’s fast, and the resulting matte is pretty good.
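The backend-selection order such a patch typically uses can be sketched as plain logic. The two flags map to PyTorch's real `torch.cuda.is_available()` and `torch.backends.mps.is_available()` checks; the function name here is mine:

```python
# Prefer CUDA, then Apple-silicon Metal (mps), then CPU as the fallback.
def pick_device(cuda_available, mps_available):
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# On an Apple-silicon Mac: no CUDA, mps present.
print(pick_device(False, True))  # -> mps
```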
This looks very promising. Reading through the description, its temporal consistency in particular is a major step forward and a key feature of this model.
Not sure what’s involved in converting this to ONNX and if it’s possible, but worth looking at.
Also, it’s worth highlighting: if any of you have checked the GitHub pages or Hugging Face model cards for any of these matting tools, the bylines of all this work are almost exclusively Asian researchers. And I’m saying that with respect for the work Chinese and other Asian universities have done to move this field forward.
The main author of the MatAnyone model credited on GitHub is from a Hong Kong university. My son is currently finishing a PhD in ML in Hong Kong but in a different field, so I do have some insight into this universe.
It’s very sad to see how science and foreign researchers are treated by the current politics. Many of these researchers are impacted in direct and indirect ways.
So as we all are googly-eyed over this progress and eager to deploy these advances in our pipelines, we shouldn’t lose sight that we are taking freely, but no longer giving freely, creating a problematic imbalance.
Chinese dominance in AI started long before Jan. 20.
That the US is not contributing to this field as much is one issue. How we have treated these researchers while freely celebrating and using their work has changed since Jan 20th.
Many of these have done part of their studies at US universities. They have collaborated in projects that had US funding. Conferences where they can publish their papers have either disappeared or are no longer accessible. Some had to abandon studies in progress and start over.
If that doesn’t matter or doesn’t fit your world view, no problem. Just stop using these models, until US based born researchers have created ones of equal capability. Until then roto it should be.
Or, tongue in cheek, put tariffs on foreign ML models in the hope that domestic researchers crack the nut faster…
In seriousness, this type of progress, based on research, works best in a collaborative environment with appropriate guardrails, not in a protectionist mindset.
So is this all running local?
For me it’s not easy to answer; please allow me to give my view:
Living as a so-called expat in Shanghai since 2004, I have visited many of the top universities all around China, making many friends in different science labs over the last two decades on my journey to work for JoVE. I understand the ‘recent’ concern about that, but in reality I deal with very friendly, smart, intelligent engineers, researchers, and students. Collaboration is almost always truly international, with participants from all over the world. I personally dislike political fence policies in this regard. What happens if Chinese researchers do not get access to the newest NVIDIA hardware? Huawei fills the gap with new advancements in AI hardware. And there are many more unknown brands developing in and around Shenzhen.
Back to the topic…
Yes, if it is working locally, you have control. If you send data to an online subscription service, you not only leak the data but probably lose clients as well. I recommend: local (LAN node) only.