Here is an excerpt from a podcast of the creator talking about it: https://youtu.be/kSkn8fPM7o4?si=opoKgEvEkvW_hYHr&t=1603
I was testing this yesterday and it seems pretty solid. I’m going to see if I can make some ComfyUI Nodes unless @cnoellert gets there first. A python script to trigger the commands doesn’t make a lot of sense, but something like ComfyUI nodes with some level of interaction seems like it would be useful.
Sometimes I can’t even believe that I’m part of a group/forum with all these amazing guys who share their work and have mind-blowing skills.
Thanks guys for what you do.
Amazing! I’ll stand down, but let Cursor finish its current task.
I didn’t say it worked yet lol…
Edit… looks like we’re cooking with gas on MPS. Haven’t tested CUDA yet though.
Release the hounds…
Amazing! Downloading now.
Got it working, but had to add this:
# Then add CorridorKey:
pip install "corridorkey @ git+https://github.com/nikopueringer/CorridorKey.git"
after
pip install -r requirements.txt
Awesome stuff, thanks Chris!
Fixing now. Added to the requirements.txt so it should be picked up from now on. Thanks for doing a dry run @john-geehreng
Amazing you guys got it up and running so quickly.
What’s the opinion on the results?
I’m running a longer test now on denoised footage. The source is 5k but I’m doing the work at 3k… 115 frames. We’ll see how it goes, but there’s something to be said for MPS and unified memory. It may go slow but it goes… Also a perfect opportunity to use the revamped Wiretap nodes.
Edit: Does what it says on the tin. When I pushed the resolution a little I found there was an instance where the de-spill went bananas and caught the whole frame but it was only a single frame affected. It’s hard to know if it was an isolated instance or what but the rest of the clip was good. Really good.
I’d like to understand how this unified-memory technology is different from just running in CPU mode and processing everything through RAM. I just built a system with 288GB ECC RAM (really cheap, don’t judge me - I know about the RAM-price issue) and wish I could run longer and bigger setups, but I guess it’ll take ages.
Could you share your specs and how much time it took for the 3k footage? How many frames?
5k source, resized in the Wiretap Comfy Node to 2048x1716, OCIO to Rec. 709, 115 frames in half precision, OCIO back to ACEScg, took 20 mins round-tripping back into Flame. That’s on an M4 Max MacBook Pro with 128GB of RAM.
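For context, those figures work out to a rough per-frame cost. A quick back-of-the-envelope check (pure arithmetic from the numbers quoted above, nothing else assumed):

```python
# Throughput estimate from the run described above:
# 115 frames processed in roughly 20 minutes on an M4 Max (128GB unified memory).
frames = 115
minutes = 20

seconds_per_frame = minutes * 60 / frames
frames_per_minute = frames / minutes

print(f"{seconds_per_frame:.1f} s/frame")    # roughly 10.4 s per frame
print(f"{frames_per_minute:.2f} frames/min")
```

Slow compared to a big discrete GPU, maybe, but as noted above: it may go slow, but it goes.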
Amazing work @cnoellert, many thanks for this. Well you and Claude!
I was having a look at the original github readme and the author mentions “MLX Setup for Apple Silicon”. Any chance you gave this a go as well? First time I’ve run across it.
I noticed that… let me look into it. Maybe there’s some there there.
I wonder if they’re going to train the model again to generate ambient occlusion, albedo, normal maps, motion vectors, object IDs, etc. The same workflow could be used to generate all the passes from the same training source…
I watched this the other day! Pretty amazing!
Does copy cat do just as good?