Good question, could be worth a try.
Someone made a GUI app for this CorridorKey: GitHub - edenaion/EZ-CorridorKey: Perfect Green Screen Keys made EZ!
Just tested it on a 100-frame 4K clip; VRAM usage was just 7 GB and it took less than 10 minutes.
So based on this:
I've got it going as a Pybox on Mac as a full MLX port with an option for 8-bit quantization… Next step is to reverse in CUDA support with what I have working. Hoping to have it ready tomorrow.
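For anyone wondering what the 8-bit quantization actually does: MLX ships its own quantization utilities, but the core idea is just storing weights as int8 plus a per-tensor scale. A rough numpy sketch of the concept (function names are mine, not from the actual port):

```python
import numpy as np

def quantize_8bit(w: np.ndarray):
    """Symmetric per-tensor 8-bit quantization: int8 weights plus one float scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights; max error is about half a quant step."""
    return q.astype(np.float32) * scale
```

Roughly a 4x memory saving over float32 weights, at the cost of a small rounding error per weight.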
@cnoellert - dope - you are a monster brother - an expert in the use of this software and a monster in the manipulation of the surrounding software.
How are you getting on with changing the font of subtitles? That seems useful for some kinds of work.
If only PyBox was better…
FU_pybox ftw
Amazing! You're on a tear my man. Will it work in linear as well? I vaguely recall something about the model working in that space but maybe I'm confused. Happens a lot, haha.
Agreed that itād be nice if the performance of PyBox was better. The speed of the BLG renderer is a great example of how slow and painful it is.
You keep mentioning your new suite of tools but are you making them available?
The model expects sRGB input, not linear, and the pipeline converts linear → sRGB before feeding the model if you click the sRGB button. Processing-wise I despill, do the optional conversion to sRGB, scale to model res, run inference, scale the resultant alpha back to source res, then combine with the original-res despilled plate.
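For reference, that linear → sRGB step is just the standard transfer function from IEC 61966-2-1. A minimal numpy version (my own sketch, not pulled from the Pybox):

```python
import numpy as np

def linear_to_srgb(x: np.ndarray) -> np.ndarray:
    """Standard linear -> sRGB encoding (IEC 61966-2-1)."""
    x = np.clip(x, 0.0, 1.0)
    # Linear segment near black, gamma 1/2.4 segment above the breakpoint
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)
```

In practice you'd run this per-channel on the RGB plate before resizing for the model.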
Resolution-wise: the model expects 2048×2048 (its fixed input tensor is [B, 4, 2048, 2048]), hence the dynamic scaling that handles arbitrary input sizes (resize, inference, scale alpha back out), but 2K is the native resolution where it operates at full quality. That's only regarding the matte though; I'm despilling the fg at full res.
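The dynamic-scaling dance around the fixed 2048×2048 tensor can be sketched like this (nearest-neighbor resize and a stand-in model purely for illustration; the real pipeline presumably uses a proper filtered scaler):

```python
import numpy as np

MODEL_RES = 2048  # fixed model input is [B, 4, 2048, 2048]

def resize_nearest(img: np.ndarray, h: int, w: int) -> np.ndarray:
    """Cheap nearest-neighbor resize; stands in for a real filtered scaler."""
    src_h, src_w = img.shape[:2]
    ys = np.clip(np.arange(h) * src_h // h, 0, src_h - 1)
    xs = np.clip(np.arange(w) * src_w // w, 0, src_w - 1)
    return img[ys][:, xs]

def matte_at_source_res(rgba: np.ndarray, run_model) -> np.ndarray:
    """Resize to model res, infer a matte, scale the alpha back to source res."""
    src_h, src_w = rgba.shape[:2]
    model_in = resize_nearest(rgba, MODEL_RES, MODEL_RES)
    alpha = run_model(model_in)  # expected shape: (MODEL_RES, MODEL_RES)
    return resize_nearest(alpha, src_h, src_w)
```

The despill then happens on the full-res plate, so only the matte ever passes through the 2K bottleneck.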
Regarding Pybox, the mechanics for handling when to draw new frames and handling interface updates were confusing… at least for me, and I'm struggling with the lag between how long generation takes versus when the interface polls the daemon for a result.
I'm also not a software dev so there's that…
At the reverse user group the message was very loud and clear. PyBox is ancient history and nobody wants to hear about it.
Unfortunate, seems like a little TLC could make it work without us having to claude-code a sea of OFX plugins. Alas here we are.
At least in the case of this little experiment. I may well try just converting the model to ONNX. Might be a better way to approach things.
That being said, if I were at Autodesk the two things on my list would be a Python API for everything you can control, and a new Python socket API that allowed for external bi-directional communication software-wise.
It seems urgent that they address this. The window is closing…
If not already closed… there's a small sliver of light still.
Agreed.
It's been a while since things changed so quickly. Everyone is racing to make it to safety before the ice shelf splits further.
Large legacy shops are at massive disadvantage.
FU PyBox indeed. FU right in the ear.
There are no more large legacy shops.
Corporations are doing interesting things, and their spend on AI eclipses the spend of small shops or individuals.
There was a window of opportunity at the beginning of 2025 and that window has closed.
Maybe. It's a fluid situation.
The thing that is certain: past laurels aren't saving you in the current environment. All the moats are leaky as a sieve.
And the users are more adaptive than ever thanks to everyone being a code exporter. We have a code trade deficit on our hands. And high prices aren't saving legacy players.
Have you seen this project? I'm a complete coding noob, so no idea if there's anything there that could help inform you. Seems to be using ONNX…
From their notes:
RAM note: The ONNX trace at 2048×2048 requires ~140 GB of memory. If you have 128 GB RAM, add temporary swap:
sudo fallocate -l 64G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
sync && sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
uv run python export_corridorkey_onnx.py --img-size 2048 --no-verify
sudo swapoff /swapfile
sudo rm /swapfile
This is what I was working on yesterday with the CUDA version. Close, but I'm having to do some offload tricks to keep from OOMing. My Ada A5000 only has 32 GB, so I'm VRAM-poor as the kids say.
Might be faster to spin up a cloud server instance than do code heroics. A few bucks vs. a few hours.
Is this just for the conversion to ONNX or for the actual running of the model?
A while back I was working with @cristhiancordoba on converting some models to see how high a resolution we could get, and rented some Vultr high-GPU-RAM machines. Costs a few bucks.
