@SamE I did forget to mention that there's a cloud version of ComfyUI, and he shows at least one example with that. So if you're Mac-only, that would be a viable path. They have a free tier to at least try it out, though over time I assume you would have to pay. People who pay for RunwayML seem to have mixed feelings, partly because you're beholden to their dev cycle and focus (which seems to have shifted away from what we mostly care about). With ComfyUI, which is an open-source tool, you're not dependent on someone's roadmap; you're just paying for compute power, and can throw that at anything the community comes up with.
While the new Mac Studios are amazing machines in many ways, this remains one of their Achilles' heels. ML and AI are evolving fast and are attached at the hip to NVIDIA GPUs whether you like it or not. And with AI now being a much more serious part of our workflows, going beyond what Autodesk will do internally (and them having to differentiate CPU and GPU processing), that gap may become more painful.
Yeah, this is my problem. I'm Mac all the way, but Comfy just isn't great on a Mac. So I do all my Comfy through RunComfy.com, basically hiring a workstation by the hour: a couple of dollars for a 48GB GPU, up to $7/hr for the motherlode.
Trouble is, you get addicted to that 141GB H200. Sometimes in a pinch I can be running a couple of H200 systems while running Flame on my Mac. That's the advantage of using Comfy in the cloud: it offloads all your processing, so Flame runs locally unimpeded. The disadvantage is that I can't get it to write to my project drive, so you're constantly having to download your results and place them in the right folder; small but annoying. I learned for free using Comfy on my Macs, but now do all paid work in the cloud. You can basically be running three workstations from one interface: Flame locally, one instance of Comfy for working, and a render-bitch Comfy churning away. It's a very productive way of working, although the price adds up.
Combined several of the workflows from the course to redo the shot.
For that shot we had to use a Veo 3-generated plate in HD resolution and replace the driver, who was shot in the studio. As the new driver didn't completely cover the original in the shot, I had used the Flame paint node to clear enough of the edges to avoid any problems.
Here's a ComfyUI workflow that uses the points editor and SAM2 to create a live matte of the helmet, dilates the matte a tad, then sends everything through the MiniMax object remover, which does a more than decent job. I might not use the whole clean plate, just comp sections of it over the original where I previously had to paint.
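For anyone curious what that dilation step amounts to, here's a minimal PyTorch sketch of the max-pool trick that ComfyUI-style mask nodes commonly use to grow a matte. The function name and the exact tensor shape are my own assumptions, not code from the course:

```python
import torch
import torch.nn.functional as F

def dilate_matte(matte: torch.Tensor, pixels: int = 4) -> torch.Tensor:
    """Grow a single-channel matte outward by `pixels`.

    matte: (frames, height, width) tensor with values in 0..1,
    roughly the shape ComfyUI mask nodes pass around.
    """
    kernel = 2 * pixels + 1
    # Max-pooling with a wide kernel and stride 1 pushes every white
    # pixel outward by `pixels` in all directions: a cheap dilation.
    return F.max_pool2d(
        matte.unsqueeze(1),   # add a channel dim: (F, 1, H, W)
        kernel_size=kernel,
        stride=1,
        padding=pixels,       # keeps the output the same size
    ).squeeze(1)
```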
Ran the whole workflow at 960x480 and upscaled at the end. Full 192 frames.
Timed it after a fresh restart, so there was no caching going on. It completed in just under 6 minutes; VRAM stayed around 35%, and system memory was quite contained as well. Ran locally on my Linux Flame.
I did a test run at full HD resolution on a shorter section (30 frames), and VRAM topped out at 57%. So if I were patient enough and wanted better detail, I probably could have run this at full res instead of upscaling.
Now that the workflow is set up and saved, I can apply it to similar shots: simply reload, reset the points editor, and run it. Of course there's no guarantee it works on the new shot; it's AI after all. But the same was true for some old-style paint techniques.
I'm doing the course on Linux. It was a bit of a struggle to get going, but with some help from ChatGPT it went rather easily. I even fed the script they provide through ChatGPT and it adjusted it for my Linux system. Afterwards some models didn't download and install, so I fed the errors to ChatGPT again and it gave me the right commands to get everything going. I'm watching each lesson once, then watching it again as I follow along. Now at the LoRAs chapter.
Yes, it does take a few steps to install on a Linux Flame, partly because the course script is a Windows bat/Python setup.
In my case I installed Anaconda3, then cloned ComfyUI from GitHub, and while launching it found about 10 different Python libraries that were missing and had to be installed via pip.
Then I just looked in their install script to see which models they preloaded, installed those via ComfyUI Manager, and copied them into the models folder. Same with their list of custom nodes: just installed them via ComfyUI Manager. Takes about 20 minutes to get through.
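In case it helps anyone doing the same on Linux, here's a rough sketch of those steps as a small Python bootstrap. The ComfyUI repo URL is the real one; the extra library names are examples only, since the missing-module list will differ per system:

```python
import subprocess
import sys

# Clone ComfyUI (real repo URL); assumes git is installed.
subprocess.run(
    ["git", "clone", "https://github.com/comfyanonymous/ComfyUI.git"],
    check=True,
)

# Install the declared requirements first...
subprocess.run(
    [sys.executable, "-m", "pip", "install", "-r", "ComfyUI/requirements.txt"],
    check=True,
)

# ...then whatever your first launch attempt complains about.
# These names are placeholders; your missing list will differ.
extras = ["opencv-python", "einops", "safetensors"]
subprocess.run([sys.executable, "-m", "pip", "install", *extras], check=True)
```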
I can share the setup if that would help. But it's essentially parts from modules 8, 9, 11, and 14 combined into a single workflow, with some minor refinements like the matte dilate.
You can run Comfy on an M-series Mac. It's important that you use the correct models; many will not run and you'll get an error. So far in my experience the fp8 models do not work, but fp16 models work fine.
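If you want to check what your Mac's MPS backend will accept before downloading a big model, a small PyTorch probe like this sketch can tell you. The fp8 dtype name only exists in fairly recent PyTorch builds, hence the `getattr` guard:

```python
import torch

# Probe which dtypes Apple's MPS backend will actually allocate.
dtypes = {
    "fp16": torch.float16,
    "bf16": torch.bfloat16,
    "fp8": getattr(torch, "float8_e4m3fn", None),
}

device = "mps" if torch.backends.mps.is_available() else "cpu"
for name, dtype in dtypes.items():
    if dtype is None:
        print(f"{name}: not available in this PyTorch build")
        continue
    try:
        torch.zeros(8, dtype=dtype, device=device)
        print(f"{name}: ok on {device}")
    except Exception as exc:
        print(f"{name}: failed ({exc})")
```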
Negative. It’s simply someone else’s computer. Fill it with whatever you want. Typically you pay for some permanent storage so ya don’t have to spend compute time uploading and backing up your shiz.
Just a heads up to everyone here installing comfy for the first time. You DEFINITELY want to install this on a segregated machine/container. Side-loading in ComfyUI is a massive, massive security hole right now. Ask me how I know…
It’s seems more the fact that we’re downloading custom nodes with little oversight and reputation checking that are massive python libraries that could do all sorts of things.
Some would require elevated privileges but others the mere fact that files may be freely accessible from your system maybe bad enough.
So it’s not a new or unique security risk, just that we’re much more willing to download potentially dubious code without 2nd thought in the race to stay relevant.
One solution would be to sandbox it as a separate user with less access, and be mindful of what you run with sudo.
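As a sketch of that idea, assuming you've already created a dedicated `comfy` user, a small launcher can drop root privileges before starting ComfyUI. The username and the `/opt/ComfyUI/main.py` path here are hypothetical; adjust for your install:

```python
import os
import pwd
import sys

# Hypothetical dedicated account and install path; adjust for your setup.
SANDBOX_USER = "comfy"
COMFY_MAIN = "/opt/ComfyUI/main.py"

def drop_privileges(username: str) -> None:
    """Switch this process to an unprivileged user (must start as root)."""
    user = pwd.getpwnam(username)
    os.setgroups([])         # drop supplementary groups first
    os.setgid(user.pw_gid)   # then the primary group
    os.setuid(user.pw_uid)   # finally the user id; no way back after this
    os.environ["HOME"] = user.pw_dir

if __name__ == "__main__":
    drop_privileges(SANDBOX_USER)
    # execv replaces this process, so nothing keeps running as root.
    os.execv(sys.executable, [sys.executable, COMFY_MAIN])
```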
I tried it on Runpod this summer, since I didn't have a decent computer at hand. It's pretty neat: you rent a storage volume of whatever size you want (100GB is about $7/month) with a PyTorch template of your choice, and you install your own Comfy with your models, custom nodes, etc. Then each time you start your Pod, you choose the GPU you need for the task (from an RTX 3090 to a 6000 PRO) and pay by the hour. An RTX A4000 is about $0.25/hour; a bigger GPU can go up to $2/hour. When you're done, you shut down your Pod and you're back to paying only for your storage. All of your work, workflows, etc. are kept on your storage until the next time.

I find it a little less user friendly than a local install (for example, for downloading models you have to use the command line), but you get used to it. And you have access to server-grade GPUs without breaking the bank. Last year I tried Vast AI, but at that time you had to start with a blank vanilla ComfyUI template every session; I don't know if that has changed since. There is a good tutorial somewhere on YouTube on how to set up your own persistent Comfy on Runpod.
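For those command-line model downloads, something like this snippet using the real `huggingface_hub` library is one common approach on a headless Pod; the repo, filename, and `/workspace` path here are examples, not requirements:

```python
from huggingface_hub import hf_hub_download

# Example checkpoint only; swap in whatever your workflow actually needs.
path = hf_hub_download(
    repo_id="stabilityai/stable-diffusion-xl-base-1.0",
    filename="sd_xl_base_1.0.safetensors",
    local_dir="/workspace/ComfyUI/models/checkpoints",
)
print(f"Model saved to {path}")
```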