As ComfyUI is emerging as a key tool for those merging Flame and AI, it’s helpful to get some good resources.
As I’m sure others have watched it too, the pixaroma YT channel is a great starting point. But I find it very still-image centric, and not through the lens of VFX work. While the principles are translatable to an extent, and we do work on plates at times, it’s a lot to sift through.
Earlier this week ActionVFX announced an upcoming course that by all accounts will be a better fit. Doug Hogan lives primarily in the Nuke world and has recently done some fxphd courses for Nuke, and the Discord community he has built has become the official fxphd online community (replacing the old zero-traffic Slack channel). I did watch one of his courses and found it well done and helpful.
So with that background, this one might be worth considering. It’s paid training, but the price is quite reasonable if it saves you a few weekends of digging through YT slush to find your own nuggets.
It’s a good course, IMHO… and very worth it for folks who are either starting from scratch or who need to shift gears from playing around to being productive. Doug’s got a good grasp of the how and why, and since he comes from a VFX background, his perspective and approach make sense, as @allklier was saying.
He has command of why something works; it’s not just a YT video of ‘look what I did’. And it starts from the premise that you want to understand and control the process, which is key for us.
All while keeping it real. Seems worth the price and time.
There is so much talk about ComfyUI these days. I have watched a few YouTube videos and those put me off hard. Let me know how you feel about that training as you advance.
I see some good results, but it’s difficult to know how much time they’re putting in to get those results. I tried to use CopyCat in Nuke a few times and it was quite slow and tedious even with an RTX 4090.
Even a good understanding of how to generate a still frame would be beneficial, as searching for a fitting photo to use as a blurred background can often be tricky.
I’m about half-way through the course, and it’s more helpful than pretty much anything else out there.
The first few chapters are not exciting, but they lay a solid foundation. If you’ve used ComfyUI before you can speed-watch them.
But once he gets to LoRAs, ControlNets, IP-Adapters, and Image2Image, we get to the part where you have fine control of the image being generated: not just throwing ever more prompts at it, but precisely guiding the model in the direction you want it to explore.
Then after that he gets into the video portion, not just stills.
So this is definitely made with us as the audience in mind, and it very much gives you concrete tools to use these models more effectively.
Doug does make a passing comment that an intermediate follow-up course is apparently in the works. If this one is the baseline, that should be a must-watch if you need to catch up and use these tools in production.
I’ll update this once I make it further through the course. But it’s holding up to its promise so far.
ComfyUI itself can run in CPU-only mode, but as with all ML tools it may be an unsatisfactory experience.
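If you want to try it anyway, the CPU fallback is just a launch flag (a minimal sketch, assuming a standard git checkout of ComfyUI):

```
# Run ComfyUI on the CPU only -- it works, but expect very slow generations
python main.py --cpu
```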
Also, ComfyUI is the platform and framework, but the actual work is done by individual nodes. A small number of them are system nodes that come with ComfyUI itself. The vast majority, and where all the advancement comes from, are 3rd-party nodes from a variety of developers. So while the platform may run CPU-only, the same is likely not true for all the nodes.
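For reference, most 3rd-party node packs install by cloning into the custom_nodes folder (or more conveniently via ComfyUI-Manager); the repo URL below is just a placeholder, each pack has its own:

```
# 3rd-party node packs live in custom_nodes/ inside the ComfyUI install
cd ComfyUI/custom_nodes
git clone https://github.com/some-developer/some-node-pack.git  # placeholder URL
pip install -r some-node-pack/requirements.txt                  # if the pack ships one
# restart ComfyUI so it picks up the new nodes
```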
In fact, the course is actually taught on a Windows system, no less. While you can get ComfyUI to run on Linux, you do want to run it within Anaconda or another Python virtual environment so as not to interfere with the main system, so there are extra steps to be taken. For Windows there is a download-it, hit-install, and-you’re-good package available.
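For the Linux route, the setup is roughly this (a sketch, assuming conda and an NVIDIA driver are already in place; check the ComfyUI README for the right PyTorch build for your CUDA version):

```
conda create -n comfyui python=3.11
conda activate comfyui
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
pip install -r requirements.txt   # may need a CUDA-specific torch install first
python main.py                    # UI comes up on http://127.0.0.1:8188 by default
```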
Also, ComfyUI at this point can’t take advantage of dual GPUs in any reliable way. So you want one big GPU, not two small ones.
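If you are stuck on a multi-GPU box, you can at least pin it to one card the standard CUDA way (nothing ComfyUI-specific, just the usual environment variable):

```
# Make only the first GPU visible to ComfyUI
CUDA_VISIBLE_DEVICES=0 python main.py
```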
I lucked out in that my Nuke system is a big Windows box with A5000 GPUs, so I had a path of low resistance. Realize that won’t be everyone’s mileage here.