Weavy.ai

Hello Flamers,
I’m wondering if anyone here has explored Weavy.ai.
From my first impressions, it seems more aligned with the Flame workflow — intuitive, streamlined, and less complex compared to Comfy.
If you’ve used it, could you share your insights?

The founders Jonathan and Itay are both ex-Flame artists/CDs from Gravity post. We are having a lot of fun kicking the tires on this platform!

4 Likes

Yep. I like it. A bit easier to navigate than Comfy imho. However, it’s a bit like a taxi on tariff 2 at times. But still fun.

1 Like

You had me at ex-Flame artist! I’ll give it a try and let you know my thoughts

To make it a fair comparison - keep in mind that with good hardware, ComfyUI is free.

A decent Linux Flame is the equivalent of X-Large Plus on RunComfy, it’s not until you get to the A100 configs that you’re in data center territory.

Now, if it’s a paying job and it has appropriate budget, that shouldn’t matter. Either of the tools can be run in the cloud and the client can pay for it.

1 Like

Good point @allklier - currently I just don’t have the bandwidth (or know-how) to set up a separate machine or partition, manage a secure environment, and all the other stuff that comes with it… not to mention “good hardware” is probably $5-10K? For now a cloud-based platform is good while we dip our toes into the water… but yes, over time the continued expense will mandate an on-prem solution…

1 Like

Well exactly. I just put the charges on the job.

1 Like

Absolutely. Just wanted to highlight it as we make the comparison. The ability to run on-prem may matter, especially in cases where there are limits on what can be used in the cloud.

And while not an issue early on, as you spend time learning, you want to bet on the right horse. There’s overlap in the knowledge between these tools, as they share various models and processes. But each is also unique in some ways.

…and just like that Comfy now runs in the cloud as well https://www.runcomfy.com

Ah sorry, I thought I was clear on that. That has been around for some time. So with Comfy you have both choices. With some of the others it’s cloud only.

1 Like

I really like Weavy. It just works straight away without having to download models from GitHub or make sure you’re combining the right VAE or safetensors or CLIP or whatever the hell all those words are in Comfy.

I set up a shot in Weavy in about 3 minutes and then, as an experiment, built the same thing in Comfy. It took about an hour in Comfy, despite me having just done the Action VFX course and having had no training at all in Weavy. And the Comfy result wasn’t as good, because Comfy didn’t have the latest Kling model that Weavy had.

If you can get over the credit model, it’s Weavy all the way for me I think.

6 Likes

I can’t help but imagine a version of Flame that has these tools included.

There’s some attempt at comp nodes inside Weavy but nothing like there is in Flame & Nuke.

Weavy and Comfy are principally UIs utilising the APIs of lots of other existing AI models. Surely this can be pulled inside of batch without any of the worries around training data seeing as they’re 3rd party tools.

3 Likes

Well, I didn’t know it existed until 30 minutes ago, but even if it’s easy to use, what would win me over is presets for z-depth, normals, video extension, inpaint, outpaint, segmentation… the usual stuff that normally takes some time to set up. With working, reliable presets, I could use it.

2 Likes

@Edusanjo if you hit Tab and type ‘inpaint’ you will see all the nodes for the different inpainting options to just drop into your flow graph; same with ‘depth’, etc. FYI, the result is only as good as the available models floating around on the web, which are used on other platforms too - Weavy isn’t creating that code.

3 Likes

Hey guys! Little heads-up: when you use Comfy and similar tools, you are running models in quantized mode, and usually with swapped layers.

It’s the same as if we grabbed an Alexa .ari file, converted it to mp4, conformed with the mp4, and batched a comp on top of the mp4 (because hey, the mp4 looks the same in fullscreen as the ARRIRAW, so why use it?).

So keep that in mind every time you choose Comfy or a similar tool for your AI project, because those are not the real models.

I’m not saying you shouldn’t use them, but you shouldn’t lock yourself into thinking that whatever you get out of Comfy and similar tools is the state of the art for that AI model. You’re actually hurting yourself the same way you would by working from the mp4 because it looks the same in fullscreen.

1 Like

This is a good call-out. But I believe some caveats are necessary.

Quantization can happen in ComfyUI if you’re on a low resource system, but it’s not inherent to the tool overall.

It happens specifically when you start the ComfyUI server with the --lowvram command-line option, and when you load checkpoints with --fp16-vae. Both are meant to reduce the precision of the loaded model for hardware-constrained systems.
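For reference, this is roughly how those options look on the command line when launching ComfyUI from a source checkout (the flags come straight from the description above; the entry-point path is assumed):

```shell
# Default launch: models load at their shipped precision (given enough VRAM)
python main.py

# Hardware-constrained launch: partially offload layers and run the VAE at fp16,
# trading precision for lower memory use
python main.py --lowvram --fp16-vae
```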

If your GPU is good enough, or you run in the cloud with a data center GPU this shouldn’t be as much of a concern.

That said, a lot of models were trained on sRGB/Rec709 data, so they will not be able to match the dynamic range of RAW footage without extra steps, and any AI work thus needs to happen in the later phases of a comp where you are in display color space. Similarly, there may be limitations on what you can do with AI-generated content in the context of HDR deliverables.
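A toy sketch of that limitation (a simplified gamma curve, not a real display transform): once scene-linear values are encoded into an 8-bit display range, everything above 1.0 collapses to the same code value, so the extra highlight stops in RAW footage are unrecoverable.

```python
def encode_display_8bit(linear):
    """Simplified 8-bit display encode: clip to [0, 1], gamma 2.2, quantize."""
    clipped = min(max(linear, 0.0), 1.0)  # highlight detail above 1.0 is discarded
    return round((clipped ** (1 / 2.2)) * 255)

# Diffuse white and a highlight three stops brighter land on the same code value:
print(encode_display_8bit(1.0), encode_display_8bit(8.0))  # 255 255
```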

1 Like

Kind of. Most, if not all, models used in ComfyUI workflows are heavily quantized, or even adapted to Comfy by removing many of their internal layers. I know this because I also help bring many of those models to the community.
Even if you use ComfyUI’s own settings, the backend models are still not “the real models”, which means degraded output compared to the originals.
Degraded not only in pixel depth, but in the results themselves: consistency, textures, animations, everything.
Even if you use --fp16-vae, it is not enough, since the models themselves can still be heavily clamped (quantized).
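As a rough illustration of the point (a toy sketch, not actual ComfyUI or model code): quantizing weights to 8 bits snaps every value to one of 256 levels, and the resulting rounding error is baked into every layer that uses those weights.

```python
def quantize_8bit(weights):
    """Round floats to 256 evenly spaced levels over their range (lossy)."""
    lo, hi = min(weights), max(weights)
    step = (hi - lo) / 255 or 1.0  # avoid division by zero for constant input
    return [lo + round((w - lo) / step) * step for w in weights]

weights = [0.0137, -0.8421, 0.5563, 0.0002, -0.1978]
restored = quantize_8bit(weights)
# Every weight is now off by up to half a quantization step:
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Real quantization schemes (per-channel scales, int8/fp8, removed layers) are more involved, but the loss is the same in kind: the restored weights are approximations, and the error shows up in the results, not just in pixel precision.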

I’m not saying it’s all bad; it’s amazing to see this running at consumer level and made accessible to so many people. But more and more I see people using these in actual production, and many, many more nodes trying to fix results that would not even need fixing otherwise.

1 Like

But it does beg the question: does one need the type of hardware you run to enjoy the benefits? To use your analogy, people will use iMovie if they don’t have the software or hardware required to do something better than mp4 quality-wise.

1 Like

Such a great question. The main focus of the Comfy community model is to always give people access, even if that means sacrificing quality. What matters is that people have access.
That thinking creates a huge gap between the original models’ results and most of the outputs the community is producing.
And that is perfectly fine. I don’t see it as a problem.
I do see it as a problem when people think that this is the only way to do it, and that it is the true output, without ever learning what is happening in the backend.

4 Likes

Those are good points.

To what extent does that apply to Weavy and Runway as well? Just because it’s cloud may not mean better, since cost is a major factor, especially on affordable non-enterprise subscriptions. We’ve seen this with GPT-5, which is primarily a more efficient rather than a more powerful model. The race to efficiency.

And how can you tell as a user which category you’re dealing with?