I’m running Flame on a Mac Studio and Comfy in the cloud. Good in some ways, annoying in others. Thinking about getting the Nvidia Spark as a standalone Comfy workstation next to my Flame. Is anyone doing anything like this? Would be very interested in any insights.
Cool little machine. Not the fastest I think, but the massive 128 GB of unified memory would be v useful for large video models. And you could maybe pair two up for 256 GB later.
It feels like it’s impressive from a VRAM perspective but not from a compute perspective. Most of the YouTube influencers have reviewed it; check out NetworkChuck’s review from about a month back. After watching that I was like, yeah, I think we still need to stick with big beefy Nvidia RTX 5000 Adas or RTX 6000 Pro Blackwells.
I’ll second that. Was running some ComfyUI models locally on an A6000 Classic (pre-Ada), and then on a RunComfy H200. The problem wasn’t VRAM constrained, it was compute constrained: the H200 was only 4x faster. Now, I’ll take that any day, but use cases vary, so mind the details.
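For anyone unsure which constraint they’re actually hitting, here’s a rough way to check in a PyTorch session (the model and batch names are just placeholders, nothing ComfyUI-specific): if peak VRAM sits well below the card’s total while step times stay long, more memory won’t help you, only more compute will.

```python
import torch

def profile_step(model, batch):
    # Reset the peak-memory counter so we measure just this step
    torch.cuda.reset_peak_memory_stats()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)

    start.record()
    with torch.no_grad():
        model(batch)          # placeholder forward pass
    end.record()
    torch.cuda.synchronize()  # wait for the GPU to finish before reading timers

    peak_gb = torch.cuda.max_memory_allocated() / 1e9
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
    print(f"peak VRAM: {peak_gb:.1f} / {total_gb:.1f} GB")
    print(f"step time: {start.elapsed_time(end):.0f} ms")
```

In my case the peak VRAM number was comfortably inside what the A6000 had, so extra memory (Spark’s selling point) wouldn’t have moved the needle; step time was the whole story.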
The bigger issue is that some of these tools don’t scale horizontally, which is where the big custom cloud models seem to have an edge, because they’re engineered end to end across software and hardware.
Mediocre performance, interesting only because of the unified memory concept, runs its own OS… could we slap an Apple sticker on it, charge $700 for four wheels, remove the ports, and no one would notice?