Anyone aware of this?
Yeah it’s a super common fast storage option.
That is one of the larger piles of sales horseshit I’ve seen in a while.
This is written 100% by sales people with near zero real-world analysis, it’s hilarious.
And as always with these types of cloud-shill white papers, there’s ZERO about costs, which in total are always far more than on-prem/colocation, and they recur.
Very aware. @ALan 's thoughts on this kind of workflow are definitely worth searching for; he’s arrived at a very practical evaluation of this kind of system.
I’ve worked extensively over the past year on a WEKA-based system, and after setup it has needed almost zero maintenance. We scaled projects up and down easily, shared the system among many artists on Flames in many locations, and the storage has run without a hiccup since it was turned on. I’m not exaggerating: the engineer who set it up left the company six months ago and no one has had to make adjustments or restarts or anything since he last grew it over the summer of '22.
But it ain’t cheap, and it gave the guys paying the bills night sweats. They’re currently phasing it out for a much more manual solution. Aside from cost, the downside was collaborating outside the Flame ecosystem, but I suspect that was more specific to our case than to others.
What Weka does differently from a lot of major vendors is that most of your data lives on cheap object storage, with high-performance NVMe drives acting as a cache in front of it. The one thing to note is that there’s a six-node minimum, so a smaller install doesn’t make financial sense. Once you get into 15+ Flames it’s a no-brainer.
I’d love to see a true cost breakdown of even just what Weka alone costs.
We built a 150TB all-SSD NAS that can easily service a large number of Flames for about $30K, a one-time purchase that will serve us for many years. And it still has 60+ drive slots available for expansion.
But all I ever see are shilling press releases, and never any tangibles.
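So, for one tangible data point, here’s the simple arithmetic on the box described above. Note it’s purchase price only; power, space and admin time aren’t included.

```python
# Arithmetic on the NAS above: a ~$30k one-off purchase for 150 TB of all-SSD
# capacity (ongoing power/space/admin costs not included).
build_cost = 30_000   # USD, one-time
capacity_tb = 150     # usable TB as described above

print(f"~${build_cost / capacity_tb:.0f} per TB, one-time")  # ~$200/TB
```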
We just invested in some on-prem storage. We got a quote from Weka; it was more than double the price of the PixStor solution we’ve gone with. We were also really impressed with Quobyte, which was also less than half the cost. Nowhere near as cost-effective as your solution though, Alan.
@ALan roughly $7 per TB per month. Hardly something to get your pitchforks out for.
Is that just the cost of the Weka license? Or is that all in, including the machine cost, network fees, etc?
I’m with Alan on this. When I added up the costs, it took less than 8 months for a similarly spec’d P620 with all-NVMe storage to pay for itself compared to the AWS offering, and that excludes the cost of getting AWS set up or of pushing to and pulling from the cloud. A fully spec’d Mac Studio took less than 6 months to pay for itself. Personally, I’d be locally hosting a Mac Studio or Linux system and using Teradici or Parsec or something when you want people to log in, then using the system as a render node when you don’t need it as a workstation.
I can understand going cloud if you want to use numerous artists in places all over the world for short periods of time. Otherwise, I’d be buying a system.
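For anyone wanting to run the same kind of break-even check, here’s a minimal sketch of the arithmetic. The dollar figures are placeholders for illustration, not the actual quotes I compared:

```python
# Hedged sketch: months for a purchased workstation to pay for itself versus a
# comparable cloud setup. Both figures below are assumptions, not real quotes.
workstation_cost = 9_500      # one-off purchase price (assumed)
monthly_cloud_cost = 1_300    # comparable cloud workstation + storage per month (assumed)

break_even_months = workstation_cost / monthly_cloud_cost
print(f"Break-even after ~{break_even_months:.1f} months")  # ~7.3 with these numbers
```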
It’s not all about costs. Some facilities don’t have facilities. Some are 100% virtual; some are 95% virtual. So there’s a lot more to it than “this is more costly than that.” Yes, it’s expensive. But so are real estate, heating, cooling, rent, security, electricity, travel, commute times, etc.
Just because it doesn’t make sense for you and your business doesn’t mean it’s a shitty product. I’ve worked for all of the cloud studios, and the ones that have Weka have the best-performing Flames.
For the last several years we have used a hybrid approach to this.
Our thinking was that one local server isn’t a safe bet: deadlines don’t change, and disaster recovery could take quite a while in the worst-case scenario.
We use object storage with metadata references that just look like a folder of files to the workstations. It isn’t super expensive for projects that are active. Our typical server file structure is maintained, and the files are mapped to uniform folder names so projects work across Mac, Linux and PC workstations.
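A minimal sketch of the kind of path normalisation that implies is below; the mount points and the example job path are assumptions, not our actual configuration.

```python
# Hedged sketch: resolving one project-relative path against per-OS mount points
# so the same folder structure works on Mac, Linux and Windows.
import platform
from pathlib import PurePosixPath, PureWindowsPath

MOUNT_ROOTS = {
    "Darwin": "/Volumes/cloudfs",   # macOS mount point (assumed)
    "Linux": "/mnt/cloudfs",        # Linux mount point (assumed)
    "Windows": "Z:/cloudfs",        # Windows drive mapping (assumed)
}

def resolve(project_relative: str) -> str:
    """Return the OS-specific absolute path for a project-relative path."""
    system = platform.system()
    root = MOUNT_ROOTS[system]
    if system == "Windows":
        return str(PureWindowsPath(root) / project_relative)
    return str(PurePosixPath(root) / project_relative)

print(resolve("JOB_1234/source/plates/shot_010.exr"))
```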
All cloud storage is synced to our existing local servers, then backed up eventually to offline media.
Cloud failure has a backup.
Our workstations have very fast local NVMe RAID for active projects. Portions of source files in the cloud are cached locally as they’re accessed, invisibly to the users.
The renders are done locally and uploaded as needed.
We have a fairly resilient system, with universal access to our artists and a fair degree of redundancy and reasonably low costs.
Workstations in the cloud seem very, very expensive to me, and the technical problems that arise, as they always do, are 100% out of your hands and control.
Roger Berk
President
Creative Technology
Voice 330 388 6178
Roger, that sounds really great. What do you use to do the sync?
Elad
Because the cloud is a mounted volume, we just use standard Linux tools to sync our local servers. The cloud folders are mostly source and deliverables, so they are mostly read-only. We are not rendering to the cloud.
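For illustration, a minimal sketch of that kind of one-way sync, driven from Python; the paths are placeholders and the flags are assumptions rather than our exact setup.

```python
# Hedged sketch: copy a mounted cloud project folder down to a local server with
# rsync. Paths are placeholders; --dry-run is left in so nothing is copied until
# you remove it.
import subprocess

subprocess.run(
    [
        "rsync",
        "-a",          # archive mode: preserve times, permissions, symlinks
        "-v",          # list files as they would be transferred
        "--dry-run",   # remove this flag once the output looks right
        "/mnt/cloudfs/JOB_1234/",    # source: mounted cloud volume (placeholder)
        "/srv/projects/JOB_1234/",   # destination: local server (placeholder)
    ],
    check=True,
)
```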
Ahhh! got it! Thanks for the info!
@ALan so I just ran the numbers for an exact match.
I can’t mention the client here, but I looked at a few, across multiple cloud providers, and they’re all in line with this.
400 TB total, as a 6-node cluster with 60 TB of NVMe, is around $9k per month. With that comes a full DR copy once a day and hourly snapshots.
Again, I don’t think cloud is the answer for everything and there are certainly cases where we advise against it. Use the best tool for the job.
Hi Tom,
Thank you for providing the cost information. It’s the first time someone has been willing to be forthright.
But that is about 6x (over 3 years) the cost of buying 2 identical servers, hosting one on-prem and the other in colocation, and doing similar syncing.
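For anyone checking the maths, a rough sketch of that comparison; the cloud figure is the $9k/month quoted above, and the on-prem figure assumes two servers at roughly the $30k build cost I mentioned earlier, ignoring colocation and power.

```python
# Rough arithmetic behind the "about 6x over 3 years" comparison. On-prem figure
# is an assumption (two ~$30k servers, colo/power/admin not included).
cloud_3yr = 9_000 * 36      # $324,000 over 36 months
on_prem_3yr = 2 * 30_000    # ~$60,000 up front

print(f"cloud: ${cloud_3yr:,}  on-prem: ${on_prem_3yr:,}  "
      f"ratio: {cloud_3yr / on_prem_3yr:.1f}x")
```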
Alan
We are fully cloud-based at Rohtau (www.rohtau.com) and it is great. I wouldn’t go back on-prem, but I recognise there are a few kinks to iron out before the experience is perfect.
We did evaluate WEKA and it is great, but the cluster-based setup, the fact that it uses a proprietary format for the data, and the associated costs made me decide on a different system, which has worked very well. That said, we will be revisiting this very soon and may move to something like PixStor or similar.
The interesting thing about our setup is that we can shrink down to almost nothing, and that is something I value a lot… no debt, almost no infrastructure costs… and we’re happy to be selective with the work we embark on.
In terms of costs, we are paying easily half that.
I hope it helps
I’m using FSx for Lustre on AWS and it’s nice and fast for me with Flame. And I don’t need to leave it running all the time, either, since it’s got the S3 integration. But… I don’t really have anything to compare it to, since I’m new to Flame and not sure how fast it should feel. Anyone else been hands on with FSx in AWS and can share their experience with it vs the others? Thanks!
EDIT: I should mention I’ve only tried the 1.2 TB volume, at 250, 500 and 1000 throughput. 500 feels like the sweet spot in terms of price/performance… but I’m only one guy using it, so I can take advantage of the burst throughput.
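In case it’s useful to anyone comparing, here’s a minimal sketch of spinning up a file system like this with boto3. The region, subnet ID, deployment type and throughput value are assumptions based on the setup described above, not a recommendation.

```python
# Hedged sketch: creating a small FSx for Lustre file system via boto3.
# Subnet ID is a placeholder; region, DeploymentType and PerUnitStorageThroughput
# are assumptions matching the 1.2 TB / 500-throughput setup mentioned above.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")  # region is an assumption

response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,  # GiB; the smallest persistent file system size
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder subnet
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_2",
        "PerUnitStorageThroughput": 500,  # MB/s per TiB tier
    },
)
print(response["FileSystem"]["FileSystemId"])
```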
We use FSx and it’s great and cheap. Throughput-wise we sacrifice a bit of performance, so our default is 256/512 and we move up as needed, which has only happened a few times at the delivery stage.
The issue on my side is that FSx does not support tiering like Weka does, nor symlinks, and there is a 64 TB limit which sooner or later will be an issue.
Let’s see