Hey @jordibares, cool, thanks for sharing your experience. Glad I’m on the right track with my setup!
I had a brief skim of WekaFS's tiering. For FSx, have you looked at the Data Repository Association with S3 feature? That's how I've got mine set up. I do all my tiering on S3. The DRA pulls only the metadata onto the FSx volume and fetches data on demand, or you can use their handy shell commands to pre-fetch and release data as you need it and as you don't.
When I’ve used this setup for Nuke jobs, I pull exactly what I’m working on that day over to the FSx volume (the script dependencies like scans, precomps, elements from the library) and release what I don’t need when I hit the 1.2TB limit I’ve got set up on the volume. Keeps my costs low that way. After I’m done working I delete the entire stack and all my data is safely on S3 waiting for me the next day.
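In case it's useful, here's roughly what that pre-fetch/release routine looks like scripted up. Just a sketch of the idea, not a drop-in tool: the mount point, job paths and the 1.2TB budget are placeholders for my setup, and the actual work is done by the standard Lustre lfs hsm_restore / hsm_release commands that FSx for Lustre exposes for DRA-backed files.

```python
#!/usr/bin/env python3
"""Sketch of a daily pre-fetch/release routine for an FSx for Lustre
volume backed by an S3 Data Repository Association. The paths, mount
point and capacity budget below are placeholders."""
import shutil
import subprocess
from pathlib import Path

FSX_MOUNT = Path("/fsx")              # hypothetical mount point
BUDGET_BYTES = int(1.2e12)            # ~1.2TB soft limit on the volume

def prefetch(tree):
    """Hydrate file contents from S3 onto FSx ahead of the work session."""
    for f in Path(tree).rglob("*"):
        if f.is_file():
            subprocess.run(["lfs", "hsm_restore", str(f)], check=True)

def release(tree):
    """Free FSx capacity; data stays in S3, metadata stays on the volume."""
    for f in Path(tree).rglob("*"):
        if f.is_file():
            subprocess.run(["lfs", "hsm_release", str(f)], check=True)

if __name__ == "__main__":
    # Example day: pull today's Nuke dependencies, then drop yesterday's
    # shot once the volume gets close to the budget.
    prefetch(FSX_MOUNT / "jobs/showA/shot_010")
    if shutil.disk_usage(FSX_MOUNT).used > BUDGET_BYTES:
        release(FSX_MOUNT / "jobs/showA/shot_005")
```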
We built a ~180TB all-SSD server just for the Flame framestores, shared among all our Flames, for about $30K years ago. It can deliver 5+ GB/s, and I never have to wonder whether I should upgrade the performance settings. It has paid for itself many times over, and the only ongoing cost is electricity. Never had a single failed drive.
Current cloud cost and complexity is not sustainable. If you really can't be on-prem, co-location is the way to go.
We have 2x Mellanox SN2100, which is a 16-port 100gigE switch.
Bought 1 off eBay for about $3K USD, and another as backup off a dude on Reddit for about $2300.
Each port can be split into 4x10/25gigE or 2x50gigE.
SSDs are so cheap these days. You can get a ~15TB NVMe drive for about $1,000 now, so that type of server would be even cheaper than when we built ours.
Did you build it yourself with a Supermicro board, or is it an off-the-shelf system from HP or Synology or something like that? If it's DIY, are you using TrueNAS? I converted an old Dell server to a live backup system using TrueNAS and it seems good.
It is a SuperMicro system with 72x 2.5" drive bays. The whole thing is just 4U, self-contained. It runs standard Rocky Linux 9.x with XFS and hardware RAID.
ZFS is great in many ways, except speed. In all my tests over the years, I've found ZFS to be 25-33% of the speed of XFS on the same exact hardware.
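If anyone wants to reproduce that kind of comparison on their own hardware, a quick-and-dirty sequential-write test is easy to script. This is only a sketch with hypothetical mount points; for numbers you'd actually trust, use fio with direct I/O and parallel jobs instead of something this naive.

```python
#!/usr/bin/env python3
"""Naive sequential-write throughput check for two mount points, e.g. an
XFS and a ZFS filesystem sitting on the same hardware. A sketch only."""
import os
import time

CHUNK = b"\0" * (64 * 1024 * 1024)    # 64 MiB per write
TOTAL_GIB = 8                         # ideally larger than RAM so the page
                                      # cache can't flatter the result

def write_throughput(mount_point):
    path = os.path.join(mount_point, "throughput_test.bin")
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(TOTAL_GIB * 1024 // 64):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())          # make sure the data really hit disk
    elapsed = time.monotonic() - start
    os.remove(path)
    return TOTAL_GIB / elapsed        # GiB/s

if __name__ == "__main__":
    for mp in ("/mnt/xfs_test", "/mnt/zfs_test"):   # hypothetical mounts
        print(f"{mp}: {write_throughput(mp):.2f} GiB/s sequential write")
```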
WOW that thing looks insane in the best possible way! Thanks for the info.
My knowledge of how to set up and run a framestore on something like that is very limited. Is it possible to run a framestore on a server and then collect a specific job to a local drive? We sometimes start jobs at the office and then take the system into a production company or agency for presentations.
Hey @AndrewStalph, why not do Teradici/DCV or similar back to the office from the agency? It would save you the trouble of copying projects back and forth. That is, if the two sites aren't too far apart and the latency is OK.
The complexity is something you get over quite quickly, but it is true that it takes quite a bit of time to rewire your head around the new environment.
Cost-wise, it is a complex subject because we are not buying the same thing at all: you are buying hardware, while I am buying an ecosystem of services, and with that come many things you are not accounting for, so it is not that simple.
But regardless of the current numbers, cloud pricing is only going down, so I think it is safe to say that at some point the price will be such that on-prem won't make sense at all for the majority.
With regard to co-location, I think you get the worst of both worlds rather than the best: not only is your initial cost still there, but now you also have a monthly cost covering all the bits the cloud charges you for, except the cloud does them at scale and therefore much cheaper. Furthermore, your scalability is still compromised, so what is the benefit of that?
I’d be interested to see a cost breakdown of your cloud implementation.
Regarding co-location, I don't see how you can say it is the worst of both worlds.
You do have a one-time HW purchase, and yes, you have a monthly rack charge, but it is hardly "all the bits". For instance, I know someone who co-locates, and their all-in charge is $1K a month for a full rack.
Let's take that and extrapolate some.
1 full rack is 42 rack units.
1U - Firewall ($3k)
1U - copper 1gig switch ($1k)
1U - 100gigE switch ($3k)
2U x 10 - rack workstations/Flames ($9k each)
4U - fileserver with 60x HDDs w/ 45TB NVMe cache (let's call this 1PB of storage) ($30k)
2U - 1x virtual machine host for various purposes ($7k)
3U - 12x Mac mini as side computers ($1,400 each)
We still have 10U left.
So for about $160k, we have a mini studio that fits in a single rack. Include the $1K a month over 4 years and that gets you a total of about $208K. Amortized over 4 years, that is about $4500/m (quick sanity check below), for gear that you own, that is faster, and that you fully control. Many of those systems can also last much longer (switches, firewall, storage, VM host). Power, UPS, A/C, and redundant internet are all taken care of. And everything is standard Linux management, so there are no new skills to learn. Even bump the co-location fees up by another grand a month and it's still a better deal than cloud.
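Rough math, if anyone wants to poke at the line items (they actually come out a touch under the ~$160K/$208K I rounded to above, so there is some slack built in). A minimal sketch in Python, with the quantities and prices straight from the list:

```python
# Back-of-the-envelope check of the single-rack build above.
# Each entry: (quantity, rack units per unit, price per unit in USD).
rack = {
    "firewall":               (1,  1,    3_000),
    "1gig copper switch":     (1,  1,    1_000),
    "100gigE switch":         (1,  1,    3_000),
    "2U workstation/Flame":   (10, 2,    9_000),
    "fileserver (~1PB)":      (1,  4,    30_000),
    "VM host":                (1,  2,    7_000),
    "Mac mini":               (12, 0.25, 1_400),   # 12 minis on a ~3U shelf
}

colo_per_month = 1_000   # all-in rack charge quoted above
months = 4 * 12          # 4-year amortization window

hardware = sum(qty * price for qty, _, price in rack.values())
rack_units = sum(qty * units for qty, units, _ in rack.values())
total = hardware + colo_per_month * months

print(f"hardware: ${hardware:,}  ({rack_units:.0f}U of a 42U rack used)")
print(f"4-year total: ${total:,}  ->  ${total / months:,.0f}/month amortized")
```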
From what I've seen in this thread, 60TB of cloud storage is $9K/month alone.
Now, if you have a CG movie that you need rendered overnight, yeah, cloud is perfect; thankfully, Flame isn't that kind of workload.
The key aspect I think needs clarifying is that you are in the CAPEX world and I am in the OPEX world, and I would rather invest those $200K in Tesla (for example) and still have a studio that can expand and shrink on a pay-as-you-go model. No debt = freedom.