I am working on implementing centralized storage for our projects. Mind you, not a separate project server. The server that will host the project files has very decent read/write speed over the network, saturating a 10GigE line. Is that enough speed to not bottleneck things? Currently we all have local M.2 drives that we use as local storage for projects. Is this a massive downgrade, or is speed not that important for these files? Again, this is only the project files, not the media cache; that will stay local (for now).
Hey Frankie…
We’ve been running both the S+W database and the framestore centralized for a very long time. With 2026 this is even easier, and you should definitely use a single centralized 2026 S+W database. It can be just a simple minimal RockyLinux virtual machine. As for the network, all our workstations are connected via 25GigE fiber to our central NAS. We’ve never had any speed issues or bottlenecks with this workflow. There are a few low-cost 25/100GigE switches now, and the cards are really cheap off eBay.
Happy to answer any questions you may have.
The specs in that document are wild lunatic requirements. Our 2026 S+W project server is a simple RL9.5 Minimal VM with 2 cores, 4GB of RAM, and a 64GB drive. No issues.
Now, legacy (pre-2026) S+W servers, yes, those were horribly inefficient garbage and required a stupid amount of resources for a crappy single-file Berkeley DB.
Is there a Mac config for setting up a project server? If not, is it possible to set up a virtual server on a Synology NAS?
I only have a 10gig network. Will that be OK for two seats of Flame?
Project server requires a Linux host.
Hey Alan! We’re finally speccing this out now. A few questions if you don’t mind:
What does your array of VMs look like? Is the VM running the database also running the framestore and accessing the NAS locally? Or is the NAS its own VM and then everyone else sees it via NFS?
Are you still happy with ZFS?
Thanks in advance, man!
The S+W DB VM is a very lightweight, minimal RL9.5 install.
The NAS is a separate bare-metal machine, mounted via NFS on all workstations.
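For reference, the workstation side is just a standard NFS mount, something like this in /etc/fstab (hostname, export path, mount point, and options here are examples; tune them for your setup):

```
# /etc/fstab on each workstation -- NAS name, export, and mount point are placeholders
nas01:/tank/projects   /mnt/projects   nfs   rw,hard,vers=4.2,rsize=1048576,wsize=1048576,noatime   0 0
```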
ZFS is SLLOOOOOW but safe. We’ve had a few close calls with data loss that were mitigated by quick restoration from a ZFS snapshot. When we were running LVM/XFS with an SSD cache in front, we ran into an edge-case filesystem corruption that nearly lost everything. That would not have happened with ZFS, and that is when we switched over. So even though it is like 3-4x slower than HW RAID/XFS, we chose stability and safety. Plus ZFS has excellent management and administration tools. We also get about a 25% space savings from its built-in compression.
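The snapshot/compression side is just stock ZFS tooling, nothing exotic (the dataset name here is a placeholder):

```
# recursive snapshot of the projects dataset before anything risky
zfs snapshot -r tank/projects@$(date +%Y%m%d-%H%M)

# list what's available to roll back to
zfs list -t snapshot -r tank/projects

# see what the built-in compression is actually saving
zfs get compression,compressratio tank/projects
```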
Where do your timeline renders live? Like if you have a fancy timewarp or Action repo on a shot? Is that on the S+W DB VM?
NAS. The S+W DB VM should be serving just metadata. In 2026 your “framestore cache” can live anywhere you want, and the smart place for it is on centralized storage.
What about a single system just to host the Flame projects? Right now we don’t have the infrastructure for 25gig and centralized cache, storage, etc. (for now). I CAN map and share the local storage on each machine over NFS in case other systems need that data.
Hopefully a mini PC with an M.2 drive? All I seem to find are ones with 2.5G Ethernet; is that enough?
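(For the NFS sharing I mean something simple like this on each machine; the path and subnet are just examples:

```
# /etc/exports on each Flame workstation -- path and subnet are examples
/mnt/localmedia   192.168.10.0/24(rw,no_subtree_check,async)
```

then `exportfs -ra` to apply.)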
If you read the system requirements for a project server, your proposal is more than adequate if you are running about 5 Flames.
Scale the requirements up according to the recommendations.
@Alan’s 25Gb infrastructure supports shared framestore/media caches, as well as the smaller requirements for the project data.
For just the S+W metadata, yeah, 1gig is even enough.
But I highly recommend you run the S+W project server as a VM inside Proxmox and do frequent snapshot backups.
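If you go the Proxmox route, a scheduled vzdump is all it takes; the GUI scheduler under Datacenter > Backup does the same thing (the VM ID, storage name, and email address here are examples):

```
# snapshot-mode backup of the S+W VM (101 is an example ID) to a backup storage
vzdump 101 --mode snapshot --compress zstd --storage backup-nfs --mailto admin@example.com
```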
I can’t stress enough the efficiency and workflow gains you will get from going centralized with ALL storage.
$1000 and you get 8x 25gig ports for your workstations and 2x 100gig uplinks to your servers.
If only there was a way to do this with mac workstations.
Theoretically you could implement Thunderbolt networking if your computers were in close proximity to each other.
The Mac has 25gig available to it, although it’s much more expensive.
Yeah, and 100GbE is coming. ATTO makes some: ThunderLink Adapters – ATTO Technology, Inc.
Exactly, some sort of Apple tax.
All of the adapters are between 3 and 5 times more expensive.
If long Thunderbolt cables were less expensive than the cost of adapters, cables, and switches, then Thunderbolt networking would seem viable.
It appears that the truest promise of Thunderbolt has consistently been that it’s way more expensive than you imagine…
There are native Apple drivers now for the Mellanox ConnectX cards (ConnectX-4, -5, and -6 only) via com.apple.DriverKit-AppleEthernetMLX5.dext.
I have a 25GbE Mellanox ConnectX-5 NIC off eBay in a Sonnet Echo Express SEL Thunderbolt-to-PCIe box running on macOS 14.7. I paid around $300 total for the card and the enclosure.
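Easy enough to sanity-check the negotiated link speed once the dext binds (en8 is just what the card shows up as on my machine):

```
# confirm the ConnectX-5 negotiated 25GbE
ifconfig en8 | grep media
```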
It’s just a missed opportunity that a $100 cable and a $60 Thunderbolt PCIe card for Linux would achieve a 40Gb connection…
tempted.