I am buying a Synology to create a 10Gb network in my studio. I'm going to have a Mac Pro and a Z840 attached to it. I have my own RAID for the Mac Pro where I do the caching, and I intend to create the stones for my Z840 inside the Synology. So the Synology will be a mix of server/backup + StoneFS for my second machine.
My problem is, SSDs make things so expensive. Would a 10Gb connection to HDDs in a Synology be suitable for my second machine? Would I be able to play 2K in real time?
Should be fine. I can get real-time 10-bit 3K out of 4 drives in a RAID 5 array.
Real-time 16-bit may be an issue; hopefully someone with more knowledge about throughput chimes in.
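To put rough numbers on the 10-bit vs 16-bit question, here's a back-of-envelope sketch. The figures are my assumptions, not from this thread: "2K" taken as 2048×1080 at 24fps, uncompressed 10-bit RGB DPX packing into 4 bytes per pixel, and 16-bit RGB taking 6 bytes per pixel.

```python
# Back-of-envelope throughput needed for uncompressed real-time playback.
# Assumptions: 2048x1080 "2K" frames at 24 fps; 10-bit RGB DPX packs
# 3x10 bits into a 32-bit word (4 bytes/pixel); 16-bit RGB is 6 bytes/pixel.

def playback_mb_per_sec(width, height, bytes_per_pixel, fps=24):
    """Required sustained throughput in MB/s for uncompressed playback."""
    return width * height * bytes_per_pixel * fps / 1e6

print(round(playback_mb_per_sec(2048, 1080, 4)))  # 10-bit DPX: ~212 MB/s
print(round(playback_mb_per_sec(2048, 1080, 6)))  # 16-bit:     ~319 MB/s
```

A 10GbE link tops out around 1,100–1,200 MB/s in theory, so the wire itself isn't the problem at 2K; whether spinning disks behind the NAS can sustain those rates is the real question.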
I would consider setting up a single stone and sharing the storage with the second machine. That's our setup currently and it works great. But this greatly depends on the jobs you're considering throwing at the second unit.
Using the same storage for server/backup AND storage for the second machine makes me uncomfortable. Call me old fashioned (yes I am), but my call would be to have dedicated hardware for the stone.
Agreed. I use a Synology; it's not fast enough for a framestore, but it's great as a file server. Having a PCIe NVMe card or another internal disk array is the way to go for a good framestore. You really need greater than 700MB/sec reads and writes, and you might not get that even over 10Gb if your drives are just SATA. If you're connected via Thunderbolt, PCIe, or something internal, that's a different story; you can get those speeds when striping multiple drives together. I would separate the file server and framestore.
The newer Synology boxes allow for bonded Ethernet, and the new Mac Pros can also (theoretically) support multiple/bonded Ethernet connections. I wasn't able to get it to work, maybe because my switch doesn't support bonding, or maybe because I'm not smart enough or sufficiently dedicated.
In any event, it's worth investigating. You might get framestore-quality performance out of it that way, but you probably won't otherwise. I love my Synology boxes, but even over 10GbE they're not quite fast enough for 2K.
I also definitely agree with @Sinan and @theoman: don't put your backup and your cache storage on the same array. That's a nightmare waiting to happen.
In most cases "bonded Ethernet" doesn't do what you think it does, i.e. it's not meant to allow a single connection to "stripe" data across more than a single link. Depending on the configuration, it's meant for fault tolerance (in case a link goes down) and for allowing multiple connections (from multiple hosts) to be spread across multiple links, but apart from very specific, non-standard configurations, it's not going to give you "twice the speed" for a single client talking to a file server.
I too would suggest a dedicated framestore on the HP: PCIe cards that host M.2 NVMe SSDs are super cheap, and so are the drives. And if this is mostly just a cache, with all the "real" media on the NAS, you don't need a ton of cache, and you can probably live without any RAID protection. With a x16 adapter and 4 NVMe M.2 drives (even fairly small ones), you'll get all the local bandwidth you need.
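For a sense of scale, a quick sketch with ballpark figures (my assumptions, not measured numbers: ~985MB/s per PCIe 3.0 lane, ~3,000MB/s sequential per NVMe drive):

```python
# Rough local-bandwidth estimate for a x16 PCIe 3.0 carrier with 4 NVMe drives.
# All numbers are ballpark assumptions, not benchmarks.
lanes = 16
pcie3_mb_per_lane = 985     # usable payload rate per PCIe 3.0 lane, roughly
drives = 4
drive_mb = 3000             # typical NVMe sequential throughput

slot_bw = lanes * pcie3_mb_per_lane   # ~15,760 MB/s slot ceiling
array_bw = drives * drive_mb          # ~12,000 MB/s aggregate when striped

# 12000 MB/s: the drives, not the slot, set the ceiling here
print(min(slot_bw, array_bw))
```

Either way, that's an order of magnitude more than a 10GbE NAS can deliver, which is the point of keeping the cache local.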
But it’s twice as many cables… why isn’t it twice as fast?
I keeed, I keeeeed.
For proper bonding (which is possible) you can either run the cables directly or configure your switch with LACP (see Link Aggregation and LACP basics - Thomas-Krenn-Wiki).
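As a sketch, the Linux side of an LACP bond might look like this with iproute2 (interface names and the address are assumptions, and the switch ports must be configured as a matching LACP group for this to come up):

```shell
# Hypothetical 802.3ad (LACP) bond on Linux; eth0/eth1 and the IP are
# placeholders for your actual interfaces and subnet.
ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast \
    xmit_hash_policy layer3+4
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```

`xmit_hash_policy layer3+4` hashes on IP addresses and ports, which spreads different flows across links but still pins any single flow to one link.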
You can then read a theoretical 20Gbit in iPerf and some disk speed tests, but realistically it depends on how the data is handled. I have used 2x 1Gbit LACP on a Linux Flame, though not as a framestore, and I can't say there was a massive difference. It felt better when I was caching, as it was able to pull files in parallel, but…
If you run just a small Synology you won't ever have to worry about dual-10Gbit performance, though; even saturating 10Gbit from an 8-disk array is a stretch without NVMe caches. A DAS like a Pegasus is way faster than a NAS with the same drives in it; networking overhead is real.
I have an 8-bay and I get around 5Gbit without cache over SMB/NFS, and I just ordered the new 8-bay, which I'll run with an NVMe cache. Bonding made no difference here. I get more when I use a SAN connection, but that's not very useful.
I've been able to linearly scale bonded 1G Ethernet to increase speeds for a single connection. You must use round-robin mode, and each segment must be on a different VLAN.
Now that we have a 100Gb backbone and 25Gb to workstations, I don't mess with that.
If you are doing Linux-to-Linux bonding, then you can indeed create a "fat pipe" from multiple small ones. But for standard LACP (802.3ad, now 802.1AX) between a client, a switch and a file server (for instance), the hashing function used to determine which link a "flow" will go over is based on the MAC address, IP address, TCP port number… of the source and destination. So for instance if you are using TCP port number based hashing, and you were using an FTP client that opened separate TCP connections for a single transfer, or you were using SMB multichannel, then indeed you could end up using more than one link's worth of bandwidth between the client and the server. But a single NFS-over-TCP mount, for instance, would typically go over a single link.
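A toy illustration of that hashing behaviour (this is not any vendor's actual 802.3ad hash, just the shape of the idea): every packet of a flow hashes to the same link, so a single TCP connection can never use more than one link's bandwidth.

```python
# Toy layer-3+4 style hash: a flow's 4-tuple deterministically picks a link,
# so all packets of one connection are pinned to the same physical cable.

def pick_link(src_ip, dst_ip, src_port, dst_port, n_links=2):
    """Map a flow's 4-tuple to one of n_links bonded links."""
    return hash((src_ip, dst_ip, src_port, dst_port)) % n_links

# A single NFS-over-TCP mount is one flow -> always the same link:
nfs = ("10.0.0.5", "10.0.0.9", 871, 2049)
assert all(pick_link(*nfs) == pick_link(*nfs) for _ in range(100))

# Two SMB multichannel connections (different source ports) *may* land on
# different links; that is how multiple flows can use both cables at once.
a = pick_link("10.0.0.5", "10.0.0.9", 50001, 445)
b = pick_link("10.0.0.5", "10.0.0.9", 50002, 445)
print(a, b)  # may or may not differ; the point is each flow is pinned
```

That's why iPerf with parallel streams can show 20Gbit while a single NFS mount never exceeds 10.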
As Alan said, in the age of 25/40/50/100… Ethernet at reasonable prices, it’s a lot simpler / easier to just have a single, fat enough pipe for what you need.