Linux NVME M.2 16TB usable RAID10

Hey All,

Saw there were some related posts, but nothing recent and nothing quite addressing the topic…

Building a new Linux workstation (a Z8 Fury G5 - I know I could build my own, but HP just replaced the A6000 in my current workstation overnight, so there's something to be said for paying more and getting better service) and was looking to build a 990 Pro (MLC) M.2 frame store. While it's easy to find the parts to throw one together these days, I don't have a clue about the pros/cons and what is or isn't worth spending my hard-earned scratch on.

My current system has 8x SSDs in RAID 5 on an LSI hardware RAID card. That just doesn't seem cost-effective, efficient, or even necessary anymore.

Was looking to put eight 990 Pro 4TB M.2 drives in a RAID 10 solution. Is that a bad idea? HighPoint makes this card, which seems pretty intense and up to the rigors of everyday frame store use:

https://www.highpoint-tech.com/product-page/ssd7749m

But for a fraction of the cost I can pick up two of these (I'll have plenty of open slots in the system to accommodate them), or something very similar from another vendor:

Is it worth $1400 for better cooling and a more robust build? I'm mainly working with uncompressed 1080p in Flame, so I don't really need to worry about speed or totally saturating all the lanes with eight drives on one x16 slot.

Am I better off sticking with hardware RAID and SSDs? Or should I just put four 4TB M.2 drives on a single card in a RAID 5 solution and take the 4TB storage hit for easier recovery?

Any advice/suggestions would be greatly appreciated.

I'd stick with a HighPoint. And if you're only using the HighPoint as a cache, just YOLO it with RAID 0.

Archive nightly.
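If you do go the RAID 0 cache route and build it in software rather than on the card's own RAID, the mdadm version is roughly this - a minimal sketch, assuming the eight drives show up as /dev/nvme0n1 through /dev/nvme7n1 (device names, mount point, and the XFS choice are just placeholders):

```bash
# Plain stripe across eight NVMe drives: maximum speed, zero redundancy,
# so treat it strictly as scratch/cache space and archive nightly as noted.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=8 \
  /dev/nvme[0-7]n1
sudo mkfs.xfs /dev/md0
sudo mkdir -p /mnt/cache
sudo mount /dev/md0 /mnt/cache
```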


Not a full answer, but just some thoughts on two of your points:

RAID 5 vs RAID 10 - the tradeoff is capacity vs. rebuild time. RAID 10 is actually less efficient space-wise (eight 4TB drives give you 16TB usable instead of 28TB with RAID 5), but rebuilds are much faster because a rebuild is just a mirror copy rather than a parity recalculation across the whole array. Worth thinking through a bit more.

I keep hearing stories about NVMe/SSD cards without good cooling falling over - we just had one do exactly that earlier this week. So if you go for a big card and also anticipate driving it pretty hard (may not be the case for you), the more robust, properly cooled card could be worth it.

I have an unused 7505 4-drive NVMe card that I can part with below retail. On the HP you should be able to use hardware RAID as opposed to HBA mode.

Thanks Randy… I'm always afraid to run RAID 0. We work in a quick-turnaround broadcast environment about 80% of the time, and without at least some fault tolerance I'd be afraid of losing a day's work when we have a delivery due.

In your experience, have the HighPoints been reliable?


Thanks for the advice. Yeah, given the cost of M.2 these days, a 32TB raw RAID 10 is not ridiculously expensive. I've always gone RAID 5 because cost was prohibitive for the amount of storage we needed, but something about having a redundant frame store in case of a failure close to delivery time is very appealing.

If I did do that, it might be better to do two 16TB groups on separate 4-drive cards. That way if a card fails I don't lose anything.
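One way to get that card-level fault tolerance in a single volume is a ZFS pool of mirrors, with each mirror pairing a drive from one card with a drive from the other, so losing an entire card only degrades the pool. A rough sketch, assuming the drives enumerate as /dev/nvme0n1 through /dev/nvme3n1 on card A and /dev/nvme4n1 through /dev/nvme7n1 on card B (use the stable /dev/disk/by-id/ paths on a real system; the pool name is just a placeholder):

```bash
# RAID 10 equivalent: a stripe of four 2-way mirrors, each mirror split across the two cards.
sudo zpool create framestore \
  mirror /dev/nvme0n1 /dev/nvme4n1 \
  mirror /dev/nvme1n1 /dev/nvme5n1 \
  mirror /dev/nvme2n1 /dev/nvme6n1 \
  mirror /dev/nvme3n1 /dev/nvme7n1

# Common knobs for large media files - adjust to taste.
sudo zfs set recordsize=1M framestore
sudo zfs set atime=off framestore
```

The pool mounts at /framestore by default, and `zfs list framestore` should report roughly 16TB usable for this layout.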

Yeah, the HighPoints on Linux have been great. Go with a ZFS pool of type RAIDZ1 if you'd like parity.
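For the four-drive single-card option, the RAIDZ1 pool is a one-liner. Again just a sketch, with a placeholder pool name and device paths (use /dev/disk/by-id/ names on a real build):

```bash
# Single-parity RAIDZ1 across four drives: survives one drive failure;
# usable space is roughly three of the four drives (~12TB with 4TB drives).
sudo zpool create framestore raidz1 \
  /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

zfs list framestore
```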

Thanks! I might take you up on that offer if I decide to go with two cards.

When you mention hardware RAID - do you mean through the BIOS? I know I currently have a two-drive M.2 HP card that I use as a bootable RAID 0 managed via the BIOS.

It's not through the BIOS; the driver loads at boot time. It has a web GUI, but to make things easier, go with Randy's ZFS method.
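Once the pool exists, the day-to-day checks are simple too - for example, with the same placeholder pool name as above:

```bash
# Show vdev layout, drive health, and any resilver/scrub progress.
zpool status framestore

# Verify checksums across the whole pool; many people schedule this
# weekly or monthly via cron or a systemd timer.
sudo zpool scrub framestore
```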