Promise R8 RAID format

Hi Guys,

I’m about to wipe my whole system and do a fresh install, and I want some advice on formatting my RAIDs.

My system consists of:
iMac Pro with 2TB SSD storage, which holds the Flame Framestore
1 x 24TB Promise R8 (TB2) RAID 5 for media
1 x 24TB Promise R8 (TB2) RAID 5 backup of the media drive (CCC clones it every night)
1 x ARECA 16TB RAID 5 for archives, which is also cloned to a 38TB Synology server that in turn clones to the cloud.

Yeah, I know it sounds like overkill on the backup front, but I have the tech so I might as well use it.

I’m about to upgrade to Big Sur but I’m not sure whether to format the Promise media RAIDs as HFS+ or APFS.

Thoughts?

If you had the weekend to experiment, I would suggest binding those Pegasus devices together, trying to exploit the two Thunderbolt connections, and using your internal SSD as a caching device.
APFS or HFS+ might not matter; you may have other options entirely.
Are you intent on wiping your system disk too?

It’s tough to keep squeezing more life out of legacy equipment, but I imagine you could get more than a last gasp out of them by tying them together to build a bigger life raft.

Happy to go over some ideas privately if you like and if you find a successful recipe you could publish it back here for others to use.

FWIW I’m still limping some very old Macs along with ZFS.


I don’t know of any reason not to use APFS. It’s been around for what…3 years now? Almost 4?

I wonder if there are any benefits to using APFS over HFS+. My gut feeling is that if it’s not broken, don’t fix it. I’m now on Big Sur and everything is humming along nicely. I wonder if it’s worth the effort to change the RAID drives; if there’s a benefit then I’ll do it. Maybe I’ll do the main one first and then run some speed tests.

@philm - I prefer to keep one drive as the main media drive and the other as a complete snapshot. I don’t like having all my eggs in one basket, but thanks for the idea.

It seems counterintuitive, but I think if you mirrored your Pegasus drives you would end up with more usable storage overall, better redundancy, and more options for acceleration.

Right now you have two RAID 5 storage spaces, which means you are using some disk space for parity.
Moreover, you are cloning one unit to the other every night.

So you have two 8-bay devices with one drive’s worth of parity in each, which leaves you 14 drives’ worth of storage space and 2 drives’ worth of parity.

Let’s assume the drives are 3TB each: that’s 24TB of raw storage per R8, which comes out to roughly 21TB usable and 3TB of parity once formatted as RAID 5.

Then you update a duplicate of that data on the other R8 every day.

So you have approximately 42TB of usable storage and 6TB of parity in play, and you are prepared for a disaster (one unit burns; you bring the other unit online while you sue your reseller).
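
Tallied up, still assuming 3TB drives:

    Per R8, RAID 5:  8 x 3TB = 24TB raw  ->  21TB usable + 3TB parity
    Both R8s:        42TB usable + 6TB parity
    Unique data:     21TB (the second unit only ever holds the clone)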

I would propose that you investigate a RAID 10 setup:

(Make eight two-disk mirrors and stripe them together using the Pegasus software, macOS Disk Utility, or OpenZFS; there’s an OpenZFS sketch further down.)

======================================================
          L2ARC: iMac 2TB internal SSD

                 *** RAID 10 ***
======================================================
Pegasus-1            | Pegasus-2

Drive-1 (mirror-A-1) | Drive-1 (mirror-A-2)
Drive-2 (mirror-B-1) | Drive-2 (mirror-B-2)
Drive-3 (mirror-C-1) | Drive-3 (mirror-C-2)
Drive-4 (mirror-D-1) | Drive-4 (mirror-D-2)
Drive-5 (mirror-E-1) | Drive-5 (mirror-E-2)
Drive-6 (mirror-F-1) | Drive-6 (mirror-F-2)
Drive-7 (mirror-G-1) | Drive-7 (mirror-G-2)
Drive-8 (mirror-H-1) | Drive-8 (mirror-H-2)

You don’t have parity disks with which to reconstruct failed data, but you have two live copies of the data.

You get 24TB of usable space, with a 1-for-1 clone of every block on another hard disk, in another enclosure, with a separate power supply, over another data link.

If you set your enclosures to run in JBOD mode, so that you don’t get any weird bit swapping or compression from the RAID controllers, you can open this sort of system up to OpenZFS.
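
For the OpenZFS route, here’s a minimal sketch of the RAID 10 layout above, assuming JBOD mode so ZFS sees all 16 disks raw. The pool, dataset, and disk names are all made up; check diskutil list for your real identifiers first.

    # Made-up disk names - substitute your own from 'diskutil list'.
    # Each mirror pairs a drive in Pegasus-1 with its twin in Pegasus-2,
    # so an entire enclosure can fail and the pool stays up.
    sudo zpool create mediapool \
        mirror disk2 disk10 \
        mirror disk3 disk11 \
        mirror disk4 disk12 \
        mirror disk5 disk13 \
        mirror disk6 disk14 \
        mirror disk7 disk15 \
        mirror disk8 disk16 \
        mirror disk9 disk17

    # A dataset to hold the media volume
    sudo zfs create mediapool/media

Because each mirror spans both enclosures, reads stripe across all 16 spindles and both Thunderbolt links, which is where the speed comes from.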

With OpenZFS you can enable write caching and, more importantly, read caching.

Also, ZFS supports near-instantaneous snapshots. It’s what Time Machine was supposed to be, and if you’re already using Carbon Copy Cloner then you know how good Time Machine isn’t.
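
To illustrate with the hypothetical dataset from the sketch above, a snapshot is a single command and costs almost nothing until the data diverges:

    # Take a snapshot before a risky change - returns almost instantly
    sudo zfs snapshot mediapool/media@before-bigsur

    # List snapshots, and roll back if the change goes badly
    zfs list -t snapshot
    sudo zfs rollback mediapool/media@before-bigsur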

More importantly, if your RAID enclosures both fail and are unsupported, you can buy a SATA-capable rack-mount unit, slot those drives in, and carry on.

The key components for making large, cheap RAIDs run like clockwork and at great speed are RAM (always use the most that you can stuff in a machine), an L2ARC (NVMe is cheap as dirt these days; buy a lot and use it), and a ZIL (again NVMe, but you don’t need nearly as much of it).
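
In commands, bolting those onto the hypothetical pool above looks like this (device names are again made up; in practice you’d point them at NVMe drives):

    # L2ARC read cache - a big, fast NVMe device
    sudo zpool add mediapool cache disk20

    # ZIL/SLOG for synchronous writes - doesn't need to be big, just fast
    sudo zpool add mediapool log disk21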

The drawbacks with storage and filesystems are proprietary hardware, firmware, and software.
So if your Promise enclosures don’t permit disk passthrough, or your Thunderbolt connections mysteriously become as useless as FireWire, or your operating system doesn’t permit, well, anything, then you might be out of luck.

Like I said, I’m happily limping 12-year-old devices along using OpenZFS.
I know Alan Latteri is using OpenZFS too.

You can still use your Areca for archives, and your other drives for long term back up.

Actually, if you were going to wipe your long-term backup, I would suggest OpenZFS for that as well.
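
If the backup target speaks ZFS too, replication builds on those same snapshots. A sketch, with made-up pool names:

    # First pass: ship a full snapshot to the backup pool
    sudo zfs send mediapool/media@monday | sudo zfs recv backuppool/media

    # After that, send only the delta between snapshots
    sudo zfs send -i @monday mediapool/media@tuesday | sudo zfs recv backuppool/media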

Happy to talk about any of this further if you feel like it.

Fortunately my lockdown is right next to a bird sanctuary on the beach so I’ll be strolling up and down the wetlands all weekend, trying not to think about the armed idiots who are trying to destroy my freedom while claiming theirs.


Wow, thanks for the comprehensive response. The RAID 10 idea has got me interested; I just need to imagine different scenarios for my workflow, backups, and failovers. I’m also about to go into a bunch of work that will keep me busy for a few weeks, so I’ll pick your brain when I decide which way to go. Meanwhile I’m happy I’m on Big Sur now… and yes, it feels snappier.

Thanks again for your input.
