It seems counterintuitive, but I think if you mirrored your Pegasus drives you would end up with more storage space overall, better redundancy, and more options for acceleration.
Right now you have two RAID 5 storage spaces, which means you are spending some disk space on parity.
Moreover, you are cloning one unit to the other every night.
So you have two 8-bay devices with at least one parity drive in each, leaving you 14 drives' worth of storage space and 2 drives' worth of parity.
Let's assume the drives are 3TB: each R8 would have 24TB of raw storage, with approximately 21TB usable and 3TB consumed by parity once formatted as RAID 5.
Then you update a duplicate of that data onto the other R8 every night.
So across both units you have approximately 42TB of storage and 6TB of parity in use, but only about 21TB of unique capacity, and you are prepared for a disaster (one unit burns - you bring the other unit online while you sue your reseller).
I would propose that you investigate a RAID 10 setup:
(Make eight small mirrors and stripe them together using the Pegasus software, Mac Disk Utility, or OpenZFS - there's a sketch of the OpenZFS version after the diagram below.)
======================================================
*** L2ARC iMac 2TB SSD ***
*** RAID 10 ***
======================================================
Pegasus-1 | Pegasus-2
Drive-1 (mirror-A-1) | Drive-1 (mirror-A-2)
Drive-2 (mirror-B-1) | Drive-2 (mirror-B-2)
Drive-3 (mirror-C-1) | Drive-3 (mirror-C-2)
Drive-4 (mirror-D-1) | Drive-4 (mirror-D-2)
Drive-5 (mirror-E-1) | Drive-5 (mirror-E-2)
Drive-6 (mirror-F-1) | Drive-6 (mirror-F-2)
Drive-7 (mirror-G-1) | Drive-7 (mirror-G-2)
Drive-8 (mirror-H-1) | Drive-8 (mirror-H-2)
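Here's a minimal sketch of how you'd build that layout with OpenZFS. The pool name (tank) and the device identifiers (disk2 through disk17) are made up for illustration; on a Mac you'd pull the real ones from diskutil list, and on Linux or FreeBSD you'd want stable /dev/disk/by-id paths instead:

    # Find the real device identifiers first - they WILL differ on your system.
    diskutil list

    # Hypothetical layout: disk2-disk9 sit in Pegasus-1, disk10-disk17 in
    # Pegasus-2. Each mirror pairs a disk with its twin in the other
    # enclosure; ZFS stripes writes across all eight mirrors (RAID 10).
    sudo zpool create tank \
      mirror disk2 disk10 \
      mirror disk3 disk11 \
      mirror disk4 disk12 \
      mirror disk5 disk13 \
      mirror disk6 disk14 \
      mirror disk7 disk15 \
      mirror disk8 disk16 \
      mirror disk9 disk17

    # Sanity check: eight two-way mirror vdevs, no parity anywhere.
    zpool status tank

Pairing each disk with its opposite-enclosure twin is the point: a whole enclosure can die and every mirror still has a live half.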
You don't have parity disks with which to reconstruct failed data, but you do have two live copies of every block.
You yield 24TB of usable space, with a 1-for-1 clone of every disk sitting in the other enclosure, behind a separate power supply, over a separate data link.
If you set your enclosures to run in JBOD mode, so that you don't get any weird bit swapping and compression from the RAID controllers, you can open this sort of system up to OpenZFS.
With OpenZFS you can enable write caching and, more importantly, read caching.
Also, ZFS supports instantaneous snapshots - it's what Time Machine was supposed to be, and if you're already using Carbon Copy Cloner then you know how good Time Machine isn't.
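A quick sketch of what that looks like day to day (the dataset and snapshot names are invented):

    # Snapshot before a risky change - it returns in a blink and costs
    # nothing until data starts to diverge.
    sudo zfs snapshot tank/projects@before-cleanup

    # See what you have.
    zfs list -t snapshot

    # Roll the dataset back to that point if things go sideways.
    sudo zfs rollback tank/projects@before-cleanup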
More important than that: if your RAID enclosures both fail and are unsupported, you can buy a SATA-capable rack mount unit, slot those drives in, and carry on.
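The migration itself is just an export and an import, because ZFS stores the pool configuration on the disks themselves. Roughly, carrying over the pool name from the sketch above:

    # On the old machine, if it still boots:
    sudo zpool export tank

    # On the new box, after slotting the drives in (port order doesn't matter):
    sudo zpool import          # scans attached disks for importable pools
    sudo zpool import tank     # brings the pool online
    sudo zpool import -f tank  # force it if the old machine died un-exported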
The key components for making large, cheap RAIDs run like clockwork and at great speed are RAM (always use the most you can stuff in a machine), L2ARC (NVMe is cheap as dirt these days - buy a lot and use it), and ZIL (again NVMe, but you need far less of it).
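Bolting the last two onto an existing pool is two commands; the device names are placeholders for whatever your NVMe drives show up as:

    # L2ARC: a big, cheap NVMe read cache. If it dies, you lose nothing
    # but the cache.
    sudo zpool add tank cache disk20

    # ZIL/SLOG: a small, fast device that absorbs synchronous writes.
    # Worth mirroring, since it briefly holds data not yet on the pool.
    sudo zpool add tank log mirror disk21 disk22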
The drawbacks with storage and filesystems are proprietary hardware, firmware and software.
So if your Promise enclosures don't permit disk passthrough, or your Thunderbolt connections mysteriously become as useless as FireWire, or your operating system doesn't permit, well, anything, then you might be out of luck.
Like I said, I'm happily limping 12-year-old devices along using OpenZFS.
I know Alan Latteri is using OpenZFS.
You can still use your Areca for archives, and your other drives for long-term backup.
Actually, if you were going to wipe your long-term backup, I would suggest OpenZFS for that as well.
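Not least because the nightly clone then becomes zfs send/receive: instead of walking the whole filesystem file by file, you ship only the blocks that changed between snapshots. A sketch, assuming a second pool named backup on those drives:

    # First night: full replication of a snapshot.
    sudo zfs snapshot tank/projects@monday
    sudo zfs send tank/projects@monday | sudo zfs receive backup/projects

    # Every night after: incremental, just the changed blocks.
    sudo zfs snapshot tank/projects@tuesday
    sudo zfs send -i @monday tank/projects@tuesday | sudo zfs receive backup/projects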
Happy to talk about any of this further if you feel like it.
Fortunately my lockdown is right next to a bird sanctuary on the beach so I’ll be strolling up and down the wetlands all weekend, trying not to think about the armed idiots who are trying to destroy my freedom while claiming theirs.