OpenZFS crash course

I'll have to drag out a laptop so that I can search the PDF for the switch, but I'm pretty certain it supports link aggregation, and it has four RJ45 ports corresponding to the four 10G ports on your two Mac Pros.

I'm looking on the OWC website for the ThunderBay 6 to check which NVMe is compatible, but I don't even see the model on the website.
Perhaps you can drop the support team a note to ask which NVMe is compatible?
The PDF says that it supports 1x M.2 NVMe per enclosure.
The smallest capacity on the OWC website is 240GB for $55 each.
A worthwhile upgrade.
Oh
RAM is pricey right now - forget the RAM upgrade.
It's a luxury, not a necessity.

Neither the cache disks nor the RAM is absolutely necessary immediately.
One of the good things about OpenZFS is that you can start with the basics and add the go-faster stripes later.

Yeah, I'm more in the mood for a cheap afternoon improvement. :+1:

…says M.2 NVMe.

Great, that's the same PDF that I'm reading here.
Buy 1x OWC Aura 240GB M.2 NVMe per enclosure when you want to spring for them.
They're $54.99 each right now.
Probably cheaper if you buy Western Digital Blue from Amazon.
They will act as a write cache (a separate ZFS log device, aka SLOG) which is part accelerator and part security against data loss through power loss.
You currently have a Mac boot disk per Mac Pro, right?
And a HighPoint RAID card with flash drives in it?

All correct. Standard internal Apple boot off the 4TB internal, and the HighPoint filled with NVMes for the framestore.

Ok
Is everything backed up?
System disk is Time Machined?
Flame framestore ready to be wiped?
Every flame project backed up?
Reliable archives in place?
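
If you want a quick Terminal sanity check on the Time Machine part, tmutil can confirm a recent backup exists (on newer macOS, Terminal may need Full Disk Access for this):

tmutil latestbackup    # prints the path of the most recent completed backup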

Framestore is ready to be wiped.
Sys disk cloning now. Ready in a bit.
Flame projects backed up.

Is this major surgery?

Not at all
It’s very straightforward
But it's good practice to make upgrades or sidegrades reversible.

Make sure you have all the latest security and system updates on the boot disk before you install the openZFS tools.
It’s now 7am here
The house is starting to wake
The dog will require attention soon
And I’ll head out for a run.

The 30,000 foot view is:
Back everything up
Check the backups
Install OpenZFS
Wipe the ThunderBays
Start making disk pairs (ThunderBay 1 disk 1 & ThunderBay 2 disk 1)
Add mirror pairs 1 through 6 to the zpool
Add the HighPoint RAID as a cache (a command sketch follows this list).
Generate some test data
Copy it into the new zpool
Bond the Ethernet connections
Transfer some data over the network
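
A minimal sketch of the zpool steps in that list, assuming a pool named tank and hypothetical device names (the disk numbers below are placeholders; run diskutil list to find yours):

# create the pool with the first mirror pair, one disk from each ThunderBay
sudo zpool create tank mirror disk4 disk10

# add the remaining mirror pairs the same way (repeat for pairs 3 through 6)
sudo zpool add tank mirror disk5 disk11

# add the HighPoint NVMe as a cache device
sudo zpool add tank cache disk17

# generate some test data and copy it into the pool
# (OpenZFS on OS X mounts pools under /Volumes by default)
mkfile 10g /tmp/testfile
cp /tmp/testfile /Volumes/tank/

# confirm the layout
zpool status tank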


Same here. Lemme mull this over and find a way to dip my toe in.

Thank you, sir.

All good
A fresh system will give you the options you require:

Either:
SoftRAID (which you can do)
APFS tiered storage (which you can do)
Or openZFS (which seems new to you but you’ll soon be a master of it)

HFS or APFS volumes are only good for Mac, but an OpenZFS volume could be attached to a Windows machine, or Linux, or FreeBSD, or Oracle Solaris.
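
That portability is just a pool export/import; a sketch with a hypothetical pool name:

sudo zpool export tank    # cleanly detach the pool on the Mac
sudo zpool import tank    # attach it on the Linux/FreeBSD box the disks moved to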

The last consideration is whether you want a fixed framestore of NVMe (8TB?) or an 8TB fast cache on a 48TB mirrored framestore.

The only other thing you want to consider is to bond your 10Gb ports to get 20Gb between your flames.
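
On the Mac side the bond is a couple of commands. A sketch assuming the two 10Gb interfaces are en0 and en1 (check networksetup -listallhardwareports for the real names) and that the switch ports are configured for link aggregation/LACP:

sudo networksetup -createBond bond0 en0 en1   # create the aggregate interface
ifconfig bond0                                # confirm both members joined

One caveat: LACP balances per connection, so a single stream still tops out at 10Gb; the 20Gb shows up across multiple simultaneous transfers.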

And you can easily set all these scenarios up in half a day and test them all

First things first
Make your backups
Check your backups
Archive your flame projects
Archive your flame setups
Check your flame archives


Copy that. Thank you for the generous wisdom. This is an area I know nothing about. It all sounds cool, and better, but I'm pretty much guaranteed to be out of my element on this one. Ha! That's never stopped me before, but the only thing I know about ZFS is watching Linus Tech Tips talk about how great it is. So, yes, it's all new to me, and as a serial over-complicator… well, you get the idea.

Linus is great.

OpenZFS as a concept is less complex than hardware RAID.
It's less complex than a SAN.

It's very reliable on Linux.
It's reliable on macOS (until Apple breaks it like they break so many things - Shake, Aperture & QuickTime, for example).

The Oracle-only version is superior, but that's not OpenZFS; it's Oracle ZFS, and it's expensive.

The optimal expected outcome of this exercise is a good volume for you to do local and shared flame.

I’m back and available for a little while.
The shouty people in the senate chambers are great wallpaper when the volume is muted.
Is your backup complete?
Want to TeamViewer/FaceTime/phone call?

@randy
That was a fun phone call.
Once again, good job maintaining this safe space for nerds.
Thanks.

Good luck with the zfs experiments.
Let me know if you want me to post anything more to this list.


Thank you!


Password:
pool: macguyver
state: ONLINE
config:

NAME        STATE     READ WRITE CKSUM
macguyver   ONLINE       0     0     0
  mirror-0  ONLINE       0     0     0
    disk10  ONLINE       0     0     0
    disk4   ONLINE       0     0     0
  mirror-1  ONLINE       0     0     0
    disk11  ONLINE       0     0     0
    disk5   ONLINE       0     0     0
  mirror-2  ONLINE       0     0     0
    disk13  ONLINE       0     0     0
    disk6   ONLINE       0     0     0
  mirror-3  ONLINE       0     0     0
    disk14  ONLINE       0     0     0
    disk7   ONLINE       0     0     0
  mirror-4  ONLINE       0     0     0
    disk15  ONLINE       0     0     0
    disk8   ONLINE       0     0     0
  mirror-5  ONLINE       0     0     0
    disk16  ONLINE       0     0     0
    disk9   ONLINE       0     0     0

errors: No known data errors
randymcentee@dxs-flame-01 ~ %

Well, I got the 2 ThunderBay enclosures into OpenZFS. Took some fiddling with… of all things… Big Sur System Preferences security, and the kext nonsense, and csrutil, and something about the nvram boot something, because the kext wouldn't load.
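
For anyone hitting the same wall, the usual checks look like this. The zfs.kext path is what the OpenZFS on OS X installer uses, but treat it as an assumption for your version:

sudo kextload /Library/Extensions/zfs.kext   # triggers the Security & Privacy approval prompt if the kext is blocked
kextstat | grep -i zfs                       # confirm the extension actually loaded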

Read speeds are solid at 2,000MB/s according to Blackmagic; writes are slower, in the mid-400MB/s range if I recall.

I’ll write a bunch to it tonight and see what happens next.

The other wacky thing was just dealing with all the disk names, but I finally figured out how to get them all and match them up as individual mirrors.
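
For anyone repeating the name-matching, it's mostly diskutil; disk10 below is just an example taken from the status output above:

diskutil list                                 # enumerate every attached device
diskutil info disk10 | grep -i "media name"   # see which physical drive a BSD name maps to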

To add the L2ARC, it's as simple as:

zpool add $poolname cache $diskname

…right? And I'd just add 4 disks, which are my NVMe disks, right?
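
For what it's worth, that is the right command for an L2ARC (read cache). A sketch using the pool name from the status output above, with hypothetical device names for the four NVMe drives:

sudo zpool add macguyver cache disk17 disk18 disk19 disk20
zpool status macguyver    # the new devices appear under a 'cache' heading

One distinction worth keeping straight: 'cache' vdevs are L2ARC (read acceleration); the power-loss-protecting write cache mentioned earlier in the thread is a separate log device (SLOG), added with 'zpool add macguyver log <disk>' instead.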