Highpoint 7505 and Lenovo P620 incompatibility

In addition to my reply above, here is a good article explaining the differences between ZFS & XFS.

1 Like

Ugh. I went through the LVOL crap again to see if there was a speed improvement, and there wasn’t. Same old 360-380 MB/s.

I even bifurcated the PCIe slot in hopes that would help, but no. Same/same.

Chris, I hope that Sabrent card gives a better result.

1 Like

ZFS is like three commands to set up. No need to mess with fstab, mount points, blah blah. It’s super fast to set up, and I can easily drive it from Ansible to automagically deploy and rebuild my boxes without needing UUIDs or any other filesystem shenanigans.
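Roughly, the whole thing looks like this (pool name and device paths are placeholders for whatever your drives enumerate as):

```bash
# Create a striped pool across four NVMe drives (example device names).
# ZFS creates and mounts the filesystem itself; no fstab entry, no UUIDs.
zpool create tank /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1

# Optional: point the mount wherever you actually want it.
zfs set mountpoint=/mnt/media tank

# Sanity check.
zpool status tank
```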

I agree. The logical volume crap is way too convoluted. ZFS for sure. I was interested in seeing whether there was a performance increase doing it the old-school way, and it netted out the same both ways. Now I’m going to scrap the LVOL setup and go back to ZFS.

It comes tomorrow. We’ll see…

In my experience ZFS is 3-4x SLOWER than XFS on the exact same hardware, even after playing with all the tuning knobs. With XFS I just do a dumb, simple mkfs.xfs and go, and it fucking screams, maxing out the bandwidth of the PCIe slot. ZFS snapshots can be a lifesaver, though.

3 Likes

Looking at the feature comparison, ZFS has some unique features, but unless you specifically need them, XFS is the better choice from a technical standpoint; usability is another matter.

And you don’t have to use LVM. LVM really only provides two important functions: spanning drives and dynamic resizing. Nothing stops you from making an XFS filesystem directly on the bare NVMe device.
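As a rough sketch, assuming a hypothetical /dev/nvme0n1 (use whatever single device the card or drive actually presents), the no-LVM route is just:

```bash
# No pvcreate/vgcreate/lvcreate layers: make the filesystem straight
# on the block device and mount it.
mkfs.xfs -f /dev/nvme0n1
mkdir -p /mnt/raid
mount /dev/nvme0n1 /mnt/raid
```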

Regarding the performance: that is certainly odd. It could be useful to benchmark the raw device first and then the filesystem on top of it, to see whether the bottleneck is the hardware or the way the filesystem is set up.

There are a number of command-line performance tests on Linux: one runs via dd, one via hdparm, and there’s also fio. A quick Google search will surface the details. I remember using dd and hdparm over the summer when I had to rebuild my system.
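Roughly what those look like; the device, mount point, and sizes below are placeholders, and fio is the most representative of the three:

```bash
# Raw sequential read from the device (read-only, safe to run).
hdparm -t /dev/nvme0n1

# Simple sequential write through the filesystem (writes a 10 GB test file).
dd if=/dev/zero of=/mnt/raid/ddtest bs=1M count=10240 oflag=direct status=progress

# fio sequential read against the mounted filesystem.
fio --name=seqread --filename=/mnt/raid/fiotest --rw=read --bs=1M --size=10G \
    --direct=1 --ioengine=libaio --runtime=60 --group_reporting
```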

1 Like

ZFS performs well on a fairly fresh filesystem with a large percentage of free space and little fragmentation. As time goes on, especially if the pool is kept fairly full, ZFS performance can deteriorate quickly. It’s a known issue that even a quick Google search will turn up plenty of information about.
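If you want to see where a given pool stands, zpool reports both numbers directly (pool name is a placeholder; the FRAG column measures free-space fragmentation):

```bash
# Capacity and free-space fragmentation per pool; watch these climb over time.
zpool list -o name,size,alloc,free,capacity,fragmentation tank
```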

If ZFS is working fine for you, then that’s great. I personally would not recommend it for a performance-critical volume; XFS was designed for our sort of use case. But that’s just my opinion, of course.

2 Likes

Samsung PM1735

A large NVMe drive on direct PCIe. It works fine, it’s fast, it doesn’t overheat, and it’s awesome: simple, no drivers, no RAID, it just works as a single really good drive.

@ALan the XFS is awesome. 6389 MB/s.

I’m still having issues with fstab auto-mounting the RAID. The fstab entries keep bricking my Rocky install. However, with the new speed on the RAID, I don’t mind running mount -a when I reboot.

I know that creating the RAID is a little convoluted compared to ZFS, but in my case the results speak for themselves.

2 Likes

@snacks I can send you a copy of an old XFS-flavored /etc/fstab file tonight if that’s helpful.

@randy That would be awesome!

So glad to hear it man.

@cnoellert I’m interested to see if the Sabrent is faster without all the RAID hardware on the 7505. Less overhead, I would think.

Same. I’m thinking of giving it a go with XFS as well as ZFS, just to see. I DM’d you, but I’m curious whether you followed the ADSK guide for your XFS attempt or something else.

Just keep in mind that the initial speed of XFS and ZFS on a newly striped volume will be similar. It’s over time and with use that ZFS starts to slow down in comparison. Except in the case of the original post, of course.

Not on SSD. There is no fragmentation on SSD.

fstab.zip (1.2 KB)

@snacks here’s an example of an /etc/fstab for an XFS setup.
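Something along these lines; the UUID and mount point here are placeholders rather than anything from the attached file. The nofail option is what keeps a missing volume from breaking boot:

```
# Example XFS entry: "nofail" lets the box boot even if the RAID volume is
# absent, and the timeout stops systemd from hanging on it.
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/raid  xfs  defaults,nofail,x-systemd.device-timeout=10s  0 0
```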

Awesome! Thanks @randy! Got it all sorted out.

It’s more complicated than that.

Fragmentation still exists; it’s in the nature of any filesystem, regardless of the storage technology.

However, fragmentation on an SSD does not incur head-seek latency penalties, which are the primary bottleneck on spinning drives (which is what you’re referring to). So it’s reasonable to say that SSDs don’t suffer major hardware-driven performance degradation from fragmentation.

Look deeper, though, and there are still fragmentation penalties on SSDs, because fragmented files split I/O into more operations and cause extra work. That penalty, especially given the raw speed of NVMe, can be negligible when it comes down to hardware latency; less so when the filesystem’s allocation algorithm is simply inefficient.

That said, how big the penalty from this extra work is can differ between filesystem architectures, and it’s plausible that ZFS degrades worse than XFS even with solid-state I/O characteristics.
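If anyone wants to measure rather than argue it, XFS will report file fragmentation on the underlying device (device name is a placeholder; -r keeps it read-only):

```bash
# Prints actual vs. ideal extent counts and a fragmentation factor.
xfs_db -r -c frag /dev/nvme0n1
```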