Highpoint 7505 and Lenovo P620 incompatibility

I’ve been struggling for a while now to get my 7505 working. I’ve been through two cards at this point, in two different machines, and both have been terrible. I finally decided to give Highpoint a shout to find out what was going on. I figured it was a driver issue, but it turned out to be much worse than that.

According to Highpoint, there is a Broadcom chip incompatibility between the 7505s and Lenovo’s motherboard. The list they sent me includes a few other cards that are also incompatible. I don’t know if it’s a motherboard revision issue, but my BIOS is up to date and everything else is working.

SSD7500_7749_6780_FAQ_v1.00_23_09_19.pdf (91.2 KB)

My tech recommended running the card in HBA mode, but I can’t find anything about how to actually do that. I’m waiting on Highpoint to explain how it works.

I was at the end of my rope trying to figure this out. I had tried everything to get these cards to work and it just wasn’t happening.

Does anyone know anything about HBA mode? If not, I’ll be selling my cards, so now I’m in the market for some new NVMe RAID cards. Anyone have any suggestions?

Oh dude, sorry, it sounds like you’ve been through the wringer.

My P620s like these from Highpoint:

https://www.amazon.com/High-Point-SSD7101A-1-Dedicated-Controller/dp/B073W71K4Z/ref=sr_1_1?crid=WM0A195OGE8C&keywords=highpoint+SSD7101A-1&qid=1704473918&sprefix=highpoint+ssd7101a-1%2Caps%2C111&sr=8-1

https://www.amazon.com/HighPoint-Technologies-SSD7540-8-Port-Controller/dp/B08LP2HTX3/ref=sr_1_1?crid=B5C68N379G6K&keywords=highpoint+SSD7540&qid=1704473970&sprefix=highpoint+ssd7540%2Caps%2C74&sr=8-1

1 Like

Yeah, that mirrors my experience as well. The BIOS-level driver worked just fine, but the HighPoint software layer has consistently been a no-go. That’s with multiple 7505s on multiple iterations of Rocky on multiple P620s.

For me (your experience might differ), the card still passes all of the NVMe drives it houses through to the OS (HBA mode), so I’ve striped them up as a ZFS volume and have been running that way for the last seven months or so. That’s worked a treat on 8.5 and now 8.7.
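If it helps anyone going the same route, the stripe setup is roughly the sketch below. The pool name and the /dev/nvme paths are placeholders, so check what the card actually exposes on your box first:

```bash
# See which NVMe namespaces the card passes through (names will differ)
nvme list

# Plain stripe across the four drives. ashift=12 matches 4K sectors;
# recordsize=1M and atime=off are common starting points for large
# sequential media files.
zpool create -o ashift=12 \
  -O recordsize=1M -O atime=off -O compression=lz4 \
  framestore /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Sanity check
zpool status framestore
zfs get recordsize,atime,compression framestore
```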

@randy’s got info on the ZFS install and stripe process in his back pocket somewhere that he might share… It’s not as complex as going down the LV and XFS route that’s documented here.

1 Like

Yeah I’ve never used Highpoint’s hardware RAID. I just roll a software RAID using ZFS.

Shameless promotion. Setting up a Flame framestore with ZFS

That’s how I originally had it set up, but I wasn’t getting the speed I expected. In the Flame framestore speed test I was getting about 386 MB/s, and I had expected much more than that. I couldn’t do much more testing because most of the Linux speed tests I found want a raw /dev/sd(x) device to talk to.

I replied to the Highpoint guy asking for any additional information, but it looks like I’m heading back to ZFS for the time being.

For sure. Doing it that way was a major pain in the ass, and I learned my lesson the hard way. Learned a lot of commands, though.

1 Like

My Highpoint card has cost me so much pain.

0/10, would not recommend.

1 Like

What are you using instead? What’s worked for you, Finn?

On Linux I’m just using “stupid” adapter cards (ASUS, for example), but your mainboard needs to support bifurcation.
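About the only software-side step on that route is checking that the BIOS bifurcation setting actually took and every drive enumerates. A quick sanity check (output obviously varies by board):

```bash
# Each M.2 drive on the adapter should show up as its own NVMe controller
lspci | grep -i 'non-volatile\|nvme'

# ...and each should expose a namespace
nvme list
ls /dev/nvme*n1
```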

On macOS I went with the Sonnet cards.

Shit. I was expecting you of all people to have a solid solution.

Back to ZFS again, I think.

I’m thinking I may have a problem of perception. With only the framestore test available to measure the speed of a ZFS pool, and the crappy results from that, I don’t really have a quantifiable way to gauge the speed of the framestore. What is a reasonable expectation, and how do I quantify it?

Sorry to keep prattling on about this, but it’s been stuck in my craw for a while now.

Word. Regardless, the speeds you’re getting are really slow. Do you have the card fully populated?
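If you want a number outside the Flame test, fio doesn’t need a raw /dev/sd(x); it can write test files straight into the mounted pool. Something along these lines (the path and sizes are placeholders, and keep the total size bigger than RAM so you’re not just measuring cache):

```bash
# Sequential write against the mounted framestore path
fio --name=seqwrite --directory=/mnt/framestore --rw=write \
    --bs=1M --size=16G --numjobs=4 --ioengine=libaio --iodepth=16 \
    --direct=1 --group_reporting

# Same again for reads
fio --name=seqread --directory=/mnt/framestore --rw=read \
    --bs=1M --size=16G --numjobs=4 --ioengine=libaio --iodepth=16 \
    --direct=1 --group_reporting

# Note: drop --direct=1 if your ZFS version rejects O_DIRECT.
```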

There’s some truth to that. Speed tests are easy and convenient, but not always accurate and not always representative. It may be better to build a test project that mirrors your actual workflows, with footage that will stress the system, and see what you get. It’s also never straightforward to tell whether your workflow is I/O-, CPU-, GPU-, or memory-bound.

Also, you’re focused on the framestore, which is definitely important for some workflows. I might be wrong on this, as I’m still finding my own footing, but if you don’t cache sources and don’t do a lot of intermediate renders, the picture of where the performance comes from is more complex.

At the moment I’m using an internal M.2 NVMe as a regular drive, plus a second external NVMe via Thunderbolt (I’m not on a P620). They’re 2 TB and 4 TB respectively. Jobs live on the NAS, so these NVMe drives are just for sources and renders of the current job, nothing else. Simplistic, but also less of an IT headache. While I haven’t fully validated it, it doesn’t feel like storage speed or capacity is the productivity limit.

Yeah, I’ve got four Samsung 980 Pros in it, so it’s not a limitation of the NVMe drives. I think my frustration is that I can’t look at a test result and compare it to anything. I have an old 2.5" SSD IcyDock with 8 drives in it running PCIe 2.0, and it’s still (perceptibly) faster than what I’m getting from the Highpoint.

I’m trying to get maximum bang for my buck, but this is the one area I can’t quantify, and I know I’m getting nowhere near the performance I expected.
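Back-of-the-envelope, going off the published 980 Pro numbers and assuming ideal striping, 386 MB/s shouldn’t be anywhere close to the ceiling:

```bash
# ~5,000 MB/s sequential write per 980 Pro, four drives striped:
echo $(( 4 * 5000 ))   # roughly 20,000 MB/s theoretical aggregate
# A PCIe 4.0 x16 slot is good for about 31,500 MB/s raw, so neither
# the drives nor the slot explain a 386 MB/s result.
```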

Jan, I run kind of the same setup on my MBP: two OWC 2 TB NVMe drives, one for the framestore and one for clips in the job. Works pretty well. I just want to wring as much performance out of this P620 as I can.

1 Like

I’m going to pick up one of these and see how it goes →

Figure at 100 bucks it’s worth a spin. I’ll report back after.

1 Like

Just gonna throw this out again for like the 50th time: centralized S&W server for the win here. Local framestore is so 2010s.

Even if I had one Flame, I’d still do it.

2 Likes

The old ADSK XFS creation guide is antiquated. A straight mkfs.xfs is all you need; it figures out the block alignment automatically now. You can put it on the raw block device if you want, though putting an LV underneath isn’t a bad idea for future expansion.
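For anyone following along, the short version is something like the below. Device and volume names are placeholders, and the LVM layer is the optional part:

```bash
# Optional LV underneath, which leaves room to grow the volume later
pvcreate /dev/nvme0n1
vgcreate vg_framestore /dev/nvme0n1
lvcreate -l 100%FREE -n lv_framestore vg_framestore

# mkfs.xfs works out block/stripe alignment on its own these days
mkfs.xfs /dev/vg_framestore/lv_framestore
mkdir -p /mnt/framestore
mount /dev/vg_framestore/lv_framestore /mnt/framestore
```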

Alan, I just found your old CentOS S+W server setup video. You don’t have an updated version for Rocky by chance? I have an old PC sitting around that I could test it on, but wouldn’t it need a RAID for storage as well, so it isn’t bottlenecked by the I/O needs? What would the benefit be of having it on the server as opposed to local? I just have the one Flame.

Word. But if you don’t have a local cache volume, you need a networked one, which means you need a fast network, and then you need a NAS or another box serving out your storage.

Not poo-pooing the idea, but it’s overkill in this instance, where a 100-buck card could solve a bandwidth issue.

2 Likes

Out of interest, why are people suggesting ZFS over XFS? Are you planning on using snapshots or scaling the file system at a later date?

I’ve read a whole lot of complaints about performance off ZFS volumes, and unless you’re holding sensitive data that needs regular snapshots, it may not be the best choice of filesystem for Flame, IMHO. XFS is a higher-performance filesystem that utilises fewer system resources, RAM in particular. It’s also easier to manage.

ZFS is a great option for a NAS but I wouldn’t choose it for a local framestore.