45Drives HomeLab product line

Has anyone looked at 45Drives’ new HomeLab line of products as a possible solution for central storage, or more specifically a central Stone + Wire server?

I am looking at the HL15 system from 45Drives for a central Stone + Wire server solution. I’m wondering how it stacks up against other solutions this community may be using.

https://store.45homelab.com/configure/hl15

@MagicMtn - why not call 45Drives and tell them what you want to do?
It’s not the old days - I’m sure they would be delighted to find out what you intend to do and help you solve the problem.

remember, 15 drive bays is good for one user, not particularly good for two or more users, and as the workload becomes increasingly asymmetric, the performance will tail off.

also networking…

but those chassis are very good value for money.
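
Rough numbers on the multi-user point, if it helps; everything here (per-drive throughput, the RAID/contention factor, the uncompressed 4K stream) is an illustrative assumption, not a benchmark of the HL15:

```python
# Can a 15-bay spinning-disk array feed N Flame users at once?
# All figures are assumptions for illustration, not measurements.

DRIVES = 15
PER_DRIVE_MBS = 200        # assumed sustained sequential read per HDD, MB/s
RAID_FACTOR = 0.6          # assumed penalty for parity + concurrent/asymmetric access

# Assumed per-stream demand: 4K (4096x2160) 10-bit uncompressed at 24 fps
frame_bytes = 4096 * 2160 * 3 * (10 / 8)     # 3 channels, 10 bits each
stream_mbs = frame_bytes * 24 / 1e6          # MB/s for one playback stream

array_mbs = DRIVES * PER_DRIVE_MBS * RAID_FACTOR

print(f"one 4K stream      : {stream_mbs:6.0f} MB/s")
print(f"array under sharing: {array_mbs:6.0f} MB/s")
print(f"streams it could sustain: {array_mbs / stream_mbs:.1f}")
```

One user fits comfortably; two or three uncompressed streams already push past what the spindles can deliver once access becomes random.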

I have an existing relationship with 45Drives. I am simply in the research stage for what a central stone and wire solution may look like, cost etc.

Wanted to hear some feedback from this community before reaching out to any vendors to work with.

The HomeLab line from 45Drives is more of a DIY-type solution available through their online storefront.

If you want to work with the salespeople, you are almost certainly going to be quoted for their main product line at a higher price point. My understanding is the HomeLab line is only available through the site… I could be wrong on this. It appears to be a very new product.

What hardware are other people using?
What vendors are you working with for this type of solution?

We have three Mac based flames in our environment.

How about:

  • How big are your projects?
  • How many simultaneous projects?
  • How many simultaneous users?
  • How long do your clients squat your storage?
  • How do you do lifecycle management?

I would not recommend your suggested device as a shared solution - it will not cope today and will not scale for tomorrow.

My $0.02

(But I think 45Drives make great products, including the Houston interface, and all of the training and outreach.)

My 2c as someone who has just transitioned to central S+W:

→ I set up a 10TB NVMe RAID as a central framestore.

Three months in, with heavy projects on multiple machines, I am looking at 300GB used. It’s all just 10Gbit as well, which is more than enough, but NVMe latencies make Flame really happy.

That’s because we also work completely unmanaged, so the only things in the central framestore are timeline renders and maybe proxies. I also switched all timeline caches to DWAB EXRs, which makes everything tiny; it’s like ProRes but for EXRs.

It’s simple enough to adapt workflows to massively lower the amount of framestore space and network bandwidth you need to and from the framestore, which will save you massive amounts of money and save you from the headache of having giant archives.
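
To put rough numbers on the "tiny" claim: a minimal sketch, assuming UHD half-float caches and a ~5:1 DWAB ratio (DWAB is lossy and content-dependent, so the ratio is only an assumption):

```python
# Timeline-cache sizing: uncompressed half-float EXR vs DWAB-compressed EXR.
# The ~5:1 DWAB ratio is an assumption for typical footage; real ratios vary.

W, H, FPS = 3840, 2160, 25
CHANNELS = 3                                  # RGB, 16-bit half float
uncompressed = W * H * CHANNELS * 2           # bytes per frame
dwab = uncompressed / 5                       # assumed ~5:1 DWAB compression

for name, b in [("uncompressed half EXR", uncompressed),
                ("DWAB EXR (~5:1 assumed)", dwab)]:
    mbs = b * FPS / 1e6
    gbit = mbs * 8 / 1000
    print(f"{name:24s}: {b/1e6:5.1f} MB/frame, {mbs:5.0f} MB/s, {gbit:4.1f} Gbit/s at {FPS} fps")
```

Uncompressed UHD half floats alone would saturate a 10Gbit link; DWAB brings one stream down to a couple of Gbit/s, which is why 10Gbit plus NVMe latency feels fine here.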

Regarding S+W, you want to run it in some kind of virtual machine, e.g. under Proxmox.

It’s basically @ALan’s kickass setup scaled down to a smaller shop doing commercials.

performance is great

@finnjaeger - this is a much more tolerant method since NVMe performance and capabilities exceed the maximum network performance and capabilities.
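
A ballpark sanity check of that gap (the per-stick throughput and protocol efficiency below are assumptions):

```python
# Local striped NVMe vs. what the wire can actually deliver.
# Per-device and protocol-efficiency figures are assumptions, not benchmarks.

nvme_per_stick_gbs = 3.0                   # assumed sustained GB/s per Gen4 NVMe stick
sticks = 4
local_gbs = nvme_per_stick_gbs * sticks    # striped RAID0, best case

efficiency = 0.85                          # assumed TCP/NFS protocol overhead
links_gbit = {"10GbE": 10, "25GbE": 25, "40GbE": 40, "100GbE": 100}

print(f"local striped NVMe: ~{local_gbs:.0f} GB/s")
for name, gbit in links_gbit.items():
    print(f"{name:6s} usable     : ~{gbit / 8 * efficiency:.2f} GB/s")
```

Even a modest 4-stick stripe outruns anything short of 100GbE, so the network, not the flash, sets the ceiling for shared access.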

talk about what kind of CPU/RAM/network/OS/Hypervisor/VM backs this up and then you’re building a recipe, and that’s a very worthwhile discussion.

then @ALan might/could choose to detail his own racing car and the community can start working on plans/dreams/aspirations.

It’s a non-trivial exercise with spectacular results.

Our three Flames each have 15TB of available frame storage, and a central server for project-related data and archives that’s about 120TB.

2-3 simultaneous users. Not really too sure about the simultaneous projects thing.

Clients keep data on our storage for 3 to 6 months (estimated) and then things get archived to LTO tape.

As Phil mentioned… I would love to hear more about CPU/RAM/network/OS/hypervisor/VMs etc. to see exactly what other people are doing, what exact hardware they have purchased, etc.

At the price point of our current solution, we are spending around $6k on the storage that attaches to our systems. My assumption is that the central S+W solution may be around double that cost, so that’s where the difficulty of getting this type of solution off the ground lies. I will have to explain to people why we are spending more and what the benefit may be.

What is this comprised of?

Remember you need to have a high-speed network too. You can get cheap Mellanox 25Gig cards off eBay for ~$50, and you can get a cost-effective switch from Mikrotik for ~$800.

(2) 2019 Mac Pros with OWC Accelsior 4M2 16TB
(1) iMac Pro with OWC ThunderBlade 16TB

We have a 10Gig switch with some available 40Gb QSFP+ ports.

If we utilized this switch, would the 10Gig ports be sufficient, or should I be looking to use the 40Gb ports?

@ALan Thanks for the input.

What is the specific breakdown of the storage topology? Your post is a summary.

I would not risk it with 10gig. Spend the $1200.

Also, everything I say will be coming from the perspective of a Linux infrastructure and workstation. I have ZERO experience with Mac apart from being used as our side internet machines.

Not 100% sure what you’re asking.
The storage on each system is 4x 4TB NVMes striped as RAID 0.

On the Mac Pros they are PCIe-connected, and on the iMac the ThunderBlade connects via Thunderbolt.

Not against looking into switching to Linux; however, the Macs have been solid for us, and being able to use Jump Desktop software makes it easy to give access to remote freelancers, which is something we do often.

The way I pictured it, I could build a central framestore using Proxmox and Linux VMs that could be accessed by the Macs, or is that not recommended?

@MagicMtn - Proxmox is an excellent solution.
@Alan turned me on to Proxmox nearly 10 years ago.
I prefer it to Hyper-V and could never afford VMware.
(Nowadays nobody can afford VMware.)

45Drives appears to be a great partner for you on your current project, since they have a full support mechanism for Proxmox.

It’s free to try, and cheap to support.

@MagicMtn - it also appears that you have the guts of a system anyway - you could repurpose all of your NVMe right now into a shared volume.
Tidy up
Do your archives
Start on Friday night
Finish on Sunday
Restart production on Monday.

Yes, this is the info I was asking for. So you have about 48TB in NVMe. I assume this is the M.2 form factor. Enterprise gear really doesn’t have slots for mass amounts of M.2, usually just two ports on the motherboard for boot. You could get PCIe cards that hold a lot of sticks, but then you have no hot swap, and I don’t really consider that enterprise-solid. But it would also be a waste not to use those.

  1. I really like Dell servers. They’ve been rock solid for us for years, and I’ve bought almost exclusively off eBay without issue. But certainly you could go to a reseller or whatever.

  2. If you have a machine room and cooling and noise are not an issue, I’d look at the R7x0 series (R740, R750, R760) or the R7x15 series (R7415, R7515, R7615). These are 2U units, either Intel or AMD. They come in a variety of configurations, and most have 12 bays for 3.5" HDDs, though some configurations I think have an additional 4. Even using conservative 16TB spinning drives, after accounting for RAID6 you could easily get 160TB to accommodate your current workload (see the quick capacity calc after this list). 20/22TB drives are commodity now. Although SATA will work, SAS is much better. I’d buy 3 to have as cold spares.

  3. If noise or heat is a problem, then look at the Dell T6x0 series (T630, T640). They are servers, but in tower form. They are 5U-equivalent and quiet. We have one from 10 years ago still in daily service without issue. They can be configured for up to 18 3.5" drives.

  4. Find a PCIe add-in card that accommodates the largest number of M.2 sticks, which will likely be 4, and add those internally to use as either ZFS or LVM/XFS/hardware-RAID cache.
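
The quick capacity calc referenced in item 2; nothing clever here, just two parity drives deducted and the TB-to-TiB conversion the OS will show you:

```python
# Usable capacity of a 12-bay chassis with RAID6 (two drives' worth of parity),
# for a few commodity drive sizes.

BAYS = 12
PARITY_DRIVES = 2

for drive_tb in (16, 20, 22):
    raw_tb = BAYS * drive_tb
    usable_tb = (BAYS - PARITY_DRIVES) * drive_tb        # marketing terabytes
    usable_tib = usable_tb * 1e12 / 2**40                # what the filesystem reports
    print(f"{BAYS} x {drive_tb}TB: raw {raw_tb}TB, usable ~{usable_tb}TB (~{usable_tib:.0f} TiB)")
```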

Depending on configuration, the R series can probably have about 4 open PCIe slots, and the T series has, I think, 6 or 8. Remember you will need one for the high-speed network card.

You can now have a single server with more storage than you currently have, plus reuse the M.2 NVMe as a cache layer.

Depending on what OS you use, you might host your S+W database servers locally. I like to use separate storage and VM hosts (Proxmox).

Remember downtime is almost always more expensive than anything else.

I’ll post some links below. These are just the first things I found on eBay, and definitely not selected for either price or optimal configuration.

I have an HL15 at home, with 8x18TB drives running RAIDz2 (ZFS’s version of RAID-6). It’s primarily my home server/dumping grounds, but I did want it to be fast for moving big files around and light home-Flame usage, to replace a small 4-drive SAS array currently directly attached to my main workstation at home (that spends most of its time running Windows but I also dual-boot Linux for occasional Flame stuff at home.)

Running 10Gbit copper, I get pretty consistent 1GB/sec transfers. Linking a bunch of material of various flavors (4k XAVC MXF and ProRes) off it, I edited a little personal piece at home and it was well fast enough. I do not yet have the framestore on it, just source material, but will be trying that next at some point to see how it goes.

I am considering doing an HL15 at the office to centralize and replace two existing/aging SAS arrays that are individually attached to my Flame workstation and an Avid workstation. I don’t really have any concerns about it speed-wise as a source media server, but might still do a small local SSD/nvme to use as the Flame framestore for faster cache playback when needed, depending on how my further home experiments go.

Been pretty happy with the HL15 so far! I just got one of the fully built systems rather than building my own, as I’d just built my home workstation earlier this year and it’s not something I enjoy doing more than once every few years.

In addition to what Alan just wrote, which I find quite valuable: not sure if this is of interest to you, but…

I’ve found that many people don’t know about enterprise/server SSDs, also known as 2.5" SFF NVMe SSDs, which actually have the same flash speed as M.2 form-factor NVMes but several advantages over M.2, not just price…

  • They are built for enterprise environments, which means they are way more reliable, with much higher TBW (total bytes written), which is key in these environments.
  • These enterprise SSDs are U.2, which is quite similar to the SATA/SAS interface, so it’s cheaper to run them in RAID mode. Why cheaper? A good NVMe RAID card can typically hook up 4 M.2 NVMe SSDs and costs at least $1K, while a brand-new RAID/HBA card for enterprise SSDs can cost around $100 and handle up to 8 drives; there is also a 16-drive version (Broadcom 05-50077-02 PCIe 4.0 x8 HBA 9500-16i Tri-Mode, on eBay) for just under $300 (see the quick cost comparison after this list).
  • M.2 NVMe drives get QUITE HOT, which means degradation over time… 2.5" NVMe SSDs don’t run that hot. Also, temperature can lead to damage = data loss.
  • In case of data loss, recovering from these would be cheaper than from M.2 NVMe, thanks to the U.2 SATA/SAS-like interface, just like any other mechanical HDD. So, how many NVMe cards do you need to buy to get 16 M.2 NVMes? If you have the money, great, but do you have enough PCIe slots? A board with several PCIe slots costs a lot of money as well… so… yeah, running M.2 NVMe is quite expensive and not that reliable (IMO). They are fast, but not many services do M.2 NVMe data recovery.
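
The quick cost comparison referenced above, using the rough prices quoted (these are the figures from this thread, not a market survey):

```python
# Cost per connected drive for the card options mentioned above.
# Prices are the rough quoted figures, purely illustrative.

options = {
    "M.2 NVMe RAID card (4x M.2)":      (1000, 4),
    "U.2/Tri-Mode HBA, 8-drive (new)":  (100, 8),
    "U.2/Tri-Mode HBA, 16-drive":       (300, 16),
}

for name, (price_usd, drives) in options.items():
    print(f"{name:32s}: ${price_usd:5d} -> ${price_usd / drives:5.0f} per drive connected")
```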

Just my two cents.

OK, that’s interesting.
I currently have a Dell R730xd running TrueNAS that is used for the 120TB server I described above.
Are you suggesting regular mechanical hard drives are able to read and write fast enough to work for a central framestore as long as I have the correct network setup? (Or is the PCIe card with 4 NVMes necessary?)

Right now I am running TrueNAS with 12x 16TB drives.
The topology is 2x 6-disk RAIDZ2 vdevs for the storage pool.

I was always under the impression that this setup wouldn’t be fast enough because ZFS had too much overhead or whatever. I could never really find any solid info on that, so I never wanted to risk it.
I didn’t think to add NVMes as an L2ARC cache to speed things up.

The Dell system currently just runs on 10Gig. I could slap a 40Gig card in the Dell and connect it to my switch, and maybe that’s all I need here for a test.

The Macs would still be at 10Gig, but as a test case it seems like it could be a good first step.
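
If it helps with that test, here is a minimal sequential-throughput smoke test in Python; the mount point is hypothetical, and the read pass only means much if the file is bigger than the client’s RAM (otherwise you mostly measure the local page cache):

```python
# Minimal sequential write/read smoke test against a mounted share.
# /mnt/truenas_test is a hypothetical path -- point it at the NFS/SMB mount
# of the TrueNAS pool.

import os, time

PATH = "/mnt/truenas_test/throughput.bin"   # hypothetical mount + file name
BLOCK = 4 * 1024 * 1024                     # 4 MiB per write, roughly frame-sized I/O
TOTAL = 64 * 1024**3                        # 64 GiB test file
buf = os.urandom(BLOCK)

t0 = time.time()
with open(PATH, "wb") as f:
    for _ in range(TOTAL // BLOCK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())                    # make sure data actually hit the server
write_mbs = TOTAL / (time.time() - t0) / 1e6

t0 = time.time()
with open(PATH, "rb") as f:
    while f.read(BLOCK):
        pass
read_mbs = TOTAL / (time.time() - t0) / 1e6

print(f"write: {write_mbs:.0f} MB/s   read: {read_mbs:.0f} MB/s")
os.remove(PATH)
```

Anything comfortably north of 1 GB/s in both directions over the 40Gig link would suggest the pool itself is not the bottleneck for a first test.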

Then you basically have all you need. Yeah, add a 40gigE card in that thing, buy more RAM off eBay, and add NVMe as cache.

ZFS without lots of RAM and Cache is very slow. Those 2 things help to make it tolerable.
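
For a rough sense of the RAM side: L2ARC itself consumes ARC memory for its headers. The per-record header size below is an assumption (it has changed across OpenZFS versions), so treat this as an order-of-magnitude sketch only:

```python
# Rough estimate of ARC RAM consumed by L2ARC headers.
# header_bytes is an assumed figure; check your OpenZFS version's docs.

l2arc_bytes  = 4 * 1e12        # e.g. 4 TB of NVMe dedicated to L2ARC
recordsize   = 128 * 1024      # default ZFS recordsize of 128K
header_bytes = 96              # assumed ARC header cost per cached L2ARC record

records = l2arc_bytes / recordsize
ram_gb = records * header_bytes / 1e9

print(f"~{records/1e6:.0f}M cached records -> ~{ram_gb:.1f} GB of ARC spent on L2ARC headers")
```

So the "buy more RAM" part is not optional: a big L2ARC plus a decent working set eats memory quickly.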

@MagicMtn - Intel NICs can do onboard processing, bypassing CPU bottlenecks.

The recipe appears to be shaking itself out.
Lots of RAM,
Lots of CPU (for PCIe lane management),
Intel NICs (use link aggregation for more bandwidth),
A good managed switch,
Robert is your mother’s brother…

But this is not 800 bucks and the cost of a few drives.
Reuse what you can.
Use the NVMe as a removable accelerator.

You’re most of the way there.