Flame noob in need of first time system advice

Hi all,

I’ve just joined this forum on the advice of a seasoned Flame friend of mine. What a hive of knowledge!

So quick background… I’ve been a Nuke compositor for 16 years, mainly in high-end film and TV, but recently moved into commercials / short form and set up my own cloud-based Nuke VFX studio during the pandemic (it sounds fancy, there are only two of us ;). I am in the process of getting Flame set up in the cloud too, but as my internet speeds are so slow I would need to conform projects locally, then send up the plates and run the edit in the cloud (should I decide not to do everything locally).

I was planning to buy a new workstation to do heavy 3D and some 6K uncompressed Resolve grading. I had been really keen on the idea of getting the Threadripper Pro 32-core with an Nvidia 4090 card (twice as fast as the A5000, I believe) when they come out next week; this would tear through 3D work. But I also want to run Flame, and my Flame friend said I’d likely run into compatibility nightmares with new GFX cards that are not Quadro, and also that custom builds that do not meet the Flame recommended specs are not for the faint hearted. And I’m faint hearted when it comes to Linux :sweat_smile:

So I have been looking at buying an old HP Z8 G4 (PCI 3 motherboard) in the following configuration:

1 x HP Z8 G4 1450W (Grade A) - Windows 10 Pro dual boot with Rocky Linux 8.5
2 x Intel Xeon Gold 6146 - 12-Core 24-Threads 3.20GHz (4.20GHz Boost, 24.75MB Cache, 165W TDP)
12 x 16GB - DDR4 2666MHz (PC4-21300, 2Rx4)
2 x 8TB - SATA-3 (7,200 RPM, 6Gb/s) HDD - Mirrored for archive storage

Then separately I would buy:
PNY NVIDIA A5000 24GB
HighPoint 8-Channel M.2 NVMe RAID Controller
500GB Seagate FireCuda 530, M.2 (2280) PCIe 4.0 - Linux boot drive
500GB Seagate FireCuda 530, M.2 (2280) PCIe 4.0 - Windows boot drive
3 x 2TB WD Blue SN570, M.2 (2280) PCIe 3.0 (x4) NVMe 1.4 SSD, 3500MB/s read, 3500MB/s write - RAID 5 - Linux disk
3 x 2TB WD Blue SN570, M.2 (2280) PCIe 3.0 (x4) NVMe 1.4 SSD, 3500MB/s read, 3500MB/s write - RAID 5 - Windows disk

So a few questions:

CPU
Autodesk recommends dual 12-core Intel Xeon Gold 6136 @ 3.0 GHz. Should I reduce my spec to match this? The one I have selected is slightly faster, but I have no idea how exactly I have to match the specs for support and compatibility.

HDD
Does this sound like a good config? Do you know if I can run two different RAID 5s off one card, as well as the two boot disks in a non-RAID config?

HDD
Is there a way to copy data between the Linux and Windows file systems?

GFX card
I’m guessing the Quadro A5000 over the GeForce 4090 is the way to go if I don’t want a headache? It would be a shame, as the 4090 will be almost double the performance at a smaller price.

File System
Can I run my Nuke and Resolve on the same Linux RAID disk as Flame, or does Flame have a special file system? Is there a good reason to keep this separate?


Sorry for the long post and all the many questions! I’ve done my best to get this far with the build but am struggling on the last few points. Also, if you have any thoughts on the above, I would love to hear them.

FYI, the likelihood is that once I learn Flame and use it on this workstation and in the cloud, I will really only use Flame Assist so I can do the conform, clocking and mastering (short form). I will do the rest of the comp in Nuke (hope that’s not a dirty word here :wink:)

Many thanks!

Kia

Yo! Welcome @kia! Okay that’s a juicy post. I know a little but not all, so here are some big picture thoughts.

Thought 0.
Damn! You’ve done your homework!

Thought 1.
Damn! That’s a ton of stuff to fit in one case! Are you sure that’s all gonna fit? The 4090 is 3 slots, right? Double check space and slots and all that. The x2 8TB drives? For archiving what? Project files and stuff? I’d rather have a small NAS than archive on the same box.

Thought 2.
Your friend is right and wrong. Autodesk certifies workstations, but that does not mean those are the only ones that work. If you look at the Flame benchmark you’ll see all the sortsa stuff that does run Flame. Now, that doesn’t necessarily mean the installs were easy, but I’d reckon that the number of Linux black belt people in the world who are willing to troubleshoot all kindsa driver issues and all that are the absolute minority. So you absolutely do NOT need to match the Autodesk specs. It’s just what some OEM partner sent to them that guarantees awesome performance, but it doesn’t mean you need to buy that much (or that little!) performance.

Thought 3.
There are a few 3080 and 3090 owners out there and yes, they outperform the RTX A5000, but maybe not the A6000. There are a few random bugs/issues with some of those cards that I think perhaps @friendlyape or @Sinan could share, because I know they are using these configs. Autodesk is pretty adamant that they do not support ‘gamer cards’, and rightfully so. There are other issues too; for example, HP Z Anywhere, a big part of our business, I think requires Ampere or Quadro cards. BUT the good news with all this cloud computing fun stuff happening is that slowly the supported hardware is increasing, and hopefully that means graphics cards as well.

Thought 4. I dunno about Intels anymore. They are soooo expensive and their Threadripper Pro counterparts are crushing them in benchmark performance. Plus cost! Currently the Threadripper Pros are available from Lenovo in the P620 and the new Dell (Dell has entered the chat). I have a bunch of P620s but I’m ‘just a Flame guy’ with the occasional SynthEyes/Blender/PFTrack deal, so local 3d rendering is a secondary concern. But the P620 and Dell are smaller chassis so you won’t be able to cram everything in there.

Thought 5. Having everything in one box doesn’t make sense. At least for me. I do dual boot into Windows occasionally, but at that scale you may rather use a NAS. I use a couple Synologys cuz if I have to share between partitions you might as well share between computers. I think.

Thought 6.
I love the Highpoints. I’ve got a 4 and 8 banger and they’ve been solid for me. @finnjaeger has had some bad luck but for me and @andymilkis they’ve been great.

Thought 7.
Flame’s framestore does not need or have a special filesystem. I use either mdadm on Linux with an XFS file system (a software RAID 0) or ZFS, with good results. The Highpoint just presents all the NVMe sticks to the OS as available drives, so you could theoretically make two separate 4-stick software RAIDs, one in Linux, one in Windows, and off ya go. The Flame framestore is a managed framestore which will be in a subfolder on your drive, and you’ll never need to go into that folder, but you can absolutely use that same disk for whatever else you want. Not sure about the Linux to Windows or vice versa though, with data moving back and forth.
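If you end up going the mdadm route, it’s only a handful of commands. Here’s a rough Python sketch of the idea, nothing more; the device names are made up, so check lsblk first and make sure you’re not about to format a boot drive:

```python
# Rough sketch only: build a 4-stick NVMe RAID 0 with mdadm and put XFS on it.
# Device names are examples; run `lsblk` first so you don't wipe a boot drive.
import subprocess

NVME_STICKS = ["/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1", "/dev/nvme4n1"]
ARRAY = "/dev/md0"
MOUNT_POINT = "/mnt/framestore"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Stripe the four sticks together (RAID 0)
run(["mdadm", "--create", ARRAY, "--level=0",
     f"--raid-devices={len(NVME_STICKS)}", *NVME_STICKS])

# Format the array as XFS and mount it
run(["mkfs.xfs", ARRAY])
run(["mkdir", "-p", MOUNT_POINT])
run(["mount", ARRAY, MOUNT_POINT])

# To survive reboots, also append the `mdadm --detail --scan` output to
# /etc/mdadm.conf and add the mount to /etc/fstab.
```

You’d then just point Flame’s setup at that mount for its managed folder.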

Thought 8.
You ready for this box to be $50 a month in electricity and heat up your office by 4-5 degrees Fahrenheit, right?

Thought 9.
If you’re in New Zealand @friendlyape might be selling a box, no idea if it’s what you want or where you are.

Thought 10.
Wanna build one yourself? I’m thinking about building a Threadripper Pro on top of this. Threadripper Pro Motherboard | ASUS Pro WS WRX80E-SAGE SE WIFI .

And Nuke isn’t a bad word here as long as you publicly admit that it’s both pricier and inferior to Flame. I kid, I kid. It’s an amazing tool and a lot of us here use it daily.

The 8-banger Highpoint is great in Linux but absolutely doesn’t work in the Mac Pro, now confirmed by 3 people with the same card… :stuck_out_tongue:

@Kia - welcome. I was in a very similar situation a few months ago. Coming from Nuke and other apps and finally adding Flame to the mix.

I did build myself a new system, as my Nuke system is Win10 and a dead end for Flame, unfortunately. A few considerations come to mind:

You discuss cloud and local. Do you plan on having the local system just for conform, or do you plan on doing all Flame work local? That might impact how you configure the system. I think if you can build one local, that is preferable.

I’ve looked at the Z8 (and the Dell equivalents). In the end I got a system from a local system builder I’ve worked with before (Puget Systems). They specialize in post production systems.

HP and Dell workstations have the advantage that they have a big customer service / spare parts network, and that the configs are certified by Autodesk. The downside is that these systems are generally one generation of hardware behind because of their internal product roadmaps. So if you build a system you want to keep for a while and want to buy into the latest to maximize that investment, a semi-custom build can be a better deal. And while Autodesk certifies a few configurations, you can certainly put Flame on other configurations and it works just fine.

The system I’m running on for the last few months has an Asus motherboard with 10GbE, a 12th-gen i9 16-core CPU, 128GB RAM, an A5000 GPU, and two Samsung M.2 SSDs, one for the OS and one for framestore / renders. It does have Thunderbolt for external storage. The install required doing the NVidia driver separately, but other than that it was a regular Flame install and it runs perfectly smoothly. Right now I’m on a job with plates that are 5,440x2160 without any noticeable slowdowns.

The alternative system (which runs Nuke and all the other apps), and is relevant for one of your choices above, is a Supermicro motherboard, Xeon W-3365 32-core, 128GB RAM, 2x 4TB FireCuda, and an A5000, with a Highpoint SAS RAID controller and external QNAP enclosure. Would have loved it for Flame, but I’m not giving up my Win10 install, and it’s a bit more pricey than the other system.

I’m a firm believer in staying with NVidia’s workstation cards. Keeps things simpler. A5000 or A6000 depending on budget. On the FireCudas, watch the thermals. They get super hot, and my system builder had to put a special fan on them to keep them from throttling at the hardware level. Since they sit below the PCIe cards you often can’t put heatsinks on them without blocking precious ports.

I’ve seen mixed results with M.2 RAID configs, and not sure they’re worth it. I’d rather have a few M.2 as straight devices for render and cache, and then a large RAID for the job files. Divide things up so you always read from one drive and write to another one.

I’m still relatively new to Flame, so I don’t have a good feel for which tasks are GPU bound and which are CPU bound. So it’s good to strike a balance between them. It definitely feels like your Z8 uses a pretty old CPU - the Xeon 6146 is a Skylake-class CPU from 2017. That’s several generations behind. Same with PCIe 3 - at this point PCIe 4 and even 5 are standard. Buying a new system with PCIe 3 seems like a marginal investment. Certainly pairing it with a 4090 would be a very lopsided config.

Regarding sharing files between Linux and Windows: actually, I’m keeping my big RAID on my Win10 box, where it’s shared via SMB3, and my Flame box only has the two on-board M.2 drives; for the rest it uses the Win10 RAID via 10GbE SMB without any issue.

As to running Nuke and Flame on the same Linux image - good question. I assume the answer is yes, but since Flame comes with its own custom image, there may be surprises. I ran into an issue with Syntheyes not installing on the Flame ISO and Russ only being half-interested in fixing it. So for now I’m keeping my Linux box dedicated to Flame, and running everything else on my Win10 box where installs are much easier. Since they share storage, monitors, keyboard and what not, it’s super easy to switch back and forth and not worry about it.

Hope this helps.
Jan

Fun story on the support end: I also have a Dell workstation and some other Dell hardware. If I had a problem, they would send someone over to my house to fix it. The other day I got a new Dell laptop which arrived DoA and required a new motherboard. They would have sent someone over. That’s a nice thing to have.

My Nuke Win10 system from Puget which I got at the beginning of the year, had on 3 occasions rebooted itself overnight and got stuck in the BIOS. Didn’t cause any harm since I wasn’t working on it at the time. In the system logs there was an entry about a hardware failure. But we were unable to isolate the problem.

So Puget suggested we swap the system out just in case (on 3 not-always-reproducible random reboots during long idle times). So they built an identical system and shipped it to me. They were fine with me taking 2 weeks to migrate data and making sure the new system worked well before shipping the old one back. That’s a $15K system, and they built a replica for the swap. Didn’t cost me a dime. Even return shipping was free, and I had system overlap so no downtime. Lifetime warranty on those systems.

That beats a Dell service guy coming over to my house.

This might be a thread of its own. But for those who recently came over from Nuke, or are working in a hybrid setup - it’s an interesting question of what you do in Nuke and what you do in Flame. There’s considerable overlap, and both are capable of similar things but in different ways, and then both have some things they’re better at. I’m still trying to figure this out. In some cases, mixing and matching in the same job, the sum can be greater than the parts; in other situations it can add a lot of sand into the gears.

Related to what Jan said, there is a great fxphd series by the great @SamE, all about coming from Nuke and learning Flame. It is several years old now so it won’t involve all the latest tools, but it’s really more about the approach between the two. It was invaluable to my assistant who came from Nuke when I hired her.

Thanks for the reminder, I should watch that. I’ve been a long-time fan of fxphd. Some of the courses are aging, but still invaluable.

Thanks so much for the replies so far guys… Really appreciate the time you’ve taken with such long answers. I will try and respond to some of the points:

Randy

That’s a ton of stuff to fit in one case! Are you sure that’s all gonna fit?

I’ve been put off going for a non-Quadro card after reading some of the other posts on getting the 3090 to work… I don’t have the tech skills to problem-solve this stuff on installation, or if the GFX card starts doing funny things.

The x2 8TB drives? For archiving what?

That was the plan for archiving projects, any particular thoughts about why it’s best to have external storage?

I dunno about Intels anymore. They are soooo expensive and their Threadripper Pro counterparts are crushing them in benchmark performance

If I went for the Xeon I would be buying a refurbished one. The system without HDD and GFX cards would cost £3,500, which comes in significantly cheaper than the Threadripper. But the performance is probably not nearly as good.

I did look at the Lenovo P620, but by the time I spec their out-of-the-box version to match a similar spec on the Autodesk website with the A5000 card, I’m looking at about £10K+, which is a bit outside my budget.

I use a couple Synologys cuz if I have to share between partitions you might as well share between computers

So what file system do you have on the NAS? Does that mean you can copy files onto there from both Windows and Linux? If so, maybe that would be a good option. I could host all of my Windows project files on there and use it as archive for the Flame. I’m just not sure if the data throughput of the NAS would be enough for the Windows machine to run 3D and Unreal and maybe a bit of Nuke?

I use either mdadm on Linux with an XFS file system (a software RAID 0) or ZFS, with good results

Your ‘thought 7’ was good to know so I don’t have to buy even more disks! Any specific thoughts on xfs vs zfs? I know nothing about this.

You ready for this box to be $50 a month in electricity and heat up your office by 4-5 degrees Fahrenheit, right?
Haa haa, well I’m trying to keep the heating off as much as possible during the day this winter, so it might be a nice little warmer under my desk and I won’t have to heat the whole house :rofl:

If you’re in New Zealand…

UK I’m afraid

I’m thinking about building a Threadripper Pro on top of this. Threadripper Pro Motherboard | ASUS Pro WS WRX80E-SAGE SE WIFI

I have been looking into this configuration. There are a few companies in the UK, like PC Specialist and Scan Computers, that enable you to configure custom Threadripper builds on this motherboard… Going with one of those companies I could get a better spec, and cheaper, than building off the back of a Lenovo, but I’m still a bit scared of going off the recommended specs and getting into technical trouble. I wish I was more advanced at Linux and this wasn’t an issue.

finnjaeger
Thanks for the info I will be on Linux so this is very helpful.

allklier

Do you plan on having the local system just for conform, or do you plan on doing all Flame work local?

There will likely be two use cases:

  1. Local conform, send exr’s to the cloud for Nuke comp. Then finish either in the cloud or send comps back to a local timeline.
  2. Local conform. Comp in Nuke locally, finish locally

I would not be compositing in Flame as I will be getting an assist licence only.

while Autodesk certifies a few configurations, you can certainly put Flame on other configurations and it works just fine.

Maybe it would be a nice resource to put together a Google spreadsheet of people’s computer specs with 2 columns. Column one: straightforward installs. Column two: headache, tech people only :wink:

The system I’m running on for the last few months has an Asus motherboard

Does the 5K uncompressed footage play back in real time on the timeline? I would be curious to know the full spec. Why did you have to install the Nvidia driver separately, seeing as you have a certified card?

I’m a firm believer in staying with NVidia’s workstation cards
I’ve been convinced of this from reading the posts here, despite the fact it upsets me that I won’t be able to use the new beast Nvidia 4090.

I’d rather have a few M.2 as straight devices for render and cache, and then a large RAID for the job files.

So what do you do about redundancy if your project M.2 fails? Is there a way to store essentially the live project files on a mirrored disk, the footage on one M.2 and the cache on another M.2, so there is no risk of losing your project files and not being able to recover your work?

Buying a new system with PCIe 3 (HP Z8) seems like a marginal investment. Certainly pairing it with a 4090 would be a very lopsided config

I’m sold on the A5000 graphics card now, so I would go for that config. I do take your point about buying old-gen kit though! Will come back to this point further down. I did, however, read an article comparing the performance of high-end graphics cards on AAA games using PCIe 3 vs 4, and the difference was extremely marginal. Not sure how this would stack up on pro graphics though.

I’m keeping my big RAID on my Win10 box, where it’s shared via SMB3 and my Flame box

Great info, thanks for this. I just need to work out if a NAS RAID will be fast enough to run my Windows 10 projects for Nuke and 3D; that would save me a lot of money on buying extra disks for the Windows box and having to buy archive storage.

since Flame comes with its own custom image, there may be surprises

Yes, this was my concern too. I guess I could run Nuke and DaVinci on Windows if this does become an issue, if I use your suggestion of a NAS.

Puget suggested we swap the system out just in case

That sounds amazing! I’ve read many of the articles on their website they sound great, shame they are not in the UK too.


So I’m a bit further after these discussions thank you!

Still need to decide between:

  1. A Threadripper custom build. It would need to be 24-core as I can’t afford it with an A5000 card, and this could well be out of my tech abilities to get running; or

  2. An old-gen HP Z8 with an A5000, which would be easier and cheaper to get running, but at the expense of performance and longevity.

I don’t need a mega system for the Flame, as I will be doing an uncompressed timeline only, but I would like it to still be juicy for the Nuke and the 3D.

To answer a few of your questions:

Yes, that footage (EXR sequence) plays back real-time.

The way I’m set up is that I have a RAID 5 volume for each system (in this case shared via Win10). Then I also have a big Synology NAS. I use a sync utility at the end of the day to mirror the local drives onto the NAS for redundancy, and then furthermore mirror just the project and critical data files into the cloud via LucidLink. So it’s not full real-time redundancy, but worst case I’m 8 hours behind, which is good enough.
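The sync itself is nothing fancy; something along these lines, scheduled for the end of the day, is the whole idea (the paths and flags are just illustrative, not my exact setup):

```python
# Sketch of an end-of-day one-way mirror: local RAID -> NAS, then project files -> LucidLink.
# Paths are examples; rsync needs to be installed, and --delete makes the target an exact mirror.
import subprocess

MIRRORS = [
    ("/mnt/raid/jobs", "/mnt/nas/backup/jobs"),          # full mirror to the NAS
    ("/mnt/raid/jobs/projects", "/mnt/lucid/projects"),  # just the project/critical files to the cloud
]

for src, dst in MIRRORS:
    subprocess.run(["rsync", "-a", "--delete", f"{src}/", f"{dst}/"], check=True)

# Run it from cron (e.g. 0 22 * * *) so the copy happens after hours.
```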

I regularly work off my Synology NAS as well. With 10GbE connections I get 400MB/s transfer rates, which for most work is good enough. If it’s super critical, those FireCudas (2x 4TB) on my Win10 box come in at 5,000MB/s read and write without a RAID config. So I use one for footage and one for renders.

I should also say that I’ve found Flame to actually be a lot more efficient with resources. So you can get away with a smaller system than you might think. Resolve is the worst in terms of hogging hardware, Nuke is somewhere in the middle, and Flame seems pretty light-footed in my experience. Also, Flame makes it easier than Nuke to do intermediate renders and caching to speed up your node trees. You can do that in Nuke, but it’s a bit more work. In Resolve it all just sucks.

There is no special problem in running Resolve. I follow a specific order, maybe not required, but it works for me:

Install the system with the ADSK ISO (always) -> install the DKU -> install the Studio driver (if you have an external video output box) -> install Resolve -> install Flame.

Since Nuke works like a copy-paste installer there isn’t any problem there either. Nuke can even output client video through the Studio driver with a Blackmagic box.

Another comment: go Threadripper. A beast.

Ok so I have an update:

I managed to buy an A5000 off eBay for £1,100, less than half price (hope it’s not a knock-off :grimacing:)

I also bought a second-hand Lenovo ThinkStation P620 Ryzen Threadripper PRO 3975WX for £3,000

There is no RAM in there, but I’m thinking I will go up to 256GB to future-proof it; it’s only £300 more than 128GB.

Then there is the issue of Storage.

Currently there is one 500GB M.2 in there with a Windows 10 install. I was going to buy one more 500GB for the Linux install. Then I wasn’t sure what to do about the framestore and project drive for the Windows 10 partition.

Possibly get a PCIe card and RAID 0 with 2 x M.2 for Windows and 2 x for Linux, then sync up to a QNAP. But that will be £££££

The thing is, I work with another guy who’s on Nuke and also works remotely, so I need a solution to sync with him and keep file paths consistent. Has anyone here done anything similar?

Sounds like a good plan. How much storage you need? A 4 or 6 bay Synology means you could Synology Drive that really easily. And heck, if you wanted, you each buy a 4 or 6 bay Synology and have your Synologys sync themselves.

You just saved a ton of cash so perhaps it’ll fit in your budget?

Is the other guy on Linux too, or a different OS?

One good way of sharing files between two remote people like that is LucidLink, which is a Cloud NAS. It will use a local cache drive of appropriate size for all the working files, so speed is determined by your local drive, but everything is synced with the Cloud and file locking exists. Can go cross-OS easily. But you may have to setup the mount points/drive mapping to keep paths matching. I forget if Nuke has a mapping function like Resolve does?

Hey @randy

How much storage you need?

To be honest I’m not sure quite how much storage I need. I’d probably get away with big disks in a 4-bay, I think, but if I’m filling them up then I’m making profit, right, so I could put aside some cash for something more juicy if needed. I watched your YouTube video about having a step-by-step plan rather than splashing out all at once, which was great advice and one of the things that made me have a rethink and go down the current route.

And heck, if you wanted, you each buy a 4 or 6 bay Synology and have your Synologys sync themselves.
You just saved a ton of cash so perhaps it’ll fit in your budget?

My other guy has an old QNAP, so I was thinking about getting one of those so maybe they would talk to each other (although I’ve heard Synology is good too; not really sure of the differences, next thing to research)… As for how to best sync between all those different devices, I’m not sure on the technicalities.

As for budget I’m getting close to maxing out so will need to think small for the NAS for now.

Hey @allklier

Is the other guy on Linux too, or a different OS?

He is on Windows 10, but he’s hoping to get set up on Flame too

One good way of sharing files between two remote people like that is LucidLink, which is a Cloud NAS. It will use a local cache drive of appropriate size for all the working files, so speed is determined by your local drive, but everything is synced with the Cloud and file locking exists. Can go cross-OS easily. But you may have to setup the mount points/drive mapping to keep paths matching. I forget if Nuke has a mapping function like Resolve does?

Yeah, you can do that with Nuke between operating systems, so you only have to keep the paths the same if you are going from, say, Win 10 to Win 10, but not from Win 10 to Linux, for example.
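From memory it’s done with a filename filter in init.py; something roughly like this (the drive letter and mount point are just examples, and it’s worth double-checking the exact call against the Nuke docs):

```python
# init.py sketch: remap Windows-style paths to a Linux mount when scripts move between boxes.
# Mount points are examples only; the hook used here is nuke.addFilenameFilter.
import platform
import nuke

PATH_MAP = {
    "Z:/jobs": "/mnt/jobs",  # Windows drive letter -> Linux mount (example)
    "z:/jobs": "/mnt/jobs",
}

def remap(filename):
    if platform.system() == "Linux":
        for win_path, linux_path in PATH_MAP.items():
            if filename.startswith(win_path):
                return linux_path + filename[len(win_path):]
    return filename

nuke.addFilenameFilter(remap)
```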

LucidLink sounds interesting I will look into this. The major sticking point for me is my internet speed. Usually I structure my folders like this

shots
—>shot_010
------>plates
------>renders
------>software

Pretty standard stuff. But I’m thinking, to save on internet bandwidth, I might have to split the folders out a bit to stop work-in-progress renders being synced, so something like this:

job name
----> published_renders
-------->shot_010
-------->shot_020
----> plates
-------->shot_010
-------->shot_020
----> renders
-------->shot_010
-------->shot_020
----> software
-------->shot_010
-------->shot_020

Which would be horrible!!! Unless there is a script that can recognise certain folder names not to sync

Hi @Kia ! I’m in a similar situation - long-time Nuke, noob Flame, rolled a distributed studio. If bandwidth is asymmetrical with dl > ul, trying to continuously sync the entire project folder to all parties involved can be a rough experience.

It makes sense to push only what is needed to whoever needs it; so for a Nuke artist: plates, art, and v00 scripts go out, and only work files/scripts come back in. Since you are on the main/hero box for the job, you can fire up Nuke to preview and render on your end.

Sync apps like syncthing/resilio/rsync/etc will all have some option for an ‘ignore list’ so you can skip directories, files, or specific extensions in your project scaffold. Also watch the sync logs and build up a list of the ~tmp/autosave files the various apps create. Something like Substance Painter can generate tmp files in the multi-gig range. Reducing traffic to just what is needed will help the experience.
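If you’d rather script it than lean on a sync app, the same ‘ignore list’ idea is only a few lines of Python; a rough sketch, with the patterns being whatever you decide not to push:

```python
# Sketch: one-way push of a shot folder that skips WIP renders and app temp/autosave files.
# The ignore patterns are just examples; tune them to whatever your apps actually spit out.
import shutil

IGNORE = shutil.ignore_patterns("renders", "*.autosave", "*.tmp", "*~")

def push_shot(src, dst):
    # dirs_exist_ok lets you re-run this as the shot updates (Python 3.8+)
    shutil.copytree(src, dst, ignore=IGNORE, dirs_exist_ok=True)

push_shot("/jobs/job_name/shots/shot_010", "/mnt/sync/shot_010")
```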

Speaking of job scaffold ideas for remote work (from the Nuke side here!), I’ve used ‘work_artist’ folders in the root of the project, next to a typical seq/shot structure, to isolate what is being pushed around. If I’m working with an artist like ‘Anne’, I’ll publish all shots (plates + scripts) into job/seq/shot folders as usual, then do a second pass of select shots to the ‘work_anne’ folder, with the output of those exported scripts set to write to the /job/seq/shot/renders/v01 path.

Artist-wise, when she starts, Anne only sees the job_name/project_root folder and a ‘work_anne’ folder that contains her shots. When she renders, a job/seq/shot/renders/v01 folder is created, and she can view the renders as normal. The only thing set to sync between us is the ‘work_anne’ folder from her branch of the job.

All of the incremental renders exist exclusively on her end and are not pushed back to me; only the project file/shot metadata is. The folder structure looks something like this:

job_name
—>sequence
------>shot_010
--------->plates
--------->renders
--------->scripts
------>shot_020
--------->plates
--------->renders
--------->scripts

—>work_anne
------>shot_010
--------->plates
--------->scripts
------>shot_020
--------->plates
--------->scripts
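
And if it helps, roughing out that scaffold is a quick script; this is purely an illustration of the layout above, with placeholder names and paths:

```python
# Sketch: build the seq/shot structure plus a work_<artist> branch for selected shots.
# Purely illustrative of the layout above; adapt the names/paths to your own job template.
from pathlib import Path

def build_job(root, shots, artist, artist_shots):
    root = Path(root)
    for shot in shots:
        for sub in ("plates", "renders", "scripts"):
            (root / "sequence" / shot / sub).mkdir(parents=True, exist_ok=True)
    for shot in artist_shots:
        for sub in ("plates", "scripts"):
            (root / f"work_{artist}" / shot / sub).mkdir(parents=True, exist_ok=True)

build_job("/jobs/job_name", ["shot_010", "shot_020"], "anne", ["shot_010", "shot_020"])
```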

Hi @mybikeislost , thanks for the explanation of your setup… That does make a lot of sense!

I think I’m going to have to feel my way through it on the file-sharing side of things… I do have an AWS cloud studio setup with 4 Nuke seats, and I’m also trying to get Flame up and running in the cloud. It means I’ve never had to think about all these data issues before… But now that I’m trying to take on full projects with a timeline, rather than just shots, I need to conform locally and possibly still send the plates up to the cloud for compositing when working in a team, until I can get a better internet connection… I looked today and they said upgrades for my area to go full fibre optic are taking place between April 2022 and April 2025 :frowning:

Hi All,

Thanks again for all the help.

Thought I’d let you know I managed to put together a very nice system with various eBay purchases and got my Flame up and running last weekend!! :grin:

Thanks to @randy for this great video Logik Academy - My First Linux Flame - YouTube

I would have been scratching my head a lot trying to figure out how to install Linux and Flame by myself, but this made it a breeze.

For anyone who is interested, the spec turned out to be:

Dual boot Windows 11 / Rocky Linux 8.5
Lenovo ThinkStation P620 Ryzen Threadripper PRO 32-core
256GB RAM
PNY Nvidia A5000 graphics card
2 x 500GB M.2 SSD for boot drives, one OS on each
HighPoint 8-Channel M.2 NVMe RAID Controller
4 x 2TB M.2 SSD (still waiting for these, but the plan is to stripe (RAID 0) a 4TB project drive for each OS)

I managed to put this together for about half the price of buying a new 5000-series Threadripper, because I bought the 3000-series box off eBay for a bargain and the graphics card was less than half price (it was new) on eBay.

4 Likes