Ah, this machine was a Flame. It's running the Autodesk kickstart 9.3 install, but I've since updated the kernel and the graphics card drivers, and it will now just be an AI workstation.
I previously submitted a request to have Flame support the NVIDIA 580+ drivers. I think the RTX PRO 6000 can fully support running Flame, but the latest version currently doesn't match the 580+ drivers; Flame 2026.2 may be the earliest to support them. Your AI workstation should be able to run Flame as long as you install the 570 driver separately. Upgrading the kernel is not a problem: as long as the dependencies are met, it can be used without installing the complete DKU. Binding the DKU to specific kernels is really not friendly.
I set up an AI-only station myself, using the ADSK RL 9.5 installer and the latest NVIDIA driver from their site. No issues with Teradici using a single A6000 Ada.
Some kernel-level software packages in the DKU, such as AJA, are built against specific kernel versions. I'd prefer those packages to be more universal, so that when you upgrade the kernel they can dynamically match the new one, just like DKMS. That way Flame could be installed on more Linux versions, just like DaVinci, rather than being highly dependent on one specific OS release.
@randy here it is working fine with the 6000 Blackwell, but with the older driver and CUDA 12.8.
I've got issues with CUDA 13 and the latest driver on Rocky for some reason.
Also, you can run this card at 325 W just fine; it unlocks a bit more power without extra heat:
sudo nvidia-smi -i 0 -pl 325
sudo nvidia-smi -i 1 -pl 325
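A quick sanity check after raising the limit (a sketch, assuming the same two GPUs at indexes 0 and 1 — and note that `-pl` settings don't survive a reboot, so reapply them at boot):

```shell
# Confirm the new 325 W cap took: current vs. maximum enforceable limit per GPU.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=index,power.limit,power.max_limit --format=csv
else
  echo "nvidia-smi not found; run this on the workstation"
fi
```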
Also, put those fans to work if the box is not in your room and you're going to use it for AI:
sudo nvidia-settings -V -c :0 -a 'GPUFanControlState=1' -a 'GPUTargetFanSpeed=85'
Back to automatic:
sudo nvidia-settings -V -c :0 -a 'GPUFanControlState=0'
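One caveat, depending on your driver/X setup (an assumption about your config, not something from this thread): the nvidia-settings fan attributes usually only take effect once Coolbits is enabled in xorg.conf:

```shell
# Enable manual fan control (Coolbits value 4); rewrites /etc/X11/xorg.conf,
# so restart X afterwards for it to take effect.
sudo nvidia-xconfig --cool-bits=4
```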
Aight, I'm downgrading to your CUDA version. 13.0 was giving me dumb error messages in ai-toolkit when running 1920x1080 sample images, while the 5000 Adas were fine.
You shouldn't need to build from source. Make sure your CUDA toolkit is against 12.8 as well.
Check with nvcc --version
Attached is mine.
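Worth remembering when comparing: `nvcc --version` reports the installed toolkit, while the CUDA version in the `nvidia-smi` header is only the maximum the driver supports, so the two can legitimately differ. A guarded sketch to print both:

```shell
# Toolkit version (this is what the PyTorch wheel, e.g. cu128, should match)
if command -v nvcc >/dev/null 2>&1; then
  nvcc --version | grep release
else
  echo "nvcc not on PATH (often lives in /usr/local/cuda/bin)"
fi
# Driver's maximum supported CUDA version, shown in the nvidia-smi header
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi | head -n 4
else
  echo "nvidia-smi not found"
fi
```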
For PyTorch, I'm going safe with this combo most of the time: pip install torch==2.8.0 torchaudio==2.8.0 torchvision==0.23.0
It has been working well with 6000s and GB300s.
With cu13.0 there's still some strange stuff happening. Good to wait until PyTorch gets out of --pre, in my opinion.
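A quick way to confirm the installed build actually matches that pin (a sketch; for the cu128 wheels, `torch.version.cuda` should read 12.8):

```shell
# Print the torch version and the CUDA version it was built against.
if command -v python3 >/dev/null 2>&1; then
  python3 - <<'EOF'
try:
    import torch
    # e.g. "2.8.0+cu128 12.8" on a matching install
    print(torch.__version__, torch.version.cuda)
except ImportError:
    print("torch not installed yet")
EOF
else
  echo "python3 not on PATH"
fi
```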
Now I just need to modify all my old YAMLs to fill up all the VRAM. And yeah, working through the fan % boost and maybe the power boost you recommended earlier, @tpo.
@tpo do you do a lot of LoRA training? I'd love to catch up about it sometime. I don't know a lot of us out there leaning on LoRAs that much, and I could use some friends.