Dual GPU and HP Anywhere artifacts?

Who’s got a dual-GPU config/procedure to share? Config below. I get weird GUI artifacts.

(venv) [comfyui@dxs-flame-07 ComfyUI]$ nvidia-smi
Fri Oct 17 23:57:49 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.95.05              Driver Version: 580.95.05      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTX PRO 6000 Blac…      Off |   00000000:41:00.0  On |                  Off |
| 30%   48C    P8             15W /  300W |     364MiB /  97887MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA RTX PRO 6000 Blac…      Off |   00000000:C3:00.0 Off |                  Off |
| 30%   39C    P8              7W /  300W |      18MiB /  97887MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A           24629      G   /usr/libexec/Xorg                       142MiB |
|    0   N/A  N/A          131026      G   /usr/bin/gnome-shell                    198MiB |
|    1   N/A  N/A           24629      G   /usr/libexec/Xorg                         4MiB |
+-----------------------------------------------------------------------------------------+
(venv) [comfyui@dxs-flame-07 ComfyUI]$

After testing, Flame 2026.1 currently does not support the NVIDIA 580+ drivers and CUDA 13. Just install the 570 driver.
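
If you’re on the NVIDIA CUDA repo for Rocky/RHEL, one way to pin the 570 series is via module streams, roughly like this (a sketch; check which stream names your repo actually offers first):
dnf module list nvidia-driver                      # list available driver streams
sudo dnf module reset nvidia-driver
sudo dnf module install nvidia-driver:570-dkms     # assumed stream name; pick the 570 one listed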

Ah, this machine was a Flame. It’s running Autodesk Kickstart 9.3, but I’ve since updated the kernel and the graphics drivers; it will just be an AI workstation.

I have previously submitted a request for Flame to support NVIDIA 580+ drivers. I think the RTX PRO 6000 can fully support running Flame, but the latest version currently doesn’t match the 580+ drivers; Flame 2026.2 is the earliest that may support them. Your AI workstation should be able to run Flame as long as you install the 570 driver separately. Upgrading the kernel is not a problem: as long as the dependencies are met, Flame can be used without installing a complete DKU. Tying the DKU to specific kernels is really not friendly.

I set up an AI-only station myself, using the ADSK RL9.5 installer and the latest NVIDIA driver from their site. No issues with Teradici using a single A6000 Ada.

There is a --no-kernel option with the DKU so it won’t touch that. But if you are not running Flame, there’s no reason to use the DKU.
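
e.g. (installer script name from memory; check the extracted DKU tarball):
sudo ./INSTALL_DKU --no-kernel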

Some kernel-level software packages in the DKU, such as AJA, are built against specific kernel versions. I would prefer kernel-level packages to be more universal, dynamically matching a new kernel on upgrade, just like DKMS. That way, Flame could be installed on more Linux versions, just like DaVinci, rather than being highly dependent on one specific version of the OS.
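
That’s exactly what DKMS gives you: registered modules get rebuilt against each new kernel instead of shipping one prebuilt .ko, e.g.:
dkms status              # list registered modules and the kernels they’re built for
sudo dkms autoinstall    # rebuild all registered modules for the running kernel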

Fucking baller.

If dropping $20K on GPUs is wrong, I don’t want to be right!

My yard is the worst on the block. Weeds galore.

Haven’t washed my car forever. Dirty AF.

I still have one kidney left for the next upgrade cycle.

My GPU game is strong though.

@randy here it’s working fine with the 6000 Blackwell, but on an older driver and CUDA 12.8.
I’ve got issues with CUDA 13 and the latest driver on Rocky for some reason.
Also, you can run this card at 325 W just fine; it frees up a bit more power without extra heat:
sudo nvidia-smi -i 0 -pl 325
sudo nvidia-smi -i 1 -pl 325
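
One heads-up: -pl resets at reboot. A minimal way to reapply it at boot (unit name and paths are just my choice) is a oneshot systemd service:
# /etc/systemd/system/gpu-powerlimit.service
[Unit]
Description=Raise GPU power limits at boot
[Service]
Type=oneshot
ExecStart=/usr/bin/nvidia-smi -pm 1
ExecStart=/usr/bin/nvidia-smi -i 0 -pl 325
ExecStart=/usr/bin/nvidia-smi -i 1 -pl 325
[Install]
WantedBy=multi-user.target
Then: sudo systemctl daemon-reload && sudo systemctl enable --now gpu-powerlimit.service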

Also, put those fans to work if the box is not in your room and you’re going to use it for AI:
sudo nvidia-settings -V -c :0 -a 'GPUFanControlState=1' -a 'GPUTargetFanSpeed=85'

Back to automatic:
sudo nvidia-settings -V -c :0 -a 'GPUFanControlState=0'
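
If nvidia-settings refuses to change the fan attributes, Coolbits is probably not enabled for the X driver. The usual fix (it rewrites xorg.conf, so back that up first) is:
sudo nvidia-xconfig --cool-bits=4    # then restart X / log out and back in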

Aight, I’m downgrading to your CUDA version. 13.0 was giving me dumb error messages in ai-toolkit when running 1920x1080 sample images, where the 5000 Adas were fine.

Thanks for the confirmation @tpo

Oh… how did you deal with PyTorch, @tpo?

Did you rebuild from source?

Shouldn’t need to build from source. Make sure your CUDA toolkit is against 12.8 as well.
Check with nvcc --version
Attached is mine.

[screenshot: nvcc --version output]

For PyTorch I’m playing it safe with this combo most of the time:
pip install torch==2.8.0 torchaudio==2.8.0 torchvision==0.23.0
It has been working well with 6000s and GB300s.
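
If plain PyPI hands you a mismatched build, you can pin the CUDA 12.8 wheel index explicitly and then sanity-check it (the index URL follows PyTorch’s standard pattern):
pip install torch==2.8.0 torchaudio==2.8.0 torchvision==0.23.0 --index-url https://download.pytorch.org/whl/cu128
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"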

With cu13.0 there’s still some strange stuff happening. Good to wait until PyTorch gets out of --pre, in my opinion.
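
For reference, “out of --pre” means the CUDA 13 builds still only ship from the pre-release channel, i.e. something like the following (the nightly index name is my guess from the usual pattern):
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu130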

@randy what machine are you running both of those GPUs on?

Lenovo P8 / Threadripper Pro 79xx something.

I did compile from source and somehow managed to get most of it working, minus torchvision and torchaudio.

I’ll pivot and see if I can match your config.

boom!

Now I just need to modify all my old YAMLs to fill up all the VRAM. And yah, working through the fan % boost and maybe the power boost you recommended earlier, @tpo.

@tpo do you do a lot of LoRA training? I’d love to catch up about it sometime. I don’t know a lot of us out there leaning on LoRAs that much, and I could use some friends.

yay nicee :fire:

@randy yep, lots of LoRAs. I have my own LoRA trainer code that is basically these two below:
https://github.com/tdrussell/diffusion-pipe
https://github.com/modelscope/DiffSynth-Studio
