There was a post recently by @talosh about using your GPU to mine Ethereum when you’re not using Flame. On the flip side, you can do the opposite and bring everything down to lower power levels to save energy and reduce heat in your machine room…thus less A/C.
@Alan posted this video quite some time ago, which inspired me to look into it. As he says, ADSK turns everything up to its maximum levels, which makes sense. The issue is that those settings stay on regardless of whether you’re using Flame or not.
Here are two small scripts that run when you log in and log out. They use tuned-adm and nvidia-settings to adjust the settings.
Login: Bring overall system settings back to ADSK levels and set the primary GPU to maximum and secondary GPU to adaptive.
Logout: Bring overall system settings to balanced and set all GPUs to adaptive.
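In rough strokes, the two scripts boil down to something like the sketches below. These are not the attached files: the “autodesk” profile name and the assumption that gpu:0 is the primary and gpu:1 the secondary are from my setup, and the PowerMizerMode values are 1 = prefer maximum performance, 0 = adaptive.

```shell
#!/bin/bash
# login.sh (sketch): back to ADSK levels, primary GPU pinned to max.
/usr/sbin/tuned-adm profile autodesk                       # profile name is an assumption
/usr/bin/nvidia-settings -a "[gpu:0]/GPUPowerMizerMode=1"  # 1 = prefer maximum performance
/usr/bin/nvidia-settings -a "[gpu:1]/GPUPowerMizerMode=0"  # drop this line on single-GPU boxes
```

```shell
#!/bin/bash
# logout.sh (sketch): balanced profile, every GPU adaptive.
/usr/sbin/tuned-adm profile balanced
for gpu in 0 1; do                                         # adjust to your GPU count
  /usr/bin/nvidia-settings -a "[gpu:$gpu]/GPUPowerMizerMode=0"  # 0 = adaptive
done
```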
- You’ll have to modify your xorg.conf to remove all occurrences of PowerMizer for this to work. That setting trumps everything else; took me a long time to figure this out.
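A quick way to find the lines you need to strip is grep. The sample config below is a hypothetical stand-in for a real xorg.conf so the command can be demonstrated anywhere; on a real system you’d point grep at /etc/X11/xorg.conf instead.

```shell
# Write a sample xorg.conf fragment containing the kind of PowerMizer
# entries that must be removed (RegistryDwords values are illustrative).
cat > /tmp/xorg_sample.conf <<'EOF'
Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    Option     "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222"
EndSection
EOF

# On a real system: grep -n PowerMizer /etc/X11/xorg.conf
grep -n PowerMizer /tmp/xorg_sample.conf   # prints the matching line, prefixed "4:"
```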
- You can use different tuned-adm profiles on logout, but balanced seems to be a good one: we have things happening in the background, plus we go into different machines all the time, so I didn’t want to do anything that would bring things down too much.
- Obviously this means people need to log in and log out for it to work.
How to Install
- Copy login.sh into /opt/Autodesk/USER/.kde/Autostart
- Copy logout.sh into /opt/Autodesk/USER/.kde/shutdown
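For reference, a possible install sequence. The chmod is my assumption (the attached scripts may already be executable), and USER here stands for your actual Flame user directory.

```shell
# Hypothetical install commands; run from wherever you unzipped pwr_mgmt.zip.
cp login.sh  /opt/Autodesk/USER/.kde/Autostart/
cp logout.sh /opt/Autodesk/USER/.kde/shutdown/
chmod +x /opt/Autodesk/USER/.kde/Autostart/login.sh \
         /opt/Autodesk/USER/.kde/shutdown/logout.sh
```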
pwr_mgmt.zip (1.3 KB)
On our Z840s, this brings GPU usage down from 80W (idle) to 20W and CPU temp from ~60C to ~30C (unsure about the wattage). So it’s all in line with what Alan found.
Long story short, when you see the data on something like Grafana it’s clear this saves a lot of excess power usage and heat with no side effects.
Thanks again to @Alan for the inspiration.
Yup…that is the approach. Things are a bit different with Gnome, but not much.
For KDE, these can be added systemwide in /etc/kde/env/ (login) and /etc/kde/shutdown/ (logout).
also there is a typo in your logout.sh , “sleet 2”
Whoops. Well, I guess it’s not really needed then, as everything seems to be working fine. Good to know about the systemwide locations.
/etc/kde/shutdown/ and /etc/kde/env/ don’t seem to be working for me. However, the ~/.kde/Autostart/ and ~/.kde/shutdown/ seem to work fine. Since I’m only really ever logging in with one user, this is fine for me.
What CLI utility are you checking CPU wattage and speed with in that video? I’d love to see what kind of savings I might be getting on a Z820. I can see that the scripts are working to switch the tuned-adm profile, but when I monitor with nvidia-smi I see no difference in GPU fan speed or power between the profiles (Quadro M6000, 38% fan, 69ºC, 89W).
I don’t remember the app. It was a long time ago, and I don’t install it normally. We’re now on CentOS 8, so things work differently. What command are you using to set power on the GPU? The important metric to look at is its P-state. I believe P0 is full speed and P8 is the most conservative.
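The P-state can be read directly with nvidia-smi; the query fields below are standard nvidia-smi options, but the command obviously needs an NVIDIA driver and GPU present to return anything.

```shell
# Report performance state, power draw, and temperature per GPU.
# P0 = maximum performance; higher numbers (P8, P12) = lower power states.
nvidia-smi --query-gpu=index,pstate,power.draw,temperature.gpu --format=csv
```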
I’m using Kyle’s scripts linked above. The scripts use “tuned-adm profile profile_name” to change the power profile. That appears to be working in that I can see it change when I run “tuned-adm active” after a login/out.
I just tried running the command the scripts are using to set the GPU power, but it gives me an error when I run it:
nvidia-settings -a “[gpu:0]/GPUPowerMizerMode=0”
ERROR: The control display is undefined; please run
nvidia-settings --help for usage information.
I’m looking at the nvidia-settings command but I don’t understand why it won’t work. I have a single monitor plugged into my machine, which I thought the [gpu:0] part would address.
Unsure about the error you’re getting when trying to run the nvidia-settings command. The gpu:0 means the first GPU, at least on our systems. It works on both Z820s (single GPU) and Z840s (dual GPUs). Maybe --help will give you a bit more insight into how the GPU is mapped; maybe it’s not 0 in your case. What do you get when you run nvidia-smi?
You’ll also need to remove all mentions of PowerMizer in your xorg.conf. If you don’t do that then it will overwrite any other setting and it won’t work.
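To check the mapping question quickly, nvidia-smi can enumerate the GPUs; on most boxes the index it prints is the same N that nvidia-settings expects in [gpu:N], though I wouldn’t swear the two tools always agree on multi-GPU systems.

```shell
# List each GPU with its index, name, and UUID.
nvidia-smi -L
```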
nvidia-smi output attached. I’m thinking the “0” to the left of Quadro M6000 is display/device 0, so I’m also stumped as to why the scripts you posted aren’t working for me. I’ll keep picking at nvidia-settings commands to see if I can get anything to work.
I did comment out all the PowerMizer lines in my xorg.conf so hopefully I’m good there. I haven’t tried giving the machine a full restart since I did that because I thought restarting X would be sufficient. I suppose it’s worth a try just to rule it out. I’m on CentOS 7.6 if that matters.
Interesting. My driver version is 440.44 and we’re on 7.4. Maybe they’ve changed how to adjust that value between versions…
Are you trying to run /usr/bin/nvidia-settings -a “[gpu:0]/GpuPowerMizerMode=1” via an SSH session?
I was trying over ssh. I logged into the desktop as root and ran the command in a shell from there. It didn’t like the quotes, so I removed those and then it worked. I left the quotes in the scripts, though.
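For anyone else hitting this: curly quotes copy/pasted from a web page are ordinary characters to the shell, so they end up inside the attribute string handed to nvidia-settings instead of acting as quoting. A tiny demo:

```shell
# What a web copy/paste can produce vs. what the shell should actually see.
pasted='“[gpu:0]/GPUPowerMizerMode=0”'
typed='[gpu:0]/GPUPowerMizerMode=0'

# The curly quotes survive as literal characters in the pasted string.
printf '%s\n' "$pasted" | grep -q '“' && echo "curly quotes detected"  # → curly quotes detected
```

Retyping the command with plain ASCII quotes (or none, as above) avoids the problem.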
And, suddenly, login/logout changes started working. So it seems like things are working now. 90W to 17W looks pretty good to me.
Over SSH, nvidia-settings has no concept of displays, so that makes total sense.
Glad you got it working. It makes quite a difference. Like Alan said, dealing with anything monitor related over ssh is a no go sadly.
It should work if you set the environment variable DISPLAY so it has an X display to talk to.
nvidia-settings -q GPUPowerMizerMode
Incidentally, for you CentOS 8 people, here’s what I’ve done that works for me:
Add to /etc/gdm/PreSession/Default:
/usr/sbin/tuned-adm profile autodesk
env DISPLAY=:0 /usr/bin/nvidia-settings -a "[gpu:0]/GPUPowerMizerMode=1"
(there’s probably already a PATH statement in it; just add to the bottom and make sure the file ends with
exit 0, or else if something fails with a non-zero return code you won’t be able to log in)
Add to /etc/gdm/PostSession/Default:
/usr/sbin/tuned-adm profile balanced
env DISPLAY=:0 /usr/bin/nvidia-settings -a "[gpu:0]/GPUPowerMizerMode=0"
chmod +x /etc/gdm/PostSession/Default so it’s executable.
Thanks for this Bob. We’re going to be moving to 8 in the coming weeks so this will save some time poking around the internet figuring out how to make it work again.