GTX 980 Resisting Memory Speed Setting with EVGA Precision

GTP
Joined: 5 Jul 99
Posts: 67
Credit: 137,504,906
RAC: 0
United States
Message 1662420 - Posted: 8 Apr 2015, 1:47:11 UTC

I found I couldn't adjust the memory clocks with Precision X on a new 980. Then I stumbled on the fact that the setting will not stick if the GPU is loaded. Exit SETI, wait a few seconds for the processes to stop, and then try it. Then start SETI again.
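
If you'd rather script that pause-and-apply dance than exit SETI by hand each time, here is a rough sketch in Python. It assumes boinccmd (which ships with the BOINC client) is on your PATH and that suspending GPU work is enough to unload the card; the clock change itself still gets applied by hand in Precision X.

import subprocess
import time

def set_gpu_mode(mode: str, duration: int = 0) -> None:
    """Tell the local BOINC client to change GPU activity: 'always', 'auto', or 'never'."""
    # boinccmd --set_gpu_mode <mode> <duration> is a stock BOINC client command.
    subprocess.run(["boinccmd", "--set_gpu_mode", mode, str(duration)], check=True)

def main() -> None:
    print("Suspending GPU work so the 980 goes idle...")
    set_gpu_mode("never")      # stop feeding the GPU
    time.sleep(30)             # give the running GPU tasks a moment to unload

    input("Apply the memory clock in Precision X now, then press Enter to resume...")

    set_gpu_mode("always")     # hand the GPU back to BOINC/SETI
    print("GPU work resumed.")

if __name__ == "__main__":
    main()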

All the best,
Aaron Lephart

TechVelocity.com

Cruncher-American
Joined: 25 Mar 02
Posts: 1513
Credit: 370,893,186
RAC: 340
United States
Message 1662425 - Posted: 8 Apr 2015, 2:46:29 UTC - in response to Message 1662420.  

That may be so, but the card doesn't keep running in the state it's in when SETI isn't running. So when you start SETI up again, you will be back to 6 GHz (I think).

Zalster
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1662427 - Posted: 8 Apr 2015, 2:57:18 UTC - in response to Message 1662425.  

You can use Nvidia Inspector to change the P2 memory speed. Close out BOINC, launch Nvidia Inspector, select P2 for each GPU, move the memory clock up to 3504 MHz, and then click the button in the lower right corner (I don't remember right now what it's called). Do this for each of your GPUs, then close Nvidia Inspector and relaunch BOINC. Your memory speed should now be set to 3.5 GHz.
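
If you want to confirm the offset actually stuck once BOINC is crunching again, a quick nvidia-smi check works. This is just a sketch (it assumes nvidia-smi is on your PATH); note that the 3504 MHz Inspector shows is half the ~7 GHz effective GDDR5 rate quoted elsewhere in this thread, so the two numbers describe the same speed.

import subprocess

def gpu_states():
    """Return (performance state, memory clock in MHz) for each NVIDIA GPU."""
    # nvidia-smi's --query-gpu interface reports the live P-state and memory clock.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=pstate,clocks.mem", "--format=csv,noheader,nounits"],
        text=True,
    )
    rows = []
    for line in out.splitlines():
        if line.strip():
            pstate, mem = [field.strip() for field in line.split(",")]
            rows.append((pstate, int(mem)))
    return rows

if __name__ == "__main__":
    for idx, (pstate, mem_mhz) in enumerate(gpu_states()):
        # While crunching, Maxwell cards sit in P2; roughly 3000 MHz here means the
        # stock P2 memory clock, roughly 3500 MHz means the offset took (~7 GHz effective).
        print(f"GPU {idx}: {pstate}, memory clock {mem_mhz} MHz")

Run it with BOINC crunching and again with it closed to see the P2/P0 switch for yourself.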

Jason recommends Process Lasso to help keep everything running smoothly. I'd go with that recommendation. I just started using it as well, and it seems to keep the drivers from crashing and resetting the GPU.

Zalster

Cruncher-American
Joined: 25 Mar 02
Posts: 1513
Credit: 370,893,186
RAC: 340
United States
Message 1665170 - Posted: 13 Apr 2015, 23:09:51 UTC

Just got an interesting explanation for why the card is not running CUDA or OpenCL at full speed (7 GHz), and is instead being put into the P2 state by the driver to run at 6 GHz.

The claim is as follows: for gaming, you want the highest possible pixel calculation rate, and you don't really care about an occasional error as long as you don't get visible artifacts. BUT when you are doing large scientific calculations (e.g., SETI), you want to minimize errors, so the driver doesn't let the card run at its rated gaming speed, preventing the occasional errors that would otherwise occur. (Sort of using the gaming card as a workstation card by downclocking.)

This actually makes some sense. I don't know if it is actually true, of course, but it IS interesting.

Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1665175 - Posted: 13 Apr 2015, 23:32:13 UTC - in response to Message 1665170.  

Just got an interesting explanation for why the card is not running CUDA or OpenCL at full speed (7 GHz), and is instead being put into the P2 state by the driver to run at 6 GHz.

The claim is as follows: for gaming, you want the highest possible pixel calculation rate, and you don't really care about an occasional error as long as you don't get visible artifacts. BUT when you are doing large scientific calculations (e.g., SETI), you want to minimize errors, so the driver doesn't let the card run at its rated gaming speed, preventing the occasional errors that would otherwise occur. (Sort of using the gaming card as a workstation card by downclocking.)

This actually makes some sense. I don't know if it is actually true, of course, but it IS interesting.


So the driver developers are deliberately downclocking the card for computational purposes. If you want to use a video card for computational purposes, you buy a purpose-built GPGPU card such as a Tesla or Quadro. If the driver developers think that GPGPU users of gaming cards won't check their results for errors and will keep using a card that generates them, they must have a low opinion of the user and feel they have to provide a nanny service. If a cruncher sees that his cards are generating errors, he investigates and cleans the card or downclocks it on his own until it produces valid results. It would have been nice if the driver notes had mentioned this when the 900 series came out; it was never a problem before the 900 series. In my opinion, the drivers should stay out of this area and leave it up to the self-interest of the user to use the card sensibly. I don't want a nanny overseer. My $0.02

Cheers, Keith
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)

Cruncher-American
Joined: 25 Mar 02
Posts: 1513
Credit: 370,893,186
RAC: 340
United States
Message 1665186 - Posted: 14 Apr 2015, 0:42:02 UTC - in response to Message 1665175.  

Well, not exactly. They seem to be saying that gaming has laxer requirements than computing, per se. You are right, however, in that it should be our option to think about/do the downclocking. Or maybe Maxwell is more prone to this kind of error(?).

If this last is true, things may get very interesting in terms of class action lawsuits, since it implies they knowingly released a (semi)defective product. If, in fact, Maxwell has this type of defect.

Maybe it's time to get a bowl of popcorn to watch the fireworks.

Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1665200 - Posted: 14 Apr 2015, 1:29:51 UTC - in response to Message 1665186.  

Well, not exactly. They seem to be saying that gaming has laxer requirements than computing, per se. You are right, however, in that it should be our option to think about/do the downclocking. Or maybe Maxwell is more prone to this kind of error(?).

If this last is true, things may get very interesting in terms of class action lawsuits, since it implies they knowingly released a (semi)defective product. If, in fact, Maxwell has this type of defect.

Maybe it's time to get a bowl of popcorn to watch the fireworks.


I think the fireworks and lawsuits have already started, if I recall correctly, with the 970 memory issue. I haven't seen any sign of weakness in my 970s yet, such as an overabundance of compute errors. Heck, I'm even overclocking the memory +100 MHz over the P0 stock speed with no issues.

Cheers, Keith
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)

Cruncher-American
Joined: 25 Mar 02
Posts: 1513
Credit: 370,893,186
RAC: 340
United States
Message 1665826 - Posted: 15 Apr 2015, 23:51:10 UTC

I decided to track this down a bit.

First I called EVGA Tech Support; the guy I spoke to (at 2nd level) had not heard of the downclocking of CUDA apps, but he said he would look into it.

Then I called NVIDIA Tech Support, and am in the process of trying to get some kind of statement from them as to why they downclock for CUDA and OpenCL.

I will pass on anything they tell me.

betreger
Joined: 29 Jun 99
Posts: 11361
Credit: 29,581,041
RAC: 66
United States
Message 1665832 - Posted: 16 Apr 2015, 0:29:09 UTC - in response to Message 1665826.  

My limited experience with Nvidia tech support was excellent.