How to optimize GPU configuration?

Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1837734 - Posted: 24 Dec 2016, 21:23:52 UTC - in response to Message 1837676.  

You got the info all wrong!

90°C is a setting in TThrottle!
while they usually run at about 50-60°C in my home (2x 730, 240 & 1050Ti). ;)

I just went by what you posted.

That's why I use TThrottle on Windows & keep my nVidias under 90°C!

I interpret that as using TThrottle to keep the GPU temperatures below 90°C.
Grant
Darwin NT
ID: 1837734
KLiK
Volunteer tester

Joined: 31 Mar 14
Posts: 1304
Credit: 22,994,597
RAC: 60
Croatia
Message 1837981 - Posted: 26 Dec 2016, 14:53:05 UTC - in response to Message 1837732.  

Solutions might be:
- use L5600-series procs? They are only 60 W!
- install a newer BIOS from HP?
- add an extra 80, 120 or 150 mm fan to the case?

I recently switched from an X3360 to the low-power Q9400S & Q9550S... they are not so "power hungry" & I'm still using the OEM Intel coolers from the X3360... works like a charm! ;)

L5600s would be a serious performance step down from the X5675s (3.06/3.46 GHz hex-core @ 95 W).
The BIOS update is the latest from HP, from 2016; I doubt it would support L-series CPUs though.
I doubt the Intel 5520 chipset would support L-series CPUs, regardless of BIOS.
Already added a 120 mm PWM fan, but could also take the case fans off PWM as noted above.
The Z600 is now a 7-year-old box.
Thanks for the input.
[edit]
Apologies for drifting OT. Thought I was in the Xeon thread.

That's why I use my CPUs on WCG... which only gets you badges for time donated, not for RAC! ;)

My GPUs, on the other hand, get the job done on SETI@home! ;)


non-profit org. Play4Life in Zagreb, Croatia, EU
ID: 1837981
KLiK
Volunteer tester

Joined: 31 Mar 14
Posts: 1304
Credit: 22,994,597
RAC: 60
Croatia
Message 1837982 - Posted: 26 Dec 2016, 14:55:08 UTC - in response to Message 1837734.  

You got the info all wrong!

90°C is a setting in TThrottle!
while they usually run at about 50-60°C in my home (2x 730, 240 & 1050Ti). ;)

I just went by what you posted.

That's why I use TThrottle on Windows & keep my nVidias under 90°C!

I interpret that as using TThrottle to keep the GPU temperatures below 90°C.

Some chips might have been faulty... so they got pretty heated up! ;)

I had to keep them under 90°C so they wouldn't overheat... even new thermal paste didn't help!
Sold them to gamers... they haven't had any problems with the cards, as they don't pull 100% load from them! ;)

Now I'm on the verge of being number 1 in Croatia with my new cards... ;)


non-profit org. Play4Life in Zagreb, Croatia, EU
ID: 1837982
kim Herrick
Joined: 29 Sep 03
Posts: 2
Credit: 31,984,556
RAC: 0
United States
Message 1842970 - Posted: 19 Jan 2017, 2:52:00 UTC - in response to Message 1833271.  

I moved the GTX 1070 from an AMD FX-8350 to my i7-5820 and it is showing the same low GPU load. It's not the platform; it appears to be the card or the driver...
avatar closely resembles the craft I saw back in '73
ID: 1842970
Brent Norman
Volunteer tester
Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1842979 - Posted: 19 Jan 2017, 5:02:13 UTC - in response to Message 1842970.  

I wouldn't expect too much performance running the CUDA 50 app.
ID: 1842979
Michel Makhlouta
Volunteer tester
Joined: 21 Dec 03
Posts: 169
Credit: 41,799,743
RAC: 0
Lebanon
Message 1843998 - Posted: 23 Jan 2017, 11:57:47 UTC

I have installed Lunatics and added the command line that I found on the forum for the 1070, and made it run 2 WUs per GPU. I am not seeing much improvement. I've also assigned 1 CPU core per GPU WU.

The load was almost 100% on the GPU with 1 WU, so I am assuming it is useless nowadays to run 2? Or am I doing something wrong?
ID: 1843998
Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1844030 - Posted: 23 Jan 2017, 14:05:52 UTC - in response to Message 1843998.  
Last modified: 23 Jan 2017, 14:06:14 UTC

I have installed Lunatics and added the command line that I found on the forum for the 1070, and made it run 2 WUs per GPU. I am not seeing much improvement. I've also assigned 1 CPU core per GPU WU.

The load was almost 100% on the GPU with 1 WU, so I am assuming it is useless nowadays to run 2? Or am I doing something wrong?

Going to the anonymous platform is needed only to manually select a particular plan class/app. Usually, after some time, the host selects the best one on its own.
In this case the anonymous platform isn't needed, because a setup like 2 WU/device, or a tuning line, can be supplied via app_config.xml instead.
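For example, a minimal app_config.xml sketch for 2 tasks per device with 1 CPU core reserved per task; it goes in the project directory (projects/setiathome.berkeley.edu). The app name setiathome_v8 and plan class opencl_nvidia_SoG are assumptions here and must match what your client actually runs, and the tuning line is just an example:

      <app_config>
        <app>
          <name>setiathome_v8</name>              <!-- assumed; must match the app name in client_state.xml -->
          <gpu_versions>
            <gpu_usage>0.5</gpu_usage>            <!-- 0.5 GPU per task = 2 tasks per GPU -->
            <cpu_usage>1.0</cpu_usage>            <!-- reserve 1 CPU core per GPU task -->
          </gpu_versions>
        </app>
        <app_version>
          <app_name>setiathome_v8</app_name>
          <plan_class>opencl_nvidia_SoG</plan_class>  <!-- assumed SoG plan class -->
          <cmdline>-sbs 256 -period_iterations_num 20</cmdline>  <!-- example tuning line -->
        </app_version>
      </app_config>

Tell the client to re-read its config files (or restart BOINC) after editing for the change to take effect.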
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1844030
Michel Makhlouta
Volunteer tester
Joined: 21 Dec 03
Posts: 169
Credit: 41,799,743
RAC: 0
Lebanon
Message 1844090 - Posted: 23 Jan 2017, 18:59:30 UTC

I've installed Lunatics for the CPU; I thought it would help? If not, how do I fall back in that case?

As for the 2 WUs per GPU, I used to have a 780 running 3 WUs, but that was running CUDA, not SoG. I am assuming it is pointless to run more than 1 WU per GPU nowadays?
ID: 1844090
Mike
Volunteer tester
Joined: 17 Feb 01
Posts: 34258
Credit: 79,922,639
RAC: 80
Germany
Message 1844107 - Posted: 23 Jan 2017, 21:03:38 UTC - in response to Message 1844090.  

I've installed Lunatics for the CPU; I thought it would help? If not, how do I fall back in that case?

As for the 2 WUs per GPU, I used to have a 780 running 3 WUs, but that was running CUDA, not SoG. I am assuming it is pointless to run more than 1 WU per GPU nowadays?


Wrong.
For your GPUs, running 2 instances is always better.


With each crime and every kindness we birth our future.
ID: 1844107
Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1844109 - Posted: 23 Jan 2017, 21:04:21 UTC - in response to Message 1844090.  
Last modified: 23 Jan 2017, 21:04:39 UTC

I've installed Lunatics for the CPU; I thought it would help? If not, how do I fall back in that case?

The optimized CPU apps are faster than the stock ones.
I am assuming it is pointless to run more than 1 WU per GPU nowadays?

It depends on the config. For medium to high-end GPUs, 2+ tasks are usually better than 1, at least to hide task startup delays if nothing else.
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1844109
KLiK
Volunteer tester

Joined: 31 Mar 14
Posts: 1304
Credit: 22,994,597
RAC: 60
Croatia
Message 1847342 - Posted: 8 Feb 2017, 16:10:43 UTC
Last modified: 8 Feb 2017, 16:12:29 UTC

This is a great topic to start sharing some optimizations for the cards!

Now I'm just finishing up work at my job, where I run SETI@home on a Quadro M2000 4 GB card. My cmdline today is:
      <cmdline>-high_precision_timer -use_sleep -sbs 512 -period_iterations_num 20 -tt 240</cmdline>
      <ngpus>0.33</ngpus>

A BIG help came from a great Aussie guy named Stephen here on the forum... Thanks man!

The screen stays responsive so I'm free to work, as long as I'm not in CAD... maybe I'll change that in the future!
I'll post some examples from my other cards when I'm at home... to share, that's what the forum is for! ;)
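For anyone not on the anonymous platform, roughly the same setup can go in an app_config.xml instead; here is a minimal sketch, assuming the stock setiathome_v8 app with the opencl_nvidia_SoG plan class and a 0.33-core CPU reservation per task (check your own client for the actual app name and plan class):

      <app_config>
        <app>
          <name>setiathome_v8</name>
          <gpu_versions>
            <gpu_usage>0.33</gpu_usage>       <!-- 0.33 GPU per task = 3 tasks per GPU, like ngpus above -->
            <cpu_usage>0.33</cpu_usage>       <!-- assumed CPU reservation per task -->
          </gpu_versions>
        </app>
        <app_version>
          <app_name>setiathome_v8</app_name>
          <plan_class>opencl_nvidia_SoG</plan_class>
          <cmdline>-high_precision_timer -use_sleep -sbs 512 -period_iterations_num 20 -tt 240</cmdline>
        </app_version>
      </app_config>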


non-profit org. Play4Life in Zagreb, Croatia, EU
ID: 1847342