How to optimize GPU configuration?

Message boards : Number crunching : How to optimize GPU configuration?

awdorrin

Joined: 27 Sep 99
Posts: 71
Credit: 106,424,089
RAC: 261
United States
Message 1833330 - Posted: 30 Nov 2016, 12:28:32 UTC - in response to Message 1833271.  
Last modified: 30 Nov 2016, 12:35:10 UTC

I did try to add the command-line options last night, but I think I must have done something wrong: when I enable the 'Command Line' column in Task Manager (Win10 here), I don't see the options listed.

How can I tell for sure they are enabled?

I edited the files:
mb_cmdline_win_x86_SSE2_OpenCL_ATi_HD5.txt
mb_cmdline_win_x86_SSE3_OpenCL_NV_SoG.txt

Do I need to enable anything in one of the XML files or rename the mb_cmdline files to match the exact name, with the rev#?

To start with I just used the suggested values from the readme files (Brent, I'll try yours once I get it working) :)
For Ati:
 -unroll 12 -oclFFT_plan 256 16 256 -ffa_block 12288 -ffa_block_fetch 6144 -tune 1 64 4 1 -tune 2 64 4 1
 

For nVidia:
 -sbs 256 -spike_fft_thresh 2048 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 32 -oclfft_tune_cw 32


BTW, I also doubt it is CPU load at this point.
Not sure if it is just some conflict between AMD and nVidia, or maybe the nVidia driver is causing problems, or maybe it's just one of those Windows 10 things...

Thanks!
ID: 1833330
Profile Mike Special Project $75 donor
Volunteer tester
Joined: 17 Feb 01
Posts: 34253
Credit: 79,922,639
RAC: 80
Germany
Message 1833345 - Posted: 30 Nov 2016, 15:09:56 UTC

You can see it in your task list on the website (in the task's stderr output).
It is active.

<core_client_version>7.6.22</core_client_version>
<![CDATA[
<stderr_txt>
Maximum single buffer size set to:256MB
SpikeFind FFT size threshold override set to:2048
TUNE: kernel 1 now has workgroup size of (64,1,4)
oclFFT global radix override set to:256
oclFFT local radix override set to:16
oclFFT max WG size override set to:256
oclFFT max local FFT size override set to:512
oclFFT number of local memory banks set to:32
oclFFT minimal memory coalesce width set to:32
Priority of worker thread raised successfully
Priority of process adjusted successfully, below normal priority class used
OpenCL platform detected: Advanced Micro Devices, Inc.
OpenCL platform detected: NVIDIA Corporation
BOINC assigns device 0
Info: BOINC provided OpenCL device ID used


With each crime and every kindness we birth our future.
ID: 1833345
Profile Mike Special Project $75 donor
Volunteer tester
Joined: 17 Feb 01
Posts: 34253
Credit: 79,922,639
RAC: 80
Germany
Message 1833347 - Posted: 30 Nov 2016, 15:18:34 UTC - in response to Message 1833345.  

I also suggest using the same line for your ATI GPUs.


With each crime and every kindness we birth our future.
ID: 1833347
awdorrin
Joined: 27 Sep 99
Posts: 71
Credit: 106,424,089
RAC: 261
United States
Message 1833426 - Posted: 1 Dec 2016, 4:03:00 UTC - in response to Message 1833347.  

Ah, I never thought of looking at the details of the tasks on the website!

https://setiathome.berkeley.edu/result.php?resultid=5325193468

I was trying to figure out how to check it from my PC itself and couldn't figure it out.

Thanks!
ID: 1833426
Profile Mike Special Project $75 donor
Volunteer tester
Joined: 17 Feb 01
Posts: 34253
Credit: 79,922,639
RAC: 80
Germany
Message 1833440 - Posted: 1 Dec 2016, 10:28:49 UTC - in response to Message 1833426.  

Ah, I never thought of looking at the details of the tasks on the website!

https://setiathome.berkeley.edu/result.php?resultid=5325193468

I was trying to figure out how to check it from my PC itself and couldn't figure it out.

Thanks!


Of course you can check it on your PC too.
In BOINC Manager, check which slot each GPU task is using (the task's Properties shows its directory), then look in that slot directory; the stderr.txt there contains the same output.
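For anyone who wants to script that check, here is a rough sketch in Python (not a tested tool; it assumes the default Win10 data directory C:\ProgramData\BOINC and that the app echoes its recognized switches into each slot's stderr.txt, as in the output quoted above):

# Rough sketch: scan BOINC slot directories for the tuning lines the
# OpenCL MB app echoes into stderr.txt. Paths assume a default Win10 install.
from pathlib import Path

SLOTS = Path(r"C:\ProgramData\BOINC\slots")  # adjust if your data dir differs

for slot in sorted(SLOTS.glob("*")):
    stderr = slot / "stderr.txt"
    if not stderr.exists():
        continue
    print(f"--- slot {slot.name} ---")
    for line in stderr.read_text(errors="ignore").splitlines():
        # One line is printed per recognized command-line override.
        if ("override set to" in line or "buffer size set to" in line
                or line.startswith("TUNE:")):
            print(" ", line)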


With each crime and every kindness we birth our future.
ID: 1833440
awdorrin
Joined: 27 Sep 99
Posts: 71
Credit: 106,424,089
RAC: 261
United States
Message 1833570 - Posted: 2 Dec 2016, 0:51:16 UTC - in response to Message 1833440.  

Jeez, hard to believe I've been running this software since 1999, and I know so little about what is going on in the folders.
Just found the 'slots' folder up under the main BOINC folder. Sure enough, there it is.

Thanks once again!

And about the comment you made several days ago, that the 'guppi' tasks under CUDA might be causing the screen lags: I think you are definitely right, and it seems to be happening with the OpenCL SoG tasks as well.

Right now the system is sluggish again, with two 'guppi' tasks running on the NVidia card, three on my AMD cards, and two on the CPU.

I wonder if these 'guppi' tasks just don't play well with the i7's hyperthreading?

I have tomorrow off work, so I think I'll spend some time reading through the forums and Googling to see what else I can learn, and what else I can tweak to make things happier.

Thank you!
ID: 1833570
awdorrin
Joined: 27 Sep 99
Posts: 71
Credit: 106,424,089
RAC: 261
United States
Message 1834804 - Posted: 8 Dec 2016, 13:49:12 UTC - in response to Message 1833570.  

So I have tweaked my command-line files based on comments I've read in various forum posts.
I'm currently using the following:

mb_cmdline_win_x86_SSE2_OpenCL_ATi_HD5.txt
 -sbs 256 -spike_fft_thresh 2048 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 32 -oclfft_tune_cw 32 -cpu_lock -use_sleep -high_prec_timer


mb_cmdline_win_x86_SSE3_OpenCL_NV_SoG.txt
 -sbs 256 -spike_fft_thresh 2048 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 32 -oclfft_tune_cw 32 -cpu_lock -use_sleep -high_prec_timer -period_iterations_num 3


The jerky mouse movements, slow keyboard input, and overall sluggishness seem to have gone away, although occasionally the mouse will still lag slightly.
Currently: 2 NVidia, 4 ATI, and 4 CPU tasks running, and resmon shows 83% CPU utilization.
The NVidia SoG tasks are taking much less CPU now, 4-6% or less, rather than a full core.

May try to raise the number of GPU tasks up to 3 on each GPU, since they should be able to support them. Just not entirely sure how the CPU will handle it.
Will probably wait a week before trying that.
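For reference, one common way to run 3 tasks per GPU is an app_config.xml dropped into the setiathome.berkeley.edu project folder; a minimal sketch (the app name setiathome_v8 and the cpu_usage value are assumptions here; check client_state.xml or app_info.xml for the real app name):

<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <gpu_usage>0.33</gpu_usage>
      <cpu_usage>0.1</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

With an anonymous-platform (Lunatics) install the <count> value in app_info.xml can be edited instead; either way the client needs a 'Read config files' or a restart to pick it up.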
ID: 1834804
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1834949 - Posted: 9 Dec 2016, 9:28:33 UTC - in response to Message 1834804.  
Last modified: 9 Dec 2016, 9:45:55 UTC

There is a new option that "old-school" optimizers still largely ignore.
It's -tt N, where N is in milliseconds.
It defines the desired kernel duration.

With all currently available versions, the behavior of the -period_iterations_num N option differs greatly from what the "old school" is familiar with.
It now defines only the initial value. After that, an adaptation algorithm is in place that gradually tunes app execution through the task to achieve the -tt N goal.
The default is 60 ms. To reduce lag, reduce it.
To increase performance, increase it (but lag can increase too).

I think the -tt N option should be included in the standard recommended tuning line, with a bigger value for unattended hosts (where lag is not important).
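For example (values purely illustrative, not tested recommendations), the target would simply be appended to the existing mb_cmdline line, other switches unchanged:

 ... -period_iterations_num 3 -tt 30      (attended desktop: shorter kernels, less lag)
 ... -period_iterations_num 3 -tt 120     (unattended host: longer kernels, less call overhead)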

EDIT: there is also a set of performance counters available in stderr that prints the mean PulseFind kernel execution times achieved on a particular task run.
If you see ~60 ms for some kernels, that means increasing -tt N will indeed change the app's behavior.
If all values are less than 60 ms, a further -tt N increase will change nothing, because the GPU is fast enough that a longer kernel is simply not possible with that particular input data.

Here is an example of those counters:

Fftlength=32,pass=3:Tune: sum=132845(ms); min=15.11(ms); max=81.69(ms); mean=59.87(ms); s_mean=58.69; sleep=60(ms); delta=145; N=2219; usual
Fftlength=32,pass=4:Tune: sum=134695(ms); min=18.19(ms); max=99.68(ms); mean=59.39(ms); s_mean=51.56; sleep=45(ms); delta=129; N=2268; usual
Fftlength=32,pass=5:Tune: sum=44940.5(ms); min=10.33(ms); max=65.25(ms); mean=52.56(ms); s_mean=54.47; sleep=45(ms); delta=339; N=855; usual
Fftlength=64,pass=3:Tune: sum=69339.5(ms); min=7.485(ms); max=74.63(ms); mean=55.92(ms); s_mean=60.5; sleep=60(ms); delta=304; N=1240; usual
Fftlength=64,pass=4:Tune: sum=62782.2(ms); min=6.645(ms); max=73.42(ms); mean=55.27(ms); s_mean=55.03; sleep=45(ms); delta=276; N=1136; usual
Fftlength=64,pass=5:Tune: sum=22667.6(ms); min=5.17(ms); max=68.92(ms); mean=33.53(ms); s_mean=64.11; sleep=60(ms); delta=730; N=676; usual
Fftlength=128,pass=3:Tune: sum=37867.7(ms); min=3.762(ms); max=84.93(ms); mean=41.43(ms); s_mean= 47; sleep=45(ms); delta=681; N=914; usual
Fftlength=128,pass=4:Tune: sum=31032(ms); min=3.012(ms); max=70.03(ms); mean=39.58(ms); s_mean=63.72; sleep=60(ms); delta=616; N=784; usual
Fftlength=128,pass=5:Tune: sum=19366.2(ms); min=2.602(ms); max=60.05(ms); mean=26.1(ms); s_mean=38.08; sleep=30(ms); delta=785; N=742; usual
Fftlength=256,pass=3:Tune: sum=23763(ms); min=1.916(ms); max=51.8(ms); mean=27.41(ms); s_mean=50.43; sleep=45(ms); delta=910; N=867; usual
Fftlength=256,pass=4:Tune: sum=17087.7(ms); min=1.538(ms); max=37.26(ms); mean=20.74(ms); s_mean=36.22; sleep=30(ms); delta=867; N=824; usual
Fftlength=256,pass=5:Tune: sum=11410.9(ms); min=1.327(ms); max=25.26(ms); mean=15.01(ms); s_mean=24.75; sleep=15(ms); delta=825; N=760; usual
Fftlength=512,pass=3:Tune: sum=19760.6(ms); min=0.9823(ms); max=21.68(ms); mean=17.71(ms); s_mean=21.18; sleep=15(ms); delta=1159; N=1116; usual
Fftlength=512,pass=4:Tune: sum=14405.2(ms); min=0.7803(ms); max=16.53(ms); mean=13.17(ms); s_mean=15.39; sleep=15(ms); delta=1137; N=1094; usual
Fftlength=512,pass=5:Tune: sum=10662.2(ms); min=0.684(ms); max=12.31(ms); mean=9.946(ms); s_mean=11.41; sleep=0(ms); delta=1115; N=1072; usual
Fftlength=1024,pass=3:Tune: sum=43977.5(ms); min=0.5104(ms); max=36.26(ms); mean=22.4(ms); s_mean=23.93; sleep=15(ms); delta=1984; N=1963; high_perf
Fftlength=1024,pass=4:Tune: sum=438.322(ms); min=0.402(ms); max=7.814(ms); mean=3.199(ms); s_mean=6.328; sleep=0(ms); delta=1973; N=137; usual
Fftlength=1024,pass=5:Tune: sum=312.844(ms); min=0.3548(ms); max=5.881(ms); mean=2.503(ms); s_mean=5.477; sleep=0(ms); delta=1961; N=125; usual
Fftlength=2048,pass=3:Tune: sum=43913.7(ms); min=5.114(ms); max=20.61(ms); mean=11.73(ms); s_mean=11.7; sleep=0(ms); delta=1; N=3745; high_perf
Fftlength=4096,pass=3:Tune: sum=49372.7(ms); min=2.529(ms); max=23.08(ms); mean=6.591(ms); s_mean=6.594; sleep=0(ms); delta=1; N=7491; high_perf
Fftlength=8192,pass=3:Tune: sum=100582(ms); min=6.689(ms); max=6.767(ms); mean=6.714(ms); s_mean=6.716; sleep=0(ms); delta=1; N=14981; usual


As one can see, for this particular GPU (my HD 6950) quite a big share of the different kernel geometries are saturated at 60 ms.
That is, increasing the -tt N value will allow longer kernel execution times, i.e. fewer kernel calls overall, and hopefully less overhead and better performance.
On the other hand, running longer than 60 ms will cause noticeable lag in keyboard input and mouse movement (GPU kernels are not pre-emptable, so if a kernel takes 60 ms, the GPU is unavailable for anything else during those 60 ms).

Another useful piece of information one can extract from these counters is the max execution times.
For the first few lines the max time is >60 ms, up to ~100 ms.
That is, in the initial stages of each such task the lag will be more noticeable, until the adaptation takes effect.
So, if I wanted to reduce lag, I would increase the value of -period_iterations_num N to divide the kernels into more parts, reducing the initial length of each kernel call. This changes the starting point for the adaptation. But after a few initial iterations the times will gradually converge to those 60 ms again, unless I also provide the -tt N option.
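To make that check easier, here is a rough sketch (not part of the app; it assumes the Tune counter lines have been copied from a result's stderr into a local file, tune.txt by default) that flags which kernel geometries sit near a given -tt target:

# Rough sketch: parse the "Tune:" performance counters from a stderr dump
# and flag kernel geometries whose mean time sits near the -tt target.
import re
import sys

TT_TARGET_MS = 60.0            # current -tt value (default is 60 ms)
PATTERN = re.compile(
    r"Fftlength=(\d+),pass=(\d+):Tune:.*?"
    r"max=([\d.]+)\(ms\); mean=([\d.]+)\(ms\)"
)

with open(sys.argv[1] if len(sys.argv) > 1 else "tune.txt") as f:
    for line in f:
        m = PATTERN.search(line)
        if not m:
            continue
        fft, pas, mx, mean = int(m[1]), int(m[2]), float(m[3]), float(m[4])
        # Rough heuristic: within 10% of the target counts as saturated.
        saturated = mean >= 0.9 * TT_TARGET_MS
        note = "saturated - raising -tt could help" if saturated else "below target"
        print(f"FFT {fft:5d} pass {pas}: mean {mean:6.2f} ms, max {mx:6.2f} ms -> {note}")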
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1834949
awdorrin
Joined: 27 Sep 99
Posts: 71
Credit: 106,424,089
RAC: 261
United States
Message 1835991 - Posted: 15 Dec 2016, 1:36:40 UTC

So, after all of that, my new NVidia card died last night.
:-(

I came home from work to find only one monitor working. Win10 had rebooted itself for updates (I forgot to disable driver updates) so it had tried updating the GTX 1070's driver.

At first I figured it was a driver issue, and spent 2 hours trying to uninstall, DDU, and reinstall the driver.
It would find the card, but the screen would never display anything other than the quick flash of 'DVI signal detected'.

So, I don't know if S@H stressed the card enough to trigger a failure in a marginal card, or if something else happened.
Initiated a return request with NewEgg, so waiting to hear back.

Bah! (Could be worse I suppose, but still a bummer)
ID: 1835991
Profile Jimbocous Project Donor
Volunteer tester
Joined: 1 Apr 13
Posts: 1849
Credit: 268,616,081
RAC: 1,349
United States
Message 1835995 - Posted: 15 Dec 2016, 2:07:31 UTC - in response to Message 1835991.  

So, after all of that, my new NVidia card died last night.
:-(
...
Bah! (Could be worse I suppose, but still a bummer)

Infant mortality, the bane of all electronics :|
ID: 1835995
KLiK
Volunteer tester
Joined: 31 Mar 14
Posts: 1304
Credit: 22,994,597
RAC: 60
Croatia
Message 1837571 - Posted: 23 Dec 2016, 20:45:08 UTC - in response to Message 1835991.  
Last modified: 23 Dec 2016, 20:47:25 UTC

So, after all of that, my new NVidia card died last night.
:-(

I came home from work to find only one monitor working. Win10 had rebooted itself for updates (I forgot to disable driver updates) so it had tried updating the GTX 1070's driver.

At first I figured it was driver issues, and spent 2 hours trying to uninstall, DDU, and reinstall the driver.
It would find the card, but the screen would never display anything, other than the quick flash of 'DVI signal detected'

So, I don't know if S@H stressed the card enough to trigger a failure in a marginal card, or if something else happened.
Initiated a return request with NewEgg, so waiting to hear back.

Bah! (Could be worse I suppose, but still a bummer)

That's why I use Tthrottle on Win & keep my nVidias under 90°C!

Too bad someone hasn't made a similar thing for Linux... so there I only use passively cooled "heatsink cards" (low power usage, like a 240 or 730)!

RIP 1070

@Raistmer:
Can you program the app to use lm-sensors, with 90°C as the throttling threshold?! :/
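In the meantime, a do-it-yourself sketch for a Linux NVidia box (purely illustrative, not a tested tool; it assumes nvidia-smi and boinccmd are installed and on the PATH, and it just pauses BOINC GPU work instead of throttling clocks the way TThrottle does):

# Rough sketch: poll NVidia GPU temperature on Linux and pause/resume
# BOINC GPU work around a threshold. Values are illustrative only.
import subprocess
import time

LIMIT_C = 90          # throttle threshold, as in the TThrottle setting above
RESUME_C = 80         # resume once the hottest GPU cools back down
POLL_SECONDS = 30

def hottest_gpu_temp():
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return max(int(t) for t in out.split())

def set_gpu_mode(mode):
    # boinccmd talks to the local client; modes are "always", "auto", "never".
    subprocess.run(["boinccmd", "--set_gpu_mode", mode], check=True)

paused = False
while True:
    temp = hottest_gpu_temp()
    if temp >= LIMIT_C and not paused:
        set_gpu_mode("never")
        paused = True
    elif temp <= RESUME_C and paused:
        set_gpu_mode("auto")
        paused = False
    time.sleep(POLL_SECONDS)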


non-profit org. Play4Life in Zagreb, Croatia, EU
ID: 1837571
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1837584 - Posted: 23 Dec 2016, 21:22:20 UTC - in response to Message 1837571.  

That's why I use Tthrottle on Win & keep my nVidias under 90°C!

There is something wrong with your system or cards if they are running at 90°C. No need for 3rd-party programmes if the card is cooled properly.
Grant
Darwin NT
ID: 1837584
Profile Jimbocous Project Donor
Volunteer tester
Joined: 1 Apr 13
Posts: 1849
Credit: 268,616,081
RAC: 1,349
United States
Message 1837605 - Posted: 23 Dec 2016, 23:13:58 UTC - in response to Message 1837584.  
Last modified: 23 Dec 2016, 23:16:04 UTC

That's why I use Tthrottle on Win & keep my nVidias under 90°C!

There is something wrong with your system or cards if they are running at 90°c. No need for 3rd party programmes if the card is cooled properly.

That would be my experience also. At least, I've never seen any of my 980s exceed 80°C or so.
I do pay a lot of attention to case airflow, though, and have augmented the existing case fans with additional PWM fans as needed, using PWM splitter wiring harnesses and Arctic F9 fans. My fight on the dual Xeon boxes is keeping the CPUs at or below 80°C.
I don't use TThrottle on anything, as I was never able to get it to install and operate properly, but as long as you can keep everything cool enough to avoid damage, I feel throttling isn't needed or useful.
ID: 1837605
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1837615 - Posted: 23 Dec 2016, 23:42:29 UTC - in response to Message 1837605.  

My fight on the dual Xeon boxes is keeping the CPUs at or below 80°c.

Are they running stock or aftermarket coolers?
Grant
Darwin NT
ID: 1837615
Profile Jimbocous Project Donor
Volunteer tester
Joined: 1 Apr 13
Posts: 1849
Credit: 268,616,081
RAC: 1,349
United States
Message 1837651 - Posted: 24 Dec 2016, 4:57:04 UTC - in response to Message 1837615.  
Last modified: 24 Dec 2016, 4:59:06 UTC

My fight on the dual Xeon boxes is keeping the CPUs at or below 80°c.

Are they running stock or aftermarket coolers?

Yeah, that's part of the issue. No room to go to an aftermarket cooler and still be able to get the skins back on these cases. Neither Arctic nor Cooler Master seem to make any low-height units for LGA1366. I did do some testing with the skins off using an Arctic Freezer 7 Pro, which should have been plenty of oomph for these 80-95 W CPUs, and didn't see an appreciable difference versus the wacky stock OEM (Foxconn) coolers.

What I was (barely) able to get added to the case was a 2RU server cooler, specifically a SuperMicro SNK-P0038P Heatsink X9 for 2U LGA1366 socket type Intel, which is just barely larger than the 80 mm socket footprint left open and short enough not to hit the skins. With an 80x25 mm fan wedged in, it seems to cool as well as the OEM. The issue was that the OEM coolers weren't available at a reasonable price when I needed to add one to take the one box from a single E5504 to 2x E5620.

The real issue is that the HP BIOS is locked down so tight there's nothing to be done with the fan profiles, which value quiet over cool. The only adjustment is minimum (idle) fan speed. I have never been able to find anything out there that will tweak the fan profile. Just one of those things you have to put up with if you choose to delve into the HP world.

Fortunately, both the E5620s and X5675s seem to do OK running at 80°C 24x7.
ID: 1837651
Profile Brent Norman (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1837663 - Posted: 24 Dec 2016, 8:07:33 UTC - in response to Message 1837651.  

Have you tried straight 12v to the fans? Or even 5v, then bye bye profiles :)
ID: 1837663
Profile Jimbocous Project Donor
Volunteer tester
Joined: 1 Apr 13
Posts: 1849
Credit: 268,616,081
RAC: 1,349
United States
Message 1837666 - Posted: 24 Dec 2016, 8:44:10 UTC - in response to Message 1837663.  

Have you tried straight 12v to the fans? Or even 5v, then bye bye profiles :)

Good thought!
I could even grab a small pot out of the junk box to get a manual adjust. Would need to dummy up the sense lead to avoid errors on boot, but that's easy enough.
ID: 1837666
KLiK
Volunteer tester
Joined: 31 Mar 14
Posts: 1304
Credit: 22,994,597
RAC: 60
Croatia
Message 1837676 - Posted: 24 Dec 2016, 11:10:19 UTC - in response to Message 1837584.  

That's why I use Tthrottle on Win & keep my nVidias under 90°C!

There is something wrong with your system or cards if they are running at 90°c. No need for 3rd party programmes if the card is cooled properly.

You got the info all wrong!

90°C is just the threshold setting in TThrottle!
The cards usually run at about 50-60°C here at home (2x 730, 240 & 1050Ti).
;)


non-profit org. Play4Life in Zagreb, Croatia, EU
ID: 1837676
KLiK
Volunteer tester
Joined: 31 Mar 14
Posts: 1304
Credit: 22,994,597
RAC: 60
Croatia
Message 1837677 - Posted: 24 Dec 2016, 11:15:40 UTC - in response to Message 1837651.  

My fight on the dual Xeon boxes is keeping the CPUs at or below 80°c.

Are they running stock or aftermarket coolers?

Yeah, that's part of the issue. No room to go to an aftermarket cooler and still be able to get the skins back on these cases. [...]
The real issue is the HP BIOS is locked down so tight there's nothing to be done with the fan profiles, which value quiet over cool. [...]
Fortunately, both the E5620s and X5675s seem to do OK running 80°C 24x7.

Solutions might be:
- use L5600-series procs? They are only 60 W!
- put a newer BIOS from HP on it?
- use an extra 80, 120, or 150 mm fan on the case?

I recently switched from an X3360 to low-powered Q9400S & Q9550S chips... they are not so "power hungry", and I'm still using the OEM Intel coolers from the X3360... works like a charm! ;)


non-profit org. Play4Life in Zagreb, Croatia, EU
ID: 1837677
Profile Jimbocous Project Donor
Volunteer tester
Joined: 1 Apr 13
Posts: 1849
Credit: 268,616,081
RAC: 1,349
United States
Message 1837732 - Posted: 24 Dec 2016, 21:15:48 UTC - in response to Message 1837677.  
Last modified: 24 Dec 2016, 22:00:33 UTC

Solution's might be:
- use of L5600 series procs? they are 60W only!
- put newer BIOS from HP?
- use of extra 80, 120, 150mm fan on a case?

I recently switched from X3360 to a low powered Q9400S & Q9550S...they are not so "power hungry" & still using the OEM intel coolers from X3360...works like a charm! ;)

- L5600s would be a serious performance step down from the X5675s (3.07/3.4 GHz hexacore @ 95 W)
- The BIOS is the latest from HP (2016); doubt it would support L-series CPUs though
- Doubt the Intel 5520 chipset would support L-series CPUs regardless of BIOS
- Have added a 120 mm PWM fan, but could also take the case fans off PWM as noted above
- The Z600 is now a 7-year-old box
Thanks for the input.
[edit]
Apologies for drifting OT. Thought I was in the Xeon thread.
ID: 1837732