Freeing CPU cores

Profile arkayn
Volunteer tester
Joined: 14 May 99
Posts: 4438
Credit: 55,006,323
RAC: 0
United States
Message 1407625 - Posted: 25 Aug 2013, 15:54:14 UTC - in response to Message 1407623.  

<snip>

I use TThrottle to keep the CPU and GPU temperatures under 75 °C.


Whoa. I tried to look up your GPU, but I couldn't find a temperature threshold anyone agreed on. 75 °C is far, far too hot. I'm running an old GTX 260 Core 216, and even after a full night I can barely get it to 119 °F (about 48 °C); how in god's name is that thing still functioning? That's an insane amount of heat to put near any components, imho.


The GTX 480 models mostly ran at around 80 °C or more, and most GPUs can run at around 90 °C with almost no problems.

That said, I like to keep mine under 70 °C.

MonChrMe

Joined: 9 Jun 13
Posts: 23
Credit: 113,889
RAC: 0
United Kingdom
Message 1407630 - Posted: 25 Aug 2013, 16:10:30 UTC - in response to Message 1407623.  

Whoa. I tried to look up your GPU, but I couldn't find a temperature threshold anyone agreed on. 75 °C is far, far too hot. I'm running an old GTX 260 Core 216, and even after a full night I can barely get it to 119 °F (about 48 °C); how in god's name is that thing still functioning? That's an insane amount of heat to put near any components, imho.


Nah, the silicon on consumer GeForce GPUs is normally good up to 90 °C (~194 °F). Your GTX 260 has a maximum operating temperature of 105 °C, for example.
The NVS line is business-class hardware, which tends to be built with reliability in mind. It should be safe up to the same, if not higher, temperatures.

His CPU is rated for up to 100 °C as well.

A common trick with HTPC and 'silent' builds is to use custom fan profiles that turn the fans down and 'run hot' (the fans do less work holding a card at 80 °C than at 60 °C, hence less noise). As long as the heat doesn't 'bleed out' into other components or soften the chassis (possible with laptops), it's fine.

SpeedFan or Speccy should be able to tell you what temperatures your hard drives are running at. They're the components most vulnerable to heat; you need to keep those below 55 °C.


Note for anyone reading: always check the maximum temperatures for your CPU and GPU yourself; AMD CPUs in particular tend to have lower maximum temperatures.
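
(Since this thread mixes °F and °C, here is a minimal, illustrative Python sketch of the conversion and of simple checks against the rough limits quoted in this post; the limits are only the figures mentioned above, so substitute the specifications of your own hardware.)

```python
# Minimal sketch: convert readings and compare them against the rough limits
# quoted above. Check your own hardware's specifications before relying on them.

def f_to_c(temp_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

# Illustrative limits taken from this post, in degrees Celsius.
LIMITS_C = {
    "gpu": 90.0,   # consumer GeForce silicon
    "cpu": 100.0,  # the CPU discussed above
    "hdd": 55.0,   # hard drives
}

def check(component, temp_c):
    limit = LIMITS_C[component]
    status = "OK" if temp_c < limit else "TOO HOT"
    print(f"{component}: {temp_c:.1f} C (limit {limit:.0f} C) -> {status}")

if __name__ == "__main__":
    check("gpu", f_to_c(119))  # the GTX 260 reading quoted above, about 48 C
    check("gpu", 75.0)         # the TThrottle target from earlier in the thread
    check("hdd", 58.0)         # an example reading above the drive limit
```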
musicplayer

Joined: 17 May 10
Posts: 2430
Credit: 926,046
RAC: 0
Message 1407641 - Posted: 25 Aug 2013, 17:11:02 UTC
Last modified: 25 Aug 2013, 17:13:36 UTC

What is the difference between questions about performance and possible errors or malfunctions that may be present at the same time?

I just got a little pop-up box here, while running a Genefer World Record task via CUDA, giving me a Vcore warning, something I have not seen before.

I pulled up the monitor box for the CPU and the sensors, two tabs next to each other when selected.

The Vcore stays at about 1.240 to 1.260 V most of the time, but while the color label is still blue, it apparently drops below 1 V right now. The sensor for the CPU fan also lights up in yellow, but no warning is given. Apparently, with my window open, the CPU fan drops to less than 580 RPM (revolutions per minute).

Possibly the CPU was choked for a moment. I'll keep an eye on it.
Profile William Kendrick

Joined: 25 Dec 08
Posts: 46
Credit: 180,614
RAC: 0
United States
Message 1407682 - Posted: 25 Aug 2013, 20:10:52 UTC - in response to Message 1407630.  

Whoa. I tried to look up your GPU, but I couldn't find a temperature threshold anyone agreed on. 75 °C is far, far too hot. I'm running an old GTX 260 Core 216, and even after a full night I can barely get it to 119 °F (about 48 °C); how in god's name is that thing still functioning? That's an insane amount of heat to put near any components, imho.


Nah, the silicon on consumer GeForce GPUs is normally good up to 90 °C (~194 °F). Your GTX 260 has a maximum operating temperature of 105 °C, for example.
The NVS line is business-class hardware, which tends to be built with reliability in mind. It should be safe up to the same, if not higher, temperatures.

His CPU is rated for up to 100 °C as well.

A common trick with HTPC and 'silent' builds is to use custom fan profiles that turn the fans down and 'run hot' (the fans do less work holding a card at 80 °C than at 60 °C, hence less noise). As long as the heat doesn't 'bleed out' into other components or soften the chassis (possible with laptops), it's fine.

SpeedFan or Speccy should be able to tell you what temperatures your hard drives are running at. They're the components most vulnerable to heat; you need to keep those below 55 °C.


Note for anyone reading: always check the maximum temperatures for your CPU and GPU yourself; AMD CPUs in particular tend to have lower maximum temperatures.

Holy smokes! I had no idea. I run mine at around 25 °C at all times, even under full load! Wow. Fans FTW, lol.

Profile William Kendrick

Joined: 25 Dec 08
Posts: 46
Credit: 180,614
RAC: 0
United States
Message 1407684 - Posted: 25 Aug 2013, 20:12:54 UTC - in response to Message 1407641.  
Last modified: 25 Aug 2013, 20:21:01 UTC

What is the difference between questions about performance and possible errors or malfunctions that may be present at the same time?

I just got a little pop-up box here, while running a Genefer World Record task via CUDA, giving me a Vcore warning, something I have not seen before.

I pulled up the monitor box for the CPU and the sensors, two tabs next to each other when selected.

The Vcore stays at about 1.240 to 1.260 V most of the time, but while the color label is still blue, it apparently drops below 1 V right now. The sensor for the CPU fan also lights up in yellow, but no warning is given. Apparently, with my window open, the CPU fan drops to less than 580 RPM (revolutions per minute).

Possibly the CPU was choked for a moment. I'll keep an eye on it.

I'll need your BIOS revision, what mobo you're using, and your CPU, and I can tell you exactly whether you're in trouble or not. But tbh, your voltages are within reason without me even breaking open my golden book. :)
To give you some idea of typical voltage readings, I opened my Thuban 'golden numbers' text file. Most of my voltages ran around 1.2 to 1.4 V. My processor, as an example, runs at 1.4375 V, and it's overclocked to right around 4 GHz. If I were a betting man, I'd say you're just fine, voltage-wise. :)
However, it dropping like it did isn't normal, I would think. I don't believe a drop in voltage would do anything to the components, but I do believe it could lead to errors while processing. See if you can SET it in your BIOS if you are really worried.

Profile Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 1407755 - Posted: 26 Aug 2013, 0:46:17 UTC - in response to Message 1400673.  

With 8 cores, one core is 12.5%.

Anything less than 100 will free one core (allowing 7 tasks).
88 will free one core (7 tasks).

99% will free one core.

The value you fill in is an integer, which means that a value of 87.5 will be rounded down to 87. The actual value BOINC uses is then rounded down to the nearest value at which the CPU count changes.

So, for the 8-core example (values are percentages, results are cores used):
1 - 12 = 0
13 - 25 = 1
26 - 38 = 2
39 - 50 = 3
51 - 62 = 4
63 - 74 = 5
75 - 86 = 6
87 - 99 = 7
0 & 100 = 8

When you use 1 to 12, the old value "On multiprocessors, use at most N processors" kicks in as a minimum value.
When the percentage is 0, BOINC will default to using all cores.

When the percentage is 1 - 12 and N is 0, BOINC defaults to 1 core, since this setting is not the way to tell BOINC not to use any CPU at all; it will always use some CPU (for itself, and to run GPU programs).
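
(For illustration only, a minimal Python sketch of the rounding described above. It assumes the client simply floors percentage times core count divided by 100 and never drops below one core while CPU use is enabled, so the exact boundary values in the table, such as 25, 50, 75 and 87, may come out one step differently.)

```python
# Sketch of the "use at most N% of the processors" rounding described above.
# Assumption: the client floors percentage * ncpus / 100 and keeps at least
# one core whenever CPU use is not disabled outright.

def cores_from_percentage(ncpus, pct):
    if pct <= 0 or pct >= 100:
        return ncpus                 # 0 and 100 both mean "use all cores"
    cores = ncpus * pct // 100       # integer floor division
    return max(cores, 1)             # never below one busy core

if __name__ == "__main__":
    for pct in (13, 63, 88, 99, 100):
        print(f"{pct:3d}% of 8 cores -> {cores_from_percentage(8, pct)} CPU tasks")
```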
Profile James Sotherden
Joined: 16 May 99
Posts: 10436
Credit: 110,373,059
RAC: 54
United States
Message 1408213 - Posted: 27 Aug 2013, 4:26:53 UTC

It's been over a week. I said I would run the test for one week, but I decided I will go for a month.

Here are the results so far. Control computer 7003180, running 8 cores and 1 WU per GPU, no overclocks: RAC at the start was 15,891 and is now 16,202.

Test computer 6814791, running with HT off and 1 WU per GPU, no overclocks: started with a RAC of 15,550 and is now at 15,343.

When the month is up, I will turn HT back on, let the RAC equalize, then turn HT back off on the test computer and run 2 WUs on the GPU. For the control computer, I will let it run 8 cores and 2 WUs on the GPU.

But I will open another thread for that test, as it will be off topic here.
What I'm doing now is probably off topic too, so if the OP objects, let me know.

Old James
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1408347 - Posted: 27 Aug 2013, 13:54:16 UTC
Last modified: 27 Aug 2013, 13:55:36 UTC

And one more possible reason to free some cores on modern CPUs, and especially on AMD APUs running ATI AP/MB on the GPU part: throttling.
Thermal throttling and power-consumption throttling both decrease the multiplier for the CPU (and maybe the GPU too) part of the APU. I see this very clearly in test runs with a loaded APU. The CPU multiplier (x7-x40 for my APU) stays at x37 constantly if only one core is busy with a SETI app and the others plus the GPU are idle. But when the APU is under full load (and even when 1 core of 4 is idle), the CPU multiplier constantly changes between x27 and x33, and sometimes drops even to x14. Naturally this greatly increases elapsed times for all running tasks, including GPU ones. One should take this throttling feature of modern CPUs into account when searching for an optimal config. A full load can actually be slower in such a situation.
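
(For illustration, one way to watch for this kind of throttling is to sample the reported CPU frequency while the machine is loaded. The sketch below uses the psutil library and a fixed sampling interval as assumptions of convenience, not anything from the test runs described above.)

```python
# Illustrative sketch: sample the reported CPU frequency for a while and print
# the spread; large swings while every core is busy point to throttling.
# Assumes the psutil package is installed and the platform exposes cpu_freq().

import time
import psutil

def sample_cpu_freq(seconds=30, interval=1.0):
    samples = []
    for _ in range(int(seconds / interval)):
        freq = psutil.cpu_freq()   # namedtuple with current/min/max in MHz, or None
        if freq is not None:
            samples.append(freq.current)
        time.sleep(interval)
    return samples

if __name__ == "__main__":
    freqs = sample_cpu_freq(seconds=10)
    if freqs:
        print(f"min {min(freqs):.0f} MHz, max {max(freqs):.0f} MHz, "
              f"mean {sum(freqs) / len(freqs):.0f} MHz")
    else:
        print("cpu_freq() is not available on this platform")
```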
SETI apps news
We're not gonna fight them. We're gonna transcend them.
Profile James Sotherden
Joined: 16 May 99
Posts: 10436
Credit: 110,373,059
RAC: 54
United States
Message 1416272 - Posted: 16 Sep 2013, 6:10:40 UTC

Final report on running with HT on and off. After 4 weeks, here are the results of my test on two computers. Both are i7-3770s running Win7 and the Lunatics optimized apps, with 550 Ti GPUs. The control computer has 16 GB of RAM; the test computer is running 8 GB.

Test computer 6814791 started with a RAC of 15,550 and ended with a RAC of 11,857.

Control computer 7003180 started with a RAC of 15,891 and ended with 14,964.

Both machines have had their share of shorty WUs, and in the past I have noticed my RAC will drop when we get them. I didn't look to compare whether the test machine had more shorties than the other. My observations over the 4 weeks showed that the two machines averaged about a 1,200 difference in RAC, until this past week when the gap widened to its present 3,107.

I started this test with the opinion that my RAC would die big time running with HT off. I'm pleasantly surprised that, even though it did fall, it was not catastrophic.

I question whether the difference in RAM between the two machines was a factor, and whether the test machine got stuck with more shorties in the last week I ran the test.

Conclusions: your mileage may vary, but if you want the most out of your machines, I'd run with HT on.

At the beginning of this post I said I ran with one free core to feed the GPU and I was losing RAC. I only ran it for a week, so that was not a fair test. I will switch HT back on on my test rig, let the RAC equalize on both rigs, then free a core on one of the rigs and run the test for a month. I will start a new thread when I begin that test.

Any ideas on which machine should be the test one, the 16 GB RAM or the 8 GB RAM one?
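
(For comparison, the relative drops implied by the RAC numbers reported above work out as in this small sketch; it uses nothing beyond the figures already given in the post.)

```python
# Relative RAC change for the two machines, using only the numbers reported above.
results = {
    "test 6814791 (HT off)":   (15550, 11857),
    "control 7003180 (HT on)": (15891, 14964),
}

for name, (start, end) in results.items():
    change_pct = 100.0 * (end - start) / start
    print(f"{name}: {start} -> {end} ({change_pct:+.1f}%)")
# test: about -23.8%, control: about -5.8%
```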



Old James
Profile Cliff Harding
Volunteer tester
Joined: 18 Aug 99
Posts: 1432
Credit: 110,967,840
RAC: 67
United States
Message 1416394 - Posted: 16 Sep 2013, 14:56:29 UTC

On my 950 machine, I run at 90% (7 cores), with 0.5 of a CPU core allocated for all GPU operations (2 x AP or 3 x MB) on each GTX 660 SC @ 2 GB. When running strictly AP tasks, that works out to 5 CPU tasks and 2 x 2 on the GPUs. When running a mix of AP and MB, it's 4 x AP on the CPU and 3 x 2 MB on the GPUs. When running strictly MB tasks, it's 4 CPU and 3 x 2 GPU. There have been times when MB and AP share a single GPU, and then it's 1 and 1. Since the room has no A/C, temperatures on the GPUs have gotten up to 78 °C, but they usually run at 66~72 °C. When the temperature hits the 78 °C mark, I suspend GPU operations until the ambient temperature drops below 80 °F.

According to Task Manager, under the Processes tab each AP task uses 12~13% CPU regardless of whether it is a CPU or GPU task. The MB GPU tasks use 0~2%. Under the Performance tab, with the current mix of AP and MB tasks, I see CPU usage of 55~67%. I'm using just over 50% of 6 GB of RAM.
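
(As an illustrative check of the arithmetic in this setup: with 90% of 8 cores given to BOINC and 0.5 of a CPU core reserved per GPU task across two cards, the CPU task counts fall out as in the sketch below. The per-card task counts come from the post; the rest is plain arithmetic.)

```python
# Sketch of the core budget described above: 90% of an 8-thread i7 for BOINC,
# 0.5 of a CPU core reserved for every GPU task, two GTX 660 cards.
import math

total_cores = 8
boinc_cores = int(total_cores * 0.90)   # 7 cores available to BOINC
cpu_per_gpu_task = 0.5
gpus = 2

def cpu_tasks(gpu_tasks_per_card):
    reserved = gpus * gpu_tasks_per_card * cpu_per_gpu_task
    return math.floor(boinc_cores - reserved)

print("all AP on the GPUs (2 per card):", cpu_tasks(2), "CPU tasks")  # 7 - 2 = 5
print("all MB on the GPUs (3 per card):", cpu_tasks(3), "CPU tasks")  # 7 - 3 = 4
```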


I don't buy computers, I build them!!