avoid hyper-threading

Message boards : Number crunching : avoid hyper-threading
peak

Send message
Joined: 6 Mar 11
Posts: 31
Credit: 39,440
RAC: 0
Germany
Message 1085524 - Posted: 10 Mar 2011, 2:01:07 UTC

hi!

When I first started with SETI@home, my problem was that only one core of my Core2Duo processor was used: one core was at 100% and the other at 0%.
By editing my computer preferences on the web interface I could change that.
Under "On multiprocessors, use at most __ processors" I entered 99, and from then on both cores were at 100%.

I am now running SETI@home on my netbook too, which has an Intel Atom processor. The processor has only one core, but it emulates two via hyper-threading.
But because the Atom is an in-order processor, the computation of the WUs is slower with the emulated cores.
So my netbook should use only one core/CPU/thread/whatever.

So I assigned the netbook to the separate "work" preferences and changed "On multiprocessors, use at most 1 processors" in the work preferences.
But it hasn't had any effect. I clicked on "update", I restarted the client, I restarted the computer... nothing.
It is still running two WUs and both emulated cores are at 100%.

Then I created a global_prefs_override.xml with <max_cpus>1</max_cpus>.
Now the CPU usage was at 50%, but one core was at around 30% and the other at 20%, and both were fluctuating a lot.
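
(For reference, the whole override file is tiny; this is just a sketch of what I mean, assuming it sits in the BOINC data directory:)

<global_preferences>
    <!-- cap the number of CPUs BOINC will use -->
    <max_cpus>1</max_cpus>
</global_preferences>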

Independently of my actual problem, I would like only one core to be used and the other one left completely idle.
How can I achieve that?
Maybe then the Atom processor would "forget" its emulation and use the entire physical CPU for BOINC.

But maybe it would then still run at just 50%.

How can I tell the system to compute just one WU at a time?
I could suspend all the other WUs manually, but that's only a temporary solution.


Paul
ID: 1085524
Claggy
Volunteer tester

Send message
Joined: 5 Jul 99
Posts: 4654
Credit: 47,537,079
RAC: 4
United Kingdom
Message 1085526 - Posted: 10 Mar 2011, 2:10:17 UTC - in response to Message 1085524.  
Last modified: 10 Mar 2011, 2:15:26 UTC

The preference you need (because you're running BOINC 6.10.58) is:

On multiprocessors, use at most 50% of the processors
Enforced by version 6.1+
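
(If you prefer the override file to the website, the same limit can probably be expressed there too; a sketch, assuming your client accepts the percentage element in global_prefs_override.xml:)

<global_preferences>
    <!-- use at most 50% of the logical processors, i.e. one of the two -->
    <max_ncpus_pct>50</max_ncpus_pct>
</global_preferences>

With two logical CPUs that works out to one, so only one task should be started at a time.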

Claggy
ID: 1085526
Profile HAL9000
Volunteer tester
Avatar

Send message
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1085527 - Posted: 10 Mar 2011, 2:11:27 UTC - in response to Message 1085524.  

hi!

When I first started with SETI@home, my problem was that only one core of my Core2Duo processor was used: one core was at 100% and the other at 0%.
By editing my computer preferences on the web interface I could change that.
Under "On multiprocessors, use at most __ processors" I entered 99, and from then on both cores were at 100%.

I am now running SETI@home on my netbook too, which has an Intel Atom processor. The processor has only one core, but it emulates two via hyper-threading.
But because the Atom is an in-order processor, the computation of the WUs is slower with the emulated cores.
So my netbook should use only one core/CPU/thread/whatever.

So I assigned the netbook to the separate "work" preferences and changed "On multiprocessors, use at most 1 processors" in the work preferences.
But it hasn't had any effect. I clicked on "update", I restarted the client, I restarted the computer... nothing.
It is still running two WUs and both emulated cores are at 100%.

Then I created a global_prefs_override.xml with <max_cpus>1</max_cpus>.
Now the CPU usage was at 50%, but one core was at around 30% and the other at 20%, and both were fluctuating a lot.

Independently of my actual problem, I would like only one core to be used and the other one left completely idle.
How can I achieve that?
Maybe then the Atom processor would "forget" its emulation and use the entire physical CPU for BOINC.

But maybe it would then still run at just 50%.

How can I tell the system to compute just one WU at a time?
I could suspend all the other WUs manually, but that's only a temporary solution.


Paul


I think to do what you want you have to assign the processor affinity manually so it only uses a specific CPU, in this case 0 or 1. A single-threaded app will normally bounce around between CPUs like that, depending on the OS. :/

SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1085527
peak

Send message
Joined: 6 Mar 11
Posts: 31
Credit: 39,440
RAC: 0
Germany
Message 1085544 - Posted: 10 Mar 2011, 2:50:18 UTC

Yes, <ncpus>N</ncpus> in the cc_config.xml will limit BOINC to only using the number of CPUs listed, but the OS still controls how the workload is distributed between CPUs.
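
(For reference, a sketch of the cc_config.xml being talked about here, assuming it lives in the BOINC data directory:)

<cc_config>
    <options>
        <!-- make the client behave as if the host had only this many CPUs -->
        <ncpus>1</ncpus>
    </options>
</cc_config>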


I think that tells the OS:
"use only one core / half of your resources",
and the effect of that is that BOINC only starts one WU.

But if BOINC just started one WU (= one thread) and made no other demands, maybe the OS would refrain from emulating a second thread.

maybe not...
but...
I would like to try.
Is there a way to directly tell BOINC to start just one WU?
ID: 1085544
Cruncher-American Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor

Send message
Joined: 25 Mar 02
Posts: 1513
Credit: 370,893,186
RAC: 340
United States
Message 1085549 - Posted: 10 Mar 2011, 2:57:08 UTC - in response to Message 1085544.  

Yes, <ncpus>N</ncpus> in the cc_config.xml will limit BOINC to only using the number of CPUs listed, but the OS still controls how the workload is distributed between CPUs.


I think that tells the OS:
"use only one core / half of your resources",
and the effect of that is that BOINC only starts one WU.

But if BOINC just started one WU (= one thread) and made no other demands, maybe the OS would refrain from emulating a second thread.

maybe not...
but...
I would like to try.
Is there a way to directly tell BOINC to start just one WU?


Very few OSs allow you to bind an app to a specific CPU, which is what you appear to want to do. I don't know what flavors of Windows allow this, if any.
ID: 1085549
peak

Send message
Joined: 6 Mar 11
Posts: 31
Credit: 39,440
RAC: 0
Germany
Message 1085554 - Posted: 10 Mar 2011, 3:01:17 UTC - in response to Message 1085549.  

Very few OSs allow you to bind an app to a specific CPU, which is what you appear to want to do. I don't know what flavors of Windows allow this, if any.

Maybe that's the next step.
What I want right now is to run just one WU at a time.
I can achieve that by manually suspending all WUs except for one, but an automatic solution would be nice :)

Forget the OS/resources/hyper-threading issue for the moment ^^
ID: 1085554
Profile HAL9000
Volunteer tester
Avatar

Send message
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1085556 - Posted: 10 Mar 2011, 3:08:52 UTC - in response to Message 1085554.  

Very few OSs allow you to bind an app to a specific CPU, which is what you appear to want to do. I don't know what flavors of Windows allow this, if any.

Maybe that's the next step.
What I want right now is to run just one WU at a time.
I can achieve that by manually suspending all WUs except for one, but an automatic solution would be nice :)

Forget the OS/resources/hyper-threading issue for the moment ^^


See here.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1085556
Profile Tazz
Volunteer tester
Avatar

Send message
Joined: 5 Oct 99
Posts: 137
Credit: 34,342,390
RAC: 0
Canada
Message 1085558 - Posted: 10 Mar 2011, 3:15:48 UTC - in response to Message 1085549.  

Very few OSs allow you to bind an app to a specific CPU, which is what you appear to want to do. I don't know what flavors of Windows allow this, if any.


You can bring up the Task Manager and go to the Processes tab, right-click on the program, choose 'Set Affinity...', and select which processors you want that process to use. I don't know if you'd have to do that for every WU or not.
</Tazz>
ID: 1085558
peak

Send message
Joined: 6 Mar 11
Posts: 31
Credit: 39,440
RAC: 0
Germany
Message 1085560 - Posted: 10 Mar 2011, 3:24:58 UTC

If that's the only option, it sucks,
because the CPU load is at exactly 50% with that, but spread across both cores in a fluctuating way.

I just compared the two situations on my main notebook... it is indeed the same... <ncpus>1</ncpus> results in the same Task Manager graph as manually suspending all the WUs except for one.

Over the next days I will try both <ncpus>1</ncpus> and manually suspending the WUs for a long period of time on my EEE-PC and see what RAC it gets.
Somehow they
http://www.planet3dnow.de/vbulletin/showthread.php?t=344734&garpg=8
have done it...
ID: 1085560
peak

Send message
Joined: 6 Mar 11
Posts: 31
Credit: 39,440
RAC: 0
Germany
Message 1085561 - Posted: 10 Mar 2011, 3:26:50 UTC - in response to Message 1085560.  

my last post was @HAL9000
Tazz' post is being processed :D
ID: 1085561
peak

Send message
Joined: 6 Mar 11
Posts: 31
Credit: 39,440
RAC: 0
Germany
Message 1085564 - Posted: 10 Mar 2011, 3:33:19 UTC

You can bring up the Task Manager and go to the Processes tab, right-click on the program, choose 'Set Affinity...', and select which processors you want that process to use. I don't know if you'd have to do that for every WU or not.


That's so strange!
I did it:
When only one WU is running and I assign it only to Core 1, Core 1 is at 100% and Core 2 at 0%; the overall CPU load is at exactly 50%.
When I assign that WU to both cores, it fluctuates (Core 1 at 20%, Core 2 at 30%), but the overall CPU load is again at exactly 50%.
Not one percent more!

Strange!
I think internally it is still only one core, but the Task Manager displays it differently and gets it muddled.
ID: 1085564
Profile HAL9000
Volunteer tester
Avatar

Send message
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1085572 - Posted: 10 Mar 2011, 3:51:40 UTC - in response to Message 1085560.  

If that's the only option, it sucks,
because the CPU load is at exactly 50% with that, but spread across both cores in a fluctuating way.

I just compared the two situations on my main notebook... it is indeed the same... <ncpus>1</ncpus> results in the same Task Manager graph as manually suspending all the WUs except for one.

Over the next days I will try both <ncpus>1</ncpus> and manually suspending the WUs for a long period of time on my EEE-PC and see what RAC it gets.
Somehow they
http://www.planet3dnow.de/vbulletin/showthread.php?t=344734&garpg=8
have done it...


That article seems to be only about running one task at a time on the Atom. I didn't see anything about them locking the application to one CPU.

In their test where they ran two tasks at once vs. one after another, they didn't consider the long-term effects. Using their numbers, I'll expand on what they did a little.

If CPU 0 always takes about 200 minutes per task and CPU 1 always takes about 300 minutes per task, then after 600 minutes you would have 5 tasks complete (3 at 200 minutes and 2 at 300 minutes).

If running tasks one at a time and they always take about 150 minutes, then after 600 minutes you would only have 4 tasks complete.

So Hyper-Threading would get more work done over time, which is often, but not always, the situation. A better analysis would require running the same tasks both ways, as they did, but using more tasks. I normally use 12-24 hours' worth of tasks when I do that kind of testing.
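
(To put the arithmetic in one place, using those assumed per-task times:
    with HT, two tasks in parallel: 600/200 + 600/300 = 3 + 2 = 5 tasks per 600 minutes
    one task at a time: 600/150 = 4 tasks per 600 minutes)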
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1085572
Profile HAL9000
Volunteer tester
Avatar

Send message
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1085573 - Posted: 10 Mar 2011, 3:53:13 UTC - in response to Message 1085564.  

You can bring up the Task Manager and go to the Processes tab, right-click on the program, choose 'Set Affinity...', and select which processors you want that process to use. I don't know if you'd have to do that for every WU or not.


That's so strange!
I did it:
When only one WU is running and I assign it only to Core 1, Core 1 is at 100% and Core 2 at 0%; the overall CPU load is at exactly 50%.
When I assign that WU to both cores, it fluctuates (Core 1 at 20%, Core 2 at 30%), but the overall CPU load is again at exactly 50%.
Not one percent more!

Strange!
I think internally it is still only one core, but the Task Manager displays it differently and gets it muddled.


That is how it is supposed to work. That functionality has been in there since at least Windows NT 4.0, some 15 years ago.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1085573
Profile Wiggo
Avatar

Send message
Joined: 24 Jan 00
Posts: 34872
Credit: 261,360,520
RAC: 489
Australia
Message 1085577 - Posted: 10 Mar 2011, 4:14:36 UTC - in response to Message 1085572.  

If that's the only option, it sucks,
because the CPU load is at exactly 50% with that, but spread across both cores in a fluctuating way.

I just compared the two situations on my main notebook... it is indeed the same... <ncpus>1</ncpus> results in the same Task Manager graph as manually suspending all the WUs except for one.

Over the next days I will try both <ncpus>1</ncpus> and manually suspending the WUs for a long period of time on my EEE-PC and see what RAC it gets.
Somehow they
http://www.planet3dnow.de/vbulletin/showthread.php?t=344734&garpg=8
have done it...


That article seems to be only about running one task at a time on the Atom. I didn't see anything about them locking the application to one CPU.

In their test where they ran two tasks at once vs. one after another, they didn't consider the long-term effects. Using their numbers, I'll expand on what they did a little.

If CPU 0 always takes about 200 minutes per task and CPU 1 always takes about 300 minutes per task, then after 600 minutes you would have 5 tasks complete (3 at 200 minutes and 2 at 300 minutes).

If running tasks one at a time and they always take about 150 minutes, then after 600 minutes you would only have 4 tasks complete.

So Hyper-Threading would get more work done over time, which is often, but not always, the situation. A better analysis would require running the same tasks both ways, as they did, but using more tasks. I normally use 12-24 hours' worth of tasks when I do that kind of testing.

Actually, if you think about it this way, it's likely better to run each core at 50%.
The physical core is the work horse while the virtual core is the cart.

The physical core handles the workload while the virtual core handles the simple stuff, which in the end gives you better user responsiveness from the unit.

If you lock the WU to the physical core then the system will likely be very sluggish, while locking the WU to the virtual core will likely result in very slow completion times.

With hyper-threading turned off you will likely still wind up with a slow-responding setup, but without fully using what the CPU is capable of.

Now that's just my very simple way of explaining things, but it's what I found out with P4s.

Cheers.
ID: 1085577
Profile HAL9000
Volunteer tester
Avatar

Send message
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1085580 - Posted: 10 Mar 2011, 4:21:34 UTC - in response to Message 1085577.  

If that's the only option, it sucks,
because the CPU load is at exactly 50% with that, but spread across both cores in a fluctuating way.

I just compared the two situations on my main notebook... it is indeed the same... <ncpus>1</ncpus> results in the same Task Manager graph as manually suspending all the WUs except for one.

Over the next days I will try both <ncpus>1</ncpus> and manually suspending the WUs for a long period of time on my EEE-PC and see what RAC it gets.
Somehow they
http://www.planet3dnow.de/vbulletin/showthread.php?t=344734&garpg=8
have done it...


That article seems to be only about running one task at a time on the Atom. I didn't see anything about them locking the application to one CPU.

In their test where they ran two tasks at once vs. one after another, they didn't consider the long-term effects. Using their numbers, I'll expand on what they did a little.

If CPU 0 always takes about 200 minutes per task and CPU 1 always takes about 300 minutes per task, then after 600 minutes you would have 5 tasks complete (3 at 200 minutes and 2 at 300 minutes).

If running tasks one at a time and they always take about 150 minutes, then after 600 minutes you would only have 4 tasks complete.

So Hyper-Threading would get more work done over time, which is often, but not always, the situation. A better analysis would require running the same tasks both ways, as they did, but using more tasks. I normally use 12-24 hours' worth of tasks when I do that kind of testing.

Actually, if you think about it this way, it's likely better to run each core at 50%.
The physical core is the work horse while the virtual core is the cart.

The physical core handles the workload while the virtual core handles the simple stuff, which in the end gives you better user responsiveness from the unit.

If you lock the WU to the physical core then the system will likely be very sluggish, while locking the WU to the virtual core will likely result in very slow completion times.

With hyper-threading turned off you will likely still wind up with a slow-responding setup, but without fully using what the CPU is capable of.

Now that's just my very simple way of explaining things, but it's what I found out with P4s.

Cheers.


Man, I remember all the testing back in the S@H Classic days when HT was introduced in the P4s. There were also people doing tests running 2 tasks on CPUs that did not have HT.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1085580
Profile Wiggo
Avatar

Send message
Joined: 24 Jan 00
Posts: 34872
Credit: 261,360,520
RAC: 489
Australia
Message 1085582 - Posted: 10 Mar 2011, 4:25:05 UTC - in response to Message 1085580.  
Last modified: 10 Mar 2011, 4:29:52 UTC

Yeah, either the wrong CPU in the right motherboard, or at the other end the right CPU in the wrong motherboard. lol

Also remember that this is a netbook, where you don't want too much heat build-up either, though you still want a setup that is also usable.

Cheers.
ID: 1085582
peak

Send message
Joined: 6 Mar 11
Posts: 31
Credit: 39,440
RAC: 0
Germany
Message 1085594 - Posted: 10 Mar 2011, 4:58:44 UTC

That article seems to be only about running one task at a time on the Atom. I didn't see anything about them locking the application to one CPU.

In their test where they ran two tasks at once vs. one after another, they didn't consider the long-term effects. Using their numbers, I'll expand on what they did a little.

If CPU 0 always takes about 200 minutes per task and CPU 1 always takes about 300 minutes per task, then after 600 minutes you would have 5 tasks complete (3 at 200 minutes and 2 at 300 minutes).

If running tasks one at a time and they always take about 150 minutes, then after 600 minutes you would only have 4 tasks complete.

So Hyper-Threading would get more work done over time, which is often, but not always, the situation. A better analysis would require running the same tasks both ways, as they did, but using more tasks. I normally use 12-24 hours' worth of tasks when I do that kind of testing.


well...
You are completely right.
I will not attempt to disable the hyper-threading anymore :P


sorry for not thinking

Thanks to all of you for the help.
ID: 1085594
Profile Gundolf Jahn

Send message
Joined: 19 Sep 00
Posts: 3184
Credit: 446,358
RAC: 0
Germany
Message 1085626 - Posted: 10 Mar 2011, 8:05:57 UTC - in response to Message 1085560.  

Over the next days I will try both <ncpus>1</ncpus> and manually suspending the WUs for a long period of time on my EEE-PC and see what RAC it gets.

Don't use <ncpus>1</ncpus>!! Use "On multiprocessors, use at most 50% of the processors" as Claggy suggested; with that, only one task will be started on a two-core (real or virtual) system.

Regards,
Gundolf
Computers are not everything in life. (Just a little joke.)

SETI@home classic workunits 3,758
SETI@home classic CPU time 66,520 hours
ID: 1085626
