Message boards : Number crunching : CPU affinity in multicore systems
Joined: 29 Sep 99 Posts: 16515 Credit: 4,418,829 RAC: 0
I remember that in earlier versions of Crunch3r's optimised SETI clients there was code to set processor affinity. I see from Task Manager that this is not the case with V2.4. What are the advantages or disadvantages of having 4 WUs crunching on a Quad, either floating across the 4 cores or with each WU locked to a core? It's good to be back amongst friends and colleagues
Richard Haselgrove Joined: 4 Jul 99 Posts: 14687 Credit: 200,643,578 RAC: 874
> I remember that in earlier versions of Crunch3r's optimised SETI clients there was code to set processor affinity.

Processor affinity isn't set by the SETI application, but by the BOINC client. I think Crunch3r's recent BOINC builds still have it. I tried one of them (and Trux's, before that) when I was trying to work out the behaviour of my then-new 8-core, 15 months or so ago: I couldn't detect any difference. I think others have reported similar findings. Edit - results reported here.
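For illustration only: a client that launches the science application could pin that child process to one core on Windows with the Win32 SetProcessAffinityMask call. The sketch below is an assumption about how such code might look, not the actual BOINC or Crunch3r implementation; the `child` handle and the mask value are purely illustrative.

```c
/* Illustrative sketch only -- not actual BOINC client code.
 * Pins an already-launched child process to logical CPU 0 on Windows. */
#include <windows.h>
#include <stdio.h>

/* 'child' is assumed to be the handle returned by CreateProcess
 * for the science application. */
static void pin_to_core0(HANDLE child)
{
    DWORD_PTR mask = 1;  /* bit 0 set -> allow logical CPU 0 only */
    if (!SetProcessAffinityMask(child, mask))
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
}
```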
Joined: 29 Sep 99 Posts: 16515 Credit: 4,418,829 RAC: 0
Thanks Richard. I felt there may have been an advantage to the processor lock, but if past trials showed little difference then the question has been answered. I will leave the thread open to see if anyone else wants to chime in; after that I will ask for the thread to be locked. It's good to be back amongst friends and colleagues
Alinator Joined: 19 Apr 05 Posts: 4178 Credit: 4,647,982 RAC: 0
IIRC, the benefits are:

1.) Locking a task to a specific core avoids the overhead involved if the OS decides to move it off to a different core for some reason.

2.) There's an advantage in not having two tasks from the same project running on shared-cache CPUs. Presumably this is due to reduced contention for the shared resource between the two processes.

Alinator
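If one did want each task locked to its own core, on Linux a process can pin itself with sched_setaffinity. This is a minimal sketch under that assumption (the core index is chosen by the caller), not code from any SETI or BOINC application:

```c
/* Minimal sketch: pin the calling process to a single core on Linux. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int pin_self_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);                      /* allow only this one core */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* pid 0 = self */
        perror("sched_setaffinity");
        return -1;
    }
    return 0;
}
```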
Joined: 19 May 99 Posts: 411 Credit: 1,426,457 RAC: 0
I run Crunch3r's SSSE3 version, and when a WU completes, one of the 4 cores drops from 100% to about 15% for a second or so before the next WU is started. I had a run of 23-minute WUs and noted that each time it was a different core that dropped, so I believe each instance is running on one core. It's funny though: on one PC the processor load drops, but on the other it does not.
Richard Haselgrove Joined: 4 Jul 99 Posts: 14687 Credit: 200,643,578 RAC: 874
> IIRC, the benefits are:

But are they measurable in practice?
kittyman Joined: 9 Jul 00 Posts: 51507 Credit: 1,018,363,574 RAC: 1,004
> IIRC, the benefits are:

NOT..........

"Time is simply the mechanism that keeps everything from happening all at once."
Alinator Joined: 19 Apr 05 Posts: 4178 Credit: 4,647,982 RAC: 0
> IIRC, the benefits are:

I'm pretty sure archae86 did some testing a ways back (could have been just for hyperthreaded CPUs though), and although it wasn't a huge difference, it was measurable. Something like 3 or 4 percent comes to mind. Most likely it amounts to even less today for the current Intels.

@ David: Most likely one of the cores has to pick up the BOINC CC to handle the comms and other 'cleanup' chores for ending the current task and starting the next one.

<edit> LOL... I see Mark saw this thread, and if anyone would know, he probably does! ;-)

Alinator
archae86 Joined: 31 Aug 99 Posts: 909 Credit: 1,582,816 RAC: 0
I saw a benefit from mixing SETI and Einstein that was strong on my hyperthreaded Gallatin, though it varied quite a bit with specific application releases. There remain some interesting effects in mixing Einstein with SETI, and in the interaction of high-VHAR SETI units, but none of those points is the topic of this thread.

I don't recall ever reporting a benefit from restricting SETI or Einstein applications to a specific processor, however, which is the topic of this thread. I did a brief experiment reported in this post, from which I concluded there was no observable benefit from affinity setting for the case at hand. In fact, while the results were probably below significance, at face value they showed a tiny deficit.

My personal opinion is that a general impression that affinity "ought to" help keeps a durable myth alive in the face of multiple non-confirming tests and a lack of carefully reported confirming tests.
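A pinned-versus-unpinned comparison of this kind can be sketched as follows. This is not the experiment reported above; it is only an assumed illustration on Linux with a toy busy-loop standing in for a real work unit, and many repeated runs per configuration would be needed before a difference of a few percent could be separated from run-to-run noise.

```c
/* Sketch: time an identical toy workload unpinned, then pinned to core 0. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <time.h>

static double run_workload(void)
{
    struct timespec t0, t1;
    volatile double x = 0.0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < 200000000L; i++)    /* toy workload, not a real WU */
        x += (double)i * 1e-9;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    printf("unpinned: %.3f s\n", run_workload());

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                         /* now restrict to core 0 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");

    printf("pinned:   %.3f s\n", run_workload());
    return 0;
}
```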
Alinator Joined: 19 Apr 05 Posts: 4178 Credit: 4,647,982 RAC: 0
Yep, thanks for posting the link to the more recent work you did on the question. That was what got me thinking of you when I posted earlier, but I couldn't find it right away. Sorry if my post made it sound like you had confirmed that 'urban legend'.

When you stop to think about it, any well-designed SMP kernel would try to avoid setting off a scenario leading to wasteful 'context switches' across true multi-core processors unless there was no other choice. I suppose it's even possible that if you restricted the kernel's ability to schedule by setting affinity manually, it might have to make 'bad' choices for other tasks, with the net effect of degrading everything as a result.

Hyperthreaded CPUs are a whole different story, and the well-documented problems there were one reason hyperthreading was withdrawn when the Core family was released.

Alinator
W-K 666 Joined: 18 May 99 Posts: 19490 Credit: 40,757,560 RAC: 67
I ran a dual P3 and a P4 HT computer on Einstein and Seti, with no work cache and a 50:50 share. On both there was a benefit to Seti of about 7% compared to running Seti:Seti, but I saw no benefit to Einstein in any configuration. This was over 18 months ago, and the apps at both sites have changed considerably since then. I have not tried since; the E6600 and Q6600 in the family belong to my sons, and the computers are frequently not here, so monitoring would be difficult.
Joined: 29 Sep 99 Posts: 16515 Credit: 4,418,829 RAC: 0
An interesting debate, and the consensus is that there is no advantage to using processor affinity. Good! It's good to be back amongst friends and colleagues
Richard Haselgrove Joined: 4 Jul 99 Posts: 14687 Credit: 200,643,578 RAC: 874
I saw Crunch3r was active in the next-door thread five minutes ago. Surprised he didn't drop in here - he's been the main advocate of CPU affinity in recent years.
Joined: 4 Jul 99 Posts: 1575 Credit: 4,152,111 RAC: 1
> An interesting debate, and the consensus is that there is no advantage to using processor affinity.

I would say the advantage is negligible rather than non-existent. Earlier comparisons seemed to indicate there was some advantage to setting affinity with discrete processors, though less than 5%. Since there are so few systems of this type, it was not worth the effort to put it into a standard client.

BOINC WIKI

BOINCing since 2002/12/8
DJStarfox Joined: 23 May 01 Posts: 1066 Credit: 1,226,053 RAC: 2
It would make sense that a dumber kernel would require CPU affinity to be set on tasks. However, with the Linux 2.6 kernel and the work MS did on theirs, schedulers are now smart enough to use CPUs better than ever before. That said, if you use Intel SpeedStep or AMD PowerNow!, keeping a task on one CPU will prevent the CPU from repeatedly speeding up and hitting the brakes. I would also venture to guess that having a lot of CPUs (9 or more) would make affinity more worthwhile; it would keep cache misses lower and stop the CPUs from fighting each other for memory access.
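Whether the kernel actually lets a compute thread float between cores can be observed directly on Linux with sched_getcpu(). The following is only an assumed sketch with a toy busy-loop, meant to show the idea rather than any project's code:

```c
/* Sketch: report which core the calling thread is on while it computes,
 * to see whether the scheduler migrates it between cores. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    int last = -1;
    volatile double x = 0.0;
    for (long i = 0; i < 1000000000L; i++) {
        x += 1e-9;
        if ((i % 10000000L) == 0) {           /* check periodically */
            int cpu = sched_getcpu();
            if (cpu != last) {
                printf("iteration %ld: now on CPU %d\n", i, cpu);
                last = cpu;
            }
        }
    }
    return 0;
}
```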
Joined: 6 Jul 03 Posts: 262 Credit: 4,430,487 RAC: 0
On a related note, Intel's Dynamic Acceleration, a feature on mobile Penryn processors meant to increase performance when only one core is being utilized, was found to work only when the application was pinned via CPU affinity (source). This was the case under Windows Vista; other OSes may behave differently.
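For completeness, one way an application can confine its own main thread under Windows is SetThreadAffinityMask. This sketch is an assumption for illustration and is not the code from the source cited above:

```c
/* Sketch only: a Windows application pinning its own main thread to
 * logical CPU 0, e.g. so a single-core turbo feature could engage. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Returns the previous affinity mask, or 0 on failure. */
    DWORD_PTR old = SetThreadAffinityMask(GetCurrentThread(), 1);
    if (old == 0)
        fprintf(stderr, "SetThreadAffinityMask failed: %lu\n", GetLastError());
    /* ... compute-heavy work would run here, confined to CPU 0 ... */
    return 0;
}
```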