Message boards :
Number crunching :
hyperthreading and processor affinity in linux with taskset
agentrnge (Joined: 14 Jul 02, Posts: 3, Credit: 26,305,484, RAC: 8)
I'm just linking to the main BOINC forum. Admins, you have my apologies in advance if this is frowned upon. I only saw one mention of "taskset" in a search of the SETI@home forums. I describe my ongoing adventures in setting CPU affinity with taskset in the post linked below. I am trying to prevent overloaded hyperthreaded cores from causing slowdowns on my PC. So far so good. http://boinc.berkeley.edu/dev/forum_thread.php?id=9698#57193 Cheers.
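For anyone wanting to try the same thing, a minimal sketch of the taskset workflow (the PID 12345 and the CPU IDs are made-up placeholders; taskset ships with util-linux):

```shell
# Show the current CPU affinity mask of this shell.
taskset -p $$

# Count logical CPUs so we know which IDs are valid to pin to.
nproc

# Pin a hypothetical running process (PID 12345) to logical CPUs 0 and 2,
# e.g. one thread per physical core on a 2-core hyperthreaded chip:
# taskset -pc 0,2 12345
```

The commented-out line is the actual pinning step; left inert here since the PID is hypothetical.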
Urs Echternacht (Joined: 15 May 99, Posts: 692, Credit: 135,197,781, RAC: 211)
You are crunching 24/7. Have you tried setting <no_priority_change>1</no_priority_change> in the cc_config.xml file in your BOINC directory? Actions that require the user to become root are not recommended, which might be why you didn't find any mentions of direct priority or affinity changes. _\|/_ U r s
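For reference, that option lives in cc_config.xml in the BOINC data directory; a minimal sketch (any other options you already use would sit alongside it inside the same `<options>` element):

```xml
<cc_config>
  <options>
    <!-- Run science apps at regular priority instead of idle priority -->
    <no_priority_change>1</no_priority_change>
  </options>
</cc_config>
```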
agentrnge (Joined: 14 Jul 02, Posts: 3, Credit: 26,305,484, RAC: 8)
Urs: I did not, and never had the <no_priority_change> option set. But I did a little more looking into what was causing my system to bog down, and it was just my environment. I was storing my BOINC dir on my server. NFS over a 1 Gb link was never a problem for any work units until I started doing climateprediction tasks. I just looked at the size of the project folder: 9 GB. I figured it must be doing crazy high IO, and that was grinding things to a halt on my network and slowing other things down. I saw lots of iowait on the server.

So I moved it to local storage (SSD), then looked a little closer at it while it ran. It was regularly (every few seconds) hitting 100-200 MB/sec of disk IO (over 1/2-second averages, as viewed via iostat/iotop). So this project was IO bound and kept my network and server overtasked. I initially had my perf monitoring logger polling at 2 minutes, and that was masking the high IO over the longer averaging window.

So the above nonsense about core affinity might not really be needed. Still running with it for now, though.
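If anyone wants to reproduce that kind of check without installing anything, the kernel's raw per-device IO counters in /proc/diskstats are what iostat averages over; a rough sketch (iostat and iotop themselves come from the sysstat and iotop packages):

```shell
# Snapshot the per-device IO counters, wait half a second, snapshot again;
# the difference is roughly what iostat reports for that interval.
cat /proc/diskstats > /tmp/ds_before
sleep 0.5
cat /proc/diskstats > /tmp/ds_after

# Show which devices' counters moved during the interval
# (diff exits non-zero when they differ, so swallow that).
diff /tmp/ds_before /tmp/ds_after || true

# The polished equivalents: 'iostat -x 1' for 1-second device stats,
# 'sudo iotop -o' for live per-process IO.
```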
ivan (Joined: 5 Mar 01, Posts: 783, Credit: 348,560,338, RAC: 223)
So I moved it to local storage ( SSD ). Then looked a little closer at it while it ran. Regularly ( every few seconds ) hitting 100-200 MB/sec of disk IO ( over 1/2 second averages as viewed via iostat/iotop). So this project was IO bound. Kept my network and server overtasked. I've installed lm-sensors and gkrellm on as many of my Linux boxes as I can. Then I run sensors-detect to collect sensor info and run gkrellm, configuring which sensors and subsystems to monitor. I have a script that starts up ssh -Yf user@machine gkrellm for all my machines, and then I can monitor them all in one workspace of my (Windows+cygwin) desktop. It's getting difficult to monitor those 20-CPU Xeons with only a 1920x1200 monitor, though! :-/
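That launcher could be sketched roughly like this (the host names are placeholders, and 'echo' is left in so the loop only prints the commands it would run):

```shell
#!/bin/sh
# Start a remote gkrellm on each machine, displayed locally over X11.
# -Y: trusted X11 forwarding; -f: ssh backgrounds itself after auth.
HOSTS="box1 box2 box3"   # placeholder machine list
for host in $HOSTS; do
    echo ssh -Yf "user@$host" gkrellm   # remove 'echo' to actually launch
done
```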
agentrnge (Joined: 14 Jul 02, Posts: 3, Credit: 26,305,484, RAC: 8)
Time for a higher-res monitor, I guess. lol

Speaking of monitoring: yes, I love lm-sensors. I also recently found pcm, from Intel. It overlaps with the info from turbostat, but gives more details about what the CPUs are doing. Both use the msr module. It's Intel software; AFAIK it only reports on newish Intel CPUs, but I am presently only running Intel CPUs at home. I'll check out gkrellm.

Right now, instead of watching things actively, I have a few perf collection thingies running, just writing to log files. I take a quick look at the logs occasionally, sometimes looking at live output, but not regularly.
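For anyone trying pcm or turbostat: both read CPU model-specific registers through the msr kernel module, so a quick availability check is useful first (this is only a heuristic, since msr can also be built into the kernel; the turbostat line is commented out because it needs root):

```shell
# /proc/modules lists currently loaded kernel modules.
if grep -q '^msr ' /proc/modules 2>/dev/null; then
    echo "msr module loaded"
else
    echo "msr not loaded; try: sudo modprobe msr"
fi

# With msr available, a 5-second per-core summary from turbostat:
# sudo turbostat --quiet sleep 5
```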
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.