Message boards : Number crunching : Advice on system optimization needed
Previous · 1 . . . 3 · 4 · 5 · 6
Eric Claussen · Joined: 31 Jan 00 · Posts: 22 · Credit: 2,319,283 · RAC: 0
I'm going to cut back until I can get more solar on the roof. With the current rates in California this is costing about $6 a day. I think I will set up the timers so it runs during off-peak hours for now.
Eric
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
No, I don't have a problem with overcommitting. That is easy to fix: just reduce the number of CPU cores used until the gap decreases to around 1-2 minutes. I'm OK with that. Since I can't get affinity to work correctly on the problem hosts, I comment out those lines in the script. Problem solved. I can set affinity on the GPU tasks with no issues, on any host. I just want to figure out why the script works differently on cloned, identical systems, and works correctly on my one Intel system. Not enough data.
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
> I'm going to cut back until I can get more solar on the roof. With the current rates in California this is costing about $6 a day. I think I will set up the timers so it runs during off peak hours for now.
I need more solar too, about triple what I have now. The crunching is costing me $33 a day.
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
TBar · Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768
> No I don't have a problem with overcommitting. That is easy to fix, just reduce the number of cpu cores used until the gap decreases to around 1-2 minutes. I'm OK with that. Since I can't get affinity to work correctly on the problem hosts, I comment out those lines in the script. Problem solved. I can set affinity on the gpu tasks with no issues. On any host. I just want to figure out why the script works differently on cloned identical systems. And works correctly on my one Intel system. Not enough data.
Hmmmm, so we can ignore what you said here?
> ...So the only option is to use cpu% to reduce the number of cpu cores used. But the thread scheduler can't keep the task on the same thread and constantly moves it around. And you end up with both an overcommitted cpu and poor cpu_time/run_time tracking to boot.
As far as I know, using cpu% works as it should on all CPUs.
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
> Hmmmm, so we can ignore what you said here?
As usual, you post something out of context just for the sake of being argumentative. The Intel CPU does not move work around the cores. The AMD CPUs do, on the hosts where I can't get the script to set affinity correctly. When the load moves around, the separation between cpu_time and run_time increases, forcing you to drop cores via CPU % to get them back into balance. The one AMD host that does have affinity working correctly can run a higher CPU % than the others simply because it doesn't move the load around, so it does more work than the others on a like-for-like basis.
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
TBar · Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768
Not out of context at all. Your post claimed that cpu% wasn't keeping the CPU from being over-committed; read it yourself. Intels do move work around the cores. All modern CPUs do. It's very easy to see; I first noticed it years ago when still running BOINC on Windows. My problem is that, in a thread about system optimization, you made a post saying cpu% doesn't work. That is a problem, and you can expect a response to that post. I have been playing around with both methods of providing CPU support to GPUs today on the 7.16.1 client. All I can say is that if you have an Intel processor, either method works and everything runs fine. If on the other hand you have an AMD processor, you will still be cussing the brain-dead Linux AMD CPU thread scheduler and looking for compromises.
BTW, I don't see CPU affinity mentioned once in that post.
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
Well, excuse me for not including all the nitty-gritty of my configuration in my first post. I have used my affinity script since moving to Linux, and I assigned affinity even back on Windows with Process Lasso, because of the unique nature of AMD CPUs. I don't know how you are determining that a CPU task moves around on Intel or AMD cores; I have never seen my CPU tasks move around when using affinity. The purpose is to lock the PID of the task to the core it starts on. Process Lasso does it, and so does the thread scheduler in Linux, when I can get it to work. My post simply stated that I can't get what should work to work on some of my hosts, and that is what I find frustrating.
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
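The affinity script itself isn't shown in the thread, so as an illustration only, here is a minimal sketch of the mechanism being described: locking a PID to one core on Linux. The `pin_to_core` helper is hypothetical; shell scripts usually do the same thing with `taskset -cp <core> <pid>`.

```python
import os

# Illustrative sketch only (assumes Linux): lock a process to a single
# CPU core, the same operation as `taskset -cp <core> <pid>`.
def pin_to_core(pid: int, core: int) -> None:
    """Restrict the given PID to one CPU core (pid 0 means this process)."""
    os.sched_setaffinity(pid, {core})

pin_to_core(0, 0)                 # pin this process to core 0
print(os.sched_getaffinity(0))    # → {0}
```

A real script would loop over the PIDs of the running BOINC tasks (e.g. found with `pgrep`) and assign each one its own core.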
TBar · Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768
Your post made it sound as though you couldn't stop CPU over-commitment; you didn't mention affinity. Most people don't bother with affinity, and they also don't bother with over-clocking. Intel has been moving work around the cores pretty much since they developed multi-core CPUs, to balance the loads and temps. It's pretty common knowledge; that's why they have affinity. Next time you have a chance, fire up Windows, use the options in Raistmer's SOG app to lock the CPU core on just a couple of tasks, and watch what happens in the CPU monitor. Just run a couple of SOG tasks without the OS's affinity settings, and you should see the tasks move from core to core as the core temps rise, just the way the developers intended. I decided to stop fighting the developers some time back; if they think it's important to balance loads and temps, that's good enough for me.
Tom M · Joined: 28 Nov 02 · Posts: 5126 · Credit: 276,046,078 · RAC: 462
On this system: https://setiathome.berkeley.edu/show_host_detail.php?hostid=8696615 the RAC is still increasing with great enthusiasm, to the point where it has cracked the top 20 (at least for the moment). The GPUs are set to use 0.5 CPU each, using the Linux/TBar/petro All-in-One combo. The "CPU over-committed" issue with the time differences continues: I am guessing the difference between wall-clock time and CPU time is about 1/3 (tasks take maybe a third again more wall-clock time). As I said before, I am waiting for the RAC climb to peter out.
Tom
A proud member of the OFA (Old Farts Association).
rob smith · Joined: 7 Mar 03 · Posts: 22540 · Credit: 416,307,556 · RAC: 380
The one sure-fire way of stopping CPU over-commitment is to use the "use at most x% of CPUs" setting: set it to leave one free core per running GPU task. If the CPU has fewer cores than the number of GPU tasks being run, then you are going to be stuck with over-commitment even if you stop CPU crunching. Remember, the "use 0.5 CPU" setting is a weak target, not an absolute figure: if the GPU tasks need more than 0.5 CPU, they will attempt to grab the extra they need, most likely fail to get it, and in doing so slow down both the CPU and GPU tasks.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
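The arithmetic behind that rule can be sketched with hypothetical numbers (a 16-thread CPU running 2 concurrent GPU tasks, neither figure is from the thread):

```python
# Worked example of "use at most x% of CPUs": reserve one full
# thread per concurrent GPU task, give CPU tasks the remainder.
threads = 16      # logical CPUs on the host (hypothetical)
gpu_tasks = 2     # concurrent GPU tasks (hypothetical)

cpu_pct = 100 * (threads - gpu_tasks) / threads
print(cpu_pct)    # → 87.5, i.e. set "use at most 87% of CPUs"
```

Since BOINC rounds the resulting core count, setting 87% on this host leaves 14 threads for CPU work and 2 free for GPU support.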
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13855 · Credit: 208,696,464 · RAC: 304
Or just reserve one, a half, or a third of a CPU thread to support each GPU WU being processed.
Grant
Darwin NT
TBar · Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768
All you have to do is lower the "Use at most ___ % of the CPUs" setting until you reach the point of not being over-committed. Anything else is a red herring. That one setting should be all you need. That's what the CUDA developers decided 12 years ago, and it holds to this day. All the other stuff is just confusing people. One setting is all it takes.
Tom M · Joined: 28 Nov 02 · Posts: 5126 · Credit: 276,046,078 · RAC: 462
> All you have to do is Lower the Use at Most ___ % of the CPUs until you reach the point of Not being Over-committed.
I think all three of you are referring to the setting that controls the number of CPUs/threads that BOINC will use. Right?
Tom
A proud member of the OFA (Old Farts Association).
rob smith · Joined: 7 Mar 03 · Posts: 22540 · Credit: 416,307,556 · RAC: 380
Yes. The "use at most" setting restricts the number of cores available to CPU tasks, but does not limit the number of cores that GPU tasks can access. Those required by GPUs are drawn preferentially from the cores not committed elsewhere (BOINC, OS, etc.).
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640
Yes. Which is why, if you are using the -nobs argument, it's a good idea to tell BOINC that you are reserving 1 CPU core per GPU, so that when you set the CPU %, you actually get that number (or very close to it) rather than having to play trial and error with the CPU % values to reach your desired outcome.
Seti@Home classic workunits: 29,492 · CPU time: 134,419 hours
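The usual way to tell BOINC about that reservation is an `app_config.xml` in the project directory. A minimal sketch follows; the `<name>` value is an assumption, so check the actual app name in your client_state.xml before using it:

```xml
<app_config>
  <app>
    <!-- app name is an assumption; confirm it in client_state.xml -->
    <name>setiathome_v8</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>  <!-- one task per GPU -->
      <cpu_usage>1.0</cpu_usage>  <!-- reserve a full CPU core per GPU task -->
    </gpu_versions>
  </app>
</app_config>
```

BOINC reads this file at startup or on "Read config files", and the reserved cores are then excluded from the count that the CPU % setting divides up.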
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.