Message boards :
Number crunching :
% of processors to use for optimum operation
Mark + Rita Hadley (earthbilly) Joined: 18 Dec 16 Posts: 30 Credit: 28,355,726 RAC: 0
I’ve been tinkering with SETI output for over a year now and have some interesting computing results I would like to share. I didn’t read the thread all the way to the end, so I’m not sure if this is old news; forgive me if I am repeating what everyone already knows.

This is based on an i7 4-core/8-processor CPU (string), an i7 6-core/12-processor CPU (string), and an 8-core/16-processor CPU. The percentages of running tasks give similar results on 2-core/4-processor, 4-core/8-processor, 6-core/12-processor, and 8-core/16-processor CPUs.

I began by running precisely 7 of 8 processors in the options for an i7-4770. I then realized that if I changed the setting to 6.9 processors, I ran one less CPU task, but the GPU ran much faster, increasing my output. I always take note of wattage because my home runs (or tries to run) 100% on homemade solar electricity, and I observed no reduction in wattage with these changes. One time I reduced the load to half the processors, plus the extra 0.9 to give the GPU processing bandwidth on the CPU, and the wattage did not drop; however, the remaining-time-to-completion clock was dropping by several seconds per second.

I have experimented with this idea many times and see a mathematical relationship I would like to share. On an 8-processor CPU, when we drop from 7 to 4 processors or tasks (not counting the extra just-shy-of-a-whole processor for the GPU), each processor completes its task nearly three times faster. I see no discernible drop in wattage used by the computer. That is: three fewer processors or tasks working, with (conservatively) double the output from each remaining processor, for a final output equaling the work of 8 processors sharing bandwidth and bus width, while running cooler and without dropping wattage.

So, I think there are two takeaways. First, extra processor bandwidth must be assigned to operate the GPU without enacting an additional task. Second, reducing the number of tasks does not reduce the overall output per day (in fact it increases it); it does not significantly reduce the wattage demand of the computer, yet it reduces temperature stress.

Lastly, I find it a difficult decision between running 4 tasks plus the GPU extra, and 3 tasks plus a GPU-designated processor, on my 8-processor CPUs. This is the first reduction in tasking that reduces wattage used, and it very slightly lowers RAC. I almost always run 49.875% for an i7-4770 or i7-4790K, and 49.95% for an i7-6800K. Contrary to expectation, the RAC goes up compared to larger-percentage tasking.

Sunny regards,
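The percentages quoted here map onto BOINC's "use at most X% of the CPUs" preference. A minimal sketch of the arithmetic, assuming the client multiplies the thread count by the percentage and that fractional budgets round down (both the floor rounding and the 0.9 GPU reservation are my reading of the post, not the BOINC source):

```python
import math

def cpu_budget(threads: int, pct: float) -> float:
    """CPU-thread budget implied by the 'use at most pct% of CPUs' setting."""
    return threads * pct / 100.0

def cpu_tasks(threads: int, pct: float, gpu_cpu_reserve: float = 0.9) -> int:
    """Whole CPU tasks that fit after reserving a fraction for the GPU app.

    The 0.9 reservation mirrors the 'extra 0.9 of a processor' in the post;
    flooring the remainder is an assumption about how the budget is used.
    """
    return math.floor(cpu_budget(threads, pct) - gpu_cpu_reserve)

# i7-4770, 8 threads at 49.875% -> budget 3.99, i.e. 3 CPU tasks + GPU reserve
print(cpu_budget(8, 49.875))   # 3.99
print(cpu_tasks(8, 49.875))    # 3
# 86.25% of 8 threads gives the '6.9 processors' figure from the post
print(cpu_budget(8, 86.25))    # 6.9
```

So 49.875% on an 8-thread part is just under 4 threads: enough for 3 CPU tasks plus the GPU's reserved fraction.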
Mark + Rita Hadley (earthbilly) Joined: 18 Dec 16 Posts: 30 Credit: 28,355,726 RAC: 0
Oh my! April 1. This is not an April Fools' joke!
Keith Myers Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873
You have observed nothing revolutionary. If you take a look at your completed, valid, non-overflow CPU tasks on the task detail page on the website, look at two values listed for each task: cpu_time and run_time. Ideally, the two should always match or be within a minute of each other. The run_time is the wall-clock elapsed time from start to finish of the task. The cpu_time is the actual time spent computing on the task. If the two match, the processor thread operated at 100% efficiency and didn't spend time servicing some other process on the thread.

When there is a large disparity between run_time and cpu_time, we say the system is overloaded and trying to run too many simultaneous tasks. The cure is to reduce the number of concurrent tasks, either via the max_concurrent parameter in the app_config file or through the "use at most XX% of the CPUs" setting on the Computing Preferences page.

Seti@Home classic workunits: 20,676 CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
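The cpu_time/run_time check described above is easy to script. A minimal sketch (the field names come from the post; the task times and the 90% "healthy" threshold are assumptions for illustration):

```python
def cpu_efficiency(cpu_time: float, run_time: float) -> float:
    """Fraction of wall-clock time actually spent computing on the task."""
    return cpu_time / run_time if run_time > 0 else 0.0

# Hypothetical task times in seconds
tasks = [
    {"cpu_time": 3550.0, "run_time": 3600.0},  # healthy: ~99% efficient
    {"cpu_time": 3600.0, "run_time": 7200.0},  # overloaded: only 50%
]

for t in tasks:
    eff = cpu_efficiency(t["cpu_time"], t["run_time"])
    status = "OK" if eff >= 0.90 else "overloaded - reduce concurrent tasks"
    print(f"{eff:.0%}  {status}")
```

A ratio well below 1.0 means the thread spent much of its wall-clock time waiting rather than computing.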
Mark + Rita Hadley (earthbilly) Joined: 18 Dec 16 Posts: 30 Credit: 28,355,726 RAC: 0
I just thought it was interesting that you could make the graph line on Host Total steeper by doing fewer tasks.
Cruncher-American Joined: 25 Mar 02 Posts: 1513 Credit: 370,893,186 RAC: 340
I just thought it was interesting that you could make the graph line on Host Total steeper by doing fewer tasks.

This has been a known effect of overcommitting resources since at least 1970 or so, when the first virtual-memory systems began to be used for time-sharing a mainframe amongst many users. What happens is that the given resource (CPU usage, in this case) is squeezed: not only do your programs have to run, but the operating system also has to service requests from those programs (e.g., reading files, switching tasks between processors), both for you and for the OS, so the CPU needed is more than the running apps alone would need. Sometimes much more. The general term for this, when the OS was switching tasks more than running them, was "thrashing", IIRC. Therefore, running fewer programs would get more actual work done. The trick is to figure out how many programs (WUs, in this case) can be run without getting to the thrashing stage, which is what we all try to do.
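The overcommit effect described above can be illustrated with a toy model: up to the core count, every task runs at full speed; beyond it, each extra task adds a switching penalty that eats into useful work. The 15% per-excess-task overhead is an invented number for illustration, not a measured one:

```python
def throughput(tasks: int, cores: int = 8, switch_cost: float = 0.15) -> float:
    """Total useful work per unit time under a crude overcommit model.

    Up to 'cores' tasks run at full speed; beyond that, every task pays
    a per-excess-task switching penalty (an assumed 15% per extra task).
    """
    if tasks <= cores:
        return float(tasks)
    overhead = switch_cost * (tasks - cores)
    per_task_speed = max(0.0, 1.0 - overhead)
    return cores * per_task_speed  # the cores remain the bottleneck

for n in (4, 8, 10, 14):
    print(n, "tasks ->", round(throughput(n), 2), "units of work")
```

In this model, total throughput peaks at the core count and then falls as switching overhead grows, which is the thrashing curve in miniature.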
Grant (SSSF) Joined: 19 Aug 99 Posts: 13736 Credit: 208,696,464 RAC: 304
Extra processor bandwidth is necessary to assign to operate the GPU without enacting an additional task.

It's just a case of reserving a CPU core for each GPU WU you are crunching. Then, when the GPU runs out of work, the released CPU core can process CPU work till you get more GPU work. I just use app_config.xml to reserve a CPU core for each GPU WU being processed.

<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <gpu_usage>1.00</gpu_usage>
      <cpu_usage>1.00</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>astropulse_v7</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

Grant
Darwin NT
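In that app_config.xml, gpu_usage is the fraction of a GPU each task claims, so the number of concurrent tasks per GPU works out to floor(1 / gpu_usage), with cpu_usage worth of CPU reserved per GPU task. A quick sketch of that arithmetic (my reading of the app_config semantics, not a quote from the BOINC docs):

```python
import math

def tasks_per_gpu(gpu_usage: float) -> int:
    """Concurrent GPU tasks one GPU can host at this per-task fraction."""
    return math.floor(1.0 / gpu_usage)

# setiathome_v8: gpu_usage 1.00 -> 1 task per GPU, 1.0 CPU core reserved
print(tasks_per_gpu(1.00))  # 1
# astropulse_v7: gpu_usage 0.5 -> 2 tasks per GPU, 2 x 1.0 CPU cores reserved
print(tasks_per_gpu(0.5))   # 2
```

So the Astropulse entry above runs two tasks per GPU and reserves a full CPU core for each of them.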
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.