Message boards :
Number crunching :
Dynamically regulate number of CPU tasks?
Author | Message |
---|---|
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
Maybe my thinking cap is wearing out these days. But, here goes. Is there a way to have Boinc vary the number of CPU tasks crunching Seti, starting and suspending them as needed? My new rig has 10 cores. When fully provisioned with WUs, it runs 2 tasks on each GPU and 5 CPU tasks. This loads it about right to still be able to use it as my daily driver. During the weekly outages, the GPUs run out of Seti work, and right now I have GPUGrid as a backup project. Instead of doing that, would it be possible to have Boinc start up more CPU tasks when the GPUs run dry? Like 4 more for a total of 9, leaving 1 core for daily driver usage? And then suspend the extra CPU tasks when the GPUs start picking up work again. I'd like to be able to do this within Boinc, rather than resorting to scripts, etc., but maybe there is no mechanism to achieve this automatically. Meow? "Freedom is just Chaos, with better lighting." Alan Dean Foster |
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
It's easy: just don't put any additional project as a backup. Leave only SETI running on the host; stop or remove the other projects. On my main host I use 2 WU/GPU with 4 GPUs, so it normally runs 8 GPU WUs + 4 CPU WUs. When the GPUs run dry of work, it automatically starts more CPU WUs, up to 0 GPU + 12 CPU WUs running at that time (my host has a 6-core CPU / 12 threads). You can control the number of WUs used, etc., with the app_config file. I use...
<project_max_concurrent>12</project_max_concurrent>
<avg_ncpus>1.0</avg_ncpus>
<ngpus>0.5</ngpus>
but you can easily play with the numbers to leave 1 or more CPUs free. |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
It's easy: just don't put any additional project as a backup. Leave only SETI running on the host; stop or remove the other projects. Oooohh.....that looks promising. Where do you insert those lines? Thank you, Juan! I shall play with those options. Meow!! "Freedom is just Chaos, with better lighting." Alan Dean Foster |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
OK..... max concurrent does not work in my older Boinc. How 'bout this..... Is there a way to set the base priority of the GPU SOG app to 'normal'? It normally runs at 'below normal' and the -hp switch will take it to 'high' base priority. Can it be set to run at 'normal' priority? Meow? "Freedom is just Chaos, with better lighting." Alan Dean Foster |
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
This is my app_config; obviously you need to adapt the <cmdline> to your GPU to optimize it and avoid screen lag if necessary, but I'm pretty sure mine is close, since mine is a 1070 GPU on an almost crunching-only host.
<app_config>
  <project_max_concurrent>12</project_max_concurrent>
  <app_version>
    <app_name>setiathome_v8</app_name>
    <plan_class>opencl_nvidia_SoG</plan_class>
    <avg_ncpus>1.0</avg_ncpus>
    <ngpus>0.5</ngpus>
    <cmdline>-use_sleep -tt 500 -hp -period_iterations_num 1 -high_perf -sbs 2048 -spike_fft_thresh 4096 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 64 -oclfft_tune_cw 64</cmdline>
  </app_version>
  <app_version>
    <app_name>astropulse_v7</app_name>
    <plan_class>opencl_nvidia_100</plan_class>
    <avg_ncpus>1.0</avg_ncpus>
    <ngpus>0.5</ngpus>
    <cmdline>-use_sleep -unroll 28 -oclFFT_plan 256 16 256 -ffa_block 12288 -ffa_block_fetch 6144 -tune 1 64 4 1 -tune 2 64 4 1 -hp</cmdline>
  </app_version>
</app_config> |
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
OK..... You could simulate it by playing with the Computing Preferences and adjusting the % of available CPUs used. How 'bout this..... Not sure if there is a way in the cmdline, but you could do that by using the Process Lasso program, for example. |
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
This is my app_config; obviously you need to adapt the <cmdline> to your GPU to optimize it and avoid screen lag if necessary, but I'm pretty sure mine is close, since mine is a 1070 GPU on an almost crunching-only host. Then he needs to change the corresponding lines in the app_info file instead, but it's mainly the same. I believe that for what he asks, just stopping the backup project and adjusting the % of CPUs used will do the job. |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
OK..... Like to do it without yet another program running on the computer................ But if there is no way to do it in Boinc, I guess that would be an option. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
Like to do it without yet another program running on the computer................ I wish I knew a way too. -hp produces a lot of lag on my host when I need to do anything else. |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
Like to do it without yet another program running on the computer................ Exactly. I think 'normal' priority would play nicer with the other things I have running at any given time. 'High' priority grabs just a bit too much. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
Exactly. I think 'normal' priority would play nicer with the other things I have running at any given time. I agree. That's why, until I learn how to do that via the <cmdline>, I use Process Lasso to do the trick, raising the priority just to normal, not high. |
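For what it's worth, the one thing Process Lasso is being used for here (bumping the SoG processes to normal priority) can be sketched in a few lines of Python. This is a sketch, not a drop-in tool: it assumes the third-party psutil package is installed, and the "SoG" substring is only a guess at how the app shows up in the process list, so adjust it to the real executable name.

```python
def matching_pids(procs, hint):
    """Pure matching helper: given dicts with 'pid' and 'name' keys,
    return the pids whose name contains `hint` (case-insensitive)."""
    return [p["pid"] for p in procs
            if hint.lower() in (p["name"] or "").lower()]

def set_normal_priority(hint="SoG"):
    """Raise every process whose name contains `hint` to normal priority.
    psutil is imported lazily so the matching helper works without it."""
    import psutil  # third-party: pip install psutil
    procs = [p.info for p in psutil.process_iter(["pid", "name"])]
    for pid in matching_pids(procs, hint):
        proc = psutil.Process(pid)
        if hasattr(psutil, "NORMAL_PRIORITY_CLASS"):  # Windows priority class
            proc.nice(psutil.NORMAL_PRIORITY_CLASS)
        else:                                         # POSIX: nice value 0
            proc.nice(0)
```

It only changes processes already running, so it would need to be re-run (e.g. from Task Scheduler) whenever new tasks start.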
Brent Norman Send message Joined: 1 Dec 99 Posts: 2786 Credit: 685,657,289 RAC: 835 |
I really think you want to keep the -hp flag normally for those cards, but maybe reduce it when YouTube etc. is in use. A thought: use a BAT script placed on the desktop to swap the command line file, one version with and one without the -hp flag, which would take effect on the NEXT task, which wouldn't be that long for you.
copy /Y low_priority.txt NV_commandline.txt
exit
...
copy /Y high_priority.txt NV_commandline.txt
exit
I'm not sure what app_info options are available in BOINC v6.10.58, they could be limited ... your app_info should handle the CPU tasks taking over as GPU tasks become unavailable. |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13727 Credit: 208,696,464 RAC: 304 |
Instead of doing that, would it be possible to have Boinc start up more CPU tasks when the GPUs run dry? Like 4 more for a total of 9, leaving 1 core for daily driver usage? And then suspend the extra CPU tasks when the GPUs start picking up work again. That's what my system does. I reserve 1 CPU core for each GPU WU. When the GPU runs out of work, those released CPU cores start crunching CPU work. When the GPUs start up again, 1 CPU WU pauses for each GPU WU that starts running, and as each running CPU WU finishes crunching, it then finishes off the paused CPU WUs. My app_config.xml:
<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <gpu_usage>1.00</gpu_usage>
      <cpu_usage>1.00</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
Just change the cpu_usage and gpu_usage values in your app_info.xml, exit BOINC & restart. My only issue with the newer version of BOINC is the event log being removed from the main screen & moved to the Tools menu. But being able to use app_config.xml & not worry about trashing my cache due to a simple typo in app_info.xml means I can deal with it (annoying as it is). Grant Darwin NT |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
Instead of doing that, would it be possible to have Boinc start up more CPU tasks when the GPUs run dry? Like 4 more for a total of 9, leaving 1 core for daily driver usage? And then suspend the extra CPU tasks when the GPUs start picking up work again. Would the logic you are using in app_config also work in app_info? "Freedom is just Chaos, with better lighting." Alan Dean Foster |
Brent Norman Send message Joined: 1 Dec 99 Posts: 2786 Credit: 685,657,289 RAC: 835 |
Yes it would. If I'm reading you right you have: - max_concurrent=9 - GPU=0.5 - CPU=1.0 Right?? And you are wanting to start ALL cores/threads when GPUs are dry? |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
Yes it would. I would like to try 9 CPU tasks when the GPUs are empty. max concurrent, avg ncpus, and ngpus are not recognized by my elder version of Boinc, so they are doing nada. Right now I am using 50% CPU in local prefs to limit the number of CPU tasks running to 5. Meow. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
Brent Norman Send message Joined: 1 Dec 99 Posts: 2786 Credit: 685,657,289 RAC: 835 |
I misread, was thinking 6/12 threads but you have 10/20. For my 12 threads and 3x1080s I run:
<app>
  <name>setiathome_v8</name>
  <gpu_versions>
    <gpu_usage>1</gpu_usage>
    <cpu_usage>1.7</cpu_usage>
  </gpu_versions>
</app>
Which works out to 3 GPU tasks, 7 CPU, or 10 total (1.7 × 3 = 5.1, rounded down to 5 threads reserved). And CPU tasks fire up as needed to a max of all 12 threads. By over-reserving you can do that; with percentages you can't. For you that would be: GPU=0.5 ... CPU=1.4. 1.4 × 4 = 5.6, rounded down to 5 reserved, so that would be 5 CPU + 4 GPU = 9 under normal conditions. |
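Brent's over-reserving arithmetic can be sketched as a few lines of Python. This is a simplified model of the thread budgeting, not BOINC's actual scheduler; rounding the reserved total down is an assumption chosen to match the numbers in his post.

```python
import math

def task_mix(n_threads, n_gpu_tasks, cpu_usage):
    """Model: each GPU task reserves cpu_usage threads; the reserved
    total is rounded down, and the remaining threads run CPU tasks.
    Returns (cpu_tasks, gpu_tasks)."""
    reserved = math.floor(cpu_usage * n_gpu_tasks)
    cpu_tasks = max(n_threads - reserved, 0)
    return cpu_tasks, n_gpu_tasks

# Brent's host: 12 threads, 3 GPU tasks at cpu_usage 1.7 -> (7, 3)
# kittyman's case: 10 threads, 4 GPU tasks at cpu_usage 1.4 -> (5, 4)
# GPUs dry: nothing reserved, so every thread crunches CPU work -> (10, 0)
```

The last case is the point of the whole thread: with no backup project and an over-reserved cpu_usage, the CPU tasks expand to fill the threads the GPU tasks release.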
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
Instead of doing that, would it be possible to have Boinc start up more CPU tasks when the GPUs run dry? Like 4 more for a total of 9, leaving 1 core for daily driver usage? And then suspend the extra CPU tasks when the GPUs start picking up work again. They can have my messages tab when they pry it from my cold dead hands! I just use the old manager with the newer client. SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.