Message boards :
Number crunching :
Question about dedicating CPU cores to GPU support w/Lunatics apps.
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
I know this has been mentioned, but I cannot find this now. Perhaps somebody could reprise it for me. I currently on my quad rigs idle 2 cores via local preferences to support the GPUs when doing AP work with the Lunatics app. So, of course, when no AP is available and the GPUs are doing MB, they don't require much CPU support and much of those two cores are slacking off. I think it was suggested that by modifying app_info, one could basically get Boinc to 'auto switch', and dedicate the CPU cores to the GPUs when needed, but release them for CPU MB or AP work when the GPUs are doing MB and don't need them. If so, that would make better use of my CPU resources between AP splitting runs. How would that work with multiple GPUs? And with 2 AP tasks per GPU? It the setting per GPU? Per running instance of the app? Thanks for any clarification offered. I'll check back in when I get home from work in about 12 hours, so don't wonder why I am not responding right away if any questions are asked. Meow for now. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
Go to Mike's site, DL the latest build (R2058, actually in Beta), install it, and follow the configuration instructions. I'm sure the kitties will love that! |
Fred E. Send message Joined: 22 Jul 99 Posts: 768 Credit: 24,140,697 RAC: 0 |
I think it was suggested that by modifying app_info, one could basically get Boinc to 'auto switch', and dedicate the CPU cores to the GPUs when needed, but release them for CPU MB or AP work when the GPUs are doing MB and don't need them.

Edit app_info.xml and change the "avg CPU" and "max CPU" lines as follows:

<app_version>
<app_name>astropulse_v6</app_name>
<version_num>604</version_num>
<avg_ncpus>1.0</avg_ncpus>
<max_ncpus>1.0</max_ncpus>

There are two app sections for 1843, so either pick the right plan class or just do both of them. This will reserve 1 CPU core for each task running and release it when not needed. A value of .5 will reserve 1 core for every two tasks running - you might want to try that if you install the Beta Juan mentioned. I use .5 for cuda50 tasks (2 at a time) and do not reserve any cores in BOINC's computing preferences. I use app_config.xml for these settings, but you can't do that with your version of BOINC.

Another Fred
Support SETI@home when you search the Web with GoodSearch or shop online with GoodShop. |
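For readers editing app_info.xml for the first time, a complete <app_version> block with those fields in context might look like the sketch below. The plan class, coproc count, and executable file name here are illustrative assumptions, not values from Fred's post - copy them from your own Lunatics app_info.xml. (A <count> of 0.5 is what requests 2 tasks per GPU.)

```xml
<app_version>
    <app_name>astropulse_v6</app_name>
    <version_num>604</version_num>
    <!-- plan class is an assumption; match your installed app -->
    <plan_class>opencl_nvidia_100</plan_class>
    <!-- reserve one full CPU core per running AP task -->
    <avg_ncpus>1.0</avg_ncpus>
    <max_ncpus>1.0</max_ncpus>
    <coproc>
        <type>CUDA</type>
        <count>0.5</count>  <!-- 0.5 GPU per task = 2 tasks per GPU -->
    </coproc>
    <file_ref>
        <!-- executable name is hypothetical; use yours -->
        <file_name>AP6_win_x86_SSE2_OpenCL_NV_r2058.exe</file_name>
        <main_program/>
    </file_ref>
</app_version>
```

BOINC rereads app_info.xml only on client restart, so stop and restart the client after editing.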
Cruncher-American Send message Joined: 25 Mar 02 Posts: 1513 Credit: 370,893,186 RAC: 340 |
If you have more GPU tasks than CPU threads available (e.g., a quad-core CPU and 2 GPUs running 3 tasks each), you also want an app_config.xml that limits the number of APs on GPU to the number of cores you are using. (If you don't do this, you will grossly overcommit your CPU resources, and spend a good deal of time swapping WUs rather than getting work done.) When running an AP on GPU, BOINC will then pause one of the running CPU WUs and reserve its resources for the GPU AP. The WU that was running on the CPU will go to "Waiting to Run" until the GPU AP finishes. Use the <max_concurrent> parameter in app_config.xml for this, under astropulse_v6. |
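On BOINC 7.0.40 or later, a minimal app_config.xml along these lines would do it. The max_concurrent value of 4 is just an illustration for a quad-core rig, and the <gpu_versions> section is optional (it overrides the app_info CPU/GPU usage values without editing that file):

```xml
<app_config>
    <app>
        <name>astropulse_v6</name>
        <!-- never run more than 4 AP tasks at once (example value) -->
        <max_concurrent>4</max_concurrent>
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>  <!-- 2 tasks per GPU -->
            <cpu_usage>0.5</cpu_usage>  <!-- half a core per task -->
        </gpu_versions>
    </app>
</app_config>
```

The file goes in the project directory (projects/setiathome.berkeley.edu) and is read at client startup or via "Read config files" in the Manager.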
Fred E. Send message Joined: 22 Jul 99 Posts: 768 Credit: 24,140,697 RAC: 0 |
Use the <max_concurrent> parm in app_config.xml for this under astropulse_v6.

Mark runs BOINC 6.10.58 on all his rigs, so he can't use the max_concurrent setting. Support for app_config.xml began with version 7.0.40. His post says he runs 2 APs at a time on 2 GPUs. One rig has only 2 CPU cores but the others have 4. And one has 3 GPUs, so there aren't enough cores there. I don't know a good solution for the two rigs without enough CPU cores. Maybe BOINC will limit the number of concurrent AP tasks with the CPU usage values I suggested, but I haven't tested that.

Another Fred
Support SETI@home when you search the Web with GoodSearch or shop online with GoodShop. |
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
One rig has only 2 cpu cores but the others have 4. And one has 3 gpu's, so there's not enough cores there. I don't know a good solution for the two rigs without enough cpu cores.

That's not totally true - go for the beta version I talked about before; it uses very few cores to crunch AP. I use it on a slow old i5 with only 4 cores and 3 GPUs, running 2 WUs at a time on each GPU, with no problem. |
Cruncher-American Send message Joined: 25 Mar 02 Posts: 1513 Credit: 370,893,186 RAC: 340 |
Mark runs BOINC 6.10.58 on all his rigs, so he can't use the max concurrent setting. Then Mark is screwed, because I don't believe there's any way to specifically limit the APs crunched by his GPUs when he has them. And each one realistically requires a full CPU core. At least until the Beta mentioned above is released for general use. Unless, of course, he upgrades his BOINCs. |
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
Mark, just adjust the ncpus app_info fields in the astropulse section to 1.0 [...as Fred described], preferably with the newer build as Juan described.

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions. |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
OK, I consulted with the kitties and found that I can do this on my current version of Boinc, and get it to do pretty much what I want it to do. I just had to change max_ncpus to match avg_ncpus, and set the local preference back to use 100% of the processors instead of manually idling cores there. All GPUs are set to run 2 tasks at a time.

So, for the quad rigs with 2 cards, .50 works. When 2 APs are running, 1 core gets reserved for AP; when 4 are running, AP grabs 2 cores. The quad rig with 3 cards gets a setting of .34 for both avg and max. When 3 APs run, 1 core gets reserved, and when 6 APs are going, it sets aside 2 cores. The dual core with 2 cards gets set at .25, so it does not reserve a core until all 4 GPU tasks are AP, and then it only reserves one core.

And of course, as the AP work runs out, the CPU cores get released to go back to whatever they were doing, without any loafing around. This will not let a single running AP task get a full core set aside for it without grabbing too many cores as more APs start to run, but I think it shall do nicely. I may try the new build at a later date, but for now, this works.

Thanks for all the advice and hints. Meow!

"Freedom is just Chaos, with better lighting." Alan Dean Foster |
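The reservation behavior kittyman reports is consistent with BOINC summing the avg_ncpus of the running GPU tasks and idling that many whole cores. A rough model of the arithmetic - this is a simplification for picking values, not BOINC's actual scheduler code:

```python
import math

def reserved_cores(n_gpu_tasks: int, avg_ncpus: float) -> int:
    """Whole CPU cores idled for GPU support: floor of summed avg_ncpus.

    Simplified model of the observed behavior, not BOINC's scheduler.
    """
    return math.floor(n_gpu_tasks * avg_ncpus)

# Quad rig, 2 cards, 2 tasks each, avg_ncpus = .50
print(reserved_cores(2, 0.50))  # 1 core when 2 APs run
print(reserved_cores(4, 0.50))  # 2 cores when 4 APs run

# Quad rig, 3 cards, avg_ncpus = .34
print(reserved_cores(3, 0.34))  # 1 core
print(reserved_cores(6, 0.34))  # 2 cores

# Dual core, 2 cards, avg_ncpus = .25
print(reserved_cores(1, 0.25))  # 0 cores until all 4 tasks are AP
print(reserved_cores(4, 0.25))  # 1 core
```

This also shows why .34 rather than .33 was needed on the 3-card quad: 3 x .33 = .99 rounds down to zero cores reserved.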
TBar Send message Joined: 22 May 99 Posts: 5204 Credit: 840,779,836 RAC: 2,768 |
I'm not sure if you realize it, but, you have 3 Instant Invalids on the one machine I browsed. Seems a few people are getting these and it's a mystery as far as I know. Check the first 3, http://setiathome.berkeley.edu/results.php?hostid=5082339&offset=0&show_names=0&state=5&appid= |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
I'm not sure if you realize it, but, you have 3 Instant Invalids on the one machine I browsed. Seems a few people are getting these and it's a mystery as far as I know. Not instant, as in not just now, not related to my changes. The last two were 8 hours ago whilst I was at work, and the other was 15 hours before that. But, I'll keep an eye on the invalids if the numbers start going up. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
TBar Send message Joined: 22 May 99 Posts: 5204 Credit: 840,779,836 RAC: 2,768 |
Look at this machine; http://setiathome.berkeley.edu/results.php?hostid=6407690&offset=0&show_names=0&state=5&appid= Your task is being marked before the second wingman reports. That shouldn't happen. I'm seeing this pop up all around. Seems to be increasing... More, Invalid before second wingman reports, http://setiathome.berkeley.edu/results.php?hostid=2909037&offset=0&show_names=0&state=5&appid= |
Helli_retiered Send message Joined: 15 Dec 99 Posts: 707 Credit: 108,785,585 RAC: 0 |
I think it was suggested that by modifying app_info, one could basically get Boinc to 'auto switch', and dedicate the CPU cores to the GPUs when needed, but release them for CPU MB or AP work when the GPUs are doing MB and don't need them. Thank you, I have been waiting for this. :-) Helli |
Mike Send message Joined: 17 Feb 01 Posts: 34253 Credit: 79,922,639 RAC: 80 |
I think it was suggested that by modifying app_info, one could basically get Boinc to 'auto switch', and dedicate the CPU cores to the GPUs when needed, but release them for CPU MB or AP work when the GPUs are doing MB and don't need them.

@Helli

On your system I would suggest setting it like this:

<avg_ncpus>0.5</avg_ncpus>
<max_ncpus>0.5</max_ncpus>

Also add this to the command line or the ap_cmdline_win_x86_SSE2_OpenCL_NV.txt file:

-unroll 12 -ffa_block 12288 -ffa_block_fetch 6144

This should speed your 780 up.

With each crime and every kindness we birth our future. |
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
Mike, I use -use_sleep -unroll 18 -ffa_block 8192 -ffa_block_fetch 2048 on my 780, as suggested by someone else in another thread, and I see you suggest a different setting for the 780. As I'm sure you have a lot more experience than me, could you tell me what the best setting is? If it's not too much work, do you know the settings for the 670 or the 690 too? Thanks in advance.

As a suggestion, somebody could make a table with the optimal settings for each card; I'm sure that would be very welcomed by the community. |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
Depends.... Helli has 8 cores available, not just 4. If I were him, I would try 1 for avg and max, thus using a full core for each AP task running, and leaving 6 still doing CPU work. Just my opinion, but that's what I would try. And then monitor CPU usage to see if it still stays close to 100%. If not, then perhaps back down to the .5 setting and check it again. Also, please note that I am doing this on 8 crunch-only rigs. My daily driver tends to get a little sluggish when using all cores for crunching. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
Fred E. Send message Joined: 22 Jul 99 Posts: 768 Credit: 24,140,697 RAC: 0 |
Also add this to the comandline or ap_cmdline_win_x86_SSE2_OpenCL_NV.txt file Mike, any reason you didn't include the -hp switch to bump the priority up above the default of "below normal"? I usually recommend that for the AP/Nvidia apps. Another Fred Support SETI@home when you search the Web with GoodSearch or shop online with GoodShop. |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
Also add this to the comandline or ap_cmdline_win_x86_SSE2_OpenCL_NV.txt file Not speaking for Mike, but I think that running AP at high priority on anything but a crunch-only rig would lead to usability problems on most computers that are also being used for other tasks. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
Helli_retiered Send message Joined: 15 Dec 99 Posts: 707 Credit: 108,785,585 RAC: 0 |
I think it was suggested that by modifying app_info, one could basically get Boinc to 'auto switch', and dedicate the CPU cores to the GPUs when needed, but release them for CPU MB or AP work when the GPUs are doing MB and don't need them.

Thanks Mike, I will try your advice immediately.

Mark, I had disabled HT when Mike viewed my setup. For the last few weeks I have run the GTX 780 alone because I wanted to know how much the GTX 780 can do... (38k) ;-) Helli |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
Ahh, then with 4 cores available, the .5 settings are what I am using now. You would get 1 core reserved for AP when 2 are running on the 780. If just one AP is running, it would not reserve a core yet. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.