Message boards :
Number crunching :
Can I select which WU/project goes to which GPU?
Joined: 21 Jan 02 Posts: 112 Credit: 2,669,228 RAC: 0
Can I select which project or WU goes to which GPU? I have a dual GPU setup, a GTX-570 and a GTX-550TI, and I would like my GPUGrid WUs to run on the GTX-570 as they run pretty long. For the mentioned project I should probably ask on the GPUGrid forum (which I will do), but would it also be possible for other projects (like SETI) or WUs? I had a dual-directory setup before I exchanged my GF8600GT 256MB for a GTX-570. I removed my cc_config.xml file, so both GPUs are now again controlled by one BOINC client. Today I solved it by waiting until the GTX-570 was almost finished with a SETI WU and then resuming the GPUGrid WUs. The GTX-570 runs the GPUGrid WUs about 2-4 times faster than the GTX-550TI. Is there something I could put in the cc_config.xml file or in the project's app_info.xml file? Anthony. SETI classic wu's 10000; CPU time 47121 hours The longer I live, the more reasons I develop for wanting to die.
Blake Bonkofsky Volunteer tester Joined: 29 Dec 99 Posts: 617 Credit: 46,362,341 RAC: 0
You would need to go back to your dual-directory installation, using cc_config.xml files to allow only one GPU per client. Otherwise, BOINC will do its own scheduling and prioritizing of projects.
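For reference, the one-GPU-per-client restriction in a dual-directory setup is normally done by hiding devices from each client in cc_config.xml. A minimal sketch, assuming two 6.x clients and assuming the GTX-570 is CUDA device 0 (the device numbers are assumptions; check the indices each client reports in its startup messages):

```xml
<!-- cc_config.xml for the client that should use ONLY the GTX-570 (device 0) -->
<cc_config>
  <options>
    <!-- hide CUDA device 1 from this client; the second client's
         cc_config.xml would instead contain <ignore_cuda_dev>0</ignore_cuda_dev> -->
    <ignore_cuda_dev>1</ignore_cuda_dev>
  </options>
</cc_config>
```

Each client then only ever sees one card, so you can attach the projects you want per GPU.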
Volunteer tester Joined: 11 Sep 99 Posts: 6529 Credit: 182,887,480 RAC: 49,302

> Can I select which project or WU goes to which GPU? I have a dual GPU setup, a GTX-570 and a GTX-550TI, and I would like my GPUGrid WUs to run on the GTX-570 as they run pretty long. For the mentioned project I should probably ask on the GPUGrid forum (which I will do), but would it also be possible for other projects (like SETI) or WUs?

The option to do so is in the works. The 6.13.x releases have the <exclude_gpu> feature, but I don't know how well, or if, it works. I would expect it is or will be in BOINC 7.x as well. SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Joined: 21 Jan 02 Posts: 112 Credit: 2,669,228 RAC: 0

> The option to do so is in the works. The 6.13.x releases have the <exclude_gpu> feature, but I don't know how well, or if, it works. I would expect it is or will be in BOINC 7.x as well.

When will version 6.13.x come out? And BOINC 7.x is still an alpha release, so I don't know if it's wise to install that, especially if you're not sure that option is in there. Otherwise I'll just have to go back to the two-client setup I had before, for the time being. Anthony. SETI classic wu's 10000; CPU time 47121 hours The longer I live, the more reasons I develop for wanting to die.
Volunteer tester Joined: 11 Sep 99 Posts: 6529 Credit: 182,887,480 RAC: 49,302

> The option to do so is in the works. The 6.13.x releases have the <exclude_gpu> feature, but I don't know how well, or if, it works. I would expect it is or will be in BOINC 7.x as well.

I think 6.13.x became 7.x, but I'm not really sure of that. I haven't had time to muck around with BOINC versions much recently. SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Volunteer tester Joined: 30 Nov 03 Posts: 60532 Credit: 44,424,891 RAC: 6,659

> The option to do so is in the works. The 6.13.x releases have the <exclude_gpu> feature, but I don't know how well, or if, it works. I would expect it is or will be in BOINC 7.x as well.

And once one goes to 7.x there's no going back until all one's WUs are exhausted; at least that's about what I've read. What is BSG Robotech-Saga-Wiki SW-wiki
Volunteer tester Joined: 14 May 99 Posts: 4199 Credit: 53,060,160 RAC: 4

> The option to do so is in the works. The 6.13.x releases have the <exclude_gpu> feature, but I don't know how well, or if, it works. I would expect it is or will be in BOINC 7.x as well.

Yes it did; the current testing build is 7.0.12.
Volunteer tester Joined: 17 Feb 08 Posts: 1062 Credit: 50,750,172 RAC: 5,975

> Can I select which project or WU goes to which GPU? I have a dual GPU setup, a GTX-570 and a GTX-550TI, and I would like my GPUGrid WUs to run on the GTX-570 as they run pretty long. For the mentioned project I should probably ask on the GPUGrid forum (which I will do), but would it also be possible for other projects (like SETI) or WUs?

It's working from 7.0.7 onwards. You can exclude a GPU from a particular project or even just a specific app. I have one rig where I use a GTX570 but only for GPUgrid work and exclude it from the other couple of projects. As mentioned by the others, though, once you go to the 7.x clients there is no going back, as the client_state files are incompatible. Also the scheduling is totally different and has some issues around keeping a GPU cache. If you decide to try it, I suggest you subscribe to the BOINC alpha mailing list, as that's where we report issues. You can subscribe to the list even if you aren't an alpha tester.
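For the per-app case mentioned above, the 7.x <exclude_gpu> option accepts an optional <app> field alongside <url> and <device_num>. A sketch under the assumption that you want the slower card kept off one app only; the app name here (acemdlong) is an example and must match the short app name the project actually uses:

```xml
<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
    <!-- keep device 1 (the slower card) away from one app only;
         other apps from the same project may still run on it -->
    <exclude_gpu>
      <url>http://www.gpugrid.net</url>
      <app>acemdlong</app>
      <device_num>1</device_num>
    </exclude_gpu>
  </options>
</cc_config>
```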
Volunteer tester Joined: 4 Oct 99 Posts: 499 Credit: 46,986,972 RAC: 11,370

> The option to do so is in the works. The 6.13.x releases have the <exclude_gpu> feature, but I don't know how well, or if, it works. I would expect it is or will be in BOINC 7.x as well.

I use the exclude option and it works fine for me. You can download 7.0.12 from here: http://boinc.berkeley.edu/dl/

Then use something like this in your cc_config.xml:

<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
    <exclude_gpu>
      <url>http://www.gpugrid.net</url>
      <device_num>1</device_num>
    </exclude_gpu>
    <exclude_gpu>
      <url>http://setiweb.ssl.berkeley.edu/beta</url>
      <device_num>0</device_num>
    </exclude_gpu>
  </options>
</cc_config>

I am not sure if you'll need the <use_all_gpus>1</use_all_gpus>; it might work without it. I use it since it does no harm. There are more tags for the exclude option, but I think this might be enough for you. The other tags select the type of GPU (NVIDIA/ATI) and which application to exclude if a project has several applications. TRuEQ & TuVaLu
Joined: 21 Jan 02 Posts: 112 Credit: 2,669,228 RAC: 0
Thanks for the presented options, but I don't want to run an alpha version of BOINC, so I just went back to my two-client setup. Lost one GPUGrid WU in the process, which had been running for 4.5 hours :( but sh*t happens. Anthony. SETI classic wu's 10000; CPU time 47121 hours The longer I live, the more reasons I develop for wanting to die.
Volunteer tester Joined: 4 Oct 99 Posts: 499 Credit: 46,986,972 RAC: 11,370

> Thanks for the presented options, but I don't want to run an alpha version of BOINC, so I just went back to my two-client setup. Lost one GPUGrid WU in the process, which had been running for 4.5 hours :( but sh*t happens.

You can always wait for the alpha to become the recommended version and then try the exclude. I've been running the alpha for some time now, and I'd say I prefer it. TRuEQ & TuVaLu
Volunteer tester Joined: 30 Nov 03 Posts: 60532 Credit: 44,424,891 RAC: 6,659
While we're on BOINC 7.x: it would be nice if I could tell one GPU to run one WU and the others to run two WUs per GPU. Is this possible at all? What is BSG Robotech-Saga-Wiki SW-wiki
Volunteer tester Joined: 4 Oct 99 Posts: 499 Credit: 46,986,972 RAC: 11,370

> While we're on BOINC 7.x: it would be nice if I could tell one GPU to run one WU and the others to run two WUs per GPU. Is this possible at all?

You can do that with two different projects by using an app_info.xml for the project you want to run two instances with. Also use the proper cc_config.xml to exclude the GPUs from the specific projects you don't want using a specific GPU. I am not sure all projects support several instances, though; please check the proper forum for that. I use app_info.xml to run several instances sometimes, with POEM, SETI and SETI Beta on one of my GPUs. Some people run 2 instances of BM to do what you want to do. TRuEQ & TuVaLu
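The two-instances part of that is the <coproc> count in app_info.xml: a count of 0.5 tells BOINC each task needs half a GPU, so two run per device. A trimmed sketch under anonymous platform; the file name, app name, version and plan class below are placeholders — copy the real ones from your existing client_state.xml:

```xml
<app_info>
  <app>
    <name>setiathome_enhanced</name>
  </app>
  <file_info>
    <!-- placeholder executable name; use your project's real file -->
    <name>setiathome_cuda.exe</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>610</version_num>
    <plan_class>cuda_fermi</plan_class>
    <coproc>
      <type>CUDA</type>
      <count>0.5</count> <!-- half a GPU per task = 2 tasks per GPU -->
    </coproc>
    <file_ref>
      <file_name>setiathome_cuda.exe</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>
```

Combined with <exclude_gpu> entries in cc_config.xml, the doubled-up project can then be steered onto the card you want running two at a time.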
LadyL Volunteer tester Joined: 14 Sep 11 Posts: 1679 Credit: 5,230,097 RAC: 0

> While we're on BOINC 7.x: it would be nice if I could tell one GPU to run one WU and the others to run two WUs per GPU. Is this possible at all?

If you want the same application to run once on one GPU and twice on the other, as far as I know the answer is no. Hang on... If you figure out how to exclude the GPU from an app (as opposed to from a project, as outlined earlier in this thread), provided it is possible at all, then you can try running under anonymous platform: duplicate the app you want to run, rename one, and make two entries, one for each GPU. Mind you, highly experimental. After a bit of rummaging, the BOINC wiki yields: <exclude_gpu>. Of course, you rely on BOINC numbering your devices the same each time, and running alpha clients has its own pitfalls. So, as far as I can see: doable, if you think it is worth the fuss. edit: and before people start wondering why you might want to do that: if you have multiple GPUs of mixed Fermi and pre-Fermi class, you would want to multithread on the Fermis but not on the pre-Fermi cards.
Volunteer tester Joined: 30 Nov 03 Posts: 60532 Credit: 44,424,891 RAC: 6,659
One project, as LadyL says (Seti@Home only), just not with two BOINC clients; too complicated... No, they're all GTX295 cards. This is to give the GPU usage a bump: the display GPU gets enough, but the other 5 (when I had 6 working GPUs) suffer due to a lack of shared memory, which seems to be maxed out at 3199MB... What is BSG Robotech-Saga-Wiki SW-wiki
kittyman Volunteer tester Joined: 9 Jul 00 Posts: 49660 Credit: 904,385,033 RAC: 188,970

> One project, as LadyL says (Seti@Home only), just not with two BOINC clients; too complicated...

As far as I know, 200-series NV GPUs can only run 1 WU per actual GPU, so 2 tasks per card. My body may be here, but my mind is in a galaxy far, far away. Have made friends here. Most were cats.
Volunteer tester Joined: 30 Nov 03 Posts: 60532 Credit: 44,424,891 RAC: 6,659

> One project, as LadyL says (Seti@Home only), just not with two BOINC clients; too complicated...

Well then, how under Windows 7 x64 do I enlarge the shared memory pool, as it's restricting CUDA processing??? 4095MB total graphics memory, 896MB per GPU, 3199MB total shared system memory. Either MS wants CUDA to be limited to only SLI-able devices, or Nvidia won't fix this problem... With 6 or more GPUs per PC, all except 1 or 2 GPUs (depending on whether multi-GPU/SLI is enabled or not) see their performance go into the crapper. Overclocking will to an extent overcome this, but I suspect this is why I lost a GPU; that, or it was already flaky when I bought the EVGA card (pre-overclocked)... What is BSG Robotech-Saga-Wiki SW-wiki
Volunteer developer Volunteer tester Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0

> Either MS wants CUDA to be limited to only SLI-able devices, or Nvidia won't fix this problem...

Just to clarify this devious & complex conspiracy a bit (lol): after long-term analysis, the timeline looks something like this:

- MS specifies an entirely new driver model for Vista+ (WDDM) that includes extra functionality for reliability, security & efficiency.
- The WDDM spec includes hardware acceleration features not found in pre-Fermis.
- Cuda needs to operate on the older hardware in a compatible way, whether those hardware features are there or not, so added driver functionality allows that, while incurring extra overhead.
- Older hardware without those added features incurs these extra overheads (bus contention, CPU usage) to emulate the absent hardware features. That added overhead places new upper limits on what a system can handle with older cards.
- Since Cuda needs to operate on XP as well, these driver changes & added features find their way into the newer XP driver model too (same added overhead, new upper limits on what hardware a new system can handle).

Yes, so MS started the conspiracy to fix the problems with the 11+-year-old XP driver model standard, and these improvements to some extent deprecate older hardware. That's not really 'fixable' other than going entirely back to an old-school setup, though newer hardware (inc. GPUs) and careful system configuration make a big difference. While similar limits apply with newer cards, they tend to be more manageable with that added care about system drivers & hardware choices.

In short, your limits are lower because your system is 'doing more'. How that translates into practical system implementations & raw performance is changing. Whether those changes are for the better or not would depend on your perspective. Probably stuffing the slots full of legacy cards, especially on a newer OS, would give the impression of backward steps.
Jason "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions. |
Volunteer tester Joined: 30 Nov 03 Posts: 60532 Credit: 44,424,891 RAC: 6,659

> Either MS wants CUDA to be limited to only SLI-able devices, or Nvidia won't fix this problem...

That may be, Jason, but I really can't afford to scrap a bunch of GTX295 cards plus water blocks and go for 3 GTX590 cards (water cooled); those 3 cards are just around $2900 or so. I might be able to raise enough for 2 EVGA GTX590 cards, but not 3. So there's no way to raise the amount of shared system memory? I've figured out that with 5 GPUs each uses 639.8MB, whereas with 6 GPUs each would use about 533.17MB; for 12 GPUs it's half of that, about 266.58MB. One would think that's plenty, but it isn't. I have 4 dummy plugs and 2 monitors, and I'm using driver 275.50 x64 (Windows 7 Pro x64 sp2 w/16GB system RAM), which at least doesn't BSOD on a reboot like the 280s do, which is a bug I've read about. Oh, and the desktop is extended across all monitors and dummy plugs. What is BSG Robotech-Saga-Wiki SW-wiki
©2018 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.