Can I select which WU/project goes to which GPU?

Profile S@NL - Mellowman
Joined: 21 Jan 02
Posts: 112
Credit: 2,669,228
RAC: 0
Netherlands
Message 1189582 - Posted: 29 Jan 2012, 1:45:01 UTC

Can I select which project or WU goes to which GPU? I have a dual-GPU setup, a GTX-570 and a GTX-550TI, and I would like my GPUGrid WU's to run on the GTX-570 as they run pretty long. For that project I should probably ask on the GPUGrid forum (which I will do), but would the same be possible for other projects (like SETI) or individual WU's?

I had a dual-directory setup before I exchanged my GF8600GT 256MB for a GTX-570. I removed my cc_config.xml file, so both GPU's are once again controlled by one BOINC client.

Today I solved it by waiting until the GTX-570 was almost finished with a SETI WU and then resuming the GPUGrid WU's. The GTX-570 runs the GPUGrid WU's roughly 2-4 times faster than the GTX-550TI.

Is there something I could put in the cc_config.xml file or in the project's app_info.xml file?

Anthony.
SETI classic wu's 10000; CPU time 47121 hours

The longer I live, the more reasons I develop for wanting to die.
ID: 1189582
Blake Bonkofsky
Volunteer tester
Joined: 29 Dec 99
Posts: 617
Credit: 46,383,149
RAC: 0
United States
Message 1189590 - Posted: 29 Jan 2012, 2:42:04 UTC - in response to Message 1189582.  

You would need to go back to your dual-directory installation, using a cc_config.xml in each client to allow it only one GPU. Otherwise, BOINC will do its own scheduling and prioritizing of projects.
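To illustrate, here is a rough sketch of that dual-client layout. It assumes the client supports the <ignore_cuda_dev> option (newer clients spell it <ignore_nvidia_dev>) and that BOINC enumerates the GTX-570 as device 0; the device numbers are placeholders, so check them against your own startup messages.

Client instance 1 (own data directory, attached to GPUGrid) with a cc_config.xml that hides the slower card:

<cc_config>
  <options>
    <!-- don't use device 1 (the GTX-550TI) in this client -->
    <ignore_cuda_dev>1</ignore_cuda_dev>
  </options>
</cc_config>

Client instance 2 (own data directory, attached to SETI and the rest) with the opposite exclusion:

<cc_config>
  <options>
    <!-- don't use device 0 (the GTX-570) in this client -->
    <ignore_cuda_dev>0</ignore_cuda_dev>
  </options>
</cc_config>

Each client then only ever sees one card, so each project's work stays on the GPU you attached it to.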
ID: 1189590
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1189597 - Posted: 29 Jan 2012, 3:16:45 UTC - in response to Message 1189582.  

> Can I select which project or WU goes to which GPU? I have a dual GPU setup, a GTX-570 and a GTX-550TI, and I would like my GPUGrid WU's to run on the GTX-570 as they run pretty long. For the mentioned project I probably should ask on the GPUGrid forum (which I will do), but would it also be possible for other projects (like SETI) or WU's?
>
> I had a dual directory setup before I exchanged my GF8600GT 256MB with a GTX-570. I removed my cc_config.xml file so both GPU's are now again controlled by 1 BOINC client.
>
> Today I solved it by waiting until the GTX-570 was almost finished with a SETI-WU and then I resumed the GPUGrid WU's. The GTX-570 runs the GPUGrid WU's about 3-4 (or 2-4) times faster than the GTX-550TI.
>
> Is there something I could put in the cc_config.xml file or in the projects app_info.xml file?
>
> Anthony.

The option to do so is in the works. The 6.13.x releases have the <exclude_gpu> feature, but I don't know how well, or if, it works. I would expect it is, or will be, in BOINC 7.x as well.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1189597
Profile S@NL - Mellowman
Joined: 21 Jan 02
Posts: 112
Credit: 2,669,228
RAC: 0
Netherlands
Message 1189603 - Posted: 29 Jan 2012, 3:39:01 UTC - in response to Message 1189597.  

> The option to do so is in the works. The 6.13.x release have the features for <exclude_gpu>, but I don't know how well, or if, it works. I would expect it is or will be in BOINC 7.x as well.

When will version 6.13.x come out? And BOINC 7.x is still an alpha release, so I don't know if it's wise to install that, especially if you're not sure that option is in there.

Otherwise I'll just have to go back to the two-client setup I had before, for the time being.

Anthony.
SETI classic wu's 10000; CPU time 47121 hours

The longer I live, the more reasons I develop for wanting to die.
ID: 1189603
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1189607 - Posted: 29 Jan 2012, 3:47:47 UTC - in response to Message 1189603.  

> The option to do so is in the works. The 6.13.x release have the features for <exclude_gpu>, but I don't know how well, or if, it works. I would expect it is or will be in BOINC 7.x as well.
>
> When will version 6.13.x come out. And BOINC 7.x is still in Alpha release so I don't know if it's wise to install that, especially if your not sure that that option is in there.
>
> Otherwise I'll just have to go back to the 2 client setup I had before, for the time being.
>
> Anthony.

I think 6.13.x became 7.x, but I'm not really sure on that. I haven't had the time to muck around with BOINC versions a lot recently.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1189607
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65690
Credit: 55,293,173
RAC: 49
United States
Message 1189610 - Posted: 29 Jan 2012, 4:08:02 UTC - in response to Message 1189607.  

> The option to do so is in the works. The 6.13.x release have the features for <exclude_gpu>, but I don't know how well, or if, it works. I would expect it is or will be in BOINC 7.x as well.
>
> When will version 6.13.x come out. And BOINC 7.x is still in Alpha release so I don't know if it's wise to install that, especially if your not sure that that option is in there.
>
> Otherwise I'll just have to go back to the 2 client setup I had before, for the time being.
>
> Anthony.
>
> I think 6.13.x became 7.x, but I'm not really sure on that. I haven't had the time to muck around with BOINC versions a lot recently.

And once one goes to 7.x there's no going back until all one's WU's are exhausted; at least, that's what I've read.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1189610
Profile arkayn
Volunteer tester
Joined: 14 May 99
Posts: 4438
Credit: 55,006,323
RAC: 0
United States
Message 1189611 - Posted: 29 Jan 2012, 4:08:05 UTC - in response to Message 1189607.  

> The option to do so is in the works. The 6.13.x release have the features for <exclude_gpu>, but I don't know how well, or if, it works. I would expect it is or will be in BOINC 7.x as well.
>
> When will version 6.13.x come out. And BOINC 7.x is still in Alpha release so I don't know if it's wise to install that, especially if your not sure that that option is in there.
>
> Otherwise I'll just have to go back to the 2 client setup I had before, for the time being.
>
> Anthony.
>
> I think 6.13.x became 7.x, but I'm not really sure on that. I haven't had the time to muck around with BOINC versions a lot recently.


Yes, it did; the current testing build is 7.0.12.

ID: 1189611
MarkJ
Volunteer tester
Joined: 17 Feb 08
Posts: 1139
Credit: 80,854,192
RAC: 5
Australia
Message 1189629 - Posted: 29 Jan 2012, 5:44:10 UTC - in response to Message 1189582.  

> Can I select which project or WU goes to which GPU? I have a dual GPU setup, a GTX-570 and a GTX-550TI, and I would like my GPUGrid WU's to run on the GTX-570 as they run pretty long. For the mentioned project I probably should ask on the GPUGrid forum (which I will do), but would it also be possible for other projects (like SETI) or WU's?
>
> I had a dual directory setup before I exchanged my GF8600GT 256MB with a GTX-570. I removed my cc_config.xml file so both GPU's are now again controlled by 1 BOINC client.
>
> Today I solved it by waiting until the GTX-570 was almost finished with a SETI-WU and then I resumed the GPUGrid WU's. The GTX-570 runs the GPUGrid WU's about 3-4 (or 2-4) times faster than the GTX-550TI.
>
> Is there something I could put in the cc_config.xml file or in the projects app_info.xml file?
>
> Anthony.


It's working from 7.0.7 onwards. You can exclude a GPU from a particular project or even just a specific app. I have one rig where I use a GTX570 only for GPUGrid work and exclude it from the other couple of projects.

As mentioned by the others, though, once you go to the 7.x clients there is no going back, as the client_state files are incompatible. Also, the scheduling is totally different and has some issues around keeping a GPU cache.

If you decide to try it, I suggest you subscribe to the BOINC alpha mailing list, as that's where we report issues. You can subscribe to the list even if you aren't an alpha tester.
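For a single 7.x client, a setup like the GTX570-only-for-GPUGrid rig described above might look roughly like the sketch below; the project URLs and device numbers are only illustrative, so use the URLs and device order reported in your own event log.

<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
    <!-- keep device 0 (the fast card in this sketch) away from everything except GPUGrid -->
    <exclude_gpu>
      <url>http://setiathome.berkeley.edu</url>
      <device_num>0</device_num>
    </exclude_gpu>
    <exclude_gpu>
      <url>http://einstein.phys.uwm.edu</url>
      <device_num>0</device_num>
    </exclude_gpu>
  </options>
</cc_config>

Because GPUGrid is not excluded from anything, it can still use both cards; add another <exclude_gpu> entry for GPUGrid with <device_num>1</device_num> if it should also stay off the slower card.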
ID: 1189629
Profile TRuEQ & TuVaLu
Volunteer tester
Joined: 4 Oct 99
Posts: 505
Credit: 69,523,653
RAC: 10
Sweden
Message 1189656 - Posted: 29 Jan 2012, 9:12:07 UTC - in response to Message 1189603.  
Last modified: 29 Jan 2012, 9:15:30 UTC

> The option to do so is in the works. The 6.13.x release have the features for <exclude_gpu>, but I don't know how well, or if, it works. I would expect it is or will be in BOINC 7.x as well.
>
> When will version 6.13.x come out. And BOINC 7.x is still in Alpha release so I don't know if it's wise to install that, especially if your not sure that that option is in there.
>
> Otherwise I'll just have to go back to the 2 client setup I had before, for the time being.
>
> Anthony.



I use the exclude option and it works fine for me.
You can download 7.0.12 from here: http://boinc.berkeley.edu/dl/

Then use something like this in your cc_config.xml....

<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
    <exclude_gpu>
      <url>http://www.gpugrid.net</url>
      <device_num>1</device_num>
    </exclude_gpu>
    <exclude_gpu>
      <url>http://setiweb.ssl.berkeley.edu/beta</url>
      <device_num>0</device_num>
    </exclude_gpu>
  </options>
</cc_config>

I am not sure if you'll need the "<use_all_gpus>1</use_all_gpus>". It might work without it. I use it since it does no harm.


There are more tags/options for the "exclude" command, but I think this might be enough for you.

The other tags are for the type of GPU (NVIDIA/ATI) and for which application to exclude, if a project has more than one application.
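For illustration only, an entry using those extra tags might look like this; the application short name is a placeholder, the real one being the <name> value for that app in client_state.xml:

<exclude_gpu>
  <url>http://setiweb.ssl.berkeley.edu/beta</url>
  <device_num>1</device_num>
  <type>nvidia</type>
  <app>astropulse_v505</app>
</exclude_gpu>

This keeps just that one application of that one project off device 1, while everything else is unaffected.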
TRuEQ & TuVaLu
ID: 1189656
Profile S@NL - Mellowman
Joined: 21 Jan 02
Posts: 112
Credit: 2,669,228
RAC: 0
Netherlands
Message 1189756 - Posted: 29 Jan 2012, 17:41:34 UTC

Thanks for the presented options, but I don't want to run an alpha version of BOINC, so I just went back to my two-client setup. Lost one GPUGrid WU in the process which had been running for 4.5 hours :( but sh*t happens.

Anthony.
SETI classic wu's 10000; CPU time 47121 hours

The longer I live, the more reasons I develop for wanting to die.
ID: 1189756
Profile TRuEQ & TuVaLu
Volunteer tester
Joined: 4 Oct 99
Posts: 505
Credit: 69,523,653
RAC: 10
Sweden
Message 1189807 - Posted: 29 Jan 2012, 18:55:38 UTC - in response to Message 1189756.  

> Thank for the presented options, but I don't want to run an alpha-version of BOINC, so I just went back to my 2 client setup. Lost 1 GPUGrid WU in the process which was running for 4.5 hours :( but sh*t happens.
>
> Anthony.


You can always wait for the alpha to become the "recommended version" and then try the exclude.

I've been running the alpha for some time now and I'd say I prefer it.
TRuEQ & TuVaLu
ID: 1189807
Profile S@NL - Mellowman
Joined: 21 Jan 02
Posts: 112
Credit: 2,669,228
RAC: 0
Netherlands
Message 1189811 - Posted: 29 Jan 2012, 19:00:47 UTC - in response to Message 1189807.  

As soon as it comes out of alpha/beta and it becomes the recommended version I'll install it.

Anthony.
SETI classic wu's 10000; CPU time 47121 hours

The longer I live, the more reasons I develop for wanting to die.
ID: 1189811
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65690
Credit: 55,293,173
RAC: 49
United States
Message 1189853 - Posted: 29 Jan 2012, 21:04:05 UTC

While we're on BOINC 7.x, it would be nice if I could tell one GPU to run 1 WU and the others to run 2 WU's per GPU. Is this possible at all?
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1189853
Profile TRuEQ & TuVaLu
Volunteer tester
Joined: 4 Oct 99
Posts: 505
Credit: 69,523,653
RAC: 10
Sweden
Message 1189857 - Posted: 29 Jan 2012, 21:16:53 UTC - in response to Message 1189853.  
Last modified: 29 Jan 2012, 21:21:05 UTC

> While we're on Boinc 7.x, It would be nice If I could tell one gpu to run 1 wu and the others to run 2 wu's per gpu, is this possible at all?


You can do that with 2 different projects by using app_info.xml for the project you want to run 2 instances with.

Also use a proper cc_config.xml to exclude the GPU's from the specific projects you don't want running on a particular GPU.

I am not sure all projects support several instances, though; please check the proper project forum for that.

I use app_info.xml to run several instances, sometimes with POEM, SETI and SETI Beta, on 1 of my GPU's.

Some people run 2 instances of BOINC to do what you want to do.
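To make the "2 instances" part concrete, below is a cut-down app_info.xml sketch; the app name, executable name, version number and plan class are placeholders for whatever application you actually install, so don't copy them literally. The <count> value under <coproc> is what sets how many tasks share a GPU: 0.5 means each task claims half a GPU, i.e. two tasks run per card.

<app_info>
  <app>
    <name>setiathome_enhanced</name>
  </app>
  <file_info>
    <name>setiathome_cuda.exe</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>610</version_num>
    <plan_class>cuda_fermi</plan_class>
    <avg_ncpus>0.05</avg_ncpus>
    <max_ncpus>0.2</max_ncpus>
    <coproc>
      <type>CUDA</type>
      <!-- 0.5 GPUs per task = 2 tasks per GPU -->
      <count>0.5</count>
    </coproc>
    <file_ref>
      <file_name>setiathome_cuda.exe</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>

Pair this with <exclude_gpu> entries in cc_config.xml so that the doubled-up project only lands on the GPU you want it on.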
TRuEQ & TuVaLu
ID: 1189857
LadyL
Volunteer tester
Joined: 14 Sep 11
Posts: 1679
Credit: 5,230,097
RAC: 0
Message 1189859 - Posted: 29 Jan 2012, 21:20:05 UTC - in response to Message 1189853.  
Last modified: 29 Jan 2012, 21:23:07 UTC

> While we're on Boinc 7.x, It would be nice If I could tell one gpu to run 1 wu and the others to run 2 wu's per gpu, is this possible at all?


If you want the same application to run on one GPU once and on the other twice, as far as I know the answer is No.

Hang on...

If you figure out how to exclude the GPU from an app (as opposed to from a project, as outlined earlier in this thread), provided it is possible at all, then you can try running under the anonymous platform: duplicate the app you want to run, rename one copy, and make two entries, one for each GPU.

Mind you, this is highly experimental.

After a bit of rummaging, the BOINC wiki yields:

<exclude_gpu>

Don't use the given GPU for the given project. If <device_num> is not specified, exclude all GPUs of the given type. <type> is required if your computer has more than one type of GPU; otherwise it can be omitted. <app> specifies the short name of an application (i.e. the <name> element within the <app> element in client_state.xml). If specified, only tasks for that app are excluded. You may include multiple <exclude_gpu> elements. New in 6.13
<exclude_gpu>
  <url>project_URL</url>
  [<device_num>N</device_num>]
  [<type>nvidia|ati</type>]
  [<app>appname</app>]
</exclude_gpu>


Of course you rely on BOINC numbering your devices the same each time.
And running alpha clients has its own pitfalls.

So, as far as I can see, doable, if you think it is worth the fuss.

Edit: and before people start wondering why you might want to do that: if you have multiple GPUs of mixed Fermi and pre-Fermi class, you would want to multithread on the Fermis but not on the pre-Fermi cards.
ID: 1189859
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65690
Credit: 55,293,173
RAC: 49
United States
Message 1189863 - Posted: 29 Jan 2012, 21:27:26 UTC
Last modified: 29 Jan 2012, 21:30:31 UTC

One project, as LadyL says (SETI@home only), just not with 2 BOINCs, too complicated...

No, they're all GTX295 cards. This is to give the GPU usage a bump, as the display GPU gets enough, but the other 5 (when I had 6 working GPUs) suffer due to a lack of shared memory, which seems to max out at 3199MB...
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1189863
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1189866 - Posted: 29 Jan 2012, 21:33:35 UTC - in response to Message 1189863.  

> One project as LadyL says(Seti@Home only), Just not with 2 Boincs, too complicated...
>
> No their all GTX295 cards, this is to give the gpu usage a bump, as the display gpu gets enough, but the other 5(when I had 6 working gpus) suffer due to a lack of shared memory which seems to maxed out at 3199MB...

As far as I know, 200 series NV GPUs can only run 1 WU per actual GPU, 2 tasks per card.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1189866
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65690
Credit: 55,293,173
RAC: 49
United States
Message 1189875 - Posted: 29 Jan 2012, 21:50:41 UTC - in response to Message 1189866.  
Last modified: 29 Jan 2012, 21:51:09 UTC

> One project as LadyL says(Seti@Home only), Just not with 2 Boincs, too complicated...
>
> No their all GTX295 cards, this is to give the gpu usage a bump, as the display gpu gets enough, but the other 5(when I had 6 working gpus) suffer due to a lack of shared memory which seems to maxed out at 3199MB...
>
> As far as I know, 200 series NV GPUs can only run 1 WU per actual GPU, 2 tasks per card.

Well then, how do I enlarge the shared memory pool under Windows 7 x64, as it's restricting CUDA processing?

4095MB total graphics memory
896MB per gpu
3199MB total shared system memory

Either MS wants CUDA to be limited to only SLI-able devices, or Nvidia won't fix this problem...

With 6 or more GPUs per PC, all but 1 or 2 of them (depending on whether multi-GPU/SLI is enabled or not) see per-GPU performance go into the crapper. Overclocking will overcome this to an extent, but I suspect this is why I lost a GPU; that, or it was already flaky when I bought the EVGA card (factory overclocked)...
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1189875
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1189970 - Posted: 30 Jan 2012, 3:30:00 UTC - in response to Message 1189875.  
Last modified: 30 Jan 2012, 3:34:27 UTC

> Either MS wants CUDA to be limited to only Sli-able devices or Nvidia won't fix this problem...


Just to clarify this devious & complex conspiracy a bit (lol), after long term analysis the timeline looks something like this:

- MS specifies an entirely new driver model for Vista+ (WDDM) that includes extra functionality for reliability, security & efficiency.
- The WDDM spec includes hardware acceleration features not found in Pre-Fermis
- Cuda needs to operate on the older hardware in a compatible way, whether those hardware features are there or not, so added driver functionality allows that, while incurring extra overhead.
- Older hardware without those added features incurs extra overhead, in terms of bus contention and CPU usage, to emulate the absent hardware features. That added overhead places new upper limits on what a system can handle with older cards.
- Since Cuda needs to operate on XP as well, driver changes & these added features find their way into newer XP Driver model as well (same added overhead, new upper limits on what hardware can be handled by a new system.)

Yes, so MS started the conspiracy to fix the problems with the old 11+ year old XP driver model standard, and these improvements to some extent deprecate older hardware. That's not really 'fixable' other than going entirely back to an old-school setup, though newer hardware (inc GPUs) and careful system configuration make a big difference. While similar limits apply with newer cards, they do tend to be more manageable with that added care about system drivers & hardware choices.

In short, your limits are lower because your system is 'doing more'. How that translates into practical system implementations & raw performance is changing. Whether those changes are for the better or not would depend on your perspective. Probably stuffing the slots full of legacy cards, especially on a newer OS, would give the impression of backward steps.

Jason
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1189970
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65690
Credit: 55,293,173
RAC: 49
United States
Message 1189973 - Posted: 30 Jan 2012, 3:46:47 UTC - in response to Message 1189970.  
Last modified: 30 Jan 2012, 3:50:11 UTC

> In short, your limits are lower because your system is 'doing more'. How that translates into practical system implementations & raw performance is changing. Whether those changes are for the better or not would depend on your perspective. Probably stuffing the slots full of legacy cards, especially on a newer OS, would give the impression of backward steps.
>
> Jason

That may be, Jason, but I really can't afford to scrap a bunch of GTX295 cards plus water blocks and go for 3 GTX590 cards (water cooled); those 3 cards are around $2900 or so. I might be able to raise enough for 2 EVGA GTX590 cards, but not 3.

So there's no way to raise the amount of shared system memory? I've figured out that with 5 GPUs each gets 639.8MB, whereas with 6 GPUs each would get about 533.17MB, and with 12 GPUs it's half of that, or about 266.58MB. One would think that's plenty, but it isn't. I have 4 dummy plugs and 2 monitors, and I'm using the 275.50 x64 drivers (Windows 7 Pro x64 sp2 w/16GB system RAM), which at least don't BSOD on a reboot like the 280-series drivers do, which is a bug I've read about. Oh, and the desktop is extended across all monitors and dummy plugs.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1189973