App_info with one count value for GTX 295 and one for Fermi and later.

Morten Ross
Volunteer tester
Joined: 30 Apr 01
Posts: 183
Credit: 385,664,915
RAC: 0
Norway
Message 1285143 - Posted: 18 Sep 2012, 13:40:30 UTC

I've got one rig that currently has just one GTX 690. It's very lonely :-)

On the GTX 295 a count value lower than 1 will not work, but I would like to fill some free PCIe slots with a few soon-to-be-retired GTX 295s.

I'm thinking it could perhaps be done with two plan_classes and/or version_nums, but how do I make BOINC tell the cards apart, without messing up the project server?

It seems I have to use either only 295s or only Fermi and later cards. I hope I'm wrong.
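For reference, this is the kind of app_version entry in app_info.xml I'm talking about; the app name, file name, version number and plan_class here are only placeholders, and the <count> inside <coproc> is the value I'd like to be different for the 295s and the 690:

    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>610</version_num>
        <plan_class>cuda_fermi</plan_class>
        <coproc>
            <type>CUDA</type>
            <!-- 0.5 means each task reserves half a GPU, i.e. two tasks per GPU -->
            <count>0.5</count>
        </coproc>
        <file_ref>
            <file_name>setiathome_cuda.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>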
Morten Ross
Tim
Volunteer tester
Joined: 19 May 99
Posts: 211
Credit: 278,575,259
RAC: 0
Greece
Message 1285153 - Posted: 18 Sep 2012, 14:12:22 UTC - in response to Message 1285143.  
Last modified: 18 Sep 2012, 14:13:43 UTC

I've got one rig that currently has just one GTX 690. It's very lonely :-)

On the GTX 295 a count value lower than 1 will not work, but I would like to fill some free PCIe slots with a few soon-to-be-retired GTX 295s.

I'm thinking it could perhaps be done with two plan_classes and/or version_nums, but how do I make BOINC tell the cards apart, without messing up the project server?

It seems I have to use either only 295s or only Fermi and later cards. I hope I'm wrong.


I tested this once with three GTX 285s and one GTX 580, setting the 285s to a count of 0.5.

The system was VERY unstable, in fact so unstable that I had random restarts and blue screens, even with the latest driver at that time.
I don't think it was a BOINC issue but a video card issue.

So for me the best way is NOT to mix Fermi and pre-Fermi cards.
Wiggo
Joined: 24 Jan 00
Posts: 34837
Credit: 261,360,520
RAC: 489
Australia
Message 1285210 - Posted: 18 Sep 2012, 20:56:17 UTC - in response to Message 1285153.  

Personally, I wouldn't mix Fermi and pre-Fermi cards either.

Cheers.
Horacio

Joined: 14 Jan 00
Posts: 536
Credit: 75,967,266
RAC: 0
Argentina
Message 1285238 - Posted: 18 Sep 2012, 21:40:28 UTC - in response to Message 1285143.  

I'm thinking it could perhaps be done with two plan_classes and/or version_nums, but how do I make BOINC tell the cards apart, without messing up the project server?

AFAIK, the only way is to run two instances of BOINC, each one set to ignore one of the GPUs...

Any other workaround (including the trivial "crunch only one task per GPU on all of them") will give you more headaches than benefits, especially because of the difference in performance... the runtime estimates will be very wrong for both GPUs, the cache may get over- or under-filled, and worst of all, WUs may fail with the "too much time elapsed" error when crunched on the slower GPU.
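If you go that route, the GPU selection would be done in each instance's cc_config.xml. A rough sketch, assuming a 7.0.xx client (older 6.10.xx clients use <ignore_cuda_dev> instead, and the device numbers here are only examples):

    <cc_config>
        <options>
            <!-- instance for the GTX 690: ignore the GTX 295 devices (example numbering) -->
            <ignore_nvidia_dev>1</ignore_nvidia_dev>
            <ignore_nvidia_dev>2</ignore_nvidia_dev>
        </options>
    </cc_config>

The other instance would do the opposite and ignore device 0.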
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1285478 - Posted: 19 Sep 2012, 13:25:08 UTC - in response to Message 1285238.  

I'm thinking it could perhaps be done with two plan_classes and/or version_nums, but how do I make BOINC tell the cards apart, without messing up the project server?

AFAIK, the only way is to run two instances of BOINC, each one set to ignore one of the GPUs...

Any other workaround (including the trivial "crunch only one task per GPU on all of them") will give you more headaches than benefits, especially because of the difference in performance... the runtime estimates will be very wrong for both GPUs, the cache may get over- or under-filled, and worst of all, WUs may fail with the "too much time elapsed" error when crunched on the slower GPU.

Running multiple instances of BOINC would also have the benefit of avoiding VLAR timeout issues if the GPUs were in an instance separate from the CPU(s).

The only disadvantage of running multiple instances is that several hosts will appear in your computer list even though they are all part of the same machine. So if you were trying to get a high ranking in the "Top computers" list, it would work against you.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
Morten Ross
Volunteer tester
Joined: 30 Apr 01
Posts: 183
Credit: 385,664,915
RAC: 0
Norway
Message 1285612 - Posted: 19 Sep 2012, 19:46:57 UTC

Thanks for your input :-)


Morten Ross
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1285632 - Posted: 19 Sep 2012, 20:28:55 UTC

There are two issues I have observed when running multiple BOINC clients that you may want to be aware of if you decide to give it a try.

BOINC 6.10.xx does not always generate a different host_cpid for each instance. That is fixed in BOINC 7.0.xx, at least in v7.0.25 or newer.
If all client instances have the same host_cpid, you will get a lot of Abandoned tasks each time the other client connects.

BOINC Manager 7.0.xx seems to have an issue when connecting to other BOINC instances on the local machine. When connecting to localhost:31417 or localhost:31418, the Manager seems to only want to connect to port 31416. Even specifying the host name/address doesn't appear to work. BOINC Manager 6.10.xx doesn't seem to have that issue. BOINC Manager 7.0.xx does work if started using the host/port syntax, such as:
boincmgr.exe --namehost=localhost --gui_rpc_port=31417
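
The second client itself would be started along these lines (the data directory and port are just examples; as far as I know --allow_multiple_clients, --dir and --gui_rpc_port are all valid 7.0.xx client options):
boinc.exe --allow_multiple_clients --dir C:\BOINC2 --gui_rpc_port 31417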

SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
