Message boards :
Number crunching :
App_info with one count value for GTX 295 and one for Fermi and later.
Morten Ross (Joined: 30 Apr 01, Posts: 183, Credit: 385,664,915, RAC: 0)
I've got one rig with currently just one GTX 690. It's very lonely :-) For a GTX 295, a count value lower than 1 will not work, but I would like to fill some free PCIe slots with some soon-to-be-retired GTX 295s. I'm thinking it could perhaps be done using 2 plan_classes and/or version_nums, but how do I make BOINC understand the difference, without messing with the project server? It seems I have to use either only 295s, or only Fermi and later. I hope I'm wrong.

Morten Ross
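For readers unfamiliar with the mechanism under discussion: with the anonymous platform, the per-task GPU share is set by the count element inside the coproc block of each app_version in app_info.xml. A minimal sketch of the two-plan-class idea Morten describes might look like the following; the app name, version numbers, and plan_class strings are illustrative rather than actual SETI@home values, the file_info/file_ref entries a real app_info.xml requires are omitted, and, as the replies below explain, the client offers no way to pin one plan class to one specific card:

```xml
<app_info>
  <app>
    <name>setiathome_enhanced</name>
  </app>
  <!-- Pre-Fermi (GTX 295): one task per GPU -->
  <app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>608</version_num>
    <plan_class>cuda</plan_class>
    <coproc>
      <type>CUDA</type>
      <count>1</count>      <!-- a count below 1 reportedly fails on the 295 -->
    </coproc>
  </app_version>
  <!-- Fermi and later (GTX 690): two tasks per GPU -->
  <app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>610</version_num>
    <plan_class>cuda_fermi</plan_class>
    <coproc>
      <type>CUDA</type>
      <count>0.5</count>    <!-- 0.5 GPUs per task, i.e. 2 tasks per GPU -->
    </coproc>
  </app_version>
</app_info>
```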
Tim (Joined: 19 May 99, Posts: 211, Credit: 278,575,259, RAC: 0)
Morten Ross wrote: "I've got one rig with currently just one GTX 690. It's very lonely :-)"

I tested this once with 3 GTX 285s and 1 GTX 580, with the 285s set to a count of 0.5. The system was VERY unstable, in fact so unstable that I had random restarts and blue screens, even with the latest driver at the time. I don't think it is a BOINC issue but a VGA driver issue. So for me the best approach is NOT to mix Fermi and pre-Fermi cards.
Wiggo (Joined: 24 Jan 00, Posts: 34984, Credit: 261,360,520, RAC: 489)
Personally I wouldn't mix Fermi and pre-Fermi cards either.

Cheers.
Horacio (Joined: 14 Jan 00, Posts: 536, Credit: 75,967,266, RAC: 0)
Morten Ross wrote: "I'm thinking perhaps it could be done using 2 plan_classes, and/or version_nums, but how to make BOINC understand the difference, and also not to mess with project server?"

AFAIK, the only way is running 2 instances of BOINC, each one set to ignore one of the GPUs... Any other workaround (including the trivial "crunch only 1 task per GPU on all of them") will give you more headaches than benefits, especially due to the difference in performance: the runtime estimates will be very wrong for both GPUs, the cache may get over- or under-filled, and worst of all, WUs may fail with a "too much time elapsed" error when crunched on the slower GPU.
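The two-instance setup Horacio describes is usually done with the cc_config.xml ignore_nvidia_dev option (BOINC 7.x; older 6.x clients used ignore_cuda_dev instead). A sketch, assuming device 0 is the GTX 690 and devices 1 and 2 are the two halves of a GTX 295 (device numbering is machine-specific, so check the client's startup messages):

```xml
<!-- cc_config.xml for instance A (Fermi only): ignore the pre-Fermi devices -->
<cc_config>
  <options>
    <ignore_nvidia_dev>1</ignore_nvidia_dev>
    <ignore_nvidia_dev>2</ignore_nvidia_dev>
  </options>
</cc_config>
```

The second instance gets the mirror image, ignoring only device 0, so each client sees a uniform set of GPUs and the runtime estimates stay sane for each.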
HAL9000 (Joined: 11 Sep 99, Posts: 6534, Credit: 196,805,888, RAC: 57)
Morten Ross wrote: "I'm thinking perhaps it could be done using 2 plan_classes, and/or version_nums, but how to make BOINC understand the difference, and also not to mess with project server?"

Running multiple instances of BOINC would also have the benefit of avoiding VLAR timeout issues, provided the GPUs were in an instance separate from the CPU(s). The only disadvantage of running multiple instances is that your host list shows multiple machines that are all part of the same physical machine. So if you were trying to get a high ranking in the "Top computers" list it would work against you.

SETI@home classic workunits: 93,865 | CPU time: 863,447 hours | Join the BP6/VP6 User Group (http://tinyurl.com/8y46zvu)
HAL9000 (Joined: 11 Sep 99, Posts: 6534, Credit: 196,805,888, RAC: 57)
There are 2 issues I have observed when running multiple BOINC clients that you may want to be aware of if you decide to give it a try.

BOINC 6.10.xx does not always generate a different host_cpid for each instance. That is fixed in BOINC 7.0.xx, at least in v7.0.25 or newer. If all client instances have the same host_cpid, you will get a lot of Abandoned tasks each time the other client connects.

BOINC Manager 7.0.xx seems to have an issue when connecting to other BOINC instances on the local machine: when told to connect to localhost:31417 or localhost:31418, Manager still only wants to connect to port 31416, and even specifying the host name/address doesn't appear to work. BOINC Manager 6.10.xx doesn't have that issue. BOINC Manager 7.0.xx does work if started using the host/port syntax, such as: boincmgr.exe --namehost=localhost --gui_rpc_port=31417
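Putting HAL9000's pieces together, a hedged sketch of starting a second client instance on Windows with a BOINC 7.x client and then attaching a Manager to it on the alternate RPC port. The data-directory path is illustrative; the --allow_multiple_clients, --dir, and --gui_rpc_port client flags and the Manager's --namehost/--gui_rpc_port flags exist in the 7.x series:

```shell
rem Start a second client instance with its own data directory and RPC port
boinc.exe --allow_multiple_clients --dir C:\BOINC2 --gui_rpc_port 31417

rem Attach a Manager to that instance using the host/port startup syntax
boincmgr.exe --namehost=localhost --gui_rpc_port=31417
```

Each instance then maintains its own client_state.xml and host_cpid, which is what avoids the Abandoned-task problem described above.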
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.