Message boards :
Number crunching :
how to avoid cuda50 in favour of _SoG
Ninos Y Send message Joined: 26 Aug 99 Posts: 15 Credit: 55,831,116 RAC: 0 |
I would like to know how to exclude the server from sending me cuda50 tasks. Currently using BOINC Manager 7.6.22. If I delete setiathome_8.00_windows_intelx86__cuda50.exe, it eventually just re-uploads it (along with all the previous cuda iterations). I am NOT interested in installing Lunatics. I have 2 GTX 950s, running 2 tasks per GPU; they should be adequate to use _SoG. How do we tell the servers 'no thank you' to cuda50? |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13736 Credit: 208,696,464 RAC: 304 |
How do we tell the servers: 'no thank you' to cuda50? Just let it run, and eventually it will determine SoG is the best application to use & will continue to use it. Grant Darwin NT |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
BOINC doesn't currently have the ability to disable specific GPU features, like disabling CUDA while leaving OpenCL enabled. However, if you want to get tricky, you could remove the CUDA section from your coproc_info.xml and then make it read-only. I have used similar tricks for my Radeon GPUs, when I either needed to tell the server a GPU had CAL support when it didn't, or that it didn't have CAL support when I only wanted OpenCL. I haven't tried the same method with CUDA, so I don't know if it will work the same. Proceed at your own risk. EDIT: Looking at the coproc_info.xml for my 750ti, it looks like maybe the value <have_cuda>1</have_cuda> could be changed to 0 instead of removing the CUDA section. If you give it a try, be sure to let us know what happens. SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
BOINC doesn't currently have the ability to disable specific GPU features, like disabling CUDA while leaving OpenCL enabled. However, if you want to get tricky, you could remove the CUDA section from your coproc_info.xml and then make it read-only. I decided, since I'm only using the OpenCL apps, I'd give it a try. Looks like the whole CUDA section has to come out.
11/21/2016 7:18:21 PM Starting BOINC client version 7.4.42 for windows_x86_64
11/21/2016 7:18:21 PM log flags: sched_ops
11/21/2016 7:18:21 PM Libraries: libcurl/7.39.0 OpenSSL/1.0.1j zlib/1.2.8
11/21/2016 7:18:21 PM Data directory: D:\BOINC
11/21/2016 7:18:21 PM Failed to delete old coproc_info.xml. error code -110
11/21/2016 7:18:21 PM OpenCL: NVIDIA GPU 0: GeForce GTX 750 Ti (driver version 364.51, device version OpenCL 1.2 CUDA, 2048MB, 1967MB available, 1622 GFLOPS peak)
11/21/2016 7:18:21 PM OpenCL: Intel GPU 0: Intel(R) HD Graphics (driver version 10.18.10.4358, device version OpenCL 1.2, 1195MB, 1195MB available, 358 GFLOPS peak)
11/21/2016 7:18:21 PM OpenCL CPU: Intel(R) Celeron(R) CPU J1900 @ 1.99GHz (OpenCL driver vendor: Intel(R) Corporation, driver version 3.0.1.10891, device version OpenCL 1.2 (Build 76427))
11/21/2016 7:18:21 PM Asteroids@home Found app_info.xml; using anonymous platform
11/21/2016 7:18:21 PM SETI@home Found app_info.xml; using anonymous platform
11/21/2016 7:18:21 PM Host name: SIMIII
11/21/2016 7:18:21 PM Processor: 4 GenuineIntel Intel(R) Celeron(R) CPU J1900 @ 2.41GHz
11/21/2016 7:18:21 PM Processor features: fpu vme de pse <SNIP>
11/21/2016 7:18:21 PM OS: Microsoft Windows 7: Ultimate x64 Edition, Service Pack 1, (06.01.7601.00)
11/21/2016 7:18:21 PM Memory: 3.71 GB physical, 7.42 GB virtual
11/21/2016 7:18:21 PM Disk: 198.09 GB total, 188.47 GB free
11/21/2016 7:18:21 PM Local time is UTC -5 hours
You may notice the line "Failed to delete old coproc_info.xml. error code -110". BOINC is just complaining that it can't delete the old file and make a new one, but that is the desired effect in this situation. If you were to change your coprocessor configuration in any way, you would probably want to let BOINC generate a new one and modify it again. This does drop the driver version from your host information on the website, because BOINC doesn't do driver version detection for OpenCL. Which is why there are no driver versions displayed for Intel GPUs and some Radeon GPUs. |
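[A sketch of the edit described above, assembled from HAL9000's posts. Only <have_cuda> is quoted in the thread; the per-GPU element name and surrounding layout are assumptions and vary by BOINC client version — inspect your own coproc_info.xml before editing.]

```xml
<!-- coproc_info.xml fragment (sketch, not a complete file).
     HAL9000's test suggests that flipping have_cuda alone may not be
     enough: the whole CUDA device section has to come out, and the file
     must then be made read-only so BOINC cannot regenerate it. -->
<have_cuda>0</have_cuda>
<!-- ...also delete the per-GPU CUDA block entirely, i.e. everything
     between the CUDA device tags (element name assumed here). -->
```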
Raistmer Send message Joined: 16 Jun 01 Posts: 6325 Credit: 106,370,077 RAC: 121 |
If SoG is better for your particular setup, BOINC will send only those from some point in time. For that you need to complete >10 tasks for both CUDAxx and OpenCL SoG. Check the APR value for each app version. If SoG's is bigger, there is no need to do anything; BOINC will eventually send only those. If some CUDA APR is bigger (but you are very sure you want only SoG), then it may be worth setting NNT (No New Tasks), finishing all current tasks, and then detaching from the SETI project and attaching again (to get a new host ID and a chance to compute the APR correctly from the beginning). SETI apps news We're not gonna fight them. We're gonna transcend them. |
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14650 Credit: 200,643,578 RAC: 874 |
Somebody suggested running multiple tasks on the GPU when in CUDA mode, and only a single task in SoG mode. That can be done with the plan_class extension to app_config.xml - it doesn't need the whole Lunatics installer - and will drive the cuda APRs downwards until SoG is dominant. |
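[A minimal app_config.xml sketch of the approach Richard describes. The app and plan-class names (setiathome_v8, cuda50, opencl_nvidia_SoG) and the cpu/gpu fractions are assumptions for illustration; check your client's event log or sched_reply for the exact names your host uses.]

```xml
<!-- app_config.xml sketch: run two tasks per GPU under the cuda50 plan
     class (ngpus 0.5) but only one under SoG (ngpus 1.0). The doubled-up
     cuda50 tasks take longer each, dragging that plan class's APR down
     until the scheduler settles on SoG. -->
<app_config>
    <app_version>
        <app_name>setiathome_v8</app_name>
        <plan_class>cuda50</plan_class>
        <ngpus>0.5</ngpus>
        <avg_ncpus>0.04</avg_ncpus>
    </app_version>
    <app_version>
        <app_name>setiathome_v8</app_name>
        <plan_class>opencl_nvidia_SoG</plan_class>
        <ngpus>1.0</ngpus>
        <avg_ncpus>1.0</avg_ncpus>
    </app_version>
</app_config>
```

The file goes in the project's directory under the BOINC data directory; re-read config files (or restart the client) for it to take effect.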
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
Somebody suggested running multiple tasks on the GPU when in CUDA mode, and only a single task in SoG mode. That can be done with the plan_class extension to app_config.xml - it doesn't need the whole Lunatics installer - and will drive the cuda APRs downwards until SoG is dominant. That is also a more elegant option than my brute-force gutting of CUDA from the system. Especially to keep CUDA available for other projects. |
Raistmer Send message Joined: 16 Jun 01 Posts: 6325 Credit: 106,370,077 RAC: 121 |
And this clearly demonstrates the drawbacks of using APR for performance estimation. It just can't be used that way without additional knowledge about the configuration. |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
And this clearly demonstrates the drawbacks of using APR for performance estimation. It just can't be used that way without additional knowledge about the configuration. Having such a large list of apps is great for the project, allowing it to support a large spectrum of hardware. But SETI@home, with so many apps, needs a feature for the user to better define the apps they would like to run on their system. It might be helpful as a standard BOINC feature, or by enabling the feature if it is already there. Allowing the user to enable/disable by plan class from their Project Preferences seems like the most logical option. Perhaps something along the lines of what Collatz implemented: http://i.imgur.com/ukiDhgD.png |
Ninos Y Send message Joined: 26 Aug 99 Posts: 15 Credit: 55,831,116 RAC: 0 |
Great suggestions, thank you all. For me, I had _SoGs running nicely with 3 tasks per GPU, then noticed some triplicate errors (something like that), and I believe this is what triggered the servers to abandon the distribution of _SoGs for me in favour of the CUDA50s. Then I scaled back to 2 tasks per GPU, and now it is back to _SoGs. There seem to be checks and balances to make sure the Berkeley team isn't getting nonsense results. |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
I like what Collatz has done. Looks very simple to implement, at least from the user side selection. Don't know how involved it would be to do the server configuration for plan_classes choice. Maybe SETI can just borrow the snippet of code they wrote. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
Some type of plan class management at the user level does seem like it should be a standard BOINC feature to me, as "the APR method will figure it out eventually" really doesn't cut it in some cases. I gave up running my Android devices, as the APR system was favoring the less efficient apps. The more efficient apps, when actually selected, mostly would get a task that didn't count towards "completed". Being able to compare the few good tasks they were sent to the other apps, and the gap in app efficiency, made the decision for me. My devices were wasting too much time running less efficient apps. I imagine the percentage of users that ever access their project preferences is likely extremely small, so that would put such a feature low on a "to do" list. |
Jeff Buck Send message Joined: 11 Feb 00 Posts: 1441 Credit: 148,764,870 RAC: 0 |
For the first year or so that my T7400 was crunching for S@h I ran stock. The scheduler initially tested both Cuda42 and Cuda50 for the GPUs. Cuda50 would have been the sensible choice to end up with for those cards. However, every time we got a burst of APs (remember those days?), they would cause the MB tasks to take longer than normal, thus dragging down the APR for whatever the top-rated Cuda flavor was at the time. Then the scheduler would go back to flip-flopping between Cuda42 and Cuda50. Whichever one was on top when the AP music stopped (sort of like musical chairs) would then be the favored flavor until the next round of APs. Finally, there was a long-enough gap between AP bursts, at a time when Cuda42 was on top, that the Cuda42 APR built back up to a point that even the next round of APs couldn't drag it down to the Cuda50 APR. That doomed that machine to Cuda42 forever and ever. I even tried artificially forcing it down by running 5 or 6 tasks at a time on each GPU but never could get it low enough. The only solution was to switch to Lunatics and manually select Cuda50. In short, it would be nice if there was some sort of reset switch for the APRs when they get badly skewed for some reason or other. |
Wiggo Send message Joined: 24 Jan 00 Posts: 34748 Credit: 261,360,520 RAC: 489 |
For the first year or so that my T7400 was crunching for S@h I ran stock. The scheduler initially tested both Cuda42 and Cuda50 for the GPUs. Cuda50 would have been the sensible choice to end up with for those cards. A lot of things changed with the intro of MB V8, and running multiple w/u's is a big one on middle- to lower-class cards. Cheers. |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.