Message boards :
Number crunching :
Help! No GPU work for Days
JBird Joined: 3 Sep 02 Posts: 297 Credit: 325,260,309 RAC: 549
3 big NVidias here - starving. What did I do / what can I do? Did I in fact do something? Thanks in advance, JBird
Zalster Joined: 27 May 99 Posts: 5517 Credit: 528,817,460 RAC: 242
Did you install Lunatics?
JBird Joined: 3 Sep 02 Posts: 297 Credit: 325,260,309 RAC: 549
It was already there - are you saying I need to reinstall? The only change was a 980Ti/Titan swap on this machine. I did redo the drivers - all machines, every move, every time. Hey, thanks for stopping by! Hard to believe yours is the only post. Edit> I did get 5 v8 CUDAs - once, today. Clueless why no others.
Bernie Vine Joined: 26 May 99 Posts: 9954 Credit: 103,452,613 RAC: 328
Moved from Q&A |
Brent Norman Joined: 1 Dec 99 Posts: 2786 Credit: 685,657,289 RAC: 835
It seems we have been going through a VLAR storm, which means tasks for Nvidia cards are in short supply.
William Joined: 14 Feb 13 Posts: 2037 Credit: 17,689,662 RAC: 0
Also, once you are empty and have repeated 'no task' events, BOINC will go into increased backoffs (up to 24h). [Mea culpa - it's meant to stop BOINC asking for tasks too often.] Under normal conditions it can take a few attempts to top up - not so easy if you are on a 24h backoff... Each time a task is reported, those backoffs are cleared, so this doesn't happen while you still have tasks left. 'Priming the pump' when you are empty can be a bit difficult; I suggest you periodically hit the update button until you get a few. And yes, if there are few tasks to be had for NV because of lots of VLAR, you are competing for a very limited resource and must get lucky on your request.

A person who won't read has no advantage over one who can't read. (Mark Twain)
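The escalating-backoff behaviour described above can be sketched in a few lines of Python. This is a simplified illustration of the idea, not BOINC's actual scheduler code; the base interval, doubling factor, and jitter range here are assumptions:

```python
import random

MAX_BACKOFF = 24 * 3600  # the 24-hour cap mentioned above, in seconds

def next_backoff(consecutive_failures):
    """Roughly double the wait after each consecutive 'no tasks' reply,
    capped at 24 hours. Illustrative only: BOINC uses its own constants
    and randomization, not exactly these."""
    base = min(60 * 2 ** consecutive_failures, MAX_BACKOFF)
    # Jitter keeps every empty host from hitting the server in lockstep.
    return base * random.uniform(0.5, 1.0)

def on_task_reported():
    """Reporting a completed task clears the backoff counter."""
    return 0
```

Once the host is on a long backoff, only a successful report (or a manual update) shortens the wait, which is why an empty cache is so hard to refill.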
Richard Haselgrove Joined: 4 Jul 99 Posts: 14650 Credit: 200,643,578 RAC: 874
Coupled, this morning, with a 'shorty storm': a high proportion of tasks which run quickly, and need replacing. They run well on GPUs, but tend to be doled out in large batches to hosts requesting large amounts of work. That makes it less likely that any one work request is filled. But if you do manage to snag a batch, they do make very good pump-primers. |
JBird Joined: 3 Sep 02 Posts: 297 Credit: 325,260,309 RAC: 549
Hey, thanks for the feedback everyone. I just happen to be loaded for bear in the Nvidia department here, and would still welcome the opportunity to take on VLAR with these powerful CUDAs - one at a time, down-clocked, whatever it takes - let me at 'em! Yes, must say my observations with shorties (i.e. guppi_MESSIERS at 13 seconds, 42 tasks in 5 minutes, 5 credits each) are mesmerizing; makes me feel like these GPUs would have some fun with VLAR. Ah well, the devs are working on it, I expect. Smoke 'em if you got 'em - ready to ride here.
Raistmer Joined: 16 Jun 01 Posts: 6325 Credit: 106,370,077 RAC: 121
If you are completely out of work with powerful GPUs, you can sacrifice CPU performance but let the GPU run. Just install anonymous-platform apps (via the Lunatics installer, for example) and then remove the GPU coprocessor tags from the CUDA app section. This makes BOINC think it's a CPU app and schedule CPU-eligible work to it. That way you can get VLARs on the GPU. But the "real" CPU app will get nothing (because there is no way to feed 2 different apps under the same plan class, AFAIK), and its section should be removed so it does not intercept tasks from the GPU app. So the CPU will sit idle (it can be used on another project) and the GPU will work on VLARs.
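For reference, the GPU coprocessor section being removed normally looks something like this inside the app_version block of app_info.xml (a typical BOINC fragment; the count value varies by setup):

```xml
<coproc>
    <type>NVIDIA</type>
    <count>1</count>
</coproc>
```

Deleting (or simply omitting) this block is what makes BOINC schedule the version as a CPU app.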
Richard Haselgrove Joined: 4 Jul 99 Posts: 14650 Credit: 200,643,578 RAC: 874
Yes, must say my observations with shorties ie guppi_MESSIERS at 13 seconds/42 tasks in 5 minutes and 5 credits is mesmerizing

No, I didn't mean that sort of (overflow) shorty - just the regular VHAR that used to run in five minutes with v6, and now runs nearer 15 minutes with v8.
Raistmer Joined: 16 Jun 01 Posts: 6325 Credit: 106,370,077 RAC: 121
Example of such a modification, based on the NV SoG build:

<app_info>
  <app>
    <name>setiathome_v8</name>
  </app>
  <file_info>
    <name>MB8_win_x86_SSE3_OpenCL_NV_r3430_SoG.exe</name>
    <executable/>
  </file_info>
  <file_info>
    <name>libfftw3f-3-3-4_x86.dll</name>
    <executable/>
  </file_info>
  <file_info>
    <name>mb_cmdline_win_x86_SSE3_OpenCL_NV.txt</name>
  </file_info>
  <app_version>
    <app_name>setiathome_v8</app_name>
    <version_num>800</version_num>
    <platform>windows_intelx86</platform>
    <cmdline>-gpu_lock -total_GPU_instances_num 6 -instances_per_device 2</cmdline>
    <file_ref>
      <file_name>MB8_win_x86_SSE3_OpenCL_NV_r3430_SoG.exe</file_name>
      <main_program/>
    </file_ref>
    <file_ref>
      <file_name>libfftw3f-3-3-4_x86.dll</file_name>
    </file_ref>
    <file_ref>
      <file_name>mb_cmdline_win_x86_SSE3_OpenCL_NV.txt</file_name>
      <open_name>mb_cmdline.txt</open_name>
    </file_ref>
  </app_version>
</app_info>

Because BOINC will think it's a CPU app, the app's own scheduling mechanism should be used instead: enable GPU lock and set how many tasks should be distributed to each GPU. This particular example implies 3 GPU devices, each of which runs 2 tasks at a time. If different numbers are required, edit the -gpu_lock -total_GPU_instances_num 6 -instances_per_device 2 string. A tuning string can also be added to the mb_cmdline*.txt file (or inside the <cmdline> tag). This example also implies there are enough CPUs to allow running 6 tasks; if not, either edit the number of CPUs in cc_config.xml or set a fractional CPU usage in app_info.
Zalster Joined: 27 May 99 Posts: 5517 Credit: 528,817,460 RAC: 242
Thank you Raistmer, will test later. JBird - make sure your command-line parameters match your app_config.xml.
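For instance, with -instances_per_device 2 on the command line, a matching app_config.xml would look something like this (values are illustrative; the right cpu_usage depends on your app and host):

```xml
<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <!-- 0.5 of a GPU per task = 2 tasks per GPU, matching -instances_per_device 2 -->
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```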
JBird Joined: 3 Sep 02 Posts: 297 Credit: 325,260,309 RAC: 549
@William (Sweet):

a task is reported, those backoffs are cleared, so it doesn't happen if you have tasks left. 'priming the pump' when you are empty can be a bit difficult. I suggest you periodically hit the update button until you get a few.

Yes, pretty nightmarish trying to coax BOINC! FYI, the solution I found: suspend Einstein through 2 SETI work-fetch cycles (typically 5 minutes each, as you know). BTW - Einstein Parkes PMPS XT v1.57 beta cuda 55 runs in 1 hr 30 minutes on the 980 Ti and Titan X, at 3 tasks per card - that's very successful, error-free CUDA processing of pretty big data. Just saying. [Win 10 and DX 12 environment here]
JBird Joined: 3 Sep 02 Posts: 297 Credit: 325,260,309 RAC: 549
Thank you Raistmer,

+1 - yes, thank you. And Z, want to get with you about this.
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.