Message boards :
Number crunching :
Report all problems here...........
kittyman Send message Joined: 9 Jul 00 Posts: 51484 Credit: 1,018,363,574 RAC: 1,004 |
Anybody else test this yet? I refuse to abort them... it's against my policy of not cherry-picking Seti work. Somebody's gotta crunch them, and if the servers send 'em my way, I will do so.

I had enough GPU work to get through the outage, but now that we are in the recovery phase and the CPU/GPU limit split did not work quite the way it was intended, they are waiting on the lifting of the limit to cache up again. And it's quite possible, depending on the work mix being issued when that happens, that they may get a snootfull of VLARs anyway. At some point, they will probably have to process some. And at some point, the Lunatics crew will probably get a better handle on getting the GPUs to deal with them without such a processing penalty.

"Time is simply the mechanism that keeps everything from happening all at once."
Link Send message Joined: 18 Sep 03 Posts: 834 Credit: 1,807,369 RAC: 0 |
Has anyone of you tried what I have written above? I have done something similar with ready-crunched WUs when we had upload problems.

Any ghost WUs shown in the task lists probably do not affect the limits. AFAICT the limit logic does not do a database query to find out how many are in progress (that would be expensive, like the "resend lost work" feature). So it is based on information in the request, IOW the work the host does know about.

Joe
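Joe's point is that the server counts in-progress work from what the client itself reports in its scheduler request, not from a database lookup, which is why ghost tasks (tasks the server thinks exist but the client never received) shouldn't count against the limit. A minimal sketch of that idea, assuming a simplified request format (the element names here are illustrative, not the exact BOINC schema):

```python
import xml.etree.ElementTree as ET

# Hypothetical scheduler-request fragment: the client lists the tasks it
# knows about, so the server can tally in-progress work per resource
# without querying its database. Ghost tasks never appear here.
REQUEST = """
<scheduler_request>
  <other_result><name>ap_1.wu_0</name><plan_class></plan_class></other_result>
  <other_result><name>mb_2.wu_1</name><plan_class>cuda</plan_class></other_result>
  <other_result><name>mb_3.wu_1</name><plan_class>cuda</plan_class></other_result>
</scheduler_request>
"""

def count_in_progress(request_xml):
    """Count tasks the host reports, split by resource (CPU vs GPU)."""
    root = ET.fromstring(request_xml)
    counts = {"cpu": 0, "gpu": 0}
    for r in root.findall("other_result"):
        plan = (r.findtext("plan_class") or "").strip()
        counts["gpu" if "cuda" in plan else "cpu"] += 1
    return counts

print(count_in_progress(REQUEST))  # {'cpu': 1, 'gpu': 2}
```

The key consequence is the one Joe describes: the limit is enforced against the client's own report, so tasks the client does not know about cannot block new work.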
MadMaC Send message Joined: 4 Apr 01 Posts: 201 Credit: 47,158,217 RAC: 0 |
I think editing the client_state is beyond most people here... |
kittyman Send message Joined: 9 Jul 00 Posts: 51484 Credit: 1,018,363,574 RAC: 1,004 |
I think editing the client_state is beyond most people here...

And probably should be, as one slip can wipe out your Boinc installation.

"Time is simply the mechanism that keeps everything from happening all at once."
W-K 666 Send message Joined: 18 May 99 Posts: 19417 Credit: 40,757,560 RAC: 67 |
It's all lies,

10/07/2010 09:37:29 SETI@home Requesting new tasks for CPU
10/07/2010 09:37:34 SETI@home Scheduler request completed: got 1 new tasks
10/07/2010 09:37:36 SETI@home Started download of ap_05no09ah_B5_P0_00009_20100709_08745.wu
10/07/2010 09:37:49 SETI@home Sending scheduler request: To fetch work.
10/07/2010 09:37:49 SETI@home Requesting new tasks for CPU
10/07/2010 09:37:55 SETI@home Scheduler request completed: got 0 new tasks
10/07/2010 09:37:55 SETI@home Message from server: No work sent
10/07/2010 09:37:55 SETI@home Message from server: This computer has reached a limit on tasks in progress
10/07/2010 09:38:02 SETI@home Finished download of ap_05no09ah_B5_P0_00009_20100709_08745.wu

But Application Details says:

SETI@home Enhanced (anonymous platform, nvidia GPU)
Number of tasks completed: 21683
Max tasks per day: 187
Number of tasks today: 0
Consecutive valid tasks: 91
Average turnaround time: 11.34 days

Astropulse v505 (anonymous platform, CPU)
Number of tasks completed: 0
Max tasks per day: 100
Number of tasks today: 1
Consecutive valid tasks: 0
Average turnaround time: 0.00 days

As the AP V505 'Number of tasks today' is the one downloaded as shown in the messages, then why the "Message from server: This computer has reached a limit on tasks in progress"? Me thinks someone or something has lost his/her/its logic.

[edit] Times are UTC+1
Gundolf Jahn Send message Joined: 19 Sep 00 Posts: 3184 Credit: 446,358 RAC: 0 |
Me thinks someone or something has lost his/her/its logic.

Yes, it's you ;-) The message has nothing to do with the application details but with the 5-tasks-per-core/140-tasks-per-machine limits, and that machine has far more than 140 tasks in progress.

Gruß, Gundolf

Computer sind nicht alles im Leben. (Kleiner Scherz)
SETI@home classic workunits 3,758
SETI@home classic CPU time 66,520 hours
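The 5-tasks-per-core/140-tasks-per-machine limits Gundolf mentions reduce to a simple cap. A minimal sketch of that arithmetic (the numbers come from the post; the real scheduler logic, with its separate CPU/GPU split, is more involved):

```python
def tasks_allowed(ncpus, in_progress, per_core=5, per_host=140):
    """How many more tasks the server would send under the limits
    discussed in the thread: 5 per CPU core, capped at 140 per host."""
    cap = min(per_core * ncpus, per_host)
    return max(cap - in_progress, 0)

# A quad-core host is capped at min(5*4, 140) = 20 tasks:
print(tasks_allowed(ncpus=4, in_progress=12))  # 8
print(tasks_allowed(ncpus=4, in_progress=48))  # 0 -> "reached a limit"
```

On this model, a host with 48 tasks aboard would be refused work even though it is nowhere near 140, which may be part of the confusion in the exchange above.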
W-K 666 Send message Joined: 18 May 99 Posts: 19417 Credit: 40,757,560 RAC: 67 |
Me thinks someone or something has lost his/her/its logic.

No it doesn't, it has less than 50. If it did have more than 140, then why did it allow the download a few minutes before I posted?
Gundolf Jahn Send message Joined: 19 Sep 00 Posts: 3184 Credit: 446,358 RAC: 0 |
No it doesn't, it has less than 50.

Okay, I can only go by the task list of that host, and there are more than seven pages à twenty tasks in progress. Does it perhaps have about twenty CPU tasks aboard? (I won't go through the task list to count them :-)

Gruß, Gundolf
W-K 666 Send message Joined: 18 May 99 Posts: 19417 Credit: 40,757,560 RAC: 67 |
No it doesn't, it has less than 50.

The accurate breakdown of tasks is: 9 CPU tasks, of which 4 are being crunched, and 39 GPU tasks, including the one that is being crunched. There are also 3 Beta tasks waiting to report, but the Beta servers are still down.

[edit] Of the CPU tasks on the computer, 8 were downloaded yesterday (Berkeley time) and one today.
Gundolf Jahn Send message Joined: 19 Sep 00 Posts: 3184 Credit: 446,358 RAC: 0 |
The accurate breakdown of tasks is: 9 CPU tasks, of which 4 are being crunched, and 39 GPU tasks, including the one that is being crunched.

Okay, if we are still talking about the same host (see links in my previous posts) then I'm out of ideas (other than ghosts, see Richard's and Joe's posts). :-(

Gruß, Gundolf
hiamps Send message Joined: 23 May 99 Posts: 4292 Credit: 72,971,319 RAC: 0 |
This error says it is CPU but I can't figure it out. I don't think I have seen this. I have turned all my GPUs to stock and toned down my CPU overclock a lot...

Stderr output:

<core_client_version>6.10.58</core_client_version>
<![CDATA[
<message>
- exit code -6 (0xfffffffa)
</message>
<stderr_txt>
setiathome_CUDA: Found 3 CUDA device(s):
Device 1: GeForce GTX 285, 1007 MiB, regsPerBlock 16384, computeCap 1.3, multiProcs 30, clockRate = 1476000
Device 2: GeForce GTX 260, 879 MiB, regsPerBlock 16384, computeCap 1.3, multiProcs 27, clockRate = 1242000
Device 3: GeForce GTX 275, 879 MiB, regsPerBlock 16384, computeCap 1.3, multiProcs 30, clockRate = 1476000
setiathome_CUDA: CUDA Device 1 specified, checking...
Device 1: GeForce GTX 285 is okay
SETI@home using CUDA accelerated device GeForce GTX 285
Priority of process raised successfully
Priority of worker thread raised successfully
size 8 fft, is a freaky powerspectrum
size 16 fft, is a cufft plan
size 32 fft, is a cufft plan
size 64 fft, is a cufft plan
size 128 fft, is a cufft plan
size 256 fft, is a freaky powerspectrum
size 512 fft, is a freaky powerspectrum
size 1024 fft, is a freaky powerspectrum
size 2048 fft, is a cufft plan
size 4096 fft, is a cufft plan
size 8192 fft, is a cufft plan
size 16384 fft, is a cufft plan
size 32768 fft, is a cufft plan
size 65536 fft, is a cufft plan
size 131072 fft, is a cufft plan
) _ _ _)_ o _ _ (__ (_( ) ) (_( (_ ( (_ ( not bad for a human... _)
Multibeam x32f Preview, Cuda 3.1
Work Unit Info:
...............
WU true angle range is : 0.448164
SETI@home error -6 Bad workunit header
!swi.data_type || !found || !swi.nsamples
File: ..\seti_header.cpp
Line: 204
</stderr_txt>
]]>

Official Abuser of Boinc Buttons... And no good credit hound!
Gundolf Jahn Send message Joined: 19 Sep 00 Posts: 3184 Credit: 446,358 RAC: 0 |
This error says it is CPU but I can't figure it out.

Nope, it's GPU:

setiathome_CUDA: CUDA Device 1 specified, checking...

And somehow, the other text doesn't sound like stock, but that's just a feeling. -6 is normally a VLARkill, but 0.44 is not VLAR.

Gruß, Gundolf
hiamps Send message Joined: 23 May 99 Posts: 4292 Credit: 72,971,319 RAC: 0 |
This error says it is CPU but I can't figure it out.

A strange one... I do not use the VLARkill so it can't be that...

Task 1652179463 (workunit 630363314), sent 6 Jul 2010 15:14:55 UTC, reported 9 Jul 2010 20:41:22 UTC: Error while computing, 0.00 run time, 0.00 credit, SETI@home Enhanced, Anonymous platform (CPU)

Official Abuser of Boinc Buttons... And no good credit hound!
Garry Webb Send message Joined: 25 Aug 99 Posts: 40 Credit: 13,561,408 RAC: 0 |
Having major problems with one of my cuda machines. I detached the nasty little bugger from the project so that the tasks could be reassigned. I truly did not want to do this, however it is better that someone else have the opportunity to do the work while I get this old machine back up to functioning properly.
perryjay Send message Joined: 20 Aug 02 Posts: 3377 Credit: 20,676,751 RAC: 0 |
Here's hoping you get it back up and crunching while you can still get work before the next outage. PROUD MEMBER OF Team Starfire World BOINC |
Jord Send message Joined: 9 Jun 99 Posts: 15184 Credit: 4,362,181 RAC: 3 |
Just had 4 hours of severe thunderstorms. Had to close everything down. If you live in the North of The Netherlands, be warned. It's really bad what's coming towards you. Earlier today I had already set to only do work between 8pm and 9am, as during the day, with living room temps in the 30s Celsius it's not nice to have PCs add heat to that. ;-) |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13861 Credit: 208,696,464 RAC: 304 |
with living room temps in the 30s Celsius it's not nice to have PCs add heat to that. ;-)

I think the temperature in my room has only dropped below 28°C 4 or 5 times so far this year. Most of the time it's around 32°C.

Grant
Darwin NT
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
... That is a true Bad workunit header error. Because two other hosts did not see the error, IMO the most likely explanation is the WU file got corrupted somehow. Less likely, though of course possible, is a bug in the x32f build you're testing for Jason. The servers of course only know it was sent to be done on CPU, not that you had Rescheduled it to GPU. Joe |
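Joe's diagnosis points at the check behind "SETI@home error -6": the task aborts if the workunit header's data type is missing, the header can't be found at all, or the sample count is zero, which is exactly what a corrupted download looks like. A rough Python sketch of that validation (the real check is the C++ `!swi.data_type || !found || !swi.nsamples` in seti_header.cpp; the dictionary field names here just mirror the error text):

```python
def check_wu_header(header):
    """Fail the way seti_header.cpp does on a bad workunit header:
    reject a missing/empty data_type or a zero nsamples, the symptoms
    of a truncated or corrupted WU file."""
    if not header.get("data_type") or not header.get("nsamples"):
        raise ValueError("SETI@home error -6: Bad workunit header")
    return True

good = {"data_type": "encoded", "nsamples": 1048576}
bad = {"data_type": "", "nsamples": 1048576}  # e.g. corrupted download
check_wu_header(good)  # passes
```

Since the check fires before any real computation, it fits Joe's observation that the other hosts crunched the same WU cleanly: the corruption happened on this host's copy, not in the WU itself.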
Garry Webb Send message Joined: 25 Aug 99 Posts: 40 Credit: 13,561,408 RAC: 0 |
The VLAR kills were early work before I installed the no-kill version of Lunatics. The problem seems to be the computer itself, not the nvidia card. It kept stopping processing and gave a msg that nvp 4 was at fault. So I suspect that the nvidia program was causing the computer to malfunction. Still working on the OS.
Garry Webb Send message Joined: 25 Aug 99 Posts: 40 Credit: 13,561,408 RAC: 0 |
Probably not. That's OK though. I have two others that will not cache more than a day's worth of WUs with the extra WU cache set at 10 days. Berk says I have reached my daily limit on those two.
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.