Message boards : Number crunching : Panic Mode On (69) Server problems?
Cosmic_Ocean · Joined: 23 Dec 00 · Posts: 3027 · Credit: 13,516,867 · RAC: 13
Hi Dave,

Eh.. it's not really that important. At first it was a project to compare the run-time vs. percent blanked for AP tasks using the lunatics apps. It started way back with r103 when I made the switch from stock -> optimized. At that time, I had four machines that were crunching, and they were all radically different architectures. I made some good observations and data points. Even recently, when my main cruncher of just over five years started developing problems and I removed one of the CPUs, the data revealed a possible architecture flaw. At one point I also sent all of my work to Josef to see if he could make any sense of an issue I was having. So it wasn't really a waste, but like you said, Dave.. it was probably time for that project to come to an end. I've worked through small periods of not being able to get at the tasks, or DB crashes that last a week or more, without any significant loss, but this most recent occurrence was enough to make me scrap it. Of course, I could just start anew now that it is working for the most part.

Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up)
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14650 · Credit: 200,643,578 · RAC: 874
On my machines (quad-core Intel CPUs, mostly elderly CUDA cards like the 9800GT), I've always found that VHAR tasks crunch particularly efficiently on the GPUs (ever since cuda23 came out, at least). Running multiple VHARs on the CPUs tends to be counter-productive, because memory bus contention slows them all down.
Dave Stegner · Joined: 20 Oct 04 · Posts: 540 · Credit: 65,583,328 · RAC: 27
Anyone else getting this kind of stuff or is it just me ??

SLWS006
2953 SETI@home 3/3/2012 16:09:37 Sending scheduler request: Requested by user.
2954 SETI@home 3/3/2012 16:09:37 Reporting 2 completed tasks, not requesting new tasks
2955 3/3/2012 16:09:59 Project communication failed: attempting access to reference site
2956 SETI@home 3/3/2012 16:09:59 Scheduler request failed: Failure when receiving data from the peer
2957 3/3/2012 16:10:01 Internet access OK - project servers may be temporarily down.
2958 3/3/2012 16:10:02 Project communication failed: attempting access to reference site
2959 SETI@home 3/3/2012 16:10:02 Temporarily failed upload of 16dc11ad.8105.104986.7.10.124_1_0: HTTP error
2960 SETI@home 3/3/2012 16:10:02 Backing off 1 min 0 sec on upload of 16dc11ad.8105.104986.7.10.124_1_0
2961 3/3/2012 16:10:04 Internet access OK - project servers may be temporarily down.

Dave
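The "Backing off 1 min 0 sec" line in the log above is the client's upload retry delay. As an illustration only (the function name, constants, and jitter range here are assumptions, not BOINC's actual implementation), exponential backoff with jitter looks roughly like this:

```python
import random

def backoff_delay(attempt, base=60.0, cap=3600.0):
    """Illustrative exponential backoff: double the delay on each failed
    attempt, cap it, and jitter it so many clients don't all retry a
    recovering server at the same instant."""
    delay = min(cap, base * (2 ** attempt))
    # jitter: pick a random point in the upper half of the window
    return delay * random.uniform(0.5, 1.0)
```

The first retry lands somewhere around the one minute the log reports; repeated failures push the delay toward the cap instead of hammering the servers.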
Dimly Lit Lightbulb 😀 · Joined: 30 Aug 08 · Posts: 15399 · Credit: 7,423,413 · RAC: 1
> I think you MB people are about to run out of new MB WU's. Not many files left to split now. Only 35 channels left to do.

As an astropulse-only cruncher, I shall giggle that the multibeamers will have the experience of what it's like for no tasks to be split for them to crunch :)

Member of the People Encouraging Niceness In Society club.
Dimly Lit Lightbulb 😀 · Joined: 30 Aug 08 · Posts: 15399 · Credit: 7,423,413 · RAC: 1
> I think you MB people are about to run out of new MB WU's. Not many files left to split now. Only 35 channels left to do.

Should we put it down as Seti Karma? :)

Member of the People Encouraging Niceness In Society club.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13736 · Credit: 208,696,464 · RAC: 304
> Anyone else getting this kind of stuff or is it just me ??

Maybe just you. I've hit the server limits, but when I can get work it comes down at a good pace.

Grant
Darwin NT
Belthazor · Joined: 6 Apr 00 · Posts: 219 · Credit: 10,373,795 · RAC: 13
I believe every cruncher has a nice bunch now :-P
Wiggo · Joined: 24 Jan 00 · Posts: 34744 · Credit: 261,360,520 · RAC: 489
Seeing as even with the limits imposed I'm still good for 3 days, I can't see me whining any time soon. :D

My "whining" will likely start when the AP's are turned back on and I have to go searching for good proxies again. ;)

Cheers.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13736 · Credit: 208,696,464 · RAC: 304
> Seeing as even with the limits imposed I'm still good for 3 days

I wish. 3-3.5 days CPU. Only 0.8-1.1 for the GPUs. I can't be bothered messing around with the FLOPS numbers to get the DCF to stabilise.

Grant
Darwin NT
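The FLOPS/DCF fiddling Grant mentions comes down to how the client estimates a task's runtime: the project's floating-point-operation estimate for the workunit, divided by the device's speed, scaled by the host's duration correction factor (DCF). A minimal sketch under those assumptions (parameter names are illustrative, not BOINC's actual code):

```python
def estimated_runtime_s(rsc_fpops_est, device_flops, dcf):
    """Sketch of a BOINC-style runtime estimate: workunit size in
    floating-point ops, divided by device speed, corrected by the
    host's duration correction factor (DCF)."""
    return rsc_fpops_est / device_flops * dcf

# e.g. a 30-TFLOP workunit on a 30 GFLOPS device with DCF 1.2:
# roughly 1200 seconds (20 minutes)
```

When the advertised FLOPS figure is off, the DCF drifts to compensate, which is why CPU and GPU estimates on one host can disagree until things stabilise.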
Wiggo · Joined: 24 Jan 00 · Posts: 34744 · Credit: 261,360,520 · RAC: 489
> Seeing as even with the limits imposed I'm still good for 3 days

I have never had the inclination or time to bother with flops either, Grant; I just let mine sort themselves out (hopefully Dr D.A. will start to show that he's worth his wages by doing the right thing instead of just changing things to make it look like he's worth it).

Cheers.
Belthazor · Joined: 6 Apr 00 · Posts: 219 · Credit: 10,373,795 · RAC: 13
BTW, one tape was added just now. 35 channels to split. No items to whine at all - multibeam forever! ;-)))
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
> BTW, one tape was added just now. 35 channels to split. No items to whine at all - multibeam forever! ;-)))

At present (2012-03-04 13:30:07 UTC) the counts are:

total channels to do: MB 54/84, AP 98/98

So until the AP splitters start again I suspect the whining will be held down to a minimum.

SETI@home classic workunits: 93,865
CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
Claggy · Joined: 5 Jul 99 · Posts: 4654 · Credit: 47,537,079 · RAC: 4
It's going to be interesting to see how close to zero results out in the field we get for AP before they start sending out new AP workunits. As of now, we have 86,770 AP results out in the field. It could be that the next time the AP splitters run here, they'll be splitting AP v6 WUs instead of v505 WUs; we'll see in time.

Claggy
Claggy · Joined: 5 Jul 99 · Posts: 4654 · Credit: 47,537,079 · RAC: 4
> It's going to be interesting to see how close to zero results out in the field we get for AP, before they start sending out new AP workunits. As of now, we have 86,770 AP results out in the field.

You'll get them when you need them ;-)

Claggy
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.