Message boards : Number crunching : Panic Mode On (10) Server problems
(Joined: 25 May 99, Posts: 944, Credit: 52,956,491, RAC: 67)
Yes. Upload problems look pretty general at present. Downloads have been going well. No alarms on the server status page, though.
Richard Haselgrove (Joined: 4 Jul 99, Posts: 14690, Credit: 200,643,578, RAC: 874)
Have a look at the graph in message 825377 (this thread, 31 October). We're up against the top limit on downloads again (Matt found some AP data in a cellar somewhere) - you can never get uploads through cleanly when the pipe is full of downloads. Just try getting into a football stadium when a match has just ended.
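A toy illustration of that "full pipe" effect - not anything from SETI's actual network, and all rates, buffer sizes, and names below are invented: one fixed-rate link drains a finite router queue, and because the offered download traffic alone exceeds the link rate, upload packets almost always arrive to a full buffer and get dropped.

```cpp
// Toy discrete-time model of one saturated link shared by downloads
// and uploads. Purely illustrative numbers; not SETI's real traffic.
#include <deque>
#include <iostream>
#include <random>

int main() {
    const int LINK_RATE = 10;  // packets the link can transmit per tick
    const int BUFFER    = 50;  // finite router queue in front of the link
    std::deque<char> q;        // 'D' = download packet, 'U' = upload packet
    std::mt19937 rng(42);
    std::poisson_distribution<int> down(12); // offered downloads exceed capacity
    std::poisson_distribution<int> up(1);    // a trickle of uploads
    int up_sent = 0, up_dropped = 0;

    for (int tick = 0; tick < 10000; ++tick) {
        // Downloads flood in first and grab the free buffer slots.
        for (int i = down(rng); i > 0; --i)
            if ((int)q.size() < BUFFER) q.push_back('D');
        // Uploads mostly find the buffer already full.
        for (int i = up(rng); i > 0; --i) {
            if ((int)q.size() < BUFFER) q.push_back('U');
            else ++up_dropped;
        }
        // The link drains at its fixed rate, regardless of demand.
        for (int i = 0; i < LINK_RATE && !q.empty(); ++i) {
            if (q.front() == 'U') ++up_sent;
            q.pop_front();
        }
    }
    std::cout << "uploads delivered: " << up_sent
              << ", uploads dropped: " << up_dropped << '\n';
}
```

Once the queue fills during the first few ticks, almost every later upload is dropped: downloads keep flowing at line rate while uploads time out and retry, which is the symptom described above.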
kittyman (Joined: 9 Jul 00, Posts: 51527, Credit: 1,018,363,574, RAC: 1,004)
> Must be unlucky with my timing then; I now have 4 waiting to be uploaded, even though one says 100%. I'll wait - they will go up sometime, hopefully today. It is amazing that this message can get through but not my WUs.

The forums are on a different server than the up/downloads.......thank goodness. At least then we can still compare notes about what the workload servers are doing........
"Time is simply the mechanism that keeps everything from happening all at once."
kittyman (Joined: 9 Jul 00, Posts: 51527, Credit: 1,018,363,574, RAC: 1,004)
Oops...... According to the server status page, Vader has left the building.........just in time for the weekend........
"Time is simply the mechanism that keeps everything from happening all at once."
(Joined: 29 Feb 08, Posts: 286, Credit: 167,386,578, RAC: 0)
Just wondering... Why is it that the servers always go down during the weekends? It can't be a mere coincidence. Any comments? Uploads and downloads seem to be OK; however, the pending credits list is building up. I am up to 22k now. Not really bothered, though. This is not the first and this won't be the last, hehe... Thinking about it, this is similar to the wedding vows: "To have and to hold, from this day forward (the day we signed up for SETI), for better, for worse (success in finding an ET signal), for richer, for poorer (in terms of credits), in sickness and in health (server status), to love and to cherish (continue to crunch), till death us do part (hope the death is only on our part and not SETI's)."
Iona (Joined: 12 Jul 07, Posts: 790, Credit: 22,438,118, RAC: 0)
I thought something had gone wrong when a few results were reported at around 0630 local time and no credits appeared after about 20 minutes. Oh well, it's better than having WUs sent out to a 3rd machine. Perhaps the reason the servers seem to go wrong at the weekend is that they miss people being there!
Don't take life too seriously, as you'll never come out of it alive!
(Joined: 29 Feb 00, Posts: 16019, Credit: 794,685, RAC: 0)
. . .

| Database/file status | SETI@home | Astropulse | Updated |
|---|---|---|---|
| Results ready to send | 0 | 0 | 28m |
| Current result creation rate | NULL/sec | NULL/sec | 6m |
| Results out in the field | 3,400,242 | 148,672 | 28m |
| Results received in last hour | 46,894 | 694 | 0m |
| Result turnaround time (last hour average) | 66.04 hours | 210.31 hours | 0m |
| Results returned and awaiting validation | 2,748,804 | 159,540 | 28m |
| Workunits waiting for validation | 0 | 0 | 28m |
| Workunits waiting for assimilation | 40 | 0 | 28m |
| Workunit files waiting for deletion | 12 | 0 | 28m |
| Result files waiting for deletion | 0 | 0 | 28m |
| Workunits waiting for db purging | 469,262 | 7,124 | 28m |
| Results waiting for db purging | 966,038 | 19,881 | 28m |
| Transitioner backlog (hours) | 6 | | 0m |

Science Status Page
. . .
Zap de Ridder (Joined: 9 Jan 00, Posts: 227, Credit: 1,468,844, RAC: 1)
I was just looking, and while I was doing so, the ones with two results (validate state: initial) got validated a few minutes ago.
(Joined: 3 Apr 99, Posts: 16, Credit: 14,968,143, RAC: 0)
> Just wondering... Why is it that the servers always go down during the weekends?

Servers and other equipment ALWAYS work fine while a skilled technician (or a man with a sledgehammer) is nearby.
Zap de Ridder (Joined: 9 Jan 00, Posts: 227, Credit: 1,468,844, RAC: 1)
Looks like someone is working on it at the moment.
Richard Haselgrove (Joined: 4 Jul 99, Posts: 14690, Credit: 200,643,578, RAC: 874)
The Cricket graph jumped to maximum download rate about half an hour ago: that's 7am on a Saturday morning, lab time. Yet more unpaid overtime for the boys.
Richard Haselgrove (Joined: 4 Jul 99, Posts: 14690, Credit: 200,643,578, RAC: 874)
From the Server Status page [As of 15 Nov 2008 20:40:14 UTC]:
> Results ready to send: 1,042,055

Somehow, I don't think so - there were none two hours ago, and I got "no work from project" at 20:23:18.
Ingleside (Joined: 4 Feb 03, Posts: 1546, Credit: 15,832,022, RAC: 13)
> From the Server Status page [As of 15 Nov 2008 20:40:14 UTC]:

Actually, it is possible the status page is correct...

For one thing, for performance reasons the scheduling server doesn't check the database directly to see whether any work is available; instead it looks at a shared-memory array that the Feeder keeps filled. I'm not sure how large an array SETI is using, nor whether the scheduling server looks through the whole array when it can't find any work, but it's possible for the array to be "empty" one second and refilled by the Feeder the next.

Another way to generate tons of work in what seems like very little time: the Transitioner is responsible for triggering the Validator when enough results have been reported for a workunit, but also for generating new tasks when there are errors or timed-out tasks, or when newly-split workunits have been added to the database. For some reason none of the Transitioner processes were running for many hours, while everything else was. That means the scheduling server happily sent out all the available work, and as the available work dropped below whatever limit is set, the Splitters fired up and kept splitting new workunits until they hit the "disk full" limit. Since it's much faster for the Transitioner to generate a few new database entries with the task info for each workunit than it is to split one, many hours' worth of already-split workunits can produce a huge spike in available work in a fairly short time once the Transitioners are re-enabled.

"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."
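Ingleside's first point (a scheduler that only ever sees the Feeder's small window onto the database) is easy to sketch. This is only a toy model, assuming a plain in-process array in place of BOINC's real shared-memory segment; the names `Slot`, `feeder_fill`, and `scheduler_pick` and the 100-slot size are invented for illustration, not BOINC's actual identifiers:

```cpp
// Toy model of the Feeder/scheduler handoff described above.
#include <array>
#include <cstddef>
#include <deque>
#include <iostream>
#include <optional>

struct Slot {
    bool present = false;  // does this slot hold a result ready to send?
    int  result_id = 0;
};

constexpr std::size_t SLOTS = 100;  // the real size is project-configurable
std::array<Slot, SLOTS> shmem;      // stands in for the shared-memory array
std::deque<int> db_ready_to_send;   // stands in for the database queue

// Feeder: periodically tops up empty slots from the database queue.
void feeder_fill() {
    for (auto& s : shmem)
        if (!s.present && !db_ready_to_send.empty()) {
            s.result_id = db_ready_to_send.front();
            db_ready_to_send.pop_front();
            s.present = true;
        }
}

// Scheduler: serves requests from the slot array only; it never queries
// the database directly, so it can answer "no work" while work exists.
std::optional<int> scheduler_pick() {
    for (auto& s : shmem)
        if (s.present) {
            s.present = false;
            return s.result_id;
        }
    return std::nullopt;  // array momentarily empty: "no work from project"
}

int main() {
    for (int i = 0; i < 1000; ++i) db_ready_to_send.push_back(i);
    feeder_fill();  // one fill, then a burst of requests arrives
    int served = 0, refused = 0;
    for (int req = 0; req < 150; ++req) {
        if (scheduler_pick()) ++served; else ++refused;
    }
    std::cout << "served " << served << ", refused " << refused
              << ", database still holds " << db_ready_to_send.size()
              << " results\n";  // served 100, refused 50, 900 still queued
}
```

A burst of 150 requests against a 100-slot window gets 50 refusals even though 900 results still sit in the database queue, which is the same shape as the "No work from project" reports later in this thread: the status page counts the database, but each request only ever sees the Feeder's window.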
Zap de Ridder (Joined: 9 Jan 00, Posts: 227, Credit: 1,468,844, RAC: 1)
Whatever; just before going to bed I checked the server status page, and everything looks back to normal. Not that it's important to me, since my computer only runs 6/24 at 50%, and my next Astropulse WU will report on Monday or Tuesday :-) Anyway, hail to those (or the one) who took care of it all today.
Grant (SSSF) (Joined: 19 Aug 99, Posts: 13913, Credit: 208,696,464, RAC: 304)
Something's still not quite right. The system shows plenty of work available, but I've been getting "No work from project" messages for the last 4 hours.
Grant
Darwin NT
Swibby Bear (Joined: 1 Aug 01, Posts: 246, Credit: 7,945,093, RAC: 0)
Been getting "NO WORK FROM PROJECT" for most of the day. Almost out of WUs on all machines now. |