Message boards :
Number crunching :
The Server Issues / Outages Thread - Panic Mode On! (118)
Stephen "Heretic" Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
. . Glad to be of service ... 8^} Stephen . . |
juan BFP Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
Data Distribution
State | SETI@home v7 | Astropulse | SETI@home v8 | As of*
Results ready to send | 0 | 0 | 98 | 0m
Current result creation rate ** | 0/sec | 0.0938/sec | 7.0673/sec | 5m
Do we have a problem? <edit> Forget about it - the current creation rate has risen to 77/sec. |
Richard Haselgrove Joined: 4 Jul 99 Posts: 14649 Credit: 200,643,578 RAC: 874 |
Yes. SETI had a problem today, but it's now resolved. It'll take a little while for the resulting mess - congestion - to clear up, but it'll settle down in a day or two.

When a task completes, two things happen. First, a file containing the result data is uploaded. That was going on normally all day, with one file uploading for each task as soon as it was complete - you probably didn't even notice them. Second, BOINC 'reports' the task - does the housekeeping to say how long it took, and so on. These reports can be stacked up and transferred in bulk; in fact, it's more efficient to do it that way.

The report can't be done unless the upload has already taken place. So once the reports have gone, you know that everything is complete. |
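The two-step flow Richard describes (upload first, report later and in bulk) can be sketched roughly like this. This is an illustration of the ordering constraint only, not BOINC's actual client code; the class and method names are made up for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    uploaded: bool = False
    reported: bool = False

@dataclass
class Client:
    pending_reports: list = field(default_factory=list)

    def finish(self, task: Task) -> None:
        # Step 1: the result file is uploaded as soon as the task completes.
        task.uploaded = True
        # Step 2: the housekeeping report is queued; it may be sent much
        # later, batched with other reports in one scheduler request.
        self.pending_reports.append(task)

    def report_in_bulk(self) -> int:
        # A report is only valid once the upload has already taken place.
        sent = [t for t in self.pending_reports if t.uploaded]
        for t in sent:
            t.reported = True
        self.pending_reports = [t for t in self.pending_reports if not t.uploaded]
        return len(sent)

client = Client()
tasks = [Task(f"wu_{i}") for i in range(3)]
for t in tasks:
    client.finish(t)
print(client.report_in_bulk())  # 3 - all uploads done, so all reports go out
```

Once `report_in_bulk` has sent everything, the pending queue is empty - which is why, as the post says, an empty report queue means everything is complete.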
betreger Joined: 29 Jun 99 Posts: 11358 Credit: 29,581,041 RAC: 66 |
Results ready to send: 0 (v7), 0 (Astropulse), 1,595 (v8). This is not good. |
juan BFP Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
Results ready to send 0 0 1,595
Current result creation rate ** 0.0613/sec 60.2323/sec
To keep the caches filled we need something > 40/sec, so > 60/sec is fine - the splitters are just working hard to fill the new cache limits of thousands of hungry hosts. Unless something else changes, all will return to normal in a few hours. |
Grant (SSSF) Joined: 19 Aug 99 Posts: 13720 Credit: 208,696,464 RAC: 304 |
The problem is that it needs to sustain that output, and in reality it can't. You might get bursts over 50, but the average output is often a lot less.
Results ready to send 0 0 1,595
Ready to send is now down to 1,139. Edit - big jump to 5,290. Big jumps are usually the result of no work going out; hopefully it was just a case of a big gap between updates. Grant Darwin NT |
Unixchick Joined: 5 Mar 12 Posts: 815 Credit: 2,361,516 RAC: 22 |
I tend to look at the "out in the field" number; lately it has been 7.3 million. It lets us know how big the hole to fill is. After the outage I saw it at 6.8 million, and now it is 7,208,377, so the caches are starting to fill up again. The RTS should start to build up again once the "hole" is filled and the server has done some assimilating, deleting and purging. |
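The back-of-envelope arithmetic behind these two posts can be made explicit. Using the thread's own rough numbers - a 7.3M steady-state "in the field" count, 6.8M after the outage, splitters creating ~60 results/sec against ~40/sec of client demand - the net fill rate and refill time fall out directly. The function below is just that arithmetic; the specific rates are the thread's approximations, not server specifications.

```python
def refill_hours(steady_in_field: int, current_in_field: int,
                 create_rate: float, consume_rate: float) -> float:
    """Estimate hours until the 'hole' left by an outage is refilled.

    Rates are results/second. consume_rate (~40/sec in this thread) is
    the rough drain from hosts topping up their caches.
    """
    hole = steady_in_field - current_in_field          # results missing
    net = create_rate - consume_rate                   # net fill per second
    if net <= 0:
        return float("inf")  # splitters can't keep up; the hole never closes
    return hole / net / 3600

# Numbers quoted in this thread (approximate):
print(round(refill_hours(7_300_000, 6_800_000, 60, 40), 1))  # 6.9 hours
```

A 500,000-result hole at a net +20/sec closes in about 7 hours - consistent with juan's "normal in a few hours", and it also shows why Grant's point matters: if the sustained creation rate drops below the ~40/sec demand, the net goes negative and the hole never closes.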
Grant (SSSF) Joined: 19 Aug 99 Posts: 13720 Credit: 208,696,464 RAC: 304 |
Instant timeout on some downloads again. Grant Darwin NT |
gs Joined: 18 May 99 Posts: 45 Credit: 5,412,660 RAC: 8 |
Old computer broke down. Trying to get this set up on a new one. Wondering why nothing happened. Now I know. Thank you. Same here. Waiting to receive new WUs. Happy New Year to everybody! |
Stephen "Heretic" Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
Old computer broke down. trying to get this set up on a new one. Wondering why nothing happened. Now I know. Thank you. . . May the bluebird of happiness ... etc ... etc ... etc Stephen :) |
taslehoff Joined: 28 Sep 02 Posts: 3 Credit: 2,938,934 RAC: 0 |
Thanks for the reply mate (you learn something new every day) :) |
Sleepy Joined: 21 May 99 Posts: 219 Credit: 98,947,784 RAC: 28,360 |
And we are back! :-) |
Jimbocous Joined: 1 Apr 13 Posts: 1849 Credit: 268,616,081 RAC: 1,349 |
So far the smoothest recovery I've seen in quite a while. Was able to report all work immediately, and got a small download. Guess we'll see how it goes; hope my optimism doesn't jinx anything :) |
Stephen "Heretic" Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
So far the smoothest recovery I've seen in quite a while. Was able to report all work immediately, and got a small download. Guess we'll see how it goes; hope my optimism doesn't jinx anything :) . . If it does we'll just blame you :) Stephen :) |
Keith Myers Joined: 29 Apr 01 Posts: 13161 Credit: 1,160,866,277 RAC: 1,873 |
So far the smoothest recovery I've seen in quite a while. Was able to report all work immediately, and got a small download. Guess we'll see how it goes; hope my optimism doesn't jinx anything :) Don't know what your magic recipe is . . . I never have any kind of luck. Tue 07 Jan 2020 04:27:57 PM PST | SETI@home | Scheduler request failed: HTTP internal server error Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
Tom M Joined: 28 Nov 02 Posts: 5124 Credit: 276,046,078 RAC: 462 |
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | Project has no tasks available
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | Project requested delay of 303 seconds
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_55925_HIP21556_0014.30466.0.21.44.190_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_57863_HIP21594_0020.30508.818.21.44.19.vlar_1
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc54_2bit_guppi_58692_57863_HIP21594_0020.30508.818.21.44.18.vlar_1
[... several dozen more "got ack" lines for other tasks, all at the same timestamp ...]
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] Deferring communication for 00:05:03
Tue 07 Jan 2020 06:29:42 PM CST | SETI@home | [sched_op] Reason: requested by project

It's giving me all these "ack"s and I don't have a single Seti@Home task running right now. What are these again? Tom

A proud member of the OFA (Old Farts Association). |
Keith Myers Joined: 29 Apr 01 Posts: 13161 Credit: 1,160,866,277 RAC: 1,873 |
You were able to report your finished work. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
Tom M Joined: 28 Nov 02 Posts: 5124 Credit: 276,046,078 RAC: 462 |
You were able to report your finished work. Ah, slaps forehead...... A proud member of the OFA (Old Farts Association). |
Stephen "Heretic" Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
So far the smoothest recovery I've seen in quite a while. Was able to report all work immediately, and got a small download. Guess we'll see how it goes; hope my optimism doesn't jinx anything :) . . Three machines get constant "No tasks available" while the fourth machine cannot report work. No errors, just no response at all - the requests go off into limbo ... . . Time for a break ... Stephen <shrug> and all after a 10 hour outage ... :( |
Keith Myers Joined: 29 Apr 01 Posts: 13161 Credit: 1,160,866,277 RAC: 1,873 |
My assumption is that the stock machines, with just a hundred or so tasks to report, hog the connections right after the project returns, forcing out the hosts that need to report thousands of tasks - those hosts have to make dozens of connections, since each report is capped at 256 tasks. I never can make a connection for several hours after the project has returned. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
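The "dozens of connections" claim is just division: a backlog split into requests of at most 256 tasks each. A minimal sketch of that batching, using the 256-per-report cap mentioned in this thread (the function itself is illustrative, not BOINC client code):

```python
def report_batches(task_names: list, max_per_request: int = 256) -> list:
    """Split a report backlog into scheduler-request-sized batches.

    256 is the per-request cap cited in this thread; each batch would
    cost one scheduler connection.
    """
    return [task_names[i:i + max_per_request]
            for i in range(0, len(task_names), max_per_request)]

# A big host with a 3,000-task backlog after the outage:
backlog = [f"task_{n}" for n in range(3000)]
batches = report_batches(backlog)
print(len(batches))  # 12 scheduler connections needed, versus 1 for a stock host
```

So a host with 3,000 finished tasks needs 12 successful connections to clear its backlog, while a stock host with 100 tasks needs only one - which is why the big hosts lose out when connections are scarce right after an outage.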
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.