Panic Mode On (107) Server Problems?

Wiggo · Joined: 24 Jan 00 · Posts: 36827 · Credit: 261,360,520 · RAC: 489
Another early outage start and quick recovery, even with all them big w/u's. :-)
Cheers.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13855 · Credit: 208,696,464 · RAC: 304
Another early outage start and quick recovery even with all them big w/u's. :-)
Took a couple of hours for the splitters to really get going - the ready-to-send buffer actually ran dry for a while till the splitters switched up a few gears.
Grant
Darwin NT
rob smith · Joined: 7 Mar 03 · Posts: 22535 · Credit: 416,307,556 · RAC: 380
Anyone else suffering from an ever-growing "pendings" list? A few weeks ago it was ~3,000, now at ~6.5k (tasks actually in hand is still ~1,300). Don't take the "tasks in hand" figure as valid, as I'm suffering a massive pile of ghosts just now - my own fault, I upset my front-end firewall and didn't notice for a few hours :-(
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
kittyman · Joined: 9 Jul 00 · Posts: 51478 · Credit: 1,018,363,574 · RAC: 1,004
Anyone else suffering from an ever-growing "pendings" list?
Dunno what's going on there. My pendings have stayed pretty even, give or take, with 7 rigs running. But no major jump.
State: All (7342) · In progress (2548) · Validation pending (2568) · Validation inconclusive (99) · Valid (2112)
"Time is simply the mechanism that keeps everything from happening all at once."
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14679 · Credit: 200,643,578 · RAC: 874
Anyone else suffering from an ever-growing "pendings" list?
If you have 'ghosts', then your wingmates have pendings - and vice versa. As you do, so will you be done to!
petri33 · Joined: 6 Jun 02 · Posts: 1668 · Credit: 623,086,772 · RAC: 156
The WoW event ended. Some users may have shut down their rigs without emptying the cache.
To overcome Heisenbergs: "You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628
Anyone else suffering from an ever-growing "pendings" list?
If you have 'ghosts', then your wingmates have pendings - and vice versa. As you do, so will you be done to!
. . There is a relatively simple, if tedious, way to recover your ghosted tasks. The really tedious part is that you only get 20 at a time, and if there are lots of them then you have to repeat it over and over. But it will clear up the numbers for everybody involved.
. . The actual process is in a file on another rig so I cannot repeat it at the moment.
Stephen .
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628
The WoW event ended. Some users may have shut down their rigs without emptying the cache.
. . That is true, and a very annoying and unnecessary thing to do. Good etiquette is to empty a machine's cache before shutting it down. Of course this is not possible in the event of hardware failure, but how often is that the case?
Stephen :(
rob smith · Joined: 7 Mar 03 · Posts: 22535 · Credit: 416,307,556 · RAC: 380
I too thought it was fallout from WoW, but on a computer with ~1,700 pendings only about 250 date back that far, with the majority being from the last 7 days.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
My oldest pendings go back to the last week of July, which I hope will clear out this week if the wingmen report on time. And don't forget the validation servers had a hiccup on Aug 8-9 and missed all validations on those days. Have to wait for the original deadlines to pass to clear the validators on the second pass.
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
JohnDK · Joined: 28 May 00 · Posts: 1222 · Credit: 451,243,443 · RAC: 1,127
There is a relatively simple, if tedious, way to recover your ghosted tasks. The really tedious part is that you only get 20 at a time, and if there are lots of them then you have to repeat it over and over. But it will clear up the numbers for everybody involved.
This is what I do to recover ghost WUs; the text is copied from someone else, don't remember who...
As an alternative to the "ghost" recovery process that I had previously posted quite a while back (involving client_state backup and restore, etc.), I have another one to offer that I think is simpler and can be entirely controlled within BOINC Manager. It just requires a fast finger on your mouse button, since the key here is to interrupt a scheduler request before it completes. I just used this quite successfully over the weekend to recover 127 "ghosts" that I had created on Friday when I started trying to run the "Special" app on Linux. I just went back to that machine occasionally when I had a few minutes and ran it when I knew I had room in the queue for at least 20 tasks to be recovered, so 7 times in all. I figured that, since it was my fault for creating the "ghosts", the least I could do was try to recover them as a courtesy to my wingmen.
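[For anyone who would rather script the timing than rely on a fast mouse finger, below is a rough, untested sketch of the same idea driven through the boinccmd command-line tool instead of BOINC Manager. Assumptions worth flagging: that boinccmd is on the PATH and talking to the local client, that the SETI@home project URL shown matches your install, that suspending network activity immediately after an update cuts off the in-flight scheduler reply the same way a quick click on 'Suspend network activity' does, and that the half-second delay is in the right ballpark - all of those would need checking and tuning on a real machine.]

```python
import subprocess
import time

# Assumptions (see note above): boinccmd is on the PATH and talks to the
# local client; the delay is a guess at "long enough to send the request,
# short enough to cut off the reply" and needs tuning per machine.
PROJECT_URL = "http://setiathome.berkeley.edu/"
CUTOFF_DELAY = 0.5   # seconds between 'update' and suspending the network
RESUME_AFTER = 10    # seconds to leave network activity suspended

def kick_and_interrupt():
    # Ask the client to contact the scheduler (the Manager's 'Update' button).
    subprocess.run(["boinccmd", "--project", PROJECT_URL, "update"], check=True)
    # Cut network activity before the scheduler reply is processed,
    # mimicking a fast click on 'Suspend network activity'.
    time.sleep(CUTOFF_DELAY)
    subprocess.run(["boinccmd", "--set_network_mode", "never"], check=True)
    time.sleep(RESUME_AFTER)
    # Restore normal networking; on the next scheduler contact the server
    # should resend the "lost" tasks, up to the usual 20 at a time.
    subprocess.run(["boinccmd", "--set_network_mode", "auto"], check=True)

if __name__ == "__main__":
    kick_and_interrupt()
```

[As noted in the post above, only about 20 tasks come back per pass, so whichever way you do it the process has to be repeated until the ghosts are cleared.]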
Speedy · Joined: 26 Jun 04 · Posts: 1643 · Credit: 12,921,799 · RAC: 89
blc04_2bit_blc04_guppi_57898_16542_DIAG_KIC8462852_0017 (52.39 GB) has been sitting at (108) for a number of days, but it does not seem to be slowing splitter progress.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13855 · Credit: 208,696,464 · RAC: 304
blc04_2bit_blc04_guppi_57898_16542_DIAG_KIC8462852_0017 (52.39 GB) has been sitting at (108) for a number of days, but it does not seem to be slowing splitter progress.
Same for all the other blc_04 data. Once the blc_05 data was loaded the splitters moved to those files; the only blc_04 WUs I've received since the blc_05 files were loaded have been re-sends.
Grant
Darwin NT
Speedy · Joined: 26 Jun 04 · Posts: 1643 · Credit: 12,921,799 · RAC: 89
blc04_2bit_blc04_guppi_57898_16542_DIAG_KIC8462852_0017 (52.39 GB) has been sitting at (108) for a number of days, but it does not seem to be slowing splitter progress.
That is a factor that I hadn't even considered, thanks Grant. At least with only blc04 re-sends going out, it will help clean the database up a little bit by removing outstanding results.
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
I'm assuming we are in an Arecibo VLAR storm from the splitters, since all machines have been receiving the "no work is available" message for the past couple of hours.
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628
I'm assuming we are in an Arecibo VLAR storm from the splitters, since all machines have been receiving the "no work is available" message for the past couple of hours.
. . Have you done the "kick the servers in the pants" thing yet??
. . I find I have to do that regularly, and it gets work pretty consistently even when the previous response has been "no work available".
Stephen ??
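[For newer readers: as far as I can tell, the "kick the servers in the pants" routine is nothing more exotic than forcing repeated scheduler requests until one of them catches the ready-to-send buffer with tasks in it. If that reading is right, a throwaway loop like the one below does the same thing as mashing the Update button; the project URL, retry count, and pause between tries are all made-up values, and the 5-minute pause is only a guess at staying clear of the scheduler backoff.]

```python
import subprocess
import time

# Assumption: "kicking the servers" just means forcing repeated scheduler
# requests (the Update button) until one of them catches the ready-to-send
# buffer with work in it. The values below are placeholders to tune.
PROJECT_URL = "http://setiathome.berkeley.edu/"
TRIES = 12    # give up after roughly an hour
PAUSE = 300   # seconds between requests, to stay clear of the backoff

for attempt in range(TRIES):
    # Equivalent to clicking Update in BOINC Manager for this project.
    subprocess.run(["boinccmd", "--project", PROJECT_URL, "update"], check=True)
    time.sleep(PAUSE)
```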
Wiggo · Joined: 24 Jan 00 · Posts: 36827 · Credit: 261,360,520 · RAC: 489
I just had a look at the logs on my rigs and everything has been very fine here, Keith.
Cheers.
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
No, I haven't. Been watching NASCAR and football. It is not usual for ALL machines to need the "kick in the pants" at the same time.
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
Well, that was a chore. Unusual that all machines were synched up on their network communication schedules. Had to go through the "kick in the pants" routine sequentially. Still ended up with two machines synched up, which doesn't help when they hit the buffer at the same time for the limited 100 tasks available.
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
Still way down on the Linux machine. I got all of 3 tasks after the kick routine. Down about 200 now.
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)