Message boards : Number crunching : Data Chat
Unixchick · Joined: 5 Mar 12 · Posts: 815 · Credit: 2,361,516 · RAC: 22
> Is there a specific time for the crash, or is it just in the wee hours in a general sense?
As I remember it, it was a few minutes after midnight PDT, or just after 07:00 UTC. I'll check a few logs. The panic-thread posts lead me to guess 10:30 UTC.
betreger · Joined: 29 Jun 99 · Posts: 11362 · Credit: 29,581,041 · RAC: 66
> Of course, if Synergy crashes in the small hours, for the third week in a row, we won't risk running out of data files to split... ;-)
The fetid stench of burnt hair and the dissonant sounds of howling at the moon most likely would cause many to drink.
Unixchick · Joined: 5 Mar 12 · Posts: 815 · Credit: 2,361,516 · RAC: 22
More data points: at 00:20 we had 2,645 channels left; at 05:40 we had 2,357 left. The crunch rate has slowed now that we are into the blc11s; according to these latest data points we are doing about 1,296 channels/day, so we will be fine until Monday morning without added files. The big question is: will we have another server issue this Sunday morning (10:30 to 11:30 UTC, best guess)? If so, what cron job is triggering it? I hope I wake to find the machine working well despite the predictions of a problem.
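The projection above can be checked with a quick rate calculation (the sample times and channel counts are taken from the post; everything else is just arithmetic):

```python
# Channels remaining to split at two sample times (hours since midnight UTC),
# as reported in the post.
t1, n1 = 0 + 20 / 60, 2645   # 00:20
t2, n2 = 5 + 40 / 60, 2357   # 05:40

rate_per_day = (n1 - n2) / (t2 - t1) * 24   # splitting consumption rate
days_left = n2 / rate_per_day               # time until the remaining channels run out

print(round(rate_per_day))   # 1296 channels/day, matching the post
print(round(days_left, 1))   # ~1.8 days of data at that rate
```

Roughly 1.8 days from early Saturday morning lands around Monday morning, consistent with the post's estimate.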
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13746 · Credit: 208,696,464 · RAC: 304
> More data points.
Actually, it's picked up, then dropped down a bit, but is likely to pick up again. Some of those BLC11s take about 25% less time to process than the BLC34s did.
Grant
Darwin NT
Boiler Paul · Joined: 4 May 00 · Posts: 232 · Credit: 4,965,771 · RAC: 64
Looks like it is deja vu all over again:
9/22/2019 5:48:38 AM | SETI@home | Scheduler request failed: Couldn't connect to server
9/22/2019 5:48:39 AM | | Project communication failed: attempting access to reference site
9/22/2019 5:48:40 AM | | Internet access OK - project servers may be temporarily down.
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14653 · Credit: 200,643,578 · RAC: 874
Yup. I wasn't watching at the time, but my first logged failures are:
22/09/2019 11:40:11 | SETI@home | Scheduler request failed: Timeout was reached
22/09/2019 11:42:08 | SETI@home | Scheduler request failed: Couldn't connect to server
(tz UTC+1)
Edit: and the last successful contact on any machine was at 10:34:17 UTC.
"Once is a mistake. Twice is a pattern. Three times is a habit ..."
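Since the log above is in local time (UTC+1), lining it up against the guessed 10:30-11:30 UTC outage window takes a one-step conversion; a minimal sketch using a timestamp from the log:

```python
from datetime import datetime, timedelta, timezone

local = timezone(timedelta(hours=1))  # "tz UTC+1", as noted in the post

# First logged failure, parsed from the BOINC event-log line above.
failure = datetime.strptime("22/09/2019 11:40:11", "%d/%m/%Y %H:%M:%S")
failure_utc = failure.replace(tzinfo=local).astimezone(timezone.utc)

print(failure_utc.strftime("%H:%M:%S"))  # 10:40:11, inside the 10:30-11:30 UTC window
```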
Unixchick · Joined: 5 Mar 12 · Posts: 815 · Credit: 2,361,516 · RAC: 22
Looks like I missed our not-scheduled-but-turning-into-a-regular Sunday morning outage. Someone at SETI was up late, or up early, to give us some more data too: blc35_2bit_guppi_58643*
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799
Somebody is working on Sunday; they added a lot of blc35 tapes. One question: is it only me, or do the blc 34-35-36 tapes stress the GPUs a lot more while crunching? I run AIO hybrids, but even with this I see an increase of a few degrees on the GPUs while they are crunching.
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14653 · Credit: 200,643,578 · RAC: 874
> Somebody is working on Sunday, they add a lot of blc35 tapes
See earlier in this thread. There's a big difference between the data recorded on day 58642 and that for day 58643 (I was trying to work out why). The new tapes are day 58643, so I'm expecting they'll be chewy.
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628
> Somebody is working on Sunday, they add a lot of blc35 tapes
Most definitely yes, the temps are up and system response is poorer. Those little suckers really tie up the GPUs.
Stephen
< shrug >
Unixchick · Joined: 5 Mar 12 · Posts: 815 · Credit: 2,361,516 · RAC: 22
Results received per hour are up to 134k, which is a little higher than usual, so I was wondering: how is the new data?
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14653 · Credit: 200,643,578 · RAC: 874
> results received per hour is up to 134k, which is a little higher than usual, so I was wondering how the new data is??
Bumped a few of the early arrivals to see how they ran. Seem, as expected, to be of the longer-running variety, like the recent BLC34s.
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628
> results received per hour is up to 134k, which is a little higher than usual, so I was wondering how the new data is??
I am seeing a few noise bombs.
Stephen :(
Unixchick · Joined: 5 Mar 12 · Posts: 815 · Credit: 2,361,516 · RAC: 22
We got some new data files: blc34_2bit_guppi_58642*. They also gave us some Arecibo files to split this morning as well. Plenty of data! Yeah!
Unixchick · Joined: 5 Mar 12 · Posts: 815 · Credit: 2,361,516 · RAC: 22
That wasn't bad for a Tuesday outage. They seem to still be finding old Arecibo files for us to run/rerun (Multibeam only, not AP): 21jn12ac
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628
> That wasn't bad for a Tuesday outage.
It seems they are still cleaning out the closet :)
Stephen :)
Unixchick · Joined: 5 Mar 12 · Posts: 815 · Credit: 2,361,516 · RAC: 22
What's up with the data? We are having a hard time recovering from the outage and building up a nice RTS queue. The results returned are a little high at 145k. The deletions are a bit high, which is probably what is causing the splitters to be slow. I'm guessing everyone has a good cache, so as long as the current data isn't too noisy we should get back to normal soon enough.
Wiggo · Joined: 24 Jan 00 · Posts: 34862 · Credit: 261,360,520 · RAC: 489
> What's up with the data?? We are having a hard time recovering from the outage and building up a nice RTS queue. the results returned is a little high at 145k. The deletions are a bit high, which is probably what is causing the splitters to be slow.
Yes, both of my rigs had recovered full caches by the time I got up this morning (10 hrs ago), but a lot of the current Arecibo MB work is either VHAR or VLAR noise bombs from what has landed here so far, which might account for that.
Cheers.
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628
> What's up with the data?? We are having a hard time recovering from the outage and building up a nice RTS queue. the results returned is a little high at 145k. The deletions are a bit high, which is probably what is causing the splitters to be slow.
It's not that they are noisy but rather that they are quickies. These BLC34 tasks take about half the time that the BLC35s were taking :) Part of that is because a large number of them are not actually VLAR tasks.
So returns are high, RTS is low, and there is a backlog building up at the deleters.
Stephen :)
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
The 21jn12ac tasks are VHARs and quickies.
SETI@home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.