Data Chat


Profile Unixchick Project Donor
Joined: 5 Mar 12
Posts: 815
Credit: 2,361,516
RAC: 22
United States
Message 2012782 - Posted: 21 Sep 2019, 22:07:25 UTC - in response to Message 2012781.  

Is there a specific time for the crash, or is it just in the wee hours in a general sense??
As I remember it, it was a few minutes after midnight PDT, or after 07:00 UTC. I'll check a few logs.


The panic thread posts lead me to guess 10:30 UTC
ID: 2012782
Profile betreger Project Donor
Joined: 29 Jun 99
Posts: 11362
Credit: 29,581,041
RAC: 66
United States
Message 2012784 - Posted: 21 Sep 2019, 22:15:43 UTC - in response to Message 2012775.  

Of course, if Synergy crashes in the small hours, for the third week in a row, we won't risk running out of data files to split... ;-)

Thinking on this event, I already reserve a place in my favorite bar for tomorrow. LOL

There are people who are willing to set their hair on fire and howl at the moon if necessary.

I cannot contribute to the hair fire, so I will go to the bar with Juan.

The fetid stench of burnt hair and the dissonant sounds of howling at the moon most likely would cause many to drink.
ID: 2012784
Profile Unixchick Project Donor
Joined: 5 Mar 12
Posts: 815
Credit: 2,361,516
RAC: 22
United States
Message 2012824 - Posted: 22 Sep 2019, 5:53:04 UTC

More data points:
at 0:20 we had 2645 channels left.
at 5:40 we had 2357 channels left.
The crunch rate has slowed. According to these latest data points we are now doing about 1296 channels/day now that we are into the blc11s, so we will be fine until Monday morning without added files.

The big question is: will we have another server issue this Sunday morning (10:30 to 11:30 UTC, best guess)? If so, what cron job is triggering it? I hope I wake to find the machine working well despite the predictions of a problem.
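For reference, a minimal Python sketch of the arithmetic behind that estimate, assuming both snapshots are from the same morning; the figures are the ones quoted in this post and the variable names are purely illustrative:

from datetime import datetime

# Figures quoted in the post above; nothing here is live server data.
t0, channels_t0 = datetime(2019, 9, 22, 0, 20), 2645
t1, channels_t1 = datetime(2019, 9, 22, 5, 40), 2357

elapsed_hours = (t1 - t0).total_seconds() / 3600            # 5.33 hours
rate_per_day = (channels_t0 - channels_t1) / elapsed_hours * 24

# Rough runway until the remaining channels are gone at the current rate.
days_left = channels_t1 / rate_per_day

print(f"~{rate_per_day:.0f} channels/day, ~{days_left:.1f} days of data left")
# -> ~1296 channels/day, ~1.8 days of data left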
ID: 2012824
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13746
Credit: 208,696,464
RAC: 304
Australia
Message 2012842 - Posted: 22 Sep 2019, 8:39:24 UTC - in response to Message 2012824.  

more data points.
at 0:20 we had 2645 channels left.
at 5:40 we had 2357 channels left.
The crunch rate has slowed.
Actually, it's picked up, then dropped down a bit, but is likely to pick up again.
Some of those BLC11s take about 25% less time to process than the BLC34s did.
Grant
Darwin NT
ID: 2012842
Boiler Paul
Joined: 4 May 00
Posts: 232
Credit: 4,965,771
RAC: 64
United States
Message 2012844 - Posted: 22 Sep 2019, 10:54:29 UTC

Looks like it is deja vu all over again

9/22/2019 5:48:38 AM | SETI@home | Scheduler request failed: Couldn't connect to server
9/22/2019 5:48:39 AM | | Project communication failed: attempting access to reference site
9/22/2019 5:48:40 AM | | Internet access OK - project servers may be temporarily down.
ID: 2012844
Richard Haselgrove Project Donor
Volunteer tester
Joined: 4 Jul 99
Posts: 14653
Credit: 200,643,578
RAC: 874
United Kingdom
Message 2012847 - Posted: 22 Sep 2019, 11:10:28 UTC
Last modified: 22 Sep 2019, 11:28:14 UTC

Yup. I wasn't watching at the time, but my first logged failures are

22/09/2019 11:40:11 | SETI@home | Scheduler request failed: Timeout was reached
22/09/2019 11:42:08 | SETI@home | Scheduler request failed: Couldn't connect to server

(tz UTC+1)

Edit: and last successful on any machine was 10:34:17 UTC. "Once is a mistake. Twice is a pattern. Three times is a habit ..."
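A quick sketch (in Python, purely illustrative) of the timezone arithmetic for lining this log up against the other reports, using only the timestamps quoted above:

from datetime import datetime, timedelta, timezone

# The log above is stamped in local time (UTC+1); shift it to UTC so it can be
# compared with the other reports of a ~10:30 UTC failure window.
local = timezone(timedelta(hours=1))
first_failure = datetime(2019, 9, 22, 11, 40, 11, tzinfo=local)

print(first_failure.astimezone(timezone.utc).strftime("%H:%M:%S UTC"))
# -> 10:40:11 UTC, a few minutes after the last successful contact at 10:34:17 UTC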
ID: 2012847
Profile Unixchick Project Donor
Joined: 5 Mar 12
Posts: 815
Credit: 2,361,516
RAC: 22
United States
Message 2012873 - Posted: 22 Sep 2019, 14:39:16 UTC
Last modified: 22 Sep 2019, 14:39:43 UTC

Looks like I missed our unscheduled, but now becoming regular, Sunday morning outage. Someone at SETI was up late, or up early, to give us some more data too.

blc35_2bit_guppi_58643 *
ID: 2012873
juan BFP Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 2012881 - Posted: 22 Sep 2019, 15:02:24 UTC
Last modified: 22 Sep 2019, 15:04:56 UTC

Somebody is working on Sunday; they added a lot of blc35 tapes.

One question: is it only me, or do the blc34/35/36 tapes stress the GPUs a lot more while crunching?
I run on AIO hybrids, but even so I see an increase of a few degrees on the GPU while they are crunching.
ID: 2012881
Richard Haselgrove Project Donor
Volunteer tester
Joined: 4 Jul 99
Posts: 14653
Credit: 200,643,578
RAC: 874
United Kingdom
Message 2012889 - Posted: 22 Sep 2019, 16:23:24 UTC - in response to Message 2012881.  

Somebody is working on Sunday; they added a lot of blc35 tapes.

One question: is it only me, or do the blc34/35/36 tapes stress the GPUs a lot more while crunching?
I run on AIO hybrids, but even so I see an increase of a few degrees on the GPU while they are crunching.
See earlier in this thread. There's a big difference between the data recorded on day 58642 and that for day 58643 (I was trying to work out why). The new tapes are day 58643, so I'm expecting they'll be chewy.
ID: 2012889
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2012904 - Posted: 22 Sep 2019, 18:20:28 UTC - in response to Message 2012881.  

Somebody is working on Sunday; they added a lot of blc35 tapes.

One question: is it only me, or do the blc34/35/36 tapes stress the GPUs a lot more while crunching?
I run on AIO hybrids, but even so I see an increase of a few degrees on the GPU while they are crunching.

. . Most definitely yes, the temps are up and system response is poorer. Those little suckers really tie up the GPUs.

Stephen

< shrug >
ID: 2012904
Profile Unixchick Project Donor
Joined: 5 Mar 12
Posts: 815
Credit: 2,361,516
RAC: 22
United States
Message 2012912 - Posted: 22 Sep 2019, 19:17:05 UTC

results received per hour is up to 134k, which is a little higher than usual, so I was wondering how the new data is??
ID: 2012912
Richard Haselgrove Project Donor
Volunteer tester
Joined: 4 Jul 99
Posts: 14653
Credit: 200,643,578
RAC: 874
United Kingdom
Message 2012930 - Posted: 22 Sep 2019, 22:57:08 UTC - in response to Message 2012912.  

results received per hour is up to 134k, which is a little higher than usual, so I was wondering how the new data is??
Bumped a few of the early arrivals to see how they ran. Seem, as expected, to be of the longer-running variety, like the recent BLC34s.
ID: 2012930
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2012936 - Posted: 23 Sep 2019, 1:00:02 UTC - in response to Message 2012912.  

results received per hour is up to 134k, which is a little higher than usual, so I was wondering how the new data is??


. . I am seeing a few noise bombs.

Stephen

:(
ID: 2012936
Profile Unixchick Project Donor
Joined: 5 Mar 12
Posts: 815
Credit: 2,361,516
RAC: 22
United States
Message 2013064 - Posted: 24 Sep 2019, 0:19:32 UTC

We got some new data files blc34_2bit_guppi_58642*

They also gave us some Arecibo files to split this morning as well.

Plenty of data!! Yeah!
ID: 2013064
Profile Unixchick Project Donor
Joined: 5 Mar 12
Posts: 815
Credit: 2,361,516
RAC: 22
United States
Message 2013181 - Posted: 24 Sep 2019, 20:59:01 UTC

That wasn't bad for a Tuesday outage.

They seem to be still finding old Arecibo files for us to run/rerun (Multibeam only, not AP):
21jn12ac
ID: 2013181
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2013201 - Posted: 24 Sep 2019, 23:24:05 UTC - in response to Message 2013181.  

That wasn't bad for a Tuesday outage.

They seem to be still finding old Arecibo files for us to run/rerun (Multibeam only, not AP):
21jn12ac


. . It seems they are still cleaning out the closet :)

Stephen

:)
ID: 2013201
Profile Unixchick Project Donor
Joined: 5 Mar 12
Posts: 815
Credit: 2,361,516
RAC: 22
United States
Message 2013224 - Posted: 25 Sep 2019, 4:02:23 UTC

What's up with the data? We are having a hard time recovering from the outage and building up a nice RTS queue. The results-returned rate is a little high at 145k. The deletions are a bit high, which is probably what is causing the splitters to be slow.

I'm guessing everyone has a good cache, so as long as the current data isn't too noisy we should get back to normal soon enough.
ID: 2013224
Profile Wiggo
Joined: 24 Jan 00
Posts: 34862
Credit: 261,360,520
RAC: 489
Australia
Message 2013225 - Posted: 25 Sep 2019, 4:53:15 UTC - in response to Message 2013224.  

What's up with the data? We are having a hard time recovering from the outage and building up a nice RTS queue. The results-returned rate is a little high at 145k. The deletions are a bit high, which is probably what is causing the splitters to be slow.

I'm guessing everyone has a good cache, so as long as the current data isn't too noisy we should get back to normal soon enough.
Yes, both of my rigs had recovered full caches by the time I got up this morning (10 hrs ago), but a lot of the current Arecibo MB work is either VHAR or VLAR noise bombs, from what has landed here so far, which might account for that.

Cheers.
ID: 2013225
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2013227 - Posted: 25 Sep 2019, 5:26:20 UTC - in response to Message 2013224.  

What's up with the data? We are having a hard time recovering from the outage and building up a nice RTS queue. The results-returned rate is a little high at 145k. The deletions are a bit high, which is probably what is causing the splitters to be slow.

I'm guessing everyone has a good cache, so as long as the current data isn't too noisy we should get back to normal soon enough.


. . It's not that they are noisy but rather that they are quickies. These Blc34 tasks take about half the time that the Blc35s were taking :) Part of that is because a large number of them are not actually VLAR tasks.

. . So returns are high, RTS is low and there is a backlog building up at the deleters.

Stephen

:)
ID: 2013227
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2013228 - Posted: 25 Sep 2019, 5:32:29 UTC - in response to Message 2013227.  

The 21jn12ac tasks are VHARs and quickies.
Seti@Home classic workunits: 20,676, CPU time: 74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2013228