Panic Mode On (108) Server Problems?

HAL9000
Message 1903856 - Posted: 30 Nov 2017, 13:39:55 UTC - in response to Message 1903771.

Just had a look at my request backoffs.
The GPUs just finished some AP work, so their backoffs are only 20min.
The CPU backoff is 15hrs. A bit extreme IMHO.

That is just one of the reasons why I've stuck with the old BOINC version that I use; backoffs never exceed 4 hrs.

Cheers.

I'm not sure what causes the large backoffs in the new versions of BOINC, but I don't see those on SETI@home.
I believe that project backoffs are reset when there is a successful upload or other communication, so that may be related.
I set NNT until ~2 PM today on my E5-2670 system, but it looks like my other systems keep talking to the servers about every 5 min, according to the timestamps on my list of computers.
All of them currently have tasks on hand, but the R9 390X host is set to not do any GPU processing at the moment. It's only running CPDN CPU tasks and building up GPU work.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the BP6/VP6 User Group today!

kittyman
Message 1903874 - Posted: 30 Nov 2017, 14:50:25 UTC

Oh, meowmeowmeow.
Kitties are waiting to snag new work when the servers get kicked properly.
Meow.
Happy is the person who shares their life with a cat. (Or two or three or........) =^.^=

Have made friends here.
Most were cats.

TBar
Message 1903876 - Posted: 30 Nov 2017, 14:54:04 UTC - in response to Message 1903856.

This line right here is responsible for the backoffs in BOINC: https://github.com/BOINC/boinc/blob/master/client/work_fetch.cpp#L89. Now, if you change that line to:
backoff_interval *= 0;
the backoffs go away, and your chances of snagging an AP increase.
This machine has the backoffs removed: https://setiathome.berkeley.edu/results.php?hostid=8316299
This machine doesn't: https://setiathome.berkeley.edu/results.php?hostid=6796479
Kinda makes me want to build my own Mac version of BOINC, but hopefully I won't need to.
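
For context, the backoff that line implements is (as I understand it) an exponential one: each failed work request roughly doubles the wait, up to a cap, and a successful request clears it. Below is a minimal, self-contained sketch of that idea in C++. It is not the actual BOINC source; the struct, constants, and function names are assumptions for illustration only.

#include <algorithm>
#include <cstdio>

// Simplified sketch of an exponential work-fetch backoff, loosely in the
// spirit of the doubling logic in BOINC's client/work_fetch.cpp.
// Names and constants here are illustrative, not the project's actual values.
struct ResourceBackoff {
    double backoff_interval = 0;  // seconds until the next work request is allowed

    void work_fetch_failed() {
        const double min_backoff = 60;        // assumed floor: 1 minute
        const double max_backoff = 4 * 3600;  // assumed ceiling: 4 hours
        // Double the interval on each failed request, clamped to [min, max].
        // Setting the factor below to 0 (as suggested above) collapses the
        // backoff, so the client asks for work on every scheduler contact.
        backoff_interval = std::max(backoff_interval, min_backoff) * 2;
        backoff_interval = std::min(backoff_interval, max_backoff);
    }

    void work_fetch_succeeded() {
        backoff_interval = 0;  // a successful request clears the backoff
    }
};

int main() {
    ResourceBackoff gpu;
    for (int i = 1; i <= 8; ++i) {
        gpu.work_fetch_failed();
        std::printf("after failure %d: backoff = %.0f s\n", i, gpu.backoff_interval);
    }
    return 0;
}

The obvious trade-off: with the backoff zeroed, the client asks the scheduler for work on every cycle, which is exactly the kind of load the project tries to avoid during outages.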

Jeff Buck
Message 1903976 - Posted: 30 Nov 2017, 23:57:22 UTC

I've only kept one of my crunch-only machines running, because with an old dual-core CPU and a single 750Ti, a little judicious rescheduling was able to keep it from running out. But I figured even that machine must be getting close to empty, so I just checked it and found a lot of tasks in the queue. So, I scrolled back up through the Event Log and, about 2 hours back, found:

11/30/2017 1:29:25 PM | SETI@home | Sending scheduler request: To fetch work.
11/30/2017 1:29:25 PM | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
11/30/2017 1:29:27 PM | SETI@home | Scheduler request completed: got 42 new tasks
All resends, but hey, they're crunchable!

Keith Myers
Message 1903980 - Posted: 1 Dec 2017, 0:16:43 UTC
Last modified: 1 Dec 2017, 0:17:47 UTC

That was some magic trick on your host, I guess. Nothing but an occasional resend trickling in on all machines. I got a brief spurt of 13 tasks on Pipsqueek around that time, I believe: six new MB tasks and the rest AP. They are turning the screws at the project, it seems. The servers have been up and down, visible and then gone, over the last hour or so. Still no sign of the splitters coming back yet.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

Jeff Buck
Message 1903981 - Posted: 1 Dec 2017, 0:23:38 UTC - in response to Message 1903980.

Just lucky timing. It appears that this host must have reset the project and dumped his entire cache about 4 seconds before my scheduler request arrived. But I only got 42 out of the 196 that he Abandoned, so there must be a few other lucky crunchers out there, too. The irony is that, as one of my crunch-only machines, it automatically shuts down on weekdays when the peak electric rates kick in, at 4:00 PM (about 20 minutes ago), so those tasks will sit quietly for 5 hours until it starts back up again.

juan BFP
Message 1903984 - Posted: 1 Dec 2017, 0:33:58 UTC

The message changes:

Thu 30 Nov 2017 07:31:51 PM EST | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Thu 30 Nov 2017 07:31:53 PM EST | SETI@home | Scheduler request failed: Couldn't connect to server
Thu 30 Nov 2017 07:32:12 PM EST | | Project communication failed: attempting access to reference site
Thu 30 Nov 2017 07:32:14 PM EST | | Internet access OK - project servers may be temporarily down.

Something is happening on the server side. Hope it's daylight at the end of the tunnel.

Keith Myers
Message 1903985 - Posted: 1 Dec 2017, 0:37:13 UTC
Last modified: 1 Dec 2017, 1:32:25 UTC

There are 692 → 1001 → 1627 tasks sitting in the buffer currently, but the scheduler is still offline, so there's no way to access them yet. Not sure where they are coming from, since I still see no splitters splitting.

Ghan-buri-Ghan Mike
Message 1904002 - Posted: 1 Dec 2017, 3:04:32 UTC

Asteroids just ran dry, at least for the moment. Still crunching Einstein and Milky Way...
AP still trickling in.

betreger
Message 1904003 - Posted: 1 Dec 2017, 3:07:18 UTC


Data Distribution
State                    SETI@home v7 #   Astropulse #   SETI@home v8 #   As of*
Results ready to send                 0            162            3,084      0m

This is not good, but it is fun watching Einstein go up.

Jeff Buck
Message 1904010 - Posted: 1 Dec 2017, 3:21:57 UTC - in response to Message 1903985.

There are 692 → 1001 → 1627 tasks sitting in the buffer currently, but the scheduler is still offline, so there's no way to access them yet. Not sure where they are coming from, since I still see no splitters splitting.
Those increasing numbers had me scratching my head for a bit. I naturally assumed those were resends piling up, since the splitters are still off, but with nobody able to report tasks, I was puzzled as to where they were coming from. Then it dawned on me that they must be tasks that are simply timing out on the server, having passed their reporting deadlines.
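
If that's right, the mechanism would be roughly the usual BOINC server behaviour: once a result's report deadline passes without it being returned, it is treated as timed out and a replacement copy becomes available to send. A rough conceptual sketch in C++ (not the actual BOINC server code; the fields and function names here are assumptions for illustration):

#include <cstdio>
#include <ctime>
#include <vector>

// Conceptual sketch of a deadline-timeout pass, in the spirit of what the
// BOINC server's transitioner is described as doing. Fields and names are
// illustrative assumptions, not the project's actual schema or code.
struct Result {
    std::time_t report_deadline = 0;
    bool received = false;   // did the client report it back?
    bool timed_out = false;  // marked as "no reply" after the deadline
};

struct Workunit {
    std::vector<Result> results;
    int ready_to_send = 0;   // replacement copies queued for other hosts
};

void transition(Workunit& wu, std::time_t now) {
    for (Result& r : wu.results) {
        if (!r.received && !r.timed_out && now > r.report_deadline) {
            r.timed_out = true;   // the original task times out on the server
            wu.ready_to_send++;   // a resend shows up in "Results ready to send"
        }
    }
}

int main() {
    Workunit wu;
    Result r;
    r.report_deadline = std::time(nullptr) - 3600;  // deadline passed an hour ago
    wu.results.push_back(r);
    transition(wu, std::time(nullptr));
    std::printf("results ready to send: %d\n", wu.ready_to_send);
    return 0;
}

Under that reading, the buffer would keep creeping up even with the splitters off and nobody able to report.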

Keith Myers
Message 1904023 - Posted: 1 Dec 2017, 4:41:25 UTC - in response to Message 1904010.

Jeff, I think you have figured it out. I was wondering where they were coming from too. Makes sense.

Wiggo
Message 1904029 - Posted: 1 Dec 2017, 5:54:52 UTC - in response to Message 1904010.

There are 692 → 1001 → 1627 tasks sitting in the buffer currently, but the scheduler is still offline, so there's no way to access them yet. Not sure where they are coming from, since I still see no splitters splitting.
Those increasing numbers had me scratching my head for a bit. I naturally assumed those were resends piling up, since the splitters are still off, but with nobody able to report tasks, I was puzzled as to where they were coming from. Then it dawned on me that they must be tasks that are simply timing out on the server, having passed their reporting deadlines.

Those tasks are very likely ones that have not been returned by their deadlines, so they're ready to be resent. ;-)

Cheers.

Keith Myers
Message 1904044 - Posted: 1 Dec 2017, 7:59:14 UTC - in response to Message 1904029.

They aren't much good at my end here. I've had nothing but server-unavailable errors since this afternoon.

11/30/2017 23:55:10 | SETI@home | Reporting 1 completed tasks
11/30/2017 23:55:10 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
11/30/2017 23:55:13 | | Project communication failed: attempting access to reference site
11/30/2017 23:55:13 | SETI@home | Scheduler request failed: Couldn't connect to server
11/30/2017 23:55:15 | | Internet access OK - project servers may be temporarily down.

betreger
Message 1904049 - Posted: 1 Dec 2017, 8:55:32 UTC - in response to Message 1904044.

Yep:
12/1/2017 12:53:44 AM | SETI@home | Scheduler request failed: Couldn't connect to server

Tutankhamon
Message 1904061 - Posted: 1 Dec 2017, 11:10:46 UTC

It's all fine. The project will be back in January. Maybe a little earlier than that.

Stephen "Heretic"
Message 1904062 - Posted: 1 Dec 2017, 11:13:49 UTC - in response to Message 1904061.

It's all fine. The project will be back in January. Maybe a little earlier than that.


. . :)

Stephen

rob smith
Message 1904063 - Posted: 1 Dec 2017, 11:16:24 UTC

I hope it is earlier than that - my backup projects are rapidly running out of data, and even my backup backup projects are a bit short on work.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?

Richard Haselgrove
Message 1904066 - Posted: 1 Dec 2017, 11:37:55 UTC - in response to Message 1904061.

It's all fine. The project will be back in January. Maybe a little earlier than that.
Which January?

HAL9000
Message 1904085 - Posted: 1 Dec 2017, 13:32:02 UTC - in response to Message 1904066.

It's all fine. The project will be back in January. Maybe a little earlier than that.
Which January?

Yes.

But in all seriousness, I can only imagine they meant January of the year 3141.