Panic Mode On (75) Server problems?

rob smith · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22149
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1237513 - Posted: 26 May 2012, 19:07:16 UTC - in response to Message 1237504.  

Geeze, we're almost at 7 million MB Results out in the field. I can't even remember the last time we were at such a high number.

How high can we go before we're going to make a server or database barf all over the place?


I wonder if the boys and gals in the lab are deliberately trying to find out. If that's the case, just hold tight; it could be a rough ride.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1237513
Fred J. Verster
Volunteer tester
Joined: 21 Apr 04
Posts: 3252
Credit: 31,903,643
RAC: 0
Netherlands
Message 1237574 - Posted: 26 May 2012, 20:31:46 UTC - in response to Message 1237514.  

I just uploaded a few MB WUs, but there are still quite a lot waiting to upload and report.


ID: 1237574
Richard Haselgrove · Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14644
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1238874 - Posted: 29 May 2012, 16:26:09 UTC - in response to Message 1238870.  

Now, MB Results out in the field: 7,001,148. It's been a very long time since we had that many MB results out in the field.

A lot of little ones went out yesterday or overnight.
ID: 1238874
arkayn
Volunteer tester
Joined: 14 May 99
Posts: 4438
Credit: 55,006,323
RAC: 0
United States
Message 1238898 - Posted: 29 May 2012, 20:39:31 UTC - in response to Message 1238885.  

According to the news on the main page, we will be down for almost 2 days again, beginning in about 1 hour and 30 minutes. Everything will be down, a total blackout, website and all, including the BOINC site.

I hope all of you have a big enough cache to last for the down time.

See you on Thursday.....


Nope, you will see me on Sunday.

ID: 1238898
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13715
Credit: 208,696,464
RAC: 304
Australia
Message 1239054 - Posted: 31 May 2012, 19:19:56 UTC - in response to Message 1234580.  


I hope they bring the splitters back online soon; the Ready to Send buffer is rapidly dwindling.
Grant
Darwin NT
ID: 1239054
Horacio

Joined: 14 Jan 00
Posts: 536
Credit: 75,967,266
RAC: 0
Argentina
Message 1239139 - Posted: 31 May 2012, 21:54:00 UTC - in response to Message 1239054.  


I hope they bring the splitters back online soon; the Ready to Send buffer is rapidly dwindling.


Now it's over... :(
Does anybody have a spare WU to share with me? LOL
ID: 1239139
Mike · Special Project $75 donor
Volunteer tester
Joined: 17 Feb 01
Posts: 34249
Credit: 79,922,639
RAC: 80
Germany
Message 1239141 - Posted: 31 May 2012, 21:55:19 UTC

I have about 10 days left.



With each crime and every kindness we birth our future.
ID: 1239141
andybutt
Volunteer tester
Joined: 18 Mar 03
Posts: 262
Credit: 164,205,187
RAC: 516
United Kingdom
Message 1239156 - Posted: 31 May 2012, 22:26:58 UTC - in response to Message 1239141.  

I have a couple of days left. Can't get any more, as I'm uploading 2500 and it's damn slow!
ID: 1239156
-BeNt-
Joined: 17 Oct 99
Posts: 1234
Credit: 10,116,112
RAC: 0
United States
Message 1239185 - Posted: 31 May 2012, 23:49:07 UTC

I've got about 11 days' worth for my GPUs and 7 days for my CPU. So I should be good to go for a while; I'm sure the fellas will have the splitters back into shape and everyone filled by then... least I hope so. ;p
Traveling through space at ~67,000mph!
ID: 1239185
Slavac
Volunteer tester
Joined: 27 Apr 11
Posts: 1932
Credit: 17,952,639
RAC: 0
United States
Message 1239188 - Posted: 31 May 2012, 23:56:32 UTC - in response to Message 1239185.  

MB splitters are back online. There was a bit of a problem with Gowron earlier, but as its tasks have been shifted to the donated JBOD, it turned out not to be a big deal. Gowron is now handling backup storage only.


Executive Director GPU Users Group Inc. -
brad@gpuug.org
ID: 1239188
-BeNt-
Joined: 17 Oct 99
Posts: 1234
Credit: 10,116,112
RAC: 0
United States
Message 1239189 - Posted: 1 Jun 2012, 0:00:41 UTC

Yep, just noticed I've got 8 in the download queue now. Wonderful work for a Thursday, guys!
Traveling through space at ~67,000mph!
ID: 1239189
Misfit
Volunteer tester
Joined: 21 Jun 01
Posts: 21804
Credit: 2,815,091
RAC: 0
United States
Message 1239202 - Posted: 1 Jun 2012, 0:33:59 UTC

5/31/2012 5:31:05 PM | SETI@home | Scheduler request completed: got 0 new tasks
5/31/2012 5:31:05 PM | SETI@home | Project has no tasks available

Here I go again. And I have no days left, since my cache is set to 0.1 days cuz I like to share. Too bad more people don't.
me@rescam.org
ID: 1239202
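
For context, the cache Misfit mentions is the BOINC client's work buffer. A minimal sketch of a global_prefs_override.xml that sets it (the file name and elements are standard BOINC; the values just mirror the post above, not a recommendation). Drop it in the BOINC data directory and use the manager's 'Read local prefs file':

    <global_preferences>
        <!-- keep at least this many days of work on hand -->
        <work_buf_min_days>0.1</work_buf_min_days>
        <!-- extra buffer above the minimum; 0 keeps the cache lean -->
        <work_buf_additional_days>0.0</work_buf_additional_days>
    </global_preferences>
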
Wedge009
Volunteer tester
Joined: 3 Apr 99
Posts: 451
Credit: 431,396,357
RAC: 553
Australia
Message 1239216 - Posted: 1 Jun 2012, 1:14:59 UTC

I do wonder about that, too. I keep a one-day cache, which is normally enough to cover the weekly outage. If everyone who holds a ten-day cache dropped to something more reasonable, there'd be more WUs to share around... and I imagine that WUs would be validated far more quickly too. My 'pending' WU count hasn't gone below 1000 for weeks (it hit nearly 2800 after coming back from the outage). Is it really that good for the project for WUs to be sat on for days by hosts with huge caches before they're actually processed?

Just a thought, not having a go at anyone at all.
Soli Deo Gloria
ID: 1239216
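
Wedge009's validation point can be made concrete with a toy model: assuming the usual two-result quorum, a task stays pending until the slower wingmate reports, so the larger cache in the pair dominates the wait. A rough Python sketch (an illustration with assumed uniform return times, not how the validator actually behaves):

    import random

    def expected_pending_days(cache_a_days, cache_b_days, trials=100_000):
        # Average days until both results of a 2-result quorum are back,
        # assuming each host returns at a random point within its cache window.
        total = 0.0
        for _ in range(trials):
            a = random.uniform(0, cache_a_days)
            b = random.uniform(0, cache_b_days)
            total += max(a, b)
        return total / trials

    print(expected_pending_days(1, 1))    # two 1-day caches: ~0.67 days pending
    print(expected_pending_days(1, 10))   # paired with a 10-day cache: ~5 days
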
SciManStev · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 20 Jun 99
Posts: 6651
Credit: 121,090,076
RAC: 0
United States
Message 1239220 - Posted: 1 Jun 2012, 1:28:55 UTC

The way I see it is that we are all sharing whatever the servers can dish out. This is a huge project, with no telling how much data will need to be crunched before we get any results. Things are definitely heading in the right direction with SETI hardware, so I think the capability to deliver more work to us is increasing. The more work that gets sent out, the better our chances. I run a 5-day cache, and that is 5-6 thousand WUs. I crunch every one of them with few errors or invalids. In order for my rig to do as much science as it can, I don't let it run dry if I can help it.

Steve


Warning, addicted to SETI crunching!
Crunching as a member of GPU Users Group.
GPUUG Website
ID: 1239220
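
Steve's "5-day cache is 5-6 thousand WUs" is just cache days times throughput; the back-of-the-envelope below uses an assumed completion rate, not his actual host stats:

    # cache size in tasks = cache days x tasks completed per day
    cache_days = 5
    tasks_per_day = 1100                 # assumed rate for a fast multi-GPU rig
    print(cache_days * tasks_per_day)    # 5500, i.e. "5-6 thousand WUs"
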
Wedge009
Volunteer tester
Joined: 3 Apr 99
Posts: 451
Credit: 431,396,357
RAC: 553
Australia
Message 1239223 - Posted: 1 Jun 2012, 1:34:59 UTC

I suppose I was just considering Folding@home's philosophy of timeliness of results vs the bulk of work that can be done. I appreciate the improvements in the hardware with regards to WU generation/transfer/validation/etc, but I was thinking that with smaller caches, WUs (from a global project perspective) could be processed and validated faster, freeing resources in the databases and other server processes and generally allowing S@h to run more efficiently.

I admit I don't follow the forums here that thoroughly (some users have been less than friendly), so I don't know if this is an idea that's been raised previously.
Soli Deo Gloria
ID: 1239223
tbret
Volunteer tester
Joined: 28 May 99
Posts: 3380
Credit: 296,162,071
RAC: 40
United States
Message 1239269 - Posted: 1 Jun 2012, 3:15:45 UTC - in response to Message 1239223.  

I suppose I was just considering Folding@home's philosophy of timeliness of results vs the bulk of work that can be done. I appreciate the improvements in the hardware with regards to WU generation/transfer/validation/etc, but I was thinking that with smaller caches, WUs (from a global project perspective) could be processed and validated faster, freeing resources in the databases and other server processes and generally allowing S@h to run more efficiently.

I admit I don't follow the forums here that thoroughly (some users have been less than friendly), so I don't know if this is an idea that's been raised previously.


You ought to hang out more!

Yeah, it's talked about from time to time. I'd rather have the work I turn in error out or validate quickly, but... I kind of like to have work to do when there is a power outage or a RAID corruption or something, too.

Just over this little outage I had a computer run out of CPU work (but for some reason it had plenty of GPU work in reserve).

And for some reason we don't seem to always go FIFO, so the last thing you download may be the next thing you crunch while older tasks sit.

So... what do you do?

I'm trying a happy 5-day medium, but apparently this whole misreporting issue is messing with the estimated run times, so what I set isn't necessarily what I get.
ID: 1239269
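
tbret's last point, that setting 5 days doesn't guarantee a 5-day cache, follows from the request being time-based: the client asks for enough tasks to fill its buffer at the estimated runtime, so a low estimate over-fills it. Schematically, with assumed numbers (a single-core simplification, not the client's real logic):

    # Why bad runtime estimates distort a time-based cache (numbers assumed).
    buf_days = 5
    est_runtime_s = 600                   # client's per-task runtime estimate
    true_runtime_s = 1200                 # what tasks actually take
    tasks_requested = buf_days * 86400 / est_runtime_s
    real_cache_days = tasks_requested * true_runtime_s / 86400
    print(tasks_requested, real_cache_days)   # 720 tasks -> a 10-day cache, not 5
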
Misfit
Volunteer tester
Joined: 21 Jun 01
Posts: 21804
Credit: 2,815,091
RAC: 0
United States
Message 1239290 - Posted: 1 Jun 2012, 5:07:20 UTC - in response to Message 1239220.  

The difference is you aren't bragging about having several days' worth available to crunch while complaining that your uploads won't go through. From the people who do, those are not words of encouragement to those of us who are running on empty.
me@rescam.org
ID: 1239290
bill

Joined: 16 Jun 99
Posts: 861
Credit: 29,352,955
RAC: 0
United States
Message 1239327 - Posted: 1 Jun 2012, 7:17:31 UTC - in response to Message 1239290.  

You choose to run on empty. Why should that be a concern to anyone else?
ID: 1239327
kittyman · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1239329 - Posted: 1 Jun 2012, 7:22:40 UTC

Come now, kitties......

The subject of cache sizes has long been debated.
The fact remains that it is up to individual users to choose what cache size they wish to maintain, within the choices made available by this project.

Let's not stir the pot please.


"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1239329
Lionel

Joined: 25 Mar 00
Posts: 680
Credit: 563,640,304
RAC: 597
Australia
Message 1239346 - Posted: 1 Jun 2012, 8:05:05 UTC - in response to Message 1239329.  


Getting Scheduler request failed: HTTP Internal Server Error

I don't believe that this problem is fixed ...

cheers
ID: 1239346
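
On failures like Lionel's, the client is designed to back off and retry rather than hammer the scheduler. A generic Python sketch of the exponential-backoff-with-jitter pattern (illustrative only; the delays, names, and structure are assumptions, not BOINC's actual code):

    import random, time

    def retry_with_backoff(rpc, max_delay_s=4 * 3600):
        delay = 60.0                                 # start with a one-minute wait
        while True:
            try:
                return rpc()                         # attempt the scheduler request
            except IOError as err:                   # e.g. HTTP Internal Server Error
                wait = random.uniform(delay / 2, delay)  # jitter spreads clients out
                print(f"request failed ({err}); retrying in {wait:.0f}s")
                time.sleep(wait)
                delay = min(delay * 2, max_delay_s)  # double, capped at the ceiling
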