Panic Mode On (75) Server problems?


Message boards : Number crunching : Panic Mode On (75) Server problems?

tbret (Project donor)
Volunteer tester
Joined: 28 May 99
Posts: 2917
Credit: 219,783,554
RAC: 36,039
United States
Message 1237482 - Posted: 26 May 2012, 18:20:20 UTC - in response to Message 1237436.

All I can say is it is so nice to have a cache again! I can make it to Tuesday without running out of work. :D

Steve


You can say that again.

I'm still glad someone did something. Can you imagine how all of these upload and reporting issues are going to begin piling up every time there is an outage?

Ouch.

Sten-Arne
Volunteer tester
Joined: 1 Nov 08
Posts: 3924
Credit: 22,510,027
RAC: 30,269
Sweden
Message 1237504 - Posted: 26 May 2012, 18:55:32 UTC

Geeze, we're almost at 7 million MB Results out in the field. I can't even remember the last time we were at such a high number.

How high can we go before we're going to make a server or database barf all over the place?
____________
I'm only running one computer. Using 2 cores of an old Q8200 CPU for CPU tasks, and 2 cores feeding a single Mid-range GPU, ATI HD7870.
Look at the RAC folks, and ask yourselves why it beats so many multi GPU monster computers :-)

Filipe
Joined: 12 Aug 00
Posts: 112
Credit: 4,442,976
RAC: 4,023
Portugal
Message 1237507 - Posted: 26 May 2012, 19:01:32 UTC

Geeze, we're almost at 7 million MB Results out in the field


Can't we do anything about that?
____________

rob smith (Project donor)
Volunteer tester
Joined: 7 Mar 03
Posts: 8972
Credit: 66,158,511
RAC: 92,005
United Kingdom
Message 1237513 - Posted: 26 May 2012, 19:07:16 UTC - in response to Message 1237504.

Geeze, we're almost at 7 million MB Results out in the field. I can't even remember the last time we were at such a high number.

How high can we go before we're going to make a server or database barf all over the place?


I wonder if the boys and gals in the lab are deliberately trying to find out. If that's the case, just hold tight; it could be a rough ride.
____________
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?

Sten-Arne
Volunteer tester
Joined: 1 Nov 08
Posts: 3924
Credit: 22,510,027
RAC: 30,269
Sweden
Message 1237514 - Posted: 26 May 2012, 19:08:39 UTC - in response to Message 1237507.

Geeze, we're almost at 7 million MB Results out in the field


Can't we do anything about that?


We are doing something about it: we're crunching them as fast as we can. The high number is a result of the project removing the limits on how many tasks we can have in our caches.

If it works, it's fine. We're less vulnerable to a project outage, since we have big caches to crunch from, but maybe the project servers will barf at such a high number, who knows....
____________
I'm only running one computer. Using 2 cores of an old Q8200 CPU for CPU tasks, and 2 cores feeding a single Mid-range GPU, ATI HD7870.
Look at the RAC folks, and ask yourselves why it beats so many multi GPU monster computers :-)

Fred J. Verster
Volunteer tester
Joined: 21 Apr 04
Posts: 3252
Credit: 31,903,643
RAC: 20
Netherlands
Message 1237574 - Posted: 26 May 2012, 20:31:46 UTC - in response to Message 1237514.

I just uploaded a few MB WUs, but quite a lot are still waiting to be uploaded and reported.


____________

Sten-Arne
Volunteer tester
Joined: 1 Nov 08
Posts: 3924
Credit: 22,510,027
RAC: 30,269
Sweden
Message 1238870 - Posted: 29 May 2012, 16:21:02 UTC

MB Results out in the field is now at 7,001,148. It's been a very long time since we had that many MB results out in the field.

____________
I'm only running one computer. Using 2 cores of an old Q8200 CPU for CPU tasks, and 2 cores feeding a single Mid-range GPU, ATI HD7870.
Look at the RAC folks, and ask yourselves why it beats so many multi GPU monster computers :-)

Richard Haselgrove (Project donor)
Volunteer tester
Joined: 4 Jul 99
Posts: 8902
Credit: 54,785,258
RAC: 32,631
United Kingdom
Message 1238874 - Posted: 29 May 2012, 16:26:09 UTC - in response to Message 1238870.

MB Results out in the field is now at 7,001,148. It's been a very long time since we had that many MB results out in the field.

A lot of little ones went out yesterday or overnight.

Sten-Arne
Volunteer tester
Joined: 1 Nov 08
Posts: 3924
Credit: 22,510,027
RAC: 30,269
Sweden
Message 1238885 - Posted: 29 May 2012, 20:28:43 UTC
Last modified: 29 May 2012, 20:30:05 UTC

According to the news on the main page, we will be down for almost 2 days again, beginning in about 1 hour and 30 minutes. Everything will be down, a total blackout, website and all, including the BOINC site.

I hope all of you have a big enough cache to last for the down time.

See you on Thursday.....
____________
I'm only running one computer. Using 2 cores of an old Q8200 CPU for CPU tasks, and 2 cores feeding a single Mid-range GPU, ATI HD7870.
Look at the RAC folks, and ask yourselves why it beats so many multi GPU monster computers :-)

arkayn (Project donor)
Volunteer tester
Joined: 14 May 99
Posts: 3768
Credit: 48,777,915
RAC: 1,076
United States
Message 1238898 - Posted: 29 May 2012, 20:39:31 UTC - in response to Message 1238885.

According to the news on the main page, we will be down for almost 2 days again, beginning in about 1 hour and 30 minutes. Everything will be down, a total blackout, website and all, including the BOINC site.

I hope all of you have a big enough cache to last for the down time.

See you on Thursday.....


Nope, you will see me on Sunday.
____________

Grant (SSSF)
Joined: 19 Aug 99
Posts: 6023
Credit: 64,039,227
RAC: 45,391
Australia
Message 1239054 - Posted: 31 May 2012, 19:19:56 UTC - in response to Message 1234580.


I hope they bring the splitters back online soon; the Ready to Send buffer is rapidly dwindling.
____________
Grant
Darwin NT.

Horacio
Joined: 14 Jan 00
Posts: 536
Credit: 75,962,509
RAC: 214
Argentina
Message 1239139 - Posted: 31 May 2012, 21:54:00 UTC - in response to Message 1239054.


I hope they bring the splitters back online soon; the Ready to Send buffer is rapidly dwindling.


Now it's over... :(
Does anybody have a spare WU to share with me? LOL
____________

Mike (Project donor)
Volunteer tester
Joined: 17 Feb 01
Posts: 25671
Credit: 35,670,108
RAC: 24,848
Germany
Message 1239141 - Posted: 31 May 2012, 21:55:19 UTC

I have about 10 days left.

____________

andybutt
Volunteer tester
Joined: 18 Mar 03
Posts: 252
Credit: 122,855,066
RAC: 76,671
United Kingdom
Message 1239156 - Posted: 31 May 2012, 22:26:58 UTC - in response to Message 1239141.

I have a couple of days left. Can't get any more, as I'm uploading 2,500 and it's damn slow!
____________

-BeNt-
Joined: 17 Oct 99
Posts: 1234
Credit: 10,116,112
RAC: 0
United States
Message 1239185 - Posted: 31 May 2012, 23:49:07 UTC

I've got about 11 days' worth for my GPUs and 7 days for my CPU, so I should be good to go for a while. I'm sure the fellas will have the splitters back in shape and everyone filled by then... at least I hope so. ;p
____________
Traveling through space at ~67,000mph!

Slavac
Volunteer tester
Joined: 27 Apr 11
Posts: 1932
Credit: 17,952,639
RAC: 0
United States
Message 1239188 - Posted: 31 May 2012, 23:56:32 UTC - in response to Message 1239185.

MB splitters are back online. There was a bit of a problem with Gowron earlier, but as its tasks have been shifted to the donated JBOD, it turned out not to be a big deal. Gowron is now handling backup storage only.
____________


Executive Director GPU Users Group Inc. -
brad@gpuug.org

-BeNt-
Joined: 17 Oct 99
Posts: 1234
Credit: 10,116,112
RAC: 0
United States
Message 1239189 - Posted: 1 Jun 2012, 0:00:41 UTC

Yep, just noticed I've got 8 in the download queue now. Wonderful work for a Thursday, guys!
____________
Traveling through space at ~67,000mph!

Misfit
Volunteer tester
Joined: 21 Jun 01
Posts: 21790
Credit: 2,510,901
RAC: 0
United States
Message 1239202 - Posted: 1 Jun 2012, 0:33:59 UTC

5/31/2012 5:31:05 PM | SETI@home | Scheduler request completed: got 0 new tasks
5/31/2012 5:31:05 PM | SETI@home | Project has no tasks available

Here I go again. And I have no days left since my cache is set to 0.1 days cuz I like to share. Too bad more people don't.
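
For anyone wondering where that number lives: it's just the BOINC work buffer preference. A rough sketch of what the relevant part of global_prefs_override.xml in the BOINC data directory might look like (the values are only illustrative, and any other preferences in your own file would stay as they are):

<global_preferences>
  <!-- "Store at least X days of work": 0.1 days keeps only a couple of hours queued -->
  <work_buf_min_days>0.1</work_buf_min_days>
  <!-- "Store up to an additional X days of work": 0 means no extra buffer on top -->
  <work_buf_additional_days>0.0</work_buf_additional_days>
</global_preferences>

If I remember right, BOINC Manager's "Read local prefs file" option picks the change up without a restart, and the same numbers can be set from the computing preferences on the website instead.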
____________

Join BOINC Synergy!

Wedge009
Volunteer tester
Joined: 3 Apr 99
Posts: 384
Credit: 160,882,707
RAC: 146,779
Australia
Message 1239216 - Posted: 1 Jun 2012, 1:14:59 UTC

I do wonder about that, too. I keep a one-day cache, which is normally enough to cover the weekly outage. If everyone who holds a ten-day cache dropped to something more reasonable, there'd be more WUs to share around... and I imagine that WUs would be validated far more quickly too. My 'pending' WU count hasn't gone below 1,000 for weeks (it hit nearly 2,800 after coming back from the outage). Is it really that good for the project for WUs to be 'sat on' for days by hosts with huge caches before they're actually processed?

Just a thought, not having a go at anyone at all.
____________
Soli Deo Gloria

SciManStev (Project donor)
Volunteer tester
Joined: 20 Jun 99
Posts: 4951
Credit: 85,565,585
RAC: 35,324
United States
Message 1239220 - Posted: 1 Jun 2012, 1:28:55 UTC

The way I see it is that we are all sharing whatever the servers can dish out. This is a huge project, with no telling how much data will need to be crunched before we get any results. Things are definitely heading in the right direction with the SETI hardware, so I think the capability to deliver more work to us is increasing. The more work that gets sent out, the better our chances. I run a 5-day cache, and that is 5,000-6,000 WUs. I crunch every one of them with few errors or invalids. In order for my rig to do as much science as it can, I don't let it run dry if I can help it.

Steve


____________
Warning, addicted to SETI crunching!
Crunching as a member of GPU Users Group.
GPUUG Website


Copyright © 2015 University of California