Panic Mode On (75) Server problems?

Profile Link
Joined: 18 Sep 03
Posts: 834
Credit: 1,807,369
RAC: 0
Germany
Message 1239367 - Posted: 1 Jun 2012, 9:07:00 UTC - in response to Message 1239216.  
Last modified: 1 Jun 2012, 9:17:40 UTC

If everyone who holds a ten day cache dropped to something more reasonable, there'd be more WUs to share around...

No, the splitters stop when about 250,000 WUs are ready to send, so once they run out of tapes (for whatever reason), there are usually no more than 250,000 WUs to send out, regardless of what people have in their caches.

Larger caches actually force the servers to generate a larger work buffer, which is then stored in the cache of each client, so if the servers are down, the clients can still do a lot of work for the project and return it after the outage. If we all had just a one-day cache, processing for S@H would have stopped completely after about 24 hours; with larger caches we process the WUs like nothing happened. The current servers are powerful enough to catch up and restore our caches a while after the outage anyway.
ID: 1239367
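A minimal sketch of the buffering behaviour described above, assuming illustrative split and demand rates; only the ~250,000 ready-to-send ceiling comes from the post:

```python
# Minimal sketch of the behaviour described above: splitters pause when the
# ready-to-send buffer reaches a ceiling (~250,000 per the post), so once the
# tapes run out the buffer drains at the clients' demand rate no matter how
# big individual client caches are. Rates below are assumptions, not measured.

READY_TO_SEND_CEILING = 250_000   # figure quoted in the post
SPLIT_RATE_PER_HOUR = 30_000      # assumed splitter output while tapes last
DEMAND_PER_HOUR = 25_000          # assumed aggregate client fetch rate

def simulate(hours, tapes_available, ready=200_000):
    """Return the ready-to-send buffer after `hours`, stepping hour by hour."""
    for _ in range(hours):
        if tapes_available and ready < READY_TO_SEND_CEILING:
            ready += SPLIT_RATE_PER_HOUR          # splitters keep topping up
        ready = max(0, ready - DEMAND_PER_HOUR)   # clients draw work down
    return ready

if __name__ == "__main__":
    print(simulate(12, tapes_available=True))     # hovers near the ceiling
    print(simulate(12, tapes_available=False,     # drains in roughly 10 hours
                   ready=READY_TO_SEND_CEILING))
```

The second case is the point made above: once the splitters stop, the shared ready-to-send buffer is the same size for everyone; a big per-client cache only changes how long each client can keep crunching through an outage.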
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1239506 - Posted: 1 Jun 2012, 16:50:49 UTC

My single-core machine was down to one MB task left of its 2.5-day cache, but on the first scheduler contact after everything came back up, it reported all the completed tasks and got 8 new ones to fill the cache back up.

Main cruncher is AP-only and reported what it completed during the outage, and hasn't gotten any new APs yet. Just a little less than a day before I run out. Not worried, nor complaining. I'll get more work eventually.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving up)
ID: 1239506
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13715
Credit: 208,696,464
RAC: 304
Australia
Message 1239961 - Posted: 2 Jun 2012, 4:42:41 UTC


Noticed the network traffic has dropped off, so I had a look in my messages tab, and it's getting a lot of "Project has no tasks available" and "No tasks sent" messages. It's trying to get work, it's just not receiving it.
Grant
Darwin NT
ID: 1239961
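A small illustrative helper, assuming a local client with the standard boinccmd tool on the PATH, that counts how many event-log entries contain the "no work" replies quoted above (the phrases are taken from the post, not from any official message catalogue):

```python
# Illustrative helper: count event-log entries that contain the "no work"
# replies quoted in the post above. Assumes a local BOINC client and the
# standard boinccmd tool on the PATH.

import subprocess

NO_WORK_PHRASES = ("Project has no tasks available", "No tasks sent")

def count_empty_replies() -> int:
    """Return how many current event-log lines report an empty scheduler reply."""
    log = subprocess.run(
        ["boinccmd", "--get_messages"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sum(
        any(phrase in line for phrase in NO_WORK_PHRASES)
        for line in log.splitlines()
    )

if __name__ == "__main__":
    print(f"{count_empty_replies()} empty scheduler replies in the current log")
```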
kittyman Crowdfunding Project Donor * Special Project $75 donor * Special Project $250 donor
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1239968 - Posted: 2 Jun 2012, 4:50:14 UTC - in response to Message 1239961.  


Noticed the network traffic has dropped off, so I had a look in my messages tab, and it's getting a lot of "Project has no tasks available" and "No tasks sent" messages. It's trying to get work, it's just not receiving it.

The main reason the bandwidth dropped off is that the AP splitters are not running at the moment.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1239968
.clair.

Joined: 4 Nov 04
Posts: 1300
Credit: 55,390,408
RAC: 69
United Kingdom
Message 1241305 - Posted: 4 Jun 2012, 20:42:19 UTC

As of now, there is only one AP splitter running and not a lot of data for it to chew on.
Looks like it will be `that` kind of evening on this side of the pond.

`More coal in the boiler the ship is slowing down`
ID: 1241305
rob smith Crowdfunding Project Donor * Special Project $75 donor * Special Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22149
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1241312 - Posted: 4 Jun 2012, 20:56:23 UTC

...and by the time I read your post the coal in the AP splitting boiler had run out...
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1241312
Profile Dimly Lit Lightbulb 😀
Volunteer tester
Joined: 30 Aug 08
Posts: 15399
Credit: 7,423,413
RAC: 1
United Kingdom
Message 1241382 - Posted: 4 Jun 2012, 22:41:07 UTC

I missed some Astropulses being split? Oh man, I'm currently crunching the last one in my cache. Time to panic, methinks.

Member of the People Encouraging Niceness In Society club.

ID: 1241382
Profile SciManStev Crowdfunding Project Donor * Special Project $75 donor * Special Project $250 donor
Volunteer tester
Joined: 20 Jun 99
Posts: 6651
Credit: 121,090,076
RAC: 0
United States
Message 1241410 - Posted: 4 Jun 2012, 23:23:13 UTC

My rig just jumped into high priority mode and is leaving work unfinished all over the place. I can't even remember the last time that happened. I do have about 6900 wu's on board, which seems a wee high for a 5 day cache. Even with that amount, I should have no trouble crunching them.

Not really a panic, but I am doing a Spock raised eyebrow.....

Steve
Warning, addicted to SETI crunching!
Crunching as a member of GPU Users Group.
GPUUG Website
ID: 1241410
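A back-of-envelope check of that cache size; only the 6,900-task and 5-day figures come from the post, the rest is plain division:

```python
# Back-of-envelope check using the two figures from the post above
# (about 6,900 tasks on board, 5-day cache); everything else is arithmetic.

tasks_on_board = 6_900
cache_days = 5

tasks_per_day = tasks_on_board / cache_days        # 1,380 tasks/day
seconds_per_task = 24 * 3600 / tasks_per_day       # ~63 s between completions

print(f"{tasks_per_day:.0f} tasks/day, one completion every "
      f"{seconds_per_task:.0f} s on average - plausible only for a fast "
      f"multi-GPU host working through shorties")
```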
Profile Fred J. Verster
Volunteer tester
Joined: 21 Apr 04
Posts: 3252
Credit: 31,903,643
RAC: 0
Netherlands
Message 1241422 - Posted: 4 Jun 2012, 23:39:23 UTC - in response to Message 1241382.  
Last modified: 4 Jun 2012, 23:57:10 UTC

I missed some Astropulses being split? Oh man, I'm currently crunching the last one in my cache. Time to panic, methinks.


Not really a panic, but I am doing a Spock raised eyebrow.....
Steve.


Well, you'll survive; you also have 4 times the throughput of my hosts...
Since I'm still running BOINC 7.00.25, almost all SETI work, MB as well as Astropulse, is run at High Priority... MW, too. Merely cosmetic?

I've just crunched a few on ATI 5870 GPUs, 3 to 4 hours runtime:
9,779.41 s run time, 6,056.19 s CPU time, pending*, AstroPulse v6, Anonymous platform (ATI GPU)
The high CPU time is due to the high % of blanking.
*Checked: no 2nd result yet (I'm waiting for my {wing-}man).

I could do with some more AstroPulse work, though.
Panic? No way; besides, that's what back-up projects are for ;-).
ID: 1241422
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14644
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1241427 - Posted: 4 Jun 2012, 23:50:01 UTC - in response to Message 1241410.  

My rig just jumped into high priority mode and is leaving work unfinished all over the place. I can't even remember the last time that happened. I do have about 6900 wu's on board, which seems a wee high for a 5 day cache. Even with that amount, I should have no trouble crunching them.

Not really a panic, but I am doing a Spock raised eyebrow.....

Steve

Oh good, they must be splitting shorties again - I could do with a few of them.
ID: 1241427
Profile arkayn
Volunteer tester
Joined: 14 May 99
Posts: 4438
Credit: 55,006,323
RAC: 0
United States
Message 1241430 - Posted: 4 Jun 2012, 23:53:07 UTC - in response to Message 1241410.  

My rig just jumped into high priority mode and is leaving work unfinished all over the place. I can't even remember the last time that happened. I do have about 6900 wu's on board, which seems a wee high for a 5 day cache. Even with that amount, I should have no trouble crunching them.

Not really a panic, but I am doing a Spock raised eyebrow.....

Steve


I am on 2600 wu's with my single GTX560 and a 5 day cache.

ID: 1241430
Profile Fred J. Verster
Volunteer tester
Joined: 21 Apr 04
Posts: 3252
Credit: 31,903,643
RAC: 0
Netherlands
Message 1241439 - Posted: 5 Jun 2012, 0:08:09 UTC - in response to Message 1241430.  
Last modified: 5 Jun 2012, 0:24:48 UTC

My rig just jumped into high priority mode and is leaving work unfinished all over the place. I can't even remember the last time that happened. I do have about 6900 wu's on board, which seems a wee high for a 5 day cache. Even with that amount, I should have no trouble crunching them.

Not really a panic, but I am doing a Spock raised eyebrow.....

Steve


I am on 2600 wu's with my single GTX560 and a 5 day cache.


I've got SETI@home Enhanced: (1,784) MB crunched and reported, (831) pending* and waiting for a canonical result. 3 Astropulse WUs reported.
Less than 1,000 a.t.m. on my i7-2600 + 2x ATI 5870 GPUs, 12 at a time; I'd have to set a larger cache, but I'm never out of work, so I let it be.

Most of the regular posters, volunteer developers and testers have an almost 10-fold throughput, since they use CUDA (FERMI/KEPLER) and OpenCL 1.2 (ATI-AMD {SDK} 2.4)! More or less. Look at these results:
Astropulse
WU 99732719
ID: 1241439
Kevin Olley

Joined: 3 Aug 99
Posts: 906
Credit: 261,085,289
RAC: 572
United Kingdom
Message 1241450 - Posted: 5 Jun 2012, 0:28:02 UTC - in response to Message 1241410.  

My rig just jumped into high priority mode and is leaving work unfinished all over the place. I can't even remember the last time that happened. I do have about 6900 wu's on board, which seems a wee high for a 5 day cache. Even with that amount, I should have no trouble crunching them.

Not really a panic, but I am doing a Spock raised eyebrow.....

Steve


I've had the same: a run of "fast" regular WUs then a few crunchy ones upsets (increases) the estimated completion time, and with a larger cache it's enough to cause BOINC to panic.

Sometimes it will jump in and out of high priority mode. If you have got a bunch of longer-running WUs, when one finishes it will kick into high priority mode and start doing a bunch of shorties; then, as the estimated completion time drops, it will start back on the longer-running ones until it completes another one and kicks back into high priority mode again.

There does not seem to be a lot of VLARs or VHARs around, but there seems to be a lot of variation (runtime-wise) in the regular WUs.



Kevin


ID: 1241450
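The flip-flopping described above matches a simple deadline check. The sketch below is not the real BOINC scheduler (which runs a full round-robin simulation); it is only an illustration, with invented task numbers, of how a few "crunchy" tasks with inflated runtime estimates can push queued work past its deadline and tip the client into high-priority mode:

```python
# Hedged sketch, not the real BOINC scheduler: walk the queue in order,
# accumulating estimated runtimes, and flag any task that would finish after
# its deadline. Flagged tasks are the ones a client would pull forward and
# run in high-priority (earliest-deadline-first) mode. Task values invented.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    est_runtime_h: float   # current runtime estimate in hours
    deadline_h: float      # hours until the task's deadline

def would_miss_deadline(queue):
    """Return the tasks that would miss their deadline if run strictly in queue order."""
    elapsed = 0.0
    late = []
    for task in queue:
        elapsed += task.est_runtime_h
        if elapsed > task.deadline_h:
            late.append(task)
    return late

# A few crunchy tasks inflating the estimates is enough to push a later task
# past its deadline, so the client panics, clears the urgent work, the
# estimates settle down, and it drops back to normal order - the in-and-out
# behaviour described above.
queue = [
    Task("shorty", 0.2, 24),
    Task("crunchy-1", 18.0, 120),
    Task("crunchy-2", 18.0, 120),
    Task("regular", 2.0, 36),
]
print([t.name for t in would_miss_deadline(queue)])   # ['regular']
```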
.clair.

Joined: 4 Nov 04
Posts: 1300
Credit: 55,390,408
RAC: 69
United Kingdom
Message 1241493 - Posted: 5 Jun 2012, 2:20:07 UTC - in response to Message 1241427.  

My rig just jumped into high priority mode and is leaving work unfinished all over the place. I can't even remember the last time that happened. I do have about 6900 wu's on board, which seems a wee high for a 5 day cache. Even with that amount, I should have no trouble crunching them.

Not really a panic, but I am doing a Spock raised eyebrow.....

Steve

Oh good, they must be splitting shorties again - I could do with a few of them.

Yup, I am crunching shorties as well; most of the names end in VLAR.
And this 7970 munches them, yum yum :¬)
ID: 1241493
.clair.

Joined: 4 Nov 04
Posts: 1300
Credit: 55,390,408
RAC: 69
United Kingdom
Message 1241497 - Posted: 5 Jun 2012, 2:33:26 UTC - in response to Message 1241450.  

There does not seem to be a lot of VLARs or VHARs around, but there seems to be a lot of variation (runtime-wise) in the regular WUs.

VLAR, I nicked them.
Get yer ands off my vlar`s their mine all mine :¬)
ID: 1241497
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1241572 - Posted: 5 Jun 2012, 5:33:42 UTC

I picked up some APs from earlier. Was getting kind of close to an empty cache again there... but I'm good for another ~1 day or so now. Would love to fill the 10-day cache up again, though.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving up)
ID: 1241572
mckeand

Joined: 27 Jun 99
Posts: 1
Credit: 1,561,465
RAC: 0
United States
Message 1241863 - Posted: 5 Jun 2012, 22:56:41 UTC

Is this thing on?
Not getting any work; I just set this up today, 6-5-12.
If this is not the place to ask, please point me to the right place, eh?

Just visited the VLA in New Mexico on highway 60.
Most impressive and very interesting. We will go
back for the tour.

I am only 90 miles from Roswell, NM, do you want me to
go look around there? I have heard rumors.......

Peace, Bob
ID: 1241863
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1241870 - Posted: 5 Jun 2012, 23:02:37 UTC

So before the maintenance, I got 16 APs. None since it came back up, as there aren't any tapes available to split. C'mon, tapes...
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving up)
ID: 1241870
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65689
Credit: 55,293,173
RAC: 49
United States
Message 1241988 - Posted: 6 Jun 2012, 3:20:21 UTC

Let's see, I've got nearly a 4-day cache now, but of course BOINC 6.10.58 x64 isn't reporting unless I do an update with BoincTasks 1.33, and then it's 64 at a time.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1241988
Profile Slavac
Volunteer tester
Joined: 27 Apr 11
Posts: 1932
Credit: 17,952,639
RAC: 0
United States
Message 1242053 - Posted: 6 Jun 2012, 6:22:26 UTC - in response to Message 1241988.  

6.10.60, no fetch issues at all. Heck, over the weekend I forgot to check on it and found 8,000 tasks for 2 GPUs. Eek.


Executive Director GPU Users Group Inc. -
brad@gpuug.org
ID: 1242053


 