Panic Mode On (85) Server Problems?

Message boards : Number crunching : Panic Mode On (85) Server Problems?
Filipe

Joined: 12 Aug 00
Posts: 218
Credit: 21,281,677
RAC: 20
Portugal
Message 1397660 - Posted: 1 Aug 2013, 18:24:23 UTC

Where are you at with the new larger MB tasks? I remember reading about a 4x bundle.

MB tasks four times larger, so we can reduce the load on the database.
ID: 1397660
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1397903 - Posted: 2 Aug 2013, 13:21:55 UTC - in response to Message 1397660.  

Where are you at with the new larger MB tasks? I remember reading about a 4x bundle.

MB tasks four times larger, so we can reduce the load on the database.

We were testing the larger-size WUs on beta, but then I think all focus shifted to the v7 apps, which give longer run times by doing more science. At the moment the focus looks to be the release of a stock Android app, which was just released on beta this week.
SETI@home classic workunits: 93,865 · CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1397903
Mal McKellar
Joined: 6 May 03
Posts: 3
Credit: 1,983,416
RAC: 5
Australia
Message 1397919 - Posted: 2 Aug 2013, 14:20:19 UTC

Hi Guys,
A quick question, please: how do I set SETI to the highest priority?
I also run Einstein, LHC, Cosmology & Milky Way.

Thanks for your help.
Mal
ID: 1397919
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1397925 - Posted: 2 Aug 2013, 14:38:28 UTC - in response to Message 1397919.  
Last modified: 2 Aug 2013, 14:39:54 UTC

It's not always a good idea to try to run too many projects on one slow machine; concentrating the resources on one or two projects is more efficient. But if you want to try...

You could change the resource share from 100 (the default) to a larger number for SETI (500 will give SETI 5x more time than the others, etc.); that will give your SETI work a larger share of the processing time.
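
(To put numbers on that: below is a minimal sketch of the resource-share arithmetic, with made-up project names and share values; BOINC aims at these fractions only as a long-term average.)

[code]
# Illustrative only: BOINC tries to give each project a fraction of
# processing time proportional to its resource share -- a long-term
# average, not an instant split.
shares = {
    "SETI@home": 500,   # hypothetical values; every project defaults to 100
    "Einstein": 100,
    "LHC": 100,
    "Cosmology": 100,
    "MilkyWay": 100,
}

total = sum(shares.values())
for project, share in shares.items():
    print(f"{project:10} {share:4d} -> {share / total:6.1%} of processing time")

# With 500 against four projects at 100, SETI gets 500/900 ~= 55.6% of the
# long-term processing time, i.e. 5x the ~11.1% each of the others gets.
[/code]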
ID: 1397925
rob smith
Volunteer moderator
Volunteer tester
Joined: 7 Mar 03
Posts: 22273
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1398016 - Posted: 2 Aug 2013, 18:05:38 UTC

Set SETI to 1000 and the others to a number below 100. And WAIT: it will take BOINC some time to sort its life out and balance the work correctly, and remember it is a long-term average share, not an "instant" share.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1398016
Mal McKellar
Joined: 6 May 03
Posts: 3
Credit: 1,983,416
RAC: 5
Australia
Message 1398148 - Posted: 2 Aug 2013, 23:20:57 UTC - in response to Message 1397925.  

Thanks Juan & Rob,
I will try that.
I agree with your comments about too many tasks - I actually only run one of the others at any one time, with SETI always on.
Many thanks,
Mal
ID: 1398148
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1398393 - Posted: 3 Aug 2013, 19:41:31 UTC

The MB splitting backlog is making pretty good progress. I would guess it shouldn't be more than 2-4 days before we get some new tapes to work with for AP. My cache is still hanging on; I've been getting 1-4 re-sends/day. Currently I have just under 3 days until idle, unless I get more before then.

Still not complaining. Just mentioning that relief should be coming soon.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving-up)
ID: 1398393
Thomas
Volunteer tester
Joined: 9 Dec 11
Posts: 1499
Credit: 1,345,576
RAC: 0
France
Message 1398570 - Posted: 4 Aug 2013, 5:40:35 UTC

1 AP (GPU) in my list of WUs! That's already something... :)
ID: 1398570
Lionel
Joined: 25 Mar 00
Posts: 680
Credit: 563,640,304
RAC: 597
Australia
Message 1398593 - Posted: 4 Aug 2013, 8:24:52 UTC - in response to Message 1397903.  

Where are you at with the new larger MB tasks? I remember reading about a 4x bundle.

MB tasks four times larger, so we can reduce the load on the database.

We were testing the larger-size WUs on beta, but then I think all focus shifted to the v7 apps, which give longer run times by doing more science.


And less credit... let's not forget that other fact, shall we?

:)

ID: 1398593
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1399008 - Posted: 5 Aug 2013, 15:42:34 UTC

Not a real panic, but does anyone have any idea why, for more than a week now, no AP WUs have been splitting? I see on the server status page that 11 tapes are already marked AP done, but there is room for a lot more than 11 tapes on the servers. Did I miss something?
ID: 1399008
rob smith
Volunteer moderator
Volunteer tester
Joined: 7 Mar 03
Posts: 22273
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1399032 - Posted: 5 Aug 2013, 16:18:40 UTC

APs will become available again when a new batch of tapes is loaded. APs are split much faster than MBs, so they get shot out more quickly, and each tape contains fewer APs than MBs because APs are bigger and have a much smaller overlap than MBs (this has been explained on quite a few occasions).
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1399032
Profile William
Volunteer tester
Joined: 14 Feb 13
Posts: 2037
Credit: 17,689,662
RAC: 0
Message 1399036 - Posted: 5 Aug 2013, 16:26:37 UTC

I believe the question was why there are only 11 tapes mounted.

You might be able to mount more, but that would create an even larger backlog of MB still to split.
Maybe the solution is to split less AP at a time, so the tapes last longer - spread the work a little more.
But in the long run, you still have to split all the MB work.

Now I know - AP crunching on GPU is far too efficient, therefore the AP WUs don't last long enough. Maybe if we throttled the speed of the AP GPU apps? ;)
A person who won't read has no advantage over one who can't read. (Mark Twain)
ID: 1399036
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1399045 - Posted: 5 Aug 2013, 16:39:58 UTC - in response to Message 1399036.  
Last modified: 5 Aug 2013, 16:41:43 UTC

I believe the question was why there are only 11 tapes mounted.

Yes, why not mount a few more tapes (not necessarily all of the possible ones at the same time) to split AP WUs?

Maybe the solution is to split less AP at a time, so the tapes last longer - spread the work a little more...

Maybe it helps; spreading the work surely helps. It's far more efficient to crunch 1 AP + 1 MB on the GPU than 2 APs or 2 MBs, on my hosts at least. By more efficient I mean more work done (more WUs crunched) in less time.

As I've said several times before, a balance must be achieved. Especially after the CreditNew mess with the credit given to the MB WUs, a lot of top hosts are hungry for new APs to crunch.
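
(Running one AP and one MB side by side on a GPU presumes the client is allowed to run two GPU tasks at once. The usual way to allow that, assuming a BOINC 7.x client that reads app_config.xml and the stock application names setiathome_v7 and astropulse_v6, is a small app_config.xml in the project directory. A hedged sketch that writes one is below; the paths and app names are assumptions, so check them against your own install.)

[code]
# Sketch only: writes an app_config.xml telling a BOINC 7.x client it may
# run two GPU tasks at once (gpu_usage 0.5) for both SETI@home apps.
# The app names and project directory are assumptions, not taken from this
# thread -- verify them in your own client before using.
from pathlib import Path

APP_CONFIG = """<app_config>
   <app>
      <name>setiathome_v7</name>
      <gpu_versions>
          <gpu_usage>0.5</gpu_usage>
          <cpu_usage>0.5</cpu_usage>
      </gpu_versions>
   </app>
   <app>
      <name>astropulse_v6</name>
      <gpu_versions>
          <gpu_usage>0.5</gpu_usage>
          <cpu_usage>0.5</cpu_usage>
      </gpu_versions>
   </app>
</app_config>
"""

# Typical Windows data directory; adjust for your own machine.
project_dir = Path(r"C:\ProgramData\BOINC\projects\setiathome.berkeley.edu")
(project_dir / "app_config.xml").write_text(APP_CONFIG)
print("wrote", project_dir / "app_config.xml")
# Then have the client re-read config files (Manager: Advanced > Read config files).
[/code]

(Note that this only lets two GPU tasks run together; BOINC itself still decides whether they end up being one AP and one MB.)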
ID: 1399045
TBar
Volunteer tester
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1399070 - Posted: 5 Aug 2013, 17:00:28 UTC - in response to Message 1399036.  

I believe the question was why there are only 11 tapes mounted.

You might be able to mount more, but that would create an even larger backlog of MB still to split.
Maybe the solution is to split less AP at a time, so the tapes last longer - spread the work a little more.
But in the long run, you still have to split all the MB work.

Now I know - AP crunching on GPU is far too efficient, therefore the AP WUs don't last long enough. Maybe if we throttled the speed of the AP GPU apps? ;)

Here's an idea: since the only GPU tasks presently working on a Mac are the ATI AstroPulse tasks, just send all GPU AstroPulse tasks to Macs. That would work for me, and it would help keep APs from disappearing for weeks at a time...
nods head...
ID: 1399070
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1399186 - Posted: 5 Aug 2013, 20:57:18 UTC

Yes, they have started to split one tape for AP.

Does anyone know how to automatically make the AP WUs start crunching as soon as they download, without waiting for the entire MB cache to clear, while leaving the MBs in the cache ready to start as soon as no AP WUs are available?


ID: 1399186
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1399249 - Posted: 6 Aug 2013, 0:19:01 UTC

Oh, sweet beautiful new APs!

I still had ~2.5 days left of my re-send cache, and got topped off to a full 10-day cache.

2013-08-05 17:38:25 SETI@home Scheduler request completed: got 4 new tasks
2013-08-05 17:43:32 SETI@home Scheduler request completed: got 1 new tasks
2013-08-05 17:48:40 SETI@home Scheduler request completed: got 2 new tasks
2013-08-05 17:53:46 SETI@home Scheduler request completed: got 1 new tasks
2013-08-05 18:04:00 SETI@home Scheduler request completed: got 2 new tasks
2013-08-05 18:09:07 SETI@home Scheduler request completed: got 2 new tasks
2013-08-05 18:24:30 SETI@home Scheduler request completed: got 1 new tasks
2013-08-05 18:29:37 SETI@home Scheduler request completed: got 2 new tasks
2013-08-05 18:34:43 SETI@home Scheduler request completed: got 2 new tasks
2013-08-05 18:39:50 SETI@home Scheduler request completed: got 2 new tasks
2013-08-05 18:44:57 SETI@home Scheduler request completed: got 5 new tasks
2013-08-05 18:50:04 SETI@home Scheduler request completed: got 1 new tasks
2013-08-05 18:55:11 SETI@home Scheduler request completed: got 3 new tasks
2013-08-05 19:00:19 SETI@home Scheduler request completed: got 3 new tasks
2013-08-05 19:05:26 SETI@home Scheduler request completed: got 6 new tasks
2013-08-05 19:10:33 SETI@home Scheduler request completed: got 1 new tasks
2013-08-05 19:20:45 SETI@home Scheduler request completed: got 1 new tasks
2013-08-05 19:46:22 SETI@home Scheduler request completed: got 5 new tasks
2013-08-05 19:51:29 SETI@home Scheduler request completed: got 6 new tasks

Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving-up)
ID: 1399249
Lionel
Joined: 25 Mar 00
Posts: 680
Credit: 563,640,304
RAC: 597
Australia
Message 1399344 - Posted: 6 Aug 2013, 4:40:23 UTC - in response to Message 1399039.  

I believe the question was why there are only 11 tapes mounted.

You might be able to mount more, but that would create an even larger backlog of MB still to split.
Maybe the solution is to split less AP at a time, so the tapes last longer - spread the work a little more.
But in the long run, you still have to split all the MB work.

Now I know - AP crunching on GPU is far too efficient, therefore the AP WUs don't last long enough. Maybe if we throttled the speed of the AP GPU apps? ;)


Nah, drop the credit for AP on GPU to 10 for a full-length AP WU, and to 5 for a similar CPU AP :-)

In that way, APs will last forever, because only the set-and-forget crunchers would crunch them.

(said the one who only runs AP)


Sten, you evil person :)
ID: 1399344
Lionel
Joined: 25 Mar 00
Posts: 680
Credit: 563,640,304
RAC: 597
Australia
Message 1399345 - Posted: 6 Aug 2013, 4:49:14 UTC - in response to Message 1399186.  

Yes, they have started to split one tape for AP.

Does anyone know how to automatically make the AP WUs start crunching as soon as they download, without waiting for the entire MB cache to clear, while leaving the MBs in the cache ready to start as soon as no AP WUs are available?

Juan, maybe someone with more knowledge can give you the answer you're after. However, in my case I just wait to see when AP is up, then I switch off v7 in preferences and abort all v7 WUs across all machines. From that point it takes a few cycles for all machines to fill up on AP. I know some people won't be happy about that, but the way I see it, there is no point in crunching a WU (v7) that gives you 65% of the run rate of a different type of WU (AP). I move all machines back to v7 only when there are no AP WUs available. In short, I suppose you could say that I now use v7 WUs as the filler between APs.
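
(For anyone who wants to script that "abort the v7 queue when APs show up" step across several machines instead of clicking through the Manager, something along these lines should work with boinccmd. It is only a sketch: the assumption that AstroPulse task names start with "ap_", the project URL, and the remote-host arguments are not taken from this thread, so check them before trusting it.)

[code]
# Sketch: abort every non-AstroPulse SETI task known to a BOINC client,
# using boinccmd. Assumptions (not from this thread): boinccmd is on PATH,
# GUI RPC access is allowed, AP task names start with "ap_", and the
# project URL below matches the one in your client.
import subprocess

PROJECT_URL = "http://setiathome.berkeley.edu/"

def seti_task_names(extra_args=()):
    """Parse `boinccmd --get_tasks` output and return SETI task names.

    extra_args can carry e.g. ("--host", "otherbox", "--passwd", "secret")
    when talking to a remote client.
    """
    out = subprocess.run(
        ["boinccmd", *extra_args, "--get_tasks"],
        capture_output=True, text=True, check=True,
    ).stdout
    names, current = [], None
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("name:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("project URL:") and current is not None:
            if "setiathome" in line:
                names.append(current)
            current = None
    return names

def abort_non_ap(extra_args=()):
    """Abort each SETI task whose name does not look like an AstroPulse WU."""
    for name in seti_task_names(extra_args):
        if not name.startswith("ap_"):   # assumed AP naming convention
            subprocess.run(
                ["boinccmd", *extra_args, "--task", PROJECT_URL, name, "abort"]
            )
            print("aborted", name)

if __name__ == "__main__":
    abort_non_ap()   # local client; pass ("--host", "otherbox") for remote ones
[/code]

(This does exactly what Lionel describes - it throws away the v7 work already downloaded - so it carries the same downside he mentions.)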

ID: 1399345
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1399347 - Posted: 6 Aug 2013, 4:55:08 UTC
Last modified: 6 Aug 2013, 4:55:28 UTC

Lionel

That's exactly why I asked; I want to do the same, but without needing to abort the v7 WUs already in the cache.

I started a new thread about that and received some suggestions, but so far no one has come up with a working solution. I believe that if I could do what I want, a lot of others would do the same.

I agree that, for now, crunching MB "pays" a lot less credit, but I don't want to simply stop crunching MB, just create a balance between both types of WU.
ID: 1399347
Lionel
Joined: 25 Mar 00
Posts: 680
Credit: 563,640,304
RAC: 597
Australia
Message 1399372 - Posted: 6 Aug 2013, 5:45:40 UTC - in response to Message 1399347.  


Juan, I understand exactly where you are coming from. The sad thing is that if the credit for v7 WUs were properly benchmarked against AP, there would be no need for what you are after, or for any of us to keep switching. Sad days, I feel.

cheers
ID: 1399372