Panic Mode On (81) Server Problems?

Message boards : Number crunching : Panic Mode On (81) Server Problems?

juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 5847
Credit: 330,552,408
RAC: 7,828
Panama
Message 1334416 - Posted: 3 Feb 2013, 21:20:05 UTC - in response to Message 1334413.  
Last modified: 3 Feb 2013, 21:20:28 UTC

They don't normally swap tapes until they have been split for both MB & AP, so we could have some wait given the current rate of (not) splitting the APs :-(

The question is, is that done automatically (a wait of a few hours) or manually (a wait of almost a day)?

ID: 1334416 · Report as offensive
Profile Mike
Volunteer tester
Joined: 17 Feb 01
Posts: 29579
Credit: 49,101,949
RAC: 17,180
Germany
Message 1334423 - Posted: 3 Feb 2013, 21:26:42 UTC - in response to Message 1334416.  

They don't normally swap tapes until they have been split for both MB & AP, so we could have some wait given the current rate of (not) splitting the APs :-(

The question is, is that done automatically (a wait of a few hours) or manually (a wait of almost a day)?


If some are still in the pipe, it happens automatically.
If not..........

With each crime and every kindness we birth our future.

ID: 1334423 · Report as offensive
kittyman (Project Donor)
Volunteer tester
Joined: 9 Jul 00
Posts: 45918
Credit: 815,238,506
RAC: 124,870
United States
Message 1334424 - Posted: 3 Feb 2013, 21:29:36 UTC

My faster rigs are already running out of GPU tasks. The slower ones will hang in there for some hours yet.
My guess is we won't see any new MB work until maybe tomorrow, but I can still be hopeful that will change.


Cats.....what more does one need?

Have made friends in this life.
Most were cats.

ID: 1334424 · Report as offensive
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 5847
Credit: 330,552,408
RAC: 7,828
Panama
Message 1334429 - Posted: 3 Feb 2013, 21:41:23 UTC - in response to Message 1334424.  
Last modified: 3 Feb 2013, 21:42:20 UTC

I'm running dry on SETI too; 100 WU on a 2x690 host takes less than 2 hr to finish. I really hate the 100 WU limit, they need to change it soon, at least to 100 WU per GPU.
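For scale, a rough back-of-envelope sketch of Juan's figures (treating a 2x690 host as four GPUs, since each GTX 690 is a dual-GPU board; all numbers are his estimates, not measurements):

```python
# All figures are Juan's estimates from the post, not measurements.
tasks_cached = 100        # current per-host GPU task limit
hours_to_drain = 2.0      # "takes less than 2hr to finish"
gpus = 4                  # two GTX 690 cards, each a dual-GPU board

drain_rate = tasks_cached / hours_to_drain  # tasks consumed per hour
per_gpu_cache = tasks_cached / gpus         # tasks per GPU under the per-host cap

print(drain_rate)     # 50.0 -- the host needs ~50 fresh tasks/hour to stay busy
print(per_gpu_cache)  # 25.0 -- a per-GPU limit of 100 would quadruple the cache
```

So a per-host cap of 100 leaves a fast multi-GPU rig with under two hours of buffer, which is why a per-GPU limit keeps coming up.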


ID: 1334429 · Report as offensive
rob smith (Project Donor)
Volunteer tester

Joined: 7 Mar 03
Posts: 13336
Credit: 154,761,791
RAC: 117,872
United Kingdom
Message 1334432 - Posted: 3 Feb 2013, 21:45:44 UTC

Before doing that they need to sort out the ability to deliver the >100 work units per hour that many of the more modern crunchers manage.
(For the last couple of weeks my GTX690-based cruncher has been attacking Malaria, Einstein, Beta, but not the number of S@H I would like...)


Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?

ID: 1334432 · Report as offensive
kittyman (Project Donor)
Volunteer tester
Joined: 9 Jul 00
Posts: 45918
Credit: 815,238,506
RAC: 124,870
United States
Message 1334433 - Posted: 3 Feb 2013, 21:48:27 UTC - in response to Message 1334432.  

Before doing that they need to sort out the ability to deliver the >100 work units per hour that many of the more modern crunchers manage.
(For the last couple of weeks my GTX690-based cruncher has been attacking Malaria, Einstein, Beta, but not the number of S@H I would like...)

Well, I do find it rather frustrating when I can't cache enough work to ride out anything much more than a few hours of server problems on my better crunchers.
Cats.....what more does one need?

Have made friends in this life.
Most were cats.

ID: 1334433 · Report as offensive
ExchangeMan
Volunteer tester

Joined: 9 Jan 00
Posts: 115
Credit: 153,144,010
RAC: 5,158
United States
Message 1334454 - Posted: 3 Feb 2013, 23:17:22 UTC - in response to Message 1334392.  

Actually, I just received 3 GPU work units. Someone must be working on it at the lab. "Results ready to send" now shows 1.

just resends from WUs that missed their deadline...

You can't be sure of that, his computers are hidden.

I've just received GPU workunits too, they aren't resends, they just happen to be Astropulse GPU WUs, but GPU WUs nonetheless.

Claggy

Thanks Claggy. Ya, they weren't resends.

Just curious ..

Which endings did the WUs have? x_0, x_1, or x_2, x_3...?


* Best regards! :-) * Sutaru Tsureku, team seti.international founder. * Optimize your PC for higher RAC. * SETI@home needs your help. *

Here's the name of one of the work units. The others had similar long names. No x_0 or x_1 or x_2 or x_3.

28no12ad.22646.124880.140733193388038.10.113

ID: 1334454 · Report as offensive
Claggy (Project Donor)
Volunteer tester

Joined: 5 Jul 99
Posts: 4623
Credit: 46,348,740
RAC: 2,934
United Kingdom
Message 1334458 - Posted: 4 Feb 2013, 0:06:00 UTC - in response to Message 1334454.  

Here's the name of one of the work units. The others had similar long names. No x_0 or x_1 or x_2 or x_3.

28no12ad.22646.124880.140733193388038.10.113

That's the workunit name; what about the task name? There are multiple tasks to a workunit, from 2 to 10.

Claggy
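Claggy's distinction can be sketched in code. This is a rough illustration of the BOINC naming convention (a task, or "result", name is the workunit name plus an "_N" replica suffix); the "_2" suffix appended to the workunit quoted above is hypothetical:

```python
def split_task_name(task_name: str):
    """Split a BOINC task name into (workunit_name, replica_index).

    A BOINC task (result) name is the workunit name plus an "_N"
    suffix: _0 and _1 for the initial pair, _2 and up for resends
    issued after a failure or missed deadline.
    """
    workunit, _, suffix = task_name.rpartition("_")
    return workunit, int(suffix)

# Hypothetical task name built from the workunit quoted above:
wu, n = split_task_name("28no12ad.22646.124880.140733193388038.10.113_2")
print(n)  # 2 -- a replica index of 2 or more usually indicates a resend
```

So the workunit name alone can't tell you whether a task was a resend; the suffix on the task name can.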

ID: 1334458 · Report as offensive
ExchangeMan
Volunteer tester

Joined: 9 Jan 00
Posts: 115
Credit: 153,144,010
RAC: 5,158
United States
Message 1334460 - Posted: 4 Feb 2013, 0:14:44 UTC - in response to Message 1334458.  

Here's the name of one of the work units. The others had similar long names. No x_0 or x_1 or x_2 or x_3.

28no12ad.22646.124880.140733193388038.10.113

That's the workunit name; what about the task name? There are multiple tasks to a workunit, from 2 to 10.

Claggy

OK, this should be what you're looking for.

1140189392

ID: 1334460 · Report as offensive
Profile arkayn (Project Donor)
Volunteer tester
Joined: 14 May 99
Posts: 4097
Credit: 51,576,341
RAC: 968
United States
Message 1334483 - Posted: 4 Feb 2013, 1:15:43 UTC - in response to Message 1334460.  


That's the Workunit name, what about the task name? There are Multiple tasks to a workunit, from 2 to 10.

Claggy

OK, this should be what you're looking for.

1140189392


I'm going to guess that this is your computer.
http://setiathome.berkeley.edu/show_host_detail.php?hostid=6896017

ID: 1334483 · Report as offensive
ExchangeMan
Volunteer tester

Joined: 9 Jan 00
Posts: 115
Credit: 153,144,010
RAC: 5,158
United States
Message 1334484 - Posted: 4 Feb 2013, 1:23:21 UTC - in response to Message 1334483.  


I'm going to guess that this is your computer.
http://setiathome.berkeley.edu/show_host_detail.php?hostid=6896017

Yep. I'm still in the process of putting this cruncher together and need more parts. It's running with the mobo sitting on a wooden table and fans blowing on it.

ID: 1334484 · Report as offensive
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 2871
Credit: 10,621,842
RAC: 321
United States
Message 1334545 - Posted: 4 Feb 2013, 8:19:32 UTC

Well, I did just notice that with 5 AP splitters running and no new MB work being produced... Cricket shows that we are not at max capacity.

So if we are MB-only, we run about 60-70 Mbit/s, and if we are AP-only, we run about 60-70 Mbit/s. If we could get our pipe increased to 150 Mbit/s (my understanding is that it is connected via a gigabit link, but we're only allowed to use 100 Mbit/s of it), that should pretty much "fix" the throughput problem.
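A quick sanity check of those figures (all values are the rough estimates from the post above, not measurements):

```python
# Figures quoted above, in Mbit/s; ranges are rough observations.
mb_only = (60, 70)     # throughput with only MultiBeam work flowing
ap_only = (60, 70)     # throughput with only Astropulse work flowing
current_cap = 100      # share of the gigabit link the project may use
proposed_cap = 150

combined = (mb_only[0] + ap_only[0], mb_only[1] + ap_only[1])
print(combined)                     # (120, 140) -- both streams together exceed 100
print(combined[1] <= proposed_cap)  # True -- 150 Mbit/s would cover the peak
```

Even the low end of the combined demand exceeds the current 100 Mbit/s allowance, which is why the two workload types end up starving each other.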


Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving up)

ID: 1334545 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 7486
Credit: 91,105,202
RAC: 46,522
Australia
Message 1334556 - Posted: 4 Feb 2013, 9:51:15 UTC - in response to Message 1334545.  

If we could get our pipe increased to 150 Mbit/s (my understanding is that it is connected via a gigabit link, but we're only allowed to use 100 Mbit/s of it), that should pretty much "fix" the throughput problem.

It'd be good to push for a minimum of 250Mb/s. No more traffic jams after outages, and it would give room for growth. Be a bit of a pain to finally get an extra 50Mb/s, only to have to argue for it all over again in a matter of months.
Grant
Darwin NT

ID: 1334556 · Report as offensive
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 5847
Credit: 330,552,408
RAC: 7,828
Panama
Message 1334560 - Posted: 4 Feb 2013, 10:23:55 UTC

Don't forget the limits: MB uses 70% of the available BW with them, or more without.

BTW, still running out of new WU; it seems the change of tapes is done manually and we need to wait for someone to arrive at the lab. Other projects say "thanks for that".


ID: 1334560 · Report as offensive
Profile Vipin Palazhi
Joined: 29 Feb 08
Posts: 276
Credit: 152,417,283
RAC: 39,675
India
Message 1334590 - Posted: 4 Feb 2013, 14:42:28 UTC

I have been crunching for a few years now without ever actually wondering how long it takes for the discs to be recorded and how/when they get transported from Arecibo to Berkeley.

ID: 1334590 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 7486
Credit: 91,105,202
RAC: 46,522
Australia
Message 1334649 - Posted: 4 Feb 2013, 17:56:56 UTC - in response to Message 1334590.  
Last modified: 4 Feb 2013, 18:21:00 UTC

Work is being split again, unfortunately most Scheduler requests result in "Couldn't connect to server" messages.
*deep sigh*


EDIT- make that all requests. The Scheduler is borked.


Grant
Darwin NT

ID: 1334649 · Report as offensive
kittyman (Project Donor)
Volunteer tester
Joined: 9 Jul 00
Posts: 45918
Credit: 815,238,506
RAC: 124,870
United States
Message 1334655 - Posted: 4 Feb 2013, 18:26:46 UTC - in response to Message 1334649.  

Work is being split again, unfortunately most Scheduler requests result in "Couldn't connect to server" messages.
*deep sigh*


EDIT- make that all requests. The Scheduler is borked.

Not ALL requests...
Very hard to get through right now, but the kitties have managed on a few rigs, and have started to get just a bit of GPU work here and there.
But, yes, the scheduler is tied pretty tight right now.
Cats.....what more does one need?

Have made friends in this life.
Most were cats.

ID: 1334655 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 7486
Credit: 91,105,202
RAC: 46,522
Australia
Message 1334658 - Posted: 4 Feb 2013, 18:37:02 UTC - in response to Message 1334655.  

Work is being split again, unfortunately most Scheduler requests result in "Couldn't connect to server" messages.
*deep sigh*


EDIT- make that all requests. The Scheduler is borked.

Not ALL requests...
Very hard to get through right now, but the kitties have managed on a few rigs, and have started to get just a bit of GPU work here and there.
But, yes, the scheduler is tied pretty tight right now.

Must be the distance thing again then.
45min & not a single request has gotten through on either of my systems.
Grant
Darwin NT

ID: 1334658 · Report as offensive
kittyman (Project Donor)
Volunteer tester
Joined: 9 Jul 00
Posts: 45918
Credit: 815,238,506
RAC: 124,870
United States
Message 1334659 - Posted: 4 Feb 2013, 18:42:06 UTC - in response to Message 1334658.  

Work is being split again, unfortunately most Scheduler requests result in "Couldn't connect to server" messages.
*deep sigh*


EDIT- make that all requests. The Scheduler is borked.

Not ALL requests...
Very hard to get through right now, but the kitties have managed on a few rigs, and have started to get just a bit of GPU work here and there.
But, yes, the scheduler is tied pretty tight right now.

Must be the distance thing again then.
45min & not a single request has gotten through on either of my systems.

Well, I have 9 rigs making the attempt, so I would have a few more chances to connect. I tried the update button on a couple of them, but it's pretty useless to do so. Just have to let the kitties keep trying whilst I am gone to work and count on random chance to have some work get through.
Cats.....what more does one need?

Have made friends in this life.
Most were cats.

ID: 1334659 · Report as offensive
Profile Link
Joined: 18 Sep 03
Posts: 805
Credit: 1,678,562
RAC: 22
Germany
Message 1334663 - Posted: 4 Feb 2013, 18:56:39 UTC

Hmm... so they once again dumped lots of unsplit AP. Is AP not worth crunching, or how should we understand that?


.

ID: 1334663 · Report as offensive



©2016 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.