Panic Mode On (97) Server Problems?

Mike (Special Project $75 donor, Volunteer tester)
Joined: 17 Feb 01 · Posts: 34255 · Credit: 79,922,639 · RAC: 80 · Germany
Message 1667161 - Posted: 19 Apr 2015, 10:11:24 UTC - in response to Message 1667160.

> My laptop was almost dry, but then
>
> 19/04/2015 10:19:16 | SETI@home | [sched_op] NVIDIA GPU work request: 27162.41 seconds; 0.00 devices
> 19/04/2015 10:19:18 | SETI@home | Scheduler request completed: got 6 new tasks
> 19/04/2015 10:19:18 | SETI@home | [sched_op] estimated total NVIDIA GPU task duration: 27485 seconds
>
> (my TZ is UTC+1, so less than an hour ago)
>
> Work exists, work is flowing - but slowly/rarely. A problem re-filling the feeder from the database?

Yep, I think so too.
I got 11 tasks this morning, 53 last night.

Beta is sending tasks more often.


With each crime and every kindness we birth our future.
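For anyone not fluent in sched_op debug output: the client asks the scheduler for a number of seconds of work per resource, and the scheduler replies with tasks whose estimated runtimes roughly cover the request (here 27162.41 s requested, answered with 6 tasks estimated at 27485 s). A minimal sketch of that accounting; all names are hypothetical, and the real BOINC scheduler is C++ that also enforces quotas, deadlines, and per-host limits:

```python
# Sketch of the fill loop implied by the log above: assign tasks until
# their estimated runtimes cover the requested seconds of work.

def fill_request(requested_seconds: float, candidates: list[float]) -> list[float]:
    """Assign task durations until the request is covered (or we run out)."""
    assigned: list[float] = []
    total = 0.0
    for est_duration in candidates:
        if total >= requested_seconds:
            break
        assigned.append(est_duration)
        total += est_duration
    return assigned

# Numbers from the log: 27162.41 s requested, answered with 6 tasks (~27485 s).
sent = fill_request(27162.41, [4580.0] * 10)  # ~4580 s per task is made up
print(len(sent), sum(sent))                   # -> 6 27480.0
```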

WezH (Volunteer tester)
Joined: 19 Aug 99 · Posts: 576 · Credit: 67,033,957 · RAC: 95 · Finland
Message 1667162 - Posted: 19 Apr 2015, 10:12:05 UTC

Wow, my #1 host was empty of CUDA tasks, then suddenly:

Scheduler request completed: got 88 new tasks

:)
"Please keep Your signature under four lines so Internet traffic doesn't go up too much"

- In 1992 when I had my first e-mail address -

Raistmer (Volunteer developer, Volunteer tester)
Joined: 16 Jun 01 · Posts: 6325 · Credit: 106,370,077 · RAC: 121 · Russia
Message 1667166 - Posted: 19 Apr 2015, 10:15:12 UTC - in response to Message 1667160.

> It shows ~318k of tasks ready to send. Definitely work exists, but virtually impossible to get it.

If the page states 318k, then those 318k were somehow counted already: the subset of "ready to send" tasks was acquired from the whole DB. So if the feeder can't select some tasks from those 318k, something is definitely wrong server-side.
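For context, the feeder under discussion is the BOINC server daemon that copies a window of ready-to-send results from the database into a shared-memory array, from which the scheduler hands out work. A toy model of that behaviour; names and sizes are illustrative only, not BOINC's actual code:

```python
# Toy feeder: a small shared-memory slot array refilled from a much larger
# "ready to send" pool in the database. The real feeder is a C++ daemon
# reading MySQL into a shared-memory segment.

from collections import deque

ready_to_send = deque(f"task-{i}" for i in range(318_000))  # the ~318k on the SSP
SLOTS = 100                                                 # shared-memory capacity
shmem: list[str] = []

def feeder_refill() -> None:
    """Top the slot array back up from the ready-to-send pool."""
    while len(shmem) < SLOTS and ready_to_send:
        shmem.append(ready_to_send.popleft())

def scheduler_send(n: int) -> list[str]:
    """Hand up to n tasks from shared memory to a requesting host."""
    sent, shmem[:] = shmem[:n], shmem[n:]
    return sent

feeder_refill()
print(scheduler_send(6))
# If feeder_refill() stalls on its DB query, hosts see "no tasks available"
# even though 318k results sit ready to send in the database.
```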

Mike (Special Project $75 donor, Volunteer tester)
Joined: 17 Feb 01 · Posts: 34255 · Credit: 79,922,639 · RAC: 80 · Germany
Message 1667168 - Posted: 19 Apr 2015, 10:18:24 UTC

Just got another 24 tasks.

I've disabled CPU work fetch to make it a little easier.


With each crime and every kindness we birth our future.

Raistmer (Volunteer developer, Volunteer tester)
Joined: 16 Jun 01 · Posts: 6325 · Credit: 106,370,077 · RAC: 121 · Russia
Message 1667169 - Posted: 19 Apr 2015, 10:23:32 UTC
Last modified: 19 Apr 2015, 10:23:49 UTC

BTW, the current big numbers in the "ready to purge" state give a good opportunity to estimate resend overhead.
For MB we have 2.053 tasks per WU; for AP, 2.45 tasks per WU.
AP validation quality needs improvement...
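To make the arithmetic explicit: assuming the standard initial replication of 2 tasks per workunit (an assumption on my part, consistent with Raistmer's baseline), anything above 2 in the tasks-per-WU ratio is resend traffic:

```python
# Back-of-the-envelope resend overhead, assuming an initial replication
# of 2 tasks per workunit (assumed, not stated in the thread).
INITIAL_REPLICATION = 2

def resend_overhead(tasks_per_wu: float) -> float:
    """Fraction of tasks issued beyond the initial replication."""
    return (tasks_per_wu - INITIAL_REPLICATION) / INITIAL_REPLICATION

print(f"MB: {resend_overhead(2.053):.1%}")  # -> MB: 2.6%
print(f"AP: {resend_overhead(2.45):.1%}")   # -> AP: 22.5%
```

The gap between ~2.6% and ~22.5% is what motivates the remark about AP validation quality.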

kittyman (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor, Volunteer tester)
Joined: 9 Jul 00 · Posts: 51468 · Credit: 1,018,363,574 · RAC: 1,004 · United States
Message 1667192 - Posted: 19 Apr 2015, 13:05:26 UTC

Meowch...
I see as I get up this morning nothing has improved yet in the server department.
Ah well, no worries.
At least the kitties have lots of CPU work left to do.
If nothing gets righted today, hopefully tomorrow somebody is back in the lab to do some tyre kicking again.
"Freedom is just Chaos, with better lighting." Alan Dean Foster


kittyman (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor, Volunteer tester)
Joined: 9 Jul 00 · Posts: 51468 · Credit: 1,018,363,574 · RAC: 1,004 · United States
Message 1667201 - Posted: 19 Apr 2015, 13:53:47 UTC

Hmmmmm...
Something afoot, or not?
In the last hour, I noticed that server response to task list requests was suddenly very good.
I noticed that the assimilation queue was going down rather quickly.
And now I notice that I have received WUs on several rigs after getting nothing for an hour.

Would be nice if some kind of logjam has broken, but I am not 'quite' ready to say so.
"Freedom is just Chaos, with better lighting." Alan Dean Foster


ReiAyanami
Joined: 6 Dec 05 · Posts: 116 · Credit: 222,900,202 · RAC: 174 · Japan
Message 1667206 - Posted: 19 Apr 2015, 14:33:52 UTC

It's about time

[B^S] madmac (Volunteer tester)
Joined: 9 Feb 04 · Posts: 1175 · Credit: 4,754,897 · RAC: 0 · United Kingdom
Message 1667216 - Posted: 19 Apr 2015, 14:57:02 UTC

I am still getting the 'No new tasks' message, so I will just sit and wait and see if I run out of WUs.

kittyman (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor, Volunteer tester)
Joined: 9 Jul 00 · Posts: 51468 · Credit: 1,018,363,574 · RAC: 1,004 · United States
Message 1667218 - Posted: 19 Apr 2015, 15:01:17 UTC - in response to Message 1667216.

> I am still getting the 'No new tasks' message, so I will just sit and wait and see if I run out of WUs.

Hang in there....
The kitties have sniffed out 600 WUs in the last hour or so.
That is across 9 rigs.

Meowstillsniffing.
"Freedom is just Chaos, with better lighting." Alan Dean Foster


The Jedi Alliance - Ranger
Joined: 27 Dec 00 · Posts: 72 · Credit: 60,982,863 · RAC: 0 · United States
Message 1667224 - Posted: 19 Apr 2015, 15:38:34 UTC

I have two rigs that have been maintaining 200 WUs each (CPU & GPU) for over an hour. Looks like the logs have cleared from the jam.

kittyman (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor, Volunteer tester)
Joined: 9 Jul 00 · Posts: 51468 · Credit: 1,018,363,574 · RAC: 1,004 · United States
Message 1667226 - Posted: 19 Apr 2015, 15:56:17 UTC - in response to Message 1667224.
Last modified: 19 Apr 2015, 16:12:52 UTC

> I have two rigs that have been maintaining 200 WUs each (CPU & GPU) for over an hour. Looks like the logs have cleared from the jam.

It certainly does. Hope it holds.
Now up to 2322 in progress, from around 900.
(On 9 rigs.)

It was only 7:00 AM in Berkeley when I started to notice things clearing.
I really wonder if somebody was up that early on a Sunday with their workboots on, or if some running process ended on its own.

Meow?
"Freedom is just Chaos, with better lighting." Alan Dean Foster


Phil Burden
Joined: 26 Oct 00 · Posts: 264 · Credit: 22,303,899 · RAC: 0 · United Kingdom
Message 1667238 - Posted: 19 Apr 2015, 16:34:40 UTC

Looks, from the SSP, as if everyone has filled their quotas; ready-to-send is down from 350K to 30K in a few hours. Long may it continue ;-)

P.

kittyman (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor, Volunteer tester)
Joined: 9 Jul 00 · Posts: 51468 · Credit: 1,018,363,574 · RAC: 1,004 · United States
Message 1667240 - Posted: 19 Apr 2015, 16:38:05 UTC - in response to Message 1667238.
Last modified: 19 Apr 2015, 16:40:23 UTC

> Looks, from the SSP, as if everyone has filled their quotas; ready-to-send is down from 350K to 30K in a few hours. Long may it continue ;-)
>
> P.

The kitties still need about 500 to top off all the crunchers' tanks.
Almost 2600 out of 3100.
BTW... 3100 is not cheating anywhere. That is for 9 CPUs feeding 22 GPUs.

Hope the splitters can keep up once most mouths are fed.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

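kittyman's 3100 figure is consistent with a per-host server limit of 100 in-progress tasks per CPU plus 100 per GPU; the limit values are my assumption, based on SETI@home's settings of the era:

```python
# Where the 3100 cap would come from under the assumed per-host limits.
PER_CPU_LIMIT = 100  # assumed value
PER_GPU_LIMIT = 100  # assumed value

cpus, gpus = 9, 22   # 9 rigs (one CPU each) driving 22 GPUs in total
cap = cpus * PER_CPU_LIMIT + gpus * PER_GPU_LIMIT
print(cap)           # -> 3100
```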

Phil Burden
Joined: 26 Oct 00 · Posts: 264 · Credit: 22,303,899 · RAC: 0 · United Kingdom
Message 1667273 - Posted: 19 Apr 2015, 18:00:07 UTC - in response to Message 1667240.

> > Looks, from the SSP, as if everyone has filled their quotas; ready-to-send is down from 350K to 30K in a few hours. Long may it continue ;-)
> >
> > P.
>
> The kitties still need about 500 to top off all the crunchers' tanks.
> Almost 2600 out of 3100.
> BTW... 3100 is not cheating anywhere. That is for 9 CPUs feeding 22 GPUs.
>
> Hope the splitters can keep up once most mouths are fed.

I think the splitters may struggle till everyone is satiated; current RTS is down to 3K.

P.

betreger (Project Donor)
Joined: 29 Jun 99 · Posts: 11361 · Credit: 29,581,041 · RAC: 66 · United States
Message 1667703 - Posted: 20 Apr 2015, 18:33:05 UTC

It has been a day since anyone has posted in this thread, so as I crunch some MBs I shall post about my disappointment with the lack of APs.

Brent Norman (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor, Volunteer tester)
Joined: 1 Dec 99 · Posts: 2786 · Credit: 685,657,289 · RAC: 835 · Canada
Message 1667716 - Posted: 20 Apr 2015, 19:02:48 UTC

All systems seem to be working fine now, but wait 21 hours until maintenance, and it's all going to 'heck' again :(

Herb Smith (Volunteer tester)
Joined: 28 Jan 07 · Posts: 76 · Credit: 31,615,205 · RAC: 0 · United States
Message 1667784 - Posted: 20 Apr 2015, 22:10:10 UTC

It seemed the last set of problems started when they tried to split some APs. One tape that went by very quickly was all I saw, and then delivery of MBs became an issue.

JBird (Project Donor)
Joined: 3 Sep 02 · Posts: 297 · Credit: 325,260,309 · RAC: 549 · United States
Message 1667861 - Posted: 21 Apr 2015, 1:53:48 UTC - in response to Message 1667716.

I think it's pretty amazing how much cleanup is happening; I mean, 6+ million fewer files in the waiting bucket within 3 days - wow.
Bravo, Matt and company!
The only exception is that my Pending bucket is working backwards.
Seems BOINC is running LIFO instead of FIFO.
Anyone else seeing this?


OTS (Volunteer tester)
Joined: 6 Jan 08 · Posts: 369 · Credit: 20,533,537 · RAC: 0 · United States
Message 1667877 - Posted: 21 Apr 2015, 2:41:45 UTC - in response to Message 1667861.
Last modified: 21 Apr 2015, 2:50:50 UTC

> I think it's pretty amazing how much cleanup is happening; I mean, 6+ million fewer files in the waiting bucket within 3 days - wow.
> Bravo, Matt and company!
> The only exception is that my Pending bucket is working backwards.
> Seems BOINC is running LIFO instead of FIFO.
> Anyone else seeing this?

I think it all depends on when your wing person submits his/her result. My guess is that the last ones in received a second result and were moved out of pending to either valid, invalid, or inconclusive before the earlier ones received a result from your wing person.
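OTS's explanation is easy to see with a toy simulation: a result leaves "pending" only when the wingmate reports, and wingmate turnaround has nothing to do with the order in which your own tasks were issued, so the clear-out order can easily look LIFO. Purely illustrative numbers:

```python
# Toy illustration: pending results clear in wingmate-return order,
# not in the order your tasks were issued.

import random

random.seed(1)
# (issue_order, wingmate_turnaround_days) for ten hypothetical pending results
pending = [(i, random.uniform(0.5, 30.0)) for i in range(10)]

# Sort by wingmate turnaround to get the order results leave "pending":
clear_order = [issue for issue, days in sorted(pending, key=lambda p: p[1])]
print(clear_order)  # newer tasks can clear before older ones -> looks LIFO
```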