Questions on Servers and WU's


Terror Australis
Volunteer tester
Joined: 14 Feb 04
Posts: 1711
Credit: 204,759,556
RAC: 24,072
Australia
Message 913154 - Posted: 2 Jul 2009, 2:19:08 UTC

According to the Server status page, as at 0200 UTC there are 1.4 million MB and nearly 5k AP WUs available for download. The Cricket graphs show the system is nowhere near maxed out and the Server status page has a "green board".

Q1) Why then do I keep getting "No jobs available" messages on my boxes? Where and what is the problem?

Q2) One of my crunchers is out of CPU units but still has 60-odd CUDA units waiting to go; this is approximately 36 hours of work. Is there any way to set the client to "crunch on first available processor", or any way to convert CUDA units to crunch on the CPU?

I realise these are old questions, but I searched the fora and couldn't find the answers.

TIA
Brodo

Nobodi
Send message
Joined: 30 Nov 03
Posts: 13
Credit: 12,813,443
RAC: 0
Australia
Message 913160 - Posted: 2 Jul 2009, 3:06:02 UTC - in response to Message 913154.

I am seeing similar behaviour: 1.4 million MB and nearly 5.7k AP WUs available for download.

My settings:
Run only the selected applications
SETI@home Enhanced: no
Astropulse: yes
Astropulse v5: yes
If no work for selected applications is available, accept work from other applications? yes

When there was no AP work it was sending Enhanced work; at present it is not sending any of that or the AP work.

1mp0£173
Volunteer tester
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 913162 - Posted: 2 Jul 2009, 3:20:04 UTC - in response to Message 913154.
Last modified: 2 Jul 2009, 3:21:01 UTC

Q1) Why then do I keep getting "No jobs available" messages on my boxes? Where and what is the problem?

The count is the total number of work units split, but not yet assigned.

The splitters split the work.

The feeder grabs 100 work units and those are available to the scheduler.

If the feeder queue is empty (that is, the scheduler has assigned the whole 100-work-unit batch), it will report "no work available" until the feeder grabs another 100 work units.

Among other things, this keeps the download server from getting 100,000 requests to download all at once.
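
In very rough pseudo-Python, the flow is something like this (a toy sketch of the idea only, not the real BOINC server code, which keeps its batch in a shared-memory segment):

import collections

FEEDER_SLOTS = 100

ready_to_send = collections.deque()  # split but not yet assigned (the big number on the status page)
feeder_cache = collections.deque()   # the small batch the scheduler can actually hand out

def splitter(count):
    # Pretend to split 'count' new work units.
    for i in range(count):
        ready_to_send.append("wu_%d" % i)

def feeder():
    # Refill the scheduler's cache up to FEEDER_SLOTS entries.
    while len(feeder_cache) < FEEDER_SLOTS and ready_to_send:
        feeder_cache.append(ready_to_send.popleft())

def scheduler_request():
    # One host asking for work.
    if not feeder_cache:
        return "Project has no jobs available"  # the message your client logs
    return feeder_cache.popleft()

If hosts drain those 100 slots faster than the feeder refills them, every request that lands in the gap gets "no jobs available" back, even with a million results sitting behind the feeder.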

Zeus Fab3r
Joined: 17 Jan 01
Posts: 642
Credit: 95,094,730
RAC: 131,539
Serbia
Message 913171 - Posted: 2 Jul 2009, 3:51:48 UTC - in response to Message 913154.


Q2) One of my crunchers is out of CPU units but still has 60-odd CUDA units waiting to go; this is approximately 36 hours of work. Is there any way to set the client to "crunch on first available processor", or any way to convert CUDA units to crunch on the CPU?


There is a thread at Lunatics about rebranding 6.03 (CPU) WUs into 6.08 (GPU) WUs and vice versa.

GPU rebranding

There are a couple of solutions, but I'm using ReSchedule 1.7 because it's standalone and easy to use.
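
As I understand it, what those tools automate is editing client_state.xml (with BOINC stopped) so that a queued result points at the other application version. Very roughly, and assuming the result entries carry <version_num> and <plan_class> tags the way my 6.x client's do, the CUDA-to-CPU direction looks something like this sketch (the file names are illustrative; back up client_state.xml, and on a real host use the Lunatics tools rather than anything like this):

import xml.etree.ElementTree as ET

tree = ET.parse("client_state.xml")              # path is illustrative
for result in tree.iter("result"):
    plan = result.find("plan_class")
    ver = result.find("version_num")
    if plan is not None and ver is not None and plan.text == "cuda":
        plan.text = ""        # empty plan_class = plain CPU app
        ver.text = "603"      # 6.08 (CUDA) -> 6.03 (CPU) for SETI@home Enhanced
tree.write("client_state.rebranded.xml")         # write a copy, don't overwrite in place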
____________

Who the hell is General Failure and why is he reading my harddisk?¿

Josef W. Segur
Project donor
Volunteer developer
Volunteer tester
Joined: 30 Oct 99
Posts: 4242
Credit: 1,047,276
RAC: 293
United States
Message 913187 - Posted: 2 Jul 2009, 4:35:44 UTC - in response to Message 913154.

According to the Server status page, as at 0200 UTC there are 1.4 million MB and nearly 5k AP WUs available for download. The Cricket graphs show the system is nowhere near maxed out and the Server status page has a "green board".

Q1) Why then do I keep getting "No jobs available" messages on my boxes? Where and what is the problem?
...
Brodo

See Matt's post from last December for that issue. Just what is delaying the Feeder is a very good question, though.
Joe

Terror Australis
Volunteer tester
Joined: 14 Feb 04
Posts: 1711
Credit: 204,759,556
RAC: 24,072
Australia
Message 913195 - Posted: 2 Jul 2009, 5:42:51 UTC

All OK now with the rebranding. Thanks Zeus for the pointer to the rebranding program and to Ned and Joe (again) for the answers to the server question.

Brodo

Terror Australis
Volunteer tester
Joined: 14 Feb 04
Posts: 1711
Credit: 204,759,556
RAC: 24,072
Australia
Message 913235 - Posted: 2 Jul 2009, 10:21:39 UTC

That ReSchedule 1.7 looks like a nice little toy. I think I'm going to enjoy playing with that one :-) Thanks again to Zeus and to the boys from Lunatics.

Brodo

Richard Haselgrove
Project donor
Volunteer tester
Joined: 4 Jul 99
Posts: 8491
Credit: 49,756,276
RAC: 55,166
United Kingdom
Message 913276 - Posted: 2 Jul 2009, 14:28:28 UTC - in response to Message 913187.

See Matt's post from last December for that issue. Just what is delaying the Feeder is a very good question, though.
Joe

Something is slowing down the scheduler too. It doesn't usually take 86 seconds to get a null response:

02/07/2009 15:20:10|SETI@home|Sending scheduler request: To fetch work
02/07/2009 15:20:10|SETI@home|Requesting 368777 seconds of new work
02/07/2009 15:21:36|SETI@home|Scheduler RPC succeeded [server version 607]
02/07/2009 15:21:36|SETI@home|Message from server: (Project has no jobs available)
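
For what it's worth, that 86 seconds is just the gap between the two timestamps (the log uses DD/MM/YYYY dates):

from datetime import datetime

fmt = "%d/%m/%Y %H:%M:%S"                        # the log prints DD/MM/YYYY
sent = datetime.strptime("02/07/2009 15:20:10", fmt)
done = datetime.strptime("02/07/2009 15:21:36", fmt)
print((done - sent).total_seconds())             # prints 86.0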

1mp0£173
Volunteer tester
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 913352 - Posted: 2 Jul 2009, 18:30:39 UTC - in response to Message 913276.

See Matt's post from last December for that issue. Just what is delaying the Feeder is a very good question, though.
Joe

Something is slowing down the scheduler too. It doesn't usually take 86 seconds to get a null response:

02/07/2009 15:20:10|SETI@home|Sending scheduler request: To fetch work
02/07/2009 15:20:10|SETI@home|Requesting 368777 seconds of new work
02/07/2009 15:21:36|SETI@home|Scheduler RPC succeeded [server version 607]
02/07/2009 15:21:36|SETI@home|Message from server: (Project has no jobs available)

... all the other traffic would do this. It could take 86 seconds just to get the SYN/SYN+ACK/ACK through a heavily loaded wire successfully.

At times, the BOINC client is quite effectively doing a Distributed Denial of Service attack on the BOINC servers.
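
If you want to see how much of a "slow scheduler" is really just connection setup, timing the handshake alone is enough. A sketch (host and port here are only an example; please don't use something like this to hammer the project servers):

import socket, time

host, port = "setiathome.berkeley.edu", 80       # illustrative target only
start = time.time()
try:
    sock = socket.create_connection((host, port), timeout=120)
    print("handshake completed in %.1f s" % (time.time() - start))
    sock.close()
except OSError as err:
    print("connect failed after %.1f s: %s" % (time.time() - start, err))

On a saturated link, even that bare connect can eat a large chunk of the delay before the scheduler ever sees the request.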
