Workunits won't even start downloading

blah

Joined: 3 Apr 99
Posts: 1
Credit: 87,879
RAC: 0
United Kingdom
Message 6295 - Posted: 11 Jul 2004, 0:08:01 UTC

I'm running Predictor and SETI on BOINC. Predictor works fine... always has, but SETI won't even start downloading any workunits. In the Messages tab, all it says is:
SETI@home - 2004-07-11 00:52:03 - Sending request to scheduler: http://setiboincdata.ssl.berkeley.edu/sah_cgi/cgi
SETI@home - 2004-07-11 00:52:06 - Scheduler RPC to http://setiboincdata.ssl.berkeley.edu/sah_cgi/cgi succeeded

and that's it; it doesn't go any further. Any help would be much appreciated.

Steve
ID: 6295
Keck_Komputers
Volunteer tester

Joined: 4 Jul 99
Posts: 1575
Credit: 4,152,111
RAC: 1
United States
Message 6328 - Posted: 11 Jul 2004, 2:02:45 UTC

It usually takes 1 to 4 weeks for resource shares to balance out; that is most likely why you are seeing only P@H work.

John Keck
BOINCing since 2002/12/08
ID: 6328
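To make the balancing idea concrete, here is a rough Python sketch, using assumed names and numbers rather than BOINC's actual scheduler code, of how a share-based scheduler can track which project is "owed" CPU time: a project that has received less than its share builds up positive debt and is favoured later, which is why a freshly attached or long-starved project only catches up over time.

def update_debts(debts, shares, cpu_used, interval_seconds):
    # Advance each project's scheduling debt by one accounting interval:
    # credit it with its fair share of the interval, then subtract the CPU
    # time it actually received.
    total_share = sum(shares.values())
    for project in debts:
        earned = interval_seconds * shares[project] / total_share
        debts[project] += earned - cpu_used.get(project, 0.0)
    return debts

debts = {"SETI@home": 0.0, "Predictor@home": 0.0}
shares = {"SETI@home": 50, "Predictor@home": 50}
# One hour in which Predictor@home received all of the CPU time:
update_debts(debts, shares, {"Predictor@home": 3600.0}, 3600.0)
print(debts)  # {'SETI@home': 1800.0, 'Predictor@home': -1800.0}
# SETI@home is now "owed" 1800 seconds and would be preferred next.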
John McLeod VII
Volunteer developer
Volunteer tester

Joined: 15 Jul 99
Posts: 24806
Credit: 790,712
RAC: 0
United States
Message 6358 - Posted: 11 Jul 2004, 4:16:03 UTC

BOINC will not download new work until the low water mark is hit.

ID: 6358
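In other words, the client buffers work by estimated run time and only asks a scheduler for more once that buffer drains below the mark, so a successful RPC that returns no work is normal while the queue is still full of another project's results. A minimal sketch of the idea; the function, thresholds, and numbers below are illustrative assumptions, not BOINC's source:

def work_request_seconds(queued_seconds, low_water_seconds, high_water_seconds):
    # Return how many seconds of new work to ask for (0 means stay quiet).
    if queued_seconds >= low_water_seconds:
        return 0.0                                  # buffer still healthy
    return high_water_seconds - queued_seconds      # refill to the high mark

# With 30 h of Predictor work queued and a 24 h low water mark, nothing is
# requested from SETI@home, which looks like "it doesn't go any further":
print(work_request_seconds(30 * 3600, 24 * 3600, 72 * 3600))  # 0.0
print(work_request_seconds(10 * 3600, 24 * 3600, 72 * 3600))  # 223200.0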
Jaaku
Volunteer tester

Joined: 29 Oct 02
Posts: 494
Credit: 346,224
RAC: 0
United Kingdom
Message 6461 - Posted: 11 Jul 2004, 11:02:59 UTC

Yes, that does get annoying :)
ID: 6461
Moose

Joined: 27 Jan 00
Posts: 3
Credit: 21,909,649
RAC: 0
United States
Message 7107 - Posted: 12 Jul 2004, 18:53:53 UTC

I, too, was experiencing this problem. It seems to me that the Predictor work units are quite small compared to the SETI work units. When SETI was not handing out work units, Predictor filled up the work queue and kept it full to the point where a SETI unit never got the chance to be downloaded (even when SETI was working normally).

To try to solve this, I disabled network access to the BOINC client (FILE->DISABLE BOINC NETWORK ACCESS) and let all of the Predictor work units be processed; then, using my firewall, I blocked all packets from the Predictor system. Once I re-enabled BOINC network access, Predictor couldn't get work units, but SETI was able to fill the work unit queue with its data. I then removed the block on the Predictor packets.

Now I have more SETI work units than Predictor work units in the queue, so I'm hoping that there will be room in the queue for a new SETI work unit for every SETI work unit that has been processed. Time will tell if this truly is a solution. Hope this helps.
ID: 7107
John McLeod VII
Volunteer developer
Volunteer tester

Joined: 15 Jul 99
Posts: 24806
Credit: 790,712
RAC: 0
United States
Message 7109 - Posted: 12 Jul 2004, 19:14:48 UTC - in response to Message 7107.  

> I, too, was experiencing this problem. It seems to me that the Predictor work
> units are quite small compared to the SETI work units. When SETI was not
> handing out work units, Predictor filled up the work queue and kept it full to
> the point where a SETI unit never got the chance to be downloaded (even when
> SETI was working normally). To try to solve, I disabled network access to the
> BOINC client (FILE->DISABLE BOINC NETWORK ACCESS) and let all of the
> Predictor work units be processed then, using my firewall, I blocked all
> packets from the Predictor system. Once I re-enabled BOINC network access,
> Predictor couldn't get work units, but SETI was able to fill the work unit
> queue with its data. I then removed the block on the Predictor packets. Now I
> have more SETI work units than Predictor work units in the queue so I'm hoping
> that there will be room in the queue for new SETI work units for every SETI
> work unit that has been processed. Time will tell if this truly is a solution.
> Hope this helps.
>
The queue is based on time, not count. When you join a new project, the new project will dominate the work done until it has caught up. This process can take up to 4 weeks depending on how long you were running the old project before you attached to the new project.

ID: 7109
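As a rough illustration of the time-based buffer, with assumed per-result run times (the figures below are made up for the example, not project data), the same two-day buffer holds far more small Predictor results than large SETI ones, which is why the queue can look permanently full of Predictor work:

# Assumed average run times per result, for illustration only.
EST_RUNTIME_SECONDS = {
    "Predictor@home": 1.5 * 3600,   # ~1.5 hours per result (assumption)
    "SETI@home": 8.0 * 3600,        # ~8 hours per result (assumption)
}

def results_to_fill(project, buffer_seconds):
    # How many results of one project it takes to cover the time buffer.
    return int(buffer_seconds // EST_RUNTIME_SECONDS[project]) + 1

two_days = 2 * 24 * 3600
print(results_to_fill("Predictor@home", two_days))  # 33 small results
print(results_to_fill("SETI@home", two_days))       # 7 larger results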
Moose

Joined: 27 Jan 00
Posts: 3
Credit: 21,909,649
RAC: 0
United States
Message 7128 - Posted: 12 Jul 2004, 19:59:50 UTC - in response to Message 7109.  


> The queue is based on time, not count. When you join a new project, the new
> project will dominate the work done until it has caught up. This process can
> take up to 4 weeks depending on how long you were running the old project
> before you attached to the new project.

I certainly don't dispute that; however, in my case (on this one machine) Predictor was the original project and S@H was the new one (having been added after Predictor), so the old project kept hogging the client and the new project got nothing. After day after day of watching Predictor run at 100% and S@H get 0% and never download a single work unit (even though my other machines were downloading many), I was looking for a way to force the client to start at least one S@H work unit without detaching from Predictor and creating a bunch of orphaned work units. Predictor had already worked completely through the queue two or three times, so I will also admit to being mildly impatient and I didn't want to wait 4 weeks (or however long it might take) to get that first S@H workunit. What I did may not be exactly "standard" but it did get me two S@H work units to process without, as far as I know, harming anyone else. Since that time, I've received 9 more and no additional Predictor work units.
ID: 7128
John McLeod VII
Volunteer developer
Volunteer tester

Joined: 15 Jul 99
Posts: 24806
Credit: 790,712
RAC: 0
United States
Message 7139 - Posted: 12 Jul 2004, 20:18:42 UTC - in response to Message 7128.  

> > The queue is based on time, not count. When you join a new project, the
> > new project will dominate the work done until it has caught up. This
> > process can take up to 4 weeks depending on how long you were running
> > the old project before you attached to the new project.
>
> I certainly don't dispute that; however, in my case (on this one machine)
> Predictor was the original project and S@H was the new one (having been added
> after Predictor), so the old project kept hogging the client and the new
> project got nothing. After day after day of watching Predictor run at 100% and
> S@H get 0% and never download a single work unit (even though my other
> machines were downloading many), I was looking for a way to force the client
> to start at least one S@H work unit without detaching from Predictor and
> creating a bunch of orphaned work units. Predictor had already worked
> completely through the queue two or three times, so I will also admit to being
> mildly impatient and I didn't want to wait 4 weeks (or however long it might
> take) to get that first S@H workunit. What I did may not be exactly "standard"
> but it did get me two S@H work units to process without, as far as I know,
> harming anyone else. Since that time, I've received 9 more and no additional
> Predictor work units.
>
It is quite possible that you hit S@H when it was either out of work or down, and it switched to Predictor to get some work to do.

ID: 7139
Pascal, K G
Volunteer tester

Joined: 3 Apr 99
Posts: 2343
Credit: 150,491
RAC: 0
United States
Message 7168 - Posted: 12 Jul 2004, 21:22:08 UTC
Last modified: 12 Jul 2004, 21:49:05 UTC

I have never had more than 5 S@H on my P4 3.0 and 2 S@H on my P4 1.8. Preferences are set to 3 to 5 days for the 3.0 and 1 to 3 days for the 1.8, and neither machine has ever run out in 5 days. I just hope the WUWUs do not run out, but if they do P@H will take up the slack, as I get 10 WUWUs per machine. I have resources set to 10% P@H and 90% S@H.

The low water mark for the 3.0 machine is 3 WUWUs left to process, and 50% left to process on the last WUWU for the 1.8 machine. Right now I have a mix of S@H and P@H WUWUs on both machines. As I have my resources set to 10% P@H, only 10 will be processed on the 1.8 machine and then it goes back to S@H; I do not know how many P@H the 3.0 will do, but I am thinking either 10 or 20 WUWUs.

The number of S@H WUWUs I have on hand is based on the inflated time-to-completion figures of 24 hrs 2 min 2 sec for the 3.0 machine and 41 hrs 20 min 22 sec for the 1.8 machine.

I just checked BOINC and received an S@H WUWU that has a normal time to completion.

As far as I am concerned, BOINC is a success; now all they need to do is refine the program and get more projects, and the rest will be history.
ID: 7168
Moose

Joined: 27 Jan 00
Posts: 3
Credit: 21,909,649
RAC: 0
United States
Message 7378 - Posted: 13 Jul 2004, 13:07:06 UTC - in response to Message 7168.  

> As far as I am concerned, BOINC is a success.....

That's one thing that probably hasn't been said enough! All of those who have worked very hard to develop BOINC and S@H should be very proud of their accomplishments - they have done a great job!!! No software project is ever truly "finished" so I'm looking forward to seeing the evolution of BOINC (I wonder what it will look like in five years?) and the development of new applications. (Please forgive the slightly-off-topic response :-)
ID: 7378
