This computer has reached a limit on task in progress !?!?!?!?


marsinph
Joined: 7 Apr 01
Posts: 89
Credit: 5,823,673
RAC: 1,896
Belgium
Message 1011761 - Posted: 4 Jul 2010, 18:18:23 UTC
Last modified: 4 Jul 2010, 18:26:18 UTC

I run BOINC 6.10.56 on a Q6600 quad-core.
My host average is about 1800.
My preferences are to connect every 10 days and keep extra work for 10 days, to run always, and to use 100% of the processors and 100% of CPU time.
I have run BOINC since 2001 and have not updated in the last few months.
One WU takes about 6 hours per CPU, so the machine completes 4 WUs every 6 hours.
BUT!
I only have 40 WUs waiting to run (about 60 hours of work, so less than 3 days).
The deadline is 14 August for all WUs, and none are running in high priority.
Yet when I request work, I receive the following message: "This computer has reached a limit on task in progress"!

Can someone explain?
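The arithmetic above can be checked with a short sketch. The figures (40 WUs, 6 hours per WU, 4 cores) come from the post; this is just a back-of-the-envelope calculation, not BOINC's actual work-fetch logic:

```python
# Back-of-the-envelope check of the queue runtime described above.
# Numbers are from the post; this is not BOINC's scheduler logic.

def queue_runtime_hours(num_wus, hours_per_wu, num_cores):
    """Wall-clock hours to drain the queue with all cores kept busy."""
    return num_wus * hours_per_wu / num_cores

hours = queue_runtime_hours(num_wus=40, hours_per_wu=6, num_cores=4)
print(hours)        # 60.0 hours of work in the queue
print(hours / 24)   # 2.5 days -- less than the 3-day outage
```

The same formula gives about 30 hours for the 20-task limit discussed below, which matches the poster's later complaint.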
____________

Ageless
Joined: 9 Jun 99
Posts: 12332
Credit: 2,633,975
RAC: 1,213
Netherlands
Message 1011765 - Posted: 4 Jul 2010, 18:42:46 UTC - in response to Message 1011761.

The project is giving out 20 tasks maximum to each host for the time being to give everyone a chance to get some work. The admins are even requesting that people reduce their cache in anticipation of removing this limit on Monday, before the next outage on Tuesday.

For more information, see the Jobs Limit thread, or any thread in the Number Crunching forum (if you fancy some fighting).
____________
Jord

Fighting for the correct use of the apostrophe, together with Weird Al Yankovic

marsinph
Joined: 7 Apr 01
Posts: 89
Credit: 5,823,673
RAC: 1,896
Belgium
Message 1011771 - Posted: 4 Jul 2010, 18:48:50 UTC - in response to Message 1011765.

Thank you for your response,
but 20 tasks is very few!
In my case, that is only about 30 hours of work.
Since the outage lasts at least 3 days per week, a lot of computers will have nothing to do!
Very soon the project will no longer be able to process all the data collected at Arecibo!
Is that the goal of SETI?


____________

Ageless
Joined: 9 Jun 99
Posts: 12332
Credit: 2,633,975
RAC: 1,213
Netherlands
Message 1011782 - Posted: 4 Jul 2010, 19:08:37 UTC - in response to Message 1011771.

From the first thread I pointed at, posted by Jeff Cobb, Seti admin:

The idea behind the jobs per host limit (currently set at 20) is to allow
every host that tries to connect to get some amount of work. This is
at the cost of some of the hosts getting as much work as requested.
This limit is necessary only when we are pegged at our outbound network
limit (but if that limit were raised, we would surely hit some other in
due course).

This limit will be tuned. We've just started working with the three-day
outage protocol (in the interest of doing science, ie, looking for ET).
We're learning how to work with it.

Adding to the newness of this way of running the servers, the fact that this is
a holiday weekend, and a big chunk of our team (that being Matt and Eric)
are away at this time, we have a situation that's hard to get just right.

Please bear with us as we work through this. This plea goes out especially
to the power crunchers, who together form the bulk of this project's
computational backbone.

Assuming no crisis, I plan to remove the jobs limit on Monday (a holiday)
so that some (but not all, because we will be pegged), of the crunchers can fill
their queues. My request is that the crunchers reduce their queue sizes so that
more of you can at least partially fill your normal queues before the three-day
outage.


If you want more work, attach to other projects. None of my systems has been out of work during any of Seti's outages.
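The per-host cap described in the quote can be sketched as a toy model. This is illustrative only, not the actual SETI@home scheduler code; the function name and logic are assumptions:

```python
# Toy model of the per-host "tasks in progress" cap described above.
# Not the actual SETI@home scheduler; names and logic are illustrative.

TASKS_IN_PROGRESS_LIMIT = 20  # the temporary cap mentioned in the quote

def tasks_to_send(in_progress, requested, limit=TASKS_IN_PROGRESS_LIMIT):
    """Grant at most enough tasks to bring the host up to the limit."""
    headroom = max(0, limit - in_progress)
    return min(requested, headroom)

# A host already at the cap gets nothing, which triggers the
# "reached a limit on task in progress" message:
print(tasks_to_send(in_progress=20, requested=40))  # 0
print(tasks_to_send(in_progress=5, requested=40))   # 15
```

Under a model like this, raising the limit on Monday simply restores the headroom, which is why the admins ask crunchers to keep their caches small so the outbound bandwidth is shared.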
____________
Jord

Fighting for the correct use of the apostrophe, together with Weird Al Yankovic


Copyright © 2014 University of California