Message boards : Number crunching : This computer has reached a limit on tasks in progress??
soft^spirit · Joined: 18 May 99 · Posts: 6497 · Credit: 34,134,168 · RAC: 0

> Change the interval that your computer connects to the project to 10 days. Then change your cache to 10 days. You will then cache 20 days of work.

You might want to browse through the rest of this thread before making suggestions like that.

Janice
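For readers puzzling over the arithmetic in that quote: in older BOINC clients, the "connect about every X days" interval and the additional work buffer were effectively additive, so the client tried to hold roughly their sum. A minimal sketch of that model (the parameter names here are descriptive, not BOINC's actual preference keys):

```python
def total_cache_days(connect_every_days: float, extra_buffer_days: float) -> float:
    """Approximate total work the client tries to hold, in days.

    Simplified model of older BOINC clients: the connect interval and
    the additional work buffer add together to give the fetch target.
    """
    return connect_every_days + extra_buffer_days

print(total_cache_days(10, 10))  # the "20 days of work" from the post
```

Whether a host actually reaches that target is another matter; the per-host task limit discussed in this thread caps it regardless.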
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13751 · Credit: 208,696,464 · RAC: 304

> We should drop this subject...

Spoilsport.

Grant
Darwin NT
Rasputin · Joined: 13 Jun 02 · Posts: 1764 · Credit: 6,132,221 · RAC: 0

> We should drop this subject...

LOL!
Claggy · Joined: 5 Jul 99 · Posts: 4654 · Credit: 47,537,079 · RAC: 4

> Change the interval that your computer connects to the project to 10 days. Then change your cache to 10 days. You will then cache 20 days of work.

Well, it'll work on certain hosts, like my T8100 laptop, which only does Astropulse, but it won't be able to get 20 days' worth of WUs; 8 to 10 days maybe. It's too quick for the number of tasks allowed at the moment.

Claggy
RottenMutt · Joined: 15 Mar 01 · Posts: 1011 · Credit: 230,314,058 · RAC: 0

Sooooooo, anyone notice the 200k MP workunits and 10K AP workunits sitting on the server? Hmmm, time to let the work flow.
JohnDK · Joined: 28 May 00 · Posts: 1222 · Credit: 451,243,443 · RAC: 1,127

> Sooooooo, anyone notice the 200k MP workunits and 10K AP workunits sitting on the server!

And the numbers have risen lately; have many mega crunchers left, or what?
zoom3+1=4 · Joined: 30 Nov 03 · Posts: 65777 · Credit: 55,293,173 · RAC: 49

> I don't think this will help the GPU guys, but change your project preferences to only run Astropulse.

Not a chance of that happening here.

The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
Aristoteles Doukas · Joined: 11 Apr 08 · Posts: 1091 · Credit: 2,140,913 · RAC: 0

> I don't think this will help the GPU guys, but change your project preferences to only run Astropulse.

Hasn't helped me at all; I get one WU per core, and the next one when I'm done with it.
Numanoid · Joined: 13 Aug 99 · Posts: 42 · Credit: 4,119,139 · RAC: 0

I need help with something. I've seen people post that the 20 is updated when the host reports, but I'm not seeing that. When I report work, it says "not requesting new tasks." To get work, I have to exit BOINC Manager and launch it again. Also, what setting determines when it "updates" or connects? I can finish a GPU WU in 5-8 minutes, and I'd like it to attempt to download a new one when done, if possible. If unattended, it crunches 20 and sits at "ready to report" until I click Update.
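On the "what setting determines when it connects" question: roughly speaking, the client compares the work it already has buffered against its cache target and asks the server for the shortfall; with the server-side cap in place, the request is then granted only up to 20 tasks. A simplified sketch of that decision (real BOINC work fetch also weighs per-resource shares and backoffs, so treat this as a model, not the actual algorithm):

```python
def work_request_seconds(buffered_s: float, min_buffer_s: float,
                         extra_buffer_s: float) -> float:
    """Seconds of new work to request; 0 if the buffer is already full.

    Simplified model: request the shortfall between buffered work and
    the target buffer (minimum buffer plus additional buffer).
    """
    target = min_buffer_s + extra_buffer_s
    return max(0.0, target - buffered_s)

# A host holding 20 short GPU tasks (~6 minutes each) with a 0.5-day
# minimum buffer and no additional buffer:
buffered = 20 * 6 * 60                                  # 7200 s on hand
print(work_request_seconds(buffered, 0.5 * 86400, 0))   # 36000.0 s short
```

In other words, such a host keeps asking for work every time it reports; it is the server-side 20-task cap, not the client settings, that produces the "reached a limit on tasks in progress" reply.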
RottenMutt · Joined: 15 Mar 01 · Posts: 1011 · Credit: 230,314,058 · RAC: 0

Anyone know if the splitters stop once a quota is reached in the queue? It seems like it, since we are going through the tapes slowly. So when this 20-work-unit limit is lifted on Monday, we will not have a chance to fill our queues, as the splitters will not be able to produce enough work in one day for everyone. Please disable the 20-unit limit on Sunday morning.
Helli_retiered · Joined: 15 Dec 99 · Posts: 707 · Credit: 108,785,585 · RAC: 0

The question for me is: how high should I set my cache? What will happen after the next three-day outage? Are we again reduced to 20 workunits per host for three days?
soft^spirit · Joined: 18 May 99 · Posts: 6497 · Credit: 34,134,168 · RAC: 0

> Anyone know if the splitters stop once a quota is reached in the queue? It seems like it, since we are going through the tapes slowly. So when this 20-work-unit limit is lifted on Monday, we will not have a chance to fill our queues, as the splitters will not be able to produce enough work in one day for everyone.

I am reducing the percentages I run on my CPUs and GPUs and requesting a 0.5-day cache. That should take some more stress off the servers. I had a 6-day cache requested for over 2 weeks and never got close to it, probably because the CPDN tasks I was running were taking 250+ hours. I really do not know. But I do know I am tired of the server/network strain complaints from the staff, and the senseless jabs from people running 5+ year old machines. I guess the only way they can feel better about themselves is by belittling others. But that is really what they want, it seems: to wear others out. Nice work.

Janice
-= Vyper =- · Joined: 5 Sep 99 · Posts: 1652 · Credit: 1,065,191,981 · RAC: 2,537

Yes, please do! We're in the same boat there :) ... If they do release the plug, our machines will crunch through that work in roughly half an hour until it's depleted. All we can do is hope. Patience is a virtue.

Kind regards, Vyper

Addicted to SETI crunching! Founder of GPU Users Group
Wandering Willie · Joined: 19 Aug 99 · Posts: 136 · Credit: 2,127,073 · RAC: 0

As I read it, on SETI@home the quota is xx WUs (1, 2, 4, or 8) in progress, with xx waiting to start. The message from the server is:

02/07/2010 17:38:22 SETI@home Message from server: This computer has reached a limit on tasks in progress.

So I have reached my quota of xx WUs.

The AQUA quota for this project is two WUs: one WU in progress and one WU waiting to run. The message from the server is:

04/07/2010 08:02:09 AQUA@home Message from server: (reached limit of 2 CPU tasks in progress)

I have reached my quota of two WUs.

Michael.
geronime · Joined: 15 Oct 01 · Posts: 1 · Credit: 3,360,802 · RAC: 0

I have work only for my slow 9600GT GPU, and my 3 GHz dual-core Core 2 is idle. SETI@home is probably trying to tell me my idle computer is not needed anymore. If the limits are not raised before the next 3-day outage so that I can get some reasonable amount of work in the cache, I'm going to quit the project. Good luck.
TheFreshPrince a.k.a. BlueTooth76 · Joined: 4 Jun 99 · Posts: 210 · Credit: 10,315,944 · RAC: 0

Finally I found an unattended workaround to move VLARs to the CPU with such a small cache. Because of the small cache, VLARs were already in a GPU slot before Reschedule 1.9 got them (set on a 1-hour interval). I fixed this by installing a macro recorder (Auto Macro Recorder) that presses the "start" button in Reschedule 1.9 every 900 seconds, so it moves every VLAR to the CPU while moving "healthy" WUs to the GPU (90%-to-GPU setting). I had to clock my CPU down to 2.8 GHz because of the heat outside (and inside) that made it shut down (at about 68C). Yesterday I ordered a Scythe Mugen 2 cooler, so the cooling problems should be over soon :D

Rig name: "x6Crunchy" · OS: Win 7 x64 · MB: Asus M4N98TD EVO · CPU: AMD X6 1055T 2.8 (1.2v) · GPU: 2x Asus GTX560ti
Member of: Dutch Power Cows
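The macro-recorder hack above is really just a periodic trigger. If the rescheduler could be launched from the command line (hypothetical; Reschedule 1.9 may not actually expose such an interface, and the executable name below is made up), the same 900-second loop could be a short script instead of simulated button presses:

```python
import subprocess
import time

def run_periodically(action, interval_s: float, iterations: int):
    """Call `action` every `interval_s` seconds, `iterations` times,
    collecting the return values. Stands in for the macro recorder's
    timed button press."""
    results = []
    for i in range(iterations):
        results.append(action())
        if i < iterations - 1:
            time.sleep(interval_s)
    return results

# Hypothetical usage: launch a command-line rescheduler every 900 s
# for 24 hours (96 runs). "reschedule.exe /start" is an assumed CLI.
# run_periodically(lambda: subprocess.call(["reschedule.exe", "/start"]),
#                  interval_s=900, iterations=96)
```

The advantage over a macro recorder is that the loop does not depend on the rescheduler window having focus; the drawback is that it only works if the tool accepts command-line invocation at all.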
Odan · Joined: 8 May 03 · Posts: 91 · Credit: 15,331,177 · RAC: 0

> Sooooooo, anyone notice the 200k MP workunits and 10K AP workunits sitting on the server!

That is possible, but as far as I can see the reason is mainly that the data rate out of Berkeley has been nurdling along at approximately 50% of max since the limit of 20 tasks per host was introduced. This means that the mega crunchers cannot run at 100%, and that even small-to-middling crunchers cannot build up much of a cache. It means that even with only 2 AP splitters and 4 MB splitters actually splitting (despite what the green status bars would misleadingly tell us), splitting is outstripping the throttled-back pipe.
Josef W. Segur · Joined: 30 Oct 99 · Posts: 4504 · Credit: 1,414,761 · RAC: 0

> Anyone know if the splitters stop once a quota is reached in the queue? It seems like it, since we are going through the tapes slowly. So when this 20-work-unit limit is lifted on Monday, we will not have a chance to fill our queues, as the splitters will not be able to produce enough work in one day for everyone.

Yes, there's a high-water mark, but it has been more a matter of not starting a splitter for a channel when there's already some number of "Results ready to send". In the past, that upper limit has appeared to be around 100,000; you can see the upper and lower limits on Scarecrow's charts. Under present circumstances it looks like they've set it higher or modified how it's implemented: we had "Results ready to send" at 259,589 (Enhanced) and 8,614 (AP) last I looked. There's also a lower limit at which the splitters start working again. The count for S@H Enhanced is varying from about 200,000 to 300,000 every few hours, and from about 8,000 to 12,000 for AP.

It's been seen many times that the mb_splitter processes can produce 30 or more new results per second. The download pipe could barely deliver 30 per second, and only if there were no AP tasks, so the S@H Enhanced situation ought to be OK unless one or more of the splitters crashes or gets stuck on an unsplittable channel. The two ap_splitter processes may not quite keep up, but the "Results ready to send" might not totally empty in 24 hours.

If everything worked with no problems, the system could theoretically deliver about 1.8 million S@H Enhanced and 37 thousand AP tasks in 24 hours. The "tapes" shown ready for splitting could also supply that much. It's not enough to sustain all hosts for the following 3 full days, and the system is unlikely to achieve that much delivery anyhow. I hope lessons learned from this 'on' period will improve what happens for the second one starting next Friday.

Joe
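Joe's pipe estimate can be sanity-checked with rough numbers. Assuming a ~100 Mbit/s campus link, roughly 366 KB per S@H Enhanced download, and roughly 8 MB per Astropulse download (these figures are my assumptions, not taken from the post), both "30 Enhanced per second" and the "1.8 million Enhanced plus 37 thousand AP per day" estimate land at about 90% link utilization, which is consistent with his description of a saturated pipe:

```python
MBIT = 1_000_000  # bits

# Assumed figures (not from the post): link capacity and task sizes.
LINK_MBIT = 100                  # ~100 Mbit/s download pipe
MB_TASK_BYTES = 366 * 1024       # ~366 KB per S@H Enhanced workunit
AP_TASK_BYTES = 8 * 1024 * 1024  # ~8 MB per Astropulse workunit

def link_load_mbit(tasks_per_day_mb: int, tasks_per_day_ap: int) -> float:
    """Average link load in Mbit/s for a given daily task mix."""
    bytes_per_day = (tasks_per_day_mb * MB_TASK_BYTES
                     + tasks_per_day_ap * AP_TASK_BYTES)
    return bytes_per_day * 8 / 86400 / MBIT

# 30 Enhanced tasks/s with no AP almost saturates the assumed link:
print(round(link_load_mbit(30 * 86400, 0)))      # ~90 Mbit/s

# Joe's 24-hour estimate, 1.8M Enhanced + 37k AP tasks, lands about
# the same place:
print(round(link_load_mbit(1_800_000, 37_000)))  # ~91 Mbit/s
```

Under these assumed sizes, the two scenarios are nearly equivalent loads on the pipe, which is why swapping some Enhanced delivery for AP delivery barely changes the bottleneck.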
Fulvio Cavalli · Joined: 21 May 99 · Posts: 1736 · Credit: 259,180,282 · RAC: 0

And I just added two new CUDA cards to my machines; they were on the way before all this started. Not good. I really hope this limit goes away by tomorrow.
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.