How about raising the limits ....
SciManStev Send message Joined: 20 Jun 99 Posts: 6652 Credit: 121,090,076 RAC: 0 |
Looks like the limits have been raised to 200 CPU/400 GPU. You have two cores, so 50 tasks each. The GPU will allow another 400 tasks, but if you are already at 100 CPU tasks, and your rig asks for more CPU work, you will bump into your CPU limit. If it asks for GPU work, you should get more work. Steve Warning, addicted to SETI crunching! Crunching as a member of GPU Users Group. GPUUG Website |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
Looks like the limits have been raised to 200 CPU/400 GPU. It is using total usable processors. A 4-core machine with HT, 8 usable processors, gets 400 tasks. A 4-core machine without HT, 4 usable processors, gets 200 tasks. A dual 6-core Xeon server with HT gets 1200 tasks! SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
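[The per-processor scaling HAL9000 describes can be sketched as follows. The 50-tasks-per-logical-processor quota is inferred from the examples in the post (8 → 400, 4 → 200, 24 → 1200), not from any published server configuration, and the function name is purely illustrative:]

```python
# Inferred sketch of the per-host CPU task limit described above:
# the server appears to multiply a fixed quota by the number of
# usable (logical) processors. The quota of 50 is inferred from
# the worked examples in the post, not from server source.
TASKS_PER_PROCESSOR = 50

def cpu_task_limit(cores: int, hyperthreading: bool) -> int:
    """Return the inferred task limit for a host's CPU."""
    logical = cores * 2 if hyperthreading else cores
    return logical * TASKS_PER_PROCESSOR

print(cpu_task_limit(4, True))    # 4-core machine with HT -> 400
print(cpu_task_limit(4, False))   # 4-core machine without HT -> 200
print(cpu_task_limit(12, True))   # dual 6-core Xeon with HT -> 1200
```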
Lint trap Send message Joined: 30 May 03 Posts: 871 Credit: 28,092,319 RAC: 0 |
Thanks, Steve! I'll pay more attention to what is being asked for. The last 3 requests were for only CPU work. Lt |
Brkovip Send message Joined: 18 May 99 Posts: 274 Credit: 144,414,367 RAC: 0 |
I also would like the limits removed. The bandwidth seems to be able to handle it and I know my machines would like it. They keep saying that they have reached the maximum allowed tasks. |
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
I also would like the limits removed. The bandwidth seems to be able to handle it and I know my machines would like it. They keep saying that they have reached the maximum allowed tasks. Actually, your machines simply pass on the message from the server without knowing what it means. A rational limits mechanism would instruct the core client not to ask for any more work until it completes some. That could even be extended so that if the user clicked Update the client could explain it wasn't asking for work because it was already at the limit. Joe |
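[Joe's proposed "rational limits mechanism" can be sketched in a few lines. Everything here is illustrative, assuming a client that knows the server's limit; none of these names are real BOINC client internals:]

```python
# Illustrative sketch of the mechanism Joe proposes: the client tracks
# the server-reported limit and stops *asking* for work once it is
# reached, instead of asking and being refused. On a manual Update it
# can then explain why no request was made. Names are hypothetical.
class WorkFetchPolicy:
    def __init__(self, task_limit: int):
        self.task_limit = task_limit
        self.tasks_in_progress = 0

    def should_request_work(self) -> bool:
        # Only ask for work while below the server's limit.
        return self.tasks_in_progress < self.task_limit

    def on_user_update(self) -> str:
        # Explain the decision instead of silently doing nothing.
        if self.should_request_work():
            return "requesting work"
        return (f"not requesting work: already at the server's "
                f"limit of {self.task_limit} tasks")
```

For example, a host with `task_limit=200` and 200 tasks in progress would report that it is at the limit rather than issuing a request the server will refuse.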
Mad Fritz Send message Joined: 20 Jul 01 Posts: 87 Credit: 11,334,904 RAC: 0 |
I also would like the limits removed. The bandwidth seems to be able to handle it and I know my machines would like it. .... Are you actually running out of work, or are you close to it? If it stays "normal" like it has been running since the memory upgrade on the HE router, I personally don't need a large cache anymore. I'd rather have APs again. And then let's see how the Cricket graph looks ;-) IMHO raising the limits should be the last step taken, after all other issues are sorted out. Andy |
Lionel Send message Joined: 25 Mar 00 Posts: 680 Credit: 563,640,304 RAC: 597 |
Since the router issue has now been fixed, how about raising the limits .... a stress test, so to say .... couldn't hurt .... |
SciManStev Send message Joined: 20 Jun 99 Posts: 6652 Credit: 121,090,076 RAC: 0 |
I'm going to post a guess here. I have read that there are two more fixes for the DCF problem that still need to be implemented. Each one will affect the DCF a bit, so by having lower caches, not as many work units are affected. The work in progress is turning over fairly quickly now, rather than sitting in a cache waiting its turn. After the second, and hopefully final, fix is in place and has settled all the DCFs, perhaps then the limits will be increased or eliminated. Steve Warning, addicted to SETI crunching! Crunching as a member of GPU Users Group. GPUUG Website |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
I'm going to post a guess here. I have read that there are two more fixes for the DCF problem that still need to be implemented. Each one will affect the DCF a bit, so by having lower caches, not as many work units are affected. The work in progress is turning over fairly quickly now, rather than sitting in a cache waiting its turn. After the second, and hopefully final, fix is in place and has settled all the DCFs, perhaps then the limits will be increased or eliminated. That is my understanding as well. They don't want to open the floodgates fully yet, as there is some chance that the remaining adjustments to the server code 'may' goose the DCFs again, possibly resulting in over-fetching by hosts until they compensate for the new fudge factors. So I guess it's kinda preemptive damage control. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13736 Credit: 208,696,464 RAC: 304 |
So now would be a good time to start sorting out the GPU estimated work time issues... Grant Darwin NT |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
So now would be a good time to start sorting out the GPU estimated work time issues... I am guessing that the next stage in peeling the band-aid off will probably not be until next Tuesday's outage. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
Lionel Send message Joined: 25 Mar 00 Posts: 680 Credit: 563,640,304 RAC: 597 |
They don't have to open the gates all the way ... all they need to do is lift them slightly ... and in doing so take some of the heat out of the system ... it will take a day or two to settle again, but they will establish a new, slightly higher operating position which should see some machines not call on SETI as much as they do at the moment ... |
Terror Australis Send message Joined: 14 Feb 04 Posts: 1817 Credit: 262,693,308 RAC: 44 |
Why bother?? Now that AP is back on, we can't download what's already been allocated. I have units that have been waiting to download for 3 days, and downloads are so slow I can't keep up to the current limit. Imagine the chaos if everyone was in "fill my X-day cache" mode on top of the current schemozzle? T.A. |
Kevin Olley Send message Joined: 3 Aug 99 Posts: 906 Credit: 261,085,289 RAC: 572 |
Why bother?? Now that AP is back on we can't download what's already been allocated. It's starting to improve, not by a lot, but the shorties are getting fewer, and at the rate the APs are going out some users' caches must be filling up. It's only the faster machines that are affected by the current limits; a lot of the slower machines, or those not running 24/7, will not even be reaching the limit anyway. Kevin |
tbret Send message Joined: 28 May 99 Posts: 3380 Credit: 296,162,071 RAC: 40 |
I'm guessing someone needs to cough-up another 100Mb furball for AP. |
Mad Fritz Send message Joined: 20 Jul 01 Posts: 87 Credit: 11,334,904 RAC: 0 |
It's starting to improve, not by a lot, but the shorties are getting fewer, and at the rate the APs are going out some users' caches must be filling up. I ran out of CUDA WUs even during the really short maintenance outage yesterday (local time). There's no chance at all to keep even a reasonable amount of work in the cache; to keep the machines running I have to babysit them. |
Floyd Send message Joined: 19 May 11 Posts: 524 Credit: 1,870,625 RAC: 0 |
Looks like the limits have been raised to 200 CPU/400 GPU. Do the "pendings" count toward the limits? I have a quad core and a GTX 460LE GPU; would that make my limits around 600 WUs? I have 561 pending and 193 (atm) in processing. I have had a total of over 800 in that manner... processing and pending combined... |
Lint trap Send message Joined: 30 May 03 Posts: 871 Credit: 28,092,319 RAC: 0 |
No, not pendings. I was getting limit messages when asking for CPU work when I already had 98-100 CPU WUs in cache. I wasn't paying attention to the fact that BOINC was asking for more CPU work. Lt |
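[The distinction in this answer can be shown with a minimal illustrative sketch: only tasks still on the host count toward the limit, while results already reported and awaiting validation ("pending") do not. The 193/561 counts come from Floyd's post above; the state names are invented for illustration:]

```python
# Illustrative sketch: the server limit counts tasks the host still
# holds (in progress), not results already reported and awaiting
# validation ("pending"). State names here are hypothetical.
tasks = (
    [{"state": "in_progress"}] * 193          # still crunching on the host
    + [{"state": "pending_validation"}] * 561  # reported, awaiting wingman
)

counted_toward_limit = sum(1 for t in tasks if t["state"] == "in_progress")
print(counted_toward_limit)  # 193 -> well under the limit, so more work can be fetched
```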
tbret Send message Joined: 28 May 99 Posts: 3380 Credit: 296,162,071 RAC: 40 |
I would like to say something obscene about not being able to figure out how to delete a double post, but that would be against the rules. No, simply erasing the body didn't work. It wouldn't let me post a blank. |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.