Inventory Limits
Herb Smith | Joined: 28 Jan 07 | Posts: 76 | Credit: 31,615,205 | RAC: 0
In view of the weekly startup issues, can the cache limits be raised? There appears to be a limit of 100 tasks for the CPU and 100 for each GPU. The GPU limit is the bigger problem: I work through that in about 18 - 27 hours, and the CPU cache is gone in about 36 hours. Could these be doubled, at least until all the issues with the weekly restarts get resolved? I think this would greatly reduce the amount of angst every Tuesday. Herb Smith
rob smith | Joined: 7 Mar 03 | Posts: 22202 | Credit: 416,307,556 | RAC: 380
I doubt that would have any real, positive effect, as not everyone runs out of cache between the start of the blackout and the return of normal service. The problem of everyone filling their caches NOW would still be there. One thing that might have a real advantage would be introducing proper hysteresis into the cache size: allowing the cache size to fall during the first few hours after the outage, and then gradually ramping it back up over the next few hours, would be more effective than the current situation. Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe?
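[Editor's note: the hysteresis idea above can be sketched as a simple ramp. All names and the specific floor/ramp numbers here are hypothetical illustrations, not anything from the BOINC server code.]

```python
def allowed_cache_days(hours_since_outage, floor_days=0.5,
                       normal_days=2.0, ramp_hours=12.0):
    """Sketch of a post-outage cache hysteresis.

    Immediately after the outage ends, hosts are only allowed a small
    cache (floor_days of work); the allowance ramps back up linearly
    to normal_days over ramp_hours, spreading the refill load out
    instead of letting every host fill its full cache at once.
    """
    if hours_since_outage >= ramp_hours:
        return normal_days
    fraction = hours_since_outage / ramp_hours
    return floor_days + (normal_days - floor_days) * fraction
```

Right after the outage every host would be held to half a day of work, and only a host asking twelve hours later would get the full two-day cache.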
OzzFan | Joined: 9 Apr 02 | Posts: 15691 | Credit: 84,761,841 | RAC: 28
It might reduce the amount of angst users have about getting work, but it would reintroduce the issues the limits were originally put in place to solve, namely the database growing so large that it constantly crashes. They've said they seem to be running up against some size limits in MySQL and are forced to keep limits in place to prevent catastrophe.
Brent Norman | Joined: 1 Dec 99 | Posts: 2786 | Credit: 685,657,289 | RAC: 835
How about something like: SendMax = (min(200 / AverageTurnAround, 200) - UserCurrentCache) x (ReadySendBuffer / DesiredBufferSize). I'm sure that info is already available to the scheduler without any additional database reads. Fast users would get a bigger cache; slowpokes would get a smaller one. Eventually, as the turnaround times average out, I THINK everyone should have a 2-day cache, and I THINK the overall Out In Field count would go DOWN a fair bit. Not to mention that Validation Inconclusive (and waiting to validate) should drop A LOT, not having to wait for slow computers as much. Changing the 200 to 100, 300, etc. would let them control how much work is out there and how fast they get (most of) it back. With that said, could they disable the BOINC user cache setting, so that slower computers wouldn't keep requesting work if something like this were implemented?
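[Editor's note: Brent's proposed formula can be written out as a small function. The parameter names follow his post; the clamping to zero and the units (turnaround in days) are assumptions for illustration, not part of any actual BOINC scheduler.]

```python
def send_max(avg_turnaround_days, user_current_cache,
             ready_to_send, desired_buffer, base=200):
    """Sketch of the proposed per-request task allowance.

    A host's cap is base / turnaround (so fast hosts get more), never
    exceeding base; subtract what the host already holds, then scale
    the result down when the ready-to-send buffer is below its
    desired size.
    """
    per_host_cap = min(base / avg_turnaround_days, base)
    scale = ready_to_send / desired_buffer
    return max(0.0, (per_host_cap - user_current_cache) * scale)
```

A host turning work around in two days with 50 tasks in hand, against a full buffer, would be allowed 50 more; a half-day host with an empty cache hits the flat 200 cap and gets scaled by the buffer level.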
HAL9000 | Joined: 11 Sep 99 | Posts: 6534 | Credit: 196,805,888 | RAC: 57
How about something like: At the moment there are two kinds of limits that the BOINC devs have designed into the server code, that I'm aware of: N * CPUs, or a hard fixed limit; projects can also combine the two. Milkyway uses (nCPUs * 3) with an upper limit of 48 total CPU tasks. The BOINC options for GPU limits are pretty much the same, with a change not too long ago that applies the limit(s) to each GPU vendor type, such as ATI, Intel, or Nvidia, separately. Any increase to the limits will add to the "out in the field" count. Everyone wants more tasks to ride out the hiccups, but that comes at the cost of beating the snot out of the MySQL db, which causes more hiccups. At the moment the MySQL db is going a bit wonky for another, unknown reason. Rather than creating new problems the guys don't have time to fix, I think we should let them fix the current issues first. SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
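[Editor's note: the combined limit HAL9000 describes, a per-CPU multiplier with a hard cap, can be illustrated in a few lines. The function name and defaults are hypothetical; only the Milkyway numbers (3 per CPU, 48 max) come from the post.]

```python
def task_limit(n_cpus, per_cpu=3, hard_cap=48):
    """Sketch of a combined BOINC-style task limit: N tasks per CPU,
    never exceeding a fixed project-wide cap per host, as in the
    Milkyway example quoted above (nCPUs * 3, max 48 CPU tasks)."""
    return min(n_cpus * per_cpu, hard_cap)
```

A 4-core host would be allowed 12 tasks, while a 32-core host hits the 48-task cap rather than 96.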
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.