Inventory Limits

Message boards : Number crunching : Inventory Limits

Herb Smith
Volunteer tester

Joined: 28 Jan 07
Posts: 76
Credit: 31,615,205
RAC: 0
United States
Message 1649568 - Posted: 5 Mar 2015, 15:19:11 UTC

In view of the weekly startup issues, can the cache limits be raised? It appears there is a limit of 100 tasks for the CPU and 100 for each GPU. The GPU limit is the biggest problem: I work through that in about 18-27 hours. The CPU cache is gone in about 36 hours.

Can these be doubled, at least until all the issues with the weekly restarts are resolved? I think this would greatly reduce the amount of angst every Tuesday.

Herb Smith
ID: 1649568
rob smith
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22202
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1649602 - Posted: 5 Mar 2015, 16:40:33 UTC

I doubt that would have any real, positive effect, as not everyone runs out of cache between the start of the blackout and the return of normal services.
The problem of everyone filling their caches at once would still be there. One thing that might have a real advantage would be introducing proper hysteresis into the cache size: let the cache size fall during the first few hours after the outage, then gradually ramp it back up over the next few hours. That would be more effective than the current situation.
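The hysteresis idea could be sketched roughly like this (illustrative Python only; the function name, the reduced floor, the 6-hour window, and the linear ramp are all my own assumptions, not anything in the actual BOINC scheduler):

```python
def cache_limit(hours_since_outage: float,
                base_limit: int = 100,
                floor: int = 25,
                ramp_hours: float = 6.0) -> int:
    """Per-host task limit that starts low right after an outage
    and ramps linearly back up to the normal limit."""
    if hours_since_outage >= ramp_hours:
        return base_limit
    frac = max(hours_since_outage, 0.0) / ramp_hours
    return floor + int((base_limit - floor) * frac)
```

Right after the outage every host would only be allowed a fraction of its normal cache, so the refill load is spread over several hours instead of arriving as one stampede.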
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1649602
OzzFan
Volunteer tester

Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1649604 - Posted: 5 Mar 2015, 16:40:44 UTC - in response to Message 1649568.  

It might reduce the amount of angst users have about getting work, but it would reintroduce the issues that were caused by not having the limits in place originally, namely the database growing so large that it constantly crashes. They've said they seem to be running up against some size limits of MySQL and are forced to have limits in place to prevent catastrophe.
ID: 1649604
Brent Norman
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1650146 - Posted: 6 Mar 2015, 23:41:21 UTC - in response to Message 1649604.  
Last modified: 6 Mar 2015, 23:41:59 UTC

How about something like:

SendMax = ((the lesser of 200/AverageTurnAround and 200) - UserCurrentCache) × (ReadySendBuffer / DesiredBufferSize)

I'm sure that info is already available to the scheduler without any additional database reads.

Fast users will get a bigger cache; slowpokes would get a smaller one. Eventually, as the turnaround times average out, I think everyone should have a 2-day cache, and I think the overall Out In Field count would go down a fair bit. Not to mention the Validation Inconclusive (and waiting-to-validate) counts should drop a lot, not having to wait for slow computers as much.

Changing the 200 to 100, 300, etc. would let them control what work is out there and how fast they get (most of) it back.

With that said, could they disable the BOINC user cache setting so that slower computers wouldn't keep requesting work if something like this were implemented?
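The proposed formula could be sketched like this (hypothetical Python; the variable names come from the post itself, but clamping the headroom to non-negative values and capping the buffer factor at 1.0 are my own assumptions):

```python
def send_max(avg_turnaround_days: float,
             current_cache: int,
             ready_to_send: int,
             desired_buffer: int,
             scale: float = 200.0) -> int:
    """Proposed per-request task cap: faster hosts (lower average
    turnaround) get more headroom, throttled by how full the
    ready-to-send buffer currently is."""
    per_host = min(scale / avg_turnaround_days, scale)
    headroom = max(per_host - current_cache, 0.0)
    buffer_factor = min(ready_to_send / desired_buffer, 1.0)
    return int(headroom * buffer_factor)
```

A host with a 1-day turnaround and 50 tasks cached would be sent up to 150 more when the buffer is full, while a host with a 10-day turnaround already holding 30 tasks would be sent nothing.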
ID: 1650146
HAL9000
Volunteer tester

Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1650246 - Posted: 7 Mar 2015, 6:23:31 UTC - in response to Message 1650146.  

> How about something like:
>
> SendMax = ((the lesser of 200/AverageTurnAround and 200) - UserCurrentCache) × (ReadySendBuffer / DesiredBufferSize)
>
> I'm sure that info is already available to the scheduler without any additional database reads.
>
> Fast users will get a bigger cache; slowpokes would get a smaller one. Eventually, as the turnaround times average out, I think everyone should have a 2-day cache, and I think the overall Out In Field count would go down a fair bit. Not to mention the Validation Inconclusive (and waiting-to-validate) counts should drop a lot, not having to wait for slow computers as much.
>
> Changing the 200 to 100, 300, etc. would let them control what work is out there and how fast they get (most of) it back.
>
> With that said, could they disable the BOINC user cache setting so that slower computers wouldn't keep requesting work if something like this were implemented?

At the moment there are two kinds of limits that the BOINC devs have designed into the server code, that I am aware of: N * CPUs, or a hard fixed limit, or a combination of the two. MilkyWay uses (nCPUs * 3) with an upper limit of 48 total CPU tasks. The BOINC options for GPU limits are pretty much the same, with a change not too long ago that applies the limit(s) to each GPU vendor type, such as ATI, Intel, or Nvidia, separately.
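The combined style described for MilkyWay (scale with CPU count, but never exceed a hard cap) amounts to something like this (illustrative Python, not actual BOINC server code; the function name is my own):

```python
def host_task_limit(n_cpus: int, per_cpu: int = 3, hard_cap: int = 48) -> int:
    """Combined limit: N tasks per CPU, capped at a fixed maximum,
    as HAL9000 describes for MilkyWay (nCPUs * 3, at most 48)."""
    return min(n_cpus * per_cpu, hard_cap)
```

So a 4-core host gets 12 tasks, while a 32-core host hits the 48-task cap rather than 96.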
Any increase to the limits will add to the "out in the field" count. Everyone wants more tasks to ride out the hiccups, but that comes at the cost of beating the snot out of the MySQL db and causing more hiccups. At the moment the MySQL db is going a bit wonky for another, unknown reason.
Rather than creating new/more problems the guys don't have time to fix, I think we should let them fix the current issues that they don't have time to fix.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1650246

©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.