Managing the SETI@home maintenance slot

David@home
Volunteer tester
Message 1894671 - Posted: 11 Oct 2017, 18:16:36 UTC

Hi

I have my cache set to 1.25 days and "store additional" to 0.1. I assumed this would store 0.625 days (1.25 / 2) worth of CPU and GPU tasks each, which should see me through the maintenance windows for SETI@home, but it doesn't.

What does the "store additional work" setting do? It is not very clear from the instructions, e.g. does it mean I should have a cache of 1.35 days in total?

During the last two maintenance windows I have run out of CPU tasks but still had lots of GPU tasks in the cache. Is there a way to control the cache on a per-application basis? BOINC Manager doesn't seem to distribute the cache size evenly across CPU and GPU.

Related to this, in the past I tried setting a second project (Skynet POGS) to 0 resource share so it would run only when the CPU is free. Unfortunately POGS keeps downloading work, which means the cache fills with POGS work and SETI gets pushed out, as a full cache means there is no need to download new work from SETI. Is this a problem with the way POGS behaves, or have I got this set up wrong for a backup project that should only run when spare CPU is available?

Any help to improve my understanding of the cache settings would be appreciated.
ID: 1894671
rob smith
Volunteer moderator
Volunteer tester
Message 1894691 - Posted: 11 Oct 2017, 19:32:43 UTC

The additional work setting controls the amount of work in a fetch session, well, sort of.
A small setting means you get regular small amounts of work, while a large setting means you get infrequent, large amounts of work. But it doesn't always work the way we would expect it to as there is an absolute limit of 100 tasks for the CPU and 100 tasks per GPU. Three of my four computers always run out of GPU work during the weekly outage, and two of them get very close to running out of CPU work, so I just let them divert to other projects when they run out of SETI work.
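To put rough numbers on it, here is a minimal sketch in Python of how the two cache preferences ("store at least X days of work" and "store up to an additional Y days") are usually described as combining. It is only a simplified model, not BOINC's actual work-fetch code, and it ignores per-resource scheduling and the 100-task limits:

    # Simplified model of the two cache preferences (not BOINC's real code).
    MIN_DAYS = 1.25          # "Store at least ... days of work"
    ADDITIONAL_DAYS = 0.10   # "Store up to an additional ... days of work"

    def days_to_request(buffered_days: float) -> float:
        """Days of work the client would ask for in this simplified model.

        No request is made while the buffer is above MIN_DAYS; once it drops
        below, the client tops up to MIN_DAYS + ADDITIONAL_DAYS, so the cache
        cycles between roughly 1.25 and 1.35 days.
        """
        if buffered_days >= MIN_DAYS:
            return 0.0
        return (MIN_DAYS + ADDITIONAL_DAYS) - buffered_days

    print(days_to_request(1.30))  # 0.0  -> still above the minimum, no fetch
    print(days_to_request(1.20))  # ~0.15 -> top up to roughly 1.35 days

In other words, with 1.25 / 0.1 the client aims for roughly 1.25 to 1.35 days of work, not 0.625 days, and in practice the 100-task limits usually cap a fast machine well below that anyway.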

I too have found POGS to be a very badly behaved project, so I don't use it as a backup project these days.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1894691
David@home
Volunteer tester
Message 1894786 - Posted: 12 Oct 2017, 6:35:14 UTC - in response to Message 1894691.  

Thanks Rob,

I didn't know about the 100 CPU/GPU task limit. Is this a grand total across all projects?

I like the science of POGS, but it doesn't work as a backup project due to the way it downloads work.

I tried Rosetta again; it was one of my main projects when I stopped BOINC 10 years ago. Rosetta looks to work OK for work unit downloads in my brief test, but alas it only gives half the credit that SETI does, so Rosetta is out as a second project. I need to think a bit more about using Rosetta as a backup project: at 8 hours per work unit, Rosetta could take over the CPUs when there is no SETI work, so it would then be a long time before SETI was crunched once work became available again.

Maybe I should just let my PC go idle if I run out of work in the maintenance slots.
ID: 1894786
rob smith
Volunteer moderator
Volunteer tester
Message 1894792 - Posted: 12 Oct 2017, 7:42:41 UTC

The 100-per-resource limit is a SETI thing; other projects may or may not have their own limits.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1894792
David@home
Volunteer tester
Message 1894995 - Posted: 13 Oct 2017, 6:32:10 UTC

I am seeing this message in the event log:

13/10/2017 07:26:48 | SETI@home | This computer has reached a limit on tasks in progress

I never understood that message before; now I believe it is because the computer has reached one of the 100-task CPU/GPU limits.
ID: 1894995
Bernie Vine
Volunteer moderator
Volunteer tester
Message 1894996 - Posted: 13 Oct 2017, 7:11:52 UTC - in response to Message 1894995.  

I am seeing this message in the event log:

13/10/2017 07:26:48 | SETI@home | This computer has reached a limit on tasks in progress

I never understood that message before; now I believe it is because the computer has reached one of the 100-task CPU/GPU limits.

Yes, that is correct.
ID: 1894996
Chris904395093209d
Volunteer tester
Message 1908372 - Posted: 21 Dec 2017, 23:09:09 UTC - in response to Message 1894671.  


Related to this, in the past I tried setting a second project (Skynet POGS) to 0 resource share so it would run only when the CPU is free. Unfortunately POGS keeps downloading work, which means the cache fills with POGS work and SETI gets pushed out, as a full cache means there is no need to download new work from SETI. Is this a problem with the way POGS behaves, or have I got this set up wrong for a backup project that should only run when spare CPU is available?


I too had another project plugged into BOINC, with my resource share set to 90 for SETI and 10 for EINSTEIN. That ended up with hundreds of EINSTEIN work units. I just did this last Sunday and have yet to work on any SETI work units. I was hoping for a balance of 1 EINSTEIN to 10 SETI (or something along those lines).

I used to have my resource share set to 1000 for SETI and 0 for EINSTEIN. That was the best setting for having my PCs work on EINSTEIN only when I had run out of SETI work units and couldn't download any more.
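For what it's worth, my rough reading of the preference (not an official formula) is that resource share only sets a long-term target fraction of processing time, so a 90/10 split doesn't stop the client queueing lots of EINSTEIN tasks in the short term. A small Python sketch of that reading:

    # Resource share as a long-term target fraction of processing time:
    #     fraction_i = share_i / sum(all shares)
    shares = {"SETI@home": 90, "Einstein@Home": 10}
    total = sum(shares.values())
    for project, share in shares.items():
        print(f"{project}: {share / total:.0%} of processing time (long-term target)")

    # A share of 0 is treated differently: it marks a backup project, which is
    # only asked for work when no other attached project can supply any.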
~Chris

ID: 1908372
Zalster
Volunteer tester
Message 1908404 - Posted: 22 Dec 2017, 3:46:32 UTC - in response to Message 1908372.  

Unfortunately it doesn't always work that way. If your desire is to run E@H when S@H is down for the maintenance period, it's better to just set SETI at 100 and Einstein at 0, so that when you run out of work you can get Einstein work.

Otherwise, I might suggest alternating the days you run one over the other: on the day you want to run Einstein, just set S@H not to accept new work (or suspend it) in the advanced view of BOINC, and it will switch to your other project, as long as that one is set to allow new work and is not suspended. You can alternate between the different projects this way. I'm sure some will talk to you about "debt" in projects; I find this way easier to split time on the PC.
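If you want to automate the alternating-days approach instead of clicking in the Manager, the toggling can be scripted with the boinccmd command-line tool's nomorework / allowmorework operations. This is just a minimal sketch: the project URLs below are placeholders for whatever you are actually attached to, and it assumes boinccmd is on the PATH and talking to the local client:

    import subprocess
    from datetime import date

    # Placeholder project URLs -- replace with the URLs you are actually attached to.
    SETI = "http://setiathome.berkeley.edu/"
    EINSTEIN = "http://einstein.phys.uwm.edu/"

    def set_new_tasks(project_url: str, allow: bool) -> None:
        """Toggle a project's 'no new tasks' flag through the boinccmd CLI."""
        op = "allowmorework" if allow else "nomorework"
        subprocess.run(["boinccmd", "--project", project_url, op], check=True)

    # Even days of the month fetch SETI work, odd days fetch Einstein work.
    seti_day = date.today().day % 2 == 0
    set_new_tasks(SETI, seti_day)
    set_new_tasks(EINSTEIN, not seti_day)

Run once a day from a scheduled task, that gives a day-on / day-off split without touching the resource shares.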
ID: 1908404
Darrell Wilcox
Volunteer tester
Message 1908547 - Posted: 23 Dec 2017, 3:04:06 UTC - in response to Message 1894786.  

@ David@home:
... at 8 hours per work unit Rosetta could take over the CPUs ...

Just to clarify a bit, the run time per WU in Rosetta is settable by the user under "Rosetta@home preferences", in the "Target CPU run time" field. 8 hours/WU is the default if none is set; values range from 1 hour to 1 day.
ID: 1908547
