credit equals time?

Garry

Send message
Joined: 7 Jul 02
Posts: 40
Credit: 535,102
RAC: 1
United States
Message 2030882 - Posted: 5 Feb 2020, 16:07:28 UTC

background: I recently had problems with too-frequent late work units. Good info at https://setiathome.berkeley.edu/forum_thread.php?id=84954. Seems fixed.

goal: Balance time among SETI, Rosetta, and Einstein.

current:
1. All projects have equal priority.

2. Statistics tab shows credit ratios approx 30:10:3 for Einstein:Rosetta:SETI.

3. The scheduler seems prone to allowing Einstein to aggressively send work units, all of them roughly 24 hours long. Rosetta gets in fairly often with work units of roughly 6 hours (set by adjusting a parameter in their project settings). When SETI gets in, it might get four units of roughly 3 hours each.

4. This minute: All tasks running are Einstein.

❓ Credit per time varies among projects. Presuming time is indeed equal, are these ratios typical? (Seems extreme.)

❓ Presuming these ratios reflect time, is there a better path than cutting Einstein's priority to 1/3 of its present value and increasing SETI's priority to a bit over 3 times its present value (targeting 10:10:10 ratios on the statistics tab)?

Thanks in advance for any answers!
ID: 2030882 · Report as offensive
Profile Mr. Kevvy Crowdfunding Project Donor * Special Project $250 donor
Volunteer moderator
Volunteer tester

Send message
Joined: 15 May 99
Posts: 3776
Credit: 1,114,826,392
RAC: 3,319
Canada
Message 2030885 - Posted: 5 Feb 2020, 16:27:22 UTC - in response to Message 2030882.  
Last modified: 5 Feb 2020, 16:28:25 UTC

Two things that I found help with work overload having Einstein@Home as a backup project:

1) Set its Resource Share to zero. Then it will only download enough work to keep each CPU core or GPU busy without keeping a cache (this doesn't apply for anyone "spoofing" their GPU count of course.)

2) Disable CPU work. As you noted they take too long to complete compared to the GPU work (and add little credit as CPU is so slow in comparison.)

Both of these settings are in the project's preferences in your Einstein@Home account page.
ID: 2030885 · Report as offensive
rob smith Crowdfunding Project Donor * Special Project $75 donor * Special Project $250 donor
Volunteer moderator
Volunteer tester

Send message
Joined: 7 Mar 03
Posts: 22188
Credit: 416,307,556
RAC: 380
United Kingdom
Message 2030891 - Posted: 5 Feb 2020, 17:04:37 UTC

2. Statistics tab shows credit ratios approx 30:10:3 for Einstein:Rosetta:SETI.


Your observation of the credit awarded is fairly typical - estimates do vary, but typically Einstein is between 10 and 15 times that of SETI, with Rosetta somewhere in between. (If you really want to chase credit, try Collatz.)

❓ Credit per time varies among projects. Presuming time is indeed equal, are these ratios typical? (Seems extreme.)


Each project is at liberty to award credit as it sees fit, so using credit to compare between projects is difficult, if not impossible.
SETI uses a credit system called "CreditNew", which is supposed to award credit proportional to the number of FLOPs a task has taken. Many other projects do not use CreditNew; they either award a fixed credit per task or some variation on credit per task duration.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 2030891 · Report as offensive
Garry

Send message
Joined: 7 Jul 02
Posts: 40
Credit: 535,102
RAC: 1
United States
Message 2031224 - Posted: 7 Feb 2020, 16:03:11 UTC - in response to Message 2030891.  

Valuable answers. Thanks. Follow-on questions, because I just noticed BOINC Manager > Projects tab > Avg. work done column.

What units does the Avg. work done column present? Something like minutes per day, maybe? Or credit per day the project awarded?

If it's not obvious from the above answer: Are the ratios there representative of the way the projects are sharing the processor? This minute, my ratios are roughly 13:6:2 for Einstein:Rosetta:SETI.

Thanks in advance.
ID: 2031224 · Report as offensive
rob smith Crowdfunding Project Donor * Special Project $75 donor * Special Project $250 donor
Volunteer moderator
Volunteer tester

Send message
Joined: 7 Mar 03
Posts: 22188
Credit: 416,307,556
RAC: 380
United Kingdom
Message 2031251 - Posted: 7 Feb 2020, 19:18:37 UTC

It is a rolling average of the daily work done, with a half-life of about ten days.
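In other words, it behaves like an exponentially decaying average. A minimal sketch in Python (assuming the roughly ten-day half-life stated above; BOINC's exact constant and update rule may differ):

# Sketch of a decaying daily average with a ~10-day half-life.
# Older contributions lose half their weight every HALF_LIFE_DAYS.
HALF_LIFE_DAYS = 10.0

def update_avg(old_avg, credit_granted, days_since_last_update):
    decay = 0.5 ** (days_since_last_update / HALF_LIFE_DAYS)
    daily_rate = credit_granted / days_since_last_update
    return decay * old_avg + (1 - decay) * daily_rate

# Example: a project granting a steady 100 credits/day pulls the average
# halfway toward 100 every ten days.
avg = 0.0
for _ in range(30):
    avg = update_avg(avg, 100, 1.0)
print(round(avg, 1))  # 87.5 after 30 days (three half-lives)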
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 2031251 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 2031255 - Posted: 7 Feb 2020, 20:04:00 UTC - in response to Message 2031251.  

It is a rolling average of the daily work done, with a half-life of about ten days.
'Work done' being measured by the credit awarded - which is not a very good measure. Disbelieve anybody who says that credit can be converted back to derive the number of FLOPs that earned it, according to the cobblestone scale.
ID: 2031255 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13727
Credit: 208,696,464
RAC: 304
Australia
Message 2031281 - Posted: 7 Feb 2020, 21:23:14 UTC - in response to Message 2031255.  

Disbelieve anybody who says that credit can be converted back to derive the number of FLOPs that earned it, according to the cobblestone scale.
... because Credit New doesn't actually award Credit according to the definition of a Cobblestone.
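For reference, the Cobblestone is nominally defined as 200 units of credit per day for a computer sustaining 1 GFLOPS, so the naive back-conversion (which, as above, granted credit does not actually honour) would be:

# Nominal Cobblestone scale: 200 credits/day at a sustained 1 GFLOPS,
# i.e. 1 credit "would be" 86400 s * 1e9 FLOPS / 200 = 4.32e11 FLOPs.
# Per the posts above, CreditNew does not grant credit this way, so treat
# this as the definition only, not a usable conversion.
FLOPS_PER_CREDIT = 86_400 * 1e9 / 200   # 4.32e11

def naive_flops_from_credit(credit):
    return credit * FLOPS_PER_CREDIT

print(f"{naive_flops_from_credit(100):.2e} FLOPs for 100 credits")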
Grant
Darwin NT
ID: 2031281 · Report as offensive
Garry

Send message
Joined: 7 Jul 02
Posts: 40
Credit: 535,102
RAC: 1
United States
Message 2031417 - Posted: 8 Feb 2020, 15:01:16 UTC - in response to Message 2031281.  

Again, much appreciated answers. 🙏

Based on what I now know, if I want to balance processor time among my three projects, I want settings that give each project an equal average number of processors in use. The best way I know to observe that is manually.

Potential avenues of further investigation: Coding something with the BOINC command line. Researching options in the BOINC XML files.
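As a starting point for the command-line idea, here is a minimal sketch in Python that counts running tasks per project by parsing boinccmd --get_tasks. The field labels ("project URL:", "active_task_state:") match recent clients but may differ by version, so treat it as a rough draft rather than a finished tool.

import subprocess
from collections import Counter

def running_tasks_per_project():
    # Ask the local client for its task list and tally the EXECUTING ones.
    out = subprocess.run(["boinccmd", "--get_tasks"],
                         capture_output=True, text=True, check=True).stdout
    counts, project = Counter(), None
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("project URL:"):
            project = line.split(":", 1)[1].strip()
        elif line.startswith("active_task_state:") and "EXECUTING" in line:
            counts[project] += 1
    return counts

for url, n in running_tasks_per_project().items():
    print(f"{n:2d} running  {url}")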

I'll see what I can do without those first. Have great days everyone! ...
ID: 2031417 · Report as offensive
Profile Kissagogo27 Special Project $75 donor

Send message
Joined: 6 Nov 99
Posts: 715
Credit: 8,032,827
RAC: 62
France
Message 2031443 - Posted: 8 Feb 2020, 16:50:18 UTC

ID: 2031443 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13727
Credit: 208,696,464
RAC: 304
Australia
Message 2031502 - Posted: 8 Feb 2020, 23:04:40 UTC - in response to Message 2031417.  

if I want to balance processor time among my three projects, I want settings that give each project an equal average number of processors in use. The best way I know to observe that is manually.
Then you are going to be very busy & very frustrated as you fight the BOINC Manager while it tries to honour your resource share settings and you actively work against its efforts.
Grant
Darwin NT
ID: 2031502 · Report as offensive
Garry

Send message
Joined: 7 Jul 02
Posts: 40
Credit: 535,102
RAC: 1
United States
Message 2033585 - Posted: 22 Feb 2020, 22:34:03 UTC - in response to Message 2031502.  

An update on my efforts to balance processor usage, in case it helps someone:

All ratios are in the form SETI:Rosetta:Einstein.

THE SHORT VERSION

Current experience is

  • BOINC Manager > Options > Computing preferences
    - Store at least 0.04 days of work (approximately 1 hour)
    - Store up to an additional 0.01 days of work (approximately 15 minutes)
  • Rosetta web site > project preferences > Target CPU run time 6 hours
  • Resource shares 3000:1000:300
  • Concurrent task limits 8:5:1 (8 threads available)
  • Task inventory is 7:2:3
  • Running tasks 5:2:1
  • The default scheduler performance has favored projects most generous in granting work credit, maybe in proportion to their generosity. That's not consistent with my goals.


The scheduler made all of the decisions that led to this status, in response to the settings documented above. Maybe there are additional factors, too.

The scheduler may be managing SETI and Rosetta to a reasonable balance, and may be balancing those two against Einstein appropriately.

As I wrote this, the scheduler changed the running tasks to 6:2:0 (1 more SETI and 1 fewer Einstein). Maybe that's responding to priority between those two. Maybe SETI still needs more tasks running than Rosetta. It seems reasonable; what I care about is the running average, not any single moment.


MORE DETAIL (Wow! You're into it! 😀)

My goal is to find settings for which the scheduler will


  • Accumulate an equal amount of processor time for each of the projects.
    - Slightly more for SETI outside of their weekly outage, to compensate for running out of SETI work once a week.
  • Minimize the number of accepted tasks completed late.
  • Use all threads unless that results in accepting too many low-priority tasks to meet the goals above.
    - Happened once with the low Einstein resource shares I'm trying to achieve and the relatively large tasks they send.
    - Maybe accepting the smaller Rosetta tasks will prove ok.


I'd have preferred the scheduler to have a task inventory of 8:4:1 by now. (That's 1 more SETI task, 2 more Rosetta tasks, and 2 fewer Einstein tasks.) The SETI tasks coming in are 4 hours each and the Rosetta tasks are about 8 hours long, so a 2:1 ratio in task numbers gives an even balance of queued task time. The Einstein tasks are around 28 hours, so a ratio of at least 6:3:1 in task numbers is closer than the existing inventory, and 8:4:1 is better.
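A quick check of those ratios from the per-task durations above (a throwaway sketch using my rough 4-, 8- and 28-hour figures):

# How many tasks of each project give roughly equal hours of queued work,
# anchored on one 28-hour Einstein task?
hours = {"SETI": 4, "Rosetta": 8, "Einstein": 28}
target = hours["Einstein"]
counts = {p: round(target / h) for p, h in hours.items()}
print(counts)   # {'SETI': 7, 'Rosetta': 4, 'Einstein': 1} -- close to 8:4:1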

There's a good argument that there's no need to balance the inventory of available task time at all. Maybe that means 6:3:1 is better. Prompt receipt of new tasks in small groups would help here; often, tasks seem to arrive in larger groups, Einstein especially.

Kudos 🏆 to Rosetta for the "target CPU run time" setting. (Thanks for mentioning it, @DarrelWilcox. If only all projects had such a setting! Or even just Einstein. That'd be pretty great.) The higher I set it, the lower the communication load on the Rosetta server, which is better for all Rosetta users. I will try settings higher than 6 hours once I get the other settings closer.

The existing resource shares reflect the information earlier in this thread that work credits for the projects are roughly 10:1 or 15:1 (Einstein:SETI), with Rosetta somewhere in between. (Thanks @robsmith.)
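For context, BOINC treats resource share as a simple proportion of the total, so the 3000:1000:300 settings amount to roughly the following split of processing resources (a sketch of the arithmetic only, not of the scheduler itself):

# Each project's long-run target fraction is its share over the total.
shares = {"SETI": 3000, "Rosetta": 1000, "Einstein": 300}
total = sum(shares.values())
for project, share in shares.items():
    print(f"{project:9s} {share / total:6.1%}")
# SETI ~69.8%, Rosetta ~23.3%, Einstein ~7.0%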

I'm controlling concurrent running tasks as suggested at https://boinc.berkeley.edu/wiki/Client_configuration > Project-level configuration > project_max_concurrent. (Thanks @Kissagogo27.) The app_config.xml files go in the project folders inside the BOINC data directory (https://boinc.berkeley.edu/wiki/BOINC_Data_directory). Example (for Einstein) at "BOINC data directory"\projects\einstein.phys.uwm.edu\app_config.xml:

<app_config>
  <project_max_concurrent>1</project_max_concurrent>
</app_config>

Work credit is not a perfect metric for this purpose. I don't know of anything else to try. I'm eager to see whether it is "good enough".

The SETI resource share of 3000 is recent. My previous setting was 2000; maybe the scheduler will accept more SETI tasks.

The Einstein resource share of 300 is recent. The previous setting was 0. 300 is 10:1 with the SETI resource share. Maybe a good starting point.

As to Einstein's resource share of 0: I confirmed @Mr. Kevvy's info earlier in this thread: it's different from "no new tasks" (umm ... really? A resource share of 0 doesn't mean "no resources"? 🤔 Maybe that's because "resource share of 0" ought to be different from "not contributing". Curious.)

@Mr. Kevvy: Thanks. I believed you. And I wanted to see what it did. Maybe not useful for my goals. Results included acceptance of an out-sized group of Einstein tasks; the current group.

The Einstein concurrent task limit of 1 is recent. I recently had it at 3, then 2, lowering it as the accepted Einstein tasks completed. Maybe my use of this limit has lowered the risk of accepting another out-sized group of tasks.

Maybe the concurrent task limits are only useful for speeding convergence of scheduling data. It'd be nice to be able to relax them.

Given the 10-day scheduler data half-life (thanks @robsmith), half of all prior decisions will be out of the scheduler data in 10 days, and half of the remainder (75% in total) in 20 days. Maybe that's a good point at which to next assess the current settings.

As to all the thanks I owed here: It's great to have a vibrant community. 🍀
ID: 2033585 · Report as offensive
Garry

Send message
Joined: 7 Jul 02
Posts: 40
Credit: 535,102
RAC: 1
United States
Message 2035898 - Posted: 5 Mar 2020, 3:00:21 UTC - in response to Message 2033585.  

Another update on my continuing efforts to equally balance time among SETI, Rosetta, and Einstein. In case it helps someone.

THE SHORT VERSION

Current experience is:

- Changes:
-- BOINC Manager > Options > Computing Preferences (see below)
--- Use at most 50% of the CPUs. (My computer has eight threads.)
--- Use at most 100% of the CPU time.
-- Number of concurrent tasks: unrestricted for all projects.

- Unchanged: (Ratios expressed in form SETI:Rosetta:Einstein.)
-- BOINC Manager > Options > Computing preferences
--- Store at least 0.04 days of work (approximately 1 hour)
--- Store up to an additional 0.01 days of work (approximately 15 minutes)
-- Rosetta web site > project preferences > Target CPU run time 6 hours
-- Resource share 30:10:3
-- Scheduler data half-life: 10 days (default)
-- Switch tasks each 360 minutes (6 hours).

Early results:
- The switch from "use 100% of CPUs" to "use 50% of CPUs" resulted in my computer running half as many BOINC tasks, but all threads of the computer remaining at nearly 100% busy. More detail below.
- The switch from "use 100% of CPUs" to "use 50% of CPUs" quickly reduced the task inventory from somewhere around 9-15 tasks to as low as 4 tasks.
- SETI tasks are arriving at this computer in such small volume, it is impossible to balance time among the three projects.
- When SETI tasks arrive, they get promptly processed and exit the system. If only they'd get prompt replacements ...


MORE DETAIL

I downloaded "Core Temp 1.15.1" (Windows) to monitor CPU temperatures. I have no reason to doubt it operates as advertised. It reported periods of excessive temperature. That motivated me to switch configuration to "use 50% of CPUs".

This package has an "overheat protection" feature. Among the possible responses, it can run a batch file if it senses overheating. I wrote one that issues a command to the BOINC command-line program (boinccmd) to stop all BOINC activity. I set it for 85 deg C; it has activated once, but only that once.
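A minimal sketch of such a script in Python (I actually used a batch file; this assumes boinccmd is on the PATH and the local client accepts the command):

import subprocess

# Tell the local BOINC client to suspend all work until the run mode is
# set back to "auto" or "always" (manually, or by another script).
subprocess.run(["boinccmd", "--set_run_mode", "never"], check=True)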

I tried TThrottle, which is advertised to throttle BOINC in response to temperature. It may not have received maintenance in a couple of years. It starts operation by running a calibration routine, and that routine drove my CPU temperatures near the "do not exceed" limit for several seconds. Not welcome here.

I'm confident that the scheduling function changed the mix of tasks sent to my computer when I changed to "use 50% of CPUs", but I'll have to operate this way for a while to have numbers. I think each project sends fewer tasks and shorter tasks.

Maybe the default 10-day half-life for scheduler data was too short for this computer when I was using "all CPUs". I know some will say this is BOINC heresy. If I cannot go back to "all CPUs", I cannot prove it either way with data.

Maybe the default 10-day half-life will work for this computer with "50% of CPUs". The shorter tasks, the smaller number of tasks sent at a time, and the less demanding deadlines may make it fine.

I'm told to expect the number of tasks completed per week to decrease with "50% of CPUs", but not by 50% as one might suspect. I'm interested to measure that effect.

News as I have it.
ID: 2035898 · Report as offensive
