New Credit Adjustment?

Message boards : Number crunching : New Credit Adjustment?

Brian Silvers

Send message
Joined: 11 Jun 99
Posts: 1681
Credit: 492,052
RAC: 0
United States
Message 791621 - Posted: 2 Aug 2008, 22:48:56 UTC - in response to Message 791613.  


Note that the change that started this thread only adjusts SETI@Home, and does not reference any other project.


The goal was to go BOINC-wide from the get-go. It does not matter that it was not known at the start of the thread. It is what it is...
ID: 791621
NewtonianRefractor
Volunteer tester

Send message
Joined: 19 Sep 04
Posts: 495
Credit: 225,412
RAC: 0
United States
Message 791722 - Posted: 3 Aug 2008, 0:43:16 UTC
Last modified: 3 Aug 2008, 0:48:22 UTC

Can somebody please summarize the credit change that happened? I was away from SETI for about 3 weeks, so I missed the initial uproar. This thread has gotten really long (over 200 posts).
So can somebody please do a quick recap?

I remember that before, there were WUs in the 18~19 credit range, some in the ~52 credit range, some in the ~70 range, and very few in the 90+ range. What does it look like now / what has changed? I am currently crunching at climateprediction.net and am looking to switch back to SETI when the 20-day WU there is done.
ID: 791722
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 791794 - Posted: 3 Aug 2008, 2:33:21 UTC - in response to Message 791621.  


Note that the change that started this thread only adjusts SETI@Home, and does not reference any other project.


The goal was to go BOINC-wide from the get-go. It does not matter that it was not known at the start of the thread. It is what it is...

If you have two projects, "A" and "B", and the code adjusts project "A" by referencing the data in the project "A" database, then that's what it does; if A moves relative to B, it is because the multiplier more closely matches the benchmark * time credit.

If project "B" uses benchmark * time, and compares granted credit to benchmark * time, then over a 30-day period it seems that the multiplier is going to be 1.
ID: 791794
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 791798 - Posted: 3 Aug 2008, 2:37:00 UTC - in response to Message 791722.  

Can somebody please summarize the credit change that happened? I was away from SETI for about 3 weeks, so I missed the initial uproar. This thread has gotten really long (over 200 posts).
So can somebody please do a quick recap?

I remember that before, there were WUs in the 18~19 credit range, some in the ~52 credit range, some in the ~70 range, and very few in the 90+ range. What does it look like now / what has changed? I am currently crunching at climateprediction.net and am looking to switch back to SETI when the 20-day WU there is done.

There is some new code that finds the median machine, compares the FLOP-count-based credit to what the benchmark * time credit would have been, and slowly adjusts the credit rate to make them match.

This will also help Multibeam and Astropulse grant credit at the same rate.

If you read Eric Korpela's posts in this thread, you'll see that this is official.
ID: 791798
W-K 666 Project Donor
Volunteer tester

Send message
Joined: 18 May 99
Posts: 19989
Credit: 40,757,560
RAC: 67
United Kingdom
Message 791804 - Posted: 3 Aug 2008, 2:41:48 UTC - in response to Message 791722.  

Can somebody please summarize the credit change that happened? I was away from SETI for about 3 weeks, so I missed the initial uproar. This thread has gotten really long (over 200 posts).
So can somebody please do a quick recap?

I remember that before, there were WUs in the 18~19 credit range, some in the ~52 credit range, some in the ~70 range, and very few in the 90+ range. What does it look like now / what has changed? I am currently crunching at climateprediction.net and am looking to switch back to SETI when the 20-day WU there is done.


On how it works (stolen from Richard, who got it from ?????)

The script looks at a day's worth of returned results for each app (up
to 10 000). It calculates the granted credit per unit CPU time for each
host that returned one of these results and finds the median (over all
hosts) of the ratio of granted credit to the credit that would have
been granted based upon the benchmarks. A 30 day moving average of
that ratio is maintained in the database. The scheduler multiplies
claimed credit by the value of that ratio (on the day the result was
sent to the host rather than the day it was returned. Think of it as
a contract.). A median is used rather than an average to avoid
problems where hosts claim zero CPU time or are granted 1e+25 credits
or claim they can do 1e+37 integer operations per second. It also
will be relatively unaffected by optimized apps unless more than half
the people in the project are using them.


And Eric's main post is post 790521
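For those who follow code more easily than prose, the mechanism in that email can be sketched roughly like this. The data and function names below are hypothetical; the real script works against the project's result database:

```python
from statistics import median

def daily_median_ratio(results):
    """results: (granted_credit, benchmark_claimed_credit) pairs, one per host.

    A median rather than a mean means a few hosts claiming zero CPU time
    or absurd benchmark figures cannot drag the value around."""
    return median(granted / claimed for granted, claimed in results if claimed > 0)

def current_multiplier(daily_ratios, window=30):
    """30-day moving average of the daily median ratios; per the email, the
    scheduler multiplies claimed credit by this value, fixed on the day the
    result was sent (the 'contract')."""
    recent = daily_ratios[-window:]
    return sum(recent) / len(recent)

# Toy day: nine typical hosts granted ~2.8x their benchmark claim, plus one
# host with a wildly inflated benchmark (its ratio is near zero).
one_day = [(28.0, 10.0)] * 9 + [(28.0, 1.0e6)]
multiplier = current_multiplier([daily_median_ratio(one_day)])
```

Note how the single outlier host leaves the median, and hence the multiplier, untouched at about 2.8, which is exactly the robustness the email describes.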
ID: 791804
Brian Silvers

Send message
Joined: 11 Jun 99
Posts: 1681
Credit: 492,052
RAC: 0
United States
Message 791883 - Posted: 3 Aug 2008, 5:34:00 UTC - in response to Message 791794.  
Last modified: 3 Aug 2008, 6:03:07 UTC


Note that the change that started this thread only adjusts SETI@Home, and does not reference any other project.


The goal was to go BOINC-wide from the get-go. It does not matter that it was not known at the start of the thread. It is what it is...

If you have two projects, "A" and "B", and the code adjusts project "A" by referencing the data in the project "A" database, then that's what it does; if A moves relative to B, it is because the multiplier more closely matches the benchmark * time credit.

If project "B" uses benchmark * time, and compares granted credit to benchmark * time, then over a 30-day period it seems that the multiplier is going to be 1.


The underlying data has problems... You cannot / will not consider this. No matter how sound the theory may or may not be, a function applied to a flawed data set yields flawed output -- that is, if one is expecting output that is not flawed. GIGO (Garbage In, Garbage Out).
ID: 791883
W-K 666 Project Donor
Volunteer tester

Send message
Joined: 18 May 99
Posts: 19989
Credit: 40,757,560
RAC: 67
United Kingdom
Message 791928 - Posted: 3 Aug 2008, 8:45:10 UTC - in response to Message 791883.  
Last modified: 3 Aug 2008, 8:46:18 UTC


Note that the change that started this thread only adjusts SETI@Home, and does not reference any other project.


The goal was to go BOINC-wide from the get-go. It does not matter that it was not known at the start of the thread. It is what it is...

If you have two projects, "A" and "B", and the code adjusts project "A" by referencing the data in the project "A" database, then that's what it does; if A moves relative to B, it is because the multiplier more closely matches the benchmark * time credit.

If project "B" uses benchmark * time, and compares granted credit to benchmark * time, then over a 30-day period it seems that the multiplier is going to be 1.


The underlying data has problems... You cannot / will not consider this. No matter how sound the theory may or may not be, a function applied to a flawed data set yields flawed output -- that is, if one is expecting output that is not flawed. GIGO (Garbage In, Garbage Out).

As I understand it (and I am not claiming to be right), projects are not going to be adjusting to each other. The projects will be adjusting so that their median computer claims credits in line with its performance relative to the original concept of the mythical computer with 1000/1000 benchmarks and 100 cr/day performance. It will not matter how the projects claim credit: fixed, BM * time, or flop counting. And all our real computers doing a project will line up depending on their performance relative to that project's median computer.

It is from that understanding that I asked the question about different projects having different median computers.
ID: 791928
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 14690
Credit: 200,643,578
RAC: 874
United Kingdom
Message 791933 - Posted: 3 Aug 2008, 9:11:57 UTC - in response to Message 791804.  

On how it works (stolen from Richard, who got it from ?????)

Eric's email to the BOINC Development mailing list on 23 July 2008.
The script looks at a day's worth of returned results for each app (up
to 10 000). It calculates the granted credit per unit CPU time for each
host that returned one of these results and finds the median (over all
hosts) of the ratio of granted credit to the credit that would have
been granted based upon the benchmarks. A 30 day moving average of
that ratio is maintained in the database. The scheduler multiplies
claimed credit by the value of that ratio (on the day the result was
sent to the host rather than the day it was returned. Think of it as
a contract.). A median is used rather than an average to avoid
problems where hosts claim zero CPU time or are granted 1e+25 credits
or claim they can do 1e+37 integer operations per second. It also
will be relatively unaffected by optimized apps unless more than half
the people in the project are using them.


And Eric's main post is post 790521

ID: 791933
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 792080 - Posted: 3 Aug 2008, 15:50:34 UTC - in response to Message 791883.  


Note that the change that started this thread only adjusts SETI@Home, and does not reference any other project.


The goal was to go BOINC-wide from the get-go. It does not matter that it was not known at the start of the thread. It is what it is...

If you have two projects, "A" and "B", and the code adjusts project "A" by referencing the data in the project "A" database, then that's what it does; if A moves relative to B, it is because the multiplier more closely matches the benchmark * time credit.

If project "B" uses benchmark * time, and compares granted credit to benchmark * time, then over a 30-day period it seems that the multiplier is going to be 1.


The underlying data has problems... You cannot / will not consider this. No matter how sound the theory may or may not be, a function applied to a flawed data set yields flawed output -- that is, if one is expecting output that is not flawed. GIGO (Garbage In, Garbage Out).

You are comparing the complete result database at a single project to the cross-project comparison at BOINCstats.

The result table at SETI contains every work unit that has yet to transition. It does not contain information about any other project.

The cross-project comparison at BOINCstats is a compilation from the XML statistics published by various projects. It does not contain anything about individual work units.

I have considered it. They aren't the same.


ID: 792080
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 792086 - Posted: 3 Aug 2008, 16:00:05 UTC - in response to Message 791928.  
Last modified: 3 Aug 2008, 16:39:20 UTC

As I understand it (and I am not claiming to be right), projects are not going to be adjusting to each other. The projects will be adjusting so that their median computer claims credits in line with its performance relative to the original concept of the mythical computer with 1000/1000 benchmarks and 100 cr/day performance. It will not matter how the projects claim credit: fixed, BM * time, or flop counting. And all our real computers doing a project will line up depending on their performance relative to that project's median computer.

It is from that understanding that I asked the question about different projects having different median computers.

It seems to me that the whole problem comes from flop counting vs. benchmark * time credit. (edit: actually, any method that isn't benchmark * time, including any fixed credit scheme)

In benchmark * time, we know what the predicted performance is, and we know how long it took. It is a problem if a project uses a lot of a single operation that is particularly fast (or particularly slow) on a given CPU type.

By counting flops, we get a very accurate "count" of what was done, but there is a scaling factor needed to make credit comparable to benchmark * time.

... currently, that scaling factor is determined experimentally, based on a sample that may not represent the main project.

The assumption (and I think it's valid, but it's an assumption) is that the median computer is representative, and that it does not have an advantage or disadvantage because of CPU architecture. Take a weighted average of 30 median computers over a 30-day period, and the number won't vary a lot (VLAR and VHAR units are the most likely source of what variation there is).

If a project uses benchmark * time, then the ratio between the granted credit and the calculated benchmark * time score is going to average out at 1:1.
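That last point is easy to sanity-check with toy numbers. Everything below is made up, and the 100/86400 factor is just the standard "cobblestone" definition (100 credits per day on a 1 GFLOPS machine), not anything project-specific:

```python
from statistics import median

COBBLESTONES_PER_GFLOPS_SEC = 100.0 / 86400.0  # 100 credits per GFLOPS-day

def benchmark_claim(gflops, cpu_seconds):
    """Benchmark * time credit claim for one result."""
    return gflops * cpu_seconds * COBBLESTONES_PER_GFLOPS_SEC

# Hypothetical hosts on a project "B" that grants exactly benchmark * time:
hosts = [(1.0, 3600.0), (2.5, 7200.0), (0.8, 9000.0)]  # (GFLOPS, CPU seconds)
granted = [benchmark_claim(g, s) for g, s in hosts]

# Ratio of granted credit to the benchmark-based claim, per host. Since the
# grant *is* the benchmark claim, every ratio is exactly 1, so the median
# (and any 30-day average of it) is pinned at 1.
ratio = median(gr / benchmark_claim(g, s) for gr, (g, s) in zip(granted, hosts))
```

So a pure benchmark * time project would see its multiplier settle at 1 by construction, which is the point being made above.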
ID: 792086
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 14690
Credit: 200,643,578
RAC: 874
United Kingdom
Message 792127 - Posted: 3 Aug 2008, 17:33:56 UTC

Now that I'm posting here again, let me chip in to this discussion with two clarifications that I've been trying to get clear in my own head over at Beta.

1) Median
Difficult as the concept is, Eric's script doesn't have any reference to a median computer. If you read the email Winterknight stole (!) from my public postings a few messages ago, the median is "(over all hosts) of the ratio of granted credit to the credit that would have been granted based upon the benchmarks." So it's a number: there are graphs over at Beta, showing individual processor types as points, and the median as a line. There never will be any such thing as the 'median computer'. As I said to Winterknight at Beta, the median ratio will be typical of, but not defined by, any particular processor type.

2) Multiplier
Just to be clear (there was ambiguity earlier in this thread): there are now two multipliers at work.

One has been in place for the last 12 months, and has remained static at 2.85 all that time. It operates at the 'per WU' level: it could, in principle, be used to level out the credit per hour rate of WUs of different angle ranges.

Everything in that last sentence is very much SETI-specific: you can't even talk about the 'angle range' of an Astropulse task. It doesn't have one. So this multiplier has no meaning at any other project.

The new multiplier you've been discussing here, on the other hand, works at the application level, and could be used at any project. So there will be, or should be, a separate multiplier gradually normalising the average of all SETI multibeam tasks, and another one gradually doing the same for all Astropulse tasks. I will be watching, in debug mode, to check that they are each working to achieve their stated aims, but I don't quarrel at all with the principle.
ID: 792127
Pilot

Send message
Joined: 18 May 99
Posts: 534
Credit: 5,475,482
RAC: 0
Message 792172 - Posted: 3 Aug 2008, 18:18:03 UTC - in response to Message 792086.  
Last modified: 3 Aug 2008, 18:18:39 UTC

As I understand it (and I am not claiming to be right), projects are not going to be adjusting to each other. The projects will be adjusting so that their median computer claims credits in line with its performance relative to the original concept of the mythical computer with 1000/1000 benchmarks and 100 cr/day performance. It will not matter how the projects claim credit: fixed, BM * time, or flop counting. And all our real computers doing a project will line up depending on their performance relative to that project's median computer.

It is from that understanding that I asked the question about different projects having different median computers.

It seems to me that the whole problem comes from flop counting vs. benchmark * time credit. (edit: actually, any method that isn't benchmark * time, including any fixed credit scheme)

In benchmark * time, we know what the predicted performance is, and we know how long it took. It is a problem if a project uses a lot of a single operation that is particularly fast (or particularly slow) on a given CPU type.

By counting flops, we get a very accurate "count" of what was done, but there is a scaling factor needed to make credit comparable to benchmark * time.

... currently, that scaling factor is determined experimentally, based on a sample that may not represent the main project.

The assumption (and I think it's valid, but it's an assumption) is that the median computer is representative, and that it does not have an advantage or disadvantage because of CPU architecture. Take a weighted average of 30 median computers over a 30-day period, and the number won't vary a lot (VLAR and VHAR units are the most likely source of what variation there is).

If a project uses benchmark * time, then the ratio between the granted credit and the calculated benchmark * time score is going to average out at 1:1.

Ha ha, it seems like a great deal of time is being devoted to inflating/deflating credit, the currency for thanking people who donate to the science done by BOINC. I do hope equal amounts of time are dedicated to thinking and rethinking the science itself ;)
When we finally figure it all out, all the rules will change and we can start all over again.
ID: 792172
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 792177 - Posted: 3 Aug 2008, 18:24:52 UTC - in response to Message 792172.  


Ha ha, it seems like a great deal of time is being devoted to inflating/deflating credit, the currency for thanking people who donate to the science done by BOINC. I do hope equal amounts of time are dedicated to thinking and rethinking the science itself ;)

If people cared as much about the science as we apparently do about credit, then this wouldn't matter.
ID: 792177
Brian Silvers

Send message
Joined: 11 Jun 99
Posts: 1681
Credit: 492,052
RAC: 0
United States
Message 792181 - Posted: 3 Aug 2008, 18:30:41 UTC - in response to Message 791928.  


As I understand it (and I am not claiming to be right), projects are not going to be adjusting to each other. The projects will be adjusting so that their median computer claims credits in line with its performance relative to the original concept of the mythical computer with 1000/1000 benchmarks and 100 cr/day performance. It will not matter how the projects claim credit: fixed, BM * time, or flop counting. And all our real computers doing a project will line up depending on their performance relative to that project's median computer.

It is from that understanding that I asked the question about different projects having different median computers.


...bringing us back to the idea of "slow host project shopping". Also, would it be of benefit to a fast machine to intentionally fudge the benchmark lower? How about as a collective effort by a group?
ID: 792181
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 792182 - Posted: 3 Aug 2008, 18:30:59 UTC - in response to Message 792127.  

Now that I'm posting here again, let me chip in to this discussion with two clarifications that I've been trying to get clear in my own head over at Beta.

1) Median
Difficult as the concept is, Eric's script doesn't have any reference to a median computer. If you read the email Winterknight stole (!) from my public postings a few messages ago, the median is "(over all hosts) of the ratio of granted credit to the credit that would have been granted based upon the benchmarks." So it's a number: there are graphs over at Beta, showing individual processor types as points, and the median as a line. There never will be any such thing as the 'median computer'. As I said to Winterknight at Beta, the median ratio will be typical of, but not defined by, any particular processor type.
Thanks for reminding us. This is an important point.

2) Multiplier
Just to be clear (there was ambiguity earlier in this thread): there are now two multipliers at work.

One has been in place for the last 12 months, and has remained static at 2.85 all that time. It operates at the 'per WU' level: it could, in principle, be used to level out the credit per hour rate of WUs of different angle ranges.

Everything in that last sentence is very much SETI-specific: you can't even talk about the 'angle range' of an Astropulse task. It doesn't have one. So this multiplier has no meaning at any other project.
I'm a bit surprised. Does Astropulse count flops? It seems that some scaling would be in order to account for the instruction mix ("hard" floating point ops vs. "easy" floating point ops).

The new multiplier you've been discussing here, on the other hand, works at the application level, and could be used at any project. So there will be, or should be, a separate multiplier gradually normalising the average of all SETI multibeam tasks, and another one gradually doing the same for all Astropulse tasks. I will be watching, in debug mode, to check that they are each working to achieve their stated aims, but I don't quarrel at all with the principle.

I suspect Eric will be running his graphs as well, and that the appropriate lines should converge....
ID: 792182
Brian Silvers

Send message
Joined: 11 Jun 99
Posts: 1681
Credit: 492,052
RAC: 0
United States
Message 792193 - Posted: 3 Aug 2008, 18:39:18 UTC - in response to Message 792177.  


Ha ha, it seems like a great deal of time is being devoted to inflating/deflating credit, the currency for thanking people who donate to the science done by BOINC. I do hope equal amounts of time are dedicated to thinking and rethinking the science itself ;)

If people cared as much about the science as we apparently do about credit, then this wouldn't matter.


As someone, somewhere, pointed out, if that were true, people wouldn't publish papers on the research with their names on them. Continuing that line of thinking, none of us would desire to be called "Dr.", nor would there be any need for framed degrees in our offices, trophies, etc., etc., etc...

As the character "Mouse" said in "The Matrix":

"To deny our impulses is to deny the very thing that makes us human."
ID: 792193
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 14016
Credit: 208,696,464
RAC: 304
Australia
Message 792351 - Posted: 3 Aug 2008, 22:06:09 UTC - in response to Message 792193.  

As the character "Mouse" said in "The Matrix":

"To deny our impulses is to deny the very thing that makes us human."

Being able to control our impulses is what makes us more than just another animal, more than just a small child.
Grant
Darwin NT
ID: 792351
Fred W
Volunteer tester

Send message
Joined: 13 Jun 99
Posts: 2524
Credit: 11,954,210
RAC: 0
United Kingdom
Message 792364 - Posted: 3 Aug 2008, 22:34:04 UTC - in response to Message 792193.  


Ha ha, it seems like a great deal of time is being devoted to inflating/deflating credit, the currency for thanking people who donate to the science done by BOINC. I do hope equal amounts of time are dedicated to thinking and rethinking the science itself ;)

If people cared as much about the science as we apparently do about credit, then this wouldn't matter.


As someone, somewhere, pointed out, if that were true, people wouldn't publish papers on the research with their names on them. Continuing that line of thinking, none of us would desire to be called "Dr.", nor would there be any need for framed degrees in our offices, trophies, etc., etc., etc...

As the character "Mouse" said in "The Matrix":

"To deny our impulses is to deny the very thing that makes us human."

Going off at a tangent, again, Brian!
If your employment is in academia, then without your name attached to published papers your prospects are severely limited; i.e. a direct commercial benefit.
Framed certificates of qualification in the office tend to reassure clients that you are, indeed, qualified to practice in your chosen profession; i.e. a direct commercial benefit.
And most of my acquaintances who have higher degrees use their titles or letters of qualification only when applying for jobs or countersigning passports, etc.

Boinc credits have no "real-world" benefit whatsoever so the parallel does not hold, I'm afraid.

F.

ID: 792364
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 14690
Credit: 200,643,578
RAC: 874
United Kingdom
Message 792382 - Posted: 3 Aug 2008, 23:09:16 UTC - in response to Message 792182.  

2) Multiplier
Just to be clear (there was ambiguity earlier in this thread): there are now two multipliers at work.

One has been in place for the last 12 months, and has remained static at 2.85 all that time. It operates at the 'per WU' level: it could, in principle, be used to level out the credit per hour rate of WUs of different angle ranges.

Everything in that last sentence is very much SETI-specific: you can't even talk about the 'angle range' of an Astropulse task. It doesn't have one. So this multiplier has no meaning at any other project.

I'm a bit surprised. Does Astropulse count flops? It seems that some scaling would be in order to account for the instruction mix ("hard" floating point ops vs. "easy" floating point ops).

Let's be clear about this too. Neither the Astropulse application, nor the Enhanced Multibeam application, actually 'counts' floating point operations.

SAH_enh tots up blocks of presumed flops in bulk as it goes through its various stages of processing. That approximation gives great consistency between hosts on tasks of the same AR, but falls down when comparison is made between different ARs - the variation between credit rates per hour at different ARs is highly significant. However, we all get the same random allocation of WUs from the pot, and each host's credit claim will be averaged out over time.

Astropulse, on the other hand, has a very consistent processing cycle on all WUs (no AR variation, as I said before). So consistent, that the developers have resorted to the bluntest of blunt instruments:

#define TOTAL_FLOPS 6.21524e+14

(thanks to Urs Echternacht for discovering and reporting this)

So for hosts running BOINC v5.2.7 and above, credit claims will be reported using the FlopCount mechanism, and will be absolutely consistent across all hosts, to the last decimal place. That makes it feasible for the application multiplier to be the only mechanism relied on for normalisation.
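The arithmetic behind that fixed claim is worth spelling out. The only figure below taken from the thread is TOTAL_FLOPS; the conversion factor is an assumption of the standard cobblestone definition (100 credits per day on a 1 GFLOPS machine), not anything from the Astropulse source:

```python
TOTAL_FLOPS = 6.21524e14  # the per-task constant quoted above

# Cobblestone definition (assumed): a 1 GFLOPS host earns 100 credits
# per 86400-second day.
CREDITS_PER_FLOP = 100.0 / (86400.0 * 1.0e9)

claim = TOTAL_FLOPS * CREDITS_PER_FLOP
# Every host reports this same claim regardless of its benchmarks, which is
# why a single application-level multiplier is enough to normalise it.
print(f"fixed Astropulse claim: {claim:.1f} credits")
```

Under that assumed definition, the claim works out to a constant just over 700 credits per task before any multiplier is applied.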
ID: 792382
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 792398 - Posted: 3 Aug 2008, 23:36:55 UTC - in response to Message 792382.  
Last modified: 3 Aug 2008, 23:37:14 UTC

2) Multiplier
Just to be clear (there was ambiguity earlier in this thread): there are now two multipliers at work.

One has been in place for the last 12 months, and has remained static at 2.85 all that time. It operates at the 'per WU' level: it could, in principle, be used to level out the credit per hour rate of WUs of different angle ranges.

Everything in that last sentence is very much SETI-specific: you can't even talk about the 'angle range' of an Astropulse task. It doesn't have one. So this multiplier has no meaning at any other project.

I'm a bit surprised. Does Astropulse count flops? It seems that some scaling would be in order to account for the instruction mix ("hard" floating point ops vs. "easy" floating point ops).

Let's be clear about this too. Neither the Astropulse application, nor the Enhanced Multibeam application, actually 'counts' floating point operations.

Actually counting each individual "flop" would be quite difficult, and would probably make it impossible for optimized apps to use other FFT libraries unless the source was available.

Literally counting flops would also slow the app down -- a lot.


SAH_enh tots up blocks of presumed flops in bulk as it goes through its various stages of processing. That approximation gives great consistency between hosts on tasks of the same AR, but falls down when comparison is made between different ARs - the variation between credit rates per hour at different ARs is highly significant. However, we all get the same random allocation of WUs from the pot, and each host's credit claim will be averaged out over time.

Astropulse, on the other hand, has a very consistent processing cycle on all WUs (no AR variation, as I said before). So consistent, that the developers have resorted to the bluntest of blunt instruments:

#define TOTAL_FLOPS 6.21524e+14

(thanks to Urs Echternacht for discovering and reporting this)

Cool. Since this is a straight define, I suspect that whatever "multiplier" might be needed is buried in the #define -- but either way, it's either the right number, or it isn't, and if it isn't, then the self-adjusting credit that Dr. Korpela implemented will handle it.


So for hosts running BOINC v5.2.7 and above, credit claims will be reported using the FlopCount mechanism, and will be absolutely consistent across all hosts, to the last decimal place. That makes it feasible for the application multiplier to be the only mechanism relied on for normalisation.

ID: 792398


 
©2026 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.