Message boards :
Number crunching :
New Credit Adjustment?
| Author | Message |
|---|---|
|
Brian Silvers Send message Joined: 11 Jun 99 Posts: 1681 Credit: 492,052 RAC: 0
|
The goal was to go BOINC-wide from get-go. It does not matter that it was not known at the start of the thread. It is what it is... |
|
NewtonianRefractor Send message Joined: 19 Sep 04 Posts: 495 Credit: 225,412 RAC: 0
|
Can somebody please summarize the credit change that happened? I was away from seti for about 3 weeks, so I missed the initial uproar. This thread has gotten really long (over 200 posts), so can somebody please do a quick recap? I remember that before there were WUs in the 18~19 credit range, in the ~52 credit range, some in the ~70 range, and very few in the 90+ range. What does it look like now / what has changed? I am currently crunching at climateprediction.net and am looking to switch back to seti when the 20-day WU there is done.
|
|
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0
|
If you have two projects, "A" and "B", and the code adjusts project "A" by referencing the data in the project "A" database, then that's what it does; if A moves relative to B, it is because the multiplier more closely matches the benchmark * time credit. If project "B" uses benchmark * time, and compares granted credit to benchmark * time, then over a 30-day period the multiplier is going to settle at 1. |
|
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0
|
> Can somebody please summarize the credit change that happened? I was away from seti for about 3 weeks so have missed the initial uproar. This thread has gotten really long (over 200 posts).

There is some new code that finds the median machine, compares the FLOP-count-based credit to what the benchmark * time credit would have been, and slowly adjusts the credit rate to make them match. This will also help Multibeam and Astropulse grant credit at the same rate. If you read Eric Korpela's posts in this thread, that is official. |
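The mechanism described above can be sketched in a few lines (a hedged illustration only: the function name, the smoothing constant, and the exact update rule are my assumptions, not the actual server code):

```python
import statistics

def adjust_multiplier(day_results, multiplier, smoothing=1 / 30):
    """One daily pass of the idea described above.

    day_results: (granted_credit, benchmark_times_time_credit) pairs
    for one day's validated tasks.  The median ratio of granted credit
    to benchmark * time credit is compared with 1.0, and the per-app
    multiplier is nudged a small step toward closing the gap.
    """
    ratios = [granted / bm for granted, bm in day_results]
    median_ratio = statistics.median(ratios)
    # Move only a fraction of the way each day, so granted credit
    # drifts slowly instead of jumping.
    return multiplier * (1 + smoothing * (1 / median_ratio - 1))

# A project already granting exactly benchmark * time credit has a
# median ratio of 1, so its multiplier never moves.
m_bm = adjust_multiplier([(50.0, 50.0)] * 5, 1.0)

# A flop-counting app paying twice the benchmark * time rate is eased
# downward a little every day over a 30-day run.
m = 1.0
for _ in range(30):
    m = adjust_multiplier([(2.0, 1.0)] * 5, m)
```

This also matches the observation elsewhere in the thread that a project granting pure benchmark * time credit ends up with a multiplier that stays at 1.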
W-K 666 Send message Joined: 18 May 99 Posts: 19989 Credit: 40,757,560 RAC: 67
|
> Can somebody please summarize the credit change that happened? I was away from seti for about 3 weeks so have missed the initial uproar. This thread has gotten really long (over 200 posts).

On how it works (stolen from Richard, who got it from ?????): The script looks at a day's worth of returned results for each app (up

And Eric's main post is post 790521 |
|
Brian Silvers Send message Joined: 11 Jun 99 Posts: 1681 Credit: 492,052 RAC: 0
|
The underlying data has problems.... You cannot / will not consider this. No matter how sound the theory may or may not be, a function applied to a flawed data set is a flawed function, that is, if one is expecting output that is not flawed. GIGO (Garbage In - Garbage Out) |
W-K 666 Send message Joined: 18 May 99 Posts: 19989 Credit: 40,757,560 RAC: 67
|
As I understand it, and I am not claiming to be right: projects are not going to be adjusting to each other. The projects will be adjusting so that their median computer will claim credits in line with its performance relative to the original concept of the mythical computer with benchmark 1000/1000, 100cr/day performance. It will not matter how the projects claim credit, whether fixed, BM * time, or flop counting. And all our real computers doing a project will line up depending on their performance relative to the project's median computer. It is from that understanding that I asked the question about different projects having different median computers. |
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14690 Credit: 200,643,578 RAC: 874
|
> On how it works (stolen from Richard, who got it from ?????)

Eric's email to the BOINC Development mailing list on 23 July 2008.

> The script looks at a day's worth of returned results for each app (up |
|
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0
|
You are comparing the complete result database at a single project to the cross-project comparison at BOINCstats. The result table at SETI contains every work unit that has yet to transition. It does not contain information about any other project. The cross-project comparison at BOINCstats is a compilation from the XML statistics published by various projects. It does not contain anything about individual work units. I have considered it. They aren't the same. |
|
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0
|
> As I understand it, and I am not claiming to be right. Projects are not going to be adjusting to each other. The projects will be adjusting so that their median computer will claim credits in line with its performance relative to the original concept of the mythical computer with benchmark 1000/1000, 100cr/day performance. It will not matter how the projects claim credit, fixed, BM * time or Flop counting. And all our real computers doing a project will line up depending on their performance relative to the project's median computer.

It seems to me that the whole problem comes from flop counting vs. benchmark * time credit. (Edit: actually, any method that isn't benchmark * time, including any fixed credit scheme.) In benchmark * time, we know what the predicted performance is, and we know how long it took. It is a problem if a project uses a lot of a single operation that is particularly fast (or particularly slow) on a given CPU type.

By counting flops, we get a very accurate "count" of what was done, but there is a scaling factor needed to make credit comparable to benchmark * time. ... Currently, that scaling factor is determined experimentally, based on a sample that may not represent the main project. The assumption (and I think it's valid, but it's an assumption) is that the median computer is representative, and that it does not have an advantage or disadvantage because of CPU architecture. Take a weighted average of 30 median computers over a 30-day period, and the number won't vary a lot (VLAR and VHAR units are the most likely reason it would).

If a project uses benchmark * time, then the ratio between the granted credit and the calculated benchmark * time score is going to average out at 1:1. |
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14690 Credit: 200,643,578 RAC: 874
|
Now that I'm posting here again, let me chip in to this discussion with two clarifications that I've been trying to get clear in my own head over at Beta.

1) Median

Difficult as the concept is, Eric's script doesn't have any reference to a median computer. If you read the email Winterknight stole (!) from my public postings a few messages ago, the median is "(over all hosts) of the ratio of granted credit to the credit that would have been granted based upon the benchmarks." So it's a number: there are graphs over at Beta, showing individual processor types as points, and the median as a line. There never will be any such thing as the 'median computer'. As I said to Winterknight at Beta, the median ratio will be typical of, but not defined by, any particular processor type.

2) Multiplier

Just to be clear (there was ambiguity earlier in this thread): there are now two multipliers at work. One has been in place for the last 12 months, and has remained static at 2.85 all that time. It operates at the 'per WU' level: it could, in principle, be used to level out the credit per hour rate of WUs of different angle ranges. Everything in that last sentence is very much SETI-specific: you can't even talk about the 'angle range' of an Astropulse task. It doesn't have one. So this multiplier has no meaning at any other project.

The new multiplier you've been discussing here, on the other hand, works at the application level, and could be used at any project. So there will be, or should be, a separate multiplier gradually normalising the average of all SETI multibeam tasks, and another one gradually doing the same for all Astropulse tasks. I will be watching, in debug mode, to check that they are each working to achieve their stated aims, but I don't quarrel at all with the principle. |
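Richard's first point, that the median is a number taken over ratios rather than a 'median computer', can be illustrated with a toy example (host names and figures below are invented for illustration):

```python
import statistics

# Invented per-host figures: granted (flop-counted) credit versus what
# benchmark * time would have claimed for the same work.
hosts = {
    "core2_quad": (100.0, 40.0),  # flop counting pays well here
    "pentium_m": (60.0, 50.0),
    "athlon_xp": (45.0, 50.0),
    "p4_celeron": (30.0, 50.0),
}

# The median is taken over the ratios, not over the hosts.
ratios = [granted / bm for granted, bm in hosts.values()]
median_ratio = statistics.median(ratios)

# With an even number of hosts the median (here 1.05) falls between
# two ratios and corresponds to no single machine at all.
```

The median line on the Beta graphs can run close to the points for one processor type without being defined by any of them, which is the sense in which it is "typical of, but not defined by" a particular host.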
Pilot Send message Joined: 18 May 99 Posts: 534 Credit: 5,475,482 RAC: 0 |
> As I understand it, and I am not claiming to be right. Projects are not going to be adjusting to each other. The projects will be adjusting so that their median computer will claim credits in line with its performance relative to the original concept of the mythical computer with benchmark 1000/1000, 100cr/day performance. It will not matter how the projects claim credit, fixed, BM * time or Flop counting. And all our real computers doing A project will line up depending on their performance to the projects median computer.

Ha ha, it seems like a great deal of time is being devoted to inflating/deflating the credit, the currency of thanking people for donating to the science done by BOINC. I do hope equal amounts of time are dedicated to thinking and rethinking the science itself ;) When we finally figure it all out, all the rules will change and we can start all over again. |
|
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0
|
If people cared as much about the science as we apparently do about credit, then this wouldn't matter. |
|
Brian Silvers Send message Joined: 11 Jun 99 Posts: 1681 Credit: 492,052 RAC: 0
|
...bringing us back to the idea of "slow host project shopping". Also, would it benefit a fast machine to intentionally fudge the benchmark lower? How about as a collective group effort?
|
|
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0
|
> Now that I'm posting here again, let me chip in to this discussion with two clarifications that I've been trying to get clear in my own head over at Beta.

Thanks for reminding us. This is an important point. I'm a bit surprised. Does Astropulse count flops? It seems that some scaling would be in order to account for the instruction mix ("hard" floating point ops vs. "easy" floating point ops).
I suspect Eric will be running his graphs as well, and that the appropriate lines should converge.... |
|
Brian Silvers Send message Joined: 11 Jun 99 Posts: 1681 Credit: 492,052 RAC: 0
|
As someone, somewhere, pointed out, if that were true, people wouldn't publish papers on the research with their names on it. Continuing that line of thinking, none of us would desire to be called "Dr.", nor would there be any need for framed degrees in our offices, trophies, etc., etc., etc... As the character "Mouse" said in "The Matrix": "To deny our impulses is to deny the very thing that makes us human." |
|
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 14016 Credit: 208,696,464 RAC: 304
|
> As the character "Mouse" said in "The Matrix":

Being able to control our impulses is what makes us more than just another animal, more than just a small child.

Grant
Darwin NT |
|
Fred W Send message Joined: 13 Jun 99 Posts: 2524 Credit: 11,954,210 RAC: 0
|
Going off at a tangent again, Brian! If your employment is in academia, then without having your name attached to published papers your prospects are severely limited; i.e. direct commercial benefit. Framed certificates of qualifications in the office tend to reassure clients that you are, indeed, qualified to practice in your chosen profession; i.e. direct commercial benefit. And most of my acquaintances who have higher degrees use their titles or letters of qualification only when applying for jobs or countersigning passports etc. BOINC credits have no "real-world" benefit whatsoever, so the parallel does not hold, I'm afraid. F.
|
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14690 Credit: 200,643,578 RAC: 874
|
> 2) Multiplier

Let's be clear about this too. Neither the Astropulse application nor the Enhanced Multibeam application actually 'counts' floating point operations. SAH_enh tots up blocks of presumed flops in bulk as it goes through its various stages of processing. That approximation gives great consistency between hosts on tasks of the same AR, but falls down when comparisons are made between different ARs: the variation in credit rate per hour at different ARs is highly significant. However, we all get the same random allocation of WUs from the pot, and each host's credit claim will be averaged out over time.

Astropulse, on the other hand, has a very consistent processing cycle on all WUs (no AR variation, as I said before). So consistent, in fact, that the developers have resorted to the bluntest of blunt instruments: `#define TOTAL_FLOPS 6.21524e+14` (thanks to Urs Echternacht for discovering and reporting this). So for hosts running BOINC v5.2.7 and above, credit claims will be reported using the FlopCount mechanism, and will be absolutely consistent across all hosts, to the last decimal place. That makes it feasible for the application multiplier to be the only mechanism relied on for normalisation. |
|
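Given that fixed flop count, the uniform credit claim follows directly from BOINC's cobblestone scale, i.e. the 'mythical computer with benchmark 1000/1000, 100cr/day performance' mentioned earlier in the thread. A back-of-the-envelope check (the scale constants below are the standard cobblestone definition, not values taken from the Astropulse source):

```python
# Astropulse's fixed flop count, as quoted above.
TOTAL_FLOPS = 6.21524e14

# The cobblestone reference machine earns 100 credits per day while
# sustaining 1 GFLOPS, so every flop is worth the same fixed sliver.
CREDITS_PER_DAY = 100.0
SECONDS_PER_DAY = 86400.0
REFERENCE_FLOPS_PER_SEC = 1e9

credit_per_flop = CREDITS_PER_DAY / (SECONDS_PER_DAY * REFERENCE_FLOPS_PER_SEC)
claimed = TOTAL_FLOPS * credit_per_flop  # identical on every host
```

Under these assumptions every host claims the same ~719 credits per Astropulse task, which is why the claims agree to the last decimal place.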
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0
|
> 2) Multiplier

Actually counting each individual "flop" would be quite difficult, and would probably make it impossible for optimized apps to use other FFT libraries unless source was available. Literally counting flops would also slow the app down -- a lot.
Cool. Since this is a straight define, I suspect that whatever "multiplier" might be needed is buried in the #define -- but either way, it's either the right number, or it isn't, and if it isn't, then the self-adjusting credit that Dr. Korpela implemented will handle it.
|
©2026 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.