Let's Play CreditNew (Credit & RAC support thread)

Message boards : Number crunching : Let's Play CreditNew (Credit & RAC support thread)
Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1937043 - Posted: 25 May 2018, 9:12:35 UTC - in response to Message 1937039.  
Last modified: 25 May 2018, 9:18:41 UTC

(that is, work, not FLOPs, done per unit of time)

What would you use to determine work done, other than FLOPs? FLOPS is, after all, the metric used to determine arithmetic capability/work done on a computer.


(1) Arithmetic capability - yes (with the restrictions noted in (3)). Work done - no.

Examples were provided earlier. A real computational device does not perform only arithmetic operations: it also does branching, and it does memory accesses. All of these are a non-negligible part of any real program.
That's why a FLOPs count is always a very approximate estimation.

(2) And regarding SETI work per se and FLOPs: our work (the definition of what I mean when I use the word "work") is to determine the number and properties of particular signals in a given length of radio signal in a given frequency band.
So ideally one should get more credits from SETI if one completes more of such work.

And this definition can't be translated into FLOPs. Partly because of point (1), and partly because the same work can be done differently arithmetic-wise. We can change the order of elementary arithmetic operations, and the NUMBER of them (i.e. FLOPs!), and still achieve the same result with a given precision. That's why I say simple FLOPs counting is inadequate too.

And (3): FLOP represents any arithmetic operation, but a real computation device spends a different number of ticks on different instructions.
It's known that (for example) multiplication is slower than addition, and division is the slowest of all - to such a degree that Windows computes the reciprocal of the processor speed in the early boot phase and keeps that number, so that any timer-related routine needs only multiplications (the SETI code does the same, of course). I never heard of a MOP (Multiplication OPeration) or a DOP (Division OPeration): FLOP binds them all...
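The reciprocal trick described above can be sketched in a few lines (a hedged illustration; the names and the 3 GHz tick rate are invented, not taken from Windows or the SETI source):

```python
# Illustrative sketch: pay for one slow division at startup, then use
# only cheap multiplications in every timer-related routine.

TICKS_PER_SECOND = 3_000_000_000        # assumed tick rate (illustrative)

# The single division, done once at "boot":
SECONDS_PER_TICK = 1.0 / TICKS_PER_SECOND

def ticks_to_seconds(ticks):
    # Each conversion is now a multiplication, never a division.
    return ticks * SECONDS_PER_TICK
```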
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1937043

Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1937045 - Posted: 25 May 2018, 9:26:00 UTC - in response to Message 1937042.  

I would suggest:
Baseline a set of APs and MBs on a defined processor (even a "theoretical" one), decide how much that is worth per % blanked or per degree of AR, and scale each task to get its "value". Ignore the differences in processor and application (apart from those needed in the validation process), as those are user choices, not project decisions.
This way there is time-consistency, in that a user will know that an x% blanked AP will have a value of y, and an M-degree MB will have a value of n - independent of the processor or application.

That's my take on it as well, hence Credit based on the task, determined by the reference system used for the Cobblestone definition.

As to the argument about cross-project comparability in value per task - that doesn't exist today, and may never exist, and that is something I think we have to live with. The truth is that many projects do not use the fully adaptive scoring system that SETI does; the majority appear to use either a simple fixed, or a more complex time-based, scoring system. I wouldn't be surprised if one or two actually use a variation on the one I've outlined above.

If all projects based their Credit on the Cobblestone definition, using the definition's reference processor, and their WUs were awarded Credit based on that definition and their estimated required FLOPs, then the projects would be comparable.
As I mentioned in a previous post, a project with more optimised applications would give its crunchers more Credit per day, but the Credit awarded for the work done (FLOPs) would be very similar between projects that use the system as intended.
Hence cross-project comparability would actually exist.
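For reference, the Cobblestone definition being invoked here grants 200 cobblestones per day of work on a reference machine sustaining 1 GFLOPS (per the Whetstone benchmark). A minimal sketch of task-based credit built on that definition (function names are mine):

```python
COBBLESTONES_PER_DAY = 200      # reference host earns 200 credits per day...
REFERENCE_FLOPS = 1e9           # ...while sustaining 1 GFLOPS

# 4.32e11 floating-point operations per cobblestone:
FLOPS_PER_COBBLESTONE = REFERENCE_FLOPS * 86400 / COBBLESTONES_PER_DAY

def credit_for_task(estimated_flops):
    # Credit depends only on the task's FLOPs estimate, never on the
    # host, application, OS, drivers or wingman that processed it.
    return estimated_flops / FLOPS_PER_COBBLESTONE
```

Under this scheme a WU estimated at 8.64e12 FLOPs is worth 20 credits for every host that returns it.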
Grant
Darwin NT
ID: 1937045

Richard Haselgrove Project Donor
Volunteer tester
Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1937046 - Posted: 25 May 2018, 9:34:27 UTC - in response to Message 1937042.  

As to the argument about cross-project comparability in value per task - that doesn't exist today, and may never exist, and that is something I think we have to live with. The truth is that many projects do not use the fully adaptive scoring system that SETI does; the majority appear to use either a simple fixed, or a more complex time-based, scoring system. I wouldn't be surprised if one or two actually use a variation on the one I've outlined above.
Sadly, that's an accurate statement of the situation now - the genie is out of the bottle, and we can't put it back - but I don't think we should forget that it ever existed. Projects fell away from the standard because it was poorly implemented and deployed, and because it hasn't been maintained or developed in the eight years since then.

I think credit is still relevant in three areas.

1) It encourages gentle competition and personal goal-setting among project volunteers (us!). Provided it doesn't become too obsessive, that's a good thing - it encourages participation in the projects, and thus gets more research done.
2) Some projects have used - or have been persuaded into using - the alternative credit systems as a means of poaching volunteers and processing power from each other. That's regrettable.
3) The reverse calculation from aggregated credits back to flops (the Zimbabwe syndrome), which is used for comparison with other supercomputing facilities, becomes seriously compromised. That's close to fraud.
ID: 1937046

Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1937047 - Posted: 25 May 2018, 9:38:40 UTC - in response to Message 1937042.  
Last modified: 25 May 2018, 9:39:38 UTC


I would suggest:
Baseline a set of APs and MBs on a defined processor (even a "theoretical" one), decide how much that is worth per % blanked or per degree of AR, and scale each task to get its "value". Ignore the differences in processor and application (apart from those needed in the validation process), as those are user choices, not project decisions.
This way there is time-consistency, in that a user will know that an x% blanked AP will have a value of y, and an M-degree MB will have a value of n - independent of the processor or application.

Think about it - today, even running the same task but with a different pair of validating crunchers will probably get you a different value, and that's what gets most people's backs up.

As to the argument about cross-project comparability in value per task - that doesn't exist today, and may never exist, and that is something I think we have to live with. The truth is that many projects do not use the fully adaptive scoring system that SETI does; the majority appear to use either a simple fixed, or a more complex time-based, scoring system. I wouldn't be surprised if one or two actually use a variation on the one I've outlined above.


Something similar existed before CreditScrew was applied.
Not only was each task awarded a number of "FLOPs"/points (I listed the reasons why real FLOPs have very little connection to work measurement, so I'll call these fake "FLOPs" just "points" to reduce confusion); attempts were also made to account for variations between tasks by awarding different points for each block of computations.
That became an issue when we optimized AP to skip some blocks completely - I had to carefully keep the points accounting separate from the actual computations so as not to ruin the credit claims.
The same goes for the GPU apps, where the points arithmetic is done on the CPU while the actual work is done on the GPU.

The system has its own degrees of freedom (in the arbitrary decisions about which block/task is worth how many points).
The issue: two different mixes of tasks will result in different credit pay-offs on different hardware. That is, "cherry-picking" for a particular host.
But I think that's only an issue when network bandwidth is the limiting stage. As long as bandwidth allows, it's just another degree of optimization - doing the work that best fits the particular hardware.
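A toy illustration of the bookkeeping described above (block names and point values are invented): the points claim accumulates for every block a task logically contains, even when an optimised code path skips the block's actual arithmetic.

```python
# Invented block names and point values; the point being illustrated is
# the separation of credit-claim accounting from actual computation.
POINTS_PER_BLOCK = {"fft": 10.0, "chirp": 5.0, "pulse_find": 25.0}

def do_block(name):
    pass  # stand-in for the real signal-processing work

def process_task(blocks, skippable=frozenset()):
    claimed = 0.0
    for name in blocks:
        claimed += POINTS_PER_BLOCK[name]   # points counted unconditionally
        if name not in skippable:           # an optimisation may skip the
            do_block(name)                  # work, but never the accounting
    return claimed
```

An optimised app that skips `pulse_find` entirely still claims the same points as one that runs every block, so the credit claim is not ruined.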
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1937047

Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1937048 - Posted: 25 May 2018, 9:40:13 UTC - in response to Message 1937043.  

(1) Examples were provided earlier. A real computational device does not perform only arithmetic operations: it also does branching, and it does memory accesses. All of these are a non-negligible part of any real program.
That's why a FLOPs count is always a very approximate estimation.

That is what it takes to actually process a WU; but for the actual calculations required to process the WU, there must be some estimate of what is required - the number of mathematical operations.
That is the FLOPs I am referring to. It's not about the work that is done, or how it is done, but the work that needs to be done: the number of operations that would be required to produce a result, without any shortcuts etc.


(2) And regarding SETI work per se and FLOPs: our work (the definition of what I mean when I use the word "work") is to determine the number and properties of particular signals in a given length of radio signal in a given frequency band.
So ideally one should get more credits from SETI if one completes more of such work.

And FLOPs does that, particularly with the second Credit system. Tasks with lower angle ranges (VLAR) require more processing, so they have greater estimated FLOPs and get more Credit. Tasks with higher angle ranges (VHAR) require less processing, so they have lower estimated FLOPs and get less Credit. Even now that occurs, just without any consistency in what is granted - and with much less granted than what the Cobblestone definition says we should get.


And this definition can't be translated into FLOPs. Partly because of point (1), and partly because the same work can be done differently arithmetic-wise. We can change the order of elementary arithmetic operations, and the NUMBER of them (i.e. FLOPs!), and still achieve the same result with a given precision. That's why I say simple FLOPs counting is inadequate too.

True, hence my suggestion for Credit to be based on the work that has to be done: the operations that would be required to process the WU without any optimisations. If optimisations result in a huge boost in performance by skipping 50% of the operations, but still give a valid result, then the host still earns the Credit it would have if all the operations were performed.

And (3): FLOP represents any arithmetic operation, but a real computation device spends a different number of ticks on different instructions.
It's known that (for example) multiplication is slower than addition, and division is the slowest of all - to such a degree that Windows computes the reciprocal of the processor speed in the early boot phase and keeps that number, so that any timer-related routine needs only multiplications (the SETI code does the same, of course). I never heard of a MOP (Multiplication OPeration) or a DOP (Division OPeration): FLOP binds them all...

Yep, which is why Credit based on the number of estimated FLOPs required to process the WU under the Cobblestone definition will still give the same amount of Credit for a WU, whether it uses all fast operations or all slow ones.
The amount of Credit is determined by the work (computations) that needs to be done, not by how much work (computations) was actually done to produce the result.
Grant
Darwin NT
ID: 1937048

Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1937049 - Posted: 25 May 2018, 9:46:50 UTC - in response to Message 1937046.  

3) The reverse calculation from aggregated credits back to flops (the Zimbabwe syndrome), which is used for comparison with other supercomputing facilities, becomes seriously compromised. That's close to fraud.

But a Credit system actually based on the Cobblestone definition would make that reverse calculation fairly accurate (if the efficiency of the hardware used is known & applied).
The calculation of FLOPs from RAC was one of the goals of the Credit system.
Grant
Darwin NT
ID: 1937049

Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1937050 - Posted: 25 May 2018, 9:49:22 UTC - in response to Message 1937037.  


What matters is that a particular WU would require a certain number of operations to be performed if there were no optimisations, short cuts or other operation minimisation used.
That maximum possible number of operations should be what is used in determining the work done. The reference machine in the Cobblestone definition would perform all of those operations in order to process that WU, and so that number of required FLOPs would give the amount of Credit due for that WU.


Hm... and who will decide that the same work can't be done in even MORE operations? ;)

Consider the simplest example, which I wrote about before: a function value computed on each cycle (even though it remains the same for the whole inner loop), versus computed only once per inner loop and stored in a variable.
For a big function it's a very obvious way to optimize - so obvious that hardly anyone would consider it an optimization rather than just a good programming habit.
So, to compute the maximum number one would have to unroll every such computation... and that would be another hardly achievable task for the project programmers...
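The loop example can be sketched as follows (the function and call counter are invented for illustration): both versions return identical results, but a naive operation count would charge them differently.

```python
CALLS = {"f": 0}                  # counts evaluations of f, for illustration

def f(x):
    CALLS["f"] += 1
    return x * x + 1.0            # stand-in for an expensive function

def naive_sum(x, n):
    total = 0.0
    for i in range(n):
        total += f(x) * i         # f(x) recomputed on every iteration
    return total

def hoisted_sum(x, n):
    fx = f(x)                     # computed once and stored in a variable
    total = 0.0
    for i in range(n):
        total += fx * i
    return total
```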

I say all this just to stress one simple thing: points awarding is ARBITRARY in reality. And it depends on the skills of the programmer who coded the initial algorithm.
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1937050

Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1937052 - Posted: 25 May 2018, 9:55:21 UTC - in response to Message 1937050.  
Last modified: 25 May 2018, 10:00:28 UTC


What matters is that a particular WU would require a certain number of operations to be performed if there were no optimisations, short cuts or other operation minimisation used.
That maximum possible number of operations should be what is used in determining the work done. The reference machine in the Cobblestone definition would perform all of those operations in order to process that WU, and so that number of required FLOPs would give the amount of Credit due for that WU.


Hm... and who will decide that the same work can't be done in even MORE operations? ;)

Consider the simplest example, which I wrote about before: a function value computed on each cycle (even though it remains the same for the whole inner loop), versus computed only once per inner loop and stored in a variable.
For a big function it's a very obvious way to optimize - so obvious that hardly anyone would consider it an optimization rather than just a good programming habit.
So, to compute the maximum number one would have to unroll every such computation... and that would be another hardly achievable task for the project programmers...

So what if it takes more operations to process the work? They will still get the same amount of Credit for processing the WU; it just means they will get less Credit per day, because it is taking them longer to do the work.
It's not a problem.
The amount of Credit awarded is based on the work (operations) estimated to process the WU. Whether it takes you more operations, fewer operations, or the same number of operations, you will still get the same amount of Credit for that WU. Crappy application: less Credit per hour, day, week, month etc. Great application: more Credit per hour, day, week, month etc.


I say all this just to stress one simple thing: points awarding is ARBITRARY in reality. And it depends on the skills of the programmer who coded the initial algorithm.

Which is why I advocate for Credit payout to be in accordance with the original definition of the Cobblestone.
The application, hardware, OS, drivers, or wingman has no bearing on the Credit paid out for processing a given WU.

All of the things we have been complaining about would be no more. Yes, there would be other things to complain about, and variations on the theme, but there would be no more variation in the Credit awarded for a given WU, and no more devaluation of Credits due to the implementation of optimisations in the stock applications (there would actually be a boost in per-WU Credit, bringing us back in line with what we should be getting).
Grant
Darwin NT
ID: 1937052

Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1937053 - Posted: 25 May 2018, 9:57:22 UTC - in response to Message 1937048.  

(1) Examples were provided earlier. A real computational device does not perform only arithmetic operations: it also does branching, and it does memory accesses. All of these are a non-negligible part of any real program.
That's why a FLOPs count is always a very approximate estimation.

That is what it takes to actually process a WU; but for the actual calculations required to process the WU, there must be some estimate of what is required - the number of mathematical operations.
That is the FLOPs I am referring to. It's not about the work that is done, or how it is done, but the work that needs to be done: the number of operations that would be required to produce a result, without any shortcuts etc.

Yep, with a single but important addition: an ESTIMATE of the work. And estimates tend to differ between different people/projects. It will not work as an inter-project base.

(2) And regarding SETI work per se and FLOPs: our work (the definition of what I mean when I use the word "work") is to determine the number and properties of particular signals in a given length of radio signal in a given frequency band.
So ideally one should get more credits from SETI if one completes more of such work.

And FLOPs does that, particularly with the second Credit system. Tasks with lower angle ranges (VLAR) require more processing, so they have greater estimated FLOPs and get more Credit. Tasks with higher angle ranges (VHAR) require less processing, so they have lower estimated FLOPs and get less Credit. Even now that occurs, just without any consistency in what is granted - and with much less granted than what the Cobblestone definition says we should get.

Yep, in such an implementation it's the same "fixed number of points (call them "FLOPs" or whatever) awarded for a given block of work" that we are all talking about. (*)



And this definition can't be translated into FLOPs. Partly because of point (1), and partly because the same work can be done differently arithmetic-wise. We can change the order of elementary arithmetic operations, and the NUMBER of them (i.e. FLOPs!), and still achieve the same result with a given precision. That's why I say simple FLOPs counting is inadequate too.

True, hence my suggestion for Credit to be based on the work that has to be done: the operations that would be required to process the WU without any optimisations. If optimisations result in a huge boost in performance by skipping 50% of the operations, but still give a valid result, then the host still earns the Credit it would have if all the operations were performed.

Yep, same (*).

So you also propose to establish some fixed payment for a particular block of work.
Agreed - I think it's better than CreditScrew. As I recall, Eric also favored "FLOPs counting" in such an implementation, but it's outside his range of decisions.
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1937053

Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1937054 - Posted: 25 May 2018, 10:02:38 UTC - in response to Message 1937052.  
Last modified: 25 May 2018, 10:05:54 UTC


The amount of Credit awarded is based on the work (operations) estimated to process the WU.

That's the issue - the estimate can be made differently. That's OK for a single project; it's not OK for inter-project comparison (just to be clear).
And I have no proposals for how to make inter-project comparisons at all. Also, I don't think they really matter. What does it matter if someone is very advanced at counting the number of sand grains on the beach, if I don't care what that number is at all? ;)
Some projects are worthless from my own personal point of view no matter how many credits they pay; for some it's the reverse. Quantities of different dimensionality...
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1937054

Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1937057 - Posted: 25 May 2018, 10:09:38 UTC - in response to Message 1937053.  

Yep, with a single but important addition: an ESTIMATE of the work. And estimates tend to differ between different people/projects. It will not work as an inter-project base.

That depends on the honesty of those involved with the project.
All BOINC projects have to have an estimate of the number of computations required to process a job (WU). A reasonably accurate estimate of the work results in comparable allocation of Credit.
Of course, there is nothing stopping a project from having completely fanciful estimates.

So you also propose to establish some fixed payment for a particular block of work.

That's what it comes down to.

What I've actually been proposing is that we base the payment for work done on the Cobblestone definition again. That is what SETI initially started with when it moved to BOINC: FLOPs counting made it more consistent, and a scaling factor kept it in line with the original system.
Grant
Darwin NT
ID: 1937057

Richard Haselgrove Project Donor
Volunteer tester
Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1937058 - Posted: 25 May 2018, 10:09:53 UTC - in response to Message 1937049.  

3) The reverse calculation from aggregated credits back to flops (the Zimbabwe syndrome), which is used for comparison with other supercomputing facilities, becomes seriously compromised. That's close to fraud.
But a Credit system actually based on the Cobblestone definition would make that reverse calculation fairly accurate (if the efficiency of the hardware used is known & applied).
The calculation of FLOPs from RAC was one of the goals of the Credit system.
And that's precisely why I'm in this conversation, and why I'll offer to help anyone who is willing to make a serious stab at reworking the system into something which is both acceptable and (to a reasonable approximation) accurate. I'm never going to lose sleep over a single miscounted flop ;-)

But it's also why I don't want to lose sight of the cross-project nature of the problem. We can't force other projects to adopt whatever solution we come up with, but you know what they said would happen to the person who built a better mousetrap...
ID: 1937058

Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1937060 - Posted: 25 May 2018, 10:16:11 UTC - in response to Message 1937054.  

The amount of Credit awarded is based on the work (operations) estimated to process the WU.

That's the issue - the estimate can be made differently. That's OK for a single project; it's not OK for inter-project comparison (just to be clear).

The type and complexity of the operations isn't relevant - the definition of the Cobblestone sets the baseline. What is important is the number of operations that would be required to process the WU without any optimisations/shortcuts etc.
With an honest & reasonably accurate estimation of that, all else falls into place.

And I have no proposals for how to make inter-project comparisons at all. Also, I don't think they really matter. What does it matter if someone is very advanced at counting the number of sand grains on the beach, if I don't care what that number is at all? ;)
Some projects are worthless from my own personal point of view no matter how many credits they pay; for some it's the reverse. Quantities of different dimensionality...

And if they use an honest estimate of the operations required to process their work, then those projects will all be comparable.

The fact is that with CreditNew that's not possible, because it awards Credit based on what is claimed (as well as on the work estimate). If it were based just on the estimate, a lot of the cross-project divergence would be eliminated (given accurate estimates).
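A toy contrast of the two approaches (a simplified stand-in, not the actual CreditNew algorithm): a claim-based grant moves with whatever the paired hosts happen to claim, while an estimate-based grant is the same for every quorum.

```python
FLOPS_PER_COBBLESTONE = 4.32e11   # from the Cobblestone definition

def grant_claim_based(claims):
    # Simplified stand-in for claim averaging: the granted value
    # depends on which wingmen ended up in the quorum.
    return sum(claims) / len(claims)

def grant_estimate_based(estimated_flops):
    # Estimate-only: every quorum returning this WU gets the same number.
    return estimated_flops / FLOPS_PER_COBBLESTONE
```

Two different wingman pairs claiming [90, 110] and [60, 80] would be granted 100 and 70 under the claim scheme, but identical amounts under the estimate scheme.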
Grant
Darwin NT
ID: 1937060

Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1937061 - Posted: 25 May 2018, 10:20:42 UTC - in response to Message 1937058.  

We can't force other projects to adopt whatever solution we come up with, but you know what they said would happen to the person who built a better mousetrap...

Only if their aim is mouse catching ;)
No matter how scientifically good your credit system is (in terms of how accurately FLOPS can be recovered from RAC or anything else), one of the goals of a credit system is to attract resources to the project (social engineering). And people tend to simply like bigger numbers. And to compare them :)
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1937061

Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1937062 - Posted: 25 May 2018, 10:24:20 UTC - in response to Message 1937058.  

3) The reverse calculation from aggregated credits back to flops (the Zimbabwe syndrome), which is used for comparison with other supercomputing facilities, becomes seriously compromised. That's close to fraud.
But a Credit system actually based on the Cobblestone definition would make that reverse calculation fairly accurate (if the efficiency of the hardware used is known & applied).
The calculation of FLOPs from RAC was one of the goals of the Credit system.
And that's precisely why I'm in this conversation, and why I'll offer to help anyone who is willing to make a serious stab at reworking the system into something which is both acceptable and (to a reasonable approximation) accurate. I'm never going to lose sleep over a single miscounted flop ;-)

But it's also why I don't want to lose sight of the cross-project nature of the problem. We can't force other projects to adopt whatever solution we come up with, but you know what they said would happen to the person who built a better mousetrap...

Which is why I tend to go on, and on, and on (and on) about the Cobblestone definition, and about using it as our reference (as was the case when SETI moved to BOINC).
It stops Credit for a given WU varying depending on hardware, OS, drivers, application, wingman etc.
It returns the Credit paid out to its earlier levels.
It stops Credit for a given WU from changing, whether the stock applications improve, degrade, or remain unchanged.
And it would allow cross-project comparisons with a reasonable degree of accuracy - as long as the estimates for each project's tasks are reasonably accurate.

The biggest impediment to cross-project comparisons would be getting projects to provide realistic estimates for their tasks.
Grant
Darwin NT
ID: 1937062

Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1937064 - Posted: 25 May 2018, 10:29:46 UTC - in response to Message 1937061.  

one of the goals of a credit system is to attract resources to the project (social engineering). And people tend to simply like bigger numbers. And to compare them :)

The original goals of the Credit system were to recognize people for the work they'd done (just counting WUs wasn't very good for that, with noise bombs v VLARs), and to promote BOINC and show prospective projects: "Look here for your computing needs!"
But people being people - yeah, it has been used for other purposes.
Grant
Darwin NT
ID: 1937064

Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1937065 - Posted: 25 May 2018, 10:30:54 UTC - in response to Message 1937062.  
Last modified: 25 May 2018, 10:31:53 UTC


The biggest impediment to cross-project comparisons would be getting projects to provide realistic estimates for their tasks.

Exactly.
But we should not forget that the "FLOPs counting" method isn't something new. It was used before - and it was against the background of its usage that David decided to develop CreditScrew.
I think he surely had more interesting things to spend his time on... still, he did it.
Apparently the estimates were of too low quality... AFAIK his aim was precisely inter-project comparison, above all else.
And that aim (which I would say remains unachieved) ruined our own "small" SETI-credits world.
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1937065

Richard Haselgrove Project Donor
Volunteer tester
Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1937066 - Posted: 25 May 2018, 10:34:07 UTC - in response to Message 1937057.  

Of course, there is nothing stopping a project from having completely fanciful estimates.
That's perfectly true. There's nothing to stop a project consciously and deliberately using fraudulent estimates to attract more volunteers.

But in general, I think that's a rare and remote possibility. These people are (in the most part) scientists, and scientists have a strong and innate bias against fraud. They would - I think and hope - be appalled if any false figures crept into the scientific part of their project's output.

But credit is seen (by the scientists) as a secondary and very much less important part of the project's output. FLOP estimation (as Raistmer has reminded us) is a difficult and imprecise science, and very much comes into the computational sphere rather than molecular dynamics, ultra-violet astronomy, climate prediction or whatever the project scientist's primary research interest is. Accurate flops estimation is perhaps more likely to be taken seriously in projects which are large enough and well enough staffed to separate the roles of scientist and administrator. But we all know what the "dismal science" (economics) has done to staffing levels over the last 10 years.
ID: 1937066

iwazaru
Volunteer tester
Joined: 31 Oct 99
Posts: 173
Credit: 509,430
RAC: 0
Greece
Message 1937135 - Posted: 25 May 2018, 21:56:44 UTC
Last modified: 25 May 2018, 22:02:04 UTC

Looks like I missed the party :)

We don't have to worry our pretty little heads about what the best way to award "credit" is.
There is only one right answer to that question and - good news - it's the one we all want anyway.
FLOPS counting.

It's what supercomputers have.
It's what Stanford's Folding has.
So it's what we ALREADY have and are ALWAYS going to have.
FLOPS counting.

Think of it as an "industry standard".
So any discussion about "credits" not being FLOPS is just mental gymnastics.

CreditNew actually declares itself to be a FLOPS counter, and that's exactly what it is. Any project that is either gaming CreditNew or not using it at all (ones whose work can be measured in flops, obviously - we're not talking about QuakeCatcher-type situations) is not "close" to being fraudulent... it IS being fraudulent. I bet Spock would say the same (if he were a real... Vulcan/person).
:)
- - - - - - -
Difficulties in flop-counting are also a very interesting topic for debate, but we need not worry our pretty little heads about that one either. We've got Eric for that. And since we all rightfully trust him to be a real scientist, we can all safely assume SETI knows how to count flops.

So CN is a flopcounter and Eric can count flops.
The million dollar question no-one has answered in 10 years is:

How close is CN to a real flopcounter??
ID: 1937135

Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1937145 - Posted: 25 May 2018, 22:32:26 UTC - in response to Message 1937135.  

How close is CN to a real flopcounter??

CreditNew isn't a FLOPs counter. FLOPs counting was done away with because of the need for each project to implement it themselves.

And as has been pointed out, optimisations can result in many calculations not being necessary in order to get a valid result. Counting FLOPs would then give those hosts less Credit, even though they have processed the same WU as other systems.

Hence my suggestion for Credit to be paid for the FLOPs (computations) required to process a WU without any optimisations or shortcuts.
Each project has some idea of the number of operations that would be required to derive the result of their tasks (WUs); those are the estimated FLOPs for that WU. Credit is paid according to that estimate, in line with the Cobblestone definition. There is no need to count the FLOPs actually done to process it.
If an application is developed that only has to do half the number of estimated FLOPs to get a valid result, the WU will still be paid the same amount as one processed with a less efficient application. Given that it won't take as long to process, that system will earn more Credit over time than the slower systems: optimisation is rewarded. And if that application becomes the project's stock application, the amount of Credit awarded will still be the same - no decline will result.
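The arithmetic of that last paragraph can be sketched as follows (illustrative numbers): per-WU credit is fixed by the estimate, so halving the runtime doubles the daily credit.

```python
FLOPS_PER_COBBLESTONE = 4.32e11   # from the Cobblestone definition

def credit_per_day(estimated_flops, seconds_per_wu):
    credit_per_wu = estimated_flops / FLOPS_PER_COBBLESTONE  # host-independent
    wus_per_day = 86400 / seconds_per_wu
    return credit_per_wu * wus_per_day

# Same WU estimate (100 credits per WU), two applications:
stock     = credit_per_day(4.32e13, seconds_per_wu=3600)  # slower app
optimised = credit_per_day(4.32e13, seconds_per_wu=1800)  # twice as fast
```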
Grant
Darwin NT
ID: 1937145


 
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.