Observation of CreditNew Impact

Message boards : Number crunching : Observation of CreditNew Impact

Previous · 1 . . . 4 · 5 · 6 · 7 · 8 · 9 · 10 . . . 15 · Next

Author · Message
Dorphas
Avatar

Send message
Joined: 16 May 99
Posts: 118
Credit: 8,007,247
RAC: 0
United States
Message 1380655 - Posted: 13 Jun 2013, 13:27:40 UTC

I wish the powers that be would work half as hard on showing us the science results of our work as they do on tweaking the damn credits. It's about time they started showing us the results of our work. Then maybe people could talk about what we found instead of about credits.
"It's all about the science"... not until they start showing us this "science".
ID: 1380655 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 14391
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1380656 - Posted: 13 Jun 2013, 13:27:51 UTC - in response to Message 1380649.  

I won't quote the whole post, but just a couple of points for juan.

1) I totally agree that the total computing power of a project is potentially knowable via proper engineering measurement, and that would be the proper way to do it. I was merely pointing out that, so far, BOINC isn't engineered to do that through the whole reporting chain, server --> client --> server --> stats site. So any TFlops claim for a BOINC project that you see, today, is fatally compromised.

2) My gripe with the Top Participants page is that the numbers are expressed in a false, invalid unit of measurement. If it said "these volunteers have been awarded so many cobblestones (credits)", I'd be happy. It would be an accurate (though meaningless) statement. But by expressing the values in GFlops, the page is making a scientifically invalid statement - close to a fraudulent claim about the computing power available to scientists, who might be considering whether to set up a BOINC project to service their computational needs. And that page is very prominently linked from the main BOINC website (top right) - it is designed to catch the eye of those very scientists.
ID: 1380656 · Report as offensive
juan BFP · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1380662 - Posted: 13 Jun 2013, 13:51:07 UTC - in response to Message 1380656.  
Last modified: 13 Jun 2013, 14:34:09 UTC

I won't quote the whole post, but just a couple of points for juan.

1) I totally agree that the total computing power of a project is potentially knowable via proper engineering measurement, and that would be the proper way to do it. I was merely pointing out that, so far, BOINC isn't engineered to do that through the whole reporting chain, server --> client --> server --> stats site. So any TFlops claim for a BOINC project that you see, today, is fatally compromised.

Totally agree, but I still believe there must be a way to measure just the WUs returned and the real processing power that was used to crunch each one. Some brainstorming could help to find how.

2) My gripe with the Top Participants page is that the numbers are expressed in a false, invalid unit of measurement. If it said "these volunteers have been awarded so many cobblestones (credits)", I'd be happy. It would be an accurate (though meaningless) statement. But by expressing the values in GFlops, the page is making a scientifically invalid statement - close to a fraudulent claim about the computing power available to scientists, who might be considering whether to set up a BOINC project to service their computational needs. And that page is very prominently linked from the main BOINC website (top right) - it is designed to catch the eye of those very scientists.

I agree with you, it's wrong. But it's human nature, the way our brains work: we compare things the entire day, and we all believe a bigger house, a better car, a faster host, more money, etc. is better. It's a hard task for our brains to understand that an 80-credit SETI WU needs a lot more computing power to crunch than a 400-credit WU at E@H, for example. It's not the natural way the human brain works. Now imagine an outsider trying to choose which project he wants to crunch, or which gets the "best performance".

On the other hand, that proves the claim "credit means nothing" is totally wrong, and why the SETI admins must think about changing the way they treat credit. They need to make their work more visible; the old "publish or perish" rule is still around (I'm not totally sure the English translation of the rule is right)...
ID: 1380662 · Report as offensive
MonChrMe

Send message
Joined: 9 Jun 13
Posts: 23
Credit: 113,889
RAC: 0
United Kingdom
Message 1380675 - Posted: 13 Jun 2013, 14:15:07 UTC - in response to Message 1380662.  
Last modified: 13 Jun 2013, 14:15:26 UTC

I won't quote the whole post, but just a couple of points for juan.

1) I totally agree that the total computing power of a project is potentially knowable via proper engineering measurement, and that would be the proper way to do it. I was merely pointing out that, so far, BOINC isn't engineered to do that through the whole reporting chain, server --> client --> server --> stats site. So any TFlops claim for a BOINC project that you see, today, is fatally compromised.

Totally agree, but I still believe there must be a way to measure just the WUs returned and the real processing power that was used to crunch each one. Some brainstorming could help to find how.


Nah, you can't get a figure that way without simulating the entire operation in a virtual machine and physically counting the operations involved. It'd take a day to run an hour-long op.

Best bet is to benchmark the individual components - the benchmark that BOINC runs isn't long enough to give a valid figure though. Power saving, dynamic clocking, thermal throttling, 'boost' modes, etc, these all conspire to produce an unreliable result.

Question is, how many people are going to voluntarily run a benchmark that lasts 60 minutes or more to catch all that? Not many.
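The point about benchmark length can be sketched in a few lines: one short sample catches whatever clock the CPU happens to be at that instant, while averaging many windows smooths out boost clocks and throttling. This is an illustrative Python sketch, not a real FLOPS benchmark; `sample_rate` and all the numbers are invented.

```python
import time

def sample_rate(work=500_000):
    """One short benchmark window: pure-Python float multiplies per second."""
    x = 1.0
    t0 = time.perf_counter()
    for _ in range(work):
        x *= 1.0000001
    return work / (time.perf_counter() - t0)

# A "long" benchmark is just many short windows averaged, so transient
# boost clocks and thermal throttling wash out of the result; five
# windows here stand in for the 60 minutes discussed above.
samples = [sample_rate() for _ in range(5)]
print(sum(samples) / len(samples), "multiplies/second (rough)")
```

On a machine with aggressive power management, the individual window rates will visibly disagree with each other, which is exactly why a single short benchmark is unreliable.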
ID: 1380675 · Report as offensive
juan BFP · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1380679 - Posted: 13 Jun 2013, 14:30:51 UTC - in response to Message 1380675.  
Last modified: 13 Jun 2013, 14:32:41 UTC

I won't quote the whole post, but just a couple of points for juan.

1) I totally agree that the total computing power of a project is potentially knowable via proper engineering measurement, and that would be the proper way to do it. I was merely pointing out that, so far, BOINC isn't engineered to do that through the whole reporting chain, server --> client --> server --> stats site. So any TFlops claim for a BOINC project that you see, today, is fatally compromised.

Totally agree, but I still believe there must be a way to measure just the WUs returned and the real processing power that was used to crunch each one. Some brainstorming could help to find how.


Nah, you can't get a figure that way without simulating the entire operation in a virtual machine and physically counting the operations involved. It'd take a day to run an hour-long op.

Best bet is to benchmark the individual components - the benchmark that BOINC runs isn't long enough to give a valid figure though. Power saving, dynamic clocking, thermal throttling, 'boost' modes, etc, these all conspire to produce an unreliable result.

Question is, how many people are going to voluntarily run a benchmark that lasts 60 minutes or more to catch all that? Not many.

As you point out, BOINC periodically runs a benchmark of the system. In my mind that could be a starting point: use that number as a base for comparing the processing power needed for the already-crunched WUs. I just believe that would be an easy way to measure the real production of any project, and a more balanced way to compare them than the fatally compromised credit system.
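For illustration, the "benchmark times CPU time" idea can be sketched as below. It assumes the classic cobblestone rate (200 credits per GFLOPS-day); the function name `estimated_credit` and the host numbers are hypothetical, not anything BOINC actually ships.

```python
# Estimate a task's work as (benchmarked speed) x (CPU time), then
# convert to cobblestones at the fixed 200-per-GFLOPS-day rate.
CREDIT_PER_FLOP = 200 / (86_400 * 1e9)  # 200 credits per GFLOPS-day

def estimated_credit(benchmark_flops_per_sec, cpu_seconds):
    """Benchmark-times-time estimate of one task's work, in cobblestones."""
    return benchmark_flops_per_sec * cpu_seconds * CREDIT_PER_FLOP

# e.g. a 3 GFLOPS (Whetstone) host crunching a WU for one hour:
print(estimated_credit(3e9, 3600))  # ≈ 25 credits
```

This is essentially the old benchmark-based claimed-credit scheme, and it inherits the benchmark-reliability problems discussed just above.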
ID: 1380679 · Report as offensive
Profile shizaru
Volunteer tester
Avatar

Send message
Joined: 14 Jun 04
Posts: 1130
Credit: 1,967,904
RAC: 0
Greece
Message 1380683 - Posted: 13 Jun 2013, 15:01:38 UTC - in response to Message 1380656.  

[snip]...If it said "these volunteers have been awarded so many cobblestones (credits)", I'd be happy. It would be an accurate (though meaningless) statement. But by expressing the values in GFlops, the page is making a scientifically invalid statement - close to a fraudulent claim about the computing power available to scientists, who might be considering whether to set up a BOINC project to service their computational needs.


A rare mistake by Richard:)

I hope you don't mind me pointing it out and maybe even reel you in a bit as "fraudulent" seems a bit harsh... A Cobblestone is very much a measure of GFlops. Credits (terrible choice of word, I agree) are readily translatable into floating point operations as can be seen by Seti/Boinc's very own Certificate print-out feature.

But I'd like to take this opportunity to further hijack Lionel's thread (which we've already made a mess of) and ask, "are there any clues"? What's happening? Are people silently troubleshooting why credits are low all of a sudden?

I ask not for 'credit' but I do love a good puzzle (especially when math/logic is involved). I'm sure there's an OCD or two in there as well:)
ID: 1380683 · Report as offensive
Profile William
Volunteer tester
Avatar

Send message
Joined: 14 Feb 13
Posts: 2037
Credit: 17,689,662
RAC: 0
Message 1380686 - Posted: 13 Jun 2013, 15:09:37 UTC - in response to Message 1380683.  

[snip]...If it said "these volunteers have been awarded so many cobblestones (credits)", I'd be happy. It would be an accurate (though meaningless) statement. But by expressing the values in GFlops, the page is making a scientifically invalid statement - close to a fraudulent claim about the computing power available to scientists, who might be considering whether to set up a BOINC project to service their computational needs.


A rare mistake by Richard:)

I hope you don't mind me pointing it out and maybe even reel you in a bit as "fraudulent" seems a bit harsh... A Cobblestone is very much a measure of GFlops. Credits (terrible choice of word, I agree) are readily translatable into floating point operations as can be seen by Seti/Boinc's very own Certificate print-out feature.

But I'd like to take this opportunity to further hijack Lionel's thread (which we've already made a mess of) and ask, "are there any clues"? What's happening? Are people silently troubleshooting why credits are low all of a sudden?

I ask not for 'credit' but I do love a good puzzle (especially when math/logic is involved). I'm sure there's an OCD or two in there as well:)

Want to walk the code?

It's probably a combination of fast (GPU) rigs reporting in first, pushing the relevant pfc_whatever in the wrong direction, with rsc_fpops_est aiming at too low a DCF (if we still had DCF).
It's also very much unexpected and unintentional.
You could ask David, but I doubt he can answer that riddle...
A person who won't read has no advantage over one who can't read. (Mark Twain)
ID: 1380686 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 14391
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1380689 - Posted: 13 Jun 2013, 15:16:58 UTC - in response to Message 1380683.  

[snip]...If it said "these volunteers have been awarded so many cobblestones (credits)", I'd be happy. It would be an accurate (though meaningless) statement. But by expressing the values in GFlops, the page is making a scientifically invalid statement - close to a fraudulent claim about the computing power available to scientists, who might be considering whether to set up a BOINC project to service their computational needs.

A rare mistake by Richard:)

I hope you don't mind me pointing it out and maybe even reel you in a bit as "fraudulent" seems a bit harsh... A Cobblestone is very much a measure of GFlops. Credits (terrible choice of word, I agree) are readily translatable into floating point operations as can be seen by Seti/Boinc's very own Certificate print-out feature.

I'm not convinced. What real-world exchange rates are you using, in each direction?

As we're observing here, the exchange rate from 'real work done' (GFlops) to credits is very much a floating rate - floating very high in the water indeed at some of those credit-candy projects, sinking at SETI. You can't use a variable exchange rate in one direction, and a fixed exchange rate in the return direction, and expect to end up where you started.

Zimbabwean dollars, anyone? If a banker tried it, it would certainly be fraud.
ID: 1380689 · Report as offensive
Profile ML1
Volunteer moderator
Volunteer tester

Send message
Joined: 25 Nov 01
Posts: 14251
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1380690 - Posted: 13 Jun 2013, 15:17:56 UTC - in response to Message 1380600.  
Last modified: 13 Jun 2013, 15:22:40 UTC

...Then it is all a game and back to work. But here we try to make it seem serious stuff. ;-) :-)

Greetings to the kitties! ;-) ;-) :-) :-)

Sleepy

This is a science project. Its success as such has at times made it a poster child for being a social project as well. ...

Dr. Anderson's algorithms for credit granting have fallen flat on their arse.

I doubt that he shall rewrite them now, too much egg on face to do so. Best to stand pat, eh?...

Mark, good to see you're still crunching onwards ever faster! ;-)


For a science project, I'd have liked to see the 'credits' as something scientific. However, the credits have always been something of an aside for the "social project" side. To be fair, the credits certainly do seem to add a fun competitiveness!

Weirdly, it's the social aspects of the project that appear to have gained all the funding!! Meanwhile, there have been spectacular benefits for other science and social projects working with Boinc.

Amidst all that, the credits have become a low priority nightmare that seem to generate an explosion of interest only when there is some change...


So we started with just counting WUs and CPU time (s@h Classic).

Boinc made those counts moot, and so a new unit was punningly cobbled together, corrupted from Dhrystones into "cobblestones". (Anyone like to elaborate on all the multiple puns there? ;-) )

Note that the cobblestones abstraction is based upon ideas from 1970s hardware and appears to ignore real-world hardware resource costs.

Various tweaks have been made to maintain an uneasy parity across different Boinc projects based upon CPU usage.

Has GPU usage and now optimised clients smashed the old credits balance?


A-N-Other and I argued a few times over basing credits on a NIST-style hierarchical host calibration that would grant credit based on how many transistor bit-flips a WU needs (including WU network activity). Sorry folks, no funding for that, and no student time to pick it up either...


Hence, we have the quick pan-project units, or... Cobblers to you and I!

Happy faster crunchin',
Martin
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
ID: 1380690 · Report as offensive
Profile shizaru
Volunteer tester
Avatar

Send message
Joined: 14 Jun 04
Posts: 1130
Credit: 1,967,904
RAC: 0
Greece
Message 1380701 - Posted: 13 Jun 2013, 15:48:50 UTC

@William

I admire your determination to "walk the code" but you keep saying that as if there's a typo somewhere in CreditNew. I bet (my Seti credits:) that CN is working exactly as the DA expects it to (for better or for worse). And yes, if no-one has thought to ask him, there's a chance it might take him 2 seconds to figure out why V7 credits are doing what they are doing. Of course 'fixing' it would be an entirely different kettle of fish... But the question remains: Why?

@Richard

I never said Cobblestones were an accurate measure of GFlops. I was just pointing out that they are a measure of GFlops. In other words, whether any page uses Cobblestones or GFlops... it's the same thing. Simplifying any Cobblestone equation would give you a specific (ie non-variable exchange rate) number of floating point operations. I wasn't implying the rest of what you said was wrong, for you are spot-on as usual.

It's just that you said:
If it said "these volunteers have been awarded so many cobblestones (credits)", I'd be happy.


And I was trying to say "you shouldn't be happy" for Cobblestones imply GFlops too. Now I'll admit to nitpicking and put a sock in it:)
ID: 1380701 · Report as offensive
Profile William
Volunteer tester
Avatar

Send message
Joined: 14 Feb 13
Posts: 2037
Credit: 17,689,662
RAC: 0
Message 1380707 - Posted: 13 Jun 2013, 16:00:54 UTC - in response to Message 1380701.  

@William

I admire your determination to "walk the code" but you keep saying that as if there's a typo somewhere in CreditNew. I bet (my Seti credits:) that CN is working exactly as the DA expects it to (for better or for worse). And yes, if no-one has thought to ask him, there's a chance it might take him 2 seconds to figure out why V7 credits are doing what they are doing. Of course 'fixing' it would be an entirely different kettle of fish... But the question remains: Why?

The way David codes, I'm pretty sure there are bugs left to find.

Also I think the design is inherently flawed - that you can't apply the statistical tools David is using to the kind of data we have.

I also doubt CN is working to design specs.
A person who won't read has no advantage over one who can't read. (Mark Twain)
ID: 1380707 · Report as offensive
Josef W. Segur
Volunteer developer
Volunteer tester

Send message
Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 1380722 - Posted: 13 Jun 2013, 16:25:26 UTC - in response to Message 1380644.  

William wrote:
...
Anybody care to dig out a CPU only machine and compare?

As a partial indicator, the APR's for host 6848434 and host 6894117 for stock v7 are slightly higher than they had for stock v6. Both are i7-2600 machines running Win7 SP1 so will be getting better performance from v7 since it includes AVX routines and uses FFTW 3.3.3 (which also uses AVX on such systems).
                                                                   Joe
ID: 1380722 · Report as offensive
Profile William
Volunteer tester
Avatar

Send message
Joined: 14 Feb 13
Posts: 2037
Credit: 17,689,662
RAC: 0
Message 1380734 - Posted: 13 Jun 2013, 16:47:11 UTC - in response to Message 1380722.  

William wrote:
...
Anybody care to dig out a CPU only machine and compare?

As a partial indicator, the APR's for host 6848434 and host 6894117 for stock v7 are slightly higher than they had for stock v6. Both are i7-2600 machines running Win7 SP1 so will be getting better performance from v7 since it includes AVX routines and uses FFTW 3.3.3 (which also uses AVX on such systems).
                                                                   Joe

Thanks Joe - I was trying to check RAC development on CPU-only hosts. But no matter which one I pull out, they all reset their stats in BoincStats, making it impossible to compare CPU V6 and V7 RAC :(

Besides my GPU v7 APR is about half of my v6 APR - not a third ...
A person who won't read has no advantage over one who can't read. (Mark Twain)
ID: 1380734 · Report as offensive
Ingleside
Volunteer developer

Send message
Joined: 4 Feb 03
Posts: 1546
Credit: 15,832,022
RAC: 13
Norway
Message 1380783 - Posted: 13 Jun 2013, 18:12:08 UTC - in response to Message 1380701.  

I never said Cobblestones were an accurate measure of GFlops. I was just pointing out that they are a measure of GFlops. In other words, whether any page uses Cobblestones or GFlops... it's the same thing. Simplifying any Cobblestone equation would give you a specific (ie non-variable exchange rate) number of floating point operations. I wasn't implying the rest of what you said was wrong, for you are spot-on as usual.

The problem is, in BOINC-project #1, 1 Cobblestone == N FLOPS; in BOINC-project #2, X Cobblestones == N FLOPS; in BOINC-project #3, Y Cobblestones == N FLOPS...

If one assumes all 3 projects are the same size, the total production would be 3N FLOPS. But in practice the FLOPS are reported as (1 + X + Y)N FLOPS, where X and Y > 1; in some instances X and Y are probably >> 1.
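That arithmetic, with made-up inflation rates X and Y, looks like this (all numbers are invented for illustration):

```python
# Each project really performs N FLOPS, but credit is granted at a
# different per-project rate; summing credits and converting back at a
# single fixed rate inflates the total.
N = 1e12          # actual FLOPS done at each project (hypothetical)
X, Y = 3.0, 10.0  # assumed credit inflation factors at projects 2 and 3

actual_total = 3 * N              # what the hardware really did
reported_total = (1 + X + Y) * N  # what the summed credits imply

print(reported_total / actual_total)  # inflation factor, here ~4.67
```

With X and Y both greater than 1, the reported total always exceeds the real one, which is the discrepancy being described.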

"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."
ID: 1380783 · Report as offensive
Ingleside
Volunteer developer

Send message
Joined: 4 Feb 03
Posts: 1546
Credit: 15,832,022
RAC: 13
Norway
Message 1380792 - Posted: 13 Jun 2013, 18:39:28 UTC - in response to Message 1380644.  

2: Things have already stabilized; this is the intended behaviour.

Less credit wasn't intended - more like a collateral.

Well, I remember back when SETI@home introduced crediting based on counting FLOPS, when many users running optimized applications were very upset that they got less credit. For anyone using the default applications, on the other hand, the impact was only a few percent, so it wasn't really a big deal.

The reason for the large impact was that, while the optimized apps were maybe 3x - 5x faster than the standard application before the introduction of FLOPS-counting, some of the optimizations were at the same time added to the standard application, and afterwards optimized was maybe only 1.5x faster. So the decrease was actually because the application wasn't so much faster any longer.



So, is the same thing happening now, where the optimized applications suddenly aren't 2.5x faster any longer but only 1.25x faster, and because of this the RAC for anyone running optimized applications is halved, while for anyone running the standard application there's again only a few percent difference?

The two computers Josef W. Segur found seem to indicate this can be the case: the decrease in RAC is due to the optimized application not being so much faster than the standard application any longer.

"I make so many mistakes. But then just think of all the mistakes I don't make, although I might."
ID: 1380792 · Report as offensive
Profile HAL9000
Volunteer tester
Avatar

Send message
Joined: 11 Sep 99
Posts: 6533
Credit: 196,805,888
RAC: 57
United States
Message 1380795 - Posted: 13 Jun 2013, 18:44:38 UTC - in response to Message 1380734.  

William wrote:
...
Anybody care to dig out a CPU only machine and compare?

As a partial indicator, the APR's for host 6848434 and host 6894117 for stock v7 are slightly higher than they had for stock v6. Both are i7-2600 machines running Win7 SP1 so will be getting better performance from v7 since it includes AVX routines and uses FFTW 3.3.3 (which also uses AVX on such systems).
                                                                   Joe

Thanks Joe - I was trying to check RAC development on CPU-only hosts. But no matter which one I pull out, they all reset their stats in BoincStats, making it impossible to compare CPU V6 and V7 RAC :(

Besides my GPU v7 APR is about half of my v6 APR - not a third ...

You can see the comparison on FreeDC, as their stats were not reset.
I was doing that recently for 6417013, which changed from 15-18k/day to about 6-7k/day.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the BP6/VP6 User Group today!
ID: 1380795 · Report as offensive
Profile shizaru
Volunteer tester
Avatar

Send message
Joined: 14 Jun 04
Posts: 1130
Credit: 1,967,904
RAC: 0
Greece
Message 1380815 - Posted: 13 Jun 2013, 19:22:51 UTC

@Ingleside

This is tripping everybody up:)

1 Cobblestone == N FLOPS across ALL projects.

I can't tell you the value of N off the top of my head, but I assure you it is constant. You are confusing that with a project's ability to declare how much a WU is worth in FLOPS (which can be inflated) - hence the discrepancy you are pointing towards. In other words, if I had 1 Cobblestone of credit at every single BOINC project, and I could print out a certificate for every single one of those projects... all projects would declare the same N floating point operations.

As to your other post, I think the problem this time around is that a stock workunit is worth 70 when it should be worth 100 (I pulled those numbers out of you know where). I'm also pretty sure that all the necessary adjustments have been made (by Eric & crew) for a WU to show 100 but they are still showing 70. And everybody is scratching their head and for the moment blaming CreditNew. This is all AFAICT. I could be wrong in every single sentence in this paragraph.
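For reference, the fixed rate being discussed follows, as I understand it, from the classic BOINC cobblestone definition: a host sustaining 1 GFLOPS for a day earns 200 cobblestones. A quick sketch of the conversion:

```python
# Classic cobblestone definition (per the BOINC documentation):
# 200 credits per day at a sustained 1 GFLOPS.
SECONDS_PER_DAY = 86_400
HOST_FLOPS_PER_SEC = 1e9   # 1 GFLOPS reference host
CREDITS_PER_DAY = 200

flops_per_cobblestone = SECONDS_PER_DAY * HOST_FLOPS_PER_SEC / CREDITS_PER_DAY
print(flops_per_cobblestone)  # 4.32e11 floating point operations per credit
```

So each cobblestone nominally stands for about 4.32 × 10^11 floating point operations, which is the constant N referred to above.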
ID: 1380815 · Report as offensive
Profile ML1
Volunteer moderator
Volunteer tester

Send message
Joined: 25 Nov 01
Posts: 14251
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1380817 - Posted: 13 Jun 2013, 19:34:11 UTC - in response to Message 1380792.  

... So, is the same thing happening now, where the optimized applications suddenly aren't 2.5x faster any longer but only 1.25x faster, and because of this the RAC for anyone running optimized applications is halved, while for anyone running the standard application there's again only a few percent difference?

The two computers Josef W. Segur found seem to indicate this can be the case: the decrease in RAC is due to the optimized application not being so much faster than the standard application any longer.

That is my suspicion also.


Also note that a goodly proportion of the people active on these forums will be the optimised apps users...

Meanwhile, the vast silent majority remain silently happy.


Happy fast crunchin',
Martin

See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
ID: 1380817 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 14391
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1380826 - Posted: 13 Jun 2013, 19:50:54 UTC - in response to Message 1380815.  

OK, let's try it this way.

Say the US$ is the gold standard, and is the GFlop - the actual 1 billion floating point operations my computer has done. They are factual, and could - conceptually - have been counted.

Say I do 100 billion operations for SETI, and SETI pays me in my local currency - GB pounds. At today's exchange rate, I get 64 pounds for my $100 of flops.

But I could have done my 100 billion Flops for a project which pays in Zimbabwean dollars. It would have paid me 36,190 Zimbabwean dollars - again, at today's exchange rate.

So, after two days of work, I have 64 GBP and 36,190 ZWD

According to David, and to you (sorry, nothing personal) all foreign currency is the same. So when I take my earnings to a US bank, I'm handed $64 plus $36,190, or 36,254 US dollars. The claim is that I've contributed 36.254 TFlops to science. IT'S NOT TRUE. Count them - I only did 200 GFlops of work.
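The arithmetic of that analogy, using the rates from the post (the variable names are mine):

```python
# Real work is converted to per-project credit at wildly different
# rates, then the credits are naively summed and relabelled as FLOPS.
gflops_done = 100.0    # real work done at each of the two projects
gbp_per_usd = 0.64     # SETI's "exchange rate" (credit per GFlop)
zwd_per_usd = 361.90   # the credit-candy project's rate

seti_credit = gflops_done * gbp_per_usd    # 64 "pounds"
candy_credit = gflops_done * zwd_per_usd   # 36,190 "Zimbabwean dollars"

claimed_gflops = seti_credit + candy_credit  # naive sum: 36,254 "GFlops"
actual_gflops = 2 * gflops_done              # 200 GFlops really computed

print(claimed_gflops, "claimed vs", actual_gflops, "actual")
```

The claimed figure is off by more than two orders of magnitude, which is the fraud being alleged: variable rates out, a fixed rate back.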
ID: 1380826 · Report as offensive
Profile HAL9000
Volunteer tester
Avatar

Send message
Joined: 11 Sep 99
Posts: 6533
Credit: 196,805,888
RAC: 57
United States
Message 1380838 - Posted: 13 Jun 2013, 20:31:48 UTC - in response to Message 1380826.  

OK, let's try it this way.

Say the US$ is the gold standard, and is the GFlop - the actual 1 billion floating point operations my computer has done. They are factual, and could - conceptually - have been counted.

Say I do 100 billion operations for SETI , and SETI pays me in my local currency - GBpounds. At today's exchange rate, I get 64 pounds for my $100 of flops.

But I could have done my 100 billion Flops for a project which pays in Zimbabwean dollars. It would have paid me 36,190 Zimbabwean dollars - again, at todays exchange rate.

So, after two days of work, I have 64 GBP and 36,190 ZWD

According to David, and to you (sorry, nothing personal) all foreign currency is the same. So when I take my earnings to a US bank, I'm handed $64 plus $36,190, or 36,254 US dollars. The claim is that I've contributed 36.254 TFlops to science. IT'S NOT TRUE. Count them - I only did 200 GFlops of work.

A very good explanation.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the BP6/VP6 User Group today!
ID: 1380838 · Report as offensive
©2021 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.