Let's Play CreditNew (Credit & RAC support thread)

Message boards : Number crunching : Let's Play CreditNew (Credit & RAC support thread)


Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13732
Credit: 208,696,464
RAC: 304
Australia
Message 1936738 - Posted: 23 May 2018, 4:15:55 UTC - in response to Message 1936709.  
Last modified: 23 May 2018, 4:31:07 UTC

- If less math per task is needed (i.e. fewer floating point operations) then that's fine. Did anyone "clean-up" the math for v7? Because if they did and less math is required, it won't matter what anyone declares a task to be worth. Credit will be lower. As it should.

Ah, no.
And you yourself actually pointed this out previously.

If 2 GPU applications validate against each other, they shouldn't get more Credit than if one validated against a CPU, or than 2 CPUs validating the same WU against each other (remember, you pointed this out). Likewise, if 2 GPU applications validate against each other, they shouldn't get less Credit than if one validated against a CPU, or than 2 CPUs doing the same WU validating against each other.

Sure, the amount of processing they did was less (due to optimisations).
However, what constitutes a Cobblestone (1 Credit) has been defined, with no variations allowed for efficiency or lack thereof.

A WU of a given FLOPs, processed on the reference system, will produce a given amount of Credit. Although the optimised application doesn't have to do all the processing that the reference application does, it is still processing the WU and giving a valid result, and so is due the same amount of Credit as the reference system would earn for that WU.
As you pointed out, systems shouldn't get a Credit bonus depending on who they Validate with. Nor should they take a Credit penalty, just because of who they validate with.


- But if "real" flops are the same or even increased, and you are introducing faster apps and you are seeing less credit for exactly the same kind of work... the question is why?

Because the only way I can imagine for that to happen (doesn't mean I'm right of course) is if we are supposedly running at "peak efficiency". And obviously in the real world this is impossible.

It's simple.
As far as Credit New is concerned, the efficiency of CPUs isn't important, but the efficiency of GPUs is.
A high-end GPU will always outperform a high-end CPU on a per-core basis (which is what counts). Presently, the applications running on CPUs are pretty close to their theoretical maximum capability. The GPU applications, while producing work many times faster than a CPU could ever hope to, are much, much further from their theoretical capability.
And by design, Credit New punishes poor GPU efficiency, but doesn't care about poor CPU efficiency.
Grant
Darwin NT
ID: 1936738 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13732
Credit: 208,696,464
RAC: 304
Australia
Message 1936757 - Posted: 23 May 2018, 7:34:22 UTC
Last modified: 23 May 2018, 7:39:20 UTC

Attempting to get things straight in my head, and posting here so people can point out where I've gotten lost, or just managed to confuse myself. Who knows, my state of confusion may help with others' understanding.
a 1 GigaFLOP machine, running full time, produces 200 units of credit in 1 day

1 GFLOPS = 200 Credits / 1 day (or 24 hrs, or 86,400 secs)
or 1 GFLOPS * 1 day = 200 Credits.
or 1 day = 200 Credits / 1 GFLOPS (expressing it this way really does my head in).


Credit system 1
C = H.whetstone * J.cpu_time

C= Credit claimed
H.whetstone= Whetstone benchmark for that host (hardware), in FLOPS; this is a peak value.
J.cpu_time= Time taken to process WU

I suspect there is a scaling factor missing from this equation, as I don't recall earning (3.7 GFLOPS * 4 hours) worth of Credit at any one time. I would guess that the scaling factor was based on the Cobblestone definition (i.e. so that a WU that took 1 day to process on a 1 GFLOPS processor would get 200 Credits, half a day on a 2 GFLOPS processor would get 200 Credits, etc).
From the information in the Credit New Wiki, there is a variable called cobblestone_scale which is 200/86400e9. This was most likely used to give the actual value awarded for Credit.


Credit system 2
Same as the first, however the H.whetstone value was actually counted. ie the number of FLOPs done was counted as the WU was processed. As well as the scaling I suspect was used (although not mentioned) on the first system, there was also further scaling so that the Credit per WU awarded was inline with what was being awarded by the first system, less the extreme highs and lows it produced.


Credit New
C = F * cobblestone_scale

C= Credit
F= Claimed Peak Flops Count
cobblestone_scale= 200/86400e9 (200 Credits divided by (seconds in a day * 1 GFLOPS))
It looks simple enough, but it's how the value for F is determined that really makes things unpleasant.
It actually went back, in part, to the system used in Credit system 1, using the Whetstone benchmark for the device and the time taken to do the work; but that is all modified by factors too numerous & involved for me to have any hope of expressing it in a formula (that F (Claimed Peak FLOPS Count) part).


Credit New Updated
C= WU.Credit

C= Credit
WU.Credit= the credit that the particular WU is worth.

Attempted in the first system, and mostly realised in system 2 was that Credit was awarded depending on the amount of work required to process a WU (FLOPs). The amount of Credit for a WU was based on the definition of a Cobblestone. A WU of so many FLOPs would take so much time to process on the 1GFLOP reference system, and so would be worth that amount of Credit.
We could base the new system on the results of Credit awarded for work using the second system, or someone with far more ability than myself could work out the Credit that would be given for present WUs with ARs from the lowest (greatest FLOPs required) to the highest (least FLOPs required).
A WU of a particular AR has an estimated FLOPs, and it would be worth the Credit calculated for that number of FLOPs if processed by the reference 1GFLOP system.
Credit would not vary by the hardware or application you use to process it, nor by the hardware or application your wingman used to process it. If you run a more powerful system, you'll get more Credit. If you run a better optimised application, you'll get more Credit.


Goals of the new (third) credit system
•Completely automated - projects don't have to change code, settings, etc.

As far as I can tell, this is the case with the current system- for those projects that don't allow Anonymous platform & whose WUs are computationally one size, and one size only.
For projects such as Seti with Anonymous platform & different difficulties of WU, I don't believe that is the case. There are quite a few variables that need to be set to help Credit New give the appropriate Credit, and one where it actually requires a separate application to process WUs of differing FLOPs in order for Credit New to work as intended.
With Credit New Updated, even for Seti with its variable tasks & Anonymous platform, the amount of initial tweaking of settings would be reduced significantly, and there is no need for a different application for WUs of differing FLOPs to avoid upsetting the Credit mechanism.

•Device neutrality

If the present system were device neutral, we wouldn't have this thread (or all the preceding ones).
Credit New Updated would provide that- it doesn't matter what device processes the work, if it's Valid you get Credit for it & there is no variation in Credit depending on your hardware or software or OS or your wingman's hardware, software or OS.

•Limited project neutrality: different projects should grant about the same amount of credit per host-hour, averaged over hosts. Projects with GPU apps should grant credit in proportion to the efficiency of the apps. (This means that projects with efficient GPU apps will grant more credit than projects with inefficient apps. That's OK).

This certainly hasn't been the case with Credit New for the projects that have used it. And then there are the projects that don't use it (for various reasons).
With Credit New Updated as long as the estimates for the calculations required to process the work are close to their actual value, RAC between projects will be comparable. However with the differences in efficiency between project applications, those projects with more efficient applications will result in higher RACs for their crunchers.

•Cheat-resistance.

As I don't know if the anti-cheating part of the present Credit system is in effect, I can't say whether it works or not. As it is, we still get cherry picking, and maybe some people still try to game the system to get more credit for the work they have done.
With Credit New Updated, Credit is awarded for the WU processed; time to process, benchmarks, efficiency etc of the system doing the work play no part. Cherry picking would be the only way to game the system, and that would best be addressed as part of the error/invalid/un-returned work system (which needs work).
Grant
Darwin NT
ID: 1936757 · Report as offensive
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22190
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1936780 - Posted: 23 May 2018, 12:19:06 UTC

The bulk of the "anti-cheating" is done in the validator, and it is remarkably crude - try and re-submit a work unit you've already done and you won't get credit for it. Return a blank or corrupted result and you'll get nothing. As work units require two people, who "don't know" each other, to return results that pass validation most forms of conspiratorial cheating will probably be caught and ditched.
Rescheduling would be remarkably easy to trap within the validation process as the information required is there. But nobody has grasped the nettle to implement it as it is "not seen as a problem".
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1936780 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1936785 - Posted: 23 May 2018, 12:53:03 UTC

I have three machines with mis-matched GPUs. They have one GTX 970 and one GTX 750Ti each. BOINC thinks it's sending tasks to, and getting results back from, one of the "GTX 970 (2)" which is all it knows about. In fact, while GPUGrid has work, the SETI work is done on the 750Tis.

How much of the statistical underpinning of CreditNew am I compromising?
ID: 1936785 · Report as offensive
kittyman Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1936789 - Posted: 23 May 2018, 12:57:00 UTC

I have mixed GPUs on most of my rigs.
I would certainly hope that the credit system could cope properly with that, but maybe not so much.
Meow?
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1936789 · Report as offensive
Profile iwazaru
Volunteer tester

Joined: 31 Oct 99
Posts: 173
Credit: 509,430
RAC: 0
Greece
Message 1936794 - Posted: 23 May 2018, 13:27:33 UTC - in response to Message 1936738.  

Grant you can pretty much ignore my last post as that was not at all what I meant :)
I was in a hurry and blurted out two rhetorical questions, and obviously poorly worded at that!

I just hope Raistmer is in the mood to explain why he's saying his optimizations are getting punished.
And why Eric is saying the opposite (pretty much).
But I know the subject of credit bores him to death and I don't blame him :)
I do however have my fingers crossed that after five years MAYBE we can get him interested in a thread that is more "math puzzle" and less "CN Flame Wars Battleground"
:D
- - - - - - - - -

The first thing I'd like to say about your next post is about Cobblestones. Maybe today/tomorrow I'll finally have some time to write up a small paragraph, but the TL;DR version is that Credit=Cobblestones and we could (and likely should) use those two words interchangeably. Better yet, let's never use the word "credit" ever again ;)
ID: 1936794 · Report as offensive
kittyman Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1936795 - Posted: 23 May 2018, 13:29:08 UTC - in response to Message 1936794.  

They're not credits. They're toaster points.
MeowLOL.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1936795 · Report as offensive
Profile Zalster Special Project $250 donor
Volunteer tester

Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1936796 - Posted: 23 May 2018, 13:37:57 UTC - in response to Message 1936795.  

More like bitcoins, the closer you get to the prize, the more they hard fork it and water down your credits...
ID: 1936796 · Report as offensive
kittyman Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1936797 - Posted: 23 May 2018, 13:42:37 UTC - in response to Message 1936796.  

More like bitcoins, the closer you get to the prize, the more they hard fork it and water down your credits...

A closer analogy than I might like........................
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1936797 · Report as offensive
Profile Raistmer
Volunteer developer
Volunteer tester

Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1936799 - Posted: 23 May 2018, 13:45:30 UTC - in response to Message 1936794.  


I just hope Raistmer is in the mood to explain why he's saying his optimizations are getting punished.

Not mood but time to write example, will do later.

And why Eric is saying the opposite (pretty much).

Where? Could you provide link?
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1936799 · Report as offensive
Profile iwazaru
Volunteer tester

Joined: 31 Oct 99
Posts: 173
Credit: 509,430
RAC: 0
Greece
Message 1936802 - Posted: 23 May 2018, 14:09:03 UTC - in response to Message 1936799.  

I was going off memory but I'll try and dig up his old posts.

But for now my daughter is demanding Masha & the Bear :D

(Really)
ID: 1936802 · Report as offensive
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22190
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1936818 - Posted: 23 May 2018, 16:26:40 UTC

How much of the statistical underpinning of CreditNew am I compromising?

A bit...
Crudely, the scoring system will attempt to "understand" what is going on, and should arrive at some sort of compromise scaling between the two GPUs, based on their processing rates and the number of tasks completed. In reality it will oscillate around the "correct" value depending on the most recent set of tasks returned and the balance of task types. From memory there isn't too much of a performance difference between the GTX750ti and GTX970 so the impact won't be too big.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1936818 · Report as offensive
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22190
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1936820 - Posted: 23 May 2018, 16:32:23 UTC

Some time ago I had a face to face with Eric - he was aware of the growing problems associated with ever-increasing GPU performance and the possibility of someone coming up with a very effective optimisation for one or other of the GPU families. I can't recall if we came to any conclusions (apart from the part bottle of spirits we were "inspecting").

One has to remember that sometimes Eric has to "toe the party line" while actually biting his tongue - I believe Eric's stand is "it's about as stable as we can get it, and I don't want to do anything that will cause it to be less stable". Both views are understandable.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1936820 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1936826 - Posted: 23 May 2018, 17:21:13 UTC - in response to Message 1936818.  

From memory there isn't too much of a performance difference between the GTX750ti and GTX970 so the impact won't be too big.
About 2:1, as Shaggie's chart suggests. The main effect of the way I work will be to slightly deflate the performance shown for the 970s there.

It was primarily a tongue-in-cheek remark, but there's a serious underside - BOINC's current reporting will undermine any attempt to 'seed' runtime and credit values with realistic values derived from hardware - such as the benchmarks try (but fail) to do for CPUs.

I'm not sure to what extent the current code follows the CreditNew whitepaper

BOINC estimates the peak FLOPS of each processor. For CPUs, this is the Whetstone benchmark score. For GPUs, it's given by a manufacturer-supplied formula.

GPUs typically have a higher (10-100X) peak FLOPS than CPUs. However, application efficiency is typically lower (very roughly, 10% for GPUs, 50% for CPUs).
(I haven't paused and considered those two statements together recently). It would be interesting to resurrect the old Flopcounter code for MB, update it carefully to v8 specification, build some representative apps, and run a few under an old flopcounter-aware version of BOINC. I suspect the CPU efficiency recorded for CPU apps on modern AVX hardware could be 300% or even higher.
ID: 1936826 · Report as offensive
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22190
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1936832 - Posted: 23 May 2018, 17:44:01 UTC

I read the white paper some time ago - a quick re-read suggests that a number of the assumptions are not valid - in particular those around GPU vs CPU "efficiency". If we take the suggested figures then it would barely be worth running anything below something like a GTX980 in terms of credit/hour/device, and yet we know that there is a not insubstantial advantage in running even a 4xx. As I said a while ago, these assumptions, which probably form the basis of many of the actual "magic numbers" in the code, were laid down a fair time before the GPU and optimised-application boom we've seen in recent years. I have the benefit of 20/20 hindsight; I would probably have come to similar values back then.

You are correct in your assumption in the way the 750 will drag back the apparent performance of the 970, but the opposite is also true - the 970 will pull up the apparent performance of the 750. "A Reet Royal Mess"... I would guess that the overall impact on your system will be that your credit/task is about 15% down on the 970 and about 5% up on the 750.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1936832 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1936833 - Posted: 23 May 2018, 17:55:04 UTC - in response to Message 1936832.  

I would guess that the overall impact on your system will be that your credit/task is about 15% down on the 970 and about 5% up on the 750.
Except that I usually keep SETI off the 970s unless GPUGrid is out of work, or when - as now - I have a cache full of Arecibo VLARs I want to see the back of this side of doomsday. Screen lag doesn't matter when I'm asleep in bed.
ID: 1936833 · Report as offensive
Profile petri33
Volunteer tester

Joined: 6 Jun 02
Posts: 1668
Credit: 623,086,772
RAC: 156
Finland
Message 1936847 - Posted: 23 May 2018, 19:53:38 UTC - in response to Message 1936634.  


...
CreditScrew is just what it is, based on too many assumptions that aren't valid in real life.
The most hated (for me) part of it: it directly discourages any app optimization. I think it's counterproductive. Think Petri would agree...


Hi Raistmer!

I totally agree.

From what I have seen, the faster a WU is computed, the less credit is claimed/awarded. My slowest cards get more points for the same type of WU compared to the fastest card. I suspect that even if I had all 4 TITAN V's I'd see less credit per WU after any optimisation in the code.

I remember the time some years ago when I ran 2-8 at a time per GPU. The credit per WU was higher compared to running 1 at a time. And the machine did more WUs per hour. Double 'good'.
To overcome Heisenbergs:
"You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
ID: 1936847 · Report as offensive
Profile Raistmer
Volunteer developer
Volunteer tester

Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1936859 - Posted: 23 May 2018, 22:28:39 UTC - in response to Message 1936802.  
Last modified: 23 May 2018, 22:44:18 UTC

I was going off memory but I'll try and dig up his old posts.

But for now my daughter is demanding Masha & the Bear :D

(Really)

I would say it's for quite small children.
Try to find "Смешарики" (not sure whether it was translated into other languages though) - much more informative and educational cartoons (actually with a layer for adults as well).

Well, regarding FLOPs:
Consider these two loops:

for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++) a[i][j] += 1;

and

for (int j = 0; j < N; j++)
    for (int i = 0; i < N; i++) a[i][j] += 1;

They have exactly the same FLOPs. But performance will be quite different on all modern devices (GPUs included).

Next example:

for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++) {
        ....
        a[i][j] += f(i);
        ...
    }

and

for (int i = 0; i < N; i++) {
    float temp = f(i);
    for (int j = 0; j < N; j++) {
        ....
        a[i][j] += temp;
        ...
    }
}

Obviously the second has fewer FLOPs. But what about performance? It depends!

How big is the f() implementation? Will temp be in a register, will temp be in cache....

Sometimes more FLOPs will provide better performance.

All this illustrates that performance != FLOPs at all. On modern devices with complex memory architectures it's very noticeable.

That's why even FLOP counting will fail as long as different types of hardware are used for the computations and the computations are inhomogeneous (as MultiBeam at different ARs is).

Regarding CreditScrew and optimization discourage:

Consider the stock app and an optimised one. The opt app provides better performance (mostly due to memory access patterns, which can't be measured in FLOPs, btw). So opt hosts do the same work faster and earn more credits.
But at some point stock implements the same optimisations... and recalibration occurs. FLOPs are the same, the stock app processes the same work on the same hardware faster... so stock is recalibrated to get the same credits as before.
What happens to the opt hosts now?...

As I said, it directly inhibits optimization...

BTW, that issue (the habit of not accounting for memory access costs) shows itself very vividly in the latest Spectre exploits. They are very elegant in the way they get additional info just from timings. It very much resembles some quantum physics cases where one can get additional info about a system just because some possibilities exist (non-zero) even if they are not realized in the particular experiment at all.
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1936859 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13732
Credit: 208,696,464
RAC: 304
Australia
Message 1936894 - Posted: 24 May 2018, 5:56:31 UTC

The bulk of the "anti-cheating" is done in the validator, and it is remarkably crude - try and re-submit a work unit you've already done and you won't get credit for it. Return a blank or corrupted result and you'll get nothing. As work units require two people, who "don't know" each other, to return results that pass validation most forms of conspiratorial cheating will probably be caught and ditched.

Yeah, the Credit New focus on cheating is aimed at inflated claims for credit (by falsifying elapsed time and/or benchmarks), and cherry picking.
Cherry picking would be best sorted out separately from the Credit mechanism.
And having Credit paid out for the WU processed, not based on a claim by the system that processed it, would stop dead any other forms of cheating, without the present complications.



I have three machines with mis-matched GPUs. They have one GTX 970 and one GTX 750Ti each. BOINC thinks it's sending tasks to, and getting results back from, one of the "GTX 970 (2)" which is all it knows about. In fact, while GPUGrid has work, the SETI work is done on the 750Tis.

How much of the statistical underpinning of CreditNew am I compromising?

If Credit New is working as intended, and your claims for Credit on work done by the GTX 750Tis are being assessed against the claimed peak performance of the GTX 970, the efficiency portion of the code will reduce the Credit paid out, because your GTX 750Ti will be even further from its (erroneously) claimed capability (3,494 GFLOPS) than it is from its own actual claimed capability (1,305.6 GFLOPS).
With a payout based on the computation required to process a WU, actual & theoretical performance will have no effect on granted Credit.
A more efficient application will result in more Credit per hour. A less efficient application will result in less Credit per hour. Efficiency is rewarded, inefficiency is punished, all without the need for involved manipulations of the Credit granted.
Grant
Darwin NT
ID: 1936894 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13732
Credit: 208,696,464
RAC: 304
Australia
Message 1936896 - Posted: 24 May 2018, 6:39:20 UTC

As to how close Seti comes to awarding Credit in accordance with the definition of a Cobblestone...
from the Wiki on the Cobblestone,
The average FLOPS (floating point operations per second) achieved by a computer or group of computers can be estimated from its Recent Average Credit (RAC) as follows:
GigaFLOPs = RAC/200
TeraFLOPS = RAC/200,000

(Remember that a 1 GigaFLOP machine, running full time, produces 200 units of credit in 1 day).

For my 2600k system, the CPU is rated at 3.7 GFLOPS. My GTX 1070 is rated at 5,783 GFLOPS (boost rating) and I have two of them.
So I figure my system is good for 11,569.7 GFLOPS, maximum possible peak.

Let's assume overall efficiency is around 5%, so that gives us 578.485 GFLOPS. After falling off a cliff for the last little while my RAC is making a slight comeback at present, and is now back up to 48,336.
Going by my RAC, actual performance is around 241.68 GFLOPS; not even half of my estimate. I would expect the actual system efficiency to be better than the 5% estimate I used,
but you wouldn't know that from my RAC.

It would be interesting to see what the RAC-derived FLOPS of a CPU-only system, running stock, 24/7 for at least 2 months, comes out as compared to its benchmark FLOPS rating.
Grant
Darwin NT
ID: 1936896 · Report as offensive



 
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.