Message boards : Number crunching : Observation of CreditNew Impact (2)
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13835 Credit: 208,696,464 RAC: 304 |
GPUs typically have a higher (10-100X) peak FLOPS than CPUs. However, application efficiency is typically lower (very roughly, 10% for GPUs, 50% for CPUs). I'm trying to figure out what he means by efficiency. A GPU can process a WU in much less time than a CPU. To me, that makes the GPU more efficient. I'm sure that if I could do my job in half the time I normally do it, my boss would consider that to be more efficient. If one person could do the work of 2 people in the same period of time, that would be considered more efficient. Grant Darwin NT |
Thomas Send message Joined: 9 Dec 11 Posts: 1499 Credit: 1,345,576 RAC: 0 |
If one person could do the work of 2 people in the same period of time, that would be considered more efficient. Credits = salary of the volunteers of the SETI@home project. I thus ask the Big Boss for a pay rise :p |
S@NL Etienne Dokkum Send message Joined: 11 Jun 99 Posts: 212 Credit: 43,822,095 RAC: 0 |
I also agree with most that lately the bottom was reached and credit now seems to settle... But in my case that meant a drop of 40% in RAC. |
Russell McGaha Send message Joined: 3 Apr 99 Posts: 11 Credit: 70,871,448 RAC: 106 |
I'm not a usual contributor to these threads; but as a CPU-only SETI cruncher I thought I'd give a data point to the discussion. Pre-v7 SETI RAC: approx. 10,200; current SETI RAC: 2,842. I believe that to be a LOT more of a disparity than there SHOULD be. Russell |
bill Send message Joined: 16 Jun 99 Posts: 861 Credit: 29,352,955 RAC: 0 |
If one person could do the work of 2 people in the same period of time, that would be considered more efficient. Like I said here Message 1387166 "If you are crunching for the credits, think of it like your job. One day you go to your job and the Boss says your work is no longer worth what he was paying you so he's going to pay you less from now on. What are you going to do? " |
ML1 Send message Joined: 25 Nov 01 Posts: 20982 Credit: 7,508,002 RAC: 20 |
Now, could you express that in kibbles so the kitties could understand what you just said? Excellent both! LOL :-) Happy faster crunchin', Martin ps: Thanks for the detail Jason. See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3) |
ML1 Send message Joined: 25 Nov 01 Posts: 20982 Credit: 7,508,002 RAC: 20 |
GPUs typically have a higher (10-100X) peak FLOPS than CPUs. However, application efficiency is typically lower (very roughly, 10% for GPUs, 50% for CPUs). Do not confuse "efficient" and "effective"... The GPU can compute in less time than a CPU, but, for example, can the GPU use its 1000 compute cores to do the job 1000 times faster than the CPU? The real-world shortfall is the percentage efficiency... For a different measure of "efficiency", there is also the energy efficiency of how many WUs per kWh... Happy fast crunchin', Martin See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3) |
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
GPUs typically have a higher (10-100X) peak FLOPS than CPUs. However, application efficiency is typically lower (very roughly, 10% for GPUs, 50% for CPUs). It's a ratio modelling 'ideal' vs real implementations, from mathematics & computer science: algorithmic complexity / implementation complexity. These complexities are usually a function of n, the dataset size. Typically the 'optimal' algorithmic complexity ignores a bunch of real-world costs & implementation details, such as communications costs (memory access or parallel data exchange), quality of implementation, or hardware factors. It's a mathematical (ideal) construct. [Several areas of the current GPU implementations have higher computational complexity than ideal, so are far less efficient than the 'ideal' relative to CPU apps.] In the 'real' implementation there are hardware & software implementation factors governing how close you get to this optimal. Most tend to be related to choice of algorithm & how well it fits to hardware & serial or parallel costs. In the current multibeam apps, examples of high efficiency would include the FFTs on the CPU and GPU. Pulsefinding efficiency would be high on CPU and low on GPU, due to some bad mapping choices made back when nVidia developed the initial Cuda apps. Autocorrelation efficiency would be high on CPU, and moderate to low on GPU. Finding spikes would be high on both, as with chirping, as those implementations are both O(n). A more extreme example: a 'lab' FFT should have complexity O(n log n) and a naive implementation O(n^2). That would make the naive implementation's efficiency (log n / n), which would be very low. O(n^2) is solvable in finite time, but usually there are more efficient implementation choices, since popular algorithms have had decades to centuries of theoretical refinement, from simple sorting to much more complex problems thought to have no solution in finite time. "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions. |
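As a rough illustration of the O(n log n) vs O(n^2) gap Jason describes, here is a minimal Python sketch (not SETI@home code; `naive_dft` and the 1024-sample size are just illustrative choices) comparing a direct DFT against a library FFT:

```python
import numpy as np

def naive_dft(x):
    """Direct DFT: every output bin sums over every input sample, so cost grows as O(n^2)."""
    n = len(x)
    k = np.arange(n)
    # n x n matrix of complex exponentials -- the quadratic cost is explicit here
    w = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return w @ x

x = np.random.rand(1024).astype(np.complex128)
slow = naive_dft(x)    # roughly n^2       = ~1,000,000 complex multiply-adds
fast = np.fft.fft(x)   # roughly n log2(n) = ~10,000 butterfly operations (Cooley-Tukey)
assert np.allclose(slow, fast)
```

Both produce the same spectrum, but by Jason's ratio the naive version's efficiency is roughly log n / n, i.e. about 1% at n = 1024.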
Thomas Send message Joined: 9 Dec 11 Posts: 1499 Credit: 1,345,576 RAC: 0 |
If one person could do the work of 2 people in the same period of time, that would be considered more efficient. It's a joke Bill... Don't be so serious... ;) |
kittyman Send message Joined: 9 Jul 00 Posts: 51477 Credit: 1,018,363,574 RAC: 1,004 |
As I have stated before, if I were paying my bills or buying kibble for the kitties with Seti creds, I might be up in arms about NewCredit or the current rate of return on validated work here. Since the only worth of Seti credits is to give a relative benchmark among Seti participants' rate of work done (we all know it has no meaning compared to other projects for reasons discussed many times), I am not engaging in the wailing or gnashing of teeth about it that some are. And folks.....this is coming from one who would probably be showing a 600k+ RAC if v6 were still the soup of the day, rather than the 446k I am currently graced with. Is my ox gored? NOT. "Time is simply the mechanism that keeps everything from happening all at once." |
cov_route Send message Joined: 13 Sep 12 Posts: 342 Credit: 10,270,618 RAC: 0 |
The O() notation is what mathematicians and computer scientists use to describe how "hard" a problem is. https://en.wikipedia.org/wiki/Big_O_notation O(log n) means that as a problem gets bigger, the time needed to solve it increases with the log of the size. This is usually considered "good" behavior. Something like O(n^2) or O(2^n) is considered "bad"; it wouldn't be hard to find a problem too big to solve with real-world hardware. Although for small problems the O(2^n) algorithm might actually be faster than the O(log n) one, and you might use it. If you have time on your hands you can also look into the related P versus NP problem. |
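To put some numbers on those growth classes, a small Python sketch (the step counts are purely illustrative of the curves, not measurements of any SETI application):

```python
import math

# Illustrative step counts for each growth class at a few problem sizes
# (nothing to do with any real workload -- just the shape of the curves).
for n in (10, 100, 1000, 10_000):
    two_to_n = "{:,}".format(2 ** n) if n <= 60 else "astronomically large"
    print("n={:>6}  log2(n)={:6.1f}  n^2={:>12,}  2^n={}".format(
        n, math.log2(n), n ** 2, two_to_n))
```

By n = 100 the 2^n column is already beyond any real-world hardware, while log2(n) has barely moved.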
bill Send message Joined: 16 Jun 99 Posts: 861 Credit: 29,352,955 RAC: 0 |
Who's serious? I don't crunch for the credit. It seems, though, that some people think there is a nefarious plot afoot. I figure the massive coronaries and strokes should start any time now. That should help stave off the horror of global warming by sequestering kilos of carbon underground in concrete-lined pits. :) |
bill Send message Joined: 16 Jun 99 Posts: 861 Credit: 29,352,955 RAC: 0 |
<snippeth> Have you checked your chicken to see if it's been choked? |
kittyman Send message Joined: 9 Jul 00 Posts: 51477 Credit: 1,018,363,574 RAC: 1,004 |
<snippeth> I think I would know about that. "Time is simply the mechanism that keeps everything from happening all at once." |
tbret Send message Joined: 28 May 99 Posts: 3380 Credit: 296,162,071 RAC: 40 |
I'm convinced that there is. Every time I enter the house the kitchen table conversation between my wife and kids goes quiet and the cats all turn and stare at me. |
bill Send message Joined: 16 Jun 99 Posts: 861 Credit: 29,352,955 RAC: 0 |
No doubt they're staring at your tin foil hat but kitty manners won't let them tell you it's on inside out. Meowrrr. |
kittyman Send message Joined: 9 Jul 00 Posts: 51477 Credit: 1,018,363,574 RAC: 1,004 |
No doubt they're staring at your tin foil hat but kitty manners won't let them tell you it's on inside out. I think the kitties are envious and wish to have little tin foil hats of their own. "Time is simply the mechanism that keeps everything from happening all at once." |
tbret Send message Joined: 28 May 99 Posts: 3380 Credit: 296,162,071 RAC: 40 |
Good theory, but that can't be it. I wear it under my toupee. |
W-K 666 Send message Joined: 18 May 99 Posts: 19312 Credit: 40,757,560 RAC: 67 |
GPUs typically have a higher (10-100X) peak FLOPS than CPUs. However, application efficiency is typically lower (very roughly, 10% for GPUs, 50% for CPUs). Are you not comparing apples with oranges there? The cores are not the same and use very different amounts of energy. My CPU cores use about 15 W each for crunching. The 1344 GPU cores each use less than 75 mW (total GPU crunching power 100 W). |
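For the energy-efficiency angle Martin raised (WUs per kWh), a quick Python sketch of the arithmetic using W-K's power figures; the run times below are hypothetical placeholders, since none were quoted in the thread, and `wu_per_kwh` is just an illustrative helper:

```python
# Power figures quoted above; run times are hypothetical placeholders.
cpu_core_watts = 15.0      # per CPU core while crunching
gpu_total_watts = 100.0    # whole GPU while crunching
gpu_cores = 1344

print("per-GPU-core power: {:.0f} mW".format(1000 * gpu_total_watts / gpu_cores))

def wu_per_kwh(device_watts, hours_per_wu):
    """Work units completed per kilowatt-hour at the given draw and run time."""
    kwh_per_wu = device_watts * hours_per_wu / 1000.0
    return 1.0 / kwh_per_wu

# HYPOTHETICAL run times, for illustration only:
print("CPU core @ 2 h/WU   :", round(wu_per_kwh(cpu_core_watts, 2.0), 1), "WU/kWh")
print("GPU      @ 0.25 h/WU:", round(wu_per_kwh(gpu_total_watts, 0.25), 1), "WU/kWh")
```

The per-core figure comes out at roughly 74 mW, consistent with the "less than 75 mW" quoted above; the WU/kWh comparison only becomes meaningful once real run times are plugged in.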
ML1 Send message Joined: 25 Nov 01 Posts: 20982 Credit: 7,508,002 RAC: 20 |
GPUs typically have a higher (10-100X) peak FLOPS than CPUs. However, application efficiency is typically lower (very roughly, 10% for GPUs, 50% for CPUs). Yes... That's the whole point. And that further gets even more confused with "APU" devices. Hence the thoughts long ago to count bit flips or transistor transitions against a 'golden system' (real or virtual) as a standard to more consistently award credit. An automatic side effect of that would be to suitably reward efficiency and optimization rather than blindly rewarding any wasteful make-work regardless. Happy fast crunchin', Martin See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3) |
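A minimal sketch of the 'golden system' idea as described above (all names and numbers here are hypothetical illustrations, not the actual CreditNew algorithm): credit is awarded in proportion to the work the reference machine would need for the same WU, regardless of how efficiently the volunteer's host got there.

```python
# Hypothetical illustration of crediting against a 'golden' reference host.
GOLDEN_OPS_PER_SEC = 50e9          # assumed sustained rate of the reference system
CREDIT_PER_GOLDEN_SECOND = 0.01    # arbitrary scaling constant

def credit_for_wu(reference_ops_for_wu):
    """Credit depends only on the work content of the WU (as measured on the
    reference system), so an inefficient app earns no more than an optimized one."""
    golden_seconds = reference_ops_for_wu / GOLDEN_OPS_PER_SEC
    return golden_seconds * CREDIT_PER_GOLDEN_SECOND

# Two hosts returning the same WU get identical credit, whatever their run time:
print(credit_for_wu(3.6e13))   # e.g. 7.2 credits for a WU needing 3.6e13 ops
```

The point of the construction is the one Martin makes: optimization changes how quickly a host earns, not how much a given WU is worth, so wasteful make-work is not rewarded.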