Observation of CreditNew Impact (2)


Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13832
Credit: 208,696,464
RAC: 304
Australia
Message 1387098 - Posted: 3 Jul 2013, 7:59:59 UTC - in response to Message 1387022.  
Last modified: 3 Jul 2013, 8:00:47 UTC

GPUs typically have a higher (10-100X) peak FLOPS than CPUs. However, application efficiency is typically lower (very roughly, 10% for GPUs, 50% for CPUs).

I'm trying to figure out what he means by efficiency.
A GPU can process a WU in much less time than a CPU. To me, that makes the GPU more efficient.
I'm sure that if I could do my job in half the time I normally do it, my boss would consider that to be more efficient.
If one person could do the work of 2 people in the same period of time, that would be considered more efficient.
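(As a rough illustration of what that "efficiency" figure means, with made-up numbers rather than measurements from any real host: efficiency here is the fraction of peak FLOPS the application actually achieves, so a GPU can be far less efficient and still finish a WU much sooner.)

# Illustrative sketch only: efficiency = achieved throughput / peak FLOPS.
cpu_peak_gflops = 50.0      # hypothetical CPU peak
gpu_peak_gflops = 1000.0    # hypothetical GPU peak (20x the CPU)

cpu_efficiency = 0.50       # ~50% of peak actually used by the app
gpu_efficiency = 0.10       # ~10% of peak actually used by the app

cpu_effective = cpu_peak_gflops * cpu_efficiency    # 25 GFLOPS delivered
gpu_effective = gpu_peak_gflops * gpu_efficiency    # 100 GFLOPS delivered

# The GPU is *less* efficient yet still finishes a WU about 4x sooner,
# because its peak is so much higher.
print(gpu_effective / cpu_effective)    # -> 4.0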
Grant
Darwin NT
ID: 1387098
Thomas
Volunteer tester

Joined: 9 Dec 11
Posts: 1499
Credit: 1,345,576
RAC: 0
France
Message 1387101 - Posted: 3 Jul 2013, 8:04:09 UTC - in response to Message 1387098.  

If one person could do the work of 2 people in the same period of time, that would be considered more efficient.

Credits = Salary of the volunteers of the SETI@home project
I therefore ask the Big Boss for a pay rise :p
ID: 1387101
Profile S@NL Etienne Dokkum
Volunteer tester
Joined: 11 Jun 99
Posts: 212
Credit: 43,822,095
RAC: 0
Netherlands
Message 1387108 - Posted: 3 Jul 2013, 8:16:22 UTC

I also agree with most that the bottom has been reached lately and credit now seems to be settling... But in my case that meant a drop of 40% in RAC.

ID: 1387108
Russell McGaha
Volunteer tester

Joined: 3 Apr 99
Posts: 11
Credit: 70,871,448
RAC: 106
United States
Message 1387160 - Posted: 3 Jul 2013, 13:10:35 UTC

I'm not a usual contributor to these threads, but as a CPU-only SETI cruncher I thought I'd give a data point to the discussion.
Pre-v7 SETI RAC: approx. 10,200
Current SETI RAC: 2,842

I believe that to be a LOT more of a disparity than there SHOULD be.

Russell
ID: 1387160
bill

Joined: 16 Jun 99
Posts: 861
Credit: 29,352,955
RAC: 0
United States
Message 1387171 - Posted: 3 Jul 2013, 13:55:19 UTC - in response to Message 1387101.  

If one person could do the work of 2 people in the same period of time, that would be considered more efficient.

Credits = Salary of the volunteers of the SETI@home project
I therefore ask the Big Boss for a pay rise :p


Like I said here in Message 1387166:
"If you are crunching for the credits, think of it like your job. One day you go to your job and the Boss says your work is no longer worth what he was paying you, so he's going to pay you less from now on.

What are you going to do?"
ID: 1387171
Profile ML1
Volunteer moderator
Volunteer tester

Joined: 25 Nov 01
Posts: 20941
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1387174 - Posted: 3 Jul 2013, 14:17:40 UTC - in response to Message 1387063.  

Now, could you express that in kibbles so the kitties could understand what you just said?


how about 'Quick & dirty gets the job done, but perfection[purrrrfection ?] takes a little more time & effort' ?

Excellent both!

LOL :-)


Happy faster crunchin',
Martin


ps: Thanks for the detail Jason.
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
ID: 1387174
Profile ML1
Volunteer moderator
Volunteer tester

Joined: 25 Nov 01
Posts: 20941
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1387176 - Posted: 3 Jul 2013, 14:24:22 UTC - in response to Message 1387098.  
Last modified: 3 Jul 2013, 14:25:11 UTC

GPUs typically have a higher (10-100X) peak FLOPS than CPUs. However, application efficiency is typically lower (very roughly, 10% for GPUs, 50% for CPUs).

I'm trying to figure out what he means by efficiency.
A GPU can process a WU in much less time than a CPU. To me, that makes the GPU more efficient. ...

Do not confuse "efficient" and "effective"...

The GPU can compute in less time than a CPU but for example can the GPU use its 1000 compute cores to do the job 1000 times faster than the CPU?

The real world shortfall is the percentage efficiency...


For a different measure of "efficiency", there is also the energy efficiency of how many WUs per kWh...
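(A minimal sketch, with purely hypothetical run times, of the "percentage efficiency" point above: parallel efficiency is the speedup actually achieved divided by the number of cores used.)

# All numbers hypothetical, for illustration only.
cpu_seconds_per_wu = 2.0 * 3600    # assume a CPU core takes 2 hours per WU
gpu_seconds_per_wu = 0.2 * 3600    # assume the GPU takes 12 minutes per WU
gpu_cores = 1000

speedup = cpu_seconds_per_wu / gpu_seconds_per_wu   # 10x faster in wall-clock time
parallel_efficiency = speedup / gpu_cores           # 10 / 1000 = 1%
print(speedup, parallel_efficiency)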

Happy fast crunchin',
Martin
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
ID: 1387176
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1387187 - Posted: 3 Jul 2013, 15:01:10 UTC - in response to Message 1387098.  
Last modified: 3 Jul 2013, 15:29:06 UTC

GPUs typically have a higher (10-100X) peak FLOPS than CPUs. However, application efficiency is typically lower (very roughly, 10% for GPUs, 50% for CPUs).

I'm trying to figure out what he means by efficiency.
A GPU can process a WU in much less time than a CPU. To me, that makes the GPU more efficient.
I'm sure that if I could do my job in half the time I normally do it, my boss would consider that to be more efficient.
If one person could do the work of 2 people in the same period of time, that would be considered more efficient.



It's a ratio modelling 'ideal' vs real implementations, from mathematics & computer science: algorithmic complexity / implementation complexity. These complexities are usually a function of n, the dataset size.

Typically the 'optimal' algorithmic complexity ignores a bunch of real world costs & implementation details, such as communication costs (memory access or parallel data exchange), quality of implementation, or hardware factors. It's a mathematical (ideal) construct. [Several areas of the current GPU implementations have higher computational complexity than ideal, so are far less efficient than the 'ideal' relative to CPU apps.]

In the 'real' implementation there are hardware & software factors governing how close you get to this optimum. Most tend to be related to the choice of algorithm, how well it fits the hardware, and serial or parallel costs.

In the current multibeam apps, examples of high efficiency would include the FFTs on the CPU and GPU. Pulse finding efficiency would be high on CPU and low on GPU, due to some bad mapping choices made back when nVidia developed the initial CUDA apps. Autocorrelation efficiency would be high on CPU, and moderate to low on GPU. Finding spikes would be high on both, as with chirping, as those implementations are both O(n).

A more extreme example: a 'lab' FFT should have complexity O(n log n) and a naive implementation O(n^2). That would make the naive implementation's efficiency (log n / n), which would be very low. O(n^2) is still solvable in finite time, but usually there are more efficient implementation choices, since popular algorithms have had decades to centuries of theoretical refinement, from simple sorting to much more complex problems thought to have no solution in finite time.
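(A minimal sketch of that comparison using numpy, not the actual SETI code: the double loop is the naive O(n^2) DFT, numpy's FFT plays the role of the O(n log n) 'lab' algorithm, and the predicted efficiency ratio log2(n)/n shrinks quickly as n grows.)

import math
import numpy as np

def naive_dft(x):
    """Textbook O(n^2) discrete Fourier transform."""
    n = len(x)
    out = np.zeros(n, dtype=complex)
    for k in range(n):
        for t in range(n):
            out[k] += x[t] * np.exp(-2j * math.pi * k * t / n)
    return out

n = 256
x = np.random.rand(n)
assert np.allclose(naive_dft(x), np.fft.fft(x))    # same answer...

# ...but the operation counts differ: ~n*log2(n) vs ~n^2, so the naive
# version's efficiency relative to the 'lab' FFT is roughly log2(n)/n.
print(n * math.log2(n), n**2, math.log2(n) / n)    # 2048, 65536, 0.03125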
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1387187
Thomas
Volunteer tester

Joined: 9 Dec 11
Posts: 1499
Credit: 1,345,576
RAC: 0
France
Message 1387189 - Posted: 3 Jul 2013, 15:06:28 UTC - in response to Message 1387171.  

If one person could do the work of 2 people in the same period of time, that would be considered more efficient.

Credits = Salary of the volunteers of the SETI@home project
I therefore ask the Big Boss for a pay rise :p


Like I said here in Message 1387166:
"If you are crunching for the credits, think of it like your job. One day you go to your job and the Boss says your work is no longer worth what he was paying you, so he's going to pay you less from now on.

What are you going to do?"

It's a joke, Bill... Don't be so serious... ;)
ID: 1387189
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51477
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1387194 - Posted: 3 Jul 2013, 15:25:12 UTC
Last modified: 3 Jul 2013, 15:25:55 UTC

As I have stated before, if I were paying my bills or buying kibble for the kitties with Seti creds, I might be up in arms about CreditNew or the current rate of return on validated work here.

Since the only worth of Seti credits is to give a relative benchmark among Seti participants' rate of work done (we all know it has no meaning compared to other projects for reasons discussed many times), I am not engaging in the wailing or gnashing of teeth about it that some are.

And folks.....this is coming from one who would probably be showing a 600k+ RAC if v6 were still the soup of the day, rather than the 446k I am currently graced with.

Is my ox gored?

NOT.
"Time is simply the mechanism that keeps everything from happening all at once."

ID: 1387194
Profile cov_route
Joined: 13 Sep 12
Posts: 342
Credit: 10,270,618
RAC: 0
Canada
Message 1387195 - Posted: 3 Jul 2013, 15:25:41 UTC - in response to Message 1387062.  


For my two cents in that context, I'm pleased to see Dr A's theories matching reality, since algorithmically both the CPU & GPU autocorrelations are order O(n log n); however my baseline/reference 'get it working' GPU autocorrelation implementation uses the 4n-FFT approach, so becomes 4×(n log n). It's then pretty easy to see how 10% becomes 40%. A tasty Type 2 DCT kernel with attention to max bandwidth and cache locality should improve that handily, once all the dust has settled and other fires are extinguished.

LOL...
Now, could you express that in kibbles so the kitties could understand what you just said?

Grrnffx32(nonoggin) becomes grrrf(kibblenoggin) with DCT kibbles in the bowl.

Meow?

The O() notation is what mathematicians and computer scientists use to describe how "hard" a problem is.

https://en.wikipedia.org/wiki/Big_O_notation

O(log n) means as a problem gets bigger the time needed to solve it increases with the log of the size. This is usually considered "good" behavior.

Something like O(n^2) or O(2^n) is considered "bad": it wouldn't be hard to find a problem too big to solve with real-world hardware. Although for small problems an O(2^n) algorithm might actually be faster than an O(log n) algorithm, and you might use it.

If you have time on your hands you can also look into the related P versus NP problem.
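(A minimal sketch of those growth rates, with made-up constant factors, showing why the small-n crossover can happen even though the exponential algorithm is hopeless at scale.)

import math

def cost_log(n):  return 100 * math.log2(n)   # hypothetical O(log n) algorithm with a big constant
def cost_quad(n): return n ** 2               # hypothetical O(n^2) algorithm
def cost_exp(n):  return 2 ** n               # hypothetical O(2^n) algorithm

for n in (4, 8, 16, 32, 64):
    print(n, cost_log(n), cost_quad(n), cost_exp(n))

# At n=4 or 8 the exponential routine actually beats the "good" O(log n) one;
# by n=64 it needs ~1.8e19 steps, hopeless on real-world hardware, while the
# O(log n) cost has barely moved.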
ID: 1387195
bill

Joined: 16 Jun 99
Posts: 861
Credit: 29,352,955
RAC: 0
United States
Message 1387280 - Posted: 3 Jul 2013, 17:21:33 UTC - in response to Message 1387189.  

Who's serious? I don't crunch for the credit.

It seems, though, some people think there is a nefarious plot afoot.

I figure the massive coronaries and strokes should start any time now. That should help stave off the horror of global warming by sequestering kilos of carbon underground in concrete-lined pits. :)
ID: 1387280
bill

Joined: 16 Jun 99
Posts: 861
Credit: 29,352,955
RAC: 0
United States
Message 1387281 - Posted: 3 Jul 2013, 17:24:25 UTC - in response to Message 1387194.  

<snippeth>
Is my ox gored?

NOT.


Have you checked your chicken to see if it's been choked?
ID: 1387281
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51477
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1387285 - Posted: 3 Jul 2013, 17:31:48 UTC - in response to Message 1387281.  

<snippeth>
Is my ox gored?

NOT.


Have you checked your chicken to see if it's been choked?

I think I would know about that.

"Time is simply the mechanism that keeps everything from happening all at once."

ID: 1387285
tbret
Volunteer tester
Joined: 28 May 99
Posts: 3380
Credit: 296,162,071
RAC: 40
United States
Message 1387378 - Posted: 3 Jul 2013, 20:11:55 UTC - in response to Message 1387280.  



It seems, though, some people think there is a nefarious plot afoot.



I'm convinced that there is.

Every time I enter the house the kitchen table conversation between my wife and kids goes quiet and the cats all turn and stare at me.


ID: 1387378
bill

Joined: 16 Jun 99
Posts: 861
Credit: 29,352,955
RAC: 0
United States
Message 1387399 - Posted: 3 Jul 2013, 20:43:55 UTC - in response to Message 1387378.  

No doubt they're staring at your tin foil hat but kitty manners won't let them tell you it's on inside out.

Meowrrr.
ID: 1387399
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51477
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1387412 - Posted: 3 Jul 2013, 21:11:20 UTC - in response to Message 1387399.  

No doubt they're staring at your tin foil hat but kitty manners won't let them tell you it's on inside out.

Meowrrr.

I think the kitties are envious and wish to have little tin foil hats of their own.
"Time is simply the mechanism that keeps everything from happening all at once."

ID: 1387412
tbret
Volunteer tester
Joined: 28 May 99
Posts: 3380
Credit: 296,162,071
RAC: 40
United States
Message 1387491 - Posted: 4 Jul 2013, 3:50:01 UTC - in response to Message 1387399.  



No doubt they're staring at your tin foil hat



Good theory, but that can't be it. I wear it under my toupee.
ID: 1387491
W-K 666 Project Donor
Volunteer tester

Joined: 18 May 99
Posts: 19308
Credit: 40,757,560
RAC: 67
United Kingdom
Message 1387599 - Posted: 4 Jul 2013, 13:21:20 UTC - in response to Message 1387176.  

GPUs typically have a higher (10-100X) peak FLOPS than CPUs. However, application efficiency is typically lower (very roughly, 10% for GPUs, 50% for CPUs).

I'm trying to figure out what he means by efficiency.
A GPU can process a WU in much less time than a CPU. To me, that makes the GPU more efficient. ...

Do not confuse "efficient" and "effective"...

The GPU can compute in less time than a CPU but for example can the GPU use its 1000 compute cores to do the job 1000 times faster than the CPU?

The real world shortfall is the percentage efficiency...


For a different measure of "efficiency", there is also the energy efficiency of how many WUs per kWh...

Happy fast crunchin',
Martin

Are you not comparing apples with oranges there?

The cores are not the same and use very different amounts of energy. My CPU cores use about 15 W each for crunching. The 1344 GPU cores each use less than 75 mW (total GPU crunching power ~100 W).
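(A quick check of those figures, plus a sketch of the WUs-per-kWh measure Martin mentioned; the per-WU run times below are hypothetical placeholders, not measurements.)

gpu_cores = 1344
gpu_core_watts = 0.075                      # 75 mW per core, as stated
print(gpu_cores * gpu_core_watts)           # ~100.8 W total, matching the ~100 W figure

cpu_core_watts = 15.0
cpu_hours_per_wu = 2.0                      # hypothetical
gpu_hours_per_wu = 0.25                     # hypothetical

cpu_wu_per_kwh = 1000 / (cpu_core_watts * cpu_hours_per_wu)               # ~33 WU/kWh
gpu_wu_per_kwh = 1000 / (gpu_cores * gpu_core_watts * gpu_hours_per_wu)   # ~40 WU/kWh
print(cpu_wu_per_kwh, gpu_wu_per_kwh)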
ID: 1387599
Profile ML1
Volunteer moderator
Volunteer tester

Joined: 25 Nov 01
Posts: 20941
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1387613 - Posted: 4 Jul 2013, 14:07:39 UTC - in response to Message 1387599.  
Last modified: 4 Jul 2013, 14:09:09 UTC

GPUs typically have a higher (10-100X) peak FLOPS than CPUs. However, application efficiency is typically lower (very roughly, 10% for GPUs, 50% for CPUs).

I'm trying to figure out what he means by efficiency.
A GPU can process a WU in much less time than a CPU. To me, that makes the GPU more efficient. ...

Do not confuse "efficient" and "effective"...

The GPU can compute in less time than a CPU but for example can the GPU use its 1000 compute cores to do the job 1000 times faster than the CPU?

The real world shortfall is the percentage efficiency...


For a different measure of "efficiency", there is also the energy efficiency of how many WUs per kWh...


Are you not comparing apples with oranges there?

The cores are not the same and use very different amounts of energy. My CPU cores use about 15 W each for crunching. The 1344 GPU cores each use less than 75 mW (total GPU crunching power ~100 W).

Yes... That's the whole point. And that further gets even more confused with "APU" devices.

Hence the thoughts long ago to count bit flips or transistor transitions against a 'golden system' (real or virtual) as a standard to more consistently award credit. An automatic side effect of that would be to suitably reward efficiency and optimization rather than blindly rewarding any wasteful make-work regardless.
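(A minimal sketch of that 'golden system' idea as described here, not BOINC's actual CreditNew code: derive credit from the operations a task requires on a fixed reference machine, so the award doesn't depend on how wastefully any particular host got the job done. The 200-credits-per-GFLOPS-day figure is the nominal BOINC Cobblestone scale.)

SECONDS_PER_DAY = 86400.0
REFERENCE_GFLOPS = 1.0          # the 'golden' machine: 1 GFLOPS
CREDITS_PER_REF_DAY = 200.0     # nominal Cobblestone scale

def credit_for_task(required_gflop):
    """Credit for a task needing `required_gflop` billion floating-point ops."""
    ref_days = required_gflop / (REFERENCE_GFLOPS * SECONDS_PER_DAY)
    return ref_days * CREDITS_PER_REF_DAY

# Two hosts returning the same task get the same credit, regardless of whether
# one ran an optimized app and the other a wasteful one.
print(credit_for_task(40_000))   # a task needing 40 TFLOP -> ~92.6 credits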



Happy fast crunchin',
Martin
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
ID: 1387613