FLOPS & FPOPS

Message boards : Number crunching : FLOPS & FPOPS
Michael Belanger, W1DGL
Message 858784 - Posted: 28 Jan 2009, 3:58:33 UTC
Last modified: 28 Jan 2009, 3:58:51 UTC

This may have been asked (several times) before, but......

I'd like to know if there's an easy-to-understand, "Layman's Terms"-type of way to explain what FLOPS and FPOPS are and how they work?

(I know what the acronyms mean, I just don't understand about them)
OzzFan
Message 858798 - Posted: 28 Jan 2009, 4:37:08 UTC

For all intents and purposes, they are the same thing (though most pedantic types will point out the minor differences, if any). Here's a link to the Wiki page.
Michael Belanger, W1DGL
Message 858802 - Posted: 28 Jan 2009, 4:49:04 UTC - in response to Message 858798.  
Last modified: 28 Jan 2009, 4:50:57 UTC

Thanks for trying with that link to Wiki, Ozz, but I did say "easy-to-understand" and "layman's terms". Call me <whatever>, but that link didn't really help me (much) to understand what they are and what they do. I was clicking on links so often to follow what Wiki was saying that my head was spinning by the time I finally stopped.
Alinator
Message 858813 - Posted: 28 Jan 2009, 5:20:04 UTC
Last modified: 28 Jan 2009, 5:24:47 UTC

Hmmmm...

Well, I guess in the simplest terms you are talking about the difference between work (FPOPs) and power (FLOPS). To most physicists and engineers, drawing a distinction between them is hardly being pedantic.

Conceptually, Floating Point OPerations (FPOPs) are simple: you do an arithmetic operation on two real numbers (as opposed to integers only). This is a unit of work, since it is time-independent.

The real problem comes when you take that definition and apply it to a practical implementation (like a computer), especially for more complex mathematical functions (trigonometric and transcendental functions for example).

FLoating point Operations Per Second (FLOPS) is a unit of power (work done per unit time).
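
To make the work/power distinction concrete, here is a minimal sketch in C (the fpop count and run time are made-up numbers for illustration, not real S@H figures):

#include <stdio.h>

int main(void)
{
    double fpops   = 27.0e12;   /* total floating point operations: work (made-up figure) */
    double seconds = 10000.0;   /* elapsed run time (made-up figure) */

    double flops = fpops / seconds;   /* work done per unit time: power */

    printf("%.2f GFLOPS\n", flops / 1.0e9);   /* prints 2.70 GFLOPS */
    return 0;
}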

Alinator
ML1
Message 858924 - Posted: 28 Jan 2009, 15:22:32 UTC - in response to Message 858813.  
Last modified: 28 Jan 2009, 15:27:33 UTC

Hmmmm...

Well, I guess ... difference between work (FPOPs) and power (FLOPS)...

Looks more like a typo to me!

You have:

"OPS" - Operations per Second;
"IOPS" - Integer Operations per Second;
"FPOPS" is not normally used and looks to be an "Americanism" introduced into the Boinc world! It is meant to be a count of floating point operations, an absolute count, no 'seconds' included, should really be FpOPs to avoid confusion, ok as a program variable name but confusing when outside of a program;
"FLOPS" - FLoating-point Operations per Second;

Those are normally used as performance indicators for CPU/GPU/ALU-level operations.

For an overall system (computer + software + interactions), you also have "Transactions per Second" as a performance indicator.

The issue for the terminology is that in computing, "floating-point" is usually used as though it is a single word.

Integers are whole numbers such as 1, 2, 3, 4.

Floats (floating-point) are 'real' numbers (fractional) such as 1.2, 3.142, 2.009 * 10^3, etc.

s@h uses a LOT of floating-point calculations to analyse the data. Hence the interest in counting "fpops" (absolute count of how many) and FLOPS (how quickly).
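
As a quick C sketch of that integer/float difference (the values are picked just for illustration):

#include <stdio.h>

int main(void)
{
    int    whole = 4;         /* integer: whole numbers only */
    double real  = 3.142;     /* floating-point: fractional 'real' numbers */

    whole = whole + 1;        /* an integer operation */
    real  = real * 2.009e3;   /* a floating point operation: one fpop */

    printf("%d %f\n", whole, real);
    return 0;
}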


And then the Marketing people come along and call everything milli-hertz and grams-hertz regardless! (Instead of MHz "mega-Hertz" and GHz "giga-Hertz" when appropriate.)

Is any Science taught in schools?


Hope that hasn't added to the confusion!

Happy fast crunchin',
Martin
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
1mp0£173
Message 858981 - Posted: 28 Jan 2009, 18:58:50 UTC - in response to Message 858784.  

This may have been asked (several times) before, but......

I'd like to know if there's an easy-to-understand, "Layman's Terms"-type of way to explain what FLOPS and FPOPS are and how they work?

(I know what the acronyms mean, I just don't understand about them)

1.5 + 3.8 is a floating point operation (a floating point add).

sqrt(8.3) is a much slower floating point operation (floating point square root).

For the purpose of credit, each counts as "1".
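
In C, those two examples look like this (a toy sketch; the speed difference is in the hardware, not in anything this little program measures):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 1.5 + 3.8;   /* one floating point add: fast */
    double b = sqrt(8.3);   /* one floating point square root: much slower */

    printf("%f %f\n", a, b);   /* each counts as one fpop for credit */
    return 0;
}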
Cosmic_Ocean
Message 859019 - Posted: 28 Jan 2009, 21:07:09 UTC

Another thing that I think I remember hearing in a computer class at college is that the ALU is not capable of doing multiply and divide. Instead, it adds and subtracts the log10 of the two numbers in question. Adding log10's is the same as multiplying, and subtracting is the same as dividing. I thought that was fairly interesting.
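
The idea being described, whether or not any given ALU actually works this way (see the replies below), can be sketched in C:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 6.0, b = 7.0;

    /* multiply by adding logs: a*b = 10^(log10 a + log10 b) */
    double product  = pow(10.0, log10(a) + log10(b));
    /* divide by subtracting logs: a/b = 10^(log10 a - log10 b) */
    double quotient = pow(10.0, log10(a) - log10(b));

    printf("%f %f\n", product, quotient);   /* ~42.0 and ~0.857 */
    return 0;
}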
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving-up)
archae86
Message 859020 - Posted: 28 Jan 2009, 21:12:08 UTC - in response to Message 859019.  

Another thing that I think I remember hearing in a computer class at college is that the ALU is not capable of doing multiply and divide. Instead, it adds and subtracts the log10 of the two numbers in question.
Which ALU would that have been?

It is certainly not true of ALUs in general.

Alinator
Message 859036 - Posted: 28 Jan 2009, 22:01:13 UTC - in response to Message 858981.  

This may have been asked (several times) before, but......

I'd like to know if there's an easy-to-understand, "Layman's Terms"-type of way to explain what FLOPS and FPOPS are and how they work?

(I know what the acronyms mean, I just don't understand about them)

1.5 + 3.8 is a floating point operation (a floating point add).

sqrt(8.3) is a much slower floating point operation (floating point square root).

For the purpose of credit, each counts as "1".


Hmmm...

Actually, isn't square root by definition always a floating point operation (regardless of whether the argument is an integer or a real number)? The same thing would apply to division. IOW, the square root of an integer cannot be guaranteed to be an integer, and the quotient of two integers doesn't necessarily have to be an integer.

Only for the addition and multiplication of integers is the result always an integer.

Alinator
Cosmic_Ocean
Message 859049 - Posted: 28 Jan 2009, 22:14:55 UTC - in response to Message 859020.  

Another thing that I think I remember hearing in a computer class at college is that the ALU is not capable of doing multiply and divide. Instead, it adds and subtracts the log10 of the two numbers in question.
Which ALU would that have been?

It is certainly not true of ALUs in general.

I don't know; we didn't get the specifics of it. We were just told that microprocessors don't actually multiply and divide, they add and subtract log10's.
1mp0£173
Message 859052 - Posted: 28 Jan 2009, 22:17:47 UTC - in response to Message 859036.  

This may have been asked (several times) before, but......

I'd like to know if there's an easy-to-understand, "Layman's Terms"-type of way to explain what FLOPS and FPOPS are and how they work?

(I know what the acronyms mean, I just don't understand about them)

1.5 + 3.8 is a floating point operation (a floating point add).

sqrt(8.3) is a much slower floating point operation (floating point square root).

For the purpose of credit, each counts as "1".


Hmmm...

Actually, isn't square root by definition always a floating point operation (regardless of whether the argument is an integer or a real number)? The same thing would apply to division. IOW, the square root of an integer cannot be guaranteed to be an integer, and the quotient of two integers doesn't necessarily have to be an integer.

Only for the addition and multiplication of integers is the result always an integer.

Alinator

I wasn't trying to be 100% comprehensive, just to give a couple of examples: one that is fast and easy, and one that takes a lot more CPU time.

Your statement about addition and multiplication may be mathematically correct, but we're also talking about practical implementation in computers.

In most CPU instruction sets there is an integer divide and a floating point divide. If you use the integer divide to calculate 7/2, the answer will be an integer, 3 (and it would not count as a floating point operation).

This distinction is available in at least one high-level language.
ML1
Message 859069 - Posted: 28 Jan 2009, 22:40:28 UTC - in response to Message 859052.  
Last modified: 28 Jan 2009, 22:43:15 UTC

... In most CPU instruction sets there is an integer divide and a floating point divide. If you use the integer divide to calculate 7/2, the answer will be an integer, 3 (and it would not count as a floating point operation).

This distinction is available in at least one high-level language.

In recent times, with high-powered hardware and fast FPUs, it often does not really matter whether you use integer arithmetic or floating-point arithmetic; both are fast enough. Only if you are trying to optimise the code might you start looking at some mix of integers and floats, so that you can balance the load across the CPU's execution units and gain extra throughput. Or if you're optimising to use GPUs...

In days of old, you would use integers wherever you could to gain a big speedup! Some languages used "a := b / c" for real arithmetic and "a := b % c" for integer divide (Pascal-esque pseudo code!). More recently, you would more correctly use a modulus (modulo) operator for integer division. Some languages let you directly do binary bit-shifts...

It all depends on what you want to do.

In binary hardware, there are all manner of tricks used to speed up calculations. One bit of clever munging that went wrong gave rise to the infamous Intel FDIV bug. (Intel's subsequent Marketing error was their real clanger...) Over a century ago, errors in printed books of mathematical tables for trig and logs cost the lives of many sailors through navigation miscalculations... (And then Babbage produced his "Difference Engine".)


Happy crunchin',
Martin
1mp0£173
Message 859102 - Posted: 29 Jan 2009, 0:31:28 UTC - in response to Message 859069.  
Last modified: 29 Jan 2009, 0:35:01 UTC


In days of old, you would use integers wherever you could to gain a big speedup! Some languages used "a := b / c" for real arithmetic and "a := b % c" for integer divide (Pascal-esque pseudo code!). More recently, you would more correctly use a modulus (modulo) operator for integer division. Some languages let you directly do binary bit-shifts...

The Pascal "integer" divide is "div", while "/" is a floating point divide.

In Pascal, the modulo operator is "mod"; "%" is C.

A good C compiler will emit the same code for the following two statements (given an unsigned "a"; for a signed value the compiler must add a small fix-up, since integer division truncates toward zero while a right shift rounds down):

unsigned a, b;
b = a / 4;    /* divide by four */
b = a >> 2;   /* an explicit shift right by two bits: same result */

[edit]

In C syntax:

int a, b;
a = 7 / 2;   /* integer divide */
b = 7 % 2;   /* modulo */

"a" would contain 3 (being an integer result) and "b" would contain 1.

"modulo" returns the remainder of the divide operation.
Michael Belanger, W1DGL
Message 859147 - Posted: 29 Jan 2009, 1:44:01 UTC
Last modified: 29 Jan 2009, 1:46:40 UTC

Looks like I got a good (albeit still confusing to me) discussion going. I've never been good at math (in any form); in fact, it was my worst subject all through school. Hehehe, now explain to me how I got high marks in Aircraft Mechanic's School, where you need math, and lots of it sometimes!
Allie in Vancouver
Message 859161 - Posted: 29 Jan 2009, 2:23:16 UTC - in response to Message 859147.  
Last modified: 29 Jan 2009, 2:23:42 UTC

Looks like I got a good (albeit still confusing to me) discussion going. I've never been good at math (in any form); in fact, it was my worst subject all through school. Hehehe, now explain to me how I got high marks in Aircraft Mechanic's School, where you need math, and lots of it sometimes!

Practical applications vs. theoretical understanding. Vastly different concepts. For what it is worth, my understanding of math is more on the practical level as well. :o)
Pure mathematics is, in its way, the poetry of logical ideas.

Albert Einstein
1mp0£173
Message 859215 - Posted: 29 Jan 2009, 5:03:48 UTC - in response to Message 859147.  

Looks like I got a good (albeit still confusing to me) discussion going. I've never been good at math (in any form); in fact, it was my worst subject all through school. Hehehe, now explain to me how I got high marks in Aircraft Mechanic's School, where you need math, and lots of it sometimes!

The best math class I ever had was Chemistry. The numbers all meant something.

Fortunately, the type of programming I do does not lean heavily on higher math. I know shifts and masks well because those work nicely when you're comparing network numbers for routing.

Most of the code I write is pure integer math, and the floating point library isn't even linked in.
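
For instance, here's the sort of pure-integer shift-and-mask test routing code does all day (a minimal sketch; the addresses are made up):

#include <stdio.h>

int main(void)
{
    unsigned int ip   = (192u << 24) | (168u << 16) | (1u << 8) | 42u;  /* 192.168.1.42 */
    unsigned int net  = (192u << 24) | (168u << 16) | (1u << 8);        /* 192.168.1.0  */
    unsigned int mask = 0xFFFFFF00u;                                    /* /24 netmask  */

    /* mask off the host bits and compare network numbers: all integer ops */
    if ((ip & mask) == (net & mask))
        printf("address is on the network\n");
    return 0;
}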
Greg Hogan
Message 859289 - Posted: 29 Jan 2009, 11:37:57 UTC - in response to Message 859049.  

You are both kinda right.
It depends on the ALU implemented.

Most simple micros have stripped-down cores with rudimentary ALUs, where mult/div operations are done algorithmically, in code, and most often integer only (see the sketch below). All the floating point operations are also done in code (e.g. an FP function library).
There are others that have dedicated hardware in the ALU; they have instructions for mult/div operations, and some even use dedicated fixed point registers.
More sophisticated ALUs can have dedicated floating point registers (for example, some DSPs).
It can be argued that GPUs are the natural next-generation ALU, one that is capable of matrix operations. There are hardware physics processors on the horizon promising even greater things. Remember the hassle Sony's PlayStation 2 got itself into when it could do realtime vector and 3D spatial co-ord math faster than the ones in US cruise missiles... sometimes hardware capability is intentionally limited for a reason.
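
As an illustration of multiply "done algorithmically, in code", here is the classic shift-and-add loop a core without a hardware multiplier might run (a minimal sketch, not any particular micro's library):

#include <stdio.h>

/* multiply two unsigned integers using only shifts and adds */
unsigned int mul(unsigned int a, unsigned int b)
{
    unsigned int result = 0;
    while (b != 0) {
        if (b & 1)      /* low bit of b set: add in the shifted a */
            result += a;
        a <<= 1;        /* each bit of b is worth twice as much of a */
        b >>= 1;
    }
    return result;
}

int main(void)
{
    printf("%u\n", mul(6, 7));   /* prints 42 */
    return 0;
}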

I've had a bit of a play with VHDL and FPGAs, making my own processors and ALUs. The more complex the design, the more hardware it requires, the bigger the chip, the more power it draws, and the longer it takes to get it all working correctly. All of it has to work in parallel, at once, whereas doing it in code, sequentially, is more akin to how humans solve problems, and thus generally easier to visualise and solve.

Sometimes it's just better overall to keep it simple unless speed is really critical, but that's just me being lazy, I think.
ML1
Message 859323 - Posted: 29 Jan 2009, 14:21:11 UTC - in response to Message 859289.  

... I've had a bit of play with VHDL and FPGAs making my own processors and ALUs...

Have you had a play with Handel-C? ;-)

Happy fast crunchin',
Martin

Greg Hogan
Message 859674 - Posted: 30 Jan 2009, 9:43:09 UTC - in response to Message 859323.  

Cheers for that tip - I'll look at that.
I'm using Xilinx ISE+EDK and Altium with a NanoBoard.
