Message boards :
Number crunching :
FLOPS & FPOPS
Message board moderation
Author | Message |
---|---|
Michael Belanger, W1DGL Send message Joined: 30 Jul 00 Posts: 1887 Credit: 7,441,278 RAC: 49 |
This may have been asked (several times) before, but...... I'd like to know if there's an easy-to-understand, "Layman's Terms"-type of way to explain what FLOPS and FPOPS are and how they work? (I know what the acronyms stand for; I just don't understand what they actually measure.) |
OzzFan Send message Joined: 9 Apr 02 Posts: 15691 Credit: 84,761,841 RAC: 28 |
For all intents and purposes, they are the same thing (though most pedantic types will point out the minor differences, if any). Here's a link to the Wiki page. |
Michael Belanger, W1DGL Send message Joined: 30 Jul 00 Posts: 1887 Credit: 7,441,278 RAC: 49 |
Thanks for trying with that link to Wiki, Ozz, but I did say "easy-to-understand" and "layman's terms"; call me <whatever>, but that link didn't really help me (much) to understand what they are and what they do; I was clicking on links so often to understand what they (Wiki) were saying that my head was spinning by the time I finally stopped. |
Alinator Send message Joined: 19 Apr 05 Posts: 4178 Credit: 4,647,982 RAC: 0 |
Hmmmm... Well, I guess in the simplest terms you are talking about the difference between work (FPOPs) and power (FLOPS). To most physicists and engineers, it's hardly being pedantic to draw a distinction between them. Conceptually, Floating Point OPerations (FPOPs) are simple: you do an arithmetic operation on two real numbers (as opposed to integers only). This is a unit of work, since it is time independent. The real problem comes when you take that definition and apply it to a practical implementation (like a computer), especially for more complex mathematical functions (trigonometric and transcendental functions, for example). FLoating point Operations Per Second (FLOPS) is a unit of power (work done per unit time). Alinator |
ML1 Send message Joined: 25 Nov 01 Posts: 20393 Credit: 7,508,002 RAC: 20 |
Hmmmm... Looks more like a typo to me! You have: "OPS" - Operations per Second; "IOPS" - Integer Operations per Second; "FPOPS" is not normally used and looks to be an "Americanism" introduced into the Boinc world! It is meant to be an absolute count of floating point operations, no 'seconds' included; it should really be written FpOPs to avoid confusion - fine as a program variable name, but confusing outside of a program; "FLOPS" - FLoating-point Operations per Second. Those are normally used as performance indicators for CPU/GPU/ALU level operations. For an overall system (computer + software + interactions), you also have "Transactions per Second" as a performance indicator. The issue for the terminology is that in computing, "floating-point" is usually used as though it were a single word. Integers are whole numbers such as 1, 2, 3, 4. Floats (floating-point) are 'real' numbers (fractional) such as 1.2, 3.142, 2.009 * 10^3, etc. s@h uses a LOT of floating-point calculations to analyse the data. Hence the interest in counting "fpops" (an absolute count of how many) and FLOPS (how quickly). And then the Marketing people come along and call everything milli-hertz and grams-hertz regardless! (Instead of MHz "mega-Hertz" and GHz "giga-Hertz" when appropriate.) Is any Science taught in schools? Hope that hasn't added to the confusion! Happy fast crunchin', Martin See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3) |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
This may have been asked (several times) before, but...... 1.5 + 3.8 is a floating point operation (a floating point add). sqrt(8.3) is a much slower floating point operation (floating point square root). For the purpose of credit, each counts as "1". |
Cosmic_Ocean Send message Joined: 23 Dec 00 Posts: 3027 Credit: 13,516,867 RAC: 13 |
Another thing that I think I remember hearing in a computer class at college is that the ALU is not capable of doing multiply and divide. Instead, what it does is add and subtract the log10 of the two numbers in question. Adding log10s is the same as multiplying, and subtracting is the same as dividing. I thought that was fairly interesting. Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up) |
archae86 Send message Joined: 31 Aug 99 Posts: 909 Credit: 1,582,816 RAC: 0 |
Another thing that I think I remember hearing in a computer class at college is that the ALU is not capable of doing multiply and divide. Instead, what it does is add and subtract the log10 of the two numbers in question. Which ALU would that have been? It is certainly not true of ALUs in general. |
Alinator Send message Joined: 19 Apr 05 Posts: 4178 Credit: 4,647,982 RAC: 0 |
This may have been asked (several times) before, but...... Hmmm... Actually, isn't square root by definition always a floating point operation (regardless of whether the argument is an integer or a real number)? The same thing would apply to division. IOW, the square root of an integer cannot be guaranteed to be an integer, and the quotient of two integers doesn't necessarily have to be an integer. Only for the addition and multiplication of integers is the result always an integer. Alinator |
Cosmic_Ocean Send message Joined: 23 Dec 00 Posts: 3027 Credit: 13,516,867 RAC: 13 |
Another thing that I think I remember hearing in a computer class at college is that the ALU is not capable of doing multiply and divide. Instead, what it does is add and subtract the log10 of the two numbers in question. Which ALU would that have been? I don't know, we didn't get the specifics of it. We were just told that microprocessors don't actually multiply and divide; they add and subtract log10s. Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up) |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
This may have been asked (several times) before, but...... I wasn't trying to be 100% comprehensive, but give a couple of examples, one that was fast and easy, and the other that would take a lot more CPU time. Your statement about addition and multiplication may be mathematically correct, but we're also talking about practical implementation in computers. In most CPU instruction sets there is an integer divide and a floating point divide. If you use the integer divide to calculate 7/2, the answer will be an integer, 3 (and it would not count as a floating point operation). This distinction is available in at least one high-level language. |
ML1 Send message Joined: 25 Nov 01 Posts: 20393 Credit: 7,508,002 RAC: 20 |
... In most CPU instruction sets there is an integer divide and a floating point divide. If you use the integer divide to calculate 7/2, the answer will be an integer, 3 (and it would not count as a floating point operation). In recent times, with high-powered hardware and fast FPUs, it often does not really matter whether you use integer arithmetic or floating-point arithmetic; both are fast enough. Only if you are trying to optimise the code might you start looking at doing some mix of integers and floats, so that you can balance out the loading on the CPU's execution units and gain extra throughput. Or if you're optimising to use GPUs... In days of old, you would use integers wherever you could to gain a big speedup! Some languages used "a := b / c" for real arithmetic and "a := b % c" for integer divide (Pascal-esque pseudo code!). More recently, you would more correctly use a modulus (modulo) operator for integer division. Some languages let you directly do binary bit-shifts... It all depends on what you want to do. In binary hardware, there are all manner of tricks used to speed up calculations. One bit of clever munging that went wrong gave rise to the infamous Intel FDIV error. (Intel's subsequent Marketing error was their real clanger...) Over a century ago, errors in printed books of mathematical tables for trig and logs caused the deaths of many sailors through navigation miscalculations... (And then Babbage produced his "Difference Engine".) Happy crunchin', Martin See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3) |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
The Pascal integer divide is "div", while "/" is a floating point divide. In Pascal the modulo operator is "mod"; "%" is C's. A good C compiler will emit the same code for the following two statements (when "a" is unsigned, or known to be non-negative): b=a/4; b=a>>2; The latter being an explicit shift right. [edit] In C syntax: int a,b; a=7/2; b=7%2; "a" would contain 3 (being an integer result) and "b" would contain 1. "modulo" returns the remainder of the divide operation. |
Michael Belanger, W1DGL Send message Joined: 30 Jul 00 Posts: 1887 Credit: 7,441,278 RAC: 49 |
Looks like I got a good (albeit still confusing to me) discussion going. I've never been good at math (in any form); in fact, it was my worst subject all through school - hehehe, now explain to me how I got high marks in Aircraft Mechanic's School, where you need math, and lots of it sometimes! |
Allie in Vancouver Send message Joined: 16 Mar 07 Posts: 3949 Credit: 1,604,668 RAC: 0 |
Looks like I got a good (albeit still confusing to me) discussion going. I've never been good at math (in any form); in fact, it was my worst subject all through school - hehehe, now explain to me how I got high marks in Aircraft Mechanic's School, where you need math, and lots of it sometimes! Practical applications vs. theoretical understanding. Vastly different concepts. For what it is worth, my understanding of math is more on the practical level as well. :o) Pure mathematics is, in its way, the poetry of logical ideas. Albert Einstein |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
Looks like I got a good (albeit still confusing to me) discussion going. I've never been good at math (in any form); in fact, it was my worst subject all through school - hehehe, now explain to me how I got high marks in Aircraft Mechanic's School, where you need math, and lots of it sometimes! The best math class I ever had was Chemistry. The numbers all meant something. Fortunately, the type of programming I do does not lean heavily on higher math. I know shifts and masks well because they work nicely when you're comparing network numbers for routing. Most of the code I write is pure integer math, and the floating point library isn't even linked in. |
Greg Hogan Send message Joined: 3 Mar 04 Posts: 28 Credit: 9,235,626 RAC: 0 |
You are both kinda right. It depends on the ALU implemented. Most simple micros have stripped-down cores with rudimentary ALUs, where mult/div operations are done algorithmically, in code, most often only on integers. All the floating point operations are also done in code (e.g. an FP function library). Others have dedicated hardware in the ALU, with instructions for mult/div operations; some even use dedicated fixed point registers. More sophisticated ALUs can have dedicated floating point registers (for example, some DSPs). It can be argued that GPUs are the natural next-generation ALU, one that is capable of matrix operations. Hardware physics processors are on the horizon, promising even greater things. Remember the hassles Sony's PlayStation 2 got itself into when it could do realtime vector and 3D spatial co-ordinate math faster than the ones in US cruise missiles... sometimes hardware capability is intentionally limited for a reason. I've had a bit of a play with VHDL and FPGAs, making my own processors and ALUs; the more complex the design, the more hardware required, the bigger the chip, the more power required, and the longer it takes to get it all working correctly. All of this needs to happen in 'parallel' at once, whereas doing it in code 'sequentially' is more akin to how humans solve problems and thus generally easier to visualise and debug. Sometimes it's just better overall to keep it simple unless speed is really critical, but that's just me being lazy, I think. |
ML1 Send message Joined: 25 Nov 01 Posts: 20393 Credit: 7,508,002 RAC: 20 |
... I've had a bit of play with VHDL and FPGAs making my own processors and ALUs... Have you had a play with Handel-C? ;-) Happy fast crunchin', Martin See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3) |
Greg Hogan Send message Joined: 3 Mar 04 Posts: 28 Credit: 9,235,626 RAC: 0 |
Cheers for that tip - will look at that. Using Xilinx ISE+EDK and Altium with a Nanoboard. |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.