Floating point operations per second

Flops redirects here. For the poker term, see flop (poker). For commercial failures, see List of major flops.

In computing, flops is an acronym for floating point operations per second. It is used as a measure of a computer's performance (with the flops as the unit), especially in fields of scientific calculation that make heavy use of floating point operations.

One speaks in the singular of a flops, not of a flop, although the latter is frequently encountered: the final s stands for second and does not indicate a plural.

The performance spectrum

Computing devices exhibit an enormous range of performance levels in floating-point applications, so it is useful to introduce units larger than the flops; the standard SI decimal prefixes are used for this purpose. For example, a cheap but modern desktop computer can perform billions of floating point operations per second, so its performance is in the range of a few gigaflops (10⁹ flops).
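The multiples in common use are the kiloflops (10³ flops), megaflops (10⁶ flops), gigaflops (10⁹ flops), teraflops (10¹² flops) and petaflops (10¹⁵ flops).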

Today's most powerful supercomputers have speeds measured in teraflops (10¹² flops). The fastest computer in the world as of November 5, 2004 is the IBM Blue Gene supercomputer, with a measured speed of 70.72 teraflops. This machine is a prototype of the Blue Gene/L system IBM is building for the Lawrence Livermore National Laboratory in California, which will have a peak speed of 360 teraflops and is due to be completed in 2005.

Pocket calculators are at the other end of the performance spectrum. Any response time below about 0.1 second is experienced as instantaneous by a human operator, so there is no benefit in making a calculator any faster. Since a simple calculation involves on the order of one floating point operation, a pocket calculator needs to perform at only about 10 flops.

Of course, humans are even worse floating-point processors. If it takes a person a quarter of an hour to carry out a pencil-and-paper long division with 10 significant digits, that person is calculating in the milliflops range: one operation per 900 seconds is roughly 1.1 × 10⁻³ flops.

Flops as a metric

In order for flops to be useful as a metric for floating-point performance, a standard benchmark must be available on all computers of interest. An example is the LINPACK benchmark.
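As a rough illustration only (this is not the LINPACK benchmark, and the figure it reports is a crude estimate), a flops rating can be obtained by timing a known number of floating point operations. The following Python sketch, which assumes the NumPy library is available, times a dense n-by-n matrix multiplication, an operation that performs roughly 2n³ floating point operations:

    # Crude flops estimate: time a dense matrix multiplication and
    # divide the operation count by the elapsed time.
    # Assumes NumPy is installed; this is not the LINPACK benchmark.
    import time
    import numpy

    n = 2000
    a = numpy.random.rand(n, n)
    b = numpy.random.rand(n, n)

    start = time.perf_counter()
    c = a @ b                          # about 2*n**3 floating point operations
    elapsed = time.perf_counter() - start

    print("approximately %.2f gigaflops" % (2 * n**3 / elapsed / 1e9))

Like LINPACK, this measures dense linear-algebra throughput; unlike LINPACK, it makes no attempt to verify the numerical result or to standardize the problem size.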

Flops in isolation are arguably not very useful as a benchmark for modern computers. Many factors other than raw floating-point computation speed affect performance, such as interprocessor communication, cache coherence, and the memory hierarchy.

For ordinary (non-scientific) applications, integer operations (measured in MIPS) are far more common. A flops measurement therefore does not accurately predict how a processor will perform on an arbitrary workload. However, for many scientific jobs, such as data analysis, a flops rating is an effective measure.