Precision (computer science)

In computer science, the precision of a numerical quantity is a measure of the detail with which the quantity is expressed. It is usually measured in bits, but sometimes in decimal digits. It is related to precision in mathematics, which describes the number of digits used to express a value.
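
As a rough illustration of measuring precision in bits versus decimal digits, the following Python sketch (an addition for illustration, not part of the article; the significand widths are those defined by the IEEE 754 standard) converts the bit precision of two common floating-point formats into an equivalent number of decimal digits:

```python
import math

# Significand widths of two common IEEE 754 formats, in bits
# (including the implicit leading bit).
FORMATS = {
    "single (binary32)": 24,
    "double (binary64)": 53,
}

for name, bits in FORMATS.items():
    # Multiplying by log10(2) converts bits of precision to decimal digits.
    digits = bits * math.log10(2)
    print(f"{name}: {bits} bits, about {digits:.1f} decimal digits")
```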

Rounding error

Precision is often the source of rounding errors in computation. The limited number of bits used to store a number frequently causes some loss of accuracy. An example is storing sin(0.1) in the IEEE single-precision floating-point format, which keeps only about seven significant decimal digits of the value. The error is then often magnified as subsequent computations are performed on the data (although it can also be reduced).
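
A minimal Python sketch of this example (an illustration, not from the article; it uses the standard library's struct module to round a double-precision value to single precision):

```python
import math
import struct

exact = math.sin(0.1)  # computed in double precision; serves as the reference value

# Packing into a 4-byte IEEE 754 single-precision encoding and unpacking
# again rounds the value to the nearest representable single-precision number.
single, = struct.unpack("f", struct.pack("f", exact))

print(f"double precision: {exact!r}")
print(f"single precision: {single!r}")
print(f"rounding error:   {abs(exact - single):.3e}")
```

The two printed values agree to only about seven significant decimal digits; the difference is the rounding error introduced by the narrower format.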

See also