Precision (computer science)
In computer science, the precision of a numerical quantity is a measure of the detail in which the quantity is expressed. This is usually measured in bits, but sometimes in decimal digits. It is related to precision in mathematics, which describes the number of digits used to express a value.
In Java, one of the few programming languages with standardized-precision data types, the following precisions are defined for the language's standard integer numerical types. The ranges given are for signed integer values represented in standard two's complement form.
| Type name | Precision (binary bits) | Range |
|---|---|---|
| byte | 8 | -128 to +127 |
| short | 16 | -32,768 to +32,767 |
| int | 32 | -2,147,483,648 to +2,147,483,647 |
| long | 64 | -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807 |
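These ranges can be confirmed at runtime using the constants on the standard wrapper classes, and exceeding a type's precision wraps around in two's complement fashion. The following is a minimal illustrative sketch (the class name is arbitrary):

```java
public class IntegerPrecision {
    public static void main(String[] args) {
        // Range constants defined by the standard wrapper classes
        System.out.println("byte:  " + Byte.MIN_VALUE + " to " + Byte.MAX_VALUE);
        System.out.println("short: " + Short.MIN_VALUE + " to " + Short.MAX_VALUE);
        System.out.println("int:   " + Integer.MIN_VALUE + " to " + Integer.MAX_VALUE);
        System.out.println("long:  " + Long.MIN_VALUE + " to " + Long.MAX_VALUE);

        // Exceeding the precision of a type wraps around (two's complement overflow)
        int maxInt = Integer.MAX_VALUE;
        System.out.println(maxInt + 1); // prints -2147483648
    }
}
```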
Rounding error
Precision is often the source of rounding errors in computation. Because only a limited number of bits is used to store a number, some accuracy is usually lost. For example, storing sin(0.1) in the IEEE single-precision floating-point format introduces an error, since the value cannot be represented exactly in that many bits. The error is then often magnified (though it can also be reduced) as subsequent computations are performed on the data.
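A small sketch of this in Java: Math.sin returns a double-precision result, and casting it to float rounds it to single precision, discarding the detail that does not fit. The printed difference is the rounding error introduced by the narrower format.

```java
public class RoundingError {
    public static void main(String[] args) {
        double exact = Math.sin(0.1);   // computed in double precision
        float single = (float) exact;   // rounded to IEEE single precision

        System.out.println("double: " + exact);
        System.out.println("float:  " + single);
        System.out.println("error:  " + (exact - (double) single));
    }
}
```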
See also
- Integer (computer science)
- Arbitrary-precision arithmetic
- Precision (arithmetic)
- IEEE 754 (IEEE floating-point standard)