
Numerical error

From Wikipedia, the free encyclopedia

In software engineering and mathematics, numerical error is the combined effect of two kinds of error in a calculation. The first, sometimes called rounding error, is caused by the finite precision of computations involving floating-point or integer values. The second, sometimes called the theoretical truncation error, is the difference between the exact mathematical solution and the approximate solution obtained when the mathematical equations are simplified to make them more amenable to calculation. The term truncation reflects the fact that these simplifications usually involve truncating an infinite series expansion so as to make the computation possible and practical.
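
Both kinds of error can be demonstrated directly. The following minimal Python sketch (exp_truncated is an illustrative helper, not a standard library function) shows a rounding error, where 0.1 + 0.2 fails to equal 0.3 in binary floating point, and a truncation error, where the Taylor series of e^x is cut off after a fixed number of terms:

    import math

    # Rounding (finite-precision) error: 0.1 and 0.2 have no exact
    # binary64 representation, so their computed sum differs from 0.3.
    s = 0.1 + 0.2
    print(s == 0.3)          # False
    print(s - 0.3)           # ~5.55e-17

    # Truncation error: approximate e^x by cutting off its Taylor
    # series after n terms; the discarded tail is the truncation error.
    def exp_truncated(x, n):
        return sum(x**k / math.factorial(k) for k in range(n))

    approx = exp_truncated(1.0, 6)       # 1 + 1 + 1/2 + ... + 1/120
    print(abs(math.exp(1.0) - approx))   # ~1.6e-3, independent of rounding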

Floating-point numerical error is often measured in ULPs (units in the last place).
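
A minimal Python sketch of this measurement, assuming CPython 3.9 or later for math.ulp: the rounding error of 0.1 + 0.2 relative to 0.3 is expressed as a multiple of the spacing between adjacent representable floats at 0.3.

    import math

    exact  = 0.3
    approx = 0.1 + 0.2

    # math.ulp(x) returns the spacing between x and the next
    # representable float, i.e. one unit in the last place at x.
    error_ulps = (approx - exact) / math.ulp(exact)
    print(error_ulps)   # 1.0 -> the computed sum is one ULP above 0.3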


References

  • Higham, Nicholas J. Accuracy and Stability of Numerical Algorithms. SIAM. ISBN 0-89871-355-2.