Numerical error

In software engineering and mathematics, numerical error is the name given to two kinds of error that occur in a calculation. The first, often called rounding error, is caused by the finite precision of computations involving floating-point values; the second, sometimes called truncation error, is the difference between the exact mathematical solution and the approximate solution obtained when the mathematical equations are simplified to make them more amenable to calculation.
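
Both kinds of error are easy to exhibit directly. The following Python sketch (an illustration, not part of the original article) shows a rounding error produced by finite-precision floating-point arithmetic and a truncation error produced by simplifying an equation:

```python
import math

# Rounding error: 0.1 has no exact binary floating-point
# representation, so a repeated sum drifts from the exact value.
total = sum(0.1 for _ in range(10))
print(total == 1.0)          # False
print(total - 1.0)           # about -1.1e-16: accumulated rounding error

# Truncation error: replacing sin(x) by the first term of its
# Taylor series, sin(x) ~ x, introduces an error that is purely
# mathematical and independent of floating-point precision.
x = 0.5
print(abs(math.sin(x) - x))  # about 0.0206: the truncation error
```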

Floating-point numerical error is often measured in units in the last place (ULPs), where one ULP is the spacing between a floating-point value and the next representable value.
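
As a sketch of this measurement (assuming Python 3.9 or later, which provides math.ulp), the error of a computed result can be expressed in ULPs by dividing it by the ULP of the correctly rounded value:

```python
import math

# math.ulp(x) returns the value of the least significant bit of x,
# i.e. one unit in the last place at x.
computed = 0.1 + 0.2          # 0.30000000000000004...
exact = 0.3                   # the double nearest to 3/10
error_in_ulps = (computed - exact) / math.ulp(exact)
print(error_in_ulps)          # 1.0 -> the computed sum is off by one ULP
```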
