
Numerical error

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Qazi Umar FarooqKing (talk | contribs) at 11:26, 16 April 2015 (Numerical Error In computing Absolute Error Relative Error). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Numerical error in computing

Many computer users who are not aware of how numbers are represented and processed within computer systems tend to assume that the results of programs are accurate. For example, if a decimal number is printed as 123.45678, all the figures are assumed to be an accurate representation of the result of whatever calculation was performed. When computer programs are being implemented and tested, the errors which can occur include:

1. Errors in the syntax of the programming language, i.e. keywords misspelt, operators missing, etc. The compiler will find such errors and present a more or less meaningful error message indicating the problem.

2. Errors in coding the program specification or algorithms into the programming language, i.e. the program does not do what the user specified. This type of error is mainly due to inexperience and a lack of understanding of the language.

3. Errors in the program specification or algorithms. The set of rules, formulas, or algebraic steps that the user has specified to solve the problem is incorrect, e.g. a mathematician has made a mistake in the development of the equations. It is up to the user to ensure that the algorithms being used are correct to the extent required.

4. Errors due to the storage and processing of numerical data, i.e. the accuracy of the hardware used to perform numerical calculations is finite. For example, real number data (with fractional components) is stored in floating point format with a typical accuracy between 8 and 20 significant decimal figures, so the results of programs using such floating point representation cannot be exact. In addition, many mathematical functions are infinite series which have to be truncated.

This set of notes considers the type of error listed in 4 above.

Relative and absolute errors

Absolute error

The error in the evaluation of an algorithm can be defined to be the true value minus the approximate value as calculated. Usually, in practice, only the approximate value (as calculated by the program) is known. In general, however, it is possible to determine something about the error without knowing it precisely. For example, consider using the technique of interval halving to find the root of:

      x⁴ - 9x³ - 2x² + 120x - 130 = 0

The iterations could stop at the point when the root was known to lie between -3.60016 and -3.60013, when it is not necessary to halve and try again. In this case the interval width is 0.00003, and taking the midpoint the root can be stated as -3.600145 ± 0.000015. Thus, although the value of the error is not known, it can be stated that it does not exceed 1.5 × 10⁻⁵. This error is called an absolute error, and one common notation is to write a bar over the symbol to indicate an approximation and an e with a subscript to stand for the error. Thus, if x is the true value and x̄ the approximation:
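The interval-halving run above can be sketched in a few lines of Python. This is a minimal sketch, not the notes' own code, and the degree-4 form of the polynomial is inferred from context (the printed line lost its superscripts); it does have a root near -3.6001:

```python
def f(x):
    # Polynomial from the example above (leading x**4 term inferred)
    return x**4 - 9*x**3 - 2*x**2 + 120*x - 130

def bisect(f, a, b, tol):
    """Halve [a, b] until the midpoint is within tol of the root."""
    if f(a) * f(b) > 0:
        raise ValueError("f must change sign on [a, b]")
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        # Keep whichever half-interval still brackets a sign change
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2, (b - a) / 2  # midpoint estimate, error bound

root, bound = bisect(f, -3.7, -3.5, 1.5e-5)
# root is approximately -3.6001, with bound <= 1.5e-5
```

The returned bound plays the role of the 1.5 × 10⁻⁵ in the text: the true root differs from the midpoint estimate by at most that amount, even though the error itself is never known exactly.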

      x = x̄ + eₓ

Thus the absolute error, eₓ = x - x̄, is the difference between the true value and the approximation.

Relative error

Another way to define the error is the relative error, which is defined as the absolute error divided by the approximation. It may seem more reasonable to divide the absolute error by the true value, but usually this is not known. So long as the error is small, using the approximate value instead has no sizable bearing on the numerical value of the relative error.

Comparison of absolute and relative error

For numbers close to 1 the absolute and relative errors are nearly equal, but for large and small numbers the two can be very different. For example, if the true value is 0.00006 and the approximation is 0.00005, the absolute error is only 10⁻⁵ but the relative error is 0.2, or 20%. On the other hand, if the true value is 100,500 and the approximation is 100,000, the absolute error is 500 but the relative error is only 0.005, or 0.5%. Thus, in general, relative errors are the more useful way of specifying errors.
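The two worked comparisons can be checked with a few lines of Python. The helper names are illustrative, not from the original notes, and the large-number pair is taken as 100,500 against 100,000, which makes the stated 500 and 0.5% exact:

```python
def absolute_error(true_value, approx):
    return abs(true_value - approx)

def relative_error(true_value, approx):
    # Divided by the approximation, as defined above, since in
    # practice the true value is usually unknown
    return abs(true_value - approx) / abs(approx)

# Small numbers: tiny absolute error, large relative error
print(absolute_error(0.00006, 0.00005))   # about 1e-5
print(relative_error(0.00006, 0.00005))   # about 0.2, i.e. 20%

# Large numbers: large absolute error, small relative error
print(absolute_error(100_500, 100_000))   # 500
print(relative_error(100_500, 100_000))   # 0.005, i.e. 0.5%
```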


In software engineering and mathematics, numerical error is the combined effect of two kinds of error in a calculation. The first is caused by the finite precision of computations involving floating-point or integer values. The second, usually called truncation error, is the difference between the exact mathematical solution and the approximate solution obtained when simplifications are made to the mathematical equations to make them more amenable to calculation. The term truncation comes from the fact that these simplifications usually involve either the truncation of an infinite series expansion, so as to make the computation possible and practical, or the discarding of the least significant bits of an arithmetic operation.
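Truncation error can be illustrated with a sketch in Python, using the Maclaurin series for eˣ cut off after a few terms (the function name here is made up for the example):

```python
import math

def exp_truncated(x, n_terms):
    # Sum of x**k / k! for k = 0 .. n_terms - 1: the infinite
    # series for e**x with its tail thrown away
    return sum(x**k / math.factorial(k) for k in range(n_terms))

# Difference between the exact value and the truncated series at x = 1
truncation_error = abs(math.exp(1.0) - exp_truncated(1.0, 5))
print(truncation_error)   # roughly 0.0099: the discarded tail of the series
```

Keeping more terms shrinks the truncation error, but the finite-precision (rounding) error of each arithmetic operation always remains.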

Floating-point numerical error is often measured in ULP (unit in the last place).
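Both the inexact storage of decimals and ULP-sized rounding error can be inspected directly in Python (math.ulp requires Python 3.9 or later):

```python
import math
from decimal import Decimal

# The double nearest to 0.1 is not exactly 0.1; Decimal exposes
# the value actually stored
print(Decimal(0.1))   # 0.1000000000000000055511151231257827...

# Rounding error of a single operation, measured in ULPs of the result
err = (0.1 + 0.2) - 0.3
print(err / math.ulp(0.3))   # 1.0: the computed sum is one ULP above 0.3
```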
