Gauss–Newton algorithm


The Gauss–Newton algorithm is a method for finding the zeroes of a function of several real variables. It is a generalisation of Newton's method for a single variable.

The aim is to solve a system of k (non-linear) equations in k unknowns, which amounts to finding the zeros of a continuously differentiable function F : R^k → R^k. Compared with the one-variable Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n), one has to multiply by the inverse of the k-by-k Jacobian matrix F'(x_n) instead of dividing by f'(x_n). Rather than actually computing this inverse, one can save time by solving the system of linear equations

    F'(x_n) (x_{n+1} - x_n) = -F(x_n)

for the unknown x_{n+1} - x_n. As in the one-variable case, the method only works if the initial value x_0 is close enough to the true zero. Typically, a well-behaved region is located first with some other method (such as gradient descent), and Newton's method is then used to "polish" a root which is already known approximately.
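To make the iteration concrete, here is a minimal NumPy sketch of the step just described; the names newton_system, F and jacobian are illustrative and not part of any standard library.

 import numpy as np
 
 def newton_system(F, jacobian, x0, tol=1e-10, max_iter=50):
     """Find a zero of F : R^k -> R^k, starting from x0."""
     x = np.asarray(x0, dtype=float)
     for _ in range(max_iter):
         Fx = F(x)
         if np.linalg.norm(Fx) < tol:
             break
         # Solve F'(x_n) (x_{n+1} - x_n) = -F(x_n) for the step,
         # rather than forming the inverse of the Jacobian explicitly.
         x = x + np.linalg.solve(jacobian(x), -Fx)
     return x
 
 # Example: intersect the circle x^2 + y^2 = 4 with the line y = x.
 F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[1] - v[0]])
 J = lambda v: np.array([[2.0*v[0], 2.0*v[1]], [-1.0, 1.0]])
 print(newton_system(F, J, x0=[1.0, 2.0]))   # converges to about (1.4142, 1.4142)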

The Levenberg–Marquardt algorithm flexibly interpolates between gradient descent and the Gauss–Newton method.
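As an illustration (a sketch under the usual least-squares reading of the problem, not taken from this article): when minimising ||F(x)||^2, the damped update solves (J^T J + λI)(x_{n+1} - x_n) = -J^T F(x_n); with λ = 0 this reduces to a Gauss–Newton step, while a large λ gives a short step along the negative gradient, i.e. gradient descent.

 import numpy as np
 
 def levenberg_marquardt_step(F, jacobian, x, lam):
     """One damped step for minimising ||F(x)||^2 (illustrative helper)."""
     J = jacobian(x)
     g = J.T @ F(x)                          # gradient of 0.5*||F(x)||^2
     A = J.T @ J + lam * np.eye(J.shape[1])  # damping term interpolates the two methods
     return x - np.linalg.solve(A, g)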