
Newton's method in optimization

Figure: A comparison of gradient descent (green) and Newton's method (red) for minimizing a function (with small step sizes). Newton's method uses curvature information to take a more direct route.

In mathematics, Newton's method is an iterative method for finding the roots of a differentiable function $f$, i.e. the solutions of $f(x) = 0$. In optimization, Newton's method is applied to the derivative $f'$ of a twice-differentiable function $f$ to find the stationary points of $f$, i.e. the zeros of $f'$.

Method

Newton's method attempts to construct a sequence $(x_n)$ from an initial guess $x_0$ that converges towards some value $x^*$ such that $f'(x^*) = 0$. Such an $x^*$ is called a stationary point of $f$.

The second-order Taylor expansion of $f$ around $x_n$ (where $\Delta x = x - x_n$) is

$$f(x_n + \Delta x) \approx f(x_n) + f'(x_n)\,\Delta x + \tfrac{1}{2} f''(x_n)\,\Delta x^2,$$

and attains its extremum when its derivative with respect to $\Delta x$ is equal to zero, i.e. when $\Delta x$ solves the linear equation:

$$f'(x_n) + f''(x_n)\,\Delta x = 0.$$

(Considering the right-hand side of the above equation as a quadratic in $\Delta x$, with constant coefficients.)

Thus, provided that $f$ is a twice-differentiable function well approximated by its second-order Taylor expansion and the initial guess $x_0$ is chosen close enough to $x^*$, the sequence $(x_n)$ defined by

$$x_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n)}, \qquad n = 0, 1, 2, \dots$$

will converge towards a root of $f'$, i.e. an $x^*$ for which $f'(x^*) = 0$.
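As a minimal sketch of the one-dimensional iteration above (the test function, its derivatives, and the starting point are illustrative choices, not part of the article):

```python
# Minimal sketch of one-dimensional Newton's method for optimization.
# The test function f(x) = x**4 - 3*x**3 + 2 and the starting point are
# illustrative choices.

def newton_optimize_1d(df, d2f, x0, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - f'(x_n)/f''(x_n) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

if __name__ == "__main__":
    df = lambda x: 4 * x**3 - 9 * x**2    # f'(x) for f(x) = x**4 - 3*x**3 + 2
    d2f = lambda x: 12 * x**2 - 18 * x    # f''(x)
    print(newton_optimize_1d(df, d2f, x0=4.0))  # converges to the minimizer x* = 9/4
```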

Geometric interpretation

The geometric interpretation of Newton's method is that at each iteration one approximates $f(x)$ by a quadratic function around $x_n$, and then takes a step towards the maximum/minimum of that quadratic function (in higher dimensions, this may also be a saddle point). Note that if $f$ happens to be a quadratic function, then the exact extremum is found in a single step.
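As a concrete illustration (the specific quadratic is not from the article): for $f(x) = x^2 - 4x$ one has $f'(x) = 2x - 4$ and $f''(x) = 2$, so from any starting point $x_0$ a single Newton step gives $x_1 = x_0 - \frac{2x_0 - 4}{2} = 2$, which is exactly the minimizer.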

Higher dimensions

The above iterative scheme can be generalized to several dimensions by replacing the derivative with the gradient, $\nabla f(x)$, and the reciprocal of the second derivative with the inverse of the Hessian matrix, $H f(x)$. One obtains the iterative scheme

$$x_{n+1} = x_n - [H f(x_n)]^{-1}\,\nabla f(x_n), \qquad n \ge 0.$$
Usually Newton's method is modified to include a small step size $0 < \gamma \le 1$ instead of $\gamma = 1$:

$$x_{n+1} = x_n - \gamma\,[H f(x_n)]^{-1}\,\nabla f(x_n).$$
This is often done to ensure that the Wolfe conditions are satisfied at each step of the iteration.
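A small sketch of such a damped step, under the assumption that callables `f`, `grad`, and `hess` are supplied by the caller; the Armijo backtracking rule used here is one common way to pick $\gamma$ (a full Wolfe line search would also check a curvature condition, which this sketch omits):

```python
import numpy as np

# Sketch of one damped Newton step x_{n+1} = x_n - gamma * H^{-1} grad(x_n),
# with a simple Armijo backtracking rule to choose gamma.

def damped_newton_step(f, grad, hess, x, gamma0=1.0, beta=0.5, c=1e-4):
    g = grad(x)
    direction = -np.linalg.solve(hess(x), g)     # Newton direction
    gamma = gamma0
    for _ in range(50):                          # cap the number of backtracks
        if f(x + gamma * direction) <= f(x) + c * gamma * (g @ direction):
            break                                # sufficient decrease achieved
        gamma *= beta
    return x + gamma * direction
```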

Where applicable, Newton's method converges much faster towards a local maximum or minimum than gradient descent. In fact, every local minimum $x^*$ has a neighborhood $N$ such that, if we start with $x_0 \in N$, Newton's method with step size $\gamma = 1$ converges quadratically (provided the Hessian is invertible and a Lipschitz continuous function of $x$ in that neighborhood).

Finding the inverse of the Hessian in high dimensions can be an expensive operation. In such cases, instead of directly inverting the Hessian, it is better to calculate the vector $\Delta x = x_{n+1} - x_n$ as the solution to the system of linear equations

$$[H f(x_n)]\,\Delta x = -\nabla f(x_n),$$
which may be solved by various factorizations or approximately (but to great accuracy) using iterative methods. Many of these methods are only applicable to certain types of equations; for example, the Cholesky factorization and conjugate gradient will only work if $H f(x_n)$ is a positive definite matrix. While this may seem like a limitation, it is often a useful indicator that something has gone wrong: for example, if a minimization problem is being approached and $H f(x_n)$ is not positive definite, then the iterations are converging to a saddle point and not a minimum.
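A sketch of both routes using SciPy (the small matrix $H$ and vector $g$ below are illustrative placeholders, not from the article):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import cg

# Solve H @ dx = -g instead of forming H^{-1} explicitly.
H = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite (illustrative)
g = np.array([1.0, 2.0])

# Direct route: Cholesky factorization (requires H positive definite).
dx_chol = cho_solve(cho_factor(H), -g)

# Iterative route: conjugate gradient (also requires H positive definite).
dx_cg, info = cg(H, -g)

# A failed factorization (LinAlgError) or non-convergence of CG signals that
# H is not positive definite, i.e. the iterate may be approaching a saddle point.
```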

On the other hand, if a constrained optimization is done (for example, with Lagrange multipliers), the problem may become one of saddle point finding, in which case the Hessian will be symmetric indefinite and the system for $\Delta x$ will need to be solved with a method that works for indefinite matrices, such as the $LDL^\top$ variant of Cholesky factorization or the conjugate residual method.
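For symmetric indefinite systems, one Krylov option available in SciPy is MINRES, which is closely related to the conjugate residual method mentioned above; the matrix below is an illustrative indefinite example, not from the article:

```python
import numpy as np
from scipy.sparse.linalg import minres

# Symmetric but indefinite matrix (one positive, one negative eigenvalue),
# chosen purely for illustration; plain CG or Cholesky would fail here.
H = np.array([[2.0, 0.0], [0.0, -1.0]])
g = np.array([1.0, 1.0])

dx, info = minres(H, -g)   # info == 0 indicates convergence
```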

There also exist various quasi-Newton methods, where an approximation for the Hessian (or its inverse directly) is built up from changes in the gradient.
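As a sketch of the idea, the BFGS formula is one widely used quasi-Newton update of an inverse-Hessian approximation; the variable names here are illustrative:

```python
import numpy as np

# One BFGS update of an inverse-Hessian approximation B_inv, built from the
# step s = x_new - x_old and the gradient change y = grad_new - grad_old.
# All inputs are assumed to be NumPy arrays.

def bfgs_update(B_inv, s, y):
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    return (I - rho * np.outer(s, y)) @ B_inv @ (I - rho * np.outer(y, s)) \
           + rho * np.outer(s, s)
```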

If the Hessian is close to a non-invertible matrix, the inverted Hessian can be numerically unstable and the solution may diverge. In this case, certain workarounds have been tried in the past, with varied success on certain problems. One can, for example, modify the Hessian by adding a correction matrix $B_n$ so as to make $H f(x_n) + B_n$ positive definite. One approach is to diagonalize $H f(x_n)$ and choose $B_n$ so that $H f(x_n) + B_n$ has the same eigenvectors as $H f(x_n)$, but with each negative eigenvalue replaced by $\epsilon > 0$.
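One way to realize this eigenvalue-replacement idea in code (the threshold `eps` is an arbitrary illustrative choice, and tiny positive eigenvalues are clipped as well):

```python
import numpy as np

# Make a symmetric Hessian positive definite by replacing every eigenvalue
# below a small threshold `eps` with `eps`, keeping the eigenvectors.

def make_positive_definite(H, eps=1e-8):
    eigvals, eigvecs = np.linalg.eigh(H)   # eigendecomposition of symmetric H
    eigvals = np.maximum(eigvals, eps)     # clip negative (and tiny) eigenvalues
    return eigvecs @ np.diag(eigvals) @ eigvecs.T
```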

An approach exploited in the Levenberg–Marquardt algorithm (which uses an approximate Hessian) is to add a scaled identity matrix to the Hessian, $\mu I$, with the scale $\mu$ adjusted at every iteration as needed. For large $\mu$ and small Hessian, the iterations will behave like gradient descent with step size $1/\mu$. This results in slower but more reliable convergence where the Hessian doesn't provide useful information.
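A sketch of the scaled-identity idea; the rule for adjusting $\mu$ (multiply or divide by 10 depending on whether the step reduced $f$) is one common heuristic and is not prescribed by the article:

```python
import numpy as np

# Damped Newton step using H + mu*I, with a simple heuristic for adjusting mu:
# shrink mu after a successful step, grow it (and retry) otherwise.

def damped_step(f, grad, hess, x, mu):
    n = len(x)
    for _ in range(50):                               # cap the number of retries
        dx = np.linalg.solve(hess(x) + mu * np.eye(n), -grad(x))
        if f(x + dx) < f(x):
            return x + dx, max(mu / 10.0, 1e-12)      # accept step, relax damping
        mu *= 10.0                                     # reject step, increase damping
    return x, mu                                       # x is (numerically) stationary
```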

Other approximations

Some functions are poorly approximated by quadratics, particularly when far from a maximum or minimum. In these cases, approximations other than quadratic may be more appropriate.[1]

Notes

  1. Thomas P. Minka (2002-04-17). "Beyond Newton's Method" (PDF). Retrieved 2009-02-20.
