Conjugate gradient method


In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is symmetric and positive definite. The conjugate gradient method is an iterative method, so it can be applied to sparse systems that are too large to be handled by direct methods such as the Cholesky decomposition. Such systems arise regularly when numerically solving partial differential equations.

The conjugate gradient method can also be used to solve unconstrained optimization problems.

Description of the method

Suppose we want to solve the following system of linear equations

Ax = b,
where the n-by-n matrix A is symmetric (i.e., A^T = A), positive definite (i.e., x^T A x > 0 for all non-zero vectors x in R^n), and real.

We denote the unique solution of this system by x*.

The conjugate gradient method as a direct method

We say that two vectors u and v are conjugate if

u^T A v = 0.
Since A is symmetric and positive definite, the left-hand side defines an inner product

<u, v>_A := u^T A v.
So, two vectors are conjugate if they are orthogonal with respect to this inner product.
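
A concrete illustration may help (a minimal sketch in Python/NumPy; the matrix and vectors below are made-up example values, not taken from the article):

import numpy as np

# A small symmetric positive-definite matrix (example values)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

def inner_A(u, v, A):
    """The inner product <u, v>_A = u^T A v induced by the SPD matrix A."""
    return u @ A @ v

u = np.array([1.0, 0.0])
v = np.array([-1.0, 4.0])   # chosen so that u^T A v = -4 + 4 = 0

print(inner_A(u, v, A))     # 0.0, so u and v are conjugate with respect to A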

Suppose that {pk} is a sequence of n mutually conjugate directions. Then the pk form a basis of R^n, so we can expand the solution x* of Ax = b in this basis:

x* = α1 p1 + α2 p2 + ... + αn pn.
The coefficients are given by

αk = (pk^T b) / (pk^T A pk).

This result is perhaps most transparent by considering the inner product defined above: taking the inner product of pk with both sides of the expansion gives <pk, x*>_A = pk^T A x* = pk^T b on one side and, by conjugacy, αk <pk, pk>_A = αk pk^T A pk on the other, from which the formula for αk follows.

This gives the following method for solving the equation Ax = b: first find a sequence of n conjugate directions, and then compute the coefficients αk.
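
To make the direct method concrete, here is a short Python/NumPy sketch (our own illustration, with made-up example data): it builds a full set of A-conjugate directions by a Gram-Schmidt-like process in the inner product above, and then assembles x* from the coefficients αk = (pk^T b) / (pk^T A pk).

import numpy as np

def conjugate_directions(A):
    """Build n mutually A-conjugate directions by Gram-Schmidt in the <., .>_A inner product."""
    dirs = []
    for e in np.eye(A.shape[0]):
        p = e.copy()
        for q in dirs:
            p -= (q @ A @ e) / (q @ A @ q) * q   # remove the A-component of e along q
        dirs.append(p)
    return dirs

def solve_direct(A, b):
    """Direct conjugate-directions solve: x* = sum over k of alpha_k p_k."""
    x = np.zeros_like(b)
    for p in conjugate_directions(A):
        x += (p @ b) / (p @ A @ p) * p           # alpha_k = p_k^T b / p_k^T A p_k
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])           # example SPD system
b = np.array([1.0, 2.0])
print(solve_direct(A, b))                        # agrees with np.linalg.solve(A, b)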

The conjugate gradient method as an iterative method

If we choose the conjugate directions pk carefully, then we may not need all of them to obtain a good approximation to the solution x*. So, we want to regard the conjugate gradient method as an iterative method. This also allows us to solve systems where n is so large that the direct method would take too much time.

We denote the initial guess for x* by x0. We can assume without loss of generality that x0 = 0 (otherwise, consider the system Az = b - Ax0 instead). Note that the solution x* is also the unique minimizer of the quadratic function

f(x) = (1/2) x^T A x - b^T x,   x in R^n,

whose gradient at x is Ax - b. This suggests taking the first direction p1 to be the negative gradient of f at x = x0, which equals b. The other directions will be chosen conjugate to the gradient, hence the name conjugate gradient method.

Let rk be the residual at the kth step:

rk = b - A xk.
Note that rk is the negative gradient of f at x = xk, so the gradient descent method would be to move in the direction rk. Here, we insist that the directions pk are conjugate to each other, so we take the direction closest to the current residual rk-1 under the conjugacy constraint. This gives the following expression:

pk = rk-1 - Σ_{i<k} [ (pi^T A rk-1) / (pi^T A pi) ] pi.

The resulting algorithm

After some simplifications, this results in the following algorithm for solving Ax = b where A is a real, symmetric, positive-definite matrix.
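
The simplifications alluded to here rest on two standard facts about the conjugate gradient iterates (stated without proof): the residuals are mutually orthogonal, ri^T rj = 0 for i ≠ j, and each residual is orthogonal to all earlier search directions. With these, the coefficients in the loop below reduce to αk = (rk-1^T rk-1) / (pk^T A pk) and βk = (rk-1^T rk-1) / (rk-2^T rk-2), where pk = rk-1 + βk pk-1 for k > 1.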

x0 := 0
k := 0
r0 := b
repeat until rk is "sufficiently small":
    k := k + 1
    if k = 1
        p1 := r0
    else
        βk := (rk-1^T rk-1) / (rk-2^T rk-2)
        pk := rk-1 + βk pk-1
    end if
    αk := (rk-1^T rk-1) / (pk^T A pk)
    xk := xk-1 + αk pk
    rk := rk-1 - αk A pk
end repeat
The result is xk
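
For readers who want to run the algorithm, here is a direct transcription of the pseudocode into Python with NumPy (a minimal sketch: no preconditioning, and the tolerance and test problem are arbitrary choices for the example):

import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve Ax = b for a symmetric positive-definite A, following the pseudocode above."""
    n = len(b)
    if max_iter is None:
        max_iter = n                      # in exact arithmetic, n steps suffice
    x = np.zeros(n)                       # x0 := 0
    r = b.astype(float)                   # r0 := b
    p = r.copy()                          # p1 := r0
    rs_old = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs_old) < tol:         # stop when rk is "sufficiently small"
            break
        Ap = A @ p
        alpha = rs_old / (p @ Ap)         # αk := (rk-1^T rk-1) / (pk^T A pk)
        x += alpha * p                    # xk := xk-1 + αk pk
        r -= alpha * Ap                   # rk := rk-1 - αk A pk
        rs_new = r @ r
        p = r + (rs_new / rs_old) * p     # next direction: pk+1 := rk + βk+1 pk
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])    # example SPD system
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))           # ~[0.0909, 0.6364], i.e. A^{-1} b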

References

The conjugate gradient method was originally proposed in

  • Magnus R. Hestenes and Eduard Stiefel (1952), Methods of conjugate gradients for solving linear systems, J. Research Nat. Bur. Standards 49, 409–436.

A description of the method can be found in the following textbooks:

  • Kendall A. Atkinson (1988), An introduction to numerical analysis (2nd ed.), Section 8.9, John Wiley and Sons. ISBN 0-471-50023-2.
  • Gene H. Golub and Charles F. Van Loan (1996), Matrix computations (3rd ed.), Chapter 10, Johns Hopkins University Press. ISBN 0-8018-5414-8.