Generalized minimal residual method

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Jitse Niesen at 09:51, 9 March 2006. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

In mathematics, the generalized minimal residual method (usually abbreviated GMRES) is an iterative method for the numerical solution of a system of linear equations. The method approximates the solution by the vector in a Krylov subspace with minimal residual. The Arnoldi iteration is used to find this vector.

The method

Denote the system of linear equations which needs to be solved by

    Ax = b.
The matrix A is assumed to be invertible and of size m-by-m. Furthermore, it is assumed that b is normalized, i.e., ||b|| = 1 (in this article, ||·|| denotes the Euclidean norm).

The n-th Krylov subspace for this problem is

    K_n = K_n(A, b) = span{b, Ab, A^2 b, ..., A^(n-1) b}.
GMRES approximates the exact solution of Ax = b by the vector x_n ∈ K_n that minimizes the norm of the residual Ax_n − b.

The vectors b, Ab, ..., A^(n-1) b are almost linearly dependent, so instead of this basis, the Arnoldi iteration is used to find orthonormal vectors

    q_1, q_2, ..., q_n,

which form a basis for K_n. Hence, the vector x_n can be written as x_n = Q_n y_n with y_n ∈ R^n, where Q_n is the m-by-n matrix with columns q_1, ..., q_n.

The Arnoldi process also produces an (n+1)-by-n upper Hessenberg matrix H_n with

    A Q_n = Q_(n+1) H_n.
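A minimal Arnoldi iteration can be sketched in Python with NumPy; the function name is illustrative, not taken from any particular library:

```python
import numpy as np

def arnoldi(A, b, n):
    """Build an orthonormal basis Q_(n+1) of Krylov vectors and the
    (n+1)-by-n upper Hessenberg matrix H_n with A Q_n = Q_(n+1) H_n."""
    m = A.shape[0]
    Q = np.zeros((m, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(n):
        v = A @ Q[:, j]                  # next Krylov direction
        for i in range(j + 1):           # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / H[j + 1, j]    # assumes no breakdown (H[j+1, j] != 0)
    return Q, H
```

One can check the relation directly: `A @ Q[:, :n]` agrees with `Q @ H` up to rounding, and the columns of `Q` are orthonormal.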
Because Q_(n+1) has orthonormal columns, we have

    ||A x_n − b|| = ||H_n y_n − e_1||,

where

    e_1 = (1, 0, 0, ..., 0)^T

is the first vector in the standard basis of R^(n+1) (here we use that ||b|| = 1). Hence, x_n can be found by minimizing ||H_n y_n − e_1|| over y_n and setting x_n = Q_n y_n. This is a linear least squares problem of size n.

This yields the GMRES method. At every step of the iteration:

  1. do one step of the Arnoldi method;
  2. find the y_n which minimizes ||H_n y_n − e_1||;
  3. compute x_n = Q_n y_n;
  4. repeat if the residual ||H_n y_n − e_1|| is not yet small enough.
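The loop above can be sketched in Python with NumPy. This is a naive illustration, not an optimized implementation: it re-solves the least squares problem from scratch at every step, whereas practical codes update a QR factorization of H_n with Givens rotations. It also drops the assumption ||b|| = 1 by carrying β = ||b|| explicitly:

```python
import numpy as np

def gmres_sketch(A, b, tol=1e-8, max_iter=None):
    """Naive GMRES: at step n, minimize ||H_n y - beta e_1|| and set x_n = Q_n y."""
    m = A.shape[0]
    max_iter = max_iter or m
    beta = np.linalg.norm(b)
    Q = np.zeros((m, max_iter + 1))
    H = np.zeros((max_iter + 1, max_iter))
    Q[:, 0] = b / beta
    for n in range(1, max_iter + 1):
        # 1. one step of the Arnoldi method
        v = A @ Q[:, n - 1]
        for i in range(n):
            H[i, n - 1] = Q[:, i] @ v
            v -= H[i, n - 1] * Q[:, i]
        H[n, n - 1] = np.linalg.norm(v)
        if H[n, n - 1] > 1e-12:          # guard against breakdown
            Q[:, n] = v / H[n, n - 1]
        # 2. least squares problem of size n: minimize ||H_n y - beta e_1||
        e1 = np.zeros(n + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[: n + 1, :n], e1, rcond=None)
        # 4. stop once the residual norm is small enough
        if np.linalg.norm(H[: n + 1, :n] @ y - e1) < tol:
            break
    return Q[:, :n] @ y                  # 3. x_n = Q_n y_n
```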

The cost of one iteration is one matrix-vector product Aq_n and O(nm) floating-point operations.

Analysis

After m iterations, where m is the size of the matrix A, the Krylov space K_m is the whole of R^m and hence the GMRES method arrives at the exact solution. However, the idea is that after a small number of iterations (relative to m), the vector x_n is already a good approximation to the exact solution.

The speed with which x_n converges is hard to determine. Generally, fast convergence occurs when the eigenvalues of A are clustered away from the origin and A is not too far from normality. Furthermore, A should be a sparse matrix or have some other structure so that the matrix-vector product can be computed quickly.

More precisely, the norm of the residual satisfies

    ||A x_n − b|| ≤ inf_{p ∈ P_n} ||p(A)|| ≤ κ(V) inf_{p ∈ P_n} max_{λ ∈ σ(A)} |p(λ)|,

where P_n denotes the set of polynomials of degree at most n with p(0) = 1, V is the matrix appearing in the spectral decomposition A = V Λ V^(−1) (assuming A is diagonalizable), κ(V) = ||V|| ||V^(−1)|| denotes its condition number, and σ(A) is the spectrum of A (Trefethen & Bau, Thm 35.2).

Extensions of the method

Like other iterative methods, GMRES is usually combined with a preconditioning method in order to speed up convergence.

The cost of the iterations grows like O(n^2), where n is the iteration number. Therefore, the method is sometimes restarted after a number, say k, of iterations, with x_k as initial guess. The resulting method is called GMRES(k).
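Restarted GMRES is available in standard libraries; for example, SciPy exposes it through the `restart` parameter of `scipy.sparse.linalg.gmres`. A brief sketch (the matrix, sizes, and restart length are illustrative):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Illustrative example: a sparse, diagonally dominant tridiagonal system.
n = 1000
A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# GMRES(20): restart every 20 iterations to cap the growing per-cycle cost.
x, info = gmres(A, b, restart=20)
assert info == 0  # 0 means convergence to the requested tolerance
```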

References

  * Lloyd N. Trefethen and David Bau, III. Numerical Linear Algebra. SIAM, 1997.