
Matrix splitting

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Anita5192 (talk | contribs) at 18:03, 1 September 2013 (Matrix iterative methods: Changed variable name to be consistent with the rest of the article.). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

In the mathematical discipline of numerical linear algebra, a matrix splitting is an expression which represents a given matrix as a sum or difference of matrices. Many iterative methods (e.g., for systems of differential equations) depend upon the direct solution of matrix equations involving matrices more general than tridiagonal matrices. These matrix equations can often be solved directly and efficiently when written as a matrix splitting. The technique was devised by Richard S. Varga in 1960.[1]

Regular splittings

We seek to solve the matrix equation

    Ax = k,     (1)

where A is a given n × n non-singular matrix, and k is a given column vector with n components. We split the matrix A into

    A = B − C,     (2)

where B and C are n × n matrices. If, for an arbitrary n × n matrix M, M has nonnegative entries, we write M ≥ 0. If M has only positive entries, we write M > 0. Similarly, if the matrix M1 − M2 has nonnegative entries, we write M1 ≥ M2.

Definition: A = B − C is a regular splitting of A if and only if B−1 ≥ 0 and C ≥ 0.
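Under this definition, whether a given splitting is regular can be checked numerically. A minimal NumPy sketch (the matrix A and the splitting below are illustrative choices, not taken from the article):

```python
import numpy as np

def is_regular_splitting(B, C):
    """A = B - C is a regular splitting iff B^-1 >= 0 and C >= 0."""
    B_inv = np.linalg.inv(B)  # raises LinAlgError if B is singular
    return bool(np.all(B_inv >= 0) and np.all(C >= 0))

# Illustrative example: for A = [[5, -1], [-1, 5]], taking B as the
# diagonal part of A gives B^-1 = diag(1/5, 1/5) >= 0 and C = B - A >= 0.
A = np.array([[5.0, -1.0], [-1.0, 5.0]])
B = np.diag(np.diag(A))
C = B - A
```

Here `is_regular_splitting(B, C)` returns True; replacing C by a matrix with a negative entry would make it False.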

We assume that matrix equations of the form

    Bx = g,     (3)

where g is a given column vector, can be solved directly for the vector x. If (2) represents a regular splitting of A, then the iterative method

    Bx(m+1) = Cx(m) + k,  m = 0, 1, 2, …,     (4)

where x(0) is an arbitrary vector, can be carried out. Equivalently, we write (4) in the form

    x(m+1) = B−1Cx(m) + B−1k,  m = 0, 1, 2, …     (5)

The matrix D = B−1C has nonnegative entries if (2) represents a regular splitting of A.[2]

It can be shown that if A−1 > 0, then ρ(D) < 1, where ρ(D) represents the spectral radius of D, and thus D is a convergent matrix. As a consequence, the iterative method (5) is necessarily convergent.[3][4]

If, in addition, the splitting (2) is chosen so that the matrix B is a diagonal matrix (with the diagonal entries all non-zero, since B must be invertible), then B can be inverted in linear time (see Time complexity).
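The iteration (4) can be sketched directly in NumPy. In this illustrative example (not from the article), B is taken to be the diagonal of A, so each step is cheap, and B−1 ≥ 0 and C ≥ 0 make the splitting regular:

```python
import numpy as np

def splitting_iteration(B, C, k, x0, num_steps=50):
    """Run x^(m+1) = B^-1 (C x^(m) + k) for the splitting A = B - C."""
    x = np.asarray(x0, dtype=float)
    for _ in range(num_steps):
        x = np.linalg.solve(B, C @ x + k)
    return x

# Illustrative regular splitting: B is the (invertible) diagonal of A,
# so each step costs only a diagonal solve; C = B - A is nonnegative here.
A = np.array([[5.0, -1.0], [-1.0, 5.0]])
B = np.diag(np.diag(A))
C = B - A
k = A @ np.array([1.0, 2.0])       # chosen so the exact solution of Ax = k is (1, 2)
x = splitting_iteration(B, C, k, np.zeros(2))
```

For this splitting the iteration matrix D = B−1C has spectral radius 0.2, so the iterates converge rapidly to the exact solution (1, 2).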

Matrix iterative methods

Many iterative methods can be described as a matrix splitting. If the diagonal entries of the matrix A are all nonzero, and we express the matrix A as the matrix sum

    A = D − U − L,     (6)

where D is the diagonal part of A, and U and L are respectively strictly upper and lower triangular n × n matrices, then we have the following.

The Jacobi method can be represented in matrix form as a splitting

    x(m+1) = D−1(U + L)x(m) + D−1k,  m = 0, 1, 2, …     (7)[5][6]
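A concrete sketch of one Jacobi step under the splitting A = D − U − L (the test matrix below is an illustrative strictly diagonally dominant choice, not from the article):

```python
import numpy as np

def jacobi_step(A, k, x):
    """One Jacobi step: x <- D^-1 ((U + L) x + k), where A = D - U - L."""
    d = np.diag(A)               # diagonal entries of A (assumed nonzero)
    R = np.diag(d) - A           # U + L: the negated off-diagonal part of A
    return (R @ x + k) / d

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # strictly diagonally dominant
k = A @ np.array([1.0, 1.0])             # chosen so the exact solution is (1, 1)
x = np.zeros(2)
for _ in range(60):
    x = jacobi_step(A, k, x)
```

Since each step only divides by the diagonal of A, a Jacobi step is cheap; for this matrix the iterates converge to the exact solution (1, 1).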

The Gauss–Seidel method can be represented in matrix form as a splitting

    x(m+1) = (D − L)−1Ux(m) + (D − L)−1k,  m = 0, 1, 2, …     (8)[7][8]
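In the same illustrative setting, one Gauss–Seidel step solves a lower-triangular system at each iteration:

```python
import numpy as np

def gauss_seidel_step(A, k, x):
    """One Gauss-Seidel step: solve (D - L) x_new = U x + k, where A = D - U - L."""
    DL = np.tril(A)              # D - L: the lower-triangular part of A
    U = DL - A                   # strictly upper part of A, negated
    return np.linalg.solve(DL, U @ x + k)

A = np.array([[4.0, 1.0], [2.0, 5.0]])
k = A @ np.array([1.0, 1.0])     # chosen so the exact solution is (1, 1)
x = np.zeros(2)
for _ in range(30):
    x = gauss_seidel_step(A, k, x)
```

For this matrix the Gauss–Seidel iteration matrix has a smaller spectral radius than Jacobi's, so fewer steps are needed to reach the solution (1, 1).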

The method of successive over-relaxation can be represented in matrix form as a splitting

    x(m+1) = (D − ωL)−1[(1 − ω)D + ωU]x(m) + ω(D − ωL)−1k,  m = 0, 1, 2, …     (9)[9][10]
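One successive over-relaxation step with relaxation parameter ω (the matrix and ω = 1.1 below are illustrative choices; ω = 1 recovers Gauss–Seidel):

```python
import numpy as np

def sor_step(A, k, x, omega):
    """One SOR step: solve (D - omega L) x_new = ((1 - omega) D + omega U) x + omega k."""
    D = np.diag(np.diag(A))
    L = D - np.tril(A)           # strictly lower part of A, negated (so A = D - U - L)
    U = D - np.triu(A)           # strictly upper part of A, negated
    M = D - omega * L
    N = (1.0 - omega) * D + omega * U
    return np.linalg.solve(M, N @ x + omega * k)

A = np.array([[4.0, 1.0], [2.0, 5.0]])
k = A @ np.array([1.0, 1.0])     # chosen so the exact solution is (1, 1)
x = np.zeros(2)
for _ in range(40):
    x = sor_step(A, k, x, omega=1.1)
```

Like Gauss–Seidel, each step only requires a lower-triangular solve; the iterates converge to the exact solution (1, 1).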


Notes

  1. ^ Varga (1960)
  2. ^ Varga (1960, pp. 121–122)
  3. ^ Varga (1960, pp. 122–123)
  4. ^ Varga (1962, p. 89)
  5. ^ Burden & Faires (1993, p. 408)
  6. ^ Varga (1962, p. 88)
  7. ^ Burden & Faires (1993, p. 411)
  8. ^ Varga (1962, p. 88)
  9. ^ Burden & Faires (1993, p. 416)
  10. ^ Varga (1962, p. 88)

References

  • Burden, Richard L.; Faires, J. Douglas (1993), Numerical Analysis (5th ed.), Boston: Prindle, Weber and Schmidt, ISBN 0-534-93219-3.
  • Varga, Richard S. (1960). "Factorization and Normalized Iterative Methods". In Langer, Rudolph E. (ed.). Boundary Problems in Differential Equations. Madison: University of Wisconsin Press. pp. 121–142. LCCN 60-60003.
  • Varga, Richard S. (1962), Matrix Iterative Analysis, Englewood Cliffs, New Jersey: Prentice-Hall.