Matrix analysis

From Wikipedia, the free encyclopedia

In mathematics, particularly in linear algebra and its applications, matrix analysis is the study of matrices and their algebraic properties.[1] Particular topics, out of many, include: operations defined on matrices (such as matrix addition, matrix multiplication, and operations derived from these); functions of matrices (such as the matrix exponential and the matrix logarithm, and even sines and cosines of matrices); and the eigenvalues of matrices (eigendecomposition of a matrix, eigenvalue perturbation theory).
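
A brief numerical sketch of a few of these topics; the example matrix and the NumPy/SciPy routines are assumptions for illustration, not part of the article:

    import numpy as np
    from scipy.linalg import expm, logm

    # An arbitrary 2x2 real matrix for illustration.
    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    # Operations defined on matrices.
    S = A + A          # matrix addition
    M = A @ A          # matrix multiplication

    # Functions of matrices.
    E = expm(A)        # matrix exponential, defined via its power series
    L = logm(E)        # matrix logarithm; recovers A up to rounding here

    # Eigenvalues of the matrix (see the section below).
    print(np.linalg.eigvals(A))  # approximately [5., 2.]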

Matrix spaces

The set of all m×n matrices over a field F, denoted in this article Mmn(F), forms a vector space. Examples of F include the field of rational numbers ℚ, the real numbers ℝ, and the complex numbers ℂ. The spaces Mmn(F) and Mpq(F) are different spaces if m ≠ p or n ≠ q; for instance M32(F) ≠ M23(F). Two m×n matrices A and B in Mmn(F) can be added together to form another matrix in the space Mmn(F):

A + B ∈ Mmn(F)

and multiplied by a number α in F, to obtain another matrix in Mmn(F):

αA ∈ Mmn(F)

Combining these two properties, a linear combination of matrices A and B in Mmn(F) is another matrix in Mmn(F):

αA + βB ∈ Mmn(F)

where α and β are numbers in F.
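
A minimal sketch of these vector-space operations with NumPy (the matrices and scalars below are assumptions chosen for illustration):

    import numpy as np

    # Two matrices in M23(R), i.e. 2x3 real matrices.
    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])
    B = np.array([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0]])
    alpha, beta = 2.0, -1.0

    # Addition, scalar multiplication, and linear combination
    # all stay inside the same space of 2x3 matrices.
    C = alpha * A + beta * B
    assert C.shape == A.shape == (2, 3)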

Determinants

The determinant of a square matrix is an important quantity. The determinant indicates whether a matrix is invertible: the inverse of a matrix exists if and only if the determinant is nonzero. Determinants are used for finding the eigenvalues of matrices (see below), and for solving systems of linear equations (see Cramer's rule).
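
The invertibility test and Cramer's rule can be checked numerically; a minimal sketch (the 2×2 system is an assumption chosen for illustration):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([5.0, 10.0])

    d = np.linalg.det(A)
    assert abs(d) > 1e-12  # nonzero determinant: A is invertible

    # Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with
    # column i replaced by b.
    x = np.empty(2)
    for i in range(2):
        Ai = A.copy()
        Ai[:, i] = b
        x[i] = np.linalg.det(Ai) / d

    assert np.allclose(A @ x, b)  # x solves the linear system
    print(x)  # [1., 3.]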

Eigenvalues and eigenvectors of matrices

Definitions

An n×n matrix A has eigenvectors x and eigenvalues λ defined by the relation:

Ax = λx

In words, multiplying the matrix A by the eigenvector x (here an n-dimensional column matrix) gives the same result as multiplying x by the number λ. An n×n matrix has n eigenvalues, counted with multiplicity, over the complex numbers. The eigenvalues are the roots of the characteristic polynomial, i.e. the solutions of:

det(A − λI) = 0

where I is the n×n identity matrix.

The roots of the characteristic polynomial may all be distinct, or some may coincide (in which case the corresponding eigenvalue has multiplicity greater than one). After solving for the eigenvalues, the eigenvectors corresponding to each eigenvalue can be found from the defining relation above.
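
A short sketch verifying the defining relation Ax = λx with NumPy's eigensolver (the example matrix is an assumption):

    import numpy as np

    A = np.array([[2.0, 0.0],
                  [1.0, 3.0]])

    # np.linalg.eig returns the eigenvalues and a matrix whose
    # columns are the corresponding eigenvectors.
    eigvals, eigvecs = np.linalg.eig(A)

    for lam, x in zip(eigvals, eigvecs.T):
        assert np.allclose(A @ x, lam * x)  # A x = lambda x

    print(eigvals)  # the diagonal of this triangular A: 2 and 3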

Perturbations of eigenvalues

Matrix similarity

Two n×n matrices A and B are similar if they are related by a similarity transformation:

B = P⁻¹AP

where P is an invertible n×n matrix. The matrix P is called a similarity matrix. Similar matrices represent the same linear map with respect to different bases, and therefore have the same eigenvalues.
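
Since similar matrices share eigenvalues, this can be spot-checked numerically; a sketch with matrices chosen arbitrarily for illustration:

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    P = np.array([[1.0, 1.0],
                  [0.0, 1.0]])  # any invertible matrix will do

    B = np.linalg.inv(P) @ A @ P  # the similarity transformation

    assert np.allclose(np.sort(np.linalg.eigvals(A)),
                       np.sort(np.linalg.eigvals(B)))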

Unitary similarity

Canonical forms

Row echelon form

Jordan normal form

Weyr canonical form

Frobenius normal form

Triangular factorization

LU decomposition

LU decomposition factors a matrix into the matrix product of a lower triangular matrix and an upper triangular matrix.
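
A minimal sketch with SciPy's LU routine (the example matrix is an assumption; scipy.linalg.lu also returns a row-permutation matrix P, used for numerical stability, so it factors A = PLU):

    import numpy as np
    from scipy.linalg import lu

    A = np.array([[4.0, 3.0],
                  [6.0, 3.0]])

    P, L, U = lu(A)  # permutation, lower triangular, upper triangular

    assert np.allclose(A, P @ L @ U)
    assert np.allclose(L, np.tril(L))  # L is lower triangular
    assert np.allclose(U, np.triu(U))  # U is upper triangular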

Matrix norms

Since matrices form vector spaces, one can lay down axioms (analogous to those of vector norms) to define the "size" of a particular matrix. The norm of a nonzero matrix is a positive real number; the zero matrix has norm zero.

Definition and axioms

For all matrices A and B in Mmn(F), and all numbers α in F, a matrix norm, delimited by double vertical bars || ... ||, fulfills:[note 1]

  • Non-negativity:

||A|| ≥ 0

with equality only for A = 0, the zero matrix.
  • Scalar multiplication:

||αA|| = |α| ||A||

  • The triangle inequality:

||A + B|| ≤ ||A|| + ||B||
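
These axioms can be spot-checked numerically; a hedged sketch using the Frobenius norm of the next subsection as a concrete instance (the random matrices are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 4))
    B = rng.standard_normal((3, 4))
    alpha = -2.5

    norm = np.linalg.norm  # defaults to the Frobenius norm for matrices

    assert norm(A) > 0 and norm(np.zeros((3, 4))) == 0.0
    assert np.isclose(norm(alpha * A), abs(alpha) * norm(A))
    assert norm(A + B) <= norm(A) + norm(B)  # triangle inequality
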
Frobenius norm

The Frobenius norm is analogous to the dot product of Euclidean vectors: take the squared modulus of every matrix element, add up the results, then take the positive square root:

||A||F = √( Σi Σj |aij|² )

where the sum runs over all entries aij of A.
It is defined for matrices of any dimension (i.e. no restriction to square matrices).
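
A short sketch computing the Frobenius norm both from the definition and with NumPy's built-in routine (the example matrix is an assumption):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])

    # From the definition: squared moduli, summed, then a square root.
    by_hand = np.sqrt(np.sum(np.abs(A) ** 2))

    # NumPy's built-in Frobenius norm.
    built_in = np.linalg.norm(A, 'fro')

    assert np.isclose(by_hand, built_in)
    print(by_hand)  # sqrt(1 + 4 + 9 + 16) = sqrt(30) ≈ 5.477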

Positive definite and semidefinite matrices

Functions

Matrix elements are not restricted to constant numbers; they can also be mathematical variables.

Functions of matrices

A function of a matrix takes a matrix as input and returns something else (a number, a vector, another matrix, etc.); the determinant and the trace are scalar-valued examples.

Matrix-valued functions

A matrix-valued function takes something as input (a number, a vector, a matrix, etc.) and returns a matrix; the rotation matrix R(θ) is a matrix-valued function of the angle θ.
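
A hedged sketch of both notions (the function names and examples are assumptions for illustration): the trace is a function of a matrix returning a number, while a rotation-matrix constructor is a matrix-valued function of an angle:

    import numpy as np

    def trace(A):
        """Function of a matrix: takes a matrix, returns a number."""
        return float(np.sum(np.diag(A)))

    def rotation(theta):
        """Matrix-valued function: takes an angle, returns a 2x2 matrix."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s],
                         [s,  c]])

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    print(trace(A))          # 5.0
    R = rotation(np.pi / 2)  # rotation by 90 degrees
    assert np.allclose(R @ R.T, np.eye(2))  # rotations are orthogonal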

See also

Footnotes

  1. ^ Some authors, e.g. Horn and Johnson, use triple vertical bars instead of double: |||A|||.

References

Notes

  1. ^ R. A. Horn, C. R. Johnson (2012). Matrix Analysis (2nd ed.). Cambridge University Press. ISBN 0-521-83940-8.

Further reading