Symbolic computation of matrix eigenvalues
An important tool for describing eigenvalues of square matrices is the characteristic polynomial: saying that λ is an eigenvalue of A is equivalent to stating that the system of linear equations $(A - \lambda I)v = 0$ (where I is the identity matrix) has a non-zero solution v (namely an eigenvector), which in turn is equivalent to the determinant $\det(A - \lambda I)$ being zero. The function $p(\lambda) = \det(A - \lambda I)$ is a polynomial in λ, since the determinant is a sum of signed products of matrix entries. This is the characteristic polynomial of A: the eigenvalues of a matrix are exactly the zeros of its characteristic polynomial.
(Sometimes the characteristic polynomial is taken to be $\det(\lambda I - A)$ instead, which is the same polynomial when n is even but has the opposite sign when n is odd. This convention has the slight advantage that the leading coefficient is always 1 rather than $(-1)^n$.)
It follows that we can compute all the eigenvalues of a matrix A by solving the equation $p(\lambda) = \det(A - \lambda I) = 0$. If A is an n-by-n matrix, then $p(\lambda)$ has degree n, and A can therefore have at most n eigenvalues. Conversely, the fundamental theorem of algebra says that this equation has exactly n complex roots (zeros), counted with multiplicity. Every real polynomial of odd degree has at least one real root, so for odd n, every real matrix has at least one real eigenvalue. For a real matrix of any dimension, the non-real eigenvalues come in complex conjugate pairs.
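As a minimal illustration in Python, using SymPy for the symbolic determinant (the 2×2 matrix here is an arbitrary example, not one from the text):

```python
import sympy as sp

lam = sp.symbols("lambda")
A = sp.Matrix([[2, 1],
               [1, 2]])                  # arbitrary example matrix

# Characteristic polynomial p(lambda) = det(A - lambda*I)
p = sp.expand((A - lam * sp.eye(2)).det())
print(p)                                 # lambda**2 - 4*lambda + 3

# Its roots, with multiplicities, are the eigenvalues
print(sp.roots(p, lam))                  # {3: 1, 1: 1}
```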
An example of a matrix with no real eigenvalues is the 90-degree rotation

$$R = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},$$

whose characteristic polynomial is $\lambda^2 + 1$, and so its eigenvalues are the pair of complex conjugates i, −i.
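The same computation can be checked symbolically (a short SymPy sketch):

```python
import sympy as sp

R = sp.Matrix([[0, -1],
               [1, 0]])                  # 90-degree rotation
print(R.charpoly().as_expr())            # lambda**2 + 1
print(R.eigenvals())                     # {-I: 1, I: 1}
```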
The Cayley–Hamilton theorem states that every square matrix satisfies its own characteristic polynomial, that is, $p(A) = 0$, where the powers of λ in $p(\lambda)$ are replaced by the corresponding powers of A (and the constant term is multiplied by the identity matrix).
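The theorem is easy to verify for a concrete matrix; a minimal SymPy sketch (the 2×2 matrix is an arbitrary illustration):

```python
import sympy as sp

A = sp.Matrix([[1, 2],
               [3, 4]])                  # arbitrary example
p = A.charpoly()                         # PurePoly(lambda**2 - 5*lambda - 2, ...)

# Evaluate p at A itself: powers of lambda become matrix powers,
# and the constant term multiplies the identity matrix.
p_of_A = sp.zeros(2, 2)
for power, coeff in enumerate(reversed(p.all_coeffs())):
    p_of_A += coeff * A**power

print(p_of_A)                            # Matrix([[0, 0], [0, 0]])
```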
Eigenvalues of 2×2 matrices
An analytic solution for the eigenvalues of 2×2 matrices can be obtained directly from the quadratic formula: if

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$

then the characteristic polynomial is

$$\det\begin{pmatrix} a-\lambda & b \\ c & d-\lambda \end{pmatrix} = \lambda^2 - (a+d)\lambda + (ad - bc)$$

(notice that the coefficients are, up to sign, the trace $a+d$ and determinant $ad-bc$ of A), so the solutions are

$$\lambda = \frac{(a+d) \pm \sqrt{(a+d)^2 - 4(ad-bc)}}{2} = \frac{a+d}{2} \pm \sqrt{\left(\frac{a-d}{2}\right)^2 + bc}.$$
A formula for the eigenvalues of a 3×3 or 4×4 matrix could be derived in an analogous way, using the formulae for the roots of a cubic or quartic equation.
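The 2×2 formula above translates directly into code; a minimal sketch (the function name and the use of a complex square root are illustrative choices):

```python
import cmath

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the quadratic formula."""
    half_trace = (a + d) / 2
    disc = cmath.sqrt(((a - d) / 2) ** 2 + b * c)
    return half_trace + disc, half_trace - disc

print(eigenvalues_2x2(0, -1, 1, 0))      # (1j, -1j): the rotation example above
```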
Example computation
Eigenvalues and eigenvectors can be computed with the following algorithm. Consider an n-by-n matrix A.
- 1. Find the roots of the characteristic polynomial of A. These are the eigenvalues.
- If n distinct roots are found, the matrix can be diagonalized.
- 2. For each eigenvalue λ, find a basis for the kernel of the matrix $A - \lambda I$. These basis vectors are the corresponding eigenvectors (see the sketch after this list).
- Eigenvectors obtained from different eigenvalues are linearly independent.
- Eigenvectors obtained from a root of multiplicity greater than one (a basis of that eigenvalue's eigenspace) are also linearly independent.
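A minimal SymPy sketch of this two-step algorithm, applied to the example matrix of the next subsection (the helper name eigen_pairs is illustrative):

```python
import sympy as sp

def eigen_pairs(A):
    """Return (eigenvalue, eigenspace basis) pairs for a square SymPy matrix."""
    lam = sp.symbols("lambda")
    p = A.charpoly(lam)                      # step 1: characteristic polynomial
    pairs = []
    for root in sp.roots(p.as_expr(), lam):  # its roots are the eigenvalues
        # step 2: a basis of the kernel of (A - root*I) gives the eigenvectors
        kernel = (A - root * sp.eye(A.rows)).nullspace()
        pairs.append((root, kernel))
    return pairs

A = sp.Matrix([[0, 1, -1],
               [1, 1, 0],
               [-1, 0, 1]])                  # the example matrix used below
for value, vectors in eigen_pairs(A):
    print(value, [list(v) for v in vectors])
```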
Let us determine the eigenvalues of the matrix

$$A = \begin{pmatrix} 0 & 1 & -1 \\ 1 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix},$$

which represents a linear operator $\mathbb{R}^3 \to \mathbb{R}^3$.
- Identifying eigenvalues
We first compute the characteristic polynomial of A:

$$p(\lambda) = \det(A - \lambda I) = \det\begin{pmatrix} -\lambda & 1 & -1 \\ 1 & 1-\lambda & 0 \\ -1 & 0 & 1-\lambda \end{pmatrix} = -\lambda^3 + 2\lambda^2 + \lambda - 2.$$

This polynomial factors as $p(\lambda) = -(\lambda - 2)(\lambda - 1)(\lambda + 1)$. Therefore, the eigenvalues of A are 2, 1 and −1.
- Identifying eigenvectors
With the eigenvalues in hand, we can solve sets of simultaneous linear equations to determine the corresponding eigenvectors. For example, one can check that

$$\begin{pmatrix} 0 & 1 & -1 \\ 1 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 2 \\ 2 \\ -2 \end{pmatrix} = 2 \begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix},$$

which confirms that 2 is an eigenvalue of A and gives us a corresponding eigenvector.
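This check is a one-liner in NumPy (a sketch, using the example matrix above):

```python
import numpy as np

A = np.array([[0, 1, -1],
              [1, 1, 0],
              [-1, 0, 1]])
v = np.array([1, 1, -1])
print(A @ v)                             # [ 2  2 -2], i.e. 2 * v
```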
Note that if A is a real matrix, the characteristic polynomial will have real coefficients, but its roots will not necessarily all be real: the non-real eigenvalues come in complex conjugate pairs. For a real matrix, the eigenvectors of a non-real eigenvalue z, which are the solutions of $(A - zI)v = 0$, cannot be real.
If $v_1, \ldots, v_m$ are eigenvectors with pairwise distinct eigenvalues $\lambda_1, \ldots, \lambda_m$, then the vectors $v_1, \ldots, v_m$ are necessarily linearly independent.
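One way to see this numerically is to check the rank of the matrix whose columns are the eigenvectors (a sketch using the example matrix; full rank means linear independence):

```python
import numpy as np

A = np.array([[0, 1, -1],
              [1, 1, 0],
              [-1, 0, 1]])
_, vecs = np.linalg.eig(A)               # columns are eigenvectors
print(np.linalg.matrix_rank(vecs))       # 3, so they are linearly independent
```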
The spectral theorem for symmetric matrices states that if A is a real symmetric n-by-n matrix, then all its eigenvalues are real, and there exist n linearly independent eigenvectors for A which are mutually orthogonal. Symmetric matrices are commonly encountered in engineering.
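NumPy's eigh routine is designed for exactly this symmetric (or complex Hermitian) case; a short sketch using the example matrix, which is symmetric:

```python
import numpy as np

A = np.array([[0, 1, -1],
              [1, 1, 0],
              [-1, 0, 1]])
values, vectors = np.linalg.eigh(A)      # specialized for symmetric matrices
print(values)                            # [-1.  1.  2.], all real
print(np.round(vectors.T @ vectors))     # identity: eigenvectors are orthonormal
```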
Our example matrix from above is symmetric, and three mutually orthogonal eigenvectors of A are

$$v_1 = \begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}, \qquad v_2 = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}, \qquad v_3 = \begin{pmatrix} 2 \\ -1 \\ 1 \end{pmatrix}.$$

These three vectors form a basis of $\mathbb{R}^3$. With respect to this basis, the linear map represented by A takes a particularly simple form: every vector x in $\mathbb{R}^3$ can be written uniquely as

$$x = x_1 v_1 + x_2 v_2 + x_3 v_3,$$

and then we have

$$Ax = 2x_1 v_1 + x_2 v_2 - x_3 v_3.$$
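A short numerical sketch of this diagonal action in the eigenbasis (the test vector x is arbitrary):

```python
import numpy as np

A = np.array([[0, 1, -1],
              [1, 1, 0],
              [-1, 0, 1]])
V = np.array([[1, 0, 2],
              [1, 1, -1],
              [-1, 1, 1]])               # columns are v1, v2, v3

x = np.array([3.0, 0.0, 1.0])            # an arbitrary vector
coeffs = np.linalg.solve(V, x)           # x1, x2, x3 with x = x1*v1 + x2*v2 + x3*v3

# A acts diagonally on the coefficients: (x1, x2, x3) -> (2*x1, x2, -x3)
print(A @ x)
print(V @ (np.array([2, 1, -1]) * coeffs))  # same vector
```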
See also
- For a more general point of view, see Eigenvalue, eigenvector, and eigenspace
- For numerical methods for computing eigenvalues of matrices, see eigenvalue algorithm