
Jacobi eigenvalue algorithm


The Jacobi Eigenvalue Method for Symmetric Matrices

Description

For $1 \le k < l \le n$ and an angle $\varphi$ let $G = G(k, l, \varphi)$ denote the matrix with components

$G_{kk} = G_{ll} = \cos\varphi$, $G_{kl} = -\sin\varphi$, $G_{lk} = \sin\varphi$, $G_{ii} = 1$ for $i \ne k, l$, and $G_{ij} = 0$ otherwise.

$G$ describes a rotation with angle $\varphi$ in the plane spanned by the $k$-th and $l$-th unit vectors. We have $G^T G = G G^T = I$, i.e. $G$ is orthogonal.


For a real symmetric matrix $S$ let $S' := G^T S G$. This matrix differs from $S$ only in rows and columns $k$ and $l$, where (writing $c := \cos\varphi$ and $s := \sin\varphi$ for short) $S'_{kj} = S'_{jk} = c\, S_{kj} + s\, S_{jl}$ and $S'_{lj} = S'_{jl} = -s\, S_{kj} + c\, S_{jl}$ for $j \ne k, l$, and $S'_{kk} = c^2 S_{kk} + 2 s c\, S_{kl} + s^2 S_{ll}$, $S'_{ll} = s^2 S_{kk} - 2 s c\, S_{kl} + c^2 S_{ll}$, $S'_{kl} = S'_{lk} = (c^2 - s^2)\, S_{kl} + s c\, (S_{ll} - S_{kk})$.
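These component formulas can be checked numerically. The following minimal Python/NumPy sketch (the test matrix, the indices k, l and the angle are arbitrary choices for illustration, not taken from the article) builds $G$ as defined above, forms $S' = G^T S G$ and compares a few entries of $S'$ with the closed-form expressions:

import numpy as np

n = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
S = (A + A.T) / 2                  # an arbitrary real symmetric test matrix

k, l, phi = 1, 3, 0.7              # arbitrary plane (k < l) and angle
c, s = np.cos(phi), np.sin(phi)

G = np.eye(n)                      # G(k, l, phi) as defined above
G[k, k] = G[l, l] = c
G[k, l], G[l, k] = -s, s

Sp = G.T @ S @ G                   # the rotated matrix S'

j = 0                              # any index with j != k, l
assert np.isclose(Sp[k, j], c*S[k, j] + s*S[j, l])
assert np.isclose(Sp[l, j], -s*S[k, j] + c*S[j, l])
assert np.isclose(Sp[k, k], c**2*S[k, k] + 2*s*c*S[k, l] + s**2*S[l, l])
assert np.isclose(Sp[l, l], s**2*S[k, k] - 2*s*c*S[k, l] + c**2*S[l, l])
assert np.isclose(Sp[k, l], (c**2 - s**2)*S[k, l] + s*c*(S[l, l] - S[k, k]))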

Define $\Sigma(S)^2 := \sum_{i,j=1}^n S_{ij}^2$ (the sum of squares of all components), $\Delta(S)^2 := \sum_{i=1}^n S_{ii}^2$ (the sum of squares of all diagonal components) and $\Gamma(S)^2 := \Sigma(S)^2 - \Delta(S)^2$ (the sum of squares of all off-diagonal components). $\Sigma(S)$ is the Frobenius norm of $S$, and $\Gamma(S)$ is called the off-diagonal norm, since $\Gamma(S) = 0$ if and only if $S$ is diagonal.

The Frobenius norm does not change under orthogonal transformations, so we have $\Sigma(S') = \Sigma(S)$. If we choose $\varphi$ according to $\tan(2\varphi) = \frac{2 S_{kl}}{S_{kk} - S_{ll}}$, then $S'_{kl} = 0$ and one finds $\Gamma(S')^2 = \Gamma(S)^2 - 2 S_{kl}^2$. Thus the off-diagonal norm decreases and hence $S'$ becomes "more" diagonal than $S$. In order to optimize this effect, $S_{kl}$ should be the off-diagonal component with the largest absolute value. Such an element is called a pivot, and the transition from $S$ to $S' = G^T S G$ is called a Jacobi rotation.
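The following minimal Python/NumPy sketch illustrates this choice of angle and the identity $\Gamma(S')^2 = \Gamma(S)^2 - 2 S_{kl}^2$ (the helper name off_norm_sq is ours; the matrix is the one from the Example section below, addressed with 0-based indices):

import numpy as np

def off_norm_sq(S):
    # Gamma(S)^2: the sum of squares of the off-diagonal components
    return np.sum(S**2) - np.sum(np.diag(S)**2)

S = np.array([[  4.0,  -30.0,    60.0,   -35.0],
              [-30.0,  300.0,  -675.0,   420.0],
              [ 60.0, -675.0,  1620.0, -1050.0],
              [-35.0,  420.0, -1050.0,   700.0]])
k, l = 2, 3                        # 0-based indices of the largest off-diagonal element (-1050)

phi = 0.5 * np.arctan2(2*S[k, l], S[k, k] - S[l, l])   # tan(2 phi) = 2 S_kl / (S_kk - S_ll)
c, s = np.cos(phi), np.sin(phi)

G = np.eye(4)
G[k, k] = G[l, l] = c
G[k, l], G[l, k] = -s, s
Sp = G.T @ S @ G

print(Sp[k, l])                                        # approximately 0: the pivot is annihilated
print(off_norm_sq(S) - off_norm_sq(Sp), 2*S[k, l]**2)  # the two numbers agree up to rounding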

The Jacobi eigenvalue method repeatedly performs such Jacobi rotations until the matrix becomes almost diagonal. Then the elements on the diagonal are approximations of the (real) eigenvalues of $S$.

Convergence

If $p = S_{kl}$ is a pivot element, then by definition $|S_{ij}| \le |p|$ for all off-diagonal elements, $1 \le i < j \le n$. Since $S$ has exactly $N := \tfrac{1}{2} n (n - 1)$ off-diagonal elements (counting each symmetric pair once), we have $p^2 \le \Gamma(S)^2 \le 2 N p^2$, i.e. $2 p^2 \ge \Gamma(S)^2 / N$. This implies $\Gamma(S')^2 = \Gamma(S)^2 - 2 p^2 \le (1 - 1/N)\,\Gamma(S)^2$, or $\Gamma(S') \le (1 - 1/N)^{1/2}\,\Gamma(S)$, i.e. the sequence of Jacobi rotations converges at least linearly by a factor $(1 - 1/N)^{1/2}$ to a diagonal matrix.

A number of $N$ Jacobi rotations is called a sweep; let $S^{\sigma}$ denote the result. The previous estimate yields $\Gamma(S^{\sigma}) \le (1 - 1/N)^{N/2}\,\Gamma(S)$, i.e. the sequence of sweeps converges at least linearly with a factor $\approx e^{-1/2}$.
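For illustration: for $n = 10$ there are $N = 45$ off-diagonal pairs, so each rotation reduces the off-diagonal norm by a factor of at least $(1 - 1/45)^{1/2} \approx 0.989$, while a full sweep reduces it by $(1 - 1/45)^{45/2} \approx 0.60$, close to $e^{-1/2} \approx 0.61$.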

However, the following result of Schönhage yields locally quadratic convergence. To this end let $S$ have $m$ distinct eigenvalues $\lambda_1, \dots, \lambda_m$ with multiplicities $\nu_1, \dots, \nu_m$, and let $d > 0$ be the smallest distance between two different eigenvalues. Let us call a number of $N_S := \tfrac{1}{2} n (n - 1) - \sum_{\mu=1}^{m} \tfrac{1}{2} \nu_{\mu} (\nu_{\mu} - 1) \le N$ Jacobi rotations a Schönhage sweep. If $S^{s}$ denotes the result, then $\Gamma(S^{s}) \le \sqrt{\tfrac{n}{2} - 1}\; \frac{\gamma^2}{d - 2\gamma}$, where $\gamma := \Gamma(S)$. Thus convergence becomes quadratic as soon as $\Gamma(S) < d / (2 + \sqrt{\tfrac{n}{2} - 1})$.
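For example, for $n = 4$ the factor $\sqrt{\tfrac{n}{2} - 1}$ equals $1$, so a Schönhage sweep satisfies $\Gamma(S^{s}) \le \gamma^2 / (d - 2\gamma)$ and convergence becomes quadratic as soon as $\Gamma(S) < d/3$.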

Cost

Each Jacobi rotation can be done in $n$ steps when the pivot element $p$ is known. However, the search for $p$ requires inspection of all $N \approx \tfrac{1}{2} n^2$ off-diagonal elements. We can reduce this to $n$ steps too if we introduce an additional index array $m_1, \dots, m_{n-1}$ with the property that $m_i$ is the index of the largest element in row $i$ ($i = 1, \dots, n - 1$) of the current $S$. Then $(k, l)$ must be one of the pairs $(i, m_i)$. Since only columns $k$ and $l$ change, only $m_k$ and $m_l$ must be updated, which again can be done in $n$ steps. Thus each rotation has $O(n)$ cost and one sweep has $O(n^3)$ cost, which is equivalent to one matrix multiplication. Additionally, the $m_i$ must be initialized before the process starts; this can be done in $n^2$ steps.
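A minimal Python sketch of this bookkeeping (0-based indices; maxind mirrors the function of the same name in the pseudocode below, while find_pivot and the use of the matrix from the Example section are our own choices for illustration):

import numpy as np

def maxind(S, i):
    # column index of the largest |S[i, j]| with j > i (row i of the upper triangle)
    return i + 1 + int(np.argmax(np.abs(S[i, i+1:])))

def find_pivot(S, ind):
    # O(n) pivot search: only the n - 1 cached row maxima are compared
    k = max(range(S.shape[0] - 1), key=lambda i: abs(S[i, ind[i]]))
    return k, ind[k]

S = np.array([[  4.0,  -30.0,    60.0,   -35.0],
              [-30.0,  300.0,  -675.0,   420.0],
              [ 60.0, -675.0,  1620.0, -1050.0],
              [-35.0,  420.0, -1050.0,   700.0]])
ind = [maxind(S, i) for i in range(S.shape[0] - 1)]   # O(n^2) initialization
k, l = find_pivot(S, ind)                             # here (k, l) = (2, 3), the entry -1050
# after a rotation in the (k, l) plane only ind[k] and ind[l] need to be recomputed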

Typically the Jacobi method converges within numerical precision after a small number of sweeps.

Algorithm

The following algorithm is a description of the Jacobi method in math-like notation. It calculates a vector $e$ which contains the eigenvalues and a matrix $E$ which contains the corresponding eigenvectors, i.e. $e_i$ is an eigenvalue and the column $E_i$ an orthonormal eigenvector for $e_i$, $i = 1, \dots, n$.

procedure jacobi (S ∈ IR^(n,n); out e ∈ IR^n; out E ∈ IR^(n,n))

var i, k, l, m ∈ IN; s, c, t, p, y, psum ∈ IR; ind ∈ IN^n

function maxind (k ∈ IN) ∈ IN ! index of largest off-diagonal element in row k
begin
  m := k + 1
  for i := k + 2 to n do
    if |S_ki| > |S_km| then m := i endif
  endfor
  return m
endfunc

procedure rotate (k, l, i, j ∈ IN) ! perform rotation of the pair S_kl, S_ij
begin
  (S_kl, S_ij) := (c S_kl + s S_ij, -s S_kl + c S_ij)
endproc

begin
  ! init e, E and index array ind
  E := I_n; psum := 0.0
  for k := 1 to n do e_k := S_kk endfor
  for k := 1 to n - 1 do ind_k := maxind(k) endfor

  repeat ! next rotation
    ! find index (k, l) of pivot p
    m := 1
    for k := 2 to n - 1 do
      if |S_{k ind_k}| > |S_{m ind_m}| then m := k endif
    endfor
    k := m; l := ind_m; p := S_kl
    if psum + |p| = psum then exit endif ! pivot negligible compared to psum: stop
    psum := psum + |p|

    ! calculate c = cos φ, s = sin φ
    y := (e_k - e_l) / 2; t := |y| + sqrt(p² + y²)
    s := sqrt(p² + t²); c := t / s; s := p / s; t := p² / t
    if y < 0.0 then s := -s; t := -t endif

    ! update S_kl and e
    S_kl := 0.0; e_k := e_k + t; e_l := e_l - t

    ! rotate rows and columns k and l
    for i := 1 to k - 1 do rotate(i, k, i, l) endfor
    for i := k + 1 to l - 1 do rotate(k, i, i, l) endfor
    for i := l + 1 to n do rotate(k, i, l, i) endfor

    ! rotate eigenvectors
    for i := 1 to n do (E_ik, E_il) := (c E_ik + s E_il, -s E_ik + c E_il) endfor

    ! rows k, l have changed, update ind_k, ind_l
    ind_k := maxind(k); ind_l := maxind(l)
  loop
endproc
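For concreteness, here is a direct Python translation of the pseudocode above. It is only a sketch under the stated conventions (0-based indices, plain nested lists, the same psum stopping rule) and, as described in the Notes below, it overwrites the upper triangle of S:

import math

def jacobi(S):
    # Jacobi eigenvalue iteration for a real symmetric matrix S (list of lists, n >= 2).
    # The upper triangle of S is overwritten.  Returns (e, E) where e[i] is an
    # eigenvalue and column i of E an orthonormal eigenvector for it.
    n = len(S)

    def maxind(k):
        # column index of the largest off-diagonal element in row k of the upper triangle
        m = k + 1
        for i in range(k + 2, n):
            if abs(S[k][i]) > abs(S[k][m]):
                m = i
        return m

    def rotate(k, l, i, j):
        # apply the current plane rotation to the pair (S_kl, S_ij)
        S[k][l], S[i][j] = c * S[k][l] + s * S[i][j], -s * S[k][l] + c * S[i][j]

    e = [S[k][k] for k in range(n)]                     # running eigenvalue approximations
    E = [[float(i == j) for j in range(n)] for i in range(n)]
    ind = [maxind(k) for k in range(n - 1)]             # cached row maxima of the upper triangle
    psum = 0.0

    while True:
        # find index (k, l) of the pivot p
        m = 0
        for k in range(1, n - 1):
            if abs(S[k][ind[k]]) > abs(S[m][ind[m]]):
                m = k
        k, l = m, ind[m]
        p = S[k][l]
        if psum + abs(p) == psum:                       # pivot negligible compared to psum: stop
            break
        psum += abs(p)

        # calculate c = cos(phi), s = sin(phi)
        y = (e[k] - e[l]) / 2.0
        d = abs(y) + math.sqrt(p * p + y * y)
        r = math.sqrt(p * p + d * d)
        c, s, t = d / r, p / r, p * p / d
        if y < 0.0:
            s, t = -s, -t

        # update S_kl and e
        S[k][l] = 0.0
        e[k] += t
        e[l] -= t

        # rotate rows and columns k and l (upper triangle only)
        for i in range(0, k):
            rotate(i, k, i, l)
        for i in range(k + 1, l):
            rotate(k, i, i, l)
        for i in range(l + 1, n):
            rotate(k, i, l, i)

        # rotate the eigenvectors
        for i in range(n):
            E[i][k], E[i][l] = c * E[i][k] + s * E[i][l], -s * E[i][k] + c * E[i][l]

        # rows k and l have changed: refresh their cached maxima
        ind[k] = maxind(k)
        if l < n - 1:
            ind[l] = maxind(l)

    return e, E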


Notes

The absolute values of the pivots are summed in the variable psum. The iteration stops when the current pivot is numerically small compared to psum; it is unlikely that further iterations would produce better approximations. The statement psum := psum + |p| must be executed with rounding to the nearest floating point number.

The upper triangle of the matrix S is destroyed while the lower triangle and the diagonal are unchanged. Thus it is possible to restore S if necessary according to

for k := 1 to n - 1 do for l := k + 1 to n do S_kl := S_lk endfor endfor

The eigenvalues are not necessarily in descending order. Descending order can be achieved by a simple sorting algorithm:

for k := 1 to n - 1 do
  m := k
  for l := k + 1 to n do
    if e_l > e_m then m := l endif
  endfor
  if k ≠ m then swap e_m, e_k; swap E_m, E_k endif
endfor

Example

Let

$S = \begin{pmatrix} 4 & -30 & 60 & -35 \\ -30 & 300 & -675 & 420 \\ 60 & -675 & 1620 & -1050 \\ -35 & 420 & -1050 & 700 \end{pmatrix}$

Then jacobi produces the following eigenvalues and eigenvectors after 3 sweeps (19 iterations):

$e_1 = 2585.25381092892231$,
$E_1 = (0.0291933231647860588, -0.328712055763188997, 0.791411145833126331, -0.514552749997152907)$

$e_2 = 37.1014913651276582$,
$E_2 = (-0.179186290535454826, 0.741917790628453435, -0.100228136947192199, -0.638282528193614892)$

$e_3 = 1.4780548447781369$,
$E_3 = (-0.582075699497237650, 0.370502185067093058, 0.509578634501799626, 0.514048272222164294)$

$e_4 = 0.1666428611718905$,
$E_4 = (0.792608291163763585, 0.451923120901599794, 0.322416398581824992, 0.252161169688241933)$
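Running the Python sketch given after the pseudocode (our illustrative jacobi function, not part of the original article) on this matrix reproduces these values up to rounding, up to the ordering of the eigenvalues and up to the signs of the eigenvectors:

S = [[  4.0,  -30.0,    60.0,   -35.0],
     [-30.0,  300.0,  -675.0,   420.0],
     [ 60.0, -675.0,  1620.0, -1050.0],
     [-35.0,  420.0, -1050.0,   700.0]]

e, E = jacobi(S)
order = sorted(range(4), key=lambda i: -e[i])       # list the eigenvalues in descending order
for i in order:
    print(e[i], [E[j][i] for j in range(4)])        # eigenvalue and its eigenvector (column i of E)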

References

1. C. G. J. Jacobi, Über ein leichtes Verfahren, die in der Theorie der Säkularstörungen vorkommenden Gleichungen numerisch aufzulösen, Crelle's Journal 30 (1846), 51-94.
2. A. Schönhage, Zur quadratischen Konvergenz des Jacobi-Verfahrens, Numer. Math. 6 (1964), 410-412.
3. H. Rutishauser, The Jacobi Method for Real Symmetric Matrices, Numer. Math. 9 (1966), 1-10.