
Talk:Lanczos algorithm

Old comments that did not have a section of their own

[[1]] - an unfocused variety of Lanczos algorithm —Preceding unsigned comment added by 134.219.166.104 (talkcontribs) 21:23, 1 September 2005

This doesn't have much, but it does have a reference to a book: MathWorld on Lanczos Algorithm. —Preceding unsigned comment added by RJFJR (talkcontribs) 23:36, 25 September 2005

I don't believe this is the Lanczos algorithm at all. It is the power method. —Preceding unsigned comment added by 130.126.55.123 (talkcontribs) 01:04, 5 August 2006

I don't know if the algorithm is correct, but it's certainly different from the power method, and presented pretty clearly. I think it's gotten me on the right track at least... Thanks. --Jjdonald (talk) 22:22, 17 December 2007 (UTC)[reply]

It is not easy to say whether it's wrong or correct, since quite a bit of the information needed to apply it is missing: (a) how to choose v[1], (b) how to choose m, (c) how to recognize the eigenvalues of A among those of T_mm. Unfortunately, this vagueness is by no means eliminated by the Numerical stability section. — MFH:Talk 21:57, 12 September 2008 (UTC)[reply]
It is certainly not completely correct: there's at least something faulty with the indices. — MFH:Talk 19:59, 8 December 2011 (UTC)[reply]

It should state that "it applies to Hermitian matrices" at the start of the article and not somewhere in the middle. limweizhong (talk) 09:54, 11 November 2008 (UTC)[reply]

There is a paper about Non-Symmetric Lanczos' algorithm (compared to Arnoldi) by Jane Cullum. — MFH:Talk 20:07, 8 December 2011 (UTC)[reply]

In Latent Semantic Indexing, for...

I really think that this sentence has no place in the first paragraph! Please, someone who understands anything about it should create a separate section and explain what this is about! Alain Michaud (talk) 16:52, 19 February 2010 (UTC)[reply]

Block Lanczos algorithm

I suppose that Peter Montgomery's 1995 paper was very good, but I do not see the need to inform everyone about its existence. This topic is much too advanced to be discussed at the top of the page. Please move this (second paragraph) towards the end of the page.

Alain Michaud (talk) 16:50, 19 February 2010 (UTC)[reply]

Extracting information from tridiagonal matrix

So Lanczos gives you a tridiagonal matrix. I think a link would be helpful which explains how to extract low eigenvalues/eigenvectors from this matrix. —Preceding unsigned comment added by 209.6.144.249 (talk) 06:30, 2 March 2008 (UTC)[reply]

Agree - or largest eigenvalues: anyway, the article starts by saying that it's for calculating eigenvalues, but then stops with the tridiag. matrix.
B.t.w., the algorithm calculates up to v[m+1]; I think this could be avoided. (Also, "unrolling" the 1st part of the m=1 case as initialization should make it possible to avoid using v[0].) — MFH:Talk 03:09, 11 September 2008 (UTC)[reply]
PS: it should also be said what 'm' is...
  • Seconded — the article spends a lot of ink on the Lanczos iteration (but could do a better job of explaining it) for producing a tridiagonal matrix, says various other algorithms can be used for calculating eigenvalues and eigenvectors of that tridiagonal matrix, but is almost silent on how the two are related. As far as I can tell, the early steps of the iteration tend to put most of the weight on the extreme eigenvalues (largest and smallest both, regardless of their absolute values), meaning those are fairly accurately reproduced in the tridiagonal matrix, and the algorithm proceeds towards less extreme eigenvalues the longer it is run; it's 'tend' because the initial weight distribution depends on the initial vector, which is chosen at random. What is not clear from mere thought experiments is how concentrated the distribution is … 130.243.68.202 (talk) 14:22, 2 May 2017 (UTC)[reply]

Rewrite of algorithm

Thinking some more about this, I find it desirable to modify the way the algorithm is stated — partly to address the case $\beta_j = 0$, and partly to do something about the "changes to" remark at the end, which is confusing in that no variable assigned in the algorithm is ever changed. It turns out the remark is boilerplate text in the {{algorithm-end}} template, and there is no option to omit it. Since Wikipedia:WikiProject_Computer_science/Manual_of_style#Algorithms_and_data_structures does not recommend using that template, and instead recommends using (numbered) lists with explicit input and output headings, a rewrite from scratch seems in order.

Input a Hermitian matrix $A$ of size $n \times n$, and optionally a number of iterations $m$ (as default, let $m = n$)
  • Strictly speaking, the algorithm does not need access to the explicit matrix, but only a function $v \mapsto A v$ that computes the product of the matrix by an arbitrary vector. This function is called at most $m$ times.
Output an $n \times m$ matrix $V$ with orthonormal columns and a tridiagonal real symmetric matrix $T = V^* A V$ of size $m \times m$. If $m = n$ then $V$ is unitary and $A = V T V^*$.
Warning The Lanczos iteration is prone to numerical instability. When executed in non-exact arithmetic, additional measures (as outlined further down) should be taken to ensure validity of the results.
  1. Let $v_1 \in \mathbb{C}^n$ be an arbitrary vector with Euclidean norm $1$.
  2. Abbreviated initial iteration step:
    1. Let $w_1' = A v_1$.
    2. Let $\alpha_1 = w_1'^* v_1$.
    3. Let $w_1 = w_1' - \alpha_1 v_1$.
  3. For $j = 2, \dots, m$ do:
    1. Let $\beta_j = \| w_{j-1} \|$ (also Euclidean norm).
    2. If $\beta_j \neq 0$ then let $v_j = w_{j-1} / \beta_j$,
      else pick as $v_j$ an arbitrary vector with Euclidean norm $1$ that is orthogonal to all of $v_1, \dots, v_{j-1}$.
    3. Let $w_j' = A v_j$.
    4. Let $\alpha_j = w_j'^* v_j$.
    5. Let $w_j = w_j' - \alpha_j v_j - \beta_j v_{j-1}$.
  4. Let $V$ be the matrix with columns $v_1, \dots, v_m$. Let $T = \begin{pmatrix} \alpha_1 & \beta_2 & & & \\ \beta_2 & \alpha_2 & \beta_3 & & \\ & \beta_3 & \alpha_3 & \ddots & \\ & & \ddots & \ddots & \beta_m \\ & & & \beta_m & \alpha_m \end{pmatrix}$.
Note $A v_j = w_j' = \beta_{j+1} v_{j+1} + \alpha_j v_j + \beta_j v_{j-1}$ for $1 < j < m$ (when $\beta_{j+1} \neq 0$, so that $w_j = \beta_{j+1} v_{j+1}$).
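To make the steps concrete, here is a minimal NumPy sketch of the iteration exactly as stated above; the function name lanczos and the seeded random start vector are illustrative choices, not part of the algorithm statement, and the toy code makes no attempt to counter the instability mentioned in the Warning.

```python
import numpy as np

def lanczos(A, m=None, seed=0):
    """Toy Lanczos iteration: returns V (n x m) and T (m x m) with T = V* A V.

    Follows the numbered steps above; no protection against the loss of
    orthogonality that the Warning refers to.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    m = n if m is None else m                  # default m = n
    V = np.zeros((n, m), dtype=complex)
    alpha = np.zeros(m)                        # diagonal of T
    beta = np.zeros(m)                         # beta[j] holds the math beta_{j+1}; beta[0] unused
    # Step 1: arbitrary vector of Euclidean norm 1.
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    V[:, 0] = v / np.linalg.norm(v)
    # Step 2: abbreviated initial iteration step.
    w = A @ V[:, 0]                            # w_1'
    alpha[0] = np.vdot(V[:, 0], w).real        # alpha_1 (real, since A is Hermitian)
    w = w - alpha[0] * V[:, 0]                 # w_1
    # Step 3: for j = 2, ..., m (0-based index j - 1).
    for j in range(1, m):
        beta[j] = np.linalg.norm(w)            # beta_j
        if beta[j] != 0:
            V[:, j] = w / beta[j]
        else:                                  # breakdown: restart with an orthogonal vector
            v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
            v = v - V[:, :j] @ (V[:, :j].conj().T @ v)
            V[:, j] = v / np.linalg.norm(v)
        w = A @ V[:, j]                        # w_j'
        alpha[j] = np.vdot(V[:, j], w).real    # alpha_j
        w = w - alpha[j] * V[:, j] - beta[j] * V[:, j - 1]   # w_j
    # Step 4: assemble the tridiagonal T from the coefficients.
    T = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(beta[1:], -1)
    return V, T
```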

Not counting the matrix–vector multiplication, each iteration does $O(n)$ arithmetical operations. If $d$ is the average number of nonzero elements in a row of $A$, then the matrix–vector multiplication can be done in $O(dn)$ arithmetical operations. Total complexity is thus $O(dmn)$, or $O(mn^2)$ if $d = n$; the Lanczos algorithm can be really fast for sparse matrices. Schemes for improving numerical stability are typically judged against this high performance.

The vector $w_j'$ is not used after $w_j$ is computed, and the vector $w_j$ is not used after $v_{j+1}$ is computed. Hence one may use the same storage for all three. Likewise, if only the tridiagonal matrix $T$ is sought, then the raw iteration does not need $v_{j-1}$ after having computed $w_j$, although some schemes for improving the numerical stability would need it later on. 130.243.68.122 (talk) 12:39, 24 May 2017 (UTC)[reply]
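The storage observation can be made concrete: when only $T$ is wanted, three live $n$-vectors suffice at any time. A hypothetical sketch (the names lanczos_coefficients and matvec are mine, not from the statement above), assuming no breakdown for brevity:

```python
import numpy as np

def lanczos_coefficients(matvec, v1, m):
    """Diagonal (alpha) and off-diagonal (beta) of T, keeping three n-vectors.

    matvec(v) must return A @ v; v1 must have Euclidean norm 1.
    Assumes all beta are nonzero (no breakdown handling), for brevity.
    """
    v_prev = np.zeros_like(v1)   # v_{j-1}; the zero vector works for j = 1
    v = v1                       # v_j
    b = 0.0                      # beta_j (beta_1 treated as 0)
    alpha, beta = [], []
    for j in range(m):
        w = matvec(v)                      # w_j'
        a = np.vdot(v, w).real             # alpha_j
        alpha.append(a)
        w -= a * v + b * v_prev            # w_j, overwriting the w_j' buffer
        if j + 1 < m:
            b = np.linalg.norm(w)          # beta_{j+1}
            beta.append(b)
            w /= b                         # the same buffer now holds v_{j+1}
            v_prev, v = v, w               # the old v_{j-1} is dropped here
    return np.array(alpha), np.array(beta)
```

With $m \ll n$ this keeps memory at $O(n)$ regardless of $m$, matching the remark that $v_{j-1}$ is only needed until $w_j$ has been computed.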

Application to the eigenproblem

The Lanczos algorithm is most often brought up in the context of finding the eigenvalues and eigenvectors of a matrix, but whereas an ordinary diagonalization of a matrix would make eigenvectors and eigenvalues apparent from inspection, the same is not true for the tridiagonalization performed by the Lanczos algorithm; nontrivial additional steps are needed to compute even a single eigenvalue or eigenvector. Nonetheless, applying the Lanczos algorithm is often a significant step forward in computing the eigendecomposition. First observe that when $T$ is $n \times n$, it is similar to $A$: if $\lambda$ is an eigenvalue of $T$ then it is also an eigenvalue of $A$, and if $T x = \lambda x$ ($x$ is an eigenvector of $T$) then $y = V x$ is the corresponding eigenvector of $A$ (since $A y = A V x = V T V^* V x = V T x = \lambda V x = \lambda y$). Thus the Lanczos algorithm transforms the eigendecomposition problem for $A$ into the eigendecomposition problem for $T$. (A small numerical illustration of this is sketched at the end of this section.)

  1. For tridiagonal matrices, there exist a number of specialised algorithms, often with better computational complexity than general-purpose algorithms. For example, if $T$ is an $m \times m$ tridiagonal symmetric matrix then:
    • The continuant recursion allows computing the characteristic polynomial in $O(m^2)$ operations, and evaluating it at a point in $O(m)$ operations (see the sketch after this list).
    • The divide-and-conquer eigenvalue algorithm can be used to compute the entire eigendecomposition of $T$ in $O(m^2)$ operations.
    • The Fast Multipole Method[1] can compute all eigenvalues in just $O(m \log m)$ operations.
  2. Some general eigendecomposition algorithms, notably the QR algorithm, are known to converge faster for tridiagonal matrices than for general matrices. Asymptotic complexity of tridiagonal QR is $O(m^2)$, just as for the divide-and-conquer algorithm (though the constant factor may be different); since the eigenvectors together have $m^2$ elements, this is asymptotically optimal.
  3. Even algorithms whose convergence rates are unaffected by unitary transformations, such as the power method and inverse iteration, may enjoy low-level performance benefits from being applied to the tridiagonal matrix $T$ rather than the original matrix $A$. Since $T$ is very sparse, with all nonzero elements in highly predictable positions, it permits compact storage with excellent performance vis-à-vis caching. Likewise, $T$ is a real matrix with all eigenvectors and eigenvalues real, whereas $A$ in general may have complex elements and eigenvectors, so real arithmetic is sufficient for finding the eigenvectors and eigenvalues of $T$.
  4. If $n$ is very large, then reducing $m$ so that $T$ is of a manageable size will still allow finding the more extreme eigenvalues and eigenvectors of $A$; in the $m \ll n$ region, the Lanczos algorithm can be viewed as a lossy compression scheme for Hermitian matrices that emphasises preserving the extreme eigenvalues.
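As promised in item 1, a sketch of the $O(m)$ point evaluation via the continuant recursion $p_j(x) = (\alpha_j - x)\,p_{j-1}(x) - \beta_j^2\,p_{j-2}(x)$ with $p_0 = 1$; the function name char_poly_at is illustrative.

```python
def char_poly_at(alpha, beta, x):
    """Evaluate det(T - x I) for symmetric tridiagonal T in O(m) operations.

    alpha: diagonal of T (length m); beta: off-diagonal of T (length m - 1).
    """
    p_prev, p = 1.0, alpha[0] - x                # p_0 and p_1
    for j in range(1, len(alpha)):
        p_prev, p = p, (alpha[j] - x) * p - beta[j - 1] ** 2 * p_prev
    return p                                     # p_m(x) = det(T - x I)
```

Computing the coefficient form of the characteristic polynomial runs the same recursion on coefficient vectors of growing length, which is where the $O(m^2)$ figure comes from; and when all $\beta_j \neq 0$, the sequence $p_0(x), \dots, p_m(x)$ is a Sturm sequence, so counting its sign changes locates eigenvalues by bisection.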

The combination of good performance for sparse matrices and the ability to compute several (without computing all) eigenvalues are the main reasons for choosing to use the Lanczos algorithm.
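The numerical illustration promised above: eigenpairs $(\theta, x)$ of $T$ become Ritz pairs $(\theta, V x)$ for $A$, and with $m \ll n$ it is the extreme eigenvalues that are reproduced well. A self-contained toy sketch (real symmetric case, no breakdown handling or reorthogonalization, so the instability Warning applies; all names are illustrative):

```python
import numpy as np

def lanczos_vt(A, m, rng=np.random.default_rng(1)):
    # Bare-bones Lanczos, real case; assumes no breakdown (beta_j != 0 throughout).
    n = A.shape[0]
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m)
    v = rng.standard_normal(n)
    V[:, 0] = v / np.linalg.norm(v)
    w = A @ V[:, 0]
    alpha[0] = V[:, 0] @ w
    w -= alpha[0] * V[:, 0]
    for j in range(1, m):
        beta[j] = np.linalg.norm(w)
        V[:, j] = w / beta[j]
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j] + beta[j] * V[:, j - 1]
    T = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(beta[1:], -1)
    return V, T

n, m = 500, 40
rng = np.random.default_rng(2)
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                       # real symmetric test matrix
V, T = lanczos_vt(A, m)
theta, X = np.linalg.eigh(T)            # eigenpairs of the small matrix T
Y = V @ X                               # Ritz vectors y = V x
res = np.linalg.norm(A @ Y - Y * theta, axis=0)
exact = np.linalg.eigvalsh(A)
print(theta[-1], exact[-1], res[-1])    # largest Ritz value ~ largest eigenvalue of A
print(theta[0], exact[0], res[0])       # smallest likewise: the extremes converge first
```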

Application to tridiagonalization

Though the eigenproblem is often the motivation for applying the Lanczos algorithm, the operation the algorithm primarily performs is tridiagonalization of a matrix. For that, the numerically stable Householder transformations have been favoured ever since the 1950s, and during the 1960s the Lanczos algorithm was disregarded. Interest in it was rejuvenated by the Kaniel–Paige convergence theory and the development of methods to prevent numerical instability, but the Lanczos algorithm remains the alternative algorithm that one tries only if Householder is not satisfactory.[2]

Aspects in which the two algorithms differ include:

  • Lanczos takes advantage of $A$ being a sparse matrix, whereas Householder does not, and will generate fill-in.
  • Lanczos works throughout with the original matrix $A$ (and has no problem with it being known only implicitly), whereas raw Householder rather wants to modify the matrix during the computation (although that can be avoided).
  • Each iteration of the Lanczos algorithm produces another column of the final transformation matrix $V$, whereas an iteration of Householder rather produces another factor in a unitary factorisation $Q_1 Q_2 \dots Q_n$ of $V$. Each factor is however determined by a single vector, so the storage requirements are the same for both algorithms, and $V = Q_1 Q_2 \dots Q_n$ can be computed in $O(n^3)$ time.
  • Householder is numerically stable, whereas raw Lanczos is not.
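For a concrete point of comparison, the Householder route is what dense libraries implement; for instance SciPy's scipy.linalg.hessenberg reduces a matrix by Householder reflections, and for a symmetric matrix the resulting Hessenberg form is tridiagonal. A quick check (not a benchmark):

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(3)
B = rng.standard_normal((200, 200))
A = (B + B.T) / 2                        # symmetric test matrix
T_h, Q = hessenberg(A, calc_q=True)      # dense Householder reduction, A = Q T_h Q^T
print(np.allclose(Q @ T_h @ Q.T, A))     # True: a unitary similarity
print(np.max(np.abs(np.triu(T_h, 2))))  # ~1e-14: tridiagonal up to roundoff
```

Unlike Lanczos, this touches every entry of $A$ and costs $O(n^3)$ regardless of sparsity, which is the fill-in point in the first bullet above.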

130.243.68.122 (talk) 14:46, 26 May 2017 (UTC)[reply]

Define variables

It would be nice if variables were defined before (or just after) being used. For example, at the beginning, $v_1$ and $m$ are not defined, and it's confusing for the non-expert public.

Felipebm (talk) 13:34, 17 May 2011 (UTC)[reply]

problematic matrix decomposition

In the section "Power method for finding eigenvalues", the matrix A is represented as $A = U D U^*$, which is true only for normal matrices. For the general case, SVD decomposition should be used, i.e. $A = U D V^*$ where $U$ and $V$ are some orthogonal matrices. — Preceding unsigned comment added by 89.139.52.157 (talk) 12:14, 24 April 2016 (UTC)[reply]

It's not stated explicitly at that point, but presumably $A$ is already taken to be Hermitian (as it needs to be for the Lanczos algorithm to work), which means it has an eigendecomposition of the form stated. Instead using the SVD decomposition in this argument won't work, because the entire point is that $U^{-1} U = I$, so that the product $A^k = U D^k U^{-1}$ telescopes! Possibly it would be clearer to just use $A = U D U^{-1}$, i.e., hold off on requiring orthogonality, the reason being that the paragraph in question is about the plain power method, which applies in greater generality. 130.243.68.202 (talk) 13:01, 2 May 2017 (UTC)[reply]
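To illustrate the telescoping argument in code: for diagonalizable $A = U D U^{-1}$, powers satisfy $A^k = U D^k U^{-1}$, so repeated multiplication amplifies the dominant eigendirection. A minimal power-method sketch (a positive semi-definite test matrix is chosen so that the dominant eigenvalue is the largest one; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((50, 50))
A = B @ B.T                             # symmetric PSD: dominant eigenvalue = largest
x = rng.standard_normal(50)
for _ in range(200):                    # power iteration: x_k ~ A^k x_0 / ||A^k x_0||
    x = A @ x
    x /= np.linalg.norm(x)
lam = x @ A @ x                         # Rayleigh quotient estimate of the eigenvalue
print(lam, np.linalg.eigvalsh(A)[-1])   # the two should agree to many digits
```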
  1. ^ Coakley, Ed S.; Rokhlin, Vladimir (2013). "A fast divide-and-conquer algorithm for computing the spectra of real symmetric tridiagonal matrices". Applied and Computational Harmonic Analysis. 34: 379–414. doi:10.1016/j.acha.2012.06.003.
  2. ^ Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Baltimore: Johns Hopkins Univ. Press. ISBN 0-8018-5413-X.