Talk:Lanczos algorithm
Old comments that did not have a section of their own
[1] - an unfocused variety of Lanczos algorithm —Preceding unsigned comment added by 134.219.166.104 (talk • contribs) 21:23, 1 September 2005
This doesn't have much, but it does have a reference to a book: MathWorld on Lanczos Algorithm. —Preceding unsigned comment added by RJFJR (talk • contribs) 23:36, 25 September 2005
I don't believe this is the Lanczos algorithm at all. It is the power method. —Preceding unsigned comment added by 130.126.55.123 (talk • contribs) 01:04, 5 August 2006
I don't know if the algorithm is correct, but it's certainly different from the power method, and presented pretty clearly. I think it's gotten me on the right track at least... Thanks. --Jjdonald (talk) 22:22, 17 December 2007 (UTC)
- It is not easy to say whether it is wrong or correct, since quite a lot of the information needed in order to apply it is missing: (a) how to choose v[1], (b) how to choose m, (c) how to recognize the eigenvalues of A among those of T_mm. Unfortunately, this vagueness is by no means eliminated by the Numerical stability section. — MFH:Talk 21:57, 12 September 2008 (UTC)
- It is certainly not completely correct: there's at least something faulty with the indices. — MFH:Talk 19:59, 8 December 2011 (UTC)
It should state that "it applies to Hermitian matrices" at the start of the article and not somewhere in the middle. limweizhong (talk) 09:54, 11 November 2008 (UTC)
- There is a paper about Non-Symmetric Lanczos' algorithm (compared to Arnoldi) by Jane Cullum. — MFH:Talk 20:07, 8 December 2011 (UTC)
In Latent Semantic Indexing, for...
I really think that this sentence has no place in the first paragraph! Please, someone who understands this topic should create a separate section and explain what it is about! Alain Michaud (talk) 16:52, 19 February 2010 (UTC)
Block Lanczos algorithm
I suppose that Peter Montgomery's 1995 paper was very good, but I do not see the need to inform everyone about its existence. This topic is much too advanced to be discussed at the top of the page. Please move this (second paragraph) towards the end of the page.
Alain Michaud (talk) 16:50, 19 February 2010 (UTC)
Extracting information from tridiagonal matrix
So Lanczos gives you a tridiagonal matrix. I think a link would be helpful which explains how to extract low eigenvalues/eigenvectors from this matrix. —Preceding unsigned comment added by 209.6.144.249 (talk) 06:30, 2 March 2008 (UTC)
- Agree - or largest eigenvalues: anyway, the article starts by saying that it's for calculating eigenvalues, but then stops with the tridiag. matrix.
- B.t.w., the algorithm calculates up to v[m+1]; I think this could be avoided. (Also, "unrolling" the 1st part of the m=1 case as initialization should make it possible to avoid using v[0].) — MFH:Talk 03:09, 11 September 2008 (UTC)
- PS: it should also be said what 'm' is...
- Seconded: the article spends a lot of ink on the Lanczos iteration (but could do a better job of explaining it) for producing a tridiagonal matrix, says various other algorithms can be used for calculating eigenvalues and eigenvectors of that tridiagonal matrix, but is almost silent on how the two are related. As far as I can tell, the early steps of the iteration tend to put most of the weight on the extreme eigenvalues (largest and smallest both, regardless of their absolute values), meaning those are fairly accurately reproduced in the tridiagonal matrix, and the algorithm proceeds towards less extreme eigenvalues the longer it is run; 'tend', because the initial weight distribution depends on the initial vector, which is chosen at random. What is not clear from mere thought experiments is how concentrated the distribution is … 130.243.68.202 (talk) 14:22, 2 May 2017 (UTC)
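A quick hypothetical experiment can illustrate the "extreme eigenvalues first" behaviour described above. This sketch reuses the lanczos() function proposed in the "Rewrite of algorithm" section below, plus SciPy's eigh_tridiagonal solver; the matrix size, iteration count, and seed are arbitrary choices:

    import numpy as np
    from scipy.linalg import eigh_tridiagonal

    rng = np.random.default_rng(0)
    n, m = 500, 30
    B = rng.standard_normal((n, n))
    A = (B + B.T) / 2                       # random real symmetric test matrix
    V, alpha, beta = lanczos(A, m)          # sketch from the rewrite section below
    ritz = eigh_tridiagonal(alpha, beta, eigvals_only=True)
    exact = np.linalg.eigvalsh(A)
    # The extreme Ritz values converge first; both gaps shrink rapidly with m:
    print(abs(ritz[0] - exact[0]), abs(ritz[-1] - exact[-1]))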
Rewrite of algorithm
Thinking some more about this, I find it desirable to modify the way the algorithm is stated: partly to address the case $\beta_j = 0$, and partly to do something about the "changes to" remark at the end, which is confusing in that no variable assigned in the algorithm is ever changed. It turns out the remark is boilerplate text in the {{algorithm-end}} template, and there is no option to omit it. Since Wikipedia:WikiProject_Computer_science/Manual_of_style#Algorithms_and_data_structures does not recommend using that template, and instead recommends using (numbered) lists with explicit input and output headings, a rewrite from scratch seems in order.
- Input a Hermitian matrix $A$ of size $n \times n$, and optionally a number of iterations $m$ (as default, let $m = n$).
- Strictly speaking, the algorithm does not need access to the explicit matrix, but only a function $v \mapsto Av$ that computes the product of the matrix by an arbitrary vector. This function is called at most $m$ times.
- Output an $n \times m$ matrix $V$ with orthonormal columns and a tridiagonal real symmetric matrix $T = V^* A V$ of size $m \times m$. If $m = n$, then $V$ is unitary and $A = V T V^*$.
- Warning The Lanczos iteration is prone to numerical instability. When executed in non-exact arithmetic, additional measures (as outlined further down) should be taken to ensure validity of the results.
- Let $v_1 \in \mathbb{C}^n$ be an arbitrary vector with Euclidean norm $1$.
- Abbreviated initial iteration step:
- Let $w_1' = A v_1$.
- Let $\alpha_1 = w_1'^* v_1$.
- Let $w_1 = w_1' - \alpha_1 v_1$.
- For $j = 2, \dots, m$ do:
- Let $\beta_j = \| w_{j-1} \|$ (also Euclidean norm).
- If $\beta_j \neq 0$ then let $v_j = w_{j-1} / \beta_j$,
- else pick $v_j$ as an arbitrary vector with Euclidean norm $1$ that is orthogonal to all of $v_1, \dots, v_{j-1}$.
- Let $w_j' = A v_j$.
- Let $\alpha_j = w_j'^* v_j$.
- Let $w_j = w_j' - \alpha_j v_j - \beta_j v_{j-1}$.
- Let $V$ be the matrix with columns $v_1, \dots, v_m$. Let $T = \begin{pmatrix} \alpha_1 & \beta_2 & & & \\ \beta_2 & \alpha_2 & \beta_3 & & \\ & \beta_3 & \alpha_3 & \ddots & \\ & & \ddots & \ddots & \beta_m \\ & & & \beta_m & \alpha_m \end{pmatrix}$.
- Note $A v_j = w_j' = \beta_{j+1} v_{j+1} + \alpha_j v_j + \beta_j v_{j-1}$ for $1 < j < m$.
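For concreteness, a minimal NumPy sketch of the iteration as stated above. The function name lanczos and the random-restart handling are illustrative, and there is no reorthogonalisation, so the numerical-stability warning applies in full:

    import numpy as np

    def lanczos(A, m, rng=None):
        # Plain Lanczos iteration: returns V (n-by-m, orthonormal columns)
        # and the diagonal alpha / off-diagonal beta of the tridiagonal T.
        if rng is None:
            rng = np.random.default_rng()
        n = A.shape[0]
        V = np.zeros((n, m), dtype=A.dtype)
        alpha = np.zeros(m)
        beta = np.zeros(m - 1)
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)           # v_1: arbitrary vector of norm 1
        V[:, 0] = v
        w = A @ v                        # w'_1
        alpha[0] = np.vdot(w, v).real    # alpha_1 = w'_1* v_1 (real since A is Hermitian)
        w = w - alpha[0] * v             # w_1; w' and w share storage here
        for j in range(1, m):
            b = np.linalg.norm(w)        # beta_{j+1} in the 1-based notation above
            if b != 0:
                v = w / b
            else:
                # Breakdown: pick a random vector orthogonal to v_1, ..., v_j
                v = rng.standard_normal(n)
                v -= V[:, :j] @ (V[:, :j].conj().T @ v)
                v /= np.linalg.norm(v)
            beta[j - 1] = b
            V[:, j] = v
            w = A @ v
            alpha[j] = np.vdot(w, v).real
            w = w - alpha[j] * v - b * V[:, j - 1]
        return V, alpha, beta

Note that the sketch reuses the variable w for both $w_j'$ and $w_j$, in line with the storage remark below.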
Not counting the matrix–vector multiplication, each iteration does $O(n)$ arithmetical operations. If $d$ is the average number of nonzero elements in a row of $A$, then the matrix–vector multiplication can be done in $O(dn)$ arithmetical operations. Total complexity is thus $O(dmn)$, or $O(dn^2)$ if $m = n$; the Lanczos algorithm can be really fast for sparse matrices. Schemes for improving numerical stability are typically judged against this high performance.
The vector $w_j'$ is not used after $w_j$ is computed, and the vector $w_j$ is not used after $v_{j+1}$ is computed. Hence one may use the same storage for all three. Likewise, if only the tridiagonal matrix $T$ is sought, then the raw iteration does not need $v_{j-1}$ after having computed $w_j$, although some schemes for improving the numerical stability would need it later on. 130.243.68.122 (talk) 12:39, 24 May 2017 (UTC)
Application for eigendecomposition
The Lanczos algorithm is most often brought up in the context of finding the eigenvalues and eigenvectors of a matrix, but whereas an ordinary diagonalization of a matrix would make eigenvectors and eigenvalues apparent from inspection, the same is not true for the tridiagonalization performed by the Lanczos algorithm; nontrivial additional steps are needed to compute even a single eigenvalue or eigenvector. Nonetheless, applying the Lanczos algorithm is often a significant step forward in computing the eigendecomposition. First observe that when $T$ is $n \times n$, it is similar to $A$: if $\lambda$ is an eigenvalue of $T$ then it is also an eigenvalue of $A$, and if $T x = \lambda x$ ($x$ is an eigenvector of $T$) then $y = V x$ is the corresponding eigenvector of $A$ (since $A y = A V x = V T V^* V x = V T x = \lambda V x = \lambda y$). Thus the Lanczos algorithm transforms the eigendecomposition problem for $A$ into the eigendecomposition problem for $T$.
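As a hedged illustration of that reduction, reusing V, alpha, beta from the lanczos() sketch in the previous section (eigh_tridiagonal is SciPy's solver for symmetric tridiagonal eigenproblems; the names theta, S, Y are arbitrary):

    from scipy.linalg import eigh_tridiagonal

    theta, S = eigh_tridiagonal(alpha, beta)  # T S = S diag(theta)
    Y = V @ S    # columns of Y approximate eigenvectors of A

When $m = n$ the pairs $(\theta_i, Y_{:,i})$ are, up to rounding, exactly the eigenpairs of $A$; for $m < n$ they are the Ritz approximations.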
- For tridiagonal matrices, a number of specialised algorithms exist, often with better computational complexity than general-purpose algorithms. For example, if $T$ is an $m \times m$ tridiagonal symmetric matrix then:
- The continuant recursion allows computing the characteristic polynomial in $O(m^2)$ operations, and evaluating it at a point in $O(m)$ operations (see the sketch after this list).
- The divide-and-conquer eigenvalue algorithm can be used to compute the entire eigendecomposition of $T$ in $O(m^2)$ operations.
- The Multiple Relatively Robust Representations (MRRR)[1] method allows computing all eigenvalues and eigenvectors of $T$ in as little as $O(m^2)$ operations.
- Some general eigendecomposition algorithms, notably the QR algorithm, are known to converge faster for tridiagonal matrices than for general matrices.
- Even algorithms whose convergence rates are unaffected by unitary transformations, notably the power method, may in practice benefit from being applied to the tridiagonal matrix $T$ rather than the original matrix $A$. Since $T$ is very sparse with all nonzero elements in highly predictable positions, it permits compact storage with excellent performance with respect to caching. Likewise, $T$ is a real matrix with all eigenvectors and eigenvalues real, whereas $A$ in general may have complex elements and eigenvectors, so real arithmetic is sufficient for finding the eigenvectors and eigenvalues of $T$.
- If $n$ is very large, then reducing $m$ so that $T$ is of a manageable size will still allow finding the more extreme eigenvalues and eigenvectors of $A$; in the $m \ll n$ region, the Lanczos algorithm can be viewed as a lossy compression scheme for Hermitian matrices that emphasises preserving the extreme eigenvalues.
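A minimal sketch of the continuant recursion from the first item in the list above, with alpha and beta as in the lanczos() sketch; evaluating $\det(T - xI)$ at one point costs $O(m)$ operations:

    def charpoly_at(alpha, beta, x):
        # Three-term continuant recursion for det(T - x*I), where T is the
        # symmetric tridiagonal matrix with diagonal alpha, off-diagonal beta:
        # p_j = (alpha_j - x) p_{j-1} - beta_j^2 p_{j-2}, with p_0 = 1.
        p_prev, p = 1.0, alpha[0] - x
        for j in range(1, len(alpha)):
            p_prev, p = p, (alpha[j] - x) * p - beta[j - 1] ** 2 * p_prev
        return p

For instance, charpoly_at(alpha, beta, theta[0]) should be (numerically) zero for any eigenvalue theta[0] of $T$.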
Good performance for sparse matrices and the ability to compute several (without computing all) eigenvalues are the main reasons for choosing to use the Lanczos algorithm. 130.243.68.122 (talk) 18:21, 24 May 2017 (UTC)
Define variables
It would be nice if variables were defined before (or just after) being used. For example, at the beginning, $v_1$ and $m$ are not defined, and it's confusing for the non-expert public.
Felipebm (talk) 13:34, 17 May 2011 (UTC)
problematic matrix decomposition
In the section "Power method for finding eigenvalues", the matrix $A$ is represented as $A = U D U^*$, which is true only for normal matrices. For the general case, the SVD decomposition should be used, i.e. $A = U D V^*$, where $U$ and $V$ are some orthogonal matrices. — Preceding unsigned comment added by 89.139.52.157 (talk) 12:14, 24 April 2016 (UTC)
- It's not stated explicitly at that point, but presumably $A$ is already taken to be Hermitian (as it needs to be for the Lanczos algorithm to work), which means it has an eigendecomposition of the form stated. Instead using the SVD decomposition in this argument won't work, because the entire point is that $U^* U = I$, so that the product telescopes! Possibly it would be clearer to just use $A = U D U^{-1}$, i.e., hold off on requiring orthogonality; the reason being that the paragraph in question is about the plain power method, which applies in greater generality. 130.243.68.202 (talk) 13:01, 2 May 2017 (UTC)
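To spell out the telescoping referred to above, assuming $U$ unitary ($U^* U = I$):

    $A^k = (U D U^*)^k = U D (U^* U) D (U^* U) \cdots (U^* U) D U^* = U D^k U^*$

whereas with an SVD $A = U D V^*$ the inner factors $V^* U$ do not cancel, so no comparable closed form for $A^k$ arises.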
- ^ This probably refers to the method described in: Dhillon, Inderjit S.; Parlett, Beresford N. (2004). "Multiple representations to compute orthogonal eigenvectors of symmetric tridiagonal matrices". Linear Algebra and its Applications. 387: 1–28. doi:10.1016/j.laa.2003.12.028. ISSN 0024-3795.