Talk:Eigenvalue algorithm
Merge
As a learner, I think it is a good idea to merge them. On the other hand, the "algorithm" here is not really an algorithm; it would be better rephrased as "calculation of eigenvalues"... —The preceding unsigned comment was added by 202.81.251.173 (talk) 09:22, 17 November 2006
I agree. I just think there should be a way to merge the discussion on a merger; it seems like an obvious thing to overlook. --ANONYMOUS COWARD0xC0DE 07:44, 25 November 2006 (UTC)
As per my arrogant declaration all further discussion regarding this merge shall take place on Talk:Symbolic_computation_of_matrix_eigenvalues, simply because there have been more comments made there. --ANONYMOUS COWARD0xC0DE 08:06, 23 December 2006 (UTC)
Character set
There are characters used on this page that my browser (Firefox 1.5) does not display. Shouldn't we be sticking to standard character sets? --jdege 11:37, 25 April 2007 (UTC)
Computational complexity?
What is the computational complexity (in the big-O sense) of these eigenvalue algorithms? —Ben FrantzDale (talk) 18:10, 6 December 2008 (UTC)
Eigenvalues of a Symmetric 3x3 Matrix
Hi, does anyone know a derivation (or a proof) of the symmetric 3x3 algorithm (perhaps the source it came from)? I would also suggest writing  instead of , so that it works even for identity matrices. --134.102.204.123 (talk) 11:01, 28 July 2009 (UTC)
- If someone is interested, I found a derivation here: [1] --134.102.204.123 (talk) 15:55, 28 July 2009 (UTC)
Some questions/suggestions about the algorithm:
- afaik, acos will never result in values < 0, so the if-clause is redundant.
- For the Matlab implementation one can write p=norm(K,'fro')^2/6; instead of the ridiculous loop (alternatively: p=sum(K(:).^2)/6, which computes the same sum of squares)
- Is the python code really significant? I hardly see a difference from the Matlab implementation (except for syntax)
- Why wouldn't this algorithm work for singular matrices? The only problem I see is the case of a scaled identity matrix (which makes K=zeros(3) and p=0).
--92.76.248.104 (talk) 04:57, 11 June 2012 (UTC)
QR algorithm redundancy
In the section "Advanced methods" the QR algorithm is mentioned twice. Is that how it should be?
Mortense (talk) 19:40, 12 October 2009 (UTC)
The section "Power iteration" is obviously wrong
If the dominant eigenvalues are complex, there is no chance that the "power iteration algorithm" will converge to an eigenvector. I am making an obvious correction now (by saying the algorithm converges *on the condition* that there is a dominant eigenvalue). This is a very major issue: for a matrix with real coefficients, if the eigenvalue of largest magnitude is complex, then its complex conjugate is necessarily also an eigenvalue, and the conjugate clearly has the same magnitude, so the "power iteration" algorithm will fail in this completely generic case.
This section needs considerably more work! Tition1 (talk) 01:22, 12 September 2011 (UTC)
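For illustration, the algorithm with the convergence condition above can be sketched as follows (a Python/NumPy sketch; the function name, iteration count, and seed are illustrative choices, not from the article):

```python
import numpy as np

def power_iteration(A, num_iters=500, seed=0):
    """Power iteration sketch: converges to the dominant eigenpair
    only on the condition that one eigenvalue strictly dominates
    in magnitude."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    for _ in range(num_iters):
        x = A @ x
        x /= np.linalg.norm(x)          # normalize to avoid overflow
    # Rayleigh quotient estimate of the dominant eigenvalue.
    lam = x @ A @ x / (x @ x)
    return lam, x
```

For a matrix like [[0, -1], [1, 0]], whose eigenvalues ±i have equal magnitude, the iterates never settle, which is exactly the failure mode described above.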
- (Note that in practice the case of two conjugate eigenvalues is easily handled. e.g. if you take two subsequent iterations, the vectors generally span the two conjugate eigenvectors so you can solve a small 2×2 eigenproblem with them to get the two eigenvalues and eigenvectors. — Steven G. Johnson (talk) 01:16, 27 October 2011 (UTC))
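A sketch of that fix (assuming NumPy; the function name and defaults are illustrative): after iterating, two consecutive iterates are used to span the dominant two-dimensional invariant subspace, and the projected 2×2 eigenproblem is solved to recover the conjugate pair.

```python
import numpy as np

def dominant_conjugate_pair(A, num_iters=200, seed=1):
    """Recover a dominant complex-conjugate eigenvalue pair of a real
    matrix A from power iteration (sketch of the fix described above)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    for _ in range(num_iters):
        x = A @ x
        x /= np.linalg.norm(x)      # stays in the dominant subspace
    # Two consecutive iterates generically span the subspace of the
    # two dominant (conjugate) eigenvectors.
    V = np.column_stack([x, A @ x])
    Q, _ = np.linalg.qr(V)          # orthonormal basis for span(V)
    small = Q.T @ A @ Q             # 2x2 projection of A
    return np.linalg.eigvals(small)
```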