Talk:Matrix decomposition
This is the talk page for discussing improvements to the Matrix decomposition article. This is not a forum for general discussion of the article's subject.
Smith normal form
Matrix decompositions are certainly used most heavily in linear algebra, but one can also decompose matrices whose entries do not belong to a field. For example, the computation of the Smith normal form can be rephrased as a matrix decomposition problem over a principal ideal domain. Would a note / example be appropriate?
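For concreteness, a sketch of the statement being proposed (standard, but worth checking against the Smith normal form article before adding):

```latex
% Smith normal form as a matrix decomposition over a PID R:
% for an m-by-n matrix A over R there exist invertible matrices
% U in GL_m(R) and V in GL_n(R) such that
\[
  A = U D V, \qquad D = \operatorname{diag}(d_1, \dots, d_r, 0, \dots, 0),
\]
% where d_1 \mid d_2 \mid \dots \mid d_r. The d_i (the invariant
% factors) are unique up to multiplication by units of R.
```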
Complexity
Shouldn't we also include a discussion of the algorithmic complexity of the various factorizations? —Preceding unsigned comment added by 129.137.25.126 (talk) 17:41, 1 August 2008 (UTC)
- Seems sensible. Please go ahead. -- Jitse Niesen (talk) 18:21, 1 August 2008 (UTC)
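If someone takes this up, the standard leading-order flop counts for dense n-by-n matrices are roughly as follows; these figures should be verified against a reference such as Golub & Van Loan before going into the article:

```latex
\begin{align*}
  \text{LU with partial pivoting:} \quad & \tfrac{2}{3}\, n^3 \\
  \text{Cholesky:}                 \quad & \tfrac{1}{3}\, n^3 \\
  \text{QR (Householder):}         \quad & \tfrac{4}{3}\, n^3 \\
  \text{Eigendecomposition / SVD:} \quad & O(n^3), \text{ with larger constants from the iterative phase}
\end{align*}
```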
LDL decomposition
Came here looking for LDL, and couldn't find it, which seems odd to me. --212.242.167.26 (talk) 08:11, 3 November 2008 (UTC)
- It is now mentioned. See Matrix decomposition#Cholesky decomposition which links to LDL decomposition. JackSchmidt (talk) 13:28, 28 April 2009 (UTC)
"square, symmetric" or just "symmetric"
“Applicable to: square, symmetric, positive definite matrix A”
After considering the reason given for reverting my edit that removed "square" from this statement, I still consider its removal beneficial to the article. The defense, that "some readers may not notice that symmetric implies square, so the redundancy makes it clearer", is not a strong enough justification. Nearly any reader reading about matrix decomposition will already know what symmetric means and that it implies square, so the redundancy just doesn't help all that much. I would even suggest that such redundancy ironically makes things harder on introductory-level readers, because the clutter makes it more difficult to isolate and encapsulate the concepts they want to learn. For the same reasons, I think using redundancy across the whole spectrum of Wikipedia's mathematics articles would be to its own detriment, and there's just not enough justification to make an exception here. Jason Quinn (talk) 14:17, 20 February 2009 (UTC)
- The decomposition is in the section for solving linear systems. Linear systems are not naturally square, so it is important to emphasize the incredibly restricted domain of this decomposition. One of the first things one looks for in a linear solver is whether it is general or just for square matrices. Amongst square-matrix algorithms, one checks for symmetric, normal, or general. Amongst symmetric algorithms, one checks for positive definite, semidefinite, or general. This page is a bit like a telephone book or directory. Removing "square" is similar to saying "All the names on this page start with S, so we'll just omit the S as it is redundant." JackSchmidt (talk) 16:23, 20 February 2009 (UTC)
Where does SVD go?
JackSchmidt: I'm curious why you moved SVD to the list of decompositions "related to solving linear systems". I thought it was best classified as a generalization of the eigendecomposition. What use, in particular, do you have in mind when you list it as being related to linear systems? --Rinconsoleao (talk) 17:50, 22 February 2009 (UTC)
- Probably it is explained by my view that numerical linear algebra is divided into two sections: linear solvers and eigenvalue problems. In reality there is no sharp distinction between the two (notice how many of the linear solvers are just using eigenvalues, and iterative methods for eigenvalues and linear solvers are extremely similar). I asked myself which problem does the SVD help solve and which problem mentions it more often.
- Well, the SVD is a lot like QR. For instance, one of the important problems when solving linear systems is to determine the rank of the matrix, its kernel, and its image. The number of non-zero singular values is the rank. In slim SVD, one of the unitary matrices is a unitary basis of the image, and the other is a unitary basis for (the complement of) the kernel. Given the SVD, solving a linear system is easy, as inverting unitary and diagonal matrices is trivial. In particular, most properties of least-squares solutions follow very easily from the SVD in theory, and from QR in algorithms. At any rate, this is why it is relevant to linear systems. The SVD is more or less useless in solving eigenvalue problems (that is Ax=cx), however in many fields singular values are called eigenvalues and there is some similarity. Trefethen and Bau's text page 33 has a discussion on the fundamental differences between eigenvalue decomposition and the singular value decomposition including their very distinct uses (but the fact that such a discussion exists, shows that there must be some sort of relation). I think keeping it with linear solvers and mentioning its relation to the eigen-decomposition seems most reasonable, but I am open to other ideas. An alternative that is probably more theoretically sound is to give singular values their own section, but I don't know too many decompositions other than SVD variants that are related, so it might be a poor editorial decision. JackSchmidt (talk) 18:21, 22 February 2009 (UTC)
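To make the "SVD as a linear solver" point concrete, a minimal sketch in NumPy (the matrix and right-hand side are made up for illustration):

```python
import numpy as np

# A made-up overdetermined system Ax = b (4 equations, 2 unknowns).
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])

# Thin ("slim") SVD: A = U @ diag(s) @ Vt, with U 4x2, s length 2, Vt 2x2.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# rank(A) = number of singular values above a tolerance.
tol = max(A.shape) * np.finfo(float).eps * s[0]
rank = int(np.sum(s > tol))

# Least-squares solution x = V @ inv(diag(s)) @ U.T @ b: inverting the
# unitary and diagonal factors is trivial (transpose and reciprocal).
# (For rank-deficient A one would zero the contributions of tiny s.)
x = Vt.T @ ((U.T @ b) / s)

print(rank)                   # 2
print(np.allclose(A @ x, b))  # True; b happens to lie in the image of A
```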
- I reread those chapters in T&B. One use of the SVD not covered by my "it is a linear solver" framing is probably its most important use: low-rank approximation. I don't think that fits under eigendecomposition, but it is only a little related to linear solvers. This supports the "third section" suggestion, but I'm not sure what else to put in it. If we allow "decomposition" to also mean additive decompositions, then this could include some other things (Jordan–Chevalley is usually additive), like the fact that every complex matrix is a low-rank matrix plus a matrix small in 2-norm. Not only do the decompositions need to exist, they also need to have Wikipedia articles (or at least be suitable for articles), so I'm still not sure this is a good idea. JackSchmidt (talk) 18:36, 22 February 2009 (UTC)
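For the low-rank-approximation use, a minimal sketch of truncated SVD (by the Eckart–Young theorem this is the best rank-k approximation in the 2-norm and Frobenius norm; the data here is random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Best rank-k approximation: keep the k largest singular triplets.
k = 5
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Eckart-Young: the 2-norm error equals the (k+1)-th singular value.
print(np.isclose(np.linalg.norm(A - A_k, 2), s[k]))  # True
```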
An excellent article
Thank you. I have been trying for some hours to determine what method is appropriate for solving my system of linear equations and found the answer neatly presented here. The article is accessible to a technical non-specialist (and therefore useful in an encyclopedia), clear and concise. I live in hope that more mathematics articles will match up to these criteria. I added a 'See Also' link to Systems of linear equations. Again, thanks. Dhatfield (talk) 17:51, 8 April 2009 (UTC)
LDL^t decomposition
I suggest that this page would benefit from including additional matrix decompositions, particularly the LDL^t decomposition. Does anyone have any input on this? Thanks Dwhitam (talk) 04:18, 28 April 2009 (UTC)
- It is already mentioned. See Matrix decomposition#Cholesky decomposition which links to LDL decomposition (which is the LDL^t decomposition). JackSchmidt (talk) 13:28, 28 April 2009 (UTC)
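For anyone landing here from a search, a minimal sketch of the factorization itself, using SciPy's scipy.linalg.ldl (the matrix is made up for illustration):

```python
import numpy as np
from scipy.linalg import ldl

# A made-up symmetric matrix; it is indefinite, which LDL^T
# (unlike Cholesky) tolerates.
A = np.array([[ 4.0,  2.0, -2.0],
              [ 2.0,  1.0,  3.0],
              [-2.0,  3.0,  5.0]])

L, D, perm = ldl(A)  # L[perm, :] is lower triangular; D is block diagonal
print(np.allclose(L @ D @ L.T, A))  # True
```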
eigenvalue decomposition of Symmetric matrix
In general we have $A = Q \Lambda Q^{-1}$; when $A$ is symmetric, do we have $A = Q \Lambda Q^{T}$? Jackzhp (talk) 14:10, 7 March 2010 (UTC)
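For reference, the statement the question appears to be after is the real spectral theorem; a sketch:

```latex
% Real spectral theorem: for a real symmetric matrix A there exist
% an orthogonal Q (so Q^{-1} = Q^T) and a real diagonal Lambda with
\[
  A = Q \Lambda Q^{T} = Q \Lambda Q^{-1},
\]
% i.e. the answer is yes: the general similarity A = Q Lambda Q^{-1}
% specializes because the eigenvector matrix can be chosen orthogonal.
```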
Rank factorization?
A link to Rank factorization was recently added. This looks very similar to LU decomposition, but I don't have time right now to check whether and how it is related. Should both concepts be linked separately on this page, or should they be discussed together? And should the Rank factorization page be merged with the LU decomposition page? Rinconsoleao (talk) 08:50, 21 July 2010 (UTC)
- It's different from LU decomposition: the article says that it works for rectangular matrices (whereas LU is only for square matrices), and the first matrix in that decomposition isn't triangular. So we shouldn't merge these articles. -- X7q (talk) 14:27, 21 July 2010 (UTC)
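For comparison, the shape of the factorization (as in the Rank factorization article):

```latex
% Rank factorization of an m-by-n matrix A of rank r:
\[
  A_{m \times n} = C_{m \times r}\, F_{r \times n},
\]
% where C has full column rank and F has full row rank; e.g. C can be
% r linearly independent columns of A, and F the corresponding nonzero
% rows of the reduced row echelon form of A.
```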
LUP decomposition
Is it defined by PA=LU or by A=LUP? There seems to be some contradiction. —Preceding unsigned comment added by 68.102.28.107 (talk) 22:45, 10 May 2011 (UTC)
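For what it's worth, the convention in most numerical texts (and, I believe, in the LU decomposition article) is the first form:

```latex
% LU with partial pivoting: a permutation matrix P reorders the rows
% of A so that the factorization exists and is numerically stable:
\[
  P A = L U.
\]
% A = L U P would instead permute the columns of A (it is equivalent
% to A P^T = L U), so the two forms are genuinely different.
```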
Decomposition into clans
The material just added regarding clans and Petri nets appears to be original research and should probably be removed. Anita5192 (talk) 17:56, 8 April 2013 (UTC)
- I agree; journals published by scirp probably do not pass the threshold for this article, where all other decompositions are well established. Thus, I removed the material. -- Jitse Niesen (talk) 08:53, 9 April 2013 (UTC)
Generalised eigenspaces anyone?
I find it quite amazing that in the long list of decompositions, the one based on the direct sum decomposition of the space into generalised eigenspaces appears to be missing. This is one of the most basic decompositions of any linear operator on a complex vector space without additional (inner product) structure. More precisely, it applies to any square matrix whose characteristic (or minimal) polynomial splits into linear factors. While it is related to the Jordan normal form and somewhat to the Jordan–Chevalley decomposition, it is definitely not the same as either one of them. Is there a particular reason it is missing?
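For concreteness, the decomposition in question:

```latex
% For an n-by-n complex matrix A with distinct eigenvalues
% lambda_1, ..., lambda_k of algebraic multiplicities m_1, ..., m_k:
\[
  \mathbb{C}^n = \bigoplus_{i=1}^{k} \ker\big( (A - \lambda_i I)^{m_i} \big),
\]
% so A is similar to a block-diagonal matrix with one block per
% generalised eigenspace; refining each block further gives the
% Jordan normal form.
```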
Existence
"Existence: An n-by-n matrix A always has n eigenvalues" It isn't true. Matrix [0,1;-1,0] has no real eigenvalues, only complex: and . An nxn matrix over complex numbers always has n complex eigenvalues, an nxn real matrix has n eigenvalues for example when it is a symmetric matrix. Bartekltg (talk) —Preceding undated comment added 03:16, 22 February 2014 (UTC)
Cholesky decomposition
It is written:
- Decomposition: $A = U^{*} U$, where $U$ is upper triangular with positive diagonal entries.
Yet on the Cholesky decomposition page, the factor is lower triangular ($A = L L^{*}$)!
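The two conventions are transposes of each other, so there is no real contradiction; a minimal check (NumPy and SciPy, with a made-up positive definite matrix):

```python
import numpy as np
from scipy.linalg import cholesky

# A made-up symmetric positive definite matrix.
A = np.array([[4.0, 2.0], [2.0, 3.0]])

L = np.linalg.cholesky(A)  # NumPy returns lower triangular: A = L @ L.T
U = cholesky(A)            # SciPy's default is upper triangular: A = U.T @ U

print(np.allclose(L, U.T))      # True: the same factorization, transposed
print(np.allclose(A, U.T @ U))  # True
```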