Mutual coherence (linear algebra)

From Wikipedia, the free encyclopedia
In linear algebra, mutual coherence (or simply coherence) measures the maximum similarity between any two columns of a matrix, defined as the largest absolute value of their cross-correlations.[1][2] First explored by David Donoho and Xiaoming Huo in the late 1990s for pairs of orthogonal bases,[3] it was later expanded by Donoho and Michael Elad in the early 2000s to study sparse representations[4]—where signals are built from a few key components in a larger set.

In signal processing, mutual coherence is widely used to assess how well algorithms like matching pursuit and basis pursuit can recover a signal’s sparse representation from a collection with extra building blocks, known as an overcomplete dictionary.[1][2][5]

Joel Tropp extended this idea with the Babel function, which measures the coherence between one column and a set of other columns; it reduces to the mutual coherence when the set contains a single column, while generalizing the notion to sets of any size.[6]

Formal definition

Formally, let a_1, …, a_m ∈ ℂ^n be the columns of the matrix A, which are assumed to be normalized such that a_i^H a_i = 1 for each i. The mutual coherence of A is then defined as[1][2]

\mu(A) = \max_{1 \le i < j \le m} \left| a_i^{\mathrm{H}} a_j \right| .
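The definition can be evaluated directly from the Gram matrix of the normalized columns. As an illustrative sketch (the function name and the NumPy-based implementation are not from the article itself):

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between any two distinct
    unit-normalized columns of A."""
    # Normalize each column to unit Euclidean norm.
    A = A / np.linalg.norm(A, axis=0)
    # Absolute Gram matrix of the normalized columns.
    G = np.abs(A.conj().T @ A)
    # Ignore the diagonal: each column's inner product with itself is 1.
    np.fill_diagonal(G, 0.0)
    return G.max()

# Example: the identity basis concatenated with the normalized DFT basis,
# a classic pair of orthogonal bases whose coherence is 1/sqrt(n).
n = 4
A = np.hstack([np.eye(n), np.fft.fft(np.eye(n)) / np.sqrt(n)])
print(mutual_coherence(A))  # 0.5, i.e. 1/sqrt(4)
```

For this identity-plus-DFT dictionary, columns within each orthogonal basis have zero inner product, so the maximum is attained between the two bases.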

For an n × m matrix A with m > n, a lower bound is the Welch bound[7]

\mu(A) \ge \sqrt{\frac{m - n}{n (m - 1)}} .
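The bound is easy to evaluate numerically. A minimal sketch (the function name is illustrative, not from the article):

```python
import math

def welch_bound(n, m):
    """Welch lower bound on the mutual coherence of an n x m matrix, m > n."""
    return math.sqrt((m - n) / (n * (m - 1)))

# For a 4 x 8 dictionary the bound is sqrt(4 / 28) ~= 0.378, so no
# unit-norm 4 x 8 matrix can have mutual coherence below that value.
print(welch_bound(4, 8))
```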

A deterministic matrix whose mutual coherence nearly meets this lower bound can be constructed using Weil's theorem.[8]

References

  1. ^ a b c Tropp, J.A. (March 2006). "Just relax: Convex programming methods for identifying sparse signals in noise" (PDF). IEEE Transactions on Information Theory. 52 (3): 1030–1051. doi:10.1109/TIT.2005.864420. S2CID 6496872.
  2. ^ a b c Donoho, D.L.; M. Elad; V.N. Temlyakov (January 2006). "Stable recovery of sparse overcomplete representations in the presence of noise". IEEE Transactions on Information Theory. 52 (1): 6–18. doi:10.1109/TIT.2005.860430. S2CID 14813938.
  3. ^ Donoho, D.L.; Xiaoming Huo (November 2001). "Uncertainty principles and ideal atomic decomposition". IEEE Transactions on Information Theory. 47 (7): 2845–2862. CiteSeerX 10.1.1.39.3696. doi:10.1109/18.959265. S2CID 9500527.
  4. ^ Donoho, D.L.; Michael Elad (March 2003). "Optimally sparse representation in general (nonorthogonal) dictionaries via L1 minimization". Proc. Natl. Acad. Sci. 100 (5): 2197–2202. Bibcode:2003PNAS..100.2197D. doi:10.1073/pnas.0437847100. PMC 153464. PMID 16576749.
  5. ^ Fuchs, J.-J. (June 2004). "On sparse representations in arbitrary redundant bases". IEEE Transactions on Information Theory. 50 (6): 1341–1344. doi:10.1109/TIT.2004.828141. S2CID 18432970.
  6. ^ Joel A. Tropp (2004). "Greed is good: Algorithmic results for sparse approximation" (PDF). CiteSeerX 10.1.1.84.5256.
  7. ^ Welch, L. R. (1974). "Lower bounds on the maximum cross-correlation of signals". IEEE Transactions on Information Theory. 20 (3): 397–399. doi:10.1109/tit.1974.1055219.
  8. ^ Zhiqiang, Xu (April 2011). "Deterministic Sampling of Sparse Trigonometric Polynomials". Journal of Complexity. 27 (2): 133–140. arXiv:1006.2221. doi:10.1016/j.jco.2011.01.007. S2CID 2613562.