
Coppersmith–Winograd algorithm

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Jochen Burghardt at 02:44, 28 June 2014 (top: linked ISSAC).

In linear algebra, the Coppersmith–Winograd algorithm, named after Don Coppersmith and Shmuel Winograd, was the asymptotically fastest known algorithm for square matrix multiplication until 2010. It can multiply two n × n matrices in O(n^2.375477) time[1] (see Big O notation). This is an improvement over the naïve O(n^3)-time algorithm and the O(n^2.807)-time Strassen algorithm. Algorithms with better asymptotic running time than the Strassen algorithm are rarely used in practice. It is possible to improve the exponent further; however, the exponent must be at least 2 (because an n × n matrix has n^2 values, and all of them have to be read at least once to calculate the exact result).
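The gap between the naïve bound and the Strassen bound can be made concrete. The sketch below (an illustration, not the Coppersmith–Winograd algorithm itself, which is far more involved and impractical to implement) contrasts schoolbook multiplication, which uses n^3 scalar multiplications, with Strassen's scheme, which recursively replaces the 8 block multiplications of a 2 × 2 block product with 7, yielding the O(n^log2(7)) ≈ O(n^2.807) bound; it assumes square matrices whose size is a power of two:

```python
def naive_mult(A, B):
    """Schoolbook multiplication: n^3 scalar multiplications."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            a = A[i][k]
            for j in range(n):
                C[i][j] += a * B[k][j]
    return C

def _split(M):
    """Split a matrix into four equally sized quadrants."""
    n = len(M) // 2
    return ([row[:n] for row in M[:n]], [row[n:] for row in M[:n]],
            [row[:n] for row in M[n:]], [row[n:] for row in M[n:]])

def _add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def _sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    """Strassen's algorithm: 7 recursive block multiplications
    instead of 8, giving O(n^log2(7)) ~ O(n^2.807)."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    a, b, c, d = _split(A)  # A = [[a, b], [c, d]]
    e, f, g, h = _split(B)  # B = [[e, f], [g, h]]
    p1 = strassen(a, _sub(f, h))
    p2 = strassen(_add(a, b), h)
    p3 = strassen(_add(c, d), e)
    p4 = strassen(d, _sub(g, e))
    p5 = strassen(_add(a, d), _add(e, h))
    p6 = strassen(_sub(b, d), _add(g, h))
    p7 = strassen(_sub(a, c), _add(e, f))
    top_left = _add(_sub(_add(p5, p4), p2), p6)
    top_right = _add(p1, p2)
    bottom_left = _add(p3, p4)
    bottom_right = _sub(_sub(_add(p1, p5), p3), p7)
    # Reassemble the four quadrants into one matrix.
    C = [tl + tr for tl, tr in zip(top_left, top_right)]
    C += [bl + br for bl, br in zip(bottom_left, bottom_right)]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert naive_mult(A, B) == strassen(A, B) == [[19, 22], [43, 50]]
```

The recursion saves one multiplication per level at the cost of extra additions, which is why — as the article notes for the even more addition-heavy Coppersmith–Winograd algorithm — the asymptotic win only pays off for large matrices.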

In 2010, Andrew Stothers gave an improvement to the algorithm, reducing the bound to O(n^2.374).[2][3] In 2011, Virginia Williams combined a mathematical short-cut from Stothers' paper with her own insights and automated optimization on computers, improving the bound to O(n^2.3728642).[4] In 2014, François Le Gall simplified the methods of Williams and obtained an improved bound of O(n^2.3728639).[5]

The Coppersmith–Winograd algorithm is frequently used as a building block in other algorithms to prove theoretical time bounds. However, unlike the Strassen algorithm, it is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware.[6]

Henry Cohn, Robert Kleinberg, Balázs Szegedy and Christopher Umans have re-derived the Coppersmith–Winograd algorithm using a group-theoretic construction. They also showed that either of two different conjectures would imply that the optimal exponent of matrix multiplication is 2, as has long been suspected. However, they were not able to formulate a specific solution with a running time better than that of the Coppersmith–Winograd algorithm.[7]

References

  1. ^ Coppersmith, Don; Winograd, Shmuel (1990), "Matrix multiplication via arithmetic progressions" (PDF), Journal of Symbolic Computation, 9 (3): 251, doi:10.1016/S0747-7171(08)80013-2
  2. ^ Stothers, Andrew (2010), On the Complexity of Matrix Multiplication (PDF).
  3. ^ Davie, A.M.; Stothers, A.J. (2013), "Improved bound for complexity of matrix multiplication", Proceedings of the Royal Society of Edinburgh, 143A: 351–370, doi:10.1017/S0308210511001648
  4. ^ Williams, Virginia (2011), Breaking the Coppersmith-Winograd barrier (PDF)
  5. ^ Le Gall, François (2014), "Powers of tensors and fast matrix multiplication", Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation (ISSAC 2014), arXiv:1401.7714
  6. ^ Robinson, Sara (2005), "Toward an Optimal Algorithm for Matrix Multiplication" (PDF), SIAM News, 38 (9)
  7. ^ Cohn, Henry; Kleinberg, Robert; Szegedy, Balázs; Umans, Christopher (2005), "Group-theoretic algorithms for matrix multiplication", Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2005), doi:10.1109/SFCS.2005.39

Further reading

  • P. Bürgisser, M. Clausen, and M.A. Shokrollahi. Algebraic complexity theory. Grundlehren der mathematischen Wissenschaften, No. 315 Springer Verlag 1997.
