Asymptotic computational complexity

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Altenmann (talk | contribs) at 03:15, 23 January 2008. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

In computational complexity theory, asymptotic computational complexity is the use of asymptotic analysis to estimate the computational complexity of algorithms and computational problems, commonly associated with the use of the big O notation.

Since the groundbreaking 1965 paper by Hartmanis and Stearns[1] and the 1979 book by Garey and Johnson on NP-completeness,[2] the term "computational complexity" (of algorithms) most commonly refers to the asymptotic computational complexity.

Further, unless specified otherwise, the term "computational complexity" usually refers to an upper bound on the asymptotic computational complexity of an algorithm or a problem, which is usually written in terms of the "Big Oh" notation, e.g., O(n²). Other types of (asymptotic) computational complexity estimates are lower bounds ("Big Omega" notation; e.g., Ω(n)) and asymptotically tight estimates, when the asymptotic upper and lower bounds coincide (written using the "Big Theta" notation; e.g., Θ(n log n)).
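The distinction between these bounds can be made concrete by counting an algorithm's basic operations. The following is a minimal sketch in Python (the function name and the choice of insertion sort are illustrative assumptions, not taken from the article): counting comparisons shows that insertion sort is O(n²) in general, Θ(n²) on reversed input, and Ω(n) overall, since the sorted input achieves only n − 1 comparisons.

```python
def insertion_sort_ops(a):
    """Sort a copy of `a` by insertion sort; return the number of
    element comparisons performed (the counted basic operation)."""
    a = list(a)
    ops = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            ops += 1                          # one comparison a[j-1] vs a[j]
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break                         # element is in place
    return ops

n = 200
# Reversed input (worst case): exactly n*(n-1)/2 comparisons -> Theta(n^2).
assert insertion_sort_ops(range(n, 0, -1)) == n * (n - 1) // 2
# Already-sorted input (best case): exactly n-1 comparisons -> Theta(n).
assert insertion_sort_ops(range(n)) == n - 1
```

Here the upper bound O(n²) holds for every input, the lower bound Ω(n) holds for every input, and the tight estimate Θ(n²) applies to the worst case specifically.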

In terms of the most commonly estimated computational resources, one speaks of asymptotic time complexity and asymptotic space complexity. Other asymptotically estimated resources include circuit complexity and various measures of parallel computation, such as the number of (parallel) processors.

References

  1. ^ J. Hartmanis, R. Stearns, "On the computational complexity of algorithms," Transactions of the American Mathematical Society, vol. 117, 1965, pp. 285–306.
  2. ^ Michael Garey and David S. Johnson: Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: W. H. Freeman & Co., 1979.