
Model of computation

In computer science, and more specifically in computability theory and computational complexity theory, a model of computation is a model that describes how the output of a mathematical function is computed given an input. A model describes how units of computation, memory, and communication are organized.[1] The computational complexity of an algorithm can be measured given a model of computation. Using a model makes it possible to study the performance of algorithms independently of the variations that are specific to particular implementations and technologies.
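
As a concrete illustration, the sketch below implements one of the simplest models of computation in Python: a deterministic finite automaton, whose only memory is its current state and whose output for an input string is acceptance or rejection. The class and its names are illustrative choices made here, not constructs taken from the article or from any particular library.

    # A minimal sketch of one concrete model of computation: a deterministic
    # finite automaton (DFA). The model fixes how computation is organized:
    # memory is a single current state, computation is one transition per
    # input symbol, and the "output" is acceptance or rejection.

    class DFA:
        def __init__(self, states, alphabet, transition, start, accepting):
            self.states = states          # finite set of states (the only memory)
            self.alphabet = alphabet      # finite input alphabet
            self.transition = transition  # dict: (state, symbol) -> state
            self.start = start            # initial state
            self.accepting = accepting    # set of accepting states

        def run(self, word):
            """Compute the DFA's output (accept/reject) for an input word."""
            state = self.start
            for symbol in word:
                state = self.transition[(state, symbol)]
            return state in self.accepting

    # Example: a DFA that accepts binary strings with an even number of 1s.
    even_ones = DFA(
        states={"even", "odd"},
        alphabet={"0", "1"},
        transition={("even", "0"): "even", ("even", "1"): "odd",
                    ("odd", "0"): "odd", ("odd", "1"): "even"},
        start="even",
        accepting={"even"},
    )

    print(even_ones.run("1011"))  # False: three 1s
    print(even_ones.run("1001"))  # True: two 1s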

Models

Models of computation can be classified into three categories: sequential models, functional models, and concurrent models. Sequential models include finite state machines, pushdown automata, random-access machines, and Turing machines. Functional models include the lambda calculus, combinatory logic, general recursive functions, and abstract rewriting systems. Concurrent models include cellular automata, Petri nets, process calculi, and the actor model.
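
To give a taste of how a functional model differs from a sequential one, here is a hedged sketch of the lambda calculus embedded in Python closures: computation is pure function application, with no mutable memory and no sequence of instructions. The Church numerals and the ADD combinator are standard textbook encodings; the helper to_int is an illustrative addition for inspecting results, not part of the calculus itself.

    # A sketch of a *functional* model of computation: the untyped lambda
    # calculus, embedded in Python closures. A Church numeral n is the
    # function that applies its first argument n times to its second.

    ZERO = lambda f: lambda x: x
    SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
    ADD = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(n):
        """Decode a Church numeral into a Python int for inspection."""
        return n(lambda k: k + 1)(0)

    two = SUCC(SUCC(ZERO))
    three = SUCC(two)
    print(to_int(ADD(two)(three)))  # 5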

Uses

In the field of runtime analysis of algorithms, it is common to specify a computational model in terms of the primitive operations allowed, each of which is assumed to have unit cost (unit-cost operations). A commonly used example is the random-access machine, which has unit cost for read and write access to all of its memory cells. In this respect, it differs from the Turing machine model mentioned above, in which reaching a distant memory cell requires many head movements.
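
The following sketch makes the unit-cost convention concrete: an instrumented memory charges one unit for every read or write, regardless of which cell is accessed, and a simple sorting routine is then measured under that accounting. CountingMemory and its methods are hypothetical names introduced here for illustration, not a standard API.

    # Unit-cost accounting in the random-access machine model: every read
    # or write of a memory cell costs exactly 1, whatever its address.

    class CountingMemory:
        def __init__(self, data):
            self.cells = list(data)
            self.cost = 0  # total number of unit-cost operations so far

        def read(self, i):
            self.cost += 1
            return self.cells[i]

        def write(self, i, value):
            self.cost += 1
            self.cells[i] = value

    def selection_sort(mem, n):
        """Sort n cells in place, charging one unit per memory access."""
        for i in range(n):
            smallest = i
            for j in range(i + 1, n):
                if mem.read(j) < mem.read(smallest):
                    smallest = j
            vi, vs = mem.read(i), mem.read(smallest)
            mem.write(i, vs)
            mem.write(smallest, vi)

    mem = CountingMemory([5, 2, 4, 1, 3])
    selection_sort(mem, 5)
    print(mem.cells)  # [1, 2, 3, 4, 5]
    print(mem.cost)   # Theta(n^2) unit-cost operations under this model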

See also

References

  1. ^ "Models of Computation" (PDF).

Further reading

  • Fernández, Maribel (2009). Models of Computation: An Introduction to Computability Theory. Undergraduate Topics in Computer Science. Springer. ISBN 978-1-84882-433-1.
  • Savage, John E. (1998). Models of Computation: Exploring the Power of Computing. Addison-Wesley. ISBN 978-0-201-89539-1.