Algorithmic probability

Around 1960, Ray Solomonoff invented the concept of algorithmic probability: take a universal computer and feed it an input program generated at random, for example by tossing a fair coin for each bit. The program will compute some output, which may be finite or infinite.
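
A minimal sketch of this sampling process, in Python, is below. It is only an illustration: the two-bit instruction set of toy_machine is invented here and is far from universal, and programs are drawn uniformly at a fixed length rather than from a prefix-free code, but it shows how some outputs turn up far more often than others when programs are chosen at random.

import random

def toy_machine(program):
    # Toy, non-universal machine, invented purely for illustration.
    # The program is read two bits at a time:
    #   00 -> output 0      01 -> output 1
    #   10 -> output the previous bit again      11 -> halt
    out, last = [], "0"
    for i in range(0, len(program) - 1, 2):
        op = program[i:i + 2]
        if op == "00":
            last = "0"
        elif op == "01":
            last = "1"
        elif op == "11":
            break
        out.append(last)
    return "".join(out)

def prefix_frequency(prefix, program_length=16, samples=50_000):
    # Monte Carlo estimate of how often a uniformly random program of the
    # given length produces an output that starts with `prefix`.
    hits = 0
    for _ in range(samples):
        program = "".join(random.choice("01") for _ in range(program_length))
        if toy_machine(program).startswith(prefix):
            hits += 1
    return hits / samples

if __name__ == "__main__":
    # The regular string tends to come out most probable, because more
    # random programs produce an output beginning with it.
    for q in ("0000", "0101", "0110"):
        print(q, prefix_frequency(q))

On this toy machine the regular string 0000 begins the output of many more random programs than 0101 does, foreshadowing the connection between probability and short descriptions made precise below.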

The algorithmic probability of a given finite output prefix q is the sum of the probabilities of the programs that compute something starting with q. Even long objects can have high probability, provided they have short programs.
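
In one common formalization, programs are finite binary strings fed to a prefix-free universal machine U, and a program p of length \ell(p) is generated by coin flips with probability 2^{-\ell(p)}, so the algorithmic probability of the prefix q is

M(q) = \sum_{p \,:\, U(p) \text{ starts with } q} 2^{-\ell(p)},

where the sum runs over the programs whose output begins with q.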

Algorithmic probability is the main ingredient of Ray Solomonoff's theory of inductive inference, a theory of prediction based on observations: given a sequence of symbols, which will come next? Solomonoff's theory provides an answer that is optimal in a certain sense, although the underlying measure is incomputable and can only be approximated. Unlike Karl Popper's informal theory, however, Solomonoff's is mathematically sound.
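
Written out, one standard way to state Solomonoff's prediction is as a conditional probability: after observing a finite sequence q, the probability that the next symbol is x is taken to be

M(x \mid q) = \frac{M(qx)}{M(q)},

where qx denotes q followed by x and M is the algorithmic probability defined above.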

Algorithmic probability is closely related to the concept of Kolmogorov complexity. The Kolmogorov complexity of a computable object is the length of the shortest program that computes it. The invariance theorem shows that the choice of universal computer matters only up to an additive constant.
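
In symbols: if K_U and K_V denote the Kolmogorov complexities defined with respect to two universal computers U and V, the invariance theorem gives a constant c_{UV}, independent of the object x, such that

|K_U(x) - K_V(x)| \le c_{UV} \quad \text{for all } x.

The link to algorithmic probability is the coding theorem, which states that, for the prefix versions of both quantities, -\log_2 M(x) = K(x) + O(1); the most probable objects are precisely those with short programs.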

Solomonoff's enumerable measure is universal in a powerful sense: it multiplicatively dominates every enumerable semimeasure. It ignores computation time, however. The book by Ming Li and Paul Vitányi includes material on time-bounded algorithmic probability measures. The Speed Prior of Jürgen Schmidhuber is based on the fastest way of computing objects rather than the shortest programs that compute them.