Algorithmic learning theory

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by APH (talk | contribs) at 09:58, 30 November 2003. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Algorithmic learning theory is a mathematical framework for studying machine learning, in particular the inductive inference of formal languages and functions.

The framework was introduced in Gold's seminal paper "Language Identification in the Limit". In this framework, the learner receives a sample at each time step and, after each sample, outputs a hypothesis. The learner is said to learn a function in the limit if, from some finite time onward, it always outputs a hypothesis equivalent to the target function. Note that the learner need not know when it has reached the correct hypothesis.
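The interaction described above can be sketched in a few lines of Python. This is a toy illustration, not Gold's construction: the names (`run_protocol`, `max_learner`) and the example class of languages L_n = {0, ..., n} are assumptions chosen for illustration.

```python
def run_protocol(learner, stream):
    """Feed samples to the learner one at a time and record the
    hypothesis it outputs after each sample (Gold's protocol)."""
    seen, guesses = [], []
    for sample in stream:
        seen.append(sample)
        guesses.append(learner(seen))
    return guesses

# Toy learner for the class L_n = {0, ..., n}, learning from
# positive examples only: always guess the largest element seen.
def max_learner(seen):
    return max(seen)

print(run_protocol(max_learner, [2, 0, 5, 1, 5, 3]))  # [2, 2, 5, 5, 5, 5]
```

Once the largest element of the target language has appeared, the learner's hypothesis never changes again; yet at no point can the learner tell that it has already converged, which is exactly the situation the definition describes.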

Gold showed that any Turing machine can be learned in the limit by enumeration: order all Turing machines and check the candidates one by one against the samples, outputting the first machine consistent with the samples seen so far. If a suitable Turing machine exists, the enumeration will eventually reach it.
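The enumeration method can be illustrated on a small, decidable hypothesis class instead of full Turing machines (checking consistency against an arbitrary Turing machine is not in general decidable). The class L_n = {positive multiples of n} and the function names below are assumptions made for this sketch.

```python
def hypothesis(n):
    """The n-th enumerated hypothesis, L_n: multiples of n."""
    return lambda x: x % n == 0

def enumeration_learner(samples, max_index=100):
    """Identification by enumeration: return the index of the first
    hypothesis consistent with every labeled sample seen so far."""
    for n in range(1, max_index + 1):
        h = hypothesis(n)
        if all(h(x) == label for x, label in samples):
            return n
    return None

# Present labeled samples of the target L_3 one at a time; the
# learner's guess eventually stabilizes on the correct index.
target = hypothesis(3)
stream = [(x, target(x)) for x in range(1, 20)]
seen, guesses = [], []
for s in stream:
    seen.append(s)
    guesses.append(enumeration_learner(seen))

print(guesses)  # first guesses may be wrong, then settle on 3
```

Each time the current hypothesis contradicts a new sample, the learner advances along the enumeration; since the target occurs at some finite position, the guesses change only finitely often before settling on a correct hypothesis.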

Gold also showed that learning from only positive examples (only words in the language, and no words that are not in the language) is much weaker: no class of languages that contains every finite language and at least one infinite language can be learned in the limit from positive examples alone.

Note that language identification in the limit is a highly abstract model: it takes no account of time, space, or sample complexity, and the enumeration method is sensitive to errors in the sample classification.

Yet the enumeration method makes it possible to learn the most flexible computational model: Turing machines.

Other frameworks of learning consider much more restricted classes of functions, but learn them efficiently (in polynomial time). An example of such a framework is probably approximately correct (PAC) learning.

The original paper is Gold, E. M., "Language Identification in the Limit", Information and Control, 10, pp. 447–474, 1967.

This article is a stub. You can help Wikipedia by fixing it.