Concept learning
Concept learning refers to a learning task in which a human or machine learner is trained to classify objects by being shown a set of example objects along with their class labels. In the machine learning literature, this task is more typically called supervised learning or supervised classification, in contrast to unsupervised learning or unsupervised classification, in which the learner is not provided with class labels. Colloquially, the task is known as learning from examples.
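To make the task concrete, here is a minimal sketch of supervised concept learning: the learner sees (feature, label) pairs and induces a simple decision boundary. The one-dimensional threshold rule, the concept "large", and all names here are our own illustrative assumptions, not a method from the literature above.

```python
# Minimal sketch of learning from examples (illustrative only):
# the learner is shown labeled objects and induces a decision threshold.

def learn_threshold(examples):
    """Place the boundary midway between the largest "no" and smallest "yes".

    Assumes the two classes are separable on this single feature.
    """
    pos = [x for x, label in examples if label == "yes"]
    neg = [x for x, label in examples if label == "no"]
    return (max(neg) + min(pos)) / 2.0

def classify(threshold, x):
    """Apply the learned concept to a new, unlabeled object."""
    return "yes" if x >= threshold else "no"

# Training examples: object size paired with membership in the concept "large".
train = [(1.0, "no"), (2.0, "no"), (6.0, "yes"), (8.0, "yes")]
t = learn_threshold(train)   # midpoint between 2.0 and 6.0, i.e. 4.0
print(classify(t, 5.0))      # -> yes
print(classify(t, 3.0))      # -> no
```

An unsupervised learner, by contrast, would receive only the feature values (1.0, 2.0, 6.0, 8.0) and would have to discover the grouping itself.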
Modern Psychological Theories of Concept Learning
It is difficult to make any general statements about human (or animal) concept learning without already assuming a particular psychological theory of concept learning. Although the classical views of concepts and concept learning in philosophy speak of a process of abstraction, compression, simplification, and summarization, currently popular psychological theories of concept learning diverge on all these points.
Exemplar Theories of Concept Learning
Exemplar theories of concept learning are those in which the learner is hypothesized to store the provided training examples verbatim, without creating any abstraction or reduced representation (e.g., rules). In machine learning, algorithms of this type are also known as instance-based learners or lazy learners. The best-known exemplar theory of concept learning is the Generalized Context Model, developed by Nosofsky. Some variations are listed below.
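The exemplar idea can be sketched in the spirit of the Generalized Context Model: every training exemplar is stored verbatim, and a new item is assigned to a class in proportion to its summed similarity to the stored exemplars of that class. The exponential similarity function and city-block distance are common GCM choices, but the sensitivity parameter, stimuli, and function names below are our own assumptions.

```python
import math

# Illustrative exemplar-model sketch in the spirit of the Generalized
# Context Model: no abstraction is formed; classification rests entirely
# on similarity to stored training exemplars.

def similarity(x, y, c=1.0):
    """Similarity decays exponentially with city-block distance."""
    distance = sum(abs(a - b) for a, b in zip(x, y))
    return math.exp(-c * distance)   # c is a sensitivity parameter

def classify(exemplars, item, c=1.0):
    """Return class probabilities proportional to summed similarity."""
    totals = {}
    for features, label in exemplars:
        totals[label] = totals.get(label, 0.0) + similarity(features, item, c)
    z = sum(totals.values())
    return {label: s / z for label, s in totals.items()}

# Stored exemplars: (feature vector, class label), kept verbatim.
exemplars = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
             ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
probs = classify(exemplars, (0.2, 0.1))
print(max(probs, key=probs.get))   # -> A (the probe lies near the A exemplars)
```

Note that "learning" here is nothing more than storing the examples, which is why such algorithms are called lazy learners: all the work happens at classification time.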
Bayesian Theories of Concept Learning
The best known Bayesian theory of concept learning is ACT, developed by Anderson.
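As a generic illustration of the Bayesian approach (a sketch of the general idea, not a rendering of ACT), a learner can maintain a posterior distribution over candidate concepts and update it as labeled examples arrive. The hypothesis space, the uniform prior, and the assumption that positive examples are sampled uniformly from the true concept are all our own illustrative choices.

```python
# Illustrative sketch of Bayesian concept learning over a small hypothesis
# space of number concepts. Hypotheses, priors, and the sampling assumption
# are assumptions made for this example.

hypotheses = {
    "even numbers":      {n for n in range(1, 21) if n % 2 == 0},
    "multiples of four": {n for n in range(1, 21) if n % 4 == 0},
    "numbers up to 10":  set(range(1, 11)),
}

def posterior(examples, prior=None):
    """P(h | examples), assuming examples are drawn uniformly from the concept."""
    prior = prior or {h: 1.0 / len(hypotheses) for h in hypotheses}
    scores = {}
    for h, extension in hypotheses.items():
        if all(x in extension for x in examples):
            # Smaller concepts that still cover the data get higher likelihood.
            scores[h] = prior[h] * (1.0 / len(extension)) ** len(examples)
        else:
            scores[h] = 0.0   # inconsistent hypotheses are ruled out
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

post = posterior([4, 8, 16])
print(max(post, key=post.get))   # -> multiples of four
```

After seeing 4, 8, and 16, the hypothesis "numbers up to 10" is ruled out (16 falls outside it), and "multiples of four" beats "even numbers" because it is the smaller concept consistent with all the examples.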
Rule-Based Theories of Concept Learning
To be added...
Compression-Based Theories of Concept Learning
To be added...
Explanation-Based Theories of Concept Learning
To be added...