Explanation-based learning
Explanation-Based Learning (EBL) is a form of machine learning that exploits a very strong, or even perfect, domain theory to make generalizations or form concepts from training examples.
EBL software takes three inputs:
- A hypothesis space (the set of all possible conclusions)
- Training examples (specific facts that rule out some possible hypotheses)
- A domain theory (axioms about a domain of interest)
An example of EBL using a perfect domain theory is a program that learns to play chess by being shown examples. A specific chess position that contains an important feature, say, "Forced loss of black queen in two moves," also includes many irrelevant features, such as the specific scattering of pawns on the board. EBL can take a single training example and determine which features are relevant in order to form a generalization.
In essence, an EBL system works by finding a way to deduce each training example from the system's existing database of domain theory. A short proof of the training example extends the domain-theory database, enabling the EBL system to find and classify future examples that are similar to the training example very quickly.
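The deduction step above can be sketched with a toy backward-chainer over propositional Horn clauses. This is a minimal illustration, not a full EBL implementation: the rule names (`loses_queen`, `knight_forks_queen`, etc.) and the domain theory are invented for the example. The proof of the training example identifies which of its features were actually used, so irrelevant details (here, the pawn positions) are discarded from the learned generalization.

```python
# Hypothetical toy domain theory: each goal maps to a list of
# alternative rule bodies (conjunctions of subgoals).
RULES = {
    "loses_queen":    [["queen_attacked", "no_escape"]],
    "queen_attacked": [["knight_forks_queen"]],
    "no_escape":      [["all_squares_covered"]],
}

def explain(goal, facts, rules):
    """Backward-chain from goal; return the set of leaf facts used
    in a successful proof, or None if no proof exists."""
    if goal in facts:
        return {goal}
    for body in rules.get(goal, []):
        used = set()
        for subgoal in body:
            sub_used = explain(subgoal, facts, rules)
            if sub_used is None:
                break  # this rule body fails; try the next alternative
            used |= sub_used
        else:
            return used  # every subgoal proved
    return None

# One training example: the relevant features plus irrelevant detail.
example = {"knight_forks_queen", "all_squares_covered",
           "pawn_on_a2", "pawn_on_h7"}

relevant = explain("loses_queen", example, RULES)
print(relevant)  # only the features the proof actually used
```

The set `relevant` (`{"knight_forks_queen", "all_squares_covered"}`) acts as the generalized rule: any future position containing exactly these features can be classified as "loses_queen" with a single set-containment check, without re-running the proof.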