Winnow (algorithm)
The winnow algorithm [1] is a technique from machine learning for learning a linear classifier from labeled examples. It is closely related to the perceptron, but it uses a multiplicative weight-update scheme that allows it to perform much better than the perceptron when many dimensions are irrelevant (hence its name). It is not a sophisticated algorithm, but it scales well to high-dimensional spaces. During training, winnow is shown a sequence of positive and negative examples, and from these it learns a decision hyperplane.
The algorithm
The basic algorithm, Winnow1, is given as follows. The instance space is $X = \{0,1\}^n$. The algorithm maintains non-negative weights $w_1, \dots, w_n$, one for each attribute, which are initially set to 1. When the learner is given an example $(x_1, \dots, x_n)$, it applies the following prediction rule:
- If $\sum_{i=1}^{n} w_i x_i > \Theta$, then it predicts 1
- Otherwise it predicts 0
Here $\Theta$ is a real number called the 'threshold'. Good bounds are obtained if $\Theta = n/2$.
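As a minimal sketch, the prediction rule above can be written as follows (the function and variable names are illustrative, not from the source):

```python
def predict(weights, x, threshold):
    """Winnow prediction rule: 1 if the weighted sum of active
    attributes exceeds the threshold, else 0."""
    total = sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > threshold else 0
```

With $n = 4$ attributes and the suggested threshold $\Theta = n/2 = 2$, an example with two active attributes and all weights at their initial value 1 sums to exactly 2, so it is predicted 0.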
The update rule is (loosely):
- If an example is correctly classified, do nothing.
- If an example is predicted to be 1 but the correct result was 0, all of the weights involved in the mistake (those $w_i$ with $x_i = 1$) are set to zero (demotion step).
- If an example is predicted to be 0 but the correct result was 1, all of the weights involved in the mistake (those $w_i$ with $x_i = 1$) are multiplied by $\alpha$ (promotion step).
A good value for $\alpha$ is 2.
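Putting the prediction and update rules together, Winnow1 training can be sketched as follows (a minimal illustration assuming 0/1 attribute vectors; names are not from the source):

```python
def winnow1_train(examples, n, alpha=2.0):
    """Train Winnow1 on a sequence of (x, label) pairs, where x is a
    0/1 tuple of length n. Uses threshold n/2 and alpha = 2, the
    values suggested in the text."""
    threshold = n / 2
    weights = [1.0] * n
    for x, label in examples:
        total = sum(w * xi for w, xi in zip(weights, x))
        prediction = 1 if total > threshold else 0
        if prediction == label:
            continue  # correctly classified: do nothing
        if prediction == 1 and label == 0:
            # demotion: zero out the weights of active attributes
            weights = [0.0 if xi == 1 else w
                       for w, xi in zip(weights, x)]
        else:
            # promotion: multiply the weights of active attributes by alpha
            weights = [w * alpha if xi == 1 else w
                       for w, xi in zip(weights, x)]
    return weights
```

For instance, on a target concept that depends only on the first of four attributes, two promotion steps and one demotion step are enough to drive the irrelevant weights to zero while the relevant weight grows past the threshold.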
Variations are also used. For example, Winnow2 is the same as Winnow1 except that in the demotion step the weights are multiplied by $1/\alpha$ instead of being set to 0.
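The Winnow2 demotion step amounts to a one-line change to the update sketched above (again with illustrative names):

```python
def winnow2_demote(weights, x, alpha=2.0):
    """Winnow2 demotion step: divide the weights of active attributes
    by alpha (i.e. multiply by 1/alpha) instead of zeroing them."""
    return [w / alpha if xi == 1 else w for w, xi in zip(weights, x)]
```

Because weights are only ever scaled, never erased, Winnow2 can recover an attribute that was demoted by mistake, at the cost of slower elimination of irrelevant attributes.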
References
Citations and notes
- ^ Littlestone, N. (1988). "Learning Quickly When Irrelevant Attributes Abound: A New Linear-threshold Algorithm". Machine Learning 2: 285–318.