Winnow (algorithm)

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by JonathanWilliford (talk | contribs) at 21:11, 14 September 2008. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

The winnow algorithm [1] is a technique from machine learning. It is closely related to the Perceptron, but it uses a multiplicative weight-update scheme that allows it to perform much better than the perceptron when many dimensions are irrelevant (hence its name). It is not a sophisticated algorithm, but it scales well to high-dimensional spaces. During training, winnow is shown a sequence of positive and negative examples. From these it learns a decision hyperplane.

The algorithm

The basic algorithm, Winnow1, is given as follows. The instance space is $\{0,1\}^n$. The algorithm maintains non-negative weights $w_i$ for $i \in \{1,\ldots,n\}$, which are initially set to 1. When the learner is given an example $(x_1,...,x_n)$, the learner follows the following prediction rule:

  • If $\sum_{i=1}^{n} w_i x_i > \Theta$, then it predicts 1
  • Otherwise it predicts 0

where $\Theta$ is a real number that is called the 'threshold'. Good mistake bounds are obtained if $\Theta = n/2$.
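The prediction rule can be sketched in Python (a minimal sketch; the function name predict and the example values are illustrative, and the threshold $\Theta = n/2$ follows the suggestion above):

```python
# Sketch of the Winnow1 prediction rule for Boolean instances x in {0,1}^n.

def predict(weights, x, theta):
    """Predict 1 if the weighted sum of active attributes exceeds theta."""
    total = sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > theta else 0

n = 4
weights = [1.0] * n   # all weights start at 1
theta = n / 2         # Theta = n/2 gives good mistake bounds
print(predict(weights, [1, 0, 1, 0], theta))  # weighted sum 2.0 is not > 2, so 0
```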

The update rule is (loosely):

  • If an example is correctly classified, do nothing.
  • If an example is predicted to be 1 but the correct result was 0, all of the weights involved in the mistake are set to zero.
  • If an example is predicted to be 0 but the correct result was 1, all of the weights involved in the mistake are multiplied by $\alpha$.

A good value for $\alpha$ is 2.
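The update rule above, combined with the prediction rule, can be sketched as follows (a minimal sketch assuming $\alpha = 2$ and $\Theta = n/2$ as suggested above; the function name winnow1_train is illustrative):

```python
# Sketch of Winnow1 training. `examples` is a list of (x, label) pairs
# with x in {0,1}^n and label in {0,1}.

def winnow1_train(examples, n, alpha=2.0):
    weights = [1.0] * n          # all weights start at 1
    theta = n / 2
    for x, label in examples:
        total = sum(w * xi for w, xi in zip(weights, x))
        pred = 1 if total > theta else 0
        if pred == label:
            continue             # correctly classified: do nothing
        if pred == 1 and label == 0:
            # predicted 1 but the correct result was 0: set the weights
            # involved in the mistake (those with x_i = 1) to zero
            weights = [0.0 if xi else w for w, xi in zip(weights, x)]
        else:
            # predicted 0 but the correct result was 1: multiply the
            # weights involved in the mistake by alpha
            weights = [w * alpha if xi else w for w, xi in zip(weights, x)]
    return weights

# Example: learn the target concept f(x) = x_1 over n = 4 attributes.
w = winnow1_train([([0, 1, 1, 1], 0), ([1, 0, 0, 0], 1), ([1, 0, 0, 0], 1)], 4)
print(w)  # [4.0, 0.0, 0.0, 0.0]: only the relevant attribute keeps weight
```

After these three mistakes the irrelevant attributes have been winnowed away to weight 0, and the remaining weight is large enough to cross the threshold whenever $x_1 = 1$.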

Variations are also used. For example, Winnow2 differs only in that, when an example is predicted to be 1 but the correct result was 0, the weights involved in the mistake are divided by $\alpha$ instead of being set to zero.

References

Citations and notes

  1. ^ Littlestone, N. (1988). "Learning Quickly When Irrelevant Attributes Abound: A New Linear-threshold Algorithm". Machine Learning 2, 285–318.