Learning vector quantization

In computer science, learning vector quantization (LVQ) is a prototype-based supervised classification algorithm. LVQ is the supervised counterpart of vector quantization systems. LVQ can be understood as a special case of an artificial neural network, more precisely, it applies a winner-take-all Hebbian learning-based approach. It is a precursor to self-organizing maps (SOM) and related to neural gas and the k-nearest neighbor algorithm (k-NN). LVQ was invented by Teuvo Kohonen.[1]

Definition

An LVQ system is represented by prototypes which are defined in the feature space of observed data. In winner-take-all training algorithms one determines, for each data point, the prototype which is closest to the input according to a given distance measure. The position of this so-called winner prototype is then adapted, i.e. the winner is moved closer if it correctly classifies the data point or moved away if it classifies the data point incorrectly.
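Schematically, writing $w^{*}$ for the winner prototype, $x$ for the data point, and $\alpha_t$ for a learning rate (notation as in the Algorithm section below), this attraction/repulsion step has the form

  $$ w^{*} \leftarrow \begin{cases} w^{*} + \alpha_t\,(x - w^{*}) & \text{if } w^{*} \text{ carries the same class label as } x,\\ w^{*} - \alpha_t\,(x - w^{*}) & \text{otherwise.} \end{cases} $$

The precise update rules used by LVQ1, LVQ2 and LVQ3 are given in the Algorithm section.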

An advantage of LVQ is that it creates prototypes that are easy to interpret for experts in the respective application domain.[2] LVQ systems can be applied to multi-class classification problems in a natural way.

A key issue in LVQ is the choice of an appropriate measure of distance or similarity for training and classification. Techniques have been developed which adapt a parameterized distance measure in the course of training the system; see, e.g., (Schneider, Biehl, and Hammer, 2009)[3] and references therein.
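As an illustration, one simple adaptive distance of this kind is a relevance-weighted squared Euclidean distance with non-negative per-feature weights that are adjusted during training. The cited work adapts full relevance matrices; the diagonal sketch below is only a simplified variant, and the name relevance_distance is illustrative rather than taken from that work.

  import numpy as np

  def relevance_distance(x, w, lam):
      # Squared Euclidean distance weighted by per-feature relevances lam >= 0,
      # normalized to sum to one; in relevance LVQ variants these weights are
      # adapted alongside the code vectors during training.
      lam = np.asarray(lam, dtype=float)
      lam = lam / lam.sum()
      diff = np.asarray(x, dtype=float) - np.asarray(w, dtype=float)
      return float(np.sum(lam * diff ** 2))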

LVQ has also been applied to the classification of text documents.[citation needed]

Algorithm

Setup:[4]

  • Let the data be denoted by $x_i \in \mathbb{R}^D$, and their corresponding labels by $y_i$.
  • The complete dataset is $\{(x_i, y_i)\}_i$.
  • The set of code vectors is $\{w_j\}$, with $w_j \in \mathbb{R}^D$ and each $w_j$ carrying a class label.
  • The learning rate at iteration step $t$ is denoted by $\alpha_t$.
  • The hyperparameters $w$ (a relative window width) and $\epsilon$ are used by LVQ2 and LVQ3. The original paper suggests $w \in [0.2, 0.3]$ and $\epsilon$ between 0.1 and 0.5.

LVQ1

Initialize several code vectors per label. Iterate until a convergence criterion is reached.

  1. Sample a datum $x_i$, and find the code vector $w_j$ such that $x_i$ falls within the Voronoi cell of $w_j$ (that is, $w_j$ is the code vector closest to $x_i$).
  2. If the label $y_i$ is the same as that of $w_j$, then $w_j \leftarrow w_j + \alpha_t (x_i - w_j)$; otherwise, $w_j \leftarrow w_j - \alpha_t (x_i - w_j)$.
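
The following is a minimal NumPy sketch of this procedure, not taken from the original source: it assumes Euclidean distance, initialization of code vectors by sampling from each class, and a linearly decaying learning rate, and the names lvq1_fit, lvq_predict and n_prototypes_per_class are illustrative.

  import numpy as np

  def lvq1_fit(X, y, n_prototypes_per_class=2, n_iters=1000, alpha0=0.1, rng=None):
      rng = np.random.default_rng(rng)
      # Initialize several code vectors per label by sampling from each class.
      W, W_labels = [], []
      for c in np.unique(y):
          idx = rng.choice(np.flatnonzero(y == c), size=n_prototypes_per_class, replace=False)
          W.append(X[idx])
          W_labels += [c] * n_prototypes_per_class
      W, W_labels = np.vstack(W).astype(float), np.array(W_labels)

      for t in range(n_iters):
          alpha = alpha0 * (1 - t / n_iters)               # decaying learning rate alpha_t
          i = rng.integers(len(X))                         # sample a datum x_i
          j = np.argmin(np.linalg.norm(W - X[i], axis=1))  # winner: nearest code vector w_j
          if W_labels[j] == y[i]:
              W[j] += alpha * (X[i] - W[j])                # same class: attract the winner
          else:
              W[j] -= alpha * (X[i] - W[j])                # different class: repel the winner
      return W, W_labels

  def lvq_predict(W, W_labels, X):
      # Classify each point by the label of its nearest code vector.
      d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
      return W_labels[np.argmin(d, axis=1)]

On a toy two-class dataset, lvq1_fit(X, y) returns the trained code vectors with their labels, and lvq_predict then classifies new points by the label of the nearest code vector.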

LVQ2

LVQ2 is the same as LVQ3, but with this rule removed: "If $x_i$, $w_j$, and $w_k$ all have the same class, then $w_j \leftarrow w_j + \epsilon \alpha_t (x_i - w_j)$ and $w_k \leftarrow w_k + \epsilon \alpha_t (x_i - w_k)$."

LVQ3

Initialize several code vectors per label. Iterate until a convergence criterion is reached.

  1. Sample a datum $x_i$, and find the two code vectors $w_j$ and $w_k$ closest to it.
  2. Let $d_j = \|x_i - w_j\|$ and $d_k = \|x_i - w_k\|$.
  3. If $\min\left(\frac{d_j}{d_k}, \frac{d_k}{d_j}\right) > s$, where $s = \frac{1-w}{1+w}$, then
    • If $x_i$ and $w_j$ have the same class, and $x_i$ and $w_k$ have different classes, then $w_j \leftarrow w_j + \alpha_t (x_i - w_j)$ and $w_k \leftarrow w_k - \alpha_t (x_i - w_k)$.
    • If $x_i$ and $w_k$ have the same class, and $x_i$ and $w_j$ have different classes, then $w_k \leftarrow w_k + \alpha_t (x_i - w_k)$ and $w_j \leftarrow w_j - \alpha_t (x_i - w_j)$.
    • If $x_i$, $w_j$, and $w_k$ all have the same class, then $w_j \leftarrow w_j + \epsilon \alpha_t (x_i - w_j)$ and $w_k \leftarrow w_k + \epsilon \alpha_t (x_i - w_k)$.
    • If $x_i$ and $w_j$ have different classes, and $x_i$ and $w_k$ have different classes, then the original paper does not specify what happens in this case.
  4. Otherwise, skip the update.
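
The following is a minimal NumPy sketch of these LVQ3 updates, not taken from the original source. It takes previously initialized code vectors and their labels as input (for example, from the LVQ1 sketch above); the parameters window and epsilon correspond to $w$ and $\epsilon$, and their default values here are merely illustrative.

  import numpy as np

  def lvq3_fit(X, y, W, W_labels, n_iters=1000, alpha0=0.05, window=0.25, epsilon=0.2, rng=None):
      rng = np.random.default_rng(rng)
      W = W.copy().astype(float)
      s = (1 - window) / (1 + window)                  # window threshold s = (1 - w) / (1 + w)
      for t in range(n_iters):
          alpha = alpha0 * (1 - t / n_iters)           # decaying learning rate alpha_t
          i = rng.integers(len(X))                     # sample a datum x_i
          d = np.linalg.norm(W - X[i], axis=1)
          j, k = np.argsort(d)[:2]                     # the two closest code vectors
          if min(d[j] / d[k], d[k] / d[j]) <= s:
              continue                                 # x_i falls outside the window: skip
          same_j, same_k = W_labels[j] == y[i], W_labels[k] == y[i]
          if same_j and not same_k:
              W[j] += alpha * (X[i] - W[j])
              W[k] -= alpha * (X[i] - W[k])
          elif same_k and not same_j:
              W[k] += alpha * (X[i] - W[k])
              W[j] -= alpha * (X[i] - W[j])
          elif same_j and same_k:                      # extra LVQ3 rule, absent in LVQ2
              W[j] += epsilon * alpha * (X[i] - W[j])
              W[k] += epsilon * alpha * (X[i] - W[k])
          # if both code vectors have a different class than x_i, the update is
          # left unspecified in the original paper: do nothing
      return W

Compared with the LVQ1 sketch, only the winner and the runner-up are updated, and only when $x_i$ falls inside the window around the midplane between them.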

References

  1. ^ T. Kohonen. Self-Organizing Maps. Springer, Berlin, 1997.
  2. ^ T. Kohonen (1995), "Learning vector quantization", in M.A. Arbib (ed.), The Handbook of Brain Theory and Neural Networks, Cambridge, MA: MIT Press, pp. 537–540
  3. ^ P. Schneider; B. Hammer; M. Biehl (2009). "Adaptive Relevance Matrices in Learning Vector Quantization". Neural Computation. 21 (10): 3532–3561. CiteSeerX 10.1.1.216.1183. doi:10.1162/neco.2009.10-08-892. PMID 19635012. S2CID 17306078.
  4. ^ Kohonen, Teuvo (2001), "Learning Vector Quantization", Self-Organizing Maps, vol. 30, Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 245–261, doi:10.1007/978-3-642-56927-2_6, ISBN 978-3-540-67921-9, retrieved 2025-05-23

Further reading

  • lvq_pak official release (1996) by Kohonen and his team