Normalization model

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by 108.46.61.79 (talk) at 16:27, 30 April 2013.

The normalization model[1] is an influential model of the responses of neurons in primary visual cortex. David Heeger developed the model in the early 1990s,[2] and later refined it together with Matteo Carandini and J Anthony Movshon.[3] The model posits a divisive stage: a neuron's response equals the output of its classical (linear) receptive field divided by the sum of a constant and a measure of local stimulus contrast, pooled over a population of nearby neurons.
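The divisive stage described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: it assumes a common textbook form in which each neuron's linear drive is raised to an exponent n and divided by a semi-saturation constant sigma (raised to the same exponent) plus the pooled drive of the population; the function name and parameter values are hypothetical.

```python
import numpy as np

def divisive_normalization(drives, sigma=0.1, n=2.0):
    """Assumed textbook form: R_i = d_i^n / (sigma^n + sum_j d_j^n).

    drives : linear (classical-receptive-field) outputs of a pool of neurons
    sigma  : semi-saturation constant (illustrative value)
    n      : exponent applied to each drive (illustrative value)
    """
    drives = np.asarray(drives, dtype=float)
    powered = drives ** n
    # Denominator: constant plus pooled local contrast energy
    return powered / (sigma ** n + powered.sum())
```

Because the pooled drive appears in the denominator, doubling the stimulus contrast less than doubles each response, reproducing the contrast saturation seen in cortical neurons.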

References

  1. ^ Carandini M, Heeger DJ (2012). "Normalization as a canonical neural computation". Nature Reviews Neuroscience 13 (1): 51–62. PMID 22108672.
  2. ^ Heeger DJ (1992). "Normalization of cell responses in cat striate cortex". Visual Neuroscience 9 (2): 181–197. PMID 1504027.
  3. ^ Carandini M, Heeger DJ, Movshon JA (1997). "Linearity and normalization in simple cells of the macaque primary visual cortex". Journal of Neuroscience 17 (21): 8621–8644. PMID 9334433.