Normalization model

From Wikipedia, the free encyclopedia

The normalization model is an influential model of responses of neurons in primary visual cortex. David Heeger developed the model in the early 1990s,[1] and later refined it together with Matteo Carandini and J. Anthony Movshon.[2] The model posits a divisive stage: the output of the neuron's classical receptive field forms the numerator, and it is divided by a denominator consisting of a constant plus a measure of local stimulus contrast.
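The divisive stage described above can be sketched numerically. The snippet below is a minimal illustration, not Heeger's published implementation: the function name `normalized_response`, the exponent `n`, and the semisaturation constant `sigma` are illustrative assumptions, and the normalization pool is taken here to be the summed driving input of all units.

```python
import numpy as np

def normalized_response(drive, sigma=0.1, n=2.0):
    """Divisive normalization (illustrative sketch).

    Numerator: each unit's driving input (classical receptive field
    output), raised to an exponent n.
    Denominator: a constant (sigma**n) plus the pooled activity of
    all units, standing in for local stimulus contrast.
    sigma and n are hypothetical example values, not fitted parameters.
    """
    drive = np.asarray(drive, dtype=float)
    num = drive ** n
    denom = sigma ** n + num.sum()
    return num / denom

# At low contrast the constant dominates the denominator, so responses
# grow with stimulus strength; at high contrast the pooled activity
# dominates, so responses saturate.
weak = normalized_response([0.1, 0.05, 0.02])
strong = normalized_response([1.0, 0.5, 0.2])
```

A characteristic consequence, visible in this sketch, is that doubling all inputs at high contrast produces much less than a doubling of the response, while the ratios between units' responses are preserved.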

References

  1. ^ Heeger DJ (1992). "Normalization of cell responses in cat striate cortex". Visual Neuroscience. PMID 1504027.
  2. ^ Carandini M, Heeger DJ, Movshon JA (1997). "Linearity and normalization in simple cells of the macaque primary visual cortex". Journal of Neuroscience. PMID 9334433.