
Linear-nonlinear-Poisson cascade model


The Linear-nonlinear-Poisson (LNP) cascade model [1][2] is a simplified functional model of neural spike responses. It has been used successfully to describe the response characteristics of neurons in early sensory pathways, especially the visual system. The LNP model is also the model implicitly assumed when reverse correlation or the spike-triggered average is used to characterize neural responses to white-noise stimuli.
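
In particular, for a spherically symmetric stimulus distribution such as Gaussian white noise, the spike-triggered average recovers the LNP filter up to a scale factor [2]. In notation assumed here for illustration (not taken from the article itself), the estimator can be written

    \hat{\mathbf{k}}_{\mathrm{STA}} \;=\; \frac{1}{n_{\mathrm{sp}}} \sum_{i=1}^{n_{\mathrm{sp}}} \mathbf{x}(t_i) \;\propto\; \mathbf{k},

where the t_i are the observed spike times, \mathbf{x}(t_i) is the stimulus segment preceding spike i, n_{\mathrm{sp}} is the total number of spikes, and \mathbf{k} is the model's linear filter; the proportionality holds provided the nonlinearity shifts the mean of the spike-triggered stimulus ensemble.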

The Linear-Nonlinear-Poisson Cascade Model

The LNP cascade model consists of three stages. The first stage is a linear filter, or linear receptive field, which describes how the neuron integrates stimulus intensity over space and time. The output of this filter is then passed through a nonlinear function, which gives the neuron's instantaneous spike rate as its output. Finally, the instantaneous spike rate is used to generate spikes according to an inhomogeneous Poisson process.
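
Written formally (in notation assumed for this description rather than drawn from a specific source), the first two stages give the instantaneous spike rate

    \lambda(t) \;=\; f\big(\mathbf{k} \cdot \mathbf{x}(t)\big),

where \mathbf{x}(t) is the stimulus over a recent window of space and time, \mathbf{k} is the linear filter, and f is the pointwise nonlinearity. The third stage draws spikes from an inhomogeneous Poisson process, so that the number of spikes n in a short interval [t, t+\Delta) has probability

    P(n) \;=\; \frac{(\lambda(t)\Delta)^{n}}{n!}\, e^{-\lambda(t)\Delta}.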

The linear filtering stage performs dimensionality reduction, reducing the high-dimensional spatio-temporal stimulus space to a low-dimensional feature space within which the neuron computes its response. The nonlinearity then ensures that the filter output is positive (spike rates cannot be negative), and can give rise to other nonlinear phenomena such as response saturation. A Poisson spike generator then converts the continuous, scalar-valued spike rate to a series of spike times, where the probability of a spike depends only on the instantaneous spike rate.
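
As an illustration, the following is a minimal simulation sketch of the three stages for a purely temporal stimulus; the filter shape, the exponential nonlinearity, and all parameter values are illustrative assumptions rather than part of the model's definition.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Stimulus: Gaussian white noise, one sample per time bin of width dt seconds.
    dt = 0.001
    n_bins = 50_000
    stimulus = rng.standard_normal(n_bins)

    # Stage 1 -- linear filter: an assumed temporal receptive field.
    taus = np.arange(30)
    k = np.exp(-taus / 10.0) * np.sin(taus / 3.0)
    k /= np.linalg.norm(k)                      # normalize for convenience
    drive = np.convolve(stimulus, k, mode="full")[:n_bins]

    # Stage 2 -- pointwise nonlinearity: exponential keeps the rate positive.
    rate = 20.0 * np.exp(drive)                 # spikes per second

    # Stage 3 -- inhomogeneous Poisson spike generation, one draw per bin.
    spike_counts = rng.poisson(rate * dt)

    print("total spikes:", spike_counts.sum())

Drawing Poisson counts bin by bin, as above, is the standard discrete-time approximation; for small dt the probability of a spike in a bin is approximately rate * dt.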

Classical nonlinear systems analysis [3][4] provides an alternative to the LNP model based on the Volterra kernel or Wiener kernel series expansion for functionals, which is analogous to the Taylor series expansion for functions. The drawback of this approach is that a polynomial expansion approximates common neural nonlinearities (e.g., rectification and saturation) only with a large number of terms, which are difficult to estimate from finite datasets.
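
For comparison, the leading terms of such a series expansion of the response functional can be written (in standard form; the notation here is an assumption, not taken from the cited works) as

    r(t) \;=\; k_0 \;+\; \int k_1(\tau)\, x(t-\tau)\, d\tau \;+\; \iint k_2(\tau_1, \tau_2)\, x(t-\tau_1)\, x(t-\tau_2)\, d\tau_1\, d\tau_2 \;+\; \cdots,

where the k_i are the Volterra kernels (or, after orthogonalization with respect to a white-noise input, the Wiener kernels). A hard nonlinearity such as half-wave rectification is captured well only when many such terms are retained, which is the drawback noted above.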

References

  1. ^ Simoncelli, E. P., Paninski, L., Pillow, J., & Schwartz, O. (2004). Characterization of neural responses with stochastic stimuli. In M. Gazzaniga (Ed.), The Cognitive Neurosciences (3rd ed., pp. 327-338). MIT Press.
  2. ^ Chichilnisky, E. J. (2001). A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12, 199-213.
  3. ^ Marmarelis, P. Z., & Marmarelis, V. Z. (1978). Analysis of Physiological Systems: The White-Noise Approach. London: Plenum Press.
  4. ^ Korenberg, M. J., Sakai, H. M., & Naka, K.-I. (1989). Dissection of the neuron network in the catfish inner retina. III. Interpretation of spike kernels. Journal of Neurophysiology, 61, 1110-1120.