
General regression neural network


Generalized regression neural network (GRNN) is a variation of radial basis function neural networks. GRNN was suggested by D. F. Specht in 1991.[1]

GRNN can be used for regression, prediction, and classification. It can also be a good solution for online dynamical systems.

GRNN is an improved neural network technique based on non-parametric regression. The idea is that every training sample represents the mean of a radial basis neuron.[2]

Mathematical representation

$$\hat{Y}(x) = \frac{\sum_{k=1}^{N} y_k \, K(x, x_k)}{\sum_{k=1}^{N} K(x, x_k)},
\qquad K(x, x_k) = e^{-d_k / (2\sigma^2)},
\qquad d_k = (x - x_k)^\top (x - x_k)$$

where $\hat{Y}(x)$ is the prediction value for the input $x$, $y_k$ is the target value of the $k$-th training sample, $K(x, x_k)$ is the Gaussian radial basis kernel, $d_k$ is the squared Euclidean distance between the training sample $x_k$ and the input $x$, and $\sigma$ is a smoothing (spread) parameter.[3]
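
As a minimal sketch of this formula (not taken from the cited sources; the function name grnn_predict and the default value of the spread parameter sigma are illustrative assumptions), the prediction can be computed directly as a Gaussian-kernel-weighted average of the training targets:

    import numpy as np

    def grnn_predict(X_train, y_train, x, sigma=1.0):
        """GRNN prediction: kernel-weighted average of training targets.

        X_train: (N, d) array of training inputs x_k
        y_train: (N,) array of training targets y_k
        x:       (d,) query input
        sigma:   smoothing (spread) parameter
        """
        # Squared Euclidean distances d_k = (x - x_k)^T (x - x_k)
        d = np.sum((X_train - x) ** 2, axis=1)
        # Gaussian kernel K(x, x_k) = exp(-d_k / (2 * sigma^2))
        k = np.exp(-d / (2.0 * sigma ** 2))
        # Kernel-weighted average of the targets
        return np.dot(k, y_train) / np.sum(k)

    # Example usage: regression on a noisy sine curve
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 2 * np.pi, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
    print(grnn_predict(X, y, np.array([np.pi / 2]), sigma=0.3))  # close to 1.0

The spread $\sigma$ controls how smooth the regression surface is: small values make the prediction follow individual training samples closely, while large values average over many samples.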

GRNN has been implemented in many software packages, including MATLAB[4] and the R programming language.

Advantages and disadvantages

Similar to RBFNN, GRNN has the following advantages:

  • Single-pass learning, so no backpropagation is required.
  • High accuracy in the estimation, since it uses Gaussian functions.
  • It can handle noise in the inputs.

The main disadvantages of GRNN are:

  • Its size can grow very large, which is computationally expensive, because the network stores a pattern neuron for every training sample.
  • There is no optimal method to improve it.

References

  1. ^ Specht, D. F. (1991). "A general regression neural network". IEEE Transactions on Neural Networks. Ieeexplore.ieee.org. Retrieved 2017-03-13.
  2. ^ https://minds.wisconsin.edu/bitstream/handle/1793/7779/ch2.pdf?sequence=14
  3. ^ https://minds.wisconsin.edu/bitstream/handle/1793/7779/ch2.pdf?sequence=14
  4. ^ https://au.mathworks.com/help/nnet/ug/generalized-regression-neural-networks.html