
General regression neural network


Generalized regression neural network (GRNN) is a variation of radial basis neural networks. GRNN was proposed by D.F. Specht in 1991.[1]

GRNN can be used for regression, prediction, and classification. It can also be a good solution for online dynamical systems.

GRNN is an improved technique in neural networks based on nonparametric regression. The idea is that every training sample represents the mean of a radial basis neuron.[2]

Mathematical representation

$\hat{Y}(x) = \dfrac{\sum_{k=1}^{N} y_k \, K(x, x_k)}{\sum_{k=1}^{N} K(x, x_k)}$

where:

  • $\hat{Y}(x)$ is the predicted value for the input $x$
  • $y_k$ is the activation weight for the pattern layer neuron at $k$
  • $K(x, x_k)$ is the radial basis function kernel (Gaussian kernel), as formulated below:

$K(x, x_k) = e^{-d_k / (2\sigma^2)}, \qquad d_k = (x - x_k)^\mathsf{T} (x - x_k)$

where $d_k$ is the squared Euclidean distance between the training sample $x_k$ and the input $x$.
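
The prediction equation above maps directly onto code. The following is a minimal NumPy sketch of the prediction step; the function name grnn_predict and the smoothing parameter sigma are illustrative choices, not taken from any particular library.

import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Predict targets for X_query with a GRNN.

    Each training sample acts as the centre of a Gaussian kernel;
    the prediction is the kernel-weighted average of the training targets.
    """
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    X_query = np.asarray(X_query, dtype=float)

    preds = []
    for x in X_query:
        # Squared Euclidean distance d_k between x and each training sample x_k.
        d = np.sum((X_train - x) ** 2, axis=1)
        # Gaussian kernel K(x, x_k) = exp(-d_k / (2 * sigma^2)).
        K = np.exp(-d / (2.0 * sigma ** 2))
        # Kernel-weighted average of the training targets (the GRNN output).
        preds.append(np.dot(K, y_train) / np.sum(K))
    return np.array(preds)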

Implementation

GRNN has been implemented in many programming languages, including MATLAB,[3] R, and Python.
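
As a purely illustrative usage example, the grnn_predict sketch from the previous section could be applied to a noisy one-dimensional toy problem as follows; the choice of sigma = 0.3 is arbitrary, and in practice the smoothing parameter would be tuned, for example by cross-validation.

import numpy as np

# Noisy samples of a sine curve as a toy regression problem.
rng = np.random.default_rng(0)
X_train = np.linspace(0, 2 * np.pi, 50).reshape(-1, 1)
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(50)

# Predict at a few query points using the sketch defined above.
X_query = np.array([[1.0], [2.5], [4.0]])
print(grnn_predict(X_train, y_train, X_query, sigma=0.3))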

Advantages and disadvantages

Similar to RBFNN, GRNN has the following advantages:

  • Single-pass learning, so no backpropagation is required.
  • High accuracy in the estimation, since it uses Gaussian functions.
  • It can handle noise in the inputs.

The main disadvantages of GRNN are:

  • Its size can be huge, which would make it computationally expensive.
  • There is no optimal method to improve it; the smoothing parameter has to be chosen empirically.

References

  1. ^ Specht, D. F. (1991). "A general regression neural network". IEEE Transactions on Neural Networks. 2 (6): 568–576. Retrieved 2017-03-13.
  2. ^ https://minds.wisconsin.edu/bitstream/handle/1793/7779/ch2.pdf?sequence=14
  3. ^ https://au.mathworks.com/help/nnet/ug/generalized-regression-neural-networks.html