Hyper basis function network


In machine learning, a Hyper basis function network, or HyperBF network, is a generalization of the radial basis function (RBF) network concept, in which a Mahalanobis-like distance is used instead of the standard Euclidean distance measure. The activation function of the j-th neuron in a HyperBF network takes the following form:

$\rho_j(x) = e^{-(x - \mu_j)^T R_j (x - \mu_j)}$

where $\mu_j \in \mathbb{R}^d$ is the center of neuron $j$ and $R_j$ is a $d \times d$ positive definite matrix. Depending on the application, the following types of matrices $R_j$ are usually considered:[1]

  • $R_j = \frac{1}{2\sigma^2}\,\mathbb{I}_{d \times d}$, where $\sigma > 0$. This case corresponds to the regular RBF network.
  • $R_j = \frac{1}{2\sigma_j^2}\,\mathbb{I}_{d \times d}$, where $\sigma_j > 0$. In this case, the basis functions are radially symmetric, but are scaled with different widths.
  • $R_j = \operatorname{diag}\left(\frac{1}{2\sigma_{j1}^2}, \ldots, \frac{1}{2\sigma_{jd}^2}\right)$, where $\sigma_{ji} > 0$. Every neuron has an elliptic shape with a varying size.
  • Positive definite matrix, but not diagonal.
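
For illustration, the following NumPy sketch (not part of the original article; the variable names and sample values are assumptions) constructs each of the four choices of $R_j$ listed above and evaluates the corresponding activation $\rho_j(x)$:

    import numpy as np

    def activation(x, mu, R):
        """HyperBF activation of one neuron: exp(-(x - mu)^T R (x - mu))."""
        diff = x - mu
        return np.exp(-diff @ R @ diff)

    d = 3                                    # input dimension (illustrative)
    sigma = 1.5                              # shared width -> regular RBF network
    sigma_j = 0.8                            # per-neuron width -> radially symmetric, rescaled
    sigma_ji = np.array([0.5, 1.0, 2.0])     # per-dimension widths -> elliptic neuron
    A = np.random.randn(d, d)

    R_rbf      = np.eye(d) / (2 * sigma ** 2)
    R_scaled   = np.eye(d) / (2 * sigma_j ** 2)
    R_elliptic = np.diag(1.0 / (2 * sigma_ji ** 2))
    R_full     = A @ A.T + 1e-3 * np.eye(d)  # generic positive definite, non-diagonal

    x, mu = np.array([0.2, -0.4, 1.0]), np.zeros(d)
    for R in (R_rbf, R_scaled, R_elliptic, R_full):
        print(activation(x, mu, R))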

As in the RBF network case, the output of the network is a scalar function of the input vector, $\phi : \mathbb{R}^d \to \mathbb{R}$, and is given by

$\phi(x) = \sum_{j=1}^{N} a_j \rho_j(x),$

where $N$ is the number of neurons in the hidden layer and $a_j$ is the weight of neuron $j$.
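
A minimal sketch of evaluating this sum for a small network (again an assumption-based illustration, not taken from the article):

    import numpy as np

    def hyperbf_output(x, centers, weights, metrics):
        """Evaluate phi(x) = sum_j a_j * exp(-(x - mu_j)^T R_j (x - mu_j))."""
        return sum(a * np.exp(-(x - mu) @ R @ (x - mu))
                   for mu, a, R in zip(centers, weights, metrics))

    # Tiny example: N = 2 neurons in d = 2 dimensions (all values illustrative).
    centers = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
    weights = [1.0, -0.5]
    metrics = [np.eye(2) / (2 * 0.5 ** 2), np.diag([2.0, 0.5])]
    print(hyperbf_output(np.array([0.3, 0.7]), centers, weights, metrics))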

Training HyperBF networks can be computationally challenging. Moreover, the high number of degrees of freedom of HyperBF networks can lead to overfitting and poor generalization. However, HyperBF networks have the important advantage that a small number of neurons is enough for learning complex functions.[2]
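
As a rough illustration of that trade-off (an assumption-based sketch, not from the article), the number of free parameters contributed by each neuron grows quickly as $R_j$ becomes less constrained:

    def params_per_neuron(d):
        """Free parameters per neuron for each choice of R_j listed above."""
        return {
            "shared sigma (regular RBF)": d + 1,                     # center + weight
            "per-neuron sigma_j":         d + 2,                     # + one width
            "diagonal R_j (elliptic)":    2 * d + 1,                 # + d widths
            "full positive definite R_j": d + 1 + d * (d + 1) // 2,  # + symmetric matrix
        }

    print(params_per_neuron(10))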

References

  1. ^ F. Schwenker, H.A. Kestler and G. Palm (2001). "Three Learning Phases for Radial-Basis-Function Networks". Neural Networks 14:439–458.
  2. ^ R.N. Mahdi, E.C. Rouchka (2011). "Reduced HyperBF Networks: Regularization by Explicit Complexity Reduction and Scaled Rprop-Based Training". IEEE Transactions on Neural Networks 22:673–686. http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=5733426