Radial basis functions can be used to solve a function interpolation problem. A set of known input-output pairs is needed:
$$\{ \mathbf{x}(t), y(t) \}_{t=1}^{T},$$
where
$\mathbf{x}(t) \in \mathbb{R}^n$ is the input vector with index t (or the input at time t),
$y(t)$ is the output indexed with t,
$n$ is the dimension of the input space, and
$T$ is the number of points ($T$ can be infinite).
Figure 1: Architecture of a radial basis function network. An input vector $\mathbf{x}$ is used as input to several radial basis functions, each with different parameters. The output of the network is a linear combination of the outputs from the radial basis functions.
In the deterministic case the data is drawn from the set
$$\{ (\mathbf{x}(t), y(t)) : y(t) = f(\mathbf{x}(t)) \}$$
for some underlying function f. The data can be noisy, in which case $y(t)$ is drawn from the set
$$\{ (\mathbf{x}(t), y(t)) : y(t) = f(\mathbf{x}(t)) + \varepsilon(t) \},$$
where $\varepsilon(t)$ is a partially known random process. In the general stochastic case, the data is drawn from the joint probability distribution
$$P(\mathbf{x} \land y).$$
Architecture
RBF networks typically have three layers: an input layer, a hidden layer with the RBF non-linearity, and a linear output layer. Because the output layer is linear in the weights, training those weights is a least-squares problem with no local minima, an advantage over MLP networks. RBF architectures come in two forms, normalized and unnormalized, and either form can be expanded into a superposition of local linear models.
RBF types
The most popular choice for the non-linearity is the Gaussian,
$$\rho(r) = \exp\left( -\beta r^2 \right) \quad \text{for some } \beta > 0,$$
where $r$ is the distance from the input to a basis function center. Other forms, such as the multiquadric $\rho(r) = \sqrt{r^2 + \beta^2}$, are also used.
The unnormalized radial basis function architecture, $\varphi(\mathbf{x})$, is
$$\varphi(\mathbf{x}) = \sum_{i=1}^{N} a_i \, \rho\big( \| \mathbf{x} - \mathbf{c}_i \| \big),$$
where $\varphi(\mathbf{x})$ is the approximation to the data, $\rho$, known as a "radial basis function," is a local function of the distance between the input vector $\mathbf{x}$ and a "basis function center" $\mathbf{c}_i$, and the $a_i$ are weights to be determined by the data. Typically the distance is taken to be the Euclidean distance and the basis function is taken to be Gaussian,
$$\rho\big( \| \mathbf{x} - \mathbf{c}_i \| \big) = \exp\left( -\beta \, \| \mathbf{x} - \mathbf{c}_i \|^2 \right).$$
The weights $a_i$, $\mathbf{c}_i$, and $\beta$ are determined in a manner that optimizes the fit between $\varphi$ and the data.
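As a concrete illustration, the following minimal sketch (assuming NumPy; the function name and example values are ours, not from any standard library) evaluates an unnormalized Gaussian RBF network:

```python
import numpy as np

def rbf_unnormalized(x, centers, a, beta):
    """Evaluate phi(x) = sum_i a_i * exp(-beta * ||x - c_i||^2)."""
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared distances to centers
    rho = np.exp(-beta * d2)                  # Gaussian basis outputs
    return np.dot(a, rho)                     # linear combination

# Example: three centers in a two-dimensional input space.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
a = np.array([1.0, -0.5, 0.25])
print(rbf_unnormalized(np.array([0.5, 0.5]), centers, a, beta=2.0))
```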
Figure 3: Two normalized radial basis functions in one input dimension. The basis function centers are located at $c_1$ and $c_2$.
Normalized
Normalized architecture
The normalized RBF architecture is
$$\varphi(\mathbf{x}) = \frac{\sum_{i=1}^{N} a_i \, \rho\big( \| \mathbf{x} - \mathbf{c}_i \| \big)}{\sum_{i=1}^{N} \rho\big( \| \mathbf{x} - \mathbf{c}_i \| \big)} = \sum_{i=1}^{N} a_i \, u\big( \| \mathbf{x} - \mathbf{c}_i \| \big),$$
where
$$u\big( \| \mathbf{x} - \mathbf{c}_i \| \big) = \frac{\rho\big( \| \mathbf{x} - \mathbf{c}_i \| \big)}{\sum_{j=1}^{N} \rho\big( \| \mathbf{x} - \mathbf{c}_j \| \big)}$$
is known as a "normalized radial basis function."
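Under the same assumptions as the previous sketch, the normalized architecture differs only in dividing by the sum of the basis outputs:

```python
import numpy as np

def rbf_normalized(x, centers, a, beta):
    """Evaluate phi(x) = sum_i a_i * u_i(x), with u_i = rho_i / sum_j rho_j."""
    rho = np.exp(-beta * np.sum((centers - x) ** 2, axis=1))
    u = rho / np.sum(rho)      # normalized radial basis functions
    return np.dot(a, u)
```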
Figure 4: Three normalized radial basis functions in one input dimension. The additional basis function has center at $c_3$.
Theoretical motivation for normalization
There is theoretical justification for this architecture in the case of stochastic data flow. Assume a stochastic kernel approximation for the joint probability density
$$P(\mathbf{x} \land y) = \frac{1}{N} \sum_{i=1}^{N} \rho\big( \| \mathbf{x} - \mathbf{c}_i \| \big) \, \sigma\big( | y - e_i | \big),$$
where the weights $\mathbf{c}_i$ and $e_i$ are exemplars from the data and we require the kernels to be normalized,
$$\int \rho\big( \| \mathbf{x} - \mathbf{c}_i \| \big) \, d^n\mathbf{x} = 1$$
and
$$\int \sigma\big( | y - e_i | \big) \, dy = 1.$$
Figure 5: Four normalized radial basis functions in one input dimension. The fourth basis function has center at $c_4$. Note that the first basis function (dark blue) has become localized.
The probability densities in the input and output spaces are
$$P(\mathbf{x}) = \int P(\mathbf{x} \land y) \, dy = \frac{1}{N} \sum_{i=1}^{N} \rho\big( \| \mathbf{x} - \mathbf{c}_i \| \big)$$
and
$$P(y) = \int P(\mathbf{x} \land y) \, d^n\mathbf{x} = \frac{1}{N} \sum_{i=1}^{N} \sigma\big( | y - e_i | \big).$$
The expectation of y given an input $\mathbf{x}$ is
$$\varphi(\mathbf{x}) \equiv E\big( y \mid \mathbf{x} \big) = \int y \, P\big( y \mid \mathbf{x} \big) \, dy,$$
where
$P(y \mid \mathbf{x})$
is the conditional probability of y given $\mathbf{x}$.
The conditional probability is related to the joint probability through Bayes' theorem,
$$P(y \mid \mathbf{x}) = \frac{P(\mathbf{x} \land y)}{P(\mathbf{x})},$$
which yields
$$\varphi(\mathbf{x}) = \int y \, \frac{P(\mathbf{x} \land y)}{P(\mathbf{x})} \, dy.$$
This becomes
$$\varphi(\mathbf{x}) = \frac{\sum_{i=1}^{N} e_i \, \rho\big( \| \mathbf{x} - \mathbf{c}_i \| \big)}{\sum_{i=1}^{N} \rho\big( \| \mathbf{x} - \mathbf{c}_i \| \big)} = \sum_{i=1}^{N} e_i \, u\big( \| \mathbf{x} - \mathbf{c}_i \| \big)$$
when the integrations are performed, using the normalization of $\sigma$ and the fact that $\int y \, \sigma(|y - e_i|) \, dy = e_i$ for a symmetric kernel centered at $e_i$.
Local linear models
It is sometimes convenient to expand the architecture to include local linear models. In that case the architectures become, to first order,
$$\varphi(\mathbf{x}) = \sum_{i=1}^{N} \big( a_i + \mathbf{b}_i \cdot (\mathbf{x} - \mathbf{c}_i) \big) \, \rho\big( \| \mathbf{x} - \mathbf{c}_i \| \big)$$
and
$$\varphi(\mathbf{x}) = \sum_{i=1}^{N} \big( a_i + \mathbf{b}_i \cdot (\mathbf{x} - \mathbf{c}_i) \big) \, u\big( \| \mathbf{x} - \mathbf{c}_i \| \big)$$
in the unnormalized and normalized cases, respectively. Here the $\mathbf{b}_i$ are weights to be determined. Higher-order terms are also possible.
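A minimal sketch of this first-order expansion in the normalized case, again with illustrative names and NumPy assumed:

```python
import numpy as np

def rbf_local_linear(x, centers, a, b, beta):
    """Evaluate phi(x) = sum_i (a_i + b_i . (x - c_i)) * u_i(x)."""
    diff = x - centers                           # shape (N, n)
    rho = np.exp(-beta * np.sum(diff ** 2, axis=1))
    u = rho / np.sum(rho)                        # normalized basis functions
    local = a + np.sum(b * diff, axis=1)         # local linear model at each center
    return np.dot(local, u)
```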
The weights, which we signify collectively by $\mathbf{w}$, in the RBF architecture are found through optimization of an objective function. The most common objective function is the least-squares function
$$K(\mathbf{w}) \equiv \sum_{t=1}^{\infty} K_t(\mathbf{w}),$$
where
$$K_t(\mathbf{w}) \equiv \big[ y(t) - \varphi\big( \mathbf{x}(t), \mathbf{w} \big) \big]^2.$$
We have explicitly included the dependence on the weights. Minimization of the least-squares objective function by optimal choice of weights optimizes accuracy of fit.
There are occasions in which multiple objectives, such as smoothness as well as accuracy, must be optimized. In that case it is useful to optimize a regularized objective function such as
$$H(\mathbf{w}) \equiv K(\mathbf{w}) + \lambda \, S(\mathbf{w}) \equiv \sum_{t=1}^{\infty} H_t(\mathbf{w}),$$
where
$$S(\mathbf{w}) \equiv \sum_{t=1}^{\infty} S_t(\mathbf{w})$$
and
$$H_t(\mathbf{w}) \equiv K_t(\mathbf{w}) + \lambda \, S_t(\mathbf{w}),$$
where optimization of S maximizes smoothness and $\lambda$ is known as a regularization parameter.
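For illustration, a sketch of how the two objectives might be evaluated over a finite training set; the function names and the idea of passing the smoothness penalty in as a callable are our own assumptions, not a standard formulation:

```python
def least_squares_objective(X, y, phi):
    """K(w): sum over the training set of [y(t) - phi(x(t))]^2."""
    return sum((y_t - phi(x_t)) ** 2 for x_t, y_t in zip(X, y))

def regularized_objective(X, y, phi, smoothness, lam):
    """H(w) = K(w) + lambda * S(w), for a caller-supplied penalty S."""
    return least_squares_objective(X, y, phi) + lam * smoothness(phi)
```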
Training
Training the centers and weights to optimize the objective function is typically done in hybrid fashion, by first fixing the basis function centers and then optimizing the weights. In sequential training, the weights are updated at each time step as data streams in.
Training the basis function centers
Basis function centers can either be randomly sampled among the input instances or found by clustering the samples and choosing the cluster means as the centers.
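The cluster-based choice can be sketched with a bare-bones k-means pass; this is illustrative only, and in practice a library routine would typically be used:

```python
import numpy as np

def kmeans_centers(X, N, iters=20, seed=0):
    """Choose N centers as k-means cluster means of the inputs X (one row per sample)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=N, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each sample to its nearest center.
        d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
        labels = np.argmin(d2, axis=1)
        # Move each center to the mean of its assigned samples.
        for i in range(N):
            if np.any(labels == i):
                centers[i] = X[labels == i].mean(axis=0)
    return centers
```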
The simplest training algorithm is gradient descent. In gradient-descent training, the weights are adjusted at each time step by moving them in the direction opposite to the gradient of the objective function,
$$\mathbf{w}(t+1) = \mathbf{w}(t) - \nu \, \frac{d}{d\mathbf{w}} H_t(\mathbf{w}),$$
where $\nu$ is a "learning parameter."
For the case of training the linear weights, $a_i$, the algorithm becomes
$$a_i(t+1) = a_i(t) + \nu \left[ y(t) - \varphi\big( \mathbf{x}(t), \mathbf{w} \big) \right] \rho\big( \| \mathbf{x}(t) - \mathbf{c}_i \| \big)$$
in the unnormalized case and
$$a_i(t+1) = a_i(t) + \nu \left[ y(t) - \varphi\big( \mathbf{x}(t), \mathbf{w} \big) \right] u\big( \| \mathbf{x}(t) - \mathbf{c}_i \| \big)$$
in the normalized case.
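A sketch of one sequential gradient-descent update of the linear weights in the unnormalized case (NumPy assumed; the factor of 2 from the squared error is absorbed into the learning parameter, as in the equations above):

```python
import numpy as np

def gradient_step(a, x, y, centers, beta, nu):
    """One gradient-descent update of the linear weights (unnormalized case)."""
    rho = np.exp(-beta * np.sum((centers - x) ** 2, axis=1))
    phi = np.dot(a, rho)                 # current network output
    return a + nu * (y - phi) * rho      # step opposite the gradient of K_t

# Normalized case: use u = rho / rho.sum() in place of rho in both lines above.
```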
For local-linear architectures, gradient-descent training is
$$a_i(t+1) = a_i(t) + \nu \left[ y(t) - \varphi\big( \mathbf{x}(t), \mathbf{w} \big) \right] \rho\big( \| \mathbf{x}(t) - \mathbf{c}_i \| \big),$$
$$b_{ij}(t+1) = b_{ij}(t) + \nu \left[ y(t) - \varphi\big( \mathbf{x}(t), \mathbf{w} \big) \right] \big( x_j(t) - c_{ij} \big) \, \rho\big( \| \mathbf{x}(t) - \mathbf{c}_i \| \big)$$
in the unnormalized case, with $\rho$ replaced by $u$ in the normalized case.
Projection operator training of the linear weights
For the case of training the linear weights, $a_i$ (or $e_i$ in the normalized case), the algorithm becomes
$$a_i(t+1) = a_i(t) + \nu \left[ y(t) - \varphi\big( \mathbf{x}(t), \mathbf{w} \big) \right] \frac{\rho\big( \| \mathbf{x}(t) - \mathbf{c}_i \| \big)}{\sum_{j=1}^{N} \rho^2\big( \| \mathbf{x}(t) - \mathbf{c}_j \| \big)}$$
in the unnormalized case and
$$a_i(t+1) = a_i(t) + \nu \left[ y(t) - \varphi\big( \mathbf{x}(t), \mathbf{w} \big) \right] \frac{u\big( \| \mathbf{x}(t) - \mathbf{c}_i \| \big)}{\sum_{j=1}^{N} u^2\big( \| \mathbf{x}(t) - \mathbf{c}_j \| \big)}$$
in the normalized case. Writing the local-linear architecture as $\varphi(\mathbf{x}, \mathbf{w}) = \mathbf{w} \cdot \mathbf{g}(\mathbf{x})$, where $\mathbf{g}$ collects the constant and linear basis outputs, the update
$$\mathbf{w}(t+1) = \mathbf{w}(t) + \nu \left[ y(t) - \varphi\big( \mathbf{x}(t), \mathbf{w} \big) \right] \frac{\mathbf{g}\big( \mathbf{x}(t) \big)}{\big\| \mathbf{g}\big( \mathbf{x}(t) \big) \big\|^2}$$
applies in the local-linear case.
For one basis function, projection operator training reduces to Newton's method.
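A sketch of one projection-operator update in the unnormalized case; the division by the squared norm of the basis vector means that with $\nu = 1$ the update zeroes the error on the current data point:

```python
import numpy as np

def projection_step(a, x, y, centers, beta, nu=1.0):
    """One projection-operator update of the linear weights (unnormalized case)."""
    rho = np.exp(-beta * np.sum((centers - x) ** 2, axis=1))
    phi = np.dot(a, rho)
    # Scaling by ||rho||^2 makes nu = 1 fit the current point exactly.
    return a + nu * (y - phi) * rho / np.dot(rho, rho)
```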
Figure 6: Logistic map time series. Repeated iteration of the logistic map generates a chaotic time series. The values lie between zero and one. Displayed here are the 100 training points used to train the examples in this section. The weights $c_i$ are the first five points from this time series.
Examples
Logistic map
The basic properties of radial basis functions can be illustrated with a simple mathematical map, the logistic map, which maps the unit interval onto itself. It can be used to generate a convenient prototype data stream. The logistic map can be used to explore function approximation, time series prediction, and control theory. The map originated from the field of population dynamics and became the prototype for chaotic time series. The map, in the fully chaotic regime, is given by
$$x(t+1) = 4 x(t) \big[ 1 - x(t) \big],$$
where t is a time index. The value of x at time t + 1 is a parabolic function of x at time t. This equation represents the underlying geometry of the chaotic time series generated by the logistic map.
Generation of the time series from this equation is the forward problem. The examples here illustrate the inverse problem: identification of the underlying dynamics, or fundamental equation, of the logistic map from exemplars of the time series. The goal is to find an estimate
$$\varphi\big( x(t) \big) \approx f\big( x(t) \big) = 4 x(t) \big[ 1 - x(t) \big]$$
for f.
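The training data used below can be generated directly from the map (a sketch; the initial condition 0.3 is an arbitrary choice):

```python
import numpy as np

def logistic_series(T, x0=0.3):
    """Iterate x(t+1) = 4 x(t) (1 - x(t)) to produce a chaotic time series."""
    x = np.empty(T)
    x[0] = x0
    for t in range(T - 1):
        x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
    return x

series = logistic_series(101)
X, y = series[:-1], series[1:]   # 100 training pairs (x(t), x(t+1))
```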
Function approximation
Unnormalized radial basis functions
Figure 7: Unnormalized basis functions. The logistic map (blue) and the approximation to the logistic map (red) after one pass through the training set.
The architecture is
$$\varphi(x) = \sum_{i=1}^{N} a_i \, \rho\big( | x - c_i | \big),$$
where
$$\rho\big( | x - c_i | \big) = \exp\left[ -\beta \, (x - c_i)^2 \right].$$
Since the input is a scalar rather than a vector, the input dimension is one. We choose the number of basis functions as N = 5 and the size of the training set to be 100 exemplars generated by the chaotic time series. The weight $\beta$ is taken to be a constant equal to 5. The weights $c_i$ are five exemplars from the time series. The weights $a_i$ are trained with projection operator training:
$$a_i(t+1) = a_i(t) + \nu \left[ x(t+1) - \varphi\big( x(t), \mathbf{w} \big) \right] \frac{\rho\big( | x(t) - c_i | \big)}{\sum_{j=1}^{N} \rho^2\big( | x(t) - c_j | \big)},$$
where the learning rate $\nu$ is taken to be 0.3. The training is performed with one pass through the 100 training points. The rms error is 0.15.
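Putting the pieces together, here is a sketch of this experiment with the stated settings (N = 5, $\beta$ = 5, $\nu$ = 0.3, one pass through 100 points); the zero weight initialization and the initial condition are our own assumptions, so the exact rms error will vary:

```python
import numpy as np

def logistic_series(T, x0=0.3):
    x = np.empty(T)
    x[0] = x0
    for t in range(T - 1):
        x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
    return x

series = logistic_series(101)
X, y = series[:-1], series[1:]          # 100 training pairs (x(t), x(t+1))

beta, nu = 5.0, 0.3                     # settings from the text
c = X[:5].copy()                        # centers: first five exemplars
a = np.zeros(5)                         # linear weights (assumed zero init)

for x_t, y_t in zip(X, y):              # one pass of projection operator training
    rho = np.exp(-beta * (x_t - c) ** 2)
    a += nu * (y_t - np.dot(a, rho)) * rho / np.dot(rho, rho)

phi = lambda x: np.dot(a, np.exp(-beta * (x - c) ** 2))
rms = np.sqrt(np.mean([(y_t - phi(x_t)) ** 2 for x_t, y_t in zip(X, y)]))
print("rms error:", rms)
```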
Figure 8: Normalized basis functions. The logistic map (blue) and the approximation to the logistic map (red) after one pass through the training set. Note the improvement over the unnormalized case.
Normalized radial basis functions
The normalized RBF architecture is
$$\varphi(x) = \frac{\sum_{i=1}^{N} a_i \, \rho\big( | x - c_i | \big)}{\sum_{i=1}^{N} \rho\big( | x - c_i | \big)} = \sum_{i=1}^{N} a_i \, u\big( | x - c_i | \big),$$
where
$$u\big( | x - c_i | \big) = \frac{\rho\big( | x - c_i | \big)}{\sum_{j=1}^{N} \rho\big( | x - c_j | \big)}.$$
Again,
$$\rho\big( | x - c_i | \big) = \exp\left[ -\beta \, (x - c_i)^2 \right].$$
Again, we choose the number of basis functions as five and the size of the training set to be 100 exemplars generated by the chaotic time series. The weight $\beta$ is taken to be a constant equal to 6. The weights $c_i$ are five exemplars from the time series. The weights $a_i$ are trained with projection operator training:
$$a_i(t+1) = a_i(t) + \nu \left[ x(t+1) - \varphi\big( x(t), \mathbf{w} \big) \right] \frac{u\big( | x(t) - c_i | \big)}{\sum_{j=1}^{N} u^2\big( | x(t) - c_j | \big)},$$
where the learning rate $\nu$ is again taken to be 0.3. The training is performed with one pass through the 100 training points. The rms error on a test set of 100 exemplars is 0.084, smaller than the unnormalized error. Normalization yields accuracy improvement. Typically, the accuracy advantage of normalized basis functions over unnormalized ones grows as the input dimensionality increases.
Figure 9: Normalized basis functions. The logistic map (blue) and the approximation to the logistic map (red) as a function of time. Note that the approximation is good for only a few time steps. This is a general characteristic of chaotic time series.
Time series prediction
Once the underlying geometry of the time series is estimated, as in the previous examples, a prediction for the time series can be made by iteration:
$$\hat{x}(0) = x(0), \qquad \hat{x}(t+1) = \varphi\big( \hat{x}(t) \big).$$
A comparison of the actual and estimated time series is displayed in the figure. The estimated time series starts out at time zero with exact knowledge of x(0). It then uses the estimate of the dynamics to update the time series estimate for several time steps.
Note that the estimate is accurate for only a few time steps. This is a general characteristic of chaotic time series, a consequence of the sensitive dependence on initial conditions common to such series: a small initial error is amplified with time. A measure of the divergence of time series with nearly identical initial conditions is the Lyapunov exponent.
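The iteration can be sketched as follows; any fitted one-step model with the same signature, such as the $\varphi$ trained earlier, can be passed in:

```python
import numpy as np

def iterate_prediction(phi, x0, steps):
    """Predict the series by feeding each estimate back into the fitted model."""
    xs = [x0]
    for _ in range(steps):
        xs.append(phi(xs[-1]))
    return np.array(xs)

# Usage with the true map standing in for a fitted phi:
pred = iterate_prediction(lambda x: 4.0 * x * (1.0 - x), 0.3, 10)
```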
Control of a chaotic time series
Figure 10: Control of the logistic map. The system is allowed to evolve naturally for 49 time steps. At time 50 control is turned on. The desired trajectory for the time series is red. The system under control learns the underlying dynamics and drives the time series to the desired output. The architecture is the same as for the time series prediction example.
We assume the output of the logistic map can be manipulated through a control parameter $c[x(t), t]$ such that
$$x(t+1) = 4 x(t) \big[ 1 - x(t) \big] + c[x(t), t].$$
The goal is to choose the control parameter in such a way as to drive the time series to a desired output $d(t)$. This can be done if we choose the control parameter to be
$$c[x(t), t] \equiv -\varphi\big( x(t) \big) + d(t+1),$$
where
$$\varphi\big( x(t) \big) \approx f\big( x(t) \big) = 4 x(t) \big[ 1 - x(t) \big]$$
is an approximation to the underlying natural dynamics of the system. Substituting the control into the map gives $x(t+1) = f[x(t)] - \varphi[x(t)] + d(t+1) \approx d(t+1)$, so the only deviation from the desired trajectory is the approximation error of $\varphi$.
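A sketch of one step of the control loop under these assumptions, with $\varphi$ a fitted approximation such as the one trained earlier:

```python
def controlled_step(x_t, phi, d_next):
    """One step of the controlled logistic map."""
    c_t = -phi(x_t) + d_next                 # control parameter c[x(t), t]
    return 4.0 * x_t * (1.0 - x_t) + c_t     # f(x) - phi(x) + d(t+1) ~ d(t+1)
```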