Draft:Universal approximation theorem



In the mathematical theory of artificial neural networks, the universal approximation theorem states that feedforward neural networks constructed of artificial neurons can approximate real-valued continuous functions on compact subsets of $\mathbb{R}^n$, to arbitrary accuracy. The theorem thus implies that simple neural networks can in principle be applied to nearly any problem, as they can approximate essentially any function of interest.

Early versions of the theorem considered networks of arbitrary width. In particular these were considered by George Cybenko in 1989,[1] Kurt Hornik in 1991,[2] and Moshe Leshno et al. in 1993.[3] A simple general formulation was given by Allan Pinkus in 1999,[4] which is the version stated here. Later versions considered the 'dual' problem for networks of arbitrary depth. In particular these were considered by Zhou Lu et al. in 2017[5] and by Boris Hanin and Mark Sellke in 2018.[6] A simple general formulation was given by Patrick Kidger and Terry Lyons in 2020,[7] which is the version stated here.

Several extensions of the theorem exist, such as to discontinuous activation functions, alternative network architectures, other topologies, and noncompact domains.[3][7][8]

Formal statements

The classical arbitrary width version of the theorem may be stated as follows.

Universal approximation theorem; arbitrary width.[4] Let $\sigma : \mathbb{R} \to \mathbb{R}$ be any continuous function (called the activation function). Let $K \subseteq \mathbb{R}^n$ be compact. The space of real-valued continuous functions on $K$ is denoted by $C(K)$. Let $\mathcal{N}$ denote the space of functions of the form

$$ F(x) = \sum_{i=1}^{N} a_i \, \sigma(w_i \cdot x + b_i) $$

for all integers $N \in \mathbb{N}$, real constants $a_i, b_i \in \mathbb{R}$, and real vectors $w_i \in \mathbb{R}^n$ for $i = 1, \ldots, N$.

Then, if and only if $\sigma$ is nonpolynomial, the following statement is true: given any $\varepsilon > 0$ and any $f \in C(K)$, there exists $F \in \mathcal{N}$ such that

$$ |F(x) - f(x)| < \varepsilon $$

for all $x \in K$.

In other words, $\mathcal{N}$ is dense in $C(K)$ with respect to the uniform norm if and only if $\sigma$ is nonpolynomial.
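
As a concrete numerical illustration of functions of the above form (a sketch only; the target function, the choice of $\sigma = \tanh$, and the least-squares fitting procedure below are illustrative assumptions, not part of the theorem), one can sample the $w_i$ and $b_i$ at random and solve for the coefficients $a_i$:

    import numpy as np

    # Target continuous function on the compact set K = [-pi, pi].
    f = lambda x: np.sin(x)

    rng = np.random.default_rng(0)
    N = 200                                  # width: number of hidden neurons
    w = rng.normal(size=N)                   # weights w_i (here n = 1, so scalars)
    b = rng.uniform(-np.pi, np.pi, size=N)   # biases b_i
    sigma = np.tanh                          # a nonpolynomial activation

    x = np.linspace(-np.pi, np.pi, 1000)
    H = sigma(np.outer(x, w) + b)            # hidden outputs sigma(w_i x + b_i)

    # Solve for the a_i by least squares, so that
    # F(x) = sum_i a_i sigma(w_i x + b_i) is close to f(x) on K.
    a, *_ = np.linalg.lstsq(H, f(x), rcond=None)
    F = H @ a

    print("max |F(x) - f(x)| on K:", np.max(np.abs(F - f(x))))

Increasing the width $N$ (with the $a_i$, $w_i$, $b_i$ chosen appropriately) drives the uniform error to zero, which is what the theorem guarantees is always possible for nonpolynomial $\sigma$.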

This theorem extends straightforwardly to networks with any fixed number of hidden layers: the theorem implies that the first layer can approximate any desired function and that the later layers can approximate the identity function. Thus networks of bounded depth and arbitrary width may approximate any continuous function.

The 'dual' version of the theorem considers networks of bounded width and arbitrary depth. Unlike the previous theorem, it gives sufficient but not necessary conditions.

Universal approximation theorem; arbitrary depth.[7] Let $\sigma : \mathbb{R} \to \mathbb{R}$ be any nonaffine continuous function which is continuously differentiable at at least one point, with nonzero derivative at that point. Let $K \subseteq \mathbb{R}^n$ be compact. The space of real vector-valued continuous functions on $K$ is denoted by $C(K; \mathbb{R}^m)$. Let $\mathcal{N}$ denote the space of feedforward neural networks with $n$ input neurons, $m$ output neurons, and an arbitrary number of hidden layers each with $n + m + 2$ neurons, such that every hidden neuron has activation function $\sigma$ and every output neuron has the identity as its activation function. Then given any $\varepsilon > 0$ and any $f \in C(K; \mathbb{R}^m)$, there exists $F \in \mathcal{N}$ such that

$$ \| F(x) - f(x) \| < \varepsilon $$

for all $x \in K$.

In other words, $\mathcal{N}$ is dense in $C(K; \mathbb{R}^m)$ with respect to the uniform norm.
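
To make the architecture concrete (a sketch only; the use of PyTorch and the specific values of $n$, $m$, and the depth are illustrative assumptions, not part of the theorem), a network in the class $\mathcal{N}$ has hidden layers of the fixed width $n + m + 2$, with the depth left free:

    import torch
    from torch import nn

    n, m, depth = 3, 2, 8    # input size, output size, number of hidden layers
    width = n + m + 2        # the fixed width in the arbitrary-depth theorem

    # Hidden neurons use a nonaffine activation (here tanh); output neurons
    # use the identity activation, i.e. the final Linear has no nonlinearity.
    layers = [nn.Linear(n, width), nn.Tanh()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.Tanh()]
    layers += [nn.Linear(width, m)]

    F = nn.Sequential(*layers)

    x = torch.randn(16, n)   # a batch of points in R^n
    print(F(x).shape)        # torch.Size([16, 2]): values in R^m

The theorem states that, for suitable weights, networks of this shape with sufficiently many hidden layers can approximate any $f \in C(K; \mathbb{R}^m)$ to uniform accuracy $\varepsilon$.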

Certain necessary conditions for the bounded width, arbitrary depth case have been established, but there is still a gap between the known sufficient and necessary conditions.[5][6][9]

The arbitrary depth case includes polynomial activation functions, which are specifically excluded from the arbitrary width case. This is an example of a qualitative difference between (particular interpretations of) deep and shallow neural networks.
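
For example, with the polynomial activation $\sigma(x) = x^2$ (which is nonaffine and has nonzero derivative at $x = 1$, so the arbitrary depth theorem applies), a single hidden layer computes

$$ F(x) = \sum_{i=1}^{N} a_i \, (w_i \cdot x + b_i)^2, $$

a polynomial of degree at most two regardless of the width $N$, so such networks cannot be dense in $C(K)$. Composing the same activation across $k$ hidden layers, by contrast, can produce polynomials of degree up to $2^k$, which grows without bound with depth.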

References

  1. ^ Cybenko, G. (1989). "Approximation by superpositions of a sigmoidal function". Mathematics of Control, Signals, and Systems. 2 (4): 303–314. doi:10.1007/BF02551274.
  2. ^ Hornik, Kurt (1991). "Approximation capabilities of multilayer feedforward networks". Neural Networks. 4 (2): 251–257. doi:10.1016/0893-6080(91)90009-T.
  3. ^ a b Leshno, Moshe; Lin, Vladimir Ya.; Pinkus, Allan; Schocken, Shimon (January 1993). "Multilayer feedforward networks with a nonpolynomial activation function can approximate any function". Neural Networks. 6 (6): 861–867. doi:10.1016/S0893-6080(05)80131-5.
  4. ^ a b Pinkus, Allan (January 1999). "Approximation theory of the MLP model in neural networks". Acta Numerica. 8: 143–195. doi:10.1017/S0962492900002919.
  5. ^ a b Lu, Zhou; Pu, Hongming; Wang, Feicheng; Hu, Zhiqiang; Wang, Liwei (2017). "The Expressive Power of Neural Networks: A View from the Width". Advances in Neural Information Processing Systems 30. Curran Associates, Inc.: 6231–6239.
  6. ^ a b Hanin, Boris; Sellke, Mark (March 2018). "Approximating Continuous Functions by ReLU Nets of Minimal Width". arXiv:1710.11278.
  7. ^ a b c Kidger, Patrick; Lyons, Terry (July 2020). Universal Approximation with Deep Narrow Networks. Conference on Learning Theory.
  8. ^ Lin, Hongzhou; Jegelka, Stefanie (2018). ResNet with one-neuron hidden layers is a Universal Approximator. Advances in Neural Information Processing Systems 31. Curran Associates, Inc. pp. 6169–6178.
  9. ^ Johnson, Jesse (2019). Deep, Skinny Neural Networks are not Universal Approximators. International Conference on Learning Representations.
