Neural operators

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Mocl125 (talk | contribs) at 01:32, 21 August 2023 (begin definition and formulation of neural operators). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Neural operators are a class of deep learning architectures designed to learn maps between infinite-dimensional function spaces. In contrast to traditional finite-dimensional artificial neural networks, neural operators learn operators directly on function spaces: they can take functions as input and can be evaluated at any discretization of the domain.[1]

Neural operators have been applied primarily to approximating solutions of partial differential equations (PDEs),[1] and have been shown to achieve a significant speed-up over traditional numerical solvers.[2]

Definition and formulation

Architecturally, neural operators are similar to feed-forward neural networks in that they consist of alternating linear maps and non-linearities. Since neural operators act on and output functions, however, the linear maps are formulated as integral operators, so a neural operator is a sequence of alternating linear integral operators and point-wise non-linearities.[1]
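This layer structure can be illustrated with a minimal sketch in NumPy. The code below is an illustration, not an implementation from the cited papers: the function and variable names (neural_operator_layer, W, kernel) are hypothetical, the kernel is an arbitrary example, and the integral operator is approximated by a simple Riemann sum on a uniform 1-D grid. Because the weights W and the kernel function are defined independently of the grid, the same layer can be evaluated at any discretization of the input function.

```python
import numpy as np

def neural_operator_layer(v, grid, W, kernel, activation=np.tanh):
    """One neural operator layer (illustrative sketch):

        v_out(x) = activation( W v(x) + \int kappa(x, y) v(y) dy ),

    with the integral approximated by a Riemann sum over the grid.

    v      : (n, d) array, the input function sampled at n grid points,
             with d channels per point
    grid   : (n,) array of sample locations on a uniform 1-D grid
    W      : (d, d) point-wise linear weight matrix
    kernel : callable kappa(x, y) returning a (d, d) matrix
    """
    n, d = v.shape
    dx = (grid[-1] - grid[0]) / (n - 1)  # uniform grid spacing
    out = np.empty_like(v)
    for i, x in enumerate(grid):
        # Quadrature approximation of the linear integral operator
        integral = sum(kernel(x, y) @ v[j] for j, y in enumerate(grid)) * dx
        # Point-wise linear map plus integral term, then non-linearity
        out[i] = activation(W @ v[i] + integral)
    return out

# Example usage with an arbitrary smooth kernel
rng = np.random.default_rng(0)
d = 2
W = rng.standard_normal((d, d))
A = rng.standard_normal((d, d))
kernel = lambda x, y: np.exp(-(x - y) ** 2) * A

grid = np.linspace(0.0, 1.0, 16)
v = np.sin(2 * np.pi * grid)[:, None] * np.ones((1, d))
out = neural_operator_layer(v, grid, W, kernel)
```

The same weights W and kernel can be reused on a finer grid sampling of the same input function, which is the discretization-invariance property described above.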

References

  1. ^ a b c Kovachki, Nikola; Li, Zongyi; Liu, Burigede; Azizzadenesheli, Kamyar; Bhattacharya, Kaushik; Stuart, Andrew; Anandkumar, Anima. "Neural operator: Learning maps between function spaces" (PDF). Journal of Machine Learning Research. 24: 1–97.
  2. ^ Li, Zongyi; Kovachki, Nikola; Azizzadenesheli, Kamyar; Liu, Burigede; Bhattacharya, Kaushik; Stuart, Andrew; Anandkumar, Anima (2020). "Fourier neural operator for parametric partial differential equations" (PDF). arXiv preprint arXiv:2010.08895.