Neural operators

From Wikipedia, the free encyclopedia

Neural operators are a class of deep learning architectures designed to learn maps between infinite-dimensional function spaces. Neural operators extend traditional artificial neural networks, which focus on learning mappings between finite-dimensional Euclidean spaces or finite sets. Neural operators directly learn operators in function spaces: they can receive input functions, and the output function can be evaluated at any discretization.[1]

The primary application of neural operators is in learning surrogate maps for the solution operators of partial differential equations (PDEs)[1]. Standard PDE solvers can be time-consuming and computationally intensive, especially for complex systems. Neural operators have demonstrated improved performance in solving PDEs compared to existing machine learning methodologies, while being significantly faster than numerical solvers.[2]

Operator learning

Understanding and mapping relationships between function spaces has many applications in engineering and the sciences. In particular, one can cast the problem of solving partial differential equations as identifying a map between function spaces, such as from an initial condition to a time-evolved state. For other PDEs, this map takes an input coefficient function and outputs a solution function. Operator learning is a machine learning paradigm for learning such solution operators, which map input functions to output functions.

Using traditional machine learning methods, addressing this problem would involve discretizing the infinite-dimensional input and output function spaces into finite-dimensional grids and applying standard learning models, such as neural networks. This approach reduces operator learning to finite-dimensional function learning, but it has limitations: in particular, the learned model is tied to a fixed grid and does not readily generalize to discretizations beyond the one used in training.
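To make this limitation concrete, the following is a minimal sketch of the traditional approach in PyTorch, with hypothetical sizes (n_grid, the hidden widths, and the batch size are illustrative, not taken from the cited works). Input and output functions are sampled on a fixed grid, and a standard multilayer perceptron is fit between the resulting finite-dimensional vectors; the grid size is baked into the first and last layers, so the map cannot be queried at a different resolution without retraining.

```python
import torch
import torch.nn as nn

n_grid = 64  # fixed discretization shared by training and inference

# Standard MLP between flattened grid values of the input and output functions.
mlp = nn.Sequential(
    nn.Linear(n_grid, 256), nn.GELU(),
    nn.Linear(256, 256), nn.GELU(),
    nn.Linear(256, n_grid),  # output function sampled on the same fixed grid
)

a = torch.randn(8, n_grid)  # batch of input functions sampled on the grid
u = mlp(a)                  # predicted output functions, tied to n_grid points
```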

The primary properties of neural operators that differentiate them from traditional neural networks are discretization invariance and discretization convergence.[1] Unlike conventional neural networks, which are tied to the discretization of their training data, neural operators can adapt to various discretizations without re-training. This property improves the robustness and applicability of neural operators in different scenarios, providing consistent performance across different resolutions and grids.

Definition and formulation

Architecturally, neural operators are similar to feed-forward neural networks in the sense that they consist of alternating linear maps and nonlinearities. Since neural operators act on and output functions, they are instead formulated as a sequence of alternating linear integral operators on function spaces and pointwise nonlinearities.[1] Because this architecture is analogous to that of finite-dimensional neural networks, similar universal approximation theorems have been proven for neural operators. In particular, it has been shown that neural operators can approximate any continuous operator on a compact set.

Neural operators seek to approximate some operator $\mathcal{G} : \mathcal{A} \to \mathcal{U}$ between function spaces $\mathcal{A}$ and $\mathcal{U}$ by building a parametric map $\mathcal{G}_\phi : \mathcal{A} \to \mathcal{U}$. Let $a \in \mathcal{A}$, where $\mathcal{A}$ denotes some input function space. Let $\mathcal{U}$ denote the output space and let $u \in \mathcal{U}$. Neural operators are generally defined in the form

$$\mathcal{G}_\phi := \mathcal{Q} \circ \sigma(W_T + \mathcal{K}_T + b_T) \circ \cdots \circ \sigma(W_1 + \mathcal{K}_1 + b_1) \circ \mathcal{P},$$

where $\mathcal{P}$ and $\mathcal{Q}$ are the lifting (lifting the codomain of the input function to a higher-dimensional space) and projection (projecting the codomain of the intermediate function to the output codomain) operators, respectively. These operators act pointwise on functions and are typically parametrized as multilayer perceptrons. $\sigma$ is a pointwise nonlinearity, such as a rectified linear unit (ReLU) or a Gaussian error linear unit (GeLU). Each layer $t = 1, \ldots, T$ has a respective local operator $W_t$ (usually parameterized by a pointwise neural network), a kernel integral operator $\mathcal{K}_t$, and a bias function $b_t$. Given some intermediate functional representation $v_t$ with domain $D$ in a hidden layer, a kernel integral operator $\mathcal{K}_\phi$ is defined as

$$(\mathcal{K}_\phi v_t)(x) := \int_D \kappa_\phi(x, y, v_t(x), v_t(y)) \, v_t(y) \, dy,$$

where the integral kernel $\kappa_\phi$ is parametrized by $\phi$ and can be instantiated in many ways. The varying parameterizations of neural operators typically differ in their parameterization of $\kappa_\phi$.
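As an illustration, the following is a minimal sketch of a single layer $\sigma(W v + \mathcal{K} v + b)$ on a one-dimensional domain, written in PyTorch. The class name, hidden sizes, and the Riemann-sum approximation of the integral are illustrative choices rather than prescriptions of the formulation above; the kernel $\kappa_\phi$ is represented here by a small MLP over pairs of grid points.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelIntegralLayer(nn.Module):
    """One layer sigma(W v + K v + b) with a learned integral kernel kappa_phi."""

    def __init__(self, channels, hidden=64):
        super().__init__()
        # kappa_phi(x, y): maps a pair of points to a channels x channels matrix
        self.kappa = nn.Sequential(
            nn.Linear(2, hidden), nn.GELU(),
            nn.Linear(hidden, channels * channels),
        )
        self.W = nn.Linear(channels, channels)         # pointwise local operator
        self.bias = nn.Parameter(torch.zeros(channels))
        self.channels = channels

    def forward(self, v, grid):
        # v: (batch, n, channels) function values; grid: (n, 1) points in [0, 1]
        n = grid.shape[0]
        pairs = torch.cat(
            [grid.repeat_interleave(n, dim=0), grid.repeat(n, 1)], dim=-1)
        k = self.kappa(pairs).view(n, n, self.channels, self.channels)
        # Riemann-sum approximation of (K v)(x_i) = integral of kappa(x_i, y) v(y) dy
        Kv = torch.einsum("ijcd,bjd->bic", k, v) / n
        return F.gelu(self.W(v) + Kv + self.bias)

# Example: the same layer acts on functions sampled at any resolution n.
layer = KernelIntegralLayer(channels=32)
grid = torch.linspace(0, 1, 100).unsqueeze(-1)
v = torch.randn(4, 100, 32)
out = layer(v, grid)  # shape (4, 100, 32)
```

Evaluating the kernel on all pairs of grid points costs quadratically in the number of points per layer, which motivates more structured parameterizations of $\kappa_\phi$ such as the Fourier neural operator discussed next.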

There have been various parameterizations of neural operators for different applications.[2][3] The most popular instantiation is the Fourier neural operator (FNO). FNO takes $\kappa_\phi(x, y, v_t(x), v_t(y)) := \kappa_\phi(x - y)$ and, by applying the convolution theorem, arrives at the following parameterization of the kernel integration:

$$(\mathcal{K}_\phi v_t)(x) = \mathcal{F}^{-1}\bigl(R_\phi \cdot (\mathcal{F} v_t)\bigr)(x),$$

where $\mathcal{F}$ represents the Fourier transform and $R_\phi$ represents the Fourier transform of some periodic function $\kappa_\phi$. That is, FNO parameterizes the kernel integration directly in Fourier space, truncating the discrete Fourier transform to some specified number of lowest frequency modes.
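The following is a minimal sketch of this Fourier-space parameterization for a one-dimensional, real-valued input, again in PyTorch; the class name, initialization scale, and mode count are illustrative assumptions. The input function is transformed with an FFT, the lowest `modes` frequencies are multiplied by learned complex weights playing the role of $R_\phi$, the remaining frequencies are set to zero, and an inverse FFT maps back to the spatial domain.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """FNO-style kernel integration: pointwise multiplication in Fourier space."""

    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes  # number of retained (lowest) Fourier frequencies
        scale = 1.0 / (channels * channels)
        # R_phi: learned complex weights, one channels x channels matrix per mode
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, v):
        # v: (batch, channels, n) function values on a uniform grid
        v_hat = torch.fft.rfft(v)                      # Fourier transform
        out_hat = torch.zeros_like(v_hat)
        out_hat[..., :self.modes] = torch.einsum(      # multiply retained modes by R_phi
            "bim,iom->bom", v_hat[..., :self.modes], self.weights)
        return torch.fft.irfft(out_hat, n=v.size(-1))  # inverse transform

# The same weights apply at any resolution with at least `modes` frequencies:
layer = SpectralConv1d(channels=32, modes=16)
out_coarse = layer(torch.randn(4, 32, 64))   # 64-point discretization
out_fine = layer(torch.randn(4, 32, 256))    # 256-point discretization
```

Because the learned weights live on a fixed set of Fourier modes rather than on a particular grid, the same layer can be applied to inputs sampled at different resolutions, which is one way the discretization invariance described above can be realized.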

Training

Training neural operators is similar to the training process for a traditional neural network. Neural operators are typically trained in some Lp norm or Sobolev norm. In particular, for a dataset $\{(a_i, u_i)\}_{i=1}^N$ of size $N$, neural operators minimize

$$\mathcal{L}(\phi) := \sum_{i=1}^N \|u_i - \mathcal{G}_\phi(a_i)\|_{\mathcal{U}},$$

in some norm $\|\cdot\|_{\mathcal{U}}$ on the output space $\mathcal{U}$. Neural operators can be trained directly using backpropagation and gradient descent-based methods.
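The sketch below illustrates such a training loop in PyTorch under stated assumptions: `model` is any neural operator (for example, a stack of the layers sketched above), `loader` yields hypothetical (input function, output function) pairs sampled on a grid, and the squared L2 norm is approximated by a mean over grid points.

```python
import torch

def l2_loss(pred, target):
    # Discretized squared L2 norm: mean over grid points, summed over the batch
    return ((pred - target) ** 2).mean(dim=-1).sum()

def train(model, loader, epochs=100, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for a, u in loader:              # (input function, output function) pairs
            opt.zero_grad()
            loss = l2_loss(model(a), u)
            loss.backward()              # standard backpropagation
            opt.step()
    return model
```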

References

  1. ^ a b c d Kovachki, Nikola; Li, Zongyi; Liu, Burigede; Azizzadenesheli, Kamyar; Bhattacharya, Kaushik; Stuart, Andrew; Anandkumar, Anima. "Neural operator: Learning maps between function spaces" (PDF). Journal of Machine Learning Research. 24: 1-97.
  2. ^ a b Li, Zongyi; Kovachki, Nikola; Azizzadenesheli, Kamyar; Liu, Burigede; Bhattacharya, Kaushik; Stuart, Andrew; Anandkumar, Anima (2020). "Fourier neural operator for parametric partial differential equations" (PDF). arXiv preprint arXiv:2010.08895.
  3. ^ Li, Zongyi; Kovachki, Nikola; Azizzadenesheli, Kamyar; Liu, Burigede; Bhattacharya, Kaushik; Stuart, Andrew; Anandkumar, Anima (2020). "Neural operator: Graph kernel network for partial differential equations" (PDF). arXiv preprint arXiv:2003.03485.