Neural operators
Neural operators are a class of deep learning architectures designed to learn maps between infinite-dimensional function spaces. Unlike traditional finite-dimensional artificial neural networks, neural operators learn operators acting directly on function spaces; they can therefore receive input functions and be evaluated at any discretization.[1]
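Concretely, operator learning can be framed as a supervised regression problem over functions (a sketch in the spirit of the formulation in the cited work;[1] the mean-squared loss shown is one common choice, and the symbol names are illustrative). Given sample pairs \((a_i, u_i)\) with \(u_i = \mathcal{G}^\dagger(a_i)\) for an unknown ground-truth operator \(\mathcal{G}^\dagger : \mathcal{A} \to \mathcal{U}\) between function spaces, one fits a parametric operator \(\mathcal{G}_\theta\) by minimizing an empirical risk such as

\[
\min_\theta \; \frac{1}{N} \sum_{i=1}^{N} \big\| \mathcal{G}_\theta(a_i) - u_i \big\|_{\mathcal{U}}^{2}.
\]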
Neural operators have primarily been applied to approximating solutions of partial differential equations (PDEs),[1] where they have been shown to achieve significant speed-ups over traditional numerical solvers.[2]
Definition and formulation
Architecturally, neural operators are similar to feed-forward neural networks in that they consist of alternating linear maps and nonlinearities. Because neural operators act on and output functions, the linear maps are instead formulated as integral operators, alternated with point-wise nonlinearities.[1]
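In a common simplified form of this construction (a sketch following the cited work;[1] the kernel may additionally depend on the input function, which is omitted here), the input function \(a\) is first lifted to a higher-dimensional representation \(v_0 = P(a)\), then updated through layers of the form

\[
v_{t+1}(x) = \sigma\!\Big( W\, v_t(x) + \int_{D} \kappa_\theta(x, y)\, v_t(y)\, \mathrm{d}y \Big),
\]

where \(W\) is a point-wise linear map, \(\kappa_\theta\) is a learned kernel, and \(\sigma\) is a point-wise nonlinearity; a final projection \(u = Q(v_T)\) produces the output function. In practice, the integral is approximated by a quadrature sum over the discretization points. The following is a minimal NumPy sketch of one such layer on a fixed one-dimensional grid; all names (kappa, W, b) are illustrative stand-ins for learned quantities, not the API of any neural-operator library.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 64, 16                        # grid points, channel width
v = rng.standard_normal((n, d))      # v_t sampled on the grid: v[j] ~ v_t(x_j)

W = rng.standard_normal((d, d)) / np.sqrt(d)   # point-wise linear map (stand-in for learned weights)
b = rng.standard_normal(d)                     # bias
kappa = rng.standard_normal((n, n, d, d)) / d  # kernel values kappa(x_i, x_j) (stand-in for a learned kernel)

# (K v_t)(x_i) ~ (1/n) * sum_j kappa(x_i, x_j) v_t(x_j): quadrature approximation of the integral
Kv = np.einsum('ijkl,jl->ik', kappa, v) / n

# v_{t+1}(x) = sigma(W v_t(x) + (K v_t)(x) + b), here with sigma = ReLU
v_next = np.maximum(v @ W.T + Kv + b, 0.0)
print(v_next.shape)  # (64, 16): the layer maps grid functions to grid functions
```

Because the kernel is a function of the continuous coordinates \(x\) and \(y\) rather than of grid indices, the same learned layer can in principle be evaluated on a different discretization of the domain.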
References
- ^ a b c Kovachki, Nikola; Li, Zongyi; Liu, Burigede; Azizzadenesheli, Kamyar; Bhattacharya, Kaushik; Stuart, Andrew; Anandkumar, Anima (2023). "Neural operator: Learning maps between function spaces" (PDF). Journal of Machine Learning Research. 24: 1–97.
- ^ Li, Zongyi; Kovachki, Nikola; Azizzadenesheli, Kamyar; Liu, Burigede; Bhattacharya, Kaushik; Stuart, Andrew; Anandkumar, Anima (2020). "Fourier neural operator for parametric partial differential equations" (PDF). arXiv:2010.08895.