Function approximation

[Figure: Several progressively more accurate approximations of the step function.]
[Figure: An asymmetrical Gaussian function fit to a noisy curve using regression.]

In general, a function approximation problem asks us to select a function among a well-defined class that closely matches ("approximates") a target function in a task-specific way.[1] The need for function approximations arises in many branches of applied mathematics, and computer science in particular, for example in predicting the growth of microbes in microbiology.[2] Function approximations are used where theoretical models are unavailable or hard to compute.[2]

One can distinguish two major classes of function approximation problems:

First, for known target functions, approximation theory is the branch of numerical analysis that investigates how certain known functions (for example, special functions) can be approximated by a specific class of functions (for example, polynomials or rational functions) that often have desirable properties (inexpensive computation, continuity, integral and limit values, etc.).[3]

Second, the target function, call it g, may be unknown; instead of an explicit formula, only a set of points of the form (x, g(x)) is provided. Depending on the structure of the domain and codomain of g, several techniques for approximating g may be applicable. For example, if g is an operation on the real numbers, techniques of interpolation, extrapolation, regression analysis, and curve fitting can be used. If the codomain (range or target set) of g is a finite set, one is dealing with a classification problem instead.[4]
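
As a brief illustration of this second class, the following is a minimal Python (NumPy) sketch of curve fitting from sample points (x, g(x)); the target function, noise level, and polynomial degree are illustrative assumptions, not details from the sources cited above.

```python
import numpy as np

# Sample points (x, g(x)) from an otherwise unknown target function g;
# exp plus small Gaussian noise is an arbitrary stand-in for real data.
x = np.linspace(0.0, 1.0, 20)
g = np.exp(x) + np.random.default_rng(0).normal(0.0, 0.01, x.size)

# Least-squares curve fitting: the degree-2 polynomial closest to the data.
coeffs = np.polyfit(x, g, deg=2)
print(np.polyval(coeffs, 0.5))  # approximates g(0.5)
```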

To some extent, the different problems (regression, classification, fitness approximation) have received a unified treatment in statistical learning theory, where they are viewed as supervised learning problems.

Polynomial approximation

Traditional polynomial function approximation schemes such as Taylor, Fourier, Padé, Chebyshev, and Remez are deterministic: for a given function and interval, they always generate a polynomial with the same coefficients. Trivedi et al. presented a probabilistic polynomial function approximation scheme based on a perceptron (a feed-forward artificial neural network, ANN), in which the coefficients of the generated polynomial depend on the perceptron model's hyperparameters, such as the learning rate and batch size, and on the randomness of the initialized neurons before training.
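
For illustration, a short NumPy sketch of one such deterministic scheme, Chebyshev interpolation; the target function (exp), the degree, and the interval are arbitrary choices. Rerunning it always reproduces the same polynomial, in contrast to the probabilistic scheme described below.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Deterministic scheme: interpolating exp at Chebyshev points of a fixed
# interval always yields the same degree-3 polynomial coefficients.
p = Chebyshev.interpolate(np.exp, deg=3, domain=[-1, 1])

xs = np.linspace(-1, 1, 101)
print(np.max(np.abs(p(xs) - np.exp(xs))))  # small, reproducible error
```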

ANN-based approximation

[Figure: Polynomial approximation using ANN.]

While artificial neural networks (ANNs) are known for their universal function approximation properties, they are often treated as black boxes and used only to calculate the output value. Trivedi et al.[5][6][7][8] proposed the use of a basic three-layer perceptron consisting of an input layer, a hidden layer, and an output layer, with both the hidden and output layers having linear activations, to generate the coefficients for an approximation polynomial of a given order. In this architecture, the input layer is dynamic, with the input nodes corresponding to the desired polynomial degrees. While having a variable number of hidden layers is possible, they fix it at a single layer with a single node to minimize the computation. They show coefficient calculations for a third-order polynomial ($n = 3$); a univariate function $y = f(x)$; and an input $x$, actual output $y$, and predicted output $\hat{y}$. The input layer weights are $w_1, w_2, w_3$, and the biases of the hidden and output layers are $b_1$ and $b_2$. Thus, the output of the hidden layer is

$$h = w_1 x + w_2 x^2 + w_3 x^3 + b_1.$$

The predicted output is calculated by

$$\hat{y} = w_4 h + b_2 = (w_4 w_1)x + (w_4 w_2)x^2 + (w_4 w_3)x^3 + (w_4 b_1 + b_2),$$

where the products of the layer weights, $w_4 w_1$, $w_4 w_2$, and $w_4 w_3$, are the coefficients of the approximating polynomial of order three, and the constant term is $w_4 b_1 + b_2$. Since the predicted output ($\hat{y}$) is probabilistic, it must be fine-tuned through hyperparameter tuning, as incorrect settings lead to erroneous (inefficient) approximations.
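
A minimal NumPy sketch of this idea follows, assuming a mean-squared-error loss and plain gradient descent; the target function (sin), interval, learning rate, and iteration count are illustrative assumptions rather than details from the cited papers.

```python
import numpy as np

# Three-layer perceptron with linear activations: input nodes x, x^2, x^3,
# one hidden node, one output node. Trained weights yield cubic coefficients.
rng = np.random.default_rng(0)  # changing the seed changes the coefficients

x = np.linspace(-1.0, 1.0, 200)
y = np.sin(x)                          # target function (illustrative choice)
X = np.stack([x, x**2, x**3], axis=1)  # inputs correspond to degrees 1..3

# Random initialization: the source of the scheme's probabilistic behavior.
w = rng.normal(size=3)   # input-to-hidden weights w1, w2, w3
b1 = rng.normal()        # hidden-layer bias
w4 = rng.normal()        # hidden-to-output weight
b2 = rng.normal()        # output-layer bias

lr = 0.05                # learning rate (a hyperparameter the result depends on)
for _ in range(5000):
    h = X @ w + b1       # hidden-layer output (linear activation)
    y_hat = w4 * h + b2  # predicted output (linear activation)
    err = y_hat - y
    # Gradients of the loss 0.5 * mean(err**2), then plain descent updates.
    g_w4 = np.mean(err * h)
    g_b2 = np.mean(err)
    g_w = (X.T @ (err * w4)) / x.size
    g_b1 = np.mean(err * w4)
    w4 -= lr * g_w4
    b2 -= lr * g_b2
    w -= lr * g_w
    b1 -= lr * g_b1

# Read the cubic's coefficients off the trained weights.
a = w4 * w          # a1, a2, a3 = w4*w1, w4*w2, w4*w3
a0 = w4 * b1 + b2   # constant term
print(f"p(x) ~ {a0:.4f} + {a[0]:.4f} x + {a[1]:.4f} x^2 + {a[2]:.4f} x^3")
```

Running the sketch with different seeds or learning rates produces different coefficient sets for the same target function and interval, which is the probabilistic behavior the scheme describes.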

References

  1. ^ Lakemeyer, Gerhard; Sklar, Elizabeth; Sorrenti, Domenico G.; Takahashi, Tomoichi (2007-09-04). RoboCup 2006: Robot Soccer World Cup X. Springer. ISBN 978-3-540-74024-7.
  2. ^ a b Basheer, I.A.; Hajmeer, M. (2000). "Artificial neural networks: fundamentals, computing, design, and application" (PDF). Journal of Microbiological Methods. 43 (1): 3–31. doi:10.1016/S0167-7012(00)00201-3. PMID 11084225. S2CID 18267806.
  3. ^ Mhaskar, Hrushikesh Narhar; Pai, Devidas V. (2000). Fundamentals of Approximation Theory. CRC Press. ISBN 978-0-8493-0939-7.
  4. ^ Charte, David; Charte, Francisco; García, Salvador; Herrera, Francisco (2019-04-01). "A snapshot on nonstandard supervised learning problems: taxonomy, relationships, problem transformations and algorithm adaptations". Progress in Artificial Intelligence. 8 (1): 1–14. arXiv:1811.12044. doi:10.1007/s13748-018-00167-7. ISSN 2192-6360. S2CID 53715158.
  5. ^ Trivedi, Devharsh (30 September 2023). "Brief Announcement: Efficient Probabilistic Approximations for Sign and Compare". International Symposium on Stabilizing, Safety, and Security of Distributed Systems. doi:10.1007/978-3-031-44274-2_21.
  6. ^ "SigML++: Supervised Log Anomaly with Probabilistic Polynomial Approximation". Cryptography. 19 October 2023. doi:10.3390/cryptography7040052.{{cite journal}}: CS1 maint: unflagged free DOI (link)
  7. ^ "chiku: Efficient Probabilistic Polynomial Approximations Library". Proceedings of the 21st International Conference on Security and Cryptography - SECRYPT. doi:10.5220/0012716000003767.
  8. ^ Trivedi, Devharsh (2024). "Towards Efficient Security Analytics". ProQuest Dissertations and Theses (2024): 226.
