Griewank function

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by TFB81 at 09:50, 10 December 2024 (Added paragraph about non-smooth variants).
[Figure: plot of the first-order Griewank function]

In mathematics, the Griewank function is a test function commonly used to evaluate the performance of optimization algorithms.[1] It is defined as

f(\mathbf{x}) = 1 + \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\!\left( \frac{x_i}{\sqrt{i}} \right).

The following sections describe the special cases of the first-, second-, and third-order Griewank functions, together with their plots.
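The standard n-dimensional definition, f(x) = 1 + (1/4000) Σ xᵢ² − Π cos(xᵢ/√i), translates directly into code. The following is a minimal Python sketch:

```python
import math

def griewank(xs):
    """n-dimensional Griewank function:
    f(x) = 1 + (1/4000) * sum(x_i^2) - prod(cos(x_i / sqrt(i))), i = 1..n."""
    s = sum(x * x for x in xs) / 4000.0
    p = 1.0
    for i, x in enumerate(xs, start=1):
        p *= math.cos(x / math.sqrt(i))
    return 1.0 + s - p
```

Since the cosine product is at most 1 and the quadratic term is nonnegative, the function is bounded below by 0, with its global minimum f(0, …, 0) = 0 at the origin.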

First-order Griewank function

The first-order (one-dimensional) Griewank function

f(x) = 1 + \frac{x^2}{4000} - \cos(x)

has multiple maxima and minima.[2] Setting its derivative to zero gives

f'(x) = \frac{x}{2000} + \sin(x) = 0,

whose roots in the interval [−100, 100] can be found by numerical methods.

In the interval [−10000,10000], the Griewank function has 6365 critical points.
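The critical points of the first-order function can be located numerically by scanning for sign changes of the derivative f'(x) = x/2000 + sin(x) on a fine grid and refining each bracket by bisection. A minimal Python sketch:

```python
import math

def dgriewank1(x):
    """Derivative of the first-order Griewank function: f'(x) = x/2000 + sin(x)."""
    return x / 2000.0 + math.sin(x)

def bisect(f, a, b, tol=1e-12):
    """Simple bisection; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0:
            return m
        if fa * fm < 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

def critical_points(lo=-100.0, hi=100.0, steps=4000):
    """Bracket sign changes of f' on a uniform grid, then refine by bisection."""
    h = (hi - lo) / steps
    roots = []
    for i in range(steps):
        a, b = lo + i * h, lo + (i + 1) * h
        fa, fb = dgriewank1(a), dgriewank1(b)
        if fa == 0.0:
            roots.append(a)
        elif fa * fb < 0:
            roots.append(bisect(dgriewank1, a, b))
    if dgriewank1(hi) == 0.0:
        roots.append(hi)
    return roots
```

Because |sin(x)| ≤ 1, every critical point lies near a multiple of π, so a grid step well below π (here 0.05) brackets each root exactly once.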

Second-order Griewank function

The second-order (two-dimensional) Griewank function is

f(x_1, x_2) = 1 + \frac{x_1^2 + x_2^2}{4000} - \cos(x_1) \cos\!\left( \frac{x_2}{\sqrt{2}} \right).

[Figures: 3D surface plot and contour plot of the second-order Griewank function]
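A brute-force scan of the two-dimensional function over a coarse grid confirms that, despite its many local minima, the global minimum of 0 sits at the origin. A minimal Python sketch:

```python
import math

def griewank2(x1, x2):
    """Second-order Griewank function."""
    return (1.0 + (x1 * x1 + x2 * x2) / 4000.0
            - math.cos(x1) * math.cos(x2 / math.sqrt(2.0)))

# Evaluate on a 201 x 201 grid over [-10, 10]^2 (step 0.1) and keep the
# smallest value together with its location.
best = min((griewank2(a / 10.0, b / 10.0), a / 10.0, b / 10.0)
           for a in range(-100, 101) for b in range(-100, 101))
```

The grid's many near-tied local minima are precisely what makes the Griewank function a useful stress test for global optimizers.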

Third-order Griewank function

The third-order (three-dimensional) Griewank function is

f(x_1, x_2, x_3) = 1 + \frac{x_1^2 + x_2^2 + x_3^2}{4000} - \cos(x_1) \cos\!\left( \frac{x_2}{\sqrt{2}} \right) \cos\!\left( \frac{x_3}{\sqrt{3}} \right).

[Animation: third-order Griewank function (Maple)]

Non-smooth variant of the Griewank function

A non-smooth version of the Griewank function[3] has been developed to emulate the characteristics of objective functions frequently encountered in machine learning (ML) optimization problems. These functions often exhibit piecewise smooth or non-smooth behavior due to the presence of regularization terms, activation functions, or constraints in learning models.

By incorporating non-smooth elements, such as absolute values in the cosine and sine terms, this variant mimics the irregularities present in many ML loss landscapes. It offers a robust benchmark for evaluating optimization algorithms, especially those designed to handle non-convex, non-smooth, or high-dimensional problems, including sub-gradient, hybrid, and evolutionary methods.

The function's resemblance to practical ML objective functions makes it particularly valuable for testing the robustness and efficiency of algorithms in tasks such as hyperparameter tuning, neural network training, and constrained optimization.
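The published definition from [3] is not reproduced here. Purely as an illustration of the idea described above, the sketch below shows one hypothetical way to introduce absolute-value non-smoothness into the Griewank form; this is an assumed variant for demonstration, not the function defined by Bosse and Bücker.

```python
import math

def griewank_nonsmooth(xs):
    """Hypothetical non-smooth Griewank-style function (illustration only,
    NOT the definition from reference [3]): the smooth cosine product is
    replaced by a product of |cos| terms, which introduces kinks wherever
    any cos(x_i / sqrt(i)) crosses zero."""
    s = sum(x * x for x in xs) / 4000.0
    p = 1.0
    for i, x in enumerate(xs, start=1):
        p *= abs(math.cos(x / math.sqrt(i)))
    return 1.0 + s - p
```

The kinks at the zero crossings of each cosine factor are what defeat gradient-based methods and motivate sub-gradient or derivative-free approaches on landscapes of this kind.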


References

  1. ^ Griewank, A. O. (1981). "Generalized Descent for Global Optimization". Journal of Optimization Theory and Applications 34: 11–39.
  2. ^ Locatelli, M. (2003). "A Note on the Griewank Test Function". Journal of Global Optimization 25: 169–174.
  3. ^ Bosse, Torsten F.; Bücker, H. Martin (2024-10-29). "A piecewise smooth version of the Griewank function". Optimization Methods and Software: 1–11. doi:10.1080/10556788.2024.2414186. ISSN 1055-6788.