Random search

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Nkchenjx (talk | contribs) at 02:34, 30 November 2021 (Variants). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Random search (RS) is a family of numerical optimization methods that do not require the gradient of the problem being optimized, so RS can be used on functions that are not continuous or differentiable. Such optimization methods are also known as direct-search, derivative-free, or black-box methods.

Anderson in 1953 reviewed the progress of methods for finding a maximum or minimum using a series of guesses distributed with a certain order or pattern in the parameter search space, e.g. a confounded design with exponentially distributed spacings/steps.[1] The search proceeds sequentially over each parameter and refines iteratively on the best guesses from the previous sequence. The method was developed to screen experimental conditions in chemical reactions by a number of scientists listed in Anderson's paper. A MATLAB code reproducing this procedure for general non-linear regression of a given mathematical model can be found here (FigNGuess @ GitHub)[2].

The name "random search" is attributed to Rastrigin[3] who made an early presentation of RS along with basic mathematical analysis. RS works by iteratively moving to better positions in the search-space, which are sampled from a hypersphere surrounding the current position.

The algorithm described herein is a type of local random search, where every iteration is dependent on the prior iteration's candidate solution. There are alternative random search methods which sample from the entirety of the search space (for example pure random search or uniform global random search), but these are not described in this article.

Algorithm

Let f: ℝn → ℝ be the fitness or cost function which must be minimized. Let x ∈ ℝn designate a position or candidate solution in the search-space. The basic RS algorithm can then be described as:

  1. Initialize x with a random position in the search-space.
  2. Until a termination criterion is met (e.g. number of iterations performed, or adequate fitness reached), repeat the following:
    1. Sample a new position y from the hypersphere of a given radius surrounding the current position x (see e.g. Marsaglia's technique for sampling a hypersphere).
    2. If f(y) < f(x), then move to the new position by setting x = y.
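The steps above can be sketched in Python as follows; the function name, the fixed radius, and the iteration budget are illustrative choices, not part of the algorithm's definition:

```python
import numpy as np

def random_search(f, x0, radius=0.1, max_iters=1000, rng=None):
    """Basic fixed-step-size local random search (minimization).

    Samples candidates uniformly from the surface of a hypersphere
    of the given radius around the current position.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iters):
        # Marsaglia-style direction: normalize a standard normal vector,
        # which gives a uniformly random point on the unit hypersphere.
        d = rng.standard_normal(x.shape)
        d /= np.linalg.norm(d)
        y = x + radius * d
        fy = f(y)
        if fy < fx:  # move only on strict improvement
            x, fx = y, fy
    return x, fx

# Usage: minimize the sphere function f(v) = sum(v_i^2) from a fixed start.
best_x, best_f = random_search(lambda v: np.sum(v**2), x0=[2.0, -3.0],
                               rng=np.random.default_rng(0))
```

Because the step radius is fixed, this is the FSSRS variant described below; near the optimum, progress stalls once the radius exceeds the remaining distance to the minimizer.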

Variants

A number of RS variants have been introduced in the literature:

  • Friedman–Savage procedure: Sequentially search each parameter with a set of guesses that follow a spacing pattern between the initial guess and the boundaries.[4] An example with exponentially distributed steps can be found in a MATLAB code here (FigNGuess @ GitHub)[2]. This example code converges 1–2 orders of magnitude slower than the Levenberg–Marquardt algorithm.
  • Fixed Step Size Random Search (FSSRS) is Rastrigin's [3] basic algorithm which samples from a hypersphere of fixed radius.
  • Optimum Step Size Random Search (OSSRS) by Schumer and Steiglitz [5] is primarily a theoretical study on how to optimally adjust the radius of the hypersphere so as to allow for speedy convergence to the optimum. An actual implementation of the OSSRS needs to approximate this optimal radius by repeated sampling and is therefore expensive to execute.
  • Adaptive Step Size Random Search (ASSRS) by Schumer and Steiglitz [5] attempts to heuristically adapt the hypersphere's radius: two new candidate solutions are generated, one with the current nominal step size and one with a larger step size. The larger step size becomes the new nominal step size if and only if it leads to a larger improvement. If for several iterations neither of the steps leads to an improvement, the nominal step size is reduced.
  • Optimized Relative Step Size Random Search (ORSSRS) by Schrack and Choit [6] approximates the optimal step size by a simple exponential decrease. However, the formula for computing the decrease factor is somewhat complicated.
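As an illustration of the adaptive scheme, ASSRS can be sketched as below; the growth factor, shrink factor, and patience threshold are illustrative assumptions, not values prescribed by Schumer and Steiglitz:

```python
import numpy as np

def assrs(f, x0, step=1.0, grow=2.0, shrink=0.5, patience=30,
          max_iters=2000, rng=None):
    """Sketch of Adaptive Step Size Random Search (minimization).

    Each iteration tries two candidates along one random direction:
    one at the nominal step size and one at an enlarged step.  The
    enlarged step is adopted as the new nominal step only if it gives
    the larger improvement.  After `patience` iterations without any
    improvement, the nominal step size is reduced.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    stale = 0  # iterations since the last improvement
    for _ in range(max_iters):
        d = rng.standard_normal(x.shape)
        d /= np.linalg.norm(d)
        y1 = x + step * d          # nominal step
        y2 = x + grow * step * d   # enlarged step
        f1, f2 = f(y1), f(y2)
        if f2 < fx and f2 < f1:    # larger step wins: adopt it
            x, fx, step, stale = y2, f2, grow * step, 0
        elif f1 < fx:
            x, fx, stale = y1, f1, 0
        else:
            stale += 1
            if stale >= patience:  # no progress: shrink the step
                step, stale = shrink * step, 0
    return x, fx

# Usage: minimize the sphere function from a fixed starting point.
best_x, best_f = assrs(lambda v: float(np.sum(v**2)), x0=[5.0, 5.0],
                       rng=np.random.default_rng(1))
```

Growing the step while large moves keep paying off and shrinking it when progress stalls lets the search cover distance quickly at first and still refine the solution near the optimum, which is the practical motivation behind the variant.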

References

  1. ^ Anderson, R.L. (1953). "Recent Advances in Finding Best Operating Conditions". Journal of the American Statistical Association. 48 (264): 789–798. doi:10.2307/2281072.
  2. ^ a b "GitHub - Jixin Chen/FigNGuess: A Random Search Algorithm for general mathematical model(s) fittings".
  3. ^ a b Rastrigin, L.A. (1963). "The convergence of the random search method in the extremal control of a many parameter system". Automation and Remote Control. 24 (10): 1337–1342.
  4. ^ Friedman, M.; Savage, L.J. (1947). Planning experiments seeking maxima, chapter 13 of Techniques of Statistical Analysis, edited by Eisenhart, Hastay, and Wallis. McGraw-Hill Book Co., New York.
  5. ^ a b Schumer, M.A.; Steiglitz, K. (1968). "Adaptive step size random search". IEEE Transactions on Automatic Control. 13 (3): 270–276. CiteSeerX 10.1.1.118.9779. doi:10.1109/tac.1968.1098903.
  6. ^ Schrack, G.; Choit, M. (1976). "Optimized relative step size random searches". Mathematical Programming. 10 (1): 230–244. doi:10.1007/bf01580669.