
Bayesian optimization is a probabilistic approach to global optimization that uses Bayes' rule to account for information gathered about the objective function.

The main advantage of such an approach is that it can account for noise or other uncertainty in evaluations of the objective function. In addition, all evaluations performed so far are used to constrain the optimum, not only the most recent ones.

Such an approach can be particularly useful when the objective function is (numerically or otherwise) expensive to evaluate.

Information about the objective function is represented as a probability distribution over the space of all possible objective functions. After each (potentially noisy or approximate) evaluation of the objective function, this probability distribution is updated using Bayes' rule to account for the new information.
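
For illustration, suppose the distribution over functions is a Gaussian process and the evaluation noise is Gaussian; these choices are assumptions of the sketch below, not requirements of the framework. Bayes' rule then has a closed form, implemented here in Python (all function names are hypothetical):

    import numpy as np

    def rbf_kernel(xa, xb, length_scale=1.0, variance=1.0):
        # Squared-exponential covariance between two sets of 1-D points.
        d2 = (xa[:, None] - xb[None, :]) ** 2
        return variance * np.exp(-0.5 * d2 / length_scale**2)

    def gp_update(x_obs, y_obs, x_query, noise_var=0.1):
        # Bayes' rule for a Gaussian-process prior with Gaussian evaluation
        # noise: returns the posterior mean and covariance at x_query.
        K = rbf_kernel(x_obs, x_obs) + noise_var * np.eye(len(x_obs))
        K_s = rbf_kernel(x_obs, x_query)
        K_ss = rbf_kernel(x_query, x_query)
        alpha = np.linalg.solve(K, y_obs)
        mean = K_s.T @ alpha
        cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
        return mean, cov

Each call to gp_update conditions the prior on all observations gathered so far, which is why earlier evaluations are never discarded.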

As the optimization algorithm progresses, it will generally strive to narrow the range of plausible objective-function values around the global extremum, while allowing larger uncertainty elsewhere.
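
In practice, this tendency comes from the rule used to select the next evaluation point. As a minimal sketch (the lower-confidence-bound rule shown is one common choice, assumed here purely for illustration):

    import numpy as np

    def next_evaluation_point(x_candidates, post_mean, post_std, kappa=2.0):
        # Prefer candidates with a low posterior mean (near the presumed
        # minimum) or a high posterior standard deviation (regions where
        # the objective function is still poorly constrained).
        return x_candidates[np.argmin(post_mean - kappa * post_std)]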

Mathematical formulation

A traditional optimization problem can be stated as follows: given a set A and a real-valued objective function  f : A → ℝ, we want to find an x0 ∈ A such that  f(x0) ≤ f(x) for all x ∈ A.

In order to state the problem in a probabilistic framework, the following additional elements are needed:

  • a prior probability distribution p(f) over the space F of all possible objective functions.
  • a conditional probability distribution p(y | f, x) giving the probability of the outcome y of an imprecise evaluation of  f at x.
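
Neither distribution is fixed by the framework. As a hedged example, one common concrete choice takes p(f) to be a Gaussian process and p(y | f, x) to be Gaussian noise around f(x):

    import numpy as np

    def sample_prior(xs, n_samples=5, length_scale=1.0):
        # Draw sample functions from a Gaussian-process prior p(f),
        # evaluated on the grid of points xs.
        d2 = (xs[:, None] - xs[None, :]) ** 2
        K = np.exp(-0.5 * d2 / length_scale**2) + 1e-9 * np.eye(len(xs))
        return np.linalg.cholesky(K) @ np.random.randn(len(xs), n_samples)

    def likelihood(y, f_at_x, noise_std=0.3):
        # p(y | f, x): Gaussian observation noise around the true value f(x).
        return (np.exp(-0.5 * ((y - f_at_x) / noise_std) ** 2)
                / (noise_std * np.sqrt(2.0 * np.pi)))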

Assuming we can find a minimum of any f ∈ F, we can also find a minimum of the expected objective function E[f]. Before performing any evaluations of  f, this minimum is our best guess of the actual minimum.

Given p(f), we have an expected objective function

E[f](x) = ∫F f(x) p(f) df,

where the integral is taken over the space F of objective functions.
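
When this integral has no closed form, the expectation can be approximated by averaging sample functions drawn from p(f). A sketch under the same Gaussian-process assumption as above (for a zero-mean prior the estimate tends to the zero function, so no point is a better initial guess than any other until evaluations arrive):

    import numpy as np

    def expected_objective(xs, n_samples=5000, length_scale=1.0):
        # Monte Carlo estimate of E[f](x): average many sample functions
        # drawn from a zero-mean Gaussian-process prior p(f).
        d2 = (xs[:, None] - xs[None, :]) ** 2
        K = np.exp(-0.5 * d2 / length_scale**2) + 1e-9 * np.eye(len(xs))
        samples = np.linalg.cholesky(K) @ np.random.randn(len(xs), n_samples)
        return samples.mean(axis=1)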