Subgradient method

Subgradient methods are algorithms for solving convex optimization problems that can be used with a non-differentiable objective function. They were originally developed by Shor and others in the 1960s and 1970s. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of steepest descent. Although subgradient methods can be much slower than interior-point methods and Newton's method in practice, they can be immediately applied to a far wider variety of problems and require much less memory. Moreover, by combining the subgradient method with primal or dual decomposition techniques, it is sometimes possible to develop a simple distributed algorithm for a problem.

Basic subgradient update

Let $f : \mathbb{R}^n \to \mathbb{R}$ be a convex function with domain $\mathbb{R}^n$. The subgradient method uses the iteration

$$x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)},$$

where $g^{(k)}$ denotes a subgradient of $f$ at $x^{(k)}$ and $\alpha_k > 0$ is the $k$-th step size. If $f$ is differentiable, its only subgradient is the gradient vector $\nabla f$ itself. It may happen that $-g^{(k)}$ is not a descent direction for $f$ at $x^{(k)}$. We therefore maintain a list $f_{\rm best}$ that keeps track of the lowest objective function value found so far, i.e.

$$f_{\rm best}^{(k)} = \min\left\{ f_{\rm best}^{(k-1)},\, f\!\left(x^{(k)}\right) \right\}.$$
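To make the update concrete, the following minimal sketch (not part of the original article) runs the iteration above on the non-differentiable convex function $f(x) = \lVert x \rVert_1$, taking $\operatorname{sign}(x)$ as a componentwise subgradient and using the diminishing step size $\alpha_k = 1/\sqrt{k}$ discussed under the step size rules below; the function names and parameter choices are illustrative assumptions.

```python
import numpy as np

def f(x):
    # Example non-differentiable convex objective: the l1 norm, f(x) = ||x||_1.
    return np.sum(np.abs(x))

def subgradient(x):
    # sign(x) is a valid subgradient of the l1 norm at x
    # (any value in [-1, 1] works in coordinates where x_i = 0).
    return np.sign(x)

def subgradient_method(x0, num_iters=1000):
    x = np.asarray(x0, dtype=float)
    f_best = f(x)
    for k in range(1, num_iters + 1):
        g = subgradient(x)
        alpha = 1.0 / np.sqrt(k)       # a nonsummable diminishing step size (see rules below)
        x = x - alpha * g              # basic update: x^(k+1) = x^(k) - alpha_k g^(k)
        f_best = min(f_best, f(x))     # f_best^(k) = min{ f_best^(k-1), f(x^(k)) }
    return f_best

print(subgradient_method([3.0, -2.0, 5.0]))  # approaches the optimal value 0
```

Tracking $f_{\rm best}$ matters here because a single iteration can increase the objective even though the best value found converges.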

Step size rules

Many different types of step size rules are used in the subgradient method. Five basic step size rules for which convergence is guaranteed are:

  • Constant step size, $\alpha_k = \alpha$.
  • Constant step length, $\gamma_k = \gamma$, which gives $\alpha_k = \gamma / \lVert g^{(k)} \rVert_2$.
  • Square summable but not summable step size, i.e. any step sizes satisfying
    $$\alpha_k \geq 0, \qquad \sum_{k=1}^\infty \alpha_k^2 < \infty, \qquad \sum_{k=1}^\infty \alpha_k = \infty.$$
  • Nonsummable diminishing, i.e. any step sizes satisfying
    $$\alpha_k \geq 0, \qquad \lim_{k \to \infty} \alpha_k = 0, \qquad \sum_{k=1}^\infty \alpha_k = \infty.$$
  • Nonsummable diminishing step lengths, i.e. $\alpha_k = \gamma_k / \lVert g^{(k)} \rVert_2$, where
    $$\gamma_k \geq 0, \qquad \lim_{k \to \infty} \gamma_k = 0, \qquad \sum_{k=1}^\infty \gamma_k = \infty.$$

Notice that the step sizes listed above are determined before the algorithm is run and do not depend on any data computed during the algorithm. This is very different from the step size rules found in standard descent methods, which depend on the current point and search direction.
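As an illustration (not from the original article), the five rules can be written as simple functions of the iteration counter $k$ and the current subgradient $g^{(k)}$; the parameter names and the particular sequences $a/(b + k)$ and $a/\sqrt{k}$ are common examples of sequences satisfying the respective conditions, not choices prescribed by the article.

```python
import numpy as np

# Illustrative implementations of the five step size rules above.
# Parameters alpha, gamma, a, b are assumptions; k is 1-indexed.

def constant_step_size(k, g, alpha=0.1):
    return alpha                                   # alpha_k = alpha

def constant_step_length(k, g, gamma=0.1):
    return gamma / np.linalg.norm(g)               # gives ||x^(k+1) - x^(k)||_2 = gamma

def square_summable_not_summable(k, g, a=1.0, b=0.0):
    return a / (b + k)                             # sum alpha_k^2 < inf, sum alpha_k = inf

def nonsummable_diminishing(k, g, a=1.0):
    return a / np.sqrt(k)                          # alpha_k -> 0, sum alpha_k = inf

def nonsummable_diminishing_step_length(k, g, a=1.0):
    gamma_k = a / np.sqrt(k)                       # gamma_k -> 0, sum gamma_k = inf
    return gamma_k / np.linalg.norm(g)             # alpha_k = gamma_k / ||g^(k)||_2
```

Any of these could be substituted for the hard-coded $1/\sqrt{k}$ rule in the sketch above.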

Convergence results

For constant step size and constant step length, the subgradient algorithm is guaranteed to converge to within some range of the optimal value, i.e.

$$\lim_{k \to \infty} f_{\rm best}^{(k)} - f^* < \epsilon.$$
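This revision does not state the bound behind that claim; one standard form of it, under the additional assumptions that every subgradient satisfies $\lVert g^{(k)} \rVert_2 \le G$ and that the starting point satisfies $\lVert x^{(1)} - x^* \rVert_2 \le R$, is

$$f_{\rm best}^{(k)} - f^* \;\le\; \frac{R^2 + G^2 \sum_{i=1}^{k} \alpha_i^2}{2 \sum_{i=1}^{k} \alpha_i},$$

so that with constant step size $\alpha$ the right-hand side tends to $G^2 \alpha / 2$ as $k \to \infty$, which is the range of the optimal value referred to above.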