
Augmented Lagrangian method


Augmented Lagrangian methods are a certain class of algorithms for solving constrained optimization problems. They have similarities to penalty methods in that they replace a constrained optimization problem by a series of unconstrained problems; the difference is that the augmented Lagrangian method adds a further term to the unconstrained objective, designed to mimic a Lagrange multiplier. The augmented Lagrangian method is not the same as the method of Lagrange multipliers.

Viewed differently, the unconstrained objective is the Lagrangian of the constrained problem, with an additional penalty term (the augmentation).

The method was originally known as the method of multipliers and was studied extensively in the 1970s and 1980s as a good alternative to penalty methods. It was first discussed by Hestenes in 1969[1] and by Powell in 1969.[2] The method was also studied by Dimitri Bertsekas, whose 1982 book is a good reference.[3]

With the advent of sequential quadratic programming and later interior point methods, the augmented Lagrangian method has seen less active research in the past twenty years. The method is still useful in some contexts, and it is still discussed in modern optimization textbooks[4].

General method

Let us say we are solving the following constrained problem:

$$\min_x f(x)$$

subject to

$$c_i(x) = 0 \quad \forall i \in \mathcal{I}.$$

This problem can be solved as a series of unconstrained minimization problems. For reference, we first list the penalty method approach:

$$\min_x \Phi_k(x) = f(x) + \mu_k \sum_{i \in \mathcal{I}} c_i(x)^2.$$

The penalty method solves this problem, then at the next iteration it re-solves the problem using a larger value of $\mu_k$ (and using the old solution as the initial guess or "warm start").
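
As an illustration, here is a minimal Python sketch of the penalty iteration on a toy problem. The objective f, the constraint c, the schedule for growing $\mu_k$, and the use of SciPy's general-purpose minimize routine as the unconstrained solver are all illustrative choices, not part of the method itself.

    import numpy as np
    from scipy.optimize import minimize

    def f(x):                     # toy objective: squared distance from the origin
        return x[0]**2 + x[1]**2

    def c(x):                     # single equality constraint, c(x) = 0
        return np.array([x[0] + x[1] - 1.0])

    def penalty_method(x0, mu=1.0, growth=10.0, iters=8):
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            # Phi_k(x) = f(x) + mu_k * sum_i c_i(x)^2
            phi = lambda x: f(x) + mu * np.sum(c(x)**2)
            x = minimize(phi, x).x   # warm-started from the previous solution
            mu *= growth             # mu_k must grow for the iterates to converge
        return x

    print(penalty_method([0.0, 0.0]))  # approaches the true solution [0.5, 0.5]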

The augmented Lagrangian method uses the following unconstrained objective:

$$\min_x \Phi_k(x) = f(x) + \frac{\mu_k}{2} \sum_{i \in \mathcal{I}} c_i(x)^2 - \sum_{i \in \mathcal{I}} \lambda_i c_i(x),$$

and after each iteration, in addition to updating $\mu_k$, the variable $\lambda$ is also updated according to the rule

$$\lambda_i \leftarrow \lambda_i - \mu_k c_i(x_k),$$

where $x_k$ is the solution to the unconstrained problem at the $k$th step, i.e. $x_k = \arg\min_x \Phi_k(x)$.

The variable $\lambda$ is an estimate of the Lagrange multiplier, and the accuracy of this estimate improves at every step.

The major advantage of the method is that, unlike the penalty method, it is not necessary to take $\mu_k \to \infty$ in order to solve the original constrained problem. Instead, because of the presence of the Lagrange multiplier term, $\mu_k$ can stay much smaller.
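
For comparison, here is a minimal sketch of the augmented Lagrangian iteration on the same toy problem as above, again with SciPy's minimize standing in for the unconstrained solver; keeping $\mu$ fixed is an illustrative choice, made to emphasize that it need not grow.

    import numpy as np
    from scipy.optimize import minimize

    def f(x):                     # same toy objective as above
        return x[0]**2 + x[1]**2

    def c(x):                     # single equality constraint, c(x) = 0
        return np.array([x[0] + x[1] - 1.0])

    def augmented_lagrangian(x0, mu=10.0, iters=8):
        x = np.asarray(x0, dtype=float)
        lam = np.zeros_like(c(x))  # multiplier estimate lambda, initially zero
        for _ in range(iters):
            # Phi_k(x) = f(x) + (mu_k/2) sum_i c_i(x)^2 - sum_i lambda_i c_i(x)
            phi = lambda x: f(x) + 0.5 * mu * np.sum(c(x)**2) - lam @ c(x)
            x = minimize(phi, x).x   # warm-started unconstrained solve
            lam = lam - mu * c(x)    # update: lambda_i <- lambda_i - mu_k c_i(x_k)
        return x, lam

    x, lam = augmented_lagrangian([0.0, 0.0])
    print(x, lam)  # x approaches [0.5, 0.5]; lam approaches the true multiplier, 1

Here the constraint violation is driven to zero by the improving multiplier estimate rather than by an ever-larger penalty parameter, which is exactly the advantage described above.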

The method can be extended to handle inequality constraints. For a discussion of practical improvements, see [5].

Comparison with penalty methods

It is suggested in [6] that the augmented Lagrangian method is generally preferable to the quadratic penalty method, since it incurs little extra computational cost and the penalty parameter $\mu$ need not go to infinity, thus avoiding ill-conditioning.

Software

Some well-known software packages that use the augmented Lagrangian method are MINOS, LANCELOT, and PENNON.

References

  1. ^ M. R. Hestenes, "Multiplier and gradient methods", Journal of Optimization Theory and Applications, 4, 1969, pp. 303–320.
  2. ^ M. J. D. Powell, "A method for nonlinear constraints in minimization problems", in Optimization, ed. by R. Fletcher, Academic Press, New York, NY, 1969, pp. 283–298.
  3. ^ Dimitri P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods, Athena Scientific, 1996 (first published 1982).
  4. ^ Nocedal & Wright (2006), chapter 17
  5. ^ Nocedal & Wright (2006), chapter 17
  6. ^ Nocedal & Wright (2006), chapter 17

Bibliography

  • Nocedal, Jorge; Wright, Stephen J. (2006), Numerical Optimization (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-30303-1