Interior-point method
Interior point methods (also referred to as barrier methods) are a class of algorithms for solving linear and nonlinear convex optimization problems.
The interior point method was invented by John von Neumann,[1] who suggested a new method for linear programming based on the homogeneous linear system of Gordan (1873). The approach was later popularized by Karmarkar's algorithm, developed by Narendra Karmarkar in 1984 for linear programming. The method uses a self-concordant barrier function to encode the convex set. Contrary to the simplex method, which moves along the boundary of the feasible region, it reaches an optimal solution by traversing the interior of the feasible region.
Any convex optimization problem can be transformed into minimizing (or maximizing) a linear function over a convex set[citation needed]. The idea of encoding the feasible set using a barrier and designing barrier methods was studied in the early 1960s by, amongst others, Anthony V. Fiacco and Garth P. McCormick. These ideas were mainly developed for general nonlinear programming, but they were later abandoned due to the presence of more competitive methods for this class of problems (e.g. sequential quadratic programming).
Yurii Nesterov and Arkadi Nemirovski came up with a special class of such barriers that can be used to encode any convex set. They guarantee that the number of iterations of the algorithm is bounded by a polynomial in the dimension and accuracy of the solution.[2]
Karmarkar's breakthrough revitalized the study of interior point methods and barrier problems, showing that it was possible to create an algorithm for linear programming with polynomial complexity that was, moreover, competitive with the simplex method. Khachiyan's earlier ellipsoid method was also a polynomial-time algorithm, but in practice it was too slow to be of practical interest.
The class of primal-dual path-following interior point methods is considered the most successful. Mehrotra's predictor-corrector algorithm provides the basis for most implementations of this class of methods[citation needed].
Primal-dual interior point method for nonlinear optimization
The primal-dual method's idea is easy to demonstrate for constrained nonlinear optimization. For simplicity consider the all-inequality version of a nonlinear optimization problem:
- minimize $f(x)$ subject to $c_i(x) \ge 0$ for $i = 1, \ldots, m$, where $x \in \mathbb{R}^n$.    (1)
The logarithmic barrier function associated with (1) is

- $B(x, \mu) = f(x) - \mu \sum_{i=1}^{m} \log(c_i(x)).$    (2)

Here $\mu$ is a small positive scalar, sometimes called the "barrier parameter". As $\mu$ converges to zero, the minimum of $B(x, \mu)$ should converge to a solution of (1).
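To make the role of the barrier parameter concrete, here is a minimal sketch in Python for a hypothetical one-dimensional problem (minimize $f(x) = x$ subject to $x - 1 \ge 0$, whose solution is $x^* = 1$); the problem and function names are illustrative, not from any particular library. Minimizing the barrier function with damped Newton steps for a sequence of shrinking barrier parameters shows the barrier minimizers approaching the constrained solution:

```python
import math

def barrier(x, mu):
    # B(x, mu) = f(x) - mu*log(c(x)) with f(x) = x and c(x) = x - 1
    return x - mu * math.log(x - 1.0)

def minimize_barrier(mu, x0=2.0, iters=50):
    # Newton's method on B'(x) = 1 - mu/(x - 1), with B''(x) = mu/(x - 1)^2
    x = x0
    for _ in range(iters):
        g = 1.0 - mu / (x - 1.0)
        h = mu / (x - 1.0) ** 2
        step = g / h
        # damp the step so x stays strictly inside the feasible region x > 1
        while x - step <= 1.0:
            step *= 0.5
        x -= step
    return x

for mu in (1.0, 0.1, 0.01, 0.001):
    # the minimizer of B is x_mu = 1 + mu, approaching the solution x* = 1
    print(mu, minimize_barrier(mu))
```

For this toy problem the barrier minimizer can be found by hand (setting $B'(x) = 0$ gives $x_\mu = 1 + \mu$), so shrinking $\mu$ by a factor of ten moves the iterate ten times closer to the constrained optimum.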
The barrier function gradient is

- $g_b = g - \mu \sum_{i=1}^{m} \frac{1}{c_i(x)} \nabla c_i(x),$    (3)

where $g$ is the gradient of the original function $f(x)$ and $\nabla c_i$ is the gradient of $c_i$.
In addition to the original ("primal") variable $x$, we introduce a Lagrange-multiplier-inspired dual variable $\lambda \in \mathbb{R}^m$ (sometimes called a "slack variable") defined by

- $c_i(x) \, \lambda_i = \mu, \quad \forall i = 1, \ldots, m.$    (4)
(4) is sometimes called the "perturbed complementarity" condition, for its resemblance to "complementary slackness" in KKT conditions.
We try to find those $(x_\mu, \lambda_\mu)$ for which the gradient of the barrier function is zero.
Applying (4) to (3), i.e. substituting $\lambda_i$ for $\mu / c_i(x)$, we get an equation for the gradient:

- $g - A^T \lambda = 0,$    (5)

where the matrix $A$ is the Jacobian of the constraints $c(x)$.
The intuition behind (5) is that the gradient of $f(x)$ should lie in the subspace spanned by the constraints' gradients. The "perturbed complementarity" condition (4) with small $\mu$ can be understood as requiring that the solution either lie near the boundary $c_i(x) = 0$, or that the projection of the gradient $g$ onto the constraint-component normal $\nabla c_i(x)$ be almost zero.
Applying Newton's method to (4) and (5), we get an equation for the update $(p_x, p_\lambda)$ of $(x, \lambda)$:

- $\begin{pmatrix} W & -A^T \\ \Lambda A & C \end{pmatrix} \begin{pmatrix} p_x \\ p_\lambda \end{pmatrix} = \begin{pmatrix} -g + A^T \lambda \\ \mu 1 - C \Lambda 1 \end{pmatrix},$

where $W$ is the Hessian matrix of the Lagrangian $f(x) - \lambda^T c(x)$, $\Lambda$ is the diagonal matrix of $\lambda$, and $C$ is the diagonal matrix with $C_{ii} = c_i(x)$.
Because of (1) and (4), the condition

- $\lambda \ge 0$

should be enforced at each step. This can be done by choosing an appropriate step length $\alpha$:

- $(x, \lambda) \to (x + \alpha p_x, \lambda + \alpha p_\lambda)$.
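The full iteration above can be sketched in Python for a toy one-dimensional instance. This is an illustrative sketch, not a production solver: the problem (minimize $(x-2)^2$ subject to $1 - x \ge 0$, with solution $x^* = 1$, $\lambda^* = 2$), the $\mu$-reduction factor, and the 0.995 fraction-to-boundary constant are all assumptions chosen for demonstration:

```python
import numpy as np

# Toy problem: minimize f(x) = (x - 2)^2  subject to  c(x) = 1 - x >= 0.
# The solution is x* = 1 with multiplier lambda* = 2.

def primal_dual_solve(x=0.0, lam=1.0, mu=1.0, iters=30):
    for _ in range(iters):
        g = 2.0 * (x - 2.0)          # gradient of f
        A = np.array([[-1.0]])       # constraint Jacobian dc/dx
        c = 1.0 - x                  # constraint value
        W = np.array([[2.0]])        # Hessian of the Lagrangian (c is linear)
        # Newton system for the perturbed KKT conditions (5) and (4)
        KKT = np.block([[W, -A.T],
                        [lam * A, np.array([[c]])]])
        rhs = np.array([-g - lam,    # -g + A^T lam = -g - lam here
                        mu - c * lam])
        p = np.linalg.solve(KKT, rhs)
        px, plam = p[0], p[1]
        # choose alpha to keep c(x) > 0 and lambda > 0 at the new point
        alpha = 1.0
        if px > 0:                   # c decreases when x increases
            alpha = min(alpha, 0.995 * c / px)
        if plam < 0:
            alpha = min(alpha, 0.995 * lam / -plam)
        x += alpha * px
        lam += alpha * plam
        mu *= 0.2                    # shrink the barrier parameter
    return x, lam

x, lam = primal_dual_solve()
print(x, lam)                        # approaches x* = 1, lambda* = 2
```

The 2x2 linear solve is this problem's instance of the block Newton system, and the step-length rule is one common way of enforcing the $\lambda \ge 0$ (and $c(x) > 0$) condition described above.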
References
- ^ Dantzig, George B.; Thapa, Mukund N. (2003). Linear Programming 2: Theory and Extensions. Springer-Verlag.
- ^ Wright, Margaret H. (2004). "The interior-point revolution in optimization: History, recent developments, and lasting consequences". Bulletin of the American Mathematical Society. 42: 39. doi:10.1090/S0273-0979-04-01040-7. MR 2115066.
Bibliography
- Bonnans, J. Frédéric; Gilbert, J. Charles; Lemaréchal, Claude; Sagastizábal, Claudia A. (2006). Numerical optimization: Theoretical and practical aspects. Universitext (Second revised ed. of translation of 1997 French ed.). Berlin: Springer-Verlag. pp. xiv+490. doi:10.1007/978-3-540-35447-5. ISBN 3-540-35445-X. MR 2265882.
- Karmarkar, N. (1984). "A new polynomial-time algorithm for linear programming" (PDF). Proceedings of the sixteenth annual ACM symposium on Theory of computing - STOC '84. p. 302. doi:10.1145/800057.808695. ISBN 0-89791-133-4.
- Mehrotra, Sanjay (1992). "On the Implementation of a Primal-Dual Interior Point Method". SIAM Journal on Optimization. 2 (4): 575. doi:10.1137/0802028.
- Nocedal, Jorge; Wright, Stephen J. (1999). Numerical Optimization. New York, NY: Springer. ISBN 0-387-98793-2.
- Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 10.11. Linear Programming: Interior-Point Methods". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
- Wright, Stephen (1997). Primal-Dual Interior-Point Methods. Philadelphia, PA: SIAM. ISBN 0-89871-382-X.
- Boyd, Stephen; Vandenberghe, Lieven (2004). Convex Optimization. Cambridge University Press.