Revised simplex method
In mathematical optimization, the revised simplex method is a variant of George Dantzig's simplex method for linear programming.
The revised simplex method is mathematically equivalent to the standard simplex method but differs in implementation. Instead of maintaining a tableau which explicitly represents the constraints adjusted to a set of basic variables, it maintains a representation of a basis of the matrix representing the constraints. The matrix-oriented approach allows for greater computational efficiency by enabling sparse matrix operations.
Problem formulation
For the rest of the discussion, it is assumed that a linear programming problem has been converted into the following standard form:

$$\begin{aligned} \text{minimize} \quad & c^{\mathrm{T}} x \\ \text{subject to} \quad & Ax = b, \quad x \ge 0 \end{aligned}$$
where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^{m}$, $c \in \mathbb{R}^{n}$ and $m \le n$. Without loss of generality, it is assumed that the constraint matrix $A$ has full rank and that the problem is feasible, i.e., there is at least one $x \ge 0$ such that $Ax = b$. If $A$ is rank-deficient, either there are redundant constraints, or the problem is infeasible. Both situations can be handled by a presolve step.
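As a brief illustration (the specific numbers here are ours, not from the article), an inequality-constrained problem such as minimizing $-x_1 - 2x_2$ subject to $x_1 + x_2 \le 4$ and $x_1 + 3x_2 \le 6$ with $x_1, x_2 \ge 0$ can be brought into standard form by introducing slack variables $x_3, x_4 \ge 0$:

```latex
\begin{aligned}
\text{minimize}   \quad & -x_1 - 2x_2 \\
\text{subject to} \quad & x_1 + x_2 + x_3 = 4,\\
                        & x_1 + 3x_2 + x_4 = 6,\\
                        & x_1, x_2, x_3, x_4 \ge 0,
\end{aligned}
```

so that $A = \begin{bmatrix} 1 & 1 & 1 & 0 \\ 1 & 3 & 0 & 1 \end{bmatrix}$, $b = (4, 6)^{\mathrm{T}}$ and $c = (-1, -2, 0, 0)^{\mathrm{T}}$.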
Algorithmic description
Optimality conditions
For linear programming, the Karush–Kuhn–Tucker conditions are both necessary and sufficient for optimality. The KKT conditions of a linear programming problem in the standard form are

$$\begin{aligned} Ax &= b, \\ A^{\mathrm{T}} \lambda + s &= c, \\ x &\ge 0, \\ s &\ge 0, \\ s^{\mathrm{T}} x &= 0 \end{aligned}$$
where $\lambda$ and $s$ are the Lagrange multipliers associated with the constraints $Ax = b$ and $x \ge 0$, respectively.[1] The last condition, which is equivalent to $s_i x_i = 0$ for all $1 \le i \le n$, is called the complementary slackness condition.
By what is sometimes known as the fundamental theorem of linear programming, a vertex $x$ of the feasible polytope can be identified by a basis $B$ of $A$ chosen from the latter's columns.[a] Since $A$ has full rank, $B$ is nonsingular. Without loss of generality, assume that $A = [B \; N]$. Then $x$ is given by

$$x = \begin{bmatrix} x_B \\ x_N \end{bmatrix} = \begin{bmatrix} B^{-1} b \\ 0 \end{bmatrix}$$
where $x_B \ge 0$. Partition $c$ and $s$ accordingly into

$$c = \begin{bmatrix} c_B \\ c_N \end{bmatrix}, \qquad s = \begin{bmatrix} s_B \\ s_N \end{bmatrix}.$$
To satisfy the complementary slackness condition, let $s_B = 0$. It follows that

$$B^{\mathrm{T}} \lambda = c_B, \qquad N^{\mathrm{T}} \lambda + s_N = c_N,$$
which implies that

$$\lambda = (B^{\mathrm{T}})^{-1} c_B, \qquad s_N = c_N - N^{\mathrm{T}} \lambda.$$
If $s_N \ge 0$ at this point, the KKT conditions are satisfied, and thus $x$ is optimal.
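The optimality check described above can be sketched in a few lines of code. The following is a minimal illustration on a small made-up instance (the matrix, the slack starting basis, and the helper `solve2` are our own assumptions, not from the article): it computes $x_B = B^{-1}b$, solves $B^{\mathrm{T}}\lambda = c_B$, and forms the reduced costs $s_N = c_N - N^{\mathrm{T}}\lambda$.

```python
def solve2(M, v):
    """Solve a 2x2 linear system M y = v by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    y0 = (v[0] * M[1][1] - v[1] * M[0][1]) / det
    y1 = (M[0][0] * v[1] - M[1][0] * v[0]) / det
    return [y0, y1]

# Toy instance (ours): minimize -x1 - 2x2 subject to
# x1 + x2 + x3 = 4 and x1 + 3x2 + x4 = 6, x >= 0 (x3, x4 are slacks).
A = [[1, 1, 1, 0],
     [1, 3, 0, 1]]
b = [4, 6]
c = [-1, -2, 0, 0]

basis = [2, 3]            # start from the all-slack basis {x3, x4}
B = [[A[i][j] for j in basis] for i in range(2)]
Bt = [[B[j][i] for j in range(2)] for i in range(2)]

x_B = solve2(B, b)                          # x_B = B^{-1} b
lam = solve2(Bt, [c[j] for j in basis])     # solve B^T lambda = c_B

nonbasic = [j for j in range(4) if j not in basis]
s_N = [c[j] - sum(A[i][j] * lam[i] for i in range(2)) for j in nonbasic]
print(s_N)   # [-1.0, -2.0]: negative reduced costs, so this vertex is not optimal
```

A negative entry in `s_N` identifies a nonbasic column whose introduction into the basis would decrease the objective, which is exactly what the pivot operation below exploits.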
Pivot operation
If the KKT conditions are violated, a pivot operation consisting of introducing a column of $N$ into the basis at the expense of an existing column in $B$ is performed. In the absence of degeneracy, a pivot operation always results in a strict decrease in $c^{\mathrm{T}} x$. Therefore, if the problem is bounded, the revised simplex method must terminate at an optimal vertex after repeated pivot operations because there are only a finite number of vertices.[3]
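A full pivot loop can be sketched on a toy instance with two equality constraints. This is illustrative only: the instance, the "first negative reduced cost" pricing rule, the ratio test as written, and the 2×2 helper `solve2` are our assumptions; a production revised simplex code would instead maintain and update a factorization of $B$ across pivots rather than re-solving from scratch.

```python
def solve2(M, v):
    """Solve a 2x2 linear system M y = v by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    y0 = (v[0] * M[1][1] - v[1] * M[0][1]) / det
    y1 = (M[0][0] * v[1] - M[1][0] * v[0]) / det
    return [y0, y1]

def revised_simplex(A, b, c, basis):
    """Revised simplex sketch for m = 2 constraints, given a feasible basis."""
    m, n = len(A), len(A[0])
    while True:
        B = [[A[i][j] for j in basis] for i in range(m)]
        Bt = [[B[j][i] for j in range(m)] for i in range(m)]
        x_B = solve2(B, b)                         # current vertex: x_B = B^{-1} b
        lam = solve2(Bt, [c[j] for j in basis])    # B^T lambda = c_B
        # Pricing: pick the first nonbasic column with a negative reduced cost.
        entering = None
        for j in range(n):
            if j in basis:
                continue
            if c[j] - sum(A[i][j] * lam[i] for i in range(m)) < -1e-9:
                entering = j
                break
        if entering is None:                       # s_N >= 0: KKT satisfied, optimal
            x = [0.0] * n
            for i, j in enumerate(basis):
                x[j] = x_B[i]
            return x
        d = solve2(B, [A[i][entering] for i in range(m)])   # d = B^{-1} A_q
        # Ratio test: largest step that keeps x_B >= 0 picks the leaving column.
        ratios = [(x_B[i] / d[i], i) for i in range(m) if d[i] > 1e-9]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, leave = min(ratios)
        basis[leave] = entering

# Toy instance (ours): minimize -x1 - 2x2 s.t. x1 + x2 <= 4, x1 + 3x2 <= 6,
# in standard form with slacks x3, x4; start from the all-slack basis.
x = revised_simplex([[1, 1, 1, 0], [1, 3, 0, 1]], [4, 6], [-1, -2, 0, 0], [2, 3])
print(x)   # [3.0, 1.0, 0.0, 0.0]: optimum x1 = 3, x2 = 1, objective -5
```

Each pass through the loop is one pivot: the entering column has a negative reduced cost, so the objective strictly decreases (absent degeneracy), and the loop stops as soon as $s_N \ge 0$.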
Notes and references
Notes
- ^ The same theorem also states that the feasible polytope has at least one vertex and that there is at least one vertex which is optimal.[2]
References
- ^ Nocedal & Wright 2006, p. 358, Eq. 13.4.
- ^ Nocedal & Wright 2006, p. 363, Theorem 13.2.
- ^ Nocedal & Wright 2006, p. 370, Theorem 13.4.
Bibliography
- Nocedal, J.; Wright, S. J. (2006). Mikosch, T. V.; Resnick, S. I.; Robinson, S. M. (eds.). Numerical Optimization. Springer Series in Operations Research and Financial Engineering (2nd ed.). New York, NY, USA: Springer. ISBN 978-0-387-30303-1.