Criss-cross algorithm

From Wikipedia, the free encyclopedia

In mathematical optimization, the criss-cross algorithm denotes a family of algorithms for linear programming. Variants of the criss-cross algorithm also solve more general problems with linear inequality constraints and nonlinear objective functions; there are criss-cross algorithms for linear-fractional programming problems, quadratic-programming problems, and linear complementarity problems.[1]

Comparison with the simplex algorithm of Dantzig

In linear programming, the criss-cross algorithm pivots between a sequence of bases but differs from the simplex algorithm of George Dantzig. The simplex algorithm first finds a (primal-) feasible basis by solving a "phase-one problem"; in "phase two", the simplex algorithm pivots between a sequence of basic feasible solutions so that the objective function is non-decreasing with each pivot, terminating with an optimal solution (having also finally found a "dual feasible" solution).[1]
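
The construction of a phase-one problem can be made concrete with a small sketch (a minimal illustration of the standard device, not Dantzig's original presentation; the helper name is hypothetical). It adds artificial variables to a standard-form program min c·x subject to Ax = b, x ≥ 0 and minimizes their sum, with SciPy's linprog standing in for the pivoting itself.

  import numpy as np
  from scipy.optimize import linprog  # any LP solver would do here

  def phase_one(A, b):
      """Find a feasible point of {x : Ax = b, x >= 0} by minimizing the
      sum of the artificial variables a in the system Ax + a = b, a >= 0."""
      m, n = A.shape
      b = np.asarray(b, dtype=float)
      sign = np.where(b < 0, -1.0, 1.0)               # flip rows so b >= 0,
      A1 = np.hstack([sign[:, None] * A, np.eye(m)])  # making a = b feasible
      c1 = np.concatenate([np.zeros(n), np.ones(m)])  # minimize the sum of a
      res = linprog(c1, A_eq=A1, b_eq=sign * b, bounds=[(0, None)] * (n + m))
      if res.fun > 1e-9:   # the artificials cannot all reach zero,
          return None      # so the original system is infeasible
      return res.x[:n]     # a primal-feasible x to start phase two

  # Example: x1 + x2 = 1, x >= 0 is feasible, so a point is returned.
  print(phase_one(np.array([[1.0, 1.0]]), [1.0]))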

The criss-cross algorithm is simpler than the simplex algorithm, because the criss-cross algorithm has only one phase. Its pivoting rules are similar to the least-index pivoting rule of Robert G. Bland. Bland's rule uses only the signs of coefficients, rather than their (real-number) order, when deciding which pivots are eligible; it selects an entering variable by comparing values of reduced costs, using the real-number ordering of the eligible pivots. Unlike Bland's rule, the criss-cross algorithm is "purely combinatorial", selecting an entering variable and a leaving variable by considering only the signs of coefficients rather than their real-number ordering.[1]
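
To make the sign-based rule concrete, here is a minimal sketch of the least-index criss-cross method for min c·x subject to Ax = b, x ≥ 0 (an illustration written for this setting, not Terlaky's or Wang's own presentation). The starting basis may be infeasible for both the primal and the dual; every pivot decision reads only the signs of basic values, reduced costs, and tableau entries. The floating-point tolerance is an assumption of the sketch; the finiteness proof presumes exact arithmetic.

  import numpy as np

  def criss_cross(A, b, c, basis, tol=1e-9):
      """Least-index criss-cross method for min c@x s.t. A@x = b, x >= 0.
      The starting basis need not be primal- or dual-feasible."""
      A, b, c = (np.asarray(v, dtype=float) for v in (A, b, c))
      basis, n = list(basis), A.shape[1]
      while True:
          Binv = np.linalg.inv(A[:, basis])
          xB = Binv @ b                      # basic values (only signs used)
          d = c - A.T @ (Binv.T @ c[basis])  # reduced costs (only signs used)
          nonbasic = [j for j in range(n) if j not in basis]
          bad = [basis[i] for i in range(len(basis)) if xB[i] < -tol]
          bad += [j for j in nonbasic if d[j] < -tol]
          if not bad:                        # primal and dual feasible:
              x = np.zeros(n)                # the current basis is optimal
              x[basis] = xB
              return x
          r = min(bad)                       # least-index selection
          T = Binv @ A                       # current tableau
          if r in basis:                     # r is basic with negative value
              i = basis.index(r)
              cols = [j for j in nonbasic if T[i, j] < -tol]
              if not cols:
                  raise ValueError("primal infeasible")
              basis[i] = min(cols)           # least-index entering variable
          else:                              # r has negative reduced cost
              rows = [i for i in range(len(basis)) if T[i, r] > tol]
              if not rows:
                  raise ValueError("dual infeasible")
              i = min(rows, key=lambda k: basis[k])  # least-index leaving
              basis[i] = r

  # Example: max x1 + x2 s.t. x1 + 2*x2 <= 4, x1 <= 3, with slacks x3, x4.
  A = np.array([[1.0, 2.0, 1.0, 0.0], [1.0, 0.0, 0.0, 1.0]])
  print(criss_cross(A, [4.0, 3.0], [-1.0, -1.0, 0.0, 0.0], [2, 3]))
  # -> [3.0, 0.5, 0.0, 0.0]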

Oriented matroids

The criss-cross algorithm was published independently by Tamás Terlaky[2] and by Zhe-Min Wang;[3] related algorithms appeared in unpublished reports by other authors. Their publications considered linear programming in the abstract setting of oriented matroids (OMs).[4] Indeed, Bland's pivoting rule was based on his previous papers on oriented-matroid theory, and much of the literature on the criss-cross algorithm concerns oriented matroids. Historically, OM algorithms for quadratic-programming problems and linear-complementarity problems had been published by Michael J. Todd before Terlaky and Wang published their criss-cross algorithms. However, Todd's pivoting rule cycles on nonrealizable oriented matroids (and only on nonrealizable oriented matroids); such cycling does not occur for the OM criss-cross algorithms of Terlaky and Wang. There are oriented-matroid variants of the criss-cross algorithm for linear programming, for quadratic programming, and for the linear-complementarity problem.[1]

Like the simplex algorithm of Dantzig, the criss-cross algorithm is not a polynomial-time algorithm for linear programming. The original criss-cross algorithm of Terlaky visits all 2^D corners of a (perturbed) cube in dimension D, by a result of Roos (whose construction modifies the Klee–Minty construction of a cube on which the simplex algorithm takes 2^D steps).[1]
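
For illustration, the sketch below generates one common ε-perturbed form of the Klee–Minty cube (an assumption made for concreteness; Roos's example for the criss-cross algorithm is a related but different perturbation). For 0 < ε < 1/2, the region 0 ≤ x_1 ≤ 1, ε·x_{i-1} ≤ x_i ≤ 1 − ε·x_{i-1} is a deformed D-dimensional cube with 2^D vertices, and maximizing x_D over it with a badly chosen pivot rule can visit all of them.

  import numpy as np

  def klee_minty_cube(D, eps=0.1):
      """Return (A, b, c) with A@x <= b describing the perturbed cube
      eps*x_{i-1} <= x_i <= 1 - eps*x_{i-1} (with x_0 read as 0), and c
      the objective "maximize x_D". Requires 0 < eps < 1/2."""
      A = np.zeros((2 * D, D))
      b = np.zeros(2 * D)
      for i in range(D):
          A[2 * i, i] = 1.0        # upper face:  x_i + eps*x_{i-1} <= 1
          b[2 * i] = 1.0
          A[2 * i + 1, i] = -1.0   # lower face: -x_i + eps*x_{i-1} <= 0
          if i > 0:
              A[2 * i, i - 1] = eps
              A[2 * i + 1, i - 1] = eps
      c = np.zeros(D)
      c[-1] = 1.0                  # objective: maximize the last coordinate
      return A, b, c

  A, b, c = klee_minty_cube(3)     # a 3-cube: 2**3 = 8 vertices to visit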

Summary

The criss-cross algorithm is a simply stated algorithm for linear programming that is of great theoretical interest. It was the second fully combinatorial algorithm for linear programming; unlike the earlier oriented-matroid algorithm of Todd, the criss-cross algorithm does not cycle on nonrealizable oriented-matroid problems. Researchers have extended the criss-cross algorithm to many optimization problems, including linear-fractional programming; in the setting of oriented matroids, it can also solve quadratic-programming problems and linear complementarity problems. It is not a polynomial-time algorithm for linear programming, however.

See also

  • Jack Edmonds (pioneer of combinatorial optimization and oriented-matroid theorist; doctoral advisor of Komei Fukuda)

Notes

  1. ^ a b c d e Fukuda & Terlaky (1997)
  2. ^ Terlaky (1985) and Terlaky (1987)
  3. ^ Wang (1987)
  4. ^ The theory of oriented matroids was initiated by R. Tyrrell Rockafellar to describe the sign patterns of the matrices arising through the pivoting operations of Dantzig's simplex algorithm; Rockafellar was inspired by Albert W. Tucker's studies of such sign patterns in "Tucker tableaux".

References

  • Fukuda, Komei; Terlaky, Tamás (1997). "Criss-cross methods: A fresh view on pivot algorithms". Mathematical Programming: Series B. 79 (Papers from the 16th International Symposium on Mathematical Programming held in Lausanne, 1997). Amsterdam: North-Holland Publishing Co.: 369–395. doi:10.1016/S0025-5610(97)00062-2. MR 1464775.
  • Terlaky, T. (1985). "A convergent criss-cross method". Optimization: A Journal of Mathematical Programming and Operations Research. 16 (5): 683–690. doi:10.1080/02331938508843067. ISSN 0233-1934. MR 0798939.
  • Terlaky, Tamás (1987). "A finite crisscross method for oriented matroids". Journal of Combinatorial Theory. Series B. 42 (3): 319–327. doi:10.1016/0095-8956(87)90049-9. ISSN 0095-8956. MR 0888684.
  • Wang, Zhe Min (1987). "A finite conformal-elimination free algorithm over oriented matroid programming". Chinese Annals of Mathematics (Shuxue Niankan B Ji). Series B. 8 (1): 120–125. ISSN 0252-9599. MR 0886756.