 Criss-cross algorithm

This article is about an algorithm for mathematical optimization. For the naming of chemicals, see criss-cross method.
In mathematical optimization, the criss-cross algorithm denotes a family of algorithms for linear programming. Variants of the criss-cross algorithm also solve more general problems with linear inequality constraints and nonlinear objective functions; there are criss-cross algorithms for linear-fractional programming problems,^{[1]}^{[2]} quadratic-programming problems, and linear complementarity problems.^{[3]}
Like the simplex algorithm of George B. Dantzig, the criss-cross algorithm is not a polynomial-time algorithm for linear programming. Both algorithms visit all 2^{D} corners of a (perturbed) cube in dimension D, the Klee–Minty cube (after Victor Klee and George J. Minty), in the worst case.^{[4]}^{[5]} However, when it is started at a random corner, the criss-cross algorithm on average visits only D additional corners.^{[6]}^{[7]}^{[8]} Thus, for the three-dimensional cube, the algorithm visits all 8 corners in the worst case and exactly 3 additional corners on average.
History
The criss-cross algorithm was published independently by Tamás Terlaky^{[9]} and by Zhe-Min Wang;^{[10]} related algorithms appeared in unpublished reports by other authors.^{[3]}
Comparison with the simplex algorithm for linear optimization
In linear programming, the criss-cross algorithm pivots between a sequence of bases but differs from the simplex algorithm of George Dantzig. The simplex algorithm first finds a (primal) feasible basis by solving a "phase-one problem"; in "phase two", the simplex algorithm pivots between a sequence of basic feasible solutions so that the objective function is nondecreasing with each pivot, terminating with an optimal solution (also finally finding a "dual feasible" solution).^{[3]}^{[11]}
The criss-cross algorithm is simpler than the simplex algorithm, because the criss-cross algorithm only has one phase. Its pivoting rules are similar to the least-index pivoting rule of Bland.^{[12]} Bland's rule uses only signs of coefficients rather than their (real-number) order when deciding eligible pivots; however, Bland's rule selects an entering variable by comparing values of reduced costs, using the real-number ordering of the eligible pivots.^{[12]}^{[13]} Unlike Bland's rule, the criss-cross algorithm is "purely combinatorial", selecting an entering variable and a leaving variable by considering only the signs of coefficients rather than their real-number ordering.^{[3]}^{[11]} The criss-cross algorithm has been applied to furnish constructive proofs of basic results in real linear algebra, such as the lemma of Farkas.^{[14]}
Description
The criss-cross algorithm has been specified in many publications and implemented in many programming languages.
Example
Examples showing the criss-cross algorithm on a problem instance have been published.
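As a concrete illustration, the following is a minimal Python sketch of the least-index criss-cross rule for a linear program in equality form (minimize c·x subject to Ax = b, x ≥ 0). The function name, the dense-tableau representation, and the tolerance are illustrative choices, not a canonical implementation; a robust version would use rational arithmetic, since the finiteness proof assumes exact pivoting.

```python
def criss_cross(A, b, c, basis, tol=1e-9):
    """Least-index criss-cross method for: min c.x  s.t.  A x = b, x >= 0.

    `basis` may be ANY basis: unlike the simplex method, it need not be
    primal or dual feasible, and intermediate bases may be infeasible.
    """
    m, n = len(A), len(A[0])
    B = list(basis)
    T = [row[:] + [b[i]] for i, row in enumerate(A)]  # tableau [A | b]

    def pivot(r, s):
        # Gauss-Jordan pivot on entry (r, s); variable s becomes basic.
        p = T[r][s]
        T[r] = [v / p for v in T[r]]
        for i in range(m):
            if i != r and abs(T[i][s]) > tol:
                f = T[i][s]
                T[i] = [vi - f * vr for vi, vr in zip(T[i], T[r])]
        B[r] = s

    # Bring the tableau into basic form (assumes the basis columns admit
    # in-order elimination, as a slack basis does).
    for r in range(m):
        pivot(r, B[r])

    while True:
        inB = set(B)
        # Reduced costs: z_j = c_j - c_B . (column j of the tableau).
        z = [c[j] - sum(c[B[r]] * T[r][j] for r in range(m)) for j in range(n)]
        # Least index among all primal- or dual-infeasible variables.
        viol = [B[r] for r in range(m) if T[r][n] < -tol] + \
               [j for j in range(n) if j not in inB and z[j] < -tol]
        if not viol:
            x = [0.0] * n
            for r in range(m):
                x[B[r]] = T[r][n]
            return "optimal", x
        k = min(viol)
        if k in inB:
            # k is a basic variable with negative value: it must leave;
            # enter the least-index column with a negative entry in its row.
            r = B.index(k)
            cols = [j for j in range(n) if j not in inB and T[r][j] < -tol]
            if not cols:
                return "infeasible", None
            pivot(r, min(cols))
        else:
            # k has negative reduced cost: it enters; the leaving variable
            # is the least-index basic variable with a positive column entry.
            rows = [r for r in range(m) if T[r][k] > tol]
            if not rows:
                return "unbounded", None
            pivot(min(rows, key=lambda r: B[r]), k)

# Demo: min -x1 - 2x2  s.t.  x1 + x2 <= 4,  x1 + 3x2 <= 6  (slacks x3, x4).
A = [[1.0, 1.0, 1.0, 0.0], [1.0, 3.0, 0.0, 1.0]]
b = [4.0, 6.0]
c = [-1.0, -2.0, 0.0, 0.0]
status, x = criss_cross(A, b, c, basis=[2, 3])
```

Note that the rule uses no ratio test: both the entering and the leaving variable are chosen by sign and index alone, so a run may pass through primal-infeasible bases on its way to the optimum (this example does).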
Computational complexity: Worst and average cases
The time complexity of an algorithm counts the number of arithmetic operations sufficient for the algorithm to solve the problem. For example, Gaussian elimination requires on the order of D^{3} operations, and so it is said to have polynomial time complexity, because its complexity is bounded by a cubic polynomial. There are examples of algorithms that do not have polynomial-time complexity. For example, a generalization of Gaussian elimination called Buchberger's algorithm has for its complexity an exponential function of the problem data (the degree of the polynomials and the number of variables of the multivariate polynomials). Because exponential functions eventually grow much faster than polynomial functions, an exponential complexity implies that an algorithm has slow performance on large problems.
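The gap between the two growth rates is easy to see numerically. This purely illustrative sketch tabulates a cubic operation count, as for Gaussian elimination, against 2^{D}:

```python
# A cubic operation count (Gaussian elimination) is quickly overtaken
# by exponential growth in the dimension D.
for D in (5, 10, 20, 40):
    print(f"D = {D:>2}:  D^3 = {D**3:>6},  2^D = {2**D}")
# The cubic count is larger at D = 5, but by D = 40 the exponential
# count exceeds a trillion while D^3 is only 64000.
```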
Several algorithms for linear programming (Khachiyan's ellipsoidal algorithm, Karmarkar's projective algorithm, and central-path algorithms) have polynomial time complexity (in the worst case and thus on average). The ellipsoidal and projective algorithms were published before the criss-cross algorithm.
However, like the simplex algorithm of Dantzig, the criss-cross algorithm is not a polynomial-time algorithm for linear programming. Terlaky's criss-cross algorithm visits all the 2^{D} corners of a (perturbed) cube in dimension D, according to a paper of Roos; Roos's paper modifies the Klee–Minty construction of a cube on which the simplex algorithm takes 2^{D} steps.^{[3]}^{[4]}^{[5]} Like the simplex algorithm, the criss-cross algorithm visits all 8 corners of the three-dimensional cube in the worst case.
When it is initialized at a random corner of the cube, however, the criss-cross algorithm visits only D additional corners, according to a 1994 paper by Fukuda and Namiki.^{[6]}^{[7]} Trivially, the simplex algorithm takes on average D steps for a cube.^{[8]}^{[15]} Like the simplex algorithm, the criss-cross algorithm visits exactly 3 additional corners of the three-dimensional cube on average.
Variants
The criss-cross algorithm has been extended to solve more general problems than linear programming problems.
Other optimization problems with linear constraints
There are variants of the criss-cross algorithm for linear programming, for quadratic programming, and for the linear-complementarity problem with "sufficient matrices";^{[3]}^{[6]}^{[16]}^{[17]}^{[18]}^{[19]} conversely, for linear complementarity problems, the criss-cross algorithm terminates finitely only if the matrix is a sufficient matrix.^{[18]}^{[19]} A sufficient matrix is a generalization both of a positive-definite matrix and of a P-matrix, whose principal minors are each positive.^{[18]}^{[19]}^{[20]} The criss-cross algorithm has been adapted also for linear-fractional programming.^{[1]}^{[2]}
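The defining property of a P-matrix (every principal minor is positive) can be checked directly by brute force. The helper names below are hypothetical and the subset enumeration is only practical for small matrices; it is meant to make the definition concrete, not to be an efficient test:

```python
from itertools import combinations

def det(M):
    # Determinant by Laplace expansion along the first row
    # (adequate for the small principal submatrices used here).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_p_matrix(M):
    """True iff every principal minor of the square matrix M is positive."""
    n = len(M)
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            sub = [[M[i][j] for j in idx] for i in idx]
            if det(sub) <= 0:
                return False
    return True

print(is_p_matrix([[2, 1], [0, 3]]))    # True: minors 2, 3, and 6 are positive
print(is_p_matrix([[0, 1], [-1, 2]]))   # False: the 1x1 minor 0 is not positive
```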
Vertex enumeration
The criss-cross algorithm was used in an algorithm for enumerating all the vertices of a polytope, which was published by David Avis and Komei Fukuda in 1992.^{[21]} Avis and Fukuda presented an algorithm which finds the v vertices of a polyhedron defined by a nondegenerate system of n linear inequalities in D dimensions (or, dually, the v facets of the convex hull of n points in D dimensions, where each facet contains exactly D given points) in O(nDv) time and O(nD) space.^{[22]}
Oriented matroids
The criss-cross algorithm is often studied using the theory of oriented matroids (OMs), which is a combinatorial abstraction of linear-optimization theory.^{[17]}^{[23]} Indeed, Bland's pivoting rule was based on his previous papers on oriented-matroid theory. However, Bland's rule exhibits cycling on some oriented-matroid linear-programming problems.^{[17]} The first purely combinatorial algorithm for linear programming was devised by Michael J. Todd.^{[17]}^{[24]} Todd's algorithm was developed not only for linear programming in the setting of oriented matroids, but also for quadratic-programming problems and linear-complementarity problems.^{[17]}^{[24]} Unfortunately, Todd's algorithm is complicated even to state, and its finite-convergence proofs are somewhat complicated.^{[17]}
The criss-cross algorithm and its proof of finite termination can be simply stated, and they readily extend to the setting of oriented matroids. The algorithm can be further simplified for linear feasibility problems, that is, for linear systems with nonnegative variables; these problems can be formulated for oriented matroids.^{[14]} The criss-cross algorithm has been adapted for problems that are more complicated than linear programming: there are oriented-matroid variants also for the quadratic-programming problem and for the linear-complementarity problem.^{[3]}^{[16]}^{[17]}
Summary
The criss-cross algorithm is a simply stated algorithm for linear programming. It was the second fully combinatorial algorithm for linear programming. The partially combinatorial simplex algorithm of Bland cycles on some (nonrealizable) oriented matroids. The first fully combinatorial algorithm was published by Todd, and it is also like the simplex algorithm in that it preserves feasibility after the first feasible basis is generated; however, Todd's rule is complicated. The criss-cross algorithm is not a simplex-like algorithm, because it need not maintain feasibility. The criss-cross algorithm does not have polynomial time complexity, however.
Researchers have extended the criss-cross algorithm to many optimization problems, including linear-fractional programming. The criss-cross algorithm can solve quadratic programming problems and linear complementarity problems, even in the setting of oriented matroids. Even when generalized, the criss-cross algorithm remains simply stated.
See also
 Jack Edmonds (pioneer of combinatorial optimization and oriented-matroid theorist; doctoral advisor of Komei Fukuda)
Notes
 ^ ^{a} ^{b} Illés, Szirmai & Terlaky (1999)
 ^ ^{a} ^{b} Stancu-Minasian, I. M. (August 2006). "A sixth bibliography of fractional programming". Optimization 55 (4): 405–428. doi:10.1080/02331930600819613. MR2258634. http://www.informaworld.com/10.1080/02331930600819613.
 ^ ^{a} ^{b} ^{c} ^{d} ^{e} ^{f} ^{g} Fukuda & Terlaky (1997)
 ^ ^{a} ^{b} Roos (1990)
 ^ ^{a} ^{b} Klee, Victor; Minty, George J. (1972). "How good is the simplex algorithm?". In Shisha, Oved. Inequalities III (Proceedings of the Third Symposium on Inequalities held at the University of California, Los Angeles, Calif., September 1–9, 1969, dedicated to the memory of Theodore S. Motzkin). New York–London: Academic Press. pp. 159–175. MR332165.
 ^ ^{a} ^{b} ^{c} Fukuda & Terlaky (1997, p. 385)
 ^ ^{a} ^{b} Fukuda & Namiki (1994, p. 367)
 ^ ^{a} ^{b} The simplex algorithm takes on average D steps for a cube. Borgwardt (1987): Borgwardt, Karl-Heinz (1987). The simplex method: A probabilistic analysis. Algorithms and Combinatorics (Study and Research Texts). 1. Berlin: Springer-Verlag. pp. xii+268. ISBN 3-540-17096-0. MR868467.
 ^ Terlaky (1985) and Terlaky (1987)
 ^ Wang (1987)
 ^ ^{a} ^{b} Terlaky & Zhang (1993)
 ^ ^{a} ^{b} Bland, Robert G. (May 1977). "New finite pivoting rules for the simplex method". Mathematics of Operations Research 2 (2): 103–107. doi:10.1287/moor.2.2.103. JSTOR 3689647. MR459599.
 ^ Bland's rule is also related to an earlier least-index rule, which was proposed by Katta G. Murty for the linear complementarity problem, according to Fukuda & Namiki (1994).
 ^ ^{a} ^{b} Klafszky & Terlaky (1991)
 ^ More generally, for the simplex algorithm, the expected number of steps is proportional to D for linear-programming problems that are randomly drawn from the Euclidean unit sphere, as proved by Borgwardt and by Smale.
 ^ ^{a} ^{b} Fukuda & Namiki (1994)
 ^ ^{a} ^{b} ^{c} ^{d} ^{e} ^{f} ^{g} Björner, Anders; Las Vergnas, Michel; Sturmfels, Bernd; White, Neil; Ziegler, Günter (1999). "10 Linear programming". Oriented Matroids. Cambridge University Press. pp. 417–479. doi:10.1017/CBO9780511586507. ISBN 9780521777506. MR1744046. http://ebooks.cambridge.org/ebook.jsf?bid=CBO9780511586507.
 ^ ^{a} ^{b} ^{c} den Hertog, D.; Roos, C.; Terlaky, T. (1 July 1993). "The linear complementarity problem, sufficient matrices, and the criss-cross method" (pdf). Linear Algebra and its Applications 187: 1–14. doi:10.1016/0024-3795(93)90124-7. http://arno.uvt.nl/show.cgi?fid=72943.
 ^ ^{a} ^{b} ^{c} Csizmadia, Zsolt; Illés, Tibor (2006). "New criss-cross type algorithms for linear complementarity problems with sufficient matrices" (pdf). Optimization Methods and Software 21 (2): 247–266. doi:10.1080/10556780500095009. MR2195759. http://www.cs.elte.hu/opres/orr/download/ORR03_1.pdf.
 ^ Cottle, R. W.; Pang, J.-S.; Venkateswaran, V. (March–April 1989). "Sufficient matrices and the linear complementarity problem". Linear Algebra and its Applications 114–115: 231–249. doi:10.1016/0024-3795(89)90463-1. MR986877. http://www.sciencedirect.com/science/article/pii/0024379589904631.
 ^ Avis & Fukuda (1992, p. 297)
 ^ The v vertices in a simple arrangement of n hyperplanes in D dimensions can be found in O(n^{2}Dv) time and O(nD) space complexity.
 ^ The theory of oriented matroids was initiated by R. Tyrrell Rockafellar. (Rockafellar 1969):
Rockafellar, R. T. (1969). "The elementary vectors of a subspace of R^{N} (1967)". In R. C. Bose and T. A. Dowling. Combinatorial Mathematics and its Applications. The University of North Carolina Monograph Series in Probability and Statistics. Chapel Hill, North Carolina: University of North Carolina Press. pp. 104–127. MR278972. PDF reprint. http://www.math.washington.edu/~rtr/papers/rtrElemVectors.pdf.
Rockafellar was influenced by the earlier studies of Albert W. Tucker and George J. Minty. Tucker and Minty had studied the sign patterns of the matrices arising through the pivoting operations of Dantzig's simplex algorithm.
 ^ ^{a} ^{b} Todd, Michael J. (1985). "Linear and quadratic programming in oriented matroids". Journal of Combinatorial Theory. Series B 39 (2): 105–133. doi:10.1016/0095-8956(85)90042-5. MR811116.
References
 Avis, David; Fukuda, Komei (December 1992). "A pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra". Discrete and Computational Geometry 8 (ACM Symposium on Computational Geometry (North Conway, NH, 1991)): 295–313. doi:10.1007/BF02293050. MR1174359. http://www.springerlink.com/content/m7440v7p3440757u/.
 Csizmadia, Zsolt; Illés, Tibor (2006). "New criss-cross type algorithms for linear complementarity problems with sufficient matrices" (pdf). Optimization Methods and Software 21 (2): 247–266. doi:10.1080/10556780500095009. MR2195759. http://www.cs.elte.hu/opres/orr/download/ORR03_1.pdf.
 Fukuda, Komei; Namiki, Makoto (March 1994). "On extremal behaviors of Murty's least index method". Mathematical Programming 64 (1): 365–370. doi:10.1007/BF01582581. MR1286455.
 Fukuda, Komei; Terlaky, Tamás (1997). Liebling, Thomas M.; de Werra, Dominique. eds. "Criss-cross methods: A fresh view on pivot algorithms". Mathematical Programming: Series B (Amsterdam: North-Holland Publishing Co.) 79 (Papers from the 16th International Symposium on Mathematical Programming held in Lausanne, 1997): 369–395. doi:10.1016/S0025-5610(97)00062-2. MR1464775. Postscript preprint.
 den Hertog, D.; Roos, C.; Terlaky, T. (1 July 1993). "The linear complementarity problem, sufficient matrices, and the criss-cross method" (pdf). Linear Algebra and its Applications 187: 1–14. doi:10.1016/0024-3795(93)90124-7. MR1221693. http://arno.uvt.nl/show.cgi?fid=72943.
 Illés, Tibor; Szirmai, Ákos; Terlaky, Tamás (1999). "The finite criss-cross method for hyperbolic programming". European Journal of Operational Research 114 (1): 198–214. doi:10.1016/S0377-2217(98)00049-6. Zbl 0953.90055. Postscript preprint. http://www.sciencedirect.com/science/article/B6VCT3W3DFHBM/2/4b0e2fcfc2a71e8c14c61640b32e805a.
 Klafszky, Emil; Terlaky, Tamás (June 1991). "The role of pivoting in proving some fundamental theorems of linear algebra". Linear Algebra and its Applications 151: 97–118. doi:10.1016/0024-3795(91)90356-2. MR1102142. http://www.cas.mcmaster.ca/~terlaky/files/pivotla.ps.
 Roos, C. (1990). "An exponential example for Terlaky's pivoting rule for the criss-cross simplex method". Mathematical Programming. Series A 46 (1). doi:10.1007/BF01585729. MR1045573.
 Terlaky, T. (1985). "A convergent criss-cross method". Optimization: A Journal of Mathematical Programming and Operations Research 16 (5): 683–690. doi:10.1080/02331938508843067. ISSN 0233-1934. MR798939.
 Terlaky, Tamás (1987). "A finite criss-cross method for oriented matroids". Journal of Combinatorial Theory. Series B 42 (3): 319–327. doi:10.1016/0095-8956(87)90049-9. ISSN 0095-8956. MR888684.
 Terlaky, Tamás; Zhang, Shu Zhong (1993). "Pivot rules for linear programming: A survey on recent theoretical developments". Annals of Operations Research (Springer Netherlands) 46–47 (Degeneracy in optimization problems): 203–233. doi:10.1007/BF02096264. ISSN 0254-5330. MR1260019. PDF file of (1991) preprint.
 Wang, Zhe Min (1987). "A finite conformal-elimination free algorithm over oriented matroid programming". Chinese Annals of Mathematics (Shuxue Niankan B Ji). Series B 8 (1): 120–125. ISSN 0252-9599. MR886756.
External links
 Komei Fukuda (ETH Zentrum, Zurich) with publications
 Tamás Terlaky (Lehigh University) with publications