 Dual problem

In constrained optimization, it is often possible to convert the primal problem (i.e. the original form of the optimization problem) to a dual form, which is termed a dual problem. Usually the term dual problem refers to the Lagrangian dual problem, but other dual problems are used, for example the Wolfe dual problem and the Fenchel dual problem. The Lagrangian dual problem is obtained by forming the Lagrangian, using nonnegative Lagrange multipliers to add the constraints to the objective function, and then solving for the primal variable values that minimize the Lagrangian. This solution gives the primal variables as functions of the Lagrange multipliers, which are called dual variables, so that the new problem is to maximize the objective function with respect to the dual variables under the derived constraints on the dual variables (including at least the nonnegativity constraints).
The solution of the dual problem provides a bound on the solution of the primal problem: a lower bound when the primal is a minimization problem, and an upper bound when it is a maximization problem.^{[1]} In general, however, the optimal values of the primal and dual problems need not be equal; their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition, so when the problem is convex and satisfies a constraint qualification, the optimal value of the primal problem is given by the dual problem.
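As a minimal numeric sketch of this construction (the problem is chosen purely for illustration, and NumPy is assumed available), consider minimizing x^2 subject to x ≥ 1. The Lagrangian is x^2 + λ(1 − x) with λ ≥ 0; minimizing it over x gives x = λ/2, i.e. the primal variable as a function of the dual variable, and the resulting dual function g(λ) = λ − λ^2/4 is then maximized:

```python
import numpy as np

# Primal: minimize x^2 subject to x >= 1 (constraint written as 1 - x <= 0).
# Lagrangian: L(x, lam) = x^2 + lam*(1 - x), with multiplier lam >= 0.
# Minimizing over x yields x = lam/2, so the dual function is:
def g(lam):
    return lam - lam**2 / 4.0

p_star = 1.0                            # primal optimum, attained at x = 1
lams = np.linspace(0.0, 4.0, 81)        # sample of nonnegative multipliers

# Weak duality: every dual value lower-bounds the primal optimum.
assert np.all(g(lams) <= p_star + 1e-12)

lam_star = lams[np.argmax(g(lams))]
print(lam_star, g(lam_star))            # dual optimum near lam = 2, g = 1 = p* (zero gap)
```

Here the duality gap is zero because the problem is convex and strictly feasible (Slater's condition holds).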
Duality principle
In optimization theory, the duality principle states that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem.
In general, given two dual pairs of separated locally convex spaces (X, X^{*}) and (Y, Y^{*}), and a function f : X → ℝ ∪ {+∞}, we can define the primal problem as

inf_{x ∈ X} f(x).
If there are constraint conditions, these can be incorporated into the objective by replacing f with f + I_{constraints}, where I_{constraints} is the indicator function of the constraint set. Then let F : X × Y → ℝ ∪ {+∞} be a perturbation function such that F(x, 0) = f(x).^{[2]}
The duality gap is the difference between the two sides of the weak duality inequality

sup_{y^{*} ∈ Y^{*}} −F^{*}(0, y^{*}) ≤ inf_{x ∈ X} F(x, 0),

where F^{*} is the convex conjugate in both variables and sup denotes the supremum.^{[2]}^{[3]}^{[4]}
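The convex conjugate appearing here, f^{*}(y) = sup_x [xy − f(x)], can be approximated on a grid. The following sketch (grid bounds and the example function f(x) = x^2 are arbitrary choices for illustration) checks the closed-form conjugate f^{*}(y) = y^2/4 and the Fenchel–Young inequality f(x) + f^{*}(y) ≥ xy:

```python
import numpy as np

# Grid-based convex conjugate f*(y) = sup_x [ x*y - f(x) ], for f(x) = x^2.
xs = np.linspace(-10.0, 10.0, 20001)

def conjugate(y):
    return np.max(xs * y - xs**2)

# Closed form: the supremum is attained at x = y/2, giving f*(y) = y^2 / 4.
ys = np.linspace(-3.0, 3.0, 61)
assert np.allclose([conjugate(y) for y in ys], ys**2 / 4.0, atol=1e-6)

# Fenchel-Young inequality: f(x) + f*(y) >= x*y for all x, y.
for x in (-1.0, 0.5, 2.0):
    for y in ys:
        assert x**2 + conjugate(y) >= x * y - 1e-9
```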
The linear case
Linear programming problems are optimization problems in which the objective function and the constraints are all linear. In the primal problem, the objective function is a linear combination of n variables. There are m constraints, each of which places an upper bound on a linear combination of the n variables. The goal is to maximize the value of the objective function subject to the constraints. A "solution" is a vector (a list) of n values that achieves the maximum value for the objective function.
In the dual problem, the objective function is a linear combination of the m values that are the limits in the m constraints from the primal problem. There are n "dual constraints", each of which places a lower bound on a linear combination of m "dual variables".
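This primal/dual pairing can be sketched with scipy.optimize.linprog (the coefficients below are a hypothetical example, not taken from the text; linprog minimizes, so the maximization primal is solved by negating its objective):

```python
import numpy as np
from scipy.optimize import linprog

# Primal (maximization): max 3*x1 + 5*x2
#   subject to x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x >= 0.
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])
c = np.array([3.0, 5.0])

# linprog minimizes, so negate the objective to maximize.
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Dual (minimization): min b^T y subject to A^T y >= c, y >= 0.
# Rewrite A^T y >= c as -A^T y <= -c for linprog's <= convention.
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)

print(-primal.fun, dual.fun)   # the two optimal values coincide (strong LP duality)
```

For linear programs that are feasible and bounded, the primal and dual optimal values always agree, so the printed values match.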
Relationship between the primal problem and the dual problem
In the linear case, in the primal problem, from each suboptimal point that satisfies all the constraints, there is a direction or subspace of directions to move that increases the objective function. Moving in any such direction is said to remove "slack" between the candidate solution and one or more constraints. An "infeasible" value of the candidate solution is one that exceeds one or more of the constraints.
In the dual problem, the dual vector multiplies the constants that determine the positions of the constraints in the primal. Varying the dual vector in the dual problem is equivalent to revising the upper bounds in the primal problem. The lowest upper bound is sought. That is, the dual vector is minimized in order to remove slack between the candidate positions of the constraints and the actual optimum. An infeasible value of the dual vector is one that is too low. It sets the candidate positions of one or more of the constraints in a position that excludes the actual optimum.
This intuition is made formal by the equations in Linear programming: Duality.
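One formal consequence of this slack intuition is complementary slackness: at optimality, a constraint with positive slack carries a zero dual variable, and a positive dual variable forces its constraint to be tight. A numeric check, using a small hypothetical LP whose optima are hard-coded for illustration:

```python
import numpy as np

# Hypothetical LP data: max c^T x subject to A x <= b, x >= 0.
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])
c = np.array([3.0, 5.0])

x_opt = np.array([2.0, 6.0])        # primal optimum (assumed known for this sketch)
y_opt = np.array([0.0, 1.5, 1.0])   # corresponding dual optimum

slack = b - A @ x_opt               # per-constraint primal slack: [2, 0, 0]

# Complementary slackness: positive dual variable <=> zero primal slack.
assert np.allclose(y_opt * slack, 0.0)

# The objective values agree: c^T x = b^T y, so there is no duality gap.
assert np.isclose(c @ x_opt, b @ y_opt)
```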
Economic interpretation
If we interpret our primal LP problem as a classical "Resource Allocation" problem, its dual can be interpreted as a "Resource Valuation" problem.
The nonlinear case
In nonlinear programming, the constraints are not necessarily linear. Nonetheless, many of the same principles apply.
To ensure that the global maximum of a nonlinear problem can be identified easily, the problem formulation often requires that the functions be convex and have compact lower level sets.
This is the significance of the Karush–Kuhn–Tucker conditions. They provide necessary conditions for identifying local optima of nonlinear programming problems. Additional conditions (constraint qualifications) are needed so that the direction toward an optimal solution can be characterized. An optimal solution here is a local optimum, which may not be a global optimum.
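The KKT conditions can be verified directly at a candidate point. A sketch on a small convex problem (the problem and its solution are chosen for illustration): minimize (x1 − 1)^2 + (x2 − 2)^2 subject to x1 + x2 ≤ 2, whose optimum is the projection of (1, 2) onto the half-plane:

```python
import numpy as np

# Problem: minimize (x1 - 1)^2 + (x2 - 2)^2  subject to  x1 + x2 - 2 <= 0.
# Candidate optimum (projection of (1, 2) onto the feasible half-plane)
# and its Lagrange multiplier:
x = np.array([0.5, 1.5])
mu = 1.0

grad_f = 2.0 * (x - np.array([1.0, 2.0]))   # gradient of the objective at x
grad_g = np.array([1.0, 1.0])               # gradient of the constraint
g = x.sum() - 2.0                           # constraint value at x

# The four KKT conditions at (x, mu):
assert np.allclose(grad_f + mu * grad_g, 0.0)  # stationarity
assert g <= 1e-12                              # primal feasibility
assert mu >= 0.0                               # dual feasibility
assert abs(mu * g) < 1e-12                     # complementary slackness
```

Because this problem is convex and the constraint is affine, satisfying the KKT conditions is also sufficient for global optimality.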
The strong Lagrangian principle: Lagrange duality
Given a nonlinear programming problem in standard form

minimize f₀(x)
subject to f_i(x) ≤ 0, i ∈ {1, …, m}
           h_i(x) = 0, i ∈ {1, …, p}

with the domain D ⊂ ℝⁿ having nonempty interior, the Lagrangian function L : ℝⁿ × ℝ^{m} × ℝ^{p} → ℝ is defined as

L(x, λ, ν) = f₀(x) + Σ_{i=1}^{m} λ_i f_i(x) + Σ_{i=1}^{p} ν_i h_i(x).

The vectors λ and ν are called the dual variables or Lagrange multiplier vectors associated with the problem. The Lagrange dual function g : ℝ^{m} × ℝ^{p} → ℝ is defined as

g(λ, ν) = inf_{x ∈ D} L(x, λ, ν).
The dual function g is concave, even when the initial problem is not convex, because it is a pointwise infimum of functions that are affine in (λ, ν). The dual function yields lower bounds on the optimal value p^{*} of the initial problem; for any λ ≥ 0 and any ν we have g(λ, ν) ≤ p^{*}.
If a constraint qualification such as Slater's condition holds and the original problem is convex, then we have strong duality, i.e. d^{*} = p^{*}, where d^{*} is the optimal value of the Lagrange dual problem.
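Both properties, concavity of g and the lower bound g(λ) ≤ p^{*}, can be checked numerically even on a non-convex problem. A grid-based sketch (the quartic objective and all bounds are arbitrary illustration choices):

```python
import numpy as np

# Non-convex primal: minimize f(x) = x^4 - 3*x^2  subject to  x - 1 <= 0.
# Dual function g(lam) = inf_x [ f(x) + lam*(x - 1) ], approximated on a grid.
xs = np.linspace(-3.0, 3.0, 4001)

def g(lam):
    return np.min(xs**4 - 3 * xs**2 + lam * (xs - 1.0))

lams = np.linspace(0.0, 10.0, 101)
vals = np.array([g(l) for l in lams])

# Concavity of g (pointwise infimum of affine functions of lam):
# no midpoint value falls below the chord between its neighbours.
assert np.all(vals[1:-1] >= 0.5 * (vals[:-2] + vals[2:]) - 1e-9)

# Weak duality: every g(lam) lower-bounds the primal optimum p*.
feasible = xs[xs <= 1.0]
p_star = np.min(feasible**4 - 3 * feasible**2)
assert np.all(vals <= p_star + 1e-9)
```

Note that concavity of g holds here despite the primal objective being non-convex, which is exactly the point of the statement above.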
Convex problems
For a convex minimization problem with inequality constraints,

minimize_{x} f(x)
subject to g_i(x) ≤ 0, i = 1, …, m,

the Lagrangian dual problem is

maximize_{u} inf_{x} ( f(x) + Σ_{j=1}^{m} u_j g_j(x) )
subject to u_i ≥ 0, i = 1, …, m,

where the expression within parentheses is the Lagrange dual function. Provided that the functions f and g₁, …, g_m are continuously differentiable, the infimum occurs where the gradient is equal to zero. The problem

maximize_{x, u} f(x) + Σ_{j=1}^{m} u_j g_j(x)
subject to ∇f(x) + Σ_{j=1}^{m} u_j ∇g_j(x) = 0
           u_i ≥ 0, i = 1, …, m
is called the Wolfe dual problem. This problem may be difficult to deal with computationally, because the objective function is not concave in (u,x) and the equality constraint is nonlinear in general, so the Wolfe dual problem is typically a nonconvex optimization problem and weak duality holds.^{[5]}
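A one-dimensional sketch of the Wolfe dual (the problem is chosen for illustration): for the primal min x^2 subject to 1 − x ≤ 0, the stationarity constraint 2x − u = 0 lets u be eliminated, reducing the Wolfe dual to a one-variable search:

```python
import numpy as np

# Primal: minimize x^2 subject to 1 - x <= 0 (optimum p* = 1 at x = 1).
# Wolfe dual: maximize x^2 + u*(1 - x) over (x, u)
#   subject to stationarity 2x - u = 0 and u >= 0.
# Substituting u = 2x (which forces x >= 0) gives the one-variable
# objective 2x - x^2, maximized here by a grid search.
xs = np.linspace(0.0, 3.0, 3001)
wolfe_vals = 2 * xs - xs**2

i = np.argmax(wolfe_vals)
print(xs[i], wolfe_vals[i])   # x near 1, value near 1: matches the primal optimum p* = 1
```

For this convex, differentiable problem the Wolfe dual attains the primal optimal value; the nonconvexity warning in the text concerns the general case, where the joint objective in (u, x) need not be concave.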
History
According to George Dantzig, the duality theorem for linear optimization was conjectured by John von Neumann immediately after Dantzig presented the linear programming problem. Von Neumann noted that he was using information from his game theory, and conjectured that the two-person zero-sum matrix game was equivalent to linear programming. Rigorous proofs were first published in 1948 by Albert W. Tucker and his group. (Dantzig's foreword to Nering and Tucker, 1993)
See also
Notes
 ^ Boyd, Stephen P.; Vandenberghe, Lieven (2004). Convex Optimization (PDF). Cambridge University Press. ISBN 9780521833783. http://www.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf. Retrieved October 15, 2011.
 ^ ^{a} ^{b} Radu Ioan Boţ; Gert Wanka; Sorin-Mihai Grad (2009). Duality in Vector Optimization. Springer. ISBN 9783642028854.
 ^ Ernö Robert Csetnek (2010). Overcoming the failure of the classical generalized interior-point regularity conditions in convex optimization. Applications of the duality theory to enlargements of maximal monotone operators. Logos Verlag Berlin GmbH. ISBN 9783832525033.
 ^ Zălinescu, C. Convex analysis in general vector spaces. World Scientific Publishing Co., Inc. pp. 106–113. ISBN 9812380671. MR1921556.
 ^ Geoffrion, A. M. (1971). "Duality in Nonlinear Programming: A Simplified Applications-Oriented Development". SIAM Review 13 (1): 1–37. JSTOR 2028848.
References
Books
 Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin (1993). Network Flows: Theory, Algorithms and Applications. Prentice Hall. ISBN 013617549X.
 Bertsekas, Dimitri P. (1999). Nonlinear Programming: 2nd Edition. Athena Scientific. ISBN 1886529000.
 Bonnans, J. Frédéric; Gilbert, J. Charles; Lemaréchal, Claude; Sagastizábal, Claudia A. (2006). Numerical optimization: Theoretical and practical aspects. Universitext (Second revised ed. of translation of 1997 French ed.). Berlin: Springer-Verlag. pp. xiv+490. doi:10.1007/978-3-540-35447-5. ISBN 354035445X. MR2265882. http://www.springer.com/mathematics/applications/book/9783540354451.
 William J. Cook, William H. Cunningham, William R. Pulleyblank, Alexander Schrijver; Combinatorial Optimization; John Wiley & Sons; 1 edition (November 12, 1997); ISBN 047155894X.
 Dantzig, G.B., 1963. Linear Programming and Extensions, Princeton University Press, Princeton, NJ.
 Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (1993). Convex analysis and minimization algorithms, Volume I: Fundamentals. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. 305. Berlin: Springer-Verlag. pp. xviii+417. ISBN 3540568506.
 Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (1993). "14 Duality for Practitioners". Convex analysis and minimization algorithms, Volume II: Advanced theory and bundle methods. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. 306. Berlin: Springer-Verlag. pp. xviii+346. ISBN 3540568522.
 Lasdon, Leon S. (2002). Optimization theory for large systems (reprint of the 1970 Macmillan ed.). Mineola, New York: Dover Publications, Inc.. pp. xiii+523. MR1888251.
 Eugene Lawler (2001). "4.5. Combinatorial Implications of Max-Flow Min-Cut Theorem, 4.6. Linear Programming Interpretation of Max-Flow Min-Cut Theorem". Combinatorial Optimization: Networks and Matroids. Dover. ISBN 0486414531.
 Lemaréchal, Claude (2001). "Lagrangian relaxation". In Michael Jünger and Denis Naddef. Computational combinatorial optimization: Papers from the Spring School held in Schloß Dagstuhl, May 15–19, 2000. Lecture Notes in Computer Science. 2241. Berlin: Springer-Verlag. pp. 112–156. doi:10.1007/3-540-45586-8_4. ISBN 3540428771. MR1900016.
 Minoux, M. (1986). Mathematical programming: Theory and algorithms (Translated by Steven Vajda from the (1983 Paris: Dunod) French ed.). Chichester: A Wiley-Interscience Publication. John Wiley & Sons, Ltd. pp. xxviii+489. ISBN 0471901709. MR868279. (2008 Second ed., in French: Programmation mathématique: Théorie et algorithmes. Editions Tec & Doc, Paris, 2008. xxx+711 pp. ISBN-13: 9782743010003. MR2571910).
 Nering, E.D and Tucker, A.W., 1993, Linear Programming and Related Problems, Academic Press, Boston, MA.
 Christos H. Papadimitriou and Kenneth Steiglitz, Combinatorial Optimization: Algorithms and Complexity; Dover Pubns; (paperback, Unabridged edition, July 1998) ISBN 0486402584.
Articles
 Everett, Hugh, III (1963). "Generalized Lagrange multiplier method for solving problems of optimum allocation of resources". Operations Research 11 (3): 399–417. doi:10.1287/opre.11.3.39. JSTOR 168028. MR152360. http://or.journal.informs.org/cgi/reprint/11/3/399.
 Kiwiel, Krzysztof C.; Larsson, Torbjörn; Lindberg, P. O. (August 2007). "Lagrangian relaxation via ball-step subgradient methods". Mathematics of Operations Research 32 (3): 669–686. doi:10.1287/moor.1070.0261. MR2348241. http://mor.journal.informs.org/cgi/content/abstract/32/3/669.
Wikimedia Foundation. 2010.