Interior point method

Interior point methods (also referred to as barrier methods) are a class of algorithms for solving linear and nonlinear convex optimization problems.

These algorithms were inspired by Karmarkar's algorithm, developed by Narendra Karmarkar in 1984 for linear programming. The basic element of the method is a self-concordant barrier function used to encode the convex set. In contrast to the simplex method, which moves along the boundary of the feasible region, an interior point method reaches an optimal solution by traversing the interior of the feasible region.
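
As a concrete illustration of the barrier idea (a standard textbook formulation, not tied to any particular implementation): for a linear program over the polytope {x : Ax ≤ b}, the usual choice is the logarithmic barrier, and the method minimizes a weighted combination of objective and barrier, tracing the so-called central path as the weight t grows:

```latex
\min_x \; c^{\top} x \ \text{ s.t. } Ax \le b
\qquad\leadsto\qquad
x^{*}(t) = \arg\min_x \Big( t\, c^{\top} x \;-\; \sum_{i=1}^{m} \log\big(b_i - a_i^{\top} x\big) \Big),
```

where a_i^T is the i-th row of A. The barrier term blows up at the boundary, so every iterate stays strictly inside the feasible region, and x*(t) converges to an optimal solution as t → ∞.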

Any convex optimization problem can be transformed into minimizing (or maximizing) a linear function over a convex set, as shown below. The idea of encoding the feasible set using a barrier and designing barrier methods was studied in the early 1960s by, amongst others, Anthony V. Fiacco and Garth P. McCormick. These ideas were mainly developed for general nonlinear programming, but they were later abandoned because more competitive methods for this class of problems emerged (e.g. sequential quadratic programming).
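
The transformation referred to here is the standard epigraph reformulation: minimizing a convex function f over a convex set C is equivalent to minimizing a linear objective, the new scalar variable t, over the convex set of pairs (x, t) with f(x) ≤ t:

```latex
\min_{x \in C} \; f(x)
\quad\Longleftrightarrow\quad
\min_{(x,\,t)} \; t \quad \text{s.t. } x \in C,\; f(x) \le t .
```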

Yurii Nesterov and Arkadii Nemirovskii identified a special class of such barriers, the self-concordant barriers, that can be used to encode any convex set. These barriers guarantee that the number of iterations of the algorithm is bounded by a polynomial in the dimension and in the accuracy of the solution.
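
For reference, the defining inequality of this class and the resulting complexity guarantee take the following standard form in the Nesterov–Nemirovskii theory (the barrier parameter ν, for example ν = m for the logarithmic barrier on m linear inequalities, controls the bound):

```latex
\big|\, D^{3}\phi(x)[h,h,h] \,\big| \;\le\; 2\,\big( D^{2}\phi(x)[h,h] \big)^{3/2}
\qquad\Rightarrow\qquad
O\!\big(\sqrt{\nu}\,\log(1/\varepsilon)\big)\ \text{iterations for accuracy } \varepsilon .
```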

Karmarkar's breakthrough revitalized the study of interior point methods and barrier problems, showing that it was possible to create an algorithm for linear programming that has polynomial complexity and, moreover, is competitive with the simplex method. (Khachiyan's ellipsoid method was already a polynomial-time algorithm, but it was far too slow to be of practical interest.)

The class of primal-dual path-following interior point methods is considered the most successful.
Mehrotra's predictor-corrector algorithm provides the basis for most implementations of this class of methods.
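
To make the structure of this class concrete, the following Python sketch implements one simple primal-dual path-following iteration for a standard-form linear program. It is a deliberately minimal variant, not Mehrotra's predictor-corrector; the function name, starting point, and parameter choices are ad hoc illustrations.

```python
import numpy as np

def primal_dual_lp(A, b, c, tol=1e-8, max_iter=100, sigma=0.1):
    """Illustrative primal-dual path-following sketch for the LP
    min c @ x  subject to  A @ x = b, x >= 0.

    Not Mehrotra's predictor-corrector: each iteration takes one
    damped Newton step toward the central-path point with duality
    measure sigma * mu.
    """
    m, n = A.shape
    x = np.ones(n)      # strictly positive (possibly infeasible) start
    s = np.ones(n)
    y = np.zeros(m)

    for _ in range(max_iter):
        r_p = A @ x - b              # primal residual
        r_d = A.T @ y + s - c        # dual residual
        mu = x @ s / n               # duality measure
        if max(np.linalg.norm(r_p), np.linalg.norm(r_d), mu) < tol:
            break

        # Newton step for the perturbed KKT system:
        #   A x = b,  A^T y + s = c,  x_i * s_i = sigma * mu
        J = np.zeros((2 * n + m, 2 * n + m))
        J[:m, :n] = A
        J[m:m + n, n:n + m] = A.T
        J[m:m + n, n + m:] = np.eye(n)
        J[m + n:, :n] = np.diag(s)
        J[m + n:, n + m:] = np.diag(x)
        rhs = np.concatenate([-r_p, -r_d, sigma * mu - x * s])
        d = np.linalg.solve(J, rhs)
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]

        # Damped step: keep x and s strictly inside the positive orthant,
        # which is exactly the "traverse the interior" property.
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.99 * np.min(v[neg] / -dv[neg]))
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds

    return x, y, s
```

On a small problem such as min −x₁ − 2x₂ subject to x₁ + x₂ + x₃ = 1, x ≥ 0 (i.e. A = [[1, 1, 1]], b = [1], c = [−1, −2, 0]), the iterates approach the optimum x ≈ (0, 1, 0) from inside the positive orthant; Mehrotra's method improves on this skeleton by adding a predictor (affine-scaling) step and an adaptive choice of sigma.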

References

* Karmarkar, Narendra (1984). "A New Polynomial Time Algorithm for Linear Programming", "Combinatorica", Vol. 4, No. 4, pp. 373–395.
* Mehrotra, Sanjay (1992). "On the implementation of a primal-dual interior point method", "SIAM Journal on Optimization", Vol. 2, No. 4, pp. 575–601.
* Nocedal, Jorge; Wright, Stephen (1999). "Numerical Optimization". New York, NY: Springer. ISBN 0-387-98793-2.
* Wright, Stephen (1997). "Primal-Dual Interior-Point Methods". Philadelphia, PA: SIAM. ISBN 0-89871-382-X.

