Interior Point Methods
Our standard linear programming problem is

(P)    minimize cT x  subject to  Ax = b,  x ≥ 0.
Here, c and x are n vectors, b is an m vector, and A is an m × n matrix. The simplex algorithm moves from basic feasible solution to basic feasible solution. The basic feasible solutions are extreme points of the feasible region for (P). Furthermore, the simplex algorithm moves from one extreme point along an edge of the feasible region to another extreme point. If the feasible region is very big with many extreme points, the simplex algorithm may take a long time before it finds the optimal extreme point. Thus, we may try to use an algorithm which cuts across the middle of the feasible region. Such a method is called an interior point method. There are many different interior point algorithms; we will just consider one: the primal affine scaling algorithm.
Notation: We will let e = [1,…, 1]T .
The boundaries of the feasible region are given by the nonnegativity inequalities x ≥ 0. Therefore, a point is in the interior of the feasible region if it satisfies

Ax = b,  x > 0.
Assume we know a strictly positive point xk which is feasible for the problem (P). The best direction to move may appear to be the direction -c, because this will result in the largest decrease in the objective function value for the smallest change in x. There are two problems with this direction:
1. The direction -c need not lie in the nullspace of A, so the new point may violate the constraint Ax = b.
2. If xk is close to the boundary of the feasible region, only a very small step can be taken before some component of x becomes negative.
Consider the problem
so here c = [-1,-1, 1, 1]T . This problem has optimal solution x* = (1, 2, 0, 0) with value -3. The point xk = (0.8, 0.1, 0.2, 1.9) is feasible in this problem, with objective function value 1.2. We want to look at points of the form
x = xk - βc

for some steplength β, because the objective function value of such a point is

cT(xk - βc) = cT xk - β cT c,
because cT c is the dot product between the vector c and itself, that is, it is just the square of the length of the vector c. Now, the third component of xk is only 0.2, and it shrinks as we move in the direction -c, so any steplength β > 0.2 takes us outside the feasible region: we can only make a very small improvement before hitting the boundary.
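As a quick numeric check (a sketch using only the data stated above; no constraints are involved yet):

```python
import numpy as np

# Cost vector and strictly positive feasible point from (LP1).
c = np.array([-1.0, -1.0, 1.0, 1.0])
xk = np.array([0.8, 0.1, 0.2, 1.9])

# Moving to xk - beta*c lowers the objective by exactly beta * cT c.
beta = 0.1
x_new = xk - beta * c
drop = c @ xk - c @ x_new
print(drop, beta * (c @ c))       # both 0.4 (up to rounding)

# But the components of x that shrink limit the steplength:
# x3 = 0.2 - beta must stay positive, so beta < 0.2.
beta_max = min(xk[i] / c[i] for i in range(4) if c[i] > 0)
print(beta_max)                   # 0.2
```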
To get around this difficulty, we rescale the problem so that we can take a step of length at least one in any direction. Thus, we introduce a diagonal matrix

Dk = diag(xk1, ..., xkn),

whose diagonal entries are the components of the current iterate xk.
We then scale the constraint matrix and the objective function, getting Ak := ADk and ck := Dkc. For the problem (LP1), with xk as given above, we get

ck = Dkc = [-0.8, -0.1, 0.2, 1.9]T.
We then get a rescaled problem

(Pk)    minimize ckT z  subject to  Akz = b,  z ≥ 0,

where a point z is feasible in (Pk) exactly when x = Dkz is feasible in (P); in particular, z = e corresponds to x = Dke = xk.
The rescaled version of (LP1) is
Notice that z = (1, 1, 1, 1) is feasible in (LP1k), with objective function value 1.2. Because all of the components of z are equal to one, we can take a reasonable step in any direction.
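Here is a small sketch of the rescaling. The notes do not reproduce the constraints of (LP1), so the matrix A and right-hand side b below are a hypothetical reconstruction, chosen only to be consistent with the feasible points and objective values stated above:

```python
import numpy as np

# Data from (LP1). A and b are a hypothetical reconstruction consistent
# with the stated feasible points; the notes' actual constraints may differ.
c = np.array([-1.0, -1.0, 1.0, 1.0])
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
xk = np.array([0.8, 0.1, 0.2, 1.9])
b = A @ xk                        # [1.0, 2.0]

Dk = np.diag(xk)                  # scaling matrix
Ak = A @ Dk                       # scaled constraint matrix
ck = Dk @ c                       # scaled costs: [-0.8, -0.1, 0.2, 1.9]

e = np.ones(4)
print(np.allclose(Ak @ e, b))     # True: e is feasible in the scaled problem
print(ck @ e)                     # 1.2, the same objective value as c @ xk
```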
We list several properties of (Pk). It may help you to understand these properties if you see how they apply to the problem (LP1).
1. The point e is feasible in (Pk), because Ake = ADke = Axk = b, and it is strictly positive.
2. A point z is feasible in (Pk) if and only if x = Dkz is feasible in (P).
3. Corresponding points have equal objective function values: ckT z = cT Dkz = cT x.
Conceptually, an iteration of the algorithm has the form:
1. Scale: form Dk, Ak = ADk and ck = Dkc, so that the current iterate becomes z = e.
2. Choose a direction d and a steplength β in the scaled problem, and set znew = e + βd.
3. Unscale: set xk+1 = Dkznew.
We would like to take d = -ck, in order to decrease the objective function value as quickly as possible. Unfortunately, this may lead to a point which violates the constraint Akz = b. For example, in the problem (LP1k), taking β = 0.5 gives the point

znew = e - 0.5ck = [1.4, 1.05, 0.9, 0.05]T,

which violates the constraint Akznew = b. What we need is a direction d with Akd = 0; then Ak(e + βd) = Ake = b, because e is feasible in (Pk). Therefore, we project the direction ck onto the nullspace of Ak. Algebraically, this means that we take

d = -PAkck,  where  PAk = I - AkT(AkAkT)-1Ak
and I denotes the identity matrix. (Aside: We assume that the rows of A are linearly independent. Under this assumption, the projection matrix is well-defined. Note that we need this assumption to hold in order to be able to obtain a basic feasible solution.) For the problem (LP1), the direction d is obtained by substituting Ak and ck into this formula.
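The projection can be sketched numerically. Again, A is a hypothetical stand-in for (LP1)'s constraints (the actual matrix is not shown in these notes):

```python
import numpy as np

# Hypothetical reconstruction of (LP1)'s data (A is assumed, not from the notes).
c = np.array([-1.0, -1.0, 1.0, 1.0])
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
xk = np.array([0.8, 0.1, 0.2, 1.9])
Dk = np.diag(xk)
Ak, ck = A @ Dk, Dk @ c

# Projection matrix onto the nullspace of Ak, and the projected direction.
P = np.eye(4) - Ak.T @ np.linalg.solve(Ak @ Ak.T, Ak)
d = -P @ ck

print(np.allclose(P @ P, P))      # True: P is idempotent
print(np.allclose(Ak @ d, 0))     # True: d lies in the nullspace of Ak
```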
We have znew = e + βd, and we need to have znew > 0. Thus, we need to select β so that 1 + βdi > 0 for each component i. So, we need to pick

β < min{-1/di : di < 0},

so we take

β = 0.9 min{-1/di : di < 0} = 0.9 / max{-di : di < 0}.
This will result in moving 0.9 of the way to the boundary of the feasible region of the rescaled problem (Pk). Notice that one component of znew will be equal to 0.1.
(This discussion on choosing the steplength should remind you of the minimum ratio rule in the simplex algorithm.)
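A sketch of the steplength rule, continuing the same hypothetical reconstruction of (LP1) used above:

```python
import numpy as np

# d is the projected direction computed from an assumed constraint
# matrix A (the notes' actual constraints are not shown).
c = np.array([-1.0, -1.0, 1.0, 1.0])
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
Dk = np.diag([0.8, 0.1, 0.2, 1.9])
Ak, ck = A @ Dk, Dk @ c
P = np.eye(4) - Ak.T @ np.linalg.solve(Ak @ Ak.T, Ak)
d = -P @ ck

# Move 0.9 of the way to the boundary of {z : z >= 0}.
beta = 0.9 / np.max(-d[d < 0])
z_new = np.ones(4) + beta * d

print(z_new.min())                # 0.1: the limiting component stops at 0.1
```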
Recall that when we looked at the revised simplex method, we found out that it was not necessary to calculate the whole of the simplex tableau. In the same spirit of efficiency, it is not necessary to calculate the complete projection matrix PAk. In practice, we would perform one iteration as follows:
1. Solve the m × m system (AkAkT)w = Akck for w.
2. Set d = -(ck - AkTw); this is exactly -PAkck.
3. Compute the steplength β and set znew = e + βd.
4. Unscale: set xk+1 = Dkznew.
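One complete iteration along these lines might look like this in NumPy (A and b are again the hypothetical reconstruction, not the notes' actual data):

```python
import numpy as np

# One primal affine scaling iteration without forming the projection matrix.
# A and b are a hypothetical reconstruction of (LP1)'s constraints.
c = np.array([-1.0, -1.0, 1.0, 1.0])
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([1.0, 2.0])
xk = np.array([0.8, 0.1, 0.2, 1.9])

Dk = np.diag(xk)
Ak, ck = A @ Dk, Dk @ c

# Solve the small m x m system (Ak Ak^T) w = Ak ck instead of forming P.
w = np.linalg.solve(Ak @ Ak.T, Ak @ ck)
d = -(ck - Ak.T @ w)                    # equals -P @ ck

beta = 0.9 / np.max(-d[d < 0])          # 0.9 of the way to the boundary
x_next = Dk @ (np.ones(4) + beta * d)   # unscale back to the original problem

print(np.allclose(A @ x_next, b), x_next.min() > 0)   # still strictly feasible
print(c @ xk, c @ x_next)               # objective decreases: 1.2 -> about 0.74
```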
The algorithm does decrease the objective function at each iteration. This is because a projection matrix M is idempotent, that is, it is symmetric and MM = M. We then get that the objective function value of znew is

ckT znew = ckT e + β ckT d = ckT e - β(PAkck)T(PAkck) = ckT e - β dT d,

which is strictly smaller than the current value ckT e whenever d ≠ 0.
It can be shown that the iterates xk converge to an optimal solution of (P); this is Dikin's convergence result for the affine scaling algorithm.
A possible dual iterate is

vk = (AkAkT)-1Akck.
Notice that the dual slacks are then c - ATvk, and that dk = -Dk(c - ATvk). Thus, element i of dk is the negative of the product of the primal variable xki and the corresponding dual slack, so the elements of dk give a measure of the complementary slackness. At optimality, we will have dual feasibility with this choice. It can be shown that this choice is actually dual optimal if xk is optimal for the primal problem.
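These identities are easy to verify numerically (hypothetical A as before, since the notes do not reproduce (LP1)'s constraints):

```python
import numpy as np

# Check the dual-iterate identities with a hypothetical reconstruction
# of (LP1)'s constraint matrix (not given in the notes).
c = np.array([-1.0, -1.0, 1.0, 1.0])
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
xk = np.array([0.8, 0.1, 0.2, 1.9])
Dk = np.diag(xk)
Ak, ck = A @ Dk, Dk @ c

vk = np.linalg.solve(Ak @ Ak.T, Ak @ ck)   # dual iterate
slack = c - A.T @ vk                        # dual slacks
d = -(ck - Ak.T @ vk)                       # projected direction

print(np.allclose(d, -Dk @ slack))          # True: d = -Dk (c - A^T vk)
print(xk * slack)    # the products x_i^k * slack_i; these approach 0 at optimality
```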
We stop the simplex algorithm when all the costs in the tableau are nonnegative. Unfortunately, we don’t have such a simple stopping rule for the primal affine scaling algorithm. However, there are some possibilities:
1. Stop when the improvement in the objective function value from one iteration to the next becomes very small.
2. Stop when the estimated duality gap cTxk - bTvk, computed with the dual iterate vk above, becomes sufficiently small.
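As a sketch, such tests might be coded as follows; the tolerances and the relative scaling are illustrative choices, not prescribed by the notes, and A and b are the hypothetical reconstruction used earlier:

```python
import numpy as np

# Two heuristic stopping tests for affine scaling, on the hypothetical
# reconstruction of (LP1); tolerances are illustrative only.
c = np.array([-1.0, -1.0, 1.0, 1.0])
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([1.0, 2.0])
xk = np.array([0.8, 0.1, 0.2, 1.9])

Dk = np.diag(xk)
Ak, ck = A @ Dk, Dk @ c
vk = np.linalg.solve(Ak @ Ak.T, Ak @ ck)

def small_gap(x, v, tol=1e-6):
    """Estimated duality gap c^T x - b^T v, relative to 1 + |c^T x|."""
    return abs(c @ x - b @ v) / (1.0 + abs(c @ x)) < tol

def small_step(x_old, x_new, tol=1e-8):
    """Relative change in the iterate between consecutive iterations."""
    return np.linalg.norm(x_new - x_old) / (1.0 + np.linalg.norm(x_old)) < tol

print(small_gap(xk, vk))   # False: this early iterate is still far from optimal
```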
The algorithm still sometimes makes very slow progress when it gets close to a corner of the feasible region. One possible remedy is to try explicitly to center the iterates. We do this by moving in the direction

d = PAke, the projection of the vector e onto the nullspace of Ak,
in the scaled problem (Pk). Notice that this direction is trying to increase all components simultaneously. An algorithm will usually do some steps using the regular dk we found earlier and then mix in some centering steps.
In the example we were looking at earlier, the centering direction is obtained by projecting e onto the nullspace of Ak.
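Computed with the hypothetical reconstruction of (LP1) used above (so the numbers illustrate the idea rather than the notes' actual example):

```python
import numpy as np

# Centering direction: e projected onto the nullspace of Ak, for the
# hypothetical reconstruction of (LP1) (A is assumed, not from the notes).
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
xk = np.array([0.8, 0.1, 0.2, 1.9])
Ak = A @ np.diag(xk)

e = np.ones(4)
P = np.eye(4) - Ak.T @ np.linalg.solve(Ak @ Ak.T, Ak)
d_center = P @ e

print(np.allclose(Ak @ d_center, 0))   # True: a feasible direction
print(d_center)
```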
Current commercial implementations combine affine steps with centering and duality in predictor-corrector methods. In addition to excellent computational performance, variants of these methods converge in a number of iterations that is polynomial in the size of the problem.
Serious research on interior point methods started with the publication of Karmarkar’s paper. Karmarkar made some very bold claims for the performance of his algorithm, which were somewhat borne out by subsequent results. The primal affine scaling method is a simplification of Karmarkar’s original algorithm that was proposed by several researchers in 1986, including Vanderbei, Meketon and Freedman. This method was in fact discovered by the Russian Dikin in 1967, although this discovery remained unknown in the west until much later. A good (mathematical) description of the algorithm can be found in the references below.
Textbooks on interior point methods include [1, 4, 5, 6, 9].
Current computational results indicate that interior point methods outperform the simplex algorithm on large problems.
It seems that an interior point method will solve almost any problem in at most 40 iterations. The number of iterations required grows very slowly as the size of the problem increases. By contrast, the simplex algorithm seems to need approximately 1.5m iterations to solve a problem with m constraints. The work required at each iteration of an interior point method is far larger than the amount of work needed to compute a simplex pivot, so there is a trade-off: an interior point algorithm needs far fewer iterations, but it takes considerably more time per iteration, when compared to the simplex algorithm.
 I. I. Dikin. Iterative solution of problems of linear and quadratic programming. Doklady Akademiia Nauk SSSR, 174:747–748, 1967. English Translation: Soviet Mathematics Doklady, 1967, Volume 8, pp. 674–675.
 R. J. Vanderbei and J. C. Lagarias. I. I. Dikin’s convergence result for the affine–scaling algorithm. In J. C. Lagarias and M. J. Todd, editors, Mathematical Developments Arising from Linear Programming: Proceedings of a Joint Summer Research Conference held at Bowdoin College, Brunswick, Maine, USA, June/July 1988, volume 114 of Contemporary Mathematics, pages 109–119. American Mathematical Society, Providence, Rhode Island, USA, 1990.