The Simplex Method


The Simplex method is a search procedure that sifts through the set of basic feasible solutions, one at a time, until the optimal basic feasible solution (whenever it exists) is identified. The method is essentially an efficient implementation of both Procedure Search and Procedure Corner Points discussed in the previous section.

Although the method does not require any explicit reliance on the graph, we will first explain the basic idea behind the Simplex method graphically, using the example from the last section. Our discussion will refer to Figure LP-5 below.

We will begin the search at (any) one of the corner points and then ascend, as if we were climbing a hill, toward the optimal corner point along the edges of the feasible region. In this particular example, the Simplex method will begin at point A. Our first task is to determine whether or not point A is optimal. Observe that there are two edges that emanate from point A, one ending at point B and the other at point E. In other words, there are two corner points that are adjacent to point A. Therefore, we will compare the objective-function value (OFV) at point A against those at points B and E. In this case, both B and E happen to have an objective-function value that is higher than that at point A. Therefore, point A is not optimal, and we can choose to travel to either one of these two points. Suppose we choose to go to point E, as indicated by an arrow in Figure LP-5. Then, after arriving at point E, we again need to find out whether or not E is optimal. To determine this, we simply repeat the process of comparing the objective-function value at point E against those at the adjacent corner points A and D. Clearly, of these two points, only D needs to be examined, since we just travelled from point A, which is known to be inferior. Upon checking the objective-function value at point D, we find that it is better than that at point E; we therefore travel next to point D (also indicated by an arrow in Figure LP-5). On arrival at point D, we again compare its OFV against those at points C and E. Since the OFV at D turns out to be better than that at C (and we already know it is better than that at E), we finally conclude that point D is optimal.
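For readers who prefer to see this search written out explicitly, the following is a minimal sketch (in Python) of the corner-point climb just described. The OFVs for points A, E, and D (0, 8, and 9) are those of our example; the values assumed for B and C are hypothetical placeholders, chosen only so that the search follows the path A, E, D indicated in Figure LP-5.

    # A sketch of the corner-point climb described above.
    # OFVs for A, E, and D (0, 8, 9) are from the example; the values for B and C
    # are hypothetical, chosen only so the search follows the path A -> E -> D.
    ofv = {"A": 0, "B": 5, "C": 7, "D": 9, "E": 8}
    adjacent = {"A": ["B", "E"], "B": ["A", "C"], "C": ["B", "D"],
                "D": ["C", "E"], "E": ["A", "D"]}

    def climb(start):
        current = start
        while True:
            # Corner points adjacent to the current one with a strictly higher OFV.
            better = [p for p in adjacent[current] if ofv[p] > ofv[current]]
            if not better:
                return current                      # no improving neighbour: optimal
            current = max(better, key=lambda p: ofv[p])

    print(climb("A"))                               # prints D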

It is important to realize that in concluding that point D is optimal, we implicitly relied on the fact that the objective function is linear in the decision variables. To see this, consider the simple optimization problem: Maximize (x-2)^2 subject to 0 ≤ x ≤ 3. Here, the objective function f(x) = (x-2)^2 is nonlinear, and its plot is shown in Figure LP-6 below.

Now, imagine ourselves standing at the point with x=3, where the objective-function value equals 1 (= (3-2)^2); and suppose we plan to travel to the origin, where x=0. If we look in the direction of the origin, the objective-function value will first experience a decline and then an increase, with a final value of 4 (= (0-2)^2). This initial decline means that the point at x=3 is locally optimal. However, since the final objective-function value is higher than 1, we see that this point is not globally optimal. Clearly, if we had a linear objective function, such as g(x) = 2x (also depicted in Figure LP-6), then this phenomenon could not occur, since once the objective function starts a decline in a given direction, the decline will continue at a constant rate regardless of how far we travel. Hence, a locally optimal solution for a linear program will always be globally optimal. In our example above, point D is locally optimal; it is therefore globally optimal.
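As a quick numerical illustration (a sketch only, tabulating the two functions shown in Figure LP-6), we can evaluate f(x) = (x-2)^2 and g(x) = 2x as x travels from 3 toward the origin:

    # Travelling from x = 3 toward the origin, f first declines and then rises,
    # while the linear g declines at a constant rate throughout.
    f = lambda x: (x - 2) ** 2
    g = lambda x: 2 * x
    xs = [3.0, 2.5, 2.0, 1.5, 1.0, 0.5, 0.0]
    print([f(x) for x in xs])   # [1.0, 0.25, 0.0, 0.25, 1.0, 2.25, 4.0]
    print([g(x) for x in xs])   # [6.0, 5.0, 4.0, 3.0, 2.0, 1.0, 0.0]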

We now solve the previous example using the Simplex method; the solution is given at the link below.

The Simplex Method: An Example

In each iteration of the Simplex method, the primary algebraic task is to transform, using Gaussian elimination, the constraint equations from a given configuration to a new configuration that corresponds to the next basic feasible solution. Such transformations are to be repeated many times in the course of the solution of a problem. It is therefore desirable to execute them in a manner that is as efficient as possible. For this purpose, we will now introduce a very compact way of representing an equation system. The idea, simply, is to present the coefficients of the variables in the constraints in the form of a table. An equation system presented in this manner is said to be in the tabular form. Since we are dealing with equations that arise during the execution of the Simplex method, such tables will also be referred to as Simplex tableaus. With the tableau representation, manipulation of the equations in the constraint set is tantamount to manipulation of rows of coefficients in a table.
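As an illustration of the algebra involved, here is a minimal sketch (in Python) of a single pivot operation on a tableau stored as a list of rows. The small two-row tableau in the usage line is hypothetical and is not the one from our example; it is included only to show how the function would be called.

    # A sketch of one pivot: each row holds the coefficients of all variables
    # followed by the right-hand-side constant.
    def pivot(tableau, pivot_row, pivot_col):
        # Scale the pivot row so the pivot element becomes 1.
        p = tableau[pivot_row][pivot_col]
        tableau[pivot_row] = [a / p for a in tableau[pivot_row]]
        # Subtract multiples of the pivot row to zero out the pivot column elsewhere.
        for r, row in enumerate(tableau):
            if r != pivot_row and row[pivot_col] != 0:
                factor = row[pivot_col]
                tableau[r] = [a - factor * b for a, b in zip(row, tableau[pivot_row])]
        return tableau

    # Hypothetical tableau with columns x1, x2, s1, s2, RHS (not the example's data).
    t = [[1.0, 2.0, 1.0, 0.0, 8.0],
         [3.0, 1.0, 0.0, 1.0, 9.0]]
    pivot(t, 0, 1)   # bring x2 into the basis via row 0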

As an illustration, we will revisit the previous example and carry out the iterations of the Simplex method using the tabular form. This exercise, given at the next link, serves as a good summary and review of what we did in the earlier discussion.

The Simplex Method: The Tabular Form

The Simplex method is also often referred to as the Simplex algorithm. An algorithm is an iterative procedure for solving a class of problems. In this case, we are interested in solving linear programs. A desirable property of an algorithm is that it is finite, meaning that it is guaranteed to generate a solution to any problem instance in the specified class in a finite number of iterations. Is our Simplex algorithm finite? The answer is a qualified yes.

Observe that in our previous example, we explicitly visited three basic feasible solutions (points A, E, and D), and each of these is associated with a single Simplex tableau. The objective-function values corresponding to this sequence of basic feasible solutions are: 0, 8, and 9. Notice that this sequence is strictly increasing. A little bit of reflection on the pivot operation should convince you that this is "almost always" the case, and that exceptions can occur only when the right-hand-side constant of the pivot row in a tableau happens to be zero. Now, let us assume that in the course of the Simplex iterations, we will never encounter such an exception. Then, since the objective-function values are strictly increasing, we will never visit any basic feasible solution more than once (Why?). Finally, since the total number of basic feasible solutions is finite, we see that it is guaranteed that an optimal solution can be found in a finite number of iterations.

Problems where the above-mentioned exceptions can occur are said to be degenerate. We will return to a discussion about degenerate problems in the next section.

Assignment No. 3

Chapter 4: 4.1-5, 4.3-4, 4.3-8, and 4.3-9.

For problem 4.3-4, you can use the tabular form. I also highly recommend that you use the Interactive OR Routines, which can be downloaded here. These routines will relieve you of heavy number crunching. A quick tour is provided here: IOR Tutorial - An Example. This example works with the file here.