**Linear Program Equivalence**

Note that the standard form of a linear program takes the following form:

objective: maximize $c^T x$

constraints: $Ax \le b$

and $x \ge 0$,

where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $c \in \mathbb{R}^n$. Yet linear program constraints may take many forms, some of which are described below.
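As a concrete illustration, a standard-form LP can be solved directly with `scipy.optimize.linprog`. The matrix, right-hand side, and objective below are hypothetical values chosen for this sketch; note that `linprog` minimizes, so we negate $c$ to maximize.

```python
# Hypothetical standard-form LP: maximize c^T x  s.t.  Ax <= b, x >= 0.
# scipy's linprog minimizes, so we pass -c and negate the reported optimum.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])   # constraint matrix (m = 2, n = 2)
b = np.array([4.0, 2.0])     # right-hand side
c = np.array([3.0, 2.0])     # objective coefficients

# bounds of (0, None) encode the nonnegativity constraint x >= 0
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
assert res.status == 0       # solver converged to an optimum
print(-res.fun)              # optimal value of the maximization problem
```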

- $Ax = b$ and $x \ge 0$. Define $A'$, $b'$ as follows:

$$A' = \begin{bmatrix} A \\ -A \end{bmatrix}, \qquad b' = \begin{bmatrix} b \\ -b \end{bmatrix}$$

If vector $x \ge 0$ satisfies $A'x \le b'$, then

$$Ax \le b$$

and

$$-Ax \le -b,$$

which implies $Ax = b$. Thus, this linear program may be expressed in standard form.
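The stacking trick above can be sketched in a few lines of numpy; the matrix and vector below are hypothetical values for illustration.

```python
# Sketch: rewrite the system Ax = b, x >= 0 as A'x <= b', x >= 0 by
# stacking A over -A (and b over -b); A'x <= b' then forces both
# Ax <= b and Ax >= b, i.e. Ax = b.
import numpy as np

def equality_to_inequality(A, b):
    A_prime = np.vstack([A, -A])
    b_prime = np.concatenate([b, -b])
    return A_prime, b_prime

A = np.array([[1.0, 2.0]])
b = np.array([3.0])
A_p, b_p = equality_to_inequality(A, b)

x = np.array([1.0, 1.0])          # satisfies Ax = b since 1 + 2*1 = 3
assert np.all(A_p @ x <= b_p + 1e-12)
```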

- $Ax \le b$ and $x$ is unbounded (i.e., $x$ is not restricted to be nonnegative). Define $A'$ as follows:

$$A' = \begin{bmatrix} A & -A \end{bmatrix}$$

If vector $x' = \begin{bmatrix} x^+ \\ x^- \end{bmatrix} \ge 0$ satisfies $A'x' \le b$, then

$$A(x^+ - x^-) \le b,$$

where $x = x^+ - x^-$ and $x^+, x^- \ge 0$. Thus, this linear program may be expressed in standard form.
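The split of a free variable into a difference of two nonnegative ones can be checked numerically; the matrix and vector below are hypothetical.

```python
# Sketch: a free vector x is represented as x = x_plus - x_minus with
# x_plus, x_minus >= 0, and A is widened to [A  -A] so that the product
# [A  -A] @ (x_plus, x_minus) equals A @ x.
import numpy as np

def free_to_nonnegative(A, x):
    x_plus = np.maximum(x, 0.0)       # positive part of x
    x_minus = np.maximum(-x, 0.0)     # negative part of x
    A_prime = np.hstack([A, -A])
    x_prime = np.concatenate([x_plus, x_minus])
    return A_prime, x_prime

A = np.array([[2.0, -1.0]])
x = np.array([-3.0, 4.0])             # has a negative entry
A_p, x_p = free_to_nonnegative(A, x)
assert np.all(x_p >= 0)
assert np.allclose(A_p @ x_p, A @ x)  # products agree
```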

We will use these linear program equivalences when we discuss the simplex algorithm. But first, we return to Farkas' lemma and provide an intuition for its proof, which you will formalize in your homework.

**Farkas’ Lemma**

For any matrix $A \in \mathbb{R}^{m \times n}$ and vector $b \in \mathbb{R}^m$, exactly one of the following holds:

- (1) There exists $x \in \mathbb{R}^n$ such that

$$Ax = b$$

and

$$x \ge 0.$$

- (2) There exists $y \in \mathbb{R}^m$ such that

$$y^T A \ge 0$$

and

$$y^T b < 0.$$

Here’s intuition for the proof. Let $A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}$, where $a_j$ is the $j$-th column vector. For all vectors $x$ with nonnegative entries, the

matrix-vector product $Ax = \sum_{j=1}^{n} x_j a_j$ must lie in the conical hull induced by the column vectors $a_1, \ldots, a_n$. There are two possibilities:

- $b \in \mathrm{cone}(a_1, \ldots, a_n)$. Clearly, this implies there exists vector $x \ge 0$ such that $Ax = b$, so case (1) holds.
- $b \notin \mathrm{cone}(a_1, \ldots, a_n)$. This implies there exists a hyperplane through the origin, with normal vector $y$, that separates the vectors in the conical hull from the vector $b$. (You will prove this implication in your homework.) As such, there exists $y$ such that $y^T a_j \ge 0$ for $j$ from $1$ to $n$ and $y^T b < 0$. In other words, $y^T A \ge 0$ and $y^T b < 0$, so case (2) holds.
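The two alternatives can be illustrated numerically in $\mathbb{R}^2$; the columns, targets, and separating normal below are hypothetical choices for this sketch.

```python
# Numerical illustration of the two Farkas alternatives in R^2, using the
# (hypothetical) columns a_1 = (1, 0), a_2 = (0, 1).
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0]])

# Case (1): b = (2, 3) lies in the conical hull of the columns,
# witnessed by x = (2, 3) >= 0 with Ax = b.
b1 = np.array([2.0, 3.0])
x = np.array([2.0, 3.0])
assert np.all(x >= 0) and np.allclose(A @ x, b1)

# Case (2): b = (-1, 1) lies outside the cone; the hyperplane through
# the origin with normal y = (1, 0) separates it from the cone:
# y^T a_j >= 0 for every column, while y^T b < 0.
b2 = np.array([-1.0, 1.0])
y = np.array([1.0, 0.0])
assert np.all(y @ A >= 0) and y @ b2 < 0
```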

This lemma equips us to prove part (4) of the duality theorem.

**Linear Programming Duality Theorem**

Given primal program $P$ and dual program $D$, there are four possibilities:

- $D$ is unbounded, which implies $P$ is infeasible
- $P$ is unbounded, which implies $D$ is infeasible
- $P$, $D$ are both infeasible
- $P$, $D$ are both feasible

A linear program is infeasible if no solution satisfies all of its constraints, and it is unbounded if its objective function can be driven arbitrarily far toward positive infinity (for a maximization) or negative infinity (for a minimization) while remaining feasible. We shall demonstrate that in case (4), $P$ and $D$ have the same optimal value.

Let $P$ be a primal program that is feasible and bounded, with objective $\max c^T x$ and constraints $Ax \le b$ and $x \ge 0$. Also, let $x^*$ be a vector that satisfies the constraints and achieves the optimum $\alpha = c^T x^*$.

The dual program $D$ has objective $\min b^T y$ and constraints $A^T y \ge c$ and $y \ge 0$.

Let $\epsilon > 0$. Construct modified matrix $\hat{A}$ and vector $\hat{b}$ as follows:

$$\hat{A} = \begin{bmatrix} A \\ -c^T \end{bmatrix}, \qquad \hat{b} = \begin{bmatrix} b \\ -(\alpha + \epsilon) \end{bmatrix}$$

Since $c^T x \le \alpha < \alpha + \epsilon$ for all feasible vectors $x$, no $x \ge 0$ satisfies $\hat{A} x \le \hat{b}$, so case (1) in (the inequality variant of) Farkas' lemma cannot hold. As such, there exists vector $\hat{y} \ge 0$ such that

$$\hat{y}^T \hat{A} \ge 0$$

and

$$\hat{y}^T \hat{b} < 0,$$

i.e.,

$$y^T A \ge z c^T$$

and

$$y^T b < z(\alpha + \epsilon),$$

where $\hat{y} = \begin{bmatrix} y \\ z \end{bmatrix}$ with $y \in \mathbb{R}^m$ and $z \ge 0$. Since $P$ is feasible, we must have $z > 0$: if $z = 0$, then $y^T A \ge 0$ and $y^T b < 0$, which by Farkas' lemma would contradict the existence of a feasible solution to $P$.

Now, since $z > 0$, dividing both $y^T A \ge z c^T$ and $y^T b < z(\alpha + \epsilon)$ by $z$ yields

$$(y/z)^T A \ge c^T$$

and

$$(y/z)^T b < \alpha + \epsilon.$$

Thus, $y/z$ is a feasible solution to $D$ with objective value less than $\alpha + \epsilon$. By weak duality, every feasible solution to $D$ has objective value at least $\alpha$.

Therefore, setting $\epsilon$ arbitrarily close to $0$ implies that the optimal value of $D$ equals $\alpha$.
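Strong duality can be sanity-checked numerically: solve a primal and its dual and compare optima. The LP below is a hypothetical example; `linprog` minimizes, so signs are flipped where needed.

```python
# Sketch: check strong duality on a hypothetical LP
#   primal: maximize c^T x  s.t.  Ax <= b, x >= 0
#   dual:   minimize b^T y  s.t.  A^T y >= c, y >= 0
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])

# primal: negate c because linprog minimizes
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
# dual: A^T y >= c rewritten as -A^T y <= -c
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

assert primal.status == 0 and dual.status == 0
assert abs(-primal.fun - dual.fun) < 1e-8   # optimal values coincide
```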

**Solving Linear Programs**

Let $P$ be a primal program that is feasible and bounded, with objective $\max c^T x$ and constraints $Ax \le b$ and $x \ge 0$. Consider the row vectors $a_1^T, \ldots, a_m^T$ in $A$.

Note that $\{x : a_i^T x \le b_i\}$ is a half-space in $\mathbb{R}^n$, and the feasible region for $P$ is the intersection of these half-spaces (together with the half-spaces $x_i \ge 0$). Let $F$ denote this feasible region, which is, by assumption, bounded and nonempty.

Clearly, $\{x : a_i^T x = b_i\}$ is a hyperplane in $\mathbb{R}^n$. A vertex is the intersection of $n$ hyperplanes with linearly independent normal vectors $\vec{a}_i$. We will also note here that the feasible region may be expressed as the convex hull of these special points. Since the objective function is linear, a vector with optimal objective value must occur at one of these vertices.
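The vertex fact suggests a (very inefficient) brute-force solver: intersect every choice of $n$ constraint hyperplanes, keep the feasible intersection points, and return the best objective value. The LP data below are hypothetical; this is only to illustrate the geometry, not a practical method.

```python
# Brute-force sketch: enumerate candidate vertices of {x : Ax <= b, x >= 0}
# by intersecting every choice of n constraint hyperplanes (including the
# hyperplanes x_i = 0), then take the feasible point with the best
# objective.  Exponential in general.
import itertools
import numpy as np

def best_vertex(A, b, c):
    n = A.shape[1]
    # append the nonnegativity constraints -x <= 0 as extra rows
    G = np.vstack([A, -np.eye(n)])
    h = np.concatenate([b, np.zeros(n)])
    best_val, best_x = -np.inf, None
    for rows in itertools.combinations(range(len(G)), n):
        sub_G, sub_h = G[list(rows)], h[list(rows)]
        if abs(np.linalg.det(sub_G)) < 1e-9:
            continue                       # normals not linearly independent
        x = np.linalg.solve(sub_G, sub_h)  # intersection of n hyperplanes
        if np.all(G @ x <= h + 1e-9) and c @ x > best_val:
            best_val, best_x = c @ x, x    # feasible vertex, better value
    return best_val, best_x

A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])
val, x = best_vertex(A, b, c)
assert abs(val - 10.0) < 1e-6              # optimum attained at vertex (2, 2)
```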

We proceed with an example to demonstrate an approach to solving LPs known as the **simplex method.**

**Simplex Method Example**

objective: maximize $z = 2x_1 + 3x_2$

constraints: $x_1 \le 2$, $x_2 \le 3$, and $x_1, x_2 \ge 0$

Let us convert these inequality constraints to equalities using some ‘slack’ variables as follows

objective: maximize $z = 2x_1 + 3x_2$

constraints: $x_1 + s_1 = 2$, $x_2 + s_2 = 3$, and $x_1, x_2, s_1, s_2 \ge 0$

We will begin by starting with some feasible solution, say $(x_1, x_2, s_1, s_2) = (0, 0, 2, 3)$, and rewrite the nonzero (**basic**) variables in terms of the zero (**nonbasic**) variables

- objective: $z = 2x_1 + 3x_2$
constraints: $s_1 = 2 - x_1$, $s_2 = 3 - x_2$

Now we try to increase $x_1$. Note that $s_1 = 2 - x_1 \ge 0$ requires $x_1 \le 2$. Setting $x_1 = 2$ produces solution $(2, 0, 0, 3)$

- objective: $z = 4 - 2s_1 + 3x_2$
constraints: $x_1 = 2 - s_1$, $s_2 = 3 - x_2$

Next we try to increase $x_2$ since it has a positive coefficient in the objective function. Note that $s_2 = 3 - x_2 \ge 0$ requires $x_2 \le 3$. Setting $x_2 = 3$ produces solution $(2, 3, 0, 0)$

- objective: $z = 13 - 2s_1 - 3s_2$
constraints: $x_1 = 2 - s_1$, $x_2 = 3 - s_2$

Note that no coefficient in the objective function is positive, so no improvements can be made, and the solution $(x_1, x_2) = (2, 3)$ is an optimal vector with value 13.

Sometimes the initial solution cannot be found by setting the original variables to zero. Consider the following constraints:

$$x_1 + 2x_2 \ge 2, \qquad 3x_1 + x_2 \ge 3, \qquad x_1, x_2 \ge 0$$

After adding slack variables, they become

$$x_1 + 2x_2 - s_1 = 2, \qquad 3x_1 + x_2 - s_2 = 3, \qquad x_1, x_2, s_1, s_2 \ge 0$$

However, we cannot set $x_1 = x_2 = 0$, because that forces $s_1 = -2$ and $s_2 = -3$, while the slack variables should be nonnegative. So we add yet another two artificial variables $z_1, z_2 \ge 0$ to construct another LP: $\min z_1 + z_2$ s.t.

$$x_1 + 2x_2 - s_1 + z_1 = 2, \qquad 3x_1 + x_2 - s_2 + z_2 = 3.$$

If the original LP is feasible, then the optimal value of this auxiliary LP is $0$, and its optimal solution also gives us an initial feasible solution. If the optimal value is strictly positive, then the original LP is not feasible. Now we proceed to formalize the simplex method.
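This phase-one construction can be sketched numerically. The pair of $\ge$ constraints below is a hypothetical example, written with slacks and artificial variables exactly as described; the system is feasible iff the auxiliary optimum is $0$.

```python
# Phase-one sketch on a hypothetical pair of constraints
#   x1 + 2*x2 >= 2,   3*x1 + x2 >= 3,   x >= 0,
# written with slacks as  x1 + 2*x2 - s1 = 2,  3*x1 + x2 - s2 = 3.
# Artificial variables z1, z2 are added and z1 + z2 is minimized; the
# original system is feasible iff the phase-one optimum is 0.
import numpy as np
from scipy.optimize import linprog

# columns: x1, x2, s1, s2, z1, z2
A_eq = np.array([[1.0, 2.0, -1.0,  0.0, 1.0, 0.0],
                 [3.0, 1.0,  0.0, -1.0, 0.0, 1.0]])
b_eq = np.array([2.0, 3.0])
cost = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0])  # minimize z1 + z2

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
assert res.status == 0
assert abs(res.fun) < 1e-9   # optimum 0 => original constraints feasible
```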

**Simplex Method**

Given matrix $A \in \mathbb{R}^{m \times n}$, constraint vector $b \in \mathbb{R}^m$, and objective vector $c \in \mathbb{R}^n$,

- Augment constraints by adding slack variables to produce equalities, i.e. for constraint $i$ from $1$ to $m$,

$$\sum_{j=1}^{n} a_{ij} x_j + x_{n+i} = b_i,$$

where $x_{n+i} \ge 0$ for $i$ from $1$ to $m$. Also, rewrite the objective function as follows,

$$z = \sum_{j=1}^{n+m} c_j x_j,$$

where $c_{n+i} = 0$ for $i$ from $1$ to $m$.

- Find an initial feasible solution $x$, using the method specified above if necessary, and rewrite the basic variables in terms of the nonbasic variables
- While there exists some positive objective coefficient, select some variable $x_j$ where $c_j > 0$. This variable is also known as the *entering* variable
- Using the constraints, increase the selected variable until some basic variable becomes $0$. This variable is also known as the *leaving* variable
- Rewrite the constraints so that the basic variables are in terms of the nonbasic variables, and replace the selected variable in the objective function using its defining constraint
- Output solution vector $x$
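The steps above can be sketched as a dense-tableau implementation. This is a minimal sketch, assuming $b \ge 0$ so that the slack variables give the initial feasible solution $x = 0$; it omits an anti-cycling rule, so it is not a production solver. The example LP at the bottom is hypothetical.

```python
# Minimal dense-tableau simplex sketch (assumes b >= 0 so slacks form the
# initial basis).  Solves: maximize c^T x  s.t.  Ax <= b, x >= 0.
import numpy as np

def simplex(A, b, c):
    m, n = A.shape
    # tableau: m constraint rows [A | I | b], last row is objective [c | 0 | 0]
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[m, :n] = c
    basis = list(range(n, n + m))           # slack variables start basic
    while True:
        j = int(np.argmax(T[m, :-1]))       # entering variable
        if T[m, j] <= 1e-9:
            break                           # no positive coefficient: optimal
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-9 else np.inf
                  for i in range(m)]        # how far x_j can be increased
        i = int(np.argmin(ratios))          # leaving variable's row
        if ratios[i] == np.inf:
            raise ValueError("LP is unbounded")
        T[i] /= T[i, j]                     # pivot: rewrite constraints
        for k in range(m + 1):
            if k != i:
                T[k] -= T[k, j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    for i, j in enumerate(basis):
        x[j] = T[i, -1]
    return x[:n], -T[m, -1]                 # solution and optimal value

A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])
x, val = simplex(A, b, c)
assert abs(val - 10.0) < 1e-8
```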

Now we prove that once simplex terminates, the output $x^*$ is the vector that maximizes our objective function $c^T x$. Note that when simplex terminates, the objective function looks like

$$z = v + \sum_{j=1}^{n+m} \bar{c}_j x_j,$$

where $\bar{c}_j \le 0$ for all $j$. Consider the dual program $D$ given as

objective: minimize $b^T y$

constraints: $A^T y \ge c$

and $y \ge 0$.

Define vector $y^*$ with entries $y^*_i = -\bar{c}_{n+i}$ for $i$ from $1$ to $m$. Clearly, $y^* \ge 0$. We still need to demonstrate $b^T y^* = v$ and $A^T y^* \ge c$.

- $b^T y^* = v$. Note that, substituting $x_{n+i} = b_i - \sum_{j=1}^{n} a_{ij} x_j$, the identity

$$c^T x = v + \sum_{j=1}^{n} \bar{c}_j x_j + \sum_{i=1}^{m} \bar{c}_{n+i}\left(b_i - \sum_{j=1}^{n} a_{ij} x_j\right)$$

holds for every $x$. Setting $x = \vec{0}$ yields $0 = v + \sum_{i=1}^{m} \bar{c}_{n+i} b_i = v - b^T y^*$, and so $b^T y^* = v$.

- $A^T y^* \ge c$. Since the identity above holds for every $x$, matching the coefficient of each $x_j$ yields

$$c_j = \bar{c}_j - \sum_{i=1}^{m} \bar{c}_{n+i} a_{ij} = \bar{c}_j + (A^T y^*)_j.$$

Since $\bar{c}_j \le 0$, we have $c_j \le (A^T y^*)_j$ for each $j$, i.e., $A^T y^* \ge c$.

And so, $y^*$ is a feasible solution to $D$ with objective value $b^T y^* = v = c^T x^*$. By weak duality, $x^*$ must be optimal.