Linear Programming II

Linear Program Equivalence

Note that the standard form of a linear program takes the following form


\max\{ \vec{c} \cdot \vec{x} \}


A \vec{x} \leq \vec{b} and \vec{x} \geq 0

where A \in \mathbb{R}^{m \times n}, \vec{b} \in \mathbb{R}^{m}, and \vec{c} \in \mathbb{R}^{n}. Yet linear program constraints may take many forms, some of which are described below

  • A \vec{x} = \vec{b} and \vec{x} \geq 0. Define A' \in \mathbb{R}^{2m \times n}, \vec{b}' \in \mathbb{R}^{2m} as follows

A' = \begin{bmatrix}A \\- A\end{bmatrix}

\vec{b}' = \begin{bmatrix}\vec{b} \\- \vec{b}\end{bmatrix}

If vector \vec{x} \geq 0 satisfies A' \vec{x} \leq \vec{b}', then

A \vec{x} \leq \vec{b} and -A \vec{x} \leq -\vec{b}

which implies A \vec{x} = \vec{b}. Thus, this linear program may be expressed in standard form.

  • A \vec{x} \leq \vec{b} and \vec{x} \in \mathbb{R}^{n} is free (unrestricted in sign). Define A' \in \mathbb{R}^{m \times 2n} as follows

A' = \begin{bmatrix}A & -A\end{bmatrix}

If vector \vec{x} = \big[\vec{y}, \vec{z}\big] \geq 0 satisfies A' \vec{x} \leq \vec{b}, then

A' \vec{x} = A\vec{y} - A \vec{z} = A (\vec{y} - \vec{z}) \leq \vec{b}

where \vec{y}, \vec{z} \geq 0. Since every \vec{x} \in \mathbb{R}^{n} may be written as \vec{y} - \vec{z} with \vec{y}, \vec{z} \geq 0, this linear program may be expressed in standard form.
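Both transformations are purely mechanical. The sketch below builds the stacked and column-doubled matrices with plain nested lists; the function names eq_to_leq and split_free are illustrative, not from the notes.

```python
def eq_to_leq(A, b):
    """Replace A x = b with the pair A x <= b and -A x <= -b (stack [A; -A])."""
    A2 = [row[:] for row in A] + [[-v for v in row] for row in A]
    b2 = b[:] + [-v for v in b]
    return A2, b2

def split_free(A):
    """Replace a free x with x = y - z, y, z >= 0: columns become [A, -A]."""
    return [row + [-v for v in row] for row in A]

A = [[2, 3], [1, -1]]
b = [5, 4]

A2, b2 = eq_to_leq(A, b)   # 2m x n inequality system
A3 = split_free(A)         # m x 2n system over nonnegative variables

assert A2 == [[2, 3], [1, -1], [-2, -3], [-1, 1]] and b2 == [5, 4, -5, -4]
assert A3 == [[2, 3, -2, -3], [1, -1, -1, 1]]
```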

We will use these linear program equivalences when we discuss the simplex algorithm. But first, we return to Farkas’ lemma and provide an intuition for its proof, which you will formalize in your homework.

Farkas’ Lemma

For any matrix A \in \mathbb{R}^{m \times n} and vector \vec{b} \in \mathbb{R}^{m}, exactly one of the following holds

  • (1) There exists \vec{x} \in \mathbb{R}^{n} such that

\vec{x} \geq 0 and A \vec{x} = \vec{b}

  • (2) There exists \vec{y} \in \mathbb{R}^{m} such that

\vec{b} \cdot \vec{y} < 0 and A^{T} \vec{y} \geq 0

Here’s intuition for the proof. Let A = \big[\vec{a}_{1}, \vec{a}_{2}, \dots,  \vec{a}_{n}\big] where \vec{a}_{j} is the j^{\text{th}} column vector. For any vector \vec{x} \geq 0,

A \vec{x} = \sum_{j = 1}^{n} x_{j} \vec{a}_{j}

The matrix-vector product above must lie in the conical hull H = \bigg\{A \vec{x} \big| \vec{x} \geq 0 \bigg\} generated by the column vectors \{\vec{a}_{1}, \vec{a}_{2}, \dots,  \vec{a}_{n}\}. There are two possibilities

  1. \vec{b} \in H. By definition of H, there exists a vector \vec{x} \in \mathbb{R}^{n} such that \vec{x} \geq 0 and A \vec{x} = \vec{b}
  2. \vec{b} \notin H. This implies there exists a hyperplane through the origin \vec{0} \in \mathbb{R}^{m} that separates the vectors in the conical hull H from the vector \vec{b}. (You will prove this implication in your homework.) As such, there exists \vec{y} \in \mathbb{R}^{m} such that \vec{a}_{j} \cdot \vec{y} \geq 0 for j from 1 to n and \vec{b} \cdot \vec{y} < 0. In other words, \vec{b} \cdot \vec{y} < 0 and A^{T} \vec{y} \geq 0.
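The dichotomy can be checked numerically on a toy instance. The matrix, vectors, and helper below are made up for illustration: the columns e_1, e_2 span the first quadrant of \mathbb{R}^{2}, so the cone H is easy to see.

```python
def farkas_case2(A, b, y):
    """Verify the case (2) certificate: b . y < 0 and A^T y >= 0."""
    m, n = len(A), len(A[0])
    b_dot_y = sum(b[i] * y[i] for i in range(m))
    At_y = [sum(A[i][j] * y[i] for i in range(m)) for j in range(n)]
    return b_dot_y < 0 and all(v >= 0 for v in At_y)

A = [[1, 0], [0, 1]]   # columns e1, e2: the cone H is the first quadrant

# b = (-1, 1) lies outside H; the hyperplane with normal y = (1, 0) separates them
assert farkas_case2(A, [-1, 1], [1, 0])

# b = (1, 1) = e1 + e2 lies inside H, so this y is not a certificate
assert not farkas_case2(A, [1, 1], [1, 0])
```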

This lemma equips us with the ability to prove part (4) of the duality theorem

Linear Programming Duality Theorem

Given primal program P and dual program D, exactly one of the following four possibilities holds

  1. P is infeasible and D is unbounded
  2. P is unbounded and D is infeasible
  3. P, D are both infeasible
  4. P, D are feasible

A linear program is infeasible if no vector satisfies all of its constraints, and unbounded if the objective may be made arbitrarily large (for a maximization) or arbitrarily small (for a minimization). We shall demonstrate that in case (4), P and D have the same optimal value.

Let P be primal program that is feasible and bounded with objective \max\{ \vec{c} \cdot \vec{x} \} and constraints A \vec{x} = \vec{b} and \vec{x} \geq 0. Also, let \vec{x}^{*} \geq 0 be vector that satisfies constraints and achieves optimum z^{*} = \vec{c} \cdot \vec{x}^{*}.

Dual program D has objective \min\{ \vec{b} \cdot \vec{y} \} and constraints  A^{T} \vec{y} \geq \vec{c} where \vec{y} \in \mathbb{R}^{m} is unbounded.

Let z > z^{*}. Construct modified matrix A' \in \mathbb{R}^{(m + 1) \times n} and vector \vec{b}' \in \mathbb{R}^{m + 1} as follows

A' = \begin{bmatrix}A \\\vec{c}^{T}\end{bmatrix}

\vec{b}' = \begin{bmatrix}\vec{b} \\z\end{bmatrix}

Since z > z^{*}, no \vec{x} \geq 0 satisfies both A \vec{x} = \vec{b} and \vec{c} \cdot \vec{x} = z, i.e. the system A' \vec{x} = \vec{b}', \vec{x} \geq 0 has no solution, so case (1) in Farkas’ lemma cannot hold. As such, case (2) provides a vector \vec{w} = \big[\vec{y}, \alpha\big] \in \mathbb{R}^{m + 1} such that

\vec{b}' \cdot \vec{w} < 0 and A'^{T} \vec{w} \geq 0

\vec{b} \cdot \vec{y} + \alpha z < 0 and A^{T} \vec{y} + \alpha \vec{c} \geq 0

where \vec{y} \in \mathbb{R}^{m} and \alpha \in \mathbb{R}. Since \vec{x}^{*} \geq 0 and A^{T} \vec{y} + \alpha \vec{c} \geq 0,

\vec{x}^{*} \cdot (A^{T} \vec{y}) + \alpha \vec{x}^{*} \cdot \vec{c} \geq 0

Since \vec{x}^{*} \cdot (A^{T} \vec{y}) = (A \vec{x}^{*}) \cdot \vec{y} = \vec{b} \cdot \vec{y} and \vec{c} \cdot \vec{x}^{*} = z^{*},

\vec{b} \cdot \vec{y} + \alpha z^{*} \geq 0

Now, subtracting \vec{b} \cdot \vec{y} + \alpha z^{*} \geq 0 from \vec{b} \cdot \vec{y} + \alpha z < 0 gives \alpha (z - z^{*}) < 0, and since z > z^{*}, we have \alpha < 0. Dividing both \vec{b} \cdot \vec{y} + \alpha z < 0 and A^{T} \vec{y} + \alpha \vec{c} \geq 0 by -\alpha > 0 yields

    \vec{b} \cdot \frac{\vec{y}}{-\alpha} < z and A^{T} \frac{\vec{y}}{-\alpha} \geq \vec{c}

Thus, z > \vec{b} \cdot \frac{\vec{y}}{-\alpha} \geq \min\{\vec{b} \cdot \vec{y}\}. By weak duality,

z > \min\{\vec{b} \cdot \vec{y}\} \geq \max\{\vec{c} \cdot \vec{x}\} = z^{*}

Therefore, letting z tend to z^{*} from above implies that \min\{\vec{b} \cdot \vec{y}\} = z^{*} = \max\{\vec{c} \cdot \vec{x}\}.

Solving Linear Programs

Let P be primal program that is feasible and bounded with objective \max\{ \vec{c} \cdot \vec{x} \} and constraints A \vec{x} \leq \vec{b} and \vec{x} \geq 0. Consider row vectors in A

A = \begin{bmatrix}\vec{a}_{1} \\\vec{a}_{2} \\\vdots \\\vec{a}_{m}\end{bmatrix}

Note that \vec{a}_{j} \cdot \vec{x} \leq b_{j} is a half-space in \mathbb{R}^{n}, and the feasible region for P is the intersection of these half-spaces. Let F denote this feasible region, which is, by assumption, bounded and nonempty.

Clearly, \vec{a}_{j} \cdot \vec{x} = b_{j} is a hyperplane in \mathbb{R}^{n}. A vertex v \in \mathbb{R}^{n} is the intersection of n such hyperplanes with linearly independent normal vectors \vec{a}_{j}. We will also note here that the feasible region may be expressed as the convex hull of these vertices. Since the objective function is linear, an optimal solution must occur at one of these vertices.

We proceed with an example to demonstrate an approach to solving LPs known as the simplex method.

Simplex Method Example


\max\{ 5x_1 + 4x_2 + 3x_3 \}


\begin{aligned}2x_1 + 3x_2 + x_3 &\leq 5\\4x_1 + x_2 + 2x_3 &\leq 11\\3x_1 + 4x_2 + 2x_3 &\leq 8\end{aligned}

Let us convert these inequality constraints to equalities using some ‘slack’ variables as follows


\max\{ 5x_1 + 4x_2 + 3x_3 \}


\begin{aligned}2x_1 + 3x_2 + x_3 + x_4 &= 5\\4x_1 + x_2 + 2x_3 + x_5 &= 11\\3x_1 + 4x_2 + 2x_3 + x_6 &= 8\end{aligned}

We will begin by starting with some feasible solution, say (0, 0, 0, 5, 11, 8), and rewrite the nonzero (basic) variables x_4, x_5, x_6 in terms of the zero (nonbasic) variables x_1, x_2, x_3

  1. objective:

    z = 5x_1 + 4x_2 + 3x_3 = 0


    \begin{aligned}x_4 &= 5 - 2x_1 - 3x_2 - x_3 \\x_5 &= 11 - 4x_1 - x_2 - 2x_3\\x_6 &= 8 - 3x_1 - 4x_2 - 2x_3\end{aligned}

    Now we try to increase x_1. Note that x_1 \leq \min\{\frac{5}{2}, \frac{11}{4}, \frac{8}{3}\} = \frac{5}{2}. Setting x_1 = \frac{5}{2} produces solution (\frac{5}{2}, 0, 0, 0, 1, \frac{1}{2})

  2. objective:

    z = \frac{25}{2} -\frac{7}{2} x_2 + \frac{1}{2} x_3 - \frac{5}{2} x_4 = \frac{25}{2}


    \begin{aligned}x_1 &= \frac{5}{2} - \frac{3}{2} x_2 - \frac{1}{2} x_3 - \frac{1}{2} x_4  \\x_5 &= 1 + 5 x_2 + 2 x_4\\x_6 &= \frac{1}{2} + \frac{1}{2} x_2 - \frac{1}{2} x_3 + \frac{3}{2} x_4\end{aligned}

    Next we try to increase x_3 since it has a positive coefficient in the objective function. Note that x_3 \leq \min\{5, 1\} = 1. Setting x_3 = 1 produces solution (2, 0, 1, 0, 1, 0)

  3. objective:

    z =13 - 3x_2 - x_4 - x_6


    \begin{aligned}x_1 &= 2 - 2x_2 - 2x_4 + x_6  \\x_3 &= 1 + x_2 + 3x_4 - 2x_6  \\x_5 &= 1 + 5 x_2 + 2 x_4\end{aligned}

    Note that no improvement can be made to the objective function since every coefficient is nonpositive, and so solution (2, 0, 1, 0, 1, 0) is an optimal vector with value 13.
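As a quick sanity check on the worked example, we can plug the final solution back into the original constraints and objective:

```python
# Final solution from the worked example: (x1, x2, x3) = (2, 0, 1).
x1, x2, x3 = 2, 0, 1

assert 2*x1 + 3*x2 + x3 <= 5
assert 4*x1 + x2 + 2*x3 <= 11
assert 3*x1 + 4*x2 + 2*x3 <= 8
assert 5*x1 + 4*x2 + 3*x3 == 13   # optimal objective value

# The slacks agree with the final dictionary: (x4, x5, x6) = (0, 1, 0).
assert (5 - 2*x1 - 3*x2 - x3,
        11 - 4*x1 - x2 - 2*x3,
        8 - 3*x1 - 4*x2 - 2*x3) == (0, 1, 0)
```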

Sometimes the initial solution cannot be found by setting the original variables to zero. Consider the following constraints:

\begin{aligned}-x_1-x_2 &\le -1\\-2x_1+x_2 &\le -1\\x_1,x_2 &\ge 0\end{aligned}

After adding slack variables, they become

\begin{aligned}-x_1-x_2+x_3 &= -1\\-2x_1+x_2+x_4 &= -1\\x_1,x_2,x_3,x_4 &\ge 0\end{aligned}

However we cannot set x_1=x_2=0 because x_3, x_4 must be nonnegative. So we first multiply each equation by -1 so that the right-hand sides become nonnegative, and then add two artificial variables y_1,y_2 to construct another LP: \min y_1+y_2 s.t.

\begin{aligned}x_1+x_2-x_3+y_1 &= 1\\2x_1-x_2-x_4+y_2 &= 1\\x_1,x_2,x_3,x_4,y_1,y_2 &\ge 0\end{aligned}

If the original LP is feasible, then \min y_1+y_2=0 and we also get an initial feasible solution. If \min y_1+y_2>0, then original LP is not feasible. Now we proceed to formalize the simplex method.

Simplex Method 

Given matrix A \in \mathbb{R}^{m \times n}, constraint vector \vec{b} \in \mathbb{R}^{m}, and objective vector \vec{c} \in \mathbb{R}^{n},

  1. Augment constraints by adding m slack variables to produce equalities, i.e. for constraint i from 1 to m

    \sum_{j = 1}^{n} a_{ij} x_j + x_{n+i} = b_i

    where x_j \geq 0 for j from 1 to n+m. Also, rewrite objective function as follows,

    z = \sum_{j = 1}^{n+m} c_{j} x_j

    where c_j = 0 for j > n

  2. Find initial feasible solution \vec{x}^{0}, using method specified above if necessary, and rewrite basic variables in terms of nonbasic variables
  3. While there exists some positive objective coefficient, select some variable x_j where c_j > 0
    1. Using constraints, increase selected variable until some basic variable becomes 0. This variable is also known as the leaving variable
    2. Rewrite constraints so that basic variables are expressed in terms of nonbasic variables, and substitute the entering variable in the objective function using its constraint equation
  4. Output solution vector \vec{x}^{*}
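The steps above can be sketched as a small tableau simplex over exact rationals. This is an illustrative implementation, not production code: it assumes \vec{b} \geq 0 (so the all-slack basis is an initial feasible solution and no phase one is needed), assumes the program is bounded, and uses no anti-cycling rule beyond picking the first improving column, so degenerate inputs could cycle.

```python
from fractions import Fraction

def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0, assuming b >= 0 and a
    bounded optimum.  Returns (optimal value, optimal x) as Fractions."""
    m, n = len(A), len(c)
    # Tableau rows: [original vars | slack vars | rhs]; basis starts as slacks.
    T = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(int(i == k)) for k in range(m)]
         + [Fraction(b[i])] for i in range(m)]
    obj = [-Fraction(cj) for cj in c] + [Fraction(0)] * (m + 1)
    basis = list(range(n, n + m))
    while True:
        # Entering variable: first column with a negative reduced cost.
        j = next((k for k in range(n + m) if obj[k] < 0), None)
        if j is None:
            break
        # Leaving variable: minimum ratio test over rows with T[i][j] > 0.
        _, r = min((T[i][-1] / T[i][j], i) for i in range(m) if T[i][j] > 0)
        piv = T[r][j]
        T[r] = [v / piv for v in T[r]]
        for i in range(m):
            if i != r and T[i][j] != 0:
                f = T[i][j]
                T[i] = [T[i][k] - f * T[r][k] for k in range(n + m + 1)]
        f = obj[j]
        obj = [obj[k] - f * T[r][k] for k in range(n + m + 1)]
        basis[r] = j
    x = [Fraction(0)] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return obj[-1], x

# The running example from these notes:
val, x = simplex([5, 4, 3], [[2, 3, 1], [4, 1, 2], [3, 4, 2]], [5, 11, 8])
assert val == 13 and x == [2, 0, 1]
```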

Now we prove that once simplex terminates, \vec{x}^{*} is the vector that maximizes our objective function \vec{c} \cdot \vec{x}. Note that when simplex terminates objective function looks like

z = z^{*} + \sum_{j=1}^{n+m} \overline{c}_{j} x_{j} = \sum_{j=1}^{n} c_{j} x_{j}

where \overline{c}_{j} \leq 0. Consider dual program given as


\min\{ \vec{b} \cdot \vec{y} \}


A^{T} \vec{y} \geq \vec{c} and \vec{y} \geq 0

Define vector \vec{y}^{*} \in \mathbb{R}^{m} with entries y_{i}^{*} = -\overline{c}_{n+i} for i from 1 to m. Clearly, \vec{y}^{*} \geq 0 since \overline{c}_{j} \leq 0. We still need to demonstrate A^{T} \vec{y}^{*} \geq \vec{c} and \vec{b} \cdot \vec{y}^{*} = z^{*}.

  1. \vec{b} \cdot \vec{y}^{*} = z^{*}. Note that

    \begin{aligned}z &= z^{*} + \sum_{j=1}^{n} \overline{c}_{j} x_{j} + \sum_{i=1}^{m} \overline{c}_{n+i} x_{n+i} \\&= z^{*} + \sum_{j=1}^{n} \overline{c}_{j} x_{j} - \sum_{i=1}^{m} y_{i}^{*} x_{n+i}\\&= z^{*} + \sum_{j=1}^{n} \overline{c}_{j} x_{j} - \sum_{i=1}^{m} y_{i}^{*} \bigg(b_i - \sum_{j=1}^{n} a_{ij}x_{j}\bigg)\\&= z^{*} - \sum_{i=1}^{m} y_{i}^{*} b_{i} + \sum_{j=1}^{n}\big(\overline{c}_{j} + \sum_{i=1}^{m} a_{ij} y_{i}^{*}\big) x_{j}\\\end{aligned}

    Setting x_{1}, \dots, x_{n} = 0 yields z = 0, and so z^{*} = \vec{b} \cdot \vec{y}^{*}

  2. A^{T} \vec{y}^{*} \geq \vec{c}. Since z^{*} = \vec{b} \cdot \vec{y}^{*},

    \begin{aligned}z &= \sum_{j=1}^{n} c_{j} x_{j} = \sum_{j=1}^{n}\big(\overline{c}_{j} + \sum_{i=1}^{m} a_{ij} y_{i}^{*}\big) x_{j}\\\end{aligned}

    Since c_{j} = \overline{c}_{j} + \sum_{i=1}^{m} a_{ij} y_{i}^{*} and \overline{c}_{j} \leq 0,

    \begin{aligned}c_{j} &= \overline{c}_{j} + \sum_{i=1}^{m} a_{ij} y_{i}^{*} \\c_{j} &\leq \sum_{i=1}^{m} a_{ij} y_{i}^{*} \\\end{aligned}

    And so, \vec{c} \leq A^{T} \vec{y}^{*}
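This certificate can be read off the worked example above: the final objective row was z = 13 - 3x_2 - x_4 - x_6, so the slack coefficients give \vec{y}^{*} = (1, 0, 1). A quick numerical check that it is dual feasible and tight:

```python
# Data from the simplex example earlier in these notes.
A = [[2, 3, 1], [4, 1, 2], [3, 4, 2]]
b = [5, 11, 8]
c = [5, 4, 3]
y = [1, 0, 1]   # y_i* = -cbar_{n+i}, read off the final objective row

At_y = [sum(A[i][j] * y[i] for i in range(3)) for j in range(3)]
assert all(At_y[j] >= c[j] for j in range(3))     # A^T y* >= c
assert sum(b[i] * y[i] for i in range(3)) == 13   # b . y* = z*
```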

Linear Programming I

Many optimization problems may be formulated as a linear program. Given matrix A \in \mathbb{R}^{m \times n} and vectors \vec{b} \in \mathbb{R}^{m}, \vec{c} \in \mathbb{R}^{n}, we seek some vector \vec{x} \in \mathbb{R}^{n} that satisfies

A \vec{x} \leq \vec{b}

and maximizes the objective

\vec{c} \cdot \vec{x}

where the entries of \vec{x} are nonnegative, i.e. \vec{x} \geq 0. We call \vec{c} \cdot \vec{x} the objective function, and the inequalities A \vec{x} \leq \vec{b} are known as the constraints.

Flow Networks as Linear Programs

Pushing flow through a network reduces to linear programming!


\max \{ \sum_{(s, u) \in E} f_{su}\}

capacity constraints:

\forall_{e \in E} 0 \leq f_{e} \leq c_{e}

flow preservation:

\forall_{v \neq s, t} \sum_{(u, v) \in E} f_{uv} = \sum_{(v, w) \in E} f_{vw}
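Assembling these constraint matrices is mostly bookkeeping. The sketch below builds the objective vector, the conservation rows, and the capacity bounds for a small made-up network (the five edges and their capacities are chosen purely for illustration):

```python
# Hypothetical network: s -> a, s -> b, a -> b, a -> t, b -> t.
edges = [('s', 'a'), ('s', 'b'), ('a', 'b'), ('a', 't'), ('b', 't')]
cap = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1, ('a', 't'): 2, ('b', 't'): 3}

# Objective: maximize total flow out of s.
c = [1 if e[0] == 's' else 0 for e in edges]

# Flow preservation: one row per internal vertex, inflow - outflow = 0.
internal = ['a', 'b']
A_eq = [[(1 if e[1] == v else 0) - (1 if e[0] == v else 0) for e in edges]
        for v in internal]

# Capacity constraints are box bounds 0 <= f_e <= c_e.
bounds = [(0, cap[e]) for e in edges]

assert c == [1, 1, 0, 0, 0]
assert A_eq == [[1, 0, -1, -1, 0],   # vertex a: in from s, out to b and t
                [0, 1, 1, 0, -1]]    # vertex b: in from s and a, out to t
```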

Decision Problem Formulation

Note that linear programs can be formulated as a decision problem. Given some t \in \mathbb{R}, does there exist some \vec{x} \in \mathbb{R}^{n} such that

\vec{c} \cdot \vec{x} \geq t    and   A \vec{x} \leq \vec{b}

If YES, then simply provide the vector \vec{x} to be verified. If NO, we need some polynomial-size certificate to verify there exists no such vector. This is where the notion of a ‘dual’ program comes in. But first, perhaps an example is appropriate


Diet Problem

Suppose you are given the following meal choices: burger at $9, yogurt at $6, and ramen at $2. You also want to satisfy some dietary needs: you require at least 180 calories of protein, 100 calories of carbohydrates, and 20 calories of fat.

           protein   carbohydrates   fat
burger        40            10         8
yogurt        15            20         5
ramen          5            30        15

Given the above caloric breakdown of each meal, how can you satisfy your requirements while spending as little as possible? Let us formulate our problem as a linear program. Let x, y, z be the units of burgers, yogurts, and ramen we purchase. Note that we have the following constraints

40x + 15y + 5z \geq 180

10x + 20y + 30z \geq 100

8x + 5y + 15z \geq 20

where we want to minimize our objective 9x + 6y + 2z. Note that this is a minimization linear program, and to convert it to standard form, all we need to do is negate the objective and constraints, which yields the following linear program


\max \{ -9x - 6y - 2z\}


-40x - 15y - 5z \leq -180

-10x - 20y - 30z \leq -100

-8x - 5y - 15z \leq -20
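The sign flip changes neither the feasible set nor the optimizer, only the direction of the inequalities. A quick check on a few hand-picked purchase vectors (chosen for illustration):

```python
def original_ok(x, y, z):
    """Feasibility in the original minimization LP (>= constraints)."""
    return (40*x + 15*y + 5*z >= 180 and
            10*x + 20*y + 30*z >= 100 and
            8*x + 5*y + 15*z >= 20)

def negated_ok(x, y, z):
    """Feasibility after negating everything (<= constraints)."""
    return (-40*x - 15*y - 5*z <= -180 and
            -10*x - 20*y - 30*z <= -100 and
            -8*x - 5*y - 15*z <= -20)

# The two feasible regions coincide, point by point.
for pt in [(4.5, 0, 2), (1, 1, 1), (0, 12, 0)]:
    assert original_ok(*pt) == negated_ok(*pt)
```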

Another Problem

Consider the following linear program


\max \{ 4x_1 + x_2 + 5x_3 + 3x_4\}


(1) x_1 - x_2 - x_3 + 3x_4 \leq 1

(2) 5x_1 + x_2 + 3x_3 + 8x_4 \leq 55

(3) -x_1 + 2x_2 + 3x_3 - 5x_4 \leq 3

Let us assign variable y_{j} to constraint inequality (j), ensuring y_{j} \geq 0. Note that

\begin{aligned}y_1 (x_1 - x_2 - x_3 + 3x_4) &\leq y_1\\ y_2 (5x_1 + x_2 + 3x_3 + 8x_4) &\leq 55y_2\\ y_3 (-x_1 + 2x_2 + 3x_3 - 5x_4) &\leq 3y_3\end{aligned}

Adding each inequality together and grouping by x_{i} yields

\begin{aligned}&x_1 (y_1 + 5y_2 - y_3) \\ &+ x_2 (-y_1 + y_2 + 2y_3) \\ &+ x_3 (-y_1 + 3y_2 + 3 y_3) \\ &+ x_4 (3y_1 + 8y_2 - 5y_3) \leq y_1 + 55 y_2 + 3 y_3\end{aligned}

For each x_i, constrain its coefficient above to be greater than or equal to c_i, yielding the following constraints

y_1 + 5y_2 - y_3 \geq 4

-y_1 + y_2 + 2y_3 \geq 1

-y_1 + 3y_2 + 3 y_3 \geq 5

3y_1 + 8y_2 - 5y_3 \geq 3

These constraints ensure that \max \{ 4x_1 + x_2 + 5x_3 + 3x_4 \} \leq \min \{ y_1 + 55 y_2 + 3 y_3 \}. As such, this is the dual program for the original linear program, with objective y_1 + 55 y_2 + 3y_3 that we seek to minimize subject to the constraints given above.
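We can witness this bound numerically. The two points below were picked by hand for illustration: x = (0, 0, 1, 0) satisfies the primal constraints (1)-(3), and y = (0, 1, 1) satisfies all four dual constraints, so the primal value 5 is bracketed by the dual value 58.

```python
x = [0, 0, 1, 0]   # primal feasible: -1 <= 1, 3 <= 55, 3 <= 3
y = [0, 1, 1]      # dual feasible, checked below

assert y[0] + 5*y[1] - y[2] >= 4
assert -y[0] + y[1] + 2*y[2] >= 1
assert -y[0] + 3*y[1] + 3*y[2] >= 5
assert 3*y[0] + 8*y[1] - 5*y[2] >= 3

primal = 4*x[0] + x[1] + 5*x[2] + 3*x[3]   # = 5
dual = y[0] + 55*y[1] + 3*y[2]             # = 58
assert primal <= dual
```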


Dual Programs

Let A \in \mathbb{R}^{m \times n} be a matrix and \vec{b} \in \mathbb{R}^{m}, \vec{c} \in \mathbb{R}^{n} be vectors.

Given linear program with following objective and constraints


\max \{ c_1x_1 + c_2x_2 + \dots + c_n x_n\}

constraint i:

a_{i1}x_1 + a_{i2}x_2 + \dots + a_{in} x_n \leq b_{i}

Dual program is given as follows


\min \{ b_1y_1 + b_2y_2 + \dots + b_m y_m\}

constraint j:

a_{1j}y_1 + a_{2j}y_2 + \dots + a_{mj} y_m \geq c_{j}

where i \in [1, m] and j \in [1, n]. Weak duality states

\max \{ \vec{c} \cdot \vec{x} | A \vec{x} \leq \vec{b}, \vec{x} \geq 0\} \leq \min \{ \vec{b} \cdot \vec{y} | A^{T} \vec{y} \geq \vec{c}, \vec{y} \geq 0\} 
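Weak duality follows in one line from the two constraint systems: for any primal-feasible \vec{x} \geq 0 and dual-feasible \vec{y} \geq 0,

\vec{c} \cdot \vec{x} \leq (A^{T} \vec{y}) \cdot \vec{x} = \vec{y} \cdot (A \vec{x}) \leq \vec{y} \cdot \vec{b}

where the first inequality uses A^{T} \vec{y} \geq \vec{c} together with \vec{x} \geq 0, and the second uses A \vec{x} \leq \vec{b} together with \vec{y} \geq 0.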


Linear Programming Duality Theorem

Given primal program P and dual program D, exactly one of the following four possibilities holds

  1. P is infeasible and D is unbounded
  2. P is unbounded and D is infeasible
  3. P, D are both infeasible
  4. P, D are feasible

A linear program is infeasible if no vector satisfies all of its constraints, and unbounded if the objective may be made arbitrarily large (for a maximization) or arbitrarily small (for a minimization). We shall demonstrate that in case (4), P and D have the same optimal value. We will need the following lemma to do this


Farkas’ Lemma

For any matrix A \in \mathbb{R}^{m \times n}, vector \vec{b} \in \mathbb{R}^{m}, exactly one of the following holds

  1. There exists \vec{x} \in \mathbb{R}^{n} such that

\vec{x} \geq 0 and A \vec{x} = \vec{b}

  2. There exists \vec{y} \in \mathbb{R}^{m} such that

\vec{b} \cdot \vec{y} < 0 and A^{T} \vec{y} \geq 0

General Graph Matching

Tutte’s Theorem: graph G = (V, E) has perfect matching M if and only if for all subsets X \subseteq V,

|odd(G \setminus X)| \le |X|

where |odd(G \setminus X)| denotes the number of odd components in the subgraph G \setminus X.

To demonstrate the forward direction, assume towards contradiction that G has perfect matching M and there exists subset X \subseteq V such that |odd(G \setminus X)| > |X|. Each odd component of G \setminus X must contain a vertex matched by M to a vertex in X, since an odd set of vertices cannot be perfectly matched within itself. Moreover, no two distinct odd components can be matched to the same vertex in X, since the edges constitute a matching. Thus, there exists some odd component with no matching edge to vertices in X, which yields a contradiction.

The other direction can be proved by first proving the correctness of Edmonds’ algorithm, which computes maximum matchings for general graphs.
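For graphs small enough to enumerate every subset X, Tutte's condition can be checked by brute force. The helper below is illustrative only (it is exponential in |V|):

```python
from itertools import combinations

def odd_components(n, edges, removed):
    """Count odd connected components of the graph on {0..n-1} with `removed` deleted."""
    alive = [v for v in range(n) if v not in removed]
    adj = {v: set() for v in alive}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    seen, odd = set(), 0
    for v in alive:
        if v not in seen:
            stack, size = [v], 0
            seen.add(v)
            while stack:            # DFS over one component, counting its size
                u = stack.pop()
                size += 1
                for w in adj[u]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            odd += size % 2
    return odd

def tutte_condition(n, edges):
    """Brute-force |odd(G \\ X)| <= |X| over every subset X (tiny graphs only)."""
    return all(odd_components(n, edges, set(X)) <= k
               for k in range(n + 1) for X in combinations(range(n), k))

# A 4-cycle has a perfect matching; a path on 3 vertices does not (odd order).
assert tutte_condition(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
assert not tutte_condition(3, [(0, 1), (1, 2)])
```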

Edmonds’ Algorithm:

For a matching M, a blossom is an odd cycle of length 2k+1 with k edges from M.

Lemma: Let B be a blossom of G such that B is disjoint from the rest of M. Let G' be the graph obtained from G by contracting the blossom B to a single vertex and let M' be the matching M restricted to G'. Then, G has an augmenting path iff G' has an augmenting path.

Proof: Clearly, if no augmenting path meets B, then the statement is trivial in both directions. Suppose we have an augmenting path in G' that ends at the blossom vertex (notice that the blossom vertex is unmatched in M'). Since the blossom is an odd cycle, we may walk around it in whichever direction uses alternating edges to reach the unmatched base vertex, extending the path to an augmenting path in G. Similarly, if we have an augmenting path in G that meets the blossom at some vertex, then we get a corresponding augmenting path in G' that ends at the blossom vertex.

With this idea, it is possible to modify the augmenting path algorithm from bipartite graphs to work for general graphs. Let the vertices on the even level be called outer vertices and the vertices on the odd levels be called inner vertices.

  • Let F be forest with roots initialized as all unmatched vertices in G
  • For every outer x \in F and y \notin F with (x,y) \in E, add (x,y) and (y,z) to F where (y,z) is a matching edge.
  • For outer x,y from different components of F, if (x,y) \in E, then we have found an augmenting path, so output it and exit
  • For outer x,y from the same component with (x,y) \in E
    • Identify the blossom (its base is the Least Common Ancestor of x,y)
    • Shrink the blossom, swapping the matching and non-matching edges from the blossom vertex to the root of its component.
  • Repeat until either an augmenting path is found or no more new vertices can be added.

Time Complexity: The running time is O(m n^2) since we perform graph search which takes O(m) steps, the size of the matching increases by 1 every time we find an augmenting path, and there can be at most \frac{n}{2} blossoms in the graph.


We have proved (in a previous lecture) that a matching is maximum if and only if there are no augmenting paths. Suppose that Edmonds’ algorithm does not find an augmenting path. But does this mean there are no augmenting paths? Is Edmonds’ algorithm guaranteed to find an augmenting path in a graph if it exists?

First, observe that there are no edges between outer vertices:

  1. If there is an edge between outer vertices in the same component of the forest, then this implies a blossom.
  2. If there is an edge between outer vertices in different components, then this implies an augmenting path.

Claim If a subset X \subseteq V leaves k isolated odd components in G \backslash X, then the number of unmatched vertices in any matching is at least k-|X|.

Proof The claim follows from the following observation: each odd component contains a vertex that is not matched within its own component; such a vertex is either unmatched or matched to a vertex in X, and each vertex in X can be matched to at most one such component.

Proof (Edmonds’ algorithm) Take X to be all inner vertices, which leaves a number of isolated odd components equal to the number of outer vertices. Thus, the number of unmatched vertices in any matching is at least #outer - #inner vertices. Note that #outer - #inner is equal to the number of components in the forest found by Edmonds’ algorithm, which is also equal to the number of unmatched vertices. Thus, if Edmonds’ algorithm does not find an augmenting path, then the matching is maximum!

Corollary (Tutte’s Theorem) G has a perfect matching iff for all X \subseteq V, |X| \ge |odd(G\backslash X)|.


Linear Equations I

Given n \times n integer matrix A \in \mathbb{Z}^{n \times n}, and  n dimensional vector \vec{b} \in \mathbb{Z}^{n}, output vector \vec{x} \in \mathbb{Z}^{n} which satisfies

A \vec{x} = \vec{b}

if such vector exists. Assume integer entries a_{ij} \in A, b_{k} \in \vec{b} may be represented using at most L bits for i, j, k from 1 to n. Note that this implies input size is O(n^2 L).

Gaussian Elimination


RowReduce(A):

    h = 1, k = 1 (initialize pivot row, column)

    while h \leq m and k \leq n

        i^{*} = argmax_{i \in [h, m]}(|A[i][k]|) (find k-th pivot)

        if A[i^{*}][k] = 0

            k \gets k + 1 (no pivot in this column, skip)

        else

            swap rows h, i^{*}

            for i from h + 1 to m

                f = A[i][k] / A[h][k]

                A[i][k] \gets 0

                for j from k + 1 to n

                    A[i][j] \gets A[i][j] - f \cdot A[h][j]

            h \gets h + 1, k \gets k + 1

    Return A


GE(A, b):

    A' \gets [A, b] (augmented matrix)

    A' \gets RowReduce(A') (row-echelon form)

    Return \vec{x} \gets BackSub(A') (back substitution)


Consider following example

A=\begin{bmatrix}3&5&1\\ 1&2&4\\ -1&0&6\end{bmatrix}            \vec{b} =\begin{bmatrix}0\\ 1\\ 2\end{bmatrix}

Row reduction for augmented matrix given as

[A, b] = \begin{bmatrix}3&5&1&0\\ 1&2&4&1\\ -1&0&6&2\end{bmatrix} \sim \begin{bmatrix}1&\frac{5}{3}&\frac{1}{3}&0\\ 0&\frac{1}{3}&\frac{11}{3}&1\\ 0&\frac{5}{3}&\frac{19}{3}&2\end{bmatrix} \sim  \begin{bmatrix}1&\frac{5}{3}&\frac{1}{3}&0\\ 0&1&11&3\\ 0&0&-12&-3\end{bmatrix} \sim \begin{bmatrix}1&\frac{5}{3}&\frac{1}{3}&0\\ 0&1&11&3\\ 0&0&1&\frac{1}{4}\end{bmatrix}

Back substitution yields

[A, b] \sim \begin{bmatrix}1&\frac{5}{3}&\frac{1}{3}&0\\ 0&1&11&3\\ 0&0&1&\frac{1}{4}\end{bmatrix} \sim \begin{bmatrix}1&\frac{5}{3}&0&-\frac{1}{12}\\ 0&1&0&\frac{1}{4}\\ 0&0&1&\frac{1}{4}\end{bmatrix} \sim \begin{bmatrix}1&0&0&-\frac{1}{2}\\ 0&1&0&\frac{1}{4}\\ 0&0&1&\frac{1}{4}\end{bmatrix} 

Thus, \vec{x} = [-\frac{1}{2}, \frac{1}{4}, \frac{1}{4}]
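The procedure above can be sketched over exact rationals as follows; this is an illustrative implementation assuming A is square and invertible:

```python
from fractions import Fraction

def gaussian_elim(A, b):
    """Solve A x = b by row reduction with partial pivoting followed by back
    substitution, over exact rationals.  Assumes A is square and invertible."""
    n = len(A)
    # Augmented matrix [A | b] with Fraction entries.
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for k in range(n):
        # Partial pivoting: bring the largest |entry| in column k to row k.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    # Back substitution.
    x = [Fraction(0)] * n
    for k in reversed(range(n)):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

# The worked example above:
x = gaussian_elim([[3, 5, 1], [1, 2, 4], [-1, 0, 6]], [0, 1, 2])
assert x == [Fraction(-1, 2), Fraction(1, 4), Fraction(1, 4)]
```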

Gaussian Elimination Complexity

Note that row reduction on the k-th iteration requires at most O(n^2) elementary operations, and so row reduction takes O(n^3) operations in total. Since back substitution requires O(n^2) elementary steps, Gaussian elimination takes O(n^3) operations. Yet, what of the bit complexity of the operands that undergo these elementary operations?

On the first iteration, entries may have bit size 2L. On the second, size 4L. As such, on the k-th iteration, entries may have size 2^{k} L. Thus, the space required to store and read intermediate entries during Gaussian elimination becomes exponential in the input size. This is problematic. How can we fix this?


Modified Gaussian Elimination


ModifiedRowReduce(A):

    k = 1 (initialize pivot column)

    A^{0} = A (initialize matrix)

    A^{0}[0][0] = 1 (convention: fictitious pivot used when k = 1)

    while k \leq \min\{n, m\}

        i^{*} = argmax_{i \in [k, m]}(|A^{k-1}[i][k]|) (find k-th pivot)

        if A^{k-1}[i^{*}][k] = 0

            A^{k} \gets A^{k-1}

            k \gets k + 1 (no pivot in this column, skip)

        else

            swap rows k, i^{*}

            for i from 1 to k

                for j from 1 to n

                    A^{k}[i][j] \gets A^{k-1}[i][j]

            for i from k + 1 to m

                A^{k}[i][k] \gets 0

                for j from k + 1 to n

                    A^{k}[i][j] \gets \frac{A^{k-1}[k][k] A^{k-1}[i][j] - A^{k-1}[i][k] A^{k-1}[k][j]}{A^{k-1}[k-1][k-1]}

            k \gets k + 1

    Return A^{k-1}


As per previous example,

A^{0} =\begin{bmatrix}3&5&1\\ 1&2&4\\ -1&0&6\end{bmatrix}

A^{1} = \begin{bmatrix}3&5&1\\ 0&3 \cdot 2 - 1 \cdot 5&3 \cdot 4 - 1 \cdot 1\\ 0&3 \cdot 0 - (-1) \cdot 5 &3 \cdot 6 - (-1) \cdot 1\end{bmatrix} = \begin{bmatrix}3&5&1\\ 0&1&11\\ 0&5&19\end{bmatrix}

A^{2} = \begin{bmatrix}3&5&1\\ 0&1&11\\ 0&0&\frac{1 \cdot 19 - 5 \cdot 11}{3}\end{bmatrix} = \begin{bmatrix}3&5&1\\ 0&1&11\\ 0&0&-12\end{bmatrix}
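The modified (fraction-free) update is straightforward to mirror in code. The sketch below assumes no zero pivot is encountered, so it omits row swaps for brevity; every intermediate entry stays an integer, and for square input the last pivot equals \text{det}(A), here -12.

```python
def modified_row_reduce(A):
    """Fraction-free row reduction on an integer matrix, using the update
    a^k_ij = (a^{k-1}_kk * a^{k-1}_ij - a^{k-1}_ik * a^{k-1}_kj) / a^{k-1}_{k-1,k-1}.
    Assumes square input with no zero pivots (no row swaps, for brevity)."""
    n = len(A)
    M = [row[:] for row in A]
    prev = 1                      # the a^0[0][0] = 1 convention
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Exact division: prev always divides the numerator (Lemma 1).
                M[i][j] = (M[k][k] * M[i][j] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return M

M = modified_row_reduce([[3, 5, 1], [1, 2, 4], [-1, 0, 6]])
assert M == [[3, 5, 1], [0, 1, 11], [0, 0, -12]]   # matches A^1, A^2 above
assert M[2][2] == -12                               # = det(A)
```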

Lemma 1

For i \in [1, m], j \in [1, n], entry a^{k}_{ij} in matrix A^{k} is determinant of the submatrix of A with rows [1, \dots, k, i] and columns [1, \dots, k, j]

a^{k}_{ij}  = \begin{vmatrix}a_{11}&a_{12}&\dots&a_{1k}&a_{1j}\\ a_{21}&a_{22}&\dots&a_{2k}&a_{2j}\\ \dots&\dots&\dots&\dots&\dots\\ a_{k1}&a_{k2}&\dots&a_{kk}&a_{kj}\\ a_{i1}&a_{i2}&\dots&a_{ik}&a_{ij}\end{vmatrix}

Since integer matrices have integer determinants, A^{k} has integer entries given input matrix A \in \mathbb{Z}^{m \times n}. Given that each entry in A uses at most L bits, the determinant a^{k}_{ij} is a sum of at most n! signed products of at most n entries each, which implies

|a^{k}_{ij}| \leq n! \cdot 2^{nL} \quad \text{and so} \quad \log |a^{k}_{ij}| \leq \log(n!) + nL \leq n \log(n) + nL

Thus, bit complexity for representing a^{k}_{ij} is O(n(L + \log(n))).

Lemma 0

Before we can prove lemma 1, we need some results on determinants. Suppose matrix B \in \mathbb{R}^{n \times n} results from applying an elementary row operation to matrix A \in \mathbb{R}^{n \times n} as specified below

  • Swapping two rows in A

\text{det}(B) = - \text{det}(A)

  • Multiplying row in A by nonzero number c

\text{det}(B) = c \text{det}(A)

  • Adding multiple of one row to another row in A

\text{det}(B) = \text{det}(A)

Each of the above equivalences results from the multiplicativity of the determinant, \text{det}(C \cdot D) = \text{det}(C) \cdot \text{det}(D), since each elementary row operation corresponds to left-multiplication by an elementary matrix

Proof of Lemma 1:

Define matrix B^{k} as submatrix of A^{k} with rows [1, \dots, k, i_{1}, \dots, i_{l}] and columns [1, \dots, k, j_{1}, \dots, j_{l}]

B^{k} = \begin{bmatrix}a_{11}&\dots&a_{1k}&a_{1j_{1}}&\dots&a_{1j_{l}}\\ a_{21}&\dots&a_{2k}&a_{2j_{1}}&\dots&a_{2j_{l}}\\ \dots&\dots&\dots&\dots&\dots&\dots\\ a_{k1}&\dots&a_{kk}&a_{kj_{1}}&\dots&a_{kj_{l}}\\ a_{i_{1}1}&\dots&a_{i_{1}k}&a_{i_{1}j_{1}}&\dots&a_{i_{1}j_{l}}\\ \dots&\dots&\dots&\dots&\dots&\dots\\ a_{i_{l}1}&\dots&a_{i_{l}k}&a_{i_{l}j_{1}}&\dots&a_{i_{l}j_{l}}\\ \end{bmatrix}

In passing from B^{k-1} to B^{k}, each row i > k is multiplied by \frac{a^{k}_{kk}}{a^{k-1}_{k-1,k-1}} and then has \frac{a^{k-1}_{ik}}{a^{k-1}_{k-1,k-1}} times the k-th row subtracted from it. By Lemma 0,

\text{det}(B^{k}) = \bigg(\frac{a^{k}_{kk}}{a^{k-1}_{k-1,k-1}}\bigg)^{l} \text{det}(B^{k-1})

Now take l = 1, so that i_{1} = i and j_{1} = j,

B^{k} = \begin{bmatrix}a_{11}&a_{12}&\dots&a_{1k}&a_{1j}\\ a_{21}&a_{22}&\dots&a_{2k}&a_{2j}\\ \dots&\dots&\dots&\dots&\dots\\ a_{k1}&a_{k2}&\dots&a_{kk}&a_{kj}\\ a_{i1}&a_{i2}&\dots&a_{ik}&a_{ij}\end{bmatrix}

For notational simplicity, let a_{j} = a^{j}_{jj} and a_{0} = 1. Note that

\text{det}(B^{k}) = (\frac{a_{k}}{a_{k-1}}) \text{det}(B^{k-1}) = (\frac{a_{k}}{a_{k-1}}) (\frac{a_{k-1}}{a_{k-2}})^{2} \cdot \cdot \cdot (\frac{a_{1}}{a_{0}})^{k} \text{det}(B^{0})

\text{det}(B^{k}) = a_{k} a_{k-1} \cdot \cdot \cdot a_{1} \text{det}(B^{0})

Since B^{k} is upper triangular, \text{det}(B^{k}) = a_{1} \cdot \cdot \cdot a_{k-1} a_{k} \cdot a^{k}_{ij}. Thus,

\text{det}(B^{k}) = a_{k} a_{k-1} \cdot \cdot \cdot a_{1} \text{det}(B^{0}) = a_{1} \cdot \cdot \cdot a_{k-1} a_{k} a^{k}_{ij}

\text{det}(B^{0}) = a^{k}_{ij} \text{    } \square

By Lemma 1, the bit complexity for representing a^{k}_{ij} is O(n(L + \log(n))). Note that Gaussian elimination with modified row reduction still requires O(n^{3}) arithmetic operations, now on entries with bit size O(n(L + \log(n))), and so the overall bit complexity is O(n^{4}(L + \log(n))).

It has been demonstrated that Gaussian elimination can be done in time O(n^{\omega} L), where O(n^{\omega}) is the time required to multiply two n \times n matrices.