Divide & Conquer works when a problem can be solved by solving smaller problems of the same type.

In Dynamic Programming (DP), we typically have to define and solve a **more general problem**.

## Shortest Path

**Problem:** Given a graph $G = (V, E)$ with edge lengths $\ell_e \ge 0$, find the shortest path from vertex $s$ to vertex $t$.

- If all edges were of unit length, a breadth-first search could solve the problem in $O(|V| + |E|)$.
- The previous solution can be extended to the weighted version by breaking each edge into smaller unit edges, though the complexity of such a solution would be exponential in the size of the input! (An edge of length $L$ can be represented in $O(\log L)$ bits.)
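As a concrete sketch of the unit-length case, here is a breadth-first search computing distances from a source (the function name and the adjacency-list representation are our own choices, not fixed by the text):

```python
from collections import deque

def bfs_distances(adj, s):
    """Unit-length shortest paths from s in O(|V| + |E|).
    adj: dict mapping each vertex to a list of its neighbors."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:          # first visit = shortest distance
                dist[v] = dist[u] + 1
                q.append(v)
    return dist
```

The queue guarantees vertices are visited in nondecreasing order of distance, which is exactly the property Dijkstra's algorithm below recovers for weighted edges.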

**Observation:** Any prefix of a shortest path is also a shortest path: if $P$ is a shortest path from $s$ to $t$, then for every vertex $v$ on $P$, the portion of $P$ from $s$ to $v$ is a shortest path from $s$ to $v$.

**More General Problem:** Find shortest paths from $s$ to all vertices $v \in V$.

### Algorithm (Dijkstra):

Idea: Maintain a tentative distance $d[v]$ for each vertex, i.e., the length of the shortest path from $s$ to $v$ found so far.

- Initialize $S = \emptyset$, $d[s] = 0$, and $d[v] = \infty$ for all $v \neq s$.
- while $S \neq V$:
    - Let $u$ be the vertex not in $S$ with smallest $d[u]$.
    - Add $u$ to $S$, and update $d[v] \leftarrow \min\{d[v],\ d[u] + \ell_{uv}\}$ for every neighbor $v$ of $u$.

**Theorem:** Dijkstra’s algorithm correctly finds the shortest paths from $s$ to all vertices.

Either of the following lemmas immediately implies the theorem.

**Lemma:** At every step, for every vertex $v \in S$, $d[v]$ is the length of the shortest path from $s$ to $v$, and the members of $S$ are the $|S|$ nearest vertices to the source $s$.

**Proof:** We prove this by induction on $|S|$. Let’s name the set after $k$ iterations $S_k$. The basis trivially holds for $S_1 = \{s\}$. Induction step ($k$ to $k+1$): Let $u$ be the $(k+1)$’st nearest vertex to $s$. The previous (next-to-last) vertex on a shortest path from $s$ to $u$ (name it $w$) is nearer to the source, so by the induction hypothesis $w$ is in $S_k$ and $d[w]$ is the distance of $w$ from the source. Considering the algorithm at the earlier step when $w$ was added to $S$, it updated its neighbor $u$, so $d[u] \le d[w] + \ell_{wu} = \mathrm{dist}(s, u)$. On the other hand, at any point of the algorithm and for all vertices $v$, $d[v]$ is at least the distance of $v$ from the source, because according to the sequence of updates there exists a path of that length from the source to $v$. Hence we must have $d[u] = \mathrm{dist}(s, u)$. As for all other vertices $v \notin S_k$ we have $d[v] \ge \mathrm{dist}(s, v) \ge \mathrm{dist}(s, u) = d[u]$, so $u$ will be the vertex added to $S$ at this step (breaking ties arbitrarily), for which $d[u]$ is its distance from the source and which is the $(k+1)$’st nearest vertex to $s$, as required for $S_{k+1}$.

Alternatively we can prove the theorem using the following lemma.

**Lemma:** At every step, for each vertex $v \in S$, $d[v]$ is the shortest-path distance from $s$ to $v$, and there is a path of length $d[v]$ from $s$ to $v$ that falls wholly inside $S$.

**Proof:** Similarly, we use a simple induction on $|S|$. The basis holds for $S_1 = \{s\}$. Induction step: Assuming the lemma for $S_k$, let $u$ be the nearest vertex to the source among $V \setminus S_k$. A shortest path from $s$ to $u$ starts in $S_k$ and ends outside of it. By the choice of $u$, all previous vertices on that path must be in $S_k$; otherwise we would have a contradiction with the choice of $u$ (a vertex outside $S_k$ that is nearer to the source). Name the vertex just before $u$ on that path $w$. Considering the algorithm at the earlier step when $w$ was added to $S$, it updated its neighbor $u$, so $d[u] \le d[w] + \ell_{wu} = \mathrm{dist}(s, u)$. On the other hand, at any point of the algorithm and for all vertices $v$, $d[v]$ is at least the distance of $v$ from the source, because according to the sequence of updates there exists a path of that length from the source to $v$. Hence we must have $d[u] = \mathrm{dist}(s, u)$. As for all other vertices $v \notin S_k$ we have $d[v] \ge d[u]$, so $u$ will be the vertex added to $S$ at this step, for which $d[u]$ is its distance from the source and there exists a shortest path to it using only vertices of $S_k \cup \{u\}$, completing the induction step.

Note: Using a data structure like a heap, the runtime of the above procedure is $O((|V| + |E|) \log |V|)$.

**Note:** We found the length of the shortest path to each vertex. To retrieve a shortest path itself, we could iteratively check all neighbors of the destination for being the previous node on a shortest path, or we could store, for each vertex, the neighbor that last updated its $d[v]$.
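The heap-based procedure, together with the predecessor pointers from the note above, can be sketched as follows (a minimal sketch; the function names and the `(neighbor, length)` adjacency representation are our own assumptions):

```python
import heapq

def dijkstra(adj, s):
    """Shortest paths from s with a binary heap: O((|V| + |E|) log |V|).
    adj: dict mapping each vertex u to a list of (v, length) pairs.
    Returns (d, prev): final distances and predecessor pointers."""
    d = {s: 0}
    prev = {}
    heap = [(0, s)]
    done = set()                         # the set S of finalized vertices
    while heap:
        du, u = heapq.heappop(heap)
        if u in done:
            continue                     # stale heap entry; skip it
        done.add(u)
        for v, w in adj[u]:
            if v not in d or du + w < d[v]:
                d[v] = du + w            # relax the edge (u, v)
                prev[v] = u              # remember who updated d[v]
                heapq.heappush(heap, (d[v], v))
    return d, prev

def path_to(prev, s, t):
    """Reconstruct an s-t shortest path by walking predecessors backwards."""
    path = [t]
    while path[-1] != s:
        path.append(prev[path[-1]])
    return path[::-1]
```

Instead of a decrease-key operation, this sketch pushes a fresh heap entry on every update and discards stale entries on pop, which keeps the same asymptotic bound.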

## Longest Increasing Subsequence

**Problem:** Given a sequence of numbers $a_1, a_2, \ldots, a_n$, find (the length of) its longest increasing subsequence (LIS).

**Example:** An LIS of $2, 5, 3, 7, 11, 8, 10, 13, 6$ is $2, 3, 7, 8, 10, 13$, which is of length $6$.

Note that the D&C technique does not work for this problem: knowing an LIS of each half of the sequence does not determine an LIS of the whole.

Let’s solve a **more general problem**: finding (the length of) the LIS ending at element $a_i$, for all indices $i$; name it $L(i)$.

**Recursion:** $L(i) = 1 + \max\{L(j) : j < i,\ a_j < a_i\}$, where the maximum over an empty set is taken to be $0$.

Using the previous observation we have solved the LIS problem: the answer is $\max_i L(i)$, and computing all the $L(i)$ takes time $O(n^2)$.
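The recursion translates directly into code (a minimal sketch; the function name is ours):

```python
def lis_length(a):
    """Length of the longest increasing subsequence, in O(n^2).
    L[i] = 1 + max(L[j] for j < i with a[j] < a[i]), or 1 if no such j."""
    n = len(a)
    L = [1] * n                    # every element alone is an IS of length 1
    for i in range(n):
        for j in range(i):
            if a[j] < a[i]:        # a[i] can extend the LIS ending at a[j]
                L[i] = max(L[i], L[j] + 1)
    return max(L, default=0)       # the LIS ends at some index
```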

**Note:** This problem has an $O(n \log n)$ solution, using the monotonicity of the smallest tail of increasing subsequences of each fixed length.
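A sketch of that faster solution: since the smallest possible tail of an increasing subsequence of length $k$ grows with $k$, the array of tails stays sorted and each element can be placed by binary search (the function name is ours):

```python
from bisect import bisect_left

def lis_length_fast(a):
    """O(n log n) LIS length. tails[k] is the smallest possible tail
    of an increasing subsequence of length k + 1; tails stays sorted."""
    tails = []
    for x in a:
        i = bisect_left(tails, x)    # first tail >= x
        if i == len(tails):
            tails.append(x)          # x extends the longest subsequence found
        else:
            tails[i] = x             # x is a smaller tail for length i + 1
    return len(tails)
```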

## Matrix Chain Multiplication

**Problem:** Given matrices $A_1, A_2, \ldots, A_n$, we want to compute the product $A_1 A_2 \cdots A_n$ most efficiently by choosing the order in which we perform the multiplications.

For simplicity, say for each multiplication we use the naive method, taking $m \cdot n \cdot p$ operations to multiply an $m \times n$ matrix by an $n \times p$ matrix, and we only want to find the minimum cost of computing the final matrix.

Let the $i$’th matrix be of size $d_{i-1} \times d_i$.

**Example:** If the sizes of matrices $A_1, A_2, A_3$ are $10 \times 100$, $100 \times 5$, and $5 \times 50$, multiplying the left pair first results in a total cost of $10 \cdot 100 \cdot 5 + 10 \cdot 5 \cdot 50 = 7500$, while the other scenario (right pair first) costs $100 \cdot 5 \cdot 50 + 10 \cdot 100 \cdot 50 = 75000$.

**More general problem:** For every $i \le j$, what is the minimum cost of performing the multiplications from the $i$’th matrix to the $j$’th one, i.e., computing $A_i A_{i+1} \cdots A_j$; name this value $C(i, j)$.

Observation: $C(i, i) = 0$, and for $i < j$,
$$C(i, j) = \min_{i \le k < j} \left\{ C(i, k) + C(k+1, j) + d_{i-1} \, d_k \, d_j \right\}.$$

Analysis: Storing $C$ in a two-dimensional array, we need to compute values for the $O(n^2)$ elements above the diagonal, where each value takes $O(n)$ time to compute, summing up to $O(n^3)$ for our total computation.

**Note:** In DP we usually use smaller instances of the general problem to solve larger ones. While we compute the answer for an instance using others, it is important that the referred instances be previously solved and known. In the last example, an acceptable order of finding the values $C(i, j)$ is in increasing order of $j - i$, i.e., along diagonals of the $2$-d array, moving up and starting at the main diagonal.
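The diagonal-by-diagonal order can be sketched as follows (a minimal sketch; the function name is ours, and the dimensions are passed as the list $d_0, d_1, \ldots, d_n$):

```python
def matrix_chain_cost(d):
    """Minimum scalar-multiplication cost for A_1 ... A_n, where the
    i'th matrix has size d[i-1] x d[i] (so len(d) == n + 1).
    Fills C diagonal by diagonal, in increasing order of j - i."""
    n = len(d) - 1
    C = [[0] * (n + 1) for _ in range(n + 1)]   # C[i][j], 1-indexed; C[i][i] = 0
    for length in range(1, n):                  # length = j - i
        for i in range(1, n - length + 1):
            j = i + length
            # try every split point k: (A_i..A_k)(A_{k+1}..A_j)
            C[i][j] = min(C[i][k] + C[k + 1][j] + d[i - 1] * d[k] * d[j]
                          for k in range(i, j))
    return C[1][n]
```

By the time $C(i, j)$ is computed, every $C(i, k)$ and $C(k+1, j)$ it refers to lies on a shorter diagonal and is already known, illustrating the ordering requirement from the note above.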