CS261 Lecture 15: the LP of Max Flow

In which we look at the linear programming formulation of the maximum flow problem, construct its dual, and find a randomized-rounding proof of the max flow – min cut theorem.

In the first part of the course, we designed approximation algorithms “by hand,” following our combinatorial intuition about the problems. Then we looked at linear programming relaxations of the problems we worked on, and we saw that approximation algorithms for those problems could also be derived by rounding a linear programming solution. We also saw that our algorithms could be interpreted as constructing, at the same time, an integral primal solution and a feasible solution for the dual problem.

Now that we have developed exact combinatorial algorithms for a few problems (maximum flow, minimum s-t cut, global min cut, maximum matching and minimum vertex cover in bipartite graphs), we are going to look at linear programming relaxations of those problems, and use them to gain a deeper understanding of the problems and of our algorithms.

We start with the maximum flow and the minimum cut problems.

CS261 Lecture 6: Duality in Linear Programming

In which we introduce the theory of duality in linear programming.

1. The Dual of a Linear Program

Suppose that we have the following linear program in maximization standard form:

\displaystyle   \begin{array}{ll} {\rm maximize} & x_1 + 2x_2 + x_3 + x_4\\ {\rm subject\ to}\\ & x_1 +2 x_2 + x_3 \leq 2\\ & x_2 + x_4\leq 1\\ & x_1 + 2x_3\leq 1\\ & x_1 \geq 0\\ & x_2 \geq 0\\ &x_3 \geq 0\\ & x_4 \geq 0 \end{array} \ \ \ \ \ (1)

and that an LP-solver has found for us the solution {x_1:=1}, {x_2:=\frac 12}, {x_3:=0}, {x_4:= \frac 12} of cost {2.5}. How can we convince ourselves, or another user, that the solution is indeed optimal, without having to trace the steps of the computation of the algorithm?
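
As a quick sanity check (not part of the optimality argument we are about to develop), we can hand program (1) to an off-the-shelf solver. Here is a minimal sketch using SciPy's linprog, assuming SciPy is installed; since linprog minimizes, we negate the objective:

```python
# Sanity check of program (1) with an off-the-shelf LP solver.
# linprog minimizes, so we negate the objective of our maximization LP.
from scipy.optimize import linprog

c = [-1, -2, -1, -1]          # -(x1 + 2*x2 + x3 + x4)
A_ub = [[1, 2, 1, 0],         # x1 + 2*x2 + x3       <= 2
        [0, 1, 0, 1],         #       x2        + x4 <= 1
        [1, 0, 2, 0]]         # x1        + 2*x3     <= 1
b_ub = [2, 1, 1]

# bounds default to (0, None), i.e. x_j >= 0, matching standard form
res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, -res.fun)        # expect the optimum value 2.5
```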

Observe that if we have two valid inequalities

\displaystyle  a\leq b \ {\rm and } \ c \leq d

then we can deduce that the inequality

\displaystyle  a+c \leq b+d

(derived by “summing the left-hand sides and the right-hand sides” of our original inequalities) is also true. In fact, we can also scale the inequalities by non-negative multiplicative factors before adding them up, so for all non-negative values {y_1,y_2 \geq 0} we also have

\displaystyle  y_1 a + y_2 c \leq y_1 b + y_2 d

Going back to our linear program (1), we see that if we scale the first inequality by {\frac 12}, add the second inequality, and then add the third inequality scaled by {\frac 12}, we get that, for every {(x_1,x_2,x_3,x_4)} that is feasible for (1),

\displaystyle  x_1 + 2x_2 + 1.5 x_3 + x_4 \leq 2.5

And so, for every feasible {(x_1,x_2,x_3,x_4)}, its cost is

\displaystyle  x_1 + 2x_2 + x_3 + x_4 \leq x_1 + 2x_2 + 1.5 x_3 + x_4 \leq 2.5

where the first inequality uses the constraint {x_3 \geq 0}. Every feasible solution thus has cost at most {2.5}, meaning that our solution of cost {2.5} is indeed optimal.
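
A certificate of this kind is easy to check mechanically. A minimal sketch in plain Python (the data of (1) and the multipliers are hardcoded from the discussion above): it verifies that the combined left-hand side dominates the objective coefficient by coefficient, which, together with {x_j \geq 0}, yields the upper bound.

```python
# Verify that the multipliers y = (1/2, 1, 1/2) certify the bound 2.5.
A = [[1, 2, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 2, 0]]
b = [2, 1, 1]
c = [1, 2, 1, 1]              # objective coefficients of (1)
y = [0.5, 1, 0.5]             # the scaling factors chosen above

# Coefficient-wise, the combined inequality dominates the objective,
# so (since every x_j >= 0) cost(x) <= sum_i y_i * b_i for feasible x.
combined = [sum(y[i] * A[i][j] for i in range(3)) for j in range(4)]
assert all(combined[j] >= c[j] for j in range(4))   # [1, 2, 1.5, 1]
print(sum(yi * bi for yi, bi in zip(y, b)))         # upper bound: 2.5
```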

In general, how do we find a good choice of scaling factors for the inequalities, and what kind of upper bounds on the optimum can we prove this way?
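
As a preview of where this question leads (the rest of the lecture develops it in general): the search for the best non-negative multipliers is itself a linear program, the dual of (1). It has one variable {y_i} per inequality of (1) and one constraint per variable {x_j}, requiring the combined coefficient of {x_j} to dominate its objective coefficient:

\displaystyle   \begin{array}{ll} {\rm minimize} & 2y_1 + y_2 + y_3\\ {\rm subject\ to}\\ & y_1 + y_3 \geq 1\\ & 2y_1 + y_2 \geq 2\\ & y_1 + 2y_3 \geq 1\\ & y_2 \geq 1\\ & y_1 \geq 0\\ & y_2 \geq 0\\ & y_3 \geq 0 \end{array}

The multipliers {(\frac 12, 1, \frac 12)} used above are a feasible solution of cost {2.5}, matching the cost of our primal solution.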
