# CS261 Lecture 5: Linear Programming

In which we introduce linear programming.

1. Linear Programming

A linear program is an optimization problem in which we have a collection of variables, which can take real values, and we want to find an assignment of values to the variables that satisfies a given collection of linear inequalities and that maximizes or minimizes a given linear function.

(The term "programming" in "linear programming" is not used as in computer programming, but as in, e.g., TV programming, to mean planning.)

For example, the following is a linear program.

$\displaystyle \begin{array}{ll} {\rm maximize} & x_1 + x_2\\ {\rm subject\ to}\\ & x_1 + 2x_2 \leq 1\\ & 2x_1 + x_2\leq 1\\ & x_1 \geq 0\\ & x_2 \geq 0 \end{array} \ \ \ \ \ (1)$
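As a sanity check, this small LP can be solved by brute force. The sketch below (not part of the original notes) relies on the standard fact, taken for granted here, that a bounded, feasible linear program attains its optimum at a vertex of the feasible region, that is, at a point where two of the constraints hold with equality; it enumerates all such intersection points and keeps the best feasible one.

```python
from itertools import combinations

# The LP above written as A x <= b, with objective c . x to maximize.
# Rows: x1 + 2 x2 <= 1, 2 x1 + x2 <= 1, -x1 <= 0, -x2 <= 0.
A = [(1.0, 2.0), (2.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
b = [1.0, 1.0, 0.0, 0.0]
c = (1.0, 1.0)

def solve_2x2(r1, r2, b1, b2):
    """Solve r1 . x = b1, r2 . x = b2 by Cramer's rule; None if singular."""
    det = r1[0] * r2[1] - r1[1] * r2[0]
    if abs(det) < 1e-12:
        return None
    x1 = (b1 * r2[1] - r1[1] * b2) / det
    x2 = (r1[0] * b2 - b1 * r2[0]) / det
    return (x1, x2)

# Try every pair of constraints as equalities; keep the best feasible point.
best = None
for i, j in combinations(range(len(A)), 2):
    x = solve_2x2(A[i], A[j], b[i], b[j])
    if x is None:
        continue
    if all(a1 * x[0] + a2 * x[1] <= bk + 1e-9
           for (a1, a2), bk in zip(A, b)):
        value = c[0] * x[0] + c[1] * x[1]
        if best is None or value > best[0]:
            best = (value, x)

value, (x1, x2) = best
print(f"optimum {value:.4f} at x1={x1:.4f}, x2={x2:.4f}")
# prints: optimum 0.6667 at x1=0.3333, x2=0.3333
```

Enumerating all pairs of constraints takes time exponential in the number of variables in general, so this is only a toy check, not a reasonable algorithm; it does, however, confirm the optimum of ${\frac 23}$ derived below.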

The linear function that we want to optimize (${x_1+x_2}$ in the above example) is called the objective function. A feasible solution is an assignment of values to the variables that satisfies the inequalities. The value that the objective function gives to an assignment is called the cost of the assignment. For example, ${x_1 := \frac 13}$ and ${x_2 := \frac 13}$ is a feasible solution, of cost ${\frac 23}$. Note that if ${x_1,x_2}$ are values that satisfy the inequalities, then, by summing the first two inequalities, we see that

$\displaystyle 3x_1 + 3 x_2 \leq 2$

that is,

$\displaystyle x_1 + x_2 \leq \frac 23$

and so no feasible solution has cost higher than ${\frac 23}$, so the solution ${x_1 := \frac 13}$, ${x_2 := \frac 13}$ is optimal. As we will see in the next lecture, this trick of summing inequalities to verify the optimality of a solution is part of the very general theory of duality of linear programming.
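The certificate implicit in this argument can be checked mechanically. Taking nonnegative multipliers ${y = (1,1)}$ for the first two constraints, the combined inequality has coefficients that dominate the objective coefficients, so (since the variables are nonnegative) the combined right-hand side bounds the value of every feasible solution. A minimal sketch, restating the LP data from above:

```python
# The two nontrivial constraints of LP (1), as rows of A with right-hand
# sides b, plus the objective c; the variables satisfy x1, x2 >= 0.
A = [(1.0, 2.0), (2.0, 1.0)]
b = [1.0, 1.0]
c = (1.0, 1.0)

# Nonnegative multipliers, one per constraint, as in the summing argument.
y = (1.0, 1.0)

# Summing y[i] times constraint i gives (y . A) x <= y . b for feasible x.
combined = tuple(sum(y[i] * A[i][j] for i in range(2)) for j in range(2))
bound = sum(y[i] * b[i] for i in range(2))

# Since x >= 0 and each combined coefficient is at least the matching
# objective coefficient, c . x <= (y . A) . x <= y . b for all feasible x.
assert all(combined[j] >= c[j] for j in range(2))
print(combined, bound)  # prints: (3.0, 3.0) 2.0, i.e. 3 x1 + 3 x2 <= 2
```

Dividing the combined inequality by 3 recovers the bound ${x_1 + x_2 \leq \frac 23}$. Finding the best such multipliers is itself a linear program, which is exactly the dual program of the next lecture.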

Linear programming is a rather different optimization problem from the ones we have studied so far. Optimization problems such as Vertex Cover, Set Cover, Steiner Tree and TSP are such that, for a given input, there are only finitely many possible solutions, so it is always trivial to solve the problem in finite time. The number of solutions, however, is typically exponentially large in the size of the input, and so, in order to be able to solve the problem on reasonably large inputs, we look for polynomial-time algorithms. In linear programming, by contrast, each variable can take infinitely many possible values, so it is not even clear that the problem is solvable in finite time.

As we will see, it is indeed possible to solve linear programming problems in finite time; in fact, there are polynomial-time algorithms, as well as algorithms that are efficient in practice, that solve linear programs optimally.

There are at least two reasons why we are going to study linear programming in a course devoted to combinatorial optimization:

• Efficient linear programming solvers are often used as part of the toolkit to design exact or approximate algorithms for combinatorial problems.
• The powerful theory of duality of linear programming, which we will describe in the next lecture, is a very useful mathematical tool for reasoning about algorithms, including purely combinatorial algorithms for combinatorial problems that seemingly have no connection with continuous optimization.