*In which we introduce linear programming.*

**1. Linear Programming**

A *linear program* is an optimization problem in which we have a collection of variables, which can take real values, and we want to find an assignment of values to the variables that satisfies a given collection of linear inequalities and that maximizes or minimizes a given linear function.

(The term *programming* in *linear programming* is not used as in *computer programming*, but as in, e.g., *TV programming*, to mean *planning*.)

For example, the following is a linear program:

$$\begin{array}{ll}
\text{maximize} & x_1 + x_2\\
\text{subject to} & x_1 + 2x_2 \leq 1\\
& 2x_1 + x_2 \leq 1\\
& x_1 \geq 0\\
& x_2 \geq 0
\end{array}$$

The linear function that we want to optimize ($x_1 + x_2$ in the above example) is called the *objective function*. A *feasible solution* is an assignment of values to the variables that satisfies the inequalities. The value that the objective function gives to an assignment is called the *cost* of the assignment. For example, $x_1 = \frac 13$ and $x_2 = \frac 13$ is a feasible solution, of cost $\frac 23$. Note that if $x_1, x_2$ are values that satisfy the inequalities, then, by summing the first two inequalities, we see that

$$3x_1 + 3x_2 \leq 2$$

that is,

$$x_1 + x_2 \leq \frac 23$$

and so no feasible solution has cost higher than $\frac 23$, so the solution $x_1 = \frac 13$, $x_2 = \frac 13$ is optimal. As we will see in the next lecture, this trick of summing inequalities to verify the optimality of a solution is part of the very general theory of *duality* of linear programming.
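This check can be carried out mechanically. The sketch below (a pure-Python illustration; the helper names `feasible` and `cost` are ours, and the LP is a small two-variable example of the form discussed above: maximize $x_1 + x_2$ subject to $x_1 + 2x_2 \leq 1$, $2x_1 + x_2 \leq 1$, $x_1, x_2 \geq 0$) verifies that the candidate point is feasible, and then confirms on a fine grid that no feasible point beats the bound obtained by summing the inequalities.

```python
def feasible(x1, x2):
    """True if (x1, x2) satisfies all four inequalities of the example LP."""
    return x1 + 2 * x2 <= 1 and 2 * x1 + x2 <= 1 and x1 >= 0 and x2 >= 0

def cost(x1, x2):
    """The objective function of the example LP."""
    return x1 + x2

# The candidate solution (1/3, 1/3) is feasible and has cost 2/3.
assert feasible(1/3, 1/3)
assert abs(cost(1/3, 1/3) - 2/3) < 1e-9

# Sanity check on a grid: every feasible grid point has cost at most 2/3,
# exactly as the summed inequalities guarantee for all feasible points.
n = 200
best = max(
    cost(i / n, j / n)
    for i in range(n + 1)
    for j in range(n + 1)
    if feasible(i / n, j / n)
)
assert best <= 2/3 + 1e-9
print(best)  # approaches 2/3 as the grid is refined
```

Of course, the grid search only spot-checks the claim; it is the summing-of-inequalities argument that proves the bound for *every* feasible point, not just those on the grid.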

Linear programming is a rather different optimization problem from the ones we have studied so far. Optimization problems such as Vertex Cover, Set Cover, Steiner Tree and TSP are such that, for a given input, there is only a finite number of possible solutions, so it is always trivial to solve the problem in finite time. The number of solutions, however, is typically exponentially big in the size of the input and so, in order to be able to solve the problem on reasonably large inputs, we look for polynomial-time algorithms. In linear programming, however, each variable can take an infinite number of possible values, so it is not even clear that the problem is solvable in finite time.

As we will see, it is indeed possible to solve linear programming problems in finite time; in fact, there are polynomial-time algorithms, as well as algorithms that are efficient in practice, that solve linear programs optimally.
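Such solvers are readily available off the shelf. As an illustration (a sketch assuming SciPy is installed; `linprog` minimizes by convention, so we negate the objective to maximize), here is how one might solve a small two-variable program of the kind discussed above:

```python
from scipy.optimize import linprog

# Maximize x1 + x2 subject to x1 + 2*x2 <= 1, 2*x1 + x2 <= 1, x1, x2 >= 0.
# linprog minimizes, so we pass the negated objective coefficients.
res = linprog(
    c=[-1, -1],                      # minimize -(x1 + x2)
    A_ub=[[1, 2], [2, 1]],           # left-hand sides of the two inequalities
    b_ub=[1, 1],                     # right-hand sides
    bounds=[(0, None), (0, None)],   # x1 >= 0, x2 >= 0
    method="highs",
)
print(res.x)     # optimal assignment: x1 = x2 = 1/3
print(-res.fun)  # optimal cost: 2/3
```

The solver returns the same optimum that the hand argument certified, which is reassuring but not a coincidence: the dual multipliers it computes internally play exactly the role of the coefficients used when summing the inequalities.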

There are at least two reasons why we are going to study linear programming in a course devoted to combinatorial optimization:

- Efficient linear programming solvers are often used as part of the toolkit to design exact or approximate algorithms for combinatorial problems.
- The powerful theory of *duality* of linear programming, which we will describe in the next lecture, is a very useful mathematical theory to reason about algorithms, including purely combinatorial algorithms for combinatorial problems that seemingly have no connection with continuous optimization.
