*In which we prove the easy case of Cheeger’s inequality.*

**1. Expansion and The Second Eigenvalue**

Let $G = (V,E)$ be an undirected $d$-regular graph on $n$ vertices, $A$ its adjacency matrix, $M = \frac{1}{d} A$ its normalized adjacency matrix, and $1 = \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ be the eigenvalues of $M$.

Recall that we defined the *edge expansion* of a cut $(S, V-S)$ of the vertices of $G$ as

$$h(S) := \frac{E(S, V-S)}{d \cdot \min\{ |S|, |V-S| \}}$$

and that the edge expansion of $G$ is $h(G) := \min_{S} h(S)$.

We also defined the related notion of the *sparsity* of a cut $(S, V-S)$ as

$$\phi(S) := \frac{E(S, V-S)}{\frac{d}{n} \cdot |S| \cdot |V-S|}$$

and $\phi(G) := \min_{S} \phi(S)$; the *sparsest cut* problem is to find a cut of minimal sparsity.

Recall also that in the last lecture we proved that $\lambda_2 = 1$ if and only if $G$ is disconnected. This is equivalent to saying that $1 - \lambda_2 = 0$ if and only if $h(G) = 0$. In this lecture and the next we will see that this statement admits an *approximate version* that, qualitatively, says that $1 - \lambda_2$ is small if and only if $h(G)$ is small. Quantitatively, we have

**Theorem 1 (Cheeger’s Inequalities)**

$$\frac{1 - \lambda_2}{2} \leq h(G) \leq \sqrt{2 \cdot (1 - \lambda_2)} \qquad (1)$$

**2. The Easy Direction**

In this section we prove

**Lemma 2**

$$1 - \lambda_2 \leq \phi(G)$$

From which we have one direction of Cheeger’s inequality, after recalling that $h(G) \leq \phi(G) \leq 2 h(G)$.
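The comparison between expansion and sparsity used here follows by bounding $|S| \cdot |V-S|$ against $\min\{|S|, |V-S|\}$; a short derivation of this standard fact (my own filling-in, using the definitions above):

```latex
\text{For a cut } (S, V-S) \text{ with } s := |S| \le |V - S| \text{ and } n = |V|:
\quad \frac{n}{2} \;\le\; n - s \;\le\; n
\;\Longrightarrow\; \frac{d s}{2} \;\le\; \frac{d}{n}\, s (n - s) \;\le\; d s
\;\Longrightarrow\; h(S) \;\le\; \phi(S) \;\le\; 2\, h(S),
```

and minimizing over all cuts gives $h(G) \leq \phi(G) \leq 2 h(G)$.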

Let us find an equivalent restatement of the sparsest cut problem. If we represent a set $S \subseteq V$ as a bit-vector $x \in \{0,1\}^V$, with $x_i = 1$ if and only if $i \in S$, then

$$E(S, V-S) = \frac{1}{2} \sum_{i,j} A_{i,j} \cdot |x_i - x_j|$$

and

$$|S| \cdot |V-S| = \frac{1}{2} \sum_{i,j} |x_i - x_j|$$

so that, after some simplifications, we can write

$$\phi(G) = \min_{x \in \{0,1\}^V - \{ \mathbf{0}, \mathbf{1} \}} \frac{\sum_{i,j} A_{i,j} \cdot |x_i - x_j|}{\frac{d}{n} \sum_{i,j} |x_i - x_j|} \qquad (2)$$

Note that, when $x_i, x_j$ take boolean values, then so does $|x_i - x_j|$, so that we may also equivalently write

$$\phi(G) = \min_{x \in \{0,1\}^V - \{ \mathbf{0}, \mathbf{1} \}} \frac{\sum_{i,j} A_{i,j} \cdot |x_i - x_j|^2}{\frac{d}{n} \sum_{i,j} |x_i - x_j|^2} \qquad (3)$$
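As a numerical sanity check of the boolean reformulations above (a minimal sketch of my own; the 4-cycle example graph is not from the lecture), a brute-force computation confirms that the definition of $\phi(G)$ and formulations (2) and (3) agree:

```python
import itertools
import numpy as np

# 4-cycle: a hypothetical 2-regular example on vertices 0..3
n, d = 4, 2
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

best_def = best_abs = best_sq = float("inf")
for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits, dtype=float)
    if x.min() == x.max():                     # skip S = empty set and S = V
        continue
    s = x.sum()
    cut = A[x == 1][:, x == 0].sum()           # E(S, V-S), each edge once
    best_def = min(best_def, cut / ((d / n) * s * (n - s)))
    D = np.abs(x[:, None] - x[None, :])        # |x_i - x_j| over ordered pairs
    best_abs = min(best_abs, (A * D).sum() / ((d / n) * D.sum()))
    best_sq = min(best_sq, (A * D ** 2).sum() / ((d / n) * (D ** 2).sum()))

# the definition of phi(G) and formulations (2) and (3) coincide
assert abs(best_def - best_abs) < 1e-9
assert abs(best_abs - best_sq) < 1e-9
```

The three quantities coincide because each ordered pair $(i,j)$ counts every cut edge twice in both numerator and denominator, and the factors of $2$ cancel.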

In the last lecture, we gave the following characterization of $1 - \lambda_2$:

$$1 - \lambda_2 = \min_{x \in \mathbb{R}^V - \{ \mathbf{0} \},\ x \perp \mathbf{1}} \frac{\sum_{i,j} A_{i,j} \cdot |x_i - x_j|^2}{2d \sum_i x_i^2}$$

Now we claim that the following characterization is also true:

$$1 - \lambda_2 = \min_{x \in \mathbb{R}^V \text{ nonconstant}} \frac{\sum_{i,j} A_{i,j} \cdot |x_i - x_j|^2}{\frac{d}{n} \sum_{i,j} |x_i - x_j|^2} \qquad (4)$$

This is because

$$\sum_{i,j} |x_i - x_j|^2 = 2n \sum_i x_i^2 - 2 \left( \sum_i x_i \right)^2$$

so for every $x$ such that $x \perp \mathbf{1}$ we have that $\sum_{i,j} |x_i - x_j|^2 = 2n \sum_i x_i^2$, and so

$$1 - \lambda_2 = \min_{x \in \mathbb{R}^V - \{ \mathbf{0} \},\ x \perp \mathbf{1}} \frac{\sum_{i,j} A_{i,j} \cdot |x_i - x_j|^2}{\frac{d}{n} \sum_{i,j} |x_i - x_j|^2} \geq \min_{x \in \mathbb{R}^V \text{ nonconstant}} \frac{\sum_{i,j} A_{i,j} \cdot |x_i - x_j|^2}{\frac{d}{n} \sum_{i,j} |x_i - x_j|^2}$$

To conclude the argument, take an $x$ that minimizes the right-hand side of (4), and observe that if we shift every coordinate $x_i$ by the same constant $c$ then we obtain another optimal solution, because the shift cancels in all the expressions $x_i - x_j$ both in the numerator and the denominator. In particular, we can define $x'_i := x_i - \frac{1}{n} \sum_j x_j$ and note that the entries of $x'$ sum to zero, and so $x' \perp \mathbf{1}$. This proves that

$$\min_{x \in \mathbb{R}^V \text{ nonconstant}} \frac{\sum_{i,j} A_{i,j} \cdot |x_i - x_j|^2}{\frac{d}{n} \sum_{i,j} |x_i - x_j|^2} \geq \min_{x \in \mathbb{R}^V - \{ \mathbf{0} \},\ x \perp \mathbf{1}} \frac{\sum_{i,j} A_{i,j} \cdot |x_i - x_j|^2}{\frac{d}{n} \sum_{i,j} |x_i - x_j|^2} = 1 - \lambda_2$$

and so we have established (4).
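The shift-invariance argument can also be checked numerically (my own minimal sketch; the 4-cycle is a hypothetical example): an eigenvector of $\lambda_2$ attains the minimum in (4), and shifting it by a constant leaves the objective unchanged.

```python
import numpy as np

# 4-cycle: a hypothetical 2-regular example (vertices 0-1-2-3-0)
n, d = 4, 2
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
M = A / d

w, V = np.linalg.eigh(M)           # eigenvalues in ascending order
order = np.argsort(w)[::-1]        # re-sort so that 1 = lam_1 >= lam_2 >= ...
gap = 1 - w[order[1]]              # 1 - lambda_2

def ratio(x):
    """Objective of (4): sum_ij A_ij (x_i - x_j)^2 / ((d/n) sum_ij (x_i - x_j)^2)."""
    D2 = (x[:, None] - x[None, :]) ** 2
    return (A * D2).sum() / ((d / n) * D2.sum())

v2 = V[:, order[1]]                # an eigenvector achieving lambda_2

# the eigenvector attains 1 - lambda_2, and shifting every coordinate
# by the same constant changes nothing, as in the argument above
assert abs(ratio(v2) - gap) < 1e-9
assert abs(ratio(v2 + 3.7) - ratio(v2)) < 1e-9
```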

Comparing (4) and (3), it is clear that the quantity $1 - \lambda_2$ is a *continuous relaxation* of $\phi(G)$, and hence $1 - \lambda_2 \leq \phi(G)$, which proves Lemma 2.
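Lemma 2 can likewise be verified numerically on small graphs (a sketch under my own choice of examples, the 4-cycle and $K_4$, which are not from the lecture):

```python
import numpy as np

def spectral_gap(A, d):
    """1 - lambda_2 for the normalized adjacency matrix M = A/d."""
    lam = np.sort(np.linalg.eigvalsh(A / d))[::-1]
    return 1 - lam[1]

def phi(A, d):
    """Sparsity phi(G) by brute force over all nonempty proper cuts."""
    n = len(A)
    best = float("inf")
    for mask in range(1, 2 ** n - 1):
        x = np.array([(mask >> i) & 1 for i in range(n)])
        cut = A[x == 1][:, x == 0].sum()      # E(S, V-S), each edge once
        s = x.sum()
        best = min(best, cut / ((d / n) * s * (n - s)))
    return best

# hypothetical small examples: 4-cycle (2-regular) and K4 (3-regular)
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)
K4 = np.ones((4, 4)) - np.eye(4)

for A, d in [(C4, 2), (K4, 3)]:
    assert spectral_gap(A, d) <= phi(A, d) + 1e-9   # Lemma 2: 1 - lambda_2 <= phi(G)
```

On these two examples the bound happens to be tight; on longer cycles, for instance, the inequality is strict.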

**3. Other Relaxations of $\phi$**

Having established that we can view $1 - \lambda_2$ as a *relaxation* of $\phi(G)$, the proof that $h(G) \leq \sqrt{2 \cdot (1 - \lambda_2)}$ can be seen as a *rounding* algorithm that, given a real-valued solution to (4), finds a comparably good solution for (3).

Later in the course we will see two more approximation algorithms for sparsest cut and edge expansion. Both are based on continuous relaxations of $\phi(G)$ starting from (2).

The algorithm of Leighton and Rao is based on a relaxation that is defined by observing that every bit-vector $x \in \{0,1\}^V$ defines the semi-metric $d(i,j) := |x_i - x_j|$ over the vertices; the Leighton-Rao relaxation is obtained by allowing arbitrary semi-metrics:

$$LR(G) := \min_{d(\cdot,\cdot) \ \text{semi-metric over} \ V} \frac{\sum_{i,j} A_{i,j} \cdot d(i,j)}{\frac{d}{n} \sum_{i,j} d(i,j)}$$

It is not difficult to express $LR(G)$ as a linear programming problem.
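For concreteness, here is one standard way to write $LR(G)$ as a linear program (a sketch; normalizing the denominator to $1$ is my own choice of presentation), with one variable $d_{i,j}$ per pair of vertices:

```latex
\begin{aligned}
\text{minimize}\quad & \sum_{i,j} A_{i,j}\, d_{i,j} \\
\text{subject to}\quad & d_{i,j} \le d_{i,k} + d_{k,j} && \forall\, i,j,k \in V \\
& \tfrac{d}{n} \sum_{i,j} d_{i,j} = 1 \\
& d_{i,j} = d_{j,i} \ge 0, \quad d_{i,i} = 0 && \forall\, i,j \in V
\end{aligned}
```

The ratio objective becomes linear once the denominator is fixed, since the objective of $LR(G)$ is invariant under scaling of the semi-metric.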

The algorithm of Arora-Rao-Vazirani is obtained by noting that, for a bit-vector $x \in \{0,1\}^V$, the distances $d(i,j) := |x_i - x_j|$ define a semi-metric which can also be seen as the square of the Euclidean distance between the $x_i$, because $|x_i - x_j| = |x_i - x_j|^2$, and such that $\sqrt{d(i,j)}$ is also a semi-metric, trivially so because $\sqrt{d(i,j)} = d(i,j)$. If a distance function $d(\cdot,\cdot)$ is a semi-metric such that $\sqrt{d(\cdot,\cdot)}$ is a Euclidean semi-metric, then $d(\cdot,\cdot)$ is called a *negative type* semi-metric. The Arora-Rao-Vazirani relaxation is

$$ARV(G) := \min_{d(\cdot,\cdot) \ \text{negative type semi-metric over} \ V} \frac{\sum_{i,j} A_{i,j} \cdot d(i,j)}{\frac{d}{n} \sum_{i,j} d(i,j)}$$

The Arora-Rao-Vazirani relaxation can be expressed as a semi-definite programming problem.
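In the same spirit, a standard semi-definite programming form of the Arora-Rao-Vazirani relaxation (a sketch, not necessarily the exact formulation to be used later in the course) represents the negative-type semi-metric by vectors, via $d(i,j) = \| v_i - v_j \|^2$:

```latex
\begin{aligned}
\text{minimize}\quad & \sum_{i,j} A_{i,j}\, \| v_i - v_j \|^2 \\
\text{subject to}\quad & \| v_i - v_j \|^2 \le \| v_i - v_k \|^2 + \| v_k - v_j \|^2 && \forall\, i,j,k \in V \\
& \tfrac{d}{n} \sum_{i,j} \| v_i - v_j \|^2 = 1, \qquad v_i \in \mathbb{R}^n
\end{aligned}
```

This is an SDP because the squared distances are linear in the Gram matrix of the vectors $v_i$, which is constrained to be positive semi-definite.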

From this discussion it is clear that the Arora-Rao-Vazirani relaxation is a tightening of the Leighton-Rao relaxation and that we have

$$LR(G) \leq ARV(G) \leq \phi(G)$$

It is less obvious in this treatment, and we will see it later, that the Arora-Rao-Vazirani relaxation is also a tightening of the relaxation of $\phi(G)$ given by $1 - \lambda_2$, that is

$$1 - \lambda_2 \leq ARV(G) \leq \phi(G)$$

The relaxations $1 - \lambda_2$ and $LR(G)$ are incomparable.
