*In which we begin to discuss the Arora-Rao-Vazirani rounding procedure.*

Recall that, in a graph $G=(V,E)$ with adjacency matrix $A$, the ARV relaxation of the sparsest cut problem is the following semidefinite program:

$$ \begin{array}{lll} \min & \sum_{u,v} A_{uv} \, \| x_u - x_v \|^2 \\ \text{s.t.} & \sum_{u,v} \| x_u - x_v \|^2 = n^2 \\ & \| x_u - x_v \|^2 \leq \| x_u - x_w \|^2 + \| x_w - x_v \|^2 & \forall u,v,w \in V \\ & x_v \in {\mathbb R}^n & \forall v \in V \end{array} $$

If we denote by $ARV(G)$ the optimum of the relaxation, then we claimed that

$$ ARV(G) \;\leq\; \phi(G) \;\leq\; O(\sqrt{\log n}) \cdot ARV(G) $$

where the first inequality follows from the fact that $ARV(G)$ is a relaxation of the sparsest cut $\phi(G)$, and the second inequality is the result whose proof we begin to discuss today.

**1. Rounding the Arora-Rao-Vazirani Relaxation**

Given the equivalence between the sparsest cut problem and the “$\ell_1$ relaxation” of sparsest cut, it will be enough to prove the following result.

Theorem 1 (Rounding of ARV) Let $G=(V,E)$ be a graph, $A$ its adjacency matrix, and $\{ x_v \}_{v\in V}$ be a feasible solution to the ARV relaxation. Then there is a mapping $f: V \rightarrow {\mathbb R}$ such that

$$ \frac{ \sum_{u,v} A_{uv} \, | f(u) - f(v) | }{ \sum_{u,v} | f(u) - f(v) | } \;\leq\; O(\sqrt{\log n}) \cdot \frac{ \sum_{u,v} A_{uv} \, \| x_u - x_v \|^2 }{ \sum_{u,v} \| x_u - x_v \|^2 } $$

As in the rounding of the Leighton-Rao relaxation via Bourgain’s theorem, we will identify a set $S \subseteq V$, and define

$$ f(v) := d(v,S) := \min_{s \in S} d(v,s), \qquad \text{where } d(u,v) := \| x_u - x_v \|^2. $$

Recall that, as we saw in the proof of Bourgain’s embedding theorem, no matter how we choose the set $S$ we have

$$ \sum_{u,v} A_{uv} \, | f(u) - f(v) | \;\leq\; \sum_{u,v} A_{uv} \, \| x_u - x_v \|^2 $$

where we are not using any facts about $\| x_u - x_v \|^2$ other than the fact that, for solutions of the ARV relaxation, it is a distance function that obeys the triangle inequality.

This means that, in order to prove the theorem, we just have to find a set $S$ such that

$$ \sum_{u,v} | d(u,S) - d(v,S) | \;\geq\; \frac 1 {O(\sqrt{\log n})} \cdot \sum_{u,v} \| x_u - x_v \|^2 $$

and this is a considerable simplification because the above expression is completely independent of the graph! The remaining problem is purely one about geometry.
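The bound on the numerator rests only on the fact that $v \mapsto d(v,S)$ is 1-Lipschitz with respect to $d$, that is, $|d(u,S) - d(v,S)| \leq d(u,v)$. As a quick numerical sanity check (not part of the lecture; the point set and the choice of $S$ are arbitrary illustrations of mine), the following Python snippet builds a toy negative-type metric out of 0/1 vectors, for which squared Euclidean distance equals Hamming distance, and verifies the Lipschitz property:

```python
import itertools
import random

random.seed(0)

# Toy negative-type metric: for 0/1 vectors, squared Euclidean distance
# equals Hamming distance, which satisfies the triangle inequality.
n, k = 12, 8
points = [tuple(random.randint(0, 1) for _ in range(k)) for _ in range(n)]

def d(p, q):
    # squared Euclidean distance between 0/1 vectors = Hamming distance
    return sum((a - b) ** 2 for a, b in zip(p, q))

def f(v, S):
    # Frechet-style embedding: distance from v to the closest point of S
    return min(d(v, s) for s in S)

S = points[: n // 3]  # an arbitrary choice of S; the bound holds for any S

# |f(u) - f(v)| <= d(u, v) for every pair, by the triangle inequality,
# so sum_{u,v} A_uv |f(u)-f(v)| never exceeds sum_{u,v} A_uv d(u,v).
assert all(abs(f(u, S) - f(v, S)) <= d(u, v)
           for u, v in itertools.combinations(points, 2))
```

Any other choice of $S$ passes the same check, which is exactly why the numerator of the quotient is under control no matter how $S$ is chosen.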

Recall that if we have a set of vectors $\{ x_v \}_{v \in V}$ such that the distance function $d(u,v) := \| x_u - x_v \|^2$ satisfies the triangle inequality, then we say that $d(\cdot,\cdot)$ is a (semi-)metric of *negative type*.
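Not every set of vectors gives a semimetric of negative type: already three collinear points violate the triangle inequality for squared distances. A small Python check (an illustration of mine, not part of the lecture) contrasts this with 0/1 vectors, whose squared Euclidean distances are Hamming distances and hence do form a metric:

```python
import itertools

def dsq(p, q):
    # squared Euclidean distance
    return sum((a - b) ** 2 for a, b in zip(p, q))

def is_negative_type_semimetric(pts):
    # check the triangle inequality for the *squared* distances
    return all(dsq(a, c) <= dsq(a, b) + dsq(b, c)
               for a, b, c in itertools.permutations(pts, 3))

# Collinear points 0, 1, 2 on a line: dsq(0,2) = 4 > 1 + 1, so squared
# Euclidean distances do NOT satisfy the triangle inequality in general...
print(is_negative_type_semimetric([(0,), (1,), (2,)]))   # False

# ...but for 0/1 vectors they equal the Hamming distance, which does:
cube = list(itertools.product((0, 1), repeat=3))
print(is_negative_type_semimetric(cube))                 # True
```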

After these preliminary observations, our goal is to prove the following theorem.

Theorem 2 (Rounding of ARV — Revisited) If $d(\cdot,\cdot)$ is a semimetric of negative type over a set $V$, $|V| = n$, then there is a set $S \subseteq V$ such that if we define

$$ f(v) := d(v,S) = \min_{s \in S} d(v,s) $$

we have

$$ \sum_{u,v} | f(u) - f(v) | \;\geq\; \frac 1 {O(\sqrt{\log n})} \cdot \sum_{u,v} d(u,v) $$

Furthermore, the set $S$ can be found in randomized polynomial time with high probability, given a set of vectors $\{ x_v \}_{v \in V}$ such that $d(u,v) = \| x_u - x_v \|^2$.

Since the statement is scale-invariant, we can restrict ourselves, with no loss of generality, to the case $\sum_{u,v} d(u,v) = n^2$, that is, to the case in which the average pairwise distance is 1.

Remark 1 Let us discuss some intuition before continuing with the proof.

As our experience proving Bourgain’s embedding theorem shows us, it is rather difficult to pick sets $S$ such that $\sum_{u,v} |d(u,S) - d(v,S)|$ is not much smaller than $\sum_{u,v} d(u,v)$. Here we have a somewhat simpler case to solve because we are not trying to preserve all distances, but only the average pairwise distance. A simple observation is that if we find a set $S$ which contains $\Omega(n)$ elements and such that $\Omega(n)$ elements of $V$ are at distance $\Omega(\ell)$ from $S$, then we immediately get $\sum_{u,v} |d(u,S) - d(v,S)| \geq \Omega(\ell \cdot n^2)$, because there will be $\Omega(n^2)$ pairs $u,v$ such that $d(u,S) = 0$ and $d(v,S) \geq \Omega(\ell)$. In particular, if we could find such a set with $\ell = \Omega(1)$ then we would be done. Unfortunately this is too much to ask for in general, because we always have $|d(u,S) - d(v,S)| \leq d(u,v)$, which means that if we want to have $\Omega(n^2)$ noticeably large terms $|d(u,S) - d(v,S)|$ we must also have that $d(u,v)$ is noticeably large for $\Omega(n^2)$ pairs of points, which is not always true.

There is, however, the following argument, which goes back to Leighton and Rao: either there are $\Omega(n)$ points concentrated in a ball whose radius is a quarter (say) of the average pairwise distance, and then we can use that ball to get an $\ell_1$ mapping with only constant error; or there are $\Omega(n)$ points in a ball of radius twice the average pairwise distance, such that the pairwise distances of the points in the ball account for a constant fraction of all pairwise distances. In particular, the sum of pairwise distances includes $\Omega(n^2)$ terms which are $\Omega(1)$.

After we do this reduction and some scaling, we are left with the task of proving the following theorem: suppose we are given an $n$-point negative type metric in which the points are contained in a ball of radius 1 and are such that the sum of pairwise distances is $\Omega(n^2)$; then there is a subset $S$ of size $\Omega(n)$ such that there are $\Omega(n)$ points whose distance from the set is $\Omega(1/\sqrt{\log n})$. This theorem is the main result of the Arora-Rao-Vazirani paper. (Strictly speaking, this form of the theorem was proved later by Lee; Arora, Rao and Vazirani had a slightly weaker formulation.)

We begin by considering the case in which a constant fraction of the points are concentrated in a small ball.

Definition 3 (Ball) For a point $z \in V$ and a radius $r > 0$, the ball of radius $r$ and center $z$ is the set

$$ B(z,r) := \{ v \,:\, d(z,v) \leq r \} $$

Lemma 4 For every vertex $z$, if we define $S := B(z, 1/4)$, then

$$ \sum_{u,v} | d(u,S) - d(v,S) | \;\geq\; |S| \cdot \frac n2 $$

*Proof:* Our first calculation is to show that the typical value of $d(v,S)$ is rather large. We note that for every two vertices $u$ and $v$, if we call $a$ a closest vertex in $S$ to $u$, and $b$ a closest vertex to $v$ in $S$, we have

$$ d(u,v) \;\leq\; d(u,a) + d(a,z) + d(z,b) + d(b,v) \;\leq\; d(u,S) + d(v,S) + \frac 12 $$

and so

$$ n^2 \;=\; \sum_{u,v} d(u,v) \;\leq\; \sum_{u,v} \left( d(u,S) + d(v,S) + \frac 12 \right) \;=\; \frac {n^2}2 + 2n \sum_v d(v,S) $$

that is,

$$ \sum_v d(v,S) \;\geq\; \frac n4 $$

Now we can get a lower bound to the sum of distances given by the embedding $f(v) := d(v,S)$: recalling that $f(u) = 0$ for every $u \in S$,

$$ \sum_{u,v} | f(u) - f(v) | \;\geq\; 2 \sum_{u \in S} \sum_{v \in V} f(v) \;=\; 2 |S| \sum_v d(v,S) \;\geq\; |S| \cdot \frac n2 $$

This means that if there is a vertex $z$ such that $|B(z,1/4)| \geq \Omega(n)$, or even $|B(z,1/4)| \geq n / O(\sqrt{\log n})$, then we are done.
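As a numerical sanity check of Lemma 4 (again an illustration of mine, not part of the lecture), we can sample random 0/1 vectors, scale the resulting Hamming distances so that $\sum_{u,v} d(u,v) = n^2$ over ordered pairs, and verify the bound for every choice of the center $z$. For random points the ball $B(z,1/4)$ typically contains only $z$ itself, but the lemma holds regardless:

```python
import random

random.seed(1)

# Random 0/1 vectors; squared Euclidean = Hamming, a negative-type metric.
n, k = 30, 16
pts = [tuple(random.randint(0, 1) for _ in range(k)) for _ in range(n)]

raw = lambda u, v: sum((a - b) ** 2 for a, b in zip(pts[u], pts[v]))
total = sum(raw(u, v) for u in range(n) for v in range(n))
scale = n * n / total          # normalize so the sum over ordered pairs is n^2
d = lambda u, v: raw(u, v) * scale

ball = lambda z, r: [v for v in range(n) if d(z, v) <= r]
dist_to = lambda v, S: min(d(v, s) for s in S)

# Lemma 4: for any center z and S = B(z, 1/4),
# sum_{u,v} |d(u,S) - d(v,S)| >= |S| * n / 2.
for z in range(n):
    S = ball(z, 1 / 4)
    lhs = sum(abs(dist_to(u, S) - dist_to(v, S))
              for u in range(n) for v in range(n))
    assert lhs >= len(S) * n / 2 - 1e-9   # small slack for float rounding
```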

Otherwise, we will find a set of $\Omega(n)$ vertices such that their average pairwise distance is within a constant factor of their maximum pairwise distance, and then we will work on finding an embedding for such a set of points. (The condition that the average distance is a constant fraction of the maximal distance will be very helpful in subsequent calculations.)

Lemma 5 Suppose that for every vertex $z$ we have $|B(z,1/4)| \leq n/4$. Then there is a vertex $w$ such that, if we set $S := B(w,2)$, we have

$$ |S| \;\geq\; \frac n2 $$

and

$$ \sum_{u,v \in S} d(u,v) \;\geq\; \frac {n^2}{32} $$

*Proof:* Let $w$ be a vertex that maximizes $|B(w,2)|$; then $|B(w,2)| \geq n/2$, because if we had $|B(z,2)| < n/2$ for every vertex $z$, then we would have

$$ n^2 \;=\; \sum_{u,v} d(u,v) \;\geq\; \sum_u \sum_{v \notin B(u,2)} d(u,v) \;>\; n \cdot \frac n2 \cdot 2 \;=\; n^2 $$

which is a contradiction.

Regarding the sum of pairwise distances of elements of $S$, we have

$$ \sum_{u,v \in S} d(u,v) \;\geq\; \sum_{u \in S} \; \sum_{v \in S - B(u,1/4)} d(u,v) \;\geq\; |S| \cdot \left( |S| - \frac n4 \right) \cdot \frac 14 \;\geq\; \frac n2 \cdot \frac n4 \cdot \frac 14 \;=\; \frac {n^2}{32} $$
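The proof strategy can also be sanity-checked numerically (an illustration of mine, under the same scaling $\sum_{u,v} d(u,v) = n^2$): for random 0/1 vectors the pairwise distances concentrate around the average, so every ball $B(z,1/4)$ is tiny and the hypothesis of Lemma 5 holds, while the ball $S = B(w,2)$ around the maximizing $w$ captures essentially everything:

```python
import random

random.seed(2)

n, k = 40, 32
pts = [tuple(random.randint(0, 1) for _ in range(k)) for _ in range(n)]

raw = lambda u, v: sum((a - b) ** 2 for a, b in zip(pts[u], pts[v]))
total = sum(raw(u, v) for u in range(n) for v in range(n))
d = lambda u, v: raw(u, v) * n * n / total   # sum over ordered pairs = n^2

ball = lambda z, r: [v for v in range(n) if d(z, v) <= r]

# Hypothesis of Lemma 5: every radius-1/4 ball is small.  (For random
# hypercube points the pairwise distances concentrate around the average,
# so each B(z, 1/4) is essentially {z} and the hypothesis holds.)
assert all(len(ball(z, 1 / 4)) <= n / 4 for z in range(n))

# Pick w maximizing |B(w, 2)| and set S = B(w, 2), as in the proof.
w = max(range(n), key=lambda z: len(ball(z, 2)))
S = ball(w, 2)

assert len(S) >= n / 2                                  # |S| >= n/2
assert sum(d(u, v) for u in S for v in S) >= n * n / 32 - 1e-9
```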

The proof of the main theorem now reduces to proving the following geometric fact.

Theorem 6 Let $d(\cdot,\cdot)$ be a negative-type semimetric over a set $V$ of size $n$ such that the points are contained in a unit ball and have constant average distance, that is,

- there is a vertex $z$ such that $d(z,v) \leq 1$ for every $v \in V$;
- $\sum_{u,v} d(u,v) \geq c \cdot n^2$, for a constant $c > 0$.

Then there are sets $S, T \subseteq V$ such that

- $|S| \geq \Omega(n)$ and $|T| \geq \Omega(n)$;
- for every $u \in S$ and every $v \in T$, $d(u,v) \geq \Omega\left( \frac 1 {\sqrt{\log n}} \right)$

where the multiplicative factors hidden in the $\Omega(\cdot)$ notations depend only on $c$.

## 3 comments


March 15, 2011 at 1:24 pm

ss: Horribly selfish request: could we get the rest of the analysis of ARV?

March 15, 2011 at 2:13 pm

luca: it’s coming

March 28, 2014 at 5:49 am

Nb: Hi, in this research paper there is one step in which we prove:

“There is a randomized polynomial time algorithm that finds with high probability a cut that is c’-balanced and has size O(W*sqrt(log n)).”

Can you please explain how

W >= the integral over s = 0 to delta of |E_s| ds

holds, and how the algorithm produces the result with probability 1/2?