In which we continue the analysis of the ARV rounding algorithm
We are continuing the analysis of the Arora-Rao-Vazirani rounding algorithm, which rounds a Semidefinite Programming solution of a relaxation of sparsest cut into an actual cut, with an approximation ratio of $O(\sqrt{\log n})$.
In previous lectures, we reduced the analysis of the algorithm to the following claim.
Lemma 1 Suppose that, for constants $c_1, c_2 > 0$ and a parameter $\ell > 0$, we have that there is a probability $\geq c_1$ that there are at least $c_2 n$ disjoint pairs $(i,j)$ such that $d(i,j) \leq \delta$ and $\langle {\bf g}, {\bf x}_i - {\bf x}_j \rangle \geq \ell$.
Then there is a constant $c_3$, that depends only on $c_1$ and $c_2$, such that
$\ell \leq c_3 \cdot \sqrt{\delta} \cdot (\log n)^{1/4}$
1. Concentration of Measure
In the last lecture, we already introduced two useful properties of Gaussian distributions: a Gaussian random variable has a small probability of being much smaller than its standard deviation in absolute value, and a very small probability of being much larger than its standard deviation in absolute value. Here we introduce a third property of a somewhat different flavor.
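As a quick numerical sanity check of the first two properties (a sketch, not part of the notes), both tail probabilities of a standard Gaussian can be computed in closed form via the error function:

```python
import math

def small_ball(eps):
    # P[|g| <= eps] for g ~ N(0,1); equals erf(eps / sqrt(2))
    return math.erf(eps / math.sqrt(2))

def upper_tail(t):
    # P[|g| >= t] for g ~ N(0,1); equals erfc(t / sqrt(2))
    return math.erfc(t / math.sqrt(2))

# Small probability of being much smaller than the standard deviation:
# the probability vanishes only linearly in eps.
for eps in (0.5, 0.1, 0.01):
    assert small_ball(eps) <= eps * math.sqrt(2 / math.pi)

# Very small probability of being much larger than the standard deviation:
# the probability decays like exp(-t^2 / 2).
for t in (1.0, 2.0, 4.0):
    assert upper_tail(t) <= math.exp(-t * t / 2)
```

So a standard Gaussian falls within a small multiple $\epsilon$ of its standard deviation with probability only $O(\epsilon)$, but exceeds $t$ standard deviations with probability at most $e^{-t^2/2}$.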
For a set $A \subseteq {\mathbb R}^d$ and a distance parameter $t > 0$, define
$A_t := \{ {\bf x} : \exists {\bf y} \in A. \ || {\bf x} - {\bf y} || \leq t \}$
the set of points at distance at most $t$ from $A$. Then we have:
Theorem 2 (Gaussian concentration of measure) There is a constant $c$ such that, for every $\epsilon > 0$ and for every set $A \subseteq {\mathbb R}^d$, if $\mathbb{P}(A) \geq \epsilon$, then
$\mathbb{P}(A_t) \geq 1 - \epsilon$
for every $t \geq c \cdot \sqrt{\log \frac 1 \epsilon}$, where the probabilities are taken according to the Gaussian measure in ${\mathbb R}^d$, that is, $\mathbb{P}(A) = \mathbb{P}[ {\bf g} \in A ]$, where ${\bf g} = (g_1,\ldots,g_d)$ and the $g_i$ are independent Gaussians of mean 0 and variance 1.
The above theorem says that if we have some property that is true with probability $\epsilon$ for a random Gaussian vector, then there is a probability $\geq 1 - \epsilon$ that a random Gaussian vector ${\bf g}$ is within distance $c\sqrt{\log \frac 1 \epsilon}$ of a vector that satisfies the required property. In high dimension $d$, this is a non-trivial statement because, with very high probability, $|| {\bf g} ||$ is about $\sqrt d$, and so the distance between ${\bf g}$ and the set $A$ is small relative to the length of the vector.
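To see the "small relative to the length" point concretely, here is a small numerical sketch (the dimension and sample count are arbitrary choices, not from the text) of how tightly $||{\bf g}||$ concentrates around $\sqrt d$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000

# 100 independent standard Gaussian vectors in R^d
G = rng.standard_normal((100, d))
norms = np.linalg.norm(G, axis=1)

# E[||g||^2] = d and ||g|| fluctuates by only O(1), so the relative
# deviation of ||g|| from sqrt(d) is tiny in high dimension.
rel_dev = np.abs(norms / np.sqrt(d) - 1.0)
assert rel_dev.max() < 0.05
```

Hence moving ${\bf g}$ by a constant distance, as in the theorem, changes it only by a vanishing fraction of its length.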
We will use the following corollary.
Corollary 3 Let ${\bf x}_1,\ldots,{\bf x}_n$ be vectors in ${\mathbb R}^d$ and let $r := \max_i || {\bf x}_i ||$. Let ${\bf g}$ be a random Gaussian vector in ${\mathbb R}^d$, and let $Y := \max_i \langle {\bf g}, {\bf x}_i \rangle$. If, for some $m$ and $\epsilon > 0$, we have
$\mathbb{P} [ Y \leq m ] \geq \epsilon$
then
$\mathbb{P} \left[ Y \geq m + c \cdot r \cdot \sqrt{\log \frac 1 \epsilon} \right] \leq \epsilon$
where $c$ is the constant of Theorem 2.
Proof: Define the set $A := \{ {\bf y} : \max_i \langle {\bf y}, {\bf x}_i \rangle \leq m \}$. By assumption, we have $\mathbb{P}(A) \geq \epsilon$, and so, by concentration of measure,
$\mathbb{P}(A_t) \geq 1 - \epsilon \ \ \mbox{for} \ t = c \sqrt{\log \frac 1 \epsilon}$
The event ${\bf g} \in A_t$ in the above probability can be rewritten as
$\exists {\bf y}. \ || {\bf g} - {\bf y} || \leq t \ \wedge \ \max_i \langle {\bf y}, {\bf x}_i \rangle \leq m$
and, since by Cauchy-Schwarz $\langle {\bf g}, {\bf x}_i \rangle \leq \langle {\bf y}, {\bf x}_i \rangle + r \cdot || {\bf g} - {\bf y} ||$ for every $i$, the above condition gives us
$\max_i \langle {\bf g}, {\bf x}_i \rangle \leq m + r \cdot t$
$\Box$
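The step above rests on the fact that ${\bf g} \mapsto \max_i \langle {\bf g}, {\bf x}_i \rangle$ changes by at most $r \cdot || {\bf g} - {\bf y} ||$ when ${\bf g}$ is moved to ${\bf y}$, i.e., it is $r$-Lipschitz. A quick numerical check of this Lipschitz property (with made-up random data):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 50, 200

# n arbitrary vectors; r bounds their norms
X = rng.standard_normal((n, d))
r = float(np.linalg.norm(X, axis=1).max())

for _ in range(1000):
    g = rng.standard_normal(d)
    y = g + 0.1 * rng.standard_normal(d)   # a nearby vector
    lhs = abs(np.max(X @ g) - np.max(X @ y))
    # |max_i <g, x_i> - max_i <y, x_i>| <= r * ||g - y|| by Cauchy-Schwarz
    assert lhs <= r * np.linalg.norm(g - y) + 1e-9
```

The inequality holds for every pair of vectors, not just random ones; the random trials are only an illustration.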
The use of the above statement is by far the most innovative part of the analysis of Arora, Rao and Vazirani, so it is worth developing an intuitive feeling for its meaning.
Let’s say that we are interested in the distribution of $Y = \max_i \langle {\bf g}, {\bf x}_i \rangle$. We know that the random variables $\langle {\bf g}, {\bf x}_i \rangle$ are Gaussians of mean 0 and standard deviation at most $r = \max_i || {\bf x}_i ||$, but it is impossible to say anything about, say, the average value or the median value of $Y$ without knowing something about the correlation of the random variables $\langle {\bf g}, {\bf x}_i \rangle$.
Interestingly, the above Corollary says something about the concentration of $Y$ without any additional information. The corollary says that, for example, the first percentile of $Y$ and the 99-th percentile of $Y$ differ by at most $c \cdot r \cdot \sqrt{\log 100}$, and that we have a concentration result of the form
$\mathbb{P} \left[ \, | Y - {\rm median}(Y) | \geq t \cdot r \, \right] \leq 2 e^{-t^2/c^2}$
which is a highly non-trivial statement for any configuration of vectors ${\bf x}_1,\ldots,{\bf x}_n$ for which the median of $Y$ is much bigger than $r$.
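For instance (an illustrative numerical sketch with invented parameters, not from the notes): for $n$ random unit vectors the typical value of $\max_i \langle {\bf g}, {\bf x}_i \rangle$ grows like $\sqrt{2 \ln n}$, while the spread between its 1st and 99th percentiles stays bounded:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, trials = 400, 1000, 1000

# n random unit vectors, so each <g, x_i> is a standard Gaussian
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# For each trial, a fresh Gaussian g and the value max_i <g, x_i>
G = rng.standard_normal((d, trials))
maxima = (X @ G).max(axis=0)

med = float(np.median(maxima))
spread = float(np.percentile(maxima, 99) - np.percentile(maxima, 1))

# The typical value is around sqrt(2 ln n) ~ 3.7, but the fluctuation
# around it is O(1), independently of n and of the correlations:
assert med > 2.0
assert spread < med
```

The same bounded spread would be observed for any other configuration of vectors of norm at most 1, however correlated; only the location of the median changes.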
2. Reworking the Assumption
Lemma 4 Under the assumptions of Lemma 1, there is a fixed set $V'$, $|V'| \geq c_1 c_2 n$, and, for every ${\bf g}$, a set $M({\bf g})$ of disjoint pairs of elements of $V'$, dependent on ${\bf g}$, such that, for every ${\bf g}$ and for every pair $(i,j) \in M({\bf g})$ we have
$d(i,j) \leq \delta \ \wedge \ \langle {\bf g}, {\bf x}_i - {\bf x}_j \rangle \geq \ell$
and for all $i \in V'$ we have
$\mathbb{P}_{\bf g} \left[ \, i \mbox{ is matched in } M({\bf g}) \, \right] \geq \frac{c_1 c_2}{2}$
Proof: For every ${\bf g}$, let $M({\bf g})$ be the set of disjoint pairs promised by the assumptions of Lemma 1 (take $M({\bf g}) = \emptyset$ for the ${\bf g}$ for which no such set exists). Construct a weighted graph $G = (V,E)$, where the weight of the edge $(i,j)$ is $\mathbb{P}_{\bf g} [ (i,j) \in M({\bf g}) ]$. Because the pairs in $M({\bf g})$ are disjoint, the degree of every vertex is at most 1, and the sum of the degrees is twice the expectation of $|M({\bf g})|$, and so, by the assumptions of Lemma 1, it is at least $2 c_1 c_2 n$.

Now, repeatedly delete from $G$ all vertices of degree at most $\frac{c_1 c_2}{2}$, and all the edges incident on them, until no such vertex remains. At the end we are left with a (possibly empty!) graph in which all remaining vertices have degree greater than $\frac{c_1 c_2}{2}$; each deletion reduces the sum of the degrees by at most $c_1 c_2$, and so the residual graph has total degree at least $c_1 c_2 n$, and hence at least $c_1 c_2 n$ vertices, because every degree is at most 1. We let $V'$ be the set of remaining vertices; the degree of a vertex $i$ is exactly the probability that $i$ is matched in $M({\bf g})$, which gives the second property. $\Box$
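The deletion argument is easy to simulate. The sketch below (with invented weights; the threshold `tau` plays the role of the constant in the proof) checks the counting: if all degrees are at most 1 and the total degree is $D$, then pruning at threshold $\tau$ leaves at least $D - 2 \tau n$ vertices:

```python
import numpy as np

rng = np.random.default_rng(3)
n, tau = 500, 0.1

# A random symmetric weighted graph, rescaled so that every degree is <= 1
W = rng.random((n, n)) * (rng.random((n, n)) < 0.01)
W = np.triu(W, 1)
W = W + W.T
W /= max(W.sum(axis=1).max(), 1.0)

D = W.sum()  # sum of all (weighted) degrees

# Repeatedly delete vertices of degree <= tau, with their incident edges
alive = np.ones(n, dtype=bool)
while True:
    deg = W[alive][:, alive].sum(axis=1)
    low = deg <= tau
    if not low.any():
        break
    alive[np.flatnonzero(alive)[low]] = False

# Each deletion removes at most 2*tau from the total degree, so the residual
# graph has total degree >= D - 2*tau*n; since every degree is <= 1, it has
# at least that many vertices (the bound can be vacuous if D is small).
assert alive.sum() >= D - 2 * tau * n
# And every surviving vertex now has degree greater than tau.
assert alive.sum() == 0 or (W[alive][:, alive].sum(axis=1) > tau).all()
```

Note that the guarantee is about the counting only; for some weight matrices the pruning may legitimately delete everything.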
By the above Lemma, the following result implies Lemma 1 and hence the ARV Main Lemma.
Lemma 5 Let $d(\cdot,\cdot)$ be a semi-metric over a set $V$ such that $d(i,j) \leq \delta$ for all $i,j \in V$, let $\{ {\bf x}_i \}_{i \in V}$ be a collection of vectors in ${\mathbb R}^m$, such that $|| {\bf x}_i - {\bf x}_j ||^2$ is a semimetric, let ${\bf g}$ be a random Gaussian vector in ${\mathbb R}^m$, define $Y_i := \langle {\bf g}, {\bf x}_i \rangle$, and suppose that, for every ${\bf g}$, we can define a set $M({\bf g})$ of disjoint pairs such that, with probability 1 over ${\bf g}$,