In which we introduce semi-definite programming and a semi-definite programming relaxation of sparsest cut, and we reduce its analysis to a key lemma that we will prove in the next lecture(s)
1. The Goemans-Linial Relaxation
Recall that, for two undirected graphs $G = (V,E_G)$, $H = (V,E_H)$, the sparsest cut problem is to optimize

$$ sc(G,H) := \min_{S\subseteq V} \frac{\sum_{\{u,v\}\in E_G} |1_S(u)-1_S(v)|}{\sum_{\{u,v\}\in E_H} |1_S(u)-1_S(v)|} $$

and the Leighton-Rao relaxation is obtained by noting that if we define $d(u,v) := |1_S(u)-1_S(v)|$ then $d(\cdot,\cdot)$ is a semimetric over $V$, meaning that the following quantity is a relaxation of $sc(G,H)$:

$$ LR(G,H) := \min_{d\ {\rm semimetric\ over}\ V} \frac{\sum_{\{u,v\}\in E_G} d(u,v)}{\sum_{\{u,v\}\in E_H} d(u,v)} $$

If $G$ is $r$-regular, $H$ is a clique, and $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$ are the eigenvalues of the normalized Laplacian of $G$, then

$$ \frac rn \lambda_2 = \min_{x\in {\mathbb R}^V} \frac{\sum_{\{u,v\}\in E_G} |x_u-x_v|^2}{\sum_{\{u,v\}\in E_{K_n}} |x_u-x_v|^2} \qquad (1) $$

which is a relaxation of $sc(G,K_n)$, because, for every $S$, every $u$ and every $v$, $|1_S(u)-1_S(v)| = |1_S(u)-1_S(v)|^2$.
We note that if we further relax (1) by allowing $V$ to be mapped into a higher dimensional space ${\mathbb R}^m$ instead of ${\mathbb R}$, and we replace $|x_u - x_v|^2$ by $\|f(u)-f(v)\|^2$, the optimum remains the same.
Fact 1

$$ \frac rn \lambda_2 = \min_{m,\ f: V\rightarrow {\mathbb R}^m} \frac{\sum_{\{u,v\}\in E_G} \|f(u)-f(v)\|^2}{\sum_{\{u,v\}\in E_{K_n}} \|f(u)-f(v)\|^2} $$
Proof: For every $f: V\rightarrow {\mathbb R}^m$, if we write $f(v) = (f_1(v),\ldots,f_m(v))$, we have

$$ \frac{\sum_{\{u,v\}\in E_G} \|f(u)-f(v)\|^2}{\sum_{\{u,v\}\in E_{K_n}} \|f(u)-f(v)\|^2} = \frac{\sum_{i=1}^m \sum_{\{u,v\}\in E_G} |f_i(u)-f_i(v)|^2}{\sum_{i=1}^m \sum_{\{u,v\}\in E_{K_n}} |f_i(u)-f_i(v)|^2} \ge \min_i \frac{\sum_{\{u,v\}\in E_G} |f_i(u)-f_i(v)|^2}{\sum_{\{u,v\}\in E_{K_n}} |f_i(u)-f_i(v)|^2} \ge \frac rn \lambda_2 $$

where we used the fact that a ratio of sums of nonnegative terms is at least the minimum of the term-by-term ratios. $\Box$
The above observations give the following comparison between the Leighton-Rao relaxation and the spectral relaxation: both are obtained by replacing $|1_S(u)-1_S(v)|$ with a "distance function" $d(u,v)$; in the Leighton-Rao relaxation, $d(\cdot,\cdot)$ is constrained to satisfy the triangle inequality; in the spectral relaxation, $d(\cdot,\cdot)$ is constrained to be the square of the Euclidean distance between $f(u)$ and $f(v)$ for some mapping $f: V\rightarrow {\mathbb R}^m$.
The Arora-Rao-Vazirani relaxation is obtained by enforcing both conditions, that is, by considering distance functions that satisfy the triangle inequality and can be realized as $d(u,v) = \|f(u)-f(v)\|^2$ for some mapping $f: V\rightarrow {\mathbb R}^m$.
Definition 2 A semimetric $d: V\times V \rightarrow {\mathbb R}$ is called of negative type if there is a dimension $m$ and a mapping $f: V\rightarrow {\mathbb R}^m$ such that $d(u,v) = \|f(u)-f(v)\|^2$ for every $u,v\in V$.
With the above definition, we can formulate the Goemans-Linial relaxation as

$$ ARV(G,H) := \min_{d\ {\rm semimetric\ of\ negative\ type\ over}\ V} \frac{\sum_{\{u,v\}\in E_G} d(u,v)}{\sum_{\{u,v\}\in E_H} d(u,v)} \qquad (2) $$
Remark 1 The relaxation (2) was first proposed by Goemans and Linial. Arora, Rao and Vazirani were the first to prove that it achieves an approximation guarantee which is better than the approximation guarantee of the Leighton-Rao relaxation.
We have, by definition,

$$ LR(G,H) \le ARV(G,H) \le sc(G,H) $$

and, when $H$ is a clique and $G$ is $r$-regular,

$$ \frac rn \lambda_2 \le ARV(G,K_n) $$

because the feasible solutions of (2) are a subset of the feasible solutions of the spectral relaxation of Fact 1, and so the approximation results that we have proved for $\lambda_2$ and $LR$ apply to $ARV$. For all graphs $G$ and $H$:

$$ sc(G,H) \le O(\log n) \cdot ARV(G,H) $$

and for every $r$-regular graph $G$:

$$ \frac nr \cdot sc(G,K_n) \le \sqrt{8\cdot \frac nr \cdot ARV(G,K_n)} $$
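The second bound is just Cheeger's inequality applied on top of the spectral bound above; here is a sketch of the derivation, in the notation of the earlier lectures. If $S$ is the cut produced by the proof of Cheeger's inequality, so that $|S|\le \frac n2$ and $\frac{cut(S)}{r|S|} \le \sqrt{2\lambda_2}$, then

$$ sc(G,K_n) \le \frac{cut(S)}{|S|\cdot (n-|S|)} \le \frac{2r}{n}\cdot \frac{cut(S)}{r|S|} \le \frac{2r}{n}\cdot \sqrt{2\lambda_2} \le \frac{2r}{n}\cdot \sqrt{2\cdot \frac nr \cdot ARV(G,K_n)} $$

and multiplying both sides by $\frac nr$ gives the stated inequality.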
Interestingly, known examples of graphs for which $\frac rn \lambda_2$ and $LR$ give poor approximations are complementary. When $H$ is a clique, if $G$ is a cycle, then $\frac rn \lambda_2$ is a poor approximation of $sc(G,K_n)$, but $LR(G,K_n)$ is a good approximation of $sc(G,K_n)$; if $G$ is a constant-degree expander then $LR(G,K_n)$ is a poor approximation of $sc(G,K_n)$, but $\frac rn \lambda_2$ is a good approximation.
When Goemans and Linial (separately) proposed to study the relaxation (2), they conjectured that it would always provide a constant-factor approximation of $sc(G,H)$. Unfortunately, the conjecture turned out to be false, but Arora, Rao and Vazirani were able to prove that (2) does provide a strictly better approximation than the Leighton-Rao relaxation. In the next lectures, we will present parts of the proof of the following results.
Theorem 3 There is a constant $c$ such that, for every graph $G = (V,E)$,

$$ sc(G,K_n) \le c\cdot \sqrt{\log |V|} \cdot ARV(G,K_n) $$
Theorem 4 There is a constant $c$ such that, for all graphs $G$, $H$,

$$ sc(G,H) \le c\cdot \sqrt{\log |V|}\cdot \log\log |V| \cdot ARV(G,H) $$
Theorem 5 There is a constant $c$ and an infinite family of graphs $G_n = (V_n, E_n)$ such that

$$ sc(G_n, K_{|V_n|}) \ge c\cdot \log\log |V_n| \cdot ARV(G_n, K_{|V_n|}) $$
Theorem 6 There are families of graphs $G_n$ and $H_n$ such that, for every $\epsilon > 0$ and every sufficiently large $n$,

$$ sc(G_n,H_n) \ge c(\epsilon)\cdot (\log |V_n|)^{\frac 12 - \epsilon} \cdot ARV(G_n,H_n) $$
2. Polynomial Time Solvability
In this section we show that the Ellipsoid algorithm can compute $ARV(G,H)$ in polynomial time.
Definition 7 If ${\cal C} \subseteq {\mathbb R}^m$ is a set, then a separation oracle for ${\cal C}$ is a procedure that, on input $x\in {\mathbb R}^m$:

- If $x\in {\cal C}$, outputs "yes";
- If $x\notin {\cal C}$, outputs coefficients $a_1,\ldots,a_m,b$ such that

$$ \sum_i a_i x_i < b $$

but, for every $z\in {\cal C}$,

$$ \sum_i a_i z_i \ge b $$
Note that a set can have a separation oracle only if it is convex. Under certain additional mild conditions, if ${\cal C}$ has a polynomial time computable separation oracle, then the optimization problem

$$ \begin{array}{ll} {\rm minimize} & \sum_i c_i x_i\\ {\rm subject\ to} & x \in {\cal C} \end{array} \qquad (3) $$

is solvable in polynomial time using the Ellipsoid Algorithm.
It remains to see how to put the Arora-Rao-Vazirani relaxation into the above form.
Recall that a symmetric matrix $M$ is positive semidefinite if all its eigenvalues are nonnegative. We will use the set of all $n\times n$ positive semidefinite matrices as our set ${\cal C}$ (thinking of them as $n^2$-dimensional vectors). If we think of two matrices $M,M' \in {\mathbb R}^{n\times n}$ as $n^2$-dimensional vectors, then their "inner product" is

$$ M \bullet M' := \sum_{i,j} M_{i,j} \cdot M'_{i,j} $$
Lemma 8 The set of $n\times n$ positive semidefinite matrices has a separation oracle computable in time polynomial in $n$.
Proof: Given a symmetric matrix $M$, its smallest eigenvalue is

$$ \min_{x:\ \|x\|=1} x^T M x \, ; $$

the vector achieving the minimum is a corresponding eigenvector, and both the smallest eigenvalue and the corresponding eigenvector can be computed in polynomial time.

If we find that the smallest eigenvalue of $M$ is non-negative, then we answer "yes." Otherwise, if $x$ is an eigenvector of the smallest eigenvalue we output the matrix $A := xx^T$. We see that we have

$$ A \bullet M = x^T M x < 0 $$

but that, for every positive semidefinite matrix $M'$, we have

$$ A \bullet M' = x^T M' x \ge 0 $$

$\Box$
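In code, the oracle of Lemma 8 might look as follows; this is a minimal numpy sketch (the function name and tolerance are illustrative choices, and the finite-precision issues are exactly the ones discussed in Remark 2 below).

    import numpy as np

    def psd_separation_oracle(M, tol=1e-9):
        # Separation oracle for the cone of PSD matrices: return None
        # ("yes") if M is numerically PSD; otherwise return a matrix A
        # such that A . M < 0 while A . M' >= 0 for every PSD matrix M'.
        eigenvalues, eigenvectors = np.linalg.eigh(M)  # ascending order
        if eigenvalues[0] >= -tol:
            return None
        x = eigenvectors[:, 0]    # eigenvector of the smallest eigenvalue
        return np.outer(x, x)     # A = x x^T separates M from the cone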
This implies that any optimization problem of the following form can be solved in polynomial time:

$$ \begin{array}{ll} {\rm minimize} & C \bullet X\\ {\rm subject\ to}\\ & A^1 \bullet X \ge b_1\\ & \vdots\\ & A^m \bullet X \ge b_m\\ & X \succeq 0 \end{array} \qquad (4) $$

where $C, A^1,\ldots,A^m$ are square matrices of coefficients, $b_1,\ldots,b_m$ are scalars, and $X$ is a square matrix of variables (the notation $X\succeq 0$ means that $X$ is constrained to be positive semidefinite). An optimization problem like the one above is called a semidefinite program.
It remains to see how to cast the Arora-Rao-Vazirani relaxation as a semidefinite program.
Lemma 9 For a symmetric matrix $M\in {\mathbb R}^{n\times n}$, the following properties are equivalent:

1. $M$ is positive semidefinite;
2. there are vectors $x^1,\ldots,x^n \in {\mathbb R}^d$ such that, for all $i,j$, $M_{i,j} = \langle x^i, x^j \rangle$;
3. for every vector $z\in {\mathbb R}^n$, $z^T M z \ge 0$.
Proof: That (1) and (3) are equivalent follows from the characterization of the smallest eigenvalue of $M$ as the minimum of $z^T M z$ over all unit vectors $z$.

To see that (2) $\Rightarrow$ (3), suppose that vectors $x^1,\ldots,x^n$ exist as asserted in (2), and let $X$ be the matrix whose columns are the vectors $x^1,\ldots,x^n$, so that $X^T X = M$. Take any vector $z$, and see that

$$ z^T M z = z^T X^T X z = \|Xz\|^2 \ge 0 $$

Finally, to see that (1) $\Rightarrow$ (2), let $\lambda_1,\ldots,\lambda_n$ be the eigenvalues of $M$ with multiplicities, and let $v^1,\ldots,v^n$ be a corresponding orthonormal set of eigenvectors. Then

$$ M = \sum_k \lambda_k v^k (v^k)^T $$

that is,

$$ M_{i,j} = \sum_k \lambda_k v^k_i v^k_j = \langle x^i, x^j \rangle $$

if we define $x^1,\ldots,x^n$ as the vectors such that $x^i_k := \sqrt{\lambda_k}\cdot v^k_i$. $\Box$
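The proof of (1) $\Rightarrow$ (2) is constructive, and it is how one recovers the vectors $x^1,\ldots,x^n$ from a matrix returned by an SDP solver. A short numpy sketch (the helper name gram_vectors is illustrative; tiny negative eigenvalues introduced by floating point are clipped):

    import numpy as np

    def gram_vectors(M):
        # Given a symmetric PSD matrix M, return a matrix X whose i-th row
        # is a vector x^i with <x^i, x^j> = M[i, j] (part (2) of Lemma 9).
        eigenvalues, V = np.linalg.eigh(M)           # M = V diag(lambda) V^T
        eigenvalues = np.clip(eigenvalues, 0, None)  # remove numerical noise
        return V * np.sqrt(eigenvalues)              # x^i_k = sqrt(lambda_k) v^k_i

    M = np.array([[2.0, 1.0], [1.0, 2.0]])
    X = gram_vectors(M)
    assert np.allclose(X @ X.T, M)                   # rows realize the Gram matrix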
This means that the generic semidefinite program (4) can be rewritten as an optimization problem in which the variables are the vectors $x^1,\ldots,x^n$ as in part (2) of the above lemma:

$$ \begin{array}{ll} {\rm minimize} & \sum_{i,j} C_{i,j} \langle x^i,x^j\rangle\\ {\rm subject\ to}\\ & \sum_{i,j} A^1_{i,j} \langle x^i,x^j \rangle \ge b_1\\ & \vdots\\ & \sum_{i,j} A^m_{i,j} \langle x^i,x^j\rangle \ge b_m\\ & x^1,\ldots,x^n \in {\mathbb R}^d \end{array} $$

where the dimension $d$ is itself a variable (although one could fix it, without loss of generality, to be equal to $n$). In this view, a semidefinite program is an optimization problem in which we wish to select $n$ vectors such that their pairwise inner products satisfy certain linear inequalities, while optimizing a cost function that is linear in their pairwise inner products.
The square of the Euclidean distance between two vectors is a linear function of inner products:

$$ \|x-y\|^2 = \langle x,x \rangle - 2\langle x,y\rangle + \langle y,y\rangle $$

and so, in a semidefinite program, we can include expressions that are linear in the pairwise squared distances (or squared norms) of the vectors. The ARV relaxation can be written as follows:

$$ \begin{array}{lll} {\rm minimize} & \sum_{\{u,v\}\in E_G} \|x_u - x_v\|^2\\ {\rm subject\ to}\\ & \sum_{\{u,v\}\in E_H} \|x_u-x_v\|^2 = 1\\ & \|x_u - x_v\|^2 \le \|x_u - x_w\|^2 + \|x_w - x_v\|^2 & \forall u,v,w \in V\\ & x_u \in {\mathbb R}^d & \forall u \in V \end{array} \qquad (5) $$

and so it is a semidefinite program, and it can be solved in polynomial time.
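To make the formulation concrete, here is a sketch of (5) written with the cvxpy modeling library (the use of cvxpy, and the helper arv_sdp with its arguments, are illustrative choices rather than part of the lecture); the variable is the Gram matrix $X$ of the vectors, with $\|x_u - x_v\|^2 = X_{u,u} + X_{v,v} - 2X_{u,v}$. The $O(n^3)$ triangle inequalities make this practical only for small graphs.

    import itertools
    import cvxpy as cp

    def arv_sdp(n, edges_G, edges_H):
        # Gram matrix of the vectors x_1, ..., x_n (PSD=True also forces symmetry)
        X = cp.Variable((n, n), PSD=True)
        d = lambda u, v: X[u, u] + X[v, v] - 2 * X[u, v]   # squared distance
        constraints = [sum(d(u, v) for (u, v) in edges_H) == 1]
        # triangle inequalities: d must be a semimetric
        for u, v, w in itertools.permutations(range(n), 3):
            constraints.append(d(u, v) <= d(u, w) + d(w, v))
        problem = cp.Problem(
            cp.Minimize(sum(d(u, v) for (u, v) in edges_G)), constraints)
        problem.solve()
        return problem.value, X.value

The vectors themselves can then be recovered from X.value with the factorization sketched after Lemma 9.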
Remark 2 Our discussion of polynomial time solvability glossed over important issues about numerical precision. To run the Ellipsoid Algorithm one needs, besides the separation oracle, to be given a ball that is entirely contained in the set of feasible solutions and a ball that entirely contains the set of feasible solutions, and the running time of the algorithm is polynomial in the size of the input, polylogarithmic in the ratio of the volumes of the two balls, and polylogarithmic in the desired amount of precision. At the end, one doesn’t get an optimal solution, which might not have a finite-precision exact representation, but an approximation within the desired precision. The algorithm is able to tolerate a bounded amount of imprecision in the separation oracle, which is an important feature because we do not have exact algorithms to compute eigenvalues and eigenvectors (the entries in the eigenvector might not have a finite-precision representation).
The Ellipsoid algorithm is typically not a practical algorithm. Algorithms based on the interior point method have been adapted to semidefinite programming, and run both in worst-case polynomial time and in reasonable time in practice.
Arora and Kale have developed an $\tilde O(n^2)$ time algorithm to solve the ARV relaxation within a multiplicative error $(1+\epsilon)$. The dependence on the error is worse than that of generic algorithms, which achieve polylogarithmic dependency, but this is not a problem in this application, because we are going to lose an $O(\sqrt{\log n})$ factor in the rounding, so an extra constant factor coming from an approximate solution of the relaxation is a low-order consideration.
3. Rounding when $H$ is a clique
Given the equivalence between the sparsest cut problem and the "$\ell_1$ relaxation" of sparsest cut, it will be enough to prove the following result.
Theorem 10 (Rounding of ARV) Let $G = (V,E)$ be a graph, and $\{x_v\}_{v\in V}$ be a feasible solution of the relaxation (5).

Then there is a mapping $f: V \rightarrow {\mathbb R}$ such that

$$ \frac{\sum_{\{u,v\}\in E} |f(u)-f(v)|}{\sum_{u,v} |f(u)-f(v)|} \le O(\sqrt{\log |V|}) \cdot \frac{\sum_{\{u,v\}\in E} \|x_u - x_v\|^2}{\sum_{u,v} \|x_u - x_v\|^2} $$
As in the rounding of the Leighton-Rao relaxation via Bourgain's theorem, we will identify a set $S\subseteq V$, and define

$$ f_S(v) := \min_{s\in S} \|x_v - x_s\|^2 $$
Recall that, as we saw in the proof of Bourgain's embedding theorem, no matter how we choose the set $S$ we have, for all $u,v$,

$$ |f_S(u) - f_S(v)| \le \|x_u - x_v\|^2 $$

where we are not using any facts about $\|x_u - x_v\|^2$ other than the fact that, for solutions of the ARV relaxation, it is a distance function that obeys the triangle inequality.
This means that, in order to prove the theorem, we just have to find a set $S$ such that

$$ \sum_{u,v} |f_S(u)-f_S(v)| \ge \frac 1{O(\sqrt{\log n})} \cdot \sum_{u,v} \|x_u - x_v\|^2 $$

and this is a considerable simplification because the above expression is completely independent of the graph! The remaining problem is purely one about geometry.
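In code, the objects involved in this geometric problem are simple; the following numpy sketch (function names are, again, illustrative) computes the embedding $f_S$ and the ratio that we have to bound.

    import numpy as np

    def frechet_embedding(vectors, S):
        # vectors: an n x m array whose rows are the x_v; S: a list of indices.
        diffs = vectors[:, None, :] - vectors[None, :, :]
        d = (diffs ** 2).sum(axis=2)      # d(u, v) = ||x_u - x_v||^2
        f = d[:, list(S)].min(axis=1)     # f_S(v) = min over s in S of d(v, s)
        return d, f

    def contraction_ratio(d, f):
        # Theorem 11 below promises a set S making this at least 1/O(sqrt(log n)).
        return np.abs(f[:, None] - f[None, :]).sum() / d.sum()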
Recall that if we have a set of vectors $\{x_v\}_{v\in V}$ such that the distance function $d(u,v) := \|x_u - x_v\|^2$ satisfies the triangle inequality, then we say that $d(\cdot,\cdot)$ is a (semi-)metric of negative type.
After these preliminary observations, our goal is to prove the following theorem.
Theorem 11 (Rounding of ARV, Revisited) If $d(\cdot,\cdot)$ is a semimetric of negative type over a set $V$, then there is a set $S\subseteq V$ such that if we define

$$ f_S(v) := \min_{s\in S} d(v,s) $$

we have

$$ \sum_{u,v} |f_S(u) - f_S(v)| \ge \frac 1{O(\sqrt{\log |V|})} \cdot \sum_{u,v} d(u,v) $$

Furthermore, the set $S$ can be found in randomized polynomial time with high probability given a set of vectors $\{x_v\}_{v\in V}$ such that $d(u,v) = \|x_u - x_v\|^2$.
Since the statement is scale-invariant, we can restrict ourselves, with no loss of generality, to the case $\sum_{u,v} d(u,v) = n^2$, that is, to the case in which the average pairwise distance is 1.
Remark 3 Let us discuss some intuition before continuing with the proof.
As our experience proving Bourgain's embedding theorem shows us, it is rather difficult to pick sets $S$ such that $\sum_{u,v} |f_S(u) - f_S(v)|$ is not much smaller than $\sum_{u,v} d(u,v)$. Here we have a somewhat simpler case to solve because we are not trying to preserve all distances, but only the average pairwise distance. A simple observation is that if we find a set $S$ which contains $\Omega(n)$ elements and such that $\Omega(n)$ elements of $V$ are at distance $\ge \delta$ from $S$, then we immediately get $\sum_{u,v} |f_S(u)-f_S(v)| \ge \Omega(\delta n^2)$, because there will be $\Omega(n^2)$ pairs $u,v$ such that $f_S(u) = 0$ and $f_S(v) \ge \delta$. In particular, if we could find such a set with $\delta = \Omega(1)$ then we would be done. Unfortunately this is too much to ask for in general, because we always have $|f_S(u)-f_S(v)| \le d(u,v)$, which means that if we want $\sum_{u,v} |f_S(u)-f_S(v)|$ to have $\Omega(n^2)$ noticeably large terms we must also have that $d(u,v)$ is noticeably large for $\Omega(n^2)$ pairs of points, which is not always true.
There is, however, the following argument, which goes back to Leighton and Rao: either there are $\Omega(n)$ points concentrated in a ball whose radius is a quarter (say) of the average pairwise distance, and then we can use that ball to get an $\ell_1$ mapping with only constant error; or there are $\Omega(n)$ points in a ball of radius twice the average pairwise distance, such that the pairwise distances of the points in the ball account for a constant fraction of all pairwise distances. In particular, the sum of pairwise distances includes $\Omega(n^2)$ terms which are $\Omega(1)$.

After we do this reduction and some scaling, we are left with the task of proving the following theorem: suppose we are given an $n$-point negative type metric in which the points are contained in a ball of radius 1 and are such that the sum of pairwise distances is $\Omega(n^2)$; then there is a subset $S$ of size $\Omega(n)$ such that there are $\Omega(n)$ points whose distance from the set is $\Omega(1/\sqrt{\log n})$. This theorem is the main result of the Arora-Rao-Vazirani paper. (Strictly speaking, this form of the theorem was proved later by Lee; Arora, Rao and Vazirani had a slightly weaker formulation.)
We begin by considering the case in which a constant fraction of the points are concentrated in a small ball.
Definition 12 (Ball) For a point $u\in V$ and a radius $r > 0$, the ball of radius $r$ and center $u$ is the set

$$ B(u,r) := \{ v\, :\, d(u,v) \le r \} $$
Lemma 13 For every vertex $u$, if we define $S := B(u, 1/4)$, then

$$ \sum_{v,w} |f_S(v) - f_S(w)| \ge \frac{|S|\cdot n}{4} $$
Proof: Our first calculation is to show that the typical value of $f_S(w)$ is rather large. We note that for every two vertices $v$ and $w$, if we call $v'$ a closest vertex in $S$ to $v$, and $w'$ a closest vertex to $w$ in $S$, we have

$$ d(v,w) \le d(v,v') + d(v',w') + d(w',w) \le f_S(v) + f_S(w) + \frac 12 $$

where we used the fact that $v',w' \in B(u,1/4)$ implies $d(v',w') \le \frac 12$, and so

$$ n^2 = \sum_{v,w} d(v,w) \le \frac{n^2}2 + 2n\cdot \sum_v f_S(v) $$

that is,

$$ \sum_v f_S(v) \ge \frac n4 $$

Now we can get a lower bound to the sum of distances given by the embedding $f_S$:

$$ \sum_{v,w} |f_S(v)-f_S(w)| \ge \sum_{v\in S} \sum_w |f_S(v) - f_S(w)| = |S|\cdot \sum_w f_S(w) \ge \frac{|S|\cdot n}4 $$

where we used the fact that $f_S(v) = 0$ for every $v\in S$. $\Box$
This means that if there is a vertex $u$ such that $|B(u,1/4)| \ge \frac n4$, or even $|B(u,1/4)| \ge \frac n{O(\sqrt{\log n})}$, then we are done, because in that case $\sum_{v,w} |f_S(v)-f_S(w)| \ge \frac {n^2}{O(\sqrt{\log n})} = \frac 1{O(\sqrt{\log n})}\cdot \sum_{v,w} d(v,w)$.
Otherwise, we will find a set of vertices such that their average pairwise distances are within a constant factor of their maximum pairwise distances, and then we will work on finding an embedding for such a set of points. (The condition that the average distance is a constant fraction of the maximal distance will be very helpful in subsequent calculations.)
Lemma 14 Suppose that for every vertex $u$ we have $|B(u,1/4)| \le \frac n4$. Then there is a vertex $z$ such that, if we set $V' := B(z,2)$, we have

$$ |V'| \ge \frac n2 $$

and

$$ \sum_{v,w \in V'} d(v,w) \ge \frac{n^2}{32} $$
Proof: Let $z$ be a vertex that maximizes $|B(z,2)|$; then $|V'| \ge \frac n2$, because if we had $|B(v,2)| < \frac n2$ for every vertex $v$, then every $v$ would be at distance greater than 2 from more than $\frac n2$ vertices, and we would have

$$ \sum_{v,w} d(v,w) > n \cdot \frac n2 \cdot 2 = n^2 $$

which would contradict the normalization $\sum_{v,w} d(v,w) = n^2$.

Regarding the sum of pairwise distances of elements of $V'$, we note that, for every $v \in V'$, at least $|V'| - |B(v,1/4)| \ge \frac n2 - \frac n4 = \frac n4$ elements of $V'$ are at distance greater than $\frac 14$ from $v$, and so

$$ \sum_{v,w\in V'} d(v,w) \ge |V'| \cdot \frac n4 \cdot \frac 14 \ge \frac{n^2}{32} $$

$\Box$
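The case analysis of the two lemmas is easy to make effective; here is a numpy sketch (assuming the normalization $\sum_{v,w} d(v,w) = n^2$, with $d$ given as an $n\times n$ matrix; the function name is illustrative):

    import numpy as np

    def dense_ball_or_spread_subset(d):
        # Either return a ball S = B(u, 1/4) with at least n/4 points,
        # with which Lemma 13 finishes the rounding, or return the
        # subset V' = B(z, 2) promised by Lemma 14.
        n = d.shape[0]
        ball_quarter = (d <= 0.25).sum(axis=1)   # |B(u, 1/4)| for every u
        u = int(ball_quarter.argmax())
        if ball_quarter[u] >= n / 4:
            return "dense ball", np.where(d[u] <= 0.25)[0]
        ball_two = (d <= 2).sum(axis=1)          # |B(z, 2)| for every z
        z = int(ball_two.argmax())               # Lemma 14: at least n/2 points
        return "spread subset", np.where(d[z] <= 2)[0]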
The proof of the main theorem now reduces to proving the following geometric fact.
Lemma 15 (ARV Main Lemma) Let $d(\cdot,\cdot)$ be a negative-type semimetric over a set $V$ of $n$ points such that the points are contained in a unit ball and have constant average distance, that is,

- there is a vertex $z$ such that $d(z,v) \le 1$ for every $v\in V$;
- $\frac 1{n^2} \sum_{v,w} d(v,w) \ge c$

Then there are sets $S, T \subseteq V$ such that

- $|S|, |T| \ge \Omega(n)$;
- for every $v\in S$ and every $w\in T$,

$$ d(v,w) \ge \frac 1{O(\sqrt{\log n})} $$

where the multiplicative factors hidden in the $O(\cdot)$ and $\Omega(\cdot)$ notations depend only on $c$.
Indeed, applying the ARV Main Lemma to $V'$ (after scaling all distances by $\frac 14$, so that the points of $V'$ fit within a ball of radius $\frac 12$ centered at $z$ and their average distance remains at least a constant) tells us that there are subsets $S$ and $T$ of $V'$, both of size $\Omega(n)$, such that $d(v,w) \ge \frac 1{O(\sqrt{\log n})}$ for every $v\in S$ and $w\in T$. If we consider the Fréchet embedding $f_S(v) := \min_{s\in S} d(v,s)$, we have

$$ \sum_{v,w} |f_S(v) - f_S(w)| \ge \sum_{v\in S} \sum_{w\in T} f_S(w) \ge \Omega(n^2) \cdot \frac 1{O(\sqrt{\log n})} \ge \frac 1{O(\sqrt{\log n})} \cdot \sum_{v,w} d(v,w) $$

where we used the facts that $f_S(v) = 0$ for $v\in S$, that $f_S(w) \ge \frac 1{O(\sqrt{\log n})}$ for $w\in T$, and the normalization $\sum_{v,w} d(v,w) = n^2$.
It remains to prove the ARV Main Lemma.