In which we introduce the Arora-Rao-Vazirani relaxation of sparsest cut, and discuss why it is solvable in polynomial time.
1. The Arora-Rao-Vazirani Relaxation
Recall that the sparsest cut $\phi(G)$ of a graph $G=(V,E)$ with adjacency matrix $A$ is defined as

$$\phi(G) := \min_{S :\ \emptyset \neq S \subsetneq V} \frac{\sum_{i,j} A_{i,j} \cdot |1_S(i) - 1_S(j)|}{\frac{2|E|}{|V|^2} \sum_{i,j} |1_S(i) - 1_S(j)|}$$

and the Leighton-Rao relaxation is obtained by noting that if we define $d(i,j) := |1_S(i) - 1_S(j)|$ then $d(\cdot,\cdot)$ is a semimetric over $V$, so that the following quantity is a relaxation of $\phi(G)$:

$$LR(G) := \min_{\substack{d: V \times V \rightarrow {\mathbb R} \\ d\ \text{semimetric}}} \frac{\sum_{i,j} A_{i,j} \cdot d(i,j)}{\frac{2|E|}{|V|^2} \sum_{i,j} d(i,j)}$$
If $G$ is $d$-regular, and we call $M := \frac 1d \cdot A$ the normalized adjacency matrix of $G$, and we let $1 = \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ be the eigenvalues of $M$ with multiplicities, then we proved in a past lecture that

$$1 - \lambda_2 = \min_{x: V \rightarrow {\mathbb R}} \frac{\sum_{i,j} A_{i,j} \cdot |x_i - x_j|^2}{\frac dn \sum_{i,j} |x_i - x_j|^2} \ \ \ \ \ (1)$$

(note that, in the $d$-regular case, $\frac{2|E|}{|V|^2} = \frac dn$), which is also a relaxation of $\phi(G)$, because, for every $S$, every $i$ and every $j$, $|1_S(i) - 1_S(j)| = |1_S(i) - 1_S(j)|^2$.
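For concreteness, here is a minimal numerical sketch of the quantity in (1); Python with numpy is our choice here, not part of the lecture, and the function name is ours. It computes $1-\lambda_2$ from the eigenvalues of the normalized adjacency matrix of a regular graph.

```python
import numpy as np

def one_minus_lambda2(adj):
    """1 - lambda_2 for a d-regular graph given by its 0/1 adjacency matrix."""
    deg = adj[0].sum()              # regularity degree d (assumes the graph is regular)
    M = adj / deg                   # normalized adjacency matrix M = A / d
    eigs = np.linalg.eigvalsh(M)    # eigenvalues in nondecreasing order
    return 1.0 - eigs[-2]           # eigs[-1] = lambda_1 = 1, eigs[-2] = lambda_2

# Example: the 4-cycle, whose normalized eigenvalues are 1, 0, 0, -1.
cycle4 = np.array([[0, 1, 0, 1],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 0, 1, 0]], dtype=float)
print(one_minus_lambda2(cycle4))    # prints 1.0
```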
We note that if we further relax (1) by allowing $V$ to be mapped into a higher-dimensional space ${\mathbb R}^m$ instead of ${\mathbb R}$, and we replace the absolute differences $|x_i - x_j|$ by Euclidean distances $\| x_i - x_j \|$, the optimum remains the same.

Fact 1

$$1 - \lambda_2 = \min_{m,\ x: V \rightarrow {\mathbb R}^m} \frac{\sum_{i,j} A_{i,j} \cdot \| x_i - x_j \|^2}{\frac dn \sum_{i,j} \| x_i - x_j \|^2}$$
Proof: For a mapping $x: V \rightarrow {\mathbb R}^m$, define

$$\sigma(x) := \frac{\sum_{i,j} A_{i,j} \cdot \| x_i - x_j \|^2}{\frac dn \sum_{i,j} \| x_i - x_j \|^2}$$

It is enough to show that, for every $x$, $\sigma(x) \geq 1 - \lambda_2$. Let $x^k_i$ be the $k$-th coordinate of the vector $x_i$. Then

$$\sigma(x) = \frac{\sum_k \sum_{i,j} A_{i,j} \cdot |x^k_i - x^k_j|^2}{\frac dn \sum_k \sum_{i,j} |x^k_i - x^k_j|^2} \geq \min_k \frac{\sum_{i,j} A_{i,j} \cdot |x^k_i - x^k_j|^2}{\frac dn \sum_{i,j} |x^k_i - x^k_j|^2} \geq 1 - \lambda_2$$

where the second-to-last inequality follows from the fact, which we have already used before, that for nonnegative reals $a_1, \ldots, a_m$ and positive reals $b_1, \ldots, b_m$ we have

$$\frac{a_1 + \cdots + a_m}{b_1 + \cdots + b_m} \geq \min_k \frac{a_k}{b_k} \qquad \Box$$
The above observations give the following comparison between the Leighton-Rao relaxation and the spectral relaxation: both are obtained by replacing $|1_S(i) - 1_S(j)|$ with a "distance function" $d(i,j)$; in the Leighton-Rao relaxation, $d(\cdot,\cdot)$ is constrained to satisfy the triangle inequality; in the spectral relaxation, $d(i,j)$ is constrained to be the square of the Euclidean distance between $x_i$ and $x_j$, for some mapping $x: V \rightarrow {\mathbb R}^m$.

The Arora-Rao-Vazirani relaxation is obtained by enforcing both conditions, that is, by considering distance functions that satisfy the triangle inequality and that can be realized as $d(i,j) = \| x_i - x_j \|^2$ for some mapping $x: V \rightarrow {\mathbb R}^m$.
Definition 2 A semimetric $d: V \times V \rightarrow {\mathbb R}$ is called of negative type if there is a dimension $m$ and a mapping $x: V \rightarrow {\mathbb R}^m$ such that $d(i,j) = \| x_i - x_j \|^2$ for every $i,j \in V$.
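To see why the triangle inequality is a genuine extra condition in Definition 2, here is a small Python sketch (numpy assumed; the helper name is ours) that checks whether the squared Euclidean distances of a point set satisfy the triangle inequality: three evenly spaced collinear points fail, while the standard basis vectors succeed.

```python
import itertools

import numpy as np

def squared_distances_satisfy_triangle(points, tol=1e-9):
    """Check the triangle inequality for d(i,j) = ||x_i - x_j||^2
    (symmetry and nonnegativity hold automatically)."""
    n = len(points)
    d = lambda i, j: float(np.sum((points[i] - points[j]) ** 2))
    return all(d(i, j) <= d(i, k) + d(k, j) + tol
               for i, j, k in itertools.permutations(range(n), 3))

# Evenly spaced collinear points fail: d(0,2) = 4 > d(0,1) + d(1,2) = 2.
print(squared_distances_satisfy_triangle(np.array([[0.0], [1.0], [2.0]])))  # False
# The standard basis vectors succeed: d(i,j) = 2 <= 2 + 2.
print(squared_distances_satisfy_triangle(np.eye(3)))                        # True
```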
With the above definition, we can formulate the Arora-Rao-Vazirani relaxation as

$$ARV(G) := \min_{\substack{d: V \times V \rightarrow {\mathbb R} \\ d\ \text{semimetric of negative type}}} \frac{\sum_{i,j} A_{i,j} \cdot d(i,j)}{\frac{2|E|}{|V|^2} \sum_{i,j} d(i,j)} \ \ \ \ \ (2)$$
Remark 1 The relaxation (2) was first proposed by Goemans and Linial. Arora, Rao and Vazirani were the first to prove that it achieves an approximation guarantee which is better than the approximation guarantee of the Leighton-Rao relaxation.
We have, by definition,

$$\max\{ LR(G),\ 1 - \lambda_2 \} \leq ARV(G) \leq \phi(G)$$

and so the approximation results that we have proved for $LR(G)$ and $1-\lambda_2$ apply to $ARV(G)$: for every graph $G$, $\phi(G) \leq O(\log |V|) \cdot ARV(G)$, and, for every regular graph, $\phi(G) \leq O(\sqrt{ARV(G)})$.
Interestingly, the examples that we have given of graphs for which $LR(G)$ and $1-\lambda_2$ give poor approximations are complementary. If $G$ is a cycle, then $1-\lambda_2$ is a poor approximation of $\phi(G)$, but $LR(G)$ is a good approximation of $\phi(G)$; if $G$ is a constant-degree expander then $LR(G)$ is a poor approximation of $\phi(G)$, but $1-\lambda_2$ is a good approximation.
When Goemans and Linial (separately) proposed to study the relaxation (2), they conjectured that it would always provide a constant-factor approximation of $\phi(G)$. Unfortunately, the conjecture turned out to be false, but Arora, Rao and Vazirani were able to prove that (2) does provide a strictly better approximation than the Leighton-Rao relaxation. In the next lectures, we will present parts of the proof of the following results.
Theorem 3 There is a universal constant $c$ such that, for every graph $G=(V,E)$,

$$\phi(G) \leq c \cdot \sqrt{\log |V|} \cdot ARV(G)$$

Theorem 4 There is an absolute constant $c$ and an infinite family of graphs $G_n = (V_n, E_n)$ such that

$$\phi(G_n) \geq c \cdot \log\log |V_n| \cdot ARV(G_n)$$
In the rest of this lecture we discuss the polynomial time solvability of (2).
2. The Ellipsoid Algorithm and Semidefinite Programming
Definition 5 If $K \subseteq {\mathbb R}^m$ is a set, then a separation oracle for $K$ is a procedure that, on input $x \in {\mathbb R}^m$:

- If $x \in K$, outputs "yes";
- If $x \notin K$, outputs coefficients $a_1, \ldots, a_m, b$ such that

$$\sum_k a_k x_k < b$$

but, for every $z \in K$,

$$\sum_k a_k z_k \geq b$$

Note that a set $K$ can have a separation oracle only if it is convex. Under certain additional mild conditions, if $K$ has a polynomial time computable separation oracle, then the optimization problem

$$\begin{array}{ll} \text{minimize} & c^{\top} x \\ \text{subject to} & Ax \geq b \\ & x \in K \end{array} \ \ \ \ \ (3)$$

is solvable in polynomial time using the Ellipsoid Algorithm.
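To give a flavor of how the Ellipsoid Algorithm uses a separation oracle, here is a toy feasibility version in Python; this is a sketch under simplifying assumptions (no attention to the precision issues discussed in Remark 2 below, dimension at least 2, and all names are ours), not a usable implementation.

```python
import numpy as np

def ellipsoid_feasibility(separation_oracle, n, R, max_iters=100000):
    """Search for a point of a convex set K contained in the ball of radius R
    around the origin in R^n (n >= 2). Following Definition 5,
    separation_oracle(x) returns None if x is in K, and otherwise a pair
    (a, b) with <a, x> < b but <a, z> >= b for every z in K."""
    c = np.zeros(n)            # center of the current ellipsoid
    Q = R * R * np.eye(n)      # current ellipsoid: {x : (x-c)^T Q^{-1} (x-c) <= 1}
    for _ in range(max_iters):
        answer = separation_oracle(c)
        if answer is None:
            return c           # the current center belongs to K
        a, _ = answer
        # K lies in the half-space {x : <a, x> >= <a, c>}; replace the current
        # ellipsoid by the smallest one containing its intersection with that
        # half-space (the standard central-cut update, applied to -a).
        a = -a
        g = Q @ a / np.sqrt(a @ Q @ a)
        c = c - g / (n + 1)
        Q = (n * n / (n * n - 1.0)) * (Q - (2.0 / (n + 1)) * np.outer(g, g))
    return None                # gave up within the iteration budget
```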
It remains to see how to put the Arora-Rao-Vazirani relaxation into the above form.
Recall that a matrix $X \in {\mathbb R}^{n \times n}$ is positive semidefinite if it is symmetric and all its eigenvalues are nonnegative. We will use the set of all $n \times n$ positive semidefinite matrices as our set $K$ (thinking of them as $n^2$-dimensional vectors). If we think of two matrices $M, M' \in {\mathbb R}^{n \times n}$ as $n^2$-dimensional vectors, then their "inner product" is

$$M \bullet M' := \sum_{i,j} M_{i,j} \cdot M'_{i,j}$$

Lemma 6 The set of $n \times n$ positive semidefinite matrices has a separation oracle computable in time polynomial in $n$.
Proof: Given a symmetric matrix $X$, its smallest eigenvalue is

$$\min_{z \in {\mathbb R}^n :\ \|z\| = 1} z^{\top} X z \ ;$$

the vector achieving the minimum is a corresponding eigenvector, and both the smallest eigenvalue and the corresponding eigenvector can be computed in polynomial time.

If we find that the smallest eigenvalue of $X$ is non-negative, then we answer "yes." Otherwise, if $z$ is an eigenvector of the smallest eigenvalue, we output the matrix $a := z z^{\top}$. We see that we have

$$a \bullet X = z^{\top} X z < 0$$

but that, for every positive semidefinite matrix $M$, we have

$$a \bullet M = z^{\top} M z \geq 0 \qquad \Box$$
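The proof of Lemma 6 translates almost verbatim into code. Here is a Python rendering (numpy assumed; the tolerance parameter is our addition, needed because eigenvalues are computed in floating point).

```python
import numpy as np

def psd_separation_oracle(X, tol=1e-9):
    """Separation oracle for the cone of PSD matrices, as in Lemma 6.
    Returns None (the "yes" answer) if the symmetric matrix X is PSD;
    otherwise returns a = z z^T with a . X < 0 but a . M >= 0 for all PSD M."""
    eigenvalues, eigenvectors = np.linalg.eigh(X)  # eigenvalues in ascending order
    if eigenvalues[0] >= -tol:
        return None                      # smallest eigenvalue is nonnegative
    z = eigenvectors[:, 0]               # eigenvector of the smallest eigenvalue
    return np.outer(z, z)                # a . X = z^T X z = lambda_min < 0
```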
This implies that any optimization problem of the following form can be solved in polynomial time:

$$\begin{array}{ll} \text{minimize} & C \bullet X \\ \text{subject to} & A^1 \bullet X \geq b_1 \\ & \vdots \\ & A^k \bullet X \geq b_k \\ & X \succeq 0 \end{array} \ \ \ \ \ (4)$$

where $C, A^1, \ldots, A^k$ are square matrices of coefficients, $b_1, \ldots, b_k$ are scalars, and $X$ is a square matrix of variables (the constraint $X \succeq 0$ means that $X$ must be positive semidefinite). An optimization problem like the one above is called a semidefinite program.
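As an illustration of the form (4), here is how a tiny hypothetical instance might be specified and solved with the cvxpy modeling library; cvxpy and the specific matrices below are our additions, and any SDP-capable solver would serve the same purpose.

```python
import cvxpy as cp
import numpy as np

# A made-up 2x2 instance of (4): minimize C . X subject to A1 . X >= b1, X PSD.
C = np.array([[1.0, 0.0], [0.0, 2.0]])
A1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = 4.0

X = cp.Variable((2, 2), PSD=True)         # the matrix of variables, constrained to be PSD
objective = cp.Minimize(cp.trace(C @ X))  # C . X equals trace(C X) for symmetric C
constraints = [cp.trace(A1 @ X) >= b1]
cp.Problem(objective, constraints).solve()
print(X.value)                            # an optimal PSD matrix
```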
It remains to see how to cast the Arora-Rao-Vazirani relaxation as a semidefinite program.
Lemma 7 For a symmetric matrix $M \in {\mathbb R}^{n \times n}$, the following properties are equivalent:

1. $M$ is positive semidefinite;
2. there are vectors $x_1, \ldots, x_n \in {\mathbb R}^m$ such that, for all $i,j$, $M_{i,j} = \langle x_i, x_j \rangle$;
3. for every vector $z \in {\mathbb R}^n$, $z^{\top} M z \geq 0$.
Proof: That (1) and (3) are equivalent follows from the characterization of the smallest eigenvalue of $M$ as the minimum of $z^{\top} M z$ over all unit vectors $z$.

To see that (2) $\Rightarrow$ (3), suppose that vectors $x_1, \ldots, x_n$ exist as asserted in (2), take any vector $z$, and see that

$$z^{\top} M z = \sum_{i,j} z_i z_j M_{i,j} = \sum_{i,j} z_i z_j \langle x_i, x_j \rangle = \left\| \sum_i z_i x_i \right\|^2 \geq 0$$

Finally, to see that (1) $\Rightarrow$ (2), let $\lambda_1, \ldots, \lambda_n$ be the eigenvalues of $M$ with multiplicities, and let $v_1, \ldots, v_n$ be a corresponding orthonormal set of eigenvectors. Then

$$M = \sum_k \lambda_k v_k v_k^{\top}$$

that is,

$$M_{i,j} = \sum_k \lambda_k v_k(i) v_k(j) = \langle x_i, x_j \rangle$$

if we define $x_1, \ldots, x_n$ as the vectors such that $x_i(k) := \sqrt{\lambda_k} \cdot v_k(i)$, which is possible because, by (1), every $\lambda_k$ is nonnegative. $\Box$
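The (1) $\Rightarrow$ (2) direction of the proof is effective, and it is exactly how one extracts vectors from the solution of a semidefinite program in practice. A Python sketch (numpy assumed; the clipping step only guards against floating-point round-off):

```python
import numpy as np

def gram_vectors(M):
    """Given a PSD matrix M, return vectors x_1, ..., x_n (as the rows of the
    returned matrix) with M[i, j] = <x_i, x_j>, via x_i(k) = sqrt(lambda_k) v_k(i)."""
    eigenvalues, eigenvectors = np.linalg.eigh(M)
    eigenvalues = np.clip(eigenvalues, 0.0, None)  # remove tiny negative round-off
    return eigenvectors * np.sqrt(eigenvalues)     # scales column k by sqrt(lambda_k)

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])                         # a small PSD example
X = gram_vectors(M)
print(np.allclose(X @ X.T, M))                     # True: inner products recover M
```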
This means that the generic semidefinite program (4) can be rewritten as an optimization problem in which the variables are the vectors $x_1, \ldots, x_n$, as in part (2) of the above lemma:

$$\begin{array}{ll} \text{minimize} & \sum_{i,j} C_{i,j} \langle x_i, x_j \rangle \\ \text{subject to} & \sum_{i,j} A^1_{i,j} \langle x_i, x_j \rangle \geq b_1 \\ & \vdots \\ & \sum_{i,j} A^k_{i,j} \langle x_i, x_j \rangle \geq b_k \\ & x_i \in {\mathbb R}^m \ \ \forall i \in \{1, \ldots, n\} \end{array}$$

where the dimension $m$ is itself a variable (although one could fix it, without loss of generality, to be equal to $n$). In this view, a semidefinite program is an optimization problem in which we wish to select $n$ vectors such that their pairwise inner products satisfy certain linear inequalities, while optimizing a cost function that is linear in their pairwise inner products.
The square of the Euclidean distance between two vectors is a linear function of inner products,

$$\| x_i - x_j \|^2 = \langle x_i, x_i \rangle - 2 \langle x_i, x_j \rangle + \langle x_j, x_j \rangle$$

and so, in a semidefinite program, we can include expressions that are linear in the pairwise squared distances (or squared norms) of the vectors. The ARV relaxation can be written as follows:

$$\begin{array}{ll} \text{minimize} & \sum_{i,j} A_{i,j} \cdot \| x_i - x_j \|^2 \\ \text{subject to} & \frac{2|E|}{|V|^2} \sum_{i,j} \| x_i - x_j \|^2 = 1 \\ & \| x_i - x_j \|^2 \leq \| x_i - x_k \|^2 + \| x_k - x_j \|^2 \ \ \forall i,j,k \in V \\ & x_i \in {\mathbb R}^m \ \ \forall i \in V \end{array}$$

and so it is a semidefinite program, and it can be solved in polynomial time.
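Putting everything together, here is a hedged sketch of the ARV relaxation written in cvxpy over the Gram matrix $X_{i,j} = \langle x_i, x_j \rangle$, in which squared distances are linear expressions in the entries of $X$; cvxpy and all names are our additions, and the sketch is meant only for very small graphs, since it generates all of the roughly $n^3$ triangle inequalities explicitly.

```python
import itertools

import cvxpy as cp
import numpy as np

def arv_value(adj):
    """Value of the ARV relaxation (2) for a graph given by its symmetric 0/1
    adjacency matrix, as an SDP over the Gram matrix X[i, j] = <x_i, x_j>."""
    n = adj.shape[0]
    scale = adj.sum() / (n * n)                         # 2|E| / |V|^2
    X = cp.Variable((n, n), PSD=True)                   # X is a Gram matrix by Lemma 7
    d2 = lambda i, j: X[i, i] + X[j, j] - 2 * X[i, j]   # ||x_i - x_j||^2
    pairs = list(itertools.combinations(range(n), 2))
    constraints = [scale * sum(d2(i, j) for i, j in pairs) == 1]
    # triangle inequalities: the squared distances must form a semimetric
    for i, j, k in itertools.permutations(range(n), 3):
        constraints.append(d2(i, j) <= d2(i, k) + d2(k, j))
    objective = cp.Minimize(sum(adj[i, j] * d2(i, j) for i, j in pairs))
    problem = cp.Problem(objective, constraints)
    problem.solve()
    return problem.value
```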
Remark 2 Our discussion of polynomial time solvability glossed over important issues about numerical precision. To run the Ellipsoid Algorithm one needs, besides the separation oracle, to be given a ball that is entirely contained in the set of feasible solutions and a ball that entirely contains the set of feasible solutions, and the running time of the algorithm is polynomial in the size of the input, polylogarithmic in the ratio of the volumes of the two balls, and polylogarithmic in the desired amount of precision. At the end, one doesn’t get an optimal solution, which might not have a finite-precision exact representation, but an approximation within the desired precision. The algorithm is able to tolerate a bounded amount of imprecision in the separation oracle, which is an important feature because we do not have exact algorithms to compute eigenvalues and eigenvectors (the entries in the eigenvector might not have a finite-precision representation).
The Ellipsoid algorithm is typically not a practical algorithm. Algorithms based on the interior point method have been adapted to semidefinite programming, and run both in worst-case polynomial time and in reasonable time in practice.
Arora and Kale have developed an $\tilde O(n^2)$-time algorithm to solve the ARV relaxation within a multiplicative error $(1+\epsilon)$. The dependency on the error is worse than that of generic algorithms, which achieve polylogarithmic dependency, but this is not a problem in this application, because we are going to lose an $O(\sqrt{\log |V|})$ factor in the rounding, so an extra constant factor coming from an approximate solution of the relaxation is a low-order consideration.
Michel Goemans once said to me that he didn't view the Goemans-Linial conjecture as an actual conjecture, and that he couldn't remember whether he believed the integrality gap of (2) to be a constant or not.
I believe his discussion of relaxation (2) is from this paper:
http://www-math.mit.edu/~goemans/PAPERS/semidef-survey.ps
On page 18, he says (paraphrasing) “If you can embed negative type metrics into l1 with constant distortion, then this shows the integrality gap of (2) is a constant. This is a very intriguing question”.
I think it’s probably more accurate to say the “Goemans-Linial Question” rather than “Goemans-Linial Conjecture”. That said, I haven’t read what Linial wrote about it.
In the display just after Remark 1, shouldn't it be $\max\{ LR(G), 1-\lambda_2 \} \leq ARV(G) \leq \phi(G)$ (and similarly in a couple of other places in the post)?
Really enjoying this series of blogposts, by the way. Thanks!
I checked Linial's ICM paper. Linial asked "what is the minimal $c$ such that" an $n$-point negative type metric can be embedded in $\ell_1$ with distortion $c$, which indicates that he was not necessarily expecting the answer to be a constant.
5 years later… The paper of mine from 1997 that Nick is referring to actually says (without paraphrasing): “If one could show that negative type metrics can be embedded into $l_2$ with distortion $O(\sqrt{\log n})$ (or possibly even into $l_1$ within a constant), this would give a worst-case ratio that is $O(\sqrt{\log n})$ (resp. constant)”. Definitely a question, but not a conjecture! We should always go back to the original sources… 🙂 The statement of O(\sqrt{\log n}) was motivated partly by a result of mine (see Magen and Moharrami http://www.cs.toronto.edu/~avner/papers/no-dimred.pdf ) that a negative type metric in dimension d can be embedded into $l_2$ with distortion $\sqrt{d}$.