Tag Archives: semidefinite programming
Beyond Worst Case Analysis: Lecture 4
Scribed by Rachel Lawrence
In which we introduce semidefinite programming and apply it to Max Cut.
1. Overview
We begin with an introduction to Semidefinite Programming (SDP). We will then see that, using SDP, we can find a cut with the same kind of near-optimal performance for Max Cut in random graphs as we got from the greedy algorithm — that is, a cut with at least $\left( \frac 12 + \Omega\left( \frac 1{\sqrt d} \right) \right) \cdot |E|$ edges in random graphs $G_{n, d/n}$. More generally, we will prove that you can always find a cut at least this large in the case that $G$ is triangle-free and with maximum vertex degree at most $d$, which will imply the bound in random graphs. We will also see how to use SDP to certify an upper bound:

$$ {\rm maxcut}(G) \leq \left( \frac 12 + O\left( \frac 1{\sqrt d} \right) \right) \cdot |E| $$

with high probability in $G_{n, d/n}$.
Methods using SDP will become particularly helpful in future lectures when we consider planted-solution models instead of fully random graphs: greedy algorithms will fail on some analogous problems where methods using SDP can succeed.
2. Semidefinite Programming
Semidefinite Programming (SDP) is a form of convex optimization, similar to linear programming but with the addition of a constraint stating that, if the variables in the linear program are considered as entries in a matrix, that matrix is positive semidefinite. To formalize this, we begin by recalling some basic facts from linear algebra.
2.1. Linear algebra review
Definition 1 (Positive Semidefinite) A matrix $M \in {\mathbb R}^{n \times n}$ is positive semidefinite (abbreviated PSD and written $M \succeq 0$) if it is symmetric and all its eigenvalues are non-negative.
We will also make use of the following facts from linear algebra:
- If $M$ is a symmetric matrix, then all the eigenvalues of $M$ are real, and, if we call $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n$ the eigenvalues of $M$ with repetition, we have
$$ M = \sum_{i=1}^n \lambda_i v^{(i)} (v^{(i)})^T $$
where the $v^{(i)}$ are orthonormal eigenvectors of the $\lambda_i$.
- The smallest eigenvalue of $M$ has the characterization
$$ \lambda_1 = \min_{x \neq 0} \frac{x^T M x}{x^T x} $$
and the optimization problem on the right-hand side is solvable, up to arbitrarily good accuracy, in polynomial time (see the numerical check below).
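As a quick sanity check of these two facts, here is a short numpy experiment (my own illustration, not part of the notes): it reconstructs a symmetric matrix from its spectral decomposition and compares the smallest eigenvalue with the minimum Rayleigh quotient over random vectors.

```python
import numpy as np

# A small symmetric matrix to experiment with.
M = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

# numpy's eigh is specialized to symmetric matrices: it returns real
# eigenvalues in ascending order and orthonormal eigenvectors.
lam, V = np.linalg.eigh(M)

# Fact 1: M = sum_i lambda_i v^(i) (v^(i))^T.
reconstruction = sum(l * np.outer(v, v) for l, v in zip(lam, V.T))
assert np.allclose(M, reconstruction)

# Fact 2: lambda_1 = min_x (x^T M x) / (x^T x); sampling random x
# never goes below the smallest eigenvalue.
xs = np.random.randn(100000, 3)
rayleigh = ((xs @ M) * xs).sum(axis=1) / (xs * xs).sum(axis=1)
assert rayleigh.min() >= lam[0] - 1e-9
```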
This gives us the following lemmas:
Lemma 2 $M \succeq 0$ if and only if for every vector $x$ we have $x^T M x \geq 0$.

Proof: From part (2) above, the smallest eigenvalue of $M$ is given by
$$ \lambda_1 = \min_{x \neq 0} \frac{x^T M x}{x^T x} $$
Noting that we always have $x^T x > 0$ for $x \neq 0$, then $\lambda_1 \geq 0$ if and only if the numerator $x^T M x$ on the right is always non-negative.
Lemma 3 If $M_1 \succeq 0$ and $M_2 \succeq 0$, then $M_1 + M_2 \succeq 0$.

Proof: For every vector $x$,
$$ x^T (M_1 + M_2) x = x^T M_1 x + x^T M_2 x \geq 0. $$
By Lemma 2, this implies $M_1 + M_2 \succeq 0$.
Lemma 4 If $M \succeq 0$ and $a \geq 0$, then $aM \succeq 0$.

Proof: For every vector $x$,
$$ x^T (aM) x = a \cdot x^T M x \geq 0. $$
By Lemma 2, this implies $aM \succeq 0$.
2.2. Formulation of SDP
With these characterizations in mind, we define a semidefinite program as an optimization program in which we have real variables $X_{i,j}$, with $1 \leq i \leq j \leq n$, and we want to maximize, or minimize, a linear function of the variables such that linear constraints over the variables are satisfied (so far this is the same as a linear program) and subject to the additional constraint that the symmetric matrix $X$ with entries $X_{i,j}$ is PSD. Thus, a typical semidefinite program (SDP) looks like

$$ \begin{array}{ll} \text{maximize} & \sum_{i,j} C_{i,j} X_{i,j} \\ \text{subject to} & \sum_{i,j} A^{(k)}_{i,j} X_{i,j} \leq b_k \qquad \text{for } k = 1, \ldots, m \\ & X \succeq 0 \end{array} $$

where the matrices $C, A^{(1)}, \ldots, A^{(m)}$ and the scalars $b_1, \ldots, b_m$ are given, and the entries of $X$ are the variables over which we are optimizing.
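To make the general form concrete, here is a minimal sketch of how such a program can be passed to an off-the-shelf solver. The use of the cvxpy library, and the random data, are my own choices for illustration and not part of the notes.

```python
import numpy as np
import cvxpy as cp

n, m = 4, 2
rng = np.random.default_rng(0)

def random_symmetric(n):
    B = rng.standard_normal((n, n))
    return (B + B.T) / 2

# Given data: objective matrix C, constraint matrices A^(k), scalars b_k.
C = random_symmetric(n)
A = [random_symmetric(n) for _ in range(m)]
b = np.ones(m)

# Variables X_{i,j}; PSD=True imposes the constraint X ⪰ 0.
X = cp.Variable((n, n), PSD=True)
constraints = [cp.sum(cp.multiply(A[k], X)) <= b[k] for k in range(m)]
constraints.append(cp.trace(X) <= 1)  # a linear constraint that keeps the program bounded
problem = cp.Problem(cp.Maximize(cp.sum(cp.multiply(C, X))), constraints)
problem.solve()
print(problem.value, X.value)
```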
We will also use the following alternative characterization of PSD matrices.

Lemma 5 A matrix $M \in {\mathbb R}^{n \times n}$ is PSD if and only if there is a collection of vectors $x^{(1)}, \ldots, x^{(n)}$ such that, for every $i, j$, we have $M_{i,j} = \langle x^{(i)}, x^{(j)} \rangle$.
Proof: Suppose that $M$ and $x^{(1)}, \ldots, x^{(n)}$ are such that $M_{i,j} = \langle x^{(i)}, x^{(j)} \rangle$ for all $i$ and $j$. Then $M$ is PSD because for every vector $z$ we have

$$ z^T M z = \sum_{i,j} z_i z_j M_{i,j} = \sum_{i,j} z_i z_j \langle x^{(i)}, x^{(j)} \rangle = \left\| \sum_i z_i x^{(i)} \right\|^2 \geq 0. $$

Conversely, if $M$ is PSD and we write it as

$$ M = \sum_k \lambda_k v^{(k)} (v^{(k)})^T $$

we have

$$ M_{i,j} = \sum_k \lambda_k v^{(k)}_i v^{(k)}_j $$

and we see that we can define vectors $x^{(1)}, \ldots, x^{(n)}$ by setting $x^{(i)}_k := \sqrt{\lambda_k} \cdot v^{(k)}_i$, and we do have the property that $M_{i,j} = \langle x^{(i)}, x^{(j)} \rangle$.
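Note that the second half of this proof is effectively an algorithm for recovering the vectors from a PSD matrix. A small numpy sketch of it (mine, not from the notes):

```python
import numpy as np

def gram_vectors(M, tol=1e-9):
    """Given a symmetric PSD matrix M, return vectors x^(1),...,x^(n)
    (as the rows of a matrix) such that M[i,j] = <x^(i), x^(j)>,
    following the proof of Lemma 5."""
    lam, V = np.linalg.eigh(M)
    if lam.min() < -tol:
        raise ValueError("M is not PSD")
    # x^(i)_k = sqrt(lambda_k) * v^(k)_i; clip tiny negative eigenvalues
    # that arise from floating point error.
    return V * np.sqrt(np.clip(lam, 0, None))

M = np.array([[2.0, 1.0], [1.0, 2.0]])
X = gram_vectors(M)
assert np.allclose(X @ X.T, M)  # M[i,j] = <x^(i), x^(j)>
```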
This leads to the following equivalent formulation of the SDP optimization problem:

$$ \begin{array}{ll} \text{maximize} & \sum_{i,j} C_{i,j} \langle x^{(i)}, x^{(j)} \rangle \\ \text{subject to} & \sum_{i,j} A^{(k)}_{i,j} \langle x^{(i)}, x^{(j)} \rangle \leq b_k \qquad \text{for } k = 1, \ldots, m \end{array} $$

where our variables are vectors $x^{(1)}, \ldots, x^{(n)}$. This is the statement of the optimization problem that we will most commonly use.
2.3. Polynomial time solvability
From Lemmas 3 and 4, we recall that if $M_1$ and $M_2$ are two matrices such that $M_1 \succeq 0$ and $M_2 \succeq 0$, and if $a \geq 0$ is a scalar, then $M_1 + M_2 \succeq 0$ and $a M_1 \succeq 0$. This means that the set of PSD matrices is a convex subset of ${\mathbb R}^{n \times n}$, and that the above optimization problem is a convex problem.
Using the ellipsoid algorithm, one can solve in polynomial time (up to arbitrarily good accuracy) any optimization problem in which one wants to optimize a linear function over a convex feasible region, provided that one has a separation oracle for the feasible region: that is, an algorithm that, given a point,
- Checks whether it is feasible and, if not,
- Constructs an inequality that is satisfied by all feasible points but not satisfied by the given point.
In order to construct a separation oracle for an SDP, it is enough to solve the following problem: given a symmetric matrix $X$, decide if it is PSD or not and, if not, construct a linear inequality in the entries that is satisfied by all PSD matrices but that is not satisfied by $X$. In order to do so, recall that the smallest eigenvalue of $X$ is

$$ \min_{z : \| z \| = 1} \ z^T X z $$

and that the above minimization problem is solvable in polynomial time (up to arbitrarily good accuracy). If the above optimization problem has a non-negative optimum, then $X$ is PSD. If it has a negative optimum, achieved by a unit vector $z$, then the matrix is not PSD, and the inequality

$$ \sum_{i,j} z_i z_j X_{i,j} \geq 0 $$

is satisfied by all PSD matrices but fails for $X$. Thus we have a separation oracle and we can solve SDPs in polynomial time up to arbitrarily good accuracy.
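The resulting separation oracle is only a few lines of numpy. In this sketch (my own; the tolerance handling is a practical detail, not part of the notes), the returned vector $z$ encodes the violated inequality $\sum_{i,j} z_i z_j X_{i,j} \geq 0$:

```python
import numpy as np

def psd_separation_oracle(X, tol=1e-9):
    """Return None if X is (numerically) PSD; otherwise return a unit
    vector z such that sum_{i,j} z_i z_j X_{i,j} >= 0 holds for every
    PSD matrix but fails for X."""
    lam, V = np.linalg.eigh((X + X.T) / 2)  # symmetrize, then diagonalize
    if lam[0] >= -tol:
        return None                # X is PSD: no separating inequality exists
    z = V[:, 0]                    # eigenvector of the most negative eigenvalue
    assert z @ X @ z < 0           # certifies that z^T X z >= 0 fails for X
    return z
```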
3. SDP Relaxation of Max Cut and Random Hyperplane Rounding
The Max Cut problem in a given graph $G = (V, E)$ has the following equivalent characterization, as a quadratic optimization problem over real variables $x_1, \ldots, x_n$, where $V = \{ 1, \ldots, n \}$:

$$ \begin{array}{ll} \text{maximize} & \sum_{(u,v) \in E} \frac{ (x_u - x_v)^2 }{4} \\ \text{subject to} & x_v^2 = 1 \qquad \text{for every } v \in V \end{array} $$

We can interpret this as associating every vertex $v$ with a value $x_v \in \{ -1, +1 \}$, so that the cut edges are those with one vertex of value $+1$ and one of value $-1$.

While quadratic optimization is NP-hard, we can instead use a relaxation to a polynomial-time solvable problem. We note that any quadratic optimization problem has a natural relaxation to an SDP, in which we relax real variables to take vector values and we change multiplication to inner product:

$$ \begin{array}{ll} \text{maximize} & \sum_{(u,v) \in E} \frac{ \| x_u - x_v \|^2 }{4} \\ \text{subject to} & \| x_v \|^2 = 1 \qquad \text{for every } v \in V \end{array} $$
Figure 1: The hyperplane through the origin defines a cut partitioning the vertices into sets $S$ and $V - S$.
Solving the above SDP, which is doable in polynomial time up to arbitrarily good accuracy, gives us a unit vector $x_v$ for each vertex $v$. A simple way to convert this collection to a cut $(S, V-S)$ is to take a random hyperplane through the origin, and then define $S$ to be the set of vertices $v$ such that $x_v$ is above the hyperplane. Equivalently, we pick a random vector $g$ according to a rotation-invariant distribution, for example a standard Gaussian distribution, and let $S$ be the set of vertices $v$ such that $\langle g, x_v \rangle \geq 0$.
Let $(u,v) \in E$ be an edge: one sees that if $\theta$ is the angle between $x_u$ and $x_v$, then the probability that $(u,v)$ is cut is proportional to $\theta$:

$$ \mathop{\mathbb P} [ (u,v) \text{ is cut} ] = \frac \theta \pi $$

and the contribution of $(u,v)$ to the cost function is

$$ \frac 14 \| x_u - x_v \|^2 = \frac{ \|x_u\|^2 + \|x_v\|^2 - 2 \langle x_u, x_v \rangle }{4} = \frac{1 - \cos \theta}{2}. $$

Some calculus shows that for every $0 \leq \theta \leq \pi$ we have

$$ \frac \theta \pi \geq .878 \cdot \frac{1 - \cos \theta}{2} $$

and so

$$ \mathop{\mathbb E} [ \text{number of edges cut} ] \geq .878 \cdot \sum_{(u,v) \in E} \frac 14 \| x_u - x_v \|^2 \geq .878 \cdot {\rm maxcut}(G), $$

so we have a polynomial time approximation algorithm with worst-case approximation guarantee $.878$.
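Here is the whole algorithm as a short program: it solves the Max Cut relaxation, recovers the vectors as in Lemma 5, and rounds with a random hyperplane. The use of cvxpy is, again, a convenience assumption (any SDP solver works), and the final lines numerically spot-check the $.878$ inequality.

```python
import numpy as np
import cvxpy as cp

def goemans_williamson(edges, n, seed=0):
    # SDP relaxation in terms of the Gram matrix X[u,v] = <x_u, x_v>:
    # each edge contributes ||x_u - x_v||^2 / 4 = (1 - X[u,v]) / 2.
    X = cp.Variable((n, n), PSD=True)
    objective = cp.Maximize(sum((1 - X[u, v]) / 2 for u, v in edges))
    cp.Problem(objective, [cp.diag(X) == 1]).solve()

    # Recover unit vectors from the Gram matrix (Lemma 5).
    lam, V = np.linalg.eigh(X.value)
    vectors = V * np.sqrt(np.clip(lam, 0, None))

    # Random hyperplane rounding: S = vertices on the positive side of g.
    g = np.random.default_rng(seed).standard_normal(n)
    S = vectors @ g >= 0
    return S, sum(1 for u, v in edges if S[u] != S[v])

# Spot-check of the key inequality: theta/pi >= .878 * (1 - cos theta)/2.
thetas = np.linspace(0, np.pi, 10001)
assert np.all(thetas / np.pi >= 0.878 * (1 - np.cos(thetas)) / 2 - 1e-12)

# Usage: on a 5-cycle the maximum cut contains 4 of the 5 edges.
print(goemans_williamson([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)], 5))
```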
Next time, we will see how the SDP relaxation behaves on random graphs, but first let us see how it behaves on a large class of graphs.
4. Max Cut in Bounded-Degree Triangle-Free Graphs
Theorem 6 If $G$ is a triangle-free graph in which every vertex has degree at most $d$, then

$$ {\rm maxcut}(G) \geq \left( \frac 12 + \Omega\left( \frac 1{\sqrt d} \right) \right) \cdot |E|. $$

Proof: Consider the following feasible solution for the SDP: we associate to each node $v$ an $n$-dimensional vector $x_v$ such that $x_{v,v} = \frac 1{\sqrt 2}$, $x_{v,w} = - \frac 1{\sqrt{2 \deg(v)}}$ if $(v,w) \in E$, and $x_{v,w} = 0$ otherwise. We immediately see that

$$ \| x_v \|^2 = \frac 12 + \deg(v) \cdot \frac 1{2 \deg(v)} = 1 $$

for every $v$ and so the solution is feasible.
For example, if we have a graph on $5$ vertices such that vertex $1$ is adjacent to vertices $3$ and $5$, then $\deg(1) = 2$ and the vector $x_1$ is:

| | $1$ | $2$ | $3$ | $4$ | $5$ |
|---|---|---|---|---|---|
| $x_1$ | $\frac 1{\sqrt 2}$ | $0$ | $-\frac 12$ | $0$ | $-\frac 12$ |
Let us transform this SDP solution into a cut using a random hyperplane.
We see that, for every edge $(u,v) \in E$, the vectors $x_u$ and $x_v$ have overlapping non-zero coordinates only at $u$ and $v$, because, since $G$ is triangle-free, $u$ and $v$ have no common neighbors. Hence

$$ \langle x_u, x_v \rangle = - \frac 1{2 \sqrt{\deg(u)}} - \frac 1{2 \sqrt{\deg(v)}} \leq - \frac 1{\sqrt d}. $$

The probability that $(u,v)$ is cut by the random hyperplane is

$$ \frac{ \arccos( \langle x_u, x_v \rangle ) }{\pi} \geq \frac{ \arccos\left( - \frac 1{\sqrt d} \right) }{\pi} $$

and

$$ \frac{ \arccos\left( - \frac 1{\sqrt d} \right) }{\pi} \geq \frac 12 + \Omega\left( \frac 1{\sqrt d} \right), $$

so that the expected number of cut edges is at least $\left( \frac 12 + \Omega\left( \frac 1{\sqrt d} \right) \right) \cdot |E|$.
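As a sanity check on this proof, the following sketch (my construction of a test case, not from the notes) builds the vectors $x_v$ for the 6-cycle, a triangle-free graph with $d = 2$, and verifies both feasibility and the inner product bound:

```python
import numpy as np

def theorem6_solution(edges, n):
    # x_{v,v} = 1/sqrt(2); x_{v,w} = -1/sqrt(2 deg(v)) if (v,w) is an
    # edge; x_{v,w} = 0 otherwise.
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    x = np.zeros((n, n))
    for v in range(n):
        x[v, v] = 1 / np.sqrt(2)
    for u, v in edges:
        x[u, v] = -1 / np.sqrt(2 * deg[u])
        x[v, u] = -1 / np.sqrt(2 * deg[v])
    return x

n, d = 6, 2
edges = [(i, (i + 1) % n) for i in range(n)]  # the 6-cycle
x = theorem6_solution(edges, n)
assert np.allclose((x ** 2).sum(axis=1), 1)   # feasibility: ||x_v||^2 = 1
for u, v in edges:                             # <x_u, x_v> <= -1/sqrt(d)
    assert x[u] @ x[v] <= -1 / np.sqrt(d) + 1e-9
```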
Two recent papers by Cui Peng
Cui Peng of Renmin University in Beijing has recently released two preprints, one claiming a proof of P=NP and one claiming a refutation of the Unique Games Conjecture; I will call them the “NP paper” and the “UG paper,” respectively.
Of all the papers I have seen claiming a resolution of the P versus NP problem, and, believe me, I have seen a lot of them, these are by far the most legit. On Scott Aaronson's checklist of signs that a claimed mathematical breakthrough is wrong, they score only two.
Unfortunately, both papers violate known impossibility results.
The two papers follow a similar approach: a certain constraint satisfaction problem is proved to be approximation resistant (under the assumption that P $\neq$ NP, or under the UGC, depending on the paper) and then a Semidefinite Programming approximation algorithm is developed that breaks approximation resistance. (Recall that a constraint satisfaction problem is approximation resistant if there is no polynomial time algorithm that has a worst-case approximation ratio better than the algorithm that picks a random assignment.)
In both papers, the approximation algorithm is by Hast, and it is based on a semidefinite programming relaxation studied by Charikar and Wirth.
The reason why the results cannot be correct is that, in both cases, if the hardness result is correct, then it implies an integrality gap for the Charikar-Wirth relaxation, which makes it unsuitable to break the approximation resistance as claimed.
Suppose that we have a constraint satisfaction problem in which every constraint is satisfied by a $p$ fraction of assignments. Then for such a problem to not be approximation resistant, we have to devise an algorithm that, for some fixed positive $\epsilon$, returns a solution whose cost (the number of constraints that it satisfies) is at least $p + \epsilon$ times the optimum. The analysis of such an algorithm needs to include some technique to prove upper bounds for the true optimum; this is because if you are given an instance in which the optimum satisfies at most a $p + o(1)$ fraction of constraints, as is the case for a random instance, then the algorithm will satisfy at most a $p + o(1)$ fraction of constraints, but then the execution of the algorithm and the proof of correctness will give a (polynomial-time computable and polynomial-time checkable) certificate that the optimum satisfies at most a $\frac{p + o(1)}{p + \epsilon} < 1$ fraction of constraints.
For algorithms that are based on relaxations, such certificates come from the relaxation itself: one shows that the algorithm satisfies a number of constraints that is at least $p + \epsilon$ times the optimum of the relaxation, and the optimum of the relaxation is at least the optimum of the constraint satisfaction problem. But if there are instances for which the optimum is $p + o(1)$ and the optimum of the relaxation is $1 - o(1)$, then one cannot use such a relaxation to design an algorithm that breaks approximation-resistance. (Because, on such instances, the algorithm will not be able to satisfy a number of constraints equal to $p + \epsilon$ times the optimum of the relaxation.)
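To see the numbers at work (my own choice of values, purely illustrative): take $p = \frac 12$ and $\epsilon = 0.1$, and suppose a random instance with $m$ constraints has optimum $0.51\, m$ while the relaxation has value $0.99\, m$. An algorithm with the claimed guarantee would have to satisfy at least

$$ (p + \epsilon) \cdot 0.99\, m = 0.6 \cdot 0.99\, m = 0.594\, m > 0.51\, m $$

constraints, more than even the optimal solution satisfies, which is impossible.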
In the UG paper, the approximation resistance relies on a result of Austrin and Håstad. Like all UGC-based inapproximability results that I am aware of, the hardness results of Austrin and Håstad are based on a long code test. A major result of Raghavendra is that for every constraint satisfaction problem one can write a certain SDP relaxation such that the integrality gap of the relaxation is equal to the ratio between soundness and completeness in the best possible long code test that uses predicates from the constraint satisfaction problem. In particular, in Section 7.7 of his thesis, Prasad shows that if you have a long code test with soundness $s$ and completeness $c$ for a constraint satisfaction problem, then for every $\epsilon > 0$ there is an instance of the problem in which no solution satisfies more than an $s + \epsilon$ fraction of constraints, but there is a feasible SDP solution whose cost is at least a $c - \epsilon$ fraction of the number of constraints. The SDP relaxation of Charikar and Wirth is the same as the one studied by Prasad. This means that if you prove, via a long code test, that a certain problem is approximation resistant, then you also show that the SDP relaxation of Charikar and Wirth cannot be used to break approximation resistance.
The NP paper adopts a technique introduced by Siu On Chan to prove inapproximability results by starting from a version of the PCP theorem and then applying a "hardness amplification" reduction. Tulsiani proves that if one proves a hardness-of-approximation result via a "local" reduction from Max 3LIN, then the hardness-of-approximation result is matched by an integrality gap for Lasserre SDP relaxations up to a super-constant number of rounds. The technical sense in which the reduction has to be "local" is as follows. A reduction from Max 3LIN (the same holds for other problems, but we focus on starting from Max 3LIN for concreteness) to another constraint satisfaction problem has two parameters: a "completeness" parameter $c$ and a "soundness" parameter $s$, and its properties are that:

- (Completeness Condition) the reduction maps instances of 3LIN in which the optimum is $\geq 1 - \epsilon$ to instances of the target problem in which the optimum is at least $c$;
- (Soundness Condition) the reduction maps instances of 3LIN in which the optimum is $\leq \frac 12 + \epsilon$ to instances of the target problem in which the optimum is at most $s$.
Since we know that it’s NP-hard to distinguish Max 3LIN instances in which the optimum is from instances in which the optimum is
, such a reduction shows that, in the target problem, it is NP-hard to distinguish instances in which the optimum is
from instances in which the optimum is
. The locality condition studied by Tulsiani is that the Completeness Condition is established by describing a mapping from solutions satisfying a
fractions of the Max 3LIN constraints to solutions satisfying a
fraction of the target problem constraints, and the assignment to each variable of the target problem can be computed by looking at a sublinear (in the size of the Max 3LIN instance) number of Max 3LIN variables. Reductions that follows the Chan methodology are local in the above sense. This means that if one proves that a problem is approximation-resistant using the Chan methodology starting from the PCP theorem, then one has a local reduction from Max 3LIN to the problem with completeness
and soundness
, where, as before,
is the fraction of constraints of the target problem satisfied by a random assignment. In turn, this implies that not just the Charikar-Wirth relaxation, but that, for all relaxations obtained in a constant number of rounds of Lasserre relaxations, there are instances of the target problem that have optimum
and SDP optimum
, so that the approximation resistance cannot be broken using such SDP relaxations.
More than what you may want to know about Grothendieck’s inequality
The Bulletin of the AMS has an extensive 87-page survey article on Grothendieck’s inequality in the theory of Banach spaces, and on how it shows up in several contexts, including the use of semidefinite programming to approximate graph partitioning problems.
CS359G Lecture 11: ARV
In which we introduce the Arora-Rao-Vazirani relaxation of sparsest cut, and discuss why it is solvable in polynomial time.