Sufficiently many to start a soccer team.
Some constraint satisfaction problems are approximation resistant, in the sense that, unless P=NP, there is no polynomial time algorithm that achieves a better approximation ratio than the trivial algorithm that picks a random assignment. For example, a random assignment satisfies (on average) a $\frac 78$ fraction of the clauses of a given Max 3SAT instance, and, for every $\epsilon > 0$, it is NP-hard to achieve approximation $\frac 78 + \epsilon$. Max 3LIN is the variant of Max 3SAT in which every constraint is a XOR instead of an OR of variables; it is another example of an approximation resistant problem, because a random assignment satisfies (on average) half of the constraints, and approximation $\frac 12 + \epsilon$ is NP-hard for every $\epsilon > 0$. (These, and more, hardness of approximation results were proved by Håstad in 1997, in a paper with a notably understated title.)
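To spell out where the two trivial bounds come from (a quick check, not part of the original argument): a uniformly random assignment misses a 3SAT clause only when all three literals are false, and it satisfies a 3XOR equation for exactly one of the two possible values of any one of its variables, so

$$ \Pr[\mbox{3SAT clause satisfied}] = 1 - \frac 18 = \frac 78, \ \ \ \ \ \Pr[\mbox{3LIN equation satisfied}] = \frac 12 . $$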
In 2000, Håstad proved that if we restrict constraint satisfaction problems to instances in which every variable occurs in (at most) a fixed constant number $D$ of constraints, then the problem is never approximation resistant. If we have a constraint satisfaction problem in which each constraint is satisfied with probability $p$ by a random assignment, and each variable appears in at most $D$ constraints, then there is a simple polynomial time algorithm that achieves an approximation ratio $p + \Omega\left( \frac 1D \right)$. The following year, I showed that if we have a constraint satisfaction problem that is NP-hard to approximate within a factor of $r$, then it is also NP-hard to approximate within a factor $r + \frac c{\sqrt D}$, where $c$ is a constant (whose value depends on the specific constraint satisfaction problem), when restricted to instances in which every variable occurs at most $D$ times.
Thus, for example, if we restrict to instances in which every variable occurs in at most $D$ constraints, Max 3SAT can be approximated within a factor $\frac 78 + \frac{c_1}{D}$ but not $\frac 78 + \frac{c_2}{\sqrt D}$, and Max 3LIN can be approximated within a factor $\frac 12 + \frac{c_1}{D}$ but not $\frac 12 + \frac{c_2}{\sqrt D}$ in polynomial time, unless $P=NP$, where $c_1, c_2$ are constants.
Last Fall, Prasad Raghavendra and I worked for a while on the problem of bridging this gap. The difficulty with Max 3SAT is that there are instances derived from Max Cut such that every variable occurs in at most $D$ clauses, there is no "trivial contradiction" (such as 8 clauses over the same 3 variables, which have a fixed contribution to the cost function and can be eliminated without loss of generality), and every assignment satisfies at most a $\frac 78 + O\left( \frac 1{\sqrt D} \right)$ fraction of clauses. If we want an approximation ratio $\frac 78 + \Omega\left( \frac 1{\sqrt D} \right)$, we need our algorithm to certify that such instances are unsatisfiable. It is probably possible to show that there are simple LP or SDP relaxations of Max 3SAT such that a polynomial time algorithm can find assignments that satisfy a number of clauses which is at least a $\frac 78 + \Omega\left( \frac 1{\sqrt D} \right)$ fraction of the optimum of the relaxation, but we could not find any way to reason about it, and we gave up. Also, we wrongly assumed that there was the same issue with Max 3LIN.
Meanwhile, Farhi, Goldstone and Gutmann, who had successfully developed a quantum algorithm to approximate Max Cut on bounded degree graphs, were looking for another problem to tackle, and asked Madhu Sudan what was known about NP-hard and Unique Games-hard problems on bounded degree instances. They were particularly interested in Max 3SAT in bounded-degree instances. Madhu referred them to me, Sam Gutmann happened to be in the Bay Area, and so we met in November and I pointed them to the known literature and suggested that they should try Max 3LIN instead.
A month later, I heard back from them, and they had a $\frac 12 + \Omega\left( \frac 1{D^{3/4}} \right)$ approximation algorithm for Max 3LIN. That sounded amazing, so I went looking into the paper for the section in which they discuss their upper bound technique, and there is none. They show that, for every instance that does not have trivial contradictions (meaning two constraints that are the negation of each other), there is an assignment that satisfies a $\frac 12 + \Omega\left( \frac 1{D^{3/4}} \right)$ fraction of constraints, and they describe a distribution that, on average, satisfies at least as many. The distribution is samplable by a quantum computer, so the approximation, in their paper, is achieved by a quantum algorithm.
After realizing that we had been wrong all along on the need for non-trivial upper bounds for Max 3LIN, Prasad and I tried to find a way to replicate the result of Farhi et al. with a classical algorithm, and we found a way to beat the random assignment by $\Omega\left( \frac 1{\sqrt D} \right)$ in instances of constraint satisfaction problems "without triangles" (a result of this form is also in the paper of Farhi et al.), and then to satisfy a $\frac 12 + \Omega\left( \frac 1{\sqrt D} \right)$ fraction of constraints in all Max 3LIN instances.
The day before submitting our paper to ICALP (from which it would have been rejected without consideration anyways), I saw a comment by Boaz Barak on Scott Aaronson's blog announcing the same results, so we got in contact with Boaz, who welcomed us to the club of people who had, meanwhile, gotten those results, which also included Ankur Moitra, Ryan O'Donnell, Oded Regev, David Steurer, Aravindan Vijayaraghavan, David Witmer, and John Wright. Later, Johan Håstad also discovered the same results. If you kept count, that's eleven theoreticians.
The paper is now online (with only 10 authors; Johan may write a separate note); we show that a $\frac 12 + \Omega\left( \frac 1{\sqrt D} \right)$ fraction of constraints can be satisfied in all Max kLIN instances, with odd $k$, and an $\Omega\left( \frac 1{\sqrt D} \right)$ advantage over the random assignment can be achieved in all "triangle-free" instances of constraint satisfaction problems. It remains an open problem to improve Håstad's $\frac 78 + \Omega\left( \frac 1D \right)$ approximation for Max 3SAT.
The argument for Max 3LIN is very simple. Khot and Naor prove that, given a Max 3LIN instance $I$, one can construct a bipartite Max 3LIN instance $I'$ such that an assignment satisfying a $\frac 12 + \epsilon$ fraction of constraints in $I'$ can be easily converted into an assignment satisfying a $\frac 12 + \Omega(\epsilon)$ fraction of constraints of $I$; furthermore, if every variable occurs in at most $D$ constraints of $I$, then every variable occurs in at most $2D$ constraints of $I'$.
An instance is bipartite if we can partition the set of variables into two subsets $X$ and $Y$, such that each constraint involves two variables from $Y$ and one variable from $X$. The reduction creates two new variables $x_i$ and $y_i$ for each variable $v_i$ of $I$; every constraint $v_i \oplus v_j \oplus v_k = b$ of $I$ is replaced by the three constraints

$$ x_i \oplus y_j \oplus y_k = b, \ \ \ \ \ y_i \oplus x_j \oplus y_k = b, \ \ \ \ \ y_i \oplus y_j \oplus x_k = b $$
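In code, the reduction is only a few lines. The sketch below uses a hypothetical representation of a 3LIN instance as a list of tuples (i, j, k, b) meaning $v_i \oplus v_j \oplus v_k = b$; it is only an illustration of the construction above, not code from the paper.

def make_bipartite(constraints):
    # constraints: list of tuples (i, j, k, b) meaning v_i + v_j + v_k = b (mod 2);
    # output: list of tuples ((i, j, k), b) meaning x_i + y_j + y_k = b (mod 2)
    bipartite = []
    for (i, j, k, b) in constraints:
        bipartite.append(((i, j, k), b))   # x_i + y_j + y_k = b
        bipartite.append(((j, i, k), b))   # x_j + y_i + y_k = b
        bipartite.append(((k, i, j), b))   # x_k + y_i + y_j = b
    return bipartite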
Given an assignment to the $x$ and $y$ variables that satisfies a $\frac 12 + \epsilon$ fraction of the constraints of $I'$, Khot and Naor show that either the assignment $x$, or the assignment $y$, or an assignment obtained by choosing each $v_i$ to be $x_i$ with probability $\frac 12$ or $y_i$ with probability $\frac 12$, independently for each $i$, satisfies at least a $\frac 12 + \Omega(\epsilon)$ fraction of constraints of $I$.
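A direct way to implement this conversion (again a sketch, with hypothetical helper names, in the same list-of-tuples representation as above): evaluate the assignment $x$, the assignment $y$, and a few independent random mixes of the two on the original instance $I$, and keep the best; by the Khot-Naor argument, the best of these candidates has advantage $\Omega(\epsilon)$ in expectation.

import random

def value(constraints, a):
    # fraction of constraints v_i + v_j + v_k = b (mod 2) satisfied by the
    # assignment a, given as a dictionary from variable index to 0/1
    satisfied = sum(1 for (i, j, k, b) in constraints
                    if (a[i] ^ a[j] ^ a[k]) == b)
    return satisfied / len(constraints)

def convert(constraints, x, y, trials=10):
    # candidates: the x-side assignment, the y-side assignment, and a few
    # assignments that pick x_i or y_i independently with probability 1/2 each
    candidates = [dict(x), dict(y)]
    for _ in range(trials):
        candidates.append({v: random.choice((x[v], y[v])) for v in x})
    return max(candidates, key=lambda a: value(constraints, a))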
It remains to show that, given a bipartite instance of Max 3LIN in which every variable occurs in at most $D$ constraints (and which does not contain two equations such that one is the negation of the other), we can find an assignment that satisfies a $\frac 12 + \Omega\left( \frac 1{\sqrt D} \right)$ fraction of constraints.
The idea is to first pick the $Y$ variables at random, and then to pick the $X$ variables greedily given the choice of the $Y$ variables.
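Concretely, the algorithm is just the following (a sketch in the same hypothetical representation as above, where a bipartite constraint is ((i, j, k), b) meaning $x_i \oplus y_j \oplus y_k = b$): fix the $y$'s uniformly at random, then set each $x_i$ by majority vote over the constraints that contain it.

import random
from collections import defaultdict

def random_then_greedy(bipartite_constraints, variables):
    # assign every y variable uniformly at random
    y = {v: random.randint(0, 1) for v in variables}
    # each constraint x_i + y_j + y_k = b (mod 2) now forces a value on x_i;
    # count the forced values and set each x_i by majority
    votes = defaultdict(int)            # positive total means "x_i = 1" wins
    for ((i, j, k), b) in bipartite_constraints:
        forced = b ^ y[j] ^ y[k]
        votes[i] += 1 if forced == 1 else -1
    x = {v: (1 if votes[v] > 0 else 0) for v in variables}
    return x, y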
When we pick the $Y$ variables at random, the instance reduces to a series of constraints of the form $x_i = 0$ or $x_i = 1$. Each variable $x_i$ belongs to (at most, but let's assume exactly, which is actually the worst case for the analysis) $D$ such constraints; on average, half of those constraints will be $x_i = 0$ and half will be $x_i = 1$. If the fixings of the clauses containing $x_i$ were mutually independent (which would be the case in "triangle-free" instances), then we would expect the difference between the number of 0s and the number of 1s to be about $\sqrt D$, so the greedy fixing has an $\Omega\left( \frac 1{\sqrt D} \right)$ advantage over the random assignment.
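To see how the per-variable gain adds up (a back-of-the-envelope check under the assumption that each $x_i$ appears in exactly $D$ constraints, not a calculation quoted from the paper): the expected absolute difference between the number of 0-fixings and 1-fixings is $\Theta(\sqrt D)$, the greedy choice of $x_i$ satisfies half of its $D$ constraints plus half of that difference, and each constraint contains exactly one $X$ variable, so

$$ \mathop{\mathbb E}[\mbox{satisfied constraints}] \geq \frac m2 + \sum_i \Omega(\sqrt D) = \frac m2 + \Omega( n \sqrt D) = \left( \frac 12 + \Omega\left( \frac 1{\sqrt D} \right) \right) \cdot m $$

where $n = |X|$ and $m = nD$ is the total number of constraints.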
In general instances, although we do not have mutual independence, we do have pairwise independence and "almost four-wise independence." Fix a variable $x_i$, and let us call $E_i$ the set of pairs $\{ j,k \}$ such that the constraint $x_i \oplus y_j \oplus y_k = b_{i,j,k}$ is part of the instance, for some $b_{i,j,k} \in \{ 0,1 \}$, and let us call $Z_{j,k}$ the random variable which is $+1$ if $b_{i,j,k} \oplus y_j \oplus y_k = 1$ (that is, if the constraint is fixed to $x_i = 1$) and $-1$ otherwise, for a random choice of the $Y$ variables. We want to argue that, with constant probability, $\left| \sum_{\{ j,k \} \in E_i} Z_{j,k} \right| \geq \Omega(\sqrt D)$.
First, we see that the $Z_{j,k}$ are unbiased, and they are pairwise independent, so $\mathop{\mathbb E} \left[ \left( \sum_{\{ j,k \} \in E_i} Z_{j,k} \right)^2 \right] = D$. The fourth moment of $\sum_{\{ j,k \} \in E_i} Z_{j,k}$ is at most $3D^2$ plus (a constant times) the number of 4-cycles in the graph that has the $y_j$ as vertices and the edges in $E_i$. Now, $E_i$ contains $D$ edges, a four-cycle is completely described by the first and third edge of the cycle, so the fourth moment is $O(D^2)$. Finally, it is a standard fact that if we have a sum of $D$ unbiased $\pm 1$ random variables, and the second moment of their sum is $D$ and the fourth moment of their sum is $O(D^2)$, then the absolute value of the sum is, on average (and with constant probability), $\Omega(\sqrt D)$.
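For completeness, here is one way to get the standard fact, via Hölder's inequality (a derivation I am adding, not one quoted from the paper). Writing $S = \sum_{\{ j,k \} \in E_i} Z_{j,k}$, so that $\mathop{\mathbb E}[S^2] = D$ and $\mathop{\mathbb E}[S^4] = O(D^2)$,

$$ D = \mathop{\mathbb E}[S^2] = \mathop{\mathbb E}\left[ |S|^{2/3} \cdot |S|^{4/3} \right] \leq \left( \mathop{\mathbb E} |S| \right)^{2/3} \cdot \left( \mathop{\mathbb E} [S^4] \right)^{1/3} \leq \left( \mathop{\mathbb E} |S| \right)^{2/3} \cdot O\left( D^{2/3} \right) $$

so $\mathop{\mathbb E}|S| \geq \Omega(\sqrt D)$; since also $\mathop{\mathbb E}[S^2] = D$, a Paley-Zygmund style argument then shows that $|S| \geq \Omega(\sqrt D)$ with constant probability.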
The algorithm for general CSPs on triangle-free instances is similarly based on the idea of randomly fixing some variables and then greedily fixing the remaining variables. Without the reduction to the "bipartite" case, which does not hold for problems like Max 3SAT, it is more technically involved to argue the advantage over the random assignment.
Is there a polynomial time algorithm that achieves a $\frac 78 + \Omega\left( \frac 1{\sqrt D} \right)$ approximation ratio for Max 3SAT in instances such that each variable occurs at most $D$ times? This remains an open question.
Reblogged this on Quantized Complex Crypto and commented:
probably the longest author list I ever saw in theoretical computer science papers…
A followup question: So far we have approximation bounds on the general MAX-3LIN problem, and on the MAX-3LIN problem where each variable occurs a fixed number of times.
Do we have any bounds on MAX-3LIN in $n$ variables when each variable can occur $O(n^c)$ times for fixed $c>0$?
This comes up semi-naturally: Imagine an instance of MAX-3LIN with $n$ variables and $m$ clauses. The general case might have $m ~ O(n^3)$, while the restricted case is a special subset of the $m ~ O(n)$ instances. This problem would be in the intermediate range.
@Craig: this is a good question. Both the NP-hardness result and the algorithm for Max 3LIN work when $D$ is not constant. So if you are looking at instances in which every variable occurs $D$ times, with $D = n^c$, then the algorithm satisfies at least a $\frac 12 + \Omega\left( n^{-c/2} \right)$ fraction of clauses, but approximation $\frac 12 + O\left( n^{-c/2} \right)$ is NP-hard.
The NP-hardness result requires $D$ to be at most $n^{1-\delta}$ for some $\delta > 0$, and this is again tight, because when you have around $n^2$ constraints you can get arbitrarily good approximation in subexponential time.
Hi Luca, thanks for the great post! I'm confused about the hardness result for super-constant $D$ that you mentioned in the comments. If I understand correctly, you said that for Max 3LIN getting $\frac 12 + \frac 1{\sqrt D}$ is NP-hard. I'm assuming that this is hardness for distinguishing $1 - \epsilon$ satisfiable vs $\frac 12 + \frac 1{\sqrt D}$ satisfiable 3LIN instances. However, the Khot-Naor algorithm (that you wrote about a week ago) is able to get $\frac 12 + \Omega\left( \epsilon \sqrt{ \frac{\log n}{n} } \right)$ on instances whose optimum is $\frac 12 + \epsilon$. Am I misunderstanding something here? Thank you!

I think I got confused in the answer to the previous question. The reduction to the bounded-degree case, for super-constant $D$, works as follows: given an instance with n variables in which it is NP-hard to distinguish $1 - \epsilon$ satisfiable instances from $\frac 12 + \epsilon$ satisfiable instances, you get an instance with ${\rm poly}(n)$ variables, each one occurring at most $D$ times, in which it is NP-hard to distinguish $1 - O(\epsilon)$ satisfiable instances from $\frac 12 + \epsilon + O\left( \frac 1{\sqrt D} \right)$ satisfiable instances.

Even ignoring the difference between $n$ and the number of variables of the new instance, the fastest growing $D$ for which you can prove the hardness claimed in the post depends on what is the smallest $\epsilon$ for which you have hardness in the general case. This means that the hardness in the bounded degree case will always be less tight than whatever result you can prove in the general case, and I believe that the best $\epsilon$ we can prove in the general case is still logarithmic and definitely of the form $\frac 1{(\log n)^{O(1)}}$.

By the way, we do not even have a plausible guess for what should be the smallest $\epsilon$ for which the problem is hard in the general case. The hardest instances that we know are random instances with about $n^{1.5}$ constraints, where it seems hard to solve the problem with $\epsilon$ of the order of $n^{-1/4}$.
Got it. Thank you!