Two Announced Breakthroughs

I was at a conference last week, at which I heard two pieces of mathematical gossip.

One was that Arora, Barak and Steurer have developed an algorithm that, given a Unique Games instance in which a 1-\epsilon fraction of constraints are satisfiable, finds an assignment satisfying a constant fraction of constraints in time 2^{n^{poly(\epsilon)}}. The new result is announced in an earlier (public) paper by Arora, Impagliazzo, Matthews and Steurer, which presents a slower algorithm running in time 2^{\alpha n}, where \alpha = exp(-1/\epsilon).
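To get a sense of the difference between the two running times, here is a quick numerical comparison. The announcement does not specify the polynomial in poly(\epsilon), so purely as an illustrative assumption the sketch below plugs in \epsilon^3; any fixed power paints the same picture.

    from math import exp

    # Exponents of the two running times on a sample instance size.
    # poly(eps) = eps^3 is an illustrative assumption, not from the papers.
    n, eps = 10 ** 6, 0.2
    new_exponent = n ** (eps ** 3)      # ABS: 2^{n^{poly(eps)}}
    old_exponent = exp(-1 / eps) * n    # AIMS: 2^{alpha n}, alpha = exp(-1/eps)
    print(f"new: 2^{new_exponent:.4g}    old: 2^{old_exponent:.4g}")

For any fixed \epsilon the new exponent is of the form n^{\delta} with \delta < 1, so the running time is subexponential in n, whereas 2^{\alpha n} is exponential, just with a small constant \alpha in the exponent.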

I suppose that the next targets are now the approximation problems for which the only known hardness is via unique games. Is there a subexponential algorithm achieving \log\log n approximation for sparsest cut or 1.99 approximation for Vertex Cover?

The other news is on the Polynomial Ruzsa-Freiman conjecture, one of the main open problems in additive combinatorics. Apologies in advance to readers if I get some details wrong. In the special case of \mathbb{F}_2, the conjecture is that if F: \mathbb{F}_2^n \rightarrow \mathbb{F}_2^m is any function such that

\mathbb{P}_{x,y,z} [ F(x) + F(y)  + F(z)= F(x+y+z) ] \geq \epsilon

then there is a matrix M and a vector b such that

\mathbb{P}_{x} [ F(x) = Mx + b ] \geq \epsilon^{O(1)}

where the probability in the conclusion is independent of n,m and is polynomial in \epsilon. Various proofs were known achieving a bound of exp(-poly(1/\epsilon)). The first proof, due to Samorodnitsky, achieves, I believe, a bound of exp(-O(1/\epsilon^2)), while the results from this paper should imply a bound of exp(-\tilde O(1/\epsilon)), where we use the notation \tilde O(n) := O(n \log n).
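For concreteness, here is a toy numerical experiment; it is entirely my own illustration (the dimensions, the base linear map, and the corruption pattern are all made up), estimating the probability in the hypothesis and brute-forcing the best affine map in the conclusion for m = 1.

    import random

    n = 4  # tiny dimension, so that all 2^{n+1} affine maps can be enumerated

    # Start from the linear map x -> <a, x> over F_2 and corrupt a few entries,
    # so that the hypothesis of the conjecture holds with some eps < 1.
    a = 0b1011
    F = {x: bin(a & x).count("1") % 2 for x in range(2 ** n)}
    for x in random.sample(range(2 ** n), 3):
        F[x] ^= 1

    # Estimate Pr_{x,y,z}[F(x) + F(y) + F(z) = F(x+y+z)]; over F_2, + is XOR.
    trials = 100_000
    hits = 0
    for _ in range(trials):
        x, y, z = (random.randrange(2 ** n) for _ in range(3))
        hits += (F[x] ^ F[y] ^ F[z]) == F[x ^ y ^ z]
    print("hypothesis holds with eps ~", hits / trials)

    # Brute-force the best affine map x -> Mx + b; with m = 1, M is a row vector.
    best = max(sum(F[x] == (bin(M & x).count("1") % 2) ^ b for x in range(2 ** n))
               for M in range(2 ** n) for b in (0, 1))
    print("best affine map agrees with F on", best, "of", 2 ** n, "inputs")

With 3 of the 16 table entries corrupted, the test passes with probability well below 1, while the best affine map still agrees with F on at least 13 of the 16 inputs.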

At the conference, Ben Green announced a result of Schoen implying a subexponential bound of 1/2^{2^{\sqrt{\log 1/\epsilon}}}.

The proof begins with the standard step of applying a theorem of Ruzsa to find a subset A \subseteq \mathbb{F}_2^n such that |A| \geq \epsilon^{O(1)} 2^n and such that F is a “Freiman 16-homomorphism” on A, meaning that for every 32 elements a_1,\ldots,a_{32} of A such that

a_1 + \cdots + a_{16} = a_{17} + \cdots + a_{32}

we have

F(a_1) + \cdots + F(a_{16}) = F(a_{17}) + \cdots + F(a_{32})

The choice of 16 is just whatever makes the rest of the proof work; the theorem of Ruzsa works for any arbitrarily large constant. Then we consider the set B := 8A of all the elements that can be written as a_1 + \cdots + a_8 with a_i \in A, and we define a function F' on 8A by setting

F'(a_1 + \cdots + a_8) := F(a_1) + \cdots + F(a_8)

which is a well-posed definition because F is a Freiman 16-homomorphism. (For the well-posedness alone, it would have been sufficient for F to be an 8-homomorphism.) Note that for every b_1,b_2,b_3,b_4 \in B such that b_1 + b_2 = b_3 + b_4 we have

F'(b_1) + F'(b_2) = F'(b_3) + F'(b_4)
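As a sanity check on this construction, here is a toy computation. It is my own sketch, and to keep it self-contained I take F to be genuinely linear on A, so that the Freiman homomorphism conditions hold automatically; the code builds F' on 8A one summand at a time and verifies that no two representations of the same element ever disagree.

    import random

    n = 6
    # Toy data (assumptions): A is a random subset of F_2^6, and F is the
    # restriction to A of the linear map x -> <a_vec, x>.
    A = random.sample(range(2 ** n), 12)
    a_vec = 0b101101
    F = {x: bin(a_vec & x).count("1") % 2 for x in A}

    # Build F' on kA for k = 1, ..., 8 via F'(s + a) := F'(s) + F(a),
    # checking that different representations never assign different values.
    Fp = dict(F)
    for _ in range(7):  # seven more summands take 1A up to 8A
        new = {}
        for s, v in Fp.items():
            for a in A:
                t, w = s ^ a, v ^ F[a]  # addition in F_2^n is XOR
                if new.setdefault(t, w) != w:
                    raise ValueError("F' is not well defined on this sumset")
        Fp = new

    print("8A has", len(Fp), "of", 2 ** n, "elements; F' is well defined on it")

With a mere Freiman 8-homomorphism in place of a linear map, the same consistency check would still succeed, which is the content of the well-posedness claim above.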

Now the result of Schoen implies that if A is a subset of \mathbb{F}_2^n of size 2^n/K, then there is a subspace V of dimension n - 2^{O(\sqrt{\log K})} such that V is entirely contained in 8A.

Previously, it was known how to get a subspace of dimension n - \tilde O(K), by adapting a technique of Chang (see this survey paper).
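To get a feel for the scale of the improvement, here is a quick comparison of the two codimension guarantees; all hidden constants (and the log factor in \tilde O) are set to 1, which is of course just an illustrative assumption.

    from math import log2, sqrt

    # Codimension of the subspace found inside 8A, as the density 1/K degrades.
    for K in (2 ** 10, 2 ** 20, 2 ** 40):
        chang = K * log2(K)          # previous bound: codimension ~ K log K
        schoen = 2 ** sqrt(log2(K))  # Schoen: codimension ~ 2^{sqrt(log K)}
        print(f"K = 2^{int(log2(K))}: Chang-type ~ {chang:.3g}, Schoen ~ {schoen:.3g}")

Since 2^{O(\sqrt{\log K})} = K^{o(1)}, the codimension loss drops from essentially linear in K to smaller than any fixed power of K.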

Note that F' is now a linear map on V, and that it agrees with F on V. (I cannot reconstruct how this last step follows, however; maybe the claim is that F agrees with F' on a poly(\epsilon) fraction of the elements of V?) It now just remains to extend, arbitrarily, F' to a linear map over the entire space.
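That final extension step is elementary linear algebra; here is a small sketch of it over \mathbb{F}_2 (my own, with vectors packed into machine integers and m = 1 for brevity) that completes a basis of V implicitly and sends the new directions to 0.

    def extend_linear_map(pairs):
        # pairs: (v, value) with v an n-bit int and value = F'(v) in {0, 1};
        # the v's are assumed to span V and F' is assumed linear on V.
        pivots = {}  # leading-bit position -> (reduced spanning vector, F' value)
        for v, val in pairs:
            while v:
                b = v.bit_length() - 1
                if b not in pivots:
                    pivots[b] = (v, val)  # new pivot row for V
                    break
                r, rv = pivots[b]
                v, val = v ^ r, val ^ rv  # eliminate the leading bit of v
            else:
                if val:  # v reduced to 0 but the value did not: not linear
                    raise ValueError("the given values are not linear on V")

        def F_ext(x):
            # Linear on all of F_2^n: agrees with F' on V, and the completed
            # basis directions (non-pivot positions) are sent to 0.
            out = 0
            while x:
                b = x.bit_length() - 1
                if b in pivots:
                    r, rv = pivots[b]
                    x, out = x ^ r, out ^ rv
                else:
                    x ^= 1 << b  # direction outside V contributes nothing
            return out

        return F_ext

    # Toy usage: V is spanned by 1100 and 0011; the third pair is their sum.
    F_ext = extend_linear_map([(0b1100, 1), (0b0011, 0), (0b1111, 1)])
    print(F_ext(0b1100), F_ext(0b0011), F_ext(0b1111))  # 1 0 1, as prescribed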

10 thoughts on “Two Announced Breakthroughs”

  1. Can you please expound upon why the 2^{n^{eps}} algorithm of ABS is a breakthrough?

    Are there no other NP-hard problems that have an algorithm with such a running time?

    Does this mean that distinguishing between 1-eps and eps instances of UG cannot be NP-hard?

    Thanks!!

  2. If one NP-complete problem had algorithms of running time 2^{n^{o(1)}}, then all problems in NP would have such algorithms, and this seems very unlikely. Indeed, it seems entirely plausible that 3SAT does not have algorithms running in time 2^{o(n)}.
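    To see the closure under reductions: if problem A reduces to problem B in polynomial time, then instances of A of size n map to instances of B of size at most n^k for some fixed k, so a 2^{m^{o(1)}}-time algorithm for B solves A in time

    poly(n) + 2^{(n^k)^{o(1)}} = 2^{n^{o(1)}}

    because (n^k)^{o(1)} = n^{k \cdot o(1)} = n^{o(1)}.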

    It is possible that the Unique Games conjecture, as originally stated, is true, but now it seems impossible that we could have a polynomial-time reduction establishing NP-hardness of unique games with \epsilon = o(1). This is notable because such reductions exist for 2-query PCPs (i.e. Label Cover), so a hypothetical proof of the Unique Games conjecture would have to be fundamentally different from known 2-query PCP constructions.

  3. “If one NP-complete problem had algorithms of running time 2^{n^{o(1)}}, then all problems in NP would have such algorithms,”

    What is the name of this theorem? Simple NP-hardness reductions do not seem to imply this, since the size of “n” (e.g. the number of vertices) may be increased or decreased polynomially in a reduction. Thus, a 2^{n^{o(1)}} algorithm for problem A wouldn’t directly imply one for problem B via a polytime reduction if problem A has, say, n^2 vertices after a reduction from problem B to problem A.

  4. “… and this seems very unlikely”

    Why do you say this? For all we know, couldn’t they all have n^{O(1)}-time algorithms?

  5. If eps is a constant such that 0 < eps < 1, then 2^{n^{poly(eps)}} is not 2^{n^{o(1)}} since poly(eps) = Theta(1).

    So could there still be some constant eps such that it is "not unlikely" that UG is NP-hard to distinguish between cases in which 1-eps fraction are satisfiable?

    (Sorry, couldn’t get the formulas in my previous post to “parse”.)

  6. It is still possible, as I said in the second comment, that the original statement of the Unique Games conjecture is true, that is, that for every \epsilon > 0 there is a polynomial time reduction from an NP-complete problem, say, 3SAT, to unique games producing a gap of 1-\epsilon versus \epsilon.

    The reduction, however, when it starts from instances of 3SAT of size N, must produce instances of unique games of size N^{poly(1/\epsilon)}, or else we would have subexponential algorithms for all of NP.
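    For instance, if the reduction mapped 3SAT instances of size N to unique games instances of size n = N^c for some fixed constant c (a hypothetical exponent, just to make the arithmetic explicit), then composing it with the new algorithm would solve 3SAT in time

    2^{n^{poly(\epsilon)}} = 2^{N^{c \cdot poly(\epsilon)}}

    which is subexponential in N as soon as \epsilon is small enough that c \cdot poly(\epsilon) < 1.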

    What now seems unlikely is a polynomial time reduction from an NP-complete problem to Unique Games producing a gap of 1-o(1) versus o(1), the sort of stronger assumption needed to prove superconstant inapproximability results for sparsest cut and other problems.

    Usually, in PCP constructions, if one can achieve arbitrarily small constant soundness with polynomial time reductions, then one can also obtain sub-constant soundness in polynomial time with PCPs of the same kind. If the Unique Games conjecture is true and NP does not have sub-exponential algorithms, this step from arbitrarily small \epsilon to \epsilon = o(1) is impossible for unique games.
