How many theoreticians does it take to approximate Max 3LIN?

Sufficiently many to start a soccer team.

Some constraint satisfaction problems are approximation resistant, in the sense that, unless P=NP, there is no polynomial time algorithm that achieves a better approximation ratio than the trivial algorithm that picks a random assignment. For example, a random assignment satisfies (on average) a {\frac 78} fraction of the clauses of a given Max 3SAT instance, and, for every {\epsilon >0}, it is NP-hard to achieve an approximation ratio of {\frac 78 + \epsilon}. Max 3LIN is the variant of Max 3SAT in which every constraint is an XOR of variables instead of an OR; it is another example of an approximation resistant problem, because a random assignment satisfies (on average) half of the constraints, and an approximation ratio of {\frac 12 + \epsilon} is NP-hard to achieve for every {\epsilon >0}. (These, and more, hardness of approximation results were proved by Håstad in 1997, in a paper with a notably understated title.)
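As a quick sanity check of these two baselines, here is a minimal enumeration in Python (my own illustration, not anything from the papers):

    from itertools import product

    # A 3SAT clause, say (z1 OR z2 OR z3), is falsified by exactly one of
    # the 8 assignments to its three variables, so a uniformly random
    # assignment satisfies it with probability 7/8.
    sat = sum(any(a) for a in product([False, True], repeat=3))
    print(sat / 8)  # 0.875

    # A 3LIN equation, say z1 XOR z2 XOR z3 = 0, holds for exactly half of
    # the 8 assignments, so a random assignment satisfies it with
    # probability 1/2.
    lin = sum((a ^ b ^ c) == 0 for a, b, c in product([0, 1], repeat=3))
    print(lin / 8)  # 0.5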

In 2000, Håstad proved that if we restrict constraint satisfaction problems to instances in which every variable occurs in (at most) a fixed constant number of constraints, then the problem is never approximation resistant. If we have a constraint satisfaction problem in which each constraint is satisfied with probability {p} by a random assignment, and each variable appears in at most {D} constraints, then there is a simple polynomial time algorithm that achieves an approximation ratio of {p + \Omega\left( \frac 1 D \right)}. The following year, I showed that if we have a constraint satisfaction problem that is NP-hard to approximate within a factor of {r}, then it is also NP-hard to approximate within a factor of {r + \frac c{\sqrt D}}, where {c} is a constant (whose value depends on the specific constraint satisfaction problem), when restricted to instances in which every variable occurs at most {D} times.

Thus, for example, if we restrict to instances in which every variable occurs in at most {D} constraints, Max 3SAT can be approximated within a factor {\frac 78 + \frac{c_1}{D}} but not {\frac 78 + \frac{c_2}{\sqrt D}}, and Max 3LIN can be approximated within a factor {\frac 12 + \frac {c_3}{D}} but not {\frac 12 + \frac{c_4}{\sqrt D}} in polynomial time, unless {P=NP}, where {c_1,c_2,c_3,c_4} are constants.

Last Fall, Prasad Raghavendra and I worked for a while on the problem of bridging this gap. The difficulty with Max 3SAT is that there are instances derived from Max Cut such that every variable occurs in at most {D} clauses, there is no “trivial contradiction” (such as 8 clauses over the same 3 variables, which have a fixed contribution to the cost function and can be eliminated without loss of generality), and yet every assignment satisfies at most a {\frac 78 + \frac {O(1)}{D}} fraction of clauses. If we want an approximation ratio of {\frac 78 + \Omega(D^{-1/2})}, we need our algorithm to certify that such instances are far from satisfiable. It is probably possible to show that there are simple LP or SDP relaxations of Max 3SAT such that a polynomial time algorithm can find assignments that satisfy a number of clauses that is at least {\frac 78 + \Omega(D^{-1/2})} times the optimum of the relaxation, but we could not find any way to reason about it, and we gave up. Also, we wrongly assumed that there was the same issue with Max 3LIN.

Meanwhile, Farhi, Goldstone and Gutmann, who had successfully developed a quantum algorithm to approximate Max Cut on bounded degree graphs, were looking for another problem to tackle, and asked Madhu Sudan what was known about NP-hard and Unique Games-hard problems on bounded degree instances. They were particularly interested in Max 3SAT in bounded-degree instances. Madhu referred them to me, Sam Gutmann happened to be in the Bay Area, and so we met in November and I pointed them to the known literature and suggested that they should try Max 3LIN instead.

A month later, I heard back from them, and they had a {\frac 12 + \Omega \left( \frac 1 {D^{3/4}} \right)}-approximation algorithm for Max 3LIN. That sounded amazing, so I went looking in the paper for the section in which they discuss their upper bound technique, and there is none. They show that, for every instance that does not have trivial contradictions (meaning two constraints that are the negation of each other), there is an assignment that satisfies a {\frac 12 + \Omega ( D^{-3/4})} fraction of constraints, and they describe a distribution that, on average, satisfies at least as many. The distribution is samplable by a quantum computer, so the approximation, in their paper, is achieved by a quantum algorithm.

After realizing that we had been wrong all along about the need for non-trivial upper bounds for Max 3LIN, Prasad and I tried to find a way to replicate the result of Farhi et al. with a classical algorithm. We found a way to gain an {\Omega(D^{-1/2})} advantage over the random assignment, first in instances of constraint satisfaction problems “without triangles” (a result of this form is also in the paper of Farhi et al.), and then in all Max 3LIN instances, where this means satisfying a {\frac 12 + \Omega(D^{-1/2})} fraction of constraints.

The day before submitting our paper to ICALP (from which it would have been rejected without consideration anyway), I saw a comment by Boaz Barak on Scott Aaronson’s blog announcing the same results, so we got in contact with Boaz, who welcomed us to the club of people who had, meanwhile, obtained those results, which also included Ankur Moitra, Ryan O’Donnell, Oded Regev, David Steurer, Aravindan Vijayaraghavan, David Witmer, and John Wright. Later, Johan Håstad also discovered the same results. If you kept count, that’s eleven theoreticians.

The paper is now online (with only 10 authors; Johan may post a separate note); we show that a {\frac 12 + \Omega(D^{-1/2})} fraction of constraints can be satisfied in all Max kLIN instances with odd {k}, and an {\Omega(D^{-1/2})} advantage over the random assignment can be achieved in all “triangle-free” instances of constraint satisfaction problems. It remains an open problem to improve Håstad’s {\frac 78 + O(D^{-1})} approximation for Max 3SAT.

The argument for Max 3LIN is very simple. Khot and Naor prove that, given a Max 3LIN instance {I}, one can construct a bipartite Max 3LIN instance {I'} such that an assignment satisfying a {\frac 12 + \epsilon} fraction of constraints in {I'} can be easily converted into an assignment satisfying a {\frac 12 + \Omega(\epsilon)} fraction of constraints of {I}; furthermore, if every variable occurs in at most {D} constraints of {I}, then every variable occurs in at most {2D} constraints of {I'}.

An instance is bipartite if we can partition the set of variables into two subsets {X} and {Y}, such that each constraint involves two variables from {X} and one variable from {Y}. The reduction creates two new variables {x_i} and {y_i} for each variable {z_i} of {I}; every constraint {z_i \oplus z_j \oplus z_k = b} of {I} is replaced by the three constraints

\displaystyle  x_i \oplus x_j \oplus y_k = b

\displaystyle  x_i \oplus y_j \oplus x_k = b

\displaystyle  y_i \oplus x_j \oplus x_k = b

Given an assignment to the {X} and {Y} variables that satisfies a {\frac 12 + \epsilon} fraction of the constraints of {I'}, Khot and Naor show that either the assignment given by the {X} variables, or the one given by the {Y} variables, or the assignment obtained by independently setting each {z_i} to {x_i} or to {y_i} with probability {1/2} each, satisfies at least a {\frac 12 + \Omega(\epsilon)} fraction of the constraints of {I}.
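Here is a minimal sketch of the reduction and of the decoding step in Python (mine, not from the Khot-Naor paper; the names bipartize and decode are made up, and a constraint {z_i \oplus z_j \oplus z_k = b} is encoded as the tuple (i, j, k, b)):

    import random

    def bipartize(constraints):
        # Replace each constraint z_i + z_j + z_k = b with the three
        # bipartite constraints displayed above, over the split variables
        # ('x', i) and ('y', i).
        out = []
        for (i, j, k, b) in constraints:
            out.append((('x', i), ('x', j), ('y', k), b))
            out.append((('x', i), ('y', j), ('x', k), b))
            out.append((('y', i), ('x', j), ('x', k), b))
        return out

    def fraction_satisfied(constraints, assignment):
        return sum((assignment[u] ^ assignment[v] ^ assignment[w]) == b
                   for (u, v, w, b) in constraints) / len(constraints)

    def decode(constraints, xy):
        # Map an assignment xy of the bipartite instance back to the
        # original one: try the all-x assignment, the all-y assignment,
        # and one random mix that independently sets each z_i to x_i or
        # y_i, and keep the best of the three, mirroring the case
        # analysis above (the guarantee for the mix holds in expectation).
        variables = {v for c in constraints for v in c[:3]}
        candidates = [
            {i: xy[('x', i)] for i in variables},
            {i: xy[('y', i)] for i in variables},
            {i: xy[(random.choice('xy'), i)] for i in variables},
        ]
        return max(candidates,
                   key=lambda z: fraction_satisfied(constraints, z))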

It remains to show that, given a bipartite instance of Max 3LIN in which every variable occurs in at most {D} constraints (and which does not contain two equations such that one is the negation of the other), we can find an assignment that satisfies a {\frac 12 + \Omega( D^{-1/2})} fraction of constraints.

The idea is to first pick the {X} variables at random, and then to pick the {Y} variables greedily given the choice of the {X} variables.

When we pick the {X} variables at random, the instance reduces to a series of constraints of the form {y_i = b}. Each variable {y_i} belongs to (at most, but let’s assume exactly, which is actually the worst case for the analysis) {D} such constraints; on average, half of those constraints will be {y_i = 0} and half will be {y_i = 1}. If the right-hand sides of the constraints involving {y_i} were mutually independent (which would be the case in “triangle-free” instances), then we would expect the difference between the number of 0s and the number of 1s to be about {\sqrt D}, so the greedy fixing has an {\Omega(D^{-1/2})} advantage over the random assignment.
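In code, the two-step algorithm might look as follows (again a sketch of my own, assuming the tuple encoding of the previous snippet, and that each bipartite constraint contains exactly one {Y} variable):

    import random

    def random_then_greedy(constraints, x_vars, y_vars):
        # Step 1: assign the X variables uniformly at random.
        z = {x: random.randint(0, 1) for x in x_vars}
        # Step 2: each constraint y_i + x_j + x_k = b now reads as a
        # "vote" asking that y_i = b ^ x_j ^ x_k; set each y_i greedily
        # by majority vote over the constraints that contain it.
        votes = {y: 0 for y in y_vars}
        for (u, v, w, b) in constraints:
            (y,) = [t for t in (u, v, w) if t in votes]
            xj, xk = [t for t in (u, v, w) if t not in votes]
            votes[y] += 1 if (b ^ z[xj] ^ z[xk]) == 1 else -1
        for y in y_vars:
            z[y] = 1 if votes[y] > 0 else 0
        return z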

In general instances, although we do not have mutual independence, we do have pairwise independence and “almost four-wise independence.” Fix a variable {y_i}, and let us call {E} the set of pairs {\{ j,k \}} such that the constraint {y_i \oplus x_j \oplus x_k = b_{j,k}} is part of the instance, for some {b_{j,k}}, and let us call {R_{j,k}} the random variable which is {1} if {x_j \oplus x_k \oplus b_{j,k}=0} and {-1} otherwise, for a random choice of the {X} variables. We want to argue that, with constant probability, {|\sum_{\{j,k\} \in E} R_{j,k}| \geq \Omega(\sqrt D)}.

First, we see that the {R_{j,k}} are unbiased and pairwise independent, so {\mathop{\mathbb E} \left( \sum_{\{j,k\} \in E} R_{j,k}\right)^2 = D}. The fourth moment of {\sum_{\{j,k\} \in E} R_{j,k}} is {3D^2 - 2D} plus the number of 4-cycles in the graph whose vertices are the {x_j} and whose edges are the pairs in {E}. Now, {E} contains {D} edges, and a 4-cycle is completely described by its first and third edge, so the fourth moment is {O(D^2)}. Finally, it is a standard fact that if we have a sum of {D} unbiased {\pm 1} random variables, and the second moment of the sum is {\Omega(D)} and the fourth moment of the sum is {O(D^2)}, then the absolute value of the sum is, on average (and with constant probability), {\Omega(\sqrt D)}.
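For completeness, here is one way to derive the expectation bound in that standard fact (this derivation is my addition; it is an application of Hölder’s inequality, in a form sometimes called Berger’s inequality). Writing {Z = \sum_{\{j,k\} \in E} R_{j,k}}, Hölder gives {\mathop{\mathbb E} Z^2 = \mathop{\mathbb E} |Z|^{2/3} \cdot |Z|^{4/3} \leq (\mathop{\mathbb E} |Z|)^{2/3} (\mathop{\mathbb E} Z^4)^{1/3}}, which rearranges to

\displaystyle  \mathop{\mathbb E} |Z| \geq \frac{\left(\mathop{\mathbb E} Z^2\right)^{3/2}}{\left(\mathop{\mathbb E} Z^4\right)^{1/2}} \geq \frac{D^{3/2}}{O(D)} = \Omega(\sqrt D).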

The algorithm for general CSPs on triangle-free instances is similarly based on the idea of randomly fixing some variables and then greedily fixing the remaining ones. Without the reduction to the “bipartite” case, which does not hold for problems like Max 3SAT, it is technically more involved to argue the advantage over the random assignment.

Is there a polynomial time algorithm that achieves a {\frac 78 + \Omega(D^{-1/2})} approximation ratio for Max 3SAT in instances such that each variable occurs at most {D} times? This remains an open question.

Mystery solved!

This interesting article in today’s NYT tells the story of the fad of Tibetan mastiffs as high-end pets in China.

It is an appealing story about social changes, new riches, inequalities, and so on, but it also, finally, explains why I could never find hot pot in San Francisco that tasted like it does in Beijing:

Instead, earlier this year Nibble and 20 more unlucky mastiffs found themselves stuffed into metal chicken crates and packed onto a truck with 150 other dogs. If not for a band of Beijing animal rights activists who literally threw themselves in front of the truck, Nibble and the rest would have ended up at a slaughterhouse in northeast China where, at roughly $5 a head, they would have been rendered into hot pot ingredients, imitation leather and the lining for winter gloves.

(Emphasis added)

I almost fell for it

This year, the chair of ICALP decided to play an April Fool’s prank three weeks early, and I received the following message:

“Dear author, we regret to inform you that the margins in your submission are too small, and hence we are rejecting it without review”

I was almost fooled. In my defense, the second time I applied for a position in Italy, the hiring committee judged all my publications to be non-existent, because the (multiple) copies I had sent them had not been authenticated by a notary. So I am trained not to consider it too strange that a paper might be evaluated based on the width of its margins (or the stamps on its pages) rather than on the content of its theorems.

Paul Erdős’s 102-ennial

Paul Erdős would be 102 years old this year, and in celebration of this the Notices of the AMS have published a two-part series of essays on his life and his work: [part 1] and [part 2].

Of particular interest to me is the story of the problem of finding large gaps between primes; recently Maynard, Ford, Green, Konyagin, and Tao solved an Erdős $10,000 question in this direction. It is probably the Erdős open question with the highest associated reward ever solved (I don’t know where to look up this information — for comparison, Szemerédi’s theorem was a $1,000 question), and it is certainly the question whose statement involves the most occurrences of “\log“.

Alexander Grothendieck

Alexander Grothendieck died on Thursday at age 86 in Saint-Girons.

Grothendieck, who started his work in functional analysis, is known for his far-reaching (and still incomplete) program of creating new foundations for algebraic geometry, a work that he led at IHES in the 50s and 60s and that is documented in the thousands of pages of EGA and SGA. If modern algebraic geometry is built on schemes, has applications to number theory, has definitions given in a category-theoretic language, and is completely incomprehensible to the novice, it is all thanks to Grothendieck’s vision.

In his 40s, and at the top of his game, Grothendieck left IHES over the issue of military funding, and progressively detached himself from the mathematical community, while embracing environmental and anti-war causes.

Grothendieck’s life story, from his escaping Nazi Germany, to his revolution in mathematics, to his radical politics and later hermit life, is spellbinding. Some of it is told in a great two-part article in the Notices of the AMS (part 1, part 2) and I would also recommend this more technical essay and this more philosophical one.

Grothendieck had a flair for effective naming, a way with words, and a taste for elaborate metaphors. Here is his famous “how to break a nut” analogy, describing his style of mathematical research:

I can illustrate the second approach with the same image of a nut to be opened. The first analogy that came to my mind is of immersing the nut in some softening liquid, and why not simply water? From time to time you rub so the liquid penetrates better, and otherwise you let time pass. The shell becomes more flexible through weeks and months—when the time is ripe, hand pressure is enough, the shell opens like a perfectly ripened avocado!

An Alternative to the Seddighin-Hajiaghayi Ranking Methodology

[Update 10/24/14: there was a bug in the code I wrote yesterday night, apologies to the colleagues at Rutgers!]

[Update 10/24/14: a reaction to the authoritative study of MIT and the University of Maryland. Also, coincidentally, today Scott Adams comes down against reputation-based rankings]

Saeed Seddighin and MohammadTaghi Hajiaghayi have proposed a ranking methodology for theory groups based on the following desiderata: (1) the ranking should be objective, and based only on quantitative information and (2) the ranking should be transparent, and the methodology openly revealed.

Inspired by their work, I propose an alternative methodology that meets both criteria, but has some additional advantages, including an easier implementation. Based on the same Brown University dataset, I count, for each theory group, the total number of letters in the names of its faculty members.
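For concreteness, the whole methodology fits in a few lines of Python; here is a hypothetical sketch (the faculty dictionary stands in for the Brown dataset, its entries are made up for illustration, and only alphabetic characters are counted):

    # `faculty` stands in for the Brown dataset: it maps each theory
    # group to the list of its faculty members' names (illustrative
    # entries only, not the actual dataset).
    faculty = {
        "Brown University": ["Philip Klein", "Claire Mathieu"],
        "Yale University": ["Daniel Spielman"],
    }

    # Score each group by the total number of letters across its faculty.
    scores = {
        group: sum(sum(ch.isalpha() for ch in name) for name in names)
        for group, names in faculty.items()
    }

    # Print in the same "rank ( score ) group" format used below.
    for rank, (group, score) in enumerate(
            sorted(scores.items(), key=lambda t: -t[1]), start=1):
        print(rank, "(", score, ")", group)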

Here are the results (apologies for the poor formatting):

1 ( 201 ) Massachusetts Institute of Technology
2 ( 179 ) Georgia Institute of Technology
3 ( 146 ) Rutgers – State University of New Jersey – New Brunswick
4 ( 142 ) University of Illinois at Urbana-Champaign
5 ( 141 ) Princeton University
6 ( 139 ) Duke University
7 ( 128 ) Carnegie Mellon University
8 ( 126 ) University of Texas – Austin
9 ( 115 ) University of Maryland – College Park
10 ( 114 ) Texas A&M University
11 ( 111 ) Northwestern University
12 ( 110 ) Stanford University
13 ( 108 ) Columbia University
14 ( 106 ) University of Wisconsin – Madison
15 ( 105 ) University of Massachusetts – Amherst
16 ( 105 ) University of California – San Diego
17 ( 98 ) University of California – Irvine
18 ( 94 ) New York University
19 ( 94 ) State University of New York – Stony Brook
20 ( 93 ) University of Chicago
21 ( 91 ) Harvard University
22 ( 91 ) Cornell University
23 ( 87 ) University of Southern California
24 ( 87 ) University of Michigan
25 ( 85 ) University of Pennsylvania
26 ( 84 ) University of California – Los Angeles
27 ( 81 ) University of California – Berkeley
28 ( 78 ) Dartmouth College
29 ( 76 ) Purdue University
30 ( 71 ) California Institute of Technology
31 ( 67 ) Ohio State University
32 ( 63 ) Brown University
33 ( 61 ) Yale University
34 ( 54 ) University of Rochester
35 ( 53 ) University of California – Santa Barbara
36 ( 53 ) Johns Hopkins University
37 ( 52 ) University of Minnesota – Twin Cities
38 ( 49 ) Virginia Polytechnic Institute and State University
39 ( 48 ) North Carolina State University
40 ( 47 ) University of Florida
41 ( 45 ) Rensselaer Polytechnic Institute
42 ( 44 ) University of Washington
43 ( 44 ) University of California – Davis
44 ( 44 ) Pennsylvania State University
45 ( 40 ) University of Colorado Boulder
46 ( 38 ) University of Utah
47 ( 36 ) University of North Carolina – Chapel Hill
48 ( 33 ) Boston University
49 ( 31 ) University of Arizona
50 ( 30 ) Rice University
51 ( 14 ) University of Virginia
52 ( 12 ) Arizona State University
53 ( 12 ) University of Pittsburgh

I should acknowledge a couple of limitations of this methodology: (1) the Brown dataset is not current, but I believe that the results would not be substantially different even with current data, (2) it might be reasonable to count only the letters in the last name, or to weight the letters in the last name by 1 and those in the first name by 1/2. If there is sufficient interest, I will post rankings according to these other methodologies.