What does it mean when it’s hard to find hard instances?

[In the provincial spirit of Italian newspapers, which often have headlines like “Typhoon in South-East Asia causes widespread destruction; what are the consequences for Italian exports?”, and of men who overhear discussions about women’s issues and say things like “yes, but men have issues too,” I am going to comment on how Babai’s announcement affects me and the kind of problems I work on.]

If someone had told me last week: “a quasi-polynomial time algorithm has been found for a major open problem for which only a slightly subexponential algorithm was known before,” I would have immediately thought Unique Games!

Before Babai’s announcement, Graph Isomorphism had certain interesting properties in common with problems such as Factoring, Discrete Log, and Approximate Closest Vector (for approximation ratios of the order of {\sqrt{n}} or more): no polynomial time algorithm is known, non-trivial algorithms that are much faster than brute force are known, and NP-completeness is not possible because the problem belongs to either {NP \cap coNP} or {NP \cap coAM}.

But there is an important difference: there are simple distributions of inputs on which Factoring, Discrete Log, and Closest Vector approximation are believed to be hard on average, and if one proposes an efficiently implementable algorithm for such problems, it can be immediately shown that it does not work. (Or, if it works, it’s already a breakthrough even without a rigorous analysis.)
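
For instance, here is a minimal sketch (my illustration; random_semiprime is a name I made up) of the standard hard distribution for Factoring: products of two random primes of comparable size, as in RSA key generation. Any proposed factoring algorithm can be stress-tested on such samples right away.

```python
# A hedged sketch: sample N = p*q with p and q random primes, the
# distribution on which Factoring is believed to be hard on average.
import sympy  # sympy.randprime(a, b) returns a random prime in [a, b)

def random_semiprime(bits=512):
    p = sympy.randprime(2 ** (bits - 1), 2 ** bits)
    q = sympy.randprime(2 ** (bits - 1), 2 ** bits)
    return p * q, (p, q)

# A candidate factoring algorithm is refuted as soon as it fails to
# recover p and q (up to order) from N on samples like this one:
N, (p, q) = random_semiprime(bits=64)
```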

In the case of Graph Isomorphism, however, it is easy to come up with simple algorithms for which it is very difficult to find counterexamples, and there are algorithms that are rigorously proved to work on certain distributions of random graphs. Now we know that there are in fact no hard instances at all, but, even before, if we believed that Graph Isomorphism was hard, we had to believe that the hard instances were rare and strange, rather than common.
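
To illustrate (a sketch of mine, not Babai’s algorithm): color refinement, perhaps the simplest such heuristic, is rigorously known to succeed on almost all random graphs (Babai, Erdős and Selkow), and yet cooking up instances that fool it takes some care.

```python
# Color refinement (1-dimensional Weisfeiler-Leman): repeatedly recolor
# every vertex by the multiset of its neighbors' colors, running on the
# disjoint union of the two graphs so that colors are comparable.
from collections import Counter

def refinement_histograms(adj1, adj2):
    # adj1, adj2: dicts mapping each vertex to the set of its neighbors.
    adj = {("a", v): {("a", u) for u in adj1[v]} for v in adj1}
    adj.update({("b", v): {("b", u) for u in adj2[v]} for v in adj2})
    color = {v: 0 for v in adj}
    for _ in range(len(adj)):  # the refinement stabilizes within |V| rounds
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
               for v in adj}
        ids = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: ids[sig[v]] for v in adj}
        if new == color:
            break
        color = new
    return (Counter(color["a", v] for v in adj1),
            Counter(color["b", v] for v in adj2))

def maybe_isomorphic(adj1, adj2):
    # False: provably non-isomorphic.  True: inconclusive, although on
    # almost all random graphs it is the correct answer.
    h1, h2 = refinement_histograms(adj1, adj2)
    return h1 == h2
```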

It is also worth pointing out that, using Levin’s theory of average-case complexity, one can show that if any problem at all in NP is hard under any samplable distribution, then for every NP-complete problem we can find a samplable distribution under which the problem is hard. And, in “practice,” natural NP-complete problems do have simple distributions that seem to generate hard instances.
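
For example (again a sketch of mine): random 3-SAT at clause density around 4.26, near the empirical satisfiability threshold, is a one-line samplable distribution on which state-of-the-art SAT solvers slow down dramatically.

```python
# Sample a random 3-SAT instance at the empirically hard clause density,
# a simple distribution that seems to generate hard instances of a
# natural NP-complete problem.
import random

def random_3sat(n, density=4.26):
    clauses = []
    for _ in range(int(density * n)):
        i, j, k = random.sample(range(1, n + 1), 3)  # three distinct variables
        clauses.append([v if random.random() < 0.5 else -v for v in (i, j, k)])
    return clauses  # signed literals, DIMACS-style, ready for a SAT solver

formula = random_3sat(200)
```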

What about Small-set Expansion, Unique Games, and Unique-Games-hard problems not known to be NP-hard, like O(1)-approximation of Sparsest Cut? We don’t know of any distribution of instances for which it is plausible to conjecture that such problems are hard, and we have algorithms (Lasserre relaxations of constant degree) with no known counterexample. Many simple distributions of instances are rigorously known to be solvable by existing algorithms. So, if we want to believe the Unique Games conjecture, we have to believe that there are hard instances, but that they are rare and strange.

I am sure that it is possible, under standard assumptions, to construct an artificial problem L in NP that is in average-case-P according to Levin’s definition but not in P. Such a problem would not be polynomial time solvable, but it would be easy to solve on average under any samplable distribution and, intuitively, it would be a problem that is hard even though hard instances are rare and strange.

But can a natural problem in NP exhibit this behavior? Now that Graph Isomorphism is not a plausible example any more, I am inclined to believe (until the next surprise) that no natural problem has this behavior, and my guess concerning the Unique Games conjecture is going to be that it is false (or “morally false” in the sense that a quasipolynomial time algorithm exists) until someone comes up with a distribution of Unique Games instances that is plausibly hard on average and that, in particular, exhibits integrality gaps for Lasserre relaxations (even just experimentally).
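
To make the challenge concrete, here is a minimal sketch (entirely mine; the function names are made up, and a serious experiment would use Lasserre lifts rather than the basic relaxation) of the kind of experiment I have in mind: sample random Unique Games instances and compare the true optimum with the optimum of the standard SDP relaxation, looking for a gap.

```python
# Compare the true optimum of a random Unique Games instance with the
# basic SDP relaxation: vectors u_{v,i} for vertex v and label i, with
# sum_i |u_{v,i}|^2 = 1 and u_{v,i} orthogonal to u_{v,j} for i != j.
# Requires cvxpy with an SDP-capable solver (e.g. SCS).
import itertools, random
import numpy as np
import cvxpy as cp

def random_ug(n, k, m):
    # m random constraints (u, v, pi), each read as: label(v) = pi[label(u)].
    cons = []
    for _ in range(m):
        u, v = random.sample(range(n), 2)
        cons.append((u, v, list(np.random.permutation(k))))
    return cons

def basic_sdp_value(n, k, cons):
    idx = lambda v, i: v * k + i  # row of the Gram vector u_{v,i}
    X = cp.Variable((n * k, n * k), PSD=True)  # Gram matrix of the vectors
    cs = []
    for v in range(n):
        cs.append(sum(X[idx(v, i), idx(v, i)] for i in range(k)) == 1)
        cs += [X[idx(v, i), idx(v, j)] == 0
               for i, j in itertools.combinations(range(k), 2)]
    obj = sum(X[idx(u, i), idx(v, pi[i])]
              for (u, v, pi) in cons for i in range(k)) / len(cons)
    prob = cp.Problem(cp.Maximize(obj), cs)
    prob.solve()
    return prob.value

def true_value(n, k, cons):
    best = 0.0  # brute force; only sensible for very small n and k
    for labels in itertools.product(range(k), repeat=n):
        sat = sum(labels[v] == pi[labels[u]] for (u, v, pi) in cons)
        best = max(best, sat / len(cons))
    return best

cons = random_ug(n=6, k=3, m=20)
print(basic_sdp_value(6, 3, cons), true_value(6, 3, cons))
```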

Laci Babai and Graph Isomorphism

Next Tuesday, a week from today, Laci Babai will talk at the University of Chicago about a new algorithm that solves graph isomorphism in quasipolynomial time. There should also be a follow-up talk the following Thursday which, by a lucky coincidence, I will be able to attend; I will then report back.

Meanwhile, if you have any gossip on the proof, by all means go ahead and share it in the comments.

Aldo Fabrizi

Today Aldo Fabrizi would have turned 110. Outside of Italy, those who know him at all probably know him from Rome, Open City, one of the early movies of the Neorealismo genre. (It is available in the US in a Criterion edition.)

But, in Italy, Fabrizi is famous for being one of the giants of the first generation of Italian-style comedy, from the 1950s and 1960s. My favorite movies of his are those in which he acts as a straight man for Totò, and my absolute favorite is Guardie e Ladri, which never had an American release.

For those who understand Italian, it’s possible to find the whole movie on YouTube. Here is one iconic scene.

How was FOCS 2015?

Back around 2010, the Simons Institute for the Theory of Computing at Berkeley offered to organize FOCS in 2013, 2015 and 2017. So far, the IEEE Technical Committee on Mathematical Foundations of Computing has taken us up on this offer in 2013 and 2015, and, unless a competing bid is presented, FOCS will come again to Berkeley in 2017.

Unfortunately there is no hotel in downtown Berkeley that is able to accommodate FOCS. The Shattuck hotel comes close, but falls short. (It has two conference rooms, but they are of very different sizes, the space to hang out during coffee breaks is much too small for 200+ people, and that space is outdoors, which is potentially bad: rain in October is unlikely but not impossible in Berkeley.)

This leaves us with the DoubleTree hotel in the Berkeley Marina, which has some advantages, such as views of the bay and good facilities, and some disadvantages, such as the isolated location and the high prices. The location also forces us to provide lunches, because it would be inconvenient for people to drive to lunch places and then drive back during the lunch break. Being well aware of this, the hotel charges extortionate fees for food.

All this is to say that, in planning for FOCS 2017, there is not much that we can do differently, although there are lots of little details that we can adjust, and it would be great to know how people’s experience was.

For example, did the block of discounted hotel rooms run out too soon? Would you have liked to receive something with your registration other than just the badge? If so, what? (So far, I have heard suggestions for FOCS-branded hats, t-shirts, and teddy bears.) Wasn’t it awesome to have a full bar at the business meeting? Why did nobody try the soups at lunch? The soups were delicious!

FOCS 2015

This is an odd-numbered year, and FOCS is back in Berkeley. The conference, whose early registration deadline is coming up, will be held on October 18-20 at the DoubleTree hotel near the Berkeley Marina, the same location as FOCS 2013, and it will be preceded by a day-long conference in honor of Dick Karp’s 80th birthday.

Early registration closes next Friday, so make sure that you register before then.

The weekend before FOCS there will be the Treasure Island Music Festival; Treasure Island is halfway along the Bay Bridge between Oakland and San Francisco, and from the Island there are beautiful views of the Bay Area.

After FOCS, there is a South Asian Film Festival in San Francisco.

If you arrive on Friday the 16th and you want to spend an afternoon in San Francisco, at the end of the day you can find your way to the De Young Museum in Golden Gate Park, which stays open until 8:30pm on Fridays and has live music and a bar in the lobby from 5:30 to 8:30.

Did I mention that the early registration deadline is coming up? Don’t forget to register.

On Berkeley and Recycling

For the past few days, I have been getting emails that are the Platonic ideal of the U.C. Berkeley administration.

Today, there was a one-hour presentation on recycling and composting at Soda Hall, the computer science building. This is worth saying once more: a one-hour presentation on putting glass, metal, and certain plastics in one container, clean paper in another, and compostable material in a third one. We received an email announcement, then an invitation to add the event to our calendar, then two reminders.

But what if one cannot make it? Not to worry! There will be a second one-hour presentation on recycling, for those who missed the first one, and for those who were so enthralled by the first one that they want to spend one more hour being told about recycling.

Meanwhile, I have been trying since February to get a desk, a conference table and a bookshelf for my office in Soda Hall. So far I got the desk.

I asked Christos what he thought about the two one-hour presentations on recycling, and he said it reminded him of a passage from a famous essay by Michael Chabon:

Passersby feel empowered – indeed, they feel duty-bound – to criticize your parking technique, your failure to sort your recycling into brown paper and white, your resource-hogging four-wheel-drive vehicle, your use of a pinch-collar to keep your dog from straining at the leash.

Sometimes I think that when the administration started hearing about MOOCs, they must have started to dream about a future with no professors, because the students would all take MOOCs, and no students on campus, because they would all take the MOOCs from home, and the campus would just be filled with Assistant Chancellors of this and that, giving each other training workshops. And this would be like the episode of Get Smart in which Max infiltrates a criminal gang, only to find out that everybody is an infiltrator and there are no criminals left.


Happy Qi Xi

Happy Qi Xi festival, everybody. This is the “Chinese Valentine’s Day,” which falls on the seventh day of the seventh month of the lunar calendar, which this year is August 20th. The festivity relates to a story that, like many Chinese stories, is a pretty long story.

The gist of it is that the (seventh) daughter of a goddess at some point came to earth to live as a mortal and met a cowboy (as in, a guy whose job is to herd cows). The two fell in love, got married, had two children (I told you, it’s a long story), and they were pretty happy, until the goddess mom realized what had happened.

As is the wont of mothers-in-law, she did not approve, and she recalled her daughter to heaven, where she is now the star Vega. The guy was desperate, but then one of his cows suggested that he kill it, and then use its skin to fly to heaven (don’t ask) and reunite with his wife.

He did so, and it worked, so that he is now the star Altair, but then the mother-in-law found out again. So she created a river, the Milky Way, to separate them once more. And now they are forever separated, except that, every year, magpies (which are a kind of crow) fly to heaven and use their bodies to create a bridge over the Milky Way, so that the two lovers can use it to meet. And this happens on the seventh day of the seventh month of the year.

What a difference thirty years make

From the majority opinions of two rulings of the Supreme Court of the United States.

Bowers v. Hardwick (1986):

The Constitution does not confer a fundamental right upon homosexuals to engage in sodomy. […] to claim that a right to engage in such conduct is “deeply rooted in this Nation’s history and tradition” or “implicit in the concept of ordered liberty” is, at best, facetious.

Obergefell v. Hodges (2015):

[Same-sex couples] ask for equal dignity in the eyes of the law. The Constitution grants them that right. The Constitution […] does not permit the State to bar same-sex couples from marriage on the same terms as accorded to couples of the opposite sex.

Two recent papers by Cui Peng

Cui Peng of Renmin University in Beijing has recently released two preprints, one claiming a proof of P=NP and one claiming a refutation of the Unique Games Conjecture; I will call them the “NP paper” and the “UG paper,” respectively.

Of all the papers I have seen claiming a resolution of the P versus NP problem, and, believe me, I have seen a lot of them, these are by far the most legit. On Scott Aaronson’s checklist of signs that a claimed mathematical breakthrough is wrong, they score only two.

Unfortunately, both papers violate known impossibility results.

The two papers follow a similar approach: a certain constraint satisfaction problem is proved to be approximation resistant (under the assumption that {P \neq NP}, or under the UGC, depending on the paper), and then a semidefinite programming approximation algorithm is developed that breaks the approximation resistance. (Recall that a constraint satisfaction problem is approximation resistant if there is no polynomial time algorithm that has a worst-case approximation ratio better than the algorithm that picks a random assignment.)
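
As a concrete reference point (my example, not from the papers): for Max 3LIN, each XOR constraint on three variables is satisfied by exactly half of the assignments, so the random-assignment baseline is {p = 1/2} on every instance.

```python
# The trivial baseline in the definition of approximation resistance:
# a uniformly random assignment satisfies each 3LIN (XOR) constraint
# with probability 1/2, regardless of the instance.
import random

def random_assignment_ratio(clauses, n, trials=10000):
    # clauses: list of (i, j, k, b), encoding x_i XOR x_j XOR x_k = b.
    sat = 0
    for _ in range(trials):
        x = [random.randrange(2) for _ in range(n)]
        sat += sum((x[i] ^ x[j] ^ x[k]) == b for (i, j, k, b) in clauses)
    return sat / (trials * len(clauses))  # concentrates around p = 1/2
```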

In both papers, the approximation algorithm is by Hast, and it is based on a semidefinite programming relaxation studied by Charikar and Wirth.
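
For reference, here is a minimal sketch (my rendering; the function name is made up) of that relaxation: the problem of maximizing {\sum_{i,j} a_{ij} x_i x_j} over {x \in \{-1,+1\}^n} is relaxed by replacing the products {x_i x_j} with the entries of a positive semidefinite matrix with unit diagonal, that is, with inner products of unit vectors.

```python
# Charikar-Wirth-style SDP relaxation of max x^T A x over x in {-1,+1}^n.
# Requires cvxpy with an SDP-capable solver (e.g. SCS).
import numpy as np
import cvxpy as cp

def quadratic_program_sdp(A):
    # A: symmetric real matrix with zero diagonal.
    n = A.shape[0]
    X = cp.Variable((n, n), PSD=True)      # X_ij plays the role of x_i * x_j
    prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(A, X))),
                      [cp.diag(X) == 1])   # unit vectors: X_ii = |v_i|^2 = 1
    prob.solve()
    return prob.value, X.value

# The Charikar-Wirth rounding projects the Gram vectors on a random
# Gaussian direction and truncates, losing a Theta(1/log n) factor.
```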

The reason why the results cannot be correct is that, in both cases, if the hardness result is correct, then it implies an integrality gap for the Charikar-Wirth relaxation, which makes it unsuitable to break the approximation resistance as claimed.

Suppose that we have a constraint satisfaction problem in which every constraint is satisfied by a {p} fraction of assignments. Then, for such a problem to not be approximation resistant, we have to devise an algorithm that, for some fixed {\delta>0}, returns a solution whose cost (the number of constraints that it satisfies) is at least {p+\delta} times the optimum. The analysis of such an algorithm needs to include some technique to prove upper bounds for the true optimum; this is because if you are given an instance in which the optimum satisfies at most a {p+o(1)} fraction of constraints, as is the case for a random instance, then the algorithm will satisfy at most a {p+o(1)} fraction of constraints, and so the execution of the algorithm, together with the proof of correctness, gives a (polynomial-time computable and polynomial-time checkable) certificate that the optimum satisfies at most a {(p+o(1))/(p+\delta) < 1 - \delta + o(1)} fraction of constraints.
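
In symbols (my notation): if the algorithm guarantees {ALG \geq (p+\delta) \cdot OPT} and, on the given instance, {ALG \leq OPT \leq p+o(1)}, then

{OPT \leq \frac{ALG}{p+\delta} \leq \frac{p+o(1)}{p+\delta} \leq 1-\delta+o(1)}

where the last inequality uses {p+\delta \leq 1}; so the run of the algorithm certifies that the optimum is bounded away from 1.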

For algorithms that are based on relaxations, such certificates come from the relaxation itself: one shows that the algorithm satisfies a number of constraints that is at least {p+\delta} times the optimum of the relaxation, and the optimum of the relaxation is at least the optimum of the constraint satisfaction problem. But if there are instances for which the optimum is {p+o(1)} and the optimum of the relaxation is {1-o(1)}, then one cannot use such a relaxation to design an algorithm that breaks approximation resistance. (Because, on such instances, the algorithm will not be able to satisfy a number of constraints equal to {p+\delta} times the optimum of the relaxation.)
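
Spelled out (again in my notation): if an instance has true optimum {OPT = p+o(1)} but relaxation optimum {SDP = 1-o(1)}, then the algorithm’s output satisfies {ALG \leq OPT = p+o(1) < (p+\delta)(1-o(1)) = (p+\delta) \cdot SDP}, so the guarantee “at least {p+\delta} times the optimum of the relaxation” is unachievable on that instance.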

In the UG paper, the approximation resistance result relies on a theorem of Austrin and Håstad. Like all UGC-based inapproximability results that I am aware of, the hardness results of Austrin and Håstad are based on a long code test. A major result of Raghavendra is that for every constraint satisfaction problem one can write a certain SDP relaxation such that the integrality gap of the relaxation is equal to the ratio between soundness and completeness in the best possible long code test that uses predicates from the constraint satisfaction problem. In particular, in Section 7.7 of his thesis, Prasad shows that if you have a long code test with completeness {c} and soundness {s} for a constraint satisfaction problem, then for every {\epsilon > 0} there is an instance of the problem in which no solution satisfies more than an {s+\epsilon} fraction of constraints, but there is a feasible SDP solution whose cost is at least a {c-\epsilon} fraction of the number of constraints. The SDP relaxation of Charikar and Wirth is the same as the one studied by Prasad. This means that if you prove, via a long code test, that a certain problem is approximation resistant, then you also show that the SDP relaxation of Charikar and Wirth cannot be used to break that approximation resistance.

The NP paper adopts a technique introduced by Siu On Chan to prove inapproximability results by starting from a version of the PCP theorem and then applying a “hardness amplification” reduction. Tulsiani proves that if one proves a hardness-of-approximation result via a “local” approximation-preserving reduction from Max 3LIN, then the hardness-of-approximation result is matched by an integrality gap for Lasserre SDP relaxations up to a super-constant number of rounds. The technical sense in which the reduction has to be “local” is as follows. A reduction from Max 3LIN (the same holds for other starting problems, but we focus on Max 3LIN for concreteness) to another constraint satisfaction problem has two parameters: a “completeness” parameter {c} and a “soundness” parameter {s}, and its properties are that:

  • (Completeness Condition) the reduction maps instances of 3LIN in which the optimum is {1-o(1)} to instances of the target problem in which the optimum is at least {c-o(1)};
  • (Soundness Condition) the reduction maps instances of 3LIN in which the optimum is {1/2 +o(1)} to instances of the target problem in which the optimum is at most {s+o(1)}.

Since we know that it’s NP-hard to distinguish Max 3LIN instances in which the optimum is {1-o(1)} from instances in which the optimum is {1/2 +o(1)}, such a reduction shows that, in the target problem, it is NP-hard to distinguish instances in which the optimum is {c-o(1)} from instances in which the optimum is {s+o(1)}. The locality condition studied by Tulsiani is that the Completeness Condition is established by describing a mapping from solutions satisfying a {1-o(1)} fraction of the Max 3LIN constraints to solutions satisfying a {c-o(1)} fraction of the target problem constraints, in such a way that the assignment to each variable of the target problem can be computed by looking at a sublinear (in the size of the Max 3LIN instance) number of Max 3LIN variables.

Reductions that follow the Chan methodology are local in the above sense. This means that if one proves that a problem is approximation resistant using the Chan methodology, starting from the PCP theorem, then one has a local reduction from Max 3LIN to the problem with completeness {1-o(1)} and soundness {p+o(1)}, where, as before, {p} is the fraction of constraints of the target problem satisfied by a random assignment. In turn, this implies that, not just for the Charikar-Wirth relaxation but for every relaxation obtained from a constant number of rounds of the Lasserre hierarchy, there are instances of the target problem that have optimum {p+o(1)} and SDP optimum {1-o(1)}, so that approximation resistance cannot be broken using such SDP relaxations.
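
In compact form (my summary of how the pieces combine): a local reduction with parameters {(c,s) = (1-o(1), \ p+o(1))}, applied to Max 3LIN instances that have optimum {1/2+o(1)} but Lasserre value {1-o(1)} (which random 3LIN instances are known to provide, by Schoenebeck’s lower bound), produces target instances with optimum {p+o(1)} and Lasserre value {1-o(1)}: exactly the kind of gap that, as argued above, rules out the claimed algorithm.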