Two recent papers by Cui Peng

Cui Peng of Renmin University in Beijing has recently released two preprints, one claiming a proof of P=NP and one claiming a refutation of the Unique Games Conjecture; I will call them the “NP paper” and the “UG paper,” respectively.

Of all the papers I have seen claiming a resolution of the P versus NP problem, and, believe me, I have seen a lot of them, these are by far the most legit. On Scott Aaronson’s checklist of signs that a claimed mathematical breakthrough is wrong, they score only two.

Unfortunately, both papers violate known impossibility results.

The two papers follow a similar approach: a certain constraint satisfaction problem is proved to be approximation resistant (under the assumption that P{\neq}NP, or under the UGC, depending on the paper) and then a Semidefinite Programming approximation algorithm is developed that breaks approximation resistance. (Recall that a constraint satisfaction problem is approximation resistant if there is no polynomial time algorithm that has a worst-case approximation ratio better than the algorithm that picks a random assignment.)
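
As a concrete illustration of the random-assignment baseline (a minimal sketch of my own, not taken from either paper): for a {k}-ary predicate, a uniformly random assignment satisfies each constraint with probability {p}, equal to the fraction of satisfying inputs of the predicate, which can be checked by brute force:

```python
# Minimal sketch: compute the fraction p of assignments that satisfy
# a single constraint, i.e. the baseline achieved in expectation by a
# uniformly random assignment.
from itertools import product

def random_assignment_fraction(predicate, arity):
    """Fraction of the 2^arity inputs on which the predicate is true."""
    inputs = list(product([0, 1], repeat=arity))
    return sum(bool(predicate(*x)) for x in inputs) / len(inputs)

# A Max 3LIN constraint: x1 + x2 + x3 = 0 (mod 2).
three_lin = lambda x1, x2, x3: (x1 ^ x2 ^ x3) == 0

# A Max 3SAT clause: x1 OR x2 OR x3.
three_sat = lambda x1, x2, x3: x1 or x2 or x3

print(random_assignment_fraction(three_lin, 3))  # 0.5
print(random_assignment_fraction(three_sat, 3))  # 0.875
```

For Max 3LIN this baseline is {1/2}, and Håstad famously proved that no polynomial-time algorithm can beat it (assuming P{\neq}NP), that is, Max 3LIN is approximation resistant; breaking approximation resistance means beating such a {p} by a fixed {\delta}.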

In both papers, the approximation algorithm is by Hast, and it is based on a semidefinite programming relaxation studied by Charikar and Wirth.
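
For reference, here is the shape of that relaxation as I recall it from the Charikar-Wirth paper (a sketch, not a statement of their exact formulation): one starts from a quadratic program over {\pm 1} variables and relaxes the variables to unit vectors,

\[ \max_{x \in \{-1,1\}^n}\ \sum_{i \neq j} a_{ij}\, x_i x_j \qquad \longrightarrow \qquad \max\ \Big\{ \sum_{i \neq j} a_{ij}\, \langle v_i, v_j \rangle \ :\ \| v_i \|^2 = 1 \Big\}, \]

and Charikar and Wirth give a rounding procedure (project on a random Gaussian vector and truncate) that loses only an {O(\log n)} factor. Roughly speaking, Hast’s algorithm applies this to the degree-2 part of the Fourier expansion of the constraints to pick up an advantage over a random assignment.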

The reason why the results cannot be correct is that, in both cases, if the hardness result is correct, then it implies an integrality gap for the Charikar-Wirth relaxation, which makes it unsuitable to break the approximation resistance as claimed.

Suppose that we have a constraint satisfaction problem in which every constraint is satisfied by a {p} fraction of assignments. Then, for such a problem not to be approximation resistant, we have to devise an algorithm that, for some fixed {\delta>0}, returns a solution whose cost (the number of constraints that it satisfies) is at least {p+\delta} times the optimum. The analysis of such an algorithm needs to include some technique to prove upper bounds for the true optimum; this is because if you are given an instance in which the optimum satisfies at most a {p+o(1)} fraction of constraints, as is the case for a random instance, then the algorithm will satisfy at most a {p+o(1)} fraction of constraints, but then the execution of the algorithm and the proof of correctness will give a (polynomial-time computable and polynomial-time checkable) certificate that the optimum satisfies at most a {(p+o(1))/(p+\delta) < 1 - \delta + o(1)} fraction of constraints.
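
To spell out the certificate (in my own notation: {m} is the number of constraints, and {ALG} and {OPT} are the number of constraints satisfied by the algorithm and by an optimal assignment, respectively):

\[ ALG \geq (p+\delta)\cdot OPT \ \text{ and } \ ALG \leq (p+o(1))\, m \quad \implies \quad OPT \ \leq\ \frac{p+o(1)}{p+\delta}\, m \ \leq\ (1-\delta+o(1))\, m, \]

where the last inequality uses {p+\delta \leq 1}.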

For algorithms that are based on relaxations, such certificates come from the relaxation itself: one shows that the algorithm satisfies a number of constraints that is at least {p+\delta} times the optimum of the relaxation, and the optimum of the relaxation is at least the optimum of the constraint satisfaction problem. But if there are instances for which the optimum is {p+o(1)} and the optimum of the relaxation is {1-o(1)}, then one cannot use such a relaxation to design an algorithm that breaks approximation resistance. (Because, on such instances, the algorithm will not be able to satisfy a number of constraints equal to {p+\delta} times the optimum of the relaxation.)
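
Concretely, suppose an instance with {m} constraints has optimum {(p+o(1))m} but relaxation optimum {(1-o(1))m}. An algorithm guaranteed to satisfy at least {p+\delta} times the optimum of the relaxation would have to satisfy

\[ ALG \ \geq\ (p+\delta)(1-o(1))\, m \ >\ (p+o(1))\, m \ =\ OPT, \]

that is, more constraints than an optimal assignment does, which is impossible.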

In the UG paper, the approximation resistance relies on a result of Austrin and Håstad. Like all UGC-based inapproximability results that I am aware of, the hardness results of Austrin and Håstad are based on a long code test. A major result of Raghavendra is that for every constraint satisfaction problem one can write a certain SDP relaxation such that the integrality gap of the relaxation is equal to the ratio between completeness and soundness in the best possible long code test that uses predicates from the constraint satisfaction problem. In particular, in Section 7.7 of his thesis, Prasad shows that if you have a long code test with completeness {c} and soundness {s} for a constraint satisfaction problem, then for every {\epsilon > 0} there is an instance of the problem in which no solution satisfies more than an {s+\epsilon} fraction of constraints, but there is a feasible SDP solution whose cost is at least a {c-\epsilon} fraction of the number of constraints. The SDP relaxation of Charikar and Wirth is the same as the one studied by Prasad. This means that if you prove, via a long code test, that a certain problem is approximation resistant, then you also show that the SDP relaxation of Charikar and Wirth cannot be used to break approximation resistance.
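
Schematically, the special case of Raghavendra’s result that matters here is:

\[ \text{long code test with completeness } c \text{ and soundness } s \ \implies\ \forall \epsilon>0\ \exists \text{ instances with } OPT \leq (s+\epsilon)\, m \text{ and SDP value } \geq (c-\epsilon)\, m. \]

An approximation-resistance proof via a long code test has {c=1-o(1)} and {s=p+o(1)}, so it produces exactly the kind of integrality gap instance that, by the argument above, rules out breaking approximation resistance with the Charikar-Wirth relaxation.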

The NP paper adopts a technique introduced by Siu On Chan to prove inapproximability results by starting from a version of the PCP theorem and then applying a “hardness amplification” reduction. Tulsiani proves that if one proves a hardness-of-approximation result via a “local” approximation-reduction from Max 3LIN, then the hardness-of-approximation result is matched by an integrality gap for Lasserre SDP relaxations up to a super-constant number of rounds. The technical sense in which the reduction has to be “local” is as follows. A reduction from Max 3LIN (the same holds for other problems, but we focus on starting from Max 3LIN for concreteness) to another constraint satisfaction problem has two parameters: a “completeness” parameter {c} and a “soundness” parameter {s}, and its properties are that:

  • (Completeness Condition) the reduction maps instances of 3LIN in which the optimum is {1-o(1)} to instances of the target problem in which the optimum is at least {c-o(1)};
  • (Soundness Condition) the reduction maps instances of 3LIN in which the optimum is {1/2 +o(1)} to instances of the target problem in which the optimum is at most {s+o(1)}.

Since we know that it’s NP-hard to distinguish Max 3LIN instances in which the optimum is {1-o(1)} from instances in which the optimum is {1/2 +o(1)}, such a reduction shows that, in the target problem, it is NP-hard to distinguish instances in which the optimum is {c-o(1)} from instances in which the optimum is {s+o(1)}. The locality condition studied by Tulsiani is that the Completeness Condition is established by describing a mapping from solutions satisfying a {1-o(1)} fraction of the Max 3LIN constraints to solutions satisfying a {c-o(1)} fraction of the target problem constraints, such that the assignment to each variable of the target problem can be computed by looking at a sublinear (in the size of the Max 3LIN instance) number of Max 3LIN variables. Reductions that follow the Chan methodology are local in the above sense. This means that if one proves that a problem is approximation resistant using the Chan methodology starting from the PCP theorem, then one has a local reduction from Max 3LIN to the problem with completeness {1-o(1)} and soundness {p+o(1)}, where, as before, {p} is the fraction of constraints of the target problem satisfied by a random assignment. In turn, this implies that, not just for the Charikar-Wirth relaxation, but for all relaxations obtained from a constant number of rounds of the Lasserre hierarchy, there are instances of the target problem that have optimum {p+o(1)} and SDP optimum {1-o(1)}, so that approximation resistance cannot be broken using such SDP relaxations.
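
To summarize the chain of implications for the NP paper (schematically, in the notation above, with {m} the number of constraints):

\[ \text{local reduction from Max 3LIN with } c = 1-o(1),\ s = p+o(1) \ \implies\ \exists \text{ instances with } OPT = (p+o(1))\, m \text{ and Lasserre value } (1-o(1))\, m \]

for any constant number of rounds, by Tulsiani’s result; and since the Charikar-Wirth relaxation is no stronger than a constant number of Lasserre rounds, the argument from the previous sections applies to it as well, so the claimed guarantee of Hast’s algorithm cannot hold on these instances.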

8 thoughts on “Two recent papers by Cui Peng”

  1. Releasing a paper “proving” that “P = NP”, and then afterwards releasing a second paper proving that “P != NP implies that UGC is false”, seems like a pretty strong warning sign. A second strong warning sign is the high number of revisions (17) of the “P = NP” paper posted to arXiv over the last year and a half.

  2. Before writing this post, I asked the author why he had posted a “UGC is false” paper after the “P=NP” paper, and he said that, after writing the P=NP paper, he looked at how his argument would specialize to the unique games conjecture, and he came up with a simpler proof for that case, which he wrote up in the second paper. It is a fairly reasonable answer: if I thought I had a P=NP proof, and I saw that nobody took it seriously, I would look at the first non-trivial consequence of my ideas.

  3. Thanks for taking the time to clarify/refute this. The UGC paper (I saw/skimmed it) showed a lot of technical flair and background in the area. There was another recent arXiv paper refuting another P=?NP paper, written by (apparently undergraduate) students; thought it was a great idea. The overall “community” tends to meet these types of attempts with dead silence and typically refuses to comment whatsoever (even in emails or whatever). I don’t think that approach is entirely helpful, but on the other hand there is no collective incentive to do anything otherwise, and it can be a thankless/frustrating exercise etc. (speaking from deep experience). A blog post on the topic is great. Now you can get to all the other papers on Woeginger’s list. Just kidding. Note also that RJLipton has written on his blog that failed attempts have at least some marginal value. Might blog about this (again) sometime.
    See also: idea/outline for an NP vs P/Poly proof via monotone circuits.
    Reminds me how I once merely cited a Collatz proof paper by Chinese authors on the StackExchange cstheory site and it got deleted quite quickly. Bzzzzt.

  4. Pingback: leading authorities weigh in on P vs NP proofs | Turing Machine

  5. Hi Luca, I’m curious whether you heard anything back from the author after you posted this? Does he agree with it? I noticed he uploaded new versions to arXiv on June 1, which is after you posted.

  6. Even though the main results are false, is there anything of value in the papers? An interesting theorem, though of course not as interesting as the author hoped?
