*[I am preparing a survey talk on Unique Games for a mathematics conference, and writing a survey paper for a booklet that will be distributed at the conference. My first instinct was to write a one-line paper that would simply refer to Subhash's own excellent survey paper. Fearing that I might come off as lazy, I am instead writing my own paper. Here are some fragments. Comments are very welcome.]*

**1. Why is the Unique Games Conjecture Useful?**

In a previous post we stated the Unique Games Conjecture and made the following informal claim, here rephrased in abbreviated form:

*To reduce Label Cover to a graph optimization problem like Max Cut, we map variables to collections of vertices and we map equations to collections of edges; then we show how to “encode” assignments to variables as 2-colorings of vertices which cut a large fraction of edges, and finally (this is the hardest part of the argument) we show that given a 2-coloring that cuts a comparably large fraction of edges, then *

- the given 2-coloring must be somewhat “close” to a 2-coloring coming from the encoding of an assignment and
- if we “decode” the given 2-coloring to an assignment to the variables, such an assignment satisfies a noticeable fraction of equations.

Starting our reduction from a Unique Game instead of a Label Cover problem, we only need to prove (1) above, and (2) more or less follows for free.

To verify this claim, we “axiomatize” the properties of a reduction that only achieves (1): we describe a reduction mapping a single variable to a graph, such that assignments to the variable are mapped to good cuts, and somewhat good cuts can be mapped back to assignments to the variable. The reader can then go back to our analysis of the Max Cut inapproximability proof in the previous post, and see that the properties below are sufficient to implement the reduction.

In 2004 I wrote a survey on hardness of approximation as a chapter for a book on approximation algorithms. I have just prepared a revised version for a new edition of the book.

While it would have been preferable to rethink the organization and start from scratch, because of time constraints I was only able to add sections on integrality gaps and on unique games, and to add references to more recent work (e.g. the combinatorial proof of the PCP theorem, the new 2-query PCPs, the tight results on minimizing congestion in directed networks, and so on).

If your (or your friend’s) important results are not cited, do let me know. The deadline to submit the chapter has long passed, but the threats from the editor haven’t yet escalated to the point where I feel that I have to submit it or else.

The fact that Linear Programming (LP) can be solved in polynomial time (and, also, efficiently in practice) and that it has such a rich geometric theory and such remarkable expressive power makes LP a powerful unifying concept in the theory of algorithms. It “explains” the existence of polynomial time algorithms for problems such as Shortest Paths and Min Cut, and if one thinks of the combinatorial algorithms for such problems as algorithms for the corresponding linear programs, one gains additional insights.
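As a concrete illustration of this claim, here is a minimal sketch (using `scipy`; the graph and edge costs are made up for the example) of Shortest Paths expressed as a linear program: ship one unit of flow from the source to the sink at minimum cost. The LP optimum coincides with the combinatorial shortest-path length here because the flow constraint matrix is totally unimodular, so the LP has an integral optimal solution.

```python
import numpy as np
from scipy.optimize import linprog

# Shortest s-t path as a min-cost flow LP: ship one unit from s to t.
# Directed edges (u, v, cost); graph is a made-up example.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 3.0), (1, 3, 5.0), (2, 3, 1.0)]
n, s, t = 4, 0, 3

c = np.array([cost for _, _, cost in edges])
# Flow conservation: out-flow minus in-flow is +1 at s, -1 at t, 0 elsewhere.
A_eq = np.zeros((n, len(edges)))
for j, (u, v, _) in enumerate(edges):
    A_eq[u, j] += 1
    A_eq[v, j] -= 1
b_eq = np.zeros(n)
b_eq[s], b_eq[t] = 1, -1

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(res.fun)  # 3.0, the length of the shortest path 0 -> 1 -> 2 -> 3
```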

When looking for algorithms for a new combinatorial problem, a possible approach is to express the problem as a 0/1 integer program, then relax it to a linear program, by letting variables range between 0 and 1, and then hope for the best. “The best” being the lucky event that the value of the optimum of the relaxation is the same as that of the combinatorial program, or at least a close approximation. If one finds that, instead, the relaxation has optimal fractional solutions of cost very different from the combinatorial optimum, one may try to add further inequalities that are valid for 0/1 solutions and that rule out the problematic fractional solutions.
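The recipe in the previous paragraph can be sketched end-to-end on a small example (the graph is my own illustration, not from the text): the Vertex Cover LP relaxation on a triangle has fractional optimum 3/2 (set every variable to 1/2), while the best 0/1 cover has size 2; adding the odd-cycle inequality, which is valid for all 0/1 covers of a triangle, rules out the problematic fractional solution.

```python
import numpy as np
from scipy.optimize import linprog

# Vertex Cover LP relaxation on a triangle: min sum_v x_v
# subject to x_u + x_v >= 1 for each edge, 0 <= x_v <= 1.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3
c = np.ones(n)
A_ub = np.zeros((len(edges), n))
for i, (u, v) in enumerate(edges):
    A_ub[i, u] = A_ub[i, v] = -1.0   # -(x_u + x_v) <= -1
b_ub = -np.ones(len(edges))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1))
print(res.fun)   # 1.5: the all-1/2 point, while the best 0/1 cover has size 2

# Add the odd-cycle inequality x_0 + x_1 + x_2 >= 2, valid for 0/1 covers
# of a triangle; it cuts off the fractional optimum.
A_ub2 = np.vstack([A_ub, -np.ones((1, n))])
b_ub2 = np.append(b_ub, -2.0)
res2 = linprog(c, A_ub=A_ub2, b_ub=b_ub2, bounds=(0, 1))
print(res2.fun)  # 2.0, matching the combinatorial optimum
```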

Many “P=NP” papers follow this approach, usually by presenting a polynomial-size linear programming relaxation of TSP and then “proving” that the optimum of the relaxation is the same as the combinatorial optimum. One can find recent examples here and here.

Similar results were “proved” in a notorious series of papers by Ted Swart in the mid 1980s. After counterexamples were found, he would revise the paper, adding more inequalities that would rule out the counterexample.

Finally, Mihalis Yannakakis took matters into his own hands and proved that all “symmetric” relaxations of TSP of sub-exponential size have counterexamples on which the optimum of the relaxation is different from the combinatorial optimum. (All of the LPs suggested by Swart were “symmetric” according to Yannakakis’s definition.)

This is actually one of the few known lower bounds that applies to a “model of computation” capturing a general (and otherwise promising) class of algorithms.

(I first read about this story in Christos Papadimitriou’s complexity book, but I found the above references in Gerhard J Woeginger’s P versus NP page.)

In the theory of approximation algorithms, we have plenty of problems that are believed to be intractable but that are not known to be NP-hard, such as approximating Vertex Cover within a factor of 1.9, or approximating Sparsest Cut within a factor of 10. LP and Semidefinite Programming (SDP) approaches are more or less the only promising tools we have to prove that such problems are tractable and, while we wait for NP-hardness results (for now, we only have “Unique-Games-hardness”), it is good to see whether certain candidate LP and SDP algorithms have any chance of working, or whether they admit counterexamples showing large gaps between the optimum of the relaxation and the combinatorial optimum.

The problem with this approach is the nearly endless variety of relaxations that one can consider: what happens when we add triangle inequalities? and pentagonal inequalities? and so on. As in the case of Yannakakis’s result, it would be great to have a result that says “all SDP relaxations of Vertex Cover of type X fail to achieve an approximation ratio smaller than 2,” where “type X” is a general class of sub-exponential size SDP relaxations that include the type of inequalities that people use “in practice.”

Lovasz and Schrijver describe a method, denoted LS+, that starts from an LP relaxation of a problem (actually it can start from any convex relaxation), and then turns it into tighter and tighter SDP relaxations, by adding auxiliary variables and linear and semidefinite constraints. A weaker version of the method, denoted LS, only adds auxiliary variables and linear constraints.

A nice thing about the method is that, after you apply it to your initial relaxation, thus getting a tighter relaxation, you can then apply it again to the tighter one, thus getting an even better relaxation, and so on. Starting from an LP relaxation with $n$ variables and $\mathrm{poly}(n)$ constraints, $k$ applications of the method yield a relaxation solvable in $n^{O(k)}$ time, which is polynomial for all fixed $k$ and sub-exponential for $k = o(n/\log n)$. Lovasz and Schrijver prove that, after $k$ applications (or “rounds”), the resulting relaxation enforces all inequalities over $k$-tuples of variables that are valid for 0/1 solutions. (In particular, one gets the combinatorial optimum after $n$ rounds.) Typical approaches in the design of approximation algorithms use SDPs with local inequalities (triangle inequalities etc.), and this is all captured after a few rounds of LS+.

It would be great to show that no constant (ideally, no sublinear) number of rounds of LS+ starting from the basic LP relaxation gives a $2-\epsilon$ approximation for vertex cover. Arora, Bollobas and Lovasz considered related questions in a FOCS 2002 paper that has inspired a considerable amount of later work. (See the introduction of the journal version.) Unfortunately the question remains open even for two rounds of LS+. After one round, one gets an SDP relaxation equivalent to (number of vertices minus) the Lovasz Theta function, and Goemans and Kleinberg prove that such an SDP does not achieve an approximation better than 2. Beyond that, it is pretty much terra incognita. Charikar proves that a relaxation with triangle inequalities (which is incomparable with two rounds of LS+ and is weaker than three rounds) does not achieve an approximation better than 2. Also, a sublinear number of rounds of LS+ does not achieve an approximation better than 7/6. For LS, which, I remind you, generates *linear* programming relaxations rather than *semidefinite* programming ones, we know that no sublinear number of rounds leads to an approximation better than 2.

I will illustrate the main idea in the LS and LS+ methods using the example of Vertex Cover. In the linear programming formulation, we have variables $x_v$, one for each vertex $v$, the constraint $x_u + x_v \geq 1$ for each edge $(u,v)$, and the constraints $0 \leq x_v \leq 1$. We would like to add constraints that are only satisfied by 0/1 solutions, and the constraint $x_v^2 = x_v$ would work beautifully except that it is not linear (nor convex). Instead, Lovasz and Schrijver add new variables $y_{u,v}$, one for each pair of vertices, with the idea that we would like to have $y_{u,v} = x_u \cdot x_v$; then they add the requirement $y_{v,v} = x_v$ and various inequalities that make the $y_{u,v}$ be “sort of like” $x_u \cdot x_v$. In particular, we would like to require that if $x_v > 0$, then $(y_{u,v}/x_v)_u$ is a fractional vertex cover. This is again a non-linear constraint but, at least, we can check whether, for each fixed $v$, the vector $(y_{u,v}/x_v)_u$ is a fractional vertex cover: we just need to check the linear inequalities

$y_{a,v} + y_{b,v} \geq x_v$

for each edge $(a,b)$.

Similarly, if $x_v < 1$, we expect $\left( \frac{x_u - y_{u,v}}{1 - x_v} \right)_u$ to be a fractional vertex cover. We cannot check it directly, but we can check if the linear inequalities

$(x_a - y_{a,v}) + (x_b - y_{b,v}) \geq 1 - x_v$

hold for each edge $(a,b)$. Finally, we obviously want $y_{u,v} = y_{v,u}$. This describes the LS method. For LS+, we add the requirement that the symmetric matrix $(y_{u,v})_{u,v}$ be positive semidefinite. Being positive semidefinite means that there are vectors $(b_u)_u$ such that

$y_{u,v} = \langle b_u , b_v \rangle$

where $\langle \cdot , \cdot \rangle$ denotes inner product. If $y_{u,v} = x_u \cdot x_v$ then $(y_{u,v})_{u,v}$ is clearly positive semidefinite, being a rank-one Gram matrix.
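The conditions above can be checked mechanically for a candidate solution. Here is a minimal numpy sketch (the function name, graph, and candidate solution are made-up illustrations; for simplicity it checks the two conditioning inequalities for every vertex $v$, which intended 0/1 solutions satisfy anyway):

```python
import numpy as np

# Check the LS / LS+ conditions for Vertex Cover on a candidate pair (x, Y),
# where Y[u, v] plays the role of y_{u,v}.
def ls_plus_ok(x, Y, edges, tol=1e-9):
    n = len(x)
    if not np.allclose(Y, Y.T, atol=tol):         # y_{u,v} = y_{v,u}
        return False
    if not np.allclose(np.diag(Y), x, atol=tol):  # y_{v,v} = x_v
        return False
    for a, b in edges:
        for v in range(n):
            # "conditioning on x_v = 1": y_{a,v} + y_{b,v} >= x_v
            if Y[a, v] + Y[b, v] < x[v] - tol:
                return False
            # "conditioning on x_v = 0":
            # (x_a - y_{a,v}) + (x_b - y_{b,v}) >= 1 - x_v
            if (x[a] - Y[a, v]) + (x[b] - Y[b, v]) < 1 - x[v] - tol:
                return False
    # LS+ only: the matrix Y must be positive semidefinite.
    return bool(np.min(np.linalg.eigvalsh(Y)) >= -tol)

edges = [(0, 1), (1, 2), (0, 2)]   # a triangle
x = np.array([1.0, 1.0, 0.0])      # the 0/1 cover {0, 1}
Y = np.outer(x, x)                 # the intended y_{u,v} = x_u * x_v
print(ls_plus_ok(x, Y, edges))     # True: 0/1 covers always pass
```

Note that the all-1/2 fractional point on the triangle, with $Y$ again set to the outer product, already fails the diagonal condition $y_{v,v} = x_v$, since $x_v \cdot x_v = 1/4 \neq 1/2$.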