# On the Unique Games Conjecture (part 2)

[I am preparing a survey talk on Unique Games for a mathematics conference, and writing a survey paper for a booklet that will be distributed at the conference. My first instinct was to write a one-line paper that would simply refer to Subhash’s own excellent survey paper. Fearing that I might come off as lazy, I am instead writing my own paper. Here are some fragments. Comments are very welcome.]

1. Why is the Unique Games Conjecture Useful?

In a previous post we stated the Unique Games Conjecture and made the following informal claim, here rephrased in abbreviated form:

To reduce Label Cover to a graph optimization problem like Max Cut, we map variables to collections of vertices and we map equations to collections of edges; then we show how to “encode” assignments to variables as 2-colorings of vertices which cut a ${\geq c_1}$ fraction of edges, and finally (this is the hardest part of the argument) we show that given a 2-coloring that cuts a ${\geq c_2}$ fraction of edges, then

1. the given 2-coloring must be somewhat “close” to a 2-coloring coming from the encoding of an assignment and
2. if we “decode” the given 2-coloring to an assignment to the variables, such an assignment satisfies a noticeable fraction of equations.

Starting our reduction from a Unique Game instead of a Label Cover problem, we only need to prove (1) above, and (2) more or less follows for free.

To verify this claim, we “axiomatize” the properties of a reduction that only achieves (1): we describe a reduction mapping a single variable to a graph, such that assignments to the variable are mapped to good cuts, and somewhat good cuts can be mapped back to assignments to the variable. The reader can then go back to our analysis of the Max Cut inapproximability proof in the previous post, and see that the properties below are sufficient to implement the reduction.

Definition 1 (${(c_1,c_2)}$-Graph Family) A ${(c_1,c_2)}$ graph family is a collection of graphs ${G_m = (V_m,E_m)}$, for each positive integer ${m}$, together with an encoding function ${Enc_m : \{1,\ldots,m\} \rightarrow 2^{V_m}}$ and a randomized decoding process ${Dec_m : 2^{V_m} \rightarrow \{1,\ldots,m\}}$ such that

• For every ${m}$ and every ${i\in \{1,\ldots,m\}}$, let ${S_i := Enc_m(i)}$. Then the partition ${(S_i,V_m-S_i)}$ cuts at least a ${c_1}$ fraction of the edges of ${G_m}$;

• If ${(S,V_m-S)}$ is a partition of the vertices of ${G_m}$ that cuts at least a ${c_2 + \delta}$ fraction of the edges, then there is an index ${i\in \{1,\ldots,m\}}$ such that

$\displaystyle \mathop{\mathbb P} [ Dec_m (S) = i ] \geq p(\delta)$

where ${p(\delta) > 0}$ is a positive quantity independent of ${m}$;

• The encoding and decoding procedures are symmetric. That is, it is possible to define an action of the symmetric group of ${\{1,\ldots,m\}}$ on ${V_m}$: for every ${x\in V_m}$ and every bijection ${\pi : \{1,\ldots,m \} \rightarrow \{1,\ldots,m\}}$ we define an element ${x\circ \pi \in V_m}$ (for a set ${S}$, we define ${\pi(S)}$ to be ${\{ x\circ \pi : x\in S\}}$), such that for every ${i\in \{1,\ldots,m\}}$ and every bijection ${\pi: \{1,\ldots,m\} \rightarrow \{1,\ldots, m \}}$ we have

$\displaystyle Enc_m ({\pi(i)}) = \pi (Enc_m ({i} ))$

and

$\displaystyle Dec_m (\pi (S)) \approx \pi ( Dec_m (S))$

where ${D_1 \approx D_2}$ means that ${D_1}$ and ${D_2}$ have the same distribution.

We claim that, in the previous post, we defined a ${(1-\epsilon, 1 - \frac 2\pi \sqrt{\epsilon})}$ graph family, and that this was sufficient to prove the intractability of approximating Max Cut under the Unique Games Intractability Conjecture.

The graph family is the following. For a given ${m}$:

1. The vertex set is ${V_m := \{ 0,1 \}^m}$;
2. The graph is a weighted complete graph with edges of total weight ${1}$. The weight of edge ${(x,y)}$ is the probability of generating the pair ${(x,y)}$ by sampling ${x}$ uniformly at random and sampling ${y}$ from the distribution ${N_{1-\epsilon}(x)}$;
3. ${Enc_m(i)}$ defines the cut ${(S_i,V_m-S_i)}$ in which ${S_i}$ is the set of all vertices ${x}$ such that ${x_i = 1}$;

4. ${Dec_m(S)}$ proceeds as follows. Define ${f(x) := -1}$ if ${x\in S}$ and ${f(x) := 1}$ if ${x\not \in S}$. Compute the Fourier expansion

$\displaystyle f(x) = \sum_R \hat f(R) (-1)^{\sum_{i \in R} x_i}$

Sample a set ${R}$ with probability proportional to ${\hat f^2(R)}$ (by Parseval's identity, the values ${\hat f^2(R)}$ of a ${\pm 1}$-valued function sum to ${1}$, so they form a probability distribution), and then output a random element of ${R}$.
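To make the decoding procedure concrete, here is a small Python sketch (indices are 0-based rather than ${\{1,\ldots,m\}}$, the function names are mine, and the resampling of the empty set is a choice of convenience; in the analysis, the empty set carries little Fourier weight when the cut is good):

```python
import random
from itertools import combinations

def fourier_coefficients(f, m):
    """f: dict mapping a bit tuple x to +-1. Returns {frozenset R: hat f(R)}."""
    points = [tuple((x >> j) & 1 for j in range(m)) for x in range(2 ** m)]
    coeffs = {}
    for size in range(m + 1):
        for R in combinations(range(m), size):
            s = sum(f[x] * (-1) ** sum(x[j] for j in R) for x in points)
            coeffs[frozenset(R)] = s / 2 ** m
    return coeffs

def dec(S, m, rng=random):
    """Decode a cut S (a set of vertices, each a bit tuple) to an index in {0,...,m-1}."""
    f = {}
    for x in range(2 ** m):
        xt = tuple((x >> j) & 1 for j in range(m))
        f[xt] = -1 if xt in S else 1
    coeffs = fourier_coefficients(f, m)
    # By Parseval, the squared coefficients of a +-1 function sum to 1,
    # so they form a probability distribution over sets R.
    sets, weights = zip(*((R, c * c) for R, c in coeffs.items()))
    while True:
        R = rng.choices(sets, weights=weights)[0]
        if R:  # the empty set carries no index; resample
            return rng.choice(sorted(R))

m = 3
i = 1
# Enc(i): the "dictator" cut, all vertices x with x_i = 1
S_i = {tuple((x >> j) & 1 for j in range(m)) for x in range(2 ** m) if (x >> i) & 1}
# For a dictator cut, all Fourier weight sits on R = {i}, so decoding returns i.
print(dec(S_i, m))  # 1
```

On a dictator cut, ${f(x) = (-1)^{x_i}}$, so ${\hat f(\{i\}) = 1}$ and all other coefficients vanish: the decoder is deterministic in this case, which matches the first property of the graph family.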

2. Semidefinite Programming

Solving an instance of a combinatorial optimization problem of maximization type is a task of the form

$\displaystyle \begin{array}{l} \max cost (z)\\ \mbox{subject to}\\ z\in Sol \end{array} \ \ \ \ \ (1)$

where ${Sol}$ is the set of admissible solutions and ${cost(z)}$ is the cost of solution ${z}$. For example, the problem of finding the maximum cut in a graph ${G=(V,E)}$ is a problem of the above type where ${Sol}$ is the collection of all subsets ${S\subseteq V}$, and ${cost(S)}$ is the number of edges cut by the vertex partition ${(S,V-S)}$.

If ${Sol \subseteq Rel}$, and ${cost' : Rel \rightarrow {\mathbb R}}$ is a function that agrees with ${cost()}$ on ${Sol}$, then we call the problem

$\displaystyle \begin{array}{l} \max cost'(z)\\ \mbox{subject to}\\ z\in Rel\end{array} \ \ \ \ \ (2)$

a relaxation of the problem in (1). The interest in this notion is that combinatorial optimization problems in which the solution space is discrete are often NP-hard, while there are general classes of optimization problems defined over a continuous convex solution space that can be solved in polynomial time. A fruitful approach to approximating combinatorial optimization problems is thus to consider relaxations to tractable convex optimization problems, and then argue that the optimum of the relaxation is close to the optimum of the original discrete problem.

The Unique Games Intractability Conjecture appears to be deeply related to the approximation quality of Semidefinite Programming relaxations of combinatorial optimization problems.

2.1. Semidefinite Programming

A symmetric matrix ${A}$ is positive semidefinite, written ${A \succeq {\bf 0}}$, if all its eigenvalues are non-negative. We write ${A \succeq B}$ if ${A-B}$ is positive semidefinite. We quote without proof the following facts:

• A matrix ${A\in {\mathbb R}^{n \times n}}$ is positive semidefinite if and only if there are vectors ${v^{1},\ldots,v^n \in {\mathbb R}^m}$ such that for every ${i,j}$ we have ${A_{ij} = \langle v^i,v^j \rangle}$. Furthermore, there is an algorithm of running time polynomial in ${n}$ that, given a matrix ${A}$, tests whether ${A}$ is positive semidefinite and, if so, finds vectors ${v^1,\ldots,v^n}$ as above.

• The set of positive semidefinite matrices is a convex subset of ${{\mathbb R}^{n\times n}}$. In fact, it is a convex cone, that is, for every two positive semidefinite matrices ${A,B}$ and non-negative scalars ${\alpha,\beta}$, the matrix ${\alpha A + \beta B}$ is positive semidefinite.
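As an illustration of the first fact, the Gram vectors ${v^1,\ldots,v^n}$ of a positive semidefinite matrix can be computed by a Cholesky-style factorization. Here is a minimal pure-Python sketch (real SDP solvers use numerically robust routines; the function name and tolerance are my own choices):

```python
import math

def gram_vectors(A, tol=1e-9):
    """Given a symmetric PSD matrix A, return vectors v[0..n-1] (rows of L)
    with A[i][j] = <v[i], v[j]>. Raises ValueError on a negative pivot,
    which certifies that A is not positive semidefinite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                if s < -tol:
                    raise ValueError("matrix is not positive semidefinite")
                L[i][i] = math.sqrt(max(s, 0.0))
            else:
                # zero pivot: for a PSD matrix s is then ~0 as well
                L[i][j] = s / L[j][j] if L[j][j] > tol else 0.0
    return L

A = [[2.0, 1.0], [1.0, 2.0]]  # PSD: eigenvalues 1 and 3
V = gram_vectors(A)
dot = lambda u, w: sum(a * b for a, b in zip(u, w))
# the inner products of the recovered vectors reproduce A
print([[round(dot(V[i], V[j]), 6) for j in range(2)] for i in range(2)])
```

Running the same routine on, say, ${\left[\begin{smallmatrix}1&2\\2&1\end{smallmatrix}\right]}$ (which has a negative eigenvalue) raises the error, so the factorization doubles as the polynomial-time PSD test mentioned above.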

It is often the case that optimizing a linear function over a convex subset of ${{\mathbb R}^N}$ is a polynomial time solvable problem and, indeed, under mild assumptions (such as a known bound on the feasible region), there are polynomial time algorithms that solve the following problem to any desired accuracy:

Definition 2 (Semidefinite Programming) The Semidefinite Programming problem is the following computational problem: given matrices ${C,A^1,\ldots,A^m \in {\mathbb R}^{n \times n}}$ and scalars ${b_1,\ldots,b_m \in {\mathbb R}}$, find a matrix ${X}$ that solves the following optimization problem (called a semidefinite program):

$\displaystyle \begin{array}{l} \max C \bullet X\\ \mbox{subject to}\\ A^1 \bullet X \leq b_1\\ A^2 \bullet X \leq b_2\\ \cdots\\ A^m \bullet X \leq b_m\\ X \succeq {\bf 0} \end{array} \ \ \ \ \ (3)$

where we use the notation ${A \bullet B := \sum_{ij} A_{ij} \cdot B_{ij}}$.

In light of the characterization of positive semidefinite matrices described above, the semidefinite program (3) can be equivalently written as

$\displaystyle \begin{array}{l} \max \sum_{ij} C_{ij} \cdot \langle v^i ,v^j \rangle \\ \mbox{subject to}\\ \sum_{ij} A^1_{ij} \cdot \langle v^i,v^j \rangle \leq b_1\\ \sum_{ij} A^2_{ij} \cdot \langle v^i,v^j \rangle \leq b_2\\ \cdots\\ \sum_{ij} A^m_{ij} \cdot \langle v^i,v^j \rangle \leq b_m\\ v^1,\ldots,v^n \in {\mathbb R}^n \end{array} \ \ \ \ \ (4)$

That is, as an optimization problem in which we are looking for a collection ${v^1,\ldots,v^n}$ of vectors that optimize a linear function of their inner products subject to linear inequalities about their inner products.

2.2. Semidefinite Programming and Approximation Algorithms

A quadratic program is an optimization problem in which we are looking for reals ${x_1,\ldots,x_n}$ that optimize a quadratic form subject to quadratic inequalities, that is, an optimization problem that can be written as

$\displaystyle \begin{array}{l} \max \sum_{ij} C_{ij} \cdot x_i \cdot x_j \\ \mbox{subject to}\\ \sum_{ij} A^1_{ij} \cdot x_i \cdot x_j \leq b_1\\ \sum_{ij} A^2_{ij} \cdot x_i \cdot x_j \leq b_2\\ \cdots\\ \sum_{ij} A^m_{ij} x_i \cdot x_j \leq b_m\\ x_1,\ldots,x_n \in {\mathbb R} \end{array} \ \ \ \ \ (5)$

Since the quadratic condition ${x\cdot x = 1}$ can only be satisfied if ${x\in \{ -1,1 \}}$, quadratic programs can express discrete optimization problems. For example, the Max Cut problem in a graph ${G=(V,E)}$, where ${V=\{ 1,\ldots,n\}}$, can be written as a quadratic program in the following way:

$\displaystyle \begin{array}{l} \max \sum_{ij \in E} \frac 12 - \frac 12 x_i \cdot x_j\\ \mbox{subject to}\\ x_1^2 = 1\\ \cdots\\ x_n^2 = 1\\ x_1,\ldots,x_n \in {\mathbb R} \end{array} \ \ \ \ \ (6)$
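As a sanity check that (6) computes the maximum cut, one can enumerate all ${\pm 1}$ assignments on a small graph (the choice of the 5-cycle below is arbitrary):

```python
from itertools import product

def qp_value(edges, x):
    # objective of (6): each edge contributes 1/2 - 1/2 * x_i * x_j,
    # i.e. 1 if the edge is cut (x_i != x_j) and 0 otherwise
    return sum(0.5 - 0.5 * x[i] * x[j] for i, j in edges)

# 5-cycle: an odd cycle is not bipartite, so its maximum cut has 4 of the 5 edges
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
best = max(qp_value(edges, x) for x in product((-1, 1), repeat=5))
print(best)  # 4.0
```

The brute-force enumeration is exponential in ${n}$, of course; the point of the relaxation discussed next is to replace it with a tractable problem.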

Every quadratic program has a natural Semidefinite Programming relaxation in which we replace reals ${x_i}$ with vectors ${v^i}$ and we replace products ${x_i \cdot x_j}$ with inner products ${\langle v^i,v^j \rangle}$. Applying this generic transformation to the quadratic programming formulation of Max Cut we obtain the following semidefinite programming formulation of Max Cut

$\displaystyle \begin{array}{l} \max \sum_{ij \in E} \frac 12 - \frac 12\langle v^i,v^j \rangle\\ \mbox{subject to}\\ \langle v^1,v^1 \rangle = 1\\ \cdots\\ \langle v^n,v^n \rangle = 1\\ v^1,\ldots,v^n \in {\mathbb R}^n \end{array} \ \ \ \ \ (7)$

The Max Cut relaxation (7) is the one used by Goemans and Williamson.
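For the triangle ${K_3}$, the optimum of (7) is achieved by three unit vectors in the plane at mutual angle ${120°}$, giving value ${3 \cdot (\frac 12 + \frac 14) = 2.25}$, while the true maximum cut is ${2}$. Here is a sketch of the Goemans-Williamson random hyperplane rounding on this instance (the vectors are hard-coded rather than produced by a solver):

```python
import math, random

# unit vectors at 120 degrees: the optimal solution of (7) for the triangle
v = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3)) for k in range(3)]
edges = [(0, 1), (1, 2), (2, 0)]
dot = lambda u, w: u[0] * w[0] + u[1] * w[1]

sdp_value = sum(0.5 - 0.5 * dot(v[i], v[j]) for i, j in edges)
print(round(sdp_value, 6))  # 2.25, above the true max cut of 2

def gw_round(rng=random):
    """Goemans-Williamson rounding: partition by the sign of a random hyperplane."""
    theta = rng.uniform(0, 2 * math.pi)
    g = (math.cos(theta), math.sin(theta))
    x = [1 if dot(v[i], g) >= 0 else -1 for i in range(3)]
    return sum(1 for i, j in edges if x[i] != x[j])

cuts = [gw_round() for _ in range(2000)]
# On this symmetric instance every halfplane separates one vector from the
# other two, so the rounded cut is always 2, the true optimum; in general
# GW only guarantees expected cut >= .878... times the SDP value.
print(max(cuts))  # 2
```

The gap between ${2.25}$ and ${2}$ on this tiny instance already hints at the integrality gap phenomenon discussed in Section 2.4.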

Algorithms based on semidefinite programming provide the best known polynomial-time approximation guarantees for a number of other graph optimization problems and constraint satisfaction problems.

2.3. Semidefinite Programming and Unique Games

The quality of the approximation of Relaxation (7) for the Max Cut problem exactly matches the intractability results proved assuming the Unique Games Intractability Conjecture. The same is true for a number of other optimization problems.

Remarkably, Prasad Raghavendra has shown that for a class of problems (which includes Max Cut as well as boolean and non-boolean constraint satisfaction problems), there is a semidefinite programming relaxation such that, assuming the Unique Games Intractability Conjecture, no other polynomial time algorithm can provide a better approximation than that relaxation.

If one believes the conjecture, this means that the approximability of all such problems has been resolved, and a best-possible polynomial time approximation algorithm has been identified for each such problem. An alternative view is that, in order to contradict the Unique Games Intractability Conjecture, it is enough to find a new algorithmic approximation technique that works better than semidefinite programming for any of the problems that fall into Raghavendra’s framework, or to find a different semidefinite programming relaxation that works better than the one considered in Raghavendra’s work.

2.4. Sparsest Cut, Semidefinite Programming, and Metric Embeddings

If, at some point in the future, the Unique Games Intractability Conjecture is refuted, then some of the theorems that we have discussed will become vacuous. There are, however, a number of unconditional results that have been discovered because of the research program that originated from the conjecture, and that would survive a refutation.

First of all, the analytic techniques developed to study reductions from Unique Games could become part of future reductions from Label Cover or from other variants of the PCP Theorem. As discussed above, reductions from Unique Games give ways of encoding values of variables of a Label Cover instance as good feasible solutions in the target optimization problems, and ways of decoding good feasible solutions in the target optimization problems as values for the variables of the Label Cover instance.

It is also worth noting that some of the analytic techniques developed within the research program of Unique Games have broader applicability. For example the impetus to prove the Invariance Theorem of Mossel, O’Donnell and Oleszkiewicz came from its implications for conditional inapproximability results, but it settles a number of open questions in social choice theory.

Perhaps the most remarkable unconditional theorems motivated by Unique Games regard integrality gaps of Semidefinite Programming relaxations. The integrality gap of a relaxation of a combinatorial optimization problem is the worst-case (over all instances) ratio between the optimum of the combinatorial problem and the optimum of the relaxation. The integrality gap measures how well the optimum of the relaxation approximates the true optimum, and it is usually a bottleneck for the quality of approximation algorithms based on the relaxation.

The integrality gap of Relaxation (7) is ${.8785\cdots}$, the same as the hardness of approximation result proved assuming the Unique Games Intractability Conjecture. Indeed, the graph that exhibits the ${.8785\cdots}$ gap is related to the graph used in the reduction from Unique Games to Max Cut. This is part of a larger pattern discovered by Raghavendra (cited above), who shows that, for a certain class of optimization problems, every integrality gap instance for certain semidefinite programming relaxations can be turned into a conditional inapproximability result assuming the Unique Games Intractability Conjecture.

The Sparsest Cut problem, described in the previous post, has a Semidefinite Programming relaxation, first studied by Goemans and Linial, whose analysis is of interest even outside of the area of approximation algorithms. A metric space ${(X,d)}$ is of negative type if ${(X,\sqrt d)}$ is also a metric space and is isometrically embeddable in Euclidean space. If every ${n}$-point metric space of negative type can be embedded into ${L1}$ with distortion at most ${c(n)}$, then the Semidefinite Programming relaxation of Goemans and Linial can be used to provide a ${c(n)}$-approximation algorithm for Sparsest Cut, where ${n}$ is the number of vertices, and the integrality gap of the relaxation is at most ${c(n)}$. Equivalently, if there is an ${n}$-vertex instance of Sparsest Cut exhibiting an integrality gap of at least ${c(n)}$, then there is an ${n}$-point negative-type metric space that cannot be embedded into ${L1}$ without incurring distortion at least ${c(n)}$.

Interestingly, there is a generalization of the Sparsest Cut problem, the Non-uniform Sparsest Cut problem, for which the converse is also true, that is, the integrality gap of the Goemans-Linial Semidefinite Programming relaxation of the Non-uniform Sparsest Cut problem for graphs with ${n}$ vertices is ${\leq c(n)}$ if and only if every ${n}$-point negative-type metric space can be embedded into ${L1}$ with distortion at most ${c(n)}$.
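The negative-type condition is easy to test for a small finite metric: by a classical criterion of Schoenberg, ${(X,\sqrt d)}$ embeds isometrically into Euclidean space if and only if the matrix ${M_{ij} = \frac 12 (d(x_0,x_i) + d(x_0,x_j) - d(x_i,x_j))}$, for an arbitrary fixed base point ${x_0}$, is positive semidefinite. A small Python sketch (the brute-force principal-minor PSD test is only sensible for tiny matrices):

```python
from itertools import combinations

def det(M):
    # Laplace expansion along the first row; fine for tiny matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_psd(M, tol=1e-9):
    # a symmetric matrix is PSD iff all its principal minors are non-negative
    n = len(M)
    return all(det([[M[i][j] for j in idx] for i in idx]) >= -tol
               for size in range(1, n + 1) for idx in combinations(range(n), size))

def schoenberg_gram(d, base=0):
    """Matrix whose PSD-ness is equivalent, by Schoenberg's criterion, to
    (X, sqrt(d)) embedding isometrically into Euclidean space."""
    idx = [i for i in range(len(d)) if i != base]
    return [[0.5 * (d[base][i] + d[base][j] - d[i][j]) for j in idx] for i in idx]

# points 0, 1, 3 on the line: an L1 metric, hence of negative type
pts = [0, 1, 3]
d_line = [[abs(a - b) for b in pts] for a in pts]
print(is_psd(schoenberg_gram(d_line)))  # True

# a symmetric "dissimilarity" that fails the criterion (not a metric: it
# badly violates the triangle inequality, and is here only to exercise the check)
d_bad = [[0, 1, 1], [1, 0, 5], [1, 5, 0]]
print(is_psd(schoenberg_gram(d_bad)))  # False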

It had been conjectured by Goemans and Linial that the integrality gap of the semidefinite relaxations of Sparsest Cut and Non-Uniform Sparsest Cut was at most a constant. Arora, Rao and Vazirani proved in 2004 that the Sparsest Cut relaxation had integrality gap ${O(\sqrt {\log n})}$, and Arora, Lee and Naor proved in 2005 that Non-Uniform Sparsest Cut relaxation had integrality gap ${O(\sqrt{\log n} \cdot \log\log n)}$, results that were considered partial progress toward the Goemans-Linial conjecture.

Later in 2005, however, Khot and Vishnoi proved that the relaxation of Non-Uniform Sparsest Cut has an integrality gap ${(\log\log n)^{\Omega(1)}}$ that goes to infinity with ${n}$. Their approach was to:

1. Prove that the Non-Uniform Sparsest Cut problem does not have a constant-factor approximation, assuming the Unique Games Intractability Conjecture, via a reduction from unique games to non-uniform sparsest cut;
2. Prove that a natural Semidefinite Programming relaxation of Unique Games has integrality gap ${(\log\log n)^{\Omega(1)}}$;
3. Show that applying the reduction in (1) to the Unique Games instance in (2) produces an integrality gap instance for the Goemans-Linial Semidefinite Programming relaxation of Non-Uniform Sparsest Cut.

In particular, Khot and Vishnoi exhibit an ${n}$-point negative-type metric space that requires distortion ${(\log\log n)^{\Omega(1)}}$ to be embedded into ${L1}$. This has been a rather unique approach to the construction of counterexamples in metric geometry. The lower bound was improved to ${\Omega(\log\log n)}$ by Krauthgamer and Rabani, and the following year, Devanur, Khot, Saket and Vishnoi showed that even the Sparsest Cut relaxation has an integrality gap ${\Omega(\log\log n)}$.

Cheeger, Kleiner and Naor have recently exhibited a ${(\log n)^{\Omega(1)}}$ integrality gap for Non-Uniform Sparsest Cut, via very different techniques.

3. Algorithms for Unique Games

When Khot introduced the Unique Games Conjecture, he also introduced a Semidefinite Programming relaxation. Charikar, Makarychev and Makarychev provide a tight analysis of the approximation guarantee of that Semidefinite Program, showing that, given a unique game with range ${\Sigma}$ in which a ${1-\epsilon}$ fraction of the equations can be satisfied, it is possible to find in polynomial time a solution that satisfies at least a ${1/\Sigma^{O(\epsilon)}}$ fraction of constraints.

This is about as good as can be expected, because earlier work had shown that if the Unique Games Intractability Conjecture holds, then there is no polynomial time algorithm able to satisfy a ${1/\Sigma^{o_\Sigma(\epsilon)}}$ fraction of constraints in a unique game with range ${\Sigma}$ in which a ${(1-\epsilon)}$ fraction of equations is satisfiable. Furthermore, the analysis of Charikar, Makarychev and Makarychev is (unconditionally) known to be tight for the specific Semidefinite Programming relaxation used in their algorithm because of the integrality gap result of Khot and Vishnoi discussed in the previous section.

Recently, Arora, Barak and Steurer have devised an algorithm that, given an instance of unique games in which it is possible to satisfy a ${1-\epsilon}$ fraction of equations, satisfies a constant fraction of the equations in time ${2^{n^{O(\epsilon)}}}$. Although this result is far from refuting the Unique Games Intractability Conjecture, it casts some doubt on the Unique Games NP-hardness Conjecture. The following stronger form of the ${P\neq NP}$ conjecture is generally considered to be very likely: that for every NP-hard problem there is a ${c>0}$ such that the problem cannot be solved with worst-case running time faster than ${2^{n^c}}$, where ${n}$ is the size of the input. This means that if the running time of the Arora-Barak-Steurer algorithm could be improved to ${2^{n^{o(1)}}}$ for a fixed ${\epsilon}$, the Unique Games NP-hardness Conjecture would be in disagreement with the above conjecture about NP-hard problems, and would have to be considered unlikely.
