# In which way is computer science the opposite of commutative rings?

Find out in this month’s issue of the Notices of the AMS.

# On the Unique Games Conjecture (part 2)

[I am preparing a survey talk on Unique Games for a mathematics conference, and writing a survey paper for a booklet that will be distributed at the conference. My first instinct was to write a one-line paper that would simply refer to Subhash’s own excellent survey paper. Fearing that I might come off as lazy, I am instead writing my own paper. Here are some fragments. Comments are very welcome.]

1. Why is the Unique Games Conjecture Useful?

In a previous post we stated the Unique Games Conjecture and made the following informal claim, here rephrased in abbreviated form:

To reduce Label Cover to a graph optimization problem like Max Cut, we map variables to collections of vertices and we map equations to collections of edges; then we show how to “encode” assignments to variables as 2-colorings of vertices which cut a ${\geq c_1}$ fraction of edges, and finally (this is the hardest part of the argument) we show that given a 2-coloring that cuts a ${\geq c_2}$ fraction of edges, then

1. the given 2-coloring must be somewhat “close” to a 2-coloring coming from the encoding of an assignment and
2. if we “decode” the given 2-coloring to an assignment to the variables, such an assignment satisfies a noticeable fraction of equations.

Starting our reduction from a Unique Game instead of a Label Cover problem, we only need to prove (1) above, and (2) more or less follows for free.

To verify this claim, we “axiomatize” the properties of a reduction that only achieves (1): we describe a reduction mapping a single variable to a graph, such that assignments to the variable are mapped to good cuts, and somewhat good cuts can be mapped back to assignments to the variable. The reader can then go back to our analysis of the Max Cut inapproximability proof in the previous post, and see that the properties below are sufficient to implement the reduction.
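For intuition, here is a small simulation sketch of the kind of single-variable gadget described above, in the style of the Khot–Kindler–Mossel–O’Donnell Max Cut gadget (the specific parameters and function names here are my own illustration, not the post’s construction): a variable with $k$ labels maps to the hypercube ${\{-1,1\}^k}$, the encoding of label $i$ is the “dictator” cut ${f(x)=x_i}$, and edges join each point to an anti-correlated noisy copy. A dictator cut cuts close to a ${1-\epsilon}$ fraction of edges, while a cut not coming from any encoding (say, majority) cuts strictly less.

```python
import random

def sample_edge(k, eps):
    """Sample an edge (x, y) of the gadget on {-1,1}^k:
    y_i = -x_i with probability 1 - eps (anti-correlated noise)."""
    x = [random.choice([-1, 1]) for _ in range(k)]
    y = [-xi if random.random() < 1 - eps else xi for xi in x]
    return x, y

def cut_fraction(coloring, k, eps, trials=100_000):
    """Estimate the fraction of gadget edges cut by a 2-coloring."""
    cut = sum(1 for _ in range(trials)
              if coloring(*sample_edge(k, eps)[0:1]) != 0 and False) # placeholder
    # (straightforward loop, kept explicit for clarity:)
    cut = 0
    for _ in range(trials):
        x, y = sample_edge(k, eps)
        if coloring(x) != coloring(y):
            cut += 1
    return cut / trials

dictator = lambda x: x[0]                      # encoding of label 1
majority = lambda x: 1 if sum(x) > 0 else -1   # a cut from no encoding

random.seed(0)
print(cut_fraction(dictator, 3, 0.1))  # close to 1 - eps = 0.9
print(cut_fraction(majority, 3, 0.1))  # strictly smaller (about 0.86)
```

The gap between the two estimates is exactly what property (1) exploits: a cut doing noticeably better than any “generic” cut must be close to some dictator, i.e., to the encoding of an assignment.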

# Presenting a Beamer Talk the Right Way

I sometimes use Beamer (a LaTeX package) to prepare slides for conference or seminar presentations, and sometimes I use Keynote.

Keynote has a simple but very desirable feature: during the presentation, the laptop screen, instead of going blank or showing a copy of the current slide, shows the current slide, the next slide, and a timer. If you have ever used Keynote, you know how useful it is to have the time and the next slide always in front of you.

When a slide presentation is prepared with Beamer, the output is a pdf file which is then displayed using Acrobat Reader, or the OS X Preview application, and one gets a blank screen on the laptop during the presentation. Since pdf handling is built natively into OS X, and since a timer and a next-slide display are really simple things, I assumed there would be some program that would do a Keynote-style presentation from a pdf file.

Unfortunately, I wasn’t able to find any such thing for OS X. (Interestingly, there is a program for Windows that does that.)

Thankfully, Melissa O’Neill has done the next best thing, or maybe an equally good thing: a program that converts a pdf file into a Keynote file. So you can create your pdf with Beamer, then convert it to the Keynote format, and use Keynote to display the presentation.

Not the cleanest of workflows, but it works. Thanks, Melissa O’Neill!

# On the Unique Games Conjecture (part 1)

[I am preparing a survey talk on Unique Games for a mathematics conference, and writing a survey paper for a booklet that will be distributed at the conference. My first instinct was to write a one-line paper that would simply refer to Subhash’s own excellent survey paper. Fearing that I might come off as lazy, I am instead writing my own paper. Here is part 1 of some fragments. Comments are very welcome.]

Khot formulated the Unique Games Conjecture in a remarkably influential 2002 paper. In the subsequent eight years, the conjecture has motivated and enabled a large body of work on the computational complexity of approximating combinatorial optimization problems (the original context of the conjecture) and on the quality of approximation provided by “semidefinite programming” convex relaxations (a somewhat unexpected byproduct). Old and new questions in analysis, probability and geometry have played a key role in this development. Representative questions are:

• The central limit theorem explains what happens when we sum several independent random variables. What happens if, instead of summing them, we apply a low degree polynomial function to them?
• In Gaussian space, what is the body of a given volume with the smallest boundary?
• What are the balanced boolean functions whose value is most likely to be the same when evaluated at two random correlated inputs?
• What conditions on the Fourier spectrum of a function of several variables imply that the function essentially depends on only a few variables?
• With what distortion is it possible to embed various classes of finite metric spaces into $L_1$?
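To make the question about Fourier spectra concrete, here is a small illustrative computation (the function names are my own; this is the standard Walsh–Fourier expansion over ${\{-1,1\}^n}$, not a construction from the survey). A dictator function has its entire spectrum on a single singleton set, the extreme case of “depending on few variables,” while majority spreads its weight over all odd-size sets.

```python
from itertools import product
from math import prod

def fourier_coefficients(f, n):
    """Walsh-Fourier coefficients hat(f)(S) = E_x[ f(x) * chi_S(x) ]
    of a function f: {-1,1}^n -> {-1,1}, indexed by subsets S of [n]."""
    points = list(product([-1, 1], repeat=n))
    coeffs = {}
    for mask in product([0, 1], repeat=n):
        S = tuple(i for i in range(n) if mask[i])
        # chi_S(x) = product of x_i over i in S (empty product = 1)
        coeffs[S] = sum(f(x) * prod(x[i] for i in S) for x in points) / len(points)
    return coeffs

dictator = lambda x: x[0]
majority = lambda x: 1 if sum(x) > 0 else -1

# Dictator: all the Fourier weight is on the singleton {0} ...
print(fourier_coefficients(dictator, 3))
# ... while majority has weight 1/2 on each singleton and -1/2 on {0,1,2}.
print(fourier_coefficients(majority, 3))
```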

# A Circuit Lower Bound Breakthrough

One of the great embarrassments of research in circuit lower bounds has been the inability to prove any lower bounds for ACC0 circuits, that is, bounded-depth circuits with unbounded fan-in AND, OR, NOT, and general modular gates. (A “mod m” gate takes $n$ boolean inputs and its boolean output is 0 if and only if the number of ones among the inputs is a multiple of $m$. A “mod 2” gate is a XOR gate.)

There are two known methods to prove lower bounds for AC0 circuits (like the above, but without the modular gates). One is to argue that the function computed by a polynomial-size AC0 circuit is likely to become constant when one fixes several variables to random values; this implies that the XOR function is not in AC0, because it remains undetermined even if all but one variable are fixed. This method seems hopeless when applied to ACC0 circuits, and in fact I think that the class ACC0 was introduced as a way to “turn off” the random restriction method and to see what other approaches to constant-depth circuits remain. The other method is to approximate an AC0 circuit by a low-degree polynomial, and to show that certain functions cannot be approximated this way and hence are not in AC0. This method also works for ACC0 circuits provided that all modular gates use the same prime modulus, but it seems to break down completely when the modulus is composite.
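As a minimal sketch of the definitions involved (the function names are my own), here is a mod-$m$ gate and the restriction phenomenon for XOR: fixing all variables but one leaves XOR non-constant no matter how the fixed bits are chosen, which is exactly why random restrictions separate XOR from AC0.

```python
import random

def mod_gate(m, bits):
    """A "mod m" gate: output 0 iff the number of ones is a multiple of m.
    mod_gate(2, bits) is exactly the XOR of the bits."""
    return 0 if sum(bits) % m == 0 else 1

def restrict(f, n, free_index, assignment):
    """Fix every input of f except free_index according to assignment
    (a dict index -> bit), returning the restricted one-variable function."""
    def g(b):
        bits = [b if i == free_index else assignment[i] for i in range(n)]
        return f(bits)
    return g

n = 20
random.seed(0)
rho = {i: random.randint(0, 1) for i in range(n) if i != 0}

# XOR (= a MOD 2 gate) stays non-constant under any such restriction:
# flipping the one free bit always flips the output.
g = restrict(lambda bits: mod_gate(2, bits), n, 0, rho)
print(g(0), g(1))  # two different values

# An AND gate, by contrast, collapses to the constant 0 under the
# restriction unless every fixed bit happened to be set to 1.
h = restrict(lambda bits: int(all(bits)), n, 0, rho)
print(h(0), h(1))
```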

In particular, the following question was open: are all problems in $EXP^{NP}$ solvable by polynomial-size depth-3 circuits with unbounded fan-in AND, OR, NOT and MOD 6 gates?

(The class $EXP^{NP}$ contains by definition all the problems solvable in exponential time with oracle access to NP — exponentially long oracle queries are allowed.)

Finally, the embarrassment is over: in a remarkable paper, the first to put forward a viable new lower bound technique for bounded-depth circuits in two decades, Ryan Williams has proved that there are problems in NEXP (non-deterministic exponential time) that are not in ACC0, and that for every fixed depth $d$ there are problems in $EXP^{NP}$ not solvable by ACC0 circuits of depth $d$ and size smaller than $2^{n^\delta}$, for some $\delta = \delta(d)>0$.

# Cryptography at STOC/FOCS

It has been too long and now there is no point telling stories from the last two days of FOCS, such as the distinguished theoretician who “flew” on the zip-line on Fremont street and then got drunk at Double Down and asked the two big scary guys who were playing pool if they were twins (they said they were).

As soon as the videos of the conference go online, however, I would suggest that everybody who wasn’t there watch Dan Spielman’s invited talk, which was phenomenal. Dan talked about the problem of solving linear equations when the constraint matrix is the Laplacian of a graph, the result of a long-term collaboration with Shang-Hua Teng. Two major parts of this research program have gone on to become their own research area: