Survey on Hardness of Approximation

In 2004 I wrote a survey on hardness of approximation as a chapter for a book on approximation algorithms. I have just prepared a revised version for a new edition of the book.

While it would have been preferable to rethink the organization and start from scratch, because of time constraints I was only able to add sections on integrality gaps and on unique games, and to add references to more recent work (e.g. the combinatorial proof of the PCP theorem, the new 2-query PCPs, the tight results on minimizing congestion in directed networks, and so on).

If your (or your friend’s) important results are not cited, do let me know. The deadline to submit the chapter has long passed, but the threats from the editor haven’t yet escalated to the point where I feel that I have to submit it or else.

STOC 2009

The list of STOC 2009 accepted papers is out and, in a very welcome innovation, abstracts are included.

I am looking forward to a number of talks. Among many others:

  • Craig Gentry will present a construction of homomorphic encryption based on the hardness of a lattice problem. In a homomorphic encryption scheme, given the encryption of $x$ one can compute the encryption of $f(x)$ for a class of functions $f(\cdot)$, without knowing the decryption key and without gaining any partial information about $x$ (see the sketch after this list). According to the abstract, Gentry’s scheme works for all functions $f$ in $NC^1$. The existence of homomorphic encryption is one of the great open questions in cryptography. My guess had been that it is impossible for any reasonably large class of functions.

  • Ryan O’Donnell and Yi Wu have found a conditional solution (based on Khot’s “$d$-to-1” conjecture) to the problem of what is the best PCP with three queries and perfect completeness. This had been, for more than ten years, the only remaining open question on the power of three-query PCPs.

  • Klim Efremenko has a sub-exponential 3-query locally decodable code construction that, unlike Sergey Yekhanin’s, does not rely on the existence of arbitrarily large Mersenne primes.
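
As a reminder of what “homomorphic” means here (the notation below is mine, not taken from the abstract): a public-key scheme $(\mathrm{Gen}, \mathrm{Enc}, \mathrm{Dec}, \mathrm{Eval})$ is homomorphic for a class of functions $\mathcal{F}$ if ciphertexts remain semantically secure and, for every $f \in \mathcal{F}$ and every message $x$,

$$ \mathrm{Dec}\bigl(sk,\ \mathrm{Eval}(pk, f, \mathrm{Enc}(pk, x))\bigr) = f(x), $$

so that $\mathrm{Eval}$ can be carried out by someone who holds only the public key and a ciphertext, and who therefore learns nothing about $x$.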

Also, Reid Andersen and Yuval Peres have devised a new approach to local graph partitioning, and I have been told that their ideas are genuinely new and likely to be applicable to other problems. I have been meaning to understand (and maybe post about) local graph partitioning algorithms, but probably this will not happen until the end of the cryptography class.

Impagliazzo, Kabanets and Wigderson continue to explore “direct product” results, a staple of complexity theory which seemed settled in the early 1980s but which periodically generates new ideas and applications. In the new paper, some ideas from the beautiful STOC 2008 paper by Impagliazzo, Jaiswal, Kabanets and Wigderson are finding their way into PCP constructions. (A series of posts on direct products and “xor lemmas” is another of my post-CS276 plans.)

Terminology

Different communities have different traditions for terminology. Mathematicians appropriate common words, like ring, field, scheme, ideal, … and the modern usage of these terms bears no connection to the everyday meaning of the words. Physicists have a lot of fun with their sparticles and their strangeness and charm and so on. Theoretical computer scientists, like the military and NASA, prefer acronyms.

We have some isolated examples of felicitous naming. Expander, for example, is great: it sounds right and it is suggestive of the technical meaning. Extractor is my favorite, combining a suggestion of the meaning with a vaguely threatening sound. I think it’s too bad that seedless extractor has come to pass, because it evokes some kind of device to get grape juice. (I was on the losing side that supported deterministic extractor.)

Unfortunate namings are of course more common. Not only is the complexity class PP embarrassing to pronounce, but its name, derived from Probabilistic Polynomial time, is a poor description of it. By analogy with #P and $\oplus$P, it should be called MajP.
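
For concreteness, here is the majority-style characterization that a name like MajP would describe (one standard definition; conventions differ slightly on how ties are handled): a language $L$ is in PP if there are a polynomial-time computable predicate $V$ and a polynomial $p$ such that

$$ x \in L \iff \Pr_{r \in \{0,1\}^{p(|x|)}} \bigl[\, V(x,r) = 1 \,\bigr] > \frac{1}{2}, $$

that is, membership is decided by a strict majority vote over the random strings, just as #P counts the accepting ones and $\oplus$P takes their parity.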

I heard the story of a famous (and famously argumentative) computer scientist complaining to one of the authors of the PCP theorem about the term PCP, which stands for Probabilistically Checkable Proof. “I too can define a probabilistic checker for SAT certificates,” he supposedly said, “with probability half check the certificate, with probability half accept without looking at it.” The point being that the terminology emphasizes a shortcoming of the construction (the probabilistic verification) instead of the revolutionary feature (the constant query complexity). Personally, I would prefer Locally Testable Proof.

Of course we will never change the name of PP or PCP, and the seedless extractors are here to stay, but there is one terminology change for which I’d like to start a campaign.

In 1990, Naor and Naor constructed a pseudorandom generator whose output is secure against linear tests. They called such a generator $\epsilon$-biased if the distinguishing probability of every linear test is at most $\epsilon$. Such generators have proved to be extremely useful in a variety of applications, most recently in the Bogdanov-Viola construction of pseudorandom generators against degree-2 polynomials.
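
To fix notation (this is one standard formulation; conventions differ by a factor of two in where the $\epsilon$ sits): a distribution $X$ over $\{0,1\}^n$ is $\epsilon$-biased if for every nonempty $S \subseteq \{1, \dots, n\}$

$$ \Bigl|\, \Pr\Bigl[\textstyle\bigoplus_{i \in S} X_i = 0\Bigr] - \Pr\Bigl[\textstyle\bigoplus_{i \in S} X_i = 1\Bigr] \Bigr| \le \epsilon, $$

so that every linear test over GF(2) comes out nearly as balanced as it would under the uniform distribution.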

Shall we start calling such generators $\epsilon$-unbiased, seeing as it is the near lack of bias, rather than its presence, that is the defining feature of such generators?

(I know the reason for the Naor-Naor terminology: zero-bias generator makes perfect sense, while zero-unbiased makes no sense. But how about the fact that it is technically correct to say that the uniform distribution is $\frac {1}{10}$-biased?)
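
To make the parenthetical concrete, here is a small Python sketch (the function name and the representation of distributions are my own choices) that computes the maximum bias of a distribution against all nonempty linear tests; the uniform distribution comes out with bias $0$, so it does satisfy the $\frac{1}{10}$-biased condition, vacuously.

```python
from itertools import product

def max_bias(dist):
    """Maximum bias over all nonempty linear tests.

    dist maps n-bit tuples to probabilities summing to 1; the bias of a
    test S is |Pr[parity_S(X) = 0] - Pr[parity_S(X) = 1]|.
    """
    n = len(next(iter(dist)))
    best = 0.0
    for S in product([0, 1], repeat=n):
        if not any(S):
            continue  # skip the empty (constant) test
        # E[(-1)^<S, X>] = Pr[parity_S = 0] - Pr[parity_S = 1]
        e = sum(p * (-1) ** sum(s * x for s, x in zip(S, xs))
                for xs, p in dist.items())
        best = max(best, abs(e))
    return best

# The uniform distribution on {0,1}^3: every nonempty parity is balanced.
uniform = {xs: 1 / 8 for xs in product([0, 1], repeat=3)}
print(max_bias(uniform))  # 0.0, so it is "1/10-biased" in the technical sense
```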

[Update: earlier posts on the same topic here and here]