Knuth Prize to Avi Wigderson


Congratulations to the Knuth prize committee chaired by Avrim Blum for the excellent choice of awarding the 2019 Knuth prize to Avi Wigderson.

Avi has worked on all aspects of computational complexity theory, and he has had a transformative influence on the way theoretical computer science relates to pure mathematics. I will not repeat what I wrote about his work on the occasion of his 60th birthday here and here. Long-term readers of in theory will remember that I consider him one of the saints of computational complexity.

The organizers of the upcoming FOCS would like me to remind you that the deadline is this Friday, and that someone, for some reason, has set up a fake submission site (on the domain aconf dot org); the real submission site (which, to be honest, looks less legit than the fake one) is at focs19.cs.utexas.edu.

Also, the deadline to submit nominations for the inaugural FOCS test of time award is in three weeks. There will be three awards: one for papers that appeared in FOCS 1985-89, one for papers from FOCS 1995-99, and one for papers from FOCS 2005-09.

On an unrelated note, GMW appeared in FOCS 1986 and the Nisan-Wigderson “Hardness versus randomness” paper appeared in FOCS 1988.

Avi60: Zero Knowledge for all NP

1982 was the annus mirabilis of the foundations of cryptography. In their paper "Probabilistic Encryption," Goldwasser and Micali introduced two rigorous definitions of security for encryption, which they proved to be equivalent. One definition requires the distributions of encryptions of any two messages to be computationally indistinguishable (a concept they introduce in the paper); the other, which they call semantic security, is the property that whatever can be efficiently computed about a message given the ciphertext can also be efficiently computed without the ciphertext. Later the same year, Blum and Micali gave a rigorous definition of security for pseudorandom generators, and Yao wrapped all these results in a more general framework requiring generic, rather than number-theoretic, assumptions.
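For reference, computational indistinguishability is nowadays usually stated as follows (a paraphrase in modern notation, not the exact formulation of the 1982 paper): two distribution ensembles $\{X_n\}$ and $\{Y_n\}$ are computationally indistinguishable if for every probabilistic polynomial-time distinguisher $D$ there is a negligible function $\nu$ such that

$$\bigl|\,\Pr[D(1^n, X_n)=1] - \Pr[D(1^n, Y_n)=1]\,\bigr| \le \nu(n).$$

Semantic security of an encryption scheme then amounts to the encryptions of any two (equal-length) messages being computationally indistinguishable.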

The concept of semantic security inspired most subsequent definitions, and proofs, of security based on the concept of simulation. Instead of trying to specify a list of things that an adversary should not be able to do, one defines an idealized model in which the adversary has no access to private and encrypted data, and one defines a given system to be secure if whatever an attacker can efficiently compute given the ability to eavesdrop (and possibly mount an active attack) can also be efficiently computed in the ideal model. One then proves a system to be secure by exhibiting, for every real-world adversary, a simulator in the ideal model.

Together with Rackoff, Goldwasser and Micali took this idea one step further, from encryption to interactive communication, and came up with the idea of zero-knowledge proofs. A zero-knowledge proof is a probabilistic proof system in which a prover can convince a verifier, with high confidence, of the truth of a statement, with the additional property that there is a simulator able to sample from the distribution of the verifier's views of the interaction. Thus the verifier is convinced of the truth of the statement being proved, but gains no additional information. In their paper, Goldwasser, Micali and Rackoff introduce the concept and present a zero-knowledge proof for a conjecturally intractable number-theoretic problem. The paper was famously rejected several times before eventually appearing in 1985.
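In modern notation (again a paraphrase, glossing over auxiliary inputs and the distinction between computational and statistical zero knowledge), the zero-knowledge condition says that for every probabilistic polynomial-time verifier strategy $V^*$ there is a probabilistic polynomial-time simulator $S$ such that, for every true statement $x$,

$$\mathrm{View}_{V^*}\bigl[P(x) \leftrightarrow V^*(x)\bigr] \;\approx\; S(x),$$

where the left-hand side is the distribution of everything $V^*$ sees during the interaction (its random coins and the messages it receives), and $\approx$ denotes computational indistinguishability.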

The following year, Goldreich, Micali and Avi Wigderson published a paper giving zero-knowledge proof systems for all problems in NP. Their work made zero-knowledge proofs a key tool in the design of secure cryptosystems: it was now possible for a party to publish a commitment to a secret x and then, at any time, prove that x has a certain property without releasing any additional information about x. This ability was a key ingredient in the development of secure multi-party computation in 1987, by the same authors.

So how does one prove in zero knowledge that, say, a graph is 3-colorable? (Once you have zero-knowledge proofs for one NP-complete problem, you immediately have them for all problems in NP.)

Suppose the prover and the verifier know a graph G and the prover knows a 3-coloring. A physical analog of the protocol (which can be implemented using the notion of commitment schemes) is the following: the prover randomizes the color labels, then takes |V| lockboxes, each labeled by a vertex, and puts a piece of paper with the color of vertex v in the lockbox labeled by v, for every v. The prover locks all the lockboxes, and sends them to the verifier. The verifier picks a random edge (u,v) and asks for the keys of the lockboxes for u and for v. If they contain different colors, the verifier accepts, otherwise it rejects.

The protocol is complete, in the sense that if the graph is 3-colorable and the parties follow the protocol, then the verifier accepts with probability 1.

The protocol is sound, in the sense that if the graph is not 3-colorable, then, no matter what the prover does, there will have to be some edge (u,v) whose two lockboxes contain the same color, and the verifier has probability at least 1/|E| of picking such an edge and rejecting. Thus the verifier accepts with probability at most 1 - 1/|E|; after k independent repetitions the probability of always accepting is at most (1 - 1/|E|)^k, which can be made negligibly small.

As for the zero-knowledge property, the view of the verifier is the choice of a random edge, two open lockboxes corresponding to the endpoints of the edge, containing two random different colors, and |V|-2 unopened lockboxes. A view with such a distribution can be easily sampled, and the same is true when the physical implementation is replaced by a commitment scheme. (Technically, this argument only establishes honest-verifier zero knowledge, and a bit more work is needed to capture the more general property.)
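Here is a toy simulation of the lockbox protocol (my own illustration, in Python, with a dictionary standing in for the lockboxes instead of a real commitment scheme; all names are made up for this sketch):

```python
import random

def zk_3coloring_round(edges, coloring):
    """One round of the protocol. `edges` is a list of edges and `coloring`
    maps each vertex to a color in {0,1,2} (the prover's secret witness).
    Returns True if the verifier accepts this round."""
    # Prover: pick a random relabeling of the three colors...
    perm = random.sample([0, 1, 2], 3)
    # ...and "lock" each vertex's relabeled color in its box.
    # (A real implementation would commit cryptographically here.)
    lockboxes = {v: perm[c] for v, c in coloring.items()}
    # Verifier: pick a random edge and open only its two lockboxes.
    u, v = random.choice(edges)
    return lockboxes[u] != lockboxes[v]

def accept(edges, coloring, repetitions=200):
    """A cheating prover for a non-3-colorable graph survives all rounds
    with probability at most (1 - 1/|E|)^repetitions."""
    return all(zk_3coloring_round(edges, coloring) for _ in range(repetitions))

# Example: a triangle, which is 3-colorable, with an honest prover.
print(accept([(0, 1), (1, 2), (2, 0)], {0: 0, 1: 1, 2: 2}))  # True
```

The zero-knowledge property shows up in the fact that what the verifier sees in a round is just a random edge together with two distinct random colors, a distribution that anyone can sample without knowing the coloring.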

Avi60

The festivities in honor of Avi Wigderson’s 60th birthday start tomorrow in Princeton, with a dream team of speakers. I will not be able to attend, but luckily a livestream will be available.

During the week, I will post a random selection of results of Avi’s.

Did you know that Avi’s first paper was an algorithm to color 3-colorable graphs using O(\sqrt n) colors? Here is the algorithm, which has the flavor of Ramsey theory proofs.

If all nodes have degree < \sqrt n, then we can easily (greedily) color the graph with \sqrt n colors. Otherwise, there is a node v of degree \geq \sqrt n. The neighbors of v induce a bipartite graph (because, in the 3-coloring that we are promised to exist, they use only the two colors different from the color of v), so in linear time we can find an independent set of size \geq \sqrt n / 2, namely the larger side of the bipartition. We keep finding such independent sets (each of which we assign a color to and remove) of size \geq \sqrt n / 2 until all remaining degrees are below \sqrt n, at which point the residual graph can be colored with \leq \sqrt n colors. Since each removed set has size \geq \sqrt n / 2, at most 2 \sqrt n sets are removed, so the whole graph is colored with \leq 3 \sqrt n colors.
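Here is a short Python rendering of this argument (my own sketch, not code from the paper; the greedy step and the BFS 2-coloring of the bipartite neighborhood are the standard routines one would plug in):

```python
import math
from collections import deque

def bipartite_independent_set(graph, vertices):
    """2-color the subgraph induced by `vertices` (assumed bipartite)
    and return the larger color class, which is an independent set."""
    color, classes = {}, [set(), set()]
    for s in vertices:
        if s in color:
            continue
        color[s] = 0
        classes[0].add(s)
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in graph[u] & vertices:
                if w not in color:
                    color[w] = 1 - color[u]
                    classes[color[w]].add(w)
                    queue.append(w)
    return max(classes, key=len)

def wigderson_color(graph):
    """graph: dict mapping each vertex to its set of neighbors, promised
    to be 3-colorable. Returns a proper coloring with O(sqrt(n)) colors."""
    n = len(graph)
    threshold = max(1, math.isqrt(n))
    remaining, coloring, next_color = set(graph), {}, 0
    while True:
        # Phase 1: find a vertex of high degree in the residual graph.
        v = next((u for u in remaining
                  if len(graph[u] & remaining) >= threshold), None)
        if v is None:
            break
        # Its neighborhood is bipartite; color and remove the larger side.
        indep = bipartite_independent_set(graph, graph[v] & remaining)
        for u in indep:
            coloring[u] = next_color
        next_color += 1
        remaining -= indep
    # Phase 2: all residual degrees are < sqrt(n); greedy with fresh colors.
    for u in remaining:
        used = {coloring[w] for w in graph[u] if w in coloring}
        c = next_color
        while c in used:
            c += 1
        coloring[u] = c
    return coloring

# Example: a 5-cycle, which is 3-colorable.
C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(wigderson_color(C5))
```

Phase 1 uses at most 2\sqrt n colors (each removed set has \geq \sqrt n / 2 vertices) and phase 2 at most \sqrt n more, matching the bound above.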

On Paul Erdős and Kim Kardashian

This year is the centennial of Paul Erdős’s birth. Erdős lived most of his adult life as a traveling mathematician, “couchsurfing,” as we would say now, from place to place and from mathematical conference to mathematical conference. He wrote more than 1,500 papers with more than 500 different coauthors, introduced the probabilistic method and was the defining figure of the “Hungarian approach” to combinatorics. He died at age 83 while attending a mathematical conference.

Last year, we celebrated the centennial of Alan Turing’s birth. Turing and Erdős have become iconic figures, both for the impact of their work and for the fascinating facts of their lives. I would like to argue that the cultural archetype through which we interpret their lives is that of the saint. It is clearly that of the martyr saint in the case of Turing, while Erdős gave up material possessions and devoted his life to others, traveling everywhere and “preaching” to everybody, much in the mold of Saint Francis.

(A comparison of the Turing centennial celebration and Erdős’s, and a look at the frescoes of Medieval Catholic churches will show which kind of saint people are more interested in.)

The first step to become a saint of the Catholic church is to establish that the person exhibited “heroic virtues,” which is a great expression. This is an archetype that is not restricted to religion: you see it occurring in communist propaganda (Stakhanov, Lei Feng) and in every civil rights movement.

Saints were the “celebrities” of the Middle Ages, those whose lives people liked to talk about. But contemporary celebrities come from a totally different archetype, that of the Greek god. Greek (and Roman) gods were petty and jealous, they cheated on their spouses, they were terrible parents, but there were good stories to be told about them. We don’t want (at least, I don’t) to live the life of a saint, but thinking about saints is certainly inspirational, and it makes us think that if someone can be so much better than us, maybe we can be a little better ourselves in the practice of “virtues”, whatever this may mean to us. And we don’t admire gods, but, well, it’s probably fun to be one.

As usual, I have lost track of what I was trying to say, but I think that it speaks well of the academic community that we are more interested in saints than in gods. I will close by saying that my favorite saint of complexity theory is Avi Wigderson; I will keep to myself who my favorite god of complexity theory is, and I will leave it to the readers to contribute their picks.

The Next Viral Videos

Back in August, Boaz Barak and Moses Charikar organized a two-day course on additive combinatorics for computer scientists in Princeton. Boaz and Avi Wigderson spoke on sum-product theorems and their applications, and I spoke on techniques in the proofs of Szemeredi’s theorem and their applications. As an Australian model might say, that’s interesting!

Videos of the talks are now online. The quality of the audio and video is quite good; you’ll have to decide for yourself on the quality of the lectures. The schedule of the event was grueling, and in my last two lectures (on Gowers uniformity and applications) I am not very lucid. In the earlier lectures I am merely sleep-deprived, and I can be seen falling asleep in front of the board a few times. Boaz’s and Avi’s lectures, on the other hand, are flawless.

Pseudorandomness and more pseudorandomness

The first talk of the morning is by Terry Tao. Tim Gowers, in his essay The Two Cultures of Mathematics, explains the difference between depth and generality in combinatorics versus other areas of pure math. In combinatorics, one is unlikely to find deep definitions (have you ever tried to understand the statements of the conjectures in the Langlands program?) or very general theorems. It is more usual to find generality in the form of principles that the practitioners of the subject are aware of and that are exemplified in the proofs of certain theorems. (I hope I have not distorted and oversimplified Gowers’s point too much.)

Tao’s talk very explicitly points out the principle that emerges from his work on arithmetic progressions. The idea is that one can often define notions of pseudorandomness and structure such that one has a theorem of the following kind:

  • Dichotomy: if an object is not pseudorandom, then it has a large structured component.

For example, in the setting of looking for arithmetic progressions of length 3, a subset of {1,…,N} is “structured” if it has a large Fourier coefficient, and it is pseudorandom if it has approximately the same number of length-3 progressions as a random subset of {1,…,N} of the same density.
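To make this concrete (my notation, working in $\mathbb{Z}_N$ with $N$ odd rather than in $\{1,\dots,N\}$, to keep the formulas clean), one counts progressions with the trilinear form

$$\Lambda_3(f,g,h) \;=\; \mathop{\mathbb{E}}_{x,d \in \mathbb{Z}_N}\bigl[f(x)\,g(x+d)\,h(x+2d)\bigr],$$

and a set $A$ of density $\delta$ is pseudorandom in the above sense when $\Lambda_3(1_A,1_A,1_A) \approx \delta^3$. A standard Fourier computation gives

$$\bigl|\Lambda_3(1_A,1_A,1_A) - \delta^3\bigr| \;\le\; \max_{r \neq 0} \bigl|\widehat{1_A}(r)\bigr|,$$

which is exactly the link between the two notions: no large nontrivial Fourier coefficient forces roughly the random count of progressions.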

By iterative arguments, one then has theorems of the form

  • Decomposition: every object can be written as a “sum” of a pseudorandom part and a structured part

This is very important in the proof of the Green-Tao theorem on arithmetic progressions in the primes, but it can be seen, implicitly, in the Szemeredi regularity Lemma and in all proofs of Szemeredi’s theorem.

The proof typically goes like this: if our object is pseudorandom, we are done; otherwise we find a large structured “piece” and we “subtract” it. If what remains is pseudorandom, we are fine; otherwise we subtract another structured part, and so on.

The advantage of this type of decomposition (whether it is done explicitly or it is implicit in the proof) is that one can use techniques from analysis and probability to analyse the pseudorandom part and (depending on the definition of “structure”) techniques from analysis and geometry to analyse the structured part. This is how the proof of the Green-Tao theorem ends up using analysis, ergodic theory, combinatorics, and so on.
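As a toy illustration of the decomposition (entirely my own, not something from the talk), one can literally split a subset of Z_N into the part carried by its large Fourier coefficients and a remainder all of whose coefficients are small; the threshold eps below is an arbitrary parameter of this sketch:

```python
import numpy as np

def decompose(indicator, eps=0.1):
    """indicator: 0/1 array of length N, the indicator of a subset of Z_N.
    Returns (structured, pseudorandom), which sum to the balanced
    (mean-zero) version of the indicator."""
    N = len(indicator)
    f = indicator - indicator.mean()            # balanced function
    fhat = np.fft.fft(f) / N                    # normalized Fourier coefficients
    large = np.abs(fhat) >= eps                 # the "structured" frequencies
    structured = np.fft.ifft(np.where(large, fhat, 0) * N).real
    pseudorandom = f - structured               # all coefficients below eps
    return structured, pseudorandom

# Example: a progression-like set has large Fourier coefficients, so most
# of its mass goes to the structured part.
N = 1024
A = np.zeros(N)
A[::3] = 1                                      # every third residue
s, p = decompose(A)
print(np.abs(np.fft.fft(p) / N).max())          # below eps by construction
```

In the iterative arguments described above, the reason one can stop after boundedly many subtractions is an “energy” argument: each structured piece that gets subtracted accounts for a definite amount of L^2 mass.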

Often, one can use geometry and algebra only provided that the object is “exactly” structured, while results of the above type may only give “approximate” structure. To bridge this gap one employs results of the type:

  • Rigidity: an object that is “approximately structured” is “close” to an object that is structured

These are results in the spirit of property testing. (And he mentions property testing in his talk.)

This is all quite abstract, but reading papers in the area (or approaching new problems) with this point of view in mind is very helpful. He also proceeds to summarize the proof of the Green-Tao theorem in terms of the above perspective.

By the way, earlier in the talk, about the general importance of understanding pseudorandomness, Tao mentions expander graphs and the P versus BPP question. (As context, not as specific problems that he is going to work on. For the time being, we are safe.)

Overall, the talk is a model of clarity and insight.

After the break, Avi Wigderson gives his talk on P, NP and Mathematics. Yesterday, Avi told me that, of all the material that he covers in his 50-page proceedings paper, he had decided to talk about pseudorandomness and derandomization, from the perspective that randomness seems to help in computation, but complexity theory suggests that it does not. I despaired, because this is exactly my talk, but luckily Avi’s talk has a somewhat complementary emphasis on topics. (So I won’t have to spend the next two days remaking my slides, and I can go see the Prado tomorrow.)

Avi explains the “hardness versus randomness” principle in terms that could not be clearer, and one can see heads nodding as he goes along (nodding in agreement, not nodding off). He ends with the following question about the complexity of the permanent: is there a linear transformation L mapping n×n matrices to k×k matrices such that (i) for every matrix M, the determinant of L(M) equals the permanent of M, and (ii) k is only polynomial in n? (The answer is supposed to be NO.) From known results one can derive that k need not be more than exp(O(n)), and an Omega(n^2) lower bound is also known.
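In symbols (just a restatement of the question as posed, with my own notation; the underlying field, say $\mathbb{C}$, is left implicit): is there

$$L:\mathbb{C}^{n\times n}\to\mathbb{C}^{k\times k}\ \text{linear}, \qquad \det(L(M))=\mathrm{per}(M)\ \text{ for every }M, \qquad k=n^{O(1)}\,?$$

The known bounds mentioned above are that $k \le \exp(O(n))$ always suffices and that $k=\Omega(n^2)$ is necessary.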

This is to say that one can work on complexity theory without having to learn about such things as algorithms, Turing machines and so on, and that purely algebraic questions are completely open and very related to complexity questions.

Going to STOC and FOCS

With the exception of STOC in Montreal, which I could not attend because of a visa problem, I have been to every STOC and every FOCS since STOC’97. The official (and historic) purpose of scientific conferences has something to do with the rapid dissemination of research results. In reality, most papers presented at STOC/FOCS have long been disseminated by the time the conference takes place. There are many other reasons, however, why I like going to these conferences and why I think they are important.

For one thing, they keep our community together. Over time, the scope of theoretical computer science increases, as we become interested in more and more subjects, like error-correcting codes, quantum computing, game theory, and so on. I think it is very important that we have a single place to go to, where results on all these subjects are talked about, and where people can notice connections between what they do and the new things they hear about. Avi has already discussed very eloquently the need to keep in touch with what other theoreticians are doing, so I will refrain from repeating his points. I just want to say: can we have single sessions in STOC too?

This STOC in Seattle had one of the strongest programs among conferences that I can remember. There were perhaps four papers each of which would have easily been considered the best paper at some earlier FOCS/STOC. Indeed, the quality of most papers was outstanding.

Most talks were very good too. James Lee says that the reason he goes to talks is that often the speaker will say one sentence that gives some insight that would have been very hard to extract from the paper. That sentence, he says, is what makes the talk worthwhile. He is quite right, and, indeed, someone who goes to several talks and who writes often on the web should collect these sentences and put them online. The 20-minute format helps. It is very hard to prepare such a short talk, but the time constraint puts some pressure to cut to the chase and just say what the paper is about and what new ideas are in the proof. Among several other talks that I enjoyed (such as Anup Rao’s and Irit Dinur’s), I liked Guy Kindler’s talk on this very technical result. I can’t say how well he explained the problem, because I was already familiar with it, but I think his explanation was very clear. With a few minutes to spare, he started to talk about their very technical proof and he gave an explanation for “why” the result is true that was very clear and that I had not gotten from reading the paper and thinking about related questions.

Normally, another reason why I enjoy going to FOCS/STOC is to see my far-away friends that I get to meet only once in a while, and to catch up with what is new in life and in theory. In this conference, however, I went to talks in almost all the sessions, except when jet lag made me oversleep. (Hence I missed the session on zero knowledge and Russell Impagliazzo’s invited talk, which were highly praised by those who attended.) In fact, a few times, there were two talks that I was interested in that were scheduled against each other. Bad parallel sessions, bad, bad, parallel sessions!

By the way, I am aware of a problem here: if there are single sessions, then fewer papers can be accepted, and if I thought that almost all papers were so good, how can I support a system that would have led to the rejection of some of them? Well, by now my readers know that coherence is not the strong point of my posts. I don’t know, maybe we should have a four-day conference. Or maybe the good rejected papers would be resubmitted to FOCS in Berkeley. Perhaps, by random fluctuations, the quality of the other submitted papers will not be so high, and things will even out.

Which reminds me: people who attended the business meeting at this STOC and at the previous FOCS might have some doubts, but I hear that, yes, there will really be a FOCS’06 in Berkeley.

P, NP, and Mathematics

Avi Wigderson has just posted the paper based on his forthcoming ICM plenary lecture.

I suggest you stop doing whatever you are doing (I mean, come on, you are reading a blog, you can’t be too busy…) and read it now.

In August, Avi will bring the good news (the lieta novella) of the P versus NP question (and of complexity theory in general) to, literally, thousands of mathematicians: ICM 2002 in Beijing was attended by more than 4,000 people, and many of them went for the conference, not just for the food.

This will be an important turning point in how other mathematicians look at what we do. Even now, among the latest generation of pure mathematicians, those who like problem-solving more than theory-building (and who naturally gravitate toward analysis, combinatorics, analytic number theory and probability) can be very sophisticated in their appreciation of complexity, and, I think, they see it as one of those parts of pure mathematics that one should know something about, even if one does not work on it.

When Terence Tao (analyst, number theorist, and combinatorialist of “primes in arithmetic progression” fame and likely 2006 Fields Medalist) visited Berkeley, we talked for some time and he asked a few questions. “When you have a subset of the hypercube whose Fourier spectrum is concentrated on a few Fourier coefficients, is there a way to find them without looking at the whole hypercube?” Yes, it’s the Goldreich-Levin theorem (he wanted to see the proof; we went through it in about five minutes). “If P!=NP, would this settle the question of whether there are problems that are hard on average?” It’s unlikely. “Is it plausible to use Fourier analytic techniques to study the circuit complexity of functions?” Not for general circuits, because of Razborov-Rudich (he had heard of it before, and he wanted to know what exactly the statement of Razborov-Rudich is).

If you have enough time to read my Beijing restaurant reviews, you also have enough time to read this article by 1998 Fields Medalist Tim Gowers titled “on the importance of mathematics.” The P versus NP question figures quite prominently. While you are at it, you may try to find the video of his talk at the Clay Institute web site (the link from Gowers’ page is broken) [Update: the talk is available at http://claymath.msri.org/gowers2000.mov] on the same topic. And, I don’t mean to boss you around, but you should also buy and read this book, a masterpiece of mathematical exposition for the general public. (Circuit complexity and NP-completeness are featured.)