For the excellent choice of recognizing Noam Nisan for his work on complexity lower bounds, derandomization, and mechanism design.

Noam is known to readers of *in theory* for the development of the Nisan-Wigderson pseudorandom generator construction, which remains at the foundation of conditional derandomization results, and for Nisan’s generator, which is secure against log-space statistical tests, and whose seed length has not been improved upon in the past 25+ years. The modern definition of randomness extractors was given in a paper of Noam and David Zuckerman, which was also motivated by space-bounded derandomization.

Besides introducing almost all the techniques used in the main results on derandomization and pseudorandomness, Noam also introduced many of the techniques that underlie the main lower bound results that we can prove in restricted models, including the idea of approximating functions by polynomials, of looking at partial derivatives to obtain arithmetic lower bounds, and the connection between rank and communication complexity. With Linial and Mansour, he showed that the Håstad switching lemma could be used to bound the Fourier coefficients of functions computable by bounded-depth circuits, leading to quasi-polynomial-time learning algorithms for them (and to the realization that bounded-depth circuits cannot realize pseudorandom functions).
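The rank–communication connection mentioned above can be seen in a toy computation. A standard fact is that the deterministic communication complexity of $f(x,y)$ is at least $\log_2$ of the rank of its communication matrix; the sketch below (the equality-function example and the helper `matrix_rank` are mine, chosen for illustration, not taken from the post) applies it to the equality function on $n$-bit inputs, whose matrix is the $2^n \times 2^n$ identity and so has full rank.

```python
import math
from fractions import Fraction

def matrix_rank(M):
    """Rank over the rationals, computed by Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    rank, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        # Find a pivot in column c at or below the current rank row.
        pivot = next((r for r in range(rank, rows) if M[r][c] != 0), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        # Eliminate column c from every other row.
        for r in range(rows):
            if r != rank and M[r][c] != 0:
                f = M[r][c] / M[rank][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

n = 3
# Communication matrix of equality on n-bit inputs: M[x][y] = 1 iff x == y,
# i.e. the 2^n x 2^n identity matrix, which has rank 2^n.
M_eq = [[1 if x == y else 0 for y in range(2 ** n)] for x in range(2 ** n)]
r = matrix_rank(M_eq)
lower_bound = math.ceil(math.log2(r))  # rank 8 gives a lower bound of n = 3 bits
print(r, lower_bound)
```

The full-rank matrix forces any deterministic protocol for equality to exchange at least $n$ bits, matching the intuition that Alice must essentially reveal her whole input.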

On November 27, 1989, Noam sent an email to a group of colleagues with a proof that (a decision problem equivalent to) the permanent had a multi-prover interactive proof; this set in motion a flurry of activity which led in a matter of days to the LFKN paper showing that $P^{\# P}$ had a (one-prover) interactive proof and to Shamir’s proof that $IP = PSPACE$.

At the end of the 1990s, having done enough to keep the computational complexity community occupied for several subsequent decades, Noam wrote a paper with Amir Ronen called *Algorithmic mechanism design.* Around the same time, Elias Koutsoupias and Christos Papadimitriou published their work on worst-case equilibria and Tim Roughgarden and Eva Tardos published their work on selfish routing. A couple of years later, Christos gave back-to-back invited talks on game theory at SODA 2001, STOC 2001, ICALP 2001 and FOCS 2001, and algorithmic game theory and algorithmic mechanism design have gone on to become a major part of theoretical computer science in the years since.

Congratulations again to the prize committee, and please use the comments section to talk about the result of Noam’s that I didn’t know well enough to write about.

Congrats to Noam! Excellent news.

Very well deserved! Congrats, Noam.

@Luca: I think you may mean Elias Koutsoupias instead of Xiaotie Deng.

Indeed! Apologies to both of them and to the readers

In another interesting paper titled “Approximate Inclusion-Exclusion”, Linial and Nisan study how well one can approximate the size of a union of n sets knowing only the sizes of their k-wise intersections for some k. They show that a threshold exists around k = Theta(sqrt{n}): for k = omega(sqrt{n}) one can get a 1 + o(1) approximation, whereas for k = o(sqrt{n}) one can’t get any constant factor approximation.