Congratulations

I was really delighted with all the prizes that were announced at STOC this year.

Our own Pasin Manurangsi received the Danny Lewin STOC Student Paper Award for his work on the hardness of the densest k-subgraph problem. This is the problem in which we are given a graph and a number k, and we want to find the set of k vertices that induces the most edges. Pasin, who is co-advised by Prasad Raghavendra and me, discovered a new, simple but ingenious reduction that establishes hardness up to almost polynomial factors.

I received the same award exactly twenty years ago, also for a hardness-of-approximation result established via a simple reduction. (Prasad also received it, nine years ago, for a hardness-of-approximation result established via a difficult reduction.) I then spent time at MIT, where Oded Goldreich was, and, partly thanks to his influence, I did my best work there. Pasin is spending this summer at Weizmann, where Oded Goldreich is, so, no pressure, but let's see what happens...


Alistair Sinclair received the ACM SIGACT Distinguished Service Award, for his work setting up and leading the Simons Institute for the Theory of Computing.

Those who have been to the institute, that is, almost the whole theoretical computer science community, have seen that it is a place uniquely conducive to doing good work. If you stop to think about what it is that makes it so, Alistair's hand is behind it. The open layout of the second floor, with the whiteboards dividing the space and absorbing sound? Alistair worked closely with the architect, for a year, during the renovation, to make sure that the design would best fit the needs of our community. The friendly, competent and responsive staff? Alistair sat in on all the interviews when the staff was recruited, and participates in their performance reviews. So many things happening and never a conflict? You know whom to thank.

More substantially, almost all the programs that we have had were, to some extent, solicited, and Alistair led the conversations and negotiations with all prospective organizers, shepherding promising concepts to approved programs.

Alistair has also been relentless in making people do things, and making them do things by prescribed deadlines, something that is notoriously difficult in our community. The Simons Institute programs have succeeded, in part, because of the tremendous amount of volunteer work that the organizers donated to our community, and while they would all have been extremely generous with their time in any case, Alistair made sure that they were extra generous. A personal anecdote: I was one of the organizers of one of the Fall 2013 inaugural programs. At that point, I was at Stanford and we were beginning to discuss the idea that I could come back to Berkeley. At some point, around October, I get a phone call from Alistair, and I assume he wants to talk about it. Instead, he goes “you know, I haven’t been seeing you much at the Institute so far. We expect organizers to be around a lot more.” A few months later, I got the offer to move to Berkeley, with a 50% affiliation at the Institute. Even knowing who my boss would be, I enthusiastically accepted.

Oded Goldreich received the Knuth Prize. I have already said how I feel about Oded, so there is no need to repeat myself, but I will add that I am also really happy for the Knuth Prize itself, which has managed to consistently make really good choices for the past 21 years, an outstanding record.

Finally, and I can't believe that it took so long, the paper of Dwork, McSherry, Nissim and Smith that introduced differential privacy has been recognized with the Gödel Prize. I am very happy for them, especially for my matron of honor and former neighbor Cynthia.

Congratulations to all, and by all I don't mean just the aforementioned awardees, but also our whole community, which nurtures so many great people, inspires so many good ideas, and makes being part of it such a joy (even when Alistair makes me do things).


Chariots of Fire: Silvio Micali on Oded Goldreich and Scientific Collaborations

At the aforementioned Oded Fest that took place at Weizmann a couple of weeks ago, Silvio Micali read from an epic prepared speech, which tied together the early work on foundations of cryptography, ancient Greece, the Renaissance, Viennese cafés, and the movies “Chariots of Fire” and “The Seven Samurai.”

Silvio has given his kind permission to share the speech, and he has put it in PDF form, including the pictures that he used as slides.

Here it is

Fests

I have been in Israel for the last couple of days attending an event in honor of Oded Goldreich's 60th birthday.

Oded has touched countless lives, with his boundless dedication to mentoring, executed with a unique mix of tough love and good humor. He embodies a purity of vision in the pursuit of the “right” definitions, the “right” conceptual point of view and the “right” proofs in the areas of theoretical computer science that he has transformed with his work and his influence.

A turning point in my own work in theoretical computer science came when I found this paper online in the Spring of 1995. I was a second-year graduate student in Rome, and I was interested in working on PCP-based hardness of approximation, but this seemed like an impossible goal for me. Following the publication of ALMSS, there had been an avalanche of work between 1992 and 1995, mostly in the form of extended abstracts that were impossible to understand without an awareness of a context that was, at that point, purely an oral tradition. The aforementioned paper, instead, was a 100+ page monster that explained everything. Studying that paper gave me an entrance into the area.

Three years later, while I was a postdoc at MIT and Oded was there on sabbatical, he played a key role in the series of events that led me to prove that one can get extractors from pseudorandom generators, and it was he who explained to me that this was, in fact, what I had proved. (Initially, I thought my argument was just proving a much less consequential result.) For the most part, it was this result that got me a good job and that is paying my mortgage.

Like me, there are countless people who started to work in a certain area of theoretical computer science because of a course that Oded taught or a set of lecture notes that he wrote, and countless people whose work was made possible by Oded nudging, or usually shoving, them along the right path.

The last two days have felt a bit like going to a wedding, and not just because I saw friends that I do not get to see too often and because there was a lot to eat and to drink. A wedding is a celebration of the couple getting married, but it is also a public event in which friends and family, by affirming their bonds to the newlyweds, also affirm their bonds to each other.

I was deeply moved by the speeches given by Silvio and Shafi, and really everybody did a great job at telling Oded stories and bringing to life various aspects of his work and personality. But perhaps the most fittingly weird tribute was Benny Chor presenting the Chor-Goldreich paper (the one that introduced min-entropy as a measure of randomness for weak random sources, and the problem of 2-source extraction) using the original 1985 slides.


Speaking of public celebrations, there is less than a month left to register for STOC 2017, the “Theory Fest” that will take place in Montreal in June.

Coincidence?

“Art imitates life, but life imitates bad TV” (Woody Allen)

The citation for a major alumni award given by U.C. Berkeley is for "excellence in achievement."

Meanwhile, in the episode "Brother, can you spare two dimes?", Mr. Burns has to come up on the spot with the name for a fake award, and he comes up with an award for "outstanding achievement in the field of excellence."

 

(You’ll note that the dancers in the video are wearing gold and blue)

What’s New at the Simons Institute

This Thursday (December 15) is the deadline to apply for post-doctoral positions at the Simons Institute for the next academic year. In Fall 2017 we will run a single program, instead of two programs in parallel, on optimization. The program will be double the size of our typical programs, and will focus on the interplay between discrete and continuous methods in optimization. In Spring 2018 there will be a program on real-time decision-making and one on theoretical neuroscience.

In a few weeks, our Spring 2017 programs will start. The program on pseudorandomness will have a mix of complexity theorists and number theorists, and it will happen at the same time as a program on analytic number theory at MSRI. It will start with a series of lectures in the third week of January. The program on foundations of machine learning has been much anticipated, and it will also start with a series of lectures, in the fourth week of January, which will bring to Berkeley an impressive collection of machine learning practitioners whose research is both applied and rigorous. All lectures will be streamed live, and I encourage you to set up viewing parties.

During the current semester, the Simons Institute had for the first time a journalist in residence. The inaugural resident journalist has been Erica Klarreich, whom many of you probably know for her outstanding writing on mathematics and computer science for Quanta Magazine.

Next semester, our resident journalist will be Pulitzer Prize winner John Markoff, who just retired from the New York Times.

Avi60: Zero Knowledge for all NP

1982 was the annus mirabilis of the foundations of cryptography. In their paper "Probabilistic Encryption," Goldwasser and Micali introduced two rigorous definitions of security for encryption, which they proved to be equivalent. One definition required the distributions of encryptions of any two messages to be computationally indistinguishable (a concept they introduce in the paper); the other, which they call semantic security, is the property that whatever can be efficiently computed about a message given the ciphertext can also be efficiently computed without the ciphertext. Later the same year, Blum and Micali gave a rigorous definition of security for pseudorandom generators, and Yao wrapped all these results in a more general framework requiring generic, rather than number-theoretic, assumptions.

The concept of semantic security inspired most subsequent definitions, and proofs, of security based on the concept of simulation. Instead of trying to specify a list of things that an adversary should not be able to do, one defines an idealized model in which the adversary has no access to private and encrypted data, and one defines a given system to be secure if whatever an attacker can efficiently compute given the ability to eavesdrop (and possibly mount an active attack) can also be efficiently computed in the ideal model. One then proves a system to be secure by developing a simulator in the ideal model for every real-world adversary.

Together with Rackoff, Goldwasser and Micali took this idea one step further from encryption to interactive communication, and came up with the idea of Zero-Knowledge Proofs. A zero-knowledge proof is a probabilistic proof system in which a prover can convince a verifier, with high confidence, of the truth of a statement, with the additional property that there is a simulator that is able to sample from the distribution of the verifier's views of the interaction. Thus the verifier is convinced of the truth of the statement being proved, but gains no additional information. In their paper, Goldwasser, Micali and Rackoff introduce the concept and present a zero-knowledge proof for a conjecturally intractable number-theoretic problem. The paper was famously rejected several times, eventually appearing in 1985.

The following year, Goldreich, Micali and Avi Wigderson published a paper giving zero-knowledge proof systems for all problems in NP. Their work made zero-knowledge proofs a key tool in the design of secure cryptosystems: it was now possible for a party to publish a commitment to a secret x and then, at any time, be able to prove that x has a certain property without releasing any additional information about x. This ability was a key ingredient in the development of secure multi-party computation in 1987, by the same authors.

So how does one prove in zero knowledge that, say, a graph is 3-colorable? (Once you have zero-knowledge proofs for one NP-complete problem, you immediately have them for all problems in NP.)

Suppose the prover and the verifier know a graph G and the prover knows a 3-coloring. A physical analog of the protocol (which can be implemented using the notion of commitment schemes) is the following: the prover randomizes the color labels, then takes |V| lockboxes, each labeled by a vertex, and puts a piece of paper with the color of vertex v in the lockbox labeled by v, for every v. The prover locks all the lockboxes, and sends them to the verifier. The verifier picks a random edge (u,v) and asks for the keys of the lockboxes for u and for v. If they contain different colors, the verifier accepts; otherwise, it rejects.
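To make this concrete, here is a minimal Python sketch of one round of the protocol, with a salted hash standing in for the lockboxes; treating the hash as a binding and hiding commitment, and all the function names, are simplifications of mine rather than anything from the Goldreich-Micali-Wigderson paper.

```python
import hashlib
import random
import secrets

def commit(value):
    # A salted hash standing in for a lockbox: `digest` is sent now, `key` opens it later.
    key = secrets.token_bytes(16)
    digest = hashlib.sha256(key + bytes([value])).hexdigest()
    return digest, key

def opening_ok(digest, key, value):
    # The verifier re-computes the hash to check that an opening is honest.
    return digest == hashlib.sha256(key + bytes([value])).hexdigest()

def one_round(edges, coloring):
    # Prover: randomly relabel the three colors and commit to every vertex's color.
    perm = random.sample(range(3), 3)
    colors = {v: perm[c] for v, c in coloring.items()}
    boxes = {v: commit(c) for v, c in colors.items()}
    # Verifier: pick a random edge and ask the prover to open its two endpoints.
    u, w = random.choice(edges)
    (du, ku), (dw, kw) = boxes[u], boxes[w]
    cu, cw = colors[u], colors[w]          # the two revealed colors
    return opening_ok(du, ku, cu) and opening_ok(dw, kw, cw) and cu != cw

def run_protocol(edges, coloring, rounds=200):
    # Each round catches a cheating prover with probability at least 1/|E|,
    # so repeating the round makes the soundness error negligible.
    return all(one_round(edges, coloring) for _ in range(rounds))

# usage: a triangle and a proper 3-coloring of it
print(run_protocol([(0, 1), (1, 2), (2, 0)], {0: 0, 1: 1, 2: 2}))
```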

The protocol is complete, in the sense that if the graph is 3-colorable and the parties follow the protocol, then the verifier accepts with probability 1.

The protocol is sound, in the sense that if the graph is not 3-colorable, then, no matter what the prover does, there will have to be some edge (u,v) whose two lockboxes contain the same color, and the verifier has probability at least 1/|E| of picking such an edge and rejecting. Thus the verifier accepts with probability at most 1 - 1/|E|, which can be made negligibly small by repeating the protocol several times.

As for the zero-knowledge property, the view of the verifier is the choice of a random edge, two open lockboxes corresponding to the endpoints of the edge, containing two random different colors, and |V|-2 unopened lockboxes. A view with such a distribution can be easily sampled, and the same is true when the physical implementation is replaced by a commitment scheme. (Technically, this argument only establishes honest-verifier zero knowledge, and a bit more work is needed to capture a more general property.)
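Continuing the sketch above (and reusing its commit helper and imports, which are again my own illustrative stand-ins), an honest-verifier simulator can sample such a view without ever seeing a coloring, relying on the assumption that unopened commitments hide their contents:

```python
def simulate_view(edges, vertices):
    # Honest-verifier simulator: sample a transcript without knowing any coloring.
    # Under the hiding assumption, the unopened dummy commitments below are
    # indistinguishable from commitments to the colors of an actual 3-coloring.
    u, w = random.choice(edges)              # the verifier's random challenge edge
    cu, cw = random.sample(range(3), 2)      # two distinct random colors for its endpoints
    boxes, keys = {}, {}
    for v in vertices:
        value = cu if v == u else cw if v == w else 0   # dummy value for unopened boxes
        boxes[v], keys[v] = commit(value)
    # The simulated view: all commitments, the challenged edge, and the two openings.
    return boxes, (u, w), (cu, keys[u]), (cw, keys[w])

# usage with the triangle from above
view = simulate_view([(0, 1), (1, 2), (2, 0)], [0, 1, 2])
```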

Avi60

The festivities in honor of Avi Wigderson’s 60th birthday start tomorrow in Princeton, with a dream team of speakers. I will not be able to attend, but luckily a livestream will be available.

During the week, I will post a random selection of results of Avi’s.

Did you know that Avi’s first paper was an algorithm to color 3-colorable graphs using O(\sqrt n) colors? Here is the algorithm, which has the flavor of Ramsey theory proofs.

If all nodes have degree < \sqrt n, then we can easily color the graph with \sqrt n colors. Otherwise, there is a node v of degree \geq \sqrt n. The neighbors of v induce a bipartite graph (because, in the 3-coloring that we are promised to exist, they are colored with the two colors that are different from the color of v), and so we can find in linear time an independent set of size \geq \sqrt n / 2. So we keep finding independent sets (to each of which we assign a new color, and which we then remove) of size \geq \sqrt n /2 until we get to the point where we know how to color the residual graph with \leq \sqrt n colors, meaning that we can color the whole graph with \leq 3 \sqrt n colors.
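Here is a rough Python rendering of this argument; it is a sketch of my own (the data representation and helper names are not from Avi's paper), and it assumes the input graph really is 3-colorable, since otherwise the bipartiteness step can fail.

```python
import math
from collections import deque

def color_3colorable(adj):
    # adj maps each vertex to the set of its neighbors; the graph is assumed
    # to really be 3-colorable (otherwise the 2-coloring step below can fail).
    adj = {v: set(ns) for v, ns in adj.items()}
    n = len(adj)
    threshold = max(1, math.isqrt(n))
    coloring, next_color = {}, 0

    def remove(vertices):
        for v in vertices:
            for u in adj[v]:
                adj[u].discard(v)
        for v in vertices:
            del adj[v]

    # Phase 1: while some vertex v has degree >= sqrt(n), its neighborhood misses
    # one of the three colors, hence induces a bipartite graph; one side of a BFS
    # 2-coloring is an independent set of size >= sqrt(n)/2, which gets a fresh color.
    while adj and max(len(ns) for ns in adj.values()) >= threshold:
        v = max(adj, key=lambda x: len(adj[x]))
        nbhd = set(adj[v])
        side = {}
        for start in nbhd:
            if start in side:
                continue
            side[start] = 0
            queue = deque([start])
            while queue:
                x = queue.popleft()
                for y in adj[x] & nbhd:
                    if y not in side:
                        side[y] = 1 - side[x]
                        queue.append(y)
        part = {x for x in nbhd if side[x] == 0}
        indep = part if 2 * len(part) >= len(nbhd) else nbhd - part
        for x in indep:
            coloring[x] = next_color
        next_color += 1
        remove(indep)

    # Phase 2: every remaining vertex has degree < sqrt(n), so a greedy coloring
    # of the residual graph needs at most sqrt(n) further colors.
    for v in adj:
        used = {coloring[u] for u in adj[v] if u in coloring}
        c = next_color
        while c in used:
            c += 1
        coloring[v] = c
    return coloring

# usage on a small 3-colorable graph: a 5-cycle plus an isolated vertex
print(color_3colorable({0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}, 5: set()}))
```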

Congratulations to the 2016 Knuth Prize Selection Committee!

For the excellent choice of recognizing Noam Nisan for his work on complexity lower bounds, derandomization, and mechanism design.

Noam is known to readers of in theory for the development of the Nisan-Wigderson pseudorandom generator construction, which remains at the foundation of conditional derandomization results, and for Nisan's generator, which is secure against log-space statistical tests, and whose O(\log^2 n) seed length has not been improved upon in the past 25+ years. The modern definition of randomness extractors was made in a paper of Noam and David Zuckerman, which was also motivated by space-bounded derandomization.

Besides introducing almost all the techniques used in the main results on derandomization and pseudorandomness, Noam also introduced many of the techniques that underlie the main lower bound results that we can prove in restricted models, including the idea of approximating functions by polynomials, of looking at partial derivatives to obtain arithmetic lower bounds, and the connection between rank and communication complexity. With Linial and Mansour, he showed that the Hastad switching lemma could be used to bound the Fourier coefficients of functions computable by bounded-depth circuits, leading to quasi-polynomial learning algorithms for them (and to the realization that bounded-depth circuits cannot compute pseudorandom functions).

On November 27, 1989, Noam sent an email to a group of colleagues with a proof that (a decision problem equivalent to) the permanent had a multi-prover interactive proof; this set in motion a flurry of activity which led in a matter of days to the LFKN paper showing that P^{\# P} had a (one-prover) interactive proof and to Shamir’s proof that IP = PSPACE.

At the end of the 1990s, having done enough to keep the computational complexity community occupied for several subsequent decades, Noam wrote a paper with Amir Ronen called "Algorithmic Mechanism Design." Around the same time, Elias Koutsoupias and Christos Papadimitriou published their work on worst-case equilibria, and Tim Roughgarden and Eva Tardos published their work on selfish routing. A couple of years later, Christos gave back-to-back invited talks at SODA 2001, STOC 2001, ICALP 2001 and FOCS 2001 on game theory, and algorithmic game theory and algorithmic mechanism design have gone on to become a major part of theoretical computer science in the years since.

Congratulations again to the prize committee, and please use the comments section to talk about the results of Noam's that I didn't know well enough to write about.

Ellenberg’s announcement of a solution to the cap-set problem

Jordan Ellenberg has just announced a resolution of the "cap set problem" using techniques of Croot, Lev and Pach, in a self-contained three-page paper. This is a quite unexpected development for a long-standing open problem at the core of additive combinatorics.

Perhaps the right starting point for this story is 1936, when Erdos and Turan conjectured that, for every {k}, if {A} is a subset of {\{1,\ldots, N\}} without {k}-term arithmetic progressions, then {|A|= o_k(N)}, or, equivalently, that if {A} is a subset of the integers of positive density, then it must have arbitrarily long arithmetic progressions. Their goal in stating this conjecture was that resolving it would be a stepping stone to proving that the prime numbers have arbitrarily long arithmetic progressions. This vision came true several decades later. Szemeredi proved the conjecture in 1975, and Green and Tao proved that the primes contain arbitrarily long arithmetic progressions in 2004, with Szemeredi's theorem being a key ingredient in their proof.

Rewinding a bit, the first progress on the Erdos-Turan conjecture came from Roth, who proved the {k=3} case in 1955. Roth's proof establishes that if {A \subseteq \{ 1,\ldots, N\}} does not have length-3 arithmetic progressions, then {|A|} is at most, roughly, {N/\log\log N}. Erdos also conjectured that the bound should be {o(N/\log N)}, and if this were true it would imply that the primes have infinitely many length-3 arithmetic progressions simply because of their density.

Roth's proof uses Fourier analysis, and Meshulam, in 1995, noted that the proof becomes much cleaner, and leads to better bounds, if one looks at the analogous problem in {{\mathbb F}_p^n}, where {{\mathbb F}_p} is a finite field (of characteristic different from 2). In this case, the question is how big {A\subseteq {\mathbb F}_p^n} can be if it does not have three points on a line. An adaptation of Roth's techniques gives an upper bound of the order of {p^n/n}, which, for constant {p}, is of the order of {N/\log N} if {N} is the size of the universe of which {A} is a subset.

Bourgain introduced a technique to work on {{\mathbb Z}/N{\mathbb Z}} "as if" it were a vector space over a finite field, and proved upper bounds of the order of {N/\sqrt {\log N}} and then {N/(\log N)^{3/4}} on the size of a subset of {\{ 1,\ldots , N\}} without length-3 arithmetic progressions. The latest result in this line is by Sanders, who proved a bound of {(N \cdot poly\log\log N)/\log N}, very close to Erdos's stronger conjecture.

How far can these results be pushed? A construction of Behrend's shows that there is a set {A\subseteq \{ 1,\ldots, N\}} with no length-3 arithmetic progressions and size roughly {N/2^{\sqrt {\log N}}}. The construction is simple (it is a discretization of a sphere in {\sqrt {\log N}} dimensions) and it has some other unexpected applications. This means that the right bound in Roth's theorem is of the form {N^{1-o(1)}} and that the "only" question is what the {N^{-o(1)}} term is.
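For the curious, here is a small Python sketch of the discretized-sphere idea; the parameter choices and the helper name are mine, and the tiny test at the end is just a sanity check, far below the asymptotic regime in which one takes the dimension to be about {\sqrt{\log N}}.

```python
from collections import defaultdict
from itertools import product

def behrend_set(d, M):
    # Vectors in {0,...,M-1}^d lying on a common sphere, encoded in base 2M so
    # that integer addition never carries.  A 3-term progression among the
    # encodings would force x + y = 2z coordinate-wise, but the midpoint of two
    # distinct points of a sphere lies strictly inside the sphere, so this
    # cannot happen.  Returns a subset of {0, ..., (2M)^d - 1}.
    by_norm = defaultdict(list)
    for x in product(range(M), repeat=d):
        by_norm[sum(t * t for t in x)].append(x)
    sphere = max(by_norm.values(), key=len)      # the most popular squared norm
    return {sum(c * (2 * M) ** i for i, c in enumerate(x)) for x in sphere}

# tiny sanity check
S = sorted(behrend_set(3, 5))
assert all(a + b != 2 * c for a in S for b in S for c in S if a != b)
print(len(S), "progression-free integers below", (2 * 5) ** 3)
```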

In the finite vector space case, there is no analog of Behrend's construction, and so the size of, say, the largest subset of {{\mathbb F}_3^n} without three points on a line was completely open, with an upper bound of the order of {3^n/n} and lower bounds of the order of {c^n} for some constant {c<3}. The cap set problem was the question of whether the right bound is of the form {3^{(1-o(1))n}} or not.

Two weeks ago, Croot, Lev and Pach proved that if {A} is a subset of {({\mathbb Z}/4{\mathbb Z})^n} without length-3 arithmetic progressions, then {|A|} is at most of the order of {4^{.926 \cdot n}}. This was a strong indication that the right bound in the cap problem should be sub-exponential.

This was indeed accomplished a couple of days ago by Ellenberg, who proved that an upper bound of the form {(2.756)^n} holds in {{\mathbb F}_3^n}. The proof is not specific to {{\mathbb F}_3} and generalizes to all finite fields.

Both proofs use the polynomial method. Roughly speaking, the method is to associate a polynomial to a set of interest (for example, by finding a non-zero low-degree polynomial that is zero at all points in the set), and then to proceed by using simple properties of polynomials (such as the fact that the space of polynomials of a certain degree has bounded dimension, or that the number of zeroes of a non-zero univariate polynomial is at most its degree), applied either to the polynomial that we constructed or to the terms of its factorization.
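To spell out the first of these steps (a routine piece of linear algebra, stated here for convenience rather than taken from any of the papers): if {S} is a set of points and the space of polynomials one works with has dimension larger than {|S|}, then the evaluation map

\displaystyle  P \mapsto (P(s))_{s\in S}

is a linear map into a space of dimension {|S|}, so it has a non-trivial kernel, that is, there is a non-zero polynomial in the space that vanishes at every point of {S}.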

Let {P_d} be the vector space of {n}-variate polynomials over {{\mathbb F}_3} of total degree {d} that are cube-free (that is, such that all variables occur in monomials with degree 0, 1, or 2), and let {m_d} be its dimension.

If {A\subseteq {\mathbb F}_3^n} is a set such that there are no distinct {a,b,c} such that {a+b+c=0} (a different property from being on a line, but the draft claims that the same argument works for the property of not having three points on a line as well), then Ellenberg shows that

\displaystyle  m_d - 3^n + |A| \leq 3 m_{d/2}

and the bound then follows by taking {d = 4n/3} and computing that {m_{4n/3} \geq 3^n - c^n} and {m_{2n/3} \leq c^n} for {c \approx 2.756}.
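As a quick numerical sanity check of these two estimates (a throwaway script of mine, not from the paper; the exact constant comes from an entropy calculation), one can count the cube-free monomials directly:

```python
from math import comb

def m(n, d):
    # Dimension of the cube-free polynomials of total degree <= d in n variables:
    # count exponent vectors in {0,1,2}^n with sum <= d, splitting on the number
    # of coordinates equal to 2 (j) and equal to 1 (k).
    return sum(comb(n, j) * comb(n - j, k)
               for j in range(n + 1)
               for k in range(n - j + 1)
               if 2 * j + k <= d)

# both n-th roots should creep toward roughly 2.756 as n grows (convergence is slow)
for n in (30, 60, 90):
    print(n, round(m(n, 2 * n // 3) ** (1 / n), 3),
          round((3 ** n - m(n, 4 * n // 3)) ** (1 / n), 3))
```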

The finite field Kakeya problem is another example of a problem that had resisted attacks from powerful Fourier-analytic proofs, and was solved by Zeev Dvir with a relatively simple application of the polynomial method. One may hope that the method has not yet exhausted its applicability.

Gil Kalai has posted about further consequences of the results of Croot, Lev, Pach and Ellenberg.

Laci Babai and Graph Isomorphism

Next Tuesday, a week from today, Laci Babai will talk at the University of Chicago about a new algorithm that solves graph isomorphism in quasipolynomial time. There should also be a follow-up talk the following Thursday that, by a lucky coincidence, I will be able to attend, and then report back.

Meanwhile, if you have any gossip on the proof, then, by all means, go ahead and share it in the comments.