The concept of semantic security inspired most subsequent definitions, and proofs, of security based on the concept of simulation. Instead of trying to specify a list of things that an adversary should not be able to do, one defines an idealized model in which the adversary has no access to private and encrypted data, and one *defines* a given system to be secure if whatever an attacker can efficiently compute given the ability to eavesdrop (and possibly mount an active attack) can also be efficiently computed in the ideal model. One then *proves* a system to be secure by developing a *simulator* in the ideal model for every real-world adversary.

Together with Rackoff, Goldwasser and Micali took this idea one step further from encryption to interactive communication, and came up with the idea of *Zero-Knowledge Proofs*. A zero-knowledge proof is a probabilistic proof system in which a prover can convince a verifier, with high confidence, of the truth of a statement, with the additional property that there is a *simulator* that is able to sample from the distribution of the verifier’s view of the interaction. Thus the verifier is convinced of the truth of the statement being proved, but gains no additional information. In their paper, Goldwasser, Micali and Rackoff introduce the concept and present a zero-knowledge proof for a conjecturally intractable number-theoretic problem. The paper was famously rejected several times, eventually appearing in 1985.

The following year, Goldreich, Micali and Avi Wigderson published a paper giving zero-knowledge proof systems for all problems in NP. Their work made zero-knowledge proofs a key tool in the design of secure cryptosystems: it was now possible for a party to publish a commitment to a secret $x$ and then, at any time, be able to prove that $x$ has a certain property without releasing any additional information about $x$. This ability was a key ingredient in the development of secure multi-party computation in 1987, by the same authors.

So how does one prove in zero knowledge that, say, a graph is 3-colorable? (Once you have zero-knowledge proofs for one NP-complete problem, you immediately have them for all problems in NP.)

Suppose the prover and the verifier know a graph $G=(V,E)$ and the prover knows a 3-coloring of it. A physical analog of the protocol (which can be implemented using the notion of *commitment schemes*) is the following: the prover randomizes the color labels, then takes $|V|$ lockboxes, each labeled by a vertex, and puts a piece of paper with the color of vertex $v$ in the lockbox labeled by $v$, for every $v$. The prover locks all the lockboxes, and sends them to the verifier. The verifier picks a random edge $(u,v)$ and asks for the keys of the lockboxes for $u$ and for $v$. If they contain different colors, the verifier accepts, otherwise it rejects.
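In code, the lockboxes can be replaced by hash-based commitments. The following is a minimal sketch of one round of the protocol; the triangle graph, the SHA-256 commitment scheme, and all function names are illustrative choices, not part of the original protocol description.

```python
import hashlib
import os
import random

def commit(value: int) -> tuple:
    """Commit to a value with a random nonce (the 'locked box')."""
    nonce = os.urandom(16)
    digest = hashlib.sha256(nonce + bytes([value])).digest()
    return digest, nonce

def verify_opening(digest: bytes, value: int, nonce: bytes) -> bool:
    """Check that an opened box matches the earlier commitment."""
    return hashlib.sha256(nonce + bytes([value])).digest() == digest

GRAPH_EDGES = [(0, 1), (1, 2), (0, 2)]   # a triangle: the smallest 3-chromatic graph
COLORING = {0: 0, 1: 1, 2: 2}            # the prover's secret proper 3-coloring

def one_round(edges, coloring) -> bool:
    # Prover: randomize the color labels, then commit to every vertex's color.
    perm = random.sample(range(3), 3)
    commitments = {v: commit(perm[c]) for v, c in coloring.items()}
    # Verifier: pick a random edge and ask for the two openings (the 'keys').
    u, v = random.choice(edges)
    opened = {w: (commitments[w][0], perm[coloring[w]], commitments[w][1]) for w in (u, v)}
    # Verifier accepts iff both openings are valid and the two colors differ.
    return all(verify_opening(d, val, nonce) for d, val, nonce in opened.values()) \
        and opened[u][1] != opened[v][1]

assert all(one_round(GRAPH_EDGES, COLORING) for _ in range(50))  # completeness
```

Each round reveals only two random distinct colors under a fresh random relabeling, which is why the verifier's view is easy to simulate.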

The protocol is *complete*, in the sense that if the graph is 3-colorable and the parties follow the protocol, then the verifier accepts with probability 1.

The protocol is *sound*, in the sense that if the graph is not 3-colorable, then, no matter what the prover does, there will have to be some edge $(u,v)$ such that the lockboxes of $u$ and $v$ contain the same color, and the verifier has probability at least $1/|E|$ of picking such an edge and rejecting. Thus the verifier accepts with probability at most $1 - 1/|E|$, which can be made negligibly small by repeating the protocol several times.

As per the zero-knowledge property, the view of the verifier is the choice of a random edge, two open lockboxes corresponding to the endpoints of the edge, containing two random different colors, and $|V|-2$ unopened lockboxes. A view with such a distribution can be easily sampled, and the same is true when the physical implementation is replaced by a commitment scheme. (Technically, this argument only establishes *honest-verifier zero knowledge*, and a bit more work is needed to capture a more general property.)


During the week, I will post a random selection of results of Avi’s.

Did you know that Avi’s first paper was an algorithm to color 3-colorable graphs using $O(\sqrt{n})$ colors? Here is the algorithm, which has the flavor of Ramsey theory proofs.

Suppose all nodes have degree $< \sqrt{n}$; then we can easily color the graph with $\sqrt{n}+1$ colors, greedily. Otherwise, there is a node $v$ of degree $\geq \sqrt{n}$. The neighbors of $v$ induce a bipartite graph (because, in the 3-coloring that we are promised to exist, they are colored with whichever are the two colors that are different from the color of $v$), and so we can find in linear time an independent set of size $\geq \sqrt{n}/2$. So we keep finding independent sets (which we assign a color to, and remove) of size $\geq \sqrt{n}/2$ until we get to a point where we know how to color the residual graph with $\sqrt{n}+1$ colors, meaning that we can color the whole graph with $O(\sqrt{n})$ colors.
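Here is a sketch of this two-phase algorithm in code. The graph representation, the helper names, and the test graph are my own illustrative choices; the $\sqrt{n}$ threshold follows the outline in the paragraph.

```python
import math
from collections import deque

def bipartition_larger_side(verts: set, adj: dict) -> set:
    """2-color the bipartite graph induced on verts (by BFS) and return the
    larger side, which is an independent set."""
    side = {}
    for s in verts:
        if s in side:
            continue
        side[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u] & verts:
                if w not in side:
                    side[w] = 1 - side[u]
                    queue.append(w)
    zeros = {v for v, b in side.items() if b == 0}
    ones = verts - zeros
    return zeros if len(zeros) >= len(ones) else ones

def wigderson_color(adj: dict) -> dict:
    """Color a 3-colorable graph, given as {vertex: set of neighbors},
    with O(sqrt(n)) colors. Returns {vertex: color}."""
    n = len(adj)
    t = math.isqrt(n) or 1
    alive, color, next_color = set(adj), {}, 0
    # Phase 1: while some vertex has high residual degree, its neighborhood is
    # bipartite (3-colorability), so peel off a large independent set from it.
    while True:
        high = next((v for v in alive if len(adj[v] & alive) >= t), None)
        if high is None:
            break
        ind_set = bipartition_larger_side(adj[high] & alive, adj)
        for v in ind_set:
            color[v] = next_color
        next_color += 1
        alive -= ind_set
    # Phase 2: residual degrees are < t, so greedy coloring needs few colors.
    for v in alive:
        used = {color[u] for u in adj[v] if u in color}
        c = next_color
        while c in used:
            c += 1
        color[v] = c
    return color

# Try it on the complete tripartite graph with parts of size 4 (3-colorable).
parts = [set(range(0, 4)), set(range(4, 8)), set(range(8, 12))]
adj = {v: set() for v in range(12)}
for i in range(3):
    for j in range(3):
        if i != j:
            for u in parts[i]:
                adj[u] |= parts[j]
col = wigderson_color(adj)
assert all(col[u] != col[v] for u in adj for v in adj[u])  # proper coloring
```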


Noam is known to readers of *in theory* for the development of the Nisan-Wigderson pseudorandom generator construction, which remains at the foundation of conditional derandomization results, and for Nisan’s generator, which is secure against log-space statistical tests, and whose seed length has not been improved upon in the past 25+ years. The modern definition of randomness extractors was made in a paper of Noam and David Zuckerman, which was also motivated by space-bounded derandomization.

Besides introducing almost all the techniques used in the main results on derandomization and pseudorandomness, Noam also introduced many of the techniques that underlie the main lower bound results that we can prove in restricted models, including the idea of approximating functions by polynomials, of looking at partial derivatives to obtain arithmetic lower bounds, and the connection between rank and communication complexity. With Linial and Mansour, he showed that the Håstad switching lemma could be used to bound the Fourier coefficients of functions computable by bounded-depth circuits, leading to quasi-polynomial learning algorithms for them (and to the realization that bounded-depth circuits cannot realize pseudorandom functions).

On November 27, 1989, Noam sent an email to a group of colleagues with a proof that (a decision problem equivalent to) the permanent had a multi-prover interactive proof; this set in motion a flurry of activity which led in a matter of days to the LFKN paper showing that $P^{\#P}$ had a (one-prover) interactive proof and to Shamir’s proof that $IP = PSPACE$.

At the end of the 1990s, having done enough to keep the computational complexity community occupied for several subsequent decades, Noam wrote a paper with Amir Ronen called *Algorithmic mechanism design.* Around the same time, Elias Koutsoupias and Christos Papadimitriou published their work on worst-case equilibria and Tim Roughgarden and Eva Tardos published their work on selfish routing. A couple of years later, Christos gave back-to-back invited talks at SODA 2001, STOC 2001, ICALP 2001 and FOCS 2001 on game theory, and algorithmic game theory and algorithmic mechanism design have since become a major part of theoretical computer science.

Congratulations again to the prize committee, and please use the comments section to talk about the result of Noam’s that I didn’t know well enough to write about.


*And what has the theory of computing done for us in the last twenty years?*

Differential privacy? Apple just announced it will be used in iOS 10.

Yes, and the application to preventing false discovery and overfitting is now used in production.

*Ok, fine, but apart from differential privacy, what has theory done for us in the last twenty years?*

Quantum algorithms? There wouldn’t be such a push to realize quantum computers if it wasn’t for Shor’s algorithm.

And quantum error correction! There would be no hope of realizing quantum computers without quantum error correction.

*Very well, but apart from differential privacy and quantum computing, what has theory done for us in the …*

Streaming algorithms? It all started with a theory paper and now it is a major interdisciplinary effort.

*Yes, fair enough, but apart from differential privacy, quantum computing, and streaming algorithms, what has theory done for us…*

Linear-time decodable LDPC error-correcting codes? The first generation was not practical, but now they are part of major standards.

*Sure, ok, but apart from differential privacy, quantum computing, streaming algorithms, and error-correcting codes, what has theory…*

Homomorphic encryption? The first-generation solutions were inefficient, but it might be only a matter of time before we have usable homomorphic encryption standards.

Linear-time SDD solvers? Algorithms like this and this are implementable and we may be one more idea away from algorithms that can be put in production.

Sublinear time algorithms like sparse FFT?

*All right! But apart from differential privacy, quantum computing, streaming algorithms, error-correcting codes, homomorphic encryption, linear-time equation solvers and sub-linear time algorithms, what has the theory of computing ever done for us in the past twenty years?*

. . .

[Could be continued. Indeed, please continue in the comments]


It’s like if you were on a plane and you wanted to choose a pilot. You have one person, Hillary, who says, “Here’s my license. Here’s all the thousands of flights that I’ve flown. Here’s planes I’ve flown in really difficult situations. I’ve had some good flights and some bad flights, but I’ve been flying for a very long time, and I know exactly how this plane works.” Then you’ve got Bernie, who says, “Everyone should get a ride right to their house with this plane.” “Well, how are you going to do that?” “I just think we should. It’s only fair that everyone gets to use the plane equally.” And then Trump says, “I’m going to fly so well. You’re not going to believe how good I’m going to fly this plane, and by the way, Hillary never flew a plane in her life.” “She did, and we have pictures.” “No, she never did it.”


*Bernie Sanders* for President of the United States

*Kamala Harris* for United States Senator

*Nancy Pelosi* for United States Representative

*Scott Wiener* for California State Senator

*David Chiu* for California State Assemblyman

*Victor Hwang* for Superior Court Judge


Perhaps the right starting point for this story is 1936, when Erdos and Turan conjectured that, for every $k$, if $A$ is a subset of $\{1,\ldots,N\}$ without $k$-term arithmetic progressions, then $|A| = o(N)$, or, equivalently, that if $A$ is a subset of the integers of positive density, then it must have arbitrarily long arithmetic progressions. Their goal in stating this conjecture was that resolving it would be a stepping stone to proving that the prime numbers have arbitrarily long arithmetic progressions. This vision came true several decades later. Szemeredi proved the conjecture in 1975, and Green and Tao proved that the primes contain arbitrarily long arithmetic progressions in 2004, with Szemeredi’s theorem being a key ingredient in their proof.

Rewinding a bit, the first progress on the Erdos-Turan conjecture came from Roth, who proved the case $k=3$ in 1953. Roth’s proof establishes that if $A \subseteq \{1,\ldots,N\}$ does not have length-3 arithmetic progressions, then $|A|$ is at most, roughly, $N/\log\log N$. Erdos also conjectured that the bound should be $o(N/\log N)$, and if this were true it would imply that the primes have infinitely many length-3 arithmetic progressions simply because of their density.

Roth’s proof uses Fourier analysis, and Meshulam, in 1995, noted that the proof becomes much cleaner, and leads to better bounds, if one looks at the analogous problem in $\mathbb{F}_q^n$, where $\mathbb{F}_q$ is a finite field (of characteristic different from 2). In this case, the question is how big can a subset $A \subseteq \mathbb{F}_q^n$ be if it does not have three points on a line. An adaptation of Roth’s techniques gives an upper bound of the order of $q^n/n$, which, for constant $q$, is of the order of $N/\log N$ if $N = q^n$ is the size of the universe of which $A$ is a subset.
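For tiny dimensions the extremal question can be settled by brute force. The sketch below (an illustration, not part of any of the proofs discussed here) finds the largest cap in $\mathbb{F}_3^n$ for $n=1,2$, using the fact that three distinct points of $\mathbb{F}_3^n$ are on a line exactly when they sum to zero coordinate-wise.

```python
from itertools import combinations, product

def is_cap(points) -> bool:
    """No three distinct points on a line: in F_3^n, distinct a, b, c are
    collinear iff a + b + c = 0 coordinate-wise (mod 3)."""
    return not any(all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))
                   for a, b, c in combinations(points, 3))

def max_cap(n: int) -> int:
    """Exhaustive search for the largest cap in F_3^n (feasible only for tiny n)."""
    pts = list(product(range(3), repeat=n))
    for size in range(len(pts), 0, -1):
        if any(is_cap(s) for s in combinations(pts, size)):
            return size

assert max_cap(1) == 2   # {0,1,2} itself is a line
assert max_cap(2) == 4   # the largest cap in the 3x3 grid has 4 points
```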

Bourgain introduced a technique to work on $\{1,\ldots,N\}$ “as if” it were a vector space over a finite field, and proved upper bounds of the order of $N/\sqrt{\log N}$ and then $N/(\log N)^{2/3}$ (up to lower-order terms) on the size of a subset of $\{1,\ldots,N\}$ without length-3 arithmetic progressions. The latest result in this line is by Sanders, who proved a bound of the order of $N (\log\log N)^{O(1)}/\log N$, very close to Erdos’s stronger conjecture.

How far can these results be pushed? A construction of Behrend’s shows that there is a set $A \subseteq \{1,\ldots,N\}$ with no length-3 arithmetic progression and size roughly $N/2^{O(\sqrt{\log N})}$. The construction is simple (it is a discretization of a sphere in $O(\sqrt{\log N})$ dimensions) and it has some unexpected other applications. This means that the right bound in Roth’s theorem is of the form $N^{1-o(1)}$ and that the “only” question is what is the $o(1)$ term.
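A toy version of Behrend's construction can be coded directly: take digit vectors with small digits lying on a most popular sphere. Because the small digits never produce carries, $x + y = 2z$ as integers forces the same relation digit-by-digit, and the strict convexity of the sphere then forces $x = y = z$. The base and number of digits below are illustrative, not Behrend's optimized parameters.

```python
from collections import defaultdict
from itertools import product

def behrend_set(base: int, digits: int) -> set:
    """A 3-AP-free subset of {0, ..., base**digits - 1}: digit vectors with
    digits below base//2 (so sums never carry), restricted to the sphere
    |x|^2 = r with the most vectors."""
    half = base // 2
    spheres = defaultdict(list)
    for v in product(range(half), repeat=digits):
        spheres[sum(d * d for d in v)].append(v)
    best = max(spheres.values(), key=len)
    return {sum(d * base**i for i, d in enumerate(v)) for v in best}

def has_3ap(s: set) -> bool:
    """Does s contain x != y whose midpoint is also in s?"""
    return any(x != y and (x + y) % 2 == 0 and (x + y) // 2 in s
               for x in s for y in s)

S = behrend_set(base=7, digits=3)
assert not has_3ap(S)
```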

In the finite vector space case, there is no analog of Behrend’s construction, and so the size of, say, the largest subset of $\mathbb{F}_3^n$ without three points on a line was completely open, with an upper bound of the order of $3^n/n$ and lower bounds of the order of $c^n$ for some constant $c < 3$. The *cap problem* was the question of whether the right bound is of the form $3^{n \cdot (1-o(1))}$ or not.

Two weeks ago, Croot, Lev and Pach proved that if $A$ is a subset of $(\mathbb{Z}/4\mathbb{Z})^n$ without length-3 arithmetic progressions, then $|A|$ is at most of the order of $4^{0.926 \cdot n}$. This was a strong indication that the right bound in the cap problem should be sub-exponential.

This was done a couple of days ago by Ellenberg, who proved that an upper bound of the order of $2.756^n$ holds in $\mathbb{F}_3^n$. The proof is not specific to $\mathbb{F}_3$ and generalizes to all finite fields.

Both proofs use the polynomial method. Roughly speaking, the method is to associate a polynomial to a set of interest (for example, by finding a non-zero low-degree polynomial that is zero for all points in the set), and then to proceed with the use of simple properties of polynomials (such as the fact that the space of polynomials of a certain degree has a bounded dimension, or that the number of zeroes of a univariate non-zero polynomial is at most its degree) applied either to the polynomial that we constructed or to the terms of its factorization.

Let $P_d$ be the vector space of $n$-variate polynomials over $\mathbb{F}_3$ of total degree at most $d$ that are cube-free (that is, such that all variables occur in monomials with degree 0, 1, or 2), and let $m_d$ be its dimension.

If $A \subseteq \mathbb{F}_3^n$ is a set such that there are no distinct $a,b,c \in A$ such that $a+b+c = 0$ (a different property from being on a line, but the draft claims that the same argument works for the property of not having three points on a line as well), then Ellenberg shows that

$$|A| \leq 3 \cdot m_{2n/3}$$

and the bound follows from computing that $m_{2n/3}$ is of the order of $2.756^n$, via a Chernoff-type estimate for the number of cube-free monomials of degree at most $\frac 23 n$.
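The dimension in question is just a count of exponent vectors, so it is easy to compute exactly for small $n$ and to watch it become an exponentially small fraction of $3^n$. The cutoff $2n/3$ and the comparison below are my own illustration of the kind of computation involved.

```python
def cube_free_dim(n: int, d: float) -> int:
    """Dimension of the space of n-variate cube-free polynomials over F_3 of
    total degree <= d: the number of exponent vectors in {0,1,2}^n whose
    coordinates sum to at most d. Computed by dynamic programming."""
    counts = [1]  # counts[s] = number of exponent vectors with sum exactly s
    for _ in range(n):
        new = [0] * (len(counts) + 2)
        for s, c in enumerate(counts):
            for e in (0, 1, 2):   # each variable has degree 0, 1, or 2
                new[s + e] += c
        counts = new
    return sum(counts[: int(d) + 1])

assert cube_free_dim(3, 6) == 27       # degree cap inactive: all 3^3 monomials
assert cube_free_dim(3, 2) == 10       # degree <= 2 in three variables

# The ratio m_{2n/3} / 3^n decays as n grows.
ratios = [cube_free_dim(n, 2 * n / 3) / 3**n for n in (3, 6, 12, 24)]
assert all(a > b for a, b in zip(ratios, ratios[1:]))
```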

The *finite field Kakeya problem* is another example of a problem that had resisted attacks from powerful Fourier-analytic proofs, and was solved by Zeev Dvir with a relatively simple application of the polynomial method. One may hope that the method has not yet exhausted its applicability.

Gil Kalai has posted about further consequences of the results of Croot, Lev, Pach and Ellenberg.


J.Z.: In China, we say that if you sneeze once, it means that someone is thinking of you. If you sneeze twice, it means someone is cursing you.

Me: and what does it mean when I sneeze three times or more?

J.Z.: it means you have a cold.


It would make sense if, to mitigate his negatives, Trump chose a person of color and someone who has a history of speaking out against income inequality.

He or she would have to be someone who is media-savvy and with some experience running a campaign, but definitely not a career politician. And of course he or she should be someone who endorsed Trump early on, like, say, in January.

I can think of only one person: Jimmy McMillan!


*In which we prove properties of expander graphs.*

**1. Quasirandomness of Expander Graphs**

Recall that if $G$ is a $d$-regular graph on $n$ vertices, and $A$ is its adjacency matrix, then, if we call $d = \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ the eigenvalues of $A$ with repetitions, we are interested in the parameter $\lambda := \max_{i=2,\ldots,n} |\lambda_i|$, and we have

$$\lambda = \left\| A - \frac dn J \right\|$$

where $J$ is the matrix with a one in each entry, and $\|M\| := \max_{x \neq 0} \frac{\|Mx\|}{\|x\|}$ is the matrix norm.

Our first result today is to show that, when $\lambda$ is small, the graph has the following *quasirandomness* property: for every two disjoint sets $S, T \subseteq V$, the number of edges between $S$ and $T$ is close to what we would expect in a random graph of average degree $d$, that is, approximately $\frac dn \cdot |S| \cdot |T|$.

For two (possibly overlapping) sets of vertices $S, T$, we define $e(S,T)$ to be the number of edges with one endpoint in $S$ and one endpoint in $T$, with edges having both endpoints in $S \cap T$, if any, counted twice.

Lemma 1 (Expander Mixing Lemma) *Let $G$ be a $d$-regular graph on $n$ vertices, and let $S$ and $T$ be two disjoint subsets of vertices. Then*

$$\left| e(S,T) - \frac dn \cdot |S| \cdot |T| \right| \leq \lambda \cdot \sqrt{|S| \cdot |T|}$$

*Proof:* If $\mathbf{1}_S$ and $\mathbf{1}_T$ denote the indicator vectors of $S$ and $T$, we have

$$e(S,T) = \mathbf{1}_S^T A \mathbf{1}_T$$

and

$$\frac dn \cdot |S| \cdot |T| = \mathbf{1}_S^T \left( \frac dn J \right) \mathbf{1}_T$$

so

$$\left| e(S,T) - \frac dn \cdot |S| \cdot |T| \right| = \left| \mathbf{1}_S^T \left( A - \frac dn J \right) \mathbf{1}_T \right| \leq \left\| A - \frac dn J \right\| \cdot \| \mathbf{1}_S \| \cdot \| \mathbf{1}_T \| = \lambda \cdot \sqrt{|S| \cdot |T|}$$

Note that, for every disjoint $S, T$, we have $\sqrt{|S| \cdot |T|} \leq \frac n2$, and so the right-hand side in the expander mixing lemma is at most $\frac{\lambda n}{2}$, which is a small fraction of the total number of edges $\frac{dn}{2}$ if $\lambda$ is small compared to $d$.
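The mixing lemma is easy to sanity-check numerically. The sketch below uses the complete graph $K_n$, for which $d = n-1$ and the adjacency eigenvalues are $n-1$ and $-1$ (with multiplicity $n-1$), so $\lambda = 1$; the bound is then checked exhaustively over all pairs of disjoint sets.

```python
from itertools import combinations
from math import sqrt

# Complete graph K_n: every pair of distinct vertices is an edge.
n, d, lam = 8, 7, 1.0
vertices = range(n)

def e(S, T):
    """Edges of K_n with one endpoint in S and one in T (S, T disjoint)."""
    return sum(1 for u in S for v in T if u != v)

# Check |e(S,T) - (d/n)|S||T|| <= lambda * sqrt(|S||T|) for all disjoint S, T.
for k in range(1, n):
    for S in combinations(vertices, k):
        rest = [v for v in vertices if v not in S]
        for j in range(1, len(rest) + 1):
            for T in combinations(rest, j):
                gap = abs(e(S, T) - d * len(S) * len(T) / n)
                assert gap <= lam * sqrt(len(S) * len(T)) + 1e-9
```

For $K_n$ the gap is exactly $|S||T|/n$, well within the bound; graphs with smaller $\lambda/d$ would show proportionally tighter concentration.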

**2. Random Walks in Expanders**

A *$k$-step random walk* is the probabilistic process in which we start at a vertex, pick uniformly at random one of the edges incident on the current vertex, move to the other endpoint of the edge, and repeat this process $k$ times.

If $M = \frac 1d \cdot A$ is the normalized adjacency matrix of an undirected $d$-regular graph $G$, then $M_{u,v}$ is the probability that, in one step, a random walk started at $v$ reaches $u$. This is why the normalized adjacency matrix of a regular graph is also called its *transition matrix*.

Suppose that we start a random walk at a vertex chosen according to a probability distribution $p$, which we think of as a vector such that $p_v \geq 0$ for every $v$ and $\sum_v p_v = 1$. After taking one step, the probability of being at vertex $u$ is $\sum_v M_{u,v} p_v$, which means that the probability distribution after one step is described by the vector $Mp$, and, because of the symmetry of $M$, this is the same as $p^T M$.

Iterating the above reasoning, we see that, after a $k$-step random walk whose initial vertex is chosen according to distribution $p$, the last vertex reached by the walk is distributed according to $M^k p$.

The parameter $\lambda$ of $M$, namely $\| M - \frac 1n J \|$, is $\frac 1d$ times the parameter of $A$, and the parameter of $M^k$ is equal to $\lambda^k$; so if $M$ has a parameter bounded away from one, and if $k$ is large enough, we have that the parameter of $M^k$ is very small, and so $M^k$ is close to $\frac 1n J$ in matrix norm. If $M^k$ were actually equal to $\frac 1n J$, then $M^k p$ would be equal to the uniform distribution, for every distribution $p$. We would thus expect $M^k p$ to be close to the uniform distribution for large enough $k$.
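This geometric convergence is easy to observe numerically. The sketch below iterates the walk distribution on $K_4$, whose normalized adjacency matrix has eigenvalues $1, -\frac 13, -\frac 13, -\frac 13$ and hence $\lambda = \frac 13$, and checks that the distance to uniform shrinks by exactly that factor each step.

```python
def step(p, adj):
    """One random-walk step: q = M p for the normalized adjacency matrix of
    the graph given as adjacency lists."""
    q = [0.0] * len(p)
    for v, nbrs in enumerate(adj):
        for u in nbrs:
            q[u] += p[v] / len(nbrs)
    return q

# K4 is 3-regular with adjacency eigenvalues 3, -1, -1, -1, so lambda = 1/3.
adj = [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]]
p = [1.0, 0.0, 0.0, 0.0]                 # the walk starts at vertex 0
dists = []
for _ in range(6):
    p = step(p, adj)
    dists.append(0.5 * sum(abs(x - 0.25) for x in p))  # TV distance to uniform

# The distance to uniform shrinks by a factor lambda = 1/3 at every step.
assert all(abs(b - a / 3) < 1e-9 for a, b in zip(dists, dists[1:]))
```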

Before formalizing the above intuition, we need to fix a good measure of distance for distributions. If we think of distributions as vectors, then a possible notion of distance between two distributions is the Euclidean distance between the corresponding vectors. This definition, however, has various shortcomings and, in particular, can assign small distance to distributions that are intuitively very different. For example, suppose that $p_1$ and $p_2$ are distributions that are uniform over a set $S$, and over the complement of $S$, respectively, where $S$ is a set of size $\frac n2$. Then all the entries of $p_1 - p_2$ are $\pm \frac 2n$ and so $\| p_1 - p_2 \| = \frac 2{\sqrt n}$, which is vanishingly small even though distributions over disjoint supports should be considered as maximally different distributions.

A very good measure is the *total variation distance*, defined as

$$\| p_1 - p_2 \|_{TV} := \max_{A \subseteq V} \left| \sum_{v \in A} p_1(v) - \sum_{v \in A} p_2(v) \right|$$

that is, as the maximum over all events of the difference between the probability of the event happening with respect to one distribution and the probability of it happening with respect to the other distribution. This measure is usually called *statistical distance* in computer science. It is easy to check that the total variation distance between $p_1$ and $p_2$ is precisely $\frac 12 \| p_1 - p_2 \|_1$. Distributions with disjoint support have total variation distance 1, which is the largest possible.

Lemma 2 (Mixing Time of Random Walks in Expanders) *Let $G$ be a regular graph, and $M$ be its normalized adjacency matrix, with parameter $\lambda = \| M - \frac 1n J \|$. Then for every distribution $p$ over the vertices and every $k$, we have*

$$\left\| M^k p - u \right\|_{TV} \leq \frac{\sqrt n}{2} \cdot \lambda^k$$

*where $u$ is the uniform distribution.*

In particular, if $k \geq \frac{c \log n}{1-\lambda}$, then $\| M^k p - u \|_{TV} \leq \frac 1n$, where $c$ is an absolute constant.

*Proof:* Note that $\frac 1n J$ is the normalized adjacency matrix of a clique with self-loops. Then, for every distribution $p$, we have $\frac 1n J p = u$. Recall also that $\lambda = \| M - \frac 1n J \|$, and note that $M \cdot \frac 1n J = \frac 1n J \cdot M = \left( \frac 1n J \right)^2 = \frac 1n J$, so that $M^k - \frac 1n J = \left( M - \frac 1n J \right)^k$ and hence $\| M^k - \frac 1n J \| = \lambda^k$.

We have

$$\left\| M^k p - u \right\|_{TV} = \frac 12 \left\| M^k p - \frac 1n J p \right\|_1 \leq \frac{\sqrt n}{2} \left\| \left( M^k - \frac 1n J \right) p \right\| \leq \frac{\sqrt n}{2} \cdot \left\| M^k - \frac 1n J \right\| \cdot \| p \| \leq \frac{\sqrt n}{2} \cdot \lambda^k$$

using Cauchy-Schwarz, the definition of matrix norm, and the fact that $\| p \| \leq \| p \|_1 = 1$.

The last result that we discussed today is one more instantiation of the general phenomenon that “if $\lambda$ is small then a result that is true for the clique is true, within some approximation, for $G$.”

Suppose that we take a $k$-step random walk in a regular graph $G$ starting from a uniformly distributed initial vertex. If $G$ is a clique with self-loops, then the sequence of vertices encountered in the random walk is a sequence of independent, uniformly distributed, vertices. In particular, if $f : V \rightarrow [0,1]$ is a bounded function, the Chernoff-Hoeffding bounds tell us that the empirical average of $f$ over the points of the random walk is very close to the true average of $f$, except with very small probability, that is, if we denote by $v_1,\ldots,v_k$ the vertices encountered in the random walk, we have

$$\mathbb{P} \left[ \left| \frac 1k \sum_{i=1}^k f(v_i) - \mathbb{E} f \right| \geq \epsilon \right] \leq 2 e^{-2 \epsilon^2 k}$$

where $\mathbb{E} f = \frac 1n \sum_v f(v)$. A corresponding Chernoff-Hoeffding bound can be proved for the case in which the random walk is taken over a regular graph such that $\lambda$ is small.

Lemma 3 (Chernoff-Hoeffding Bound for Random Walks in Expanders) *Let $G$ be a regular graph with parameter $\lambda$, and let $(v_1,\ldots,v_k)$ be the distribution of $k$-tuples constructed by sampling $v_1$ uniformly at random, and then performing a $(k-1)$-step random walk starting at $v_1$. Let $f : V \rightarrow [0,1]$ be any bounded function. Then*

$$\mathbb{P} \left[ \left| \frac 1k \sum_{i=1}^k f(v_i) - \mathbb{E} f \right| \geq \epsilon \right] \leq 2 e^{-\Omega( (1-\lambda) \cdot \epsilon^2 k )}$$

We will not prove the above result, but we briefly discuss one of its many applications.

Suppose that we have a polynomial-time probabilistic algorithm that, on inputs of length $n$, uses $r$ random bits and then outputs the correct answer with probability, say, at least $\frac 23$. One standard way to reduce the error probability is to run the algorithm $k$ times, using independent randomness each time, and then take the answer that comes out a majority of the times. (This is for problems in which we want to compute a function exactly; in combinatorial optimization we would run the algorithm $k$ times and take the best of the solutions, and in an application in which the algorithm performs an approximate function evaluation we would run the algorithm $k$ times and take the median. The reasoning that follows for the case of exact function computation can be applied to the other settings as well.)

On average, the number of iterations of the algorithm that give a correct answer is at least $\frac 23 k$, and the cases in which the majority is erroneous correspond to cases in which the number of iterations giving a correct answer is at most $\frac k2$. This means that the case in which the modified algorithm makes a mistake corresponds to the case in which the empirical average of $k$ independent 0/1 random variables deviates from its expectation by more than $\frac 16$, which can happen with probability at most $2 e^{-k/18}$, which becomes vanishingly small for large $k$.

This approach uses $r \cdot k$ random bits. Suppose, instead, that we consider the following algorithm: pick the $k$ random strings for the algorithm by performing a $(k-1)$-step random walk in an expander graph of constant degree with $2^r$ vertices and such that $\lambda \leq \frac 12$, and then take the majority answer. A calculation using the Chernoff bound for expander graphs shows that the error probability is $2^{-\Omega(k)}$, and it is achieved using only $r + O(k)$ random bits instead of $r \cdot k$.
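The independent-repetition baseline is easy to simulate; the sketch below checks the empirical majority-vote error against the Hoeffding bound $2e^{-k/18}$. All parameters are illustrative, and replacing the $k$ independent samples by the strings visited along an expander walk would give the derandomized version discussed above.

```python
import random
from math import exp

def majority_amplify(base_error, k, rng):
    """Simulate k independent runs of an algorithm that errs with probability
    base_error, and report whether the majority vote is correct."""
    wrong = sum(rng.random() < base_error for _ in range(k))
    return wrong <= k // 2          # with k odd there are no ties

rng = random.Random(0)
k, trials = 45, 2000
failures = sum(not majority_amplify(1 / 3, k, rng) for _ in range(trials))
hoeffding_bound = 2 * exp(-k / 18)  # 2 * exp(-2 * (1/6)**2 * k)
assert failures / trials <= hoeffding_bound
```

With a base error of $\frac 13$ and $k = 45$, the observed failure rate is around one percent, comfortably below the (loose) Hoeffding bound of about $0.16$.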
