I stared into the abyss, and the abyss stared back into me and said “what the hell is wrong with you people up there?”
If you can’t bear to think about tomorrow’s election, and what will happen to the ninth seat of the Supreme Court if the Republicans hold control of the Senate, and if you would rather hear about another country’s impending constitutional crisis, let me present you with the latest goings-on in Hong Kong.
In 1997, Hong Kong was “handed over” back to China, with the agreement that it would retain a “high degree of autonomy” until 2047, and that there would be elections with universal suffrage by 2017. Hong Kong kept (and will keep until 2047) its own legal system and its own currency, Hong Kong citizens have their own passport, and a mini-constitution, called the “basic law,” was drafted to outline the workings of its political system.
Under the 1997 system, Hong Kong is governed by a Chief Executive, who is elected in a system that gives votes to constituencies such as business associations as well as, with a smaller weight, to the popular vote. The Legislative Council (Legco) is similarly elected: some members reflect the public’s vote, while others are chosen by business and trade groups. The system guarantees a pro-Beijing Chief Executive and a majority of pro-Beijing representatives in the Legco.
In 2012, legislation was passed that would have introduced “patriotic” (pro-PRC) propaganda into K-12 education. The move produced an uproar among student groups, which coalesced under the umbrella of the Scholarism student association, and which led to huge demonstrations; these in turn led the Hong Kong government to shelve the patriotic education initiative.
In 2014, a long-running process to decide how universal suffrage was going to be implemented in the 2017 Chief Executive election yielded a proposal in which only Beijing-approved candidates could run for office, thus negating the purpose of having elections in the first place. A broad protest movement emerged, including both Scholarism students and veterans of the pro-democracy movement that had existed since 1997. Outrage at the police response to early protests led to large popular protests that turned the center of Hong Kong into an occupied zone, where thousands of people pitched tents and stayed for weeks.
The movement eventually failed to win any concessions and the occupation ended, but a new generation of Hong Kong young people became involved in politics and in the pro-democracy movement like never before. Scholarism dissolved as a student group and re-formed as a political party called Demosisto, and a number of other pro-democracy parties arose, including Youngspiration, which has an explicitly pro-autonomy platform.
In last September’s election, Nathan Law Kwun-chung became the youngest person ever to win a seat in the Legco, running for Demosisto, and two Youngspiration members, Sixtus “Baggio” Leung Chung-hang and Yau Wai-ching, also won seats, among other pro-democracy representatives.
On October 4, Joshua Wong, the teenage former Scholarism leader and current Demosisto member, was detained in Thailand on his way to speak at a university in Bangkok, and deported back to Hong Kong under pressure from China, highlighting how threatened the PRC feels by the Hong Kong pro-democracy movement.
On October 12, the swearing-in ceremony of the elected Legco members took place. If you look at the video halfway down this article, you will see “Long Hair” Leung Kwok-Hung come to the lectern with a yellow umbrella, the symbol of the 2014 protests, and a copy of the election law, which he proceeds to shred; at the end of the video you see Nathan Law make a speech after his swearing-in, and, in the middle, there are Baggio Leung and Yau Wai-ching. They each approach the lectern with a banner that says “Hong Kong is not China,” and then recite the oath in English (the other option is to do so in Cantonese). In the oath, legislators swear allegiance to the “Hong Kong Special Administrative Region of the People’s Republic of China,” and they both pronounce “China” as “Chee-na.” The officer presiding over the swearing-in refuses to recognize their oaths as valid.
Here is where things get complicated. “Chee-na” is what the Japanese called China during WWII, and it is considered an offensive slur on the Mainland. Leung and Yau, apparently quite disingenuously, insist that they simply have poor English pronunciation, and they have refused to apologize. Meanwhile, they have twice shown up at Legco meetings to retake their oaths, which the presiding officer has refused to allow. Instead, the government asked the Hong Kong supreme court to rule on whether, by having “refused to take the oath” (a rather questionable interpretation of the events), Leung and Yau should vacate their seats.
So we come to the constitutional crisis: before the Hong Kong supreme court could rule on the matter, the legislative branch of the PRC stepped in yesterday to rule against Leung and Yau, producing an interpretation of the basic law suggesting that all pro-autonomy legislators could be stripped of their posts. It could do so because the PRC legislature is indeed supposed to be the ultimate interpreter of the meaning of the basic law, but this is only the second time since 1997 that it has exercised this prerogative, and the first time that it has done so on such an incendiary matter. This could be the beginning of the end of the autonomy of Hong Kong’s judicial system, which until now had never been in doubt, as well as the end of the remaining semblance of democracy in Hong Kong’s political system.
Protests have already started.
1982 was the annus mirabilis of the foundations of cryptography. In their paper “Probabilistic Encryption,” Goldwasser and Micali introduced two rigorous definitions of security for encryption, which they proved to be equivalent. One definition requires the distributions of encryptions of any two messages to be computationally indistinguishable (a concept they introduce in the paper); the other, which they call semantic security, is the property that whatever can be efficiently computed about a message given the ciphertext can also be efficiently computed without the ciphertext. Later the same year, Blum and Micali gave a rigorous definition of security for pseudorandom generators, and Yao wrapped all these results in a more general framework requiring generic, rather than number-theoretic, assumptions.
The concept of semantic security inspired most subsequent definitions, and proofs, of security based on the concept of simulation. Instead of trying to specify a list of things that an adversary should not be able to do, one defines an idealized model in which the adversary has no access to private and encrypted data, and one defines a given system to be secure if whatever an attacker can efficiently compute given the ability to eavesdrop (and possibly mount an active attack) can also be efficiently computed in the ideal model. One then proves a system to be secure by developing a simulator in the ideal model for every real-world adversary.
Together with Rackoff, Goldwasser and Micali took this idea one step further, from encryption to interactive communication, and came up with the idea of Zero-Knowledge Proofs. A zero-knowledge proof is a probabilistic proof system in which a prover can convince a verifier, with high confidence, of the truth of a statement, with the additional property that there is a simulator able to sample from the distribution of the verifier’s views of the interaction. Thus the verifier is convinced of the truth of the statement being proved, but gains no additional information. In their paper, Goldwasser, Micali and Rackoff introduce the concept and present a zero-knowledge proof for a conjecturally intractable number-theoretic problem. The paper was famously rejected several times before eventually appearing in 1985.
The following year, Goldreich, Micali and Avi Wigderson published a paper giving zero-knowledge proof systems for all problems in NP. Their work made zero-knowledge proofs a key tool in the design of secure cryptosystems: it was now possible for a party to publish a commitment to a secret $x$ and then, at any time, be able to prove that $x$ has a certain property without releasing any additional information about $x$. This ability was a key ingredient in the development of secure multi-party computation in 1987, by the same authors.
So how does one prove in zero knowledge that, say, a graph is 3-colorable? (Once you have zero-knowledge proofs for one NP-complete problem, you immediately have them for all problems in NP.)
Suppose the prover and the verifier know a graph $G=(V,E)$ and the prover knows a 3-coloring of it. A physical analog of the protocol (which can be implemented using the notion of commitment schemes) is the following: the prover randomizes the color labels, then takes $|V|$ lockboxes, each labeled by a vertex, and puts a piece of paper with the color of vertex $v$ in the lockbox labeled by $v$, for every $v \in V$. The prover locks all the lockboxes, and sends them to the verifier. The verifier picks a random edge $(u,v) \in E$ and asks for the keys of the lockboxes for $u$ and for $v$. If they contain different colors, the verifier accepts; otherwise it rejects.
The protocol is complete, in the sense that if the graph is 3-colorable and the parties follow the protocol, then the verifier accepts with probability 1.
The protocol is sound, in the sense that if the graph is not 3-colorable then, no matter what the prover does, there will have to be some edge $(u,v)$ such that the lockboxes of $u$ and $v$ contain the same color, and the verifier has probability at least $1/|E|$ of picking such an edge and rejecting. Thus the verifier accepts with probability at most $1 - 1/|E|$, which can be made negligibly small by repeating the protocol several times.
As for the zero-knowledge property, the view of the verifier is the choice of a random edge, two open lockboxes corresponding to the endpoints of the edge, containing two random different colors, and $|V|-2$ unopened lockboxes. A view with such a distribution can be easily sampled, and the same is true when the physical implementation is replaced by a commitment scheme. (Technically, this argument only establishes honest-verifier zero knowledge, and a bit more work is needed to capture a more general property.)
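The lockbox protocol is easy to mimic in software. The sketch below is my own illustration, not code from the GMW paper: it uses hash-based commitments (the standard heuristic stand-in for lockboxes), and all function names, such as `run_round`, are hypothetical.

```python
import hashlib
import os
import random

def commit(color):
    """Commit to a color: the hash hides the value, the key opens the 'box'."""
    key = os.urandom(16)
    box = hashlib.sha256(bytes([color]) + key).hexdigest()
    return box, key

def opens_to(box, color, key):
    """Check that an opened commitment matches the box sent earlier."""
    return hashlib.sha256(bytes([color]) + key).hexdigest() == box

def run_round(edges, coloring):
    """One round: the prover commits to a relabeled coloring, the verifier checks one edge."""
    # Prover: randomize the three color labels, then lock each vertex's color in a box.
    relabel = random.sample(range(3), 3)
    colors = {v: relabel[c] for v, c in coloring.items()}
    boxes, keys = {}, {}
    for v, c in colors.items():
        boxes[v], keys[v] = commit(c)
    # Verifier: pick a random edge and ask for the two keys.
    u, v = random.choice(edges)
    # Verifier accepts iff both boxes open correctly and the two colors differ.
    return (opens_to(boxes[u], colors[u], keys[u])
            and opens_to(boxes[v], colors[v], keys[v])
            and colors[u] != colors[v])

# An honest prover with a proper 3-coloring of a triangle is always accepted...
triangle = [(0, 1), (1, 2), (2, 0)]
assert all(run_round(triangle, {0: 0, 1: 1, 2: 2}) for _ in range(50))
# ...while a "coloring" that repeats a color on an edge is always caught here,
# and in general with probability at least 1/|E| per round.
assert not any(run_round([(0, 1)], {0: 0, 1: 0}) for _ in range(50))
```

Repeating `run_round` $k$ times drives a cheating prover’s success probability down to $(1-1/|E|)^k$; a real implementation would of course use a commitment scheme with proven hiding and binding properties rather than a bare hash.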
The festivities in honor of Avi Wigderson’s 60th birthday start tomorrow in Princeton, with a dream team of speakers. I will not be able to attend, but luckily a livestream will be available.
During the week, I will post a random selection of results of Avi’s.
Did you know that Avi’s first paper was an algorithm to color 3-colorable graphs using $O(\sqrt{n})$ colors? Here is the algorithm, which has the flavor of Ramsey theory proofs.
Suppose all nodes have degree less than $\sqrt{n}$; then we can easily color the graph with $\sqrt{n}$ colors. Otherwise, there is a node $v$ of degree at least $\sqrt{n}$. The neighbors of $v$ induce a bipartite graph (because, in the 3-coloring that we are promised to exist, they are colored with whichever two colors are different from the color of $v$), and so we can find in linear time an independent set of size at least $\sqrt{n}/2$. So we keep finding independent sets (to each of which we assign a color, and which we then remove) of size at least $\sqrt{n}/2$, until all remaining degrees are less than $\sqrt{n}$ and we know how to color the residual graph with $\sqrt{n}$ colors, meaning that we can color the whole graph with $O(\sqrt{n})$ colors.
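In code, the argument looks like the following sketch (my own rendering, with hypothetical names like `wigderson_color`; it assumes the input graph really is 3-colorable):

```python
import math
from collections import deque

def two_color(vertices, adj):
    """2-color the graph induced on `vertices` by BFS.  In a 3-colorable graph
    the neighborhood of any vertex is bipartite, so this succeeds there."""
    side = {}
    for s in vertices:
        if s in side:
            continue
        side[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for u in adj[v] & vertices:
                if u not in side:
                    side[u] = 1 - side[v]
                    queue.append(u)
                elif side[u] == side[v]:
                    raise ValueError("neighborhood not bipartite: graph is not 3-colorable")
    return side

def wigderson_color(adj):
    """Properly color a 3-colorable graph {v: set of neighbors} with O(sqrt(n)) colors."""
    n = len(adj)
    threshold = max(1, math.isqrt(n))
    live = {v: set(nb) for v, nb in adj.items()}   # the residual graph
    color, fresh = {}, 0
    while True:
        v = next((u for u in live if len(live[u]) >= threshold), None)
        if v is None:
            break
        # N(v) induces a bipartite graph; its larger side is an independent
        # set of size >= deg(v)/2 >= sqrt(n)/2.  Give it one fresh color.
        side = two_color(live[v], live)
        majority = max((0, 1), key=lambda b: sum(1 for u in side if side[u] == b))
        indep = {u for u in side if side[u] == majority}
        for u in indep:
            color[u] = fresh
        fresh += 1
        for u in indep:           # remove the independent set from the residual graph
            for w in live.pop(u):
                if w in live:
                    live[w].discard(u)
    # every residual degree is now below sqrt(n): greedy needs at most sqrt(n) more colors
    for v in live:
        used = {color[u] for u in live[v] if u in color}
        c = fresh
        while c in used:
            c += 1
        color[v] = c
    return color
```

Each high-degree step spends one color to remove at least $\sqrt{n}/2$ vertices, so at most $2\sqrt{n}$ colors are used before the greedy phase, which needs at most $\sqrt{n}$ more.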
For the excellent choice of recognizing Noam Nisan for his work on complexity lower bounds, derandomization, and mechanism design.
Noam is known to readers of in theory for the development of the Nisan-Wigderson pseudorandom generator construction, which remains at the foundation of conditional derandomization results, and for Nisan’s generator, which is secure against log-space statistical tests, and whose seed length has not been improved upon in the past 25+ years. The modern definition of randomness extractors was given in a paper of Noam and David Zuckerman, which was also motivated by space-bounded derandomization.
Besides introducing almost all the techniques used in the main results on derandomization and pseudorandomness, Noam also introduced many of the techniques that underlie the main lower bound results that we can prove in restricted models, including the idea of approximating functions by polynomials, of looking at partial derivatives to obtain arithmetic lower bounds, and the connection between rank and communication complexity. With Linial and Mansour, he showed that the Håstad switching lemma could be used to bound the Fourier coefficients of functions computable by bounded-depth circuits, leading to quasi-polynomial-time learning algorithms for them (and to the realization that bounded-depth circuits cannot realize pseudorandom functions).
On November 27, 1989, Noam sent an email to a group of colleagues with a proof that (a decision problem equivalent to) the permanent had a multi-prover interactive proof; this set in motion a flurry of activity which led, in a matter of days, to the LFKN paper showing that $P^{\#P}$ had a (one-prover) interactive proof and to Shamir’s proof that $IP = PSPACE$.
At the end of the 1990s, having done enough to keep the computational complexity community occupied for several subsequent decades, Noam wrote a paper with Amir Ronen called “Algorithmic mechanism design.” Around the same time, Elias Koutsoupias and Christos Papadimitriou published their work on worst-case equilibria, and Tim Roughgarden and Eva Tardos published their work on selfish routing. A couple of years later, Christos gave back-to-back invited talks at SODA 2001, STOC 2001, ICALP 2001 and FOCS 2001 on game theory, and algorithmic game theory and algorithmic mechanism design have gone on to become a major part of theoretical computer science.
Congratulations again to the prize committee, and please use the comments section to talk about the result of Noam’s that I didn’t know well enough to write about.
And what has the theory of computing done for us in the last twenty years?
Differential privacy? Apple just announced it will be used in iOS 10.
Yes, and the application to preventing false discovery and overfitting is now used in production.
Ok, fine, but apart from differential privacy, what has theory done for us in the last twenty years?
Quantum algorithms? There wouldn’t be such a push to realize quantum computers if it wasn’t for Shor’s algorithm.
And quantum error-correcting codes! There would be no hope of realizing quantum computers without quantum error correction.
Very well, but apart from differential privacy and quantum computing, what has theory done for us in the …
Streaming algorithms? It all started with a theory paper and now it is a major interdisciplinary effort.
Yes, fair enough, but apart from differential privacy, quantum computing, and streaming algorithms, what has theory done for us…
Error-correcting codes? List decoding and codes for distributed storage went straight from theory papers into practice.
Sure, ok, but apart from differential privacy, quantum computing, streaming algorithms, and error-correcting codes, what has theory…
Homomorphic encryption? The first-generation solutions were inefficient, but it might be only a matter of time before we have usable homomorphic encryption standards.
Yes, all right, but apart from differential privacy, quantum computing, streaming algorithms, error-correcting codes, and homomorphic encryption, what has theory done for us…
Linear-time equation solvers? Spectral graph theory gave us near-linear-time solvers for Laplacian systems of linear equations.
And sub-linear time algorithms, which make it possible to analyze data sets that are too large to even read in full.
All right! But apart from differential privacy, quantum computing, streaming algorithms, error-correcting codes, homomorphic encryption, linear-time equation solvers and sub-linear time algorithms, what has the theory of computing ever done for us in the past twenty years?
. . .
[Could be continued. Indeed, please continue in the comments]
From an interview for New York Magazine:
It’s like if you were on a plane and you wanted to choose a pilot. You have one person, Hillary, who says, “Here’s my license. Here’s all the thousands of flights that I’ve flown. Here’s planes I’ve flown in really difficult situations. I’ve had some good flights and some bad flights, but I’ve been flying for a very long time, and I know exactly how this plane works.” Then you’ve got Bernie, who says, “Everyone should get a ride right to their house with this plane.” “Well, how are you going to do that?” “I just think we should. It’s only fair that everyone gets to use the plane equally.” And then Trump says, “I’m going to fly so well. You’re not going to believe how good I’m going to fly this plane, and by the way, Hillary never flew a plane in her life.” “She did, and we have pictures.” “No, she never did it.”
So you are a San Francisco Democratic primary voter, a reader of “in theory,” and you do not like to think for yourself? You are in luck, because, for the first time ever, we are doing endorsements:
Bernie Sanders for President of the United States
Kamala Harris for United States Senator
Nancy Pelosi for United States Representative
Scott Wiener for California State Senator
David Chiu for California State Assemblyman
Victor Hwang for Superior Court Judge
Jordan Ellenberg has just announced a resolution of the “cap problem” using techniques of Croot, Lev and Pach, in a self-contained three-page paper. This is a quite unexpected development for a long-standing open problem at the core of additive combinatorics.
Perhaps the right starting point for this story is 1936, when Erdős and Turán conjectured that, for every $k$, if $A$ is a subset of $\{1,\ldots,N\}$ without $k$-term arithmetic progressions, then $|A| = o(N)$, or, equivalently, that if $A$ is a subset of the integers of positive density, then it must have arbitrarily long arithmetic progressions. Their goal in stating this conjecture was that resolving it would be a stepping stone to proving that the prime numbers have arbitrarily long arithmetic progressions. This vision came true several decades later: Szemerédi proved the conjecture in 1975, and Green and Tao proved that the primes contain arbitrarily long arithmetic progressions in 2004, with Szemerédi’s theorem being a key ingredient in their proof.
Rewinding a bit, the first progress on the Erdős-Turán conjecture came from Roth, who proved the case $k=3$ in 1953. Roth’s proof establishes that if $A \subseteq \{1,\ldots,N\}$ does not have length-3 arithmetic progressions, then $|A|$ is at most, roughly, $N/\log\log N$. Erdős also conjectured that the bound should be $o(N/\log N)$, and if this were true it would imply that the primes have infinitely many length-3 arithmetic progressions simply because of their density.
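As a quick aside (my own experiment, not part of the original post): the lower-bound side of the question is fun to play with. The greedy algorithm, which scans $0, 1, 2, \ldots$ and keeps each integer that does not complete a length-3 progression with two integers already kept, produces the classical Stanley sequence: the numbers whose base-3 expansion avoids the digit 2, a set of size only about $N^{\log_3 2} \approx N^{0.63}$, far smaller than what Behrend-type constructions achieve.

```python
def greedy_ap_free(limit):
    """Greedily keep numbers in [0, limit) that complete no 3-term AP."""
    kept, kept_set = [], set()
    for x in range(limit):
        # x would be the largest element of an AP (a, b, x) with a = 2*b - x
        if not any(2 * b - x in kept_set for b in kept):
            kept.append(x)
            kept_set.add(x)
    return kept

def no_digit_two_base_three(limit):
    """Numbers in [0, limit) whose base-3 expansion uses only digits 0 and 1."""
    result = []
    for x in range(limit):
        n, ok = x, True
        while n:
            n, digit = divmod(n, 3)
            ok = ok and digit != 2
        if ok:
            result.append(x)
    return result

# The two descriptions agree: 0, 1, 3, 4, 9, 10, 12, 13, 27, ...
assert greedy_ap_free(100) == no_digit_two_base_three(100)
```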
Roth’s proof uses Fourier analysis, and Meshulam, in 1995, noted that the proof becomes much cleaner, and leads to better bounds, if one looks at the analogous problem in $\mathbb{F}^n$, where $\mathbb{F}$ is a finite field (of characteristic different from 2). In this case, the question is how big a subset $A \subseteq \mathbb{F}_p^n$ can be if it does not have three points on a line. An adaptation of Roth’s techniques gives an upper bound of the order of $p^n/n$, which, for constant $p$, is of the order of $N/\log N$ if $N = p^n$ is the size of the universe of which $A$ is a subset.
Bourgain introduced a technique to work on $\mathbb{Z}/N\mathbb{Z}$ “as if” it were a vector space over a finite field, and proved upper bounds of the order of $N/(\log N)^{1/2}$ and then of $N/(\log N)^{2/3}$ (up to lower-order terms) on the size of a subset of $\{1,\ldots,N\}$ without length-3 arithmetic progressions. The latest result in this line is by Sanders, who proved a bound of the order of $N (\log\log N)^{O(1)}/\log N$, very close to Erdős’s stronger conjecture.
How far can these results be pushed? A construction of Behrend’s shows that there is a set $A \subseteq \{1,\ldots,N\}$ with no length-3 arithmetic progression and size roughly $N/2^{O(\sqrt{\log N})}$. The construction is simple (it is a discretization of a sphere in about $\sqrt{\log N}$ dimensions) and it has some unexpected other applications. This means that the right bound in Roth’s theorem is of the form $N^{1-o(1)}$, and that the “only” question is what the $o(1)$ term is.
In the finite vector space case, there is no analog of Behrend’s construction, and so the size of, say, the largest subset of $\mathbb{F}_3^n$ without three points on a line was completely open, with an upper bound of the order of $3^n/n$ and lower bounds of the order of $c^n$ for some constant $c < 3$. The cap problem was the question of whether the right bound is of the form $3^{n(1-o(1))}$ or not.
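To make the object concrete, here is a tiny brute-force computation (my own illustration, feasible only for very small $n$; the function names are hypothetical). It uses the fact that three distinct points of $\mathbb{F}_3^n$ lie on a line exactly when they sum to the zero vector coordinate-wise:

```python
from itertools import combinations, product

def is_cap(points):
    """True iff no three of the (distinct) points sum to zero mod 3, i.e., no line."""
    return all(any((a[i] + b[i] + c[i]) % 3 != 0 for i in range(len(a)))
               for a, b, c in combinations(points, 3))

def max_cap_size(n):
    """Size of the largest cap in F_3^n, by exhaustive search over subsets."""
    points = list(product(range(3), repeat=n))
    for k in range(len(points), 0, -1):
        if any(is_cap(subset) for subset in combinations(points, k)):
            return k
    return 0

print(max_cap_size(1), max_cap_size(2))  # prints: 2 4
```

Already for $n=4$ the search space explodes, which is exactly why the asymptotics of the problem were attacked with analytic and, now, polynomial-method tools.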
Two weeks ago, Croot, Lev and Pach proved that if $A$ is a subset of $\mathbb{Z}_4^n$ without length-3 arithmetic progressions, then $|A|$ is at most of the order of $c^n$ for a constant $c < 4$. This was a strong indication that the right bound in the cap problem should be sub-exponential.
This was done a couple of days ago by Ellenberg, who proved that an upper bound of the form $c^n$, with $c < 3$, holds in $\mathbb{F}_3^n$. The proof is not specific to $\mathbb{F}_3$ and generalizes to all finite fields.
Both proofs use the polynomial method. Roughly speaking, the method is to associate a polynomial to a set of interest (for example, by finding a non-zero low-degree polynomial that is zero on all points of the set), and then to proceed using simple properties of polynomials (such as the fact that the space of polynomials of a certain degree has bounded dimension, or that a non-zero univariate polynomial has at most as many zeroes as its degree), applied either to the polynomial that we constructed or to the terms of its factorization.
Let $P_d$ be the vector space of $n$-variate polynomials over $\mathbb{F}_3$ of total degree at most $d$ that are cube-free (that is, such that all variables occur in monomials with degree 0, 1, or 2), and let $m_d$ be its dimension.
If $A \subseteq \mathbb{F}_3^n$ is a set such that there are no three distinct $a, b, c \in A$ with $a+b+c = 0$ (over $\mathbb{F}_3$ this coincides with not having three points on a line, and the draft claims that the same argument works for the progression-free property over other fields as well), then Ellenberg shows that $|A| \le 3 m_{2n/3}$; the bound then follows from computing that $m_{2n/3} \le c^n$ for a constant $c < 3$.
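Since $m_d$ is just a count of monomials with exponents in $\{0,1,2\}$, it is easy to compute exactly and to watch the exponential gap below $3^n$ emerge. The sketch below is my own (the name `cube_free_dim` is hypothetical); it counts exponent vectors by a simple convolution:

```python
def cube_free_dim(n, d):
    """Dimension of the space of n-variate cube-free polynomials of total degree
    at most d over F_3: the number of vectors in {0,1,2}^n with entries summing to <= d."""
    # counts[s] = number of cube-free monomials of total degree exactly s
    counts = [1]
    for _ in range(n):
        new = [0] * (len(counts) + 2)
        for s, c in enumerate(counts):
            for e in (0, 1, 2):      # each variable's exponent is 0, 1, or 2
                new[s + e] += c
        counts = new
    return sum(counts[: d + 1])

# Sanity checks that are easy to do by hand:
assert cube_free_dim(1, 2) == 3     # 1, x, x^2
assert cube_free_dim(2, 2) == 6     # 1, x, y, xy, x^2, y^2

# The quantity in the bound: m_{2n/3} sits strictly between 2^n and the trivial 3^n,
# and its n-th root tends to a constant below 3 as n grows.
n = 60
m = cube_free_dim(n, 2 * n // 3)
assert 2 ** n < m < 3 ** n
```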
The finite field Kakeya problem is another example of a problem that had resisted powerful Fourier-analytic attacks, and that was solved by Zeev Dvir with a relatively simple application of the polynomial method. One may hope that the method has not yet exhausted its applicability.
Gil Kalai has posted about further consequences of the results of Croot, Lev, Pach and Ellenberg.
[I sneeze several times and then the following conversation happens]
J.Z.: In China, we say that if you sneeze once, it means that someone is thinking of you. If you sneeze twice, it means someone is cursing you.
Me: And what does it mean when I sneeze three times or more?
J.Z.: It means you have a cold.