We need to occupy unsafe spaces

Today an atmosphere of grief pervaded Berkeley, and a series of emails came from higher and higher up the chain of command, culminating with one from Janet Napolitano, reaffirming the University of California’s commitment to its principles of inclusivity, diversity, and all things that are good. Here is an email from the Vice-Chancellor for Equity and Inclusion:

Dear Cal Students, Staff, and Faculty,

We know that the results of yesterday’s election have sparked fear and concern among many in our community; in particular our immigrant and undocumented communities, Muslim, African American, Chicanx/Latinx, LGBTQ+, Asian and Pacific Islander communities, survivors of sexual assault, people with disabilities, women, and many others. We are reaching out to you with a message of support. UC Berkeley leadership remains steadfast in our values and committed to the safety and well-being of all of our students, faculty, and staff. We condemn bigotry and hatred in all forms, and hold steadfast in our commitment to equity, access, and a campus that is safe, inclusive, and welcoming to all.

Various communities have organized the following community spaces and resources:

  • A community space for undocumented students tonight at 6:30pm in Chavez Room 105.
  • CLSD and CLPR are hosting space at the Shorb House, 2547 Channing Way from 12pm-5pm for students to come by. Faculty and staff will be there in community with our students for support.
  • MCC is holding a safe space for POC/Black students from 8pm-10pm this evening.
  • QTAP is hosting a QTOPC dinner in Anthony Hall at 6pm.
  • The Gender Equity Resource Center is open today, until 5pm, for those who wish for a quiet space for contemplation and community. GenEq is also hosting the following healing spaces:
    • Women’s Healing Space – Today, November 9th, 1pm-2:30pm
    • LGBTQ+ Healing Space – Today, November 9th, 2:30pm-4pm

Now, without discounting the value of having a good place to quietly sob with like-minded people, this strikes me as the message you would send after the Charleston shooting, or the Orlando shooting. This was not the act of one disturbed person. Sixty million Americans went to the trouble of going all the way to the polling place to vote for Trump, or of filling out their absentee ballots and mailing them in. That’s one in four adult Americans, a diverse coalition of white people: educated white people and less educated white people, male white people and female white people, religious white people and secular white people, and, one should note, even a few people who are not white. White people who think it is their constitutional right to discriminate against gay people, white people who think it is ok to grab women by their pussy, and that you can get away with it if you are a star, white people who think Muslims should not come to the US.

Our students will meet these people everywhere. They will be their neighbors, their coworkers, their bosses, and maybe their in-laws. When our students graduate, there will be no safe space.

And if there were, that is not where they should go. Because those sixty million people will not change their minds by talking among themselves.

But the oppressed cannot be left alone to speak out for themselves.

A feature of rape prevention programs is bystander intervention training: teaching people how to recognize a situation in which someone is at risk of assault, and how to intervene.

We need white people to speak out against racism, Christians to speak out for the religious freedom of non-Christians, men to speak out against misogyny, straight people to speak out for LGBT people, and Republicans to speak out against Trump.

And we need them to do this where it is not safe to do so, because otherwise all we have is angry people talking to the like-minded on the internet.

Edited to add a useful link

How Baggio Leung and Yau Wai-ching Lost Their Legco Seats

If you can’t bear to think about tomorrow’s election, and what will happen to the ninth seat of the Supreme Court if the Republicans hold control of the Senate, and if you would rather hear about another country’s impending constitutional crisis, let me present you with the latest goings on in Hong Kong.

In 1997, Hong Kong was “handed over” back to China, with the agreement that it would retain a “high degree of autonomy” until 2047, and that there would be elections with universal suffrage by 2017. Hong Kong kept (and will keep until 2047) its own legal system and its own currency, Hong Kong citizens have their own passports, and a mini-constitution, called the “Basic Law,” was drafted to outline the workings of its political system.

Under the 1997 system, Hong Kong is governed by a Chief Executive, who is elected in a system that gives votes to constituencies such as business associations, as well as, with a smaller weight, to the popular vote. The Legislative Council (Legco) is similarly elected in a way in which some members reflect the public’s vote and others are appointed by business and trade groups. The system guarantees a pro-Beijing Chief Executive and a majority of pro-Beijing representatives in the Legco.

In 2012, legislation was passed that would have introduced “patriotic” (pro-PRC) propaganda into K-12 education. The move produced an uproar among student groups, which coalesced under the umbrella of the Scholarism student association and led to huge demonstrations, which in turn led the Hong Kong government to shelve the patriotic education initiative.

In 2014, a long-running process to decide how universal suffrage would be implemented in the 2017 Chief Executive election yielded a proposal in which only Beijing-approved candidates could run for office, thus negating the purpose of having elections in the first place. A broad protest movement emerged, including both Scholarism students and veterans of the pro-democracy movement that had existed since 1997. Outrage at the police response to early protests led to large popular protests that turned the center of Hong Kong into an occupied zone, where thousands of people pitched tents and stayed for weeks.

Eventually the movement failed to win any concessions and the occupation ended, but a new generation of Hong Kong young people became involved in politics and in the pro-democracy movement like never before. Scholarism dissolved as a student group and reformed as a political party called Demosisto, and a number of other pro-democracy parties arose, including Youngspiration, which has an explicit pro-autonomy platform.

In last September’s election, Nathan Law Kwun-chung became the youngest person ever to win a seat in Legco, running for Demosisto, and two Youngspiration members, Sixtus “Baggio” Leung Chung-hang and Yau Wai-ching, also won seats, among other pro-democracy representatives.

On October 4, Joshua Wong, the teenage former Scholarism leader and current Demosisto member, was detained in Thailand on his way to speak at a university in Bangkok, and deported back to Hong Kong under pressure from China, highlighting how threatened the PRC feels by the Hong Kong pro-democracy movement.

On October 12, the swearing-in ceremony of the elected Legco members took place. If you look at the video halfway down this article, you will see “Long Hair” Leung Kwok-Hung come to the lectern with a yellow umbrella, the symbol of the 2014 protests, and a copy of the election law, which he proceeds to shred; at the end of the video you see Nathan Law make a speech after his swearing-in, and, in the middle, there are Baggio Leung and Yau Wai-ching. They each approach the lectern with a banner that says “Hong Kong is not China,” and then recite the oath in English (the other option is to do so in Cantonese). In the oath, legislators swear allegiance to the “Hong Kong Special Administrative Region of the People’s Republic of China,” and they both pronounce China as “Chee-na.” The officer presiding over the swearing-in refuses to recognize their oaths as valid.

Here is where things get complicated. “Chee-na” is what the Japanese called China during WWII, and it is considered an offensive slur on the Mainland. Leung and Yau, apparently quite disingenuously, insist that they simply have poor English pronunciation, and they have refused to apologize. Meanwhile, they have twice shown up at Legco meetings to retake their oaths, which the presiding officer has refused to allow. Instead, the government asked the Hong Kong supreme court to rule on whether, by having “refused to take the oath” (a rather questionable interpretation of the event), Leung and Yau should vacate their seats.

So we come to the constitutional crisis: before the Hong Kong supreme court could rule on the matter, the legislative branch of the PRC stepped in yesterday to rule against Leung and Yau, producing an interpretation of the basic law suggesting that all pro-autonomy legislators could be stripped of their posts. It could do so because the PRC legislature is indeed supposed to be the ultimate interpreter of the meaning of the basic law, but this is only the second time since 1997 that it has exercised this prerogative, and the first time that it has done so on such an incendiary matter. This could be the beginning of the end of the autonomy of Hong Kong’s judicial system, which until now had never been in doubt, as well as the end of the remaining semblance of democracy in Hong Kong’s political system.

Protests have already started.

Avi60: Zero Knowledge for all NP

1982 was the annus mirabilis of the foundations of cryptography. In their paper “probabilistic encryption,” Goldwasser and Micali introduced two rigorous definitions of security for encryption, which they proved to be equivalent. One definition requires the distributions of encryptions of any two messages to be computationally indistinguishable (a concept they introduce in the paper); the other, which they call semantic security, is the property that whatever can be efficiently computed about a message given the ciphertext can also be efficiently computed without the ciphertext. Later the same year, Blum and Micali gave a rigorous definition of security for pseudorandom generators, and Yao wrapped all these results in a more general framework requiring generic, rather than number-theoretic, assumptions.

The concept of semantic security inspired most subsequent definitions, and proofs, of security based on the concept of simulation. Instead of trying to specify a list of things that an adversary should not be able to do, one defines an idealized model in which the adversary has no access to private and encrypted data, and one defines a given system to be secure if whatever an attacker can efficiently compute given the ability to eavesdrop (and possibly mount an active attack) can also be efficiently computed in the ideal model. One then proves a system to be secure by developing a simulator in the ideal model for every real-world adversary.

Together with Rackoff, Goldwasser and Micali took this idea one step further, from encryption to interactive communication, and came up with the idea of Zero-Knowledge Proofs. A zero-knowledge proof is a probabilistic proof system in which a prover can convince a verifier, with high confidence, of the truth of a statement, with the additional property that there is a simulator that is able to sample from the distribution of the verifier’s views of the interaction. Thus the verifier is convinced of the truth of the statement being proved, but gains no additional information. In their paper, Goldwasser, Micali and Rackoff introduce the concept and present a zero-knowledge proof for a conjecturally intractable number-theoretic problem. The paper was famously rejected several times, eventually appearing in 1985.

The following year, Goldreich, Micali and Avi Wigderson published a paper giving zero-knowledge proof systems for all problems in NP. Their work made zero-knowledge proofs a key tool in the design of secure cryptosystems: it was now possible for a party to publish a commitment to a secret x and then, at any time, prove that x has a certain property without releasing any additional information about x. This ability was a key ingredient in the development of secure multi-party computation in 1987, by the same authors.

So how does one prove in zero knowledge that, say, a graph is 3-colorable? (Once you have zero-knowledge proofs for one NP-complete problem, you immediately have them for all problems in NP.)

Suppose the prover and the verifier know a graph G and the prover knows a 3-coloring. A physical analog of the protocol (which can be implemented using the notion of commitment schemes) is the following: the prover randomizes the color labels, then takes |V| lockboxes, each labeled by a vertex, and puts a piece of paper with the color of vertex v in the lockbox labeled by v, for every v. The prover locks all the lockboxes, and sends them to the verifier. The verifier picks a random edge (u,v) and asks for the keys of the lockboxes for u and for v. If they contain different colors, the verifier accepts, otherwise it rejects.

The protocol is complete, in the sense that if the graph is 3-colorable and the parties follow the protocol, then the verifier accepts with probability 1.

The protocol is sound, in the sense that if the graph is not 3-colorable then, no matter what the prover does, there must be some edge (u,v) such that the lockboxes of u and v contain the same color, and the verifier has probability at least 1/|E| of picking such an edge and rejecting. Thus the verifier accepts with probability at most 1 - 1/|E|, which can be made negligibly small by repeating the protocol several times.

As for the zero-knowledge property, the view of the verifier is the choice of a random edge, two open lockboxes corresponding to the endpoints of the edge, containing two random different colors, and |V|-2 unopened lockboxes. A view with this distribution can be easily sampled, and the same is true when the physical implementation is replaced by a commitment scheme. (Technically, this argument only establishes honest-verifier zero knowledge, and a bit more work is needed to capture the more general property.)
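As an illustration, here is a toy Python sketch of one round of the protocol, with the lockboxes replaced by hash-based commitments. This is my own simplification, not code from the paper: the hash commitment is only heuristically hiding and binding, and all the names below are hypothetical.

```python
import hashlib, os, random

def commit(value):
    # Toy commitment: hash a random nonce together with the value.
    # (Stands in for a lockbox -- illustration only, not real cryptography.)
    nonce = os.urandom(16)
    return hashlib.sha256(nonce + bytes([value])).hexdigest(), nonce

def open_ok(digest, nonce, value):
    # The verifier re-hashes to check that an opened lockbox matches its commitment.
    return hashlib.sha256(nonce + bytes([value])).hexdigest() == digest

def zk_round(edges, coloring):
    """One round: the prover commits to a re-labeled coloring, the verifier spot-checks one edge."""
    # Prover: randomly permute the three color labels ...
    perm = random.sample(range(3), 3)
    permuted = {v: perm[c] for v, c in coloring.items()}
    # ... and lock each vertex's color in its own "lockbox".
    openings = {v: commit(c) for v, c in permuted.items()}
    # Verifier: pick a random edge and ask for the two corresponding keys.
    u, w = random.choice(edges)
    (du, nu), (dw, nw) = openings[u], openings[w]
    # Accept iff both lockboxes open correctly and show different colors.
    return open_ok(du, nu, permuted[u]) and open_ok(dw, nw, permuted[w]) \
        and permuted[u] != permuted[w]

# A triangle with a proper 3-coloring: an honest prover always convinces the verifier.
edges = [(0, 1), (1, 2), (0, 2)]
coloring = {0: 0, 1: 1, 2: 2}
honest_ok = all(zk_round(edges, coloring) for _ in range(50))
```

Note that because the color labels are freshly permuted in every round, the two opened colors are just a uniformly random pair of distinct colors, which is exactly why the verifier's view is easy to simulate.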


The festivities in honor of Avi Wigderson’s 60th birthday start tomorrow in Princeton, with a dream team of speakers. I will not be able to attend, but luckily a livestream will be available.

During the week, I will post a random selection of results of Avi’s.

Did you know that Avi’s first paper was an algorithm to color 3-colorable graphs using O(\sqrt n) colors? Here is the algorithm, which has the flavor of Ramsey theory proofs.

If all nodes have degree < \sqrt n, then we can easily color the graph with \sqrt n colors. Otherwise, there is a node v of degree \geq \sqrt n. The neighbors of v induce a bipartite graph (because, in the 3-coloring that we are promised exists, they are colored with whichever two colors differ from the color of v), and so we can find in linear time an independent set of size \geq \sqrt n / 2, namely the larger side of the bipartition. So we keep finding independent sets (which we assign a color to, and remove) of size \geq \sqrt n /2 until we get to a point where we know how to color the residual graph with \leq \sqrt n colors, meaning that we can color the whole graph with \leq 3 \sqrt n colors.
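Here is a sketch of the algorithm in Python. The graph is given as a dictionary mapping each vertex to its set of neighbors; the 2-coloring step relies on the promised 3-colorability, so this is an illustration of the idea rather than a robust implementation, and the function name is my own.

```python
import math
from collections import deque

def color_3colorable(adj):
    """Color a 3-colorable graph with O(sqrt(n)) colors.
    adj: dict mapping each vertex to the set of its neighbors."""
    n = len(adj)
    thresh = max(math.isqrt(n), 1)             # ~ sqrt(n)
    g = {v: set(ns) for v, ns in adj.items()}  # mutable working copy
    colors, next_color = {}, 0
    # Phase 1: while some vertex has degree >= sqrt(n), its neighborhood is
    # bipartite (it misses one of the three colors), so it contains an
    # independent set of at least half its size; color that set and remove it.
    while g and max(len(ns) for ns in g.values()) >= thresh:
        v = max(g, key=lambda u: len(g[u]))
        nbrs = g[v]
        side = {}                              # 2-color the induced graph by BFS
        for s in nbrs:
            if s in side:
                continue
            side[s] = 0
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for w in g[u] & nbrs:
                    if w not in side:
                        side[w] = 1 - side[u]
                        queue.append(w)
        majority = 0 if 2 * sum(1 for x in side.values() if x == 0) >= len(side) else 1
        indep = {u for u in nbrs if side[u] == majority}
        for u in indep:                        # one fresh color per independent set
            colors[u] = next_color
        next_color += 1
        for u in indep:                        # delete the colored vertices
            for w in g[u]:
                g[w].discard(u)
            del g[u]
    # Phase 2: the residual graph has max degree < sqrt(n), so greedy coloring
    # with fresh colors needs at most about sqrt(n) of them.
    base = next_color
    for v in g:
        used = {colors[w] for w in adj[v] if colors.get(w, -1) >= base}
        c = base
        while c in used:
            c += 1
        colors[v] = c
    return colors

# Example: the complete tripartite graph K_{4,4,4} is 3-colorable.
adj = {v: set() for v in range(12)}
for u in range(12):
    for w in range(12):
        if u // 4 != w // 4:
            adj[u].add(w)
cols = color_3colorable(adj)
```

Each Phase 1 iteration removes at least \sqrt n / 2 vertices, so there are at most 2 \sqrt n iterations, matching the 3 \sqrt n total above.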

Congratulations to the 2016 Knuth Prize Selection Committee!

For the excellent choice of recognizing Noam Nisan for his work on complexity lower bounds, derandomization, and mechanism design.

Noam is known to readers of in theory for the development of the Nisan-Wigderson pseudorandom generator construction, which remains at the foundation of conditional derandomization results, and for Nisan’s generator, which is secure against log-space statistical tests, and whose O(\log^2 n) seed length has not been improved upon in the past 25+ years. The modern definition of randomness extractors was given in a paper of Noam and David Zuckerman, which was also motivated by space-bounded derandomization.

Besides introducing almost all the techniques used in the main results on derandomization and pseudorandomness, Noam also introduced many of the techniques that underlie the main lower bound results that we can prove in restricted models, including the idea of approximating functions by polynomials, of looking at partial derivatives to obtain arithmetic lower bounds, and the connection between rank and communication complexity. With Linial and Mansour, he showed that the Hastad switching lemma could be used to bound the Fourier coefficients of functions computable by bounded-depth circuits, leading to quasi-polynomial learning algorithms for them (and to the realization that bounded-depth circuits cannot compute pseudorandom functions).

On November 27, 1989, Noam sent an email to a group of colleagues with a proof that (a decision problem equivalent to) the permanent had a multi-prover interactive proof; this set in motion a flurry of activity which led in a matter of days to the LFKN paper showing that P^{\# P} had a (one-prover) interactive proof and to Shamir’s proof that IP = PSPACE.

At the end of the 1990s, having done enough to keep the computational complexity community occupied for several subsequent decades, Noam wrote a paper with Amir Ronen called “Algorithmic mechanism design.” Around the same time, Elias Koutsoupias and Christos Papadimitriou published their work on worst-case equilibria, and Tim Roughgarden and Eva Tardos published their work on selfish routing. A couple of years later, Christos gave back-to-back invited talks on game theory at SODA 2001, STOC 2001, ICALP 2001 and FOCS 2001, and algorithmic game theory and algorithmic mechanism design have gone on to become a major part of theoretical computer science.

Congratulations again to the prize committee, and please use the comments section to talk about the result of Noam’s that I didn’t know well enough to write about.

A conversation on what theory has done for us

[Inspired by Lance Fortnow’s retrospective post on the “Karp report,” Avi Wigderson’s response, and Monty Python]

And what has the theory of computing done for us in the last twenty years?

Differential privacy? Apple just announced it will be used in iOS 10.

Yes, and the application to preventing false discovery and overfitting is now used in production.

Ok, fine, but apart from differential privacy, what has theory done for us in the last twenty years?

Quantum algorithms? There wouldn’t be such a push to realize quantum computers if it wasn’t for Shor’s algorithm.

And quantum error correction! There would be no hope of realizing quantum computers without quantum error correction.

Very well, but apart from differential privacy and quantum computing, what has theory done for us in the …

Streaming algorithms? It all started with a theory paper and now it is a major interdisciplinary effort.

Yes, fair enough, but apart from differential privacy, quantum computing, and streaming algorithms, what has theory done for us…

Linear-time decodable LDPC error-correcting codes? The first generation was not practical, but now they are part of major standards.

Sure, ok, but apart from differential privacy, quantum computing, streaming algorithms, and error-correcting codes, what has theory…

Homomorphic encryption? The first-generation solutions were inefficient, but it might be only a matter of time before we have usable homomorphic encryption standards.

Linear-time SDD solvers? Algorithms like this and this are implementable and we may be one more idea away from algorithms that can be put in production.

Sublinear time algorithms like sparse FFT?

All right! But apart from differential privacy, quantum computing, streaming algorithms, error-correcting codes, homomorphic encryption, linear-time equation solvers and sub-linear time algorithms, what has the theory of computing ever done for us in the past twenty years?

. . .

[Could be continued. Indeed, please continue in the comments]

Louis CK on the 2016 presidential campaign

From an interview for New York Magazine:

It’s like if you were on a plane and you wanted to choose a pilot. You have one person, Hillary, who says, “Here’s my license. Here’s all the thousands of flights that I’ve flown. Here’s planes I’ve flown in really difficult situations. I’ve had some good flights and some bad flights, but I’ve been flying for a very long time, and I know exactly how this plane works.” Then you’ve got Bernie, who says, “Everyone should get a ride right to their house with this plane.” “Well, how are you going to do that?” “I just think we should. It’s only fair that everyone gets to use the plane equally.” And then Trump says, “I’m going to fly so well. You’re not going to believe how good I’m going to fly this plane, and by the way, Hillary never flew a plane in her life.” “She did, and we have pictures.” “No, she never did it.”

The first ever `in theory’ endorsements

So you are a San Francisco Democratic primary voter, a reader of “in theory,” and you do not like to think for yourself? You are in luck, because, for the first time ever, we are doing endorsements:

Bernie Sanders for President of the United States

Kamala Harris for United States Senator

Nancy Pelosi for United States Representative

Scott Wiener for California State Senator

David Chiu for California State Assemblyman

Victor Hwang for Superior Court Judge

Ellenberg’s announcement of a solution to the cap-set problem

Jordan Ellenberg has just announced a resolution of the “cap problem” using techniques of Croot, Lev and Pach, in a self-contained three-page paper. This is a quite unexpected development for a long-standing open problem in the core of additive combinatorics.

Perhaps the right starting point for this story is 1936, when Erdos and Turan conjectured that, for every {k}, if {A} is a subset of {\{1,\ldots, N\}} without {k}-term arithmetic progressions, then {|A|= o_k(N)}, or, equivalently, that if {A} is a subset of the integers of positive density, then it must have arbitrarily long arithmetic progressions. Their goal in stating this conjecture was that resolving it would be a stepping stone to proving that the prime numbers have arbitrarily long arithmetic progressions. This vision came true several decades later. Szemeredi proved the conjecture in 1975, and Green and Tao proved that the primes contain arbitrarily long arithmetic progressions in 2004, with Szemeredi’s theorem being a key ingredient in their proof.

Rewinding a bit, the first progress on the Erdos-Turan conjecture came from Roth, who proved the {k=3} case in 1953. Roth’s proof establishes that if {A \subseteq \{ 1,\ldots, N\}} does not have length-3 arithmetic progressions, then {|A|} is at most, roughly, {N/\log\log N}. Erdos also conjectured that the bound should be {o(N/\log N)}; if this were true, it would imply that the primes have infinitely many length-3 arithmetic progressions simply because of their density.

Roth’s proof uses Fourier analysis, and Meshulam, in 1995, noted that the proof becomes much cleaner, and leads to better bounds, if one looks at the analogous problem in {{\mathbb F}_p^n}, where {{\mathbb F}_p} is a finite field (of characteristic different from 2). In this case, the question is how big {A\subseteq {\mathbb F}_p^n} can be if it does not have three points on a line. An adaptation of Roth’s techniques gives an upper bound of the order of {p^n/n}, which, for constant {p}, is of the order of {N/\log N} if {N} is the size of the universe of which {A} is a subset.

Bourgain introduced a technique to work on {{\mathbb Z}/N{\mathbb Z}} “as if” it were a vector space over a finite field, and proved upper bounds of the order of {N/\sqrt {\log N}} and then {N/(\log N)^{3/4}} on the size of a subset of {\{ 1,\ldots , N\}} without length-3 arithmetic progressions. The latest result in this line is by Sanders, who proved a bound of {(N \cdot poly\log\log N)/\log N}, very close to Erdos’s stronger conjecture.

How far can these results be pushed? A construction of Behrend’s shows that there is a set {A\subseteq \{ 1,\ldots, N\}} with no length-3 arithmetic progression and size roughly {N/2^{\sqrt {\log N}}}. The construction is simple (it is a discretization of a sphere in {\sqrt {\log N}} dimensions) and it has some other unexpected applications. This means that the right bound in Roth’s theorem is of the form {N^{1-o(1)}} and that the “only” question is what the {N^{-o(1)}} term is.

In the finite vector space case, there is no analog of Behrend’s construction, and so the size of, say, the largest subset of {{\mathbb F}_3^n} without three points on a line was completely open, with an upper bound of the order of {3^n/n} and lower bounds of the order of {c^n} for some constant {c<3}. The cap problem was the question of whether the right bound is of the form {3^{(1-o(1))n}} or not.

Two weeks ago, Croot, Lev and Pach proved that if {A} is a subset of {({\mathbb Z}/4{\mathbb Z})^n} without length-3 arithmetic progressions, then {|A|} is at most of the order of {4^{.926 \cdot n}}. This was a strong indication that the right bound in the cap problem should be sub-exponential.

This was indeed established a couple of days ago by Ellenberg, who proved that an upper bound of the form {(2.756)^n} holds in {{\mathbb F}_3^n}. The proof is not specific to {{\mathbb F}_3} and generalizes to all finite fields.

Both proofs use the polynomial method. Roughly speaking, the method is to associate a polynomial to a set of interest (for example, by finding a non-zero low-degree polynomial that is zero for all points in the set), and then to proceed with the use of simple properties of polynomials (such as the fact that the space of polynomials of a certain degree has bounded dimension, or that the set of zeroes of a non-zero univariate polynomial has size at most its degree) applied either to the polynomial that we constructed or to the terms of its factorization.

Let {P_d} be the vector space of {n}-variate polynomials over {{\mathbb F}_3} of total degree at most {d} that are cube-free (that is, such that every variable occurs with degree 0, 1, or 2 in each monomial), and let {m_d} be its dimension.

If {A\subseteq {\mathbb F}_3^n} is a set such that there are no distinct {a,b,c} such that {a+b+c=0} (a different property from being on a line, but the draft claims that the same argument works for the property of not having three points on a line as well), then Ellenberg shows that

\displaystyle  m_d - 3^n + |A| \leq 3 m_{d/2}

then the bound follows, choosing {d = 4n/3}, from computing that {m_{4n/3} \geq 3^n - c^n} and {m_{2n/3} \leq c^n} for {c \approx 2.756\cdots}.
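One can sanity-check where the constant comes from: {m_d} is the sum of the first {d+1} coefficients of {(1+x+x^2)^n}, and a Chernoff-style bound gives the limiting growth rate {\min_{0<t<1} (1+t+t^2)/t^{2/3} \approx 2.755} for the number of cube-free monomials of degree at most {2n/3}. The short computation below is my own back-of-the-envelope check, not code from the paper.

```python
from math import sqrt

def cubefree_monomial_counts(n):
    """Coefficients of (1+x+x^2)^n: entry k counts the cube-free monomials
    in n variables of total degree exactly k."""
    coeffs = [1]
    for _ in range(n):
        new = [0] * (len(coeffs) + 2)
        for i, c in enumerate(coeffs):
            new[i] += c; new[i + 1] += c; new[i + 2] += c
        coeffs = new
    return coeffs

n = 300
coeffs = cubefree_monomial_counts(n)
m_two_thirds = sum(coeffs[: 2 * n // 3 + 1])   # m_{2n/3}
rate = m_two_thirds ** (1.0 / n)               # empirical exponential growth rate

# The limit of the rate is min over 0 < t < 1 of (1+t+t^2)/t^(2/3),
# attained at t = (sqrt(33)-1)/8, giving c ~ 2.7551.
t = (sqrt(33) - 1) / 8
c = (1 + t + t * t) / t ** (2 / 3)
```

For finite {n} the empirical rate sits slightly below the limit {c}, because of polynomial correction factors; the Chernoff bound {m_{2n/3} \leq c^n} holds exactly for every {n}.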

The finite field Kakeya problem is another example of a problem that had resisted attacks from powerful Fourier-analytic techniques, and was solved by Zeev Dvir with a relatively simple application of the polynomial method. One may hope that the method has not yet exhausted its applicability.

Gil Kalai has posted about further consequences of the results of Croot, Lev, Pach and Ellenberg.