You are currently browsing the monthly archive for October 2007.

The Green-Tao theorem states that the primes contain arbitrarily long arithmetic progressions; its proof can be, somewhat inaccurately, broken up into the following two steps:

Thm 1: Every constant-density subset of a pseudorandom set of integers contains arbitrarily long arithmetic progressions.

Thm 2: The primes have constant density inside a pseudorandom set.

Of those, the main contribution of the paper is the first theorem, a “relative” version of Szemeredi’s theorem. In turn, its proof can be (even more inaccurately) broken up as

Thm 1.1: For every constant density subset D of a pseudorandom set there is a “model” set M that has constant density among the integers and is indistinguishable from D.

Thm 1.2 (Szemeredi): Every constant density subset of the integers contains arbitrarily long arithmetic progressions, and many of them.

Thm 1.3: A set with many long arithmetic progressions cannot be indistinguishable from a set with none.

Following this scheme is, of course, easier said than done. One wants to work with a definition of pseudorandomness that is weak enough that Thm 2 is provable, but strong enough that the notion of indistinguishability implied by Thm 1.1 is in turn strong enough that Thm 1.3 holds. From now on I will focus on Thm 1.1, which is a key step in the proof, though not the hardest.

Recently, Tao and Ziegler proved that the primes contain arbitrarily long “polynomial progressions” (progressions where the increments are given by polynomials rather than linear functions, as in the case of arithmetic progressions). Their paper contains a very clean formulation of Thm 1.1, which I will now (accurately, this time) describe. (It is Theorem 7.1 in the paper. The language I use below is very different but equivalent.)

We fix a finite universe $\Sigma$; this could be $\{0,1\}^n$ in complexity-theoretic applications or $\mathbb{Z}_N$ in number-theoretic applications. Instead of working with subsets of $\Sigma$, it will be more convenient to refer to probability distributions over $\Sigma$; if $S \subseteq \Sigma$ is a set, then $U_S$ is the uniform distribution over $S$. We also fix a family $F$ of “easy” functions $f: \Sigma \rightarrow [0,1]$. In complexity-theoretic applications, this could be the set of boolean functions computed by circuits of bounded size. We think of two distributions $A,B$ as being $\epsilon$-indistinguishable according to $F$ if for every function $f \in F$ we have

$ | E_{x \sim A} f(x) - E_{x \sim B} f(x) | \leq \epsilon $

and we think of a distribution $A$ as $\epsilon$-pseudorandom according to $F$ if it is $\epsilon$-indistinguishable from the uniform distribution $U_\Sigma$. (This is all standard in cryptography and complexity theory.)
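As a toy illustration of these (standard) definitions, here is how one might compute the advantage of a single test at distinguishing two distributions over a tiny universe. All names and the example distributions are invented for this sketch:

```python
# Toy sketch of epsilon-indistinguishability (example data invented):
# the advantage of a test f at telling apart distributions A and B,
# each given as a dict mapping elements of the universe to probabilities.

def advantage(f, A, B):
    ea = sum(p * f(x) for x, p in A.items())  # E_{x ~ A} f(x)
    eb = sum(p * f(x) for x, p in B.items())  # E_{x ~ B} f(x)
    return abs(ea - eb)

universe = range(8)
uniform = {x: 1 / 8 for x in universe}                           # U_Sigma
biased = {x: 2 / 12 if x % 2 == 0 else 1 / 12 for x in universe}

parity = lambda x: x % 2                 # one "easy" boolean test
adv = advantage(parity, biased, uniform)  # equals |1/3 - 1/2| = 1/6
```

A family $F$ of tests would just be a set of such functions, and a distribution is $\epsilon$-pseudorandom when no test in the family achieves advantage more than $\epsilon$ against the uniform distribution.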

Now let’s define the natural analog of “dense subset” for distributions. We say that a distribution $A$ is $\delta$-dense in $B$ if for every $x \in \Sigma$ we have

$ Pr_A [x] \leq \frac{1}{\delta} Pr_B [x] $

Note that if $A = U_S$ and $B = U_T$ for some sets $S,T$, then $A$ is $\delta$-dense in $B$ if and only if $S \subseteq T$ and $|S| \geq \delta |T|$.
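Continuing the toy illustration (all names invented), the density condition is a pointwise check on the two distributions, and the set example above falls out immediately:

```python
# Toy check of the definition: A is delta-dense in B if
# Pr_A[x] <= (1/delta) * Pr_B[x] for every x.  (Names invented.)

def is_dense(A, B, delta):
    return all(pa <= B.get(x, 0) / delta for x, pa in A.items())

T = range(10)
S = range(5)                   # S is a subset of T with |S| = |T|/2
U_T = {x: 1 / 10 for x in T}   # uniform distribution on T
U_S = {x: 1 / 5 for x in S}    # uniform distribution on S

dense_half = is_dense(U_S, U_T, 0.5)       # True: |S| >= 0.5 * |T|
dense_more = is_dense(U_S, U_T, 0.6)       # False: |S| < 0.6 * |T|
```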

So we want to prove the following:

Theorem (Green, Tao, Ziegler)

Fix a family $F$ of tests and an $\epsilon > 0$; then there is a “slightly larger” family $F'$ and an $\epsilon' > 0$ such that if $R$ is an $\epsilon'$-pseudorandom distribution according to $F'$ and $D$ is $\delta$-dense in $R$, then there is a distribution $M$ that is $\delta$-dense in $U_\Sigma$ and that is $\epsilon$-indistinguishable from $D$ according to $F$.

[The reader may want to go back to Thm 1.1 and check that this is a meaningful formalization of it, up to working with arbitrary distributions rather than sets. This is in fact the “inaccuracy” that I referred to above.]

In a complexity-theoretic setting, we would like to say that if $F$ is defined as all functions computable by circuits of size at most $s$, then $\epsilon'$ should be $poly(\epsilon,\delta)$ and $F'$ should contain only functions computable by circuits of size $poly(s, 1/\epsilon, 1/\delta)$. Unfortunately, if one follows the proof and makes some simplifications assuming $F$ contains only boolean functions, one sees that $F'$ contains functions of the form $g(x) = h(f_1(x),\ldots,f_k(x))$, where $f_i \in F$, $k = poly(1/\epsilon, 1/\delta)$, and $h$ could be arbitrary and, in general, have circuit complexity exponential in $1/\epsilon$ and $1/\delta$. Alternatively, one may approximate $h$ as a low-degree polynomial and take the “most distinguishing monomial.” This will give a version of the Theorem (which leads to the actual statement of Thm 7.1 in the Tao-Ziegler paper) where $F'$ contains only functions of the form $f_1(x) \cdots f_k(x)$, but then $\epsilon'$ will be exponentially small in $1/\epsilon$ and $1/\delta$. This means that one cannot apply the theorem to “cryptographically strong” notions of pseudorandomness and indistinguishability, and in general to any setting where $1/\epsilon$ and $1/\delta$ are super-logarithmic (not to mention super-linear).

This seems like an unavoidable consequence of the “finitary ergodic theoretic” technique of iterative partitioning and energy increment used in the proof, which always yields at least a singly exponential complexity.

Omer Reingold, Madhur Tulsiani, Salil Vadhan and I have recently come up with a different proof where both $\epsilon'$ and the complexity of $F'$ are polynomial. This gives, for example, a new characterization of the notion of pseudoentropy. Our proof is quite in the spirit of Nisan’s proof of Impagliazzo’s hard-core set theorem, and it is relatively simple. We can also deduce a version of the theorem where, as in Green-Tao-Ziegler, $F'$ contains only bounded products of functions in $F$. In doing so, however, we too incur an exponential loss, but the proof is somewhat simpler and demonstrates the applicability of complexity-theoretic techniques in arithmetic combinatorics.

Since we can use (ideas from) a proof of the hard-core set theorem to prove the Green-Tao-Ziegler result, one may wonder whether one can use the “finitary ergodic theory” techniques of iterative partitioning and energy increment to prove the hard-core set theorem. Indeed, we do this too. In our proof, the reduction loses a factor that is exponential in certain parameters (while other proofs are polynomial), but one also gets a more “constructive” result.

If readers can stomach it, a forthcoming post will describe the complexity-theory-style proof of the Green-Tao-Ziegler result as well as the ergodic-theory-style proof of the Impagliazzo hard core set theorem.

If memory serves me well, I have attended all STOC and FOCS conferences since STOC 1997 in El Paso, except STOC 2002 in Montreal (for visa problems), which should add up to 21 conferences. In most of those conferences I have also attended the “business meeting.” This is a time when attendees convene after dinner, have beer, the local organizers talk about their local organization, the program committee chair talks about how they put the program together (“papers were submitted, then we reviewed them, finally we accepted some of those. Let me show you twenty slides of meaningless statistics about said papers”), organizers of future conferences talk about their ongoing organizing, David Johnson raises issues to be discussed, and so on. The SODA drinking game gives a good idea of what goes on.

A fixture of business meetings is also a presentation of the state of National Science Foundation (NSF) funding for theory in the US. In the first several conferences I attended, the NSF program director for theory would take the podium, show a series of incomprehensible slides, and go something like “there is no money; you should submit a lot of grant applications; I will reject all applications because there is no money, but low acceptance rates could bring us more money in future years; you should apply to non-theory programs, because there is no money in theory, but don’t make it clear you are doing theory, otherwise they’ll send your proposal to me, and I have no money. In conclusion, I have no money and we are all doomed.”

Things hit rock bottom around 2004, when several issues (DARPA abandoning basic research, the end of the NSF ITR program, a general tightening of the NSF budget at a time of increased student tuition, a change in NSF accounting system requiring multi-year grants to be funded entirely from the budget of the year of the award, ….) conspired to create a disastrous funding season. At that point several people in the community, with Sanjeev Arora playing a leading role, realized that something had to be done to turn things around. A SIGACT committee was formed to understand what had gone wrong and how to repair it.

I don’t know if it is an accurate way of putting it, but my understanding is that our community had done a very bad job in publicizing its results to a broader audience. Indeed I remember, in my job interviews, a conversation that went like “What do you do?” “Complexity theory” “Structural complexity or descriptive complexity?” “??”. (I also got a “What complexity classes do you study?”) And I understand that whenever people from the SIGACT committee went to talk to NSF higher-ups about theory, everybody was interested and the attitude was almost “why haven’t you told us about this stuff before?”

For various reasons, it is easier at NSF to put funding into a new initiative than to increase funding of an existing one, and an idea that came up early on was to fund an initiative on “theory as a lens for the sciences,” to explore work in economics, quantum mechanics, biology, statistical physics, etc., where the conceptual tools of theoretical computer science are useful to even phrase the right questions, as well as work towards their solution. This idea took on a life of its own, grew much more broad than initially envisioned (so that the lens thing is now a small part of it), received an appropriately cringe-inducing name, and is now the Cyber-Enabled Discovery and Innovation (CDI) program, which will soon be accepting its first round of submissions.

Thanks to the work that Bill Steiger put in as program director in the last year and a half, and to the efforts of the SIGACT committee, the outlook for theory funding is now much more optimistic.

At the FOCS 2007 business meeting last Monday, Bill talked about the increase in funding that happened under his watch, Sanjeev Arora talked about the work of the committee and the new funding opportunities (of which CDI is only one). In addition, as happened a few times in the last couple of years, Mike Foster from NSF gave his own, generally theory-friendly, presentation. Mike is a mid-level director at NSF (one or two levels above the theory program), and the regular presence of people in his position at STOC and FOCS is, I think, without precedent before 2005. (Or at least between 1997 and 2004.)

The NSF is relatively lean, efficient and competent for being a federal bureaucracy, but it is still a federal bureaucracy, with its quirks.

A few years ago, the NSF introduced a much-loathed requirement to explicitly state the “broader impact” of any proposed grant. I actually don’t mind this requirement: it does not ask one to talk about “applications,” but rather about all the important research work that goes beyond establishing technical results. Disseminating results, for example, by writing notes, expository work, and surveys and making them available; bringing research-level material to freshmen in a new format; doing outreach; doing something to increase the representation of women and minorities; and so on.

As reported by Sanjeev Arora in his presentation, however, the NSF now also requires applicants to state how the research in a given proposal is “transformative.” (I just got a spelling warning after typing it.) I am not sure this makes any sense. The person sitting next to me commented, “Oh no, the goal of my research is always to maintain the status quo.”

FOCS 2007 started yesterday in Providence with a series of tutorials.

Terry Tao gave a talk similar to the one he gave in Madrid, discussing the duality between pseudorandomness and efficiency, which gives a unified view of techniques coming from analysis, combinatorics and ergodic theory.

In typical such results, one has a set $F$ of “simple” functions (for example linear, or low-degree polynomials, or, in conceivable complexity-theoretic applications, functions of low circuit complexity) and one wants to write an arbitrary function $g$ as

$ g(x) = g_{pr} (x) + g_{str} (x) + g_{err} (x) $

where $g_{pr}$ is *pseudorandom* with respect to the “distinguishers” in $F$, $g_{str}$ is a “simple combination” of functions from $F$, and $g_{err}$ accounts for a possible small approximation error. There are a number of ways to instantiate this general template, as can be seen in the accompanying notes, and it is nice to see how even the Szemeredi regularity lemma fits into this template. (The “functions” are adjacency matrices of graphs, and the “efficient” functions are complete bipartite subgraphs.)
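Here is a minimal sketch of one instantiation of the template, not taken from the talk: on the boolean cube, with $F$ the parity characters, the structured part collects the characters that $g$ correlates with above a threshold, and the remainder is pseudorandom with respect to $F$ by construction. The threshold and all names are invented, and in this exact version $g_{err} = 0$:

```python
from itertools import product

# Sketch (names and threshold invented): decompose g = g_str + g_pr on
# {0,1}^n, with F = the parity characters chi_S(x) = (-1)^{<S,x>}.

n = 4
points = list(product([0, 1], repeat=n))  # also used to index characters

def chi(S, x):
    """Parity character indexed by S: (-1)^{<S, x>}."""
    return -1 if sum(s * v for s, v in zip(S, x)) % 2 else 1

def fourier(g):
    """All Fourier coefficients of g, keyed by the character index S."""
    N = len(points)
    return {S: sum(g[x] * chi(S, x) for x in points) / N for S in points}

def decompose(g, t=0.2):
    co = fourier(g)
    # g_str: the "simple combination" of characters g correlates with
    g_str = {x: sum(c * chi(S, x) for S, c in co.items() if abs(c) >= t)
             for x in points}
    # g_pr: the rest; every character has correlation < t with it,
    # i.e. g_pr is pseudorandom with respect to F
    g_pr = {x: g[x] - g_str[x] for x in points}
    return g_str, g_pr

g = {x: 1 if sum(x) >= 2 else -1 for x in points}  # a sample function
g_str, g_pr = decompose(g)
```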

Dan Boneh spoke on pairing-based cryptography, an idea that has grown into a whole, rich area, with specialized conferences and, according to Google Scholar, 1,200+ papers published so far. In this setting one has a group $G$ (for example, points on an elliptic curve) such that there is a mapping $e: G \times G \rightarrow G_T$ that takes pairs of elements of $G$ into an element of another group $G_T$ satisfying a bilinearity condition. (Such a mapping is a “pairing,” hence the name of the area.) Although such a mapping can lead to attacks on the discrete log problem in $G$, if the mapping is chosen carefully one may still assume intractability of discrete log in $G$, and the pairing can be very useful in constructing cryptographic protocols and proving their security. In particular, one can get “identity-based encryption,” a type of public-key cryptography where a user’s public key can be her own name (or email address, or any deterministically chosen name), which in turn can be used as a primitive in other applications.
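To see what bilinearity buys, here is a deliberately insecure toy “pairing” (the modulus and secrets are arbitrary numbers made up for this sketch; discrete log in this group is trivial, so it illustrates only the algebra), used in a Joux-style one-round tripartite key agreement:

```python
# Deliberately insecure toy pairing (all numbers invented): take
# G = G_T = Z_n under addition, with e(a, b) = a*b mod n.  Discrete log
# in (Z_n, +) is trivial, so this only illustrates the bilinear algebra.
n = 1009                     # a prime modulus, chosen arbitrarily
P = 1                        # generator of G

def e(a, b):
    """Bilinear map: e(x*a, b) = x*e(a, b) = e(a, x*b) (mod n)."""
    return (a * b) % n

# Joux-style one-round tripartite key agreement: each party publishes a
# multiple of P, then pairs the other two public values with its secret.
a, b, c = 123, 456, 789                      # the three secrets
A, B, C = a * P % n, b * P % n, c * P % n    # the three public values

k_alice = a * e(B, C) % n
k_bob = b * e(A, C) % n
k_carol = c * e(A, B) % n
# all three keys equal a*b*c mod n, by bilinearity
```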

Dan Spielman spoke on spectral graph theory, focusing on results and problems that aren’t quite studied enough by theoreticians. He showed some remarkable examples of graph drawings obtained by simply plotting each vertex $i$ at the point $(v(i),w(i))$, where $v$ and $w$ are the eigenvectors corresponding to the second and third smallest eigenvalues of the Laplacian of the graph. The sparse cut promised by Cheeger’s inequality is, in such a drawing, just the cut given by a vertical line across the drawing, and there are nice algebraic explanations for why the drawing looks intuitively “nice” for many graphs but not for all. Spectral partitioning has been very successful for image segmentation problems, but it has some drawbacks and it would be nice to find theoretically justified algorithms that would do better.
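As a rough sketch of the sign cut behind spectral partitioning (the graph, constants, and names here are invented; in practice one would use a library eigensolver), the following pure-Python power iteration finds the second Laplacian eigenvector of a small “barbell” graph and reads off the sparse cut from its signs:

```python
import random

# Toy sketch (graph and constants invented): the sign cut of the second
# Laplacian eigenvector of a "barbell": two triangles joined by one edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
N = 6
deg = [0] * N
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

def lap_mul(x):
    """Multiply a vector by the graph Laplacian L = D - A."""
    y = [deg[i] * x[i] for i in range(N)]
    for u, v in edges:
        y[u] -= x[v]
        y[v] -= x[u]
    return y

def fiedler(iters=2000):
    """Power iteration on c*I - L (positive definite: c > lambda_max),
    deflating the all-ones eigenvector each step, converges to the
    eigenvector of the second-smallest Laplacian eigenvalue."""
    random.seed(0)
    x = [random.random() for _ in range(N)]
    c = 2 * max(deg) + 1
    for _ in range(iters):
        m = sum(x) / N
        x = [v - m for v in x]                 # project out all-ones
        y = lap_mul(x)
        x = [c * v - w for v, w in zip(x, y)]  # apply c*I - L
        s = max(abs(v) for v in x) or 1.0
        x = [v / s for v in x]                 # renormalize
    return x

f = fiedler()
cut = sorted(i for i in range(N) if f[i] > 0)  # one side of the sparse cut
```

Sorting vertices by this eigenvector and sweeping over prefix cuts (here, simply cutting at zero) recovers one of the two triangles, which is exactly the sparse cut the drawing makes visible.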

Typically, I don’t go to an Italian restaurant in the US unless I have been there before and liked it, a rule that runs into certain circularity problems. I was happy that yesterday I made an exception to go to Al Forno, which proved to be truly exceptional.

Or at least she cannot sue a rival campaign if it makes such a claim.

The Washington State Supreme Court has found that politicians have a constitutional right to lie.

(It is still illegal to *libel*, but libel is defined extremely narrowly in the US. It is not enough to make a false statement of fact; it is not even enough to do so with malice; the statement must, in addition, cause harm to the reputation of the victim.)
