The Triangle Removal Lemma

[At the end of a survey paper on additive combinatorics and computational complexity, which is to appear in SIGACT News, I list three major open questions in additive combinatorics that might be amenable to a “computer science proof.” They have all been studied extremely well, by very smart people, for several years, so they are all very long shots. I don’t recommend that anybody start working on them, but I think it is good that as many people as possible know about these questions, because when the right technique comes along its applicability can be recognized more quickly.]

The first question is to improve the Triangle Removal Lemma. I have talked here about what the triangle removal lemma is, how one can prove it from the Szemerédi Regularity Lemma, and how it implies the length-3 case of Szemerédi’s Theorem.

As a short recap, the Triangle Removal Lemma states that if {G} is an {n}-vertex graph with {o(n^3)} triangles, then there is a set of {o(n^2)} edges such that the removal of those edges eliminates all the triangles. Equivalently, it says that if a graph has {\Omega(n^2)} triangles which are all pair-wise edge-disjoint, then there must be {\Omega(n^3)} triangles overall.

The connection with Szemerédi’s Theorem is that if {H} is an abelian group with {n} elements, and {A} is a subset of {H} with no length-3 arithmetic progressions (i.e., {A} is such that there are no three distinct elements {a,b,c} in {A} such that {b-a = c-b}), then we can construct a graph {G=(V,E)} that has {3n} vertices, {|A| \cdot n} pair-wise edge-disjoint triangles, and no other triangles. This contradicts the triangle removal lemma if {|A| = \Omega(n)}, and so we must have {|A| = o(n)}.

This is great, until we start looking at the relationships between the constants hidden by the {o(\cdot )} notation. Quantitatively, the Triangle Removal Lemma states that for every {\epsilon} there is a {\delta = \delta(\epsilon)} such that if a graph has at least {\epsilon \cdot n^2} pair-wise edge-disjoint triangles, then it has at least {\delta \cdot n^3} triangles. The only known proof, however, has {\delta} incredibly small: {1/\delta} grows like a tower of exponentials of height polynomial in {1/\epsilon}. The proof uses the Szemerédi Regularity Lemma, and the Regularity Lemma is known to require such very bad dependencies.

63 years ago, Behrend showed that {{\mathbb Z}/N{\mathbb Z}}, {N} prime, has a subset {A} that contains no length-3 arithmetic progression and whose size is {N/2^{O(\sqrt {\log N})}}. (Last year, Elkin gave the first improvement in 62 years to Behrend’s bound, but the improvement is only a multiplicative polylog {N} factor.) Combined with the graph construction mentioned above, this gives a graph with {3N} vertices, {N^2/2^{O(\sqrt {\log N})}} edge-disjoint triangles, and no other triangles. Thus, the graph has {\leq \delta N^3} triangles where {\delta < 1/N}, but one needs to remove {> \epsilon N^2} edges to make it triangle-free, where {\epsilon > 2^{-O(\sqrt{\log N})}}. This shows that, in the Triangle Removal Lemma, {1/\delta} must grow super-polynomially in {1/\epsilon}: it must be at least {(1/\epsilon)^{\Omega(\log 1/\epsilon)}}.
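Behrend’s construction is simple enough to code up: write integers in base 2d+1 using digits at most d, so that digit vectors add without carries, and keep the integers whose digit vectors lie on the most popular sphere; three points on a sphere cannot satisfy x+y=2z unless they coincide. Here is a toy implementation for small parameters (the function names are mine):

```python
import itertools

def behrend_set(n, d):
    """A 3-AP-free subset of {0, ..., (2d+1)^n - 1}: integers whose
    base-(2d+1) digit vectors (digits in {0,...,d}) lie on the most
    popular sphere. Since digits stay below (2d+1)/2, x + y = 2z has
    no carries and forces a vector identity, which is impossible on a
    sphere unless x = y = z."""
    base = 2 * d + 1
    spheres = {}
    for digits in itertools.product(range(d + 1), repeat=n):
        radius = sum(x * x for x in digits)
        value = sum(x * base**i for i, x in enumerate(digits))
        spheres.setdefault(radius, []).append(value)
    return max(spheres.values(), key=len)

def has_3ap(s):
    """Check for a non-trivial 3-term arithmetic progression a, b, 2b - a."""
    elems = set(s)
    return any(2 * b - a in elems for a in s for b in s if a != b)

A = behrend_set(4, 2)
print(len(A), has_3ap(A))  # a decent-sized set, and False
```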

The question is to narrow the gap between the tower-of-exponentials relationship between {1/\delta} and {1/\epsilon} coming from the proof via the Szemerédi Regularity Lemma and the mildly super-polynomial lower bound coming from the above argument.


Dense Subsets of Pseudorandom Sets

The Green-Tao theorem states that the primes contain arbitrarily long arithmetic progressions; its proof can be, somewhat inaccurately, broken up into the following two steps:

Thm1: Every constant-density subset of a pseudorandom set of integers contains arbitrarily long arithmetic progressions.

Thm2: The primes have constant density inside a pseudorandom set.

Of those, the main contribution of the paper is the first theorem, a “relative” version of Szemeredi’s theorem. In turn, its proof can be (even more inaccurately) broken up as

Thm 1.1: For every constant density subset D of a pseudorandom set there is a “model” set M that has constant density among the integers and is indistinguishable from D.

Thm 1.2 (Szemeredi) Every constant density subset of the integers contains arbitrarily long arithmetic progressions, and many of them.

Thm 1.3 A set with many long arithmetic progressions cannot be indistinguishable from a set with none.

Following this scheme is, of course, easier said than done. One wants to work with a definition of pseudorandomness that is weak enough that (2) is provable, but strong enough that the notion of indistinguishability implied by (1.1) is in turn strong enough that (1.3) holds. From now on I will focus on (1.1), which is a key step in the proof, though not the hardest.

Recently, Tao and Ziegler proved that the primes contain arbitrarily long “polynomial progressions” (progressions where the increments are given by polynomials rather than linear functions, as in the case of arithmetic progressions). Their paper contains a very clean formulation of (1.1), which I will now (accurately, this time) describe. (It is Theorem 7.1 in the paper. The language I use below is very different but equivalent.)

We fix a finite universe \Sigma; this could be \{ 0,1\}^n in complexity-theoretic applications or {\mathbb Z}/N{\mathbb Z} in number-theoretic applications. Instead of working with subsets of \Sigma, it will be more convenient to refer to probability distributions over \Sigma; if S is a set, then U_S is the uniform distribution over S. We also fix a family F of “easy” functions f: \Sigma \rightarrow [0,1]. In a complexity-theoretic application, this could be the set of boolean functions computed by circuits of bounded size. We think of two distributions X,Y as being \epsilon-indistinguishable according to F if for every function f\in F we have

|E[f(X)] - E[f(Y)]| \leq \epsilon

and we think of a distribution as pseudorandom if it is indistinguishable from the uniform distribution U_\Sigma. (This is all standard in cryptography and complexity theory.)

Now let’s define the natural analog of “dense subset” for distributions. We say that a distribution A is \delta-dense in B if for every x\in \Sigma we have

Pr [ B=x] \geq \delta Pr [A=x]

Note that if B=U_T and A=U_S for some sets S,T, then A is \delta-dense in B if and only if S\subseteq T and |S| \geq \delta |T|.
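In code, both definitions are one-liners. Here is a minimal sketch over a toy universe; the representation of distributions as dictionaries, and the toy test family, are my own choices for illustration:

```python
Sigma = list(range(16))  # a toy universe standing in for {0,1}^n

def expectation(f, P):
    """E[f(X)] for a distribution P given as a dict from Sigma to probabilities."""
    return sum(P.get(x, 0.0) * f(x) for x in Sigma)

def advantage(X, Y, F):
    """Largest distinguishing advantage of the test family F between X and Y;
    X and Y are epsilon-indistinguishable iff this is at most epsilon."""
    return max(abs(expectation(f, X) - expectation(f, Y)) for f in F)

def is_delta_dense(A, B, delta):
    """Check Pr[B = x] >= delta * Pr[A = x] for every x."""
    return all(B.get(x, 0.0) >= delta * A.get(x, 0.0) for x in Sigma)

# U_S for S = {0,...,7} is 1/2-dense in the uniform distribution on Sigma:
US = {x: 1 / 8 for x in range(8)}
U = {x: 1 / 16 for x in Sigma}
F = [lambda x, t=t: 1.0 if x < t else 0.0 for t in Sigma]  # threshold tests
print(is_delta_dense(US, U, 0.5), advantage(US, U, F))  # True 0.5
```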

So we want to prove the following:

Theorem (Green, Tao, Ziegler)
Fix a family F of tests and an \epsilon>0; then there is a “slightly larger” family F' and an \epsilon'>0 such that if R is an \epsilon'-pseudorandom distribution according to F' and D is \delta-dense in R, then there is a distribution M that is \delta-dense in U_\Sigma and that is \epsilon-indistinguishable from D according to F.

[The reader may want to go back to (1.1) and check that this is a meaningful formalization of it, up to working with arbitrary distributions rather than sets. This is in fact the “inaccuracy” that I referred to above.]

In a complexity-theoretic setting, we would like to say that if F is defined as all functions computable by circuits of size at most s, then \epsilon' should be poly(\epsilon,\delta) and F' should contain only functions computable by circuits of size s\cdot poly(1/\epsilon,1/\delta). Unfortunately, if one follows the proof and makes some simplifications assuming F contains only boolean functions, one sees that F' contains functions of the form g(x) = h(f_1(x),\ldots,f_k(x)), where f_i \in F, k = poly(1/\epsilon,1/\delta), and h could be arbitrary and, in general, have circuit complexity exponential in 1/\epsilon and 1/\delta. Alternatively, one may approximate h() by a low-degree polynomial and take the “most distinguishing monomial.” This will give a version of the Theorem (which leads to the actual statement of Thm 7.1 in the Tao-Ziegler paper) where F' contains only functions of the form \Pi_{i=1}^k f_i(x), but then \epsilon' will be exponentially small in 1/\epsilon and 1/\delta. This means that one cannot apply the theorem to “cryptographically strong” notions of pseudorandomness and indistinguishability, and in general to any setting where 1/\epsilon and 1/\delta are super-logarithmic (not to mention super-linear).

This seems like an unavoidable consequence of the “finitary ergodic theoretic” technique of iterative partitioning and energy increment used in the proof, which always yields at least a singly exponential complexity.

Omer Reingold, Madhur Tulsiani, Salil Vadhan and I have recently come up with a different proof where both \epsilon' and the complexity of F' are polynomial. This gives, for example, a new characterization of the notion of pseudoentropy. Our proof is quite in the spirit of Nisan’s proof of Impagliazzo’s hard-core set theorem, and it is relatively simple. We can also deduce a version of the theorem where, as in Green-Tao-Ziegler, F' contains only bounded products of functions in F. In doing so, however, we too incur an exponential loss, but the proof is somewhat simpler and demonstrates the applicability of complexity-theoretic techniques in arithmetic combinatorics.

Since we can use (ideas from) a proof of the hard core set theorem to prove the Green-Tao-Ziegler result, one may wonder whether one can use the “finitary ergodic theory” techniques of iterative partitioning and energy increment to prove the hard-core set theorem. Indeed, we do this too. In our proof, the reduction loses a factor that is exponential in certain parameters (while other proofs are polynomial), but one also gets a more “constructive” result.

If readers can stomach it, a forthcoming post will describe the complexity-theory-style proof of the Green-Tao-Ziegler result as well as the ergodic-theory-style proof of the Impagliazzo hard core set theorem.

The unreasonable effectiveness of additive combinatorics in computer science

As I have written several times on these pages, techniques from additive combinatorics seem to be very well suited to attack problems in computer science, and a good number of applications have already been found. For example, “sum-product theorems” originally developed in a combinatorial approach to the Kakeya problem have been extremely valuable in recent constructions of randomness extractors. The central theorem of additive combinatorics, Szemeredi’s theorem, now has four quite different proofs, one based on graph theory and Ramsey theory, one based on analytical methods, one based on ergodic theory and one based on hypergraph theory. The first proof introduced the Szemeredi regularity lemma, which is a fixture of algorithmic work on property testing. The analytical proof of Gowers introduced the notion of Gowers uniformity that, so far, has found application in PCP constructions, communication complexity, and pseudorandomness. There is also work in progress on complexity-theoretic applications of some of the ergodic-theoretic techniques.

Why is it the case that techniques developed to study the presence of arithmetic progressions in certain sets are so useful to study such unrelated notions as sub-linear time algorithms, PCP systems, pseudorandom generators, and multi-party protocols? This remains, in part, a mystery. A unifying theme in the recent advances in additive combinatorics is the notion that every large combinatorial object can be “decomposed” into a “pseudorandom” part and a “small-description” part, and that many questions that we might be interested in are easy to answer, at least approximately, on pseudorandom and on small-description objects. Since computer scientists almost always deal with worst-case scenarios, and are typically comfortable with approximations, it is reasonable that we can take advantage of techniques that reduce the analysis of arbitrary worst cases to the analysis of much simpler scenarios.

Whatever the reason for their effectiveness, it is worthwhile for any theoretical computer scientist to learn more about this fascinating area of math. One of the tutorials in FOCS 2007 will be on additive combinatorics, with a celebrity speaker. More modestly, following Random-Approx 2007, in Princeton, there will be a course on additive combinatorics for (and by) computer scientists. (If you want to go, you have to register by August 1 and reserve the hotel by this weekend.)

Property testing and Szemeredi’s Theorem

After discussing Szemeredi’s Theorem and the analytical approaches of Roth and Gowers, let’s see some ideas and open problems in the combinatorial approach.

The following result is a good starting point.

Triangle Removal Lemma. For every \delta >0 there is a constant \epsilon(\delta) > 0 such that if G is an n-vertex graph with at most \epsilon(\delta)n^3 triangles, then it is possible to make G triangle-free by removing at most \delta n^2 edges.

This result follows easily from the Szemeredi Regularity Lemma, and it is a prototype of several results in the field of property testing.

Indeed, consider the problem of distinguishing triangle-free graphs from graphs that are not even close to being triangle-free. For the purpose of this example, we think of a graph as “close to being triangle-free” if it can be made triangle-free by removing at most \delta n^2 edges, for some small constant \delta > 0. Then the Lemma tells us that in a not-even-close graph we have at least \epsilon(\delta) n^3 triangles, and so a sample of O(1/\epsilon(\delta)) vertices is likely to contain a triangle. So here is an algorithm for the problem: pick at random O(1/\epsilon(\delta)) vertices and see if they induce a triangle (a code sketch follows the list below). We note that:

  • The algorithm satisfies our requirement;
  • The running time is a constant independent of n and dependent only on \delta;
  • This all makes sense only if 1/\epsilon(\delta) grows moderately as a function of 1/\delta.
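Here is the tester in code. I sample independent random triples rather than one batch of vertices, which is the same idea up to constants; `adj` is an assumed edge-query oracle, and the names are mine:

```python
import random

def triangle_tester(adj, n, eps, trials=None):
    """Accepts every triangle-free graph; rejects, with constant probability,
    any n-vertex graph with at least eps * n^3 triangles. A uniformly random
    triple of distinct vertices hits a triangle with probability about
    6 * eps, so O(1/eps) independent triples suffice."""
    trials = trials if trials is not None else int(2 / eps) + 1
    for _ in range(trials):
        u, v, w = random.sample(range(n), 3)
        if adj(u, v) and adj(v, w) and adj(u, w):
            return "reject"  # a triangle is a witness: not triangle-free
    return "accept"
```

Note the one-sided error: a “reject” answer always comes with a witness.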

Let us now see that the Triangle Removal Lemma proves Szemeredi’s Theorem for progressions of length 3. As we discussed earlier, it is enough to prove the Theorem in groups of the type {\mathbf Z}_N, with N prime; we will do better and prove the Theorem in any abelian group. So let G be an abelian group of size N, and let A be a subset of G of size \delta N. Construct a tri-partite graph (X,Y,Z,E) where X,Y,Z are sets of vertices, each of size N, and each a copy of the group G. The set of edges is defined as follows:

  1. for x \in X and y \in Y, (x,y) is an edge if there is an a \in A such that x+a=y
  2. for x \in X and z \in Z, (x,z) is an edge if there is a b\in A such that x + b + b = z
  3. for y \in Y and z \in Z, (y,z) is an edge if there is a c \in A such that y+c=z

Now we notice that if x,y,z is a triangle, then there are a,b,c in A such that (after some cancellations) a+c = b+b, which means that a,b,c is an arithmetic progression of length 3 in the group G. In fact, we see that the number of triangles in the graph is precisely N times the number of triples (a,b,c) in arithmetic progression in A.

Consider now the N \cdot |A| = \delta N^2 triangles corresponding to the “trivial” progressions of the form a,a,a. (These are the triangles x,x+a,x+a+a.) We can see that these triangles are edge-disjoint, and so in order just to remove such triangles we have to remove from the graph at least \delta N^2 edges. So the Triangle Removal Lemma implies that there are at least \epsilon(\delta)N^3 triangles in the graph, and so at least \epsilon(\delta)N^2 triples in arithmetic progression in A. If N > \delta/\epsilon(\delta), then some of those arithmetic progressions must be non-trivial, and we have the length-3 case of Szemeredi’s theorem.
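Here is a brute-force check of the correspondence between triangles and progressions, for odd N (so that z-x determines b, since 2 is invertible); the code and names are mine:

```python
def triangles_vs_aps(A, N):
    """Build the tripartite graph on three copies of Z_N with edges
    (x,y) if y-x in A, (x,z) if z-x in 2A, (y,z) if z-y in A,
    and compare #triangles with N * #(AP triples in A), trivial ones
    included."""
    A = {a % N for a in A}
    twoA = {(2 * a) % N for a in A}
    triangles = sum(1 for x in range(N) for y in range(N) for z in range(N)
                    if (y - x) % N in A
                    and (z - x) % N in twoA
                    and (z - y) % N in A)
    aps = sum(1 for a in A for b in A if (2 * b - a) % N in A)
    return triangles, N * aps

print(triangles_vs_aps({1, 3, 4}, 7))  # (21, 21): only the 3 trivial progressions
```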

We see that both for the “property testing” application and for the Szemeredi Theorem application it is important to have good quantitative bounds on \epsilon(\delta). We know that, in Szemeredi’s theorem, N must be super-polynomial in 1/\delta, and so, in the Triangle Removal Lemma, 1/\epsilon(\delta) must be super-polynomial in 1/\delta.

The known bounds, however, are quite unreasonable: we only know how to bound 1/\epsilon(\delta) by a tower of exponentials whose height is poly(1/\delta). Unfortunately, such unreasonable bounds are unavoidable in any proof of the Triangle Removal Lemma based on Szemeredi’s Regularity Lemma. This is because, as proved by Gowers, such tower-of-exponentials bounds are necessary in the Regularity Lemma.

Can we find a different proof of the Triangle Removal Lemma in which 1/\epsilon(\delta) is only singly or doubly exponential in 1/\delta? That would be wonderful, both for property testing (because the proof would probably apply to other sub-graph homomorphism problems) and for additive combinatorics. Over the last year, I have found at least three such proofs. Unfortunately they were all fatally flawed.

Or, is there a more-than-exponential lower bound for the Triangle Removal Lemma? This would also be a major result: it would show that several results in property testing for dense graphs have no hope of being practical, and that there is a “separation” between the quantitative version of Szemeredi’s Theorem provable with analytical methods versus what can be proved with the above reduction. Besides, it’s not often that we have super-exponential lower bounds for natural problems, with Gowers’s result being a rare exception.

By the way, what about Szemeredi’s Theorem for longer progressions? For progressions of length k one needs a “k-clique removal lemma” for (k-1)-uniform hypergraphs, which in turn can be derived from the proper generalization of the Regularity Lemma to hypergraphs. This turns out to be quite complicated, and it has been accomplished only very recently in independent work by Nagle, Rödl and Schacht; and by Gowers. An alternative proof has been given by Tao. The interested reader can find more in expository papers by Gowers and by Tao.

What about Szemeredi’s own proof? It does use the Regularity Lemma, which was conceived and proved specifically for this application, and it involves a reduction to a graph-theoretic problem. I have to admit that I don’t know much else.

Analytical approaches to Szemeredi’s Theorem: general case

After having discussed the notion of Gowers uniformity, the statement of Szemeredi’s theorem, and the proof for progressions of length 3, let me try to say something about the case of progressions of length 4 or more, and the several related open questions.

As already discussed, Gowers’s proof is based on an “algorithm” that, given a subset A of Z_N of size dN, does one of the following two things:

  1. It immediately finds a progression of length k in A; or
  2. it constructs a subset A’ of Z_{N’} such that: (i) if there is a progression of length k in A’, then there is also such a progression in A; (ii) A’ has size at least (d+poly(d))*N’; and (iii) N’ is at least a constant root of N.

As in the proof for the case of progressions of length 3, once we have such an algorithm we are able to find a progression of length k in A, provided that, initially, N is at least exp(exp(poly(1/d))).

How does the “algorithm” work? Let us, again, identify the set A with its characteristic function A: Z_N -> {0,1}, and consider the expression

(*) E_{x,y} A(x)*A(x+y)*A(x+2y)*…*A(x+(k-1)y)

It is possible to show that (*) equals d^k (which is what we would expect if A were a “pseudorandom” set) plus or minus an error term that depends on U^{k-1}(A-d), where U^{k-1} is the dimension-(k-1) Gowers uniformity as defined earlier. A result of this kind (relating Gowers uniformity and the number of arithmetic progressions) is called a “generalized Von Neumann theorem” in the literature, for reasons that I unfortunately do not know. The proof is not difficult at all, it is just a series of applications of Cauchy-Schwarz. The genius is in the choice of the right definition. (And the technical difficulties are in what comes next.)
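Schematically, and hiding the exact normalization (this is my paraphrase, not a verbatim statement of the theorem), the generalized von Neumann theorem reads:

$$\left| \mathbb{E}_{x,y}\, A(x)A(x+y)\cdots A(x+(k-1)y) \;-\; d^k \right| \;\leq\; 2^k \, \| A - d \|_{U^{k-1}},$$

where \| \cdot \|_{U^{k-1}} is the dimension-(k-1) Gowers norm: one writes A = d + (A-d), expands the product into 2^k terms, and bounds each term containing a factor of A-d by repeated applications of Cauchy-Schwarz.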

This means that if the dimension-(k-1) Gowers uniformity of the function f(x):=A(x)-d is small, then A contains many length-k arithmetic progressions, and we are in case (1).
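For concreteness, here is a minimal Monte Carlo sketch of the dimension-k Gowers uniformity of a bounded function f: Z_N -> R, assuming the standard definition as an average over k-dimensional parallelepipeds (normalized as the 2^k-th power of the Gowers norm, which is one common convention):

```python
import random
from itertools import product

def gowers_uniformity(f, N, k, samples=200000):
    """Estimate U^k(f) = E_{x, h_1, ..., h_k} of the product of
    f(x + sum_{i in S} h_i) over all subsets S of {1, ..., k}."""
    total = 0.0
    for _ in range(samples):
        x = random.randrange(N)
        h = [random.randrange(N) for _ in range(k)]
        term = 1.0
        for omega in product((0, 1), repeat=k):
            term *= f((x + sum(w * hi for w, hi in zip(omega, h))) % N)
        total += term
    return total / samples

# Quadratic residues modulo a prime look "pseudorandom": the dimension-2
# uniformity of A - d is close to 0.
N = 101
A = {(x * x) % N for x in range(1, N)}
d = len(A) / N
print(gowers_uniformity(lambda y: (y in A) - d, N, 2))  # close to 0
```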

What remains to prove is that if the set A is such that f(x):=A(x)-d has noticeably large dimension-(k-1) Gowers uniformity then we can find a set A’ in ZN’ as in case (2) above. Gowers’s paper is approximately 130 pages long, and about 100 pages are devoted to proving just that.

Obviously, I have no idea what goes on in those 100 pages, but (based on later work by Green and Tao) I can guess how it would be if we were trying to prove Szemeredi’s theorem not in Z_N but in F_p^n with p prime and larger than k. Translated to such a setting (I think), Gowers’s argument would proceed by showing that if U^{k-1}(A-d) is at least eps, then there is an affine subspace V of F_p^n of dimension n/O(1) such that A restricted to V has density at least d+poly(eps). This, in turn, would be proved by showing

  1. If f has noticeably large dimension-(k-1) Gowers uniformity, then there is an affine subspace V of dimension n/O(1) and a polynomial phase function g of degree (k-2) such that, restricted to V, f and g are noticeably correlated
  2. If f is a function that has noticeable correlation with a low degree polynomial phase function, then there is a subspace of dimension n/O(1) such that f is correlated to a linear function when restricted to that subspace.

Let’s make things simple by restricting ourselves to Boolean functions (even though this p=2 case has no direct application to results about arithmetic progressions).

Then, part (1) is saying that if f: {0,1}^n -> {-1,1} has noticeably large dimension-k Gowers uniformity, then there is a polynomial p() of degree (k-1) over Z_2, and an affine subspace V of dimension n/O(1), such that f(x) and (-1)^{p(x)} agree on noticeably more than 1/2 of the inputs x. Part (2) is saying that if f: {0,1}^n -> {-1,1} and (-1)^{p(x)} agree on noticeably more than 1/2 of the inputs, where p is a low-degree polynomial, then there is an affine subspace V of dimension n/O(1) such that, restricted to V, f has agreement with a linear function on noticeably more than 1/2 of the inputs.

The two results together imply that if f has noticeably large Gowers uniformity then there is a large sub-space in which f is correlated with a linear function. (To reach the conclusion, you need to apply part (1), then do a change of variables so that now the V of part (1) looks like {0,1}^{n/O(1)}, and then apply part (2).)

Actually, what I just wrote may be an open question, but it should be provable along the lines of Gowers’s proof, and it should be much easier to prove.

For the case of progressions of length 4 in F_p^n, for a small prime p>4, Green and Tao prove that if f is a bounded function of noticeably large dimension-3 Gowers uniformity, then f is correlated to a degree-2 polynomial over all of F_p^n, and the correlation is especially good on a subspace of dimension n-O(1). (As opposed to n/O(1).) This improvement, and several additional ideas, lead to an improved quantitative version of Szemeredi’s theorem for progressions of length 4 in F_p^n, and they have also announced a similar improved result for progressions of length 4 over the integers.

Their proof relies on the analysis of a certain “linearity test in the highly noisy case,” which is also used (and, I think, proved for the first time) in Gowers’s paper. In the simpler Boolean case, the result is:

Let F: {0,1}^n -> {0,1}^m, and suppose that

Pr_{x,y}[F(x)+F(y) = F(x+y)] > eps

where all operations are bitwise mod 2. Then there is an eps’ (depending only on eps, but independent of n,m) and a matrix M such that the functions x -> F(x) and x -> Mx have agreement at least eps’.

(Update: as Alex explains in his comment below, the above sentence is quite misleading. Gowers proves quite a different statement in Z_N, a setting in which an analogous “highly noisy linearity test” statement is provably false, and where it is difficult even to state the right analog. The Boolean statement above is proved for the first time in Alex’s paper, with a proof whose outline is similar to Gowers’s; Green and Tao prove the highly noisy linearity test result in other prime fields, also following Gowers’s argument.)
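To make the Boolean statement concrete, here is the test in code, with n-bit strings represented as Python ints so that bitwise XOR is addition mod 2; the particular F below is a made-up linear example, which passes with probability 1:

```python
import random

def linearity_test_pass_rate(F, n, samples=100000):
    """Estimate Pr_{x,y}[F(x) ^ F(y) == F(x ^ y)]; the theorem says that a
    pass rate above eps forces agreement eps' with some linear map x -> Mx."""
    passes = 0
    for _ in range(samples):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        passes += (F(x) ^ F(y) == F(x ^ y))
    return passes / samples

# A linear F: bit i of F(x) is the inner product mod 2 of row i with x.
M_rows = [0b1011, 0b0110]  # a hypothetical 2x4 matrix over GF(2)
F = lambda x: sum((bin(r & x).count("1") & 1) << i for i, r in enumerate(M_rows))
print(linearity_test_pass_rate(F, 4))  # 1.0
```

The interesting regime for the theorem is eps close to 0, where most pairs fail the test, far below the “noticeably more than 1/2” regime of the usual low-noise analysis.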

In the known proof, which uses difficult results from additive combinatorics, eps’ is exponentially small in eps. Here, two major open questions are: (i) find a simple proof, and (ii) find a proof where eps’ = poly(eps).

More ambitiously, there could be a simple proof that (say, in the Boolean case) if f has noticeable Gowers uniformity, then f is noticeably correlated with a linear function when restricted to a large subspace. The experts in the area would probably be able to turn a proof in the Boolean case into a proof that applies to the setting where we have Z_N instead of {0,1}^n, Bohr sets instead of subspaces, and things I do not understand instead of polynomials. Even so, if the proof in the Boolean setting were simple enough, the whole thing could be significantly simpler than Gowers’s proof, and there could be room for quantitative improvements.

Analytical approaches to Szemeredi’s Theorem: k=3

Here is the idea of Roth’s proof of the k=3 case of Szemeredi’s Theorem. (See yesterday’s post if you have no idea what I am talking about.) We have a subset A of Z_N of size dN and we want to find a length-3 arithmetic progression in A. A first observation is that if d > 2/3 then we are done, because if we pick a,b at random there is positive probability that a, a+b, a+b+b are all in A. (Use the union bound.) The proof will work by “induction” on d, showing that the truth of the theorem for a smaller value of d can be derived from the truth of the theorem for a larger value of d. Of course d is a continuous parameter, so it will not really be induction, but that’s how we should think of it.
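Spelled out, the first observation is just a union bound: if a and b are uniform and independent in Z_N, then each of a, a+b, a+2b is uniform in Z_N, and so lands outside A with probability exactly 1-d, giving

$$\Pr_{a,b}\big[\, a \in A \;\wedge\; a+b \in A \;\wedge\; a+2b \in A \,\big] \;\geq\; 1 - 3(1-d) \;=\; 3d-2 \;>\; 0 \quad \text{when } d > 2/3.$$

For large N this constant probability also exceeds the 1/N chance that b=0, so a non-trivial progression exists.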

The main result is an “algorithm” that given A subset of ZN of size dN does one of the following two things:

  1. It immediately finds a length-3 progression in A; or
  2. it constructs a subset A’ of Z_{N’} such that (i) if A’ has a length-3 progression then so does A; (ii) A’ has size at least about (d+d^2)*N’; and (iii) N’ is about sqrt(N).

So, given A, we run the algorithm, and we either immediately find the progression, or we get A’. In the latter case, we run the algorithm on A’, and we either get a progression in A’ (implying a progression in A) or we get a new set A”, and so on. But how many times do we have to keep doing this? At each step, the density of the set increases, and if it ever goes above 2/3 then we are also done. So, how many times can you do the d -> d + d^2 transformation until you get 2/3? Certainly no more than O(1/d^2) times, but actually O(1/d) is enough: it takes about 1/d steps to double the density from d to 2d, then about 1/(2d) more steps to double it again, and so on, and the geometric series sums to O(1/d). This means that we will always find a length-3 progression in A, provided that N is large enough that we can take its square root O(1/d) times. So if N is doubly exponential in 1/d we have our proof.
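A quick numerical sanity check of the O(1/d) claim (the function name is mine):

```python
def density_increment_steps(d, target=2/3):
    """Count the d -> d + d^2 increment steps needed to push the
    density past 2/3; the count grows like 1/d."""
    steps = 0
    while d <= target:
        d += d * d
        steps += 1
    return steps

for d in (0.1, 0.01, 0.001):
    print(d, density_increment_steps(d), 1 / d)  # steps stay within a small multiple of 1/d
```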

Gowers’s proof for the general case has the same outline. Now we want to find a length-k progression in A. If the density of A is more than (k-1)/k, we are done. Gowers describes an algorithm that, given A, either finds a length-k progression or constructs a set A’ in Z_{N’} such that if A’ has a length-k progression then so does A. Furthermore, A’ has density d+poly(d) and N’ is a constant root of N.

How do these arguments work? Let us look at Roth’s proof in the case in which A is a subset of F_p^n with prime p; for concreteness, take p=3. Given A, consider its characteristic function (that we denote again A) A: F_3^n -> {0,1} and compute its Fourier transform.

Consider now the probability that, when picking a,b at random, the three points a,a+b,a+b+b are all in A. This is the same as

(*) E_{a,b} A(a)*A(a+b)*A(a+b+b)

which can be expressed really cleanly in terms of the Fourier coefficients of A. In particular, the expression (*) is d^3, which is the number you would get if you were looking at 3 independent points, plus or minus an error term that depends on the largest Fourier coefficient of A. If all the non-trivial Fourier coefficients are less than, say, d^2/2, then expression (*) is at least d^3/2, and we have plenty of length-3 progressions in A; this is case (1) of the proof. In this case, we should think of A as a “pseudorandom set against adversaries that look for random length-3 progressions.”
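For the record, here is the computation behind this claim, with the normalization \hat A(\alpha) = E_x A(x) \omega^{-\langle \alpha, x \rangle}, where \omega = e^{2\pi i/3} (I am reconstructing the standard argument, so treat the constants as schematic). Expanding each factor of (*) in Fourier, the linear constraints force all three frequencies to be equal mod 3, so

$$\mathbb{E}_{a,b}\, A(a)A(a+b)A(a+2b) \;=\; \sum_{\alpha} \hat A(\alpha)^3 \;=\; d^3 \,+\, \sum_{\alpha \neq 0} \hat A(\alpha)^3 ,$$

and by Parseval \sum_\alpha |\hat A(\alpha)|^2 = E[A^2] = d, so the error term is at most d \cdot \max_{\alpha \neq 0} |\hat A(\alpha)|; if every non-trivial coefficient is below d^2/2, this is at most d^3/2.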

What happens if A has a large non-trivial Fourier coefficient? This means that A is correlated with a linear function, and so there is an affine subspace V in F_3^n of dimension n-1 such that A is denser in V than in the rest of F_3^n. Now, map V to F_3^{n-1} via an invertible affine map, and let A’ be the image of A under this map. If there is a length-3 progression in A’, that’s just 3 points on a line; but then A too had 3 points on a line. The set A’ has density about d+d^2 in F_3^{n-1}, and so we have part (2) of the argument.

As a bonus, we only lost a constant fraction of our group size at each step, so d can be as large as about 1/log N, where N = 3^n is the size of the initial group containing A.

Note the “win-win” structure of the argument. If A has small Fourier coefficients, then it is “pseudorandom”, and we get what we want from the pseudorandomness. Otherwise, A has some “linear structure,” and we can take advantage of it to reduce to a simpler instance of our original problem.

This “either we get what we want or we reduce to a simpler case” proof strategy has been used in some constructions of extractors, but it seems such a good idea that it may apply more widely, and one should keep it in mind.

What about lower bounds? Excellent question. There is a very simple construction due to Behrend of a subset of {1,…,N} of size N/exp(sqrt(log N)) that has no length-3 progression, so N must be super-polynomial in 1/d when we prove the theorem over the integers (or over Z_N with prime N; the two cases are essentially equivalent). Note that if you try to find a large set with no length-3 progression using the probabilistic method, you will not go very far. This is a better-than-random explicit construction, a rarity in extremal combinatorics. By the way, Behrend’s construction is very simple (here is a half-page description), it is 60 years old, and nobody has been able to improve it since.

What about lower bounds for F_3^n? There is no known construction of a large subset avoiding length-3 progressions. The best lower bounds are of the form c^n, with c<3. Is there a (3-o(1))^n lower bound? This is a major open question. If there were a (2.999)^n upper bound, then it would be a stunning result, because it would show that the business of translating proofs from finite fields to the integers using Bohr sets cannot be applied as a black box: it works on some proofs and does not work on others.

Szemeredi’s theorem

Szemeredi’s theorem on arithmetic progressions is one of the great triumphs of the “Hungarian approach” to mathematics: pose very difficult problems, and let deep results, connections between different areas of math, and applications, come out as byproducts of the search for a solution.

(Tim Gowers’s essay The two cultures of mathematics discusses brilliantly this “problem solving” way of doing math, as distinct from the “theory building” way.)

Some of the math surrounding Szemeredi’s theorem has found applications in property testing, in PCP constructions, and in extractor constructions, and more future applications are likely. Besides, it is truly beautiful (and somewhat accessible) math. It can be a lot of fun for a computer scientist to delve into this literature.

In telling the story of Szemeredi’s theorem, one usually starts from Van der Waerden’s 1927 theorem that, no matter how you color the integers with a finite number of colors, there are arbitrarily long monochromatic arithmetic progressions. (An arithmetic progression is a sequence of the form a, a+b, a+2b, …, a+kb, ….) In a more quantitative form, the theorem says that for all constants c,k there is an N(c,k) such that for every N > N(c,k), no matter how we color the integers {1,…,N} with c colors, there will be a monochromatic arithmetic progression of length k.

In 1936, Erdos and Turan conjectured that the coloring in Van der Waerden’s theorem was just a “distraction,” and that the “true reason” why the theorem is true is that every dense enough set of integers (in particular, the largest color class in Van der Waerden’s theorem) must contain long arithmetic progressions. Specifically, they conjectured that for every density 0 < d < 1 and every integer k there is an N(d,k) such that every subset A of {1,…,N} of cardinality dN contains a length-k arithmetic progression, provided N > N(d,k).

This conjecture became one of the great unsolved problems in Ramsey theory. In 1953, Roth proved the k=3 case of the Erdos-Turan conjecture. In Roth’s proof, N(d,3) is doubly exponential in 1/d. In other words, Roth proves that a subset of {1,…,N} containing at least about N/log log N elements must contain a length-3 progression. The proof is analytical, and uses Fourier analysis. (Or the “circle method” as number theorists say.)

It was only in 1974 that Szemeredi proved the Erdos-Turan conjecture in the general case, and this is the result that is now known as Szemeredi’s Theorem. Szemeredi’s proof is combinatorial, and works via a reduction to a graph-theoretic problem. The Szemeredi Regularity Lemma, which has several applications in computer science, is part of the proof. Even the value of N(d,4) is a tower of exponentials in Szemeredi’s proof; one only gets that a subset of {1,…,N} of size about N/log* log* N must contain a length-4 progression.

In 1977 Furstenberg found a completely different proof. He proved a “transfer theorem” showing that results about arithmetic progressions (and other related structures) can be derived from statements about certain continuous objects, and then he proved the continuous statements by “standard methods” in Ergodic Theory. The transfer theorem uses a result that requires the Axiom of Choice and so, while the proof establishes that finite values of N(d,k) exist, it is impossible, even in principle, to extract any quantitative bound from it.

Quantitative bounds are important because of the long-standing (and recently solved) question of whether the primes contain arbitrarily long arithmetic progressions. If one could show that a subset of {1,…,N} of size about N/log N contains long arithmetic progressions, then the result for the primes follows purely from the fact that the set of primes is dense enough.

There is a simple reduction that shows that, to prove Szemeredi’s theorem over the integers, it is enough to prove it for the additive group Z_N, with N prime. (That is, it is enough to look for arithmetic progressions mod N in {1,…,N}.) Also, if one looks at Roth’s proof, it can be seen that it gives a much stronger bound in the additive group F_3^n; in that setting, the proof shows that every subset A of size about 3^n/n contains three points in arithmetic progression (that is, three points of the form a, a+b, a+b+b). The bound is of the form N/log N, where N is the size of the group. The proof, however, uses the fact that F_3^n is a linear space, that it has subspaces, and so on. Bourgain (1999) was able to “simulate” this proof in Z_N by using the notion of a “Bohr set” of Z_N, and showing that it can play a role analogous to that of “subspace” in F_3^n. Bourgain obtains a bound that is about N/sqrt(log N).

The great quantitative breakthrough came with the work of Gowers (1998-2001), who proved bounds of the form N/log log N (as in Roth’s proof) for progressions of any length. His work introduced the notion of Gowers uniformity I discussed two days ago.

So there are now three completely different proofs, Szemeredi’s proof using graph-theoretic arguments, Furstenberg’s proof using Ergodic Theory, and Gowers’s analytical proof. Recent work by Tao, however, shows that there are deep similarities between the proofs, although a complete “understanding” of what goes on is probably still to come.

There is also a new (2004-05) fourth proof, which uses a hypergraph regularity lemma, and which is similar in spirit to (but technically very different from, and simpler than) Szemeredi’s proof. It also has deep relations with the other three proofs. (See e.g. this paper.) It is an independent achievement of Nagle, Rödl, Schacht, and Skokan; Gowers; and Tao.

The most famous result on arithmetic progressions is probably the Green-Tao proof that the primes contain arbitrarily long arithmetic progressions.

The structure of the proof is quite amazing, and perhaps it contains ideas that could also be useful in computer science. They begin by defining a notion of pseudorandomness for sets of integers (which requires two conditions: Gowers uniformity and an additional property). Then they show:

  1. Assuming Szemeredi’s theorem as a black-box, the theorem is also true for subsets of pseudorandom sets. That is, if S is a large enough and pseudorandom enough set of integers, and A is a subset of S of constant density, then A must contain long arithmetic progressions. (Roughly speaking, a “distinguisher” that looks for arithmetic progressions cannot distinguish a dense subset of a pseudorandom set from a dense subset of all the integers. This is akin to the notion of “pseudo-entropy” in HILL, but the quantification is different.)
  2. The set of “pseudoprimes” (integers having only few, large divisors) is pseudorandom in the integers. (This uses new results of Goldston and Yildirim.)
  3. The primes have constant density in the pseudoprimes.

Coming up next: Roth’s proof for k=3, the structure of Gowers’s proof for general k, the Green-Tao improvement for k=4, and the constellation of wonderful open questions around the k=3 and k=4 cases in finite vector spaces. (After that, back to politics, food, movies, and China.)