The next issue of the Bulletin of the AMS will have an article by Jaikumar Radhakrishnan and Madhu Sudan on Dinur’s proof of the PCP theorem. The article introduces the notion of PCP and its applications, as well as Irit’s proof.

There will also be an article on the work of Goldston-Pintz-Yildirim on small gaps between primes. Their main result is that for every eps>0 there are infinitely many pairs of consecutive primes whose difference is less than eps times the average gap between consecutive primes of that magnitude. That is, we can find infinitely many pairs of primes p, p’ with p’ > p such that p’-p < eps log p. This is a step toward proving that there are infinitely many pairs p, p’ with p’-p = 2, the *twin primes conjecture*.

This result is fascinating in various ways. One is the way it was discovered: Goldston and Yildirim had been working on gaps between primes since 1999, and announced the above result in 2003. But then a gap was found in the proof, which seemed to doom the whole approach. They kept working on it for two more years, however, until they made their breakthrough. You can read about it in Daniel Goldston’s amusing article “my 30 minutes of fame.”

Technically, their results improve on previous “sieve” techniques whose spirit is to show that the “primes are ‘pseudo-random’ except in the obvious ways in which they aren’t,” as Green and Tao more or less put it, a statement that is made precise in the Hardy-Littlewood conjecture. The idea is to start from the following probabilistic “model” of the primes: each number N is “prime” independently with probability 1/ln N. One then observes that certain properties of the primes (for example, the prime number theorem) hold in this model. Faced with a conjecture, one can check whether it is true in the model, then see whether it is true for all sets of integers that have some “pseudorandomness” property, and finally verify that the primes have such a pseudorandomness property, usually via Fourier analysis. (This would be the “circle method” in number theory.)
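As a quick sanity check on this model (my own illustration, not something from the papers under discussion), a few lines of Python compare the model’s expected number of primes up to N, namely the sum of 1/ln n for n up to N, with the true value of pi(N):

```python
import math

def sieve(n):
    """Sieve of Eratosthenes: returns a boolean list is_prime[0..n]."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, n + 1, p):
                is_prime[q] = False
    return is_prime

N = 10 ** 6
is_prime = sieve(N)

actual = sum(is_prime)  # pi(N), the true number of primes up to N
# The model's expectation: each n is "prime" with probability 1/ln n.
predicted = sum(1 / math.log(n) for n in range(2, N + 1))

print(actual, round(predicted))
```

At N = 10^6 the two counts agree to within a fraction of a percent, as the prime number theorem predicts.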

The model, of course, has some major shortcomings. For example, it predicts infinitely many even numbers to be primes, and infinitely many pairs of primes that differ only by one. We may correct the model by, more realistically, saying that if N>2 is even, then it is never prime, and if it is odd then it is prime with probability 2/ln N. This is already better, and it predicts some true asymptotic properties of the primes, but it still has problems. For example, it predicts infinitely many triples of primes p,p+2,p+4, while 3,5,7 is the only such triple. (Why?) An even better model is that N>6 is prime with probability 3/ln N if N mod 6 is either 1 or 5, and N is never prime otherwise.
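One can verify the observation behind the “(Why?)” with a few lines of Python: among N, N+2, N+4 the residues mod 3 always cover all of {0,1,2}, so one of the three numbers is divisible by 3, and hence 3,5,7 is the only prime triple of this shape.

```python
# For every residue r of N mod 3, the residues of N, N+2, N+4
# are {r, r+2, r+1} mod 3, i.e. all of {0, 1, 2} in some order,
# so one of the three numbers is always divisible by 3.
all_cover = all({(r + d) % 3 for d in (0, 2, 4)} == {0, 1, 2}
                for r in range(3))
print(all_cover)  # True
```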

We see that we can define a hierarchy of more and more accurate models: in general, for a fixed W, we can look at the distribution that picks a 1/ln N fraction of numbers from 1 to N and that only picks numbers N such that N and W share no common factor. As we pick W to be a product of more and more of the first few primes, we get a more and more accurate model. (This would be the issue of “major arcs” versus “minor arcs” in the circle method.)

Now look at a conjecture about primes, for example the twin primes conjecture, and see what these models predict. They all predict const·N/(ln N)^2 twin primes between 1 and N, with the constant depending on W; for larger W, the constant tends to a limit c_twin. It is conjectured that there are in fact about c_twin·N/(ln N)^2 twin primes between 1 and N.
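For concreteness, here is a small experiment (my own sketch, with the infinite product defining c_twin truncated at 10^5) comparing the actual twin prime count up to 10^6 with the c_twin·N/(ln N)^2 prediction; note that the crude N/(ln N)^2 form undercounts somewhat compared to the integral version of the same heuristic.

```python
import math

def sieve(n):
    """Sieve of Eratosthenes: returns a boolean list is_prime[0..n]."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, n + 1, p):
                is_prime[q] = False
    return is_prime

N = 10 ** 6
is_prime = sieve(N)

# Count the twin prime pairs (p, p+2) with p+2 <= N.
twins = sum(1 for p in range(3, N - 1) if is_prime[p] and is_prime[p + 2])

# c_twin = 2 * product over odd primes p of (1 - 1/(p-1)^2),
# truncated here at p < 10^5 (the tail contributes negligibly).
c_twin = 2.0
for p in range(3, 10 ** 5):
    if is_prime[p]:
        c_twin *= 1 - 1 / (p - 1) ** 2

predicted = c_twin * N / math.log(N) ** 2
print(twins, round(predicted))
```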

More or less, the Hardy-Littlewood conjecture implies that the prediction given by this model is always right for questions that involve “linear equations over primes.”

Apparently (I haven’t read the papers) Goldston-Pintz-Yildirim approach the pseudorandomness of primes by looking at “higher moments.” Green and Tao have been working on a different program based on studying the “Gowers uniformity” of the primes. (Technically, they study the Gowers uniformity of the Mobius function rather than of the characteristic function of the primes, but it’s close enough in spirit.)

In the paper Quadratic Uniformity of the Mobius Function they show that the primes have small dimension-3 Gowers uniformity, and, together with their earlier result on Gowers uniformity and polynomials, this is used in a new paper to prove that the predictions of the Hardy-Littlewood conjecture hold for every question about linear equations over primes of “bounded complexity.” For example, if you write certain pairs of linear equations over four prime unknowns, then the fraction of solutions will be as predicted by the limit of the random model. This includes the case of length-4 arithmetic progressions over primes, where you look at solutions p1,p2,p3,p4 to the equations p4-p3=p3-p2 and p3-p2=p2-p1.
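As a toy illustration of the last example (again my own sketch, nothing to do with the actual Green-Tao machinery), one can enumerate the length-4 arithmetic progressions among the primes up to 100, that is, the prime solutions of the two equations above:

```python
def sieve(n):
    """Sieve of Eratosthenes: returns a boolean list is_prime[0..n]."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, n + 1, p):
                is_prime[q] = False
    return is_prime

N = 100
is_prime = sieve(N)
primes = [p for p in range(2, N + 1) if is_prime[p]]
prime_set = set(primes)

# Solutions p1,p2,p3,p4 to p4-p3 = p3-p2 = p2-p1, common difference d > 0.
aps = [(p, p + d, p + 2 * d, p + 3 * d)
       for p in primes
       for d in range(1, (N - p) // 3 + 1)
       if {p + d, p + 2 * d, p + 3 * d} <= prime_set]

print(len(aps), aps[0])  # 11 progressions; the first is (5, 11, 17, 23)
```

(Every common difference in the output is a multiple of 6, for the same mod-2 and mod-3 reasons discussed above.)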

These results and conjectures are part of an even bigger body of work whose spirit is that “multiplicative structure is pseudorandom with respect to addition,” and that has applications to combinatorial constructions. This comes up most directly in sum-product results in finite fields and over the integers, used to construct extractors; in Weil’s result on character sums, which is used to construct eps-biased generators; in certain expander constructions related to Ramanujan graphs; and so on.

It is hard to believe that, until recently, analytic number theory was considered an unfashionable area, what with its *quantitative* results and its lack of connections to algebraic geometry. For example, I understand that the Berkeley math department, which is quite large, has no analytic number theorist.

On a side note, I think that page 8 on the link “my 30 minutes of fame”, given in the post, contains some amusing stuff.

Thanks very much for this blog!

I recently made an amateur’s attempt at proving the twin prime conjecture (no link needed). I would have done better to read your blog first and learn about the subtleties of sieve theory.

When I was analyzing twin primes, I found that, for the first 15 million primes (http://primes.utm.edu/lists/small/millions/), for any positive integer i there is always a twin prime between 6i+3 and 10i+5, which amounts to between 6i+3 and (5/3)(6i+3).

This seems wrong to me, so I am assuming that my verification program must have a bug. Does the Prime Number Theorem contradict my finding? I know that Bertrand’s Postulate predicts that there should be a prime between n and 2n, but I have never seen a proof that there should be a prime between (6i+3) and (5/3)(6i+3).

If you would prefer to respond directly to me, my e-mail address is larry.freeman@gmail.com.

Thanks very much!

-Larry

The prime number theorem implies that, for every sufficiently large n, there is a prime between n and 5n/3 (or between n and 1.001*n, and so on), so there is no contradiction.
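This is also easy to check empirically for small n; the following sketch (my own check, with the bound 10^5 chosen arbitrarily) looks for counterexamples to “there is a prime between n and 5n/3”:

```python
def sieve(n):
    """Sieve of Eratosthenes: returns a boolean list is_prime[0..n]."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, n + 1, p):
                is_prime[q] = False
    return is_prime

LIMIT = 10 ** 5
is_prime = sieve(2 * LIMIT)  # 5n/3 < 2*LIMIT for all n <= LIMIT

# For each n, look for a prime in the interval (n, 5n/3].
failures = [n for n in range(2, LIMIT + 1)
            if not any(is_prime[k] for k in range(n + 1, (5 * n) // 3 + 1))]

print(failures)  # empty list: no counterexample up to 10^5
```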

(The heuristic arguments based on probabilistic “models,” which are supported by “experiments” on the first few million primes, would also suggest the existence of many twin primes between n and 1.001*n, for large enough n.)

No matter what proof of the twin prime conjecture is ever offered, it will never really explain the fundamental forces at work in the primes, such as

Randomness = Recursive Self Complication

and why the primes and twin primes are almost effectively finite, that is, their incidence fizzles to near-zero probability.

My humble work:

http://www.rankyouragent.com/primes/primes.htm

may offer guidance to anyone capable of escaping mathematical dogma.