You are currently browsing the monthly archive for January 2008.

*Young Homeless Guy is sitting on the floor with a cardboard sign. Another guy walks by, holding what look like large leftover bags from a restaurant.*

*Guy With Bags:* [stops and offers the bags] Would you like something to eat?

*Young Homeless Guy:* Is there garlic or avocado in it?

*GWB:* I don’t think so, why?

*YHG:* I am allergic to both. Especially avocado: when I eat it, my throat gets all scratchy.

Women in their sophomore or junior year of college who are thinking about doing research and going to graduate school should read this article (via Andrew Sullivan). Living the life of the mind is very rewarding, and, apparently, the chances of dating male models are not bad either. (If the author could get some mileage out of being an undergrad at Harvard, just imagine what it can do for you to be a grad student at Berkeley!)

After a hiatus of almost four years, the graduate computational complexity course returns to Berkeley.

To get started, I proved Cook’s non-deterministic hierarchy theorem, a 1970s result with a beautifully clever proof, which I first learned from Sanjeev Arora, and which is not very well known.

Though the full result is more general, say we want to prove that there is a language in $NP$ that cannot be solved by non-deterministic Turing machines in time $o(n^c)$, for some fixed $c$.

(If one does not want to talk about non-deterministic Turing machines, the same proof will apply to other quantitative restrictions on NP, such as bounding the length of the witness and the running time of the verification.)

In the deterministic case, where we want to find a language in $P$ not solvable in time $o(n^c)$, it’s very simple. We define the language $L$ that contains all pairs $(M,x)$ where: (i) $M$ is a Turing machine, (ii) $x$ is a binary string, (iii) $M$ rejects the input $(M,x)$ within $|(M,x)|^c$ steps, where $|s|$ denotes the length of a string $s$.

It’s easy to see that $L$ is in $P$, and it is also easy to see that if a machine $T$ could decide this problem in time at most $n^c$ on all sufficiently large inputs, then the behavior of $T$ on input $(T,x)$, for every $x$ long enough, leads to a contradiction.

We could try the same with $NP$, and define $L$ to contain pairs $(M,x)$ such that $M$ is a non-deterministic Turing machine that has no accepting path of length at most $|(M,x)|^c$ on input $(M,x)$. It would be easy to see that $L$ cannot be solved non-deterministically in time $o(n^c)$, but it’s hopeless to prove that $L$ is in $NP$, because in order to solve $L$ we need to decide whether a given non-deterministic Turing machine rejects, which is, in general, a $coNP$-complete problem.

Here is Cook’s argument. Define the function $t$ as follows: $t(1) = 2$, $t(k+1) = 2^{t(k)^c}$. Hence, $t(k)$ grows like a tower of exponentials of height $k$. Now define the language $L$ as follows.

$L$ contains all pairs $(M, 0^n)$ where $M$ is a non-deterministic Turing machine and $0^n$ is a sequence of $n$ zeroes, such that one of the following conditions is satisfied:

- There is a $k$ such that $n = t(k)$, and $M$ has no accepting computation on input $(M, 0^{t(k-1)+1})$ of running time at most $(t(k-1)+1)^c$;
- $n$ is not of the form $t(k)$ for any $k$, and $M$ has an accepting computation on input $(M, 0^{n+1})$ of running time at most $(n+1)^c$.
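For concreteness, the sequence of checkpoints can be sketched in a few lines of Python. (This is my own toy illustration, fixing $c=2$; the function names are mine.)

```python
def t(k, c=2):
    """Checkpoint sequence: t(1) = 2, t(k+1) = 2 ** (t(k) ** c)."""
    v = 2
    for _ in range(k - 1):
        v = 2 ** (v ** c)
    return v

def checkpoint_index(n, c=2):
    """Return k if n == t(k) for some k, else None."""
    k, v = 1, 2
    while v <= n:
        if v == n:
            return k
        k, v = k + 1, 2 ** (v ** c)
    return None
```

With $c=2$ the first checkpoints are $2$, $2^4 = 16$, $2^{256}$, and so on; between two consecutive checkpoints the definition of $L$ "defers" the simulation to longer and longer inputs.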

Now let’s see that $L$ is in $NP$. When we are given an input $(M, 0^n)$ we can first check if there is a $k$ such that $n = t(k)$.

- If there is, we can compute $t(k-1)$ and deterministically simulate all computations of $M$ on input $(M, 0^{t(k-1)+1})$ up to running time $(t(k-1)+1)^c$. This takes time $2^{O(t(k-1)^c)}$, which is polynomial in $n = t(k) = 2^{t(k-1)^c}$.
- Otherwise, we non-deterministically simulate $M$ on input $(M, 0^{n+1})$ for up to $(n+1)^c$ steps. (And reject after time-out.)

In either case, we are correctly deciding the language.

Finally, suppose that $L$ could be decided by a non-deterministic Turing machine $M$ running in time $o(n^c)$. In particular, for all sufficiently large $n$, the machine runs in time at most $n^c$ on input $(M, 0^n)$.

Choose $k$ to be sufficiently large so that for every $n$ in the interval $t(k-1) < n \leq t(k)$ the above property is true.

Now we can see that $M$ accepts $(M, 0^{t(k-1)+1})$ if and only if $M$ accepts $(M, 0^{t(k-1)+2})$, if and only if … if and only if $M$ accepts $(M, 0^{t(k)})$, if and only if $M$ rejects $(M, 0^{t(k-1)+1})$, and we have our contradiction.

I am currently in Hong Kong for my second annual winter break visit to the Chinese University of Hong Kong. If you are around, come to CUHK on Tuesday afternoon for a series of back-to-back talks by Andrej Bogdanov and me.

First, I’d like to link to this article by Gloria Steinem. (It’s old but I have been behind with my reading.) I believe this presidential campaign will bring up serious reflections on issues of gender and race, and I look forward to the rest of it.

Secondly, I’d like to talk about pseudorandomness against low-degree polynomials.

Naor and Naor constructed in 1990 a pseudorandom generator whose output is pseudorandom against tests that compute affine functions over $GF(2)$. Their construction maps a seed of length $O(\log (n/\epsilon))$ into an $n$-bit string in $\{0,1\}^n$ such that if $a: \{0,1\}^n \rightarrow \{0,1\}$ is an arbitrary affine function, $N$ is the distribution of outputs of the generator, and $U$ is the uniform distribution over $\{0,1\}^n$, we have

(1) $\displaystyle \left| \Pr_{x \sim N} [a(x) = 1] - \Pr_{x \sim U} [a(x) = 1] \right| \leq \epsilon$
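To make the definition concrete, here is a brute-force check of property (1); this is my own toy illustration (exponential in $n$, so only usable for tiny examples), not the actual Naor-Naor construction.

```python
from itertools import product

def bias(sample, n):
    """Largest advantage over 1/2 of any nonzero linear test on the (multi)set
    `sample` of n-bit tuples.  An affine test <a,x>+b has the same advantage
    as its linear part, so checking linear tests suffices."""
    worst = 0.0
    for a in product([0, 1], repeat=n):
        if not any(a):
            continue  # the zero test is constant; pseudorandomness is trivial
        p = sum(1 for x in sample
                if sum(ai & xi for ai, xi in zip(a, x)) % 2 == 1) / len(sample)
        worst = max(worst, abs(p - 0.5))
    return worst

# The uniform distribution has bias 0; a single point has the worst bias, 1/2.
cube = list(product([0, 1], repeat=3))
```

A set achieving bias $\epsilon$ with only $\mathrm{poly}(n/\epsilon)$ points is exactly what the Naor-Naor generator provides.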

This has numerous applications, and it is related to other problems. For example, if $C$ is a linear error-correcting code with $2^n$ codewords of length $m$, and if it is such that any two codewords differ in at least a $\frac{1}{2} - \epsilon$ fraction of coordinates, and in at most a $\frac{1}{2} + \epsilon$ fraction, then one can derive from the code a Naor-Naor generator mapping a seed of length $\log m$ into an output of length $n$. (It is a very interesting exercise to figure out how.)

Here is another connection: let $S$ be the (multi)set of outputs of a Naor-Naor generator over all possible seeds, and consider the Cayley graph constructed over the additive group of $\{0,1\}^n$ using $S$ as a set of generators. (That is, take the graph that has a vertex for every element of $\{0,1\}^n$, and an edge between $x$ and $x+s$ for every $s \in S$, where operations are mod 2 and componentwise.) Then this graph is an expander: the largest eigenvalue is $|S|$, the degree, and all other eigenvalues are at most $2\epsilon |S|$ in absolute value. (Here too it’s worth figuring out the details by oneself. The hint is that in a Cayley graph over an abelian group the eigenvectors are always the characters, regardless of what generators are chosen.) In turn this means that if we pick $x$ uniformly and $y$ according to a Naor-Naor distribution, and if $A$ is a reasonably large set, then the events $x \in A$ and $x + y \in A$ are nearly independent. This wouldn’t be easy to argue directly from the definition (1), and it is an example of the advantages of this connection.
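The hint in parentheses can be checked numerically: since the characters are the eigenvectors, the eigenvalues of such a Cayley graph are exactly the character sums $\sum_{s \in S} (-1)^{\langle a, s \rangle}$. A small sanity check (my own toy code, with an arbitrary generator multiset):

```python
import numpy as np

def cayley_eigenvalues(S, n):
    """Sorted eigenvalues of the Cayley graph of (Z_2)^n with generator
    multiset S (elements of S encoded as n-bit integers)."""
    N = 2 ** n
    A = np.zeros((N, N))
    for x in range(N):
        for s in S:
            A[x, x ^ s] += 1  # edge x -- x+s (addition mod 2 is XOR)
    return np.sort(np.linalg.eigvalsh(A))

def character_sums(S, n):
    """For each character chi_a, the sum of chi_a(s) over s in S."""
    vals = [sum((-1) ** bin(a & s).count("1") for s in S) for a in range(2 ** n)]
    return np.sort(np.array(vals, dtype=float))

S = [0b001, 0b010, 0b110, 0b111]  # arbitrary generators in {0,1}^3
assert np.allclose(cayley_eigenvalues(S, 3), character_sums(S, 3))
```

The sum for $a = 0$ gives the top eigenvalue $|S|$, and when $S$ is $\epsilon$-biased every other character sum is small, which is the expansion claim.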

There is more. If $f: \{0,1\}^n \rightarrow {\mathbb R}$ is such that the sum $t$ of the absolute values of its Fourier coefficients is small, $N$ is a Naor-Naor distribution, and $U$ is uniform, we have

$\displaystyle \left| \mathop{\mathbb E}_{x \sim N} [f(x)] - \mathop{\mathbb E}_{x \sim U} [f(x)] \right| \leq 2 \epsilon t$

and so a Naor-Naor distribution is pseudorandom against $f$ too, if $t$ is not too large. This has a number of applications: Naor-Naor distributions are pseudorandom against tests that look only at a bounded number of bits, against functions computable by read-once branching programs of width 2, and so on.
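This Fourier $\ell_1$ bound is easy to verify numerically on a toy example. In the sketch below (entirely my own, with everything computed by brute force), `sample` is an arbitrary small multiset standing in for a small-bias distribution, and `f` is an arbitrary test function.

```python
from itertools import product

n = 3
points = list(product([0, 1], repeat=n))
# an arbitrary 5-point multiset standing in for a small-bias distribution
sample = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]

def dot(a, x):
    return sum(ai & xi for ai, xi in zip(a, x)) % 2

# eps: the bias of `sample` in the sense of Equation (1)
eps = max(abs(sum(dot(a, x) for x in sample) / len(sample) - 0.5)
          for a in points if any(a))

def f(x):
    # an arbitrary real-valued test function
    return x[0] * x[1] + x[2]

# l1: sum over nonzero a of |f_hat(a)|, with f_hat computed by brute force
l1 = sum(abs(sum(f(x) * (-1) ** dot(a, x) for x in points)) / len(points)
         for a in points if any(a))

diff = abs(sum(f(x) for x in sample) / len(sample)
           - sum(f(x) for x in points) / len(points))
# the Fourier L1 bound: |E[f(N)] - E[f(U)]| <= 2 * eps * l1
assert diff <= 2 * eps * l1 + 1e-9
```

The point of the inequality is that the only way a distribution can fool every linear test and still fail against $f$ is for $f$ to have large Fourier $\ell_1$ mass.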

Given all these wonderful properties, it is natural to ask whether we can construct generators that are pseudorandom against quadratic polynomials over $GF(2)$, and, in general, low-degree polynomials. This question has been open for a long time. Luby, Velickovic, and Wigderson constructed such a generator with seed length $2^{O(\sqrt{\log (n/\epsilon)})}$, using the Nisan-Wigderson methodology, and this was not improved upon for more than ten years.

When dealing with polynomials, several difficulties arise that are not present when dealing with linear functions. One is the correspondence between pseudorandomness against linear functions and Fourier analysis; until the development of Gowers uniformity there was no analogous analytical tool to reason about pseudorandomness against polynomials (and even Gowers uniformity is unsuitable to reason about very small sets). Another difference is that, in Equation (1), we know that $\Pr_{x \sim U} [a(x) = 1] = \frac{1}{2}$, except for the constant functions (against which pseudorandomness is trivial). This means that in order to prove (1) it suffices to show that $\Pr_{x \sim N} [a(x) = 1]$ is close to $\frac{1}{2}$ for every non-constant $a$. When we deal with a quadratic polynomial $p$, the value $\Pr_{x \sim U} [p(x) = 1]$ can be all over the place between $\frac{1}{4}$ and $\frac{3}{4}$ (for non-constant polynomials), and so we cannot simply prove that $\Pr_{x \sim N} [p(x) = 1]$ is close to a certain known value.

A first breakthrough with this problem came with the work of Bogdanov on the case of large fields. (Above I stated the problem for $GF(2)$, but it is well defined for every finite field.) I don’t completely understand his paper, but one of the ideas is that if $p$ is an absolutely irreducible polynomial (meaning it does not factor even in the algebraic closure of the field), then the value $p(x)$, for random $x$, is close to uniformly distributed over the field; so to analyze his generator construction in this setting one “just” has to show that $p(G(s))$ is nearly uniform, where $G(s)$ is the output of his generator on a random seed $s$. If $p$ factors then somehow one can analyze the construction “factor by factor,” or something to this effect. This approach, however, is not promising for the case of small fields, where even an absolutely irreducible polynomial can have noticeable bias.

The breakthrough for the boolean case came with the recent work of Bogdanov and Viola. Their starting point is the proof that if $X$ and $Y$ are two independent Naor-Naor distributions, then $X+Y$ is pseudorandom for quadratic polynomials. To get around the unknown bias problem, they divide the analysis into two cases. First, it is known that, up to affine transformations, a quadratic polynomial can be written as $x_1 x_2 + x_3 x_4 + \cdots + x_{2k-1} x_{2k}$, so, since applying an affine transformation to a Naor-Naor distribution gives a Naor-Naor distribution, we may assume our polynomial is in this form.

- Case 1: if $k$ is small, then the polynomial depends on few variables, and so even just one Naor-Naor distribution is going to be pseudorandom against it;
- Case 2: if $k$ is large, then the polynomial has very low bias, that is, $\Pr_{x \sim U} [p(x) = 1]$ is very close to $\frac{1}{2}$. This means that it is enough to prove that $\Pr [p(X+Y) = 1]$ is also close to $\frac{1}{2}$, which can be done using (i) Cauchy-Schwarz, (ii) the fact that $x$ and $x+y$ are nearly independent if $x$ is uniform and $y$ is Naor-Naor, and (iii) the fact that for fixed $y$ the function $x \mapsto p(x+y) + p(x)$ is affine.
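One can check by brute force that the canonical form $x_1 x_2 + \cdots + x_{2k-1} x_{2k}$ has $\mathbb{E}[(-1)^{p(x)}] = 2^{-k}$, i.e. $\Pr[p(x)=1] = \frac{1}{2} - 2^{-k-1}$, which is why large $k$ means low bias. (A small sketch of my own:)

```python
from itertools import product

def canonical_quadratic_bias(k):
    """E[(-1)^p] for p = x1*x2 + ... + x_{2k-1}*x_{2k} over uniform {0,1}^{2k}."""
    total = 0
    for x in product([0, 1], repeat=2 * k):
        p = sum(x[2 * i] * x[2 * i + 1] for i in range(k)) % 2
        total += (-1) ** p
    return total / 2 ** (2 * k)
```

Each product $x_{2i-1} x_{2i}$ contributes an independent factor of $\frac{1}{2}$ to the expectation, so the bias decays exponentially in $k$.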

Now, it would be nice if every degree-3 polynomial could be written, up to affine transformations, as $x_1 x_2 x_3 + x_4 x_5 x_6 + \cdots + x_{3k-2} x_{3k-1} x_{3k}$, but there is no such characterization, so one has to find the right way to generalize the argument.

In the Bogdanov-Viola paper, they prove

- Case 1: if $p$ of degree $d$ is correlated with a degree-$(d-1)$ polynomial, and if $X$ is a distribution that is pseudorandom against degree-$(d-1)$ polynomials, then $X$ is also pseudorandom against $p$;
- Case 2: if $p$ of degree $d$ has small Gowers uniformity norm of dimension $d$, then $\Pr_{x \sim U} [p(x) = 1]$ is close to $\frac{1}{2}$, which was known, and if $X$ is pseudorandom for degree-$(d-1)$ polynomials and $Y$ is a Naor-Naor distribution, then $\Pr [p(X+Y) = 1]$ is close to $\frac{1}{2}$ too.

There is a gap between the two cases, because Case 1 requires correlation with a polynomial of degree $d-1$ and Case 2 requires small Gowers uniformity of dimension $d$. The Gowers norm inverse conjecture of Green and Tao is that a noticeably large dimension-$d$ Gowers norm implies a noticeable correlation with a degree-$(d-1)$ polynomial, and so it fills the gap. The conjecture was proved for $d=3$ by Samorodnitsky in the boolean case and by Green and Tao for larger fields. Assuming the conjecture, the two cases combine to give an inductive proof that if $X_1, \ldots, X_d$ are independent Naor-Naor distributions then $X_1 + \cdots + X_d$ is pseudorandom for every degree-$d$ polynomial.

Unfortunately, Green and Tao and Lovett, Meshulam, and Samorodnitsky prove that the Gowers inverse conjecture fails (as stated above) for dimension $d=4$ in the boolean case.

Lovett has given a different argument to prove that the sum of Naor-Naor generators is pseudorandom for low-degree polynomials. His analysis also breaks down into two cases, but the cases are defined based on the *largest Fourier coefficient* of the polynomial, rather than on its Gowers uniformity. (Thus, his analysis does not differ from the Bogdanov-Viola analysis for quadratic polynomials, because the dimension-2 Gowers uniformity measures the largest Fourier coefficient, but it differs when $d \geq 3$.) Lovett’s analysis only shows that $X_1 + \cdots + X_{2^d}$ is pseudorandom for degree-$d$ polynomials, where the $X_i$ are independent Naor-Naor generators, compared to the sum of $d$ generators that would have sufficed in the conjectural analysis of Bogdanov and Viola.

The last word on this problem (for now) is this paper by Viola, where he shows that the sum of $d$ independent Naor-Naor generators is indeed pseudorandom for degree-$d$ polynomials.

Again, there is a case analysis, but this time the cases depend on whether or not $p$ is noticeably biased.

If $p$ is noticeably biased (this corresponds to a small $k$ in the quadratic model case), then it follows from the previous Bogdanov-Viola analysis that a distribution that is pseudorandom against degree-$(d-1)$ polynomials will also be pseudorandom against $p$.

The other case is when $p$ is nearly unbiased, and we want to show that $p(X_1 + \cdots + X_d)$ is nearly unbiased too. Note how weak the assumption is, compared to the assumption that $p$ has small dimension-$d$ Gowers norm (in Bogdanov-Viola) or that all Fourier coefficients of $p$ are small (in Lovett). The same three tools that work in the quadratic case, however, work here too, in a surprisingly short proof.

Alonzo Church and Alan Turing imagined programming languages and computing machines, and studied their limitations, in the 1930s; computers started appearing in the 1940s; but it took until the 1960s for computer science to become its own discipline, and to provide a common place for the logicians, combinatorialists, electrical engineers, operations researchers, and others, who had been studying the uses and limitations of computers. That was a time when giants were roaming the Earth, and when results that we now see as timeless classics were discovered.

Don Knuth is one of the most revered of the great researchers of that time, and a sort of pop-culture icon to a certain geek set (see for example these two xkcd comics here and here, and this story). Beyond his monumental accomplishments, his eccentricities and humor are the stuff of legend. (Like, say, the fact that he does not use email, or how he optimized the layout of his kitchen.)

As a member of a community whose life is punctuated by twice-yearly conferences, what I find most inspiring about Knuth is his dedication to perfection, whatever time it might take to achieve it.

As the well known story goes, more than forty years ago Knuth was asked to write a book about compilers. As initial drafts started to run into the thousands of pages, it was decided the “book” would become a seven-volume series, *The Art of Computer Programming*, the first three of which appeared between 1968 and 1973. An unparalleled in-depth treatment of algorithms and data structures, the books defined the field of analysis of algorithms.

At this point Knuth became frustrated with the quality of electronic typesetting systems, and decided he had to take matters into his own hands. In 1977 he started working on what would become TeX and METAFONT, a development that was completed only in 1989. Starting from scratch, he created a complete document preparation system (TeX) which became the universal standard for writing documents with mathematical content, along the way devising new algorithms for formatting paragraphs of text. To generate the fonts to go with it, he created METAFONT, a system that converts a geometric description of a character into a bit-map representation usable by TeX. (New algorithmic work arose from METAFONT too.) And since he was not satisfied with the existing tools available to write a large program involving several non-trivial algorithms, he came up with the notion of “literate programming” and wrote an environment to support it. It is really too bad that he was satisfied enough with the operating system he was using.

One now takes TeX for granted, but try to imagine a world without it. One shudders at the thought. We would probably be writing scientific articles in Word, and I would have probably spent the last month reading STOC submissions written in Comic Sans.

Knuth has made mathematical exposition his life’s work. We may never see again a work of the breadth, ambition, and success of *The Art of Computer Programming*, but as theoretical computer science broadens and deepens, it is vital that each generation cherishes the work of accumulating, elaborating, systematizing and synthesizing knowledge, so that we may preserve the unity of our field.

Don Knuth turns 70 tomorrow. I would send him my best wishes by email, but that wouldn’t work…

[This post is part of a "blogfest" conceived and coordinated by Jeff Shallit, with posts by Jeff, Scott Aaronson, Mark Chu-Carroll, David Eppstein, Bill Gasarch, Suresh Venkatasubramanian, and Doron Zeilberger.]
