
This year is the centennial of Paul Erdős’s birth. Erdős lived most of his adult life as a traveling mathematician, “couchsurfing,” as we would say now, from place to place and from mathematical conference to mathematical conference. He wrote more than 1,500 papers with more than 500 different coauthors, introduced the probabilistic method and was the defining figure of the “Hungarian approach” to combinatorics. He died at age 83 while attending a mathematical conference.

Last year, we celebrated the centennial of Alan Turing’s birth. Turing and Erdős have become such iconic figures both for the impact of their work and for the fascinating facts of their lives. I would like to argue that the cultural archetype through which we interpret their lives is that of the saint. It is clearly that of the martyr saint in the case of Turing, while Erdős gave up material possessions and devoted his life to others, traveling everywhere and “preaching” to everybody, much in the mold of Saint Francis.

(A comparison of the Turing centennial celebrations with Erdős's, and a look at the frescoes of medieval Catholic churches, will show which kind of saint people are more interested in.)

The first step to becoming a saint of the Catholic Church is to establish that the person exhibited "heroic virtues," which is a great expression. This archetype is not restricted to religion: you see it occurring in communist propaganda (Stakhanov, Lei Feng) and in every civil rights movement.

Saints were the "celebrities" of the Middle Ages, those whose lives people liked to talk about. But contemporary celebrities come from a totally different archetype, that of the Greek god. Greek (and Roman) gods were petty and jealous, they cheated on their spouses, and they were terrible parents, but there were good stories to be told about them. We don't want (at least, I don't) to live the life of a saint, but thinking about them is certainly inspirational, and it makes us think that if someone can be so much better than us, maybe we can be a little better ourselves in the practice of "virtues," whatever this may mean to us. And we don't admire gods, but, well, it's probably fun to be one.

As usual, I have lost track of what I was trying to say, but I think it speaks well of the academic community that we are more interested in saints than in gods. I will close by saying that my favorite saint of complexity theory is Avi Wigderson; I will keep to myself who my favorite god of complexity theory is, and I will leave it to the readers to contribute their picks.

Back in August, Boaz Barak and Moses Charikar organized a two-day course on additive combinatorics for computer scientists in Princeton. Boaz and Avi Wigderson spoke on sum-product theorems and their applications, and I spoke on techniques in the proofs of Szemerédi's theorem and their applications. As an Australian model might say, that's interesting!

Videos of the talks are now online. The quality of the audio and video is quite good; you'll have to decide for yourself on the quality of the lectures. The schedule of the event was grueling, and in my last two lectures (on Gowers uniformity and applications) I am not very lucid. In the earlier lectures I am merely sleep deprived; I can be seen falling asleep in front of the board a few times. Boaz's and Avi's lectures, however, are flawless.

The first talk of the morning is by Terry Tao. Tim Gowers, in his essay The Two Cultures of Mathematics, explains the difference between depth and generality in combinatorics versus other areas of pure math. In combinatorics, one is unlikely to find deep definitions (have you ever tried to understand the statements of the conjectures in the Langlands program?) and very general theorems. It is more usual to find generality in the form of principles that the practitioners of the subject are aware of and that are exemplified in the proofs of certain theorems. (I hope I have not distorted and oversimplified Gowers's point too much.)

Tao’s talk very explicitly points out the principle that emerges from his work on arithmetic progressions. The idea is that one can often define notions of pseudorandomness and structure such that one has a theorem of the following kind:

  • Dichotomy: if an object is not pseudorandom, then it has a large structured component.

For example, in the setting of looking for arithmetic progressions of length 3, a subset of {1,…,N} is "structured" if it has a large Fourier coefficient, and it is pseudorandom if it has approximately the same number of length-3 progressions as a random subset of {1,…,N} of the same density.
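
To see why these two notions match up, here is the standard Fourier computation behind the dichotomy. This is a sketch: I work in the cyclic group $\mathbb{Z}_N$ (with $N$ an odd prime) rather than in {1,…,N}, and I normalize the Fourier transform as $\widehat{1_A}(r) = \frac{1}{N}\sum_{x} 1_A(x) e^{-2\pi i rx/N}$. Counting progressions $(x, x+d, x+2d)$, one gets

$$\sum_{x,d \in \mathbb{Z}_N} 1_A(x)\, 1_A(x+d)\, 1_A(x+2d) \;=\; N^2 \sum_{r \in \mathbb{Z}_N} \widehat{1_A}(r)^2\, \widehat{1_A}(-2r).$$

The $r=0$ term contributes $\delta^3 N^2$, which is the count expected of a random set of density $\delta = |A|/N$, while by Cauchy-Schwarz and Parseval the remaining terms are at most $\max_{r \neq 0} |\widehat{1_A}(r)| \cdot \delta N^2$ in absolute value. So if all the nontrivial Fourier coefficients are small, the set has nearly the random number of progressions; contrapositively, a set with the wrong count of progressions must have a large Fourier coefficient, which is exactly the dichotomy above.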

By iterative arguments, one then has theorems of the form

  • Decomposition: every object can be written as a “sum” of a pseudorandom part and a structured part

This is very important in the proof of the Green-Tao theorem on arithmetic progressions in the primes, but it can be seen, implicitly, in the Szemerédi regularity lemma and in all proofs of Szemerédi's theorem.

The proof typically goes by: if our object is pseudorandom, we are done; otherwise we find a large structured “piece.” We “subtract” it. If what remains is pseudorandom, we are fine, otherwise we subtract another structured part, and so on.
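
In pseudocode, the iteration looks something like the following minimal sketch, where `is_pseudorandom` and `find_structured_piece` are hypothetical placeholders for whatever notions a particular theorem uses:

```python
def decompose(f, is_pseudorandom, find_structured_piece):
    """Generic structure-vs-pseudorandomness decomposition (a sketch).

    The two function arguments are placeholders: each theorem
    instantiates them with its own notions, e.g. "no large Fourier
    coefficient" / "the character of a large Fourier coefficient".
    """
    structured_pieces = []
    remainder = f
    while not is_pseudorandom(remainder):
        # The Dichotomy guarantees that a structured piece exists here.
        piece = find_structured_piece(remainder)
        structured_pieces.append(piece)
        # "Subtract" the structured piece and test what remains.
        remainder = remainder - piece
    # f = sum of the structured pieces + a pseudorandom remainder.
    return structured_pieces, remainder
```

In the actual proofs there is no explicit pseudorandomness test; termination comes from an "energy increment" argument: each subtraction increases a bounded quantity (such as a sum of squared Fourier coefficients) by a fixed amount, so the loop can run only a bounded number of times.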

The advantage of this type of decomposition (whether it is done explicitly or is implicit in the proof) is that one can use techniques from analysis and probability to analyse the pseudorandom part and (depending on the definition of "structure") techniques from algebra and geometry to analyse the structured part. This is how the proof of the Green-Tao theorem ends up using analysis, ergodic theory, combinatorics, and so on.

Often, one can use geometry and algebra only provided that the object is “exactly” structured, while results of the above type may only give “approximate” structure. To bridge this gap one employs results of the type:

  • Rigidity: an object that is “approximately structured” is “close” to an object that is structured

These are results in the spirit of property testing. (And he mentions property testing in his talk.)

This is all quite abstract, but reading papers in the area (or approaching new problems) with this point of view in mind is very helpful. He also proceeds to summarize the proof of the Green-Tao theorem in terms of the above perspective.

By the way, earlier in the talk, speaking about the general importance of understanding pseudorandomness, Tao mentions expander graphs and the P versus BPP question. (As context, not as specific problems that he is going to work on. For the time being, we are safe.)

Overall, the talk is a model of clarity and insight.

After the break, Avi Wigderson gives his talk on P, NP and Mathematics. Yesterday, Avi told me that, of all the material covered in his 50-page proceedings paper, he had decided to talk about pseudorandomness and derandomization, from the perspective that randomness seems to help in computation, but complexity theory suggests that it does not. I despaired, because this is exactly my talk, but luckily Avi's talk has a somewhat complementary emphasis on topics. (So I won't have to spend the next two days remaking my slides, and I can go see the Prado tomorrow.)

Avi explains the "hardness versus randomness" principle in terms that could not be clearer, and one can see heads nodding as he goes along (nodding in agreement, not nodding off). He ends up with the following question about the complexity of the permanent: is there a linear transformation L of n×n matrices into k×k matrices such that (i) for every matrix M, the determinant of L(M) equals the permanent of M, and (ii) k is only polynomial in n? (The answer is supposed to be NO.) From known results one can derive that k need not be more than exp(O(n)), and an Ω(n²) lower bound is also known.
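
In symbols (my transcription of the question, not Avi's exact phrasing): the question asks whether there exists a linear map $L : \mathbb{C}^{n \times n} \to \mathbb{C}^{k \times k}$ such that

$$\det\big(L(M)\big) \;=\; \operatorname{per}(M) \quad \text{for all } M \in \mathbb{C}^{n \times n}, \qquad \text{with } k = \operatorname{poly}(n).$$

The smallest $k$ for which such an $L$ exists is usually called the determinantal complexity of the permanent, and the conjecture that it grows superpolynomially is one form of Valiant's permanent-versus-determinant problem; the bounds quoted above say that it sits somewhere between $\Omega(n^2)$ and $\exp(O(n))$.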

This is to say that one can work on complexity theory without having to learn about such things as algorithms, Turing machines and so on, and that purely algebraic questions, closely related to complexity questions, are completely open.

With the exception of STOC in Montreal, which I could not attend because of a visa problem, I have been to every STOC and every FOCS since STOC'97. The official (and historical) purpose of scientific conferences has something to do with the rapid dissemination of research results. In reality, most papers presented at STOC/FOCS have long been disseminated by the time the conference takes place. There are many other reasons, however, why I like going to these conferences and why I think they are important.

For one thing, they keep our community together. Over time, the scope of theoretical computer science increases, as we become interested in more and more subjects, like error-correcting codes, quantum computing, game theory, and so on. I think it is very important that we have a single place to go to, where results on all these subjects are talked about, and where people can notice connections between what they do and the new things they hear about. Avi has already discussed very eloquently the need to keep in touch with what other theoreticians are doing, so I will refrain from repeating his points. I just want to say: can we have single sessions in STOC too?

This STOC in Seattle had one of the strongest programs among conferences that I can remember. There were perhaps four papers, each of which would easily have been considered the best paper at some earlier FOCS/STOC. Indeed, the quality of most papers was outstanding.

Most talks were very good too. James Lee says that the reason he goes to talks is that often the speaker will say one sentence that gives some insight that would have been very hard to extract from the paper. That sentence, he says, is what makes the talk worthwhile. He is quite right, and, indeed, someone who went to several talks and who writes often on the web should collect these sentences and put them online. The 20-minute format helps. It is very hard to prepare such a short talk, but the time constraint puts some pressure to cut to the chase and just say what the paper is about and what new ideas are in the proof. Among several other talks that I enjoyed (such as Anup Rao's and Irit Dinur's), I liked Guy Kindler's talk on this very technical result. I can't say how well he explained the problem, because I was already familiar with it, but I think his explanation was very clear. With a few minutes to spare, he started to talk about their very technical proof and gave an explanation for "why" the result is true that was very clear and that I had not gotten from reading the paper and thinking about related questions.

Normally, another reason why I enjoy going to FOCS/STOC is to see my far-away friends that I get to meet only once in a while, and to catch up with what is new in life and in theory. In this conference, however, I went to talks in almost all the sessions, except when jet lag made me oversleep. (Hence I missed the session on zero knowledge and Russell Impagliazzo’s invited talk, which were highly praised by those who attended.) In fact, a few times, there were two talks that I was interested in that were scheduled against each other. Bad parallel sessions, bad, bad, parallel sessions!

By the way, I am aware of a problem here. If there are single sessions then fewer papers can be accepted, and if I thought that almost all papers were so good, how can I support a system that would have led to the rejection of some of them? Well, by now my readers know that coherence is not the strong point of my posts. I don't know, maybe we should have a four-day conference. Or maybe the good rejected papers would be resubmitted to FOCS in Berkeley. Perhaps, by random fluctuations, the quality of the other submitted papers will not be so high, and things will even out.

Which reminds me: people who attended the business meeting at this STOC and at the previous FOCS might have some doubts, but I hear that, yes, there will really be a FOCS’06 in Berkeley.

Avi Wigderson has just posted the paper based on his forthcoming ICM plenary lecture.

I suggest you stop doing whatever you are doing (I mean, come on, you are reading a blog, you can’t be too busy…) and read it now.

In August, Avi will bring the lieta novella (the glad tidings) of the P versus NP question (and of complexity theory in general) to, literally, thousands of mathematicians: ICM 2002 in Beijing was attended by more than 4,000 people, and many of them went for the conference, not just for the food.

This will be an important turning point in how other mathematicians look at what we do. Even now, among the latest generation of pure mathematicians, those who like problem-solving more than theory-building (and who naturally gravitate toward analysis, combinatorics, analytic number theory and probability) can be very sophisticated in their appreciation of complexity, and, I think, they see it as one of those parts of pure mathematics that one should know something about, even if one does not work on it.

When Terence Tao (analyst, number theorist, combinatorialist of "primes in arithmetic progression" fame, and likely 2006 Fields Medalist) visited Berkeley, we talked for some time and he asked a few questions. "When you have a subset of the hypercube whose Fourier spectrum is concentrated on a few Fourier coefficients, is there a way to find them without looking at the whole hypercube?" Yes, it's the Goldreich-Levin theorem (he wanted to see the proof; we went through it in about five minutes). "If P ≠ NP, would this settle the question of whether there are problems that are hard on average?" It's unlikely. "Is it plausible to use Fourier-analytic techniques to study the circuit complexity of functions?" Not for general circuits, because of Razborov-Rudich (he had heard of it before, and he wanted to know what exactly the statement of Razborov-Rudich is).
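
For readers who have not seen it, here is a toy sketch of the bucketing idea behind the Goldreich-Levin theorem, in its Kushilevitz-Mansour formulation for Boolean functions; the sample count and the threshold below are illustrative placeholders, not tuned for any actual guarantee.

```python
import random

def chi(s, x):
    """The character chi_s(x) = (-1)^<s,x> on the hypercube."""
    return -1 if sum(a & b for a, b in zip(s, x)) % 2 else 1

def bucket_weight(f, n, prefix, samples=2000):
    """Estimate the total Fourier weight of all coefficients S whose
    first len(prefix) bits equal prefix, via the identity
      sum_t hat_f(prefix,t)^2 = E[f(xz) f(x'z) chi_prefix(x) chi_prefix(x')],
    with x, x' independent over the first len(prefix) bits and z shared."""
    k = len(prefix)
    total = 0.0
    for _ in range(samples):
        x  = [random.randint(0, 1) for _ in range(k)]
        x2 = [random.randint(0, 1) for _ in range(k)]
        z  = [random.randint(0, 1) for _ in range(n - k)]
        total += f(x + z) * f(x2 + z) * chi(prefix, x) * chi(prefix, x2)
    return total / samples

def goldreich_levin(f, n, theta):
    """Return (candidates for) all S with |hat_f(S)| >= theta, querying
    f on a number of points polynomial in n and 1/theta, not on all 2^n."""
    live = [[]]  # prefixes whose bucket still has large estimated weight
    for _ in range(n):
        live = [p + [b] for p in live for b in (0, 1)
                if bucket_weight(f, n, p + [b]) >= theta**2 / 2]
        # By Parseval, at most O(1/theta^2) buckets survive each round.
    return live

# Example: f is the parity of bits 0 and 2 on {0,1}^3, i.e. chi_{101}.
f = lambda x: chi([1, 0, 1], x)
print(goldreich_levin(f, n=3, theta=0.5))  # expect [[1, 0, 1]]
```

The actual theorem is robust to noisy access to f; the point Tao was asking about, in any case, is that the large coefficients are found without ever writing down all $2^n$ values of $f$.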

If you have enough time to read my Beijing restaurant reviews, you also have enough time to read this article by 1998 Fields Medalist Tim Gowers, titled "The Importance of Mathematics." The P versus NP question figures quite prominently. While you are at it, you may also try to find the video of his talk on the same topic at the Clay Institute web site (the link from Gowers's page is broken). [Update: the talk is available at http://claymath.msri.org/gowers2000.mov] And, I don't mean to boss you around, but you should also buy and read this book, a masterpiece of mathematical exposition for the general public. (Circuit complexity and NP-completeness are featured.)

