I just returned from the 2011 joint meeting of the AMS and the MAA (the Mathematical Association of America), which took place in New Orleans last week. Because I was teaching on Thursday afternoon, I was only able to get to New Orleans on Friday evening, and I basically only attended my session on Saturday afternoon and an earlier session on Saturday morning. In particular, I missed the happy events of Assaf Naor receiving the Bôcher Memorial Prize, in part for his work on metric embeddings and approximation algorithms, and of Alexander Lubotzky giving a series of three lectures on expander graphs.
This was my second time attending a large mathematics conference, after the International Congress of Mathematicians (ICM) in Madrid in 2006. Both have thousands of attendees.
It is often noted that we do not have such large conferences in computer science, and it is debated whether we should. The FCRC (Federated Computing Research Conference) was started with the goal of becoming our own broad and large conference.
I thought that ICM was awesome. The mathematical community makes such a big deal of it that they are usually able to have heads of state attend the opening ceremony and deliver the prizes; in China in 2002, the opening ceremony was in the Great Hall of the People, the buses with the attendees crossed town with a police escort, and then-president Jiang Zemin was present. The king of Spain came to the opening ceremony in Madrid, and the president of India was present in 2010. Perhaps equally importantly, the opening ceremony is when the Fields Medals and the Nevanlinna Prize are announced, after having been kept secret up to the point of the ceremony.
Also, again because the conference is such a big deal, the speakers try very hard to give good and understandable talks, especially in the plenary sessions, and I was able to understand something even in the few specialized session talks that I attended.
The AMS/MAA meetings, being annual, and not being associated with major prizes such as the Fields Medals, don’t feel “special” in the same way. Also, the program at the ICM is very simple: plenary talks in the morning (and possibly early afternoon), with no competition, and specialized talks in the afternoon, in parallel, with the parallel sessions organized sensibly. At the AMS/MAA meeting, instead, the schedule was a jumbled mess: even the “plenary” speakers were speaking while specialized sessions were going on, and sessions on the same topic (e.g., Graph Theory I and Graph Theory II) could be scheduled at the same time.
New Orleans, in any case, is a perfect venue for a conference, with the food, the entertainment, the reasonably priced flights, and the reasonably priced hotels. We should really have a STOC/FOCS there in the near future.
My session was on “current events”, a series started in 2003 in which speakers talk about a recent development, not due to them, of special significance, and give a talk aimed at non-specialists. Each talk has a 25-minute first part, meant to introduce the problem/theory, a break, and then a 25-minute second part. I spoke about the Unique Games Conjecture, and I felt bad that I couldn’t even state the conjecture in the first 25 minutes, because for it to make sense I first had to review the concepts of NP-completeness, approximation algorithms, and PCPs, but my troubles were nothing compared to what other speakers had to go through. Overall, everybody seemed to stay awake, and the talk seemed to be well received.
The last talk in the session was by David Nadler on the Fundamental Lemma, which had been a long-standing open question in the Langlands Program, a mysterious and highly prized area of mathematics. When Ngo Bao Chau announced a proof of the Fundamental Lemma two years ago, the news generated a lot of excitement, and his Fields Medal last year was considered a given. The typical comment I heard about it was “I have no idea what this is about, but it is a very big deal.” I will note that Thomas Hales has written an 18-page paper titled A statement of the fundamental lemma, and that’s actually all the paper does: it moves from one definition to the next, leaving behind any bewildered reader who has no idea where the definitions are coming from.
Nadler started his talk by admitting that, in 50 minutes, he would not be able to state the Fundamental Lemma; that’s because, in order to state it and make sense of it, one should have some understanding of the theory of endoscopic groups, and to see why one would want such a theory, one needs some sense of Langlands duality, which is a big piece of the Langlands program, understanding which would take one very far.
Instead, he developed some very basic examples and gave a sense of what the Fundamental Lemma says about them, thus providing intuition for what it says in the general case. He also had the incredible energy to write a 51-page exposition in which these elementary examples are more fully developed, and in which a more or less complete statement of the Fundamental Lemma and an idea of the techniques going into the proof are given.
Unfortunately I did not take notes and I soon forgot everything, but the story seemed to start from representation theory, that is, from the study of how one can map each element $a$ of a group $G$ to an invertible matrix $\rho(a)$ in, say, $\mathbb{C}^{n \times n}$, in such a way that the group product becomes matrix multiplication, that is, $\rho(a \cdot b) = \rho(a) \cdot \rho(b)$. If $G$ is Abelian and finite (or at least “locally compact”), we can take $n=1$, that is, have all matrices be just scalars; the representations are then the characters of the group, and we get Fourier analysis. When the group is non-Abelian things are much more complicated, but understanding representations is still a good way to understand random processes on the group, Cayley graphs defined over the group, and so on. In fact, I will probably have to talk a bit about representation theory later on in my course on expanders this term.
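To make the Abelian case concrete, here is a small Python sketch (my own illustration of the standard facts, not anything from the talk): the characters of the cyclic group $\mathbb{Z}/n$ are its one-dimensional representations, and expanding a function in the character basis is exactly the discrete Fourier transform.

```python
import cmath

# Characters of the cyclic group Z/n: each k in {0, ..., n-1} gives a
# one-dimensional representation chi_k(a) = exp(2*pi*i*k*a/n).
def character(n, k):
    return lambda a: cmath.exp(2j * cmath.pi * k * a / n)

n = 6
chi = character(n, 2)

# The homomorphism property rho(a + b) = rho(a) * rho(b):
a, b = 3, 4
assert abs(chi((a + b) % n) - chi(a) * chi(b)) < 1e-12

# Expanding a function f : Z/n -> C in the character basis is exactly
# the discrete Fourier transform...
f = [1.0, 2.0, 0.0, -1.0, 0.5, 3.0]
fhat = [sum(f[x] * character(n, k)(x).conjugate() for x in range(n)) / n
        for k in range(n)]

# ... and summing the character expansion recovers f.
recovered = [sum(fhat[k] * character(n, k)(x) for k in range(n))
             for x in range(n)]
assert all(abs(f[x] - recovered[x].real) < 1e-9 for x in range(n))
```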
So we want to understand the representations of a given group and, hopefully, to explicitly construct them. One way to construct representations is to first get a representation of a subgroup and then extend it to a representation of the whole group. In the finite case, the Frobenius reciprocity formula helps in getting representations of the whole group from representations of a subgroup.
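As a toy illustration of induction and Frobenius reciprocity in the finite case (my own example, using the standard induced-character formula), one can induce a character from the cyclic subgroup $A_3$ up to $S_3$ and check the reciprocity identity $\langle \mathrm{Ind}\,\psi, \chi \rangle_G = \langle \psi, \mathrm{Res}\,\chi \rangle_H$ numerically:

```python
from itertools import permutations
import cmath

# S3 as tuples (images of 0, 1, 2); H = A3 is the cyclic subgroup of
# even permutations, isomorphic to Z/3.
def compose(p, q):           # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))
H = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

# A nontrivial character psi of H: psi(c^m) = omega^m for the 3-cycle c.
omega = cmath.exp(2j * cmath.pi / 3)
power = {(0, 1, 2): 0, (1, 2, 0): 1, (2, 0, 1): 2}
psi = {h: omega ** power[h] for h in H}

# Induced character: (Ind psi)(g) = (1/|H|) * sum of psi(x^-1 g x)
# over all x in G such that x^-1 g x lands in H.
def induced(g):
    total = 0
    for x in G:
        y = compose(compose(inverse(x), g), x)
        if y in psi:
            total += psi[y]
    return total / len(H)

# The induced representation has dimension [G:H] * dim(psi) = 2.
assert abs(induced((0, 1, 2)) - 2) < 1e-9

# Frobenius reciprocity against the trivial character of S3:
# <Ind psi, 1>_G = <psi, Res 1>_H (both sides are 0 for this psi).
lhs = sum(induced(g) for g in G) / len(G)
rhs = sum(psi[h] for h in H) / len(H)
assert abs(lhs - rhs) < 1e-9
```

Inducing the nontrivial character of $A_3$ this way produces the character $(2, -1, -1, 0, 0, 0)$ of the two-dimensional irreducible representation of $S_3$.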
There is a whole stack of generalizations of the Frobenius reciprocity formula that work on more general groups: the Selberg Trace Formula, the Arthur-Selberg Trace Formula, and a version conjectured by Langlands and established by Arthur. In this last formula, which I don’t know how to write down and whose meaning I don’t know, the right-hand side has a “stable” part, which is nice to deal with, and a “twisted” part, which is difficult to deal with. Unless I am completely off, the Fundamental Lemma says that the twisted part can be rewritten as if it were the stable part for a different group, so the whole computation can be done in terms of stable parts, which, so to speak, makes things easier.
One thing that sounded very interesting about the Langlands program, though I am not sure how directly related it is to the Fundamental Lemma, is that it provides a version of Pontryagin duality for non-commutative groups. If $G$ is an Abelian group, then its characters also form a group $\hat{G}$, which is called the Pontryagin dual of $G$. Confusingly, in the finite case $\hat{G}$ is actually isomorphic to $G$, but in general it is not: for example, the dual of $\mathbb{Z}$ is the circle group $\mathbb{R}/\mathbb{Z}$. However, the dual of $\hat{G}$ is always (isomorphic to) $G$, and this is one of the key reasons why the Fourier transform is so useful. There is a theory of Langlands duality for general groups which, when applied to Abelian groups, more or less gives Pontryagin duality, but which always makes sense, even when the group is not Abelian and there is no Fourier analysis.
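In the finite Abelian case the duality can be checked by hand; here is a small sketch (my own illustration) showing that the characters of $\mathbb{Z}/n$ themselves form a group isomorphic to $\mathbb{Z}/n$ under pointwise multiplication, and that distinct group elements give distinct evaluation characters on the dual, which is the content of double duality:

```python
import cmath

n = 5

# chi_k(a) = exp(2*pi*i*k*a/n): the characters of the cyclic group Z/n
def chi(k, a):
    return cmath.exp(2j * cmath.pi * k * a / n)

# The pointwise product chi_j * chi_k equals chi_{j+k mod n}, so the
# characters form a group isomorphic to Z/n itself: the dual of Z/n is Z/n.
j, k, a = 2, 4, 3
assert abs(chi(j, a) * chi(k, a) - chi((j + k) % n, a)) < 1e-12

# Double duality: each element a defines an evaluation character
# ev_a(chi_k) = chi_k(a) on the dual group, and a -> ev_a is injective,
# since distinct elements give distinct rows of the character table.
rows = [tuple((round(chi(k, a).real, 6), round(chi(k, a).imag, 6))
              for k in range(n))
        for a in range(n)]
assert len(set(rows)) == n
```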
If I had a decade or so of free time, I would really like to learn more about this kind of math!