(As will become clear soon, I am totally out of my depth: Readers who find glaring mistakes, and even small ones, should note them in the comments.)

In the previous post on expanders we left off with the observation that if we want to construct an expanding Cayley graph, then we need to start from a group in which every element can be obtained as a sum of terms chosen, with repetitions, from a small set of generators; this means that a constant root of the number of ways of rearranging such sums must lead to different results. The group, then, has to be non-commutative in a very strong sense.

I will describe a way in which a group used to construct expanders can be “very non-commutative.” This property is not sufficient for expansion, but it is closely related, and it will come up in the paper of Gowers that we will discuss later.

With this objective in mind, let’s see the notion of group representation. Let’s start from the simplest group, the cyclic group of $N$ elements with the operation of addition mod $N$ (or ${\mathbb Z}/N{\mathbb Z}$, as mathematicians write). One can visualize this group as $N$ points equally spaced along a circle in the plane, with element $a$ forming an angle of $2\pi a/N$ with the $x$ axis. If we want to add an element $a$ that forms an angle $2\pi a/N$ to an element $b$ that forms an angle $2\pi b/N$, that’s the same as putting the “two angles next to each other,” and we get a point that forms an angle $2\pi(a+b)/N$. More algebraically, we identify an element $a$ with the complex number $e^{2\pi i a/N}$, and we realize group addition as multiplication of complex numbers. There is, however, another way of visualizing the cyclic group on a circle. We can think of group element $a$ as the operation that takes a point on the circle and *moves* it by an angle of $2\pi a/N$. Addition of group elements now corresponds to “composition,” that is, $a+b$ corresponds to applying one rotation after the other.

In this view, element $a$ is now the function that maps the complex number $x$ into the complex number $e^{2\pi i a/N}\cdot x$.
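This correspondence is easy to check numerically. Here is a minimal Python sketch (the modulus $N=12$ is an arbitrary illustrative choice, not from the post): it maps each element $a$ to $e^{2\pi i a/N}$ and verifies that addition mod $N$ turns into multiplication of complex numbers.

```python
import cmath

N = 12  # illustrative modulus; any N works

def rep(a):
    """One-dimensional representation of Z/NZ: a -> e^{2*pi*i*a/N}."""
    return cmath.exp(2j * cmath.pi * a / N)

# Group addition mod N becomes multiplication of complex numbers.
for a in range(N):
    for b in range(N):
        assert abs(rep((a + b) % N) - rep(a) * rep(b)) < 1e-9
```

The assertion holds up to floating-point error because $e^{2\pi i a/N}\cdot e^{2\pi i b/N} = e^{2\pi i (a+b)/N}$, and the exponential is periodic with period $N$ in $a$.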

Suppose now that our group is a product of cyclic groups ${\mathbb Z}/N_1{\mathbb Z} \times \cdots \times {\mathbb Z}/N_k{\mathbb Z}$. This means that a group element is a $k$-tuple of the form $(a_1,\ldots,a_k)$, and that

$(a_1,\ldots,a_k) + (b_1,\ldots,b_k) = (a_1+b_1 \bmod N_1,\; \ldots,\; a_k+b_k \bmod N_k)$

It now makes sense to view a group element $(a_1,\ldots,a_k)$ as a “high-dimensional rotation” operation that takes in input $k$ complex numbers $(x_1,\ldots,x_k)$ and outputs the $k$ complex numbers

$(e^{2\pi i a_1/N_1}\cdot x_1,\; \ldots,\; e^{2\pi i a_k/N_k}\cdot x_k)$

If we take this view, we have, again, the property that group addition becomes composition of functions. Note, also, that the function we have associated to each group element is a very simple type of linear function: it is simply multiplication of $(x_1,\ldots,x_k)$ by a diagonal matrix that has diagonal $(e^{2\pi i a_1/N_1},\ldots,e^{2\pi i a_k/N_k})$.

Notice, also, that if $f$ is a function of the form $f(x) = A\cdot x$, where $A$ is a matrix, and $g$ is a function of the form $g(x) = B\cdot x$, where $B$ is a matrix, then $f(g(x)) = A\cdot B\cdot x$. That is, for linear functions, function composition is the same as matrix multiplication.
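To make the correspondence concrete, here is a small Python sketch (the moduli $(4,6,9)$ are an arbitrary illustrative choice). Each group element is stored as the diagonal of its matrix; since the matrices are diagonal, matrix multiplication reduces to entrywise multiplication of the diagonals, and we can check that adding group elements matches multiplying their matrices.

```python
import cmath

# Moduli of the cyclic factors Z/N1 x Z/N2 x Z/N3 (illustrative choice).
Ns = (4, 6, 9)

def rep(a):
    """Diagonal of the matrix representing group element a = (a_1, ..., a_k)."""
    return tuple(cmath.exp(2j * cmath.pi * ai / Ni) for ai, Ni in zip(a, Ns))

def add(a, b):
    """Componentwise addition, each coordinate mod its own modulus."""
    return tuple((ai + bi) % Ni for ai, bi, Ni in zip(a, b, Ns))

def matmul(d1, d2):
    """Product of two diagonal matrices = entrywise product of diagonals."""
    return tuple(x * y for x, y in zip(d1, d2))

a, b = (1, 2, 3), (3, 5, 8)
lhs = rep(add(a, b))           # matrix of the sum of the group elements
rhs = matmul(rep(a), rep(b))   # product of the two matrices
assert all(abs(x - y) < 1e-9 for x, y in zip(lhs, rhs))
```

Representing a diagonal matrix by its diagonal also makes it visible why this representation cannot work for non-Abelian groups: entrywise products commute.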

To summarize, we have started from the group ${\mathbb Z}/N_1{\mathbb Z} \times \cdots \times {\mathbb Z}/N_k{\mathbb Z}$, and we have found a way to associate a complex-valued diagonal matrix to each group element in such a way that group addition becomes matrix multiplication.

It is known that every finite Abelian group can be written as a product of cyclic groups ${\mathbb Z}/N_1{\mathbb Z} \times \cdots \times {\mathbb Z}/N_k{\mathbb Z}$, so this type of representation via diagonal matrices is possible for every finite Abelian group.

What about more general groups? If $G$ is an arbitrary finite group, it is possible to associate to each element $g$ a *block-diagonal* matrix $f(g)$ in such a way that group addition becomes matrix multiplication.

(It is common to call a group operation “multiplication” when a group is not Abelian, but for consistency with the rest of the post I will keep calling it addition.)

(By the way, it is also possible to represent *infinite* groups by associating a linear operator to each group element, but we will only discuss finite groups here.)

If the group is Abelian, then the matrices are diagonal, that is, they are block-diagonal matrices with “block size one.” So one way of quantifying “how non-Abelian” a group is, is to consider how small the blocks can be in such matrix representations. That’s the *dimension* of the representation.

Here is an example of a family of groups whose representations cannot be low-dimensional. Let $p$ be a prime (it could also be a prime power) and let us consider $2\times 2$ matrices whose entries are integers in $\{0,\ldots,p-1\}$. Let us restrict ourselves to matrices whose determinant is 1 modulo $p$, and consider the operation of matrix multiplication (where, also, all operations are mod $p$). This set of matrices forms a group, because the matrices are invertible (the determinant is non-zero) and the set contains the identity matrix and is closed under product. This group is called $SL(2,p)$. The group contains the tiny subgroup of two elements $\{I,-I\}$; in a representation of $SL(2,p)$ this shows up as a block of size 1. If we take the quotient of $SL(2,p)$ by $\{I,-I\}$, then we get another group, which is called $PSL(2,p)$.
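For small $p$ one can enumerate $SL(2,p)$ by brute force and check its size, $p(p^2-1)$, directly. A quick Python sketch ($p=5$ is an illustrative choice):

```python
from itertools import product

p = 5  # small prime, chosen so brute force is instant

# All 2x2 matrices (a, b, c, d) mod p with determinant 1.
SL2 = [(a, b, c, d)
       for a, b, c, d in product(range(p), repeat=4)
       if (a * d - b * c) % p == 1]

assert len(SL2) == p * (p * p - 1)   # |SL(2,p)| = p(p^2 - 1); here 120
assert (1, 0, 0, 1) in SL2           # the identity matrix I
assert (p - 1, 0, 0, p - 1) in SL2   # -I, i.e. (p-1)*I mod p

# PSL(2,p) identifies each M with -M, so for odd p it has half as many elements.
print(len(SL2), len(SL2) // 2)       # sizes of SL(2,5) and PSL(2,5)
```

For $p=5$ this gives $|SL(2,5)| = 120$ and $|PSL(2,5)| = 60$, matching the size formula used in the next paragraph.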

It is now a theorem of Frobenius that every nontrivial representation of $PSL(2,p)$ has dimension at least $(p-1)/2$. This is really large compared to the size of the group: $SL(2,p)$ has $p(p^2-1)$ elements, and so $PSL(2,p)$ has $p(p^2-1)/2$ elements. The dimension of the representation is thus approximately the cube root of the number of elements of the group.

Going back to representations of Abelian groups, we see that, in that case, not only did each block have size one, but the entire matrix had size at most logarithmic in the number of elements of the group. This shows that $SL(2,p)$ and $PSL(2,p)$ are, at least in this particular sense, “very non-Abelian,” and it is an encouraging sign that they may be useful in constructing expanders and other quasi-random objects.

On his way toward a question about subsets of groups not containing triples of the form $x, y, xy$, Gowers shows that *every* dense Cayley graph constructed on $PSL(2,p)$ has constant edge expansion. (By dense, we mean that the degree is $\Omega(n)$, where $n$ is the number of vertices.) This is a result that, assuming Frobenius’s theorem, has a simple proof, which I hope to understand and describe at a later point.

The celebrated *Ramanujan graphs* of Lubotzky, Phillips and Sarnak are constant-degree Cayley graphs constructed from $PGL(2,p)$, which is defined like $PSL(2,p)$ except that we put no restriction on the determinant. Their relationship between degree and eigenvalue gap is best possible. The analysis of Lubotzky et al. is, unfortunately, completely out of reach.

## 7 comments


December 9, 2006 at 3:13 am

Scott: Thanks, Luca! Greg Kuperberg has been on my case to learn some representation theory. So I bought a textbook, but gave up halfway through the first chapter. Then I read your blog post and understood everything.

One tiny correction:

> It is now a theorem of Frobenius that every representation of PSL(2,p) has dimension at least (q-1)/2.

You mean (p-1)/2.

December 10, 2006 at 3:59 am

Anonymous: So, can anyone recommend a good reference for representation theory?

December 10, 2006 at 6:59 am

Wim van Dam:

> So, can anyone recommend a good reference for representation theory?

Depending on your style I can recommend the following two books. If you like your math crisp and clear there is probably nothing better than Serre’s “Linear Representations of Finite Groups”. This is an absolute classic and it manages to explain all the basics in less than 50 pages (did I mention that it is crisp?). I’m curious if this is the book that Scott tried to read.

On the other side of the spectrum (ha!) there is “Fourier Analysis on Finite Groups and Applications” by Audrey Terras. This book describes lots of applications of representation theory that are relevant for theoretical computer science. It also tries to explain various deep results that go far beyond the proven material in the book such as the Selberg trace formula. Some people might enjoy this, while others will find it annoying. The latter should go for Serre.

December 10, 2006 at 7:10 am

Wim van Dam:

> Furthermore, for a given group, the representation is unique, up to rearranging the rows and the columns.

This doesn’t sound right. For the mod 3 group Z/3, there are (at least) three different representations:

A) All elements in Z/3 get mapped to 1

B) 0 gets mapped to 1, 1 gets mapped to exp(2 pi i/3) and 2 gets mapped to exp(2 pi i 2/3)

C) 0 gets mapped to 1, 1 gets mapped to exp(2 pi i 2/3) and 2 gets mapped to exp(2 pi i 1/3).

In all cases the elements of the group get represented by 1×1 dimensional matrices, so there is not much rearranging of the columns/rows that can be done, yet the representations are different. What did you mean to say here?

December 10, 2006 at 7:11 am

Anonymous: Another small correction:

> It is also worth noticing that, among diagonal matrices, product is commutative, and so this representation is impossible for finite groups that are not Abelian.

It’s the other way around: non-Abelian groups can have Abelian representations, but Abelian groups can’t have non-Abelian representations (f(g) and f(h) can commute even though g and h don’t, but not the other way around).

So a non-Abelian group which has a cyclic quotient can have a representation of the above form.

December 10, 2006 at 9:58 am

Eyal: Actually, for SL(2,p) more is known. Every generating set of size p^x yields an expanding Cayley graph. This holds for any x>0, and the expansion depends only on x. This follows from a result of Helfgott appearing in

http://arxiv.org/abs/math.GR/0509024

However, Helfgott’s proof uses a sum-product argument. Gowers only seems to use information on the dimensions of irreducible representations, so it’s much more general.

Building upon Helfgott’s argument, Bourgain and Gamburd prove that if, for a bounded-size set U in SL(2,p), the Cayley graph has large enough girth, then the Cayley graph is automatically an expander. This girth condition holds in particular for random subsets, and it also holds for the “Ramanujan graph” generators – no very fancy math is needed. I THINK girth is all they need – their paper is not online and I only heard a talk about it a year ago. The paper is called “New results on expanders”, and it has apparently appeared in Comptes Rendus Acad. Sci. Paris, Ser. I, 342, 2006, 717-721.

So the Ramanujan graph construction is not so inaccessible I think. If you want the *optimal* expansion constant you indeed need a strong theorem in number theory whose proof I don’t know (Deligne’s proof of some Ramanujan conjecture), but it is simple to state: it estimates the number of solutions mod p to some given system of equations very accurately. Less accurate estimates are not as hard to obtain, and still give enough to prove expansion.

The proof of expansion of these SL(2,p) results is based on two things:

(1) In any Cayley graph of a group G, the second eigenvalue appears with multiplicity which is at least the minimal dimension of an irreducible representation of G. In SL(2,p), this means that in any Cayley graph of it the second eigenvalue appears at least (p-1) times.

(2) The number of k-cycles in a graph is equal to the sum of the k-th powers of all the eigenvalues of the graph.

So if in some group you can show that there are “few” cycles of some length k, then you have shown that the sum of the k-th powers of all the eigenvalues is small. This is usually not enough – it gives you information about the average k-th power of ALL eigenvalues, while we are interested in the second eigenvalue. BUT: if the second eigenvalue appears with high enough multiplicity, then knowing the average of the k-th power of all eigenvalues does say something about the k-th power of the second eigenvalue itself.
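As a quick numerical sanity check of fact (2) above, here is a small pure-Python sketch (an illustrative aside; the 5-cycle and walk length 4 are arbitrary choices) on a graph whose adjacency eigenvalues are known in closed form: the n-cycle, with eigenvalues 2cos(2*pi*j/n). The trace of A^k, which counts closed walks of length k, matches the sum of k-th powers of the eigenvalues.

```python
import math

n, k = 5, 4  # the cycle C_5 and walk length 4 (illustrative choices)

# Adjacency matrix of the n-cycle: i ~ j iff they differ by 1 mod n.
A = [[1 if (i - j) % n in (1, n - 1) else 0 for j in range(n)]
     for i in range(n)]

def matmul(X, Y):
    """Plain n x n integer matrix product."""
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

# Trace of A^k counts closed walks of length k.
Ak = A
for _ in range(k - 1):
    Ak = matmul(Ak, A)
closed_walks = sum(Ak[i][i] for i in range(n))

# Eigenvalues of C_n are 2*cos(2*pi*j/n); sum their k-th powers.
eig_sum = sum((2 * math.cos(2 * math.pi * j / n)) ** k for j in range(n))
assert abs(eig_sum - closed_walks) < 1e-9
```

For C_5 and k = 4 both counts equal 30: each vertex has C(4,2) = 6 closed walks of length 4 (the +1 and -1 steps must cancel), and there are 5 vertices.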

That’s basically the whole proof idea in the SL(2,p) case. You need a good estimate on the number of k-cycles in the given Cayley graph, along with lower bounds. This is not trivial, but it’s doable.

This proof idea does not work for other groups. For example, the symmetric group S_n has representations of dimension n-1, which is only logarithmic in the size of the group. Still, generating sets which make an expanding Cayley graph are known. This is a result of Kassabov, and it also uses representation theory, but in a completely different way!

December 10, 2006 at 11:06 pm

Luca: Thanks Wim, I don’t know what I was thinking about the uniqueness remark.