Before trying to describe how one reasons about the eigenvalues of the adjacency matrix of a Cayley graph, I would like to describe the “easy direction” of the relationships between eigenvalues and expansion.

First, a one-minute summary of linear algebra. If $M$ is an $n \times n$ matrix, then a non-zero (column) vector $x$ is an *eigenvector* of $M$ provided that, for some scalar $\lambda$, we have $Mx = \lambda x$. If so, then $\lambda$ is an *eigenvalue* of $M$. A couple of immediate observations:

- The equation $Mx = \lambda x$ can be rewritten as $(M - \lambda I)x = 0$, so if $\lambda$ is an eigenvalue of $M$ then the columns of the matrix $M - \lambda I$ are not linearly independent, and so the determinant of $M - \lambda I$ is zero. But we can write $\det(M - \lambda I)$ as a degree-$n$ polynomial in the unknown $\lambda$, and a degree-$n$ polynomial cannot have more than $n$ roots.
So, an $n \times n$ matrix has at most $n$ eigenvalues.

- If $\lambda$ is an eigenvalue of $M$, then the set of vectors $x$ such that $Mx = \lambda x$ is a linear space. If $\lambda_1$ and $\lambda_2$ are different eigenvalues, then their respective spaces of eigenvectors are independent (they only intersect at $0$).
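These definitions are easy to check numerically. Here is a small sketch using NumPy; the particular matrix is an arbitrary choice for illustration:

```python
import numpy as np

# A small symmetric 2x2 example chosen for illustration:
# [[2, 1], [1, 2]] has characteristic polynomial (2 - t)^2 - 1,
# whose roots (the eigenvalues) are 1 and 3.
M = np.array([[2.0, 1.0], [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(M)

# Check the defining equation M x = lambda x for each pair.
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(M @ x, lam * x)

# A 2x2 matrix has at most 2 eigenvalues; here there are exactly two.
assert np.allclose(sorted(eigenvalues), [1.0, 3.0])
```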

If $M$ is a symmetric real-valued matrix, then a series of miracles (or obvious things, depending on whether you have studied this material or not) happen:

- All the roots of the polynomial $\det(M - \lambda I)$ are real; so, counting multiplicities, $M$ has $n$ real eigenvalues.
- If $\lambda$ is an eigenvalue of $M$, and it is a root of multiplicity $k$ of $\det(M - \lambda I)$, then the space of vectors $x$ such that $Mx = \lambda x$ has dimension $k$.
- If $\lambda_1 \neq \lambda_2$ are two different eigenvalues, then their respective spaces of eigenvectors are orthogonal.

From these facts, it is easy to conclude that, starting from a symmetric real matrix $M$, we can find a sequence of $n$ numbers

$\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$

and a sequence of $n$ unit vectors $v_1, \ldots, v_n$ such that the $v_i$ are orthogonal to each other and such that $Mv_i = \lambda_i v_i$.
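In NumPy, `np.linalg.eigh` computes exactly this decomposition for a symmetric matrix (with the eigenvalues in ascending rather than descending order). A quick sanity check, on a random symmetric matrix chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
M = (B + B.T) / 2            # symmetrize to get a real symmetric matrix

# eigh is specialized to symmetric matrices: it returns real
# eigenvalues (in ascending order) and orthonormal eigenvectors.
lams, V = np.linalg.eigh(M)

assert np.all(np.diff(lams) >= 0)        # eigenvalues come out sorted
assert np.allclose(V.T @ V, np.eye(5))   # the v_i are orthonormal unit vectors
for lam, v in zip(lams, V.T):
    assert np.allclose(M @ v, lam * v)   # M v_i = lambda_i v_i
```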

Now the claim is that these numbers and vectors completely describe $M$. Indeed, suppose that we are given a vector $x$ and that we want to compute $Mx$. First, we can write $x$ as a linear combination of the $v_i$:

$x = \alpha_1 v_1 + \cdots + \alpha_n v_n$

and since the $v_i$ are an orthonormal basis, we know what the coefficients $\alpha_i$ are like: each $\alpha_i$ is just the inner product of $x$ and $v_i$.

Now, by linearity, we have

$Mx = \alpha_1 M v_1 + \cdots + \alpha_n M v_n$

and, since the $v_i$ are eigenvectors,

$Mx = \alpha_1 \lambda_1 v_1 + \cdots + \alpha_n \lambda_n v_n$

So, once we express vectors in the basis defined by the $v_i$, we see that multiplication by $M$ is just the effect of multiplying coordinate $i$ by the value $\lambda_i$.
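This "diagonal" view of multiplication by $M$ is easy to verify: project $x$ onto the eigenbasis, scale coordinate $i$ by $\lambda_i$, and recombine. A sketch, with random data for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
M = (B + B.T) / 2
lams, V = np.linalg.eigh(M)   # columns of V are orthonormal eigenvectors

x = rng.standard_normal(4)
alpha = V.T @ x               # alpha_i = inner product of v_i and x
Mx = V @ (lams * alpha)       # multiply coordinate i by lambda_i, recombine

assert np.allclose(Mx, M @ x)
```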

But there is more. Recall that $\alpha_i = v_i^T x$, and so we can also write

$Mx = \lambda_1 v_1 v_1^T x + \cdots + \lambda_n v_n v_n^T x$

and so, in particular, the matrix $M$ is just

$M = \lambda_1 v_1 v_1^T + \cdots + \lambda_n v_n v_n^T$

a sum of rank-1 matrices, with the coefficients of the sum being the eigenvalues.
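The rank-1 expansion can also be checked directly, e.g.:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
M = (B + B.T) / 2
lams, V = np.linalg.eigh(M)

# Rebuild M as the sum of rank-1 matrices lambda_i v_i v_i^T.
M_rebuilt = sum(lam * np.outer(v, v) for lam, v in zip(lams, V.T))
assert np.allclose(M_rebuilt, M)
```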

Let now $A$ be the adjacency matrix of a $d$-regular undirected graph (so $A$ is real-valued and symmetric), $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ be its eigenvalues, and $v_1, \ldots, v_n$ be a corresponding set of orthonormal eigenvectors.

It is not hard to see (no, really, it’s two lines, see the bottom of page 2 of these notes) that all eigenvalues are between $-d$ and $d$. Also, $d$ is always an eigenvalue, as shown by the eigenvector $(1, \ldots, 1)$. This means that $\lambda_1 = d$ and we can take

$v_1 = \left( \frac{1}{\sqrt{n}}, \ldots, \frac{1}{\sqrt{n}} \right)$

and we have

$A = d v_1 v_1^T + \sum_{i=2}^n \lambda_i v_i v_i^T$
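For a concrete $d$-regular example, take the $n$-cycle ($d = 2$); its spectrum behaves exactly as described. A sketch, again assuming NumPy:

```python
import numpy as np

n = 8
A = np.zeros((n, n))          # adjacency matrix of the n-cycle, 2-regular
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

d = 2
lams, V = np.linalg.eigh(A)   # eigenvalues in ascending order

assert np.isclose(lams[-1], d)           # largest eigenvalue is d
assert np.all(lams >= -d - 1e-9)         # all eigenvalues lie in [-d, d]

# The cycle is connected, so the eigenvalue d is simple and its unit
# eigenvector is (up to sign) the vector (1/sqrt(n), ..., 1/sqrt(n)).
v1 = V[:, -1]
assert np.allclose(np.abs(v1), 1 / np.sqrt(n))
```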

Suppose now that all the other eigenvalues $\lambda_2, \ldots, \lambda_n$ are much smaller than $d$ in absolute value. Then, intuitively, the sum is going to be dominated by the first term, and $A$ behaves sort of like the matrix $d v_1 v_1^T$, that is, the matrix that has the value $\frac{d}{n}$ in each entry.

You may be skeptical that such an approximation makes sense, but see where this leads. If our graph is $G = (V, E)$, and $(S, V-S)$ is a cut of the set of vertices into two subsets, let $\mathbf{1}_S$ be the $n$-dimensional vector that is $1$ in the coordinates corresponding to $S$ and $0$ in the other coordinates, and define $\mathbf{1}_{V-S}$ similarly. Then the number of edges crossing the cut is precisely

$\mathbf{1}_S^T A \mathbf{1}_{V-S}$

and if we replace $A$ by the matrix that has $\frac{d}{n}$ in each coordinate we get the approximation $\frac{d}{n} \cdot |S| \cdot |V-S|$, which is the expected number of edges that would cross the cut if we had a random graph of degree $d$. This is pretty much the right result: if $\lambda_2, \ldots, \lambda_n$ are all much smaller than $d$, and if $|S|$ and $|V-S|$ are at least a constant times $n$, then the number of edges crossing the cut is indeed very close to $\frac{d}{n} \cdot |S| \cdot |V-S|$. This result is known as the *Expander Mixing Lemma*, and the proof is just a matter of doing the “obvious” calculations. (Write $\mathbf{1}_S$ and $\mathbf{1}_{V-S}$ in the eigenvector basis, distribute the product, remember that the $v_i$ are orthogonal, use Cauchy-Schwarz …)
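A case where this check is easy to do by hand is the complete graph $K_n$: it is $(n-1)$-regular and every eigenvalue other than $\lambda_1 = n-1$ equals $-1$, so the mixing-lemma estimate should be nearly exact. A quick numerical confirmation:

```python
import numpy as np

n = 20
A = np.ones((n, n)) - np.eye(n)   # complete graph K_n: (n-1)-regular
d = n - 1

S = np.zeros(n)
S[:8] = 1.0                       # indicator vector of a cut with |S| = 8
T = 1.0 - S                       # indicator vector of V - S

crossing = S @ A @ T              # exact number of edges crossing the cut
estimate = d / n * S.sum() * T.sum()

# In K_n exactly |S| * |V-S| edges cross the cut, while the estimate
# is (n-1)/n * |S| * |V-S|; the two differ by only |S| * |V-S| / n.
assert np.isclose(crossing, S.sum() * T.sum())
assert np.isclose(crossing - estimate, S.sum() * T.sum() / n)
```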

It is perhaps less intuitive that, provided that all the other eigenvalues are at least *a little bit* smaller than $d$, then a reasonably large number of edges crosses the cut. In particular, if $\lambda_2 \leq d - \epsilon$ and if $|S| \leq \frac{n}{2}$, then at least $\frac{\epsilon}{2} \cdot |S|$ edges cross the cut. That is, if all eigenvalues except the largest one are at most $d - \epsilon$, then the edge expansion of the graph is at least $\frac{\epsilon}{2}$. (It takes about five lines to prove this, see the beginning of the proof of Thm 1 in these notes.)
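On the cycle example from before, this bound can be checked numerically: there $\lambda_2 = 2\cos(2\pi/n)$, so $\epsilon = 2 - 2\cos(2\pi/n)$, and any set of at most $n/2$ vertices must have at least $\frac{\epsilon}{2}|S|$ outgoing edges. A sketch:

```python
import numpy as np

n = 8
A = np.zeros((n, n))              # adjacency matrix of the n-cycle (d = 2)
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

lams = np.linalg.eigvalsh(A)      # ascending order
d = 2
lam2 = lams[-2]                   # second-largest eigenvalue, 2*cos(2*pi/n)
eps = d - lam2

S = np.zeros(n)
S[:4] = 1.0                       # a contiguous arc with |S| = n/2
crossing = S @ A @ (1.0 - S)      # an arc has exactly 2 outgoing edges

assert np.isclose(crossing, 2.0)
assert crossing >= eps / 2 * S.sum() - 1e-9   # the eigenvalue bound holds
```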

To summarize: the adjacency matrix of an undirected graph is symmetric, and so all its eigenvalues are real. If the graph is $d$-regular, then $d$ is the largest eigenvalue, and all the others are between $-d$ and $d$. If all the others are between $-d$ and $d - \epsilon$, then the graph has edge expansion $\frac{\epsilon}{2}$ (or better).

One minor correction,

Luca: “…, and $A$ behaves sort of like the matrix $(d v_i^T v_i)$, that is, the matrix …”

$v_i$ should be $v_1$.

Great job, Luca. We are awaiting the rest of this series of posts.

I greatly enjoy your technical posts – they’re both intuitive and informative. Looking forward to more of them…

Wonderful post! Thanks a ton.