Today, after a lecture in the spectral graph theory boot camp at the Simons Institute, I was asked what the *expander mixing lemma* is like in graphs that are not regular.

I don’t know if I will have time to return to this tomorrow, so here is a quick answer.

First, for context, here is the expander mixing lemma in regular graphs. Say that $G=(V,E)$ is a $d$-regular undirected graph on $n$ vertices, and $A$ is its adjacency matrix. Then let the eigenvalues of the normalized matrix $\frac 1d A$ be

$$1 = \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq -1$$

We are interested in graphs for which all the eigenvalues are small in absolute value, except $\lambda_1$, that is, if we define

$$\lambda := \max_{i=2,\ldots,n} |\lambda_i| = \max \{ |\lambda_2| , |\lambda_n| \}$$

we are interested in graphs for which $\lambda$ is small. The expander mixing lemma is the fact that for every two disjoint sets $S$ and $T$ of vertices we have

$$\left| E(S,T) - \frac dn \cdot |S| \cdot |T| \right| \leq \lambda d \sqrt{|S| \cdot |T|} \qquad (1)$$

The inequality (1) says that, if $\lambda$ is small, then the number of edges between every two large sets of vertices is almost determined just by the size of the sets: it is equal to the expected number of edges between the two sets in a random $d$-regular graph, up to an error term that depends on $\lambda$.
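As a quick numerical sanity check (not part of the argument; the graph $K_8$ and the sets $S,T$ below are just an arbitrary small example of mine, assuming numpy is available):

```python
import numpy as np

# Check inequality (1) on the complete graph K_8, which is 7-regular
# and has lambda = 1/7.
n = 8
d = n - 1
A = np.ones((n, n)) - np.eye(n)          # adjacency matrix of K_8

eigs = np.sort(np.linalg.eigvalsh(A / d))[::-1]   # 1 = lam_1 >= ... >= lam_n
lam = max(abs(eigs[1]), abs(eigs[-1]))            # max_{i >= 2} |lam_i|

S, T = [0, 1, 2], [4, 5, 6]              # two disjoint sets of vertices
E_ST = sum(A[u, v] for u in S for v in T)         # number of S-T edges
lhs = abs(E_ST - d * len(S) * len(T) / n)
rhs = lam * d * np.sqrt(len(S) * len(T))
assert lhs <= rhs + 1e-9
```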

For the proof, we observe that, if we call $J$ the matrix that has ones everywhere, then

$$\left\| \frac 1d A - \frac 1n J \right\| \leq \lambda$$

and then we substitute $x := 1_S$ and $y := 1_T$ in the inequality $\left| x^T \left( \frac 1d A - \frac 1n J \right) y \right| \leq \lambda \| x\| \cdot \|y \|$ implied by the above norm bound, and do the calculations.
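Continuing the toy example of mine from above, both the norm bound and the substitution step can be checked numerically:

```python
import numpy as np

# On K_8: the spectral norm of A/d - J/n equals lambda, and therefore
# |1_S^T (A/d - J/n) 1_T| <= lambda * sqrt(|S| |T|).
n = 8
d = n - 1
A = np.ones((n, n)) - np.eye(n)
J = np.ones((n, n))

eigs = np.sort(np.linalg.eigvalsh(A / d))[::-1]
lam = max(abs(eigs[1]), abs(eigs[-1]))
assert np.isclose(np.linalg.norm(A / d - J / n, 2), lam)   # spectral norm

one_S = np.array([1.0] * 3 + [0.0] * 5)          # indicator of S = {0,1,2}
one_T = np.array([0.0] * 4 + [1.0] * 3 + [0.0])  # indicator of T = {4,5,6}
bilinear = abs(one_S @ (A / d - J / n) @ one_T)
assert bilinear <= lam * np.sqrt(3 * 3) + 1e-9
```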

In the case of an irregular undirected graph $G=(V,E)$, we are going to consider the normalized adjacency matrix $M := D^{-1/2} A D^{-1/2}$, where $A$ is the adjacency matrix of $G$ and $D$ is the diagonal matrix such that $D_{v,v} = d_v$, where $d_v$ is the degree of $v$. As in the regular case, the eigenvalues of the normalized adjacency matrix satisfy

$$1 = \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq -1$$

Let us define

$$\lambda := \max_{i=2,\ldots,n} |\lambda_i| = \max \{ |\lambda_2| , |\lambda_n| \}$$

the second largest eigenvalue in absolute value of $M$.

We will need two more definitions: for a set of vertices $S$, its volume is defined as

$$vol(S) := \sum_{v \in S} d_v$$

the sum of the degrees of its vertices; we also call $\bar d := \frac{vol(V)}{n}$ the average degree, so that $vol(V) = \bar d \cdot n$. Now we have

**Lemma 1 (Expander Mixing Lemma)** *For every two disjoint sets of vertices $S$, $T$, we have*

$$\left| E(S,T) - \frac{vol(S) \cdot vol(T)}{vol(V)} \right| \leq \lambda \sqrt{vol(S) \cdot vol(T)} \qquad (2)$$

So, once again, we have that the number of edges between $S$ and $T$ is what one would expect in a random graph in which the edge $\{u,v\}$ exists with probability $\frac{d_u \cdot d_v}{vol(V)}$, up to an error that depends on $\lambda$.
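Here is a numerical check of Lemma 1 on a small irregular graph of my choosing (a triangle with a pendant vertex attached), again assuming numpy:

```python
import numpy as np

# A toy irregular graph: triangle {0,1,2} plus a pendant vertex 3
# attached to vertex 0.  Degrees are (3, 2, 2, 1).
edges = [(0, 1), (1, 2), (0, 2), (0, 3)]
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1

deg = A.sum(axis=1)
M = np.diag(deg ** -0.5) @ A @ np.diag(deg ** -0.5)   # normalized adjacency

eigs = np.sort(np.linalg.eigvalsh(M))[::-1]
lam = max(abs(eigs[1]), abs(eigs[-1]))

S, T = [1], [2, 3]                        # two disjoint sets
E_ST = sum(A[u, v] for u in S for v in T)
lhs = abs(E_ST - deg[S].sum() * deg[T].sum() / deg.sum())
assert lhs <= lam * np.sqrt(deg[S].sum() * deg[T].sum()) + 1e-9
```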

To prove the lemma, we prove the following claim:

$$\left\| M - \frac 1{vol(V)} \bar J \right\| \leq \lambda \qquad (3)$$

where $\bar J$ is the matrix such that $\bar J_{u,v} = \sqrt{d_u \cdot d_v}$, and then the lemma will follow by just substituting $x := D^{1/2} 1_S$ and $y := D^{1/2} 1_T$ in the above expression.
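To spell out why the substitution works (on my toy graph from before): with $x = D^{1/2} 1_S$ and $y = D^{1/2} 1_T$ one gets $x^T M y = E(S,T)$ and $x^T \bar J y = vol(S) \cdot vol(T)$, while $\|x\| = \sqrt{vol(S)}$ and $\|y\| = \sqrt{vol(T)}$:

```python
import numpy as np

# Triangle {0,1,2} with pendant vertex 3 again.
edges = [(0, 1), (1, 2), (0, 2), (0, 3)]
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
deg = A.sum(axis=1)
M = np.diag(deg ** -0.5) @ A @ np.diag(deg ** -0.5)
Jbar = np.sqrt(np.outer(deg, deg))        # Jbar[u,v] = sqrt(d_u * d_v)

S, T = [1], [2, 3]
one_S = np.isin(np.arange(n), S).astype(float)
one_T = np.isin(np.arange(n), T).astype(float)
x = np.sqrt(deg) * one_S                  # D^{1/2} 1_S
y = np.sqrt(deg) * one_T                  # D^{1/2} 1_T

E_ST = sum(A[u, v] for u in S for v in T)
assert np.isclose(x @ M @ y, E_ST)                           # = E(S,T)
assert np.isclose(x @ Jbar @ y, deg[S].sum() * deg[T].sum()) # = vol(S) vol(T)
```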

The proof would be a bit cleaner if we had defined the normalized adjacency matrix to be an operator over a vector space with a different type of inner product from the standard Euclidean one (so that $\sqrt{vol(S)}$ would become the norm of $1_S$), but this requires some set-up and explanations, so we will just carry on with the above definitions.

Write the eigenvalue decomposition of the normalized adjacency matrix $M$: it is going to be

$$M = \sum_{i=1}^n \lambda_i x_i x_i^T$$

where the $x_i$ are an orthonormal basis of eigenvectors.

To calculate the eigenvector of $\lambda_1$ we see that

$$\lambda_1 = \max_x \frac{x^T M x}{x^T x} = \max_y \frac{y^T A y}{y^T D y} = \max_y \frac{2 \sum_{\{u,v\} \in E} y_u y_v}{\sum_v d_v y_v^2} \leq 1$$

where we use the change of variable $x := D^{1/2} y$, the last inequality follows from $2 y_u y_v \leq y_u^2 + y_v^2$, and the maximum is attained for $y = \mathbf{1}$, so that $x_1$ is the unit vector parallel to $D^{1/2} \mathbf{1}$, that is, $x_1 = \frac 1{\sqrt{vol(V)}} D^{1/2} \mathbf{1}$ and $\lambda_1 = 1$.
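This description of the top eigenvector is easy to verify numerically (same toy irregular graph as before):

```python
import numpy as np

# Verify that x_1 = D^{1/2} 1 / sqrt(vol(V)) is a unit eigenvector of M
# with eigenvalue 1, on the triangle-plus-pendant graph.
edges = [(0, 1), (1, 2), (0, 2), (0, 3)]
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
deg = A.sum(axis=1)
M = np.diag(deg ** -0.5) @ A @ np.diag(deg ** -0.5)

x1 = np.sqrt(deg) / np.sqrt(deg.sum())    # D^{1/2} 1 / sqrt(vol(V))
assert np.isclose(np.linalg.norm(x1), 1.0)   # unit vector
assert np.allclose(M @ x1, x1)               # M x_1 = 1 * x_1
```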

Now, to compute $\lambda$ we just have to compute the spectral norm of $M - \lambda_1 x_1 x_1^T = \sum_{i=2}^n \lambda_i x_i x_i^T$, so

$$\lambda = \left\| M - x_1 x_1^T \right\| = \left\| M - \frac 1{vol(V)} D^{1/2} \mathbf{1} \mathbf{1}^T D^{1/2} \right\| = \left\| M - \frac 1{vol(V)} D^{1/2} J D^{1/2} \right\|$$

where the last step uses $x_1 = \frac 1{\sqrt{vol(V)}} D^{1/2} \mathbf{1}$ and $J = \mathbf{1} \mathbf{1}^T$. It remains to observe that $\left( D^{1/2} J D^{1/2} \right)_{u,v} = \sqrt{d_u \cdot d_v}$, and that therefore

$$D^{1/2} J D^{1/2} = \bar J$$

which proves Equation (3) and thus the lemma.
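And, to close the circle, the claim itself can be checked numerically on the same toy graph (in fact the norm is exactly $\lambda$):

```python
import numpy as np

# Check the claim || M - (1/vol(V)) Jbar || = lambda, where
# Jbar[u,v] = sqrt(d_u * d_v), on the triangle-plus-pendant graph.
edges = [(0, 1), (1, 2), (0, 2), (0, 3)]
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
deg = A.sum(axis=1)
M = np.diag(deg ** -0.5) @ A @ np.diag(deg ** -0.5)

eigs = np.sort(np.linalg.eigvalsh(M))[::-1]
lam = max(abs(eigs[1]), abs(eigs[-1]))

Jbar = np.sqrt(np.outer(deg, deg))        # Jbar = D^{1/2} J D^{1/2}
assert np.isclose(np.linalg.norm(M - Jbar / deg.sum(), 2), lam)
```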
