Let $G$ be a $d$-regular graph, and let $d = \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ be the eigenvalues of the adjacency matrix $A$ of $G$, counted with multiplicities and sorted in descending order.
How good can the spectral expansion of $G$ be?
1. Simple Bounds
The simplest bound comes from a trace method. We have

$$ \mathrm{trace}(A^2) = \sum_i \lambda_i^2 $$

by using one definition of the trace and

$$ \mathrm{trace}(A^2) = \sum_v (A^2)_{v,v} \geq d\cdot n $$

using the other definition and observing that $(A^2)_{v,v}$ counts the paths that go from $v$ to $v$ in two steps, of which there are at least $d$: follow an edge to a neighbor of $v$, then follow the same edge back. (There could be more if $G$ has multiple edges or self-loops.)

So we have

$$ d^2 + \sum_{i \geq 2} \lambda_i^2 \geq d\cdot n $$

and so

$$ \max \{ \lambda_2 , |\lambda_n| \} \geq \sqrt{ \frac{d\cdot (n-d)}{n-1} } $$
The condition that $n$ be sufficiently larger than $d$ is necessary to get lower bounds of order $\sqrt d$; in the clique $K_{d+1}$, for example, we have $\lambda_2 = -1$ and $\lambda_n = -1$.
A trace argument does not give us a lower bound on $\lambda_2$, and in fact it is possible to have $\lambda_2 = 0$ and $n = 2d$, for example in the complete bipartite graph $K_{d,d}$.
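Here is a minimal numerical check of the trace bound above (just an illustration of mine, not part of the argument; it assumes numpy is available, and uses the 3-dimensional hypercube as a convenient $d$-regular example):

```python
# Sanity check of the trace bound on the 3-dimensional hypercube (3-regular, 8 vertices).
import numpy as np
from itertools import product

n, d = 8, 3
verts = list(product([0, 1], repeat=3))
A = np.zeros((n, n))
for i, u in enumerate(verts):
    for j, v in enumerate(verts):
        if sum(s != t for s, t in zip(u, v)) == 1:   # adjacent iff they differ in one coordinate
            A[i, j] = 1

eigs = np.sort(np.linalg.eigvalsh(A))[::-1]          # lambda_1 >= ... >= lambda_n
print(np.sum(eigs ** 2), np.trace(A @ A), d * n)     # both sides of the trace identity, and d*n
print(max(eigs[1], abs(eigs[-1])), np.sqrt(d * (n - d) / (n - 1)))   # the resulting lower bound
```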
If the diameter of $G$ is at least 4, it is easy to see that $\lambda_2 \geq \sqrt d$. Let $a,b$ be two vertices at distance 4. Define a vector $x$ as follows: $x_a = 1$, $x_v = 1/\sqrt d$ if $v$ is a neighbor of $a$, $x_b = -1$, $x_v = -1/\sqrt d$ if $v$ is a neighbor of $b$, and $x_v = 0$ otherwise. Note that there cannot be any edge between a neighbor of $a$ and a neighbor of $b$. Then we see that $\|x\|^2 = 4$, that $x^T A x \geq 4\sqrt d$ (because there are $2d$ edges, each counted twice, that give a contribution of $1/\sqrt d$ to $x^TAx$) and that $x$ is orthogonal to $(1,\ldots,1)$, so that $\lambda_2 \geq \frac{x^TAx}{x^Tx} \geq \sqrt d$.
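As a quick sanity check of this construction (again just an illustration of mine, assuming numpy), one can take the 10-cycle, a 2-regular graph of diameter 5, and verify that the test vector has Rayleigh quotient at least $\sqrt d$:

```python
# The distance-4 test vector on the 10-cycle (2-regular, diameter 5).
import numpy as np

n, d = 10, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

a, b = 0, 4                                        # two vertices at distance 4
x = np.zeros(n)
x[a], x[b] = 1.0, -1.0
x[[1, n - 1]] = 1 / np.sqrt(d)                     # the neighbors of a
x[[3, 5]] = -1 / np.sqrt(d)                        # the neighbors of b

print(x.sum())                                     # 0: x is orthogonal to the all-ones vector
print(x @ A @ x / (x @ x), np.sqrt(d))             # Rayleigh quotient of x, versus sqrt(d)
print(np.sort(np.linalg.eigvalsh(A))[-2])          # the actual lambda_2 of the 10-cycle
```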
2. Nilli’s Proof of the Alon-Boppana Theorem
Nilli’s proof of the Alon-Boppana theorem gives

$$ \lambda_2 \geq 2\sqrt{d-1} - O\left( \frac{d}{D} \right) $$

where $D$ is the diameter of $G$. This means that if one has a family of (constant) degree-$d$ graphs, and every graph in the family satisfies $\lambda_2 \leq \lambda$, then one must have $\lambda \geq 2\sqrt{d-1} - o(1)$, where the $o(1)$ goes to zero as the number of vertices of the graphs in the family grows. This is why families of Ramanujan graphs, in which $\lambda_2 \leq 2\sqrt{d-1}$, are special, and so hard to construct, or even to prove existence of.
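To get a feel for how tight this is, here is a small numerical experiment (my own, assuming numpy and networkx are available): the second eigenvalue of random 3-regular graphs typically comes out close to $2\sqrt{d-1} \approx 2.83$.

```python
# lambda_2 of random 3-regular graphs, compared with the Ramanujan bound 2*sqrt(d-1).
import numpy as np
import networkx as nx

d = 3
for n in [100, 500, 2000]:
    G = nx.random_regular_graph(d, n, seed=0)
    eigs = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))
    print(n, eigs[-2], 2 * np.sqrt(d - 1))
```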
Friedman proves a stronger bound, in which the error term goes down with the square of the diameter. Friedman’s proof is the one presented in the Hoory-Linial-Wigderson survey. I like Nilli’s proof, even if it is a bit messier than Friedman’s, because it starts off with something that clearly is going to work, but the first two or three ways you try to establish the bound don’t work (believe me, I tried, because I didn’t see why some steps in the proof had to be that way), until eventually you find the right way to break up the estimate and it works.
So here is Nilli’s proof.
We are going to use essentially the same vector that we used to analyze the spectrum of the infinite tree, although the analysis will be a bit different.
Let $a,b$ be two vertices in $G$ at distance $D$ (for example, two vertices realizing the diameter), and call $k := \lfloor D/2 \rfloor - 2$, so that $2k+4 \leq D$. Let $a'$ be a neighbor of $a$. We say that the distance of a vertex $v$ from $\{a,a'\}$ is the smaller of the shortest path distance from $v$ to $a$ and the shortest path distance from $v$ to $a'$.
We construct a vector $x$ as follows:

- $x_v = 0$ if $v$ is at distance greater than $k$ from $\{a,a'\}$
- $x_v = (d-1)^{-i/2}$ if $v$ is at distance $i \leq k$ from $\{a,a'\}$

Note that this is more or less the same vector we constructed in the case of the tree. The reason for talking about the distance from two vertices instead of one is that we want to say that every vertex is adjacent to at most $d-1$ vertices whose value of $x$ is strictly smaller; in the case of a tree, the root is exceptional because it has $d$ neighbors whose value of $x$ is strictly smaller. There will be a step in the proof in which this choice really makes a difference.
We claim that

$$ \frac{x^TAx}{x^Tx} \geq 2\sqrt{d-1} - \frac{d-1}{k+1} \qquad (1) $$
It turns out that, in order to prove (1), it is easier to reason about the Laplacian matrix than about the adjacency matrix. Define $L := dI - A$ to be the (non-normalized) Laplacian of $G$. We have the following nice expression for the quadratic form of $L$:

$$ x^T L x = \sum_{\{u,v\} \in E} (x_u - x_v)^2 $$
For every vertex $v$ let us call $S_v$ (for smaller) the set of neighbors $u$ of $v$ such that $x_u < x_v$. We always have $|S_v| \leq d-1$: every vertex at distance $i \geq 1$ from $\{a,a'\}$ has at least one neighbor at distance $i-1$, whose value is strictly larger, and $a$ and $a'$ are adjacent to each other and have equal value. Let us call $V_i$ the set of vertices at distance exactly $i$ from $\{a,a'\}$, so that $x^Tx = \sum_{i=0}^k |V_i|\cdot (d-1)^{-i}$.
Now we do the calculations. Charging each edge whose endpoints have distinct values to the endpoint with the larger value, we get

$$ x^T L x = \sum_{\{u,v\}\in E} (x_u-x_v)^2 = \sum_v \sum_{u \in S_v} (x_v - x_u)^2 $$

For $v \in V_i$ with $i < k$, every $u \in S_v$ lies in $V_{i+1}$, and for $v \in V_k$ every $u \in S_v$ has $x_u = 0$, so

$$ x^T L x \leq \sum_{i=0}^{k-1} |V_i| \cdot (d-1) \cdot \left( (d-1)^{-i/2} - (d-1)^{-(i+1)/2} \right)^2 + |V_k| \cdot (d-1) \cdot (d-1)^{-k} $$

$$ = (d - 2\sqrt{d-1}) \cdot \sum_{i=0}^{k-1} |V_i| (d-1)^{-i} + (d-1)\cdot |V_k| (d-1)^{-k} \leq (d-2\sqrt{d-1})\cdot x^Tx + (d-1) \cdot |V_k|(d-1)^{-k} $$

using $d - 2\sqrt{d-1} = (\sqrt{d-1}-1)^2 \geq 0$. Finally,

$$ |V_k| (d-1)^{-k} \leq \frac 1{k+1} \sum_{i=0}^{k} |V_i| (d-1)^{-i} = \frac{x^Tx}{k+1} $$

because $|V_i|(d-1)^{-i}$ decreases (or at least does not increase) for increasing $i$: every vertex of $V_{i+1}$ has a neighbor in $V_i$, and every vertex of $V_i$ has at most $d-1$ neighbors in $V_{i+1}$, so $|V_{i+1}| \leq (d-1)\cdot |V_i|$.

Putting everything together we have

$$ x^T L x \leq \left( d - 2\sqrt{d-1} + \frac{d-1}{k+1} \right) \cdot x^Tx $$

and so, since $x^TAx = d\cdot x^Tx - x^TLx$,

$$ \frac{x^TAx}{x^Tx} \geq 2\sqrt{d-1} - \frac{d-1}{k+1} $$

which is (1).
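To see the construction in action, here is a small computational sketch (mine, not from the proof; it assumes numpy, the helper name `nilli_vector` is my own, and the circular-ladder graph is just a convenient 3-regular example with a large diameter). It builds $x$ from the BFS distances to $\{a,a'\}$ and compares its Rayleigh quotient with the bound in (1).

```python
# Build the test vector from BFS distances to {a, a'} and check claim (1).
import numpy as np
from collections import deque

def nilli_vector(A, a, a_prime, k, d):
    """Weights (d-1)^(-i/2) on vertices at distance i <= k from {a, a'}, and 0 elsewhere."""
    n = A.shape[0]
    dist = np.full(n, -1)
    dist[[a, a_prime]] = 0
    queue = deque([a, a_prime])
    while queue:                                   # BFS started from both "root" vertices
        u = queue.popleft()
        for v in np.flatnonzero(A[u]):
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                queue.append(v)
    return np.where((dist >= 0) & (dist <= k), (d - 1.0) ** (-dist / 2.0), 0.0)

# Example graph: the circular ladder C_20 x K_2, which is 3-regular.
m, d, k = 20, 3, 4
n = 2 * m
A = np.zeros((n, n))
for i in range(m):
    for u, v in [(2 * i, 2 * i + 1),                   # "rung" edge
                 (2 * i, 2 * ((i + 1) % m)),           # outer cycle edge
                 (2 * i + 1, 2 * ((i + 1) % m) + 1)]:  # inner cycle edge
        A[u, v] = A[v, u] = 1

x = nilli_vector(A, 0, 1, k, d)                    # a = 0, and its neighbor a' = 1
print(x @ A @ x / (x @ x))                         # Rayleigh quotient of x
print(2 * np.sqrt(d - 1) - (d - 1) / (k + 1))      # the lower bound claimed in (1)
```

By itself this only says something about the Rayleigh quotient of one vector, which, not being orthogonal to the all-ones vector, only lower-bounds $\lambda_1$; the point of the next step is to get a two-dimensional space of such vectors.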
Now we are finally almost done: define a vector $y$ with the same construction we did for $x$, but using the set $\{b,b'\}$ as the reference for the distance, where $b'$ is a neighbor of $b$. We then have

$$ \frac{y^TAy}{y^Ty} \geq 2\sqrt{d-1} - \frac{d-1}{k+1} $$
It is clearly not possible for a vertex at distance at most $k$ from $\{a,a'\}$ to also be at distance at most $k$ from $\{b,b'\}$, otherwise we would have a path of length at most $2k+2 < D$ from $a$ to $b$, so the vectors $x$ and $y$ are non-zero on disjoint subsets of coordinates, and hence are orthogonal.
But we can say more: we also have $x^T A y = 0$, because there cannot be an edge $(u,v)$ such that both $x_u \neq 0$ and $y_v \neq 0$, because otherwise we would have a path of length at most $2k+3 < D$ from $a$ to $b$.
This means that if we take any linear combination $z = \alpha x + \beta y$ of $x$ and $y$ we have

$$ \frac{z^T L z}{z^Tz} = \frac{\alpha^2\, x^TLx + \beta^2\, y^TLy}{\alpha^2\, x^Tx + \beta^2\, y^Ty} \leq d - 2\sqrt{d-1} + \frac{d-1}{k+1} $$

(using $x^Ty = 0$ and $x^TLy = d\, x^Ty - x^TAy = 0$), so we have found a two-dimensional space of vectors whose Rayleigh quotient with respect to $L$ is at most the above expression; since the second smallest eigenvalue of $L$, which is $d - \lambda_2$, is at most the largest Rayleigh quotient over any two-dimensional space, we get

$$ \lambda_2 \geq 2\sqrt{d-1} - \frac{d-1}{k+1} $$
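Continuing the computational sketch from before (same circular-ladder graph `A` and the same `nilli_vector` helper; again this is just my illustration), one can check the two orthogonality conditions and the resulting lower bound on $\lambda_2$:

```python
# Continuing the sketch above: the second vector y, the orthogonality conditions,
# and the resulting lower bound on lambda_2.
b, b_prime = 2 * (m // 2), 2 * (m // 2) + 1        # a rung on the opposite side of the ladder
y = nilli_vector(A, b, b_prime, k, d)

print(x @ y, x @ A @ y)                            # both 0: disjoint supports, no edges between them
bound = min(x @ A @ x / (x @ x), y @ A @ y / (y @ y))
lambda2 = np.sort(np.linalg.eigvalsh(A))[-2]
print(bound, lambda2)                              # lambda_2 is at least the bound
```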
What just happened? The basic intuition is that, as in the infinite tree, we set weights to go down by a factor of $\frac 1{\sqrt{d-1}}$ every time we get away from the “root,” and we would like to argue that, for every node $v$, we have

$$ \sum_{u : \{u,v\} \in E} x_u \geq 2\sqrt{d-1}\cdot x_v $$

by reasoning that one of the neighbors must be closer to the root, and hence have value $\sqrt{d-1}\cdot x_v$, while the other $d-1$ neighbors are all at least $\frac{x_v}{\sqrt{d-1}}$. This bound fails at the “leaves” of the construction, which is fine because they account for a small portion of $\|x\|^2$, but it also fails at the root, which is not adjacent to any larger vertex. In the case of the infinite tree this is still ok, because the root also accounts for only a small portion of $\|x\|^2$; in general graphs, however, the “root” vertex might account for a very large fraction of $\|x\|^2$.
Indeed, the root contributes 1 to $\|x\|^2$, and each set $V_i$ contributes $|V_i|\cdot (d-1)^{-i}$. If the size of $V_i$ grows much more slowly than $(d-1)^i$, then the contribution of the root to $\|x\|^2$ is too large and we have a problem. In this case, however, for many levels $i$ there have to be many vertices in $V_i$ that have fewer than $d-1$ edges going forward to $V_{i+1}$, and in that case, for many vertices $v$, we will have that $\sum_{u : \{u,v\}\in E} x_u$ is much more than $2\sqrt{d-1}\cdot x_v$.
Although it seems hopeless to balance this argument and charge the weight of each edge $(u,v)$ in just the right way to $u$ and to $v$, the calculation with the Laplacian manages to do that automatically.
3. A More Sophisticated “Trace” Method
The Hoory-Linial-Wigderson survey also gives a very clean and conceptual proof that $\max\{ \lambda_2, |\lambda_n| \} \geq 2\sqrt{d-1} - o(1)$ via a trace argument. I am not sure who was the first to come up with this idea.
Let us pick two vertices $a,b$ at distance $D$, and call $e_a$ and $e_b$ the corresponding indicator vectors, and $x := e_a - e_b$. Fix any integer $k < D/2$. Then $A^k e_a$ and $A^k e_b$ are orthogonal vectors, because the coordinates on which $A^k e_a$ is nonzero correspond to vertices at distance at most $k$ from $a$ and the coordinates on which $A^k e_b$ is nonzero correspond to vertices at distance at most $k$ from $b$, and these conditions cannot be simultaneously satisfied. This means that $e_a^T A^{2k} e_b = 0$.
Since $x$ is orthogonal to $(1,\ldots,1)$, we have, calling $\lambda := \max\{\lambda_2, |\lambda_n|\}$,

$$ x^T A^{2k} x \leq \lambda^{2k} \cdot \|x\|^2 = 2 \lambda^{2k} $$

but we also have

$$ x^T A^{2k} x = (A^{2k})_{a,a} + (A^{2k})_{b,b} - 2 (A^{2k})_{a,b} = (A^{2k})_{a,a} + (A^{2k})_{b,b} $$
Now, $(A^{2k})_{a,a}$ is the number of walks of length $2k$ in $G$ that start at $a$ and get back to $a$. In every $d$-regular graph, and for every start vertex, the number of such walks is at least the corresponding number in the infinite $d$-regular tree. This is clear if $G$ is a Cayley graph; it takes a few minutes (at least I had to think about it for a bit) to see that it holds in every graph.
Ok, then, what is the number of closed walks of length $2k$ in the infinite $d$-regular tree? This is a long story, but it is a very well-studied question and there is a bound (not the tightest known, but sufficient for our purposes) giving

$$ (A^{2k})_{a,a} \geq \frac 1{k+1} {2k \choose k} \cdot (d-1)^k \geq \frac{ (2\sqrt{d-1})^{2k} }{(k+1)(2k+1)} $$

so,

$$ \lambda \geq 2\sqrt{d-1} \cdot \left( (k+1)(2k+1) \right)^{-\frac 1{2k}} = 2\sqrt{d-1}\cdot \left( 1 - O\left( \frac{\log k}{k} \right) \right) $$
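Here is a small numerical check of the walk-counting bound (my own illustration, assuming numpy; the circular-ladder graph is again just a convenient 3-regular example), together with the behavior of the correction factor as $k$ grows:

```python
# Closed walks of length 2k at a vertex of a 3-regular graph, versus the
# Catalan-type lower bound C_k * (d-1)^k, plus the correction factor in the final bound.
import numpy as np
from math import comb

m, d = 21, 3                                       # circular ladder C_21 x K_2, 3-regular
n = 2 * m
A = np.zeros((n, n))
for i in range(m):
    for u, v in [(2 * i, 2 * i + 1),
                 (2 * i, 2 * ((i + 1) % m)),
                 (2 * i + 1, 2 * ((i + 1) % m) + 1)]:
        A[u, v] = A[v, u] = 1

for k in [2, 4, 6, 8]:
    walks = np.linalg.matrix_power(A, 2 * k)[0, 0]        # closed walks of length 2k at vertex 0
    catalan = comb(2 * k, k) // (k + 1)                   # k-th Catalan number
    print(k, walks, catalan * (d - 1) ** k)               # walks >= C_k * (d-1)^k

for k in [5, 50, 500]:                             # ((k+1)(2k+1))^(-1/(2k)) tends to 1, so the
    print(k, 2 * np.sqrt(d - 1) * ((k + 1) * (2 * k + 1)) ** (-1 / (2 * k)))   # bound tends to 2*sqrt(d-1)
```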
I put “trace” in quotes in the title of this section, but one can turn the argument into an honest-to-God application of the trace method, although there is no gain in doing so. Pick an even number $2k$. We can say that

$$ \mathrm{trace}(A^{2k}) = d^{2k} + \sum_{i\geq 2} \lambda_i^{2k} \leq d^{2k} + (n-1)\cdot \lambda^{2k} $$

and

$$ \mathrm{trace}(A^{2k}) = \sum_v (A^{2k})_{v,v} \geq n \cdot \frac{(2\sqrt{d-1})^{2k}}{(k+1)(2k+1)} $$

which gives

$$ \lambda \geq 2\sqrt{d-1}\cdot \left( 1 - O\left( \frac{\log k}{k}\right)\right) $$

for $k$ up to about $\frac 12 \log_{d-1} n$ (so that the $d^{2k}$ term is negligible), which is implied by the bound based on diameter that we proved above, since the diameter of a $d$-regular graph is always at least about $\log_{d-1} n$.
Nice writeup. By the way, A. Nilli is a pseudonym for Noga Alon. (One of Noga’s daughters is named Nilli.)
— Ravi Boppana
In the simple bound section, in the last para, it seems that x^T A x can be >= 4 (instead of = 4) since there is also a contribution from edges between two neighbors of a, or between two neighbors of b?
Yes, it’s at least $4\sqrt{d}$, I corrected it.
Ravi, I went on for 1385 words without revealing Noga’s secret identity and you give it away like this!