CS294 Lecture 3: Cheeger Inequalities

In which we generalize the notion of normalized Laplacian to irregular graphs, we extend the basic spectral graph theory results from last lecture to irregular graphs, and we prove the easy direction of Cheeger’s inequalities.

1. Irregular Graphs

Let {G=(V,E)} be an undirected graph, not necessarily regular. We will assume that every vertex has non-zero degree. We would like to define a normalized Laplacian matrix associated to {G} so that the properties we proved last time are true: that the multiplicity of 0 as an eigenvalue is equal to the number of connected components of {G}, that the largest eigenvalue is at most 2, and that it is 2 if and only if (a connected component of) the graph is bipartite.
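As a concrete preview, here is a minimal numpy sketch (the specific toy graph, and the use of the standard choice {L = I - D^{-1/2} A D^{-1/2}} as the normalized Laplacian, are my own illustration, not a substitute for the derivation in the lecture): on a small irregular graph with two connected components, the multiplicity of the eigenvalue 0 comes out to 2, and all eigenvalues lie in {[0,2]}.

```python
# A minimal sketch (my own example, not from the lecture): the standard normalized
# Laplacian for a possibly irregular graph with degree matrix D is
# L = I - D^{-1/2} A D^{-1/2}. We check that the multiplicity of the eigenvalue 0
# equals the number of connected components, and that all eigenvalues lie in [0, 2].
import numpy as np

# Two components: a path on {0,1,2} and an edge {3,4}; degrees are irregular.
edges = [(0, 1), (1, 2), (3, 4)]
n = 5
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt

eigs = np.linalg.eigvalsh(L)
print(np.round(eigs, 6))            # eigenvalues in [0, 2]
print(np.sum(np.isclose(eigs, 0)))  # 2, the number of connected components
```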

Continue reading

CS294 Lecture 2: Basics of Spectral Graph Theory

In which we introduce the Laplacian matrix and we prove our first results in spectral graph theory.

1. The Basics of Spectral Graph Theory

Given an undirected graph {G= (V,E)}, the approach of spectral graph theory is to associate a symmetric real-valued matrix to {G}, and to relate the eigenvalues of the matrix to combinatorial properties of {G}.

For the sake of this lecture, we will restrict ourselves to the case in which {G} is a {d}-regular graph, and we will then see how to extend our results to apply to irregular graphs as well.

The most natural matrix to associate to {G} is the adjacency matrix {A} such that {A_{i,j} = 1} if {\{ i,j\} \in E} and {A_{i,j} = 0} otherwise. In the second part of the course, in which we will study expander graphs, the adjacency matrix will indeed be the most convenient matrix to work with. For the sake of the algorithms that we will analyze in the first part of the course, however, a slight variation called the normalized Laplacian is more convenient.

Continue reading

CS294 Lecture 1: Introduction

In which we describe what this course is about.

1. Overview

This class is about applications of linear algebra to graph theory and to graph algorithms. In the finite-dimensional case, linear algebra deals with vectors and matrices, and with a number of useful concepts and algorithms, such as determinants, eigenvalues, eigenvectors, and solutions to systems of linear equations.

The application to graph theory and graph algorithms comes from associating, in a natural way, a matrix to a graph {G=(V,E)}, and then interpreting the above concepts and algorithms in graph-theoretic language. The most natural representation of a graph as a matrix is via the {|V| \times |V|} adjacency matrix of a graph, and certain related matrices, such as the Laplacian and the normalized Laplacian, will be our main focus. We can think of {|V|}-dimensional Boolean vectors as representing a partition of the vertices, that is, a cut in the graph, and we can think of arbitrary vectors as fractional cuts. From this point of view, eigenvalues are the optima of continuous relaxations of certain cut problems, the corresponding eigenvectors are optimal solutions, and connections between spectrum and cut structures are given by rounding algorithms converting fractional solutions into integral ones. Flow problems are dual to cut problems, so one would expect linear algebraic techniques to be helpful to find flows in networks: this is the case, via the theory of electrical flows, which can be found as solutions to linear systems.
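To make the "Boolean vectors as cuts" point of view concrete, here is a small numpy sketch (the example graph is my own): for the unnormalized Laplacian {L = D - A} one has {x^T L x = \sum_{\{u,v\} \in E} (x_u - x_v)^2}, so the 0/1 indicator vector of a set {S} evaluates to the number of edges crossing the cut {(S, V \setminus S)}, and arbitrary real vectors act as fractional cuts.

```python
# A small illustration (my own sketch, not from the notes): for the unnormalized
# Laplacian L = D - A we have x^T L x = sum over edges (x_u - x_v)^2, so the 0/1
# indicator vector of a set S reads off the number of edges cut by (S, V \ S).
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a 4-cycle plus a chord
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
L = np.diag(A.sum(axis=1)) - A

S = {0, 1}
x = np.array([1.0 if v in S else 0.0 for v in range(n)])

cut_edges = sum(1 for u, v in edges if (u in S) != (v in S))
print(x @ L @ x, cut_edges)   # both equal 3
```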

The course can be roughly subdivided into three parts: in the first part of the course we will study spectral graph algorithms, that is, graph algorithms that make use of eigenvalues and eigenvectors of the normalized Laplacian of the given graph. In the second part of the course we will look at constructions of expander graphs, and their applications. In the third part of the course, we will look at fast algorithms for solving systems of linear equations of the form {L {\bf x} = {\bf b}}, where {L} is the Laplacian of a graph, their applications to finding electrical flows, and the applications of electrical flows to solving the max flow problem.

Continue reading

CS294 Lecture 0: Linear Algebra Review

In which we review linear algebra prerequisites.

The following background from linear algebra will be sufficient for the sake of this course: to know what an eigenvalue and an eigenvector are, to know that real symmetric matrices have real eigenvalues and admit an orthonormal basis of real eigenvectors, and to know the variational characterization of eigenvalues.

1. Basic Definitions

If {x=a+ib} is a complex number, then we let {\bar x = a- ib} denote its conjugate. Note that a complex number {x} is real if and only if {x = \bar x}. If {M \in {\mathbb C}^{m \times n}} is a matrix, then {M^*} denotes the conjugate transpose of {M}, that is, {(M^*)_{i,j} = \overline{M_{j,i}}}. If the entries of {M} are real, then {M^* = M^T}, where {M^T} is the transpose of {M}, that is, the matrix such that {(M^T)_{i,j} = M_{j,i}}.

We say that a matrix {M} is Hermitian if {M = M^*}. In particular, real symmetric matrices are Hermitian.

If {{\bf v},{\bf w} \in {\mathbb C}^n} are two vectors, then their inner product is defined as

\displaystyle  \langle {\bf v},{\bf w} \rangle := {\bf v}^* {\bf w} = \sum_i \overline{v_i} \cdot w_i \ \ \ \ \ (1)

Notice that, by definition, we have {\langle {\bf v}, {\bf w} \rangle = (\langle {\bf w}, {\bf v} \rangle)^*} and {\langle {\bf v}, {\bf v} \rangle = || {\bf v} ||^2}. Note also that, for two matrices {A,B}, we have {(A \cdot B)^* = B^* \cdot A^*}, and that for every matrix {M} and every two vectors {{\bf x}}, {{\bf y}}, we have

\displaystyle  \langle M {\bf x} , {\bf y} \rangle = {\bf x}^* M^* {\bf y} = \langle {\bf x} , M^* {\bf y} \rangle
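These identities are easy to sanity-check numerically; here is a quick numpy sketch (the random test vectors and matrices are arbitrary choices of mine), using the convention that the inner product is conjugate-linear in the first argument, which is what np.vdot computes.

```python
# A quick numerical sanity check (my own sketch) of the inner product identities.
import numpy as np

def inner(v, w):
    return np.vdot(v, w)           # np.vdot conjugates its first argument

rng = np.random.default_rng(0)
n = 4
x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = rng.normal(size=n) + 1j * rng.normal(size=n)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

print(np.isclose(inner(x, y), np.conj(inner(y, x))))            # <x,y> = (<y,x>)^*
print(np.isclose(inner(x, x), np.linalg.norm(x) ** 2))          # <x,x> = ||x||^2
print(np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T))   # (AB)^* = B^* A^*
print(np.isclose(inner(A @ x, y), inner(x, A.conj().T @ y)))    # <Ax,y> = <x, A^* y>
```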

If {M\in {\mathbb C}^{n \times n}} is a square matrix, {\lambda \in {\mathbb C}} is a scalar, {{\bf v} \in {\mathbb C}^n - \{ {\bf {0}} \}} is a non-zero vector and we have

\displaystyle   M {\bf v} = \lambda {\bf v} \ \ \ \ \ (2)

then we say that {\lambda} is an eigenvalue of {M} and that {{\bf v}} is an eigenvector of {M} corresponding to the eigenvalue {\lambda}.

2. The Spectral Theorem

We want to prove

Theorem 1 (Spectral Theorem) Let {M\in {\mathbb R}^{n\times n}} be a symmetric matrix with real-valued entries. Then there are {n} real numbers (not necessarily distinct) {\lambda_1,\ldots,\lambda_n} and {n} orthonormal real vectors {{\bf x}_1,\ldots,{\bf x}_n}, {{\bf x}_i \in {\mathbb R}^n}, such that {{\bf x}_i} is an eigenvector of {\lambda_i}.
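This is exactly the guarantee behind numpy.linalg.eigh; as a quick sketch (the random symmetric matrix below is just a test case of mine), one can check that the returned eigenvectors are orthonormal and that {M = \sum_i \lambda_i {\bf x}_i {\bf x}_i^T}.

```python
# A minimal check (my own sketch) of the spectral theorem via numpy.linalg.eigh.
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5))
M = (B + B.T) / 2                                # a real symmetric matrix

lam, X = np.linalg.eigh(M)                       # lam ascending; columns of X are eigenvectors
print(np.allclose(X.T @ X, np.eye(5)))           # the eigenvectors are orthonormal
print(np.allclose(M, X @ np.diag(lam) @ X.T))    # spectral decomposition of M
```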

Assuming the fundamental theorem of algebra (that every non-constant polynomial has a complex root) and basic properties of the determinant, the cleanest proof of the spectral theorem is to proceed by induction on {n}, and to show that {M} must have a real eigenvalue {\lambda_1} with a real eigenvector {{\bf v}_1}, and to show that {M} maps vectors orthogonal to {{\bf v}_1} to vectors orthogonal to {{\bf v}_1}. Then one applies the inductive hypothesis to {M} restricted to the {(n-1)}-dimensional space of vectors orthogonal to {{\bf v}_1} and one recovers the remaining {(n-1)} eigenvalues and eigenvectors.

The cleanest way to formalize the above proof is to give all definitions and results in terms of linear operators {T: X \rightarrow X} where {X} is an arbitrary vector space over the reals. This way, however, we would be giving several definitions that we would never use in the future, so, instead, the inductive proof will use a somewhat inelegant change of basis to pass from {M} to an {(n-1) \times (n-1)} matrix {M'}.

We begin by showing that a real symmetric matrix has real eigenvalues and eigenvectors.

Theorem 2 If {M\in {\mathbb R}^{n\times n}} is symmetric, then there is a real eigenvalue {\lambda \in {\mathbb R}} and a real eigenvector {{\bf v} \in {\mathbb R}^n} such that {M {\bf v} = \lambda {\bf v}}.

We begin by noting that every matrix has a complex eigenvalue.

Lemma 3 For every matrix {M\in {\mathbb C}^{n \times n}}, there is an eigenvalue {\lambda \in {\mathbb C}} and an eigenvector {{\bf v} \in {\mathbb C}^n} such that {M {\bf v} = \lambda {\bf v}}.

Proof: Note that {\lambda} is an eigenvalue for {M} if and only if

\displaystyle  \exists {\bf x} \neq {\bf 0}. \ ( M - \lambda I ) {\bf x} = {\bf 0}

which is true if and only if the rows of {M-\lambda I} are not linearly independent, which is true if and only if

\displaystyle  \det (M - \lambda I ) = 0

Now note that the mapping {t \rightarrow \det (M - t I )} is a univariate polynomial of degree {n} in {t}, and so it must have a root {\lambda} by the fundamental theorem of algebra. \Box
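A small numerical illustration of this identification (my own sketch; I use a symmetric matrix and a fixed small {n} only to keep the roots real and the arithmetic tame): interpolate the degree-{n} polynomial {t \mapsto \det(tI - M)} from {n+1} sample points and compare its roots with the eigenvalues computed directly.

```python
# The roots of the characteristic polynomial det(tI - M) coincide with the
# eigenvalues of M; a small numerical check (my own sketch).
import numpy as np

rng = np.random.default_rng(2)
n = 4
C = rng.normal(size=(n, n))
M = (C + C.T) / 2                     # symmetric, so the eigenvalues are real

# Recover det(tI - M), a monic degree-n polynomial, from n+1 sample points.
ts = np.arange(n + 1, dtype=float)
dets = [np.linalg.det(t * np.eye(n) - M) for t in ts]
coeffs = np.polyfit(ts, dets, n)

roots = np.sort(np.roots(coeffs).real)
eigs = np.sort(np.linalg.eigvalsh(M))
print(np.allclose(roots, eigs, atol=1e-6))
```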

Next we show that if {M} is real and symmetric, then its eigenvalues are real.

Lemma 4 If {M} is Hermitian, then, for every {{\bf x}} and {{\bf y}},

\displaystyle  \langle M {\bf x} , {\bf y} \rangle = \langle {\bf x} , M {\bf y} \rangle

Proof:

\displaystyle  \langle M {\bf x} , {\bf y} \rangle = \langle {\bf x} , M^* {\bf y} \rangle = \langle {\bf x} , M {\bf y} \rangle

\Box

Lemma 5 If {M} is Hermitian, then all the eigenvalues of {M} are real.

Proof: Let {M} be a Hermitian matrix and let {\lambda} be a scalar and {{\bf x}} be a non-zero vector such that {M {\bf x} = \lambda {\bf x}}. We will show that {\lambda = \lambda^*}, which implies that {\lambda} is a real number.

We note that

\displaystyle  \langle M{\bf x}, {\bf x} \rangle = \langle \lambda {\bf x}, {\bf x} \rangle = \lambda^* ||x||^2

and

\displaystyle  \langle {\bf x}, M {\bf x} \rangle = \langle {\bf x}, \lambda {\bf x} \rangle = \lambda ||x||^2

and by the fact that {\langle M{\bf x}, {\bf x} \rangle = \langle {\bf x}, M{\bf x} \rangle} , we have {\lambda = \lambda^*}. \Box

In order to prove Theorem 2, it remains to argue that, for a real eigenvalue of a real symmetric matrix, we can find a real eigenvector.

Proof: Let {M \in {\mathbb R}^{n \times n}} be a real symmetric matrix; by Lemma 3 and Lemma 5, {M} has a real eigenvalue {\lambda} and a (possibly complex valued) eigenvector {{\bf z} = {\bf x} + i {\bf y}}, where {{\bf x}} and {{\bf y}} are real vectors. We have

\displaystyle  M {\bf x} + i M {\bf y} = \lambda {\bf x} + i \lambda {\bf y}

from which (recalling that the entries of {M} and the scalar {\lambda} are real) it follows that {M {\bf x} = \lambda {\bf x}} and that {M {\bf y} = \lambda {\bf y}}; since {{\bf x}} and {{\bf y}} cannot both be zero, it follows that {\lambda} has a real eigenvector. \Box

We are now ready to prove the spectral theorem.

Proof: We proceed by induction on {n}. The case {n=1} is trivial.

Assume that the statement is true for dimension {n-1}. Let {\lambda_1} be a real eigenvalue of {M} and {{\bf x}_1} be a real eigenvector of {\lambda_1}.

Now we claim that if a vector {{\bf y}} is orthogonal to {{\bf x}_1}, then {M{\bf y}} is also orthogonal to {{\bf x}_1}. Indeed, the inner product of {M{\bf y}} and {{\bf x}_1} is

\displaystyle  \langle {\bf x}_1, M{\bf y} \rangle = \langle M {\bf x}_1 , {\bf y} \rangle = \langle \lambda_1 {\bf x}_1,{\bf y} \rangle = 0

Let {X} be the {(n-1)}-dimensional subspace of {{\mathbb R}^n} that contains all the vectors orthogonal to {{\bf x}_1}. We want to apply the inductive hypothesis to {M} restricted to {X}; we cannot literally do that, because the theorem is not stated in terms of arbitrary linear operators over vector spaces, so we will need to do that by fixing an appropriate basis for {X}.

Let {B \in {\mathbb R}^{n \times (n-1)}} be a matrix that computes a bijective map from {{\mathbb R}^{n-1}} to {X}. (If {{\bf b}_1, \ldots,{\bf b}_{n-1}} is an orthonormal basis for {X}, then {B} is just the matrix whose columns are the vectors {{\bf b}_i}.) Let also {B' \in {\mathbb R}^{(n-1) \times n}} be the matrix such that, for every {{\bf y} \in X}, {BB' {\bf y} = {\bf y}}. (We can set {B' = B^T} where {B} is as described above.) We apply the inductive hypothesis to the matrix

\displaystyle  M' := B'MB \in {\mathbb R}^{(n-1) \times (n-1)}

and we find eigenvalues {\lambda_2,\ldots,\lambda_n} and orthonormal eigenvectors {{\bf y}_2,\ldots,{\bf y}_n} for {M'}.

For every {i=2,\ldots,n}, we have

\displaystyle  B'MB {\bf y}_i = \lambda_i {\bf y}_i

and so

\displaystyle  BB'MB{\bf y}_i = \lambda_i B {\bf y}_i

Since {B{\bf y}_i} is orthogonal to {{\bf x}_1}, it follows that {MB{\bf y}_i} is also orthogonal to {{\bf x}_1}, and so {BB'MB{\bf y}_i = MB{\bf y}_i}, so we have

\displaystyle  MB{\bf y}_i = \lambda_i B{\bf y}_i

and, defining {{\bf x}_i := B{\bf y}_i}, we have

\displaystyle  M {\bf x}_i = \lambda_i {\bf x}_i

Finally, we observe that the vectors {{\bf x}_i} are orthogonal. By construction, {{\bf x}_1} is orthogonal to {{\bf x}_2,\ldots,{\bf x}_n}, and, for every {2 \leq i < j \leq n}, we have that

\displaystyle  \langle {\bf x}_i , {\bf x}_j \rangle = \langle B {\bf y}_i , B {\bf y}_j \rangle = \langle {\bf y}_i , B^T B {\bf y}_j \rangle = \langle {\bf y}_i , {\bf y}_j \rangle = 0

We have not yet verified that the vectors {{\bf x}_i} have norm 1; this is indeed the case, because the columns of {B} are orthonormal, so {|| {\bf x}_i || = || B {\bf y}_i || = || {\bf y}_i || = 1}. \Box
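Here is one inductive step of this argument carried out numerically in numpy (a sketch; the QR-based construction of the basis matrix {B} is my own shortcut, not something from the notes): starting from a real eigenpair {(\lambda_1, {\bf x}_1)}, build {B} with orthonormal columns spanning the orthogonal complement of {{\bf x}_1}, form {M' = B^T M B}, and lift the eigenvectors of {M'} back through {B}.

```python
# One inductive step of the proof, as a numpy sketch (the QR trick for building B
# is my own shortcut).
import numpy as np

rng = np.random.default_rng(3)
n = 5
C = rng.normal(size=(n, n))
M = (C + C.T) / 2

lam, X = np.linalg.eigh(M)
x1 = X[:, 0]                               # a unit-norm eigenvector of lambda_1

# Orthonormal basis of {x_1}^perp: the QR factorization of [x_1 | I] has x_1 as
# its first column, and the remaining n-1 columns span the orthogonal complement.
Q, _ = np.linalg.qr(np.column_stack([x1, np.eye(n)]))
B = Q[:, 1:]                               # n x (n-1), with B^T B = I

M_prime = B.T @ M @ B                      # the (n-1) x (n-1) matrix M' in the proof
lam_rest, Y = np.linalg.eigh(M_prime)

print(np.allclose(np.sort(lam_rest), np.sort(lam[1:])))  # the remaining eigenvalues of M
x2 = B @ Y[:, 0]                           # lift an eigenvector of M' back to R^n
print(np.allclose(M @ x2, lam_rest[0] * x2), np.isclose(x1 @ x2, 0))
```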

3. Variational Characterization of Eigenvalues

We conclude these notes with the variational characterization of eigenvalues for real symmetric matrices.

Theorem 6 Let {M\in {\mathbb R}^{n \times n}} be a symmetric matrix, and {\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n} be the eigenvalues of {M} in non-decreasing order. Then

\displaystyle  \lambda_k = \min_{X :\ {\rm dim}(X) = k} \ \ \ \max_{{\bf x} \in X - \{ {\bf 0} \} } \ \ \frac{ {\bf x}^T M {\bf x}}{{\bf x}^T {\bf x}}

The quantity {\frac { {\bf x}^T M {\bf x}}{{\bf x}^T {\bf x}} } is called the Rayleigh quotient of {{\bf x}} with respect to {M}, and we will denote it by {R_M({\bf x})}.
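A small numpy helper for the Rayleigh quotient (my own sketch), together with a check of the two facts used repeatedly below: {R_M({\bf v}_k) = \lambda_k} for an eigenvector {{\bf v}_k}, and {\lambda_1 \leq R_M({\bf x}) \leq \lambda_n} for every nonzero {{\bf x}}.

```python
# A Rayleigh quotient helper and a numerical check (my own sketch).
import numpy as np

def rayleigh(M, x):
    return (x @ M @ x) / (x @ x)

rng = np.random.default_rng(4)
C = rng.normal(size=(6, 6))
M = (C + C.T) / 2
lam, V = np.linalg.eigh(M)                 # ascending eigenvalues, orthonormal columns

print(all(np.isclose(rayleigh(M, V[:, k]), lam[k]) for k in range(6)))
x = rng.normal(size=6)
print(lam[0] <= rayleigh(M, x) <= lam[-1])
```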

Proof: Let {{\bf v}_1,\ldots,{\bf v}_n} be orthonormal eigenvectors of the eigenvalues {\lambda_1,\ldots,\lambda_n}, as promised by the spectral theorem. Consider the {k}-dimensional space spanned by {{\bf v}_1, \ldots,{\bf v}_k}. For every vector {{\bf x} = \sum_{i=1}^k a_i {\bf v}_i} in such a space, the numerator of the Rayleigh quotient is

\displaystyle  \sum_{i,j} a_i a_j {\bf v}_i^T M {\bf v}_j = \sum_{i,j} a_i a_j \lambda_j {\bf v}_i^T {\bf v}_j = \sum_{i=1}^k \lambda_i a_i^2 \leq \lambda_k \cdot \sum_{i=1}^k a_i^2

The denominator is clearly {\sum_{i=1}^k a_i^2}, and so {R_M({\bf x}) \leq \lambda_k}. This shows that

\displaystyle  \lambda_k \geq \min_{X :\ {\rm dim}(X) = k} \ \ \ \max_{{\bf x} \in X - \{ {\bf 0} \}} \ \ \frac{ {\bf x}^T M {\bf x}}{{\bf x}^T {\bf x}}

For the other direction, let {X} be any {k}-dimensional space: we will show that {X} must contain a vector of Rayleigh quotient {\geq \lambda_k}. Let {S} be the span of {{\bf v}_k,\ldots,{\bf v}_n}; since {S} has dimension {n-k+1} and {X} has dimension {k}, they must have some non-zero vector in common. Let {{\bf x}} be one such vector, and let us write {{\bf x} = \sum_{i=k}^n a_i {\bf v}_i}. The numerator of the Rayleigh quotient of {{\bf x}} is

\displaystyle  \sum_{i=k}^n \lambda_i a_i^2 \geq \lambda_k \sum_i a_i^2

and the denominator is {\sum_i a_i^2}, so {R_M({\bf x}) \geq \lambda_k}. \Box

We have the following easy consequence.

Fact 7 If {\lambda_1} is the smallest eigenvalue of a real symmetric matrix {M}, then

\displaystyle  \lambda_1 = \min_{{\bf x} \neq {\bf 0}} R_M({\bf x})

Furthermore, every minimizer is an eigenvector of {\lambda_1}.

Proof: The identity is the {k=1} case of the previous theorem. For the furthermore part, let {\lambda_1 \leq \cdots \leq \lambda_n} be the list of eigenvalues of {M} in non-decreasing order, and {{\bf v}_1,\ldots,{\bf v}_n} be corresponding eigenvectors. If {{\bf x} = \sum_i a_i {\bf v}_i} is any vector, then

\displaystyle  R_M( {\bf x} ) = \frac { \sum_i \lambda_i a_i^2}{\sum_i a_i^2}

If {R_M({\bf x}) = \lambda_1}, then {a_i = 0} for every {i} such that {\lambda_i > \lambda_1}, that is, {{\bf x}} is a linear combination of eigenvectors of {\lambda_1}, and hence it is an eigenvector of {\lambda_1}. \Box

Fact 8 If {\lambda_n} is the largest eigenvalue of a real symmetric matrix {M}, then

\displaystyle  \lambda_n = \max_{{\bf x} \neq {\bf 0}} R_M({\bf x})

Furthermore, every maximizer is an eigenvector of {\lambda_n}.

Proof: Apply Fact 7 to the matrix {-M}. \Box

Fact 9 If {\lambda_1} is the smallest eigenvalue of a real symmetric matrix {M}, and {{\bf x}_1} is an eigenvector of {\lambda_1}, then

\displaystyle  \lambda_2 = \min_{{\bf x} \neq {\bf 0},\ {\bf x} \perp {\bf x}_1} \ \ \ R_M({\bf x})

Furthermore, every minimizer is an eigenvector of {\lambda_2}.

Proof: A more conceptual proof would be to consider the restriction of {M} to the space orthogonal to {{\bf x}_1}, and then apply Fact 7 to such a linear operator. But, since we have not developed the theory for general linear operators, we would need to explicitly reduce to an {(n-1)}-dimensional case via a projection operator as in the proof of the spectral theorem.

Instead, we will give a more hands-on proof. Let {\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n} be the list of eigenvalues of {M}, with multiplicities, and {{\bf v}_1,\ldots,{\bf v}_n} be orthonormal vectors as given by the spectral theorem. We may assume that {{\bf v}_1 = {\bf x}_1}, possibly by changing the orthonormal basis of the eigenspace of {\lambda_1}. For every vector {{\bf x} = \sum_{i=2}^n a_i {\bf v}_i } orthogonal to {{\bf v}_1}, its Rayleigh quotient is

\displaystyle  R_M({\bf x}) = \frac { \sum_{i=2}^n \lambda_i a_i^2}{\sum_i a_i^2} \geq \lambda_2

and the minimum is achieved by vectors {{\bf x}} such that {a_i = 0} for every {i} such that {\lambda_i > \lambda_2}, that is, by vectors {{\bf x}} which are linear combinations of eigenvectors of {\lambda_2}, and so every minimizer is an eigenvector of {\lambda_2}. \Box
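Fact 9 is also how the "second eigenvalue" shows up in the graph setting: for the Laplacian of a connected graph the bottom eigenvector is the all-ones vector, and {\lambda_2} is the minimum Rayleigh quotient over vectors orthogonal to it. A numpy sketch on a toy graph (the 5-cycle, my own example):

```python
# Fact 9 in the graph setting (my own sketch): for the Laplacian of a connected
# graph, the bottom eigenpair is (0, all-ones), and lambda_2 is the minimum
# Rayleigh quotient over vectors orthogonal to the all-ones vector.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # the 5-cycle
n = 5
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
L = np.diag(A.sum(axis=1)) - A                      # unnormalized Laplacian

lam, V = np.linalg.eigh(L)                          # ascending eigenvalues
ones = np.ones(n) / np.sqrt(n)
print(np.isclose(lam[0], 0), np.allclose(L @ ones, 0))   # bottom eigenpair

x2 = V[:, 1]
print(np.isclose((x2 @ L @ x2) / (x2 @ x2), lam[1]))     # the minimizer is an eigenvector

# Random vectors orthogonal to the all-ones vector never beat lambda_2.
rng = np.random.default_rng(5)
quotients = []
for _ in range(200):
    x = rng.normal(size=n)
    x -= (x @ ones) * ones                          # project away the all-ones direction
    quotients.append((x @ L @ x) / (x @ x))
print(min(quotients) >= lam[1] - 1e-9)
```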

Course on spectral methods and expanders

This semester, starting tomorrow, I am teaching a course on spectral methods and expanders. This is similar to a course I offered twice at Stanford, but this time it will be a 15-week course instead of a 10-week one.

The Stanford course had two main components: (1) spectral algorithms for sparsest cut, and comparisons with LP and SDP based methods, and (2) properties and constructions of expanders.

I will use the additional time to talk a bit more about spectral algorithms, including clustering algorithms, and about constructions of expanders, and to add a third part about electrical networks, sparsification, and max flow.

Lecture notes will be posted here after each lecture.

In some more detail, the course will start with a review of linear algebra and a proof of basic spectral graph theory facts, such as the multiplicity of 0 as an eigenvalue of the Laplacian being the same as the number of connected components of a graph.

Then we will introduce expansion and conductance, and prove Cheeger’s inequality. We will do so in the language of approximation algorithms, and we will see how the analysis of Fiedler’s algorithm given by Cheeger’s inequality compares to the Leighton-Rao analysis of the LP relaxation and the Arora-Rao-Vazirani analysis of the SDP relaxation. Then we will prove several variants of Cheeger’s inequality, interpreting them as analyses of spectral algorithms for clustering and max cut.

In the second part of the course, we will see properties of expanders and combinatorial and algebraic constructions of expanders. We will talk about the theory that gives eigenvalues and eigenvectors of Abelian Cayley graphs, the zig-zag graph product, and the Margulis-Gabber-Galil construction. I would also like to talk about the expansion of random graphs, and to explain how one gets expander constructions from Selberg’s “3/16 theorem,” although I am not sure if there will be time for that.

The first two parts will be tied together by looking at the MCMC algorithm to approximate the number of perfect matchings in a dense bipartite graph. The analysis of the algorithm depends on the mixing time of a certain exponentially big graph, the mixing time will be determined (as shown in a previous lecture on properties of expanders) by the eigenvalue gap, the eigenvalue gap will be determined (as shown by Cheeger’s inequality) by the conductance, and the conductance can be bounded by constructing certain multicommodity flows (as shown in the analysis of the Leighton-Rao algorithms).

In the third part, we will talk about electrical networks, effective resistance and electrical flows, see how to get sparsifiers using effective resistance, sketch how to solve Laplacian equations in nearly linear time, and see how to approximate max flow using electrical flows.

The expansion of the Paley graph

Suppose that we want to construct a very good family of {d}-regular expander graphs. The Alon-Boppana theorem says that the best we can hope for, from the point of view of spectral expansion, is to have {\lambda_2 \leq 2 \sqrt{d-1}}, and we would certainly be very happy with a family of graphs in which {\lambda_2 \leq O(\sqrt d)}.

Known constructions of expanders produce Cayley graphs (or sometimes Schreier graphs, which is a related notion), because it is easier to analyze the spectra of such graphs. If {\Gamma} is a group with operation {\cdot} and {a^{-1}} is the inverse of element {a}, and {S} is a symmetric set of generators, then the Cayley graph {Cay(\Gamma, S)} is the graph whose vertices are the elements of {\Gamma} and whose edges are the pairs {(a,b)} such that {a\cdot b^{-1} \in S}.

When the group is Abelian, there is good news and bad news. The good news is that the eigenvectors of the graphs are completely characterized (they are the characters of {\Gamma}) and the eigenvalues are given by a nice formula. (See here and here.) The bad news is that constant-degree Cayley graphs of Abelian groups cannot be expanders.

That’s very bad news, but it is still possible to get highly expanding graphs of polylogarithmic degree as Cayley graphs of Abelian groups.

Here we will look at the extreme case of a family of graphs of degree {d_n = n/2}, where {n} is the number of vertices. Even with such high degree, the weak version of the Alon-Boppana theorem still implies that we must have {\sigma_2 \geq \Omega(\sqrt {d_n})}, where {\sigma_2} is the second largest absolute value of an eigenvalue of the adjacency matrix, and so we will be happy if we get a graph in which {\sigma_2 \leq O(\sqrt {d_n}) = O(\sqrt n)}. Highly expanding graphs of degree {n/2} are interesting because they have some of the properties of random graphs from the {G_{n,\frac 12}} distribution. In turn, graphs from {G_{n,\frac 12}} have all kinds of interesting properties with high probability, including being essentially the best known Ramsey graphs and having the kind of discrepancy property that gives seedless extractors for two independent sources. Unfortunately, these properties cannot be certified by spectral methods. The graph that we will study today is believed to have such stronger properties, but there is no known promising approach to prove such conjectures, so we will content ourselves with proving strong spectral expansion.

The graph is the Paley graph. If {p} is a prime with {p \equiv 1 \pmod 4} (so that {-1} is a square mod {p} and the adjacency relation below is symmetric), {{\mathbb Z} / p {\mathbb Z}} is the group of addition modulo {p}, and {Q} is the set of nonzero elements of {{\mathbb Z}/ p{\mathbb Z}} of the form {r^2 \bmod p}, then the graph is just {Cay ( {\mathbb Z}/p{\mathbb Z}, Q)}. That is, the graph has a vertex {v} for each {v\in \{ 0,\ldots,p-1\}}, and two distinct vertices {a,b} are adjacent if and only if there is an {r\in \{ 1,\ldots,p-1\}} such that {a-b \equiv r^2 \pmod p}.
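Here is a quick numpy construction of the Paley graph for a small prime (my own sketch; {p = 13} is an arbitrary choice with {p \equiv 1 \pmod 4}). The printout shows the degree {(p-1)/2}, the trivial top eigenvalue, and that the largest nontrivial eigenvalue in absolute value comes out to {(1+\sqrt p)/2 = O(\sqrt p)}, which is the kind of bound the post is after.

```python
# A quick construction of the Paley graph (my own sketch), with p = 13 as an
# arbitrary small prime congruent to 1 mod 4.
import numpy as np

p = 13                                         # a prime with p % 4 == 1
squares = {(r * r) % p for r in range(1, p)}   # nonzero quadratic residues mod p
A = np.zeros((p, p))
for a in range(p):
    for b in range(p):
        if a != b and (a - b) % p in squares:
            A[a, b] = 1.0

lam = np.sort(np.linalg.eigvalsh(A))
print(A.sum(axis=1)[0])                        # degree (p-1)/2 = 6
print(lam[-1])                                 # top eigenvalue equals the degree
print(np.max(np.abs(lam[:-1])), (1 + np.sqrt(p)) / 2)   # both come out to ~2.30
```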

Continue reading

The Alon-Boppana Theorem

Let {G=(V,E)} be a {d}-regular graph, and let

\displaystyle  d= \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n

be the eigenvalues of the adjacency matrix {A} of {G}, counted with multiplicities and sorted in descending order.

How good can the spectral expansion of {G} be?

1. Simple Bounds

The simplest bound comes from a trace method. We have

\displaystyle  trace(A^2) = \sum_i \lambda_i^2

by using one definition of the trace and

\displaystyle  trace(A^2) = \sum_{v} (A^2)_{v,v} \geq dn

using the other definition and observing that {(A^2)_{v,v}} counts the paths that go from {v} to {v} in two steps, of which there are at least {d}: follow an edge to a neighbor of {v}, then follow the same edge back. (There could be more if {G} has multiple edges or self-loops.)

So we have

\displaystyle  dn \leq d^2 + \sum_{i=2,\ldots,n} \lambda_i ^2

and so

\displaystyle  \max_{i=2,\ldots,n} |\lambda_i | \geq \sqrt d \cdot \sqrt{\frac {n-d}{n-1}}

The condition {d \leq n(1-\epsilon)} is necessary to get lower bounds of {\Omega(\sqrt d)}; in the clique, for example, we have {d=n-1} and {\lambda_2 = \lambda_n = -1}.

A trace argument does not give us a lower bound on {\lambda_2}, and in fact it is possible to have {\lambda_2=0} and {d= n/2}, for example in the complete bipartite graph.
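As a sanity check of the trace bound above, here is a numpy sketch on a small {d}-regular graph (the Petersen graph, my own choice of example): the largest nontrivial {|\lambda_i|} is indeed at least {\sqrt d \cdot \sqrt{(n-d)/(n-1)}}.

```python
# A numerical check of the trace bound (my own sketch) on the Petersen graph.
import numpy as np
from itertools import combinations

# The Petersen graph: vertices are the 2-subsets of {0,...,4}, adjacent iff disjoint.
verts = list(combinations(range(5), 2))
n = len(verts)                                    # n = 10
A = np.zeros((n, n))
for i, s in enumerate(verts):
    for j, t in enumerate(verts):
        if not set(s) & set(t):
            A[i, j] = 1.0

d = int(A.sum(axis=1)[0])                         # d = 3
lam = np.sort(np.linalg.eigvalsh(A))              # ascending; lam[-1] = d
bound = np.sqrt(d) * np.sqrt((n - d) / (n - 1))
print(np.max(np.abs(lam[:-1])), ">=", bound)      # 2.0 >= ~1.53
```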

If the diameter of {G} is at least 4, it is easy to see that {\lambda_2 \geq \sqrt d}. Let {a,b} be two vertices at distance 4. Define a vector {x} as follows: {x_a = 1}, {x_v = 1/\sqrt d} if {v} is a neighbor of {a}, {x_b=-1}, {x_v = - 1/\sqrt d} if {v} is a neighbor of {b}, and {x_v = 0} otherwise. Note that there cannot be any edge between a neighbor of {a} and a neighbor of {b}. Then we see that {||x||^2 = 4}, that {x^T A x \geq 4\sqrt d} (because there are {2d} edges, each counted twice, that give a contribution of {1/\sqrt d} to {\sum_{u,v} A_{uv} x_u x_v}) and that {x} is orthogonal to {(1,\ldots,1)}.

2. Nilli’s Proof of the Alon-Boppana Theorem

Nilli’s proof of the Alon-Boppana theorem gives

\displaystyle  \lambda_2 \geq 2 \sqrt{ d-1 } - O \left( \frac {\sqrt{d-1}}{diam(G)-4} \right)

where {diam(G) \geq \frac {\log |V|}{\log (d-1)}} is the diameter of {G}. This means that if one has a family of (constant) degree-{d} graphs of increasing size, and every graph in the family satisfies {\lambda_2 \leq \lambda}, then one must have {\lambda \geq 2 \sqrt{d -1}}. This is why families of Ramanujan graphs, in which {\lambda_2 \leq 2 \sqrt{d-1}}, are special, and so hard to construct, or even to prove existence of.

Friedman proves a stronger bound, in which the error term goes down with the square of the diameter, and Friedman's proof is the one presented in the Hoory-Linial-Wigderson survey. I like Nilli's proof, even if it is a bit messier than Friedman's, because it starts off with something that clearly is going to work; the first two or three ways you try to establish the bound don't work (believe me, I tried, because I didn't see why some steps in the proof had to be that way), but eventually you find the right way to break up the estimate and it works.

So here is Nilli’s proof. Continue reading

Notes on expanders, sparsest cut, and spectral graph theory

While preparing for the spectral graph theory boot camp, which starts this Tuesday at the Simons Institute, I collected the lecture notes of my class on graph partitioning and expanders in one file.

There are no references and, most likely, plenty of errors. If you use the notes and find mistakes, please let me know by either emailing luca at berkeley or leaving a comment here.

How would you call it?

In preparation for the special program on spectral graph theory at the Simons Institute, which starts in a week, I have been reading on the topics of the theory that I don’t know much about: the spectrum of random graphs, properties of highly expanding graphs, spectral sparsification, and so on.

I have been writing some notes for myself, and here is something that bothers me: How do you call the second largest, in absolute value, eigenvalue of the adjacency matrix of a graph, without resorting to the sentence I just wrote? And how do you denote it?

I have noticed that the typical answer to the first question is “second eigenvalue,” but this is a problem when it creates confusion with the actual second largest eigenvalue of the adjacency matrix, which could be a very different quantity. The answer to the second question seems to be either a noncommittal “{\lambda}” or a rather problematic “{\lambda_2}.”

For my own use, I have started to use the notation {\lambda_{2abs}}, which can certainly use some improvement, but I am still at a loss concerning terminology.

Perhaps one should start from where this number comes from, and it seems that its important property is that, if the graph is {d}-regular, has {n} vertices, and has adjacency matrix {A}, then this number is the spectral norm of {A - \frac dn J} (where {J} is the matrix with ones everywhere), so that it measures the distance of {A} from the “perfect {d}-regular expander” in a norm that is useful to reason about cuts and also tractable to compute.
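A quick numpy check of the identity that motivates this point of view (a sketch on a small example of my choosing, the cycle {C_7}): for a connected {d}-regular graph, the spectral norm of {A - \frac dn J} coincides with the largest {|\lambda_i|} over the nontrivial eigenvalues.

```python
# Checking (my own sketch) that, for a connected d-regular graph, the spectral
# norm of A - (d/n) J equals the largest absolute value of a nontrivial eigenvalue.
import numpy as np

n, d = 7, 2
A = np.zeros((n, n))
for u in range(n):
    A[u, (u + 1) % n] = A[u, (u - 1) % n] = 1.0    # the cycle C_7

J = np.ones((n, n))
shifted_norm = np.linalg.norm(A - (d / n) * J, 2)  # spectral norm (largest singular value)

lam = np.sort(np.linalg.eigvalsh(A))               # lam[-1] = d = 2
print(shifted_norm, np.max(np.abs(lam[:-1])))      # both ~ 1.802
```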

So, since it is the spectral norm of a modification of the adjacency matrix, how about calling it {<}adjective{>} spectral norm? I would vote for shifted spectral norm because I would think of subtracting {\frac dn J} as a sort of shift.

Please, do better in the comments!