In which we define and analyze the zig-zag graph product.

1. Replacement Product and Zig-Zag Product

In the previous lecture, we claimed it is possible to “combine” a {d}-regular graph on {D} vertices and a {D}-regular graph on {N} vertices to obtain a {d^2}-regular graph on {ND} vertices which is a good expander if the two starting graphs are. Let the two starting graphs be denoted by {H} and {G} respectively. Then, the resulting graph, called the zig-zag product of the two graphs is denoted by {G {\textcircled z} H}.

Using {\lambda(G)} to denote the eigenvalue with the second-largest absolute value for a graph {G}, we claimed that if {\lambda(H) \leq b} and {\lambda(G) \leq a}, then {\lambda(G {\textcircled z} H) \leq a+ 2b + b^2}. In this lecture we shall describe the construction for the zig-zag product and prove this claim.

2. Replacement product of two graphs

We first describe a simpler product for a “small” {d}-regular graph on {D} vertices (denoted by {H}) and a “large” {D}-regular graph on {N} vertices (denoted by {G}). Assume that for each vertex of {G}, there is some ordering on its {D} neighbors. Then we construct the replacement product (see figure) {G {\textcircled r} H} as follows:

  • Replace each vertex of {G} with a copy of {H} (henceforth called a cloud). For {v \in V(G), i \in V(H)}, let {(v,i)} denote the {i^{th}} vertex in the {v^{th}} cloud.
  • Let {(u, v) \in E(G)} be such that {v} is the {i}-th neighbor of {u} and {u} is the {j}-th neighbor of {v}. Then {((u,i),(v,j)) \in E(G {\textcircled r} H)}. Also, if {(i,j) \in E(H)}, then {\forall u \in V(G)~ ((u,i),(u,j)) \in E(G {\textcircled r} H)}.

Note that the replacement product constructed as above has {ND} vertices and is {(d+1)}-regular.
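
To make the construction concrete, here is a minimal Python sketch. It is illustrative only: `G` is assumed to be given as adjacency lists with a fixed ordering ({G[u][i]} is the {i}-th neighbor of {u}), `H` as adjacency lists on {\{0,\ldots,D-1\}}, and we assume no multi-edges, so that `G[v].index(u)` recovers the position of {u} among the neighbors of {v}.

```python
def replacement_product(G, H):
    """Sketch of G (r) H for a D-regular G (adjacency lists with a fixed
    ordering of neighbors) and a d-regular H on D vertices."""
    N, D = len(G), len(H)
    edges = set()
    for u in range(N):
        # intra-cloud edges: a copy of H inside the cloud of u
        for i in range(D):
            for j in H[i]:
                edges.add(((u, i), (u, j)))
        # inter-cloud edges: (u,i)-(v,j) where v is the i-th neighbor
        # of u and u is the j-th neighbor of v
        for i in range(D):
            v = G[u][i]
            j = G[v].index(u)  # assumes no multi-edges between u and v
            edges.add(((u, i), (v, j)))
    return edges
```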

3. Zig-zag product of two graphs

Given two graphs {G} and {H} as above, the zig-zag product {G{\textcircled z} H} is constructed as follows (see figure):

  • The vertex set {V(G {\textcircled z} H)} is the same as in the case of the replacement product.
  • {((u,i),(v,j)) \in E(G {\textcircled z} H)} if there exist {\ell} and {k} such that {((u,i),(u,\ell))}, {((u,\ell),(v,k))} and {((v,k),(v,j))} are in {E(G {\textcircled r} H)}, i.e. {(v,j)} can be reached from {(u,i)} by taking a step in the first cloud, then a step between the clouds, and then a step in the second cloud (hence the name!).

It is easy to see that the zig-zag product is a {d^2}-regular graph on {ND} vertices.
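
Continuing the sketch above (same hypothetical adjacency-list conventions), each edge of the zig-zag product is composed of a "zig" step inside a cloud, the inter-cloud step, and a "zag" step inside the destination cloud:

```python
def zigzag_product(G, H):
    """Sketch of G (z) H: each vertex (u,i) gets d*d neighbors
    (with multiplicity), one per pair of H-steps around an inter-cloud step."""
    N, D = len(G), len(H)
    edges = []
    for u in range(N):
        for i in range(D):
            for l in H[i]:              # zig: step inside the cloud of u
                v = G[u][l]             # step between clouds
                k = G[v].index(u)
                for j in H[k]:          # zag: step inside the cloud of v
                    edges.append(((u, i), (v, j)))
    return edges
```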

Let {M \in {\mathbb R}^{([N] \times [D]) \times ([N] \times [D])}} be the normalized adjacency matrix of {G {\textcircled z} H}. Using the fact that each edge in {G {\textcircled z} H} is made up of three steps in {G {\textcircled r} H}, we can write {M} as {BAB}, where

\displaystyle  B[(u,i),(v,j)] = \left\{ \begin{array}{ll} 0 & \mbox{if } u\neq v \\ M_H[i,j] & \mbox{if } u=v \end{array} \right.

and {A[(u,i),(v,j)]=1} if {v} is the {i}-th neighbor of {u} and {u} is the {j}-th neighbor of {v}, and {A[(u,i),(v,j)]=0} otherwise.

Note that {A} is the adjacency matrix of a perfect matching on {[N] \times [D]}, and is hence a symmetric permutation matrix.
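
This decomposition is easy to check numerically. A minimal numpy sketch, with the same hypothetical representation as above, builds {B = I \otimes M_H} and the permutation matrix {A}; then `B @ A @ B` is the normalized adjacency matrix of {G {\textcircled z} H}.

```python
import numpy as np

def bab_matrices(G, H):
    N, D, d = len(G), len(H), len(H[0])
    M_H = np.zeros((D, D))              # normalized adjacency matrix of H
    for i in range(D):
        for j in H[i]:
            M_H[i, j] += 1.0 / d
    B = np.kron(np.eye(N), M_H)         # B = I (tensor) M_H, block diagonal
    A = np.zeros((N * D, N * D))        # permutation matrix of the matching
    for u in range(N):
        for i in range(D):
            v = G[u][i]
            j = G[v].index(u)
            A[u * D + i, v * D + j] = 1.0
    return B, A
```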

4. Preliminaries on Matrix Norms

Recall that, instead of bounding {\lambda_2}, we will bound the following parameter (thus proving a stronger result).

Definition 1 Let {M} be the normalized adjacency matrix of a graph {G=(V,E)}, and {\lambda_1 \geq \ldots \geq \lambda_n} be its eigenvalues with multiplicities. Then we use the notation

\displaystyle  \lambda(M) := \max_{i=2,\ldots,n} \{ |\lambda_i| \} = \max \{ \lambda_2,-\lambda_n \}

The parameter {\lambda} has the following equivalent characterizations.

Fact 2

\displaystyle  \lambda(M) = \max_{{\bf x} \in {\mathbb R}^V - \{ {\bf {0}} \} , {\bf x} \perp {\bf 1}} \frac {|| M {\bf x}||}{||{\bf x}||} = \max_{{\bf x} \in {\mathbb R}^V ,{\bf x} \perp {\bf 1}, ||{\bf x}||=1} || M {\bf x} ||
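
Numerically, {\lambda(M)} can be read off the spectrum of {M}; here is a small numpy helper (a sketch, assuming {M} is symmetric):

```python
import numpy as np

def lam(M):
    """max{ lambda_2, -lambda_n } for a symmetric matrix M."""
    eigs = np.sort(np.linalg.eigvalsh(M))    # eigenvalues in ascending order
    return max(abs(eigs[0]), abs(eigs[-2]))  # = max_{i >= 2} |lambda_i|
```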

Another equivalent characterization, which will be useful in several contexts, can be given using the following matrix norm.

Definition 3 (Spectral Norm) The spectral norm of a matrix {M\in {\mathbb R}^{n \times n}} is defined as

\displaystyle  ||M|| = \max_{{\bf x}\in {\mathbb R}^n, || {\bf x} ||=1} || M {\bf x} ||

If {M} is symmetric with eigenvalues {\lambda_1,\ldots,\lambda_n}, then the spectral norm is {\max_i |\lambda_i|}. Note that {||\cdot||} is indeed a norm, that is, for every two square real matrices {A,B} we have {||A+B||\leq ||A || + ||B||} and for every matrix {A} and scalar {\alpha} we have {||\alpha A|| = |\alpha| \cdot ||A||}. In addition, it has the following useful property:

Fact 4 For every two matrices {A,B \in {\mathbb R}^{n\times n}} we have

\displaystyle  || A B || \leq ||A|| \cdot ||B||

Proof: For every vector {x} we have

\displaystyle  ||AB {\bf x}|| \leq ||A|| \cdot ||B {\bf x}|| \leq ||A|| \cdot ||B|| \cdot ||{\bf x}||

where the first inequality is due to the fact that {||A{\bf z}|| \leq ||A|| \cdot ||{\bf z}||} for every vector {{\bf z}} (applied to {{\bf z} = B{\bf x}}), and the second inequality is due to the fact that {||B{\bf x}|| \leq ||B|| \cdot ||{\bf x}||}. So we have

\displaystyle  || AB || = \max_{{\bf x}\in {\mathbb R}^n , ||{\bf x}||=1} ||AB {\bf x}|| \leq ||A|| \cdot ||B||

\Box
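
Fact 4 is also easy to sanity-check numerically; numpy's `np.linalg.norm(·, 2)` computes the spectral norm (largest singular value) of a matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))
lhs = np.linalg.norm(A @ B, 2)
rhs = np.linalg.norm(A, 2) * np.linalg.norm(B, 2)
assert lhs <= rhs + 1e-9                # submultiplicativity of the norm
```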

We can use the spectral norm to provide another characterization of the parameter {\lambda(M)} of the normalized adjacency matrix of a graph.

Lemma 5 Let {G} be a regular graph and {M\in {\mathbb R}^{n \times n}} be its normalized adjacency matrix. Then

\displaystyle  \lambda(M) = || M - \frac 1n J||

where {J} is the matrix with a 1 in each entry.

Proof: Let {\lambda_1=1 \geq \lambda_2 \geq \cdots \geq \lambda_n} be the eigenvalues of {M} and {{\bf v}_1 = \frac 1 {\sqrt n} {\bf 1}}, {{\bf v}_2}, {\ldots}, {{\bf v}_n} a corresponding system of orthonormal eigenvectors. Then we can write

\displaystyle  M = \lambda_1 {\bf v}_1 {\bf v}_1^T + \cdots + \lambda_n {\bf v}_n {\bf v}_n^T

Noting that {{\bf v}_1{\bf v}_1^T = \frac 1n J}, we have

\displaystyle  M -\frac 1n J = 0 \cdot {\bf v}_1{\bf v}_1^T + \sum_{i=2}^n \lambda_i {\bf v}_i {\bf v}_i^T

and so {{\bf v}_1,\ldots,{\bf v}_n} is also a system of eigenvectors for {M-\frac 1n J}, with corresponding eigenvalues {0,\lambda_2,\ldots,\lambda_n}, meaning that

\displaystyle  || M - \frac 1n J|| = \max \{ 0,|\lambda_2|,\ldots,|\lambda_n| \} = \lambda(M)

\Box
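
As a quick numeric illustration of Lemma 5, take the cycle {C_n} (reusing the `lam` helper sketched above):

```python
import numpy as np

n = 8
M = np.zeros((n, n))                    # normalized adjacency of the n-cycle
for v in range(n):
    M[v, (v - 1) % n] = M[v, (v + 1) % n] = 0.5
J = np.ones((n, n))
assert np.isclose(lam(M), np.linalg.norm(M - J / n, 2))
```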

The above lemma has several applications. It states that, according to a certain definition of distance, when a graph is a good expander then it is close to a clique. (The matrix {\frac 1n J} is the normalized adjacency matrix of a clique with self-loops.) The proof of several results about expanders is based on noticing that the result is trivial for cliques, and then on “approximating” the given expander by a clique using the above lemma.

We need one more definition before we can continue with the analysis of the zig-zag graph product.

Definition 6 (Tensor Product) Let {A \in {\mathbb R}^{N\times N}} and {B\in {\mathbb R}^{D \times D}} be two matrices. Then {A \otimes B \in {\mathbb R}^{ND \times ND}} is a matrix whose rows and columns are indexed by pairs {(u,i)\in [N] \times [D]} such that

\displaystyle  (A\otimes B)_{(u,i),(v,j)} = A_{u,v} \cdot B_{i,j}

For example {I \otimes M} is a block-diagonal matrix in which every block is a copy of {M}.
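
In numpy the tensor product is the Kronecker product `np.kron`; for example:

```python
import numpy as np

M = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(np.kron(np.eye(2), M))    # block diagonal: a copy of M per block
print(np.kron(M, np.eye(2)))    # not the same: the product is not commutative
```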

5. Analysis of the Zig-Zag Product

Suppose that {G} and {H} are identical cliques with self-loops, that is, both are the clique with self-loops on {n} vertices (an {n}-regular graph on {n} vertices). Then the zig-zag product of {G} and {H} is well-defined, because the degree of {G} is equal to the number of vertices of {H}. The resulting graph {G{\textcircled z} H} is an {n^2}-regular graph with {n^2} vertices, and an inspection of the definitions reveals that {G{\textcircled z} H} is indeed a clique (with self-loops) on {n^2} vertices.

The intuition for our analysis is that we want to show that the zig-zag graph product “preserves” distances measured in the matrix norm, and so if {G} is close (in matrix norm) to a clique and {H} is close to a clique, then {G{\textcircled z} H} is close to the zig-zag product of two cliques, that is, to a clique. (Strictly speaking, what we just said does not make sense, because we cannot take the zig-zag product of the clique that {G} is close to and of the clique that {H} is close to, because they do not have the right degree and number of vertices. The proof, however, follows quite closely this intuition.)

Theorem 7 If {\lambda(M_G) = a} and {\lambda(M_H) = b}, then

\displaystyle  \lambda( G {\textcircled z} H ) \leq a + 2b + b^2

Proof: Let {M} be the normalized adjacency matrix of {G{\textcircled z} H}, and let {{\bf x}} be a unit vector such that {{\bf x} \perp {\bf 1}} and

\displaystyle  \lambda(M) = || M {\bf x} ||

Recall that we defined a decomposition

\displaystyle  M = BAB

where {A} is a permutation matrix, and {B = I \otimes M_H}. Let us write {E:= M_H - \frac 1D J}, then {B = I \otimes \frac 1D J + I \otimes E}. Let us call {\bar J : = I \otimes \frac 1D J} and {\bar E:= I \otimes E}.

First, we argue that the matrix norm of {\bar E} is small. Take any vector {{\bf z} \in {\mathbb R}^{ND}} and write it as {{\bf z} = ({\bf z}_1,\ldots,{\bf z}_N)}, where, for each {u\in [N]}, {{\bf z}_u} is the {D}-dimensional restriction of {{\bf z}} to the coordinates in the cloud of {u}. Then

\displaystyle  ||( I \otimes E ){\bf z} ||^2 = \sum_u || E {\bf z}_u||^2 \leq \sum_u ||E||^2 \cdot ||{\bf z}_u ||^2 = ||E||^2 \cdot ||{\bf z}||^2

and so we have

\displaystyle  || I \otimes E|| \leq ||E|| \leq b

Then we have

\displaystyle  BAB = ( \bar J + \bar E) A (\bar J + \bar E )

\displaystyle  = \bar J A \bar J + \bar J A \bar E + \bar E A \bar J + \bar E A \bar E

and so, using the triangle inequality, the submultiplicativity of the matrix norm, and the fact that {||{\bf x}||=1}, we have

\displaystyle  ||BAB {\bf x}|| \leq ||\bar J A \bar J {\bf x} || + || \bar E A \bar J || + || \bar J A \bar E|| + || \bar E A \bar E||

where

\displaystyle  || \bar E A \bar J || \leq || \bar E|| \cdot || A || \cdot || \bar J|| \leq ||\bar E|| \leq b

\displaystyle  || \bar J A \bar E || \leq || \bar J|| \cdot || A || \cdot || \bar E|| \leq ||\bar E|| \leq b

\displaystyle  || \bar E A \bar E || \leq || \bar E|| \cdot || A || \cdot || \bar E|| \leq ||\bar E||^2 \leq b^2

(Here we used that {||A|| = 1}, because {A} is a permutation matrix, and that {||\bar J|| = 1}.)

It remains to prove that {||\bar J A \bar J {\bf x} || \leq a}. If we let {A_G = D M_G} be the adjacency matrix of {G}, then we can see that

\displaystyle  ( \bar J A \bar J ) _{(u,i),(v,j)} = \frac {1}{D^2} (A_G)_{u,v} = \frac 1D (M_G)_{u,v} = ( M_G \otimes \frac 1D J)_{(u,i),(v,j) }

That is,

\displaystyle  \bar J A \bar J = M_G \otimes \frac 1D J

Finally, we write {{\bf x} = ({\bf x}_1,\ldots,{\bf x}_N)}, where {{\bf x}_u} is the {D}-dimensional vector of entries corresponding to the cloud of {u}, we call {y_u := \sum_i {\bf x}_u(i)/D}, and we note that, by Cauchy-Schwarz:

\displaystyle  || {\bf y}||^2 = \sum_u \left( \sum_i \frac 1D {\bf x}_{u,i} \right)^2 \leq \sum_u \left( \sum_i \frac 1{D^2} \right) \cdot \left ( \sum_i {\bf x}_{u,i}^2 \right) = \frac 1D || {\bf x}||^2

The final calculation is:

\displaystyle  || \bar J A \bar J {\bf x} ||^2 = \left\| \left ( M_G \otimes \frac 1D J \right) {\bf x} \right\|^2

\displaystyle  = \sum_{u,i} \left( \sum_{v,j} \frac 1D (M_G)_{u,v} {\bf x}_{v,j} \right )^2

\displaystyle  = \sum_{u,i} \left( \sum_{v} (M_G)_{u,v} y_v \right)^2

\displaystyle  = D \cdot \sum_u \left( \sum_{v} (M_G)_{u,v} y_v \right)^2

\displaystyle  = D \cdot || M_G {\bf y} ||^2

\displaystyle  \leq D \cdot a^2 \cdot || {\bf y}||^2

\displaystyle  \leq a^2 \cdot ||{\bf x} ||^2 = a^2

where the first inequality uses the fact that {{\bf x} \perp {\bf 1}} implies {{\bf y} \perp {\bf 1}}, so that {||M_G {\bf y}|| \leq \lambda(M_G) \cdot ||{\bf y}|| = a \cdot ||{\bf y}||}, and the second uses {||{\bf y}||^2 \leq \frac 1D ||{\bf x}||^2}. Thus {||\bar J A \bar J {\bf x}|| \leq a}, and combining the four bounds gives {\lambda(M) = ||BAB{\bf x}|| \leq a + 2b + b^2}.

\Box
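
The theorem can be verified numerically on small examples; here is a sketch reusing the hypothetical helpers `bab_matrices` and `lam` from earlier (recall that the degree of {G} must equal the number of vertices of {H}):

```python
import numpy as np

def check_zigzag_bound(G, H):
    N, D, d = len(G), len(H), len(H[0])
    B, A = bab_matrices(G, H)
    M = B @ A @ B                       # normalized adjacency of G (z) H
    M_G = np.zeros((N, N))              # normalized adjacency of G
    for u in range(N):
        for v in G[u]:
            M_G[u, v] += 1.0 / D
    M_H = np.zeros((D, D))              # normalized adjacency of H
    for i in range(D):
        for j in H[i]:
            M_H[i, j] += 1.0 / d
    a, b = lam(M_G), lam(M_H)
    assert lam(M) <= a + 2 * b + b ** 2 + 1e-9
    return lam(M), a + 2 * b + b ** 2
```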
