This week, the topic of my online course on graph partitioning and expanders is the computation of approximate eigenvalues and eigenvectors with the power method.

If $M$ is a positive semidefinite matrix (a symmetric matrix all of whose eigenvalues are nonnegative), then the power method is simply to pick a random vector $x \in \{-1,+1\}^n$ and compute $y = M^k x$. If $k$ is of the order of $\frac{1}{\epsilon} \log \frac{n}{\epsilon}$, then one has a constant probability that

$\displaystyle \frac{y^T M y}{y^T y} \geq (1-\epsilon) \cdot \lambda_1$

where $\lambda_1$ is the largest eigenvalue of $M$. If we are interested in the normalized Laplacian matrix $L = I - \frac{1}{d} A$ of a $d$-regular graph, where $A$ is the adjacency matrix of the graph, this gives a way to compute an approximation of the largest eigenvalue, and a vector of approximately maximum Rayleigh quotient, which is useful for approximating Max Cut, but not for applying spectral partitioning algorithms. For those, we need a vector that approximates the eigenvector of the second smallest eigenvalue $\lambda_2$ of $L$.
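As a concrete illustration, the basic procedure might be sketched as follows (a minimal numpy sketch of my own; the function name and interface are not from the lecture):

```python
import numpy as np

def power_method(M, k, rng=None):
    """Approximate the top eigenvalue/eigenvector of a PSD matrix M
    by k iterations of the power method."""
    rng = np.random.default_rng(rng)
    n = M.shape[0]
    x = rng.choice([-1.0, 1.0], size=n)   # random +/-1 starting vector
    for _ in range(k):
        x = M @ x
        x /= np.linalg.norm(x)            # renormalize to avoid overflow
    return x, x @ M @ x                   # x is unit norm: Rayleigh quotient
```

With constant probability over the random start, the returned Rayleigh quotient is within a factor $(1-\epsilon)$ of $\lambda_1$ after the stated number of iterations.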

Equivalently, we want to approximate the second largest eigenvalue of the adjacency matrix $A$. The power method is easy to adjust so that it computes the second largest eigenvalue instead of the largest (provided we know an eigenvector of the largest eigenvalue): after picking the random vector, subtract the component of the vector that is parallel to the eigenvector of the largest eigenvalue. In the case of the adjacency matrix of a regular graph, the largest eigenvalue has the all-ones vector as an eigenvector, so it suffices to subtract from every coordinate of the random vector the average of the coordinates.
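In code, this deflation is a one-line change (a hypothetical sketch, assuming a regular graph so that the all-ones vector is the top eigenvector of $A$ and subtracting the mean removes its component):

```python
import numpy as np

def power_method_second(A, k, rng=None):
    """Approximate the second largest eigenvalue of the adjacency matrix A
    of a regular graph by projecting out the all-ones top eigenvector."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    x = rng.choice([-1.0, 1.0], size=n)
    x -= x.mean()                         # remove the component along all-ones
    while np.linalg.norm(x) < 1e-12:      # resample if we drew +/- all-ones
        x = rng.choice([-1.0, 1.0], size=n)
        x -= x.mean()
    for _ in range(k):
        x = A @ x
        x -= x.mean()                     # guard against numerical drift
        x /= np.linalg.norm(x)
    return x, x @ A @ x
```

One caveat: since $A$ is not positive semidefinite, this converges to the remaining eigenvalue of largest absolute value; the positive semidefinite shift described in the next paragraph addresses this.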

The adjacency matrix is not positive semidefinite, but we can make it so by adding a multiple of the identity matrix: for example, we can work with $M = \frac{1}{2} I + \frac{1}{2d} A$. The power method then reduces to the following procedure: pick a random $x \in \{-1,+1\}^n$, subtract from every entry of $x$ the average of the entries, then repeat the following process $k$ times: for every vertex $v$, assign $x(v) := \frac{1}{2} x(v) + \frac{1}{2d} \sum_{u: (u,v) \in E} x(u)$, that is, replace the value that the vector assigns to vertex $v$ with a convex combination of its current value and the average current value at its neighbors. (Note that one iteration can be executed in time $O(|V| + |E|)$.)
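Written out with adjacency lists, the procedure above might look like this (names are my own; the optional `x0` argument is there only to make the example deterministic):

```python
import numpy as np

def lazy_walk_power_method(adj, d, k, x0=None, rng=None):
    """Power method for M = (1/2) I + (1/(2d)) A on a d-regular graph given
    by adjacency lists.  Each iteration costs O(|V| + |E|) time."""
    n = len(adj)
    if x0 is None:
        x0 = np.random.default_rng(rng).choice([-1.0, 1.0], size=n)
    x = np.asarray(x0, dtype=float)
    x = x - x.mean()                  # project out the all-ones eigenvector
    for _ in range(k):
        y = np.empty(n)
        for v, nbrs in enumerate(adj):
            # convex combination of x(v) and the average value at v's neighbors
            y[v] = 0.5 * x[v] + 0.5 * sum(x[u] for u in nbrs) / d
        x = y / np.linalg.norm(y)
    return x
```

The returned unit vector has Rayleigh quotient for $M$ close to the second largest eigenvalue $1 - \frac{\lambda_2}{2}$.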

The problem is that if we started from a graph whose Laplacian matrix has second smallest eigenvalue $\lambda_2$, then the matrix $M = \frac{1}{2} I + \frac{1}{2d} A$ has second largest eigenvalue $1 - \frac{\lambda_2}{2}$, and if the power method finds a vector of Rayleigh quotient at least $(1-\epsilon) \cdot \left( 1 - \frac{\lambda_2}{2} \right)$ for $M$, then, since $L = 2(I - M)$, that vector has Rayleigh quotient at most $2 \left( 1 - (1-\epsilon) \left( 1 - \frac{\lambda_2}{2} \right) \right) \leq \lambda_2 + 2\epsilon$ for $L$, and unless we choose $\epsilon$ of the same order as $\lambda_2$ we get nothing. This means that the number of iterations has to be about $\frac{1}{\lambda_2} \log n$, which can be quite large.

The video below (taken from this week’s lecture) shows how slowly the power method progresses on a small cycle with 31 vertices. It goes faster on the hypercube, which has a much larger $\lambda_2$.

A better way to find small eigenvalues of the Laplacian is to apply the power method to the pseudoinverse $L^+$ of the Laplacian. If the Laplacian of a connected graph has eigenvalues $0 = \lambda_1 < \lambda_2 \leq \cdots \leq \lambda_n$, then the pseudoinverse $L^+$ has eigenvalues $0, \frac{1}{\lambda_2}, \ldots, \frac{1}{\lambda_n}$ with the same eigenvectors, so approximately finding the largest eigenvalue of $L^+$ is the same problem as approximately finding the second smallest eigenvalue of $L$.
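A sketch of this inverse power method (hypothetical interface; the dense least-squares call below stands in for a real Laplacian solver, so it illustrates the idea but not the nearly linear running time):

```python
import numpy as np

def inverse_power_method(L, k, x0=None, rng=None):
    """Run the power method on the pseudoinverse of a connected graph's
    Laplacian L without forming L^+: each step solves L y = x instead."""
    n = L.shape[0]
    if x0 is None:
        x0 = np.random.default_rng(rng).choice([-1.0, 1.0], size=n)
    x = np.asarray(x0, dtype=float)
    x = x - x.mean()                 # work orthogonally to the all-ones kernel
    for _ in range(k):
        # minimum-norm least-squares solution of L y = x, i.e. y = L^+ x
        x, *_ = np.linalg.lstsq(L, x, rcond=None)
        x = x - x.mean()
        x /= np.linalg.norm(x)
    return x, x @ L @ x              # Rayleigh quotient, approximately lambda_2
```

Since the spectral gap of $L^+$ is governed by $\lambda_2 / \lambda_3$ rather than by $\lambda_2$ alone, convergence no longer degrades just because $\lambda_2$ is small.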

Although we do not have fast algorithms to compute $L^+$ explicitly, all we need to run the power method is, for a given $x$, to find the $y$ such that $L y = x$, that is, to solve the linear system $Ly = x$ in the unknown $y$, given $L$ and $x$.
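For intuition, here is what such a solve looks like with plain conjugate gradient, a standard iterative method for symmetric positive semidefinite systems (this sketch is mine; the solvers cited below are much more sophisticated preconditioned relatives of it):

```python
import numpy as np

def solve_laplacian(L, b, iters=200, tol=1e-10):
    """Solve L y = b by conjugate gradient, assuming L is the Laplacian of a
    connected graph and b is orthogonal to the all-ones kernel of L."""
    b = b - b.mean()                 # project b onto the range of L
    y = np.zeros_like(b, dtype=float)
    r = b.copy()                     # residual b - L y
    p = r.copy()                     # current search direction
    rs = r @ r
    for _ in range(iters):
        if np.sqrt(rs) < tol:        # converged
            break
        Lp = L @ p
        alpha = rs / (p @ Lp)        # exact line search along p
        y = y + alpha * p
        r = r - alpha * Lp
        rs_new = r @ r
        p = r + (rs_new / rs) * p    # L-conjugate update of the direction
        rs = rs_new
    return y
```

In exact arithmetic, conjugate gradient finishes in at most $n$ iterations, and each iteration only needs one multiplication by $L$, which costs $O(|V| + |E|)$ for a sparse graph.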

For this problem, Spielman and Teng gave an algorithm whose running time is nearly linear in the number of nonzero entries of $L$, and new algorithms have been developed more recently (and with some promise of being practical) by Koutis, Miller and Peng and by Kelner, Orecchia, Sidford and Zhu.

Coincidentally, just this week, Nisheeth Vishnoi has completed his monograph Lx=b on algorithms to solve such linear systems and their applications. It’s going to be great summer reading for those long days at the beach.

In the video of the power method, one says, in the intro, that for the hypercube the arrangement is such that the sum of the edge generators is zero.

Why is this important? What is the wider principle?

Turing did the same, albeit with subgroups of Enigma groups. I’ve never figured out the significance to date.

Ah, we are used to _balanced_ binary functions (and their relevance to crypto). If the set of generators is {-26…-1,1,…26} instead of {-1,1}, then I see that the sum of the powers being zero is a more general “balancedness”. This is an algebraic “standard reference point”, much as the uniform distribution of a maximally mixed state/phase is a standard reference point.


Regarding the visualization of the power method applied to the cycle graph in the video, you might be interested in the following paper on why the cycle tends to an ellipse, in case you don’t know it:

A. Elmachtoub and C. Van Loan, “From Random Polygon to Ellipse: An Eigenanalysis,” SIAM Review, 2010.