
Suppose that we want to construct a very good family of {d}-regular expander graphs. The Alon-Boppana theorem says that the best we can hope for, from the point of view of spectral expansion, is to have {\lambda_2 \leq 2 \sqrt{d-1}}, and we would certainly be very happy with a family of graphs in which {\lambda_2 \leq O(\sqrt d)}.

Known constructions of expanders produce Cayley graphs (or sometimes Schreier graphs, which is a related notion), because it is easier to analyze the spectra of such graphs. If {\Gamma} is a group with operation {\cdot} and {a^{-1}} is the inverse of element {a}, and {S} is a symmetric set of generators, then the Cayley graph {Cay(\Gamma, S)} is the graph whose vertices are the elements of {\Gamma} and whose edges are the pairs {(a,b)} such that {a\cdot b^{-1} \in S}.
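To make the definition concrete, here is a minimal Python sketch (the function name and its arguments are my own, not a standard API) that builds the edge set of {Cay(\Gamma,S)} from the group operation:

    def cayley_graph(elements, op, inv, S):
        # Edge set of Cay(Gamma, S): {a, b} is an edge iff op(a, inv(b)) is in S.
        # S is assumed symmetric (closed under inverses), so the graph is undirected.
        S = set(S)
        return {frozenset((a, b)) for a in elements for b in elements
                if a != b and op(a, inv(b)) in S}

    # Example: Cay(Z/8Z, {+1, -1}) is the 8-cycle.
    n = 8
    edges = cayley_graph(range(n), op=lambda a, b: (a + b) % n,
                         inv=lambda a: (-a) % n, S={1, n - 1})
    assert len(edges) == n  # each of the 8 vertices has degree 2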

When the group is Abelian, there is good news and bad news. The good news is that the eigenvectors of the graphs are completely characterized (they are the characters of {\Gamma}) and the eigenvalues are given by a nice formula. (See here and here.) The bad news is that constant-degree Cayley graphs of Abelian groups cannot be expanders.

That’s very bad news, but it is still possible to get highly expanding graphs of polylogarithmic degree as Cayley graphs of Abelian groups.

Here we will look at the extreme case of a family of graphs of degree {d_n = n/2}, where {n} is the number of vertices. Even with such high degree, the weak version of the Alon-Boppana theorem still implies that we must have {\sigma_2 \geq \Omega(\sqrt{d_n})}, and so we will be happy if we get a graph in which {\sigma_2 \leq O(\sqrt{d_n}) = O(\sqrt n)}. Highly expanding graphs of degree {n/2} are interesting because they have some of the properties of random graphs from the {G_{n,\frac 12}} distribution. In turn, graphs from {G_{n,\frac 12}} have all kinds of interesting properties with high probability, including being essentially the best known Ramsey graphs and having the kind of discrepancy property that gives seedless extractors for two independent sources. Unfortunately, these properties cannot be certified by spectral methods. The graph that we will study today is believed to have these stronger properties, but there is no known promising approach to prove such conjectures, so we will content ourselves with proving strong spectral expansion.

The graph is the Paley graph. If {p} is a prime with {p \equiv 1 \pmod 4}, {{\mathbb Z} / p {\mathbb Z}} is the group of addition modulo {p}, and {Q} is the set of nonzero elements of {{\mathbb Z}/ p{\mathbb Z}} of the form {r^2 \bmod p}, then the graph is just {Cay ( {\mathbb Z}/p{\mathbb Z}, Q)}. (The condition {p \equiv 1 \pmod 4} makes {-1} a quadratic residue, so that {Q} is symmetric and the graph is undirected.) That is, the graph has a vertex {v} for each {v\in \{ 0,\ldots,p-1\}}, and two vertices {a,b} are adjacent if and only if there is an {r\in \{ 1,\ldots,p-1\}} such that {a-b \equiv r^2 \pmod p}.
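As a sanity check, here is a short numpy sketch of mine (not part of the original post) that builds the Paley graph and verifies numerically the facts we are after: the graph is {(p-1)/2}-regular, and every eigenvalue other than the degree has absolute value {(1+\sqrt p)/2 = O(\sqrt{d})}:

    import numpy as np

    p = 13  # a prime with p % 4 == 1, so that Q = -Q and the graph is undirected
    Q = {(r * r) % p for r in range(1, p)}
    A = np.array([[1.0 if (a - b) % p in Q else 0.0 for b in range(p)]
                  for a in range(p)])
    eig = np.sort(np.linalg.eigvalsh(A))[::-1]
    d = (p - 1) // 2
    assert np.isclose(eig[0], d)  # the top eigenvalue is the degree
    # every other eigenvalue is (-1 +- sqrt(p))/2:
    assert np.isclose(max(abs(eig[1]), abs(eig[-1])), (1 + np.sqrt(p)) / 2)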


Let {G=(V,E)} be a {d}-regular graph, and let

\displaystyle  d= \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n

be the eigenvalues of the adjacency matrix {A} of {G}, counted with multiplicities and sorted in descending order.

How good can the spectral expansion of {G} be?

1. Simple Bounds

The simplest bound comes from a trace method. We have

\displaystyle  trace(A^2) = \sum_i \lambda_i^2

by using one definition of the trace and

\displaystyle  trace(A^2) = \sum_{v} (A^2)_{v,v} \geq dn

using the other definition and observing that {(A^2)_{v,v}} counts the paths that go from {v} to {v} in two steps, of which there are at least {d}: follow an edge to a neighbor of {v}, then follow the same edge back. (There could be more if {G} has multiple edges or self-loops.)

So we have

\displaystyle  dn \leq d^2 + \sum_{i=2,\ldots,n} \lambda_i ^2

and so

\displaystyle  \max_{i=2,\ldots,n} |\lambda_i | \geq \sqrt d \cdot \sqrt{\frac {n-d}{n-1}}

The condition {d \leq n(1-\epsilon)} is necessary to get lower bounds of {\Omega(\sqrt d)}; in the clique, for example, we have {d=n-1} and {\lambda_2 = \lambda_n = -1}.
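As a quick numerical illustration of the bound (a sketch of mine; it assumes the networkx package is available):

    import networkx as nx
    import numpy as np

    n, d = 100, 6
    G = nx.random_regular_graph(d, n, seed=0)
    eig = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))[::-1]
    second_abs = max(abs(eig[1]), abs(eig[-1]))
    bound = np.sqrt(d) * np.sqrt((n - d) / (n - 1))
    assert second_abs >= bound  # the trace bound above
    print(second_abs, bound, 2 * np.sqrt(d - 1))  # compare with 2 sqrt(d-1)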

A trace argument does not give us a lower bound on {\lambda_2}, and in fact it is possible to have {\lambda_2=0} and {d= n/2}, for example in the complete bipartite graph.

If the diameter of {G} is at least 4, it is easy to see that {\lambda_2 \geq \sqrt d}. Let {a,b} be two vertices at distance at least 4. Define a vector {x} as follows: {x_a = 1}, {x_v = 1/\sqrt d} if {v} is a neighbor of {a}, {x_b=-1} and {x_v = - 1/\sqrt d} if {v} is a neighbor of {b}. Note that there cannot be any edge between a neighbor of {a} and a neighbor of {b}. Then we see that {||x||^2 = 4}, that {x^T A x \geq 4\sqrt d} (because there are {2d} edges, each counted twice, that give a contribution of {1/\sqrt d} to {\sum_{u,v} A_{uv} x_u x_v}), and that {x} is orthogonal to {(1,\ldots,1)}, so the Rayleigh quotient of {x} is at least {\sqrt d} and hence {\lambda_2 \geq \sqrt d}.
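The test vector is easy to check numerically; here is a sketch of mine on the cycle {C_{20}}, where {d=2} and antipodal vertices are at distance 10:

    import networkx as nx
    import numpy as np

    n, d = 20, 2
    G = nx.cycle_graph(n)
    A = nx.to_numpy_array(G)
    a, b = 0, 10
    x = np.zeros(n)
    x[a], x[b] = 1.0, -1.0
    for v in G.neighbors(a): x[v] = 1 / np.sqrt(d)
    for v in G.neighbors(b): x[v] = -1 / np.sqrt(d)
    assert abs(x.sum()) < 1e-9                 # x is orthogonal to (1, ..., 1)
    print((x @ A @ x) / (x @ x), np.sqrt(d))   # Rayleigh quotient >= sqrt(d)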

2. Nilli’s Proof of the Alon-Boppana Theorem

Nilli’s proof of the Alon-Boppana theorem gives

\displaystyle  \lambda_2 \geq 2 \sqrt{ d-1 } - O \left( \frac {\sqrt{d-1}}{diam(G)-4} \right)

where {diam(G) \geq \frac {\log |V|}{\log (d-1)}} is the diameter of {G}. This means that if one has a family of (constant) degree-{d} graphs, and every graph in the family satisfies {\lambda_2 \leq \lambda}, then one must have {\lambda \geq 2 \sqrt{d -1}}. This is why families of Ramanujan graphs, in which {\lambda_2 \leq 2 \sqrt{d-1}}, are special, and so hard to construct, or even to prove existence of.

Friedman proves a stronger bound, in which the error term goes down with the square of the diameter. Friedman's proof is the one presented in the Hoory-Linial-Wigderson survey. I like Nilli's proof, even if it is a bit messier than Friedman's, because it starts off with something that clearly is going to work. The first two or three ways you try to establish the bound don't work (believe me, I tried, because I didn't see why some steps in the proof had to be that way), but eventually you find the right way to break up the estimate, and it works.

So here is Nilli’s proof.

While preparing for the spectral graph theory boot camp, which starts this Tuesday at the Simons Institute, I collected the lecture notes of my class on graph partitioning and expanders in one file.

There are no references and, most likely, plenty of errors. If you use the notes and find mistakes, please let me know by either emailing luca at berkeley or leaving a comment here.

In preparation for the special program on spectral graph theory at the Simons Institute, which starts in a week, I have been reading up on the parts of the theory that I don’t know much about: the spectrum of random graphs, properties of highly expanding graphs, spectral sparsification, and so on.

I have been writing some notes for myself, and here is something that bothers me: what do you call the second largest, in absolute value, eigenvalue of the adjacency matrix of a graph, without resorting to the sentence I just wrote? And how do you denote it?

I have noticed that the typical answer to the first question is “second eigenvalue,” but this is a problem when it creates confusion with the actual second largest eigenvalue of the adjacency matrix, which could be a very different quantity. The answer to the second question seems to be either a noncommittal “{\lambda}” or a rather problematic “{\lambda_2}.”

For my own use, I have started to use the notation {\lambda_{2abs}}, which can certainly use some improvement, but I am still at a loss concerning terminology.

Perhaps one should start from where this number comes from. Its important property is that, if the graph is {d}-regular, has {n} vertices, and has adjacency matrix {A}, then this number is the spectral norm of {A - \frac dn J} (where {J} is the matrix with ones everywhere), so it measures the distance of {A} from the “perfect {d}-regular expander” in a norm that is useful to reason about cuts and also tractable to compute.
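In code the equivalence is immediate; here is a small numpy check of mine on the Petersen graph, whose adjacency matrix has eigenvalues {3, 1, -2}:

    import networkx as nx
    import numpy as np

    G = nx.petersen_graph()                    # 3-regular on n = 10 vertices
    A = nx.to_numpy_array(G)
    n, d = 10, 3
    eig = np.sort(np.linalg.eigvalsh(A))[::-1]
    second_abs = max(abs(eig[1]), abs(eig[-1]))        # = 2 here
    J = np.ones((n, n))
    # the second largest absolute eigenvalue is the spectral norm of A - (d/n) J
    assert np.isclose(second_abs, np.linalg.norm(A - (d / n) * J, 2))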

So, since it is the spectral norm of a modification of the adjacency matrix, how about calling it the <adjective> spectral norm? I would vote for shifted spectral norm, because I would think of subtracting {\frac dn J} as a sort of shift.

Please, do better in the comments!

Judge Rolf Treu has ruled that teachers’ tenure violates California students’ constitutional right to an education.

The California Constitution, which I am now reading for the first time, has an entire section on education (also, an entire section on water); it specifically mandates state funding for the University of California, and it states, right at the beginning, a fundamental right to privacy. (Using the word “privacy,” which does not occur in the US Constitution.) Yay California Constitution!

Back to Judge Treu: he has found that the right to an education implies a right to good teachers, and that tenure causes students to have bad teachers. Hence tenure is unconstitutional. Also, bad teachers disproportionately end up in districts with poor and minority students, so tenure is unconstitutional on equal protection grounds as well. I am now looking forward to conservative California judges striking down other laws and regulations that affect the educational opportunities of poor and minority students, including prop 209.

This week, the topic of my online course on graph partitioning and expanders is the computation of approximate eigenvalues and eigenvectors with the power method.

If M is a positive semidefinite matrix (a symmetric matrix all of whose eigenvalues are nonnegative), then the power method is simply to pick a random vector x\in \{ -1,+1 \}^n and compute y:= M^k x. If k is of the order of \frac 1 \epsilon \log \frac n \epsilon, then one has a constant probability that

\frac {y^T M y}{y^T y} \geq (1-\epsilon) \max_{x} \frac {x^T M x}{x^T x} = (1-\epsilon) \lambda_1

where \lambda_1 is the largest eigenvalue of M. If we are interested in the Laplacian matrix L = I - \frac 1d A of a d-regular graph, where A is the adjacency matrix of the graph, this gives a way to compute an approximation of the largest eigenvalue, and a vector of approximately maximum Rayleigh quotient, which is useful to approximate Max Cut, but not to apply spectral partitioning algorithms. For those, we need a vector that approximates the eigenvector of the second smallest eigenvalue.
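Here is a minimal numpy sketch of the basic procedure (the per-step normalization is my addition, to avoid overflow; it rescales the vector without changing its direction, so Rayleigh quotients are unaffected):

    import numpy as np

    def power_method(M, eps=0.01, rng=np.random.default_rng(0)):
        # y = M^k x for a random x in {-1,+1}^n, with k ~ (1/eps) log(n/eps)
        n = M.shape[0]
        x = rng.choice([-1.0, 1.0], size=n)
        k = int(np.ceil((1 / eps) * np.log(n / eps)))
        for _ in range(k):
            x = M @ x
            x /= np.linalg.norm(x)   # rescaling only; the direction is unchanged
        return x  # (x^T M x)/(x^T x) >= (1 - eps) lambda_1 with constant probability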

Equivalently, we want to approximate the second largest eigenvalue of the adjacency matrix A. The power method is easy to adjust to compute the second largest eigenvalue instead of the largest (if we know an eigenvector of the largest eigenvalue): after you pick the random vector, subtract the component of the vector that is parallel to the eigenvector of the largest eigenvalue. In the case of the adjacency matrix of a regular graph, subtract from every coordinate of the random vector the average of the coordinates.

The adjacency matrix is not positive semidefinite, but we can adjust it to be by adding a multiple of the identity matrix. For example we can work with \frac 12 I + \frac 1{2d} A. Then the power method reduces to the following procedure: pick randomly x \sim \{ -1,1\}^n, then subtract \sum_i x_i/n from every entry of x, then repeat the following process k = O\left( \frac 1 \epsilon \log \frac n \epsilon \right) times: for every entry i, assign x_i := \frac 12 x_i + \frac 1 {2d} \sum_{j: (i,j) \in E} x_j, that is, replace the value that the vector assigns to vertex i with a convex combination of the current value and the average of the current values of the neighbors. (Note that one iteration can be executed in time O(|V|+|E|).)
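In code, the procedure just described looks as follows (a sketch of mine; adj[i] is assumed to be the list of neighbors of vertex i in a d-regular graph):

    import numpy as np

    def approx_second_eigenvector(adj, d, eps=0.01, rng=np.random.default_rng(0)):
        n = len(adj)
        x = rng.choice([-1.0, 1.0], size=n)
        x -= x.mean()                # remove the component parallel to (1, ..., 1)
        k = int(np.ceil((1 / eps) * np.log(n / eps)))
        for _ in range(k):           # each iteration costs O(|V| + |E|)
            x = 0.5 * x + np.array([sum(x[j] for j in adj[i])
                                    for i in range(n)]) / (2 * d)
        return x / np.linalg.norm(x)

    # Example: the cycle with 31 vertices mentioned below
    n = 31
    adj = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
    x = approx_second_eigenvector(adj, d=2)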

The problem is that if we started from a graph whose Laplacian matrix has second smallest eigenvalue \lambda_2, the matrix \frac 12 I + \frac 1{2d} A has second largest eigenvalue 1- \frac {\lambda_2}2, and if the power method finds a vector of Rayleigh quotient at least (1-\epsilon) \cdot \left( 1- \frac {\lambda_2}2 \right) for \frac 12 I + \frac 1{2d} A, then that vector has Rayleigh quotient about \lambda_2 + 2\epsilon for L (if q is the Rayleigh quotient for \frac 12 I + \frac 1{2d} A, then the Rayleigh quotient for L is 2(1-q)), and unless we choose \epsilon of the same order as \lambda_2 we get nothing. This means that the number of iterations has to be about 1/\lambda_2, which can be quite large.

The video below (taken from this week’s lecture) shows how slowly the power method progresses on a small cycle with 31 vertices. It goes faster on the hypercube, which has a much larger \lambda_2.

A better way to apply the power method to find small eigenvalues of the Laplacian is to apply the power method to the pseudoinverse L^+ of the Laplacian. If the Laplacian of a connected graph has eigenvalues 0 = \lambda_1 < \lambda_2 \leq \cdots \leq \lambda_n, then the pseudoinverse L^+ has eigenvalues 0, \frac 1 {\lambda_2}, \cdots, \frac 1 {\lambda_n} with the same eigenvectors, so approximately finding the largest eigenvalue of L^+ is the same problem as approximately finding the second smallest eigenvalue of L.

Although we do not have fast algorithms to compute L^+, what we need to run the power method is, for a given x, to find the y such that L y = x, that is, to solve the linear system Ly = x in y given L and x.
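Here is a sketch of mine of this inverse power method, with conjugate gradient from scipy standing in for the specialized nearly-linear-time solvers discussed below (the projection and normalization steps in the loop are my additions):

    import numpy as np
    import networkx as nx
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg

    def approx_lambda_2(L, iters=50, rng=np.random.default_rng(0)):
        # Power method on L^+: each step solves L y = x instead of multiplying by L^+.
        # Assumes L is the Laplacian of a connected regular graph.
        n = L.shape[0]
        x = rng.choice([-1.0, 1.0], size=n)
        x -= x.mean()                 # keep x orthogonal to the kernel of L
        for _ in range(iters):
            x, _ = cg(L, x)           # the y such that L y = x
            x -= x.mean()
            x /= np.linalg.norm(x)
        return (x @ (L @ x)) / (x @ x)

    # Example: the cycle with 31 vertices, with L = I - (1/2) A
    d = 2
    L = sp.csr_matrix(nx.laplacian_matrix(nx.cycle_graph(31)), dtype=float) / d
    print(approx_lambda_2(L))         # lambda_2 = 1 - cos(2 pi / 31) ~ 0.02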

For this problem, Spielman and Teng gave an algorithm nearly linear in the number of nonzero entries of L, and new algorithms have been developed more recently (and with some promise of being practical) by Koutis, Miller and Peng, and by Kelner, Orecchia, Sidford and Zhu.

Coincidentally, just this week, Nisheeth Vishnoi has completed his monograph Lx=b on algorithms to solve such linear systems and their applications. It’s going to be great summer reading for those long days at the beach.

Long in the making, the online course on expanders starts today.

In the first week of class: what are the topics of the course, and how to prove that the eigenvalues of the adjacency matrix of a regular graph tell you how many connected components there are in the graph.

Long in the planning, my online course on graph partitioning algorithms, expanders, and random walks will start next month.

The course page is up for people to sign up. A friend of mine has compared my camera presence to Sheldon Cooper’s in “Fun with Flags,” which is sadly apt, but hopefully the material will speak for itself.

Meanwhile, I will be posting about some material that I have finally understood for the first time: the analysis of the Arora-Rao-Vazirani approximation algorithm, the Cheeger inequality in manifolds, and the use of the Selberg “3/16 theorem” to analyze expander constructions.

If you are not a fan of recorded performances, there will be a live show in Princeton at the end of June.

Last Fall, three Stanford classes were “offered online” for free: Andrew Ng’s machine learning class, Sebastian Thrun’s AI class, and Jennifer Widom’s database class. There had been interest and experiments in free online education for a long time, with the MITx initiative being a particularly significant one, but there were a few innovations in last year’s Stanford classes, and they probably contributed to their runaway success and six-digit enrollment.

One difference was that they did not post videos of in-class lectures. There was, in fact, no in-class lecture. Instead, they taped short videos, rehearsed and edited, with the content of a standard 90-minute class broken down into four ten-minute videos or so. This is about the difference between taping a play and making a movie. Then the videos came with some forms of “interactivity” (quizzes that had to be answered to continue), and they were released at the rate at which the class progressed, so that there was a community of students watching the videos at the same time and able to answer each other’s questions in forums. Finally, the videos were used in the Stanford offerings of the classes: the students were instructed to watch the videos by themselves, and during the lecture time they would solve problems, or have discussions, or have guest lectures, and so on. (In K-12 education, this is called the “flipped classroom” model, in which students take lectures at home and solve homework in class, instead of the traditional other way around.)

In the past few months, there has been a lot of thinking, and a lot of acting, about the success of this experiment. Sebastian Thrun started a company called udacity to offer online courses “branded” by the company itself, and Daphne Koller and Andrew Ng started a company called coursera to provide a platform for universities to put their courses online, and, meanwhile, Harvard and Berkeley joined MIT to create edX.

At a time when the growth of higher-education costs in the United States appears unsustainable, particularly in second-tier universities, and when the demand for high-quality higher education is exploding in the developing world, these projects have attracted a lot of interest.

While the discussion has been focused on the “summer blockbusters” of higher education, and what they should be like, who is going to produce them, how to make money from them, and so on, I would like to start a discussion on the “art house” side of things.

In universities all over the world, tens of thousands of my colleagues, after they have “served” their departments by teaching large undergraduate classes and maybe a required graduate class, get to have fun teaching a research-oriented graduate class. Their hard-earned insights into problems about which they are the world’s leading expert, be it a particular organ of the fruit fly or a certain corner of the Langlands program, are distilled into a series of lectures featuring content that cannot be found anywhere else. All for the benefit of half a dozen or a dozen students.

If these research-oriented, hyper-specialized courses were available online, they might have an audience of 20 or 30 students, rather than 100,000+, but their aggregate effect on their research communities would be, I believe, very significant.

One could also imagine such courses being co-taught by people at different universities. For example, imagine James Lee and Assaf Naor co-teaching a course on metric embeddings and approximation algorithms: they would devise a lesson plan together, each would produce half of the videos, and then at both NYU and UW the students would watch the videos and meet in class for discussions and working on problems; meanwhile study groups would probably pop up in many theory groups, of students watching the videos and working on the problem sets together.

So someone should put a research-oriented graduate course online, and see what happens. This is all to say that I plan to teach my class on graph partitioning, expander graphs, and random walks online in Winter 2013. Wish me luck!

Despite no popular demand, I have collected all the notes from CS261, the course on algorithms for combinatorial optimization problems that I taught in the past term, in one pdf file, available here, and I have created a new page to collect links to my lecture notes.

For the occasion, I have also posted a single file containing the notes from my Spring 2009 class on the foundations of cryptography. As explained in the foreword to the crypto notes, they use a definition of CCA-security that is wrong, that is, a definition that is weaker than the standard one in a way that actually allows potentially dangerous attacks. The weaker definition, however, is much simpler to define and work with, and I think it is pedagogically justified. I believe that everything else in the notes is consistent with standard definitions. As far as I know, the notes are the only place in which one can find a concrete-security treatment of zero knowledge.

