CS359G Lecture 16: Constructions of Expanders

In which we give an explicit construction of expander graphs of polylogarithmic degree, state the properties of the zig-zag product of graphs, and provide an explicit construction of a family of constant-degree expanders using the zig-zag product and the polylogarithmic-degree construction.

A family of expanders is a family of graphs {G_n = (V_n,E_n)}, {|V_n|=n}, such that each graph is {d_n}-regular, and the edge-expansion of each graph is at least {h}, for an absolute constant {h} independent of {n}. Ideally, we would like to have such a construction for each {n}, although for most applications it is enough that, for some constant {c} and every {k}, the construction applies to some {n} in the interval {\{ k, k+1, \ldots, ck \}}, or even in the interval {\{ k, \ldots, ck^c\}}. We would also like the degree {d_n} to grow slowly with {n} and, ideally, to be bounded above by an explicit constant. Today we will see a simple construction in which {d_n = O(\log^2 n)} and a more complicated one in which {d_n = O(1)}.
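As a sanity check on the definition, edge expansion can be computed by brute force on tiny graphs. The sketch below (the helper name is ours, not from the lecture) uses the convention that the edge expansion of {G=(V,E)} is {\min_{S : 0 < |S| \leq |V|/2} |E(S, V-S)|/|S|}; some texts normalize by {d|S|} instead, which for {d}-regular graphs changes the value only by the constant factor {d} and so does not affect whether a family has expansion bounded away from zero.

```python
from itertools import combinations

def edge_expansion(n, edges):
    """Brute-force edge expansion: min over nonempty S with |S| <= n/2
    of |E(S, V-S)| / |S|.  Exponential in n -- only a sanity check
    for tiny graphs, not part of any real construction."""
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            S = set(S)
            cut = sum(1 for (u, v) in edges if (u in S) != (v in S))
            best = min(best, cut / len(S))
    return best

# The n-cycle has expansion O(1/n), so cycles are NOT a family of expanders:
cycle8 = [(i, (i + 1) % 8) for i in range(8)]
print(edge_expansion(8, cycle8))  # 0.5  (cut an arc of 4 vertices: 2 edges / 4)
```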

An explicit construction of a family of expanders is a construction in which {G_n} is “efficiently computable” given {n}. The weakest sense in which a construction is said to be explicit is when, given {n}, the (adjacency matrix of the) graph {G_n} can be constructed in time polynomial in {n}. A stronger requirement, which is necessary for several applications, is that given {n} and {i\in \{ 1,\ldots,n\}}, the list of neighbors of the {i}-th vertex of {G_n} can be computed in time polynomial in {\log n}.

In many explicit constructions of constant-degree expanders, the construction is extremely simple, and besides satisfying the stricter definition of “explicit” above, it is also such that the adjacency list of a vertex is given by a “closed-form formula.” The analysis of such constructions, however, usually requires very sophisticated mathematical tools.

Example 1 Let {p} be a prime, and define the graph {G_p = (V_p,E_p)} in which {V_p = \{ 0,\ldots,p-1\}}, and, for {a\in V_p - \{ 0\}}, the vertex {a} is connected to {a+1 \bmod p}, to {a-1 \bmod p} and to its multiplicative inverse {a^{-1} \bmod p}. The vertex {0} is connected to {1}, to {p-1}, and has a self-loop. Counting self-loops, the graph is 3-regular: it is the union of a cycle over {V_p} and of a matching over the {p-3} vertices {V_p - \{ 0,1,p-1 \}}; the vertices {0}, {1}, {p-1} have a self-loop each. There is a constant {h>0} such that, for each {p}, the graph {G_p} has edge expansion at least {h}. Unfortunately, no elementary proof of this fact is known. The graph {G_{59}} is shown in the picture below.
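Since the adjacency list of {G_p} is given by a closed-form formula, this construction is explicit in the stronger sense above. The following sketch (the function name is ours) computes the neighbors of a vertex in time polynomial in {\log p}, using Fermat's little theorem for the inverse:

```python
def neighbors(a, p):
    """Adjacency list of vertex a in G_p (p prime), following Example 1:
    a vertex a != 0 is joined to a+1, a-1 and a^{-1} (mod p), and
    vertex 0 is joined to 1, p-1, and itself (a self-loop)."""
    if a == 0:
        return [1, p - 1, 0]
    inv = pow(a, p - 2, p)  # a^{-1} mod p by Fermat's little theorem
    return [(a + 1) % p, (a - 1) % p, inv]

print(neighbors(2, 59))   # [3, 1, 30]   (2 * 30 = 60 = 1 mod 59)
print(neighbors(1, 59))   # [2, 0, 1]    (1 is its own inverse: a self-loop)
```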

Constructions based on the zig-zag graph product, which we shall see next, are more complicated to describe, but much simpler to analyze.

We begin by describing a building block in the construction, which is also an independently interesting construction: a family of expanders with polylogarithmic degree, which have both a very simple description and a very simple analysis.

CS261 Lecture 15: the LP of Max Flow

In which we look at the linear programming formulation of the maximum flow problem, construct its dual, and find a randomized-rounding proof of the max flow – min cut theorem.

In the first part of the course, we designed approximation algorithms “by hand,” following our combinatorial intuition about the problems. Then we looked at linear programming relaxations of the problems we worked on, and we saw that approximation algorithms for those problems could also be derived by rounding a linear programming solution. We also saw that our algorithms could be interpreted as constructing, at the same time, an integral primal solution and a feasible solution for the dual problem.

Now that we have developed exact combinatorial algorithms for a few problems (maximum flow, minimum s-t cut, global min cut, maximum matching and minimum vertex cover in bipartite graphs), we are going to look at linear programming relaxations of those problems, and use them to gain a deeper understanding of the problems and of our algorithms.

We start with the maximum flow and the minimum cut problems.

CS359G Lecture 14: ARV Rounding

In which we begin to discuss the Arora-Rao-Vazirani rounding procedure.

Recall that, in a graph {G=(V,E)} with adjacency matrix {A}, the ARV relaxation of the sparsest cut problem is the following semidefinite program.

\displaystyle  \begin{array}{lll} {\rm minimize} & \frac 1 {2|E|} \sum_{u,v} A_{u,v} || {\bf x}_u - {\bf x}_v ||^2\\ {\rm subject\ to}\\ & \sum_{u,v} || {\bf x}_u - {\bf x}_v ||^2 = {|V|^2}\\ & || {\bf x}_u - {\bf x}_v||^2 \leq || {\bf x}_u - {\bf x}_w||^2 + || {\bf x}_w - {\bf x}_v||^2 & \forall u,v,w \in V\\ & \mbox{} {\bf x}_u \in {\mathbb R}^d & \forall u\in V \end{array}

If we denote by {ARV(G)} the optimum of the relaxation, then we claimed that

\displaystyle  ARV(G) \leq \phi(G) \leq O(\sqrt{\log |V|}) \cdot ARV(G)

where the first inequality follows from the fact that {ARV(G)} is a relaxation of {\phi(G)}, and the second inequality is the result whose proof we begin to discuss today.
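To make the relaxation concrete, here is a small sketch that scores a candidate embedding: it evaluates the objective and checks the scaling and triangle-inequality constraints. The helper name is ours, the sums over {u,v} are taken over ordered pairs (an assumption about the intended convention), and of course finding the optimum requires an actual SDP solver, which this is not.

```python
import itertools, math

def arv_value(A, X):
    """Evaluate the ARV objective (1/2|E|) sum_{u,v} A[u][v] ||x_u - x_v||^2
    for a candidate embedding X (one vector per vertex), and check the
    scaling constraint sum_{u,v} ||x_u - x_v||^2 = |V|^2 and the squared
    triangle inequalities, with small numerical tolerances."""
    n = len(A)
    d2 = lambda u, v: sum((a - b) ** 2 for a, b in zip(X[u], X[v]))
    num_edges = sum(map(sum, A)) / 2
    obj = sum(A[u][v] * d2(u, v) for u in range(n) for v in range(n)) / (2 * num_edges)
    scaling_ok = abs(sum(d2(u, v) for u in range(n) for v in range(n)) - n * n) < 1e-6
    triangle_ok = all(d2(u, v) <= d2(u, w) + d2(w, v) + 1e-9
                      for u, v, w in itertools.product(range(n), repeat=3))
    return obj, scaling_ok, triangle_ok

# 4-cycle, with the integral "cut" embedding {0,1} vs {2,3}, scaled so the
# sum of squared distances over ordered pairs equals |V|^2 = 16.
A = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
X = [[0.0], [0.0], [math.sqrt(2)], [math.sqrt(2)]]
obj, scaling_ok, triangle_ok = arv_value(A, X)
print(obj, scaling_ok, triangle_ok)  # objective 1.0 (up to rounding), True, True
```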

The keyboard that goes “CLICK”

When I got a computer for my new office at Stanford last year, it came with the Apple wireless keyboard, a piece of equipment that some people like very much, and that has a handsome and minimalist design. The bluetooth connection was, however, occasionally flaky, and it seemed silly to have a battery-operated device sitting in front of a desktop. So I decided to buy a wired keyboard instead, and since everybody raves about how good it is to type on Lenovo laptops, I thought Lenovo would sell a keyboard made to feel like their laptops. Unfortunately there does not seem to be such a thing.

Searching for information about keyboards, however, I found a whole online cult devoted to the keyboards that IBM made for its PC in the 1980s: the IBM Model M keyboard. Although I never owned or used an IBM PC, I remember using similar keyboards when I was a graduate student in the mid 1990s, when we each had a terminal on our desk connected to a mainframe in the basement. The terminals had monochromatic, text-only displays, but the keyboards were good and, every time you pressed a key, they would go *CLICK*, just like the 1980s IBM keyboards. (The terminals were made by HP and were 1980s technology.)

The license/patents to make these keyboards went to Lexmark, when IBM spun off its printers/devices business. Making the keyboards, however, was not profitable because they never break — see for example this video of a Model M versus a watermelon.

Lexmark then sold the license/patents to Unicomp, an American company whose business is to make clones of the IBM Model M keyboard and other 1980s models.

So that’s what I got for my office and, while I feel rather self-conscious about showing enthusiasm for a keyboard, it is awesome. (To be precise, the keyboard I got is not an exact clone of the IBM Model M: mine has a USB cable and a “Windows” key, and it works with a Mac without drivers; the Windows key becomes the “command” key. For the purists, it is possible to buy actual IBM Model M keyboards, with a PS/2 interface that can be connected to USB via a converter, at clickykeyboards.com, where they even have “mint condition never used” ones.)

This term, as readers of in theory might have noticed, I am writing notes for two classes, which means that I am typing for roughly 15-20 hours a week, mostly at home. Usually, at home I would work using the laptop on the sofa, and use my desk for storage, but this wasn’t good this term, so I (mostly) cleared the desk, got a monitor, a mouse, another, awesome Unicomp keyboard, and hooked it all up to my MacBook Air (which has new hinges, yay!).