
In the Max Cut problem, we are given an undirected graph G=(V,E) and we want to find a partition (L,\bar L) of the set of vertices such that as many edges as possible have one endpoint in L and one endpoint in \bar L, and are hence cut by the partition.

It is easy, as recognized since the 1970s, to find a partition that cuts at least half of the edges, and which is thus at least half as good as an optimal solution. No approximation better than 1/2 was known for this problem until Goemans and Williamson famously proved that one can achieve a .878… approximation using semidefinite programming (SDP).
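As an illustration, here is a minimal sketch of the folklore local-search argument behind this fact: repeatedly move any vertex for which switching sides increases the number of cut edges; at a local optimum, every vertex has at least half of its incident edges cut, and hence at least half of all edges are cut.

def half_approx_max_cut(n, edges):
    # Local-search 1/2-approximation sketch: returns one side of the cut.
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    side = [0] * n                    # side[v] in {0, 1}
    improved = True
    while improved:                   # each switch increases the cut, so this terminates
        improved = False
        for v in range(n):
            cut = sum(1 for u in adj[v] if side[u] != side[v])
            if len(adj[v]) - cut > cut:   # switching v cuts strictly more edges
                side[v] = 1 - side[v]
                improved = True
    return [v for v in range(n) if side[v] == 1]

# On a triangle, any local optimum cuts 2 of the 3 edges.
print(half_approx_max_cut(3, [(0, 1), (1, 2), (2, 0)]))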

No other approximation algorithm achieving a ratio asymptotically better than 1/2 is known, and a fundamental difficulty seems to be the following. Suppose we prove that a certain algorithm achieves approximation > 51\%. Then, given a graph in which the optimum cuts, say, < 50.4 \% of the edges, the algorithm outputs a cut of value at most 50.4\% (since the optimum itself is below that), so the algorithm and its analysis together must provide a certificate that the optimum cut in the given graph cuts less than a 50.4/51 < 99\% fraction of the edges, and there is no known general technique, other than semidefinite programming, to prove such non-trivial upper bounds on the Max Cut optimum of a general graph. (And see here and here for negative results showing that large classes of linear programming relaxations are unable to give such certificates.)

Spectral techniques can prove upper bounds to Max Cut in certain cases (and can be seen as special cases of the upper bounds provided by the Goemans-Williamson relaxation).

In the simplified case in which G=(V,E) is a d-regular graph, let A be the adjacency matrix of G and \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n be the eigenvalues of A; then it is easy to show that

(1) \ \ \ \displaystyle maxcut(G) \leq \frac 12  + \frac 12 \cdot \frac {|\lambda_n|}{d}

where maxcut(G) is the fraction of edges cut by an optimal solution. Unfortunately (1) does not quite have an approximate converse: there are graphs where |\lambda_n|=d but maxcut(G) = \frac 12 + o_d(1). (Take, for example, the disjoint union of many copies of the clique K_{d+1} together with a single copy of the complete bipartite graph K_{d,d}.)
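As a small sanity check (an illustration assuming numpy, not part of the argument), one can compare both sides of (1) by brute force on a small d-regular graph:

import itertools
import numpy as np

def check_bound(edges, n, d):
    # Compare maxcut(G), computed by brute force, with the right-hand side of (1).
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    lam_n = np.linalg.eigvalsh(A)[0]                  # smallest (most negative) eigenvalue
    bound = 0.5 + 0.5 * abs(lam_n) / d
    best = 0
    for mask in itertools.product([0, 1], repeat=n):  # all 2^n cuts
        best = max(best, sum(1 for i, j in edges if mask[i] != mask[j]))
    return best / len(edges), bound

# The 5-cycle is 2-regular, maxcut(G) = 4/5, and \lambda_n = 2 cos(4\pi/5) \approx -1.618,
# so the bound 1/2 + 1.618/4 \approx 0.9 holds but is not tight.
print(check_bound([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)], n=5, d=2))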

The following fact, however, is always true and well known:

  • |\lambda_n|=d if and only if G contains a bipartite connected component.

Is there an “approximate” version of the above statement characterizing the cases in which d-|\lambda_n| is small? Surprisingly, as far as I know the question had not been considered before.

For comparison, the starting point of the theory of edge expansion is the related fact

  • \lambda_2=d if and only if G is disconnected.

Which can be rephrased as:

  • \lambda_2=d if and only if there is a non-empty S\subseteq V, |S| \leq |V|/2 such that edges(S,V-S)=0.

Cheeger’s inequality characterizes the case in which d-\lambda_2 is small (a sketch of the standard algorithmic rounding is given after the statement):

  • If there is a non-empty S\subseteq V, |S| \leq |V|/2 such that edges(S,V-S) \leq \epsilon \cdot d \cdot |S|, then \lambda_2 \geq d\cdot (1-2\epsilon);
  • If \lambda_2 \geq d\cdot (1-\epsilon) then there is a non-empty S\subseteq V, |S| \leq |V|/2 such that edges(S,V-S) \leq \sqrt{2 \epsilon} \cdot d \cdot |S|.
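The second inequality is proved constructively by the standard "sweep cut": sort the vertices according to an eigenvector of \lambda_2 and take the best threshold cut. Here is a minimal sketch of that rounding, assuming numpy (the textbook procedure restated as code, not the probabilistic argument from the earlier posts):

import numpy as np

def sweep_cut(A, d):
    # Sort vertices by an eigenvector of lambda_2 and return the smaller side S of
    # the threshold cut minimizing edges(S, V-S) / (d |S|).
    n = A.shape[0]
    x = np.linalg.eigh(A)[1][:, -2]          # eigenvector of the second largest eigenvalue
    order = np.argsort(x)
    best_ratio, best_S = float('inf'), None
    for k in range(1, n):
        S = order[:k] if k <= n - k else order[k:]   # the smaller side of the threshold cut
        inside = np.zeros(n, dtype=bool)
        inside[S] = True
        crossing = A[inside][:, ~inside].sum()       # edges(S, V-S)
        ratio = crossing / (d * len(S))
        if ratio < best_ratio:
            best_ratio, best_S = ratio, S
    return best_ratio, best_S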

For a subset S\subseteq V and a bipartition (L, R=S-L) of S, we say that an edge (i,j) fails [to be cut by] the bipartition if (i,j) is incident on S but it is not the case that one endpoint is in L and one endpoint is in R. (This means that either both endpoints are in L, or both endpoints are in R, or one endpoint is in S and one endpoint is not in S.) Then we can express the well-known fact about \lambda_n as

  • |\lambda_n|=d if and only if there is S\subseteq V and a bipartition of S with zero failed edges.

In this new paper I prove the following approximate version

  • If there is a non-empty S\subseteq V, and a bipartition of S with at most \epsilon \cdot d \cdot |S| failed edges, then |\lambda_n| \geq d\cdot (1-4\epsilon);
  • If |\lambda_n| \geq d\cdot (1-\epsilon), then there is a non-empty S\subseteq V, and a bipartition of S with at most \sqrt{2 \epsilon} \cdot d \cdot |S| failed edges.

The following notation makes the similarity with Cheeger’s inequality clearer. Define the edge expansion of a graph G as

\displaystyle h(G) = \min_{S\subseteq V, \ |S| \leq \frac {|V|}{2} } \frac {edges(S,V-S)} {d|S|}

Let us define the bipartiteness ratio of G as

\displaystyle \beta(G) = \min_{S, \ L \subseteq S, \ R=S-L} \frac{edges(L) + edges(R) + edges(S,V-S)}{d|S|}

that is, as the minimum, over sets S and bipartitions (L,R) of S, of the ratio between the number of failed edges and d|S|.

Then Cheeger’s inequality gives

\displaystyle \frac 12 \cdot \frac{d-\lambda_2}{d}  \leq h(G) \leq \sqrt{2 \cdot \frac{d-\lambda_2}{d} }

and our results give

\displaystyle \frac 14 \cdot \frac{d-|\lambda_n|}{d} \leq \beta(G) \leq \sqrt{2 \cdot \frac{d-|\lambda_n|}{d}}

This translates into an efficient algorithm that, given a graph G such that maxcut(G) \geq 1-\epsilon, finds a set S and a bipartition of S such that at least a 1- 4\sqrt{\epsilon} fraction of the edges incident on S are cut by the bipartition. Removing the vertices in S and continuing recursively on the residual graph yields a .50769… approximation algorithm for Max Cut. (The algorithm stops making recursive calls, and uses a random partition, when the bipartition of S found by the algorithm has too many failed edges.)
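Here is a rough, self-contained sketch of that recursive structure, assuming numpy; the inner step is a two-sided sweep of an eigenvector of the smallest eigenvalue (negative coordinates go to L, positive coordinates go to R, and we sweep a threshold on the absolute value). The exact rounding procedure and thresholds of the paper may differ; this is only meant to convey the shape of the algorithm.

import numpy as np

def two_sided_sweep(A):
    # Sweep an eigenvector x of the smallest eigenvalue: for a threshold t,
    # let L = {i : x_i <= -t}, R = {i : x_i >= t}, S = L u R, and keep the
    # threshold minimizing the fraction of failed edges among edges incident on S.
    n = A.shape[0]
    x = np.linalg.eigh(A)[1][:, 0]
    best = None
    for t in sorted(set(np.abs(x))):
        if t == 0:
            continue
        L, R = np.where(x <= -t)[0], np.where(x >= t)[0]
        S = np.concatenate([L, R])
        if len(S) == 0:
            continue
        inside = np.zeros(n, dtype=bool)
        inside[S] = True
        within = A[np.ix_(L, L)].sum() / 2 + A[np.ix_(R, R)].sum() / 2  # uncut edges inside L and R
        cut = A[np.ix_(L, R)].sum()                                     # edges cut by (L, R)
        boundary = A[S][:, ~inside].sum()                               # edges leaving S
        failed, incident = within + boundary, within + boundary + cut
        if incident > 0 and (best is None or failed / incident < best[3]):
            best = (S, L, R, failed / incident)
    return best

def recursive_max_cut(A, fail_threshold=0.25):       # this threshold is an arbitrary illustration
    n = A.shape[0]
    side = np.zeros(n, dtype=int)
    active = np.arange(n)                            # vertices of the residual graph
    while len(active) > 0 and A[np.ix_(active, active)].sum() > 0:
        S, L, R, failed = two_sided_sweep(A[np.ix_(active, active)])
        if failed > fail_threshold:                  # too many failed edges: random bipartition
            side[active] = np.random.randint(2, size=len(active))
            break
        side[active[L]] = 0
        side[active[R]] = 1
        active = active[np.setdiff1d(np.arange(len(active)), S)]
    return side                                      # side[v] in {0,1} defines the cut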

The paper is entirely a by-product of the ongoing series of posts on edge expansion: the question of the relation between spectral techniques and Max Cut was asked by a commenter, and the probabilistic view of the proof of Cheeger’s inequality that I wrote up in this post was very helpful in understanding the gap between \lambda_n and -d.

Green, Tao and Ziegler, in their works on patterns in the primes, prove a general result of the following form: if X is a set, R is a, possibly very sparse, "pseudorandom" subset of X, and D is a dense subset of R, then D may be "modeled" by a large set M which has the same density in X as the density of D in R.

They use this result with X being the integers in a large interval \{ 1,\ldots, N\}, R being the "almost-primes" in X (integers with no small factor), and D being the primes in X. Since the almost-primes can be proved to be "pseudorandom" in a fairly strong sense, and since the density of the primes in the almost-primes is at least an absolute constant, it follows that the primes are "indistinguishable" from a large set M containing a constant fraction of all integers. Since such large sets are known, by Szemeredi's theorem, to contain arbitrarily long arithmetic progressions, Green and Tao are able to prove that the primes too must contain arbitrarily long arithmetic progressions. Such large sets are also known to contain arbitrarily long "polynomial progressions," as proved by Bergelson and Leibman, and this allows Tao and Ziegler to argue that the primes too must contain arbitrarily long polynomial progressions.

(The above account is not completely accurate, but it is not lying too much.)

As announced last October here, and here, Omer Reingold, Madhur Tulsiani, Salil Vadhan and I found a new proof of this "dense model" theorem, which uses the min-max theorem of game theory (or, depending on the language that you prefer to use, the duality of linear programming or the Hahn-Banach theorem) and was inspired by Nisan's proof of the Impagliazzo hard-core set theorem. In complexity-theoretic applications of the theorem, our reduction has polynomial complexity, while the previous work incurred an exponential loss.

As discussed here and here, we also show how to use the Green-Tao-Ziegler techniques of “iterative partitioning” to give a different proof of Impagliazzo’s theorem.

After long procrastination, we recently wrote up a paper about these results.

In the Fall, we received some feedback from additive combinatorialists that while our proof of the Green-Tao-Ziegler result was technically simpler than the original one, the language we used was hard to follow. (That’s easy to believe, because it took us a while to understand the language in which the original proof was written.) We then wrote an expository note of the proof in the analyst’s language. When we were about to release the paper and the note, we were contacted by Tim Gowers, who, last Summer, had independently discovered a proof of the Green-Tao-Ziegler results via the Hahn-Banach theorem, essentially with the same argument. (He also found other applications of the technique in additive combinatorics. The issue of polynomial complexity, which does not arise in his applications, is not considered.)

Gowers announced his results in April at a talk at the Fields Institute in Toronto. (Audio and slides are available online.)

Gowers’ paper already contains the proof presented in the "analyst's language," making our expository note not so useful any more; we have posted it anyway because, by explaining how one translates from one notation to the other, it can be a short starting point for the computer scientist who is interested in trying to read Gowers’ paper, or for the combinatorialist who is interested in trying to read our paper.

In which we spend more than sixteen hundred words to explain a three-line proof.

In the last post, we left off with the following problem. We have a set V of n “vertices,” a semi-metric d: V\times V \rightarrow {\mathbb R}, and we want to find a distribution over sets A\subseteq V such that for every two vertices i,j

(1) \ \ \ \displaystyle {\mathbb E} | d(A,i) - d(A,j) | \geq \frac 1 {O(\log n)}  \cdot d(i,j)

where

\displaystyle d(A,i) := \min_{a\in A} d(a,i)

This will give us a way to round a solution of the Leighton-Rao relaxation to an actual cut with only an O(\log n) loss in the approximation.

Before getting to the distribution which will do the trick, it is helpful to consider a few examples.

  • Example 1: all points are at distance 1 from each other.

    Then |d(A,i)-d(A,j)| is equal to either 0 or 1, and it is 1 if and only if A contains exactly one of i or j. If A is a uniformly chosen random set, then the above condition is satisfied with probability 1/2, so we have the stronger bound

    \displaystyle {\mathbb E} | d(A,i) - d(A,j) | \geq \frac 12 \cdot d(i,j)

    [Indeed, even better, we have {\mathbb E} | d(A,i) - d(A,j) | = \frac 12 \cdot d(i,j), which is an isometric embedding.]

  • Example 2: all points are at distance either 1 or 2 from each other.

    If A contains exactly one of the vertices i,j, then |d(A,i)-d(A,j)| \geq 1; this event has probability 1/2 when A is chosen uniformly at random, and since d(i,j) \leq 2 we have

    \displaystyle {\mathbb E} | d(A,i) - d(A,j) | \geq \frac 14 \cdot d(i,j)

These examples may trick us into thinking that a uniformly chosen random set always works, but this unfortunately is not the case.

  • Example 3: Within a set S of size n/2, all distances are 1, and the same is true within V-S; the distance between elements of S and elements of V-S is n.

    If we consider i \in S and j \in V-S, then we are in trouble whenever A contains elements from both sets, because then |d(A,i)-d(A,j)| \leq 1, while d(i,j)=n. If we pick A uniformly at random, then A will essentially always, except with exponentially small probability, contain elements from both S and V-S. If, however, we pick A to be a random set of size 1, then we are going to get |d(A,i)-d(A,j)| \geq n-2 with probability at least 1/2, which is great.

    Choosing a set of size 1, however, is a disaster inside S and inside V-S, where almost all distances |d(A,i)-d(A,j)| collapse to zero. For those pairs, however, we know that choosing A uniformly at random works well.

    The solution is thus: with probability 1/2, pick A uniformly at random; with probability 1/2, pick A of size 1.

So far we are actually getting away with {\mathbb E} | d(A,i) - d(A,j) | being a constant fraction of d(i,j). Here is a slightly trickier case.

  • Example 4: The shortest path metric in a \sqrt n \times \sqrt n grid.

    Take two vertices i,j at distance d. We can get, say, |d(A,i) - d(A,j)| \geq d/3, provided that A avoids all vertices at distance \leq 2d/3 from i, and it includes some vertex at distance \leq d/3 from j. In a grid, the number of vertices at distance \leq d from a given vertex is \Theta(d^2), so our goal is to pick A so that it avoids a certain set of size \Theta((2d/3)^2) = \Theta(d^2) and it hits another set of size \Theta( (d/3)^2) = \Theta(d^2). If we pick A to be a random set of size about n/d^2, both events hold with constant probability.

    Now, what works for one distance scale won’t work for another, so it seems we have to do something like picking t at random from 1 to n, and then picking a random set A of size n/t. This is however too wasteful, because our chance of getting a set of the right size would only be 1/n, while we can only afford to lose a factor of \log n. The solution is to pick t at random from \{ 1,2,4,\cdots,n\}, and then pick A of size n/t. With probability 1/\log n we get the right size of A, up to a factor of two.

It turns out that the last example gives a distribution that works in all cases:

  • Pick t uniformly at random in \{ 1, \ldots, \log n \}
  • Pick a random set A by including each i\in V in A independently with probability 1/2^t (a small sampling sketch is given below)
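Concretely, here is that sampling sketch (an illustration only; it conditions on A being non-empty and estimates {\mathbb E} | d(A,i) - d(A,j) | by repeated sampling):

import math
import random

def sample_A(n):
    # pick t uniformly in {1, ..., log n}, then include each vertex with probability 2^{-t}
    t = random.randint(1, max(1, int(math.log2(n))))
    return [v for v in range(n) if random.random() < 2.0 ** (-t)]

def d_set(D, A, i):
    return min(D[a][i] for a in A)

def estimate_gap(D, i, j, trials=20000):
    # empirical estimate of E |d(A,i) - d(A,j)|, conditioning on A being non-empty
    total, used = 0.0, 0
    for _ in range(trials):
        A = sample_A(len(D))
        if A:
            total += abs(d_set(D, A, i) - d_set(D, A, j))
            used += 1
    return total / used

# Example 3 above with n = 8: distance 1 inside each half, distance n across.
n = 8
D = [[0 if a == b else (1 if (a < n // 2) == (b < n // 2) else n) for b in range(n)]
     for a in range(n)]
print(estimate_gap(D, 0, n - 1))   # should be a constant fraction of d(0, n-1) = 8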

Now, it would be nice to show that (as in the examples we have seen so far) for every semi-metric d and every two vertices i,j, there is a scale t such that, when A is chosen with density 1/2^t (and hence with expected size n/2^t), we have {\mathbb E} |d(A,i)-d(A,j)| \geq \Omega(d(i,j)).

This would mean that, after we lose a factor of \log n to "guess" the right density, we have the desired bound (1). Unfortunately this is too much to ask for; we shall instead work out an argument that uses contributions from all densities.

It is good to see one more example.

  • Example 5: A 3-regular graph of logarithmic girth.

    Let i,j be two vertices whose distance d is less than the girth. In the example of the grid, we considered all vertices at distance \leq 2d/3 from i and all vertices at distance \leq d/3 from j; in this case, there are \approx 2^{2d/3} of the former, and \approx 2^{d/3} of the latter, and it is hopeless to expect that A, no matter its density, can avoid all of the former, but hit some of the latter.

    If, however, I consider the points at distance \leq \ell+1 from i and the points at distance \leq \ell from j, their numbers are off only by a constant factor, and there is a constant probability of avoiding the former and hitting the latter when t = \ell. So, conditioned on each t between 1 and d/2, the expectation of |d(A,i) - d(A,j)| is at least \Omega(1), and, overall, the expectation is at least \Omega(d/\log n).

We are now more or less ready to tackle the general case.

We look at two vertices i,j at distance d(i,j) and we want to estimate {\mathbb E} |d(A,i)-d(A,j)|.

Let us estimate the contribution to the expectation coming from the case in which we choose a particular value of t. In such a case, there is a constant probability that

  1. A contains none of the 2^{t+1}-1 vertices closest to i, but at least one of the 2^t vertices closest to j. [Assuming the two sets are disjoint]
  2. A contains none of the 2^{t+1}-1 vertices closest to j, but at least one of the 2^t vertices closest to i. [Assuming the two sets are disjoint]

Notice that events (1) and (2) are disjoint, so we are allowed to sum their contributions to the expectation, without doing any double-counting.

Call ri_k the distance of the k-th closest vertex from i, and similarly rj_k. Then, if event (1) happens, d(A,i) \geq ri_{2^{t+1}} and d(A,j) \leq rj_{2^t}, in which case

|d(A,i) - d(A,j)| \geq d(A,i) - d(A,j) \geq ri_{2^{t+1}} - rj_{2^t}

We can similarly argue that if (2) happens, then

|d(A,i) - d(A,j)| \geq d(A,j) - d(A,i) \geq rj_{2^{t+1}} - ri_{2^t}

Call r'i_{k} := \min \{ d(i,j)/3, ri_k\} and r'j_{k} := \min \{ d(i,j)/3, rj_k\}.

Let t^* be the smallest t such that either ri_{2^{t+1}}\geq d(i,j)/3 or rj_{2^{t+1}}\geq d(i,j)/3. Then for t\leq t^*-1 the events described in (1) and (2) are well-defined, and the contribution to the expectation is at least

\displaystyle \frac 1 {\log n} \cdot ( ri_{2^{t+1}} - rj_{2^t}  + rj_{2^{t+1}} - ri_{2^t} ) = \frac 1 {\log n} \cdot ( r'i_{2^{t+1}} - r'j_{2^t}  + r'j_{2^{t+1}} - r'i_{2^t} )

When t=t^*, we can verify that the contribution to the expectation is at least

\displaystyle \frac 1 {\log n} \cdot ( r'i_{2^{t^*+1}} - r'j_{2^{t^*}}  + r'j_{2^{t^*+1}} - r'i_{2^{t^*}} ) \geq \frac 1 {\log n} \cdot \left( \frac{d(i,j)}{3}  - r'j_{2^{t^*}} - r'i_{2^{t^*}} \right)

And if we sum the contributions for t=0,\cdots, t^*, the sum telescopes and we are left with

\displaystyle {\mathbb E} |d(A,i) - d(A,j) | \geq \Omega\left( \frac 1 {\log n} \right) \cdot \left( \frac{d(i,j)}{3} - r'i_{1} - r'j_1 \right)

where r'i_1 = r'j_1 = 0.
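In more detail, the per-level contribution can be rewritten as (r'i_{2^{t+1}} - r'i_{2^t}) + (r'j_{2^{t+1}} - r'j_{2^t}), so that

\displaystyle \sum_{t=0}^{t^*} \left( r'i_{2^{t+1}} - r'i_{2^t} \right) + \left( r'j_{2^{t+1}} - r'j_{2^t} \right) = r'i_{2^{t^*+1}} - r'i_1 + r'j_{2^{t^*+1}} - r'j_1 \geq \frac{d(i,j)}{3} - r'i_1 - r'j_1

where the last inequality holds because, by the definition of t^*, at least one of r'i_{2^{t^*+1}}, r'j_{2^{t^*+1}} is equal to d(i,j)/3 and the other is non-negative.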

At long last, we have completed the proof.

Notice that the O(\log n) factor we have lost is best possible in light of the expander example we saw in the previous post. In many examples, however, we lost only a constant factor. It is a great open question whether it is possible to lose only a constant factor whenever the metric is a shortest-path metric on a planar graph.

Continuing our seemingly never-ending series on approximations to edge expansion, we come to the question of how good is the Leighton-Rao relaxation.

Given a graph G=(V,E), we are trying to approximate the sparsest cut problem (which, in turn, approximates the edge expansion problem within a factor of two) defined as

\displaystyle \phi(G) := \min_{S\subseteq V} \frac {edges(S,V-S)} {\frac 1n |S| \cdot |V-S|}

And we have seen that an equivalent characterization of sparsest cut is:

(1) \ \ \ \displaystyle \phi(G) = \min_{x\in {\mathbb R}^V}  \frac{ \sum_{i,j} A(i,j) | x_i - x_j | } {\frac 1n \sum_{i,j} |x_i - x_j | }

In the Leighton-Rao relaxation, we have

(2) \ \ \ \displaystyle LR(G) := \min_{d\in {\cal M}(V)} \frac{ \sum_{i,j} A(i,j) d(i,j) }{\frac 1n \sum_{i,j} d(i,j)}

where {\cal M}(V) is the set of all semi-metrics d: V \times V \rightarrow {\mathbb R}, that is, of all functions such that d(i,j) = d(j,i) \geq 0, d(i,i)=0, and that obey the “triangle inequality”

d(i,j) \leq d(i,k) + d(k,j)

for all i,j,k \in V.

This is clearly a relaxation of the sparsest cut problem because, for every x \in {\mathbb R}^V, the "distances" |x_i - x_j| indeed define a semi-metric over V.
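To make (2) concrete: since the objective is invariant under scaling of d, one can fix the denominator to 1 and minimize the numerator, which is a linear program. The following is a small sketch, assuming the cvxpy package (any LP solver would do); it is written for readability rather than efficiency, since it lists all \sim n^3 triangle inequalities explicitly.

import cvxpy as cp
import numpy as np

def leighton_rao(A):
    # Relaxation (2) with the normalization (1/n) * sum_{i,j} d(i,j) = 1,
    # so that the optimum value equals LR(G).
    n = A.shape[0]
    d = cp.Variable((n, n), symmetric=True)
    constraints = [d >= 0, cp.diag(d) == 0, cp.sum(d) == n]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                constraints.append(d[i, j] <= d[i, k] + d[k, j])   # triangle inequality
    prob = cp.Problem(cp.Minimize(cp.sum(cp.multiply(A, d))), constraints)
    prob.solve()
    return prob.value, d.value

# Example: the 4-cycle.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
print(leighton_rao(A)[0])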

How bad can the relaxation be? Take a family of regular constant-degree expander graphs, that is graphs where \phi is a constant and the degree k is a constant independent of n=|V|. Define d(i,j) to be the shortest path distance between i and j in the graph. This is clearly a semi-metric, and we have

\displaystyle \sum_{i,j} A(i,j) d(i,j) = kn = O(n)

because along every edge the distance is precisely one, and

\displaystyle \sum_{i,j} d(i,j) = \Omega (n^2 \log_k n)

because, for every vertex i, at most k^t other vertices can be within
distance t, and so at least half the vertices are at distance \Omega( \log_k n).

So we have

\displaystyle LR(G) = O \left( \frac 1 {\log n} \right)

even though \phi(G) = \Omega(1), so the error in the approximation can be as bad as a factor of \log n. We shall prove that this is as bad as it gets.
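For a quick numerical illustration of this gap (an illustration only, using just the Python standard library), one can take the union of d random perfect matchings, which is an expander with high probability, compute the shortest-path metric by breadth-first search, and evaluate the objective of (2) on it; the resulting value, an upper bound on LR(G), decays roughly like 1/\log n:

import random
from collections import deque

def random_matching_union(n, d):
    # union of d random perfect matchings on n vertices (n even);
    # such graphs are expanders with high probability
    adj = [set() for _ in range(n)]
    for _ in range(d):
        perm = list(range(n))
        random.shuffle(perm)
        for k in range(0, n, 2):
            u, v = perm[k], perm[k + 1]
            adj[u].add(v)
            adj[v].add(u)
    return adj

def bfs_distances(adj, source):
    dist = [None] * len(adj)
    dist[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist   # assumes the graph is connected (true with high probability)

def lr_objective_of_shortest_path_metric(adj):
    n = len(adj)
    D = [bfs_distances(adj, s) for s in range(n)]
    numerator = sum(D[i][j] for i in range(n) for j in adj[i])            # = 2|E|
    denominator = sum(D[i][j] for i in range(n) for j in range(n)) / n    # (1/n) sum d(i,j)
    return numerator / denominator

for n in [64, 256, 1024]:
    print(n, lr_objective_of_shortest_path_metric(random_matching_union(n, 4)))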

Our task is then: given a feasible solution to (2), that is, a distance function defined on the vertices of G, find a feasible solution to sparsest cut whose cost is as small as possible, and no more than a O(\log n) factor off from the cost of the solution to (2) that we started from. Instead of directly looking for a cut, we shall look for a solution to (1), which is a more flexible formulation.

The general strategy will be, given d, to find a distribution X over vectors x \in {\mathbb R}^V such that for every two vertices i,j we have

(3) \ \ \ \displaystyle \Omega\left( \frac {1}{\log n} \right) \cdot d(i,j) \leq {\mathbb E} | X_i - X_j | \leq d(i,j)

Suppose that we start from a solution to (2) of cost c, that is, we have a semimetric d(\cdot,\cdot) such that

\displaystyle \sum_{i,j} A_{i,j} d(i,j) = c \cdot \frac 1n \sum_{i,j} d(i,j)

Then our distribution X is such that

\displaystyle {\mathbb E}  \sum_{i,j} A_{i,j} |X_i - X_j| \leq \sum_{i,j} A_{i,j} d(i,j)

and

\displaystyle \sum_{i,j} d(i,j) \leq O \left(  {\log n} \right) {\mathbb E} \sum_{i,j}|X_i - X_j|

so

\displaystyle {\mathbb E}  \left[ \sum_{i,j} A_{i,j} |X_i - X_j| - O(c\log n) \cdot \frac 1n \sum_{i,j} |X_i - X_j| \right] \leq 0

and there must exist a vector x in the support of the distribution X such that

\displaystyle \sum_{i,j} A_{i,j} |x_i - x_j| - O(c\log n) \cdot \frac 1n \sum_{i,j} |x_i - x_j|  \leq 0

and now we are done because the cost of x in (1) is at most O(\log n) times the optimum of the Leighton-Rao relaxation.

Though this works, it seems an overkill to require condition (3) for all pairs of vertices. It seems sufficient to require

{\mathbb E} |X_i-X_j| \leq d(i,j)

only for edges, and to require

d(i,j) \leq O(\log n) \cdot  {\mathbb E} |X_i-X_j|

only “on average” over all pairs of vertices. It can be shown, however, that if one wants to round a generalization of the Leighton-Rao relaxation to sparsest cut with “non-uniform demands,” then any rounding algorithm can be turned into a rounding algorithm that satisfies (3).

So how do we start from an arbitrary metric and come up with a probabilistic mapping of the vertices to the real line such that distances in the original metric are well approximated by distances on the line? We will describe a method of Bourgain, whose applicability in this setting is due to Linial, London and Rabinovich.

The starting point is to see that if A is a set of vertices, and we define

\displaystyle X_i := d(A,i) :=  \min_{a\in A} d(a,i)

then, no matter how we choose A, we always have

|X_i - X_j | = |d(A,i) - d(A,j) | \leq d(i,j)

[Hint: use the triangle inequality to see that d(A,j) \leq d(A,i) + d(i,j).]
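To spell the hint out: if a\in A is a vertex achieving d(a,i) = d(A,i), then

d(A,j) \leq d(a,j) \leq d(a,i) + d(i,j) = d(A,i) + d(i,j)

and, exchanging the roles of i and j, we also get d(A,i) \leq d(A,j) + d(i,j); the two bounds together give |d(A,i) - d(A,j)| \leq d(i,j).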

This means that “all we have to do” is to find a distribution over sets A such that for every i,j we have

(4) \ \ \ \displaystyle {\mathbb E} | d(A,i) - d(A,j) | \geq \Omega \left( \frac 1 {\log n} \right) \cdot d(i,j)

If you want to make it sound more high-brow, you may call such a distribution a Frechet embedding of d into \ell_1. Constructing such a distribution will be the subject of the next post.
