In the Max Cut problem, we are given an undirected graph $G=(V,E)$ and we want to find a partition $(L,R)$ of the set of vertices such that as many edges as possible have one endpoint in $L$ and one endpoint in $R$, and are hence cut by the partition.
It is easy, as recognized since the 1970s, to find a partition that cuts half of the edges and that, thus, is at least half as good as an optimal solution. No approximation better than 1/2 was known for this problem, until Goemans and Williamson famously proved that one could achieve a .878… approximation using semidefinite programming (SDP).
No other approximation algorithm achieving an approximation asymptotically better than 1/2 is known, and it seems that a fundamental difficulty is the following. Suppose we prove that a certain algorithm achieves approximation $c > \frac 12$. Then, given a graph in which the optimum is, say, $\frac 12 + \epsilon$, the algorithm and its analysis must provide a certificate that the optimum cut in the given graph is at most $\frac{1/2+\epsilon}{c} < 1$, and there is no general technique to prove upper bounds on the Max Cut optimum of a general graph other than semidefinite programming. (And see here and here for negative results showing that large classes of linear programming relaxations are unable to give such certificates.)
Spectral techniques can prove upper bounds to Max Cut in certain cases (and can be seen as special cases of the upper bounds provided by the Goemans-Williamson relaxation).
In the simplified case in which $G$ is a $d$-regular graph, let $A$ be the adjacency matrix of $G$ and $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ be the eigenvalues of $A$; then it is easy to show that

(1) $\displaystyle maxcut(G) \leq \frac 12 - \frac{\lambda_n}{2d}$

where $maxcut(G)$ is the fraction of edges cut by an optimal solution. Unfortunately (1) does not quite have an approximate converse: there are graphs where $\lambda_n \leq -d \cdot (1-o(1))$ but $maxcut(G) \leq \frac 12 + o(1)$.
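As a quick sanity check (a toy example, not from the post), bound (1) can be computed numerically with numpy for the 5-cycle, where $d = 2$ and the true optimum cuts $4/5$ of the edges:

```python
# Bound (1) for the 5-cycle: maxcut(G) <= 1/2 - lambda_n / (2d).
import numpy as np

n, d = 5, 2
A = np.zeros((n, n))
for i in range(n):
    A[i][(i + 1) % n] = A[(i + 1) % n][i] = 1.0

lam_min = np.linalg.eigvalsh(A)[0]   # smallest eigenvalue, about -1.618
bound = 0.5 - lam_min / (2 * d)      # upper bound on the cut fraction

# The optimum of C5 cuts 4 of its 5 edges, and indeed 0.8 <= bound ~ 0.905.
print(round(bound, 3))
```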
The following fact, however, is always true and well known:
- $\lambda_n = -d$ if and only if $G$ contains a bipartite connected component.
Is there an “approximate” version of the above statement, characterizing the cases in which $d + \lambda_n$ is small? Surprisingly, as far as I know the question had not been considered before.
For comparison, the starting point of the theory of edge expansion is the related fact
- $\lambda_2 = d$ if and only if $G$ is disconnected.
This can be rephrased as:
- $\lambda_2 = d$ if and only if there is a non-empty $S \subseteq V$, $|S| \leq \frac{|V|}{2}$, such that $edges(S, V-S) = 0$.
Cheeger’s inequality characterizes the case in which $d - \lambda_2$ is small:

- If there is a non-empty $S \subseteq V$, $|S| \leq \frac{|V|}{2}$, such that $edges(S, V-S) \leq \epsilon \cdot d \cdot |S|$, then $\lambda_2 \geq d \cdot (1 - 2\epsilon)$;
- If $\lambda_2 \geq d \cdot (1-\epsilon)$, then there is a non-empty $S \subseteq V$, $|S| \leq \frac{|V|}{2}$, such that $edges(S, V-S) \leq \sqrt{2\epsilon} \cdot d \cdot |S|$.
For a subset $S \subseteq V$ and a bipartition $(L,R)$ of $S$, we say that an edge $(u,v)$ fails [to be cut by] the bipartition if $(u,v)$ is incident on $S$ but it is not the case that one endpoint is in $L$ and one endpoint is in $R$. (This means that either both endpoints are in $L$, or both endpoints are in $R$, or one endpoint is in $S$ and one endpoint is not in $S$.) Then we can express the well-known fact about $\lambda_n$ as

- $\lambda_n = -d$ if and only if there is $S$ and a bipartition of $S$ with zero failed edges.
In this new paper I prove the following approximate version:

- If there is a non-empty $S \subseteq V$ and a bipartition of $S$ with at most $\epsilon \cdot d \cdot |S|$ failed edges, then $\lambda_n \leq -d \cdot (1 - 2\epsilon)$;
- If $\lambda_n \leq -d \cdot (1-\epsilon)$, then there is a non-empty $S \subseteq V$ and a bipartition of $S$ with at most $\sqrt{2\epsilon} \cdot d \cdot |S|$ failed edges.
The following notation makes the similarity with Cheeger’s inequality clearer. Define the edge expansion of a graph as

$\displaystyle h(G) := \min_{S:\ 1 \leq |S| \leq \frac{|V|}{2}} \frac{edges(S, V-S)}{d \cdot |S|}$

Let us define the bipartiteness ratio of $G$ as

$\displaystyle \beta(G) := \min_{S,\ (L,R)} \frac{fail(S,L,R)}{d \cdot |S|}$

where the minimum is over non-empty sets $S$ and bipartitions $(L,R)$ of $S$, and $fail(S,L,R)$ is the number of failed edges; that is, $\beta(G)$ is the minimum ratio between the failed edges of a bipartition of a set $S$ and $d \cdot |S|$.
Then Cheeger’s inequality gives

$\displaystyle \frac{h^2(G)}{2} \leq 1 - \frac{\lambda_2}{d} \leq 2 \cdot h(G)$

and our results give

$\displaystyle \frac{\beta^2(G)}{2} \leq 1 + \frac{\lambda_n}{d} \leq 2 \cdot \beta(G)$
This translates into an efficient algorithm that, given a graph $G$ such that $maxcut(G) \geq 1-\epsilon$, finds a set $S$ and a bipartition of $S$ such that at least a $1 - O(\sqrt{\epsilon})$ fraction of the edges incident on $S$ are cut by the bipartition. Removing the vertices in $S$ and continuing recursively on the residual graph yields a .50769… approximation algorithm for Max Cut. (The algorithm stops making recursive calls, and uses a random partition, when the bipartition of $S$ found by the algorithm has too many failed edges.)
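The following is not the algorithm from the paper, only a minimal illustration of the underlying spectral idea: split the vertices by the sign of the eigenvector of $\lambda_n$. On a bipartite graph such as the 4-cycle this already recovers the optimal cut.

```python
# NOT the paper's algorithm (which is finer-grained): a minimal sketch that
# bipartitions vertices by the sign of the eigenvector of the smallest
# adjacency eigenvalue. On the bipartite 4-cycle it cuts every edge.
import numpy as np

def sign_cut(A):
    w, U = np.linalg.eigh(A)            # eigenvalues in ascending order
    x = U[:, 0]                         # eigenvector of the smallest one
    side = x >= 0                       # bipartition by sign
    n = len(A)
    edges = cut = 0
    for i in range(n):
        for j in range(i + 1, n):
            if A[i][j]:
                edges += 1
                cut += int(side[i] != side[j])
    return cut / edges                  # fraction of edges cut

n = 4
A = np.zeros((n, n))
for i in range(n):
    A[i][(i + 1) % n] = A[(i + 1) % n][i] = 1.0
print(sign_cut(A))                      # prints 1.0
```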
The paper is entirely a by-product of the ongoing series of posts on edge expansion: the question of the relation between spectral techniques and Max Cut was asked by a commenter, and the probabilistic view of the proof of Cheeger’s inequality that I wrote up in this post was very helpful in understanding the gap between $\lambda_n$ and $-d$.
Green, Tao and Ziegler, in their works on patterns in the primes, prove a general result of the following form: if $X$ is a set, $R$ is a, possibly very sparse, “pseudorandom” subset of $X$, and $D$ is a dense subset of $R$, then $D$ may be “modeled” by a large set $M$ which has the same density in $X$ as the density of $D$ in $R$.
They use this result with $X$ being the integers in a large interval $\{1,\ldots,N\}$, $R$ being the “almost-primes” in $X$ (integers with no small factor), and $D$ being the primes in $X$. Since the almost-primes can be proved to be “pseudorandom” in a fairly strong sense, and since the density of the primes in the almost-primes is at least an absolute constant, it follows that the primes are “indistinguishable” from a large set containing a constant fraction of all integers. Since such large sets are known to contain arbitrarily long arithmetic progressions, as proved by Szemerédi, Green and Tao are able to prove that the primes too must contain arbitrarily long arithmetic progressions. Such large sets are also known to contain arbitrarily long “polynomial progressions,” as proved by Bergelson and Leibman, and this allows Tao and Ziegler to argue that the primes too must contain arbitrarily long polynomial progressions.
(The above account is not completely accurate, but it is not lying too much.)
As announced last October here, and here, Omer Reingold, Madhur Tulsiani, Salil Vadhan and I found a new proof of this “dense model” theorem, which uses the min-max theorem of game theory (or, depending on the language that you prefer to use, the duality of linear programming or the Hahn-Banach theorem) and was inspired by Nisan’s proof of the Impagliazzo hard-core set theorem. In complexity-theoretic applications of the theorem, our reduction has polynomial complexity, while the previous work incurred an exponential loss.
After long procrastination, we recently wrote up a paper about these results.
In the Fall, we received some feedback from additive combinatorialists that while our proof of the Green-Tao-Ziegler result was technically simpler than the original one, the language we used was hard to follow. (That’s easy to believe, because it took us a while to understand the language in which the original proof was written.) We then wrote an expository note of the proof in the analyst’s language. When we were about to release the paper and the note, we were contacted by Tim Gowers, who, last Summer, had independently discovered a proof of the Green-Tao-Ziegler results via the Hahn-Banach theorem, essentially with the same argument. (He also found other applications of the technique in additive combinatorics. The issue of polynomial complexity, which does not arise in his applications, is not considered.)
Gowers announced his results in April at a talk at the Fields Institute in Toronto. (Audio and slides are available online.)
Gowers’ paper already contains the proof presented in the “analyst language,” making our expository note not so useful any more; we have still posted it anyway because, by explaining how one translates from one notation to the other, it can be a short starting point for the computer scientist who is interested in trying to read Gowers’ paper, or for the combinatorialist who is interested in trying to read our paper.
In which we spend more than sixteen hundred words to explain a three-line proof.
In the last post, we left off with the following problem. We have a set $V$ of “vertices,” a semi-metric $d: V \times V \rightarrow \mathbb{R}_{\geq 0}$, and we want to find a distribution over sets $S \subseteq V$ such that for every two vertices $u,v$

(3) $\displaystyle \mathbb{E}\left[ |d(u,S) - d(v,S)| \right] \geq \frac{d(u,v)}{O(\log |V|)}$

where $d(u,S) := \min_{a \in S} d(u,a)$. This will give us a way to round a solution of the Leighton-Rao relaxation to an actual cut with only an $O(\log |V|)$ loss in the approximation.
Before getting to the distribution which will do the trick, it is helpful to consider a few examples.
- Example 1: all points are at distance 1 from each other.
Then $|d(u,S) - d(v,S)|$ is equal to either 0 or 1, and it is 1 if and only if $S$ contains exactly one of $u$ or $v$. If $S$ is a uniformly chosen random set, then the above condition is satisfied with probability $\frac 12$, so we have the stronger bound

$\displaystyle \mathbb{E}\left[ |d(u,S) - d(v,S)| \right] \geq \frac 12 \cdot d(u,v)$

[Indeed, even better, we have $\mathbb{E}\left[ |d(u,S) - d(v,S)| \right] = \frac 12 \cdot d(u,v)$, which is an isometric embedding up to scaling.]
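The claim of Example 1 is easy to verify by brute force over all nonempty subsets of a small point set (a toy check; the empty set, on which $d(u,S)$ is undefined, is excluded):

```python
# Example 1: all pairwise distances are 1; S ranges over all nonempty
# subsets of a 6-point space, u = 0, v = 1.
from itertools import combinations

points = range(6)
u, v = 0, 1

def d_to_set(p, S):
    return 0 if p in S else 1        # every distance equals 1

subsets = [set(c) for r in range(1, 7) for c in combinations(points, r)]
gaps = [abs(d_to_set(u, S) - d_to_set(v, S)) for S in subsets]
avg = sum(gaps) / len(subsets)

print(round(avg, 3))                 # ~0.508, i.e. about (1/2) * d(u,v)
```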
- Example 2: all points are at distance either 1 or 2 from each other.
If $S$ contains exactly one of the vertices $u,v$, then $|d(u,S) - d(v,S)| \geq 1 \geq \frac 12 \cdot d(u,v)$, and so, if we choose $S$ uniformly at random, we have

$\displaystyle \mathbb{E}\left[ |d(u,S) - d(v,S)| \right] \geq \frac 14 \cdot d(u,v)$
These examples may trick us into thinking that a uniformly chosen random set always works, but this unfortunately is not the case.
- Example 3: Within a set $A$ of size $\frac{|V|}{2}$, all distances are 1, and the same is true within $B := V - A$; the distance between elements of $A$ and elements of $B$ is a large value $D \gg 1$.
If we consider $u \in A$ and $v \in B$, then we are in trouble whenever $S$ contains elements from both sets, because then $|d(u,S) - d(v,S)| \leq 1$, while $d(u,v) = D$. If we pick $S$ uniformly at random, then $S$ will essentially always, except with exponentially small probability, contain elements from both $A$ and $B$. If, however, we pick $S$ to be a random set of size 1, then we are going to get $|d(u,S) - d(v,S)| \geq D - 1$ with probability at least $\frac 12$, which is great.

Choosing a set of size 1, however, is a disaster inside $A$ and inside $B$, where almost all the differences $|d(u,S) - d(v,S)|$ collapse to zero. For those pairs, however, we know that choosing $S$ uniformly at random works well.
The solution is thus: with probability 1/2, pick $S$ uniformly at random; with probability 1/2, pick $S$ of size 1.
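A quick simulation of Example 3 (with illustrative choices of $m = 20$ points per cluster and cross-distance $D = 100$, which are not from the post) confirms that each half of the mixture handles the pairs that the other half fails on:

```python
# Example 3 with illustrative parameters: clusters A, B of m = 20 points,
# distance 1 within a cluster and D = 100 across clusters.
import random

random.seed(0)
m, D = 20, 100.0
A = list(range(m))
B = list(range(m, 2 * m))

def dist(p, q):
    if p == q:
        return 0.0
    return 1.0 if (p < m) == (q < m) else D

def d_to_set(p, S):
    return min(dist(p, q) for q in S)

def avg_gap(u, v, sampler, trials=2000):
    total = 0.0
    for _ in range(trials):
        S = sampler()
        total += abs(d_to_set(u, S) - d_to_set(v, S))
    return total / trials

single = lambda: [random.choice(A + B)]               # |S| = 1
def uniform():
    S = [p for p in A + B if random.random() < 0.5]   # uniform random subset
    return S or [random.choice(A + B)]                # avoid the empty set

print(avg_gap(0, m, single) >= D - 1)   # cross-cluster pair: prints True
print(avg_gap(0, 1, uniform) >= 0.2)    # within-cluster pair: prints True
```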
So far we are actually getting away with $\mathbb{E}\left[ |d(u,S) - d(v,S)| \right]$ being a constant fraction of $d(u,v)$. Here is a slightly trickier case.
- Example 4: The shortest path metric in a grid.
Take two vertices $u,v$ at distance $d(u,v) = r$. We can get, say, $|d(u,S) - d(v,S)| \geq \frac r4$, provided that $S$ avoids all vertices at distance $\leq \frac r2$ from $u$, and it includes some vertex at distance $\leq \frac r4$ from $v$. In a grid, the number of vertices at distance $\leq t$ from a given vertex is $\Theta(t^2)$, so our goal is to pick $S$ so that it avoids a certain set of size $\Theta(r^2)$ and it hits another set of size $\Theta(r^2)$. If we pick $S$ to be a random set of size about $\frac{|V|}{r^2}$, both events hold with constant probability.

Now, what works for a certain distance won’t work for a different distance, so it seems we have to do something like picking a size $s$ at random from $1$ to $|V|$, and then picking a random set of size $s$. This is however too bad, because our chance of guessing the right size would only be $\frac{1}{|V|}$, while we can afford to lose only a factor of $O(\log |V|)$. The solution is to pick $t$ at random from $\{0, 1, \ldots, \log_2 |V|\}$, and then pick $S$ of size $\frac{|V|}{2^t}$. With probability $\frac{1}{O(\log |V|)}$ we get the right size of $S$, up to a factor of two.
It turns out that the last example gives a distribution that works in all cases:
- Pick $t$ at random in $\{0, 1, \ldots, \log_2 |V|\}$;
- Pick a random set $S$ so that each $v \in V$ is selected to be in $S$ independently and with probability $2^{-t}$.
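A Monte Carlo sketch of this distribution on the grid of Example 4 (parameters are illustrative; the empty set is replaced by a random singleton for simplicity):

```python
# The distribution from the text on a 16x16 grid: pick t at random in
# {0, ..., log2 n}, then keep each vertex independently with prob. 2^{-t}.
import math
import random

random.seed(1)
side = 16
V = [(x, y) for x in range(side) for y in range(side)]
n = len(V)                                  # n = 256

def dist(p, q):                             # shortest-path metric of the grid
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_to_set(p, S):
    return min(dist(p, q) for q in S)

def sample_S():
    t = random.randint(0, int(math.log2(n)))
    S = [p for p in V if random.random() < 2.0 ** -t]
    return S or [random.choice(V)]          # avoid the empty set

u, v = (0, 0), (10, 3)                      # d(u, v) = 13
est, trials = 0.0, 300
for _ in range(trials):
    S = sample_S()
    est += abs(d_to_set(u, S) - d_to_set(v, S))
est /= trials

# By the triangle inequality est <= d(u,v); the theorem says it is also
# at least a Theta(1/log n) fraction of d(u,v).
print(0 < est <= dist(u, v))                # prints True
```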
Now, it would be nice to show that (as in the examples we have seen so far) for every semi-metric $d$ and two vertices $u,v$, there is a size parameter $t$ such that, when $S$ is chosen to be a random set of density $2^{-t}$, we have $\mathbb{E}\left[ |d(u,S) - d(v,S)| \right] \geq \Omega(d(u,v))$.

This would mean that, after we lose a factor of $O(\log |V|)$ to “guess” the right density, we have the desired bound (3). Unfortunately this is too much to ask for; we shall instead work out an argument that uses contributions from all densities.
It is good to see one more example.
- Example 5: A 3-regular graph of logarithmic girth.
Let $u,v$ be two vertices whose distance $r$ is less than the girth. In the example of the grid, we considered all vertices at distance $\leq \frac r2$ from $u$ and all vertices at distance $\leq \frac r4$ from $v$; in this case, there are about $2^{r/2}$ of the former, and about $2^{r/4}$ of the latter, and it is hopeless to expect that $S$, no matter its density, can avoid all of the former, but hit some of the latter.

If, however, I consider the points at distance $\leq t$ from $u$ and the points at distance $\leq t-1$ from $v$, their numbers are off only by a constant factor, and there is a constant probability of avoiding the former and hitting the latter when $S$ has density about $2^{-t}$, in which case $|d(u,S) - d(v,S)| \geq 1$. So conditioned on each $t$ between $1$ and $\frac r2$, the expectation of $|d(u,S) - d(v,S)|$ is at least $\Omega(1)$, and, overall, the expectation is at least $\Omega\left( \frac{r}{\log |V|} \right)$.
We are now more or less ready to tackle the general case.
We look at two vertices $u,v$ at distance $d(u,v)$ and we want to estimate $\mathbb{E}\left[ |d(u,S) - d(v,S)| \right]$.
Let us estimate the contribution to the expectation coming from the case in which we choose a particular value of $t$. In such a case, there is a constant probability that

- $S$ contains none of the $2^t$ vertices closest to $u$, but at least one of the $2^{t-1}$ vertices closest to $v$ (event (1)). [Assuming the two sets are disjoint]
- $S$ contains none of the $2^t$ vertices closest to $v$, but at least one of the $2^{t-1}$ vertices closest to $u$ (event (2)). [Assuming the two sets are disjoint]
Notice that events (1) and (2) are disjoint, so we are allowed to sum their contributions to the expectation, without doing any double-counting.
Call $r^u_t$ the distance of the $2^t$-th closest vertex from $u$, and similarly $r^v_t$. Then, if event (1) happens, $d(u,S) \geq r^u_t$ and $d(v,S) \leq r^v_{t-1}$, in which case

$\displaystyle |d(u,S) - d(v,S)| \geq r^u_t - r^v_{t-1}$

We can similarly argue that if (2) happens, then

$\displaystyle |d(u,S) - d(v,S)| \geq r^v_t - r^u_{t-1}$
Call $R^u_t := \min\left\{ r^u_t, \frac{d(u,v)}{3} \right\}$ and $R^v_t := \min\left\{ r^v_t, \frac{d(u,v)}{3} \right\}$.

Let $t^*$ be the smallest $t$ such that either $r^u_{t^*} \geq \frac{d(u,v)}{3}$ or $r^v_{t^*} \geq \frac{d(u,v)}{3}$. Then for $t < t^*$ the events described in (1) and (2) are well-defined, and the contribution to the expectation is at least

$\displaystyle \Omega\left( \frac{1}{\log |V|} \right) \cdot \left( R^u_t - R^v_{t-1} + R^v_t - R^u_{t-1} \right)$

When $t = t^*$, we can verify that the contribution to the expectation is at least

$\displaystyle \Omega\left( \frac{1}{\log |V|} \right) \cdot \left( \frac{d(u,v)}{3} - R^u_{t^*-1} - R^v_{t^*-1} \right)$

And if we sum the contributions for $t = 1, \ldots, t^*$, the sum telescopes and we are left with

$\displaystyle \mathbb{E}\left[ |d(u,S) - d(v,S)| \right] \geq \Omega\left( \frac{d(u,v)}{\log |V|} \right)$
At long last, we have completed the proof.
Notice that the factor of $\log |V|$ we have lost is best possible in light of the expander example we saw in the previous post. In many examples, however, we lost only a constant factor. It is a great open question whether it is possible to lose only a constant factor whenever the metric is a shortest-path metric on a planar graph.
Given a graph $G = (V,E)$, we are trying to approximate the sparsest cut problem (which, in turn, approximates the edge expansion problem within a factor of two), defined as

$\displaystyle \phi(G) := \min_{S \subseteq V} \frac{edges(S, V-S)}{\frac{1}{|V|} \cdot |S| \cdot |V-S|}$

And we have seen that an equivalent characterization of sparsest cut is:

(1) $\displaystyle \phi(G) = \min_{x \in \mathbb{R}^V} \frac{\sum_{\{i,j\} \in E} |x_i - x_j|}{\frac{1}{|V|} \sum_{\{i,j\}} |x_i - x_j|}$
In the Leighton-Rao relaxation, we have

(2) $\displaystyle LR(G) := \min_{d \in \mathcal{M}} \frac{\sum_{\{i,j\} \in E} d(i,j)}{\frac{1}{|V|} \sum_{\{i,j\}} d(i,j)}$

where $\mathcal{M}$ is the set of all semi-metrics $d: V \times V \rightarrow \mathbb{R}$, that is, of all functions such that $d(i,j) = d(j,i) \geq 0$, $d(i,i) = 0$, and that obey the “triangle inequality”

$\displaystyle d(i,j) \leq d(i,k) + d(k,j)$

for all $i,j,k \in V$.

This is clearly a relaxation of the sparsest cut problem, because for every $x \in \mathbb{R}^V$ the “distances” $d(i,j) := |x_i - x_j|$ define indeed a semi-metric over $V$.
How bad can the relaxation be? Take a family of regular constant-degree expander graphs, that is, graphs where $h(G)$ is at least an absolute constant and the degree $k$ is a constant independent of $|V|$. Define $d(i,j)$ to be the shortest path distance between $i$ and $j$ in the graph. This is clearly a semi-metric, and we have

$\displaystyle \sum_{\{i,j\} \in E} d(i,j) = |E| = O(|V|)$

because along every edge the distance is precisely one, and

$\displaystyle \frac{1}{|V|} \sum_{\{i,j\}} d(i,j) \geq \Omega( |V| \cdot \log |V| )$

because, for every vertex $i$, at most $\frac{|V|}{2}$ other vertices can be within distance $\log_k \frac{|V|}{2}$, and so at least half the vertices are at distance $\Omega(\log |V|)$.

So we have

$\displaystyle LR(G) \leq O\left( \frac{1}{\log |V|} \right)$

even though $\phi(G) \geq \Omega(1)$, so the error in the approximation can be as bad as a factor of $\log |V|$. We shall prove that this is as bad as it gets.
Our task is then: given a feasible solution to (2), that is, a distance function $d$ defined on the vertices of $G$, find a feasible solution to sparsest cut whose cost is as small as possible, and no more than a factor of $O(\log |V|)$ off from the cost of the solution to (2) that we started from. Instead of directly looking for a cut, we shall look for a solution $x$ to (1), which is a more flexible formulation.
The general strategy will be, given $d$, to find a distribution over vectors $x$ such that for every two vertices $i,j$ we have

(3) $\displaystyle \frac{d(i,j)}{O(\log |V|)} \leq \mathbb{E}\left[ |x_i - x_j| \right] \leq d(i,j)$

Suppose that we start from a solution to (2) of cost $\rho$, that is, we have a semi-metric $d$ such that

$\displaystyle \sum_{\{i,j\} \in E} d(i,j) = \rho \cdot \frac{1}{|V|} \sum_{\{i,j\}} d(i,j)$

Then our distribution is such that

$\displaystyle \mathbb{E}\left[ \sum_{\{i,j\} \in E} |x_i - x_j| \ - \ O(\log |V|) \cdot \rho \cdot \frac{1}{|V|} \sum_{\{i,j\}} |x_i - x_j| \right] \leq 0$

and there must exist a vector $x^*$ in the support of the distribution such that

$\displaystyle \sum_{\{i,j\} \in E} |x^*_i - x^*_j| \ \leq \ O(\log |V|) \cdot \rho \cdot \frac{1}{|V|} \sum_{\{i,j\}} |x^*_i - x^*_j|$

and now we are done, because the cost of $x^*$ in (1) is at most $O(\log |V|)$ times the optimum of the Leighton-Rao relaxation.
Though this works, it seems an overkill to require condition (3) for all pairs of vertices. It seems sufficient to require

$\displaystyle \mathbb{E}\left[ |x_i - x_j| \right] \leq d(i,j)$

only for edges, and to require

$\displaystyle \mathbb{E}\left[ |x_i - x_j| \right] \geq \frac{d(i,j)}{O(\log |V|)}$

only “on average” over all pairs of vertices. It can be shown, however, that if one wants to round a generalization of the Leighton-Rao relaxation to sparsest cut with “non-uniform demands,” then any such rounding algorithm can be turned into a rounding algorithm that satisfies (3).
So how do we start from an arbitrary semi-metric $d$ and come up with a probabilistic embedding of the vertices on the real line so that their distances in the original metric are well approximated in the embedding? We will describe a method of Bourgain, whose applicability in this setting is due to Linial, London and Rabinovich.
The starting point is to see that if $A \subseteq V$ is a set of vertices, and we define

$\displaystyle x_i := d(i,A) = \min_{a \in A} d(i,a)$

then, no matter how we choose $A$, we always have

$\displaystyle |x_i - x_j| \leq d(i,j)$

[Hint: use the triangle inequality to see that $d(i,A) \leq d(i,j) + d(j,A)$ and $d(j,A) \leq d(i,j) + d(i,A)$.]
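The hinted 1-Lipschitz property is easy to confirm numerically; the following toy check uses random points in the plane with the Euclidean metric standing in for a generic semi-metric (any metric would do):

```python
# Check |d(i,A) - d(j,A)| <= d(i,j) for random sets A, with the Euclidean
# metric on random points in the plane as the metric d.
import random

random.seed(2)
pts = [(random.random(), random.random()) for _ in range(30)]

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d_to_set(p, A):
    return min(dist(p, a) for a in A)

ok = True
for _ in range(100):
    A = random.sample(pts, random.randint(1, 10))
    i, j = random.sample(pts, 2)
    ok = ok and abs(d_to_set(i, A) - d_to_set(j, A)) <= dist(i, j) + 1e-12
print(ok)                                # prints True
```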
This means that “all we have to do” is to find a distribution over sets $A$ such that for every $i,j$ we have

$\displaystyle \mathbb{E}\left[ |d(i,A) - d(j,A)| \right] \geq \frac{d(i,j)}{O(\log |V|)}$

If you want to make it sound more high-brow, you may call such a distribution a Fréchet embedding of $(V,d)$ into $\ell_1$. Constructing such a distribution will be the subject of the next post.