In June, I wrote about my work on using spectral partitioning to approximate Max Cut.

I have now posted a revised paper with a couple of new things.

One is an improved analysis, due to Moses Charikar, of the algorithm described in the June paper. Moses shows that if one starts from a graph in which the optimum cuts a $1-\epsilon$ fraction of edges, then the algorithm cuts at least a $1-4\sqrt{\epsilon}+8\epsilon$ fraction of edges (and also at least half of the edges). Cutting more than a $1-\frac{2}{\pi}\sqrt{\epsilon}$ fraction of edges is Unique-Games-hard. Optimizing the fraction

$\displaystyle \min_{\epsilon}\ \frac{\max\left\{ \frac 12 ,\ 1-4\sqrt{\epsilon}+8\epsilon \right\}}{1-\epsilon}$

we see that the approximation ratio of the algorithm is always at least $.531$.
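This optimization is easy to check numerically. The following sketch assumes that the algorithm cuts at least a $\max\{\frac12,\ 1-4\sqrt{\epsilon}+8\epsilon\}$ fraction of edges while the optimum cuts a $1-\epsilon$ fraction, and scans over $\epsilon$:

```python
import math

def ratio(eps):
    # Fraction cut by the algorithm (Charikar's analysis, plus the trivial 1/2),
    # divided by the optimum fraction 1 - eps.
    cut = max(0.5, 1 - 4 * math.sqrt(eps) + 8 * eps)
    return cut / (1 - eps)

# Scan eps over (0, 1/2) for the worst-case approximation ratio.
worst = min(ratio(i / 10**6) for i in range(1, 500000))
print(worst)  # about 0.5311, attained near eps = 0.055
```

The minimum is a bit above $.531$, attained around $\epsilon \approx 0.055$.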

The second new result answers a question raised by an anonymous commenter in June: what about Max Cut Gain?

Max Cut Gain is the variant of Max Cut in which we count *the number of cut edges minus the number of uncut edges*. This means that if the Max Cut optimum is $\left(\frac 12 + \epsilon\right)\cdot |E|$, then the Max Cut Gain is $2\epsilon\cdot |E|$. (The name comes from the fact that we are looking at how much we can *gain* compared to a random solution that cuts half of the edges.)

Assuming hardness of Unique Games, this problem does not allow a constant-factor approximation, and the best one can hope for is to cut a $\frac 12 + \Omega\left(\frac{\epsilon}{\log \frac 1\epsilon}\right)$ fraction of edges in graphs in which the optimum is $\frac 12 + \epsilon$. This is achieved by an algorithm of Charikar and Wirth.

In my paper I show the considerably weaker result that, using a recursive spectral partitioning algorithm, it is possible to cut a $\frac 12 + \exp\left(-O\left(\frac 1\epsilon\right)\right)$ fraction of edges in graphs in which the optimum is $\frac 12 + \epsilon$.

The main result (which gives the basic step in the recursive procedure) is a randomized rounding process such that if we are given a vector $x \in {\mathbb R}^V$ such that

$\displaystyle \sum_{i,j} A_{ij} x_ix_j \ \leq \ - \epsilon \sum_i d_i x_i^2 \ \ \ \ (1)$

(where $d_i := \sum_j |A_{ij}|$) we can find a vector $y \in \{ -1,0,1\}^V$ such that

$\displaystyle \sum_{i,j} A_{ij} y_iy_j \ \leq \ - \exp\left(-O\left(\tfrac 1\epsilon\right)\right) \cdot \sum_i d_i y_i^2 \ \ \ \ (2)$

We do so by finding a distribution $Y$ over vectors $y \in \{-1,0,1\}^V$ such that

$\displaystyle \frac{ \mathbb{E}\, \sum_{i,j} A_{ij} y_iy_j }{ \mathbb{E}\, \sum_i d_i y_i^2 } \ \leq \ - \exp\left(-O\left(\tfrac 1\epsilon\right)\right) \ \ \ \ (3)$

The main difficulty in achieving (3) is that the terms $A_{ij}x_ix_j$ and $A_{ij}y_iy_j$ can be positive and negative, and so it is not sufficient to establish that $\mathbb{E}\, y_iy_j$ is well approximated by $x_ix_j$. Indeed, suppose that we could prove that for every $i,j$ we have $\mathbb{E}\, y_iy_j = x_ix_j \pm \delta$; then it could be that $\mathbb{E}\, y_iy_j = x_ix_j + \delta$ when $A_{ij} > 0$, and $\mathbb{E}\, y_iy_j = x_ix_j - \delta$ when $A_{ij} < 0$. Then it could be the case that the left-hand side of (1) is negative while the expectation in (3) is positive.
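As a toy numeric illustration (all numbers made up): take a small signed weight matrix and a vector $x$ whose quadratic form is slightly negative; hypothetical per-pair errors of magnitude $\delta$, aligned with the signs of the entries, add up to $\delta \sum_{i,j} |A_{ij}|$ and flip the sign of the expectation:

```python
import numpy as np

# Made-up signed weights and a vector x with a slightly negative quadratic form.
A = np.array([[ 0.0, 1.0, -1.0],
              [ 1.0, 0.0,  0.0],
              [-1.0, 0.0,  0.0]])
x = np.array([0.5, -0.2, 0.1])

qf_x = x @ A @ x                        # sum_ij A_ij x_i x_j = -0.3
delta = 0.1
# Hypothetical errors E[y_i y_j] = x_i x_j + delta * sign(A_ij):
E_yy = np.outer(x, x) + delta * np.sign(A)
qf_y = (A * E_yy).sum()                 # = qf_x + delta * sum_ij |A_ij| = 0.1
print(qf_x, qf_y)                       # negative vs. positive
```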

It seems nearly unavoidable, then, to construct a rounding scheme such that $\mathbb{E}\, y_iy_j$ is *exactly equal*, up to a scaling factor, to $x_ix_j$.

This, however, is problematic, because then we would have $\mathbb{E}\, y_i^2$ be proportional to $|x_i|$, while we need it to be proportional to $x_i^2$ in order to derive (3). This means that the approximation that we get depends on the ratio between the smallest and largest $|x_i|$, which can be arbitrarily bad. (A similar difficulty arises in the analysis of the algorithm of Charikar and Wirth.)
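For instance, one scheme with this exact-proportionality property (a sketch for illustration, not necessarily the distribution used in the paper) rounds each coordinate independently to $\mathrm{sign}(x_i)$ with probability $|x_i|/M$, where $M = \max_i |x_i|$; then $\mathbb{E}\, y_iy_j = x_ix_j / M^2$ exactly, but $\mathbb{E}\, y_i^2 = |x_i|/M$ rather than being proportional to $x_i^2$, which is where the ratio between the smallest and largest $|x_i|$ enters. A quick Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.9, -0.5, 0.01])   # the tiny entry is what drives the loss
M = np.abs(x).max()

# Independently set y_i = sign(x_i) with probability |x_i|/M, else y_i = 0.
N = 200_000
samples = np.sign(x) * (rng.random((N, x.size)) < np.abs(x) / M)

E_y0y1 = (samples[:, 0] * samples[:, 1]).mean()  # ~ x_0 x_1 / M^2
E_y2 = (samples ** 2).mean(axis=0)               # ~ |x_i| / M, not ~ x_i^2
print(E_y0y1, x[0] * x[1] / M ** 2)
print(E_y2, np.abs(x) / M)
```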

A more careful analysis is to establish that, for a scaling factor $c$ and an error bound $\delta$, we have

$\displaystyle \mathbb{E}\, y_iy_j \ = \ c \cdot x_ix_j \ \pm \ \delta \cdot \left( x_i^2 + x_j^2 \right) \ \ \ \ (4)$

because then we have, from (1),

$\displaystyle \mathbb{E}\, \sum_{i,j} A_{ij} y_iy_j \ \leq \ - \left( c\epsilon - 2\delta \right) \cdot \sum_i d_i x_i^2 \ \ \ \ (5)$

If we can then establish that, for some $\beta$, we have $\mathbb{E}\, \sum_i d_i y_i^2 \leq \beta\, \sum_i d_i x_i^2$, then we deduce

$\displaystyle \mathbb{E}\, \sum_{i,j} A_{ij} y_iy_j \ \leq \ - \frac{c\epsilon - 2\delta}{\beta} \cdot \mathbb{E}\, \sum_i d_i y_i^2 .$
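The accounting in this step rests on the identity $\sum_{i,j} |A_{ij}|\,(x_i^2 + x_j^2) = 2 \sum_i d_i x_i^2$, with $d_i = \sum_j |A_{ij}|$: with the worst-case signing of the per-pair errors, the error term collects into exactly $2\delta \sum_i d_i x_i^2$. A quick check on a random made-up instance:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# Random symmetric signed weights with zero diagonal (a made-up instance).
A = rng.normal(size=(n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)
d = np.abs(A).sum(axis=1)            # d_i = sum_j |A_ij|
x = rng.normal(size=n)

c, delta = 0.3, 0.05
# Worst-case errors allowed by E[y_i y_j] = c x_i x_j +- delta (x_i^2 + x_j^2):
E = delta * np.sign(A) * np.add.outer(x**2, x**2)
lhs = (A * (c * np.outer(x, x) + E)).sum()
rhs = c * (x @ A @ x) + 2 * delta * (d * x**2).sum()
print(lhs, rhs)                      # equal: the errors collect into 2*delta*sum_i d_i x_i^2
```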

We show the existence of a distribution satisfying the above equations with $c$, $\delta$ and $\beta$ such that $\frac{c\epsilon - 2\delta}{\beta} \geq \exp\left(-O\left(\tfrac 1\epsilon\right)\right)$.

Pingback: Lecture 1: Cheating with foams « tcs math - some mathematics of theoretical computer science

Luca,

Does your result imply a gap between x(A) = .5(1+|lambda_min((1/d)A)|)|E(A)| and OPT? I.e., like an integrality gap?

Or can it be the case that x(A) is close to |E(A)| and OPT is close to half the edges?

The gap between x(A) and OPT can be as bad as a factor of 1/2 + o(1).

If your graph has two connected components, one of size o(|V|) and bipartite (say, a K_{d,d} graph) and one of size |V|-o(|V|) and with a max cut of (1/2 + o(1))|E|, then x(A) is |E(A)|, because the smallest eigenvalue is -d, but the max cut is (1/2+o(1))|E|.
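A toy-scale version of this example (with K_{4,4} as the bipartite component and K_5, which is also 4-regular but far from bipartite, standing in for the large component; the component sizes are of course not in the stated proportion):

```python
import numpy as np
from itertools import product

# Disjoint union of K_{4,4} and K_5; both components are 4-regular.
K44 = np.block([[np.zeros((4, 4)), np.ones((4, 4))],
                [np.ones((4, 4)), np.zeros((4, 4))]])
K5 = np.ones((5, 5)) - np.eye(5)
n = 13
A = np.zeros((n, n))
A[:8, :8] = K44
A[8:, 8:] = K5

deg = 4
lam_min = np.linalg.eigvalsh(A)[0]       # -4, from the bipartite component
num_edges = A.sum() / 2                  # 16 + 10 = 26
x_A = 0.5 * (1 + abs(lam_min) / deg) * num_edges   # spectral bound, = |E| here

# Brute-force Max Cut (fixing one vertex's side by symmetry).
best = 0.0
for signs in product([-1, 1], repeat=n - 1):
    y = np.array((1,) + signs)
    best = max(best, (A.sum() - y @ A @ y) / 4)
print(lam_min, x_A, best)
```

The spectral bound equals all 26 edges, while the true Max Cut is 22: all 16 edges of K_{4,4} plus at most 6 of the 10 edges of K_5.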

If one looks at the “primal dual” interpretation of the algorithm in my paper, then the spectral upper bound for which there is a .531 guaranteed approximation is one where you choose a set V’ of vertices, and your upper bound is |E|-|E’|+x(A’), where E’ is the set of edges of the subgraph induced by V’, and A’ is the adjacency matrix of the subgraph. (Not exactly, because x(A) as defined by the commenter above is defined only for regular graphs, so one has to take the proper generalization to non-regular graphs.) Probably, even after optimizing for the best V’, this upper bound is still too weak to give a .878 approximation, but I have not tried to find integrality gap instances. (It can be seen that, even optimizing for the best V’, the upper bound is still at least as large as the optimum of the GW relaxation, so instances that give a .878 gap for GW also give a gap at least as strong for this upper bound.)