In June, I wrote about my work on using spectral partitioning to approximate Max Cut.

I have now posted a revised paper with a couple of new things.

One is an improved analysis, due to Moses Charikar, of the algorithm described in the June paper. Moses shows that if one starts from a graph in which the optimum cuts a 1-\epsilon fraction of edges, then the algorithm cuts at least a 1-4\sqrt \epsilon + 8\epsilon fraction of edges (and, in any case, at least half of the edges). For comparison, cutting more than a 1-\frac 2 \pi \sqrt \epsilon + o_\epsilon(1) fraction of edges is Unique-Games-hard. Optimizing the fraction

\displaystyle \frac{ \max \{ \ \frac 12  \ , \ (1-4\sqrt \epsilon + 8\epsilon) \ \} }{1-\epsilon}

we see that the approximation ratio of the algorithm is always at least .531.
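As a sanity check (this computation is not in the paper, just a numeric verification of the ratio above), one can minimize the ratio over \epsilon:

```python
import math

def guarantee(eps):
    # fraction of edges cut by the algorithm when the optimum cuts a 1 - eps fraction
    return max(0.5, 1 - 4 * math.sqrt(eps) + 8 * eps)

def ratio(eps):
    # approximation ratio: the algorithm's guarantee divided by the optimum 1 - eps
    return guarantee(eps) / (1 - eps)

# grid search over eps in (0, 1/2); the minimum is attained near eps ~ 0.055
worst = min(ratio(i / 10**6) for i in range(1, 500000))
print(round(worst, 3))  # 0.531
```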

The second new result answers a question raised by an anonymous commenter in June: what about Max Cut Gain?

Max Cut Gain is the variant of Max Cut in which the objective is the fraction of cut edges minus the fraction of uncut edges. This means that if the Max Cut optimum is \frac 12 + \frac \epsilon 2, then the Max Cut Gain is \epsilon. (The name comes from the fact that we are looking at how much we can gain compared to a random solution that cuts half of the edges.)
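For concreteness, here is a small helper (my own illustration, not from the paper) that computes the gain of a given bipartition from an edge list:

```python
def max_cut_gain(edges, side):
    """side maps each vertex to +1 or -1; returns (cut - uncut) / total."""
    cut = sum(1 for u, v in edges if side[u] != side[v])
    return (2 * cut - len(edges)) / len(edges)

# 4-cycle: the alternating bipartition cuts all 4 edges (gain 1);
# a bipartition cutting only half of the edges has gain 0
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(max_cut_gain(c4, {0: 1, 1: -1, 2: 1, 3: -1}))  # 1.0
print(max_cut_gain(c4, {0: 1, 1: 1, 2: -1, 3: -1}))  # 0.0
```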

Assuming the hardness of Unique Games, this problem does not allow constant-factor approximation, and the best one can hope for is to cut a \frac 12 + \Omega \left( \frac \epsilon {\log 1/\epsilon} \right) fraction of edges in graphs in which the optimum is \frac 12 + \epsilon. This is achieved by an algorithm of Charikar and Wirth.

In my paper I show the considerably weaker result that, using a recursive spectral partitioning algorithm, it is possible to cut a \frac 12 + \exp(-O(1/\epsilon)) fraction of edges in graphs in which the optimum is \frac 12 + \epsilon.

The main result (which gives the basic step in the recursive procedure) is a randomized rounding process: given a vector x \in {\mathbb R}^V such that

(1) \ \ \ \displaystyle \frac{-\sum_{i,j} A_{i,j} x_i x_j }{\sum_i d_i x_i^2} \geq \epsilon

we can find a vector y \in \{ -1,0,1\}^V such that

(2) \ \ \ \displaystyle \frac{-\sum_{i,j} A_{i,j} y_iy_j}{ \sum _i d_i |y_i|} \geq \exp \left(- \frac 1\epsilon \right)

We do so by finding a distribution Y over vectors \{ -1,0,1\}^V such that

(3) \ \ \ \displaystyle \frac{-{\mathbb E} \sum_{i,j} A_{i,j} Y_iY_j}{ {\mathbb E} \sum _i d_i |Y_i|} \geq \exp \left(- \frac 1\epsilon \right)

The main difficulty in achieving (3) is that the terms x_i x_j and {\mathbb E} Y_i Y_j can be positive or negative, and so it is not sufficient to establish that each x_ix_j is well approximated by {\mathbb E} Y_iY_j. Indeed, suppose that we could only prove that each {\mathbb E} Y_iY_j is within a factor of two of x_ix_j; then it could be that {\mathbb E} Y_iY_j = x_ix_j when x_ix_j < 0, and {\mathbb E} Y_iY_j = 2x_ix_j when x_ix_j \geq 0, and in that case it could happen that (1) is positive while (3) is negative.
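A toy instance (my own, not from the paper) makes this concrete: on a four-vertex path with two chords, a per-term factor-2 distortion that doubles only the positive products x_i x_j flips the sign of the quadratic form:

```python
import numpy as np

# path 0-1-2-3 plus chords {0,2} and {1,3}; x alternates signs
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]:
    A[i, j] = A[j, i] = 1.0
x = np.array([1.0, -1.0, 1.0, -1.0])

true_form = -x @ A @ x           # numerator of (1): positive here
E = np.outer(x, x)               # pretend E[Y_i Y_j] = x_i x_j ...
E[E > 0] *= 2                    # ... except positive products are doubled
# (the diagonal is doubled too, but A has zero diagonal, so it does not matter)
distorted_form = -np.sum(A * E)  # numerator of (3): now negative

print(true_form, distorted_form)  # 2.0 -2.0
```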

It seems nearly unavoidable, then, to construct a rounding scheme in which {\mathbb E} Y_iY_j is, up to a scaling factor, exactly equal to x_ix_j.

This, however, is problematic, because then we would have {\mathbb E} |Y_i| be proportional to |x_i|, while we need it to be proportional to x_i^2 in order to derive (3). This means that the approximation that we get depends on the ratio between the smallest and the largest |x_i|, which can be arbitrarily bad. (A similar difficulty arises in the analysis of the algorithm of Charikar and Wirth.)

A more careful approach is to establish that, for some scaling factor c, we have

\displaystyle | c{\mathbb E} Y_iY_j - x_ix_j | \leq \delta \max \{ x_i^2,x_j^2 \}

because then we have, from (1),

\displaystyle - \sum_{i,j} A_{i,j} c{\mathbb E} Y_iY_j \geq (\epsilon - \delta) \cdot \sum_i d_i x_i^2

If we can then establish that, for some c', we have {\mathbb E} |Y_i| \leq c' x_i^2, then we deduce

\displaystyle - \sum_{i,j} A_{i,j} {\mathbb E} Y_iY_j \geq (\epsilon - \delta) \cdot \frac {1}{cc'} \cdot {\mathbb E} \sum_i d_i |Y_i|.
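In more detail, summing the bound {\mathbb E} |Y_i| \leq c' x_i^2 with weights d_i gives

```latex
\sum_i d_i x_i^2 \;\geq\; \frac{1}{c'} \, \mathbb{E} \sum_i d_i |Y_i| ,
```

and plugging this into the previous inequality and dividing by c yields the stated bound.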

We show the existence of a distribution Y satisfying the above conditions with cc' \leq \delta \cdot \exp(1/\delta).
