Max Cut-Gain and the Smallest Eigenvalue

In June, I wrote about my work on using spectral partitioning to approximate Max Cut.

I have now posted a revised paper with a couple of new things.

One is an improved analysis, due to Moses Charikar, of the algorithm described in the June post. Moses shows that if one starts from a graph in which the optimum cuts a 1-\epsilon fraction of edges, then the algorithm cuts at least a 1-4\sqrt \epsilon + 8\epsilon fraction of edges (and also at least half of the edges). Cutting more than a 1-\frac 2 \pi \sqrt \epsilon + o_\epsilon(1) fraction of edges is Unique-Games-hard. Minimizing over \epsilon the ratio

\displaystyle \frac{ \max \{ \ \frac 12  \ , \ (1-4\sqrt \epsilon + 8\epsilon) \ \} }{1-\epsilon}

we see that the approximation ratio of the algorithm is always at least .531.
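As a sanity check (a short numerical sketch of the minimization, not from the paper), one can evaluate this ratio on a fine grid of values of \epsilon:

```python
import numpy as np

# Charikar's guarantee: cut at least max{1/2, 1 - 4*sqrt(eps) + 8*eps} of the
# edges when the optimum cuts a 1 - eps fraction.  The approximation ratio is
# the worst case over eps of the guarantee divided by the optimum 1 - eps.
eps = np.linspace(1e-6, 0.5, 1_000_000)
guarantee = np.maximum(0.5, 1 - 4 * np.sqrt(eps) + 8 * eps)
ratio = guarantee / (1 - eps)
print(round(ratio.min(), 3))  # 0.531, attained near eps ~ 0.055
```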

The second new result answers a question raised by an anonymous commenter in June: what about Max Cut Gain?

Max Cut Gain is the variant of Max Cut in which we count the number of cut edges minus the number of uncut edges, as a fraction of the total number of edges. This means that if the Max Cut optimum cuts a \frac 12 + \frac \epsilon 2 fraction of edges, then the Max Cut Gain is \epsilon. (The name comes from the fact that we are looking at how much we can gain compared to a random solution that cuts half of the edges.)
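To make the definition concrete, here is a minimal sketch (the helper `cut_gain` is mine, not from the post):

```python
# cut_gain: (cut edges - uncut edges) / |E| for a partition given as +1/-1 labels.
def cut_gain(edges, side):
    cut = sum(1 for u, v in edges if side[u] != side[v])
    return (cut - (len(edges) - cut)) / len(edges)

# Triangle: the best cut has 2 of 3 edges, a 2/3 = 1/2 + (1/3)/2 fraction,
# so eps = 1/3 and the gain is exactly 1/3.
triangle = [(0, 1), (1, 2), (2, 0)]
print(cut_gain(triangle, {0: +1, 1: -1, 2: -1}))  # 0.333...
```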

Assuming hardness of Unique Games, this problem does not allow constant approximation, and the best one can hope for is to cut a \frac 12 + \Omega \left( \frac \epsilon {\log 1/\epsilon} \right) fraction of edges in graphs in which the optimum is \frac 12 + \epsilon. This is achieved by an algorithm by Charikar and Wirth.

In my paper I show the considerably weaker result that, using a recursive spectral partitioning algorithm, it is possible to cut a \frac 12 + \exp(-O(1/\epsilon)) fraction of edges in graphs in which the optimum is \frac 12 + \epsilon.

The main result (which gives the basic step in the recursive procedure) is a randomized rounding process such that if we are given a vector x \in {\mathbb R}^V such that

(1) \ \ \ \displaystyle \frac{-\sum_{i,j} A_{i,j} x_i x_j }{\sum_i d_i x_i^2} \geq \epsilon

we can find a vector y \in \{ -1,0,1\}^V such that

(2) \ \ \ \displaystyle \frac{-\sum_{i,j} A_{i,j} y_iy_j}{ \sum _i d_i |y_i|} \geq \exp \left(- \frac 1\epsilon \right)

We do so by finding a distribution Y over vectors \{ -1,0,1\}^V such that

(3) \ \ \ \displaystyle \frac{-{\mathbb E} \sum_{i,j} A_{i,j} Y_iY_j}{ {\mathbb E} \sum _i d_i |Y_i|} \geq \exp \left(- \frac 1\epsilon \right)
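For intuition, the two Rayleigh-type quotients in (1) and (2) can be computed directly. A minimal sketch on the 4-cycle, where the smallest adjacency eigenvalue is -2 and both quotients reach 1 because the graph is bipartite (the function names are mine):

```python
import numpy as np

# Adjacency matrix of the 4-cycle; every degree d_i is 2.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)

def quotient_1(x):
    """Left-hand side of (1): -sum_ij A_ij x_i x_j / sum_i d_i x_i^2."""
    return -(x @ A @ x) / (d @ (x * x))

def quotient_2(y):
    """Left-hand side of (2): -sum_ij A_ij y_i y_j / sum_i d_i |y_i|."""
    return -(y @ A @ y) / (d @ np.abs(y))

x = np.array([1.0, -1.0, 1.0, -1.0])  # eigenvector of the smallest eigenvalue -2
print(quotient_1(x), quotient_2(np.sign(x)))  # 1.0 1.0
```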

The main difficulty in achieving (3) is that the terms x_i x_j and {\mathbb E} Y_i Y_j can be positive and negative, and so it is not sufficient to establish that x_i x_j is well approximated by {\mathbb E} Y_i Y_j. Indeed, suppose that we could only prove that, for every i,j, {\mathbb E} Y_iY_j is within a factor of 2 of x_ix_j; then it could be that {\mathbb E} Y_iY_j = x_ix_j when x_ix_j <0, and {\mathbb E} Y_iY_j = 2x_ix_j when x_ix_j \geq 0. Then it could be the case that (1) is positive while (3) is negative.
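A toy numerical illustration of this failure mode (the weights and products are made-up numbers, chosen only to show the sign flip):

```python
import numpy as np

# Two edges with unit weight; products x_i x_j of opposite signs.
A = np.array([1.0, 1.0])
xx = np.array([-1.0, 0.9])
# Adversarial per-term factor-2 approximation: double only the positive products.
yy = np.where(xx < 0, xx, 2 * xx)

print(-(A * xx).sum() > 0)  # True: the analogue of (1) is positive
print(-(A * yy).sum() < 0)  # True: the analogue of (3) is negative
```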

It seems nearly unavoidable, then, to construct a rounding scheme in which {\mathbb E} Y_iY_j is exactly equal to x_ix_j, up to a scaling factor.

This, however, is problematic, because then {\mathbb E} |Y_i| would be proportional to |x_i|, while we need it to be proportional to x_i^2 in order to derive (3). This means that the approximation that we get depends on the ratio between the smallest and largest |x_i|, which can be arbitrarily bad. (A similar difficulty arises in the analysis of the algorithm of Charikar and Wirth.)

A better approach is to establish that, for some scaling factor c, we have

\displaystyle | c{\mathbb E} Y_iY_j - x_ix_j | \leq \delta \max \{ x_i^2,x_j^2 \}

because then we have, from (1),

\displaystyle - \sum_{i,j} A_{i,j} c{\mathbb E} Y_iY_j \geq (\epsilon - \delta) \cdot \sum_i d_i x_i^2

If we can then establish that, for some c', we have {\mathbb E} |Y_i| \leq c' x_i^2, then we deduce

\displaystyle - \sum_{i,j} A_{i,j} {\mathbb E} Y_iY_j \geq (\epsilon - \delta) \cdot \frac {1}{cc'} \sum_i d_i {\mathbb E} |Y_i|.
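Spelled out, the two bounds combine as follows (same symbols as above):

```latex
% From the term-by-term approximation:
%   -\sum_{i,j} A_{i,j}\, c\, \mathbb{E}\, Y_i Y_j \ge (\epsilon - \delta) \sum_i d_i x_i^2,
% and from \mathbb{E}|Y_i| \le c' x_i^2, i.e. x_i^2 \ge \mathbb{E}|Y_i| / c':
\begin{align*}
-\sum_{i,j} A_{i,j}\, \mathbb{E}\, Y_i Y_j
  \;&\ge\; \frac{\epsilon - \delta}{c} \sum_i d_i\, x_i^2 \\
  \;&\ge\; \frac{\epsilon - \delta}{c\,c'} \sum_i d_i\, \mathbb{E}\,|Y_i|.
\end{align*}
```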

We show the existence of a distribution Y satisfying the above inequalities with cc' \leq \delta \cdot \exp(1/\delta).

3 thoughts on “Max Cut-Gain and the Smallest Eigenvalue”

  1. Pingback: Lecture 1: Cheating with foams « tcs math - some mathematics of theoretical computer science

  2. Luca,

    Does your result imply a gap between x(A)=(.5(1+|lambda_min(1/d(A))|)) and OPT? i.e. like an integrality gap?

    Or can it be the case that x(A) is close to |E(A)| and OPT is close to half the edges?

  3. The gap between x(A) and OPT can be as bad as 1/2 + o(1).

    If your graph has two connected components, one of size o(|V|) and bipartite (say, a K_{d,d} graph) and one of size |V|-o(|V|) with a max cut of (1/2 + o(1))|E|, then x(A) is |E(A)|, because the smallest eigenvalue is -d, but the max cut is only (1/2+o(1))|E|.

    If one looks at the “primal dual” interpretation of the algorithm in my paper, then the spectral upper bound for which there is a .531 guaranteed approximation is one where you choose a set V’ of vertices, and your upper bound is |E|-|E’|+x(A’) where E’ are the edges of the subgraph induced by V’, and A’ is the adjacency matrix of the subgraph. (Not exactly, because x(A) as defined by the commenter above is defined only for regular graphs, so one has to take the proper generalization to non-regular graphs.) Probably, even after optimizing for the best V’, this upper bound is still too weak to give .878 approximation, but I have not tried to find integrality gap instances. (It can be seen that, even optimizing for the best V’, the upper bound is still at least as large as the optimum of the GW relaxation, so instances that give a .878 gap for GW also give a gap at least as strong for this upper bound.)
