In which we introduce the sparsest cut problem and the Leighton-Rao relaxation.
1. The Uniform Sparsest Cut Problem, Edge Expansion and $\lambda_2$
Let $G=(V,E)$ be an undirected graph with $n := |V|$ vertices.
We define the uniform sparsity of a cut $(S, V\setminus S)$ as

$$usc_G(S) := \frac{E(S,V\setminus S)}{|S|\cdot |V\setminus S|}$$

where $E(S,V\setminus S)$ is the number of edges with one endpoint in $S$ and one endpoint in $V\setminus S$ (we will omit the subscript when clear from the context) and the uniform sparsest cut of a graph is

$$usc(G) := \min_{S:\ \emptyset \neq S \subsetneq V} usc_G(S)$$
In $d$-regular graphs, approximating the uniform sparsest cut is equivalent (up to a factor of 2 in the approximation) to approximating the edge expansion, because, for every cut $(S, V\setminus S)$, we have

$$h(S) = \frac{E(S,V\setminus S)}{d\cdot \min\{ |S|, |V\setminus S|\}}$$

and, noting that, for every $S$,

$$\frac n2 \cdot \min\{|S|,|V\setminus S|\} \leq |S|\cdot|V\setminus S| \leq n\cdot \min\{|S|,|V\setminus S|\}$$

we have, for every $S$,

$$\frac dn\cdot h(S) \leq usc(S) \leq \frac{2d}{n}\cdot h(S)$$

and so

$$\frac dn\cdot h(G) \leq usc(G) \leq \frac{2d}{n}\cdot h(G)$$
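As a quick sanity check (the code and the tiny example below are ours, not part of the notes), the factor-2 relation can be verified by brute force on a small regular graph:

```python
# Brute-force sketch: compare uniform sparsity usc(S) with edge expansion h(S)
# over all cuts of a small d-regular graph, and check (d/n)h(S) <= usc(S) <= (2d/n)h(S).
from itertools import combinations

def crossing_edges(edges, S):
    """Number of edges with exactly one endpoint in S."""
    return sum(1 for u, v in edges if (u in S) != (v in S))

def check_factor_two(vertices, edges, d):
    n = len(vertices)
    for k in range(1, n):
        for S in map(set, combinations(vertices, k)):
            cut = crossing_edges(edges, S)
            usc = cut / (len(S) * (n - len(S)))        # uniform sparsity
            h = cut / (d * min(len(S), n - len(S)))    # edge expansion
            assert (d / n) * h <= usc <= (2 * d / n) * h + 1e-9

# 4-cycle: 2-regular on vertices 0..3
check_factor_two(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)], d=2)
print("factor-2 relation verified on the 4-cycle")
```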
It will be instructive to see that, in $d$-regular graphs, $\lambda_2$ (the second smallest eigenvalue of the normalized Laplacian of $G$) is a relaxation of $\frac nd \cdot usc(G)$, a fact that gives an alternative proof of the easy direction $\lambda_2 \leq 2 h(G)$ of Cheeger's inequalities.
If $G$ is $d$-regular, then $\lambda_2$ satisfies

$$\lambda_2 = \min_{x\in{\mathbb R}^V - \{ {\bf 0}\}:\ x\perp {\bf 1}} \frac{\sum_{\{u,v\}\in E} (x_u-x_v)^2}{\frac dn \sum_{\{u,v\}} (x_u-x_v)^2} = \min_{x\in{\mathbb R}^V \text{ non-constant}} \frac{\sum_{\{u,v\}\in E} (x_u-x_v)^2}{\frac dn \sum_{\{u,v\}} (x_u-x_v)^2}$$

where the sums in the denominators range over all unordered pairs of distinct vertices, the first identity above comes from the fact that, for every $x$ with $\sum_v x_v = 0$, we have $\sum_{\{u,v\}} (x_u-x_v)^2 = n\sum_v x_v^2$, so that the quantity being minimized is the Rayleigh quotient $\frac{\sum_{\{u,v\}\in E}(x_u-x_v)^2}{d\sum_v x_v^2}$, whose minimum over non-zero $x\perp {\bf 1}$ is $\lambda_2$, and the second identity follows by noticing that the cost function is invariant by addition of a multiple of ${\bf 1}$, and so optimizing over all non-constant vectors gives the same result as optimizing over all non-zero vectors orthogonal to ${\bf 1}$.
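For completeness, the identity invoked in the first step follows by expanding the sum of squared differences:

$$\sum_{\{u,v\}} (x_u - x_v)^2 = \frac 12 \sum_{u,v\in V} (x_u-x_v)^2 = n\sum_v x_v^2 - \Big(\sum_v x_v\Big)^2$$

which equals $n\sum_v x_v^2$ whenever $\sum_v x_v = 0$.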
On the other hand, the uniform sparsest cut problem can be formulated as

$$\frac nd \cdot usc(G) = \min_{x\in\{0,1\}^V \text{ non-constant}} \frac{\sum_{\{u,v\}\in E} |x_u-x_v|}{\frac dn \sum_{\{u,v\}} |x_u-x_v|} = \min_{x\in\{0,1\}^V \text{ non-constant}} \frac{\sum_{\{u,v\}\in E} (x_u-x_v)^2}{\frac dn \sum_{\{u,v\}} (x_u-x_v)^2}$$

(because the square of a number in $\{-1,0,1\}$ is the same as its absolute value) and we see that $\lambda_2$ can be considered a continuous relaxation of $\frac nd \cdot usc(G)$.
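To see why the combinatorial quantity on the left matches the optimization on the right, note that for the indicator vector $x = 1_S$ of a non-trivial cut we have

$$\frac{\sum_{\{u,v\}\in E} (1_S(u)-1_S(v))^2}{\frac dn \sum_{\{u,v\}} (1_S(u)-1_S(v))^2} = \frac{E(S,V\setminus S)}{\frac dn \cdot |S|\cdot |V\setminus S|} = \frac nd \cdot usc(S)$$

since $(1_S(u)-1_S(v))^2 = 1$ exactly for the pairs $\{u,v\}$ that cross the cut, and ranging over all non-constant $x\in\{0,1\}^V$ is the same as ranging over all non-trivial cuts $S$.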
2. The Non-Uniform Sparsest Cut Problem
In the non-uniform sparsest cut problem, we are given two graphs $G=(V,E_G)$ and $H=(V,E_H)$, over the same set of vertices; the non-uniform sparsity of a cut $(S,V\setminus S)$ is defined as

$$\phi_{G,H}(S) := \frac{E_G(S,V\setminus S)}{E_H(S,V\setminus S)}$$

and the non-uniform sparsest cut problem is the optimization problem

$$\phi(G,H) := \min_{S:\ \emptyset \neq S\subsetneq V} \phi_{G,H}(S)$$

Note that the non-uniform sparsest cut problem generalizes the uniform sparsest cut problem (consider the case in which $H$ is a clique on $V$).
If $H$ is the graph that contains the single edge $\{s,t\}$, then $\phi(G,H)$ is the undirected min-st-cut problem, in which we want to find the cut that separates two given vertices $s$ and $t$ and that minimizes the number of crossing edges.
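To spell out the two special cases: if $H = K_V$ is the clique on $V$, then every pair of vertices on opposite sides of the cut contributes exactly one edge of $H$, so

$$E_{K_V}(S,V\setminus S) = |S|\cdot |V\setminus S| \ \ \ \text{ and } \ \ \ \phi_{G,K_V}(S) = usc_G(S)$$

and if instead $E_H = \{ \{s,t\} \}$, then $E_H(S,V\setminus S)$ equals $1$ when $S$ separates $s$ from $t$ and $0$ otherwise, so minimizing $\phi_{G,H}(S)$ amounts to minimizing $E_G(S,V\setminus S)$ over the cuts that separate $s$ from $t$.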
3. The Leighton-Rao Relaxation
We can write the non-uniform sparsity of a set $S$ as

$$\phi_{G,H}(S) = \frac{\sum_{\{u,v\}\in E_G} |1_S(u) - 1_S(v)|}{\sum_{\{u,v\}\in E_H} |1_S(u)-1_S(v)|}$$

The observation that led us to see $\lambda_2$ as the optimum of a continuous relaxation of $\frac nd \cdot usc(G)$ was to observe that $|1_S(u)-1_S(v)| = (1_S(u)-1_S(v))^2$, and then relax the problem by allowing arbitrary functions $x: V\rightarrow {\mathbb R}$ instead of indicator functions $1_S: V\rightarrow \{0,1\}$.
The Leighton-Rao relaxation of sparsest cut is obtained using, instead, the following observation: if, for a set $S\subseteq V$, we define $d_S(u,v) := |1_S(u) - 1_S(v)|$, then $d_S$ defines a semi-metric over the set $V$, because $d_S$ is symmetric, $d_S(v,v) = 0$, and the triangle inequality holds (if $u$ and $v$ are on opposite sides of the cut, then every $w$ is on the opposite side of at least one of them, so $d_S(u,w) + d_S(w,v) \geq 1 = d_S(u,v)$). So we could think about allowing arbitrary semi-metrics in the expression for $\phi_{G,H}$, and define

$$LR(G,H) := \min_{d\ \text{semi-metric over } V,\ d\not\equiv 0} \frac{\sum_{\{u,v\}\in E_G} d(u,v)}{\sum_{\{u,v\}\in E_H} d(u,v)} \ \ \ \ \ (1)$$
This might seem like such a broad relaxation that there could be graphs on which $LR(G,H)$ bears no connection to $\phi(G,H)$. Instead, we will prove the fairly good estimate

$$LR(G,H) \leq \phi(G,H) \leq O(\log |V|)\cdot LR(G,H) \ \ \ \ \ (2)$$
The value $LR(G,H)$ and an optimal $d(\cdot,\cdot)$ can be computed in polynomial time by solving the following linear program

$$\begin{array}{lll} \text{minimize} & \sum_{\{u,v\}\in E_G} d_{u,v} & \\ \text{subject to} & \sum_{\{u,v\}\in E_H} d_{u,v} = 1 & \\ & d_{u,v} \leq d_{u,w} + d_{w,v} & \forall u,v,w\in V \\ & d_{u,v} \geq 0 & \forall u,v \in V \end{array} \ \ \ \ \ (3)$$

that has a variable $d_{u,v}$ for every unordered pair of distinct vertices $\{u,v\}$. Clearly, every solution to the linear program (3) is also a solution to the right-hand side of the definition (1) of the Leighton-Rao parameter, with the same cost. Also, every semi-metric can be normalized so that $\sum_{\{u,v\}\in E_H} d(u,v) = 1$ by multiplying every distance by a fixed constant, and the normalization does not change the value of the right-hand side of (1); after the normalization, the semi-metric is a feasible solution to the linear program (3), with the same cost.
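As an illustration, here is a minimal sketch of how one could write down and solve (3) with an off-the-shelf LP solver; the graph encoding, the function name `leighton_rao`, and the use of `scipy.optimize.linprog` are our choices, not part of the notes.

```python
# Sketch of the linear program (3): one variable d_{u,v} per unordered pair u < v.
from itertools import combinations
from scipy.optimize import linprog

def leighton_rao(n, edges_G, edges_H):
    pairs = list(combinations(range(n), 2))
    idx = {p: i for i, p in enumerate(pairs)}
    key = lambda u, v: idx[(min(u, v), max(u, v))]

    # objective: minimize the sum of d_{u,v} over the edges of G
    c = [0.0] * len(pairs)
    for u, v in edges_G:
        c[key(u, v)] += 1.0

    # normalization constraint: the sum of d_{u,v} over the edges of H equals 1
    a_eq = [[0.0] * len(pairs)]
    for u, v in edges_H:
        a_eq[0][key(u, v)] += 1.0

    # triangle inequalities: d_{u,v} - d_{u,w} - d_{w,v} <= 0 for all triples
    a_ub, b_ub = [], []
    for u, v in pairs:
        for w in range(n):
            if w in (u, v):
                continue
            row = [0.0] * len(pairs)
            row[key(u, v)] += 1.0
            row[key(u, w)] -= 1.0
            row[key(w, v)] -= 1.0
            a_ub.append(row)
            b_ub.append(0.0)

    # nonnegativity d_{u,v} >= 0 is linprog's default variable bound
    res = linprog(c, A_ub=a_ub, b_ub=b_ub, A_eq=a_eq, b_eq=[1.0])
    return res.fun, {p: x for p, x in zip(pairs, res.x)}

# example: G = 4-cycle, H = clique on 4 vertices (uniform sparsest cut)
value, dist = leighton_rao(4, [(0, 1), (1, 2), (2, 3), (3, 0)],
                           list(combinations(range(4), 2)))
print(value)
```

The program has $\binom n2$ variables and $O(n^3)$ triangle-inequality constraints, so it has polynomial size.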
4. An L1 Relaxation of Sparsest Cut
In the Leighton-Rao relaxation, we relax distance functions of the form $d_S(u,v) = |1_S(u)-1_S(v)|$ to completely arbitrary distance functions. Let us consider an intermediate relaxation, in which we allow distance functions that can be realized by an embedding of the vertices in an $\ell_1$ space.

Recall that, for a vector $x \in {\mathbb R}^m$, its $\ell_1$ norm is defined as $\| x \|_1 := \sum_i |x_i|$, and that this norm makes ${\mathbb R}^m$ into a metric space with the $\ell_1$ distance function

$$d(x,y) = \| x - y\|_1 = \sum_i |x_i - y_i|$$

The distance function $d_S(u,v) = |1_S(u)-1_S(v)|$ is an example of a distance function that can be realized by mapping each vertex to a real vector, and then defining the distance between two vertices as the $\ell_1$ norm of the difference of the respective vectors. Of course it is an extremely restrictive special case, in which the dimension of the vectors is one, and in which every vertex is actually mapped to either zero or one. Let us consider the relaxation of sparsest cut to arbitrary $\ell_1$ mappings, and define

$$L1sc(G,H) := \inf_{m,\ f: V\rightarrow {\mathbb R}^m} \frac{\sum_{\{u,v\}\in E_G} \| f(u)-f(v)\|_1}{\sum_{\{u,v\}\in E_H} \| f(u)-f(v)\|_1}$$
This may seem like another very broad relaxation of sparsest cut, whose optimum might be much smaller than the sparsest cut optimum. The following theorem shows that this is not the case.
Theorem 1 For every pair of graphs $G$, $H$, $L1sc(G,H) = \phi(G,H)$.
Furthermore, there is a polynomial time algorithm that, given a mapping $f: V\rightarrow {\mathbb R}^m$, finds a cut $S$ such that

$$\frac{\sum_{\{u,v\}\in E_G} |1_S(u)-1_S(v)|}{\sum_{\{u,v\}\in E_H} |1_S(u)-1_S(v)|} \leq \frac{\sum_{\{u,v\}\in E_G} \| f(u)-f(v)\|_1}{\sum_{\{u,v\}\in E_H} \| f(u)-f(v)\|_1} \ \ \ \ \ (4)$$
Proof: We use ideas that have already come up in the proof of the difficult direction of Cheeger's inequality. First, recall that for every non-negative reals $a_1,\ldots,a_m$ and positive reals $b_1,\ldots,b_m$ we have

$$\frac{a_1 + \cdots + a_m}{b_1 + \cdots + b_m} \geq \min_{i=1,\ldots,m} \frac{a_i}{b_i} \ \ \ \ \ (5)$$

as can be seen by noting that

$$\sum_i a_i = \sum_i b_i \cdot \frac{a_i}{b_i} \geq \left( \min_i \frac{a_i}{b_i} \right)\cdot \sum_i b_i$$
Let $f_i(v)$ be the $i$-th coordinate of the vector $f(v)$, thus $f(v) = (f_1(v),\ldots,f_m(v))$. Then we can decompose the right-hand side of (4) by coordinates, and write

$$\frac{\sum_{\{u,v\}\in E_G} \| f(u)-f(v)\|_1}{\sum_{\{u,v\}\in E_H} \| f(u)-f(v)\|_1} = \frac{\sum_i \sum_{\{u,v\}\in E_G} | f_i(u)-f_i(v)|}{\sum_i \sum_{\{u,v\}\in E_H} | f_i(u)-f_i(v)|} \geq \min_i \frac{\sum_{\{u,v\}\in E_G} | f_i(u)-f_i(v)|}{\sum_{\{u,v\}\in E_H} | f_i(u)-f_i(v)|}$$

where the inequality is an application of (5). This already shows that, in the definition of $L1sc(G,H)$, we can map, with no loss of generality, into 1-dimensional $\ell_1$ spaces.
Let $i^*$ be the coordinate that achieves the minimum above. Because the cost function is invariant under shifts and scalings (that is, the cost of a function $h: V\rightarrow {\mathbb R}$ is the same as the cost of $c_1\cdot h + c_2$ for every two constants $c_1 \neq 0$ and $c_2$) there is a function $g: V\rightarrow {\mathbb R}$ such that $g$ has the same cost function as $f_{i^*}$ and it has a unit-length range $\max_v g(v) - \min_v g(v) = 1$.
Let us now pick a threshold $t$ uniformly at random from the interval $[\min_v g(v), \max_v g(v)]$, and define the random set $S_t := \{ v : g(v) \leq t \}$ together with the indicator random variables $1_{S_t}(v)$.
We observe that for every pair of vertices $u,v$ we have

$$\mathop{\mathbb E}_t\ | 1_{S_t}(u) - 1_{S_t}(v) | = \mathop{\mathbb P}_t\ [\ t \text{ falls between } g(u) \text{ and } g(v)\ ] = |g(u) - g(v)|$$

(here we use the unit-length range of $g$) and so we get

$$\frac{\mathop{\mathbb E}_t \sum_{\{u,v\}\in E_G} |1_{S_t}(u)-1_{S_t}(v)|}{\mathop{\mathbb E}_t \sum_{\{u,v\}\in E_H} |1_{S_t}(u)-1_{S_t}(v)|} = \frac{\sum_{\{u,v\}\in E_G} |g(u)-g(v)|}{\sum_{\{u,v\}\in E_H} |g(u)-g(v)|}$$

Finally, by an application of (5) (writing each expectation as a convex combination over the finitely many distinct sets that $S_t$ can equal), we see that there must be a set $S$ among the possible values of $S_t$ whose sparsity is at most the ratio above, and hence such that (4) holds. Notice that the proof was completely constructive: we simply took the coordinate $f_{i^*}$ of $f$ with the lowest cost function, and then the "threshold cut" given by $f_{i^*}$ with the smallest sparsity. $\Box$
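To make the constructive nature of the argument explicit, here is a short sketch (our code, with illustrative names) of the resulting rounding procedure; it tries every coordinate and every threshold cut, which can only improve on the specific choices made in the proof.

```python
def cut_ratio(edges_G, edges_H, S):
    """Non-uniform sparsity of the cut S (infinite if no H-edge crosses it)."""
    num = sum(1 for u, v in edges_G if (u in S) != (v in S))
    den = sum(1 for u, v in edges_H if (u in S) != (v in S))
    return num / den if den > 0 else float("inf")

def round_l1_embedding(f, edges_G, edges_H):
    """f maps each vertex to a tuple of coordinates; return the best threshold cut."""
    vertices = list(f)
    m = len(next(iter(f.values())))
    best_S, best_val = None, float("inf")
    for i in range(m):                                  # 1-dimensional projection f_i
        for t in sorted({f[v][i] for v in vertices}):   # candidate thresholds
            S = {v for v in vertices if f[v][i] <= t}
            if 0 < len(S) < len(vertices):
                val = cut_ratio(edges_G, edges_H, S)
                if val < best_val:
                    best_S, best_val = S, val
    return best_S, best_val
```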
5. A Theorem of Bourgain
We will derive our main result (2) from the L1 “rounding” process of the previous section, and from the following theorem of Bourgain (the efficiency considerations are due to Linial, London and Rabinovich).
Theorem 2 (Bourgain) Let $d: V\times V \rightarrow {\mathbb R}$ be a semi-metric defined over a finite set $V$. Then there exists a mapping $f: V \rightarrow {\mathbb R}^m$ such that, for every two elements $u,v \in V$,

$$\| f(u) - f(v) \|_1 \leq d(u,v) \leq \| f(u) - f(v)\|_1 \cdot c\cdot \log |V|$$

where $c$ is an absolute constant. Given $d$, the mapping $f$ can be found with high probability in randomized polynomial time in $|V|$.
To see that the above theorem of Bourgain implies (2), consider a pair of graphs $G,H$, and let $d^*$ be the optimal solution of the Leighton-Rao relaxation of the sparsest cut problem on $G,H$, and let $f: V\rightarrow {\mathbb R}^m$ be a mapping as in Bourgain's theorem applied to $d^*$. Then

$$LR(G,H) = \frac{\sum_{\{u,v\}\in E_G} d^*(u,v)}{\sum_{\{u,v\}\in E_H} d^*(u,v)} \geq \frac{\sum_{\{u,v\}\in E_G} \| f(u)-f(v)\|_1}{c\cdot \log |V| \cdot \sum_{\{u,v\}\in E_H} \| f(u)-f(v)\|_1} \geq \frac{1}{c\cdot \log |V|}\cdot L1sc(G,H) = \frac{1}{c\cdot \log |V|}\cdot \phi(G,H)$$

which, together with the fact that $LR(G,H) \leq \phi(G,H)$ (every cut defines a feasible semi-metric of the same cost), gives (2).
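For intuition, here is a rough sketch of the randomized embedding behind Theorem 2, in the style of Linial, London and Rabinovich: each coordinate is the distance from a random subset of $V$, with subsets drawn at geometrically decreasing densities and roughly $\log |V|$ repetitions per density. The parameter choices and the scaling below are ours, and the analysis of the $O(\log |V|)$ distortion is entirely omitted.

```python
import math, random

def bourgain_embedding(V, d, reps=None, seed=0):
    """V: list of vertices; d(u, v): the given semi-metric; returns a dict v -> tuple."""
    rng = random.Random(seed)
    n = len(V)
    scales = max(1, math.ceil(math.log2(n)))        # densities 1/2, 1/4, ..., ~1/n
    reps = reps if reps is not None else max(1, math.ceil(math.log2(n)))
    subsets = []
    for j in range(1, scales + 1):
        for _ in range(reps):
            A = [v for v in V if rng.random() < 2.0 ** (-j)]
            if A:
                subsets.append(A)
    k = max(1, len(subsets))
    # coordinate for subset A: distance from v to A, scaled by the number of coordinates
    return {v: tuple(min(d(v, a) for a in A) / k for A in subsets) for v in V}
```

The scaling by the number of coordinates makes the map non-expanding, since $|d(u,A) - d(v,A)| \leq d(u,v)$ for every set $A$ by the triangle inequality; the content of Theorem 2 is that, with high probability, distances also do not shrink by more than an $O(\log |V|)$ factor.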