[I am preparing a survey talk on Unique Games for a mathematics conference, and writing a survey paper for a booklet that will be distributed at the conference. My first instinct was to write a one-line paper that would simply refer to Subhash’s own excellent survey paper. Fearing that I might come off as lazy, I am instead writing my own paper. Here is part 1 of some fragments. Comments are very welcome.]
Khot formulated the Unique Games Conjecture in a remarkably influential 2002 paper. In the subsequent eight years, the conjecture has motivated and enabled a large body of work on the computational complexity of approximating combinatorial optimization problems (the original context of the conjecture) and on the quality of approximation provided by “semidefinite programming” convex relaxations (a somewhat unexpected byproduct). Old and new questions in analysis, probability and geometry have played a key role in this development. Representative questions are:
- The central limit theorem explains what happens when we sum several independent random variables. What happens if, instead of summing them, we apply a low degree polynomial function to them?
- In Gaussian space, what is the body of a given volume with the smallest boundary?
- What are the balanced boolean functions whose value is most likely to be the same when evaluated at two random correlated inputs?
- What conditions on the Fourier spectrum of a function of several variables imply that the function essentially depends on only a few variables?
- With what distortion is it possible to embed various classes of finite metric spaces into L1?
1. The Computational Complexity of Optimization Problems
The “Unique Games Conjecture” is a statement about the computational complexity of certain computational problems.
To briefly introduce computational complexity theory, consider the 3-Coloring problem. In this computational problem we are given as input an undirected graph $G=(V,E)$, and the goal is to determine whether there is a proper 3-coloring of the vertices, that is, a function $c: V \rightarrow \{1,2,3\}$ such that $c(u) \neq c(v)$ for every edge $(u,v) \in E$. (If such proper colorings exist, we are also interested in finding one.) This is a problem that is easily solvable in finite time: just consider in some order all possible $3^n$ colorings, where $n$ is the number of vertices, and check each of them to see if it is a proper coloring. While finite, this is an unfeasible amount of computation even for very small graphs: for example a computer that examines 10 trillion colorings per second (comparable to the ability of the current fastest supercomputers) would take more than 10 billion years to consider the $3^{70}$ colorings of a graph with 70 vertices. It is easy to improve the running time to about $2^n$, and there are non-trivial ways to achieve further speed-ups, but all the known algorithms have a worst-case running time that grows like $c^n$ for a constant $c>1$, and they are unfeasible even on graphs with a few hundred vertices. Is there an algorithm whose worst-case running time is bounded above by a polynomial function of $n$?
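(To make the brute-force discussion concrete, here is a minimal Python sketch, my own and not from the survey itself: it enumerates all $3^n$ colorings exactly as described above. The 5-cycle used as a test graph is just a made-up example.)

```python
from itertools import product

def is_proper(coloring, edges):
    # a coloring is proper if no edge has both endpoints of the same color
    return all(coloring[u] != coloring[v] for u, v in edges)

def three_color(n, edges):
    # try all 3^n colorings of vertices 0, 1, ..., n-1
    for coloring in product(range(3), repeat=n):
        if is_proper(coloring, edges):
            return coloring
    return None

# a 5-cycle: 3-colorable, but not 2-colorable
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(three_color(5, edges))  # prints a proper coloring, e.g. (0, 1, 0, 1, 2)
```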
Whether such an algorithm exists is an open question, equivalent to the $P$ versus $NP$ problem, one of the six unsolved Millennium Prize Problems. One thing that we know, however, is that the 3-Coloring problem is NP-complete. This means that every computational problem in $NP$ can be "encoded" as a 3-Coloring problem; here $NP$ is the class of computational problems that involve searching an exponentially big list of possibilities for an object satisfying a polynomial-time checkable property. In particular, a polynomial-time algorithm for 3-Coloring would imply a polynomial-time algorithm for every such exponential-size-list-searching problem, that is, we would have $P=NP$.
As a concrete example, the following is true:
There is an algorithm that, given an integer $n$, runs in time polynomial in $n$ and constructs a graph with a number of vertices polynomial in $n$, such that the graph has a proper 3-coloring if and only if the Riemann Hypothesis has a proof of at most $n$ pages.

In fact, there is nothing special about the Riemann Hypothesis: for every mathematical statement $S$ and every integer $n$ it is possible to efficiently construct a graph of size polynomial in the length of $S$ and in $n$ such that the graph is 3-colorable if and only if $S$ has a proof of at most $n$ pages.
The theory of NP-completeness can also be used to reason about combinatorial optimization problems, that is problems in which one wants to pick from an exponentially long list an item that maximizes or minimizes a given cost function.
A well-known example of a combinatorial optimization problem is the Traveling Salesman Problem. This is the problem encountered, for example, by a shuttle driver who picks up eight people at the airport and has to decide in which order to take them to their homes and go back to the airport so that he is done in the shortest possible time. Formally, one is given a finite metric space with $n$ points and wants to find an ordering of the points so that the sum of pairwise distances of consecutive pairs of points in the order is minimized. This problem can be solved in time growing roughly like $n!$ by simply considering each possibility; there is a more clever algorithm whose running time grows like $2^n$, and all the known algorithms take exponential time. In fact, the problem, given an instance of TSP and a number $L$, of deciding whether there is a solution of cost at most $L$ is an NP-complete problem.
Another example is the Max Cut problem. In this problem we are given an undirected graph $G=(V,E)$ and we want to find a partition $(A, V-A)$ of the set of vertices maximizing the number of edges that are cut by the partition, that is, the number of edges that have one endpoint in $A$ and one endpoint in $V-A$. Given a graph $G$ and a number $c$, it is an NP-complete problem to determine if there is a partition such that at least $c$ edges have endpoints on different sides of the partition.
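(As an aside, here is a similarly naive Python sketch, my own illustration rather than anything from the text, that computes the exact Max Cut value by enumerating all $2^n$ partitions; it is of course only usable on tiny graphs, which is exactly the point of the discussion above.)

```python
from itertools import product

def cut_value(side, edges):
    # number of edges with endpoints on different sides of the partition
    return sum(1 for u, v in edges if side[u] != side[v])

def max_cut(n, edges):
    # exhaustive search over all 2^n bipartitions (A, V - A)
    return max(cut_value(side, edges) for side in product((0, 1), repeat=n))

# a triangle: at most 2 of its 3 edges can be cut
print(max_cut(3, [(0, 1), (1, 2), (2, 0)]))  # prints 2
```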
A third example is the Sparsest Cut Problem. Given a $d$-regular graph $G=(V,E)$ (that is, a graph in which each vertex is an endpoint of precisely $d$ edges), we again want to find a partition $(A, V-A)$, but this time we want to minimize the number of edges cut by the partition relative to how balanced the partition is. Namely, we want to find the partition that minimizes the ratio

$$ \phi(A) := \frac{ |E(A, V-A)| }{ d \cdot \min\{ |A|, |V-A| \} } $$

where $E(A,V-A)$ is the set of cut edges. The normalization is chosen so that the ratio in the optimal solution is always between 0 and 1. It is an NP-complete problem to decide, given a graph $G$ and a number $r$, whether there is a set $A$ such that $\phi(A) \leq r$.
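(For concreteness, the following small Python sketch, my own code, evaluates the sparsest-cut objective as normalized above; the 4-cycle example is made up.)

```python
def phi(A, vertices, edges, d):
    # sparsest-cut objective for a d-regular graph:
    # (number of cut edges) / (d * min(|A|, |V - A|))
    A = set(A)
    B = set(vertices) - A
    cut = sum(1 for u, v in edges if (u in A) != (v in A))
    return cut / (d * min(len(A), len(B)))

# a 4-cycle (2-regular): splitting it into two opposite pairs cuts 2 edges
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(phi({0, 1}, vertices, edges, d=2))  # 2 / (2 * 2) = 0.5
```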
Such NP-completeness results rule out (assuming $P \neq NP$) the possibility of algorithms of polynomial running time computing optimal solutions for the above problems. What about the computational complexity of finding approximate solutions?
The reductions that establish the above NP-completeness results do not offer much insight into the complexity of computing approximations. For example, the NP-completeness result for the Travelling Salesman Problem, relating it again to the task of finding a proof of the Riemann Hypothesis, gives the following implication:

There is an algorithm that, given an integer $n$, runs in time polynomial in $n$ and constructs a finite metric space $X$ with $N$ points, where $N$ is polynomial in $n$, such that all pairwise distances are either 1 or 2 and such that:

- If there is a proof of the Riemann Hypothesis of at most $n$ pages then there is a tour in $X$ of total length $N$;
- A tour of total length $N$ in $X$ can be efficiently converted into a valid proof of the Riemann Hypothesis of at most $n$ pages.
Looking more carefully into the argument, however, one sees that the following two stronger implications are true:

- If there is a proof of the Riemann Hypothesis of at most $n$ pages with at most $t$ mistakes, then it can be efficiently converted into a tour in $X$ of total length at most $N + O(t)$;
- A tour of total length $N + t$ in $X$ can be efficiently converted into a valid proof of the Riemann Hypothesis of at most $n$ pages with at most $O(t)$ mistakes.
This means that if we had, for example, an algorithm that finds in polynomial time solutions to the Traveling Salesman Problem that are at most $1\%$ worse than the optimal, then we could find an $n$-page "proof" of the Riemann Hypothesis such that at most a small constant fraction of the steps are wrong. Since it is always easy to come up with a proof that contains at most one mistake ("trivially, we have $0=1$, and the Riemann Hypothesis follows"), this doesn't cause any contradiction.

This doesn't mean that approximating the Traveling Salesman Problem is easy: it just means that the instances produced by the NP-completeness proof are easy to approximate, and if one wants to prove a statement of the form "if there is a polynomial time algorithm for the Traveling Salesman Problem that finds solutions at most $1\%$ worse than the optimum, then $P=NP$," then such a result requires reductions of a rather different form from the ones employed in the classical theory of NP-completeness.
Indeed, with few exceptions, proving intractability results for approximation problems remained an open question for two decades, until the proof of the PCP Theorem in the early 1990s. The PCP Theorem (PCP stands for Probabilistically Checkable Proofs) can be thought of as describing a format for writing mathematical proofs such that even a "proof" in which up to, say, $1\%$ of the steps are erroneous implies the validity of the statement that it is supposed to prove.
Theorem 1 (The PCP Theorem) There is a constant $\epsilon_0 > 0$ and a polynomial time algorithm that on input a graph $G$ outputs a graph $G'$ such that

- If $G$ has a proper 3-coloring then so does $G'$;
- If there is a coloring $c'$ of the vertices of $G'$ such that at least a $1-\epsilon_0$ fraction of the edges of $G'$ are properly colored by $c'$, then $G$ has a proper 3-coloring.
The contrapositive of the second property is that if $G$ is not a 3-colorable graph then $G'$ is a graph that is not even approximately 3-colorable, that is, $G'$ is a graph such that every 3-coloring of the vertices leaves more than an $\epsilon_0$ fraction of the edges monochromatic.
To see how this leads to "probabilistically checkable proofs," let us return to our initial example of whether, for a given $n$, there is an $n$-page proof of the Riemann Hypothesis. For a given $n$, we can construct in time polynomial in $n$ a graph $G$ such that an $n$-page proof exists if and only if there is a proper 3-coloring of $G$. From $G$ we can construct, again in time polynomial in $n$, a graph $G'$ as in the PCP theorem. Now, an $n$-page proof of the Riemann hypothesis can be encoded (at the cost of a polynomial blow-up in size) as a proper coloring of $G'$. Given a candidate proof, presented as a coloring of $G'$, we can think of it as having one "step" for each edge of $G'$, each step being the verification that the edge has indeed endpoints of different colors. If an $n$-page proof of the Riemann Hypothesis exists, then there is a proof, in this format, all of whose "steps" are correct; if there is no $n$-page proof of the Riemann Hypothesis, however, every alleged "proof" is now such that at least an $\epsilon_0$ fraction of the "steps" are wrong. If we sample at random, say, $100/\epsilon_0$ edges of $G'$, and check the validity of the given coloring just on those edges, we will find a mistake with extremely high probability. Thus the PCP theorem gives a way to write down mathematical proofs, and a probabilistic verification procedure to check the validity of alleged proofs that reads only a constant number of bits of the proof, such that valid proofs pass the probabilistic test with probability 1, and such that if the test passes with probability higher than, say, $1/2$, then a valid proof exists.
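(The spot-checking procedure just described can be sketched in a few lines of Python; this is only my schematic illustration, and it of course omits the whole content of the PCP Theorem, namely the construction of $G'$.)

```python
import random

def spot_check(coloring, edges, samples):
    # read the endpoints of a few random edges only; reject as soon as
    # a sampled edge has both endpoints of the same color
    for _ in range(samples):
        u, v = random.choice(edges)
        if coloring[u] == coloring[v]:
            return False
    return True

# if a fraction eps of the edges is monochromatic, sampling t edges
# misses all of them with probability (1 - eps)**t
eps, t = 0.01, 1000
print((1 - eps) ** t)  # about 4e-5
```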
While this application to proof checking is mostly interesting as a way to visualize the meaning of the result, it might also have applications to cryptographic protocols. In any case, the main application and motivation of the PCP Theorem is the study of the complexity of finding approximate solutions to combinatorial optimization problems.
Various forms of the PCP Theorem are known, tailored to the study of specific optimization problems. A very versatile form of the theorem, proved by Ran Raz, refers to the Label Cover problem.
2. The Unique Games Conjecture
Definition 2 (Label Cover) An input to the label cover problem with range $\{1,\ldots,k\}$ is a set of equations of the form

$$ x_i = f( y_j ) $$

where the $f: \{1,\ldots,k\} \rightarrow \{1,\ldots,k\}$ are functions specified as part of the input (a different $f$ may appear in each equation). The goal is to find an assignment to the variables $x_1,\ldots,x_n$ and $y_1,\ldots,y_m$ that satisfies as many equations as possible.
For example, the following is an instance of label cover with range $\{0,1,2\}$:

$$ x_1 = y_1 + 1 \bmod 3 $$
$$ x_2 = 2\cdot y_2 \bmod 3 $$
$$ x_1 = y_1 + 2 \bmod 3 $$

The first and third equation are not simultaneously satisfiable, and so an optimal solution to the above instance is to satisfy two of the equations, for example the first and the second with the assignment $y_1 := 0$, $x_1 := 1$, $y_2 := 0$, $x_2 := 0$.
Notice that while the equations were of an algebraic nature in the example above, any function can be used to construct an equation.
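(Since label cover instances are so small to describe, here is a brute-force evaluator in Python, my own sketch, run on the toy instance above; the representation of an equation as a (variable, variable, function) triple is just one possible choice.)

```python
from itertools import product

def satisfied(assignment, equations):
    # each equation (x, y, f) is satisfied when assignment[x] == f(assignment[y])
    return sum(1 for x, y, f in equations if assignment[x] == f(assignment[y]))

def best_assignment(variables, k, equations):
    # exhaustive search over all k^(number of variables) assignments
    best = 0
    for values in product(range(k), repeat=len(variables)):
        a = dict(zip(variables, values))
        best = max(best, satisfied(a, equations))
    return best

# the instance above, with range {0, 1, 2} and arithmetic mod 3
equations = [
    ("x1", "y1", lambda b: (b + 1) % 3),
    ("x2", "y2", lambda b: (2 * b) % 3),
    ("x1", "y1", lambda b: (b + 2) % 3),
]
print(best_assignment(["x1", "x2", "y1", "y2"], 3, equations))  # prints 2
```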
Theorem 3 (Raz) For every $\epsilon > 0$ there is a $k$, $k = k(\epsilon)$, and a polynomial time algorithm that on input a graph $G$ outputs an instance $L_G$ of label cover with range $\{1,\ldots,k\}$ such that

- If $G$ has a proper 3-coloring then in $L_G$ there is an assignment to the variables that satisfies all constraints;
- If $G$ is not properly 3-colorable, then every assignment to the variables of $L_G$ satisfies at most an $\epsilon$ fraction of the equations.
This form of the PCP Theorem is particularly well suited as a starting point for reductions, because in the second case we have the very strong guarantee that it is impossible to satisfy even just an $\epsilon$ fraction of the equations. For technical reasons, it is also very useful that each equation involves only two variables.
The approach to deriving intractability, for example for a graph problem, from Theorem 3 is to encode each variable as a small graph, and to lay out edges in such a way that the only way to have a good solution in the graph problem is to have it define a good solution for the label cover problem. If we are studying a cut problem, for example, and we have a collection of $m$ vertices corresponding to each variable $y$ in the label cover instance, then a cut $(A, V-A)$ in the graph gives an $m$-bit string $a^y \in \{0,1\}^m$ for every variable $y$ of label cover, corresponding to which of the $m$ vertices do or do not belong to $A$.
Intuitively, one might try to set up such a reduction by associating each variable with $\lceil \log_2 k \rceil$ vertices, with each of the bit strings being associated to one element of $\{1,\ldots,k\}$. This, however, would not work, because it would be sufficient to change about a $1/\log_2 k$ fraction of the cut (one vertex per variable) in order to change all the values of the decoded assignment, and potentially we could start from a highly unsatisfiable instance of label cover and produce an instance of a graph problem in which there is a solution of very high quality.
Instead, the bit string associated to each variable encodes the value of the variable with an error-correcting code.
The problem then becomes (1) to make sure that only bit strings close to a valid codeword can occur in a near-optimal solution, and (2) to make sure that in near-optimal solutions the decodings satisfy a large number of equations. Task (2) has proved to be considerably harder than task (1), especially in reductions to graph problems. Indeed, most NP-completeness results for approximating graph optimization problems have gone by first reducing label cover to an intermediate, simpler problem, and then reducing the intermediate problem to the graph problem, but at the cost of weaker intractability results than the conjectured ones.
In 2002, Khot formulated a conjecture that considerably simplifies (2), essentially making it of difficulty comparable to (1).
Definition 4 (Unique Game) A unique game with range $\{1,\ldots,k\}$ is a set of equations of the form

$$ x_i = f( y_j ) $$

where the $f: \{1,\ldots,k\} \rightarrow \{1,\ldots,k\}$ are bijective functions specified as part of the input. The goal is to find an assignment to the variables $x_1,\ldots,x_n$ and $y_1,\ldots,y_m$ that satisfies as many equations as possible.
For example, the following is a unique game with range $\{0,1,2\}$:

$$ x_1 = y_1 + 1 \bmod 3 $$
$$ x_1 = y_2 + 2 \bmod 3 $$
$$ x_2 = y_1 + 1 \bmod 3 $$
$$ x_2 = y_2 + 1 \bmod 3 $$

In the above example, it is not possible to satisfy all four equations, but the optimal solution satisfies three of the equations.
Notice that the only difference between a Label Cover instance and a Unique Game is that, in a Unique Game, the functions that define the equations have to be bijective. This is, however, a substantial difference. In particular, given a Unique Game that has a solution that satisfies all equations, such a solution can be found very quickly, in time linear in the number of equations. But what if we are given a Unique Game in which there is a solution that satisfies, say, a $99\%$ fraction of the equations?
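(Before turning to the conjecture, here is a sketch, in my own Python code, of why the perfectly satisfiable case is easy: since every equation is a bijection, fixing the value of one variable in a connected component determines all the others, so it suffices to try the $k$ possible values of one variable per component and propagate. The demo instance is the toy example above with its fourth equation removed.)

```python
from collections import defaultdict, deque

def solve_satisfiable(variables, k, equations):
    # equations: list of (x, y, f) meaning "x = f(y)", with f a bijection on
    # range(k) given as a list of length k
    graph = defaultdict(list)
    for x, y, f in equations:
        inv = [0] * k
        for b, a in enumerate(f):
            inv[a] = b
        graph[y].append((x, f))     # knowing y determines x = f[y]
        graph[x].append((y, inv))   # knowing x determines y = f^{-1}[x]

    assignment = {}
    for root in variables:
        if root in assignment:
            continue
        # try each of the k possible values for the root of this component
        for guess in range(k):
            trial, queue, ok = {root: guess}, deque([root]), True
            while queue and ok:
                u = queue.popleft()
                for v, g in graph[u]:
                    value = g[trial[u]]
                    if v not in trial:
                        trial[v] = value
                        queue.append(v)
                    elif trial[v] != value:
                        ok = False
                        break
            if ok:
                assignment.update(trial)
                break
        else:
            return None  # not fully satisfiable
    return assignment

# the unique game above, with the fourth (conflicting) equation removed
shift1 = [1, 2, 0]   # b -> b + 1 mod 3
shift2 = [2, 0, 1]   # b -> b + 2 mod 3
eqs = [("x1", "y1", shift1), ("x1", "y2", shift2), ("x2", "y1", shift1)]
print(solve_satisfiable(["x1", "x2", "y1", "y2"], 3, eqs))
# a satisfying assignment, e.g. {'x1': 0, 'y1': 2, 'y2': 1, 'x2': 0}
```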
Conjecture 1 (Unique Games Intractability Conjecture) For every $\epsilon, \delta > 0$, there is a $k$ such that there is no polynomial time algorithm that, given an instance of Unique Games with range $\{1,\ldots,k\}$ in which it is possible to satisfy at least a $1-\epsilon$ fraction of equations, finds a solution that satisfies at least a $\delta$ fraction of equations.
If $P=NP$ then Conjecture 1 is false; this means that proving Conjecture 1 would require first proving $P \neq NP$, which is beyond the reach of current techniques. The strongest evidence that we can currently hope for in favor of Conjecture 1 is:
Conjecture 2 (Unique Games NP-Hardness Conjecture) For every $\epsilon, \delta > 0$ there is a $k$ and a polynomial time algorithm that, on input a graph $G$, outputs a unique games instance $U_G$ with range $\{1,\ldots,k\}$, such that

- If $G$ is properly 3-colorable then there is an assignment that satisfies at least a $1-\epsilon$ fraction of equations in $U_G$;
- If $G$ is not properly 3-colorable then every assignment to the variables of $U_G$ satisfies at most a $\delta$ fraction of equations.
If Conjecture 2 is true, then every inapproximability result proved via a reduction from Unique Games establishes an NP-hardness of approximation, in the same way as a reduction starting from label cover.
Consequence for Max Cut
In the following we let

$$ c_{GW} := \min_{\rho \in [-1,1]} \ \frac{ \frac{1}{\pi} \arccos(\rho) }{ \frac{1}{2} - \frac{1}{2}\rho } \ = \ .878\ldots \ \ \ \ \ (1) $$

The above constant comes up in the remarkable algorithm of Goemans and Williamson.
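(A quick numerical sanity check of Equation (1), my own snippet: minimize the ratio on a fine grid.)

```python
from math import acos, pi

# grid search for the minimum of (arccos(x)/pi) / ((1 - x)/2) over (-1, 1)
xs = [i / 100000 - 1 for i in range(1, 200000)]
value, minimizer = min(((acos(x) / pi) / (0.5 - 0.5 * x), x) for x in xs)
print(value, minimizer)  # about 0.8786, attained near x = -0.689
```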
Theorem 5 (Goemans and Williamson) There is a polynomial time algorithm that on input a graph $G=(V,E)$ in which the optimal cut cuts $opt$ edges finds a cut that cuts at least $c_{GW} \cdot opt$ edges.

Furthermore, for sufficiently small $\epsilon > 0$, given a graph $G$ in which the optimal cut cuts at least $(1-\epsilon)\cdot |E|$ edges, the algorithm finds a solution that cuts at least $(1 - \frac{2}{\pi}\sqrt{\epsilon} - o(\sqrt{\epsilon}))\cdot |E|$ edges, which is approximately $(1 - .64\sqrt{\epsilon})\cdot |E|$ edges.
It is known that an approximation better than $\frac{16}{17}$ implies that $P=NP$, but no NP-hardness result is known in the range between $\frac{16}{17} \approx .94$ and $c_{GW} \approx .878$.
Work of Khot, Kindler, Mossel and O'Donnell, together with later work of Mossel, O'Donnell and Oleszkiewicz, proves that no improvement is possible over the Goemans-Williamson algorithm assuming the Unique Games Intractability Conjecture.
Theorem 6 Suppose that there is a $\rho \in (-1,0)$, a $\delta > 0$ and a polynomial time algorithm that, given a graph $G=(V,E)$ in which an optimal cut cuts at least $(\frac{1}{2} - \frac{1}{2}\rho)\cdot |E|$ edges, finds a solution that cuts at least $(\frac{1}{\pi}\arccos(\rho) + \delta)\cdot |E|$ edges.

Then the Unique Games Intractability Conjecture is false.
In particular, by taking $\rho$ to be the minimizer of Equation (1) we have that, for every $\delta > 0$, the existence of a polynomial time algorithm that, on input a graph in which the optimum cut cuts $opt$ edges, finds a solution that cuts more than $(c_{GW}+\delta)\cdot opt$ edges would contradict the Unique Games Intractability Conjecture. So, assuming the conjecture, the constant $c_{GW}$ is precisely the best achievable ratio between the value of polynomial-time constructible solutions and optimal solutions in the Max Cut problem.
In Section 3 we will present a detailed overview of the proof of Theorem 6.
Consequence for Sparsest Cut
The algorithm achieving the best ratio between the quality of an optimal solution and the quality of the solution found in polynomial time is due to Arora, Rao and Vazirani.
Theorem 7 (Arora-Rao-Vazirani) There is a polynomial time algorithm that given a graph $G$ finds a cut $A$ such that

$$ \phi(A) \leq O(\sqrt{\log n}) \cdot \phi(A^*) $$

where $A^*$ is an optimal solution to the sparsest cut problem and $n$ is the number of vertices.
A classical algorithm based on spectral graph theory achieves a better approximation in graphs in which the optimum is large.
Theorem 8 (Spectral Partitioning) There is a polynomial time algorithm that given a graph $G$ finds a cut $A$ such that

$$ \phi(A) \leq 2\sqrt{ \phi(A^*) } $$

where $A^*$ is an optimal solution to the sparsest cut problem.
Theorem 9 There is an absolute constant $c$ such that if there is a $\delta$, $0 < \delta < 1$, an $\epsilon > 0$ and a polynomial time algorithm that, given a graph $G$ in which the sparsest cut $A^*$ satisfies $\phi(A^*) \leq \epsilon$, finds a cut $A$ such that

$$ \phi(A) \leq \delta \cdot c \cdot \sqrt{\epsilon}, $$

then the Unique Games Intractability Conjecture is false.
In particular, assuming the conjecture, the trade-off between optimum and approximation in the spectral partitioning algorithm cannot be improved, and the approximation ratio in the Arora-Rao-Vazirani algorithm cannot be improved to a constant.
3. Reducing Unique Games to Max Cut
A general approach to using Unique Games (and, in general, Label Cover) with range $\{1,\ldots,k\}$ in reductions to other problems is to ensure that a solution in the target problem associates to each variable $y$ of the unique game a boolean function $g_y : \{0,1\}^k \rightarrow \{0,1\}$. Then we define a way to "decode" a function $g: \{0,1\}^k \rightarrow \{0,1\}$ into a value $Dec(g) \in \{1,\ldots,k\}$, and we aim to prove that if we have a good solution in the target problem, then the assignment $y := Dec(g_y)$ to each variable $y$ defines a good solution in the Unique Games instance. The general idea is that if a function "essentially depends" on one of its variables, then we decode it to the index of the variable that it depends on.
3.1. The Reduction from Unique Games to Max Cut
We outline this method by showing how it works to prove Theorem 6. To prove the theorem, we start from a Unique Games instance $U$ such that a $1-\gamma$ fraction of equations can be satisfied, where $\gamma$ is a very small positive parameter determined by $\rho$ and $\delta$, and whose range is $\{1,\ldots,k\}$. We show how to use the assumption of the Theorem to find a solution that satisfies at least an $\eta$ fraction of equations, where $\eta > 0$ depends only on $\rho$ and $\delta$. We do so by constructing a graph $G_U$, applying the algorithm to find a good approximation to Max Cut in the graph, and then converting the cut into a good solution for the Unique Games instance.
If $U$ has $n$ variables $y_1,\ldots,y_n$ appearing on the right-hand side of equations, then $G_U$ has $n \cdot 2^k$ vertices: a vertex $(y,a)$ for every right-hand side variable $y$ of $U$ and every value $a \in \{0,1\}^k$.
We define $G_U$ as a weighted graph, that is, a graph in which edges have a positive real-valued weight. In such a case, the value of a cut is the total weight (rather than the number) of the edges that are cut. There is a known reduction from Max Cut in weighted graphs to Max Cut in unweighted simple graphs as defined above, so there is no loss of generality in working with weights.
We introduce the following notation: if $a \in \{0,1\}^k$ is an array of $k$ bits indexed by the elements of $\{1,\ldots,k\}$, and $\sigma: \{1,\ldots,k\} \rightarrow \{1,\ldots,k\}$ is a bijection, we denote by $a \circ \sigma$ the vector $b \in \{0,1\}^k$ such that $b_i = a_{\sigma(i)}$.
We also define the noise operator $N_\epsilon$ as follows: if $a \in \{0,1\}^k$ is a boolean vector, then $N_\epsilon(a)$ is the random variable generated by changing each coordinate of $a$ independently with probability $\epsilon$, and leaving it unchanged with probability $1-\epsilon$.
The edge set of $G_U$ is defined so that its total weight is 1, and we describe it as a probability distribution:

- Pick two random equations $x = \sigma(y)$ and $x = \sigma'(y')$ of $U$ conditioned on having the same left-hand side $x$;
- Pick a random element $a \in \{0,1\}^k$ and pick an element $b \sim N_{\frac{1}{2}-\frac{1}{2}\rho}(a)$;
- Generate the edge between the vertices $(y, a\circ\sigma)$ and $(y', b\circ\sigma')$.
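(Here is a rough Python sketch, my own, of the sampling procedure above; the helper names compose, noise and sample_edge are mine, and the exact distribution over pairs of equations with a common left-hand side is glossed over.)

```python
import random

def compose(a, sigma):
    # (a o sigma)_i = a_{sigma(i)}, for a bit string a and a permutation sigma of range(k)
    return tuple(a[sigma[i]] for i in range(len(a)))

def noise(a, flip_prob):
    # flip each coordinate independently with probability flip_prob
    return tuple(bit ^ (random.random() < flip_prob) for bit in a)

def sample_edge(equations, k, rho):
    # equations: list of (x, y, sigma) meaning "x = sigma(y)", sigma a permutation of range(k)
    x, y1, sigma1 = random.choice(equations)
    y2, sigma2 = random.choice([(y, s) for (lhs, y, s) in equations if lhs == x])
    a = tuple(random.randrange(2) for _ in range(k))
    b = noise(a, 0.5 - 0.5 * rho)   # anti-correlated copy of a, since rho < 0
    return (y1, compose(a, sigma1)), (y2, compose(b, sigma2))

eqs = [("x1", "y1", [1, 2, 0]), ("x1", "y2", [2, 0, 1])]
print(sample_edge(eqs, 3, rho=-0.9))
```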
Let $F$ be an optimal assignment for the Unique Games instance $U$. Consider the cut of $G_U$ in which a vertex $(y,a)$ is put on one side if $a_{F(y)} = 1$ and on the other side if $a_{F(y)} = 0$. This cut cuts edges of total weight at least $(1-2\gamma)\cdot(\frac{1}{2}-\frac{1}{2}\rho)$, which is at least $\frac{1}{2}-\frac{1}{2}\rho - \gamma$. From our assumption (up to a negligible correction, since $\gamma$ is much smaller than $\delta$), we can find in polynomial time a cut $A$ that cuts at least a $\frac{1}{\pi}\arccos(\rho)+\frac{\delta}{2}$ fraction of the total edge weight. We want to show how to extract from $A$ an assignment for the Unique Games that satisfies a reasonably large number of equations.
First we note that $A$ assigns a bit to each right-hand side variable $y$ and to each $a \in \{0,1\}^k$: let us call $g_y : \{0,1\}^k \rightarrow \{0,1\}$ the function such that $g_y(a) = 1$ if $(y,a) \in A$ and $g_y(a) = 0$ otherwise. We want to decode each of these functions into an index $Dec(g_y) \in \{1,\ldots,k\}$. We describe a probabilistic decoding process $Dec(\cdot)$ later.
Some calculations show that the functions that we derive in such a way have the property that

$$ \mathop{\mathbb E}_{x = \sigma(y),\ x = \sigma'(y')} \ \ \Pr_{a,\ b \sim N_{\frac{1}{2}-\frac{1}{2}\rho}(a)} \left[\ g_y(a\circ\sigma) \neq g_{y'}(b\circ\sigma')\ \right] \ \geq \ \frac{1}{\pi}\arccos(\rho) + \frac{\delta}{2} $$

(the expectation being over the choice of two random equations with the same left-hand side, as in the definition of the edges of $G_U$), and from this we want to derive that, for a noticeable fraction of such pairs of equations, there is a noticeable probability that $\sigma(Dec(g_y)) = \sigma'(Dec(g_{y'}))$, from which it is easy to see that from the decodings we can reconstruct an assignment for all variables that satisfies at least an $\eta$ fraction of equations in the unique game.
Some manipulations show that, essentially, it is sufficient to prove the following lemma:
Lemma 10 (Main) There is a probabilistic symmetric algorithm $Dec$ that on input a function $f: \{0,1\}^k \rightarrow \{0,1\}$ outputs an element of $\{1,\ldots,k\}$, and such that the following is true for every fixed $\rho \in (-1,0)$ and $\delta > 0$. Suppose that $f$ is such that

$$ \Pr_{a,\ b \sim N_{\frac{1}{2}-\frac{1}{2}\rho}(a)} \left[\ f(a) \neq f(b)\ \right] \ \geq \ \frac{1}{\pi}\arccos(\rho) + \delta. \ \ \ \ \ (2) $$

Then there is an index $i \in \{1,\ldots,k\}$ such that

$$ \Pr[\ Dec(f) = i\ ] \geq \eta $$

where $\eta > 0$ depends only on $\rho$ and $\delta$, and not on $k$.

We say that the decoding is symmetric if, for every bijection $\sigma : \{1,\ldots,k\} \rightarrow \{1,\ldots,k\}$, the distribution of $Dec(g)$, where $g(a) := f(a \circ \sigma)$, is the same as the distribution of $\sigma(Dec(f))$.
(Technically, the Main Lemma is not sufficient as stated. An extension that deals with all bounded real-valued functions is necessary. The boolean case, which is simpler to state and visualize, captures all the technical difficulties of the general case.)
3.2. The Proof of the Main Lemma
Before discussing the proof of the Main Lemma, we show that it is tight, in the sense that, if the assumption in Equation (2) is weakened, it is not possible to recover the conclusion.
Consider the majority function $Maj : \{0,1\}^k \rightarrow \{0,1\}$ such that $Maj(a) = 1$ if and only if $a$ has at least $k/2$ ones. Then $Maj$ is a symmetric function, in the sense that $Maj(a \circ \sigma) = Maj(a)$ for every bijection $\sigma$. This implies that for every symmetric decoding algorithm $Dec$ we have that $Dec(Maj)$ is the uniform distribution over $\{1,\ldots,k\}$, and so every index $i$ has probability $\frac{1}{k}$, which goes to zero even when the other parameters in the Main Lemma are fixed. A standard calculation shows that, for large $k$,

$$ \Pr_{a,\ b \sim N_{\frac{1}{2}-\frac{1}{2}\rho}(a)} \left[\ Maj(a) \neq Maj(b)\ \right] \ \approx \ \frac{1}{\pi}\arccos(\rho) $$

so we have an example in which Equation (2) is nearly satisfied but the conclusion of the Main Lemma fails.
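(The calculation for majority can be checked numerically; the following Python sketch, my own, estimates the disagreement probability by sampling and compares it with $\frac{1}{\pi}\arccos(\rho)$.)

```python
import random
from math import acos, pi

def maj(a):
    return 1 if 2 * sum(a) >= len(a) else 0

def disagreement(f, k, rho, trials=10000):
    # estimate Pr[f(a) != f(b)] where b flips each bit of a with probability 1/2 - rho/2
    flip = 0.5 - 0.5 * rho
    count = 0
    for _ in range(trials):
        a = [random.randrange(2) for _ in range(k)]
        b = [bit ^ (random.random() < flip) for bit in a]
        count += f(a) != f(b)
    return count / trials

rho = -0.5
print(disagreement(maj, 301, rho), acos(rho) / pi)  # both close to 0.66
```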
This example suggests that, if the Main Lemma is true, then a function that satisfies Equation (2) must be non-symmetric, that is, it cannot depend equally on all the input variables, and the decoding procedure must pick out certain input variables on which the function depends in some special way.
Another example to consider is that of the functions arising from the cut that one derives from an optimal solution to the unique game instance $U$. In that case, for every variable $y$ the corresponding function $g_y$ is of the form $g_y(a) = a_{F(y)}$, where $F(y)$ is the value assigned to $y$ in the optimal solution. In this case, we would expect the decoding algorithm to output the index $F(y)$. In general, if $f$ depends only on a small number of variables, we would expect $Dec(f)$ to only output the indices of those variables.
These observations suggest the use of the notion of influence of input variables. If $f: \{0,1\}^k \rightarrow \{0,1\}$ is a boolean function, then the influence of variable $i$ on $f$ is the probability

$$ Inf_i(f) := \Pr_a \left[\ f(a) \neq f(a^{(i)})\ \right] $$

where $a^{(i)}$ is the string obtained from $a$ by changing the $i$-th coordinate.
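(Influences are easy to compute by brute force for small $k$; the following Python sketch, my own, does so and checks the extreme case of a dictator function.)

```python
from itertools import product

def influence(f, k, i):
    # Pr over a uniform a in {0,1}^k that flipping coordinate i changes f
    changes = 0
    for a in product((0, 1), repeat=k):
        flipped = a[:i] + (1 - a[i],) + a[i + 1:]
        changes += f(a) != f(flipped)
    return changes / 2 ** k

# the first dictator has influence 1 on coordinate 0 and 0 elsewhere
dictator = lambda a: a[0]
print([influence(dictator, 4, i) for i in range(4)])  # [1.0, 0.0, 0.0, 0.0]
```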
It is then appealing to consider the decoding algorithm that picks an index $i$ with probability proportional to $Inf_i(f)$; note that this process is symmetric.
There is, unfortunately, a counterexample. Consider the function

$$ f(a) := Maj\left(\ a_1,\ a_2,\ Maj(a_3,\ldots,a_k)\ \right) $$

and take, say, $\rho = -.9$ and $k$ large. Then $f$ is a balanced function, and one can compute that

$$ \Pr_{a,\ b \sim N_{\frac{1}{2}-\frac{1}{2}\rho}(a)} \left[\ f(a) \neq f(b)\ \right] \ \approx \ .886 \ > \ .856 \ \approx \ \frac{1}{\pi}\arccos(-.9) $$

so that $f$ satisfies Equation (2) with a constant $\delta$. This means that we expect the decoding algorithm to select some index with a probability that is at least a fixed constant for every fixed $\rho$ and $\delta$.

When we compute the influence of the variables of $f$, however, we find out that $1$ and $2$ have constant influence $\frac{1}{2}$, while the variables $3,\ldots,k$ have influence of order $\frac{1}{\sqrt k}$. This means that the sum of the influences is about $\sqrt k$, and so $1$ and $2$ would be selected with probability about $\frac{1}{\sqrt k}$, and the remaining variables with probability about $\frac{1}{k}$. In particular, all probabilities go to zero with $k$, and so a decoding algorithm based only on influence does not satisfy the conditions of the Main Lemma.
In order to introduce the correct definition, it helps to introduce discrete Fourier analysis over the Hamming cube. For our purposes, only the following facts will be used. One is that if $f: \{0,1\}^k \rightarrow {\mathbb R}$ is a real-valued function, then there is a unique set of real values $\hat f(S)$, one for each subset $S \subseteq \{1,\ldots,k\}$, such that

$$ f(a) = \sum_{S \subseteq \{1,\ldots,k\}} \hat f(S) \cdot (-1)^{\sum_{i \in S} a_i} $$

The values $\hat f(S)$ are called the Fourier coefficients of $f$. The other is that

$$ \sum_S \hat f(S)^2 = \mathop{\mathbb E}_a \left[ f(a)^2 \right]. $$

Deviating slightly from the above notation, if $f: \{0,1\}^k \rightarrow \{0,1\}$ is a boolean function, then we let $\hat f(S)$ be the Fourier coefficients of the function $(-1)^{f(a)}$, that is,

$$ (-1)^{f(a)} = \sum_{S \subseteq \{1,\ldots,k\}} \hat f(S) \cdot (-1)^{\sum_{i \in S} a_i}. $$

This guarantees that $\sum_S \hat f(S)^2 = 1$.
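(For small $k$ the Fourier coefficients can be computed by brute force directly from the definition above; the following Python sketch, my own, does so for the 3-bit majority and checks that the squared coefficients sum to 1.)

```python
from itertools import product, combinations

def fourier(f, k):
    # coefficients of (-1)^f, indexed by subsets S of {0, ..., k-1}:
    # hat_f(S) = average over a of (-1)^(f(a) + sum of a_i for i in S)
    points = list(product((0, 1), repeat=k))
    coeffs = {}
    for size in range(k + 1):
        for S in combinations(range(k), size):
            coeffs[S] = sum((-1) ** (f(a) + sum(a[i] for i in S)) for a in points) / 2 ** k
    return coeffs

maj3 = lambda a: 1 if a[0] + a[1] + a[2] >= 2 else 0
coeffs = fourier(maj3, 3)
print(sum(c * c for c in coeffs.values()))  # Parseval: 1.0
print(coeffs[(0,)], coeffs[(0, 1, 2)])      # 0.5 and -0.5
```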
It is easy to see that

$$ Inf_i(f) = \sum_{S :\ i \in S} \hat f(S)^2 . $$

The fact that $\sum_S \hat f(S)^2 = 1$ suggests that $\hat f(S)^2$ naturally defines a probability distribution. Unfortunately, it is a probability distribution over subsets of $\{1,\ldots,k\}$, rather than a probability distribution over elements of $\{1,\ldots,k\}$. A natural step is to consider the algorithm $Dec$ defined as follows: sample a set $S$ with probability equal to $\hat f(S)^2$, then output a random element of $S$. In particular, we have

$$ \Pr[\ Dec(f) = i\ ] = \sum_{S :\ i \in S} \frac{1}{|S|}\ \hat f(S)^2 \ \ \ \ \ (3) $$

which is similar to the expression for the influence of $i$, but weighted to give more emphasis to the Fourier coefficients corresponding to smaller sets.
If we go back to the function $f(a) = Maj(a_1,a_2,Maj(a_3,\ldots,a_k))$ considered above, we see that the algorithm defined in (3) has a probability of generating $1$, and of generating $2$, which is at least an absolute constant (indeed, at least $\hat f(\{1\})^2 = \hat f(\{2\})^2 = \frac{1}{4}$), and that doesn't go to zero with $k$.
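(The following Python sketch, my own, computes the output distribution of the decoder of Equation (3) by brute force for the function $Maj(a_1,a_2,Maj(a_3,\ldots,a_k))$ with a small $k$; the first two coordinates receive probability bounded away from zero, while the others receive much less, in contrast with the influence-proportional decoder.)

```python
from itertools import product, combinations

def fourier(f, k):
    # brute-force Fourier coefficients of (-1)^f, as in the previous snippet
    points = list(product((0, 1), repeat=k))
    return {S: sum((-1) ** (f(a) + sum(a[i] for i in S)) for a in points) / 2 ** k
            for size in range(k + 1) for S in combinations(range(k), size)}

def decoder_distribution(f, k):
    # Pr[Dec(f) = i] = sum over sets S containing i of hat_f(S)^2 / |S|
    prob = [0.0] * k
    for S, c in fourier(f, k).items():
        for i in S:
            prob[i] += c * c / len(S)
    return prob

def g(a):
    # Maj(a_1, a_2, Maj(a_3, ..., a_k)), with 0-based indexing
    inner = 1 if 2 * sum(a[2:]) >= len(a) - 2 else 0
    return 1 if a[0] + a[1] + inner >= 2 else 0

probs = decoder_distribution(g, 9)
print([round(p, 3) for p in probs])
# the first two probabilities are at least 1/4; the remaining ones are much smaller
```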
The decoding algorithm described in Equation (3) turns out to be the correct one. Proving the main lemma reduces now to proving the following result.
Lemma 11 (Main Lemma — Restated) Suppose that $f: \{0,1\}^k \rightarrow \{0,1\}$ is such that

$$ \Pr_{a,\ b \sim N_{\frac{1}{2}-\frac{1}{2}\rho}(a)} \left[\ f(a) \neq f(b)\ \right] \ \geq \ \frac{1}{\pi}\arccos(\rho) + \delta. $$

Then there is an index $i$ such that

$$ \sum_{S :\ i \in S} \frac{1}{|S|}\ \hat f(S)^2 \geq \eta $$

where $\eta > 0$ depends only on $\rho$ and $\delta$.
The proof has two parts:

- An invariance theorem, due to Mossel, O'Donnell and Oleszkiewicz, showing that the Main Lemma is true in the boolean setting provided that a "Gaussian version" of the Lemma holds for functions taking real inputs with Gaussian distribution;
- A 1985 theorem of Borell establishing the Gaussian version of the Lemma.
3.3. The Invariance Theorem and Borell’s Theorem
A starting point to gain intuition about the Invariance Theorem is to consider the Central Limit Theorem. Suppose that $x_1,\ldots,x_n$ is a collection of independent boolean random variables, each uniform over $\{-1,1\}$, and suppose that $a_1,\ldots,a_n$ are arbitrary coefficients. Then the random variable

$$ \sum_{i=1}^n a_i x_i $$

is going to be close to a Gaussian distribution of average zero and variance $\sum_i a_i^2$, provided that the coefficients are reasonably smooth. (It is enough that, if we scale them so that $\sum_i a_i^2 = 1$, then $\max_i |a_i|$ is small.)
Suppose now that, instead of considering a sum, that is, a degree-1 function, we take an $n$-variate low-degree polynomial $p$ and we consider the random variable

$$ p(x_1,\ldots,x_n). $$

We cannot say any more that it has a distribution close to a Gaussian and, in fact, it does not seem that we can say anything at all. Looking back at the Central Limit Theorem, however, we can note that the "right" way of formulating it is to consider a collection $x_1,\ldots,x_n$ of independent boolean random variables, each uniform over $\{-1,1\}$, and also a collection $g_1,\ldots,g_n$ of independent Gaussian random variables, each with mean zero and variance 1. Then we have that the two random variables

$$ \sum_i a_i x_i \ \ \ \mbox{ and } \ \ \ \sum_i a_i g_i $$

are close, provided that the $a_i$ are smooth.

This is exactly the same statement as before, because the distribution of $\sum_i a_i g_i$ happens to be a Gaussian distribution of mean zero and variance $\sum_i a_i^2$.
This formulation, however, has a natural analog in the case of low-degree polynomials. The Invariance Theorem states that if $p$ is a sufficiently "smooth" low degree polynomial, then the random variables

$$ p(x_1,\ldots,x_n) \ \ \ \mbox{ and } \ \ \ p(g_1,\ldots,g_n) $$

are close.
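(The following Python sketch, my own and purely qualitative, illustrates the invariance phenomenon: a quadratic polynomial with small, spread-out coefficients is evaluated on random $\pm1$ inputs and on Gaussian inputs, and the two estimated tail probabilities should come out close. The choice of polynomial and threshold is arbitrary.)

```python
import random

def evaluate(poly, z):
    # poly is a list of (coefficient, (i, j)) pairs: p(z) = sum of c * z_i * z_j
    return sum(c * z[i] * z[j] for c, (i, j) in poly)

def estimate(poly, n, gaussian, threshold, trials=20000):
    # empirical Pr[ p(inputs) > threshold ] for random +-1 or Gaussian inputs
    hits = 0
    for _ in range(trials):
        if gaussian:
            z = [random.gauss(0, 1) for _ in range(n)]
        else:
            z = [random.choice((-1, 1)) for _ in range(n)]
        hits += evaluate(poly, z) > threshold
    return hits / trials

random.seed(0)
n = 20
# a degree-2 polynomial with small coefficients spread over many monomials ("smooth")
poly = [(random.uniform(-1, 1) / n, (i, j)) for i in range(n) for j in range(i + 1, n)]
print(estimate(poly, n, gaussian=False, threshold=0.3),
      estimate(poly, n, gaussian=True, threshold=0.3))  # the two estimates should be close
```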
When we apply the Invariance Theorem to a smoothed and truncated version of the Fourier expansion of the function $f$ in the Main Lemma, we have that either such a function is a "smooth polynomial" to which the Invariance Theorem applies, or else the conclusion holds and there is a coordinate with noticeably high probability of being output by the decoding algorithm. If the Invariance Theorem applies, then the probability that $f$ changes value on anti-correlated boolean inputs is approximately the probability that a function changes its value on anti-correlated Gaussian inputs. The latter is given by a theorem of Borell.
Theorem 12 (Borell) Suppose $F: {\mathbb R}^n \rightarrow \{0,1\}$ is a measurable function according to the standard Gaussian measure in ${\mathbb R}^n$ and such that $\Pr_g[ F(g) = 1 ] = \frac{1}{2}$. For an element $g \in {\mathbb R}^n$ and for $0 < \rho < 1$, let $\tilde g_\rho$ be the random variable

$$ \tilde g_\rho := \rho \cdot g + \sqrt{1-\rho^2} \cdot g' $$

where $g'$ is a standard Gaussian random variable in ${\mathbb R}^n$, independent of $g$.

Then

$$ \Pr\left[\ F(g) \neq F(\tilde g_\rho)\ \right] \ \geq \ \frac{1}{\pi}\arccos(\rho). $$
The context of Borell's theorem is the question of what is the body of ${\mathbb R}^n$ of a given Gaussian volume (in the above case, of Gaussian volume $\frac{1}{2}$) whose boundary is the smallest. (The answer is a half-space, whose boundary is a hyperplane passing through the origin in the Gaussian-volume-$\frac{1}{2}$ case, and this is also the tight case for the theorem above.)
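(The tight case of Borell's theorem is easy to check by simulation; the following Python sketch, my own, estimates the disagreement probability for a half-space through the origin, which by rotation invariance reduces to one dimension, and compares it with $\frac{1}{\pi}\arccos(\rho)$.)

```python
import random
from math import acos, pi, sqrt

def halfspace_disagreement(rho, trials=200000):
    # F is the half-space indicator F(z) = [z_1 > 0]; estimate Pr[F(g) != F(g_tilde)]
    # where g_tilde = rho * g + sqrt(1 - rho^2) * g', with g' an independent Gaussian
    count = 0
    for _ in range(trials):
        g = random.gauss(0, 1)
        g_tilde = rho * g + sqrt(1 - rho * rho) * random.gauss(0, 1)
        count += (g > 0) != (g_tilde > 0)
    return count / trials

rho = 0.3
print(halfspace_disagreement(rho), acos(rho) / pi)  # both about 0.40
```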
There are a few ways in which Borell's theorem is not quite the "Gaussian analog" of the Main Lemma. Notably, there is a condition on the expectation of $F$ (in the boolean case, it would be the condition $\Pr_a[ f(a) = 1 ] = \frac{1}{2}$), there is a lower bound, rather than an upper bound, on the probability that $F$ changes value, and the theorem applies to the range $0 < \rho < 1$, while we are interested in the "anti-correlation" case $-1 < \rho < 0$. There is a simple trick (consider only the "odd part" of the Fourier expansion of the boolean function $f$, that is, only the terms corresponding to sets $S$ of odd size) that takes care of all these differences.
In the next post:
The sparsest cut problem, its Unique Games-hardness of approximation, and its relation to metric embeddings, plus algorithms for unique games.
Some comments:
– “This means that every computational problem in NP, that is every computational problem that involves searching an exponentially big list of possibilities for an object satisfying a polynomial-time checkable property can be “encoded” as a 3-Coloring problem, and so a polynomial-time algorithm for 3-Coloring would imply a polynomial-time algorithm for every exponential-size-list-searching problem, that is, we would have P=NP.” This sentence will be hard to parse for people outside the area.
– “there is a non-trivial algorithm whose running time grows like $2^n$”. Perhaps “non-trivial” is an overstatement. “More clever”?
– It keeps saying you’ll prove Theorem 9, but you actually prove Theorem 6.
– “The context of Borrel’s theorem… The answer is a half-space”. A half space has infinite volume under the uniform measure. Do you mean under Gaussian measure?
Typos:
– “counterpositive”
– “First we not that”
– “if such the distribution”
– “hods for functions taking real inputs with Gaussian distribution is true;”
– “A1985 theorem”
– “as a natural analog”
P.S. Thanks for keeping your audience entertained and enlightened with this informative post.
I am proud that I have readers who think of this material as “entertaining.”
You must refer to Rotar for the invariance principle. [MOO] is a variant of Rotar’s work from the 70s using similar methods, with perhaps better bounds. For the qualitative description above, Rotar should suffice. [MOO] refer to Rotar.
You may be ignoring a different approach to the max-cut problem. Simulations are consistent with it. Nobody has shown it is wrong, although, formally, there is a gap. See the discussion here.