Scribed by Haaris Khan
In which we study the SDP relaxation of Max Cut in random graphs.
1. Quick Review of Chernoff Bounds
Suppose $X_1, \dots, X_n$ are mutually independent random variables with values in $\{0, 1\}$. Let $X := \sum_{i=1}^n X_i$. The Chernoff bounds claim the following:
1. $\Pr[X \geq \mathbb{E}[X] + t] \leq e^{-2t^2/n}$
2. $\Pr[X \leq \mathbb{E}[X] - t] \leq e^{-2t^2/n}$
3. When we do not know $n$, we can bound the deviation in terms of $\mathbb{E}[X]$ alone: for every $\delta > 0$, $\Pr[X \geq (1+\delta)\,\mathbb{E}[X]] \leq e^{-\frac{\delta^2}{2+\delta}\mathbb{E}[X]}$.
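As a quick numeric sanity check, the additive bound in item 1 can be compared against the exact binomial tail; the parameters $n = 100$, $t = 20$ below are arbitrary illustrative choices.

```python
import math

# X = X_1 + ... + X_n with the X_i i.i.d. uniform on {0, 1}, so X ~ Bin(n, 1/2).
n, t = 100, 20

# Exact tail: Pr[X >= E[X] + t] = sum over k >= n/2 + t of C(n, k) / 2^n.
exact_tail = sum(math.comb(n, k) for k in range(n // 2 + t, n + 1)) / 2**n

# Chernoff-Hoeffding bound: e^{-2 t^2 / n}.
bound = math.exp(-2 * t**2 / n)

print(exact_tail, bound)  # the exact tail is below the bound
```

The bound is loose by roughly an order of magnitude here, but it decays exponentially in $t^2/n$, which is all the arguments below need.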
2. Cutting a Near-Optimal Number of Edges in $G_{n, d/n}$ Via SDP Rounding
Consider $G \sim G_{n, d/n}$ where $d$ is a constant. We show that, with high probability, the max-degree will be $O\!\left(\frac{\log n}{\log \log n}\right)$:
- Fix a vertex $v$. Its degree is a sum of $n - 1$ independent $\{0, 1\}$ variables, each with expectation $\frac{d}{n}$, so $\mathbb{E}[\deg(v)] < d$.
- For some constant $c$, $\Pr\!\left[\deg(v) \geq c \cdot \frac{\log n}{\log \log n}\right] \leq \frac{1}{n^2}$; a union bound over the $n$ vertices then gives max-degree $O\!\left(\frac{\log n}{\log \log n}\right)$ with probability at least $1 - \frac{1}{n}$.
Next, we compute the number of vertices that participate in a triangle. Recall that every degree can be bounded by $O\!\left(\frac{\log n}{\log \log n}\right)$ with high probability. If a vertex $v$ participates in a triangle, there are $\binom{n-1}{2}$ ways of choosing the other two vertices that participate with $v$ in the triangle, and the three edges of the triangle are each present independently with probability $\frac{d}{n}$. So the expected number of vertices in triangles can be bounded by
$$n \cdot \binom{n-1}{2} \cdot \left(\frac{d}{n}\right)^3 \leq \frac{d^3}{2}.$$
So, with probability $1 - o(1)$ (using Markov's inequality for the second item):
- All vertices have degree $O\!\left(\frac{\log n}{\log \log n}\right)$;
- $O(d^3 \log n)$ vertices participate in triangles.
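Both high-probability properties are easy to check numerically. The sketch below samples one graph from $G_{n, d/n}$ (the choices $n = 1000$, $d = 3$ are hypothetical illustrative parameters) and measures the max-degree and the number of vertices lying in triangles.

```python
import itertools
import random

# Sample G ~ G_{n, d/n}; n = 1000 and d = 3 are illustrative choices.
random.seed(0)
n, d = 1000, 3
p = d / n
adj = [set() for _ in range(n)]
for u, v in itertools.combinations(range(n), 2):
    if random.random() < p:
        adj[u].add(v)
        adj[v].add(u)

# Max degree should be O(log n / log log n), i.e. small.
max_deg = max(len(neighbors) for neighbors in adj)

# A vertex lies in a triangle iff two of its neighbours are adjacent to each
# other; the expected number of such vertices is at most d^3 / 2 = 13.5,
# independent of n.
in_triangle = sum(
    1
    for v in range(n)
    if any(w in adj[u] for u, w in itertools.combinations(adj[v], 2))
)
print(max_deg, in_triangle)
```

Rerunning with larger $n$ leaves the triangle count essentially unchanged while the max-degree grows very slowly, matching the two bounds above.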
3. Eigenvalue Computations and SDP
Problems like finding the largest or smallest eigenvalue of a symmetric matrix can be solved using SDP.
Let $M \in \mathbb{R}^{n \times n}$ be symmetric and let $\lambda_{\max}$ be the largest eigenvalue of $M$. We can formulate computing $\lambda_{\max}$ as a quadratic program:
$$\max \ x^T M x \quad \text{s.t.} \quad \sum_i x_i^2 = 1$$
We showed previously that we can relax a quadratic program to an SDP, replacing each scalar variable $x_i$ by a vector $\mathbf{x}_i$:
$$\max \ \sum_{i,j} M_{ij} \langle \mathbf{x}_i, \mathbf{x}_j \rangle \quad \text{s.t.} \quad \sum_i \|\mathbf{x}_i\|^2 = 1$$
In fact, these two are equivalent. To show this, we must show that a vector solution of the SDP can be converted into a scalar solution of the QP of at least the same value, and vice versa.
Proving that a QP solution is valid for the SDP: trivial. Any solution to our quadratic program is also a (one-dimensional) solution to our SDP, since the SDP is a relaxation of the problem; hence the optimum of our QP must be less than or equal to the optimum of our SDP.
Proving that an SDP solution yields a QP solution: consider a vector solution $\mathbf{x}_1, \dots, \mathbf{x}_n \in \mathbb{R}^m$. We note that our SDP can be transformed into an unconstrained optimization problem as follows:
$$\max_{\mathbf{x}_1, \dots, \mathbf{x}_n} \ \frac{\sum_{i,j} M_{ij} \langle \mathbf{x}_i, \mathbf{x}_j \rangle}{\sum_i \|\mathbf{x}_i\|^2}$$
(rescaling all vectors changes neither the ratio nor, after normalization, feasibility). The cost $c$ can be defined as the value of our solution, written coordinate-by-coordinate:
$$c = \frac{\sum_{i,j} M_{ij} \langle \mathbf{x}_i, \mathbf{x}_j \rangle}{\sum_i \|\mathbf{x}_i\|^2} = \frac{\sum_{k=1}^m \sum_{i,j} M_{ij}\, x_i(k)\, x_j(k)}{\sum_{k=1}^m \sum_i x_i(k)^2}$$
We get a one-dimensional solution when we use only the $k$-th coordinate of each $\mathbf{x}_i$, and we wish to find the $k$ that maximizes its value. We use the following inequality, valid for positive $b_1, \dots, b_m$:
$$\frac{a_1 + \cdots + a_m}{b_1 + \cdots + b_m} \leq \max_k \frac{a_k}{b_k}$$
Applying it with $a_k = \sum_{i,j} M_{ij}\, x_i(k)\, x_j(k)$ and $b_k = \sum_i x_i(k)^2$ yields a coordinate $k$ whose one-dimensional solution has value at least $c$.
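A small numeric sketch of this equivalence (a random symmetric $M$; the sizes $n = 6$, $m = 4$ are arbitrary): we build an SDP-feasible vector solution, decompose it coordinate-by-coordinate, and check that the best coordinate does at least as well as the vector solution, while no coordinate can beat $\lambda_{\max}$.

```python
import numpy as np

# Random symmetric M (seeded); n variables, vectors of dimension m.
rng = np.random.default_rng(0)
n, m = 6, 4
B = rng.standard_normal((n, n))
M = (B + B.T) / 2
lam_max = np.linalg.eigvalsh(M)[-1]

# An SDP-feasible solution: vectors x_1, ..., x_n with sum_i ||x_i||^2 = 1.
X = rng.standard_normal((n, m))
X /= np.linalg.norm(X)  # total squared norm is now 1
sdp_value = np.einsum("ij,ik,jk->", M, X, X)  # sum_{i,j} M_ij <x_i, x_j>

# Coordinate decomposition: the k-th coordinates form a one-dimensional
# solution of value a_k / b_k after normalization.
ratios = [(X[:, k] @ M @ X[:, k]) / (X[:, k] @ X[:, k]) for k in range(m)]
best = max(ratios)

print(sdp_value, best, lam_max)  # sdp_value <= best <= lam_max
```

The first inequality is exactly the averaging inequality above (the $b_k$ sum to 1 here), and the second is the Rayleigh-quotient characterization of $\lambda_{\max}$.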
4. SDP Max-Cut: Spectral Norm as an SDP Certificate
Consider the SDP relaxation of Max-Cut on a graph $G = (V, E)$ with $|V| = n$ and adjacency matrix $A$:
$$\max \ \sum_{(u,v) \in E} \frac{1}{2} - \frac{1}{2} \langle \mathbf{x}_u, \mathbf{x}_v \rangle \quad \text{s.t.} \quad \|\mathbf{x}_v\|^2 = 1 \ \ \forall v \in V$$
Let the optimum value for this SDP be $\mathrm{SDPMaxCut}(G)$. It's obvious that $\mathrm{maxcut}(G) \leq \mathrm{SDPMaxCut}(G)$: any cut $(S, V \setminus S)$ gives a feasible one-dimensional solution ($x_v = 1$ for $v \in S$, $x_v = -1$ otherwise) whose value is the number of cut edges. Under our constraints, we can rewrite our objective as
$$\sum_{(u,v) \in E} \frac{1}{2} - \frac{1}{2} \langle \mathbf{x}_u, \mathbf{x}_v \rangle = \frac{|E|}{2} - \frac{1}{4} \sum_{u,v} A_{uv} \langle \mathbf{x}_u, \mathbf{x}_v \rangle$$
So our new optimization problem is
$$\max \ \frac{|E|}{2} - \frac{1}{4} \sum_{u,v} A_{uv} \langle \mathbf{x}_u, \mathbf{x}_v \rangle \quad \text{s.t.} \quad \|\mathbf{x}_v\|^2 = 1 \ \ \forall v \in V$$
We can relax our constraint to the following: $\sum_v \|\mathbf{x}_v\|^2 = n$. Relaxing the constraint yields an optimization problem whose optimum is at least that of the stricter problem (call this optimum $Q(G)$):
$$Q(G) := \max \ \frac{|E|}{2} - \frac{1}{4} \sum_{u,v} A_{uv} \langle \mathbf{x}_u, \mathbf{x}_v \rangle \quad \text{s.t.} \quad \sum_v \|\mathbf{x}_v\|^2 = n$$
Clearly, we have the following inequalities: $\mathrm{maxcut}(G) \leq \mathrm{SDPMaxCut}(G) \leq Q(G)$. We can rewrite $Q(G)$ as
$$Q(G) = \frac{|E|}{2} + \frac{n}{4} \cdot \max_{\sum_v \|\mathbf{x}_v\|^2 = n} \ \frac{\sum_{u,v} (-A)_{uv} \langle \mathbf{x}_u, \mathbf{x}_v \rangle}{\sum_v \|\mathbf{x}_v\|^2}$$
Note that our objective function computes the largest eigenvalue of $-A$, by the SDP characterization of $\lambda_{\max}$ from the previous section:
$$Q(G) = \frac{|E|}{2} + \frac{n}{4} \lambda_{\max}(-A)$$
For every graph $G$ with adjacency matrix $A$,
$$\mathrm{maxcut}(G) \leq \mathrm{SDPMaxCut}(G) \leq \frac{|E|}{2} + \frac{n}{4} \lambda_{\max}(-A)$$
Recall from previous lectures that for $G \sim G_{n, \frac{1}{2}}$, the adjacency matrix $A$ of $G$ satisfies $\|A - \mathbb{E}[A]\| \leq O(\sqrt{n})$ with high probability, which gives $\lambda_{\max}(-A) \leq O(\sqrt{n})$. This implies that, with high probability,
$$\mathrm{SDPMaxCut}(G) \leq \frac{n^2}{8} + O(n^{3/2})$$
Semantically, this means that the SDP computes in poly-time a correct upper bound of $\mathrm{maxcut}(G)$.
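The certificate $\frac{|E|}{2} + \frac{n}{4}\lambda_{\max}(-A)$ can be evaluated on a sample from $G_{n, \frac{1}{2}}$ and compared against the trivial lower bound $\mathrm{maxcut}(G) \geq \frac{|E|}{2}$ (a uniformly random cut cuts half the edges in expectation). The size $n = 400$ below is an arbitrary illustrative choice.

```python
import numpy as np

# Sample G ~ G_{n, 1/2} as a symmetric 0/1 adjacency matrix with zero diagonal.
rng = np.random.default_rng(1)
n = 400
upper = rng.random((n, n)) < 0.5
A = np.triu(upper, 1)
A = (A + A.T).astype(float)
num_edges = A.sum() / 2

# The certificate |E|/2 + (n/4) * lambda_max(-A) upper-bounds maxcut(G);
# for G_{n,1/2} it should be n^2/8 + O(n^{3/2}).
lam = np.linalg.eigvalsh(-A)[-1]
certificate = num_edges / 2 + (n / 4) * lam

print(num_edges / 2, certificate, n**2 / 8)
```

The gap between $\frac{|E|}{2} \approx \frac{n^2}{8}$ and the certificate is the $O(n^{3/2})$ term coming from $\lambda_{\max}(-A) = O(\sqrt{n})$.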
5. Trace and Eigenvalues
Suppose matrix $M \in \mathbb{R}^{n \times n}$ is symmetric with eigenvalues $\lambda_1, \dots, \lambda_n$. The following are true:
- $\operatorname{tr}(M) := \sum_i M_{ii} = \sum_i \lambda_i$
- The eigenvalues of $M^k$ are $\lambda_1^k, \dots, \lambda_n^k$.
Then, $\operatorname{tr}(M^k) = \sum_i \lambda_i^k$ for every $k \geq 1$.
When $M$ is the adjacency matrix of a graph, $(M^k)_{uv}$ is the number of paths from $u$ to $v$ that take $k$ steps (not necessarily simple paths); for a random graph, we will work with the expected number of such paths.
Our goal with this is to bound the eigenvalues of $M$. Since the trace relates the sum of the diagonal to the sum of the eigenvalues for symmetric $M$, we can use it to provide an upper bound on the spectral norm: for even $k$, every $\lambda_i^k$ is nonnegative, so
$$\|M\|^k = \max_i |\lambda_i|^k \leq \sum_i \lambda_i^k = \operatorname{tr}(M^k), \quad \text{hence} \quad \|M\| \leq \left(\operatorname{tr}(M^k)\right)^{1/k}$$
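Since $\operatorname{tr}(M^k) \leq n \|M\|^k$ for even $k$, the trace bound is off by at most a factor $n^{1/k}$, so it tightens as $k$ grows. A quick numeric sketch (a random symmetric $M$ with arbitrary sizes $n = 50$, $k = 20$):

```python
import numpy as np

# Random symmetric M (seeded); k must be even for the trace bound to apply.
rng = np.random.default_rng(2)
n, k = 50, 20
B = rng.standard_normal((n, n))
M = (B + B.T) / 2

spec_norm = np.max(np.abs(np.linalg.eigvalsh(M)))
trace_bound = np.trace(np.linalg.matrix_power(M, k)) ** (1 / k)

print(spec_norm, trace_bound)  # spec_norm <= trace_bound <= n^(1/k) * spec_norm
```

This is exactly the trick used in the previous lectures: bounding the expected trace of a power of the (centered) adjacency matrix bounds its spectral norm.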