Corrado Böhm


I was very saddened to hear that Corrado Böhm died today at age 94.

Böhm was one of the founding fathers of Italian computer science. His dissertation, from 1951, was one of the first (maybe the first? I don’t know the history of these ideas very well) examples of a programming language with a compiler written in the language itself. In the 1950s and 1960s he worked at the CNR (an Italian national research institution with its own technical staff), in the IAC (Institute for the Applications of Computing) directed by mathematician Mauro Picone. IAC was the second place in Italy to acquire a computer. In 1970 he moved to the University of Turin, where he was the founding chairman of the computer science department. In 1972 he moved to the Sapienza University of Rome, in the Math department, and in 1989 he was one of the founders of the Computer Science department at Sapienza. He remained at Sapienza until his retirement.

Böhm became internationally known for a 1966 result, joint with Giuseppe Jacopini, in which he showed, roughly speaking, that programs written in a language that includes goto statements (formalized as flow-charts) could be mapped to equivalent programs that don’t. The point of the paper was that the translation was “structural” and the translated program retained much of the structure and the logic of the original program, meaning that programmers could give up goto statements without having to fundamentally change the way they think.

Dijkstra’s famous “Go To Statement Considered Harmful” 1968 letter to CACM had two references, one of which was the Jacopini-Böhm theorem.

Böhm was responsible for important foundational work on lambda calculus, typed functional languages, and the theory of programming languages at large.

He was a remarkable mentor, many of whose students and collaborators (including a notable number of women) became prominent in the Italian community of theory of programming languages, and Italian academia in general.


In the photo above is Böhm with Simona Ronchi, Betti Venneri and Mariangiola Dezani, who all became prominent Italian professors.

You may also recognize the man on the right as a recent recipient of the Turing Award. Silvio Micali went to Sapienza to study math as an undergrad, and he worked with Böhm, who encouraged Silvio to pursue his PhD abroad.

I studied Computer Science at Sapienza, starting the first year that the major was introduced in 1989. I remember that when I first met Böhm he reminded me of Doc Brown from Back to the Future: a tall man with crazy white hair, speaking of wild ideas with incomprehensible technical terms, but with unstoppable enthusiasm.

One year, I tried attending a small elective class that he was teaching. My, probably imprecise, recollection of the first lecture is as follows.

He said that one vertex is a binary tree, and that if you connect two binary trees to a new root you also get a binary tree; then he asked us: how would you prove statements on binary trees by induction? The class came to a stop until we said something. After some consultation among us, one of the smart kids proposed “by induction on the number of vertices?” Yes, said Böhm, that would work, but isn’t there a better way? He wanted us to arrive by ourselves at the insight that, since binary trees have a recursive definition, one can do induction on the structure of the definition.

In subsequent lectures, we looked (without being told) at how to construct purely functional data structures. I dropped the class after about a month.

(Photo credits: corradobohm.it)


Beyond Worst-Case Analysis: Lecture 10

In which we go over a more powerful (but difficult to compute) alternative to the spectral norm, and discuss how to approximate it.

Today we’ll discuss a solution to the issue of high-degree vertices distorting spectral norms, which will prepare us for next lecture’s discussion on community detection in the stochastic block model using SDP. We’ll discuss a new kind of norm, the infinity-to-one norm, and find an efficient way to approximate it using SDP.


Beyond Worst-Case Analysis: Lecture 9

Scribed by Chinmay Nirkhe

In which we explore the Stochastic Block Model.

1. The {G_{n,p,q}} problem

The Stochastic Block Model is a generic model for graphs generated by some parameters. The simplest version, and the one we will consider today, is the {G_{n,p,q}} problem.

Definition 1 ({G_{n,p,q}} graph distribution) The {G_{n,p,q}} distribution is a distribution on graphs with {n} vertices where {V} is partitioned into two subsets of equal size: {V = V_1 \sqcup V_2}. Then for a pair {\{i,j\}} of vertices in the same subset, {\Pr( (i,j) \in E) = p}, and otherwise {\Pr( (i,j) \in E) = q}.

We will only consider the regime under which {p > q}. If we want to find the partition {V = V_1 \sqcup V_2}, it is intuitive to look at the problem of finding the minimum balanced cut. The cut {(V_1, V_2)} has expected size {q n^2 / 4} and any other cut will have greater expected size.

Our intuition should be that as {p \rightarrow q}, the problem only gets harder, and that for a fixed ratio {p/q}, as {p,q \rightarrow 1}, the problem only gets easier. The latter can be stated rigorously as follows: if we can solve the problem for {p,q}, then we can also solve it for {cp, cq} where {c > 1}, by keeping each edge independently with probability {1/c} and reducing to the case we can solve.

Recall that for the {k}-planted clique problem, we found the eigenvector {{\bf x}} corresponding to the largest eigenvalue of {A - \frac{1}{2} J}. We then defined {S} as the vertices {i} with the {k} largest values of {|x_i|} and cleaned up {S} a little to get our guess for the planted clique.

In the Stochastic Block Model we are going to follow a similar approach, but we are instead going to use the matrix {A - \left( \frac{p + q}{2} \right) J}. This is intuitive, as the average degree of the graph is {p(n/2 - 1) + q(n/2) \approx \frac{n}{2} (p+q)}. The idea is simple: find the eigenvector {{\bf x}} corresponding to the largest eigenvalue of this matrix and define

\displaystyle  V_1 = \{ i : x_i > 0\}, \qquad V_2 = \{ i : x_i \leq 0 \} \ \ \ \ \ (1)
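
As a concrete illustration, here is a minimal numpy sketch of the whole pipeline (the function name and parameter choices are ours, not from the lecture):

    import numpy as np

    def sbm_recover(n, p, q, rng=np.random.default_rng(0)):
        # Sample G_{n,p,q} with planted split V_1 = {0..n/2-1}, V_2 = {n/2..n-1}
        half = n // 2
        same = np.zeros((n, n), dtype=bool)
        same[:half, :half] = same[half:, half:] = True
        upper = np.triu(rng.random((n, n)) < np.where(same, p, q), 1)
        A = (upper | upper.T).astype(float)
        # Top eigenvector of A - ((p+q)/2) J, then split by sign as in (1)
        B = A - ((p + q) / 2) * np.ones((n, n))
        x = np.linalg.eigh(B)[1][:, -1]
        return np.where(x > 0)[0], np.where(x <= 0)[0]

For, say, {n = 1000}, {p = 0.1}, {q = 0.05}, the sign pattern of the top eigenvector typically agrees with the planted partition on most vertices (up to swapping the roles of the two sides).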


Beyond Worst-Case Analysis: Lecture 8

Scribed by Luowen Qian

In which we use spectral techniques to find certificates of unsatisfiability for random {k}-SAT formulas.

1. Introduction

Given a random {k}-SAT formula with {m} clauses and {n} variables, we want to find a certificate of unsatisfiability for such a formula within polynomial time. Here we consider {k} as fixed, usually equal to 3 or 4. For fixed {n}, the more clauses you have, the more constraints you have, so it becomes easier to show that these constraints are inconsistent. For example, for 3-SAT,

  1. In the previous lecture, we have shown that if {m > c_3 \cdot n} for some large constant {c_3}, almost surely the formula is not satisfiable. But it’s conjectured that there are no polynomial time, or even subexponential time, algorithms that can find a certificate of unsatisfiability when {m = O(n)}.
  2. If {m > c \cdot n^2} for some other constant {c}, we showed last time that we can find, in polynomial time and with high probability, a certificate that the formula is not satisfiable.

    The algorithm for finding such certificate is shown below.

    • Algorithm 3SAT-refute({f})
    • for {b_1 \in \{0,1\}}
      • if 2SAT-satisfiable({f} restricted to clauses that contain the literal {x_1 = \overline b_1}, with {x_1 := b_1})
        • return {\bot}
    • return UNSATISFIABLE

    We know that we can solve 2-SAT in linear time, and approximately

    \displaystyle \frac{\binom{n - 1} 2 \cdot m}{\binom n 3 \cdot 2} = \frac{3m}{2n} > \frac 3 2 cn

    clauses contain the literal {x_1 = \overline{b_1}}. So when {c} is sufficiently large, both 2-SAT instances will almost surely be unsatisfiable. When a subset of the clauses is not satisfiable, the whole 3-SAT formula is not satisfiable. Therefore we can certify unsatisfiability for 3-SAT with high probability.
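
Here is a sketch of the refutation algorithm above in Python, with the 2-SAT check done via the standard implication-graph characterization: a clause {a \vee b} yields implications {\overline a \rightarrow b} and {\overline b \rightarrow a}, and the instance is unsatisfiable iff some literal and its negation lie in the same strongly connected component. The use of networkx, and all names, are our own choices rather than part of the lecture.

    import networkx as nx

    def two_sat_satisfiable(clauses):
        # Literals are nonzero ints: +i means x_i = 1, -i means x_i = 0.
        g = nx.DiGraph()
        for a, b in clauses:
            g.add_edge(-a, b)   # clause (a or b) gives not-a -> b
            g.add_edge(-b, a)   # and not-b -> a
        for comp in nx.strongly_connected_components(g):
            if any(-lit in comp for lit in comp):
                return False    # some x_i is equivalent to its own negation
        return True

    def refute_3sat(clauses):
        # clauses: list of 3-tuples of literals over x_1, ..., x_n
        for b1 in (0, 1):
            # the literal on x_1 that is falsified by setting x_1 := b1
            falsified = 1 if b1 == 0 else -1
            sub = [tuple(l for l in c if l != falsified)
                   for c in clauses if falsified in c]
            if two_sat_satisfiable(sub):
                return None              # bottom: we fail to refute
        return "UNSATISFIABLE"           # both restrictions are contradictory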

In general for {k}-SAT,

  1. If {m > c_k \cdot n} for some large constant {c_k}, almost surely the formula is not satisfiable.
  2. If {m > c'_k \cdot n^{k - 1}} for some other constant {c'_k}, we can construct a very similar algorithm, in which we check all assignments to the first {k-2} variables, and see if the 2SAT part of the restricted formula is unsatisfiable.

    Since for every fixed assignment to the first {k - 2} variables, approximately

    \displaystyle \frac{\binom{n - k + 2} 2}{\binom n k 2^{k - 2}} = \frac{k!}{(n^{k - 2} + O(n^{k - 3})) 2^{k - 1}}

    fraction of the {m} clauses remains, we expect that the constant {c'_k = \Omega\left(\frac{2^k}{k!}\right)} suffices, and the running time is {O(2^k m)}.

So what about values of {m} in between? It turns out that we can do better with spectral techniques, and the reason spectral techniques work better is that, unlike the previous method, they do not certify unsatisfiability by trying all possible assignments to some of the variables.

2. Reducing certifying unsatisfiability for k-SAT to bounding the largest independent set

2.1. From 3-SAT instances to hypergraphs

Given a random 3-SAT formula {f}, which is an AND of {m} random 3-CNF clauses over {n} variables {x_1, x_2, ..., x_n} (abbreviated as the vector {{\bf x}}), i.e.

\displaystyle  f({\bf x}) = \bigwedge\limits_{i = 1}^m \left( x_{\sigma_{i,1}} = b_{i,1} \lor x_{\sigma_{i,2}} = b_{i,2} \lor x_{\sigma_{i,3}} = b_{i,3} \right),

where {\sigma_{i,j} \in [n], b_{i,j} \in \{0, 1\}}, {\forall i \in [m], \sigma_{i,1} < \sigma_{i,2} < \sigma_{i,3}} and no two {(\sigma_{i,1}, b_{i,1}, \sigma_{i,2}, b_{i,2}, \sigma_{i,3}, b_{i,3})} are exactly the same. Construct hypergraph {H_f = (X, E)}, where

\displaystyle X = \left\{(i, b) \middle| i \in [n], b \in \{0, 1\}\right\}

is a set of {2n} vertices, where each vertex means an assignment to a variable, and

\displaystyle E = \left\{ e_j \middle| j \in [m] \right\}, e_j = \{(\sigma_{j,1}, \overline{b_{j,1}}), (\sigma_{j,2}, \overline{b_{j,2}}), (\sigma_{j,3}, \overline{b_{j,3}})\}

is a set of {m} 3-hyperedges. The reason we put in the negation of {b} is that a 3-CNF clause evaluates to false if and only if all three of its literals evaluate to false. This will be useful shortly.

First let’s generalize the notion of independent set for hypergraphs.

An independent set for hypergraph {H = (X, E)} is a set {S \subseteq X} that satisfies {\forall e \in E, e \not \subseteq S}.

If {f} is satisfiable, {H_f} has an independent set of size at least {n}. Equivalently, if the largest independent set of {H_f} has size less than {n}, then {f} is unsatisfiable. Proof: Assume {f} is satisfiable, and let {{\bf x} \leftarrow {\bf y}} be a satisfying assignment, where {{\bf y} \in \{0, 1\}^n}. Then {S = \{ (i, y_i) | i \in [n] \}} is an independent set of size {n}. If not, some hyperedge {e_j \subseteq S}, so {y_{\sigma_{j,1}} = \overline{b_{j,1}} \land y_{\sigma_{j,2}} = \overline{b_{j,2}} \land y_{\sigma_{j,3}} = \overline{b_{j,3}}}, and the {j}-th clause in {f} evaluates to false. Therefore {f} evaluates to false, which contradicts the fact that {{\bf y}} is a satisfying assignment. \Box

We know that if we pick a random graph that’s sufficiently dense, i.e. the average degree {d > \ln n}, by spectral techniques we will have a certifiable upper bound on the size of the largest independent set of {O\left(\frac n{\sqrt d}\right)} with high probability. So if a random graph has {\Omega(n \log n)} random edges, we can prove that there’s no large independent set with high probability.

But if we have a random hypergraph with {\Omega(n \log n)} random hyperedges, we don’t have any analog of spectral theories for hypergraphs that allow us to do this kind of certification. And from the fact that the problem of certifying unsatisfiability of random formula of {\Omega(n \log n)} clauses is considered to be hard, we conjecture that there doesn’t exist a spectral theory for hypergraphs able to replicate some of the things we are able to do on graphs.

However, what we can do, possibly with some loss, is to reduce the hypergraph problem to a graph problem, where we can apply spectral techniques.

2.2. From 4-SAT instances to graphs

Now let’s look at random 4-SATs. Similarly we will write a random 4-SAT formula {f} as:

\displaystyle  f({\bf x}) = \bigwedge\limits_{i = 1}^m \left( x_{\sigma_{i,1}} = b_{i,1} \lor x_{\sigma_{i,2}} = b_{i,2} \lor x_{\sigma_{i,3}} = b_{i,3} \lor x_{\sigma_{i,4}} = b_{i,4} \right),

where {\sigma_{i,j} \in [n], b_{i,j} \in \{0, 1\}}, {\forall i \in [m], \sigma_{i,1} < \sigma_{i,2} < \sigma_{i,3} < \sigma_{i,4}} and no two {(\sigma_{i,1}, b_{i,1}, ..., \sigma_{i,4}, b_{i,4})} are exactly the same. Similar to the previous construction, but instead of constructing another hypergraph, we will construct just a graph {G_f = (V, E)}, where

\displaystyle V = \left\{(i_1, b_1, i_2, b_2) \middle| i_1, i_2 \in [n], b_1, b_2 \in \{0, 1\}\right\}

is a set of {4n^2} vertices and

\displaystyle E = \left\{ e_j \middle| j \in [m] \right\}, e_j = \{(\sigma_{j,1}, \overline {b_{j,1}}, \sigma_{j,2}, \overline {b_{j,2}}), (\sigma_{j,3}, \overline {b_{j,3}}, \sigma_{j,4}, \overline {b_{j,4}})\}

is a set of {m} edges.

If {f} is satisfiable, {G_f} has an independent set of size at least {n^2}. Equivalently, if the largest independent set of {G_f} has size less than {n^2}, then {f} is unsatisfiable. Proof: The proof is very similar to the previous one. Assume {f} is satisfiable, and let {{\bf x} \leftarrow {\bf y}} be a satisfying assignment, where {{\bf y} \in \{0, 1\}^n}. Then {S = \{ (i, y_i, j, y_j) | i, j \in [n] \}} is an independent set of size {n^2}. If not, some edge {e_j \subseteq S}, so {y_{\sigma_{j,1}} = \overline {b_{j,1}} \land y_{\sigma_{j,2}} = \overline {b_{j,2}} \land y_{\sigma_{j,3}} = \overline {b_{j,3}} \land y_{\sigma_{j,4}} = \overline {b_{j,4}}}, and the {j}-th clause in {f} evaluates to false. Therefore {f} evaluates to false, which contradicts the fact that {{\bf y}} is a satisfying assignment. \Box
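
For concreteness, here is a sketch of the construction (the helper and its interface are our own): each clause, with its literals sorted by variable, contributes one edge joining its two “falsifying half-assignments”.

    def graph_from_4sat(clauses):
        # Each clause is a 4-tuple of (variable, b) pairs sorted by variable;
        # its edge joins the two half-assignments that falsify the clause.
        edges = set()
        for (s1, b1), (s2, b2), (s3, b3), (s4, b4) in clauses:
            u = (s1, 1 - b1, s2, 1 - b2)
            v = (s3, 1 - b3, s4, 1 - b4)
            edges.add((u, v))
        return edges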

From here, we can observe that {G_f} is not a random graph, because some edges are forbidden, for example when the two endpoints of the edge have a variable in common. But it’s very close to a random graph. In fact, we can apply the same spectral techniques to get a certifiable upper bound on the size of the largest independent set if the average degree {d > \ln n}; i.e., if {m = \Omega(n^2 \log n)}, we can certify unsatisfiability with high probability by upper bounding the size of the largest independent set in the constructed graph.

We can generalize this result to all even {k}. For random {k}-SAT with even {k}, if {m > c_k n^{k/2} \log n}, we can certify unsatisfiability with high probability, which is better than the previous method, which requires {m = \Omega(n^{k - 1})}. The same {n^{k/2}(\log n)^{O(1)}} is achievable for odd {k}, but the argument is significantly more complicated.

2.3. Certifiable upper bound for independent sets in modified random sparse graphs

Leaving odd {k} aside, another question is whether, in this setup, we can do better and get rid of the {\log n} term. This term comes from the fact that the spectral norm bound breaks down when the average degree {d < \ln n}. However, it is still true that a random graph has no large independent sets even when the average degree {d} is a constant. It’s just that the spectral norm no longer gives good bounds, since it is as large as {\Omega\left(\sqrt{\max \textrm{ degree}}\right) = \Omega\left(\sqrt \frac{\log n}{\log\log n}\right)}. So is there something tighter than spectral bounds that could help us get rid of the {\log n} term? Could we fix this by removing all the high degree vertices in the random graph?

This construction is due to Feige and Ofek. Given a random graph {G \sim G_{n, p}}, where the average degree {d = np} is some large constant, construct {G'} by taking {G} and removing all edges incident on nodes with degree higher than {2\bar d}, where {\bar d} is the average degree of {G}. We denote by {A} the adjacency matrix of {G} and by {A'} that of {G'}. And it turns out:

With high probability, {\left\lVert A' - \frac d n J \right\rVert \le O\left(\sqrt d\right)}.

It turns out to be rather difficult to prove. Previously we saw spectral results on random graphs that use matrix traces to bound the largest eigenvalue. In this case it’s hard to do so, because the contribution of a closed walk to the trace is complicated by the fact that edges have dependencies. The other approach is, given a random matrix {M}, to upper bound {\left\lVert M \right\rVert = \max\limits_x \frac {x^T M x} {\lVert x \rVert^2}} by a union bound: for every fixed candidate {x}, bound the probability that the quadratic form is large, and argue that the number of candidates is small, so that with high probability no candidate makes the quadratic form large. The problem here is that the set of candidates is infinite. So Feige and Ofek discretize the set of vectors, and then reduce the bound on the quadratic form of a discretized vector to a sum of several terms, each of which has to be carefully bounded.

We always have

\displaystyle  \max \textrm{IndSet}(G) \le \max \textrm{IndSet}(G') \le \frac n d \left\lVert A' - \frac d n J \right\rVert

and so, with high probability, we get an {O\left(\frac n {\sqrt d}\right)} polynomial-time upper bound certificate for the size of the largest independent set of a {G_{n,d/n}} random graph. This removes the extra {\log n} term from our analysis of certificates of unsatisfiability for random {k}-SAT when {k} is even.
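
A quick numerical illustration of the pruning and of the resulting certificate (the parameters below are arbitrary choices of ours):

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 2000, 10.0
    upper = np.triu(rng.random((n, n)) < d / n, 1)
    A = (upper | upper.T).astype(float)

    # remove all edges incident on nodes of degree > 2 * (average degree)
    high = A.sum(axis=1) > 2 * A.sum(axis=1).mean()
    Ap = A.copy()
    Ap[high, :] = 0
    Ap[:, high] = 0

    norm = np.linalg.norm(Ap - (d / n) * np.ones((n, n)), 2)
    print(norm, (n / d) * norm)   # O(sqrt(d)), and the O(n/sqrt(d)) certificate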

3. SDP relaxation of independent sets in random sparse graphs

In order to show that a random graph has no large independent sets, a more principled way is to argue that there is some polynomial-time solvable relaxation of the problem whose optimum is an upper bound on the size of the largest independent set.

Let SDPIndSet{(G)} be the optimum of the following semidefinite programming relaxation of the Independent Set problem, which is due to Lovász:

\displaystyle  \begin{array}{rcl}  \max && \sum_{i\in V} \langle {\bf x}_i, {\bf x}_0 \rangle\\ s.t. \\ && ||{\bf x}_0||^2 = 1\\ && \langle {\bf x}_0, {\bf x}_i \rangle = ||{\bf x}_i ||^2 \ \ \ \forall i\in V\\ && \langle {\bf x}_i, {\bf x}_j \rangle = 0 \ \ \ \forall (i,j)\in E \end{array}
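
In Gram-matrix form this program is easy to write down with an off-the-shelf solver. Here is a sketch using cvxpy (our choice of tool, not part of the notes), where {X} is the Gram matrix of the vectors {({\bf x}_0, {\bf x}_1, \ldots, {\bf x}_n)}:

    import cvxpy as cp

    def sdp_ind_set(adj):
        # adj: n x n symmetric 0/1 adjacency matrix (numpy array)
        n = adj.shape[0]
        X = cp.Variable((n + 1, n + 1), PSD=True)   # Gram matrix; index 0 is x_0
        cons = [X[0, 0] == 1]
        cons += [X[0, i + 1] == X[i + 1, i + 1] for i in range(n)]
        cons += [X[i + 1, j + 1] == 0
                 for i in range(n) for j in range(i + 1, n) if adj[i, j]]
        return cp.Problem(cp.Maximize(cp.sum(X[0, 1:])), cons).solve()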

Since it’s the relaxation of the problem of finding the maximum independent set, {\max \textrm{IndSet}(G) \le \textrm{SDPIndSet}(G)} for any graph {G}. And this relaxation has a nice property.

For every {0 < p < 1}, and for every graph {G}, we have \begin{equation*} {\rm SDPIndSet}(G) \leq \frac 1p \cdot || pJ - A || \end{equation*} where {J} is the all-one matrix and {A} is the adjacency matrix of {G}.

Proof: First we note that SDPIndSet{(G)} is at most

\displaystyle  \begin{array}{rcl}  \max && \sum_{i\in V} \langle {\bf x}_i, {\bf x}_0 \rangle\\ s.t. \\ && ||{\bf x}_0||^2 = 1\\ && \sum_{i\in V} \langle {\bf x}_0, {\bf x}_i \rangle = \sum_{i\in V} ||{\bf x}_i ||^2 \\ && \sum_{(i,j)\in E} \langle {\bf x}_i, {\bf x}_j \rangle = 0 \end{array}

and, since at any feasible point {\sum_{i\in V} \langle {\bf x}_0, {\bf x}_i \rangle = \sum_{i\in V} ||{\bf x}_i ||^2}, this is equal to

\displaystyle  \begin{array}{rcl}  \max && \frac { \left( \sum_{i\in V} \langle {\bf x}_i, {\bf x}_0 \rangle \right) ^2}{\sum_{i \in V} || {\bf x}_i||^2}\\ s.t. \\ && ||{\bf x}_0||^2 = 1\\ && \sum_{i\in V} \langle {\bf x}_0, {\bf x}_i \rangle = \sum_{i\in V} ||{\bf x}_i ||^2 \\ && \sum_{(i,j)\in E} \langle {\bf x}_i, {\bf x}_j \rangle = 0 \end{array}

which is at most

\displaystyle  \begin{array}{rcl}  \max && \frac { \left \| \sum_{i\in V}{\bf x}_i \right \|^2}{\sum_{i \in V} || {\bf x}_i||^2}\\ s.t. \\ && ||{\bf x}_0||^2 = 1\\ && \sum_{i\in V} \langle {\bf x}_0, {\bf x}_i \rangle = \sum_{i\in V} ||{\bf x}_i ||^2 \\ && \sum_{(i,j)\in E} \langle {\bf x}_i, {\bf x}_j \rangle = 0 \end{array}

because

\displaystyle  \sum_{i\in V} \langle {\bf x}_i, {\bf x}_0 \rangle = \left\langle \sum_{i\in V} {\bf x}_i , {\bf x}_0\right \rangle \leq \left \| \sum_{i\in V} {\bf x}_i \right \| \cdot || {\bf x}_0 || = \left \| \sum_{i\in V} {\bf x}_i \right \|

Finally, the above optimization is equivalent to the following (the extra term in the objective is zero at any feasible point, by the last constraint)

\displaystyle  \begin{array}{rcl}  \max && \frac { \left \| \sum_{i\in V}{\bf x}_i \right \|^2 - \frac 1p \sum_{i,j} A_{i,j} \langle {\bf x}_i , {\bf x}_j \rangle }{\sum_{i \in V} || {\bf x}_i||^2}\\ s.t. \\ && ||{\bf x}_0||^2 = 1\\ && \sum_{i\in V} \langle {\bf x}_0, {\bf x}_i \rangle = \sum_{i\in V} ||{\bf x}_i ||^2 \\ && \sum_{(i,j)\in E} \langle {\bf x}_i, {\bf x}_j \rangle = 0 \end{array}

which is at most the unconstrained problem

\displaystyle \begin{aligned} \max \frac { \left \| \sum_{i\in V}{\bf x}_i \right \|^2 - \frac 1p \sum_{i,j} A_{i,j} \langle {\bf x}_i , {\bf x}_j \rangle }{\sum_{i \in V} || {\bf x}_i||^2} &= \max \frac { \sum_{i,j} \left( J - \frac 1p A\right)_{i,j} \langle {\bf x}_i,{\bf x}_j \rangle }{\sum_{i \in V} || {\bf x}_i||^2} \\ &= \lambda_{\max} \left (J - \frac 1p A \right) \\ &\leq \frac 1p || pJ - A||. \end{aligned}

\Box

Recall from the previous section that we constructed {G'} by removing edges from {G}, which corresponds to removing constraints in our semidefinite programming problem, so {\textrm{SDPIndSet}(G) \le \textrm{SDPIndSet}(G') \le \left\lVert J - \frac 1 p A' \right\rVert}, which is, by Theorem 3, at most {O\left(\frac n{\sqrt d}\right)} with high probability.

4. SDP relaxation of random k-SAT

From the previous section, we get an idea that we can use semidefinite programming to relax the problem directly and find a certificate of unsatisfiability for the relaxed problem.

Given a random {k}-SAT formula {f}:

\displaystyle  \begin{array}{rcl}  f({\bf x}) &= & \bigwedge\limits_{i = 1}^m \bigvee\limits_{j = 1}^k x_{\sigma_{i,j}} = b_{i,j} \\ &= &\bigwedge\limits_{i = 1}^m \overline{\overline{\bigvee\limits_{j = 1}^k x_{\sigma_{i,j}} = b_{i,j}}} \\ &= &\bigwedge\limits_{i = 1}^m \overline{\bigwedge\limits_{j = 1}^k x_{\sigma_{i,j}} = \overline{b_{i,j}}}. \end{array}

The satisfiability of {f} is equivalent to the satisfiability of the following equations:

\displaystyle  \begin{array}{rcl}  && x_i^2 = x_i \forall i \in [n] \\ && \sum_{i = 1}^m \left(1 - \prod_{j = 1}^k\left((-1)^{b_{i,j}}x_{\sigma_{i,j}} + b_{i,j}\right)\right) = m \end{array}

Notice that if we expand the polynomial on the left side, some of the monomials have degree higher than 2, which prevents us from relaxing these equations to a semidefinite programming problem. In order to resolve this, for every subset {A} of the variables with {|A| \le k/2} we introduce {x_A = \prod_{i \in A} x_i}. Then we can relax all variables to be vectors, i.e.

\displaystyle  \begin{array}{rcl}  && \lVert {\bf x}_\emptyset \rVert^2 = 1 \\ && \langle {\bf x}_A, {\bf x}_B \rangle = \langle {\bf x}_C, {\bf x}_D \rangle \ \ \ \forall A \cup B = C \cup D \\ && \sum_{i = 1}^m \left(1 - \prod_{j = 1}^k\left((-1)^{b_{i,j}}{\bf x}_{\sigma_{i,j}} + b_{i,j}\right)\right) = m \ \ \ \textrm{rewritten as quadratic forms of } {\bf x}_A \end{array}

For example, if we have a 4-SAT clause

\displaystyle  x_3 \lor \overline{x_4} \lor x_7 \lor \overline{x_{10}},

we can rewrite it as

\displaystyle  \begin{array}{rcl}  1 - (1 - {\bf x}_3) \cdot {\bf x}_4 \cdot (1 - {\bf x}_7) \cdot {\bf x}_{10} &= &1 - {\bf x}_4{\bf x}_{10} + {\bf x}_3{\bf x}_4{\bf x}_{10} + {\bf x}_4{\bf x}_7{\bf x}_{10} - {\bf x}_3{\bf x}_4{\bf x}_7{\bf x}_{10} \\ &= &1 - \langle {\bf x}_{\{4\}}, {\bf x}_{\{10\}} \rangle + \langle {\bf x}_{\{3,4\}}, {\bf x}_{\{10\}} \rangle + \langle {\bf x}_{\{4,7\}}, {\bf x}_{\{10\}} \rangle - \langle {\bf x}_{\{3,4\}}, {\bf x}_{\{7,10\}} \rangle. \end{array}
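
The expansion of a clause into monomials, and the splitting of each monomial into a product of at most two subset variables, is mechanical. Here is a small sketch (helper names and interface are ours) for a clause given as a list of {(\sigma_j, b_j)} pairs:

    def clause_poly(clause):
        # Expand prod_j ((-1)^{b_j} x_{s_j} + b_j), the indicator that all
        # literals are false, into monomials {frozenset of variables: coeff},
        # then return the polynomial 1 - product for the clause.
        prod = {frozenset(): 1}
        for s, b in clause:
            new = {}
            for mono, c in prod.items():
                if b == 0:                       # factor is x_s
                    new[mono | {s}] = new.get(mono | {s}, 0) + c
                else:                            # factor is 1 - x_s
                    new[mono] = new.get(mono, 0) + c
                    new[mono | {s}] = new.get(mono | {s}, 0) - c
            prod = new
        poly = {mono: -c for mono, c in prod.items() if c != 0}
        poly[frozenset()] = poly.get(frozenset(), 0) + 1
        return poly

    def split_monomial(mono):
        # write x_mono as x_A * x_B with |A|, |B| <= 2, as in the relaxation
        vs = sorted(mono)
        return frozenset(vs[:2]), frozenset(vs[2:])

On the clause above, clause_poly([(3, 1), (4, 0), (7, 1), (10, 0)]) recovers exactly the displayed expansion.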

For this relaxation, we have:

  1. If {m < c(k, n) n^{k/2}}, the SDP associated with the formula is feasible with high probability, where {c(k, n) = 1/n^{o(1)}} for every fixed {k}.
  2. If {m > c'(k, n) \cdot n^{k/2}}, the SDP associated with the formula is not feasible with high probability, where {c'(k, n)} is a constant for every fixed even {k}, and {c'(k, n) = \textrm{poly}(\log n)} for every fixed odd {k}.

Beyond Worst-Case Analysis: Lecture 7

Scribed by Jeff Xu

In which we discuss the planted clique distribution, and specifically how to find a planted clique in a random graph. We rely heavily on material from Lecture 2 and Lecture 3, in which we covered the upper bound certificate for the max clique in {G_{n,\frac{1}{2}}}. At the end of the class, we wrap up this topic and start the topic of {k}-SAT.

1. Planted Clique

To start with, we describe a distribution of graphs with a planted clique. Suppose that we sample {G} from {G_{n,\frac{1}{2}}} and we want to modify {G} s.t. it has a size {k} clique, i.e., we have a clique {S \subseteq V} with {\left|S\right|=k}. The following code describes a sampler for the distribution.

  • {G \leftarrow G_{n,\frac{1}{2}}}
  • Pick a subset of vertices {S} from {V} s.t. {|S|=k}
  • Independently for each pair {\{u,v\}}, make {\{u,v\}} an edge with probability
    • {1} if {u,v \in S}
    • {\frac 12} otherwise

Note: We are only interested in the case {k \geq 2\log n}, which is the case in which the planted clique is, with high probability, larger than any pre-existing clique.
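
A minimal numpy version of this sampler (the function name and interface are our own):

    import numpy as np

    def planted_clique(n, k, rng=np.random.default_rng(0)):
        upper = np.triu(rng.random((n, n)) < 0.5, 1)   # G_{n,1/2}
        A = (upper | upper.T).astype(int)
        S = rng.choice(n, size=k, replace=False)       # the planted subset S
        A[np.ix_(S, S)] = 1                            # force all pairs in S
        np.fill_diagonal(A, 0)                         # no self-loops
        return A, S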


Annuntio vobis gaudium magnum: habemus directors

(Photo credit: ACM)

Formally ending a search started in March 2016 (and a process started in the Fall of 2015), we are pleased to finally officially announce that Shafi Goldwasser will take over from Dick Karp as director of the Simons Institute for the Theory of Computing on January 1st, and will return to Berkeley after a 30+ year hiatus.

Shafi is the co-inventor and developer of the notions of semantic security in encryption; of zero-knowledge proofs; of pseudorandom functions; of the connection between PCP and hardness of approximation; and of property testing in sublinear algorithms, among others. She has received the Turing Award for her work on cryptography and two Gödel Prizes for her work on complexity.

I cannot put in words how happy I am for the Berkeley community, including myself, and for the future of the Institute.

The director search was my first glimpse into how the Berkeley central campus bureaucracy operates, and it was horrifying. The simplest thing couldn’t be done without a sequence of authorities signing off on it, and each authority had a process for that, which involved asking for other things that other authorities had to sign off on, and so on in what at times seemed like actual infinite descent.

The announcement linked above was in the works for at least three weeks!

Alistair Sinclair, after two terms as associate director of the Simons Institute, during which his heroic efforts were recognized with the SIGACT service award, also retired from his position at the Institute, and last July 1st was replaced by Berkeley professor Peter Bartlett, a noted pioneer of the study of neural networks.

This weekend, on Saturday, the Simons Institute will host the FOCS reception, which will double as a celebration for Alistair’s prize. There will be buses leaving the conference hotel at 6:45pm, and there will be plenty of food (and drinks!) at the Institute. There will also be buses taking people back to the hotel, although once you are in downtown Berkeley on a Saturday evening (bring a sweater) you may want to hang out a bit more and then take a rideshare service back to the hotel.

Beyond Worst-Case Analysis: Lecture 5

Scribed by Haaris Khan

In which we study the SDP relaxation of Max Cut in random graphs.

1. Quick Review of Chernoff Bounds

Suppose {X_1, ..., X_n} are mutually independent random variables with values in {\{0, 1\}}. Let {X := \sum_{i=1}^{n} X_i}. The Chernoff bounds claim the following:

1. {\forall \epsilon} such that {0 \leq \epsilon \leq 1},

\displaystyle  \mathop{\mathbb P}(\vert X - \mathop{\mathbb E}[X] \vert > \epsilon \cdot \mathop{\mathbb E}[X]) \leq \exp(-\Omega(\epsilon^2 \cdot \mathop{\mathbb E}[X]))

2. {\forall t > 1},

\displaystyle  \mathop{\mathbb P} (\vert X - \mathop{\mathbb E}[X] \vert \geq t \cdot \mathop{\mathbb E}[X]) \leq \exp(-\Omega((t\log(t)) \cdot \mathop{\mathbb E}[X]))

3. When we do not know {\mathop{\mathbb E}[X]}, we can bound as follows:

\displaystyle  \mathop{\mathbb P}(\vert X - \mathop{\mathbb E}[X] \vert \geq \epsilon \cdot n) \leq \exp(- \Omega(\epsilon^2 \cdot n))
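
As a quick numerical sanity check of the first bound (the constant {1/3} below is one standard instantiation of the {\Omega(\cdot)}, our choice):

    import numpy as np

    rng = np.random.default_rng(0)
    n, eps, trials = 10_000, 0.1, 5_000
    X = (rng.random((trials, n)) < 0.5).sum(axis=1)      # sums of n fair bits
    empirical = (np.abs(X - n / 2) > eps * n / 2).mean()
    print(empirical, 2 * np.exp(-eps**2 * (n / 2) / 3))  # tail vs. the bound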

2. Cutting a Near-Optimal Number of Edges in {G_{n,p}} Via SDP Rounding

Consider {G_{n, p}} where {p > \frac{\log(n)}{n}}, and let {d := np}. We show that with probability {1 - o(1)}, the max degree is {O(d)}.

  • Fix v
  • For some constant c,

    \displaystyle \mathop{\mathbb P}(\textrm{v has degree} > c \cdot d) \leq \mathop{\mathbb P}(\vert deg(v) - \mathop{\mathbb E}[deg(v)] \vert > (c - 1) \mathop{\mathbb E}[deg(v)])

    \displaystyle \leq \exp(- \Omega((c - 1)\log(c - 1) \cdot d)) \textrm{ (by Chernoff Bounds)}

    \displaystyle  \leq \exp(-\Omega((c - 1)\log(c - 1) \cdot \log(n)))

    \displaystyle  \leq \frac{1}{n^2}, \textrm{ for a suitable choice of constant c}

So {\mathop{\mathbb P}(\exists \text{ } v \textrm{ with degree } > c \cdot d) \leq n \cdot \frac{1}{n^2} \leq \frac{1}{n}}

Next, we compute the number of vertices that participate in a triangle. For this calculation we assume {d = o(n^{\frac{1}{3}})}, i.e. {p = o(n^{-\frac{2}{3}})}.

\displaystyle \mathop{\mathbb E}[\textrm{number vertices in triangles}] = n \cdot \mathop{\mathbb P}(\textrm{v participates in a triangle})

If a vertex v participates in a triangle, there are {\binom{n - 1}{2}} ways of choosing the other two vertices that participate with v in the triangle. So the expected number of vertices in triangles can be bounded by

\displaystyle  \mathop{\mathbb E}[\textrm{number vertices in triangles}] \leq n \cdot p^3 \cdot \binom{n - 1}{2}

\displaystyle  \leq n^3 \cdot p^3

\displaystyle  = o(n) \textrm{ if } p = o \left(\frac{1}{n^{\frac{2}{3}}}\right), \textrm{ } d = o(n^{\frac{1}{3}})

So with probability {1 - o(1)},

  • All vertices have degree {O(d)}
  • {o(n)} vertices participate in triangles.

3. Eigenvalue Computations and SDP

Problems like finding the largest or smallest eigenvalue can be solved using SDP.

Let {M} be symmetric and {\lambda_{\max}} the largest eigenvalue of {M}: {\lambda_{\max} = \max_x \frac{\boldsymbol{x}^T M \boldsymbol{x}}{\|\boldsymbol{x} \|^2}}. We can formulate this as a quadratic program:

\displaystyle  \begin{array}{rcl}  \max & & \sum_{i, j} M_{i, j} x_i x_j \\ {\rm s.t.} \\ && \sum_i x_i^2 = 1 \\ \end{array}

We showed previously that we can relax a Quadratic Program to SDP:

\displaystyle  \begin{array}{rcl}  \max & & \sum_{i, j} M_{i, j} \langle \boldsymbol{x_i}, \boldsymbol{x_j} \rangle \\ {\rm s.t.} \\ && \sum_i \|\boldsymbol{x_i}\|^2 = 1 \\ \end{array}

In fact, it happens that these two are equivalent. To show this, we must show that a vector solution of the SDP can be converted into an equally good solution of the QP, and vice versa.

Proving that a solution {x} of the QP is valid for the SDP: trivial. Any solution {x} to our quadratic program is also a solution to our SDP, since the SDP is a relaxation of the problem; hence the optimum of our QP is at most the optimum of our SDP.

Proving that a solution of the SDP yields one for the QP: consider a vector solution {x} of cost {c}. We note that our SDP can be transformed into the following unconstrained optimization problem:

\displaystyle  \begin{array}{rcl}  \max & & \frac{\sum_{i, j} M_{i, j} \langle \boldsymbol{x_i}, \boldsymbol{x_j} \rangle}{\sum_i \|\boldsymbol{x_i}\|^2} \end{array}

Writing {x_i^{(k)}} for the {k}-th coordinate of {\boldsymbol{x_i}}, the cost {c} of our solution is

\displaystyle  c = \frac{\sum_{i, j} M_{i, j} \sum_k x_i^{(k)} x_j^{(k)}}{\sum_i \sum_k (x_i^{(k)})^2}

\displaystyle  \leq \max_k \frac{\sum_{i, j} M_{i, j} x_i^{(k)} x_j^{(k)}}{\sum_i (x_i^{(k)})^2}

Taking only the {k}-th coordinates of the vectors gives a one-dimensional solution, and we pick the {k} that maximizes the ratio.

We use the following inequality:

\displaystyle \frac{a_1 + ... + a_m}{b_1 + ... + b_m} \leq \max_{k = 1, ..., m} \frac{a_k}{b_k}, b_k > 0

Proof:

\displaystyle \sum_i a_i = \sum_i b_i \cdot \frac{a_i}{b_i} \leq \sum_i b_i \cdot \max_k \frac{a_k}{b_k}

\displaystyle  = \max_k \text{ } \frac{a_k}{b_k} \cdot \sum_i b_i

4. SDP Max-Cut: Spectral Norm as a SDP Certificate

Consider the SDP relaxation of Max-Cut on Graph {G}:

\displaystyle  \begin{array}{rcl}  \max & & \sum_{(i, j) \in E} \frac{1}{4} \|\boldsymbol{X_i} - \boldsymbol{X_j}\|^2 \\ {\rm s.t.} \\ && \forall v \in V, \|\boldsymbol{X_v}\|^2 = 1 \\ \end{array}

Let the optimum value for this SDP be {SDPMaxCut(G)}. It’s obvious that {MaxCut(G) \leq SDPMaxCut(G)}. Under our constraints, we can rewrite our SDP as

\displaystyle  \sum_{(i, j) \in E} \frac{1}{2} - \frac{1}{2} \langle \boldsymbol{X_i}, \boldsymbol{X_j} \rangle

So our new optimization problem is

\displaystyle  \begin{array}{rcl}  \max & & \frac{\vert E \vert}{2} - \sum_{(i, j) \in E} \frac{1}{2} \langle \boldsymbol{X_i}, \boldsymbol{X_j} \rangle \\ {\rm s.t.} \\ && \forall i \in V, \| \boldsymbol{X_i} \|^2 = 1 \\ \end{array}

We can relax our constraint to the following: {\sum_{i \in V} \| \boldsymbol{X_i} \|^2 = n}. Relaxing the constraint yields an optimization problem whose value is at least that of the stricter one (call its optimum {SDP'MaxCut(G)}):

\displaystyle  \begin{array}{rcl}  \max & & \frac{\vert E \vert}{2} - \sum_{(i, j) \in E} \frac{1}{2} \langle \boldsymbol{X_i}, \boldsymbol{X_j} \rangle \\ {\rm s.t.} \\ && \sum_v \|\boldsymbol{X_v}\|^2 = n \\ \end{array}

Clearly, we have the following inequalities: { MaxCut(G) \leq SDPMaxCut(G) \leq SDP'MaxCut(G) }. We can rewrite {SDP'MaxCut(G)} as

\displaystyle  \begin{array}{rcl}  \max & & \frac{\vert E \vert}{2} + \frac{n}{4} \cdot \sum_{i, j} \frac{-A_{i, j} \langle \boldsymbol{X_i}, \boldsymbol{X_j} \rangle}{\sum_i \|\boldsymbol{X_i}\|^2} \\ {\rm s.t.} \\ && \sum_v \|\boldsymbol{X_v}\|^2 = n \\ \end{array}

Note that our objective function computes the largest eigenvalue of {-A}:

\displaystyle  = \frac{\vert E \vert}{2} + \frac{n}{4} \cdot \lambda_{\max}(-A)

For every graph {G_{n, p}} with {0 \leq p \leq 1},

\displaystyle  MaxCut(G) \leq SDPMaxCut(G) \leq \frac{\vert E \vert}{2} + \frac{n}{4} \cdot \lambda_{\max}(-A)

\displaystyle  \leq \frac{\vert E \vert}{2} + \frac{n}{4} \cdot \lambda_{\max}(pJ - A)

\displaystyle  \leq \frac{\vert E \vert}{2} + \frac{n}{4} \cdot \|pJ - A\|

Recall from previous lectures that for {p > \frac{\log(n)}{n}}, the adjacency matrix {A} of a graph sampled from {G_{n, p}} satisfies {\|pJ - A\| \leq O(\sqrt{np})} with high probability. This implies that {SDPMaxCut(G) \leq \frac{\vert E \vert}{2} + O(n \cdot \sqrt{d})}, where {d = np}. Semantically, this means that {SDPMaxCut(G)} is a valid upper bound on {MaxCut(G)} computable in polynomial time.
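
Putting the whole certificate together in numpy (parameter choices are ours):

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 1000, 0.02                            # p > log(n)/n here
    upper = np.triu(rng.random((n, n)) < p, 1)
    A = (upper | upper.T).astype(float)
    num_edges = A.sum() / 2

    bound = num_edges / 2 + (n / 4) * np.linalg.norm(p * np.ones((n, n)) - A, 2)
    print(num_edges / 2, bound)   # certified upper bound: |E|/2 + O(n sqrt(d))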

5. Trace and Eigenvalues

Suppose matrix {M} is symmetric with eigenvalues {\lambda_1 \hdots \lambda_n}. The following are true:

  • {M^k} eigenvalues are {\lambda_1^k \hdots \lambda_n^k}
  • {trace(M) := \sum_{i} M_{i, i} } ; {trace(M) = \sum_i \lambda_i}

Then, for {M^{2k}, trace(M^{2k}) = \lambda_1^{2k} + \hdots + \lambda_n^{2k}}.

\displaystyle  (\max_i \vert \lambda_i \vert)^{2k} \leq trace(M^{2k}) \leq n \cdot (\max_i \vert \lambda_i \vert)^{2k}

Also,

\displaystyle  \|M\| \leq (trace(M^{2k}))^{\frac{1}{2k}} \leq n^{\frac{1}{2k}} \cdot \|M\|

For a 0/1 matrix {M}, {M^k_{i, j}} counts the number of paths from {i} to {j} that take {k} steps (not necessarily simple paths in a graph):

\displaystyle  M^k_{i, j} = \sum_{a_1, \ldots, a_{k-1}} M_{i, a_1} M_{a_1, a_2} \cdots M_{a_{k-1}, j}

Our goal with this is to bound the eigenvalues {\lambda}. Since the trace relates the sum of the diagonal entries to the sum of the eigenvalues of a symmetric {M}, we can use it to provide an upper bound on the spectral norm of {M}.


(Photo from facebook.com)

 

Michael Cohen, one of the most brilliant young minds of our field, recently passed away in Berkeley.

After going to MIT for college, Michael worked for Facebook and was a graduate student at MIT. This semester, he was at Berkeley as a Simons Fellow, in connection with the program on optimization at the Simons Institute.

In a few short years, Michael left his mark on a number of problems that are close to the heart of in theory‘s readers.

He was part of the team that developed the fastest algorithm for solving systems of linear equations in which the matrix of constraints is a graph Laplacian (or, more generally, is symmetric and diagonally dominant), running in time O(m \sqrt {\log n}) where m is the number of non-zero entries of the matrix and n is the number of variables.

He also worked on matrix approximation via subsampling, on algorithms that approximate random walk properties, on algorithms for flow and shortest paths, and on geometric algorithms.

My favorite result is his single-author paper giving a polynomial time construction of bipartite Ramanujan graphs of all degrees and all sizes, making the approach of Marcus, Spielman and Srivastava constructive.

Michael was a unique person, who gave a lot to our community and touched many lives. His loss is an unspeakable tragedy that I still find very hard to process.

Beyond Worst-Case Analysis: Lecture 6

Scribed by Theo McKenzie

In which we study the spectrum of random graphs.

1. Overview

When attempting to find in polynomial time an upper bound certificate on the max cut and maximum independent set of a graph, we have used the following property. If {G\sim G_{n,\frac{1}{2}}}, then with high probability {\|A-\mathop{\mathbb E} (A)\|\leq O(\sqrt{n})}, where {\|\cdot\|} is the spectral norm. Generally, if {G\sim G_{n,p}} and {p>\frac{\log n}{n}} then w.h.p.

\displaystyle \|A-\mathop{\mathbb E} (A)\|\leq O(\sqrt{np}).

Today we will show how to obtain the bound in Proposition 1 with an extra {\sqrt{\log n}} factor, as well as outline the method for obtaining the bound in Proposition 1 itself. We will also show how this bound breaks down when {p} is small, namely how when {p=\Theta(\frac 1 n)},

\displaystyle \|A-\mathop{\mathbb E} (A)\|\geq\Omega \left(\sqrt{\frac{\log n}{\log\log n}} \right).

2. Introducing the Trace

Henceforth {M_{ij}^k} signifies {(M^k)_{ij}}. Take {M} symmetric and real. All eigenvalues of this matrix are real, and we can enumerate them {\lambda_1,\lambda_2,\ldots,\lambda_n} such that {|\lambda_1|\geq|\lambda_2|\geq\ldots\geq |\lambda_n|}.

The trace {\textnormal{Tr}(A)} is defined to be {\textnormal{Tr}(A)=\sum_{i=1}^n A_{ii}} where {A} is an {n\times n} matrix.

Moreover we know that {\textnormal{Tr}(A)=\sum_{i=1}^n \lambda_i}. If we take {k} large and even, the eigenvalues of {M^k} are {|\lambda_1|^k\geq|\lambda_2|^k\geq\ldots\geq |\lambda_n|^k}. Therefore we have

\displaystyle  \sum_i M^k_{ii}=\text{Tr}(M^k)=\sum_i |\lambda_i|^k\geq |\lambda_1|^k=\|M\|^k.

Moreover we have

\displaystyle \textnormal{Tr}(M^k)^{\frac{1}{k}}=\left(\sum_i |\lambda_i|^k\right)^\frac 1 k\leq n^{\frac{1}{k}}|\lambda_1|= n^{\frac{1}{k}}\|M\|.

This gives us an estimation of the norm, {\|M\|\leq (\sum_i M^k_{ii})^\frac{1}{k}\leq n^{\frac1 k}\|M\|}, which for {k>\log n} gives a constant factor approximation of {\|M\|}.
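
A small numpy check of this estimate (parameters ours; {M} is a random symmetric {\pm 1} matrix with zero diagonal, as in the next section):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    upper = np.triu(np.where(rng.random((n, n)) < 0.5, 1.0, -1.0), 1)
    M = upper + upper.T

    k = 2 * int(np.log(n))                       # an even k > log n
    est = np.trace(np.linalg.matrix_power(M, k)) ** (1 / k)
    print(est, np.linalg.norm(M, 2))             # agree up to an n^(1/k) factor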

3. Using the Trace to Bound the Spectral Norm

Assume that {G\sim G_{n,\frac{1}{2}}} and {A} is the adjacency matrix of {G}. We will prove the following. {\displaystyle{\mathop{\mathbb E}_{G\sim G_{n,\frac{1}{2}}}}(\textnormal{Tr}((A-\mathop{\mathbb E} (A))^k))} is bounded above by {2^{O(k)}n^{1+k/2}k^{k/2}}. If {k>\log n}, by taking the {k}th root we achieve a bound of {O(\sqrt{n\log n})} on {\|A-\mathop{\mathbb E} (A)\|}.

3.1. Expected Value of Matrix Entries

First, we examine the matrix {M=2A-2\mathop{\mathbb E} (A)}. We have {M_{ii}=0} and {M_{ij}\in\{\pm1\}} with equal probability of each when {i\neq j}. Moreover {M_{ij}=M_{ji}}. If {i\neq j,\mathop{\mathbb E}( M_{ij}^k)=0} if {k} is odd and {\mathop{\mathbb E} (M_{ij}^k)=1} for {k} even.

{\mathop{\mathbb E} (\sum_i M^k_{ii})=n\mathop{\mathbb E} (M^k_{11})} by the linearity of expectation and symmetry between the entries. We evaluate {M^k_{11}}.

\displaystyle M^k_{11}=\sum_{\{i_1,\ldots,i_{k-1}\}} M_{1i_1}M_{i_1i_2}\cdots M_{i_{k-1}1}

where {{i_1,\ldots i_{k-1}}} represents the intermediate steps on a “path” between vertices that starts at 1 and returns to 1. For example, {M_{11}^2=\sum M_{1i}M_{i1}}. Note that we can repeat edges in these paths. By the linearity of expectation

\displaystyle  \mathop{\mathbb E} (M_{11}^k)=\sum_{\{i_1,\ldots,i_{k-1}\}}\mathop{\mathbb E} (M_{1i_1}M_{i_1i_2}\cdots M_{i_{k-1}1}).

If any pair {\{i,j\}} occurs an odd number of times {\ell} in the sequence of pairs {\{ 1,i_1\}, \{ i_1, i_2 \}, \ldots, \{ i_{k-1}, 1 \}}, then, since the value of this term is independent of all other terms and {\mathop{\mathbb E} M^\ell_{ij}=0} for odd {\ell}, we get {\mathop{\mathbb E} (M_{1i_1}M_{i_1i_2}\cdots M_{i_{k-1}1})=0}. If all pairs occur an even number of times, the expectation of their product is 1. Therefore {\mathop{\mathbb E} (M_{11}^k)} is the number of sequences {i_1,\ldots,i_{k-1}\in V^{k-1}} such that, in the sequence of pairs {\{ 1,i_1\}, \{ i_1, i_2 \}, \ldots, \{ i_{k-1}, 1 \}}, each pair occurs an even number of times.

3.2. Encoding argument

In order to give an upper bound on the number of such sequences, we will show how to encode a sequence with {m} distinct edges. In the sequence {i_1,\ldots i_{k-1}}, the element {i_j} is represented either as {(0,i_j)}, which takes {1 + \log n} bits, if {i_j} appears for the first time in the sequence at location {j}, or as {(1,\ell)} otherwise, where {\ell < j} is such that {i_\ell = i_j}, which requires {1 + \log k} bits. Notice that, if {i_j} occurs for the first time at location {j}, then the pair {\{ i_{j-1}, i_j \}} also occurs for the first time, at locations {j-1} and {j}. Thus the number of times that we encounter a vertex for the first time is at most the number of distinct edges. If we have {t} distinct vertices (other than vertex 1), then we are using {k + t \log n + (k-t) \log k} bits; for {k<n}, this value increases with {t}, and we have {t \leq m \leq k/2} (because every edge has to appear an even number of times, so there can be at most {k/2} distinct edges). This means that we use at most {k + \frac k2 \log n + \frac k2 \log k} bits in the encoding. The number of strings that can be encoded using at most {L} bits is {2^{L+1} }. If we assume {k<n}, we have the bound {\mathop{\mathbb E}(M_{11}^k)\leq k^\frac{k}{2}n^{\frac{k}{2}}2^{k+1}}, meaning

\displaystyle \mathop{\mathbb E}[\textnormal{Tr}(M^k)]=n\mathop{\mathbb E}( M_{11}^k)\leq n^{1+\frac{k}{2}}2^{k+1} k^\frac{k}{2}.

Therefore using suitable {k} and {t} we achieve our bound on {\|M\|}. For example, choose {k=\log n} and {t=10\sqrt{n}\sqrt{\log n}}. We use Markov’s inequality to obtain

\displaystyle  \mathbf{P}(\|M\|>t)=\mathbf{P}(\|M\|^k>t^k)\leq \frac{\mathop{\mathbb E}\|M\|^k}{t^k}\leq\left(\frac{2n^{\frac{1}{k}}\sqrt n \sqrt k }{t}\right)^k\leq e^{-\Omega(\log n)}\rightarrow 0.

4. Tightening the Bound

To obtain the sharper bound of {O(\sqrt n)}, we need to count the number of pairs more sharply and remove the {k^{\frac{k}{2}}} term, namely improve the way we talk about repetitions. Here we give an outline for how to find a tighter bound.

The worst case in the above analysis is when the number of distinct vertices (not counting vertex {1}) is maximal, which is {k/2}. In that case, the number of distinct “edges” {\{ i_j, i_{j+1} \}} is {k/2}, and they must form a connected graph over {1 + k/2} vertices, that is, they have to form a tree. Furthermore, each edge is repeated exactly twice in the closed walk, otherwise we would not have enough distinct edges to connect {1+k/2} distinct vertices.

If the pairs form a tree, then the only way we can have a closed walk in which every edge is repeated twice is that the closed walk is a depth-first visit of the tree. In this case, we can improve our encoding in the following way. In a depth-first visit of a tree, only two events are possible at each step: either we discover a new vertex, or we backtrack on the edge between the current node and its parent. Thus we only need to pay {1 + \log n} bits to encode a new node in the sequence and {1} bit to encode an already seen node, and we obtain a bound of {2^{\frac{k}{2}+\frac k 2 \log n + \frac k 2}= 2^kn^\frac k 2}. By taking the {k}th root we obtain a bound on {\|M\|} of {O(\sqrt n)}.

5. Generalizing to any {p}

Now assume {G\sim G_{n,p}} and {A} is the adjacency matrix of {G}. We also assume {p<\frac{1}{2}}. We define

\displaystyle M=A-\mathop{\mathbb E}(A).

In this matrix {M_{ii}=0} and if {i\neq j, M_{i,j}=1-p} with probability {p} and {-p} with probability {1-p}. Therefore {\mathop{\mathbb E} (M_{ij})=0, \mathop{\mathbb E} (M_{ij}^2)=p-p^2\leq p}. In fact, {\mathop{\mathbb E} (M_{ij}^k)\leq p} for all {k\geq 1}.

From this we see we need to sum over sequences such that the multiset has each pair occurring at least two times, since if any pair occurs exactly once, the expectation is {0}.

Therefore the bound is

\displaystyle  \mathop{\mathbb E} (M_{11}^k)\leq \sum_{i_1,\ldots i_{k-1}} p^\ell

where {\ell} is the number of distinct pairs and the sum is taken over multisets where each pair occurs at least twice. For large {\ell}, the number of sequences where each pair occurs at least twice with {\ell} distinct pairs is approximately {2^{O(\ell)}n^\ell}. This would give us

\displaystyle  \sum_{i_1,\ldots i_{k-1}}p^\ell=\sum_\ell p^\ell 2^{O(\ell)}n^\ell\leq O(p^\frac{k}{2}2^{O(k)}n^{\frac{k}{2}})

so the bound on {\|M\|} is {O(\sqrt{np})}. However, the bound on the number of sequences with {\ell} distinct pairs breaks down when {\ell} is much smaller than {k}. In a full proof, much more complicated calculations must be done.

6. Problems with sparse graphs

If {p=\Theta(\frac{1}{n})} , then {\|A-\mathop{\mathbb E}(A)\|\geq \Omega\left(\sqrt\frac{\log n}{\log\log n}\right)} w.h.p.

This breaks the nice bound we obtained in Section 5. It follows from the irregularity of sparse graphs: there will be isolated vertices and vertices with degree much higher than average.

Lemma 1 If {p=\Theta(\frac{1}{n})} then w.h.p. the highest degree vertex of {G} is of order {\Theta\left(\frac{\log n}{\log \log n}\right)}.

If {G} has a node of degree {\geq d}, then, for every {p< \frac 1 {4\sqrt d}}, {\lambda_{\max} (A-pJ) \geq\Omega(\sqrt d)}. This implies that {\forall\, 0< p < \frac 1 {4\sqrt d}, \ \|A-pJ\|\geq \Omega(\sqrt d)}.

Proof: We have

\displaystyle  \lambda_{\max}(A-pJ)=\max_{{\bf x}}\frac{{\bf x}^T(A-pJ){\bf x}}{\|{\bf x}\|^2}

where the maximum is taken over all nonzero vectors {{\bf x}}. Call {v} a node of degree {\geq d} and call {d} of its neighbors {u_1,\ldots, u_d}.

Consider the vector {{\bf x}} such that {x_v=1, x_{u_i}=\frac{1}{\sqrt d}} and {x_w=0} for other vertices {w}. We have

\displaystyle {\bf x}^TA{\bf x}\geq 2 \sqrt d

\displaystyle  {\bf x}^TpJ{\bf x}=p\cdot \left(\sum x_i\right)^2=p\cdot (1+\sqrt d)^2\leq 4pd

\displaystyle  || {\bf x}||^2=2

Therefore if {p\leq \frac 1{4\sqrt d}},

\displaystyle \frac{{\bf x}^T(A-pJ){\bf x}}{\|{\bf x}\|^2}\geq \sqrt d - \frac 12 \sqrt d=\Omega(\sqrt d)

yielding the desired bound.

\Box

Theorem 4 follows immediately from Proposition 5 and Lemma 1.
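
As a quick numeric check of the test-vector calculation in the proof above (sizes are ours): a star with {d} leaves inside an otherwise empty graph on {n} nodes.

    import numpy as np

    n, d = 200, 100
    A = np.zeros((n, n))
    A[0, 1:d + 1] = A[1:d + 1, 0] = 1.0          # vertex 0 has degree d
    p = 1 / (8 * np.sqrt(d))                     # p < 1/(4 sqrt(d))

    x = np.zeros(n)
    x[0], x[1:d + 1] = 1.0, 1 / np.sqrt(d)       # the test vector from the proof
    J = np.ones((n, n))
    print(x @ (A - p * J) @ x / (x @ x), np.sqrt(d) / 2)   # ratio >= sqrt(d)/2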