Hasselmann, Manabe and Parisi win 2021 Physics Nobel Prize


Today the Italian academic community, along with lots of other people, was delighted to hear that Giorgio Parisi is one of the three recipients of the 2021 Nobel Prize for Physics.

Parisi has been a giant in the area of understanding “complex” and “disordered” systems. Perhaps his most influential contribution has been his “replica method” for the analysis of the Sherrington-Kirkpatrick model. His ideas have led to several breakthroughs in statistical physics by Parisi and his collaborators, and they have also found applications in computer science: to tight analyses of a number of questions about combinatorial optimization on random graphs, to results on random constraint satisfaction problems (including the famous connection with random k-SAT analyzed by Mezard, Parisi and Zecchina) and random error-correcting codes, and to understanding the solution landscape of optimization problems arising from machine learning. Furthermore, these ideas have also led to the development and analysis of algorithms.

The news was particularly well received at Bocconi, where most of the faculty of the future CS department has done work that involved the replica method. (Not to be left out, even I have recently used replica methods.)

Mezard and Montanari have written a book-length treatment of the interplay between ideas from statistical physics, algorithms, optimization, information theory and coding theory that arises from this tradition. Readers of in theory looking for a shorter exposition aimed at theoretical computer scientists will enjoy these notes posted by Boaz Barak, or this even shorter post by Boaz.

In this post, I will try to give the reader a sense of what the replica method for the Sherrington-Kirkpatrick model looks like when applied to the average-case analysis of optimization problems, stripped of all the physics. Of course, without the physics, nothing makes any sense, and the interested reader should look at Boaz’s posts (and at the references that he provides) for an introduction to the context. I did not have time to check too carefully what I wrote, so be aware that several details could be wrong.

What is the typical value of the max cut in a {G_{n,\frac 12}} random graph with {n} vertices?

Working out an upper bound using a union bound and Chernoff bounds, and a lower bound by thinking about a greedy algorithm, we can quickly convince ourselves that the answer is {\frac {n^2}{8} + \Theta(n^{1.5})}. Great, but what is the constant in front of the {n^{1.5}}? This question is answered by the Parisi formula, though this fact was not rigorously established by Parisi himself. (Guerra proved that the formula gives an upper bound, and Talagrand proved that it gives a tight bound.)
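To get a feel for the numbers, here is a minimal Python sketch that brute-forces the max cut of a few small samples of {G_{n,\frac 12}} and compares it with the leading term {n^2/8}; at such small sizes one can only eyeball the order of magnitude of the {n^{1.5}} correction.

```python
import itertools
import numpy as np

def max_cut(A):
    """Exact max cut of the graph with 0/1 adjacency matrix A (small n only)."""
    n = A.shape[0]
    m = A.sum() / 2
    best = 0.0
    # Writing the cut as +1/-1 labels x, cut(x) = m/2 - x^T A x / 4;
    # fixing x[0] = +1 halves the search space.
    for signs in itertools.product([1.0, -1.0], repeat=n - 1):
        x = np.array((1.0,) + signs)
        best = max(best, m / 2 - x @ A @ x / 4)
    return best

rng = np.random.default_rng(0)
n = 16
for _ in range(3):
    A = np.triu(rng.integers(0, 2, size=(n, n)), 1)  # a sample of G(n, 1/2)
    A = A + A.T
    print(f"max cut = {max_cut(A):.0f},  n^2/8 = {n * n / 8:.1f}")
```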

Some manipulations can reduce the question above to the following: suppose that I pick a random {n\times n} symmetric matrix {M}, say with zero diagonal, such that (up to the symmetry requirement) the entries are mutually independent and each entry is equally likely to be {+1} or {-1}, or perhaps each entry is distributed according to a standard normal distribution (the two versions can be proved to be equivalent). What is the typical value of

\displaystyle  \max _{x \in \{+1,-1\}^n } \ \ \frac 1{n^{1.5}} x^T M x

up to {o_n(1)} additive terms?
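The same kind of brute-force sketch as above works for this quantity, with a Rademacher {M}; for {n} this small the value is still quite far from its limit, which by the Parisi formula is a constant of roughly {1.53} (twice the constant {P^* \approx 0.763} usually quoted for the ground state energy of the Sherrington-Kirkpatrick model).

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 16
M = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)
M = M + M.T                        # symmetric Rademacher, zero diagonal

best = -np.inf
for signs in itertools.product([1.0, -1.0], repeat=n - 1):
    x = np.array((1.0,) + signs)   # x and -x give the same value
    best = max(best, x @ M @ x)

print(best / n ** 1.5)
```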

As a first step, we could replace the maximum with a “soft-max,” and note that, for every choice of {\beta>0}, we have

\displaystyle  \max _{x \in \{+1,-1\}^n } \ \ x^T M x \leq \frac 1 \beta \log \sum_{x \in \{+1,-1\}^n } e^{\beta x^T Mx}

The above upper bound gets tighter and tighter for larger {\beta}, so if we were able to estimate

\displaystyle  \mathop{\mathbb E} \log \sum_{x \in \{+1,-1\}^n } e^{\beta x^T Mx}

for every {\beta} (where the expectation is over the randomness of {M}) then we would be in good shape.
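For a toy instance, one can check numerically that the soft-max bound holds and that it tightens as {\beta} grows; a sketch:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 12
M = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)
M = M + M.T

# all values x^T M x over the hypercube
vals = np.array([np.array(s) @ M @ np.array(s)
                 for s in itertools.product([1.0, -1.0], repeat=n)])

print("max:", vals.max())
for beta in [0.1, 0.5, 2.0, 10.0]:
    # numerically stable (1/beta) log sum_x exp(beta x^T M x)
    m = vals.max()
    soft = m + np.log(np.exp(beta * (vals - m)).sum()) / beta
    print(f"beta = {beta:5.1f}:  soft-max bound = {soft:.2f}")
```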

We could definitely use Jensen's inequality (the logarithm is concave) and write

\displaystyle  \mathop{\mathbb E} \max _{x \in \{+1,-1\}^n } \ \ x^T M x \leq \frac 1 \beta \mathop{\mathbb E} \log \sum_{x \in \{+1,-1\}^n } e^{\beta x^T Mx} \leq \frac 1 \beta \log \mathop{\mathbb E} \sum_{x \in \{+1,-1\}^n } e^{\beta x^T Mx}

and then use linearity of expectation and independence of the entries of {M} to get to

\displaystyle  \leq \frac 1 \beta \log \sum_{x \in \{+1,-1\}^n } \prod_{1\leq i < j\leq n} \mathop{\mathbb E} e^{2\beta M_{i,j} x_i x_j }

Now things simplify quite a bit, because for all {i<j} the expression {M_{i,j} x_i x_j}, in the Rademacher setting, is equally likely to be {+1} or {-1}, so that, for {\beta = o(1)}, we have

\displaystyle  \mathop{\mathbb E} e^{2\beta M_{i,j} x_i x_j } = \cosh (2\beta) \leq 1 + O(\beta^2) \leq e^{O(\beta^2)}

and

\displaystyle  \sum_{x \in \{+1,-1\}^n } \prod_{1\leq i < j\leq n} \mathop{\mathbb E} e^{2\beta M_{i,j} x_i x_j } \leq 2^n \cdot e^{O(\beta^2 n^2)}

so that

\displaystyle  \frac 1 \beta \log \sum_{x \in \{+1,-1\}^n } \prod_{1\leq i < j\leq n} \mathop{\mathbb E} e^{2\beta M_{i,j} x_i x_j } \leq \frac {O(n)}{\beta} + O(\beta n^2)

which, choosing {\beta = 1/\sqrt n}, gives an {O(n^{1.5})} upper bound, which is in the right ballpark. Note that these are exactly the same calculations that come out of a Chernoff bound and a union bound. Even if we optimize the choice of {\beta}, we unfortunately do not get the right constant in front of {n^{1.5}}.
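For Rademacher entries this bound can be written in closed form, since {\mathop{\mathbb E} \sum_x e^{\beta x^T M x} = 2^n (\cosh 2\beta)^{n(n-1)/2}}, so we can see numerically what optimizing over {\beta} buys us: the optimized bound is about {2\sqrt{\ln 2} \cdot n^{1.5} \approx 1.67\, n^{1.5}}, while the true constant given by the Parisi formula should be roughly {1.53}. A sketch:

```python
import numpy as np

def annealed_bound(n, beta):
    # (1/beta) * log E F, where E F = 2^n * cosh(2 beta)^(n(n-1)/2)
    return (n * np.log(2) + (n * (n - 1) / 2) * np.log(np.cosh(2 * beta))) / beta

n = 1000
betas = np.linspace(1e-3, 5 / np.sqrt(n), 2000)
best = annealed_bound(n, betas).min()

print(best / n ** 1.5)            # roughly 1.67
print(2 * np.sqrt(np.log(2)))     # 1.665..., the large-n limit of the optimum
```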

So, if we call

\displaystyle  F := \sum_{x \in \{+1,-1\}^n } e^{\beta x^T Mx}

we see that we lose too much if we do the step

\displaystyle  \mathop{\mathbb E} \log F \leq \log \mathop{\mathbb E} F

But what else can we do to get rid of the logarithm and to reduce to an expression in which we take expectations of products of independent quantities (if we are not able to exploit the assumption that {M} has mutually independent entries, we will not be able to make progress)?

The idea is that if {k>0} is a small enough quantity (something much smaller than {1/\log F}), then {F^k} is close to 1 and we have the approximation

\displaystyle  \log F^k \approx F^k-1

and we obviously have

\displaystyle  \log F^k = k \log F

so we can use the approximation

\displaystyle  \log F \approx \frac 1k (F^k - 1)

and

\displaystyle  \mathop{\mathbb E} \log F \approx \frac 1k (\mathop{\mathbb E} F^k - 1)
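The approximation really is just the calculus fact that {\lim_{k \rightarrow 0} (F^k - 1)/k = \log F} for any fixed {F>0}, which one can see on a plain number (note how the convergence only kicks in once {k} is much smaller than {1/\log F}):

```python
import numpy as np

F = 1.7e6            # a stand-in for F = sum_x exp(beta x^T M x)
for k in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5]:
    print(f"k = {k:.0e}:  (F^k - 1)/k = {(F ** k - 1) / k:.4f},  log F = {np.log(F):.4f}")
```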

Let’s forget for a moment that we want {k} to be a very small parameter. If {k} were an integer, we would have

\displaystyle  \mathop{\mathbb E} F^k = \mathop{\mathbb E} \left( \sum_{x \in \{+1,-1\}^n } e^{\beta x^T Mx} \right)^k = \sum_{x^{(1)},\ldots, x^{(k)} \in \{+1,-1\}^n} \mathop{\mathbb E} e^{\beta \cdot ( x^{(1) T} M x^{(1)} + \cdots + x^{(k)T} M x^{(k)}) }

\displaystyle  = \sum_{x^{(1)},\ldots x^{(k)} \in \{+1,-1\}^n} \ \ \prod_{i< j} \ \mathop{\mathbb E} e^{2\beta M_{i,j} \cdot ( x^{(1)}_i x^{(1)}_j + \cdots + x^{(k)}_i x^{(k)}_j )}

Note that the above expression involves choices of {k}-tuples of feasible solutions of our maximization problem. These are the “replicas” in “replica method.”
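For Rademacher {M_{i,j}} the inner expectations can again be computed exactly, {\mathop{\mathbb E} e^{2\beta M_{i,j} t} = \cosh(2\beta t)}, and for tiny {n} and integer {k} one can check the resulting “replica sum” against a direct Monte Carlo estimate of {\mathop{\mathbb E} F^k}; a sketch (the Monte Carlo side is of course noisy):

```python
import itertools
import numpy as np

n, k, beta = 5, 2, 0.2
cube = np.array(list(itertools.product([1.0, -1.0], repeat=n)))   # 2^n x n

# Exact replica sum: sum over k-tuples of prod_{i<j} cosh(2 beta sum_a x_i^a x_j^a)
iu = np.triu_indices(n, 1)
exact = 0.0
for replicas in itertools.product(cube, repeat=k):
    X = np.stack(replicas)              # k x n matrix of spins
    T = X.T @ X                         # T[i,j] = sum_a x_i^a x_j^a
    exact += np.prod(np.cosh(2 * beta * T[iu]))

# Monte Carlo estimate of E F^k
rng = np.random.default_rng(2)
samples = []
for _ in range(50000):
    M = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)
    M = M + M.T
    vals = np.einsum('xi,ij,xj->x', cube, M, cube)   # all x^T M x at once
    samples.append(np.exp(beta * vals).sum() ** k)

print(exact, np.mean(samples))
```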

The above expression does not look too bad, and note how we were fully able to use the independence assumption and “simplify” the expression. Unfortunately, it is actually still very bad. At this point it is preferable to assume the {M_{i,j}} to be Gaussian, write the expectation as an integral, and do a change of variables and some tricks so that we reduce to computing the maximum of a certain function, let’s call it {G(z)}, where the input {z} is a {k \times k} matrix, and then we have to guess an input of maximum value for this function. If we are lucky, the maximum is achieved by a {z} in which all entries are identical, the replica symmetric solution. In the Sherrington-Kirkpatrick model we don’t have such luck, and the next guess is that the optimal {z} is a block-diagonal matrix, a replica symmetry-breaking solution. For large {k}, and a large number of blocks, we can approximate the choice of such matrices by writing down a system of differential equations, the Parisi equations, and we are going to assume that such equations do indeed describe an optimal {z}, and so a solution to the integral, and so they give us a computation of {(\mathop{\mathbb E} F^k - 1)/k}.

After all this, we get an expression for {(\mathop{\mathbb E} F^k - 1)/k} for every sufficiently large integer {k}, as a function of {k} up to lower order errors. What next? Remember how we wanted {k} to be a tiny real number and not a sufficiently large integer? Well, we take the expression, we forget about the error terms, and we set {k=0\ldots}
