Postdoc Positions for 2023-24

I am looking for three postdoctoral fellows for the next academic year to work with me at Bocconi.

The positions offer an internationally competitive salary (up to 65,000 Euro per year, tax-free, plus relocation assistance and travel allowance), in a wonderful location. The strict application deadline is January 31, 2023. Each position is for one year, renewable for a second year.

Among the topics that I am interested in are spectral graph theory, average-case complexity, “applications” of semidefinite programming, random processes on networks, approximation algorithms, pseudorandomness and combinatorial constructions.

Please contact me if you are interested in these positions and you would like more information.

To apply, go to this link and look for the opening for 3 positions dated December 6, 2022 (unfortunately there isn’t a perma-link to the application form).

Bocconi’s Computing Sciences department has a sizable theory group that includes Laura Sanità, who works on optimization and approximation algorithms, Alon Rosen, who works on the foundations of cryptography, Marek Elias, who works on online optimization, and Andrea Celli, who works on algorithmic game theory. Next year, Adam Polak, who works on fine-grained complexity and analysis of algorithms, will also join us.

Speaking of Alon Rosen, he is also recruiting postdocs for the next academic year: he has two open positions, which you can find at the same link by looking for the opening for two positions dated December 14, with an application deadline of February 28.

Workshop in Milan Next Week

As previously announced, next week Alon Rosen and I are organizing a workshop at Bocconi, which will actually be the union of two workshops, one on Recent Advances in Cryptography and one on Spectral and Convex Optimization Techniques in Graph Algorithms. Here is the program. In short:

  • where: Bocconi University’s Roentgen Building (via Roentgen 1, Milano), Room AS01
  • when: June 15-18
  • what: talks on cryptography and graph algorithms, including two hours devoted to Max Flow in nearly-linear time
  • how: register for free

The First XL Computer Scientist

Some time ago, I received a message to the effect that I was being considered for membership in the “Academy of the XL”, to which my reaction was, hey, we have all gone out of shape during the pandemic, and body-shaming is never… then it was explained to me that, in this context, “XL” means “forty” and that the Academy of the Forty is Italy’s National Academy of Science.

Italy has a wonderfully named, and well-known within the country, National Academy of Arts and Science, the Accademia dei Lincei, which means something like academy of the “eagle-eyed” (literally, lynx-eyed), that is, people that can see far. The Accademia dei XL is much less well known, although it has a distinguished 240-year history, during which people like Guglielmo Marconi and Enrico Fermi were members. More recently, the much beloved Rita Levi-Montalcini, Holocaust survivor, Nobel Laureate, and Senator-for-life, was a member. Current members include Nobel Laureates Carlo Rubbia and Giorgio Parisi. Noted algebraist Corrado De Concini is the current president.

Be that as it may, the academicians did vote to make me a member, their first computer scientist ever. Next week, at the inauguration of their 240th academic year, I will speak to the other members about randomness and pseudorandomness in computation.

Online Optimization Post 7: Matrix Multiplicative Weights Update

This is the seventh in a series of posts on online optimization, where we alternate one post explaining a result from the theory of online convex optimization and one post explaining an “application” in computational complexity or combinatorics. The first two posts were about the technique of Multiplicative Weights Updates and its application to “derandomizing” probabilistic arguments based on combining a Chernoff bound and a union bound. The third and fourth posts were about the Follow-the-Regularized-Leader framework, which unifies multiplicative weights and gradient descent, and a “gradient descent view” of the Frieze-Kannan Weak Regularity Lemma. The fifth and sixth posts were about the constrained version of the Follow-the-Regularized-Leader framework, and the Impagliazzo Hard-Core Set Lemma. Today we shall see the technique of Matrix Multiplicative Weights Updates.

1. Matrix Multiplicative Weights Update

In this post we consider the following generalization, introduced and studied by Arora and Kale, of the “learning from expert advice” setting and the multiplicative weights update method. In the “experts” model, we have a repeated game in which, at each time step {t}, we have the option of following the advice of one of {n} experts; if we follow the advice of expert {i} at time {t}, we incur a loss of {\ell_t (i)}, which is unknown to us (although, at time {t} we know the loss functions {\ell_1(\cdot),\ldots,\ell_{t-1}(\cdot)}). We are allowed to choose a probabilistic strategy, whereby we follow the advice of expert {i} with probability {x_t(i)}, so that our expected loss at time {t} is {\sum_{i=1}^n x_t(i) \ell_t(i)}.
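As a quick reminder from the first posts in the series, here is a minimal numpy sketch of the multiplicative weights strategy in this experts setting; the learning rate and the random toy losses are illustrative choices, not parameters from the posts.

```python
import numpy as np

def multiplicative_weights(losses, eta=0.1):
    """Run multiplicative weights updates on a T x n array of losses.

    losses[t, i] is the loss of expert i at time t, revealed only after
    we commit to the distribution x_t. Returns the total expected loss.
    """
    T, n = losses.shape
    w = np.ones(n)                       # one weight per expert
    total_loss = 0.0
    for t in range(T):
        x = w / w.sum()                  # play the normalized weights as x_t
        total_loss += x @ losses[t]      # expected loss sum_i x_t(i) l_t(i)
        w *= np.exp(-eta * losses[t])    # penalize each expert by its loss
    return total_loss

# Toy check: the total loss should be close to that of the best expert.
rng = np.random.default_rng(0)
losses = rng.random((500, 4))
print(multiplicative_weights(losses), losses.sum(axis=0).min())
```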

In the matrix version, instead of choosing an expert {i} we are allowed to choose a unit {n}-dimensional vector {v_t}, and the loss incurred in choosing the vector {v_t} is {v_t^T L_t v_t}, where {L_t} is an unknown symmetric {n\times n} matrix. We are also allowed to choose a probabilistic strategy, so that with probability {x_t(j)} we choose the unit vector {v_t^{(j)}}, and we incur the expected loss

\displaystyle  \sum_j x_t (j) \cdot (v_t^{(j)})^T L_t v_t^{(j)}
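(A remark that may help before the break: such a probabilistic strategy is equivalent to playing the density matrix {X_t = \sum_j x_t(j) \cdot v_t^{(j)} (v_t^{(j)})^T}, which is positive semidefinite with trace one, and the expected loss above equals {{\rm Tr} (L_t X_t)}. The update rule studied by Arora and Kale plays the matrix exponential of the negated accumulated loss matrices, normalized to have trace one. Below is a minimal numpy sketch of that update; the learning rate and the toy data are illustrative choices.)

```python
import numpy as np

def matrix_mwu(loss_matrices, eta=0.1):
    """Sketch of the matrix multiplicative weights strategy.

    At each step, play the density matrix X_t proportional to
    exp(-eta * (L_1 + ... + L_{t-1})) and incur loss trace(L_t X_t),
    i.e., the expected loss of choosing eigenvector v_t^(j) of X_t
    with probability equal to its eigenvalue x_t(j).
    """
    n = loss_matrices[0].shape[0]
    cumulative = np.zeros((n, n))       # running sum L_1 + ... + L_{t-1}
    total_loss = 0.0
    for L in loss_matrices:
        # Matrix exponential of the symmetric matrix -eta * cumulative,
        # computed via its eigendecomposition.
        vals, vecs = np.linalg.eigh(-eta * cumulative)
        W = (vecs * np.exp(vals)) @ vecs.T
        X = W / np.trace(W)             # normalize to trace one
        total_loss += np.trace(L @ X)
        cumulative += L
    return total_loss

# Toy check: the total loss should be comparable to the loss of the best
# fixed unit vector, i.e., the smallest eigenvalue of L_1 + ... + L_T.
rng = np.random.default_rng(0)
Ls = [(M + M.T) / 2 for M in rng.standard_normal((50, 3, 3))]
print(matrix_mwu(Ls), np.linalg.eigvalsh(sum(Ls)).min())
```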


Postdoc Positions

The call is out for two postdoctoral positions at Bocconi to work in my group (see below for how to apply). If you are interested and you have any questions, feel free to email me (L.Trevisan at Unibocconi dot it).

The negotiable start date is September 1st, 2022. Each position is for one year, renewable for a second. The positions offer an internationally competitive salary (up to 65,000 Euro per year, tax-free, plus relocation assistance and travel allowance), in a wonderful location that, at long last, is back to more or less normal life. The application deadline is December 17, 2021.

Among the topics that I am interested in are spectral graph theory, average-case complexity, “applications” of semidefinite programming, random processes on networks, approximation algorithms, pseudorandomness and combinatorial constructions.

Bocconi Computer Science is building up a theory group: besides me, we have Alon Rosen, Marek Elias, and a tenured person who will join next Fall, and more hires are on the horizon. Now that traveling is OK again, and considering that Alon and I both have ERC grants, we should expect a big stream of theory visitors coming and going through Bocconi, from week-long visits to semester- or year-long sabbaticals.

To apply, go to https://www.unibocconi.eu/faculty-postdoc, look for the position advertised as “BIDSA Informatics” (currently the second from the top in the list), and click on “apply online”.

Online Optimization Post 6: The Impagliazzo Hard-Core Set Lemma

(This is the sixth in a series of posts on online optimization techniques and their “applications” to complexity theory, combinatorics and pseudorandomness. The plan for this series of posts is to alternate one post explaining a result from the theory of online convex optimization and one post explaining an “application.” The first two posts were about the technique of multiplicative weight updates and its application to “derandomizing” probabilistic arguments based on combining a Chernoff bound and a union bound. The third and fourth posts were about the Follow-the-Regularized-Leader framework, and how it unifies multiplicative weights and gradient descent, and a “gradient descent view” of the Frieze-Kannan Weak Regularity Lemma. The fifth post was about the constrained version of the Follow-the-Regularized-Leader framework, and today we shall see how to apply that to a proof of the Impagliazzo Hard-Core Lemma.)

1. The Impagliazzo Hard-Core Lemma

The Impagliazzo Hard-Core Lemma is a striking result in the theory of average-case complexity. Roughly speaking, it says that if {g: \{ 0,1 \}^n \rightarrow \{ 0,1 \}} is a function that is “weakly” hard on average for a class {\cal F} of “efficiently computable” functions {f}, that is, if, for some {\delta>0}, we have that

\displaystyle  \forall f \in {\cal F}: \ \ \Pr_{x\sim \{ 0,1\}^n} [f(x) = g(x) ] \leq 1 -\delta

then there is a subset {H\subseteq \{ 0,1 \}^n} of cardinality {\geq 2\delta 2^n} such that {g} is “strongly” hard-on-average on {H}, meaning that

\displaystyle  \forall f \in {\cal F}: \ \ \Pr_{x\sim H} [f(x) = g(x) ] \leq \frac 12 + \epsilon

for a small {\epsilon >0}. Thus, the reason why functions from {\cal F} make a mistake in predicting {g} at least a {\delta} fraction of the time is that there is a “hard-core” set {H} of inputs such that every function from {\cal F} makes a mistake about 1/2 of the time for the {2\delta} fraction of inputs coming from {H}.

The result is actually not literally true as stated above, and it is useful to understand a counterexample, in order to motivate the correct statement. Suppose that {\cal F} contains just {1/\delta} functions, that each function {f\in \cal F} differs from {g} in exactly a {\delta} fraction of inputs from {\{ 0,1 \}^n}, and that the sets of mistakes are disjoint. Thus, for every set {H\subseteq \{ 0,1 \}^n}, no matter its size, there is a function {f\in \cal F} that agrees with {g} on at least a {1-\delta} fraction of inputs from {H}. The reason is that the sets of inputs on which the functions of {\cal F} differ from {g} form a partition of {\{ 0,1 \}^n}, and so their intersections with {H} form a partition of {H}. By an averaging argument, one of those intersections must then contain at most {\delta |H|} elements of {H}.

In the above example, however, if we choose any three distinct functions {f_1,f_2,f_3} from {\cal F}, we have

\displaystyle  \forall x\in \{ 0,1 \}^n: \ \ \ g(x) = {\rm majority} (f_1(x), f_2(x),f_3(x))

So, although {g} is weakly hard on average with respect to {\cal F}, we have that {g} is not even worst-case hard for a slight extension of {\cal F} in which we allow functions obtained by simple compositions of a small number of functions of {\cal F}.
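This counterexample is easy to verify by direct computation. Below is a small self-contained check, with the illustrative choices {n=4}, {\delta=1/4} and an arbitrary {g}: the four functions err on the four blocks of a partition of the inputs, and the majority of any three of them recovers {g} everywhere.

```python
import itertools

n = 4
inputs = list(itertools.product([0, 1], repeat=n))   # all of {0,1}^n

def g(x):
    return x[0] ^ x[1]          # an arbitrary choice of target function

# Partition the inputs into 1/delta = 4 equal blocks; f_i agrees with g
# everywhere except on block i, so each f_i errs on exactly a delta = 1/4
# fraction of inputs, and the sets of mistakes are disjoint.
block = len(inputs) // 4
mistakes = [set(inputs[i * block:(i + 1) * block]) for i in range(4)]
fs = [lambda x, M=M: 1 - g(x) if x in M else g(x) for M in mistakes]

# At every input at most one of f_1, f_2, f_3 errs, so at least two of
# them agree with g, and the majority vote (sum >= 2) recovers g exactly.
f1, f2, f3 = fs[0], fs[1], fs[2]
assert all((f1(x) + f2(x) + f3(x) >= 2) == (g(x) == 1) for x in inputs)
print("majority(f1, f2, f3) == g on all", len(inputs), "inputs")
```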

Theorem 1 (Impagliazzo Hard-Core Lemma) Let {\cal F} be a collection of functions {f: \{ 0,1 \}^n \rightarrow \{ 0,1 \}}, let {g: \{ 0,1 \}^n \rightarrow \{ 0,1 \}} be a function, and let {\epsilon>0} and {\delta >0} be positive reals. Then at least one of the following conditions is true:

  • ({g} is not weakly hard-on-average over {\{ 0,1 \}^n} with respect to a slight extension of {\cal F}) There is a {k= O(\epsilon^{-2} \log \delta^{-1} )}, an integer {b}, and {k} functions {f_1,\ldots,f_k \in \cal F}, such that

    \displaystyle  h(x) := I \{ f_1(x) + \ldots + f_k(x)\geq b \}

    satisfies

    \displaystyle  \Pr_{x\in \{ 0,1 \}^n} [ g(x) = h(x) ] \geq 1-\delta

  • ({g} is strongly hard-on-average over a set {H} of density {2\delta}) There is a set {H\subseteq \{ 0,1 \}^n} such that {|H| \geq 2\delta \cdot 2^n} and

    \displaystyle  \forall f\in {\cal F}: \ \ \Pr_{x\in H} [ g(x) = f(x) ] \leq \frac 12 + \epsilon

Here {I \{ {\rm boolean\ expression} \}} is equal to {1} or {0} depending on whether the boolean expression is true or false (the letter “{I}” stands for the “indicator” function of the truth of the expression).

2. Proving the Lemma

Impagliazzo’s proof had {k} polynomial in both {1/\epsilon} and {1/\delta}, and an alternative proof discovered by Nisan has a stronger bound on {k}, of the order of {\epsilon^{-2} \log \epsilon^{-1} \delta^{-1}}. The proofs of Impagliazzo and Nisan did not immediately give a set of size {2\delta 2^n} (the set had size {\delta 2^n}), although this could be achieved by iterating their argument. An idea of Holenstein allows one to prove the above statement in a more direct way.

Today we will see how to obtain the Impagliazzo Hard-Core Lemma from online optimization, as done by Barak, Hardt and Kale. Their proof achieves all the parameters claimed above, once combined with Holenstein’s ideas.


ARV on Abelian Cayley Graphs

Continuing from the previous post, we are going to prove the following result: let {G} be a {d}-regular Cayley graph of an Abelian group, {\phi(G)} be the normalized edge expansion of {G}, {ARV(G)} be the value of the ARV semidefinite programming relaxation of sparsest cut on {G} (we will define it below), and {\lambda_2(G)} be the second smallest normalized Laplacian eigenvalue of {G}. Then we have

\displaystyle   \lambda_2 (G) \leq O(d) \cdot (ARV (G))^2 \ \ \ \ \ (1)

which, together with the fact that {ARV(G) \leq 2 \phi(G)} and {\phi(G) \leq \sqrt{2 \lambda_2}}, implies the Buser inequality

\displaystyle   \lambda_2 (G) \leq O(d) \cdot \phi^2 (G) \ \ \ \ \ (2)

and the approximation bound

\displaystyle   \phi(G) \leq O(\sqrt d) \cdot ARV(G) \ \ \ \ \ (3)

The proof of (1), due to Shayan Oveis Gharan and myself, is very similar to the proof by Bauer et al. of (2).
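As a toy numerical sanity check (illustrative, not part of the argument), one can compute {\lambda_2} and {\phi} for a small cycle, the simplest Abelian Cayley graph, and see that the quantities relate as (1)-(3) predict; the graph size and the brute-force computation of {\phi} below are arbitrary choices that keep the example small.

```python
import itertools
import numpy as np

n, d = 12, 2                       # Cayley graph of Z_12 with generators +-1
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1

# Second smallest eigenvalue of the normalized Laplacian I - A/d.
lam2 = np.linalg.eigvalsh(np.eye(n) - A / d)[1]

# Normalized edge expansion phi(G) = min over |S| <= n/2 of
# (edges leaving S) / (d |S|), by brute force over all small subsets.
phi = min(
    sum(A[i, j] for i in S for j in range(n) if j not in S) / (d * len(S))
    for k in range(1, n // 2 + 1)
    for S in itertools.combinations(range(n), k)
)

print(f"lambda_2 = {lam2:.4f}, phi = {phi:.4f}")
assert phi <= np.sqrt(2 * lam2)    # the Cheeger direction used for (3)
print("lambda_2 / (d * phi^2) =", lam2 / (d * phi**2))   # Buser constant
```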
