What’s New at the Simons Institute

This Thursday (December 15) is the deadline to apply for post-doctoral positions at the Simons Institute for the next academic year. In Fall 2017, instead of two programs in parallel, we will run a single program on optimization. The program will be double the size of our typical programs, and will focus on the interplay between discrete and continuous methods in optimization. In Spring 2018 there will be a program on real-time decision-making and one on theoretical neuroscience.

In a few weeks, our Spring 2017 programs will start. The program on pseudorandomness will have a mix of complexity theorists and number theorists, and it will run at the same time as a program on analytic number theory at MSRI. It will start with a series of lectures in the third week of January. The much-anticipated program on foundations of machine learning will also start with a series of lectures, in the fourth week of January, which will bring to Berkeley an impressive collection of machine learning practitioners whose research is both applied and rigorous. All lectures will be streamed live, and I encourage you to set up viewing parties.

During the current semester, the Simons Institute had, for the first time, a journalist in residence. The inaugural resident journalist has been Erica Klarreich, whom many of you probably know for her outstanding writing on mathematics and computer science for Quanta Magazine.

Next semester, our resident journalist will be Pulitzer Prize winner John Markoff, who just retired from the New York Times.

Small-bias Distributions and DNFs

Two of my favorite challenges in unconditional derandomization are to find logarithmic-seed pseudorandom generators which are good against:

  1. log-space randomized algorithms
  2. {AC^0}, that is, constant-depth circuits of polynomial size

Regarding the first challenge, the best known pseudorandom generator remains Nisan’s, from 1990, which requires a seed of {O(\log^2 n)} bits. Maddeningly, even if we look at width-3 oblivious branching programs, that is, non-uniform algorithms that use only {\log_2 3} bits of memory, nobody knows how to beat Nisan’s generator.

Regarding the second challenge, Nisan showed in 1988 that for every {d} there is a pseudorandom generator of seed length {O(\log^{2d+6} n)} against depth-{d} circuits of size {n}. The simplest case is that of depth-2 circuits, or, without loss of generality, of disjunctive-normal-form (DNF) formulas. When specialized to DNF formulas, Nisan’s generator has seed length {O(\log^{10} n)}, but better constructions are known in this case.

Luby, Velickovic and Wigderson improved the seed length to {O(\log^4 n)} in 1993. Bazzi’s celebrated proof of the depth-2 case of the Linial-Nisan conjecture implies that an {O(\log^2 m/\delta)}-wise independent distribution “{\delta}-fools” every {m}-term DNF, by which we mean that for every such DNF formula {\phi} and every such distribution {X} we have

\displaystyle  \left| \mathop{\mathbb P}_{x\sim X} [ \phi(x) = 1] - \mathop{\mathbb P}_{x\sim U} [\phi(x) = 1] \right| \leq \delta

where {U} is the uniform distribution over assignments. This leads to a pseudorandom generator that {\delta}-fools {n}-variable, {m}-term DNF formulas and whose seed length is {O(\log n \cdot \log^2 m/\delta)}, which is {O(\log^3 n)} when {m,n,\delta^{-1}} are polynomially related.
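
To make the definition concrete, here is a minimal Python sketch (my own illustration, not code from any of the papers mentioned here) that measures the advantage of an explicitly given distribution against a small DNF by brute force; the example distribution, uniform over even-parity strings, is pairwise independent.

    from itertools import product

    def eval_dnf(terms, x):
        # terms: list of terms; a term is a list of (index, value) pairs,
        # meaning "variable index must equal value"; x is a 0/1 tuple.
        return 1 if any(all(x[i] == v for (i, v) in term) for term in terms) else 0

    def advantage(terms, n, support):
        # support: list of (assignment, probability) pairs describing X.
        p_x = sum(p for (x, p) in support if eval_dnf(terms, x))
        p_u = sum(eval_dnf(terms, x) for x in product((0, 1), repeat=n)) / 2 ** n
        return abs(p_x - p_u)

    # Example: phi = (x0 AND x1) OR (NOT x2); X uniform over even-parity
    # strings, a pairwise independent distribution on 3 bits.
    terms = [[(0, 1), (1, 1)], [(2, 0)]]
    support = [(x, 0.25) for x in product((0, 1), repeat=3) if sum(x) % 2 == 0]
    print(advantage(terms, 3, support))   # prints 0.125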

In a new paper with Anindya De, Omid Etesami, and Madhur Tulsiani, we show that an {n}-variable, {m}-term DNF can be {\delta}-fooled by a generator of seed length {O(\log n + \log^2 m/\delta \cdot \log\log m/\delta)}, which is {O(\log^{2+o(1)} n)} when {n,m,\delta^{-1}} are polynomially related.

Our approach is similar to the one in Razborov’s proof of Bazzi’s result, but we use small-bias distributions instead of {k}-wise independent distributions.


The Large Deviation of Four-wise Independent Random Variables

Suppose {X_1,\ldots,X_N} are mutually independent unbiased {\pm 1} random variables. Then we know everything about the distribution of

\displaystyle   | X_1 + \ldots + X_N | \ \ \ \ \ (1)

either by using the central limit theorem or by doing calculations by hand using binomial coefficients and Stirling’s approximation. In particular, we know that (1) takes the values {1,\ldots, \sqrt N} with probability {\Theta(1/\sqrt N)} each, and so with constant probability (1) is at most {O(\sqrt N)}.

The last statement can be proved from scratch using only pairwise independence. We compute

\displaystyle  \mathop{\mathbb E} \left| \sum_i X_i \right|^2 = N

so that

\displaystyle  \mathop{\mathbb P} \left[ \left|\sum_i X_i \right| \geq c \cdot \sqrt N \right] = \mathop{\mathbb P} \left[ \left|\sum_i X_i \right|^2 \geq c^2 \cdot N \right] \leq \frac 1 {c^2}
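
As a sanity check, here is a small Python sketch (my own illustration) that builds a standard pairwise independent family, the products {X_{ij} = R_i R_j} over independent unbiased {\pm 1} variables {R_1,\ldots,R_m}, and estimates both the second moment and the tail probability above.

    import random

    def sample_sum(m):
        # One sample of S = sum_{i<j} R_i R_j, a sum of N = m(m-1)/2
        # pairwise independent unbiased +-1 variables.
        r = [random.choice((-1, 1)) for _ in range(m)]
        t = sum(r)
        return (t * t - m) // 2   # equals sum_{i<j} R_i R_j

    m, trials, c = 40, 20000, 2.0
    N = m * (m - 1) // 2
    sums = [sample_sum(m) for _ in range(trials)]
    print("E[S^2] ~", sum(s * s for s in sums) / trials, "vs N =", N)
    tail = sum(1 for s in sums if abs(s) >= c * N ** 0.5) / trials
    print("P[|S| >= c sqrt(N)] ~", tail, "<= 1/c^2 =", 1 / c ** 2)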

It is also true that (1) is at least {\Omega(\sqrt N)} with constant probability, and this is trickier to prove.

First of all, note that a proof based on pairwise independence is no longer possible. If {(X_1,\ldots,X_N)} is a random row of an {N \times N} Hadamard matrix, then {\sum_i X_i = N} with probability {1/N}, and {\sum_i X_i = 0} with probability {1-1/N}.
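
The following quick numerical check (my own sketch) verifies this row-sum claim: in the {2^k \times 2^k} Sylvester Hadamard matrix, whose row {a} has entries {(-1)^{\langle a,s \rangle}} for {s \in \{0,1\}^k}, the all-ones row sums to {N = 2^k} and every other row sums to {0}.

    from itertools import product

    def row_sum(a):
        # Sum of the entries (-1)^{<a,s>} of row a of the Sylvester
        # Hadamard matrix, over all column indices s in {0,1}^k.
        return sum((-1) ** (sum(ai * si for ai, si in zip(a, s)) % 2)
                   for s in product((0, 1), repeat=len(a)))

    k = 4
    for a in product((0, 1), repeat=k):
        assert row_sum(a) == (2 ** k if not any(a) else 0)
    print("a random row sums to N with probability 1/N, and to 0 otherwise")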

Happily, four-wise independence suffices.


Distinguishers from linear functions

In the last post we introduced the following problem: we are given a length-increasing function, the hardest case being a function {G: \{ 0,1 \}^{n-1} \rightarrow \{ 0,1 \}^n} whose output is one bit longer than its input, and we want to construct a distinguisher {D} such that the advantage, or distinguishing probability, of {D}

\displaystyle   \left| \mathop{\mathbb P}_{z \in \{ 0,1 \}^{n-1}} [D(G(z)) =1 ] - \mathop{\mathbb P}_{x \in \{ 0,1 \}^{n}} [D(x) =1 ] \right| \ \ \ \ \ (1)

is as large as possible relative to the circuit complexity of {D}.

I will show how to achieve advantage {\epsilon} with a circuit of size {O(\epsilon^2 n 2^n)}. Getting rid of the suboptimal factor of {n} is a bit more complicated. These results are in this paper.
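
As a baseline, the trivial range test {D(x) = 1 \Leftrightarrow x \in Image(G)} already achieves advantage at least {1/2}, at the cost of exponential circuit size; the sketch below (a toy illustration of definition (1), not the construction from the paper) computes its advantage for a small example generator.

    from itertools import product

    def advantage(G, n, D):
        # The quantity (1): |P[D(G(z))=1] - P[D(x)=1]| by brute force.
        p_g = sum(D(G(z)) for z in product((0, 1), repeat=n - 1)) / 2 ** (n - 1)
        p_u = sum(D(x) for x in product((0, 1), repeat=n)) / 2 ** n
        return abs(p_g - p_u)

    n = 8
    G = lambda z: z + (sum(z) % 2,)       # toy generator: append a parity bit
    image = {G(z) for z in product((0, 1), repeat=n - 1)}
    D = lambda x: 1 if x in image else 0  # range test: huge as a circuit
    print(advantage(G, n, D))             # prints 0.5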


Efficiently Correlating with a Real-valued Function and Breaking PRGs

Suppose we have a length-increasing function {G: \{ 0,1 \}^{n-1} \rightarrow \{ 0,1 \}^n}, which we think of as a pseudorandom generator mapping a shorter seed into a longer output.

Then the distribution of {G(z)} for a random seed {z} is not uniform (in particular, it is concentrated on at most {2^{n-1}} of the {2^n} elements of {\{ 0,1 \}^n}). We say that a statistical test {D: \{ 0,1 \}^n \rightarrow \{ 0,1 \}} has advantage {\epsilon} in distinguishing the output of {G} from the uniform distribution if

\displaystyle   \left| \mathop{\mathbb P}_{z \in \{ 0,1 \}^{n-1}} [D(G(z)) =1 ] - \mathop{\mathbb P}_{x \in \{ 0,1 \}^{n}} [D(x) =1 ] \right| \geq \epsilon \ \ \ \ \ (1)

If the left-hand side of (1) is at most {\epsilon} for every {D} computable by a circuit of size {S}, then we say that {G} is {\epsilon}-pseudorandom against circuits of size {S}, or that it is an {(S,\epsilon)}-secure pseudorandom generator.

How secure can a pseudorandom generator possibly be? This question (if we make no assumption on the efficiency of {G}) is related to the question in the previous post on approximating a boolean function via small circuits. Both questions, in fact, are special cases of the question of how much an arbitrary real-valued function must correlate with functions computed by small circuits, which is answered in a new paper with Anindya De and Madhur Tulsiani.


CS276 Lecture 3: Pseudorandom Generators

Scribed by Bharath Ramsundar


Last time we introduced the setting of one-time symmetric key encryption, defined the notion of semantic security, and proved its equivalence to message indistinguishability.

Today we complete the proof of equivalence (found in the notes for last class), discuss the notion of pseudorandom generator, and see that it is precisely the primitive that is needed in order to have message-indistinguishable (and hence semantically secure) one-time encryption. Finally, we shall introduce the basic definition of security for protocols which send multiple messages with the same key.

1. Pseudorandom Generators And One-Time Encryption

Intuitively, a pseudorandom generator is a function that takes a short random string and stretches it to a longer string which is almost random, in the sense that reasonably complex algorithms cannot distinguish the new string from truly random strings with more than negligible probability.

Definition 1 [Pseudorandom Generator] A function {G: \{ 0,1 \}^k \rightarrow \{ 0,1 \}^m} is a {(t,\epsilon)}-secure pseudorandom generator if for every boolean function {T} of complexity at most {t} we have

\displaystyle   \left | {\mathbb P}_{x\sim U_k } [ T(G(x)) = 1] - {\mathbb P} _{x\sim U_m} [ T(x) = 1] \right| \leq \epsilon \ \ \ \ \ (1)

(We use the notation {U_n} for the uniform distribution over {\{ 0,1 \}^n}.)

The definition is interesting when {m > k} (otherwise the generator can simply output the first {m} bits of the input, and satisfy the definition with {\epsilon=0} and arbitrarily large {t}). Typical parameters we may be interested in are {k=128}, {m=2^{20}}, {t=2^{60}} and {\epsilon = 2^{-40}}; that is, we want {k} to be very small, {m} to be large, {t} to be huge, and {\epsilon} to be tiny. There are some unavoidable trade-offs between these parameters.

Lemma 2 If {G: \{ 0,1 \}^k \rightarrow \{ 0,1 \}^m} is {(t,2^{-k-1})}-pseudorandom with {t = O(m)}, then {k\geq m-1}.

Proof: Pick an arbitrary {y \in \{ 0,1 \}^k}. Define

\displaystyle  T_y(x) = 1 \Leftrightarrow x = G(y)

It is clear that we may implement {T_y} with an algorithm of complexity {O(m)}: all this algorithm has to do is store the value of {G(y)} (which takes space {O(m)}) and compare its input to the stored value (which takes time {O(m)}), for a total complexity of {O(m)}. Now, note that

\displaystyle {\mathbb P}_{x\sim U_k } [ T_y(G(x)) = 1] \geq \frac{1}{2^k}

since {G(x) = G(y)} at least when {x = y}. Similarly, note that {{\mathbb P} _{x\sim U_m} [ T_y(x) = 1] = \frac{1}{2^m}} since {T_y(x) = 1} only when {x = G(y)}. Now, by the pseudorandomness of {G}, we have that {\frac{1}{2^k} - \frac{1}{2^m} \leq \frac{1}{2^{k+1}}}. With some rearranging, this expression implies that

\displaystyle \frac{1}{2^{k+1}} \leq \frac{1}{2^m}

which then implies {m \leq k + 1} and consequently {k \geq m - 1}. ◻
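
For a toy generator one can check the two probabilities in the proof numerically; here is a small sketch (mine, with an arbitrary example {G} that just repeats the seed).

    from itertools import product

    k, m = 4, 8
    G = lambda x: x + x   # toy generator: repeat the seed (so m = 2k)
    for y in product((0, 1), repeat=k):
        target = G(y)
        T_y = lambda x: 1 if x == target else 0
        # Acceptance probability over a random seed, and over a uniform string.
        p_seed = sum(T_y(G(x)) for x in product((0, 1), repeat=k)) / 2 ** k
        p_unif = sum(T_y(x) for x in product((0, 1), repeat=m)) / 2 ** m
        assert p_seed >= 2 ** -k and p_unif == 2 ** -m
    print("every T_y achieves advantage at least 2^-k - 2^-m")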

Exercise 1 Prove that if {G: \{ 0,1 \}^k \rightarrow \{ 0,1 \}^m} is {(t,\epsilon)} pseudorandom, and {k < m}, then

\displaystyle  t \cdot \frac 1 \epsilon \leq O( m \cdot 2^k)
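
For instance, the typical parameters mentioned earlier are consistent with this bound: {t \cdot \frac 1 \epsilon = 2^{60} \cdot 2^{40} = 2^{100}}, while {m \cdot 2^k = 2^{20} \cdot 2^{128} = 2^{148}}.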

Suppose we have a pseudorandom generator as above. Consider the following encryption scheme:

  • Given a key {K\in \{ 0,1 \}^k} and a message {M \in \{ 0,1 \}^m},

    \displaystyle  Enc(K,M) := M \oplus G(K)

  • Given a ciphertext {C\in \{ 0,1 \}^m} and a key {K\in \{ 0,1 \}^k},

    \displaystyle  Dec(K,C) = C \oplus G(K)

(The XOR operation is applied bit-wise.)
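
For concreteness, here is a minimal Python sketch of the scheme (mine), over bit-tuples, with a toy and deliberately insecure stand-in for {G}; any real instantiation would of course use an actual pseudorandom generator.

    k, m = 4, 8

    def G(key):
        # Toy stand-in for a pseudorandom generator: NOT pseudorandom,
        # included only to make the construction concrete.
        return tuple((key[i % k] + i) % 2 for i in range(m))

    def Enc(K, M):
        return tuple(mi ^ gi for mi, gi in zip(M, G(K)))   # M xor G(K)

    def Dec(K, C):
        return tuple(ci ^ gi for ci, gi in zip(C, G(K)))   # C xor G(K)

    K = (1, 0, 1, 1)
    M = (0, 1, 1, 0, 0, 1, 0, 1)
    assert Dec(K, Enc(K, M)) == M   # correctness: (M xor G(K)) xor G(K) = M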

It’s clear by construction that the encryption scheme is correct. Regarding the security, we have

Lemma 3 If {G} is {(t,\epsilon)}-pseudorandom, then {(Enc,Dec)} as defined above is {(t-m,2\epsilon)}-message indistinguishable for one-time encryption.

Proof: Suppose that {(Enc,Dec)} is not {(t-m, 2\epsilon)}-message indistinguishable for one-time encryption. Then there exist messages {M_1, M_2} and an algorithm {T} of complexity at most {t - m} such that

\displaystyle  \left | {\mathbb P}_{K \sim U_k} [T(Enc(K, M_1)) = 1] - {\mathbb P}_{K \sim U_k} [T(Enc(K, M_2)) = 1] \right | > 2\epsilon

By using the definition of {Enc} we obtain

\displaystyle  \left | {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_1) = 1] - {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_2) = 1] \right | > 2\epsilon

Now, we can add and subtract the term {{\mathbb P}_{R \sim U_m} [T(R) = 1]} and use the triangle inequality to obtain that {\left | {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_1) = 1] - {\mathbb P}_{R \sim U_m} [T(R) = 1] \right |} added to {\left | {\mathbb P}_{R \sim U_m} [T(R) = 1] - {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_2) = 1] \right |} is greater than {2\epsilon}. At least one of the two terms must be greater than {\epsilon}; suppose, without loss of generality, that it is the first:

\displaystyle  \left | {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_1) = 1] - {\mathbb P}_{R \sim U_m} [T(R) = 1] \right | > \epsilon

Now define {T'(X) = T(X \oplus M_1)}. Then since {H(X) = X \oplus M_1} is a bijection, {{\mathbb P}_{R \sim U_m} [T'(R) = 1] = {\mathbb P}_{R \sim U_m} [T(R) = 1]}. Consequently,

\displaystyle  \left | {\mathbb P}_{K \sim U_k} [T'(G(K)) = 1] - {\mathbb P}_{R \sim U_m} [T'(R) = 1] \right | > \epsilon

Thus, since the complexity of {T} is at most {t - m} and {T'} is {T} plus an XOR operation (which takes time {m}), {T'} is of complexity at most {t}. Thus, {G} is not {(t, \epsilon)}-pseudorandom, since there exists an algorithm {T'} of complexity at most {t} that can distinguish between {G}'s output and random strings with probability greater than {\epsilon}. Contradiction. Thus {(Enc, Dec)} is {(t-m, 2\epsilon)}-message indistinguishable. ◻
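
The heart of the proof is the reduction that turns a distinguisher {T} for encryptions of {M_1} into a distinguisher {T'} for {G}; as a sketch (mine), the wrapper looks as follows.

    # Wrap a test T that distinguishes encryptions of M1 from random
    # strings into a test T' that distinguishes G's output from random.
    def make_T_prime(T, M1):
        def T_prime(X):
            return T(tuple(xi ^ mi for xi, mi in zip(X, M1)))   # T(X xor M1)
        return T_prime
    # Since X -> X xor M1 is a bijection on {0,1}^m, T' accepts a uniform
    # input exactly as often as T does, while T'(G(K)) = T(Enc(K, M1)).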

2. Security for Multiple Encryptions: Plain Version

In the real world, we often need to send more than just one message. Consequently, we have to create new definitions of security for such situations, where we use the same key to send multiple messages. There are in fact multiple possible definitions of security in this scenario. Today we shall only introduce the simplest definition.

Definition 4 [Message indistinguishability for multiple encryptions] {(Enc,Dec)} is {(t,\epsilon)}-message indistinguishable for {c} encryptions if for every {2c} messages {M_1,\ldots,M_c}, {M'_1,\ldots,M'_c} and every {T} of complexity {\leq t} we have

\displaystyle  | {\mathbb P} [ T(Enc(K,M_1), \ldots,Enc(K,M_c)) = 1]

\displaystyle  -{\mathbb P} [ T(Enc(K,M'_1), \ldots,Enc(K,M'_c)) = 1] | \leq \epsilon

Similarly, we define semantic security, and the asymptotic versions.

Exercise 2 Prove that no encryption scheme {(Enc,Dec)} in which {Enc()} is deterministic (such as the scheme for one-time encryption described above) can be secure even for 2 encryptions.

Encryption in some versions of Microsoft Office is deterministic and thus fails to satisfy this definition. (This is just a symptom of bigger problems; the schemes in those versions of Office are considered completely broken.)

If we allow the encryption algorithm to keep state information, then a pseudorandom generator is sufficient to meet this definition. Indeed, usually pseudorandom generators designed for such applications, including RC4, are optimized for this kind of “stateful multiple encryption.”
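
Concretely, such a stateful scheme can be sketched as follows (my own illustration, not a vetted design): both parties keep a shared counter into the generator's output, so that no pseudorandom bits are ever reused across messages.

    class StatefulCipher:
        # Both parties construct this with the same stream (the output of G)
        # and keep their positions in sync.
        def __init__(self, stream_bits):
            self.stream = stream_bits   # output bits of G, conceptually very long
            self.pos = 0                # state: next unused position

        def process(self, bits):
            # XOR with the next unused chunk of the stream; the same call
            # encrypts on one side and decrypts on the other, as long as
            # the two parties' positions stay in sync.
            chunk = self.stream[self.pos:self.pos + len(bits)]
            self.pos += len(bits)
            return tuple(b ^ c for b, c in zip(bits, chunk))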

Next time, we shall consider a stronger model of multiple message security which will be secure against Chosen Plaintext Attacks.