You are currently browsing the tag archive for the ‘Pseudorandomness’ tag.

Two of my favorite challenges in unconditional derandomization are to find logarithmic-seed pseudorandom generators which are good against:

- log-space randomized algorithms
- AC0 circuits, that is, constant depth circuits of polynomial size

Regarding the first challenge, the best known pseudorandom generator remains Nisan's, from 1990, which requires a seed of $O(\log^2 n)$ bits. Maddeningly, even if we look at width-3 oblivious branching programs, that is, non-uniform algorithms that use only a constant number of bits of memory, nobody knows how to beat Nisan's generator.

Regarding the second challenge, Nisan showed in 1988 that for every $d$ there is a pseudorandom generator of seed length $\log^{O(d)} n$ against depth-$d$ circuits of size $\mathrm{poly}(n)$. The simplest case is that of depth-2 circuits, or, without loss of generality, of disjunctive-normal-form (DNF) formulas. When specialized to DNF formulas, Nisan's generator has polylogarithmic seed length, but better constructions are known in this case.

Luby, Velickovic and Wigderson improved the seed length in 1993. Bazzi's celebrated proof of the depth-2 case of the Linial-Nisan conjecture implies that an $O(\log^2 (m/\epsilon))$-wise independent distribution "$\epsilon$-fools" every $m$-term DNF, by which we mean that for every such DNF formula $f$ and every such distribution $D$ we have

$\left| \Pr_{x \sim D} [ f(x) = 1 ] - \Pr_{x \sim U} [ f(x) = 1 ] \right| \leq \epsilon$

where $U$ is the uniform distribution over assignments. This leads to a pseudorandom generator that $\epsilon$-fools $n$-variable, $m$-term DNF formulas and whose seed length is $O(\log n \cdot \log^2 (m/\epsilon))$, which is $O(\log^3 n)$ when $n, m, \epsilon^{-1}$ are polynomially related.

In a new paper with Anindya De, Omid Etesami, and Madhur Tulsiani, we show that an $n$-variable, $m$-term DNF can be $\epsilon$-fooled by a generator of seed length $O(\log n + \log^2 (m/\epsilon) \cdot \log\log (m/\epsilon))$, which is $\tilde O(\log^2 n)$ when $n, m, \epsilon^{-1}$ are polynomially related.

Our approach is similar to the one in Razborov's proof of Bazzi's result, but we use small-bias distributions instead of $k$-wise independent distributions.

Suppose $X_1, \ldots, X_n$ are mutually independent unbiased $\pm 1$ random variables. Then we know everything about the distribution of

$X_1 + \cdots + X_n \ \ \ \ \ (1)$

either by using the central limit theorem or by doing calculations by hand using binomial coefficients and Stirling's approximation. In particular, we know that (1) takes each of the values in $\{ -O(\sqrt n), \ldots, O(\sqrt n) \}$ (of the right parity) with probability $\Omega(1/\sqrt n)$, and so with constant probability (1) is at most $O(\sqrt n)$ in absolute value.
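The constant-probability claim is easy to check exactly for moderate $n$; here is a quick sketch in plain Python, using exact binomial arithmetic (the choice $n = 100$ is arbitrary):

```python
from math import comb

# S = X_1 + ... + X_n for i.i.d. unbiased +-1 variables satisfies
# S = 2B - n where B ~ Binomial(n, 1/2).
n = 100
p_small = sum(comb(n, b) for b in range(n + 1)
              if abs(2 * b - n) <= n ** 0.5) / 2 ** n
# By the central limit theorem this should be near Pr[|N(0,1)| <= 1] ~ 0.68.
print(round(p_small, 3))
```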

The last statement can be proved from scratch using only pairwise independence. We compute

$\mathop{\mathbb E} \left[ \left( \sum_i X_i \right)^2 \right] = \sum_i \mathop{\mathbb E} [ X_i^2 ] = n$

so that, by Markov's inequality,

$\Pr \left[ \left| \sum_i X_i \right| \geq t \sqrt n \right] = \Pr \left[ \left( \sum_i X_i \right)^2 \geq t^2 n \right] \leq \frac 1 {t^2}$

It is also true that (1) is at least $\Omega(\sqrt n)$ in absolute value with constant probability, and this is trickier to prove.

First of all, note that a proof based on pairwise independence is not possible any more. If $(X_1,\ldots,X_n)$ is a random row of an $n \times n$ Hadamard matrix, then $X_1 + \cdots + X_n = n$ with probability $\frac 1n$, and $X_1 + \cdots + X_n = 0$ with probability $1 - \frac 1n$.
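This counterexample can be verified exhaustively. The sketch below uses the standard character construction $X_b = (-1)^{\langle a,b\rangle}$ over the nonzero frequencies $b$ (a Hadamard matrix with its constant column dropped, so off the all-ones row the sum comes out $-1$ rather than $0$; the phenomenon is the same):

```python
from collections import Counter
from itertools import combinations

k = 4
n = 2 ** k - 1                   # number of variables
freqs = list(range(1, 2 ** k))   # nonzero "frequencies" b

def sample(a):
    # X_b = (-1)^{<a,b>}: one pairwise independent +-1 vector per seed a
    return [(-1) ** bin(a & b).count("1") for b in freqs]

samples = [sample(a) for a in range(2 ** k)]

# every pair of coordinates is uniform over {-1,+1}^2 -> pairwise independence
for i, j in combinations(range(n), 2):
    counts = Counter((s[i], s[j]) for s in samples)
    assert all(v == 2 ** k // 4 for v in counts.values())

# yet the sum is never of order sqrt(n): it equals n once and -1 otherwise
print(Counter(sum(s) for s in samples))
```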

Happily, four-wise independence suffices.

In the last post we introduced the following problem: we are given a length-increasing function $G: \{ 0,1 \}^n \rightarrow \{ 0,1 \}^m$, the hardest case being a function whose output is one bit longer than the input, and we want to construct a statistical test $T$ such that the *advantage* or *distinguishing probability* of $T$,

$\left| \Pr_x [ T(G(x)) = 1 ] - \Pr_u [ T(u) = 1 ] \right|$

is as large as possible relative to the circuit complexity of $T$.

I will show how to achieve advantage $\epsilon$ with a circuit of size $O(\epsilon^2 n 2^n)$. Getting rid of the suboptimal factor of $n$ is a bit more complicated. These results are in this paper.

Suppose we have a length-increasing function $G: \{ 0,1 \}^n \rightarrow \{ 0,1 \}^m$, $m > n$, which we think of as a pseudorandom generator mapping a shorter seed into a longer output.

Then the distribution of $G(x)$ for a random seed $x$ is not uniform (in particular, it is concentrated on at most $2^n$ of the $2^m$ elements of $\{ 0,1 \}^m$). We say that a statistical test $T$ has *advantage* $\epsilon$ in distinguishing the output of $G$ from the uniform distribution if

$\left| \Pr_x [ T(G(x)) = 1 ] - \Pr_u [ T(u) = 1 ] \right| \geq \epsilon \ \ \ \ \ (1)$

If the left-hand side of (1) is at most $\epsilon$ for every $T$ computable by a circuit of size $S$, then we say that $G$ is $\epsilon$-pseudorandom against circuits of size $S$, or that it is an $(S,\epsilon)$-secure pseudorandom generator.

How secure can a pseudorandom generator possibly be? This question (if we make no assumption on the efficiency of $G$) is related to the question in the previous post on approximating a boolean function via small circuits. Both questions, in fact, are special cases of the question of how much an arbitrary real-valued function must correlate with functions computed by small circuits, which is answered in a new paper with Anindya De and Madhur Tulsiani.
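For computationally unbounded tests, part of the answer is immediate: a test that knows the support of $G$ achieves advantage at least $1/2$ whenever $G$ stretches by one bit, since the output distribution misses at least half of $\{0,1\}^{n+1}$. A toy brute-force sketch (the function $G$ below is a made-up injection, used only for illustration):

```python
from collections import Counter

n = 3                          # seed length; outputs live in {0,1}^{n+1}

def G(x):
    # arbitrary toy injection from 3-bit seeds to 4-bit strings (hypothetical)
    return (3 * x + 1) % 16

out = Counter(G(x) for x in range(2 ** n))
assert len(out) == 2 ** n      # this particular G is injective

# the optimal (unbounded) test accepts y iff y is likelier under G(seed)
# than under uniform; its advantage equals the statistical distance
adv = sum(max(c / 2 ** n - 1 / 2 ** (n + 1), 0) for c in out.values())
print(adv)  # 0.5 here: the support covers only half of {0,1}^4
```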

*Scribed by Madhur Tulsiani*

**Summary**

Today we show how to construct a pseudorandom function from a pseudorandom generator.

*Scribed by Siu-On Chan*

**Summary**

Today we complete the proof that it is possible to construct a pseudorandom generator from a one-way permutation.

** Summary **

Today we prove the Goldreich-Levin theorem.

*Scribed by Bharath Ramsundar*

**Summary**

Last time we introduced the setting of *one-time symmetric key encryption*, defined the notion of *semantic security*, and proved its equivalence to *message indistinguishability*.

Today we complete the proof of equivalence (found in the notes for last class), discuss the notion of *pseudorandom generator*, and see that it is precisely the primitive that is needed in order to have message-indistinguishable (and hence semantically secure) one-time encryption. Finally, we shall introduce the basic definition of security for protocols which send multiple messages with the same key.

**1. Pseudorandom Generators And One-Time Encryption**

Intuitively, a Pseudorandom Generator is a function that takes a short random string and stretches it to a longer string which is almost random, in the sense that reasonably complex algorithms cannot differentiate the new string from truly random strings with more than negligible probability.

Definition 1 [Pseudorandom Generator] A function $G: \{ 0,1 \}^k \rightarrow \{ 0,1 \}^m$ is a $(t,\epsilon)$-secure pseudorandom generator if for every boolean function $T$ of complexity at most $t$ we have

$\left| \Pr_{x \sim U_k} [ T(G(x)) = 1 ] - \Pr_{x \sim U_m} [ T(x) = 1 ] \right| \leq \epsilon$

(We use the notation $U_n$ for the uniform distribution over $\{ 0,1 \}^n$.)

The definition is interesting when $k < m$ (otherwise the generator can simply output the first $m$ bits of the input, and satisfy the definition with $\epsilon = 0$ and $t$ arbitrarily large). Typically we want $k$ to be very small, $m$ to be large, $t$ to be huge, and $\epsilon$ to be tiny. There are some unavoidable trade-offs between these parameters.

Lemma 2 If $G: \{ 0,1 \}^k \rightarrow \{ 0,1 \}^m$ is $(t,\epsilon)$-pseudorandom with $t \geq c \cdot m$ for a suitable absolute constant $c$, then $\epsilon \geq 2^{-k} - 2^{-m}$.

*Proof:* Pick an arbitrary $y \in \{ 0,1 \}^k$. Define

$T(x) := 1 \mbox{ iff } x = G(y)$

It is clear that we may implement $T$ with an algorithm of complexity $O(m)$: all this algorithm has to do is store the value of $G(y)$ (which takes space $O(m)$) and compare its input to the stored value (which takes time $O(m)$) for total complexity of $O(m)$. Now, note that

$\Pr_{x \sim U_k} [ T(G(x)) = 1 ] \geq 2^{-k}$

since $T(G(x)) = 1$ at least when $x = y$. Similarly, note that $\Pr_{x \sim U_m} [ T(x) = 1 ] = 2^{-m}$ since $T(x) = 1$ only when $x = G(y)$. Now, by the pseudorandomness of $G$, we have that $\left| \Pr_{x \sim U_k} [ T(G(x)) = 1 ] - \Pr_{x \sim U_m} [ T(x) = 1 ] \right| \leq \epsilon$. With some rearranging, this expression implies that

$2^{-k} - 2^{-m} \leq \epsilon$

which then implies the claim. ◻
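The test from the proof can be run verbatim on a toy generator; the particular $G$ below is made up for illustration only:

```python
k, m = 3, 5

def G(x):
    # toy injection from {0,1}^3 to {0,1}^5 (hypothetical generator)
    return (5 * x + 2) % 2 ** m

y = 0                       # arbitrary seed, as in the proof
target = G(y)
T = lambda z: z == target   # accept iff input equals G(y)

p_gen = sum(T(G(x)) for x in range(2 ** k)) / 2 ** k
p_uni = sum(T(z) for z in range(2 ** m)) / 2 ** m
print(p_gen - p_uni)        # 2^-k - 2^-m = 0.09375: no epsilon can be smaller
```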

Exercise 1 Prove that if $G: \{ 0,1 \}^k \rightarrow \{ 0,1 \}^m$ is $(t,\epsilon)$-pseudorandom and $k < m$, then either $t \leq O(m \cdot 2^k)$ or $\epsilon \geq 1 - 2^{k-m}$.

Suppose we have a $(t,\epsilon)$-pseudorandom generator $G: \{ 0,1 \}^k \rightarrow \{ 0,1 \}^m$ as above. Consider the following encryption scheme:

- Given a key $K \in \{ 0,1 \}^k$ and a message $M \in \{ 0,1 \}^m$, $Enc(K,M) := G(K) \oplus M$
- Given a ciphertext $C \in \{ 0,1 \}^m$ and a key $K \in \{ 0,1 \}^k$, $Dec(K,C) := G(K) \oplus C$

(The XOR operation is applied bit-wise.)
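In code, the scheme is a one-time pad whose pad is the generator's output. The sketch below stands in SHA-256 in counter mode for $G$ (an illustration only, not a proven pseudorandom generator):

```python
import hashlib

def G(key: bytes, m: int) -> bytes:
    # stretch a short key to m bytes (illustrative stand-in for a PRG)
    out = b""
    ctr = 0
    while len(out) < m:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:m]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def Enc(K: bytes, M: bytes) -> bytes:
    return xor(G(K, len(M)), M)

def Dec(K: bytes, C: bytes) -> bytes:
    return xor(G(K, len(C)), C)

K = b"short seed"
M = b"the loyal spy arrives at noon"
assert Dec(K, Enc(K, M)) == M   # correctness: Dec inverts Enc
```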

It’s clear by construction that the encryption scheme is correct. Regarding the security, we have

Lemma 3 If $G$ is $(t,\epsilon)$-pseudorandom, then $(Enc,Dec)$ as defined above is $(t - O(m), 2\epsilon)$-message indistinguishable for one-time encryption.

*Proof:* Suppose that $(Enc,Dec)$ is not $(t - O(m), 2\epsilon)$-message indistinguishable for one-time encryption. Then there exist messages $M_1, M_2$ and an algorithm $T$ of complexity at most $t - O(m)$ such that

$\left| \Pr_K [ T(Enc(K,M_1)) = 1 ] - \Pr_K [ T(Enc(K,M_2)) = 1 ] \right| > 2\epsilon$

By using the definition of $Enc$ we obtain

$\left| \Pr_K [ T(G(K) \oplus M_1) = 1 ] - \Pr_K [ T(G(K) \oplus M_2) = 1 ] \right| > 2\epsilon$

Now, we can add and subtract the term $\Pr_{R \sim U_m} [ T(R) = 1 ]$ and use the triangle inequality to obtain that $\left| \Pr_K [ T(G(K) \oplus M_1) = 1 ] - \Pr_R [ T(R) = 1 ] \right|$ added to $\left| \Pr_R [ T(R) = 1 ] - \Pr_K [ T(G(K) \oplus M_2) = 1 ] \right|$ is greater than $2\epsilon$. At least one of the two terms in the previous expression must be greater than $\epsilon$. Suppose without loss of generality that the first term is greater than $\epsilon$:

$\left| \Pr_K [ T(G(K) \oplus M_1) = 1 ] - \Pr_R [ T(R) = 1 ] \right| > \epsilon$

Now define $T'(X) := T(X \oplus M_1)$. Then, since $X \mapsto X \oplus M_1$ is a bijection, $\Pr_R [ T'(R) = 1 ] = \Pr_R [ T(R) = 1 ]$. Consequently,

$\left| \Pr_K [ T'(G(K)) = 1 ] - \Pr_R [ T'(R) = 1 ] \right| > \epsilon$

Thus, since the complexity of $T$ is at most $t - O(m)$ and $T'$ is $T$ plus an xor operation (which takes time $O(m)$), $T'$ is of complexity at most $t$. Thus, $G$ is not $(t,\epsilon)$-pseudorandom, since there exists an algorithm $T'$ of complexity at most $t$ that can distinguish between $G$'s output and random strings with probability greater than $\epsilon$. Contradiction. Thus $(Enc,Dec)$ is $(t - O(m), 2\epsilon)$-message indistinguishable. ◻

**2. Security for Multiple Encryptions: Plain Version**

In the real world, we often need to send more than just one message. Consequently, we have to create new definitions of security for such situations, where we use the same key to send multiple messages. There are in fact multiple possible definitions of security in this scenario. Today we shall only introduce the simplest definition.

Definition 4 [Message indistinguishability for multiple encryptions] $(Enc,Dec)$ is $(t,\epsilon)$-message indistinguishable for $c$ encryptions if for every $2c$ messages $M_1, \ldots, M_c, M'_1, \ldots, M'_c$ and every $T$ of complexity at most $t$ we have

$\left| \Pr_K [ T(Enc(K,M_1), \ldots, Enc(K,M_c)) = 1 ] - \Pr_K [ T(Enc(K,M'_1), \ldots, Enc(K,M'_c)) = 1 ] \right| \leq \epsilon$

Similarly, we define semantic security, and the asymptotic versions.

Exercise 2 Prove that no encryption scheme in which $Enc$ is deterministic (such as the scheme for one-time encryption described above) can be secure even for 2 encryptions.

Encryption in some versions of Microsoft Office is deterministic and thus fails to satisfy this definition. (This is just a symptom of bigger problems; the schemes in those versions of Office are considered completely broken.)
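A concrete illustration of the failure: when $Enc$ is deterministic, the test $T(C_1, C_2) := 1$ iff $C_1 = C_2$ tells encryptions of $(M, M)$ apart from encryptions of $(M, M')$ with advantage 1. A minimal sketch, with a fixed-pad XOR standing in for an arbitrary deterministic scheme:

```python
def enc(key: bytes, msg: bytes) -> bytes:
    # any deterministic encryption; a fixed-pad XOR stands in here
    return bytes(k ^ b for k, b in zip(key, msg))

key = bytes.fromhex("deadbeef")
M, M2 = b"yes!", b"no!!"

# T(C1, C2) := [C1 == C2] outputs 1 on encryptions of (M, M)
# and 0 on encryptions of (M, M2): advantage 1
assert enc(key, M) == enc(key, M)
assert enc(key, M) != enc(key, M2)
```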

If we allow the encryption algorithm to keep *state* information, then a pseudorandom generator is sufficient to meet this definition. Indeed, usually pseudorandom generators designed for such applications, including RC4, are optimized for this kind of “stateful multiple encryption.”
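A minimal sketch of such stateful encryption, with SHA-256 in counter mode standing in for the pseudorandom generator (an illustration of the idea, not any specific cipher): each message consumes fresh generator output, so pad bytes are never reused.

```python
import hashlib

class StreamEncryptor:
    """Stateful encryption: each message consumes fresh generator output."""

    def __init__(self, key: bytes):
        self.key = key
        self.ctr = 0  # persistent state: index of the next unused pad block

    def _pad(self, nbytes: int) -> bytes:
        out = b""
        while len(out) < nbytes:
            out += hashlib.sha256(self.key + self.ctr.to_bytes(8, "big")).digest()
            self.ctr += 1  # advance state so pad bytes are never reused
        return out[:nbytes]

    def enc(self, msg: bytes) -> bytes:
        return bytes(p ^ b for p, b in zip(self._pad(len(msg)), msg))

alice = StreamEncryptor(b"shared key")
c1 = alice.enc(b"attack at dawn")
c2 = alice.enc(b"attack at dawn")
assert c1 != c2  # equal messages no longer yield equal ciphertexts
```

The receiver must keep its own counter synchronized with the sender's in order to regenerate the same pad blocks and decrypt.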

Next time, we shall consider a stronger model of multiple message security which will be secure against *Chosen Plaintext Attacks*.

As reported here, here and here, Mark Braverman has just announced a proof of a 1990 conjecture by Linial and Nisan.

Mark proves that if $C$ is an AC0 boolean circuit (with NOT gates and with AND gates and OR gates of unbounded fan-in) of depth $d$ and size $S$, and if $D$ is any $r$-wise independent distribution with $r \geq (\log (S/\epsilon))^{O(d^2)}$, then

$\left| \Pr_{x \sim D} [ C(x) = 1 ] - \Pr_{x \sim U} [ C(x) = 1 ] \right| \leq \epsilon$

that is, $D$ "fools" the circuit $C$ into thinking that $D$ is the uniform distribution over $\{ 0,1 \}^n$. Plausibly, this might be true even for $r = (\log (S/\epsilon))^{O(d)}$.

Nothing was known for depth 3 or more, and the depth-2 case was settled only recently by Bazzi, with a proof that, as you may remember, has been significantly simplified by Razborov about six months ago.

Mark’s proof relies on approximating $C$ via low-degree polynomials. The point is that if $p$ is an $n$-variate (real valued) polynomial of degree $k$, and $D$ is a $k$-wise independent distribution ranging over $\{ 0,1 \}^n$, then

$\mathop{\mathbb E}_{x \sim D} [ p(x) ] = \mathop{\mathbb E}_{x \sim U} [ p(x) ]$

Now if we could show that $p$ approximates $C$ both under $U$ and under $D$, in the sense that $\mathop{\mathbb E}_{x \sim U} | C(x) - p(x) | \leq \epsilon$, and also $\mathop{\mathbb E}_{x \sim D} | C(x) - p(x) | \leq \epsilon$, then we would be done.
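Written out, the moment-matching identity is just linearity of expectation over the monomials of $p$, say $p(x) = \sum_{S : |S| \leq k} \hat p_S \prod_{i \in S} x_i$:

```latex
\mathop{\mathbb E}_{x \sim D} [ p(x) ]
  = \sum_{|S| \le k} \hat p_S \mathop{\mathbb E}_{x \sim D} \Big[ \prod_{i \in S} x_i \Big]
  = \sum_{|S| \le k} \hat p_S \mathop{\mathbb E}_{x \sim U} \Big[ \prod_{i \in S} x_i \Big]
  = \mathop{\mathbb E}_{x \sim U} [ p(x) ]
```

The middle equality holds because each monomial involves at most $k$ coordinates, and a $k$-wise independent $D$ agrees with $U$ on the joint distribution of any $k$ coordinates.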

The Razborov-Smolensky lower bound technique gives a probabilistic construction of a polynomial $p$ such that for every input $x$ one has a high probability that $p(x) = C(x)$. In particular, one gets one polynomial $p$ such that both

$\Pr_{x \sim U} [ p(x) = C(x) ] \geq 1 - \epsilon \ \ \ \ \ (1)$

and

$\Pr_{x \sim D} [ p(x) = C(x) ] \geq 1 - \epsilon \ \ \ \ \ (2)$

Unfortunately this is not sufficient, because the polynomial $p$ might be very large at a few points, and so even if $p$ agrees with $C$ with high probability there is no guarantee that the average of $p$ is close to the average of $C$.

Using a result of Linial, Mansour and Nisan (developed in the context of learning theory), one can construct a different kind of low-degree approximating polynomial $p$, which is such that

$\mathop{\mathbb E}_{x \sim U} ( C(x) - p(x) )^2 \leq \epsilon \ \ \ \ \ (3)$

The Linial-Mansour-Nisan approximation, however, says nothing about the relation between $p$ and $C$ under the distribution $D$.

Using ideas of Bazzi's, however, if we had a single polynomial $p$ such that properties (1), (2) and (3) are satisfied simultaneously, then we could construct another low-degree polynomial $p'$ such that $\mathop{\mathbb E}_{x \sim U} | C(x) - p'(x) | \leq O(\epsilon)$, and also $\mathop{\mathbb E}_{x \sim D} | C(x) - p'(x) | \leq O(\epsilon)$, giving us that $C$ is fooled by $D$.

As far as I understand, Mark constructs a polynomial satisfying properties (1), (2) and (3) by starting from the Razborov-Smolensky polynomial $p$, and then observing that the indicator function of the points on which $p(x) \neq C(x)$ is itself a boolean function admitting a Linial-Mansour-Nisan approximation $e$. Defining $p' := p \cdot (1 - e)$, we have that $p'$ has all the required properties, because multiplying by $(1 - e)$ "zeroes out" the points on which $p$ is excessively large.

I have been interested in this problem for some time because of a connection with the complexity of 3SAT on random instances.
