Suppose {X_1,\ldots,X_N} are mutually independent unbiased {\pm 1} random variables. Then we know everything about the distribution of

\displaystyle   | X_1 + \ldots + X_N | \ \ \ \ \ (1)

either by using the central limit theorem or by doing calculations by hand using binomial coefficients and Stirling’s approximation. In particular, we know that (1) takes each of the values in {\{ 1,\ldots, \sqrt N \}} of the same parity as {N} with probability {\Theta(1/\sqrt N)}, and so with constant probability (1) is at most {O(\sqrt N)}.
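Concretely, for every {k} of the same parity as {N} with {1 \leq k \leq \sqrt N}, Stirling’s approximation gives

\displaystyle  \mathop{\mathbb P} \left[ | X_1 + \ldots + X_N | = k \right] = 2 \binom{N}{\frac{N+k}{2}} 2^{-N} = \Theta \left( \frac 1 {\sqrt N} \right)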

The last statement can be proved from scratch using only pairwise independence. We compute

\displaystyle  \mathop{\mathbb E} \left| \sum_i X_i \right|^2 = N
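Indeed, expanding the square, the cross terms vanish, because pairwise independence and unbiasedness give {\mathop{\mathbb E} X_i X_j = \mathop{\mathbb E} X_i \cdot \mathop{\mathbb E} X_j = 0} for {i \neq j}:

\displaystyle  \mathop{\mathbb E} \left( \sum_i X_i \right)^2 = \sum_i \mathop{\mathbb E} X_i^2 + \sum_{i \neq j} \mathop{\mathbb E} X_i X_j = N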

so that

\displaystyle  \mathop{\mathbb P} \left[ \left|\sum_i X_i \right| \geq c \cdot \sqrt N \right] = \mathop{\mathbb P} \left[ \left|\sum_i X_i \right|^2 \geq c^2 \cdot N \right] \leq \frac 1 {c^2}

where the last step is Markov’s inequality applied to {\left| \sum_i X_i \right|^2}.

It is also true that (1) is at least {\Omega(\sqrt N)} with constant probability, and this is trickier to prove.

First of all, note that a proof based on pairwise independence is no longer possible. If {(X_1,\ldots,X_N)} is a random row of an {N \times N} Sylvester-Hadamard matrix, multiplied by a fresh random sign so that every coordinate is unbiased, then the {X_i} are pairwise independent, and yet {|\sum_i X_i| = N} with probability {1/N}, and {\sum_i X_i =0} with probability {1-1/N}.
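Concretely, the {2^k \times 2^k} Sylvester-Hadamard matrix has entries {(-1)^{\langle r,c \rangle}} for {r,c \in \{ 0,1 \}^k}, and the row sum {\sum_{c} (-1)^{\langle r,c \rangle}} equals {2^k} when {r = 0} and {0} otherwise, so the sum is extremal with probability {2^{-k}} and zero the rest of the time.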

Happily, four-wise independence suffices.

Lemma 1 Suppose {X_1,\ldots,X_N} are four-wise independent unbiased {\pm 1} random variables. Then

\displaystyle  \mathop{\mathbb P} \left[ \left| \sum_i X_i \right| \geq \Omega(\sqrt N) \right] \geq \Omega(1)

This is probably well known to a lot of people, but the only place where I have seen this result is a paper by Andreev et al. that I have mentioned before.

Here is a slightly simplified version of their argument. We compute

\displaystyle  \mathop{\mathbb E} \left| \sum_i X_i \right|^2 = N

and

\displaystyle  \mathop{\mathbb E} \left| \sum_i X_i \right|^4 = 3N^2 - 2N
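Indeed, expanding the fourth power, every monomial in which some {X_i} appears an odd number of times has zero expectation, by four-wise independence and unbiasedness, so

\displaystyle  \mathop{\mathbb E} \left( \sum_i X_i \right)^4 = \sum_i \mathop{\mathbb E} X_i^4 + 3 \sum_{i \neq j} \mathop{\mathbb E} X_i^2 X_j^2 = N + 3N(N-1) = 3N^2 - 2N

where the factor of {3} counts the ways of splitting four factors into two matched pairs.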

So we have (from here on is our simplified version)

\displaystyle  \mathop{\mathbb E} \left( \left(\sum_i X_i \right)^2 - 2N \right)^2 = \mathop{\mathbb E} \left( \sum_i X_i \right)^4 - 4N \mathop{\mathbb E} \left( \sum_i X_i \right)^2 + 4N^2 = 3N^2 - 2N \leq 3N^2

and

\displaystyle  \mathop{\mathbb P} \left[ \left| \sum_i X_i \right| \leq \frac 13 \sqrt N \right]

\displaystyle  = \mathop{\mathbb P} \left[ \left| \sum_i X_i \right|^2 \leq \frac 19 N \right]

\displaystyle  \leq \mathop{\mathbb P} \left[ \left| \left| \sum_i X_i \right|^2 - 2N \right| \geq \frac {17}{9} N \right]

\displaystyle  = \mathop{\mathbb P} \left[ \left( \left| \sum_i X_i \right|^2 - 2N \right)^2 \geq \frac {289}{81} N^2 \right]

\displaystyle  \leq \frac 3 {\frac {289}{81}} = \frac {243}{289} < .841

Thus

\displaystyle  \mathop{\mathbb P} \left[ \left| \sum_i X_i \right| \geq \frac 13 \sqrt N \right] \geq .159
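As a quick numerical sanity check, here is a short Python simulation of Lemma 1. It uses the standard four-wise independent family obtained by evaluating a random cubic polynomial over {GF(2^8)} and keeping the low-order bit of the result; any other four-wise independent family would do, and the helper {gf\_mul} and all parameters below are choices made only for illustration.

```python
import random

# Multiplication in GF(2^8), reducing modulo the AES polynomial
# x^8 + x^4 + x^3 + x + 1 (0x11B).
def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def four_wise_signs(N):
    """One draw of N four-wise independent unbiased +-1 variables (N <= 256):
    a random cubic over GF(2^8) is evaluated at N distinct points, and the
    low-order bit of each value is turned into a sign."""
    c3, c2, c1, c0 = (random.randrange(256) for _ in range(4))
    signs = []
    for x in range(N):
        y = gf_mul(gf_mul(gf_mul(c3, x) ^ c2, x) ^ c1, x) ^ c0  # Horner's rule
        signs.append(1 - 2 * (y & 1))
    return signs

N, trials = 256, 2000
hits = sum(abs(sum(four_wise_signs(N))) >= (N ** 0.5) / 3 for _ in range(trials))
print(f"empirical P[ |sum| >= sqrt(N)/3 ] = {hits / trials:.3f}  (Lemma 1: >= .159)")
```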

Following the same proof, but replacing each “{X_i}” with “{a_i X_i}” and every “{N}” with “{\sum_i a_i^2}” (the only point to check is the fourth moment, which becomes {3 \left( \sum_i a_i^2 \right)^2 - 2 \sum_i a_i^4}, still at most {3 \left( \sum_i a_i^2 \right)^2}), we get the more general version

Lemma 2 Suppose {X_1,\ldots,X_N} are four-wise independent unbiased {\pm 1} random variables and let {a_1,\ldots,a_N} be any real numbers. Then

\displaystyle  \mathop{\mathbb P} \left[ \left| \sum_i a_i X_i \right| \geq \frac 13 \sqrt {\sum_i a_i^2 } \right] \geq .159

In particular,

\displaystyle  \mathop{\mathbb E} \left| \sum_i a_i X_i \right| \geq \frac 1 {20} \sqrt {\sum_i a_i^2 }
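Indeed, by Lemma 2,

\displaystyle  \mathop{\mathbb E} \left| \sum_i a_i X_i \right| \geq \frac 13 \sqrt{ \sum_i a_i^2 } \cdot \mathop{\mathbb P} \left[ \left| \sum_i a_i X_i \right| \geq \frac 13 \sqrt{ \sum_i a_i^2 } \right] \geq \frac {.159}{3} \sqrt{ \sum_i a_i^2 } > \frac 1 {20} \sqrt{ \sum_i a_i^2 }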

A related version that was useful in our work on distinguishers for pseudorandom generators is

Corollary 3 Let {H} be a four-wise independent family of functions {h: S \rightarrow \{-1,1\}} and {g: S \rightarrow {\mathbb R}} be any real valued function. Then

\displaystyle  \mathop{\mathbb E}_{h\sim H} \left| \sum_{x\in S} h(x)g(x) \right| \geq \frac 1{20\cdot \sqrt{|S|}} \sum_x | g(x) |

This follows from Lemma 2, with {a_x = g(x)}, together with the Cauchy-Schwarz inequality {\sqrt{\sum_x g(x)^2} \geq \frac 1 {\sqrt{|S|}} \sum_x |g(x)|}.

To briefly show how this relates to our work on distinguishers, suppose {G: \{ 0,1 \}^{n-1} \rightarrow \{ 0,1 \}^n} is a length-increasing function, and that we want to construct a function {D}, computable by a circuit as small as possible, such that

\displaystyle  \mathop{\mathbb P}_{x\in \{ 0,1 \}^n} [D(x)=1] - \mathop{\mathbb P}_{z\in \{ 0,1 \}^{n-1}} [ D(G(z)) =1 ] \geq \epsilon

(The direction of the bias is unimportant, because we can always replace {D} with its complement.) Then this is equivalent to proving

\displaystyle   \mathop{\mathbb E}_{z \in \{ 0,1 \}^{n-1}} (-1)^{D(G(z))} - \mathop{\mathbb E}_{x\in \{ 0,1 \}^n} (-1)^{D(x)} \geq 2 \epsilon \ \ \ \ \ (2)

and, if we set

\displaystyle  g(x) := \mathop{\mathbb P}_{z\in \{ 0,1 \}^{n-1}} [G(z) = x] - \frac 1 {2^n}

then, since

\displaystyle  \sum_{x\in \{ 0,1 \}^n} (-1)^{D(x)} \cdot g(x) = \mathop{\mathbb E}_{z \in \{ 0,1 \}^{n-1}} (-1)^{D(G(z))} - \mathop{\mathbb E}_{x\in \{ 0,1 \}^n} (-1)^{D(x)},

(2) is equivalent to

\displaystyle   \sum_{x\in \{ 0,1 \}^n} (-1)^{D(x)} \cdot g(x) \geq 2 \epsilon \ \ \ \ \ (3)

Now, fix a simple partition {\cal P} of {\{ 0,1 \}^n} into {1600 \epsilon^{2} 2^n} subsets, each of size {(40 \epsilon)^{-2}}, for example based on the value of the first {\log ( 1600 \epsilon^2 2^n )} bits.

Fix an efficiently computable four-wise independent family {H} of functions {h: \{ 0,1 \}^n \rightarrow \{ 0,1 \}}. Applying Corollary 3 to the functions {x \mapsto (-1)^{h(x)}}, we have, for every set {S} in the partition,

\displaystyle  \mathop{\mathbb E}_{h\sim H} \left| \sum_{x \in S} (-1)^{h(x)} g(x) \right| \geq \frac{1}{20\sqrt {|S|}} \sum_{x\in S} |g(x)| = 2\epsilon \sum_{x\in S} |g(x)|

where the last equality uses {|S| = (40\epsilon)^{-2}}.

Summing over all the sets in the partition, and using the fact that {\sum_x |g(x)| \geq 1} (the image of {G} contains at most {2^{n-1}} strings, so the output of {G} is at statistical distance at least {1/2} from the uniform distribution), we have

\displaystyle  \mathop{\mathbb E}_{h\sim H} \sum_{S\in {\cal P}} \left| \sum_{x \in S} (-1)^{h(x)} g(x) \right| \geq 2\epsilon \sum_{x\in \{ 0,1 \}^n} |g(x)| \geq 2\epsilon

In particular, there is a function {h_0 \in H} such that

\displaystyle  \sum_{S\in {\cal P}} \left| \sum_{x \in S} (-1)^{h_0(x)} g(x) \right| \geq 2\epsilon

To finish the argument, define the function {c: \{ 0,1 \}^n \rightarrow \{ 0,1 \}} such that {c(z)=1} iff {\sum_{x\in S} (-1)^{h_0(x)} g(x) < 0}, where {S} is the set that {z} belongs to. Then

  1. {c(x)} depends only on the first {\log ( 1600 \epsilon^{2} 2^n )} bits of the input, and so it can be computed by a circuit of size {O(\epsilon^{2}2^n)}
  2. for every {S} we have

    \displaystyle  \left| \sum_{x \in S} (-1)^{h_0(x)} g(x) \right| = \sum_{x\in S} (-1)^{c(x)} \cdot (-1)^{h_0(x)} g(x)

So, in conclusion

\displaystyle  \sum_{x \in \{ 0,1 \}^n} (-1)^{c(x) + h_0(x)} g(x) \geq 2\epsilon

meaning that setting {D(x) := c(x) \oplus h_0(x)} satisfies (3), and gives us a distinguisher achieving advantage {\epsilon} and having circuit complexity {O(\epsilon^{2} 2^n)}.
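To see the whole construction at work at a toy scale, here is a Python sketch with made-up parameters: {n=8}, {\epsilon = 1/160} (so that each part of the partition has size {(40\epsilon)^{-2} = 16}), an arbitrary {G}, and the same cubic family over {GF(2^8)} as in the earlier snippet.

```python
import random

# Multiplication in GF(2^8) modulo the AES polynomial (as in the earlier snippet).
def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def sample_h():
    """A random member of the four-wise independent family H (the low-order
    bit of a random cubic over GF(2^8)), tabulated on all of {0,1}^8."""
    c3, c2, c1, c0 = (random.randrange(256) for _ in range(4))
    table = []
    for x in range(256):
        y = gf_mul(gf_mul(gf_mul(c3, x) ^ c2, x) ^ c1, x) ^ c0  # Horner's rule
        table.append(y & 1)
    return table

n = 8          # toy block length
eps = 1 / 160  # so that each part has size (40*eps)**-2 = 16
part = 16      # parts are blocks of 16 consecutive strings (the first 4 bits)

G = [random.randrange(2 ** n) for _ in range(2 ** (n - 1))]  # arbitrary toy generator

# g(x) = P_z[G(z) = x] - 2^{-n}
g = [-(2.0 ** -n)] * (2 ** n)
for y in G:
    g[y] += 1.0 / len(G)

def part_sums(h):
    """The per-part correlations sum_{x in S} (-1)^{h(x)} g(x)."""
    return [sum((-1) ** h[x] * g[x] for x in range(s, s + part))
            for s in range(0, 2 ** n, part)]

# Look for an h_0 with large total per-part correlation; the total is at
# least 2*eps on average over H, so a small random search does well.
h0 = max((sample_h() for _ in range(500)),
         key=lambda h: sum(abs(s) for s in part_sums(h)))

# c is constant on each part: it is 1 exactly on the parts where the
# correlation with (-1)^{h_0} is negative.
c = [0] * (2 ** n)
for i, s in enumerate(part_sums(h0)):
    if s < 0:
        for x in range(part * i, part * (i + 1)):
            c[x] = 1

# D = c XOR h_0; its advantage P_x[D(x)=1] - P_z[D(G(z))=1] equals
# (1/2) * sum_x (-1)^{D(x)} g(x).
adv = sum((-1) ** (c[x] ^ h0[x]) * g[x] for x in range(2 ** n)) / 2
print(f"advantage = {adv:.4f}, target eps = {eps:.4f}")
```

Enumerating all of {H} would guarantee finding a good {h_0}; the random search above is a shortcut that relies on the {2\epsilon} lower bound holding on average over {H}.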
