Bounded Independence and DNFs

In 1990, Linial and Nisan conjectured that every AC0 function is “fooled” by polylog-wise independence. This means that if f: \{ 0,1 \}^n \rightarrow \{ 0,1\} is computed by a circuit of size S and depth d, and X is a distribution over \{ 0,1 \}^n such that every k bits are mutually independent and uniformly distributed (such a distribution is called k-wise independent), then

(1) \ \ \ {\mathbb E} f(X) \approx {\mathbb E} f(U_n)

where U_n is the uniform distribution over \{0,1\}^n, provided that k is around (\log S)^{O(d)}. (Maybe even around (\log S)^{d-1}.)
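As a concrete illustration of the definitions (a toy example of mine, not part of the conjecture): if X_1 and X_2 are independent uniform bits and X_3 = X_1 \oplus X_2, then every two of the three bits are independent and uniform, so the distribution is 2-wise independent; yet the one-term DNF x_1 \wedge x_2 \wedge x_3, which reads all three bits, is not fooled by it. A few lines of Python check this:

    # Toy example (mine, for illustration): a 2-wise independent distribution
    # on {0,1}^3 that does not fool the AND of all three bits (a one-term DNF).
    from itertools import product

    X = [(a, b, a ^ b) for a, b in product([0, 1], repeat=2)]  # x3 = x1 XOR x2
    U = list(product([0, 1], repeat=3))                        # uniform over {0,1}^3

    f = lambda x: x[0] & x[1] & x[2]

    print(sum(map(f, X)) / len(X))  # 0.0:   E f(X)
    print(sum(map(f, U)) / len(U))  # 0.125: E f(U_3)

Of course, this only shows that k must grow with the complexity of f; the content of the conjecture is that polylogarithmic growth suffices.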

Nearly 20 years later, not much is known about this conjecture, except for a breakthrough result by Louay Bazzi from last year. Bazzi proves that (1) holds if f is computed by a size-S depth-2 circuit (that is, a CNF or a DNF) and X is an O( (\log S)^2)-wise independent distribution. This settles the depth-2 case of the Linial-Nisan conjecture.

A result of Linial, Mansour and Nisan plays an important role in the proof. The result is that if f is computed by a DNF of size S, then there is a function g: \{0,1\}^n \rightarrow {\mathbb R} that can be written as a degree-d polynomial with d= O((\log S)^2) and that is “close” to f, in the sense that

(2) \ \ \ {\mathbb E} [ |f-g|^2 ]

is small.

Now, if g is a degree-d polynomial, and X is a d-wise independent distribution, we have

{\mathbb E} g(X) = {\mathbb E} g(U_n)

and we also have {\mathbb E} g(U_n) \approx  {\mathbb E} f(U_n) from (2). If we also had {\mathbb E} g(X) \approx {\mathbb E} f(X), then we would be done.
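To spell out the first equality: writing g in its multilinear expansion g(x) = \sum_{T : |T| \leq d} c_T \prod_{i \in T} x_i, linearity of expectation gives

{\mathbb E} g(X) = \sum_{T : |T| \leq d} c_T \ {\mathbb E} \prod_{i \in T} X_i = \sum_{T : |T| \leq d} c_T \ 2^{-|T|} = {\mathbb E} g(U_n)

because each monomial depends on at most d coordinates, and any d coordinates of X are jointly distributed exactly as under U_n.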

Unfortunately, we have no way to relate {\mathbb E} g(X) and {\mathbb E} f(X): the fact that f and g are close on average under the uniform distribution does not imply that they have similar expectations under X, which could be concentrated on the few inputs on which f and g are very different.

Although the Linial-Mansour-Nisan result does not immediately give us what we want, it suggests that approximating a DNF by a low-degree polynomial is a useful step, given that low-degree polynomials are perfectly “fooled” by d-wise independent distributions.

Bazzi observes that the following stronger polynomial approximation property is sufficient to prove that a distribution fools a given function.

Suppose that f:\{0,1\}^n \rightarrow \{0,1\} is a function, and that we are able to find two degree-d polynomials g_L,g_U such that for every x \in \{0,1\}^n we have

(3) \ \ \ g_L(x) \leq f(x) \leq g_U(x)

and

(4) \ \ \ {\mathbb E} [ g_U - g_L ] \leq \epsilon

where the expectation in (4) is taken under the uniform distribution. Then it is easy to see that if X is d-wise independent, then

{\mathbb E} g_L(X) \leq {\mathbb E} f(X) \leq {\mathbb E} g_U(X)

and

{\mathbb E} g_L(U_n) \leq {\mathbb E} f(U_n) \leq {\mathbb E} g_U(U_n)

so that

(5) \ \ \ | {\mathbb E} f(U_n)  - {\mathbb E} f(X) | \leq \epsilon
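Explicitly, the deduction of (5) chains the two displays above with (4):

{\mathbb E} f(X) \leq {\mathbb E} g_U(X) = {\mathbb E} g_U(U_n) \leq {\mathbb E} g_L(U_n) + \epsilon \leq {\mathbb E} f(U_n) + \epsilon

and, symmetrically,

{\mathbb E} f(X) \geq {\mathbb E} g_L(X) = {\mathbb E} g_L(U_n) \geq {\mathbb E} g_U(U_n) - \epsilon \geq {\mathbb E} f(U_n) - \epsilon

where the two equalities hold because X is d-wise independent and g_L, g_U have degree d.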

It is also possible to use linear programming duality to argue that if f is fooled by d-wise independent distributions, then degree-d polynomials g_L,g_U as above must exist.
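To sketch how the duality goes (with the precise formulation filled in by me): consider the linear program that maximizes \sum_x p(x) f(x) over vectors p \geq 0 satisfying the moment constraints

\sum_x p(x) \prod_{i \in T} x_i = 2^{-|T|} \ \ \ \mbox{ for every } T \mbox{ with } |T| \leq d

whose feasible solutions are exactly the d-wise independent distributions (the case T = \emptyset forces \sum_x p(x) = 1). The dual variables y_T assemble into a degree-d polynomial g_U(x) = \sum_{|T| \leq d} y_T \prod_{i \in T} x_i; the dual constraints say that g_U \geq f pointwise, and the dual objective is {\mathbb E} g_U(U_n). The minimization version of the program yields g_L in the same way.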

Bazzi then shows that, given a size-S DNF f, it is possible to find degree-d polynomials g_L,g_U as above with d = O((\log S)^2).

Finding such polynomials, in turn, is rather cleverly reduced to the task of finding a single polynomial g such that {\mathbb E} |f-g|^2 is small, as in Linial-Mansour-Nisan, and such that, in addition, g(x)=0 whenever f(x)=0.
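To give a flavor of why the one-sided condition helps (a sketch of one natural way this step can go, not a verbatim account of Bazzi's argument): given such a g of degree d, consider

g_L := 2g - g^2 = 1 - (1-g)^2

which has degree at most 2d. If f(x)=0 then g(x)=0, so g_L(x) = 0 = f(x); if f(x)=1 then g_L(x) = 1 - (1-g(x))^2 \leq 1 = f(x). Hence g_L \leq f pointwise, and

{\mathbb E} [ f - g_L ] = {\mathbb E} [ f \cdot (1-g)^2 ] \leq {\mathbb E} |f-g|^2

which is small. This gives the lower sandwiching polynomial; an upper one can be sought by applying the same trick to the complement 1-f.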

The construction of this “one-sided” low-degree approximation of f is the bulk of Bazzi’s paper. A couple of days ago, Razborov posted an exceedingly neat construction, which he explains in a page and a half! (Theorem 7 in the link above.)

It is rather appropriate that this has happened during the Banff Workshop on Analytic Tools in Computational Complexity, and tomorrow Scott Aaronson will give an impromptu talk on Razborov’s proof, hot off the presses.
