CS245 Lecture 6 – Karp-Lipton

In this lecture we prove the Karp-Lipton theorem that if all NP problems have polynomial-size circuits then the polynomial hierarchy collapses. A nice application is a theorem of Kannan, showing that, for every {k}, there are languages in {\Sigma_2} requiring circuits of size {\Omega(n^k)}. The next result we wish to prove is that all approximate combinatorial counting problems can be solved within the polynomial hierarchy. Before introducing counting problems and the hashing techniques that will yield this result, we prove the Valiant-Vazirani theorem that solving SAT on instances with exactly one satisfying assignment is as hard as solving SAT in general.

Approximating a Boolean Function via Small Circuits

Suppose that {f: \{ 0,1 \}^n \rightarrow \{ 0,1 \}} is an arbitrary Boolean function, and that we want to construct (or rather, show the existence of) a circuit {C} of size {\leq S} that approximates {f} as well as possible. How well can we hope to do?

Certainly, we can get an {O(1)}-size circuit {C} that computes {f} on at least a {\frac 12} fraction of the inputs, by choosing whichever of the two constant circuits (the one that always outputs 0 or the one that always outputs 1) agrees with {f} more often.

Also (ignoring {n^{O(1)}} factors), we can construct a circuit that has the values of {f} hardwired at {S} places, and that always outputs zero or always outputs one (whichever is better) on the remaining inputs. This circuit will work on a

\displaystyle   \frac 12 + \frac {S}{2^n} \ \ \ \ \ (1)

fraction of inputs, and this seems like the best we can hope for.
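To make the count explicit: the circuit is correct on all {S} hardwired inputs, and the constant output is correct on at least half of the remaining {2^n - S} inputs, so the circuit agrees with {f} on at least a

\displaystyle  \frac{S}{2^n} + \frac 12 \cdot \frac{2^n - S}{2^n} = \frac 12 + \frac{S}{2^{n+1}}

fraction of inputs, which is (1) up to the constant in the second term.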

We may then try to prove the optimality of the above construction by considering a random function {f}, using the Chernoff bound to show that, for a fixed circuit {C} of size {S}, the agreement between {f} and {C} is less than {1/2 + \epsilon} except with probability less than {e^{-\epsilon^2 2^n}}, and then taking a union bound over the roughly {2^S} circuits of size {S}. (The number of such circuits is actually {2^{O(S \log S)}}, but we said we would ignore {n^{O(1)}} factors.) Unfortunately, this calculation only proves that, in general, we cannot hope to compute a random function on more than a

\displaystyle  \frac 12 + \sqrt{\frac{S}{2^n}}  \ \ \ \ \ (2)

fraction of inputs using a circuit of size {S}. This is a disappointment because we were hoping to prove that (1) was tight.
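Indeed, the union bound is only meaningful when {2^S \cdot e^{-\epsilon^2 2^n} < 1}, that is, when

\displaystyle  \epsilon > \sqrt{ \frac{S \ln 2}{2^n} } = \Omega \left( \sqrt{ \frac{S}{2^n} } \right)

so this argument cannot rule out circuits achieving agreement between the bounds (1) and (2).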

If we think about it, however, we see that the bound in (2) is actually typically achievable for a random function: partition the {2^n} possible inputs into {S} blocks, each of size {2^n/S} (based, for example, on the value of the first {\log S} input bits). Now, the values of {f} in each block are just a sequence of {2^n/S} random bits, so, with constant probability, there is either an excess of {> \sqrt{2^n/S}} ones or an excess of {> \sqrt{2^n/S}} zeroes. For each block, hard-wire into the circuit the majority value of {f} among elements of the block. Since a constant fraction of the blocks have such an excess, and each such block contributes {\Omega(\sqrt{2^n/S})} correct answers beyond the trivial half, we are getting

\displaystyle  \frac{2^n}{2} + \Omega \left( S \cdot \sqrt{ \frac{2^n}{S} } \right) = \frac{2^n}{2} + \Omega \left( \sqrt{ S \cdot 2^n } \right)

inputs right, that is, a {1/2 + \Omega(\sqrt{S/2^n})} fraction, and our circuit has size {S}.
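As a quick sanity check (this simulation is not from the lecture, and the parameters {n} and {S} below are arbitrary illustrative choices), the following short Python script builds the block-majority circuit for a random {f} and compares the measured advantage over {1/2} with {\sqrt{S/2^n}}; the two should agree up to a constant factor.

import random

n = 16          # number of input bits
S = 2 ** 6      # circuit size budget = number of blocks
N = 2 ** n      # number of inputs
block = N // S  # inputs per block (selected by the first log S input bits)

random.seed(0)
f = [random.randint(0, 1) for _ in range(N)]  # a random Boolean function

# Hardwire, for each block, the majority value of f on that block,
# and count how many inputs the resulting circuit gets right.
agree = 0
for b in range(S):
    ones = sum(f[b * block : (b + 1) * block])
    agree += max(ones, block - ones)  # the majority value matches this many inputs

print("advantage  :", agree / N - 0.5)
print("sqrt(S/2^n):", (S / N) ** 0.5)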

What about general functions? Is it possible to do better than (1) for every function?
