(This is the sixth in a series of posts on online optimization techniques and their “applications” to complexity theory, combinatorics and pseudorandomness. The plan for this series of posts is to alternate one post explaining a result from the theory of online convex optimization and one post explaining an “application.” The first two posts were about the technique of multiplicative weight updates and its application to “derandomizing” probabilistic arguments based on combining a Chernoff bound and a union bound. The third and fourth posts were about the Follow-the-Regularized-Leader framework, and how it unifies multiplicative weights and gradient descent, and a “gradient descent view” of the Frieze-Kannan Weak Regularity Lemma. The fifth post was about the constrained version of the Follow-the-Regularized-Leader framework, and today we shall see how to apply that to a proof of the Impagliazzo Hard-Core Lemma.)
Impagliazzo Hard-Core Sets via "Finitary Ergodic-Theory"
In the Impagliazzo hard-core set theorem we are given a function $g: \{0,1\}^n \rightarrow \{0,1\}$ such that every algorithm in a certain class makes errors at least a $\delta$ fraction of the times when given a random input. We think of $\delta$ as small, and so of $g$ as exhibiting a weak form of average-case complexity. We want to find a large set $H$ such that $g$ is average-case hard in a stronger sense when restricted to $H$. This stronger form of average-case complexity will be that no efficient algorithm can make noticeably fewer errors while computing $g$ on $H$ than a trivial algorithm that always outputs the same value regardless of the input. The formal statement of what we are trying to do (see also the discussion in this previous post) is:
Impagliazzo Hard-Core Set Theorem, “Constructive Version”
Let $g: \{0,1\}^n \rightarrow \{0,1\}$ be a boolean function, $s$ be a size parameter, and $\epsilon, \delta > 0$ be given. Then there is a size parameter $s' = \mathrm{poly}(1/\epsilon, 1/\delta) \cdot s + \exp(\mathrm{poly}(1/\epsilon, 1/\delta))$ such that the following happens.

Suppose that for every function $f: \{0,1\}^n \rightarrow \{0,1\}$ computable by a circuit of size $s'$ we have

$\Pr_{x \sim \{0,1\}^n} [ f(x) = g(x) ] \leq 1 - \delta$

Then there is a set $H$ such that: (i) $H$ is recognizable by circuits of size $\leq s'$; (ii) $|H| \geq \delta 2^n$, and in fact the number of $x$ in $H$ such that $g(x) = 0$ is at least $\frac{\delta}{2} 2^n$, and so is the number of $x$ in $H$ such that $g(x) = 1$; and (iii) for every $f$ computable by a circuit of size $\leq s$,

$\Pr_{x \sim H} [ f(x) = g(x) ] \leq \max\{ \Pr_{x \sim H} [ g(x) = 0 ],\ \Pr_{x \sim H} [ g(x) = 1 ] \} + \epsilon$
Our approach will be to look for a “regular partition” of $\{0,1\}^n$. We shall construct a partition $P = (B_1, \ldots, B_m)$ of $\{0,1\}^n$ such that: (i) given $x$, we can efficiently compute what is the block $B_i$ that $x$ belongs to; (ii) the number $m$ of blocks does not depend on $n$; (iii) $g$ restricted to most blocks $B_i$ behaves like a random function of the same density. (By “density” of a function we mean the fraction of inputs on which the function evaluates to one.)
In particular, we will use the following form of (iii): for almost all the blocks $B_i$, no algorithm has advantage more than $\epsilon$ over a constant predictor in computing $g$ in $B_i$.
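To make this condition concrete, here is a minimal Python sketch (the representation and the names are my own illustrative choices, not anything from the paper) of the quantity being bounded: the advantage of a predictor $f$ over the best constant predictor on a block.

```python
# Illustration (not from the paper): the advantage of a predictor f over the
# best constant predictor on a block, for a boolean function g.

def advantage_over_constant(block, g, f):
    """block: list of inputs; g, f: functions mapping an input to 0 or 1."""
    agree = sum(1 for x in block if f(x) == g(x)) / len(block)
    density = sum(g(x) for x in block) / len(block)   # fraction of the block where g = 1
    best_constant = max(density, 1.0 - density)       # always-1 vs. always-0 predictor
    return agree - best_constant
```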
Let $H_0$ be the union of all majority-0 blocks (that is, of blocks $B_i$ such that $g$ takes the value 0 on a majority of elements of $B_i$) and let $H_1$ be the union of all majority-1 blocks.
I want to claim that no algorithm can do noticeably better on $H_0$ than the constant algorithm that always outputs 0. Indeed, we know that within (almost) all of the blocks that compose $H_0$ no algorithm can do noticeably better than the always-0 algorithm, so this must be true for a stronger reason for the union. The same is true for $H_1$, with reference to the constant algorithm that always outputs 1. Also, if the partition is efficiently computable, then (in a non-uniform setting) $H_0$ and $H_1$ are efficiently recognizable. It remains to argue that either $H_0$ or $H_1$ is large and not completely unbalanced.
Recalling that we are in a non-uniform setting (where by “algorithms” we mean “circuits”) and that the partition is efficiently computable, the following is a well defined efficient algorithm for attempting to compute $g$:

Algorithm. Local Majority
On input $x$:
determine the block $B_i$ that $x$ belongs to;
output 1 if $g$ takes the value 1 on a majority of the elements of $B_i$;
otherwise output 0

(The majority values of $g$ in the various blocks are just a set of $m$ bits that can be hard-wired into the circuit.)
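As a sanity check on what this algorithm does, here is a small Python sketch of it; `block_of` stands for a hypothetical, efficiently computable map from an input to the index of its block, and the table of per-block majority values plays the role of the hard-wired bits.

```python
# Sketch (my own notation) of the Local Majority predictor.

from collections import defaultdict

def majority_table(inputs, g, block_of):
    """Precompute the majority value of g inside each block (the hard-wired bits)."""
    ones, sizes = defaultdict(int), defaultdict(int)
    for x in inputs:
        i = block_of(x)
        ones[i] += g(x)
        sizes[i] += 1
    return {i: 1 if 2 * ones[i] >= sizes[i] else 0 for i in sizes}

def local_majority(x, block_of, table):
    """On input x: find the block of x and output the majority value of g there."""
    return table[block_of(x)]
```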
We assumed that every efficient algorithm must make at least a $\delta$ fraction of errors. The set of inputs where the Local Majority algorithm makes mistakes is the union, over all blocks $B_i$, of the “minority inputs” of the block $B_i$. (If $b$ is the majority value of $g$ in a block $B_i$, then the “minority inputs” of $B_i$ are the set of inputs $x \in B_i$ such that $g(x) \neq b$.)
Let $E_0$ be the set of minority inputs (those where our algorithm makes a mistake) in $H_0$ and $E_1$ be the set of minority inputs in $H_1$. Then at least one of $E_0$ and $E_1$ must have size at least $\frac{\delta}{2} 2^n$, because the size of their union is at least $\delta 2^n$. If $E_0$ has size at least $\frac{\delta}{2} 2^n$, then $H_0$ has all the properties of the set $H$ we are looking for.
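Continuing the same toy Python sketch (again my own illustration, reusing the hypothetical `block_of` and `table` from above, not the construction in the paper), the choice between the two candidate sets looks like this:

```python
# Sketch: split the errors of Local Majority (the minority inputs) between the
# majority-0 and majority-1 sides, and keep the side with at least half of them.

def pick_hard_core(inputs, g, block_of, table):
    h0, h1 = [], []          # unions of majority-0 / majority-1 blocks
    err0, err1 = 0, 0        # minority inputs (Local Majority errors) on each side
    for x in inputs:
        b = table[block_of(x)]
        (h0 if b == 0 else h1).append(x)
        if g(x) != b:
            if b == 0:
                err0 += 1
            else:
                err1 += 1
    return h0 if err0 >= err1 else h1   # this side contains at least half of all errors
```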
It remains to construct the partition. We describe an iterative process to construct it. We begin with the trivial partition where $m = 1$ and $B_1 = \{0,1\}^n$. At a generic step of the construction, we have a partition $P = (B_1, \ldots, B_m)$, and we consider $H_0, H_1, E_0, E_1$ as above. Let $b$ be such that $|E_b| \geq \frac{\delta}{2} 2^n$. If there is no algorithm that has noticeable advantage in computing $g$ on $H_b$ over the constant algorithm that always outputs $b$, we are done. Otherwise, if there is such an algorithm $f$, we refine the partition by splitting each block according to the values that $f$ takes on the elements of the block.
After $t$ steps of this process, the partition has the following form: there are $t$ functions $f_1, \ldots, f_t: \{0,1\}^n \rightarrow \{0,1\}$ and each of the (at most) $2^t$ blocks of the partition corresponds to a bit string $b_1, \ldots, b_t$ and it contains all inputs $x$ such that $f_1(x) = b_1, \ldots, f_t(x) = b_t$. In particular, the partition is efficiently computable.
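In toy Python form (my own abstraction: the step "find an efficient $f$ with noticeable advantage" is represented by an abstract callback, since in the proof it is an existential statement about small circuits), the process looks like this:

```python
# Sketch of the iterative refinement.  `find_counterexample(blocks)` abstracts
# "find an efficient f with noticeable advantage over the constant predictor";
# it returns such an f, or None if the current partition is already good enough.

def refine(inputs, fs):
    """Partition inputs according to the bit string (f_1(x), ..., f_t(x))."""
    blocks = {}
    for x in inputs:
        blocks.setdefault(tuple(f(x) for f in fs), []).append(x)
    return blocks

def build_partition(inputs, find_counterexample, max_rounds):
    fs = []
    for _ in range(max_rounds):
        blocks = refine(inputs, fs)
        f = find_counterexample(blocks)
        if f is None:
            break
        fs.append(f)
    return refine(inputs, fs), fs
```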
We need to argue that this process terminates with $t \leq \mathrm{poly}(1/\epsilon, 1/\delta)$. To this end, we define a potential function that measures the “imbalance” of $g$ inside the blocks of the partition, and we can show that this potential function increases by at least $\mathrm{poly}(\epsilon, \delta)$ at each step of the iteration. Since the potential function can be at most 1, the bound on the number of iterations follows.
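The post does not spell out the potential function at this point; purely as an illustration of the kind of quantity that works, here is one natural "imbalance" measure in the same toy Python setting: a size-weighted average of the squared bias of $g$ on each block, which lies between 0 and 1 and, by convexity, can only increase under refinement.

```python
# Illustration only -- one candidate "imbalance" potential, not necessarily the
# one used in the paper: the size-weighted average of (2*density - 1)^2 over the
# blocks.  Since x -> (2x - 1)^2 is convex, splitting a block cannot decrease it.

def imbalance_potential(blocks, g):
    total = sum(len(b) for b in blocks.values())
    pot = 0.0
    for b in blocks.values():
        density = sum(g(x) for x in b) / len(b)   # fraction of the block where g = 1
        pot += (len(b) / total) * (2 * density - 1) ** 2
    return pot
```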
A reader familiar with the proof of the Szemeredi Regularity Lemma will recognize the main ideas of iterative partitioning, of using a “counterexample” to the regularity property required of the final partition to do a refinement step, and of using a potential function argument to bound the number of refinement steps.
In which way can we see them as “finitary ergodic theoretic” techniques? As somebody who does not know anything about ergodic theory, I may not be in an ideal position to answer this question. But this kind of difficulty has not stopped me before, so I may attempt to answer this question in a future post.
The Impagliazzo Hard-Core-Set Theorem
The Impagliazzo hard-core set theorem is one of the bits of magic of complexity theory. Say you have a function $g: \{0,1\}^n \rightarrow \{0,1\}$ such that every efficient algorithm makes errors at least a $\delta$ fraction of the times when computing $g$ on a random input. (We’ll think of $g$ as exhibiting a weak form of average-case complexity.) Clearly, different algorithms will fail on a different $\delta$ fraction of the inputs, and it seems that, intuitively, there should be functions for which no particular input is harder than any particular other input, per se. It’s just that whenever you try to come up with an algorithm, some set of mistakes, dependent on the algorithmic technique, will arise.
As a good example, think of the process of generating $g$ at random, by deciding for every input $x$ to set $g(x) = 1$ with probability $1 - \delta$ and $g(x) = 0$ with probability $\delta$. (Make the choices independently for different inputs.) With very high probability, every efficient algorithm fails with probability at least about $\delta$, but, if we look at every efficiently recognizable large set $S$, we see that $g$ takes the value 1 on approximately a $1 - \delta$ fraction of the elements of $S$, and so the trivial algorithm that always outputs 1 has a pretty good success probability.
Consider, however, the set $H$ of size $2 \delta 2^n$ that you get by taking the $\approx \delta 2^n$ inputs $x$ such that $g(x) = 0$ plus a random sample of $\delta 2^n$ inputs $x$ such that $g(x) = 1$. Then we can see that no efficient algorithm can compute $g$ on much better than $\frac{1}{2}$ of the inputs of $H$. This is the highest form of average-case complexity for a boolean function: on such a set $H$ no algorithm does much better in computing $g$ than an algorithm that makes a random guess.
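This back-of-the-envelope calculation is easy to check with a quick simulation; here is a toy Python sketch (the input length, the value of $\delta$, and the use of the trivial always-1 predictor are arbitrary illustrative choices of mine).

```python
# Toy simulation of the biased-random-function example (illustration only).
import random

n, delta = 16, 0.05
N = 2 ** n
g = [0 if random.random() < delta else 1 for _ in range(N)]  # g(x) = 0 w.p. delta

zeros = [x for x in range(N) if g[x] == 0]
ones = [x for x in range(N) if g[x] == 1]
H = zeros + random.sample(ones, len(zeros))   # balanced set of size about 2*delta*N

print(sum(g) / N)                     # about 1 - delta: "always output 1" is good globally
print(sum(g[x] for x in H) / len(H))  # about 1/2: on H it is no better than a coin flip
```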
The Impagliazzo hard-core theorem states that it is always possible to find such a set where the average-case hardness is “concentrated.” Specifically, it states that if every efficient algorithm fails to compute $g$ on at least a $\delta$ fraction of inputs, then there is a set $H$ of size $\geq \delta 2^n$ such that every efficient algorithm fails to compute $g$ on at least a $\frac{1}{2} - \epsilon$ fraction of the elements of $H$. This is true for every $\epsilon, \delta > 0$, and if “efficient” is quantified as “circuits of size $s$” in the premise, then “efficient” is quantified as “circuits of size $\mathrm{poly}(\epsilon, \delta) \cdot s$” in the conclusion.
The example of the biased random function given above implies that, if one wants to prove the theorem for arbitrary $g$, then the set $H$ cannot be efficiently computable itself. (The example does not forbid, however, that $H$ be efficiently computable given oracle access to $g$, or that a random element of $H$ be samplable given a sampler for the distribution $(x, g(x))$ for uniform $x$.)
A number of proofs of the hard core theorem are known, and connections have been found with the process of boosting in learning theory and with the construction and the decoding of certain error-correcting codes. Here is a precise statement.
Impagliazzo Hard-Core Set Theorem
Let $g: \{0,1\}^n \rightarrow \{0,1\}$ be a boolean function, $s$ be a size parameter, and $\epsilon, \delta > 0$ be given.

Suppose that for every function $f: \{0,1\}^n \rightarrow \{0,1\}$ computable by a circuit of size $s$ we have

$\Pr_{x \sim \{0,1\}^n} [ f(x) = g(x) ] \leq 1 - \delta$

Then there is a set $H$ of size $\geq \delta 2^n$ such that for every function $f$ computable by a circuit of size $\mathrm{poly}(\epsilon, \delta) \cdot s$ we have

$\Pr_{x \sim H} [ f(x) = g(x) ] \leq \frac{1}{2} + \epsilon$
Using the “finitary ergodic theoretic” approach of iterative partitioning, we (Omer Reingold, Madhur Tulsiani, Salil Vadhan and I) are able to prove the following variant.
Impagliazzo Hard-Core Set Theorem, “Constructive Version”
Let $g: \{0,1\}^n \rightarrow \{0,1\}$ be a boolean function, $s$ be a size parameter, $\epsilon, \delta > 0$ be given, and let $s' = \mathrm{poly}(1/\epsilon, 1/\delta) \cdot s + \exp(\mathrm{poly}(1/\epsilon, 1/\delta))$.

Suppose that for every function $f: \{0,1\}^n \rightarrow \{0,1\}$ computable by a circuit of size $s'$ we have

$\Pr_{x \sim \{0,1\}^n} [ f(x) = g(x) ] \leq 1 - \delta$

Then there is a set $H$ such that: (i) $H$ is recognizable by circuits of size $\leq s'$; (ii) $|H| \geq \delta 2^n$, and in fact the number of $x$ in $H$ such that $g(x) = 0$ is at least $\frac{\delta}{2} 2^n$, and so is the number of $x$ in $H$ such that $g(x) = 1$; and (iii) for every $f$ computable by a circuit of size $\leq s$,

$\Pr_{x \sim H} [ f(x) = g(x) ] \leq \max\{ \Pr_{x \sim H} [ g(x) = 0 ],\ \Pr_{x \sim H} [ g(x) = 1 ] \} + \epsilon$
The difference is that $H$ is now an efficiently recognizable set (which is good), but we are not able to derive the same strong average-case complexity of $g$ in $H$ (which, as discussed at the beginning, is impossible in general). Instead of proving that a “random guess algorithm” is near-optimal on $H$, we prove that a “fixed answer algorithm” is near-optimal on $H$. That is, instead of saying that no algorithm can do better than a random guess, we say that no algorithm can do better than either always outputting 0 or always outputting 1. Note that this conclusion is meaningless if $g$ is, say, always equal to 1 on $H$, but in our construction we have that $g$ is not exceedingly biased on $H$, and if $\epsilon \ll \delta$, say, then the conclusion is quite non-trivial.
One can also find a set $H'$ with the same type of average-case complexity as in the original Impagliazzo result by putting into $H'$ a $\frac{\delta}{2} 2^n$ size sample of elements $x$ of $H$ such that $g(x) = 0$ and an equal size sample of elements of $H$ such that $g(x)$ equals 1. (Alternatively, put in $H'$ all the elements of $H$ on which $g$ achieves the minority value of $g$ in $H$, then add a random sample of as many elements achieving the majority value.) Then we recover the original statement, except that the dependence of the circuit size parameter on $1/\epsilon$ and $1/\delta$ is exponential instead of polynomial. [Update: constructing $H'$ is somewhat more complicated than we originally thought, the details are in the paper.]
Coming up next, the proof of the “constructive hard core set theorem” and my attempt at explaining what the techniques have to do with “finitary ergodic theory.”