Applications of Low-Complexity Approximations

In the last post, I stated the following generalization of the weak regularity lemma:

Theorem (Low Complexity Approximation, TTV Version) Let (X,\mu) be a probability space, g:X \rightarrow [-1,1] a bounded function, F a collection of bounded functions f:X \rightarrow [-1,1], and \epsilon an approximation parameter.

Then there is a function h: X \rightarrow [-1,1] such that

  • h has low complexity relative to F: there are k= O( \epsilon^{-2}) functions f_i\in F and coefficients c_i such that

    \displaystyle h(x) := \max \{ -1 , \min \{ 1, \sum_{i=1}^k c_i f_i (x)  \}\}

  • h is \epsilon-indistinguishable from g by F, that is,

    \forall f \in F \ \ \left| {\mathbb E}_{x \sim \mu} f(x) g(x) - {\mathbb E}_{x \sim \mu} f(x) h(x) \right|  \leq \epsilon

(Last time, I mentioned that our proof handled only boolean functions f; now we can handle arbitrary bounded functions, via an “energy-decrease” style proof, and this will appear in the next online revision of the paper.)
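For concreteness, here is a minimal Python sketch of the boosting-style construction behind the theorem (the function name, the fixed step size \epsilon, and the toy data below are my own choices, not from the paper): as long as some f \in F \epsilon-distinguishes g from the current h, add \pm \epsilon f to the running linear combination and clip to [-1,1]; the energy-decrease argument bounds the number of rounds, and hence k, by O(\epsilon^{-2}).

```python
import numpy as np

def low_complexity_approx(g, F, mu, eps, max_rounds=10000):
    """Boosting-style sketch of the TTV construction: while some f in F
    eps-distinguishes g from the current h, add (sign * eps) * f to the
    linear combination s, and keep h as the clipping of s to [-1, 1]."""
    s = np.zeros_like(g)                    # unclipped sum of c_i f_i
    h = np.clip(s, -1.0, 1.0)
    terms = []                              # the (c_i, i) pairs used
    for _ in range(max_rounds):
        corrs = [np.sum(mu * f * (g - h)) for f in F]
        i = int(np.argmax(np.abs(corrs)))
        if abs(corrs[i]) <= eps:            # no distinguisher left
            break
        c = eps * np.sign(corrs[i])
        s = s + c * F[i]
        h = np.clip(s, -1.0, 1.0)
        terms.append((c, i))
    return h, terms

# toy usage: X = {0, ..., 15} under the uniform distribution,
# F = prefix threshold functions (hypothetical test family)
rng = np.random.default_rng(0)
n = 16
g = rng.choice([-1.0, 1.0], size=n)
mu = np.full(n, 1.0 / n)
F = [np.where(np.arange(n) < k, 1.0, -1.0) for k in range(1, n)]
h, terms = low_complexity_approx(g, F, mu, 0.3)
assert all(abs(np.sum(mu * f * (g - h))) <= 0.3 for f in F)
```

On termination, no f \in F correlates with g - h by more than \epsilon under \mu, which is exactly the \epsilon-indistinguishability condition of the theorem.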

This seems to be a useful tool, limited only by one’s creativity in choosing the functions F and then making use of the properties of h.

As already discussed,

  • if one takes X to be the edges of a complete graph, and F the set of indicator variables of cuts, then the existence of h gives the weak regularity lemma of Frieze and Kannan; and
  • if one takes F to be the set of circuits of size at most S(n), and normalizes g and h to be probability distributions, one gets that for every probability distribution D of high entropy there is a (non-uniformly) efficiently samplable and computable distribution M that is indistinguishable from D by circuits of size \leq S(n).

In this post I’ll show how to use it to give short proofs of the Hardcore Lemma of Impagliazzo and the Dense Model Theorem of Green, Tao and Ziegler. Both proofs also have, at least in hindsight, a sense of “inevitability,” meaning that given the Low-Complexity Approximation Theorem, and given what we want to prove, both proofs get to the point in a most economical and natural way.

  1. The Impagliazzo Hardcore Lemma. We have already mentioned that if g is “hard-on-average” for F, then h cannot be an approximation in the sense of being close to g on most inputs. What, then, about the points on which g and h differ? They form an Impagliazzo Hardcore Set for g, as described next.

    Let g: X \rightarrow \{-1,1\} be a function that is weakly hard on average for a class of algorithms F. Suppose, specifically, that for every algorithm h of complexity O(\epsilon^{-2}\delta^{-2}) relative to F we have

    {\mathbb P} [ g(x) = h(x) ] \leq 1-\delta

    and, more generally, for fractional h, we have

    (1) \ \ \ {\mathbb E} | g(x) - h(x) | \geq 2\delta

    Then, using the theorem, construct an approximating function h of complexity O(\epsilon^{-2}\delta^{-2}) relative to F and such that h and g are \epsilon\delta-indistinguishable by F. Note that, even though h is “indistinguishable” from g, it is also “far” from g, as in (1).

    Define a probability distribution H that assigns to each point x a probability proportional to |g(x)-h(x)|. (If h were boolean, this would be the uniform distribution over the points on which h and g differ.) This distribution is \delta-dense in the uniform distribution, meaning that every point has probability at most (\delta |X|)^{-1}: each weight |g(x)-h(x)| is at most 2, while, by (1), the total weight \sum_y |g(y)-h(y)| is at least 2\delta |X|. Observe also that we have

    g(x)\cdot |g(x)-h(x)| = g(x)-h(x)

    for every x, because g(x) and g(x)-h(x) have the same sign and |g(x)|=1, so we have

    \begin{array}{ll} \displaystyle {\mathbb E}_H g(x)f(x) & \displaystyle = \sum_x \frac {|g(x)-h(x)|}{\sum_y |g(y)-h(y)|} \cdot g(x)f(x) \\ & \displaystyle = \frac {|X|}{\sum_y |g(y)-h(y)|} \cdot {\mathbb E}_X |g(x)-h(x)|g(x)f(x) \\ & \displaystyle \leq \frac {1}{\delta} {\mathbb E}_X  (g(x)-h(x))f(x)  \\ & \displaystyle \leq \epsilon \end{array}

    and so H is a hardcore distribution, because for boolean f the above bound is equivalent to

    {\mathbb P}_H [ g(x) = f(x) ] \leq \frac 12 + \frac \epsilon 2
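As a numerical sanity check of the two facts used above (the identity g(x)|g(x)-h(x)| = g(x)-h(x), and the \delta-density of H), here is a small sketch; the toy values for g and h are hypothetical, chosen only for illustration:

```python
import numpy as np

def hardcore_distribution(g, h):
    """Assign each point probability proportional to |g(x) - h(x)|;
    if h were boolean, this is uniform on the disagreement set."""
    w = np.abs(g - h)
    return w / w.sum()

# hypothetical toy values: boolean g, bounded approximation h
g = np.array([1.0, -1.0, 1.0, 1.0, -1.0, -1.0])
h = np.array([0.2, -0.9, 1.0, -0.5, 0.3, -1.0])
H = hardcore_distribution(g, h)

# identity used in the derivation: g(x)|g(x)-h(x)| = g(x)-h(x),
# since g(x) and g(x)-h(x) have the same sign and |g(x)| = 1
assert np.allclose(g * np.abs(g - h), g - h)

# delta-density: with delta defined by E|g-h| = 2*delta, every point
# has probability at most 2 / sum of weights = 1 / (delta * |X|)
delta = np.abs(g - h).mean() / 2
assert H.max() <= 1.0 / (delta * len(g)) + 1e-12
```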

  2. The Dense Model Theorem. Suppose that R\subseteq X is a pseudorandom set with respect to functions that have bounded complexity relative to F, and let D\subseteq R be a dense subset of R, |D| = \delta |R|.

    To find a dense model of D, we take g to be the characteristic function of D, and we let h be its low-complexity approximation, computed using the uniform distribution on R as \mu. Now suppose for simplicity that h is boolean, and let M be the set of inputs of X on which h is 1. We want to argue that M is a dense model of D. Assuming without loss of generality that F contains the all-one function, the indistinguishability of g and h gives

    \delta = {\mathbb E}_R g \approx {\mathbb E}_R h

    and from the pseudorandomness of R we have

    {\mathbb E}_R h \approx {\mathbb E}_X h = |M|/|X|

    and so |M| \approx \delta |X| and M is indeed dense.

    For the indistinguishability of M and D, take any function f, and observe that

    {\mathbb E}_D f = \delta^{-1} {\mathbb E}_R fg \approx \delta^{-1} {\mathbb E}_R fh \approx \delta^{-1} {\mathbb E}_X fh \approx {\mathbb E}_M f

    where we use both the indistinguishability of g and h under distribution R, and the fact that the distributions R and X are indistinguishable by functions of bounded complexity.

    This proof is appealingly intuitive: if we expect D to be indistinguishable from a large set, then when we try to approximate the characteristic function of D we end up with a low-complexity function that is spread over much of X, thus defining a dense model. It also shows that “relative” versions of the Regularity Lemma, such as the Regularity Lemma for subgraphs of an expander, can be derived from the standard Regularity Lemma by the above argument. A disadvantage of the argument is that it does not establish the stronger form of the Dense Model Theorem suggested by Impagliazzo, in which there is no set R, and instead D is required to satisfy the “pseudo-density” condition that, for every low-complexity bounded function f,

    {\mathbb E}_{x\in D} |f(x)| \leq \frac 1 {\delta} {\mathbb E}_{X} |f(x)| + \epsilon

    which follows immediately if D has density \delta in a pseudorandom set R, but which is a seemingly weaker property. (The relative Regularity Lemma in graphs had long been known to hold under such a pseudo-density assumption.)
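The density and indistinguishability chains above can be checked numerically on a small toy instance; everything below is hypothetical illustration (R is taken to be all of X, and h is a boolean function agreeing with g except on a few points), not the actual construction:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
R = np.ones(n, dtype=bool)                  # toy case: R is all of X
D = np.zeros(n, dtype=bool)
D[:300] = True                              # dense subset, delta = 0.3

g = D.astype(float)                         # characteristic function of D
h = g.copy()                                # stand-in for the approximation:
flip = rng.choice(n, size=10, replace=False)
h[flip] = 1.0 - h[flip]                     # agrees with g except on 10 points
M = h == 1.0                                # candidate dense model

delta = g[R].mean()
# density chain: delta = E_R g ~ E_R h ~ E_X h = |M|/|X|
assert abs(delta - h[R].mean()) < 0.05
assert abs(h.mean() - M.sum() / n) < 1e-12

# indistinguishability chain for a test function f
f = rng.uniform(-1.0, 1.0, size=n)
lhs = f[D].mean()                           # E_D f
rhs = f[M].mean()                           # E_M f
assert np.isclose(lhs, (f * g)[R].mean() / delta)   # E_D f = delta^{-1} E_R fg
assert abs(lhs - rhs) < 0.15                # the two averages are close
```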

3 thoughts on “Applications of Low-Complexity Approximations”

  1. Luca, for your reproof of “The Impagliazzo Hardcore Lemma”, it seems that the distribution that you get is actually 2\delta-dense, which is the stronger form of the Lemma due to Holenstein.
    But the issue is that you have a somewhat stronger assumption: if boolean functions fail on a \delta fraction, it does not follow that the fractional ones will fail (be far) by at least 2\delta. Or maybe I am missing something…

    Russell showed that the Dense Model Theorem reduces to the strong Hardcore Lemma. Now it seems that the bounded decomposition is the right way to look at these theorems, since it proves both of the other ones easily (and it is not clear whether it can be easily reduced to the Hardcore Lemma).

  2. Hi Mohammad, I think you are right; now I am not sure why we thought we had density \delta.

    As for your other question, if

    E_x | g(x) - h(x) | = \delta

    then there is a threshold t such that defining
    h_t(x) = 1 iff h(x) \geq t
    we have
    Pr_x [ g(x) \neq h_t(x) ] \leq \delta/2

    because, for every point x, we have

    Pr_t [ g(x) \neq h_t(x) ] = (1/2) * |g(x)-h(x)|

    (the threshold is picked uniformly between -1 and 1,
    and there is only an interval of length |g(x)-h(x)|
    which leads to g being different from h_t; note that
    here we must assume that g is boolean)

    And a threshold applied to h is just a threshold applied
    to a sum of f_i, so it still has the same low complexity.

    Indeed, in retrospect, I think that if we combine the “boosting”
    proof of the low-complexity approximation with this proof,
    we get exactly Holenstein’s proof of the hard-core lemma.
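The threshold-rounding claim in this comment (for boolean g and t uniform in [-1,1], the rounding h_t disagrees with g at a point x with probability exactly |g(x)-h(x)|/2) admits a quick numerical check; the values of g and h below are toy choices of mine:

```python
import numpy as np

def h_t(h, t):
    """Threshold rounding: h_t(x) = 1 iff h(x) >= t, else -1."""
    return np.where(h >= t, 1.0, -1.0)

# a fine grid over [-1, 1] standing in for the uniform threshold t
ts = np.linspace(-1.0, 1.0, 400001)

# g(x) = 1, h(x) = 0.4: disagreement iff t > 0.4, an interval of
# length 0.6, so the probability is 0.6/2 = 0.3 = |g - h|/2
g, h = 1.0, 0.4
mismatch = np.mean(h_t(np.array([h]), ts[:, None]) != g)
assert abs(mismatch - abs(g - h) / 2) < 1e-3

# g(x) = -1, h(x) = 0.4: disagreement iff t <= 0.4, an interval of
# length 1.4, so the probability is 1.4/2 = 0.7 = |g - h|/2
g2 = -1.0
mismatch2 = np.mean(h_t(np.array([h]), ts[:, None]) != g2)
assert abs(mismatch2 - abs(g2 - h) / 2) < 1e-3
```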

  3. But isn’t |g-h|, in the setting where g is \{-1,1\}-boolean, not a valid probability measure, since |g-h| = 2 when g = 1 and h = -1? So the measure you need is actually |g-h|/2, implying that the measure is only \delta-dense?
