Last week I gave a tutorial on average-case complexity at FOCS’08.
I have now posted online the slides, along with links to some survey papers.
In my talk I discussed the following four open problems, which are among my favorites.
- Is it possible to base the existence of average-case hard problems in NP on the existence of worst-case hard problems in NP? (Say, on the assumption that $NP \not\subseteq BPP$.)
The evidence is mixed; work by Ajtai and others has shown the existence of distributional problems in NP which are hard on average provided that certain related problems are hard in the worst case. The latter problems, however, are in $NP \cap coNP$ and are unlikely to be NP-complete. Furthermore, Feigenbaum and Fortnow show that if one proves that the worst-case solution of a problem $L$ in NP reduces to the average-case solution of a problem $L'$ in NP under a samplable distribution $D$, and the reduction has a certain special form, then $L$ is in “coNP/poly.” (There are scare quotes because $L$ is actually in $coAM/poly$, which is “essentially the same” as coNP/poly.) The Feigenbaum-Fortnow result has been generalized by Bogdanov and me to hold for any “non-adaptive” reduction, but the following question remains open.
Let $L$ and $L'$ be problems in NP. Suppose there is a polynomial-time reduction $R$ and a polynomial $p$ with the following property: for every oracle $A$ that agrees with $L'$ on a $1 - 1/p(n)$ fraction of the inputs of length $n$, $R^A$ correctly solves $L$ on every input.
Then $L$ is in coNP/poly, or, more broadly, $L$ is in a class that cannot contain NP-complete problems unless the polynomial hierarchy collapses.
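To make the quantification explicit, here is one way to write the hypothesis of the question in symbols; the names $R$, $A$, and $p$ are my notation, not anything from the talk.

```latex
% Hypothesis: a single polynomial-time reduction R works with *any*
% oracle A that is correct on most inputs of every length.
\[
\forall A:\quad
\Big( \forall n \;\;
\Pr_{x \in \{0,1\}^n}\big[\, A(x) = L'(x) \,\big] \;\ge\; 1 - \tfrac{1}{p(n)}
\Big)
\;\Longrightarrow\;
\forall z \;\; R^{A}(z) = L(z).
\]
```

The subtlety is the order of quantifiers: the same reduction $R$ must work for every such oracle $A$, including adversarially chosen ones.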
- In the hardness amplification problem we are given (or we assume the existence of) a problem that is hard on average in a weak sense (meaning that every efficient algorithm makes at least a $1/\mathrm{poly}(n)$ fraction of mistakes) and we want to define a new problem (possibly, a very artificial one) that is hard on average in a much stronger sense. (Every efficient algorithm makes at least a $1/2 - 1/\mathrm{poly}(n)$ fraction of mistakes, and so is not noticeably better than the trivial algorithm that outputs a random guess and makes a $1/2$ fraction of mistakes.) O’Donnell and, subsequently, Healy, Vadhan, and Viola prove such a result for problems in NP; their reductions are non-uniform, and hence the result applies to the notion of “efficient algorithm” given by polynomial-size circuits. A uniform reduction, which gives amplification against probabilistic algorithms, appears in a work of mine, but it only amplifies from a $1/\mathrm{poly}(n)$ fraction of mistakes to a $1/2 - 1/\mathrm{polylog}(n)$ fraction of mistakes.
The following question remains open.
For every problem $L$ in NP and all polynomials $p$ and $q$, there is a problem $L'$ in NP such that the following happens. If there is a randomized polynomial-time algorithm that solves $L'$ on a $1/2 + 1/p(n)$ fraction of inputs of length $n$, then there is a randomized polynomial-time algorithm that solves $L$ on a $1 - 1/q(n)$ fraction of the inputs of length $n$.
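Writing $A'$ and $A$ for the two randomized algorithms, and glossing over the exact relation between the input lengths of $L$ and $L'$, the desired statement can be summarized as follows (again, the notation is mine).

```latex
% Uniform hardness amplification in NP: for every L in NP and polynomials
% p, q there should be an L' in NP such that, for efficient randomized A',
\[
\Pr_{x}\big[\, A'(x) = L'(x) \,\big] \;\ge\; \frac{1}{2} + \frac{1}{p(n)}
\quad\Longrightarrow\quad
\Pr_{x}\big[\, A(x) = L(x) \,\big] \;\ge\; 1 - \frac{1}{q(n)},
\]
% where A is an efficient randomized algorithm obtained from A', and the
% probabilities are over a uniformly random input x (of the appropriate
% length) and the coin tosses of the algorithms.
```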
- Is there a more effective proof of Levin’s completeness result?
Levin proves that there is an NP problem $L$ such that for every problem $L'$ in NP and every “computable” distribution $D$, there is a reduction $f$ from $(L', D)$ to $(L, U)$, where $U$ is the uniform distribution. The reduction preserves average-case solvability in the sense that the distribution $f(D)$ is dominated by the uniform distribution. (Every element in the range of $f$ has probability, under $f(D)$, at most polynomially larger than under the uniform distribution.) There is, however, a loss in the domination factor which is exponential in the length of the non-deterministic program that puts $L'$ in NP.
Is this necessary?
I leave it as an open-ended question, but I would consider a negative result in a relativized world to be a satisfactory negative answer. In particular, I believe that it is possible to construct an oracle relative to which, for every fixed problem $L$ in NP, it is not possible to construct a polynomial-time reduction such that, given a non-deterministic circuit $N$ deciding the language $L'$, and a circuit $C$ “computing” the distribution $D$, the mapping $x \mapsto f(N, C, x)$ is a reduction from $(L', D)$ to $(L, U)$ such that $f(D)$ is dominated by the uniform distribution, with a domination factor depending polynomially on the sizes of the circuits $N$ and $C$.
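For reference, here is the domination condition written out, together with the kind of loss that Levin’s proof incurs; $f$ is the reduction, $U$ the uniform distribution, and $\ell$ the length of the non-deterministic program defining $L'$ (schematic, my notation).

```latex
% Domination: the image distribution f(D) puts at most a c(|y|) factor more
% weight on any string y than the uniform distribution does.
\[
\Pr_{x \sim D}\big[\, f(x) = y \,\big] \;\le\; c(|y|) \cdot 2^{-|y|}
\qquad \text{for every } y \text{ in the range of } f .
\]
% In Levin's proof the factor c(|y|) is polynomial in |y| but also contains
% a term exponential in ell, roughly c(|y|) = poly(|y|) * 2^{O(ell)};
% the open question asks whether this exponential dependence is necessary.
```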
- Find more reductions between distributional problems
We have very few completeness results in Levin’s theory, but also very few reductions between problems that are not known to be complete. I think it would be very interesting to discover more such reductions. The following question is due to Russell Impagliazzo.
Prove that if there is an efficient algorithm that solves SAT well on average on random instances with $n$ variables and $m_1$ clauses, then there is an efficient algorithm that solves SAT well on average on random instances with $n$ variables and $m_2$ clauses, where $m_2$ is smaller than $m_1$.
In this setting, “solving well on average” means “certifying the unsatisfiability of a $1 - o(1)$ fraction of the formulas.” Note that the converse is trivial: if we have an algorithm that certifies the unsatisfiability of formulas with $m_2$ clauses, then given $m_1$ random clauses we can drop the last $m_1 - m_2$ of them and pass the rest to the algorithm; if the algorithm certifies the unsatisfiability of this subformula, we are done.
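The trivial direction is literally a two-line reduction. Here is a minimal sketch; the refuter `refute_sparse` and the representation of formulas as lists of clauses are hypothetical placeholders, not any specific implementation.

```python
# Trivial direction: a refuter that works for random formulas with m2 clauses
# gives a refuter for random formulas with m1 > m2 clauses, by dropping clauses.
# `refute_sparse(formula)` is a hypothetical black box that returns a
# certificate of unsatisfiability, or None if it fails.

def refute_dense(formula, m2, refute_sparse):
    """Certify unsatisfiability of a random formula with more than m2 clauses,
    using a refuter designed for random formulas with m2 clauses."""
    # The first m2 clauses of a random formula are themselves distributed
    # like a random formula with m2 clauses, so the black box applies to them.
    subformula = formula[:m2]
    certificate = refute_sparse(subformula)
    # A certificate of unsatisfiability for a subset of the clauses is also a
    # certificate for the whole formula: adding clauses cannot make it satisfiable.
    return certificate
```

The open question asks for a reduction in the opposite direction, from denser to sparser instances, where no such clause-dropping trick is available.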
5 comments
November 7, 2008 at 5:49 am
Anon
“Scare quotes”? Maybe you meant “air quotes”?
November 7, 2008 at 8:17 am
luca
http://en.wikipedia.org/wiki/Scare_quote
November 12, 2008 at 11:14 am
Impagliazzo’s Multiverse « Combinatorics and more
[...] On the occasion of Luca’s post on his FOCS 2008 tutorial on average-case complexity here is a reminder of Russell [...]
March 7, 2010 at 4:51 pm
On the Necessity of Enumerating All Programs « in theory
[...] 7, 2010 in theory | Tags: average-case complexity, one-way function, universal search In a tutorial on average-case complexity that I gave at FOCS 2008, I mentioned four of my favorite open questions in the theory. I have some [...]