Pick a random instance of 3SAT by picking at random $m$ of the possible $8 {n\choose 3}$ clauses that can be constructed over $n$ variables. It is easy to see that if one sets $m=cn$, for a sufficiently large constant $c$, then the formula will be unsatisfiable with very high probability: each fixed assignment satisfies a random clause with probability $7/8$, so the expected number of satisfying assignments is $2^n \cdot (7/8)^m$, and the formula is unsatisfiable with probability at least $1-2^n \cdot (7/8)^m$. It is also possible (but less easy) to see that if $c$ is a sufficiently small constant, then the formula is satisfiable with very high probability.
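As a quick numerical sanity check on that first-moment calculation (a back-of-the-envelope sketch; the exact clause model does not affect the asymptotics):

```python
import math

# Each fixed assignment satisfies a uniformly random 3-clause with
# probability 7/8, so the expected number of satisfying assignments of
# a formula with n variables and m = c*n clauses is 2^n * (7/8)^(c*n).
# Working in log space avoids overflow/underflow:
def log2_expected_solutions(n, c):
    return n * (1 + c * math.log2(7 / 8))

# The first-moment (union) bound gives unsatisfiability w.h.p. as soon as
# this is negative, i.e. for densities c > 1 / log2(8/7):
c_first_moment = 1 / math.log2(8 / 7)
print(round(c_first_moment, 2))             # 5.19

print(log2_expected_solutions(1000, 6) < 0)  # above: E[#solutions] -> 0
print(log2_expected_solutions(1000, 4) > 0)  # below: E[#solutions] grows
```

So densities above roughly 5.19 are unsatisfiable with high probability by this simple argument alone.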

A number of questions come to mind:

*If I plot, for large $n$, the probability that a random 3SAT formula with $n$ variables and $cn$ clauses is satisfiable, against the density $c$, what does the graph look like? We just said the probability is going to be close to 1 for small $c$ and close to $0$ for large $c$, but does it go down smoothly or sharply?*

Here the conjecture, supported by experimental evidence, is that the graph looks like a step function: that there is a constant $c_3$ such that the probability of satisfiability is $1-o_n(1)$ for densities below $c_3$ and $o_n(1)$ for densities above $c_3$. A similar behavior is conjectured for kSAT for all $k$, with the threshold value $c_k$ being dependent on $k$.
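One can already watch a (finite-size, smoothed-out) version of this step emerge at toy sizes with a brute-force experiment. A minimal sketch, with arbitrarily chosen sizes and densities, sampling clauses with replacement:

```python
import random

rng = random.Random(0)

def random_3sat(n, m):
    # m clauses, each on 3 distinct variables with independent random signs
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n + 1), 3))
            for _ in range(m)]

def satisfiable(n, clauses):
    # brute force over all 2^n assignments; only viable at toy sizes
    return any(all(any(((bits >> (abs(l) - 1)) & 1) == (l > 0) for l in c)
                   for c in clauses)
               for bits in range(2 ** n))

n, trials = 12, 40
probs = {}
for c in (2.0, 4.3, 7.0):
    sat = sum(satisfiable(n, random_3sat(n, int(c * n))) for _ in range(trials))
    probs[c] = sat / trials
    print(c, probs[c])
```

At $n=12$ the drop is still gradual; the conjecture is that as $n$ grows it sharpens into a step near the conjectured threshold (around $c_3 \approx 4.27$, according to the physics predictions mentioned below).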

Friedgut proved a result that comes quite close to establishing the conjecture.

For, say, 3SAT, the statement of the conjecture is that there is a value $c_3$ such that for every interval size $\epsilon$, every confidence $\delta$ and every sufficiently large $n$, if you pick a 3SAT formula with $(c_3+\epsilon)n$ clauses and $n$ variables, the probability of satisfiability is at most $\delta$, but if you pick a formula with $(c_3-\epsilon)n$ clauses then the probability of satisfiability is at least $1-\delta$.

Friedgut proved that there is a sequence of densities $c_{3,n}$ such that for every interval size $\epsilon$ and every confidence $\delta$, for every sufficiently large $n$, if you pick a 3SAT formula with $(c_{3,n}+\epsilon)n$ clauses and $n$ variables, the probability of satisfiability is at most $\delta$, but if you pick a formula with $(c_{3,n}-\epsilon)n$ clauses then the probability of satisfiability is at least $1-\delta$.

So, for larger and larger $n$, the graph of probability of satisfiability versus density does look more and more like a step function, but Friedgut’s proof does not guarantee that the location of the step stays the same.

*Of course* the location is not going to move, but nobody has been able to prove that yet.

Semi-rigorous methods (by which I mean methods where you make things up as you go along) from statistical physics predict the truth of the conjecture and predict a specific value for $c_3$ (and for $c_k$ for each $k$) that agrees with experiments. It remains a long-term challenge to turn these arguments into a rigorous proof.

For large $k$, work by Achlioptas, Moore, and Peres shows almost matching upper and lower bounds on $c_k$ by a *second moment* approach. They show that if you pick a random kSAT formula for large $k$, the variance of the number of satisfying assignments of the formula is quite small, and so the formula is likely to be unsatisfiable when the average number of assignments is close to zero (which actually just follows from Markov’s inequality), but also the formula is likely to be satisfiable when the average number of assignments is large. Their methods, however, do not improve previous results for 3SAT. Indeed, it is known that the variance is quite large for 3SAT, and the conjectured location of $c_3$ is not the place where the average number of assignments goes from being small to being large. (The conjectured value of $c_3$ is smaller.)

*Pick a random formula with a density that makes it very likely that the formula is satisfiable: is this a distribution of inputs that makes 3SAT hard-on-average?*

Before addressing the question we need to better specify what we mean by hard-on-average (and, complementarily, easy-on-average) in this case. For example, the algorithm that always says “satisfiable” works quite well; over the random choice of the formula, the error probability of the algorithm is extremely small. In such settings, however, what one would like from an algorithm is to produce an actual satisfying assignment. So far, all known lower bounds for $c_3$ are algorithmic, so in the density range in which we rigorously know that a random 3SAT formula is likely to be satisfiable we also know how to produce, with high probability, a satisfying assignment in polynomial time. The results for large $k$, however, are non-constructive and it remains an open question to match them with an algorithmic approach.
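To illustrate the algorithmic side of the satisfiable regime, here is a generic WalkSAT-style local search sketch. This is not one of the specific algorithms behind the rigorous lower bounds on $c_3$, and the sizes and cutoff below are arbitrary choices; but at densities comfortably below the threshold, local search of this kind finds assignments quickly in practice:

```python
import random

rng = random.Random(1)

def random_3sat(n, m):
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n + 1), 3))
            for _ in range(m)]

def walksat(n, clauses, max_flips=50000):
    # Start from a random assignment; while some clause is violated,
    # flip a random variable from a random violated clause.
    assign = [rng.random() < 0.5 for _ in range(n + 1)]  # assign[1..n]
    for _ in range(max_flips):
        unsat = [c for c in clauses
                 if not any(assign[abs(l)] == (l > 0) for l in c)]
        if not unsat:
            return assign
        v = abs(rng.choice(rng.choice(unsat)))
        assign[v] = not assign[v]
    return None  # give up (no certificate of unsatisfiability!)

n = 50
clauses = random_3sat(n, int(2.5 * n))  # density 2.5: well inside the sat regime
a = walksat(n, clauses)
print(a is not None)
```

Note the asymmetry this illustrates: when the search succeeds, the assignment itself is a certificate; when it fails, we learn nothing.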

The statistical physics methods that suggest the existence of sharp thresholds also inspired an algorithm (the *survey propagation* algorithm) that, in experiments, efficiently finds satisfying assignments in the full range of density in which 3SAT formulas are believed to be satisfiable with high probability. It is an exciting, but very difficult, question to rigorously analyze the behavior of this algorithm.

*Pick a random formula with a density that makes it very likely that the formula is unsatisfiable: is this a distribution of inputs that makes 3SAT hard-on-average?*

Again, an algorithm that simply says “unsatisfiable” works with high probability. The interesting question, however, is whether there is an algorithm that efficiently and with high probability delivers *certificates* of unsatisfiability. (Just like the survey propagation algorithm delivers certificates of satisfiability in the density range in which they exist, or so it is conjectured.) This will be the topic of the next post.

## 9 comments


May 8, 2007 at 4:24 pm

Brian: Good stuff!

May 8, 2007 at 5:13 pm

Suresh: Hi Luca,

Another handwavy thing that one gets from statistical physics is that “convergence to optimal is slow at phase transitions”. In this context, it might suggest the following question: if we choose a constant around c_3, i.e., what we believe is the transition from satisfiable to unsatisfiable, are those instances likely to be hard on average?

May 8, 2007 at 7:55 pm

Luca: That’s a good question, Suresh. My feeling is that the semi-rigorous techniques work really well (somewhat mysteriously so) when you use them to analyze one specific probabilistic process. Like when one asks, is an instance drawn from this distribution satisfiable? Is this algorithm converging fast on this input distribution? And the techniques, by some kind of miracle, give you the right answer.

When it comes to questions about complexity, however, you want to know what happens for *every* possible algorithm, and it is dangerous to draw any conclusion based just on the behavior of one algorithm. As I will describe in the next post, there is mounting evidence that 3SAT is hard-on-average in the unsatisfiability regime, but this is based on the accumulation of results about various, and general, classes of algorithms.

But whether or not it says something about the average-case complexity of problems, the fact (observed experimentally, semi-rigorously confirmed in many cases, and rigorously confirmed in a few cases) that the Glauber dynamics slows down precisely when there is a phase transition is very intriguing, and it is the kind of question that would probably not even be asked if it weren’t for the close interaction between theoretical computer scientists and physicists.

May 9, 2007 at 6:33 pm

Paul Beame: One of the difficulties in trying to make a general connection between statistical physics notions of phase transitions and hardness on the average is that versions that are too broad are certainly incorrect. The ‘amount of (dis)continuity’ in the transition (known as the order of the transition) has been suggested as the indicator of hardness on the average, since 2SAT also has a phase transition at c_2, but of a different order from that of 3SAT, and 2SAT is always easy. Even this refinement is not accurate. Some NP-complete problems, such as 1-in-k-SAT, have the same kind of transition as 2SAT and are hard, whereas others, like Hamiltonian circuit, have a phase transition like 3SAT and yet are always easy on the average under the usual distributions. It has been suggested, though, that these are cases where the ‘wrong’ parameter has been chosen, and that with the right parameter on which to measure the transition everything will work out.

Despite the problems with applying this approach in general, there are a number of problems where the approach has given good intuition and, with the SP algorithm for 3SAT, has even led to an algorithm that seems to work well on the average. The methods by which these intuitions are obtained look mathematically like some form of black magic, but somehow they do yield useful conjectures.

May 14, 2007 at 11:32 am

Gabriel Istrate: Hi Paul,

I believe that 1-in-k SAT is not hard on the average. You can turn the proof for the location of the phase transition into a solution-finding/refutation algorithm that will work with probability 1-o(1) (at least outside the phase transition region). Am I missing something?

On the other hand, there exists a fairly trivial reason why exponential resolution complexity is related to a first-order jump of a backbone-like parameter: they are both the consequence of the same phenomenon, the emergence of a large unsatisfiable subformula at the transition (which, of course, happens for 3-SAT but not for 2-SAT). Please see (sorry for the shameless self-promotion)

http://dx.doi.org/10.1007/s10472-005-7033-2

May 14, 2007 at 7:56 pm

Luca: Gabriel, doesn’t the same phenomenon occur in random 3XOR?

May 15, 2007 at 7:03 pm

Gabriel Istrate: Yes it does (if you are referring to the fact that 3-XOR-SAT has a first-order transition and efficient proofs of unsatisfiability). I was only reacting to the statement about 1-in-k SAT, which went against my intuition.

The connection between the “order of the phase transition” and the “complexity of refutation” probably doesn’t go substantially beyond resolution/DPLL (since it doesn’t even work for Gaussian elimination). By how much, and for what class of problems? Descriptive complexity (for the specification of both problems and of certification algorithms, in the style of the LICS 2002 paper by Atserias) could possibly give an answer.
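For concreteness, the Gaussian-elimination point: an XOR-SAT system can be decided, and its unsatisfiability certified, in polynomial time by elimination over GF(2). A minimal sketch (rows stored as bitmasks, with the right-hand side in the top bit; illustrative code, not tuned):

```python
def xorsat_satisfiable(n, equations):
    # Each equation is (set_of_vars, parity): XOR of the vars == parity.
    # Encode a row as an int: bit i-1 = variable i, bit n = right-hand side.
    rows = [sum(1 << (v - 1) for v in vs) | (b << n) for vs, b in equations]
    pivots = {}
    for r in rows:
        for v in range(n):
            if not (r >> v) & 1:
                continue
            if v in pivots:
                r ^= pivots[v]      # eliminate variable v from this row
            else:
                pivots[v] = r       # row's lowest surviving bit becomes a pivot
                break
        else:
            # Row reduced to no variables: 0 = 1 is a refutation certificate.
            if (r >> n) & 1:
                return False
    return True

# Consistent system: x1^x2^x3 = 1, x1^x2 = 0 (so x3 = 1).
print(xorsat_satisfiable(3, [({1, 2, 3}, 1), ({1, 2}, 0)]))   # True
# Inconsistent: x1^x2 = 0 and x1^x2 = 1.
print(xorsat_satisfiable(2, [({1, 2}, 0), ({1, 2}, 1)]))      # False
```

The refutation, when it exists, is just the subset of equations that sum to 0 = 1, which is exactly the kind of certificate resolution fails to find succinctly for random 3-XOR instances.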

Anyway, what is somewhat frustrating is how little phase transitions seem to relate to standard computational complexity theory. By that I mean, for instance, that I’d like to see a “statistical physics” argument that would somehow relate (at least in a handwaving sense) some aspect of the phase transition in 3-SAT to the average-case-hardness hypothesis for that problem (in the unsatisfiable phase).

May 24, 2007 at 6:31 am

andrea: A quick comment on the slowing down of Glauber dynamics at the SAT-UNSAT phase transition. Indeed, it is conjectured that the mixing time changes from polynomial to exponential at a value c_{k,d} of the clause density that is strictly smaller than c_k (for k larger than 2). This is a purely ‘dynamical’ phase transition, which has been proved in XORSAT on regular graphs.
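A toy version of such dynamics, for readers who want to experiment: single-site heat-bath updates for the Gibbs measure $\mu(x) \propto e^{-\beta \cdot \#\text{unsat clauses}}$. The size, density, and $\beta$ below are arbitrary choices, far from the regimes the rigorous results address; the interesting questions concern how the relaxation time of such chains scales with $n$ as the density approaches the transition.

```python
import math
import random

rng = random.Random(3)

def random_3sat(n, m):
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n + 1), 3))
            for _ in range(m)]

def num_unsat(assign, clauses):
    return sum(not any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

def glauber_step(assign, clauses_of, beta):
    # Heat-bath update: pick a uniformly random variable and resample it
    # from its conditional distribution under mu(x) ~ exp(-beta * #unsat).
    v = rng.randrange(1, len(assign))
    def local_energy(val):
        assign[v] = val
        return num_unsat(assign, clauses_of[v])
    e_true, e_false = local_energy(True), local_energy(False)
    p_true = 1.0 / (1.0 + math.exp(-beta * (e_false - e_true)))
    assign[v] = rng.random() < p_true

n, density, beta = 60, 2.0, 3.0
clauses = random_3sat(n, int(density * n))
clauses_of = {v: [c for c in clauses if v in map(abs, c)]
              for v in range(1, n + 1)}
assign = [False] * (n + 1)
start = num_unsat(assign, clauses)
for _ in range(5000):
    glauber_step(assign, clauses_of, beta)
end = num_unsat(assign, clauses)
print(start, end)
```

At this low density the chain equilibrates quickly to near-zero energy; the conjectured slowdown only kicks in at densities approaching the transition.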
