1982 was the *annus mirabilis* of the foundations of cryptography. In their paper “Probabilistic Encryption,” Goldwasser and Micali introduced two rigorous definitions of security for encryption, which they proved to be equivalent. One definition requires the distributions of encryptions of any two messages to be computationally indistinguishable (a concept they introduce in the paper); the other, which they call semantic security, is the property that whatever can be efficiently computed about a message given the ciphertext can also be efficiently computed without the ciphertext. Later the same year, Blum and Micali gave a rigorous definition of security for pseudorandom generators, and Yao wrapped all these results in a more general framework relying on generic, rather than number-theoretic, assumptions.

The concept of semantic security inspired most subsequent definitions, and proofs, of security based on the concept of simulation. Instead of trying to specify a list of things that an adversary should not be able to do, one defines an idealized model in which the adversary has no access to private and encrypted data, and one *defines* a given system to be secure if whatever an attacker can efficiently compute given the ability to eavesdrop (and possibly mount an active attack) can also be efficiently computed in the ideal model. One then *proves* a system to be secure by constructing, for every real-world adversary, a *simulator* in the ideal model.

Together with Rackoff, Goldwasser and Micali took this idea one step further, from encryption to interactive communication, and came up with the idea of *Zero-Knowledge Proofs*. A zero-knowledge proof is a probabilistic proof system in which a prover can convince a verifier, with high confidence, of the truth of a statement, with the additional property that there is a *simulator* able to sample from the distribution of the verifier’s views of the interaction. Thus the verifier is convinced of the truth of the statement being proved, but gains no additional information. In their paper, Goldwasser, Micali and Rackoff introduce the concept and present a zero-knowledge proof for a conjecturally intractable number-theoretic problem. The paper was famously rejected several times, eventually appearing in 1985.

The following year, Goldreich, Micali and Avi Wigderson published a paper giving zero-knowledge proof systems for all problems in NP. Their work made zero-knowledge proofs a key tool in the design of secure cryptosystems: it was now possible for a party to publish a commitment to a secret *x* and then, at any time, be able to prove that *x* has a certain property without releasing any additional information about *x*. This ability was a key ingredient in the development of secure multi-party computation in 1987, by the same authors.

So how does one prove in zero knowledge that, say, a graph is 3-colorable? (Once you have zero-knowledge proofs for one NP-complete problem, you immediately have them for all problems in NP.)

Suppose the prover and the verifier know a graph and the prover knows a 3-coloring. A physical analog of the protocol (which can be implemented using the notion of *commitment schemes*) is the following: the prover randomizes the color labels, then takes one lockbox per vertex, each labeled by its vertex, and puts a piece of paper with the color of vertex *v* in the lockbox labeled *v*, for every vertex *v*. The prover locks all the lockboxes, and sends them to the verifier. The verifier picks a random edge *(u,v)* and asks for the keys of the lockboxes labeled *u* and *v*. If they contain different colors, the verifier accepts, otherwise it rejects.
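One round of this protocol can be sketched in Python, using a hash-plus-random-nonce commitment as a stand-in for the lockboxes (a real protocol would use a proper commitment scheme; all function names here are illustrative):

```python
import hashlib
import random
import secrets

def commit(value: str) -> tuple[str, str]:
    """Commit to a value: publish the digest (the locked box),
    keep the nonce (the key) to open it later."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256((nonce + value).encode()).hexdigest()
    return digest, nonce

def one_round(edges, coloring):
    """One round: prover commits, verifier challenges an edge, prover opens."""
    # Prover: randomly permute the three color labels, then commit per vertex.
    perm = dict(zip("RGB", random.sample("RGB", 3)))
    relabeled = {v: perm[c] for v, c in coloring.items()}
    commitments = {v: commit(c) for v, c in relabeled.items()}
    # Verifier: pick a random edge as the challenge.
    u, v = random.choice(edges)
    # Prover opens only the two challenged lockboxes (reveals their nonces).
    opened = {w: (relabeled[w], commitments[w][1]) for w in (u, v)}
    # Verifier: check the openings match the commitments, and colors differ.
    for w, (color, nonce) in opened.items():
        assert hashlib.sha256((nonce + color).encode()).hexdigest() == commitments[w][0]
    return opened[u][0] != opened[v][0]

# A triangle with a valid 3-coloring: an honest prover always passes.
edges = [("a", "b"), ("b", "c"), ("a", "c")]
coloring = {"a": "R", "b": "G", "c": "B"}
assert all(one_round(edges, coloring) for _ in range(100))
```

Note that the random relabeling of colors in each round is essential: it is what makes the two revealed colors a uniformly random pair of distinct colors, carrying no information about the coloring itself.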

The protocol is *complete*, in the sense that if the graph is 3-colorable and the parties follow the protocol, then the verifier accepts with probability 1.

The protocol is *sound*, in the sense that if the graph is not 3-colorable, then, no matter what the prover does, there will have to be some edge *(u,v)* such that the lockboxes of *u* and *v* contain the same color, and the verifier has probability at least *1/|E|* of picking such an edge and rejecting. Thus the verifier accepts with probability at most *1 − 1/|E|*, which can be made negligibly small by repeating the protocol several times.
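A quick calculation shows how many independent repetitions this takes. If each round catches a cheating prover with probability at least 1/m (where m is the number of edges), then after k rounds the prover survives with probability at most (1 − 1/m)^k, and we can solve for the k that pushes this below any target (the function name and the 2⁻⁴⁰ target are illustrative choices):

```python
import math

def rounds_needed(num_edges: int, target: float = 2**-40) -> int:
    """Smallest k such that (1 - 1/m)^k <= target: the number of
    independent repetitions needed to drive the soundness error
    below `target` for a graph with `num_edges` edges."""
    m = num_edges
    return math.ceil(math.log(target) / math.log(1 - 1 / m))

# For a graph with 1000 edges, a single round catches a cheating
# prover with probability only 1/1000, so tens of thousands of
# repetitions are needed for 2^-40 soundness error.
k = rounds_needed(1000)
assert (1 - 1/1000) ** k <= 2**-40 < (1 - 1/1000) ** (k - 1)
```

Since 1/(1 − 1/m) grows like e^(1/m), the number of rounds needed scales roughly as m · log(1/target), i.e. linearly in the number of edges.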

As per the zero-knowledge property, the view of the verifier is the choice of a random edge, two open lockboxes corresponding to the endpoints of the edge, containing two random different colors, and the remaining unopened lockboxes. A view with such a distribution can be easily sampled, and the same is true when the physical implementation is replaced by a commitment scheme. (Technically, this argument only establishes *honest-verifier zero knowledge*, and a bit more work is needed to capture a more general property.)
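The simulator can be sketched directly: it needs no coloring, because the only opened values in an honest verifier's view are two uniformly random distinct colors, and the unopened commitments hide whatever is inside them. A minimal sketch, reusing the hash-plus-nonce commitment from before (again an illustrative stand-in for a real hiding commitment):

```python
import hashlib
import random
import secrets

def commit(value: str) -> tuple[str, str]:
    """Hash-plus-nonce commitment (illustrative stand-in)."""
    nonce = secrets.token_hex(16)
    return hashlib.sha256((nonce + value).encode()).hexdigest(), nonce

def simulate_view(vertices, edges):
    """Sample an honest verifier's view WITHOUT knowing a 3-coloring:
    a random edge, two random distinct colors at its endpoints (opened),
    and unopened commitments everywhere else."""
    u, v = random.choice(edges)
    cu, cv = random.sample("RGB", 2)  # two distinct uniformly random colors
    opened = {u: commit(cu) + (cu,), v: commit(cv) + (cv,)}
    # The other commitments are never opened; since a hiding commitment
    # reveals nothing about its content, committing to an arbitrary
    # placeholder produces the same view distribution.
    unopened = {w: commit("R")[0] for w in vertices if w not in (u, v)}
    return (u, v), opened, unopened

edge, opened, unopened = simulate_view(
    ["a", "b", "c", "d"], [("a", "b"), ("b", "c"), ("c", "d")]
)
```

The point of the sketch is that nothing in `simulate_view` depends on a coloring, yet (assuming the commitment hides its content) its output is distributed like a real accepting interaction, which is exactly the honest-verifier zero-knowledge property.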

Blum claims in [this paper](http://www.mathunion.org/ICM/ICM1986.2/Main/icm1986.2.1444.1451.ocr.pdf) that

> … the proof of any theorem whatsoever (e.g., Fermat’s Last Theorem) whose proof has been formalized in a standard logical system (such as Whitehead and Russell’s Principia Mathematica), together with any integer upper bound on the length of the proof, can be translated into a zero-knowledge proof. The zero-knowledge proof shows that the theorem is very probably true and that the prover almost certainly knows a proof in the given logical system. It gives away no other information whatsoever…

Theorem 4 is where this is stated in that paper. Are more details about this known? The argument presented in the document is just a blueprint, and I am very curious to learn more. Do you have any idea?