# CS276 Lecture 3: Pseudorandom Generators

Scribed by Bharath Ramsundar

Summary

Last time we introduced the setting of one-time symmetric key encryption, defined the notion of semantic security, and proved its equivalence to message indistinguishability.

Today we complete the proof of equivalence (found in the notes for last class), discuss the notion of pseudorandom generator, and see that it is precisely the primitive that is needed in order to have message-indistinguishable (and hence semantically secure) one-time encryption. Finally, we shall introduce the basic definition of security for protocols which send multiple messages with the same key.

1. Pseudorandom Generators And One-Time Encryption

Intuitively, a Pseudorandom Generator is a function that takes a short random string and stretches it to a longer string which is almost random, in the sense that reasonably complex algorithms cannot differentiate the new string from truly random strings with more than negligible probability.

Definition 1 [Pseudorandom Generator] A function ${G: \{ 0,1 \}^k \rightarrow \{ 0,1 \}^m}$ is a ${(t,\epsilon)}$-secure pseudorandom generator if for every boolean function ${T}$ of complexity at most ${t}$ we have

$\displaystyle \left | {\mathbb P}_{x\sim U_k } [ T(G(x)) = 1] - {\mathbb P} _{x\sim U_m} [ T(x) = 1] \right| \leq \epsilon \ \ \ \ \ (1)$

(We use the notation ${U_n}$ for the uniform distribution over ${\{ 0,1 \}^n}$.)

The definition is interesting when ${m> k}$ (otherwise the generator can simply output the first ${m}$ bits of the input, and satisfy the definition with ${\epsilon=0}$ and arbitrarily large ${t}$). Typical parameters we may be interested in are ${k=128}$, ${m=2^{20}}$, ${t=2^{60}}$ and ${\epsilon = 2^{-40}}$; that is, we want ${k}$ to be very small, ${m}$ to be large, ${t}$ to be huge, and ${\epsilon}$ to be tiny. There are some unavoidable trade-offs between these parameters.
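
To see the definition in action, here is a toy stretching function together with a distinguisher that breaks it; the construction and all names are illustrative, not a real candidate generator. The function appends the parity of its seed, and the distinguisher simply checks that parity, achieving advantage ${1/2}$:

```python
from itertools import product

K_BITS = 8  # toy size; far below the k = 128 regime discussed above

def G(x):
    """Toy 'generator' {0,1}^k -> {0,1}^{k+1}: append the parity of the seed."""
    return x + (sum(x) % 2,)

def T(y):
    """Distinguisher: accept iff the last bit equals the parity of the rest."""
    return 1 if y[-1] == sum(y[:-1]) % 2 else 0

# Exact probabilities by enumeration, as in inequality (1):
p_prg = sum(T(G(x)) for x in product((0, 1), repeat=K_BITS)) / 2**K_BITS
p_uni = sum(T(y) for y in product((0, 1), repeat=K_BITS + 1)) / 2**(K_BITS + 1)
advantage = abs(p_prg - p_uni)
print(advantage)  # 0.5
```

Since the advantage is a constant rather than tiny, this ${G}$ fails the definition for every interesting choice of ${(t,\epsilon)}$.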

Lemma 2 If ${G: \{ 0,1 \}^k \rightarrow \{ 0,1 \}^m}$ is ${(t,2^{-k-1})}$ pseudorandom with ${t = O(m)}$, then ${k\geq m-1}$.

Proof: Pick an arbitrary ${y \in \{ 0,1 \}^k}$. Define

$\displaystyle T_y(x) = 1 \Leftrightarrow x = G(y)$

It is clear that we may implement ${T_y}$ with an algorithm of complexity ${O(m)}$: all this algorithm has to do is store the value of ${G(y)}$ (which takes space ${O(m)}$) and compare its input to the stored value (which takes time ${O(m)}$), for total complexity ${O(m)}$. Now, note that

$\displaystyle {\mathbb P}_{x\sim U_k } [ T_y(G(x)) = 1] \geq \frac{1}{2^k}$

since ${G(x) = G(y)}$ at least when ${x = y}$. Similarly, note that ${{\mathbb P} _{x\sim U_m} [ T_y(x) = 1] = \frac{1}{2^m}}$ since ${T_y(x) = 1}$ only when ${x = G(y)}$. Now, by the pseudorandomness of ${G}$, we have that ${\frac{1}{2^k} - \frac{1}{2^m} \leq \frac{1}{2^{k+1}}}$. With some rearranging, this expression implies that

$\displaystyle \frac{1}{2^{k+1}} \leq \frac{1}{2^m}$

which then implies ${m \leq k + 1 }$ and consequently ${k \geq m - 1}$. ◻
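
The distinguisher ${T_y}$ from the proof can be checked exhaustively at toy sizes. The generator below, which simply repeats its seed, is an arbitrary stand-in chosen only so that both probabilities can be enumerated exactly:

```python
from itertools import product

k, m = 3, 6  # toy parameters so everything can be enumerated

def G(x):
    """Arbitrary stand-in generator {0,1}^3 -> {0,1}^6: repeat the seed."""
    return x + x

y = (1, 0, 1)   # an arbitrary fixed seed
target = G(y)   # the stored value G(y)

def T_y(z):
    """The distinguisher from the proof: accept iff the input equals G(y)."""
    return 1 if z == target else 0

p_seed = sum(T_y(G(x)) for x in product((0, 1), repeat=k)) / 2**k
p_unif = sum(T_y(z) for z in product((0, 1), repeat=m)) / 2**m
print(p_seed, p_unif)  # 0.125 0.015625
```

The gap ${0.125 - 0.015625 \approx 0.109}$ exceeds ${2^{-k-1} = 0.0625}$, so this ${G}$ is not ${(O(m), 2^{-k-1})}$ pseudorandom, exactly as the lemma predicts for ${m > k+1}$.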

Exercise 1 Prove that if ${G: \{ 0,1 \}^k \rightarrow \{ 0,1 \}^m}$ is ${(t,\epsilon)}$ pseudorandom, and ${k < m}$, then

$\displaystyle t \cdot \frac 1 \epsilon \leq O( m \cdot 2^k)$

Suppose we have a pseudorandom generator as above. Consider the following encryption scheme:

• Given a key ${K\in \{ 0,1 \}^k}$ and a message ${M \in \{ 0,1 \}^m}$,

$\displaystyle Enc(K,M) := M \oplus G(K)$

• Given a ciphertext ${C\in \{ 0,1 \}^m}$ and a key ${K\in \{ 0,1 \}^k}$,

$\displaystyle Dec(K,C) = C \oplus G(K)$

(The XOR operation is applied bit-wise.)
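
The scheme translates directly into code. Since the notes leave ${G}$ abstract, the sketch below uses a hash-based stream purely as a stand-in generator; treating it as pseudorandom is an assumption made only for illustration, and the point is the structure ${Enc(K,M) = M \oplus G(K)}$:

```python
import hashlib

def G(key: bytes, m: int) -> bytes:
    """Stand-in generator: expand `key` to m bytes by hashing key||counter.
    Whether this is a secure PRG is an unproved assumption; it is used
    here only so the scheme can run."""
    out, counter = b"", 0
    while len(out) < m:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:m]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def Enc(K: bytes, M: bytes) -> bytes:
    return xor(M, G(K, len(M)))   # Enc(K, M) = M xor G(K)

def Dec(K: bytes, C: bytes) -> bytes:
    return xor(C, G(K, len(C)))   # Dec(K, C) = C xor G(K)

K = b"0123456789abcdef"           # a 128-bit key, as in the parameters above
M = b"attack at dawn"
assert Dec(K, Enc(K, M)) == M     # correctness: Dec(K, Enc(K, M)) = M
```

Correctness is immediate because XOR-ing twice with the same stream is the identity; security is exactly the content of Lemma 3 below.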

It’s clear by construction that the encryption scheme is correct. Regarding the security, we have

Lemma 3 If ${G}$ is ${(t,\epsilon)}$-pseudorandom, then ${(Enc,Dec)}$ as defined above is ${(t-m,2\epsilon)}$-message indistinguishable for one-time encryption.

Proof: Suppose that ${(Enc,Dec)}$ is not ${(t-m, 2\epsilon)}$-message indistinguishable for one-time encryption. Then there exist messages ${M_1, M_2}$ and an algorithm ${T}$ of complexity at most ${t - m}$ such that

$\displaystyle \left | {\mathbb P}_{K \sim U_k} [T(Enc(K, M_1)) = 1] - {\mathbb P}_{K \sim U_k} [T(Enc(K, M_2)) = 1] \right | > 2\epsilon$

By using the definition of ${Enc}$ we obtain

$\displaystyle \left | {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_1) = 1] - {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_2) = 1] \right | > 2\epsilon$

Now, we can add and subtract the term ${{\mathbb P}_{R \sim U_m} [T(R) = 1]}$ and use the triangle inequality to obtain that ${\left | {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_1) = 1] - {\mathbb P}_{R \sim U_m} [T(R) = 1] \right |}$ added to ${\left | {\mathbb P}_{R \sim U_m} [T(R) = 1] - {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_2) = 1] \right |}$ is greater than ${2\epsilon}$. At least one of the two terms must then be greater than ${\epsilon}$. Suppose, without loss of generality, that it is the first:

$\displaystyle \left | {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_1) = 1] - {\mathbb P}_{R \sim U_m} [T(R) = 1] \right | > \epsilon$

Now define ${T'(X) = T(X \oplus M_1)}$. Then since ${H(X) = X \oplus M_1}$ is a bijection, ${{\mathbb P}_{R \sim U_m} [T'(R) = 1] = {\mathbb P}_{R \sim U_m} [T(R) = 1]}$. Consequently,

$\displaystyle \left | {\mathbb P}_{K \sim U_k} [T'(G(K)) = 1] - {\mathbb P}_{R \sim U_m} [T'(R) = 1] \right | > \epsilon$

Thus, since the complexity of ${T}$ is at most ${t - m}$ and ${T'}$ is ${T}$ plus an XOR operation (which takes time ${m}$), ${T'}$ has complexity at most ${t}$. Thus ${G}$ is not ${(t, \epsilon)}$-pseudorandom, since there exists an algorithm ${T'}$ of complexity at most ${t}$ that distinguishes ${G}$'s output from random strings with advantage greater than ${\epsilon}$: a contradiction. Thus ${(Enc, Dec)}$ is ${(t-m, 2\epsilon)}$-message indistinguishable. ◻

2. Security for Multiple Encryptions: Plain Version

In the real world, we often need to send more than just one message. Consequently, we have to create new definitions of security for such situations, where we use the same key to send multiple messages. There are in fact multiple possible definitions of security in this scenario. Today we shall only introduce the simplest definition.

Definition 4 [Message indistinguishability for multiple encryptions] ${(Enc,Dec)}$ is ${(t,\epsilon)}$-message indistinguishable for ${c}$ encryptions if for every ${2c}$ messages ${M_1,\ldots,M_c}$, ${M'_1,\ldots,M'_c}$ and every ${T}$ of complexity ${\leq t}$ we have

$\displaystyle | {\mathbb P} [ T(Enc(K,M_1), \ldots,Enc(K,M_c)) = 1]$

$\displaystyle -{\mathbb P} [ T(Enc(K,M'_1), \ldots,Enc(K,M'_c)) = 1] | \leq \epsilon$

Similarly, we define semantic security, and the asymptotic versions.

Exercise 2 Prove that no encryption scheme ${(Enc,Dec)}$ in which ${Enc()}$ is deterministic (such as the scheme for one-time encryption described above) can be secure even for 2 encryptions.

Encryption in some versions of Microsoft Office is deterministic and thus fails to satisfy this definition. (This is just a symptom of bigger problems; the schemes in those versions of Office are considered completely broken.)
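
The failure that Exercise 2 asks about can be seen concretely. Using the deterministic one-time scheme above (with a toy hash-based stand-in for ${G}$, assumed only for illustration), a distinguisher that merely compares the two ciphertexts tells the pair ${(M, M)}$ apart from ${(M, M')}$ with advantage 1:

```python
import hashlib

def G(key: bytes, m: int) -> bytes:
    # toy stand-in generator, for illustration only
    out, c = b"", 0
    while len(out) < m:
        out += hashlib.sha256(key + bytes([c])).digest()
        c += 1
    return out[:m]

def Enc(K: bytes, M: bytes) -> bytes:
    # the deterministic one-time scheme: Enc(K, M) = M xor G(K)
    return bytes(x ^ y for x, y in zip(M, G(K, len(M))))

def T(C1: bytes, C2: bytes) -> int:
    # distinguisher: output 1 iff the two ciphertexts coincide
    return 1 if C1 == C2 else 0

K = b"a fixed secret key"
M, Mprime = b"yes", b"no!"
assert T(Enc(K, M), Enc(K, M)) == 1        # pair (M, M): always accepted
assert T(Enc(K, M), Enc(K, Mprime)) == 0   # pair (M, M'): always rejected
```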

If we allow the encryption algorithm to keep state information, then a pseudorandom generator is sufficient to meet this definition. Indeed, usually pseudorandom generators designed for such applications, including RC4, are optimized for this kind of “stateful multiple encryption.”
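
A sketch of such stateful multiple encryption: sender and receiver each keep an offset into the output of ${G}$ and consume every segment of the stream only once, so the two sides must stay synchronized and messages must arrive in order. The hash-based ${G}$ is again only an illustrative stand-in:

```python
import hashlib

def G(key: bytes, m: int) -> bytes:
    # illustrative stand-in generator: hash key||counter blocks, truncated
    out, c = b"", 0
    while len(out) < m:
        out += hashlib.sha256(key + c.to_bytes(4, "big")).digest()
        c += 1
    return out[:m]

class StatefulCipher:
    """Encrypt a sequence of messages under one key by consuming
    successive, non-overlapping chunks of the pseudorandom stream."""
    def __init__(self, key: bytes):
        self.key = key
        self.offset = 0  # how much of the stream has been used so far

    def process(self, data: bytes) -> bytes:
        chunk = G(self.key, self.offset + len(data))[self.offset:]
        self.offset += len(data)
        return bytes(x ^ y for x, y in zip(data, chunk))

sender = StatefulCipher(b"shared key bytes")
receiver = StatefulCipher(b"shared key bytes")
c1, c2 = sender.process(b"hello"), sender.process(b"hello")
assert c1 != c2                          # same message, distinct ciphertexts
assert receiver.process(c1) == b"hello"  # in-order decryption succeeds
assert receiver.process(c2) == b"hello"
```

Because each chunk of the stream is used only once, repeated messages no longer produce repeated ciphertexts, avoiding the failure of the deterministic scheme.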

Next time, we shall consider a stronger model of multiple message security which will be secure against Chosen Plaintext Attacks.

# CS276: Lecture 4 (draft)

Summary

Last time we defined pseudorandom generators and proved that, if they exist, they provide message-indistinguishable (and hence semantically secure) one-time encryption.

How do we construct a pseudorandom generator? We can’t if ${P=NP}$, so the security of any construction will have to rely on an unproved assumption which is at least as strong as ${P\neq NP}$. We shall see, later on, how to construct a pseudorandom generator based on well-established assumptions, such as the hardness of integer factorization, and we shall see that the weakest assumption under which we can construct pseudorandom generators is the existence of one-way functions.

Today, we shall instead look at RC4, a simple candidate pseudorandom generator designed by Ron Rivest. RC4 is very efficient, and widely used in practice — for example in the WEP standard for wireless communication. It is known to be insecure in its simplest instantiation (which makes WEP insecure too), but there are variants that may be secure.

This gives a complete overview of one-time symmetric-key encryption: from a rigorous definition of security to a practical construction that may plausibly satisfy the definition.

A usable system, however, should be able to handle multiple encryptions. To define security for multiple encryptions, we have to define what an adversary is able to do with past messages.

In the most basic (and unsatisfactory) setting, the adversary simply sees the encryptions of past messages. (Some systems used in practice fail to satisfy even this very basic notion of security.) We can achieve this kind of security using a pseudorandom generator if the communicating parties keep state information between communication sessions and if messages are received in the order in which they were sent.

A more satisfactory notion of security allows the adversary to see encryptions of known plaintexts.

What’s Coming Next

Using new primitives called pseudorandom functions and pseudorandom permutations, it is possible to construct encryption schemes that satisfy this notion of security and that do not require synchronization between sender and receiver, and that do not require them to keep state information.

The best notion of security resists even an attack in which the adversary has the ability to see decryptions of chosen messages. This notion too is achievable via pseudorandom functions, but it will take us some time to develop the right tools to analyze a construction meeting this level of security.

How do we construct pseudorandom functions and permutations? It is possible to construct them from pseudorandom generators (and hence from one-way functions), and there are ad-hoc constructions which are believed to be secure.

1. RC4

RC4 is a very simple candidate pseudorandom generator. We will give a slightly generalized presentation of how it works.

Fix a modulus ${s}$, which is ${256}$ in RC4, and let ${{\mathbb Z}_s}$ be the finite group of ${s}$ elements ${\{ 0,\ldots, s-1\}}$ together with the operation of addition mod ${s}$. (The notation ${{\mathbb Z} / s{\mathbb Z}}$ is more common in math.)

The generator has two phases:

In the first phase, a short seed ${K \in \{ 0,1 \}^k}$ is converted into a permutation ${P: {\mathbb Z}_s \rightarrow {\mathbb Z}_s}$ as follows (${id}$ is the identity permutation ${id(x) = x}$, the variables ${a,b}$ are in ${{\mathbb Z}_s}$ and so addition is performed mod ${s}$):

• ${P:= id}$
• ${b:=0}$
• for ${a}$ in ${\{ 0,\ldots, s-1\}}$ :

• ${b:= b + P(a) + K(a \bmod k)}$
• swap ${(P(a),P(b))}$

(Note that if ${k=s}$ then the first phase has the following simpler description: for each ${a\in {\mathbb Z}_s}$, swap ${P(a)}$ with a random location.)

In the second phase, the permutation is used to produce the output of the generator as follows:

• ${a:=0}$; ${b:=0}$
• for ${i:=1}$ to ${m}$ :

• ${a:= a+1}$
• ${b := b + P(a)}$
• output ${P ( P(a) + P(b) )}$
• swap ${(P(a),P(b))}$

In RC4, ${s}$ is 256, as said before, which allows extremely fast implementations, and ${k}$ is around 100.
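
The two phases translate directly into code. The sketch below follows the pseudocode above literally, including the fact that it emits ${P(P(a)+P(b))}$ before performing the swap; note that standard RC4 performs the swap first, so this stream will not match published RC4 test vectors:

```python
def rc4_like(key, m, s=256):
    """Generalized RC4 as presented above. `key` is a list of integers
    mod s; returns m output values in Z_s. Follows the notes' pseudocode
    literally (output before swap), unlike standard RC4 (swap first)."""
    k = len(key)
    # Phase 1: key scheduling, turning the seed into a permutation P
    P = list(range(s))
    b = 0
    for a in range(s):
        b = (b + P[a] + key[a % k]) % s
        P[a], P[b] = P[b], P[a]
    # Phase 2: produce the output stream
    out = []
    a = b = 0
    for _ in range(m):
        a = (a + 1) % s
        b = (b + P[a]) % s
        out.append(P[(P[a] + P[b]) % s])
        P[a], P[b] = P[b], P[a]
    return out

stream = rc4_like([0x4B, 0x65, 0x79], 10)          # 10 outputs from a 3-byte key
assert stream == rc4_like([0x4B, 0x65, 0x79], 10)  # deterministic in the key
assert all(0 <= v < 256 for v in stream)
```

Keeping ${s}$ as a parameter makes it easy to experiment with scaled-down instances, e.g. `rc4_like([1, 2, 3], 20, s=8)`.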

The construction as above is known to be insecure: the second byte of the output has probability ${2^{-7}}$, instead of ${2^{-8}}$, of being zero.

There are other problems besides this bias, and it is possible to reconstruct the key and completely break the generator given a not-too-long sequence of output bits. WEP uses RC4 as described above, and is considered completely broken.

If one discards an initial prefix of the output, however, no strong attack is known. A conservative recommendation is to drop the first ${4096}$ bits.

2. Security for Multiple Encryptions: Vanilla Version

Last time we introduced the following notion of security for multiple encryptions.

Definition 1 [Message indistinguishability for multiple encryptions] ${(Enc,Dec)}$ is ${(t,\epsilon)}$-message indistinguishable for ${c}$ encryptions if for every ${2c}$ messages ${M_1,\ldots,M_c}$, ${M'_1,\ldots,M'_c}$ and every ${T}$ of complexity ${\leq t}$ we have

$\displaystyle | {\mathbb P} [ T(Enc(K,M_1), \ldots,Enc(K,M_c)) = 1] -$

$\displaystyle {\mathbb P} [ T(Enc(K,M'_1), \ldots,Enc(K,M'_c)) = 1] | \leq \epsilon$

If we allow the encryption algorithm to keep state information, then a pseudorandom generator is sufficient to meet this definition. Indeed, usually pseudorandom generators designed for such applications, including RC4, are optimized for this kind of “stateful multiple encryption.”

3. Security for Multiple Encryptions: Chosen Plaintext Attack

In realistic scenarios, an adversary has knowledge of plaintext-ciphertext pairs. A broadly (but not fully) general way to capture this knowledge is to look at a model in which the adversary is able to see encryptions of arbitrary messages of her choice. An attack in this model is called a Chosen Plaintext Attack (CPA).

If ${O}$ is a (possibly randomized) procedure, and ${A}$ is an algorithm, we denote by ${A^O(x)}$ the computation of algorithm ${A}$ given ${x}$ as an input and given the ability to execute ${O}$. We charge just one unit of time for every execution of ${O}$, and we refer to ${A}$ as having oracle access to ${O}$.

Definition 2 [Message indistinguishability against CPA] ${(Enc,Dec)}$ is ${(t,\epsilon)}$-message indistinguishable against CPA if for every ${2}$ messages ${M}$, ${M'}$ and every ${T}$ of complexity ${\leq t}$ we have

$\displaystyle | {\mathbb P} [ T^{Enc(K,\cdot)}(Enc(K,M)) = 1] -$

$\displaystyle {\mathbb P} [ T^{Enc(K,\cdot)} (Enc(K,M')) = 1] | \leq \epsilon$

This is a generalization of security for multiple encryptions:

Lemma 3 Suppose ${(Enc,Dec)}$ is ${(t,\epsilon)}$-message indistinguishable against CPA. Then for every ${c}$ it is ${(t-cm,c\epsilon)}$-message indistinguishable for ${c}$ encryptions.
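
To see why a deterministic scheme also fails Definition 2, here is the standard oracle attack spelled out in code, again with a toy hash-based scheme standing in for ${(Enc,Dec)}$: the adversary spends one oracle query on ${M}$ and one comparison, and distinguishes ${Enc(K,M)}$ from ${Enc(K,M')}$ with advantage 1.

```python
import hashlib

def G(key: bytes, m: int) -> bytes:
    # toy stand-in generator, for illustration only
    out, c = b"", 0
    while len(out) < m:
        out += hashlib.sha256(key + bytes([c])).digest()
        c += 1
    return out[:m]

def Enc(K: bytes, M: bytes) -> bytes:
    # a deterministic scheme: Enc(K, M) = M xor G(K)
    return bytes(x ^ y for x, y in zip(M, G(K, len(M))))

def adversary(challenge: bytes, oracle, M: bytes) -> int:
    """T^{Enc(K, .)}: re-encrypt M via the oracle and compare."""
    return 1 if oracle(M) == challenge else 0

K = b"the shared secret"
M, Mprime = b"left", b"rite"
oracle = lambda msg: Enc(K, msg)
assert adversary(Enc(K, M), oracle, M) == 1        # challenge was Enc(K, M)
assert adversary(Enc(K, Mprime), oracle, M) == 0   # challenge was Enc(K, M')
```

This is why CPA-secure encryption must be randomized (or stateful): the oracle must not reproduce the challenge ciphertext on demand.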

# Bounded Independence, AC0, and Random 3SAT

As reported here, here and here, Mark Braverman has just announced a proof of a 1990 conjecture by Linial and Nisan.

Mark proves that if ${C}$ is an AC0 boolean circuit (with NOT gates and with AND gates and OR gates of unbounded fan-in) of depth ${d}$ and size ${S}$, and if ${X}$ is any ${k}$-wise independent distribution with ${k = (\log S)^{O(d^2)}}$, then

$\displaystyle {\mathbb E} C(U_n) \approx {\mathbb E} C(X)$

that is, ${X}$ “fools” the circuit ${C}$ into thinking that ${X}$ is the uniform distribution ${U_n}$ over ${\{0,1\}^n}$. Plausibly, this might be true even for ${k = O((\log S)^{d-1})}$.

Nothing was known for depth 3 or more, and the depth-2 case was settled only recently by Bazzi, with a proof that, as you may remember, has been significantly simplified by Razborov about six months ago.

Mark’s proof relies on approximating ${C}$ via low-degree polynomials. The point is that if ${p}$ is an ${n}$-variate (real valued) polynomial of degree ${\leq k}$, and ${X}$ is a ${k}$-wise independent distribution ranging over ${\{0,1\}^n}$, then

$\displaystyle {\mathbb E} p(X) = {\mathbb E} p(U_n)$
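
This identity can be verified exactly at toy sizes. Below, a standard pairwise independent (${k=2}$) distribution over ${\{0,1\}^7}$ is built from inner products with a random seed, and its expectation of an arbitrarily chosen degree-2 polynomial is compared with the expectation under the uniform distribution; the polynomial and the sizes are purely illustrative.

```python
from itertools import product

t = 3
indices = list(range(1, 2 ** t))   # the nonzero vectors of {0,1}^3
n = len(indices)                   # n = 7 bits X_1, ..., X_7

def sample(seed):
    # X_i = <seed, i> mod 2; for distinct nonzero i, j the pair (X_i, X_j)
    # is uniform on {0,1}^2, so the bits are pairwise independent
    return [bin(seed & i).count("1") % 2 for i in indices]

def p(x):
    # an arbitrary degree-2 polynomial in the bits
    return 3.0 * x[0] * x[4] - 2.0 * x[2] + x[5] * x[6] + 1.0

# exact expectations: enumerate the 2^t seeds, and all 2^n uniform points
e_X = sum(p(sample(seed)) for seed in range(2 ** t)) / 2 ** t
e_U = sum(p(list(x)) for x in product((0, 1), repeat=n)) / 2 ** n
assert e_X == e_U   # a degree <= 2 polynomial cannot tell the two apart
```

Note the economy: the pairwise independent distribution has support size ${2^3 = 8}$, yet it matches the ${2^7}$-point uniform distribution on every polynomial of degree at most 2.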

Now if we could show that ${p}$ approximates ${C}$ both under ${X}$ and under ${U_n}$, in the sense that ${{\mathbb E} p(X) \approx {\mathbb E} C(X)}$, and also ${{\mathbb E} p(U_n) \approx {\mathbb E} C(U_n)}$, then we would be done.

The Razborov-Smolensky lower bound technique gives a probabilistic construction of a polynomial ${p}$ such that, for every input ${x}$, ${p(x) = C(x)}$ with high probability over the choice of ${p}$. In particular, one gets a single polynomial ${p}$ such that both

$\displaystyle {\mathbb P}_{x\sim X} [p(x) = C(x) ] = 1-o(1) \ \ \ \ \ (1)$

and

$\displaystyle {\mathbb P}_{x\sim U_n} [p(x) = C(x) ] = 1-o(1) \ \ \ \ \ (2)$

Unfortunately this is not sufficient, because the polynomial ${p}$ might be very large at a few points, and so even if ${p}$ agrees with ${C}$ with high probability there is no guarantee that the average of ${p}$ is close to the average of ${C}$.

Using a result of Linial, Mansour and Nisan (developed in the context of learning theory), one can construct a different kind of low-degree approximating polynomial ${p}$, which is such that

$\displaystyle {\mathbb E}_{x\sim U_n} | C(x) - p(x)|^2 = o(1) \ \ \ \ \ (3)$

The Linial-Mansour-Nisan approximation, however, says nothing about the relation between ${p}$ and ${C}$ under the distribution ${X}$.

Using ideas of Bazzi's, however, if we had a single polynomial ${p}$ satisfying properties (1), (2) and (3) simultaneously, then we could construct another low-degree polynomial ${p'}$ such that ${{\mathbb E} p'(X) \approx {\mathbb E} C(X)}$ and ${{\mathbb E} p'(U_n) \approx {\mathbb E} C(U_n)}$, giving us that ${X}$ fools ${C}$.

As far as I understand, Mark constructs a polynomial satisfying properties (1), (2) and (3) by starting from the Razborov-Smolenski polynomial ${p}$, and then observing that the indicator function ${E}$ of the points on which ${p \neq C}$ is itself a boolean function admitting a Linial-Mansour-Nisan approximation ${e}$. Defining ${p' := p\cdot (1-e)}$, we have that ${p'}$ has all the required properties, because multiplying by ${1-e}$ “zeroes out” the points on which ${p}$ is excessively large.

I have been interested in this problem for some time because of a connection with the complexity of 3SAT on random instances.

# CS276 Lecture 2: Semantic Security

In which we encounter for the first time message indistinguishability and semantic security

In the last lecture we saw that

• all classical encryption schemes which allow the encryption of arbitrarily long messages have fatal flaws;
• it is possible to encrypt with perfect security using the one-time pad, but the scheme can be used only once, and the key has to be as long as the message;
• if one wants perfect security, one needs a key as long as the total length of all messages that are going to be sent.
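
The one-time pad from the second bullet takes only a few lines; `secrets` supplies the uniform key. Perfect secrecy holds because, for a uniform key, every ciphertext is equally likely regardless of the message.

```python
import secrets

def otp_enc(key: bytes, msg: bytes) -> bytes:
    assert len(key) == len(msg)   # the key must be as long as the message
    return bytes(k ^ m for k, m in zip(key, msg))

otp_dec = otp_enc                 # XOR with the same pad is its own inverse

message = b"meet me at noon"
key = secrets.token_bytes(len(message))  # fresh uniform key, used only once
cipher = otp_enc(key, message)
assert otp_dec(key, cipher) == message
```

Reusing `key` for a second message is exactly the flaw discussed in the earlier lectures: the XOR of two ciphertexts reveals the XOR of the two messages.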

Our goal for the next few lectures will be to study schemes that allow the sending of essentially arbitrarily long messages, using a fixed key, with security essentially as good as the perfect security of the one-time pad.

Today we introduce a notion of security (semantic security) that is extremely strong. When it is met, there is no point for an adversary to eavesdrop on the channel, regardless of what messages are being sent, of what she already knows about the message, and of what goal she is trying to accomplish.

# CS276 Lecture 1: Introduction

This course assumes CS170, or equivalent, as a prerequisite. We will assume that the reader is familiar with the notions of algorithm and running time, as well as with basic notions of algebra (for example arithmetic in finite fields), discrete math and probability.

Cryptography is the mathematical foundation on which one builds secure systems. It studies ways of securely storing, transmitting, and processing information. Understanding what cryptographic primitives can do, and how they can be composed together, is necessary to build secure systems, but not sufficient. Several additional considerations go into the design of secure systems, and they are covered in various Berkeley graduate courses on security.

In this course we will see a number of rigorous definitions of security, some of them requiring seemingly outlandish safety, even against entirely implausible attacks, and we shall see how, if any cryptography at all is possible, then it is also possible to satisfy such extremely strong notions of security. For example, we shall look at a notion of security for encryption in which an adversary should not be able to learn any information about a message given the ciphertext, even if the adversary is allowed to obtain encryptions of any messages of his choice, and decryptions of any ciphertexts of his choice, with the only exception of the one he is trying to decode.

We shall also see extremely powerful (but also surprisingly simple and elegant) ways to define security for protocols involving several untrusted participants.

Learning to think rigorously about security, and seeing what kind of strength is possible, at least in principle, is one of the main goals of this course. We will also see a number of constructions, some interesting for the general point they make (that certain weak primitives are sufficient to make very strong constructions), some efficient enough to have made their way into commercial products.