Chapter 8: Quantum Cryptography
Cryptography is the science of keeping secrets. For thousands of years, it relied on clever mathematical tricks that were easy to perform but hard to reverse. Then, in the 1980s, physicists realized that quantum mechanics could do something mathematics alone could not: provide information-theoretic security - secrecy guaranteed not by the difficulty of a computation, but by the fundamental laws of physics.
This chapter tells two intertwined stories. First, we survey the classical cryptographic landscape - how public-key encryption works and why it has kept the internet secure for decades. Then we introduce the quantum protocols that promise unbreakable key distribution, unforgeable money, and - on the flip side - the quantum threat that could shatter the classical systems we depend on today.
You should be comfortable with single-qubit states ($|0\rangle$, $|1\rangle$, $|+\rangle$, $|-\rangle$), the Hadamard gate (Chapter 5), entanglement and Bell pairs (Chapter 6), and the no-cloning theorem (Chapter 6).
8.1 Classical Cryptography in One Section
Before we can appreciate what quantum cryptography offers, we need to understand what classical cryptography provides - and where it falls short.
Symmetric Encryption: One Key, Two Parties
The oldest form of cryptography is symmetric encryption: Alice and Bob share a secret key, and they use the same key to both encrypt and decrypt messages. The Advanced Encryption Standard (AES) is the modern workhorse - used to protect everything from web traffic to classified government communications.
AES is fast and considered secure against both classical and quantum attacks (with sufficiently large key sizes). But it has a fundamental problem: how do Alice and Bob agree on the shared secret key in the first place? If they have never met and can only communicate over a public channel, any key they exchange could be intercepted.
Public-Key Cryptography: The Key Exchange Revolution
In 1976, Whitfield Diffie and Martin Hellman proposed a revolutionary idea: what if encryption and decryption used different keys? One key could be made public, while the other remained private. Anyone could encrypt a message using the public key, but only the holder of the private key could decrypt it.
The RSA algorithm (Rivest, Shamir, Adleman, 1977) realizes this idea using the mathematics of prime numbers. Here is the core insight:
- Key generation: Choose two large prime numbers $p$ and $q$. Compute $n = p \times q$. The number $n$ is made public; $p$ and $q$ are kept secret.
- Encryption: A message $m$ is encrypted as $c = m^e \bmod n$, where $e$ is the public exponent.
- Decryption: The ciphertext is decrypted as $m = c^d \bmod n$, where $d$ is the private exponent derived from $p$ and $q$.
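The three steps above can be traced end to end in a few lines of Python. This is a toy sketch with deliberately tiny primes (all values here are hypothetical; real keys use 2048-bit or larger moduli, and real implementations add padding and other safeguards):

```python
# Toy RSA with tiny primes -- illustration only; real keys use 2048+ bit moduli.
p, q = 61, 53                # the secret primes (toy values)
n = p * q                    # public modulus n = 3233
phi = (p - 1) * (q - 1)      # Euler's totient of n, used to derive d
e = 17                       # public exponent, chosen coprime to phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e mod phi (Python 3.8+)

m = 65                       # message, encoded as a number smaller than n
c = pow(m, e, n)             # encryption: c = m^e mod n
assert pow(c, d, n) == m     # decryption: c^d mod n recovers m
```

Note that everything an eavesdropper sees ($n$, $e$, $c$) is useless without $d$, and deriving $d$ requires knowing $\varphi(n)$, which requires factoring $n$.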
The security of RSA rests on a single assumption: factoring large numbers is computationally hard. Multiplying two 1,000-digit primes takes milliseconds, but no known classical algorithm can factor their product in any reasonable time. Current RSA deployments use 2048- or 4096-bit moduli, which no classical computer can factor in any feasible time.
RSA is computationally secure - it is safe only because we believe factoring is hard for classical computers. If someone found a fast factoring algorithm (or built a quantum computer), RSA would break. Information-theoretic security, by contrast, means that an adversary gains zero information about the secret regardless of their computational power. As we will see, quantum key distribution achieves this stronger guarantee.
Diffie-Hellman Key Exchange
The Diffie-Hellman protocol lets Alice and Bob agree on a shared secret over a public channel without ever transmitting the secret itself. It works using modular exponentiation: Alice and Bob each pick a secret number, compute a public value, exchange those public values, and then combine the other's public value with their own secret to arrive at the same shared key. The security rests on the discrete logarithm problem, which is believed to be hard classically.
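The exchange can be sketched numerically. This toy Python example uses a small prime so the arithmetic is easy to follow (real groups use primes of 2048 bits or more; all numbers here are hypothetical):

```python
# Toy Diffie-Hellman over a small prime -- illustration only.
p, g = 23, 5                 # public: prime modulus and generator

a, b = 6, 15                 # Alice's and Bob's private secrets
A = pow(g, a, p)             # Alice's public value, sent in the clear
B = pow(g, b, p)             # Bob's public value, sent in the clear

shared_alice = pow(B, a, p)  # Alice combines Bob's public value with her secret
shared_bob   = pow(A, b, p)  # Bob does the mirror-image computation
assert shared_alice == shared_bob   # both sides now hold g^(a*b) mod p
```

An eavesdropper sees $p$, $g$, $A$, and $B$, but recovering $a$ from $A = g^a \bmod p$ is the discrete logarithm problem.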
Elliptic Curve Cryptography (ECC) offers the same functionality with smaller key sizes by replacing modular arithmetic with operations on elliptic curves. Both Diffie-Hellman and ECC, like RSA, are vulnerable to quantum attacks - a point we will return to in Section 8.5.
The Asymmetric Assumption
All public-key cryptography depends on one-way functions - operations that are easy to compute in one direction but hard to reverse. Multiplying primes is easy; factoring is hard. Computing a modular exponent is easy; finding the discrete logarithm is hard. Classical cryptography bets everything on the assumption that these one-way functions will remain hard. Quantum computing challenges that bet.
8.2 BB84: Quantum Key Distribution
In 1984, Charles Bennett and Gilles Brassard proposed the first quantum key distribution (QKD) protocol, now universally known as BB84. It allows two parties to generate a shared secret key with security guaranteed by quantum mechanics - not by any computational assumption.
The Two Bases
BB84 uses two conjugate bases for encoding bits into qubits:
| Basis | Bit 0 | Bit 1 | Gate from $|0\rangle$ |
|---|---|---|---|
| Computational ($Z$-basis) | $|0\rangle$ | $|1\rangle$ | None / X |
| Hadamard ($X$-basis) | $|+\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ | $|-\rangle = \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle)$ | H / XH |
The crucial quantum property is this: if a qubit is prepared in one basis and measured in the other, the outcome is completely random. Measuring $|+\rangle$ in the $Z$-basis gives $|0\rangle$ or $|1\rangle$ with equal probability. This randomness is not due to ignorance - it is fundamental.
The Protocol Step by Step
- Preparation. Alice generates a random bit string and, for each bit, randomly chooses either the $Z$-basis or the $X$-basis. She encodes each bit as a qubit in the chosen basis and sends it to Bob over a quantum channel.
- Measurement. Bob independently and randomly chooses a measurement basis ($Z$ or $X$) for each incoming qubit and records the outcome.
- Sifting. Over a public classical channel, Alice and Bob announce which basis they used for each qubit (but not the bit values). They discard all bits where they chose different bases. On average, their bases match about 50% of the time.
- Error estimation. They sacrifice a random subset of the remaining bits, comparing them publicly to estimate the error rate. In a perfect channel with no eavesdropper, these bits should match perfectly.
- Key extraction. If the error rate is below a threshold (typically around 11%), they apply classical error correction and privacy amplification to the remaining bits, producing a shorter but perfectly secret shared key.
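The preparation, measurement, and sifting steps above can be sketched as a classical Monte Carlo simulation. Only the measurement statistics are modeled (a wrong-basis measurement becomes a coin flip), not actual qubits; function and variable names are illustrative:

```python
import random

# Minimal BB84 sketch (ideal channel, no Eve). When bases match, Bob's bit
# equals Alice's; when they differ, his outcome is uniformly random.
def bb84_sift(n, seed=0):
    rng = random.Random(seed)
    alice_bits  = [rng.randrange(2) for _ in range(n)]
    alice_bases = [rng.choice("ZX") for _ in range(n)]
    bob_bases   = [rng.choice("ZX") for _ in range(n)]
    bob_bits = [bit if ab == bb else rng.randrange(2)   # wrong basis -> coin flip
                for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Sifting: keep only the positions where the bases matched.
    return [(a, b) for a, ab, b, bb
            in zip(alice_bits, alice_bases, bob_bits, bob_bases) if ab == bb]

sifted = bb84_sift(1000)
print(len(sifted))                         # roughly 500 of 1000 survive sifting
assert all(a == b for a, b in sifted)      # no Eve, no noise: perfect agreement
```

With an ideal channel and no eavesdropper, every sifted bit matches, which is exactly why a nonzero error rate in step 4 is a red flag.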
Interactive: BB84 Protocol Simulator
Step through a complete BB84 key exchange. Alice and Bob each choose random bases and bits. Toggle Eve's interception to see how eavesdropping introduces detectable errors. The error rate panel shows the fraction of mismatched bits in the sifted key.
Why Eavesdropping Is Detectable
Suppose an eavesdropper, Eve, intercepts a qubit in transit. She does not know which basis Alice used, so she must guess. If she guesses correctly, her measurement reveals Alice's bit and the qubit passes through undisturbed. But if she guesses wrong - which happens 50% of the time - her measurement collapses the qubit into the wrong basis, introducing errors.
Specifically, when Eve measures a $Z$-basis qubit in the $X$-basis (or vice versa), she destroys the original state. The qubit she re-sends to Bob is now random with respect to Alice's encoding. When Alice and Bob compare a subset of their bits, they detect an error rate of approximately 25% (Eve intercepts all qubits, guesses the wrong basis half the time, and introduces a 50% error in those cases: $\frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$).
Eve cannot simply copy the qubit and measure her copy later. The no-cloning theorem (Chapter 6) forbids making a perfect copy of an unknown quantum state. This forces Eve to measure the original, which unavoidably disturbs it. This is the fundamental quantum advantage: eavesdropping leaves a detectable fingerprint.
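The 25% figure can be checked by extending the same kind of Monte Carlo sketch with an intercept-and-resend Eve (names are illustrative; only measurement statistics are modeled):

```python
import random

# Intercept-resend Eve on every qubit: expect ~25% errors in the sifted key.
def bb84_error_rate_with_eve(n, seed=1):
    rng = random.Random(seed)
    errors = kept = 0
    for _ in range(n):
        bit = rng.randrange(2)
        a_basis = rng.choice("ZX")           # Alice's encoding basis
        e_basis = rng.choice("ZX")           # Eve's guessed basis
        # Right guess: Eve reads the bit. Wrong guess: coin flip, and the
        # qubit she re-sends is now encoded in *her* basis.
        e_bit = bit if e_basis == a_basis else rng.randrange(2)
        b_basis = rng.choice("ZX")           # Bob's measurement basis
        b_bit = e_bit if b_basis == e_basis else rng.randrange(2)
        if b_basis == a_basis:               # sifting keeps matching bases
            kept += 1
            errors += (b_bit != bit)
    return errors / kept

print(bb84_error_rate_with_eve(100_000))     # close to 0.25
```

Compare with the no-Eve case, where the sifted error rate is exactly zero: the gap between 0% and 25% is what makes eavesdropping detectable.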
Information-Theoretic Security
Unlike RSA, whose security could be broken by a sufficiently powerful computer, BB84's security is unconditional. It does not rely on any assumption about Eve's computational power. Even with unlimited computing resources, Eve cannot learn the key without introducing detectable errors. The proof follows directly from the laws of quantum mechanics: the uncertainty principle and the no-cloning theorem.
Predict-Observe-Explain: Eavesdropper Detection
Eve intercepts a qubit that Alice prepared as $|+\rangle$ (bit 0 in the $X$-basis). Eve measures in the $Z$-basis, getting either $|0\rangle$ or $|1\rangle$, and re-sends what she measured. Bob measures in the $X$-basis (matching Alice). Will Bob always get 0?
Predict
Alice prepares $|+\rangle$ and sends it to Bob. Eve intercepts and measures in the $Z$-basis, then re-sends her result. Bob measures in the $X$-basis (same as Alice). What does Bob get?
Observe
Eve's $Z$-basis measurement collapses $|+\rangle$ to $|0\rangle$. She re-sends $|0\rangle$ to Bob. Bob measures $|0\rangle$ in the $X$-basis (apply H then measure). Run this circuit:
Bob gets 0 or 1 with equal probability - but Alice sent bit 0, so Bob should always get 0!
Explain
Eve's measurement in the wrong basis destroyed the quantum state. She collapsed $|+\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ into $|0\rangle$ (or $|1\rangle$). When Bob measures $|0\rangle$ in the $X$-basis, the outcome is completely random - 50% chance of 0 or 1. Alice sent bit 0, but Bob gets a random bit. This 50% error rate (on the qubits where Eve guessed the wrong basis) produces the detectable ~25% overall error rate in the sifted key.
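The collapse described above can be reproduced with a bare two-amplitude state vector, no quantum library assumed (the helper names here are illustrative):

```python
import random

SQ = 2 ** -0.5

def H(s):                                   # Hadamard on an (amp0, amp1) pair
    return ((s[0] + s[1]) * SQ, (s[0] - s[1]) * SQ)

def measure_Z(s, rng):                      # Born rule, then collapse
    if rng.random() < abs(s[0]) ** 2:
        return 0, (1, 0)
    return 1, (0, 1)

rng = random.Random(3)
bob_bits = []
for _ in range(20_000):
    plus = H((1, 0))                        # Alice prepares |+> (bit 0, X-basis)
    _, collapsed = measure_Z(plus, rng)     # Eve measures in Z and re-sends
    bit, _ = measure_Z(H(collapsed), rng)   # Bob: H then Z-measure = X-basis
    bob_bits.append(bit)
print(sum(bob_bits) / len(bob_bits))        # near 0.5; an undisturbed |+> gives 0.0
```

Removing Eve's `measure_Z` line sends $|+\rangle$ straight to Bob, whose H gate maps it back to $|0\rangle$, and he reads 0 every time.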
Sandbox: The Four BB84 States
The four states used in BB84 can each be prepared with at most two gates. Run the circuits below to verify the measurement statistics. In the $Z$-basis, $|0\rangle$ always yields 0 and $|1\rangle$ always yields 1. In the $X$-basis, $|+\rangle$ and $|-\rangle$ each give 0 and 1 with equal probability when measured in the computational basis.
$|0\rangle$ - Bit 0 in $Z$-basis:
$|1\rangle$ - Bit 1 in $Z$-basis:
$|+\rangle$ - Bit 0 in $X$-basis:
$|-\rangle$ - Bit 1 in $X$-basis:
The $|-\rangle$ state is prepared by first applying X (to get $|1\rangle$) and then H. Recall from Chapter 5 that $H|1\rangle = |-\rangle$. When measured in the computational basis, $|-\rangle$ gives 0 or 1 each with probability $\frac{1}{2}$ - just like $|+\rangle$. The two states differ only in their relative phase, which is invisible to $Z$-basis measurement but detectable with an additional H gate before measurement.
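For readers without access to the interactive circuits, the same statistics can be verified with a bare amplitude-pair sketch (no quantum library assumed):

```python
# The four BB84 states as amplitude pairs (amp0, amp1) -- a minimal sketch.
# H maps |0> -> |+> and |1> -> |->.
SQ = 2 ** -0.5

def H(state):
    a0, a1 = state
    return ((a0 + a1) * SQ, (a0 - a1) * SQ)

states = {"|0>": (1, 0), "|1>": (0, 1),
          "|+>": H((1, 0)), "|->": H((0, 1))}

for name, s in states.items():
    p0_z = abs(s[0]) ** 2          # Z-basis: Born rule on the raw amplitudes
    p0_x = abs(H(s)[0]) ** 2       # X-basis: apply H, then measure in Z
    print(f"{name}: P(0 | Z-basis) = {p0_z:.2f}, P(0 | X-basis) = {p0_x:.2f}")
```

The printout confirms the table: each state is deterministic in its own basis and a fair coin in the conjugate basis.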
Worked Example
Let us trace through a small instance of BB84 with 8 qubits:
| Qubit | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Alice's bit | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 1 |
| Alice's basis | Z | X | Z | X | X | Z | X | Z |
| State sent | $|0\rangle$ | $|-\rangle$ | $|1\rangle$ | $|+\rangle$ | $|-\rangle$ | $|0\rangle$ | $|+\rangle$ | $|1\rangle$ |
| Bob's basis | Z | Z | Z | X | Z | X | X | Z |
| Bob's result | 0 | ? | 1 | 0 | ? | ? | 0 | 1 |
| Bases match? | Yes | No | Yes | Yes | No | No | Yes | Yes |
After sifting, Alice and Bob keep qubits 1, 3, 4, 7, and 8, giving the shared bit string 0 1 0 0 1. The entries marked "?" are random because Bob measured in the wrong basis. They sacrifice some of these matching bits (say qubits 1 and 8) to check for errors. If the values agree, they use the remaining bits (1, 0, 0) as their secret key.
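The sifting bookkeeping in this trace can be reproduced in a few lines of Python (only the deterministic parts; the "?" outcomes are random and omitted):

```python
# The 8-qubit worked example: sifting keeps positions with matching bases.
alice_bits  = [0, 1, 1, 0, 1, 0, 0, 1]
alice_bases = list("ZXZXXZXZ")
bob_bases   = list("ZZZXZXXZ")

kept = [i for i in range(8) if alice_bases[i] == bob_bases[i]]
sifted_key = [alice_bits[i] for i in kept]
print("kept qubits:", [i + 1 for i in kept])   # [1, 3, 4, 7, 8]
print("sifted key: ", sifted_key)              # [0, 1, 0, 0, 1]
```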
8.3 E91: Entanglement-Based Key Distribution
In 1991, Artur Ekert proposed a fundamentally different approach to quantum key distribution. Instead of Alice sending prepared qubits to Bob, a source distributes entangled Bell pairs - one qubit to Alice, one to Bob. The protocol, called E91, ties security to the violation of Bell inequalities, creating an elegant bridge between foundational quantum physics and practical cryptography.
The Protocol
- Distribution. A source (which may be untrusted) generates pairs of qubits in the Bell state $|\Phi^+\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$ and sends one qubit to Alice, the other to Bob.
- Random measurements. For each pair, Alice randomly chooses a measurement direction from a set of angles (e.g., $0$, $\pi/8$, $\pi/4$), and Bob independently chooses from a different but overlapping set (e.g., $0$, $\pi/8$, $-\pi/8$). They record their choices and outcomes.
- Sifting for the key. After all measurements, Alice and Bob publicly compare which angles they used (not outcomes). When both used the same angle (e.g., both measured at $0$ or both at $\pi/8$), their outcomes are perfectly correlated due to entanglement. These matching-angle outcomes form the raw key.
- Bell test for security. For pairs where they used different angles, they publicly compare outcomes and compute the CHSH correlation value $S$. Quantum mechanics predicts $S = 2\sqrt{2} \approx 2.83$, violating the classical bound of $S \leq 2$. If Eve has measured (and thus disturbed) the entangled pairs, the correlations weaken and $S$ drops toward the classical regime.
- Key extraction. If the Bell test confirms strong quantum correlations ($S$ close to $2\sqrt{2}$), Alice and Bob proceed with error correction and privacy amplification to distill a secure key.
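The Bell-test step can be checked numerically. The sketch below assumes the polarization-style correlation $E(a, b) = \cos(2(a - b))$ for $|\Phi^+\rangle$ and picks the CHSH settings out of the angle sets quoted above:

```python
import math

# CHSH check for |Phi+>, assuming E(a, b) = cos(2(a - b)).
# Settings: Alice a1 = 0, a2 = pi/4; Bob b1 = pi/8, b2 = -pi/8.
def E(a, b):
    return math.cos(2 * (a - b))

a1, a2 = 0.0, math.pi / 4
b1, b2 = math.pi / 8, -math.pi / 8

S = abs(E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2))
print(round(S, 3))   # 2.828, i.e. 2*sqrt(2) -- above the classical bound of 2
assert S > 2
```

Any eavesdropping that disturbs the pairs weakens the correlations, pulling $S$ down from $2\sqrt{2}$ toward the classical regime $S \leq 2$.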
Why Bell Violations Guarantee Security
The deep insight of E91 is that maximal Bell inequality violation implies maximal privacy. If Alice and Bob's measurements violate the CHSH inequality to the quantum-mechanical maximum, then no third party can have any information about their outcomes. This follows from a result in quantum information theory called the monogamy of entanglement: the more strongly two particles are entangled with each other, the less entangled either can be with anything else (including Eve's system).
A remarkable feature of E91-style protocols is that security can be verified even if Alice and Bob do not trust their own measurement devices. As long as their measurement statistics violate a Bell inequality, the key is secure. This is the foundation of device-independent QKD, an active area of research that offers the strongest possible security guarantees.
BB84 vs. E91
| Feature | BB84 | E91 |
|---|---|---|
| Quantum resource | Single qubits in 4 states | Entangled Bell pairs |
| Eavesdropper detection | Error rate estimation | Bell inequality violation |
| Security proof basis | No-cloning, uncertainty principle | Monogamy of entanglement |
| Device trust | Devices must be trusted | Can be made device-independent |
| Key rate (ideal) | ~50% of qubits sent | Lower (some pairs used for Bell test) |
Both protocols achieve the same fundamental goal - information-theoretically secure key distribution - but through different quantum phenomena. BB84 exploits the disturbance caused by measurement; E91 exploits the fragility of entanglement.
8.4 Quantum Money and Unforgeable Tokens
The idea of using quantum mechanics for cryptography actually predates BB84. In the late 1960s (published in 1983), Stephen Wiesner proposed quantum money - banknotes that are physically impossible to counterfeit. His scheme was one of the earliest applications of quantum information, and its core idea still influences quantum cryptography today.
Wiesner's Scheme
Each banknote carries a classical serial number and a sequence of qubits. The bank, when minting a note with serial number $s$, generates a random sequence of bits $b_1, b_2, \ldots, b_n$ and a random sequence of bases $\theta_1, \theta_2, \ldots, \theta_n$ (each either $Z$ or $X$). The bank encodes bit $b_i$ in basis $\theta_i$, producing a qubit in one of the four BB84 states: $|0\rangle$, $|1\rangle$, $|+\rangle$, or $|-\rangle$. The bank stores the secret key $(b_i, \theta_i)$ for each serial number in a secure database.
To verify a note, the bank looks up the serial number, retrieves the corresponding bases, and measures each qubit in the correct basis. If the note is genuine, every measurement matches the recorded bit value.
Why Counterfeiting Fails
A counterfeiter who wants to forge a note must duplicate its quantum state. But the no-cloning theorem makes this impossible. Without knowing which basis each qubit was prepared in, the counterfeiter cannot reliably copy any of them. Measuring a qubit to learn its state collapses it, and guessing the wrong basis destroys the information irreversibly.
Quantitatively, a counterfeiter who measures each qubit in a randomly guessed basis and re-prepares what they observe passes the bank's verification on each qubit with probability at most $\frac{3}{4}$: guessing the right basis (probability $\frac{1}{2}$) yields a perfect copy, while guessing wrong (probability $\frac{1}{2}$) leaves a re-prepared qubit that passes the bank's measurement only half the time, giving $\frac{1}{2} \times 1 + \frac{1}{2} \times \frac{1}{2} = \frac{3}{4}$. The probability of successfully forging the entire note is at most $\left(\frac{3}{4}\right)^n$, which becomes negligibly small for even moderate $n$.
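A quick numerical check of how fast this bound decays:

```python
# Forgery success probability (3/4)^n for a measure-and-resend counterfeiter.
for n in (10, 50, 100):
    print(f"n = {n:3d}: forgery succeeds with probability {0.75 ** n:.2e}")
```

By $n = 100$ qubits the success probability is below $10^{-12}$, so even a modest banknote is effectively unforgeable.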
Wiesner's paper was rejected by journals for over a decade before being published in 1983 in SIGACT News. Bennett, who was Wiesner's college friend, later credited quantum money as the direct inspiration for BB84. The idea was simply too far ahead of its time - practical quantum memory did not (and still does not fully) exist to store quantum banknotes for extended periods.
Limitations and Modern Extensions
Wiesner's original scheme requires the bank to verify every transaction, since only the bank knows the secret bases. This makes it more of a "quantum check" than "quantum cash." Modern research has extended the idea toward public-key quantum money, where anyone can verify a note without the bank's secret, and quantum tokens for one-time authentication and unforgeable digital tickets.
8.5 The Quantum Threat to Classical Cryptography
Quantum mechanics giveth (secure key distribution) and quantum mechanics taketh away (the security of classical cryptosystems). The same quantum phenomena that enable BB84 also power algorithms that can break the mathematical problems underlying RSA, ECC, and Diffie-Hellman.
Shor's Algorithm: Breaking RSA and ECC
In 1994, Peter Shor discovered a quantum algorithm that factors integers in polynomial time. On a classical computer, the best known factoring algorithm (the general number field sieve) runs in sub-exponential time. Shor's algorithm runs in $O(n^3)$ time (where $n$ is the number of bits in $N$, equivalently $O((\log N)^3)$), making it exponentially faster.
Since RSA security depends on factoring being hard, a large enough quantum computer running Shor's algorithm would break RSA entirely. The same algorithm, adapted for computing discrete logarithms, also breaks Diffie-Hellman and all elliptic curve cryptography. We will study Shor's algorithm in detail in Chapter 11.
Shor's algorithm does not break all encryption. Symmetric ciphers like AES are not based on factoring or discrete logarithms. Grover's algorithm (Chapter 12) can speed up brute-force search against symmetric keys, but only by a quadratic factor - meaning AES-256 retains an effective security level of 128 bits against quantum attack, which is still considered secure.
The "Harvest Now, Decrypt Later" Threat
Large-scale quantum computers capable of running Shor's algorithm do not exist yet. But this does not mean organizations can wait. Adversaries can record encrypted communications today and store them until a quantum computer becomes available to decrypt them. This strategy is known as "harvest now, decrypt later" (HNDL).
Consider data that must remain secret for 20 or 30 years - diplomatic communications, medical records, trade secrets, intelligence reports. If a cryptographically relevant quantum computer arrives within that window, anything encrypted with RSA or ECC today could be retroactively exposed. This makes the quantum threat immediate, even though the quantum computers themselves are not.
Post-Quantum Cryptography: The NIST Standards
The response to the quantum threat is post-quantum cryptography (PQC) - classical algorithms designed to resist both classical and quantum attacks. After an eight-year evaluation process, the U.S. National Institute of Standards and Technology (NIST) finalized its first post-quantum cryptographic standards in 2024:
| Standard | Algorithm | Type | Based On |
|---|---|---|---|
| FIPS 203 | ML-KEM (formerly CRYSTALS-Kyber) | Key encapsulation | Module lattices |
| FIPS 204 | ML-DSA (formerly CRYSTALS-Dilithium) | Digital signature | Module lattices |
| FIPS 205 | SLH-DSA (formerly SPHINCS+) | Digital signature | Hash functions |
A fourth standard, FN-DSA (formerly FALCON), based on NTRU lattices, is expected to follow as FIPS 206. NIST is also evaluating additional signature schemes in an ongoing process.
These algorithms replace the hard problems of factoring and discrete logarithms with problems that are believed to be hard for both classical and quantum computers - primarily the Learning With Errors (LWE) problem on lattices.
Post-quantum cryptography is not quantum cryptography. PQC algorithms run on ordinary classical computers. They simply use mathematical problems that quantum computers cannot solve efficiently. QKD (BB84, E91) uses actual quantum hardware. Both approaches address the quantum threat, but from completely different angles.
When Should Organizations Migrate?
The urgency of migration depends on three time horizons, sometimes called the Mosca inequality:
- $x$ = the number of years data must remain secret
- $y$ = the number of years needed to migrate systems to post-quantum cryptography
- $z$ = the number of years until a cryptographically relevant quantum computer exists
If $x + y > z$, then the organization needs to begin migration now. Given that migration for large organizations can take 5-10 years, and many experts estimate cryptographically relevant quantum computers could arrive in the 2030s to 2040s, the window for comfortable migration is already narrowing.
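The inequality is simple enough to state in code; the numbers below are hypothetical, chosen to match the time horizons discussed above:

```python
# Mosca's inequality: if x + y > z, migration must begin now.
def migration_urgent(x_secret_years, y_migration_years, z_quantum_years):
    """x: years data must stay secret; y: years to migrate; z: years to a CRQC."""
    return x_secret_years + y_migration_years > z_quantum_years

# Hypothetical numbers: records secret for 20 years, a 7-year migration,
# a cryptographically relevant quantum computer estimated in 15 years.
print(migration_urgent(20, 7, 15))   # True -- data encrypted today is already at risk
```

Note that $x$ matters even before any quantum computer exists: data harvested today is exposed retroactively the moment $z$ years elapse.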
Interactive: Mosca Inequality Timeline
Major standards bodies and governments have responded with concrete timelines. NIST recommends transitioning to post-quantum algorithms by 2035, with the goal of deprecating vulnerable algorithms entirely. The NSA's CNSA 2.0 suite mandates post-quantum algorithms for national security systems. The message is clear: the time to begin migrating is now, not when quantum computers arrive.
The safest strategy is hybrid encryption - combining a classical algorithm (like RSA or ECC) with a post-quantum algorithm (like ML-KEM) in the same protocol. This way, the system remains secure even if one of the two algorithms is broken. Major protocols like TLS 1.3 are already being updated to support hybrid key exchange, and the IETF has standardized ML-KEM hybrid modes.
8.6 The Quantum Cryptography Landscape
Let us step back and survey the full picture. Quantum mechanics interacts with cryptography in three distinct ways:
| Category | What It Does | Quantum Hardware? |
|---|---|---|
| Quantum Key Distribution (BB84, E91) | Generates shared secret keys with information-theoretic security | Yes - quantum channel required |
| Quantum Attack (Shor, Grover) | Breaks classical public-key cryptography; weakens symmetric crypto | Yes - quantum computer required |
| Post-Quantum Cryptography (ML-KEM, ML-DSA) | Classical algorithms resistant to quantum attack | No - runs on classical hardware |
These three threads will weave through the rest of this textbook. Chapter 9 introduces oracles and quantum parallelism - the conceptual foundation for the quantum algorithms that pose the threat. Chapter 11 dives into Shor's algorithm itself. And Chapter 35 explores advanced quantum cryptographic protocols beyond basic key distribution.
Practical QKD Today
Quantum key distribution is not just a theoretical curiosity. Commercial QKD systems are deployed in fiber-optic networks in China, Europe, Japan, and South Korea. China's Beijing-Shanghai quantum backbone spans over 2,000 km of fiber, and the Micius satellite demonstrated satellite-based entanglement distribution over 1,200 km in 2017, followed by entanglement-based key distribution between ground stations. However, practical challenges remain: QKD systems require dedicated hardware, are limited in distance (without quantum repeaters), and have lower key rates than classical key exchange.
The future likely involves both approaches working in tandem. Post-quantum algorithms protect the bulk of internet traffic today, while QKD secures the most sensitive channels where information-theoretic security justifies the additional infrastructure.