Chapter 30: Quantum Computational Complexity
We have seen that quantum computers can factor integers, search unsorted databases, and simulate quantum systems. But how powerful are they, really? Can a quantum computer solve any problem faster than a classical one? Are there problems that remain hard even for quantum computers? These questions belong to quantum computational complexity theory - the study of what quantum computers can and cannot efficiently compute. In this chapter, we map the landscape of quantum complexity classes, examine the evidence for quantum advantage, and confront the limits of quantum computing head-on.
30.1 BQP: What Quantum Computers Can Efficiently Solve
In classical complexity theory, the class P contains problems solvable in polynomial time by a deterministic Turing machine, and BPP (Bounded-error Probabilistic Polynomial time) contains problems solvable in polynomial time by a probabilistic Turing machine with error probability at most 1/3. The quantum analog is BQP: Bounded-error Quantum Polynomial time.
BQP is the class of decision problems solvable by a polynomial-time quantum Turing machine (equivalently, a polynomial-size uniform quantum circuit family) with error probability at most 1/3. That is, for every YES instance the quantum computer accepts with probability at least 2/3, and for every NO instance it rejects with probability at least 2/3.
The error bound of 1/3 is conventional - by repeating the computation $O(\log(1/\delta))$ times and taking a majority vote, the error can be reduced to any $\delta > 0$ with only polynomial overhead. This is called probability amplification, and it works for both BPP and BQP.
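To see amplification concretely, here is a minimal sketch in plain Python (assuming an underlying algorithm with per-run error probability 1/3) that computes the exact majority-vote error for increasing odd repetition counts:

```python
from math import comb

def majority_error(p_err: float, reps: int) -> float:
    """Probability that a majority vote over `reps` independent runs is
    wrong, when each run errs independently with probability p_err.
    Use odd `reps` so there are no ties."""
    # The majority is wrong iff more than half of the runs err.
    return sum(comb(reps, k) * p_err**k * (1 - p_err)**(reps - k)
               for k in range(reps // 2 + 1, reps + 1))

for reps in (1, 11, 21, 51):
    print(reps, majority_error(1/3, reps))
# The error drops exponentially in the number of repetitions,
# matching the O(log(1/delta)) bound quoted above.
```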
Where BQP Sits
The known inclusions among complexity classes give us the following chain:
$$\mathsf{P} \subseteq \mathsf{BPP} \subseteq \mathsf{BQP} \subseteq \mathsf{PP} \subseteq \mathsf{PSPACE}$$

Let us unpack each inclusion:
- $\mathsf{P} \subseteq \mathsf{BPP}$: A deterministic computation is a special case of a probabilistic one (just never flip any coins).
- $\mathsf{BPP} \subseteq \mathsf{BQP}$: A quantum computer can simulate any classical probabilistic computation. Classical coin flips can be implemented by applying Hadamard gates and measuring. Classical logic gates (AND, OR, NOT) can be implemented reversibly using Toffoli gates, as we saw in Chapter 3. So any BPP algorithm runs on a quantum computer with at most polynomial overhead (a coin-flip sketch follows this list).
- $\mathsf{BQP} \subseteq \mathsf{PP}$: The class PP (Probabilistic Polynomial time, with unbounded error) can simulate quantum computations. Intuitively, PP can compute the sum of exponentially many signed amplitudes, which is exactly what quantum computation requires. This was shown by Adleman, DeMarrais, and Huang in 1997.
- $\mathsf{PP} \subseteq \mathsf{PSPACE}$: PP problems can be solved with polynomial space, since we can enumerate and count all computation paths using reusable space.
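As promised in the second bullet, here is a minimal NumPy sketch of the coin-flip simulation: prepare $|0\rangle$, apply a Hadamard, and sample from the Born-rule distribution (the sampling loop is classical post-processing standing in for repeated measurement):

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

state = H @ np.array([1.0, 0.0])               # H|0> = (|0> + |1>)/sqrt(2)
probs = np.abs(state) ** 2                     # Born rule: [0.5, 0.5]
flips = rng.choice([0, 1], size=10_000, p=probs)
print(probs, flips.mean())                     # mean ~ 0.5: a fair coin
```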
Every one of these inclusions is believed to be strict, but proving any single separation remains one of the great open problems of theoretical computer science.
What Problems Are in BQP?
The most celebrated problems known to be in BQP but believed to be outside BPP include:
- Integer factoring and discrete logarithm (Shor's algorithm, Chapter 11): These problems are in BQP with polynomial-time quantum algorithms, but no polynomial-time classical algorithm is known.
- Simulating quantum systems (Chapters 32-33): Given a Hamiltonian $H$ and time $t$, approximating $e^{-iHt}|\psi\rangle$ is in BQP for sparse or local Hamiltonians. Classical simulation requires exponential time in the worst case (a brute-force illustration follows this list).
- Pell's equation: Finding the fundamental solution to $x^2 - Dy^2 = 1$ is in BQP (Hallgren, 2002), exploiting the abelian hidden subgroup structure of real quadratic number fields.
- Approximating the Jones polynomial at certain roots of unity (Freedman, Kitaev, Wang, 2002): This problem is BQP-complete, meaning it is among the hardest problems in BQP.
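To make the simulation task concrete, here is a brute-force classical illustration (NumPy/SciPy; the two-qubit Hamiltonian is an arbitrary toy choice). It computes $e^{-iHt}|\psi\rangle$ by exact matrix exponentiation; the $2^n \times 2^n$ matrices this requires are precisely why classical simulation fails at scale, while the task itself remains in BQP:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# A toy 2-local Hamiltonian on 2 qubits: H = Z(x)Z + 0.5*(X(x)I + I(x)X)
H = kron(Z, Z) + 0.5 * (kron(X, I2) + kron(I2, X))

t = 1.0
psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                                # |00>
psi_t = expm(-1j * H * t) @ psi0             # e^{-iHt}|00>
print(np.abs(psi_t) ** 2)                    # measurement probabilities
```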
The complexity class BQP was defined by Bernstein and Vazirani in 1993, who also proved the first quantum complexity-theoretic results. Their work, along with Simon's 1994 oracle separation between BPP and BQP, set the stage for Shor's algorithm the following year.
The Oracle World
Much of what we know about BQP comes from oracle (or black-box) results. Simon proved in 1994 that there exists an oracle relative to which BQP is not contained in BPP - the first formal evidence that quantum computers are more powerful than classical ones. Bernstein and Vazirani similarly showed an oracle separation. These results do not prove $\mathsf{BPP} \neq \mathsf{BQP}$ in the real world (oracle results do not transfer automatically), but they provide strong structural evidence.
In 2018, Raz and Tal proved a landmark result: there exists an oracle relative to which $\mathsf{BQP} \not\subseteq \mathsf{PH}$, where PH is the polynomial hierarchy. This means that, in a relativized world, quantum computers can solve problems that lie outside the entire polynomial hierarchy - a vast tower of complexity classes built on top of NP. This is perhaps the strongest structural evidence we have that quantum computational power is fundamentally different from classical computational power.
As a concrete illustration, consider the Deutsch-Jozsa problem on 2 bits: given oracle access to a function promised to be either constant or balanced, decide which. A single quantum query suffices; for a balanced function the final measurement returns a nonzero bit string (11 in the example below) with probability 1, whereas a classical deterministic algorithm must query $2^{n-1} + 1$ of the $2^n$ inputs in the worst case. This is a simple example of an oracle problem in BQP that is not in P relative to the oracle.
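The following statevector sketch (pure NumPy, using the phase-oracle formulation; the balanced function $f(x_1, x_0) = x_1 \oplus x_0$ is an arbitrary choice) runs the algorithm end to end:

```python
import numpy as np

n = 2
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = np.kron(H, H)                       # H^{(x)2}

# Balanced oracle f(x1, x0) = x1 XOR x0, applied as a phase:
# |x> -> (-1)^f(x) |x>   (basis order: 00, 01, 10, 11)
f = np.array([0, 1, 1, 0])
oracle = np.diag((-1.0) ** f)

state = np.zeros(4)
state[0] = 1.0                           # |00>
state = Hn @ (oracle @ (Hn @ state))     # H^{(x)2} U_f H^{(x)2} |00>

probs = np.abs(state) ** 2
print({format(i, "02b"): round(p, 6) for i, p in enumerate(probs)})
# {'00': 0.0, '01': 0.0, '10': 0.0, '11': 1.0} -> balanced, in one query
```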
30.2 QMA and Quantum Verification
The class NP captures problems where a correct answer can be verified efficiently, even if finding the answer is hard. A prover provides a classical "witness" (a certificate or proof), and a polynomial-time verifier checks it. The quantum analog is QMA: Quantum Merlin-Arthur.
QMA (Quantum Merlin-Arthur) is the class of decision problems where a YES answer can be verified by a polynomial-time quantum verifier (Arthur) given a polynomial-size quantum state as a witness (from Merlin). For YES instances, there exists a quantum witness that makes the verifier accept with probability at least 2/3. For NO instances, no quantum witness makes the verifier accept with probability more than 1/3.
The key difference from NP is that the witness is a quantum state - it can be an arbitrary superposition over exponentially many basis states - and the verifier is a quantum computer. This makes QMA potentially more powerful than NP: the quantum witness might encode information that no classical string of polynomial length can capture.
The known relationships are:
$$\mathsf{NP} \subseteq \mathsf{QMA} \subseteq \mathsf{QIP} = \mathsf{PSPACE}$$

Whether $\mathsf{NP} = \mathsf{QMA}$ is a major open question. If they are equal, it would mean that quantum witnesses are no more powerful than classical ones for verification - a surprising conclusion that most researchers consider unlikely. The variant QMA(2), which allows two unentangled quantum witnesses, satisfies $\mathsf{QMA} \subseteq \mathsf{QMA(2)} \subseteq \mathsf{NEXP}$, but whether QMA(2) is contained in PSPACE remains open.
The Local Hamiltonian Problem: Quantum NP-Completeness
Every complexity class needs a "complete" problem - a problem that is as hard as any problem in the class. For NP, the canonical complete problem is SAT (the Cook-Levin theorem). For QMA, the canonical complete problem is the Local Hamiltonian problem, proved by Alexei Kitaev in 1999 in what is often called the "quantum Cook-Levin theorem."
The problem is defined as follows. Given a Hamiltonian $H = \sum_{j=1}^{m} H_j$ acting on $n$ qubits, where each $H_j$ acts non-trivially on at most $k$ qubits (making it "$k$-local"), and two thresholds $a < b$ with $b - a \geq 1/\text{poly}(n)$:
- YES: The ground state energy of $H$ (its smallest eigenvalue) is at most $a$.
- NO: The ground state energy of $H$ is at least $b$.
Kitaev showed that the $k$-Local Hamiltonian problem is QMA-complete for $k = 5$. Subsequent work tightened this: Kempe and Regev (2003) reduced it to $k = 3$, and Kempe, Kitaev, and Regev (2004) proved QMA-completeness for $k = 2$. This is the tightest possible result, since the 1-local case is trivially in P.
Kitaev's Quantum Cook-Levin Theorem: The $k$-Local Hamiltonian problem is QMA-complete for $k \geq 2$. It plays the same role in quantum complexity theory that SAT plays in classical complexity theory. Just as SAT captures the essence of classical verification, the Local Hamiltonian problem captures the essence of quantum verification.
The analogy between SAT and the Local Hamiltonian problem is deep. A $k$-SAT clause checks $k$ bits and is "satisfied" or not. A $k$-local Hamiltonian term acts on $k$ qubits and contributes an "energy penalty." Minimizing the total number of violated clauses (MAX-SAT) corresponds to minimizing the total energy (finding the ground state). The quantum witness for the Local Hamiltonian problem is the ground state itself - a quantum state that a quantum verifier can use to estimate the energy.
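To ground the analogy, the following sketch (NumPy; the 3-qubit Hamiltonian is an arbitrary toy instance) assembles a 2-local Hamiltonian term by term and finds its ground state energy by exact diagonalization - the brute-force approach whose exponential cost is exactly what the QMA-completeness result concerns:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def term(n, ops):
    """Embed single-qubit operators {qubit: op} into an n-qubit operator."""
    out = np.array([[1.0 + 0j]])
    for q in range(n):
        out = np.kron(out, ops.get(q, I2))
    return out

n = 3
# A 2-local Hamiltonian: ZZ couplings on a line plus a transverse field.
H = (term(n, {0: Z, 1: Z}) + term(n, {1: Z, 2: Z})
     + 0.7 * sum(term(n, {q: X}) for q in range(n)))

ground_energy = np.linalg.eigvalsh(H)[0]   # smallest eigenvalue
print(ground_energy)
# Deciding whether this value is below a or above b is the QMA-complete
# promise problem (for general inputs, not for this tiny toy instance).
```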
Physical Significance
The QMA-completeness of the Local Hamiltonian problem has profound implications for physics. It means that determining the ground state energy of a quantum many-body system is, in the worst case, intractable even for quantum computers (assuming $\mathsf{QMA} \neq \mathsf{BQP}$). This is not merely a theoretical curiosity - finding ground states of quantum systems is one of the central challenges of condensed matter physics and quantum chemistry. The QMA-completeness result tells us that no general-purpose efficient algorithm (classical or quantum) can solve all instances of this problem.
QMA-completeness of the Local Hamiltonian problem does not mean that all instances are hard. Many physically relevant Hamiltonians have special structure (symmetries, gap conditions, geometric locality) that makes their ground states tractable. The hardness is a worst-case statement about the general problem.
30.3 QIP = PSPACE
Beyond single-round verification (QMA), we can consider interactive proof systems where a verifier and a prover exchange multiple rounds of messages. Classically, the class IP (Interactive Proofs) was shown to equal PSPACE in a celebrated result by Shamir (1992), building on the work of Lund, Fortnow, Karloff, and Nisan. This was a surprise: interactive proofs are exactly as powerful as polynomial-space computation.
The quantum analog is QIP: Quantum Interactive Proofs, where the verifier is a polynomial-time quantum computer, the prover is computationally unbounded, and they exchange quantum messages over multiple rounds.
QIP = PSPACE (Jain, Ji, Upadhyay, and Watrous, 2011): The class of problems having quantum interactive proof systems with polynomially many rounds equals PSPACE. Since IP = PSPACE (Shamir, 1992), this means quantum interactive proofs are exactly as powerful as classical interactive proofs.
This result, proved by Rahul Jain, Zhengfeng Ji, Sarvagya Upadhyay, and John Watrous, resolved a major open question in quantum complexity theory. The proof proceeded in two steps:
- PSPACE $\subseteq$ QIP: This follows immediately from IP = PSPACE, since a classical interactive proof is a special case of a quantum one.
- QIP $\subseteq$ PSPACE: This is the hard direction. The authors showed that the optimal strategy for the quantum verifier can be computed in polynomial space by applying a parallelized version of the matrix multiplicative weights update method to a class of semidefinite programs that captures the power of quantum interactive proofs.
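For a feel of the technique, here is a minimal sketch of the matrix multiplicative weights update in isolation (NumPy/SciPy; the loss matrices are random placeholders, not the semidefinite programs from the actual proof). The algorithm maintains a density matrix proportional to the matrix exponential of the accumulated losses:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d, eta, rounds = 4, 0.1, 50

def random_loss(d):
    """A random Hermitian loss matrix with eigenvalues in [0, 1]."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    A = (A + A.conj().T) / 2
    lo, hi = np.linalg.eigvalsh(A)[[0, -1]]
    return (A - lo * np.eye(d)) / (hi - lo)

accumulated = np.zeros((d, d), dtype=complex)
losses = []
for t in range(rounds):
    W = expm(-eta * accumulated)        # matrix exponential of past losses
    rho = W / np.trace(W)               # density matrix: the "weights"
    L = random_loss(d)
    losses.append(np.trace(rho @ L).real)  # loss suffered this round
    accumulated += L                    # update for the next round
print(np.mean(losses))
```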
The Significance
The result QIP = PSPACE has a remarkable interpretation: quantum messages do not increase the power of interactive proof systems. Even though the prover and verifier exchange quantum states (which can carry superpositions and entanglement), the class of problems that can be verified this way is the same as with classical interaction. Quantum information helps with efficiency (QIP achieves PSPACE power with only 3 rounds of interaction, whereas classical IP may require polynomially many rounds), but not with raw computational power.
An important intermediate result is that QIP = QIP(3): any quantum interactive proof system can be parallelized to use just 3 messages (the prover sends a message, the verifier responds, and the prover sends a final message). This was shown by Kitaev and Watrous (2000). In contrast, for classical IP, it is believed that more rounds genuinely add power (unless the polynomial hierarchy collapses).
The Bigger Picture
Here is the full hierarchy of quantum complexity classes, from weakest to strongest:
$$\mathsf{BQP} \subseteq \mathsf{QCMA} \subseteq \mathsf{QMA} \subseteq \mathsf{QIP(2)} \subseteq \mathsf{QIP} = \mathsf{PSPACE}$$

Here QCMA is defined like QMA but with a classical witness, and QIP(2) is quantum interaction limited to two messages. Every containment in this chain is established; what is conjectured is strictness. In particular, the proven backbone is

$$\mathsf{BQP} \subseteq \mathsf{QMA} \subseteq \mathsf{QIP} = \mathsf{QIP(3)} = \mathsf{PSPACE}$$

Whether QMA is strictly contained in QIP (equivalently, whether $\mathsf{QMA} \neq \mathsf{PSPACE}$) is unknown; if QMA = PSPACE, the hierarchy above would collapse. Most researchers believe the containments are strict, giving a rich structure of increasingly powerful quantum proof systems.
The QIP = PSPACE proof appeared on arXiv in July 2009 and was published in the Journal of the ACM in 2011. It won the Best Paper Award at the 2010 ACM Symposium on Theory of Computing (STOC). The matrix multiplicative weights technique used in the proof has since found applications in other areas of quantum information theory, including quantum state tomography and quantum channel capacity bounds.
30.4 Quantum Supremacy and Advantage
Complexity theory gives us the theoretical framework, but a natural experimental question arises: can we demonstrate that a quantum computer solves some problem faster than any classical computer? This is the question of quantum computational supremacy (sometimes called "quantum advantage" to avoid political connotations of the word "supremacy").
What Quantum Supremacy Means
Quantum supremacy, a term coined by John Preskill in 2012, refers to the demonstration that a quantum device can perform a computational task that no classical computer can perform in a reasonable amount of time. The task does not need to be useful - it just needs to be well-defined, verifiable, and classically intractable.
The leading candidate task has been Random Circuit Sampling (RCS): run a random quantum circuit on $n$ qubits with depth $d$, measure all qubits, and collect samples from the output distribution. For sufficiently large $n$ and $d$, the output distribution becomes so complex that no known classical algorithm can sample from it efficiently. The theoretical basis for this hardness relies on complexity-theoretic conjectures related to the non-collapse of the polynomial hierarchy.
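A toy version of RCS is easy to write down (pure NumPy; the qubit count, depth, and gate set here are illustrative choices, far simpler than Sycamore's): alternate layers of Haar-random single-qubit gates with entangling CZ gates, then sample bit strings from the output distribution. The $O(2^n)$ statevector held in memory is exactly the resource that makes large instances classically intractable:

```python
import numpy as np

rng = np.random.default_rng(42)
n, depth = 4, 8

def rand_u2():
    """Haar-random 2x2 unitary via QR decomposition."""
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    Q, R = np.linalg.qr(A)
    return Q * (np.diagonal(R) / np.abs(np.diagonal(R)))

def apply_1q(state, U, q):
    """Apply single-qubit gate U to qubit q of the statevector."""
    s = np.moveaxis(state.reshape((2,) * n), q, 0)
    s = np.tensordot(U, s, axes=([1], [0]))
    return np.moveaxis(s, 0, q).reshape(-1)

def apply_cz(state, q1, q2):
    """CZ gate: phase -1 on components where both qubits are 1."""
    s = state.reshape((2,) * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    s[tuple(idx)] *= -1
    return s.reshape(-1)

state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0
for layer in range(depth):
    for q in range(n):
        state = apply_1q(state, rand_u2(), q)
    for q in range(layer % 2, n - 1, 2):       # brickwork CZ pattern
        state = apply_cz(state, q, q + 1)

probs = np.abs(state) ** 2
probs /= probs.sum()                           # guard against rounding
samples = rng.choice(2 ** n, size=5, p=probs)
print([format(s, f"0{n}b") for s in samples])
```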
Google Sycamore (2019)
In October 2019, Google's quantum AI team published a landmark paper in Nature claiming the first demonstration of quantum supremacy. Their Sycamore processor, with 53 operational superconducting qubits (out of 54; one was defective), performed a random circuit sampling task in approximately 200 seconds. Google estimated that the same task would take the world's fastest supercomputer approximately 10,000 years.
The claim was immediately contested. IBM argued that by using disk storage and clever algorithmic techniques, a classical supercomputer could complete the task in roughly 2.5 days - still far slower than 200 seconds, but not 10,000 years. Subsequent classical algorithms have further narrowed the gap. In 2022, a team in China demonstrated that tensor network methods on classical GPUs could match Sycamore's performance for the specific circuit parameters Google used.
Google Willow (2024)
In December 2024, Google announced their Willow quantum chip with 105 superconducting qubits. Willow achieved two major milestones. First, it demonstrated below-threshold quantum error correction: as the size of the error-correcting code increased (from $3 \times 3$ to $5 \times 5$ to $7 \times 7$ grids of physical qubits), the logical error rate decreased exponentially - cutting roughly in half with each increase in code distance. This is the behavior required for scalable quantum computing and had never been demonstrated before.
Second, Willow performed a random circuit sampling benchmark in under 5 minutes that Google estimated would take the fastest classical supercomputers approximately $10^{25}$ years - a gap so vast that even aggressive improvements in classical algorithms seem unlikely to close it. The improved qubit coherence times (averaging approximately 68 microseconds, a 5x improvement over Sycamore) and lower error rates make the classical simulation challenge dramatically harder.
Other Quantum Advantage Experiments
Google's experiments are not the only claims of quantum advantage. Notable others include:
- Boson sampling (Jiuzhang, USTC, China, 2020-2023): Photonic quantum computers sampling from distributions of interfering photons. Successive Jiuzhang experiments scaled from 76 detected photons (2020) to 113 (2021) and 255 (2023), claiming computational advantage over classical simulation, though the theoretical hardness assumptions differ from those of RCS.
- Random circuit sampling on other platforms: IBM, Quantinuum, and others have performed RCS experiments, though not always claiming supremacy.
The Current Status
The quantum supremacy landscape is a moving target. Classical simulation algorithms continue to improve, and some initial claims have been challenged or narrowed. However, a clear trend has emerged: as quantum hardware improves (more qubits, lower error rates, longer coherence times), the classical simulation cost grows super-exponentially. Willow's $10^{25}$-year estimate suggests that we have moved well beyond the regime where incremental classical improvements can close the gap.
Quantum supremacy does not mean quantum computers are "better" than classical computers at everything, or even at anything useful (yet). The tasks demonstrated so far (random circuit sampling, boson sampling) have no known practical applications. Supremacy is a proof of concept: it demonstrates that quantum computational power is real and that the theoretical predictions of quantum mechanics hold at scale. The path from supremacy to practical advantage for real-world problems is still being charted.
Quantum supremacy experiments demonstrate that quantum computers can perform specific computational tasks that are intractable for classical computers. The Google Sycamore (2019) and Willow (2024) experiments on random circuit sampling, and the USTC Jiuzhang experiments on boson sampling, provide the strongest evidence to date. The key challenge is moving from supremacy on artificial benchmarks to advantage on practically useful problems.
30.5 Limits of Quantum Computing
Quantum computers are not magic. They obey the laws of physics - specifically, quantum mechanics - and those laws impose real constraints. Understanding what quantum computers cannot do is just as important as understanding what they can do.
BQP vs NP: Quantum Computers Cannot Solve All Hard Problems
Perhaps the most important thing to understand about quantum computing is this: there is strong evidence that quantum computers cannot efficiently solve NP-complete problems.
The relationship between BQP and NP is subtle. Neither is known to contain the other:
- Is NP $\subseteq$ BQP? Almost certainly not. If quantum computers could solve NP-complete problems in polynomial time, we would have $\mathsf{NP} \subseteq \mathsf{BQP} \subseteq \mathsf{PP}$ - a containment no known technique comes close to establishing. Decades of effort have produced no polynomial-time quantum algorithm for any NP-complete problem, and the BBBV lower bound below rules out obtaining one through unstructured quantum search. The consensus conjecture is $\mathsf{NP} \not\subseteq \mathsf{BQP}$.
- Is BQP $\subseteq$ NP? Also believed to be false. The Raz-Tal oracle separation shows that BQP contains problems outside the polynomial hierarchy (relative to an oracle), and NP is only the first level of PH.
The most likely picture is that BQP and NP are incomparable: each contains problems not in the other. BQP contains problems like factoring, which is believed to be outside P but is inside NP $\cap$ co-NP, and hence unlikely to be NP-complete (if it were, NP would equal co-NP). NP contains problems like 3-SAT that are almost certainly not in BQP.
$$\mathsf{BQP} \not\subseteq \mathsf{NP} \quad \text{and} \quad \mathsf{NP} \not\subseteq \mathsf{BQP} \quad \text{(conjectured)}$$

Grover's Lower Bound
The strongest concrete evidence for quantum limitations comes from Grover's search problem. Recall from Chapter 12 that Grover's algorithm searches an unsorted database of $N$ items in $O(\sqrt{N})$ queries. Bennett, Bernstein, Brassard, and Vazirani (1997) proved that this is optimal: any quantum algorithm for unstructured search requires $\Omega(\sqrt{N})$ queries. This is a provable lower bound, not a conjecture.
The square-root speedup is significant but not exponential. For NP-complete problems like 3-SAT on $n$ variables, the brute-force search space has $N = 2^n$ elements. Grover's algorithm reduces this to $O(2^{n/2})$ - still exponential in $n$. A classical computer checking all $2^n$ possibilities would take time $2^n$; a quantum computer using Grover's algorithm would take time $2^{n/2}$. Both are exponential; the quantum computer merely halves the exponent.
The BBBV Theorem: Any quantum algorithm for unstructured search on $N$ elements requires $\Omega(\sqrt{N})$ queries. This means quantum computers offer at most a quadratic speedup for brute-force search. NP-complete problems, whose hardness comes from the need to search exponentially large spaces, remain exponentially hard even for quantum computers (albeit with a smaller exponent).
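To see what "halving the exponent" buys, a few lines of arithmetic suffice (the one-query-per-nanosecond rate is an arbitrary illustrative assumption):

```python
n = 100                                   # 3-SAT variables
classical = 2 ** n                        # brute-force queries
grover = 2 ** (n // 2)                    # Grover queries
ns_per_year = 3.15e16                     # ~nanoseconds in a year

print(f"classical: {classical / ns_per_year:.2e} years")  # ~4.0e+13 years
print(f"grover:    {grover / ns_per_year:.2e} years")     # ~3.6e-02 years
# Doubling n to 200 pushes Grover itself back to ~4.0e+13 years:
# the speedup is real, but the scaling is still exponential.
```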
The No-Cloning Theorem and Its Consequences
The no-cloning theorem (Chapter 6) states that an arbitrary unknown quantum state cannot be perfectly copied. This fundamental constraint limits quantum algorithms in several ways:
- You cannot "branch" on a quantum computation the way classical algorithms fork processes.
- Intermediate results in a quantum computation cannot be saved and reused without disturbing the quantum state.
- Error correction requires substantial overhead (many physical qubits per logical qubit) precisely because errors cannot be detected by simply copying and comparing quantum states.
The Measurement Problem
A quantum computer can maintain exponentially many amplitudes in superposition during a computation, but measurement collapses the state to a single classical outcome. You get only $n$ classical bits from an $n$-qubit measurement. The art of quantum algorithm design lies in arranging interference so that the desired answer has high probability when measured. This is a severe constraint: you cannot simply "read out" all $2^n$ amplitudes.
When Does Quantum Help?
Quantum computers provide genuine speedups when the problem has structure that quantum algorithms can exploit:
- Algebraic structure: The periodicity in Shor's algorithm, the group structure in the hidden subgroup problem (Chapter 31).
- Quantum structure: Simulating quantum systems is naturally suited to quantum computers because the simulator and the simulated system "speak the same language" (Chapters 32-33).
- Interference structure: Problems where constructive and destructive interference can amplify correct answers and suppress incorrect ones.
When a problem lacks exploitable structure - when it is truly "unstructured" in the computational sense - quantum computers offer at most a polynomial (typically quadratic) speedup, not an exponential one.
Scott Aaronson has eloquently summarized the limits of quantum computing: "Quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once." The real power of quantum computing lies not in brute-force parallelism but in the subtle manipulation of probability amplitudes through interference. Problems must have the right mathematical structure for quantum algorithms to achieve exponential speedups.
Summary: The Complexity Landscape
The following diagram summarizes the believed relationships (the first two lines are widely held conjectures, none proven; the containments and equality in the third line are theorems):

$$\mathsf{P} = \mathsf{BPP} \subsetneq \mathsf{BQP}$$

$$\mathsf{BQP} \not\subseteq \mathsf{NP}, \quad \mathsf{NP} \not\subseteq \mathsf{BQP}$$

$$\mathsf{NP} \subseteq \mathsf{QMA} \subseteq \mathsf{QIP} = \mathsf{PSPACE}$$

Quantum computers are believed to be strictly more powerful than classical computers (BQP properly containing P = BPP), but they are not all-powerful (BQP is not believed to contain NP). They occupy a specific, well-defined niche in the complexity landscape - one that happens to include some problems of extraordinary practical importance (factoring, quantum simulation) while excluding others (NP-complete optimization, most combinatorial search problems).