Chapter 33: Digital Quantum Simulation
In the previous chapter, we saw why quantum simulation matters and the difference between analog and digital approaches. Now we dive into the details of digital quantum simulation - the systematic decomposition of quantum time evolution into quantum gates. We will develop the Trotter-Suzuki method from scratch, explore advanced techniques like qubitization and quantum signal processing, and see how these tools are applied to the two most important application domains: quantum chemistry and condensed matter physics.
33.1 Hamiltonian Simulation: The Core Problem
The central problem of digital quantum simulation is this: given a Hamiltonian $H$ acting on $n$ qubits, an initial state $|\psi(0)\rangle$, and a time $t$, implement the unitary operator $e^{-iHt}$ as a quantum circuit. The output state $|\psi(t)\rangle = e^{-iHt} |\psi(0)\rangle$ is the solution to the Schrödinger equation:
$$i\frac{d}{dt}|\psi(t)\rangle = H|\psi(t)\rangle$$

The Hamiltonian $H$ is a $2^n \times 2^n$ Hermitian matrix. Exponentiating it directly would require manipulating this exponentially large matrix - exactly the classical bottleneck we are trying to avoid. The key insight is that physically relevant Hamiltonians have structure that can be exploited.
Local Hamiltonians
Most physical Hamiltonians are local: they decompose as a sum of terms, each acting on a small number of qubits:
$$H = \sum_{j=1}^{m} H_j$$

where each $H_j$ acts non-trivially on at most $k$ qubits (typically $k = 2$ for pairwise interactions). Examples include:
- Ising model: $H = J\sum_{\langle i,j \rangle} Z_i Z_j + h\sum_i X_i$. Each term acts on at most 2 qubits.
- Heisenberg model: $H = J\sum_{\langle i,j \rangle} (X_i X_j + Y_i Y_j + Z_i Z_j)$. Pairwise interactions between neighboring spins.
- Electronic structure Hamiltonians: After second quantization and a mapping to qubits (Jordan-Wigner or Bravyi-Kitaev), molecular Hamiltonians become sums of Pauli tensor products.
The locality condition is crucial: while $e^{-iHt}$ is an exponentially large matrix, each $e^{-iH_j t}$ acts on only a few qubits and can be implemented with $O(1)$ quantum gates. The challenge is that $e^{-i(H_1 + H_2)t} \neq e^{-iH_1 t} e^{-iH_2 t}$ in general, because $H_1$ and $H_2$ typically do not commute.
The Hamiltonian simulation problem asks: given $H = \sum_j H_j$ (a sum of local terms), implement $e^{-iHt}$ as a quantum circuit to accuracy $\epsilon$. The difficulty comes from non-commutativity: $[H_j, H_k] \neq 0$ in general, so we cannot simply apply each term's evolution independently. Trotterization and more advanced methods handle this non-commutativity.
Complexity of Hamiltonian Simulation
For a Hamiltonian with $m$ local terms, the simulation cost depends on:
- $t$: the simulation time (longer times require more work).
- $m$: the number of terms in the Hamiltonian.
- $\|H\|$: the norm of the Hamiltonian (stronger interactions are harder).
- $\epsilon$: the desired accuracy (tighter precision costs more).
Different simulation methods achieve different trade-offs among these parameters. The theoretical optimum is $O(\alpha t + \log(1/\epsilon))$ queries, where $\alpha$ characterizes the access model (e.g., the 1-norm of coefficients). This optimal scaling is achieved by qubitization and QSVT (Section 33.3).
33.2 Trotterization
The Trotter-Suzuki decomposition is the simplest and most widely used method for Hamiltonian simulation. It is named after Hale Trotter (1959) and Masuo Suzuki (1976, 1990), who developed the mathematical theory of product formulas for operator exponentials. Suzuki's 1976 work generalized the Trotter product formula, while his 1990 paper introduced the recursive fractal decomposition that yields higher-order formulas.
First-Order Trotter Formula
The idea is straightforward. Although $e^{-i(A+B)t} \neq e^{-iAt} e^{-iBt}$ when $[A,B] \neq 0$, the Trotter formula says that the product becomes a good approximation when we divide time into small steps:
$$e^{-i(A+B)t} = \lim_{n \to \infty} \left( e^{-iAt/n} \, e^{-iBt/n} \right)^n$$

For finite $n$ (the number of Trotter steps), the error is:

$$\left\| e^{-i(A+B)t} - \left( e^{-iAt/n} \, e^{-iBt/n} \right)^n \right\| \leq \frac{t^2 \|[A,B]\|}{2n}$$

The error scales as $O(t^2/n)$ for two terms. For a Hamiltonian with $m$ terms, $H = \sum_{j=1}^m H_j$, the total error over all $n$ steps generalizes to $O(m^2 \|H\|^2 t^2/n)$, where $\|H\|$ bounds the norm of the individual terms. One Trotter step applies all $m$ individual evolutions, so to achieve accuracy $\epsilon$ the total gate count is $O(m^3 \|H\|^2 t^2/\epsilon)$.
First-order Trotterization: Approximate $e^{-iHt}$ by dividing time into $n$ small steps and applying each term's evolution sequentially: $e^{-iHt} \approx \left(\prod_{j=1}^m e^{-iH_j t/n}\right)^n$. The error is $O(m^2 \|H\|^2 t^2/n)$, requiring $n = O(m^2 \|H\|^2 t^2/\epsilon)$ steps for accuracy $\epsilon$.
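The first-order error bound above is easy to verify numerically. The sketch below (numpy/scipy only) compares exact evolution under a two-term Hamiltonian $H = A + B$ - an Ising-type split with $A = Z \otimes Z$ and $B = X \otimes I + I \otimes X$, chosen here for illustration - against the first-order product formula, showing the $O(1/n)$ decay of the error:

```python
# Numerical check of the first-order Trotter error (illustrative H = A + B).
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
I = np.eye(2, dtype=complex)

A = np.kron(Z, Z)                      # ZZ interaction term
B = np.kron(X, I) + np.kron(I, X)      # transverse-field terms
t = 1.0

exact = expm(-1j * (A + B) * t)        # exact evolution e^{-iHt}

def trotter1(n):
    """First-order product formula with n Trotter steps."""
    step = expm(-1j * A * t / n) @ expm(-1j * B * t / n)
    return np.linalg.matrix_power(step, n)

for n in [1, 10, 100]:
    err = np.linalg.norm(exact - trotter1(n), 2)   # spectral norm
    print(f"n = {n:4d}: error = {err:.2e}")
```

Running this shows the error shrinking roughly tenfold each time $n$ grows tenfold, consistent with the $O(t^2/n)$ bound.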
Second-Order Suzuki-Trotter Formula
A better approximation symmetrizes the product. The second-order Suzuki-Trotter formula is:
$$S_2(t) = \prod_{j=1}^{m} e^{-iH_j t/2} \prod_{j=m}^{1} e^{-iH_j t/2}$$

This "palindrome" structure (forward then backward) cancels the leading-order error term, giving:

$$\left\| e^{-iHt} - S_2(t/n)^n \right\| = O\left(\frac{t^3}{n^2}\right)$$

The error now scales as $O(t^3/n^2)$ - a significant improvement. To achieve accuracy $\epsilon$, we need $n = O(t^{3/2}/\sqrt{\epsilon})$ steps, fewer than the $O(t^2/\epsilon)$ steps the first-order formula requires.
Higher-Order Formulas
Suzuki showed how to recursively construct product formulas of any even order $2k$:
$$S_{2k}(t) = S_{2k-2}(p_k t)^2 \, S_{2k-2}((1 - 4p_k)t) \, S_{2k-2}(p_k t)^2$$

where $p_k = (4 - 4^{1/(2k-1)})^{-1}$. The $2k$-th order formula has error $O(t^{2k+1}/n^{2k})$. Higher orders converge faster with $n$ but require more gate operations per step (the formulas become longer). In practice, the optimal order depends on the specific Hamiltonian and desired accuracy.
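Suzuki's recursion translates directly into code. The sketch below (numpy/scipy) builds $S_2$ and then $S_4$ via the recursion for a simple two-term Hamiltonian - again the illustrative split $H = Z \otimes Z + (X \otimes I + I \otimes X)$ - and compares their errors at fixed step count:

```python
# Recursive construction of Suzuki product formulas S_2, S_4, ...
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
I = np.eye(2, dtype=complex)
terms = [np.kron(Z, Z), np.kron(X, I) + np.kron(I, X)]

def S2(t):
    """Second-order symmetric formula: forward half-steps, then backward."""
    U = np.eye(4, dtype=complex)
    for Hj in terms:
        U = U @ expm(-1j * Hj * t / 2)
    for Hj in reversed(terms):
        U = U @ expm(-1j * Hj * t / 2)
    return U

def S(order, t):
    """Suzuki's recursion: S_{2k} built from five copies of S_{2k-2}."""
    if order == 2:
        return S2(t)
    k = order // 2
    p = 1.0 / (4 - 4 ** (1 / (2 * k - 1)))          # p_k from the text
    Sp = S(order - 2, p * t)
    Sm = S(order - 2, (1 - 4 * p) * t)
    return Sp @ Sp @ Sm @ Sp @ Sp

H = sum(terms)
t, n = 1.0, 4
exact = expm(-1j * H * t)
for order in [2, 4]:
    approx = np.linalg.matrix_power(S(order, t / n), n)
    print(order, np.linalg.norm(exact - approx, 2))
```

Note that $S_4$ already contains $5 \times 3 = 15$ exponentials per step versus $S_2$'s three - the gate-count growth the text warns about.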
Implementing Trotter Steps for the Ising Model
Let us implement Trotterization for the transverse-field Ising model on two qubits:
$$H = J \, Z_0 Z_1 + h(X_0 + X_1)$$

One Trotter step requires:
- $e^{-iJ Z_0 Z_1 \Delta t}$: the $\text{Rzz}(2J\Delta t)$ gate, which applies a $ZZ$ rotation. In QASM: `rzz(2*J*dt) q[0], q[1];`
- $e^{-ihX_0 \Delta t}$: the $\text{Rx}(2h\Delta t)$ gate on qubit 0. In QASM: `rx(2*h*dt) q[0];`
- $e^{-ihX_1 \Delta t}$: the same rotation on qubit 1. In QASM: `rx(2*h*dt) q[1];`
Try the sandbox below. It simulates the Ising model with $J = 1$, $h = 0.5$, total time $t = \pi/2$, using a variable number of Trotter steps. The code uses 4 steps (each with $\Delta t = \pi/8$).
The circuit above starts with qubit 0 in state $|1\rangle$ and qubit 1 in state $|0\rangle$, then simulates the Ising Hamiltonian for time $t = \pi/2$. The $ZZ$ interaction causes energy exchange between the qubits, while the transverse field $X$ induces spin flips. You should see a non-trivial probability distribution over all four basis states, reflecting the quantum dynamics.
Try modifying the code: reduce to 2 Trotter steps (double the angles per step) or increase to 8 steps (halve the angles). With more steps, the simulation becomes more accurate, but the circuit becomes deeper. This trade-off is at the heart of Trotterization.
The Heisenberg Model: Three Interaction Terms
The Heisenberg model $H = J(X_i X_j + Y_i Y_j + Z_i Z_j)$ requires three interaction gates per bond per Trotter step:
- $e^{-iJ X_i X_j \Delta t}$: the $\text{Rxx}(2J\Delta t)$ gate
- $e^{-iJ Y_i Y_j \Delta t}$: the $\text{Ryy}(2J\Delta t)$ gate
- $e^{-iJ Z_i Z_j \Delta t}$: the $\text{Rzz}(2J\Delta t)$ gate
This sandbox simulates the Heisenberg spin chain: starting with qubit 0 in $|1\rangle$ and qubit 1 in $|0\rangle$, the $XX + YY + ZZ$ interaction causes the spin excitation to hop between sites. The Heisenberg model conserves total spin, so you should see significant probability only on $|10\rangle$ and $|01\rangle$ (the one-excitation sector), with very little probability on $|00\rangle$ or $|11\rangle$ (which would indicate Trotter error).
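The sector structure described above can be checked directly with a small matrix simulation. The sketch below (numpy/scipy, with illustrative parameters $J = 1$, $\Delta t = 0.3$, 4 steps - not necessarily those used in the sandbox) applies the Trotterized $\text{Rxx}$/$\text{Ryy}$/$\text{Rzz}$ sequence to $|10\rangle$ and prints the resulting populations:

```python
# Trotterized two-site Heisenberg evolution: the excitation hops between
# sites while the |00> and |11> populations stay (numerically) at zero.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
J, dt, steps = 1.0, 0.3, 4          # illustrative parameters

XX, YY, ZZ = (np.kron(P, P) for P in (X, Y, Z))
step = expm(-1j * J * XX * dt) @ expm(-1j * J * YY * dt) @ expm(-1j * J * ZZ * dt)

psi = np.zeros(4, dtype=complex)
psi[2] = 1.0                        # |10>: qubit 0 in |1>, qubit 1 in |0>
for _ in range(steps):
    psi = step @ psi

probs = np.abs(psi) ** 2            # basis order: |00>, |01>, |10>, |11>
print({f"{i:02b}": round(p, 4) for i, p in enumerate(probs)})
```

For two qubits each factor happens to preserve the one-excitation subspace exactly, so $|00\rangle$ and $|11\rangle$ get strictly zero weight; on longer chains, Trotterizing $XX$ and $YY$ separately can leak small probability out of the sector.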
Trotter Error in Practice
The theoretical error bounds for Trotterization are often pessimistic. Childs, Su, Tran, Wiebe, and Zhu (2021) developed a refined "commutator scaling" theory that exploits the commutativity structure of the Hamiltonian terms. Their key insight: the Trotter error depends not on the norms of individual terms but on the norms of their commutators. If many terms commute (or nearly commute), the actual error is much smaller than the worst-case bound predicts.
For instance, in a lattice Hamiltonian where terms on distant sites commute exactly, the commutator-based bound gives errors that are tight to within a factor of approximately 5, compared to the naive bound which can overestimate by orders of magnitude.
It is sometimes believed that higher-order Trotter formulas are always better. In practice, the optimal order depends on the simulation parameters. Higher-order formulas have smaller error per step but require more gates per step (the formula length grows exponentially with order). For short simulations or low precision, first- or second-order formulas may actually use fewer total gates than fourth- or sixth-order formulas.
33.3 Advanced Techniques: Qubitization and QSP
While Trotterization is intuitive and practical, it is not asymptotically optimal. The gate count for first-order Trotter scales as $O(m^2 t^2/\epsilon)$, which has suboptimal dependence on both $t$ and $\epsilon$. Two advanced techniques achieve the theoretically optimal scaling: qubitization (Low and Chuang, 2019) and quantum signal processing (Low and Chuang, 2017), unified under the QSVT framework (Chapter 31).
Linear Combinations of Unitaries (LCU)
Before describing qubitization, we need the LCU lemma (Childs and Wiebe, 2012). Suppose the Hamiltonian can be written as a linear combination of unitaries:
$$H = \sum_{j=1}^{m} \alpha_j U_j, \quad \alpha_j > 0$$

where each $U_j$ is a unitary operator (any phase can be absorbed into $U_j$, so taking $\alpha_j > 0$ loses no generality). For Pauli Hamiltonians, such as those arising in quantum chemistry, this decomposition is natural: each $U_j$ is a tensor product of Pauli operators, and $\alpha_j$ is the corresponding coefficient.
The LCU technique uses two operations:
- PREPARE: A unitary that maps $|0\rangle$ to $\sum_j \sqrt{\alpha_j/\lambda} |j\rangle$, where $\lambda = \sum_j \alpha_j$ is the 1-norm.
- SELECT: A controlled unitary that maps $|j\rangle|\psi\rangle$ to $|j\rangle U_j |\psi\rangle$.
Using these, one can implement a block encoding of $H/\lambda$: the unitary $U = \text{PREPARE}^\dagger \cdot \text{SELECT} \cdot \text{PREPARE}$ satisfies $\langle 0 | U | 0 \rangle = H/\lambda$ (projecting the ancilla register). This block encoding is the gateway to qubitization and QSVT.
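The block-encoding identity is concrete enough to verify by hand for a tiny instance. The sketch below (numpy only, with illustrative coefficients) builds PREPARE and SELECT explicitly for $H = 0.5\,Z + 0.3\,X$ on one system qubit with one ancilla, and checks that the ancilla-$|0\rangle$ block of $\text{PREPARE}^\dagger \cdot \text{SELECT} \cdot \text{PREPARE}$ equals $H/\lambda$:

```python
# LCU block encoding of H = 0.5*Z + 0.3*X (illustrative coefficients).
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
alphas, unitaries = [0.5, 0.3], [Z, X]
lam = sum(alphas)                          # the 1-norm lambda
H = sum(a * U for a, U in zip(alphas, unitaries))

# PREPARE on the 1-qubit ancilla: |0> -> sum_j sqrt(alpha_j / lam) |j>
amps = np.sqrt(np.array(alphas) / lam)
prep = np.array([[amps[0], -amps[1]],
                 [amps[1],  amps[0]]])     # any unitary completion works

# SELECT = sum_j |j><j| (x) U_j  (ancilla is the first tensor factor)
select = np.zeros((4, 4), dtype=complex)
select[0:2, 0:2] = unitaries[0]
select[2:4, 2:4] = unitaries[1]

I2 = np.eye(2)
U = np.kron(prep.conj().T, I2) @ select @ np.kron(prep, I2)
block = U[0:2, 0:2]                        # <0|_anc U |0>_anc on the system
print(np.allclose(block, H / lam))         # True
```

Projecting the ancilla onto $|0\rangle$ picks out $\sum_j (\alpha_j/\lambda)\, U_j = H/\lambda$, exactly as the identity states.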
Qubitization
Qubitization, developed by Low and Chuang (2019), converts a block encoding of $H/\lambda$ into a quantum walk operator whose eigenvalues encode the eigenvalues of $H$. Specifically, the "qubitized walk operator" $W$ has eigenvalues $e^{\pm i \arccos(\lambda_k/\lambda)}$, where $\lambda_k$ are the eigenvalues of $H$.
The key property of qubitization is that it embeds the Hamiltonian's eigenvalues into an $SU(2)$ subspace - a two-dimensional space for each eigenvalue. This embedding is exactly what quantum signal processing needs: by applying a sequence of signal processing rotations to $W$, one can implement any polynomial transformation of the eigenvalues.
Qubitization converts a block encoding of $H$ into a quantum walk operator $W$ whose eigenvalues are $e^{\pm i\arccos(\lambda_k/\lambda)}$. Combined with quantum signal processing, it achieves $O(\lambda t + \log(1/\epsilon)/\log\log(1/\epsilon))$ query complexity for Hamiltonian simulation - optimal in both $t$ and $\epsilon$ up to doubly logarithmic factors.
Achieving Optimal Simulation
To simulate $e^{-iHt}$ using qubitization and QSP:
- Construct a block encoding of $H/\lambda$ using PREPARE and SELECT oracles.
- Build the qubitized walk operator $W$.
- Find QSP angles $\phi_0, \ldots, \phi_d$ such that the polynomial $P(x) \approx e^{-i\lambda t \cdot x}$ (a polynomial approximation to the complex exponential).
- Apply the QSP sequence: alternating $W$ with single-qubit rotations determined by the angles $\phi_k$.
The degree of the polynomial $P$ needed to approximate $e^{-i\lambda t x}$ to accuracy $\epsilon$ is $d = O(\lambda t + \log(1/\epsilon)/\log\log(1/\epsilon))$. Each degree requires one use of $W$ (which uses one PREPARE and one SELECT), giving the total query complexity.
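The origin of the $O(\lambda t + \log(1/\epsilon))$ degree can be seen numerically via the Jacobi-Anger expansion, which writes $e^{-i\tau x}$ as a Chebyshev series with Bessel-function coefficients. The sketch below (numpy/scipy, with $\tau = 5$ standing in for $\lambda t$) truncates that series at degree $d$ and shows the error collapsing once $d$ exceeds $\tau$:

```python
# Jacobi-Anger truncation of e^{-i tau x}: error vs. polynomial degree d.
import numpy as np
from scipy.special import jv          # Bessel function of the first kind

tau = 5.0                             # plays the role of lambda * t
xs = np.linspace(-1, 1, 201)
target = np.exp(-1j * tau * xs)

def cheb_trunc(d):
    """Degree-d truncation: J_0(tau) + 2 sum_k (-i)^k J_k(tau) T_k(x)."""
    approx = jv(0, tau) * np.ones_like(xs, dtype=complex)
    Tkm1, Tk = np.ones_like(xs), xs.copy()       # T_0, T_1
    for k in range(1, d + 1):
        approx += 2 * (-1j) ** k * jv(k, tau) * Tk
        Tkm1, Tk = Tk, 2 * xs * Tk - Tkm1        # Chebyshev recurrence
    return approx

for d in [3, 5, 8, 12, 16]:
    print(d, np.max(np.abs(target - cheb_trunc(d))))
```

Below $d \approx \tau$ the truncation is poor; above it, $J_k(\tau)$ decays superexponentially in $k$, which is where the additive $\log(1/\epsilon)$ term comes from.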
Comparison of Simulation Methods
| Method | Query Complexity | Key Advantage |
|---|---|---|
| 1st-order Trotter | $O(m^2 (\lambda t)^2 / \epsilon)$ | Simple implementation |
| 2nd-order Trotter | $O(m^2 (\lambda t)^{3/2} / \sqrt{\epsilon})$ | Better $\epsilon$ scaling |
| $2k$-th order Trotter | $O(m^2 (\lambda t)^{1+1/2k} / \epsilon^{1/2k})$ | Tunable time vs precision trade-off |
| LCU (Taylor series) | $O(\lambda t \cdot \text{polylog}(\lambda t / \epsilon))$ | Near-optimal in $t$ and $\epsilon$ |
| Qubitization + QSP | $O(\lambda t + \log(1/\epsilon)/\log\log(1/\epsilon))$ | Optimal in both $t$ and $\epsilon$ |
The progression from Trotterization to qubitization represents a journey from $O(t^2/\epsilon)$ to $O(t + \log(1/\epsilon))$ - a dramatic improvement in asymptotic complexity. However, the constant factors and ancilla requirements of advanced methods can be substantial, and for many practical instances (moderate $t$, modest $\epsilon$), Trotterization with optimized ordering remains competitive.
The quest for optimal Hamiltonian simulation has a rich history. Lloyd (1996) gave the first efficient simulation using Trotterization. Berry, Ahokas, Cleve, and Sanders (2007) improved the scaling with simulation time, achieving sublinear dependence on $1/\epsilon$ through higher-order product formulas and multi-product methods. Berry, Childs, Cleve, Kothari, and Somma (2015) achieved logarithmic scaling in $1/\epsilon$ using a truncated Taylor series with LCU - the first method with this near-optimal precision dependence. Low and Chuang (2017, 2019) achieved the optimal $O(t + \log(1/\epsilon))$ scaling using QSP and qubitization, closing a line of research spanning two decades.
33.4 Simulating Chemistry
Quantum chemistry is the flagship application of quantum simulation. The goal is to compute the electronic structure of molecules: the energies, wavefunctions, and properties of electrons in the Coulomb potential of atomic nuclei. This is the foundation of all of chemistry, materials science, and much of biology.
The Electronic Structure Problem
Under the Born-Oppenheimer approximation (nuclei are stationary), the electronic Hamiltonian for a molecule is:
$$H_{\text{elec}} = -\sum_i \frac{\nabla_i^2}{2} - \sum_{i,A} \frac{Z_A}{|r_i - R_A|} + \sum_{i < j} \frac{1}{|r_i - r_j|}$$

The three terms (written in atomic units) represent: kinetic energy of electrons, electron-nuclear attraction, and electron-electron repulsion. It is the electron-electron repulsion (the last term) that makes the problem hard: it couples all electrons together, creating the "correlation" that classical methods struggle to capture.
Second Quantization and Qubit Mapping
To simulate this on a quantum computer, we first express $H_{\text{elec}}$ in second quantization using a finite basis of molecular orbitals:
$$H = \sum_{pq} h_{pq} \, a_p^\dagger a_q + \frac{1}{2}\sum_{pqrs} h_{pqrs} \, a_p^\dagger a_q^\dagger a_r a_s$$

where $a_p^\dagger$ and $a_p$ are fermionic creation and annihilation operators, and $h_{pq}$, $h_{pqrs}$ are one- and two-electron integrals computed classically. The operators satisfy anticommutation relations $\{a_p, a_q^\dagger\} = \delta_{pq}$.
We then map fermionic operators to qubit operators using one of several encodings:
- Jordan-Wigner transformation: Maps each orbital to one qubit. The occupation $n_p = a_p^\dagger a_p$ becomes $(I - Z_p)/2$. Creation operators become strings of $Z$ operators (to enforce antisymmetry). The result is a Hamiltonian expressed as a sum of Pauli strings: $H = \sum_k \alpha_k P_k$, where each $P_k$ is a tensor product of Pauli operators.
- Bravyi-Kitaev transformation: An alternative mapping that reduces the length of Pauli strings from $O(n)$ to $O(\log n)$, at the cost of more complex encoding.
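The Jordan-Wigner construction is simple enough to verify directly. The sketch below (numpy only) builds each annihilation operator as a string of $Z$'s followed by a single-qubit lowering operator, then checks the fermionic anticommutation relations and the $(I - Z_p)/2$ form of the number operator:

```python
# Jordan-Wigner operators: Z-string (x) lowering operator (x) identities.
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.diag([1, -1]).astype(complex)
lower = np.array([[0, 1], [0, 0]], dtype=complex)   # |0><1|: empties an orbital

def kron_all(ops):
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def a(p, n):
    """Jordan-Wigner annihilation operator for orbital p of n."""
    return kron_all([Z] * p + [lower] + [I2] * (n - p - 1))

n = 3
Id = np.eye(2 ** n)
for p in range(n):
    for q in range(n):
        anti = a(p, n) @ a(q, n).conj().T + a(q, n).conj().T @ a(p, n)
        assert np.allclose(anti, Id if p == q else 0 * Id)

# The number operator on a single orbital is (I - Z)/2, as stated above.
assert np.allclose(a(0, 1).conj().T @ a(0, 1), (I2 - Z) / 2)
print("anticommutation relations verified")
```

The $Z$-string is what enforces fermionic antisymmetry: without it, operators on different orbitals would commute instead of anticommute.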
For a molecule described by $M$ molecular orbitals, the qubit Hamiltonian has $M$ qubits and $O(M^4)$ Pauli terms (from the two-electron integrals). This is the Hamiltonian we simulate.
Quantum chemistry simulation proceeds in three steps: (1) compute classical integrals $h_{pq}$, $h_{pqrs}$ for a chosen basis set; (2) map the fermionic Hamiltonian to a qubit Hamiltonian via Jordan-Wigner or Bravyi-Kitaev encoding; (3) simulate the qubit Hamiltonian on a quantum computer using Trotterization, LCU, or qubitization. The qubit count equals the number of molecular orbitals; the gate count scales with the number of Hamiltonian terms.
A Simple Example: Hydrogen Molecule
The simplest non-trivial molecule is H$_2$. In a minimal basis (STO-3G), H$_2$ requires 4 spin-orbitals and thus 4 qubits. Exploiting symmetries removes two qubits whose states are fixed, and the qubit Hamiltonian in the Jordan-Wigner encoding reduces to a two-qubit form:

$$H_{\text{H}_2} = g_0 I + g_1 Z_0 + g_2 Z_1 + g_3 Z_0 Z_1 + g_4 X_0 X_1 + g_5 Y_0 Y_1$$

where the coefficients $g_k$ depend on the internuclear distance. The $XX$ and $YY$ terms come from electron hopping, while the $ZZ$ term comes from electron-electron repulsion. We can simulate this with Trotter steps using the gates we already know:
This sandbox simulates a simplified H$_2$ Hamiltonian starting from the Hartree-Fock state $|01\rangle$. The $ZZ$ term represents electron-electron interaction, while $XX$ and $YY$ terms represent electron hopping (correlation). Run the circuit and observe the output: the presence of $|10\rangle$ probability indicates electron correlation beyond the mean-field (Hartree-Fock) approximation. This is precisely the quantum advantage - capturing correlation effects that classical mean-field methods miss.
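The structure of the two-qubit Hamiltonian can be explored classically for small cases. The sketch below (numpy only) assembles $H_{\text{H}_2}$ from its six Pauli terms and diagonalizes it; the coefficients $g_k$ are placeholder values chosen for illustration, not computed integrals for any particular bond length:

```python
# Diagonalizing the two-qubit H2-form Hamiltonian (placeholder coefficients).
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

g = [-0.3, 0.2, -0.4, 0.1, 0.05, 0.05]   # placeholder g_k, NOT real integrals
H = (g[0] * np.kron(I2, I2) + g[1] * np.kron(Z, I2) + g[2] * np.kron(I2, Z)
     + g[3] * np.kron(Z, Z) + g[4] * np.kron(X, X) + g[5] * np.kron(Y, Y))

evals, evecs = np.linalg.eigh(H)          # ascending eigenvalues
ground = evecs[:, 0]
print("ground-state energy:", evals[0])
print("probabilities:", np.round(np.abs(ground) ** 2, 4))   # |00>,|01>,|10>,|11>
```

The $XX$ and $YY$ terms mix $|01\rangle$ and $|10\rangle$, so the ground state carries weight on both - the correlation beyond a single-determinant (Hartree-Fock) state that the sandbox discussion highlights.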
Resource Estimates for Real Molecules
For industrially relevant molecules, the resource requirements are substantial:
| Molecule | Application | Qubits (est.) | Trotter Gates (est.) |
|---|---|---|---|
| H$_2$ (minimal basis) | Benchmark | 4 | $\sim 100$ |
| LiH | Battery materials | 12 | $\sim 10^4$ |
| FeMoCo (nitrogenase) | Nitrogen fixation catalyst | $\sim 100-200$ | $\sim 10^{11}-10^{14}$ |
| Cytochrome P450 | Drug metabolism | $\sim 200-400$ | $\sim 10^{12}-10^{15}$ |
The FeMoCo active site of nitrogenase - the enzyme responsible for biological nitrogen fixation - is a prime target for quantum simulation. Understanding how FeMoCo catalyzes the conversion of $\text{N}_2$ to $\text{NH}_3$ at ambient temperature and pressure (a reaction that industrially requires extreme conditions in the Haber-Bosch process) could revolutionize fertilizer production and reduce its enormous carbon footprint.
Quantum computers will not replace all of classical chemistry. For many molecules, classical methods (DFT, CCSD(T), DMRG) work well. Quantum advantage is expected specifically for strongly correlated systems - those with near-degenerate orbitals, transition metal complexes, and bond-breaking reactions - where classical methods fail or require uncontrolled approximations. The goal is not to replace classical chemistry but to extend its reach into regimes it cannot currently handle.
33.5 Simulating Materials and Condensed Matter
Beyond molecules, quantum simulation can address fundamental questions in condensed matter physics - the study of solids, liquids, and other many-body systems. Many of the most important open questions in physics involve quantum many-body systems that are beyond the reach of classical simulation.
The Hubbard Model
The Hubbard model is perhaps the most important unsolved model in condensed matter physics:
$$H = -t \sum_{\langle i,j \rangle, \sigma} c_{i\sigma}^\dagger c_{j\sigma} + U \sum_i n_{i\uparrow} n_{i\downarrow}$$

Despite its simplicity (only two parameters, $t$ and $U$), the 2D Hubbard model is believed to capture the essential physics of high-temperature superconductivity in cuprate materials. Yet its phase diagram remains unknown in important parameter regimes. Classical methods - quantum Monte Carlo, DMRG, dynamical mean-field theory - each have limitations (the sign problem, dimensionality restrictions, or mean-field approximations) that prevent a definitive solution.
A quantum computer could simulate the Hubbard model directly. Using the Jordan-Wigner transformation on a 2D lattice, the Hamiltonian maps to a qubit Hamiltonian with nearest-neighbor hopping terms (long Pauli strings due to the Jordan-Wigner encoding) and on-site interaction terms ($ZZ$ type). Trotterization then proceeds as before, with $\text{Rxx}$, $\text{Ryy}$ gates for hopping and $\text{Rzz}$ gates for interactions.
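The smallest Hubbard instance - two sites at half filling - is exactly solvable, which makes it a useful sanity check for the Jordan-Wigner mapping. The sketch below (numpy only) builds the 4-qubit Hamiltonian from Jordan-Wigner operators and, since the total number operator is diagonal in the computational basis, restricts to the two-electron sector by keeping basis states with two bits set. The minimum there should match the textbook result $E_0 = \big(U - \sqrt{U^2 + 16t^2}\big)/2$:

```python
# Two-site Hubbard model via Jordan-Wigner on 4 qubits.
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.diag([1, -1]).astype(complex)
lower = np.array([[0, 1], [0, 0]], dtype=complex)

def a(p, n):
    """Jordan-Wigner annihilation operator for orbital p of n."""
    ops = [Z] * p + [lower] + [I2] * (n - p - 1)
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

t_hop, U = 1.0, 4.0
n = 4   # spin-orbitals: 0 = site0 up, 1 = site1 up, 2 = site0 dn, 3 = site1 dn
ops = [a(p, n) for p in range(n)]
dag = [o.conj().T for o in ops]
num = [dag[p] @ ops[p] for p in range(n)]

H = np.zeros((16, 16), dtype=complex)
for (p, q) in [(0, 1), (2, 3)]:                  # hopping, one pair per spin
    H += -t_hop * (dag[p] @ ops[q] + dag[q] @ ops[p])
H += U * (num[0] @ num[2] + num[1] @ num[3])     # on-site repulsion

# Two-electron sector = computational basis states with exactly two bits set.
sector = [i for i in range(16) if bin(i).count("1") == 2]
Hsec = H[np.ix_(sector, sector)]
E0 = np.linalg.eigvalsh(Hsec)[0]
print(E0, (U - np.sqrt(U**2 + 16 * t_hop**2)) / 2)
```

Restricting by particle number matters: the full 16-dimensional spectrum mixes all fillings, and for these parameters the one-electron sector actually reaches a lower energy than the half-filled ground state.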
Topological Phases
Some of the most exotic quantum phases of matter are topological phases - states characterized not by local order parameters (like magnetization) but by global topological invariants. Examples include:
- Topological insulators: Materials that are insulating in their bulk but conduct on their surface through topologically protected edge states.
- Fractional quantum Hall states: Two-dimensional electron gases in strong magnetic fields that exhibit fractionalized excitations (anyons) with exotic exchange statistics.
- Quantum spin liquids: Magnetic materials where quantum fluctuations prevent ordering even at zero temperature, potentially hosting anyonic excitations useful for topological quantum computing.
Quantum simulation can probe these phases by preparing ground states of topological Hamiltonians and measuring topological invariants (e.g., entanglement entropy, string order parameters). This is an area where analog quantum simulators (particularly Rydberg atom arrays) have already made significant experimental progress.
Lattice Gauge Theories
Quantum field theories - the theoretical framework of particle physics - can be discretized on a lattice for numerical simulation. Lattice gauge theories describe the strong nuclear force (quantum chromodynamics) and electroweak interactions. Classical lattice Monte Carlo methods are powerful for equilibrium properties but fail for real-time dynamics (due to the sign problem) and for systems at finite density (relevant to neutron stars and heavy-ion collisions).
Quantum simulation of lattice gauge theories is a rapidly growing field. The Hamiltonian formulation of lattice gauge theories maps naturally to qubit systems (with gauge constraints implemented via energy penalties or symmetry-preserving encodings). Early quantum simulations have demonstrated proof-of-concept for 1+1 dimensional models (Schwinger model, lattice QED), and scaling to higher dimensions is an active area of research.
Condensed matter applications of quantum simulation include: the 2D Hubbard model (high-temperature superconductivity), topological phases of matter (topological insulators, quantum spin liquids), and lattice gauge theories (real-time dynamics of quantum field theories). These problems share a common feature: classical methods face fundamental limitations (sign problem, entanglement growth, dimensionality) that quantum simulation can potentially overcome.
Near-Term Prospects
What can we realistically expect from quantum simulation in the near term? The answer depends on the hardware:
- NISQ era (now - ~2030): Noisy quantum computers with 50-1000 qubits and limited circuit depth. Variational algorithms (VQE, QAOA) and shallow Trotter circuits can address small molecules and lattice models. Results must be carefully benchmarked against classical methods. Some scientific insights (e.g., qualitative phase diagrams) may be achievable even without full error correction.
- Early fault-tolerant era (~2030+): Logical qubits with low error rates enabling deeper circuits. First applications where quantum simulation provides unambiguous advantages over all classical methods - likely in quantum chemistry (molecules with 50-100 orbitals) or small lattice models.
- Full-scale fault-tolerant era: Thousands of logical qubits with arbitrary circuit depth. Industrial-scale quantum chemistry, materials design, and simulation of quantum field theories. This is when the vision of Feynman's 1982 proposal is fully realized.
The first quantum simulation experiments were demonstrations of simple lattice models on trapped-ion and superconducting platforms around 2010-2012. By 2023, analog simulators with over 200 atoms (Rydberg arrays, cold atoms in optical lattices) were probing many-body physics beyond the reach of exact classical simulation. Digital simulation has progressed from 2-qubit demonstrations to circuits on 100+ qubits, though at noise levels that limit accuracy. The field is in a period of rapid experimental progress, with each year bringing larger systems, lower error rates, and more sophisticated simulation protocols.
Interactive: Hamiltonian Expectation Value
A key output of digital quantum simulation is the expectation value of an observable on the time-evolved state. Adjust the evolution parameter below to see how the $\langle ZZ \rangle$ expectation value changes as the simulated Ising interaction evolves the system.