Chapter 13: How to Build a Quantum Computer

In the first twelve chapters of this textbook, we treated quantum computation as an abstract mathematical discipline. We manipulated qubits, applied unitary gates, and analyzed algorithms without ever asking the most basic engineering question: what is a qubit, physically? What real-world object stores a quantum superposition, and how do you poke it with just the right pulse to execute a Hadamard or a CNOT? This chapter answers those questions. We will survey the leading physical platforms for quantum computing, understand the criteria that any platform must satisfy, and confront the engineering challenges that make building a quantum computer one of the hardest technological feats ever attempted.

The landscape of quantum hardware is diverse and rapidly evolving. Superconducting circuits operate at temperatures colder than outer space. Trapped ions float in electromagnetic cages, manipulated by precisely tuned lasers. Neutral atoms are arranged by optical tweezers into programmable arrays. Photons race through waveguides at room temperature. Each approach has distinct strengths and weaknesses, and no single platform has yet emerged as the definitive winner. Understanding these trade-offs is essential for anyone who wants to move beyond textbook quantum computing and engage with the technology as it actually exists today.

13.1 What Makes a Good Qubit? (DiVincenzo's Criteria)

In 2000, the physicist David DiVincenzo of IBM Research published a landmark paper identifying five criteria that any physical system must satisfy to serve as a practical quantum computer. These criteria, now universally known as DiVincenzo's criteria, provide the standard checklist against which every hardware platform is evaluated.

Criterion 1: A Scalable Physical System with Well-Characterized Qubits

The system must contain identifiable two-level quantum systems (qubits) whose properties are well understood. Crucially, it must be possible to add more qubits without fundamentally changing the physics. Scalability means not just fitting more qubits on a chip but ensuring that control and readout infrastructure scales manageably as the system grows.

Criterion 2: The Ability to Initialize Qubits to a Known State

Before computation begins, all qubits must be set to a well-defined starting state, typically $|0\rangle$. In practice, initialization is achieved by cooling to the ground state (for superconducting qubits), optically pumping ions into a specific electronic state, or similar techniques. Initialization fidelity matters: qubits starting in the wrong state propagate errors through the entire computation.

Criterion 3: Long Coherence Times (Relative to Gate Operations)

Qubits are fragile. Interactions with the environment cause decoherence - the gradual loss of quantum information. Two timescales characterize this process:

  • $T_1$ (energy relaxation time): The timescale on which an excited qubit ($|1\rangle$) spontaneously decays to the ground state ($|0\rangle$). This is analogous to a ball rolling downhill - the qubit loses energy to its environment.
  • $T_2$ (dephasing time): The timescale on which the phase of a superposition is randomized. A qubit in the state $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ gradually loses the definite phase relationship between its $|0\rangle$ and $|1\rangle$ components, degrading into an incoherent mixture. Always $T_2 \leq 2 T_1$.

Key Concept.

What matters is not the absolute coherence time, but the ratio of coherence time to gate time. If a gate takes 20 nanoseconds and $T_2 = 100$ microseconds, you can execute roughly 5,000 gates before decoherence destroys your quantum state. This ratio determines how deep a circuit the hardware can reliably execute.
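This arithmetic is worth internalizing. A quick helper makes it concrete; the numbers below are the illustrative values from the text, not measurements of any particular device.

```python
def gate_budget(t2_us: float, gate_time_ns: float) -> int:
    """Rough number of sequential gates executable within one T2 window."""
    return int(t2_us * 1_000 / gate_time_ns)

# The example from the text: 20 ns gates against T2 = 100 microseconds.
print(gate_budget(100, 20))  # → 5000
```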

Criterion 4: A Universal Set of Quantum Gates

The system must support a set of gates sufficient to approximate any unitary operation to arbitrary precision. As we learned in earlier chapters, a common universal gate set consists of arbitrary single-qubit rotations plus a two-qubit entangling gate such as CNOT. In practice, different hardware platforms implement different native gate sets - for instance, IBM's Eagle-generation systems use the ECR (echoed cross-resonance) gate as their native two-qubit gate, while newer Heron processors use CZ. Meanwhile, trapped-ion systems typically implement the Mølmer-Sørensen or XX gate. Compilers decompose abstract circuits into whichever native gates the hardware provides.

Criterion 5: A Qubit-Specific Measurement Capability

At the end of a computation, you must be able to measure individual qubits and read out their states as classical bits (0 or 1). The measurement must be fast relative to coherence times and must have high fidelity - meaning if the qubit is in $|0\rangle$, the readout should report 0 with high probability, and likewise for $|1\rangle$. Measurement fidelities above 99% are now standard on leading platforms.

Beyond the Five.

DiVincenzo also identified two additional criteria specifically for quantum communication (as opposed to computation): (6) the ability to convert between stationary and flying qubits, and (7) the ability to faithfully transmit flying qubits between specified locations. These are essential for quantum networking and distributed quantum computing but are not required for a standalone quantum processor.

Every platform in the following sections can be evaluated against these five criteria. No platform satisfies all perfectly, and improving one often comes at the expense of another - a tension that drives the diversity of hardware approaches.

Interactive: DiVincenzo Criteria Radar Chart


13.2 Superconducting Qubits

Superconducting qubits are the most widely deployed quantum computing platform today, championed by IBM, Google, Rigetti, and others. They encode quantum information in the energy levels of tiny electrical circuits cooled to temperatures near absolute zero, where electrical resistance vanishes and quantum effects dominate macroscopic behavior.

The Transmon Qubit

The workhorse of the superconducting approach is the transmon (short for "transmission-line shunted plasma oscillation qubit"), developed at Yale in 2007 by Koch, Houck, and colleagues. A transmon is a nonlinear LC oscillator where the inductor is replaced by a Josephson junction - a thin insulating barrier between two superconducting electrodes.

An ordinary LC circuit has evenly spaced energy levels like a harmonic oscillator, making it useless as a qubit: a pulse exciting $|0\rangle \to |1\rangle$ would also drive $|1\rangle \to |2\rangle$. The Josephson junction introduces anharmonicity, making the energy gaps unequal (typically by 200-300 MHz). This allows microwave pulses to selectively address only the $|0\rangle$-$|1\rangle$ transition, defining a usable qubit.
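A short sketch shows why the anharmonicity is what makes selective addressing possible. The 5 GHz qubit frequency and -250 MHz anharmonicity are assumed values inside the typical ranges quoted in this section, not parameters of any specific device.

```python
# Illustrative transmon level structure: an anharmonic ladder where each
# step shrinks by the (negative) anharmonicity alpha.
f01_ghz = 5.0        # |0>-|1> transition frequency (assumed)
alpha_ghz = -0.25    # anharmonicity: ladder step shrinks by 250 MHz (assumed)

f12_ghz = f01_ghz + alpha_ghz                  # |1>-|2> transition frequency
detuning_mhz = abs(f01_ghz - f12_ghz) * 1_000  # how far a pulse at f01 misses 1->2

print(f"f01 = {f01_ghz} GHz, f12 = {f12_ghz} GHz")
print(f"a pulse at f01 is {detuning_mhz:.0f} MHz off-resonant from the 1->2 transition")
```

Because the drive at $f_{01}$ is hundreds of MHz detuned from the $|1\rangle \to |2\rangle$ transition, leakage out of the computational subspace is strongly suppressed.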

Operating Environment: The Dilution Refrigerator

Superconducting qubits must operate at approximately 10-20 millikelvin - more than 100 times colder than outer space (which sits at roughly 2.7 K due to the cosmic microwave background). At these temperatures, thermal energy ($k_B T$) is far below the qubit's energy splitting ($\hbar \omega$, typically 4-6 GHz), ensuring the qubit remains in its ground state unless deliberately excited.
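The Boltzmann factor $e^{-\hbar\omega / k_B T}$ makes the temperature requirement quantitative. Using the conversion $k_B/h \approx 20.84$ GHz/K, a quick estimate (with an assumed 5 GHz qubit) shows why 4 K is not cold enough:

```python
import math

# Compare thermal energy k_B*T to the qubit splitting h*f.
# k_B/h ~= 20.84 GHz per kelvin, so temperature converts directly to GHz.
KB_OVER_H_GHZ_PER_K = 20.84

def boltzmann_factor(f_ghz: float, t_kelvin: float) -> float:
    """exp(-h*f / (k_B*T)): ratio of excited- to ground-state population."""
    return math.exp(-f_ghz / (KB_OVER_H_GHZ_PER_K * t_kelvin))

print(f"{boltzmann_factor(5.0, 0.015):.1e}")  # negligible thermal excitation at 15 mK
print(f"{boltzmann_factor(5.0, 4.0):.2f}")    # nearly fully thermalized at 4 K
```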

The cooling is achieved by a dilution refrigerator, a cryogenic system that exploits the quantum properties of helium-3/helium-4 mixtures. A dilution refrigerator is a multi-stage device roughly the size of a wardrobe, with thermal stages at ~50 K, ~4 K, ~700 mK, ~100 mK, and finally ~10-15 mK at the mixing chamber where the quantum processor sits. The infrastructure requirements are substantial: vacuum pumps, helium circulation, heavily filtered microwave lines, and magnetic shielding.

Gates and Control

Single-qubit gates are implemented by sending precisely calibrated microwave pulses (at the qubit's resonant frequency, typically 4-6 GHz) through control lines connected to each qubit. The pulse's amplitude, duration, and phase determine which rotation is applied to the qubit's state on the Bloch sphere. A typical single-qubit gate takes roughly 20-50 nanoseconds.

Two-qubit entangling gates exploit the coupling between neighboring qubits. IBM's systems use the cross-resonance interaction: driving one qubit at the frequency of its neighbor creates a conditional rotation, implementing a CNOT-like gate. Google's systems use tunable couplers that allow qubits to be selectively brought into and out of resonance. Two-qubit gates are slower and noisier than single-qubit gates, typically taking 50-100 nanoseconds with fidelities of 99-99.9%.

Connectivity and Topology

Superconducting qubits are fabricated on planar chips using lithographic techniques. Each qubit couples to a small number of nearest neighbors - typically 2 to 4 in a square lattice or heavy-hex topology. To execute a CNOT between non-adjacent qubits, the compiler must insert SWAP gates (each decomposing into three CNOTs) to route quantum information, adding noise.

Hardware Constraint.

On a superconducting chip with nearest-neighbor connectivity, executing a CNOT between distant qubits may require multiple SWAP operations. Each additional SWAP introduces errors. Algorithms designed for all-to-all connectivity (like many textbook circuits) can incur significant overhead when compiled for superconducting hardware. This is why circuit transpilation - adapting an abstract circuit to a specific hardware topology - is a critical part of the quantum software stack.
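The overhead can be sketched with a simple cost model for a linear chain. This uses one common accounting (SWAP the control toward the target, apply the CNOT, and do not swap back); real transpilers often do better through gate cancellation and smarter routing, so treat this as an upper-bound illustration.

```python
def cnot_cost_linear_chain(q1: int, q2: int) -> int:
    """Two-qubit gates needed for one logical CNOT on a linear chain.

    Accounting: (distance - 1) SWAPs at 3 CNOTs each, plus the CNOT itself.
    """
    distance = abs(q1 - q2)
    return 3 * (distance - 1) + 1

print(cnot_cost_linear_chain(0, 1))   # adjacent: 1 CNOT
print(cnot_cost_linear_chain(0, 2))   # one SWAP + CNOT: 4 CNOTs
print(cnot_cost_linear_chain(0, 5))   # four SWAPs + CNOT: 13 CNOTs
```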

Leading Processors

IBM Heron (2023-2024): The original Heron processor has 133 qubits; Heron r2 (2024) increased to 156 qubits in a heavy-hex lattice, achieving median two-qubit gate fidelities above 99.5%. Heron uses fixed-frequency transmons with tunable couplers, allowing interaction strength between qubits to be adjusted on demand. IBM's roadmap targets 100,000+ qubits by decade's end through modular multi-chip architectures.

Google Willow (2024): Google's Willow chip contains 105 qubits and demonstrated a landmark error-correction result: increasing the surface code distance from 3 to 5 to 7 reduced the logical error rate at each step, achieving exponential suppression of errors with scale. This was the first conclusive demonstration of the scaling behavior that fault-tolerant quantum computing requires. Willow achieved $T_1$ times averaging approximately 68 microseconds (with some qubits exceeding 100 microseconds).

Performance Summary

| Metric | Typical Range |
| --- | --- |
| Qubit count | 50-1,200 (current generation) |
| $T_1$ coherence | 50-200 $\mu$s |
| $T_2$ coherence | 50-200 $\mu$s (up to $2T_1$) |
| Single-qubit gate time | 20-50 ns |
| Two-qubit gate time | 50-100 ns |
| Single-qubit gate fidelity | 99.9-99.97% |
| Two-qubit gate fidelity | 99.0-99.9% |
| Readout fidelity | 99.0-99.8% |
| Connectivity | Nearest-neighbor (2-4 neighbors) |
| Operating temperature | ~15 mK |

The sandbox below illustrates a key hardware constraint. We build a simple three-qubit GHZ state. On a linear chain of superconducting qubits (where qubit 0 connects to qubit 1, and qubit 1 connects to qubit 2, but qubit 0 does not directly connect to qubit 2), this circuit maps naturally onto the hardware because both CNOTs target adjacent qubits.

You should see roughly equal probabilities for 000 and 111, with little probability on other outcomes. Both CNOT gates act on adjacent qubits, so no SWAP routing is needed. Now consider what would happen if we needed a CNOT directly between q[0] and q[2] on this same linear chain - we would need to insert a SWAP through q[1], turning one two-qubit gate into four (a SWAP costs three CNOTs, plus the CNOT itself). Hardware topology matters.
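If the sandbox is unavailable, the same GHZ circuit can be checked with a minimal pure-Python statevector simulation (a from-scratch sketch, not any particular SDK's API). Qubit 0 is the leftmost bit of the basis label.

```python
import math

def apply_h(state, q, n):
    """Hadamard on qubit q of an n-qubit statevector."""
    s = 1 / math.sqrt(2)
    mask = 1 << (n - 1 - q)
    out = [0.0] * len(state)
    for i, a in enumerate(state):
        if i & mask:                 # qubit q is |1>
            out[i ^ mask] += s * a   # -> (|0> - |1>)/sqrt(2)
            out[i] -= s * a
        else:                        # qubit q is |0>
            out[i] += s * a          # -> (|0> + |1>)/sqrt(2)
            out[i ^ mask] += s * a
    return out

def apply_cnot(state, control, target, n):
    """Flip the target bit wherever the control bit is 1."""
    cmask = 1 << (n - 1 - control)
    tmask = 1 << (n - 1 - target)
    out = list(state)
    for i in range(len(state)):
        if i & cmask:
            out[i ^ tmask] = state[i]
    return out

n = 3
state = [0.0] * (1 << n)
state[0] = 1.0                        # start in |000>
state = apply_h(state, 0, n)          # H on q0
state = apply_cnot(state, 0, 1, n)    # CNOT q0 -> q1 (adjacent)
state = apply_cnot(state, 1, 2, n)    # CNOT q1 -> q2 (adjacent)

probs = {format(i, "03b"): abs(a) ** 2
         for i, a in enumerate(state) if abs(a) ** 2 > 1e-9}
print(probs)  # only '000' and '111' appear, each with probability 0.5
```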

13.3 Trapped Ions

Trapped-ion quantum computers use individual atoms, stripped of one electron to become positively charged ions, as qubits. The ions are confined in free space by oscillating electromagnetic fields (a Paul trap or linear ion trap) and manipulated by precisely targeted laser beams or microwave fields. This approach, pioneered by groups at NIST, Oxford, Innsbruck, and now commercialized by Quantinuum and IonQ, offers some of the highest gate fidelities and longest coherence times of any platform.

How Ions Become Qubits

The two qubit states $|0\rangle$ and $|1\rangle$ are encoded in two long-lived electronic energy levels of the ion. Common choices include:

  • Hyperfine qubits: Two hyperfine ground states of ions such as $^{171}\text{Yb}^+$ (ytterbium) or $^{9}\text{Be}^+$ (beryllium), split by a few gigahertz. These states have extremely long natural lifetimes - effectively infinite for practical purposes, with $T_1$ times that can exceed minutes or even hours.
  • Optical qubits: A ground state and a metastable excited state in ions such as $^{40}\text{Ca}^+$ (calcium) or $^{138}\text{Ba}^+$ (barium), connected by an optical (visible-light) transition. These have shorter natural lifetimes (on the order of seconds) but offer certain technical advantages for gate implementation.

All-to-All Connectivity

A chain of trapped ions in a linear Paul trap shares collective vibrational modes - the ions can oscillate together like beads on a string. These shared motional modes serve as a "quantum bus" that mediates entangling interactions between any pair of ions in the chain, regardless of their physical separation. This all-to-all connectivity is a major advantage over superconducting qubits. Any qubit can directly interact with any other qubit without the need for SWAP routing.

Key Concept.

Trapped ions offer all-to-all connectivity: any qubit can be entangled with any other qubit in a single gate operation. This eliminates the SWAP overhead that plagues nearest-neighbor architectures. For algorithms requiring long-range interactions (like the QFT used in Shor's algorithm), trapped ions can execute the circuit directly without topological routing.

Gate Implementation

Single-qubit gates are performed by applying laser pulses (or microwave pulses for hyperfine qubits) resonant with the qubit transition. The pulse duration, phase, and intensity determine the rotation angle and axis. Single-qubit fidelities routinely exceed 99.99%.

Two-qubit entangling gates use laser beams to couple two ions through their shared motional modes. The most common scheme is the Mølmer-Sørensen gate, which uses bichromatic laser fields (two frequencies symmetrically detuned from the qubit transition) to create a state-dependent force on the ions. The resulting XX interaction generates maximal entanglement. Two-qubit gate fidelities have reached 99.5-99.9% in leading systems.

The primary downside is speed. Because gates rely on the relatively slow mechanical motion of ions (oscillation frequencies of 1-5 MHz), two-qubit gates in trapped-ion systems typically take 100-500 microseconds - roughly 1,000 to 10,000 times slower than superconducting two-qubit gates. However, coherence times are correspondingly much longer, so the ratio of coherence time to gate time remains competitive.
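A quick comparison makes the point. The numbers below are representative midpoints of the ranges quoted in this chapter, not benchmarks of specific machines.

```python
# Depth budget: coherence time divided by two-qubit gate time.
platforms = {
    "superconducting": {"t2_s": 100e-6, "gate_s": 75e-9},   # 100 us T2, 75 ns gate
    "trapped ion":     {"t2_s": 5.0,    "gate_s": 300e-6},  # 5 s T2, 300 us gate
}

for name, p in platforms.items():
    ratio = p["t2_s"] / p["gate_s"]
    print(f"{name}: ~{ratio:,.0f} two-qubit gates per coherence window")
```

Despite gates that are thousands of times slower, the trapped-ion depth budget comes out comparable or better, because coherence times are longer by an even larger factor.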

Scalability: The QCCD Architecture

A single linear ion chain becomes unwieldy beyond ~20-50 ions: the motional mode spectrum gets crowded and cross-talk increases. The leading scaling approach is the quantum charge-coupled device (QCCD) architecture (Kielpinski, Monroe, Wineland, 2002), where ions are shuttled between multiple trap zones connected by junctions. Small groups interact in "gate zones," and ions are physically transported as needed. Quantinuum's H-series processors use this architecture.

Leading Systems

Quantinuum H2 (2023-2025): Uses $^{171}\text{Yb}^+$ ions in a QCCD trap. Launched with 32 qubits in May 2023. In April 2024, achieved two-qubit gate fidelities of 99.9% on H1-1 (the first commercially available system to reach "three nines"). Upgraded to 56 qubits in June 2024, and achieved a record quantum volume of $2^{25}$ (33,554,432) in September 2025. All-to-all connectivity means any qubit can interact with any other without routing overhead.

IonQ Forte (2022-2024): Uses $^{171}\text{Yb}^+$ ions with acousto-optic deflectors, supporting 36 algorithmic qubits with all-to-all connectivity. IonQ has pursued photonic interconnects for scaling beyond a single trap.

Performance Summary

| Metric | Typical Range |
| --- | --- |
| Qubit count | 20-56 (current generation) |
| $T_1$ coherence | Seconds to hours (hyperfine) |
| $T_2$ coherence | 1-10 seconds (typical operational) |
| Single-qubit gate time | 1-10 $\mu$s |
| Two-qubit gate time | 100-500 $\mu$s |
| Single-qubit gate fidelity | 99.95-99.99% |
| Two-qubit gate fidelity | 99.5-99.9% |
| Readout fidelity | 99.5-99.9% |
| Connectivity | All-to-all |
| Operating temperature | Room temp (vacuum chamber) |

13.4 Neutral Atoms

Neutral-atom quantum computers use individual atoms - not ions, but electrically neutral atoms - trapped and arranged by focused laser beams called optical tweezers. This approach, commercialized by QuEra Computing and Pasqal, has emerged as a strong contender for scalability, with systems already demonstrating hundreds of qubits in programmable two-dimensional and three-dimensional arrays.

Optical Tweezers and Atom Arrays

An optical tweezer is a tightly focused laser beam creating a microscopic potential well. A single atom (typically rubidium-87 or cesium-133) is trapped at each focus. Arrays of tweezers, generated by spatial light modulators, assemble atoms into arbitrary geometric patterns - grids, rings, triangles, or custom arrangements. Defects in loading (empty sites) can be corrected by rearranging atoms in real time, providing dynamic, reconfigurable connectivity - unlike superconducting chips where topology is fixed at fabrication.

Rydberg Interactions for Entanglement

Entangling gates exploit Rydberg states - highly excited electronic states where the outermost electron orbits far from the nucleus, giving the atom an enormously exaggerated electric dipole moment. The key mechanism is the Rydberg blockade: when one atom is excited to a Rydberg state, the strong dipole-dipole interaction shifts the energy levels of nearby atoms, preventing their excitation. This conditional behavior - "if atom A is excited, atom B cannot be" - implements the controlled interaction needed for entangling gates. The blockade radius is typically 5-10 micrometers.
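The blockade radius follows from balancing the van der Waals shift $C_6/r^6$ against the Rabi frequency: $R_b = (C_6/\Omega)^{1/6}$. The sketch below uses an assumed order-of-magnitude $C_6$ for a high-lying rubidium Rydberg state and a 1 MHz Rabi frequency, purely for illustration.

```python
# Rydberg blockade radius: R_b = (C6 / Omega)^(1/6), the distance at
# which the van der Waals interaction shift equals the Rabi frequency.
c6_ghz_um6 = 860.0   # van der Waals coefficient, GHz * um^6 (assumed)
omega_ghz = 1e-3     # Rabi frequency: 1 MHz expressed in GHz (assumed)

r_blockade_um = (c6_ghz_um6 / omega_ghz) ** (1 / 6)
print(f"blockade radius ~ {r_blockade_um:.1f} um")  # lands in the 5-10 um range
```

Note the sixth root: because the interaction falls off so steeply with distance, the blockade boundary is sharp, which is exactly what makes the conditional logic clean.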

Scalability: The Killer Feature

Because the atoms are identical (every rubidium-87 atom has exactly the same properties) and trapping is optical (no physical wires to each qubit), scaling to hundreds or thousands of qubits is primarily an optics/laser engineering challenge, not a fundamental physics obstacle.

Key Concept.

Neutral atoms benefit from natural uniformity: every atom of the same isotope is identical, unlike superconducting qubits where each circuit has slightly different fabrication imperfections. This eliminates an entire category of calibration challenges and makes scaling more predictable.

Leading Systems

QuEra (2023-2024): Uses rubidium atoms in reconfigurable optical tweezer arrays with 256+ qubits available through Amazon Braket. In a landmark 2023 Nature paper, a Harvard-QuEra-MIT collaboration demonstrated 48 logical qubits - the largest demonstration of error-corrected logical qubits at the time. QuEra's roadmap targets 10,000 physical qubits by 2026.

Pasqal (2023-2024): Spun out of research by Browaeys and Lahaye at Institut d'Optique, Pasqal builds neutral-atom processors using rubidium. Their systems arrange atoms in arbitrary 2D and 3D geometries, making them suited for analog quantum simulation. Pasqal has demonstrated systems with over 300 atoms.

Current Limitations

Two-qubit gate fidelities using Rydberg interactions have reached 99.5% in the best demonstrations but have not yet consistently matched top trapped-ion or superconducting results. Atom loss during Rydberg excitation (atoms occasionally escape the trap) is another challenge being actively addressed.

Performance Summary

| Metric | Typical Range |
| --- | --- |
| Qubit count | 100-1,000+ (current generation) |
| $T_1$ coherence | 1-10 seconds (ground-state trapping) |
| $T_2$ coherence | 0.1-1 seconds (hyperfine qubit) |
| Single-qubit gate time | 0.5-5 $\mu$s |
| Two-qubit gate time | 0.5-2 $\mu$s |
| Single-qubit gate fidelity | 99.5-99.9% |
| Two-qubit gate fidelity | 99.0-99.5% |
| Readout fidelity | 97-99.5% |
| Connectivity | Reconfigurable; native multi-qubit gates |
| Operating temperature | Ultracold gas (~$\mu$K), room-temp apparatus |

13.5 Photonic Quantum Computing

Photonic quantum computers use particles of light - photons - as qubits. This approach is radically different from the matter-based platforms above. Photons do not need to be cooled; they travel at the speed of light; and they naturally resist decoherence because they interact very weakly with their environment. The companies PsiQuantum and Xanadu are the leading commercial efforts in this space.

Encoding Qubits in Light

Quantum information can be encoded in photons in several ways. In polarization encoding, horizontal and vertical polarization map to $|0\rangle$ and $|1\rangle$. In dual-rail encoding, a single photon occupies one of two spatial paths (waveguides), with the path determining the qubit value. Xanadu's approach uses squeezed-state encoding with continuous-variable states rather than single photons, encoding information in the quadrature amplitudes of the electromagnetic field.

The Challenge of Two-Photon Gates

The fundamental difficulty of photonic quantum computing is that photons do not naturally interact with each other. Two photons can pass through each other with no effect whatsoever. This is wonderful for coherence (the environment cannot disturb your qubits) but terrible for entangling gates (you need qubits to interact to create entanglement).

In 2001, Knill, Laflamme, and Milburn (KLM) showed that universal quantum computation is possible using only linear optics (beam splitters, phase shifters, single-photon detectors) combined with measurement and feed-forward. KLM-style gates are probabilistic, but when combined with quantum teleportation and cluster-state techniques, the probabilistic element can be managed.
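The problem with naively chaining probabilistic gates is easy to quantify: if each gate succeeds independently with probability $p$, a circuit of $n$ gates succeeds with probability $p^n$. The $p$ value below is illustrative, not the success probability of any particular KLM gate construction.

```python
# Exponential decay of naive chaining - the overhead that teleportation
# and cluster-state techniques are designed to manage.
def chain_success(p: float, n: int) -> float:
    """Probability that all n independent probabilistic gates succeed."""
    return p ** n

for n in (1, 10, 50):
    print(f"{n} gates: success probability {chain_success(0.5, n):.3g}")
```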

Measurement-Based Quantum Computing.

Many photonic approaches use the measurement-based model: prepare a large entangled "cluster state," then compute by measuring individual qubits in adaptively chosen bases. This model is mathematically equivalent to the circuit model but maps more naturally onto photonic hardware, where entangled resource states can be generated optically.

Advantages of Photonics

Photonic circuits can operate at room temperature (though single-photon detectors often require cryogenic cooling to ~1-4 K), dramatically reducing infrastructure. Photons travel through optical fibers over long distances, making photonic systems natural candidates for quantum networking. Photonic circuits can leverage existing silicon photonics fabrication from the telecom industry. And photonic gates operate at the speed of light, with time-bin operations running at GHz rates.

Leading Efforts

PsiQuantum: Pursuing a fault-tolerant photonic quantum computer using standard silicon photonics manufacturing at GlobalFoundries. Their approach uses fusion-based quantum computation, where small entangled resource states are "fused" through Bell measurements. PsiQuantum targets a million-qubit system but has not yet released a publicly accessible processor.

Xanadu: Xanadu's Borealis processor (2022) demonstrated quantum computational advantage on Gaussian boson sampling using squeezed-light states and programmable beam splitter arrays. While not universal computation, it showed that photonic systems can perform tasks beyond efficient classical simulation. Xanadu also develops PennyLane, a widely used quantum software framework.

Current Limitations

Photon loss is the dominant error source. Generating single photons on demand with high efficiency and purity remains challenging, and photon detectors have finite efficiency (typically 90-98%). The probabilistic nature of linear optical gates means that resource overhead for fault tolerance is substantial. Despite these challenges, compatibility with semiconductor manufacturing gives photonics a plausible path to very large-scale systems.

13.6 Other Platforms

Beyond the four major approaches above, several other physical platforms are under active development. Each offers unique potential advantages, though all are at earlier stages of maturity for general-purpose quantum computing.

Topological Qubits (Microsoft)

Microsoft has pursued one of the most ambitious and unconventional approaches: topological quantum computing. The idea is to encode quantum information not in a single particle's state but in the collective, global properties of exotic quasiparticles called non-abelian anyons. The most studied candidate is the Majorana zero mode, predicted to appear at the ends of specially engineered semiconductor-superconductor nanowires.

The appeal is intrinsic error protection. Because quantum information is stored nonlocally - spread across the spatial extent of the quasiparticle pair - local noise cannot access or corrupt it. This is analogous to encoding a message in the number of knots in a rope: small deformations do not change the knot count. Gates are performed by "braiding" quasiparticles around each other, and the result depends only on the topological class of the braid.

In early 2025, Microsoft announced their Majorana 1 chip, described as the first quantum processor based on a topological core architecture. The device demonstrated controllable creation and measurement of Majorana zero modes in indium arsenide / aluminum nanowire devices. While a milestone, it was a proof of concept rather than a functional multi-qubit processor. Topological quantum computing remains the most speculative major approach, with the furthest distance to travel before programmable multi-qubit systems.

Quantum Dots (Semiconductor Spin Qubits)

Quantum dot qubits encode information in the spin of individual electrons confined in semiconductor nanostructures - typically silicon or gallium arsenide. A quantum dot is a tiny region (tens of nanometers) where an electron is electrostatically trapped, with spin-up and spin-down serving as $|0\rangle$ and $|1\rangle$.

The major appeal is compatibility with existing CMOS manufacturing, raising the prospect of leveraging decades of semiconductor industry investment. Intel has been a prominent player, and academic groups in Delft, UNSW Sydney, and RIKEN have made significant progress.

Silicon spin qubits have demonstrated single-qubit fidelities above 99.9% and two-qubit fidelities above 99%. Gate speeds are fast (tens of nanoseconds for single-qubit, hundreds for two-qubit) and they operate at ~1 K - cold, but 100 times warmer than superconducting qubits. The main challenge is scaling: controlling many tightly packed dots requires dense wiring and individual calibration, and achieving consistently high two-qubit fidelities across large arrays remains an open problem.

NV Centers in Diamond

A nitrogen-vacancy (NV) center is a point defect in a diamond crystal where a nitrogen atom replaces one carbon atom adjacent to a vacant lattice site. The electron spin associated with this defect can serve as a qubit with remarkably long coherence times - $T_2$ can exceed milliseconds at room temperature and seconds at cryogenic temperatures.

NV centers can be individually addressed and read out optically, initialized and manipulated with microwave pulses. Their room-temperature operation and long coherence times make them attractive for quantum sensing and small-scale quantum information. However, coupling between distant NV centers is weak, limiting multi-qubit demonstrations to a handful of qubits. NV centers are most promising for quantum networking (as memory nodes) and quantum sensing (magnetometry, thermometry) rather than large-scale computation.

Why So Many Approaches?

The diversity of platforms reflects a fundamental truth: we do not yet know which physical system will best satisfy all of DiVincenzo's criteria at scale. Each makes different trade-offs between speed, fidelity, connectivity, scalability, and operating conditions. The field is in an exploratory phase, and it is possible that different applications will favor different hardware - or that the winning approach has not yet been invented.

13.7 Comparing Platforms

The following table summarizes key characteristics of the major platforms. Every number is approximate and rapidly evolving. The value lies not in memorizing specific figures but in understanding the qualitative trade-offs.

| Property | Superconducting | Trapped Ions | Neutral Atoms | Photonic | Quantum Dots |
| --- | --- | --- | --- | --- | --- |
| Qubit count | 50-1,200 | 20-56 | 100-1,000+ | 100+ (modes) | 2-12 |
| $T_2$ coherence | 50-200 $\mu$s | 1-10 s | 0.1-1 s | N/A (photon loss) | 1-100 $\mu$s |
| Two-qubit gate time | 50-100 ns | 100-500 $\mu$s | 0.5-2 $\mu$s | ~ns (optical) | 100-500 ns |
| Two-qubit gate fidelity | 99.0-99.9% | 99.5-99.9% | 99.0-99.5% | ~95-99% | 99.0-99.5% |
| Connectivity | Nearest-neighbor | All-to-all | Reconfigurable | Programmable | Nearest-neighbor |
| Gate-to-coherence ratio | ~$10^3$-$10^4$ | ~$10^4$-$10^5$ | ~$10^4$-$10^5$ | Loss-limited | ~$10^2$-$10^3$ |
| Operating temp. | ~15 mK | Room temp (vacuum) | $\sim\mu$K (atoms) | Room temp* | ~0.1-1 K |
| Native gate set | ECR/CZ + rotations | XX + rotations | CZ + rotations | Beam splitters | CNOT-like + rotations |
| Key advantage | Speed, maturity | Fidelity, connectivity | Scalability | Networking, room temp | CMOS compatibility |
| Key challenge | Connectivity, cooling | Speed, scale | Gate fidelity | Photon loss, det. gates | Scale, crosstalk |
| Major players | IBM, Google, Rigetti | Quantinuum, IonQ | QuEra, Pasqal | PsiQuantum, Xanadu | Intel, Delft, UNSW |

* Photonic circuits operate at room temperature, but single-photon detectors typically require cryogenic cooling (~1-4 K).

DiVincenzo Scorecard

How does each platform fare against DiVincenzo's five criteria? The assessment below uses three levels: strong, moderate, and developing.

| Criterion | Superconducting | Trapped Ions | Neutral Atoms | Photonic | Quantum Dots |
| --- | --- | --- | --- | --- | --- |
| 1. Scalable qubits | Strong | Moderate | Strong | Moderate | Moderate |
| 2. Initialization | Strong | Strong | Strong | Strong | Strong |
| 3. Long coherence | Moderate | Strong | Strong | Strong | Moderate |
| 4. Universal gates | Strong | Strong | Moderate | Moderate | Moderate |
| 5. Measurement | Strong | Strong | Moderate | Moderate | Strong |

What Metric Matters Most?

For near-term NISQ computing (Chapter 14), the most important metric is arguably two-qubit gate fidelity, since two-qubit gates dominate the error budget. For fault-tolerant computing, the question shifts to whether the physical error rate is below the fault-tolerance threshold (~1% for the surface code) and whether the platform can scale to the millions of physical qubits that error correction demands.
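A crude error-budget model shows why two-qubit fidelity dominates: with fidelity $f$ and $n$ two-qubit gates (ignoring all other error sources), the circuit succeeds with probability roughly $f^n$. The gate counts below are chosen to show where the budget crosses 50%.

```python
# Back-of-the-envelope circuit success estimate from two-qubit fidelity.
def est_success(fidelity: float, n_gates: int) -> float:
    return fidelity ** n_gates

print(round(est_success(0.99, 69), 2))     # 99% fidelity: ~69 gates reach 0.5
print(round(est_success(0.999, 693), 2))   # 99.9% fidelity: ~10x the depth
```

An order-of-magnitude improvement in gate fidelity buys an order of magnitude in usable circuit depth, which is why "three nines" milestones matter so much.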

The sandbox below prepares a Bell pair - one Hadamard and one CNOT. On a perfect (simulated) device, you see only 00 and 11. On real hardware, the finite fidelity of the CNOT introduces a small probability of the "wrong" outcomes 01 and 10, at rates determined by the hardware's two-qubit gate fidelity.
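The effect of the imperfect CNOT can be modeled analytically with a toy depolarizing channel: with probability $p$ the two-qubit state is replaced by the maximally mixed state, making all four outcomes equally likely. Setting $p = 0.01$ loosely mimics a 99% two-qubit gate fidelity; real noise channels are richer than this.

```python
# Toy depolarizing model of the noisy Bell-pair readout distribution.
def bell_distribution(p: float) -> dict:
    """Mix the ideal Bell-pair outcome distribution with uniform noise."""
    ideal = {"00": 0.5, "01": 0.0, "10": 0.0, "11": 0.5}
    return {k: (1 - p) * v + p / 4 for k, v in ideal.items()}

print(bell_distribution(0.0))    # perfect device: only 00 and 11
print(bell_distribution(0.01))   # small weight leaks onto 01 and 10
```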

The Road Ahead

The quantum hardware landscape evolves at a pace that makes any snapshot quickly obsolete. But the underlying trade-offs are durable: speed versus fidelity, connectivity versus scalability, isolation from noise versus ability to control. Understanding these tensions, grounded in DiVincenzo's criteria, equips you to evaluate new developments as they emerge.

In Chapter 14, we confront the central consequence of imperfect hardware: noise. Every real quantum computer operates in the noisy intermediate-scale quantum (NISQ) regime, where qubit counts are too small and error rates too high for full fault-tolerant error correction. Understanding what useful computation is possible despite noise - and what error correction strategies will eventually overcome it - is the subject we turn to next.

Interactive: Hardware Topology Explorer

Real quantum processors have limited qubit connectivity - not every pair of qubits can interact directly. Explore the native gate sets and connectivity graphs of different hardware platforms.


Interactive: Transpile for Real Hardware

Quantum circuits must be transpiled - adapted to the native gate set and qubit connectivity of the target hardware. See how the same GHZ-state circuit changes when compiled for different quantum processors. Notice how the gate count and depth increase for hardware with restricted connectivity.

Interactive: Platform Comparison Table
