Physics 18, 174
Current quantum computers are noisy, and that noise limits the kind of quantum machine that could outpace classical computers.
We are currently in a fascinating era of quantum computing in which near-term experiments may be able to achieve “quantum advantage” and outperform classical computers at certain tasks. While a conclusive demonstration of quantum advantage will be a watershed moment in the history of computation and physics, current experiments have profound limitations, and a much more fundamental understanding of these limitations is needed to rigorously assess the computational power of quantum schemes. New research by Thomas Schuster from Caltech and his colleagues explores such limitations from the viewpoint of classical algorithms that can simulate quantum computations [1]. Their work shows that the possibility of noisy quantum computers outdoing classical computers may be restricted to a “Goldilocks zone” between too few and too many qubits.
Uncorrected noise stands as the most significant limitation of the current quantum computing era. A representative case is Google’s first quantum-advantage experiment in 2019, which used 53 superconducting qubits [2]. Although groundbreaking, that computation produced errors in the vast majority of its runs: the estimated circuit fidelity was only 0.2%, meaning that noise accounted for the remaining 99.8% of the output. Quantum error correction promises to mitigate the effects of this noise, but the strategy requires many additional qubits, a scaling of computer size that exceeds current experimental capabilities. It is therefore critical to understand whether quantum advantage is achievable without error correction.
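To get a feel for why fidelity drops so quickly, consider a rough back-of-envelope estimate (a sketch under an independent-error assumption, not the error analysis of [2]): if each of G gates fails with probability gamma, an entire run survives with probability roughly (1 − gamma)^G. The numbers below are hypothetical round figures chosen to match the scale of the 2019 experiment.

```python
# Illustrative only: an independent-error model in which a circuit with G
# gates, each failing with probability gamma, runs error-free with
# probability (1 - gamma)**G. The gate count and per-gate error rate are
# hypothetical round numbers, not measured data from the experiment.
num_gates = 1500
error_per_gate = 0.004

fidelity = (1 - error_per_gate) ** num_gates
print(f"estimated circuit fidelity: {fidelity:.2%}")   # ~0.24%
print(f"fraction of noisy runs:     {1 - fidelity:.2%}")
```

Even sub-percent per-gate error rates, compounded over a thousand or so gates, leave only a fraction of a percent of the signal intact.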
Schuster and colleagues place important constraints on quantum-advantage experiments that do not utilize error correction. To do this, the researchers revisit a classical algorithm designed to simulate a noisy quantum circuit [3, 4]. The algorithm is meant to reproduce the output of the circuit—more precisely, the probability that a particular output will be measured. The math behind the algorithm is a Feynman path integral, which intuitively represents a sum of the different ways that the circuit’s quantum state can evolve over time. In general, this sum has an exponentially large number of terms, or “Pauli paths.” However, noise in the circuit can be shown to “kill off” the contribution of most of these paths (Fig. 1). As a result, a noisy quantum circuit can be simulated by explicitly calculating the value of a very small number of specially chosen Pauli paths that survive the noise.
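A minimal sketch of the truncation idea (a toy in Python; the path weights, damping rule, and cutoff are simplifications of the analysis in [3, 4], not the authors’ code): under depolarizing noise of rate gamma, a Pauli path with w non-identity Pauli operators is damped by a factor of (1 − gamma)^w, so only low-weight paths need to be summed.

```python
from itertools import product

# Toy illustration of Pauli-path truncation (a sketch of the idea, not the
# authors' implementation). A "path" assigns one Pauli operator (I, X, Y, Z)
# to each qubit at each circuit layer; its weight is the number of
# non-identity entries. Under depolarizing noise of rate gamma, a path of
# weight w is damped by (1 - gamma)**w, so high-weight paths contribute
# negligibly and can be dropped.
PAULIS = "IXYZ"

def surviving_paths(num_qubits, num_layers, gamma, cutoff):
    """Yield (path, damping) for paths whose damping factor exceeds cutoff."""
    slots = num_qubits * num_layers
    for path in product(PAULIS, repeat=slots):
        weight = sum(p != "I" for p in path)
        damping = (1 - gamma) ** weight
        if damping > cutoff:
            yield path, damping

kept = list(surviving_paths(num_qubits=2, num_layers=3, gamma=0.6, cutoff=0.05))
print(f"{len(kept)} of {4 ** 6} paths survive")  # 694 of 4096
```

In a real simulation the surviving paths are chosen analytically rather than by brute-force enumeration, but the effect is the same: noise prunes the exponentially large sum down to a tractable one.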
Thanks to the reduction in the number of Pauli paths, the classical algorithm can be run in a relatively short time, which helps to level the playing field between quantum and classical machines. As shown in previous studies [3, 4], the algorithm takes longer if the corresponding quantum circuit has more qubits, but it speeds up if the circuit has more noise. More specifically, the algorithm’s runtime scales polynomially in the number of qubits but exponentially in the inverse of the noise rate per gate. From the perspective of a scientist trying to develop a faster-than-classical quantum computer, these scaling relations imply that it is much more important to reduce the noise in the quantum computer than to add qubits.
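Plugging the stated scaling into a toy cost model makes the asymmetry concrete (the constants a and b below are hypothetical; the true exponents depend on the circuit family and target accuracy):

```python
import math

# Toy cost model for the stated scaling: runtime ~ n**a * exp(b / gamma),
# with purely illustrative constants a and b.
def classical_runtime(num_qubits, gamma, a=3, b=2.0):
    return num_qubits ** a * math.exp(b / gamma)

base = classical_runtime(num_qubits=50, gamma=0.01)
# Doubling the qubit count raises the cost by a factor of 2**a = 8 ...
print(classical_runtime(100, 0.01) / base)   # 8.0
# ... while halving the noise rate raises it by exp(b / gamma) = e**200.
print(classical_runtime(50, 0.005) / base)   # ~7e86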
The original algorithm targeted so-called random quantum circuits, in which the gates are chosen at random. This randomness is a common feature of current quantum-advantage experiments, as such circuits offer distinct practical advantages in implementation [5, 6]. But a major open question remains: Could we get around the quantum-advantage limitations by removing randomness? In other words, could quantum circuits with carefully structured gates be a harder target for classical simulation algorithms? Schuster and colleagues tackle this question by extending the analysis of the classical simulation algorithm to a much broader class of quantum circuits. Specifically, the researchers prove that the Pauli-path algorithm can simulate arbitrary noisy quantum circuits, with one critical caveat: The algorithm succeeds with high probability only with respect to a randomly chosen input state. That is, it fails for a small fraction of inputs, which correspond to a known case in which simulating quantum circuits becomes very hard.
The moral of this work can be stated clearly: Too much noise can kill quantum advantage, regardless of whether the circuit is random or structured. When assessing the computational power of noisy quantum experiments, it is therefore essential to consider how the noise rate per gate changes as more qubits are added. Ideally, the noise rate would scale inversely with the number of qubits, but engineering such a vanishing noise rate is difficult. In practice, with a fixed per-gate noise rate, keeping the total noise low places a stringent upper bound on the number of qubits.
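A back-of-envelope version of this trade-off (a sketch under an independent-error assumption, not a bound from [1]): an n-qubit circuit of depth d contains roughly n·d gates, so the surviving signal scales like (1 − gamma)^(n·d), and keeping it above a fixed threshold forces n·d to stay below roughly ln(1/threshold)/gamma.

```python
import math

# Sketch of the qubit budget under an independent-error assumption:
# (1 - gamma)**(n * d) >= threshold  requires roughly
# n * d <= ln(1 / threshold) / gamma.
def max_qubits(gamma, depth, threshold=0.01):
    max_gates = math.log(1 / threshold) / gamma
    return int(max_gates / depth)

for gamma in (0.01, 0.005, 0.001):
    print(f"gamma = {gamma}: at depth 20, at most ~{max_qubits(gamma, 20)} qubits")
```

Halving the per-gate noise rate doubles the qubit budget, which is the quantitative sense in which the Goldilocks zone is set by the noise.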
Noisy quantum computers thus face strict limitations in achieving quantum advantage, but there is at least one important setting that may not be subject to such constraints. While conventional approaches estimate expectation values of observables, many current quantum-advantage experiments solve the harder problem of sampling from the full output distribution of a specified quantum circuit, generated by measuring all qubits in the system. In this setting, the guarantee proved by Schuster and colleagues is more restrictive: the algorithm can only simulate noisy quantum-circuit ensembles that “anticoncentrate.” Anticoncentration is a statistical property indicating that the output distribution is not overly concentrated on a few outcomes. This property holds for many natural circuit ensembles but fails for others, such as circuits subject to realistic “nonunital” noise [7, 8]. Whether these “concentrated” quantum circuits could still achieve a scalable quantum advantage remains an important open direction for future research.
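One standard way to quantify anticoncentration (a common diagnostic in this literature, not a construct specific to [1]) is the collision probability Z, the sum of the squared output probabilities: the uniform distribution over 2^n outcomes gives Z = 1/2^n, anticoncentrated ensembles stay within a constant factor of that value, and concentrated ones do not. A small numerical illustration:

```python
import numpy as np

# Collision probability Z = sum_x p(x)**2 as an anticoncentration diagnostic.
# Uniform over 2**n outcomes gives Z * 2**n = 1; anticoncentrated ensembles
# stay within a constant factor of 1, while concentrated ones blow up.
rng = np.random.default_rng(seed=0)
n = 10
dim = 2 ** n

uniform = np.full(dim, 1 / dim)
# Porter-Thomas-like statistics, typical of random-circuit outputs:
random_circuit = rng.exponential(scale=1 / dim, size=dim)
random_circuit /= random_circuit.sum()
# Concentrated: a single outcome carries half of the probability mass.
concentrated = np.full(dim, 0.5 / (dim - 1))
concentrated[0] = 0.5

for name, p in [("uniform", uniform),
                ("random-circuit-like", random_circuit),
                ("concentrated", concentrated)]:
    print(f"{name:>20}: Z * 2^n = {np.sum(p ** 2) * dim:.2f}")
```

The random-circuit-like distribution lands near 2, within a constant factor of uniform, while the concentrated one is larger by orders of magnitude, placing it outside the regime covered by the new simulation guarantee.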
Schuster and colleagues make a crucial contribution to the field of quantum computing. The contribution is important from a fundamental perspective, as it elucidates the dividing line between noisy quantum computation and classical computation. At the same time, these results reinforce a pragmatic message that will be important for future quantum experiments: While near-term quantum advantage may be possible with a Goldilocks number of qubits (not too small but also not too large relative to the noise rate), the only path to a fully scalable quantum advantage is quantum error correction.
References
- T. Schuster et al., “A polynomial-time classical algorithm for noisy quantum circuits,” Phys. Rev. X 15, 041018 (2025).
- F. Arute et al., “Quantum supremacy using a programmable superconducting processor,” Nature 574, 505 (2019).
- X. Gao and L. Duan, “Efficient classical simulation of noisy quantum computation,” arXiv:1810.03176.
- D. Aharonov et al., “A polynomial-time classical algorithm for noisy random circuit sampling,” Proc. 55th Ann. ACM Symp. Theory of Computing (STOC), 945 (2023).
- S. Boixo et al., “Characterizing quantum supremacy in near-term devices,” Nat. Phys. 14, 595 (2018).
- A. Bouland et al., “On the complexity and verification of quantum random circuit sampling,” Nat. Phys. 15, 159 (2019).
- A. Deshpande et al., “Tight bounds on the convergence of noisy random circuits to the uniform distribution,” PRX Quantum 3, 040329 (2022).
- B. Fefferman et al., “Effect of nonunital noise on random-circuit sampling,” PRX Quantum 5, 030317 (2024).