The stochastic, non-unitary nature of measurement is a foundational principle in quantum theory and stands in stark contrast to the deterministic, unitary evolution prescribed by Schrödinger’s equation1. Because of these unique properties, measurement is key to some fundamental protocols in quantum information science, such as teleportation2, error correction19 and measurement-based computation20. All these protocols use quantum measurements, and classical processing of their outcomes, to build particular structures of quantum information in space–time. Remarkably, such structures may also emerge spontaneously from random sequences of unitary interactions and measurements. In particular, ‘monitored’ circuits, comprising both unitary gates and controlled projective measurements (Fig. 1a), were predicted to give rise to distinct non-equilibrium phases characterized by the structure of their entanglement3,4,21,22,23, either ‘volume law’24 (extensive) or ‘area law’25 (limited), depending on the rate or strength of measurement.
In principle, quantum processors allow full control of both unitary evolution and projective measurements (Fig. 1a). However, despite their importance in quantum information science, the experimental study of measurement-induced entanglement phenomena26,27 has been limited to small system sizes or efficiently simulatable Clifford gates. The stochastic nature of measurement means that the detection of such phenomena requires either the exponentially costly post-selection of measurement outcomes or more sophisticated data-processing techniques. This is because the phenomena are visible only in the properties of quantum trajectories; a naive averaging of experimental repetitions incoherently mixes trajectories with different measurement outcomes and fully washes out the non-trivial physics. Furthermore, implementing the model in Fig. 1a requires mid-circuit measurements that are often problematic on superconducting processors because the time needed to perform a measurement is a much larger fraction of the typical coherence time than it is for two-qubit unitary operations. Here we use space–time duality mappings to avoid mid-circuit measurements, and we develop a diagnostic of the phases on the basis of a hybrid quantum-classical order parameter (similar to the cross-entropy benchmark in ref. 28) to overcome the problem of post-selection. The stability of these quantum information phases to noise is a matter of practical importance. Although relatively little is known about the effect of noise on monitored systems29,30,31, noise is generally expected to destabilize measurement-induced non-equilibrium phases. Nonetheless, we show that noise serves as an independent probe of the phases at accessible system sizes. Leveraging these insights allows us to realize and diagnose measurement-induced phases of quantum information on system sizes of up to 70 qubits.
The space–time duality approach9,15,16,17 enables experimentally more convenient implementations of monitored circuits by leveraging the absence of causality in such dynamics. When conditioning on measurement outcomes, the arrow of time loses its unique role and becomes interchangeable with spatial dimensions, giving rise to a network of quantum information in space–time32 that can be analysed in multiple ways. For example, we can map one-dimensional (1D) monitored circuits (Fig. 1a) to 2D shallow unitary circuits with measurements only at the final step17 (Fig. 1b and Supplementary Information section 5), thereby addressing the experimental issue of mid-circuit measurement.
We began by focusing on a special class of 1D monitored circuits that can be mapped by space–time duality to 1D unitary circuits. These models are theoretically well understood15,16 and are convenient to implement experimentally. For families of operations that are dual to unitary gates (Supplementary Information), the standard model of monitored dynamics3,4 based on a brickwork circuit of unitary gates and measurements (Fig. 2a) can be equivalently implemented as a unitary circuit when the space and time directions are exchanged (Fig. 2b), leaving measurements only at the end. The desired output state |Ψm⟩ is prepared on a temporal subsystem (in a fixed position at different times)33. It can be accessed without mid-circuit measurements by using ancillary qubits initialized in Bell pairs (\({Q}_{1}^{{\prime} }\ldots {Q}_{12}^{{\prime} }\) in Fig. 2c) and SWAP gates, which teleport |Ψm⟩ to the ancillary qubits at the end of the circuit (Fig. 2c). The resulting circuit still features post-selected measurements but their reduced number (relative to a generic model; Fig. 2a) makes it possible to obtain the entropy of larger systems, up to all 12 qubits (\({Q}_{1}^{{\prime} }\ldots {Q}_{12}^{{\prime} }\)), in individual quantum trajectories.
Previous studies15,16 predicted distinct entanglement phases for |Ψm⟩ as a function of the choice of unitary gates in the dual circuit: volume-law entanglement if the gates induce an ergodic evolution, and logarithmic entanglement if they induce a localized evolution. We implemented unitary circuits that are representative of the two regimes, built from two-qubit fermionic simulation (fSim) unitary gates34 with swap angle θ and phase angle ϕ = 2θ, followed by random single-qubit Z rotations. We chose angles θ = 2π/5 and θ = π/10 because these are dual to non-unitary operations with different measurement strengths (Fig. 2d and Supplementary Information).
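As an illustration, the fSim(θ, ϕ) unitary can be written down directly. The numpy sketch below uses one common matrix convention (that of cirq.FSimGate); the exact conventions and decompositions used on the device are those of ref. 34 and the Supplementary Information.

```python
import numpy as np

def fsim(theta: float, phi: float) -> np.ndarray:
    """fSim(theta, phi) unitary in the |00>, |01>, |10>, |11> basis:
    a partial iSWAP by angle theta plus a controlled phase phi on |11>."""
    return np.array([
        [1, 0, 0, 0],
        [0, np.cos(theta), -1j * np.sin(theta), 0],
        [0, -1j * np.sin(theta), np.cos(theta), 0],
        [0, 0, 0, np.exp(-1j * phi)],
    ])

# The two gate sets studied here: theta = 2*pi/5 and theta = pi/10, with phi = 2*theta.
for theta in (2 * np.pi / 5, np.pi / 10):
    U = fsim(theta, 2 * theta)
    assert np.allclose(U.conj().T @ U, np.eye(4))  # sanity check: unitarity
```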
To measure the second Rényi entropy of the qubits composing |Ψm⟩, we performed randomized measurements35,36 on \({Q}_{1}^{{\prime} }\ldots {Q}_{12}^{{\prime} }\). Figure 2e shows the entanglement entropy as a function of subsystem size. The first gate set gives rise to a Page-like curve24, with entanglement entropy growing linearly with subsystem size up to half the system and then ramping down. The second gate set, by contrast, shows a weak, sublinear dependence of entanglement on subsystem size. These findings are consistent with the theoretical expectation of distinct entanglement phases (volume-law and logarithmic, respectively) in monitored circuits that are space–time dual to ergodic and localized unitary circuits15,16. A phase transition between the two can be achieved by tuning the (θ, ϕ) fSim gate angles.
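For context, the randomized-measurement protocol35,36 estimates the purity from correlations between bitstring probabilities measured after random single-qubit rotations. The sketch below shows the standard estimator under the simplifying assumption of ideal per-setting probabilities, omitting the finite-shot bias correction used in practice.

```python
import numpy as np
from itertools import product

def renyi2_from_randomized_measurements(probs_per_unitary, n_qubits):
    """Estimate S2 = -log2 Tr(rho^2) of an n-qubit (sub)system from
    randomized-measurement data: for each random product of single-qubit
    rotations, `probs_per_unitary` holds the length-2**n bitstring
    distribution measured afterwards.  Uses the estimator
    Tr(rho^2) = 2**n * mean_u sum_{s,s'} (-2)**(-D(s,s')) P_u(s) P_u(s'),
    where D is the Hamming distance between bitstrings s and s'."""
    dim = 2 ** n_qubits
    bits = np.array(list(product([0, 1], repeat=n_qubits)))
    hamming = (bits[:, None, :] != bits[None, :, :]).sum(axis=-1)
    weights = (-0.5) ** hamming          # equal to (-2)**(-D)
    purities = [dim * (np.asarray(p) @ weights @ np.asarray(p))
                for p in probs_per_unitary]
    return -np.log2(np.mean(purities))
```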
We next moved beyond this specific class of circuits with operations restricted to be dual to unitary gates, and instead investigated quantum information structures arising under more general conditions. Generic monitored circuits in 1D can be mapped onto shallow circuits in 2D, with final measurements on all but a 1D subsystem17. The effective measurement rate, p, is set by the depth of the shallow circuit, T, and the number of measured qubits, M. Heuristically, p = M/[(M + L)T] (the number of measurements per unitary gate), where L is the length of the chain of unmeasured qubits hosting the final state for which the entanglement structure is being investigated. Thus, for large M, a measurement-induced transition can be tuned by varying T. We ran 2D random quantum circuits28 composed of iSWAP-like and random single-qubit rotation unitaries on a grid of 19 qubits (Fig. 3a), with T varying from 1 to 8. For each depth, we post-selected on measurement outcomes of M = 12 qubits and left behind a 1D chain of L = 7 qubits; the entanglement entropy was then measured for contiguous subsystems A by using randomized measurements. We observed two distinct behaviours over a range of T values (Fig. 3b). For T < 4, the entropy scaling is subextensive with the size of the subsystem, whereas for T ≥ 4, we observe an approximately linear scaling.
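Evaluating this heuristic for the grid used here (M = 12, L = 7) places the observed critical depth T ≈ 4 at p ≈ 0.16:

```python
# Effective measurement rate p = M / [(M + L) * T] for the 19-qubit grid
# used here (M = 12 measured qubits, L = 7 unmeasured qubits).
M, L = 12, 7
for T in range(1, 9):
    print(f"T = {T}: p = {M / ((M + L) * T):.2f}")   # T = 4 gives p ~ 0.16
```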
The spatial structure of quantum information can be further characterized by its signatures in correlations between disjoint subsystems of qubits: in the area-law phase, entanglement decays rapidly with distance37, whereas in a volume-law phase, sufficiently large subsystems may be entangled arbitrarily far away. We studied the second Rényi mutual information
$${{\mathcal{I}}}_{AB}^{(2)}={S}_{A}^{(2)}+{S}_{B}^{(2)}-{S}_{AB}^{(2)},$$
(1)
between two subsystems A and B as a function of depth T, and the distance (the number of qubits) x between them (Fig. 3c). For maximally separated subsystems A and B of two qubits each, \({{\mathcal{I}}}_{AB}^{(2)}\) remains finite for T ≥ 4, but it decays to 0 for T ≤ 3 (Fig. 3d). We also plotted \({{\mathcal{I}}}_{AB}^{(2)}\) for subsystems A and B with different sizes (T = 3 and T = 6) as a function of x (Fig. 3e). For T = 3 we observed a rapid decay of \({{\mathcal{I}}}_{AB}^{(2)}\) with x, indicating that only nearby qubits share information. For T = 6, however, \({{\mathcal{I}}}_{AB}^{(2)}\) does not decay with distance.
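When reduced density matrices are available (for example, in numerical checks), equation (1) can be evaluated directly; a small numpy illustration follows. In the experiment, the entropies are instead estimated from randomized measurements.

```python
import numpy as np

def renyi2_entropy(rho: np.ndarray) -> float:
    """Second Renyi entropy S2 = -log2 Tr(rho^2)."""
    return -np.log2(np.real(np.trace(rho @ rho)))

def renyi2_mutual_information(rho_A, rho_B, rho_AB) -> float:
    """I2_AB = S2_A + S2_B - S2_AB, as in equation (1)."""
    return renyi2_entropy(rho_A) + renyi2_entropy(rho_B) - renyi2_entropy(rho_AB)

# Example: a Bell pair shared between A and B gives I2_AB = 2,
# whereas a product state gives I2_AB = 0.
bell = np.zeros((4, 4))
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
mixed_qubit = np.eye(2) / 2
print(renyi2_mutual_information(mixed_qubit, mixed_qubit, bell))  # 2.0
```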
The observed structures of entanglement and mutual information provide strong evidence for the realization of measurement-induced area-law (‘disentangling’) and volume-law (‘entangling’) phases. Our results indicate that there is a phase transition at critical depth T ≃ 4, which is consistent with previous numerical studies of similar models17,18,38. The same analysis without post-selection on the M qubits (Supplementary Information) shows vanishingly small mutual information, indicating that long-ranged correlations are induced by the measurements.
The approaches we have followed so far are difficult to scale for system sizes greater than 10–20 qubits27, owing to the exponentially increasing sampling complexity of post-selecting measurement outcomes and obtaining entanglement entropy of extensive subsystems of the desired output states. More scalable approaches have been recently proposed39,40,41,42 and implemented in efficiently simulatable (Clifford) models26. The key idea is that diagnostics of the entanglement structure must make use of both the readout data from the quantum state |Ψm⟩ and the classical measurement record m in a classical post-processing step (Fig. 1c). Post-selection is the conceptually simplest instance of this idea: whether quantum readout data are accepted or rejected is conditional on m. However, because each instance of the experiment returns a random quantum trajectory43 from \({2}^{M}\) possibilities (where M is the number of measurements), this approach incurs an exponential sampling cost that limits it to small system sizes. Overcoming this problem will ultimately require more sample-efficient strategies that use classical simulation39,40,42, possibly followed by active feedback39.
Here we have developed a decoding protocol that correlates quantum readout and the measurement record to build a hybrid quantum–classical order parameter for the phases that is applicable to generic circuits and does not require active feedback on the quantum processor. A key idea is that the entanglement of a single ‘probe’ qubit, conditioned on measurement outcomes, can serve as a proxy for the entanglement phase of the entire system39. This immediately eliminates one of the scalability problems: measuring the entropy of extensive subsystems. The other problem—post-selection—is removed by a classical simulation step that allows us to make use of all the experimental shots and is therefore sample efficient.
This protocol is illustrated in Fig. 4a. Each run of the circuit terminates with measurements that return binary outcomes ±1 for the probe qubit, \({z}_{{\rm{p}}}\), and the surrounding M qubits, m. The probe qubit is on the same footing as all the others and is chosen at the post-processing stage. For each run, we classically compute the Bloch vector \({{\bf{a}}}_{m}\) of the probe qubit, conditional on the measurement record m (Supplementary Information). We then define \({\tau }_{m}={\rm{sign}}({{\bf{a}}}_{m}\cdot \hat{z})\), which is +1 if \({{\bf{a}}}_{m}\) points above the equator of the Bloch sphere, and −1 otherwise. The cross-correlator between \({z}_{{\rm{p}}}\) and \({\tau }_{m}\), averaged over many runs of the experiment such that the direction of \({{\bf{a}}}_{m}\) is randomized, yields an estimate of the length of the Bloch vector, \(\zeta \simeq \overline{| {{\bf{a}}}_{m}| }\), which can in turn be used to define a proxy for the probe’s entropy:
$$\zeta =2\overline{{z}_{{\rm{p}}}\,{\tau }_{m}},\quad {S}_{{\rm{proxy}}}=-{\log }_{2}[(1+{\zeta }^{2})/2],$$
(2)
where the overline denotes averaging over all the experimental shots and random circuit instances. A maximally entangled probe corresponds to ζ = 0.
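A minimal sketch of this post-processing step, assuming the conditional Bloch vectors \({{\bf{a}}}_{m}\) for each shot have already been obtained from the classical simulation (the function and array names are illustrative, not those of our analysis code):

```python
import numpy as np

def decode(z_probe, bloch_vectors):
    """Hybrid quantum-classical order parameter of equation (2).

    z_probe:        +/-1 readout of the probe qubit, one entry per shot.
    bloch_vectors:  (shots, 3) array; the classically simulated Bloch vector
                    a_m of the probe, conditioned on each shot's record m.
    Returns (zeta, S_proxy)."""
    z_probe = np.asarray(z_probe, dtype=float)
    a_m = np.asarray(bloch_vectors, dtype=float)
    tau = np.sign(a_m[:, 2])                 # tau_m = sign(a_m . z_hat)
    zeta = 2.0 * np.mean(z_probe * tau)      # cross-correlator, ~ mean |a_m|
    s_proxy = -np.log2((1.0 + zeta ** 2) / 2.0)
    return zeta, s_proxy
```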
In the standard teleportation protocol2, a correcting operation conditional on the measurement outcome must be applied to retrieve the teleported state. In our decoding protocol, τm has the role of the correcting operation, restricted to a classical bit-flip, and the cross-correlator describes the teleportation fidelity. In the circuits relevant to our experiment (depth T = 5 on N ≤ 70 qubits), the classical simulation for decoding is tractable. For arbitrarily large circuits, however, the existence of efficient decoders remains an open problem39,41,44. Approximate decoders that work efficiently in only part of the phase diagram, or for special models, also exist39, and we have implemented one such example based on matrix product states (Supplementary Information).
We applied this decoding method to 2D shallow circuits that act on various subsets of a 70-qubit processor, consisting of N = 12, 24, 40, 58 and 70 qubits in approximately square geometries (Supplementary Information). We chose a qubit near the middle of one side as the probe and computed the order parameter ζ by decoding measurement outcomes up to r lattice steps away from that side while tracing out all the others (Fig. 4a). We refer to r as the decoding radius. Because of the measurements, the probe may remain entangled even when r extends past its unitary light cone, corresponding to an emergent form of teleportation18.
As seen in Fig. 3, the entanglement transition occurs as a function of depth T, with a critical depth 3 < \({T}_{{\rm{c}}}\) < 4. Because T is a discrete parameter, it cannot be tuned to finely resolve the transition. To do so, we fixed T = 5 and instead tuned the density of the gates: each iSWAP-like gate acts with probability ρ and is skipped otherwise, setting an ‘effective depth’ \({T}_{{\rm{eff}}}=\rho T\) that can be tuned continuously across the transition. Results for ζ(r) at ρ = 1 (Fig. 4b) reveal a decay with system size N of \(\zeta ({r}_{\max })\), where r = \({r}_{\max }\) corresponds to measuring all the qubits apart from the probe. This decay is purely due to noise in the system.
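The dilution itself amounts to an independent coin flip per gate; a schematic sketch, assuming a hypothetical representation of the circuit as a list of gate layers:

```python
import random

def dilute_layers(gate_layers, rho, seed=0):
    """Keep each two-qubit gate with probability rho and skip it otherwise,
    giving an effective depth T_eff = rho * T for a depth-T circuit.
    `gate_layers` is a hypothetical list of layers, each a list of gates."""
    rng = random.Random(seed)
    return [[g for g in layer if rng.random() < rho] for layer in gate_layers]
```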
Remarkably, sensitivity to noise can itself serve as an order parameter for the phase. In the disentangling phase, the probe is affected by noise only within a finite correlation length, whereas in the entangling phase it becomes sensitive to noise anywhere in the system. In Fig. 4c, \(\zeta ({r}_{\max })\) is shown as a function of ρ for several N values, indicating a transition at a critical gate density \({\rho }_{{\rm{c}}}\) of around 0.6–0.8. At ρ = 0.3, which is well below the transition, \(\zeta ({r}_{\max })\) remains constant as N increases (inset in Fig. 4c). By contrast, at ρ = 1 we fit \(\zeta ({r}_{\max })\approx {0.97}^{N}\), indicating an error rate of around 3% per qubit for the entire sequence. This is approximately consistent with our expectations for a depth T = 5 circuit based on individual gate and measurement error rates (Supplementary Information). This response to noise is analogous to the susceptibility of magnetic phases to a symmetry-breaking field7,30,31,45 and therefore sharply distinguishes the phases only in the limit of infinitesimal noise. For finite noise, we expect the N dependence to be cut off at a finite correlation length. We do not see the effects of this cut-off at system sizes accessible to our experiment.
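The quoted per-qubit error rate follows from an exponential fit of \(\zeta ({r}_{\max })\) against N; the following sketch, using placeholder data rather than the measured values, shows the arithmetic:

```python
import numpy as np

# Placeholder data for illustration only (not the measured values): with
# zeta(r_max) ~ f**N, the slope of log(zeta) against N gives the per-qubit
# fidelity f, that is, an error rate of 1 - f per qubit for the whole sequence.
N = np.array([12, 24, 40, 58, 70])
zeta_rmax = 0.97 ** N
slope, _ = np.polyfit(N, np.log(zeta_rmax), 1)
f = np.exp(slope)
print(f"per-qubit fidelity ~ {f:.3f}, error rate ~ {1 - f:.1%}")
```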
As a complementary approach, the underlying behaviour in the absence of noise may be estimated by noise mitigation. To do this, we define the normalized order parameter \(\widetilde{\zeta }(r)=\zeta (r)/\zeta ({r}_{\max })\) and proxy entropy \({\widetilde{S}}_{{\rm{proxy}}}(r)=-{\log }_{2}[(1+\widetilde{\zeta }{(r)}^{2})/2]\). The persistence of entanglement with increasing r, corresponding to measurement-induced teleportation18, indicates the entangling phase. Figure 4d shows the noise-mitigated entropy for ρ = 0.3 and ρ = 1, revealing a rapid, N-independent decay in the former and a plateau up to r = \({r}_{\max }-1\) in the latter. At fixed N = 40, \({\widetilde{S}}_{{\rm{proxy}}}(r)\) displays a crossover between the two behaviours for intermediate ρ (Fig. 4e).
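The mitigation step is a simple rescaling of the order parameter before applying equation (2); a two-line sketch:

```python
import numpy as np

def mitigated_proxy_entropy(zeta_r, zeta_rmax):
    """Noise-mitigated proxy entropy from the normalized order parameter
    zeta_tilde(r) = zeta(r) / zeta(r_max), reusing equation (2)."""
    zeta_tilde = np.asarray(zeta_r, dtype=float) / zeta_rmax
    return -np.log2((1.0 + zeta_tilde ** 2) / 2.0)
```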
To resolve this crossover more clearly, we show \({\widetilde{S}}_{{\rm{proxy}}}({r}_{\max }-1)\) as a function of ρ for N = 12–58 (Fig. 4f). The accessible system sizes approximately cross at \({\rho }_{{\rm{c}}}\approx \) 0.9. There is an upward drift of the crossing points with increasing N, confirming the expected instability of the phases to noise in the infinite-system limit. Nonetheless, the signatures of the ideal finite-size crossing (estimated to be \({\rho }_{{\rm{c}}}\simeq \) 0.72 from the noiseless classical simulation; Supplementary Information) remain recognizable at the sizes and noise rates accessible in our experiment, although they are shifted to larger \({\rho }_{{\rm{c}}}\). A stable finite-size crossing would mean that the probe qubit remains robustly entangled with qubits on the opposite side of the system, even when N increases. This is a hallmark of the teleporting phase18, in which quantum information (aided by classical communication) travels faster than the limits imposed by the locality and causality of unitary dynamics. Indeed, without measurements, the probe qubit and the remaining unmeasured qubits are causally disconnected, with non-overlapping past light cones46 (pink and grey lines in the inset in Fig. 4f).
Our work focuses on the essence of measurement-induced phases: the emergence of distinct quantum information structures in space–time. We used space–time duality mappings to circumvent mid-circuit measurements, devised scalable decoding schemes based on a local probe of entanglement, and used hardware noise to study these phases on up to 70 superconducting qubits. Our findings highlight the practical limitations of NISQ processors imposed by finite coherence. The exponential suppression of the decoded signal with the number of qubits indicates that increasing the size of qubit arrays may not be beneficial without corresponding reductions in noise rates. At current error rates, extrapolation of our results (at ρ = 1, T = 5) to an N-qubit fidelity of less than 1% indicates that arrays of more than around 150 qubits would become too entangled with their environment for any signatures of the ideal (closed system) entanglement structure to be detectable in experiments. This suggests an upper limit on qubit array sizes of about 12 × 12 for this type of experiment, beyond which improvements in system coherence are needed.