The Past, Present and Future of Quantum Error Correction Technology


 

Errors are an inevitable part of computation, and especially of quantum computing, where we must precisely control the behavior of ultrasensitive quantum systems. While computational advantages may be achieved in the near term using techniques that reduce the effects of noise in quantum systems, fully exploiting the potential of quantum computing to run super-polynomially accelerated quantum algorithms will likely require significant advances in quantum error correction: with fault tolerance as the ultimate goal, error mitigation is the path that brings quantum computing into practical use.

 


 

Over the past few years, researchers in the field have made significant progress in quantum error correction, but much work remains to be done to achieve this goal. For example, IBM scientists are working with the broader quantum community to develop scalable quantum error correction (QEC) techniques to enable practical quantum computing as soon as possible.

 

IBM is firmly committed to advancing error correction technology. IBM's research in error correction, together with activities it sponsors (such as the Quantum Error Correction Summer School), has fostered new ideas that bring the field closer to performing arbitrarily long, error-free quantum computation.

 

01 What is quantum error correction?

 

The modern world relies on the storage, transmission and processing of digital information, represented as zeros and ones, called bits. Sometimes digital information is corrupted when some of these bits flip from 0 to 1, or from 1 to 0. Classical computers are highly reliable under these conditions because modern computer components rarely encounter such errors, and error correction schemes protect the storage and transmission of data.

 

Quantum computers, on the other hand, raise the question of how to repair quantum computing errors [2]; new techniques aim to correct errors faster than they occur. Quantum bits are not simply zeros and ones: their error rates are much higher than those of classical bits, and the possible types of errors are much broader. Phase errors, for example, corrupt the extra quantum information carried by quantum bits, making them fundamentally different from classical bits.

 


Classical Bits and Quantum Bits

 

Quantum error correction codes provide the protection needed for quantum computers to operate reliably in the presence of these higher error rates and a wider range of error types. It is critical that these codes work without directly measuring quantum bits of data, as such measurements would cause quantum processors to lose their quantum information and thus the advantages of quantum computing.

 

There are many possible error correction codes, suited to a variety of situations. The most promising family currently is known as quantum low-density parity-check (qLDPC) codes: these codes diagnose quantum errors by performing checks that each involve only a few quantum bits, with each quantum bit participating in only a few checks. These checks are similar to the parity checks of classical error correction.

 

Because each check involves only a few physical quantum bits, qLDPC codes are very attractive for designing efficient quantum error correction circuits: a faulty operation in the circuit corrupts only the few physical quantum bits involved in the same check. Important codes, such as two-dimensional surface codes and two-dimensional color codes, with which many current experiments are performed, belong to the qLDPC family.
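The classical analog of these checks can be sketched in a few lines of Python (an illustrative toy, not a quantum code): each row of a sparse parity-check matrix involves only a few bits, and the syndrome flags which checks fail.

```python
# Classical analog of low-density parity checks: each check (row of H)
# involves only a few bits, and each bit appears in only a few checks.
# H is the parity-check matrix of the 3-bit repetition code.
H = [[1, 1, 0],
     [0, 1, 1]]

def syndrome(H, error):
    """Each syndrome bit is the mod-2 sum (parity) of the bits it checks."""
    return [sum(h * e for h, e in zip(row, error)) % 2 for row in H]

# A flip on the middle bit fires both checks; a flip on an end bit fires one.
print(syndrome(H, [0, 1, 0]))  # -> [1, 1]
print(syndrome(H, [1, 0, 0]))  # -> [1, 0]
```

The qLDPC property is simply that H stays sparse as the code grows, so each faulty operation touches only a few checks.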

 

02 Benchmarking with error correction

 

Building reliable quantum computers on a large scale is difficult. Therefore, it is necessary to study and implement codes on today's hardware, not only to expand our knowledge of how to design better quantum computers, but also to help benchmark the state of current hardware. This helps to deepen our understanding of system-level requirements and improve the capabilities of our systems. By pushing these capabilities to their physical limits and evaluating them using well-designed benchmarks, the research community has identified important constraints that tell us how to co-design the best error suppression, mitigation, and correction protocols in quantum computing. As a result, a significant amount of QEC research is now entering the experimental demonstration phase, using the most appropriate QEC codes to implement logic operations on today's quantum hardware.

 


A 3-bit repetition code (left) mapped onto a logical heavy-square lattice (right).

 

Many of these demonstrations involve researchers implementing surface codes and related heavy hexagonal code schemes [3]. These code families are designed for two-dimensional lattices of quantum bits, often with different roles for the physical quantum bits: data quantum bits store the data, while auxiliary quantum bits are used for measurement checks or flags. The robustness of these codes to errors is measured by the "distance," a metric representing the minimum number of physical quantum bit errors required to flip a logical quantum bit to an incorrect value. Increasing the distance thus yields a more robust code: the probability of a logical quantum bit error decreases exponentially with increasing distance.
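The exponential suppression of logical errors with distance is easiest to see in the simplest possible code, the classical repetition code with majority-vote decoding; the sketch below illustrates the scaling only, and is not a model of the surface or heavy hexagonal codes themselves.

```python
from math import comb

def logical_error_rate(d, p):
    """Probability that more than half of d bits flip (each
    independently with probability p), i.e. the logical error rate
    of a distance-d repetition code under majority-vote decoding."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range((d + 1) // 2, d + 1))

p = 0.01  # physical error rate
for d in (3, 5, 7):
    # Each step up in distance suppresses the logical rate further.
    print(d, logical_error_rate(d, p))
```

Each increase in distance multiplies the logical error rate by roughly another factor of p, which is the exponential suppression referred to above.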

 

In the past year, distance-2 [4] and distance-3 [5] codes have been implemented by IBM, distance-3 codes by researchers at ETH Zurich and CSU [6], and even distance-5 codes by researchers at Google [7]. These codes represent important experimental work, and perhaps most importantly, the fact that they were implemented shows how much progress has been made in reducing errors in quantum processors. In addition, improvements in measuring logical error rates will provide further evidence (and perhaps a confidence boost) that the field is on the right track.

 

Other teams have experimentally demonstrated another type of error-correcting code, called color codes. Color codes follow principles similar to surface codes, but admit simpler logical gates at the cost of more complex parity checks. Last year, researchers at Honeywell (now Quantinuum) implemented color codes on their trapped-ion hardware. This year, researchers at the University of Innsbruck implemented the crucial T-gate [8] as part of a color code demonstration, while Quantinuum researchers implemented an error-corrected CNOT gate [9], an entangling gate. As with the surface code examples, these demonstrations represent the current state of the art in error correction, and also mark the way forward in efforts to improve these codes.

 

03 Developing New Codes

 

To date, the 2D surface code has been considered the undisputed leader in error correction; however, it has two important drawbacks. First, most of the physical quantum bits are used for error correction: as the distance of a surface code grows, the number of physical quantum bits needed to encode one logical quantum bit grows as the square of the distance. For example, a surface code with a distance of 10 requires about 200 physical quantum bits per logical quantum bit. Second, it is very difficult to implement a universal set of logic gates. State-of-the-art "magic state distillation" methods require additional resources beyond simply encoding quantum information into an error correction code, and the space-time cost of these additional resources may be prohibitive for small to medium scale computations.
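The quadratic overhead can be made concrete with a quick count. The figures below assume the rotated surface code layout (d*d data quantum bits plus d*d - 1 measurement quantum bits), which matches the "about 200" quoted above; other surface code variants have somewhat different constants.

```python
def rotated_surface_code_qubits(d):
    """Physical quantum bits for one logical quantum bit in a rotated
    surface code: d*d data qubits plus d*d - 1 measurement (ancilla)
    qubits. Note the quadratic growth with distance d."""
    return 2 * d * d - 1

for d in (3, 5, 10):
    print(d, rotated_surface_code_qubits(d))
# d = 10 gives 199 physical quantum bits, the "about 200" quoted above.
```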

 

One way to address the first drawback of surface codes is to find and study codes from the "good" family of qLDPC codes. "Good" is a technical term for a code family in which the number of logical quantum bits and the code distance both grow in proportion to the number of physical quantum bits; thus, doubling the number of physical quantum bits doubles both the number of logical quantum bits and the distance. Surface code families are not "good," and finding good qLDPC codes had long been a major open problem in quantum error correction.
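The difference between the two scalings can be sketched as follows. The rate and relative-distance constants used for the "good" family are purely illustrative (not taken from any specific code); the point is only the proportional growth.

```python
import math

def surface_code_params(n):
    """One logical quantum bit regardless of n; distance grows only
    as the square root of n (rotated layout, n ~ 2*d*d)."""
    d = math.isqrt((n + 1) // 2)
    return 1, d

def good_qldpc_params(n, rate=0.05, rel_distance=0.05):
    """'Good' family: logical qubits k and distance d both grow in
    proportion to n. The constants here are illustrative only."""
    return int(rate * n), int(rel_distance * n)

for n in (200, 2000, 20000):
    # (k, d) for each family at the same physical qubit budget n.
    print(n, surface_code_params(n), good_qldpc_params(n))
```

Doubling n doubles both numbers for the good family, while the surface code is stuck at one logical quantum bit per patch.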

 

A major advance on this problem came in 2009, when researchers Jean-Pierre Tillich and Gilles Zémor discovered the family of hypergraph product codes: these codes do not break the square-root distance barrier, but they greatly improve the ratio of logical to physical quantum bits. Last year, Pavel Panteleev and Gleb Kalachev of Moscow State University published a landmark paper [9] with a clever mathematical proof of a conjecture recently proposed by Nikolas Breuckmann and Jens Eberhardt [10] that a class of qLDPC codes (called balanced product codes) can break this barrier. In other words, they found good code families. However, unlike surface codes, these new qLDPC codes require more quantum bit connectivity than a two-dimensional lattice provides, i.e., some quantum bits need to be connected over long distances.

 

Together, these discoveries have opened up new directions in quantum error correction and led to further developments. For example, earlier this year, Anthony Leverrier and Gilles Zémor published a simplified version of the Panteleev-Kalachev codes [11], called quantum Tanner codes, which deepens our understanding of these codes' capabilities; this summer, researchers from the Massachusetts Institute of Technology and the California Institute of Technology introduced the first efficient decoder for quantum Tanner codes [12], a key step toward making these next-generation error-correcting codes a reality.

 

Good qLDPC codes are one possible path to efficient fault-tolerant quantum computing. Other approaches may address the second drawback of surface codes by reducing the cost of universal logic gates. A team led by Guanyu Zhu at IBM Quantum has studied a class of codes called fractal surface codes (FSC) [13]: three-dimensional codes constructed by punching holes with smooth boundaries into a three-dimensional surface code so that the resulting lattice is a fractal. Like qLDPC codes, FSCs combine multiple physical quantum bits to simulate an underlying topological order (i.e., logical quantum bits) that preserves certain fundamental properties of the system even in the presence of small perturbations. Their advantage is that they require less overhead to implement a universal set of logic gates; however, also like qLDPC codes, they require more complex quantum bit connections than a two-dimensional lattice.

 


Left: 3D fractal model defined on the fractal cube geometry; Right: honeycomb code.

 

There are other codes in the works as well. Last year, researchers at Microsoft debuted the honeycomb code [14], so named because it arranges quantum bits on the vertices of a hexagonal lattice. It is similar, but not identical, to the heavy hexagonal code used by IBM Quantum today. The key differences between the honeycomb code and the two-dimensional surface code are that its logical quantum bits are dynamically defined (a logical quantum bit does not necessarily correspond to the same physical quantum bits throughout a computation) and that its checks can be measured with remarkably simple quantum circuits. So far, the honeycomb code has only been shown to be competitive with leading surface codes when its parity checks are performed with so-called entangling measurements (measuring two quantum bits simultaneously in a way that preserves their entanglement).

 

In a follow-up paper [15], researchers at Google and the University of California, Santa Barbara benchmarked the honeycomb code and found that it requires about 7,000 physical quantum bits to encode a logical quantum bit with a logical error rate of one part in a trillion. They conclude that this code is a promising candidate for implementing error correction on today's quantum hardware with two-dimensional lattices of quantum bits.

 

04 Decoding

 

When we perform quantum computation with a deployed error-correcting code, we observe error-sensitive events: these events are clues to underlying errors, and when they occur it is the decoder's task to identify the appropriate correction. Classical hardware performs this decoding and must keep up with the high rate at which these events occur. In addition, the amount of event data transferred from the quantum device to the classical hardware must not exceed the available bandwidth. The decoder therefore imposes additional constraints on the control hardware and on how the quantum computer interfaces with classical systems; addressing this challenge is of key importance both theoretically and experimentally.
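At its simplest, a decoder is a map from observed events to corrections. A minimal lookup-table decoder for the classical 3-bit repetition code (a toy, shown only to fix the idea) looks like this:

```python
# Lookup-table decoder for the 3-bit repetition code. Each syndrome
# bit compares two neighboring data bits; the table maps each syndrome
# to the single most likely bit flip that explains it.
DECODER = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # most likely: flip on bit 0
    (1, 1): 1,     # most likely: flip on bit 1
    (0, 1): 2,     # most likely: flip on bit 2
}

def decode(bits):
    """Measure the syndrome, look up the correction, and apply it."""
    s = (bits[0] ^ bits[1], bits[1] ^ bits[2])
    fix = DECODER[s]
    if fix is not None:
        bits[fix] ^= 1
    return bits

print(decode([0, 1, 0]))  # -> [0, 0, 0]
```

Real decoders face the same task at scale: the lookup table becomes an algorithm, and it must run fast enough to keep pace with the stream of events coming off the hardware.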

 

Decoders for many types of codes are computationally efficient (i.e., run in polynomial time), yet this may not be sufficient given the above limitations. Developing and experimenting with real-time decoders is becoming an important aspect of creating usable subsystems. This fact is evident from the recent conference on real-time decoders held at the IEEE Quantum Week event in Colorado, and from the number of recent papers on real-time decoders from IBM, AWS, Quantinuum and others.

 

Research continues within the field to develop better decoders and to understand the underlying reasons why they work. For stabilizer codes in general, IBM's Ben Brown has recently been exploiting the structure that arises from symmetries among surface code stabilizers; by focusing on these symmetries, he has begun to address how the decoders known as "minimum-weight perfect matching decoders" can be generalized to other types of codes [16].
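The idea behind minimum-weight perfect matching can be illustrated on a 1D repetition code: fired checks ("defects") are paired up so that the total number of bit flips needed to explain them is minimal. The brute-force sketch below ignores chain boundaries (it assumes an even number of defects) and stands in for the efficient blossom-based matchers used in practice.

```python
def min_weight_matching(defects):
    """Brute-force minimum-weight perfect matching over defect
    positions on a 1D chain. Returns (total cost, list of pairs),
    where the cost of a pair is the number of data-bit flips
    between the two fired checks. Assumes len(defects) is even."""
    if not defects:
        return 0, []
    first, rest = defects[0], defects[1:]
    best = None
    for i, partner in enumerate(rest):
        others = rest[:i] + rest[i + 1:]
        cost, pairs = min_weight_matching(others)
        cost += abs(first - partner)
        if best is None or cost < best[0]:
            best = (cost, [(first, partner)] + pairs)
    return best

# Flips on data bits 2 and 3 of a repetition code fire checks 1 and 3;
# matching pairs them, implying a correction on the bits in between.
cost, pairs = min_weight_matching([1, 3])
print(cost, pairs)  # -> 2 [(1, 3)]
```

Real matching decoders replace this exponential search with the polynomial-time blossom algorithm and add virtual boundary nodes so odd defect sets can still be matched.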

 

Earlier in 2022, a research group at IBM implemented and demonstrated on IBM quantum hardware a decoder called the "matching and maximum likelihood decoder" [17] for distance-2 and distance-3 heavy hexagonal codes with the help of intermediate circuit measurements and fast resets. These are just some of the important results on decoders that have emerged recently, and more exciting results are expected in the future.

 


Figure, top: a color code with three different types of boundary on the edges of a triangular quantum bit array, as proposed by Ben Brown's group [16]; its symmetry, supported on a Möbius strip, allows more stabilizer codes to be decoded. Bottom: decoding graphs from [17] for Z and X stabilizer measurements, used to correct X and Z errors respectively in a d=3 heavy hexagonal code with circuit-level noise. The blue (a) and red (b) nodes correspond to stabilizers, and the black nodes to boundaries. Node labels give the stabilizer measurement type (Z or X), with a subscript indexing the stabilizer and a superscript indicating the measurement round.

 

05 Performing calculations

 

It is not enough to encode and decode logical quantum bits; we must also be able to compute with them. A key challenge, therefore, is to find simple, inexpensive techniques to implement a universal set of logic gates. For two-dimensional surface codes and their variants, no such technique is known, and expensive magic state distillation is required. Although the cost of magic state distillation has fallen over the years, it is still far from ideal, and research continues both to improve the distillation process and to discover new methods and/or codes that do not require it. To avoid the overhead of magic state distillation in the near term, it is envisioned that error mitigation and error correction could work in tandem to provide a universal gate set, using error correction to remove noise from Clifford gates and error mitigation on T-gates.

 

Recently, results by Benjamin Brown [18] showed how to implement one of the most resource-intensive gates, the controlled-controlled-Z (CCZ) gate, in a two-dimensional architecture without resorting to magic state distillation. The approach relies on the observation that a three-dimensional version of the surface code can easily implement a logical CCZ, and then cleverly embeds the three-dimensional surface code into 2+1-dimensional spacetime.

 


Left: 3D spacetime surface code; Right: 2D layout of non-Clifford gates.

 

 

Reducing the enormous overhead of implementing a universal set of logic gates is a key goal of the quantum error correction community and an essential component in the pursuit of ever larger quantum computing systems.

 

06 Where to go next?

 

Quantum computers are real and can be programmed, but building large, reliable quantum computers remains a major challenge. Major advances in quantum error correction technology may be needed to fully exploit the potential of these systems. Fortunately, we appear to be entering another creative period as the field advances in new directions, and recent developments, such as the new qLDPC codes, show promise for future systems.

 

Reference links:

[1]https://research.ibm.com/blog/future-quantum-error-correction

[2]https://www.scientificamerican.com/article/how-to-fix-quantum-computing-bugs/

[3]https://journals.aps.org/prx/abstract/10.1103/PhysRevX.10.011022

[4]https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.128.110504

[5]https://arxiv.org/abs/2203.07205

[6]https://www.nature.com/articles/s41586-022-04566-8

[7]https://arxiv.org/abs/2207.06431

[8]https://www.nature.com/articles/s41586-022-04721-1

[9]https://dl.acm.org/doi/10.1145/3519935.3520017

[10]https://ieeexplore.ieee.org/document/9490244

[11]https://arxiv.org/abs/2202.13641

[12]https://arxiv.org/abs/2206.06557

[13]https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuantum.3.030338

[14]https://quantum-journal.org/papers/q-2021-10-19-564/

[15]https://quantum-journal.org/papers/q-2022-09-21-813/

[16]https://arxiv.org/abs/2207.06428

[17]https://arxiv.org/abs/2203.07205

[18]https://www.science.org/doi/10.1126/sciadv.aay4929

2022-10-13