2024, this will be the real challenge of quantum computing

 

In 2023, quantum computing records were broken.

 

Google's Quantum AI team demonstrated that the error-correction approach of combining multiple physical quantum bits into a single logical quantum bit can actually lower the error rate. In earlier error-correction experiments, the error rate grew as more bits were added: the more you corrected, the more errors you got. This time, for the first time, Google showed the opposite, "the more you correct, the better it gets", breaking through the break-even point of quantum error correction. This is an important turning point in the "long march" of quantum computing, pointing toward a way to reach the logical error rates required for general-purpose computation.

 

IBM introduced its Heron chip, which has a lower error rate, and next year more Heron processors will be added to IBM's industry-leading utility-scale systems.

 

A team of researchers from the Defense Advanced Research Projects Agency (DARPA), Harvard University, and QuEra built a quantum computer with the largest number of logical quantum bits to date, running on up to 280 physical quantum bits. With this team's approach, scientists may not need thousands, hundreds of thousands, or millions of physical quantum bits to correct errors, and the race to build a practical quantum computer may be entering a new phase.

 

Now, the global quantum community is setting its sights on something far more important, albeit less glamorous.

 

Quantum error correction: more important than the number of quantum bits

 

Quantum computers have transformative potential, but only if we can handle the errors inherent in these noisy, highly sensitive systems operating at the physical limits of the universe. Error handling is not a simple task; we need to predict and fix these errors at every step of system design.

 

Whether it's calculating how much tax you owe or playing Super Mario, our computers have always worked their magic on long strings of 0s and 1s. Quantum computing, on the other hand, works its magic on quantum bits. A quantum bit can be 0 and 1 at the same time, as if you were sitting at both ends of a long couch at once. Quantum bits can be realized in ions, photons, or tiny superconducting circuits, and it is this two-level system that gives quantum computing its superpowers.

 
Classical bits and quantum bits
 

However, quantum bits are also fragile: even very weak interactions with their surroundings can change their state. So scientists must learn how to correct these errors.

 

Errors in computers are natural. A quantum state is supposed to evolve exactly as the implemented quantum circuit prescribes, but various unavoidable disturbances from the external environment or from the hardware itself (what we call noise) can make the actual evolution of the quantum bits differ from the intended one, leading to errors in the calculation. Quantum bit errors, however, are more complicated than classical bit errors: not only can the 0 or 1 value of a quantum bit flip, but a quantum bit also has a phase, which is a bit like the direction it points.

 

We need to find ways to deal with both kinds of errors at all levels of the system:

 

- One is to improve our control over the computing hardware itself;
- The second is to build redundancy into the hardware, so that even if one or a few quantum bits go wrong, we can still recover the correct result of the computation.

 

Now the early leaders in quantum computing - Google, Rigetti, and IBM - have shifted their focus to this goal. Hartmut Neven, head of Google's Quantum Artificial Intelligence Lab, says, "This [quantum error correction] is surely the next big milestone." And Jay Gambetta, who leads IBM's quantum computing effort, says, "You're going to see a whole range of results in the next couple of years from us in solving the quantum error correction problem."

 

Physicists are already experimenting with their quantum error correction schemes on a small scale, but the challenges remain extremely daunting.

 

Pursuing the Quantum Computer

 

The quest for a quantum computer began in 1994, when Peter Shor, a mathematician now at MIT, showed that a still-hypothetical machine could quickly factor a large number. Thanks to the two-level nature of quantum bits, Shor's algorithm uses quantum wavefunctions to represent the possible factorizations of a large number.

 

These quantum waves, which can ripple through all the quantum bits of a quantum computer at once, interfere with one another so that the wrong factorizations cancel out and the correct one emerges. The cryptosystems that protect Internet communications today rest on the fact that factoring large numbers is practically impossible for conventional computers, so a quantum computer running Shor's algorithm could break them. Of course, this is just one of many things a quantum computer could do.

 

But Shor's algorithm assumes that each quantum bit can maintain its state intact, so that the quantum waves can slosh back and forth for as long as necessary. Real quantum bits are far less stable. The ones used by Google, IBM, and Rigetti all consist of tiny resonant circuits etched from superconducting metal.

Such quantum bits have proven easier to manipulate and integrate into circuits than other types. Each circuit has two well-defined energy states, which we can label 0 and 1. By applying microwaves to the circuit, researchers can put it in either state, or in any combination of the two - say, 30% 0 and 70% 1.
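To make the "30% 0 and 70% 1" picture concrete, here is a minimal sketch (Python with NumPy, purely illustrative and not tied to any particular hardware): a single quantum bit is represented as a vector of two complex amplitudes whose squared magnitudes give the measurement probabilities.

```python
import numpy as np

# A quantum bit is a length-2 complex vector (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Here |alpha|^2 = 0.3 and |beta|^2 = 0.7,
# i.e. a 30% / 70% superposition of 0 and 1.
qubit = np.array([np.sqrt(0.3), np.sqrt(0.7)], dtype=complex)

probabilities = np.abs(qubit) ** 2
print(probabilities)                          # [0.3 0.7]
print(np.isclose(probabilities.sum(), 1.0))   # True: the state is normalized
```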

 

However, these "in-between" states dissipate, or decohere, in a very short time. And even before decoherence sets in, noise can jostle these quantum states and change them, "derailing" the computation and making it evolve in an unwanted direction.

 
From classical to quantum error correction
 

It is therefore necessary to study and implement error-correcting codes on today's hardware, not only to expand our knowledge of how to design better quantum computers, but also to help benchmark the state of current hardware. This deepens our understanding of system-level requirements and improves the capabilities of our systems.

 

The way scientists spread the information of one quantum bit - a "logical quantum bit" - over many physical bits dates back to the development of early classical computers in the 1950s. The bits in those computers consisted of vacuum tubes or mechanical relays (switches) that sometimes flipped without warning. To overcome this problem, the mathematician John von Neumann pioneered error correction.

 

Von Neumann's method relied on redundancy. Suppose a computer makes three copies of each bit; then even if one of them flips, the majority of the bits still hold the correct value. The computer can find and correct the faulty bit by comparing the bits in pairs, a method known as parity checking. For example, if the first and third bits are the same, but the first and second, and the second and third, differ, then most likely the second bit flipped, so the computer flips it back. Greater redundancy means greater error-correcting power.
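As an illustration only (a Python sketch of the classical idea, not any particular machine's circuitry), the following code encodes one bit into three copies and uses the two pairwise parity checks described above to locate and fix a single flipped copy:

```python
def encode(bit):
    """Three-copy repetition code: 0 -> [0, 0, 0], 1 -> [1, 1, 1]."""
    return [bit, bit, bit]

def correct(copies):
    """Locate a single flipped copy via pairwise parity checks and fix it."""
    p12 = copies[0] ^ copies[1]   # parity of copies 1 and 2 (0 = same, 1 = different)
    p23 = copies[1] ^ copies[2]   # parity of copies 2 and 3
    if p12 and p23:               # copy 2 disagrees with both neighbours
        copies[1] ^= 1
    elif p12:                     # only copies 1 and 2 disagree -> copy 1 flipped
        copies[0] ^= 1
    elif p23:                     # only copies 2 and 3 disagree -> copy 3 flipped
        copies[2] ^= 1
    return copies

word = encode(1)       # [1, 1, 1]
word[1] ^= 1           # noise flips the second copy -> [1, 0, 1]
print(correct(word))   # [1, 1, 1], restored by the parity logic
```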

 

Interestingly, the transistors etched into microchips, the devices that modern computers use to encode their bits, are so reliable that error correction isn't really used much.

 

But quantum computers have to rely on error correction, at least those built from superconducting quantum bits. (Quantum bits made of individual ions are less susceptible to noise, but harder to integrate.)

The very principles of quantum mechanics make the job harder still, because they take away the simplest error-correction tool: copying. The no-cloning theorem tells us that it is impossible to copy the state of one quantum bit onto other quantum bits without altering the original. Joschka Roffe, a theoretical physicist at the University of Sheffield, says: "This means that it is impossible for us to directly convert a classical error-correcting code into a quantum error-correcting code."

 

In a conventional computer, a bit is a switch that can be set to 0 or 1. To protect a bit, the computer can copy it to other bits. If noise causes a copy to flip, the computer can locate the error by doing a parity check: comparing pairs of bits to see whether they are in the same state or different states.

 

To make matters worse, quantum mechanics also requires researchers to find errors blindfolded. Although a quantum bit can be in a superposition of 0 and 1, an experimenter cannot measure that superposition without causing it to collapse: the measurement always forces the quantum state into either 0 or 1, destroying the superposition. The mathematician Greg Kuperberg puts it this way: "The simplest way to correct errors (classical error correction) is to go through all the bits and see what went wrong. But with a quantum bit, you have to find the error without looking at it."

 

These obstacles may sound insurmountable, but quantum mechanics itself points to a possible solution. While researchers cannot copy the state of one quantum bit, they can spread it over other bits using an unfathomable quantum correlation: quantum entanglement.

 
How is quantum error correction realized?
 

How entanglement is created shows just how subtle quantum computing can be. Under microwave control, an initial quantum bit interacts with another bit, prepared in the 0 state, through a "controlled-NOT" (CNOT) gate. The CNOT gate flips the state of the second bit when the first quantum bit is 1 and leaves the second bit unchanged when the first bit is 0. Crucially, this interaction does not measure the second quantum bit, and therefore does not force its quantum state to collapse.
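As a small illustrative sketch (NumPy, hardware-agnostic), the CNOT gate can be written as a 4x4 matrix acting on the joint state of two quantum bits; applied to a superposed control bit and a target bit in 0, it produces an entangled state without ever measuring the target:

```python
import numpy as np

# CNOT in the basis |00>, |01>, |10>, |11>: flip the target when the control is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

control = np.array([np.sqrt(0.3), np.sqrt(0.7)])   # 30% 0, 70% 1
target = np.array([1.0, 0.0])                      # prepared in 0

joint = np.kron(control, target)    # product state before the gate
entangled = CNOT @ joint            # sqrt(0.3)|00> + sqrt(0.7)|11>
print(np.round(entangled, 3))       # [0.548 0.    0.    0.837]
```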

 

Instead, the operation preserves the two-way state of the first quantum bit, leaving the pair simultaneously in the state where the second bit is flipped and the state where it is not; in short, it puts the two quantum bits into a superposition of "both 0" and "both 1".

 

For example, if the initial quantum bit is in a superposition of 30% 0 and 70% 1, we can chain it with other bits so that, say, three quantum bits share an entangled state that is 30% "all 0s" and 70% "all 1s". This state is different from three copies of the initial bit. In fact, none of the three entangled quantum bits has a definite state of its own, but they are perfectly correlated: if you measure the first bit and it collapses to 1, the other two must also collapse to 1; and if the first collapses to 0, the other two collapse to 0. This correlation is the essence of entanglement.
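Continuing the sketch above (illustrative NumPy only, with a dense matrix that is fine for three quantum bits but would not scale), chaining a second CNOT extends the state to three quantum bits and yields exactly the 30% "all 0s" / 70% "all 1s" entangled state described here:

```python
import numpy as np

def cnot(n, control, target):
    """Dense CNOT matrix on n quantum bits (qubit 0 is the leftmost)."""
    dim = 2 ** n
    m = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = int("".join(map(str, bits)), 2)
        m[j, i] = 1
    return m

# Start from (sqrt(0.3)|0> + sqrt(0.7)|1>) |0> |0> and chain two CNOTs.
state = np.kron(np.array([np.sqrt(0.3), np.sqrt(0.7)]), np.array([1, 0, 0, 0]))
state = cnot(3, 0, 1) @ state
state = cnot(3, 0, 2) @ state

# Only |000> (index 0) and |111> (index 7) carry weight: 30% and 70%.
print(np.round(np.abs(state) ** 2, 3))   # [0.3 0.  0.  0.  0.  0.  0.  0.7]
```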

 

In such a larger entangled state, scientists can now keep an eye out for errors. To do so, they entangle two more "auxiliary" quantum bits with the three-bit chain: one with the first and second bits, and the other with the second and third bits. They then measure the auxiliary quantum bits, much like the parity checks on classical bits. For example, noise might flip one of the three original encoded bits, switching its 0 and 1 parts and changing the underlying correlations between them. If researchers do it right, "stabilizer" measurements on the auxiliary quantum bits will reveal these changes.

 

Although measuring the auxiliary quantum bits collapses their state, it does not affect the encoded bits. "It's a specially designed parity measurement that doesn't cause the information encoded in the logical state to collapse," Roffe says. For example, if the first auxiliary bit reads 0, it shows only that the first and second encoded bits must be in the same state, without revealing which state that is; if it reads 1, it shows only that the encoded bits must be in opposite states, and nothing more. If the flipped bit can be found quickly, before the error spreads, microwaves can be used to flip it back and restore coherence.
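The sketch below (an idealized NumPy simulation, not a description of any real device) runs one round of the parity checks just described: it prepares the three-bit entangled state, injects a bit flip on the middle quantum bit, reads out the two stabilizer parities (bits 1-2 and bits 2-3), and applies the corrective flip they point to, all without ever learning whether the logical state is 0 or 1:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])       # bit flip
Z = np.array([[1, 0], [0, -1]])      # used to build the parity (stabilizer) checks

def op(single, position):
    """Embed a single-qubit operator at `position` in a 3-qubit system."""
    factors = [I2, I2, I2]
    factors[position] = single
    return np.kron(np.kron(factors[0], factors[1]), factors[2])

# Encoded state: sqrt(0.3)|000> + sqrt(0.7)|111>
state = np.zeros(8)
state[0], state[7] = np.sqrt(0.3), np.sqrt(0.7)

state = op(X, 1) @ state             # noise flips the middle quantum bit

# Stabilizer parities: +1 means "same", -1 means "different".
s12 = state @ (op(Z, 0) @ op(Z, 1) @ state)
s23 = state @ (op(Z, 1) @ op(Z, 2) @ state)

syndrome = (bool(s12 < 0), bool(s23 < 0))
which_flip = {(True, False): 0, (True, True): 1, (False, True): 2}
if syndrome in which_flip:
    state = op(X, which_flip[syndrome]) @ state   # corrective "microwave" flip

print(np.round(np.abs(state) ** 2, 3))   # weight back on |000> and |111>: 0.3 and 0.7
```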

 
The principles of quantum mechanics make it infeasible to detect errors by copying and measuring quantum bits directly (top). The alternative physicists have devised is to spread the state of one quantum bit over other quantum bits through entanglement (middle), monitor those quantum bits to detect errors, and then steer the faulty bit back to the correct state once an error is detected (bottom).
 
If noise causes a quantum bit to flip, physicists can detect the change without actually measuring the state. They entangle the pair of primary quantum bits with auxiliary quantum bits and measure those auxiliary bits: if the correlation between the primary quantum bits is unchanged, the result is 0; if a flip has occurred, the result is 1. The flipped quantum bit can then be rotated back with microwaves to restore the original entangled state.
 

This is just the most basic idea. A quantum bit's state is more than a combination of 0s and 1s: it also depends on how those two parts are intertwined, in other words, on an abstract angle called the phase. This phase, which can range from 0° to 360°, is the key to the wave-like interference effects, and it is this quantum interference that gives quantum computers their superpowers. In principle, any error in a quantum bit's state can be thought of as some combination of a bit flip and a phase flip, where a bit flip exchanges 0 and 1, and a phase flip shifts the phase by 180 degrees.
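In standard notation (added here for clarity; the article itself stays informal), the bit flip and the phase flip are the Pauli operators X and Z, and an arbitrary single-quantum-bit error can always be expanded as a combination of "do nothing", a bit flip, a phase flip, and both at once:

```latex
X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad
Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad
E = \alpha\, I + \beta\, X + \gamma\, Z + \delta\, XZ .
```

This is why a code that can correct both bit flips and phase flips (and their product) can, in principle, correct any single-quantum-bit error.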

 

To fix both kinds of error, researchers can extend the error-correction scheme described above into another dimension. A string of three entangled bits, plus two auxiliary bits interleaved with them, is the smallest structure that can detect and correct a bit-flip error; a 3x3 lattice of quantum bits, plus eight auxiliary bits distributed among them, is the smallest structure that can simultaneously detect and correct both bit-flip and phase-flip errors. The logical bit now lives in this 9-bit entangled state - thank goodness you don't have to write out its mathematical formula! Stabilizer measurements along one dimension of the lattice detect bit-flip errors, while slightly different stabilizer measurements along the other dimension detect phase-flip errors.

 

The exact scheme for arranging quantum bit states on a two-dimensional grid for error correction varies with the geometric layout of the quantum bits and the details of the stabilizer measurements, but the researchers' route to quantum error correction is clear: encode each logical quantum bit into a grid of physical bits and show that the fidelity of the logical bit improves as the array grows.

 
Error-correcting codes
 
By pushing these capabilities to their physical limits and evaluating them using well-designed benchmarks, the research community has discovered important constraints that tell us how to co-design optimal error suppression, mitigation, and correction protocols in quantum computing. A significant amount of QEC research is now entering the experimental demonstration phase, using the most appropriate QEC codes to implement logic operations on today's quantum hardware.
 
Figure: encoding a quantum bit into a 3-bit repetition code (left) to form a logical quantum bit (right)
 

Many of these demonstrations involved surface codes and the related heavy-hexagon codes. These code families are designed to work on a two-dimensional lattice of quantum bits, with the physical quantum bits typically playing different roles: data quantum bits store the data, while auxiliary quantum bits are used for measurement checks or flagging. IBM measures the robustness of these codes with a metric called "distance": the minimum number of physical quantum bit errors required to produce an incorrect logical quantum bit value.

 

Thus, increasing the distance implies a more robust code. The probability of logical quantum bit error decreases exponentially with increasing distance.
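For surface codes, the scaling commonly quoted in the literature (an approximation added here for concreteness, not a formula from the original article) is that, below threshold, the logical error rate falls off exponentially with the code distance d:

```latex
p_{\mathrm{logical}} \;\approx\; A \left( \frac{p}{p_{\mathrm{th}}} \right)^{\lfloor (d+1)/2 \rfloor},
\qquad p < p_{\mathrm{th}},
```

where p is the physical error rate, p_th is the threshold of the code, and A is a constant of order one.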

 

To date, the 2D surface code has been considered the undisputed leader in error correction, but it has two important drawbacks. First, most of the physical quantum bits are spent on error correction: as the distance of a surface code grows, the number of physical quantum bits needed to encode a single logical quantum bit grows as the square of the distance. A surface code with a distance of 10, for example, needs about 200 physical quantum bits per logical quantum bit. Second, it is difficult to implement a computationally universal set of logic gates. The leading approach, "magic state distillation," requires additional resources beyond simply encoding quantum information into the error-correcting code, and the space and time cost of these extra resources may be prohibitive for small- to medium-scale computations.
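As a rough back-of-the-envelope sketch (Python; the formula 2d^2 - 1 is the qubit count usually quoted for the "rotated" surface code and is used here only to reproduce the "about 200" figure above):

```python
def rotated_surface_code_qubits(d):
    """Commonly quoted count for a rotated surface code of distance d:
    d*d data qubits plus d*d - 1 measurement qubits."""
    return d * d + (d * d - 1)

for d in (3, 5, 10, 25):
    print(d, rotated_surface_code_qubits(d))
# 3 17
# 5 49
# 10 199   <- roughly the "about 200" physical quantum bits per logical quantum bit
# 25 1249
```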

 

One way to address the first drawback of surface codes is to look for and study codes from the family of "good" qLDPC codes. "Good" is a technical term for a code family in which both the number of logical quantum bits and the code distance grow in proportion to the number of physical quantum bits; doubling the number of physical quantum bits doubles both the number of logical quantum bits and the distance. The surface code family is not "good," and finding good qLDPC codes has long been a major open problem in quantum error correction.

 

When we perform quantum computation with error-correcting codes deployed, we observe error-sensitive events: these events are clues to underlying errors, and when they occur, it is the decoder's task to identify the appropriate correction. The classical hardware performing this decoding must keep up with the high rate at which such events occur. Furthermore, the amount of event data transferred from the quantum device to the classical hardware must not exceed the available bandwidth.

Thus, the decoder imposes additional constraints on the control hardware and on the way the quantum computer interfaces with the classical system; solving this challenge is of critical importance both theoretically and experimentally.
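At its simplest, a decoder is a map from observed syndromes to corrections. The sketch below (Python, illustrative only; real-time decoders for surface codes use matching or belief-propagation algorithms and must meet tight latency and bandwidth budgets) shows a lookup-table decoder for the three-bit repetition code used earlier:

```python
# Syndrome = (parity of bits 1-2, parity of bits 2-3); value = index of the bit to flip.
# The all-zero syndrome (0, 0) means "no error detected", so no correction is applied.
DECODER_TABLE = {
    (1, 0): 0,   # only the first check fires  -> bit 0 flipped
    (1, 1): 1,   # both checks fire            -> bit 1 flipped
    (0, 1): 2,   # only the second check fires -> bit 2 flipped
}

def decode(syndrome):
    """Return the list of bit indices to flip for an observed syndrome."""
    bit = DECODER_TABLE.get(tuple(syndrome))
    return [] if bit is None else [bit]

print(decode((1, 1)))   # [1]: flip the middle bit
print(decode((0, 0)))   # []: nothing to do
```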

 

Logical quantum bits cannot merely be encoded and decoded; we must also be able to compute with them. A key challenge, then, is to find simple, inexpensive techniques for realizing a computationally universal set of logic gates. For two-dimensional surface codes and their variants, no such techniques are known, and we must rely on expensive magic state distillation. While the cost of magic state distillation has come down over the years, it is still far from ideal, and research continues both to improve the distillation process and to discover new methods and/or codes that do not require it.

 

To avoid the overhead of magic state distillation in the near term, one vision is for error mitigation and error correction to work in tandem to provide a universal gate set: error correction removes the noise from Clifford gates, while error mitigation handles the T gates.

 

Together, these discoveries have opened up new directions in quantum error correction and led to further developments.


2024, the age of error-correction innovation

 

We want to use error correction to reach our ultimate goal: fault-tolerant quantum computing. In this kind of computing, we build in redundancy, so that even if a few quantum bits err, the system can still return an accurate answer for any computation we run on the processor. Error correction is a standard technique in classical computing, where information is encoded with redundancy so that it can be checked for errors.

 

Quantum computers are real and programmable, but building large, reliable quantum computers remains a major challenge. Significant advances in quantum error correction technology may be needed to realize the full potential of these systems.

 

Quantum error correction follows the same idea, but we must account for the new types of errors described above, and we must measure the system carefully to avoid collapsing our state. In quantum error correction, we encode each quantum bit value (called a logical quantum bit) across multiple physical quantum bits, and we implement gates that treat this structure of physical quantum bits as an essentially error-free logical quantum bit. We perform a specific set of operations and measurements, collectively called an error-correcting code, to detect and correct errors. The threshold theorem tells us that our hardware must reach a minimum, hardware-dependent error rate before error correction starts to help.
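Stated informally (a textbook formulation added here for reference, not a quotation from the article), the threshold theorem says that once the physical error rate per operation is below some threshold, the overhead needed to reach any target accuracy grows only polylogarithmically:

```latex
p_{\mathrm{physical}} < p_{\mathrm{th}}
\;\Longrightarrow\;
\text{a circuit of } N \text{ gates can be run with total error} \le \varepsilon
\text{ using } O\!\big(N \,\mathrm{polylog}(N/\varepsilon)\big) \text{ physical operations.}
```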

 

But error correction is not just an engineering challenge; it is also a physical and mathematical problem.

The current leading codes, surface codes, require a large number of physical quantum bits, O(d^2), for each logical quantum bit, where d is a property of the code called its distance, related to the number of errors it can correct. For a QEC code to correct enough errors to be fault tolerant, the distance d must be chosen large enough to match the code's error-correcting capability to the error rate of the quantum device. Since current quantum devices are very noisy, with error rates approaching 1e-3, the number of quantum bits required for surface-code error correction is currently unrealistic: too many physical quantum bits are needed per logical quantum bit. To move forward, we need both to reduce the physical error rate of devices, say to 1e-4, and to discover new codes that require fewer physical quantum bits.
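To see why an error rate near 1e-3 is so punishing, here is a rough Python sketch that combines the heuristic surface-code scaling quoted earlier with the 2d^2 - 1 qubit count (the threshold of about 1e-2, the prefactor, and the 1e-12 target are illustrative assumptions, not measured numbers):

```python
def required_distance(p_phys, p_target, p_th=1e-2, prefactor=0.1):
    """Smallest odd distance d with prefactor * (p_phys/p_th)**((d+1)//2) <= p_target."""
    d = 3
    while prefactor * (p_phys / p_th) ** ((d + 1) // 2) > p_target:
        d += 2
    return d

for p_phys in (1e-3, 1e-4):
    d = required_distance(p_phys, p_target=1e-12)
    qubits = 2 * d * d - 1   # rotated-surface-code estimate from earlier
    print(f"p_phys={p_phys:.0e}: distance {d}, ~{qubits} physical qubits per logical qubit")
# p_phys=1e-03: distance 21, ~881 physical qubits per logical qubit
# p_phys=1e-04: distance 11, ~241 physical qubits per logical qubit
```

In this toy estimate, lowering the physical error rate by a factor of ten roughly halves the required distance and cuts the physical-qubit overhead per logical quantum bit severalfold, illustrating why reducing the physical error rate matters so much.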

 

Theorists around the world are still designing different error-correction strategies and quantum bit layouts to determine which hold the most promise. Fortunately, we seem to be entering another creative period as the field pushes in new directions, and recent advances, such as the new qLDPC codes, show promise for future systems.

 