Scientific American's May cover story: How do we fix the errors in quantum computing?
The cover article of the May issue of the popular-science magazine Scientific American is "Errors in the Machine," in which IBM quantum theorist Zaira Nazario explains why error correction is essential for quantum computers.
What follows is a full translation of the article:
Everything not forbidden is compulsory; that is a law of physics. Errors, therefore, are inevitable. They are everywhere: in language, cooking, communication, image processing and, of course, computation. Mitigating and correcting them is what keeps society running. You can scratch a DVD and still play it. A QR code can be smudged or torn and still be readable. Images from space probes travel hundreds of millions of miles and still arrive looking crisp.
Error correction is one of the most basic concepts in information technology: errors may be inevitable, but they can be repaired.
This law of inevitability applies equally to quantum computers. These emerging machines exploit the fundamental rules of physics to solve problems that classical computers find intractable, with potentially far-reaching impacts on science and business. But with that power comes a great vulnerability: quantum computers make errors on a scale their classical counterparts never do, and standard correction techniques cannot fix them.
I am a physicist working on quantum computing at IBM, but my career did not start there. I began as a condensed-matter theorist, studying the quantum mechanics of materials, such as superconductivity. I then took a detour into science policy at the U.S. Department of State before moving on to the Defense Advanced Research Projects Agency (DARPA) and the Intelligence Advanced Research Projects Activity (IARPA). There, the goal was to harness the fundamental principles of nature to develop new technologies.
At the time, quantum computers were still in their infancy. Although Paul Benioff of Argonne National Laboratory had proposed the idea as early as 1980, physicists spent the following decades working out how to build one. In 2007 they invented the device that now serves as the basic unit of data in the quantum computers of IBM, Google and other companies: the superconducting qubit. My background in superconductivity came in handy, and at IARPA I helped run several quantum computing research projects, including work involving IBM.
At IBM, I work on improving operations among many connected qubits and on exploring how to correct their errors. Qubits can be linked through the quantum phenomenon of entanglement, allowing them to collectively store far more information than the same number of ordinary computer bits. And because qubit states take the form of waves, they can interfere with one another like light waves, enabling operations far more powerful than simply flipping bits. These capabilities let quantum computers perform certain tasks extremely efficiently and could accelerate a wide range of applications: simulating nature, studying and designing new materials, revealing hidden features in data to improve machine learning, or finding more energy-efficient catalysts for industrial chemical processes.
The trouble is that many of the solutions to useful problems require quantum computers to execute 10 billion logical operations, or "gates," on hundreds to thousands of qubits. That demands that they make at most about one error per billion gates. Yet today's best machines err roughly once every 1,000 gates. Faced with this huge gap between theory and practice, early physicists worried that quantum computing would remain a scientific curiosity.
Correcting errors
The game changed in 1995, when Peter Shor of Bell Labs and Andrew Steane of the University of Oxford independently developed quantum error correction. They showed how physicists could spread the information of a single qubit across many physical qubits to build a reliable quantum computer out of unreliable components. As long as the physical qubits' error rate is below a certain threshold, errors can be removed faster than they appear.
To understand why Shor and Steane's work is so important, consider how ordinary error correction works. A simple error-correcting code makes backup copies of the information, for example using 000 to represent 0 and 111 to represent 1. That way, if your computer reads 010, it knows the original value was probably 0. The code succeeds when the error rate is low enough that at most a minority of the bits are corrupted. Engineers make the hardware as reliable as they can, then add a layer of redundancy to eliminate any residual errors.
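To make the idea concrete, here is a minimal Python sketch of the triple-repetition code with majority-vote decoding (the 5 percent error rate is an arbitrary illustrative choice, not from the article); it recovers the original bit whenever at most one of the three copies is corrupted:
```python
import random

def encode(bit):
    """Triple-repetition code: 0 -> [0, 0, 0], 1 -> [1, 1, 1]."""
    return [bit] * 3

def noisy_channel(bits, p):
    """Flip each bit independently with probability p (toy noise model)."""
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    """Majority vote: correct as long as at most one copy was corrupted."""
    return 1 if sum(bits) >= 2 else 0

random.seed(0)
p, trials = 0.05, 100_000
failures = sum(decode(noisy_channel(encode(0), p)) != 0 for _ in range(trials))
# With p = 0.05 the logical error rate is about 3p^2 - 2p^3, roughly 0.7
# percent, far below the raw 5 percent error rate of a single bit.
print(f"logical error rate: {failures / trials:.4f}")
```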
Classical bits and quantum bits
However, it was not obvious how to apply classical error-correction methods to quantum computers. To correct errors, we need to gather information about them by making measurements. The problem is that measuring a qubit collapses its state, destroying the quantum information encoded in it. What's more, beyond bit-flip errors, quantum computers also suffer phase errors, which corrupt the wavelike phases that describe a qubit's state.
To solve all of these problems, quantum error-correction strategies use auxiliary qubits. A sequence of gates couples the auxiliary qubits to the original qubits, effectively transferring noise from the system of interest onto the auxiliaries. Measuring the auxiliary qubits then provides enough information to identify errors, without touching the system we care about, so the errors can be repaired.
Quantum circuit
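The scheme can be sketched classically as a toy model of the three-qubit bit-flip code, assuming only bit-flip noise: two parity checks play the role of the auxiliary-qubit measurements, and the resulting "syndrome" locates the error without ever reading out the encoded value. Note that the parities are identical for an encoded 0 (000) and an encoded 1 (111), which is why, in the quantum version, measuring the auxiliaries does not collapse the logical state:
```python
def syndrome(data):
    """Parity checks of the 3-bit repetition code, standing in for the
    auxiliary-qubit measurements: s1 compares bits 0 and 1, s2 bits 1 and 2.
    Parities locate a flip without revealing the encoded value."""
    return (data[0] ^ data[1], data[1] ^ data[2])

# Lookup table: which single bit (if any) each syndrome points to.
CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(data):
    flip = CORRECTION[syndrome(data)]
    if flip is not None:
        data[flip] ^= 1  # repair the located error
    return data

# An encoded 1, i.e. [1, 1, 1], with an error on the middle bit:
print(correct([1, 0, 1]))  # syndrome (1, 1) locates the flip -> [1, 1, 1]
```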
As with classical error correction, success depends on the physical properties of the noise. In a quantum computer, errors occur when the device becomes entangled with its environment. For the computer to work, the physical error rate must be small enough. There is a threshold: below it, errors can be corrected and the probability that a computation fails can be made arbitrarily small; above it, the hardware introduces errors faster than we can correct them. The change between these two behaviors is essentially a phase transition between an ordered and a disordered state. As a former condensed-matter theorist, I began my career studying quantum phase transitions.
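This threshold behavior shows up even in the toy repetition code above: below a crossover error rate, adding redundancy (a larger code distance d) suppresses the logical error rate, while above it a larger code makes things worse. A short sketch using the exact binomial failure probability (an illustration; the 0.5 crossover is specific to this toy code):
```python
from math import comb

def logical_error(p, d):
    """Probability that a majority of d bits flip, i.e. that a
    distance-d repetition code fails under independent flips."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range((d + 1) // 2, d + 1))

for p in (0.01, 0.10, 0.60):
    rates = [logical_error(p, d) for d in (1, 3, 5, 7)]
    trend = "suppressed" if rates[-1] < rates[0] else "amplified"
    print(f"p = {p}: " + "  ".join(f"{r:.1e}" for r in rates) + f"  ({trend})")
# Below the crossover (p = 0.5 for this toy code) a bigger code helps;
# above it, a bigger code hurts. Quantum codes show the same phase-
# transition-like behavior, with thresholds closer to one percent.
```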
Today researchers continue to develop improved error-correcting codes that can handle higher error rates, broader classes of errors and hardware constraints. The most popular family is the topological quantum codes. Their origins trace back to 1982, when Frank Wilczek of the Massachusetts Institute of Technology proposed a new type of particle. The known kinds of particles have angular momentum that is either an integer or a half-integer; the new kind could take values in between. He called these particles "anyons" and warned that practical applications of the phenomenon seemed very far off.
Surface code for error correction
But physicists soon found that anyons are not so esoteric; in fact, they are connected to real-world phenomena. Alexei Kitaev of the California Institute of Technology realized that anyons offer a useful framework for quantum computation, completing the shift from theory toward practical technology. He went further and proposed specific many-particle systems that serve as quantum error-correcting codes.
In these systems the particles are connected in a lattice, and their lowest-energy state is highly entangled. Errors correspond to the system being in a higher-energy state, called an excitation, and these excitations are anyons. Such systems marked the birth of topological codes and created another connection between condensed-matter physics and quantum error correction. Because noise is expected to act locally on the lattice, and topological codes have local excitations, they quickly became the preferred approach for protecting quantum information.
Two examples of topological codes are the surface code and the color code. The surface code, created by Kitaev and Sergey Bravyi, features data and auxiliary qubits arranged alternately on a two-dimensional grid, like the black and white squares of a chessboard.
From the chessboard to Settlers of Catan
The theory behind the surface codes was compelling, but when researchers at IBM started exploring them, they encountered a challenge: Understanding these required learning more about how superconducting qubits work.
Superconducting transmon qubits rely on oscillating currents flowing in superconducting circuits. The qubit 0 and 1 values correspond to different superpositions of charges. To operate on the qubits, the team applied pulses of microwave energy at specific frequencies. There is some flexibility in the choice of frequency: the frequency is set when the qubit is made, and different frequencies are chosen for different qubits so that they can be individually addressed. The problem is that the frequency can deviate from the expected value, or the pulses can overlap in frequency, so one pulse for one qubit can change the value of an adjacent qubit. The dense grid of surface codes (each qubit connected to four other qubits) causes too many of these frequency collisions.
The IBM team decided to solve this problem by connecting each qubit to fewer neighbors. The resulting grid is made up of hexagons, called a "heavy hex" layout, and looks more like a Settlers of Catan board than a chessboard. The good news is that the heavy-hex layout reduces frequency collisions. But to make the layout worthwhile, the IBM theory team had to develop a new error-correcting code.
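A back-of-the-envelope Monte Carlo sketch illustrates the effect (all numbers are made up for illustration, and a thinned square grid stands in crudely for the heavy-hex lattice): scatter each qubit's fabricated frequency around its target and count coupled pairs that land too close together. Fewer neighbors means fewer chances to collide:
```python
import random

def grid_edges(rows, cols, max_degree):
    """Nearest-neighbor couplings on a rows x cols grid, dropping edges so
    no qubit exceeds max_degree (a crude stand-in for heavy-hex sparsity)."""
    deg = [0] * (rows * cols)
    edges = []
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for j in [i + 1 if c + 1 < cols else None,
                      i + cols if r + 1 < rows else None]:
                if j is not None and deg[i] < max_degree and deg[j] < max_degree:
                    edges.append((i, j))
                    deg[i] += 1
                    deg[j] += 1
    return edges

def collisions(edges, n, sigma=0.05, window=0.02, trials=500):
    """Average number of coupled pairs whose fabricated frequencies land
    within `window` GHz of each other. All parameters are illustrative."""
    targets = [5.0 + 0.1 * (i % 3) for i in range(n)]  # staggered targets
    total = 0
    for _ in range(trials):
        f = [t + random.gauss(0, sigma) for t in targets]
        total += sum(abs(f[a] - f[b]) < window for a, b in edges)
    return total / trials

random.seed(1)
print("degree-4 grid collisions per chip:", collisions(grid_edges(7, 7, 4), 49))
print("degree-3 grid collisions per chip:", collisions(grid_edges(7, 7, 3), 49))
```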
Catan Game Board
The new code, called the heavy hexagonal code, combines features of surface codes and Bacon-Shor codes, another grid-based family. The lower qubit connectivity means that some qubits, called flag qubits, must act as intermediaries to identify which errors have occurred. The result is a slightly more complicated circuit, and thus a slightly lower error threshold, but the trade-off is worth it.
There is one more problem to be solved. Codes that exist on a two-dimensional plane and contain only nearest-neighbor connections have significant overhead: Correcting more errors means building larger codes, which require the use of more physical qubits to create a single logical qubit. This setup requires more physical hardware to represent the same amount of data, and more hardware makes it harder to build qubits good enough to beat the error threshold.
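For a sense of scale, here is the commonly quoted qubit count for the rotated surface code, roughly 2d^2 - 1 physical qubits for one logical qubit of distance d, where a distance-d code corrects up to (d - 1)/2 errors, rounded down (an illustrative figure, not the article's):
```python
# Rotated surface code: d*d data qubits plus d*d - 1 auxiliaries
# per logical qubit (a commonly quoted count, used for illustration).
for d in (3, 5, 11, 25):
    physical = 2 * d * d - 1
    corrected = (d - 1) // 2  # errors a distance-d code can correct
    print(f"distance {d:2d}: corrects {corrected:2d} errors, "
          f"needs {physical:4d} physical qubits per logical qubit")
```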
Quantum engineers have two options. They can accept the larger overhead (additional qubits and gates) as the cost of a simpler architecture and work to understand and optimize the factors that drive that cost. Alternatively, they can keep looking for better codes: to encode more logical qubits into fewer physical qubits, perhaps qubits should be allowed to interact with partners farther away, not just their nearest neighbors, or be arranged in grids of three or more dimensions. The theory team is pursuing both options.
The importance of generality
A useful quantum computer must be able to perform any possible computational operation. Ignoring this requirement is the source of many common misconceptions and misleading information about quantum computing. In short, not all devices that people call quantum "computers" are actually computers—many are more like computing machines that can only perform specific tasks.
Ignoring the need for general-purpose computing is also a source of misunderstanding and misleading information about logic qubits and quantum error correction. Protecting the information in memory from errors is a start, but it's not enough. We need a general set of quantum gates, rich enough to perform any gate that quantum physics allows; then we need to make those gates robust to errors. This is where things get difficult.
Some gates are easy to protect from errors; they fall into the category of transversal gates. To understand them, consider two levels of description: logical qubits (error-protected units of information) and physical qubits (the hardware-level devices that together encode and protect a logical qubit). To implement an error-protected transversal gate on a single logical qubit, you apply the gate to every physical qubit that encodes it. To apply an error-protected transversal gate between multiple logical qubits, you apply the gate between the corresponding physical qubits of those logical qubits. Think of two logical qubits as two blocks of physical qubits, block A and block B. To implement the logical (that is, error-protected) transversal gate, you apply the gate between qubit 1 of block A and qubit 1 of block B, between qubit 2 of block A and qubit 2 of block B, and so on. Because only corresponding qubits interact, a transversal gate keeps the number of errors per block constant and under control.
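In circuit terms the pairing looks like this minimal Qiskit sketch (illustrative, not from the article; three physical qubits per logical block are assumed for brevity). The logical CNOT becomes one physical CNOT per corresponding pair:
```python
from qiskit import QuantumCircuit

n = 3  # physical qubits per logical block (assumed for brevity)
qc = QuantumCircuit(2 * n)
block_a = range(0, n)       # physical qubits encoding logical qubit A
block_b = range(n, 2 * n)   # physical qubits encoding logical qubit B

# Transversal logical CNOT: each physical qubit interacts only with its
# partner in the other block, so errors cannot spread within a block.
for a, b in zip(block_a, block_b):
    qc.cx(a, b)

print(qc.draw())
```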
If all quantum gates were transversal, life would be simpler. But a fundamental theorem states that no quantum error-correcting code can perform universal computation using only transversal gates. We can't have everything, in life or in quantum error correction.
This tells us something important about quantum computers. If you hear someone say that what makes quantum computing special is superposition and entanglement, beware! Not all superposition and entanglement is special. Some of it can be achieved with a set of transversal gates known as the Clifford set, and a classical computer can efficiently simulate a quantum computation that uses only Clifford gates. What you need are non-Clifford gates, which tend not to be transversal, and which make a computation hard to simulate classically.
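This is the content of the Gottesman-Knill theorem, and you can see it directly in Qiskit (a sketch, assuming a recent Qiskit version): the `qiskit.quantum_info.Clifford` class tracks a Clifford-only circuit efficiently through its stabilizers rather than a full state vector, and it rejects the circuit as soon as a non-Clifford T gate appears:
```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Clifford

# A highly entangled 50-qubit GHZ circuit built only from Clifford gates:
qc = QuantumCircuit(50)
qc.h(0)
for i in range(49):
    qc.cx(i, i + 1)

cliff = Clifford(qc)  # compact stabilizer form, no 2^50 state vector needed
print(f"{cliff.num_qubits} qubits tracked efficiently")

qc.t(0)  # one non-Clifford T gate...
try:
    Clifford(qc)
except Exception as err:  # ...and the efficient representation breaks down
    print("non-Clifford gate rejected:", err)
```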
Our best trick for implementing non-Clifford gates that resist noise is "magic state distillation," developed by Kitaev and Bravyi. If you have access to a special resource called a magic state, you can implement non-Clifford gates using only Clifford gates. However, those magic states must be very pure, with very few errors. Kitaev and Bravyi realized that, in some cases, it is possible to start with a batch of noisy magic states and, using only Clifford gates (assumed here to be error-corrected) and measurements to detect and correct errors, end up with fewer but purer magic states. Repeat this distillation process enough times and you can obtain a pure magic state from many noisy ones.
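To get a feel for the cost, consider Bravyi and Kitaev's well-known 15-to-1 protocol, in which each round turns 15 magic states with error rate p into one with error rate of roughly 35p^3, assuming perfect Clifford operations (the starting and target error rates below are illustrative choices, not the article's):
```python
# 15-to-1 magic state distillation (Bravyi & Kitaev, 2005): each round
# turns 15 magic states of error rate p into 1 of error rate ~35 * p^3,
# assuming the Clifford operations themselves are error-corrected.
p = 0.01        # starting error rate of raw magic states (illustrative)
target = 1e-12  # error rate needed for a long computation (illustrative)
rounds, cost = 0, 1
while p > target:
    p = 35 * p**3
    rounds += 1
    cost *= 15  # noisy input states consumed per distilled output state
print(f"{rounds} rounds, about {cost} noisy states per pure one, "
      f"final error rate {p:.1e}")
```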
Once you have a pure magic state, you can make it interact with a data qubit through a process called teleportation, which transforms the data qubit's state into the state the non-Clifford gate would have produced. The magic state is consumed in the process.
Although this approach is clever, it is also very expensive: for standard surface codes, magic state distillation consumes 99 percent of the total computation. Clearly, we need ways to improve on or circumvent the need for distillation. In the meantime, we can improve what we can do on noisy quantum computers by reducing errors in a different way. Rather than designing a quantum circuit that fixes errors in real time (which requires additional qubits), error mitigation uses a classical computer to learn the contribution of noise from noisy experimental results and remove it. No extra qubits are needed, but the price is running more quantum circuits and doing more classical processing.
For example, if you can describe the noise in a quantum processor, or learn it from a set of noisy circuits that can be efficiently simulated in a classical computer, you can use that knowledge to approximate the output of an ideal quantum circuit. Think of this circuit as a sum of noisy circuits, each with a weight calculated from knowledge of the noise. Or run the circuit multiple times, changing the noise value each time. Then, you can take the results, connect the dots, and infer what you would expect if the system was error-free.
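The "change the noise each time, then connect the dots" recipe is known as zero-noise extrapolation. Here is a toy numpy sketch with synthetic data (the exponential decay model and every number in it are assumptions for illustration; in a real experiment the measured values would come from amplified-noise runs):
```python
import numpy as np

# Zero-noise extrapolation on synthetic data. Pretend the ideal
# expectation value is 1.0 and noise decays it exponentially.
ideal = 1.0
scales = np.array([1.0, 1.5, 2.0, 3.0])  # noise amplification factors
rng = np.random.default_rng(0)
measured = ideal * np.exp(-0.3 * scales) + rng.normal(0.0, 0.005, scales.size)

# Fit a quadratic in the noise scale and read off its value at zero noise.
coeffs = np.polyfit(scales, measured, deg=2)
mitigated = np.polyval(coeffs, 0.0)
print(f"raw value at normal noise: {measured[0]:.3f}")
print(f"zero-noise extrapolation:  {mitigated:.3f} (ideal: {ideal})")
```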
These techniques have limitations. They don't apply to all algorithms, and even when they do, they only go so far. But combining error mitigation with error correction creates a powerful alliance. IBM's theory team recently showed that by using error correction for Clifford gates and error mitigation for non-Clifford gates, we could simulate universal quantum circuits without the need for magic state distillation. This result may also let smaller quantum computers achieve advantages over classical ones. The team estimates that a specific combination of error mitigation and error correction could let you simulate circuits with 40 times as many non-Clifford gates as conventional computers can handle.
In order to move forward and devise more efficient ways to deal with errors, there must be a tight feedback path between hardware and theory. Theorists need to adapt quantum circuits and error-correcting codes to the engineering constraints of the machine. Engineers should design systems around the need for error-correcting codes. The success of quantum computers depends on navigating these theoretical and engineering tradeoffs.
I'm proud to have played a role in shaping quantum computing, from a lab-based demonstration of one- and two-qubit devices to a field where anyone can access quantum systems with dozens of qubits through the cloud. But we still have a lot to do. Reaping the benefits of quantum computing requires hardware that operates below the error threshold, error-correcting codes that can repair remaining faults with as few extra qubits and gates as possible, and better methods that combine error correction and mitigation. We have to move on because we haven't finished writing the history of computing.
Links:
[1] https://www.scientificamerican.com/article/birdsong-quantum-computing-omicrons-mutations-and-more/
[2] https://www.scientificamerican.com/article/how-to-fix-quantum-computing-bugs/