IBM and Zapata: quantum computing will achieve practical applications in the NISQ era
Quantum computing technology is evolving so fast that it is hard for people outside the field to keep up. To present a real-time snapshot of the evolving quantum industry, executives from two quantum computing companies - Jay Gambetta, vice president of IBM Quantum, and Timothy Hirzel, CEO of Zapata Computing - recently shared their views on recent advances and challenges [1].
Despite their different positions, both expressed the same view: quantum computing will achieve some degree of practical application in the NISQ (Noisy Intermediate-Scale Quantum) era.

Jay Gambetta, vice president of IBM Quantum
Timothy Hirzel, CEO, Zapata Computing
The full text of the interview, as compiled by Photon Box, is as follows.
Q: On technological progress. What are your thoughts on the progress made in the quantum field in the last six months and why? What near-term future developments does it set the stage for?
Gambetta: Advances in multilayer wiring, packaging and coherence have enabled superconducting qubit systems to break the 100-qubit barrier. This is a milestone for quantum computing, because at this scale the complexity of the quantum circuits involved reaches beyond what classical processors can handle. These advances have been accompanied by two-qubit error rates approaching 10⁻³, which is close to the point where error mitigation techniques can deliver noise-free estimates of observables in a reasonable amount of time.
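To make Gambetta's remark about error mitigation concrete, here is a minimal Python sketch of zero-noise extrapolation, one widely used error-mitigation technique: the same circuit is run at artificially amplified noise levels and the results are extrapolated back to zero noise. The simulated "device", the noise model, and all numbers below are illustrative assumptions, not IBM's implementation.

```python
# Toy illustration of zero-noise extrapolation (ZNE). The "device" is a stand-in:
# a function returning a noisy expectation value whose bias grows with an
# artificial noise-scaling factor. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
TRUE_VALUE = 0.80          # ideal, noise-free expectation value <O>
SHOTS = 100_000            # samples per circuit execution

def noisy_expectation(noise_scale: float) -> float:
    """Pretend to run the circuit with its noise stretched by `noise_scale`.
    The bias decays the signal exponentially; shot noise adds statistical error."""
    biased = TRUE_VALUE * np.exp(-0.25 * noise_scale)
    return biased + rng.normal(0.0, 1.0 / np.sqrt(SHOTS))

# Run the same circuit at amplified noise levels (on hardware this is done,
# for example, by gate folding).
scales = np.array([1.0, 2.0, 3.0])
values = np.array([noisy_expectation(s) for s in scales])

# Richardson-style extrapolation: fit a low-order polynomial in the noise scale
# and evaluate it at zero noise.
coeffs = np.polyfit(scales, values, deg=2)
zne_estimate = np.polyval(coeffs, 0.0)

print(f"raw value at scale 1  : {values[0]:.4f}")
print(f"ZNE estimate (scale 0): {zne_estimate:.4f}")
print(f"ideal value           : {TRUE_VALUE:.4f}")
```

In practice the noise amplification happens on the hardware itself rather than in a simulator, and the choice of extrapolation model (linear, polynomial, exponential) must match the actual noise behavior for the estimate to be trustworthy.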
Hirzel: There are three main points.
1) Quantum advantage in generative modeling. Recent work, such as "Generating high-resolution handwritten digits with ion-trap quantum computers," "Enhancing generative models by quantum correlation," and "Evaluating generalization of quantum and classical generative models," has laid the groundwork for establishing, both experimentally and theoretically, the near-term potential of quantum computers to improve machine learning algorithms (a toy sketch of the idea follows this list).
2) Approaches using early fault-tolerant quantum computers. A growing body of research focuses on developing algorithms and resource estimates suited to early fault-tolerant quantum computers, that is, quantum computers with limited quantum error correction capabilities. Early fault-tolerant quantum computing will require balancing power against robustness to errors, and recent work has laid the groundwork for designing quantum algorithms that let us tune this balance. This sits between approaches with too little error robustness (designing algorithms only for fully fault-tolerant quantum computers) and approaches that are robust to errors but lack power (relying on expensive error mitigation techniques).
3) Xanadu's quantum computational advantage experiment. As with other such demonstrations, this is an important milestone, showing that we have now entered the era of engineered quantum systems whose computational power exceeds that of classical computers.
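As a rough illustration of the generative-modeling idea in point 1, the sketch below trains a tiny quantum circuit Born machine by classical simulation: a parameterized circuit's measurement distribution is nudged toward a target "data" distribution. The circuit layout, the target distribution, and the random-search optimizer are toy assumptions chosen for brevity, not the setups used in the cited papers.

```python
# Toy quantum circuit Born machine (QCBM), simulated classically with NumPy.
import numpy as np

n = 3                                    # three qubits -> samples are 3-bit strings
rng = np.random.default_rng(0)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, q):
    """Apply a single-qubit gate to qubit q (qubit 0 = most significant bit)."""
    ops = [np.eye(2)] * n
    ops[q] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full @ state

def apply_cnot(state, control, target):
    """Apply a CNOT by permuting basis-state amplitudes."""
    new = state.copy()
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control] == 1:
            bits[target] ^= 1
            j = int("".join(map(str, bits)), 2)
            new[j] = state[idx]
    return new

def born_machine_probs(thetas):
    """Output distribution p(x) = |<x|U(thetas)|000>|^2 of a small layered circuit."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    thetas = thetas.reshape(2, n)
    for q in range(n):                   # first rotation layer
        state = apply_1q(state, ry(thetas[0, q]), q)
    for q in range(n - 1):               # entangling layer
        state = apply_cnot(state, q, q + 1)
    for q in range(n):                   # second rotation layer
        state = apply_1q(state, ry(thetas[1, q]), q)
    return state ** 2                    # amplitudes are real for this circuit

# Target "data" distribution: mass concentrated on the correlated strings 000 and 111.
target = np.full(2 ** n, 0.02)
target[0] = target[-1] = 0.44
target /= target.sum()

def loss(thetas):                        # total-variation distance to the target
    return 0.5 * np.abs(born_machine_probs(thetas) - target).sum()

# Crude random-search "training", standing in for a real gradient-based optimizer.
params = rng.uniform(0, 2 * np.pi, 2 * n)
best = loss(params)
for _ in range(3000):
    trial = params + rng.normal(0, 0.25, size=params.shape)
    trial_loss = loss(trial)
    if trial_loss < best:
        params, best = trial, trial_loss

print("learned distribution:", np.round(born_machine_probs(params), 3))
print("target distribution :", np.round(target, 3))
print("total-variation distance:", round(best, 3))
```

On real hardware the distribution would be estimated from measurement samples rather than read off the statevector, which is where the question of sample-efficient training and generalization, the subject of the cited works, becomes central.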
Q: On the development of algorithms. We hear a lot about Shor's algorithm, Grover's algorithm and the variational quantum eigensolver (VQE). What are the most important missing algorithms or applications needed for quantum computing, and how far are we from developing them?
Gambetta: Just as in classical computing, where high-performance programming is often described in terms of roughly 13 computational motifs (the so-called "dwarfs"), I don't think we need to find many more algorithms. The missing step is how we program these circuits while minimizing the impact of noise. In the long run, error correction is the solution, but whether we can implement core quantum circuits with error mitigation and show a continuous path to error correction is the most important question. I believe we have ideas showing that this path can be continuous. And if we can leverage advances in error mitigation to advance quantum applications, improvements in hardware will have a more immediate impact on quantum technology. From these core quantum circuits I expect many applications, similar to what is happening in HPC, most likely in simulating nature (high-energy physics, materials science, chemistry, drug design), structured data (quantum machine learning, ranking, signal detection), and non-exponential applications such as search and optimization.
Hirzel: There are two main points.
1) Algorithms that exploit the sampling capabilities of quantum devices. Applications include machine learning (generative and recursive models), optimization, and cryptography. A prominent example in this category is the use of quantum devices as a source of statistical power to enhance optimization [2], which represents a fundamentally new paradigm for extracting practical advantage from near-term quantum devices.
2) Algorithms that exploit the capabilities of early fault-tolerant quantum devices. A relevant example is robust amplitude estimation (RAE), which has emerged from a series of works. Building on amplitude estimation, we can further improve hybrid quantum-classical schemes such as VQE, as well as algorithms for estimating properties of quantum states. These methods have applications in quantum chemistry, optimization, finance, and other fields (a rough sketch of why amplitude estimation pays off appears below).
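To see why amplitude-estimation-style methods such as RAE matter, consider the sampling cost of estimating an expectation value. The short check below verifies the baseline 1/sqrt(N) behavior of direct sampling numerically and notes the amplitude-estimation scaling for contrast; constants and error-resilience details are ignored, and the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
a = 0.3                          # the quantity to estimate: P(measuring |1>) = a

# Baseline: estimate a by repeated direct sampling. The statistical error shrinks
# only as 1/sqrt(N), which is what makes expectation-value estimation expensive.
for shots in (10**3, 10**4, 10**5):
    estimates = rng.binomial(shots, a, size=500) / shots
    print(f"N = {shots:>6}: empirical std error {estimates.std():.5f}, "
          f"theory sqrt(a(1-a)/N) = {np.sqrt(a * (1 - a) / shots):.5f}")

# Amplitude-estimation-style methods (RAE among them) interleave Grover-like
# amplifications so the total number of circuit applications needed for precision
# eps scales roughly as 1/eps instead of 1/eps^2 (constants ignored here).
eps = 1e-4
print("circuit uses for eps = 1e-4, direct sampling (~1/eps^2):", int(a * (1 - a) / eps**2))
print("circuit uses for eps = 1e-4, amplitude estimation (~1/eps):", int(1 / eps))
```

The practical catch, and the reason "robust" variants exist, is that the deeper amplified circuits must survive device noise, which is exactly the power-versus-robustness balance discussed in point 2 above.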
Q: Qubit technology. Which technology is most likely to succeed as a fundamental qubit technology, and why? Which technology is least likely to succeed?
Gambetta: For a technology to be successful, it needs a way to scale the QPU, to improve the quality of the quantum circuits running on the QPU, and to speed up the execution of quantum circuits on the QPU. In my opinion, not all qubit technologies can currently do all three of these things, and for some it is physically impossible to improve one or more of them. I prefer superconducting qubits because, when optimized across all three points, they offer the best path forward.
Hirzel: It's too early to say. We expect that which qubit technology is best will depend on the problem: different types of problems will work best with different qubit approaches, and this will continue to evolve for some time.
We have already had good results with superconducting and ion-trap devices and are excited about exploring quantum photonics. The answer depends on the time scale one is considering and on what success means. Experiments using ion traps may give better results in the absence of error correction; on the other hand, ion traps may face limitations as the number of qubits increases: a trap can only hold a certain number of ions, so different traps need to be entangled in some way to reach larger qubit counts. There has not been much experimental work in this area, so it is not clear how well such a setup works or how easy it is to do QEC. Feedback between the CPU and the QPU across different ion traps will also add a layer of complexity, mainly in terms of latency.
Photonic approaches face different opportunities and challenges. With scalable but short-lived qubits, they are better suited to implementing fault-tolerant architectures. By contrast, one can imagine some superconducting platforms putting all of their qubits on a single "module", that is, on one large chip rather than a combination of different chips, which would reduce latency problems compared with ion traps.
Scaling to larger numbers of qubits should be easier for neutral-atom platforms than for superconductors and ion traps, because the unwanted interactions between different qubits are small; but for the same reason it is difficult to implement gates, which require interactions between qubits.
Two platforms that are potentially more attractive than all the others are topological qubits (which would not require QEC, but have not yet been created) and qubits built from cat states (which inherently suppress bit-flip errors exponentially, so that only phase-flip errors need to be corrected, greatly reducing the overhead of QEC; however, this is a completely new platform).
Q: Significant challenges. What do you think are the top three challenges facing quantum computing and quantum information science (QIS) today?
Gambetta: Perhaps the primary challenges can be summarized as 1) scaling up quantum systems; 2) making them less noisy and faster; and 3) identifying and developing error mitigation techniques to obtain noise-free outputs from quantum circuits.
Hirzel: There are three.
1) Talent shortage. The quantum talent pool is relatively small and is shrinking fast. According to our recent report on enterprise quantum computing adoption, 51% of companies that have started down the path to quantum adoption have already begun identifying talent and building teams. If companies wait until the technology matures before getting started, the best talent will already be working for someone else.
2) The complexity of integrating quantum technology with existing IT. This is a familiar challenge for any enterprise adopting AI and machine learning. Companies can't simply rip and replace; they need to integrate quantum computing with their existing technology stacks. Any quantum speed-up is easily negated by a cumbersome quantum workflow, including moving data into and out of the computation.
3) Time and urgency. Quantum computing is evolving rapidly, and many enterprises underestimate how long it will take to upgrade their infrastructure and build valuable quantum applications. Companies that wait until the hardware matures will need a long time to catch up with peers who started early.
Q: Error correction. What are your thoughts on the qubit redundancy required to achieve quantum error correction? In other words, how many physical qubits are needed to implement a logical qubit, and how does this depend on factors such as fidelity, speed, and the underlying qubit technology?
Gambetta: This is one of the most misunderstood questions the public has about quantum computing. Instead of diving straight into QEC, I prefer to start with quantum circuits and ask what it takes to implement a quantum circuit (qubits, runtime, gate fidelity), because at that level gates, operations, and encoding all matter. The minimum number of qubits needed to encode a fully correctable logical qubit is five. A popular LDPC code called the surface code, and planar codes more generally, have good thresholds, but their encoding rate (the number of logical qubits encoded per physical qubit) approaches zero as the distance of the code increases. In addition, these codes do not support all gates and require techniques such as magic state injection to implement universal quantum circuits. This means they are good for demonstrating logical qubits even with relatively low-fidelity gates, but the number of physical qubits quoted in the literature is so large that they are not practical for quantum computing in the long run. This has more impact on the number of physical qubits required than the underlying qubit technology does.
In my opinion, the way forward is to ask whether we can implement quantum circuits using ideas such as error suppression, error mitigation, and error mitigation combined with error correction, and in the future to build systems with long-distance coupling that enable higher-rate quantum LDPC codes. I believe this path will deliver value in the short term and show increasing value along a continuous trajectory as hardware improves, rather than waiting until we can build a system with more than a million qubits and magic state injection. I also believe that science is about the undiscovered, and I am very excited about the revolution in error correction happening around the new quantum LDPC codes. We need to maximize the co-design between hardware and theory to minimize the size of the systems we must build to bring value to our users.
Hirzel: Under current quantum error correction theory, each order-of-magnitude improvement in gate error (e.g., a 1% error rate versus a 10% error rate) requires a constant multiplier in the number of physical qubits.
It is worth noting that qubit redundancy is not the only relevant metric. The error-correction cycle rate and the scalability of the architecture (even if it comes at the cost of high qubit redundancy), for example, may be equally important. We recently received a DARPA grant through which we are building tools for fault-tolerant resource estimation.
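As a back-of-envelope companion to these two answers, the sketch below turns the surface-code trade-off into numbers, using the commonly quoted rule of thumb for the logical error rate and roughly 2d² physical qubits per logical qubit at distance d; the assumed threshold and target error rate are rough placeholders, not a statement about any particular hardware.

```python
# Rough qubit-overhead estimate for a distance-d surface code, using the commonly
# quoted rule of thumb p_logical ~ 0.1 * (p/p_th)^((d+1)/2) and ~2*d^2 physical
# qubits per logical qubit. Threshold and target values are rough assumptions.
P_TH = 1e-2                      # assumed threshold (order of magnitude only)
TARGET = 1e-12                   # desired logical error rate per qubit per cycle

def overhead(p_physical):
    """Smallest odd distance d meeting TARGET, and the ~2*d^2 physical-qubit count."""
    assert p_physical < P_TH, "rule of thumb only applies below threshold"
    d = 3
    while 0.1 * (p_physical / P_TH) ** ((d + 1) / 2) > TARGET:
        d += 2
    return d, 2 * d * d

for p in (5e-3, 1e-3, 1e-4):
    d, nq = overhead(p)
    print(f"physical error {p:.0e}: distance {d}, ~{nq} physical qubits per logical qubit")
```

Under these assumptions, improving the physical error rate by an order of magnitude cuts the required distance, and hence the physical-qubit count per logical qubit, by a roughly constant factor, which is the trade-off both answers allude to.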
Q: Current work. In a paragraph or two, please describe your current top projects and priorities.
Gambetta: As we move into the future, there are two big challenges we will need to address in the coming years. The first is to drive scale by embracing modularity. Modularity throughout the system is critical, from the QPU to the cryogenic components, the electronics that control them, and even the entire cryogenic environment. We are approaching this in several ways, as detailed in our extended development roadmap. To use the QPU more efficiently, we will introduce modularity in the classical control and classical linking of multiple QPUs. This enables certain error-handling techniques known as error mitigation and allows larger circuits to be explored, tightly integrated with classical computation through circuit knitting. A second strategy for modularity is to remove the need for ever-larger individual processor chips by using high-speed chip-to-chip quantum links. These links extend the quantum computing architecture, but they are still not enough, as remaining components such as connectors and even cooling can become bottlenecks, so modularity over slightly longer distances is also needed. For this reason, we envision roughly one-meter-long cryogenic microwave links between QPUs that, although slower than direct chip-to-chip links, would still provide a quantum communication channel. In our roadmap, Heron, Crossbill and Flamingo all reflect these scaling strategies.
The second challenge is HPC + quantum integration: not simply classical plus quantum, but true integration of HPC and quantum into the workflow, where classical and quantum work together in many ways. At the lowest level, we need dynamic circuits that bring concurrent classical computation into quantum circuits, allowing simple computations to occur within the coherence time (around 100 nanoseconds). At the next level, we need classical computation to perform runtime compilation, error suppression, error mitigation, and ultimately error correction; this requires low latency and must sit close to the QPU. Above this level, I am very excited about circuit knitting, an idea that shows how we can extend the computational reach of quantum processors by adding classical computation. For example, by combining linear-algebra techniques with quantum circuits, we can effectively simulate a much larger quantum circuit. To build this layer, we need to develop methods in which something is computed in milliseconds on a classical computer (possibly a GPU), after which the quantum circuit is run and the output retrieved.
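The "concurrent classical computation within the coherence time" that Gambetta describes is easiest to see in a dynamic circuit such as quantum teleportation, where mid-circuit measurement results decide which gates are applied next. Below is a self-contained statevector toy in plain Python/NumPy; it illustrates the concept only and is not IBM's runtime or compiler stack.

```python
import numpy as np

# --- tiny statevector helpers (qubit 0 is the most significant bit) -----------
n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, q):
    ops = [np.eye(2)] * n
    ops[q] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full @ state

def apply_cnot(state, control, target):
    new = state.copy()
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control] == 1:
            bits[target] ^= 1
            j = int("".join(map(str, bits)), 2)
            new[j] = state[idx]
    return new

def measure(state, q, rng):
    """Mid-circuit measurement of qubit q: returns the outcome and the collapsed state."""
    mask = np.array([(idx >> (n - 1 - q)) & 1 for idx in range(2 ** n)])
    p1 = np.sum(np.abs(state[mask == 1]) ** 2)
    outcome = int(rng.random() < p1)
    collapsed = np.where(mask == outcome, state, 0.0)
    return outcome, collapsed / np.linalg.norm(collapsed)

# --- teleportation as a dynamic circuit ----------------------------------------
rng = np.random.default_rng(3)
theta = 1.234                                   # arbitrary state to teleport on qubit 0
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0
state = apply_1q(state, ry(theta), 0)           # |psi> = cos(t/2)|0> + sin(t/2)|1>
state = apply_1q(state, H, 1)                   # Bell pair on qubits 1 and 2
state = apply_cnot(state, 1, 2)
state = apply_cnot(state, 0, 1)                 # Bell-basis rotation of qubits 0 and 1
state = apply_1q(state, H, 0)

m0, state = measure(state, 0, rng)              # mid-circuit measurements ...
m1, state = measure(state, 1, rng)
if m1:                                          # ... classical feed-forward while the
    state = apply_1q(state, X, 2)               #     remaining qubit is still coherent
if m0:
    state = apply_1q(state, Z, 2)

# Qubit 2 should now hold |psi>, regardless of the random outcomes m0, m1.
amp0 = state[(m0 << 2) | (m1 << 1) | 0]
amp1 = state[(m0 << 2) | (m1 << 1) | 1]
print("measured m0, m1 =", m0, m1)
print("teleported state:", np.round([amp0, amp1], 4))
print("original state  :", np.round([np.cos(theta / 2), np.sin(theta / 2)], 4))
```

On real hardware the conditional X and Z corrections must be decided and applied by the classical control electronics within the qubits' coherence time, which is exactly why the low-latency classical layer described above matters.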
Hirzel: We can't share all of our projects, but a few stand out. Our QML (quantum machine learning) suite is now available to enterprise customers through Orquestra, our quantum workflow orchestration platform. The QML suite is a plug-and-play, user-definable workflow toolkit for building quantum machine learning applications. This new product reflects our commitment to helping customers generate near-term value from quantum computers. We are particularly excited about generative modeling as a near-term QML application for optimization problems and for creating synthetic data to train models in small-sample-size situations, such as financial crises and pandemics.
One of our most public client projects right now is our work with the Andretti Autosport team to upgrade their data analytics infrastructure and make it quantum-ready. Not many people know this, but INDYCAR racing is a very analytics-heavy sport: each car generates about a terabyte of data during a race. We are helping Andretti build advanced machine learning models to determine the best times for pit stops, ways to reduce fuel consumption, and other race strategy decisions.
Finally, cybersecurity has become a top priority for us. We have already been contacted by customers at the senior CIO/CISO level asking us to help evaluate their post-quantum vulnerabilities. Cryptography-breaking algorithms like Shor's algorithm are thought to be decades away, but the threat may arrive much earlier; in fact, it is already here in the form of "store now, decrypt later" (SNDL) attacks. As the inventors of variational quantum factoring, an algorithm that greatly reduces the number of qubits needed to factor 2048-bit RSA numbers, we have a unique perspective on the timeline of quantum vulnerabilities, and we are building workflows to deliver replaceable PQC (post-quantum cryptography) infrastructure upgrades.
References:
[1] https://www.hpcwire.com/2022/08/15/hpcwire-quantum-survey-first-up-ibm-and-zapata-on-algorithms-error-mitigation-more/
[2] https://arxiv.org/abs/2101.06250
