An article on the quantum race in the NISQ era

 

 

Contents

I. What do industry experts think about NISQ?
II. Quantum hardware resource estimation in the NISQ era
2.1. NISQ quantum bit requirements
2.2. NISQ computation time
2.3. Classical simulation of NISQ code
III. Resource Estimation of Mainstream Quantum Algorithm "Advantages"
3.1. VQE Algorithm for Chemical Simulation
3.2. QAOA algorithm for combinatorial optimization
3.3. Quantum machine learning algorithms
IV. How to solve the current weaknesses of NISQ QPUs?
4.1. Quantum Bit Fidelity and Capability
4.2. Quantum Bit Connectivity
4.3. Quantum Error Suppression and Mitigation
4.4. Algorithmic Advances
4.5. Extended Analog Quantum Computers
4.6. Other NISQ Technologies
4.7. Finding Other Quantum Advantages
4.8. Energy
V. Moving from NISQ to fault-tolerant quantum computing
VI. Looking to the Future of Quantum Computers: Cautious Optimism

 
The NISQ era was first defined by John Preskill in a keynote address at the inaugural Q2B conference organized by QC Ware, Inc. in California in December 2017, and in a paper published in 2018 in Quantum.
 
 

"A quantum computer with 50-100 quantum bits might be able to perform tasks beyond the capabilities of today's classical digital computers, but noise in quantum gates will limit the size of the quantum circuits that can be executed reliably," he said [...]. "I made up a word: NISQ, which stands for 'Noisy Intermediate-Scale Quantum'."

 

"The 'intermediate-scale' here refers to the scale of quantum computers that will emerge in the next few years, with quantum bits ranging from 50 to several hundred [......] . Using these noisier devices, we do not expect to be able to execute circuits containing more than about 1,000 gates."

 

John Preskill goes on to add that, regarding NISQ, "even if classical supercomputers run faster, quantum technology may become the preferred choice if quantum hardware has lower cost and power consumption."

 

So far, not much research has been done on this last part. Most of the published scientific papers on NISQ algorithms deal with some form of computational advantage, but not with other, more economical advantages, especially those related to energy consumption. Indeed, we must strive to find situations in which a NISQ system could someday produce results similar to those of a first-class supercomputer or HPC algorithm, not necessarily faster, but with lower energy consumption.

 

Various techniques may help, such as improved quantum bit fidelity, various quantum error mitigation methods, analog/digital mixing, the use of specific quantum bit types (e.g., multimode photonics), as well as quantum annealers and analog quantum computers (a.k.a. quantum simulators or programmable Hamiltonian simulators), which appear to be closer to delivering useful applications, although they have their own medium- and long-term scalability challenges.

 

Given all the constraints of these different solutions, it seems that we can expect some real-world use cases for NISQ systems to emerge, but only within a fairly narrow window, before a variety of scalability issues arise.

 

Looking ahead, NISQ would require a hundred or so quantum bits with gate fidelity well above 99.99% to outperform conventional supercomputers in speed or energy efficiency, while FTQC has a lower acceptable gate fidelity of about 99.9%, but would require millions of quantum bits and ultra-long-range entanglement capabilities.

 

This raises the key question of the trade-off between quantum bit size and quantum bit fidelity that may be required in future quantum computer designs.

 

What do industry experts think about NISQ?

 
 

The known quantum algorithms best suited for NISQ systems belong to the broad class of variational quantum algorithms (VQA). Considering existing and near-future quantum bit and gate fidelities, these algorithms must have a shallow quantum circuit depth, i.e., a low number of quantum gate cycles, preferably below 10.

 

Such algorithms include VQE for quantum physics simulations, QAOA for a variety of optimization tasks, VQLS for solving linear equations, and QML for a variety of machine learning and deep learning applications. Many other kinds of NISQ VQA algorithms have also been proposed, especially for chemical simulations and search.

 

Most of these algorithms are heuristics that find near-optimal solutions to various forms of optimization problems: VQE, QAOA, and QML all search for the minimum of an energy or cost function. Variational algorithms are hybrid by design, and a large portion of the work runs on classical computers; the classical parameter optimization is itself an NP-hard problem whose cost grows exponentially as the input size increases. Some other non-variational NISQ algorithms have also been proposed, such as quantum walks.

 

Algorithms that are not at all suited to NISQ include integer factorization and discrete logarithm algorithms (the best-known being Peter Shor's from 1994), oracle-based search algorithms (e.g., Grover's algorithm and Simon's algorithm), and all algorithms relying on the quantum Fourier transform, including HHL for linear algebra and many partial differential equation (PDE) solving algorithms.

 

All of these algorithms require fault-tolerant quantum computing (FTQC) architectures; in particular, for a given number of quantum bits, the computational depth of typical gate-based FTQC algorithms grows far beyond what NISQ devices can reliably execute.

 

In the space and speed domains, quantum dominance requires at least 50 to 100 physical quantum bits.

However, the advantages in the space and speed domains are quite different. In some cases, 30-50 quantum bits are sufficient to gain some speed improvement, at least when comparing a QPU with perfect quantum bits, fast gates, and a cluster of classical servers executing the same code in emulation mode, which is usually not a best-in-class equivalent classical solution.

 

Below 18 quantum bits, it is even recommended to use a local quantum code emulator. This is not only cheaper, but also faster and more convenient, as computational tasks are not put on potentially long waiting lists and there is no need to pay for expensive access to cloud QPU (quantum processing unit) resources. In this case, laptops, individual cloud servers or server clusters are always cheaper than quantum computers.

 
Classification criteria for various quantum advantages, including space, speed, quality, energy and cost
 
From John Preskill's definition of NISQ to actual experiments
 

To date, most NISQ experiments have been performed with less than 30 quantum bits, and are therefore best described as "pre-NISQ". While these experiments are elegant proofs of concept, they have not yet demonstrated any speedup over classical computation, which means that they have not yet entered the NISQ regime as defined by John Preskill.

 

Quantum computing vendors and their ecosystem (analysts, service providers, and some software vendors) are bragging about the arrival of "commercial quantum computing," meaning their systems are ready for prime time.

 

The Q2B conferences organized by QC Ware in Silicon Valley, Tokyo, and Paris have focused on "Practical Quantum Computing", and the proliferation of such "Quantum for Business" conferences around the world is in effect an attempt to exaggerate the enterprise readiness of NISQ and to urge enterprises to jump on the quantum computing bandwagon.

 

Vendors are interested in promoting the story of quantum computing readiness, at least to attract investors (as they are raising capital) and potential customers to increase revenue, which in turn will help secure funding. They will tout use cases with exaggerated details that in most cases can be deployed at a much lower cost and can run even faster on traditional computers, often even on $1,000 laptops.

 

This is somewhat different from analog quantum computing solutions, which are closer to achieving certain quantum computing and economic advantages, but are not able to benefit from the same market drivers, at least due to the small number of vendors in the space (D-Wave, Pasqal, QuEra).

 

Some industry vendors, such as Microsoft, Alice&Bob, QCI, Amazon Web Services (AWS), and PsiQuantum, are skipping the NISQ route and focusing directly on creating fault-tolerant quantum computers.

 

Scientists range from the cautiously optimistic to the simply pessimistic. Take Daniel Gottesman of the University of Maryland, for example, who offered some insights in the Quantum Threat Timeline Report 2022, published by the Global Risk Institute. He argues, "It is not clear that there will be any useful NISQ algorithms: many of the algorithms that have been proposed are heuristic and may not work at all when scaled up. And those algorithms that are not heuristic, such as noisy quantum simulations, may not produce useful information in the presence of real device noise. I think it's likely to work and be useful, but definitely not certain."

 

In a February 2023 review paper on superconducting quantum bits, Göran Wendin of Chalmers University puts it bluntly, "Useful NISQ numerical quantum advantage = mission impossible? The short answer is: yes, unfortunately, it may be mission impossible in the age of NISQ."

 

Joe Fitzsimons of Horizon Quantum Computing agrees, "One would hope that these computers would be well used before any error correction, but the focus has now shifted away from that." He even noted in his January 2023 prediction that NISQ will simply die out.

 
 
Translated into layman's terms, this means that no quantum computer will be useful until fault-tolerant varieties of quantum computers are available and operating at a sufficient scale, which we'll have to wait at least a decade for.

 

Quantum hardware resource estimation in the NISQ era

 
 

Hardware resource and time estimation is a key discipline in quantum computing, and there is even a special "QRE workshop" that bridges the gap between real-world use cases, the algorithms involved, and the physical resources and computation time they require.

 

In late 2022, Microsoft released a resource estimation software tool that can be used for fault-tolerant quantum computing algorithms.

 

In the meantime, any estimate of NISQ resources should be compared to classical estimates of computational resources required to solve the same problem. Currently, there is a lack of such estimators of optimal classical algorithmic computational resources. We always analyze the situation on a case-by-case basis and compare it with a moving classical goal, usually with or without heuristics in different situations.

 

It would indeed be better to make a "business" decision to use a quantum computer to solve a particular problem if one could quantify the economic costs and benefits of a quantum computer compared to existing classical solutions.

 

The concept of "Total Cost of Ownership" (TCO), which is often used in classical computing, has not yet been adopted due to the lack of maturity of quantum computing technology and the absence of real-world use cases; TCO includes not only the cost of hardware and software, but also the cost of services, training, and a wide range of direct and indirect solution lifecycle costs. 

 

However, examining the current NISQ literature can provide some clues.

 

1) NISQ Quantum Bit Requirements

 

We will explore here the quantum bit resources required to successfully run the NISQ algorithm. Surprisingly, this is not difficult to assess; there is a rule of thumb that identifies these physical resource requirements. It relates the physical quantum bit error rate to the breadth and depth of a particular algorithm, and the error rate under consideration corresponds to the gates with the lowest fidelity, which for most quantum bit technologies are two-qubit gates, such as CNOT.

 
 
The breadth corresponds to the number of quantum bits used in the algorithm, and the depth corresponds to the number of quantum gate cycles. From a quantum circuit perspective, this corresponds to the quantum volume of the algorithm. You can make some tradeoffs between these two dimensions by running a very shallow algorithm with more quantum bits or a deeper algorithm with fewer quantum bits. As a rule of thumb, the quantum bit error rate must be less than the inverse of the product of the computation's breadth and depth (error rate < 1 / (breadth × depth)).
 
Hardware resource requirements based on NISQ gates
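This rule of thumb is easy to check numerically. The short Python sketch below (a minimal illustration with assumed figures, not values from the article) computes the maximum tolerable gate error rate for a few breadth × depth combinations, plus a crude success probability assuming independent gate errors.

```python
# Minimal sketch of the NISQ rule of thumb described above: the gate error
# rate must stay below 1 / (breadth x depth) for the full circuit to have a
# reasonable chance of running without a single error. Numbers are illustrative.

def max_error_rate(breadth: int, depth: int) -> float:
    """Upper bound on the tolerable gate error rate for a circuit of
    `breadth` quantum bits and `depth` gate cycles."""
    return 1.0 / (breadth * depth)

def success_probability(breadth: int, depth: int, error_rate: float) -> float:
    """Probability that no gate error occurs, assuming independent errors
    on every quantum bit at every cycle (a deliberately crude model)."""
    return (1.0 - error_rate) ** (breadth * depth)

if __name__ == "__main__":
    for breadth, depth in [(50, 8), (100, 10), (433, 10)]:
        bound = max_error_rate(breadth, depth)
        print(f"{breadth} qubits x {depth} cycles -> "
              f"error rate must be < {bound:.2e} "
              f"(i.e., fidelity > {1 - bound:.4%})")
    # Example: 50 qubits, depth 8, at roughly the 99.7% fidelity quoted below.
    print("Success probability at 0.3% error:",
          round(success_probability(50, 8, 0.003), 3))
```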
 

However, when calculating these numbers using existing quantum hardware, you will find that the situation is not quite satisfactory. On the one hand, you need at least 50 physical quantum bits to gain some quantum advantage and comply with the NISQ limit; on the other hand, the shallowest algorithms have a depth of 8 quantum gate cycles.

 

Ultimately, you need a physical gate fidelity of about 99.7%, which applies mainly to two-qubit gates and quantum bit readout. Until recently, no QPU with more than 50 quantum bits had reached this two-qubit gate fidelity. Google's 72-qubit Sycamore "version 2022" has 99.4% two-qubit gate fidelity, and IBM's 2022 Prague/Egret system comes much closer to that threshold, with 99.66% fidelity at 33 quantum bits.

 

Just recently, IBM's 133-qubit Heron processor, introduced in 2023, achieved two-qubit gate fidelity above 99.9%. Looking at all the vendors' roadmaps, IBM is the only one to have reached this goal.

 

As another example, Rigetti plans to create an 84-qubit QPU with only 99% dual-quantum-bit gate fidelity, followed by a 336-quantum-bit version that barely reaches 99.5% fidelity, which is clearly insufficient to run any NISQ algorithm with that many quantum bits.

 

Most two-qubit gate fidelities offered by industry vendors are median or average fidelities, and an important metric that is usually not reported is their standard deviation and minimum values. A good median fidelity combined with a high standard deviation is not practical, especially for the first gates of a given algorithm. High error rates can cause irreversible damage to most running algorithms.

 

One solution is to deactivate bad neighboring quantum bits after calibration, since hardware defects can produce consistently faulty two-qubit gates. However, even taking these average fidelity values at face value, publicly available two-qubit gate fidelities are still insufficient to successfully run NISQ algorithms.

 

The same is true for ion trap quantum bits, which have very good fidelity but seem difficult to scale beyond a few tens of quantum bits, preventing developers from gaining space-related computational advantages. These quantum bits are also too slow to drive, compromising their potential to generate speedups in the quantum advantage regime.

 

In current industry vendor plans and roadmaps, it is expected that most QPUs will not be able to support more than 99.9% double quantum bit gate fidelity required for NISQ or FTQC.

 

The most common solution is a "scale-out" approach that connects multiple QPUs together, somewhat like the distributed parallel computing used in high performance computing (HPC). These connections must maintain the overall entanglement and fidelity of the quantum bits. Only a handful of quantum computing companies have started the next phase, which can be explored alongside the development of QPUs.

 

Extended architectures can use a variety of techniques, such as microwave guidance between quantum bits or entangled photon based connections. Specialized quantum information networking startups, such as WelinQ (France) and QPhoX (The Netherlands), have already started to build quantum links based on entangled photon connections, and also provide quantum memory capabilities for computational and intermediate communication buffers.

 

With hundreds or thousands of quantum bits, a gate fidelity of 99.9% to 99.9999% is ultimately required, which is clearly out of reach for today's quantum computers, even in a laboratory setting with a few quantum bits. And this also ignores the fact that many NISQ algorithms that require so many quantum bits are not necessarily as shallow as those that require less than 10 gate cycles.
 

2) NISQ computation time

 

Another resource to estimate is the total computation time of the NISQ algorithm, including its classical part. After all, we are looking for some computational speedup, but we also need a computation time compatible with our patience. In the quantum advantage regime, the scaling must be carefully estimated, which involves various costs: the number of Pauli strings, the accuracy sought, and the exponential cost of quantum error mitigation.

 

The computation time of NISQ algorithms should remain reasonable regardless of usage and speed. As we will see, this is not necessarily the case in the quantum advantage regime, where they would outperform classical computation. The total computation time of most NISQ variational algorithms is roughly Ni × It, where It = ct + S × Qt, and:

 

Ni = number of iterations for the variational algorithm to converge to an acceptable value. It is case-specific and depends on the way the variational algorithm converges to the expected solution.

 

ct = the classical computing time per iteration, which also includes the time required for classical post-processing of the shot data in order to compute the expectation value of the Hamiltonian observable from the ansatz. It depends strongly on the number of shots described below.

 

S = the number of circuit executions, or shots, i.e., the number of times the ansatz circuit must be run on the quantum computer to estimate the expectation value of the observable to a given accuracy. Qt = the execution time of a single run of the quantum circuit.
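A minimal sketch of this timing model, using hypothetical parameter values (the iteration count, shot time and target accuracy below are assumptions for illustration, not figures from the article), shows how quickly the shot count S comes to dominate the total time:

```python
# Illustrative sketch of the NISQ variational runtime model described above:
# total time ~= Ni * (ct + S * Qt). All numbers below are assumptions chosen
# for illustration, not measurements from the article.

def vqa_total_time(n_iterations: int, classical_s: float,
                   shots: int, circuit_s: float) -> float:
    """Total wall-clock time in seconds for a variational algorithm."""
    return n_iterations * (classical_s + shots * circuit_s)

def shots_for_accuracy(variance: float, epsilon: float) -> int:
    """Rough shot count to estimate an expectation value to accuracy epsilon:
    S ~= variance / epsilon**2 (standard sampling-noise scaling)."""
    return int(variance / epsilon ** 2)

if __name__ == "__main__":
    # Hypothetical VQE run targeting chemical accuracy (~1.6e-3 Hartree),
    # assuming unit variance of the measured observable.
    shots = shots_for_accuracy(variance=1.0, epsilon=1.6e-3)
    total = vqa_total_time(
        n_iterations=500,      # optimizer iterations until convergence (Ni)
        classical_s=0.5,       # classical pre/post-processing per iteration (ct)
        shots=shots,           # circuit repetitions per iteration (S)
        circuit_s=1e-3,        # one circuit execution, superconducting-like (Qt)
    )
    print(f"Shots per iteration: {shots:,}")
    print(f"Total time: {total / 3600:.1f} hours")
```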

 

For VQE algorithms used in quantum chemical simulations, the target error rate can be very low, so the number of shots required increases to astronomical levels.

 

In 2015, it was estimated that it would take 10^19 circuit repetitions and 10^26 gate operations to find the ground state of ferredoxin (Fe2S2) with 112 spin orbitals using VQE. Various optimizations have been proposed to mitigate this polynomial or exponential growth of the shot count with the number of quantum bits, but these are algorithm-dependent. Otherwise, it would be a key obstacle for NISQ implementations beyond N=40 and would prevent the realization of some practical quantum advantages.

 

You can then add the overhead of various quantum error mitigation techniques to this list; even if the quantum bits had sufficient fidelity, fairly simple chemical calculations using optimized VQE algorithms could last for decades or even centuries, even with fast superconducting quantum bits.

 

What about trapped-ion quantum bits, with their better fidelity? Since their quantum gates are about 1,000 times slower than superconducting quantum bits, these quantum bits are completely out of the running. Theoretical speedups over classical computing have no value if they materialize only on non-human time scales!

 

Again, an actual full-stack evaluation of all these time costs would be very useful when discussing potential NISQ quantum advantages. Many NISQ algorithm papers do not always investigate this issue, and most of these papers deal with sub-NISQ scaling mechanisms with less than 30 quantum bits. Still, it can drive some interesting architectural designs where many of these shots will run in parallel on different QPUs, or even in a single QPU that is logically divided into multiple small quantum bit regions running the same circuit.

 

3) Classical simulation of the NISQ code

 

There are two main approaches to assessing the differences between quantum and classical computers.

A simpler but imperfect method is to compare the execution of a given quantum algorithm on a QPU with its code simulation on various types of classical computers. This simulation can be achieved by reproducing the behavior of perfect quantum bits (using state vector simulation) or the behavior of noisy quantum bits (using density matrix or tensor network techniques).

 

Another approach is to perform similar comparisons but use best-in-class classical algorithms to fulfill the same needs as the quantum algorithms. In fact, best-in-class classical algorithms may be faster than quantum algorithms that are simply simulated on a classical device. Comparisons between classical computers and NISQ systems must also take into account various subtleties associated with heuristics, output sampling, finding a solution versus finding the best solution, etc.

 
All these comparisons can only be done correctly in a few cases. We can only guess which type of NISQ quantum algorithms can be simulated on classical systems and compare their relative speed, cost, and power consumption. Furthermore, emulation is not a one-stop solution, as it can be implemented in a variety of ways, such as emulating perfect quantum bits, using state vectors, or using compression techniques such as tensor networks, which can handle a large number of quantum bits with shallow algorithms and are relevant for NISQ code emulation. Depending on the number of quantum bits and the depth of the algorithm, some key thresholds can be defined between different levels of quantum code emulation.
 
Evaluation table of typical classical resources required to simulate gate-based quantum algorithms
 
NVIDIA positions state-vector quantum simulation at systems with fewer than 32 quantum bits but unrestricted circuit depth (Y-axis), while tensor network simulation can scale to hundreds of quantum bits with shallow algorithms. As can be seen in the figure, classical simulation covers a wider range than existing NISQ quantum computers (gray)
 

In addition, the "quantum advantage" is usually seen when the QPU has at least the same power as the most powerful supercomputer, but this equivalence can be assessed when comparing it to a regular weaker high-performance computer.

 

Would the size of the QPU be very different in this case? Would a classical solution cost less than a quantum solution? How much cheaper? This is an open question.

 

In NISQ systems, the situation is complicated by the fact that all quantum algorithms are hybrid algorithms that require a large classical part to prepare the ansatz and to repeatedly tune and run it on the quantum processor. In the case of QML algorithms, the classical computer has to do a lot of data ingestion and preparation, e.g., some vector encoding for natural language processing tasks. When comparing with classical code emulation, the classical emulator should be paired with the same classical computer that handles the classical part of the algorithm.

 

A classical way to measure the capability of a classical emulation is to evaluate the available memory. For each additional quantum bit emulated, the required capacity doubles. In the most demanding state-vector simulation mode, 29 quantum bits require 8 GB of memory, which fits in most laptops.
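A quick sanity check of these figures, assuming double-precision complex amplitudes of 16 bytes each (emulators that go further rely on reduced precision, distribution across nodes, or compression such as tensor networks):

```python
# Back-of-the-envelope check of the memory claim above: a state-vector
# emulator stores 2**n complex amplitudes, typically 16 bytes each
# (double-precision complex). Every extra quantum bit doubles the footprint.

def state_vector_memory_gib(n_qubits: int, bytes_per_amplitude: int = 16) -> float:
    """Memory in GiB needed to hold the full state vector of n_qubits."""
    return (2 ** n_qubits) * bytes_per_amplitude / 2 ** 30

if __name__ == "__main__":
    # Going past ~40 qubits at full precision is out of reach of single machines;
    # that is where lower precision, distribution or tensor networks come in.
    for n in (29, 32, 40):
        print(f"{n} qubits -> {state_vector_memory_gib(n):,.1f} GiB")
```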

 

However, there are some differences between memory and processing requirements. A powerful laptop with 16GB of RAM may not be enough to simulate 29 quantum bits faster than the QPU.

 

An Intel server node can emulate up to 32 quantum bits. While Eviden's (Atos) QLM can emulate up to 40 quantum bits with more than 1 TB of RAM, the associated execution time is likely to be longer than the QPU, regardless of the quality of the results. GPU-based emulation is by far the most efficient, and NVIDIA is leading the way with its V100, A100, and latest H100 GPGPU families, whose general-purpose GPUs have different requirements than those used for gaming and 3D image rendering.

 

Resource Estimation of Mainstream Quantum Algorithm "Advantages"

 
 

We will now provide an overview of quantum algorithms applicable to NISQ QPUs, focusing not on their fundamentals but on their quantum bit resource requirements and computational time scales.

 

Kishor Bharti has pointed out in his 91-page review of NISQ algorithms that "These computers consist of hundreds of noisy quantum bits, i.e., quantum bits without error correction, and therefore perform imperfect operations in a finite coherence time. In the search for quantum advantages using these devices, various algorithms have been proposed for applications in a variety of disciplines, including physics, machine learning, quantum chemistry, and combinatorial optimization."

 

"The goal of these algorithms is to utilize the limited available resources to accomplish challenging classical tasks. Interestingly, they start by localizing NISQ in the range of hundreds of quantum bits."

 

Despite their diverse opinions, these scholars collectively note that "we may be in this era for a very long time."

 
The main classes of NISQ algorithms proposed by researchers and industry vendors are shown in blue, and those specific to fault-tolerant quantum computers relying on quantum error correction are shown in green. In theory, these NISQ variational algorithms should be noise-resilient and shallow, but so far this has not been demonstrated
 

In fact, the most studied NISQ algorithms belong to the class of variational quantum algorithms. It mainly includes VQE for chemical simulation, QAOA for combinatorial optimization, and many QML algorithms.

 

All these algorithms are heuristic hybrid classical-quantum algorithms. They use a parameterized ansatz circuit to estimate the expectation value of a quantum system's Hamiltonian; the ansatz consists of single-qubit Rx, Ry, and Rz rotations at arbitrary angles combined with CNOT gates.

 

In addition, most known NISQ algorithms are variational algorithms, part of which runs on classical computers while the rest runs repeatedly on quantum computers until convergence. There are several issues worth noting here. The first is: how does the classical part scale to the large problems of the NISQ quantum advantage regime?

 
Diagram describing how variational quantum algorithms (VQA) operate and their scaling parameters. The gray part corresponds to the classical component of these algorithms; the ansatz consists of layers of single-qubit rotations and two-qubit CNOT gates. After a number of runs, the expectation value of the Hamiltonian is computed classically; auxiliary quantum bits and operations can also be added to the ansatz
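A minimal sketch of this hybrid loop is shown below; the "quantum device" is replaced by a classical stand-in that returns a shot-sampled expectation value for a one-parameter, one-qubit ansatz, and the classical optimizer is SciPy's COBYLA. All names and values are illustrative assumptions, not the specific algorithms discussed in this article.

```python
# Minimal sketch of the hybrid variational loop described above. The "quantum
# device" is replaced here by a classical stand-in that returns a shot-sampled
# expectation value <Z> = cos(theta) for a single-qubit Ry(theta) ansatz; in a
# real setting this call would go to a QPU or an emulator.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

def sampled_expectation(theta: float, shots: int = 1000) -> float:
    """Estimate <Z> after Ry(theta)|0> from a finite number of shots."""
    p0 = np.cos(theta / 2) ** 2            # probability of measuring |0>
    counts0 = rng.binomial(shots, p0)      # simulated measurement outcomes
    return (2 * counts0 - shots) / shots   # <Z> = p0 - p1

def cost(params: np.ndarray) -> float:
    # The "energy" to minimize; its true minimum is -1 at theta = pi.
    return sampled_expectation(params[0])

if __name__ == "__main__":
    # Classical optimizer loop: each iteration triggers a new batch of shots.
    result = minimize(cost, x0=np.array([0.1]), method="COBYLA",
                      options={"maxiter": 100})
    print(f"theta = {result.x[0]:.3f}, estimated energy = {result.fun:.3f}")
```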
 

"When quantum computers are finally able to fulfill their promise, variational methods may not be practical," declares Kenneth Rudinger of the U.S. Department of Energy's Sandia Laboratories.

 

"There's good reason to believe that the scale of the problem you're trying to solve is too large for variational methods; at that scale, it's essentially impossible for a conventional computer to find a good setup for a quantum device."

 

Another important but usually unaddressed question is what is the relative weight of the classical computational part in terms of computational time and total running cost in variational quantum algorithms? On this point, most papers do not elaborate much on the classical resource cost of variational algorithms.

 

1) VQE algorithms for chemical simulations

 

Most VQE experiments to date have been realized with a few quantum bits, well below the quantum dominance threshold, and these experiments have been done in the pre-NISQ phase, well below 50 quantum bits, for several reasons.

 

First, many PhD projects last between one and three years. Second, while there are several QPUs with more than 50 quantum bits, notably from IBM and Google, their gate fidelity is too low to run larger-scale VQE (and VQA) noise-resilient algorithms. The quantum volume of truly usable QPUs is very low, the record being held by Quantinuum's trapped-ion QPU at 2^22.

 

These experiments are useful for validating the behavior of the algorithms until QPUs can scale and hold more quantum bits.

 

In the field of chemical simulations, VQE experiments are usually limited to finding the ground state energy of the Hamiltonian for simple two- to three-atom molecules such as LiH, BeH2, or H2O. As we have seen before, finding the ground state of a slightly more complex molecule such as benzene takes the NISQ system into uncharted territory and requires very long computational times and demands for high-fidelity physical quantum bits.

 

The results show that, for a variety of molecules, even the best-performing VQE algorithms require gate error probabilities on the order of 10^-6 to 10^-4 to achieve chemical accuracy. VQE is also useful for calculating the excited states of molecules.

 
NISQ algorithm papers and the number of quantum bits they test
 

Nowadays, VQE is not sufficient for more pressing computational chemistry needs, such as determining the structure of macromolecules, searching for complex vibrational and rotational spectra, and molecular docking, which are very useful in drug design and in the chemical industry. These use cases generally fall under the FTQC domain and, in most cases, at its extreme end where the number of logical quantum bits is very large. For example, estimating the ground state of a complex molecular Hamiltonian in the FTQC domain requires the Quantum Phase Estimation (QPE) algorithm, whose accuracy depends on the number of auxiliary quantum bits encoding the eigenvalue results.

 

2) QAOA algorithm for combinatorial optimization

 

QAOA is the VQA algorithm most relevant to the second class of NISQ use cases, combinatorial optimization. However, it does not seem to scale well and requires a larger number of high-quality quantum bits than currently exist to bring some quantum advantage in practical enterprise operations use cases.

 
p corresponds to the number of times the QAOA circuit block is repeated in the algorithmic parse. This means that p=8 is twice as deep as p=4. This requires a physical quantum bit fidelity of 99.9986%, which is well beyond the scope of the NISQ architecture
 

QAOA algorithms typically rely on QAOA components. Anton Simen Albino et al. state, "Due to the linear relationship between the dimensionality of the problem and the number of quantum bits, thousands of quantum bits are required before QAOA and its variants can be used to solve these problems. However, the quantum bits used are not necessarily error-corrected due to the nature of the heuristic itself, which requires low depth circuits and a small number of measurements of the final state."

 

Johannes Weidenfeller et al. provide a number of clues to running QAOA on the NISQ system. They highlight some of the obstacles that need to be overcome to "improve the competitiveness of QAOA, such as gate fidelity, gate speed, etc."

 
The figure shows that the double quantum bit gate error rate required to achieve quantum dominance using QAOA is far beyond what is currently achievable
 

Therefore, implementing the QAOA algorithm in the quantum advantage regime appears to require FTQC and more than a million physical quantum bits! A workaround is to build relatively large NISQ systems with high quantum bit connectivity.

 

3) Quantum machine learning algorithms

 

In the literature, the situation of quantum machine learning seems to be not much better. Compared to the QAOA algorithm, related algorithms running on NISQ suffer from the same problem in the way they actually scale.

 

In November 2022, Lucas Slattery et al. estimated that "NISQ has no quantum advantage on QML using classical data." Worse, the geometric difference between the "well-behaved" quantum model and the classical model is small and decreases as the number of quantum bits increases.

 
When running some hybrid quantum neural network algorithmic inferences, the actual algorithmic depth of NISQ is compared to some of the current QPUs offered by IBM and AWS cloud services. Here, we can see the difficulties of the current NISQ platform in terms of breadth (number of quantum bits) and depth (number of gate cycles associated with the fidelity of quantum bits)
 

On the other hand, quantum machine learning speedup is not the only potentially quantum-advantageous attribute, but as Maria Schuld and Nathan Killoran point out, the comparison between classical and quantum machine learning algorithms is complex.

 

It involves classification quality, generalization ability on unseen data, training data requirements, etc., with few benchmark references. Moreover, training data ingestion is mainly done by the classical part to prepare the variational part of the algorithm, and it scales linearly with the size of the data, so no quantum advantage can be expected there.

 

Finally, like the VQE algorithm, QML algorithms must cope with the well-known barren plateau problem, where training convergence cannot be achieved unless the ansatz circuit is very shallow.

 

This is analogous to the local minima trap in classical machine learning, where the global minimum is searched for but difficult to reach. There is active research to address this problem, such as adding parameters and constraints to improve the gradients in the variational training loop without resorting to inefficient overfitting.

 

How to address the current weaknesses of NISQ QPUs

 
 

So far, we've painted a rather bleak picture of where NISQ is going, at least in the short term.

 

Here we will discuss potential solutions, albeit sketchy and unproven. How can some of the current weaknesses of NISQ QPUs be addressed so that they can take advantage of some form of quantum computing?

 

These problems could be addressed by improvements in the following areas:

 

- Quantum error suppression and mitigation techniques, although it is well known that the cost of these techniques is exponentially related to circuit depth or number of quantum bits.

 

- Algorithmic resilience to noise and other hardware requirement limitations. This resilience is quite rare and is mainly seen in some specific quantum machine learning techniques.

 

- Extended analog quantum computing platforms, as they have their own limitations, belong to another category in the NISQ field.

 

- Quantum bit fidelity and capability to achieve larger quantum volumes and more high-fidelity quantum bits.

 

- Utilizing the connectivity of quantum bits to achieve shallower algorithm implementations and faster computation times.

 

- Quantum advantages in addition to speed gains.

 

- Energy. It may be a key operational advantage of NISQ systems, provided useful computation is performed in the first place.

 

1) Quantum Bit Fidelity and Capacity

 

Improving quantum bit fidelity is certainly easier said than done. All quantum computing research labs and industry vendors are working in this direction, with varying results.

 

Quantum bit fidelity includes quantum bit initialization fidelity, single and double quantum bit gate fidelity, and quantum bit readout fidelity.

 

The interesting QPUs, both existing and future, are those with two-qubit gate fidelity above 99.5%; currently there are only a few trapped-ion and superconducting QPUs from IonQ, Quantinuum, and IBM.

 

Trapped ions appear to be difficult to realistically scale beyond about 40 quantum bits. To date, no platform has achieved 99.9% fidelity across two-qubit gates, quantum bit preparation, and readout.

 

A number of alternatives are in the pipeline:

 

- C12 Quantum Electronics' carbon nanotube spin quantum bits are expected to reach the 99.9% fidelity threshold, but to date have only been simulated digitally.

 

- Nitrogen and silicon carbide vacancy centers are also ideal candidates for high-fidelity quantum bits, although they are currently difficult to manufacture on a large scale.

 

- Optical quantum bits have a different kind of advantage because they do not naturally decohere. The problem to be solved is their statistics, which requires deterministic photon sources, ideally producing cluster states of entangled photons, and the use of deterministic photon detectors.

 

- Autonomously corrected quantum bits in the bosonic quantum bit family are also promising. These include the cat qubits developed by Alice&Bob and AWS, and other bosonic code qubits developed by Nord Quantique and QCI.

Their bit-flip error rates are low, but their phase error rates are high enough to require some error correction, so these quantum bits go directly into the FTQC field.

 

- Similarly, Majorana fermion (or MZM, Majorana Zero Mode) quantum bits provide some form of self-correction, but are only implemented when error-tolerant error correction schemes are in effect. They do not belong to the NISQ QPU category.

 
Quantum computing experts mainly trust superconducting and trapped-ion quantum bits to scale, with a few trusting photonic and spin quantum bits
 

2) Quantum bit connectivity

 

Quantum bit connectivity plays a key role in minimizing the depth of many algorithms, both in the NISQ and FTQC regimes, e.g., limiting the number of SWAP gates required in the implementation of many algorithms.

 

The quantum bits with the best connectivity are trapped ions. They offer all-to-all connectivity, which, together with excellent fidelity, makes them a leading quantum computing platform.

 

This also explains why ion trap QPUs have the best quantum volume to date; unfortunately, at the current stage of development, the number of these quantum bits does not scale well. All current vendors (IonQ, Quantinuum, AQT, Universal Quantum, eleQtron) have QPUs with less than 30 quantum bits and progress is very slow.

 
Differences in connectivity of superconducting quantum bits across platforms
 

Superconducting quantum bits have various types of connectivity. The best is D-Wave's, whose quantum bits are connected to 15 neighbors, soon 20, albeit in quantum annealing mode. Google's Sycamore quantum bits are connected to 4 neighbors through tunable couplers.

Finally, IBM's heavy-hexagonal lattice provides a more limited connectivity of 2 to 3 neighbors per quantum bit.

 

Some quantum error-correcting codes (e.g., LDPC) require long-distance connections between quantum bits, which seems possible using interconnect chips stacked underneath the quantum bit chip. Adding more metal layers to this interconnect chip promises some progress in this area; IBM and MIT Lincoln Laboratory are developing 3- and 7-layer interconnect chips to improve the connectivity of superconducting quantum bits.

 

3) Quantum Error Suppression and Mitigation

 

Quantum computers deal with errors in different ways. The techniques used by NISQ systems are quantum error suppression and quantum error mitigation. Fault-tolerant quantum computers will use quantum error correction techniques, which fall outside the NISQ QPU scope.

 

Quantum error suppression techniques involve improving quantum bits at the physical level to minimize decoherence (loss of superposition and entanglement), crosstalk (when manipulating some quantum bits interferes with others), and leakage (which occurs when a quantum bit leaves its |0⟩ and |1⟩ computational basis, as happens with superconducting quantum bits), and to maximize gate fidelity and speed. They also handle quantum bit initialization errors and readout corrections.

 

It does this by optimizing electronic controls (pulse shaping, reduction of phase, amplitude and frequency jitter) as well as advanced device qualification and calibration. It depends on the type of quantum bit. If implemented properly, error suppression techniques scale relatively well with the number of quantum bits and the complexity of the algorithm.

 

Error suppression techniques can also be used in FTQC setups. Error filtering (EF) is a variation of the error suppression technique that reuses a technique originally designed for quantum communications.

 

Quantum error mitigation (QEM) is the reduction of errors in quantum algorithms based on running the algorithm multiple times and averaging the results, combined with classical post-processing techniques and some potential circuit modifications. In contrast to fast feedback corrections based on QEC active quantum bit measurements and affecting the results of a single run, QEM utilizes multiple runs and subsequent measurements as well as some classical processing to reduce the impact of quantum errors.

 

QEM proposals started to emerge around 2016. Most of them aim to understand the effect of noise on the evolution of quantum bits and to create predictive noise models that can be used to adjust the results of quantum computations. Most QEM methods do not increase the number of quantum bits required for a particular algorithm.
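As an illustration of this kind of classical post-processing, the sketch below implements zero-noise extrapolation (one common QEM technique, used here as a generic example rather than any specific method named above) on synthetic measurement data:

```python
# Illustrative sketch of zero-noise extrapolation (ZNE): fit the trend of an
# expectation value measured at artificially amplified noise levels, then
# extrapolate back to the zero-noise limit. The noisy data below are synthetic
# (an exponential decay plus sampling noise); the post-processing is generic.

import numpy as np

rng = np.random.default_rng(7)

def noisy_expectation(scale: float, ideal: float = 0.85,
                      decay: float = 0.25, shots: int = 4000) -> float:
    """Synthetic model: the measured value decays with the noise scale factor,
    plus shot noise of order 1/sqrt(shots)."""
    mean = ideal * np.exp(-decay * scale)
    return mean + rng.normal(0.0, 1.0 / np.sqrt(shots))

if __name__ == "__main__":
    scales = np.array([1.0, 1.5, 2.0, 3.0])     # noise amplification factors
    values = np.array([noisy_expectation(s) for s in scales])

    # Richardson-style polynomial extrapolation to scale = 0.
    coeffs = np.polyfit(scales, values, deg=2)
    zne_estimate = np.polyval(coeffs, 0.0)

    print("Measured at scales", scales.tolist(), "->", np.round(values, 3).tolist())
    print(f"Extrapolated zero-noise value: {zne_estimate:.3f} (ideal: 0.85)")
```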

 

Most of the known QEM techniques have various limitations, including accuracy and scaling issues, and the exponential overhead in computational time in turn limits the potential quantum advantage of the NISQ algorithm in high-end systems.

 

However, these drawbacks may be limited within the narrow range of quantum advantages that can be realized with NISQ.

 

4) Algorithmic Advances

 

As we have seen in the previous section on NISQ algorithms, the requirements for these algorithms to produce some quantum advantage are quite demanding. Most of the algorithms have been tested on very low scales, requiring more quantum bits and higher gate fidelity, which is not currently possible or even foreseeable in the short to medium term.

 

Nevertheless, improvements in algorithm design are encouraging. Many of these algorithms reduce the number of quantum bits and gate depth requirements of typical variational algorithms (VQE, QAOA).

 

Another approach to optimizing molecular simulations using VQE has been proposed by Algorithmiq and Trinity College Dublin using the ADAPT-VQE-SCF methodology, and they anticipate that these techniques will yield a useful quantum advantage in 2023.

 

Still in the VQE field, a team of German and Spanish researchers found a way to improve a Flight Gate Assignment (FGA) algorithm for airport flights. The goal of the algorithm is to minimize "the total time spent by passengers at the airport" by "finding the optimal gate assignment for a flight."

 
A 20-qubit, depth-40 NISQ algorithm executed on the Quantinuum QPU. This non-shallow NISQ algorithm, which uses a large number of two-qubit gates, can currently only be run on trapped-ion QPUs. The designers of this particular algorithm hope that it will provide some quantum advantage in future systems because it uses a fixed-depth circuit. However, this would still require larger trapped-ion QPUs with higher fidelity than existing ones
 
Finally, a relatively exotic way to gain a quantum advantage with NISQ is to provide quantum data directly to the QPU, which could theoretically be achieved with quantum sensors, and it was achieved by running the QML algorithm in 2021 using 40 superconducting quantum bits and 1,300 quantum gates. It's interesting, but only for very specific use cases.
 
Feeding the QML algorithm directly with quantum data from quantum sensors (above) is exponentially faster than a classical setup (below) where the data is generated by classical means. However, most of the data used in quantum machine learning comes from classical data sources
 

5) Extending analog quantum computers

 

Quantum annealing and analog quantum computing are not the darlings of the quantum computing industry. On the one hand, in the field of quantum annealing, D-Wave has long been criticized for being "non-quantum" or for failing to deliver any computational advantage. On the other hand, analog quantum computers (programmable Hamiltonian simulations or programmable quantum simulators) have been developed and commercialized by very few vendors, such as PASQAL and QuEra, and are said to face scalability challenges of their own.

 

Nonetheless, when objectively comparing the documented case studies around us, it becomes clear that many of the solutions are not far from realizing some kind of quantum advantage. Most of them are not yet "production grade", but they are closer to it than all NISQ-based prototype algorithms.

 
Comparison of the accuracy and runtime of dynamic combinatorial optimization solutions based on gate-based VQE, D-Wave annealing, and classical tensor networks, showing the potential advantages of quantum in terms of profitability of large problems and runtime of the D-Wave 2000Q processor
 
Case studies in finance: quantum annealing, quantum simulation and quantum-inspired algorithms. As of 2023, short-term case studies with low business impact can use classical quantum-inspired algorithms. Quantum annealers and quantum simulators have already created prototype solutions, but are generally not yet scalable to production-grade levels. Finally, the most interesting commercial use cases and algorithms require fault-tolerant quantum computers with thousands of logical quantum bits, so the scalability of these systems has yet to be verified in practice
 

In reviewing all these case studies in the category of gate quantum computing and analog quantum computing, one thing is striking: the most powerful solutions available are in the analog domain, not in the gate domain.

Quantum-inspired classical solutions that implement linear algebra and tensor network computation also make classical computation more competitive in several areas that are not quantum computation at all.

 

Other use cases then place us directly in the FTQC region, requiring thousands of logical quantum bits and therefore millions to hundreds of millions of physical quantum bits.

 

However, even if the existing use cases for analog quantum computing are closer to real-life production-grade levels than the gate-based equivalents, there are still a number of challenges that need to be overcome in order to use analog quantum computers to generate quantum advantage.

 

For neutral atoms, scaling is tied to the ability to control large arrays of well-placed entangled atoms in an ultra-high vacuum. The relevant tools consist of more powerful and stable lasers and their associated control electronics. In addition, the research-grade optical benches used to control all the quantum computer equipment have to be redesigned to avoid cumbersome positioning adjustments when setting up and calibrating these QPUs.

 
Some extension challenges for quantum annealers and analog quantum computers
 

6) Other NISQ Technologies

 

Let us now take stock of the various technologies that have the potential to make NISQ feasible, although the evaluation of these technologies is still ongoing, as in most cases they have not yet been verified in practice.

 

DAQC (Digital Analog Quantum Computing) is a proposal to implement a hybrid gate-based and analog quantum computing model. DAQC makes more efficient use of quantum computing resources, allowing NISQ algorithms to use fewer quantum bits and to run faster than on normal NISQ QPUs. It is suitable for optimization and machine learning. It was proposed by Kipu Quantum (Germany) and Qilimanjaro (Spain); Kipu Quantum is investigating the use of superconducting, trapped-ion and neutral-atom quantum bits. There are open questions about the speedup this architecture provides, its dependence on the class of algorithms, and its impact on control electronics and energy. In addition, debugging the algorithms is more complex and there are few development tools to support it.

 

The LHZ architecture (named after its inventors Lechner, Hauke, and Zoller), developed by ParityQC (Austria), uses small logical quantum bits in a variant of quantum annealing to make it programmable. The architecture can be realized with superconducting, NV-center, quantum dot, and neutral-atom quantum bits. ParityQC has proposed a related technique to reduce QAOA errors through quantum error mitigation.

 

Circuit cutting and entanglement forging are two NISQ techniques proposed by IBM Research. Circuit cutting splits "a quantum circuit into smaller circuits with fewer quantum bits and fewer gates, so that by utilizing subsequent classical post-processing, execution of the smaller set of circuits yields the same result as execution of the original circuit". This approach improves the QAOA expectation value, but its advantage diminishes with the size of the graph.

 

Entanglement forging "utilizes classical resources to capture quantum correlations, doubling the size of systems that can be simulated by quantum hardware." It is mainly used in conjunction with VQE for molecular simulation or quantum machine learning, based on Schmidt decomposition and SVD (Singular Value Decomposition) of quantum states into binary states of N+N quantum bits, with scalability to be further verified.

 

The third technique is circuit knitting, where circuits are partitioned into highly interacting parts placed on the same QPU or across multi-core and distributed architectures, using some form of quantum communication such as microwave or photonic entanglement links.

 

Q-CTRL (Australia) provides quantum control infrastructure software that drives the microwave pulses controlling quantum bits at the low-level firmware level, and uses machine learning to improve these control pulses and optimize quantum error correcting codes, a quantum error suppression technique.

 
Q-CTRL's Boulder Opal architecture optimizes the control pulses of superconducting quantum bits
 

Quantum computer designers using IBM Qiskit, Rigetti, and Quantum Machines microwave pulse generators can use its Python toolkit. It implements error suppression techniques that increase the likelihood of success of quantum computing algorithms on quantum hardware by a factor of 1,000 to 9,000, as measured using the QED-C algorithm benchmarks.

 

NISQ+ is a technology proposed by Intel, the University of Chicago and the University of Southern California (USC) in 2020 that uses fast approximate quantum error correction and quantum error correction mitigation techniques, SFQ superconducting control electronics circuits running at 3.5K, and lightweight logic quantum bits. 

 

It is intermediate between NISQ and FTQC and can increase the availability of NISQ QPUs by several orders of magnitude. For example, it can extend the computational depth of a 40 to 78 quantum bit QPU to millions of gate cycles using only 1000 physical quantum bits.

 
NISQ+ has the potential to achieve 78 logic quantum bits and good computational depth
 

7) Finding other quantum advantages

 

The goal of most quantum algorithms is to achieve quantum speedups over best-in-class classical algorithms. In theory, this computational time speedup is usually polynomial or exponential, with the Holy Grail being the exponential speedup.

 

In practice, however, most NISQ algorithms appear to have at most moderate polynomial speedups. Due to the high constants in the quantum mechanism and the rather slow gate cycles of quantum computers, crossovers with best-in-class classical algorithms can occur at very large time thresholds.

 

This means that a NISQ implementation outperforms the classical one only in cases where the computation takes more than days, months or even years. If the classical part of the variational algorithm's computation is very long and does not scale well, the difference may even be minimal.
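A toy calculation illustrates this crossover effect; all constants (gate times, overhead factor, quadratic speedup) are assumptions chosen only to show the shape of the argument:

```python
# Rough sketch of the crossover argument above: assume a quadratic quantum
# speedup (classical cost ~ N operations, quantum cost ~ sqrt(N) "steps"),
# a 1 ns classical operation, a 1 us quantum gate cycle, and a constant
# overhead factor for error mitigation and repetitions. All values are
# illustrative assumptions, not measurements.

import math

CLASSICAL_OP_S = 1e-9     # one classical operation
QUANTUM_STEP_S = 1e-6     # one quantum gate cycle
OVERHEAD = 1e4            # repetitions, mitigation, readout, etc.

def classical_time(n: float) -> float:
    return n * CLASSICAL_OP_S

def quantum_time(n: float) -> float:
    return OVERHEAD * math.sqrt(n) * QUANTUM_STEP_S

if __name__ == "__main__":
    for exp in range(8, 26, 2):
        n = 10.0 ** exp
        c, q = classical_time(n), quantum_time(n)
        winner = "quantum" if q < c else "classical"
        print(f"N = 1e{exp:2d}: classical {c:9.2e} s, quantum {q:9.2e} s -> {winner}")
```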

 

However, in some cases, the NISQ quantum algorithm can help create better solutions than the classical algorithm. But this is difficult to evaluate, especially with QML.

 

Some qualitative aspects resulting from NISQ solutions could be: higher prediction and classification accuracy for QML, less training data for QML, or better heuristic results for optimizations implemented in physical simulations of QAOA or VQE variations. 

 

Another potential advantage is the favorable energy profile of quantum computers. However, for this to be evaluated, a NISQ quantum algorithm would have to perform at least as well as best-in-class classical algorithms, and such comparisons are never easy to make.

 

Comparison of quantum computing systems with classical computing systems is more subtle than simply looking at speed, and the classical point of comparison is not necessarily the largest supercomputer available.

Furthermore, the taxonomy is not limited to theoretical asymptotic polynomial or exponential advantages, but rather to practical advantages for a given set of algorithms and real-world use cases using production-grade input datasets.

 

Many papers have discussed these aspects without taking into account the practical state of classical computing techniques. Much work remains to be done in this area, and more theoretical and experimental data is needed, as well as more precise classical computational equivalents for comparisons. 

 

8) Energy

 

If the NISQ algorithms can show some superiority in speed, or even be on par with various classical algorithms, it would be interesting to compare their energy consumption.

 

We may get a surprising result: one of the key advantages of the NISQ platform is its lower energy cost compared to the classical platforms.

 

As we have seen so far, only the NISQ QPUs from IBM, D-Wave, PASQAL and QuEra are currently worth looking at. We will have to take a look at their roadmaps for the NISQ era to see if they can deliver some computational advantages and some energy advantages.

 
Typical power consumption of existing QPUs and their sources, none of these systems have quantum advantages at this time (2023)
 
A table comparing the power consumption of existing QPUs and future NISQ-class QPUs from several vendors, and if these future systems bring quantum computing advantages in the near future, then they may also bring associated energy benefits
 

IBM's 1,386-qubit Flamingo system, planned for 2024, could be interesting, while the 1,121-qubit Condor platform may not have enough fidelity to successfully run NISQ algorithms.

 

As for PASQAL and QuEra, we must consider their next-generation neutral-atom analog quantum computers with actual controllable atom counts between 300 and 1,000. Other QPUs worth considering are based on multimode photons, such as Quandela's QPU and Xanadu's other systems.

 

To understand IBM's estimate of 140 kW for the future Flamingo platform, we can guess that it will use a Bluefors KIDE cryostat containing nine pulse tubes and Cryomech compressors consuming about 10 kW each, plus a complementary external water-to-water cooler for the compressors. The gas handling system and control system for each of the three dilution units consume about 1 kW. Together with a couple of personal computers, vacuum pumps and control electronics, this comes to a power consumption of about 20 watts per quantum bit.

 

The power consumption cannot be directly compared to that of a classical machine because the computation time must be taken into account. The energy footprint is not power, but power × time. To estimate this footprint, we need to calculate the number of gate cycles required for a particular QPU algorithm and multiply it by the average gate duration. This gives an estimate of the energy consumption (in joules) per computation. Of course, efforts would also need to be made to identify computations with comparable performance to the classical algorithms and then benchmark their respective energy consumption.
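A back-of-the-envelope sketch of such an energy comparison is shown below; the power figures echo the orders of magnitude quoted in this section, while the gate counts, shot counts, and the assumption that the two jobs are computationally equivalent are purely illustrative:

```python
# Sketch of the energy-footprint comparison described above: energy = power x
# time, estimated from the number of gate cycles, the average gate duration
# and the number of repetitions. All workload figures are assumptions.

def energy_kwh(power_w: float, runtime_s: float) -> float:
    """Energy in kWh for a device drawing power_w watts for runtime_s seconds."""
    return power_w * runtime_s / 3.6e6

def qpu_runtime_s(gate_cycles: int, gate_time_s: float, shots: int) -> float:
    """Total QPU time: circuit depth x gate duration x number of repetitions."""
    return gate_cycles * gate_time_s * shots

if __name__ == "__main__":
    # Hypothetical NISQ job: depth-10 circuit, 1 us per gate cycle, 1e6 shots.
    q_time = qpu_runtime_s(gate_cycles=10, gate_time_s=1e-6, shots=1_000_000)
    q_energy = energy_kwh(power_w=25_000, runtime_s=q_time)      # ~25 kW QPU

    # Hypothetical classical equivalent: a 30 kW GPU rack running for 2 hours.
    c_energy = energy_kwh(power_w=30_000, runtime_s=2 * 3600)

    print(f"QPU: {q_time:.0f} s of quantum time, {q_energy:.3f} kWh")
    print(f"HPC: {2 * 3600} s of rack time, {c_energy:.1f} kWh")
```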

 

For example, IBM's future Flamingo platform, with a power consumption estimated at under 140 kW, may compare favorably with HPC if it can successfully run NISQ algorithms within a reasonable optimization cycle.

But all of this has to be simulated, tested and calculated before reaching any conclusion.

 

In the end, this only makes sense when comparing these quantum systems to classical systems doing similar work. For example, we know that a full rack of NVIDIA DGX systems draws about 30 kilowatts, while Frontier, the largest supercomputer at the U.S. Department of Energy's Oak Ridge National Laboratory, draws about 22 megawatts at full scale.

 
A new perspective on quantum advantage. The bar heights (unitless) correspond to the relative added value of each solution compared with an equivalent classical solution. The prerequisite, of course, is that the NISQ and FTQC algorithms provide some computational advantage, or are at least comparable to classical computers for the same task
 
In conclusion, the quantum advantage of NISQ may ultimately lie in quality and energy rather than in computation time: in a world of limited resources, this would make NISQ solutions well suited to the field of high-performance computing.

 

Moving from NISQ to fault-tolerant quantum computing

 
 

What is the difference between NISQ and FTQC from a use case perspective?

 

We have seen that NISQ algorithms cover a wide range of optimization, machine learning and physics simulation use cases, even though the corresponding quantum advantages are not yet apparent. While not yet well documented, their potential is moderate in terms of the scale of problems they can solve.

 

In fact, as shown in the previous parts of this paper, NISQ does not scale very well, for at least three reasons: it is difficult to create quantum bits with fidelities high enough to run mid-scale NISQ circuits with hundreds of quantum bits and gate cycles; the cost of quantum error mitigation grows exponentially with the error rate and circuit size; and the computation times are completely unreasonable, especially for VQE algorithms used in chemical simulations.
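
To make the second point concrete, here is a hedged sketch of the commonly cited sampling-overhead scaling of probabilistic error cancellation; the per-gate overhead factor and the error rate below are assumptions chosen for illustration:

```python
# Sketch of the exponential sampling overhead of quantum error mitigation,
# using probabilistic error cancellation (PEC) as the example.
# Assumption: each noisy gate contributes an overhead factor gamma ~ 1 + 2 * error_rate,
# and the required number of shots grows roughly as gamma ** (2 * n_gates).
def pec_shot_overhead(error_rate: float, n_gates: int) -> float:
    gamma = 1 + 2 * error_rate
    return gamma ** (2 * n_gates)

for n_gates in (100, 1_000, 10_000):
    print(f"{n_gates:>6} gates -> x{pec_shot_overhead(1e-3, n_gates):.2e} shots")
# ~1.5x at 100 gates, ~55x at 1,000 gates, ~2e17x at 10,000 gates:
# mitigation works at small scale but becomes hopeless as circuits grow.
```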

 

FTQC algorithms add several additional capabilities:

 

- Solving problems with more variables, such as simulating larger molecules, solving larger combinatorial problems (this time deterministically), and training larger quantum machine learning models.

 

- Various algorithms relying on the quantum Fourier transform, such as quantum phase estimation, quantum amplitude estimation, linear-algebra routines and HHL for partial differential equations. These algorithms are used for quantum many-body simulations, quantum machine learning, financial applications and many other use cases.

Note, however, that the computation time of QPE (quantum phase estimation) based chemical simulation algorithms can be even longer than that of their NISQ VQE equivalents.

 

- Shor's integer factoring and discrete logarithm algorithms, whose main "commercial value" clearly lies not in any "technological benefit" but in breaking the keys of public-key infrastructures and of symmetric-key sharing mechanisms.

 

- Oracle-based search and optimization problems solved with, for example, Grover's algorithm. In some cases these rely on forms of quantum memory that do not yet exist, and they do not scale well, offering only polynomial speedups (a rough query-count comparison is sketched below).
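
As referenced in the last item, a rough query-count comparison for unstructured search; the problem sizes are chosen purely for illustration:

```python
import math

# Query-count comparison for unstructured search over N items (illustrative only).
# Classical exhaustive search needs ~N/2 oracle queries on average;
# Grover's algorithm needs ~(pi/4) * sqrt(N) oracle calls.
def classical_queries(n: int) -> float:
    return n / 2

def grover_queries(n: int) -> int:
    return math.ceil(math.pi / 4 * math.sqrt(n))

for n in (10**6, 10**9, 10**12):
    print(f"N={n:.0e}: classical ~{classical_queries(n):.1e}, Grover ~{grover_queries(n):.1e}")
# The speedup is only quadratic, and every Grover query is itself a deep quantum circuit,
# which limits the practical benefit without fast quantum memory (QRAM).
```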

 

The typical problem with FTQC, its algorithms and their practical use cases is the large amount of resources they require in terms of physical quantum bits. Many papers have carried out such resource estimations, including with Microsoft's recent resource estimation tool mentioned above.

 

In addition, like NISQ algorithms such as VQE, FTQC algorithms can take too long to compute.

 
According to Xanadu and Volkswagen, simulating a battery's key characteristics (voltage, ion mobility and thermal stability, including a first-quantization simulation of the cathode material) requires between 2,375 and 6,652 logical quantum bits
 
A FeMoCo simulation requires at least 2,000 logical quantum bits
 
PsiQuantum's estimates for implementing Fermi-Hubbard crystal material simulations, molecular compounds in the cc-pVDZ/VTZ basis sets, and Shor's algorithm on typical RSA key sizes
 
Resources needed to implement a specific option-pricing algorithm
 

So what is the sequence of NISQ and FTQC?

 

John Preskill's definition of NISQ implies that it is the middle road to FTQC: one after the other. What if this order is not the only option? We see here that NISQ and FTQC may be two parallel paths, each with different tools and challenges.

 

We have already seen that many NISQ algorithms require QPUs with fidelities well above 99.99%. This means that FTQC may in fact be a more feasible way to implement so-called NISQ algorithms, and could even lead to some quantum advantage, which may explain why some physicists consider FTQC to be the only viable path to a quantum advantage.

 
Some advantages and challenges of each of NISQ and FTQC
 

However, one can also argue that it may be easier to create a few hundred very-high-quality quantum bits for NISQ than a very large number of well-entangled, 99.9%-fidelity quantum bits for FTQC. Very high fidelity is essential to gain a real quantum advantage with a NISQ QPU, because at 99.9% fidelity the QPU is easy to simulate classically. If quantum bits cannot be scaled to the tens of thousands or millions, NISQ may be the only viable route.

 

On the other hand, if we are able to make very high quality quantum bits that also scale well, then it would be possible to create FTQC QPUs with fewer physical quantum bits, easing the scalability burden, especially in terms of cabling, control electronics and signal multiplexing.
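
A hedged sketch of why quantum bit quality translates directly into physical-qubit counts, using the standard surface-code scaling heuristic; the prefactor, threshold and target logical error rate below are assumptions, not tied to any specific hardware:

```python
# Rough surface-code overhead heuristic (assumed constants, not a vendor specification):
#   logical error rate            p_L ~ A * (p / p_th) ** ((d + 1) / 2)
#   physical qubits per logical qubit ~ 2 * d**2   (data + ancilla, rotated surface code)
A, P_TH = 0.1, 1e-2   # assumed prefactor and error threshold

def distance_needed(p_phys: float, p_logical_target: float = 1e-12) -> int:
    d = 3
    while A * (p_phys / P_TH) ** ((d + 1) / 2) > p_logical_target:
        d += 2        # surface-code distances are odd
    return d

for p_phys in (1e-3, 1e-4):
    d = distance_needed(p_phys)
    print(f"physical error {p_phys}: distance {d}, ~{2 * d * d} physical qubits per logical qubit")
# With these assumptions, improving physical fidelity by 10x cuts the
# per-logical-qubit overhead from ~880 to ~240 physical qubits.
```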

 

This remains an open question. How big can a quantum bit entanglement network get? Will it reach the famous quantum-classical boundary? We need to better understand the "noise budget sources" of the various types of quantum bits. In the meantime, industry players such as IBM are confident that the line between NISQ and FTQC will blur, especially with the help of various quantum error mitigation techniques.

 

There are also middle roads between NISQ and FTQC. One such path, from Japan's Fujitsu, Osaka University and the RIKEN institute, reduces the number of physical quantum bits needed per logical quantum bit by using corrected, accurate analog phase-rotation gates with a low-overhead correction scheme, rather than building logical quantum bits from expensive fault-tolerant combinations of H and T gates. This yields a useful "early FTQC" setup: only 10,000 physical quantum bits are needed to support 64 logical quantum bits.

 
The path from NISQ to FTQC is uncertain. One long NISQ path goes through quantum bits of very high fidelity; another path goes through FTQC logical quantum bits built from lower-quality quantum bits. In both cases, the requirement is the ability to control the entanglement of a very large number of quantum objects
 
The NISQ and FTQC paths differ slightly: very high-quality quantum bits are needed on the NISQ path, while lower-quality quantum bits can be tolerated on the FTQC path

 

Looking to the future of quantum computers: cautious optimism

 
 

In summary, at this stage of development, the drawbacks of NISQ are manifold:

 

1) It is difficult to implement practical NISQ algorithms on existing hardware; this remains a fairly long-term goal in most hardware vendors' roadmaps.

 

2) There are conflicting requirements between quantum bit counts, fidelities and algorithm depth on existing and even future hardware.

 

3) The designers of NISQ algorithms have not studied or documented well how the hardware resource requirements of the QPU and of its classical parts scale when trying to reach some form of quantum advantage. This is a particularly difficult task for heuristic-based algorithms.

 

4) Most QAOA and VQE algorithms do not exploit existing and near-term hardware well enough to reach quantum-advantage levels, especially when looking at the details of their measurement steps, which require at least a polynomial number of shots (see the sketch after this list).

 

In addition, NISQ quantum advantage is highly use-case and algorithm dependent and not generalizable; in many cases, such as many-body simulations using the VQE algorithm, the currently estimated computation times are extremely long, even exceeding a human lifetime. Several recent theoretical bounds also prevent NISQ from scaling within a quantum-advantage regime.

 

5) Most existing practical implementations of noisy gate-based NISQ algorithms can easily be simulated on classical hardware: using tensor-network techniques, most shallow gate-based quantum algorithms can be simulated efficiently on classical computers.

 

6) Many useful quantum algorithms require FTQC hardware with millions or even billions of physical quantum bits.

 

7) Currently, NISQ hardware vendors tend to exaggerate the capabilities of their systems and fuel unreasonable hype, largely because they are raising capital and want to appeal to potential investors while chasing customers and short-term revenue opportunities. Most hardware startups remain, in effect, private research labs with low TRLs (technology readiness levels).
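
To illustrate the measurement-cost point in item 4 above, here is a hedged sketch of the standard shot-count estimate for a single VQE energy evaluation; the number of Pauli terms, their weights and the target precision are assumptions chosen for illustration:

```python
# Rough shot-count estimate for one VQE energy evaluation (illustrative assumptions).
# With H = sum_i c_i * P_i measured term by term, a standard upper bound is
#   shots ~ (sum_i |c_i|)**2 / epsilon**2   for a target energy precision epsilon.
def vqe_shots(pauli_coefficients, epsilon: float) -> float:
    weight = sum(abs(c) for c in pauli_coefficients)
    return (weight / epsilon) ** 2

coeffs = [0.5] * 2_000        # assumed: 2,000 Pauli terms with weight 0.5 each
chem_accuracy = 1.6e-3        # ~1 kcal/mol in hartree, the usual quantum chemistry target

print(f"~{vqe_shots(coeffs, chem_accuracy):.1e} shots per energy evaluation")
# ~3.9e11 shots -- and a VQE run needs thousands of such evaluations in its
# classical optimization loop, which is where the unreasonable runtimes come from.
```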

 

Going the other way and taking a longer term view, there are some potential advantages to making NISQ a reality, although they all deserve further scrutiny:

 

1) Short-term quantum hardware may be able to meet NISQ-regime requirements in terms of quantum bit count and even fidelity, starting with IBM's Heron processors.

 

2) Many new quantum error mitigation techniques still need to be studied and their benefits and overheads quantified. These techniques can extend current and near-term NISQ platforms, but they face their own scalability challenges; until quantum error mitigation reaches those limits, the potential for quantum advantage at small NISQ scales may remain modest.

 

3) Analog quantum computing appears to be a more powerful paradigm for NISQ-era computing, although its scalability is unknown and may be limited by the lack of error correction techniques. The vendor landscape could extend well beyond today's neutral-atom offerings, for example to silicon quantum bits and trapped ions.

 

4) The development of NISQ algorithms indirectly promotes healthy competition between classical and quantum algorithms, which may stimulate advances in both fields.

 

5) NISQ is also a learning pathway toward FTQC. Skipping NISQ and going straight to FTQC may be considered a mistake, since a failure of NISQ could also translate into a failure of FTQC. On the other hand, the NISQ route may be easier to implement than controlling millions of physical quantum bits.

 

The great scalability challenge at the quantum level (entanglement, fidelity) and at the classical level (control costs, cooling) will be the trade-off between quality and quantity.

 
There are multiple scenarios for the emergence of NISQ and FTQC QPUs. In one scenario, FTQC becomes feasible before any useful NISQ system emerges; this comes down to the different quantum bit fidelity thresholds required for FTQC and for a NISQ system that delivers some quantum advantage. However, if the NISQ path produces quantum bits with higher fidelity that can be built at scale, it could in turn enable FTQC QPUs with fewer physical quantum bits per logical quantum bit
 

NISQ systems could bring some quantum advantages, some algorithmic quality advantages and some energy advantages; this is still uncharted territory to be investigated.

 

The tension between these drawbacks and cautious optimism is not only a "debate" about NISQ, but also a feature of the emerging field where the line between basic research and vendor technology development and commercialization is blurred.

 

This paper demonstrates the wide gap between the technological reality of quantum computing and the current over-promising by some analysts and industry vendors. The current narrative around the so-called commercial readiness of quantum computing may backfire and have unintended negative consequences.

 

Quantum computing is a fairly long-term quest that governments, policymakers and investors should be aware of.

 

 

 
