List of applications for quantum algorithms in 2023

 

The intended applications of quantum computers span scientific and industrial fields ranging from quantum chemistry and many-body physics to optimization, finance, and machine learning.

Quantum solutions proposed in these fields typically combine multiple quantum algorithmic primitives into an overall quantum algorithm, which must then be combined with quantum error-correction and fault-tolerance methods in order to be properly implemented on quantum hardware. As a result, it is difficult to assess how much a particular application can benefit from quantum computing, as the various approaches tend to be very sensitive to the complex technical details of the underlying primitives and their complexity.

 

In a recent article published on the arXiv, scientists from Amazon and Harvard University, together with colleagues from Imperial College London in the UK and from institutions in Germany and Denmark, investigated several potential application areas for quantum algorithms and their underlying algorithmic primitives, carefully considering the technical caveats and subtleties.

 

The scientists outlined the challenges and opportunities in each area in an "end-to-end" manner, clearly defining the problem to be solved and the input-output model, instantiating all the "oracles", and spelling out all the hidden costs. Finally, the team also compared each quantum solution with state-of-the-art classical methods and with complexity-theoretic constraints in order to assess possible quantum speedups.

 

Photon Box has compiled the first half of the algorithmic applications in a previous post (see "A list of quantum algorithmic applications in 2023 (Part 1)"), and we will continue to introduce other useful algorithmic applications here.

 


V. Quantum Algorithms to Speed Up Combinatorial Optimization Problems
5.1. The Grover Search Algorithm
5.2. Beyond quadratic speedup in exact combinatorial optimization
VI. Quantum Algorithms for Accelerating Continuous Optimization Problems
6.1. Zero-Sum Games: Computing Nash Equilibria
6.2. Conic programming: solving LP, SOCP and SDP
6.3. General convex optimization
6.4. Non-convex optimization: escaping saddle points and finding local minima
VII. What is the best algorithm for quantum cryptanalysis?
7.1. Cracking cryptosystems
7.2. Weakening cryptosystems
VIII. Quantum techniques for solving differential equations
IX. Quantum Acceleration for Financial Services Use Cases
9.1. Portfolio Optimization
9.2. Monte Carlo methods: option pricing
X. Interactions between Quantum Computing and Machine Learning
10.1. Quantum machine learning via quantum linear algebra
10.2. Quantum machine learning via energy-based models
10.3. Tensor PCA
10.4. Topological data analysis
10.5. Quantum neural networks and quantum kernel methods

 

 

Quantum algorithms speed up combinatorial optimization problems
 

A combinatorial optimization problem is the task of finding an optimal solution among a limited number of candidate solutions. In industrial settings, combinatorial optimization arises in scheduling, routing, resource allocation, supply chain management, and other logistics problems where it is difficult to find optimal solutions that satisfy various desired constraints.

 

Operations research came to prominence after its application to the logistics problems faced by the military in World War II, where combinatorial optimization (as well as continuous optimization) methods were applied to these problem domains in order to improve decision making and efficiency in real-world problems.

 

Combinatorial optimization problems are also central to classical theoretical computer science, where they are used to describe and classify complexity classes. Typical combinatorial optimization problems have only a limited amount of structure available to exploit, and thus quantum computation usually provides only polynomial speedups. In fact, it came as a surprise in the early days of quantum computing research that, via Grover's search algorithm, quantum computers provide up to quadratic speedups for a wide variety of such problems. Subsequently, a great deal of effort has been invested in understanding how Grover search and its generalization, amplitude amplification, provide speedups for a wide variety of combinatorial optimization problems.

 

In this section, we present several different approaches to solving combinatorial optimization problems. First, we examine the relationship between combinatorial optimization and search problems, for which Grover's algorithm or its generalizations can achieve quadratic or sub-quadratic speedups; then, we present several proposals that have the potential to go beyond the quadratic speedup of Grover's algorithm: variational algorithms (used here as exact rather than approximate solvers), adiabatic algorithms, and the "short-path" algorithm. We discuss the (limited but available) evidence that these methods may yield significant advantages, as well as the associated caveats.

 

We do not specifically discuss the large number of quantum methods for approximate combinatorial optimization (usually variational quantum algorithms or quantum annealing). These algorithms typically transform the optimization problem into an energy minimization problem for a spin system, where the Hamiltonian energy of the spin system encodes the classical objective function. They apply a number of physical heuristics to efficiently generate low-energy solutions with the hope of obtaining better objective values than classical methods in comparable time. An advantage of these methods is that they are usually more compatible with NISQ hardware.

 

While approximate optimization remains an interesting direction, these quantum algorithms are heuristic and there is a general lack of concrete evidence that they provide practical advantages.

 

1) Grover's search algorithm

 

The Grover search algorithm and its generalizations, such as amplitude amplification, are an important source of quantum speedups. A direct application of Grover search to optimization is quantum minimum finding, which locates the minimum of a function over a given set of elements with a quadratic speedup.

 

Since the search is a very general primitive algorithm, Grover's algorithm has a wide range of applications and can speed up many subroutines, especially in combinatorial optimization algorithms. A large number of such applications were found in the early days of quantum computing, and the list continues to grow.

 

There have been several studies on resource estimation for Grover-type quadratic (and sub-quadratic) speedups. Unfortunately, these recent studies suggest that unless the huge overhead of fault-tolerant quantum computing schemes can be greatly reduced, quadratic or smaller speedups alone are unlikely to be useful even in the medium term. For example, it has been noted in the literature that "even considering only examples of problems that can be solved in a single day, we find that the quantum speedup may be substantial... However, the number of physical quantum bits used is extremely large... In particular, the quantum advantage disappears if one counts the cost of the classical processing power required to decode the surface code using current techniques". According to the latter paper, it is estimated that gaining a quantum advantage through a quadratic speedup would take at least a month of computation time if each iteration involves even a single floating-point operation.

 

The cases of cubic and quartic speedups look more promising, but unfortunately, such improvements seem to require techniques beyond Grover's search.

 
Lov Grover and Grover's algorithm
 

Grover originally described his result as "a fast quantum mechanical algorithm for database search". If we are working in the circuit model of quantum computing, then strictly speaking, Grover's search slows down database search, because each Grover iteration needs to "touch" every element in the database. If we need to "touch" all N elements of the database anyway, the best we can do is simply traverse each element in linear time O(N).

 

In this case, each Grover iteration must touch all N database elements, and roughly √N iterations are needed, so Grover's search circuit has a gate complexity on the order of

Θ(√N · N) = Θ(N^(3/2)),

which is clearly worse than sequentially traversing the entire dataset.

 

In the database scenario, we can recover the quadratic speedup only if we assume that we can use quantum random access memory (QRAM) with constant (or logarithmic) cost per database query.

Similar assumptions are usually made in classical computer science for ordinary RAM, simply because RAM calls are cheap in practice. However, since a RAM call must be able to touch every bit of the database, the gate cost of a RAM call is at least N from the point of view of circuit complexity. On the other hand, this task parallelizes well, so that, with the right hardware, it is reasonable for the (time) complexity of a RAM call to be log(N). In particular, a similar argument may not hold for QRAM if error correction is required, especially if the entire QRAM circuit must be implemented fault-tolerantly.

 

However, Grover's algorithm achieves a quadratic speedup without additional hardware assumptions when the list elements we are searching over can be easily generated and checked "on the fly". For example, in the case of SAT, we are searching over 2^n possible truth-value assignments, but we can easily check whether an individual assignment is satisfying by simply substituting it into the formula and evaluating the resulting Boolean expression.
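As a concrete illustration of checking candidates "on the fly", the sketch below (in Python, with a small hypothetical formula) shows the kind of predicate a Grover oracle would encode. The classical brute-force loop makes roughly 2^n checks in the worst case, whereas Grover's algorithm would query the same predicate roughly √(2^n) times.

```python
from itertools import product

def satisfies(clauses, assignment):
    """Evaluate a CNF formula on one assignment.
    clauses: list of clauses, each a list of signed ints (3 means x3, -3 means NOT x3).
    assignment: dict mapping variable index -> bool."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

def brute_force_sat(clauses, n):
    """Classical exhaustive search over all 2^n assignments; Grover's algorithm
    would call the same satisfies() check roughly sqrt(2^n) times."""
    for bits in product([False, True], repeat=n):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if satisfies(clauses, assignment):
            return assignment
    return None

# Hypothetical example: (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
print(brute_force_sat(clauses, n=3))
```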

 

We discuss how Grover's search can bring about a quadratic speedup for SAT and how amplitude amplification can bring about a quadratic speedup for 3-SAT. Grover's algorithm can also be used as a subroutine in a number of other combinatorial optimization problems, such as those related to graphs. As mentioned above, fault-tolerant resource estimates are unfavorable in any case.

 

An oft-cited graph problem related to quantum computing is the (famous) traveling salesman problem. However, for this problem the provable speedup is only subquadratic. The naive classical brute-force approach runs in time scaling like n!, and Grover's algorithm is quadratically faster than that. Unfortunately, the best classical algorithm uses dynamic programming and has a running time scaling like 2^n, which is far smaller than √(n!) for large n.
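A quick back-of-the-envelope comparison of the scalings quoted above (a sketch using only those asymptotic counts, ignoring constant factors) shows why a quadratic speedup over brute force does not help here:

```python
import math

# Grover over brute-force enumeration of tours needs ~sqrt(n!) oracle calls,
# while the classical dynamic-programming algorithm scales like 2^n.
for n in (10, 20, 30):
    grover_over_brute_force = math.sqrt(math.factorial(n))
    dynamic_programming = 2.0 ** n
    print(f"n={n:2d}  sqrt(n!) ~ {grover_over_brute_force:.2e}   2^n ~ {dynamic_programming:.2e}")
```

Already at n = 20 cities, √(n!) is about 1.6 × 10^9 while 2^n is about 10^6, so the "accelerated" brute force loses badly to the classical dynamic program.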

 

Given the overhead associated with implementing QRAM and fault tolerance, the traveling salesman problem seems to be one of the least likely problems to achieve practical quantum speedup.

 

Finally, let us mention quantum walk algorithms, which can also be seen as a generalization of Grover's search. However, quantum walks are a "distant relative" of Grover's search and apply only in more specific contexts. Quantum walks can be used to prove many non-trivial speedups in query complexity, but the resulting algorithms are often impractical due to large space and/or gate complexity overheads.

 

However, there are more practical quantum walk algorithms that can be used, for example, to accelerate backtracking, which is one of the most successful and widely used classical heuristics for solving SAT instances in practice. The quantum algorithm essentially provides a quadratic speedup over the classical backtracking algorithm; this approach has also been applied to the special case of the traveling salesman problem on graphs of degree at most 4.

 

A further extension of the algorithm applies to the branch-and-bound method, which in some cases has a running time considerably better than what we know can be achieved by using Grover's algorithm. The speedup based on the branch-and-bound method can also be used to solve mixed-integer programs, including certain formulations of portfolio optimization problems.

 

There are many other applications of quantum search speedups, ranging from machine learning to dynamic programming solutions to other NP-hard problems.

 
2) Beyond quadratic speedups in exact combinatorial optimization
 

The discovery of the Grover algorithm (later generalized to amplitude amplification) has long been a source of enthusiasm for the advantages of quantum algorithms in combinatorial optimization, as it brings quadratic asymptotic speedups to many of the specific end-to-end search problems in this area. However, resource estimates suggest that, due to the inherent overhead of quantum computing compared to classical computing, early- and mid-term fault-tolerant devices will not provide a practical advantage when the available speedup is only quadratic.  

 

Thus, determining whether there is a speedup beyond quadratic is critical in determining the actual end-to-end advantage in combinatorial optimization. Although Grover's algorithm is optimal in a black-box (unstructured) environment, if the combinatorial optimization problem has some structure that quantum algorithms can exploit better than classical algorithms, then it is possible to achieve beyond-quadratic speedups.

 

Unfortunately, many of the schemes that could achieve super-quadratic speedups lack rigorous theoretical performance guarantees. These include quantum adiabatic algorithms and variational quantum algorithms such as the Quantum Approximate Optimization Algorithm (QAOA), which is typically used to produce approximate solutions but can also be run, at greater cost, to find exact solutions. Limited analytical and numerical work provides some evidence that QAOA may outperform a hypothetical application of Grover's algorithm on the k-SAT problem, but no definitive conclusions have been drawn. There is also literature that examines a different algorithm (related in some ways to the quantum adiabatic algorithm) and provides rigorous runtime guarantees that slightly outperform Grover's algorithm.

 

However, while these algorithms may be faster than Grover's algorithm, this does not imply a super-quadratic speedup over the best classical algorithms, which can often do better than exhaustive search by exploiting structure in other ways. In short, it remains an open question whether quantum algorithms can provide super-quadratic speedups for useful problems in exact combinatorial optimization.

 

There are several caveats. One of the most salient is that for most of these algorithms there is no provable advantage over Grover at all, and where a provable advantage does exist, its size is tiny. As a result, the prospects for these algorithms rest only on numerical simulations performed at very small instance sizes and on speculation based on physical principles.

 

A second important caveat is that to achieve practical super-quadratic speedups, the performance of the quantum algorithms needs to be compared with the best classical algorithms, which tend to be vastly superior to the O*(2^n) running time of exhaustive search.

 

A third caveat along these lines is the existence of classical "quantum Monte Carlo" algorithms, which under certain conditions can classically simulate the quantum algorithms described above.

Stoquasticity here means that the ground state of the Hamiltonian can be written so that all amplitudes are non-negative real numbers; such Hamiltonians avoid the so-called "sign problem", making quantum Monte Carlo techniques potentially applicable. To be clear, for combinatorial optimization problems involving stoquastic Hamiltonians it is still possible for quantum algorithms to evade classical simulation: in fact, a superpolynomial oracle separation between classical computation and adiabatic quantum computation restricted to stoquastic paths has been demonstrated. Nevertheless, this is a point to keep in mind when designing algorithms based on stoquastic Hamiltonians.

 

One final note is that the quantum algorithms described here are generally not amenable to parallelization, although QAOA can in principle be parallelized if one chooses not to use amplitude amplification (which leads to worse asymptotic complexity). This contrasts with many classical algorithms used for exact combinatorial optimization: the latter are highly parallelizable, and exploiting this feature can significantly shorten their running time on high-performance computers, making it more difficult to achieve a practical quantum advantage.

 

Since these quantum algorithms typically do not have rigorous runtime guarantees, it is not possible to quantify their speedups precisely. However, it is worth emphasizing that for hard combinatorial optimization problems, the speedup may be super-quadratic but is not expected to be superpolynomial.

 

The QAOA approach is also suitable for NISQ implementations (assuming that amplitude amplification is not applied on them) because of the shallow depth of the quantum circuits that need to be implemented. In this case, uncorrected errors in the NISQ device may degrade the performance (more repetitions are required to extract the optimal bit string z*). Similarly, on the NISQ quantum annealer, a noisy version of the quantum adiabatic algorithm can be run and repeated until the optimal bit string z* is found.

 

For quantum computers to have an impact on exact combinatorial optimization, one of two things must happen: 1) huge advances in the underlying clock speeds of quantum hardware and in the overhead of fault-tolerant quantum computation; or 2) the development of quantum algorithms that greatly exceed the quadratic speedup provided by Grover's algorithm.

 

On the one hand, ideas have been proposed that could potentially enable such speedups, but on the other hand, in all cases there are no provable guarantees, or very small provable guarantees. If we want to utilize quantum algorithms to bring practical advantages, we must invest more effort in studying these quantum algorithms and developing new ones, especially given the large amount of work involved in developing complex classical algorithms for these problems.

 
Quantum Algorithms to Accelerate Continuous Optimization Problems
 

Continuous optimization problems arise in science and industry. On the surface, continuous optimization problems appear to involve little quantum mechanics; however, quantum algorithms have been proposed to accelerate convex and nonconvex continuous optimization. Much of the research on these algorithms to date has been aimed at developing and exploiting the various primitive ingredients that yield potential quantum advantages in this area, without an eye toward the end-to-end practicality of the algorithms.

 

A better understanding of the utility of these methods should be the focus of future work.

 
1) Zero-sum games: computing Nash equilibria
 

In a two-player zero-sum game, each player independently chooses a strategy and receives a "payoff" (the sum of the payoffs is always zero) that depends on which pair of strategies is chosen.  

A Nash equilibrium is a probabilistically optimal way of choosing a strategy that maximizes a player's worst-case payoff. The problem of computing a Nash equilibrium is in some sense equivalent to solving a linear program (LP): computing a Nash equilibrium is a special case of an LP, and conversely, any LP can be reduced to computing a Nash equilibrium at the cost of introducing a "scale-invariant" precision parameter for the particular instance. However, in the special case of computing Nash equilibria, the quantum LP solver based on the multiplicative weights update method is more efficient and has fewer caveats; it may be up to quadratically faster than classical methods.

A two-player zero-sum game is defined by an n × m matrix A (called the "payoff matrix") that specifies how much player 1 wins from player 2 when player 1 uses (pure) strategy i and player 2 uses (pure) strategy j. A pure strategy is one in which a player uses a fixed strategy in every game. In contrast, a mixed strategy is one in which a player randomly chooses a pure strategy according to some probability distribution. A Nash equilibrium is an optimal strategy (usually a mixed strategy) that maximizes a player's worst-case payoff, independent of the choices made by the other player.
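To make the equivalence with linear programming concrete, here is a minimal classical sketch (assuming SciPy is available; the function name and example payoff matrix are illustrative) that computes the row player's Nash-equilibrium mixed strategy by solving the standard max-min LP. The quantum algorithms discussed here aim to speed up exactly this kind of computation.

```python
import numpy as np
from scipy.optimize import linprog

def nash_equilibrium_row_player(A):
    """Max-min mixed strategy for the row player of a zero-sum game.
    A[i, j] = payoff to player 1 when player 1 plays i and player 2 plays j.
    Solves: maximize v  subject to  A^T x >= v * 1,  sum(x) = 1,  x >= 0."""
    n, m = A.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                                  # linprog minimizes, so minimize -v
    A_ub = np.hstack([-A.T, np.ones((m, 1))])     # v - (A^T x)_j <= 0 for every column j
    b_ub = np.zeros(m)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])   # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]     # x >= 0, v unconstrained
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]

# Matching pennies: the equilibrium mixes uniformly and the game value is 0.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
strategy, value = nash_equilibrium_row_player(A)
print(strategy, value)   # ~[0.5, 0.5], ~0.0
```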

 
 

No fault-tolerant resource estimates exist for this algorithm. Compared with classical methods, the quantum algorithm offers a quadratic improvement in the parameter n + m but a polynomial slowdown in the precision parameter ϵ.

 

It is difficult to assess whether a practical advantage can be gained in a zero-sum game without further research into how queries to the matrix elements are accomplished, evaluating the constant factors involved in the algorithm, and considering the additional overhead associated with fault-tolerant quantum computation. Theoretical speedups are quadratic and may require medium- or large-sized QRAMs; in practice, such speedups may not be sufficient to overcome these overheads.

 

It may be instructive to compare the prospects of zero-sum games with those of general conic programming. On the one hand, unlike general SDP and LP algorithms, the complexity of the zero-sum game algorithm does not depend on instance-specific parameters that represent the size of the primal and dual solutions. This makes its running time easier to evaluate and more likely to translate into an efficient algorithm. On the other hand, the core subroutine of the quantum algorithm is classical Gibbs sampling, performed quadratically faster than on a classical computer using techniques such as amplitude amplification; it is not clear how the speedup could be made more than quadratic, even in special cases. A similar subroutine is required in the multiplicative weights approach to SDPs, but in that case the Gibbs states to be sampled are genuinely quantum states (i.e., non-diagonal in the computational basis) rather than classical states.

Using more advanced Gibbs sampling methods, it is possible to achieve super-quadratic quantum speedups for SDPs in some special cases, which is not the case for simpler settings such as LPs and zero-sum games.


2) Conic programming: solving LP, SOCP and SDP
 

Conic programming is a specific subclass of convex optimization problems in which the objective function is linear and the convex constraints restrict the solution to the intersection of an affine space with certain cones in R^n. Common cones are the nonnegative orthant, second-order cones ("ice cream cones"), and the cone of positive semidefinite matrices, which give rise to linear programs (LP), second-order cone programs (SOCP), and semidefinite programs (SDP), respectively.

 

This framework remains quite general and many problems in the real world can be reduced to conic programs. However, the additional structure of the programs allows for more efficient classical and quantum algorithms than for fully general convex problems.

 

LP, SOCP and SDP algorithms have been the subject of research. Currently, the best classical algorithms are based on the interior point method (IPM), but there are other algorithms based on the method of multiplicative weights update (MWU), which may be superior in cases where high precision is not required. Both methods can be translated into quantum algorithms, potentially providing asymptotic quantum acceleration for general LP, SOCP and SDP. However, the running time of quantum algorithms usually depends on additional instance-specific parameters, making general comparisons with classical algorithms difficult.

 
Linear programming (LP) is the simplest form of convex programming; an LP instance is specified by an m × n matrix A, an n-dimensional vector c, and an m-dimensional vector b.

By replacing the n-dimensional vector x in the definition of the LP with an n × n symmetric matrix X, and replacing the nonnegativity (orthant) constraint with the conic constraint that X be a positive semidefinite matrix, one obtains a semidefinite program (SDP).
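For reference, one common way of writing these two problem classes is sketched below (the precise convention in the source may differ, e.g. in whether the constraints are equalities or inequalities):

```latex
% Linear program (LP): optimize a linear objective over the nonnegative orthant.
\max_{x \in \mathbb{R}^n} \; c^{T} x
\quad \text{subject to} \quad A x \le b, \quad x \ge 0.

% Semidefinite program (SDP): replace x by a symmetric matrix X constrained to be
% positive semidefinite, with symmetric constraint matrices A_1, \dots, A_m.
\max_{X = X^{T} \in \mathbb{R}^{n \times n}} \; \mathrm{Tr}(C X)
\quad \text{subject to} \quad \mathrm{Tr}(A_j X) \le b_j \;\; (j = 1, \dots, m), \quad X \succeq 0.
```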
 

Neither approach to conic programming has yet been studied at the level of a full fault-tolerant resource estimate, and we are not aware of a logical resource analysis for the MWU approach to conic programming. Such an analysis would be valuable, and it would be desirable to select a specific use case so that the magnitudes of all relevant parameters can be evaluated.

 

For both the IPM and MWU methods, the quantum speedup is at most polynomial: polynomial upper and lower bounds on the scaling with n are known in both the classical and quantum cases. The speedup of the quantum IPM (QIPM) depends on how the condition number κ scales with n, but it does not exceed quadratic. Assuming constant sparsity, the speedup of the MWU method can be quadratic in the scaling with n. If the Gibbs sampling routine is faster in practice than its worst-case upper bound suggests, the speedup could be even greater when Monte Carlo-style methods are used for Gibbs sampling.

 

It is very possible to obtain asymptotic polynomial speedups on the problem scale using MWU methods for solving LPs or SDPs, but such speedups appear to be only quadratic speedups, and the assessment of practicality depends on the scaling of some unspecified instance-specific parameters. Similarly, the QIPM approach can lead to subquadratic speedups, but only if certain assumptions are made about the condition numbers of certain matrices.  These quadratic and subquadratic speedups alone may be unlikely to result in practical speedups after accounting for error correction overhead and slower quantum clock speeds. 

 

Future work should aim to find more asymptotic speedups while focusing on specific practically relevant use cases that allow for the evaluation of unspecified parameters.

 
3) General convex optimization
 

Convex problems require the optimization of a convex function f over a convex set K, where K is a subset of R^n. Here, we study the case where both the value of f(x) and the membership of x in the set K can be computed efficiently by classical methods. This situation is quite different from the case of solving conic programs, where f is linear and K is the intersection of a convex cone and an affine space, and exploiting these features can lead to more efficient classical and quantum algorithms.

 

Resource estimates for such algorithms are not yet available. Without a more specific program, it may not make sense to perform such an estimate, as the results will depend heavily on the complexity of the execution circuitry.

 

The only analyses of this strategy are theoretical, focusing on the query complexity of the problem rather than on any specific applications it may have. As a result, the analysis is not refined enough to determine the effect of constant factors or logarithmic factors. While there may be a quadratic speedup in query complexity, the maximum speedup in gate complexity is less than quadratic. In addition, there is a lack of specific problems that fit the paradigm of "structureless" quantum convex optimization.

 

Together, these factors make it unlikely that a practical quantum advantage can be found in this case.

 
4) Nonconvex optimization: escaping saddle points and finding local minima
 

Finding global minima for nonconvex optimization problems is challenging because local algorithms can fall into local minima. 

 

Often, there are many local minima, with large energy barriers between them. Therefore, one may choose to find local minima instead of global minima: local minima can often still be effectively utilized in settings such as training machine learning models. An effective method for finding local minima is gradient descent, but gradient descent may get stuck near saddle points. Therefore, to find local minima efficiently, one needs a way to escape saddle points.
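The classical escape trick that the quantum proposals aim to improve on can be sketched in a few lines: whenever the gradient is nearly zero (a stationary point that might be a saddle), add a small random perturbation and continue descending. The test function, step sizes, and noise level below are illustrative choices, not taken from the source.

```python
import numpy as np

def perturbed_gradient_descent(grad, x0, eta=0.05, noise=1e-2,
                               tol=1e-4, steps=10_000, seed=0):
    """Gradient descent that adds a small random kick whenever the gradient is
    tiny, so the iterate does not stall at a saddle point. Purely classical
    illustration of the saddle-point issue discussed in the text."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            x = x + noise * rng.standard_normal(x.shape)   # escape attempt
        else:
            x = x - eta * g                                # ordinary descent step
    return x

# f(x, y) = x^4/4 - x^2/2 + y^2/2 has a saddle at (0, 0) and minima at (+-1, 0).
grad = lambda p: np.array([p[0]**3 - p[0], p[1]])
print(perturbed_gradient_descent(grad, [0.0, 0.0]))   # ends up near one of the minima
```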

 

Limited research in this area suggests that there may be a polynomial quantum speedup in the dimension dependence of the cost of finding local minima, using Hamiltonian simulation and quantum gradient estimation subroutines.

 

Relatively little attention has been paid to this problem, and no resource estimates have been made.

 

It is also unclear whether the algorithm for finding local minima will lead to practical speedups, since it depends heavily on the availability of efficient classical procedures for implementing gradient queries; quantum speedups are only possible if such queries are difficult to implement classically. Nevertheless, the algorithm represents a useful end-to-end problem to which the quantum gradient estimation primitive can be applied. 

 

Notably, the quantum algorithm also employs Hamiltonian simulation, a primitive not used by most other continuous optimization approaches. Unlike classical gradient descent, it can use quantum tunneling to avoid getting trapped in local minima; thus, it has the potential to find global minima of non-convex functions. Whether quantum methods based on Hamiltonian simulation can yield advantages for specific end-to-end problems in nonconvex optimization is an interesting direction for future work.

 
What is the best algorithm for quantum cryptanalysis?
 

Cryptography ensures the security of computation and communication. For example, users' data and the information they send or receive can be kept secret in case a malicious interloper tries to access sensitive information.

 

A set of algorithms collectively known as a cryptosystem provides this security. Attempts to break that security are known as cryptanalysis, which has its own set of algorithms. Historically, both cryptography and cryptanalysis have considered classical polynomial-time algorithms to be the only realistic ones. The advent of quantum computing forces us to also consider attacks via quantum algorithms. In general, we want to know the best algorithm for cryptanalysis in order to understand the impact on cryptosystems in the worst case. A quantum attack could invalidate the security of a set of widely used cryptosystems (see the section on cracking cryptosystems).

 

More generally, quantum cryptanalysis can reduce the security of cryptosystems (section on weakening cryptosystems), thus making it more expensive to implement cryptosystems in a secure way.

 
1) Cracking cryptosystems
 

A cryptosystem is considered secure if a particular mathematical problem is assumed to be so difficult to solve that an adversary cannot learn any nontrivial information about the encrypted content. The earliest such cryptosystems used specific problems in number theory, variants of which are still widely used today. These cryptosystems fall under the category of public-key cryptography, where any user can perform tasks such as encryption, unlike symmetric cryptography, where users must share keys beforehand.

 

Quantum computers use quantum algorithms to solve computational problems, and in some cases they are faster than the best known classical techniques. When quantum computers are applied to the computational tasks underlying cryptosystems, the dramatic speedup of quantum algorithms over classical methods can break cryptosystems, because adversaries can efficiently learn encrypted messages with non-negligible probability. One of the first discovered and best-known applications of quantum computing is Shor's algorithm, which breaks common public-key schemes based on number theory, including those built on factoring, discrete logarithms, and elliptic curves. Applications of these public-key cryptosystems include encryption to hide the content of a message, signatures to prevent tampering and impersonation, and key exchange to generate keys for symmetric ciphers.

 

In this section, we will focus on two of the most widely used cryptosystems: Rivest-Shamir-Adleman (RSA) and elliptic curve cryptography.

 
The RSA cryptosystem relies on the user choosing a large number N that is the product of two primes; arithmetic is done modulo N. The user also publishes an encryption exponent e, and a message m is encrypted as m^e mod N. By construction, using tricks from number theory, there exists a decryption exponent d such that (m^e)^d mod N = m mod N. However, if an adversary can factor N, they can also compute d and thus decrypt the message. The security of this cryptosystem therefore comes from the difficulty of factoring large numbers, i.e., finding the two primes whose product is N.
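The arithmetic behind this can be seen in a toy example with tiny primes (purely illustrative; real RSA keys are at least 2048 bits, and the variable names below are not from the source):

```python
from math import gcd

p, q = 61, 53
N = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # kept secret by the key owner
e = 17                         # public encryption exponent, gcd(e, phi) must be 1
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # private decryption exponent: e*d = 1 (mod phi)

m = 42                         # message encoded as an integer smaller than N
c = pow(m, e, N)               # encryption: c = m^e mod N
assert pow(c, d, N) == m       # decryption: (m^e)^d mod N = m

# An attacker who can factor N recovers phi, hence d -- this factoring step is
# what Shor's algorithm performs efficiently on a large fault-tolerant machine.
```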
 

Similar cryptosystems are based on elliptic curves, which have the advantage that the best known classical attacks against them are less effective than those against RSA; they therefore offer more bits of security relative to the key size (the number of bits of security quantifies the number of attempts required to learn the encrypted message; see the section on weakening cryptosystems for details). As a result, fewer resources (e.g., communication, encryption, and decryption costs) are required to implement elliptic curve cryptography.

 

The minimum recommended key size for RSA is 2048 bits. Circuit optimizations have driven resource estimates down, while the inclusion of hardware limitations has made them more realistic. For a key size of 2048 bits, assuming nearest-neighbor connectivity, about 14,000 logical qubits and 3 × 10^9 Toffoli gates are required.

 

For elliptic curve encryption, the recommended minimum key size is 256 bits for 128-bit security (a key size of 3072 bits is required to achieve the same level of security using RSA). It is estimated that breaking 256-bit elliptic curve encryption requires three times fewer logical qubits and 100 times fewer Toffoli gates than breaking 3072-bit RSA.

 

While popular public-key cryptosystems based on number-theoretic problems are not secure against quantum computers, alternatives exist that are believed to be quantum-resistant: for example, schemes based on error-correcting codes or lattices. These alternative computational problems are considered difficult for both classical and quantum computers. The National Institute of Standards and Technology (NIST) plans to provide relevant standards by 2024 to facilitate implementation. Symmetric encryption involves computations with little exploitable structure and is not broken outright by quantum computers; instead, its effective number of bits of security is reduced.

 

Previous experimental demonstrations of Shor's algorithm have used knowledge of the answer to optimize the circuit down to a size that is experimentally feasible on a device without error correction. Meaningful demonstrations should avoid such shortcuts, which do not exist in realistic cryptographic scenarios.

 

The large circuit depth, computational complexity, and large number of qubits required to implement Shor's algorithm make a faithful NISQ implementation challenging. However, there have been some attempts to simplify the implementation at the expense of losing the guarantees of Shor's algorithm, in the hope that the output will still be correct with some non-zero, but possibly vanishing, probability.

 

One approach is to replace a number of operations with approximate versions. Implementations of this approximate algorithm, including experiments, have made it possible to factor somewhat larger problem instances. This approximate version is not NISQ in the usual sense of noisy circuits; rather, it introduces some uncontrolled approximation error in exchange for the possibility of reducing the depth while still obtaining useful results. Another approach is to encode the factoring problem into a variational optimization circuit. Again, the performance of this approach is not guaranteed; moreover, variational optimization applied to generic problems is expected to give at most a quadratic improvement over classical methods, leaving little hope of cracking cryptography.

 

Classical simulations of small problems have shown that the algorithm can succeed, as have experimental implementations on superconducting quantum processors. In general, however, there is no evidence or argument that these NISQ methods can be scaled to system sizes relevant to cryptography.

 

The existence of Shor's algorithm means that common RSA and elliptic curve schemes are theoretically insecure, and resource estimates have made clear what scale of quantum hardware would crack them. While no such hardware currently exists, progress in such devices can be used to gauge how quickly the transition to quantum-resistant encryption needs to happen.

 

Currently, from a hardware perspective, the field of quantum computing is far from being able to run algorithms capable of breaking encryption schemes actually in use. The estimates above suggest that the resources required would be millions of physical quantum bits executing billions of Toffoli gates over days of runtime. In contrast, current state-of-the-art quantum computers operate on the order of one hundred noisy physical quantum bits and have only recently made progress toward demonstrating a single logical quantum bit.

 

Thus, there is a huge gap between state-of-the-art hardware and the requirements for cracking cryptosystems. In addition, a linear increase in key size increases the number of Toffoli gates polynomially, e.g., as the third power, which can be very sizable. Given the experimental challenges, it is therefore likely that only the most sensitive data will be at risk at first, rather than ordinary transactions. These highly confidential communications may be the first to be protected by post-quantum encryption to avoid being cracked. However, insecure protocols tend to linger in practice, so quantum computers could exploit any unresolved vulnerabilities in deployed systems. For example, RSA keys of size 768 bits have been found in commercial devices (note that keys of this size can already be cracked classically). In addition, intercepted messages encrypted with RSA or elliptic curve cryptography can be stored now and decrypted later once large-scale quantum computers are available.

 

The resilience of post-quantum cryptographic candidates is being actively investigated. In particular, specialized quantum attacks can reduce the number of secure bits and weaken cryptosystems.  Classical attacks have even compromised some cryptosystems. It is important to note that these attacks can affect the feasibility of particular schemes, but other post-quantum candidates without known weaknesses also exist.

 

One sensitive area that deserves further discussion is cryptocurrencies, since most of them rely on number-theoretic public-key cryptography of the kind that quantum computers would break. In addition, changing the cryptographic protocols of a cryptocurrency requires consensus among a majority of users, which is difficult to coordinate even after the technical hurdles of adopting post-quantum encryption have been overcome.

A cryptocurrency wallet can be cracked using Shor's algorithm if its public key has been revealed (e.g., by reusing a public key previously assigned to that wallet for a transaction). Attacks are also possible within the short window of time during which the public key is exposed in a single transaction.

 

Different cryptocurrencies have different levels of sensitivity to such attacks; nonetheless, cryptocurrency mining is not broken by quantum computers, only weakened.

 
2) Weakening cryptosystems
 

The discovery of Shor's algorithm stimulated interest in post-quantum cryptography, the study of cryptosystems designed to remain secure even assuming the existence of large-scale working quantum computers. While the security arguments for some existing systems remain compelling, those broken by quantum algorithms are being replaced by systems that accomplish the same task but are believed to retain a high level of security against quantum attacks.

 

Even if a cryptosystem is not completely broken, its level of security can be weakened by quantum algorithms. The strength of a cryptosystem is usually quantified by its number of bits of security: n bits of security means that an adversary must effectively guess n bits of information, succeeding with probability 1/2^n per attempt, to access the protected content. Breaking a cryptosystem means that only an efficient number of attempts (i.e., poly(n)) is needed, whereas an attack that merely weakens a cryptosystem still requires 2^m > poly(n) attempts, for some m < n.

 

Symmetric-key cryptography was developed earlier and has fewer features than public-key cryptography. However, symmetric-key cryptography relies less on the assumed hardness of structured mathematical problems, and thus it has only been weakened, not broken, by quantum cryptanalysis, as discussed in more detail below.

 

In symmetric-key cryptography, both communicating parties share the same key K, which is used to encrypt (Enc_K) and decrypt (Dec_K). Typically, everyone, including the adversary, knows the encryption scheme (Enc_K, Dec_K). The adversary's task is then to learn the key given access to r pairs of plaintexts (messages m_i) and corresponding ciphertexts c_i (their encryptions). Such pairs can be acquired, for example, by forcing the transmission of some test message. To be precise, the attack amounts to evaluating the Boolean function

f(K') = 1 if Enc_K'(m_i) = c_i for all i = 1, ..., r, and f(K') = 0 otherwise.

That is, the goal is to find a key that encrypts all of the known messages correctly. A straightforward attack is to test each key by brute force; in practice, sophisticated classical attacks perform no better than this in terms of asymptotic scaling.
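A minimal sketch of this brute-force key-recovery attack is shown below (a toy SHA-256-based "cipher" stands in for AES, and the 16-bit key is purely illustrative); Grover's algorithm would evaluate the same marking function roughly 2^(key_bits/2) times instead of 2^(key_bits) times.

```python
import hashlib

def make_oracle(plaintext: bytes, ciphertext: bytes):
    """Return a function that marks a candidate key iff it maps the known
    plaintext to the observed ciphertext (toy stand-in for a real cipher)."""
    def f(key: bytes) -> bool:
        return hashlib.sha256(key + plaintext).digest()[:len(ciphertext)] == ciphertext
    return f

def brute_force(oracle, key_bits: int):
    """Classical exhaustive key search: ~2^key_bits oracle calls in the worst
    case; Grover would need roughly 2^(key_bits / 2) calls to the same oracle."""
    for k in range(2 ** key_bits):
        key = k.to_bytes((key_bits + 7) // 8, "big")
        if oracle(key):
            return key
    return None

secret = (0x2A).to_bytes(2, "big")
plaintext = b"test message"
ciphertext = hashlib.sha256(secret + plaintext).digest()[:8]
print(brute_force(make_oracle(plaintext, ciphertext), key_bits=16))
```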

 

Since quantum attacks only halve the exponent in the complexity, a simple remedy is to double the key length: for example, by using AES-256 instead of AES-128. This modification increases the cost of implementation (i.e., the encryption and communication resources required), but is usually affordable. In addition, there are cryptosystems with information-theoretic security, which remain secure even against an adversary with unlimited computational power and can therefore withstand quantum attacks.

 

It is also worth noting that to take full advantage of the quadratic speedup, amplitude amplification must be run sequentially; this contrasts with classical brute-force attacks, which can exploit the parallelism of high-performance classical computers, thereby increasing the value of n at which quantum methods gain an advantage over classical ones.

 

A classical algorithmic attack on AES reduces its security by only a few bits. More practical are side-channel attacks that exploit physical byproducts such as energy consumption. For example, when a bit of the key is compared with a bit of another string, flipping a value consumes slightly more energy than leaving it unchanged; by distinguishing the two cases, information about the key can be learned.

 

128 bits of security is the minimum value currently recommended.

 

The key can be encoded as the ground state of a Hamiltonian and then searched for using a variational method. The scaling is expected to be the same as for amplitude amplification. However, since the variational algorithm comes with no guaranteed time complexity, it could in practice be slower or faster. If the fluctuations were large enough, this could in principle pose a challenge to cryptography that relies on worst-case guarantees. However, there is no reason to expect the success probability to improve with key size and thereby compromise security in practice. An alternative approach is to use amplitude amplification adapted to near-term devices, so that a NISQ-optimized version performs better in real-world experiments.

 

Here, we have focused on symmetric-key encryption as an example. Nonetheless, assuming the oracle can be constructed efficiently, the effect of amplitude amplification, halving the effective number of bits of security, generalizes to other computational problems. From a cryptographic point of view, this attack is mild and can be counteracted by doubling the number of bits of security in the scheme. In practice, the increase in key size may cause inconvenience in some applications (e.g., cryptocurrencies), but basic security is not threatened.

 
Quantum techniques for solving differential equations
 

Many applications in engineering and science rely on the solution of differential equations. As a result, this accounts for a large portion of R&D high performance computing (HPC) workloads across a wide range of industries.

 

Not surprisingly, many proposals have been made to speed up differential equation solving on quantum computers. The current consensus is that there is not yet convincing evidence of a practical quantum speedup for industry-relevant differential equation problems.

 

However, the field is advancing rapidly and still has the potential for some surprises. Some of the major application areas that have been considered include:

 

- Computational Fluid Dynamics (CFD). Typically involves the simulation of the Navier-Stokes equations. Major industries that rely on CFD simulations include: automotive, aerospace, civil engineering, wind energy, and defense. While most simulations focus on air or fluid flow over solid objects, the simulation of other processes such as foaming is also important. Large CFD calculations typically reach petaflop levels and run on millions of CPU cores.

 

- Geophysical modeling, involving simulation of the wave equation. Major industries are: oil and gas, hydropower, and geophysics. Large seismic imaging simulations can easily reach petaflop level.

 

- Finite Element Method (FEM) is used to study the structural properties of solid objects. Major industries include: civil engineering, manufacturing (including automotive), aerospace, and defense. Simulation scales are usually slightly smaller than CFD, but large HPC clusters are still required.

 

- Maxwell's equations and heat equations can be applied to chip design and other electronic component design, as well as navigation and radar technology.

 

- Risk modeling involving Stochastic Differential Equation (SDE) simulations is widely used in finance (especially derivatives pricing), insurance and energy markets. The largest risk modeling simulations can easily reach the petaflop level, but are typically more discretized than CFD calculations.

 

- Plasma physics involving the simulation of the Vlasov equation is very common in fusion research.

 

Differential equations can be categorized according to a number of properties: a) ordinary versus partial differential equations, depending on the number of variables with respect to which derivatives are taken; b) stochastic versus deterministic differential equations, depending on whether the unknown function is a random variable; and c) linear versus nonlinear differential equations. We will focus mainly on linear partial differential equations, since they cover the largest class of practical problems, commenting only in passing on stochastic or nonlinear differential equations. In order to solve a differential equation numerically, we need to specify a discretization scheme.

 

There are two main categories: (i) finite difference methods and their many variants, including the finite element method (FEM) and the finite volume method (FVM), combined with a variety of supporting meshes and preconditioning options. In the finite difference framework, continuous space is discretized on a grid and continuous operators are replaced by finite difference operations on adjacent grid points. Alternatively, (ii) the space can be discretized by expanding over a functional basis (Fourier, Hermite, etc.) and solving the discrete problem in that space.

 

The second class of methods is often referred to as spectral methods. A linear differential equation can thus be mapped to a system of linear equations. If one is interested in very high precision and a very fine discretization is required, the system of linear equations may become too large to solve directly on a classical computer. In particular, if high-precision time integration is required, and/or the system contains many continuous variables, then simulations can be challenging in both time and memory.
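As a concrete (and deliberately tiny) example of how a linear PDE turns into a linear system, the sketch below discretizes the 1D Poisson equation -u''(x) = f(x) with second-order finite differences and solves the resulting tridiagonal system classically; an A u = b system of this kind is exactly what a quantum linear system solver would be asked to handle.

```python
import numpy as np

# -u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0, discretized on n interior points.
n = 100
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Central differences give a tridiagonal matrix A with stencil (-1, 2, -1)/h^2.
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
b = np.pi**2 * np.sin(np.pi * x)          # f chosen so the exact solution is sin(pi x)

u = np.linalg.solve(A, b)                 # classical dense solve
print(np.max(np.abs(u - np.sin(np.pi * x))))   # discretization error ~ h^2
```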

 

There have not been many fault-tolerant resource estimates for quantum differential equation solvers to date, although estimates for quantum linear system solvers give an indication of what to expect. Moreover, much of the art in classical PDE solving lies in finding appropriate preconditioning schemes to control the condition number. The literature suggests that one common class of preconditioners remains effective within the framework of quantum algorithms, but it is unclear how far this generalizes.

 
 

An explicit resource estimate for the above end-to-end problem has been given in the literature. It estimates that, to beat the best classical solver at the required computational accuracy of 0.01%, a circuit width of roughly 340 and a circuit depth of roughly 10^25 are needed if oracle costs are not taken into account; if oracle costs are included, the circuit width and depth become roughly 10^8 and 10^29, respectively. These estimates are not very encouraging, but they could be reduced by many orders of magnitude using the latest synthesis and simulation methods.

 

It is expected that the number of Toffoli gates can be reduced by several orders of magnitude using state-of-the-art quantum linear system solvers, possibly to between 10^11 and 10^15 depending on the specific setting.

 

Various proposals have been made for NISQ implementations of PDE solvers. The idea is to start from some discretization of the PDE as a linear system L|ψ⟩ = |b⟩, posit an appropriately chosen variational circuit |ψ(θ)⟩, and then optimize the circuit parameters so that L|ψ(θ)⟩ ≈ |b⟩. This is an example of a variational quantum algorithm. It is difficult to imagine achieving sufficient size and accuracy in the NISQ regime to compete with the best classical solvers.

 

Although the simulation of PDEs is one of the most important large-scale computational tasks and accounts for a significant proportion of HPC workloads in industry, the advantages offered by current quantum solvers remain too limited for the low-dimensional problems (up to roughly four dimensions) that dominate practice. Finding a killer application of quantum algorithms for PDEs (beyond quantum chemistry itself) will require identifying important high-dimensional PDEs that demand very high-precision solutions, involve relatively simple geometries or initial conditions, and cannot currently be solved accurately by any classical method.

 

Quantum solvers may still offer significant savings in memory, but memory is not currently the bottleneck for classical PDE solving. Recent advances suggest that there may still be room for substantial quantum improvements in very special cases, but it is not clear how practical or relevant these cases are.

 
Quantum acceleration for financial services use cases
 

While multiple industries can benefit from quantum computing, the financial services industry has historically been a forerunner in quantum technology, investing significant research and development efforts in the field of quantum finance. The financial industry is notable for the fact that more powerful and accurate simulations can lead to immediate competitive advantages that are difficult to achieve in other industries. In this application area, researchers have worked hard to find quantum acceleration for use cases of interest to financial services. 

 

Many candidate use cases for quantum solutions have been proposed, for example:

 

- Derivatives pricing (e.g., options and collateralized debt obligations (CDOs)). Derivatives are financial instruments that are based on an underlying asset and may depend on the value of that asset in potentially complex ways. In a derivative pricing problem, we need to determine the fair price of a financial instrument, which usually depends on the expected value of the underlying asset at a later date. A similar and related problem is known as calculating the Greeks [3]. A Greek of a financial derivative is a quantity that measures the sensitivity of the derivative's value to various parameters of the problem.

 

For example, a Greek of an option is given by the derivative of the option value with respect to some parameter, such as ∆ := ∂V/∂X, where V is the value of the option and X is the price of the underlying asset.

 

- Credit Valuation Adjustment (CVA). Credit valuation adjustment refers to the problem of determining the fair price of derivatives, portfolios, or other financial instruments that are offered to purchasers on a credit basis, taking into account the purchaser's (potentially poor) credit rating and risk of default. The CVA is typically given by the difference between the value of a risk-free portfolio and the value of a portfolio that takes the likelihood of default into account.

 

- Value at Risk (VaR). Risk analysis comes in many forms, and VaR is a common example. VaR measures the total value of a financial instrument (e.g., a portfolio) that could be lost over a predetermined time horizon within a fixed confidence interval.

 

 For example, the VaR of a portfolio indicates that, with 95% probability, the portfolio will not lose more than Y dollars. Similar techniques apply to the associated credit value-at-risk (CVaR) problem.

- Portfolio Optimization. The goal of portfolio optimization is to determine the optimal allocation of funds across the range of investable assets so as to maximize the return and minimize the risk of the portfolio while complying with other constraints.

 

While there are many more use cases and several methods for generating quantum speedups, broadly speaking, many of the use cases stem from one of two avenues of quantum improvement: quantum enhancement of Monte Carlo methods (for simulating stochastic processes) and constrained optimization.

 

In the first case, the method typically involves encoding a relevant, problem-specific function into a quantum state and then using quantum amplitude estimation to achieve the target accuracy with quadratically fewer samples than classical Monte Carlo methods. In the second case, a financial use case is reduced to a constrained optimization problem and solved using a quantum optimization algorithm.
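For orientation, here is what the classical baseline looks like: a plain Monte Carlo estimate of a European call option price under the Black-Scholes model (all parameter values are illustrative). The statistical error shrinks like 1/√M in the number of samples M; quantum amplitude estimation targets the same expectation value with error scaling like 1/M.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 105.0, 0.01, 0.2, 1.0   # spot, strike, rate, volatility, maturity
M = 1_000_000                                        # number of Monte Carlo samples

Z = rng.standard_normal(M)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)   # terminal asset price
payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)                     # discounted call payoff

price = payoff.mean()
stderr = payoff.std(ddof=1) / np.sqrt(M)             # error decreases as 1/sqrt(M)
print(f"price ~ {price:.3f} +/- {stderr:.3f}")
```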

 

Of the use cases studied in these two areas, option pricing and portfolio optimization tend to be the typical examples of Monte Carlo and constrained optimization problems, respectively, and have the most follow-up work on quantum algorithms associated with them. In addition, these two classes of problems represent a sizable portion of the classical computation used in the financial services industry; although the methods, caveats, and complexities therein can (often) be readily applied to other related use cases.

 

1) Portfolio Optimization

 

The portfolio optimization (PO) problem is the problem of finding, given a set of possible investment assets, the optimal allocation of funds among these assets so as to maximize return and minimize risk. What is commonly referred to as the Markowitz model is widely used in the financial industry because of its simplicity and broad applicability.

 

Complex constraints, transaction cost functions and modifications to the problem can be used to simulate realistic modern portfolio optimization problems. Numerical solution of these optimization problems is a regular part of the existing workflow in the financial services business.

Several quantum methods for solving portfolio optimization problems have been proposed, each with its own advantages and disadvantages.

 

The end-to-end problem to be solved can be stated as follows:

 

Consider a set of n investable assets with a fixed total budget. Define wi ∈ R to be the portion of the total budget invested in asset i. Let r be a known n-dimensional vector representing the expected return on each available asset, i.e., the percentage increase in the value of each asset that is expected over some defined time period. 

 

Let Σ ∈ R^(n×n) be the covariance matrix describing the stochastic (and possibly correlated) fluctuations of the asset returns around the mean r. The covariance matrix defines the portfolio "risk" w^TΣw, which is the variance of the return generated by the portfolio, assuming the underlying model is accurate. Denote the all-ones vector by 1, and for any pair of vectors u, v, let ⟨u, v⟩ = u^T v denote the standard inner product between u and v.

 
The problem is then posed in one of several standard forms (sketched below):

- Maximize the expected return subject to a fixed risk parameter σ.
- Minimize the risk subject to a fixed return parameter r0.
- Trade off maximizing return against minimizing risk via a "risk aversion" parameter λ.
- Alternatively, use the square root of the risk (standard deviation rather than variance), with a parameter q playing the same role as λ.
 

It is usually satisfactory to find a portfolio vector whose objective value is within additive error ϵ of the optimum, for some pre-specified value of ϵ.
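
As a purely classical, concrete sketch of the risk-aversion formulation above: for the equality-constrained version (no inequality constraints), the optimum can be found by solving the corresponding KKT linear system directly. The returns and covariance below are toy numbers chosen for illustration.

import numpy as np

# Toy risk-aversion Markowitz problem:
#   maximize  <r, w> - lam * w^T Sigma w   subject to  <1, w> = 1
# Solved exactly via its KKT linear system (no inequality constraints).
r = np.array([0.08, 0.10, 0.12])                 # assumed expected returns
Sigma = np.array([[0.10, 0.02, 0.04],
                  [0.02, 0.08, 0.03],
                  [0.04, 0.03, 0.12]])           # assumed covariance matrix
lam = 3.0                                        # risk-aversion parameter
n = len(r)

# KKT system: [2*lam*Sigma  1][w ]   [r]
#             [1^T          0][nu] = [1]
kkt = np.block([[2 * lam * Sigma, np.ones((n, 1))],
                [np.ones((1, n)), np.zeros((1, 1))]])
rhs = np.concatenate([r, [1.0]])
w = np.linalg.solve(kkt, rhs)[:n]

print("weights:", np.round(w, 3))
print("expected return:", r @ w, " risk (variance):", w @ Sigma @ w)

Adding inequality constraints (e.g., long-only weights) would require a convex-optimization solver rather than a single linear solve, which is precisely where the harder formulations discussed next begin.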

 

As is often the case with optimization problems, the formulation of the problem has a significant impact on the solution strategy and the difficulty of the problem. If the PO problem is unconstrained and continuous (i.e., each wi is real), then the problem is relatively easy. If convex inequality constraints are imposed, such as long-only constraints or turnover constraints, the problem becomes harder, but it can still be solved by relatively efficient convex optimization methods.

 

In contrast, if the problem is discretized (so that w now represents an integer number of shares or lots to be traded), or if certain discrete constraints are imposed (e.g., cardinality constraints on the number of assets held), then the problem becomes nonconvex and much harder to solve. In general, under discrete constraints the problem can be formulated as a mixed-integer program (MIP), which is NP-hard and thus not expected to be solvable in time polynomial in n under commonly held assumptions. Alternatively, it can be formulated as an instance of quadratic unconstrained binary optimization (QUBO) by encoding the integer variables in binary.
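
As a toy illustration of the QUBO route (a sketch under simplifying assumptions, not the construction used in the paper): each integer lot count is encoded with two binary bits, a budget constraint is folded into the objective as a quadratic penalty, and the resulting binary problem is brute-forced here in place of an annealer or Grover-style search.

import itertools
import numpy as np

# Toy discrete portfolio optimization treated as a QUBO and brute-forced.
# Each asset holds an integer number of lots x_i in {0,...,3} (2 binary bits each).
# Objective: maximize  <r, x> - lam * x^T Sigma x - penalty * (sum(x) - budget)^2
r = np.array([0.08, 0.10, 0.12])
Sigma = np.array([[0.10, 0.02, 0.04],
                  [0.02, 0.08, 0.03],
                  [0.04, 0.03, 0.12]])
lam, budget, penalty = 0.5, 4, 10.0
bits_per_asset, n = 2, len(r)

def decode(bits):
    # Binary-encode each asset's lot count from its bit pair (b0 + 2*b1).
    return np.array([bits[2 * i] + 2 * bits[2 * i + 1] for i in range(n)])

best_val, best_x = -np.inf, None
for bits in itertools.product((0, 1), repeat=n * bits_per_asset):
    x = decode(bits)
    val = r @ x - lam * (x @ Sigma @ x) - penalty * (x.sum() - budget) ** 2
    if val > best_val:
        best_val, best_x = val, x

print("best integer allocation:", best_x, "objective:", round(best_val, 4))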

 

These representations allow the use of quantum algorithms for combinatorial optimization; for example, MIP representations can be solved by branch-and-bound methods, and QUBO representations can be solved by Grover-type methods, or heuristically by (NISQ-friendly) quantum annealing methods.

 

The QLSS-based quantum algorithms for PO (such as the quantum interior-point methods discussed below) inherit many of the caveats of their underlying primitives (i.e., QLSS, tomography, and classical data loading). One prominent caveat is that QLSS-based methods depend on a number of instance-specific parameters that are difficult to predict without numerical simulation, and the asymptotic speedups rest on assumptions about how these parameters scale. In addition, to achieve a speedup, log-depth QRAM must be used on large datasets, which, while theoretically possible, poses practical challenges.

 

The branch-and-bound approach does not require log-depth QRAM to achieve near-quadratic speedups, as the running time will be dominated by the exponential tree size factor (although fast QRAM helps to reduce the time needed to solve the convex relaxation by a factor of poly(n) at each step). One point to note about this approach, however, is that to obtain a quadratic speedup, the convex relaxation of the MIP (i.e., SOCP) needs to be solved coherently. In principle, this is always feasible, but may require a large number of coherent classical operations as well as additional poly(n) time and space overhead.

 

Several alternative approaches to portfolio optimization using quantum solutions have been proposed.

 

- NISQ-HHL. This work adapts the QLSS-based approach by employing mid-circuit measurements and conditional logic, yielding a NISQ-oriented variant of QLSS that is then applied to small instances of the PO problem.

 

- QAOA methods. These methods typically use a quadratic objective function but treat wi ∈ {0, 1} as a binary variable indicating whether an asset is included in the portfolio (a significant departure from the usual formulation). Constraints are handled by adding penalty terms to the objective function. Alternatively, constraints can be enforced by choosing a constraint-preserving ansatz (mixer), or by projecting the measured outcomes onto the feasible space.

 

- Quantum annealing methods. Like the previous approach, these methods require the problem to be formulated as a binary optimization problem. In this case, however, they usually start from a MIP formulation and encode the integer variables in binary using one of several possible encodings (so the number of binary variables is greater than n). Various techniques can then be used to fold the constraints of the PO problem into the objective function to obtain the desired QUBO, which can be solved on a quantum annealer.

 

QIPM methods for continuous formulations of PO (and QLSS-based techniques more generally) have the potential to provide polynomial, but subquadratic, speedups for PO problems. However, these speedups rest on conjectures about the scaling of certain instance-specific parameters, and preliminary empirical estimates of these parameters do not point to a maximal speedup.

 

In any case, the resource estimates above suggest that the non-Clifford resources required to implement QIPMs are prohibitive, even for problem sizes that are trivial to solve on a classical computer. There may be an asymptotic quantum advantage for sufficiently large sets of assets, but this approach is unlikely to bear fruit without significant improvements to the quantum algorithms and underlying primitives (e.g., QRAM, QLSS). Even with such improvements, the algorithm would only provide polynomial speedups that are subquadratic at best, greatly limiting the upside potential of this approach.

 

Branch-and-bound methods for discrete formulations have the potential for larger, quadratic speedups; but, as with Grover-like quadratic speedups in combinatorial optimization, it is not clear that a quadratic speedup is sufficient to overcome the overheads associated with inherently slower quantum clock speeds and fault-tolerant quantum computation at practical instance sizes.

 

2) Monte Carlo methods: option pricing

 

Many financial instruments require the estimation of the average value of a function of a random variable over a time window. To compute this average, Monte Carlo methods can be used to run multiple simulations of a stochastic process over a time window, evaluate the function (which may depend on the trajectory of the random variable over the entire window), and numerically estimate the average. 

 

While the setup and details of the problem may vary from use case to use case, the basic approach tends to be very similar. As a typical example of this problem, we focus on the pricing of derivatives (e.g., options), but we note that many of these results carry over to other use cases, such as calculating the Greeks, credit valuation adjustments, value-at-risk, and so on.

 

Derivatives are financial instruments that, roughly speaking, allow interested parties to benefit from the appreciation or depreciation of an asset (e.g., a stock) without having to hold the asset itself. One such derivative is the "option", a contract that allows the holder to buy (call option) or sell (put option) the underlying asset at a fixed, predetermined price (the strike price) at or before a predetermined time in the future (the exercise window). If the holder chooses to exercise the option, the option seller is obligated to sell or buy the asset accordingly.

 

So how should the price of the option (i.e., the amount the holder must pay for the contract, not the strike price) be determined? The well-known Black-Scholes (or Black-Scholes-Merton) model provides an option pricing methodology under a number of assumptions about the underlying asset and the contract rules. It is also possible to consider more complex options, such as contracts on multiple assets (e.g., basket options), contracts with multiple possible exercise dates (e.g., Bermudan or American options), and so on.

 

Typically, options are priced by taking Monte Carlo samples of the value of the underlying asset, determining the expected profit or loss for the given position, and translating that into a price the purchaser must pay. Options that expose the seller to larger potential losses should be more expensive for the buyer to purchase.

 

Suppose you want to price an option on an underlying asset. The price of the asset is a random variable X, which follows a known (or assumed) stochastic process that models market conditions for the underlying asset. The option has a known payoff function f(X) (e.g., for a call option, the amount by which the asset price exceeds the strike price at exercise, or zero, whichever is greater). For options that depend on more than one underlying asset, or on the asset price at multiple points in time, the random variable X represents a data vector containing all the information needed to compute the payoff.

 

Given these inputs, the end-to-end problem is to compute an estimate of the expected payoff E_X[f(X)] to within a specified error tolerance ϵ, with high probability. This quantity is then used to determine the price of the option.

 

Using the assumed stochastic model of the asset price, one can derive a differential equation for the expected payoff of an option. In limited cases the expected payoff can be computed analytically, the most famous example being the Black-Scholes formula for the price of a European call option, for which the 1997 Nobel Memorial Prize in Economic Sciences was awarded.

 

The Black-Scholes differential equation for the price V(S, t) of an option, as a function of the underlying asset price S and time t, is

∂V/∂t + (1/2)·σ²·S²·∂²V/∂S² + r·S·∂V/∂S − r·V = 0,

where σ is the volatility of the underlying asset and r is the risk-free interest rate.
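
In the European-call case this equation has the well-known closed-form solution mentioned above. As a small self-contained sketch (with illustrative parameters, not figures from the paper), the standard formula C = S·N(d1) − K·e^(−rT)·N(d2) can be evaluated directly:

from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes price of a European call option."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Illustrative parameters: spot 100, strike 105, 1 year, 3% rate, 20% volatility.
print(round(black_scholes_call(100.0, 105.0, 1.0, 0.03, 0.20), 4))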

 
 

Quantum methods for numerically solving such stochastic differential equations have been proposed, including finite-difference methods, Hamiltonian simulation methods, and quantum random walk methods. In many real-world derivative pricing use cases, however, the underlying differential equations become intractable. Therefore, instead of solving stochastic differential equations, the most commonly used classical method for calculating average option payoffs is to sample the stochastic process X directly with Monte Carlo methods: generate a large number of price trajectories over the chosen time horizon and then compute the average payoff numerically.
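
As a baseline for what a quantum method would have to beat, here is a minimal classical Monte Carlo pricer for the same European call, assuming geometric Brownian motion for the underlying asset (again with illustrative parameters):

import numpy as np

# Classical Monte Carlo pricing of a European call under geometric Brownian motion.
rng = np.random.default_rng(1)
S0, K, T, r, sigma = 100.0, 105.0, 1.0, 0.03, 0.20   # illustrative parameters
n_paths = 200_000

# Terminal asset price under risk-neutral GBM (a single step suffices for a
# path-independent payoff; path-dependent options would need full trajectories).
Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

payoff = np.maximum(ST - K, 0.0)                     # call payoff f(X)
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"MC price ~= {price:.4f} +/- {stderr:.4f}")

The reported standard error shrinks as 1/√N with the number of paths N, which is exactly the scaling that quantum amplitude estimation aims to improve to 1/N.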

 

If different financial quantities are required, such as value-at-risk or credit valuation adjustments, the functions to be computed may be quite different, but the methodology is often the same: simulate the underlying stochastic evolution many times and numerically estimate the required quantities.

 

There are many types of options and derivatives that may not be accurately captured by these simple models. Some payoff functions are path-dependent, so the payoff cannot be computed from the value of the asset at a single fixed time; rather, it depends on the trajectory of the random variable in each Monte Carlo sample.

 

In addition, classical Monte Carlo sampling typically allows massive parallelism, as each simulation of the underlying asset can be run independently. In contrast, the quantum algorithms for this problem are inherently serial: the subroutines of the quantum algorithm must run one after another, without intermediate measurements and restarts, in order to realize the quadratic advantage. Once the slower clock speeds of quantum devices are taken into account as well, the requirement for a quantum advantage becomes even more stringent, as larger problem sizes are needed to realize a practical advantage.

 

In the literature, the authors place an upper bound on the resources required for option pricing on a quantum computer and propose hardware-development targets for outperforming classical Monte Carlo methods. In particular, the authors estimate that quantum devices would need to execute about 10^7 layers of T gates per second, and that the code distance for fault-tolerant execution must be chosen large enough to support a total of about 10^10 error-free logical operations. These requirements translate into a logical clock rate of about 50 MHz in order to be competitive with current classical Monte Carlo methods.

 

This clock rate is several orders of magnitude faster than what is foreseeable given the current state of physical hardware and currently known methods for executing logical gates in the surface code.

 

While derivative pricing is quite resource-intensive, it remains an area of active research.

 
Interactions between quantum computing and machine learning
 

Recently, there has been a strong interest in exploring the interplay between quantum computing and machine learning. Quantum resources and quantum algorithms have been studied in all major parts of the traditional machine learning pipeline: (1) datasets; (2) data processing and analysis; (3) machine learning models leading to families of hypotheses; and (4) learning algorithms. 

 

In this section, we mainly consider quantum algorithms applied to classical data. These methods include algorithms based on quantum linear system solvers (or quantum linear algebra more generally), which are a possible source of quantum-accelerated classical learning algorithms. These algorithms also include quantum neural networks (using the framework of variational quantum algorithms) and quantum kernels in which classical machine learning models are replaced by quantum models. In addition, we will discuss quantum algorithms designed to accelerate data analysis tasks, namely tensor principal component analysis (TPCA) and topological data analysis.

 

Quantum machine learning is an active research area. However, as new results have appeared, assessments suggest that few of the quantum machine learning algorithms considered here are likely to show a quantum advantage in the near future. This conclusion stems from a number of factors, including the difficulty of loading classical data into quantum devices and of extracting classical data through tomography, as well as the success of classical "dequantization" algorithms. More specialized tasks, such as tensor PCA and topological data analysis, may provide larger polynomial speedups (i.e., better than quadratic) in some cases, but their applications are not as widespread. Finally, other techniques such as quantum neural networks and quantum kernel methods contain heuristic elements that make end-to-end resource estimation for specific tasks challenging.

 

1) Quantum machine learning via quantum linear algebra

 

Linear algebra in high-dimensional spaces with a tensor-product structure is the workhorse of quantum computing, and also of machine learning (ML). It is therefore not surprising that efforts have been made to find quantum algorithms for a variety of learning tasks including, but not limited to: cluster analysis, principal component analysis, least-squares fitting, recommender systems, binary classification, and Gaussian process regression.

 

One of the main computational bottlenecks in all of these tasks is manipulating large matrices. Quantum linear algebra, exemplified by the quantum linear system solver (QLSS), can in principle speed up such manipulations significantly. The main feasibility questions are:

 

- Can quantum linear algebra be fully dequantized for ML tasks?
- Can classical training data be efficiently loaded into quantum random access memory (QRAM)?
- Can quantum ML algorithms that avoid the above pitfalls solve relevant machine learning problems?  

 

Based on our current understanding, favorable answers to these questions are all needed for a significant quantum advantage, and this combination has not been found in the specific applications analyzed so far, although a modest speedup remains possible.

 

Since many of the quantum machine learning proposals are one-offs and often heuristic, we will explore three specific applications rather than cover each proposal. Each example explains the end-to-end problem to be solved and roughly describes how the proposed quantum algorithm solves the problem to derive its main complexity. In each case, the quantum algorithm assumes access to fast coherent data access (log-depth QRAM) and utilizes quantum primitives to solve linear systems (and linear algebra more generally). 

 

Under certain conditions, these primitives can be exponentially faster than classical methods that manipulate every entry of vectors living in an exponentially large vector space. However, for these examples it is critical to define the end-to-end problem carefully, as the exponential advantage may be lost in the readout step, where the answer to the machine learning problem must be extracted from the quantum state encoding the solution to the linear algebra problem. In all three examples, this is accomplished through some form of amplitude or overlap estimation.

 

Furthermore, we note that even though these quantum algorithms are exponentially faster than classical algorithms that manipulate complete state vectors, in some cases this speedup has been "dequantized" by classical algorithms that merely sample entries of the vectors. Specifically, for some end-to-end problems, there are classical quantum-inspired algorithms that solve the problem only polynomially slower than the quantum algorithm.

 

Assuming that a quantum algorithm can access classical data through fast QRAM is analogous to assuming that a classical algorithm can access the data through fast sample-and-query (SQ) access. We note that most linear-algebra-based machine learning tasks with quantum algorithms have been dequantized to some extent. However, in some cases, quantum algorithms may retain an exponential advantage if they can exploit additional structure in the relevant matrices (e.g., sparsity) that classical algorithms cannot.

 

The biggest problem with these and other proposals is how to access classical data in quantum superposition. These quantum machine learning algorithms assume that vectors with N entries, or matrices with N² entries, can be loaded in time polylogarithmic in N. While efficient quantum data structures (i.e., QRAM) have been proposed to accomplish this, they come with several caveats.

 

To load N data elements coherently in log(N) depth, QRAM uses many ancilla qubits arranged in a tree structure. To load data of size N, the QRAM data structure requires O(N) qubits, which is much larger than the O(log(N)) data qubits used by the algorithms above. This space complexity does not include the overhead of quantum error correction and fault-tolerant computation, in particular the large spatial resources required for parallel magic-state distillation. Thus, we do not yet know whether it is possible to build a QRAM that loads data fast enough while maintaining modest space resources.
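
A small illustrative tally makes the gap concrete (a sketch only; exact constants depend on the QRAM construction): a binary-tree, bucket-brigade-style QRAM over N memory cells uses on the order of N routing qubits, while the algorithm itself addresses it with only about log2(N) qubits.

from math import log2

# Illustrative qubit tally for a binary-tree (bucket-brigade-style) QRAM.
# Exact constants depend on the construction; only the O(N) vs O(log N) gap matters.
for N in (2**10, 2**20, 2**30):
    address_qubits = int(log2(N))   # qubits the algorithm uses to address the memory
    routing_nodes = 2 * N - 1       # internal + leaf nodes of a binary tree over N cells
    print(f"N={N:>12,}: ~{address_qubits} address qubits vs ~{routing_nodes:,} routing qubits")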

 

In addition, achieving speedups by efficiently representing data as quantum states may indicate that similar performance can be achieved by tensor network-based approaches in some cases. Taking this idea to the extreme, many efficient classical algorithms have been developed by "dequantizing" quantum algorithms. That is, by assuming a similar access model for the training data (the SQ access model), and some assumptions about the sparsity and/or rank of the inputs, one can obtain approximate classical sampling algorithms with polynomial overhead compared to quantum algorithms. 

 

This means that any apparent exponential speedup must be an artifact of the data loading/data access assumptions.

 

Another caveat inherited from the QLSS subroutine is that the complexity can be large when the matrices involved are poorly conditioned. This problem is mitigated somewhat in the Gaussian process regression and support vector machine examples, where the matrices to be inverted are regularized by adding a multiple of the identity matrix.

 

To the best of our knowledge, no complete end-to-end resource estimation has been performed for any specific quantum machine learning task.

 

Whether linear-algebra-based classical machine learning can be sped up on a quantum computer depends heavily on the degree to which the quantum algorithm can be dequantized. Currently, exponential speedups appear to be ruled out for many of the proposed problems, but larger polynomial speedups are still possible. The latest asymptotic analyses of dequantization methods still allow a quantum speedup of degree 4 in the Frobenius norm and degree 9 in the polynomial approximation degree. However, classical algorithms are steadily improving, and these exponents may be reduced further.

 

It is also worth noting that classical randomized algorithms based on the SQ access model are not currently used in practice. This may be due to a number of reasons, including poor polynomial scaling, an access model that is not well suited to many practical scenarios, or simply the fact that these methods are new and have not yet been tested in practice.

 

On the other hand, some quantum linear-algebra-based machine learning tasks are known to resist dequantization, such as Gaussian process regression under the assumption of sparse kernel matrices: quantum-inspired classical algorithms based on SQ access still scale polynomially with the Frobenius norm, although other classical algorithms may be able to exploit the sparsity more directly. It is perhaps not surprising that the prototypical matrices satisfying these criteria are sparse unitary matrices (e.g., matrices naturally realized by local quantum gates). To avoid exponential overhead, the end-to-end problem must also not require exponentially small precision.

 

2) Quantum machine learning through energy-based models

 

An important class of models in machine learning is energy-based models, which are heavily inspired by statistical mechanics. The goal of an energy-based model is to train a physical model (i.e., tune the strength of the interactions between a set of particles) such that, at thermal equilibrium, the model closely matches the training set. Energy-based models are an example of generative models: once trained, they can generate new examples similar to the training set by sampling from the model's thermal distribution.
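
As a minimal classical illustration of this idea (not a quantum method, and with random placeholder weights rather than trained values), the sketch below defines an Ising-style energy function over ±1 units and draws an approximate sample from its Boltzmann distribution with Metropolis updates:

import numpy as np

# Minimal classical energy-based model: an Ising-style energy over +/-1 units,
# sampled approximately from its Boltzmann distribution with Metropolis updates.
rng = np.random.default_rng(2)
n = 8
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2                  # symmetric couplings
np.fill_diagonal(W, 0.0)           # no self-interactions
b = rng.normal(scale=0.1, size=n)  # biases
beta = 1.0                         # inverse temperature

def energy(s):
    return -0.5 * s @ W @ s - b @ s

s = rng.choice([-1, 1], size=n)
for _ in range(5000):              # single-spin-flip Metropolis updates
    i = rng.integers(n)
    s_new = s.copy()
    s_new[i] *= -1
    dE = energy(s_new) - energy(s)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        s = s_new

print("approximate sample from the model's thermal distribution:", s)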

 

Energy-based models are deeply connected to physics and are therefore prime candidates for various forms of quantization. However, a challenge for quantum methods is that the statistical mechanical nature of the learning problem often lends itself to efficient, approximate classical methods as well. As a result, the best quantum algorithms may also be heuristic, which hinders end-to-end complexity analysis. Although energy-based models are currently less widely used than deep neural networks, they are an important conceptual development in the field of machine learning and continue to receive attention due to their sound theoretical foundations and their connection to statistical mechanics.

 

There are many proposals to generalize energy-based models to quantum machine learning. Quantum algorithms may be useful for training classical graphical models. One can also generalize the model itself by allowing the physical system at each vertex to be quantum, and by allowing the interactions between systems to be noncommuting.

 

There are two major caveats to quantum methods for training classical models, which apply in both the annealing and fault-tolerant settings. (i) Classical heuristic algorithms, such as greedy or contrastive-divergence methods, typically perform well in practice and are the methods of choice in existing classical analyses; they are also usually highly parallelizable. A quantum algorithm that is faster than a slow exact classical method may not matter if a faster approximate classical method is already good enough. (ii) The cases where heuristic quantum annealing methods would be expected to perform best (e.g., problems defined on highly regular lattices) may not be the practically relevant ones.

 

One caveat of quantum annealing methods is that the samples produced by an annealer do not follow the exact Boltzmann distribution, so the gradient of the loss function cannot be computed exactly from sample averages, and imperfect workarounds must be adopted. As in many other cases in machine learning, the resulting end-to-end solutions are heuristic, and their effectiveness must be demonstrated empirically.
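
For a classical Boltzmann machine, the log-likelihood gradient with respect to a coupling W_ij is the difference of two correlation terms, ⟨s_i s_j⟩_data − ⟨s_i s_j⟩_model. The sketch below estimates this from finite sample sets (here random placeholders); if the model samples come from an imperfect sampler, such as an annealer that deviates from the Boltzmann distribution, the estimate is biased, which is exactly the caveat described above.

import numpy as np

# Gradient estimate for a Boltzmann machine coupling matrix from finite samples:
#   dL/dW_ij ~ <s_i s_j>_data - <s_i s_j>_model
# Imperfect "model" samples (e.g., from an annealer) bias this estimate.
rng = np.random.default_rng(3)
n, n_samples = 8, 1000
data_samples = rng.choice([-1, 1], size=(n_samples, n))    # placeholder training data
model_samples = rng.choice([-1, 1], size=(n_samples, n))   # placeholder model samples

corr_data = data_samples.T @ data_samples / n_samples      # <s_i s_j> over the data
corr_model = model_samples.T @ model_samples / n_samples   # <s_i s_j> over the model
grad_W = corr_data - corr_model                            # ascent direction on W

print("gradient estimate for W (first row):", np.round(grad_W[0], 3))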

 

While energy-based models can be naturally extended to the quantum domain, decisive evidence of a quantum advantage is still lacking for specific end-to-end classical machine learning problems. Because the quantum methods here are largely heuristic, considerable uncertainty remains about their future. One may hope that these heuristics will outperform classical heuristics in some specific contexts, but the success of classical heuristics and the effectiveness of approximate classical methods present significant obstacles to realizing any quantum advantage in this domain.

 

3) Tensor PCA

 

Inference problems play an important role in machine learning. One of the most commonly used methods is Principal Component Analysis (PCA), which is a technique for extracting the most important information from potentially noisy data streams.  

 

In the special case where the data matrix is generated as a rank-1 signal plus Gaussian noise, i.e., the spiked matrix model, it is well known that in the limit of large matrices there is a phase transition in the signal-to-noise ratio: above the transition point the principal component can be recovered efficiently, whereas below it, it cannot be recovered at all. The tensor version of the problem has two transition points. One is information-theoretic, below which the principal component cannot be recovered at all; the other is computational, below which the principal component can be recovered but only inefficiently, and above which it can be recovered efficiently.
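
A toy version of the matrix case makes the setting concrete (a sketch with illustrative sizes and signal strengths, not a statement about the precise transition point): plant a rank-1 spike in symmetric Gaussian noise and check how well the top eigenvector recovers it.

import numpy as np

# Toy spiked matrix model: Y = (lam/n) * v v^T + symmetric Gaussian noise.
# Recovery: take the top eigenvector of Y and measure its overlap with the planted v.
rng = np.random.default_rng(4)
n = 400
v = rng.choice([-1.0, 1.0], size=n)

for lam in (0.5, 1.5, 3.0):                      # weak / moderate / strong signal
    noise = rng.normal(size=(n, n))
    W = (noise + noise.T) / np.sqrt(2 * n)       # symmetric noise with O(1) spectral edge
    Y = (lam / n) * np.outer(v, v) + W
    eigvals, eigvecs = np.linalg.eigh(Y)
    v_hat = eigvecs[:, -1]                       # top eigenvector
    overlap = abs(v_hat @ v) / np.linalg.norm(v) # normalized overlap in [0, 1]
    print(f"lambda={lam}: overlap with planted spike ~ {overlap:.2f}")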

 

Thus, the tensor PCA problem provides a richer mathematical setting, related to optimization and spin-glass theory; however, it is not clear whether the tensor PCA framework has natural practical applications. A quantum algorithm for tensor PCA has been proposed that has provable runtime guarantees for the spiked tensor model; it offers a potentially quartic speedup over classical algorithms, and it can efficiently recover the signal at signal-to-noise ratios much smaller than those required by other classical methods.

 

The spiked tensor model does not appear to correspond directly to any practical problem. In addition, efficient recovery requires a fairly high signal-to-noise ratio, which may not occur in the real world (and when it does, it is not clear that formulating the problem as tensor PCA is the most efficient route).

 

The quartic speedup is nevertheless compelling. At this time, it is not known whether other large-scale inference problems admit similar speedups.

 

4) Topological Data Analysis

 

In topological data analysis, the aim is to compute the dominant topological features (connected components and k-dimensional holes) of a dataset, given either as points sampled from an underlying topological manifold (at a chosen length scale) or as a graph. These features may be of independent interest (e.g., the number of connected components of the distribution of matter in the universe), or they may serve as generic features for comparing datasets.
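
As a minimal classical sketch of the simplest such feature, the code below computes the zeroth Betti number (the number of connected components) of a toy point cloud at two different length scales, by connecting points closer than the chosen scale and counting graph components; higher-dimensional Betti numbers would require a persistent-homology library.

import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Betti_0 (number of connected components) of a point cloud at a given length scale:
# connect points closer than `scale`, then count components of the resulting graph.
rng = np.random.default_rng(5)
cluster_a = rng.normal(loc=0.0, scale=0.1, size=(50, 2))
cluster_b = rng.normal(loc=3.0, scale=0.1, size=(50, 2))
points = np.vstack([cluster_a, cluster_b])

for scale in (0.3, 5.0):
    adjacency = csr_matrix(cdist(points, points) < scale)
    betti_0, _ = connected_components(adjacency, directed=False)
    print(f"length scale {scale}: Betti_0 = {betti_0}")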

 

Quantum algorithms for this problem exploit the ability of a register of qubits to efficiently represent the (exponentially many) simplices of the underlying simplicial complex, which makes them more efficient than known classical algorithms in some cases.

 

Considering the large overheads associated with error correction, it seems unlikely that quantum algorithms for computing (persistent) Betti numbers will provide a practical advantage for the computations of current interest. This is because the quantum speedup for this task is only quadratic over classical approaches, which are already efficient in the commonly considered regime k ≤ 3.

 

The quantum algorithm could become of practical interest if more datasets are found for which the high-dimensional (persistent) Betti numbers are large and for which estimating them to relative error is of practical value.

 

5) Quantum Neural Networks and Quantum Kernel Methods

 

There are typically two approaches to using quantum computers as machine learning models: quantum neural networks and quantum kernel approaches. 

 

Many of the early ideas were motivated by the limitations of near-term NISQ devices. Nevertheless, not all subsequent proposals can be realized on NISQ devices, and these proposals need not be limited to NISQ hardware: they can also be run on devices with explicit quantum error correction.

 

Given some data, the goal is to obtain a function or distribution that models certain properties of the data, which we call a model. This is done by first defining a family of models (a hypothesis class) and then using a learning algorithm to select a model from this family. For example, in supervised learning we have data points xi ∈ X, each with a label yi ∈ Y, and the goal is to find a model function h : X → Y that correctly labels previously unseen data with high probability. More generally, this description also encompasses quantum data, in which case each xi corresponds to a quantum state.

 

Quantum neural networks and quantum kernel methods use a quantum computer to construct the model itself, in place of classical models such as neural networks. Specifically, the model encodes the data by preparing a quantum state and then measures certain observables to obtain the model's predictions.

 

One issue we have not yet discussed is how to encode classical data into quantum circuits, which is an important aspect of building quantum models. There are many possibilities, such as amplitude encoding, or encoding the data as rotation angles of single-qubit rotations.

While certain strategies are popular, it is generally not clear which choice is best for the particular problem at hand, so choosing a data-encoding strategy is itself a heuristic process. The same issue extends to the choice of quantum neural network or quantum kernel: while particular choices may perform well on specific problem instances, there is a lack of strong evidence for why these approaches should outperform classical approaches in general.
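
As a small illustration of one such choice (angle encoding, simulated here with plain NumPy rather than a quantum SDK, and purely as a sketch): each feature is mapped to the rotation angle of a single-qubit RY rotation, and the encoded state is the tensor product of the rotated qubits.

import numpy as np
from functools import reduce

def angle_encode(x):
    """Angle-encode a feature vector: feature x_j -> RY(x_j)|0> on qubit j,
    returning the tensor-product state vector (simulated classically)."""
    single_qubit_states = [np.array([np.cos(xj / 2), np.sin(xj / 2)]) for xj in x]
    return reduce(np.kron, single_qubit_states)

x = np.array([0.3, 1.2, 2.0])   # toy classical data point: 3 features -> 3 qubits
state = angle_encode(x)
print("state dimension:", state.shape[0], " norm:", round(float(np.linalg.norm(state)), 6))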

 

While the optimization of parameterized quantum circuits is primarily a concern for quantum neural networks, the search for good quantum kernels has also led to proposals for trainable kernels, i.e., quantum kernels constructed from parameterized quantum circuits. When the parameters are optimized heuristically, this faces the same challenges and considerations as variational quantum algorithms (VQAs).
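
Continuing the toy encoding above in self-contained form, a fidelity-style quantum kernel assigns k(x, x′) = |⟨φ(x)|φ(x′)⟩|², where |φ(x)⟩ is the data-encoding state. The sketch below computes a small Gram matrix classically, which is tractable only because the toy states are product states; on real hardware these overlaps would instead be estimated from measurement statistics, with the finite-statistics caveats noted below.

import numpy as np
from functools import reduce

def angle_encode(x):
    # RY(x_j)|0> on each qubit, combined into a product state vector.
    return reduce(np.kron, [np.array([np.cos(v / 2), np.sin(v / 2)]) for v in x])

def fidelity_kernel(X):
    """Gram matrix K[i, j] = |<phi(x_i)|phi(x_j)>|^2 for a fidelity-style quantum kernel."""
    states = [angle_encode(x) for x in X]
    return np.array([[abs(si @ sj) ** 2 for sj in states] for si in states])

X = np.array([[0.1, 0.5, 0.9],
              [0.2, 0.4, 1.0],
              [2.5, 2.8, 3.0]])   # toy dataset: 3 points, 3 features each
print(np.round(fidelity_kernel(X), 3))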

 

In both cases, finite sampling statistics is an important consideration. When optimizing parameterized quantum circuits, care must be taken to avoid the barren-plateau phenomenon; similar effects can occur in the kernel setting, and may even arise purely from the data-encoding circuits.

 

The use of classical machine learning models is often highly heuristic, guided by empirical evidence or physical intuition; nonetheless, they have been remarkably successful at solving many computational problems. The quantum techniques outlined in this section broadly follow the same approach (although substantial theoretical advances have been made in some areas), and there is no a priori reason why they should not be similarly useful. Nonetheless, it remains challenging to make concrete predictions of quantum advantage, especially for classical data.

 

This challenge is exacerbated by our limited analytical understanding of end-to-end applications even in fully classical settings. Indeed, with the exception of a few select examples, it may ultimately prove difficult to obtain end-to-end theoretical analyses for these proposals that are as complete as those available for other quantum algorithms.

 

By contrast, the potential for concrete, provable advantages appears most promising in the setting of quantum data.

 
Reference Links:
[1]https://arxiv.org/abs/2310.03011
[2]https://inspirehep.net/literature/2706151
2023-10-13 19:35
