Nature Reviews Physics: quantum computing in finance
Quantum computers are expected to surpass the computing power of classical computers and have a transformative impact on numerous industry sectors.
Recently, in Nature Reviews Physics, scientists from JPMorgan Chase and collaborators in the United States and Japan provided a comprehensive summary of the state of the art of quantum computing for financial applications, with a particular focus on stochastic modeling, optimization, and machine learning.
The review, titled "Quantum computing for finance", has its first and corresponding authors at JPMorgan Chase.
The research team says: "This review is geared towards physicists and therefore provides an overview of classical techniques used in the finance industry and discusses the potential advantages and limitations of quantum techniques. Finally, we will explore the challenges that physicists can help address."
Financial institutions deal with a large number of computationally challenging problems on a daily basis. These problems range from forecasting (from pricing and risk estimation to identifying unusual trades and customer preferences) to optimization (e.g., portfolio selection, finding optimal trading strategies, and hedging).
As a result of advances in mathematical finance and computational techniques, as well as contributions from the financial industry and the scientific community, financial institutions now employ a diverse set of problem-solving tools that use stochastic modeling, optimization algorithms, and machine learning (ML) models to solve both types of problems. In recent years, there has been growing interest in academia and industry in exploring whether quantum computing can be used to solve classically challenging problems, and researchers and industry practitioners have developed various quantum computing algorithms for each of these problem-solving tools.
For example, quantum algorithms for Monte Carlo integration (MCI) promise a quadratic speedup in convergence compared to classical algorithms; this could significantly improve risk modeling, for which MCI is an indispensable tool. In optimization and ML, recent research has demonstrated the benefits of quantum algorithms in discrete optimization, dynamic programming, boosting and clustering, with better complexity in some parameters of the problem scale than state-of-the-art classical algorithms.
In addition, scientists have developed and heavily investigated various heuristics (often hybrid classical-quantum methods involving variational subcircuits). Preliminary results suggest that these heuristics can improve on classical methods. Since most of these algorithms apply to a broad range of problems, they have similar potential for financial applications.
Despite the remarkable progress in quantum algorithms, many challenges remain before end-to-end quantum advantage can be achieved on problems with commercially relevant specifications. In particular, significant work is needed to reduce the resource requirements of certain components of the algorithms: e.g., the quantum embedding of classical input data, the readout of quantum outputs, and pre- and post-processing. Without such reductions, the complexity of these components may negate the speedup of other parts of the algorithm. In addition, current quantum hardware is still in its infancy; the low fidelity and small number of qubits of existing NISQ devices severely limit the size of the problems that can be solved. As a result, executing quantum circuits in hardware often requires significant optimization work at the circuit design and compilation level, and relies on classical computation to find good initial conditions and reduce errors.
These challenges provide research opportunities to address bottlenecks in end-to-end quantum applications and to design hardware-aware quantum algorithms; moreover, additional opportunities may be found in exploiting the rich structure of financial problems in the quest for quantum speedups, in introducing new heuristics for computationally difficult problems, and in improving classical heuristics.
Quantum algorithms for financial problems
In this review, the team surveys the current state of quantum algorithms for financial applications. The scientists discuss important financial problems solved using stochastic modeling, optimization, and ML techniques; for each area, the team reviews the classical techniques used and discusses whether the associated quantum algorithms are useful. This review takes a more comprehensive view than previous works, covering a wider range of financial applications and the latest quantum algorithms.
Stochastic processes are commonly used to model phenomena in the physical sciences, biology, epidemiology and finance. In finance, stochastic modeling is often used to help make investment decisions, usually with the goal of maximizing returns and minimizing risk. Quantities describing market conditions (including stock prices, interest rates, and their volatility) are often modeled as stochastic processes and represented as random variables. The evolution of such stochastic processes is governed by stochastic differential equations (SDEs), and the purpose of stochastic modeling is to solve stochastic differential equations for the expected value of some random variable of interest (e.g., the expected return on a financial derivative at some time in the future), which determines the fair value of the derivative.
While SDEs have analytic solutions in a few simple cases (e.g., the geometric Brownian motion used in the Black-Scholes model for European options), the vast majority of SDEs in financial models take more complex forms and can only be solved numerically. Moreover, even when an SDE can be solved analytically, the complexity of the derivative's payoff specification makes it difficult to find closed-form solutions for the moments of the payoff for all but the simplest European options and some Asian options; more complex payoff types include American options and barrier options. If the desired moments of the stochastic process can be expressed as tractable differential equations, one numerical approach is to solve these differential equations approximately with high-precision non-stochastic methods; the price of high accuracy is a strong dependence on the dimension of the stochastic process. Alternatively, we can use MCI, accepting an error governed by the variance of the stochastic process in exchange for a much weaker dependence of the accuracy on dimension.
Indeed, the integrals computed in financial problems typically involve stochastic processes whose variance does not grow exponentially with dimension, as it can in the general case. The duality between solving the partial differential equations (PDEs) that govern the evolution of the moments directly and estimating those moments by sampling is captured by the Feynman-Kac formula. Both approaches apply widely, even outside stochastic modeling, and have triggered the development of corresponding quantum methods.
1) Quantum methods for pricing and risk analysis based on Monte Carlo
An important type of asset pricing problem in finance is the pricing of derivatives. A derivative is a contract that derives its value from another source called the "underlying" (e.g., a collection of assets or a financial benchmark), which is modeled as a stochastic process. The value of a derivative is typically calculated by modeling the dynamics of the underlying and computing the corresponding payoff. At each time step, the payoff function computes, based on the current and possibly the historical state of the underlying, the cash flows owed to the owner of the contract; the payoff is a simple function fixed at the time the derivative contract is entered into. The fair price of the derivative is given by the expected future cash flows, or payoffs, discounted to the current date.
Traditionally, we can use a model to simulate the stochastic process of payoffs and estimate the price of the derivative using a sample average - this is the MCI method, which makes no assumptions about the underlying model and can handle high-dimensional financial problems.
According to Chebyshev's inequality, the number of model samples required to achieve estimation error ε is O(σ²/ε²), where σ² is the variance of the payoff process. However, there is a quantum algorithm (often referred to as QMCI) that uses O(σ/ε) quantum samples to obtain an ε-accurate estimate of the price with constant probability of success.
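A minimal sketch of the classical approach may help fix ideas. It prices a European call under geometric Brownian motion by plain Monte Carlo; the model, the parameter values and the function name mc_european_call are illustrative assumptions, not the review's implementation. The printed standard error shrinks roughly by half each time the sample count is quadrupled, consistent with the O(σ²/ε²) scaling above.

```python
import numpy as np

def mc_european_call(s0, k, r, sigma, t, n_samples, seed=0):
    """Classical Monte Carlo price of a European call under geometric
    Brownian motion; the standard error shrinks as O(sigma_payoff / sqrt(N))."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_samples)
    # Terminal price under the risk-neutral measure (exact GBM solution).
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.exp(-r * t) * np.maximum(st - k, 0.0)
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_samples)

# Quadrupling the sample count roughly halves the standard error,
# consistent with the O(sigma^2 / eps^2) classical sample complexity.
for n in (10_000, 40_000, 160_000):
    price, stderr = mc_european_call(100.0, 105.0, 0.02, 0.2, 1.0, n)
    print(n, round(price, 4), round(stderr, 4))
```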
In the quantum setting, assume that the underlying follows, with probability p(ω), a trajectory ω ∈ Ω from the initial time to the expiration of the contract, and that f(ω) is a payoff function scaled to take values in [0, 1]. A quantum sample is then the state
Q|0⟩ = Σ_{ω∈Ω} √p(ω) |ω⟩ ( √(1 − f(ω)) |0⟩ + √f(ω) |1⟩ ),
generated by a unitary operator Q; the probability of measuring the last qubit in |1⟩ equals the expected payoff, which quantum amplitude estimation (QAE) extracts with quadratically fewer samples.
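As a sanity check of this encoding, the following purely classical NumPy sketch, using an illustrative discretized distribution, builds the amplitudes of Q|0⟩ and verifies that the probability of the marked ancilla state equals the expected payoff; it is not a quantum implementation.

```python
import numpy as np

# Toy discretized distribution p(w) over 8 trajectories and a payoff f(w) in [0, 1].
p = np.array([0.05, 0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.05])
f = np.linspace(0.0, 1.0, 8)

# Amplitudes of the state Q|0>: sqrt(p(w)) on |w>, split between an ancilla
# marked |1> with amplitude sqrt(f(w)) and |0> with amplitude sqrt(1 - f(w)).
amp_marked = np.sqrt(p) * np.sqrt(f)        # components |w>|1>
amp_unmarked = np.sqrt(p) * np.sqrt(1 - f)  # components |w>|0>

# The probability of measuring the ancilla in |1> equals E[f(w)];
# amplitude estimation recovers it with O(sigma / eps) quantum samples.
prob_one = np.sum(amp_marked**2)
print(prob_one, np.sum(p * f))  # identical up to floating-point error
```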
A necessary condition for practical quantum advantage is that Q can be implemented in a time comparable to that needed to generate a classical sample from p(ω). Furthermore, it has been pointed out that the overhead of current quantum error-correcting codes may prevent the quadratic speedup from being realized in practice. The good news is that the payoff functions of most derivatives are simple and can therefore be implemented using reversible arithmetic; moreover, there are QAE variants that reduce the circuit depth.
A variety of techniques have been developed to realize Q. Specifically, the Grover-Rudolph algorithm and its approximate variants address the distribution-loading problem for distributions (e.g., log-concave distributions) whose required integrals can be computed efficiently by numerical methods. However, it has been shown that QMCI loses its quadratic speedup if this state-preparation technique relies on integrals computed with classical Monte Carlo, that is, the very method we are trying to avoid.
For loading general L1- or L2-normalized functions, a number of black-box state-preparation techniques have been proposed. These algorithms first prepare the target state with some probability of success and then use amplitude amplification to boost that probability. Despite their general applicability, their inverse dependence on the filling ratio of the loaded function (the relevant norm of the function divided by the corresponding norm of its bounding box) makes them inefficient when the target distribution is concentrated around a few values. Methods for loading distributions with non-unitary operations have also been proposed, although how to integrate them into QMCI remains an open question.
Furthermore, instead of directly loading the full distribution p(ω), it can be preferable to load the joint distribution of the path increments and use arithmetic to introduce the necessary correlations. However, this approach requires a number of qubits that grows linearly with the number of time steps, and quantum arithmetic operations can be expensive. It therefore remains an open question whether there are alternative unitaries for loading the payoff distribution that avoid large numbers of arithmetic operations and whose qubit count scales sublinearly. In cases where the SDE can only be simulated approximately, for example when using a local volatility model, the time discretization may introduce an additional multiplicative factor of O(1/ε) into the overall complexity of the MCI algorithm. To address this problem, multilevel MCI has been proposed, which, under certain assumptions on the payoff function, ensures that the overall algorithm still scales, up to polylogarithmic factors, as in the single-time-step case.
There is a deterministic classical approach, known as quasi-Monte Carlo integration, that achieves error scaling similar to QMCI for low-dimensional problems, at the cost of an exponential dependence on dimension. Interestingly, for reasons that are not yet fully understood, this exponential dependence does not appear in practice for certain financial problems, so a comparable quadratic speedup can sometimes be obtained classically.
Since QMCI provides a provable query-complexity speedup over the widely used MCI method, the community has begun to carry out application-specific resource analyses for pricing various types of derivatives.
For some derivatives, evaluating the expected payoff is affected by future decisions, which can greatly complicate pricing. American options are an important example. An American call option gives the holder the right to buy the underlying asset at a fixed strike price K at any time between today (t = 0) and the expiration date (t = T); at each time t ∈ [0, T], the holder must decide whether to exercise by comparing the immediate exercise payoff max(S_t − K, 0), where S_t is the price of the underlying at time t, with the expected payoff from exercising the option at a later time.
American option pricing belongs to the broader class of optimal stopping problems. Computing the exact value of an American option requires dynamic programming, because deciding whether to exercise at time t requires solving the subproblem of determining the expected future payoff from exercising after t, known as the continuation value. In finance, an approximate solution usually uses regression to predict the continuation value and computes the stopping times backwards from expiration; this technique is known as least-squares Monte Carlo. The labels used to train the regression model are usually computed with MCI, and versions using QMCI have also been proposed.
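The classical least-squares Monte Carlo idea can be sketched in a few lines. The snippet below prices an American put with a quadratic polynomial regression for the continuation value; the basis, the parameters and the function name lsm_american_put are illustrative choices, not the specific scheme analyzed in the review.

```python
import numpy as np

def lsm_american_put(s0, k, r, sigma, t, n_steps, n_paths, seed=0):
    """Least-squares Monte Carlo (Longstaff-Schwartz style) for an American put:
    regress continuation values on polynomial features of the spot price and
    step backwards from expiration."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    paths = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                                  + sigma * np.sqrt(dt) * z, axis=1))
    cash = np.maximum(k - paths[:, -1], 0.0)          # exercise value at expiry
    for i in range(n_steps - 2, -1, -1):
        cash *= np.exp(-r * dt)                       # discount one step back
        s = paths[:, i]
        exercise = np.maximum(k - s, 0.0)
        itm = exercise > 0                            # regress only on in-the-money paths
        if itm.any():
            coeffs = np.polyfit(s[itm], cash[itm], deg=2)
            continuation = np.polyval(coeffs, s[itm])
            stop_now = exercise[itm] > continuation   # optimal to stop here
            cash[itm] = np.where(stop_now, exercise[itm], cash[itm])
    return np.exp(-r * dt) * cash.mean()

print(round(lsm_american_put(100.0, 100.0, 0.05, 0.2, 1.0, 50, 20_000), 3))
```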
Another important task related to derivatives pricing is computing the sensitivity of derivative prices to model and market parameters, which corresponds to computing the gradient of the price with respect to these input parameters. Such gradients are known as the option "Greeks". Greeks are an important risk-management tool because they enable systematic hedging of the risk of holding derivative contracts against market movements.
Traditionally, Greeks can be computed by "bump and reprice", which combines finite differences with MCI. Under certain continuity conditions on the payoff function with respect to the input parameters of interest, Greeks computed by bump and reprice achieve the same mean-square-error convergence in the number of samples as MCI, provided the MCI is performed with common random numbers. Thus, the number of samples required to compute k Greeks to precision ε is O(kσ²/ε²). Using quantum gradient methods, a quantum speedup of the classical bump-and-reprice approach has been proposed, achieving a quadratic reduction in the number of samples required with respect to both k and ε.
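The following sketch illustrates the classical bump-and-reprice idea for a single Greek (delta) via central finite differences; reusing the same random seed for both bumped prices is the common-random-numbers trick mentioned above. The pricer, parameters and function names are illustrative assumptions.

```python
import numpy as np

def price(s0, k, r, sigma, t, n, seed):
    """Monte Carlo European call price; a fixed seed gives common random numbers."""
    z = np.random.default_rng(seed).standard_normal(n)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    return np.exp(-r * t) * np.maximum(st - k, 0.0).mean()

def delta_bump_reprice(s0, k, r, sigma, t, n=200_000, h=0.01, seed=42):
    """Delta via central finite differences; the same seed for both bumped
    prices (common random numbers) keeps the difference low-variance."""
    up = price(s0 + h, k, r, sigma, t, n, seed)
    down = price(s0 - h, k, r, sigma, t, n, seed)
    return (up - down) / (2 * h)

print(round(delta_bump_reprice(100.0, 105.0, 0.02, 0.2, 1.0), 4))
```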
On a quantum computer it is also possible to perform automatic differentiation (AD) using reversible arithmetic, in which case QAE can be used to speed up the error convergence. In addition, neural networks can combine AD with backpropagation to learn the integral and its partial derivatives simultaneously. While the advantage of using a neural network here is unclear, if a neural network could learn both the price and the Greeks with only O(σ²/ε²) training samples, it would outperform bump and reprice. Unfortunately, current quantum neural network (QNN) architectures incur classical sampling costs when computing gradients, making a quantum advantage unlikely.
Beyond pricing and Greeks, financial institutions compute other risk metrics that involve estimating expected quantities of stochastic processes. As in derivatives pricing, QMCI can be applied to these tasks, allowing for potential quantum speedups; in particular, QMCI-based methods have been shown to be useful for calculating value at risk and credit risk.
2) Quantum methods for pricing and risk analysis based on differential equations
As mentioned earlier, the expectation values of certain random variables whose evolution is described by an SDE can be formulated as solutions of parabolic PDEs. This connection between SDEs and PDEs allows stochastic processes to be studied with deterministic methods. A common numerical technique for solving PDEs is the finite-difference method, which approximates the differential equation by a system of linear equations on discrete grid points. Finite-difference methods have been combined with quantum linear systems algorithms (QLSAs) to solve multi-asset Black-Scholes PDEs. While, under various assumptions on the linear system and the data-access model, the dependence of QLSAs on the dimension of the linear system can be exponentially better, there is a sampling cost associated with reading the solution out of the quantum state, reminiscent of both classical and quantum MCI.
However, it may still be beneficial to choose a PDE-based method over MCI, as the former has been shown to converge better with respect to certain errors induced by time discretization (e.g., in pricing barrier options).
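To make the finite-difference route concrete, the sketch below discretizes the one-asset Black-Scholes PDE with a standard implicit scheme: each backward time step solves a tridiagonal linear system, which is exactly the kind of system a QLSA would target. Grid sizes and parameters are illustrative, and the system is solved classically here.

```python
import numpy as np

# Implicit finite-difference scheme for the one-asset Black-Scholes PDE
# (European call, strike 100): each backward time step solves a tridiagonal
# linear system of the kind a quantum linear systems algorithm would target.
r, sigma, t = 0.02, 0.2, 1.0
n_steps, n_space = 200, 400
s_max, k = 300.0, 100.0
dt = t / n_steps
s = np.linspace(0.0, s_max, n_space + 1)

i = np.arange(1, n_space)                        # interior grid indices
a = 0.5 * dt * (r * i - sigma**2 * i**2)         # sub-diagonal coefficients
b = 1.0 + dt * (sigma**2 * i**2 + r)             # main diagonal
c = -0.5 * dt * (r * i + sigma**2 * i**2)        # super-diagonal
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)

v = np.maximum(s - k, 0.0)                       # terminal payoff of the call
for step in range(1, n_steps + 1):
    tau = step * dt                              # time to expiry after this step
    upper = s_max - k * np.exp(-r * tau)         # boundary value at s_max
    rhs = v[1:-1].copy()
    rhs[-1] -= c[-1] * upper                     # fold the boundary into the system
    v[1:-1] = np.linalg.solve(A, rhs)            # solve the linear system
    v[0], v[-1] = 0.0, upper

print(round(np.interp(100.0, s, v), 3))          # approximate price at spot 100
```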
For second-order linear PDEs, it is possible to map the PDE to a form similar to the Schrödinger equation and use Hamiltonian-simulation techniques to model the dynamics it generates. If the resulting evolution is non-unitary, block encoding may be required to embed it in a unitary evolution. It has been shown that, if all the eigenvalues of the evolution matrix are known, simulating the Hamiltonian dynamics can be exponentially faster than classical methods. However, since the PDE solution is encoded in a quantum state, QMCI or QAE is still required to extract the desired quantities from it, adding further overhead.
Variational quantum simulation (VQS) can also be used to model the evolution of certain PDE solutions as quantum states. It has been shown that this can be done for the Black-Scholes PDE, and the approach has subsequently been extended to the more general Feynman-Kac PDE and its variants, with discussion of the potential applicability to American option pricing (via the Hamilton-Jacobi-Bellman equation) and to stochastic volatility models.
The algorithms above encode the solution as a quantum state, with the basis states representing the discretized values of the spatial variables and the amplitudes proportional to the corresponding function values. An alternative approach, proposed by several teams, uses a QNN as a parameterized ansatz for the PDE solution, with the variational parameters updated by gradient descent so that the ansatz satisfies the PDE. One caveat of this approach is that when the input data are encoded as a product state, a more expressive version of the model can be implemented with classical kernel methods. However, it is thought that variational ansätze may introduce a regularization not available to kernel methods, which could make QNNs that encode the input data as a product state useful nonetheless.
Although VQS and QNNs typically consume fewer resources than QLSAs, it remains unclear, without further experimentation, whether these heuristics offer any quantum advantage.
Financial optimization problems are typically highly constrained linear or quadratic programs with continuous variables, discrete variables, or both. Quantum computing offers a variety of algorithms and heuristics for solving such problems.
1) Quantum methods for continuous optimization
In continuous optimization, the decision variables of the problem are real-valued. Convex programming is an area of continuous optimization for which various efficient classical algorithms exist for structured problems. Well-known examples of structured convex problems are symmetric conic programs (SCPs) such as linear programs (LPs), second-order conic programs (SOCPs), and semidefinite programs (SDPs), which appear frequently in financial applications. Short-term financing (or cash-flow management) and arbitrage detection are important linear optimization problems in finance. In a short-term financing problem, the objective is to select cash flows (e.g., assets or lines of credit with fixed rates of return) that meet periodic obligations while maximizing final assets. Since cash flows and credit obligations are proportional to the amounts held, the constraints of these problems are linear.
Portfolio optimization is the process of selecting an optimal set of assets, and their quantities, from a pool of candidate assets according to some predetermined objective. The objective can vary with the investor's preferences regarding financial risk and expected return. Modern portfolio theory focuses on the trade-off between risk and return to produce an efficient portfolio - i.e., one that maximizes expected return for a given level of risk. The expected return and risk of a portfolio are usually modeled by the mean and variance of the portfolio return, respectively. Portfolio optimization can then be formulated as constrained utility maximization, for example maximizing the utility μᵀw − (γ/2) wᵀΣw over the vector of portfolio weights w, subject to constraints such as the budget constraint 1ᵀw = 1, where μ is the vector of expected returns, Σ the covariance matrix of returns, and γ a risk-aversion parameter.
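For the simple case of a single budget equality constraint, this mean-variance problem has a closed-form solution via its KKT system, as the sketch below shows; the returns, covariances and risk-aversion value are illustrative assumptions.

```python
import numpy as np

# Mean-variance portfolio sketch: maximize mu^T w - (gamma/2) w^T Sigma w
# subject to the budget constraint 1^T w = 1 (long/short positions allowed).
# The equality-constrained quadratic program has a closed-form KKT solution.
mu = np.array([0.08, 0.10, 0.12, 0.07])            # illustrative expected returns
sigma = np.array([[0.10, 0.02, 0.04, 0.00],
                  [0.02, 0.12, 0.01, 0.03],
                  [0.04, 0.01, 0.15, 0.02],
                  [0.00, 0.03, 0.02, 0.08]])       # illustrative covariance matrix
gamma = 3.0                                        # risk-aversion parameter
ones = np.ones(len(mu))

# KKT system: [gamma*Sigma  1; 1^T  0] [w; lam] = [mu; 1]
kkt = np.block([[gamma * sigma, ones[:, None]],
                [ones[None, :], np.zeros((1, 1))]])
rhs = np.concatenate([mu, [1.0]])
w = np.linalg.solve(kkt, rhs)[:-1]

print(np.round(w, 3), round(mu @ w, 4), round(w @ sigma @ w, 4))
```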
Given how efficiently SCPs can be solved classically, quantum speedups have little scope in practice. Still, a quantum reduction in worst-case running time as a function of the problem dimension is possible thanks to fast quantum algorithms for basic linear algebra subroutines (BLAS). However, three noteworthy issues with quantum BLAS (QBLAS) make comparing quantum and classical worst-case complexities challenging.
- First, QBLAS operate on the spectrum of matrices and thus have a polynomial dependence on the matrix condition number, a dependence usually found only in classical iterative algorithms.
- Second, retrieving data from a quantum device requires sampling, and thus incurs classical sampling costs that scale polynomially, rather than logarithmically, with the inverse of the required error.
- Finally, these algorithms require access to classical data in quantum superposition, which, in the absence of quantum memory, can be done efficiently only in special cases, for example when the input matrix is sparse.
Classical techniques for solving SCPs fall into two categories: those that scale well with the size of the problem and those that scale well with the solution error. The first category is dominated by the matrix multiplicative weights (MMW) meta-algorithm, for which the complexity of solving a general SDP scales polynomially with the number of constraints m, the number of variables n, the matrix sparsity s, and a function γ of the desired error.
Since the dependence of MMW on the desired problem error is already low, and assuming matrix sparsity, quantum MMW may not be limited by the drawbacks of QBLAS. Although MMW has not been extensively studied for solving SCPs in finance, multiplicative weights update methods, the scalar special case of MMW, have been studied extensively; in particular, online portfolio optimization algorithms, which adjust portfolio choices to market changes, have received attention from the quantum computing community.
The multiplicative weights framework has also been used to derive sublinear algorithms for estimating Nash equilibria of two-player zero-sum games, which are simple matrix games. Quantum algorithms based on Gibbs sampling have been used to solve zero-sum games with a quadratic reduction in the dependence on dimension; these algorithms have been applied to estimating martingale measures from market data and to derivative pricing. In addition, quantum amplitude amplification has been used to quadratically reduce the dimension dependence of classical sublinear algorithms for a more general class of matrix games, solved in a primal-dual fashion with multiplicative weights and online mirror descent, and has been applied to training kernel classifiers.
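A minimal classical multiplicative-weights sketch for a two-player zero-sum matrix game is shown below; the averaged strategies converge to an approximate equilibrium, and the quantum algorithms mentioned above accelerate the inner sampling steps of such schemes. The game, learning rate and function name are illustrative assumptions.

```python
import numpy as np

def mwu_zero_sum(a, n_rounds=2000, eta=0.05):
    """Classical multiplicative weights for a two-player zero-sum game with
    payoff matrix a (row player maximizes). The time-averaged strategies
    converge to an approximate Nash equilibrium."""
    m, n = a.shape
    w_row, w_col = np.ones(m), np.ones(n)
    avg_row, avg_col = np.zeros(m), np.zeros(n)
    for _ in range(n_rounds):
        x = w_row / w_row.sum()                 # row player's mixed strategy
        y = w_col / w_col.sum()                 # column player's mixed strategy
        avg_row += x
        avg_col += y
        w_row *= np.exp(eta * (a @ y))          # rows with higher payoff gain weight
        w_col *= np.exp(-eta * (a.T @ x))       # columns with lower loss gain weight
    return avg_row / n_rounds, avg_col / n_rounds

# Matching pennies: the equilibrium is uniform play with game value 0.
a = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = mwu_zero_sum(a)
print(np.round(x, 3), np.round(y, 3), round(x @ a @ y, 3))
```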
The second class consists of polynomial-time methods, namely interior-point methods (IPMs) and cutting-plane methods. Theoretically, the fastest algorithms in this class for general SDPs are cutting-plane methods, whereas IPMs are fastest for LPs and SOCPs. Unfortunately, quantum IPMs, which rely on QBLAS, suffer from the three problems listed above. Classical IPMs can be implemented with exact classical arithmetic in polynomial time with only logarithmic dependence on the matrix condition number; the classical sampling cost paid by quantum IPMs, which can only solve the Newton linear systems approximately, effectively offsets this good error dependence. It has been shown that, as a result, quantum IPMs can only solve the primal optimization problem approximately; resource estimates for simple combinatorial optimization problems suggest that quantum IPMs lack an advantage.
At present, it remains an open question whether quantum computing can bring advantages for structured convex programs in general (not only in finance). For problems that are well conditioned, sparse, or for which sufficient quantum memory is available, speedups may be achievable in practice; however, such problems can often already be solved quickly by classical computation.
2) Quantum methods for discrete optimization
Many optimization problems in finance require the solution to take values in discrete sets rather than continuous ones. Examples include some of the most common financial optimization problems, such as portfolio optimization. In many portfolio optimization problems, the optimal solution of a convex relaxation may suggest allocating fractional units of assets, whereas market constraints often require positions to be integers or multiples of fixed increments. Such use cases call for discrete optimization methods: integer programming, where the variables to be optimized are restricted to integers, or, more generally, mixed-integer programming (MIP), where only some of the variables are integer.
In asset portfolio optimization problems, where the typical value of the asset position is much larger than the minimum increment allowed, the approximate optimal solution obtained through relaxation is often acceptable with only minor modifications. This is often the case with equity portfolios, where the minimum holding is one share and the holdings are usually at least two orders of magnitude larger. However, fixed income and derivatives markets typically require larger unit transaction sizes, which makes the quality of approximate solutions to relaxation problems unsatisfactory. Therefore, these problems often require the use of discrete optimization techniques.
Branch and bound (B&B) methods are a powerful class of algorithms for solving difficult optimization problems such as MIPs while providing optimality certificates or optimality gaps; the optimality gap is the difference between the best known solution and the best known bound. B&B is the core algorithm of most commercial solvers and is therefore widely used by financial institutions.
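The branch-and-bound idea can be illustrated on a toy 0-1 selection problem (pick assets under a budget); the sketch below bounds each node with the greedy fractional relaxation and prunes branches that cannot beat the incumbent. The instance and function name are illustrative assumptions, not a production solver.

```python
import numpy as np

def branch_and_bound_knapsack(values, weights, capacity):
    """Toy branch and bound for 0-1 selection: bound each node with the
    fractional (relaxed) solution and prune branches whose bound cannot
    beat the incumbent. The gap between the incumbent and the largest
    outstanding bound is the optimality gap."""
    order = np.argsort(-np.asarray(values) / np.asarray(weights))  # by value density
    v, w = np.asarray(values)[order], np.asarray(weights)[order]

    def bound(i, value, room):
        # Greedy fractional relaxation from item i onward: an upper bound.
        for j in range(i, len(v)):
            if w[j] <= room:
                value, room = value + v[j], room - w[j]
            else:
                return value + v[j] * room / w[j]
        return value

    best = 0.0
    stack = [(0, 0.0, capacity)]              # (next item, value so far, remaining room)
    while stack:
        i, value, room = stack.pop()
        if i == len(v) or bound(i, value, room) <= best:
            best = max(best, value)           # leaf reached or branch pruned
            continue
        stack.append((i + 1, value, room))    # branch: exclude item i
        if w[i] <= room:
            stack.append((i + 1, value + v[i], room - w[i]))  # branch: include item i
    return best

print(branch_and_bound_knapsack([60, 100, 120, 30], [10, 20, 30, 8], 50))
```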
Quantum walk search (QWS) is a powerful framework for accelerating classical search algorithms and has been applied to optimization algorithms that are important in finance. Notably, the general quantum-walk framework also enables up to quadratically faster simulation of symmetric Markov chains, which may be applicable to simulating the stochastic processes arising in finance. Progress has been made in this direction with the incorporation of depth-first search heuristics and, more recently, a large class of practical heuristics. However, as with quantum MCI for stochastic modeling, it is not clear that a practical reduction in time-to-solution can be achieved once the resources required to solve the subproblems are taken into account.
Simulated annealing (SA) is a widely used Markov chain Monte Carlo (MCMC) method for combinatorial optimization. Quantum walks have been used to quadratically reduce the dependence of the asymptotic convergence time on the spectral gap for exact solutions. In general, however, it is not clear whether the quantum dependence on all parameters can beat, or even match, that of classical MCMC.
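For reference, the classical baseline is easy to sketch: the snippet below runs simulated annealing with single-bit Metropolis flips on a small QUBO, the kind of unconstrained binary problem quantum annealers and variational algorithms also target. The Q matrix, cooling schedule and function name are illustrative assumptions.

```python
import numpy as np

def simulated_annealing_qubo(q, n_sweeps=2000, t_start=2.0, t_end=0.01, seed=0):
    """Classical simulated annealing for minimizing x^T Q x over binary x.
    Single-bit flips are accepted with the Metropolis rule under a geometric
    cooling schedule."""
    rng = np.random.default_rng(seed)
    n = q.shape[0]
    x = rng.integers(0, 2, n)
    energy = x @ q @ x
    for k in range(n_sweeps):
        t = t_start * (t_end / t_start) ** (k / (n_sweeps - 1))   # cooling schedule
        i = rng.integers(n)
        x_new = x.copy()
        x_new[i] ^= 1                                             # flip one bit
        delta = x_new @ q @ x_new - energy
        if delta <= 0 or rng.random() < np.exp(-delta / t):       # Metropolis acceptance
            x, energy = x_new, energy + delta
    return x, energy

# Illustrative 4-variable QUBO (symmetric Q); off-diagonals couple the bits.
q = np.array([[-2.0, 1.0, 0.5, 0.0],
              [1.0, -1.0, 0.0, 0.5],
              [0.5, 0.0, -1.5, 1.0],
              [0.0, 0.5, 1.0, -2.5]])
print(simulated_annealing_qubo(q))
```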
Scientists have also developed a framework based on Grover's algorithm, a special case of QWS that uses global problem information for unstructured optimization. Although the query complexity of quantum unstructured search is quadratically lower than that of classical unstructured search, this approach is not as resource-efficient as quantum walk-based techniques, which rely on local rather than global information. In discrete optimization, uniform superpositions over unconstrained search spaces can be prepared with low gate complexity, and for NP optimization problems, oracles that check constraints and evaluate cost functions can be implemented with efficient gate complexity.
Finally, there are short-path algorithms, which use techniques from quantum walks and exploit problem-specific information to achieve speedups beyond Grover-based algorithms for certain discrete optimization problems.
In addition to quantum versions of various classical algorithms, there are quantum heuristics, including quantum annealing and variational quantum algorithms such as the quantum approximate optimization algorithm (QAOA), the variational quantum eigensolver (VQE), and VQS. These methods naturally handle unconstrained binary optimization problems but have also been applied to continuous optimization. At least when implemented on general-purpose gate-based quantum computers, the evolution under these quantum heuristics can be restricted so that it respects the problem's constraints.
Variational quantum algorithms are often hard to train: tuning the variational parameters and hyperparameters is itself an NP-hard problem in general. Moreover, for some initializations the parameter gradients can vanish even when the model is far from convergence, requiring an exponential number of samples to estimate them. On the other hand, convergence of VQE has been demonstrated in the over-parameterization regime, even when the gradients are exponentially small.
Because quantum annealers do not need to meet the fault-tolerance requirements of general-purpose devices, existing annealing hardware is comparatively large; even so, the overall benefit of quantum annealing remains to be determined. Quantum tunneling can penetrate tall, thin potential barriers, but it is not possible to know in advance whether the barriers arising in practical problems admit tunneling. For highly constrained financial optimization problems with several classes of variables, transforming the problem into an unconstrained quadratic binary optimization problem is often costly, especially when the number of qubits is limited. Nevertheless, thanks to the availability of relatively large quantum annealers, scientists have begun to experiment with applications such as portfolio optimization.
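The constraint-folding step mentioned above can be made concrete with a small example: a cardinality constraint ("select exactly b assets") is absorbed into a QUBO via a quadratic penalty. The data, the penalty weight and the brute-force check are illustrative assumptions.

```python
import numpy as np

# Folding the constraint "select exactly b of n assets" into a QUBO:
# minimize  -mu^T x + gamma * x^T Sigma x + penalty * (1^T x - b)^2  over x in {0,1}^n.
# Expanding the penalty (and using x_i^2 = x_i) gives a single Q matrix.
n, b = 4, 2
mu = np.array([0.08, 0.10, 0.12, 0.07])                 # illustrative expected returns
sigma = np.array([[0.10, 0.02, 0.04, 0.00],
                  [0.02, 0.12, 0.01, 0.03],
                  [0.04, 0.01, 0.15, 0.02],
                  [0.00, 0.03, 0.02, 0.08]])            # illustrative covariances
gamma, penalty = 1.0, 10.0                              # penalty must dominate the objective scale

q = gamma * sigma + penalty * np.ones((n, n))           # quadratic terms, incl. (1^T x)^2
q += np.diag(-mu - 2.0 * penalty * b * np.ones(n))      # linear terms on the diagonal
# The constant penalty * b**2 only shifts the energy and is dropped here.

# Brute-force check over all 2^n bitstrings: the minimizer satisfies 1^T x = b.
energies = {bits: np.array(bits) @ q @ np.array(bits) for bits in np.ndindex(2, 2, 2, 2)}
best = min(energies, key=energies.get)
print(best, round(energies[best] + penalty * b**2, 4), sum(best) == b)
```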
Finally, QAOA has some advantages over VQE and VQS: for example, it uses only two parameters per layer, and theoretical and numerical results show that optimized parameters can be transferred between problem instances. Moreover, in some cases the parameter optimization can be performed efficiently classically, using a quantum circuit simulator.
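The two-parameters-per-layer structure is easy to see in a classical statevector simulation of depth-1 QAOA for a tiny MaxCut instance, sketched below; the triangle graph, the grid search over (γ, β) and the three-qubit size are illustrative assumptions chosen so the simulation fits in a few lines.

```python
import numpy as np
from functools import reduce

# Depth-1 QAOA statevector simulation for MaxCut on a 3-node triangle graph.
# Only two parameters (gamma, beta) are optimized, here by a coarse grid search.
n = 3
edges = [(0, 1), (1, 2), (0, 2)]
bits = np.array([[(z >> q) & 1 for q in range(n)] for z in range(2**n)])
cost = np.array([sum(b[i] != b[j] for i, j in edges) for b in bits], dtype=float)

def qaoa_state(gamma, beta):
    psi = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)   # uniform superposition |+>^n
    psi *= np.exp(-1j * gamma * cost)                        # cost layer exp(-i*gamma*C)
    u1 = np.array([[np.cos(beta), -1j * np.sin(beta)],       # single-qubit exp(-i*beta*X)
                   [-1j * np.sin(beta), np.cos(beta)]])
    mixer = reduce(np.kron, [u1] * n)                        # same rotation on every qubit
    return mixer @ psi

best = max(
    ((np.abs(qaoa_state(g, b))**2 @ cost, g, b)
     for g in np.linspace(0, np.pi, 40)
     for b in np.linspace(0, np.pi, 40)),
    key=lambda t: t[0],
)
print("expected cut value:", round(best[0], 3), "optimal cut:", cost.max())
```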
3) Quantum methods for dynamic programming
In some optimization problems, the information needed to make subsequent decisions is only revealed after intermediate decisions have been made. In such cases, decisions must be made sequentially and the optimal policy must consider both current and future decisions. Solving these decision problems often requires dynamic programming, i.e., solving the problem recursively by reducing the main problem to a series of smaller, easier-to-solve subproblems.
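A compact illustration of this recursive structure is backward induction on a binomial tree for the American option problem mentioned above: at each node the option value is the maximum of immediate exercise and the discounted expected value of the two child nodes. The model and parameters are illustrative assumptions.

```python
import numpy as np

def american_put_binomial(s0, k, r, sigma, t, n_steps):
    """Dynamic programming by backward induction on a binomial tree:
    at each node the option value is the maximum of immediate exercise and
    the discounted expected value of continuing (the continuation value)."""
    dt = t / n_steps
    u = np.exp(sigma * np.sqrt(dt))                 # up factor
    d = 1 / u
    p = (np.exp(r * dt) - d) / (u - d)              # risk-neutral up probability
    disc = np.exp(-r * dt)

    # Terminal asset prices and payoffs at the last layer of the tree.
    j = np.arange(n_steps + 1)
    s = s0 * u**j * d**(n_steps - j)
    value = np.maximum(k - s, 0.0)

    # Step backwards: solve each smaller subproblem from its children.
    for step in range(n_steps - 1, -1, -1):
        j = np.arange(step + 1)
        s = s0 * u**j * d**(step - j)
        continuation = disc * (p * value[1:step + 2] + (1 - p) * value[:step + 1])
        value = np.maximum(k - s, continuation)     # exercise now vs continue
    return value[0]

print(round(american_put_binomial(100.0, 100.0, 0.05, 0.2, 1.0, 500), 3))
```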
Dynamic programming problems are also common in finance. Beyond American option pricing, another important financial application of dynamic programming is the structuring of collateralized mortgage obligations (CMOs), which bundle mortgages and redistribute their cash flows into multiple "tranches" that are paid sequentially. The issuer of a CMO is typically interested in the payment schedule (structure) of these tranches that minimizes its payment obligations to the CMO owners; this requires a recursive solution, because the optimal start and end of the kth tranche depend on the optimal schedule of the first k - 1 tranches.
While there are as yet no quantum solutions for CMO structuring and similar financial problems, quantum algorithms have been proposed that reduce the exponential scaling of exponential-time dynamic programming algorithms for various NP-hard problems.