IBM's new quantum algorithm speeds up random number generation!
Generating random numbers may seem easy, but it can be surprisingly difficult - especially if the probability distribution of the random numbers is very complex.
This is often the case in scientific research (e.g., when training neural networks). In such cases, researchers can use a technique that dates back to the earliest days of general-purpose computing: the Metropolis algorithm, first run on the groundbreaking MANIAC computer in 1953. Its modern generalization is known as the Markov chain Monte Carlo (MCMC) algorithm.
Writing in Nature on July 12, Layden et al., scientists at IBM Quantum, report a modern twist on this algorithm: they use a quantum computer to accelerate its performance.

The MCMC algorithm is a framework for generating random numbers according to a specified probability distribution, a task known as sampling. The framework has several variants, all of which generate samples iteratively; the steps are designed so that, after a sufficient number of iterations, the distribution of the samples matches the desired target distribution.
Each iteration of this process consists of two parts (a minimal classical sketch appears after the list):
- A proposal step, in which a new sample is proposed on the basis of the current sample;
- An accept-or-reject step, which either accepts the proposed sample as the next sample in the chain or rejects it, in which case the current sample is kept and the process repeats.
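
To make the two steps concrete, here is a minimal classical sketch in Python. The Ising-style energy function, the single-spin-flip proposal and all parameter values are illustrative assumptions chosen for this example, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ising_energy(x, J, h):
    """Energy of a spin configuration x (entries +/-1) with couplings J and fields h."""
    return -0.5 * x @ J @ x - h @ x

def metropolis_step(x, J, h, beta):
    """One MCMC iteration: propose a single spin flip, then accept or reject it."""
    # Proposal step: flip one randomly chosen spin of the current sample.
    proposal = x.copy()
    i = rng.integers(len(x))
    proposal[i] *= -1
    # Accept-or-reject step (Metropolis rule for a symmetric proposal).
    delta_e = ising_energy(proposal, J, h) - ising_energy(x, J, h)
    if rng.random() < np.exp(-beta * delta_e):
        return proposal   # accept: the proposal becomes the next sample
    return x              # reject: keep the current sample and try again

# Example: sample from the Boltzmann distribution of a small random Ising model.
n = 6
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
h = rng.normal(size=n)
x = rng.choice([-1, 1], size=n)
for _ in range(10_000):
    x = metropolis_step(x, J, h, beta=1.0)
print("final sample:", x)
```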


(Figure: the quantum-enhanced MCMC algorithm, which draws random numbers from a target probability distribution. Each iteration proposes a sample and then either accepts it as the next sample or rejects it; both steps are designed so that, after enough iterations, the samples follow the target distribution. The number of iterations this requires is called the convergence time, and it is smaller for the quantum-enhanced algorithm than for the traditional classical MCMC algorithm.)
Variants of the MCMC algorithm are distinguished by the strategies used in each of these steps. Most importantly, both steps must be constructed so that repeating them drives the distribution of the samples towards the desired target distribution. "How long this takes" is therefore a key property of any MCMC variant.
Does the process need to be repeated 1000 times before the samples are distributed according to the target distribution? Or a million?
The number of iterations required is called the convergence time, and it depends on the dimensionality of the random variable, that is, the number of bits needed to describe the sampled variable: the larger the dimension, the longer the convergence time. Unfortunately, for most MCMC variants in use today, the exact mathematical relationship between convergence time and dimension has not been established rigorously. This has not stopped people from using MCMC algorithms, however: practitioners tend to rely on empirical estimates and statistical diagnostics of convergence.
Layden et al. devised a variant of MCMC in which a quantum computer is used to generate samples in the proposal step. In each iteration, the current sample is encoded as a quantum state and a series of quantum operations is applied to it; measuring the resulting output state produces a new sample. This in itself is not unusual: in the proposal step of an MCMC algorithm, almost any procedure can be used to generate a new sample (including simply applying noise to the current sample).
However, the researchers also had to show that these steps converge to the target distribution, which is not guaranteed for an arbitrary proposal procedure. This leads to the key innovation of Layden and colleagues: they devised a set of quantum operations that guarantee convergence when the quantum proposal step is combined with a standard accept-or-reject step.
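
For intuition, here is a minimal, illustrative sketch of what such a quantum proposal step might look like, written with Qiskit and paired with the standard Metropolis accept-or-reject rule. The particular circuit (a symmetric Trotterized evolution mixing an Ising-type term with transverse-field rotations), all parameter values and the statevector-based sampling are assumptions made for this example; they are loosely inspired by, not taken from, the paper.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

rng = np.random.default_rng(2)

def quantum_proposal(bits, J, h, gamma=0.4, alpha=1.0, layers=2):
    """Propose a new bit string by evolving the current one under a Trotterized
    mixture of an Ising-type diagonal term and transverse-field rotations,
    then measuring in the computational basis (illustrative circuit only)."""
    n = len(bits)
    qc = QuantumCircuit(n)
    for i, b in enumerate(bits):           # encode the current sample as a basis state
        if b == 1:
            qc.x(i)
    for _ in range(layers):                # palindromic layer: RX half, diagonal, RX half
        for i in range(n):
            qc.rx(alpha, i)
        for i in range(n):
            for j in range(i + 1, n):
                qc.rzz(2 * gamma * J[i, j], i, j)
            qc.rz(2 * gamma * h[i], i)
        for i in range(n):
            qc.rx(alpha, i)
    outcome = Statevector(qc).sample_memory(1)[0]         # one computational-basis sample
    return np.array([int(c) for c in reversed(outcome)])  # undo Qiskit's bit ordering

def accept(bits_old, bits_new, J, h, beta):
    """Standard Metropolis accept-or-reject step for a symmetric proposal."""
    def energy(b):
        s = 1 - 2 * b                      # map bits {0,1} to spins {+1,-1}
        return -0.5 * s @ J @ s - h @ s
    return rng.random() < min(1.0, np.exp(-beta * (energy(bits_new) - energy(bits_old))))

# Example: a few quantum-proposed MCMC iterations on a 4-spin toy model.
n = 4
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
h = rng.normal(size=n)
x = rng.integers(0, 2, size=n)
for _ in range(20):
    proposal = quantum_proposal(x, J, h)
    if accept(x, proposal, J, h, beta=1.0):
        x = proposal
print("final sample:", x)
```

The circuit is arranged palindromically so that the probability of proposing x' from x equals that of proposing x from x'; this symmetry is what lets the simple Metropolis acceptance rule preserve the target distribution without ever computing the proposal probabilities themselves.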
The authors demonstrate their quantum-enhanced MCMC algorithm through a combination of numerical simulations and experiments on early quantum computing hardware. Their results show that the iterations converge to the target distribution, as predicted. More importantly, they also show that the convergence is faster than with several classical alternatives previously devised for the proposal step.
The actual rate of convergence is difficult to measure, and the authors measured it exactly only for problems of modest complexity, those whose target distributions are over variables that can be described by up to 10 bits; they also approximated the rate of convergence for target distributions over 20-bit variables.
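
For small problems, the convergence rate of a Markov chain can be quantified through the absolute spectral gap of its transition matrix: the convergence time scales roughly as the inverse of the gap (up to logarithmic factors), and computing the gap exactly requires diagonalizing a 2^n x 2^n matrix, which quickly becomes infeasible as the number of bits n grows. The sketch below does this for a toy classical Metropolis chain; the model and all parameters are illustrative assumptions, not the setup used in the paper.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def metropolis_transition_matrix(n, beta):
    """Exact transition matrix of a single-spin-flip Metropolis chain on n spins."""
    states = np.array(list(itertools.product([-1, 1], repeat=n)))
    J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
    h = rng.normal(size=n)
    energy = np.array([-0.5 * s @ J @ s - h @ s for s in states])
    P = np.zeros((len(states), len(states)))
    for a, s in enumerate(states):
        for i in range(n):
            t = s.copy(); t[i] *= -1
            b = np.flatnonzero((states == t).all(axis=1))[0]
            accept = min(1.0, np.exp(-beta * (energy[b] - energy[a])))
            P[a, b] = accept / n            # propose spin i with prob 1/n, then accept
        P[a, a] = 1.0 - P[a].sum()          # rejected proposals keep the chain in place
    return P

# Absolute spectral gap: convergence time scales roughly as 1/gap.
P = metropolis_transition_matrix(n=6, beta=1.0)
eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
gap = 1.0 - eigvals[1]
print(f"absolute spectral gap: {gap:.4f}  ->  convergence time ~ {1.0 / gap:.1f} steps")
```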
In all cases, they found convincing evidence that the quantum version of the MCMC algorithm converges faster than the classical versions. They determined empirically that this speedup is polynomial: the convergence time of the quantum-enhanced strategy is approximately the cube root of that of the classical strategy (so a problem that takes a classical chain about a million iterations to converge would take the quantum-enhanced chain only around a hundred).
Where does this speedup come from? As with most quantum enhancements, it is difficult to attribute it to any single feature of the quantum system. The numerical evidence provided by Layden et al. suggests that their chosen quantum operations strike a delicate balance between generating a diverse set of proposals and respecting the constraints imposed by the target probability distribution: a trade-off that is difficult to achieve with classical proposal strategies.

(Figures: average convergence-rate simulations; convergence-rate experiments; quantum-acceleration mechanisms.)

Despite the comprehensiveness of the work of Layden and colleagues, there are some limitations.
First, the convergence of the quantum-enhanced algorithm is proven only if the quantum operations are executed perfectly, that is, in the absence of any hardware noise. However, the experimental results show that the convergence rate is somewhat robust to noise, especially when the hardware noise can be randomized.
Second, the accelerated convergence has so far been observed only for small-scale problems and might disappear for larger-scale ones, especially in the presence of noise. If the authors' explanation of the origin of the speedup is correct, and if hardware noise can be suppressed at larger scales, the speedup is likely to persist - but this is far from certain at this stage.
Finally, although Layden et al. have shown that their quantum-enhanced algorithm converges faster than some common classical proposal strategies, they have not tested it against every MCMC variant. It is therefore possible that the gap could be closed by other classical proposal strategies, whether ones that already exist, ones yet to be devised, or even ones inspired by this work.
Despite these limitations, the research of Layden and colleagues provides an important and exciting application through which early, noise-prone quantum computers could deliver useful results, and it opens up many directions for fruitful future research.
Reference links:
[1] https://www.nature.com/articles/s41586-023-06095-4
[2] https://www.nature.com/articles/d41586-023-02176-6