ZJU, Tsinghua achieve groundbreaking results in superconducting quantum computers


Researchers have proposed many quantum algorithms to augment various AI tasks. With the rapid development of quantum-enhanced AI, a pressing fundamental question naturally arises: are quantum AI techniques trustworthy under various attacks?


Classical neural networks are susceptible to adversarial perturbations; for example, a stop sign with a small piece of graffiti may be misclassified as a yield sign. Recent theoretical work suggests that quantum neural networks are similarly vulnerable, which would pose serious problems for future applications of quantum machine learning in security scenarios. As a result, researchers have established the foundations of quantum adversarial machine learning.
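The stop-sign example can be made concrete with a minimal sketch of the fast gradient sign method (FGSM), the classic recipe for crafting such perturbations. The toy linear classifier and all numbers below are illustrative assumptions, not the models used in the paper.

```python
import numpy as np

# Toy linear "classifier": sign(w . x) decides the class.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.2])          # a legitimate input, classified as +1

def predict(x):
    return np.sign(w @ x)

# FGSM-style attack: step against the gradient of the score with respect
# to the input, bounded by epsilon in the L-infinity norm, so each pixel
# changes by at most epsilon (the "imperceptible" perturbation).
epsilon = 0.2
grad = w                                # gradient of w.x with respect to x
x_adv = x - epsilon * np.sign(grad)     # push the score toward the other class

print(predict(x), predict(x_adv))      # the label flips
```

Despite the tiny per-component change, the prediction flips, which is exactly the failure mode adversarial machine learning studies.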


However, experimentally demonstrating adversarial examples for quantum classifiers, and showing that the proposed defenses work in practice, has remained an open challenge. Now, a team led by Haohua Wang at Zhejiang University and Dong-Ling Deng at Tsinghua University has overcome these difficulties and reported the first experimental demonstration of quantum adversarial learning, using an array of 10 programmable superconducting qubits. The paper was published in Nature Computational Science on November 28 [1].


It is worth mentioning that Professor Haohua Wang of Zhejiang University received the Science Discovery Award in mathematical physics, which carries a prize of 3 million RMB, at the recent joint ceremony for the 3rd and 4th awards.



Haohua Wang, Professor and Doctoral Supervisor, School of Physics, Zhejiang University


In this work, by optimizing the device fabrication and control procedures, the team increased the average lifetime of the qubits to 150 μs, with average fidelities of single- and two-qubit gates exceeding 99.94% and 99.4%, respectively. This allowed them to implement large-scale quantum classifiers with different structures, circuit depths of up to 60, and more than 250 trainable variational parameters. They trained these classifiers on large real-world images (e.g., medical MRI scans) and high-dimensional quantum data (e.g., thermal and local quantum many-body states), obtaining the gradient vectors directly by measuring a set of observables on the hardware.
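The article does not spell out how gradients are measured on hardware. A standard way to obtain a gradient component of an expectation value is the parameter-shift rule, sketched below for a single RY rotation; this one-qubit circuit is an illustrative assumption, far simpler than the classifiers in the paper.

```python
import numpy as np

Z = np.diag([1.0, -1.0])  # the measured observable

def ry(theta):
    """Single-qubit Y rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expectation(theta):
    """<psi|Z|psi> for |psi> = RY(theta)|0>; analytically cos(theta)."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    return psi @ Z @ psi

def parameter_shift_grad(theta):
    # Gradient from two extra circuit evaluations at theta +/- pi/2:
    # the standard trick for measuring quantum gradients on hardware.
    return 0.5 * (expectation(theta + np.pi / 2) - expectation(theta - np.pi / 2))

theta = 0.7
print(parameter_shift_grad(theta), -np.sin(theta))  # the two values agree
```

Unlike finite differences, the shifted-circuit formula is exact for rotation gates, which is why it is well suited to noisy hardware estimates of each gradient component.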


After training, these classifiers achieve state-of-the-art performance on these datasets, with test accuracies of up to 99%. The team further demonstrated that, with adversarial training, the quantum classifiers become immune to adversarial perturbations generated by the same attack strategy.


01 | A 36-qubit processor


The team demonstrated a programmable quantum processor with 36 superconducting transmon qubits arranged on a 6×6 two-dimensional square lattice. The qubit layer and the control-wiring layer highlighted in Figure 1b are patterned on a sapphire (top) and a silicon (bottom) substrate, respectively, and the two chips are assembled in a flip-chip bonding process. The quantum classifier is built on a large-scale variational quantum circuit implemented on this processor.


They fed the generated adversarial examples into the quantum classifier to test its performance. The main ideas of quantum adversarial learning are sketched in Figure 1. Figure 1a shows a legitimate MRI (magnetic resonance imaging) scan of a cerebral hemisphere used in sclerosis diagnosis, together with its corresponding adversarial example, obtained by adding a small amount of carefully crafted perturbation to the original image. Figure 1c shows the classifier's predictions for the legitimate and adversarial samples: it correctly identifies the legitimate MRI scan as "malignant" (blue), yet misclassifies the adversarial example as "benign" (red) with high confidence, even though the two images differ only by an imperceptible perturbation.



Figure 1 Schematic diagram of experimental quantum adversarial learning


To demonstrate quantum adversarial learning, they chose a one-dimensional array of 10 qubits in the processor, with energy relaxation times T1 ranging from 131 to 173 μs at the frequencies where the qubits are initialized and operated. Single-qubit XY rotations were realized with 30 ns microwave pulses generated by a multi-channel arbitrary waveform generator (MOSTFIT MFAWG-08). The controlled-NOT (CNOT) gate is composed of a controlled-π-phase (CZ) gate plus single-qubit rotations. The 60 ns CZ gate is realized by carefully tuning the qubits' frequencies and coupling strength.
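The decomposition of a CNOT into a CZ plus single-qubit operations can be checked directly with matrices. The sketch below uses Hadamards on the target qubit as the single-qubit rotations, a standard textbook choice (the hardware of course applies calibrated microwave rotations, not literal matrix products).

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
I = np.eye(2)
CZ = np.diag([1, 1, 1, -1]).astype(float)      # controlled pi-phase gate

# Sandwiching the CZ between Hadamards on the target qubit turns the
# conditional phase flip (Z) into a conditional bit flip (X), i.e. a CNOT.
CNOT_from_CZ = np.kron(I, H) @ CZ @ np.kron(I, H)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

print(np.allclose(CNOT_from_CZ, CNOT))  # True
```

This is why a native CZ gate, as calibrated on the processor, suffices to build the CNOTs that appear in the classifier circuits.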



Figure 2 The qubits used in the experiments


Since, in the team's experimental sequences, single-qubit (two-qubit) gates are executed simultaneously on multiple qubits (qubit pairs), they performed simultaneous cross-entropy benchmarking (XEB) to characterize the gate performance, obtaining average Pauli errors of about 0.08% (0.72%).
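As a rough illustration of what cross-entropy benchmarking estimates (not the team's exact protocol), the linear XEB fidelity compares the device's output distribution against the ideal one; under global depolarizing noise it shrinks by exactly the depolarizing factor. The toy distribution below is made up for illustration.

```python
import numpy as np

def linear_xeb_fidelity(p_ideal, p_device):
    """Linear XEB fidelity F = D * sum_x p_device(x) * p_ideal(x) - 1.
    F equals D * sum(p_ideal**2) - 1 for a perfect device (close to 1 for
    the speckled output of random circuits) and 0 for a fully depolarized one."""
    D = len(p_ideal)
    return D * float(np.dot(p_device, p_ideal)) - 1.0

# Toy ideal output distribution of some 2-qubit circuit (D = 4).
p_ideal = np.array([0.5, 0.25, 0.125, 0.125])

# Depolarizing noise mixes the ideal distribution with the uniform one;
# the XEB fidelity then shrinks by exactly the factor (1 - eps).
eps = 0.3
p_noisy = (1 - eps) * p_ideal + eps / 4

f_perfect = linear_xeb_fidelity(p_ideal, p_ideal)
f_noisy = linear_xeb_fidelity(p_ideal, p_noisy)
print(f_perfect, f_noisy)
```

On hardware the device distribution is estimated from sampled bitstrings, and running the benchmark on all qubits simultaneously, as the team did, captures crosstalk that isolated gate tests would miss.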



Figure 3 Performance metrics of the 10 qubits (the units of ηj/2π in the second row of the table should be MHz)


02 | 99% accuracy in the quantum adversarial tests


In their experiments, the team focused on adversarial training and demonstrated its effectiveness in practice. They first generated an adversarial example for each legitimate sample and injected these into the training set, then retrained the quantum classifier on both the legitimate and the adversarial samples. Figure 4a plots the classifier's accuracy on the legitimate and adversarial MRI test sets as a function of the training epoch during adversarial training.
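The retraining procedure (generate adversarial examples against the current model, inject them into the training set, and retrain) can be sketched on a classical toy problem; the logistic-regression model below is a stand-in for the quantum classifier, and all data and hyperparameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data in 2D, labels y in {-1, +1}, classes centered at
# (-1.5, -1.5) and (+1.5, +1.5) with unit-variance Gaussian noise.
X = rng.normal(size=(200, 2)) + np.outer(np.repeat([-1.5, 1.5], 100), [1, 1])
y = np.repeat([-1.0, 1.0], 100)

w = np.zeros(2)
eps, lr = 0.4, 0.1   # attack budget and learning rate

def loss_grad(w, X, y):
    # Gradient of the logistic loss mean(log(1 + exp(-y * X @ w))) w.r.t. w.
    s = 1 / (1 + np.exp(y * (X @ w)))
    return -(X * (s * y)[:, None]).mean(axis=0)

for epoch in range(50):
    # FGSM adversarial copies of the training set against the current model,
    # then retrain on legitimate + adversarial samples together.
    X_adv = X - eps * y[:, None] * np.sign(w)[None, :]
    X_all = np.vstack([X, X_adv])
    y_all = np.concatenate([y, y])
    w -= lr * loss_grad(w, X_all, y_all)

# Accuracy on fresh adversarial examples crafted against the final model.
X_test_adv = X - eps * y[:, None] * np.sign(w)[None, :]
acc = np.mean(np.sign(X_test_adv @ w) == y)
print(round(acc, 3))
```

The robust accuracy stays high because the model has already seen worst-case perturbations during training, which mirrors the behavior reported for the quantum classifier in Figure 4a.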


They find that after about 25 epochs, the accuracies on both data sets increase and approach unity, indicating that the adversarially trained quantum classifier has become immune to the adversarial perturbations. As a concrete example, a randomly selected adversarial example is shown in Figure 4b (top panel). This image is misclassified by the original quantum classifier into the "breast" category, but after adversarial training it is correctly identified as "hand". This clearly shows that adversarial training can significantly enhance the robustness of the quantum classifier against adversarial perturbations.



Fig. 4 Experimental results of quantum adversarial training with MRI images. a. Accuracy on the legitimate and adversarial test data at each epoch of adversarial training. b. An adversarial sample image and the corresponding experimental outputs of the quantum classifier before and after adversarial training.


The authors conclude that their results not only reveal the vulnerability of quantum learning systems in adversarial scenarios, but also demonstrate that defense strategies against adversarial attacks are effective in practice, marking an important experimental step toward trustworthy quantum artificial intelligence. As the emerging field of quantum AI develops, these findings will prove useful for security-critical practical applications.


Reference:

[1] Experimental quantum adversarial learning with programmable superconducting qubits. Nature Computational Science (2022). https://www.nature.com/articles/s43588-022-00351-9

2022-11-30