What does it mean for HPC centers to be quantum ready?
A demonstration project is underway at the Quantum Integration Center of the Leibniz Supercomputing Center (LRZ), which is an integral part of EC-wide quantum development and a close collaborator in the regional work of the Munich Quantum Valley (MQV). Among other things, the Leibniz Center for Quantum Information is developing the Munich Quantum Software Stack, designed to run and manage quantum applications operating in a hybrid high-performance computing-quantum ecosystem.

For some time now, most quantum computer users have accessed quantum devices through web portals. That may still be the dominant model, but more and more quantum computer developers have recently begun offering in-house solutions: whether embedded in large HPC centers like Leibniz or in private institutions, as IBM has done at the Cleveland Clinic.
Integrating quantum computing into high-performance computing (HPC) centers is a topic of growing interest and urgency. As quantum computing matures, the question is no longer just about its theoretical capabilities, but also about its practical applicability in real-world computing environments. In fact, many organizations shopping for quantum computers are demanding that they be "HPC-ready," meaning that the quantum solutions should not only be powerful, but also work with existing HPC infrastructures.
But what does "HPC-ready" actually mean? The term encompasses a multitude of factors that make a quantum computer not only powerful but also compatible, reliable, and efficient within the HPC ecosystem. In this paper, we will unpack what it means for a quantum computer to be truly "HPC-ready," focusing on the physical attributes of a quantum computer, the software stack, the ability to execute hybrid (classical/quantum) algorithms, and the key operational functions of system monitoring and management.
Physical Properties
The physical dimensions of a quantum computer must match the existing infrastructure of a high-performance computing center. Unlike increasingly compact classical computers, some quantum computers can be quite bulky, so ensuring that the quantum hardware fits into its designated space is the first step toward HPC readiness.
Some quantum computers, such as those based on superconducting qubits, must operate at extremely low temperatures to maintain quantum coherence. This requires specialized cooling systems such as dilution refrigerators, which can be a logistical challenge: they must be integrated into the data center's existing cooling infrastructure, requiring careful planning and potentially major modifications.
One piece of good news is that the power consumption of quantum computers is low compared to high-end HPC resources. Today's quantum computing systems draw as little as 5 kilowatts and at most around 25 kilowatts, a small fraction of what a large classical supercomputer consumes.
Software Stack
Once the system can be physically installed and supported, it is time to focus on the software stack.
Application programming interfaces (APIs) and software development kits (SDKs) are critical for developers to integrate quantum computing capabilities into existing applications. These APIs and SDKs should be powerful, well-documented, and ideally standardized, so that quantum computers can be integrated into existing software environments in "plug and play" fashion. Since quantum computers are still a developing technology, there are not many experts in quantum computing software.
Therefore, sample programs and getting started guides are essential.
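In practice, the "plug and play" workflow looks roughly the same across vendors: build a circuit, submit it through the API, and collect the measurement counts. The sketch below illustrates that pattern with a hypothetical client; `Circuit`, `QuantumBackend`, and their methods are placeholders invented for illustration, not any real vendor's SDK.

```python
# Sketch of a typical SDK workflow against a hypothetical quantum backend.
# Circuit, QuantumBackend, and their methods are illustrative placeholders.

class Circuit:
    """Minimal stand-in for an SDK circuit object."""
    def __init__(self, num_qubits):
        self.num_qubits = num_qubits
        self.ops = []

    def h(self, qubit):             # Hadamard gate
        self.ops.append(("h", qubit))
        return self

    def cx(self, control, target):  # CNOT gate
        self.ops.append(("cx", control, target))
        return self

class QuantumBackend:
    """Stand-in client; a real SDK would talk to hardware or a simulator."""
    def submit(self, circuit, shots=1024):
        # A real backend returns a job handle to poll; here we fake the
        # ideal result of a Bell-state measurement.
        return {"00": shots // 2, "11": shots - shots // 2}

# Typical usage: prepare a Bell state and read out counts.
bell = Circuit(2).h(0).cx(0, 1)
counts = QuantumBackend().submit(bell, shots=1000)
print(counts)
```

A getting-started guide built around a small, complete example like this lowers the barrier for HPC developers who are new to quantum programming.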
Middleware is the glue between quantum computers and classical high-performance computing systems. It helps execute quantum algorithms, manage resources, and ensure that quantum and classical systems can communicate effectively. Middleware solutions must be compatible with existing HPC software stacks.
Many HPC centers use SLURM (Simple Linux Utility for Resource Management) as a powerful job scheduler and resource manager. SLURM's key features include job queuing and prioritization, resource allocation with node selection and reservation, and sophisticated workload management through job arrays and task distribution. SLURM also provides real-time monitoring, reporting, access control, and accounting capabilities. Since quantum computers will work alongside classical HPC systems, one way to improve efficiency is to use SLURM to distribute computational tasks between the HPC and quantum systems.
To optimize integration with such HPC environments, the quantum computer should also have a SLURM interface.
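With such an interface in place, a hybrid job could be submitted through an ordinary SLURM batch script that treats the quantum device as a schedulable resource. The sketch below generates such a script; the partition name `quantum` and the generic-resource string `qpu:1` are assumptions, since resource names are site-specific and depend on how the center configures SLURM.

```python
# Sketch: generating a SLURM batch script that requests a quantum resource.
# The partition name "quantum" and the GRES string "qpu:1" are illustrative;
# actual names depend on the center's SLURM configuration.

def make_hybrid_job_script(job_name, command, partition="quantum", qpus=1):
    """Return a SLURM batch script for a hybrid classical/quantum job."""
    return "\n".join([
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --partition={partition}",
        f"#SBATCH --gres=qpu:{qpus}",   # quantum device as a generic resource
        "#SBATCH --time=00:30:00",
        "#SBATCH --output=%x-%j.out",   # job name and ID in the log filename
        "",
        command,
    ])

script = make_hybrid_job_script("vqe-demo", "python run_vqe.py")
print(script)
```

The generated script would then be submitted with `sbatch` like any classical job, letting SLURM queue, account for, and monitor quantum work alongside the rest of the center's workload.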
Both quantum algorithms and quantum computers are complex, so it is important to have a flexible and open software stack that allows for fine-grained control of the algorithms, the quantum circuits that implement the algorithms, the optimizers used to improve the circuits, and the pulses that drive individual quantum bits.

Classical/Quantum Hybrid Algorithms
One of the most exciting developments in quantum computing is the rise of hybrid algorithms that utilize both classical and quantum resources. These algorithms typically use classical systems for preprocessing and post-processing tasks, while quantum computers handle computationally intensive core calculations. Being ready for high-performance computing means having the software infrastructure to efficiently support these hybrid algorithms.
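The canonical example is a variational loop: a classical optimizer proposes circuit parameters, the quantum device evaluates a cost function, and the loop repeats until convergence. The sketch below mimics that control flow in plain Python; `evaluate_on_qpu` is a stub standing in for circuit execution on hardware, since the point here is the classical/quantum division of labor rather than any particular algorithm.

```python
import math

# Sketch of the classical/quantum hybrid loop behind variational algorithms.
# evaluate_on_qpu is a stub standing in for running a parameterized circuit
# on a device; here it is a simple classical cost so the loop is runnable.

def evaluate_on_qpu(theta):
    """Placeholder for executing a parameterized circuit and measuring a cost."""
    return 1.0 - math.cos(theta)   # minimum at theta = 0

def hybrid_optimize(theta=2.0, lr=0.3, steps=50, eps=1e-4):
    """Classical gradient-descent loop around 'quantum' cost evaluations."""
    for _ in range(steps):
        # Finite-difference gradient: two extra "device" evaluations per step.
        grad = (evaluate_on_qpu(theta + eps) - evaluate_on_qpu(theta - eps)) / (2 * eps)
        theta -= lr * grad         # classical update of the circuit parameter
    return theta, evaluate_on_qpu(theta)

theta, cost = hybrid_optimize()
print(f"theta={theta:.3f}, cost={cost:.6f}")
```

Each iteration crosses the classical/quantum boundary several times, which is exactly why low-latency coupling between the two systems matters for performance.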
Part of this software infrastructure is a coordination layer that manages the workflow between classical and quantum computing, ensuring that tasks are assigned to the most appropriate computational resources. This layer also handles error correction and optimization, making the whole process more efficient and reliable.
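A minimal version of such a coordination layer is a dispatcher that inspects each task, routes it to the appropriate backend, and retries on the transient failures quantum hardware is prone to. The sketch below is illustrative only; the task format, backend stubs, and retry policy are assumptions, not a specific middleware product.

```python
# Sketch of a coordination layer routing tasks between classical and quantum
# backends, with a simple retry policy for transient quantum-side failures.
# The task format and backend stubs are illustrative assumptions.

def run_classical(task):
    return f"classical:{task['name']}"

def run_quantum(task, attempt):
    # Pretend the device fails on the first attempt to exercise the retry path.
    if attempt == 0 and task.get("flaky"):
        raise RuntimeError("calibration drift")
    return f"quantum:{task['name']}"

def dispatch(tasks, max_retries=2):
    """Route each task to the right backend; retry quantum tasks on failure."""
    results = []
    for task in tasks:
        if task["kind"] != "quantum":
            results.append(run_classical(task))
            continue
        for attempt in range(max_retries + 1):
            try:
                results.append(run_quantum(task, attempt))
                break
            except RuntimeError:
                if attempt == max_retries:
                    results.append(f"failed:{task['name']}")

    return results

jobs = [
    {"kind": "classical", "name": "preprocess"},
    {"kind": "quantum", "name": "sample", "flaky": True},
    {"kind": "classical", "name": "postprocess"},
]
print(dispatch(jobs))
```

A production coordination layer would add queueing, accounting, and calibration-aware scheduling on top of this routing core, but the division of responsibilities is the same.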
An interesting approach is to couple GPUs tightly to the quantum computer. While GPUs are already ubiquitous in HPC centers, adding high-speed, low-latency connectivity between quantum computers and dedicated GPU resources opens up new opportunities. GPUs can work in tandem with quantum computers on time-sensitive tasks such as error correction, and can also execute the classical portions of hybrid algorithms.
Monitoring and Management
Real-time monitoring tools are critical for keeping an eye on the health and performance of quantum computers. These tools should integrate seamlessly with existing monitoring solutions in HPC centers. They should provide insight into resource utilization, error rates, and other key performance indicators (KPIs).
Some of the KPIs commonly used in quantum computing environments include:
- Execution time or runtime: The time it takes for a quantum algorithm to run to completion is an important KPI. It can be compared with classical algorithms to measure the efficiency gains realized through quantum computing.
- Job success rate: Some quantum jobs fail, so it is important to track these failures, notify the user, and automatically restart the job if necessary. Quantum systems often require frequent automatic or manual calibration, and monitoring success rates helps determine when calibration is needed.
- Queue time: In high-performance computing environments, jobs often wait in a queue before they can execute. Monitoring queue times specific to quantum jobs can help optimize resource allocation strategies.
- Resource utilization: Just as in classical computing, it is critical to understand how computational resources are being used.
- System uptime: Continuous operation without unplanned interruptions is a key requirement in HPC environments. Uptime metrics help assess the reliability of quantum computers in the HPC ecosystem.
- User engagement metrics: Understanding how often and for what purposes quantum resources are accessed can provide valuable input for future resource planning and system improvements.
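Most of these KPIs can be derived from ordinary scheduler job records. As an illustration, the sketch below computes success rate, mean queue time, and mean runtime from a list of job dictionaries; the field names (`submitted`, `started`, `finished`, `status`) are assumptions about what an accounting log might provide, with times in seconds.

```python
# Sketch: deriving quantum-job KPIs from scheduler-style job records.
# The field names (submitted, started, finished, status) are assumptions
# about what an accounting log might contain; times are in seconds.

def compute_kpis(jobs):
    """Return success rate, mean queue time, and mean runtime for a job list."""
    succeeded = [j for j in jobs if j["status"] == "COMPLETED"]
    queue_times = [j["started"] - j["submitted"] for j in jobs]
    runtimes = [j["finished"] - j["started"] for j in succeeded]
    return {
        "success_rate": len(succeeded) / len(jobs),
        "mean_queue_time_s": sum(queue_times) / len(jobs),
        "mean_runtime_s": sum(runtimes) / len(succeeded) if succeeded else 0.0,
    }

log = [
    {"submitted": 0, "started": 30, "finished": 90, "status": "COMPLETED"},
    {"submitted": 0, "started": 10, "finished": 20, "status": "FAILED"},
    {"submitted": 0, "started": 20, "finished": 140, "status": "COMPLETED"},
]
print(compute_kpis(log))
```

Feeding such summaries into the center's existing dashboards is one way to make quantum KPIs visible alongside classical ones.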
Experienced HPC managers need to ensure that centers not only collect and own this data, but also use the right analytical tools to turn this raw data into actionable insights.
Summing Up and Looking Forward
Ultimately, making quantum computers ready for high-performance computing is not just a technological pursuit, but a transformative endeavor that has the potential to redefine the boundaries of computational science.
As we stand on the cusp of this new era, the roadmap to achieve HPC readiness is not only a guide, but also a testament to the spirit of innovation and collaboration. It is a call to action for quantum scientists, high-performance computing experts, and software developers to unite their respective expertise to advance computing technology.
The stakes are high, but the rewards - unlocking the untapped potential of quantum computing for real-world applications - could be game-changing.