
What HPC Centers Should Take Away from the MerLin Quantum Utility Webinar

Author – Xavier GEOFFRET, Business Development Manager, Quandela

This morning, I attended the webinar “Toward Quantum Utility in HPC & AI with MerLin,” presented by Jean Senellart and Samuel Horsch.

Full disclosure: I work at Quandela. Still, I joined the session with a specific lens: not as a quantum algorithm specialist, but with an HPC center perspective in mind.

The webinar was primarily targeted at data scientists, AI researchers, and quantum practitioners. HPC centers were not the main audience. And yet, I came away convinced that some of the most valuable insights were for HPC centers, especially those trying to understand their role in the emerging quantum landscape.

Before going further, one clarification: the discussion focused largely on Quantum Machine Learning (QML). Quantum computing is a much broader field, with applications in simulation, optimization, cryptography, and more. The reflections below should be read in that context, even if many of them likely extend beyond QML.

Here is my personal take.

Quantum will not enter HPC as a faster accelerator but as a new scientific instrument

HPC centers have historically integrated new technologies as accelerators: GPUs, FPGAs, and other specialized hardware, all aimed at speeding up existing workloads. Quantum does not fit that pattern. It behaves less like an accelerator and more like a new type of scientific instrument, enabling different computations rather than simply faster ones. If HPC centers approach quantum with a performance-first mindset, they risk missing where the real value lies.

If you are waiting for a “quantum speedup,” you are looking in the wrong direction

One quote from the webinar captured this well: “Quantum is not faster, it’s different.” This is not just semantics. Today, the real opportunity is not about outperforming classical systems at scale, but about exploring new computational regimes, especially in QML. For HPC centers, this shifts the question from when quantum will beat classical systems to what new classes of problems can already be explored.

Another implication, less discussed but equally important, is that quantum computing is fundamentally sampling-based. Instead of producing a single deterministic output, it generates distributions that must be statistically analyzed. For HPC centers, this is a shift in mindset: the challenge is not just scaling compute, but managing and interpreting large volumes of probabilistic results, closer to Monte Carlo workflows than traditional deterministic pipelines.
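
To make this concrete, here is a small illustrative Python sketch (not MerLin or Perceval code, and the shot data is synthetic) of the kind of post-processing this implies: raw measurement shots are aggregated into an empirical distribution, and an observable is estimated with a statistical error bar, much as in a Monte Carlo workflow.

```python
import numpy as np
from collections import Counter

# Synthetic raw samples: each shot is a measured output pattern (bit string).
# In a real run these would come from a QPU or a simulator backend.
rng = np.random.default_rng(seed=42)
shots = ["".join(map(str, rng.integers(0, 2, size=4))) for _ in range(10_000)]

# Aggregate shots into an empirical probability distribution.
counts = Counter(shots)
total = sum(counts.values())
distribution = {pattern: n / total for pattern, n in counts.items()}

# Estimate an observable (here: mean number of '1' outcomes per shot)
# together with its standard error, as in a Monte Carlo estimate.
values = np.array([pattern.count("1") for pattern in shots], dtype=float)
estimate = values.mean()
std_error = values.std(ddof=1) / np.sqrt(len(values))

print(f"{len(distribution)} distinct outcomes observed")
print(f"observable = {estimate:.3f} +/- {std_error:.3f}")
```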

The real bottleneck is increasingly shifting from hardware to integration into HPC workflows

The webinar made it clear that we are moving from a hardware problem to a systems problem. Quantum processors can already be accessed, simulated, and integrated. The real challenge is embedding them into existing AI and HPC workflows, and orchestrating workflows across QPU, GPU, and CPU resources. This is not a quantum problem. It is an HPC architecture problem.

Hybrid workflows are not a compromise, they are the actual product

There is still a tendency to think in terms of quantum versus classical, but that framing no longer holds. What is emerging instead is a model where classical machine learning pipelines remain dominant while quantum layers are introduced where they bring value. The question is no longer whether quantum can replace classical computing, but where it can add marginal value within an existing pipeline. For HPC centers, this should feel familiar, but it applies to a new computational paradigm.
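
As a rough sketch of what such a hybrid pipeline can look like, the PyTorch snippet below inserts a placeholder "quantum" layer into an otherwise classical model. The QuantumLayerStub class is hypothetical, a purely classical stand-in marking where a photonic QML layer would sit; it is not the MerLin API.

```python
import torch
import torch.nn as nn

class QuantumLayerStub(nn.Module):
    """Hypothetical placeholder for a quantum layer.

    In a real hybrid pipeline this module would encode its inputs into a
    parametrized quantum circuit, sample it, and return features derived
    from the measured distribution. Here it is a classical stand-in so the
    surrounding architecture is concrete and runnable.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.proj = nn.Linear(in_features, out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Bounded, nonlinear features standing in for measurement statistics.
        return torch.cos(self.proj(x))

# The pipeline stays overwhelmingly classical; the quantum layer is inserted
# only where it is expected to add value.
model = nn.Sequential(
    nn.Linear(64, 32),        # classical feature extraction
    nn.ReLU(),
    QuantumLayerStub(32, 8),  # quantum (or simulated) layer in the middle
    nn.Linear(8, 2),          # classical head
)

x = torch.randn(16, 64)
logits = model(x)
print(logits.shape)  # torch.Size([16, 2])
```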

Orchestration is becoming the most strategic layer in HPC architectures

If hybrid workflows are the future, then coordination across heterogeneous resources becomes critical. This includes managing data movement, scheduling execution, and ensuring efficient communication between classical and quantum systems. Frameworks such as MerLin are positioning themselves at this level by bridging AI, HPC, and quantum environments.
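
To illustrate the kind of coordination involved, here is a deliberately simplified Python sketch of a dispatcher that routes tasks to CPU, GPU, and QPU worker pools. The resource labels and the Task structure are hypothetical and not tied to MerLin, Slurm, or any real scheduler; a production orchestrator would also handle data movement, job batching, dependencies, and failures.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Task:
    name: str
    target: str              # "cpu", "gpu", or "qpu" -- hypothetical resource labels
    fn: Callable[[], Any]    # the work to run on that resource

def run_hybrid_workflow(tasks: list[Task]) -> dict[str, Any]:
    """Dispatch tasks to per-resource worker pools and collect their results.

    This only shows the routing skeleton; task dependencies, data transfer,
    and retry policies are deliberately left out.
    """
    pools = {kind: ThreadPoolExecutor(max_workers=1) for kind in ("cpu", "gpu", "qpu")}
    futures = {t.name: pools[t.target].submit(t.fn) for t in tasks}
    results = {name: f.result() for name, f in futures.items()}
    for pool in pools.values():
        pool.shutdown()
    return results

# Example: classical pre-processing, a (simulated) quantum sampling job,
# and a classical training step expressed as one workflow.
workflow = [
    Task("preprocess", "cpu", lambda: list(range(8))),
    Task("quantum_sampling", "qpu", lambda: {"0101": 512, "1010": 488}),
    Task("train_step", "gpu", lambda: 0.42),
]
print(run_hybrid_workflow(workflow))
```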

This becomes even more tangible with the push toward tighter integration with existing HPC ecosystems. The mention of ongoing work with NVIDIA, particularly around technologies such as NVQLink, highlights that efficient communication between QPU and GPU is not a secondary concern but a central architectural challenge. This is exactly the kind of system-level problem HPC centers are well positioned to tackle.

Simulation is not a fallback; it is where most quantum value is created today

One of the most important and often misunderstood points is that simulation is not just a step before hardware. It is a core part of the discovery process. In practice, most QML exploration happens through large-scale simulation, relying heavily on GPU acceleration and running naturally on HPC infrastructures. This means that today, not tomorrow, HPC centers are already central to quantum progress.

This is closely tied to what the speakers described as the simulability frontier: a regime where quantum systems are still simulable on HPC infrastructure, but already too complex to be explored exhaustively. This creates a very specific role for HPC centers supporting quantum: defining the boundary of what can still be understood classically before requiring hardware experiments.

The winning quantum use cases will not be designed, they will be discovered

There is a strong parallel with the evolution of AI and machine learning, where the biggest breakthroughs were not predicted in advance but emerged from experimentation at scale. Quandela’s approach reflects this by integrating quantum layers into existing machine learning workflows, running experiments, and identifying where quantum provides value. The goal is to find “sweet spots,” not to assume them. This is not a roadmap driven by theory, but a strategy driven by systematic exploration.

This is also where HPC centers bring more than infrastructure. Modern AI did not scale because of hardware alone, but because of methodologies: large-scale experimentation, hyperparameter exploration, reproducibility practices. These are native capabilities of HPC environments. In that sense, HPC is not just hosting quantum experimentation, it is importing the experimental discipline that quantum currently lacks.
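
As a minimal illustration of that discipline, the sketch below runs a fully enumerated, seeded hyperparameter sweep and records every configuration alongside its result. The run_experiment function is a hypothetical stand-in for training and evaluating a hybrid model; the point is the reproducible bookkeeping around it.

```python
import itertools
import json
import random

def run_experiment(n_layers: int, learning_rate: float, seed: int) -> float:
    """Hypothetical stand-in for one training run of a hybrid QML model.

    Returns a deterministic fake score; in practice this would train and
    evaluate a real (simulated or hardware-backed) model.
    """
    random.seed(f"{n_layers}-{learning_rate}-{seed}")
    return random.random()

# A reproducible sweep: every configuration is explicitly enumerated and
# seeded, and the full record is kept so results can be re-run and compared.
grid = {
    "n_layers": [1, 2, 4],
    "learning_rate": [1e-3, 1e-2],
    "seed": [0, 1, 2],
}
records = []
for n_layers, lr, seed in itertools.product(*grid.values()):
    score = run_experiment(n_layers, lr, seed)
    records.append({"n_layers": n_layers, "learning_rate": lr, "seed": seed, "score": score})

with open("sweep_results.json", "w") as f:
    json.dump(records, f, indent=2)
```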

Photonics is already showing promising signals for Quantum Machine Learning

One element that stood out is how photonic quantum computing is being used in practice for QML exploration. Even with a relatively small number of photons, it is already possible in some cases to compete with, and even outperform, classical machine learning models on reduced-scale problems. This does not yet translate into large-scale advantage, but it provides a meaningful signal. The real challenge now is to scale these approaches with larger hardware to determine whether this early promise can turn into meaningful quantum advantage.

Reproducing papers is not academic overhead, it is the fastest path to quantum maturity

One of the most striking initiatives presented was the effort to reproduce an increasing number of QML papers, with a clear trajectory toward large-scale coverage. This is not just about validation. Reproducing these papers requires adapting workflows to real hardware constraints, understanding limitations across platforms, and building comparable benchmarks. In other words, it turns fragmented research into operational knowledge.

HPC centers should see reproducibility as a new service they provide

A new workflow is emerging where published results are first reproduced, then extended through simulation, and finally validated on real hardware. This aligns directly with the core mission of HPC centers, which is to enable science, ensure reproducibility, and provide scalable environments for experimentation. Quantum does not change that mission, it reinforces it.

The real competition is not hardware; it is who builds the most active scientific ecosystem

Quantum adoption will not be driven by machines alone but by communities that reproduce and extend results, share benchmarks, and collaborate through open frameworks such as MerLin and Perceval. The real question is not who has the best quantum processor, but where meaningful experiments are actually happening.

If I had to summarize in one sentence, quantum computing is not entering HPC as a new accelerator but as a new experimental discipline. A discipline where HPC defines both the methodology and the operational limits of what can be explored before hardware takes over. HPC centers can either wait for this discipline to mature, or actively shape how it evolves. From what I saw in this webinar, those who engage early will not just adopt quantum, they will help define what quantum utility actually means.
