Quantum machine learning will not scale on claims alone. It must be benchmarked, reproducible, hardware-aware, and useful at every stage. With the release of MerLin 0.3 and our accompanying scientific publication, Quandela introduces a discovery engine for photonic and hybrid quantum machine learning. Built on optimized strong simulation of linear optics and native PyTorch integration, MerLin enables systematic, proof-driven exploration of where quantum structure creates measurable value — and where it does not. Utility now. Scale next.
From isolated demonstrations to systematic discovery
Quantum machine learning (QML) has produced compelling ideas. But the field still faces a structural challenge.
Results are often:
- Difficult to reproduce
- Evaluated under heterogeneous assumptions
- Detached from hardware constraints
- Reported without the standardized baselines that are routine in the broader AI community
If quantum computing is to become deployable — not just demonstrable — QML must evolve from isolated experiments to systematic benchmarking.
At Quandela, we believe in utility at every stage.
That principle guides both our hardware roadmap and our software ecosystem.
MerLin was built to bring that same discipline to quantum machine learning.
What is MerLin?
MerLin, introduced in our paper MerLin: A Discovery Engine for Photonic and Hybrid Quantum Machine Learning, is an open-source framework designed to:
- Integrate photonic quantum circuits directly into modern ML workflows
- Enable scalable, differentiable training of quantum layers
- Provide reproducible benchmarking pipelines
- Remain hardware-aware by design
It is built on Perceval, Quandela’s photonic quantum SDK, and leverages optimized Strong Linear Optical Simulation (SLOS) for exact simulation within the tractable regime.
Unlike fragmented tooling ecosystems, MerLin connects:
- Photonic circuit design
- PyTorch-native model training
- Benchmark-driven evaluation
- Cloud and hardware execution pathways
- Translation of gate-based models to photonic models
The objective is simple: turn photonic QML into an engineering discipline.
Differentiable photonic quantum layers
At the core of MerLin lies the QuantumLayer, a PyTorch-compatible module representing a parametrized linear-optical circuit.
Users can:
- Construct and manipulate photonic circuits using high-level primitives, without needing to understand the underlying physics
- Encode classical data (angle or amplitude encoding)
- Select measurement strategies (probabilities, expectations, amplitudes)
- Train parameters through automatic differentiation
MerLin accelerates simulation by precomputing sparse photon-number transition graphs. During optimization, only unitary-dependent coefficients are updated.
The result: photonic quantum circuits become trainable components within standard AI pipelines — without sacrificing physical realism.
This reflects a broader Quandela principle: Quantum must integrate into real computing environments, not remain isolated from them.
import torch
import torch.nn.functional as F

# CircuitBuilder, QuantumLayer and MeasurementStrategy are MerLin building blocks
# (import path assumed here; adjust to the MerLin version you have installed)
from merlin import CircuitBuilder, QuantumLayer, MeasurementStrategy

builder = CircuitBuilder(n_modes=6)
builder.add_entangling_layer(trainable=True, name="U1")
builder.add_angle_encoding(modes=list(range(6)))
builder.add_entangling_layer(trainable=True, name="U2")

layer = QuantumLayer(
    builder=builder,
    n_photons=3,  # 3 photons evenly distributed on 6 modes
    measurement_strategy=MeasurementStrategy.probs(),
)

# X_train, y_train, lr and epochs are assumed to come from your own data pipeline
optimizer = torch.optim.Adam(layer.parameters(), lr=lr)
for _ in range(epochs):
    layer.train()
    optimizer.zero_grad()
    probs = layer(X_train)
    loss = F.cross_entropy(probs, y_train)
    loss.backward()
    optimizer.step()
# Your layer is now trained!
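The snippet above shows the training API; the sketch below illustrates, conceptually, the caching idea mentioned earlier. In strong simulation of linear optics, output probabilities are permanents of submatrices of the mode unitary. The combinatorial structure (which Fock states exist, and which rows and columns of the unitary enter each permanent) is fixed by the number of modes and photons and can be precomputed once; only the permanent values themselves depend on the trainable unitary and must be recomputed each step. This is a minimal, self-contained illustration, not MerLin's optimized SLOS implementation, and all names here are illustrative.

import itertools
import math
import torch

def fock_states(n_modes, n_photons):
    # All ways to place n_photons indistinguishable photons in n_modes modes.
    # Depends only on (n_modes, n_photons): precompute once, reuse every step.
    if n_modes == 1:
        return [(n_photons,)]
    return [(k,) + rest
            for k in range(n_photons + 1)
            for rest in fock_states(n_modes - 1, n_photons - k)]

def permanent(M):
    # Naive permanent via permutation expansion (fine for small photon numbers).
    n = M.shape[0]
    total = torch.zeros((), dtype=M.dtype)
    for perm in itertools.permutations(range(n)):
        term = torch.ones((), dtype=M.dtype)
        for i, j in enumerate(perm):
            term = term * M[i, j]
        total = total + term
    return total

def output_probs(U, input_state, outputs):
    # For each (precomputed) output configuration, build the submatrix of U whose
    # rows/columns repeat mode indices according to photon occupations, then take
    # |permanent|^2 with the usual bosonic normalisation.
    in_cols = [m for m, n in enumerate(input_state) for _ in range(n)]
    probs = []
    for out in outputs:
        out_rows = [m for m, n in enumerate(out) for _ in range(n)]
        sub = U[out_rows][:, in_cols]
        norm = math.prod(math.factorial(n) for n in input_state) \
             * math.prod(math.factorial(n) for n in out)
        probs.append(permanent(sub).abs() ** 2 / norm)
    return torch.stack(probs)

# Precomputed once: the output basis for 2 photons in 4 modes.
outputs = fock_states(n_modes=4, n_photons=2)
# Recomputed per step: probabilities for the current (here, random) mode unitary.
U = torch.linalg.qr(torch.randn(4, 4, dtype=torch.cfloat))[0]
print(output_probs(U, (1, 1, 0, 0), outputs).sum())  # ≈ 1.0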
Bridging photonic and qubit paradigms
MerLin is not limited to photon-native architectures.
Through a QuantumBridge abstraction, qubit-based circuits can be mapped into photonic encodings (e.g., dual-rail representations). This enables:
- Cross-paradigm architectural comparison
- Hardware-relevant translation of gate-based QML models
- Controlled benchmarking under unified conditions
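As a minimal illustration of what a dual-rail translation means at the lowest level (hand-written pedagogy, not the QuantumBridge API): a qubit is carried by a single photon shared between a pair of modes, and in that single-photon subspace a 2x2 qubit unitary acts directly as a 2x2 mode unitary.

import torch

# Dual-rail encoding (illustrative only):
# |0>_qubit -> photon in the first mode of the pair, |1>_qubit -> photon in the second.
def dual_rail_state(alpha, beta):
    # The qubit state alpha|0> + beta|1> becomes the single-photon state
    # alpha|1,0> + beta|0,1> over two optical modes.
    return torch.tensor([alpha, beta], dtype=torch.cfloat)

# In the single-photon subspace, a 2x2 qubit gate is exactly a 2x2 mode unitary,
# e.g. realized in hardware by a beamsplitter plus phase shifters.
H = torch.tensor([[1, 1], [1, -1]], dtype=torch.cfloat) / 2 ** 0.5  # Hadamard
psi = dual_rail_state(1.0, 0.0)  # encodes |0>
print(H @ psi)                   # (|1,0> + |0,1>) / sqrt(2)

Single-qubit gates translate this directly; multi-qubit entangling gates are more involved in linear optics (they typically require ancillary photons and post-selection), which is precisely the kind of translation detail a bridge abstraction is there to manage.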
Instead of debating abstractions, MerLin enables measurement.
This is aligned with our differentiation: Quantum that works — deployable, efficient, and built for scale.
Benchmarking 18 state-of-the-art models
Alongside the framework, MerLin ships with reproductions of 18 state-of-the-art photonic and hybrid QML papers, released as modular, reusable experiments.
This matters because QML results are still frequently reported under heterogeneous assumptions, with limited baselines and inconsistent reproducibility. MerLin’s reproduced-paper collection provides a shared empirical starting point — and a practical way to compare architectures under unified conditions.
Crucially, MerLin also supports cross-paradigm benchmarking: gate-based models can be translated into photonic encodings via the QuantumBridge, enabling controlled comparison between modalities.
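As an illustration of what "unified conditions" means in practice, here is a minimal benchmarking harness written in plain PyTorch (illustrative only, not MerLin's benchmark API): every candidate model, quantum or classical, sees the same data split, the same loss, the same optimizer settings, and the same seed, so that the only thing that varies is the architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative benchmarking harness (not MerLin's benchmark API).
def run_benchmark(models, X_train, y_train, X_test, y_test,
                  epochs=50, lr=1e-2, seed=0):
    results = {}
    for name, build_model in models.items():
        torch.manual_seed(seed)                       # identical initial conditions
        model = build_model()
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):                       # identical training budget
            model.train()
            opt.zero_grad()
            loss = F.cross_entropy(model(X_train), y_train)
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():                         # identical evaluation protocol
            acc = (model(X_test).argmax(dim=1) == y_test).float().mean().item()
        results[name] = acc
    return results

# A classical baseline next to a (hypothetical) QuantumLayer-based classifier.
models = {
    "mlp_baseline": lambda: nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3)),
    # "photonic": lambda: build_photonic_classifier(),  # hypothetical helper
}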
Quantum software should not drift away from hardware realities. It should prepare for them.
Key technical insights
Our reproduction campaign revealed several structural observations:
- Photon-native implementations can reproduce behaviors of gate-based architectures.
- Encoding strategy significantly impacts robustness and expressivity.
- Simulation efficiency gains enable larger-scale empirical exploration.
- Not all reported quantum advantages remain under standardized baselines.
These findings reinforce a simple idea:
Progress requires measurement.
Why MerLin matters
Quandela’s purpose is to deliver photonic quantum systems that create value now and scale credibly toward fault tolerance.
MerLin extends that philosophy into quantum machine learning. It provides:
- A reproducible foundation for photonic QML
- A bridge between classical AI and quantum hardware
- A benchmarking culture grounded in evidence
- An open ecosystem for researchers and developers
Quantum computing will not mature through superlatives. It will mature through deployable systems, measurable progress, and disciplined engineering.
MerLin 0.3 is one step in that direction.
References
- C. Notton et al., MerLin: A Discovery Engine for Photonic and Hybrid Quantum Machine Learning, arXiv:2602.11092 (2026).
- MerLin GitHub Repository — Release 0.3
- Reproduced paper GitHub Repository

