Quandela Tech Report: Certified Quantum Random Numbers on a Small Optical Chip

Article by Jason Mueller, PhD

Part 1: What is randomness?

What is randomness? And how can we generate it? Both questions — the first mathematical, the second technological — have profound implications in many of today’s industries and our everyday lives.

Imagine that we wanted to make a random list of 0s and 1s. This list could serve as a key to protect your medical records, ensure that a lottery is trustworthy, or encrypt a digital letter to your long-lost half-step-brother-in-law. To make this list, maybe you use the classic coin-flip method: write down a 1 every time the coin lands heads-up and a 0 otherwise. Or maybe instead, you time the decay of the radioactive cesium that you stole from the lab. Is one of these methods better than the other; does it even matter? Yes, on both counts.

Let’s explore the principles of randomness together, and some new results from Quandela that generate certifiable randomness according to the laws of quantum mechanics.

Part 2: Generating randomness

We’ve already talked about two Random-Number Generators (RNGs), the coin flip and the radioactive cesium, both of which are examples of true RNGs. That is, they rely on unpredictable (or, rather, not-easily-predictable) physical processes to make their lists of 0s and 1s. But there are many kinds of RNGs, which you can see in Fig. 1.

Fig. 1: Various classes of random number generators discussed in this article.

The kind programmers are probably most familiar with is the pseudo-RNG. Here, you give a random seed number to a mathematical algorithm, which uses that number to deterministically generate a longer list of numbers that look random. While this is a convenient way to generate random numbers, there is a big caveat: if someone ever gained access to your seed numbers, past or present, they could replicate all of your numbers for themselves!
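To make the caveat concrete, below is a minimal sketch of one of the simplest pseudo-RNG algorithms, a linear congruential generator (the constants are the well-known Numerical Recipes parameters; this is an illustration, not a recommendation for real use):

```python
def lcg_bits(seed: int, n: int) -> list[int]:
    # Generate n pseudo-random bits with a linear congruential generator.
    state = seed
    bits = []
    for _ in range(n):
        # Fully deterministic update (Numerical Recipes parameters).
        state = (1664525 * state + 1013904223) % 2**32
        bits.append(state >> 31)  # take the top bit as the "random" bit
    return bits

print(lcg_bits(seed=42, n=16))
print(lcg_bits(seed=42, n=16))  # identical list: the seed reveals everything
```

Run it twice with the same seed and you get the same “random” list, which is precisely the weakness described above.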

Then you have true RNGs, of which there are two main kinds: classical and quantum. These are based on unpredictable physical processes. A simple example of a classical true RNG is the coin flip, but many more complex methods are used in modern electronics, for example measuring the thermal noise from a resistor. The Achilles’ heel of classical true RNGs is that while the details of the generating process may be unknown, they are still in principle knowable. What are the implications of this? Well, if a bad guy wants to know your random numbers, they can always build a better model of your classical system to improve their predictions. Your classical RNG must therefore be made ever more complex.

Quantum true RNGs can do better. Rather than watching our stolen cesium beta-decay, we’ll take an example from quantum optics (see Fig. 2). Here, a train of single photons is shone onto a semi-transparent mirror, which has two outputs. Because quantum mechanics is inherently unpredictable, each single photon will randomly take one of these two paths, after which it is registered by one of two detectors, corresponding to 0 or 1. This looks good at first sight, but be careful: in an ideal world, where all of the photons really are single, the beam splitter doesn’t absorb any photons, and the detectors don’t have any noise or loss, we really do get independent and identically distributed random numbers. But this is not an ideal world, and so we call this kind of quantum RNG device-dependent, because we must trust the physical engineering of the RNG. Still, quantum RNGs are a step in the right direction, because we do not have to keep making them more complex as in the classical case.

Fig. 2: An example from quantum optics of true random number generation. Single photons are shone onto a semi-transparent mirror, and they will randomly go to detector 0 or 1.
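If you prefer code to optics, here is a toy model of the ideal beam-splitter RNG. Note the irony: a classical computer can only stand in for quantum indeterminacy with a pseudo-random sampler, which is exactly why a simulation can never replace the physical device.

```python
import random

def ideal_beamsplitter_bits(n: int) -> list[int]:
    # Ideal model: each single photon exits towards detector 0 or 1
    # with probability 1/2, independently of every other photon.
    return [random.randint(0, 1) for _ in range(n)]

print(ideal_beamsplitter_bits(16))
```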

Now, we wouldn’t have given the previous example a fancy name like device-dependent quantum true random number generator if there wasn’t also such a thing as device-independent quantum RNGs — the holy grail of RNGs. The latter devices can generate and validate their own random numbers: they are certifiably random, independent of the underlying hardware. But how can we make such a thing? We use entanglement.

Essentially, if Alice and Bob share two entangled particles, and Alice measures her particle, then the state of Bob’s particle is instantly changed according to Alice’s measurement, no matter how far apart they are. This is a fundamentally quantum effect which cannot be described classically. But we have to be careful that Alice and Bob aren’t cheating. If Alice and Bob are physically too close, then we don’t know if they’re secretly sharing information to coordinate their measurements. When they are separated far enough that no signal could coordinate their outcomes, the correlations they observe are called nonlocal.

So, if Alice and Bob create a pair of entangled particles, then randomly choose how to measure them, the outcomes of their measurements are quantum-certified random so long as they satisfy the nonlocality condition (and a couple of other secondary conditions which are beyond the scope of this article; see Quandela’s recent publication on quantum RNGs).

Because of nonlocality, these kinds of random-number experiments tend to be big so that the measurements are far, far apart. In a world which is both obsessed with small, scalable devices and also maximum security, how can one reconcile this paradox? This is exactly the problem that researchers at Quandela have solved.

Part 3: Quandela’s contributions

They ask:

‘How can we certify randomness generation in a device-independent way with a practical small-scale device, where [Alice and Bob could cheat thanks to communication] between the physical components of the device?’

Put another way, when you have a small device that might normally allow Alice and Bob to cheat by communicating information before the other one measures their particle, can you account for this local communication somehow and still generate certified random numbers? Quandela’s protocol measures the amount of information that an eavesdropper could potentially use to fake a violation of nonlocality, and then sets a bound on how well the device must perform if it is to still produce certified random numbers. The device also periodically tests itself to validate these numbers.

And not only has Quandela derived the theory, but they’ve also demonstrated it experimentally on a two-qubit photonic chip using Quandela’s patented single-photon quantum dot source, shown in Fig. 3.

Fig. 3: Quandela’s two-qubit random number generator.

If this all sounds like a big deal, that’s because it is — the technical details are all found in the recent publication, with a patent on its way!

This achievement represents a major step towards building real-world, usable quantum-certified random number generators, one more tool in Quandela’s arsenal of quantum technologies.

If you are interested to find out more about our technology, our solutions, or job opportunities, visit quandela.com.

Single-Photon Sources (Part 3)

Article by Sebastien Boissier

Deterministic sources by Quandela

This is part three of our series on single-photon sources. Building on our previous posts, we finally have the necessary context to discuss Quandela’s core technology and how it will accelerate the development of quantum-photonic technologies towards useful applications.

Atoms

Even though heralded photon-pair sources are a well-known technology, they do seem to be a bit of a work-around for our primary goal of generating pure single photons. The fact that they are inherently spontaneous is an issue, and we would prefer to avoid multiplexing, which is resource-hungry. OK then, is there another approach?

Atom-light interaction

An elegant idea comes from our knowledge of how light interacts with single quantum objects such as atoms. In this context, what we really mean by a “single quantum object” are the quantised energy levels of a bound charged particle. In a hydrogen atom for example, the electron is trapped orbiting around the much-heavier proton and it can only exist in one of a discrete set of quantum states (or in a superposition of them).

Quantum states of the electron in a hydrogen atom. The electron is currently in the ground state.

When we shine a laser on an atom, we can drive transitions between two states. For example, the electron can be promoted from its ground state to a higher-energy excited state. In general, there are certain constraints that the pair of states must satisfy for transitions to occur under illumination. These are broadly called selection rules and correspond to various conservation laws that the interaction must obey.

In our case, we are interested in dipole-allowed transitions which result in strong atom-light interaction. To drive an electric dipole transition, the laser must be resonant (or near-resonant) with the energy difference separating the two states. This means that we have to tune the angular frequency of the laser ωₗₐₛₑᵣ such that the photon energy ħωₗₐₛₑᵣ is equal (or near-equal) to the energy difference between the two states.

Rabi cycles, named after Nobel-prize winner Isidor Rabi.

When this is done, the electron starts to oscillate between the ground and excited levels, passing through quantum superpositions of the two states along the way. This phenomenon is called Rabi flopping. If we control the exact duration of the interaction (i.e. the amount of time the laser is on) we can deterministically move the electron from the ground to the excited state, and vice-versa.
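For a resonant drive, the textbook result is that the excited-state population oscillates as P_e(t) = sin²(Ωt/2), where Ω is the Rabi frequency (proportional to the laser field amplitude). A quick numerical sketch, with an illustrative Rabi frequency:

```python
import math

def excited_population(rabi_freq: float, t: float) -> float:
    # Resonant Rabi flopping: P_e(t) = sin^2(Omega * t / 2).
    return math.sin(rabi_freq * t / 2) ** 2

omega = 2 * math.pi * 1e9   # illustrative Rabi frequency (rad/s)
t_pi = math.pi / omega      # a "pi-pulse" satisfies Omega * t = pi
print(excited_population(omega, t_pi))      # -> 1.0, fully excited
print(excited_population(omega, t_pi / 2))  # -> 0.5, equal superposition
```

The π-pulse is the deterministic ground-to-excited transfer mentioned above.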

Spontaneous emission

As with spontaneous parametric down-conversion (see part 2), the vacuum fields also play an important role here. When the electron is in the excited state, it doesn’t stay there forever. The presence of vacuum fluctuations causes the electron to decay back to the ground state in a process called spontaneous emission (in fact it’s a little more complicated than that, but this is a fine description for our purposes). Spontaneous decay of the excited state is not instantaneous, and typically follows an exponential decay with lifetime T (or, equivalently, decay rate Γ = 1/T).

Additionally, there’s another property of dipole-allowed transitions which is of great interest to us. As one might have guessed from the resonant nature of the interaction (i.e. energy conservation), the full transition of the electron from the ground state to the excited state (or vice-versa) is accompanied by the absorption (or emission) of a single photon from the laser. And more importantly, spontaneous decay from the excited state to the ground state always comes with the emission of a single photon.

Two-ingredient single-photon recipe.

So here we have a recipe to create a single photon. We first use a laser pulse to excite the electron to the excited state, and then we wait for the electron to decay back to the ground state by emitting a single photon.

Note however, that the laser pulse has to be short compared to the lifetime of the excited state. Re-excitation of the emitter can happen as soon as there is some probability that the electron is in the ground state. Therefore, if the laser pulse is still on when spontaneous emission has already started, there is a chance that the emitter will spontaneously emit, get re-excited and emit again. This has the undesirable effect of producing two photons within our laser pulse.

We can now see the major difference between the scheme based on spontaneous parametric down-conversion and the one based on spontaneous emission. With the latter, we can use short laser pulses to actively suppress the probability of multi-photon emission (and minimise the g⁽²⁾) without compromising on brightness.

The brightness is unaffected because the electron will always emit a single photon if it is completely promoted to its excited state. This is why sources based on such emitters are usually referred to as “deterministic” sources of single photons.

Collecting photons

Dipole radiation

So, can it be that simple? Well, there is one thing we haven’t considered yet: where the photons get emitted. And unfortunately for us, single-photons tend to leave the atom in every direction.

To be more precise, spontaneous emission (from a linearly-polarised dipole-allowed transition) follows the radiation pattern of an oscillating electric dipole, similar to that of a basic antenna. Recall that a radiating dipole is two opposite charges oscillating about each other on a fixed axis.

The fact that dipoles come up again here is not a coincidence. The rules of dipole-allowed transitions and dipole radiation stem from the same approximation that the wavelength of the emitted light is much greater than the physical size of the atom.

On the left, we show the electric field generated by an oscillating dipole oriented in the vertical direction. We see that, apart from the direction of the dipole, the field propagates rather isotropically, and there is no preferred direction in the horizontal plane. This is essentially because the system has rotational symmetry.

If we look far away from the dipole, we can calculate the optical intensity that is emitted in space for every direction. This graph gives us the probability per steradian that a single photon is emitted at every angle. The redder the colour, the more likely a photon will emerge in that direction.

It should be rather obvious why the above figure is bad for us. To collect all the photons into a light-guiding structure like an optical fibre, we would require an impossible system of lenses and mirrors to redirect all of the emission into the fibre.

And if we lose a large portion of the emitted photons, our deterministic source becomes a low-brightness one, limiting the scalability of our quantum protocols.

The Purcell effect

To help us collect more photons from atoms, we can rely on an amazing fact of optics: the lifetime and radiation pattern of an emitter depend on its surrounding dielectric environment, a phenomenon called the Purcell effect. In fact, the radiation pattern we have shown above is only true for an atom placed in a homogeneous medium.

You might think: “Well, that sounds trivial. If I put an optical element like a lens in front of the atom, I am guaranteed to change its emission pattern in some way”. And indeed you would be right, but here we are looking at something a little more subtle.

We mentioned above that spontaneous emission is caused by vacuum field fluctuations. It turns out we can engineer the strength of these fluctuations at the position of the atom to accelerate or slow down spontaneous emission in certain directions. To do this, we have to build structures around the atom which are on the scale of the emitted light’s wavelength (λ).

Let’s look at an example: the emitter placed in between two blocks of high refractive-index material (like gallium arsenide — GaAs).

The important parameter here is the Purcell factor: the ratio of the emitter’s decay rate inside the structure to its decay rate in the bulk medium (here, just air). As the distance between the blocks (d) decreases, the lifetime of the emitter varies widely (as quantified by the Purcell factor). What’s going on?

Optical modes

The magnitude of the vacuum field fluctuations depends on a couple of things. First, it depends on the number of solutions there are to Maxwell’s equations at the frequency of the emitter (the so-called density of optical modes). With an increasing number of available modes, there are more paths by which light can get emitted by the atom, and spontaneous emission is sped up.

Secondly, we have to consider how localised the electric field fluctuations of the modes are. An optical mode that is spread out in space does not induce strong vacuum fluctuations at any particular position, whereas a localised mode concentrates its vacuum field in the region where it is confined.

If one particular mode has larger local fluctuations than the others, the atom preferentially decays by emitting light into this mode (larger fluctuations translate to higher decay rates). The radiation profile is therefore changed, because the emission will mostly resemble the profile of the dominant mode.

This is what we are seeing in the above experiment. As the separation between the two blocks decreases, cavity modes come in and out of resonance with the emitter. Cavities are dielectric structures which trap light using an arrangement of mirrors. Here, the cavity is formed from the reflections off the high-index material causing the light to bounce back and forth in the air.

Cavity modes only appear at the frequency of the emitter when we can fit an approximately integer number of half-wavelengths between the blocks, creating a standing-wave pattern in their electric field profile.

In our example however, we only see those modes with an integer number of full-wavelengths. This is because the other modes do not induce any electric field fluctuations exactly at the middle point between the blocks, where we placed our emitter. There is a node in their standing wave pattern. Therefore, no spontaneous emission from the atom goes into these modes, and they don’t affect its lifetime.

In contrast, modes with an integer number of full-wavelengths between the mirrors have the maximum of their fluctuations at the centre (anti-node of the standing wave). Therefore, they strongly affect the lifetime of the emitter and its radiation pattern.

Trapping light

By trapping light with mirrors, we have seen how a cavity mode can induce strong vacuum fluctuations at the position of an atom, and therefore funnel its spontaneous emission in a desired direction. It is important to note that the confinement of cavity modes not only depends on how close the mirrors are to each other, but also on how long the light stays inside the cavity. A “leaky” cavity that does not trap light for a long time also does not induce strong vacuum fluctuations.
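Both ingredients are captured by the textbook expression for the Purcell factor of an ideally placed and oriented emitter resonant with a single cavity mode, F_P = (3/4π²) (λ/n)³ (Q/V), where Q measures how long the light is trapped and V is the mode volume. A small sketch with illustrative numbers:

```python
import math

def purcell_factor(wavelength: float, n: float, Q: float, V: float) -> float:
    # Textbook Purcell factor: F_P = (3 / (4 pi^2)) * (lambda / n)^3 * Q / V.
    return 3 / (4 * math.pi ** 2) * (wavelength / n) ** 3 * Q / V

lam, n = 925e-9, 3.5   # illustrative wavelength and refractive index
V = (lam / n) ** 3     # a tightly confined mode: one cubic (lambda / n)
print(f"F_P = {purcell_factor(lam, n, Q=1e4, V=V):.0f}")  # -> ~760
```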

The cavity we have studied above can only trap light in the vertical direction. We see that the modes have an infinite extent in the horizontal plane because of the continuous translational symmetry. In addition, the mirrors do a rather poor job even in the vertical direction: the reflectivity of the air-GaAs interface is only about 30%, meaning that on each bounce 70% of the light is lost upwards or downwards.

So first, we could use better mirrors. A great way to build a mirror is to alternate thin layers of high and low refractive-index materials. These so-called Bragg mirrors can be engineered to have extremely high reflectivity at chosen wavelengths. This is achieved by making sure that each layer fits a quarter-wavelength of light, so that the reflections from all the interfaces interfere constructively.
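As a quick sketch of the quarter-wave condition (the wavelength and refractive indices below are illustrative values for a GaAs/AlAs mirror near a typical quantum-dot emission wavelength):

```python
# Quarter-wave condition: each layer's optical thickness n * d equals
# lambda / 4, i.e. its physical thickness is d = lambda / (4 * n).
wavelength_nm = 925.0  # illustrative target (vacuum) wavelength
for material, n in [("GaAs", 3.5), ("AlAs", 2.9)]:
    d = wavelength_nm / (4 * n)
    print(f"{material}: n = {n}, layer thickness = {d:.1f} nm")
```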

(1) A planar cavity with no lateral confinement. (2) & (3) A micropillar cavity with its mode confined in all 3 dimensions.

Next, we have to confine the mode in 3D. One way to do that is to curve one (or both) of the Bragg mirrors to compensate for the diffraction of light inside the cavity. Our preferred method is to structure the stack of layers into micropillars. The high refractive-index contrast between the pillar and the surrounding air confines the light inside the pillar due to index-guiding (this is similar to the mechanism by which light is guided by an optical fibre).

By making these structures, we get cavity modes that are highly localised and that trap light for a long time. If an atom is placed at an anti-node of the cavity’s electric field, the probability of the emission going into the cavity mode will be much greater than that of the atom emitting in a random direction.

An emitter in free space emits photons everywhere. In a leaky cavity, the single-photons preferentially emerge in one direction.

The final piece of the puzzle is what happens to the single-photon once it has been emitted by the atom and is trapped inside the cavity. The Bragg mirrors are designed to keep the light in for a long time, so the photons will mostly leave the cavity from the sides in random directions. This defeats the point of having a cavity in the first place!

To design a good single-photon source, we have to make one mirror less reflective than the other (with fewer Bragg layers, for example) so that the photons preferentially leave the cavity through that mirror only. This is how we achieve near-perfect directionality of the source. Finally, by placing a fibre close to the output mirror, we can then collect the emitted single-photons with high probability.

Note, however, that this is a compromise. By reducing the reflectivity of one mirror, the mode of the cavity is not as long-lived, which reduces the vacuum-field fluctuations and therefore decreases the probability that the atom emits into the cavity mode in the first place. The structure has to be carefully engineered to strike the right balance.

Quantum dots

It is a hard technological challenge to build the microstructures for the efficient collection of photons. An even harder task is to place an atom at the exact spot where the vacuum-field fluctuations of the cavity mode are at their maximum. While this can be done with single atoms in a vacuum, this involves a very complex procedure and bulky ancillary equipment.

Trapping electrons

Another way to proceed is to find quantum emitters directly in the solid-state. In the last few decades, a number of ways have been found to isolate the quantum energy levels of single electrons inside materials. The approach that we are leveraging is that of quantum dots.

Transmission electron microscopy and electronic structure of quantum dots. At low temperatures, only states below the Fermi energy (Ef) are occupied.

Quantum dots are small islands of low-bandgap semiconductor material, surrounded by a higher bandgap semiconductor. Because of the difference in bandgap, some electronic states can only exist inside the low-bandgap material. The confinement of electronic states to a few nanometres creates discrete quantum states in a way that is very similar to the classic particle in a box model in quantum mechanics.

This arrangement gives us access to bound electronic states very much like an atom does. That is why quantum dots are sometimes referred to as artificial atoms in the solid-state.

Single-photons from the relaxation of an exciton in a quantum dot.

We can then use the same procedure described above to generate single-photons from these states. In this case, we use a pulsed laser to promote one electron from the highest occupied state in the valence band of the dot to the lowest unoccupied state in its conduction band.

Unlike in the hydrogen atom, the absence of an electron in the valence band is significant in its own right. This ‘hole’ acts as its own quasiparticle, and the effect of the laser pulse is to create a bound state of an electron and a hole, called an exciton (yet another quasiparticle). The exciton doesn’t live forever: the vacuum field fluctuations cause the electron to recombine with the hole, and in doing so a single photon is created.

Suppressing vibrations

The electronic levels are not completely isolated from the solid-state environment they are in. The most important source of noise is temperature: the thermal jiggling of the atoms in the semiconductor.

Temperature affects the quantum dots in a couple of ways. First, the presence of vibrations in the crystal lattice adds an uncertainty to the energy difference between the electronic levels (technically, the vibrations induce decoherence of the energy levels). This is detrimental to the indistinguishability of the emitted photons, which inherit this added uncertainty in their frequencies.

Lattice vibrations (phonons) affecting the process of single-photon emission.

The second effect is that the emitter can emit (or absorb) vibrations of the surrounding crystal during the photon emission process. Because the vibrations carry energy, the emitted photons will again have very different frequencies from one emission event to the other, reducing indistinguishability.

The solution to this is to cool the sample to cryogenic temperatures to get rid of the vibrations inside the material. For quantum dot emitters, cooling the samples to between 4 K and 8 K is sufficient to suppress the thermal noise. This is the realm of closed-cycle cryostats, which are much less demanding than the more complicated dilution refrigerators.

Suppressing the laser

Another important consideration for quantum emitters is how to separate the emitted photons from the laser pulses. If we cannot distinguish between the two, the single-photon output is simply drowned out by the laser noise.

Three-level system for polarization or frequency separation of the pump laser and the single-photons.

Typically, it’s very hard to completely avoid laser light reaching the output of the single-photon source. A good solution here is to use an additional energy level of our quantum emitter.

The key is to find two optical transitions which are separated in frequency (or addressed by orthogonal polarisations of light) such that we can filter out the laser (using optical filters or polarisers) from the single-photons at the output. The excited states must be connected in some way to efficiently transfer the electron to the extra state during the laser excitation.

With dots we have access to two different ways to do that. First, we can use the solid-state environment to our advantage by leveraging the coupling of the emitters to the vibrations of the crystal. By using non-resonant laser pulses, we can force the emission of an acoustic phonon to efficiently prepare an exciton.

The second option we have with quantum dots is to use two electronic states which interact with orthogonal polarizations of light. This is a particularly good method when combined with elliptical cavities.

Putting everything together

At Quandela we benefit from decades of fundamental research in quantum dot semiconductor technology. This allows us to bring all the elements together to fabricate bright sources of indistinguishable single-photons.

Importantly, we use a unique method (developed in the lab of our founders) to deterministically place single quantum-dots at the maximum of the vacuum-field fluctuations of micropillar cavities. In this way, we make the most out of the Purcell effect.

In the figure above, we show a 3D rendering of our single-photon sources. As discussed above, they consist of a quantum dot at the anti-node of a long-lived cavity mode which preferentially leaks through the top mirror towards an output fibre.

The ‘cross’ frame around the pillar cavity is there to make electrical connections to the top and bottom of the semiconductor stack. This allows us to control the electric environment of the dot and tune its emission wavelength via the Stark effect.

Scanning electron microscopy of single-photon sources by Quandela.

With the current generation of devices, we simultaneously achieve an in-fibre brightness of > 30%, a g⁽²⁾ < 0.05 and a HOM visibility > 90% (see part 1). With this state-of-the-art performance, we believe that large-scale quantum photonics applications are within reach.

A quick note on scalability

More complex applications in quantum photonics require multiple photons to arrive simultaneously on a chip. The obvious route here is to have multiple sources firing at the same time to provide single-photons in multiple input fibres.

While it is generally more difficult for remote sources to generate identical photons (i.e. with perfect HOM visibility), recent results suggest that the reproducibility of our fabrication process is the key to large-scale fabrication of identical sources.

Additionally, we can use active temporal-to-spatial demultiplexing to take advantage of the indistinguishability of the photons coming from the same source. This technique can be thought of as the inverse of the temporal multiplexing that we saw in part 2 for spontaneous pair-sources.

Starting from a train of single-photons, we switch and delay some of the photons to get several spatial inputs. With a demultiplexer, we reduce the repetition rate of our source to achieve synchronised inputs. The n-photon coincidence rate is then given by

Dₙ = (μ * B * 𝜂)ⁿ * R / n

where (as we defined in part 1) μ is the detection efficiency, B the source efficiency and 𝜂 the demultiplexer efficiency. R is the repetition rate of the laser pulses.
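As a rough illustration of how this rate scales (a sketch only; the parameter values below are placeholders, not measured specifications of any device):

```python
def n_photon_rate(n: int, R: float, B: float, mu: float, eta: float) -> float:
    # Each of the n photons must be emitted (B), survive the demultiplexer
    # (eta) and be detected (mu); the repetition rate drops from R to R / n.
    return (mu * B * eta) ** n * R / n

# Placeholder values: 80 MHz laser, 30% source efficiency,
# 90% detectors, 80% demultiplexer.
for n in (1, 2, 4, 8):
    print(n, f"{n_photon_rate(n, R=80e6, B=0.3, mu=0.9, eta=0.8):.3g} Hz")
```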

Conclusion

In this series, we have reviewed the state-of-the-art technology for producing single-photons on demand. We have explored in some detail the underlying principles of spontaneous-parametric-down-conversion and atom-like quantum emitters and highlighted their differences.

At Quandela, we feel that quantum-dot sources have great potential for the miniaturisation and scalability of optical-qubit generators. We are working hard to improve their performances and to make them accessible to a wider academic and industrial audience.

If you would like to know more about our technology, please email contact@quandela.com or visit our website https://quandela.com/ . If you would like to join our mission to build the world’s brightest sources of optical qubits, please have a look at our job openings at https://apply.workable.com/quandela/ .

Single-Photon Sources (Part 2)

Article by Sebastien Boissier

Heralded Sources and Multiplexing

Welcome to part two of our series on single-photon sources. Here, we will discuss one possible route for single-photon generation: spontaneous photon-pair sources.

Non-linear optics

To understand spontaneous photon-pair sources we have to delve into how dielectric materials react to light. Classically, light is nothing but the synchronised oscillations of electric and magnetic fields. And dielectrics are electrical insulators which can be polarised by an electric field. Let’s try to unpack these concepts.

When an electric field E is applied to a dielectric material, the electrons within the material’s atoms are pulled away from their respective nuclei but without being able to escape (because dielectric materials are insulators). This creates many dipoles (pairs of positive and negative charges separated by a small distance) and we say that the material is polarised.

The macroscopic strength of these dipoles is described by the polarisation density P, which is given by P = x * qₑ * N/V where x is the separation between the two charges, qₑ is the negative charge (assuming the charges are opposite but equal in magnitude) and N/V is the density of dipoles (number of dipoles per unit volume).

Polarisation of an atom (left) and of a dielectric material (right) with an electric field.

Let’s now assume we have a perfectly monochromatic laser beam oscillating with an angular frequency ω and traveling through the dielectric. In complex notation we can write this as

Eₗₐₛₑᵣ(t) = E₀ e^(−iωt) + c.c.

where c.c. is the complex conjugate of the term in front. In response to the oscillating electric field, the electrons and nuclei also start to oscillate, and we get a bunch of oscillating dipoles which in turn lead to a macroscopic polarisation density P(t) that oscillates. It is typically assumed that P(t) depends linearly on the external electric field, so that

P(t) = 𝜖₀ χ⁽¹⁾ Eₗₐₛₑᵣ(t)

where the constant of proportionality χ⁽¹⁾ is known as the linear susceptibility of the material and 𝜖₀ is the permittivity of free space. The important take-away here is that the dipoles oscillate at the same frequency as the laser.

What’s more, oscillating dipoles are nothing but accelerating charges which themselves emit electromagnetic radiation. It turns out that the dipoles act as a source of light Eₚₒₗ(t) which oscillates at the same frequency as P(t). This new field then gets added to the external driving field.

We therefore have the following string of cause-and-effect Eₗₐₛₑᵣ(t) → P(t) → Eₚₒₗ(t) which produces the output field Eₗₐₛₑᵣ(t)+Eₚₒₗ(t) where the two components oscillate at the same frequency.

In some dielectrics and at high driving power (large Eₗₐₛₑᵣ), the linear relationship between Eₗₐₛₑᵣ(t) and P(t) breaks down. We now enter the nonlinear regime where the relationship is better approximated by adding extra terms in the power series expansion, like so

P(t) = 𝜖₀ ( χ⁽¹⁾ Eₗₐₛₑᵣ(t) + χ⁽²⁾ Eₗₐₛₑᵣ(t)² + χ⁽³⁾ Eₗₐₛₑᵣ(t)³ + … )

Note that χ⁽²⁾ and χ⁽³⁾ are typically a LOT smaller than χ⁽¹⁾ in magnitude. Focusing on the χ⁽²⁾ non-linearity and expanding the squared term we get

χ⁽²⁾ Eₗₐₛₑᵣ(t)² = χ⁽²⁾ ( E₀² e^(−i2ωt) + c.c. + 2|E₀|² )

In this regime, the dipoles not only oscillate at the frequency of the driving laser but also pick up another frequency component at 2ω. The result is that light is emitted at a different frequency than what was sent into the material. In this case, we get second-harmonic generation.

So what is happening at the level of the photons? Recall that the energy of a photon is given by ħω where ħ is the reduced Planck’s constant. Here we’re assuming that the material is transparent (that it does not absorb energy), so in order to conserve energy two laser photons at frequency ω must be absorbed to generate a photon at frequency 2ω.

χ⁽²⁾ dielectric pumped with a single laser (left) and two lasers at different frequencies (right).

Another interesting thing happens when we drive the dielectric material with two laser beams at different frequencies:

Eₗₐₛₑᵣ(t) = E₁ e^(−iω₁t) + E₂ e^(−iω₂t) + c.c.

Again, focusing on the χ⁽²⁾ non-linearity and expanding the squared term, we get terms that oscillate at frequencies ω₁+ω₂ and ω₂−ω₁ (assuming ω₂>ω₁). This is called sum- and difference-frequency generation. Sum-frequency generation is easily understood as one photon from each driving field being absorbed to produce a photon at ω₁+ω₂.

Difference-frequency generation is a little more interesting. During this process, one photon at the pump frequency ω₂ is absorbed and two photons are emitted, one at the driving frequency ω₁ (the signal photon) and the other at ω₂-ω₁ (the idler photon).

Spontaneous parametric down‑conversion

If we attenuated the signal field (ω₁) down to zero, we would not expect any difference-frequency generation to occur. However, the electromagnetic vacuum is not completely empty and at the quantum level there are vacuum fluctuations. These fluctuations correspond to the momentary appearance of particles (in our case photons) out of empty space, as allowed by the uncertainty principle.

It turns out that vacuum fluctuations cause difference-frequency generation in χ⁽²⁾ dielectrics without the presence of a signal field. This process is called spontaneous parametric down-conversion and leads to one photon of the pump field being absorbed and a photon pair being emitted. Parametric just means that no energy is absorbed by the material (as with all the other nonlinear processes above), and the process is spontaneous because we cannot control when and where in the crystal the down-conversion will occur.

Spontaneous parametric down-conversion in a χ⁽²⁾ material.

Because we don’t have a second reference frequency any more, it’s fair to ask at what frequencies the photons will emerge. In addition to respecting conservation of energy, the photons involved in the down-conversion process must also obey conservation of momentum. The momentum of a photon is given by ħk, where k is the wavevector; its magnitude |k| = n(ω) * ω / c depends on the material’s refractive index at the frequency of the photon (c is the speed of light).

Momentum conservation dictates that kₚ = kₛ + kᵢ (where the subscripts p, s and i correspond to the pump, signal and idler photons respectively), and because of material dispersion and birefringence this condition is only satisfied for certain frequencies, directions and polarisations of the photons (and they need not be the same for the signal and idler).

Conservation of momentum is also referred to more broadly as phase-matching. In general, the properties of the signal and idler photons satisfying energy and momentum conservation are usually not the ones we want. To solve this, we must engineer the nonlinear material and the setup to achieve phase-matching for the correct output frequencies. This can be done by birefringent phase-matching, by engineering the dispersion profiles of optical modes and with quasi-phase-matching.

Materials with the large χ⁽²⁾ coefficients required for spontaneous parametric down-conversion are usually rather exotic. Common choices are lithium niobate (LiNbO3), potassium titanyl phosphate (KTiOPO4) or aluminum nitride (AlN).

It is also possible to use higher-order nonlinearities to achieve a similar effect. The other popular choice is spontaneous four-wave-mixing in χ⁽³⁾ materials. More common materials, like silicon, have large χ⁽³⁾ coefficients, which makes this process an appealing alternative.

Spontaneous photon-pair sources as emitters of single photons

The discussion so far has been quite removed from the topic of single-photon sources. In this section we finally answer why spontaneous photon-pair sources can be useful in that regard.

Two-mode squeezed-vacuum

First, let’s recall (see part 1 of the series) that sources of single photons have to be triggered. If we excite a χ⁽²⁾ dielectric with a continuous laser, spontaneous parametric down-conversion does not allow us to control the emission time of a photon pair.

A better alternative is to restrict the emission to time slots defined by laser pulses. The nonlinear process can only take place when the laser is “on” and that acts as our trigger. However, we cannot completely control whether a photon-pair is emitted within one laser pulse or if we get multiple pairs.

In fact, if we can separate the down-converted photons from the pump light, the photon-number state that we get in each pulse is (ignoring the spectral, momentum and polarization degrees of freedom)

|ψ⟩ = √(1−λ²) Σₙ λⁿ |n⟩ₛ |n⟩ᵢ

This state is called the two-mode squeezed-vacuum state and the parameter λ (called the squeezing parameter) is a number between 0 and 1 which quantifies the strength of the interaction. λ depends on the χ⁽²⁾ nonlinearity, the interaction length and the laser power. Note that the two-mode squeezed-vacuum state is an entangled state in photon number.

Assuming that λ is small, we can approximate the emitted state as

|ψ⟩ ≈ |0⟩ₛ|0⟩ᵢ + λ |1⟩ₛ|1⟩ᵢ + λ² |2⟩ₛ|2⟩ᵢ

From this, we can directly calculate the probabilities of getting one or two pairs in the time slot defined by a laser pulse. We have to square the amplitudes to get probabilities, so P(1 pair) ≈ λ² and P(2 pairs) ≈ λ⁴.

A pulsed nonlinear photon-pair source.

If we separated the signal and idler photons, could we use either of them as a single-photon state? Well, we are not off to a great start because we have some probability of having more than one photon-pair per laser pulse. In fact, if we completely ignored one of the outputs of the down-conversion process (say the signal photons), the idler photons would have

g⁽²⁾ = 2

Remember that a single-photon source needs a g⁽²⁾ as close to 0 as possible to avoid errors in our quantum protocols. A g⁽²⁾ equal to 2 is bad. In fact, it’s worse than if we had just used strongly attenuated laser pulses directly, which have g⁽²⁾=1.

Heralding

This can be redeemed if we use the photon-number entanglement in the two-mode squeezed-vacuum state. The trick is to monitor one of the outputs (say the signal) and record which output pulses have one or more photons. If we detect signal photons in a particular time slot, we now know that there is at least one photon in the idler path.

So, conditioned on the detection of a signal photon (or photons), the state of the idler collapses to the mixed state

ρᵢ ∝ λ² |1⟩⟨1| + λ⁴ |2⟩⟨2| + …

where we have used the density matrix formalism. The vacuum component is projected out.

A heralded nonlinear single-photon source.

This time, the g⁽²⁾ of the idler photon is

g⁽²⁾ ≈ 2λ²

And there we have it. By heralding the presence of idler photons with the signal photons, we get a state that approximates a single-photon state asymptotically better as λ² tends to zero.

However, recall that the probability of getting a single pair per pulse is also related to λ² (in fact it is equal to λ²). This is an important compromise we have to make with spontaneous photon-pair sources: we have to keep the brightness of the source low in order for the single-photon purity to be high. It’s a trade-off between scalability and algorithmic errors. For example, if we wanted P(1) to be 10%, then g⁽²⁾ = 0.2, and that’s quite high for a single-photon source.
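To put numbers on this trade-off, here is a quick sketch using the small-λ approximations above (P(1) ≈ λ² and g⁽²⁾ ≈ 2λ²):

```python
# Heralded pair source in the small-lambda regime: the same parameter
# lambda^2 sets both the brightness and the multi-photon error.
for lam2 in (0.005, 0.025, 0.05, 0.10):
    print(f"P(1) = {lam2:.1%}  ->  g2 = {2 * lam2:.2f}")
```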

Note that this can be improved with the use of photon-number-resolving detectors, which not only measure whether there are photons but also how many. By heralding only the single-photon events in the signal path (and rejecting the vacuum and multi-photon components), we can decrease the g⁽²⁾. Even so, the statistics of the two-mode squeezed-vacuum state are such that the theoretical limit for the brightness is 25%.

Spectral purity

Heralding is a great way to turn spontaneous photon-pair sources into single-photon sources. However, it also creates additional constraints on the nonlinear process. Let’s focus on the frequencies of the emitted photon pair. In general, the emitted state is given by

|ψ⟩ = ∫dωₛ ∫dωᵢ f(ωₛ, ωᵢ) |ωₛ⟩|ωᵢ⟩

where f(ωₛ, ωᵢ) is called the joint-spectral-amplitude. If f(ωₛ, ωᵢ) is not factorable — i.e. f(ωₛ, ωᵢ) ≠ f(ωₛ) * f(ωᵢ) — the photons are entangled in frequency and measuring the signal photon collapses the idler to a mixed state, generally written

ρᵢ = Σₙ pₙ |ψₙ⟩⟨ψₙ|

Mixed states in frequency are not a problem for photon purity. A single-photon, regardless of its frequency state, will only be measured at one output of a beam-splitter, and therefore has g⁽²⁾=0 (see part 1). However, it is bad for its Hong–Ou–Mandel visibility. A mixed state essentially means we have classical probabilities pₙ of getting different quantum states every time. And distinguishable states will not produce perfect interference on a beam-splitter (again, see part 1). In general, the detection of the signal photon must teach us nothing about the state of the idler photon except for its presence.

To have a spectrally-pure idler state, we have to work harder to engineer a factorable joint-spectral-amplitude. This can be done with group-velocity matching and apodization of the joint spectral amplitude by engineering the profile of the nonlinear medium. Another good method is to further engineer waveguide modes and to use long interaction lengths.

Failing to produce a factorable spectrum, we can always filter the output photons to reject all possible frequency states but one. While this approach improves spectral purity and therefore indistinguishability, it comes at the cost of reducing the brightness of the source and that compromises its scalability even more.

Multiplexing

The balance between brightness and single-photon purity seems to be a major hurdle for spontaneous photon-pair sources. Is this the end of the story? No, because we can improve the efficiency of these sources by multiplexing.

At its core, multiplexing is a simple idea. Instead of using just one time-bin of one photon-pair source, we combine multiple sources (spatial multiplexing) or incorporate several time-bins of the same source (temporal multiplexing) to produce a better single-photon source. We consume more resources for better performance. Let’s quickly talk about both options.

Spatial Multiplexing

With spatial multiplexing, we first have to build a number of identical photon-pair sources. It is important that all the heralded single-photon outputs are the same or the indistinguishability of the multiplexed source will suffer.

Once that’s done, the outputs of the sources are connected to a reconfigurable routing-circuit (a multiplexer). All the sources are then pumped simultaneously with laser pulses and we monitor the emission of signal photons. We then pick one of the sources that has fired and route its now-heralded idler photon to the output of the circuit.

Spatial multiplexing with a reconfigurable routing circuit.

Because we have multiple sources working together, the probability that at least one fires can be a lot greater than the brightness of the individual sources. In fact, assuming a perfect routing circuit, the brightness is improved to

B = 1 − (1 − B₁)ᴺ

where B₁ is the brightness of the sources and N is the number of sources hooked-up to the routing circuit.

Things now look a lot better, but we do have to consume a lot of resources. Say we want a source with g⁽²⁾ = 0.05 (that’s not bad for the state of the art), then B₁ ≅ P(1) = 2.5% and we need at least 15 sources for a final brightness of 30% (28 for 50% and 91 for 90%).
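The source counts quoted above follow directly from this expression; here is a quick check (assuming, as above, a lossless routing circuit):

```python
import math

def sources_needed(B1: float, target: float) -> int:
    # Smallest N such that 1 - (1 - B1)^N >= target, i.e. at least one
    # of the N identical sources fires with probability >= target.
    return math.ceil(math.log(1 - target) / math.log(1 - B1))

B1 = 0.025  # brightness consistent with g2 = 0.05 (see above)
for target in (0.30, 0.50, 0.90):
    print(f"{target:.0%}: {sources_needed(B1, target)} sources")
# -> 30%: 15 sources, 50%: 28 sources, 90%: 91 sources
```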

Temporal Multiplexing

The other variant of this technique is temporal multiplexing. Here, we use contiguous time-bins of the same source (recall that we pump the sources at regular time intervals). We simply wait for the source to fire in one of the time-bins and delay the heralded idler-photon accordingly so that it leaves the circuit in a known time-bin.

Temporal multiplexing based on (1) a network of switches and (2) on a storage loop.

The overall effect is the same as for spatial multiplexing but N is now the number of time-bins that we use. One key advantage of temporal multiplexing is that sequentially emitted photons of the same source are usually more similar than those emitted by different sources, which helps with indistinguishability. The price we pay however is a decrease in the repetition rate of our source which goes from R to R / N.

Repetition rates are important in quantum photonics because experiments are usually probabilistic. We typically know when a particular run has succeeded, but we may have to wait a while for that to happen. The ability to perform the experiment as often as possible is therefore desirable. This is what we are sacrificing with time-multiplexing.

We could also have anything in between full-temporal and full-spatial multiplexing depending on the scarcity of our two resources (experiment time vs number of sources).

Outlook

As can be seen in the figures above, multiplexing can be wasteful. As we add more sources (or time-bins) to boost the brightness, we also increase the probability that two or more sources fire in the same multiplexing circuit. Unfortunately, we have to discard these extra photons, because trying to route them to a second exit (or time-bin) would just add complexity and result in an additional source with poor brightness.

One interesting idea to improve this is that of relative multiplexing. In this scheme, we do not force the sources to always emit in the same exits or time-bins. Instead, we correct the relative distance (in space or in time) between photons that have been heralded. By using more of the emitted photons, we can reduce hardware complexity while improving coincidence counts.

It must be said however that multiplexing relies on the ability to integrate high-performing photonic components at scale. To implement a routing circuit, we must have access to optical switches, delay lines, single-photon detectors, fast electronics, etc… At present, photonic components can be lossy and/or slow to operate. This limits the performance of multiplexing, and we have to use even more hardware to reach the required improvement in brightness.

Summary

In summary, we have looked at how spontaneous photon-pair sources can generate single photons. We have seen how these sources are probabilistic, in the sense that we cannot completely eliminate the probability of multi-photon emission. In addition, it is only when the presence of one photon is heralded by the detection of the other that these sources become useful for single-photon generation.

We also described how the brightness of these heralded sources has to be kept low in order for the purity of the emission to be high. This can be remedied by multiplexing, but at the cost of multiplying the number of components, and therefore limiting the scalability in the long run.

Heralded photon-pair sources are very much still an active and ongoing topic of research, both theoretically and experimentally. An up-to-date review of the topic can be found here.

Next up

In the next post we will discuss a different approach to creating single-photons using deterministic quantum emitters. Compared to heralded pair-sources, deterministic sources have taken a longer time to mature and they now form the core of Quandela’s technology.

Single-Photon Sources (Part 1)

Article by Sebastien Boissier

Benchmarking Single-Photon Sources

In the span of twenty years, we have witnessed an extraordinary emergence of quantum information technologies — the beginning of what some have called the second quantum revolution. Technology that was once the exclusive playground of scientists is now becoming accessible to a broad range of end-users.

This is good news because progress can be best sustained when the enabling technology is readily available to the broader research and development community. In the quantum realm, the manipulation of isolated quantum objects (atoms, electrons, photons, etc.) and control over their unique properties (entanglement, quantum superposition, non-locality, etc.) is becoming increasingly accessible owing to the development of new products.

To support the growing quantum ecosystem, Quandela is developing and commercialising sources of optical qubits based on single photons. These quantum light sources are crucial building-blocks for optical quantum technologies. To give some examples, they can enable truly-random number generation, enhanced imaging and sensing methods, secure quantum communication and scalable quantum computing.

Quandela offers the brightest single-photon sources available on the market to help researchers and businesses unlock useful quantum applications.

In this post, we will first approach single-photon sources from a conceptual level so that we can identify and understand their key properties. Then we will go on to explain some of the various performance metrics often quoted in the literature. This is important for anyone wishing to compare different kinds of photon source.

Desiderata

First let’s discuss some key properties to keep in mind when assessing single-photon sources.

Property #1: Single-Photon Purity

As the name suggests, we expect a single-photon device to produce one, and only one, particle of light at a time. This is very important because the presence of additional photons leads to errors in quantum protocols that are designed to work with precise numbers of photons.

Purity is a much harder requirement to fulfill than one might imagine. For example, we cannot build a good single-photon source by simply dimming a laser or a candle. Even if a laser beam is attenuated to the point where there is on average much less than one photon in any given time interval, the statistics are such that there is still an unacceptable probability of having more than one. So a true single-photon source must actively suppress the probability of multi-photon emission, while maximizing the probability of having just one.

An attenuated candle versus a true single-photon source.

Property #2: Photons-on-Demand

Another important requirement is that the source must be triggered. Advanced quantum-photonic protocols rely on interference effects between two photons, which typically occur on a beam-splitter (a semi-reflective mirror). To interfere, photons must arrive at the beam-splitter at the same time and that requires synchronisation. We must therefore control the emission time of the photons with a trigger.

Typically, single-photon sources may be triggered by a pulsed laser that stimulates the emission of single-photons at precise time intervals.

Property #3: Brightness

Beyond having photons on demand, we want a high probability that the source actually emits when we pull the trigger. This property is what is known as brightness.

It is obvious that ending up with no photons is not very useful. But obtaining a high brightness is perhaps the most difficult challenge to address, mostly because photons are very easy to lose. And for certain sources, brightness is a trade-off against a higher probability of multi-photon emissions.

A single-photon source with imperfect brightness.

Source brightness is crucial for the scalability of optical quantum technologies, since differences in the brightness have an exponential effect on performance as we increase the complexity of our protocols by adding more sources to them. Therefore even small gains in brightness can have drastic improvements for experiments with a large number of sources.

Property #4: Indistinguishability

On the topic of interference, it turns out that for two photons to interfere they should ideally be completely identical to one another in all degrees of freedom. These include polarisation, as well as spectral and spatial envelopes. This indistinguishability condition puts harsh constraints on the sources. For two photons to be exactly identical, the emission process must itself be precisely reproducible.

A single-photon source with imperfect indistinguishability.

Metrics

Now let’s see how we can put some numbers to the properties discussed above.

Purity

An important tool for assessing single-photon purity is the second-order correlation function g⁽²⁾. Correlation functions are used to characterise the statistical and coherence properties of light. While a full treatment requires more space than a blog post, a first-order approximation of g⁽²⁾ for a high-performing triggered source is g⁽²⁾ ≈ 2 P(2) / P(1)², where P(n) is the probability of having n photons per triggering event. So a g⁽²⁾ measurement gives us a good idea of the probability that a source malfunctions by emitting two photons.

[Note that strictly speaking P(1) differs from the brightness B because most detectors only register the presence of photons but not the number of them. To complicate matters, however, some papers ignore this distinction and define B as P(1).]

But why use this metric instead of directly quoting the probabilities P(1), P(2), etc.? Well, it turns out that this first-order approximation of g⁽²⁾ can be measured in the lab with a rather simple apparatus, shown below:

g⁽²⁾ measurement with a Hanbury-Brown-Twiss interferometer.

The experiment consists of sending a single photon to a beam-splitter. The action of the beam-splitter is to evenly split the photon’s wavefunction into a superposition of two distinct paths. If we observe the photon at one detector, the wavefunction collapses into the corresponding branch of the superposition and no photon is detected on the other path. So for single photons we should never observe coincidence events at the two detectors.

But let’s suppose the source has less than perfect purity and see what happens if two photons arrive at the beam-splitter:

A g⁽²⁾ measurement with 2 incident photons.

In this case, the wavefunction contains a term in which one photon exits in each path. Half of the time the wavefunction collapses to this branch and we can expect both detectors to fire.

Suppose we are working with a good triggered source. We observe that the probability per pulse of detecting a photon at one detector is C₁, and that the probability per pulse of having both detectors fire simultaneously is C₂. Then, to a good first approximation, g⁽²⁾ ≈ C₂/(C₁)². For perfectly pure single-photon emission there would be no coincidence events, C₂ = 0, and we would find g⁽²⁾ = 0 as expected. In practice the aim is for this figure to be as low as possible.
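In code, the estimate looks like this (the counts below are made up for illustration):

```python
def g2_estimate(C1: float, C2: float) -> float:
    # First-order approximation for a triggered source:
    # C1 = probability per pulse of a click on one detector,
    # C2 = probability per pulse of simultaneous clicks on both.
    return C2 / C1**2

# Hypothetical numbers: 10% click probability, 1 coincidence per 10^4 pulses.
print(g2_estimate(C1=0.10, C2=1e-4))  # -> 0.01
```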

The result of a g⁽²⁾ measurement.

Above we show an example of a g⑵ measurement done in our labs. Here, we trigger a single-photon source with a constant repetition rate and we send the outputs of the two single-photon detectors to a coincidence time-correlator. The correlator’s job is to measure the time delay between a photon arriving on one of the detectors and the next photon arriving on the other detector. We collect many such time differences and plot them in a histogram.

In this histogram we can clearly see the repetition rate of our source, as photons emitted in different pulses can trigger both detectors, albeit with a time delay. However, we see very few coincidences around zero time delay, which is the quantum signature of our single-photon source.

Brightness

Typical sources are triggered at a constant repetition rate R (the number of triggering events per unit of time). With a source of brightness B (a probability lying between 0 and 1) we would end up with a single-photon count rate G = B * R.

Often, the count rate is measured at the output of a single-mode fibre which acts as a “wire” for photonic technology. So knowing the repetition and count rates would allow us to infer the brightness.

Knowing repetition and count rates we can infer source brightness.

However, detecting photons is also an imperfect process. Although single-photon detectors are a relatively mature technology, there still is a small probability that the detector does not register the presence of a photon even when one is present. For state-of-the-art detectors, the detection efficiency μ can be very close to 1 (detectors for which μ > 0.95 already exist).

Therefore our detected count rate is D = μ * B * R. So if we know the detected count rate, the detector efficiency and the repetition rate, it is easy to deduce the brightness of a source.

But what if we wanted two photons in our experiment? Then we have a two-photon coincidence rate D₂ = (μ * B)² * R, as we have to multiply the probabilities that two sources each successfully emit a photon and that two detectors each register one.

More generally, we have an n-photon coincidence rate Dₙ = (μ * B)ⁿ * R. The term (μ * B)ⁿ decays exponentially with an increasing number of photons n. It is therefore crucial to have B as close to 1 as possible, and even incremental improvements of B lead to huge gains in multi-photon coincidence rates.
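To see how dramatic this scaling is, here is a quick Python illustration of Dₙ = (μ * B)ⁿ * R, with assumed values for the repetition rate and detector efficiency:

# Assumed values, for illustration only
R = 80e6    # repetition rate in Hz
mu = 0.95   # detector efficiency
for B in (0.3, 0.6, 0.9):
    D6 = (mu * B) ** 6 * R   # 6-photon coincidence rate
    print(f"B = {B:.1f}: D6 = {D6:.3e} coincidences per second")
# Tripling B from 0.3 to 0.9 boosts the 6-photon rate by 3**6 = 729 times.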

Indistinguishability

Finally, we need a number to quantify the degree of similarity between photons. As we mentioned before, this is critical for building complex protocols based on photon interference. Photon indistinguishability is typically measured by a Hong–Ou–Mandel interference experiment.

The action of a beam splitter for one incident photon (left and centre). Hong–Ou–Mandel interference may occur when one photon arrives at each port of the beam splitter (right).

In this setup, two photons arrive at the same time via different ports of the beam-splitter. To see what happens we must first understand the full action of the beam-splitter.

Notice that the beam-splitter acts slightly differently on a photon entering by port 1 than it does on a photon entering by port 2. A minus sign appears for the reflected term of the wavefunction for a photon that entered through port 2. This so-called Stokes relation is a consequence of the time-reversal symmetry of the Maxwell equations (for lossless materials).

If we expand the output state in the case that a photon enters via each port, we find:

Hong–Ou–Mandel interference for indistinguishable and distinguishable photons.

Here we have separated the cases where the input photons are identical or different (after all this is what we are trying to measure).

If we have indistinguishable photons, then apart from having opposite signs there is no way to distinguish the final two terms… and their probability amplitudes cancel out! We are therefore left with an entangled state, which gives zero probability of measuring coincidences between the two detectors.
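To make the cancellation explicit, here is a minimal sketch of the algebra, labelling the output modes 3 and 4 and using a sign convention consistent with the relation above: the beam-splitter maps â₁† → (â₃† + â₄†)/√2 and â₂† → (â₃† − â₄†)/√2. For two identical photons, â₁†â₂†|0⟩ → ½(â₃† + â₄†)(â₃† − â₄†)|0⟩ = ½((â₃†)² − (â₄†)²)|0⟩. The cross terms â₃†â₄†, which would produce a coincidence, cancel exactly, leaving only the branches with both photons in mode 3 or both in mode 4: the photons bunch.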

If on the other hand the two photons are distinguishable, then there is not a total cancellation between final terms, and we do observe coincidences.

So the coincidence rate in this experiment indicates how distinguishable the photons are. Usually it is simpler to speak in terms of the Hong–Ou–Mandel (HOM) visibility Vₕₒₘ, which essentially corresponds to the non-coincidence rate, and which indicates how indistinguishable the photons are.

A low number of coincidences means a high Vₕₒₘ, and Vₕₒₘ is equal to 1 only if we see no coincidences at all. For complex quantum protocols to work without being overwhelmed by errors, we should aim to have Vₕₒₘ as close as possible to 1.

It is also important to mention that the HOM visibility can be corrupted by the presence of more than one photon in the input ports. For example, consider the following scenario:

A Hong–Ou–Mandel interferometer with multiple photons.

With an extra photon in port 1, we would get a state where coincidences are again possible. So even if a source emitted perfectly identical photons but was not perfectly pure, i.e. had some probability of emitting two at a time (g⁽²⁾ > 0), then we would not measure a perfect HOM visibility.

When trying to understand why a source doesn’t have Vₕₒₘ = 1, it is usually a good idea to separate the contributions due to impurity from the contributions due to distinguishability. In the literature, therefore, we often find values for the corrected HOM visibility, where the correction ensures that only the contributions due to distinguishability are counted.

The result of a HOM interference measurement.

Here is the result of an HOM interference measurement with one of our triggered sources. We delay every other photon such that two successive photons from the same source arrive at the beam-splitter at (approximately) the same time. We then use the same coincidence correlator as for the g⁽²⁾ measurement to build up the histogram.

Again we see the repetition rate of the source, as photons which do not arrive at the beam-splitter at the same time do not interfere. At zero time delay we see very few coincidences, which is the sign that our photons have bunched together at the output of the beam-splitter.

Summing Up

In conclusion, we have looked at the three key features of a single-photon source: single-photon purity, indistinguishability, and brightness, and we have introduced the respective metrics for assessing them: the g⁽²⁾ correlation function, the brightness as inferred from various count rates, and the HOM visibility Vₕₒₘ.

So where do we currently stand in terms of performance? As we will see in the next posts, there are two main technologies able to produce high-quality single photons: quantum dots in micro-cavities and spontaneous parametric down-conversion. Here is a rough map of the progress being made towards the ideal single-photon source:

To Be Continued

In the next post in this series we will introduce one of the technologies that has been developed to create single photons: heralded photon-pair sources with active multiplexing.

Perceval Tech Note: Strong Linear Optical Simulation (SLOS)

Article by Jason Mueller

Part 1: Quantum simulations

Quantum computing is a computational paradigm which encodes information into quantum systems called qubits. These qubits are manipulated using superposition and entanglement to achieve a computational advantage over classical computers. In particular, photonic qubits are one of the most promising quantum-computing technologies [1], with billions invested across the world. As the field continues to evolve quickly, the need to efficiently simulate the calculations performed by quantum circuits becomes ever more pressing.

Perceval, the software layer of Quandela’s full-stack quantum computer, allows you to simulate quantum circuits and their outputs, and to write and run quantum algorithms. The simulator includes several backends, each tailored to the different kinds of algorithms you may be interested in running [2]. Here, we’ll be learning about Perceval’s Strong Linear Optical Simulation (SLOS) backend, which allows us to time-efficiently simulate the output distribution of linear optical quantum circuits, with applications ranging from quantum machine learning to graph optimization [3].

Part 2: Why SLOS?

First, what is the difference between a strong and a weak simulation? Consider the simple example of rolling a die, where we want to know the probability of its landing on each face. There are two options: roll the die many times and write down the number of times each face turns up, or write down the precise probability that each face appears based on physical knowledge we have about the die (e.g. number of faces, shape of die). The first case of rolling the die many times is called a weak simulation, because we’re learning about a probability distribution by sampling from it, one trial at a time. The second case of writing down the precise probabilities is called strong simulation, because we have fully characterized the behavior of the die without sampling.

Now imagine that instead of a die, you had a photonic circuit like the one shown in Fig. 1, which sends n indistinguishable photons through m modes.

Figure 1

The output distribution is proportional to the permanent:
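Schematically, following [3, 6], the probability of observing an output pattern |n⟩ = |n₁,…,nₘ⟩ given the input |nᵢₙ⟩ is p(n) = |Per(Unᵢₙ,n)|² / (n₁!…nₘ! · n₁,ᵢₙ!…nₘ,ᵢₙ!), where Unᵢₙ,n is the submatrix of the circuit’s unitary U built by repeating rows and columns according to the input and output configurations.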

Using SLOS, we can improve on the complexity of brute-force algorithms, which calculate each output state individually, by an exponential factor of 2ⁿ. SLOS computes several inputs and outputs directly, with time scaling 𝒪(nMₙ), where Mₙ = C(n+m-1, m-1) is the number of possible n-photon output states over m modes.

Part 3: SLOS_full and SLOS_gen

Perceval includes two versions of SLOS: SLOS_full, which iteratively calculates the full output distribution, and SLOS_gen, which recursively calculates the output distribution given some constraints. The mathematical key to both of these algorithms is understanding how we can efficiently decompose the output state:
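Schematically, in the spirit of [3], the n-photon output state factorises photon by photon:

U|p₁, …, pₙ⟩ ∝ ∏ₖ₌₁ⁿ ( Σᵢ₌₁ᵐ Uᵢ,ₚₖ âᵢ† ) |0⟩,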

where pₖ is the mode in which the k-th input photon is created. For k < n photons, we can add one more photon to our state as
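|ψₖ₊₁⟩ = ( Σᵢ₌₁ᵐ Uᵢ,ₚₖ₊₁ âᵢ† ) |ψₖ⟩ (again a sketch consistent with [3]).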

Finally, we concatenate this product:
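|ψₙ⟩ = ∏ₖ₌₁ⁿ ( Σᵢ₌₁ᵐ Uᵢ,ₚₖ âᵢ† ) |0⟩ = U|nᵢₙ⟩, up to normalisation.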

More simply, SLOS_full calculates and stores the coefficients of a state with k photons, then extrapolates to the next state of k+1 photons; the specific algorithm is given in [3]. The complexity of SLOS_full follows because, given a state with k photons, there are m ways to put in the next photon:
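T(SLOS_full) ∼ Σₖ₌₀ⁿ⁻¹ m · Mₖ = 𝒪(n · Mₙ) (a schematic count, in the spirit of [3]),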

which is linear in the number of states!

But suppose you’re interested in only a few of the output modes, as is common for algorithms which rely on heralding or post-selection. SLOS_gen is the solution. This version of SLOS introduces a mask to filter the states at each step of the computation: we compute the coefficients of the full k-photon states, apply the mask so that we keep only the k-photon states compatible with the outputs we’re interested in, then compute the (k+1)-photon coefficients from this reduced set, and continue. While this algorithm does introduce a memory overhead because it is recursive, the average complexity still scales as 𝒪(n2ⁿ).

Part 4: Complexity vs. Memory

SLOS exponentially outperforms other state-of-the-art approaches, with the tradeoff that it requires a large amount of memory. The memory requirement of SLOS_full is 𝒪(Mₙ); for n = m = 12, M₁₂ ≈ 1.35 × 10⁶.

The memory complexity of SLOS_gen is difficult to calculate due to state redundancy, but for a single input and output, there are C(n, n/2) coefficients to be stored. To visualize these memory constraints for typical computers, we show the maximum state size allowed for a given memory in Fig. 2 — the maximum number of photons and modes that a laptop can practically calculate is ~12.
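To put rough numbers on these limits, here is a small Python sketch (the helper names are ours; it assumes 16-byte complex amplitudes and ignores indexing overheads):

from math import comb

def fock_dim(n, m):
    # M_n = C(n+m-1, m-1): the number of n-photon states over m modes
    return comb(n + m - 1, m - 1)

def slos_full_bytes(n, m, bytes_per_amp=16):
    # Amplitudes stored for every intermediate k-photon layer
    return bytes_per_amp * sum(fock_dim(k, m) for k in range(1, n + 1))

print(fock_dim(12, 12))                        # 1352078, i.e. ~1.35e6 states
print(slos_full_bytes(12, 12) / 2**20, "MiB")  # a few tens of MiB for n = m = 12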

Fig. 2. The maximum-achievable performance of SLOS running on a personal laptop (blue; 8 GB memory) and on machines with 256 GB (orange), 4 TB (red), and 1 PB (black) of memory. Image reproduced from [3].

Part 5: SLOS in action!

Let’s see how SLOS performs on a typical quantum machine learning application, where we want to train a quantum circuit to approximate the solution of a differential equation [7]. This requires repeatedly calculating the full output distribution, optimizing the circuit configuration each time. The algorithm converges in 200 to 400 iterations, each with thousands of full-distribution calculations! When we compare the performance of the traditional approach used in the original research versus SLOS, we see that the number of calculable photons is improved from 6 to 10 (see Fig. 3). Put another way, if you are interested in using 6 photons for the simulation, the SLOS optimization takes only 2 minutes, a factor-330 time improvement over the original method.

Fig. 3. The SLOS advantage on a typical quantum machine learning application. SLOS performs exponentially better than the naïve approach when calculating the full output distribution, by a factor of 2ⁿ.

Part 6: Where and how to use SLOS

Although SLOS is memory-intensive, it is a complexity-efficient backend, unique to Perceval, for simulating output distributions of linear optical circuits. The 2ⁿ time advantage offered by SLOS is useful for researchers to simulate their theories and verify their experiments, and it is useful for industry to quickly design and validate algorithms.

You can access SLOS through the open-source code on GitHub [8]. Quandela’s tutorial center also includes several applications using SLOS [9–12], which you can run both locally and on Quandela’s quantum cloud service [13].

References:

[1] What is a Quantum Computer?. Quantum computing has the potential to… | by Quandela Team | Quandela | Medium

[2] Computing Backends — perceval 0.3.0a2 documentation (quandela.net)

[3] [2206.10549] Strong Simulation of Linear Optical Processes (arxiv.org)

[4] A linear-optical proof that the permanent is #P-hard | Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences (royalsocietypublishing.org)

[5] ON QUANTUM FIELD THEORY. I. EXPLICIT SOLUTION OF DYSON’S EQUATION IN ELECTRODYNAMICS WITHOUT USE OF FEYNMAN GRAPHS (Journal Article) | OSTI.GOV

[6] The computational complexity of linear optics | Proceedings of the forty-third annual ACM symposium on Theory of computing

[7] [2107.05224] Fock State-enhanced Expressivity of Quantum Machine Learning Models (arxiv.org)

[8] GitHub — Quandela/Perceval: An open source framework for programming photonic quantum computers

[9] Differential equation resolution — perceval 0.3.0a2 documentation (quandela.net)

[10] Boson Sampling with MPS — perceval 0.3.0a2 documentation (quandela.net)

[11] Using non-unitary components in Perceval — perceval 0.3.0a2 documentation (quandela.net)

[12] Remote computing with Perceval — perceval 0.3.0a2 documentation (quandela.net)

[13] Welcome to Quandela Cloud

If you are interested in finding out more about our technology, our solutions or job opportunities, visit quandela.com

Exploring Graph Problems with Single Photons and Linear Optics

Article by Filipa Carvalho, Rawad Mezher & Shane Mansfield

1 — Introduction (non-technical summary)

The huge promise offered by quantum computers has led to a surge in theoretical and experimental research in quantum computing, with the goal of developing and optimising quantum hardware and software. Already, many small-scale quantum devices (known commonly as quantum processing units or QPUs) are now publicly available via cloud access worldwide. A few weeks ago, we were delighted to announce Quandela’s first QPU with cloud access — Ascella — a milestone achievement and the first of its kind in Europe.

We are working on developing both quantum hardware and software tailored to our technology. Our QPUs, including Ascella, consist of our extremely high-quality single-photon sources, coupled to linear optical components (or chips), and single photon detectors. We can in principle implement many algorithms on these QPUs, including variational algorithms, quantum machine learning and the usual qubit protocols. Check our Medium profile for more.

This post highlights our latest research: a new important use for our QPUs, namely solving graph problems. Graphs are extremely useful mathematical objects, with applications in fields such as Chemistry, Biochemistry, Computer Science, Finance and Engineering, among many others. Our research shows how to efficiently encode graphs (and more!) into linear optical circuits, in practice by choosing the appropriate QPU parameters, and how to then use the output statistics of our QPUs to solve various graph problems, which we detail below in the technical section. Furthermore, we also show how to accelerate solving these problems using some pre-processing, thereby bringing closer the prospect of achieving a practical quantum advantage for solving graph problems by using our QPUs.

2 — Methods and results

We now take a deeper, more technical look into the workings of our technique. The probability of observing any particular outcome |n⟩ = |n₁,…,nₘ⟩ in a linear optical interferometer depends on the configuration of the interferometer (which can be described by a unitary matrix U) and on the input state |nᵢₙ⟩ = |n₁,ᵢₙ , … , nₘ,ᵢₙ⟩ (where nᵢ,ᵢₙ is the number of photons in the iᵗʰ input mode) in the following way [1]:
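P(n | nᵢₙ) = |Per(Unᵢₙ,n)|² / (n₁!…nₘ! · n₁,ᵢₙ!…nₘ,ᵢₙ!)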

(Here Unᵢₙ,n is the matrix obtained from U by repeating rows and columns according to the output and input configurations n and nᵢₙ.) Hence, the applications for such a device relate closely to a matrix quantity known as the permanent (Per), a mathematical function that is classically hard to compute. One of the key techniques we use to make matrix problems solvable using our quantum devices is mapping the solution of a given matrix problem onto a problem involving permanents.

2.1 — Encoding, computing permanental polynomials and number of perfect matchings

We begin by encoding matrices into our device. Graphs (and many other important structures) can be represented by a matrix. To encode a bounded matrix A, we consider its singular value decomposition A = UDV*, where D is a diagonal matrix of singular values, and take s to be the largest singular value of A (the maximum entry of D). We can then scale the matrix down to Aₛ = A/s, so that it has spectral norm ||Aₛ|| ≤ 1. From here, we can make use of the unitary dilation theorem [2], which shows that we can embed Aₛ in a larger matrix which is unitary and has the following block form:
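In standard form (a reconstruction following [2]), the dilation reads:

U_A = [ Aₛ               (I − AₛAₛ*)^½ ]
      [ (I − Aₛ*Aₛ)^½    −Aₛ*          ]

As a quick numerical sanity check, here is a minimal numpy sketch (the function names are illustrative, not from our codebase):

import numpy as np

def psd_sqrt(M):
    # Hermitian square root, clipping tiny negative eigenvalues from rounding
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def unitary_dilation(A):
    s = np.linalg.svd(A, compute_uv=False).max()  # largest singular value
    As = A / s                                    # now ||As|| <= 1
    I = np.eye(A.shape[0])
    U = np.block([[As, psd_sqrt(I - As @ As.conj().T)],
                  [psd_sqrt(I - As.conj().T @ As), -As.conj().T]])
    return U, s

A = np.array([[1.0, 2.0], [0.0, 1.0]])
U, s = unitary_dilation(A)
print(np.allclose(U @ U.conj().T, np.eye(4)))  # True: U is unitary
print(np.allclose(s * U[:2, :2], A))           # True: the top-left block encodes A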

With the appropriate input and output state post-selection, we can estimate the permanent of A by using a setup similar to that of the figure below (see our paper [6] for details). You can test this estimation for yourself using Perceval, an open software tool developed by Quandela to simulate and run code on a photonic QPU [3].

Figure 1 — Setup to estimate a matrix permanent composed of single photon sources (SPS), a linear optical circuit U encoding A, and single photon detectors.

2.2 — The k-densest subgraph problem

The k-densest subgraph problem is the following: given a graph of n nodes, find the subgraph of k nodes which has the highest density. Considering an adjacency matrix A and the cardinalities of the nodes |V| and edges |E|, we proved in [6] that (when |V| and |E| are even) the permanent is upper bounded by a function, Per(A) ≤ f(|V|,|E|), where f(|V|,|E|) is monotonically increasing in the number of edges.

The connection to the densest subgraph problem is natural since the density of a graph defined by the nodes v is:
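d(v) = 2Eᵥ / (|v| (|v| − 1)) (one standard definition),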

where Eᵥ is the number of edges in the subgraph connecting the nodes v; since the subgraph size k is fixed, the density is proportional to the number of edges.

Let n be the number of vertices of a graph G. We are interested in identifying the densest subgraph of G of size k≤n. The main steps of our algorithm [6] to compute this are the following:

  • Use a classical algorithm [5] which finds a subset L, of size less than k, of the densest subgraph of G. Then use L as a seed for our algorithm, by constructing a matrix B of all possible subgraphs of size k containing L. In general, this number of possible subgraphs is exponential; however, given the classical seed, it can be reduced to polynomial in some cases [6].
  • After constructing the matrix with these possibilities, we choose the appropriate input state and the following post-selection rules:

For clarification, check the figure below.

Each post-selected output computes the permanent of a subgraph, and by the connection established above it follows that the densest one will be outputted most often, since it has the highest permanent value.

We tested our theory and obtained some results (you can find and reproduce them via this link). Here we present an example. Given the graph represented in the next figure with node labels, we searched for the densest subgraphs of size 3, with the minimum number of samples from the linear optical device set to 200.

We tested for different seeds that the classical algorithm might have provided.

  1. For [2], 23420000 samples were generated; 200 remained from post selection, all attributed to node group [2,4,5].
  2. For [4], 21120000 samples were generated; 200 remained from post selection which were distributed over node groups [4,3,5] and [4,2,5] with 115 and 85 samples respectively.

Our algorithm outputted the expected subgraphs from the figure, and when it had two equally suited options it correctly outputted both.

2.3 — Graph isomorphism

In our paper [6] we prove that computing the permanents of all possible submatrices of the adjacency matrices of two graphs G₁ and G₂ is necessary and sufficient to show that they are isomorphic; however, this is not practical, as the number of these computations skyrockets with the number of vertices. One alternative is to import powerful permanent-related tools from the field of graph theory to distinguish non-isomorphic graphs. A powerful distinguisher of non-isomorphic graphs is the Laplacian [4]. We start from an estimation of the permanental polynomial P(x):
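P(x) := Per(xIₙₓₙ − A), defined here with the graph’s n×n adjacency matrix A [4].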

The Laplacian of a graph is given by L(G) := D(G) − A(G), where A(G) is the graph’s adjacency matrix and D(G) is a diagonal matrix whose entries are the degrees of the corresponding nodes. Here, in the Laplacian method, L(G) replaces A in P(x).

Given two graphs G₁ and G₂, the steps for graph isomorphism are the following:

  1. Encode xIₙₓₙ − L(G) for both G₁ and G₂ into unitaries U₁ and U₂ of 2n modes.
  2. Compute an estimate of Per(xIₙₓₙ − L(G)) for each graph by using the configuration described before: passing n photons into the first n modes of U₁ and U₂ and post-selecting on |n⟩ = |nᵢₙ⟩.
  3. Repeat steps 1 and 2 for different values of x. Declare the graphs isomorphic (True) if the estimations are close (up to an error).

We tested our algorithm by comparing random graphs with similar characteristics (same number of nodes and same edge connectivity) and obtained the results summarized in the pie chart.

This means that 75% of the time all methods gave the same output; 93% of the time we got the right result using the Laplacian method; 77% of the time we had a correct answer with just the permanental polynomial; and 5% of the time we did not obtain a correct answer by either method.

2.4 — Boosting probabilities

Last but not least, boosting our results. Not only might post-selection reject a considerable portion of our samples, but the advantage of estimating permanents this way over classical approaches can be hard to verify. To soften this, to get a better run time, and to increase the prospects of seeing a practical quantum advantage, we studied how to better approximate the permanent with fewer samples. Put simply, we tweak the adjacency matrix by multiplying the row corresponding to the least connected node by a weight w, and we encode the new matrix into the device; this can increase the probability of the relevant outcomes, depending on the connectivity.

In the following table, we give some of the results over different w for the graph on the left.

3 — Conclusion and final remarks

The applications and algorithms we developed were focused on graph problems, but the interesting fact is that our encoding method allows the encoding of any bounded matrix.

An important factor is the complexity of these algorithms. To estimate permanents by sampling, one obtains a polynomially close estimate of the permanent using a polynomial (efficient) number of samples [6]. This rules out the prospect that our algorithms present an exponential quantum-over-classical advantage, as polynomial-time classical algorithms like the Gurvits algorithm [7] can also approximate permanents to polynomial precision. However, it leaves open the possibility that our techniques might present a practical advantage over their classical counterparts.

A final note is that when comparing quantum algorithms to classical ones, a factor which is becoming increasingly relevant is their energy cost. In terms of resources, Quandela’s Ascella QPU (12 modes and 6 photons) has the potential to be more energy efficient when compared to its classical counterparts.

References

1. S. Aaronson and A. Arkhipov, The computational complexity of linear optics (2011) pp. 333–342

2. P. R. Halmos, Normal dilations and extensions of operators, Summa Brasil. Math 2, 134 (1950)

3. N. Heurtel, A. Fyrillas, et al., Perceval: A software platform for discrete variable photonic quantum computing, arXiv preprint arXiv:2204.00602 (2022)

4. R. Merris, K. R. Rebman, and W. Watkins, Permanental polynomials of graphs, Linear Algebra and Its Applications 38, 273 (1981)

5. Nicolas Bourgeois, Aristotelis Giannakos, Giorgio Lucarelli, Ioannis Milis, Vangelis Paschos. Exact and approximation algorithms for densest k-subgraph. 2012. ⟨hal-00874586⟩

6. R. Mezher, A. Carvalho, S. Mansfield, arXiv:2301.09594v1

7. L. Gurvits, in International Symposium on Mathematical Foundations of Computer Science (Springer, 2005) pp. 447–458.

If you are interested in finding out more about our technology, our solutions or job opportunities, visit quandela.com

Entangled-photon generators for fault tolerant photonic quantum computers

Article by Niccolo Somaschi & Pascale Senellart

The scientific results just published in the new issue of the journal Nature Photonics represent another “first” in the technological achievements of the quantum computing company Quandela towards the development of photonic quantum computers.

Within a collaborative effort, several academic research groups (Pascale Senellart at C2N-CNRS, Sophia Economou at Virginia Tech (USA), Alexia Auffeves at CNRS Grenoble) and the teams at Quandela demonstrate the deterministic generation of a linear cluster state of entangled photons with a semiconductor quantum-dot photonic device.

This work reinforces the already strong bonds between the group of Pascale Senellart at C2N-CNRS and Quandela’s theory and experimental teams.

A semiconductor approach to entangled-qubit generation, building on a semiconductor artificial atom and single-spin control.

By entangling a spin embedded in a semiconductor quantum dot with the photons it emits in a time sequence, the authors demonstrate two- and three-partite entanglement with record efficiency: generation rates 100 and 10 times higher, respectively, than the previous state of the art across different technologies. This is also the first time (all platforms considered) that such high efficiency is combined with photon indistinguishability, a crucial feature for scaling toward 3D cluster states for fault-tolerant quantum computing.

The protocol converts one of the most efficient single-photon generators into an entangled-photon weaving machine at the microscopic scale.

From single photon quantum computing to cluster state-based quantum computing.
The use of cluster states, graph states and large ensembles of entangled qubits as a resource for quantum computing is not a new idea.

Several theoretical approaches and protocols have been proposed since the early 2000s (MBQC, ballistic QC, cluster-state strings), making this approach, generally known as “measurement-based quantum computing”, one of the most studied and most explored in terms of scalability for universal quantum computing.
It is in fact widely accepted in the scientific community, as well as in the quantum computing corporate world, that architectures based on the manipulation of large cluster states of entangled particles are among the most viable approaches for error-corrected, large-scale quantum computers, across all platforms.

In the past years, variations of this MBQC approach have appeared in the roadmaps of large photonics-based quantum computing companies (fusion-based QC, CV QC), placing photonics among the most promising technologies in the quest for fault-tolerant quantum computers.

Nevertheless, generating photonic qubits, and especially generating entanglement among them, remains an extremely challenging quest for the community.

Most of the research community, as well as the above-mentioned companies, implement traditional techniques, such as spontaneous parametric down-conversion and linear optical circuits, which are probabilistic processes by nature. This automatically leads to extremely costly technological resources and footprints, due to the need for heavy multiplexing of very large numbers of components on multiple large photonic chips.

Today Quandela reaches a major technical milestone in this framework, presenting the most efficient devices capable of generating cluster states of entangled photons on demand with a single micrometre-scale semiconductor device. This achievement builds on Quandela’s unique approach of leveraging a single matter-based qubit to overcome the scalability barrier of the photonic quantum computing platform.

Semiconductor manufacturability for scalability.

The semiconductor expertise at the core of the publication builds on more than 20 years’ experience within the research group of Pascale Senellart at C2N and Quandela’s co-founding team.

Combined with the advances in device manufacturability developed at Quandela, in terms of throughput, quality and reproducibility, we expect to improve the performance further in the coming months.

All these ingredients will make it possible to increase the coherence time of the single spin, allowing the generation of larger cluster states with dozens of photons and high multi-partite fidelity.

Within quantum computing development, this class of qubit generators initiates a new phase in which the manipulation of single photons is replaced by the manipulation of linear cluster states of entangled photons.
From there, protocols for manipulation and error correction will be implemented to combine the required ingredients for a resource-efficient quantum computing architecture.

If you are interested in finding out more about our technology, our solutions or job opportunities, visit quandela.com

Exploring Ascella, the Single-Photon Quantum Computing Prototype

Article by Ben Stott

Quantum computing is at the forefront of technology, with the potential to transform how we process information. This emerging field leverages quantum phenomena such as quantum superposition and entanglement to accomplish computational tasks at unprecedented speeds. Among the different strategies for achieving quantum computing, photonic quantum computing is a particularly bright star.

Photonic quantum computing uses light particles, photons, as the carriers of quantum information. This approach has many benefits: photons are stable, they have modest cooling requirements, and they are scalable.

One method for encoding information in photonic quantum computing is “dual-rail encoding”. In dual-rail encoding, a pair of tracks forms a qubit: the logical 1 is encoded in the photon travelling in the lower rail and the logical 0 in the photon travelling in the upper rail. This reliable encoding method is widely used in the photonic world. More details can be found here.

In a significant leap forward in this field, we at Quandela have developed Ascella [1], a general-purpose single-photon-based quantum computing platform. Ascella is a quantum computer that uses a highly efficient quantum-dot single-photon source, which functions much like a miniature box producing single photons on demand; these photons then enter a universal interferometer that performs the computation, and are finally detected at the output for processing. More details about single-photon sources here.

To compensate for hardware errors, Ascella uses a machine-learned transpilation process. Transpilation is the process of rewriting a given input circuit to match the topology of a specific quantum device. Additionally, it features a full software stack that enables remote control of the device, opening up opportunities for a range of computational tasks through logic gates or direct photonic operations.

At Quandela we have successfully demonstrated high-fidelity one-, two-, and three-qubit gates, implemented a complex quantum algorithm to calculate the ground-state energy of the hydrogen molecule and even developed a photon-based quantum neural network. Notably, Ascella achieved the generation of a unique type of quantum entanglement involving three photons — heralded 3-photon entanglement. This achievement is a critical milestone toward ‘measurement-based quantum computing’ a promising pathway to scale up quantum computers (*).

(*) Measurement-Based Quantum Computing (MBQC) is a form of quantum computing where computation is performed through a sequence of single-qubit measurements on a pre-established, highly entangled state known as a cluster state. In this model, the process and outcome of the computation are determined by the choice and order of the measurements.

SECTION 1: Diving Into Ascella’s Architecture

At its core, Ascella is a unique piece of technology that uses light to perform complex calculations. Its hardware is made up of several key components. First, there is the single-photon source, which produces photons on demand and emits them into a fibre that sends them on their journey through the system. The system then contains a device called a demultiplexer, which splits the stream of photons into separate streams so that the chip is fed 6 photons arriving at the same time. Next comes the universal interferometer, whose unitary matrix determines the evolution of the photons. Finally, single-photon detectors (SNSPDs) sit at the outputs.

FIG. 1: Architecture, performance and stability of Ascella.

1.1 Peering into Ascella’s Operational Mechanism

Quantum interference is what enables Ascella’s operations. At the heart of the system is an integrated circuit, containing a multitude of voltage-controlled phase shifters and beamsplitters. This circuit can be programmed to perform a variety of tasks, transforming the photons’ states as they move through the circuit. These photons are then detected and recorded, creating the output data from Ascella.

1.2 Stability and Performance

Just like a well-oiled machine, Ascella needs to perform consistently over time. To ensure this, key performance metrics are monitored and the system undergoes automatic hourly optimizations. This commitment to performance maintenance allows Ascella to remain stable and high performing for weeks on end, an essential quality for any computing platform and especially for the demanding realm of quantum computing.

1.3 Ascella’s User Interface

Ease of use is another important aspect of Ascella’s design. Users can interact with Ascella remotely using the open-source Perceval framework. This allows users to send tasks, such as specifying a certain pattern for the photonic circuit or requesting a specific transformation on an input state, which could contain anywhere from one to six photons. The results are then sent back to the user.

FIG. 2: From Perceval User to Ascella QPU. PC = Photonic circuit, GB = Gate based, U= Unitary, ɸ= Phase shifts, V= Voltages

Ascella represents an exciting stride towards scalable quantum computing. It operates a record-breaking number of single photons on a chip, and its robust capabilities combined with a user-friendly interface make it a promising player in the noisy intermediate-scale quantum (NISQ) computing era.

SECTION 2: Quantum Chemistry with Ascella’s Variational Quantum Eigensolver

Understanding the behavior of molecules is critical in fields like chemistry and medicine. However, simulating molecular behavior can be incredibly complex for classical computers, especially when it comes to quantum phenomena. Ascella, Quandela’s photonic quantum computer, takes advantage of the Variational Quantum Eigensolver (VQE), a tool for quantum chemistry.

The VQE is a hybrid quantum-classical algorithm that can estimate the ground-state energy of a molecule, providing insight into its lowest-energy configuration. This ground-state knowledge is crucial, as it can help predict how a molecule will behave in a chemical reaction, which in turn can lead to new drug developments or more efficient materials for energy production. In our case the VQE algorithm was used on an H2 molecule with the following ansatz.

FIG. 3: Ansatz circuit with 7 parameters controlling 7 parameterized rotations.

This is where the “variational” part of VQE comes in: it iteratively varies the parameters of the wavefunction to find the set that gives the lowest energy. This is done in a feedback loop, where the quantum computer calculates the expectation value, then a classical computer updates the parameters and directs the quantum system to recalculate based on these new parameters.
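To make the loop concrete, here is a minimal Python sketch; the energy function below is a toy placeholder, whereas in the real experiment the QPU evaluates the expectation value of the H2 Hamiltonian on the 7-parameter ansatz:

import numpy as np
from scipy.optimize import minimize

def energy_expectation(theta):
    # Toy landscape standing in for the measured <H> on the QPU
    return -1.0 - 0.1 * np.cos(theta).sum()

theta0 = np.random.uniform(0, 2 * np.pi, size=7)  # the 7 rotation parameters
result = minimize(energy_expectation, theta0, method="COBYLA")
print(result.fun)  # estimated ground-state energy after convergence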

FIG. 4: Energy (Hartrees) vs iterations for VQE run on Ascella. We observe a convergence to the true theoretical value.

In practice, Ascella’s VQE consistently converges to the theoretical energy eigenvalue within 50 to 100 iterations, regardless of the initial random parameters or bond length. In tests, the algorithm achieved chemical accuracy, an error of only ±0.0016 Hartree (a unit of energy at the atomic scale), with a success probability of 93%. These significant improvements can be attributed to the high-quality single-photon sources and advanced control of the photonic chip.

SECTION 3: Ascella’s Quantum Neural Network / Variational Quantum Algorithm for Classification

Quandela has successfully constructed and trained a quantum neural network (QNN) for a supervised learning classification task using our advanced quantum computing platform, Ascella. In designing our approach, we utilized a variational quantum algorithm (VQA) for constructing the QNN. Similar to classical neural networks, where interconnected neurons are represented by numerical weights and biases, a QNN consists of qubits manipulated by quantum gates. The parameters of these quantum gates, analogous to the weights and biases in classical networks, are adjusted during the learning process to minimise some cost function.

The VQA represents a hybrid quantum-classical strategy, where the quantum device prepares a quantum state dependent on a set of parameters, and the classical device estimates a value for that state.

FIG. 5: The trainable blocks parameterised by θ are depicted in blue. The data encoding block is in green. The model is computed by assigning λ parameters to the outputs at the detectors.

We developed our quantum algorithm with a photonic ansatz. This ansatz was designed directly on Ascella by using beam splitters and phase shifters. We then applied this methodology to the Iris dataset, which contains measurements such as petal and sepal sizes from different species of Iris flowers. The task was to predict the Iris species based on these measurements.

FIG. 6: Iris dataset samples

This experiment constitutes an application of a variational quantum classifier with single photons, a notable achievement. This process yielded an accuracy of 92% on the training set and an even higher 95% on the test set after about 15 iterations.

In summary, our work represents an important step in quantum machine learning. By successfully training a QNN on a classic machine learning problem, we have demonstrated the potential for quantum computers to process complex data efficiently. This successful application of a variational quantum classifier with single photons heralds exciting possibilities for the future of quantum computing and machine learning.

SECTION 4: A Quantum-over-Classical Computational Task: Boson Sampling

In a step forward for our quantum computing approach, we have conducted an experiment in Boson Sampling with six single photons.

Let’s explain what this means, starting with Boson Sampling. Think of it as trying to predict how a bunch of photons (particles of light) will distribute themselves after passing through a complex network of mirrors and splitters. A classical analogue, the Galton board, may help to visualize this: balls bounce between pegs and accumulate into a characteristic distribution.

FIG. 7 : Galton board

However, the real quantum task is much more complex. Unlike balls on a Galton board, photons interfere with each other in a quantum mechanical way, leading to a highly complex probability distribution of the output states. Simulating this process with classical computers is incredibly difficult, but a quantum computer, mimicking the physical processes involved, can solve this problem naturally.
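For intuition, the probability of a collision-free output pattern is governed by the permanent of a submatrix of the interferometer’s unitary. Here is a small Python sketch of that calculation (random unitary and illustrative mode choices; the naive permanent is fine at this size):

import itertools
import numpy as np
from scipy.stats import unitary_group

def permanent(M):
    # Naive O(n! * n) permanent, adequate for small examples
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

U = unitary_group.rvs(6)   # a random 6-mode interferometer
inputs = [0, 1, 2]         # one photon in each of the first three modes
outputs = [1, 3, 4]        # a collision-free output pattern
prob = abs(permanent(U[np.ix_(outputs, inputs)])) ** 2
print(prob)                # probability of observing this specific pattern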

Our experiment involves performing Boson Sampling for a record number of six photons with a fully reconfigurable interferometer. An interferometer is a device used to split and recombine beams of light, creating an interference pattern that can be measured.

In the experiment, we measured the occurrences of photon coincidences (when multiple photons are detected at the same time) for up to six photons. We then compared the observed distribution of photon outputs to the ideal distribution predicted by our model.

This accomplishment is a significant stride forward for quantum computing: we have performed Boson Sampling with six single photons with high fidelity, demonstrating this quantum-over-classical computational task.

SECTION 5: Expanding Ascella’s Quantum Computing Capabilities

We are pushing the boundaries of our quantum computing capabilities. As of now, we’ve achieved an operation speed that is quite impressive. And we believe this speed can be further improved by optimizing each component of our system.

One such component is the single-photon source, which generates the tiny particles of light, or photons, that carry information in this system. Currently, it is operating at about 55% efficiency[i]. With advancements in technology, we believe that we can increase this efficiency to up to 96%. This means the platform would generate almost twice as many usable photons as it currently does, a significant increase in data-processing potential.

Additionally, we have plans to increase the size of the interferometer on the chip we’re using. This means the chip can process more information simultaneously, allowing us to run tests on larger molecules than H2 and more sophisticated algorithms in general.

An essential part of this exploration involves GHZ states. These are specific entangled configurations of a quantum system involving three or more particles. Creating and controlling these GHZ states is a significant milestone in quantum computing. It represents our growing ability to manipulate complex quantum entanglement involving multiple particles, a crucial aspect of measurement-based quantum computing (MBQC). Even more impressively, on the Ascella project we managed to generate these GHZ states in a “heralded” way: a separate detection event confirms when a GHZ state has been successfully created.

The successful implementation of these heralded entanglement schemes opens the door to fault-tolerant quantum computing. It’s an exciting time for quantum computing, and we’re thrilled to be at the forefront with Ascella!

References:

1. Nicolas Maring, Andreas Fyrillas, Mathias Pont, Edouard Ivanov, Petr Stepanov et al., A general-purpose single-photon-based quantum computing platform, https://arxiv.org/abs/2306.00874

[i] A single-photon source that operates at 100% efficiency would emit a single photon every time it is triggered. At 55% efficiency, it emits a single photon on 55% of triggering events.

Budgeting with Qubits / Make the most of your photons

Article by Lucija Grba

Linear optical quantum computation uses photons as carriers of information (i.e. qubits) in an optical circuit with beam splitters and phase shifters that process this information. You can read more about how optical quantum computing works here.

Let the different outputs of an optical quantum circuit correspond to the different spending models one can obtain with a specific number of photons, corresponding to a limited budget. Naturally, the more spending models we have access to, the more options we have to choose from, and therefore the higher our likelihood of making an optimal choice. If before we had the categories ‘food’ and ‘travelling’, we will surely benefit from branching these out into ‘groceries’, ‘fast food’, ‘restaurants’ and ‘flights’, ‘public transportation’, etc. While we may have given high importance to the category ‘food’ before, as it is essential to our survival, we can now identify that some categories like ‘fast food’ and ‘restaurants’ aren’t as essential as ‘groceries’, so we have a higher chance of making better choices with a limited budget.

For the creation of a good photon spending model, it’s important to know not only how many photons there are in the budget (how many entered the circuit) and where they were found in the end (on which output modes); it’s also important to quantify the photon expenses on each mode (how many photons are found in each output mode), so you have a better chance of reaching the photon distribution you’re looking for.

Are you looking for a way to do more complex computations with a limited photon budget but you find yourself not knowing your photon distribution at the end of your computation? Fear not, we have you covered! Keep reading to discover how to make the most of your photons with our new feature — pseudo photon number resolution (PPNR or pseudo PNR).

1. Why is PNR important?

The use of photons as quantum processing units has the advantage of satisfying the 5 DiVincenzo conditions for building a quantum computer (remind yourself of these by reading this short article). It also gives access to a truncated Fock space that is larger than the qubit space. We measure this difference by the size of the Hilbert spaces that describe them mathematically. You can read more about the difference between these in one of our previous articles here.

Without PNR, we can only work within a Fock space truncated down to the same size as the qubit space size. The plot above demonstrates the power of PNR in enlarging the computational space size, even with maximum resolution of 2 photons.

In short, using the photonic Fock space gives us a scaling advantage. With the same number of photons, the Fock space is much larger than the qubit space, and therefore allows us to construct and work within a greater computational space. Essentially, we can do more with less, helping you make the most of your photons! This can be viewed as the expansion of the number of categories we consider for our photon spending model we talked about earlier.

We previously stated that we couldn’t count the number of photons in a single output mode with perfect accuracy, limiting our access to some states in the Fock space, i.e. a truncated Fock space. While perfect accuracy is still difficult to achieve, we are happy to share that we have made a great stride towards it!

Perceval users can now configure their modes, declaring them photon number resolving or not. This is an optional feature already introduced in our software and ready for use!

2. PNR integration

You’ve probably heard about microprocessors or central processing units (CPUs). Think of them as the brains of a computer, processing and analysing information using classical operations. They are connected to other parts of the computer through pins, some of which have a predefined function (meaning that they can only perform one function, such as RESET pins) while others are configurable. The configurable ones are called general-purpose input/output (GPIO) pins and are much more useful.

An analogy of the same idea of configurable ports can be applied on optical devices in a quantum processing unit (QPU). A QPU differs from a CPU in the way its computations are performed, harnessing the power of physical behaviours occurring with quantum particles.

Photon detection works based on a grid — changing the position and number of photons found in each mode of the grid changes the state of the system (spending model considered). This is why having access to a PNR detector is imperative to achieving high fidelity. A single photon detector (SPD) would only be able to tell us whether it detected any photons at all (giving us the output 1) or none at all (outputting 0). You can visualise the difference between the capabilities of PNR and SPD below.

PNR vs SPD

In simple terms, PNR detectors allow you to analyse your photon distribution better (as they have access to more spending models) so you naturally have a better chance of finding that distribution you’ve been dreaming of obtaining. Now you might be wondering: how is PNR achieved in a photonic quantum computer?

Our newest feature uses balanced beam splitters attached to the modes the user has declared PNR. Recall that a beam splitter is an optical device containing 2 input modes and 2 output modes. It controls the direction of the photons that pass through it. A balanced beam splitter gives a photon entering through either of its inputs a 50% chance of ending up in either of its outputs (the choice is random), as the photon can either reflect off the beam splitter or be transmitted through it.

transmittance / reflection

Imagine a mode containing 2 photons connected to our PPNR detector. Reaching the balanced beam splitter, these photons will either take the same route, and therefore be found in the same output mode, with 50% probability, or each go their own way and be found in separate modes, also with 50% probability.

There are 4 different ways the 2 photons might pass through the beam splitter, each equally likely to occur (each with 25% probability):

Case 1: both photons are reflected, so both end up in output 1.

Case 2: photon 1 is reflected and photon 2 is transmitted, so they end up in separate outputs.

Case 3: photon 1 is transmitted and photon 2 is reflected, so they again end up in separate outputs.

Case 4: both photons are transmitted, so both end up in output 2.

Since the photons are indistinguishable, cases 2 and 3 correspond to the same outcome: overall there is a 50% probability of the photons splitting up and a 50% probability of them ending up in the same mode.

We call this pseudo PNR because we obtain the photon distribution probabilistically and not deterministically. In simple terms, we do not have full control over whether a photon gets reflected or transmitted as this happens randomly, with a certain probability (50%).

We constructed a set-up that manipulates how our photons travel and gives us a good chance of counting 2 photons when they travel together in a single mode. An example of non-pseudo PNR would be a detector that deterministically determines the number of photons in a mode by measuring the intensity of light: the more photons are found in a mode, the stronger their light will be. If we could deterministically measure this brightness, we would be 100% confident in the photon count, and this would be true PNR.

For now, PPNR can only be used on the first 3 modes of our QPU and its simulator, but this can also help us infer knowledge about other modes. You can test its performance for up to 2-photon number resolution with 50% accuracy. The feature can easily be enabled in a single line of code.
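While the exact one-liner is shown in our documentation, the idea behind it can be sketched in plain Python (a toy Monte Carlo of the balanced beam splitter, not our actual implementation):

import random

def ppnr_split_fraction(n_trials=100_000):
    # Two photons in one input of a balanced beam splitter: each photon
    # independently exits output 0 or output 1 with probability 1/2.
    split = sum(1 for _ in range(n_trials)
                if random.randint(0, 1) != random.randint(0, 1))
    return split / n_trials

print(ppnr_split_fraction())  # about 0.5: co-propagating photons are resolved half the time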

Stay tuned for future releases on more modes!

This is a great step forward, broadening the limits of what is possible to do in our framework and paving the way for some very exciting applications. In quantum machine learning, it allows us to work with more complex learning models as we are able to work in a larger computational space (Fock space). You can find a simple visual explanation for why that is the case here. Furthermore, in quantum algorithms for classification (VQC), access to PNR allows the creation of a highly expressive ansatz, automatically increasing the complexity of the circuit without increasing the number of photons used. You can explore how and why we use VQC on our very own QPU Ascella here.

So, what are you waiting for? Uncover the full potential of our new feature and make the most of your photons!

If you are interested in finding out more about our technology, our solutions or job opportunities, visit quandela.com

Perceval Tech Note: Introducing “shots” in a linear optics quantum computing framework

Article by Raksha Singla

Quantum mechanics is intrinsically random. When a quantum state undergoes an evolution through a quantum process, the output state is a well-defined mathematical object, but we can only access it through measurements. Each measurement produces an output drawn from a probability distribution that follows the Born rule. This could be seen as a bottleneck for a quantum developer compared to classical computation. But when quantum information is used cleverly, we can encode the results of our problem in a few measurement outcomes, enabling us to solve problems that are challenging for classical computing (see the Shor algorithm example here).

What do people call a shot in the context of quantum computing?

One “shot” represents one execution of a quantum circuit.
Given the probabilistic nature of the system, conducting multiple executions (obtaining many shots) is necessary to gather data for a statistical analysis of the algorithm’s operation. The concept of “shots” is universally embraced by most quantum providers, but the precise implementation of shots and their management varies among frameworks, as it depends on the specific characteristics of the hardware system.

How do we define shots at Quandela?

Our computing architecture works with linear optics. We send single photons into a quantum circuit composed of tunable linear optical elements, which perform the processing action on the states of the photons in the Fock space (for an explanation of the Fock space, see here). A photon-coincidence event, i.e. the detection of one or more photons at the output, defines a single execution of a quantum experiment.

Each time a photon in any output mode reaches a detector = SHOT

Our QPU sends single photons at a periodic rate into the chip implementing the circuit designed by the user. These photons may be absorbed at various points within the hardware. Nevertheless, whenever at least one photon is detected at the optical chip’s output (a photon-coincidence event), this marks the end of a single execution, and the measured output constitutes a data sample. Our shot count is thus defined as the number of photon-coincidence events during the computation.

Perceval is able to provide the number of shots needed

A user may not necessarily want to sample single-photon detections; they may specifically want samples with a certain number N (>1) of photon coincidences and request these as the output. In such cases, the system may need to be run for a number of shots exceeding the requested number of samples. Recognizing this user preference, we have incorporated a tool in Perceval to estimate the necessary number of shots based on the user’s desired sample count. The tool performs this estimation by considering the unitary of the implemented circuit, the input state, and the performance characteristics of the device.
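As a back-of-the-envelope illustration (with made-up numbers; the Perceval tool itself folds in the circuit unitary, the input state and the device performance):

import math

def shots_for_samples(n_samples, p_coincidence):
    # If each shot yields the requested N-photon coincidence with
    # probability p_coincidence, we need roughly n_samples / p shots.
    return math.ceil(n_samples / p_coincidence)

print(shots_for_samples(10_000, 0.002))  # 5,000,000 shots for 10,000 samples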

How will shots revolutionise our users’ experience?

Access to Quantum State Distribution:
Given the definition of a shot as the output detected during a single circuit execution, shots offer direct access to the probability distribution of a quantum state.

Predictable output rate:
In a photonic system characterised by effectively instantaneous gate applications and complex input-state preparation timing (see details on demultiplexing here), the time at which a single execution completes varies with the input-state time sequence, the actual configured circuit, and the system transmittance. Working with shots guarantees a predictable output rate, independent of these fluctuations.

Simplified User Interactions:
The incorporation of shots not only standardizes how users run algorithms on our Quantum Processing Unit (QPU) through our cloud services, but also gives them a more standardized parameter for understanding their resource needs. This enhancement contributes to a clearer and more consistent measure.

Predictability for Time and Cost:
- Shots, being highly predictable, offer the most reliable means to estimate the time and cost of running an algorithm.
- This stability in parameter counting results in fixed pricing, ensuring fairness to users and independence from variability in the performance of the physical QPU device.

If you are interested in finding out more about our technology, our solutions or job opportunities, visit quandela.com