Machine learning (ML) is a subfield of artificial intelligence in which algorithms are not explicitly programmed but instead learn directly from data, or from experience, i.e., by interacting with an environment.
When such algorithms are designed to run on quantum computers, we speak of quantum machine learning models.
More generally, quantum machine learning (QML) encompasses the following approaches:
- Quantum ML models for classical problems/data: for example, a quantum recommendation system for user preferences in e-commerce,
- Quantum ML models for quantum problems/data: for example, a quantum neural network that classifies quantum states,
- Classical ML models for quantum problems/data: for example, a neural network-based algorithm for decoding errors in quantum error correction.
Commonly used methods in QML
- Variational quantum algorithms: in these algorithms, the model is represented by a parametrized quantum circuit. The parameters of the circuit are optimized through a classical procedure: for example, gradient descent with a well-chosen loss function. See variational quantum algorithms glossary entry for more details.
- Quantum kernel methods: here, the quantum circuit is used to compute a kernel function between, e.g., two data points. The resulting kernel matrix is then used by a classical method like a support vector machine for tasks such as classification. See quantum kernel methods glossary entry for more details.
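As a toy illustration of the variational approach described above, consider the simplest possible case: a one-parameter "circuit" consisting of a single RY(θ) rotation on |0⟩, whose ⟨Z⟩ expectation value is cos θ. The sketch below (plain Python, no quantum SDK; the function names are our own) minimizes this expectation with gradient descent, computing gradients via the parameter-shift rule, i.e., from circuit evaluations at shifted parameter values, as one would on real hardware:

```python
import math

def expectation(theta):
    """<Z> after applying RY(theta) to |0>; analytically equal to cos(theta)."""
    return math.cos(theta)

def parameter_shift_grad(theta):
    """Gradient via the parameter-shift rule:
    dE/dtheta = [E(theta + pi/2) - E(theta - pi/2)] / 2,
    which requires only two extra circuit evaluations."""
    return (expectation(theta + math.pi / 2)
            - expectation(theta - math.pi / 2)) / 2

# Classical optimization loop: plain gradient descent on the circuit parameter.
theta, lr = 0.1, 0.4
for _ in range(100):
    theta -= lr * parameter_shift_grad(theta)

print(theta)  # converges toward pi, where <Z> = -1 is minimal
```

On a real device, `expectation` would be estimated from measurement samples rather than computed exactly, which is where sampling noise enters the training loop.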
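A quantum kernel can be sketched in the same minimal setting, assuming the simplest feature map one can write down (angle-encoding a scalar into a single qubit; names and data are illustrative). The kernel is the fidelity |⟨ψ(x)|ψ(y)⟩|² between the encoded states, which for this particular map reduces to cos²((x − y)/2):

```python
import math

def feature_state(x):
    """Angle-encode a scalar x into the single-qubit state RY(x)|0>,
    represented here by its real amplitudes (cos(x/2), sin(x/2))."""
    return (math.cos(x / 2), math.sin(x / 2))

def quantum_kernel(x, y):
    """Fidelity kernel |<psi(x)|psi(y)>|^2 between two encoded points."""
    a, b = feature_state(x), feature_state(y)
    overlap = a[0] * b[0] + a[1] * b[1]
    return overlap ** 2

# Gram matrix over a toy dataset of three scalars.
data = [0.0, 0.5, 3.0]
K = [[quantum_kernel(x, y) for y in data] for x in data]
```

The resulting Gram matrix `K` is symmetric with ones on the diagonal, and could be handed to a classical kernel method, e.g. scikit-learn's `SVC(kernel='precomputed')`, for the classification step.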
Challenges in QML
Quantum machine learning is an active research area; the following topics are currently being studied by research groups worldwide:
- Training models efficiently. QML models can be challenging to train: unlike for classical neural networks, the backpropagation algorithm does not apply in general, and phenomena like barren plateaus, where the variance of the loss vanishes exponentially with the size of the system, can stall optimization. Sampling noise and physical hardware noise must also be accounted for when implementing QML models on real devices.
- Designing models with proven advantages or speedups. Whether such proofs are required is debated in the community: some scientists argue that heuristic performance improvements are enough to support the usefulness of QML, while others consider proving speedups necessary and focus on designing models where complexity-theoretic arguments apply.
- Benchmarking models. In classical ML, several well-known datasets have been used to benchmark models over the years, such as the MNIST handwritten digits in image classification or the GLUE benchmark in natural language processing. Standardized benchmarks of this kind are still missing for QML.
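The barren-plateau phenomenon mentioned above can be illustrated with a deliberately simple toy model (chosen here for illustration, not a realistic entangling circuit): for a product of single-qubit RY(θᵢ) rotations on |0…0⟩, the ⟨Z⊗…⊗Z⟩ expectation is ∏ᵢ cos θᵢ, and the variance of its gradient over uniformly random parameters decays as 2⁻ⁿ in the number of qubits n:

```python
import math
import random

def grad_variance(n_qubits, n_samples=20000, seed=0):
    """Monte-Carlo estimate of Var[dE/dtheta_1] over random parameters.

    The toy 'model' is E(theta) = prod_i cos(theta_i), the <Z x ... x Z>
    expectation of a product of single-qubit RY(theta_i) rotations, so
    dE/dtheta_1 = -sin(theta_1) * prod_{i>1} cos(theta_i).
    Analytically, the variance equals 2**(-n_qubits).
    """
    rng = random.Random(seed)
    grads = []
    for _ in range(n_samples):
        thetas = [rng.uniform(0, 2 * math.pi) for _ in range(n_qubits)]
        g = -math.sin(thetas[0])
        for t in thetas[1:]:
            g *= math.cos(t)
        grads.append(g)
    mean = sum(grads) / n_samples
    return sum((g - mean) ** 2 for g in grads) / n_samples

for n in (2, 4, 8):
    print(n, grad_variance(n))  # variance shrinks roughly as 2**-n
```

Since the gradient concentrates exponentially around zero, exponentially many measurement shots are needed to resolve it from sampling noise, which is precisely what makes such landscapes hard to train on.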
Frequently asked questions about quantum machine learning
- What are the main applications in QML? Some applications include optimization problems (e.g. supply chains, logistics), as well as drug discovery and materials science. Researchers believe that QML may be especially useful when applied to quantum data — either quantum states or classical data collected from quantum processes, like in high-energy physics.
- Is QML only meant for NISQ hardware? No. QML algorithms can also be executed on error-corrected quantum computers, and some, like quantum principal component analysis, even require error correction. That said, part of the appeal of approaches such as quantum kernel methods or variational quantum algorithms is precisely that they can be run on near-term devices.
- How advanced is quantum ML compared to classical ML? QML is still in its early stages, as current quantum computers are not yet powerful enough to outperform classical ML on most tasks. However, quantum hardware and algorithms are advancing rapidly, and QML could become practical within the next decade. In the meantime, many researchers turn to hybrid approaches integrating quantum elements inside classical ML architectures.