What is the HHL algorithm?
The algorithm is named after its authors Harrow, Hassidim and Lloyd. It is a quantum algorithm mainly used for extracting information about the solution of large systems of linear equations. It was initially considered one of the most promising candidates for a robust (theoretically proven) quantum speedup, alongside Grover's and Shor's algorithms. More specifically, it was proven that under specific assumptions on the system (sparse, with a low condition number), and provided that one is interested only in specific properties of the solution vector rather than the full vector, an exponential speedup can be achieved. In this case the quantum algorithm finds a solution in \( \mathcal{O}(\log(N)\kappa^{2}) \) (with \( N \) being the number of variables in the system and \( \kappa \) the condition number), while the best known classical algorithm requires \( \mathcal{O}(N\kappa) \).
The algorithm was considered highly important because it can serve as a subroutine in many different applications and problems. Some of the most important applications are machine learning and data analysis tasks (e.g., recommendation systems, accelerating the training of LLMs, data dimensionality reduction). It can also be used for problems involving differential equations (e.g., battery design) or optimisation (e.g., portfolio optimisation).
How does it work?
The HHL algorithm is built to solve linear systems of equations of the form \( Ax = b \), with \( A \) a Hermitian, sparse and well-conditioned matrix. The output of the algorithm is a quantum state proportional to:
$$ |x\rangle = \frac{A^{-1}|b\rangle}{|| A^{-1}|b\rangle ||} $$
The algorithm consists of four main sub-processes:
- Quantum state preparation: this step encodes the right-hand side of the linear system as a quantum state \( |b\rangle \).
- Quantum Phase Estimation (QPE): applies the unitary \( e^{iAt} \) to extract the eigenvalues \( \lambda_{j} \) of \( A \) into an ancilla register (while leaving the eigenvectors untouched).
- Controlled rotations: a series of rotations controlled on the eigenvalue register performs an operation equivalent to applying \( A^{-1} \) (eigenvalue inversion). This step also involves measuring an ancilla qubit, whose outcome is either \( |0\rangle \) or \( |1\rangle \). When the outcome is \( |0\rangle \), the run is discarded and the procedure is repeated until the result is \( |1\rangle \).
- Inverse QPE: the final step applies the inverse QPE, which uncomputes and cleans up the ancilla registers. It ensures that only the encoded solution is kept, in normalized form.
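The four steps above can be emulated classically with a small numpy sketch (the matrix and vector below are illustrative choices, not taken from the text): diagonalizing \( A \) plays the role of QPE, and dividing each eigen-component by its eigenvalue plays the role of the controlled rotations.

```python
import numpy as np

# Toy Hermitian, well-conditioned system (illustrative values).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 0.0])

# Step 1 (state preparation): encode b as a normalized state |b>.
b_state = b / np.linalg.norm(b)

# Step 2 (QPE analogue): eigen-decompose A = V diag(lam) V^dagger.
lam, V = np.linalg.eigh(A)

# Step 3 (controlled rotations analogue): invert each eigenvalue.
coeffs = V.conj().T @ b_state   # amplitudes of |b> in the eigenbasis
inverted = coeffs / lam         # multiply each component by 1/lambda_j

# Step 4 (inverse QPE): rotate back and normalize, giving |x>.
x_state = V @ inverted
x_state = x_state / np.linalg.norm(x_state)

# Check against the definition |x> = A^{-1}|b> / ||A^{-1}|b>||.
x_ref = np.linalg.solve(A, b_state)
x_ref = x_ref / np.linalg.norm(x_ref)
print(np.allclose(x_state, x_ref))  # True
```

Of course, the quantum algorithm never writes the eigen-decomposition down explicitly; that is exactly where its potential advantage comes from.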
Finally, in order to obtain a meaningful classical result from the quantum state \( |x\rangle \), we need to compute expectation values:
$$ \langle x| M |x\rangle $$
for a specific observable \( M \). In simple terms, the procedure described above applies the operator \( A^{-1} \) to the quantum state \( |b\rangle \).
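This kind of scalar readout is the quantity HHL is actually designed to deliver. A classical sketch of \( \langle x| M |x\rangle \), with the Pauli-Z matrix as an illustrative (not prescribed) choice of observable:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])

# Normalized solution state |x> (computed classically for illustration).
x = np.linalg.solve(A, b)
x = x / np.linalg.norm(x)

# An example observable M (Pauli-Z, an arbitrary illustrative choice).
M = np.array([[1.0, 0.0], [0.0, -1.0]])

# <x| M |x>: a single scalar, not the full solution vector.
expectation = x.conj() @ M @ x
print(expectation)  # ≈ 0.6
```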
Challenges:
There are limitations on the properties of the matrix \( A \): to achieve efficient Hamiltonian simulation, it has to be sparse and well-conditioned. A further major challenge is that when the full solution vector is required, the exponential quantum advantage vanishes. This happens because the output of the algorithm is a quantum state \( |x\rangle \) whose amplitudes cannot be read out directly. Extracting the full vector requires additional post-processing such as tomography, which in general requires an exponential number of measurements and therefore kills the exponential speed-up. That is why HHL is often used as a subroutine in larger quantum algorithms. Moreover, HHL is hard to implement on real devices because its resource requirements are immense. It assumes access to a quantum state \( |b\rangle \) (encoding the right-hand side of \( Ax = b \)), which is difficult to prepare from classical data; this would require a quantum RAM, but such structures are not yet possible with current hardware. In addition, the very deep circuits needed make an efficient demonstration of the algorithm on NISQ devices almost impossible, as noise strongly affects the result.
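The structural assumptions on \( A \) are easy to check classically. A minimal sketch (the function name and the sparsity/condition thresholds are arbitrary choices for illustration):

```python
import numpy as np

def hhl_friendly(A, max_nonzeros_per_row=2, max_condition=10.0):
    """Rough check of the assumptions HHL needs:
    Hermitian, row-sparse, and well-conditioned.
    The thresholds here are arbitrary illustrative values."""
    hermitian = np.allclose(A, A.conj().T)
    sparse = (np.count_nonzero(A, axis=1) <= max_nonzeros_per_row).all()
    kappa = np.linalg.cond(A)
    return hermitian and sparse and kappa <= max_condition

A_good = np.array([[2.0, 1.0], [1.0, 2.0]])
print(hhl_friendly(A_good))  # True: Hermitian, 2-sparse, kappa = 3
```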
Classical simulation – Dequantization:
Some years after the publication of the HHL algorithm, breakthrough work by Ewin Tang demonstrated that many quantum algorithms for solving linear-algebra problems (including HHL applications) fail to provide an exponential quantum advantage if a classical algorithm is granted the same kind of access to the data as its quantum counterpart. Under this assumption, she proved that there are "quantum-inspired" classical algorithms that match the performance of the quantum algorithms up to polynomial factors and improve on the best previously known classical algorithms (low-rank matrix inversion, PCA, recommendation systems). It is important to mention that these works did not eliminate all possible quantum advantages of HHL, as they apply only to low-rank matrices. Still, they killed some clear early examples of exponential advantage that had been proposed in the literature. In addition, Tang's work helped clarify whether an exponential quantum advantage is genuinely algorithmic or comes mainly from powerful data-access assumptions.
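The low-rank structure these results exploit can be illustrated classically. The sketch below is not Tang's sampling algorithm; it only shows the underlying idea that for a rank-\( r \) matrix, inverting the \( r \) dominant eigen-directions recovers the pseudoinverse solution without ever needing the full spectrum (all names and values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 100, 3
U = rng.standard_normal((n, r))
A = U @ U.T                      # Hermitian and exactly rank 3
b = rng.standard_normal(n)

# Invert only the r dominant eigen-directions: the low-rank structure
# that quantum-inspired classical algorithms take advantage of.
vals, vecs = np.linalg.eigh(A)
top = np.argsort(np.abs(vals))[-r:]
x_lowrank = vecs[:, top] @ ((vecs[:, top].T @ b) / vals[top])

# Reference: the Moore-Penrose pseudoinverse solution A^+ b.
x_ref = np.linalg.pinv(A, rcond=1e-10) @ b
print(np.allclose(x_lowrank, x_ref, atol=1e-6))  # True
```

Tang's actual algorithms additionally assume length-square (l2-norm) sampling access to the data, which is the classical analogue of the state-preparation access HHL assumes.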
Frequently asked questions about the HHL algorithm:
- Does HHL give an exponential speedup over classical linear-system solvers?
The HHL algorithm can provide an exponential quantum advantage, but only under specific conditions: the matrix \( A \) has to be sparse and well-conditioned, the quantum state \( |b\rangle \) must be efficiently preparable, and the user must be interested not in the whole solution vector but only in some of its properties. Moreover, quantum-inspired classical algorithms were developed (by Tang and others) that achieve comparable complexity under similar assumptions about access to the data. It is important to mention that solving systems of linear equations is BQP-complete. This means that if HHL were fully dequantizable, BQP would collapse to BPP, which is widely believed to be unlikely; it follows that there are problems for which HHL can provide significant speedups.
- Is HHL useful on today’s NISQ or early Fault-Tolerant devices?
Implementing the HHL algorithm on a real quantum device is a challenging task. The algorithm includes heavy quantum subroutines (Hamiltonian simulation, QPE) that require very deep circuits and high gate fidelities, so it is strongly affected by noise on NISQ devices. In practice, it is therefore not possible to implement the HHL algorithm efficiently on current noisy (or even early fault-tolerant) hardware.
- Is HHL still relevant despite the dequantization results?
Tang's dequantization results revealed that, for specific examples and assumptions, the exponential quantum advantage does not come from intrinsically quantum processes but from granting the model strong access to the data; under such access it is possible to build classical algorithms that achieve comparable results. However, there still exist cases for which the exponential quantum advantage holds, for example when the input matrix is generated naturally by a quantum process, or when it is sparse and of high rank.
