Quantum Computing Glossary

Decoding

Quantum computers are intrinsically noisy. Quantum error correction protects quantum information by encoding each logical qubit into several physical qubits, introducing redundancy that makes errors detectable and correctable.

During an error correction cycle, to try to preserve the information, one performs measurements on the system to obtain what is called a syndrome. This syndrome, usually a binary vector, gives information about the types and locations of the errors that may have occurred on the physical qubits.
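For concreteness, here is a minimal sketch of syndrome extraction for the 3-qubit repetition code (a toy illustration, not part of the glossary itself): the syndrome is the parity-check matrix applied to the binary error vector, s = He mod 2.

```python
import numpy as np

# Parity-check matrix of the 3-qubit repetition code (protecting against
# bit flips): the rows correspond to the checks Z1Z2 and Z2Z3.
H = np.array([[1, 1, 0],
              [0, 1, 1]])

def syndrome(error):
    """Syndrome of a binary error vector e: s = H e (mod 2)."""
    return (H @ error) % 2

e = np.array([0, 1, 0])   # a bit flip on the middle qubit
print(syndrome(e))        # [1 1] -- both checks are violated
```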

One can use a lookup table to associate each syndrome with an error. Unfortunately, the number of possible syndromes grows exponentially with the number of measured checks, which makes a lookup table intractable for all but the smallest codes. This is where decoding comes into play. A classical algorithm, called a decoder, takes the syndrome as input and outputs the most likely recovery operation, so that the combined effect of the error and the recovery acts trivially on the logical qubits, thus correcting the errors that occurred.
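For a code as small as the repetition code above, the lookup table is easy to build explicitly. The sketch below (an illustration, assuming independent bit-flip noise with flip probability below 1/2, so that the lowest-weight error is also the most likely one) enumerates all errors and keeps the most likely one for each syndrome:

```python
from itertools import product

import numpy as np

H = np.array([[1, 1, 0],
              [0, 1, 1]])

# Map each syndrome to the lowest-weight error that produces it -- the most
# likely error under independent bit-flip noise with flip probability < 1/2.
table = {}
for bits in product([0, 1], repeat=3):
    e = np.array(bits)
    s = tuple((H @ e) % 2)
    if s not in table or e.sum() < table[s].sum():
        table[s] = e

def decode(s):
    """Return the most likely error for syndrome s; re-applying it corrects."""
    return table[tuple(s)]

print(decode((1, 1)))   # [0 1 0] -- flip the middle qubit back
```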

Exact decoding for general stabilizer codes can be computationally very hard, so practical decoders trade some accuracy for speed and ease of implementation.

Common families of decoders include:

  • Lookup tables, for small codes (e.g., the Steane code).
  • Matching decoders, of which Minimum Weight Perfect Matching (MWPM) is the most common (see the sketch after this list).
  • Message-passing decoders, such as Belief Propagation (BP), often combined with post-processing techniques such as Ordered Statistics Decoding (OSD).
  • Union-Find (UF) decoding, a near-linear-time decoder for the surface code.
  • Machine-learning-based decoders (CNN/GNN/transformer), which offer fast inference but need calibration and robustness checks.
  • Ensemble decoding, which combines different decoders to improve accuracy.
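
As a concrete example of a matching decoder, here is a minimal sketch using the open-source PyMatching library (assumed to be installed; the distance-5 repetition code is chosen purely for illustration):

```python
import numpy as np
import pymatching  # third-party MWPM decoder: pip install pymatching

# Distance-5 repetition code: d qubits, d-1 adjacent-pair parity checks.
d = 5
H = np.zeros((d - 1, d), dtype=np.uint8)
for i in range(d - 1):
    H[i, i] = H[i, i + 1] = 1

matching = pymatching.Matching(H)   # matching graph built from the checks

error = np.array([1, 1, 0, 0, 0], dtype=np.uint8)
s = (H @ error) % 2                 # measured syndrome: [0 1 0 0]
correction = matching.decode(s)     # minimum-weight recovery operation

residual = (error + correction) % 2
print(residual)                     # all zeros: the error is fully undone
```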

Frequently asked questions  

  1. What are the main metrics to evaluate the performance of a decoder? A decoder is typically evaluated by its accuracy (the probability that it outputs a correct logical correction given the measured syndrome), its latency (the time it takes to output a correction), its resource cost (the memory and computational power required to run it), and its hardware implementability (e.g., whether it can be implemented on FPGAs or ASICs). A Monte Carlo sketch of estimating accuracy appears after this list.
  2. Does the decoder have an impact on the threshold? Yes, the decoder's accuracy influences the threshold, the physical error rate below which increasing the code distance improves the suppression of logical errors. A more accurate decoder pushes the threshold towards higher physical error rates.
  3. Does the noise model affect the decoder design and performance? The effectiveness of a decoder depends heavily on the noise model (i.e., how errors occur on the physical qubits). Decoders tailored to a specific noise model (e.g., depolarizing noise, biased noise, erasures) can exploit its statistical structure to improve accuracy. Conversely, if the true noise deviates from the assumed model, the decoder's performance may degrade.
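
To make the first two answers concrete, here is a toy Monte Carlo sketch (the repetition code, iid bit-flip noise, and majority-vote decoding are all simplifying assumptions) that estimates the logical failure rate and shows the below-threshold suppression with distance:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def logical_error_rate(d, p, trials=100_000):
    """Monte Carlo estimate of the logical failure rate of a distance-d
    repetition code under iid bit-flip noise of strength p, decoded by
    majority vote (the optimal decoder for this code when p < 1/2)."""
    flips = rng.random((trials, d)) < p       # sample bit-flip patterns
    # Majority vote fails exactly when more than half of the bits flip.
    return np.mean(flips.sum(axis=1) > d // 2)

for d in (3, 5, 7):
    rates = [logical_error_rate(d, p) for p in (0.01, 0.05, 0.2)]
    print(d, [f"{r:.4f}" for r in rates])
# For p below this code's threshold, increasing d suppresses the failure rate.
```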