
Degeneracy is the feature of quantum error correcting codes whereby they can sometimes correct more errors than they can uniquely identify. It seems that codes exhibiting this characteristic can outperform nondegenerate quantum error correction codes.

I am wondering whether there exists some kind of measure, or classification method, to determine how degenerate a quantum code is, and whether there has been any study of how the error correction abilities of quantum codes depend on their degeneracy.

Apart from that, it would be interesting to have references or some intuition about how to construct good degenerate codes, or simply references on the current state of the art on these issues.

glS

2 Answers


I don't have a complete answer, but perhaps others can improve on this starting point.

There are probably 3 things to ask about the code:

  1. How degenerate is it?

  2. How hard is it to perform the classical post-processing of the error syndrome in order to determine which corrections to make?

  3. What are its error correcting/fault-tolerant thresholds?

I suppose a simple enough measure of degeneracy is the extent to which the Quantum Hamming Bound is surpassed. A non-degenerate $[[N,k,d]]$ code must satisfy:

$$2^{N-k}\geq\sum_{n=0}^{\lfloor (d-1)/2\rfloor}3^n\binom{N}{n}$$

So the amount by which that bound is violated suggests something interesting about how densely the information is packed. Of course, that's no use if your mega-degenerate code cannot actually correct any errors. Similarly, if your code is in principle awesome, but it takes too long to work out what the correction is, it isn't really any use in practice, because errors will continue to accumulate while you try to work out which corrections to apply.
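As a rough illustration of that measure (my own sketch, not something taken from the literature), one can compute how much headroom an $[[N,k,d]]$ code has against the non-degenerate Quantum Hamming Bound; the helper name `hamming_bound_surplus` is made up for this example:

```python
from math import comb, log2

def hamming_bound_surplus(N, k, d):
    """Return log2(2^(N-k)) - log2(sum_{n<=t} 3^n C(N,n)) with t = (d-1)//2.

    Positive: the non-degenerate Quantum Hamming Bound holds with room to
    spare. Negative: the bound is violated, which only a degenerate code
    can do, so -surplus is one crude "amount of degeneracy".
    """
    t = (d - 1) // 2
    syndromes = 2 ** (N - k)                                  # distinct syndromes
    errors = sum(3 ** n * comb(N, n) for n in range(t + 1))   # Paulis to identify
    return log2(syndromes) - log2(errors)

# The [[5,1,3]] code saturates the bound exactly; the degenerate [[9,1,3]]
# Shor code still satisfies it comfortably, so this measure is quite coarse.
print(hamming_bound_surplus(5, 1, 3))   # 0.0
print(hamming_bound_surplus(9, 1, 3))   # ~3.19
```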

DaftWullie

It seems to me that the feature is only interesting if one introduces some notion of weight of the stabilizers and errors. The distance of the code does this implicitly, since it cares about detecting all Pauli words of weight up to $d-1$ (equivalently, correcting those of weight up to $\lfloor(d-1)/2\rfloor$).

Otherwise, all $[[n,k,\cdot]]$ stabilizer codes are degenerate. They will have $n-k$ stabilizer generators. You will thus have $2^{n-k}$ disjoint correctable error classes (one per syndrome configuration). Each of these classes is itself composed of $2^{n-k}$ different Pauli errors ($e$ and the other Paulis which are equivalent to $e$ up to stabilizers).
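As a concrete instance of that counting (my example, not the answer's): a $[[5,1,3]]$ code has $n-k=4$ generators, hence $2^{4}=16$ syndromes, and each class $e\mathcal{S}$ contains $|\mathcal{S}|=2^{4}=16$ Pauli operators that act identically on the code space.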

Intuitively, I would expect a measure of degeneracy to quantify how many low-weight errors coincide in each class. One could say that counting only those Pauli words of weight lower than $d$ is a way to do it. By this definition, any code which has a stabilizer of weight lower than $d$ is degenerate (that is the Yes/No approximation to quantifying degeneracy, which coincides with the usual definition). In the nondegenerate case, each class has at most one Pauli word of weight less than $d/2$ in it.
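To make that Yes/No check concrete, here is a minimal sketch (my own, not part of the answer) that enumerates the stabilizer group of the $[[9,1,3]]$ Shor code in binary symplectic form and counts the nontrivial elements of weight lower than $d$; phases are ignored since they do not affect the weight:

```python
from itertools import product
import numpy as np

n, d = 9, 3

# A Pauli as a length-2n binary vector: first n bits X support, last n bits Z support.
def pauli(xs=(), zs=()):
    v = np.zeros(2 * n, dtype=int)
    for q in xs: v[q] = 1
    for q in zs: v[n + q] = 1
    return v

# Stabilizer generators of the [[9,1,3]] Shor code.
gens = [
    pauli(zs=(0, 1)), pauli(zs=(1, 2)),      # weight-2 Z checks, block 1
    pauli(zs=(3, 4)), pauli(zs=(4, 5)),      # block 2
    pauli(zs=(6, 7)), pauli(zs=(7, 8)),      # block 3
    pauli(xs=(0, 1, 2, 3, 4, 5)),            # X^6 across blocks 1-2
    pauli(xs=(3, 4, 5, 6, 7, 8)),            # X^6 across blocks 2-3
]

def weight(v):
    # A qubit is in the support if either its X or Z bit is set.
    return int(np.count_nonzero(v[:n] | v[n:]))

# Enumerate all 2^(n-k) stabilizer elements as XOR-combinations of generators
# (multiplication up to phase is addition mod 2 of the symplectic vectors).
low_weight = 0
for bits in product((0, 1), repeat=len(gens)):
    s = np.zeros(2 * n, dtype=int)
    for b, g in zip(bits, gens):
        if b:
            s ^= g
    if 0 < weight(s) < d:
        low_weight += 1

print(low_weight)  # prints 9: the Shor code has weight-2 stabilizers, so it is degenerate
```

A smoother measure along these lines might, for instance, record the full weight distribution of the stabilizer group, or the number of low-weight Paulis per class, rather than just whether anything falls below $d$.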

However, I suspect there should be a smoother (and possibly still computable) way to quantify this, which is probably where the question is coming from.