
Consider the nine-qubit Shor code. This can detect and correct arbitrary single-qubit errors, but if there are two or more single-qubit errors before a correction round, the correction will fail (in the sense that it won't reproduce the original logical state). Hence the probability of failure of the Shor code is the probability of there being two or more single-qubit errors. If we assume that single-qubit errors occur independently with probability $p$, then we can calculate this probability as

$$ P(\mathrm{failure}) = 1 - \left[(1-p)^{9} + 9p(1-p)^{8}\right]. $$

We can write this probability as a series in $p$ as

$$ P(\mathrm{failure}) = \sum_{m=2}^{9} (-1)^{m} (m-1) \binom{9}{m} p^{m}. $$
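
As a quick consistency check (a minimal sketch using Python/sympy; the symbol names are just for illustration), one can expand the closed form above and confirm it matches the stated series:

```python
# Sketch: verify that 1 - [(1-p)^9 + 9p(1-p)^8] expands to
# sum_{m=2}^{9} (-1)^m (m-1) C(9, m) p^m.
from sympy import symbols, binomial, expand

p = symbols('p')

closed_form = 1 - ((1 - p)**9 + 9*p*(1 - p)**8)
series_form = sum((-1)**m * (m - 1) * binomial(9, m) * p**m for m in range(2, 10))

# The difference expands to exactly zero if the two expressions agree.
assert expand(closed_form - series_form) == 0
```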

My question is about the intuition for the coefficients of each term in this sum.

The $\binom{9}{m} p^{m}$ part seems natural. We can interpret $p^{m}$ as the probability that errors occur on a particular set of $m$ qubits, and $\binom{9}{m}$ as the number of such sets the errors could act on.

Question: What I am confused by is the $(-1)^{m} (m-1)$ part of the coefficient. Why should some of the higher-order terms lead to a suppression of the failure probability, and why should the weighting of these terms beyond the binomial coefficient increase with order? Is there some intuition for these coefficients?

Note 1: I realise that the $m=2$ case is the most important, since we typically assume $p \ll 1$. Indeed, most treatments truncate the series at $m=2$, where $(-1)^{m} (m-1) = 1$, so even if the higher-order terms do have an effect, that effect should be fairly negligible.
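
Concretely, truncating at $m=2$ gives the leading-order estimate $P(\mathrm{failure}) \approx \binom{9}{2} p^{2} = 36 p^{2}$.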

Note 2: Though I have specified the Shor code for concreteness, this behaviour is not specific to the Shor code. For example, if we restrict ourselves to bit-flip errors, then the failure probability of the bit-flip code will exhibit similar coefficients.
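
For instance, for the three-qubit bit-flip code under independent bit-flip errors with probability $p$, the same counting gives

$$ P(\mathrm{failure}) = 1 - \left[(1-p)^{3} + 3p(1-p)^{2}\right] = 3p^{2} - 2p^{3} = \sum_{m=2}^{3} (-1)^{m}(m-1)\binom{3}{m} p^{m}, $$

with exactly the same coefficient structure.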


1 Answer


When a given set of Pauli errors acts on a stabilizer code, it will either be corrected, or the attempted correction will lead to a logical error. There are no more complicated cases, such as a superposition of the two.

This makes calculating the failure probability rather straightforward (though not necessarily practical for large codes). You just take the probabilities of all the uncorrectable sets of errors and sum them up.

Even though this is a simple sum of positive terms, there are ways it could be written with the occasional minus sign. For example, you could sum the probabilities of all correctable sets of errors, and then subtract that total from $1$. Such subtractions do not imply a suppression of the failure probability; they just show that the previous terms overestimated it.

Minus signs most often arise because the probability of no error on a qubit is written as $1-p$. If we expand out products in which this is a factor, we will get negative terms. But that is simply because we chose to write the expression as a polynomial in the single probability $p$, rather than using both $p$ and $1-p$ for their respective events.
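
For example, already for two qubits the probability that neither is errored is $(1-p)^{2} = 1 - 2p + p^{2}$: the negative term does not signal any suppression, it is just an artefact of expanding in $p$ alone.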

This is what is happening in your case. The factor $p^{m}$ is an overestimate of the probability of errors on $m$ qubits, because it doesn't account for what the remaining $9-m$ qubits are doing, and for the corresponding contributions to the total probability. Hence some subtractions also need to be made.
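
To make this concrete (a minimal sketch using Python/sympy, assuming independent single-qubit errors with probability $p$ and treating every pattern of two or more errors as uncorrectable): the failure probability can be written as a sum of positive terms, one for each number of errored qubits, and re-expanding each $(1-p)^{9-m}$ factor as a polynomial in $p$ alone is what produces the alternating $(-1)^{m}(m-1)$ coefficients.

```python
# Sketch: the same failure probability written two ways.
from sympy import symbols, binomial, expand, Poly

p = symbols('p')

# Positive sum over the number of errored qubits m: each term is the
# probability that some specific set of m qubits is errored and the
# remaining 9 - m qubits are error-free.
positive_sum = sum(binomial(9, m) * p**m * (1 - p)**(9 - m) for m in range(2, 10))

# Re-expanding as a polynomial in p alone reveals the alternating coefficients.
coeffs = Poly(expand(positive_sum), p).all_coeffs()[::-1]  # index = power of p
for m in range(2, 10):
    assert coeffs[m] == (-1)**m * (m - 1) * binomial(9, m)
```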
