The toric code and other popular codes can be decoded using minimum weight perfect matching. Is this an optimal decoder? By optimal, I mean here that it gives the best logical error rate vs. physical error rate performance in a depolarizing channel. "Threshold" is often used to characterize toric codes; does that assume optimal decoding or a particular type of decoder?
2 Answers
> Is [minimum weight perfect matching] an optimal decoder?
No, it's not optimal. For example, it uses the weight of the shortest path between two detection events as an approximation for the contributions of all topologically equivalent paths. An optimal decoder would exactly compute the contributions of all possible errors consistent with the syndrome, instead of focusing on the most likely one.
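To make the distinction concrete, here is a minimal sketch (in Python, using a classical 3-bit repetition code with independent bit flips at rate $p$; the names are illustrative, not from any particular library) contrasting the two strategies: the matching-style decoder keeps only the single lowest-weight error consistent with the syndrome, while the optimal (maximum-likelihood) decoder sums the probabilities of every consistent error within each logical class and picks the more probable class.

```python
import itertools
import numpy as np

# 3-bit repetition code with independent bit flips at rate p (illustrative only).
H = np.array([[1, 1, 0],   # parity checks on bit pairs (0,1) and (1,2)
              [0, 1, 1]])
LOGICAL = np.array([1, 1, 1])  # support of the logical operator

def decode_both_ways(syndrome, p):
    """Return (min-weight class, maximum-likelihood class) for a given syndrome."""
    best_weight, mw_class = None, None
    class_prob = {0: 0.0, 1: 0.0}
    for bits in itertools.product([0, 1], repeat=3):
        e = np.array(bits)
        if not np.array_equal(H @ e % 2, syndrome):
            continue  # inconsistent with the observed syndrome
        w = int(e.sum())
        cls = int(LOGICAL @ e % 2)
        class_prob[cls] += p**w * (1 - p)**(3 - w)  # ML: sum the whole coset
        if best_weight is None or w < best_weight:
            best_weight, mw_class = w, cls          # MWPM analogue: lightest error only
    ml_class = max(class_prob, key=class_prob.get)
    return mw_class, ml_class

print(decode_both_ways(np.array([1, 1]), p=0.1))  # -> (1, 1): both agree here
```

For this tiny code the two always agree; in highly degenerate codes like the toric code, where many topologically equivalent paths contribute, the summed class probabilities can overrule the single most likely error.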
"Threshold" is often used to characterize toric codes, does that assume optimal decoding or a particular type?
Thresholds are always relative to a decoder. And ideally that decoder is something that runs in polynomial time, instead of a hypothetical optimal decoder which likely takes exponential time.
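As a concrete illustration of a decoder-specific threshold estimate, here is a hedged sketch assuming the stim and pymatching packages (the helper function is illustrative, not from either library): simulate a surface-code memory experiment at several distances and physical error rates with a fixed MWPM decoder, and look for the crossing point of the logical error rate curves.

```python
import numpy as np
import stim
import pymatching

def logical_error_rate(distance, p, shots=10_000):
    """Estimate the logical error rate of a surface-code memory experiment
    decoded with MWPM. The resulting threshold is specific to this decoder."""
    circuit = stim.Circuit.generated(
        "surface_code:rotated_memory_z",
        distance=distance,
        rounds=distance,
        after_clifford_depolarization=p,
    )
    matcher = pymatching.Matching.from_detector_error_model(
        circuit.detector_error_model(decompose_errors=True)
    )
    detections, observables = circuit.compile_detector_sampler().sample(
        shots, separate_observables=True
    )
    predictions = matcher.decode_batch(detections)
    return np.mean(np.any(predictions != observables, axis=1))

# Curves for different distances cross near the (decoder-specific) threshold.
for d in (3, 5, 7):
    print(d, [logical_error_rate(d, p) for p in (0.005, 0.01, 0.02)])
```

The crossing point shifts if the decoder is swapped out, which is exactly why a quoted threshold should name its decoder.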
In light of the comments above, I'm going to try to answer this question, with the caveat that I am taking my first steps in this field so I might have errors here.
TL;DR: it seems to me that in an independent-error model, the MWPM decoder is an optimal decoder for the Pauli frame, rather than for the logical outcome. Thus, it is the optimal decoder if we only use the syndrome record. However, if we have additional information about the structure of the logical state $|\psi_L\rangle$ and the data outcomes, we can do better.
Why did I reach this conclusion?
Optimal decoder
The error correction procedure uses an ancilla measurement record $s$ and a data qubit measurement record $d$ to infer a logical observable outcome $L$ (which can be 0 or 1, depending on whether the measured state is an eigenstate of the measured logical operator with eigenvalue +1 or -1, respectively).
We want to maximize the probability $p(L'=L)$, where $L'$ is the inferred logical outcome.
What we need to do is Bayesian inference, as was commented above, i.e., we need to calculate:
$$ L' = \arg\max_L{p(L|s,d)} = \arg\max_L{p(s,d|L)p(L)}. $$
Now, the data qubits are determined by the Pauli frame and the encoded logical state in the following way:
$$ d = d_L + F \pmod{2}, $$
where $d_L$ is the outcome record one would obtain by measuring the state $|\psi_L\rangle$ directly, and $F$ is the Pauli frame. The syndromes $s$ are determined in a specific way by the set of Pauli errors that occurred, and the frame $F$ is determined by the overall Pauli errors that occurred on the data qubits. At this point the situation is quite complicated, and it is not clear how to find the $L$ that maximizes this probability.
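One way to make the difficulty explicit (under the assumption, implicit above, that the error process, and hence the pair $(F, s)$, is independent of the encoded logical state) is to marginalize over the unknown frame:
$$ p(s,d \mid L) = \sum_F p(d \mid L, F)\, p(F, s), $$
so the optimal decoder couples the data record to the syndrome record through every possible frame, and the number of terms grows exponentially with the number of data qubits.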
Optimal estimation of the Pauli frame $F$
It seems to me that MWPM decoding amounts to the following rule for estimating $L$:
$$ L' = v_L^T(d + F') = L + v_L^T(F + F'), $$
where $v_L^T$ is an indicator vector which is 1 exactly on the support of the logical operator, and $F'$ is our estimate of the frame; the second equality uses $d = d_L + F$ and $v_L^T d_L = L$. This estimator succeeds whenever $F'$ agrees with $F$ on the support of the logical operator. It is probably sub-optimal, though I can't think of a specific way to show it.
In this case, however, our task is to estimate $F$. This is more straightforward, since $p(F|s,d)$ accumulates the probability of every set of Pauli errors $P_e$ and measurement errors $M_e$ that is consistent with $F$:
$$ F' = \arg\max_F p(F \mid s,d) = \arg\max_F \sum_{(P_e,\, M_e)\,:\; F = f(P_e, M_e)} p(P_e, M_e \mid s, d), $$
and MWPM approximates this sum by its largest term: out of all the possible Pauli and measurement errors that could result in $F$, it searches for the single highest-probability configuration, which in the case of independent errors is the minimum-weight set.
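This is what matching-based decoders implement in practice. As a minimal sketch, assuming the pymatching package and, for simplicity, a single round of noiseless syndrome measurement (so only the Pauli errors $P_e$ appear; pymatching can also match in the time direction to handle the measurement errors $M_e$):

```python
import numpy as np
import pymatching

# Distance-5 repetition code: H[i, j] = 1 if check i touches data qubit j.
H = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]], dtype=np.uint8)

matching = pymatching.Matching(H)  # uniform weights: independent, equal-rate flips

# Suppose data qubit 2 flipped: checks 1 and 2 both fire.
syndrome = np.array([0, 1, 1, 0], dtype=np.uint8)
print(matching.decode(syndrome))  # -> [0 0 1 0 0], the minimum-weight frame estimate
```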
Again, any comments/corrections to this argument would be most welcome.