In MacKay's book (page 559) I believe the author claims that you get the same performance whether you decode using the raw received word (possibly soft LLRs) or using the syndrome of the hard-decided received data (together with some other information about the channel?). I'm having trouble convincing myself of this, or maybe I misunderstood what the author is saying. Can anyone shed light on this?
2 Answers
To me this sounds obviously false. For example, consider T1 decay: it is a biased error that mainly affects qubits storing |1> rather than |0>. Syndrome information doesn't tell you whether the stabilizers are currently in the +1 or -1 eigenstates, but the raw measurement data does contain this information, and it should slightly better inform you about the likelihood of a T1 decay.
You could argue that in the surface code it's in principle possible to track the measurement value as a parity of the syndromes (assuming non-random initialization), but that's not true of gauge codes, which have substantially more measurement data than syndrome data.
Suppose the machine has some kind of drift that causes measurements to be slightly more likely to return spurious 0s at certain times. With access to all the measurement data you're able to more quickly and accurately tell whether that distortion is currently happening and update your decoding weights accordingly. A toy illustration of this point is sketched below.
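Here is a minimal classical analogue (my own construction; the flip rates and setup are made up for illustration, not taken from any real device): a check's true value is random per shot and measured repeatedly with an asymmetric readout error. Comparing the two opposite biases shows that the syndrome-style detection events (XORs of consecutive rounds) have identical statistics in both cases, while the raw outcomes immediately reveal the direction of the bias.

```python
import numpy as np

rng = np.random.default_rng(1)
shots, rounds = 50_000, 10

def simulate(p_1to0, p_0to1):
    # one random true check value per shot, held fixed across the rounds
    true = np.repeat(rng.integers(0, 2, size=(shots, 1)), rounds, axis=1)
    r = rng.random((shots, rounds))
    flip = np.where(true == 1, r < p_1to0, r < p_0to1).astype(int)
    meas = true ^ flip                    # raw measurement record
    det = meas[:, 1:] ^ meas[:, :-1]      # syndrome-style detection events
    return meas, det

for label, (p10, p01) in {"extra spurious 0s": (0.10, 0.02),
                          "extra spurious 1s": (0.02, 0.10)}.items():
    meas, det = simulate(p10, p01)
    print(f"{label}: raw zero fraction = {1 - meas.mean():.3f}, "
          f"detection-event rate = {det.mean():.3f}")
```

The detection-event rate comes out the same (about 0.11) in both scenarios, because the mixture over the hidden true value is symmetric under swapping the two flip rates, while the raw zero fraction (about 0.54 vs 0.46) distinguishes them. In this toy setting the syndrome stream alone carries strictly less information than the raw record.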
I think the chapter you are referring to states its assumptions quite explicitly on page 557. Indeed, for a belief propagation decoder it does not matter whether you start from the received codeword and first pass messages to the check nodes, or start from the syndrome and first pass messages to the variable nodes: the underlying graph is the same.
Equation (47.6) and the two sentences below it seem quite clear. In the next few pages the algorithm is explained from both the syndrome and the received-codeword perspective.
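To make the equivalence concrete, here is a small numerical sketch (my own toy parity-check matrix and code, not MacKay's): a sum-product decoder whose check update takes an optional syndrome bit, run once directly on the received soft LLRs and once on the syndrome of the hard decisions with the |LLR| reliabilities.

```python
import numpy as np

# Toy parity-check matrix (an arbitrary small example, not from MacKay).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def bp_posteriors(H, llr, syndrome, iters=30):
    """Sum-product BP on the graph of H. A syndrome bit of 1 flips the
    sign of that check's outgoing messages; an all-zero syndrome gives
    ordinary decoding of a received word."""
    m, n = H.shape
    nbrs = [np.flatnonzero(H[c]) for c in range(m)]
    msg_vc = {(c, v): llr[v] for c in range(m) for v in nbrs[c]}
    msg_cv = {}
    post = np.array(llr, dtype=float)
    for _ in range(iters):
        # check-to-variable update: tanh rule, sign set by the syndrome bit
        for c in range(m):
            for v in nbrs[c]:
                prod = np.prod([np.tanh(msg_vc[c, u] / 2)
                                for u in nbrs[c] if u != v])
                prod *= (-1) ** int(syndrome[c])
                msg_cv[c, v] = 2 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        # posterior LLR = channel LLR + all incoming check messages
        post = np.array(llr, dtype=float)
        for (c, v), val in msg_cv.items():
            post[v] += val
        # variable-to-check update: leave out the recipient's own message
        for (c, v) in msg_vc:
            msg_vc[c, v] = post[v] - msg_cv[c, v]
    return post

rng = np.random.default_rng(0)
llr = 2.0 + rng.normal(0.0, 2.0, size=6)   # soft LLRs for an all-zeros codeword

# (a) decode directly from the received soft values (all checks satisfied)
x_direct = (bp_posteriors(H, llr, np.zeros(3, dtype=int)) < 0).astype(int)

# (b) hard-decide, compute the syndrome, decode the error with |LLR| reliabilities
y = (llr < 0).astype(int)
e_hat = (bp_posteriors(H, np.abs(llr), (H @ y) % 2) < 0).astype(int)
x_syndrome = y ^ e_hat

print(x_direct, x_syndrome, np.array_equal(x_direct, x_syndrome))  # identical
```

The two runs are the same message passing on the same graph: flipping the sign of every message at the hard-decided positions maps one onto the other, so the posteriors agree up to those signs and the final codeword estimates coincide exactly.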