Why do you need error correction? My understanding is that error correction removes errors from noise, but noise should average itself out. To make clear what I'm asking, why can't you, instead of involving error correction, simply run the operations, say, a hundred times, and pick the average/most common answer?
4 Answers
That doesn't scale well. After a moderately long calculation you're basically left with the maximally mixed state, or whatever fixed point your noise has. To scale to arbitrarily long calculations you need to correct errors before they become too big.
Here's a short calculation to back up the intuition given above. Consider the simple white-noise model (depolarizing noise), $$\rho'(\varepsilon)= (1-\varepsilon)\rho + \varepsilon \frac{\mathbb{I}}{\operatorname{tr} \mathbb{I}},$$ where $\rho$ is the ideal state and $\mathbb{I}/\operatorname{tr}\mathbb{I}$ is the maximally mixed state. If you concatenate $n$ such noisy processes, the new noise parameter is $\varepsilon'=1-(1-\varepsilon)^n$, which approaches $1$ exponentially fast in the number of gates (or other error sources); equivalently, the surviving ideal component $(1-\varepsilon)^n$ decays exponentially. If you repeat the experiment $m$ times and assume that the standard error scales as $\frac{1}{\sqrt{m}}$, you need roughly $m \sim (1-\varepsilon)^{-2n}$ runs to resolve that shrinking signal: the number of runs is exponential in the length of your calculation!
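To put numbers to this, here is a minimal sketch in plain Python; the per-gate error rate is an illustrative assumption, not a claim about any particular device:

```python
# Sketch of the scaling argument above: eps' = 1 - (1 - eps)**n, and the
# number of repetitions m needed to resolve the surviving signal grows
# exponentially with the circuit length n.

epsilon = 0.01  # assumed per-gate depolarizing error rate (illustrative)

for n in (10, 100, 1000, 10000):
    eps_total = 1 - (1 - epsilon) ** n  # accumulated noise parameter
    signal = (1 - epsilon) ** n         # surviving ideal component
    # Standard error ~ 1/sqrt(m), so resolving a signal of this size
    # takes roughly m ~ 1/signal**2 repetitions.
    m_needed = 1 / signal ** 2
    print(f"n={n:6d}  eps'={eps_total:.6f}  m ~ {m_needed:.3e}")
```

With these numbers, a ten-thousand-gate circuit already demands more repetitions than could ever be run.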
If the error rate were low enough, you could indeed run a computation a hundred times and take the most common answer. For instance, this would work if the error rate were low enough that the expected number of errors per computation was very small, which means that how well this strategy works depends on how long and complicated a computation you would like to do.

Once the error rate or the length of your computation becomes sufficiently high, you can no longer have any confidence that the most likely outcome involved zero errors: at a certain point it becomes more likely that you have one, or two, or more errors than that you have zero. In this case, there is nothing to prevent the majority of the runs from giving you an incorrect answer. What then?
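To see where that crossover happens, here is a back-of-the-envelope sketch in Python. The per-gate error rate, the gate counts, and the use of "more than half of 100 runs are error-free" as the success criterion are all illustrative assumptions:

```python
from math import comb

def majority_correct(p, runs=100):
    """Probability that more than half of `runs` independent repetitions
    are error-free, each with probability p (a simplified success test)."""
    return sum(comb(runs, k) * p**k * (1 - p)**(runs - k)
               for k in range(runs // 2 + 1, runs + 1))

gate_error = 1e-3  # assumed per-gate error rate (illustrative)
for gates in (100, 500, 1000, 5000):
    p_run = (1 - gate_error) ** gates  # chance a whole run is error-free
    print(f"{gates:5d} gates: p_run={p_run:.3f}, "
          f"P(majority correct)={majority_correct(p_run):.3e}")
```

With these numbers, majority voting is essentially perfect at 100 gates and already hopeless at 1000: repetition stops helping precisely when a typical run contains at least one error.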
These issues are not special to quantum computation: they also apply to classical computation. It just happens that almost all of our technology is at a sufficiently advanced state of maturity that these issues do not concern us in practice; there may be a greater chance of your computer being struck by a meteorite mid-computation (or of it running out of battery power, or of you deciding to switch it off) than of a hardware error. What is (temporarily) special about quantum computation is that the technology is not yet mature enough for us to be so relaxed about the possibility of error.
In the eras when classical computation was at a stage where error correction was both practical and necessary, we were able to make use of certain mathematical techniques, namely error correction, which made it possible to suppress the effective error rate and, in principle, to make it as low as we liked. Surprisingly, the same techniques can be used for quantum error correction, with a little extension to accommodate the difference between quantum and classical information. Before the mid-1990s, it was thought that quantum error correction was impossible because of the continuity of the space of quantum states. But as it turns out, by applying classical error correction techniques in the right way to the different ways a qubit can be measured (usually described as "bit" and "phase"), you can in principle suppress many kinds of noise on quantum systems as well. These techniques are not special to qubits, either: the same idea can be used for quantum systems of any finite dimension (though for models such as adiabatic computation, it may then get in the way of actually performing the computation you wish to perform).
At the time I'm writing this, individual qubits are so difficult to build and to marshal that people are hoping to get away with doing proof-of-principle computations without any error correction at all. That's fine, but it will limit how long their computations can be before the number of accumulated errors is large enough that the computation stops being meaningful. There are two solutions: to get better at suppressing noise, or to apply error correction. Both are good ideas, but it is possible that error correction will be easier to perform in the medium and long term than suppressing sources of noise.
> noise should average itself out.
Noise doesn't perfectly average itself out. That's the Gambler's Fallacy. Even though noise tends to meander back and forth, it still accumulates over time.
For example, if you generate N fair coin flips and sum them up, the expected magnitude of the difference from exactly $N/2$ heads grows like $O(\sqrt N)$. That's quadratically better than the $O(N)$ you expect from a biased coin, but certainly not 0.
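Here's a quick simulation of that claim, in plain Python (the values of N and the number of trials are arbitrary choices for illustration):

```python
# Average absolute deviation of N fair coin flips from N/2 heads,
# compared against the sqrt(N) scale it is claimed to follow.
import random

for N in (100, 10_000, 1_000_000):
    trials = 200
    avg_dev = sum(
        # getrandbits(N) gives N fair coin flips; count the 1-bits (heads).
        abs(bin(random.getrandbits(N)).count("1") - N / 2)
        for _ in range(trials)
    ) / trials
    print(f"N={N:>9}: avg |heads - N/2| = {avg_dev:8.1f}   sqrt(N) = {N**0.5:8.1f}")
```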
Even worse, in the context of a computation over many qubits the noise doesn't cancel itself nearly as well, because the noise is no longer along a single dimension. In a quantum computer with $Q$ qubits and single-qubit noise, there are $2Q$ dimensions at any given time for the noise to act on (one for each X/Z axis of each qubit). And as you compute with the qubits, these dimensions change to correspond to different subspaces of a $2^Q$ dimensional space. This makes it unlikely for later noise to undo earlier noise, and as a result you're back to $O(N)$ accumulation of noise.
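A toy model of that effect, for a single qubit (numpy; the step count and kick size are illustrative assumptions): each noise kick is a small rotation about a fresh random axis, interleaved with the "computation". Because each kick acts along a new direction relative to all the earlier ones, later noise almost never undoes earlier noise, and the fidelity with the ideal state tends only to drift downward:

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(axis, theta):
    """2x2 unitary rotating by angle `theta` about a Bloch-sphere axis."""
    x, y, z = axis / np.linalg.norm(axis)
    G = x * X + y * Y + z * Z
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * G

ideal = np.array([1, 0], dtype=complex)  # start in |0>
noisy = ideal.copy()
eps = 0.02  # assumed size of each noise kick (illustrative)

for step in range(1, 2001):
    gate = rot(rng.normal(size=3), rng.uniform(0, np.pi))  # the "computation"
    kick = rot(rng.normal(size=3), eps * rng.normal())     # the noise
    ideal = gate @ ideal
    noisy = kick @ (gate @ noisy)
    if step % 400 == 0:
        fid = abs(np.vdot(ideal, noisy)) ** 2
        print(f"step {step:4d}: fidelity with ideal state = {fid:.4f}")
```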
> run the operations, say, a hundred times, and pick the average/most common answer?
As computations get larger and longer, the chance of seeing no noise, or of the noise perfectly cancelling out, rapidly becomes so close to 0% that you can't expect to see the correct answer even once, even if you repeated the computation a trillion times.
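A quick back-of-the-envelope check of that claim, with assumed (purely illustrative) numbers: a 0.1% error rate per gate and a million-gate computation.

```python
from math import log10

eps, gates = 1e-3, 10**6  # assumed per-gate error rate and gate count
log10_p_clean = gates * log10(1 - eps)  # log10 of P(one run is error-free)
print(f"P(error-free run) ~ 10^{log10_p_clean:.0f}")  # an absurdly small number
print(f"Expected error-free runs in a trillion repeats ~ 10^{log10_p_clean + 12:.0f}")
```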
> Why do you need error correction? My understanding is that error correction removes errors from noise, but noise should average itself out.
If you built a house or a road, noise would be a variance with respect to straightness or direction; the question is then not simply "how would it look?" but "how would it be?", a superposition of both efficiency and correctness.
If two people calculated the circumference of a golf ball given its diameter, each would get a similar answer, subject to the accuracy of their calculations; if each used several decimal places it would be 'good enough'.
If two people were provided with identical equipment and ingredients, and given the same recipe for a cake, should we expect identical results?
> To make clear what I'm asking, why can't you, instead of involving error correction, simply run the operations, say, a hundred times, and pick the average/most common answer?
You'd be spoiling the weighing, like tapping your finger on the scale.

If you're at a loud concert and try to communicate with the person next to you, do they understand you the first time, every time?

If you tell a story or spread a rumor (some people repeat it verbatim, some add their own spin, and others forget parts), when it gets back to you, does it average itself out and become essentially (if not identically) the same thing you said? Unlikely.

It's like crumpling up a piece of paper and then flattening it out.

All these analogies were intended to offer simplicity over exactness; you can reread them a few times, average them out, and have the exact answer. Or not. ;)
A more technical explanation of why quantum error correction is difficult but necessary can be found on Wikipedia's page "Quantum error correction":
"Quantum error correction (QEC) is used in quantum computing to protect quantum information from errors due to decoherence and other quantum noise. Quantum error correction is essential if one is to achieve fault-tolerant quantum computation that can deal not only with noise on stored quantum information, but also with faulty quantum gates, faulty quantum preparation, and faulty measurements.".
"Classical error correction employs redundancy. " ...
"Copying quantum information is not possible due to the no-cloning theorem. This theorem seems to present an obstacle to formulating a theory of quantum error correction. But it is possible to spread the information of one qubit onto a highly entangled state of several (physical) qubits. Peter Shor first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto a highly entangled state of nine qubits. A quantum error correcting code protects quantum information against errors of a limited form.".