One version of the Knill–Laflamme conditions for quantum error correction is stated as follows:

Theorem 9.5 in Gottesman's Quantum Error Correction Book:
Let $ \mathcal{F}: L \to Q $ be a quantum channel. Let $ R $ be a reference system whose dimension is at least as large as that of $ L $. There exists a completely positive trace-preserving (CPTP) map $ \mathcal{G}: Q \to L $ such that
$$ \mathcal{G}\circ \mathcal{F} = \mathcal{I} $$ if and only if
$$ I_c(\mathcal{F}, \rho) = S(\rho_L) $$ for all pure states $ \rho $ on $ R \otimes L $, where $ \rho_L = \mathrm{Tr}_R(\rho) $ and $ I_c(\mathcal{F}, \rho) $ is the coherent information of $ \mathcal{F} $ with respect to the state $ \rho $.

Here, $ \mathcal{F} $ can be viewed as an encoding unitary $ U $ followed by an error channel $ \mathcal{E} $, i.e. $ \mathcal{F} = \mathcal{E} \circ \mathcal{U} $ with $ \mathcal{U}(\rho) = U \rho U^\dagger $.

For completeness, the coherent information of a channel $ \mathcal{N}: A \to B $ with respect to a state $ \rho^{AR} $ (where $ R $ is a reference system purifying $ \rho^A $) can be written as $$ I_c(\mathcal{N}, \rho) = S(\rho^B) - S(\rho^{RB}), $$ where (a numerical sketch follows the list):

  1. $ \rho^B = \mathcal{N}(\rho^A), $
  2. $ \rho^{RB} = (\mathrm{id}^R \otimes \mathcal{N})(\rho^{RA}), $
  3. $ S(\cdot) $ denotes the von Neumann entropy.
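
To make the definition concrete, here is a minimal numerical sketch (Python with `numpy`; the bit-flip channel, the flip probability, and the choice of a maximally entangled input are illustrative assumptions, not part of the theorem). For the identity channel it returns $I_c = 1 = S(\rho_A)$, consistent with perfect correctability, while a bit-flip channel falls strictly below that.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], computed from eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def coherent_information(kraus, dim_a):
    """I_c(N, rho) for the maximally entangled input state on R (x) A."""
    # |phi> = (1/sqrt(d)) sum_i |i>_R |i>_A
    phi = np.eye(dim_a).reshape(-1) / np.sqrt(dim_a)
    rho_ra = np.outer(phi, phi.conj())
    # rho_RB = (id_R (x) N)(rho_RA), applied via the Kraus operators of N
    dim_b = kraus[0].shape[0]
    rho_rb = np.zeros((dim_a * dim_b, dim_a * dim_b), dtype=complex)
    for K in kraus:
        KK = np.kron(np.eye(dim_a), K)
        rho_rb += KK @ rho_ra @ KK.conj().T
    # rho_B = Tr_R[rho_RB]
    rho_b = rho_rb.reshape(dim_a, dim_b, dim_a, dim_b).trace(axis1=0, axis2=2)
    return von_neumann_entropy(rho_b) - von_neumann_entropy(rho_rb)

# Bit-flip channel: K0 = sqrt(1-p) I, K1 = sqrt(p) X (p = 0.1 is arbitrary).
p = 0.1
X = np.array([[0, 1], [1, 0]], dtype=complex)
bit_flip = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * X]

print(coherent_information([np.eye(2)], dim_a=2))  # 1.0 = S(rho_A): correctable
print(coherent_information(bit_flip, dim_a=2))     # ~0.53 < 1: not correctable
```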

Question About a Classical Analog:

Is there an analogous formulation of this version of the Knill–Laflamme conditions in the classical setting?

More concretely, suppose we have a probability distribution over the vector space $ \mathbf{F}_2^n $ and an error channel, i.e. a stochastic map (a linear map preserving positivity and normalization) that sends one probability distribution on $ \mathbf{F}_2^n $ to another distribution on $ \mathbf{F}_2^n $. Is there a known necessary and sufficient condition for perfect error correction in terms of a classical information-theoretic quantity, for example a mutual information measure?
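
For concreteness, the kind of setup I have in mind can be sketched as follows (Python; the block length $n = 3$, the memoryless bit-flip noise, and the repetition-code distribution are illustrative choices, not part of the question): the channel is just a $2^n \times 2^n$ column-stochastic matrix acting linearly on distributions.

```python
import itertools
import numpy as np

n, p = 3, 0.1  # block length and per-bit flip probability (illustrative)

# Enumerate F_2^n and build the 2^n x 2^n stochastic matrix of the
# memoryless bit-flip channel: N[y, x] = Pr[output y | input x].
vectors = list(itertools.product([0, 1], repeat=n))
N = np.zeros((2**n, 2**n))
for i, x in enumerate(vectors):
    for j, y in enumerate(vectors):
        d = sum(a != b for a, b in zip(x, y))  # Hamming distance
        N[j, i] = p**d * (1 - p)**(n - d)

# A distribution supported on the repetition code {000, 111}.
p_X = np.zeros(2**n)
p_X[vectors.index((0, 0, 0))] = 0.5
p_X[vectors.index((1, 1, 1))] = 0.5

p_Y = N @ p_X                      # the channel acts linearly on distributions
print(np.isclose(p_Y.sum(), 1.0))  # True: stochastic maps preserve normalization
```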

One possibility I can imagine is to restrict the quantum Knill–Laflamme conditions to density matrices that are diagonal in a fixed basis of the logical subspace, and likewise to noise channels that preserve such diagonal states. One might then try to read off a purely classical statement.

1 Answer

An analogous classical formulation might just be this:

Let $\mathcal{N}: \mathcal{P}(\mathcal{X}) \rightarrow \mathcal{P}(\mathcal{Y})$ be a stochastic map, where $\mathcal{P}$ denotes the set of probability distributions on a set, and $\mathcal{X}, \mathcal{Y}$ are two finite alphabets. Then, for a distribution $p_X \in \mathcal{P}(\mathcal{X})$, a random variable $X \sim p_X$, and $Y$ obtained by passing $X$ through the channel (so that $p_{XY}(x,y) = p_X(x)\,\mathcal{N}(y|x)$ and, in particular, $Y \sim \mathcal{N}(p_X)$), there exists a recovery map $g: \mathcal{Y}\rightarrow \mathcal{X}$ such that \begin{equation} g(Y) = X \quad \text{with probability } 1 \end{equation} if and only if \begin{equation} I(X:Y) = H(X). \end{equation}

The basic idea is that the mutual information $I(X:Y)$ will be equal to the "self-information" $I(X:X) = H(X)$ iff $X$ can be perfectly recovered from $Y$.
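
A quick numerical check of this statement (Python; the four-letter alphabet, the two-"codeword" distribution, and the two channels are illustrative choices): a permutation channel, from which $X$ is perfectly recoverable, attains $I(X:Y) = H(X)$, while a completely randomizing channel gives $I(X:Y) = 0 < H(X)$.

```python
import numpy as np

def H(p):
    """Shannon entropy (bits) of a distribution, ignoring zero entries."""
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(p_X, N):
    """I(X:Y) for X ~ p_X and Y produced by the column-stochastic matrix N."""
    joint = N * p_X[None, :]  # joint[y, x] = N[y, x] * p_X(x)
    p_Y = joint.sum(axis=1)
    return H(p_X) + H(p_Y) - H(joint.reshape(-1))

p_X = np.array([0.5, 0.0, 0.0, 0.5])           # supported on two "codewords"
perm = np.eye(4)[[2, 0, 3, 1]]                 # a permutation: invertible
noisy = np.full((4, 4), 0.25)                  # randomizing: erases X entirely

print(mutual_information(p_X, perm), H(p_X))   # 1.0 1.0 -> recovery g exists
print(mutual_information(p_X, noisy), H(p_X))  # 0.0 1.0 -> no recovery possible
```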

There might be a way to strengthen the statement $g(Y) = X$ to use stochastic maps, so that the above looks more like the quantum theorem: e.g., there is some $\mathcal{R}: \mathcal{P}(\mathcal{Y})\rightarrow \mathcal{P}(\mathcal{X})$ such that $\mathcal{R} \circ \mathcal{N} = \mathcal{I}$. However, even if this stronger claim held, the map $\mathcal{R}$ would just be the one induced by the function $g$ constructed in the proof below, so you would end up with a more complicated statement saying the same thing as above.
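
In the special case where $\mathcal{N}$ is itself a permutation of the alphabet, the induced $\mathcal{R}$ is just the transposed matrix and inverts $\mathcal{N}$ on all of $\mathcal{P}(\mathcal{X})$; a minimal sketch (reusing the illustrative permutation channel from the check above):

```python
import numpy as np

perm = np.eye(4)[[2, 0, 3, 1]]  # a perfectly correctable (permutation) channel
R = perm.T                      # induced by g: R[x, y] = 1 iff g(y) = x
print(np.allclose(R @ perm, np.eye(4)))  # True: R o N = identity on P(X)
```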

Proof

I am using the standard notation where $X$ denotes a random variable taking value $x\in\mathcal{X}$ with probability $p_X(x)$ for some $p_X \in \mathcal{P}(\mathcal{X})$. In our setup, $Y$ is the induced r.v., jointly distributed with $X$ according to $p_{XY}(x,y) = p_X(x)\,\mathcal{N}(y|x)$ and marginally distributed according to $p_Y := \mathcal{N}(p_X)$.

Forward direction: Suppose $g$ exists as stated above, and define the r.v. $Z = g(Y)$; by assumption, the output $Z$ of the recovery map equals $X$ with probability $1$. Since $X \to Y \to Z$ is a Markov chain, the (classical) data processing inequality gives \begin{align} H(X) = I(X:X) = I(X:Z) \leq I(X:Y) \leq H(X), \end{align} where the last step uses $I(X:Y) = H(X) - H(X|Y) \leq H(X)$. Hence $I(X:Y) = H(X)$.

Reverse direction: If $I(X:Y) = H(X)$, this implies \begin{equation} 0 = I(X:Y) - H(X) = H(Y) - H(XY) = -H(X|Y), \end{equation} i.e. the "uncertainty" of $X$ given $Y$ is zero. Just as Fano's inequality shows that nonzero $H(X|Y)$ forces any recovery map $g$ to be imperfect, the converse also holds$^*$: whenever $H(X|Y) = 0$, the conditional distribution $p_{X|Y=y}$ is a point mass for every $y$ with $p_Y(y) > 0$, so defining $g(y)$ to be that unique value recovers $X$ from $Y$ perfectly.
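
This construction can be made explicit: when $H(X|Y) = 0$, each reachable row of the joint distribution (one row per output $y$) has a single nonzero entry, and $g$ simply reads it off. A minimal sketch (`recovery_map` is a made-up helper name; the distribution and channel are the illustrative ones from above):

```python
import numpy as np

def recovery_map(p_X, N):
    """Extract g from the joint distribution. If H(X|Y) = 0, each conditional
    p(x|y) is a point mass, so the argmax recovers X from Y with certainty;
    otherwise this degrades to a maximum-likelihood guess."""
    joint = N * p_X[None, :]  # joint[y, x] = N[y, x] * p_X(x)
    g = {}
    for y in range(joint.shape[0]):
        if joint[y].sum() > 1e-12:           # define g only on reachable y
            g[y] = int(np.argmax(joint[y]))  # the unique x with p(x|y) = 1
    return g

p_X = np.array([0.5, 0.0, 0.0, 0.5])
perm = np.eye(4)[[2, 0, 3, 1]]
print(recovery_map(p_X, perm))  # {1: 0, 2: 3}: maps each output back to X
```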


$^*$If you are familiar with conditional min-entropy, a straightforward argument for this is that, for classical distributions, $0 \leq H_{\min}(X|Y) \leq H(X|Y)$ and $H_{\min}(X|Y) = -\log p_{\mathrm{guess}}(X|Y)$, where $p_{\mathrm{guess}}(X|Y)$ is the optimal probability of guessing $X$ from $Y$; so $H(X|Y) = 0$ forces $p_{\mathrm{guess}}(X|Y) = 1$.

forky40