
$\newcommand{\H}{\mathsf{H}}\newcommand{\Hmin}{\H_{\rm min}}\newcommand{\D}{\mathsf{D}}\newcommand{\Dmax}{\D_{\rm max}}$Consider the conditional min-entropy $\Hmin(A|B)_\rho$, discussed e.g. in this and this other related post. Given a bipartite state $\rho$, we can define it as $$\Hmin(A|B)_\rho = - \inf\{\Dmax(\rho\|I\otimes\sigma): \,\, \sigma\ge0,\,\,\operatorname{Tr}(\sigma)=1\}, \\ \Dmax(\rho\|Q) \equiv \inf\{\lambda\in\mathbb{R}: \,\,\rho\le 2^\lambda Q\} = \inf \{\log(\eta): \,\, \eta> 0,\,\, \rho\le \eta Q\}. \tag1$$ Equivalently, we can write it more concisely as $$\Hmin(A|B)_\rho = -\inf\{\log(\operatorname{Tr}(Y)): \,\, Y\ge0, \,\, \rho\le I\otimes Y\}.\tag2$$ As shown e.g. in the posts linked above, or in more detail in Watrous' notes here, when $\rho$ is classical-quantum this quantity is directly tied to the probability of discriminating between the constituents of the ensemble represented by $\rho$. In the case of a diagonal state $\rho=\sum_{ab} p_{ab} (|a\rangle\!\langle a|\otimes|b\rangle\!\langle b|)$, as again discussed here, we can write this explicitly as $$\Hmin(A|B)_\rho = -\log\left(\sum_b \max_a p_{ab}\right).\tag3$$

My question is: in what sense is $\Hmin(A|B)_\rho$ a "conditional entropy"? When introducing other min-entropic quantities, such as the min/max entropy and the max-relative entropy, the overall gist was to preserve the "nature" of the quantity but replace averages with some min/maxing. For example, $\D(P\|Q)=\sum_a P_a \log(P_a/Q_a)$ becomes $$\Dmax(P\|Q)=\max_a \log(P_a/Q_a),$$ or for quantum states, $$\Dmax(\rho\|Q)=\max_{Z\ge0} \log\left(\frac{\langle Z,\rho\rangle}{\langle Z,Q\rangle}\right).$$ Similarly, the entropy $H(P)=\sum_a P_a \log(1/P_a)$ becomes the min-entropy $\Hmin(P)=\min_a \log(1/P_a)$, with the analogous definition for quantum states.

Is there any similarly direct way to go from the conditional entropy to the conditional min-entropy? Looking at (3) above, even focusing only on the classical case, I don't see a direct way in which $$\H(A|B)_P \equiv -\sum_b p_b \sum_a p_{a|b}\log p_{a|b}$$ would become $$\Hmin(A|B)_P = -\log\left(\sum_b \max_a p_{ab}\right) = -\log\left(\sum_b p_b \max_a p_{a|b}\right).$$ The biggest differences seem to be that the average over $b$ is preserved but somehow ends up inside the logarithm, and, even worse, that rather than some min/max of a logarithm, which is the sort of thing we got before, we now get a logarithm of a sum of maxima, which seems a rather different kind of object. Is there any argument to see why this expression should be directly related to conditional entropies? Or is the conditional min-entropy called that only by analogy: because we have $$\H(A|B)_P = -\D(P\|I\otimes P_B) = - \inf_Q \D(P\|I\otimes Q),$$ we go and define $$\Hmin(A|B)_P \equiv - \inf_Q \Dmax(P\|I\otimes Q),$$ even though when we replace $\D$ with $\Dmax$ the $\inf$ is achieved by different types of distributions?
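
For concreteness, here is a minimal numerical sanity check that the semidefinite program in (2) reproduces the closed form (3) for a diagonal (classical) state. This is only a sketch, assuming numpy and cvxpy (with its default SDP-capable solver) are available; the joint distribution `p` is made up for illustration.

```python
import numpy as np
import cvxpy as cp

# Made-up joint distribution p(a, b): rows index a, columns index b.
p = np.array([[0.3, 0.1],
              [0.2, 0.4]])
dA, dB = p.shape

# Closed form (3): H_min(A|B) = -log2( sum_b max_a p(a,b) ).
hmin_closed = -np.log2(p.max(axis=0).sum())

# Definition (2): minimize log2 Tr(Y) over Y >= 0 subject to rho <= I (x) Y,
# with rho the diagonal state sum_{ab} p(a,b) |a><a| (x) |b><b|.
rho = np.diag(p.flatten())                     # basis ordering |a>(x)|b>, index a*dB + b
Y = cp.Variable((dB, dB), PSD=True)
constraints = [cp.kron(np.eye(dA), Y) >> rho]  # I_A (x) Y - rho is positive semidefinite
problem = cp.Problem(cp.Minimize(cp.trace(Y)), constraints)
problem.solve()
hmin_sdp = -np.log2(problem.value)

print(hmin_closed, hmin_sdp)  # should agree up to solver tolerance
```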

glS

1 Answer


Here is a perspective on why $H_{\rm min}(A|B)$ is a min-entropy, which may not directly answer your question.

From an operational perspective, $H_{\rm min}(A|B)$ is defined analogously to $H_{\rm min}(A)$, at least in the classical case. Using the monotonicity of the logarithm, rewrite \begin{equation} H_{\rm min}(A) = \min_a \log \left(\frac{1}{p(a)}\right) = -\max_a \log (p(a)) = - \log\left(\max_a p(a) \right), \tag{1} \end{equation} so that $2^{-H_{\rm min}(A)}$ is the maximum probability of correctly guessing the value of the random variable $A$.
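
As a tiny illustration of this relation (a sketch in Python/numpy, with a made-up distribution):

```python
import numpy as np

# Made-up distribution over A, for illustration.
p = np.array([0.5, 0.3, 0.2])

h_min = -np.log2(p.max())   # H_min(A) = -log( max_a p(a) ), cf. (1)
p_guess = 2.0 ** (-h_min)   # optimal blind guessing probability

print(h_min, p_guess)       # p_guess equals p.max(): always guess the most likely outcome
```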

Correspondingly, from one of the questions that you link, $2^{-H_{\rm min}(A|B)}$ is the maximum probability of correctly guessing $A$ given $B$. This provides some intuition for why a sum of maxima appears inside the logarithm for $H_{\rm min}(A|B)$: the success probability of the optimal guessing strategy is the average, over the values $b$ that the guesser receives, of the optimal success probabilities for the conditional distributions $p(A|B=b)$, \begin{equation} H_{\rm min}(A|B) = -\log \left( \sum_b p(b)\, 2^{-H_{\rm min}(A|B=b)} \right), \tag{2} \end{equation} where $H_{\rm min}(A|B=b)$ is the min-entropy of the distribution $p(A|B=b)$. To me, this form suggests that the conditional min-entropy is more closely related to min-entropies of conditional distributions than to some kind of minimization involving the conditional entropy.
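
A minimal numerical sketch of equation (2) (assuming numpy; the joint distribution is made up for illustration): the weighted combination of per-outcome guessing probabilities matches the expression $-\log\left(\sum_b \max_a p_{ab}\right)$ from the question.

```python
import numpy as np

# Made-up joint distribution p(a, b): rows index a, columns index b.
p_ab = np.array([[0.3, 0.1],
                 [0.2, 0.4]])

p_b = p_ab.sum(axis=0)            # marginal p(b)
p_a_given_b = p_ab / p_b          # column b holds the distribution p(A | B=b)

# Per-outcome min-entropies H_min(A|B=b), then the combination in (2).
h_min_given_b = -np.log2(p_a_given_b.max(axis=0))
h_min_cond = -np.log2(np.sum(p_b * 2.0 ** (-h_min_given_b)))

# The same quantity directly from the joint distribution: -log( sum_b max_a p(a,b) ).
h_min_direct = -np.log2(p_ab.max(axis=0).sum())

print(h_min_cond, h_min_direct)   # agree; 2**(-value) is the optimal guessing probability
```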

forky40