7

What is the largest variance of $h=\sum_i H_i$, where $H_i$ is the Hamiltonian acting on the $i$-th qubit?

If the $N$-qubit state is separable, i.e., $\mid\psi\rangle = \mid\psi_1\rangle\otimes\mid\psi_2\rangle\otimes\cdots\otimes\mid\psi_N\rangle$, then the maximum variance, $\max\limits_{\mid\psi\rangle}(\Delta h)^2$, is clearly $N(\lambda_M-\lambda_m)^2/4$, where $\lambda_M$ and $\lambda_m$ are the largest and smallest eigenvalues of $H_i$, respectively. To see this, write a general single-qubit state (in the eigenbasis of $H_i$) as $\mid\psi_i\rangle\equiv \cos(\theta)\mid0\rangle+\sin(\theta)e^{i\phi}\mid1\rangle$. Computing the variance of $H_i$ in $\mid\psi_i\rangle$ shows it is maximized at $\theta=\frac{\pi}{4}$, where it equals $(\lambda_M-\lambda_m)^2/4$. Since the variances of the $N$ independent parts add, the total is $N(\lambda_M-\lambda_m)^2/4$.
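This single-qubit maximization is easy to check numerically. A minimal sketch, assuming $H_i = \mathrm{diag}(\lambda_m, \lambda_M)$; the particular eigenvalues $\lambda_m=-1$, $\lambda_M=1$ are illustrative choices, not from the question:

```python
import numpy as np

# Single-qubit Hamiltonian with eigenvalues lam_m and lam_M
# (illustrative values; any lam_m < lam_M works the same way)
lam_m, lam_M = -1.0, 1.0
H = np.diag([lam_m, lam_M])

def variance(theta, phi=0.0):
    """Variance of H in the state cos(theta)|0> + sin(theta) e^{i phi} |1>."""
    psi = np.array([np.cos(theta), np.sin(theta) * np.exp(1j * phi)])
    mean = np.real(psi.conj() @ H @ psi)
    mean_sq = np.real(psi.conj() @ (H @ H) @ psi)
    return mean_sq - mean**2

# Scan theta over [0, pi/2]; the maximum sits at theta = pi/4
thetas = np.linspace(0.0, np.pi / 2, 1001)
vars_ = np.array([variance(t) for t in thetas])
print(thetas[np.argmax(vars_)])               # ~ pi/4
print(vars_.max(), (lam_M - lam_m) ** 2 / 4)  # both ~ 1.0
```

Note that $\phi$ drops out of the variance here because $H$ is diagonal, which is why scanning $\theta$ alone suffices.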

But how does one prove that if the total state $\mid\psi\rangle$ is not necessarily separable, the maximum variance of $h$ becomes $(N\lambda_M-N\lambda_m)^2/4 = N^2(\lambda_M-\lambda_m)^2/4$ instead of $N(\lambda_M-\lambda_m)^2/4$?

narip

1 Answer

5

Here is an approach that requires no specific knowledge about $|\psi\rangle$ whatsoever.

In your description you implied that each $H_i$ has the same maximum and minimum eigenvalues, $\lambda_M$ and $\lambda_m$ respectively, so I will assume this in the derivation. The process of measuring $\langle h \rangle$ empirically can be thought of as running a series of experiments, with the $i$-th experiment returning an energy $E_i$ in the range $[N\lambda_m, N\lambda_M]$. We can then average all of the observed $E_i$ values to compute $\langle h \rangle$, and we can think of $E$ as a random variable whose probability density (or mass) function is completely determined by $h$ and $|\psi\rangle$.

Let $X$ be the random variable $E - N\lambda_m$, which therefore takes values in the range $[0, N(\lambda_M - \lambda_m)]$. The variance of $X$ is: \begin{align}\tag{1} \text{Var}(X) &= \frac{1}{n}\sum_i X_i^2 - \bar{X}^2 \\ &\leq \frac{1}{n}\sum_i N(\lambda_M - \lambda_m)X_i - \bar{X}^2 \\ &= (N(\lambda_M - \lambda_m) - \bar{X})\bar{X} \end{align} where the inequality replaces one factor of $X_i$ with the maximum possible value of $X$, and $n$ is the number of samples, which can be taken to infinity. Setting the derivative of this bound to zero, $\partial_{\bar{X}}\text{Var}(X) = N(\lambda_M - \lambda_m) - 2\bar{X}=0$, gives $$\tag{2} \text{argmax}_{\bar{X}} \text{Var}(X) = \frac{N(\lambda_M - \lambda_m)}{2} $$ which results in

$$\tag{3} \max \text{Var}(X) = \left(\frac{N(\lambda_M - \lambda_m)}{2}\right)^2 $$

The same maximum holds for $\text{Var}(E) = (\Delta h)^2$, since variance is unaffected by the constant shift of $N\lambda_m$.

Note that you can apply this same technique to each $H_i$ individually to recover the maximum variance for the case of a separable state; the factor of $N^2$ disappears compared to the above derivation and only a factor of $N$ is reintroduced by summing up the variances of the separate systems.
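Both bounds are saturated by explicit states, which can be verified numerically. A sketch under assumed choices (not from the answer): $H_i = \mathrm{diag}(\lambda_m, \lambda_M)$ with $\lambda_m = -1$, $\lambda_M = 1$, and $N = 4$. The GHZ-like superposition of the all-minimum and all-maximum eigenstates reaches $N^2(\lambda_M-\lambda_m)^2/4$, while the optimal product state (each qubit at $\theta = \pi/4$) only reaches $N(\lambda_M-\lambda_m)^2/4$:

```python
import numpy as np

# Assumed illustrative parameters
lam_m, lam_M = -1.0, 1.0
N = 4
dim = 2 ** N

# h = sum_i H_i is diagonal in the computational basis: the energy of
# bitstring k is lam_m per 0-bit plus lam_M per 1-bit.
popcount = np.array([bin(k).count("1") for k in range(dim)])
energies = lam_m * (N - popcount) + lam_M * popcount

def variance(psi):
    """Variance of h in state psi, using that h is diagonal here."""
    p = np.abs(psi) ** 2
    return p @ energies**2 - (p @ energies) ** 2

# GHZ-like state: (|0...0> + |1...1>) / sqrt(2)
ghz = np.zeros(dim)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

# Product state with every qubit at theta = pi/4
single = np.array([1.0, 1.0]) / np.sqrt(2)
prod = single
for _ in range(N - 1):
    prod = np.kron(prod, single)

print(variance(ghz))   # N^2 (lam_M - lam_m)^2 / 4 = 16.0
print(variance(prod))  # N   (lam_M - lam_m)^2 / 4 = 4.0
```

The factor-of-$N$ gap between the two variances is exactly the point of the question.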

forky40