
I would really like to understand how the uncertainty principle in QM works from a practical point of view.

So this is my narrative of how an experiment goes, and I'm quickly in trouble: we prepare a set of many particles in the same state $\psi$ as best we can, then we start measuring two observables $A$ and $B$ that don't commute on... each of the particles (?). When we measure $A$, the wave function collapses to an eigenstate of $A$ with some probability. By accumulating measurements of $A$, we obtain statistics, and in particular $\langle\psi|A|\psi\rangle$, the expected value of $A$ with respect to the state $\psi$. But how do I get $\langle\psi|B|\psi\rangle$? Can I measure $A$ and $B$ "simultaneously" on one particle, even though $\psi$ has collapsed to an eigenstate of $A$, which is not an eigenstate of $B$, and $A$ and $B$ don't commute? What happens? How do I measure $B$? Do I need to pull in another particle, on which I'll measure $B$, but not $A$ this time?

Zeus

3 Answers


There are many steps:

Step 1: select a state $\Psi$.

Step 2: prepare many systems in the same state $\Psi$.

Step 3: select two operators $A$ and $B$.

Step 4a: for some of the systems prepared in state $\Psi$, measure $A$.

Step 4b: for some of the systems prepared in state $\Psi$, measure $B$.

Now analyze the results, assuming strong (not weak) measurements: every time you measured $A$, you got an eigenvalue of $A$, and every time you measured $B$, you got an eigenvalue of $B$. Each eigenvalue occurred with a probability equal to the squared norm of the projection of the state onto the corresponding eigenspace, divided by the squared norm of the state before projection. So your eigenvalues of $A$ are drawn from a probability distribution with mean $\langle A\rangle=\langle \Psi|A|\Psi\rangle$ and standard deviation $\Delta A=\sqrt{\langle \Psi|A^2|\Psi\rangle-\langle \Psi|A|\Psi\rangle^2}$, and your eigenvalues of $B$ are drawn from a distribution with mean $\langle B\rangle=\langle \Psi|B|\Psi\rangle$ and standard deviation $\Delta B=\sqrt{\langle \Psi|B^2|\Psi\rangle-\langle \Psi|B|\Psi\rangle^2}$. No single measurement, and no finite collection of measurements, gives you these theoretical quantities exactly; what steps 4a and 4b give you are a sample mean and a sample standard deviation, and for a large sample these are likely to be very close to the theoretical mean and the theoretical standard deviation.
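To make the procedure concrete, here is a minimal numerical sketch of steps 1 through 4. It is a sketch only: the single qubit, the choices $A=\sigma_x$, $B=\sigma_z$, the particular state, and the sample sizes are my illustrative assumptions, not part of the answer above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative observables: A = sigma_x, B = sigma_z (they do not commute).
A = np.array([[0, 1], [1, 0]], dtype=complex)
B = np.array([[1, 0], [0, -1]], dtype=complex)

# Step 1: select a state Psi (an arbitrary normalized superposition).
psi = np.array([np.cos(0.3), np.exp(1j * 0.5) * np.sin(0.3)])

def measure(op, state, n):
    """Simulate n strong measurements of op on n fresh copies of state.

    Outcome probabilities follow the Born rule: the squared norms of the
    projections onto the eigenspaces.
    """
    evals, evecs = np.linalg.eigh(op)
    probs = np.abs(evecs.conj().T @ state) ** 2
    return rng.choice(evals, size=n, p=probs / probs.sum())

# Steps 2-4: prepare many copies; measure A on some and B on the others.
a_samples = measure(A, psi, 100_000)
b_samples = measure(B, psi, 100_000)

# Sample statistics approach the theoretical <A>, Delta A, and so on.
print("sample <A>, dA:", a_samples.mean(), a_samples.std())
print("theory <A>, dA:", (psi.conj() @ A @ psi).real,
      np.sqrt((psi.conj() @ A @ A @ psi).real - (psi.conj() @ A @ psi).real ** 2))
print("sample <B>, dB:", b_samples.mean(), b_samples.std())
```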

The uncertainty principle says that way back in step 1 (when you selected $\Psi$) you could select a $\Psi$ that gives a small $\Delta A$, or a $\Psi$ that gives a small $\Delta B$ (in fact, if $\Psi$ is an eigenstate of $A$ then $\Delta A=0$, and similarly for $B$). However, $$\Delta A \,\Delta B \geq \left|\frac{\langle AB-BA\rangle}{2i}\right|=\left|\frac{\langle\Psi| AB-BA |\Psi\rangle}{2i}\right|.$$

So in particular, noncommuting operators often have a tradeoff (namely, whenever the expectation value of their commutator does not vanish): if the state in question has a really low standard deviation for one operator, then it must have a higher standard deviation for the other.
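For a concrete instance (my example, not part of the answer above): take a spin-1/2 particle with $A=\sigma_x$ and $B=\sigma_y$, so that $AB-BA=2i\sigma_z$. For the state $\Psi=|{\uparrow_z}\rangle$ the right-hand side is $$\left|\frac{\langle 2i\sigma_z\rangle}{2i}\right|=|\langle\sigma_z\rangle|=1,$$ while a direct computation gives $\Delta\sigma_x=\Delta\sigma_y=1$ in this state (each has mean $0$ and square $1$), so the bound $\Delta\sigma_x\,\Delta\sigma_y\geq 1$ is saturated.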

If the operators commute, not only is there no joint limit on how low the standard deviations can go, but measuring one observable keeps you in the same eigenspace of the other. However, that is a completely different fact: the uncertainty principle is about the standard deviations of two probability distributions for two observables applied to one and the same state, and thus it approximately applies to the sample standard deviations generated from identically prepared states. A minimal numerical sketch of the commuting case follows.
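In this sketch the diagonal matrices are my illustrative choice of commuting observables; nothing depends on them beyond $[A,B]=0$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two commuting observables (diagonal in the same basis, so [A, B] = 0).
A = np.diag([1.0, 2.0, 3.0])
B = np.diag([5.0, 6.0, 7.0])

psi = np.array([0.6, 0.0, 0.8])  # normalized: 0.36 + 0.64 = 1

def measure_and_collapse(op, state):
    """One strong measurement: sample an eigenvalue, project, renormalize."""
    evals, evecs = np.linalg.eigh(op)
    amps = evecs.conj().T @ state
    probs = np.abs(amps) ** 2
    k = rng.choice(len(evals), p=probs / probs.sum())
    collapsed = evecs[:, k] * amps[k]
    return evals[k], collapsed / np.linalg.norm(collapsed)

# Measure A, then B, then A again on the same system.
a1, psi1 = measure_and_collapse(A, psi)
b1, psi2 = measure_and_collapse(B, psi1)
a2, _ = measure_and_collapse(A, psi2)
print(a1, b1, a2)  # a2 always equals a1: measuring B stayed in A's eigenspace
```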

If you have a system prepared in state $\Psi$ and you measure $A$ on it, then you generally have to use a different system, also prepared in $\Psi$, to measure $B$. That's because measuring $A$ on a system projects the state onto an eigenspace of $A$, which generally changes the state. And since the probability distribution for $B$ is determined by the state, now that you have a different state you will have a different probability distribution for $B$. You can't find out $\Delta B=\sqrt{\langle \Psi|B^2|\Psi\rangle-\langle \Psi|B|\Psi\rangle^2}$ if you no longer have $\Psi$ but only $\Psi$ projected onto an eigenspace of $A$.
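Here is a short numerical illustration of that last point (a sketch only, reusing the illustrative qubit with $A=\sigma_x$, $B=\sigma_z$ from above): after projecting onto an eigenspace of $A$, the distribution of $B$ outcomes is no longer the one belonging to $\Psi$.

```python
import numpy as np

# Same illustrative qubit as above: A = sigma_x, B = sigma_z.
A = np.array([[0, 1], [1, 0]], dtype=complex)
B = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([np.cos(0.3), np.exp(1j * 0.5) * np.sin(0.3)])

def b_distribution(state):
    """Born-rule probabilities for the eigenvalues of B in the given state."""
    evals, evecs = np.linalg.eigh(B)
    probs = np.abs(evecs.conj().T @ state) ** 2
    return evals, probs / probs.sum()

print("B distribution in Psi:            ", b_distribution(psi))

# Strongly measure A and suppose the outcome was the eigenvalue +1:
# project Psi onto the corresponding eigenspace and renormalize.
evals_a, evecs_a = np.linalg.eigh(A)
proj = evecs_a[:, 1]                      # eigenvector of A with eigenvalue +1
collapsed = proj * (proj.conj() @ psi)
collapsed = collapsed / np.linalg.norm(collapsed)

print("B distribution after measuring A: ", b_distribution(collapsed))
```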

Timaeus

One of the main problems with the uncertainty principle as it is usually taught in quantum mechanics is that it is almost always presented in its historical context, recounting what Heisenberg or Feynman thought about it. For once (at least), this is not very helpful.

In today's literature, we distinguish different types of "uncertainty relations", based on what they actually refer to. The first set of uncertainty relations is about state preparation. Recall that you can see a state as an abstract version of an experimental preparation procedure: it is not a particular photon, but rather a description of how to produce photons with particular properties. In object-oriented programming languages, a "state" would correspond to a class, not to an instance of a class. If you don't like this particular view of a state [this is part of what one could call the Ludwig school], you can also say that a state is the well-defined state of an ensemble. All of these views should be equivalent.
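To push the programming analogy a little further (my illustration, with a hypothetical `PhotonPreparation` class; none of this is in the answer itself):

```python
# The "state" is like the class: a recipe that fixes the properties every
# produced photon will have. Each prepared photon is like an instance.
class PhotonPreparation:
    """A preparation procedure, i.e. the 'state' in this analogy."""
    def __init__(self, wavelength_nm, polarization):
        self.wavelength_nm = wavelength_nm
        self.polarization = polarization

    def emit(self):
        """Produce one photon: one 'instance' drawn from the ensemble."""
        return {"wavelength_nm": self.wavelength_nm,
                "polarization": self.polarization}

preparation = PhotonPreparation(wavelength_nm=780, polarization="H")
photon_1 = preparation.emit()  # individual systems, identically prepared
photon_2 = preparation.emit()
```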

In any case, the state describes the properties of this ensemble/preparation procedure, and an uncertainty relation for state preparation tells us something about exactly this. The usual Robertson–Schrödinger uncertainty relations, of which the Heisenberg relation is but a special case, are such state-preparation uncertainty relations. The relations are expressed in terms of expectation values and variances, but they do not really involve any measurement process; therefore, they are about state preparation.

So then, what does the Heisenberg uncertainty relation tell us? Given a state, if I create instances of this state and measure their momentum, this will give me a distribution. Instead of measuring their momentum, I can also measure their position, and this will give me another distribution. I could, for example, produce a stream of instances of this state (or an ensemble) and measure half of them for momentum and half of them for position; then the product of the spreads of the two distributions is bounded below by $\hbar/2$. The beauty of the Heisenberg uncertainty relation is that this statement holds true regardless of how I prepare my state: there is no experimental preparation procedure for which the spread in position times the spread in momentum beats the lower bound of $\hbar/2$.
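A quick numerical check of this statement for one concrete preparation (my sketch; the Gaussian wavepacket, the grid, and units with $\hbar=1$ are illustrative assumptions): a Gaussian state is a minimum-uncertainty state and saturates the bound, $\Delta x\,\Delta p=\hbar/2$.

```python
import numpy as np

# A Gaussian wavepacket on a grid; expect Delta x * Delta p -> hbar / 2.
hbar = 1.0
sigma = 1.3                                  # position-space width parameter
x = np.linspace(-40, 40, 2**12)
dx = x[1] - x[0]

psi = np.exp(-x**2 / (4 * sigma**2))
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize on the grid

def spread(values, density, step):
    """Standard deviation of a probability density sampled on a grid."""
    norm = np.sum(density) * step
    mean = np.sum(values * density) * step / norm
    return np.sqrt(np.sum((values - mean)**2 * density) * step / norm)

# Momentum-space wave function via FFT; p = hbar * k = hbar * 2*pi*f.
psi_p = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi * hbar)
p = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2 * np.pi * hbar

dx_spread = spread(x, np.abs(psi)**2, dx)
dp_spread = spread(p, np.abs(psi_p)**2, p[1] - p[0])
print("Delta x * Delta p =", dx_spread * dp_spread, "; hbar/2 =", hbar / 2)
```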

The other type of uncertainty relation is about measurements. These are also called "error-disturbance relations", because they try to quantify the oft-quoted "if you measure a state, you necessarily disturb it".

Here, you have to consider measurements, and you have to define what it means to measure the same instance of a state first with one observable and then with another. Here, "simultaneous measurement" would have to be defined; but none of this is necessary for the usual Heisenberg uncertainty principle. There is a large literature on error-disturbance relations and a lot of active discussion; to cite just two opposing views, you could have a look at Ozawa or at Busch, Lahti, and Werner.

Martin

You cannot obtain both $\langle \psi \rvert A \lvert \psi \rangle$ and $\langle \psi \rvert B \lvert \psi \rangle$ if you only have a single copy of the state $\lvert \psi \rangle$, or if you always measure $A$ on your copies.

What you need to do to check the uncertainty principle experimentally is to prepare an ensemble of identically prepared states and then measure $A$ on one half and $B$ on the other half.

ACuriousMind