10

Silly question, but when sending a single electron at a time through a double-slit and observing the interference pattern over time ... how does the single electron that popped up ('measured') at a given point on the screen tell other points on the screen that they had better not detect the same electron? Isn't that the definition of spooky action at a distance? I mean, if the electron is a wave and there are no hidden variables, it means that it really is not pre-determined where the electron is going to pop up on the screen until it's actually detected. Like, when the electron has just left the electron gun, it can still pop up anywhere on the screen, right? But suppose it pops up at point A. How does point B know (and point B could be 1 light year away from point A) that it's now 'forbidden' to detect the electron, since it's already been detected at point A? It sounds to me very similar to the entanglement issue with opposite spins, where we need to 'communicate' instantaneously.

But maybe it's obvious why this is not an issue ...

4 Answers

8

This is really an "interpretation of quantum mechanics" question, along the lines of "what does it mean for the wavefunction to 'collapse'", I would say. I don't think anyone can pretend to know the answer to that (although some people do). It really comes down to whether "the other points need to be told not to register a reading", as you are putting it. If the wave function is just a wave that determines where the electron will be measured, then I'm not sure the other sensors need to be "told" not to have a reading. But again, I don't think there is a satisfactory interpretation of quantum mechanics.

There is a perspective, though, where all of QM is "spooky action at a distance", I guess. I just think that this framing is so broad it loses its utility. The amazing thing about entanglement is that there are two particles, which one would expect to have two separate wave functions, and yet the readings come out opposite, like spin up and spin down.

Poisson Aerohead
  • 2,011
5

Excellent & perceptive question. The truth is that if all it took for a measurement to occur was for the wave function to have some value at that location, then yes, we'd see multiple findings of the same particle at different locations - which we don't. Preventing this would require some sort of faster-than-light propagation, which in other reference frames would be backwards-in-time causality. This is generally not OK in relativistic quantum mechanics (for example, the wave function of a Dirac particle won't spread faster than $c$ according to the Dirac equation).
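For reference, the free Dirac equation I have in mind here is

$$\left(i\gamma^{\mu}\partial_{\mu} - m\right)\psi = 0$$

(in units where $\hbar = c = 1$). Roughly speaking, it is a hyperbolic equation with characteristic speed $c$, so initial data confined to some region only spreads within that region's light cone - which is the sense in which the wave function doesn't outrun light.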

Besides saying you're right, I can't say much to answer your question. I believe this is just a problem with the standard/textbook formalism. An unpopular opinion I hold is that Bohmian Mechanics is a better framework for the regimes that it covers*, and it doesn't have this problem, nor the other measurement-related problems I've wondered about over the years.

*Bohmian Mechanics, as far as I currently know, doesn't have a satisfactory generalization that allows for changes in particle number.
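To spell out why Bohmian Mechanics avoids the double-detection problem: in the non-relativistic theory (for a single spinless particle) the particle always has one definite position $\mathbf{Q}(t)$, which is simply carried along by the wave function via the guidance equation

$$\frac{d\mathbf{Q}}{dt} = \frac{\hbar}{m}\,\operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\Bigg|_{\mathbf{x}=\mathbf{Q}(t)},$$

so only the single detector the particle actually reaches can fire; no signal between distant points of the screen is needed to forbid a second detection.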

3

I think that the concept of the electron being a wave does not make much sense. A wave is not a fundamental entity; it is a type of motion/change that a certain substance/entity can be involved in.

A sound wave in a fluid is just a type of motion the fluid molecules can have. You can have a fluid in different states of motion: stationary, laminar flow, turbulent flow, a vortex, or some combination of those. But the fluid remains a fluid; it does not become a wave or a vortex.

Likewise, an electromagnetic wave is not a fundamental entity, it's just a particular configuration of electric and magnetic fields. You can have an infinity of field configurations that are not waves, like static fields, for example.

Back to our electron, we can say that the motion of the electron in a two-slit experiment can be described by a wave equation. That does not mean that the electron is a wave.
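For concreteness, the wave equation in question (in the non-relativistic case) is the time-dependent Schrödinger equation,

$$i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + V(\mathbf{x})\,\psi,$$

and it plays the same role as the fluid equations in the analogy above: it describes how the configuration evolves, not what the electron "is".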

You are correct that all non-deterministic interpretations of QM imply non-locality, "spooky action at a distance". The only way to preserve locality is to choose between:

  1. Many worlds interpretation (deterministic, no hidden variables).
  2. Superdeterminism (deterministic + hidden variables).

So, the answer to your question is that this experiment does not necessarily require non-locality, but a local explanation requires determinism.

Andrei
  • 839
1

I'm not sure this will help at all, but there might be some use in thinking about another (more concrete) probabilistic experiment that ends up having some analogies to the double-slit experiment (DSE): flipping a fair coin.

Let's say you take a fair coin, flip it 50 times, and record the number of heads. Then repeat that experiment an arbitrarily large number of times. Much like in the DSE, before you perform the experiment (send the electron) the outcome is not pre-determined. You also have similarities in that coin flips are memory-less (the flips cannot 'communicate' with each other; the flips are independent). Likewise, where one electron ends up does not impact where the next one will.

Then, when you collect the results, you will see something very similar to the DSE interference pattern. On the edges you will have extremely rare outcomes (very faint bars, electrons ending up far from the center of the DSE). If you did the experiment a quadrillion times, you would probably only see a couple of outcomes with 0 or 50 heads. Near the middle you would have your most commonly observed outcomes (with 25 heads corresponding to the brightest, middle bar in the DSE). And if you want to expand the analogy, you even have dark areas, since no experiment will have 3.5 heads or 40.433.
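If it helps to make the analogy concrete, here is a minimal simulation sketch (assuming Python with numpy; obviously any tool would do). It repeats the 50-flip experiment many times and tallies how often each head count occurs; the tallies come out sharply peaked around 25 heads, with the extreme counts essentially never showing up, even though no flip ever 'talks' to another.

```python
# Minimal sketch: repeat the "flip a fair coin 50 times and count heads"
# experiment many times, then tally how often each head-count occurred.
import numpy as np

rng = np.random.default_rng(seed=1)
num_experiments = 100_000      # how many times the 50-flip experiment is run
flips_per_experiment = 50

# 0 = tails, 1 = heads; each row is one independent experiment
flips = rng.integers(0, 2, size=(num_experiments, flips_per_experiment))
heads_per_experiment = flips.sum(axis=1)

# Tally: rare at the edges (0 or 50 heads), peaked near 25 heads
tally = np.bincount(heads_per_experiment, minlength=flips_per_experiment + 1)
for k, count in enumerate(tally):
    print(f"{k:2d} heads: {count}")
```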

So even in this rough analogy, you can still see that communication is not required to obtain a result that looks similar to a DSE interference pattern. Coin flips cannot communicate their results to the next flip, so even if you get 10 heads in a row, the next flip is still 50/50. And despite the lack of communication, when you keep flipping you will always see the outcomes trend towards the expected probabilities, at least in the long run. This sort of probabilistic thinking also underlies the second law of thermodynamics: things break into pieces; they don't spontaneously assemble themselves from broken pieces into a complex object (i.e., entropy increases over time).

sps
  • 19