4

I was reading about superdeterminism and found it a bit counter-intuitive. The idea of a hidden variable in the measurement device seems very reasonable to me. For example, if we shine light on a bound electron, like the one in a hydrogen atom, only photons of certain frequencies and polarizations can interact with it. Likewise, when we have a spin pair under some constraint, e.g.

$$ \hat S_A + \hat S_B = 0 $$

not all photons can interact with, say, Bob's particle. I thought this was the reason we need a non-uniform field in Stern-Gerlach-like experiments.

I expected the polarization of the interacting photon, in the plane perpendicular to the Stern-Gerlach axis, to be the hidden variable, but the superdeterminism page says nothing about this. It is more about philosophical concepts like free will, and about the measurement angle itself as a hidden variable.

Are there any classifications of superdeterministic theories? Or is only global superdeterminism possible, and is my intuition about a local hidden variable in the measurement device, caused by non-uniform fields, wrong? Are there any Bell tests that consider this?

I'm neither a native English speaker nor a physicist, so please edit my question to be more accurate.


4 Answers

7

As the other answers here seem content with saying "Superdeterminism is stupid, don't try to understand it", let me try to address your questions and clarify some things:

When Bell derived his inequality, he considered two entangled particles $A$ and $B$ and assumed that they contain "hidden variables" $\rho_A$ and $\rho_B$ with information about their state. For example, if particle $A$ is spin-up, then $\rho_A = +1$; if it is spin-down, then $\rho_A = -1$.

The measurement devices, on the other hand, do not contain hidden variables. Alice and Bob know their states precisely, since they set them up. So the devices are in known states (let's call these states $a$ and $b$). For polarizers, these would be the angles of the polarization axes, e.g. $a = 45^\circ, b = 60^\circ$ if Alice's polarizer is set at 45 degrees and Bob's at 60 degrees.

Now, Bell assumes some additional properties for the hidden variables:

  1. Reality: This is the existence of $\rho_A$ and $\rho_B$ from the start.
  2. Locality: This is the independence of $\rho_A$ and $\rho_B$. Otherwise, we would have to consider $\rho_{AB}$, a single variable for both photons.
  3. Statistical independence: This is the independence of $\rho_A$ and $\rho_B$ from the settings of the measurement devices, $a$ and $b$.

Since Quantum Mechanics violates Bell's inequality, one of these assumptions must be wrong. In Superdeterminism, assumption 3 is dropped.

So it's not that the device settings are hidden. Superdeterminism assumes that the photon states may depend on the settings, i.e. $\rho_A(a,b)$, $\rho_B(a,b)$.
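
To see where this assumption enters the math, it may help to write the correlation the way Bell did (this is the standard textbook form, with the hidden variable called $\lambda$ and its distribution $p(\lambda)$, not notation from the question):

$$ E(a,b) = \int d\lambda \, p(\lambda) \, A(a,\lambda) \, B(b,\lambda), $$

where $A(a,\lambda), B(b,\lambda) = \pm 1$ are the measurement outcomes. Statistical independence is the statement that $p(\lambda)$ does not depend on the settings $(a,b)$. A superdeterministic model replaces $p(\lambda)$ with a settings-dependent $p(\lambda \mid a,b)$, and the usual derivation of the inequality then no longer goes through.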

Are there any classifications of superdeterministic theories? Or is only global superdeterminism possible, and is my intuition about a local hidden variable in the measurement device, caused by non-uniform fields, wrong?

I don't really know, but I have never heard of global Superdeterminism.

Are there any Bell tests considering that?

The hope of people working on this is that, while our current experiments are well explained by Quantum Mechanics, more sensitive experiments might reveal a dependence of $\rho_A,\rho_B$ on $a$ and $b$ if you look for it.
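
To make the kind of statistic such tests compute concrete, here is a toy numerical sketch (my own illustration, not taken from any actual experiment; the sign-of-cosine hidden-variable model and the angle settings are arbitrary choices). It compares a local hidden-variable model that respects statistical independence with the quantum singlet-state prediction, using the CHSH combination of correlations:

```python
import numpy as np

rng = np.random.default_rng(0)

def lhv_correlation(a, b, n=200_000):
    # Toy local hidden-variable model: each pair carries a shared random
    # angle lam, drawn WITHOUT reference to the settings a, b -- that is
    # the statistical-independence assumption. Each side outputs +/-1.
    lam = rng.uniform(0.0, 2.0 * np.pi, n)
    A = np.sign(np.cos(lam - a))
    B = -np.sign(np.cos(lam - b))   # minus sign: anti-correlated pair
    return np.mean(A * B)

def qm_correlation(a, b):
    # Quantum prediction for spin measurements on a singlet state.
    return -np.cos(a - b)

def chsh(E):
    # CHSH combination S; any model satisfying Bell's assumptions obeys
    # |S| <= 2, while quantum mechanics reaches 2*sqrt(2) at these settings.
    a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print("local hidden variables:", chsh(lhv_correlation))  # ~ -2.0
print("quantum mechanics:     ", chsh(qm_correlation))   # -2*sqrt(2) ~ -2.83
```

No matter which settings you plug in, a local model of this form stays within $|S| \le 2$; a superdeterministic model would evade the bound by letting the distribution of lam depend on $a$ and $b$.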

Whatever you think about Sabine Hossenfelder, she has a decent summary of this topic if you are interested in learning more (pdf). See in particular sections 3 (What) and 6 (Experimental Tests).

Cream
  • 1,658
3

Superdeterminism is a response to Bell's Theorem. It is one of two ways that a certain assumption required to prove Bell's Theorem might fail.

The assumption in question is most commonly called "Statistical Independence". More accurately, it would be called "Statistical Independence between the past hidden variables $\lambda$ and the future settings $(a,b)$", but that's a bit of a mouthful. In mathematical terms, this assumption would look like $$Prob(\lambda)=Prob(\lambda|a,b).$$

The idea here is that one can try to model entanglement by assigning a probability distribution $Prob(\lambda)$ to the shared hidden variables of the two particles, back when they are first entangled. The above equation is the assumption that any reasonable model must assign those probabilities for $\lambda$ independently of the eventual measurement settings $(a,b)$ for that run of the experiment. It seems like a reasonable assumption, but if we break it, the central argument of Bell's Theorem doesn't go through. (If this assumption failed, then one really could explain entanglement experiments in terms of localized hidden variables.)

Superdeterminism is the idea that one can violate the above equation, explaining correlations between $\lambda$ and $(a,b)$ in terms of past common causes. Specifically, there could be some distant-past set of hidden variables $\Lambda$ which would serve to correlate $(\lambda,a,b)$. That argument makes sense up to the point where you consider $a,b$ to be some microscopic details in the measurement device, which is perhaps why you're asking about hidden variables in the measurement devices themselves. Certainly it would be unreasonable to insist that a model have no correlations between those details.
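
Schematically (this is just my notation to make the structure explicit, not taken from any particular model), such a common-cause account would take the form

$$ P(\lambda, a, b) = \int d\Lambda \; P(\lambda \mid \Lambda) \, P(a \mid \Lambda) \, P(b \mid \Lambda) \, P(\Lambda), $$

which generically yields $P(\lambda \mid a,b) \neq P(\lambda)$, i.e. a violation of Statistical Independence.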

But $(a,b)$ aren't microscopic details. They are macroscopic settings, chosen in some manner. They're the values written down in the lab book when calculating the entanglement correlations. As Bell himself put it, they could be chosen by the Swiss Lottery Machine. So any superdeterministic account can't merely correlate hidden details. It has to correlate the output of the Lottery Machine in Alice's lab with the output of the Lottery Machine in Bob's lab, and both of those in turn need to be reproducibly correlated with the original hidden variables back where the entangled particles were generated. If you can't find an account in the literature explaining what those hidden variables might be, it's probably because there's no conceivable set of hidden variables which could account for every way that the settings $(a,b)$ might be chosen.

The other way to break Statistical Independence is having a model which is "Future-Input Dependent", or "Retrocausal", at a hidden level. Instead of a common-cause explanation of the correlations, now the explanation is a direct cause, from $a$ to $\lambda$ and also from $b$ to $\lambda$. (This assumes one is using Pearl-style interventionist causation, where the external intervention/setting is always the "cause" by definition. If you don't take this view of causation, such models are hard to wrap your head around, but can still be analyzed in terms of the input-output structure of the underlying model as described here.)

Some papers (and also the initial definition on Wikipedia) blur the distinction between retrocausal and superdeterministic models, calling them both "superdeterministic", but this seems misguided to me. Clearly there's an enormous conceptual difference between direct retrocausal influences from $(a,b)$ to $\lambda$ and a common-cause explanation of $(a,b,\lambda)$.

Ken Wharton
  • 2,498
  • 8
  • 16
1

Imagine you perform a simple experiment: dropping balls from different heights and measuring their speed and time to impact.

With all that data, you find that all of those experiments follow a precise, exact mathematical equation, which we then call a law (the law of gravity). We use that law to predict future experiments, and when we do them, the predictions turn out to be correct!

That's standard science.

But there's another, valid, interpretation.

Maybe it was all just a coincidence. Maybe the balls could move at any speed in any direction, or even turn into elephants mid-fall. But, on pure luck, they all fell in a way compatible with the law of gravity.

This is ridiculously improbable, but it's not impossible: we can't prove it's false, and we can't prove it's true either. So it's "unfalsifiable", and therefore it's not science.

It's not wrong, it's just not science.

That's superdeterminism. Not that the world is "random"; rather, that everything behaves in a way that cannot be predicted by doing small, contained experiments (like dropping balls) and extrapolating from them to other situations. In other words, it cannot be predicted by science.

It's not surprising that we can invoke superdeterminism to explain away anything we don't like in a theory (like Bell inequality violations in QM), but by doing so we're pretty much abandoning the ability to make predictions.

Juan Perez
  • 3,082
1

There are actually two different versions of Superdeterminism. There's the original version that Sabine Hossenfelder works on, namely the idea that hidden causal events going back to the initial correlations of the universe can account for the correlations observed in quantum entanglement. And then there is Dr. Johan Hansson's version, which does not necessarily rely on hidden variables, but rather posits the violation of a little-known fourth assumption of Bell's Theorem, namely the assumption of continuous causation in physics.

Dr. Hansson published a proof in 2020 that the universe is a predetermined static block universe without continuous causation. Unfortunately, his proof, though brilliant, is relatively obscure. It shows why superdeterminism should be taken very seriously and is anything but stupid. You can read it in Physics Essays Vol. 33, No. 2 (2020), or at this link: https://www.diva-portal.org/smash/get/diva2:1432225/FULLTEXT01.pdf?fbclid=IwAR1LumqekGHmXOzJXOdpNOgFyRkya5CeKefZpQeGWEIQxrr9yyUd7NnZY5o