I am trying to understand what decides the outcome of an experiment, and whether there is any theory (e.g. a non-local hidden-variable theory) that is able to predict the outcome.
3 Answers
According to the Everett interpretation, unitary evolution of the wavefunction predicts that the outcome state of an experiment will contain multiple versions of you in superposition, each one of you seeing one outcome, none of you able to see any of the others. (See here for an outline of how.)
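In symbols, a minimal sketch of the branching (the textbook von Neumann measurement chain, with a spin-1/2 system standing in for the experiment and $U$ for the unitary evolution):

$$\big(\alpha\lvert\uparrow\rangle+\beta\lvert\downarrow\rangle\big)\otimes\lvert\text{you, ready}\rangle\;\xrightarrow{\;U\;}\;\alpha\lvert\uparrow\rangle\lvert\text{you, saw }\uparrow\rangle+\beta\lvert\downarrow\rangle\lvert\text{you, saw }\downarrow\rangle$$

Each term in the final superposition is one "version of you", and decoherence suppresses interference between the terms, which is why neither version can see the other.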
This interpretation is local, deterministic, and assumes only the standard unitary wavefunction evolution that other interpretations agree applies between observations.
But because none of the different versions of you can see any of the others, each one can assert that all the others have vanished, leaving them and their observation as the lone outcome of the experiment. From each observer-instance's point of view, the surviving outcome appears to be random. Since nobody can say how or why the other alternatives disappear (or, given that they are mutually unobservable, even tell whether they have), there is no way to explain why this particular outcome remains and not any other.
Non-local theories (hidden variable or otherwise) posit that effects propagate non-locally: faster than light and, in some reference frames, backwards in time. This obviously raises problems with the word "predict", since the time-order of spacelike-separated events is frame-dependent rather than fixed. In some sense, the outcome of the experiment predicts the experiment that will be done.
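To make "backwards in time in some reference frames" concrete: under a Lorentz boost with speed $v$, the time separation between two events transforms as

$$\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^2}\right),$$

so whenever the events are spacelike-separated ($c\,\lvert\Delta t\rvert < \lvert\Delta x\rvert$) there is some boost with $v < c$ that flips the sign of $\Delta t'$, i.e. reverses which event "happens first".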
Hidden variable theories hypothesise that there is some underlying mechanism that deterministically decides the outcome based on the input conditions; some of that input information must propagate faster than light, and some of it cannot currently be observed, or can never be observed (i.e. it is hidden). I haven't seen any proposals for what this mechanism might be, and given that parts of it would have to be superluminal, it would not be part of current mainstream physics. It seems the mechanism's existence is inferred only from the assumptions that physics should be deterministic and that experiments should yield only one outcome.
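The superluminal part is forced by Bell's theorem rather than by any particular proposal. As a minimal numerical sketch of that constraint (plain Python, not drawn from any specific hidden-variable model): enumerating every local deterministic assignment of outcomes shows the CHSH combination of correlations never exceeds 2, while quantum mechanics predicts, and experiment confirms, values up to $2\sqrt{2}\approx 2.83$.

```python
import itertools
import math

# CHSH: two parties, two measurement settings each, outcomes +/-1.
# A *local deterministic* hidden-variable model pre-assigns an outcome
# to every (party, setting) pair, independent of the other party's choice.
best = 0
for a0, a1, b0, b1 in itertools.product((+1, -1), repeat=4):
    # S = <A0 B0> + <A0 B1> + <A1 B0> - <A1 B1> for this assignment
    s = a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
    best = max(best, abs(s))

print("best local deterministic CHSH value:", best)               # 2
print("quantum-mechanical (Tsirelson) value:", 2 * math.sqrt(2))  # ~2.828
```

Shared classical randomness only mixes these deterministic strategies, so it cannot beat the bound either; reproducing the quantum value requires giving up locality (or giving up single outcomes, as Everett does).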
Given that no specific mechanism has yet been identified, we can't actually make any predictions right now. But should the hypothesised theory/mechanism actually exist, and were we granted a God's-eye view of the hidden variables, then such a theory would by definition predict the outcomes of experiments.
But until we know what the theory actually is, we can't say how.
So-called hidden variable theories posit the existence of variables that account for the outcome of an experiment.
They lack predictive power in the sense that the state of the variables cannot (by any currently-known means) be determined independently of, and before, the materialisation of the effect they cause.
In other scientific theories, the variables proposed to exist can usually be measured by some means independent of the main effect they are said to control. Machines can be devised in which the state of the variables can be measured without the machine evolving further towards its main effect, and without the act of measurement disturbing the existing state of those controlling variables.
The ability to measure in this way is closely linked with the "settability" of the machine - the ability to bring it into a specific initial state.
The problem QM poses is that the physical variables being measured appear to be so fundamental that there is no means of examining their states other than by operating the mechanism itself towards its main effect, or by subjecting it to tests that either leave the variables in an altered state after the measurement (so that the previously measured states no longer correspond to the current states that will ultimately determine the machine's main effect), or destroy the mechanism completely (so that the main effect can no longer occur, and cannot be compared with a prediction).
The absence of any ability to measure the machine without disturbing its state also means it cannot be set into any particular initial state.
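A standard single-qubit example (textbook quantum mechanics, added here only as an illustration) makes the point: prepare the state $\lvert+\rangle$ and measure in the $\{\lvert0\rangle,\lvert1\rangle\}$ basis.

$$\lvert+\rangle=\tfrac{1}{\sqrt{2}}\big(\lvert0\rangle+\lvert1\rangle\big)\;\longrightarrow\;\begin{cases}\lvert0\rangle & \text{with probability }1/2,\\ \lvert1\rangle & \text{with probability }1/2.\end{cases}$$

Either way, the relative phase that distinguished $\lvert+\rangle$ from $\lvert-\rangle=\tfrac{1}{\sqrt{2}}\big(\lvert0\rangle-\lvert1\rangle\big)$ is destroyed by the measurement, so the value just read out no longer describes the system you are left holding, and the system cannot simply be reset to its pre-measurement state.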
There is, of course, much physics which explains the nature and difficulty of this measurement conundrum. So far as I'm aware, there is no dispute about its characterisation or implications.
The main point of dispute about "hidden variable" theories appears to come from those who are primarily battling over the definition of science and who vary in their axiomatic preferences.
The main alternatives to hidden variable theories are what I would dub "non-variable theories": theories that essentially posit that the effects we see may arise from something other than physical causes - something other than "hidden" controlling variables.
They differ from hidden variable theories in not accounting for effects by reference to anything occurring beforehand - many posit some fount of fundamental randomness or indeterminism - and therefore in leaving no further avenue for useful scientific investigation or explanation.
Experiments dealing with ever smaller currents, voltages, mechanical forces, electric charges and magnetic dipoles are subject to ever greater percentage error. This is unavoidable, because subatomic particles are the smallest units we can use as measuring devices. A voltmeter or an ammeter also influences every measurement result, but requires no interpretation because the error is so small.
The most glaring counter-example is the diffraction of electrons or photons at edges. Any attempt to observe what happens at the edges with light (i.e. photons) or with currents/voltages fails. The intensity distribution on the observation screen is therefore entirely a matter of interpretation as to how the diffraction occurs. The advantage of the wave function is that it does not contain any hidden variables.
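To illustrate what "the intensity distribution is fixed by the wavefunction, with no hidden variables" amounts to in a calculation, here is a small numerical sketch. It uses a two-slit geometry rather than a single edge (the same point, with less code), and the wavelength, slit separation and screen distance are made-up illustrative values: the complex amplitudes for the two paths are simply added and squared, and nothing else enters.

```python
import numpy as np

# Two-slit interference: add the complex amplitudes for the two paths
# from source to screen, then square the magnitude. All parameters
# below are illustrative assumptions, not measured values.
wavelength = 500e-9          # 500 nm light
d = 50e-6                    # slit separation
L = 1.0                      # slit-to-screen distance in metres

x = np.linspace(-0.05, 0.05, 1001)   # positions on the screen
r1 = np.hypot(L, x - d / 2)          # path length via slit 1
r2 = np.hypot(L, x + d / 2)          # path length via slit 2
k = 2 * np.pi / wavelength
psi = np.exp(1j * k * r1) + np.exp(1j * k * r2)
intensity = np.abs(psi) ** 2         # |psi|^2 is the observed pattern

print(intensity.max(), intensity.min())   # bright fringes ~4, dark ~0
```

The fringe spacing comes out as $\lambda L/d$ (here 1 cm), and there is no parameter left over that would let you steer any individual particle to a chosen fringe.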
The advantage of a theory that included the interaction of the particles with the electrons of the obstacle would be the controllability of the diffraction, which would be of immense benefit for chip production (and an immense time saving, sparing us from explaining to students again and again that the phenomena of quantum mechanics can be described, but cannot be mentally grasped :-).