35

Quantum Mechanics is very successful in determining the overall statistical distribution of many measurements of the same process.

On the other hand, it is completely clueless in determining the outcome of a single measurement. It can only describe it as having a "random" outcome within the predicted distribution.

Where does this randomness come from? Has physics "given up" on the existence of microscopic physical laws by saying that single measurements are not bound to a physical law?

As a side note: repeating the same measurement over and over with the same apparatus makes the successive measurements non-independent, statistically speaking. There could be a hidden "stateful" mechanism influencing the results. Has any study of fundamental QM features been performed taking this into account? What was the outcome?


Edit: since 2 out of 3 answers seem to me not to address my original question, maybe a clarification of the question itself will improve the quality of the page :-)

The question is about why single measurements have the values they have. Out of the, say, 1000 measurements that make up a successful QM experiment, why do the single measurements happen in that particular order? Why does the wave function collapse to a specific eigenvalue and not another? It's undeniable that this collapse (or projection) happens. Is this random? What is the source of this randomness?

In other words: what is the mechanism of choice?


Edit 2: In particular, you can refer to Chapter 29 of "The Road to Reality" by Penrose, with special attention to page 809, where the Everett interpretation is discussed, including why it is, if not wrong, quite incomplete.

Qmechanic
  • 220,844
Sklivvz
  • 13,829

12 Answers

23

First of all, let me start out by pointing out to you that there have been experimental violations of Bell's inequalities. This provides damning evidence against local hidden variable models of quantum mechanics, and thus essentially proves that the random outcomes are an essential feature of quantum mechanics. If the outcomes of measurements in every basis were predetermined, we should not be able to violate Bell's inequality.
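
As a rough illustration of what the Bell argument involves (a toy sketch of my own, not part of any experiment), here is a short Python snippet comparing the CHSH quantity predicted by quantum mechanics for a spin singlet with the bound $|S| \le 2$ that any local hidden-variable model must obey:

```python
import numpy as np

# Quantum correlation for spin measurements on a singlet pair along
# directions at angles a and b: E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH angle choices (in radians) that maximise the violation.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print(f"quantum CHSH value |S| = {abs(S):.3f}")  # ~2.828, i.e. 2*sqrt(2)
print("local hidden-variable bound: |S| <= 2")
```

Experiments measure correlations giving $|S| > 2$, which is exactly what predetermined outcomes in every basis cannot reproduce.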

One way of seeing why this is in fact a reasonable state of affairs is to consider Schrödinger's cat. Evolution of closed quantum systems is unitary, and hence entirely deterministic. In the case of the cat, at some point in time we have a state of the system which is a superposition of (atom decayed and cat dead) and (atom undecayed and cat alive) with equal amplitude for each. Thus far quantum mechanics predicts the exact state of the system.

We now need to consider what happens when we open the box and look at the cat. When we do this, the system should then be in a superposition of (atom decayed, cat dead, box open, you aware cat is dead) and (atom undecayed, cat alive, box open, you aware cat is alive). Clearly, as time goes on, the two branches of the wave function diverge further as the consequences of whether the cat is alive or dead propagate out into the world, and as a result no interference between them is practically possible. Thus there are two branches of the wave function with different configurations of the world. If you believe the Everett interpretation of quantum mechanics, then both branches continue to exist indefinitely.

Clearly our thinking depends on whether we have seen the cat alive or dead, so that we ourselves are in the state (seen cat dead and aware we have seen the cat dead and not seen the cat alive) or (seen cat alive and aware we have seen the cat alive and not seen the cat dead). Thus even if we exist in a superposition, we are only aware of a classical outcome to the experiment. Quantum mechanics allows us to calculate the exact wavefunction which is the outcome of the experiment; however, it cannot tell us a priori which branch we will find ourselves aware of after the experiment. This isn't really a shortcoming of the mathematical framework, but rather of our inability to perceive ourselves in anything other than classical states.

16

The short answer is that we do not know why the world is this way. There might eventually be theories which explain this, rather than the current ones which simply take it as axiomatic. Maybe these future theories will relate to what we currently call the holographic principle, for example.

There is also the apparently partially related fact of the quantization of elementary phenomena, e.g. that the spin of an elementary particle is always measured in integer or half-integer values. We also do not know why the world is this way.

If we try to unify these two, the essential statistical aspect of quantum phenomena and the quantization of the phenomena themselves, the beginnings of a new theory start to emerge. See papers by Tomasz Paterek, Borivoje Dakic, Caslav Brukner, Anton Zeilinger, and others for details:

https://arxiv.org/abs/0804.1423 and

https://www.univie.ac.at/qfp/publications3/pdffiles/Paterek_Logical%20independence%20and%20quantum%20randomness.pdf

beginning with Zeilinger's (1999) https://doi.org/10.1023/A:1018820410908 (also available free online).

These papers present phenomenological (preliminary) theories in which logical propositions about elementary phenomena can somehow carry only one or a few bits of information.

Thanks for asking this question. It was a pleasure to find these papers.

Urb
  • 2,724
sigoldberg1
  • 4,537
4

The mechanism of choice in one particular instance of a quantum-mechanical experiment is unknown in all of physics today; it's just that, for many physicists, this fact is too uncomfortable to accept or to admit.

Einstein couldn't accept it; Bohr and Feynman admitted it, though. The question leads us to the never-ending Bohr–Einstein debates.

A fundamental fact at the heart of physics is that wave functions $\psi$ cannot be measured, only their absolute square $|\psi|^2$. Pure logic forces us to admit that statements about wave functions are not statements about a perceivable reality. Quantum theory is a theory that explains how to calculate the outcome of a measurement; it cannot tell us what happened before or during the measurement, because a wave function is an idea, an inaccessible hypothesis.
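
One quick way to see the point (my own illustration, not the answer's): the wave function is only defined up to an overall phase, since a global phase drops out of every $|\psi|^2$,

$$\left|e^{i\theta}\psi(x)\right|^2 = e^{-i\theta}e^{i\theta}\,\psi^{*}(x)\,\psi(x) = |\psi(x)|^2,$$

so two different wave functions can describe exactly the same measurable statistics.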

Gerard
  • 7,189
4

The indeterminism does not originate from Quantum Mechanics. It has a wider philosophical origin.

For example, consider the many-worlds interpretation (MWI) of quantum mechanics. It is a completely deterministic theory which describes the unitary, reversible and predictable evolution of a quantum system, or of the Universe (the Multiverse, in terms of the MWI) as a whole.

But any actual experiment will still show uncertainty. Why? According to the MWI, with each act of measurement the observer gets split into two copies, each of which experiences a different result.

One can thus formulate a similar problem without involving quantum mechanics at all: what would one experience if somebody created an exact copy of oneself? Would one still experience the old body or the new one? What happens if the old body is killed?

Several related thought experiments can be formulated:

  1. There is a teleportation device that scans your body and sends that information to a receiver, which re-creates an exact copy of your body; the original body is then destroyed. Would a reasonable person use such a teleporter, even if their friends had used it and said it was great?

  2. Suppose the medicine of the future has become very advanced. You are now offered a game: your brain will be split into two parts, with one of them left in your body and the other transplanted into another body, and then both are fully regenerated. The memories of both parts are completely (or mostly) restored. Then one of the resulting people is given a billion dollars while the other is sent to life imprisonment. Should you agree to such a game? What is the probability that you will find yourself the billionaire or the prisoner after the operation? Should you agree if someone cuts off not half of the brain but a smaller part? What about other parts of the body?

This leads to yet-unresolved philosophical questions which have existed since ancient times, when people knew nothing about quantum mechanics.

Here is a list of open philosophical problems that arise in the course of the thought experiment:

  • Hard problem of consciousness (philosophical zombies)
  • Problem of induction
  • Qualia problem
  • Ship of Theseus paradox
Anixx
  • 11,524
4

My two lepta on this mainly conceptual and semantic problem:

It seems that people start from an initial position/desire: those who want/expect/believe that measurements should be predictable to the last decimal point, and those who are pragmatic and accept that maybe they are not. The first group wants an explanation of why unpredictability exists.

An experimentalist knows that measurements are predictable within errors, and the errors can sometimes be very large. Take wave mechanics, classical. Try to predict weather fronts, a completely classical problem. The weather report is a daily reminder of how large the uncertainties are in classical problems that are, in principle, completely deterministic. This leads to the theory of deterministic chaos. So predictability is a concept in the head of the questioner, as far as quantum or classical measurements go. The difference is that in classical physics we believe we know why there is unpredictability.
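
To make the classical-unpredictability point concrete (a toy sketch, not a weather model), here is a minimal Python example: iterating the logistic map in its chaotic regime from two initial conditions that differ by one part in a billion quickly gives completely different values, even though every step is perfectly deterministic.

```python
# Deterministic chaos in a toy system: the logistic map x -> r*x*(1 - x)
# in its chaotic regime (r = 4).
r = 4.0
x, y = 0.2, 0.2 + 1e-9   # two initial conditions differing by one part in 10^9

for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  |x - y| = {abs(x - y):.3e}")
```

After a few dozen iterations the two trajectories bear no resemblance to each other, which is the practical limit on classical predictability.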

Has physics given up on the predictability of the throw of a die? Taken to extremes, trying to find the physics of the metastable state of the falling die, we come again to errors and the accuracy of measurement.

Within measurement errors, at the orders of magnitude we live in, nanometers to kilometers, quantum mechanics is very predictive, as evinced by all the marvelous ways we communicate through this board, and even by our achieving lasing and superconductivity. It is only when probing the very small that the theoretical unpredictability of individual measurements in QM enters, so small that "intuitions" and beliefs can become dominant over measurement and errors. And there, according to the inherent beliefs of each observer, the desire for a classically predictable framework or the willingness to explore new concepts determines whether a physicist will obsess about this conundrum or live with it until a TOE arrives.

anna v
  • 236,935
3

You have asked several questions here.

  1. Why can't the outcome of a QM measurement be calculated a priori?

    Because what you are measuring is physically real and truly random. Remember that collapsing wave functions are not real and are only used mathematically.

  2. Where does this randomness come from?

    From the random trajectories of the photons or electrons depending on the experiment.

  3. Has physics "given up" on the existence of microscopic physical laws by saying that single measurements are not bound to a physical law?

    YES!

  4. Why do the single measurements happen in that particular order?

    I'm not sure about this question, but if you are measuring pairs of particles, then usually it is coincidences you are looking for.

  5. Why does the wave function collapse to a specific eigenvalue and not another?

    Remember the wave function is not real and the measurement is actually the random location of an impacting photon.

  6. What is the source of this randomness, in other words: what is the mechanism of choice?

    Photons emitted from a common source have random trajectories and phases, and their interaction with the measuring device is the mechanism.

Sklivvz
  • 13,829
Bill Alsept
  • 4,159
1

Just wanted to add that the Simulation Hypothesis [1] [2] (SH) may suggest an answer.

SH implies that there is a non-zero chance that we are in fact in a computer simulation. If this simulation is anything like the simulations we currently create, then it implies that the rules defined in the simulation code would appear to us as fundamental, irreducible laws. One such rule could define the wave function, and also define that all observations should pick from the distribution defined by the wave function.

Why does the wave function collapse to a specific eigenvalue and not another?

If SH is true, then this happens because that's what the simulation code defines it to do. In the simulation framework, physical axioms are candidates for simulation code declarations. If so, we'd likely be doomed never to know the reason why they were declared so (it could be as mundane as "let's see what happens if we do").

Is this random? What is the source of this randomness?

It would appear to us as random. The source might still be a deterministic pseudorandom number generator, but if its period is long enough, it would be, in practice, indistinguishable from a truly random generator. For example, statistical randomness tests performed on its output would show no deviations from those of a sound quantum random number generator.
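
As a toy illustration of that indistinguishability (a sketch of my own, not a serious test suite), a simple frequency test on bits from a deterministic, seeded pseudorandom generator looks exactly like what an ideal coin would produce:

```python
import random

# Toy "monobit" frequency test: count ones in a stream of bits from a
# deterministic PRNG and compare the excess to the ~sqrt(N) fluctuation
# expected from a genuinely random source.
rng = random.Random(12345)          # fully deterministic given the seed
N = 100_000
ones = sum(rng.getrandbits(1) for _ in range(N))

print(f"ones = {ones} out of {N}, excess over N/2 = {ones - N // 2:+d}")
print(f"fluctuation scale expected for a fair coin ~ sqrt(N)/2 = {int(N**0.5 / 2)}")
```

Passing such tests says nothing about whether the source is "truly" random; it only means no deviation has been detected, which is the point made above.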

Although, depending on the specifics of the host simulator's RNG, with sufficient technological advances we might be able to find a pattern using cryptographic methods. And if one were found, it would challenge the assumptions of QM.

Ultimately, no one can prove that quantum measurements are actually random, because to show that one would have to prove the null (i.e. show that the signal is exactly 0 and not just very small). Though we have no evidence that they're not (perhaps we're not looking in the right places?).

Justas
  • 257
1

Quantum mechanics is inherently (and was developed as) a non-deterministic (stochastic) theory. The answer to your question lies in one of the postulates of quantum mechanics (see this page for a full description):

In any measurement of the observable associated with operator $\hat{A}$, the only values that will ever be observed are the eigenvalues $a$, which satisfy the eigenvalue equation.

Since we know that the modulus squared of the wavefunction corresponds to the probability of a certain physical variable taking a given value, the wavefunction can be expanded in the eigenstates of $\hat{A}$, and we see that each measurement result has an associated probability, namely the modulus squared of the corresponding expansion coefficient.
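
To make the postulate concrete (a minimal sketch with a made-up three-level observable, not any specific experiment): expand the state in the eigenbasis of $\hat{A}$ and sample outcomes with the Born-rule probabilities $|c_i|^2$.

```python
import numpy as np

# |psi> = sum_i c_i |a_i>; a measurement of A yields eigenvalue a_i
# with probability |c_i|^2 (Born rule).
eigenvalues = np.array([-1.0, 0.0, 1.0])        # hypothetical spectrum of A
c = np.array([0.5, 0.5j, np.sqrt(0.5)])         # expansion coefficients c_i
p = np.abs(c) ** 2                              # Born-rule probabilities
assert np.isclose(p.sum(), 1.0)                 # state is normalised

rng = np.random.default_rng(0)
samples = rng.choice(eigenvalues, size=10_000, p=p)

for a_i, prob in zip(eigenvalues, p):
    print(f"a = {a_i:+.0f}: predicted {prob:.2f}, observed {np.mean(samples == a_i):.2f}")
```

Quantum mechanics fixes the probabilities exactly; which eigenvalue turns up in any single run is precisely the part the postulates leave undetermined.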

Noldorin
  • 7,547
1

It is not true that this mechanism is mysterious; however, most physicists don't have the time to ponder these philosophical questions, and they just prefer to leave the random nature of quantum mechanics as an 'axiom'.

To understand how the randomness happens, let's first make a Gedankenexperiment where our physical bodies (including our brains) are classically described: they all have well-defined positions and momenta, and hence any indeterminism in their evolution is purely practical, not a matter of principle.

So in this hypothetical classical universe, Star Trek-style teleportation is a completely legal operation: you can read the entire physical microstate of any person and write it to another place.

But in this classical universe it is also possible to copy a person's microstate, so let's walk through the consequences of such an experiment.

Our experimental configuration consists of two separate rooms with big posters on the wall: one of the rooms contains a '+' and the other room contains a '-'. Now we send our test individual into the teleportation chamber, disintegrate this person, and create a copy of him in each room.

Now the question arises: if you are the test individual, what is it that you are going to experience? Well, the truth is that there are not that many options for our experiences after we enter the teleportation chamber:

1) We don't experience anything afterwards, because we have been disintegrated, so we are dead; our goblin soul went away, and in the world there are two zombie copies of us that don't have a 'soul'.

2) We experience appearing in a room with a big '+' poster.

3) We experience appearing in a room with a big '-' poster.

So if we discard 1) (I don't want to argue about religion with anyone here; I'll just say that 1) is preposterous), we are left with two options: 2) and 3).

The important thing here is that, even in this classical, deterministic universe, being able to copy a conscious being means that some observers/conscious beings will experience events that are fundamentally random and intrinsically non-deterministic, even if everything else is.

You could argue that there are underlying 'physical' reasons why the 'real you' went to 2) instead of 3), or vice versa; you could say that the copy in 2) was more 'perfect' than the one in 3) and hence the real you goes there. But the truth is that these arguments are not fundamental; you are basically clinging to the idea that there is a single 'you'.

Which brings me back to your question: how does all this apply to our world? After all, copying a living being is disallowed (there is even a no-cloning theorem in QM). However, this is not entirely true. The many-worlds interpretation of QM basically takes the determinism of QM to the extreme: if we allow a quantum superposition to couple to a quantum conscious entity (an observer), the observer will undergo a split and become entangled with the physical system he is measuring; he has physically become two separate quantum observers (mutually non-interacting copies, hence the no-cloning theorem doesn't apply), experiencing different outcomes, which individually seem random to each of them.

lurscher
  • 14,933
1

I accidentally found a partial answer to my question on arXiv. It's still completely theoretical, but the answer can be summarized like this.

It is assumed that some form of event horizon exists on a microscopic level. This event horizon prevents some information from escaping. With these premises, a QFT-like theory can be developed by observing the event horizon. The source of the randomness is the fact that information, as seen from outside the event horizon, is incomplete - the assumption being that randomness is the opposite of information.

As the field enters the Rindler horizon for the observer R, the observer shall not get information about future configurations of φ any more, and all the observer can expect about the evolution of φ beyond the horizon is a probabilistic distribution P[φ] of φ beyond the horizon. Already known information about φ acts as constraints on the distribution. I suggested that this ignorance is the origin of quantum randomness. Physics in the F wedge should reflect the ignorance of the observer in the R wedge, if information is fundamental.

Sklivvz
  • 13,829
0

Yes: physics HAS given up on the existence of the microscopic physical laws that you assume would provide the outcome. And what evidence is there that Nature adheres to the idea that P and Q must commute? Physics never established such an idea to begin with. It was always just a convenient assumption, until it turned out that Maxwell simply cannot explain atoms. The world can make perfectly good mathematical sense where particles are actually oscillators with a complex phase and possible paths, rather than "mass points" with classical deterministic behavior. You ask if there is a way to pick out one of the possibilities, but evidently Nature does not know or care. The idea that there is a 'block spacetime' where all events are determined appears to be false.

0

You might want to read about Bohmian mechanics. Bohmian mechanics is perfectly deterministic. The reason randomness appears is explained in the same way as the appearance of randomness in thermodynamic equilibrium.
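
For concreteness (a toy sketch in simplified units ħ = m = 1, for a free Gaussian packet; not taken from the linked references), in Bohmian mechanics each particle follows a definite trajectory given by the guiding equation $\dot{x} = \operatorname{Im}(\partial_x \psi / \psi)$, and the apparent randomness comes entirely from the $|\psi|^2$ distribution of the unknown initial positions:

```python
import numpy as np

# Toy Bohmian trajectories for a freely spreading Gaussian packet (hbar = m = 1).
# For this packet the guiding equation v = Im(d ln(psi)/dx) evaluates to
# v(x, t) = x * t / (4*sigma0**4 + t**2).
sigma0 = 1.0
rng = np.random.default_rng(1)

# "Quantum equilibrium": initial positions drawn from |psi(x, 0)|^2.
x = rng.normal(0.0, sigma0, size=5)

dt, T = 0.01, 5.0
for step in range(int(T / dt)):
    t = step * dt
    v = x * t / (4 * sigma0**4 + t**2)   # deterministic guiding velocity
    x = x + v * dt                       # each particle follows a definite path

print("final positions:", np.round(x, 3))
print("expected spreading factor:", round(np.sqrt(1 + (T / (2 * sigma0**2))**2), 3))
```

Each trajectory is fully deterministic; the statistics of outcomes reproduce the usual quantum predictions only because the initial positions are distributed according to $|\psi|^2$, which is the analogy with thermodynamic equilibrium mentioned above.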

Here's some further reading with links to several papers at the bottom of the page:

http://plato.stanford.edu/entries/qm-bohm/#qr

Raskolnikov
  • 5,546