
I was reading the following answer from this question:

In physics, you cannot ask / answer why without ambiguity. Now, we observe that the speed of light is finite and that it seems to be the highest speed for energy.

Effective theories have been built around this limitation and they are consistent, since they depend on measuring devices which are based on technology / sciences that all have c built in. In modern sciences, one doesn't care about what is happening, but about what the devices measure.

I think this raises a good point: can we get a false positive of a theory being right just because the instruments doing the measuring have that theory built in? For example, if there were a particle that travels faster than light, could we even tell that that's the case, since some of our methods of measuring distance involve relativity (which assumes the speed of light as an upper bound)?

Ovi

3 Answers


Right, so you have some instrument that generates a photo, or takes a bunch of numbers and computes a third number, or something like that. In order to generate that output it makes certain assumptions about the world, and some of those assumptions are theory-level. You are asking whether this can lead to a false confirmation of a scientific theory.

And the answer is yes-and-no. Yes, those sorts of problems are real problems that can happen with our measurement apparatuses. And no, if the problem is purely theory-level then this shouldn't be interpreted as a false confirmation of the theory as a whole; the theory wasn't "at stake" here.

For example, the OPERA experiment measured (erroneously) neutrinos going faster than light, and there were lots of guesses about what was going wrong. Many of them were of the form "you're getting timing data from GPS; maybe you've got calculations baked in that expect the GPS system to work like X, when instead it works like Y." It turned out that the GPS calculations were correct but a timing-signal cable was faulty; still, the GPS-model-is-wrong idea was a very real possibility that a lot of people took very seriously. And GPS includes relativistic corrections and so forth, which assume the speed of light as $c$.
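To get a sense of how big those baked-in corrections are, here's a back-of-envelope sketch of the two relativistic clock effects GPS must correct for. This is my own toy calculation with rounded textbook numbers and an idealized circular orbit, not anything from OPERA's analysis:

```python
# Rough sketch of the relativistic clock corrections baked into GPS,
# assuming an idealized circular orbit; illustrative numbers only.
import math

c = 299_792_458.0    # speed of light, m/s
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of Earth, kg
R_earth = 6.371e6    # radius of Earth, m
r_sat = 2.6571e7     # GPS orbital radius (~20,200 km altitude), m

# Circular-orbit speed: v = sqrt(GM/r)
v = math.sqrt(G * M / r_sat)

# Special relativity: the moving clock runs slow by about v^2/(2 c^2).
sr_rate = -v**2 / (2 * c**2)

# General relativity: the clock higher in the potential runs fast by
# (Phi_sat - Phi_ground)/c^2, with Phi = -GM/r.
gr_rate = (G * M / R_earth - G * M / r_sat) / c**2

day = 86_400  # seconds
print(f"SR effect: {sr_rate * day * 1e6:+.1f} microseconds/day")
print(f"GR effect: {gr_rate * day * 1e6:+.1f} microseconds/day")
print(f"Net:       {(sr_rate + gr_rate) * day * 1e6:+.1f} microseconds/day")
```

The net comes out around +38 microseconds per day. Since light covers about 300 metres per microsecond, ignoring that correction would walk GPS positions off by kilometres within a day; that's the scale of theory "baked into" the timing data OPERA was consuming.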

What is science?

Why, however, would the theory not be in trouble? Let's take a big step back.

A lot of folks were taught a fairy tale where scientists first observe the facts of the world, second look for patterns in those facts, third propose those patterns as a "hypothesis", fourth collect more facts, and fifth see if those new facts confirm or deny that hypothesis. If a hypothesis has no denying-facts, then we call it a "scientific theory". This whole thing is called the "Scientific Method" and, they believe, all scientists follow it. This is a bit suspect for two reasons:

  1. It comes from some time after Francis Bacon and Isaac Newton, but I have never been able to track down whose model it actually was.

  2. In my professional career in science I have never seen someone say "okay, we have to stop what we're doing and start observing facts and see what some patterns are, so that we can form a hypothesis." To the extent that this stuff happens, it's totally subconscious; it's not a "method" in any normal sense of that term. We'd say that this is a descriptive metaphor, not a normative one.

It does help for teaching children; please don't think I am saying that the idea of the scientific method is without value. But it doesn't give a great account of what a theory is. Meanwhile, I would phrase my definition of science as something like,

Science consists in a set of highly self-critical knowledge communities that are trying to figure out all of the ways they have fooled themselves in the past, and setting up heuristics so as not to be fooled again in the same way in the future.

(I'm getting the idea here from the computer scientist Alan Kay, who referred to something similar as big-S Science; I'm sure he could track it back to some source he got it from.)

So, we have a bunch of shared heuristics we interborrow, for example: we should test our understanding via experiments; experiments should have controls; blinding (experimenters ideally shouldn't know which group is test vs control until all the data is collected); the publication process; replications and meta-analyses. These all correspond roughly to ways we have fooled ourselves in the past, and they try to protect against the same in the future. We don't have one singular method, and theories aren't the things that survive that method.

What is a theory? What is theory-choice?

A scientific theory is the largest organizing unit of truth in science. Heliocentrism, Newtonian mechanics, atomic theory, thermodynamics, quantum mechanics, relativity: these are a few of the theories we use in physics.

If we're trying to understand this better, we're talking philosophy of science, and the biggest name in that field is Karl Popper, who wanted to discriminate science from pseudoscience. He said that good science "sticks its neck out": pseudoscience usually could be made compatible with any statement of fact, whereas proper science has some sort of experiments you can run that would hypothetically prove the idea wrong. He didn't really distinguish theory from hypothesis, though.

The second-biggest name, though, is Thomas Kuhn, who did make this distinction. He argued that to make a scientific prediction, you need to take the community's prevailing theories and put them together to make a model for some phenomenon. It's two levels: art supplies on the one hand, and the painted picture on the other. Kuhn makes the case that yes, we crumple up pictures that don't match reality, but we don't usually throw away the art supplies. It's the same as you saw in the OPERA experiment mentioned above, where most physicists pushed back against the report of superluminal neutrinos: "no, you must be wrong; there is something wrong with your timing circuitry or your GPS models or something. Surely the neutrinos aren't going faster than light." And so Kuhn argued that the art supplies only change during so-called scientific revolutions.

I'd go one step farther than I think Kuhn did, to say that theories are often complete for their domains: no matter how complex the phenomenon is, there is some model which predicts that phenomenon in that theory. We saw this with velocity-dependent mass as a way to get special-relativity-like dynamics out of Newtonian mechanics. Or geocentrism: in Newtonian mechanics, you can include centrifugal and Coriolis forces to put the Earth at the center of the coordinate system, then recover "epicycles" as a Fourier analysis, so geocentric models can predict everything that heliocentric models can (a toy demonstration follows below). So I say: what was at stake was not "which theory is right?" but rather "which theory is better?"
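Here is the toy demonstration of that Fourier-analysis point, my own construction with idealized circular, coplanar orbits and rounded radii and periods. It builds the geocentric track of Mars and reads the epicycles straight off a Fourier transform:

```python
import numpy as np

# Toy "epicycles from Fourier analysis": with idealized circular,
# coplanar orbits, the geocentric track of Mars is exactly the sum
# of two rotating circles (a deferent plus one epicycle).
r_e, T_e = 1.00, 1.000   # Earth: orbital radius (AU), period (years)
r_m, T_m = 1.52, 1.881   # Mars: rounded values, assumed for the toy

t = np.linspace(0, 32, 4096, endpoint=False)     # 32 years of samples
z_earth = r_e * np.exp(2j * np.pi * t / T_e)     # heliocentric Earth
z_mars = r_m * np.exp(2j * np.pi * t / T_m)      # heliocentric Mars
z_geo = z_mars - z_earth                         # Mars as seen from Earth

# Fourier-decompose the geocentric track into rotating circles.
coeffs = np.fft.fft(z_geo) / len(t)
freqs = np.fft.fftfreq(len(t), d=t[1] - t[0])    # cycles per year

# The two dominant terms are the two circles of the geocentric model.
for i in np.argsort(-np.abs(coeffs))[:2]:
    print(f"circle: radius {abs(coeffs[i]):.2f} AU "
          f"at {freqs[i]:+.3f} cycles/year")
```

The two circles come back with radii of about 1.52 and 1.00 AU at Mars' and Earth's orbital frequencies. Real elliptical orbits would just demand more circles, which is exactly the epicycle-on-epicycle structure of the late geocentric models.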

To my mind this tension was best resolved by Imre Lakatos. He doesn't, to my memory, phrase it in Darwinist terms, but that framing feels right to me:

  1. Different theories act as kinds of "genes" for papers in different "research programs", which compete for grant money, researchers, and other scarce resources.

  2. A lot of those researchers are lazy grad students, in a good way. They have to make a big impact, and they've got to make it sooner rather than later; if you are a geocentrist computing complex epicycles, that takes a lot more effort to teach and a lot more effort to do than the mathematically equivalent heliocentric computation. So they're going to choose the theory that makes it easier to publish something interesting.

  3. Interesting publications are then read by others who apply the same methodologies in new directions: the theories that simplify the models will thus "reproduce" faster than the ones that don't.

In my interpretation of Lakatos, theory choice is therefore by natural selection. A more recent example than geocentrism vs heliocentrism: we now have a deterministic quantum mechanics via "pilot waves"; why is nobody using it? Because it's complicated! Nobody really likes the Copenhagen interpretation philosophically, but it's dead simple to use and generally predicts the same outcomes.

So, theories like atomic theory, quantum mechanics, Newtonian mechanics, etc. are not "right" or "wrong," they are just regarded by the community as "time-tested proven tools, this one is useful for the following sorts of problems, but less useful on those other ones."

So why doesn't theory-embedding lead to false positives?

So, to answer your question with all of that background: when theory T is embedded in the calculations of some apparatus A, which then observes something that "confirms T," it turns out that A is not actually confirming T; it is confirming the simplest model M, framed in T, which made the prediction. The fact that A contains further assumptions from T, in addition to the assumptions in M, doesn't matter; it just makes the whole system self-consistent. Self-consistency is actually a good thing! It's better than the reverse.
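Here's a minimal toy of that situation; it is entirely my own sketch, and the function names and numbers are invented for illustration. The "world" really propagates light at some true speed, and the apparatus converts a round-trip time into a distance using whatever value of c its embedded theory supplies:

```python
# The "world" propagates light at TRUE_C; the apparatus turns a
# round-trip time into a distance using its theory's assumed c.
TRUE_C = 299_792_458.0  # m/s

def round_trip_time(distance_m: float) -> float:
    """What nature does: light actually travels at TRUE_C."""
    return 2 * distance_m / TRUE_C

def measured_distance(distance_m: float, assumed_c: float) -> float:
    """What the apparatus reports: theory enters via assumed_c."""
    return round_trip_time(distance_m) * assumed_c / 2

# Embedded theory matches the world: the readout is exactly right.
print(measured_distance(1000.0, assumed_c=TRUE_C))        # 1000.0

# Embedded theory uniformly wrong: every distance is rescaled by the
# same factor, so ratios and internal cross-checks still pass.
a = measured_distance(100.0, assumed_c=0.9 * TRUE_C)      # 90.0
b = measured_distance(200.0, assumed_c=0.9 * TRUE_C)      # 180.0
print(b / a)                                              # 2.0
```

When the embedded theory is uniformly wrong, the error hides in the units rather than showing up as a discrepancy in the data; that's the sense in which the embedded theory isn't "at stake."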

You can, on the other hand, get a false positive or negative from internal inconsistencies. Suppose you have two inconsistent theories T1 and T2, and apparatus A embeds T1 in its calculations while you are testing a model M phrased within theory T2. Then A can incorrectly reject or validate M when, if you corrected the theory of A to make things self-consistent, the modified apparatus A' would have validated M, or else rejected M and forced a more complex model M'.

These things can throw you for a loop, but they don't usually create theory changes. In the past, theory changes have usually come about because we developed new instruments and experiments that allowed us to see the world in a new way. Heliocentrism came about because precise measurements of the stars were being made. Einstein's relativity was considered a philosophical conundrum at the time (people dismissed it as philosophical relativism, for bad reasons) but won out because of the Michelson-Morley experiment. Quantum mechanics had its birth in Planck's law, which was motivated by trying to explain the blackbody spectrum of the Sun with statistical mechanics. And that same statistical mechanics, as an explanation of thermodynamics, was considered a little hokey until Einstein connected it to observations of Brownian motion.

CR Drost

You have to give a concrete example. Experiments are designed so as not to depend on what they are trying to measure.

Your speed of light example is not a good one. Wasn't the whole scientific community in a dither because superluminal neutrinos were supposed to have been measured, until it was found that there was a malfunction in an instrument?

In any case, theories are not proven by data; they are just validated, i.e. registered as consistent with the data. If the neutrino experiment had been correct and another experiment had confirmed it, the theory would have had to change.

anna v

Can we tell when an established theory is wrong?

Not always, and not everybody. Sometimes one or more people can, and they explain why. But other people won't entertain it, and then the "established theory" gets even more established.

I was reading the following answer from this question: In physics, you cannot ask / answer why without ambiguity.

You can ask why. We do physics to understand the world. Not to be told to shut up and calculate by some guy who doesn't understand anything.

Now, we observe that the speed of light is finite and that it seems to be the highest speed for energy.

Yes, but the speed of light varies with gravitational potential. See Einstein talking about it in 1920. Or Shapiro talking about it in 1964.
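For reference, in Schwarzschild coordinates the coordinate speed of a radially moving light signal works out to

$$\frac{dr}{dt} = c\left(1 - \frac{2GM}{rc^2}\right),$$

which equals $c$ far from the mass and dips near it; that coordinate slowing is what Shapiro's radar-echo experiments detected. (A local measurement still returns $c$, which is exactly the tautology described next.)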

Effective theories have been built around this limitation and they are consistent, since they depend on measuring devices which are based on technology / sciences that all have c built in. In modern sciences, one doesn't care about what is happening, but about what the devices measure.

Phooey. See the tautology described by Magueijo and Moffat in http://arxiv.org/abs/0705.4507. We use the local motion of light to define our second and our metre, which we then use to measure the local motion of light. So we always measure the same answer, even though the speed varies.
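Here's a minimal sketch of that tautology; the function names are my own inventions, but the 1983 SI redefinition is real. The metre is defined as the distance light travels in 1/299,792,458 of a second, so a light-built ruler-and-clock pair always "measures" c to be exactly the defined value:

```python
# Since 1983 the SI metre has been *defined* as the distance light
# travels in 1/299_792_458 s, so measuring c with light-defined
# rulers and clocks returns the defined value by construction.
SI_C = 299_792_458

def one_metre(local_light_speed: float) -> float:
    """Length of the metre, realized with the local light itself."""
    return local_light_speed * (1.0 / SI_C)

def measured_c(local_light_speed: float) -> float:
    """Measure the local light speed with the light-defined metre."""
    return local_light_speed / one_metre(local_light_speed)

print(measured_c(1.0))   # 299792458.0 (up to float rounding)
print(measured_c(0.5))   # the same, whatever the local value does
```

The local value cancels out of its own readout: fixing c in the units makes the measured c a definition rather than an observation, which is the Magueijo-Moffat point.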

I think this raises a good point: can we get a false positive of a theory being right just because the instruments doing the measuring have that theory built in?

I'm not sure the false positive arises because the instruments have the theory built in. I'd be happier saying false positives can arise because of lack of understanding.

For example, if there were a particle that travels faster than light, could we even tell that that's the case, since some of our methods of measuring distance involve relativity (which assumes the speed of light as an upper bound)?

Yes, we could. I don't think there's much of an issue with that. As anna said, superluminal neutrinos were supposed to have been measured. If such a measurement had been independently confirmed, then the theory would have had to change. But maybe not in the way you think. Neutrinos are actually more like photons than electrons; you might classify neutrinos as light in the wider sense.

John Duffield