88

In "Cargo Cult Science," Feynman writes:

"Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It's a little bit off, because he had the incorrect value for the viscosity of air....Why didn't they discover that the new number was higher right away? It's a thing that scientists are ashamed of--this history--because it's apparent that people did things like this: When they got a number that was too high above Millikan's, they thought something must be wrong--and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan's value they didn't look so hard. And so they eliminated the numbers that were too far off, and did other things like that. We've learned those tricks nowadays, and now we don't have that kind of a disease."

What tricks is he talking about, specifically? And in general, what tricks do physicists learn for doing experiments and avoiding fooling themselves?

Second: it's my strong belief that Feynman is referring to something that isn't in textbooks but is instead built into the culture of physics and passed along in physics labs, though I don't understand exactly what it is. If someone can explain, with stories, examples, or general comments: what is that culture like?

BioPhysicist
  • 59,060

6 Answers

65

There are lots of different strategies that are employed by the scientific community to counteract the kind of behavior Feynman talks about, including:

  • Blind analyses: In many experiments, it is required for the data analysis procedure to be chosen before the experimenter actually sees the data. This "freezing" of methodology ensures that nothing about the data itself changes the way it's analyzed, and a methodology that changes once data starts coming in is a red flag that physicists check for in peer review.

  • Statistical literacy: The more you know about statistics, the easier it is to spot data that has been manipulated. Much effort has been devoted to increasing the knowledge of proper statistical practices among physicists, so that publishing a flawed analysis (whether intentional or not) is difficult. For example, most experimental physics courses nowadays include training on the basics of statistics and data analysis (mine made extensive use of Data Reduction and Error Analysis for the Physical Sciences by Bevington and Robinson).

  • Independent collaborations: It's common nowadays for multiple detectors to perform tests of the same hypotheses independently of each other. The procedure, data, and results are all carefully kept as separate as possible before the respective analyses are published. This increases the likelihood that bias or manipulation will be detected, since it will usually cause a difference in results between multiple studies of the same hypothesis.

  • Verification of old results with new data: This is probably less common than it should be in science, due to a cultural preference for performing novel tests, but it still occurs at a reasonable frequency in physics. Even in the huge detectors and giant collaborations of high-energy physics, new data is often cross-checked against old results as a side effect of other analyses. For example, an analysis trying to measure the mass of a new particle will often use already-known particles to detect its signature, and in the process of characterizing the dataset will end up confirming earlier measurements of those particles as a "sanity check" of the dataset's integrity (a toy version of such a check is sketched at the end of this answer).

  • Incentivization of properly-done disproof: A measurement that disagrees with existing hypotheses or theories is typically met with as much excitement as one that confirms them, or even more. This is especially true in high-energy physics, where much of the community is eagerly awaiting the first statistically significant experimental disagreement with the Standard Model. That excitement also brings intense scrutiny of any experiment claiming to measure a disagreement, which helps filter out improperly-done analyses.

This is, of course, a partial list.
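To make the cross-check from the fourth bullet concrete, here is a minimal sketch. Everything in it is made up: the data are synthetic, the 2.5 GeV spread is a hypothetical detector resolution, and the 3-sigma threshold is an arbitrary choice; only the accepted Z-boson mass is a real (PDG) value.

```python
# Toy "sanity check": re-measure a known particle mass in the dataset
# and compare it to the accepted value before running any new analysis.
# All data here are synthetic; numbers are illustrative only.
import numpy as np

accepted_mass, accepted_err = 91.1876, 0.0021  # Z-boson mass in GeV (PDG)

rng = np.random.default_rng(42)
# Stand-in for reconstructed masses of a known particle in the dataset;
# the 2.5 GeV spread plays the role of detector resolution.
sample = rng.normal(91.19, 2.5, size=10_000)

measured = sample.mean()
measured_err = sample.std(ddof=1) / np.sqrt(len(sample))

# Pull: how many combined standard deviations apart are the two values?
pull = (measured - accepted_mass) / np.hypot(measured_err, accepted_err)
print(f"measured {measured:.3f} +/- {measured_err:.3f} GeV, pull = {pull:+.2f} sigma")
if abs(pull) > 3:
    print("calibration check failed -- investigate the dataset first")
```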

42

My favorite story (which I learned about recently) is about Frank Dunnington and his measurements of electron properties in about 1930.

He was measuring the ratio $e/m_e$, and the experiments took quite a long time (four years!). When the experimental device was constructed, he asked the person who helped him build it not to tell him one key attribute of the device: please make two slits here, with an angular separation somewhere between 18 and 22 degrees. Although this angle was crucial to the analysis, he did not know its exact value until all the experiments were finished.

When he was done with the experiments, he disassembled the device, measured the angle, put the actual value into the formulas in his paper, and published the result.
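In software terms, Dunnington's trick amounts to writing and freezing the entire analysis with the blinded quantity left as a free parameter, and substituting the measured value only at the very end. A minimal sketch, with synthetic data and a purely hypothetical angle dependence standing in for his actual $e/m_e$ formula:

```python
# Sketch of "hardware blinding": the slit angle is unknown while the
# analysis is developed and is substituted only after it is frozen.
# Data and the angle dependence are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(1930)
raw_data = rng.normal(1.0, 0.01, size=500)  # stand-in for four years of runs

def e_over_m_proxy(data, slit_angle_deg):
    """Hypothetical reduction; the blinded angle enters only here."""
    calibrated = data.mean()  # stand-in for the full analysis chain
    return calibrated * np.radians(slit_angle_deg)

# While blinded, the analyst can only scan the builder's stated range,
# so no single preferred value ever emerges during the experiment.
for angle in (18.0, 20.0, 22.0):
    print(f"if the angle were {angle:.0f} deg: {e_over_m_proxy(raw_data, angle):.4f}")

# After disassembling the device, the true angle is measured and used once.
true_angle = 19.7  # hypothetical measured value, in degrees
print(f"final unblinded result: {e_over_m_proxy(raw_data, true_angle):.4f}")
```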

lesnik
  • 4,242

16

What Feynman is talking about is not particular to physicists; it's particular to human nature, and it is called "confirmation bias":

Confirmation bias is the tendency to search for, interpret, favor, and recall information in a way that affirms one's prior beliefs or hypotheses. It is a type of cognitive bias and a systematic error of inductive reasoning. People display this bias when they gather or remember information selectively, or when they interpret it in a biased way. The effect is stronger for desired outcomes, for emotionally charged issues, and for deeply-entrenched beliefs.

This no longer happens the way it did at the time of the electron measurements, because people have generally become aware of this bias. In addition, particle physics research has the opposite bias: people look for deviations from expected values and theories that would indicate new physics. A recent example is the faster-than-light neutrino result, which was finally shown to be a measurement error.

anna v
  • 236,935

14

The Particle Data Group (PDG), which summarizes the knowledge of particle physics every other year, prefixes its tome with the figure included below (from the 2016 edition). It shows how our knowledge of a few selected values evolves over time. You can see how some of the values jump at various points in time. That doesn't mean that nature suddenly changed; it is due to our measurements becoming more and more refined and (hopefully) correcting past mistakes. When the values change significantly, sometimes the explanation is benign (say, the mass of a particle was measured relative to another particle for which a better measurement became available at that point in time), sometimes invalid assumptions in the measurement were corrected (say, a quantity assumed to be negligible turned out not to be), and sometimes the effect described by Feynman will have played a role (perhaps mediated by another quantity that was used for calibration).

Not least because of my interest in this particle (it was the subject of my thesis), I find the most intriguing of these plots to be the one showing the mass of the $\eta(547)$ (second from the bottom, leftmost column), one of the light mesons. Here a number of measurements up until 1990 fell into the same ballpark, but with a lot of tension (indicated by the green error bars, which tell us that the PDG applied their scaling technique). Then around 1990 a new measurement appeared which shifted the mean value by several error bars, and, after some experimental tension in the mid-noughts, the value jumped again to the current best value. Basically, twice the tension could only be resolved by much more precise experiments that shifted the central value significantly.
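For reference, the scaling technique is the PDG's standard averaging recipe: take the inverse-variance weighted mean and, when the inputs scatter more than their quoted errors allow, inflate the error on the average by $S = \sqrt{\chi^2/(N-1)}$. A minimal sketch with made-up numbers (not real $\eta(547)$ data):

```python
# Inverse-variance weighted average with the PDG error scale factor.
# Input values are invented for illustration, in MeV.
import numpy as np

def pdg_average(x, s):
    """Weighted mean of measurements x with errors s, PDG-style."""
    w = 1.0 / s**2
    mean = np.sum(w * x) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    chi2 = np.sum(((x - mean) / s) ** 2)  # mutual consistency of inputs
    scale = np.sqrt(chi2 / (len(x) - 1))
    if scale > 1.0:
        err *= scale  # inflate the error when the inputs are in tension
    return mean, err, scale

x = np.array([547.3, 547.5, 547.9, 547.2])  # hypothetical measurements
s = np.array([0.10, 0.15, 0.10, 0.20])      # their quoted errors
mean, err, scale = pdg_average(x, s)
print(f"average = {mean:.2f} +/- {err:.2f} MeV (scale factor S = {scale:.2f})")
```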

So it seems like even in the most fundamental measurements there can be a reluctance to change, but precision wins.

[Figure: evolution of selected particle properties over time, from the 2016 PDG]

tobi_s
  • 1,321

0

You have exactly reversed Feynman's meaning in his use of the word "tricks". His "tricks" are things like discarding values far from Millikan's and keeping values close to Millikan's; that is, "tricks" here refers to methods of fooling ourselves.

You then use "tricks" in exactly the opposite sense. Consequently, there is no such thing as what you request: the tricks physicists learn "for doing experiments and avoiding fooling themselves" form, in Feynman's usage of the word, an empty set.

Eric Towers
  • 1,797

0

There is a nice article by Robert MacCoun and Saul Perlmutter, "Blind analysis: Hide results to seek the truth" (Nature, 2015). They suggest that one should use random numbers to

  1. add noise to measured data-points,
  2. add artificial biases, and/or
  3. scramble the categories to which the data-points belong

before the actual data analysis. The central idea of this approach is that we debug the analysis programs, draw conclusions about the next steps of the analysis, and assess the influence of possible outliers while we are still "blindfolded" and thus unbiased. Only once everyone agrees that the data analysis is done are the random numbers removed and the final results examined.
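A minimal sketch of those three transformations (made-up data and made-up noise/offset scales; in a real experiment the secrets would be generated and stored out of the analysts' reach):

```python
# Sketch of the three blinding transformations: added noise, a hidden
# overall offset, and scrambled category labels. The secrets are kept
# until the analysis procedure is frozen, then inverted exactly.
import numpy as np

rng = np.random.default_rng(2015)  # in practice, seeded away from the analysts

def blind(values, categories):
    """Return blinded data plus the secrets needed to unblind later."""
    noise = rng.normal(0.0, 0.05, size=len(values))  # 1. per-point noise
    bias = rng.uniform(-1.0, 1.0)                    # 2. hidden overall offset
    perm = rng.permutation(len(categories))          # 3. category scrambling
    blinded_values = values + noise + bias
    blinded_categories = [categories[i] for i in perm]
    return blinded_values, blinded_categories, (noise, bias, perm)

def unblind(values, categories, secrets):
    """Invert the blinding once everyone agrees the analysis is final."""
    noise, bias, perm = secrets
    true_values = values - noise - bias  # exact, since the noise was recorded
    true_categories = [None] * len(categories)
    for new_pos, old_pos in enumerate(perm):
        true_categories[old_pos] = categories[new_pos]
    return true_values, true_categories

values = np.array([1.02, 0.98, 1.05, 0.97])      # hypothetical measurements
categories = ["north", "south", "east", "west"]  # hypothetical detector regions

blinded, scrambled, secrets = blind(values, categories)
# ... debug the analysis and freeze the procedure using `blinded` ...
final_values, final_categories = unblind(blinded, scrambled, secrets)
```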

NotMe
  • 9,818