
In the physical sciences (physics, chemistry, astronomy, materials science, etc.), we learned that if a recorded value does not state its uncertainty, the uncertainty is taken to be +/- 1 in the last significant figure (the smallest unit of the recorded value). So if a digital measuring device measures to the nearest millimeter, has a manufacturer's stated uncertainty of +/- 1 mm, and gives a reading of 914 mm, then it will obviously be recorded as just "914 mm".

However, does the true value actually lie somewhere between exactly 913 mm and exactly 915 mm, or may it stray outside even those bounds if higher precision is used? For example, if we go down to the micrometer (μm), is the uncertainty actually +/- 999 μm or +/- 1,499 μm according to the rules of significant figures? That is, if we measure the same sample to micrometer precision, is the reading guaranteed to lie somewhere between 913,001 μm and 914,999 μm, or is it instead only guaranteed to lie somewhere between 912,501 μm and 915,499 μm?

2 Answers


There is actually no guarantee at all.

Although students are taught about “significant figures” in school, professional scientists do not use them. They are like “training wheels” for understanding uncertainty.

In actual science, scientists estimate their combined uncertainty by considering “components” of uncertainty. The best source for understanding how professional scientists handle uncertainty is NIST Technical Note 1297.

The components of uncertainty from the device itself should be described in the manufacturer’s documentation. It is best to not try to infer this quantity from the smallest digit in a digital display. For example, these calipers list two components of uncertainty:
Accuracy: 0.02 mm
Repeatability: 0.01 mm

The listed accuracy is probably best modeled as a normal distribution with a standard deviation of 0.02 mm. This gives a standard uncertainty of 0.02 mm.

Since the listed repeatability is equal to the device resolution, it is probably best modeled as a uniform distribution of width 0.01 mm. This gives a standard uncertainty of 0.003 mm (see section 4.6 of the NIST guide).
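Concretely, that is the rectangular-distribution rule from that section of the guide: a uniform distribution of full width $w$ has standard deviation $w/(2\sqrt{3})$, so here

$$u = \frac{0.01\ \text{mm}}{2\sqrt{3}} \approx 0.003\ \text{mm}.$$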

Then, you would estimate any other sources of uncertainty similarly. Once you have estimates of the standard uncertainty for each source then you would calculate the combined standard uncertainty as the square root of the sum of the squares of all of the individual standard uncertainties (see section 5.1 of the NIST guide).
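As a rough sketch of that bookkeeping (using only the two caliper components from above as inputs; a real uncertainty budget would have more entries):

    import math

    # Standard uncertainty of each component, in mm.
    # "accuracy" is taken directly as a standard deviation (0.02 mm);
    # "repeatability" is the 0.01 mm resolution converted to a standard
    # uncertainty with the rectangular-distribution rule, 0.01 / (2 * sqrt(3)).
    components_mm = {
        "accuracy": 0.02,
        "repeatability": 0.01 / (2 * math.sqrt(3)),  # about 0.003 mm
    }

    # Combined standard uncertainty: square root of the sum of the squares.
    combined = math.sqrt(sum(u**2 for u in components_mm.values()))
    print(f"combined standard uncertainty = {combined:.4f} mm")  # about 0.0202 mm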

For example, if we considered only the above two components of uncertainty for a measurement with those calipers, then the combined standard uncertainty would be 0.02 mm, meaning that the combined uncertainty is dominated by the “accuracy” component and not the resolution/repeatability component. We would expect the measurements to have a roughly normal distribution with a standard deviation of 0.02 mm. So we would expect about 68% of repeated measurements of the same object to fall within 0.02 mm of the mean, and about 5% of them to fall more than 0.04 mm from the mean.
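Those coverage figures are just the usual tail areas of a normal distribution; if you want to check them, a quick sketch with scipy (an extra dependency, nothing specific to the calipers) gives:

    from scipy.stats import norm

    sigma = 0.02  # combined standard uncertainty, in mm

    # Fraction of measurements expected within one standard deviation of the mean.
    within_one_sigma = norm.cdf(1) - norm.cdf(-1)   # about 0.68
    # Fraction expected to land more than two standard deviations from the mean.
    outside_two_sigma = 2 * norm.cdf(-2)            # about 0.05

    print(f"within {sigma} mm of the mean:      {within_one_sigma:.1%}")       # ~68%
    print(f"more than {2 * sigma} mm from the mean: {outside_two_sigma:.1%}")  # ~5%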

If we measured a value of 12.34 mm then we could report it as “12.34(2) mm where the number in parentheses is the numerical value of the combined standard uncertainty referred to the corresponding last digits of the result”. Or you could report it using one of the other standard descriptions from section 7. This description would essentially imply that you can treat the measurement as a normally distributed random variable with mean 12.34 mm and standard deviation 0.02 mm.

Dale

It depends on the device. You raise two separate but related issues.

For an explicitly digital device, a common misconception is that the uncertainty can be as small as half of the least significant figure. Your question correctly assumes that the uncertainty is at least as large as the least significant figure, but it's worth imagining some mechanisms that explain why.

Suppose that instead of measuring a length of 914 mm, you are actually measuring an electrical potential of 914 mV. All electrical signals are subject to some amount of random noise. Some of this noise is attributable to picking up oscillating signals from the environment, including 50Hz or 60Hz noise from nearby power lines, but also including signals from nearby walkie-talkies or radio stations, or astronomical radio noise from aurorae on Earth or Jupiter. But there is also noise associated with thermal fluctuations in the material's properties, and there is "shot noise" associated with the fact that an electric current is the aggregated motion of many independent charge carriers whose motion can be Poisson-distributed. Your favorite electrical system has some irreducible noise.
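As a toy illustration of the shot-noise point (the carrier count per sampling window below is an arbitrary made-up number, not a property of any real device), Poisson counting statistics give a relative fluctuation of roughly $1/\sqrt{N}$:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy model: each sample of the current is a Poisson-distributed count of
    # charge carriers arriving during the sampling window.
    mean_carriers = 1_000_000              # arbitrary illustrative value
    counts = rng.poisson(mean_carriers, size=100_000)

    relative_noise = counts.std() / counts.mean()
    print(f"relative fluctuation = {relative_noise:.2e}")               # about 1e-3
    print(f"expected 1/sqrt(N)   = {1 / np.sqrt(mean_carriers):.2e}")   # 1e-3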

Suppose in this particular instance that a hypothetical measuring device with better precision and better frequency resolution would report a long-term mean signal of 914.45 mV, but with fast fluctuations as large as 0.2 mV above or below this average. How should this be reported on your millivolt-precision display? Should the final digit "flicker" whenever the voltage fluctuates above 914.50 mV? What if the typical duration of those excursions is comparable to the time it takes for the display to change? A "flickering" display might be completely unreadable. Consider that a typical seven-segment LED flickering rapidly between "4" and "5" shows the same segments as the display uses for "9"; but 919 mV is manifestly wrong.

There are a number of ways that electronic systems get around this problem. A common approach is to establish some amount of hysteresis on the displayed value, so that small or brief excursions which "should" ideally change the displayed value actually don't. For example, your display might have a "dead zone" around 914.50 mV where the value does not change in response to small fluctuations. Perhaps an excursion above 914.8 mV is enough to knock the displayed value to "915", and it does not return to "914" unless there is an excursion below 914.2 mV. (I vaguely recall this is usually implemented as a "Schmitt trigger.")
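Here is a minimal sketch of that kind of dead zone, using only the made-up numbers from the example above; a real instrument's thresholds and timing would be documented (if at all) by its manufacturer:

    import numpy as np

    rng = np.random.default_rng(1)

    # Noisy "true" signal: long-term mean 914.45 mV with fast fluctuations
    # up to about 0.2 mV above or below that average.
    true_mV = 914.45 + 0.2 * rng.uniform(-1, 1, size=10_000)

    # Display with hysteresis: only jump up to "915" on an excursion above
    # 914.8 mV, and only drop back to "914" on an excursion below 914.2 mV.
    shown = 914
    displayed = []
    for v in true_mV:
        if shown == 914 and v > 914.8:
            shown = 915
        elif shown == 915 and v < 914.2:
            shown = 914
        displayed.append(shown)

    # With this noise level the signal never reaches either threshold, so the
    # display sits steadily on 914 even though the long-term mean is 914.45.
    print(sorted(set(displayed)))   # [914]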

Furthermore, without analyzing your device, you have no information about how wide this dead zone is. I gave an example where the displayed value changes if the "true value" and the displayed value briefly differ by more than 0.8 mV, for an unspecified definition of "briefly." But there is no reason in principle for this "dead zone" to be narrower than the difference between least significant digits. A device with such a large "dead zone" would systematically have different display values for the same physical observable when approached from above versus from below. But a narrower "dead zone" has this same systematic offset; it's just smaller.

If you complain that you're imagining measuring a length, not a voltage: bad news. If there's a digital display, you're measuring a voltage.

As for the question of reproducibility using a more precise instrument: it depends. Better precision always involves dealing with different sources of noise and uncertainty. In the best case, a more precise instrument lets you learn about the uncertainty characteristics of your less-precise instrument. But you also have an annoyingly high risk of learning unexpected properties of the thing that you're measuring.

rob