When notation like $5.7 \pm 0.2$ is used to report a measurement and its error, there seem to be several conventions for what it could mean. Is there a standard taught to students? Is there a standard used by professionals?
According to this Wikipedia article, $5.7 \pm 0.2$ could mean:
- The value definitely falls within the range $[5.5, 5.9]$
- $0.2$ represents one standard deviation of uncertainty, so there is a probability of $0.683$ that the true value falls within the range $[5.5, 5.9]$
- $0.2$ represents two standard deviations of uncertainty, so there is a probability of $0.954$ that the true value falls within the range $[5.5, 5.9]$ (see the note just below on where these coverage figures come from)
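(For reference, those two probabilities are just the usual coverage values of a normal distribution; assuming the error is normally distributed with standard deviation $\sigma$,
$$
P(|X - \mu| \le k\sigma) \;=\; \int_{\mu - k\sigma}^{\mu + k\sigma} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/(2\sigma^2)}\, dx \;=\; \operatorname{erf}\!\left(\frac{k}{\sqrt{2}}\right),
$$
which gives $\approx 0.683$ for $k = 1$ and $\approx 0.954$ for $k = 2$.)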
University physics lab manuals I've found online seem to exhibit a similar polysemy. One manual, for instance, initially (page 2) seems to indicate the first of these three meanings, an upper and lower bound on the range of possible values, but then goes on to present techniques for propagation of uncertainties that seem better suited to the cases where the uncertainty represents some number of standard deviations.
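To be concrete about what I mean by propagation techniques: the rules presented are, as far as I can tell, the standard first-order ones for independent quantities, e.g. for a sum $q = x + y$,
$$
\sigma_q = \sqrt{\sigma_x^2 + \sigma_y^2},
$$
and more generally, for $q = f(x, y)$,
$$
\sigma_q \approx \sqrt{\left(\frac{\partial f}{\partial x}\right)^{\!2}\sigma_x^2 + \left(\frac{\partial f}{\partial y}\right)^{\!2}\sigma_y^2},
$$
which treats the uncertainties as standard deviations of independent errors. If the uncertainties were hard bounds, I would instead expect worst-case (interval) arithmetic, such as $\Delta q = \Delta x + \Delta y$ for a sum.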
I am a mathematics educator trying to put together something about the mathematics of precision and its relation to limits and derivatives in calculus, and I want to accurately portray both how this notation is taught and how it is used by professionals.
It makes sense that different situations would call for different uses of the notation, but I can also imagine this being confusing for students and professionals.
- Is there a single standard taught to students at the university level?
- Is there a single standard used by professionals?
- If not, do professionals at least always indicate which convention they're using, or is the intended meaning sometimes left unspecified, leading to mix-ups?