37

Reading the book Schaum's Outline of Engineering Mechanics: Statics, I came across something that makes no sense to me regarding significant figures:

Schaum's Outline of Engineering Mechanics: Statics fragment

I searched and found that practically the same thing is said in another book (Fluid Mechanics DeMYSTiFied): Fluid Mechanics DeMYSTiFied fragment


So, my question is: Why, if the leading digit in an answer is 1, does it not count as a significant figure?

knzhou
  • 107,105

5 Answers

77

Significant figures are a shorthand to express how precisely you know a number. For example, if a number has two significant figures, then you know its value to roughly $1\%$.

I say roughly, because it depends on the number. For example, if you report $$L = 89 \, \text{cm}$$ then this implies roughly that you know it's between $88.5$ and $89.5$ cm. That is, you know its value to one part in $89$, which is roughly to $1\%$.

However, this gets less accurate the smaller the leading digit is. For example, for $$L = 34 \, \text{cm}$$ you only know it to one part in $34$, which is about $3\%$. And in the extreme case $$L = 11 \, \text{cm}$$ you only know it to one part in $11$, which is about $10\%$! So if the leading digit is a $1$, the relative uncertainty of your quantity is actually a lot higher than naively counting the significant figures would suggest. In fact, it's about the same as you would expect if you had one fewer significant figure. For that reason, $11$ has "one" significant figure.
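The arithmetic above is easy to check directly; a quick sketch in Python (illustrative only):

```python
# Relative precision implied by a two-significant-figure report "L cm":
# the true value lies in an interval of width 1 (e.g. 88.5 to 89.5 cm),
# so you know L to one part in L.
for L in (89, 34, 11):
    print(f"L = {L} cm -> one part in {L}, about {100 / L:.1f}%")
```

The relative uncertainty of $11$ cm comes out around $9\%$, roughly what a one-significant-figure report would imply.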

Yes, this rule is arbitrary, and it doesn't fully solve the problem. (Now instead of having a sharp cutoff between $L = 9$ cm and $L = 10$ cm, you have a sharp cutoff between $L = 19$ cm and $L = 20$ cm.) But significant figures are a bookkeeping tool, not something that really "exists". They're defined just so that they're useful for quick estimates. In physics, at least, when we start quibbling over this level of detail, we just abandon significant figures entirely and do proper error analysis from the start.

knzhou
  • 107,105
15

This isn't an actual rule. And as some people point out in the comments, it's not even mentioned in the Wikipedia article on significant digits. The rule applies to $0$, not to $1$.

Simple counter-example: $10$. Would the authors claim that this number has no significant digits?

You can verify this by doing a search for "sig fig counter." All of them should tell you that the number in your question has 4 significant figures.
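For illustration, the conventional counting rules are easy to encode; here is a minimal sketch (my own helper, not any particular online counter), under the usual conventions:

```python
def count_sig_figs(s: str) -> int:
    """Count significant figures in a decimal string such as '11' or '0.0340'.

    Conventions assumed: leading zeros are not significant; trailing zeros
    after a decimal point are; trailing zeros without a decimal point are
    ambiguous and treated as not significant; a leading 1 counts like any
    other nonzero digit.
    """
    digits = s.lstrip("+-").replace(".", "")
    stripped = digits.lstrip("0")
    if "." not in s:
        stripped = stripped.rstrip("0")
    return len(stripped)

print(count_sig_figs("11"))      # 2 by the standard rule
print(count_sig_figs("0.0340"))  # 3
print(count_sig_figs("10"))      # 1 (trailing zero ambiguous)
```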

As others note, this boundary condition is clearly arbitrary. But it needs to be consistent across literature, or else confusion abounds when you're working with others. So I'd say ignore the rule.

11

Truncating numbers to a certain precision is completely arbitrary. There's no reason not to make it more arbitrary.

It seems like someone didn't like the step in precision between 9.99 and 10.0 so they moved it to between 19.99 and 20.0.

In any field where results are clustered around a power of 10, doing this may be beneficial.

Jasen
  • 924
3

It's Experiment Time!

(I was starting to see both points of view on whether to drop the 1, and was curious if there was some objective way of tackling the problem... so I figured it might be a good opportunity for an experiment. For Science!)

Assumptions: Significant Digits are a way of signifying precision on a number - either from uncertainty of measurement or as the result of calculations on a measurement. If you multiply two measurements together, the result has the same number of significant digits as the lower of the two starting values (so 3.8714 x 2.14 has three significant digits, not seven like you'd get from plugging it into a calculator.)

That 'calculation' part is what I'd like to take advantage of. Because arguing significant digits on a number in a vacuum is just semantics. Seeing how the precision carries forward through actual operations gives a testable prediction. (In other words, this should remove any sort of 'cutoff' issue. If two numbers have X significant digits, then multiplying them should give a result accurate to roughly X significant digits - and the validity of how you determine what's a significant digit should translate accordingly.)

Experimental Layout

Generate two high-precision, Benford-compliant coefficients (I'm not actually sure Benford matters in this experiment, but I figured I shouldn't omit any possible complicating factors - and if we're talking physics, our measurements should fit Benford's Law.) Perform an operation like multiplication on them. Then, round those same coefficients down to 4 digits after the decimal, and perform the same multiplication on those rounded values. Finally, check how many digits the two resulting values have in common.

Aka, check how well the imprecise 'measurement' version compares to the actual, hidden, high-precision calculation.

Now, in an ideal world, the value would be 5 matching (significant) digits. However, since we're just blindly checking whether digits match, we're going to have some that match by sheer luck.
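The procedure above can be sketched as follows (my own rough reimplementation in Python; `benford_coefficient` and `matching_digits` are names I made up, and drawing the mantissa log-uniformly is what Benford's Law amounts to):

```python
import math
import random

def benford_coefficient() -> float:
    """Draw a coefficient in [1, 10) whose leading digits follow
    Benford's Law (equivalently: its log10 is uniform on [0, 1))."""
    return 10 ** random.uniform(0, 1)

def matching_digits(a: float, b: float) -> int:
    """Count how many leading digits (ignoring the decimal point)
    two numbers have in common."""
    sa = f"{a:.10f}".replace(".", "")
    sb = f"{b:.10f}".replace(".", "")
    n = 0
    while n < len(sa) and n < len(sb) and sa[n] == sb[n]:
        n += 1
    return n

random.seed(0)
trials = 10_000
fifth_ok = 0
for _ in range(trials):
    x, y = benford_coefficient(), benford_coefficient()
    exact = x * y
    rounded = round(x, 4) * round(y, 4)  # keep 4 digits after the decimal
    if matching_digits(exact, rounded) >= 5:
        fifth_ok += 1
print(f"5th digit matches in {100 * fifth_ok / trials:.1f}% of trials")
```

Splitting the tallies by the leading digits of the inputs and the result, as below, just means adding the corresponding `if` tests inside the loop.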

Experimental Results For Multiplication

Digits Matching Where Result Doesn't Start With One
    ... and no input value starts with One:
            5th digit matches 89.7%
            6th matches 21.4%
    ... and one input value starts with One:
            5th digit matches 53.7%
            6th matches 5.57%
    ... and two input values start with One:
            5th digit matches 85.2%
            6th matches 11.1%
Digits Matching Where Result Starts With One:
    ... and no input value starts with One:
            5th digit matches 99.9+%
            6th matches 37.8%
    ... and one input value starts with One:
            5th digit matches 99.9+%
            6th matches 25.5%
    ... and two input values start with One:
            5th digit matches 95.0%
            6th matches 13.9%

Conclusions For Multiplication

First, when multiplying two numbers yields a result that starts with 1, you should probably count the 1 as a significant digit. In other words, if you multiply '4.245' x '3.743', and come up with '15.889035', you should probably leave it at '15.89'. If you add an additional digit and call it '15.889', you have a 38% chance of that final digit being correct... which probably isn't high enough to be defensible to include.

But multiply where one of the inputs starts with 1, and it gets strange. Multiplying '1.2513' x '5.8353', you realistically don't have five significant digits in your result. According to the experiment, you've got four digits... and a 54% chance of being right with that fifth value. Well, if a 38% chance in the prior situation (multiplying two numbers and ending with a value starting with '1') of getting an 'extra' significant digit isn't acceptable, then it's probably fair to say the 54% chance in this situation is also too low to justify including the 5th digit.

So you might be tempted to say "Don't treat a leading 1 as significant as an input to a calculation"... except that multiplying 1.##### x 1.#### (two numbers that start with 1) gives you 85.2% accuracy on that fifth digit - which is pretty much the same level of accuracy where none of the three numbers begin with a 1. So if 8.83 x 8.85 should have three significant digits, so should 1.83 x 1.85.

Final Conclusion: It's actually a deceptively difficult problem to figure out a good heuristic. Especially since there's a pretty big difference between a measurement of 1.045 that's fed into the input of a calculation, and the 1.045 that comes out as a result of a calculation. Which explains why there are multiple methods of handling leading 1's. (If I were forced to choose a heuristic, it would be: don't count the leading '1' on any measurements performed, but count it for the output of any calculations.)

Kevin
  • 260
  • 1
  • 5
2

Keeping track of "significant digits" is a heuristic for indicating approximately the precision of a number. It's not a substitute for a real uncertainty analysis, but it's good enough for many people and many purposes. When some people run up against the limitations of significant figures, they have enough background (or colleagues with enough background) to switch to a more serious error analysis. When other people run up against those same limitations, they try to "fix" the significant-digits approach by creating new ad-hoc rules like this one.

Let's suppose that you and I are independently analyzing the same data set. Each of us has measured the same quantity to two significant figures: your result is 0.48, and my result is 0.52. Since a healthy significant-figure analysis retains one least-significant digit whose value is only mostly trustworthy, it's not clear whether our measurements agree or not; that level of disagreement is interesting and we might end up discussing how to turn that into a three-significant-figure experiment, in case we've both correctly measured a "true" value closer to 0.498.

Now imagine a different universe where we both do the same experiment, but a different definition somewhere means that our results differ numerically by an exact factor of twenty. Your measurement in this universe is 9.6, and mine is 10.4. There's still an interesting tension between those numbers. But if I count the leading 1 as one of my two significant digits, I should report my result as "10", suggesting it is equally likely to be "9" or "11". If you report 9.6 and I report 10, the tension between our results is much less obvious. Also, it appears that my result is ten times less precise than yours. I shouldn't be able to change the precision of a number by doubling or halving it.

That's the logic for keeping track of a "guard digit" if a number happens to fall in the bottom part of a logarithmic decade. (The Particle Data Group keeps a "guard digit" if the first two significant digits are between 10 and 35.) But to explain this by saying that "a leading 1 isn't a significant digit," as your source does: that's terribly confusing. I'd find a book written by someone else and read the author you quote here with some caution.

@supercat reminds me in a comment that there is a compact convention for representing real uncertainties that has become popular in the literature in the past couple of decades: one writes the uncertainty in the last few digits in parentheses just after the number. For example, one might write $12.34(56)$ as a shorthand for $12.34\pm 0.56$. This approach is nice in the precision measurements business, where there are many significant figures. For example, the current Particle Data Group reference reports the electron mass (in energy units) as $0.510\ 998\ 950\ 00(15)\,\mathrm{ MeV}/c^2$, which is much easier to write and to parse than $0.510\ 998\ 950\ 00\,\mathrm{ MeV}/c^2 \pm 0.000\ 000\ 000\ 15 \,\mathrm{ MeV}/c^2$.
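Expanding the shorthand is mechanical; here is a small illustrative helper (my own sketch, assuming the common convention that the parenthesized digits align with the last digits of the central value):

```python
def expand_parenthesis_notation(s: str) -> tuple[float, float]:
    """Expand a string like '12.34(56)' into (12.34, 0.56): the digits
    in parentheses are an uncertainty occupying the same decimal places
    as the last digits of the central value."""
    value_part, unc_digits = s.rstrip(")").split("(")
    value = float(value_part)
    # Count digits after the decimal point in the central value:
    decimals = len(value_part.split(".")[1]) if "." in value_part else 0
    uncertainty = round(int(unc_digits) * 10.0 ** -decimals, decimals)
    return value, uncertainty

print(expand_parenthesis_notation("12.34(56)"))  # (12.34, 0.56)
```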

I haven't seen that approach much in material for introductory students, and I can think of a couple of reasons why. The "significant figure rules" are, for most people, the first time they learn that arithmetic is something you can do with numbers that are not exact. Many students are intellectually unprepared for that idea: they're ready to write 0.5 instead of 1/2, but they're vague on whether to decimalize 1/7 as 0.1 or as 0.1428571429, because the latter is how it comes out of the calculator. Furthermore, to use the parenthesis notation, you have to have some understanding of significant figures already. To combine my examples above, most people who aren't in the precision measurements business (where understanding the uncertainty may be more challenging than understanding the central value) would write 12.3(6) rather than keeping the guard digits in 12.34(56). But if you were to multiply that value by twenty, it would become 246.8(11.2). Whether to record it thus, or as 247(11), or as $250\pm10$, winds up raising the same issues about guard digits that started this question. While the ambiguity is moved from the central value to the uncertainty, so the stakes for misjudging are lower, explaining this to a person who is new to the idea of careful imprecision is a tall order.

rob
  • 96,301