
I'm asking this because I saw (at least) two questions on this Stack that seemed very similar, and reading the answers to both left me with the same confusion.

Suppose we have a formula:

$A = BC$

And we know the fractional errors in $A,C$ (i.e. $A=A_0(1+\Delta_A)$ etc). We are asked to find the fractional error in $B$.

Here are three methods commonly seen:

Method 1:

Differentiate implicitly and divide by $A=BC$ to get $\frac{dA}{A} = \frac{dB}{B} + \frac{dC}{C}$. Solving for $\frac{dB}{B}$ gives $\frac{dB}{B} = \frac{dA}{A} - \frac{dC}{C}$.

This is clearly wrong, because the error in $B$ should not decrease when the error in $C$ increases. I think the way to fix this is to note that actually $\Delta_X = |\frac{dX}{X}|$, and so take absolute values of the terms on the RHS.
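To spell out the fix I have in mind (this is my own guess at the repair, essentially the triangle inequality, not something I've seen justified anywhere):

$$\Delta_B = \left|\frac{dB}{B}\right| = \left|\frac{dA}{A} - \frac{dC}{C}\right| \leq \left|\frac{dA}{A}\right| + \left|\frac{dC}{C}\right| = \Delta_A + \Delta_C.$$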

Method 2:

Solve first for $B$, then apply the same approach. This gives exactly the same answer, but people seem to automatically insert absolute value bars around the derivatives at this step, which tidies away the minus sign.
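For concreteness, the steps I mean here are (my own rendering of the argument, not a quote from anywhere):

$$B = \frac{A}{C} \quad\Longrightarrow\quad \frac{dB}{B} = \frac{dA}{A} - \frac{dC}{C},$$

and then the absolute value bars get slipped in to give $\Delta_B = \Delta_A + \Delta_C$, apparently by the same move as in Method 1.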

Method 3:

Again, get to something like:

$dA = C\,dB + B\,dC$, only this time square both sides. This leads to a quadrature rule, but I'm not 100% clear where the cross term went (even if the quantities are "independent").
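Writing out the squaring step as I understand it (so it's clear exactly which term I mean): squaring $dA = C\,dB + B\,dC$ and dividing by $A^2 = B^2C^2$ gives

$$\left(\frac{dA}{A}\right)^2 = \left(\frac{dB}{B}\right)^2 + \left(\frac{dC}{C}\right)^2 + 2\,\frac{dB}{B}\,\frac{dC}{C},$$

and the quadrature rule $\Delta_A^2 = \Delta_B^2 + \Delta_C^2$ only follows if the last (cross) term is dropped; that dropping is exactly the step I can't justify.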

Which of these methods is correct, and what are the flaws and pitfalls of each? Can each method be made to work if one is sufficiently careful, or is one of them truly invalid?
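In case it is useful, here is a quick numerical sanity check I tried (just a sketch: the nominal values and fractional errors are made up, and it treats $\Delta_A$, $\Delta_C$ as $1\sigma$ Gaussian fractional errors, which may itself be part of the ambiguity). It samples independent errors in $A$ and $C$, computes $B = A/C$, and compares the observed spread of $B$ with the linear-sum and quadrature predictions:

```python
import numpy as np

rng = np.random.default_rng(0)

A0, C0 = 12.0, 3.0   # nominal values (arbitrary, for illustration)
dA, dC = 0.05, 0.03  # assumed fractional errors in A and C

N = 1_000_000
A = A0 * (1 + dA * rng.standard_normal(N))  # independent Gaussian fractional errors
C = C0 * (1 + dC * rng.standard_normal(N))
B = A / C                                   # B inferred from A = B*C

observed = B.std() / B.mean()               # observed fractional spread of B
linear   = dA + dC                          # Methods 1/2 with absolute values
quad     = np.hypot(dA, dC)                 # Method 3 (quadrature)

print(f"observed  : {observed:.4f}")
print(f"linear sum: {linear:.4f}")
print(f"quadrature: {quad:.4f}")
```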

[Inspiring questions (I do NOT think this is a duplicate of them):

Confusion with regards to uncertainty calculations

The approximate uncertainty in $r$ ]
