
I am 99% sure that the calculation to obtain the uncertainty of two multiplied values given by my textbook and this university is incorrect.

They both say this:

$$(A \pm a)(B \pm b) = AB\left(1 \pm (\varepsilon_A + \varepsilon_B)\right)$$

where $\varepsilon_A = a/A$ and $\varepsilon_B = b/B$ are the relative errors.

Which should be equivalent to (correct me if I'm wrong):

$$(A \pm a)(B \pm b) = AB \pm AB\left(\frac{a}{A} + \frac{b}{B}\right) = AB \pm (aB + Ab)$$

Let's consider $(4 \pm 1)(2 \pm 1)$. This should equal $8 \pm 6$, right?

That would mean the smallest possible value is $2$ and the largest is $14$. However, the largest possible value is actually $15$ ($5 \cdot 3$) and the smallest is $3$ ($3 \cdot 1$). If you plot the product as a surface over the two intervals, you will see there is no way to obtain values below $3$ or above $15$. Therefore there must be a problem with the formula. After some algebra I came up with the following equation, which I believe is correct:

$$(A \pm a)(B \pm b) = (AB + ab) \pm (Ab + aB)$$

As long as $A \ge a$ and $B \ge b$ and $a, b \ge 0$.
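The bounds can be checked by brute force. Here is a minimal Python sketch, assuming the error bars mean the true values lie anywhere in $[A-a, A+a]$ and $[B-b, B+b]$; since the product is monotone in each factor over these intervals, it is enough to check the four corners:

```python
# Brute-force check of the exact bounds of (4 ± 1)(2 ± 1).
A, a = 4, 1
B, b = 2, 1

corners = [(A + sa * a) * (B + sb * b) for sa in (-1, 1) for sb in (-1, 1)]
lo, hi = min(corners), max(corners)
print(lo, hi)                 # 3 15

centre = (hi + lo) / 2        # matches A*B + a*b = 9
half_width = (hi - lo) / 2    # matches A*b + a*B = 6
print(centre, half_width)     # 9.0 6.0
```

Note that the centre of the exact interval is $AB + ab = 9$, not $AB = 8$, in agreement with the formula above.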

Is the equation in the textbook (and given by this university) incorrect or have I just missed something?

elldur

3 Answers


If you are ever in doubt about an error calculation, use differentiation. In fact, all the rules you mentioned are derived using calculus; unfortunately, the rules are usually taught without showing how they were derived.

You have two variables $A$ and $B$ whose uncertainties are $\Delta A$ and $\Delta B$ respectively.

$AB = A\times B$

Differentiate both sides and you get:

$$d(AB) = A(dB) + B(dA)$$

Divide the entire equation by the product $AB$.

$$\frac{d(AB)}{AB} = \frac{A(dB)}{AB} + \frac{B(dA)}{AB}$$

$$\frac{d(AB)}{AB} = \frac{(dB)}{B} + \frac{(dA)}{A}$$

The derivative tells us how the result changes for an infinitesimally small change in the inputs. In other words, it tells us how much the product $AB$ varies for infinitesimally small errors in $A$ and $B$. When the errors are considerably smaller than the values themselves, we can approximate the infinitesimals $dX$ by finite differences $\Delta X$. This assumption is what breaks the formula in your question: when the errors are large, the approximation does not hold.

Therefore, the final equation looks like:

$$\frac{\Delta (AB)}{AB} = \frac{\Delta B}{B} + \frac{\Delta A}{A}$$

$$\Delta (AB) = AB\left(\frac{\Delta B}{B} + \frac{\Delta A}{A}\right)$$

$$\text{result} = AB\pm \Delta (AB)$$
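A quick numerical sketch of this first-order estimate, using the values from the question (where the errors are deliberately not small):

```python
# First-order (differential) error estimate for a product, compared with
# the exact extremes, using A = 4 ± 1 and B = 2 ± 1 from the question.
A, dA = 4.0, 1.0
B, dB = 2.0, 1.0

product = A * B
d_product = product * (dA / A + dB / B)   # equals B*dA + A*dB
print(product, d_product)                 # 8.0 6.0

# Exact extremes: the linear estimate drops the dA*dB term, which is why
# 8 - 6 = 2 and 8 + 6 = 14 miss the true bounds 3 and 15.
print((A - dA) * (B - dB), (A + dA) * (B + dB))   # 3.0 15.0
```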

Using differentiation you can approximate the error of any function, for example $Ae^{Bx^2 + C}$. It is better to use calculus than to memorize rules or derive results with basic algebra: algebra cannot easily handle every kind of function (imagine applying the product rule over and over in the series expansion of $e^x$ and then spotting a geometric progression, compared with simply differentiating $e^x$).

Yashas

Suppose $x_A$ is the fractional error in $A$ so the absolute error in $A$ is $Ax_A$. Then the product is:

$$ A(1\pm x_A) \times B(1\pm x_B) = AB(1 \pm x_A \pm x_B \pm x_A x_B) $$

If $x_A$ and $x_B$ are small then their product is very small so we can ignore it and we're left with:

$$ AB(1 \pm x_A \pm x_B) $$

But you can't simply add $x_A$ and $x_B$ to get:

$$ AB(1 \pm (x_A + x_B)) $$

because the errors in the two variables $A$ and $B$ aren't correlated so a negative error in $A$ can cancel a positive error in $B$ and vice versa. If the errors are normally distributed then we can show the combined error is:

$$ AB(1 \pm \sqrt{x_A^2 + x_B^2}) $$

So we have a combined fractional error:

$$ x_{AB}^2 = x_A^2 + x_B^2 $$

In the example you've chosen the errors are large so it isn't a good approximation to drop the $x_Ax_B$ term.

Note also that these errors are generally taken to be the standard deviations of $A$ and $B$, i.e. $\sigma_A$ and $\sigma_B$. So a standard deviation of e.g. $\pm 1$ means the probability that the value lies within $\pm 1$ is about $68\%$. It doesn't mean the error can't be bigger than that.
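To see how much the quadrature sum differs from the linear sum, a small sketch using the fractional errors from the question ($a/A = 1/4$ and $b/B = 1/2$), assumed independent:

```python
import math

# Fractional errors a/A = 1/4 and b/B = 1/2 from the question.
xA, xB = 0.25, 0.5

linear = xA + xB                 # worst-case (fully correlated) sum
quadrature = math.hypot(xA, xB)  # sqrt(xA**2 + xB**2) for independent errors
print(linear, round(quadrature, 3))   # 0.75 0.559
```

The quadrature result is always smaller than the linear sum (unless one of the errors is zero), reflecting the partial cancellation of independent errors.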

John Rennie

The difference between the two equations is the extra $ab$ in your central value, $(AB + ab)$.

When the errors are small, that term is negligible compared to the size of $AB$, and it is systematically ignored.
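A quick sketch of how fast the dropped term becomes negligible: shrinking both errors by a factor of $10$ shrinks $ab/AB$ by a factor of $100$, while the kept term $(Ab + aB)$ shrinks only by a factor of $10$.

```python
# The neglected ab term shrinks quadratically as the errors shrink,
# while the kept term (Ab + aB) shrinks only linearly.
A, B = 4.0, 2.0
for scale in (1.0, 0.1, 0.01):
    a = b = scale
    print(scale, (a * b) / (A * B))   # relative size of the dropped term
```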

Floris