
They say that if $A = X \times Y$, with $X$ statistically independent of $Y$, then

$$\frac{\Delta{A}}{A}=\sqrt{ \left(\frac{\Delta{X}}{X}\right)^2 + \left(\frac{\Delta{Y}}{Y}\right)^2 }$$

I can't understand why that is so geometrically.

If $X$ and $Y$ are interpreted as lengths and $A$ as area, it is pretty easy to understand, geometrically, that

$$\Delta{A} = X\times\Delta{Y} + Y\times\Delta{X} + \Delta{X}\times\Delta{Y}$$

Ignoring the term $\Delta{X}\times\Delta{Y}$ and dividing both sides by $A$ ($= X \times Y$), that expression becomes

$$\frac{\Delta{A}}{A} = \frac{\Delta{X}}{X} + \frac{\Delta{Y}}{Y}$$

which is different from

$$\frac{\Delta{A}}{A}=\sqrt{ \left(\frac{\Delta{X}}{X}\right)^2 + \left(\frac{\Delta{Y}}{Y}\right)^2 }$$

which looks like a distance calculation. I just can't see how a distance is related to $\Delta{A}$.

Interpreting $A$ as the area of a rectangle in an $XY$ plane, I do see that $\sqrt{\Delta{X}^2+\Delta{Y}^2}$ is how far one corner of that rectangle moves when $X$ changes by $\Delta{X}$ and $Y$ by $\Delta{Y}$. But $\Delta{A}$ is how much the area, not that distance, would vary.
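
In fact, a quick Monte Carlo sketch (numbers chosen purely for illustration, assuming independent Gaussian errors on $X$ and $Y$) reproduces the square-root formula rather than the plain sum, which is exactly what I can't see geometrically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true values and absolute uncertainties, chosen only for illustration.
X0, Y0 = 2.0, 5.0
dX, dY = 0.04, 0.15          # so dX/X0 = 2% and dY/Y0 = 3%

N = 1_000_000
X = rng.normal(X0, dX, N)    # independent Gaussian fluctuations of X
Y = rng.normal(Y0, dY, N)    # independent Gaussian fluctuations of Y
A = X * Y

observed   = A.std() / (X0 * Y0)         # relative spread actually seen in A
linear_sum = dX / X0 + dY / Y0           # the plain sum of relative errors
quadrature = np.hypot(dX / X0, dY / Y0)  # square root of the sum of squares

print(f"observed   : {observed:.4%}")    # ~3.6%
print(f"linear sum : {linear_sum:.4%}")  # 5.0%
print(f"quadrature : {quadrature:.4%}")  # ~3.6%
```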

Qmechanic

4 Answers


The square root is there because it is a better estimator of the error than simply adding the errors together. If you add the errors together, you are finding the maximum possible error, which occurs only when both quantities are at their maximum (or minimum) together. This is an unlikely event compared with all the other combinations of errors. The square-root formula you quote has been derived by statisticians assuming, I think, a normal distribution of random errors. You might look up "theory of errors" to get a more detailed answer to your question.
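
For instance, if $\Delta X/X = \Delta Y/Y = 3\%$ (numbers purely for illustration), adding the errors gives $6\%$, which is realised only when both errors happen to push the result the same way, whereas the square-root formula gives

$$\sqrt{(3\%)^2 + (3\%)^2} \approx 4.2\%,$$

which is much closer to the spread actually seen over many repeated measurements.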

Farcher

The formula $$\frac{\Delta{A}}{A} \approx \frac{\Delta{X}}{X} + \frac{\Delta{Y}}{Y} $$

is an approximation because you are ignoring the $\Delta X\,\Delta Y$ term.

More generally, to first order in the errors,

$$\Delta A=\frac{\partial A}{\partial X}\Delta X+\frac{\partial A}{\partial Y}\Delta Y$$

Since errors always add, we take the absolute magnitudes of $\frac{\partial A}{\partial X}$ and $\frac{\partial A}{\partial Y}$, i.e.

$$\Delta A=\bigg |\frac{\partial A}{\partial X}\bigg |\Delta X+\bigg |\frac{\partial A}{\partial Y}\bigg |\Delta Y$$

Since it is always tricky to deal with modulus functions, another workaround is to square the individual errors so that they stay positive:

$$(\Delta A)^2=\bigg (\frac{\partial A}{\partial X}\bigg)^2(\Delta X)^2+\bigg (\frac{\partial A}{\partial Y}\bigg )^2(\Delta Y)^2$$

For $A = XY$, $\frac{\partial A}{\partial X}=Y$ and $\frac{\partial A}{\partial Y}=X$.

Substituting these gives the required form; it is the root-mean-square deviation (the standard deviation).
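
Spelling out that last substitution: with these partial derivatives,

$$(\Delta A)^2 = Y^2(\Delta X)^2 + X^2(\Delta Y)^2,$$

and dividing both sides by $A^2 = X^2 Y^2$ gives

$$\left(\frac{\Delta A}{A}\right)^2 = \left(\frac{\Delta X}{X}\right)^2 + \left(\frac{\Delta Y}{Y}\right)^2,$$

which is the quoted result after taking the square root.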

Courage

The general formula for error propagation is:

$$\Delta f(x_1,x_2,\ldots)=\sqrt{\left(\frac{\partial f}{\partial x_1}\Delta x_1\right)^2 + \left(\frac{\partial f}{\partial x_2}\Delta x_2\right)^2 + \cdots}$$ where $\Delta m$ means "the standard deviation of lots of repeated measurements of $m$".

Where does this come from? By calculus, when all the $x_i$ vary, the resulting variation of $f$ is $$\delta f = \sum_i (\partial f / \partial x_i)\, \delta x_i$$ where $\delta x_i$ is the difference between this particular measurement of $x_i$ and its true value, and $\delta f$ is ditto for $f$. We are assuming that the errors are relatively small (so we ignore $\delta x_i\,\delta x_j$ terms, etc.).

I think you had all this so far. The part that you're missing is:

For independent random processes, the variance of the sum is the sum of the variances.

The analogous statement is not true for standard deviations. It is only true for variance, i.e. standard deviation squared.

Since we want the standard deviation of $\delta f$, we need to add up the variances of $(\partial f / \partial x_i) \delta x_i$ and then take the square root. So we wind up with the formula that I wrote at the beginning.
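
A minimal numerical sketch of that boxed statement (arbitrary distributions and numbers, chosen only to illustrate the point):

```python
import numpy as np

rng = np.random.default_rng(1)

N = 1_000_000
# Two independent random processes with different, arbitrary distributions.
u = rng.normal(0.0, 0.7, N)        # Gaussian noise, sigma = 0.7
v = rng.uniform(-1.0, 1.0, N)      # uniform noise, variance = 2.0**2 / 12

print(np.var(u + v))               # variance of the sum    ~ 0.823
print(np.var(u) + np.var(v))       # sum of the variances   ~ 0.823
print(np.std(u + v))               # std dev of the sum     ~ 0.907
print(np.std(u) + np.std(v))       # sum of the std devs    ~ 1.277 (does NOT match)
```

The first two printed numbers agree; the last two do not, which is why it is the variances, not the standard deviations, that add.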

Steve Byrnes

It looks like Pythagoras, but it is only remotely related. The important concept, as presented in SteveB's answer, is that the variables are considered to be independent, i.e. one does not affect the other. In mathematics, independent parameters are said to be orthogonal, and can thus be assigned to separate axes in Cartesian $N$-space. It just so happens that the root-sum-square error turns out to be the diagonal of the $N$-dimensional box (a rectangle in 2-D), which matches Pythagoras' theorem.

Carl Witthoft