
If I had a set of measurements, e.g. $[10.0, 11.0, 11.5]$, and they each had a relative uncertainty of $10\%$, meaning my values are now $[10.0±1.0, 11.0±1.1, 11.5±1.15]$, how would I find the average of their uncertainties? I'd think it is one of two situations:

  1. I simply add their uncertainties, so my new average value becomes $10.83±3.25$.
  2. I add their uncertainties and then divide by the total number of measurements, as I would when finding the mean of the measurements themselves, so my new average value is $10.83±1.08$ (see the quick check below).
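
For concreteness, here is the arithmetic behind the two options as a quick Python sketch (just to show what I mean, using only the numbers above):

```python
# Quick check of the two candidate ways of "averaging" the uncertainties.
x  = [10.0, 11.0, 11.5]
dx = [0.10 * xi for xi in x]      # 10% relative uncertainty on each value

mean = sum(x) / len(x)            # 10.83...

opt1 = sum(dx)                    # option 1: just add them         -> 3.25
opt2 = sum(dx) / len(dx)          # option 2: add, then divide by N -> 1.08...

print(f"mean = {mean:.2f}, option 1 = ±{opt1:.2f}, option 2 = ±{opt2:.2f}")
```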

Which one of these situations would it be? Would it be something entirely different?

Cheers

2 Answers


The general consensus is that errors and uncertainties usually add up to a total error, but that approach is not useful here. Say you made $N$ measurements and the next laboratory tries to validate your results but makes $N+1$ measurements instead; because the absolute error is a function of the number of measurements, the two of you will get quite different results with quite different confidence levels. Comparing results would then be very difficult, so this needs a different approach.

I suggest changing the way you average your expected value. Usually the expected value is calculated like this: $$ \overline x = \sum_i^N w\,x_i = \sum_i^N N^{-1}x_i$$

But who said that the averaging weight must always be constant and equal to $\frac 1N$? Nobody. If your measurement error grows proportionally to the number of measurements taken, then it is rational to introduce a variable averaging weight that decreases with each step of averaging. The exact form of this variable weight depends on your measurement's error distribution, so I'll leave that out of scope; you will need to investigate your error distribution function as well.

However, for demonstration let's just assume that your error increases linearly, so the averaging weight will be $w=1/i$, making the expected-value calculation: $$ \overline x = \sum_i^N i^{-1}x_i $$
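
As a rough sketch (Python), here is that kind of decreasing-weight average applied to the question's numbers. Normalising the weights so they sum to one is my own assumption; the answer deliberately leaves the exact weight form open:

```python
# Decreasing-weight ("later measurements count less") average.
# ASSUMPTION: w_i proportional to 1/i, normalised to sum to 1; the exact
# weight form is left open in the text above and depends on the error model.
x = [10.0, 11.0, 11.5]

raw_w = [1.0 / i for i in range(1, len(x) + 1)]   # 1, 1/2, 1/3
total = sum(raw_w)
w = [wi / total for wi in raw_w]                  # normalised weights

xbar = sum(wi * xi for wi, xi in zip(w, x))
print(f"decreasing-weight average = {xbar:.3f}")  # earlier measurements dominate
```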

When you change the expected-value calculation in this way, you can safely claim that your confidence level is $\pm 10\%$. A second bonus is that it will now be easy to validate your results in different laboratories across the world.


Answer: "something entirely different"

Given measurements $x_i$ with uncertainty $\delta x_i$, the weight of each measurement is:

$$ w_i = \frac 1 {\delta x_i^2} $$

so that the weighted expectation value for $f(x)$ is:

$$ \langle f \rangle = \frac{\sum_i f(x_i)w_i}{\sum_i w_i}$$

The mean is found from $f(x) = x$:

$$ \bar x \equiv \langle x \rangle = 10.759 $$

and the variance is:

$$ \sigma^2 \equiv \langle x^2 \rangle - \bar x^2 = 0.4024 $$

The standard error of the mean is:

$$ \sigma_{\bar x} \approx \frac{\sigma}{\sqrt{N_{\rm eff}} } = 0.3688$$

where the effective number of measurements follows from the weights themselves (the usual effective sample size):

$$ N_{\rm eff} \equiv \frac{\left(\sum_i w_i\right)^2}{\sum_i w_i^2} = 2.958$$
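
Putting the steps above together, a short Python sketch that reproduces these numbers from the question's values:

```python
import math

# Inverse-variance weighted mean and its standard error.
x  = [10.0, 11.0, 11.5]
dx = [1.0, 1.1, 1.15]                     # the question's 10% uncertainties

w = [1.0 / d**2 for d in dx]              # w_i = 1 / (delta x_i)^2

def expect(f):
    """Weighted expectation value <f> = sum_i f(x_i) w_i / sum_i w_i."""
    return sum(f(xi) * wi for xi, wi in zip(x, w)) / sum(w)

xbar  = expect(lambda t: t)                       # weighted mean  ~ 10.759
var   = expect(lambda t: t**2) - xbar**2          # variance       ~ 0.4024
n_eff = sum(w)**2 / sum(wi**2 for wi in w)        # effective N    ~ 2.958
sem   = math.sqrt(var / n_eff)                    # standard error ~ 0.3688

print(f"x = {xbar:.3f} ± {sem:.4f}  (sigma^2 = {var:.4f}, N_eff = {n_eff:.3f})")
```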

Note that both your option (1) and option (2) add uncertainties linearly, whereas uncorrelated uncertainties "always" [air quotes = you may find rare exceptions] add in quadrature.
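
For comparison, propagating the uncertainties in quadrature through a plain unweighted mean $\bar x = \frac1N \sum_i x_i$ gives

$$ \delta\bar x = \frac{1}{N}\sqrt{\sum_i \delta x_i^2} = \frac{\sqrt{1.0^2 + 1.1^2 + 1.15^2}}{3} \approx 0.63, $$

which is already noticeably smaller than either of the linear sums in options (1) and (2).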

JEB