Different sources are giving me different formulae for combining relative uncertainties. One tells me to simply add the relative uncertainties together to get the combined uncertainty, while another gives me this formula: $ \sqrt{(\delta x/x)^2+ (\delta y/y)^2} $. Which is the correct method?
2 Answers
It depends on how the quantity you are computing depends on $x$ and $y$.
In general, for a variable $R$ that depends on $x$ and $y$, you have $\delta R= \sqrt{\left(\frac{\partial R}{\partial x}\,\delta x\right)^2 +\left(\frac{\partial R}{\partial y}\,\delta y\right)^2}$.
For example, if $R=x+y$, then $\delta R= \sqrt{(\delta x)^2 +(\delta y)^2}$, while if $R=xy$, the same formula gives $\delta R/R= \sqrt{(\delta x/x)^2 +(\delta y/y)^2}$, which is exactly the second expression you quote.
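As a quick numerical check of this first-order formula, here is a minimal Python sketch (the central values, uncertainties, and example functions are my own, made up for illustration), using finite differences for the partial derivatives:

```python
import numpy as np

def propagate(R, x, y, dx, dy, h=1e-6):
    """First-order uncertainty of R(x, y) via central finite differences."""
    dRdx = (R(x + h, y) - R(x - h, y)) / (2 * h)
    dRdy = (R(x, y + h) - R(x, y - h)) / (2 * h)
    return np.sqrt((dRdx * dx) ** 2 + (dRdy * dy) ** 2)

# R = x + y reproduces sqrt(dx^2 + dy^2):
print(propagate(lambda x, y: x + y, 10.0, 5.0, 0.3, 0.4))   # 0.5
# R = x * y reproduces R * sqrt((dx/x)^2 + (dy/y)^2):
print(propagate(lambda x, y: x * y, 10.0, 5.0, 0.3, 0.4))   # ~4.27 = 50 * 0.0854
```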
Note that the above formula is only a first-order approximation, valid in the regime where the uncertainties are small (you are assuming that higher-order corrections are negligible).
If you want to compute the uncertainty more rigorously, you can instead compute the difference between the maximum and minimum values of $R$ allowed by your uncertainties: $\delta R = (R_{\max} -R_{\min})/2$.
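Here is a minimal sketch of that min/max recipe, under the assumption (mine, not stated above) that $R$ is monotonic in each variable across the uncertainty box, so checking the four corners suffices; all numbers are made up:

```python
import itertools

def minmax_uncertainty(R, x, y, dx, dy):
    """delta_R = (R_max - R_min) / 2 over the corners of the uncertainty box.

    Checking only the corners is valid when R is monotonic in each variable
    across the box; otherwise the interior must be scanned too.
    """
    corners = itertools.product((x - dx, x + dx), (y - dy, y + dy))
    values = [R(xc, yc) for xc, yc in corners]
    return (max(values) - min(values)) / 2

# For R = x * y this gives x*dy + y*dx = 5.5, versus ~4.27 from quadrature.
print(minmax_uncertainty(lambda x, y: x * y, 10.0, 5.0, 0.3, 0.4))  # 5.5
```

Note that for a product this worst-case half-range works out to exactly $x_0\,\delta y + y_0\,\delta x$, i.e. the relative errors simply add; that is where the first rule you quoted comes from, while the quadrature formula is the statistical result for independent errors.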
Here's the basic math that answers this question. It is a bit involved if you do not know calculus, sorry.
Review of probability densities
If you have two random variables $X, Y$, the probability of finding the system in the tiny square where $X$ is in the interval $(x,x+dx)$ and $Y$ is in the interval $(y,y+dy)$ is given by a “joint probability density function” as $j(x,y)~dx~dy$. The sum $Z=X+Y$ is then distributed according to a single-variable probability density $h$, with the probability that $Z$ lies in the tiny interval $z<Z<z+dz$ given by $h(z)~dz$, where $$h(z)=\int_{-\infty}^{\infty}dx~j(x,z-x).$$ If the two errors are independent, the joint density factorizes as $j(x,y)=f(x)~g(y)$ for single-variable probability densities $f,g$, and in that case the sum formula above is a convolution.
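To make the convolution concrete, here is a short numerical sketch (the grid and the choice of uniform densities are mine, purely for illustration):

```python
import numpy as np

# Grid and two example densities (uniform on [-0.5, 0.5]); both are my own
# choices for illustration.
dz = 0.01
x = np.arange(-2.0, 2.0, dz)
f = np.where(np.abs(x) <= 0.5, 1.0, 0.0)   # density of X
g = np.where(np.abs(x) <= 0.5, 1.0, 0.0)   # density of Y, independent of X

h = np.convolve(f, g) * dz                 # h(z) = int dx f(x) g(z - x)
z = 2 * x[0] + dz * np.arange(h.size)      # grid on which the sum Z lives

print((h * dz).sum())                      # ~1.0: h is still a normalized density
mean = (z * h * dz).sum()
var = ((z - mean) ** 2 * h * dz).sum()
print(np.sqrt(var))                        # ~0.408 = sqrt(1/12 + 1/12): variances add
```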
Now one nice distribution for an error is the “Gaussian” bell curve with standard deviation $a$, $$f(x)=\frac1{a\sqrt{2\pi}} e^{-x^2/(2a^2)}.$$ It is nice for many reasons, two among them being the central limit theorem (sums of many independent errors tend to follow this distribution even when those individual errors don't) and the fact that its Fourier transform, $$f[k]=\mathcal F_{x\rightarrow k}f(x)=\int_{-\infty}^{\infty}dx~e^{-2\pi i k x}~f(x)=e^{-2\pi^2 a^2 k^2},$$ is just another Gaussian bell curve.
One other fact: the Fourier transform of a convolution is a product, so if $g$ is a Gaussian with standard deviation $b$, then in Fourier space the density of the sum is $$h[k] = f[k]~g[k]=e^{-2\pi^2 (a^2 +b^2)k^2},$$ which is the transform of a Gaussian with standard deviation $\sqrt{a^2 +b^2}.$
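A quick sanity check of this result by direct sampling (the values of $a$, $b$ and the sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, n = 0.3, 0.4, 1_000_000
X = rng.normal(0.0, a, n)      # Gaussian error with stdev a
Y = rng.normal(0.0, b, n)      # independent Gaussian error with stdev b
print(np.std(X + Y))           # ~0.5
print(np.hypot(a, b))          # sqrt(a^2 + b^2) = 0.5 exactly
```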
So errors add in quadrature as long as we're talking about a sum of independent Gaussians.
Relative error of a sum
If we're now talking about the relative error of a sum, note that your measured quantities look like $x_0+X$ and $y_0+Y$ (a fixed true value plus a zero-mean error), and that the standard deviation is unchanged by constant shifts and scales with the absolute value of a constant factor. So you'd have a relative error of $$\operatorname{stdev}\left(\frac{(x_0+X)+(y_0+Y)}{x_0+y_0}\right)={\sqrt{a^2+b^2}\over|x_0+y_0|}.$$
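The same kind of sampling check works here (central values again made up); the point is that the constant shifts $x_0, y_0$ leave the standard deviation alone, and dividing by $x_0+y_0$ just rescales it:

```python
import numpy as np

rng = np.random.default_rng(1)
x0, y0 = 10.0, 5.0                                 # central values (made up)
a, b, n = 0.3, 0.4, 1_000_000
X = rng.normal(0.0, a, n)
Y = rng.normal(0.0, b, n)

print(np.std(((x0 + X) + (y0 + Y)) / (x0 + y0)))   # ~0.0333
print(np.hypot(a, b) / abs(x0 + y0))               # 0.5 / 15 = 0.0333...
```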
Relative error of a product
For products, assume $x_0\gg a > 0$ and so forth; one is instead looking at $$\operatorname{stdev}\left({(x_0+X)\cdot(y_0+Y)\over x_0~y_0}\right)=\operatorname{stdev}\left(\left(1+\frac X{x_0}\right)\cdot \left(1+\frac Y{y_0}\right)\right),$$ and to first approximation, when you FOIL this out, you can ignore the $X\cdot Y$ term; the relative error that remains is the standard deviation $$\sqrt{\left(\frac a{x_0}\right)^2+\left(\frac b{y_0}\right)^2},$$ which is the second formula you described.
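And a corresponding check for the product (same made-up numbers); the simulation keeps the $X\cdot Y$ cross term, so the tiny residual difference from the formula is exactly the neglected higher-order piece:

```python
import numpy as np

rng = np.random.default_rng(2)
x0, y0 = 10.0, 5.0                                 # central values (made up)
a, b, n = 0.3, 0.4, 1_000_000
X = rng.normal(0.0, a, n)
Y = rng.normal(0.0, b, n)

exact = np.std((x0 + X) * (y0 + Y) / (x0 * y0))    # keeps the X*Y cross term
approx = np.hypot(a / x0, b / y0)                  # first-order formula
print(exact, approx)                               # ~0.0855 vs 0.08544
```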