14

Suppose I am deriving a length contraction formula using natural units. If I arrive at $L = L_0 \sqrt{1 - v^2}$, I know that I should divide $v^2$ by $c^2$ to get the correct answer in SI units. But what if I had mistakenly forgotten to square the velocity and arrived at $L = L_0 \sqrt{1 - v}$? I would then be inclined to divide $v$ by $c$ and conclude that the answer is $L = L_0 \sqrt{1 - \frac{v}{c}}$.

If I had used SI units during the derivation and made the same mistake of forgetting to square the velocity, I would have arrived at $L = L_0 \sqrt{1 - \frac{v}{c^2}}$. I could then have kept track of the dimensions and told myself I had made a mistake. But that is not the case when using natural units. Is this a disadvantage of natural units? Or is there a way to get around this problem?
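To make the dimensional check explicit:
$$\left[\frac{v}{c^2}\right] = \frac{\mathrm{m/s}}{\mathrm{m^2/s^2}} = \frac{\mathrm{s}}{\mathrm{m}} \neq 1, \qquad \left[\frac{v^2}{c^2}\right] = 1,$$
so $1 - \frac{v}{c^2}$ is dimensionally inconsistent and would have stood out immediately, whereas nothing stands out in $1 - v$ once $c$ has been set to $1$.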

Mozibur Ullah
  • 14,713
Atom
  • 145

3 Answers

24

You are quite correct that the use of natural units removes a useful method for detecting errors.

This is an example of a more general concept in information theory. If you use the minimum number of symbols to convey a given piece of information (here, an equation in physics or something of that kind), then you have a slimmed-down and efficient notation. However, by building in some extra symbols in a suitably controlled or designed way, you also build in some error-detection capability.

Suppose that you have $k$ symbols and the probability of making a mistake in copying each from one line to another is $p$. Then the overall probability of making a mistake, for each such copy operation, is approximately $kp$ for small $p$.

Now suppose you add some further symbols such as $c$ or $\hbar$, so that you have $n$ in total, with $n > k$. Now the probability of making a copying error is $np$, so it has gone up. It looks at first as if this makes matters worse. But now you have the error detection capability. An expression such as $1 + v/c^2$ is clearly wrong, and so is $2 + \hbar$ and things like that. This means that many of the mistakes will be detectable, so the overall probability of an error both occurring and also being undetected (by a dimensional check) can easily now be less than $kp$, and usually is.
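To put rough numbers on this (the figures below are illustrative, not taken from any particular calculation): the probability of at least one copying error among $k$ symbols is $1-(1-p)^k \approx kp$ for small $p$. If a fraction $d$ of the errors in the longer expression are caught by the dimensional check, the probability of an undetected error is roughly
$$(1-d)\,np, \qquad \text{which is less than } kp \text{ whenever } d > 1 - \frac{k}{n}.$$
For example, with $k = 10$, $n = 12$, $p = 10^{-3}$ and $d = 0.8$, the bare notation gives $kp = 10^{-2}$, while the cluttered-but-checkable notation gives $(1-d)np = 2.4\times 10^{-3}$.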

In my experience, when doing calculations which you are already familiar with (e.g. collision problems in relativity if you have already done many of those), setting $c=1$ is useful to reduce clutter. But when entering into new territory in a calculation (e.g. doing general relativity when you are learning the subject), it is useful to retain $c$ in order to preserve a check and to keep track of what you are doing. Similar statements apply to $\hbar$ in quantum mechanics.

In summary, errors can take many forms, not all of which will lead to a dimensional error, so not all are detectable. But the fact that many are detectable by this method is very useful. When doing familiar calculations by familiar methods, natural units are nice to keep things clean and uncluttered. When doing calculations in unfamiliar territory, on the other hand, the dimensional check capability often outweighs the cost of having more symbols.

Added note to resolve an issue raised in comments

It may be objected that the use of natural units does not entirely preclude a dimensional check. That is true, but it greatly reduces the number of errors that can be detected. For example, if two speed calculations gave the answers $v = x/t$ and $v=t/x$ then which is correct? If units with $c=1$ have been adopted then we can't tell. But if the calculations with $c$ included give the answers $v = x/(c^2 t)$ and $v=c^2 t/x$ then we can at least tell that the first one is not correct. (This example comes up in the case of a body undergoing hyperbolic motion).
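Spelling out the check in that example with SI dimensions:
$$\left[\frac{x}{c^2 t}\right] = \frac{\mathrm{m}}{(\mathrm{m/s})^2\,\mathrm{s}} = \frac{\mathrm{s}}{\mathrm{m}}, \qquad \left[\frac{c^2 t}{x}\right] = \frac{(\mathrm{m/s})^2\,\mathrm{s}}{\mathrm{m}} = \frac{\mathrm{m}}{\mathrm{s}},$$
so only the second candidate has the dimensions of a speed. With $c=1$, length and time share a unit, both ratios are dimensionless, and the check gives no information.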

Andrew Steane
  • 65,285
3

Natural units are just a choice of convention/notation which reduces the number of symbols you need to write. It's not an iron-clad safeguard against genuine errors, though it does mean you have fewer symbols to keep track of during a derivation. In that sense, it may be helpful in avoiding typos, but it's more about convenience than anything else.

You certainly don't need to use them (unless you're in a class being taught by an instructor who's made them mandatory), but after writing (or typing) $e^{ipx/\hbar}$ and $e^{-iEt/\hbar}$ about a thousand times, the prospect of setting $\hbar=1$ becomes pretty alluring. Especially since, with a bit of practice, those extra symbols are pretty trivial to put back in at any stage you wish.
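As a reminder of how mechanical the reinsertion is (a standard dimensional-analysis exercise, not specific to this answer): a phase must be dimensionless and every term in an equation must share the same dimensions, so, for example,
$$e^{ipx} \;\to\; e^{ipx/\hbar}, \qquad E^2 = p^2 + m^2 \;\to\; E^2 = p^2 c^2 + m^2 c^4,$$
since $[px] = \mathrm{J\,s} = [\hbar]$, and $pc$ and $mc^2$ both carry the dimensions of energy.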

Albatross
  • 72,909
-3

Natural units, philosophically speaking, are the units natural to the problem. If we are measuring the masses of atoms, we are better off measuring in atomic mass units rather than kilograms. Likewise, if we are measuring the masses of stars, we are better off measuring in solar masses. Because we are using units natural to the problem, the numbers come out more nicely and errors are easier to detect.

Also, philosophically speaking again, natural units should more often be dimensionful rather than dimensionless, because, being mortals, we live dimensionfully.

In what are called natural units we are in fact working from a singular perspective which takes as its point of departure the pursuit of a unified theory. For most physicists, this is not in fact the natural viewpoint: if I am calculating a ballistics problem, do I really want to set $c$ to 1?

On the other hand, if you are working from that perspective, then it is a trade-off between reducing symbol clutter, which is one source of error, and dimensionful thinking, which increases symbol clutter but allows errors of dimension to be caught. The right way to steer between these will depend upon what you find most comfortable.

Mozibur Ullah
  • 14,713