10

In particular I am curious whether the values of the rest masses of the electron, up/down quark, neutrino and the corresponding particles over the next two generations can be regarded as constant, or if there is an intrinsic uncertainty in the numbers. I suppose that since there is no (as yet) mathematical theory that produces these masses, we must instead rely on experimental results that will always be plagued by margins of error. But I wonder, does it go deeper than this? In nature do such constant values exist, or can they be "smeared" over a distribution like so many other observables prior to measurement? (Are we sampling a distribution, albeit a very narrow one?) Does current theory say anything about this? (i.e., must they be constant with no wiggle room, or is there no comment?)

I am somewhat familiar with on-shell and off-shell particles, but I must confess I'm not sure if this figures into my question. As an example, I'm talking about the rest mass of the electron as could be determined from charge and charge-to-mass measurements. But perhaps this very number is itself influenced by off-shell electron masses? Perhaps this makes no sense. Needless to say, I'd appreciate any clarification.

Qmechanic
  • 220,844

5 Answers

4

I guess I can think of three possible ways in which masses could be non-constant: (1) they could change due to quantum-mechanical fluctuations, (2) they could be slightly different for different particles at the same time, or (3) they could change over cosmological time intervals.

Number 1 seems to be what you had in mind, but I don't think it works. The standard picture is that for a particle of mass $m$, its momentum $p$ and mass-energy $E$ can fluctuate, but the fluctuations are always such that $m=\sqrt{E^2-p^2}$ (with $c=1$) stays the same.
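For concreteness, here is a minimal numerical sketch of that statement (my own illustration, in natural units with $c=1$): whatever momentum the electron carries, the on-shell combination $\sqrt{E^2-p^2}$ always returns the same rest mass.

```python
import math

m = 0.511  # electron rest mass in MeV (natural units, c = 1)

# However the momentum varies, the on-shell energy tracks it via
# E^2 = p^2 + m^2, so the invariant sqrt(E^2 - p^2) never moves.
for p in [0.0, 0.1, 1.0, 50.0]:  # momenta in MeV
    E = math.sqrt(p**2 + m**2)
    print(f"p = {p:6.2f} MeV  ->  sqrt(E^2 - p^2) = {math.sqrt(E**2 - p**2):.6f} MeV")
```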

Re #2, here's some good info: Are all electrons identical?

Re #3, one thing to watch out for is that it is impossible, even in principle, to tell whether a unitful fundamental constant is changing. The notion only makes sense when you talk about unitless constants (see Duff, http://arxiv.org/abs/hep-th/0208093). However, it certainly does make sense to talk about changes in the unitless ratios of fundamental constants, such as the ratio of two masses or the fine structure constant.

There are claims by Webb et al. (J.K. Webb et al., http://arxiv.org/abs/astro-ph/0012539v3) that the fine structure constant has changed over cosmological timescales. Chand et al. (Astron. Astrophys. 417: 853) failed to reproduce the result, and IMO it's bogus. I'm not aware of any similar tests for the ratios of masses of fundamental particles. If you change the ratio of the electron and proton masses, it will change the spectrum of hydrogen, but at least to first order the change would just be a rescaling of energies, which would be indistinguishable from a tiny change in the Doppler shift.
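To see why the degeneracy arises, note that in the simple Bohr treatment every hydrogen line carries the same reduced-mass factor $\mu/m_e = m_p/(m_e+m_p)$. Here is a small sketch of that point (my own illustration, using the non-relativistic formula; relativistic and hyperfine corrections would break the exact degeneracy):

```python
RY = 13.605693           # Rydberg energy in eV (infinite nuclear mass)
RATIO0 = 1 / 1836.15267  # present value of m_e/m_p

def line_energy(n_low, n_high, ratio):
    """Energy of the n_high -> n_low transition for a given m_e/m_p."""
    mu_over_me = 1.0 / (1.0 + ratio)  # reduced-mass factor mu/m_e
    return RY * mu_over_me * (1.0 / n_low**2 - 1.0 / n_high**2)

# Shift m_e/m_p by 0.1% and compare: every line moves by the same
# fraction, indistinguishable from a small Doppler shift.
for n_low, n_high in [(1, 2), (2, 3), (2, 4)]:
    e0 = line_energy(n_low, n_high, RATIO0)
    e1 = line_energy(n_low, n_high, RATIO0 * 1.001)
    print(f"{n_high}->{n_low}: fractional shift = {(e1 - e0) / e0:.3e}")
```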

Brans-Dicke gravity (Physical Review 124 (1961) 925, http://loyno.edu/~brans/ST-history/ ) has a scalar field that can be interpreted as either a local variation in inertia or a local variation in the gravitational constant G. This could in some sense be interpreted as meaning that, e.g., electrons at different locations in spacetime had different masses, but all particles would be affected in the same way, so there would be no effect on ratios of masses -- hence the ambiguity between interpreting it as a variation in inertia or a variation in G. B-D gravity has a unitless constant $\omega$, and the limit $\omega\rightarrow\infty$ corresponds to general relativity. Solar system tests constrain $\omega$ to be at least 40,000, so B-D gravity is basically dead these days.
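For reference, the scalar field enters through the standard Brans-Dicke action (quoted here in its usual textbook form):

$$ S=\frac{1}{16\pi}\int d^{4}x\,\sqrt{-g}\left(\phi R-\frac{\omega}{\phi}\,g^{\mu\nu}\,\partial_{\mu}\phi\,\partial_{\nu}\phi\right)+S_{\text{matter}}[g_{\mu\nu},\psi]. $$

The matter action depends only on the metric and the matter fields, not on $\phi$; that is why all particles are affected identically, mass ratios are untouched, and $\phi$ can equally well be read as a spacetime-dependent $1/G$.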

4

They most certainly are not. You are right that there is no theory that explains the masses (they are input as parameters), but note that our current theories used to explain e.g. LHC data (that is, quantum field theories) inevitably come with a scale attached: you need to specify up to what energies you are doing physics, otherwise the theory just doesn't make sense [insert the usual story about renormalization and infinities, often told to scare little children before they go to bed].

Now, this shouldn't come as such a surprise, since there are new particles awaiting discovery just around the corner, so claiming that we have a complete theory would be preposterous. Instead, what we claim is that we have a good theory that works up to some scale. Consequently, all of the parameters that are inserted by hand must depend on the scale. Again, this is because theories at different scales are potentially completely different (e.g. at the "present scale" no supersymmetry is assumed, while it is conceivable that at a slightly higher scale our theories will have to include it), and so the parameters used to connect the theory with experiment at different scales potentially have no relation to each other. This phenomenon is known as the running of coupling constants or, briefly, the running coupling.

The moral is that all the rest masses and interaction "constants" depend on some scale. They shouldn't be thought of as something inherently deep about nature but just as fitting parameters describing effective masses and effective couplings. To illustrate why they are just effective, consider an electron in classical physics. We can measure its charge by the usual methods. This value is the long-distance, low-energy limit $e(E \to 0)$ of the scale-dependent coupling $e(E)$. As you increase the energy and probe the electron at shorter distances, you will find that electron-positron pairs appear, screening the electron, and the charge you measure will be different under these changed conditions (this is the polarization of the vacuum).
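As a sketch of what this running looks like in the simplest possible setting, here is the standard one-loop QED formula with only the electron loop kept (my own illustration; muons, quarks and hadronic effects, which push the measured value at the $Z$ mass to roughly $1/128$, are deliberately omitted):

```python
import math

ALPHA0 = 1 / 137.035999  # fine-structure constant at zero momentum transfer
M_E = 0.000511           # electron mass in GeV

def alpha(Q):
    """One-loop effective coupling at momentum transfer Q (GeV), Q >> m_e,
    with only the electron loop screening the charge."""
    return ALPHA0 / (1 - (ALPHA0 / (3 * math.pi)) * math.log(Q**2 / M_E**2))

for Q in [0.01, 1.0, 91.19]:  # 10 MeV, 1 GeV, and the Z mass
    print(f"Q = {Q:7.2f} GeV  ->  1/alpha(Q) = {1 / alpha(Q):.2f}")
```

The coupling grows slowly with energy, which is exactly the screening picture above: the closer you probe, the less of the vacuum-polarization cloud shields the bare charge.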

Just for the sake of completeness: one could say that the $E \to 0$ limit is the most important thing about couplings and that we should take it as the definition. If so, then these long-distance couplings are indeed constants, as one is used to from classical physics. But this point of view is worthless in particle physics, where people instead try to make $E$ as high as possible to obtain a theory valid at high scales (since this is what they need at the LHC).

Marek
  • 23,981
0

If you think about it, any time you perform a measurement you must use a device which is limited by quantum-mechanical laws. Any device measuring the mass of a particle must record it via some state transition internal to the device (e.g. the device must change in some way when the particle is placed near it, so even if the particle is at rest when the observation is made, the device must undergo some transition in internal position and momenta to record an observation). So I would say that quantum mechanics does impose a limit on how close we can get to finding the actual rest mass of a particle. That limit is always loosely proportional to Planck's constant, or Planck's constant times 1/2.
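A heuristic back-of-the-envelope version of this claim (my own reading, not a rigorous bound) uses the energy-time relation $\Delta E\,\Delta t \gtrsim \hbar/2$ with $\Delta E = \Delta m\,c^{2}$:

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

# A measurement lasting dt cannot pin the rest energy down better than
# dE ~ hbar / (2 dt), i.e. the mass better than dm ~ hbar / (2 c^2 dt).
for dt in [1e-9, 1.0, 3.15e7]:  # 1 ns, 1 s, roughly 1 year
    dm = HBAR / (2 * C**2 * dt)
    print(f"dt = {dt:8.2e} s  ->  dm >~ {dm:.2e} kg")
```

For a one-second measurement this gives $\Delta m \sim 6\times10^{-52}$ kg, some twenty orders of magnitude below the electron mass of $9.1\times10^{-31}$ kg, so the limit is real but far below current experimental error bars.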

Timtam
  • 870
-1

The rest masses of fundamental particles certainly are NOT constants!

I will copy/paste from pages 6-9 (of 20) of a recent document by Alfredo (independent researcher) enough information to show how atomic properties, including mass, can change at a cosmological level.
Starting only from data and making no hypotheses, he formally presents a dilation (scaling) model of the universe where the atoms are not invariant and physical laws hold, without contradiction with GR, and compares the model with the FRW and $\Lambda$CDM models.

(the whole paper deserves your attention; it is very clear and uses only basic physics, in my opinion accessible to undergraduate students. He makes a full study of how we measure, of units, local and field constants and laws, of how a variation can exist and why we are not aware of it, and a lot more)

quoting Alfredo:
How the universe can be scaling

We have seen that if space expansion traces a scaling phenomenon, we should expect to detect varying field constants; we now have to find out why that is not observed. The first thing to do is to look at the dimension functions of the field constants and some other constants:

$$ \begin{array}{ccl} \left[G\right] & = & M^{-1}L^{3}T^{-2}\\ \left[\varepsilon\right] & = & M^{-1}Q^{2}L^{-3}T^{2}\\ \left[c\right] & = & LT^{-1}\\ \left[h\right] & = & ML^{2}T^{-1}\\ \left[\sigma\right] & = & M^{-3}L^{-8}T^{5}. \end{array} $$

The equations of field constants ($G,\varepsilon$ and c) display a peculiar characteristic: the summation of exponents of the dimension function of each field constant is zero! This is unexpected and does not happen with the other constants. It means that if all the four base units concerned change by the same factor,

$$ M=Q=L=T, $$ then the measuring units of the field constants hold invariant, $[G]=[\varepsilon]=[c]=1$. To see the relevance of this, let us consider that the atomic units of mass, charge, length and time all change at the same rate in relation to the space units. In that case, because of the property shown above, the atomic units of the field constants hold invariant in relation to the space ones and, therefore, the field constants are invariant in both systems (they are invariant in space units by definition). The geometry of space would be scaling in atomic units while the value of the field constants would hold invariant --- which is exactly what cosmic data seems to display.

The fact that the dimensions of field constants display null summation of exponents can just be a coincidence, but it is also the kind of indication we were looking for, a property embedded in physical laws. This is the only way we can consider a previously unknown fundamental property without conflicting with established physics.
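(Editorial aside, not part of the quoted paper: the zero-sum claim is mechanical to verify. The exponents below are copied from the table above.)

```python
# Exponent sums of the dimension functions quoted above: zero for the
# field constants G, epsilon, c; nonzero for h and sigma.
DIMENSIONS = {
    "G":       {"M": -1, "L": 3, "T": -2},
    "epsilon": {"M": -1, "Q": 2, "L": -3, "T": 2},
    "c":       {"L": 1, "T": -1},
    "h":       {"M": 1, "L": 2, "T": -1},
    "sigma":   {"M": -3, "L": -8, "T": 5},
}

for name, exponents in DIMENSIONS.items():
    print(f"[{name}]: exponent sum = {sum(exponents.values()):+d}")
```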

We now have the fundamental understanding that can support a scaling (dilation) model of the universe, and we will proceed to the formal development of that model.
...
Hence, one of the systems of units is defined from matter properties, designated here as the atomic system and identified by A ("A" from "atomic"), and the other is the space system of units, identified by S ("S" from "space"); the latter is such that space properties (geometry and field constants) remain invariant in it, which is required to qualify the S system as internally defined in relation to space. Thus, the conditions that define the S system are the following:
- The units of S are such that the S measures of field constants hold invariant;
- The length unit of S is such that the wavelength of a propagating radiation in vacuum is time invariant.
The base quantities are Mass (M), Charge (Q), Time (T), Length (L) and Temperature ($\theta$), and the ratio between A and S base units is denoted by $M_{AS},Q_{AS},T_{AS},L_{AS},\theta_{AS}$. Note that the ratio between the A and S units of any quantity or constant is therefore expressed by the respective dimension function;
...
Postulates

The model will be deduced not from hypotheses but from relevant observational results, which are stated as postulates:

  1. In atomic units (A), all local and field constants are time-independent.
  2. $L_{AS}$ decreases with time.

The first postulate is not fully supported by experiment, as we cannot state it within the required error margin; however, we also have no sound indication from observations that it might be otherwise. The second postulate represents the observed phenomenon of space expansion in atomic units, stated in this unusual way because it is presented as a function of $L_{AS}$, i.e., of the ratio between atomic and space length units, and not the inverse, as usual.
...
S units, by definition, are such that (eq.1)

\begin{equation} \frac{dG_{S}}{dt_{S}}=\frac{d\varepsilon_{S}}{dt_{S}}=\frac{dc_{S}}{dt_{S}}=0. \end{equation} Since the field constants are time-invariant also in atomic units, as stated by postulate 1, and since the two systems of units are identical at $t=0$, the values of these constants are the same in the two systems at any moment in time: (eq.2) $$ \begin{array}{ccccc} G_{A} & = & G_{S} & = & G\\ \varepsilon_{A} & = & \varepsilon_{S} & = & \varepsilon\\ c_{A} & = & c_{S} & = & c. \end{array} $$
The relation between the S and A values of each constant is the one between the respective A units and S units, which is given by the dimension function,
therefore (eq.3)
$$ \begin{array}{ccccl} \dfrac{G_{S}}{G_{A}} & = & \left[G\right]_{AS} & = & {M_{AS}^{-1}}{L_{AS}^{3}}{T_{AS}^{-2}}=1\\ \dfrac{\varepsilon_{S}}{\varepsilon_{A}} & = & \left[\varepsilon\right]_{AS} & = & {M_{AS}^{-1}}{Q_{AS}^{2}}{L_{AS}^{-3}}{T_{AS}^{2}}=1\\ \dfrac{c_{S}}{c_{A}} & = & \left[c\right]_{AS} & = & L_{AS}{T_{AS}^{-1}}=1. \end{array} $$ This set of equations implies $M_{AS}=Q_{AS}=T_{AS}=L_{AS}$. By postulate 2, $L_{AS}$ is a time function, therefore the solution can be presented as: (eq.4)
$$ \begin{equation} M_{AS}(t)=Q_{AS}(t)=T_{AS}(t)=L_{AS}(t). \end{equation} $$ Note that temperature is independent of this result.

The next step is to define this time function, which is the space scale factor law. As all four base quantities above follow this function, it is convenient to identify it by a specific designation; in this work this scaling law is identified by the symbol $\alpha$: (eq.5)
$$ \begin{equation} \alpha(t)\,=\, L_{AS}(t). \end{equation} $$ ...
The scaling law

To make no hypothesis on the cause of the expansion is to consider that expansion is due to a fundamental property; to consider otherwise would imply a specific hypothesis on a particular phenomenon driving the expansion. Therefore, for this model, the space expansion is due to a fundamental property, tracing a self-similar phenomenon. Likewise, as no hypothesis is made on how fundamental properties may vary with position in space and time, it is assumed that they do not depend on it. This implies that the scaling has a constant time rate in some physically relevant system of units, i.e., that the scaling law is exponential in such a system of units. There are only two possibilities in the framework established for this model: either space expansion is exponential in A units ($L_{SA}(t_{A})=\alpha^{-1}(t_{A})$ is exponential) or matter evanesces exponentially in S units ($L_{AS}(t_{S})=\alpha(t_{S})$ is exponential). The former case does not fit observations; only the latter case is possible.

The general expression for a scaling law exponential in S units is (eq.6) $$ \begin{equation} \alpha(t_{S})=k_{1}e^{k_{2}\cdot t_{S}}\,; \end{equation} $$ at the moment $t_{A}=t_{S}=0$ it is $\alpha(0)=L_{AS}(0)=1,$ so $k_{1}=1$; note now that (eq.7)

$$ \begin{equation} \frac{dt_{S}}{dt_{A}}=T_{AS}=\alpha\, \end{equation} $$ which shows that the variation of the measure of time is inversely proportional to the time unit; and that (eq.8) $$ \begin{equation} r_{A}=r_{S}{L_{AS}^{-1}}=r_{S}\cdot\alpha^{-1}, \end{equation} $$ where $r$ is the distance to some point, or its length coordinate; as the rate of space expansion at $t=0$ is, by definition, the value of the Hubble constant, represented by $H_{0}$, then (eq.9)
$$ \begin{equation} H_{0}=\left(\frac{1}{r_{A}}\frac{dr_{A}}{dt_{A}}\right)_{0}=-k_{2}, \end{equation} $$ therefore (eq.10)
$$ \begin{equation} \alpha(t_{S})=e^{-H_{0}\cdot t_{S}}. \end{equation} $$

The Hubble constant is the present space expansion rate for an atomic observer, and is the (negative) matter evanescence rate for a space observer.
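(Editorial aside, not in the quoted excerpt: combining eq. 7 with eq. 10 gives the explicit map between the two time coordinates. Separating variables in $dt_{S}/dt_{A}=e^{-H_{0}t_{S}}$ and using $t_{A}=t_{S}=0$ at present,

$$ e^{H_{0}t_{S}}\,dt_{S}=dt_{A}\quad\Rightarrow\quad t_{S}=\frac{1}{H_{0}}\ln\!\left(1+H_{0}t_{A}\right), $$

so space time $t_{S}$ grows only logarithmically with atomic time $t_{A}$.)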

Helder Velez
  • 2,685
-2

The rest masses are certain, as they correspond to the ground states of QM systems, where the energy is certain. This is valid for any system, "elementary" or compound. High temperatures may create energy uncertainty, which may influence the "measured" mass.

The mass "measurement" can be indirect via precise transitions expressed with theoretical formulas involving mass.

Many current theories fail to simply use the experimental data on masses and charges, and this is obviously not a feature of nature but a human imperfection. If you read Marek's answer, you will learn that, since Marek does not know all the particles nature can produce, he and his theory cannot be wrong. So masses and charges are running for him; they run his errands. And the main errand is to describe the experimental data with a wrong theory. A difficult task, but fortunately feasible with the help of running constants.

EDIT: For those who do not know that most of "our theories" are non-renormalizable: the usual blah-blah about "scale dependence" is not applicable at all (it does not help).

EDIT 2: I see downvoters do not want to leave the rest masses at rest.