
It is often said that one of the drawbacks of the standard model is that it has many free parameters. My question is two-fold:

  1. What exactly is a free parameter? My understanding is that the free parameters of a model/theory are the ones that cannot be predicted by the theory and need to be measured and put into the theory 'by hand', so to speak. Are all constants of nature free parameters then? Also, can you give me an example of a non-free parameter in a theory?

  2. Why is it bad for a theory to have free parameters? Couldn't it be that some quantities in nature, such as the mass of the electron, just 'happen' to have a certain value that cannot be predicted by a theory?

Floyd

7 Answers


A parameter is a number that does not "come out" of the theory, but needs to be input into the theory by hand. The electron mass is a good example -- no one can (currently) calculate the electron mass; it is simply something we measure.

There are no absolute rules about things like "how many parameters is too many parameters," though of course many people have different opinions. From the point of view of fundamental physics, progress is typically made by synthesis or unification: the development of a deeper theory that simultaneously explains two phenomena you originally thought were different. Often, such a synthesis of two apparently different physical theories leads to the realization that what was originally a free parameter in one of the theories can actually be calculated in the unified theory.

There are a few very famous examples of this in undergraduate physics (admittedly, the last example below is not really at the undergraduate level). A rough numerical check of these relations follows the list.

  1. The acceleration due to gravity on the surface of the Earth, $g$, can be calculated from the mass $M_\oplus$ and radius $R_\oplus$ of the Earth and Newton's gravitational constant, $g = G\frac{M_\oplus}{R_\oplus^2}$. In Galileo's time, gravity on the Earth was not known to be connected to astronomical quantities like the mass and radius of the Earth, but Newton showed that gravity is one force underlying apples falling from trees and moons orbiting planets.

  2. The speed of light $c$ can be calculated from the electric and magnetic field coupling constants, which in SI units are $\epsilon_0$ and $\mu_0$. In SI units, we have $c^2 = (\mu_0 \epsilon_0)^{-1}$. This follows from Maxwell's realization that light, electricity, and magnetism -- three apparently different subjects -- are all actually one unified theory of electromagnetism.

  3. The gas constant $R$ is related to Avogadro's number $N_A$ and the Boltzmann constant $k$, $R = k N_A$. Realizing the atomic structure of matter explained a lot of phenomena in thermodynamics by reducing them to the statistical motion of individual atoms.

  4. The peak wavelength of light emitted from a black body at temperature $T$ was known in the 1800s to be given by Wien's displacement law, $\lambda_{\rm peak} = b/T$. The constant $b$ is known as Wien's displacement constant. With the development of quantum mechanics and the Planck distribution, it was shown that $b=h c / x k$, where $h$ is Planck's constant, $c$ is the speed of light, $k$ is Boltzmann's constant, and $x$ is a calculable order 1 factor that drops out of some annoying algebra ($x=4.965...$ solves $(x-5)e^x+5=0$).

  5. While this is not as "basic" as the others, the unification of the electric and weak forces showed that the Fermi constant $G_F$, which controls the lifetime of the muon, is closely related to the Higgs vacuum expectation value $v$ via $v = (\sqrt{2} G_F)^{-1/2}$ (and therefore also to the mass and coupling of the W boson).
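For concreteness, here is a rough numerical check of the five relations above (a sketch in Python; the constants are standard reference values typed in by hand, so treat the output as an approximate illustration rather than a precision calculation):

```python
# Rough numerical check (SI units) of the five unification relations above.
import math

# 1. g = G * M_earth / R_earth^2
G = 6.674e-11          # m^3 kg^-1 s^-2
M_earth = 5.972e24     # kg
R_earth = 6.371e6      # m
print("g ~", G * M_earth / R_earth**2, "m/s^2")           # ~9.82

# 2. c = 1 / sqrt(mu_0 * epsilon_0)
mu_0 = 4e-7 * math.pi            # N A^-2
epsilon_0 = 8.8541878128e-12     # F/m
print("c ~", 1 / math.sqrt(mu_0 * epsilon_0), "m/s")      # ~2.998e8

# 3. R = k * N_A
k = 1.380649e-23       # J/K
N_A = 6.02214076e23    # 1/mol
print("R ~", k * N_A, "J/(mol K)")                        # ~8.314

# 4. Wien's constant b = h c / (x k), with x solving (x-5)e^x + 5 = 0,
#    found here by iterating the equivalent fixed point x = 5(1 - e^{-x}).
h = 6.62607015e-34     # J s
c = 2.99792458e8       # m/s
x = 5.0
for _ in range(50):
    x = 5 * (1 - math.exp(-x))
print("x ~", x)                                           # ~4.965
print("b ~", h * c / (x * k), "m K")                      # ~2.90e-3

# 5. Higgs vev v = (sqrt(2) G_F)^(-1/2), in natural units (GeV)
G_F = 1.1663787e-5     # GeV^-2
print("v ~", (math.sqrt(2) * G_F) ** -0.5, "GeV")         # ~246
```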

Note that these examples of unification leading to a deeper understanding of an input parameter are correlated with major developments in physics -- gravity, electromagnetism, statistical mechanics, quantum mechanics, and the electroweak theory.

There is a belief in theoretical physics that the Standard Model is not a fundamental theory (indeed we know it can't be, because it doesn't contain gravity or dark matter). By extrapolating the logic of unification that has always been associated with progress in physics, one hopes that at least some of the numbers that are currently input parameters to the Standard Model will actually be shown to be calculable outputs of a deeper theory.

Andrew
  1. Yes, a free parameter is a parameter that is not fixed by the theory and has to be measured in an experiment. Examples are the electron mass and its charge. No, not all constants of nature are free parameters; in those cases we can find expressions for the quantity in terms of the truly free parameters. An example would be the $g$ factor: it is predicted by QFT very accurately solely in terms of the SM free parameters.

  2. You have to understand that at the very core of every theory we are trying to connect some measured quantities to some prediction. For example, we want to compute the cross section for $e^- e^+ \longrightarrow \mu^- \mu^+$, and we do so using $e, m_e$ and $m_\mu$ (see the sketch below). A less predictive theory than QED (like classical QM) would not be able to make this prediction and would need to make the cross section a new free parameter. So the more predictive, and hence useful, a theory is, the more observables it connects to its free parameters. It would therefore be great to have no free parameters and an unlimited number of observables. This would be the ultimate theory, but whether it exists is a completely different matter (I don't think so).
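As a minimal sketch of that example, here is the well-known leading-order QED cross section for $e^- e^+ \longrightarrow \mu^- \mu^+$ coded up; the function name and the unit-conversion constant are my own choices, not something from the answer. The point is that the cross section is computed entirely from measured inputs ($\alpha$, $m_\mu$) rather than being a free parameter itself.

```python
# Leading-order QED cross section for e+ e- -> mu+ mu-:
#   sigma = (4 pi alpha^2 / 3s) * sqrt(1 - 4 m_mu^2/s) * (1 + 2 m_mu^2/s)
# The cross section is an *output* built from measured inputs, not a new parameter.
import math

alpha = 1 / 137.035999    # fine-structure constant (measured input)
m_mu = 0.1056584          # muon mass in GeV (measured input)
GEV2_TO_NB = 389379.0     # 1 GeV^-2 expressed in nanobarns

def sigma_ee_to_mumu(sqrt_s_gev):
    """Tree-level cross section in nanobarns at centre-of-mass energy sqrt(s)."""
    s = sqrt_s_gev ** 2
    beta = math.sqrt(1 - 4 * m_mu**2 / s)   # muon velocity factor
    return (4 * math.pi * alpha**2 / (3 * s)) * beta * (1 + 2 * m_mu**2 / s) * GEV2_TO_NB

print(sigma_ee_to_mumu(10.0), "nb")   # ~0.87 nb at sqrt(s) = 10 GeV
```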

tomtom1-4

Theory: It's true that free parameters are bad for a theory. For a certain value of "true".

A useful theory doesn't simply encode all existing observations. It predicts new ones. With enough free variables the theory isn't predicting. It becomes a record of what we've seen, presented in a different way.

It's hard to find a deeper meaning if the theory is just an infinitely distortable funhouse mirror held up to what we already know.


An ideal theory can perfectly explain all of the observed data, and cannot explain anything that was not observed.

For example the theory

[Theory] : "stuff just happens, except when it doesn't"

can be used to explain any experimental observation (Q: "Why did the electron move when I dialled the voltage?" A: "stuff just happens, except when it doesn't", Q:"Why..." A: "Same".) But this is a bad theory, because it could equally well explain the things that did not happen. (Q: "Why did a dragon materialise when I dialled the voltage?" A: "same", Q: "... But I was joking about the dragon!")

Free parameters are a problem because they always give you more power to twist the theory to match things, whether those things have been observed or not. A theory with too many such parameters starts to get closer and closer to the "stuff just happens" theory. Of course, a theory with too few parameters might fail at explaining some real physics. When adding a new parameter, one should ideally weigh up how much the theory gains in its ability to explain the real (which we want to maximise) against how much more capable it becomes of explaining the fictitious (which we want to minimise).

Dast

I think that "free parameter" and "adjustable parameter" and "parameter" are essentially synonyms. I would call the $g$ factor (mentioned in the earlier answer) a prediction of the Standard Model, not a parameter. It could be a free/adjustable parameter of a different model.

The values of the parameters may be random and meaningless, but we don't know that they are, and as long as we don't know, there is a motivation to keep looking for explanations. If a physicist says that free parameters are bad, they mean that it's bad, in some sense, that we don't know everything yet. But in another sense it's good, because it gives them something to do.

benrg

A theory/model is essentially a mathematical framework that predicts the data observed in an experiment as a function of a set of parameters. Some parameters are specific to the experiment, others are considered known, and yet others may have to be fit to the results of the experiment (or adjusted to better match the data). It is the latter that we call free parameters.

Having many parameters is not necessarily bad: the question is that of overfitting (having too many parameters and fitting "noise") vs. underfitting (having too few of them, and missing important physics). There are standard statistical procedures for validating and comparing models, as discussed in this answer.
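As an illustration of that trade-off (the data, the polynomial model family, and the use of the AIC below are invented purely for this example, not taken from the answer), one can fit models with increasing numbers of free parameters to noisy data and compare them with a standard criterion:

```python
# Fit polynomials of increasing degree to noisy data drawn from a quadratic
# and compare them with the Akaike Information Criterion (AIC = 2k - 2 ln L).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y_true = 1.0 + 2.0 * x - 3.0 * x**2               # "true" law with 3 parameters
y = y_true + rng.normal(scale=0.1, size=x.size)   # noisy observations

for degree in range(1, 8):
    coeffs = np.polyfit(x, y, degree)              # least-squares fit
    residuals = y - np.polyval(coeffs, x)
    n, k = x.size, degree + 1                      # data points, free parameters
    # Gaussian log-likelihood up to an additive constant
    aic = 2 * k + n * np.log(np.mean(residuals**2))
    print(f"degree {degree}: {k} parameters, AIC ~ {aic:.1f}")

# Typically the AIC is lowest near degree 2: fewer parameters underfit,
# while extra parameters mostly end up fitting the noise.
```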

Urb
Roger V.

In short: suppose that for a given phenomenon there are $n$ quantities that describe it completely. Your model takes $a$ free parameters as inputs (these need to be measured and plugged into the theory) and outputs $b$ quantities as predictions, such that $$a + b = n.$$

As a result, the higher $a$ is, the smaller $b$ becomes, and hence the less predictive the theory is.

Our