
I'm studying naturalness in the context of the Renormalization Group, and I don't understand why one talks about the "problem" of (lack of) naturalness.

From what I understand, the RG tells us that every theory is defined at a particular energy scale $\Lambda$, and its coupling constants "run" like $g^i_\text{eff}(q)=c^i\left(\frac{\Lambda}{q}\right)^{y^i}$, with the sign of $y^i$ telling us whether the coupling is relevant ($y^i>0$, $g^i_\text{eff}(q)$ grows towards the IR), irrelevant ($y^i<0$, $g^i_\text{eff}(q)$ shrinks towards the IR) or marginal ($y^i=0$, in which case $g^i_\text{eff}(q)\propto\log(\Lambda/q)$). The theory is predictive only for $q<\Lambda$, since then only the relevant couplings are, well, relevant, and there's a finite number of them.
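
As a quick sanity check of this picture, here is a small Python toy that just evaluates the power law $g^i_\text{eff}(q)=c^i(\Lambda/q)^{y^i}$ in the three cases; the values of $\Lambda$, $c$ and the exponents are made up purely for illustration, and the pure power law only gives a constant in the marginal case (the logarithmic running would have to be put in by hand):

```python
# Toy evaluation of g_eff(q) = c * (Lambda/q)**y for the three cases above.
# Lambda, c and the exponents are arbitrary sample numbers, not from any real theory.
Lambda = 1.0e16   # "cutoff" scale, say in GeV
c = 1.0           # coefficient at the cutoff

cases = [(2.0,  "relevant   (y > 0)"),
         (-2.0, "irrelevant (y < 0)"),
         (0.0,  "marginal   (y = 0)")]   # power law gives a constant here;
                                         # the true marginal running is logarithmic

for y, label in cases:
    for q in (1.0e15, 1.0e10, 1.0e2):    # probing scales below Lambda, towards the IR
        g_eff = c * (Lambda / q) ** y
        print(f"{label:20s} q = {q:8.1e}   g_eff = {g_eff:10.3e}")
```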
Now, when $q\sim\Lambda$ the running matters less and the value of $c^i$ matters more, and we expect this value to be close to 1 because that would be "natural". Sometimes it isn't, and this is considered a problem.

First question: why should it be close to 1? The value of $c^i$, just like the value of $\Lambda$, is not given by the theory: if we were super-intelligent beings who couldn't perform any experiments, we could theorize QFT and the RG but we couldn't know the values of the parameters. So why should a value like 0.000017 be less natural than 1.7? And why is "fine-tuning" only invoked when the value is far from 1? Couldn't the value 1.00000000 also be fine-tuned? I've read that a value close to 1 is "more likely", but who says that? Do we have a probability distribution for the values such parameters can take? Otherwise, isn't every positive real number just "a number"?
Thanks to the post linked in the comments and some further research, I think I now understand why, even though the theory doesn't tell us the values of the parameters, we expect them to be close to one: we have measured some of them, they were close to one, and we would like all the parameters to have similar values because we hope to someday find a fundamental theory with few parameters, and parameters with similar values are easier to link. If we accept that parameters should be close to one, it's also clear why only the not-close-to-one parameters are said to be fine-tuned: to reproduce the value that we measure, one has to fine-tune the relevant direction of the UV theory, and that tuning is what would explain why they're "so low".
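
To put a rough number on that tuning, here is a tiny Python sketch (the cutoff, the IR scale and the exponent $y$ are made-up values, used only to illustrate the point): for a relevant coupling run over many decades of scale, matching an order-one IR value forces the UV coefficient $c$ to be extremely small, which is what one seems to mean by calling it fine-tuned.

```python
# Toy numbers only: a relevant coupling with exponent y = 2, run from a
# cutoff Lambda = 1e16 (say, in GeV) down to an IR scale q_IR = 1e2.
Lambda, q_IR, y = 1.0e16, 1.0e2, 2.0
amplification = (Lambda / q_IR) ** y      # (1e14)**2 = 1e28

g_IR_observed = 1.0                       # suppose the measured IR value is O(1)
c_required = g_IR_observed / amplification

print(f"amplification factor      : {amplification:.1e}")
print(f"required UV coefficient c : {c_required:.1e}")   # ~1e-28, nowhere near 1
```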

Second question: why is it a problem that a parameter isn't close to 1? Some problems in physics genuinely demand a solution: for example, we still don't know how to explain why neutrinos have mass, and we can't "stop looking" until we find an answer. From what I understand, naturalness isn't like that: the parameters could simply be the numbers we measure them to be, and that's it. So why pose the naturalness problem at all? Could it help us figure out new theories?
The claim that naturalness guides us towards new theories sounds rather far-fetched to me: there have been many attempts in the past to use naturalness (or, more generally, prior expectations) to explain phenomena, without much success, and once the correct explanation was found, the meaning of "natural" was simply adjusted.
