It's precisely the opposite. Naturalness is more meaningful, more reliable, and less subjective the less fundamental the theory is. Naturalness arguments like those used on the Higgs mass are so common in condensed matter physics that people don't even bother to mention when they are using them.
Let me summarize what's been said already.
- The quantitative way to express naturalness is by Bayesian statistics. A theory is natural if it can explain observations with parameters that are likely given our priors on the distribution of parameters. This is in innisfree's answer.
- One could complain that this depends on our choice of prior, making naturalness a subjective idea. In that case, the objection that something is unlikely would be meaningless: as long as it's possible, there is no problem. This is Hossenfelder's main idea and the content of Paul's answer.
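Both points can be made concrete with a toy calculation. The sketch below (a hypothetical illustration, not anyone's actual analysis) computes the probability that a dimensionless parameter lands below a small observed bound, under two different priors. The specific numbers, including the lower cutoff of the log-uniform prior, are illustrative assumptions.

```python
# Toy Bayesian naturalness check: how surprising is a parameter below eps?
import math

eps = 1e-10  # illustrative bound, roughly the scale of the QCD theta angle

# Flat prior on [0, 1]: P(theta < eps) = eps, so a tiny value looks finely tuned.
p_flat = eps

# Log-uniform prior on [1e-40, 1] (the cutoff 1e-40 is an arbitrary assumption):
# small values are then far less surprising.
lo = 1e-40
p_log = (math.log(eps) - math.log(lo)) / (math.log(1.0) - math.log(lo))

print(f"flat prior: {p_flat:.3e}")  # 1.000e-10 -- looks badly tuned
print(f"log prior:  {p_log:.3f}")   # 0.750 -- entirely ordinary
```

The same observation is "unnatural" under one prior and unremarkable under the other, which is exactly the subjectivity the second bullet points at.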
It is true that what we think is natural depends on what we believe about physics in general. But that isn't a knockdown argument, that's just a description of how all science works.
Naturalness $=$ Science
Suppose you come across an old tree in the park in mid-autumn. All the leaves have fallen off, except for one branch. All the leaves on that branch are conspicuously still there. You might come up with two theories to explain these observations.
- "That's just how it is." By pure coincidence every leaf on that branch has stayed on, while the others have fallen off. Somebody else might consider this unlikely, especially if they think every leaf is as good as any other, but it certainly is a possible explanation.
- You notice that branch has been grafted on. Maybe leaves from the grafted species are hardier and generally fall off later.
If you go with theory (1), then you have basically thrown out all of science, because "unlikely but not literally impossible" is an extremely low bar for a theory to pass. If you go with theory (2), you've made progress even if you don't have a complete answer. You've at least identified something different about that branch. (This is analogous to what happens when theorists show "technical naturalness". The problem is not solved, but by establishing a symmetry, we can get a foothold that makes it easier to solve in a future theory.)
Suppose you're at the casino playing roulette. You always bet on red. The first spin is black, so you lose. The second spin is black, so you lose again. You lose $30$ times in a row. At this point you start complaining the game is fixed, but the casino manager informs you there's no solid basis for thinking that. That many losses in a row is unlikely, but not impossible. And even if you did suspect the game was unfair, the prior probability you assign to that possibility is subjective. And isn't the future fundamentally unpredictable anyway, because of the problem of induction? There's no logical reason not to keep playing forever.
Naturalness in the Standard Model
Suppose you're an experimental physicist measuring the parameters of the Standard Model. It turns out there are two angles that determine the amount of CP violation. In radians and in binary, they are
$$\theta_1 = 1.01, \quad \theta_2 = 0.000000000000000000000000000000000000\ldots.$$
These are real measured numbers; $\theta_2$ is the theta angle of QCD. There are over $30$ zeroes when it is expressed in binary, and we are finding new zeroes every few years. Model building is the act of finding hypotheses that explain this.
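The zero-count can be checked directly. Taking a commonly quoted experimental bound of roughly $\theta_2 < 10^{-10}$ (the exact current bound is an assumption here), the number of leading zero bits after the binary point is:

```python
# Leading binary zeros of a small angle, given an upper bound on its size.
import math

bound = 1e-10  # assumed rough experimental bound on theta_2
# Any value below the bound has at least this many zero bits after the point:
leading_zero_bits = math.floor(-math.log2(bound))
print(leading_zero_bits)  # 33
```

Each order-of-magnitude improvement in the experimental bound adds another three or so binary zeroes, which is the sense in which "we are finding new zeroes every few years."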
Even people who claim to be "above" the dirty act of model building are still doing it. To make up an example, a string theorist might say the string landscape generically results in some extremely small angles, so it's not that strange that $\theta_2$ is small. This is still model building, because (1) string theory is just an extremely complicated model, and (2) the model is being evaluated on its likelihood given a prior. (Incidentally, the Higgs mass is even harder to explain, because it's not about small numbers, but about many large numbers all adding up to almost exactly zero. You cannot fix it by just taking a prior that favors small parameter values. This distinction is usually elided in the popular literature.)
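The parenthetical point about "many large numbers adding up to almost exactly zero" can be quantified with an order-of-magnitude sketch. The cutoff scale below is an illustrative assumption (a GUT-like scale); the point is only that quadratically divergent corrections to the squared Higgs mass must be cancelled by the bare term to absurd precision.

```python
# Order-of-magnitude Higgs fine-tuning estimate (illustrative numbers).
cutoff = 1e16    # assumed cutoff scale in GeV, e.g. a GUT-like scale
m_higgs = 125.0  # observed Higgs mass in GeV

correction = cutoff**2  # loop corrections to m_H^2 are of order cutoff^2 (GeV^2)
required_cancellation = m_higgs**2 / correction
print(f"{required_cancellation:.0e}")  # ~2e-28: cancellation to one part in 10^28
```

No prior that merely favors small individual parameters helps here, because none of the individual contributions is small; only their sum is.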
Or you might say, there's no need to postulate a specific mechanism, there's just something different about $\theta_2$ that makes the comparison to $\theta_1$ unreasonable. In that case you are still fundamentally agreeing with model builders, because you are, again, making a statement based on an inherently subjective prior. The only difference between this hypothesis and a model is that a model gives a specific reason $\theta_2$ might be different.
The only principled way to avoid naturalness arguments is to say that there is absolutely no explanation whatsoever, now or ever, for why $\theta_2$ is small; it just is. But this is a difficult position for many to take. If you do take it, let's go to the casino.
Naturalness in Condensed Matter
Naturalness arguments work better the more you know about a subject, because your priors become more accurate. And we know a great deal about condensed matter at the fundamental level, because the fundamental theory is just QED, the most precisely tested physical theory in history. In some situations, we can almost compute priors semi-objectively.
Naturalness is used constantly, if implicitly, in condensed matter physics. For example, quasiparticles can come with a "gap", the energy they have at zero momentum. Phonons are measured to be gapless, to within experimental error. This is explained by saying they are the Goldstone modes associated with translational symmetry breaking. One could also say that the many microscopic parameters describing a solid all conspired to make the gap coincidentally too small to detect, but that hypothesis is so outlandish that textbooks don't even bother to state it. The motivation behind explaining the small "gap" of the Higgs boson is exactly the same.
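A minimal sketch of why no conspiracy is needed: for a 1D harmonic chain, the standard phonon dispersion is $\omega(k) = 2\sqrt{K/m}\,|\sin(ka/2)|$, which vanishes at $k = 0$ for any values of the spring constant $K$ and mass $m$. The parameter values below are arbitrary.

```python
# Phonons on a 1D harmonic chain are gapless for ANY microscopic parameters:
# w(k) = 2*sqrt(K/m)*|sin(k*a/2)| vanishes at k = 0 with no tuning at all.
import math

def phonon_freq(k, K=1.7, m=3.2, a=1.0):  # arbitrary illustrative parameters
    return 2.0 * math.sqrt(K / m) * abs(math.sin(k * a / 2.0))

print(phonon_freq(0.0))  # 0.0 -- the gap is exactly zero, guaranteed by symmetry
```

The symmetry (here, translation invariance) forces the gap to zero structurally, which is precisely what a technically natural explanation of a small number looks like.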
The real argument against naturalness, as used in particle physics, is that we might know too little about fundamental physics for our priors to be accurate. The problem isn't that models aren't solving a real problem, but that they're so unlikely to be on the right track that trying is a waste of time. Confidence in our knowledge is a deeply personal issue with extreme variation between people. At one end, some are convinced that every problem of the Standard Model has already been solved: it's just sterile neutrinos, axions, the MSSM, a SUSY WIMP, plus a GUT. At the other end, some are convinced thinking about $\theta_2$ is pointless because we don't even know if quantum mechanics will hold up in the next experiment we do. (You could go further, to people who think we don't even know if an external world exists, but at that point you would be in the philosophy department.)
Hossenfelder's book is a statement that our priors may not be as accurate as previously believed. In that sense, almost everybody agrees with her. You will hear this constantly at talks, and read it in rather morose reviews on arXiv. One of the original proponents of SUSY GUTs now has a cheeky plaque outside his office declaring that he has "given up the search for truth". But I'm personally an optimist -- I think there is still some value to thinking about fundamental physics during the 21st century. Just as all priors are subjective, so is this attitude.