
The $n$-point correlation functions of QCD, which define the theory, are computed by performing functional derivatives on $Z_{QCD}[J]$, the generating functional of QCD,

$$\frac{\delta^nZ_{QCD}[J]}{\delta J(x_1)...\delta J(x_n)},$$

where $J$ collectively denotes the field sources, which are set to zero after differentiation. In a perturbative approach, we can expand in the weak gluon coupling constant, but doing so requires dealing with loop integrals that generally diverge; these singularities are a consequence of allowing the integrations to run over every possible energy at every point in space-time.
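As a standard illustration (schematic, not specific to QCD), a typical one-loop integral behaves at large loop momentum as

$$\int\frac{\mathrm{d}^4k}{(2\pi)^4}\,\frac{1}{(k^2-m^2)^2}\;\sim\;\int^{\Lambda}\frac{k^3\,\mathrm{d}k}{k^4}\;\sim\;\ln\Lambda,$$

which diverges logarithmically as the upper limit $\Lambda$ (the highest energy allowed in the loop) is sent to infinity.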

The first step is to regularize the theory: a regulator is introduced in the loop integrals so that they become convergent. This then allows one to redefine the gluon coupling constant, the quark masses and the field strengths that appear in the generating functional of QCD in such a way that they cancel these infinities. This redefinition is called renormalization, and by doing so we can compute finite renormalized correlation functions.
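Schematically (a conventional parametrization, quoted for illustration), the bare quantities in the Lagrangian are traded for renormalized ones via renormalization constants,

$$g_0 = Z_g\, g(\mu),\qquad m_0 = Z_m\, m(\mu),\qquad A_0^a = Z_3^{1/2}\, A^a(\mu),\qquad q_0 = Z_2^{1/2}\, q(\mu),$$

where the $Z$'s absorb the regulator dependence and $\mu$ is the renormalization scale.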

From this perturbative approach, we also see that the coupling and masses depend on the energy scale in such a way that the correlation functions are independent of this scale.
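For the coupling, this scale dependence is governed by the beta function; at one loop in QCD (a textbook result, quoted here for illustration),

$$\mu^2\frac{\mathrm{d}\alpha_s}{\mathrm{d}\mu^2} = -\frac{\beta_0}{4\pi}\,\alpha_s^2 + \mathcal{O}(\alpha_s^3),\qquad \beta_0 = 11 - \frac{2}{3}n_f,$$

whose solution

$$\alpha_s(\mu^2) = \frac{\alpha_s(\mu_0^2)}{1 + \frac{\beta_0}{4\pi}\,\alpha_s(\mu_0^2)\ln(\mu^2/\mu_0^2)}$$

decreases at large $\mu$ (asymptotic freedom).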

Now, starting again from the step of computing the correlation functions by functional derivatives on $Z_{QCD}$, but from a non-perturbative approach (forgetting everything about the perturbative one), I understand that there isn't a way to do this computation analytically and one has to use non-perturbative methods such as lattice QCD. But where does the need to regularize and renormalize QCD come in? Everywhere I have looked for answers, they rely on the perturbative approach, using arguments such as problematic divergences or a scale-dependent mass and coupling, and I don't see how these are present in the non-perturbative approach.

The only two leads that in my eyes could lead to problems are the fact that the coupling and mass are bare parameters with no physical value, and that the theory is defined as allowing every possible energy at every point in space-time, but I don't understand where these apparent pathologies come in.

Qmechanic
orochi

1 Answer


The Wilsonian viewpoint of renormalization (see this excellent answer by Abdelmalek Abdesselam) is not conceptually tied to perturbative expansions at all. Rather, it conceives of a quantum field theory as having an inherent scale $\Lambda$ (in the simplest case a hard momentum cutoff for Fourier modes in the path integral), and "renormalizing" is starting from a theory at scale $\Lambda$ and going to a scale $\Lambda'$ by integrating out more modes - this transformation is the renormalization (semi-)group flow.
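In formulas (using a Euclidean scalar field purely for illustration), one splits the field into modes below and above the new scale, $\phi = \phi_< + \phi_>$, and integrates out the high modes:

$$e^{-S_{\Lambda'}[\phi_<]} \;\equiv\; \int_{\Lambda' < |k| \le \Lambda}\!\mathcal{D}\phi_>\; e^{-S_\Lambda[\phi_< + \phi_>]},$$

so that the partition function is unchanged, $Z = \int_{|k|\le\Lambda'}\mathcal{D}\phi_<\, e^{-S_{\Lambda'}[\phi_<]}$, while the effective action $S_{\Lambda'}$ carries couplings that have flowed under the RG.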

In this framework, QFT really deals with the trajectory under the RG flow, where at each scale $\Lambda$ we have one theory, connected by renormalization to all the other theories along the trajectory. It's not that we start with one theory and then renormalize it; it's that "the theory" is really given by its versions at all scales. But because of the renormalization group equations it suffices to give one point along the trajectory to determine the full trajectory, so we tend to think of this as "starting" at one scale and renormalizing to the others.

Now, the "scaleless" theory you mean when you talk about the non-perturbative $Z_\text{QCD}$ is really the $\Lambda\to\infty$ limit of this trajectory. In practice, for most realistic theories it turns out that you cannot compute this without hitting divergences even outside of perturbation theory. The standard approach is to put the theory on a lattice with a momentum cutoff, and you will typically find that the limit where the lattice spacing goes to zero and the cutoff to infinity introduces ugly divergences in the correlators: something like $\langle \phi(x)^2\rangle$ will always diverge unless you renormalize.
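The divergence of $\langle \phi(x)^2\rangle$ can already be seen in a free Euclidean scalar field with momentum cutoff $\Lambda$ (for a lattice of spacing $a$, $\Lambda \sim \pi/a$); this is a standard estimate, not specific to QCD:

$$\langle\phi(x)^2\rangle = \int^{\Lambda}\frac{\mathrm{d}^4k}{(2\pi)^4}\,\frac{1}{k^2+m^2} \;\approx\; \frac{\Lambda^2}{16\pi^2} \;\xrightarrow{\;a\to 0\;}\; \infty,$$

so taking the continuum limit requires subtracting (renormalizing) this quadratic divergence.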

From yet another viewpoint, renormalization is simply "resolving" an ambiguity in the definition of the quantum field theory: While its concrete implementation is related to perturbation theory, the core insight of Epstein-Glaser renormalization is that something like $\phi(x)^4$ is actually ill-defined. While there is no problem with writing down something like that in a classical field theory, in quantum field theory the quantum field has to be an operator-valued distribution (see this answer of mine), not a function, and the pointwise product of distributions does not, in general, exist - at least not uniquely and without further specification.
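A one-line illustration from ordinary distribution theory: the square of the Dirac delta is ill-defined, since regularizing it as $\delta_\epsilon(x) = \frac{1}{\epsilon\sqrt{\pi}}\,e^{-x^2/\epsilon^2}$ gives

$$\int \delta_\epsilon(x)^2\,\mathrm{d}x = \frac{1}{\epsilon\sqrt{2\pi}} \;\xrightarrow{\;\epsilon\to 0\;}\; \infty,$$

with no regulator-independent limit. The quantum field $\phi(x)$ is a distribution of the same kind, so a pointwise power like $\phi(x)^4$ inherits exactly this problem.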

In this viewpoint, all the infinities the other approaches encounter are just the price they have to pay for ignoring this fundamental flaw in the setup of the theory, and getting rid of the infinities is a post-hoc fix for this. Garbage in (an action containing ill-defined quantities), garbage out (divergences).

So here renormalization turns out to simply be what's missing to make the theory well-defined - we have to specify, for each of the pointwise products of fields in the Lagrangian, how that product is supposed to work. The renormalization parameters then arise as freedoms of choice during this specification.

In any case, both the Wilsonian and the Epstein-Glaser viewpoint agree that renormalization is neither inherently perturbative nor inherently related to "infinities" - it is simply a necessary part of what you have to consider when you really think in depth about what "a QFT" really is.

ACuriousMind