
I was reading this Phys.SE answer written by user346. At the end of point 3, they say they have only made a change of canonical variables, going from the ADM formalism to the Ashtekar formalism. Point no. 4 is then about applying standard Dirac quantisation to this theory, and we end up with a Hilbert space of spin networks. The discretisation of spacetime is obtained as a consequence, so it is not an assumption. Point no. 5 is about spin foams, which correspond to histories of spin networks; this seems to be just the usual path-integral formalism applied to the Hilbert space of spin networks.
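
If I understand correctly (using standard notation that is my own addition, not the linked answer's, and ignoring sign and normalisation conventions), the new canonical pair is the densitised triad and the Ashtekar–Barbero connection,

$$E^a_i=\sqrt{\det q}\,e^a_i,\qquad A^i_a=\Gamma^i_a+\gamma K^i_a,\qquad \{A^i_a(x),\,E^b_j(y)\}=8\pi G\gamma\,\delta^b_a\,\delta^i_j\,\delta^3(x,y),$$

where $q_{ab}$ is the spatial metric, $\Gamma^i_a$ the spin connection of the triad, $K^i_a$ the extrinsic curvature, and $\gamma$ the Barbero–Immirzi parameter.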

My question is: did this theory quantise general relativity without any additional assumptions? All it did was change variables from the metric to the connection. Why, then, is the naive path-integral quantisation of gravity non-renormalizable, while the theory suddenly works after a mere change of canonical variables?

Ryder Rude

2 Answers


The issue is a bit more subtle:

  • Canonical quantization generally gives inequivalent results for different choices of classical canonical coordinates. For example, a classical-mechanical system will typically yield a different quantum system if the canonical positions and momenta are non-trivial functions of the original positions and momenta (operator-ordering ambiguities are the simplest manifestation of this). So the choice of variables for canonical quantization is far from innocent.

  • Ashtekar variables are, in essence, the variables for which the quantization programme becomes tractable at all (the Hamiltonian constraint becomes polynomial in them). However, they achieve this by essentially allowing for complex metrics. For example, if you solve the classical equations in Ashtekar variables, there is no simple way to see whether the result corresponds to a real physical metric. Note that this is not the same as having a complex wavefunction in particle quantum mechanics; it is more like allowing the particle to sit at complex points in space, which enlarges the allowed dynamics considerably. Ultimately, this causes issues in LQG as well, so I would not call this quantization entirely "unadulterated".

  • The non-renormalizability of quantum gravity refers to the perturbative expansion of the metric, $g_{\mu\nu} = g_{(0)\mu\nu} + h_{\mu\nu}$, in which the fundamental quantized field is $h_{\mu\nu}$, the deviation from the background metric $g_{(0)\mu\nu}$ (which is not quantized). This allows for a perturbative computation of the effective action using a path-integral approach or similar. In this procedure, non-renormalizable terms arise at the two-loop level, and their renormalization would require a counter-term that scales as Weyl curvature cubed (written out below this list). This remarkable, background-covariant result was obtained by van de Ven in 1992.

  • There is no simple counterpart to this in LQG (and related approaches). It is not clear how the semi-classical limit of LQG works (or whether it works at all); currently, we do not even know two-point functions that could somehow be computed on semi-classical backgrounds. The "naive" UV-divergent two-loop expansion around a classical background described by van de Ven should have a counterpart in LQG, and that counterpart should, in turn, provide a clear explanation of where exactly the effective counter-term curing the UV divergence appears in the workings of LQG. Unfortunately, as far as I know, such an explanation is simply not available at the moment.
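
For concreteness, the two-loop divergence referred to above has the schematic form (in dimensional regularization, up to conventions)

$$\Gamma^{(2)}_{\text{div}}\;\sim\;\frac{1}{\epsilon}\,\frac{209}{2880\,(4\pi)^4}\int d^4x\,\sqrt{-g}\;C_{\alpha\beta}{}^{\gamma\delta}\,C_{\gamma\delta}{}^{\rho\sigma}\,C_{\rho\sigma}{}^{\alpha\beta},$$

where $C$ is the Weyl tensor. This is the "Weyl curvature cubed" counter-term mentioned in the third point, and it is this structure that any candidate quantum theory of gravity with the correct semi-classical limit should ultimately be able to account for.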

Other critical discussions of finer points of LQG were given by Nicolai et al. in 2005. Even though there surely have been new developments since, I do not believe that the question of how renormalizability emerges in LQG has ever been unambiguously settled.

Void

Back in the 2000s, there was an attempt by researchers from outside the loop quantum gravity community to understand the method of quantization being employed there. It was concluded that

LQG is not canonical quantization ... the classical first-class constraints are not promoted to hold as expectation value equations in the quantum theory

More precisely, in the works under consideration, several constraints are needed to define the theory. Some of them are imposed in the standard Dirac way: the classical constraints become quantum operators with nontrivial commutation relations, and physical states are required to be annihilated by them.

However, the diffeomorphism constraint is realized in a completely different way. Instead, one seeks to realize the classical symmetry directly on the Hilbert space, and then to construct states which are invariant under this realization; this is sometimes called "group averaging". (Another noteworthy difference is that the Hilbert space in question has an uncountable basis; the constraints are then meant to cut it down to countable size.)
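
Schematically (in my own notation, and glossing over domain and measure-theoretic subtleties), the contrast is between the Dirac-style condition and averaging over finite diffeomorphisms:

$$\hat C\,|\psi\rangle = 0 \qquad\text{versus}\qquad |\psi_{\text{phys}}\rangle \;\overset{\text{formally}}{=}\; \sum_{\varphi\,\in\,\mathrm{Diff}(\Sigma)}\hat U(\varphi)\,|\psi\rangle,$$

where $\hat U(\varphi)$ is the unitary action of a finite spatial diffeomorphism on spin-network states. The sum over the orbit has no finite norm in the original (uncountable-basis) Hilbert space and only defines a diffeomorphism-invariant distribution on a suitable dual space.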

One consequence of the "group averaging" method is that when a classical symmetry is imposed in this way, an anomalous quantum violation of that symmetry cannot arise. Once this was understood, it led to deep skepticism about LQG among ordinary quantum field theorists, e.g. see this discussion, since anomalies are fundamental to quantum field theory and even have observable consequences (e.g. the decay of the neutral pion into photons, which is mentioned in that discussion).

I'm not sure if anyone was ever able to calculate anything in LQG quantized in this unconventional fashion, but much simpler systems, such as the harmonic oscillator, were studied according to the altered rules, and the deviation from conventional quantum mechanics was worked out.
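
For illustration only (this is my own gloss and notation, not necessarily the construction used in the works alluded to above): in the LQG-inspired "polymer" quantization of the harmonic oscillator, the momentum operator does not exist on the kinematical Hilbert space, only finite translations $e^{i\mu\hat p/\hbar}$ do, so the kinetic term is regularized at a fixed scale $\mu$, for instance as

$$\hat H_\mu \;=\; \frac{\hbar^2}{2m\mu^2}\,\sin^2\!\Big(\frac{\mu\,\hat p}{\hbar}\Big) \;+\; \frac{1}{2}\,m\omega^2\,\hat x^2 \;\xrightarrow{\;\mu\to 0\;}\; \frac{\hat p^2}{2m}+\frac{1}{2}\,m\omega^2\,\hat x^2,$$

and the resulting spectrum deviates from the ordinary oscillator spectrum by corrections controlled by the ratio of $\mu$ to the oscillator's characteristic length.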

The important point here is that "canonical quantization" in LQG contained a highly nonstandard step, which deviates from ordinary methods of quantization and has no counterpart in prior approaches to quantum gravity. I think there is no demonstration that the resulting quantum theory ever approximates classical space-time in any way.

On the other hand, there apparently are calculations using spin foams that do resemble physics in classical space-time, but they are somewhat ad hoc and have not been derived from an underlying Hamiltonian or Lagrangian. That's what "Prof. Legolasov" says here.