
Let's consider a system with order parameter $\rho$ (e.g. the density in the liquid-gas transition, or the magnetization in the Ising model) and control parameter $\tau$ (usually the reduced temperature, but it could be something else).

Let's say this system goes through a phase transition at $\tau = 0$: $\rho(\tau)$ or one of its derivatives is discontinuous at this point.

It is common knowledge that, near the transition, "a lot of" relations between physical quantities take the form of power laws, with a specific critical exponent for each of them. It is often stated that this is a (the?) reason why the system is scale-free in the critical regime. We can make sense of that last statement by considering in particular the correlation function and the associated power law that appears at the transition (because of the divergence of the correlation length).

My question is: what is the fundamental reason why power laws should occur at phase transition specifically? What assumptions on the system should we make to obtain this result?

Qmechanic
Weier

2 Answers


My question is: what is the fundamental reason why power laws should occur at phase transition specifically? What assumptions on the system should we make to obtain this result?

IMHO this is answered by reversing another sentence from the Q.:

It is often stated that this is a (the?) reason why the system is scale-free in critical regime.

When the system is in a scale-free regime, power laws come naturally, just as exponentials come naturally when we assume that correlations are local.

Indeed, the exponential arises, e.g., if we expect that the state at distance $1$ from the origin is influenced to degree $1/l$ by the state at the origin. Then the state at distance $2$ is influenced to degree $1/l$ by the state at distance $1$, and hence to degree $1/l^2$ by the state at the origin, and so on: at distance $r$ the influence is $1/l^r=e^{-r\ln l}$, i.e., it decays exponentially, with a characteristic scale set by $l$.

This reasoning is inapplicable when the system does not have a specific scale of "influence" $l$. Power laws are the simplest way of modeling such a scale-invariant state - they are not exact, just as correlation functions are only approximately exponential (note that the Boltzmann exponential is also really a factorial in the Stirling approximation, with lots of minor details neglected).
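As a minimal numerical illustration of this contrast (Python; the scale $l$, the exponent $a$, and the distances are arbitrary choices of mine): for a power law the ratio $f(2r)/f(r)$ is the same at every $r$, so no distance is singled out, whereas for the chained local influence the same ratio keeps changing with $r$, because $r$ is implicitly compared to the scale set by $l$.

```python
import numpy as np

# Compare how two kinds of decaying "influence" respond to rescaling the distance.
# The scale l, the exponent a, and the distances r are arbitrary illustrative choices.
r = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
l, a = 3.0, 1.5

exponential = (1.0 / l) ** r   # chained local influence: (1/l)^r = exp(-r ln l)
power_law = r ** (-a)          # scale-free decay

# Ratio f(2r)/f(r): constant for the power law (2**-a at every r, so no r is special),
# but r-dependent for the exponential, where r is compared to the scale 1/ln(l).
print((2.0 * r) ** (-a) / power_law)         # -> the same number at every r (= 2**-1.5)
print((1.0 / l) ** (2.0 * r) / exponential)  # -> keeps shrinking as r grows
```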

Why is there no scale? Because we are talking about a transition between two different phases - i.e., two states characterized by drastically different scales. We could model it, e.g., by a logistic function: $$ l(T) = l_0 + \frac{\Delta l}{1+e^{N(T-T_0)/T_0}} $$ Here for $N|T-T_0|/T_0\gg 1$ we have scale $l_0$ if $T>T_0$, and scale $l_0+\Delta l$ if $T<T_0$. For finite $N$ the change may happen very abruptly, but for $T$ close to $T_0$ the scale is still well defined. However, when we take the thermodynamic limit $N\rightarrow\infty$ (assuming that for our temperature resolution we always have $N|T - T_0|/T_0\gg 1$), the scale is completely undefined at $T=T_0$ and a different (scale-free) description invites itself.
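A quick way to see how the thermodynamic limit sharpens this crossover is to evaluate $l(T)$ for increasing $N$ (a sketch with arbitrary values of $l_0$, $\Delta l$, $T_0$): the crossover region around $T_0$ shrinks roughly as $T_0/N$, so in the limit $N\rightarrow\infty$ the scale jumps between $l_0$ and $l_0+\Delta l$, with no well-defined value exactly at $T_0$.

```python
import numpy as np

# Evaluate the logistic scale l(T) above for increasing N.
# l0, dl, T0 are arbitrary illustrative values; N plays the role of the system size.
l0, dl, T0 = 1.0, 10.0, 2.0

def scale(T, N):
    return l0 + dl / (1.0 + np.exp(N * (T - T0) / T0))

T = T0 * (1.0 + np.array([-0.1, -0.01, -0.001, 0.0, 0.001, 0.01, 0.1]))
for N in (10, 100, 1000):
    # the crossover region around T0 narrows roughly as T0/N
    print(N, np.round(scale(T, N), 3))
```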

Note also that in practice the correlation lengths in different phases have a completely different nature - in the paramagnetic phase of a ferromagnet we can meaningfully talk about the correlation between neighboring spins, whereas in the ferromagnetic phase we are more likely to talk of spin waves, characterized by a wave number and a group velocity, but extending throughout the whole crystal. Likewise, we talk about distance correlations between molecules in a liquid, but about phonons and Bloch waves in a solid - in other words, we are not simply talking about a sudden change in the correlation length, as the above equation may suggest, but about one parameter vanishing and another one emerging.

Disclaimer: Strongly correlated systems are not my field, but I find this question very interesting. So please do not treat the above as an expert/definitive opinion.

Update: Mechanical analogy
I think a very instructive example is a particle in a double-well potential, $$V(x)=\lambda (x^2-d^2)^2$$
[Figure: the double-well potential $V(x)$, with minima at $x=\pm d$ and a barrier of height $\lambda d^4$ at $x=0$.]
To avoid any confusion: I am not talking here about Landau theory of phase transitions, Kramers escape, or instantons - where one often sees such pictures. I discuss a non-linear oscillator, whose motion is described by Newton's equation: $$ m\ddot{x}(t)=-\frac{d}{dx}V(x)=-4\lambda x(x^2 - d^2). $$
In the general case the period of motion can be calculated from energy conservation: $$ T=\sqrt{2m}\int_{x_L}^{x_R}\frac{dx}{\sqrt{E-V(x)}}, $$ where $x_L, x_R$ are the turning points, i.e., the roots of the equation $V(x)=E$.
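Before specializing to the limiting regimes below, here is a minimal numerical sketch of this integral (Python with NumPy/SciPy; the units $m=\lambda=d=1$ are my own choice, so the barrier height is $\lambda d^4=1$):

```python
import numpy as np
from scipy.integrate import quad

# Numerical sketch of T(E) for V(x) = lam*(x**2 - d**2)**2.
# Units m = lam = d = 1 are an arbitrary choice, so the barrier height is lam*d**4 = 1.
m, lam, d = 1.0, 1.0, 1.0
V = lambda x: lam * (x**2 - d**2) ** 2

def period(E):
    """T = sqrt(2m) * integral_{xL}^{xR} dx / sqrt(E - V(x)).
    The substitution x = c + r*sin(theta) removes the integrable 1/sqrt
    singularities at the turning points xL, xR (roots of V(x) = E)."""
    s = np.sqrt(E / lam)
    if E > lam * d**4:                 # above the barrier: motion over both wells
        xL, xR = -np.sqrt(d**2 + s), np.sqrt(d**2 + s)
    else:                              # below the barrier: motion in the right well
        xL, xR = np.sqrt(d**2 - s), np.sqrt(d**2 + s)
    c, r = 0.5 * (xL + xR), 0.5 * (xR - xL)
    # abs() only guards against roundoff right at the turning points
    f = lambda th: r * np.cos(th) / np.sqrt(abs(E - V(c + r * np.sin(th))))
    val, _ = quad(f, -np.pi / 2, np.pi / 2, limit=200)
    return np.sqrt(2.0 * m) * val

for E in (0.01, 0.5, 0.99, 1.01, 2.0, 10.0):   # the period grows sharply near E = 1
    print(E, round(period(E), 4))
```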

We can readily see the existence of two limiting regimes:

  • $E<\lambda d^4$: For energies below the barrier the motion of the particle is confined to one of the wells, approaching at low energies the motion of a harmonic oscillator with frequency $\Omega$: $$ V(x)\longrightarrow 4\lambda d^2(x-d)^2=\frac{m\Omega^2(x-d)^2}{2} \text{ as } x\rightarrow d,\qquad \Omega = \sqrt{\frac{8\lambda d^2}{m}}. $$
  • $E>\lambda d^4$: For energies above the barrier the motion is close to that of a quartic oscillator with potential $V_4(x)=\lambda x^4$: $$ T=\sqrt{\frac{8m}{\sqrt{\lambda E}}}\int_0^1\frac{dy}{\sqrt{1-y^4}}, \text{ where } \int_0^1\frac{dy}{\sqrt{1-y^4}}=\frac{\left[\Gamma\left(\frac{1}{4}\right)\right]^2}{4\sqrt{2\pi}}. $$ The value of the last integral can be found in Gradshteyn & Ryzhik or evaluated by reducing it to a beta function with the substitution $y^4=t$ - the main point is that it is a constant, so the oscillations occur at a finite, even if energy-dependent, frequency.

However, there is no smooth transition between the two regimes described above: when the energy approaches the barrier, the motion of the particle becomes very slow, and its period diverges. Indeed, for $E=\lambda d^4$ there is an unstable equilibrium at the top of the barrier, which means that the particle could stay there infinitely long (provided there is no noise or other external influence).

One can show this by solving Newton's equation with the potential approximated by an inverted parabola - since for $|E-\lambda d^4|\ll\lambda d^4$ the particle spends most of its time near the barrier, we can consider only a small interval around it, $[-x_0,x_0]$, where the cutoff distance $x_0$ is an arbitrary point in the available region - that the result will be insensitive to its choice is the manifestation of scaling (for simplicity I treat only the case where $E$ is slightly above the barrier): $$ V(x)\approx\lambda d^4-2\lambda d^2x^2=V(0)-\frac{m\Omega^2x^2}{2},\\ T=\frac{4}{\Omega}\log\left[\frac{x_0}{a}+\sqrt{\left(\frac{x_0}{a}\right)^2+1}\right], \text{ where } a=\sqrt{\frac{2[E-V(0)]}{m\Omega^2}}=\sqrt{\frac{E-V(0)}{2\lambda d^2}} $$ (note that here $\Omega=\sqrt{4\lambda d^2/m}$ is set by the curvature at the barrier top, not by the well-bottom frequency above). Since close to the barrier $a\rightarrow 0$, we have: $$ T\approx \frac{4}{\Omega}\log\left[\frac{2\sqrt{2}x_0}{d}\sqrt{\frac{V(0)}{E-V(0)}}\right]= \text{const}-\frac{2}{\Omega}\log\left(\frac{E-V(0)}{V(0)}\right) $$ We see here the scaling as $\log\left(\frac{E-V(0)}{V(0)}\right)$. Note in passing that a logarithm can be viewed as a power law with an extremely small exponent: $$ \frac{x^\alpha - 1}{\alpha}\rightarrow\log x \text{ for }\alpha\rightarrow 0. $$ This property is known to physicists as the replica trick, and to statisticians as the Box-Cox transformation.
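A two-line numerical check of this limit (the test point $x$ and the values of $\alpha$ are arbitrary):

```python
import numpy as np

# Check of (x**alpha - 1)/alpha -> log(x) as alpha -> 0 (replica trick / Box-Cox).
x = 7.3                      # arbitrary test point
for alpha in (1e-1, 1e-3, 1e-6):
    print(alpha, (x**alpha - 1.0) / alpha, np.log(x))
```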

Since the logarithm diverges for energies close to the top of the barrier, the value of the constant, which depends on the chosen cutoff, is irrelevant, i.e., we are free of this scale.

In fact, using Gradshteyn & Ryzhik, one can obtain a closed expression for the period in terms of the complete elliptic integral of the first kind: $$ T=\begin{cases} 2\sqrt{\frac{m}{\lambda d^2}\sqrt{\frac{\lambda d^4}{E}}} K\left(\sqrt{\frac{1+\sqrt{\frac{\lambda d^4}{E}}}{2}}\right), \text{ if } E>\lambda d^4,\\ \sqrt{\frac{2m}{\lambda d^2\left(1+\sqrt{\frac{E}{\lambda d^4}}\right)}}K\left(\sqrt{\frac{2\sqrt{\frac{E}{\lambda d^4}}}{1+\sqrt{\frac{E}{\lambda d^4}}}}\right), \text{ if } E<\lambda d^4. \end{cases} $$ This allows us to plot the dependence of the period on energy, using any software familiar with special functions:
[Figure: the period $T$ as a function of energy $E$, with a spike at $E=\lambda d^4$.]
The spike at $E=\lambda d^4$ is the region where the period of the oscillations diverges, as we move from quasi-harmonic to quartic oscillations.
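For completeness, here is a small sketch evaluating this closed form (Python/SciPy, again in units $m=\lambda=d=1$; note that `scipy.special.ellipk` expects the parameter $m=k^2$ rather than the modulus $k$ written above) and comparing it, for energies slightly above the barrier, with the logarithmic estimate derived earlier - their difference tends to a constant, as it should:

```python
import numpy as np
from scipy.special import ellipk   # complete elliptic integral K(m); takes m = k**2

# Closed-form period in units m = lam = d = 1 (barrier energy Eb = lam*d**4 = 1),
# compared near the barrier with the logarithmic estimate derived above.
m, lam, d = 1.0, 1.0, 1.0
Eb = lam * d**4

def period_exact(E):
    if E > Eb:
        k2 = 0.5 * (1.0 + np.sqrt(Eb / E))                 # square of the modulus
        return 2.0 * np.sqrt(m / np.sqrt(lam * E)) * ellipk(k2)
    k2 = 2.0 * np.sqrt(E / Eb) / (1.0 + np.sqrt(E / Eb))
    return np.sqrt(2.0 * m / (lam * d**2 * (1.0 + np.sqrt(E / Eb)))) * ellipk(k2)

Omega_top = np.sqrt(4.0 * lam * d**2 / m)   # curvature frequency at the barrier top
for eps in (1e-2, 1e-4, 1e-6):              # E slightly above the barrier
    E = Eb * (1.0 + eps)
    log_est = -(2.0 / Omega_top) * np.log((E - Eb) / Eb)
    # the difference T - log_est approaches a constant (the cutoff-dependent term)
    print(eps, round(period_exact(E), 4), round(log_est, 4),
          round(period_exact(E) - log_est, 4))
```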

Roger V.

One can get to these power laws through this sequence:

  1. Continuous phase transition $\to$
  2. physical observables, such as magnetic susceptibility, diverge $\to$
  3. system is scale free $\to$
  4. physical relations given by power laws.

The first step, 1$\to$2, is an experimental result. As for the last step, 3$\to$4, as Roger's answer explains, power laws just happen to be the simplest way to describe a scale-free dependence: if $f(x)=C x^a$, then $f(\lambda x) = C (\lambda x)^a = C \lambda^a x^a = \lambda^a f(x)$, that is, the functional form of the power law $f$ is preserved when its variable $x$ is rescaled by a constant $\lambda$.
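A trivial numerical check of this homogeneity property (the values of $C$, $a$, the test points, and the rescaling factors are arbitrary):

```python
import numpy as np

# Check of the homogeneity property f(lambda*x) = lambda**a * f(x) for f(x) = C*x**a.
# C, a, the test points x, and the rescaling factors are arbitrary choices.
C, a = 2.0, -1.7
f = lambda x: C * x**a

x = np.array([0.5, 1.0, 3.0, 10.0])
for lam in (2.0, 5.0):
    print(lam, np.allclose(f(lam * x), lam**a * f(x)))   # True: rescaling x only rescales f
```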

So the interesting question is why 2$\to$3, i.e., why is a divergent (for instance) magnetic susceptibility linked to the system being scale free?

An intuitive answer, sticking to the magnetic example, is that:

  1. For susceptibility to diverge (i.e., for weak fields to change the magnetization of the whole system), the influence of vanishingly small fields must be able to propagate from the smallest scales (where they can act) all the way up to the macro scale, like dominoes toppling each other in a chain reaction.
  2. Weak magnetic fields aren't able to flip large magnetic domains, so small domains must be present and, since a tiny domino can't topple a huge one, at $T_c$ magnetic domains of all sizes, from the smallest to the largest, must be present.
  3. And voilà: all scales mixed, from infinitely small to infinitely large, each keeping a similar relation to the next, such that a chain reaction can be sustained - there you have it: a self-similar, scale-free system.

As described in Wikipedia:

Let us apply a very small magnetic field to the system in the critical point. A very small magnetic field is not able to magnetize a large coherent cluster, but with these fractal clusters the picture changes. It affects easily the smallest size clusters, since they have a nearly paramagnetic behavior. But this change, in its turn, affects the next-scale clusters, and the perturbation climbs the ladder until the whole system changes radically. Thus, critical systems are very sensitive to small changes in the environment.

Notice that the average size of these domains establishes a characteristic length, and item 2 in the sequence above could also be replaced by the divergence of the correlation length (in place of the magnetic susceptibility). (Related answers include 1, 2, 3 and 4.)

The argument above is made more rigorous in the context of the renormalization group, as found in textbooks and also, e.g., in this answer or the references mentioned here. Particularly accessible is Kadanoff's block-spin renormalization theory.
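To make the block-spin idea concrete, here is a heavily simplified sketch (Python/NumPy; the lattice size, the temperature near the 2D Ising $T_c\approx 2.269$, and the very short Metropolis run are my own choices, and the sampler is far from carefully equilibrated): it produces a rough Ising configuration and applies one majority-rule coarse-graining step. The renormalization-group statement is that exactly at criticality the coarse-grained configuration is statistically similar to the original one.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_ising(L=81, T=2.269, sweeps=50):
    """Crude Metropolis sampler for the 2D Ising model (J = 1, k_B = 1),
    only meant to produce a rough spin configuration near T_c."""
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2 * s[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1
    return s

def block_spin(s, b=3):
    """One Kadanoff block-spin step: replace each b x b block by its majority spin
    (b odd, so there are no ties)."""
    L = (s.shape[0] // b) * b
    sums = s[:L, :L].reshape(L // b, b, L // b, b).sum(axis=(1, 3))
    return np.where(sums > 0, 1, -1)

spins = metropolis_ising()
coarse = block_spin(spins)                     # 81x81 -> 27x27
print(spins.shape, coarse.shape)
print(abs(spins.mean()), abs(coarse.mean()))   # near T_c both magnetizations stay small
```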

BTW, the complementary question, why power laws aren't always the appropriate description, is asked here.

As for the question:

What assumptions on the system should we make to obtain this result [power laws]?

Its phase transition must be a continuous one - not a first-order or an infinite-order transition (such as the Berezinskii–Kosterlitz–Thouless transition), nor a quantum transition (driven by quantum instead of thermal fluctuations), nor a dynamic transition, etc.

stafusa