
It is often unclear whether a susceptibility is the same thing as a response function, especially since the two terms are frequently used interchangeably in the context of statistical mechanics and thermodynamics. Very generally:


Response function:

Typical examples of response functions would be the thermal expansivity $\alpha,$ the isothermal compressibility $\kappa_T,$ and the specific heats $C_v$ and $C_p.$ At least in these examples, they are all given by first derivatives of either a system observable or a potential:

$$ \alpha = \frac{1}{V} \left(\frac{\partial V}{\partial T}\right)_{P,N}, \, \kappa_T = -\frac{1}{V} \left(\frac{\partial V}{\partial P}\right)_{T,N}, \, C_v = \left(\frac{\partial E}{\partial T}\right)_{V,N} $$
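These first-derivative definitions can be made concrete with a quick numerical check. The following is a minimal sketch, assuming a classical monatomic ideal gas with $k_B = 1$ (so $V = NT/P$ and $E = \tfrac{3}{2}NT$ are the assumed toy equations of state), comparing finite-difference derivatives against the known closed forms:

```python
# Finite-difference check of the three response functions above for an
# assumed toy model: a classical monatomic ideal gas with k_B = 1,
# V(T, P) = N*T/P and E(T) = 1.5*N*T.
N = 1.0

def V(T, P):
    return N * T / P

def E(T):
    return 1.5 * N * T

T0, P0, d = 300.0, 1.0, 1e-6

# alpha = (1/V)(dV/dT)_P  ->  1/T for the ideal gas
alpha = (V(T0 + d, P0) - V(T0 - d, P0)) / (2 * d) / V(T0, P0)

# kappa_T = -(1/V)(dV/dP)_T  ->  1/P
kappa_T = -(V(T0, P0 + d) - V(T0, P0 - d)) / (2 * d) / V(T0, P0)

# C_v = (dE/dT)_V  ->  1.5*N
C_v = (E(T0 + d) - E(T0 - d)) / (2 * d)

print(alpha, kappa_T, C_v)   # close to 1/300, 1.0, 1.5
```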

  1. So can one define response functions as first derivatives (I guess talking of first derivatives already assumes linear responses) of a system's observables (e.g. $V$) and potentials (e.g. $E$) with respect to system parameters (e.g. $T,$ $P$) without loss of generality?

Susceptibilities:

Wikipedia definition:

In physics, the susceptibility of a material or substance describes its response to an applied field. More generally, a susceptibility quantifies the change of an extensive property under variation of an intensive property.

Typical quantities we refer to as susceptibilities are the magnetic and electric susceptibilities, which describe the change of the magnetization and polarization with respect to changes of the magnetic field $h$ and electric field $E$, respectively. For the magnetic susceptibility, e.g., one writes:

$$ \chi = \left(\frac{\partial M}{\partial h}\right)_T $$ But the magnetization itself seems to be a response function, given by: $$ M = \left(\frac{\partial F}{\partial h}\right)_T $$ where $F$ is the Helmholtz free energy. Combining the two expressions, we can write the susceptibility as a second derivative of $F$: $$ \chi = \left(\frac{\partial^2 F}{\partial h^2}\right)_T $$
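One can check numerically that differentiating $F$ twice and differentiating $M = \partial F/\partial h$ once give the same $\chi$. A sketch assuming a toy single-spin free energy $F(h) = -T\log(2\cosh(h/T))$ with $k_B = 1$ and the sign convention of this question (the overall sign of $M$ depends on how the field enters the energy):

```python
import math

# Toy check that chi = d^2F/dh^2 agrees with dM/dh for M = dF/dh, using
# an assumed two-state-spin free energy F(h) = -T log(2 cosh(h/T)).
# With this F and convention, M = dF/dh = -tanh(h/T).
T = 1.5
beta = 1.0 / T

def F(h):
    return -T * math.log(2 * math.cosh(beta * h))

h0, d = 0.7, 1e-4

M = (F(h0 + d) - F(h0 - d)) / (2 * d)                 # M = dF/dh
chi_F = (F(h0 + d) - 2 * F(h0) + F(h0 - d)) / d**2    # d^2F/dh^2

M_plus = (F(h0 + 2 * d) - F(h0)) / (2 * d)            # M at h0 + d
M_minus = (F(h0) - F(h0 - 2 * d)) / (2 * d)           # M at h0 - d
chi_M = (M_plus - M_minus) / (2 * d)                  # dM/dh

print(M, chi_F, chi_M)   # the two chi estimates agree
```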

  1. The above in mind, was it correct to call the magnetization a response function? It would be well in line with the definition of response functions given in the first part.
  2. From the final expression of $\chi,$ can one conclude that susceptibilities are usually given by second order derivatives of thermodynamic potentials with respect to a system parameter or an external field?

Closing remark: All of this rather points to the fact that response functions and susceptibilities cannot actually be used interchangeably. In any case, I hope someone can resolve this confusion by giving more consistent or complete definitions of response functions and susceptibilities.

3 Answers


response function = susceptibility = (pure or mixed) second derivative of a (Helmholtz, Gibbs, etc.) free energy.

Magnetization (a first, not second derivative of a free energy) is not a response function as the free energy is not observable, so one cannot observe its response to a change of some variable.


It's been a while since this question was asked, but I think there's a big picture missing in these answers. The connection to probability theory provides a robust framework for understanding why @Arnold's first statement makes sense. Further, linear response theory (which I discuss in Physics Stack 20797) is a formal way to work back up the derivative chain (i.e. to predict the magnetization $m$ given your response function/susceptibility $\chi$).

Part 1 - Average Magnetization

Consider the definition of the mean, $$\left< m \right> = \sum_\sigma m_\sigma \, p_\sigma .$$ Here $\sigma$ indexes all the possible configurations/states the system can be in, $m_\sigma$ is the magnetization of the $\sigma^{th}$ configuration, and $p_\sigma$ is the probability that the system is in the $\sigma^{th}$ configuration. Thus $\left< m \right>$ gives the average magnetization of the thermodynamic system. All the system information is contained in the $p_\sigma$.

Part 1.a - probability of being in state $\sigma$: So what is $p_\sigma$? For any canonical thermodynamic system the partition function determines the probability. The partition function is defined as $$\mathcal{Z} = \sum_\sigma e^{-\beta E_\sigma}.$$ Here $\beta = \frac{1}{T}$ (setting $k_B = 1$), and the index $\sigma$ runs over all possible states. The probability of the system having energy $E_\sigma$ is given by this probability measure as $$p_\sigma=\frac{e^{-\beta E_\sigma}}{\mathcal{Z}}.$$ Now turn on an external magnetic field $H$ which interacts with each of the $j$ particles in state $\sigma$ via their magnetic dipole moments $m_j(\sigma)$: $$\mathcal{Z} = \sum_\sigma e^{-\beta \left(E_\sigma + H \bar{m} (\sigma) \right)},$$ where $\bar{m}(\sigma) \equiv \sum_j m_j(\sigma)$. Thus the probability of being in state $\sigma$ is $$p_\sigma=\frac{e^{-\beta E_\sigma - \beta H \bar{m}(\sigma) }}{\mathcal{Z}}.$$ Notice that $E_\sigma + H \bar{m}(\sigma)$ resembles the enthalpy of the state $\sigma$.
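A minimal numerical sketch of this construction, assuming a toy system of three Ising-like spins $m_j = \pm 1$ with $E_\sigma = 0$ (every name and number here is illustrative):

```python
import itertools
import math

# Assumed toy version of the construction above: N = 3 Ising-like spins
# m_j = +/-1, fixed energy E_sigma = 0, external field H, k_B = 1.
T, H = 1.0, 0.3
beta = 1.0 / T

states = list(itertools.product([-1, 1], repeat=3))   # all 2^3 configurations

def m_bar(sigma):
    return sum(sigma)                                 # total moment of a state

# Z = sum_sigma exp(-beta*(E_sigma + H*m_bar(sigma))), here with E_sigma = 0
Z = sum(math.exp(-beta * H * m_bar(s)) for s in states)
p = {s: math.exp(-beta * H * m_bar(s)) / Z for s in states}

print(Z)                  # equals (2*cosh(beta*H))**3 for independent spins
print(sum(p.values()))    # probabilities sum to 1
```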

Part 1.b - average magnetization is the first moment of the partition function: we begin by writing down the definition of the average magnetization, $$\left< m \right> = \sum_\sigma \bar{m}(\sigma) \, p_\sigma = \frac{1}{\mathcal{Z}} \sum_\sigma \bar{m}(\sigma) e^{-\beta E_\sigma - \beta H \bar{m}(\sigma) }.$$ Substitute an identity in which a derivative pulls down a factor of $\bar{m}(\sigma)$: $$ \left< m \right> = -\frac{T}{\mathcal{Z}} \sum_\sigma \frac{\partial}{\partial H} e^{-\beta E_\sigma - \beta H \bar{m}(\sigma)}.$$ Commute the derivative and the sum to arrive at $$ \left< m \right> = -\frac{T}{\mathcal{Z}} \frac{\partial}{\partial H} \mathcal{Z}.$$ Therefore we have shown that the average magnetization $\left< m \right>$ is given by the first derivative of the partition function $\mathcal{Z}$ (up to the prefactor $-T/\mathcal{Z}$). This is the sense in which the partition function is the Moment Generating Function (MGF) of the probability distribution $p_\sigma$.
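The derivative identity can be verified directly on an assumed toy system of three spins with $E_\sigma = 0$ (all parameters illustrative), comparing the direct ensemble average against a finite-difference derivative of $\mathcal{Z}$:

```python
import itertools
import math

# Check <m> = -(T/Z) dZ/dH for an assumed toy system of 3 spins with
# E_sigma = 0, field entering the Boltzmann weight as +H*m_bar, k_B = 1.
T, H, dH = 1.0, 0.3, 1e-6
beta = 1.0 / T
states = list(itertools.product([-1, 1], repeat=3))

def Z(field):
    return sum(math.exp(-beta * field * sum(s)) for s in states)

# Direct average: sum_sigma m_bar(sigma) * p_sigma
m_direct = sum(sum(s) * math.exp(-beta * H * sum(s)) for s in states) / Z(H)

# Via the first derivative of Z (central difference)
m_deriv = -(T / Z(H)) * (Z(H + dH) - Z(H - dH)) / (2 * dH)

print(m_direct, m_deriv)   # the two routes agree
```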

Part 2 - Generating Functionals

The log of the MGF is the Cumulant Generating Function (CGF). Notice that the Helmholtz free energy is defined via the partition function as $F=-T\log \mathcal{Z}$; thus the free energy is (up to the factor $-T$) the CGF. The first cumulant, like the first moment, is the mean. We can show this using $\frac{1}{f(x)}\partial_x f(x) = \partial_x \log f(x)$: $$ \left< m \right> = - T \left( \frac{1}{\mathcal{Z}} \frac{\partial}{\partial H} \mathcal{Z}\right) = -T \frac{\partial}{\partial H} \log\mathcal{Z} = \frac{\partial F}{\partial H}$$

If we plug this expression for $\left< m \right>$ into the definition of the susceptibility, we see that the susceptibility is the second derivative of the Helmholtz free energy: $$ \chi =\frac{\partial}{\partial H} \left< m \right> = \frac{\partial}{\partial H} \frac{\partial F}{\partial H} = \frac{\partial^2 F}{\partial H^2} $$ This is not surprising!... "Why, dude!? Cause it seems pretty cool!!"... well, I'll tell you!

The susceptibility measures the correlations (fluctuations) in the system, and the second derivative of the CGF (i.e. of $F$) is, by definition, proportional to the variance of the distribution. Since the response is set by the fluctuations, the two must coincide up to a factor of $\beta$ (with the conventions above, $F=-T\log\mathcal{Z}$ and the field entering as $+H\bar{m}$):
$$\frac{\partial^2 F}{\partial H^2} = -\beta\left(\left< m^2 \right> - \left< m \right>^2\right) = \chi$$
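Note that with this derivation's conventions ($F=-T\log\mathcal{Z}$, field entering as $+H\bar{m}$, $k_B=1$), the second derivative works out to $\partial^2 F/\partial H^2 = -\beta(\langle m^2\rangle - \langle m\rangle^2)$. A numerical check on an assumed three-spin toy system:

```python
import itertools
import math

# Check d^2F/dH^2 = -beta * (<m^2> - <m>^2) for an assumed toy system of
# 3 spins with E_sigma = 0, F = -T log Z, field entering as +H*m_bar.
T, H, dH = 1.0, 0.3, 1e-4
beta = 1.0 / T
states = list(itertools.product([-1, 1], repeat=3))

def Z(field):
    return sum(math.exp(-beta * field * sum(s)) for s in states)

def F(field):
    return -T * math.log(Z(field))

p = [math.exp(-beta * H * sum(s)) / Z(H) for s in states]
mean = sum(sum(s) * pi for s, pi in zip(states, p))
var = sum(sum(s) ** 2 * pi for s, pi in zip(states, p)) - mean ** 2

chi = (F(H + dH) - 2 * F(H) + F(H - dH)) / dH ** 2   # second derivative of F

print(chi, -beta * var)   # the two agree
```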

Part 3 - Conclusions

In total, I have shown that (1) the partition function $\mathcal{Z}$ is the moment generating function, and (2) the free energy $F$ is the cumulant generating function. This explains why the derivatives look the way they do! Furthermore, an open question here is how to interpret @Arnold's second statement, because a physicist can certainly measure the free energy in simulations and experiments. A quick example from my field: we use the Helmholtz free energy (a thermodynamic potential, like the enthalpy) to study a phase transition in the nuclear strong force. While that heuristic may work for him, I'm not sure it is broadly applicable.


I might be able to answer your question in the context of linear response theory:

Response function: the kernel in the expansion of an observable's response in powers of a weak external perturbation. Mathematically speaking, we can relate the average value of an observable $X_i$ to the response function $\chi$ via \begin{align} \langle X_i(t)\rangle=\int_0^t dt'' \sum_j \chi_{ij}(t,\,t'')f_j(t'') \end{align} where $f_j(t)$ is the external perturbation. We can also express it purely in terms of equilibrium correlations of the system's observables: \begin{align} \chi_{ij}(t,\,t')=\beta \left\langle X_i(t)\dot{X}_j(t')\right\rangle \end{align}
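The response integral can be sketched in discretized form for the scalar case, with a hypothetical exponential memory kernel $\chi(t,t') = e^{-(t-t')/\tau}$ and a step perturbation (all names and numbers here are assumptions for illustration):

```python
import numpy as np

# Discretized <X(t)> = integral_0^t dt' chi(t, t') f(t') for one observable,
# with an assumed kernel chi(t, t') = exp(-(t - t')/tau) and a step force.
tau, dt = 1.0, 1e-3
t = np.arange(0.0, 5.0, dt)
f = np.ones_like(t)              # perturbation switched on at t = 0

def chi(t1, t2):
    return np.exp(-(t1 - t2) / tau)

# Riemann-sum approximation of the response integral at each time t_n
X = np.array([np.sum(chi(tn, t[:n]) * f[:n]) * dt for n, tn in enumerate(t)])

# For this kernel and force, the integral is tau * (1 - exp(-t/tau))
print(X[-1], tau * (1 - np.exp(-t[-1] / tau)))   # close agreement
```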

The generalized susceptibility: define this as $\chi(\omega)$. This is the ratio of the response of an average observable to an external force $F(\omega)$: \begin{align} \chi(\omega)=\frac{\Delta \langle X(\omega)\rangle}{F(\omega)} \end{align}

Furthermore, the susceptibility is the Laplace-Fourier transform of the linear response function--that is, \begin{align} \chi(\omega)=\int_0^\infty dt\, \chi(t)\exp(-i\omega t) \end{align} Many texts (on non-equilibrium statistical mechanics, at least) use a very liberal definition of the response function--that is, one that is synonymous with the susceptibility. For a non-equilibrium statistical mechanics perspective, see Pottier's 2012 text.
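The transform relation can likewise be checked numerically. A sketch assuming a hypothetical exponential response $\chi(t)=e^{-t/\tau}$, whose transform under the sign convention above is $\tau/(1+i\omega\tau)$:

```python
import numpy as np

# Numerical check of chi(omega) = integral_0^inf dt chi(t) exp(-i*omega*t)
# for an assumed exponential response chi(t) = exp(-t/tau); the exact
# transform with this convention is tau / (1 + 1j*omega*tau).
tau, omega = 2.0, 1.3
t = np.linspace(0.0, 50.0 * tau, 200001)   # truncate the tail at t = 50*tau
dt = t[1] - t[0]

integrand = np.exp(-t / tau) * np.exp(-1j * omega * t)
# Trapezoid rule: full sum minus half the endpoint contributions
chi_w = dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

exact = tau / (1 + 1j * omega * tau)
print(chi_w, exact)   # agree to integration accuracy
```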