
On page 2 of this preprint, the authors carry out a derivation "by saddle point".

$$Z \propto \int \prod_{\sigma=1}^{q} dx_\sigma \, \exp{\left(-N \left[\sum_{\sigma} \frac{\beta J x^2_{\sigma}}{2} - \log\left(\sum_{\sigma} e^{\beta J x_\sigma}\right) \right] \right) } \tag{3}$$

From Eq. (3), for $N \rightarrow \infty$ by saddle point, we get immediately the following system of equations:

$$x_{\sigma} = \frac{e^{\beta J x_{\sigma}}}{\sum_{\sigma'} e^{\beta J x_{\sigma'}}} \text{, } \sigma = 1, \ldots, q \tag{4}$$

$Z$ is the partition function for a $q$-state mean-field Potts model. There are $q$ independent Gaussian variables $x_\sigma$. How does (4) follow from (3)?


1 Answer


The essence of the saddle-point method is that, for $N \rightarrow \infty$, the dominant contribution to an integral of the form $\int \prod_\sigma dx_\sigma \, e^{-N f(\{x_\sigma\})}$ comes only from the neighborhood of the minimum of the function in the exponent. In this case that function is $f(\{x_\sigma\}) = \sum_\sigma \beta J x_\sigma^2/2 - \log\left(\sum_\sigma e^{\beta J x_\sigma}\right)$. All you need to do is take the derivative with respect to each $x_\sigma$ and set it equal to zero. Since there are $q$ variables labeled by $\sigma$, this gives $q$ equations, and they are exactly Eq. (4).
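Written out explicitly, the stationarity condition for the exponent in Eq. (3) reads

$$\frac{\partial}{\partial x_\sigma}\left[\sum_{\sigma'} \frac{\beta J x_{\sigma'}^2}{2} - \log\left(\sum_{\sigma'} e^{\beta J x_{\sigma'}}\right)\right] = \beta J x_\sigma - \frac{\beta J \, e^{\beta J x_\sigma}}{\sum_{\sigma'} e^{\beta J x_{\sigma'}}} = 0,$$

and dividing through by $\beta J$ gives Eq. (4) for each $\sigma = 1, \ldots, q$.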

You can also see the links provided by Yvan. They should be helpful.
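If a numerical check helps, here is a minimal sketch (an illustration, not taken from either post) that looks for a solution of Eq. (4) by iterating its right-hand side as a softmax map; the values of `q`, `beta_J`, and the iteration count are arbitrary choices for demonstration.

```python
import numpy as np

def solve_saddle_point(q=3, beta_J=4.0, n_iter=200, seed=0):
    """Find one solution of Eq. (4),
    x_sigma = exp(beta*J*x_sigma) / sum_sigma' exp(beta*J*x_sigma'),
    by fixed-point iteration from a random starting point."""
    rng = np.random.default_rng(seed)
    x = rng.random(q)
    x /= x.sum()  # Eq. (4) implies sum_sigma x_sigma = 1, so start on the simplex
    for _ in range(n_iter):
        w = np.exp(beta_J * x)
        x = w / w.sum()  # apply the right-hand side of Eq. (4)
    return x

# Which fixed point the iteration lands on depends on beta*J and the starting point;
# the symmetric solution x_sigma = 1/q always satisfies Eq. (4).
print(solve_saddle_point())
```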