
After reading the derivation of the discrete partition sum, it seems like my book regards the generalization to the continuous case as trivial: just change the summation to an integral and multiply by a differential.

However, I don't understand why this is valid, and I'm wondering where the differential comes from. I would understand it if it were a Riemann sum and we let the step size go to zero, but I see no step in the expression. I hope someone can help me understand, even though this may be more of a mathematical question than a physics one.

Thanks in advance!


Edit: This is classical statistical mechanics.

The partition sum is defined as follows:

$$Z = \sum_{i=0} \exp(-\beta E_i)$$

A few pages later, the partition sum for an ideal gas is calculated, and the expression is given as

$$Z = \frac{1}{h^{3N} N!} \int \exp(-\beta E)\, dx_1 \cdots dp_N$$

This is clearly much more convenient for calculations in the continuous case. However, there is still no explanation of how the discrete sum has become an integral, and this is what confuses me.

Edit: Thanks for the suggestion to read the other question. However, it seems to me that the question addresses the limit quantum $\rightarrow$ classical, and that my question addresses the limit discrete $\rightarrow$ continuous. There are indeed some similarities, but without knowledge of quantum statistical mechanics I did not manage to find the answer I wanted.

B. Brekke

4 Answers


Disclaimer: This is not really an answer, but rather a "long comment". However, I hope it can provide some useful insights. Anyway, I think this question can basically be considered a duplicate of "Is there a way to obtain the classical partition function from the quantum partition function in the limit $h \to 0$?".

The first version of the partition function you report,

$$\tag{1}\label{1} Z_{qm}=\sum_{\text{states}} e^{-\beta E_i}$$

is intrinsically quantum mechanical, and has little meaning in classical mechanics, unless you are considering some model system with a discrete number of states (like the Ising model).

Note that the index $i$ of \ref{1} must be interpreted as running over the states of the system, not over the energy levels. In quantum mechanics, the states of a system are in general discrete, and therefore an expression like \ref{1} is meaningful. You can transform it into a sum over energy levels if you want, by writing

$$\tag{2}\label{2} Z_{qm} = \sum_{\text{energy levels}} g_i e^{-\beta E_i}$$

where $g_i$ is the degeneracy of the energy level $E_i$, i.e. the number of different quantum states corresponding to this energy level.
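
As a toy illustration of why \ref{1} and \ref{2} are the same thing, here is a minimal numerical sketch (the three-state spectrum is made up, not any physical system):

```python
# Minimal check that the sum over states, Eq. (1), and the sum over energy
# levels weighted by degeneracies g_i, Eq. (2), give the same Z.
# The three-state spectrum below is a made-up toy example.
import math
from collections import Counter

beta = 1.0
state_energies = [0.0, 1.0, 1.0]   # two states share the level E = 1 (g = 2)

# Eq. (1): sum over states
Z_states = sum(math.exp(-beta * E) for E in state_energies)

# Eq. (2): sum over energy levels, each weighted by its degeneracy g_i
degeneracy = Counter(state_energies)            # {E_i: g_i}
Z_levels = sum(g * math.exp(-beta * E) for E, g in degeneracy.items())

print(Z_states, Z_levels)   # both equal 1 + 2*exp(-1) ≈ 1.7358
```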

In classical mechanics, you have a continuum of states, which we call the phase space. Every point $P=(\mathbf x_1, \dots, \mathbf x_N,\mathbf p_1, \dots, \mathbf p_N)$ in phase space corresponds to a different physical state. Therefore an expression like \ref{1} has no meaning in classical physics. The corresponding classical expression is instead

$$\tag{3}\label{3} Z_{cm} = \frac{1}{h^{3N} N!} \int e^{-\beta H(\mathbf p, \mathbf x)}\, d^{3N}p \, d^{3N}x$$

However, we know that classical mechanics is an approximation of quantum mechanics. Therefore, under suitable conditions we must be able to approximate \ref{1} with \ref{3}:

$$Z_{qm} \approx Z_{cm}$$

Proving rigorously that this approximation holds is quite cumbersome. In K. Huang, Statistical Mechanics, second edition, section 9.2, you can find a rigorous proof of this result for the case of non-interacting particles (ideal gas); the general case is considerably more involved.

You can find another proof in this article (there is a paywall, though), which also mainly considers an ideal gas.

Another, simpler derivation for the case of a single particle in 1D can be found in these lecture notes (par. 2.1.1).
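
If you just want a rough numerical feeling for $Z_{qm} \approx Z_{cm}$ (a sanity check, not a proof), here is a minimal sketch for a single 1D harmonic oscillator, in units where $\hbar = \omega = 1$; the classical result is $Z_{cm} = 1/(\beta\hbar\omega)$, and the two agree when $\beta\hbar\omega \ll 1$:

```python
# Sanity check (not a proof): Z_qm -> Z_cm for a 1D harmonic oscillator
# when beta*hbar*omega << 1.  Units with hbar = omega = 1 are assumed.
import math

def Z_quantum(beta, n_max=100_000):
    # Z_qm = sum_n exp(-beta*(n + 1/2)), truncated after n_max terms
    return sum(math.exp(-beta * (n + 0.5)) for n in range(n_max))

def Z_classical(beta):
    # Z_cm = (1/h) * integral dx dp exp(-beta*H) = 1/(beta*hbar*omega)
    return 1.0 / beta

for beta in [2.0, 1.0, 0.1, 0.01]:
    q, c = Z_quantum(beta), Z_classical(beta)
    print(f"beta = {beta:5.2f}   Z_qm = {q:9.4f}   Z_cm = {c:9.4f}   ratio = {q / c:.4f}")
```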

I have been thinking about a way to explain the general idea of this approximation in simple terms without relying too much on quantum mechanical concepts, but I admit that I have found no explanation that would not dumb the concept down excessively. In other words, I cannot give you any explanation that wouldn't be a "lie", and the best suggestion I can offer is to actually learn some quantum mechanics and then take a look at the derivation in one of the sources I cited.

In particular, I will note that even though a non-quantum-mechanical derivation can be attempted, you will never be able to obtain from it:

  • The factor $h^{3N}$, which comes from phase-space quantization. In some sense, as also explained by knzhou in his answer, it comes from the fact that a quantum state occupies approximately a volume $h$ per degree of freedom in phase space.
  • The factor $N!$, which comes from the indistinguishability of quantum particles. In purely classical mechanics, this factor must be put into $Z$ by hand, to avoid double counting of states which differ only by a permutation of identical particles. Notice that even though in classical mechanics particles are always distinguishable, they are still identical, i.e. the classical Hamiltonian remains unchanged if you exchange the labels of two atoms. Because of this, you need the factor $N!$ in classical mechanics too; however, it ultimately comes from quantum indistinguishability.
valerio

This is definitely a nontrivial step. The key is that in the semiclassical limit, there is one quantum state per volume $h$ of phase space (see my question here). You can see a basic version of this in the WKB approximation, $$\oint p \, dx = (n + \text{const.}) \, h$$ where the constant depends on boundary conditions, and is negligible in the semiclassical limit $n \gg 1$, or even the Bohr model for the hydrogen atom, $$\oint L \, d \theta = n h$$ since $(L, \theta)$ are phase space variables just as good as $(p, x)$. That's why we can replace the quantum sum over states with a classical integral over phase space. (Note that the sum over states is necessarily quantum; there is no such thing as an energy eigenstate in classical mechanics. If your book is claiming everything is classical, it’s being dishonest!)
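
For what it's worth, here is a toy numerical check of the "one state per volume $h$" statement (my own sketch, with made-up units $m = L = h = 1$): for a particle in a 1D box, the sum over the quantized levels approaches the classical phase-space integral divided by $h$ at high temperature.

```python
# Toy check of "one quantum state per phase-space volume h": for a particle in
# a 1D box of length L, sum_n exp(-beta*E_n) with E_n = n^2 h^2 / (8 m L^2)
# approaches (L/h) * integral dp exp(-beta*p^2/2m) at high temperature.
# Units with m = L = h = 1 are assumed.
import math

m = L = h = 1.0

def Z_box_quantum(beta, n_max=100_000):
    return sum(math.exp(-beta * (n * h) ** 2 / (8 * m * L ** 2)) for n in range(1, n_max))

def Z_box_classical(beta):
    # (1/h) * integral_0^L dx * integral_-inf^inf dp exp(-beta*p^2/2m)
    return (L / h) * math.sqrt(2 * math.pi * m / beta)

for beta in [1.0, 0.1, 0.01, 0.001]:
    q, c = Z_box_quantum(beta), Z_box_classical(beta)
    print(f"beta = {beta:6.3f}   quantum sum = {q:9.3f}   classical/h = {c:9.3f}")
```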

knzhou

Let's take one classical particle confined in a volume $V$. The number of states per energy interval $\Delta E$ is
$$\Delta i = \frac{4\pi p^2 \,\Delta p\, V}{h^3} = \frac{4\pi (2 m^3 E)^{1/2}\,\Delta E\, V}{h^3},$$
and we see that the level spacing $\Delta E_i = \Delta E / \Delta i$ tends to zero when $V$ tends to infinity. For the one-particle partition function we then have
$$Z_1 = \sum_i e^{-\beta E_i}\,\frac{\Delta E_i}{\Delta E_i} = \left[\sum_i \frac{4\pi}{h^3}\,(2 m^3 E_i)^{1/2}\, e^{-\beta E_i}\,\Delta E_i\right] V.$$
The expression in square brackets is a Riemann sum, which can be replaced by an integral when $V$ tends to infinity. Of course, the example of a single particle is taken for simplicity.
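
As a quick numerical illustration of this Riemann-sum argument (a rough sketch in units $m = h = 1$, $\beta = 1$, with the volume factor $V$ divided out): as $\Delta E$ shrinks, the bracketed sum approaches the integral, whose closed form is the familiar $(2\pi m/\beta h^2)^{3/2}$ per unit volume.

```python
# Rough sketch of the Riemann-sum argument above, in units m = h = 1, beta = 1,
# with the volume factor V divided out: as Delta E -> 0 the sum approaches the
# integral, whose closed form is (2*pi*m / (beta*h^2))**(3/2).
import math

m = h = beta = 1.0

def summand(E):
    # (4*pi/h^3) * sqrt(2*m^3*E) * exp(-beta*E): density of states times Boltzmann factor
    return (4 * math.pi / h ** 3) * math.sqrt(2 * m ** 3 * E) * math.exp(-beta * E)

def riemann_sum(dE, E_max=50.0):
    # midpoint Riemann sum over energy bins of width dE
    return sum(summand((k + 0.5) * dE) * dE for k in range(int(E_max / dE)))

exact = (2 * math.pi * m / (beta * h ** 2)) ** 1.5

for dE in [1.0, 0.1, 0.01, 0.001]:
    print(f"Delta E = {dE:6.3f}   Riemann sum = {riemann_sum(dE):.6f}   integral = {exact:.6f}")
```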


A sum can always be turned into an integral by use of delta functions.

$$\sum_i e^{- \beta E_i} = \sum_i \frac{1}{A} \int d^N q \, d^N p ~ \delta(q-q_i)\delta(p - p_i) e^{- \beta H(q,p)}$$

The normalisation factor $A$ was introduced by hand to match dimensionality on both sides. The state labelled by $i$ is assumed to be specified by phase space variables $q_i$ and $p_i$. Switching the order of integration and summation, one has

$$\sum_i e^{- \beta E_i} = \frac{1}{A} \int d^N q \, d^N p \sum_i \delta(q-q_i)\delta(p - p_i) e^{- \beta H(q,p)}$$

where the sum $\sum_i$ is over all states of the system. Since phase space variables fully specify which state the system is in, the expression $\sum_i \delta(q-q_i)\delta(p - p_i)$ must be the identity, similar to the completeness relations one usually encounters in QM. Thus,

$$ \sum_i e^{- \beta E_i} = \frac{1}{A} \int d^N q \, d^N p ~ e^{- \beta H(q,p)} $$

as claimed by your textbook. Fixing the normalisation factor $A$ requires quantum mechanical considerations, however, and others have already given good answers to this.
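
To make the role of $A$ a bit more concrete, here is a toy numerical sketch (my own illustration, for a 1D harmonic oscillator with $\beta = 1$): if the discrete states tile phase space with one state per cell of area $A$, then $A \sum_i e^{-\beta H(q_i, p_i)}$ approaches $\int dq\, dp\, e^{-\beta H}$ once the cells are small compared with the thermal spread.

```python
# Toy illustration of the normalisation factor A: if the discrete states tile
# phase space with one state per cell of area A, then
#   A * sum_i exp(-beta*H(q_i, p_i))  ~  integral dq dp exp(-beta*H(q, p)).
# A 1D harmonic oscillator H = (q^2 + p^2)/2 with beta = 1 is assumed (a toy
# choice); the exact integral is then 2*pi/beta.
import math

beta = 1.0

def H(q, p):
    return 0.5 * (q * q + p * p)

def grid_sum(cell_side, cutoff=12.0):
    # one "state" at the centre of each square cell of area A = cell_side**2
    n = int(2 * cutoff / cell_side)
    total = 0.0
    for i in range(n):
        for j in range(n):
            q = -cutoff + (i + 0.5) * cell_side
            p = -cutoff + (j + 0.5) * cell_side
            total += math.exp(-beta * H(q, p))
    return total

exact_integral = 2 * math.pi / beta

for side in [2.0, 1.0, 0.5, 0.25]:
    A = side * side
    print(f"cell area A = {A:5.2f}   A * sum_i = {A * grid_sum(side):.4f}   integral = {exact_integral:.4f}")
```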

Dexter Kim