
I am interested in contrasting the mixing of gases in quantum and classical systems, and in understanding what Gibbs's paradox has to say about each.

Let's begin with the classical case. To determine the entropy, one needs to count the number of states available in (position, momentum) phase space while satisfying the constraints of the problem: namely, that the particles are localized within some box and the total energy is within some finite window, $U\pm\delta U$. However, naively performing this integral over phase space does not give a sensible result. To get the correct answer, we must first discretize phase space into cells of volume $\mathcal{V}=h^{3N}$ in order to regularize the integral; doing so introduces a factor of $1/h^{3N}$ in front of the phase space integral. From the quantum side, this factor also shows up when we take the semi-classical limit, but now with the otherwise arbitrary constant $h$ identified with Planck's constant. This follows from the uncertainty relation; it is meaningless to talk about a volume of phase space smaller than $h^{3N}$.
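
For concreteness, here is the counting I have in mind for a monatomic ideal gas (a standard sketch; the precise factors involving $\delta U$ are not important here):

$$\Omega(U,V,N)\;\approx\;\frac{1}{h^{3N}}\int_{V^N} d^{3N}q \int_{U\,\le\,\sum_i \mathbf{p}_i^2/2m\,\le\,U+\delta U} d^{3N}p\;\sim\;\frac{V^N}{h^{3N}}\,\frac{(2\pi m U)^{3N/2}}{\Gamma\!\left(\tfrac{3N}{2}+1\right)},$$

with the entropy then given by $S=k_B\ln\Omega$.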

Very well. Now let us proceed with the integral over phase space. It turns out that our final expression for the entropy, $S$, is still not right, because it is not extensive. This problem is neatly captured by Gibbs's paradox: if you imagine two systems of particles, $A$ and $B$, separated by a barrier, then removing and subsequently reinserting the barrier should not result in a decrease of the entropy. Unfortunately, a decrease is exactly what our naive expression for $S$ predicts. In order to resolve this paradox, Gibbs taught us to insert a factor of $1/N!$ in front of the phase space integral. The standard interpretation is that this factor reflects the indistinguishability of the particles.
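
To spell out the non-extensivity (using the schematic $\Omega$ above together with Stirling's approximation), without the $1/N!$ one finds

$$S \approx N k_B\left[\ln\!\left(V\left(\frac{4\pi m U}{3Nh^2}\right)^{3/2}\right)+\frac{3}{2}\right],$$

which contains $\ln V$ rather than $\ln(V/N)$ and therefore does not simply double when $U$, $V$, and $N$ are doubled. Dividing $\Omega$ by $N!$ turns this into the extensive Sackur-Tetrode form

$$S \approx N k_B\left[\ln\!\left(\frac{V}{N}\left(\frac{4\pi m U}{3Nh^2}\right)^{3/2}\right)+\frac{5}{2}\right].$$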

Before proceeding, let me formalize what I mean by "indistinguishable" in this classical context: a naive observer $\mathcal{O}_1$ who treats the particles in partitions $A$ and $B$ as indistinguishable cannot construct any device that would extract work from their mixing. From her perspective, the box looks the same before and after the barrier is removed. Conversely, suppose we had a second observer $\mathcal{O}_2$ who knows more about these particles: specifically, she is able to construct some device that allows her to extract work from their mixing once the barrier is removed. To her, the particles in partitions $A$ and $B$ are distinguishable from each other.
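
To quantify the difference between the two observers in the simplest case (equal volumes, $N$ particles on each side, same temperature and pressure): if $\mathcal{O}_2$ can couple to whatever property distinguishes $A$ from $B$ (say, with semipermeable pistons), the maximum work she can extract from isothermal mixing is

$$W_{\max} \;=\; T\,\Delta S_{\text{mix}} \;=\; 2Nk_BT\ln 2,$$

while $\mathcal{O}_1$, who cannot couple to that property, assigns $\Delta S_{\text{mix}}=0$ and extracts nothing.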

Here is what bothers me: both the factor of $1/h^{3N}$ and the factor of $1/N!$ appear to have been put in by hand. This is in contrast to the quantum case, where they both arise naturally from the position-momentum uncertainty relation and from particle statistics, respectively. These properties are intrinsic to the system under consideration and, especially in the case of indistinguishability, have nothing to do with what observers do or do not know about the system.

Here is my question: would the ignorant observer, $\mathcal{O}_1$, be able to distinguish between the mixing of a classical and a quantum gas? To put it more formally: would she be able to extract more work in the latter case? On a very naive level, I would expect the answer to be yes; particle statistics in quantum mechanics gives rise to correlations through Pauli exclusion and entanglement. In fact, one can even define an entropy of entanglement, which should exist regardless of whether you are observer $\mathcal{O}_1$ or $\mathcal{O}_2$.

Thanks for the help!


1 Answer


In order to resolve this paradox, Gibbs taught us to insert a factor of $1/N!$ in front of the phase space integral. The standard interpretation is that this factor reflects the indistinguishability of the particles.

This is a common narrative - what Feynman would call "physicist's history of physics" - but it is likely wrong. From reading Jaynes, I don't think Gibbs's point was that the entropy of mixing of a single gas has to be zero, or that the only way to achieve this is to regard all gas particles as indistinguishable and thus add the factor $1/N!$ to get the correct number of states. The point, as Jaynes writes, is rather an interesting property of the entropy of mixing and its rationalization: the entropy of mixing is a discontinuous function of distinguishability. It is finite and always far from zero for distinguishable particles, however small the distinction, but it is zero in the singular case where the distinction vanishes and the particles are indistinguishable. There is no actual paradox or problem in this discontinuity; it is just an interesting property of the concept of entropy: its value depends discontinuously on which of the two descriptions (distinguishable or indistinguishable) we choose.
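
In formulas (my paraphrase of the point, not a quotation from Jaynes): for mixing equal amounts of gases $A$ and $B$ at the same temperature and pressure,

$$\Delta S_{\text{mix}} \;=\;\begin{cases}2Nk_B\ln 2, & A\neq B\ \text{(however slight the difference)},\\ 0, & A = B,\end{cases}$$

with no intermediate values - this is the discontinuity in distinguishability.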

Here is what bothers me: both the factor of $1/h^{3N}$ and the factor of $1/N!$ appear to have been put in by hand. This is in contrast to the quantum case, where they both arise naturally from the position-momentum uncertainty relation and from particle statistics, respectively. ...especially in the case of indistinguishability, have nothing to do with what observers do or do not know about the system.

$h$ appears in Heisenberg's relations, but these do not imply that classical phase space should be discretized into volumes $h^{3N}$ in classical statistical physics; this is just an approximate way, in quantum statistical physics, to obtain the number of eigenstates of the quantum Hamiltonian of the ideal gas, based on the simpler classical phase-space volume associated with fixed $U,V,N$.
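
To make that explicit for a single particle in a box (where $E_{\mathbf{n}}$ are the energy eigenvalues and $f$ is any smooth function of the energy), the semiclassical limit replaces the sum over eigenstates by a phase-space integral,

$$\sum_{\mathbf{n}} f(E_{\mathbf{n}})\;\approx\;\frac{1}{h^{3}}\int d^{3}q\,d^{3}p\;f\!\left(\frac{\mathbf{p}^{2}}{2m}\right),$$

and for $N$ particles the prefactor becomes $1/h^{3N}$. The $h^{3N}$ cells are thus a device for counting quantum eigenstates, not a postulate of classical mechanics.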

The factor $1/N!$ is different - it appears already in classical statistical physics, but not because of indistinguishability. The phase volume of distinguishable particles grows superexponentially with $N$, which implies a Boltzmann entropy function that is not homogeneous of first degree. The factor is added to obtain an entropy function that is homogeneous of first degree in $U,V,N$, like the conventional thermodynamic entropy.

This step, or change of definition, is not required by any physical law; we could drop it and define Boltzmann's entropy just from the phase volume, without the reduction factor. We would then get a function that is not homogeneous of first degree and instead grows with $N$ much faster; this would be strange, but all thermodynamic phenomena could still be described with such a function. As there is no real physical need for the correction factor, its application is just a convention and does not reveal anything deep about the fundamental indistinguishability of the particles. The entropy function in thermodynamics is to some extent arbitrary; it is merely desired to be homogeneous of first degree, and it is defined to be so even in classical statistical physics, for classical systems made of distinguishable particles.
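
Here is a quick numerical illustration of that point (my own sketch, in arbitrary units with $m=h=k_B=1$): computing $\ln\Omega$ for a monatomic ideal gas with and without the $1/N!$ factor, the entropy per particle stays fixed under $(U,V,N)\to(2U,2V,2N)$ only when the factor is included.

```python
# Sketch: Boltzmann entropy of a monatomic ideal gas from the phase-space volume,
# with and without the 1/N! factor, in arbitrary units with m = h = k_B = 1.
from math import log, pi, lgamma

def entropy(U, V, N, corrected=True):
    """Return S/k_B = ln(Omega) for a monatomic ideal gas."""
    s = N * log(V) + 1.5 * N * log(2 * pi * U) - lgamma(1.5 * N + 1)
    if corrected:
        s -= lgamma(N + 1)  # Gibbs's 1/N! reduction factor
    return s

N = 10**6
U, V = 1.5 * N, 1.0 * N  # fixed energy and volume per particle
for corrected in (True, False):
    s1 = entropy(U, V, N, corrected) / N
    s2 = entropy(2 * U, 2 * V, 2 * N, corrected) / (2 * N)
    print(f"with 1/N! = {corrected}:  S/N = {s1:.3f}  ->  {s2:.3f} after doubling (U, V, N)")
# With the factor, S/N is unchanged (extensive entropy); without it, S/N grows by ~ln 2.
```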

In quantum theory, we get the reduced number of states, and a Boltzmann entropy that is homogeneous of first degree, as soon as we assume that the particles are indistinguishable; we do not have to put in the correction factor by hand, because we have already caused it to be there by assuming the particles are indistinguishable. If we did not get a homogeneous first-degree function this way, we would still introduce a correction factor by hand, even in quantum theory, and argue that it is there to give a definition of Boltzmann's entropy based on quantum states that connects to the entropy used in thermodynamics. This would happen, e.g., for a gas in which all the particles are distinguishable but very similar. So assuming that the particles are indistinguishable produces, by the conventional algorithm, a Boltzmann entropy that is homogeneous of first degree, but this does not imply that the particles really are indistinguishable, because we can get that function also for distinguishable particles.
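
One way to see how the factor appears automatically, assuming a dilute gas in which multiple occupancy of single-particle states is negligible: if $M\gg N$ single-particle states are thermally accessible, distinguishable particles have roughly $M^N$ many-body states, while for fermions the count is

$$\binom{M}{N}\;=\;\frac{M!}{N!\,(M-N)!}\;\approx\;\frac{M^{N}}{N!}\qquad(M\gg N),$$

i.e. the classical count divided by $N!$ (and similarly for bosons), without anything being put in by hand.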

We do not actually know for sure that all nitrogen molecules are the same and indistinguishable. Although this is a useful assumption in quantum theory, and is consistent with experiments, I think they probably aren't fundamentally indistinguishable; we simply can't detect subtle distinctions, and if those distinctions are small enough, there is no difference in behaviour between distinguishable and indistinguishable particles. If, 1000 years in the future, someone discovers that nitrogen molecules actually differ from each other in the value of some new parameter, nothing about the presently known thermodynamic properties of nitrogen will change. However, new phenomena could become observable: if a method becomes known by which the molecules can be separated into groups based on the value of that parameter (e.g. a semipermeable membrane), then we can expect that work will be necessary to separate them, and also that work can be extracted by mixing the previously separated parts while letting the membrane move. This would then affect how we calculate the entropy of nitrogen in statistical physics, and its value, but it likely won't contradict the real-world measurable facts about the behaviour of nitrogen that we know today.

would the ignorant observer, $\mathcal{O}_1$, be able to distinguish between the mixing of a classical and a quantum gas? To put it more formally: would she be able to extract more work in the latter case? On a very naive level, I would expect the answer to be yes; particle statistics in quantum mechanics gives rise to correlations through Pauli exclusion and entanglement. In fact, one can even define an entropy of entanglement, which should exist regardless of whether you are observer $\mathcal{O}_1$ or $\mathcal{O}_2$.

I don't get this question: why would mixing a quantum gas allow extracting more work than mixing a classical gas? What do "quantum gas" and "classical gas" mean here, specifically?