
Consider a quantum system in thermal equilibrium with a heat bath. In determining the density operator of the system, the usual procedure is to maximize the von Neumann entropy subject to the constraint that the ensemble average of the Hamiltonian has a fixed value.
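For concreteness, the procedure I have in mind is the standard constrained maximisation (sketched here just to fix notation): maximise $S[\hat{\rho}] = -\text{tr}\{\hat{\rho}\ln\hat{\rho}\}$ subject to $\text{tr}\{\hat{\rho}\hat{H}\} = E$ and $\text{tr}\,\hat{\rho} = 1$. Introducing Lagrange multipliers $\beta$ and $\lambda$ and setting the variation of $$-\text{tr}\{\hat{\rho}\ln\hat{\rho}\} - \beta\left(\text{tr}\{\hat{\rho}\hat{H}\} - E\right) - \lambda\left(\text{tr}\,\hat{\rho} - 1\right)$$ to zero gives $\ln\hat{\rho} = -\beta\hat{H} - (1+\lambda)$, i.e. $$\hat{\rho} = \frac{e^{-\beta\hat{H}}}{\text{tr}\{e^{-\beta\hat{H}}\}}.$$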

What justifies this assumption?

Sakurai, in his QM text, writes

To justify this assumption would involve us in a delicate discussion of how equilibrium is established as a result of interactions with the environment.

I'd appreciate it if someone could shed some light on this. References are welcome as well.

I've heard the suggestion that a thermal equilibrium ensemble is simply defined by that density operator which solves the constrained optimization problem above. If this is the case, then why are real physical systems that are in weak contact with heat baths for long periods well described by such mathematical ensembles, and how would one identify the Lagrange multiplier $\beta$ that arises with the inverse temperature of the heat bath?

joshphysics

2 Answers


You need to read this paper by Jaynes. I can't explain it as well as him, but I will try to summarise the main points below.

The first thing is to realise that the entropy is observer-dependent: it depends on what information you have access to about the system. A finite temperature means that you don't have access to all the information about the state of the system; in particular, you cannot keep track of the (infinite) degrees of freedom of the bath. However, suppose that some demon could keep track of all the degrees of freedom of the system and bath: he/she sees zero entropy. For the demon, it looks a bit like the total system is at zero temperature (although really it is better to say that temperature is ill-defined for the demon).

Given that you are ignorant (sorry, but at least I'm not calling you a demon), you need to find a consistent prescription for assigning probabilities to the different microstates. The prescription must be 'honest' about what you do or don't know. The entropy is in some sense a unique measure of ignorance, as proved by Shannon. Therefore you should 'maximise your ignorance', subject to the constraint that you do know certain macroscopic observables, e.g. average energy or average particle number if the system is open, etc.

Maximising the entropy of the system is the most logical way to assign probabilities to the microstates of the system, given access only to a limited subset of observables. The same 'MaxEnt' principle is quite general and applies to all statistical analysis, not only physics. The Lagrange multiplier $\beta$ is identified with inverse temperature by comparing the outcome of this abstract procedure to the experimental facts of phenomenological thermodynamics.
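If it helps to see this machinery in action, here is a minimal numerical sketch (my addition; the four-level Hamiltonian and target energy below are made up purely for illustration). In the energy eigenbasis the MaxEnt state is diagonal with Boltzmann weights, and the single constraint $\langle \hat{H} \rangle = E$ is what fixes the Lagrange multiplier $\beta$:

```python
import numpy as np
from scipy.optimize import brentq

# Toy 4-level "system"; energies and target energy are arbitrary choices.
energies = np.array([0.0, 1.0, 1.0, 2.5])
E_target = 0.8                      # the fixed ensemble-average energy <H>

def gibbs_probs(beta):
    """Eigenvalues of rho = exp(-beta H)/Z; rho is diagonal in the energy eigenbasis."""
    w = np.exp(-beta * energies)
    return w / w.sum()

def mean_energy(beta):
    return np.dot(gibbs_probs(beta), energies)

# The Lagrange multiplier beta is fixed implicitly by the constraint <H> = E_target.
beta = brentq(lambda b: mean_energy(b) - E_target, -50.0, 50.0)

p = gibbs_probs(beta)
S = -np.sum(p * np.log(p))          # von Neumann entropy of the MaxEnt state
print(f"beta = {beta:.4f}, <H> = {mean_energy(beta):.4f}, S = {S:.4f}")
```

Nothing in this construction refers to a bath; the identification of $\beta$ with an inverse temperature comes only from comparing the result with phenomenological thermodynamics, as described above.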

If you are interested in the actual dynamics of equilibration, there has been a lot of literature on this recently, especially in mesoscopic systems. Particular focus is placed on the integrability of the system: non-integrable (chaotic) systems do thermalise, whereas there is a fair amount of evidence that integrable systems do not thermalise properly. Intuitively, this is because integrable systems have an extensive set of local conserved quantities, so that even when in contact with a heat bath the memory of the initial conditions is never quite lost.

See, for example: Dynamics of thermalisation in small Hubbard-model systems and Thermalization and ergodicity in many-body open quantum systems. If you search 'thermalization' (sic) on the arXiv you will find many more.


Here is an alternative approach to answering this question (one that sidesteps the Lagrange-multiplier construction), based on the first law of thermodynamics and quantum information.

In short, a weakly interacting ensemble at thermal equilibrium maximises the von Neumann entropy because in doing so it minimises the free energy of the system.

Why can we say this? A weakly interacting ensemble at thermal equilibrium corresponds to a Gibbs state, which is nothing more than the quantum version of a canonical ensemble. We can write this out as $$\hat{\rho}_\beta = \frac{e^{-\beta \hat{H}}}{\mathcal{Z}},$$ where $\mathcal{Z} = \text{tr}\{e^{-\beta \hat{H}}\}$ is the canonical partition function and $\beta = 1/T$ is the inverse temperature (with $k_B = 1$).

The canonical ensemble has an associated free energy (the Helmholtz free energy) $$F = U - TS,$$ where $U$ is the internal energy and $T$ is the temperature. The quantum version is $$F = \langle \hat{H} \rangle - TS.$$ For a Gibbs state this coincides with the thermodynamic free energy, but we can also define the same quantity for an arbitrary state $\hat{\rho}$ of an open quantum system, taking $S$ to be the von Neumann entropy $$S = -\text{tr}\{\hat{\rho}\ln\hat{\rho}\}.$$ By linearity of the trace, $$F(\hat{\rho}) = \text{tr}\{\hat{\rho}\hat{H}\} + \beta^{-1}\text{tr}\{\hat{\rho}\ln\hat{\rho}\} = \beta^{-1}\text{tr}\{\hat{\rho}(\ln\hat{\rho}+\beta\hat{H})\}.$$ The free energy of the Gibbs state itself is the familiar equilibrium expression $$F(\hat{\rho}_\beta) = -\beta^{-1} \ln \mathcal{Z} = -\beta^{-1}\ln\left(\text{tr}\{e^{-\beta \hat{H}}\}\right).$$

Now consider the relative entropy between an arbitrary state $\hat{\rho}$ and the Gibbs state $\hat{\rho}_\beta$, $$D(\hat{\rho}\|\hat{\rho}_\beta) = \text{tr}\{\hat{\rho}\ln\hat{\rho}\} - \text{tr}\{\hat{\rho}\ln\hat{\rho}_\beta\},$$ which can be expressed in terms of the free energy as $$D(\hat{\rho}\|\hat{\rho}_\beta) = \beta \left(F(\hat{\rho}) - F(\hat{\rho}_\beta)\right).$$ Since the relative entropy is nonnegative, $D(\cdot\|\cdot)\geq 0$, the Gibbs state has the lowest possible free energy: $F(\hat{\rho}) \geq F(\hat{\rho}_\beta)$ for every state $\hat{\rho}$.
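As a quick numerical sanity check (my addition; the Hamiltonian and state below are random and purely illustrative), one can verify the identity $D(\hat{\rho}\|\hat{\rho}_\beta) = \beta\left(F(\hat{\rho}) - F(\hat{\rho}_\beta)\right) \geq 0$ directly:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
d, beta = 4, 0.7                     # dimension and inverse temperature (arbitrary)

# Random Hermitian Hamiltonian and a random full-rank density matrix rho.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = B @ B.conj().T
rho /= np.trace(rho).real

# Gibbs state and free energy F(rho) = tr(rho H) + (1/beta) tr(rho ln rho).
rho_beta = expm(-beta * H)
Z = np.trace(rho_beta).real
rho_beta /= Z

def free_energy(state):
    return (np.trace(state @ H) + np.trace(state @ logm(state)) / beta).real

D = np.trace(rho @ (logm(rho) - logm(rho_beta))).real        # relative entropy
print(D, beta * (free_energy(rho) - free_energy(rho_beta)))  # equal, and >= 0
print(np.isclose(free_energy(rho_beta), -np.log(Z) / beta))  # F(rho_beta) = -ln(Z)/beta
```

The two printed numbers agree and are nonnegative, and the last line confirms that the free energy of the Gibbs state is $-\beta^{-1}\ln\mathcal{Z}$.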

At fixed temperature, the first law of thermodynamics can be stated in this context as $$dE = T\,dS + dF,$$ so at fixed average energy, minimising the free energy is the same as maximising the entropy: Gibbs states minimise $F$, and therefore maximise the von Neumann entropy (which here coincides with the Gibbs entropy).
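To spell out that last step (my own rephrasing of the argument), writing out $D(\hat{\rho}\|\hat{\rho}_\beta)\geq 0$ directly gives a bound on the entropy, $$S(\hat{\rho}) \;\le\; S(\hat{\rho}_\beta) + \beta\left(\text{tr}\{\hat{\rho}\hat{H}\} - \text{tr}\{\hat{\rho}_\beta\hat{H}\}\right),$$ so among all states with the same average energy as the Gibbs state, the Gibbs state has the largest von Neumann entropy, which is exactly the constrained maximisation in the question.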

Expanding on this argument, one can also justify the approach to thermal equilibrium on quantum-information grounds. Any quantum channel that preserves the Gibbs state cannot increase the free energy: under repeated applications of such a channel, the free energy of an out-of-equilibrium state decreases monotonically towards the equilibrium value $F(\hat{\rho}_\beta)$, so the von Neumann entropy is maximised at long times.
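The one-line version of this (assuming the data-processing inequality for the relative entropy, which holds for any CPTP map $\mathcal{E}$): if $\mathcal{E}(\hat{\rho}_\beta) = \hat{\rho}_\beta$, then $$\beta\left(F(\mathcal{E}(\hat{\rho})) - F(\hat{\rho}_\beta)\right) = D(\mathcal{E}(\hat{\rho})\,\|\,\mathcal{E}(\hat{\rho}_\beta)) \;\le\; D(\hat{\rho}\,\|\,\hat{\rho}_\beta) = \beta\left(F(\hat{\rho}) - F(\hat{\rho}_\beta)\right),$$ so each application of a Gibbs-preserving channel can only lower the free energy towards $F(\hat{\rho}_\beta)$.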

This argument is given by Preskill in his notes on Quantum Shannon Theory.

aquohn
Jake Xuereb