
I'm a computer scientist doing some research that touches on basic concepts in statistical mechanics: macrostate, microstate and entropy. The way I'm currently conceiving of it is that the microstate includes all the information needed to perfectly describe the state of a system, the macrostate provides some of that information, allowing you to narrow the possibilities down to a subset of states and a distribution over them, and the entropy roughly says how much information is still missing after you specify the macrostate.
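To make that mental model concrete for myself, here is a toy sketch I put together (a system of coin flips, with the head count as the macrostate and all consistent microstates assumed equally likely):

```python
import math

# Toy system: N coin flips. A microstate is the full heads/tails sequence;
# the macrostate records only the total number of heads.
N = 100

def missing_information_bits(num_heads: int) -> float:
    # Microstates consistent with the macrostate, all assumed equally
    # likely, so the missing information is log2 of their count.
    consistent_microstates = math.comb(N, num_heads)
    return math.log2(consistent_microstates)

print(missing_information_bits(50))  # ~96.3 of the 100 bits still unknown
print(missing_information_bits(0))   # 0.0: this macrostate fixes the microstate
```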

From various places online, including this SE thread, I read that the choice of what to put in the macro-description depends on what state variables one is interested in. That SE answer seems to downplay the significance of this, but from my uninformed outsider perspective it seems like a big deal. I could, for example, make the entropy of any system zero if I choose the state variables to be the position and momentum of every particle (let's just stick to the classical paradigm for now).

From the examples I've seen, there are only a few state variables such as temperature and pressure that are even considered, but could/does it ever happen that two different experimenters on the same system have different 'opinions' on what the state variables should be, and so calculate totally different values for entropy? If not, is there a satisfying reason why the choice of state variables is not as subjective as it appears?

Mauricio
ludog

3 Answers


I could, for example, make the entropy of any system zero if I choose the state variables to be the position and momentum of every particle (let's just stick to the classical paradigm for now).

The term "entropy" may have different meanings depending on the context; Jaynes pointed out that there are at least six different quantities called "entropy" (his classification is fully quoted in this answer). Notably, information entropy is distinct from what we call entropy in statistical physics and thermodynamics.

Returning to the quoted passage in terms of information entropy: entropy is a measure of uncertainty (i.e., in the classical world, of our ignorance) about the state of the system. If the positions and momenta of all the particles are known, there is no uncertainty, and the entropy is zero. If we analyze the system in terms of macroscopic variables, as is the case in statistical physics, where many different configurations in phase space result in the same macroscopic state and are assumed equally probable, the entropy is nonzero.
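As a minimal numerical sketch of this (my own illustration: one macrostate containing $\Omega$ equally probable configurations, for which the uncertainty reduces to the Boltzmann form $\ln\Omega$):

```python
import math

def shannon_entropy(probs):
    # H = -sum(p * ln p), in nats
    return -sum(p * math.log(p) for p in probs if p > 0)

omega = 10**6  # hypothetical number of phase-space configurations in a macrostate

# Positions and momenta known: all probability on one configuration.
print(shannon_entropy([1.0]))                # 0.0, no uncertainty

# Only the macrostate known, configurations equally probable:
print(shannon_entropy([1 / omega] * omega))  # ~13.8
print(math.log(omega))                       # ln(omega), the same value
```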

In this sense, entropy depends on the choice of variables, which in turn depends on how much we know (or can know in principle - see here) about the system. Entropy is not subjective (subjective = based on or influenced by personal feelings, tastes, or opinions), since different people performing calculations with the same variables and under the same assumptions would come to the same conclusions.

From the examples I've seen, there are only a few state variables such as temperature and pressure that are even considered, but could/does it ever happen that two different experimenters on the same system have different 'opinions' on what the state variables should be, and so calculate totally different values for entropy? If not, is there a satisfying reason why the choice of state variables is not as subjective as it appears?

Although statistical physics texts, for the sake of simplicity, mostly deal with the $P,V,T$ variables best suited to gases, one can introduce any number of additional forces ($X_i$) and external variables ($x_i$) and use the same thermodynamic machinery for them, mutatis mutandis: $$ dU = TdS - pdV +\sum_j \mu_j dN_j + \sum_i X_i dx_i $$ Introducing an additional state variable is the same as using conditional probabilities: the entropy calculated with the additional variables specified will differ from the entropy calculated when these variables are ignored, but the experiments remain consistent. The difference is the conditional entropy: $$ H(Y|X)=H(X,Y) - H(X) $$ Again, there is nothing subjective about this.
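As a small numerical sketch of that identity (with a made-up joint distribution over two discrete state variables $X$ and $Y$):

```python
import math

def H(probs):
    # Shannon entropy in bits
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Made-up joint distribution over two discrete state variables X and Y.
p_xy = {('x0', 'y0'): 0.4, ('x0', 'y1'): 0.1,
        ('x1', 'y0'): 0.2, ('x1', 'y1'): 0.3}

# Marginal over X, i.e. the description that ignores Y.
p_x = {}
for (x, _), p in p_xy.items():
    p_x[x] = p_x.get(x, 0.0) + p

H_joint = H(p_xy.values())  # uncertainty with neither variable specified
H_x = H(p_x.values())       # uncertainty removed by specifying X
print(H_joint - H_x)        # the conditional entropy H(Y|X), ~0.85 bits
```

Both descriptions are internally consistent; they simply leave different amounts of uncertainty unresolved.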

This answer contains a somewhat more technical discussion on the same issue.

Roger V.

It's a good question, one that has been pondered at length by anyone with a deep knowledge of thermal physics, and I think there is no complete consensus on the clearest way to present the reasoning. The question and answer at

Does entropy depend on the observer?

should be noted. I hope the following may also help.

I think this is an example where we should be clear in our minds about the distinction between the physical world 'out there', as it were, and the models we may construct in order to pursue scientific arguments and arrive at conclusions and predictions. In all of physics the models we use are simplifications. We deliberately leave out various details on the grounds that they will be irrelevant to whatever questions we want to examine. For example, in mechanics we neglect the van der Waals forces, or the gravitational influence of distant bodies, or whatever.

In thermal physics we typically go about our business in the following way. First we note what kind of system we are dealing with, especially how work can be done on it (e.g. mechanical work by pressure, magnetic work, electrical work, etc.). We then introduce an idealized system in which only a few ways of doing work are possible. We can then write down the fundamental relation in the form $$ dU = T dS + \sum_i Q_i dR_i $$ where $Q_i, R_i$ are pairs of physical variables in terms of which the system state can be specified. The important point for the current question is that this idealized system is not itself the physical system 'out there' in the physical world. It is a human construct. It is a model; one which in some respects will map accurately onto the physical system we want to study, and in some respects will not.

Once we have thus picked out some macroscopic variables (the set $\{ Q_i, R_i \}$), the entropy $S = S(U,\{R_i\})$ has a unique value for each state. But this is a statement about our idealized system, our model. It is not a statement about the physical system. The physical system can be modelled in more than one way. The different models may assign to it different entropies. Each model will be consistent with itself and will make accurate statements about measurable quantities such as heat capacities and susceptibilities and how they relate to one another. The models are also consistent with one another, in that those which assign a lower entropy are the ones which also invoke more macroscopic variables and thus constrain the system to a smaller set of microstates for given values of the macroscopic parameters. The whole subject thus coheres together.
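To illustrate that last point numerically (a toy construction, not part of the argument above): consider two models of the same box of particles, one tracking only the particle number, the other additionally tracking how many particles sit in the left half.

```python
import math

N = 60  # particles, each independently on the left or right half of a box

# Coarser model: no macroscopic variable beyond the particle number,
# so all 2^N left/right assignments are consistent with the macrostate.
S_coarse = math.log(2 ** N)

# Finer model: additionally track n_left, the number of particles on the left.
n_left = 30
S_fine = math.log(math.comb(N, n_left))  # fewer consistent microstates

print(S_coarse, S_fine)  # ~41.6 vs ~39.3 (in units of k_B): more macroscopic
                         # variables, smaller microstate set, lower entropy
```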

My answer to the question is, then, not that entropy is subjective, but that the choice of model we construct in order to study a given system is (to some extent) subjective.

Andrew Steane

Answers from the comments, as suggested here. Answers should be posted as answers, and comments should be used for their intended purpose.


@Tobias Fünke

See the work of Jaynes (his papers are, I think, easy to read and very clear). See also this thread, especially regarding your last paragraph. I think the link I've added should answer your question, but let me just add that statistical mechanics/thermodynamics is a powerful tool exactly in the situation where you cannot, for various reasons, specify the state of a system exactly (say, by knowing the positions and momenta of all particles). If, instead, you are able to find sufficient macroscopic variables (temperature, pressure, volume, etc.), you can try to apply the machinery and get something useful out of it.


@naturallyInconsistent

I would have to very much disagree with Jaynes. There is a treatment of thermodynamics that axiomatises it starting from the definition of a new function called entropy and building upwards, and it makes clear that once you have specified what quantum system you are dealing with and the macroscopic state variables, e.g. $(E,V,N)$ or $(T,p,N)$, then for a specific choice of one of the few allowed entropy functions, everything else is fixed. This should not be surprising, because physics is mostly about objectively measurable things, not subjective things. The subjectivity must thus be somewhat irrelevant or controlled.

Physics has already enforced such great constraints on what counts as a suitable entropy function that we do not have much leeway left to subjectively spew forth new ones. Instead, what the axiomatic approach makes clear is that the remaining freedom of choice changes the entropy function's behaviour only extremely weakly, so that we are now in the sad situation that it would take extremely precise measurements to determine experimentally which of the allowed entropy functions is actually used by Nature.


@FlatterMan

I do not see how anything about $dS = dQ_{\rm rev}/T$ is supposed to be "subjective". The reversible heat flow is measurable, and so is the temperature. Physics is NOT about calculation. Physics is an empirical science. We make measurements, sometimes with high precision, sometimes with rather large errors. It is, admittedly, not easy to make thermodynamic measurements with high precision; what that does teach us, however, is when a term like "entropy" applies in a meaningful way: only to systems that are almost in equilibrium.
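As a rough sketch of such a measurement-based calculation (example numbers, and treating the specific heat as constant, which is only approximately true for water):

```python
import math

# Delta S from dS = dQ_rev / T with dQ_rev = m * c * dT, i.e.
# Delta S = m * c * ln(T2 / T1) for a roughly constant specific heat c.
m = 1.0                  # kg of water
c = 4186.0               # J/(kg K), approximate specific heat of liquid water
T1, T2 = 293.15, 353.15  # K: reversibly heating from 20 C to 80 C

delta_S = m * c * math.log(T2 / T1)
print(delta_S)  # ~780 J/K, independent of who does the calculation
```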


@mmesser314

Adding my 2 cents here, rather than as a comment. It really isn't an answer. Entropy and probability are related. There is the idea that the probability of an outcome is the fraction of a set of trials in which the outcome happens. There is another idea that probability is a measure of your ignorance.

If you have a room full of air and you know the temperature and pressure, you can calculate the probability that all the air will rush over to one side in the next 24 hours. Your answer would be close to 0. If you have measured the positions and momenta of all the molecules and calculated all the trajectories, your answer would be either exactly 0 or exactly 1.
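As a crude order-of-magnitude sketch of the first calculation (treating each molecule's side of the room as an independent fair coin, which ignores all correlations):

```python
# Even a tiny sample of the ~10^25 molecules in a room makes the point.
N = 100
p_all_one_side = 2 * 0.5 ** N  # all on the left, or all on the right
print(p_all_one_side)          # ~1.6e-30; for the full room, vastly smaller
```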

Also, this is how it would work classically; in practice the uncertainty principle would prevent you from calculating trajectories with enough precision. You could construct many isolated rooms (at least, they could be isolated in a thought experiment) and measure the outcomes to find the probability. Measuring the outcome would collapse the wave function and show you whether the probability is 0 or 1. This collapse is a different thing than classically finding out an outcome that could have been calculated in advance.

mmesser314
BioPhysicist