
One of the explanations of the second law of thermodynamics is that it goes back to the low entropy in the early universe (How do you prove the second law of thermodynamics from statistical mechanics?). My question is: assuming that's true how does it filter down to everyday occurrences where we see glasses getting broken but not becoming whole, cups of coffee cooling off but not heating up by cooling air, and so on?

The equations of Hamiltonian dynamics are time symmetric, so whatever evolution they allow, they also allow its time reverse, yet we do not observe such reversed evolutions. What is the mechanism that converts the low-entropy condition in the distant past into the dominance of entropy-increasing evolutions in our surroundings? Even granting that the overall entropy increases, shouldn't we observe instances where the opposite happens locally from time to time, even on a macroscopic scale? To put it another way, why is the state of local systems today so heavily dependent on the condition of the universe as a whole billions of years ago? Why can't we isolate a macroscopic system from whatever influence the early universe exerts on it, and have it exhibit entropy decrease?

Note: I am aware of another explanation of the second law based on low probability in some sense of entropy decreasing evolutions (Is the second law of thermodynamics a fundamental law, or does it emerge from other laws?), but it seems to have other problems, so I'd like to know if there is an explanation from the early universe point of view.

Conifold

2 Answers


This follows from a weak form of the second law of thermodynamics, which can be rendered:

Consider a thermodynamic system of $N$ particles. Given (1) the ergodic hypothesis, that all microstates of the system are equally likely, and (2) that the system is found to be in a non-equilibrium state (i.e. not in its maximum-likelihood, or maximum-entropy, state) at time $t_0$, then, in the absence of other information, the probability that it will be in a higher-entropy state at some other time $t\neq t_0$ approaches unity as $N\to\infty$, provided $|t-t_0|>\epsilon$ for some $\epsilon>0$.

This particular, weak form can indeed be proven simply from the law of large numbers, which in this context implies the following: as $N\to\infty$, the set of all microstates comprises microstates that look almost exactly like the maximum-likelihood microstate and almost nothing else. We read this statement in the proportional sense: if we plot the maximum-likelihood histogram and the observed one, the proportional difference between the two is very small for almost every microstate. You can ponder the meaning of this more carefully by thinking of the simple binomial distribution. If you take a sample of 10 balls from an infinite population of which $43\%$ are red, then you'll get samples with a wide variation in their proportion of red balls. Mostly there will be 3, 4 or 5 red balls, but there is also a non-negligible probability of a sample with 9 or 10 red balls. So the maximum-likelihood (maximum-entropy) sample, which is 4 red balls, does not describe a random sample very well. But if you sample a million balls, then the probability of getting fewer than 420 000 or more than 440 000 red balls is so small (of the order of $10^{-90}$) that you can neglect it altogether. The maximum-entropy sample of 430 000 red balls is highly representative of almost any random sample.
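The orders of magnitude quoted above can be checked directly with exact binomial arithmetic. Here is a minimal sketch in plain Python (the function names are my own; `math.lgamma` keeps the factorials from overflowing):

```python
import math

def log_binom_pmf(k, n, p):
    """Natural log of the binomial pmf C(n,k) p^k (1-p)^(n-k), via lgamma."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

def log10_outside(n, p, lo, hi):
    """Base-10 log of P(count < lo or count > hi) for Binomial(n, p)."""
    logs = [log_binom_pmf(k, n, p) for k in range(0, lo)]
    logs += [log_binom_pmf(k, n, p) for k in range(hi + 1, n + 1)]
    m = max(logs)  # log-sum-exp trick for numerical stability
    return (m + math.log(sum(math.exp(x - m) for x in logs))) / math.log(10)

# 10 balls: drawing 9 or 10 red is rare but hardly impossible (~3 x 10^-3)
print(log10_outside(10, 0.43, 0, 8))

# a million balls: landing outside 420 000..440 000 red is ~10^-90
print(log10_outside(10**6, 0.43, 420_000, 440_000))
```

The contrast between the two printed exponents (roughly $-2.5$ versus roughly $-90$) is the law-of-large-numbers concentration the answer describes.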

So now, given this state of affairs, any random walk within the state space of $N$ particles, beginning from a state that is significantly different from the maximum-entropy one, will almost certainly increase the entropy. Suppose we happen to draw a million-ball sample from our population with only 410 000 red balls. Then we know that the probability that the next sample will have higher entropy, i.e. be nearer in its proportion of red balls to 0.43, is at least $1-10^{-90}$! We'd also suspect that our initial sample was not random, i.e. did not arise with the ergodic hypothesis holding, and we'd therefore look for some mechanism to explain the sample.
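As a quick Monte Carlo sanity check on that last claim, one can draw a handful of fresh million-ball samples and watch every one of them land within a few standard deviations of 430 000 red, i.e. far closer to the 0.43 proportion than the anomalous 410 000-red sample (the seed and the number of trials here are arbitrary choices of mine):

```python
import random

random.seed(1)  # arbitrary seed, for reproducibility
p, n = 0.43, 1_000_000

# draw five fresh samples; each entry is the red-ball count out of a million
counts = [sum(random.random() < p for _ in range(n)) for _ in range(5)]
print(counts)

# the standard deviation is sqrt(n p (1-p)) ~ 495, so every count sits
# within a few hundred of 430 000 -- nowhere near as low as 410 000
```

A sample like 410 000 red (about 40 standard deviations out) simply never appears in such runs, which is why one would suspect a non-random mechanism behind it.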

For more discussion of these ideas, see my answer to the Physics SE question "Why are the laws of thermodynamics “supreme among the laws of Nature”?"

Selene Routley

What is important here is that the entropy is still very low today. Since the Big Bang the entropy has been increasing, but we remain quite far removed from the heat-death state. So the reason why, at the macroscopic level, time-reversal invariance appears to be broken is simply that the entropy today is low compared to the maximum entropy, and the reason why the entropy today is so low is that it was even lower a long time ago.

If you don't invoke the low initial entropy, then you can't explain why the entropy is low today. The most probable situation to find ourselves in would be a maximum-entropy state just barely consistent with our existence, which leads straight to the Boltzmann Brain paradox.

The reason why the initial entropy was low is a deep problem in physics. My personal opinion is that it cannot be explained in a conventional way, because the very notion of "explanation" assumes that information can be compressed, while the lowest-entropy state contains the least amount of information. So you cannot possibly "explain" some low-entropy state in terms of a more typical high-entropy state.

Count Iblis