I understand that we can prove that for any process that occurs in an isolated and closed system it must hold that
$$\Delta S\geq0$$
via Clausius' theorem. My question is, how can I prove this in a mathematical way?
In the context of quantum mechanics, the entropy of a system whose initial state is given by a density matrix $\rho(0)$ is the so-called von Neumann entropy: $$ S_\mathrm{vn}(\rho) = -k\,\mathrm{tr}(\rho\ln\rho) $$

For an isolated system, quantum mechanical time evolution is unitary: for each time $t$, there is a unitary operator $U(t)$ such that the state of the system at time $t$ is given by $$ \rho(t) = U(t)\, \rho(0)\, U^\dagger (t) $$

It can be shown that the von Neumann entropy is invariant under unitary similarity transformations of $\rho$; in other words, $$ S_\mathrm{vn}(U\rho U^\dagger) = S_\mathrm{vn}(\rho), $$ and it immediately follows that $$ S_\mathrm{vn}(\rho(0)) = S_\mathrm{vn}(\rho(t)) $$

In other words, the entropy of an isolated quantum system does not change with time, which is consistent with the second law of thermodynamics.
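The unitary invariance claimed above is easy to verify numerically. Here is a minimal numpy sketch (my own illustration, not part of the argument; $k$ is set to 1, and a random unitary obtained from a QR decomposition stands in for $U(t)$):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho ln rho) with k = 1, computed from the
    eigenvalues of rho (zero eigenvalues contribute nothing)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return -np.sum(evals * np.log(evals))

# a mixed two-level state rho(0)
rho0 = np.diag([0.7, 0.3])

# a random unitary standing in for the time-evolution operator U(t)
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

# rho(t) = U rho(0) U^dagger has exactly the same spectrum, hence the same entropy
rho_t = U @ rho0 @ U.conj().T
print(np.isclose(von_neumann_entropy(rho0), von_neumann_entropy(rho_t)))  # True
```

Since the entropy depends only on the eigenvalues of $\rho$, and a unitary conjugation leaves the spectrum untouched, the comparison can never come out False.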
Author's Admission. I have always been somewhat bothered by the argument I just gave you, not because I think it's incorrect, but rather because, in light of the conclusion we draw from it regarding isolated systems, why don't people make the stronger statement $dS=0$ for isolated systems, as opposed to $dS\geq 0$? It's not that these are inconsistent statements; one is just stronger than the other, so I would think one should simply assert the stronger one in the context of isolated systems.
Addendum. In response to my "admission," I should note that there is a cute argument I have seen for the non-negativity of a change in total (von-Neumann) entropy of an isolated system provided one defines total entropy properly. Here it is.
Suppose that we have an isolated system, let's call it the universe, described by a Hilbert space $\mathcal H$. Suppose that this system can be divided into two subsystems $a$ and $b$ so that the combined Hilbert space can be written $\mathcal H = \mathcal H_a\otimes\mathcal H_b$. If the density matrix of the universe is $\rho$, then the density matrices of the subsystems $a$ and $b$ are defined as partial traces of $\rho$ over the complementary factor; $$ \rho_a = \mathrm{tr}_{\mathcal H_b}\rho, \qquad \rho_b = \mathrm{tr}_{\mathcal H_a}\rho $$ Now we can prove the following:
If systems $a$ and $b$ are initially uncorrelated, then the total entropy $S(\rho_a) + S(\rho_b)$ will never be lower than at the initial time.
Proof. If the systems are initially uncorrelated, then by definition the total density operator at the initial time is a tensor product $\rho(0) = \rho_a^0\otimes \rho_b^0$. Taking partial traces and using the fact that each factor has unit trace shows that the density matrices of the subsystems $a$ and $b$ at the initial time are $$ \rho_a(0) = \rho_a^0, \qquad \rho_b(0) = \rho_b^0 $$

Now, at any later time, the total density matrix evolves unitarily, so that $$ S(\rho(0)) = S(\rho(t)) $$ On the other hand, entropy is subadditive, which means that $$ S(\rho(t)) \leq S(\rho_a(t))+S(\rho_b(t)), $$ and it is additive for uncorrelated systems, which gives $$ S(\rho(0)) = S(\rho_a(0)) + S(\rho_b(0)) $$

Putting this all together yields $$ S(\rho_a(0)) + S(\rho_b(0)) \leq S(\rho_a(t))+S(\rho_b(t)) $$
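This inequality can likewise be checked numerically. The sketch below (my own illustration, not from the lecture notes) prepares an uncorrelated product state of two qubits, evolves it with a random joint unitary, and compares the total entropy $S(\rho_a)+S(\rho_b)$ before and after:

```python
import numpy as np

def S(rho):
    """Von Neumann entropy with k = 1, from the eigenvalues of rho."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return -np.sum(ev * np.log(ev))

def partial_trace(rho, dims, keep):
    """Trace out one of two subsystems. dims = (da, db); keep = 0
    returns rho_a (tracing out b), keep = 1 returns rho_b."""
    da, db = dims
    r = rho.reshape(da, db, da, db)  # indices (a, b, a', b')
    if keep == 0:
        return np.trace(r, axis1=1, axis2=3)
    return np.trace(r, axis1=0, axis2=2)

# uncorrelated initial state: tensor product of two mixed qubits
rho_a0 = np.diag([0.6, 0.4])
rho_b0 = np.diag([0.9, 0.1])
rho0 = np.kron(rho_a0, rho_b0)

# a random joint unitary evolving the composite system (correlates a and b)
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
rho_t = U @ rho0 @ U.conj().T

S_total_0 = S(rho_a0) + S(rho_b0)
S_total_t = S(partial_trace(rho_t, (2, 2), 0)) + S(partial_trace(rho_t, (2, 2), 1))
print(S_total_t >= S_total_0 - 1e-9)  # True: total entropy did not decrease
```

Whatever unitary is drawn, the comparison can never fail: unitarity fixes $S(\rho(t)) = S(\rho(0))$, and subadditivity bounds that from above by $S(\rho_a(t))+S(\rho_b(t))$.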
I've always been somewhat unsatisfied with this argument, however, because (i) it assumes that the subsystems are originally uncorrelated and (ii) it's not clear to me that the definition of total entropy as the sum of the entropies of the reduced density operators of the subsystems is what we should be calling $S$ when we write $\Delta S \geq 0$.
By the way, this argument was stolen from lectures I took: Eric D'Hoker's quantum lecture notes.
Here's an enlightening special case: Take $n$ bodies with temperatures $T_1,\ldots, T_n$ and equal, temperature-independent heat capacities, and bring them together until they reach a final temperature $T$. The first law of thermodynamics tells you that $T$ is the arithmetic mean of the $T_i$. The second law of thermodynamics tells you that the change in entropy is proportional to $n\log(T/G)$, where $G$ is the geometric mean of the $T_i$. It's a standard theorem in pure mathematics (the AM-GM inequality) that $T\geq G$, with equality only when all the $T_i$ are equal, whence the change in entropy must be positive whenever the initial temperatures differ.
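Here is a quick numerical check of this special case (my addition, taking unit heat capacities):

```python
import numpy as np

# n bodies with equal, temperature-independent heat capacity (set to 1 here)
T_i = np.array([250.0, 300.0, 350.0, 400.0])   # initial temperatures

T = T_i.mean()                   # first law: final temperature is the arithmetic mean
G = np.exp(np.log(T_i).mean())   # geometric mean of the initial temperatures

# each body's entropy change is ln(T / T_i); summing gives n ln(T / G)
dS = np.sum(np.log(T / T_i))

print(np.isclose(dS, len(T_i) * np.log(T / G)))  # True
print(dS > 0)                                    # True: AM > GM for unequal T_i
```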
The Clausius theorem says that when the system undergoes a general cyclic process during which it is connected to a reservoir of (possibly varying) temperature $T_r$, the integral $$ C = \int_{t_1}^{t_2} \frac{{Q}'(t)}{T_r(t)}\,dt \leq 0, $$ where ${Q}'(t)$ is the derivative of the heat accepted by the system up to time $t$ ($t$ is just a real number that indexes the states as they occur in the irreversible process; it does not have to be the actual time). The second part of the Clausius theorem asserts that if the whole cyclic process is reversible, the integral equals zero.
Now assume our isolated system undergoes some irreversible process $A\xrightarrow{\text{(irrev.)}}B$. The system may undergo such a change as a result of a change in the imposed internal constraints, such as the removal of a wall separating two partitions of a vessel filled with gas at different pressures. The system is then thermally connected to a heat reservoir and allowed to undergo the reversible process $B\xrightarrow{\text{(rev.)}}A$.
During the irreversible process $A\rightarrow B$, the system is isolated, so no heat is transferred and the corresponding contribution to $C$ vanishes.
During the reversible process $B\rightarrow A$, heat may in general be transferred. The integral $C$ is thus
$$ C= \int_{B,\gamma_{\text{rev.}}}^A \frac{Q'_{\text{rev.}}(s)}{T(s)}\,ds, $$ where $s$ parameterizes the reversible trajectory $\gamma_\text{rev.}$ in the space of equilibrium states and $Q'_{\text{rev.}}(s)$ is the derivative of the heat $Q_{\text{rev.}}(s)$ accepted by the system by the point $s$ along the trajectory.
The change in entropy of the system as it goes from the equilibrium state $A$ to the equilibrium state $B$ is defined as
$$ \Delta S = \int_{A,\gamma_{\text{rev.}}}^B \frac{Q'_{\text{rev.}}(s)}{T(s)}\,ds, $$
which is the same integral as $C$, only with the opposite sign since the limits are reversed. Since $C\leq 0$,
$$ \Delta S = -C \geq 0, $$
QED.
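To make the construction concrete, here is a numerical instance (my addition, not part of the proof): the free expansion of an ideal gas after a wall is removed, with the reversible return leg taken as an isothermal compression.

```python
import numpy as np

# Irreversible leg A -> B: free expansion of an ideal gas from V_A to V_B.
# The system is isolated, so no heat flows and this leg contributes
# nothing to the Clausius integral C.
R = 8.314        # gas constant, J/(mol K)
n, T = 1.0, 300.0
V_A, V_B = 1.0, 2.0

# Reversible leg B -> A: isothermal compression at temperature T, during
# which the gas rejects heat to the reservoir (Q_rev < 0).
Q_rev = n * R * T * np.log(V_A / V_B)

C = Q_rev / T    # Clausius integral over the whole cycle
dS = -C          # entropy change of the irreversible step A -> B

print(C <= 0)                                        # True
print(np.isclose(dS, n * R * np.log(V_B / V_A)))     # True: the textbook result
```

The signs work out exactly as in the proof: the cycle's Clausius integral is negative, and its negation recovers the familiar $\Delta S = nR\ln(V_B/V_A) > 0$ for free expansion.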