
From several unrelated sources (such as Scott Aaronson's discussion of hypercomputation or this article about a bound on the number of degrees of freedom of any theory with a positive cosmological constant), I have bumped into the claim that the universe contains a finite amount of information (on the order of $10^{122}$ bits), backed by references to the Bekenstein bound.

I followed the references to Bekenstein's original article. The logic seems to be approximately this:

  • General relativity theorems show that (classically) the total area of black hole event horizons cannot decrease.
  • If we assume that black holes have zero entropy, then we can break the second law of thermodynamics by throwing matter with some entropy into a black hole.
  • Therefore, it is natural to extend the concept of entropy by adding a contribution from black holes, whose entropy should be directly proportional to their surface area.
  • The proportionality constant $\frac{S}{A}$, in Planck units, is $\frac{1}{4}$. (This is the step I am having trouble with.)
  • When a black hole absorbs an object of energy $E$ and radius $R$, its area must (for general-relativity reasons) increase by at least $8 \pi ER$ in Planck units. With $S = A/4$, this increases the black hole's entropy by at least $2 \pi ER$. For the second law not to be violated, the body cannot carry more entropy than this. This gives a bound on $\frac{S}{E}$; plugging in values for the entire observable universe then yields the $10^{122}$ bits (a rough numerical sketch of this last step follows below the list).
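
For concreteness, here is my own rough back-of-the-envelope version of that last step (not taken from any of the papers; I simply take the horizon radius to be roughly the Hubble radius $c/H_0$), just to show where the order of magnitude comes from:

```python
import math

# Rough estimate (own numbers) of the horizon entropy of the observable universe,
# assuming S = A / (4 l_p^2) in units k_B = 1 (nats), with the horizon radius
# approximated by the Hubble radius c / H_0.
c   = 2.998e8     # speed of light, m/s
H0  = 2.2e-18     # Hubble constant (~68 km/s/Mpc) in 1/s
l_p = 1.616e-35   # Planck length, m

R = c / H0                        # horizon radius, ~1.4e26 m
A = 4 * math.pi * R**2            # horizon area, m^2
S_nats = A / (4 * l_p**2)         # entropy in nats
S_bits = S_nats / math.log(2)     # 1 nat = 1/ln(2) bits

print(f"S ~ {S_nats:.1e} nats ~ {S_bits:.1e} bits")   # both of order 1e122
```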

But I am having trouble tracking down where that proportionality constant came from. Bekenstein provides three references for it: Ref. 1, Ref. 3, Ref. 6.

  • In Ref. 3, Bekenstein calculates it from a thought experiment involving dropping a single particle into a black hole. He makes the assumption that the information associated with the particle is at least one bit (as the answer to the question "does the particle exist?"), but he himself considers this a lower bound, with the true value probably not far away, but unspecified.
  • Ref. 6 is also earlier work by Bekenstein; it links to Ref. 3 and makes the additional observation that a higher proportionality constant would forbid particles from being captured by a black hole reversibly (with $\Delta S_g = 0$, where $S_g$ is the generalised entropy, that is, classical entropy plus black hole entropy). This provides some plausibility-based argument for this precise value, but not a derivation from uncontroversial facts.
  • Ref. 1 seems to be the original derivation of Hawking radiation; it's above my skill level and I cannot see whether at any point there is a similar assumption made.

In particular, I could imagine the proportionality constant being bigger. There are many thought experiments considered in these papers and their references that show various processes (which involve dropping various kinds of particles or radiation into a black hole) adhere to the bound, but they would also adhere to the same bound if the proportionality constant were artificially increased tenfold. To disprove this possibility, one would instead want some opposite process, where the black hole's surface area decreases while classical entropy increases. I believe Hawking radiation is of this nature. Is there a better, less plausibility-based assumption in that derivation?

These papers are also rather old; perhaps there have been better treatments since then.

My question is: Does Hawking radiation, or some other thought experiment, provide an upper bound on the entropy of black holes? If so, where in the derivation does one find the upper bound on the amount of information? How is the link between black hole area and bits of information established? Does it rest on a similar assumption of "one particle, one bit"?

Note: The question about the proportionality constant has been asked before here, but I hope you agree that my question is far more focused and would be unlikely to find answers there.

Kotlopou

2 Answers


A reasonably clear explanation is given in General Relativity and its Applications: Black Holes, Compact Stars and Gravitational Waves, by Valeria Ferrari, Leonardo Gualtieri, and Paolo Pani; see Sections 20.3 and 20.4.

In a nutshell:

  1. The fact that the area of a black hole always increases can be compared to the way the entropy of a thermodynamic system always increases. This led to the conjecture that $S$ is proportional to $A$ for a black hole, i.e. $S = \alpha A$ for an unspecified constant $\alpha$: the analogue of the second law of thermodynamics. One can also derive the first law of black hole thermodynamics, relating $dM$ and $dA$.
  2. Bekenstein imagines dropping a massive particle into a black hole. The minimum entropy increase is $\ln 2$, that is, 1 bit of information. This means that $\alpha \approx \ln 2 / dA_{min}$.
  3. By the uncertainty principle there is an uncertainty in the position of the particle, with an associated proper length of the order of its Compton wavelength. From this one can compute the associated $dM_{min}$, by requiring that the proper distance between the horizon and the point at which the particle is released equals the Compton wavelength.
  4. Using the first law, one can now plug in $dM_{min}$ and $dA_{min}$ and find that $\alpha \approx 1 / \hbar$, but the precise numerical factor is still not fixed. One can also compute the associated temperature in terms of $\alpha$.
  5. Hawking performed a QFT calculation in curved spacetime to show that a scalar field coupled to the black hole effectively behaves as if the black hole were emitting radiation as a black body of temperature $T$. From the temperature found, one sees that the numerical value of $\alpha$ is $1/4$ in Planck units (a quick cross-check follows below the list).
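
As a quick cross-check of step 5 (my own sketch, not taken from the book): plugging Hawking's temperature $T_H = \hbar c^3/(8\pi G M k_B)$ into the first law $c^2\,dM = T\,dS$ and integrating gives exactly the area law with coefficient $1/4$:

```python
import math

# Cross-check (own sketch): integrating the first law dS = c^2 dM / T_H(M), with
# T_H = hbar c^3 / (8 pi G M k_B), gives S = 4 pi G k_B M^2 / (hbar c), which
# should equal the area law S = k_B A / (4 l_p^2). Their ratio should be 1.
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8     # speed of light, m/s
hbar  = 1.055e-34   # reduced Planck constant, J s
k_B   = 1.381e-23   # Boltzmann constant, J/K
M_sun = 1.989e30    # solar mass, kg

M = M_sun                                                  # any mass works; the ratio is M-independent

S_first_law = 4 * math.pi * G * k_B * M**2 / (hbar * c)    # from integrating dS = c^2 dM / T_H

R_s  = 2 * G * M / c**2                                    # Schwarzschild radius
A    = 4 * math.pi * R_s**2                                # horizon area
l_p2 = G * hbar / c**3                                     # Planck length squared
S_area_law = k_B * A / (4 * l_p2)                          # Bekenstein-Hawking, coefficient 1/4

print(S_first_law / S_area_law)                            # -> 1.0
```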
Rexcirus

The OP asks: "I am having trouble tracking down where that proportionality constant came from."

You don't need Hawking radiation to make a semi-classical argument for why the proportionality constant is $1/4$ rather than something else.

As @Rexcirus says above, we can conjecture that entropy $\propto$ area. The surface area of a sphere is $4\pi r^2$. If we consider the smallest semi-classical radius, the Planck length, and use natural units so that $L_{Planck}=1$, that area is $4 \pi$; take the corresponding minimal entropy to be $S = 4\pi$ as well. One unit of entropy for one unit of area. Nice.

PS - In a semi-classical analysis, the Planck length plays the role of a minimal length. But don't take my word for it, you can read this.

However, the Schwarzschild radius of a black hole of the smallest semi-classical mass (the Planck mass $M_P$) is $R = 2GM_P/c^2$. In natural units $G=\hbar=c=1$, so $M_P=1$ and hence $R=2$. Still, this Planck-mass black hole must exhibit the minimal semi-classical entropy we just figured out. How? We need a proportionality constant, call it $1/P$. As we can see below, the formula only works when $P=4$, i.e. $1/P=1/4$. Job done.

\begin{equation} S= 4\pi = 4\pi R^2 \frac{1}{P}=\frac{A_H}{4} \end{equation}
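
Here is a small sanity check of that arithmetic (my own snippet, in SI units rather than natural units): the Schwarzschild radius of a Planck-mass black hole is $2\,l_p$, so $A_H = 16\pi l_p^2$ and $A_H/(4 l_p^2) = 4\pi$, matching the minimal entropy above.

```python
import math

# Sanity check (own snippet, SI units): a Planck-mass black hole has
# Schwarzschild radius R = 2 G M_P / c^2 = 2 l_p, so its horizon area is
# A_H = 16 pi l_p^2 and A_H / (4 l_p^2) = 4 pi.
G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8      # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J s

l_p = math.sqrt(G * hbar / c**3)   # Planck length
M_P = math.sqrt(hbar * c / G)      # Planck mass

R   = 2 * G * M_P / c**2           # Schwarzschild radius of a Planck-mass black hole
A_H = 4 * math.pi * R**2           # horizon area

print(R / l_p)                     # -> 2.0, i.e. R = 2 in Planck units
print(A_H / (4 * l_p**2))          # -> 12.566... = 4*pi
```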

PPS. For those who doubt that the Planck mass is the right mass to use in a semi-classical analysis, read this answer, which notes that "the mass at which it happens is about the Planck mass".

Bonus: If you want to know where the $S \sim 10^{122}$ of the Universe comes from, it is the entropy (in nats) of the cosmic event horizon, which you can read about in this wonderful paper by Lineweaver (Eqn. 49). The connection between bits and nats is in my answer here.

Mr Anderson