10

Maximum entropy is equivalent to minimum information in statistical mechanics and quantum mechanics, and the universe as a whole is tending towards an equilibrium of minimum information/maximum entropy. Part of the canonical picture also seems to be that all the structure formation we see on Earth is the result of the Earth being an open system, a thermodynamic machine so to speak, driven by the energy of the sun, so that the apparent violation of the second law of thermodynamics on Earth is only an illusion that is resolved by including the rest of the universe.

But here is what I asked myself just yesterday:

Is it legitimate to consider the information that is stored in the brain part of the thermodynamic information/negentropy? Or is this a completely different level of information?

The discrepancy between these two aspects of information probably becomes more apparent if I ask the same question about the information that is stored in a computer. A computer is completely deterministic, and hence it seems to me that it is not valid to consider it part of a statistical-mechanical or quantum-mechanical system (at least unless the computer is destroyed, or an electric discharge changes one of its storage flip-flops, or something along those lines).

Or, to take it yet another step further towards absurdity: do the books in the US Library of Congress, or the art shown in the Louvre in Paris, contain any substantial fraction of thermodynamic negentropy? Or isn't it rather irrelevant thermodynamically whether a page of a book or a painting contains a black or green or red spot of paint that is part of a letter "a" or the eye of the Mona Lisa, or contains no paint whatsoever?

Of course, I know that we can define Shannon information for computer memory straightforwardly, just as we can define the entropy of the brain as an electrochemical-thermodynamic system. But my question is: are these concepts actually the same, or is one included in the other, or are they completely disjoint, complementary, or otherwise unrelated?

Edit: this question (thanks to Rococo) already has several answers, each of which I find pretty enlightening. However, since the bounty is now already on the table, I encourage everybody to provide their own point of view, or even statements conflicting with the ones in the linked question.

Qmechanic
  • 220,844
oliver
  • 7,624

3 Answers

2

in short:

Yes, information and negentropy measure the same thing; they can be directly compared and differ only by a constant scale factor.

But this introduces a problem when talking about the valuable information in a book, brain, or computer, because that valuable information is overwhelmed by the numerically gargantuan entropy of the arrangement of the mass that stores it. This problem, though, is often easily solved by phrasing information questions so that they select the information of interest. In a computer, for example, where this separation is more obvious, it's important to differentiate between the entropy of a transistor (a crystalline structure with a very high information content) and the information contained in its logical state (a much, much smaller amount, but usually a more interesting one).
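
To put rough numbers on that separation (a back-of-the-envelope estimate of my own, not a figure from Brillouin): the constant scale factor is $k_B\ln 2$ per bit, so one bit of logical state corresponds to about $10^{-23}\ \mathrm{J/K}$ of entropy, while the thermodynamic entropy of even one gram of the silicon that stores it (taking a standard molar entropy of roughly $19\ \mathrm{J\,mol^{-1}\,K^{-1}}$) is

$$
S_{1\,\mathrm{g\;Si}} \approx \frac{1\ \mathrm{g}}{28\ \mathrm{g/mol}}\times 19\ \mathrm{\frac{J}{mol\,K}}
\approx 0.7\ \mathrm{J/K}
\;\approx\; \frac{0.7\ \mathrm{J/K}}{k_B\ln 2} \approx 7\times 10^{22}\ \mathrm{bits},
$$

some twenty orders of magnitude more than the chip's logical state.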

Therefore, the question ends up being only this: do we understand the system well enough to determine what information we are interested in? Once we know that, it's usually possible to estimate it.

are negentropy and information measuring the same thing?:

Yes. This is spelled out very clearly by many people, including this answer on PSE, but I'll go with an old book by Brillouin, Science and Information Theory (i.e., this is the Brillouin of Brillouin zones, etc., and also the person who coined the term "negentropy").

The essential point is to show that any observation or experiment made on a physical system automatically results in an increase of the entropy of the laboratory. It is then possible to compare the loss of negentropy (increase of entropy) with the amount of information obtained. The efficiency of an experiment can be defined as the ratio of information obtained to the associated increase in entropy.
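
In symbols (my own rendering of Brillouin's negentropy principle of information, not a quotation from the book): if an experiment yields $\Delta I$ bits of information while increasing the entropy of the laboratory by $\Delta S_{\text{lab}}$, then

$$
\Delta S_{\text{lab}} \;\ge\; k_B \ln 2\,\Delta I ,
\qquad
\eta \;\equiv\; \frac{k_B \ln 2\,\Delta I}{\Delta S_{\text{lab}}} \;\le\; 1 ,
$$

so the efficiency defined in the quoted passage can at best reach unity.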

information vs valuable information:

Brillouin also distinguishes between "information" and "valuable information", and says that a priori there's no mathematical way to distinguish these, although in certain cases we can define what we consider to be the valuable information, and in those cases we can calculate it.

We completely ignore the human value of the information. A selection of 100 letters is given a certain information value, and we do not investigate whether it makes sense in English, and, if so, whether the "meaning" of the sentence is of any practical importance. According to our definition, a set of 100 letters selected at random (according to the rules of Table 1.1), a sentence of 100 letters from a newspaper, a piece of Shakespeare or a theorem of Einstein are given exactly the same information value. In other words, we define “information” as distinct from “knowledge,” for which we have no numerical measure. We make no distinction between useful and useless information, and we choose to ignore completely the value of the information. Our statistical definition of information is based only on scarcity. If a situation is scarce, it contains information. Whether this information is valuable or worthless does not concern us. The idea of “value” refers to the possible use by a living observer.

So then, of course, to address the information in the Principia, the question is how to separate information from valuable information, and to note that a similar book with the same letters in a specific but random sequence would have the same information but different valuable information.
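
A minimal sketch of that accounting, under the simplest of Brillouin's assumptions (a 27-symbol alphabet of letters plus space; the texts and the symbol set are my own choices, not his Table 1.1): a random 100-letter string and 100 letters of Shakespeare get essentially the same information value, because only the scarcity of the symbol sequence is counted.

```python
import math
import random
import string
from collections import Counter

ALPHABET = string.ascii_lowercase + " "   # 26 letters + space: the simplest symbol set

def info_equiprobable(n_symbols, alphabet_size=len(ALPHABET)):
    # Information in bits if every symbol is equally likely: n * log2(alphabet size).
    return n_symbols * math.log2(alphabet_size)

def info_single_letter_stats(text):
    # Information in bits using only the single-letter frequencies of the text itself;
    # spelling, grammar, and meaning are deliberately ignored.
    counts = Counter(text)
    n = sum(counts.values())
    return -sum(c * math.log2(c / n) for c in counts.values())

random_text = "".join(random.choices(ALPHABET, k=100))
shakespeare = ("to be or not to be that is the question whether tis nobler in "
               "the mind to suffer the slings and arrows")[:100]

for label, text in [("random", random_text), ("Shakespeare", shakespeare)]:
    print(f"{label:12s} equiprobable: {info_equiprobable(len(text)):5.1f} bits   "
          f"single-letter stats: {info_single_letter_stats(text):5.1f} bits")
```

Both come out near 475 bits under the equiprobable model; using single-letter statistics lowers the English passage somewhat, but nothing in this accounting distinguishes Shakespeare from noise.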

In his book, Brillouin provides many ordinary examples, but also computes some broader and more interesting ones that are closely related to subtopics of this question. Instead of a picture (as the OP's question suggests), Brillouin constructs a way to quantify the information of a schematic diagram. Instead of a physics text (as the OP's question suggests), he analyses a physical law (in this case, the ideal gas law) and calculates its information content as well. It's no surprise that these information values are swamped by the non-valuable information in the physical material in which they are embodied.

a specific case, the brain:

Of the three topics brought up by the question, the most interesting one to me is the brain. Here, asking what the information in the brain is creates a similar ambiguity as for a computer: "are you talking about the crystalline transistors, or are you talking about their voltage state?" In the brain it is more complex for various reasons, but the most difficult one to sort out seems to be that there is no clear distinction between structure, state, and valuable information.

One case where it's clear how to sort this out is the information in spikes within neurons. Without giving a full summary of neuroscience, I'll say that neurons can transmit information via voltages that appear across their membranes; these voltages can fluctuate in a continuous way or occur as discrete events called "spikes". Spikes are the easiest to quantify in terms of information. At least for afferent stimulus-encoding neurons, where people can make a reasonable guess about what stimulus they are encoding, it's often possible to quantify bits/spike, and the result is usually 0.1 to 6 bits/spike, depending on the neuron (though there's obviously some pre-selection of neurons going on here). There is an excellent book on this topic, Spikes by Fred Rieke et al., although a lot of work has been done since its publication.

That is, given a model of 1) what's being encoded (e.g., aspects of the stimuli), and 2) the physical mechanism for encoding that information (e.g., spikes), it's fairly easy to quantify the information.
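
As a toy illustration of that program (entirely made up by me, not a calculation from Spikes): take a binary stimulus, let the spike count in a short window be Poisson with a stimulus-dependent rate, and estimate the mutual information between stimulus and spike count directly from the samples.

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder: a binary stimulus drives a neuron whose spike count
# in a short time window is Poisson with a stimulus-dependent mean rate.
n_trials = 200_000
stimulus = rng.integers(0, 2, size=n_trials)     # 0 or 1, equally likely
rates = np.where(stimulus == 1, 3.0, 0.5)        # mean spikes per window
spikes = rng.poisson(rates)

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    n = len(x)
    p_xy = Counter(zip(x, y))
    p_x, p_y = Counter(x), Counter(y)
    return sum(c / n * np.log2((c / n) / ((p_x[a] / n) * (p_y[b] / n)))
               for (a, b), c in p_xy.items())

bits_per_window = mutual_information(stimulus.tolist(), spikes.tolist())
print(f"{bits_per_window:.2f} bits/window, "
      f"{bits_per_window / spikes.mean():.2f} bits/spike")
```

With these made-up rates the estimate comes out to a few tenths of a bit per spike, i.e. in the ballpark of the experimental numbers quoted above; all the realism lives in the stimulus and encoding model one plugs in.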

Using a similar program it's possible to quantify the information stored in a synapse, and in continuous voltage variations, although there's less work on these topics. To find it, search for things like "Shannon information synapse", etc. It seems to me not hard to imagine a program that continues along this path and, if it were scaled to a large enough size, could eventually estimate the information in the brain from these processes. But this will only work for the processes that we understand well enough to ask the questions that get at the information we are interested in.

tom10
  • 2,877
1

There are many different ways to define the information of the books in a library, depending on whether one views them as physical paper objects, as collections of letters/words, or as something else.

If we focus on the information stored in the text, i.e., the specific arrangement of letters and words, then it can be viewed on two levels:

  • First of all, the arrangement of letters and words is not random, i.e., it is clearly not in a state of maximum entropy. As an amusing manifestation of this, one could compare the frequency of letters used in a text written in English with that in a text written in another language using the same (or at least an overlapping) alphabet: the frequencies are manifestly different, and in fact often serve as the first step in text-recognition algorithms (a minimal sketch of such a comparison is given after this list). Obviously, real text is far more complex than the statistics of letters and words, since the letters and words are correlated among themselves via complex rules (Shannon's paper presents an early discussion of such an analysis, but there are more complex rules studied in mathematical linguistics under terms such as transformational grammar, generative grammar, etc.)
  • There is, however, even more to texts than the statistics of symbols: their correlation with other statistical distributions, e.g., with the words, sentences and other data stored in the head of a person reading the book. Clearly, a monolingual Spanish speaker gets more out of a book written in this language than a monolingual English speaker, even though the latter still gets more than a monolingual speaker of Chinese. In information science such correlations are measured by mutual information, i.e., the information measured with respect to a specific background/context.
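
As a minimal sketch of the letter-frequency comparison mentioned in the first bullet (the texts and the wording are my own illustrative choices): even two short samples of English and Spanish give visibly different frequency tables, which is the kind of signature a language-identification step exploits.

```python
from collections import Counter
import string

def letter_frequencies(text):
    """Relative frequencies of a-z letters; everything else is ignored."""
    counts = Counter(c for c in text.lower() if c in string.ascii_lowercase)
    total = sum(counts.values())
    return {ch: counts[ch] / total for ch in string.ascii_lowercase}

# Tiny illustrative samples; a real comparison would use whole books.
english = ("it was the best of times it was the worst of times "
           "it was the age of wisdom it was the age of foolishness")
spanish = ("era el mejor de los tiempos era el peor de los tiempos "
           "era la edad de la sabiduria era la edad de la insensatez")

for label, text in [("English", english), ("Spanish", spanish)]:
    freqs = letter_frequencies(text)
    top = sorted(freqs, key=freqs.get, reverse=True)[:5]
    print(label, "top letters:", ", ".join(f"{ch} {freqs[ch]:.2f}" for ch in top))
```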

From such a textual point of view, the loss of information due to a spot or a missing page in a book depends on how much it distorts the statistics of the symbols in the book and the correlations between them. Sometimes a blacked-out word can easily be guessed from the context, while in other cases it may render a whole phrase or paragraph incomprehensible.

Urb
  • 2,724
Roger V.
  • 68,984
1

Is it legitimate to consider the information stored in the brain as part of thermodynamic entropy?

Insofar as the brain itself is physical, we can.

However, the mind itself is not understood in this way; for example, it's not even clear what consciousness is, nor is it clear what could be meant by information stored in the mind. Hence, it's better to steer away from thinking of the mind in terms of physical concepts.

Mozibur Ullah
  • 14,713