TODO: explain this diagram which I ripped off from Wikipedia.

Not: what you hope to get from the newspaper. (Although…) Rather: different types of (formally defined) entropy/information and their disambiguation. The seductive power of the logarithm, and concave functions rather like it.

A proven path to publication is to find or reinvent a derived measure based on Shannon information, and apply it to something provocative-sounding. (Qualia! Stock markets! Evolution! Language! The qualia of evolving stock market languages!)

This is purely about the analytic definitions given random variables, not the estimation theory, which is a different problem.

Connected also to functional equations and yes, statistical mechanics, and quantum information physics.

## Shannon Information

“I am given a discrete random process; how many bits of information do I need to reconstruct it?”

Vanilla information, thanks be to Claude Shannon. A thing related to coding of random processes.

Given a random variable \(X\) taking values \(x \in \mathcal{X}\) from some discrete alphabet \(\mathcal{X}\), with probability mass function \(p(x)\).

\[ \begin{array}{ccc} H(X) & := & -\sum_{x \in \mathcal{X}} p(x) \log p(x) \\ & \equiv & E( \log 1/p(x) ) \end{array} \]

More generally, if \(X\) has law \(P\) over some Borel space with reference measure \(\mu\), \[ H(X)=-\int \log {\frac {\mathrm {d} P}{\mathrm {d} \mu }}\,\mathrm{d}P \]
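In the discrete case the definition is a one-liner. A minimal sketch in Python (the function name is mine, not standard):

```python
import math

def shannon_entropy(pmf, base=2):
    """H(X) = -sum_x p(x) log p(x), in bits by default.

    `pmf` maps outcomes to probabilities; zero-probability outcomes
    contribute nothing (0 log 0 := 0 by the usual convention).
    """
    return -sum(p * math.log(p, base) for p in pmf.values() if p > 0)

# A fair coin carries exactly one bit; a uniform 4-letter alphabet, two.
print(shannon_entropy({"heads": 0.5, "tails": 0.5}))  # 1.0
print(shannon_entropy({x: 0.25 for x in "abcd"}))     # 2.0
```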

Over at the functional equations page is a link to Tom Leinster’s clever proof of the optimality of Shannon information via functional equations.

One interesting aspect of the proof is where the difficulty lies. Let \(I:\Delta_n \to \mathbb{R}^+\) be a sequence of continuous functions satisfying the chain rule; we have to show that \(I\) is proportional to \(H\). All the effort and ingenuity goes into showing that \(I\) is proportional to \(H\) when restricted to the uniform distributions. In other words, the hard part is to show that there exists a constant \(c\) such that

\[ I(1/n, \ldots, 1/n) = c H(1/n, \ldots, 1/n) \]

for all \(n \geq 1\).

See Venkatesan Guruswami, Atri Rudra and Madhu Sudan, *Essential Coding Theory*.

## K-L divergence

Because “Kullback-Leibler divergence” is a lot of syllables for something you use so often. Or you could call it the “relative entropy”, if you want to sound fancy and/or mysterious.

KL divergence is defined between the probability mass functions of two discrete random variables \(X, Y\) over the same space, where those probability mass functions are given by \(p(x)\) and \(q(x)\) respectively.

\[ \begin{array}{ccc} D(P \parallel Q) & := & \sum_{x \in \mathcal{X}} p(x) \log \frac{p(x)}{q(x)} \\ & \equiv & E \log \frac{p(x)}{q(x)} \end{array} \]

More generally, if the random variables have laws, respectively \(P\) and \(Q\): \[ {\displaystyle D_{\operatorname {KL} }(P\|Q)=\int _{\operatorname {supp} P}{\frac {\mathrm {d} P}{\mathrm {d} Q}}\log {\frac {\mathrm {d} P}{\mathrm {d} Q}}\,dQ=\int _{\operatorname {supp} P}\log {\frac {\mathrm {d} P}{\mathrm {d} Q}}\,dP,} \]
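A matching sketch for the discrete case, with the usual caveat that \(D(P \parallel Q)\) is infinite wherever \(P\) puts mass that \(Q\) does not (function name mine):

```python
import math

def kl_divergence(p, q, base=2):
    """D(P || Q) = sum_x p(x) log(p(x) / q(x)).

    Infinite when P puts mass somewhere Q does not (absolute continuity
    fails); zero iff the two pmfs coincide.
    """
    total = 0.0
    for x, px in p.items():
        if px == 0:
            continue
        qx = q.get(x, 0.0)
        if qx == 0:
            return math.inf
        total += px * math.log(px / qx, base)
    return total

p = {"a": 0.5, "b": 0.5}
q = {"a": 0.9, "b": 0.1}
# Note the asymmetry: D(P || Q) != D(Q || P) in general.
print(kl_divergence(p, q), kl_divergence(q, p))
```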

## Jensen-Shannon divergence

Symmetrized version. Have never used.
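For the record: the symmetrization takes the midpoint mixture \(M = (P+Q)/2\) and averages the two K-L divergences to it, \(\mathrm{JSD}(P, Q) = \tfrac{1}{2}D(P \parallel M) + \tfrac{1}{2}D(Q \parallel M)\). A minimal sketch (helper names mine):

```python
import math

def kl(p, q, base=2):
    """Discrete K-L divergence, assuming supp(p) is inside supp(q)."""
    return sum(px * math.log(px / q[x], base) for x, px in p.items() if px > 0)

def jensen_shannon(p, q, base=2):
    """JSD(P, Q) = D(P || M)/2 + D(Q || M)/2, with M the midpoint mixture.

    Unlike plain K-L it is symmetric, always finite, and (in bits)
    bounded above by 1.
    """
    m = {x: (p.get(x, 0.0) + q.get(x, 0.0)) / 2 for x in set(p) | set(q)}
    return (kl(p, m, base) + kl(q, m, base)) / 2

# Disjoint supports saturate the bound at exactly one bit.
print(jensen_shannon({"a": 1.0}, {"b": 1.0}))  # 1.0
```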

## Mutual information

The “informativeness” of one variable given another… Most simply, the K-L divergence of the joint distribution of two random variables from the product of their marginal distributions. (So it vanishes exactly when the two variables are independent.)

Now, take \(X\) and \(Y\) with joint probability mass distribution \(p_{XY}(x,y)\) and, for clarity, marginal distributions \(p_X\) and \(p_Y\).

Then the mutual information \(I\) is given

\[ I(X; Y) = H(X) - H(X|Y) \]

Estimating this one has been giving me grief lately, so I’ll be happy when I get to this section and solve it forever. See nonparametric mutual information.

Getting an intuition for what this measure does is handy, so I’ll expound some equivalent definitions that emphasise different characteristics:

\[ \begin{array}{ccc} I(X; Y) & := & \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p_{XY}(x, y) \log \frac{p_{XY}(x,y)}{p_X(x)p_Y(y)} \\ & = & D( p_{XY} \parallel p_X p_Y) \\ & = & E \log \frac{p_{XY}(x,y)}{p_X(x)p_Y(y)} \end{array} \]
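The \(D(p_{XY} \parallel p_X p_Y)\) form translates directly into code. A sketch (function name mine):

```python
import math

def mutual_information(joint, base=2):
    """I(X; Y) from a joint pmf given as {(x, y): p(x, y)}.

    Computed directly as D(p_XY || p_X p_Y), so it is zero exactly
    when the joint factorises into its marginals.
    """
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(
        p * math.log(p / (px[x] * py[y]), base)
        for (x, y), p in joint.items()
        if p > 0
    )

# Perfectly correlated bits share one full bit...
print(mutual_information({(0, 0): 0.5, (1, 1): 0.5}))  # 1.0
# ...and independent bits share none.
print(mutual_information({(a, b): 0.25 for a in (0, 1) for b in (0, 1)}))  # 0.0
```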

More usually we want the conditional mutual information.

\[I(X;Y|Z)=\int _{\mathcal {Z}}D_{\mathrm {KL} }(P_{(X,Y)|Z}\|P_{X|Z}\otimes P_{Y|Z})dP_{Z}\]

See Christopher Olah’s excellent visual explanation.

## Kolmogorov-Sinai entropy

Schreiber says:

If \(I\) is obtained by coarse graining a continuous system \(X\) at resolution \(\epsilon\), the entropy \(H_X(\epsilon)\) and entropy rate \(h_X(\epsilon)\) will depend on the partitioning and in general diverge like \(\log(\epsilon)\) when \(\epsilon \to 0\). However, for the special case of a deterministic dynamical system, \(\lim_{\epsilon\to 0} h_X(\epsilon) = h_{KS}\) may exist and is then called the Kolmogorov-Sinai entropy. (For non-Markov systems, also the limit \(k \to \infty\) needs to be taken.)

That is, it is a special case of the entropy rate of a dynamical system. Cue connection to algorithmic complexity. Also metric entropy?

## Relatives

### Rényi Information

Also, the Hartley measure.

You don’t need to use a logarithm in your information summation. Free energy, something something. (?)

The observation that many of the attractive features of information measures are simply due to the concavity of the logarithm term in the function. So, why not whack another concave function with even more handy features in there? Bam, you are now working on Rényi information. How do you feel?
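Concretely, the Rényi entropy of order \(\alpha \geq 0\) is \(H_\alpha(X) = \frac{1}{1-\alpha} \log \sum_x p(x)^\alpha\); \(\alpha = 0\) recovers the Hartley measure and the \(\alpha \to 1\) limit recovers Shannon. A sketch (function name mine):

```python
import math

def renyi_entropy(pmf, alpha, base=2):
    """H_alpha(X) = log(sum_x p(x)^alpha) / (1 - alpha), alpha >= 0.

    alpha = 0 gives the Hartley measure log|support|; the alpha -> 1
    limit gives Shannon entropy (special-cased here).
    """
    ps = [p for p in pmf.values() if p > 0]
    if alpha == 1:
        return -sum(p * math.log(p, base) for p in ps)
    return math.log(sum(p ** alpha for p in ps), base) / (1 - alpha)

p = {"a": 0.7, "b": 0.2, "c": 0.1}
# H_alpha is non-increasing in alpha.
print([round(renyi_entropy(p, a), 3) for a in (0, 0.5, 1, 2)])
```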

### Tsallis statistics

Attempting to make information measures “non-extensive”: “*q*-entropy”. Seems to have made a big splash in Brazil, but less in other countries. Non-extensive measures are an intriguing idea, though. I wonder if it’s parochialism that keeps everyone off Tsallis statistics, or a lack of demonstrated usefulness?
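For reference, the Tsallis \(q\)-entropy is \(S_q(X) = \frac{1 - \sum_x p(x)^q}{q - 1}\), and “non-extensive” means that for independent \(X\) and \(Y\) it satisfies the pseudo-additivity rule \(S_q(X, Y) = S_q(X) + S_q(Y) + (1-q) S_q(X) S_q(Y)\) rather than plain additivity. A sketch checking that identity numerically (names mine):

```python
def tsallis_entropy(pmf, q):
    """S_q(X) = (1 - sum_x p(x)^q) / (q - 1), for q != 1.

    The q -> 1 limit recovers Shannon entropy (in nats).
    """
    return (1 - sum(p ** q for p in pmf.values() if p > 0)) / (q - 1)

# "Non-extensive": for independent X and Y the q-entropies obey
# pseudo-additivity rather than plain additivity.
q = 2.0
px = {"a": 0.5, "b": 0.5}
py = {"c": 0.9, "d": 0.1}
joint = {(x, y): p1 * p2 for x, p1 in px.items() for y, p2 in py.items()}
sx, sy = tsallis_entropy(px, q), tsallis_entropy(py, q)
lhs = tsallis_entropy(joint, q)
rhs = sx + sy + (1 - q) * sx * sy
print(lhs, rhs)  # the two sides agree
```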

### Fisher information

See maximum likelihood and information criteria.

## Estimating information

Wait, you don’t know the exact parameters of your generating process *a priori*? Then you need to estimate the information from data.
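The naive starting point is the plug-in estimator: empirical frequencies fed into the Shannon formula, which is biased low at small sample sizes. A sketch (function name mine):

```python
import math
import random

def plugin_entropy(samples, base=2):
    """Naive plug-in estimate: empirical frequencies into the Shannon formula.

    Biased low for small samples (the estimate never exceeds the log of
    the number of distinct symbols actually seen), which is exactly why
    the entropy-estimation literature exists.
    """
    counts = {}
    for s in samples:
        counts[s] = counts.get(s, 0) + 1
    n = len(samples)
    return -sum(c / n * math.log(c / n, base) for c in counts.values())

random.seed(1)
# True entropy of a fair 8-sided die is 3 bits; 20 samples undershoot it.
small = [random.randrange(8) for _ in range(20)]
large = [random.randrange(8) for _ in range(20000)]
print(plugin_entropy(small), plugin_entropy(large))
```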

## Connection to statistical mechanics

Informational entropy versus thermodynamics entropy.

Shalizi and Moore (2003):

We consider the question of whether thermodynamic macrostates are objective consequences of dynamics, or subjective reflections of our ignorance of a physical system. We argue that they are both; more specifically, that the set of macrostates forms the unique maximal partition of phase space which 1) is consistent with our observations (a subjective fact about our ability to observe the system) and 2) obeys a Markov process (an objective fact about the system's dynamics). We review the ideas of computational mechanics, an information-theoretic method for finding optimal causal models of stochastic processes, and argue that macrostates coincide with the “causal states” of computational mechanics. Defining a set of macrostates thus consists of an inductive process where we start with a given set of observables, and then refine our partition of phase space until we reach a set of states which predict their own future, i.e. which are Markovian. Macrostates arrived at in this way are provably optimal statistical predictors of the future values of our observables.

John Baez’s *A Characterisation of Entropy* etc. See also David H. Wolpert (2006a).

## To Read

- David Ellerman’s logical entropy stuff, which he has now written up as Ellerman (2017).
- Information loss and entropy
- Shalizi, Note: Information Theory, Axiomatic Foundations, Connections to Statistics
- Feldman, A Brief Introduction to: Information Theory, Excess Entropy and Computational Mechanics
- It Took Me 10 Years to Understand Entropy, Here is What I Learned. | by Aurelien Pelissier | Cantor’s Paradise

## References

*Proceedings of the Second International Symposium on Information Theory*, edited by B. N. Petrov and F. Csáki, 199–213. Budapest: Akademiai Kiado.

*IEEE Transactions on Information Theory* 47: 1701–11.

*The European Physical Journal B - Condensed Matter and Complex Systems* 63 (3): 329–39.

*Entropy* 13 (11): 1945–57.

*Physical Review Letters* 103 (23): 238701.

*Physical Review E* 81 (4): 041907.

*Physica A: Statistical and Theoretical Physics* 302 (1-4): 89–99.

*Neural Computation* 13 (11): 2409–63.

*Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 3*, 4:1230–31. IEEE Computer Society.

*Information and Randomness: An Algorithmic Perspective*. Springer.

*Inference in Hidden Markov Models*. 1st ed. 2005. Corr. 2nd printing 2007 edition. New York; London: Springer.

*Physica D: Nonlinear Phenomena* 120 (1-2): 62–81.

*Physical Review A* 55 (5): 3371.

*IBM Journal of Research and Development*.

*Physical Review A* 84 (1): 012311.

*IEEE Transactions on Information Theory* 14: 462–67.

*Behavioral Science* 7 (2): 137–63.

*The Annals of Probability* 17 (3): 840–65.

*Elements of Information Theory*. Wiley-Interscience.

*Physical Review Letters* 99 (10): 100602.

*Information Theory and Statistics: A Tutorial*. Vol. 1. Foundations and Trends in Communications and Information Theory.

*Stochastic Processes and Their Applications* 62 (1): 139–68.

*Journal of Physics A: Mathematical and General* 36: 631–41.

*Studies in History and Philosophy of Modern Physics* 29 (4): 435–71.

*Studies in History and Philosophy of Modern Physics* 30 (1): 1–40.

*Granger-Causality Graphs for Multivariate Time Series*.

*Journal of Machine Learning Research* 6: 81–127.

*arXiv:1707.04728 [quant-ph]*, May.

*Advances in Complex Systems* 7 (03): 329–55.

*Decision Analysis* 7 (4): 378–403.

*Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence*, 152–61. UAI’01. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.

*Nature Reviews Neuroscience* 11 (2): 127.

*IEEE Transactions on Information Theory* 47 (6): 2443–63.

*Journal of Machine Learning Research*, 277–86.

*arXiv:2004.14941 [cs, stat]*, May.

*Information and Control* 6 (1): 28–48.

*Physics Letters A* 128 (6–7): 369–73.

*Entropy and Information Theory*. New York: Springer-Verlag.

*Journal of Machine Learning Research* 10: 1469.

*The Annals of Statistics* 25 (6): 2451–92.

*Journal of Theoretical Biology* 116 (3): 321–41.

*IEEE Transactions on Automatic Control* 23 (2): 305–12.

*Statistical Physics*. Vol. 3. Brandeis University Summer Institute Lectures in Theoretical Physics.

*American Journal of Physics* 33: 391–98.

*arXiv:1411.4342 [stat]*, November.

*ACM Transactions on Modeling and Computer Simulation* 4 (2): 213–19.

*Bell System Technical Journal* 35 (3): 917–26.

*Uncertainty and Information: Foundations of Generalized Information Theory*. Wiley-IEEE Press.

*International Journal of Computer Mathematics* 2 (1): 157–68.

*Physical Review E* 69: 066138.

*Journal of Artificial Intelligence Research* 35 (1): 557–91.

*The Annals of Mathematical Statistics* 22 (1): 79–86.

*Physica D: Nonlinear Phenomena* 42 (1–3): 12–37.

*Journal of Statistical Physics* 127 (1): 51–106.

*The Annals of Statistics* 36 (5): 2153–82.

*arXiv:1206.1331*, June.

*IEEE Transactions on Information Theory* 52 (10): 4394–4412.

*IEEE Transactions on Information Theory* 37 (1): 145–51.

*The European Physical Journal B - Condensed Matter and Complex Systems* 73 (4): 605–15.

*Physical Review E* 77: 026110.

*arXiv:1507.02803 [math]*, July.

*Philosophy of Science* 67 (2): 177–94.

*NIPS 2014*.

*arXiv:physics/0108025*.

*Entropy* 23 (4): 464.

*Science*, 1988.

*IEEE Transactions on Information Theory* 54 (3): 964–75.

*Neural Computation* 15 (6): 1191–1253.

*Transactions of the American Mathematical Society* 112 (1): 55–66.

*Complexity*, August.

*Scientific American* 194 (2): 77–86.

*Journal of Theoretical Biology* 205: 147–59.

*2011 49th Annual Allerton Conference on Communication, Control, and Computing (Allerton)*, 958–65.

*Foundations and Trends in Communications and Information Theory*, December.

*Information and Complexity in Statistical Modeling*. Information Science and Statistics. New York: Springer.

*Physica D: Nonlinear Phenomena* 125 (3-4): 285–94.

*IEEE Transactions on Information Theory* 56 (3): 1430–35.

*Physical Review Letters* 85 (2): 461–64.

*Neural Computation* 27 (10): 2097–2106.

*Statistical Mechanics: Entropy, Order Parameters, and Complexity*. Oxford University Press, USA.

*Advances in Complex Systems* 05 (01): 91–95.

*Physical Review E* 73 (3).

*The Bell System Technical Journal* 27: 379–423.

*Statistica Sinica* 7: 375–94.

*IEEE Transactions on Information Theory* 44 (6): 2079–93.

*Journal of Political Economy* 93 (3): 599–609.

*Proceedings of the National Academy of Sciences of the United States of America* 102: 18297–302.

*Neural Computation* 18 (8): 1739–89.

*Journal of Theoretical Biology* 252: 185–97.

*Journal of Theoretical Biology* 252: 198–212.

*American Economic Review* 92: 434–59.

*arXiv:1507.02284 [cs, math, stat]*, July.

*Physical Review Letters* 80 (1): 197–200.

*arXiv:1612.06599 [math, stat]*, December.

*Learning in Graphical Models*, 261–97. Cambridge, Mass.: MIT Press.

*arXiv:0712.4382*.

*arXiv:physics/0004057*, April.

*Perception-Action Cycle*, 601–36. Springer.

*IEEE Transactions on Information Theory* 50 (12): 3265–90.

*IEEE Transactions on Information Theory* 56 (7): 3438–54.

*IEEE Transactions on Information Theory* 52 (10): 4617–26.

*IEEE Transactions on Information Theory* 58 (8): 4969–92.

*IEEE Transactions on Information Theory* 59 (3): 1271–87.

*arXiv:comp-gas/9403002*, March.

*Complex Engineered Systems*, 262–90. Understanding Complex Systems. Springer Berlin Heidelberg.

*The Complex Networks of Economic Interactions*, 293–306. Lecture Notes in Economics and Mathematical Systems 567. Springer.

*EPL (Europhysics Letters)* 49: 708.

*arXiv:comp-gas/9403001*, March.

*Advances in Neural Information Processing Systems*.

*The American Statistician* 42 (4): 278–80.

*Neural Computation* 26 (11): 2570–93.
