Algorithmic statistics

Probably also algorithmic information theory

July 25, 2014 — April 15, 2024

compsci
information
pseudorandomness
statistics
statmech
stringology

The intersection between probability, ignorance, and algorithms, butting up against computational complexity, coding theory, dynamical systems, ergodic theory, and minimum description length. Random number generation relates here, too.

When is the relation between things sufficiently jointly unstructured that we may treat them as random? Stochastic approximations to deterministic algorithms. Kolmogorov complexity. Compressibility, Shannon information. Sideswipe at deterministic chaos. Chaotic systems treated as if stochastic. (Are “real” systems not precisely that?) Statistical mechanics and ergodicity.
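
Kolmogorov complexity itself is uncomputable, but any off-the-shelf compressor gives a crude, computable upper bound on description length, which is enough to make the compressibility point concrete. A minimal sketch (my own illustration; zlib and the test strings are arbitrary choices):

```python
import os
import zlib

def compress_ratio(data: bytes) -> float:
    """Compressed length relative to original length: a rough, computable
    stand-in for an upper bound on description length per byte."""
    return len(zlib.compress(data, 9)) / len(data)

n = 1 << 16  # 64 KiB
structured = b"0123456789" * (n // 10)  # a short program: "repeat this 10-byte block"
noise = os.urandom(n)                   # no description much shorter than the data itself

print("structured:", round(compress_ratio(structured), 4))  # tiny: highly compressible
print("noise     :", round(compress_ratio(noise), 4))       # ≈ 1 (a hair over, from codec overhead)
```

Compression surrogates like this are the engine behind normalized compression distance and friends; here the point is just the asymmetry between structure and noise.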

I saw a provocative talk by Daniela Andrés on, nominally, Parkinson’s disease. The grabbing part was her discussion of the care and feeding of neural “codewords” and the information theory of the brain, which she conducted in the (to me) foreign language of “algorithmic statistics” and “Kolmogorov structure functions.” I have no idea what she meant. This is a placeholder to remind me to come back and see whether it is as useful as it sounded like it might be.
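
For future-me, the definition as I understand it from Vereshchagin and Vitányi (2004): the Kolmogorov structure function of a string \(x\) is

\[
h_x(\alpha) = \min_{S}\left\{ \log_2 |S| \;:\; x \in S,\ K(S) \le \alpha \right\},
\]

where \(S\) ranges over finite sets of strings (“models”) containing \(x\) and \(K(S)\) is the Kolmogorov complexity of a description of \(S\). It records the best (log) model size attainable within a complexity budget \(\alpha\); the smallest \(\alpha\) at which \(h_x(\alpha) + \alpha\) drops to about \(K(x)\) separates the “structure” in \(x\) from the part that is, algorithmically speaking, noise.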

To consider: relationship between underlying event space and measures we construct on such spaces. How much topology is lost by laundering our events through pullback of a (e.g. probability) measure?

Chazelle:

The discrepancy method has produced the most fruitful line of attack on a pivotal computer science question: What is the computational power of random bits? It has also played a major role in recent developments in complexity theory. This book tells the story of the discrepancy method in a few succinct independent vignettes. The chapters explore such topics as communication complexity, pseudo-randomness, rapidly mixing Markov chains, points on a sphere, derandomization, convex hulls and Voronoi diagrams, linear programming, geometric sampling and VC-dimension theory, minimum spanning trees, circuit complexity, and multidimensional searching. The mathematical treatment is thorough and self-contained, with minimal prerequisites. More information can be found on the book’s home page.
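
For a concrete sense of what “discrepancy” measures: in one dimension the star discrepancy of a point set has a closed form for sorted points (Kuipers and Niederreiter 2012), and a deterministic low-discrepancy construction such as the van der Corput sequence beats i.i.d. uniform sampling, which is the flavour of trade-off the discrepancy method exploits. A toy comparison (my own sketch, not from the book):

```python
import random

def van_der_corput(n, base=2):
    """n-th term of the van der Corput sequence: the digits of n reflected
    about the radix point."""
    q, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

def star_discrepancy_1d(points):
    """Exact 1-D star discrepancy D*_N, via the classical formula for sorted points:
    D*_N = 1/(2N) + max_i |x_(i) - (2i - 1)/(2N)|."""
    xs = sorted(points)
    N = len(xs)
    return 1 / (2 * N) + max(abs(x - (2 * i - 1) / (2 * N)) for i, x in enumerate(xs, start=1))

N = 1024
low_discrepancy = [van_der_corput(i) for i in range(1, N + 1)]
iid_uniform = [random.random() for _ in range(N)]
print("van der Corput:", round(star_discrepancy_1d(low_discrepancy), 5))  # O(log N / N), about 1e-3
print("i.i.d. uniform:", round(star_discrepancy_1d(iid_uniform), 5))      # O(N^(-1/2)), typically a few 1e-2
```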

Cosma Shalizi’s upcoming textbook has the world’s pithiest summary:

In fact, what we really have to assume is that the relationships between the causes omitted from the DAG and those included are so intricate and convoluted that they might as well be noise, along the lines of algorithmic information theory (Li and Vitányi 2009), whose key result might be summed up as “Any determinism distinguishable from randomness is insufficiently complex.”

Cosma’s Algorithmic Information Theory notebook is also pretty good and extremely pithy.
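
A toy version of that slogan: bits read off a chaotic orbit come from a tiny deterministic program, so their Kolmogorov complexity is low, yet a generic compressor and a bit-frequency check treat them just like coin flips. A sketch with my own arbitrary choice of map (logistic, \(r=4\)), seed and threshold:

```python
import os
import zlib

def logistic_bytes(n_bytes, x=0.612345, burn=1000):
    """Binarize an orbit of the logistic map x -> 4x(1-x) at threshold 1/2,
    packing eight successive symbols into each byte."""
    for _ in range(burn):
        x = 4.0 * x * (1.0 - x)
    out = bytearray()
    for _ in range(n_bytes):
        byte = 0
        for _ in range(8):
            x = 4.0 * x * (1.0 - x)
            byte = (byte << 1) | (1 if x >= 0.5 else 0)
        out.append(byte)
    return bytes(out)

def crude_randomness_tests(data):
    """Two blunt instruments: zlib compression ratio and the fraction of 1 bits."""
    ones = sum(bin(b).count("1") for b in data)
    return len(zlib.compress(data, 9)) / len(data), ones / (8 * len(data))

n = 1 << 16
print("logistic map:", crude_randomness_tests(logistic_bytes(n)))  # ratio ≈ 1, ones ≈ 0.5
print("os.urandom  :", crude_randomness_tests(os.urandom(n)))      # ratio ≈ 1, ones ≈ 0.5
```

The compressor’s verdict says more about the compressor than about the data: the true description length of the logistic bytes is roughly the program above plus a seed, but no generic statistic in zlib’s repertoire can exploit that.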

Here, a John Baez talk on foundational issues.

1 Empirical estimation of computation


As far as I can tell, the main research thrust of the Max Planck Institute at Leipzig is to solve a kind of inverse problem: inspecting inferred probabilities for evidence of computation. Is that… sane? Why would one wish to do that?

Consider the question “could a neuroscientist even understand a microprocessor?” (Jonas and Kording 2017).

2 Information-based complexity theory

Is information-based complexity a specialty within this field?

IBC website:

Information-based complexity (IBC) is the branch of computational complexity that studies problems for which the information is partial, contaminated, and priced.

To motivate these assumptions about information consider the problem of the numerical computation of an integral. Here, the integrands consist of functions defined over the d-dimensional unit cube. Since a digital computer can store only a finite set of numbers, these functions must be replaced by such finite sets (by, for example, evaluating the functions at a finite number of points). Therefore, we have only partial information about the functions. Furthermore, the function values may be contaminated by round-off error. Finally, evaluating the functions can be expensive, and so computing these values has a price.
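
To make “partial, contaminated, and priced” concrete in that integration example: we only see the integrand at finitely many points (partial), each value carries noise (contaminated), and the number of evaluations is the cost (priced). A toy Monte Carlo sketch, with an integrand and noise model I made up for illustration:

```python
import math
import random

def noisy_eval(f, x, sigma=1e-3):
    """Contaminated information: each function evaluation carries observation noise."""
    return f(x) + random.gauss(0.0, sigma)

def mc_integrate(f, d, n, sigma=1e-3):
    """Estimate the integral of f over [0,1]^d from n noisy point evaluations;
    n is the price paid, and those n values are all we ever learn about f."""
    total = 0.0
    for _ in range(n):
        x = [random.random() for _ in range(d)]
        total += noisy_eval(f, x, sigma)
    return total / n

# Integrand with a known answer: the integral of prod_i sin(pi x_i) over [0,1]^d is (2/pi)^d.
d = 5
f = lambda x: math.prod(math.sin(math.pi * xi) for xi in x)
truth = (2.0 / math.pi) ** d
for n in (10**2, 10**3, 10**4, 10**5):
    est = mc_integrate(f, d, n)
    print(f"n = {n:>6}   estimate = {est:.5f}   error = {abs(est - truth):.5f}")
```

The error shrinks like \(n^{-1/2}\) regardless of how smooth \(f\) is; IBC-style results ask when smarter use of the same priced information (quasi-Monte Carlo nodes, say) can do better.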

  • The RJLipton post on IBC, which links it to complexity theory (a toy sensitivity computation follows the excerpt):

    Now as ordinary complexity theorists, our first instinct would be to define properties intrinsic to the function \(\{f\}\) and try to prove they cause high complexity for any algorithm. Making a continuous analogy to concepts in discrete Boolean complexity, drawing on papers like this by Noam Nisan and Mario Szegedy, we would try to tailor an effective measure of “sensitivity.” We would talk about functions \(\{f\}\) that resemble the \(\{n\}\)-ary parity function in respect of sensitivity but don’t have a simple known integral. Notions of \(\{f\}\) being “isotropic” could cut both ways—they could make the sensitivity pervasive but could enable a good global estimate of the integral.

    IBC, however, focuses on properties of algorithms and restrictions on the kind of inputs they are given. Parlett’s general objection is that doing so begs the question of a proper complexity theory and reverts to the standard—and hallowed enough—domain of ordinary numerical analysis of algorithms.
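
To pin down the “sensitivity” being borrowed from Boolean complexity in that excerpt: the sensitivity of \(f\) at \(x\) is the number of coordinates whose flip changes \(f(x)\), maximized over \(x\), and parity is the extreme case. A small brute-force check (illustrative only):

```python
from itertools import product

def sensitivity(f, n):
    """Max over inputs x in {0,1}^n of the number of coordinates i such that
    flipping x_i changes f(x)."""
    best = 0
    for x in product((0, 1), repeat=n):
        flips = sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]) for i in range(n))
        best = max(best, flips)
    return best

parity = lambda x: sum(x) % 2
majority = lambda x: int(sum(x) > len(x) / 2)

n = 5
print("parity  :", sensitivity(parity, n))    # n: every bit matters at every input
print("majority:", sensitivity(majority, n))  # ceil(n/2): only inputs near the threshold are sensitive
```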

3 Something something edge of chaos

See Edge of chaos.

4 Incoming

5 References

Ben-David, Hrubeš, Moran, et al. 2019. “Learnability Can Be Undecidable.” Nature Machine Intelligence.
Bhattacharyya, John, Ghoshal, et al. 2019. “Average Bias and Polynomial Sources.” 079.
Blumer, Ehrenfeucht, Haussler, et al. 1987. “Occam’s Razor.” Information Processing Letters.
Braverman. 2008. “Polylogarithmic Independence Fools AC0 Circuits.” J. ACM.
Calude. 2002. Information and Randomness: An Algorithmic Perspective.
Chaitin, Gregory J. 1966. “On the Length of Programs for Computing Finite Binary Sequences.” Journal of the ACM (JACM).
Chaitin, Gregory J. 1969. “On the Simplicity and Speed of Programs for Computing Infinite Sets of Natural Numbers.” J. ACM.
Chaitin, Gregory J. 1977. “Algorithmic Information Theory.” IBM Journal of Research and Development.
———. 1988. Information, Randomness and Incompleteness: Papers on Algorithmic Information Theory (World Scientific Series in Computer Science, Vol 8).
———. 2002. “The Intelligibility of the Universe and the Notions of Simplicity, Complexity and Irreducibility.”
Chazelle. 2001. The Discrepancy Method: Randomness and Complexity.
Clark, Florêncio, Watkins, et al. 2006. “Planar Languages and Learnability.” In Grammatical Inference: Algorithms and Applications. Lecture Notes in Computer Science 4201.
Cohen. 1963. “The Independence of the Continuum Hypothesis.” Proceedings of the National Academy of Sciences.
———. 1964. “The Independence of the Continuum Hypothesis, II.” Proceedings of the National Academy of Sciences.
Cortes, Mohri, Rastogi, et al. 2008. “On the Computation of the Relative Entropy of Probabilistic Automata.” International Journal of Foundations of Computer Science.
Cover, Gács, and Gray. 1989. “Kolmogorov’s Contributions to Information Theory and Algorithmic Complexity.” The Annals of Probability.
Crumiller, Knight, Yu, et al. 2011. “Estimating the Amount of Information Conveyed by a Population of Neurons.” Frontiers in Neuroscience.
Davies. 2007. “The Implications of a Cosmological Information Bound for Complexity, Quantum Information and the Nature of Physical Law.” arXiv:quant-ph/0703041.
Diakonikolas, Gopalan, Jaiswal, et al. 2009. “Bounded Independence Fools Halfspaces.” 016.
Feldman. 2002. “A Brief Introduction to: Information Theory, Excess Entropy and Computational Mechanics.”
Gács, Tromp, and Vitányi. 2001. “Algorithmic Statistics.” IEEE Transactions on Information Theory.
Gödel, Kurt. 1931. “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I.” Monatshefte für Mathematik und Physik.
Gödel, K. 1940. The Consistency of the Continuum Hypothesis.
Goldreich. 2008. Computational Complexity: A Conceptual Perspective.
Hansen, and Yu. 2001. “Model Selection and the Principle of Minimum Description Length.” Journal of the American Statistical Association.
Haslinger, Klinkner, and Shalizi. 2010. “The Computational Structure of Spike Trains.” Neural Computation.
Hutter. 2000. “A Theory of Universal Artificial Intelligence Based on Algorithmic Complexity.”
Jonas, and Kording. 2017. “Could a Neuroscientist Understand a Microprocessor?” PLOS Computational Biology.
Kolmogorov. 1968. “Three Approaches to the Quantitative Definition of Information.” International Journal of Computer Mathematics.
Kontorovich, Cortes, and Mohri. 2006. “Learning Linearly Separable Languages.” In Algorithmic Learning Theory. Lecture Notes in Computer Science 4264.
Kuipers, and Niederreiter. 2012. Uniform Distribution of Sequences.
Legg. 2006. “Is There an Elegant Universal Theory of Prediction?” In Algorithmic Learning Theory. Lecture Notes in Computer Science 4264.
Littlestone, and Warmuth. 1986. “Relating Data Compression and Learnability.”
Li, and Vitányi. 2009. An Introduction to Kolmogorov Complexity and Its Applications.
Mezard, and Montanari. 2009. Information, Physics, and Computation. Oxford Graduate Texts.
Nemenman, Bialek, and de Ruyter van Steveninck. 2004. “Entropy and Information in Neural Spike Trains: Progress on the Sampling Problem.” Physical Review E.
Nisan, and Wigderson. 1994. “Hardness Vs Randomness.” Journal of Computer and System Sciences.
Norton. 2013. “All Shook Up: Fluctuations, Maxwell’s Demon and the Thermodynamics of Computation.” Entropy.
Perekrestenko, Eberhard, and Bölcskei. 2021. “High-Dimensional Distribution Generation Through Deep Neural Networks.” Partial Differential Equations and Applications.
Perekrestenko, Müller, and Bölcskei. 2020. “Constructive Universal High-Dimensional Distribution Generation Through Deep ReLU Networks.”
Reyzin. 2019. “Unprovability Comes to Machine Learning.” Nature.
Riegler, Bölcskei, and Koliander. 2023. “Lossy Compression of General Random Variables.”
Rissanen. 2007. Information and Complexity in Statistical Modeling. Information Science and Statistics.
Shibata, Yoshinaka, and Chikayama. 2006. “Probabilistic Generalization of Simple Grammars and Its Application to Reinforcement Learning.” In Algorithmic Learning Theory. Lecture Notes in Computer Science 4264.
Solomonoff. 1964a. “A Formal Theory of Inductive Inference. Part I.” Information and Control.
———. 1964b. “A Formal Theory of Inductive Inference. Part II.” Information and Control.
Sterkenburg. 2016. “Solomonoff Prediction and Occam’s Razor.” Philosophy of Science.
Vadhan. 2012. “Pseudorandomness.” Foundations and Trends® in Theoretical Computer Science.
Valiant. 1984. “A Theory of the Learnable.” Commun. ACM.
Vapnik, and Chervonenkis. 1971. “On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities.” Theory of Probability & Its Applications.
Vereshchagin, and Vitányi. 2004. “Kolmogorov’s Structure Functions and Model Selection.” IEEE Transactions on Information Theory.
———. 2010. “Rate Distortion and Denoising of Individual Data Using Kolmogorov Complexity.” IEEE Transactions on Information Theory.
Vitányi. 2006. “Meaningful Information.” IEEE Transactions on Information Theory.