
- The World's Simplest Ergodic Theorem
- Von Neumann and Birkhoff's Ergodic Theorems
- The difference between statistical ensembles and sample spaces: Mehmet Süzen, Alignment between statistical mechanics and probability theory

Relevance to actual stochastic processes and dynamical systems, especially linear and non-linear system identification.

Keywords to look up:

- probability-free ergodicity
- Birkhoff ergodic theorem
- Frobenius-Perron operator
- Quasicompactness, correlation decay
- C&C CLT for Markov chains → Nagaev

Not much material here, but see learning theory for dependent data for some interesting categorisations of mixing, and for how various mixing conditions matter for statistical estimators.

My main interest is the following four-stages-of-grief kind of set-up.

- Often I can prove that I can learn a thing from my data if it is stationary.
- But I rarely have stationarity, so it might be more useful to show that the estimator is ergodic, which would follow from appropriate mixing conditions that do not necessarily assume stationarity.
- Except that such mixing conditions are often hard to verify or estimate, or require knowing the very parameters in question, so I might suspect that showing some kind of partial identifiability is closer to what I need.
- Furthermore, I would usually prefer a finite-sample result to an asymptotic guarantee. Sometimes I can get those from learning theory for dependent data.

That last one is TBC.
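The first stage above can be sketched numerically: for a stationary, ergodic AR(1) process, the time average along a single long sample path converges to the ensemble mean, per Birkhoff. A minimal sketch (the process and its parameters are illustrative, not anything from a real data set):

```python
import random

# Hypothetical illustration: for a stationary, ergodic AR(1) process
#   x_{t+1} = a * x_t + e_t,  e_t ~ N(0, 1),  |a| < 1,
# the time average of one long sample path converges to the
# ensemble (stationary) mean, which is 0 here.

def ar1_path(a, n, seed=0):
    rng = random.Random(seed)
    x = 0.0
    path = []
    for _ in range(n):
        x = a * x + rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = ar1_path(0.9, 200_000)
time_avg = sum(path) / len(path)
# The stationary mean is 0 and the stationary variance is 1 / (1 - a**2),
# so the time average should be close to 0 for a long enough path.
```

Note that this only demonstrates the easy case; the whole point of the list above is that stationarity is the assumption I rarely get to keep.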

## Coupling from the past

Dan Piponi explains coupling from the past via functional programming for Markov chains.
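Piponi's functional construction can also be imitated imperatively. A toy sketch of Propp–Wilson coupling from the past for a hypothetical 3-state chain (the transition matrix is made up for illustration): run every start state forward from time −T with shared randomness; if all trajectories coalesce by time 0, the common value is an exact draw from the stationary distribution; otherwise double T, reusing the old randomness.

```python
import random

# Hypothetical 3-state transition matrix (rows sum to 1).
P = [
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
]

def phi(state, u):
    """Deterministic update function: inverse-CDF step driven by one uniform u."""
    acc = 0.0
    for j, p in enumerate(P[state]):
        acc += p
        if u < acc:
            return j
    return len(P[state]) - 1

def cftp(seed=0):
    """Exact sample from the stationary distribution via coupling from the past."""
    rng = random.Random(seed)
    us = []  # us[t-1] drives the step from time -t to -(t-1); fixed once drawn
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        # Run EVERY start state forward from time -T with the same randomness.
        states = list(range(3))
        for t in range(T, 0, -1):
            u = us[t - 1]
            states = [phi(s, u) for s in states]
        if len(set(states)) == 1:   # coalesced: the value at time 0 is exact
            return states[0]
        T *= 2                      # go further into the past, reuse old draws
```

The crucial design point is that the randomness for each past time step is drawn once and reused as T grows; redrawing it would bias the sample.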

## Mixing zoo

A recommended partial overview is Bradley (2005).

### β-mixing


### φ-mixing


### Sequential Rademacher complexity


## Lyapunov exponents

Chaotic systems are unpredictable. Or rather chaotic systems are not deterministically predictable in the long run. You can make predictions if you weaken one of these requirements. You can make deterministic predictions in the short run, or statistical predictions in the long run. Lyapunov exponents are a way to measure how quickly the short run turns into the long run.
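For a one-dimensional map the largest Lyapunov exponent is just the time average of log |f′(x)| along an orbit, so it can be estimated directly. A sketch for the logistic map x ↦ r·x·(1−x); for r = 4 the exponent is known to be log 2, which doubles as a sanity check (the orbit length and burn-in are arbitrary choices):

```python
import math

def lyapunov_logistic(r, x0=0.3, n=100_000, burn_in=1_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    as the orbit average of log |f'(x)| = log |r*(1 - 2x)|."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

lam = lyapunov_logistic(4.0)
# For r = 4 the exact value is log 2 ≈ 0.693; positive exponent = chaos,
# and 1/lam sets the timescale on which the "short run" becomes the "long run".
```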

## References

*arXiv:2002.07928 [Physics, Stat]*, June.

*Probability Surveys* 2: 107–44.

*Journal of Applied Probability* 38 (1): 122–35.

*arXiv:1602.05125 [Math, Stat]*, February.

*SIAM Review* 1 (1): 45–76.

*Probability, Random Processes, and Ergodic Properties*. Springer.

*IMS Lecture Notes–Monograph Series Dynamics & Stochastics* 48.

*Algorithmic Learning Theory*, edited by Peter Auer, Alexander Clark, Thomas Zeugmann, and Sandra Zilles, 260–74. Lecture Notes in Computer Science. Bled, Slovenia: Springer International Publishing.

*Machine Learning Journal*.

*Journal of Statistical Mechanics: Theory and Experiment* 2012 (07): P07025.

*arXiv:1106.0730 [Cs, Stat]*, June.

*Journal of Machine Learning Research* 4: 1–26.

*The Annals of Statistics* 24 (1): 370–79.

*Advances in Physics* 31 (6): 669–735.

*Random Structures & Algorithms* 9: 223–52. New York, NY, USA: John Wiley & Sons, Inc.

*Microsurveys in Discrete Probability*, edited by David Aldous and James Gary Propp, 41: 181–92. DIMACS Series in Discrete Mathematics and Theoretical Computer Science. Providence, Rhode Island: American Mathematical Society.

*The Annals of Probability* 12 (4): 1167–80.

*IEEE Transactions on Information Theory* 56 (3): 1430–35.

*The Annals of Statistics* 35 (4): 1773–1801.

*IEEE Transactions on Information Theory* 44 (6): 2079–93.

*The Annals of Statistics* 25 (1): 293–304.

*Physical Review E* 51 (6): 5228–38.

*Stochastics and Dynamics* 12 (01): 1150012.
