Computation and the edge of chaos
Is criticality what inference looks like?
2016-12-01 — 2026-01-07
Wherein an account is given of computation at criticality, and the connection to deep neural networks’ poised initialization and phase‑transition models is examined and placed beside historical proposals.
I hadn’t made an edge-of-chaos notebook for a while because I couldn’t face sifting through the woo-woo silt to find the gold. There’s some gold, though — and some iron pyrite that’s still nice to look at, some fun ideas, and a wacky story of cranks, mavericks, Nobel prizes and hype that’s interesting in its own right.
This looks to me like it should connect to the measure-preserving nets discussed in Neural Nets as dynamical systems, statistical mechanics of learning and algorithmic statistics. Also, inevitably, to fractals because of scaling relations.
1 History
Start with Crutchfield and Young (1988) and Chris G. Langton (1990), which introduce the association between the edge of chaos and computation.
2 Which chaos? Which edge?
Two ingredients seem to recur: a family of dynamical systems with a phase transition between an ordered and a chaotic regime, and a model of computation whose capacity is supposed to peak at that transition. Which transition, and which notion of computation, is where the arguments get interesting. TBD
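To keep the first ingredient concrete, here is the most clichéd toy example I know (mine, not drawn from any of the papers here): the logistic map, whose Lyapunov exponent crosses zero at the onset of chaos near r ≈ 3.5699.

```python
import numpy as np

def lyapunov_logistic(r, n_burn=1000, n_iter=10000, x0=0.5):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x).

    Negative exponent: ordered (periodic) regime.
    Positive exponent: chaotic regime.
    The 'edge' is where it crosses zero, near r ~= 3.5699.
    """
    x = x0
    for _ in range(n_burn):                    # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        acc += np.log(abs(r * (1 - 2 * x)))    # log of the local derivative
    return acc / n_iter

for r in (2.8, 3.2, 3.5, 3.5699, 3.8, 4.0):
    print(f"r = {r:<7} lambda ~ {lyapunov_logistic(r):+.3f}")
```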
3 In life
TBD.
4 Neural nets
There’s a resurgence of interest in edge-of-chaos behaviour in the undifferentiated, random computational mess of neural networks; they actually look a lot like the sort of system that should demonstrate edge-of-chaos computation if anything does. See also statistical mechanics of statistics.
Bertschinger, Natschläger, and Legenstein (2004) and Bertschinger and Natschläger (2004) argue that randomly connected recurrent nets do their best real-time computation near the order/chaos phase transition: in the ordered regime the influence of inputs fades too quickly, while in the chaotic regime small irrelevant differences are amplified until they swamp the signal.
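A back-of-the-envelope way to see the transition they exploit (my sketch, not their experimental setup): take a random tanh recurrent map, nudge the state, and measure whether the nudge shrinks or grows. With weights drawn i.i.d. N(0, σ²/n), mean-field arguments put the order/chaos boundary near σ = 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def perturbation_growth(sigma, n=200, t_max=300, eps=1e-6):
    """Average log growth rate of a tiny perturbation in the random
    recurrent map x_{t+1} = tanh(W x_t), with W_ij ~ N(0, sigma^2 / n).

    Negative: perturbations die out (ordered phase).
    Positive: perturbations grow (chaotic phase).
    """
    W = rng.normal(scale=sigma / np.sqrt(n), size=(n, n))
    x = rng.normal(size=n)
    d = rng.normal(size=n)
    d *= eps / np.linalg.norm(d)               # start with a perturbation of size eps
    rates = []
    for _ in range(t_max):
        x_next = np.tanh(W @ x)
        x_pert = np.tanh(W @ (x + d))
        gap = np.linalg.norm(x_pert - x_next)
        rates.append(np.log(gap / eps))
        d = (x_pert - x_next) * (eps / gap)    # renormalise so the perturbation stays small
        x = x_next
    return float(np.mean(rates[100:]))         # drop the transient

for sigma in (0.5, 0.9, 1.0, 1.1, 1.5, 2.0):
    print(f"sigma = {sigma:<4}  rate ~ {perturbation_growth(sigma):+.3f}")
```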
Hayou et al. (2020) argues that deep networks train best when initialized on the edge of chaos, i.e. with the weight and bias variances tuned so that signals propagating through the layers neither die away nor blow up with depth. This apparently builds on (Hayou, Doucet, and Rousseau 2019; Schoenholz et al. 2017). We should revisit.
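The criterion in that line of work is concrete enough to compute. In the mean-field signal-propagation picture (the Schoenholz et al. style analysis that Hayou et al. refine), one iterates the layer-to-layer variance map to its fixed point q*, then evaluates χ₁ = σ_w² E[φ′(√q* Z)²]: χ₁ < 1 is the ordered phase, χ₁ > 1 the chaotic phase, and χ₁ = 1 is the edge of chaos where forward signals and gradients neither vanish nor explode. A minimal sketch for tanh activations, using Gauss-Hermite quadrature for the Gaussian expectations:

```python
import numpy as np

# Gauss-Hermite rule: E[f(Z)] ~ sum_i w_i f(sqrt(2) x_i) / sqrt(pi) for Z ~ N(0, 1).
nodes, weights = np.polynomial.hermite.hermgauss(64)

def gauss_mean(f):
    """Expectation of f(Z) for standard normal Z."""
    return np.sum(weights * f(np.sqrt(2.0) * nodes)) / np.sqrt(np.pi)

def fixed_point_q(sigma_w, sigma_b, n_iter=500):
    """Fixed point of the layer-to-layer variance map
    q <- sigma_b^2 + sigma_w^2 * E[tanh(sqrt(q) Z)^2]."""
    q = 1.0
    for _ in range(n_iter):
        q = sigma_b**2 + sigma_w**2 * gauss_mean(lambda z: np.tanh(np.sqrt(q) * z) ** 2)
    return q

def chi1(sigma_w, sigma_b):
    """Slope of the correlation map at its fixed point; chi1 = 1 is the edge of chaos."""
    q = fixed_point_q(sigma_w, sigma_b)
    # for tanh, phi'(x) = sech(x)^2, so E[phi'(.)^2] = E[sech(.)^4]
    return sigma_w**2 * gauss_mean(lambda z: 1.0 / np.cosh(np.sqrt(q) * z) ** 4)

sigma_b = 0.1
for sigma_w in (0.5, 1.0, 1.5, 2.0, 2.5):
    print(f"sigma_w = {sigma_w:<4} chi1 ~ {chi1(sigma_w, sigma_b):.3f}")
```

Scanning σ_w like this traces out where the ordered/chaotic boundary sits for a given σ_b; the edge-of-chaos initialization is the (σ_w, σ_b) pair with χ₁ = 1.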
Roberts, Yaida, and Hanin (2022) sounds like it analyzes criticality in an interesting way; the abstract, at least, reads like a dynamical-systems analysis of neural nets:
This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyse the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models’ predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterise the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behaviour and lets us categorise networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths.
Here are some more things to read (He et al. 2025; Liu et al. 2025; Morales and Muñoz 2021; Terada and Toyoizumi 2024; Teuscher 2022).
5 Edge of Stability in training neural networks
Is this related? It looks like it might be: the “edge of stability” observation is that under full-batch gradient descent the sharpness (the largest eigenvalue of the loss Hessian) rises during training until it hovers just above the stability threshold 2/η, as if training parks itself at a critical boundary rather than settling comfortably inside it.
(Cohen et al. 2025; Zhu et al. 2023)
Paper: Understanding Optimization in Deep Learning with Central Flows (Cohen et al. 2022)
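One way to make the connection concrete is simply to measure it: track the sharpness along a full-batch gradient-descent trajectory and compare it with 2/η. Below is a minimal numpy sketch on a made-up toy regression problem; it only measures the quantity in question, and a model this tiny is not guaranteed to reproduce the hovering-at-2/η behaviour reported for realistic networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy full-batch regression problem (synthetic, purely illustrative).
X = rng.normal(size=(64, 2))
y = np.sin(X[:, :1]) + 0.5 * X[:, 1:]

shapes = [(2, 8), (8,), (8, 1), (1,)]                 # W1, b1, W2, b2
splits = np.cumsum([int(np.prod(s)) for s in shapes])[:-1]
n_params = sum(int(np.prod(s)) for s in shapes)

def unpack(theta):
    return [p.reshape(s) for p, s in zip(np.split(theta, splits), shapes)]

def loss_and_grad(theta):
    """Full-batch MSE loss and its gradient for a tiny tanh MLP."""
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    err = h @ W2 + b2 - y
    loss = 0.5 * np.mean(err ** 2)
    d_out = err / err.size                            # d loss / d prediction
    gW2, gb2 = h.T @ d_out, d_out.sum(0)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    gW1, gb1 = X.T @ d_h, d_h.sum(0)
    return loss, np.concatenate([g.ravel() for g in (gW1, gb1, gW2, gb2)])

def sharpness(theta, h_fd=1e-4):
    """Top Hessian eigenvalue, via central finite differences of the gradient."""
    H = np.empty((n_params, n_params))
    for i in range(n_params):
        e = np.zeros(n_params)
        e[i] = h_fd
        H[i] = (loss_and_grad(theta + e)[1] - loss_and_grad(theta - e)[1]) / (2 * h_fd)
    return np.linalg.eigvalsh(0.5 * (H + H.T))[-1]    # symmetrise, take the largest eigenvalue

eta = 0.2                                             # full-batch gradient descent step size
theta = 0.5 * rng.normal(size=n_params)
for step in range(2001):
    loss, grad = loss_and_grad(theta)
    if step % 250 == 0:
        print(f"step {step:4d}  loss {loss:.4f}  sharpness {sharpness(theta):6.2f}"
              f"  vs 2/eta = {2 / eta:.1f}")
    theta -= eta * grad
```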
6 Incoming
- Sasank’s blog: Self-Organized Criticality: An Explanation of 1/f Noise
- Biologists Find New Rules for Life at the Edge of Chaos
- Inevitably, Shalizi critiques “The Edge of Chaos”
- Olbrich’s lecture

