The edge of chaos

Computation, evolution, competition and other pastimes of faculty



For a long time I did not create an edge-of-chaos notebook because I could not face the task of sifting through the woo-woo silt to find the gold. There is some gold, though, and also some iron pyrites, which is still nice to look at; some fun ideas; and a wacky story of cranks, mavericks, Nobel prizes and hype that is interesting in its own right.

Name check: criticality, which is perhaps also related to dynamical stability and ergodicity, and inevitably to fractals because of scaling relations. Somewhere out of this, a theory of algorithmic statistics might emerge.

Maybe information bottlenecks also?

History

Start with Crutchfield and Young (1988) and Langton (1990), which introduce the association between the edge of chaos and computation.

Which chaos? Which edge?

Two ingredients seem to be popular: a phase-transition model and an assumed model of computation. TBD.
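For a concrete toy of the first ingredient, here is a minimal sketch, my own illustration rather than anything from the cited papers, of Langton's λ parameter for one-dimensional cellular automata in the spirit of Langton (1990): λ is the fraction of rule-table entries that do not map to the quiescent state, and sweeping it moves the dynamics from frozen towards fully mixed, with the interesting behaviour claimed to sit in between. All parameter choices below are arbitrary.

```python
# Sketch (my illustration, not from Langton's papers): Langton's lambda parameter
# for a 1-D cellular automaton with K states and neighbourhood size N.
# lambda = fraction of rule-table entries that do NOT map to the quiescent state.
import numpy as np

K, N, WIDTH, STEPS = 4, 3, 200, 200   # states, neighbourhood, lattice size, time steps
QUIESCENT = 0

def random_rule(lmbda, rng):
    """Random rule table: each neighbourhood maps to QUIESCENT with prob 1 - lambda."""
    size = K ** N
    table = rng.integers(1, K, size=size)            # non-quiescent outputs
    table[rng.random(size) >= lmbda] = QUIESCENT     # quiesce the rest
    return table

def mean_activity(table, rng):
    """Run the CA from a random row; return the mean fraction of cells changing per step."""
    state = rng.integers(0, K, size=WIDTH)
    activity = []
    for _ in range(STEPS):
        # index each (left, centre, right) neighbourhood into the rule table
        idx = (np.roll(state, 1) * K + state) * K + np.roll(state, -1)
        new = table[idx]
        activity.append(np.mean(new != state))
        state = new
    return np.mean(activity)

rng = np.random.default_rng(0)
for lmbda in (0.05, 0.25, 0.45, 0.65):
    print(f"lambda={lmbda:.2f}  mean activity={mean_activity(random_rule(lmbda, rng), rng):.3f}")
```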

In life

TBD.

Neural nets

Interest in the edge of chaos has resurged in the context of neural networks, that undifferentiated, random computational mess which does in fact look a lot like a system that should demonstrate edge-of-chaos computation if anything does. See also statistical mechanics of statistics.

Hayou et al. (2020) argue that deep networks are trainable in an edge-of-chaos regime, building on earlier work (Hayou, Doucet, and Rousseau 2019; Schoenholz et al. 2017). The notion of chaos here is quite specific; it looks like one about measure-preserving, mixing dynamical systems. To revisit.
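As a concrete anchor for that notion, here is a minimal numerical sketch of my own, under the wide-network mean-field assumptions used by Schoenholz et al. (2017): a random tanh network with weights of variance σ_w²/N and biases of variance σ_b² sits at the edge of chaos when χ = σ_w² E[φ′(z)²] = 1, with z drawn at the fixed point q* of the layerwise variance map. The Monte Carlo sizes and the grid of σ_w values are arbitrary choices.

```python
# Sketch (assumptions as above): locate the order/chaos boundary for a random
# tanh network by computing chi = sigma_w^2 * E[tanh'(z)^2] at the variance fixed point.
import numpy as np

def variance_fixed_point(sigma_w, sigma_b, iters=100, n_mc=100_000, seed=0):
    """Iterate q <- sigma_w^2 * E[tanh(sqrt(q) z)^2] + sigma_b^2 to approximate convergence."""
    z = np.random.default_rng(seed).standard_normal(n_mc)
    q = 1.0
    for _ in range(iters):
        q = sigma_w**2 * np.mean(np.tanh(np.sqrt(q) * z) ** 2) + sigma_b**2
    return q

def chi(sigma_w, sigma_b):
    """Perturbation growth rate: chi < 1 ordered, chi > 1 chaotic, chi = 1 edge of chaos."""
    z = np.random.default_rng(1).standard_normal(100_000)
    q_star = variance_fixed_point(sigma_w, sigma_b)
    dphi = 1.0 - np.tanh(np.sqrt(q_star) * z) ** 2   # tanh'(x) = 1 - tanh(x)^2
    return sigma_w**2 * np.mean(dphi**2)

if __name__ == "__main__":
    for sigma_w in (0.5, 1.0, 1.5, 2.0, 3.0):
        print(f"sigma_w = {sigma_w:.1f}  chi = {chi(sigma_w, sigma_b=0.05):.3f}")
```

Running this sweeps χ from below 1 to above 1 as σ_w grows, so the critical initialization scale is where the printed values cross 1.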

Roberts, Yaida, and Hanin (2021) may or may not relate:

This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
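As a complement to the analytic criterion above, the ordered and chaotic phases, and the criticality tuning the abstract alludes to, can be seen directly in a toy simulation of my own (not taken from the book): push two correlated inputs through a deep random tanh network and watch whether they collapse together, decorrelate, or retain their similarity. Widths, depths and scales below are arbitrary.

```python
# Toy experiment (my own, assumptions as stated in the text): correlation of two
# inputs after many random tanh layers. Ordered phase: correlation -> 1;
# chaotic phase: correlation decays; near criticality it survives to depth.
import numpy as np

def propagate_correlation(sigma_w, sigma_b=0.05, width=500, depth=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(width)
    y = 0.9 * x + np.sqrt(1 - 0.9**2) * rng.standard_normal(width)  # correlated pair
    h_x, h_y = x, y
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * sigma_w / np.sqrt(width)
        b = rng.standard_normal(width) * sigma_b
        h_x, h_y = np.tanh(W @ h_x + b), np.tanh(W @ h_y + b)
    return np.dot(h_x, h_y) / (np.linalg.norm(h_x) * np.linalg.norm(h_y))

for sigma_w in (0.8, 1.2, 2.5):
    print(f"sigma_w={sigma_w}: correlation after 50 layers = {propagate_correlation(sigma_w):.3f}")
```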

To read: (Bertschinger, Natschläger, and Legenstein 2004; Bertschinger and Natschläger 2004).

References

Bak, Per, Chao Tang, and Kurt Wiesenfeld. 1988. “Self-Organized Criticality.” Physical Review A 38 (1): 364–74.
Bertschinger, Nils, and Thomas Natschläger. 2004. “Real-Time Computation at the Edge of Chaos in Recurrent Neural Networks.” Neural Computation 16 (7): 1413–36.
Bertschinger, Nils, Thomas Natschläger, and Robert Legenstein. 2004. “At the Edge of Chaos: Real-Time Computations and Self-Organized Criticality in Recurrent Neural Networks.” In Advances in Neural Information Processing Systems. Vol. 17. MIT Press.
Crutchfield, James P. 1994. “The Calculi of Emergence: Computation, Dynamics and Induction.” Physica D: Nonlinear Phenomena 75 (1–3): 11–54.
Crutchfield, James P., and Karl Young. 1988. “Computation at the Onset of Chaos.”
Hayou, Soufiane, Arnaud Doucet, and Judith Rousseau. 2019. “On the Impact of the Activation Function on Deep Neural Networks Training.” In Proceedings of the 36th International Conference on Machine Learning, 2672–80. PMLR.
Hayou, Soufiane, Jean-Francois Ton, Arnaud Doucet, and Yee Whye Teh. 2020. “Pruning Untrained Neural Networks: Principles and Analysis.” arXiv:2002.08797 [cs, stat], June.
Kanders, Karlis, Tom Lorimer, and Ruedi Stoop. 2017. “Avalanche and Edge-of-Chaos Criticality Do Not Necessarily Co-Occur in Neural Networks.” Chaos: An Interdisciplinary Journal of Nonlinear Science 27 (4): 047408.
Kauffman, Stuart A. 1993. The Origins of Order: Self-Organization and Selection in Evolution. New York: Oxford University Press.
Kelso, J. A. S., A. J. Mandell, and M. F. Shlesinger. 1988. “Dynamic Patterns in Complex Systems.” In Dynamic Patterns in Complex Systems, 1–432. World Scientific.
Krotov, Dmitry, Julien O. Dubuis, Thomas Gregor, and William Bialek. 2014. “Morphogenesis at Criticality.” Proceedings of the National Academy of Sciences, February.
Langton, Chris G. 1990. “Computation at the Edge of Chaos: Phase Transitions and Emergent Computation.” Physica D: Nonlinear Phenomena 42 (1–3): 12–37.
Langton, Christopher G. 1986. “Studying Artificial Life with Cellular Automata.” Physica D: Nonlinear Phenomena, Proceedings of the Fifth Annual International Conference, 22 (1): 120–49.
Mitchell, M., J. P. Crutchfield, and P. T. Hraber. 1993. “Dynamics, Computation, and the ‘Edge of Chaos’: A Re-Examination,” June.
Mitchell, Melanie, Peter Hraber, and James P. Crutchfield. 1993. “Revisiting the Edge of Chaos: Evolving Cellular Automata to Perform Computations.” arXiv:adap-org/9303003, March.
Mora, Thierry, and William Bialek. 2011. “Are Biological Systems Poised at Criticality?” Journal of Statistical Physics 144 (2): 268–302.
Packard, Norman H. 1988. Adaptation Toward the Edge of Chaos. University of Illinois at Urbana-Champaign, Center for Complex Systems Research.
Prokopenko, Mikhail, Fabio Boschetti, and Alex J. Ryan. 2009. “An Information-Theoretic Primer on Complexity, Self-Organization, and Emergence.” Complexity 15 (1): 11–28.
Roberts, Daniel A., Sho Yaida, and Boris Hanin. 2021. “The Principles of Deep Learning Theory.” arXiv:2106.10165 [hep-th, stat], August.
Scarle, Simon. 2009. “Implications of the Turing Completeness of Reaction-Diffusion Models, Informed by GPGPU Simulations on an XBox 360: Cardiac Arrhythmias, Re-Entry and the Halting Problem.” Computational Biology and Chemistry 33 (4): 253–60.
Schoenholz, Samuel S., Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. 2017. “Deep Information Propagation.” In International Conference on Learning Representations.
Su, Jingtong, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, Liwei Wang, and Jason D. Lee. 2020. “Sanity-Checking Pruning Methods: Random Tickets Can Win the Jackpot.” In Advances in Neural Information Processing Systems.
Torres-Sosa, Christian, Sui Huang, and Maximino Aldana. 2012. “Criticality Is an Emergent Property of Genetic Networks That Exhibit Evolvability.” PLOS Computational Biology 8 (9): e1002669.
Upadhyay, Ranjit Kumar. 2009. “Dynamics of an Ecological Model Living on the Edge of Chaos.” Applied Mathematics and Computation 210 (2): 455–64.
Zhang, Gege, Gangwei Li, Weining Shen, and Weidong Zhang. 2020. “The Expressivity and Training of Deep Neural Networks: Toward the Edge of Chaos?” Neurocomputing 386 (April): 8–17.
