Statistical mechanics of statistics
December 2, 2016 — June 2, 2023
Boaz Barak has a miniature dictionary for statisticians:
I’ve always been curious about the statistical physics approach to problems from computer science. The physics-inspired algorithm survey propagation is the current champion for random 3SAT instances, statistical-physics phase transitions have been suggested as explaining computational difficulty, and statistical physics has even been invoked to explain why deep learning algorithms seem to often converge to useful local minima.
Unfortunately, I have always found the terminology of statistical physics, “spin glasses”, “quenched averages”, “annealing”, “replica symmetry breaking”, “metastable states”, etc., to be rather daunting.
Jaan Altosaar’s guided translation is great.
Connection to singular learning theory and neural networks?
1 Phase transitions in statistical inference
There is a deep analogy between statistical inference and statistical physics; I will give a friendly introduction to both of these fields. I will then discuss phase transitions in two problems of interest to a broad range of data sciences: community detection in social and biological networks, and clustering of sparse high-dimensional data. In both cases, if our data becomes too sparse or too noisy, it suddenly becomes impossible to find the underlying pattern, or even tell if there is one. Physics both helps us locate these phase transitions, and design optimal algorithms that succeed all the way up to this point. Along the way, I will visit ideas from computational complexity, random graphs, random matrices, and spin glass theory.
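For orientation, the community-detection transition alluded to here has a sharp formula in the simplest case. For the symmetric two-group stochastic block model with average within-group degree $c_{\text{in}}$ and between-group degree $c_{\text{out}}$, the statistical-physics analysis puts the detectability (Kesten–Stigum) threshold at, if I recall the constants correctly,

$$
(c_{\text{in}} - c_{\text{out}})^2 > 2\,(c_{\text{in}} + c_{\text{out}}),
$$

above which belief propagation (or spectral methods on the non-backtracking matrix) label nodes better than chance, and below which, at least in the two-group case, nothing can.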
There is an overview lecture by Thomas Orton, which cites lots of the good stuff:
Last week, we saw how certain computational problems like 3SAT exhibit a thresholding behaviour, similar to a phase transition in a physical system. In this post, we’ll continue to look at this phenomenon by exploring a heuristic method, belief propagation (and the cavity method), which has been used to make hardness conjectures, and also has thresholding properties. In particular, we’ll start by looking at belief propagation for approximate inference on sparse graphs as a purely computational problem. After doing this, we’ll switch perspectives and see belief propagation motivated in terms of Gibbs free energy minimisation for physical systems. With these two perspectives in mind, we’ll then try to use belief propagation to do inference on the stochastic block model. We’ll see some heuristic techniques for determining when BP succeeds and fails in inference, as well as some numerical simulation results of belief propagation for this problem. Lastly, we’ll talk about where this all fits into what is currently known about efficient algorithms and information theoretic barriers for the stochastic block model.
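To keep the first of those perspectives concrete, here is a minimal sum-product loopy belief propagation sketch for a generic pairwise model on a sparse graph. This is my own toy formulation in plain numpy; the container conventions, damping, and the little triangle demo are arbitrary choices, not anything from the lecture.

```python
import numpy as np

def loopy_bp(nodes, edges, unary, pairwise, iters=50, damping=0.5):
    """Sum-product belief propagation on a pairwise model.

    nodes    : list of node ids
    edges    : list of undirected (i, j) pairs
    unary    : dict node -> length-K array of nonnegative potentials
    pairwise : dict (i, j) -> K x K array; entry [a, b] scores (x_i=a, x_j=b)
    Returns approximate marginals as a dict node -> length-K array.
    """
    neighbours = {i: [] for i in nodes}
    for i, j in edges:
        neighbours[i].append(j)
        neighbours[j].append(i)

    def pair(i, j):
        # Orient the potential so rows index x_i and columns index x_j.
        return pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T

    K = {i: len(unary[i]) for i in nodes}
    # m[(i, j)][x_j] is the message node i sends to node j about j's state.
    m = {(i, j): np.ones(K[j]) / K[j] for i in nodes for j in neighbours[i]}

    for _ in range(iters):
        new = {}
        for i in nodes:
            for j in neighbours[i]:
                # Product of messages into i from everyone except j.
                incoming = (np.prod([m[(k, i)] for k in neighbours[i] if k != j], axis=0)
                            if len(neighbours[i]) > 1 else np.ones(K[i]))
                msg = (unary[i] * incoming) @ pair(i, j)
                msg /= msg.sum()
                # Damping helps convergence on loopy graphs.
                new[(i, j)] = damping * m[(i, j)] + (1 - damping) * msg
        m = new

    beliefs = {}
    for i in nodes:
        b = unary[i] * np.prod([m[(k, i)] for k in neighbours[i]], axis=0)
        beliefs[i] = b / b.sum()
    return beliefs

if __name__ == "__main__":
    # Tiny demo: three binary spins on a triangle, attractive couplings,
    # with node 0 nudged towards state 1. Values are arbitrary.
    nodes = [0, 1, 2]
    edges = [(0, 1), (1, 2), (0, 2)]
    attract = np.array([[2.0, 1.0], [1.0, 2.0]])
    unary = {0: np.array([1.0, 3.0]), 1: np.ones(2), 2: np.ones(2)}
    pairwise = {e: attract for e in edges}
    print(loopy_bp(nodes, edges, unary, pairwise))
```

The demo should show node 0’s bias towards state 1 leaking into the marginals of nodes 1 and 2 through the attractive couplings.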
See Igor Carron’s “phase diagram” list, and stuff like (Oymak and Tropp 2015). Likely there are connections to Erdős–Rényi giant components and other complex-network phenomena in probabilistic graph learning. Read (Barbier 2015; Poole et al. 2016).
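The Erdős–Rényi giant component is the easiest of these transitions to poke at numerically: at mean degree $c$ the giant-component fraction $S$ solves $S = 1 - e^{-cS}$, which has no positive solution for $c \le 1$. A quick sketch using networkx; the graph size and the grid of mean degrees are arbitrary:

```python
import networkx as nx

# Largest-component fraction of G(n, p) as the mean degree c = p (n - 1)
# crosses 1; the jump from near zero to a constant fraction is the transition.
n = 20_000
for c in [0.5, 0.8, 1.0, 1.2, 1.5, 2.0]:
    G = nx.gnp_random_graph(n, c / (n - 1), seed=0)
    giant = max(nx.connected_components(G), key=len)
    print(f"mean degree {c:.1f}: giant fraction {len(giant) / n:.3f}")
```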
2 Replicator equations and evolutionary processes
See also evolution, game theory.
Gentle intro lecture by John Baez, Biology as Information Dynamics.
See (Baez 2011; Harper 2009; Shalizi 2009; Sinervo and Lively 1996).
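For reference, the replicator equation for strategy frequencies $x$ under payoff matrix $A$ is $\dot{x}_i = x_i\left((Ax)_i - x^\top A x\right)$. A minimal sketch, integrating it for a rock–paper–scissors payoff, loosely in the spirit of the lizard-morph cycles in Sinervo and Lively (1996); the payoff entries, initial frequencies, and step size are made up:

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of dx_i/dt = x_i * ((A x)_i - x . A x)."""
    fitness = A @ x
    mean_fitness = x @ fitness
    x = x + dt * x * (fitness - mean_fitness)
    return x / x.sum()  # guard against drift from the crude Euler step

# Rock-paper-scissors payoff: each strategy beats one and loses to another.
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])

x = np.array([0.6, 0.3, 0.1])   # arbitrary initial frequencies
for t in range(5001):
    x = replicator_step(x, A)
    if t % 1000 == 0:
        print(t, np.round(x, 3))
```

The frequencies should cycle around the interior point $(1/3, 1/3, 1/3)$ rather than settle down, which is the qualitative signature of these evolutionary games.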
3 Grokking
Neel Nanda, Tom Lieberum, A Mechanistic Interpretability Analysis of Grokking
Grokking (Power et al. 2022) is a recent phenomenon discovered by OpenAI researchers that, in my opinion, is one of the most fascinating mysteries in deep learning: models trained on small algorithmic tasks like modular addition will initially memorise the training data, but after a long time will suddenly learn to generalise to unseen data.
This is a write-up of an independent research project I did into understanding grokking through the lens of mechanistic interpretability. My most important claim is that grokking has a deep relationship to phase changes. Phase changes, i.e. a sudden change in the model’s performance for some capability during training, are a general phenomenon that occurs when training models and has also been observed in large models trained on non-toy tasks, for example the sudden change in a transformer’s capacity to do in-context learning when it forms induction heads. In this work I examine several toy settings where a model trained to solve them exhibits a phase change in test loss, regardless of how much data it is trained on. I show that if a model is trained on these tasks with limited data and high regularisation, then it shows grokking.
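The recipe in that last sentence is easy to sketch: take a small algorithmic task such as modular addition, hold back most of the table, and train a small model with heavy weight decay for far longer than it takes to fit the training set. Below is my own toy version in PyTorch, not the setup from the write-up or from Power et al. (2022); the architecture and hyperparameters are arbitrary, and whether and when the delayed jump in test accuracy appears is sensitive to them.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
P = 97                # modulus; the task is (a + b) mod P
FRAC_TRAIN = 0.3      # "limited data" is part of the recipe

# The full addition table, split into a small train set and a large test set.
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
n_train = int(FRAC_TRAIN * len(pairs))
train_idx, test_idx = perm[:n_train], perm[n_train:]

def one_hot(batch):
    """Concatenate one-hot encodings of a and b into one input vector."""
    return torch.cat([nn.functional.one_hot(batch[:, 0], P),
                      nn.functional.one_hot(batch[:, 1], P)], dim=1).float()

model = nn.Sequential(nn.Linear(2 * P, 256), nn.ReLU(), nn.Linear(256, P))
# Heavy weight decay plays the role of "high regularisation" in the quote.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx):
    with torch.no_grad():
        preds = model(one_hot(pairs[idx])).argmax(dim=1)
    return (preds == labels[idx]).float().mean().item()

X_train, y_train = one_hot(pairs[train_idx]), labels[train_idx]
for step in range(20_000):  # keep training long after train accuracy saturates
    opt.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        print(step, f"train {accuracy(train_idx):.2f}",
              f"test {accuracy(test_idx):.2f}")
```

The signature to look for is train accuracy hitting 1.0 early while test accuracy stays near chance for many thousands of steps before climbing.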
4 Annealing
See annealing.
5 Entropy vs information
6 Neural tangent kernel
The neural tangent kernel has been argued to fit into this category; see e.g. Cagnetta et al. (2023).