Coarse graining
Also, fluctuation theorems
November 12, 2014 — September 9, 2024
AFAICT, coarse graining is the physicists’ framing of the question “how much worse do your predictions get as you discard information in some orderly fashion?”
Do “renormalization groups”, whatever they are, fit in here? Fast-slow systems?
The ML equivalent seems to be multi-fidelity modelling.
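As a toy version of that opening question, here is a sketch (the transition matrix and the lumping scheme are made up for illustration): coarse-grain a 4-state Markov chain into 2 blocks, and measure how far the lumped model’s multi-step predictions drift from the exact ones.

```python
import numpy as np

# Hypothetical micro-scale Markov chain on 4 states (row-stochastic).
P = np.array([
    [0.6, 0.2, 0.1, 0.1],
    [0.3, 0.4, 0.2, 0.1],
    [0.1, 0.1, 0.5, 0.3],
    [0.2, 0.1, 0.3, 0.4],
])
# Coarse-grain by lumping states {0,1} -> A and {2,3} -> B.
A = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)

# Macro transition matrix: average the micro rows within each block,
# i.e. assume we only know the block, not the micro state inside it.
P_macro = (A.T / A.sum(axis=0)[:, None]) @ P @ A

x0 = np.array([1.0, 0.0, 0.0, 0.0])  # start in micro state 0
for k in (1, 5, 20):
    exact = x0 @ np.linalg.matrix_power(P, k) @ A            # evolve exactly, then lump
    coarse = (x0 @ A) @ np.linalg.matrix_power(P_macro, k)   # lump first, then evolve
    tv = 0.5 * np.abs(exact - coarse).sum()
    print(f"step {k}: total-variation error of the coarse model = {tv:.4f}")
```

The error is zero exactly when the chain is lumpable with respect to the chosen blocks; otherwise it quantifies what the discarded within-block information was worth.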
1 Fluctuation theorems
2 Green–Kubo relations
3 Persistent homology
What’s that? Petri et al. (2014):
Persistent homology is a recent technique in computational topology developed for shape recognition and the analysis of high-dimensional datasets.… The central idea is the construction of a sequence of successive approximations of the original dataset seen as a topological space X. This sequence of topological spaces \(X_0, X_1, \dots, X_N = X\) is such that \(X_i \subseteq X_j\) whenever \(i < j\) and is called the filtration.
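To make the filtration idea concrete, here is a minimal, library-free sketch of the dimension-0 case: every point is born as its own component at scale zero, edges appear at their pairwise distance (a Vietoris–Rips filtration), and a union-find records the scale at which components merge and die. The point cloud is made up; real toolkits such as ripser or GUDHI also compute higher-dimensional features (loops, voids).

```python
import numpy as np

def h0_persistence(points):
    """Dimension-0 persistence of a Vietoris-Rips filtration.

    Every point is its own component at scale 0; a component dies
    when an edge of the filtration merges it into another one.
    Returns a list of (birth, death) pairs.
    """
    n = len(points)
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    # Edges enter the filtration in order of length.
    edges = sorted((dist[i, j], i, j) for i in range(n) for j in range(i + 1, n))

    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    pairs = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri           # two components merge: one dies at scale d
            pairs.append((0.0, d))
    pairs.append((0.0, np.inf))       # the last component never dies
    return pairs

rng = np.random.default_rng(0)
# Two well-separated clusters: expect one long-lived finite H0 feature.
pts = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
for birth, death in sorted(h0_persistence(pts), key=lambda p: -p[1])[:3]:
    print(f"H0 feature: born {birth:.2f}, dies {death:.2f}")
```

The one finite feature that long outlives the rest reflects the two clusters; its death scale is roughly the gap between them.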
4 Links
5 Incoming
jkbren/einet: Uncertainty and causal emergence in complex networks
Python code for calculating effective information in networks. This can be used to search for macroscale representations of a network such that the coarse-grained representation has more effective information than the microscale, a phenomenon known as causal emergence. The code accompanies Klein and Hoel (2020).
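For a concrete feel, here is a minimal sketch of the network effective-information measure as I understand it from Klein and Hoel (2020): the entropy of the average out-distribution minus the average entropy of the per-node out-distributions. The function names are mine, not einet’s API, and the toy network is made up; it groups three redundant micro nodes into one macro node, and the macro network scores higher EI, i.e. causal emergence.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, ignoring zero entries."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def effective_information(W):
    """EI of a network with row-stochastic transition matrix W:
    H(average out-distribution) - average H(per-node out-distribution),
    the network-level definition used by Klein and Hoel (2020)."""
    W = np.asarray(W, dtype=float)
    W = W / W.sum(axis=1, keepdims=True)  # ensure rows are distributions
    return entropy(W.mean(axis=0)) - float(np.mean([entropy(row) for row in W]))

# Micro network: nodes 0-2 all feed node 3 (degenerate), node 3 feeds back.
micro = np.array([
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [1 / 3, 1 / 3, 1 / 3, 0],
], dtype=float)
# Macro network: lump the three redundant nodes {0,1,2} into one.
macro = np.array([
    [0.0, 1.0],
    [1.0, 0.0],
])
print(f"micro EI = {effective_information(micro):.3f} bits")  # ~0.811
print(f"macro EI = {effective_information(macro):.3f} bits")  # 1.000 > micro: emergence
```

The gain comes from removing degeneracy: three micro nodes with identical out-distributions carry redundant causal information that the macro node consolidates.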