AFAICT, this is the question ‘how much worse do your predictions get as you discard information in some orderly fashion?’, as framed by physicists.
Do “renormalisation groups”, whatever they are, fit in here? Fast-slow systems?
The ML equivalent seems to be multi-fidelity modelling.
Persistent Homology
What’s that? Petri et al. (2014):
Persistent homology is a recent technique in computational topology developed for shape recognition and the analysis of high dimensional datasets.… The central idea is the construction of a sequence of successive approximations of the original dataset seen as a topological space X. This sequence of topological spaces \(X_0, X_1, \dots{}, X_N = X\) is such that \(X_i \subseteq X_j\) whenever \(i < j\) and is called the filtration.
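To make the filtration idea concrete, here is a minimal sketch using the gudhi library (one of several TDA packages; ripser.py is another). The noisy-circle data and the persistence cutoff are illustrative choices of mine, not anything from Petri et al.; the point is just that the filtration is indexed by a distance threshold, and features that survive across many thresholds are the "persistent" ones.

```python
# Minimal sketch: build a Vietoris-Rips filtration on a synthetic point cloud
# and compute its persistent homology with gudhi. Data and cutoff are
# illustrative assumptions, not from the paper.
import numpy as np
import gudhi

# Sample points noisily from a circle; its single 1-dimensional hole should
# appear as one long-lived feature in the persistence diagram.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
points = np.column_stack([np.cos(theta), np.sin(theta)])
points += 0.05 * rng.normal(size=points.shape)

# The filtration X_0 ⊆ X_1 ⊆ ... ⊆ X_N = X is indexed by an edge-length
# threshold: a simplex enters the complex once all its pairwise distances
# fall below that threshold.
rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
simplex_tree = rips.create_simplex_tree(max_dimension=2)

# persistence() returns (dimension, (birth, death)) pairs: when each
# topological feature appears and disappears as the filtration grows.
diagram = simplex_tree.persistence()
for dim, (birth, death) in diagram:
    if death - birth > 0.5:  # keep only long-lived ("persistent") features
        print(f"H{dim}: born {birth:.3f}, dies {death:.3f}")
```

Running this should report one long-lived H0 feature (the point cloud is one connected blob) and one long-lived H1 feature (the circle's hole); everything else dies quickly and is usually treated as noise.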