Just saw the author present this in a lecture:
Here’s my summary of his talk:
It produces frequentist-esque machinery, but from different assumptions: there are no true models and no repeatability as such, yet you can construct approximations that are adequate for certain goals and come with certain kinds of guarantee (a rough sketch below). It’s hard to see how you would extract a law of nature in this framework, but it looks natural for machine-learning problems. Is the idea to assume there is no good model at all, rather than a contaminated one?
Obviously I don’t know enough about this to say anything definitive, but it looks interesting. However, it’s also a one-man shop.
Connection with learning theory?
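To make the “approximations with guarantees” idea concrete, here is a minimal sketch, not Davies’ actual procedure: call a candidate model *adequate* for a dataset if the data sit no further from the model (here, in Kolmogorov distance, my choice of discrepancy) than a typical sample drawn from the model itself. The function names and thresholding scheme are my own illustration.

```python
import numpy as np
from scipy import stats


def kolmogorov_distance(sample, cdf):
    """Sup distance between the empirical CDF of `sample` and a model CDF."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    f = cdf(x)
    grid = np.arange(1, n + 1) / n
    # Empirical CDF is a step function; check both sides of each jump.
    return max(np.max(grid - f), np.max(f - (grid - 1.0 / n)))


def is_adequate(data, model, alpha=0.05, n_sim=2000, seed=0):
    """Adequacy check (sketch): the model is adequate for the data if the
    data are no further from the model than a typical (level 1 - alpha)
    sample of the same size simulated from the model itself."""
    rng = np.random.default_rng(seed)
    n = len(data)
    d_obs = kolmogorov_distance(data, model.cdf)
    d_sim = np.array([
        kolmogorov_distance(model.rvs(size=n, random_state=rng), model.cdf)
        for _ in range(n_sim)
    ])
    return d_obs <= np.quantile(d_sim, 1.0 - alpha)


# Which Gaussians are adequate approximations to heavy-tailed data?
rng = np.random.default_rng(1)
data = rng.standard_t(df=5, size=200)
for mu in (-0.5, 0.0, 0.5):
    model = stats.norm(loc=mu, scale=data.std())
    print(f"N({mu}, s^2) adequate: {is_adequate(data, model)}")
```

The point of the exercise: several candidate models can pass, or none, and adequacy depends on the sample size and the chosen discrepancy, which matches the “approximations for certain goals” framing above; nothing here requires any model to be true.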
Davies, Laurie. 2014. Data Analysis and Approximate Models. Monographs on Statistics and Applied Probability 133. Boca Raton: CRC Press.
Davies, Laurie. 2016. “On p-Values,” November. http://arxiv.org/abs/1611.06168.
Davies, P. L. 2008. “Approximating Data.” Journal of the Korean Statistical Society 37 (3): 191–211. https://doi.org/10.1016/j.jkss.2008.03.004.
Davies, P. L., and M. Meise. 2008. “Approximating Data with Weighted Smoothing Splines.” Journal of Nonparametric Statistics 20 (3): 207–28. https://doi.org/10.1080/10485250801948625.