Produces a frequentist-esque machinery, but with different assumptions:
there are no true models and no repeatability as such,
yet you can construct approximations
for certain goals and with certain kinds of guarantees.
Hard to see how you would extract a law of nature in this framework,
but it looks natural for machine learning problems.
Assuming there is no good model at all, rather than a contaminated one?
Obviously I don’t know enough about this to say much, but it looks interesting.
However, it’s also a one-man shop.
Davies, Patrick Laurie. 2014. Data Analysis and Approximate Models: Model Choice, Location-Scale, Analysis of Variance, Nonparametric Regression and Image Analysis. Monographs on Statistics and Applied Probability. Boca Raton: CRC Press.