Uncertainty quantification


Using machine learning to make predictions, together with a measure of how much confidence we should place in those predictions.

Bayes

Bayesian methods, e.g. Gaussian processes, have uncertainty quantification baked in: the posterior predictive distribution comes with credible intervals for free.
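
For instance, here is a minimal sketch of GP predictive uncertainty using scikit-learn’s GaussianProcessRegressor; the toy data and the kernel choice are placeholders, not recommendations:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(30, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(30)

# WhiteKernel lets the GP infer observation noise alongside the RBF length scale.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

X_test = np.linspace(0.0, 10.0, 200)[:, None]
mean, std = gp.predict(X_test, return_std=True)      # posterior predictive mean and std
lower, upper = mean - 1.96 * std, mean + 1.96 * std  # pointwise ~95% credible band
```

The point is that the interval comes from the model’s own posterior rather than a bolted-on procedure; how trustworthy it is depends on the kernel and noise assumptions being roughly right.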

Physical model setting

PEST, PEST++, and pyemu are integrated systems for uncertainty quantification of environmental models (Doherty 2015; Welter et al. 2015; White, Fienen, and Doherty 2016b). They use some idiosyncratic terminology, such as FOSM (first-order, second-moment) analysis, which at first glance resembles LIME-style model interpretation. The difficulty they address is that these models have complicated physical dynamics which are hard to handle directly, whereas a surrogate model may be more tractable; there are tricks for approximating the uncertainty of the full model via the surrogate that I am only now learning about (Burrows and Doherty 2015; Gladish et al. 2018).
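
The core FOSM calculation is simple enough to sketch: linearize the model about its calibrated parameters and propagate Gaussian uncertainty through the Jacobian. Here is a hedged numpy sketch under those assumptions; the function name and argument layout are my own, though pyemu’s Schur class wraps this same linear algebra (plus a lot of PEST-file bookkeeping):

```python
import numpy as np

def fosm_posterior_cov(J, prior_cov, noise_cov):
    """First-order, second-moment update of parameter uncertainty (a sketch).

    J         : (n_obs, n_par) Jacobian of model outputs w.r.t. parameters,
                evaluated at the calibrated parameter values.
    prior_cov : (n_par, n_par) prior parameter covariance.
    noise_cov : (n_obs, n_obs) observation noise covariance.
    """
    # Covariance of the observations implied by the prior plus noise.
    S = J @ prior_cov @ J.T + noise_cov
    # Schur complement: the data can only reduce parameter variance.
    gain = prior_cov @ J.T @ np.linalg.solve(S, J @ prior_cov)
    return prior_cov - gain
```

Because everything here is linear and Gaussian, the update is cheap even with many parameters, which is presumably why PEST-style highly parameterized workflows lean on it.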

Conformal prediction

Predicting with confidence: the best machine learning idea you never heard of:

The essential idea is that a “conformity function” exists which scores how unusual a new observation looks relative to the data you have already seen. Effectively you use it to construct a sort of multivariate cumulative distribution function for your machine learning gizmo. Such CDFs exist for classical methods like ARIMA and linear regression under the right circumstances; conformal prediction (CP) brings the idea to machine learning in general, and to models like ARIMA when the standard parametric confidence intervals won’t work. Within the framework, the conformity function, whatever it may be, when used correctly is guaranteed to give prediction intervals with valid coverage to within a probabilistic tolerance, assuming only that the data are exchangeable.
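
To make that concrete, here is a sketch of the simplest variant, split conformal prediction, with absolute residuals as the conformity function; the random-forest regressor and the toy data are arbitrary stand-ins:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(1000, 1))
y = X.ravel() ** 2 + rng.standard_normal(1000)

# Split off a calibration set that the model never trains on.
X_train, X_cal, y_train, y_cal = train_test_split(
    X, y, test_size=0.5, random_state=0
)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Conformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))

# Finite-sample-corrected quantile for (1 - alpha) coverage.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction intervals for new points.
X_new = np.array([[0.0], [2.0]])
pred = model.predict(X_new)
lower, upper = pred - q, pred + q
```

The resulting intervals have marginal coverage of at least 1 − α over exchangeable data no matter how bad the underlying regressor is; the price of a poor model is wide intervals, not invalid ones.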


References

Burrows, Wesley, and John Doherty. 2015. “Efficient Calibration/Uncertainty Analysis Using Paired Complex/Surrogate Models.” Groundwater 53 (4): 531–41. https://doi.org/10.1111/gwat.12257.
Doherty, John. 2015. Calibration and Uncertainty Analysis for Complex Environmental Models.
Gladish, Daniel W., Daniel E. Pagendam, Luk J. M. Peeters, Petra M. Kuhnert, and Jai Vaze. 2018. “Emulation Engines: Choice and Quantification of Uncertainty for Complex Hydrological Models.” Journal of Agricultural, Biological and Environmental Statistics 23 (1): 39–62. https://doi.org/10.1007/s13253-017-0308-3.
Gratiet, Loïc Le, Stefano Marelli, and Bruno Sudret. 2016. “Metamodel-Based Sensitivity Analysis: Polynomial Chaos Expansions and Gaussian Processes.” In Handbook of Uncertainty Quantification, edited by Roger Ghanem, David Higdon, and Houman Owhadi, 1–37. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-11259-6_38-1.
Jarvenpaa, Marko, Aki Vehtari, and Pekka Marttinen. 2020. “Batch Simulations and Uncertainty Quantification in Gaussian Process Surrogate Approximate Bayesian Computation.” In Conference on Uncertainty in Artificial Intelligence, 779–88. PMLR. http://proceedings.mlr.press/v124/jarvenpaa20a.html.
Kasim, M. F., D. Watson-Parris, L. Deaconu, S. Oliver, P. Hatfield, D. H. Froula, G. Gregori, et al. 2020. “Up to Two Billion Times Acceleration of Scientific Simulations with Deep Neural Architecture Search.” January 17, 2020. http://arxiv.org/abs/2001.08055.
Smith, Leonard A. 2000. “Disentangling Uncertainty and Error: On the Predictability of Nonlinear Systems.” In Nonlinear Dynamics and Statistics.
Stuart, A. M. 2010. “Inverse Problems: A Bayesian Perspective.” Acta Numerica 19: 451–559. https://doi.org/10.1017/S0962492910000061.
Welter, David E., Jeremy T. White, Randall J. Hunt, and John E. Doherty. 2015. “Approaches in Highly Parameterized Inversion—PEST++ Version 3, a Parameter ESTimation and Uncertainty Analysis Software Suite Optimized for Large Environmental Models.” USGS Numbered Series 7-C12. Techniques and Methods. Reston, VA: U.S. Geological Survey. https://doi.org/10.3133/tm7C12.
White, Jeremy T., Michael N. Fienen, and John E. Doherty. 2016a. pyEMU: A Python Framework for Environmental Model Uncertainty Analysis Version .01. U.S. Geological Survey. https://doi.org/10.5066/F75D8Q01.
———. 2016b. “A Python Framework for Environmental Model Uncertainty Analysis.” Environmental Modelling & Software 85 (November): 217–28. https://doi.org/10.1016/j.envsoft.2016.08.017.