Uncertainty quantification



Using machine learning to make predictions, with a measure of the confidence of those predictions.

Taxonomy

Should clarify; TBD. A useful recent reference on the theme is Kendall and Gal (2017), which disentangles aleatoric and epistemic uncertainty. Model uncertainty also deserves a mention.

DUQ networks

Amersfoort et al. (2020); Kendall and Gal (2017). DUQ (Deterministic Uncertainty Quantification) estimates uncertainty in a single forward pass of a deterministic network, using the distance of an input to learned class centroids, rather than an ensemble or Bayesian posterior.

Bayes

Bayesian methods have a notion of uncertainty baked in. One can get a fair way with, e.g., Gaussian process regression or probabilistic neural networks.
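
As a concrete illustration, here is a minimal sketch of how Gaussian process regression hands back a predictive standard deviation alongside the point prediction. It assumes scikit-learn and uses synthetic data, so treat it as flavour rather than a recipe:

```python
# A minimal sketch, assuming scikit-learn is installed; data are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(40, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)

# RBF kernel for the signal plus a WhiteKernel to absorb observation noise.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)

X_test = np.linspace(0, 10, 200).reshape(-1, 1)
mean, std = gpr.predict(X_test, return_std=True)     # predictive mean and std dev
lower, upper = mean - 1.96 * std, mean + 1.96 * std   # approximate 95% band
```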

Physical model calibration

PEST, PEST++, and pyemu are integrated systems for uncertainty quantification that use some idiosyncratic terminology, such as FOSM (first-order, second-moment) analysis. I think these are best considered inverse-problem solvers, with the uncertainty quantification arriving as a side effect of the inversion.
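
To make the FOSM idea concrete, here is a minimal numpy sketch of the linearised-Bayes posterior covariance that such analyses compute; the Jacobian and covariance matrices below are illustrative stand-ins, not output from any real model or from the PEST toolchain itself:

```python
# A minimal numpy sketch of linearised (FOSM) uncertainty analysis; the Jacobian
# and covariance matrices are illustrative stand-ins, not real model output.
import numpy as np

J = np.array([[1.0, 0.5],
              [0.2, 1.5],
              [0.8, 0.1]])          # sensitivities d(observation)/d(parameter)
C_prior = np.diag([1.0, 4.0])       # prior parameter covariance
C_obs = 0.1 * np.eye(3)             # observation noise covariance

# Linear-Bayes posterior parameter covariance,
#   C_post = (J^T C_obs^-1 J + C_prior^-1)^-1,
# equivalent to the Schur-complement form used in FOSM analyses.
C_post = np.linalg.inv(J.T @ np.linalg.inv(C_obs) @ J + np.linalg.inv(C_prior))
print(np.sqrt(np.diag(C_post)))     # posterior parameter standard deviations
```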

Conformal prediction

See conformal prediction.

Chaos expansions

See chaos expansions.

Uncertainty Quantification 360

IBM’s Uncertainty Quantification 360 toolkit (Ghosh et al. 2021) provides a summary of popular generic methods:

  • Auxiliary Interval Predictor

Use an auxiliary model to improve the calibration of UQ generated by the original model.

  • Blackbox Metamodel Classification

Extract confidence scores from trained black-box classification models using a meta-model.

  • Blackbox Metamodel Regression

Extract prediction intervals from trained black-box regression models using a meta-model.

  • Classification Calibration

Post-hoc calibration of classification models using Isotonic Regression and Platt Scaling (see the calibration sketch after this list).

  • Heteroscedastic Regression

Train regression models that capture data uncertainty, assuming the targets are noisy and the amount of noise varies between data points.

  • Homoscedastic Gaussian Process Regression

Train Gaussian Process Regression models with homoscedastic noise that capture data and model uncertainty.

  • Horseshoe BNN classification

Train Bayesian neural network classifiers with Gaussian and Horseshoe priors that capture data and model uncertainty.

  • Horseshoe BNN regression

Train Bayesian neural network (BNN) regression models with Gaussian and Horseshoe priors that capture data and model uncertainty.

  • Infinitesimal Jackknife

Extract uncertainty from trained models by approximating the effect of training data perturbations on the model’s predictions.

  • Quantile Regression

Train quantile regression models that capture data uncertainty by learning separate models for the upper and lower quantiles, which together give a prediction interval (see the quantile sketch after this list).

  • UCC Recalibration

Recalibrate the UQ of a regression model to a specified operating point using the Uncertainty Characteristics Curve.
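
To give the flavour of the post-hoc classification calibration entry above without reaching for the UQ360 API itself, here is a minimal scikit-learn sketch on synthetic data:

```python
# A minimal sketch of post-hoc classification calibration with scikit-learn,
# not the UQ360 API; data are synthetic.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# method="isotonic" fits a monotone map from scores to probabilities;
# method="sigmoid" would give Platt scaling instead.
calibrated = CalibratedClassifierCV(
    LinearSVC(max_iter=10_000), method="isotonic", cv=5)
calibrated.fit(X_train, y_train)
probs = calibrated.predict_proba(X_test)  # calibrated class probabilities
```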

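Likewise, the quantile-regression recipe above can be sketched with scikit-learn’s gradient boosting and the pinball loss, again without the UQ360 API and with made-up data:

```python
# A minimal sketch of prediction intervals via quantile regression: one model
# per quantile, trained with the pinball loss (scikit-learn assumed; toy data).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X).ravel() + rng.standard_normal(500) * (0.1 + 0.05 * X.ravel())

lower = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
upper = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)

X_new = np.linspace(0, 10, 5).reshape(-1, 1)
interval = np.column_stack([lower.predict(X_new), upper.predict(X_new)])  # ~90% band
```
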
They provide guidance on method selection in the manual.

References

Alvarsson, Jonathan, Staffan Arvidsson McShane, Ulf Norinder, and Ola Spjuth. 2021. “Predicting With Confidence: Using Conformal Prediction in Drug Discovery.” Journal of Pharmaceutical Sciences 110 (1): 42–49.
Amersfoort, Joost Van, Lewis Smith, Yee Whye Teh, and Yarin Gal. 2020. “Uncertainty Estimation Using a Single Deep Deterministic Neural Network.” In International Conference on Machine Learning, 9690–700. PMLR.
Bhatt, Umang, Javier Antorán, Yunfeng Zhang, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Gauthier Melançon, et al. 2021. “Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty.” arXiv:2011.07586 [Cs], May.
Bishop, Christopher. 1994. “Mixture Density Networks.” Microsoft Research, January.
Burrows, Wesley, and John Doherty. 2015. “Efficient Calibration/Uncertainty Analysis Using Paired Complex/Surrogate Models.” Groundwater 53 (4): 531–41.
Chen, Yan, and Dean S. Oliver. 2013. “Levenberg–Marquardt Forms of the Iterative Ensemble Smoother for Efficient History Matching and Uncertainty Quantification.” Computational Geosciences 17 (4): 689–703.
Chipman, Hugh A, Edward I George, and Robert E McCulloch. 2006. “Bayesian Ensemble Learning.” In, 8.
Daxberger, Erik, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, and Philipp Hennig. 2021. “Laplace Redux — Effortless Bayesian Deep Learning.” In arXiv:2106.14806 [Cs, Stat].
Doherty, John. 2015. Calibration and uncertainty analysis for complex environmental models.
Gal, Yarin, and Zoubin Ghahramani. 2015. “Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning.” In Proceedings of the 33rd International Conference on Machine Learning (ICML-16).
———. 2016. “Dropout as a Bayesian Approximation: Appendix.” arXiv:1506.02157 [Stat], May.
Ghosh, Soumya, Q. Vera Liao, Karthikeyan Natesan Ramamurthy, Jiri Navratil, Prasanna Sattigeri, Kush R. Varshney, and Yunfeng Zhang. 2021. “Uncertainty Quantification 360: A Holistic Toolkit for Quantifying and Communicating the Uncertainty of AI.” arXiv:2106.01410 [Cs], June.
Gladish, Daniel W., Daniel E. Pagendam, Luk J. M. Peeters, Petra M. Kuhnert, and Jai Vaze. 2018. “Emulation Engines: Choice and Quantification of Uncertainty for Complex Hydrological Models.” Journal of Agricultural, Biological and Environmental Statistics 23 (1): 39–62.
Gratiet, Loïc Le, Stefano Marelli, and Bruno Sudret. 2016. “Metamodel-Based Sensitivity Analysis: Polynomial Chaos Expansions and Gaussian Processes.” In Handbook of Uncertainty Quantification, edited by Roger Ghanem, David Higdon, and Houman Owhadi, 1–37. Cham: Springer International Publishing.
Higdon, Dave, James Gattiker, Brian Williams, and Maria Rightley. 2008. “Computer Model Calibration Using High-Dimensional Output.” Journal of the American Statistical Association 103 (482): 570–83.
Hooten, Mevin B., William B. Leeds, Jerome Fiechter, and Christopher K. Wikle. 2011. “Assessing First-Order Emulator Inference for Physical Parameters in Nonlinear Mechanistic Models.” Journal of Agricultural, Biological, and Environmental Statistics 16 (4): 475–94.
Jarvenpaa, Marko, Aki Vehtari, and Pekka Marttinen. 2020. “Batch Simulations and Uncertainty Quantification in Gaussian Process Surrogate Approximate Bayesian Computation.” In Conference on Uncertainty in Artificial Intelligence, 779–88. PMLR.
Kasim, M. F., D. Watson-Parris, L. Deaconu, S. Oliver, P. Hatfield, D. H. Froula, G. Gregori, et al. 2020. “Up to Two Billion Times Acceleration of Scientific Simulations with Deep Neural Architecture Search.” arXiv:2001.08055 [Physics, Stat], January.
Kendall, Alex, and Yarin Gal. 2017. “What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?” March.
Kingma, Diederik P., Tim Salimans, and Max Welling. 2015. “Variational Dropout and the Local Reparameterization Trick.” In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, 2575–83. NIPS’15. Cambridge, MA, USA: MIT Press.
Kristiadi, Agustinus, Matthias Hein, and Philipp Hennig. 2021. “Learnable Uncertainty Under Laplace Approximations.” In Uncertainty in Artificial Intelligence.
Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. 2017. “Simple and Scalable Predictive Uncertainty Estimation Using Deep Ensembles.” In Proceedings of the 31st International Conference on Neural Information Processing Systems, 6405–16. NIPS’17. Red Hook, NY, USA: Curran Associates Inc.
Minka, Thomas P. 2001. “Expectation Propagation for Approximate Bayesian Inference.” In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, 362–69. UAI’01. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Mukhoti, Jishnu, Andreas Kirsch, Joost van Amersfoort, Philip H. S. Torr, and Yarin Gal. 2021. “Deterministic Neural Networks with Inductive Biases Capture Epistemic and Aleatoric Uncertainty,” February.
O’Hagan, Anthony. 2013. “Polynomial Chaos: A Tutorial and Critique from a Statistician’s Perspective,” 20.
Pestourie, Raphaël, Youssef Mroueh, Thanh V. Nguyen, Payel Das, and Steven G. Johnson. 2020. “Active Learning of Deep Surrogates for PDEs: Application to Metasurface Design.” Npj Computational Materials 6 (1): 1–7.
Sacks, Jerome, Susannah B. Schiller, and William J. Welch. 1989. “Designs for Computer Experiments.” Technometrics 31 (1): 41–47.
Sacks, Jerome, William J. Welch, Toby J. Mitchell, and Henry P. Wynn. 1989. “Design and Analysis of Computer Experiments.” Statistical Science 4 (4): 409–23.
Shafer, Glenn, and Vladimir Vovk. 2008. “A Tutorial on Conformal Prediction.” Journal of Machine Learning Research 9 (12): 371–421.
Siade, Adam J., Mario Putti, and William W. G. Yeh. 2010. “Snapshot selection for groundwater model reduction using proper orthogonal decomposition.” Water Resources Research 46 (8): W08539.
Smith, Leonard A. 2000. “Disentangling Uncertainty and Error: On the Predictability of Nonlinear Systems.” In Nonlinear Dynamics and Statistics.
Stuart, Andrew M. 2010. “Inverse Problems: A Bayesian Perspective.” Acta Numerica 19: 451–559.
Tibshirani, Ryan J, Rina Foygel Barber, Emmanuel Candes, and Aaditya Ramdas. 2019. “Conformal Prediction Under Covariate Shift.” In Advances in Neural Information Processing Systems. Vol. 32. Curran Associates, Inc.
Tonkin, Matthew, and John Doherty. 2009. “Calibration-Constrained Monte Carlo Analysis of Highly Parameterized Models Using Subspace Techniques.” Water Resources Research 45 (12).
Ventola, Fabrizio, Steven Braun, Zhongjie Yu, Martin Mundt, and Kristian Kersting. 2023. “Probabilistic Circuits That Know What They Don’t Know.” arXiv.org.
Vovk, Vladimir, Alex Gammerman, and Glenn Shafer. 2005. Algorithmic Learning in a Random World. Springer Science & Business Media.
Welter, David E., Jeremy T. White, Randall J. Hunt, and John E. Doherty. 2015. “Approaches in Highly Parameterized Inversion—PEST++ Version 3, a Parameter ESTimation and Uncertainty Analysis Software Suite Optimized for Large Environmental Models.” USGS Numbered Series 7-C12. Techniques and Methods. Reston, VA: U.S. Geological Survey.
Wen, Yeming, Dustin Tran, and Jimmy Ba. 2020. “BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning.” In ICLR.
White, Jeremy T. 2018. “A Model-Independent Iterative Ensemble Smoother for Efficient History-Matching and Uncertainty Quantification in Very High Dimensions.” Environmental Modelling & Software 109 (November): 191–201.
White, Jeremy T., Michael N. Fienen, and John E. Doherty. 2016a. “pyEMU: A Python Framework for Environmental Model Uncertainty Analysis Version .01.” U.S. Geological Survey.
———. 2016b. “A Python Framework for Environmental Model Uncertainty Analysis.” Environmental Modelling & Software 85 (November): 217–28.
White, Jeremy T., Randall J. Hunt, Michael N. Fienen, and John E. Doherty. 2020. “Approaches to Highly Parameterized Inversion: PEST++ Version 5, a Software Suite for Parameter Estimation, Uncertainty Analysis, Management Optimization and Sensitivity Analysis.” USGS Numbered Series 7-C26. Techniques and Methods. Reston, VA: U.S. Geological Survey.
Zeni, Gianluca, Matteo Fontana, and Simone Vantini. 2020. “Conformal Prediction: A Unified Review of Theory and New Challenges.” arXiv:2005.07972 [Cs, Econ, Stat], May.
Zhang, Dongkun, Lu Lu, Ling Guo, and George Em Karniadakis. 2019. “Quantifying Total Uncertainty in Physics-Informed Neural Networks for Solving Forward and Inverse Stochastic Problems.” Journal of Computational Physics 397 (November): 108850.
