Using machine learning to make predictions, with a measure of confidence in those predictions.

## Taxonomy

Should clarify. TBD. A recent reference on the theme is Kendall and Gal (2017), which distinguishes aleatoric from epistemic uncertainty. Also worth mentioning: model uncertainty.

## Bayes

Bayesian methods have some notion of uncertainty baked in. You can get a long way with, e.g., Gaussian process regression or probabilistic NNs.
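For concreteness, here is a minimal sketch of the Gaussian process route using scikit-learn on toy 1-D data; the kernel choice and the 1.96-sigma band are illustrative assumptions, not a recommendation.

```python
# Predictive uncertainty from Gaussian process regression (toy example).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)

# WhiteKernel lets the GP estimate observation noise alongside the RBF signal.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)  # pointwise predictive sd
lower, upper = mean - 1.96 * std, mean + 1.96 * std  # ~95% credible band
```

The `return_std=True` flag is what distinguishes this from a plain point predictor: you get a posterior standard deviation at every query point for free.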

## Physical model setting

PEST, PEST++, and pyemu are integrated systems for uncertainty quantification that use some idiosyncratic terminology, such as FOSM (first-order, second-moment) models, which at first glance resemble LIME-style local regression model interpretations. The common thread is that these models have complicated physical dynamics which are hard to handle directly, but a surrogate model might be more tractable.
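The FOSM idea itself is simple enough to sketch outside any of those packages: linearize the forward model around the calibrated parameters and push the parameter covariance through the Jacobian, so that \(C_y \approx J C_p J^\top\). The forward model and the covariance values below are made up for illustration.

```python
# First-order, second-moment (FOSM) uncertainty propagation, from scratch.
import numpy as np

def forward_model(p):
    # Stand-in for an expensive physical simulator; p = (a, b).
    a, b = p
    t = np.linspace(0.0, 1.0, 4)
    return a * np.exp(-b * t)

def jacobian(f, p, eps=1e-6):
    """Forward-difference Jacobian of f at p."""
    p = np.asarray(p, dtype=float)
    f0 = f(p)
    J = np.empty((f0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (f(p + dp) - f0) / eps
    return J

p_hat = np.array([2.0, 0.5])     # calibrated parameter estimate (assumed)
C_p = np.diag([0.04, 0.01])      # parameter covariance (assumed)
J = jacobian(forward_model, p_hat)
C_y = J @ C_p @ J.T              # first-order prediction covariance
pred_sd = np.sqrt(np.diag(C_y))  # 1-sigma prediction uncertainty
```

This is cheap because it needs only one Jacobian, not an ensemble of model runs, which is precisely why PEST-family tools lean on it for expensive simulators.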

## Conformal prediction

Predicting with confidence: the best machine learning idea you never heard of:

The essential idea is that a “conformity function” exists. Effectively you are constructing a sort of multivariate cumulative distribution function for your machine learning gizmo using the conformity function. Such CDFs exist for classical stuff like ARIMA and linear regression under the correct circumstances; CP brings the idea to machine learning in general, and to models like ARIMA when the standard parametric confidence intervals won’t work.
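The essential recipe can be sketched in a few lines as split conformal prediction: hold out a calibration set, use absolute residuals as the conformity scores, and take their finite-sample-corrected quantile as the interval half-width. Function and variable names here are illustrative, not from any library.

```python
# Minimal split-conformal sketch: distribution-free intervals around any
# point predictor, with ~(1 - alpha) marginal coverage under exchangeability.
import numpy as np

def split_conformal_interval(predict, X_cal, y_cal, X_new, alpha=0.1):
    """Return (lower, upper) interval bounds for the points in X_new."""
    scores = np.abs(y_cal - predict(X_cal))  # conformity scores
    n = len(scores)
    # finite-sample correction: the ceil((n+1)(1-alpha))/n quantile
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    pred = predict(X_new)
    return pred - q, pred + q

# Toy usage: a deliberately crude "model" that predicts a constant.
rng = np.random.default_rng(1)
y_cal = rng.normal(0.0, 1.0, size=200)
X_cal = np.zeros((200, 1))
X_new = np.zeros((5, 1))
predict = lambda X: np.full(len(X), y_cal.mean())
lo, hi = split_conformal_interval(predict, X_cal, y_cal, X_new, alpha=0.1)
```

Note that nothing here depends on what `predict` is; the coverage guarantee comes from the exchangeability of the calibration and test points, which is exactly why the dataset-shift question below matters.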

Hmm. Perhaps see (“Predicting With Confidence: Using Conformal Prediction in Drug Discovery” 2021; Shafer and Vovk 2008; Zeni, Fontana, and Vantini 2020). Question: how well does this work under dataset shift? (Tibshirani et al. 2019).

## Chaos expansions

See chaos expansions.

## Uncertainty Quantification 360

IBM’s Uncertainty Quantification 360 toolkit is both a handy software library and a summary of popular generic methods:

- Auxiliary Interval Predictor
Use an auxiliary model to improve the calibration of UQ generated by the original model.

- Blackbox Metamodel Classification
Extract confidence scores from trained black-box classification models using a meta-model.

- Blackbox Metamodel Regression
Extract prediction intervals from trained black-box regression models using a meta-model.

- Classification Calibration
Post-hoc calibration of classification models using Isotonic Regression and Platt Scaling.

- Heteroscedastic Regression
Fit a regression model whose predicted noise variance varies with the input.

- Homoscedastic Gaussian Process Regression
Gaussian process regression with a constant, input-independent noise variance.

- Horseshoe BNN classification
Bayesian neural network classification with sparsity-inducing horseshoe priors.

- Horseshoe BNN regression
Bayesian neural network regression with sparsity-inducing horseshoe priors.

- Infinitesimal Jackknife
Approximate resampling-style (jackknife/bootstrap) uncertainty without retraining the model.

- Quantile Regression
Predict conditional quantiles directly to form prediction intervals.

- UCC Recalibration
Recalibrate prediction intervals to a chosen operating point on the Uncertainty Characteristics Curve.
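The post-hoc classification calibration entry in the list can be illustrated with scikit-learn as a stand-in for UQ360; `method="sigmoid"` is Platt scaling, and swapping in `method="isotonic"` gives isotonic regression instead.

```python
# Post-hoc calibration of a classifier's probabilities (Platt scaling).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.calibration import CalibratedClassifierCV

X, y = make_classification(n_samples=600, random_state=0)

# Fit the base classifier and the calibrator with internal cross-validation.
calibrated = CalibratedClassifierCV(GaussianNB(), method="sigmoid", cv=3)
calibrated.fit(X, y)
probs = calibrated.predict_proba(X)[:, 1]  # calibrated class-1 probabilities
```

The point of the exercise is that the raw scores of many classifiers (naive Bayes especially) are overconfident; the calibrator remaps them so that, e.g., events scored at 0.8 actually occur about 80% of the time.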

They provide guidance on method selection in the manual.

## References

*International Conference on Machine Learning*, 9690–700. PMLR.

*arXiv:2011.07586 [Cs]*, May.

*Microsoft Research*, January.

*Groundwater* 53 (4): 531–41.

*arXiv:2106.14806 [Cs, Stat]*.

*Calibration and uncertainty analysis for complex environmental models*.

*Proceedings of the 33rd International Conference on Machine Learning (ICML-16)*.

*arXiv:1506.02157 [Stat]*, May.

*arXiv:2106.01410 [Cs]*, June.

*Journal of Agricultural, Biological and Environmental Statistics* 23 (1): 39–62.

*Handbook of Uncertainty Quantification*, edited by Roger Ghanem, David Higdon, and Houman Owhadi, 1–37. Cham: Springer International Publishing.

*Journal of the American Statistical Association* 103 (482): 570–83.

*Journal of Agricultural, Biological, and Environmental Statistics* 16 (4): 475–94.

*Conference on Uncertainty in Artificial Intelligence*, 779–88. PMLR.

*arXiv:2001.08055 [Physics, Stat]*, January.

*Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2*, 2575–83. NIPS’15. Cambridge, MA, USA: MIT Press.

*Uncertainty in Artificial Intelligence*.

*Proceedings of the 31st International Conference on Neural Information Processing Systems*, 6405–16. NIPS’17. Red Hook, NY, USA: Curran Associates Inc.

*Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence*, 362–69. UAI’01. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.

*Npj Computational Materials* 6 (1): 1–7.

*Journal of Pharmaceutical Sciences* 110 (1): 42–49.

*Technometrics* 31 (1): 41–47.

*Statistical Science* 4 (4): 409–23.

*Journal of Machine Learning Research* 9 (12): 371–421.

*Water Resources Research* 46 (8): W08539.

*Nonlinear Dynamics and Statistics*.

*Acta Numerica* 19: 451–559.

*Water Resources Research* 45 (12).

*Algorithmic Learning in a Random World*. Springer Science & Business Media.

*ICLR*.

*Environmental Modelling & Software* 109 (November): 191–201.

*pyEMU: A Python Framework for Environmental Model Uncertainty Analysis Version .01*. U.S. Geological Survey.

*Environmental Modelling & Software* 85 (November): 217–28.

*arXiv:2005.07972 [Cs, Econ, Stat]*, May.

*Journal of Computational Physics* 397 (November): 108850.
