Bayes neural nets via subsetting weights

January 12, 2017 — July 3, 2023

Tags: Bayes, convolution, density, likelihood free, machine learning, neural nets, nonparametric, sparser than thou, uncertainty

Bayes NNs where only some weights are random and the rest are fixed. This raises various difficulties. For a start, how do you update a fixed parameter?
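To fix ideas, here is a minimal sketch of such a partially stochastic net: a toy one-hidden-layer regressor in which the hidden layer is deterministic and only the output weights carry a posterior (here a mean-field Gaussian, purely for illustration). The names and the particular split are my assumptions, not anyone's canonical recipe; predictions are Monte Carlo averages over draws of the random subset only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-hidden-layer regressor. The hidden layer is deterministic
# (a point estimate, fit by SGD); only the 16 output weights are
# random, with an assumed mean-field Gaussian posterior.
W1 = rng.normal(size=(16, 1))              # deterministic weights
b1 = np.zeros(16)                          # deterministic biases
mu_w2 = np.zeros(16)                       # variational posterior means
log_sig_w2 = np.full(16, -1.0)             # variational posterior log-stds

def predict(x, n_samples=100):
    """Monte Carlo predictive mean/std, sampling only the random subset."""
    h = np.tanh(x @ W1.T + b1)             # (n, 16), deterministic pass
    eps = rng.normal(size=(n_samples, 16))
    w2 = mu_w2 + np.exp(log_sig_w2) * eps  # (n_samples, 16) weight draws
    y = h @ w2.T                           # (n, n_samples) sampled predictions
    return y.mean(axis=1), y.std(axis=1)

x = np.linspace(-2, 2, 5)[:, None]
mean, std = predict(x)
print(np.c_[mean, std])
```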

1 Is this even principled?

Sharma et al. (2022) tackle exactly this question, arguing that networks need not be fully stochastic: inference over a well-chosen subset of weights can be expressive enough.

2 How to update a deterministic parameter?

From the perspective of Bayesian inference, a parameter we do not treat as random has a point-mass prior, i.e. zero prior variance, so conditioning on data cannot move it. And yet we do move it, by SGD. What does that mean? How can we make that statistically well-posed?
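A toy calculation makes the first half of the puzzle concrete: under a point-mass prior, Bayes' rule leaves the posterior exactly where the prior was, whatever the data say. The sketch below (plain NumPy, with the parameter discretized onto a grid purely for illustration) shows just that.

```python
import numpy as np

# A parameter theta on a 3-point grid, with a point-mass prior at theta = 0.
thetas = np.array([-1.0, 0.0, 1.0])
prior = np.array([0.0, 1.0, 0.0])          # delta prior: zero prior variance

def likelihood(theta, y):
    # Gaussian observation model y ~ N(theta, 1)
    return np.exp(-0.5 * (y - theta) ** 2)

y_obs = 0.9                                # the data strongly prefer theta = 1
posterior = prior * likelihood(thetas, y_obs)
posterior /= posterior.sum()
print(posterior)                           # [0. 1. 0.]: the point mass cannot move
```

One common reading, as I understand it: moving the location of that point mass by SGD is then not a conditioning step at all but an update of a hyperparameter, i.e. empirical Bayes / type-II maximum likelihood, which schemes like variational EM make respectable.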

3 Last layer

The most famous special case: treat only the final layer's weights as random, with everything beneath it a deterministic feature extractor. See Bayes last layer.
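When the likelihood is Gaussian and the prior on the last-layer weights is Gaussian, that posterior is available in closed form: it is just Bayesian linear regression on the features produced by the frozen body of the net. A minimal sketch, with a random feature map standing in for a pretrained network and the noise variance assumed known:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a pretrained net's penultimate-layer features (assumption:
# the body is frozen, so phi(x) is a fixed deterministic map).
W_body = rng.normal(size=(8, 1))
def phi(x):
    return np.tanh(x @ W_body.T)

# Toy regression data
x = rng.uniform(-2, 2, size=(50, 1))
y = np.sin(2 * x[:, 0]) + 0.1 * rng.normal(size=50)

Phi = phi(x)                              # (50, 8) design matrix of features
alpha = 1.0                               # prior precision on last-layer weights
sigma2 = 0.1 ** 2                         # observation noise variance (assumed known)

# Conjugate Gaussian posterior over the last-layer weights w:
#   Sigma = (alpha I + Phi^T Phi / sigma2)^{-1},   mu = Sigma Phi^T y / sigma2
Sigma = np.linalg.inv(alpha * np.eye(8) + Phi.T @ Phi / sigma2)
mu = Sigma @ Phi.T @ y / sigma2

# Gaussian predictive distribution at test points
x_test = np.linspace(-2, 2, 7)[:, None]
Phi_test = phi(x_test)
pred_mean = Phi_test @ mu
pred_var = sigma2 + np.einsum('nd,de,ne->n', Phi_test, Sigma, Phi_test)
print(np.c_[pred_mean, np.sqrt(pred_var)])
```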

4 Probabilistic weight tying

Possibly the same idea? Rafael Oliveira has referred me to Roth and Pernkopf (2020) for some ideas about this.

5 References

Chung, and Chung. 2014. “An Efficient Approach for Computing Optimal Low-Rank Regularized Inverse Matrices.” Inverse Problems.
Daxberger, Nalisnick, Allingham, et al. 2020. “Expressive yet Tractable Bayesian Deep Learning via Subnetwork Inference.” In.
Daxberger, Nalisnick, Allingham, et al. 2021. “Bayesian Deep Learning via Subnetwork Inference.” In Proceedings of the 38th International Conference on Machine Learning.
Durasov, Bagautdinov, Baque, et al. 2021. “Masksembles for Uncertainty Estimation.” In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Dusenberry, Jerfel, Wen, et al. 2020. “Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors.” In Proceedings of the 37th International Conference on Machine Learning.
Izmailov, Maddox, Kirichenko, et al. 2020. “Subspace Inference for Bayesian Deep Learning.” In Proceedings of The 35th Uncertainty in Artificial Intelligence Conference.
Ke, and Fan. 2022. “On the Optimization and Pruning for Bayesian Deep Learning.”
Kowal. 2022. “Bayesian Subset Selection and Variable Importance for Interpretable Prediction and Classification.”
Roth, and Pernkopf. 2020. “Bayesian Neural Networks with Weight Sharing Using Dirichlet Processes.” IEEE Transactions on Pattern Analysis and Machine Intelligence.
Sharma, Farquhar, Nalisnick, et al. 2022. “Do Bayesian Neural Networks Need To Be Fully Stochastic?”
Spantini, Cui, Willcox, et al. 2017. “Goal-Oriented Optimal Approximations of Bayesian Linear Inverse Problems.” SIAM Journal on Scientific Computing.
Spantini, Solonen, Cui, et al. 2015. “Optimal Low-Rank Approximations of Bayesian Linear Inverse Problems.” SIAM Journal on Scientific Computing.
Tran, M.-N., Nguyen, Nott, et al. 2019. “Bayesian Deep Net GLM and GLMM.” Journal of Computational and Graphical Statistics.
Tran, Ba-Hien, Rossi, Milios, et al. 2022. “All You Need Is a Good Functional Prior for Bayesian Deep Learning.” Journal of Machine Learning Research.
Zhao, Mair, Schön, et al. 2023. “On Feynman-Kac Training of Partial Bayesian Neural Networks.”