# Last-layer Bayes neural nets

Bayesian and other probabilistic inference in overparameterized ML

January 12, 2017 — February 9, 2023

algebra
Bayes
convolution
density
functional analysis
Hilbert space
likelihood free
linear algebra
machine learning
nonparametric
sparser than thou
uncertainty

Consider the original linear model. We have a (column) vector $$\mathbf{y}=[y_1,y_2,\dots,y_n]^T$$ of $$n$$ observations and an $$n\times p$$ matrix $$\mathbf{X}$$ of $$p$$ covariates, where each column corresponds to a different covariate and each row to a different observation.

We assume the observations are related to the covariates by $$\mathbf{y}=\mathbf{Xb}+\mathbf{e},$$ where $$\mathbf{b}=[b_1,b_2,\dots,b_p]^T$$ gives the parameters of the model, which we don’t yet know. We call $$\mathbf{e}$$ the “residual” vector. Legendre and Gauss pioneered the estimation of the parameters of a linear model by minimising the squared residuals, $$\mathbf{e}^T\mathbf{e}$$, i.e.

$$\begin{aligned}\hat{\mathbf{b}} &=\operatorname{arg min}_\mathbf{b} (\mathbf{y}-\mathbf{Xb})^T (\mathbf{y}-\mathbf{Xb})\\ &=\operatorname{arg min}_\mathbf{b} \|\mathbf{y}-\mathbf{Xb}\|_2^2\\ &=\mathbf{X}^+\mathbf{y}, \end{aligned}$$

where $$\mathbf{X}^+$$ is the Moore–Penrose pseudo-inverse, which in practice we compute using one of the many carefully optimised numerical methods that exist for least squares.
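As a minimal numerical sketch (in NumPy, with synthetic data invented for illustration), both routes to $$\hat{\mathbf{b}}$$ agree:

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 100, 3

# Hypothetical synthetic data: X is the n×p design matrix,
# b_true plays the role of the unknown parameter vector.
X = rng.normal(size=(n, p))
b_true = np.array([1.5, -2.0, 0.5])
y = X @ b_true + 0.1 * rng.normal(size=n)  # y = Xb + e

# Least-squares estimate via an optimised solver (LAPACK under the hood).
b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Mathematically equivalent: apply the Moore–Penrose pseudo-inverse directly.
b_pinv = np.linalg.pinv(X) @ y

assert np.allclose(b_hat, b_pinv)
```

In practice one prefers the solver route (`lstsq`, or a QR/Cholesky factorisation) over forming $$\mathbf{X}^+$$ explicitly, for numerical stability.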

So far there is no statistical argument, merely function approximation.

However, it turns out that if we assume the $$e_i$$ are i.i.d. random errors in the observations (or at least independent with constant variance), then there is also a statistical justification for this idea.
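As a sketch of that justification (a standard derivation, assuming additionally that the errors are Gaussian, $$e_i\sim\mathcal{N}(0,\sigma^2)$$ i.i.d.): writing $$\mathbf{x}_i^T$$ for the $$i$$th row of $$\mathbf{X}$$, the log likelihood of the observations is

$$\begin{aligned}\log p(\mathbf{y}\mid\mathbf{X},\mathbf{b}) &=\sum_{i=1}^n \log \mathcal{N}(y_i;\mathbf{x}_i^T\mathbf{b},\sigma^2)\\ &=-\frac{n}{2}\log(2\pi\sigma^2)-\frac{1}{2\sigma^2}\|\mathbf{y}-\mathbf{Xb}\|_2^2, \end{aligned}$$

which is maximised over $$\mathbf{b}$$ by exactly the $$\mathbf{b}$$ that minimises the squared residuals, so the maximum-likelihood estimate coincides with $$\hat{\mathbf{b}}$$ above.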

🏗 More exposition of these; linkage to maximum likelihood.

For now, handball to Lu (2022).

## References

Bishop. 2006. Pattern Recognition and Machine Learning. Information Science and Statistics.
Buja, Hastie, and Tibshirani. 1989. “Linear Smoothers and Additive Models.” Annals of Statistics.
Hoaglin, and Welsch. 1978. “The Hat Matrix in Regression and ANOVA.” The American Statistician.
Kailath. 1980. Linear Systems. Prentice-Hall Information and System Science Series.
Kailath, Sayed, and Hassibi. 2000. Linear Estimation. Prentice Hall Information and System Sciences Series.
Lu. 2022.
Mandelbaum. 1984. “Linear Estimators and Measurable Linear Transformations on a Hilbert Space.” Zeitschrift Für Wahrscheinlichkeitstheorie Und Verwandte Gebiete.
Rao, Toutenburg, Shalabh, et al. 2008. Linear Models and Generalizations: Least Squares and Alternatives. Edited by Michael Schomaker. Springer Series in Statistics.
Riutort-Mayol, Bürkner, Andersen, et al. 2020. “Practical Hilbert Space Approximate Bayesian Gaussian Processes for Probabilistic Programming.” arXiv:2004.11408 [Stat].
Wang, Li, Khabsa, et al. 2020. “Linformer: Self-Attention with Linear Complexity.”
Wilson, Borovitskiy, Terenin, et al. 2020. “Efficiently Sampling Functions from Gaussian Process Posteriors.” In Proceedings of the 37th International Conference on Machine Learning.
Zammit-Mangion, and Cressie. 2021. “FRK: An R Package for Spatial and Spatio-Temporal Prediction with Large Datasets.” Journal of Statistical Software.