A classic. Surprisingly deep.

A few non-comprehensive notes on approximating functions from data by the arbitrary-but-convenient expedient of minimising the sum of squared deviations between two things. The linear algebra of least squares fits is well-trodden and perennially classic, and turns up in many, many problems, e.g. lasso regression and Gaussian belief propagation. Least squares problems are a kind of platonic ideal of convex optimisation.
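
As a concrete anchor, here is a minimal ordinary least squares fit in NumPy; the data and linear model are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from y = 2*x + 1 plus small noise (illustrative only).
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.01 * rng.standard_normal(50)

# Design matrix with an intercept column.
A = np.column_stack([x, np.ones_like(x)])

# Minimise ||A @ beta - y||^2; lstsq solves this via the SVD.
beta, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(beta)  # approximately [2.0, 1.0]
```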

Francis Bach is interested in least squares relaxations, which he connects to the Fourier domain. See his Sums-of-squares for dummies: a view from the Fourier domain.

## Introduction

- the Ceres Solver Bibliography is a good start.
- Boyd and Vandenberghe’s Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares is a solid introduction to both linear algebra and least squares; there are also a Julia Companion and a Python Companion.

## Iteratively reweighted
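
The title refers to iteratively reweighted least squares (IRLS), which solves robust or non-quadratic fitting problems by repeatedly solving weighted least squares subproblems. A minimal hand-rolled sketch for Huber-style robust regression, with made-up data:

```python
import numpy as np

def irls(A, y, delta=1.0, n_iter=50):
    """Robust linear fit by iteratively reweighted least squares.

    Each iteration solves a weighted least squares problem whose
    weights downweight large residuals (Huber-style influence).
    """
    beta = np.linalg.lstsq(A, y, rcond=None)[0]  # OLS warm start
    for _ in range(n_iter):
        r = y - A @ beta
        # Huber weights: 1 for small residuals, delta/|r| for large ones.
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
        sw = np.sqrt(w)
        # Weighted LS = ordinary LS on rows rescaled by sqrt(w).
        beta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
    return beta

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 100)
y = 3.0 * x + 0.5 + 0.05 * rng.standard_normal(100)
y[::10] += 5.0  # gross outliers that would badly bias plain OLS
A = np.column_stack([x, np.ones_like(x)])
beta = irls(A, y, delta=0.1)
```

Plain OLS on this data would have its intercept dragged upward by the outliers; the reweighted fit largely ignores them.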

## Nonlinear least squares

Trust-region and Levenberg–Marquardt methods are discussed under 2nd-order optimisation.
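
In practice one rarely hand-rolls these: SciPy's `least_squares` exposes both a trust-region reflective solver (`method="trf"`, the default, which also handles bounds) and classic Levenberg–Marquardt (`method="lm"`). The exponential-decay model and data below are invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

# Synthetic observations from y = a * exp(-k * t) with a=2, k=1.3.
t = np.linspace(0.0, 5.0, 40)
y = 2.0 * np.exp(-1.3 * t) + 0.01 * rng.standard_normal(t.size)

def residuals(theta):
    a, k = theta
    return a * np.exp(-k * t) - y

# method="lm" wraps MINPACK's Levenberg-Marquardt (unconstrained only).
fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")
a_hat, k_hat = fit.x
```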

## Tools

Nonlinear least squares with ceres-solver:

> Ceres Solver is an open source C++ library for modeling and solving large, complicated optimization problems. It can be used to solve Non-linear Least Squares problems with bounds constraints and general unconstrained optimization problems. It is a mature, feature rich, and performant library that has been used in production at Google since 2010.

gradslam/gradslam: gradslam is an open source differentiable dense SLAM library for PyTorch (Jatavallabhula, Iyer, and Paull 2020)

## JAXopt

The JAX toolkit JAXopt includes lots of neat nonlinear least squares tooling.

## KeOps

> The KeOps library lets you compute reductions of large arrays whose entries are given by a mathematical formula or a neural network. It combines efficient C++ routines with an automatic differentiation engine and can be used with Python (NumPy, PyTorch), Matlab and R.
>
> It is perfectly suited to the computation of kernel matrix-vector products, K-nearest neighbors queries, N-body interactions, point cloud convolutions and the associated gradients. Crucially, it performs well even when the corresponding kernel or distance matrices do not fit into the RAM or GPU memory. Compared with a PyTorch GPU baseline, KeOps provides a x10-x100 speed-up on a wide range of geometric applications, from kernel methods to geometric deep learning.

## Incoming

- The Gauss–Markov theorem: “under certain conditions, the least squares estimator is the minimum-variance linear unbiased estimator of the model parameters.”
- Weighted Least Squares
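
Weighted least squares reduces to ordinary least squares after rescaling each row by the square root of its weight. A sketch with invented heteroskedastic data, using inverse-variance weights:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data where the noise standard deviation grows with x.
x = np.linspace(1.0, 10.0, 200)
sigma = 0.05 * x
y = 0.7 * x + 2.0 + sigma * rng.standard_normal(x.size)

A = np.column_stack([x, np.ones_like(x)])
w = 1.0 / sigma**2  # inverse-variance weights

# Solve min_beta sum_i w_i * (y_i - a_i @ beta)^2 by rescaling each
# row with sqrt(w_i), then handing the problem to the OLS solver.
sw = np.sqrt(w)
beta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
```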

## References

*arXiv:1701.09120 [Math, Stat]*, January.

*Julia Companion to Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares*. 1st ed. Cambridge University Press.

*Management Science* 65 (11): 5171–87.

*Journal of Machine Learning Research* 22 (74): 1–6.

*IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008*, 3869–72.

*arXiv:1610.08244 [Stat]*, October.

*Mathematical Programming* 143 (1–2): 371–83.

*Computational Geosciences* 17 (4): 689–703.

*arXiv:1702.06429 [Math, Stat]*, February.

*Computational Statistics & Data Analysis*, Nonlinear Methods and Data Mining, 38 (4): 367–78.

*The Annals of Applied Statistics* 1 (2): 302–32.

*Journal of Statistical Software* 33 (1): 1–22.

*IEEE Transactions on Signal Processing* 57 (12): 4686–98.

*2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, 10303–12. Nashville, TN, USA: IEEE.

*2020 IEEE International Conference on Robotics and Automation (ICRA)*, 2130–37. Paris, France: IEEE.

*arXiv:1011.1576 [Cs]*, November.

*Python Language Companion to Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares*.

*Randomized Algorithms for Matrices and Data*. Vol. 3.

*Statistical Science* 12 (4): 279–300.

*Operations Research* 63 (5): 1026–43.

*The Annals of Statistics* 35 (3): 1012–30.

*Physical Review E* 83 (3): 036701.

*Computational Optimization and Applications* 48 (2): 273–307.
