Loosely: estimating a quantity by choosing it to be the extremum of a function or, if the function is well-behaved enough, a zero of its derivative.
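In symbols (the standard formulation, filling in the loose definition above): an M-estimator minimises an empirical criterion built from a per-observation loss $\rho$, or equivalently solves the estimating equation built from its derivative $\psi$,

$$
\hat\theta = \operatorname*{arg\,min}_{\theta} \sum_{i=1}^n \rho(x_i; \theta),
\qquad\text{or}\qquad
\sum_{i=1}^n \psi(x_i; \hat\theta) = 0,
\quad \psi = \partial\rho/\partial\theta.
$$

Taking $\rho(x;\theta) = -\log f(x;\theta)$ recovers maximum likelihood; taking $\rho(x;\theta) = (x-\theta)^2$ recovers least squares.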

Popular in machine learning, where loss-function-based methods are ubiquitous. In statistics we see it famously in maximum likelihood estimation, robust estimation, and least-squares loss, for all of which M-estimation provides a unifying formalism with a convenient large-sample asymptotic theory.

π Discuss influence function motivation.
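The textbook result motivating all this (stated here from memory; see Hampel et al. in the references): the influence function of an M-estimator $T$ at distribution $F$ is proportional to its $\psi$-function,

$$
\mathrm{IF}(x; T, F) = M^{-1}\,\psi(x; T(F)),
\qquad
M = -\int \frac{\partial}{\partial\theta}\,\psi(y;\theta)\Big|_{\theta=T(F)}\, \mathrm{d}F(y),
$$

so a bounded $\psi$ gives bounded influence: no single observation can move the estimate arbitrarily far. This is the basic argument for the robust losses below.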

## Implied density functions

Common loss functions imply a density if we read the minimisation as a maximum likelihood estimation problem.

I assume they did not invent this idea, but Davison and Ortiz (2019) point out that if you have a least-squares-compatible model, you can usually generalise it to any elliptical density, which includes the Huber loss and many robust losses as special cases.
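The correspondence, sketched: treating the loss as a negative log-density gives

$$
p(r) \propto \exp\{-\rho(r)\},
$$

so squared loss corresponds to a Gaussian density, absolute loss to a Laplace density, and the Huber loss to a density with a Gaussian centre and Laplace (exponential) tails.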

## Robust Loss functions

π

### Huber loss
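A minimal sketch of the Huber loss: quadratic near zero, linear in the tails, so large residuals contribute linearly rather than quadratically. The default `delta = 1.345` is the conventional tuning constant giving roughly 95% efficiency at the Gaussian (a standard convention, not something stated above).

```python
import numpy as np

def huber_loss(r, delta=1.345):
    """Huber loss: 0.5*r^2 for |r| <= delta, linear beyond.

    delta=1.345 is the usual 95%-Gaussian-efficiency tuning constant.
    """
    r = np.asarray(r, dtype=float)
    quadratic = np.abs(r) <= delta
    return np.where(quadratic,
                    0.5 * r**2,
                    delta * (np.abs(r) - 0.5 * delta))
```

Note the two branches agree in value and slope at $|r| = \delta$, which is what makes the loss continuously differentiable.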

### Hampel loss
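A sketch of Hampel's three-part redescending $\psi$: linear up to $a$, flat between $a$ and $b$, descending linearly to zero at $c$, and exactly zero beyond, so gross outliers are ignored entirely. The breakpoints `(2, 4, 8)` here are illustrative defaults, not canonical.

```python
import numpy as np

def hampel_psi(r, a=2.0, b=4.0, c=8.0):
    """Hampel's three-part redescending psi function.

    Breakpoints (a, b, c) = (2, 4, 8) are illustrative, not canonical.
    """
    r = np.asarray(r, dtype=float)
    ar, s = np.abs(r), np.sign(r)
    out = np.where(ar <= a, r, a * s)                    # linear, then flat
    out = np.where(ar > b, a * s * (c - ar) / (c - b), out)  # redescending
    return np.where(ar > c, 0.0, out)                    # rejected outright
```

Unlike the Huber $\psi$, which stays at $\pm\delta$ forever, this one returns to zero, trading some efficiency for outright rejection of gross errors.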

## Fitting

Discuss representation (and implementation) in terms of weight functions, which reduces M-estimation to iteratively reweighted least squares.
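A sketch of the weight-function idea for the Huber case: writing $w(r) = \psi(r)/r$ turns the estimating equation into a weighted least-squares problem, which we solve repeatedly with weights recomputed from the current residuals. This is a bare-bones illustration (fixed scale, no intercept handling, fixed iteration count), not a production fitter.

```python
import numpy as np

def huber_weights(r, delta=1.345):
    # w(r) = psi(r)/r: 1 in the quadratic zone, delta/|r| beyond it.
    ar = np.maximum(np.abs(r), 1e-12)
    return np.minimum(1.0, delta / ar)

def irls(X, y, delta=1.345, n_iter=50):
    """Huber regression by iteratively reweighted least squares (sketch)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from OLS
    for _ in range(n_iter):
        r = y - X @ beta
        w = huber_weights(r, delta)
        WX = w[:, None] * X
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * y))
    return beta
```

On data with one gross outlier, the reweighting pulls the fit back toward the clean trend where ordinary least squares would be dragged off:

```python
x = np.arange(10, dtype=float)
y = 2.0 * x
y[9] += 50.0                      # one gross outlier
beta = irls(x[:, None], y)        # slope stays near 2
```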

## GM-estimators

Mallows, Schweppe etc.
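As I understand the standard forms (worth double-checking against the references below): with residuals $r_i = y_i - x_i^\top\beta$, scale $\sigma$, and a leverage-based weight $w(x_i)$, the Mallows variant downweights high-leverage points regardless of their residual, while the Schweppe variant rescales the residual by the leverage weight:

$$
\sum_i w(x_i)\,\psi\!\left(\frac{r_i}{\sigma}\right) x_i = 0
\quad\text{(Mallows)},
\qquad
\sum_i w(x_i)\,\psi\!\left(\frac{r_i}{\sigma\, w(x_i)}\right) x_i = 0
\quad\text{(Schweppe)}.
$$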

π

## References

*Advances In Neural Information Processing Systems*.

*Biometrika* 70 (2): 343–65.

*Selected Works of Peter J. Bickel*, edited by Jianqing Fan, Ya'acov Ritov, and C. F. Jeff Wu, 51–98. Selected Works in Probability and Statistics 13. Springer New York.

*Asymptotic Theory of Statistics and Probability*. Springer Texts in Statistics. New York: Springer New York.

*arXiv:1910.14139 [Cs]*, October.

*arXiv:1310.7320 [Cs, Math, Stat]*, October.

*arXiv:1403.7023 [Math, Stat]*. Vol. 131.

*Journal of the American Statistical Association* 69 (346): 383–93.

*Robust Statistics: The Approach Based on Influence Functions*. John Wiley & Sons.

*The Annals of Mathematical Statistics* 35 (1): 73–101.

*arXiv:1411.4342 [Stat]*, November.

*Energy* 7 (2): 189–203.

*Annual Review of Statistics and Its Application* 8 (1): 301–27.

*Handbook of Statistics*, 15:49–75. Robust Inference. Elsevier.

*The Annals of Statistics* 4 (1): 51–67.

*Annals of the Institute of Statistical Mathematics* 64 (1): 27–53.

*arXiv:2107.02308 [Cs]*, July.

*Data Segmentation and Model Selection for Computer Vision*, edited by Alireza Bab-Hadiashar and David Suter, 31–40. Springer New York.

*Journal of Statistical Planning and Inference*, Robust Statistics and Data Analysis, Part I, 57 (1): 59–72.

*Statistics* 47 (1): 216–35.

*Communications in Statistics - Theory and Methods* 48 (5): 1092–1107.
