# Density estimation

Especially non- or semiparametrically

June 6, 2016 — October 16, 2019

The statistical estimation problem where you are not trying to estimate a functional of a distribution of random observations, but the distribution itself. In a sense, all of statistics implicitly does density estimation, but usually only as an instrument on the way to discovering some actual parameter of interest. (Although perhaps you are doing Bayesian statistics and care a lot about the shape of a posterior density in particular.)

So, estimating a distribution nonparametrically is not too weird a function approximation problem, just one with constraints: we wish to find a density function \(f:\mathcal{X}\to\mathbb{R}\) such that \(\int_{\mathcal{X}}f(x)\,dx=1\) and \(f(x)\geq 0\) for all \(x \in \mathcal{X}\).

We might set ourselves different loss functions than usual in statistical regression problems; instead of, e.g., expected \(L_p\) prediction error, we might use a traditional function-approximation \(L_p\) loss, or a probability divergence.

The most common distribution estimate, which we use implicitly all the time, is to *not* work with densities as such but with distributions: we take the empirical distribution as the estimate, i.e. we take the data as a model for itself. This has various non-useful features: it is a sum of point masses, so it is rough, possesses no density with respect to Lebesgue measure, and is hard to visualize as a density.
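
As a minimal sketch of that last point (assuming 1-d data; the helper name `ecdf` is mine), the empirical distribution is just a step function through the sorted sample, with a point mass at every observation:

```python
import numpy as np

def ecdf(data):
    """Return the empirical CDF of a 1-d sample as a callable step function."""
    x = np.sort(np.asarray(data))
    n = x.size

    def F(t):
        # Proportion of observations <= t, vectorised via binary search.
        return np.searchsorted(x, t, side="right") / n

    return F

# Usage: the ECDF of 100 standard-normal draws.
rng = np.random.default_rng(42)
F = ecdf(rng.standard_normal(100))
print(F(0.0))  # roughly 0.5
```

Smoothing this step function into something with a well-behaved derivative is, loosely, what every estimator below does.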

Question: When would I *actually* want to estimate, specifically, a density?

Visualization, sure. Nonparametric regression without any better ideas. As latent parameters in a deep probabilistic model.

What about non-parametric *conditional* density estimation? Are there any general ways to do this?

## 1 Divergence measures/contrasts

There are many choices for loss functions between densities here; any of the probability metrics will do. For reasons of tradition or convenience, when the object of interest is the density itself, certain choices dominate:

- \(L_2\) distance between densities, taken with respect to Lebesgue measure on the state space. In expectation this is the mean integrated squared error (MISE, defined after this list), which works out especially nicely for convolution kernel estimators.
- KL divergence. (May not do what you want if you care about performance where the density is near 0; see Hall (1987).)
- Hellinger distance.
- Wasserstein distances.
- …
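
For concreteness, the MISE of an estimator \(\hat{f}\) of \(f\) is

\[
\operatorname{MISE}(\hat{f})
= \mathbb{E}\int_{\mathcal{X}}\bigl(\hat{f}(x)-f(x)\bigr)^{2}\,dx
= \int_{\mathcal{X}}\operatorname{Bias}^{2}\bigl(\hat{f}(x)\bigr)\,dx
+ \int_{\mathcal{X}}\operatorname{Var}\bigl(\hat{f}(x)\bigr)\,dx,
\]

i.e. it decomposes into integrated squared bias plus integrated variance, which is what makes bandwidth trade-offs tractable.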

But having chosen the divergence you wish to minimise, you now have to choose in which sense you wish to minimise it. Minimax? In probability? In expectation? …? Every combination is a different publication. Hmf.

## 2 Minimising Expected (or whatever) MISE

This works fine for kernel density estimators, where the optimisation turns out to be a Wiener-filter-type problem in which you only have to choose a bandwidth. How do you do this for other estimators, though?
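
For KDEs, one standard answer as a point of reference: least-squares cross-validation estimates the MISE of \(\hat{f}_h\) up to an additive constant and minimises over \(h\). A minimal from-scratch sketch, assuming 1-d data and a Gaussian kernel (the function name `lscv` is mine):

```python
import numpy as np
from scipy.stats import norm

def lscv(h, x):
    """Least-squares cross-validation score for a Gaussian-kernel KDE:
    an unbiased estimate of MISE(h) minus the constant integral of f^2."""
    n = x.size
    d = x[:, None] - x[None, :]  # pairwise differences, n x n
    # Integral of fhat^2: the convolution of two N(0, h^2) kernels
    # is a N(0, 2 h^2) kernel evaluated at the pairwise differences.
    int_f2 = norm.pdf(d, scale=np.sqrt(2) * h).sum() / n**2
    # Leave-one-out term: off-diagonal kernel evaluations only.
    k = norm.pdf(d, scale=h)
    loo = (k.sum() - np.trace(k)) / (n * (n - 1))
    return int_f2 - 2 * loo

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
grid = np.linspace(0.05, 1.0, 50)
h_star = grid[np.argmin([lscv(h, x) for h in grid])]
print(f"LSCV-selected bandwidth: {h_star:.3f}")
```

The score omits the \(\int f^2\) term, which does not depend on \(h\), so its minimiser over the grid targets the MISE-optimal bandwidth.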

## 3 Connection to point processes

There is a connection between spatial point process intensity estimation and density estimation. See Densities and intensities.

## 4 Spline/wavelet estimations

🏗

## 5 Mixture models

See mixture models.

## 6 Gaussian processes

Gaussian processes can provide posterior distributions over densities, e.g. via logistic Gaussian process priors, in which one exponentiates and normalises a Gaussian process sample path (Tokdar 2007; Lenk 2003).
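
Schematically: with a Gaussian process prior on a latent function \(W\), the induced random density is

\[
f(x) = \frac{\exp W(x)}{\int_{\mathcal{X}} \exp W(s)\,ds},
\]

which is non-negative and integrates to 1 by construction; the awkward normalising integral is what makes posterior computation expensive, and is what Tokdar (2007) works to speed up.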

## 7 Normalizing flow models

A.k.a. measure transport etc. One parameterises the target density as the pushforward of a simple base density through an invertible map and fits the map, typically using reparameterization. 🏗
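
The workhorse identity is the change-of-variables formula: if \(x = T(z)\) for an invertible, differentiable \(T\) with \(z \sim p_Z\), then

\[
\log p_X(x) = \log p_Z\bigl(T^{-1}(x)\bigr) + \log\bigl\lvert\det J_{T^{-1}}(x)\bigr\rvert,
\]

so density evaluation reduces to a base-density evaluation plus a log-Jacobian-determinant; flow architectures are designed to make that determinant cheap.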

## 8 k-NN estimates

Filed here because too small to do elsewhere.

To use nearest-neighbour methods, the integer \(k\) must be selected. This is analogous to bandwidth selection, although here \(k\) is discrete rather than continuous. Li (1987) showed that for the k-NN regression estimator under conditional homoskedasticity, it is asymptotically optimal to pick \(k\) by Mallows' \(C_L\), generalized cross-validation, or cross-validation. Andrews (1991) generalised this result to the case of heteroskedasticity and showed that cross-validation remains asymptotically optimal.
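
For density estimation specifically, here is a minimal sketch of the classic 1-d k-NN estimate \(\hat{f}(t) = k/(2 n R_k(t))\), where \(R_k(t)\) is the distance from \(t\) to its \(k\)-th nearest observation (the helper name `knn_density` is mine):

```python
import numpy as np

def knn_density(t, data, k=10):
    """1-d k-nearest-neighbour density estimate at query points t:
    fhat(t) = k / (n * 2 * R_k(t))."""
    x = np.asarray(data)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    n = x.size
    # Distance from each query point to every observation.
    dist = np.abs(t[:, None] - x[None, :])
    r_k = np.sort(dist, axis=1)[:, k - 1]  # k-th nearest distance
    return k / (n * 2 * r_k)

rng = np.random.default_rng(1)
sample = rng.standard_normal(500)
print(knn_density(0.0, sample, k=25))  # near 1/sqrt(2*pi) ≈ 0.399
```

Note that the estimate is not a proper density (its tails decay like \(1/|t|\), so it does not integrate to 1), which is part of why the tuning of \(k\) matters.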

## 9 Kernel density estimators

A.k.a. kernel smoothing.
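
The basic estimator: given a sample \(X_1,\dots,X_n\), a kernel \(K\) (a symmetric density) and a bandwidth \(h>0\),

\[
\hat{f}_h(x) = \frac{1}{nh}\sum_{i=1}^{n} K\!\left(\frac{x - X_i}{h}\right).
\]

Most of the action is in choosing \(h\); the choice of \(K\) matters comparatively little.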

### 9.1 Fancy ones

HT Gery Geenens for a lecture he gave on convolution kernel density estimation, in which he drew a parallel between additive smoothing noise in classic KDE and multiplicative smoothing noise in density estimation for non-negative-valued variables.

## 10 References

Andrews, Donald W. K. 1991. "Asymptotic Optimality of Generalized \(C_L\), Cross-Validation, and Generalized Cross-Validation in Regression with Heteroskedastic Errors." *Journal of Econometrics*.

Hall, Peter. 1987. "On Kullback-Leibler Loss and Density Estimation." *The Annals of Statistics*.

Lenk, Peter J. 2003. "Bayesian Semiparametric Density Estimation and Model Verification Using a Logistic-Gaussian Process." *Journal of Computational and Graphical Statistics*.

Li, Ker-Chau. 1987. "Asymptotic Optimality for \(C_p\), \(C_L\), Cross-Validation and Generalized Cross-Validation: Discrete Index Set." *The Annals of Statistics*.

Tokdar, Surya T. 2007. "Towards a Faster Implementation of Density Estimation with Logistic Gaussian Process Priors." *Journal of Computational and Graphical Statistics*.