The interpretation of densities as intensities and vice versa

Point process of observations ↔ observation of a point process



Estimating densities by considering the observations drawn from them as a point process. In one dimension this gives us the particularly lovely trick of survival analysis, but the method is much more general, if not quite as nifty.

Consider the problem of estimating the common density \(f(x)dx=dF(x)\) of indexed i.i.d. random variables \(\{X_i\}_{1\leq i\leq n}\in \mathbb{R}^d\) from \(n\) realisations of those variables, \(\{x_i\}_{i\leq n},\) where \(F:\mathbb{R}^d\rightarrow[0,1]\) is a (cumulative) distribution function. We assume the distribution is absolutely continuous with respect to the Lebesgue measure \(\mu\), i.e. \(\mu(A)=0\Rightarrow P(X_i\in A)=0\). This implies that \(P(X_i=X_j)=0\) for \(i\neq j\) and that the density exists as a standard function (i.e. we do not need to consider generalised functions such as distributions to handle atoms in \(F\) etc.)

Here we parameterise the density with some finite dimensional parameter vector \(\theta,\) i.e. \(f(x;\theta),\) whose value completely characterises the density; the problem of estimating the density is then the same as the one of estimating \(\theta.\)

In the method of maximum likelihood estimation we seek to maximise the likelihood of the data. That is, we choose a parameter estimate \(\hat{\theta}\) to satisfy

\[ \begin{aligned} \hat{\theta} &:=\operatorname{argmax}_\theta\prod_i f(x_i;\theta)\\ &=\operatorname{argmax}_\theta\sum_i \log f(x_i;\theta) \end{aligned} \]
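Numerically we would typically minimise the negative log-likelihood. Here is a minimal sketch, assuming (purely for illustration) a Gaussian family \(f(x;\theta)\) with \(\theta=(\mu,\log\sigma)\):

```python
# Minimal numerical MLE sketch: minimise the negative log-likelihood.
# The Gaussian family here is an assumption for illustration only.
import numpy as np
from scipy import optimize, stats

def neg_log_likelihood(theta, x):
    """-sum_i log f(x_i; theta) for f = Normal(mu, sigma^2)."""
    mu, log_sigma = theta
    return -np.sum(stats.norm.logpdf(x, loc=mu, scale=np.exp(log_sigma)))

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=0.5, size=500)  # synthetic data

result = optimize.minimize(neg_log_likelihood, x0=np.zeros(2), args=(x,))
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
```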

Basis function method for density

Let’s consider the case where we try to estimate this function by constructing it from some given basis of \(p\) functions \(\phi_j: \mathbb{R}^d\rightarrow[0,\infty),\) so that

\[f(x)=\sum_{j\leq p}w_j\phi_j(x)\]

and \(\theta\equiv\{w_j\}_{j\leq p}.\) We keep this simple by requiring \(\int\phi_j(x)dx=1,\) so that they are all valid densities. Then the requirement that \(\int f(x)dx=1\) will imply that \(\sum_j w_j=1,\) i.e. we are taking a convex combination of these basis densities.

Then the maximum likelihood estimator can be written

\[ \begin{aligned} \hat{\theta} &=\operatorname{argmax}_{\{w_j\}}f(\{x_i\};\{w_j\})\\ &=\operatorname{argmax}_{\{w_j\}}\sum_i \log \sum_{j\leq p}w_j\phi_j(x_i) \end{aligned} \]

A moment’s thought reveals that this maximisation problem has no solution as stated, since the objective is strictly increasing in each \(w_j\). However, we are missing a constraint, which is that to be a well-defined probability density, \(f\) must integrate to unity, i.e.

\[ \int f(x;\{w_j\})dx = 1 \] and therefore

\[ \begin{aligned} \int \sum_{j\leq p}w_j\phi_j(x)dx&=1\\ \sum_{j\leq p}w_j\int\phi_j(x)dx&=1\\ \sum_{j\leq p}w_j &=1\\ \end{aligned} \]

By messing around with Lagrange multipliers to enforce that constraint we eventually find that the multiplier equals \(n\) and the maximiser satisfies the fixed-point condition

\[ \hat{w}_k = \frac{1}{n}\sum_i \frac{\hat{w}_k\phi_k(x_i)}{\sum_{j\leq p} \hat{w}_j\phi_j(x_i)}, \]

which has no closed form in general, but is precisely the condition solved by the EM iteration for mixture weights.
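A minimal sketch of that fixed-point (EM) iteration, assuming Gaussian bumps on a grid as the basis densities \(\phi_j\) (the data, grid, and bandwidth are all stand-ins):

```python
# EM-style fixed-point iteration for the mixture weights w_j.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.beta(2.0, 5.0, size=1000)       # data with unknown density on [0, 1]
centres = np.linspace(0.0, 1.0, 10)     # assumed basis: 10 Gaussian bumps

# Phi[i, j] = phi_j(x_i); each phi_j integrates to one.
Phi = stats.norm.pdf(x[:, None], loc=centres[None, :], scale=0.1)

w = np.full(len(centres), 1.0 / len(centres))  # start from uniform weights
for _ in range(200):
    resp = w * Phi / (Phi @ w)[:, None]  # responsibilities r_{ik}
    w = resp.mean(axis=0)                # update preserves sum(w) == 1
```

Each update cannot decrease the log-likelihood, and the constraint \(\sum_j w_j=1\) is maintained for free.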

Intensities

Consider the problem of estimating the intensity \(\lambda\) of a simple, non-interacting inhomogeneous point process \(N(B)\) on some compact \(W\subset\mathbb{R}^d\) from a realisation \(\{x_i\}_{i\leq n}\), where the counting function \(N(B)\) gives the number of points that fall in a set \(B\subset\mathbb{R}^d\).

The intensity is (in the simple, non-interacting case; see Daley and Vere-Jones (2003) for other cases) a function \(\lambda:\mathbb{R}^d\rightarrow [0,\infty)\) such that, for any box \(B\subset W\),

\[N(B)\sim\operatorname{Poisson}(\Lambda(B))\] where

\[\Lambda(B):=\int_B\lambda(x)dx\] and, for any disjoint boxes \(A\) and \(B\), \(N(A)\perp N(B).\)
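For concreteness, here is a minimal sketch of drawing a realisation of such a process by thinning (Lewis-Shedler style), assuming \(W=[0,1]\) and a made-up intensity:

```python
# Simulate an inhomogeneous Poisson process on W = [0, 1] by thinning.
import numpy as np

rng = np.random.default_rng(2)
lam = lambda x: 50.0 * np.exp(-((x - 0.3) ** 2) / 0.02)  # example intensity
lam_max = 50.0                          # upper bound for lam on W

# Homogeneous proposal with rate lam_max (note |W| = 1), thinned with
# acceptance probability lam(x) / lam_max.
n_prop = rng.poisson(lam_max)
proposals = rng.uniform(0.0, 1.0, size=n_prop)
keep = rng.uniform(0.0, lam_max, size=n_prop) < lam(proposals)
points = proposals[keep]                # one realisation {x_i}
```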

After some standard argumentation about Poisson processes we can find a likelihood for the observed point pattern:

\[f(\{x_i\};\tau)= \left(\prod_i \lambda(x_i;\tau)\right)\exp\left(-\int_W\lambda(x;\tau)dx\right). \]

Say that we wish to find the inhomogeneous intensity function by the method of maximum likelihood. We allow the intensity function to be described by a parameter vector \(\tau,\) which we write \(\lambda(x;\tau)\), and we once again construct an estimate:

\[ \begin{aligned} \hat{\tau}&:=\operatorname{argmax}_\tau\log f(\{x_i\};\tau)\\ &=\operatorname{argmax}_\tau\log \left(\left(\prod_i\lambda(x_i;\tau)\right) \exp\left(-\int_W\lambda(x;\tau) dx\right)\right)\\ &=\operatorname{argmax}_\tau\sum_i\log \lambda(x_i;\tau)-\int_W\lambda(x;\tau)dx\\ &=\operatorname{argmax}_\tau\sum_i\log \lambda(x_i;\tau)-\Lambda(W) \end{aligned} \]
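A minimal numerical sketch of that programme, assuming a hypothetical log-linear family \(\lambda(x;\tau)=\exp(\tau_0+\tau_1 x)\) on \(W=[0,1]\):

```python
# Maximum likelihood for a parametric intensity on W = [0, 1].
import numpy as np
from scipy import integrate, optimize

rng = np.random.default_rng(3)
points = rng.uniform(0.0, 1.0, size=rng.poisson(80))  # stand-in point pattern

def neg_log_lik(tau):
    """-(sum_i log lambda(x_i; tau) - Lambda(W)) for log-linear lambda."""
    log_lam = tau[0] + tau[1] * points
    Lambda_W, _ = integrate.quad(lambda x: np.exp(tau[0] + tau[1] * x), 0.0, 1.0)
    return -(np.sum(log_lam) - Lambda_W)

tau_hat = optimize.minimize(neg_log_lik, x0=np.zeros(2)).x
```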

Basis function method for intensity

Now consider the case where we assume that the intensity can be written in the \(\phi_j\) basis as above, so that

\[\lambda(x)=\sum_{j\leq p}\omega_j\phi_j(x)\]

with \(\tau\equiv\{\omega_j\}.\) Then our estimate may be written

\[ \begin{aligned} \hat{\tau}&:=\operatorname{argmax}_{\{\omega_j\}}\log f\left(\{x_i\};\{\omega_j\} \right)\\ &=\operatorname{argmax}_{\{\omega_j\}}\sum_i\log \lambda(x_i;\tau)-\Lambda(W)\\ &=\operatorname{argmax}_{\{\omega_j\}}\sum_i\log \sum_{j\leq p}\omega_j\phi_j(x_i)-\int_W\sum_{j\leq p}\omega_j\phi_j(x)dx\\ &=\operatorname{argmax}_{\{\omega_j\}}\sum_i\log \sum_{j\leq p}\omega_j\phi_j(x_i)-\sum_{j\leq p}\omega_j\int_W \phi_j(x)dx \\ &=\operatorname{argmax}_{\{\omega_j\}}\sum_i\log \sum_{j\leq p}\omega_j\phi_j(x_i)-\sum_{j\leq p}\omega_j \end{aligned} \]

where the last step assumes the basis densities are supported within \(W\), so that \(\int_W\phi_j(x)dx=1.\)

We have a similar log-likelihood to the density estimation case.

Under the constraint that the expected count matches the observed count, \(E\hat{N}(W)=\hat\Lambda(W)=n,\) we have \(\sum_j\omega_j=n\) (in fact this holds automatically at the maximum) and therefore

\[ \hat{\tau} =\operatorname{argmax}_{\{\omega_j\}}\sum_i\left(\log \sum_{j\leq p}\omega_j\phi_j(x_i)\right)-n \]
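The corresponding multiplicative fixed-point iteration for the intensity weights, in the same minimal-sketch spirit as the density version above (Gaussian bumps assumed to integrate to one over \(W\)):

```python
# Multiplicative fixed-point updates for the intensity weights omega_j.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
points = rng.beta(2.0, 5.0, size=rng.poisson(1000))  # one realisation on [0, 1]
centres = np.linspace(0.0, 1.0, 10)
Phi = stats.norm.pdf(points[:, None], loc=centres[None, :], scale=0.1)

n = len(points)
omega = np.full(len(centres), n / len(centres))  # start from a flat intensity
for _ in range(200):
    omega = omega * (Phi / (Phi @ omega)[:, None]).sum(axis=0)

# After the first update, sum(omega) equals n (up to float error): the
# fitted intensity puts total mass n on W, i.e. omega_j == n * w_j.
```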

Note that the maximum obtained by considering the points as draws from a density is the same as the one obtained by considering them as an inhomogeneous spatial point pattern, up to a rescaling of the weights, i.e. \(\omega_j\equiv nw_j\) (the objectives differ only by constants).

Count regression

We can formulate density estimation as a count regression; for “nice” distributions this will be the same as estimating the correct Poisson intensity for every given small region of the state space (e.g. Gu 1993; Eilers and Marx 1996). 🏗
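A minimal sketch of the crudest version of this idea: bin the data, treat the bin counts as Poisson, and read off a piecewise-constant intensity/density:

```python
# Density estimation as count regression, histogram version.
import numpy as np

rng = np.random.default_rng(5)
x = rng.beta(2.0, 5.0, size=2000)

counts, edges = np.histogram(x, bins=40, range=(0.0, 1.0))
widths = np.diff(edges)

# Poisson MLE for a constant intensity on each bin is count / width;
# dividing by n turns the intensity into a density estimate.
lam_hat = counts / widths
f_hat = lam_hat / len(x)
```

Smoothing across bins (penalised splines, as in Eilers and Marx (1996)) is what turns this crude histogram into a respectable estimator.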

Probability over boxes

Consider a box \(B\subset \mathbb{R}^d\). The probability of any one \(X_i\) falling within that box is

\[P(X_i\in B)=E\left(\mathbb{I}\{X_i\in B\}\right)=\int_B dF(x).\]

We know that the expected number of the \(X_i\) to fall within that box is \(n\) times the probability of any one falling in that box, i.e.

\[E\left(\sum_{i\leq n}\mathbb{I}\{X_i\in B\}\right)=n\int_B dF(x)\] and thus, if we additionally randomise the total count, taking \(n\sim\operatorname{Poisson}(\mu)\) so that the pattern becomes a Poisson process with \(\Lambda(B)=\mu\int_B dF(x),\)

\[P(N(B)=k)=\frac{\exp(-\Lambda(B))\Lambda(B)^k}{k!}.\]
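A quick simulation sketch of that claim: randomising the total count makes the box count Poisson, so its empirical mean and variance should both be close to \(\Lambda(B)\):

```python
# Check: with a Poisson total count, the count in a box B is Poisson
# with mean Lambda(B) = E(n) * F(B).
import numpy as np

rng = np.random.default_rng(6)
mean_total, B = 100.0, (0.2, 0.4)        # E(n) and a box B = [0.2, 0.4)

counts = []
for _ in range(20000):
    n = rng.poisson(mean_total)          # randomised total count
    x = rng.uniform(0.0, 1.0, size=n)    # i.i.d. draws from F = Uniform[0, 1]
    counts.append(np.sum((B[0] <= x) & (x < B[1])))

Lambda_B = mean_total * (B[1] - B[0])    # = 20
print(np.mean(counts), np.var(counts), Lambda_B)  # all approximately 20
```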

…Where was I going with this? Something to do with linear point process estimation perhaps? 🏗

References

Andersen, Per Kragh, Ornulf Borgan, Richard D. Gill, and Niels Keiding. 1997. Statistical Models Based on Counting Processes. Corrected 2nd printing. Springer Series in Statistics. New York, NY: Springer.
Anderson, J. A., and S. C. Richardson. 1979. “Logistic Discrimination and Bias Correction in Maximum Likelihood Estimation.” Technometrics 21 (1): 71–78.
Barron, Andrew R., and Chyong-Hwa Sheu. 1991. “Approximation of Density Functions by Sequences of Exponential Families.” The Annals of Statistics 19 (3): 1347–69.
Berman, Mark, and Peter Diggle. 1989. “Estimating Weighted Integrals of the Second-Order Intensity of a Spatial Point Process.” Journal of the Royal Statistical Society. Series B (Methodological) 51 (1): 81–92.
Brown, Lawrence D., T. Tony Cai, and Harrison H. Zhou. 2010. “Nonparametric Regression in Exponential Families.” The Annals of Statistics 38 (4): 2005–46.
Castellan, G. 2003. “Density Estimation via Exponential Model Selection.” IEEE Transactions on Information Theory 49 (8): 2052–60.
Cox, D. R. 1965. “On the Estimation of the Intensity Function of a Stationary Point Process.” Journal of the Royal Statistical Society: Series B (Methodological) 27 (2): 332–37.
Cunningham, John P., Krishna V. Shenoy, and Maneesh Sahani. 2008. “Fast Gaussian Process Methods for Point Process Intensity Estimation.” In Proceedings of the 25th International Conference on Machine Learning, 192–99. ICML ’08. New York, NY, USA: ACM Press.
Daley, Daryl J., and David Vere-Jones. 2003. An Introduction to the Theory of Point Processes. 2nd ed. Vol. 1. Elementary Theory and Methods. New York: Springer.
Daley, Daryl J., and David Vere-Jones. 2008. An Introduction to the Theory of Point Processes. 2nd ed. Vol. 2. General Theory and Structure. Probability and Its Applications. New York: Springer.
Efromovich, Sam. 1996. “On Nonparametric Regression for IID Observations in a General Setting.” The Annals of Statistics 24 (3): 1126–44.
Efromovich, Sam. 2007. “Conditional Density Estimation in a Regression Setting.” The Annals of Statistics 35 (6): 2504–35.
Eilers, Paul H. C., and Brian D. Marx. 1996. “Flexible Smoothing with B-Splines and Penalties.” Statistical Science 11 (2): 89–121.
Ellis, Steven P. 1991. “Density Estimation for Point Processes.” Stochastic Processes and Their Applications 39 (2): 345–58.
Giesecke, K., H. Kakavand, and M. Mousavi. 2008. “Simulating Point Processes by Intensity Projection.” In Simulation Conference, 2008. WSC 2008. Winter, 560–68.
Gu, Chong. 1993. “Smoothing Spline Density Estimation: A Dimensionless Automatic Algorithm.” Journal of the American Statistical Association 88 (422): 495–504.
Heigold, Georg, Ralf Schlüter, and Hermann Ney. 2007. “On the Equivalence of Gaussian HMM and Gaussian HMM-Like Hidden Conditional Random Fields.” In Eighth Annual Conference of the International Speech Communication Association.
Hinton, G., Li Deng, Dong Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, et al. 2012. “Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups.” IEEE Signal Processing Magazine 29 (6): 82–97.
Kooperberg, Charles, and Charles J. Stone. 1991. “A Study of Logspline Density Estimation.” Computational Statistics & Data Analysis 12 (3): 327–47.
Kooperberg, Charles, and Charles J. Stone. 1992. “Logspline Density Estimation for Censored Data.” Journal of Computational and Graphical Statistics 1 (4): 301–28.
Leung, G., and A. R. Barron. 2006. “Information Theory and Mixing Least-Squares Regressions.” IEEE Transactions on Information Theory 52 (8): 3396–3410.
Lieshout, Marie-Colette N. M. van. 2011. “On Estimation of the Intensity Function of a Point Process.” Methodology and Computing in Applied Probability 14 (3): 567–78.
Miller, Benjamin Kurt, Alex Cole, and Gilles Louppe. n.d. “Simulation-Efficient Marginal Posterior Estimation with Swyft: Stop Wasting Your Precious Time.” In, 9.
Møller, Jesper, and Rasmus Plenge Waagepetersen. 2003. Statistical Inference and Simulation for Spatial Point Processes. Chapman and Hall/CRC.
Norets, Andriy. 2010. “Approximation of Conditional Densities by Smooth Mixtures of Regressions.” The Annals of Statistics 38 (3): 1733–66.
Panaretos, Victor M., and Yoav Zemel. 2016. “Separation of Amplitude and Phase Variation in Point Processes.” The Annals of Statistics 44 (2): 771–812.
Papangelou, F. 1974. “The Conditional Intensity of General Point Processes and an Application to Line Processes.” Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 28 (3): 207–26.
Reynaud-Bouret, Patricia. 2003. “Adaptive Estimation of the Intensity of Inhomogeneous Poisson Processes via Concentration Inequalities.” Probability Theory and Related Fields 126 (1).
Saul, Lawrence K., and Daniel D. Lee. 2001. “Multiplicative Updates for Classification by Mixture Models.” In Advances in Neural Information Processing Systems, 897–904.
Schoenberg, Frederic Paik. 2005. “Consistent Parametric Estimation of the Intensity of a Spatial–Temporal Point Process.” Journal of Statistical Planning and Inference 128 (1): 79–93.
Sha, Fei, and Lawrence K. Saul. 2006. “Large Margin Hidden Markov Models for Automatic Speech Recognition.” In Advances in Neural Information Processing Systems, 1249–56.
Sugiyama, Masashi, Ichiro Takeuchi, Taiji Suzuki, Takafumi Kanamori, Hirotaka Hachiya, and Daisuke Okanohara. 2010. “Conditional Density Estimation via Least-Squares Density Ratio Estimation.” In International Conference on Artificial Intelligence and Statistics, 781–88.
Tüske, Zoltán, Muhammad Ali Tahir, Ralf Schlüter, and Hermann Ney. 2015. “Integrating Gaussian Mixtures into Deep Neural Networks: Softmax Layer with Hidden Variables.” In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4285–89. IEEE.
Willett, R. M., and R. D. Nowak. 2007. “Multiscale Poisson Intensity and Density Estimation.” IEEE Transactions on Information Theory 53 (9): 3171–87.
