Covariance estimation

Especially the Gaussian case



Estimating the thing that is given to you by oracles in statistics homework assignments: the covariance matrix. Or, if the data is indexed by some parameter, we might consider the covariance kernel. We are especially interested in this for Gaussian processes, where the covariance structure characterises the process up to its mean.

I am not introducing a complete theory of covariance estimation here, merely some notes.

Two big-data problems can arise here: large \(p\) (ambient dimension) and large \(n\) (sample size). Large \(p\) is a problem because the covariance matrix is a \(p \times p\) matrix, and frequently we need to invert it to calculate some target estimand.
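To make the large-\(p\) problem concrete, here is a minimal numpy sketch (all values illustrative): with \(n < p\) the sample covariance is rank-deficient and cannot be inverted, but shrinking it toward a scaled identity restores invertibility.

```python
import numpy as np

# With n < p the sample covariance is rank-deficient, so it cannot be
# inverted; shrinkage toward a scaled identity restores invertibility.
rng = np.random.default_rng(0)
n, p = 20, 50                          # fewer samples than dimensions
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)            # p x p sample covariance
rank = np.linalg.matrix_rank(S)        # at most n - 1, so singular

alpha = 0.1                            # shrinkage weight (a tuning choice)
target = np.trace(S) / p * np.eye(p)
S_shrunk = (1 - alpha) * S + alpha * target
S_inv = np.linalg.inv(S_shrunk)        # now well-posed
```

Principled choices of the shrinkage weight are the subject of, e.g., Ledoit and Wolf (2004).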

With Gaussian structure, life can often be made not too bad even for large \(n\), because the problem frequently has convenient nearly-low-rank structure.
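A sketch of why low-rank structure helps: if an \(n \times n\) covariance is a diagonal plus a rank-\(k\) term with \(k \ll n\), the Woodbury identity reduces inversion to a \(k \times k\) solve. (The particular matrices here are arbitrary illustrations.)

```python
import numpy as np

# Woodbury identity: Sigma = sigma2 * I + U @ U.T with U of width k << n
# can be inverted with a k x k solve instead of an n x n one.
rng = np.random.default_rng(1)
n, k, sigma2 = 200, 5, 0.5
U = rng.standard_normal((n, k))
Sigma = sigma2 * np.eye(n) + U @ U.T

small = np.linalg.inv(np.eye(k) + U.T @ U / sigma2)     # only k x k
Sigma_inv = np.eye(n) / sigma2 - U @ small @ U.T / sigma2**2
```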

Bayesian

Inverse Wishart priors. πŸ— Other?
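A minimal sketch of the conjugate inverse-Wishart update for a zero-mean Gaussian likelihood: a prior \(\mathrm{IW}(\nu_0, \Psi_0)\) on the covariance yields the posterior \(\mathrm{IW}(\nu_0 + n, \Psi_0 + X^\top X)\). Hyperparameter values below are arbitrary illustrations.

```python
import numpy as np

# Conjugate inverse-Wishart update for a zero-mean Gaussian:
# prior IW(nu0, Psi0) on Sigma -> posterior IW(nu0 + n, Psi0 + X'X).
rng = np.random.default_rng(2)
p, n = 3, 100
nu0, Psi0 = p + 2, np.eye(p)           # illustrative hyperparameters
X = rng.standard_normal((n, p))        # data, taken as zero-mean

scatter = X.T @ X
nu_post = nu0 + n
Psi_post = Psi0 + scatter
post_mean = Psi_post / (nu_post - p - 1)   # E[Sigma | X], valid for nu_post > p + 1
```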

Precision estimation

The workhorse of learning graphical models under linearity and Gaussianity. See precision estimation for a more complete treatment.
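A toy illustration of why the precision matrix, not the covariance, encodes graphical structure: for a Gaussian AR(1) process the covariance is dense, yet its inverse is exactly tridiagonal, with zeros marking conditional independence beyond one step.

```python
import numpy as np

# AR(1) covariance Sigma_ij = rho^|i-j| is dense, but the precision
# matrix Theta = Sigma^{-1} is tridiagonal: zeros encode conditional
# independence in the Gaussian graphical model.
rho, p = 0.6, 6
idx = np.arange(p)
Sigma = rho ** np.abs(idx[:, None] - idx[None, :])
Theta = np.linalg.inv(Sigma)

off = np.triu(Theta, k=2)              # entries beyond the first superdiagonal
```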

Continuous

See kernel learning.

Parametric

On a lattice

Estimating a stationary covariance function on a regular lattice is a whole field of its own; useful keywords include circulant embedding. Although the topic is strictly more general than Gaussian processes on a lattice, it is often used in that context, and some extra results live on that page for now.
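A sketch of the circulant-embedding trick on a 1-D grid: embed the Toeplitz covariance in a circulant matrix, diagonalise it with the FFT, and colour complex white noise. The exponential covariance and grid size are illustrative choices, and the sketch assumes the embedding is nonnegative definite (tiny negative eigenvalues are clipped).

```python
import numpy as np

# Circulant embedding: extend the covariance sequence to the first column
# of a circulant matrix, whose eigenvalues are its FFT; colouring white
# noise by the eigenvalue square roots yields a stationary GP draw.
rng = np.random.default_rng(3)
n = 256
cov = np.exp(-0.1 * np.arange(n))        # covariance at lags 0 .. n-1

c = np.concatenate([cov, cov[-2:0:-1]])  # circulant first column, length 2n-2
lam = np.fft.fft(c).real                 # circulant eigenvalues
lam = np.clip(lam, 0.0, None)            # guard against tiny negatives
m = len(c)

z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
x = (np.fft.fft(np.sqrt(lam) * z) / np.sqrt(m))[:n].real  # one sample path
```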

Unordered

Thanks to Rothman (2010) I now think about covariance estimates as being different in ordered versus exchangeable data.

Sandwich estimators

For robust covariances of vector data, a.k.a. heteroskedasticity-consistent covariance estimators. This family incorporates the Eicker-Huber-White sandwich estimator, the Andrews kernel HAC estimator, Newey-West, and others. For an intro see Zeileis (2004).
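A minimal numpy sketch of the Eicker-Huber-White (HC0) sandwich for OLS under heteroskedastic noise; the data-generating values are illustrative only.

```python
import numpy as np

# OLS with noise whose scale grows with x, then the HC0 sandwich
# estimator: bread @ meat @ bread, with meat = X' diag(e_i^2) X.
rng = np.random.default_rng(4)
n = 500
x = rng.uniform(0.0, 2.0, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.standard_normal(n) * x   # heteroskedastic noise

beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta

bread = np.linalg.inv(X.T @ X)
meat = X.T @ (X * resid[:, None] ** 2)           # X' diag(e_i^2) X
V_hc0 = bread @ meat @ bread                     # the sandwich

sigma2 = resid @ resid / (n - X.shape[1])        # naive homoskedastic
V_ols = sigma2 * bread                           # estimate, for comparison
```

The R `sandwich` package implements this family (and its HC1-HC4 refinements) behind a uniform interface.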

Incoming

Bounding by harmonic and arithmetic means

There are some known bounds for the univariate case. Wikipedia notes, under "Relations with the harmonic and arithmetic means", that it has been shown (Mercer 2000) that for a sample \(\left\{y_i\right\}\) of positive real numbers, \[ \sigma_y^2 \leq 2 y_{\max }(A-H) \] where \(y_{\max }\) is the maximum of the sample, \(A\) is the arithmetic mean, \(H\) is the harmonic mean of the sample, and \(\sigma_y^2\) is the (biased) variance of the sample. This bound has since been improved: the variance satisfies \[ \begin{gathered} \sigma_y^2 \leq \frac{y_{\max }(A-H)\left(y_{\max }-A\right)}{y_{\max }-H}, \\ \sigma_y^2 \geq \frac{y_{\min }(A-H)\left(A-y_{\min }\right)}{H-y_{\min }}, \end{gathered} \] where \(y_{\min }\) is the minimum of the sample (Sharma 2008).
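A quick numerical check of these bounds on an arbitrary positive sample:

```python
import numpy as np

# Verify the Mercer (2000) bound and the tighter Sharma (2008) bounds
# relating the biased variance to the arithmetic and harmonic means.
y = np.array([1.0, 2.0, 3.0, 7.0, 11.0])
A = y.mean()                                 # arithmetic mean
H = len(y) / np.sum(1.0 / y)                 # harmonic mean
var = y.var()                                # biased sample variance
ymax, ymin = y.max(), y.min()

mercer = 2 * ymax * (A - H)                            # Mercer (2000)
upper = ymax * (A - H) * (ymax - A) / (ymax - H)       # Sharma (2008)
lower = ymin * (A - H) * (A - ymin) / (H - ymin)
```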

Mond and Pečarić (1996) say

Let us define the arithmetic mean of \(A\) with weight \(w\) as \[ A_n(A ; w)=\sum_{i=1}^n w_i A_i \] and the harmonic mean of \(A\) with weight \(w\) as \[ H_n(A ; w)=\left(\sum_{i=1}^n w_i A_i^{-1}\right)^{-1} \] It is well known \([2,5]\) that \[ H_n(A ; w) \leqslant A_n(A ; w) \] Moreover, if \(A_{i j}(i, j=1, \ldots, n)\) are positive definite matrices from \(H_m\), then the following inequality is also valid [1]: \[ \frac{1}{n} \sum_{j=1}^n\left(\frac{1}{n} \sum_{i=1}^n A_{i j}^{-1}\right)^{-1} \leqslant\left[\frac{1}{n} \sum_{i=1}^n\left(\frac{1}{n} \sum_{j=1}^n A_{i j}\right)^{-1}\right]^{-1} \]

For multivariate covariance we are interested in the PSD matrix version of this.
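A numerical sketch of the basic matrix AM-HM inequality: for positive definite matrices, the weighted harmonic mean precedes the weighted arithmetic mean in the Loewner (PSD) order. The matrices and weights are arbitrary illustrations.

```python
import numpy as np

# Check H_n(A; w) <= A_n(A; w) in the Loewner order for random SPD matrices:
# the difference arith - harm should be positive semidefinite.
rng = np.random.default_rng(5)

def random_spd(p):
    B = rng.standard_normal((p, p))
    return B @ B.T + p * np.eye(p)    # safely positive definite

As = [random_spd(4) for _ in range(3)]
w = np.array([0.2, 0.3, 0.5])         # weights summing to one

arith = sum(wi * Ai for wi, Ai in zip(w, As))
harm = np.linalg.inv(sum(wi * np.linalg.inv(Ai) for wi, Ai in zip(w, As)))

gap_eigs = np.linalg.eigvalsh(arith - harm)   # all >= 0 iff AM >= HM
```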

References

Abrahamsen, Petter. 1997. β€œA Review of Gaussian Random Fields and Correlation Functions.”
Anderson, Jeffrey L. 2007. β€œExploring the Need for Localization in Ensemble Data Assimilation Using a Hierarchical Ensemble Filter.” Physica D: Nonlinear Phenomena, Data Assimilation, 230 (1): 99–111.
Azizyan, Martin, Akshay Krishnamurthy, and Aarti Singh. 2015. β€œExtreme Compressive Sampling for Covariance Estimation.” arXiv:1506.00898 [Cs, Math, Stat], June.
Baik, Jinho, GΓ©rard Ben Arous, and Sandrine PΓ©chΓ©. 2005. β€œPhase Transition of the Largest Eigenvalue for Nonnull Complex Sample Covariance Matrices.” The Annals of Probability 33 (5): 1643–97.
Banerjee, Onureena, Laurent El Ghaoui, and Alexandre d’Aspremont. 2008. β€œModel Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data.” Journal of Machine Learning Research 9 (Mar): 485–516.
Barnard, John, Robert McCulloch, and Xiao-Li Meng. 2000. β€œModeling Covariance Matrices in Terms of Standard Deviations and Correlations, with Application to Shrinkage.” Statistica Sinica 10 (4): 1281–311.
Ben Arous, GΓ©rard, and Sandrine PΓ©chΓ©. 2005. β€œUniversality of Local Eigenvalue Statistics for Some Sample Covariance Matrices.” Communications on Pure and Applied Mathematics 58 (10): 1316–57.
Bickel, Peter J., and Elizaveta Levina. 2008. β€œRegularized Estimation of Large Covariance Matrices.” The Annals of Statistics 36 (1): 199–227.
Bosq, Denis. 2002. β€œEstimation of Mean and Covariance Operator of Autoregressive Processes in Banach Spaces.” Statistical Inference for Stochastic Processes 5 (3): 287–306.
Cai, T. Tony, Cun-Hui Zhang, and Harrison H. Zhou. 2010. β€œOptimal Rates of Convergence for Covariance Matrix Estimation.” The Annals of Statistics 38 (4): 2118–44.
Chan, Tony F., Gene H. Golub, and Randall J. Leveque. 1983. β€œAlgorithms for Computing the Sample Variance: Analysis and Recommendations.” The American Statistician 37 (3): 242–47.
Chen, Hao, Lili Zheng, Raed Al Kontar, and Garvesh Raskutti. 2020. β€œStochastic Gradient Descent in Correlated Settings: A Study on Gaussian Processes.” In Proceedings of the 34th International Conference on Neural Information Processing Systems, 2722–33. NIPS’20. Red Hook, NY, USA: Curran Associates Inc.
Chen, Xiaohui, Mengyu Xu, and Wei Biao Wu. 2013. β€œCovariance and Precision Matrix Estimation for High-Dimensional Time Series.” The Annals of Statistics 41 (6).
Cook, R. Dennis. 2018. β€œPrincipal Components, Sufficient Dimension Reduction, and Envelopes.” Annual Review of Statistics and Its Application 5 (1): 533–59.
Cunningham, John P., Krishna V. Shenoy, and Maneesh Sahani. 2008. β€œFast Gaussian Process Methods for Point Process Intensity Estimation.” In Proceedings of the 25th International Conference on Machine Learning, 192–99. ICML ’08. New York, NY, USA: ACM Press.
Damian, Doris, Paul D. Sampson, and Peter Guttorp. 2001. β€œBayesian Estimation of Semi-Parametric Non-Stationary Spatial Covariance Structures.” Environmetrics 12 (2): 161–78.
Daniels, M. J., and M. Pourahmadi. 2009. β€œModeling Covariance Matrices via Partial Autocorrelations.” Journal of Multivariate Analysis 100 (10): 2352–63.
Dasgupta, Sanjoy, and Daniel Hsu. 2007. β€œOn-Line Estimation with the Multivariate Gaussian Distribution.” In Learning Theory, edited by Nader H. Bshouty and Claudio Gentile, 4539:278–92. Berlin, Heidelberg: Springer Berlin Heidelberg.
Efron, Bradley. 2010. β€œCorrelated z-Values and the Accuracy of Large-Scale Statistical Estimates.” Journal of the American Statistical Association 105 (491): 1042–55.
Fan, Jianqing, Yuan Liao, and Han Liu. 2016. β€œAn Overview of the Estimation of Large Covariance and Precision Matrices.” The Econometrics Journal 19 (1): C1–32.
Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. 2008. β€œSparse Inverse Covariance Estimation with the Graphical Lasso.” Biostatistics 9 (3): 432–41.
Fuentes, Montserrat. 2006. β€œTesting for Separability of Spatial–Temporal Covariance Functions.” Journal of Statistical Planning and Inference 136 (2): 447–66.
Furrer, R., and T. Bengtsson. 2007. β€œEstimation of high-dimensional prior and posterior covariance matrices in Kalman filter variants.” Journal of Multivariate Analysis 98 (2): 227–55.
Furrer, Reinhard, Marc G Genton, and Douglas Nychka. 2006. β€œCovariance Tapering for Interpolation of Large Spatial Datasets.” Journal of Computational and Graphical Statistics 15 (3): 502–23.
Gneiting, Tilmann, William Kleiber, and Martin Schlather. 2010. β€œMatΓ©rn Cross-Covariance Functions for Multivariate Random Fields.” Journal of the American Statistical Association 105 (491): 1167–77.
Goodman, Leo A. 1960. β€œOn the Exact Variance of Products.” Journal of the American Statistical Association 55 (292): 708–13.
Hackbusch, Wolfgang. 2015. Hierarchical Matrices: Algorithms and Analysis. 1st ed. Springer Series in Computational Mathematics 49. Heidelberg New York Dordrecht London: Springer Publishing Company, Incorporated.
Hansen, Christian B. 2007. β€œGeneralized Least Squares Inference in Panel and Multilevel Models with Serial Correlation and Fixed Effects.” Journal of Econometrics 140 (2): 670–94.
Heinrich, Claudio, and Mark Podolskij. 2014. β€œOn Spectral Distribution of High Dimensional Covariation Matrices.” arXiv:1410.6764 [Math], October.
Huang, Jianhua Z., Naiping Liu, Mohsen Pourahmadi, and Linxu Liu. 2006. β€œCovariance Matrix Selection and Estimation via Penalised Normal Likelihood.” Biometrika 93 (1): 85–98.
James, William, and Charles Stein. 1961. β€œEstimation with Quadratic Loss.” In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, 1:361–79. University of California Press.
JankovΓ‘, Jana, and Sara van de Geer. 2015. β€œHonest Confidence Regions and Optimality in High-Dimensional Precision Matrix Estimation.” arXiv:1507.02061 [Math, Stat], July.
Kauermann, GΓΆran, and Raymond J. Carroll. 2001. β€œA Note on the Efficiency of Sandwich Covariance Matrix Estimation.” Journal of the American Statistical Association 96 (456): 1387–96.
Khoromskij, B. N., A. Litvinenko, and H. G. Matthies. 2009. β€œApplication of Hierarchical Matrices for Computing the Karhunen–LoΓ¨ve Expansion.” Computing 84 (1-2): 49–67.
Khoshgnauz, Ehsan. 2012. β€œLearning Markov Network Structure Using Brownian Distance Covariance.” arXiv:1206.6361 [Cs, Stat], June.
Kuismin, Markku O., and Mikko J. SillanpÀÀ. 2017. β€œEstimation of Covariance and Precision Matrix, Network Structure, and a View Toward Systems Biology.” WIREs Computational Statistics 9 (6): e1415.
Lam, Clifford, and Jianqing Fan. 2009. β€œSparsistency and Rates of Convergence in Large Covariance Matrix Estimation.” Annals of Statistics 37 (6B): 4254–78.
Ledoit, Olivier, and Michael Wolf. 2004. β€œA Well-Conditioned Estimator for Large-Dimensional Covariance Matrices.” Journal of Multivariate Analysis 88 (2): 365–411.
Ling, Robert F. 1974. β€œComparison of Several Algorithms for Computing Sample Means and Variances.” Journal of the American Statistical Association 69 (348): 859–66.
Loh, Wei-Liem. 1991. β€œEstimating Covariance Matrices II.” Journal of Multivariate Analysis 36 (2): 163–74.
Mardia, K. V., and R. J. Marshall. 1984. β€œMaximum Likelihood Estimation of Models for Residual Covariance in Spatial Regression.” Biometrika 71 (1): 135–46.
Meier, Alexander. 2018. β€œA matrix Gamma process and applications to Bayesian analysis of multivariate time series.”
Meier, Alexander, Claudia Kirch, and Renate Meyer. 2020. β€œBayesian Nonparametric Analysis of Multivariate Time Series: A Matrix Gamma Process Approach.” Journal of Multivariate Analysis 175 (January): 104560.
Meinshausen, Nicolai, and Peter BΓΌhlmann. 2006. β€œHigh-Dimensional Graphs and Variable Selection with the Lasso.” The Annals of Statistics 34 (3): 1436–62.
Mercer, A. McD. 2000. β€œBounds for A–G, A–H, G–H, and a Family of Inequalities of Ky Fan’s Type, Using a General Method.” Journal of Mathematical Analysis and Applications 243 (1): 163–73.
Minasny, Budiman, and Alex. B. McBratney. 2005. β€œThe MatΓ©rn Function as a General Model for Soil Variograms.” Geoderma, Pedometrics 2003, 128 (3–4): 192–207.
Mond, B, and J. E Pečarić. 1996. “A Mixed Arithmetic-Mean-Harmonic-Mean Matrix Inequality.” Linear Algebra and Its Applications, Linear Algebra and Statistics: In Celebration of C. R. Rao’s 75th Birthday (September 10, 1995), 237-238 (April): 449–54.
PΓ©bay, Philippe. 2008. β€œFormulas for Robust, One-Pass Parallel Computation of Covariances and Arbitrary-Order Statistical Moments.” Sandia Report SAND2008-6212, Sandia National Laboratories.
Pleiss, Geoff, Jacob R. Gardner, Kilian Q. Weinberger, and Andrew Gordon Wilson. 2018. β€œConstant-Time Predictive Distributions for Gaussian Processes.” In. arXiv.
Pourahmadi, Mohsen. 2011. β€œCovariance Estimation: The GLM and Regularization Perspectives.” Statistical Science 26 (3): 369–87.
Prause, Annabel, and Ansgar Steland. 2018. β€œEstimation of the Asymptotic Variance of Univariate and Multivariate Random Fields and Statistical Inference.” Electronic Journal of Statistics 12 (1): 890–940.
Ramdas, Aaditya, and Leila Wehbe. 2014. β€œStein Shrinkage for Cross-Covariance Operators and Kernel Independence Testing.” arXiv:1406.1922 [Stat], June.
Ravikumar, Pradeep, Martin J. Wainwright, Garvesh Raskutti, and Bin Yu. 2011. β€œHigh-Dimensional Covariance Estimation by Minimizing β„“1-Penalized Log-Determinant Divergence.” Electronic Journal of Statistics 5: 935–80.
Rosenblatt, M. 1984. β€œAsymptotic Normality, Strong Mixing and Spectral Density Estimates.” The Annals of Probability 12 (4): 1167–80.
Rothman, Adam J. 2010. β€œSparse Estimation of High-Dimensional Covariance Matrices.”
Sampson, Paul D., and Peter Guttorp. 1992. β€œNonparametric Estimation of Nonstationary Spatial Covariance Structure.” Journal of the American Statistical Association 87 (417): 108–19.
SchΓ€fer, Juliane, and Korbinian Strimmer. 2005. β€œA shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics.” Statistical Applications in Genetics and Molecular Biology 4: Article32.
Schmidt, Alexandra M., and Anthony O’Hagan. 2003. β€œBayesian Inference for Non-Stationary Spatial Covariance Structure via Spatial Deformations.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 65 (3): 743–58.
Shao, Xiaofeng, and Wei Biao Wu. 2007. β€œAsymptotic Spectral Theory for Nonlinear Time Series.” The Annals of Statistics 35 (4): 1773–1801.
Sharma, Rajesh. 2008. β€œSome More Inequalities for Arithmetic Mean, Harmonic Mean and Variance.” Journal of Mathematical Inequalities, no. 1: 109–14.
Shimotsu, Katsumi, and Peter C. B. Phillips. 2004. β€œLocal Whittle Estimation in Nonstationary and Unit Root Cases.” The Annals of Statistics 32 (2): 656–92.
Stein, Michael L. 2005. β€œSpace-Time Covariance Functions.” Journal of the American Statistical Association 100 (469): 310–21.
Sun, Ying, and Michael L. Stein. 2016. β€œStatistically and Computationally Efficient Estimating Equations for Large Spatial Datasets.” Journal of Computational and Graphical Statistics 25 (1): 187–208.
Takemura, Akimichi. 1984. β€œAn Orthogonally Invariant Minimax Estimator of the Covariance Matrix of a Multivariate Normal Population.” Tsukuba Journal of Mathematics 8 (2): 367–76.
Warton, David I. 2008. β€œPenalized Normal Likelihood and Ridge Regularization of Correlation and Covariance Matrices.” Journal of the American Statistical Association 103 (481): 340–49.
Whittle, P. 1952. β€œTests of Fit in Time Series.” Biometrika 39 (3-4): 309–18.
β€”β€”β€”. 1953a. β€œThe Analysis of Multiple Stationary Time Series.” Journal of the Royal Statistical Society: Series B (Methodological) 15 (1): 125–39.
β€”β€”β€”. 1953b. β€œEstimation and Information in Stationary Time Series.” Arkiv FΓΆr Matematik 2 (5): 423–34.
Whittle, Peter. 1952. β€œSome Results in Time Series Analysis.” Scandinavian Actuarial Journal 1952 (1-2): 48–60.
Wolter, Kirk M. 2007. Introduction to Variance Estimation. 2nd ed. Statistics for Social and Behavioral Sciences. New York: Springer.
Wu, Wei Biao, and Mohsen Pourahmadi. 2003. β€œNonparametric Estimation of Large Covariance Matrices of Longitudinal Data.” Biometrika 90 (4): 831–44.
Yuan, Ming, and Yi Lin. 2007. β€œModel Selection and Estimation in the Gaussian Graphical Model.” Biometrika 94 (1): 19–35.
Zeileis, Achim. 2004. β€œEconometric Computing with HC and HAC Covariance Matrix Estimators.” Journal of Statistical Software 11 (10).
β€”β€”β€”. 2006a. β€œImplementing a Class of Structural Change Tests: An Econometric Computing Approach.” Computational Statistics & Data Analysis 50 (11): 2987–3008.
β€”β€”β€”. 2006b. β€œObject-Oriented Computation of Sandwich Estimators.” Journal of Statistical Software 16 (1): 1–16.
Zhang, T., and H. Zou. 2014. β€œSparse Precision Matrix Estimation via Lasso Penalized D-Trace Loss.” Biometrika 101 (1): 103–20.
