Statistical learning theory for time series



Statistical learning theory for dependent data such as time series (and possibly other dependency structures, but I only know about results for time series).

Non-stationary, non-asymptotic bounds, please. Keywords: ergodicity, Ξ±-mixing, Ξ²-mixing.
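For reference, the mixing coefficients that recur below are usually defined along the following lines (a standard formulation; individual papers differ in details):

$$
\alpha(k) = \sup_{t}\ \sup_{A \in \sigma(X_{\le t}),\, B \in \sigma(X_{\ge t+k})} \bigl|P(A \cap B) - P(A)P(B)\bigr|,
$$

$$
\beta(k) = \sup_{t}\ \mathbb{E}\Bigl[\, \sup_{B \in \sigma(X_{\ge t+k})} \bigl|P(B \mid \sigma(X_{\le t})) - P(B)\bigr| \Bigr],
\qquad
\varphi(k) = \sup_{t}\ \sup_{\substack{A \in \sigma(X_{\le t}),\, P(A) > 0 \\ B \in \sigma(X_{\ge t+k})}} \bigl|P(B \mid A) - P(B)\bigr|.
$$

A process is Ξ±-, Ξ²- or Ο†-mixing if the corresponding coefficient vanishes as $k \to \infty$; Ο†-mixing implies Ξ²-mixing, which implies Ξ±-mixing.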

Mohri and Kuznetsov have done a lot of work here; see, e.g., their NIPS 2016 tutorial. Many results depend on stringent ergodicity assumptions. Notably, (Kuznetsov and Mohri 2016, 2015) try to go beyond this setup to other types of mixing. An overview up to 2011 is given in (McDonald, Shalizi, and Schervish 2011b):

(Yu 1994) sets forth many of the uniform ergodic theorems that are needed to derive generalization error bounds for stochastic processes. (Meir 2000) is one of the first papers to construct risk bounds for time series. […]

More recently, others have provided PAC results for non-IID data. (Steinwart and Christmann 2009) prove an oracle inequality for generic regularized empirical risk minimization algorithms learning from Ξ±-mixing processes, a fairly general sort of weak serial dependence, getting learning rates for least-squares support vector machines (SVMs) close to the optimal IID rates. (Mohri and Rostamizadeh 2009b) prove stability-based generalization bounds when the data are stationary and Ο†-mixing or Ξ²-mixing, strictly generalizing IID results and applying to all stable learning algorithms. […] (Karandikar and Vidyasagar n.d.) show that if an algorithm is β€œsubadditive” and yields a predictor whose risk can be upper bounded when the data are IID, then the same algorithm yields predictors whose risk can be bounded if data are Ξ²-mixing. They use this result to derive generalization error bounds in terms of the learning rates for IID data and the Ξ²-mixing coefficients.
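The workhorse behind many of these Ξ²-mixing results is a blocking argument in the style of (Yu 1994): split the sample into alternating blocks, treat every other block as if it were drawn independently, and pay for the approximation in Ξ²-coefficients. Schematically (this is the shape of the lemma, not its exact statement):

$$
\Bigl| \mathbb{E}\bigl[f(Z_{B_1}, \dots, Z_{B_\mu})\bigr] - \widetilde{\mathbb{E}}\bigl[f(\widetilde{Z}_{B_1}, \dots, \widetilde{Z}_{B_\mu})\bigr] \Bigr| \;\le\; (\mu - 1)\,\|f\|_\infty\, \beta(a),
$$

where a sample of size $2\mu a$ is split into $2\mu$ blocks of length $a$, the $Z_{B_i}$ are (say) the odd-indexed blocks, and the $\widetilde{Z}_{B_i}$ are independent blocks with the same marginal distributions. Any IID-style concentration result applied to the independent blocks then transfers to the dependent sample at the price of an additive $(\mu-1)\beta(a)$ term; this is the trade-off exploited, in different ways, by (Mohri and Rostamizadeh 2009b) and (Karandikar and Vidyasagar n.d.).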

Bounds that depend on mixing coefficients can be unsatisfying: even when the mixing coefficient can be estimated from the data, the estimate depends on the parameters fit to the data, which in turn depend on the mixing coefficient, so the argument risks circularity.

One possible resolution that sidesteps this entirely is due to Kuznetsov and Mohri ((Kuznetsov and Mohri 2015, 2014); note that the papers make more sense read in reverse order of publication):

Time series forecasting plays a crucial role in a number of domains ranging from weather forecasting and earthquake prediction to applications in economics and finance. The classical statistical approaches to time series analysis are based on generative models such as the autoregressive moving average (ARMA) models, or their integrated versions (ARIMA) and several other extensions […] . Most of these models rely on strong assumptions about the noise terms, often assumed to be i.i.d. random variables sampled from a Gaussian distribution, and the guarantees provided in their support are only asymptotic. An alternative non-parametric approach to time series analysis consists of extending the standard i.i.d. statistical learning theory framework to that of stochastic processes.

[…] we consider the general case of non-stationary non-mixing processes. We are not aware of any prior work providing generalization bounds in this setting. In fact, our bounds appear to be novel even when the process is stationary (but not mixing). The learning guarantees that we present hold for both bounded and unbounded memory models. […] Our guarantees cover the majority of approaches used in practice, including various autoregressive and state space models. The key ingredients of our generalization bounds are a data-dependent measure of sequential complexity (expected sequential covering number or sequential Rademacher complexity [Rakhlin et al., 2010]) and a measure of discrepancy between the sample and target distributions. (Kuznetsov and Mohri 2014) also give generalization bounds in terms of discrepancy. However, unlike the result of (Kuznetsov and Mohri 2014), our analysis does not require any mixing assumptions which are hard to verify in practice. More importantly, under some additional mild assumption, the discrepancy measure that we propose can be estimated from data, which leads to data-dependent learning guarantees for non-stationary non-mixing case.
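To give a flavour of the objects involved (a schematic paraphrase of those papers, with constants and regularity conditions omitted): for a sample $Z_1, \dots, Z_T$, a hypothesis class $H$, a loss $\ell$, and sample weights $q = (q_1, \dots, q_T)$, the key discrepancy term is roughly

$$
\Delta(q) \;=\; \sup_{h \in H} \left( \mathbb{E}\bigl[\ell(h, Z_{T+1}) \mid Z_1^{T}\bigr] \;-\; \sum_{t=1}^{T} q_t\, \mathbb{E}\bigl[\ell(h, Z_t) \mid Z_1^{t-1}\bigr] \right),
$$

and the resulting guarantees have the shape: with probability at least $1 - \delta$, for all $h \in H$,

$$
\mathbb{E}\bigl[\ell(h, Z_{T+1}) \mid Z_1^{T}\bigr] \;\le\; \sum_{t=1}^{T} q_t\, \ell(h, Z_t) \;+\; \Delta(q) \;+\; \text{(sequential complexity of } H\text{)} \;+\; O\!\bigl(\|q\|_2 \sqrt{\log(1/\delta)}\bigr).
$$

The point is that $\Delta(q)$ quantifies non-stationarity directly, without passing through mixing coefficients, and under additional mild assumptions it can itself be estimated from the sample.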

They still require the time series to have a discrete time index, which can be unsatisfying for continuous-time processes.

They also work with mixing coefficients (e.g. (Kuznetsov and Mohri 2016)), but the last time I saw them speak they were critical of the whole mixing-coefficient setting.
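As a toy illustration of how a data-dependent discrepancy proxy might be computed (my own sketch, not the estimator from the papers: it compares each candidate predictor's loss on a recent window against its weighted full-sample loss, which is the spirit of the estimable discrepancy mentioned above):

```python
import numpy as np

def empirical_discrepancy(losses, q, recent=50):
    """Toy proxy for a Kuznetsov-Mohri-style discrepancy.

    losses : array of shape (n_hypotheses, T); losses[h, t] is the loss of
             candidate predictor h on observation t (e.g. one-step-ahead
             squared error of several fitted AR models).
    q      : array of shape (T,); sample weights summing to 1.
    recent : number of trailing observations used as a stand-in for the
             (unobservable) target distribution at time T + 1.
    """
    losses = np.asarray(losses, dtype=float)
    q = np.asarray(q, dtype=float)
    recent_mean = losses[:, -recent:].mean(axis=1)   # loss near the forecast horizon
    weighted_mean = losses @ q                        # q-weighted in-sample loss
    # Large values signal that the recent regime differs from the weighted sample,
    # i.e. in-sample risk is an optimistic guide to forecasting risk.
    return np.max(recent_mean - weighted_mean)

# Example: uniform weights over T = 500 observations, three candidate models,
# with a mild regime shift near the end of the sample.
rng = np.random.default_rng(0)
losses = rng.gamma(shape=2.0, scale=1.0, size=(3, 500))
losses[:, 400:] += 0.5
q = np.full(500, 1 / 500)
print(empirical_discrepancy(losses, q))
```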

References

Agarwal, Anish, Muhammad Jehangir Amjad, Devavrat Shah, and Dennis Shen. 2018. β€œTime Series Analysis via Matrix Estimation.” arXiv:1802.09064 [Cs, Stat], February.
Alquier, Pierre, Xiaoyin Li, and Olivier Wintenberger. 2013. β€œPrediction of Time Series by Statistical Learning: General Losses and Fast Rates.” Dependence Modeling 1: 65–93.
Alquier, Pierre, and Olivier Wintenberger. 2012. β€œModel Selection for Weakly Dependent Time Series Forecasting.” Bernoulli.
Bergmeir, Christoph, Rob J. Hyndman, and Bonsoo Koo. 2015. β€œA Note on the Validity of Cross-Validation for Evaluating Time Series Prediction.”
Cortes, Corinna, Vitaly Kuznetsov, Mehryar Mohri, and Scott Yang. 2016. β€œStructured Prediction Theory Based on Factor Graph Complexity.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 2514–22. Curran Associates, Inc.
Delft, Anne van, and Michael Eichler. 2016. β€œLocally Stationary Functional Time Series.” arXiv:1602.05125 [Math, Stat], February.
Geer, Sara van de. 2002. β€œOn Hoeffding’s Inequality for Dependent Random Variables.” In Empirical Process Techniques for Dependent Data. BirkhΓ€user.
Gribonval, RΓ©mi, Gilles Blanchard, Nicolas Keriven, and Yann Traonmilin. 2017. β€œCompressive Statistical Learning with Random Feature Moments.” arXiv:1706.07180 [Cs, Math, Stat], June.
Hardt, Moritz, Tengyu Ma, and Benjamin Recht. 2018. β€œGradient Descent Learns Linear Dynamical Systems.” The Journal of Machine Learning Research 19 (1): 1025–68.
Hazan, Elad, Karan Singh, and Cyril Zhang. 2017. β€œLearning Linear Dynamical Systems via Spectral Filtering.” In NIPS.
Karandikar, R. L., and M. Vidyasagar. n.d. β€œProbably Approximately Correct Learning with Beta-Mixing Input Sequences.”
Kontorovich, Leonid (Aryeh), Corinna Cortes, and Mehryar Mohri. 2008. β€œKernel Methods for Learning Languages.” Theoretical Computer Science, Algorithmic Learning Theory, 405 (3): 223–36.
Kontorovich, Leonid, Corinna Cortes, and Mehryar Mohri. 2006. β€œLearning Linearly Separable Languages.” In Algorithmic Learning Theory, edited by JosΓ© L. BalcΓ‘zar, Philip M. Long, and Frank Stephan, 288–303. Lecture Notes in Computer Science 4264. Springer Berlin Heidelberg.
Kuznetsov, Vitaly, and Mehryar Mohri. 2014. β€œForecasting Non-Stationary Time Series: From Theory to Algorithms.”
β€”β€”β€”. 2015. β€œLearning Theory and Algorithms for Forecasting Non-Stationary Time Series.” In Advances in Neural Information Processing Systems, 541–49. Curran Associates, Inc.
β€”β€”β€”. 2016. β€œGeneralization Bounds for Non-Stationary Mixing Processes.” Machine Learning Journal.
McDonald, Daniel J., Cosma Rohilla Shalizi, and Mark Schervish. 2011a. β€œGeneralization Error Bounds for Stationary Autoregressive Models.” arXiv:1103.0942 [Cs, Stat], March.
β€”β€”β€”. 2011b. β€œRisk Bounds for Time Series Without Strong Mixing.” arXiv:1106.0730 [Cs, Stat], June.
Meir, Ron. 2000. β€œNonparametric Time Series Prediction Through Adaptive Model Selection.” Machine Learning 39 (1): 5–34.
Mohri, Mehryar, and Afshin Rostamizadeh. 2009a. β€œRademacher Complexity Bounds for Non-IID Processes.” In Advances in Neural Information Processing Systems, 1097–1104.
β€”β€”β€”. 2009b. β€œStability Bounds for Stationary Ο•-Mixing and Ξ²-Mixing Processes.” Journal of Machine Learning Research 4: 1–26.
Rakhlin, Alexander, Karthik Sridharan, and Ambuj Tewari. 2014. β€œSequential Complexities and Uniform Martingale Laws of Large Numbers.” Probability Theory and Related Fields 161 (1-2): 111–53.
Rashidinejad, Paria, Jiantao Jiao, and Stuart Russell. 2020. β€œSLIP: Learning to Predict in Unknown Dynamical Systems with Long-Term Memory.” In Advances in Neural Information Processing Systems. Vol. 33.
Sattar, Yahya, and Samet Oymak. 2022. β€œNon-Asymptotic and Accurate Learning of Nonlinear Dynamical Systems.” Journal of Machine Learning Research 23 (140): 1–49.
Shalizi, Cosma, and Aryeh Kontorovich. 2013. β€œPredictive PAC Learning and Process Decompositions.” In Advances in Neural Information Processing Systems, 1619–27.
Simchowitz, Max, Horia Mania, Stephen Tu, Michael I. Jordan, and Benjamin Recht. 2018. β€œLearning Without Mixing: Towards A Sharp Analysis of Linear System Identification.” arXiv:1802.08334 [Cs, Math, Stat], February.
Steinwart, Ingo, and Andreas Christmann. 2009. β€œFast Learning from Non-IID Observations.” In Advances in Neural Information Processing Systems, 1768–76.
Xu, Aolin, and Maxim Raginsky. 2017. β€œInformation-Theoretic Analysis of Generalization Capability of Learning Algorithms.” In Advances in Neural Information Processing Systems.
Yu, Bin. 1994. β€œRates of Convergence for Empirical Processes of Stationary Mixing Sequences.” The Annals of Probability 22 (1): 94–116.
