Feedback system identification, not necessarily linear

Learning dynamics from data



The order in which this is presented right now makes no sense.

If I have a system whose future evolution is important to predict, why not try to infer a plausible model instead of a convenient linear one?

To reconstruct the unobserved state, as opposed to the parameters of the process acting upon it, we do state filtering. There can be interplay between these two steps if we are doing simulation-based online parameter inference, as in recursive estimation (where exactly is the division between the two?). Or we might decide the state is unimportant and attempt to estimate the evolution of the observations alone. That is the Koopman operator trick.
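To make "state filtering" concrete, here is a minimal sketch for the easiest case: a scalar linear-Gaussian model with known parameters, filtered with the classic Kalman recursion. The model and the names `A, C, Q, R` are illustrative choices, not anything prescribed above.

```python
import numpy as np

# Minimal state-filtering sketch for a linear-Gaussian model with known
# parameters (A, C, Q, R are illustrative):
#   x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)   (state transition)
#   y_t = C x_t + v_t,      v_t ~ N(0, R)   (observation)

def kalman_filter(ys, A, C, Q, R, m0, P0):
    """Return the filtered means E[x_t | y_{1:t}] for scalar A, C."""
    m, P = m0, P0
    means = []
    for y in ys:
        # Predict: push the current posterior through the dynamics.
        m_pred = A * m
        P_pred = A * P * A + Q
        # Update: condition on the new observation.
        S = C * P_pred * C + R          # innovation variance
        K = P_pred * C / S              # Kalman gain
        m = m_pred + K * (y - C * m_pred)
        P = (1 - K * C) * P_pred
        means.append(m)
    return np.array(means)

rng = np.random.default_rng(0)
A, C, Q, R = 0.9, 1.0, 0.1, 0.5
x, xs, ys = 0.0, [], []
for _ in range(200):
    x = A * x + rng.normal(scale=np.sqrt(Q))
    xs.append(x)
    ys.append(C * x + rng.normal(scale=np.sqrt(R)))

ms = kalman_filter(ys, A, C, Q, R, m0=0.0, P0=1.0)
# The filtered means should track the latent state more closely
# than the raw observations do.
print(np.mean((ms - np.array(xs)) ** 2))
print(np.mean((np.array(ys) - np.array(xs)) ** 2))
```

Everything in the rest of this notebook complicates one of the assumptions here: nonlinearity in the dynamics, non-Gaussian noise, or unknown `A, Q, R`.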

A compact overview is inserted incidentally in Cosma’s review of Fan and Yao (2003), wherein he also recommends Bosq and Blanke (2007), Bosq (1998), and Taniguchi and Kakizawa (2000).

There are many methods. From an engineering/control perspective, Brunton, Proctor, and Kutz (2016) generalise the identification process for linear time series to a sparse regression over nonlinear candidate terms. Other options are indirect inference, or recursive hierarchical generalised linear models, an obvious way to generalise linear systems in the same way that GLMs generalise linear models. Kitagawa and Gersch (1996) is popular in a Bayes context.
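The sparse-regression idea in Brunton, Proctor, and Kutz (2016) is simple enough to sketch: regress estimated derivatives onto a library of candidate terms, then repeatedly threshold small coefficients to zero (sequentially thresholded least squares). The toy system and the library below are my illustrative choices, not theirs.

```python
import numpy as np

# Sketch of sparse identification of nonlinear dynamics (SINDy-style).
# True dynamics (illustrative): x' = -2x + 0.5 x^3, simulated by forward Euler.
rng = np.random.default_rng(1)
dt, T = 1e-3, 5000
x = np.empty(T)
x[0] = 0.5
for t in range(T - 1):
    x[t + 1] = x[t] + dt * (-2 * x[t] + 0.5 * x[t] ** 3)
dx = np.gradient(x, dt)                 # numerical derivative of the trajectory

# Candidate library of terms: [1, x, x^2, x^3].
Theta = np.column_stack([np.ones(T), x, x ** 2, x ** 3])

# Sequentially thresholded least squares.
xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.1            # threshold out small coefficients
    xi[small] = 0.0
    big = ~small
    xi[big] = np.linalg.lstsq(Theta[:, big], dx, rcond=None)[0]  # refit the rest

# Coefficients on [1, x, x^2, x^3]; the fit should land close to
# the true sparse dynamics -2*x + 0.5*x^3.
print(np.round(xi, 2))
```

The thresholding is what buys interpretability: the recovered model is a short symbolic expression, not a black box.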

Hefny, Downey, and Gordon (2015):

We address […] these problems with a new view of predictive state methods for dynamical system learning. In this view, a dynamical system learning problem is reduced to a sequence of supervised learning problems. So, we can directly apply the rich literature on supervised learning methods to incorporate many types of prior knowledge about problem structure. We give a general convergence rate analysis that allows a high degree of flexibility in designing estimators. And finally, implementing a new estimator becomes as simple as rearranging our data and calling the appropriate supervised learning subroutines.

[…] More specifically, our contribution is to show that we can use much more general supervised learning algorithms in place of linear regression, and still get a meaningful theoretical analysis. In more detail:

  • we point out that we can equally well use any well-behaved supervised learning algorithm in place of linear regression in the first stage of instrumental-variable regression;

  • for the second stage of instrumental-variable regression, we generalize ordinary linear regression to its RKHS counterpart;

  • we analyze the resulting combination, and show that we get convergence to the correct answer, with a rate that depends on how quickly the individual supervised learners converge

State filters are cool for estimating time-varying hidden states given known, fixed system parameters. How about learning the parameters of the model generating your states? Classic approaches in dynamical systems include basic linear system identification and general system identification. But can you identify the fixed parameters (not just the hidden states) with a state filter?

Yes. This is called recursive estimation.
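One common recursive-estimation trick is to augment the state with the unknown parameter and run an ordinary filter on the augmented state. Here is a sketch using a bootstrap particle filter to jointly track the state \(x_t\) and the AR coefficient \(a\) in \(x_t = a x_{t-1} + w_t,\ y_t = x_t + v_t\). The artificial jitter on \(a\) is a standard kludge to stop parameter particles degenerating, and an assumption of this sketch rather than the only way to do it.

```python
import numpy as np

rng = np.random.default_rng(2)
a_true, q, r, T, N = 0.8, 0.3, 0.3, 500, 2000

# Simulate data from the true model.
x, ys = 0.0, []
for _ in range(T):
    x = a_true * x + rng.normal(scale=np.sqrt(q))
    ys.append(x + rng.normal(scale=np.sqrt(r)))

# Particles over the augmented state (x, a).
xp = rng.normal(size=N)
ap = rng.uniform(-1, 1, size=N)
for y in ys:
    ap = ap + rng.normal(scale=0.01, size=N)    # artificial parameter dynamics
    xp = ap * xp + rng.normal(scale=np.sqrt(q), size=N)
    logw = -0.5 * (y - xp) ** 2 / r             # Gaussian observation log-weight
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)            # multinomial resampling
    xp, ap = xp[idx], ap[idx]

a_hat = ap.mean()
print(a_hat)   # should land near a_true = 0.8
```

The jitter scale trades off adaptivity against artificial posterior inflation; Liu-and-West-style kernel shrinkage is the usual refinement.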

Basic Construction

There are a few variations. We start with the basic continuous-time state-space model.

Here we have an unobserved Markov state process \(x(t)\) on \(\mathcal{X}\) and an observation process \(y(t)\) on \(\mathcal{Y}\). For now they will be assumed to be finite-dimensional vectors over \(\mathbb{R}\). They will additionally depend upon a vector of parameters \(\theta\). We observe the process at discrete times \(t(1:T)=(t_1, t_2,\dots, t_T),\) and we write the observations \(y(1:T)=(y(t_1), y(t_2),\dots, y(t_T)).\)

We presume our processes are completely specified by the following conditional densities (which might not have closed-form expression)

The transition density

\[f(x(t_i)|x(t_{i-1}), \theta)\]

The observation density

\[g(y(t_i)|x(t_i), \theta)\]

TBC.
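A sketch of the setup above: an unobserved continuous-time state (here an Ornstein–Uhlenbeck process, an illustrative choice) simulated by Euler–Maruyama on a fine grid, but observed with noise only at a sparse set of discrete times \(t_1,\dots,t_T\). The parameter names in `theta` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = dict(lam=1.0, sigma=0.5, obs_sd=0.1)   # hypothetical parameter vector

# Fine-grid Euler-Maruyama simulation of dx = -lam * x dt + sigma dW.
dt, n_fine = 1e-3, 10_000
x = np.empty(n_fine)
x[0] = 1.0
for k in range(n_fine - 1):
    x[k + 1] = (x[k]
                - theta["lam"] * x[k] * dt
                + theta["sigma"] * np.sqrt(dt) * rng.normal())

# Observe at 50 irregular times: y(t_i) = x(t_i) + noise.
obs_idx = np.sort(rng.choice(n_fine, size=50, replace=False))
ts = obs_idx * dt
ys = x[obs_idx] + rng.normal(scale=theta["obs_sd"], size=50)
print(ts.shape, ys.shape)
```

Inference then means recovering \(\theta\) (and optionally the latent path \(x\)) from `(ts, ys)` alone, via the transition and observation densities above.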

Method of adjoints

A differentiation trick which happens to be useful in differentiating the likelihood (or other functionals) of time-evolving systems using automatic differentiation; see e.g. Errico (1997).

See the method of adjoints.
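A minimal discrete-adjoint sketch of the idea, under my own toy assumptions: differentiate a scalar loss of a time-evolving system with respect to a parameter by one forward pass and one backward (adjoint) sweep, without forming any full Jacobians. The forward map \(x_{k+1} = x_k + \Delta t\,(-\theta x_k)\) and the terminal loss \(L = \tfrac12 x_K^2\) are illustrative choices.

```python
import numpy as np

def loss_and_grad(theta, x0=1.0, dt=0.01, K=100):
    # Forward pass: x_{k+1} = x_k + dt * (-theta * x_k).
    xs = [x0]
    for _ in range(K):
        xs.append(xs[-1] + dt * (-theta * xs[-1]))
    L = 0.5 * xs[-1] ** 2

    # Backward (adjoint) sweep: lam_k = dL/dx_k.
    lam = xs[-1]                        # lam_K = dL/dx_K
    g = 0.0
    for k in range(K - 1, -1, -1):
        g += lam * (-dt * xs[k])        # accumulate dL/dtheta contribution
        lam = lam * (1.0 - dt * theta)  # lam_k = lam_{k+1} * dx_{k+1}/dx_k
    return L, g

theta = 2.0
L, g = loss_and_grad(theta)

# Sanity check against a central finite difference.
eps = 1e-6
fd = (loss_and_grad(theta + eps)[0] - loss_and_grad(theta - eps)[0]) / (2 * eps)
print(abs(g - fd) < 1e-6)
```

The payoff is the same as in reverse-mode automatic differentiation: cost independent of the number of parameters, at the price of storing (or checkpointing) the forward trajectory.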

Indirect inference

Popular. See indirect inference.

Incoming

  • Corenflos et al. (2021) describe an optimal transport method
  • Campbell et al. (2021) describe variational inference that factors out the unknown parameters.
  • Gu et al. (2021) unify neural ODEs with RNNs.

learning SDEs

References

Agarwal, Anish, Muhammad Jehangir Amjad, Devavrat Shah, and Dennis Shen. 2018. β€œTime Series Analysis via Matrix Estimation.” arXiv:1802.09064 [Cs, Stat], February.
Andersson, Joel A. E., Joris Gillis, Greg Horn, James B. Rawlings, and Moritz Diehl. 2019. β€œCasADi: A Software Framework for Nonlinear Optimization and Optimal Control.” Mathematical Programming Computation 11 (1): 1–36.
Andrews, Donald W. K. 1994. β€œEmpirical Process Methods in Econometrics.” In Handbook of Econometrics, edited by Robert F. Engle and Daniel L. McFadden, 4:2247–94. Elsevier.
Antoniano-Villalobos, Isadora, and Stephen G. Walker. 2016. β€œA Nonparametric Model for Stationary Time Series.” Journal of Time Series Analysis 37 (1): 126–42.
Arridge, Simon, Peter Maass, Ozan Γ–ktem, and Carola-Bibiane SchΓΆnlieb. 2019. β€œSolving Inverse Problems Using Data-Driven Models.” Acta Numerica 28 (May): 1–174.
Ben Taieb, Souhaib, and Amir F. Atiya. 2016. β€œA Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.” IEEE transactions on neural networks and learning systems 27 (1): 62–76.
Bengio, Samy, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. β€œScheduled Sampling for Sequence Prediction with Recurrent Neural Networks.” In Advances in Neural Information Processing Systems 28, 1171–79. NIPS’15. Cambridge, MA, USA: Curran Associates, Inc.
Berry, Tyrus, Dimitrios Giannakis, and John Harlim. 2020. β€œBridging Data Science and Dynamical Systems Theory.” arXiv:2002.07928 [Physics, Stat], June.
Bosq, Denis. 1998. Nonparametric Statistics for Stochastic Processes: Estimation and Prediction. 2nd ed. Lecture Notes in Statistics 110. New York: Springer.
Bosq, Denis, and Delphine Blanke. 2007. Inference and prediction in large dimensions. Wiley series in probability and statistics. Chichester, England ; Hoboken, NJ: John Wiley/Dunod.
BretΓ³, Carles, Daihai He, Edward L. Ionides, and Aaron A. King. 2009. β€œTime Series Analysis via Mechanistic Models.” The Annals of Applied Statistics 3 (1): 319–48.
Brouwer, Edward de, Jaak Simm, Adam Arany, and Yves Moreau. 2019. β€œGRU-ODE-Bayes: Continuous Modeling of Sporadically-Observed Time Series.” In Advances in Neural Information Processing Systems. Vol. 32. Curran Associates, Inc.
Brunton, Steven L., Joshua L. Proctor, and J. Nathan Kutz. 2016. β€œDiscovering Governing Equations from Data by Sparse Identification of Nonlinear Dynamical Systems.” Proceedings of the National Academy of Sciences 113 (15): 3932–37.
BΓΌhlmann, Peter, and Hans R KΓΌnsch. 1999. β€œBlock Length Selection in the Bootstrap for Time Series.” Computational Statistics & Data Analysis 31 (3): 295–310.
Campbell, Andrew, Yuyang Shi, Tom Rainforth, and Arnaud Doucet. 2021. β€œOnline Variational Filtering and Parameter Learning.” In.
Carmi, Avishy Y. 2014. β€œCompressive System Identification.” In Compressed Sensing & Sparse Filtering, edited by Avishy Y. Carmi, Lyudmila Mihaylova, and Simon J. Godsill, 281–324. Signals and Communication Technology. Springer Berlin Heidelberg.
Cassidy, Ben, Caroline Rae, and Victor Solo. 2015. β€œBrain Activity: Connectivity, Sparsity, and Mutual Information.” IEEE Transactions on Medical Imaging 34 (4): 846–60.
Chan, Ngai Hang, Ye Lu, and Chun Yip Yau. 2016. β€œFactor Modelling for High-Dimensional Time Series: Inference and Model Selection.” Journal of Time Series Analysis, January, n/a–.
Chen, Chong, Yixuan Dou, Jie Chen, and Yaru Xue. 2022. β€œA Novel Neural Network Training Framework with Data Assimilation.” The Journal of Supercomputing, June.
Chen, Ricky T. Q., and David K Duvenaud. 2019. β€œNeural Networks with Cheap Differential Operators.” In Advances in Neural Information Processing Systems. Vol. 32. Curran Associates, Inc.
Chen, Tian Qi, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. 2018. β€œNeural Ordinary Differential Equations.” In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 6572–83. Curran Associates, Inc.
Chevillon, Guillaume. 2007. β€œDirect Multi-Step Estimation and Forecasting.” Journal of Economic Surveys 21 (4): 746–85.
Choromanski, Krzysztof, Jared Quincy Davis, Valerii Likhosherstov, Xingyou Song, Jean-Jacques Slotine, Jacob Varley, Honglak Lee, Adrian Weller, and Vikas Sindhwani. 2020. β€œAn Ode to an ODE.” In Advances in Neural Information Processing Systems. Vol. 33.
Clark, James S., and Ottar N. BjΓΈrnstad. 2004. β€œPopulation Time Series: Process Variability, Observation Errors, Missing Values, Lags, and Hidden States.” Ecology 85 (11): 3140–50.
Cook, Alex R., Wilfred Otten, Glenn Marion, Gavin J. Gibson, and Christopher A. Gilligan. 2007. β€œEstimation of Multiple Transmission Rates for Epidemics in Heterogeneous Populations.” Proceedings of the National Academy of Sciences 104 (51): 20392–97.
Corenflos, Adrien, James Thornton, George Deligiannidis, and Arnaud Doucet. 2021. β€œDifferentiable Particle Filtering via Entropy-Regularized Optimal Transport.” arXiv:2102.07850 [Cs, Stat], June.
Course, Kevin, Trefor Evans, and Prasanth Nair. 2020. β€œWeak Form Generalized Hamiltonian Learning.” In Advances in Neural Information Processing Systems. Vol. 33.
Doucet, Arnaud, Pierre E. Jacob, and Sylvain Rubenthaler. 2013. β€œDerivative-Free Estimation of the Score Vector and Observed Information Matrix with Application to State-Space Models.” arXiv:1304.5768 [Stat], April.
Durbin, J., and S. J. Koopman. 1997. β€œMonte Carlo Maximum Likelihood Estimation for Non-Gaussian State Space Models.” Biometrika 84 (3): 669–84.
β€”β€”β€”. 2012. Time Series Analysis by State Space Methods. 2nd ed. Oxford Statistical Science Series 38. Oxford: Oxford University Press.
E, Weinan, Jiequn Han, and Qianxiao Li. 2018. β€œA Mean-Field Optimal Control Formulation of Deep Learning.” arXiv:1807.01083 [Cs, Math], July.
Errico, Ronald M. 1997. β€œWhat Is an Adjoint Model?” Bulletin of the American Meteorological Society 78 (11): 2577–92.
Evensen, Geir. 2003. β€œThe Ensemble Kalman Filter: Theoretical Formulation and Practical Implementation.” Ocean Dynamics 53 (4): 343–67.
β€”β€”β€”. 2009a. Data Assimilation - The Ensemble Kalman Filter. Berlin; Heidelberg: Springer.
β€”β€”β€”. 2009b. β€œThe Ensemble Kalman Filter for Combined State and Parameter Estimation.” IEEE Control Systems 29 (3): 83–104.
Evensen, Geir, and Peter Jan van Leeuwen. 2000. β€œAn Ensemble Kalman Smoother for Nonlinear Dynamics.” Monthly Weather Review 128 (6): 1852–67.
Fan, Jianqing, and Qiwei Yao. 2003. Nonlinear Time Series: Nonparametric and Parametric Methods. Springer Series in Statistics. New York: Springer.
Fearnhead, Paul, and Hans R. KΓΌnsch. 2018. β€œParticle Filters and Data Assimilation.” Annual Review of Statistics and Its Application 5 (1): 421–49.
Finke, Axel, and Sumeetpal S. Singh. 2016. β€œApproximate Smoothing and Parameter Estimation in High-Dimensional State-Space Models.” arXiv:1606.08650 [Stat], June.
Finlay, Chris, JΓΆrn-Henrik Jacobsen, Levon Nurbekyan, and Adam M Oberman. n.d. β€œHow to Train Your Neural ODE: The World of Jacobian and Kinetic Regularization.” In ICML, 14.
Finzi, Marc, Ke Alexander Wang, and Andrew G. Wilson. 2020. β€œSimplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints.” In Advances in Neural Information Processing Systems. Vol. 33.
Flunkert, Valentin, David Salinas, and Jan Gasthaus. 2017. β€œDeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks.” arXiv:1704.04110 [Cs, Stat], April.
Fraser, Andrew M. 2008. Hidden Markov Models and Dynamical Systems. Philadelphia, PA: Society for Industrial and Applied Mathematics.
Gholami, Amir, Kurt Keutzer, and George Biros. 2019. β€œANODE: Unconditionally Accurate Memory-Efficient Gradients for Neural ODEs.” arXiv:1902.10298 [Cs], February.
Ghosh, Arnab, Harkirat Behl, Emilien Dupont, Philip Torr, and Vinay Namboodiri. 2020. β€œSTEER : Simple Temporal Regularization For Neural ODE.” In Advances in Neural Information Processing Systems. Vol. 33.
Gorad, Ajinkya, Zheng Zhao, and Simo SΓ€rkkΓ€. 2020. β€œParameter Estimation in Non-Linear State-Space Models by Automatic Differentiation of Non-Linear Kalman Filters.” In, 6.
Grathwohl, Will, Ricky T. Q. Chen, Jesse Bettencourt, Ilya Sutskever, and David Duvenaud. 2018. β€œFFJORD: Free-Form Continuous Dynamics for Scalable Reversible Generative Models.” arXiv:1810.01367 [Cs, Stat], October.
Gu, Albert, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, and Christopher RΓ©. 2021. β€œCombining Recurrent, Convolutional, and Continuous-Time Models with Linear State Space Layers.” In Advances in Neural Information Processing Systems, 34:572–85. Curran Associates, Inc.
Haber, Eldad, Felix Lucka, and Lars Ruthotto. 2018. β€œNever Look Back - A Modified EnKF Method and Its Application to the Training of Neural Networks Without Back Propagation.” arXiv:1805.08034 [Cs, Math], May.
Harvey, A., and S. J. Koopman. 2005. β€œStructural Time Series Models.” In Encyclopedia of Biostatistics. John Wiley & Sons, Ltd.
Hazan, Elad, Karan Singh, and Cyril Zhang. 2017. β€œLearning Linear Dynamical Systems via Spectral Filtering.” In NIPS.
He, Daihai, Edward L. Ionides, and Aaron A. King. 2010. β€œPlug-and-Play Inference for Disease Dynamics: Measles in Large and Small Populations as a Case Study.” Journal of The Royal Society Interface 7 (43): 271–83.
Hefny, Ahmed, Carlton Downey, and Geoffrey Gordon. 2015. β€œA New View of Predictive State Methods for Dynamical System Learning.” arXiv:1505.05310 [Cs, Stat], May.
Hirsh, Seth M., David A. Barajas-Solano, and J. Nathan Kutz. 2022. β€œSparsifying Priors for Bayesian Uncertainty Quantification in Model Discovery.” Royal Society Open Science 9 (2): 211823.
Holzschuh, Benjamin, Simona Vegetti, and Nils Thuerey. 2022. β€œScore Matching via Differentiable Physics,” 7.
Hong, X., R. J. Mitchell, S. Chen, C. J. Harris, K. Li, and G. W. Irwin. 2008. β€œModel Selection Approaches for Non-Linear System Identification: A Review.” International Journal of Systems Science 39 (10): 925–46.
Hong, Yongmiao, and Haitao Li. 2005. β€œNonparametric Specification Testing for Continuous-Time Models with Applications to Term Structure of Interest Rates.” Review of Financial Studies 18 (1): 37–84.
Houtekamer, P. L., and Fuqing Zhang. 2016. β€œReview of the Ensemble Kalman Filter for Atmospheric Data Assimilation.” Monthly Weather Review 144 (12): 4489–4532.
Ionides, E. L., C. BretΓ³, and A. A. King. 2006. β€œInference for Nonlinear Dynamical Systems.” Proceedings of the National Academy of Sciences 103 (49): 18438–43.
Ionides, Edward L., Anindya Bhadra, Yves AtchadΓ©, and Aaron King. 2011. β€œIterated Filtering.” The Annals of Statistics 39 (3): 1776–1802.
Jia, Junteng, and Austin R Benson. 2019. β€œNeural Jump Stochastic Differential Equations.” In Advances in Neural Information Processing Systems 32, edited by H. Wallach, H. Larochelle, A. Beygelzimer, F. d AlchΓ©-Buc, E. Fox, and R. Garnett, 9847–58. Curran Associates, Inc.
Jonschkowski, Rico, Divyam Rastogi, and Oliver Brock. 2018. β€œDifferentiable Particle Filters: End-to-End Learning with Algorithmic Priors.” arXiv:1805.11122 [Cs, Stat], May.
Kalli, Maria, and Jim E. Griffin. 2018. β€œBayesian Nonparametric Vector Autoregressive Models.” Journal of Econometrics 203 (2): 267–82.
Kantas, Nikolas, Arnaud Doucet, Sumeetpal S. Singh, Jan Maciejowski, and Nicolas Chopin. 2015. β€œOn Particle Methods for Parameter Estimation in State-Space Models.” Statistical Science 30 (3): 328–51.
Kantz, Holger, and Thomas Schreiber. 2004. Nonlinear Time Series Analysis. 2nd ed. Cambridge, UK ; New York: Cambridge University Press.
Kass, Robert E., Shun-Ichi Amari, Kensuke Arai, Emery N. Brown, Casey O. Diekman, Markus Diesmann, Brent Doiron, et al. 2018. β€œComputational Neuroscience: Mathematical and Statistical Perspectives.” Annual Review of Statistics and Its Application 5 (1): 183–214.
Kelly, Jacob, Jesse Bettencourt, Matthew James Johnson, and David Duvenaud. 2020. β€œLearning Differential Equations That Are Easy to Solve.” In.
Kemerait, R., and D. Childers. 1972. β€œSignal Detection and Extraction by Cepstrum Techniques.” IEEE Transactions on Information Theory 18 (6): 745–59.
Kendall, Bruce E., Stephen P. Ellner, Edward McCauley, Simon N. Wood, Cheryl J. Briggs, William W. Murdoch, and Peter Turchin. 2005. β€œPopulation Cycles in the Pine Looper Moth: Dynamical Tests of Mechanistic Hypotheses.” Ecological Monographs 75 (2): 259–76.
Kidger, Patrick, Ricky T. Q. Chen, and Terry J. Lyons. 2021. β€œβ€˜Hey, That’s Not an ODE’: Faster ODE Adjoints via Seminorms.” In Proceedings of the 38th International Conference on Machine Learning, 5443–52. PMLR.
Kidger, Patrick, James Morrill, James Foster, and Terry Lyons. 2020. β€œNeural Controlled Differential Equations for Irregular Time Series.” arXiv:2005.08926 [Cs, Stat], November.
Kitagawa, Genshiro. 1987. β€œNon-Gaussian Stateβ€”Space Modeling of Nonstationary Time Series.” Journal of the American Statistical Association 82 (400): 1032–41.
β€”β€”β€”. 1996. β€œMonte Carlo Filter and Smoother for Non-Gaussian Nonlinear State Space Models.” Journal of Computational and Graphical Statistics 5 (1): 1–25.
Kitagawa, Genshiro, and Will Gersch. 1996. Smoothness Priors Analysis of Time Series. Lecture notes in statistics 116. New York, NY: Springer New York : Imprint : Springer.
Kovachki, Nikola B., and Andrew M. Stuart. 2019. β€œEnsemble Kalman Inversion: A Derivative-Free Technique for Machine Learning Tasks.” Inverse Problems 35 (9): 095005.
Krishnamurthy, Kamesh, Tankut Can, and David J. Schwab. 2020. β€œTheory of Gating in Recurrent Neural Networks.” In arXiv:2007.14823 [Cond-Mat, Physics:nlin, q-Bio].
Lamb, Alex, Anirudh Goyal, Ying Zhang, Saizheng Zhang, Aaron Courville, and Yoshua Bengio. 2016. β€œProfessor Forcing: A New Algorithm for Training Recurrent Networks.” In Advances In Neural Information Processing Systems.
Levin, David N. 2017. β€œThe Inner Structure of Time-Dependent Signals.” arXiv:1703.08596 [Cs, Math, Stat], March.
Li, Xuechen, Ting-Kam Leonard Wong, Ricky T. Q. Chen, and David Duvenaud. 2020. β€œScalable Gradients for Stochastic Differential Equations.” In International Conference on Artificial Intelligence and Statistics, 3870–82. PMLR.
Li, Yang, and Jinqiao Duan. 2021a. β€œA Data-Driven Approach for Discovering Stochastic Dynamical Systems with Non-Gaussian Levy Noise.” Physica D: Nonlinear Phenomena 417 (March): 132830.
β€”β€”β€”. 2021b. β€œExtracting Governing Laws from Sample Path Data of Non-Gaussian Stochastic Dynamical Systems.” arXiv:2107.10127 [Math, Stat], July.
Ljung, Lennart. 2010. β€œPerspectives on System Identification.” Annual Reviews in Control 34 (1): 1–12.
Lou, Aaron, Derek Lim, Isay Katsman, Leo Huang, Qingxuan Jiang, Ser Nam Lim, and Christopher M. De Sa. 2020. β€œNeural Manifold Ordinary Differential Equations.” In Advances in Neural Information Processing Systems. Vol. 33.
Lu, Peter Y., Joan AriΓ±o, and Marin SoljačiΔ‡. 2021. β€œDiscovering Sparse Interpretable Dynamics from Partial Observations.” arXiv:2107.10879 [Physics], July.
Luo, Xiaodong, Andreas S. Stordal, Rolf J. Lorentzen, and Geir NΓ¦vdal. 2015. β€œIterative Ensemble Smoother as an Approximate Solution to a Regularized Minimum-Average-Cost Problem: Theory and Applications.” SPE Journal 20 (05): 962–82.
Malartic, Quentin, Alban Farchi, and Marc Bocquet. 2021. β€œState, Global and Local Parameter Estimation Using Local Ensemble Kalman Filters: Applications to Online Machine Learning of Chaotic Dynamics.” arXiv:2107.11253 [Nlin, Physics:physics, Stat], July.
Massaroli, Stefano, Michael Poli, Jinkyoo Park, Atsushi Yamashita, and Hajime Asama. 2020. β€œDissecting Neural ODEs.” In arXiv:2002.08071 [Cs, Stat].
Mitchell, Herschel L., and P. L. Houtekamer. 2000. β€œAn Adaptive Ensemble Kalman Filter.” Monthly Weather Review 128 (2): 416.
Morrill, James, Patrick Kidger, Cristopher Salvi, James Foster, and Terry Lyons. 2020. β€œNeural CDEs for Long Time Series via the Log-ODE Method.” In, 5.
Nerrand, O., P. Roussel-Ragot, L. Personnaz, G. Dreyfus, and S. Marcos. 1993. β€œNeural Networks and Nonlinear Adaptive Filtering: Unifying Concepts and New Algorithms.” Neural Computation 5 (2): 165–99.
Nguyen, Long, and Andy Malinsky. 2020. β€œExploration and Implementation of Neural Ordinary Differential Equations,” 34.
Pereyra, M., P. Schniter, Γ‰ Chouzenoux, J. C. Pesquet, J. Y. Tourneret, A. O. Hero, and S. McLaughlin. 2016. β€œA Survey of Stochastic Simulation and Optimization Methods in Signal Processing.” IEEE Journal of Selected Topics in Signal Processing 10 (2): 224–41.
Pham, Tung, and Victor Panaretos. 2016. β€œMethodology and Convergence Rates for Functional Time Series Regression.” arXiv:1612.07197 [Math, Stat], December.
Pillonetto, Gianluigi. 2016. β€œThe Interplay Between System Identification and Machine Learning.” arXiv:1612.09158 [Cs, Stat], December.
Plis, Sergey, David Danks, and Jianyu Yang. 2015. β€œMesochronal Structure Learning.” Uncertainty in Artificial Intelligence : Proceedings of the … Conference. Conference on Uncertainty in Artificial Intelligence 31 (July).
Poli, Michael, Stefano Massaroli, Atsushi Yamashita, Hajime Asama, and Jinkyoo Park. 2020. β€œTorchDyn: A Neural Differential Equations Library.” arXiv:2009.09346 [Cs], September.
Pugachev, V. S., and I. N. SinitοΈ sοΈ‘yn. 2001. Stochastic systems: theory and applications. River Edge, NJ: World Scientific.
Rackauckas, Christopher. 2019. β€œThe Essential Tools of Scientific Machine Learning (Scientific ML).”
Rackauckas, Christopher, Yingbo Ma, Vaibhav Dixit, Xingjian Guo, Mike Innes, Jarrett Revels, Joakim Nyberg, and Vijay Ivaturi. 2018. β€œA Comparison of Automatic Differentiation and Continuous Sensitivity Analysis for Derivatives of Differential Equation Solutions.” arXiv:1812.01892 [Cs], December.
Rackauckas, Christopher, Yingbo Ma, Julius Martensen, Collin Warner, Kirill Zubov, Rohit Supekar, Dominic Skinner, Ali Ramadhan, and Alan Edelman. 2020. β€œUniversal Differential Equations for Scientific Machine Learning.” arXiv:2001.04385 [Cs, Math, q-Bio, Stat], August.
Robinson, P. M. 1983. β€œNonparametric Estimators for Time Series.” Journal of Time Series Analysis 4 (3): 185–207.
Roeder, Geoffrey, Paul K. Grant, Andrew Phillips, Neil Dalchau, and Edward Meeds. 2019. β€œEfficient Amortised Bayesian Inference for Hierarchical and Nonlinear Dynamical Systems.” arXiv:1905.12090 [Cs, Stat], May.
Routtenberg, Tirza, and Joseph Tabrikian. 2010. β€œBlind MIMO-AR System Identification and Source Separation with Finite-Alphabet.” IEEE Transactions on Signal Processing 58 (3): 990–1000.
Runge, Jakob, Reik V. Donner, and JΓΌrgen Kurths. 2015. β€œOptimal Model-Free Prediction from Multivariate Time Series.” Physical Review E 91 (5).
Ruthotto, Lars, and Eldad Haber. 2020. β€œDeep Neural Networks Motivated by Partial Differential Equations.” Journal of Mathematical Imaging and Vision 62 (3): 352–64.
Sattar, Yahya, and Samet Oymak. 2022. β€œNon-Asymptotic and Accurate Learning of Nonlinear Dynamical Systems.” Journal of Machine Learning Research 23 (140): 1–49.
Schillings, Claudia, and Andrew M. Stuart. 2017. β€œAnalysis of the Ensemble Kalman Filter for Inverse Problems.” SIAM Journal on Numerical Analysis 55 (3): 1264–90.
Schirmer, Mona, Mazin Eltayeb, Stefan Lessmann, and Maja Rudolph. 2022. β€œModeling Irregular Time Series with Continuous Recurrent Units.” arXiv.
Schmidt, Jonathan, Nicholas KrΓ€mer, and Philipp Hennig. 2021. β€œA Probabilistic State Space Model for Joint Inference from Differential Equations and Data.” arXiv:2103.10153 [Cs, Stat], June.
Schneider, Tapio, Andrew M. Stuart, and Jin-Long Wu. 2022. β€œEnsemble Kalman Inversion for Sparse Learning of Dynamical Systems from Time-Averaged Data.” Journal of Computational Physics 470 (December): 111559.
SjΓΆberg, Jonas, Qinghua Zhang, Lennart Ljung, Albert Benveniste, Bernard Delyon, Pierre-Yves Glorennec, HΓ₯kan Hjalmarsson, and Anatoli Juditsky. 1995. β€œNonlinear Black-Box Modeling in System Identification: A Unified Overview.” Automatica, Trends in System Identification, 31 (12): 1691–1724.
StΓ€dler, Nicolas, and Sach Mukherjee. 2013. β€œPenalized Estimation in High-Dimensional Hidden Markov Models with State-Specific Graphical Models.” The Annals of Applied Statistics 7 (4): 2157–79.
Stapor, Paul, Fabian FrΓΆhlich, and Jan Hasenauer. 2018. β€œOptimization and Uncertainty Analysis of ODE Models Using 2nd Order Adjoint Sensitivity Analysis.” bioRxiv, February, 272005.
Stroud, Jonathan R., Matthias Katzfuss, and Christopher K. Wikle. 2018. β€œA Bayesian Adaptive Ensemble Kalman Filter for Sequential State and Parameter Estimation.” Monthly Weather Review 146 (1): 373–86.
Stroud, Jonathan R., Michael L. Stein, Barry M. Lesht, David J. Schwab, and Dmitry Beletsky. 2010. β€œAn Ensemble Kalman Filter and Smoother for Satellite Data Assimilation.” Journal of the American Statistical Association 105 (491): 978–90.
Takamoto, Makoto, Timothy Praditia, Raphael Leiteritz, Dan MacKinlay, Francesco Alesiani, Dirk PflΓΌger, and Mathias Niepert. 2022. β€œPDEBench: An Extensive Benchmark for Scientific Machine Learning.” In.
Tallec, Corentin, and Yann Ollivier. 2017. β€œUnbiasing Truncated Backpropagation Through Time.” arXiv.
Taniguchi, Masanobu, and Yoshihide Kakizawa. 2000. Asymptotic Theory of Statistical Inference for Time Series. Springer Series in Statistics. New York: Springer.
Tanizaki, Hisashi. 2001. β€œEstimation of Unknown Parameters in Nonlinear and Non-Gaussian State-Space Models.” Journal of Statistical Planning and Inference 96 (2): 301–23.
Tzen, Belinda, and Maxim Raginsky. 2019. β€œNeural Stochastic Differential Equations: Deep Latent Gaussian Models in the Diffusion Limit.” arXiv:1905.09883 [Cs, Stat], October.
Unser, Michael A., and Pouya Tafti. 2014. An Introduction to Sparse Stochastic Processes. New York: Cambridge University Press.
Vardasbi, Ali, Telmo Pessoa Pires, Robin M. Schmidt, and Stephan Peitz. 2023. β€œState Spaces Aren’t Enough: Machine Translation Needs Attention.” arXiv.
Wedig, W. 1984. β€œA Critical Review of Methods in Stochastic Structural Dynamics.” Nuclear Engineering and Design 79 (3): 281–87.
Wen, Ruofeng, Kari Torkkola, and Balakrishnan Narayanaswamy. 2017. β€œA Multi-Horizon Quantile Recurrent Forecaster.” arXiv:1711.11053 [Stat], November.
Werbos, Paul J. 1988. β€œGeneralization of Backpropagation with Application to a Recurrent Gas Market Model.” Neural Networks 1 (4): 339–56.
Williams, Ronald J., and David Zipser. 1989. β€œA Learning Algorithm for Continually Running Fully Recurrent Neural Networks.” Neural Computation 1 (2): 270–80.
Yang, Biao, Jonathan R. Stroud, and Gabriel Huerta. 2018. β€œSequential Monte Carlo Smoothing with Parameter Estimation.” Bayesian Analysis 13 (4): 1137–61.
Zammit-Mangion, Andrew, and Christopher K. Wikle. 2020. β€œDeep Integro-Difference Equation Models for Spatio-Temporal Forecasting.” Spatial Statistics 37 (June): 100408.
Zhang, Han, Xi Gao, Jacob Unterman, and Tom Arodz. 2020. β€œApproximation Capabilities of Neural ODEs and Invertible Residual Networks.” arXiv:1907.12998 [Cs, Stat], February.
Zhang, Jiangjiang, Guang Lin, Weixuan Li, Laosheng Wu, and Lingzao Zeng. 2018. β€œAn Iterative Local Updating Ensemble Smoother for Estimation and Uncertainty Assessment of Hydrologic Model Parameters With Multimodal Distributions.” Water Resources Research 54 (3): 1716–33.
Zhao, Yiran, and Tiangang Cui. 2023. β€œTensor-Based Methods for Sequential State and Parameter Estimation in State Space Models.” arXiv.
