Ensemble Kalman methods

Data Assimilation; Data fusion; Sloppy updates for messy models

2015-06-22 — 2025-09-07

Wherein ensemble approximations are employed to propagate low-rank state covariances via N-member anomalies, and updates are effected in the N−1 ensemble subspace using perturbed or square‑root observation transforms

Bayes
distributed
dynamical systems
generative
graphical models
linear algebra
machine learning
Monte Carlo
optimization
particle
probabilistic algorithms
probability
sciml
signal processing
state space models
statistics
stochastic processes
time series

\[\renewcommand{\var}{\operatorname{Var}} \renewcommand{\cov}{\operatorname{Cov}} \renewcommand{\dd}{\mathrm{d}} \renewcommand{\bb}[1]{\mathbb{#1}} \renewcommand{\vv}[1]{\boldsymbol{#1}} \renewcommand{\rv}[1]{\mathsf{#1}} \renewcommand{\vrv}[1]{\vv{\rv{#1}}} \renewcommand{\disteq}{\stackrel{d}{=}} \renewcommand{\gvn}{\mid} \renewcommand{\Ex}{\mathbb{E}} \renewcommand{\Pr}{\mathbb{P}} \renewcommand{\one}{\unicode{x1D7D9}}\]

A random-sampling variant/generalisation of the Kalman-Bucy filter. That description also fits particle filters, but the randomisation in ensemble methods is of a different kind, and the two can be combined. The EnKF has a few tweaks that make it more tenable in tricky situations with high-dimensional state spaces or nonlinearities in inconvenient places, and it is a popular data assimilation method for spatiotemporal models.

Figure 1: Ensemble Kalman filters make it somewhat easier to wring estimates out of data.

1 Tutorial introductions

Katzfuss, Stroud, and Wikle (2016), Roth et al. (2017) and Fearnhead and Künsch (2018) are all pretty good. Schillings and Stuart (2017) has been recommended by Haber, Lucka, and Ruthotto (2018) as the canonical ‘modern’ version. Wikle and Berliner (2007) presents a broad data assimilation context, although it is too curt to be helpful for me. Mandel (2009) is helpfully longer. The inventor of the method explains it in Evensen (2003), but I found that paper hard going, since it uses lots of oceanography terminology, which is a barrier to entry for non-oceanographers. Roth et al. (2017) is probably the best for my background. Let us copy their notation.

We start from discrete-time state-space models; the basic one is the linear system: \[ \begin{aligned} x_{k+1} &=F x_{k}+G v_{k}, \\ y_{k} &=H x_{k}+e_{k}, \end{aligned} \] with state \(x\in\mathbb{R}^n\) and measurement \(y\in\mathbb{R}^m\). The initial state \(x_{0}\), the process noise \(v_{k}\), and the measurement noise \(e_{k}\) are mutually independent and Gaussian, with \[\begin{aligned} \Ex x_{0}&=\hat{x}_{0}\\ \Ex v_{k}&=0\\ \Ex e_{k}&=0\\ \cov x_{0} &=P_{0}\\ \cov v_{k} & =Q\\ \cov e_{k}&=R. \end{aligned}\]

The Kalman filter propagates state estimates \(\hat{x}_{k \mid k}\) and covariance matrices \(P_{k \mid k}\) for this model.

Time (prediction/forecast) step \[ \hat{x}_{k+1 \mid k}=F \hat{x}_{k \mid k},\qquad P_{k+1 \mid k}=F P_{k \mid k} F^{\top}+G Q G^{\top}. \] Predicted observation and its covariance \[ \hat{y}_{k \mid k-1} =H \hat{x}_{k \mid k-1},\qquad S_{k} =H P_{k \mid k-1} H^{\top}+R . \]

Measurement (update/analysis) step \[ \begin{aligned} \hat{x}_{k \mid k} &=\hat{x}_{k \mid k-1}+K_{k}\left(y_{k}-\hat{y}_{k \mid k-1}\right), \\ P_{k \mid k} &=\left(I-K_{k} H\right) P_{k \mid k-1}\left(I-K_{k} H\right)^{\top}+K_{k} R K_{k}^{\top}, \end{aligned} \] with variance-minimising gain \[ K_{k}=P_{k \mid k-1} H^{\top} S_{k}^{-1}=M_{k} S_{k}^{-1}, \] where \(M_{k}\) is the cross-covariance between the state and predicted output, \(M_k=P_{k|k-1}H^\top\).
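To fix ideas, here is a minimal NumPy sketch of one filter cycle (function and variable names are mine, not from any particular library); it uses the Joseph-form covariance update above, which stays symmetric positive semidefinite under roundoff:

```python
import numpy as np

def kf_step(x, P, y, F, G, H, Q, R):
    """One Kalman filter cycle: time update then measurement update."""
    # time (prediction) step
    x_pred = F @ x
    P_pred = F @ P @ F.T + G @ Q @ G.T
    # predicted observation and innovation covariance
    y_pred = H @ x_pred
    S = H @ P_pred @ H.T + R
    # gain K = M S^{-1}, with M the state-observation cross-covariance
    M = P_pred @ H.T
    K = np.linalg.solve(S.T, M.T).T
    # measurement (analysis) step, Joseph form as above
    x_new = x_pred + K @ (y - y_pred)
    I_KH = np.eye(len(x_new)) - K @ H
    P_new = I_KH @ P_pred @ I_KH.T + K @ R @ K.T
    return x_new, P_new
```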

In the Ensemble Kalman filter (EnKF), we approximate these statistics with samples. That relaxes strict Gaussianity and, crucially, enables low-rank computation when ensembles are small.

Figure 2: Observation update in various extensions of Kalman filters, from Katzfuss, Stroud, and Wikle (2016).

Instead of maintaining the \(n\times n\) covariance \(P_{k \mid k}\) explicitly, we maintain an ensemble of \(N\) state realisations \[ X_{k}:=\left[x_{k}^{(i)}\right]_{i=1}^{N}\in\mathbb R^{n\times N}. \] Ensemble moments: \[ \bar{x}_{k \mid k}=\frac{1}{N} X_{k \mid k} \one,\qquad \widetilde{X}_{k \mid k}=X_{k \mid k}\left(I_{N}-\frac{1}{N} \one \one^{\top}\right),\qquad \bar{P}_{k \mid k}=\frac{1}{N-1} \widetilde{X}_{k \mid k} \widetilde{X}_{k \mid k}^{\top}. \] We attempt to match the ensemble moments to the KF moments: \[ \bar{x}_{k \mid k}\approx \hat{x}_{k \mid k},\qquad \bar{P}_{k \mid k}\approx P_{k \mid k}. \] Forecasting with process noise samples \(V_k=[v_k^{(i)}]_{i=1}^N\) (zero-mean, covariance \(Q\)): \[ X_{k+1 \mid k}=F X_{k \mid k}+G V_{k}. \]
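In code, the ensemble moments and the forecast step are one-liners apiece. A sketch, with helper names of my own invention:

```python
import numpy as np

def ensemble_moments(X):
    """Sample mean, anomalies, and covariance of an n x N ensemble."""
    N = X.shape[1]
    x_bar = X.mean(axis=1, keepdims=True)   # (1/N) X 1
    X_tilde = X - x_bar                     # X (I_N - 11^T / N)
    P_bar = X_tilde @ X_tilde.T / (N - 1)
    return x_bar, X_tilde, P_bar

def forecast(X, F, G, Q, rng):
    """Propagate every member with an independent process-noise draw."""
    N = X.shape[1]
    V = rng.multivariate_normal(np.zeros(Q.shape[0]), Q, size=N).T
    return F @ X + G @ V
```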

Next, the forecast ensemble \(X_{k \mid k-1}\) is adjusted to obtain the filtering ensemble \(X_{k \mid k}\) by updating each member: with some gain matrix \(\bar{K}_{k}\), the KF update applied to the ensemble reads \[ X_{k \mid k}=\left(I-\bar{K}_{k} H\right) X_{k \mid k-1}+\bar{K}_{k} y_{k} \one^{\top} . \] This does not yet reproduce the full Kalman covariance update — the \(\bar{K}_{k} R \bar{K}_{k}^{\top}\) term is missing; we have a choice of how to supply it.

1.1 “Stochastic” EnKF update

Using measurement-noise realisations \(E_k=[e_k^{(i)}]_{i=1}^N\) with covariance \(R\), \[ X_{k \mid k}=X_{k \mid k-1}+\bar{K}_{k}\left(y_{k} \one^{\top}-Y_{k \mid k-1}\right), \quad Y_{k \mid k-1}:=H X_{k \mid k-1}+E_{k}. \] This gives the correct mean and covariance in expectation. The gain is solved from sample (co)variances: \[ \widetilde{Y}_{k \mid k-1}=Y_{k \mid k-1}\left(I_{N}-\frac{1}{N} \one \one^{\top}\right),\qquad \bar{M}_{k}=\frac{1}{N-1}\widetilde{X}_{k \mid k-1}\,\widetilde{Y}_{k \mid k-1}^{\top},\qquad \bar{S}_{k}=\frac{1}{N-1}\widetilde{Y}_{k \mid k-1}\,\widetilde{Y}_{k \mid k-1}^{\top}, \] and \[ \bar{K}_{k}\,\bar{S}_{k}=\bar{M}_{k}. \]
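A sketch of the perturbed-observation update, assuming linear \(H\) and a small dense \(R\) (helper names are mine); each member is pulled toward its own perturbed innovation, and the gain solves \(\bar K_k \bar S_k = \bar M_k\):

```python
import numpy as np

def enkf_stochastic_update(X, y, H, R, rng):
    """Perturbed-observation EnKF analysis; X is the n x N forecast ensemble."""
    N = X.shape[1]
    E = rng.multivariate_normal(np.zeros(R.shape[0]), R, size=N).T
    Y = H @ X + E                                  # perturbed predicted observations
    X_tilde = X - X.mean(axis=1, keepdims=True)    # state anomalies
    Y_tilde = Y - Y.mean(axis=1, keepdims=True)    # observation anomalies
    M = X_tilde @ Y_tilde.T / (N - 1)              # cross-covariance
    S = Y_tilde @ Y_tilde.T / (N - 1)              # innovation covariance
    K = np.linalg.solve(S.T, M.T).T                # K = M S^{-1}
    return X + K @ (y[:, None] - Y)
```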

1.2 “Deterministic” / square-root EnKF

To reduce Monte Carlo noise, replace perturbed observations with a deterministic transform that updates both the mean and the anomalies (no \(E_k\) needed). For additive noise and linear \(H\): \[ \widetilde{Z}_{k \mid k-1}:=H\widetilde{X}_{k \mid k-1},\qquad \bar{S}_{k}=\frac{1}{N-1}\widetilde{Z}_{k \mid k-1}\widetilde{Z}_{k \mid k-1}^{\top}+R,\qquad \bar{M}_{k}=\frac{1}{N-1}\widetilde{X}_{k \mid k-1}\widetilde{Z}_{k \mid k-1}^{\top}. \] Square-root EnKF variants (EAKF/ETKF) apply a right-side transform to the anomalies, \[ \widetilde{X}_{k \mid k}=\widetilde{X}_{k \mid k-1}\,\Pi_k^{1/2}, \] chosen so that the sample covariance matches the Kalman analysis covariance (equivalently, a carefully constructed low-rank Joseph update).

2 Low-rank ensemble-space form (ETKF / Woodbury trick)

Let \(m=N-1\) be the ensemble subspace dimension. Define whitened obs anomalies and innovation \[ \widehat Y=\;R^{-1/2}\,\widetilde{Z}_{k \mid k-1}\in\mathbb R^{m_y\times m},\qquad \widehat d=\;R^{-1/2}\,(y_k-\hat y_{k|k-1})\in\mathbb R^{m_y}, \] and \[ S=\frac{\widehat Y}{\sqrt m},\qquad A=I_m+S^\top S. \] Then the mean increment and anomaly transform are \[ \Delta\bar x\;=\;\frac{\widetilde X_{k|k-1}}{\sqrt m}\,A^{-1}S^\top \widehat d,\qquad T\;=\;A^{-1/2},\quad \widetilde X_{k|k}=\frac{\widetilde X_{k|k-1}}{\sqrt m}\,T\,\sqrt m. \] Everything is solved in \(\mathbb R^{m\times m}\) — the Woodbury / ensemble-space trick — so cost scales with \(m=N-1\), not with state or obs dimension.
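A sketch of the ensemble-space analysis, assuming linear \(H\) and a precomputed \(R^{-1/2}\). For simplicity I keep all \(N\) anomaly columns and solve \(N\times N\) systems; this matches the \(m\times m\) algebra above up to a rank-one redundancy (the anomalies sum to zero):

```python
import numpy as np

def etkf_update(X, y, H, R_inv_sqrt):
    """Square-root (ETKF) analysis: every solve lives in ensemble space."""
    N = X.shape[1]
    x_bar = X.mean(axis=1, keepdims=True)
    X_tilde = X - x_bar
    S = R_inv_sqrt @ (H @ X_tilde) / np.sqrt(N - 1)   # whitened, scaled obs anomalies
    d_hat = R_inv_sqrt @ (y[:, None] - H @ x_bar)     # whitened innovation
    A = np.eye(N) + S.T @ S                           # ensemble-space matrix
    w, V = np.linalg.eigh(A)                          # A is SPD
    A_inv = (V / w) @ V.T
    A_inv_sqrt = (V / np.sqrt(w)) @ V.T               # symmetric inverse square root
    dx = (X_tilde / np.sqrt(N - 1)) @ (A_inv @ (S.T @ d_hat))  # mean increment
    return x_bar + dx + X_tilde @ A_inv_sqrt          # mean + transformed anomalies
```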

3 Localization

Tapering the covariance by spatial distance reduces spurious long-range correlations (Ott et al. 2004). A naive Schur product on full covariances, \(\tilde P = \rho \odot P\), destroys the low-rank factorisation and the efficiency above. Two ensemble-space–safe ways to localise are:

3.1 Square-root Schur localisation (global, keeps low rank)

Build SPSD tapers \(C_y\) (obs–obs) and optionally \(C_x\) (state–state). Use (matrix) square roots on anomalies: \[ \widetilde Y'\leftarrow C_y^{1/2}\,\widetilde Y',\qquad \widetilde X'\leftarrow C_x^{1/2}\,\widetilde X'. \] Whiten and proceed with the same ETKF algebra, \[ \widehat{\widetilde Y}=R^{-1/2}\,\widetilde Y',\quad \widetilde S=\widehat{\widetilde Y}/\sqrt m,\quad \widetilde A=I_m+\widetilde S^\top\widetilde S, \] \[ \Delta\bar x=\frac{\widetilde X'}{\sqrt m}\,\widetilde A^{-1}\,\widetilde S^\top\,(R^{-1/2}d),\qquad \widetilde X_a'=\frac{\widetilde X'}{\sqrt m}\,\widetilde A^{-1/2}\,\sqrt m. \] This realises the effect of a Schur taper at the covariance level while preserving the ensemble-space structure (all inversions remain \(m\times m\)). In practice, \(C_y^{1/2}\) is applied with sparse matvecs using compactly supported kernels (e.g. Gaspari–Cohn).
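A dense toy version of the taper square root on a 1-D grid (in practice \(C_y\) is sparse and \(C_y^{1/2}\) is applied matrix-free, as noted above; the eigendecomposition here is purely illustrative). The Gaspari–Cohn polynomial is standard; the helper names are mine:

```python
import numpy as np

def gaspari_cohn(dist, c):
    """Gaspari-Cohn 5th-order taper: SPSD, compactly supported (zero beyond 2c)."""
    r = np.abs(dist) / c
    f = np.zeros_like(r, dtype=float)
    a = r <= 1.0
    f[a] = 1 - 5/3*r[a]**2 + 5/8*r[a]**3 + 1/2*r[a]**4 - 1/4*r[a]**5
    b = (r > 1.0) & (r < 2.0)
    f[b] = (4 - 5*r[b] + 5/3*r[b]**2 + 5/8*r[b]**3
            - 1/2*r[b]**4 + 1/12*r[b]**5 - 2/(3*r[b]))
    return f

def taper_sqrt_apply(coords, c, Y_tilde):
    """Apply C^{1/2} to anomaly columns, with C built from pairwise distances."""
    D = np.abs(coords[:, None] - coords[None, :])
    C = gaspari_cohn(D, c)
    w, V = np.linalg.eigh(C)
    C_half = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T
    return C_half @ Y_tilde
```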

3.2 Local Ensemble Transform Kalman Filter (LETKF)

The LETKF (Hunt, Kostelich, and Szunyogh 2007) reduces computational burden and improves robustness by restricting each update to a small spatial window (or tile) around each state location, while keeping the low-rank ensemble-space algebra intact.

For each state location (or tile) \(s\), select nearby observations \(\mathcal J(s)\) and form local anomalies/innovation: \[ \widehat Y_s=R_s^{-1/2}\,Y'_s,\quad \widehat d_s=R_s^{-1/2}\,d_s,\quad S_s=\widehat Y_s/\sqrt m,\quad A_s=I_m+S_s^\top S_s. \] Then \[ \Delta\bar x_s=\frac{X'_s}{\sqrt m}\,A_s^{-1}S_s^\top \widehat d_s,\qquad X'_{a,s}=\frac{X'_s}{\sqrt m}\,A_s^{-1/2}\,\sqrt m. \] Apply these only to the local state entries (blend overlaps smoothly). Locality is enforced by construction; all inversions remain \(m\times m\).
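A toy LETKF loop under simplifying assumptions (diagonal \(R\), precomputed index sets, no smooth blending of overlaps); each tile runs the same ensemble-space algebra independently:

```python
import numpy as np

def letkf(X, y, H, r_inv_sqrt, obs_sel, state_sel):
    """Toy LETKF: one independent ETKF analysis per tile.
    r_inv_sqrt: diagonal of R^{-1/2}; obs_sel/state_sel: index arrays per tile."""
    N = X.shape[1]
    x_bar = X.mean(axis=1, keepdims=True)
    X_tilde = X - x_bar
    Y_hat = (r_inv_sqrt[:, None] * (H @ X_tilde)) / np.sqrt(N - 1)
    d_hat = r_inv_sqrt * (y - (H @ x_bar).ravel())
    Xa = X.copy()
    for J, I in zip(obs_sel, state_sel):
        S = Y_hat[J]                          # local whitened obs anomalies
        A = np.eye(N) + S.T @ S               # ensemble-space, N x N
        w, V = np.linalg.eigh(A)
        dx = (X_tilde[I] / np.sqrt(N - 1)) @ ((V / w) @ V.T @ (S.T @ d_hat[J]))
        Xa[I] = x_bar[I] + dx[:, None] + X_tilde[I] @ ((V / np.sqrt(w)) @ V.T)
    return Xa
```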

3.3 As an empirical Matheron update

The EnKF analysis update is exactly an empirical Matheron update (MacKinlay 2025).

4 As Approximate Bayesian Computation

Nott, Marshall, and Ngoc (2012) use Beaumont, Zhang, and Balding (2002), Blum and François (2010) and Lei and Bickel (2009) to interpret the EnKF as an Approximate Bayesian Computation (ABC) algorithm.

5 Convergence and consistency

Seems to be complicated (P. Del Moral, Kurtzmann, and Tugaut 2017; Kelly, Law, and Stuart 2014; Kwiatkowski and Mandel 2015; Le Gland, Monbet, and Tran 2009; Mandel, Cobb, and Beezley 2011).

6 Going nonlinear

TBD

The EnKF does not necessarily converge to an extended Kalman filter in the limit of infinite ensemble size (R. Furrer and Bengtsson 2007).

7 Monte Carlo moves in the ensemble

The ensemble is rank deficient. Question: When can we sample other states from the ensemble to improve the rank by stationary posterior moves?

8 Use in smoothing

Katzfuss, Stroud, and Wikle (2016) identify two major approaches to smoothing: reverse methods (Stroud et al. 2010), and the EnKS (Evensen and van Leeuwen 2000), which augments the state with lagged copies rather than running a reverse pass.

There seem to be many tweaks on this idea (N. K. Chada, Chen, and Sanz-Alonso 2021; Luo et al. 2015; White 2018; Zhang et al. 2018).

9 Use in system identification

Can we use ensemble methods for online parameter estimation? Apparently there are some tricks to enable this (Evensen 2009b; Malartic, Farchi, and Bocquet 2021; Moradkhani et al. 2005; Fearnhead and Künsch 2018; Bocquet, Farchi, and Malartic 2020). The canonical method, in my own (highly biased) opinion, is the Gaussian Ensemble Belief Propagation trick.
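The simplest such trick is state augmentation: stack parameter draws beneath the state so the usual analysis updates both via their sample cross-covariances, with the observation operator seeing only the state block. A sketch reusing the stochastic update from above (Evensen 2009b describes the idea; the helper names are mine):

```python
import numpy as np

def augmented_H(H, p):
    """Observation operator on [state; params]: observes the state block only."""
    return np.hstack([H, np.zeros((H.shape[0], p))])

def joint_analysis(X, Theta, y, H, R, rng):
    """Update state ensemble X (n x N) and parameter ensemble Theta (p x N) jointly."""
    n, p = X.shape[0], Theta.shape[0]
    Z = np.vstack([X, Theta])                              # augmented ensemble
    Z = enkf_stochastic_update(Z, y, augmented_H(H, p), R, rng)
    return Z[:n], Z[n:]   # parameters move only through cross-covariance with state
```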

10 Theoretical basis of EnKF for probabilists

Various works quantify this filter in terms of its convergence to interesting densities (Bishop and Del Moral 2023a; P. Del Moral, Kurtzmann, and Tugaut 2017; Garbuno-Inigo et al. 2020; Kelly, Law, and Stuart 2014; Le Gland, Monbet, and Tran 2009; Taghvaei and Mehta 2021).

11 Lanczos trick in precision estimates

Pleiss et al. (2018), Ubaru, Chen, and Saad (2017).

12 Relation to particle filters

Intimate. See particle filters.

13 Schillings’ filter

Claudia Schillings’ filter (Schillings and Stuart 2017) is a version that looks somehow more general than the original while remaining simple. I would like to work out what is going on there.

Haber, Lucka, and Ruthotto (2018) use it to train neural nets (!) and show a rather beautiful connection to stochastic gradient descent in section 3.2.

See Neural nets by ensemble Kalman filtering.

14 Handy low-rank tricks

See low-rank tricks.

15 Tooling

Every grad student and every climate modeling lab makes their own implementation.

I use the simple research library, nansencenter/DAPPER: Data Assimilation with Python: a Package for Experimental Research.

They also provide a helpful overview of the field. The following two tables are quoted verbatim from their work.

A table of industrial Ensemble Kalman data assimilation frameworks

| Name | Developers | Purpose (approximately) |
|------|------------|-------------------------|
| DART | NCAR | General |
| PDAF | AWI | General |
| JEDI | JCSDA (NOAA, NASA, ++) | General |
| OpenDA | TU Delft | General |
| EMPIRE | Reading (Met) | General |
| ERT | Statoil | History matching (Petroleum DA) |
| PIPT | CIPR | History matching (Petroleum DA) |
| MIKE | DHI | Oceanographic |
| OAK | Liège | Oceanographic |
| Siroco | OMP | Oceanographic |
| Verdandi | INRIA | Biophysical DA |
| PyOSSE | Edinburgh, Reading | Earth-observation DA |

A list of projects that research how to do data assimilation:

| Name | Developers | Notes |
|------|------------|-------|
| DAPPER | Raanes, Chen, Grudzien | Python |
| SANGOMA | Conglomerate* | Fortran, Matlab |
| hIPPYlib | Villa, Petra, Ghattas | Python, adjoint-based PDE methods |
| FilterPy | R. Labbe | Python. Engineering oriented. |
| DASoftware | Yue Li, Stanford | Matlab. Large inverse probs. |
| Pomp | U of Michigan | R |
| EnKF-Matlab | Sakov | Matlab |
| EnKF-C | Sakov | C. Light-weight, off-line DA |
| pyda | Hickman | Python |
| PyDA | Shady-Ahmed | Python |
| DasPy | Xujun Han | Python |
| DataAssim.jl | Alexander-Barth | Julia |
| DataAssimilationBenchmarks.jl | Grudzien | Julia, Python |
| EnsembleKalmanProcesses.jl | Clim. Modl. Alliance | Julia, EKI (optim) |
| Datum | Raanes | Matlab |
| IEnKS code | Bocquet | Python |

*: AWI/Liege/CNRS/NERSC/Reading/Delft

16 Incoming

17 References

Alsup, Venturi, and Peherstorfer. 2022. Multilevel Stein Variational Gradient Descent with Applications to Bayesian Inverse Problems.” In Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference.
Alzraiee, White, Knowling, et al. 2022. A Scalable Model-Independent Iterative Data Assimilation Tool for Sequential and Batch Estimation of High Dimensional Model Parameters and States.” Environmental Modelling & Software.
Ambrogioni, Guclu, and van Gerven. 2019. Wasserstein Variational Gradient Descent: From Semi-Discrete Optimal Transport to Ensemble Variational Inference.”
Ameli, and Shadden. 2023. A Singular Woodbury and Pseudo-Determinant Matrix Identities and Application to Gaussian Process Regression.” Applied Mathematics and Computation.
Anderson, Jeffrey L. 2007. Exploring the Need for Localization in Ensemble Data Assimilation Using a Hierarchical Ensemble Filter.” Physica D: Nonlinear Phenomena, Data Assimilation.
———. 2009. Ensemble Kalman Filters for Large Geophysical Applications.” IEEE Control Systems Magazine.
Anderson, Jeffrey, Hoar, Raeder, et al. 2009. The Data Assimilation Research Testbed: A Community Facility.” Bulletin of the American Meteorological Society.
Bao, Chipilski, Liang, et al. 2024. Nonlinear Ensemble Filtering with Diffusion Models: Application to the Surface Quasi-Geostrophic Dynamics.”
Bao, Zhang, and Zhang. 2024. An Ensemble Score Filter for Tracking High-Dimensional Nonlinear Dynamical Systems.”
Beaumont, Zhang, and Balding. 2002. Approximate Bayesian Computation in Population Genetics.” Genetics.
Bickel, and Levina. 2008. Regularized Estimation of Large Covariance Matrices.” The Annals of Statistics.
Bishop, and Del Moral. 2019. On the Stability of Matrix-Valued Riccati Diffusions.” Electronic Journal of Probability.
———. 2023a. On the Mathematical Theory of Ensemble (Linear-Gaussian) Kalman-Bucy Filtering.” Mathematics of Control, Signals, and Systems.
Bishop, Del Moral, and Niclas. 2020. A Perturbation Analysis of Stochastic Matrix Riccati Diffusions.” Annales de l’Institut Henri Poincaré, Probabilités Et Statistiques.
Bishop, Del Moral, and Pathiraja. 2017. Perturbations and Projections of Kalman-Bucy Semigroups Motivated by Methods in Data Assimilation.” arXiv:1701.05978 [Math].
Blum, and François. 2010. Non-Linear Regression Models for Approximate Bayesian Computation.” Statistics and Computing.
Bocquet, Farchi, and Malartic. 2020. Online Learning of Both State and Dynamics Using Ensemble Kalman Filters.” Foundations of Data Science.
Bocquet, Pires, and Wu. 2010. Beyond Gaussian Statistical Modeling in Geophysical Data Assimilation.” Monthly Weather Review.
Borovitskiy, Terenin, Mostowsky, et al. 2023. Matérn Gaussian Processes on Riemannian Manifolds.” In Advances in Neural Information Processing Systems.
Botha, Adams, Tran, et al. 2022. Component-Wise Iterative Ensemble Kalman Inversion for Static Bayesian Models with Unknown Measurement Error Covariance.”
Brajard, Carrassi, Bocquet, et al. 2020. Combining Data Assimilation and Machine Learning to Emulate a Dynamical Model from Sparse and Noisy Observations: A Case Study with the Lorenz 96 Model.” Journal of Computational Science.
Chada, Neil K., Chen, and Sanz-Alonso. 2021. Iterative Ensemble Kalman Methods: A Unified Perspective with Some New Variants.” Foundations of Data Science.
Chada, Neil, and Tong. 2022. Convergence Acceleration of Ensemble Kalman Inversion in Nonlinear Settings.” Mathematics of Computation.
Chen, Chong, Dou, Chen, et al. 2022. A Novel Neural Network Training Framework with Data Assimilation.” The Journal of Supercomputing.
Cheng, Quilodrán-Casas, Ouala, et al. 2023. Machine Learning With Data Assimilation and Uncertainty Quantification for Dynamical Systems: A Review.” IEEE/CAA Journal of Automatica Sinica.
Chen, Yan, and Oliver. 2012. Ensemble Randomized Maximum Likelihood Method as an Iterative Ensemble Smoother.” Mathematical Geosciences.
———. 2013. Levenberg–Marquardt Forms of the Iterative Ensemble Smoother for Efficient History Matching and Uncertainty Quantification.” Computational Geosciences.
Chen, Yuming, Sanz-Alonso, and Willett. 2022. Autodifferentiable Ensemble Kalman Filters.” SIAM Journal on Mathematics of Data Science.
———. 2023. Reduced-Order Autodifferentiable Ensemble Kalman Filters.” Inverse Problems.
Chilès, and Desassis. 2018. Fifty Years of Kriging.” In Handbook of Mathematical Geosciences.
Chilès, and Lantuéjoul. 2005. Prediction by Conditional Simulation: Models and Algorithms.” In Space, Structure and Randomness: Contributions in Honor of Georges Matheron in the Field of Geostatistics, Random Sets and Mathematical Morphology. Lecture Notes in Statistics.
Del Moral, P., Kurtzmann, and Tugaut. 2017. On the Stability and the Uniform Propagation of Chaos of a Class of Extended Ensemble Kalman-Bucy Filters.” SIAM Journal on Control and Optimization.
Del Moral, Pierre, and Niclas. 2018. A Taylor Expansion of the Square Root Matrix Functional.”
Dolcetti, and Pertici. 2020. Real Square Roots of Matrices: Differential Properties in Semi-Simple, Symmetric and Orthogonal Cases.”
Doucet. 2010. A Note on Efficient Conditional Simulation of Gaussian Distributions.” Technical Report.
Dowling, Zhao, and Park. 2024. eXponential FAmily Dynamical Systems (XFADS): Large-Scale Nonlinear Gaussian State-Space Modeling.” In.
Dubrule. 2018. Kriging, Splines, Conditional Simulation, Bayesian Inversion and Ensemble Kalman Filtering.” In Handbook of Mathematical Geosciences: Fifty Years of IAMG.
Duffin, Cripps, Stemler, et al. 2021. Statistical Finite Elements for Misspecified Models.” Proceedings of the National Academy of Sciences.
Dunbar, Duncan, Stuart, et al. 2022. Ensemble Inference Methods for Models With Noisy and Expensive Likelihoods.” SIAM Journal on Applied Dynamical Systems.
Evensen. 1994. Sequential Data Assimilation with a Nonlinear Quasi-Geostrophic Model Using Monte Carlo Methods to Forecast Error Statistics.” Journal of Geophysical Research: Oceans.
———. 2003. The Ensemble Kalman Filter: Theoretical Formulation and Practical Implementation.” Ocean Dynamics.
———. 2004. Sampling Strategies and Square Root Analysis Schemes for the EnKF.” Ocean Dynamics.
———. 2009a. Data Assimilation - The Ensemble Kalman Filter.
———. 2009b. The Ensemble Kalman Filter for Combined State and Parameter Estimation.” IEEE Control Systems.
Evensen, and van Leeuwen. 2000. An Ensemble Kalman Smoother for Nonlinear Dynamics.” Monthly Weather Review.
Fearnhead, and Künsch. 2018. Particle Filters and Data Assimilation.” Annual Review of Statistics and Its Application.
Finn, Geppert, and Ament. 2021. Ensemble-Based Data Assimilation of Atmospheric Boundary Layerobservations Improves the Soil Moisture Analysis.” Preprint.
Furrer, R., and Bengtsson. 2007. Estimation of high-dimensional prior and posterior covariance matrices in Kalman filter variants.” Journal of Multivariate Analysis.
Furrer, Reinhard, Genton, and Nychka. 2006. Covariance Tapering for Interpolation of Large Spatial Datasets.” Journal of Computational and Graphical Statistics.
Galy-Fajou, Perrone, and Opper. 2021. Flexible and Efficient Inference with Particles for the Variational Gaussian Approximation.” Entropy.
Garbuno-Inigo, Hoffmann, Li, et al. 2020. Interacting Langevin Diffusions: Gradient Structure and Ensemble Kalman Sampler.” SIAM Journal on Applied Dynamical Systems.
Grooms, and Robinson. 2021. A Hybrid Particle-Ensemble Kalman Filter for Problems with Medium Nonlinearity.” PLOS ONE.
Grumitt, Karamanis, and Seljak. 2023. Flow Annealed Kalman Inversion for Gradient-Free Inference in Bayesian Inverse Problems.”
Guth, Schillings, and Weissmann. 2020. Ensemble Kalman Filter for Neural Network Based One-Shot Inversion.”
Haber, Lucka, and Ruthotto. 2018. Never Look Back - A Modified EnKF Method and Its Application to the Training of Neural Networks Without Back Propagation.” arXiv:1805.08034 [Cs, Math].
Heemink, Verlaan, and Segers. 2001. Variance Reduced Ensemble Kalman Filtering.” Monthly Weather Review.
Hensman, Zwiessele, and Lawrence. 2014. Tilted Variational Bayes.” In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics.
Hou, Lawrence, and Hero. 2016. Penalized Ensemble Kalman Filters for High Dimensional Non-Linear Systems.” arXiv:1610.00195 [Physics, Stat].
Houtekamer, and Mitchell. 2001. A Sequential Ensemble Kalman Filter for Atmospheric Data Assimilation.” Monthly Weather Review.
Houtekamer, and Zhang. 2016. Review of the Ensemble Kalman Filter for Atmospheric Data Assimilation.” Monthly Weather Review.
Huang, Schneider, and Stuart. 2022. Iterated Kalman Methodology for Inverse Problems.” Journal of Computational Physics.
Hunt, Kostelich, and Szunyogh. 2007. Efficient Data Assimilation for Spatiotemporal Chaos: A Local Ensemble Transform Kalman Filter.” Physica D: Nonlinear Phenomena, Data Assimilation.
Iglesias, Law, and Stuart. 2013. Ensemble Kalman Methods for Inverse Problems.” Inverse Problems.
Julier, and Uhlmann. 1997. New Extension of the Kalman Filter to Nonlinear Systems.” In Signal Processing, Sensor Fusion, and Target Recognition VI.
Kantas, Doucet, Singh, et al. 2015. On Particle Methods for Parameter Estimation in State-Space Models.” Statistical Science.
Katzfuss, Stroud, and Wikle. 2016. Understanding the Ensemble Kalman Filter.” The American Statistician.
Keller, and Potthast. 2024. AI-Based Data Assimilation: Learning the Functional of Analysis Estimation.”
Kelly, Law, and Stuart. 2014. Well-Posedness and Accuracy of the Ensemble Kalman Filter in Discrete and Continuous Time.” Nonlinearity.
Kovachki, and Stuart. 2019. Ensemble Kalman Inversion: A Derivative-Free Technique for Machine Learning Tasks.” Inverse Problems.
Kuzin, Yang, Isupova, et al. 2018. Ensemble Kalman Filtering for Online Gaussian Process Regression and Learning.” In 2018 21st International Conference on Information Fusion (FUSION).
Kwiatkowski, and Mandel. 2015. Convergence of the Square Root Ensemble Kalman Filter in the Large Ensemble Limit.” SIAM/ASA Journal on Uncertainty Quantification.
Labahn, Wu, Harris, et al. 2020. Ensemble Kalman Filter for Assimilating Experimental Data into Large-Eddy Simulations of Turbulent Flows.” Flow, Turbulence and Combustion.
Lakshmivarahan, and Stensrud. 2009. Ensemble Kalman Filter.” IEEE Control Systems Magazine.
Lange, and Stannat. 2021. Mean Field Limit of Ensemble Square Root Filters – Discrete and Continuous Time.” Foundations of Data Science.
Law, Tembine, and Tempone. 2016. Deterministic Mean-Field Ensemble Kalman Filtering.” SIAM Journal on Scientific Computing.
Le Gland, Monbet, and Tran. 2009. Large Sample Asymptotics for the Ensemble Kalman Filter.” Report.
Le Gland, Monbet, and Tran. 2011. Large Sample Asymptotics for the Ensemble Kalman Filter.” In The Oxford Handbook of Nonlinear Filtering,.
Lei, and Bickel. 2009. “Ensemble Filtering for High Dimensional Nonlinear State Space Models.” University of California, Berkeley, Rep.
Lei, Bickel, and Snyder. 2009. Comparison of Ensemble Kalman Filters Under Non-Gaussianity.” Monthly Weather Review.
Lin, Li, Yin, et al. 2025. Efficient Transformed Gaussian Process State-Space Models for Non-Stationary High-Dimensional Dynamical Systems.”
Lin, Sun, Yin, et al. 2024. Ensemble Kalman Filtering Meets Gaussian Process SSM for Non-Mean-Field and Online Inference.”
Luo, Stordal, Lorentzen, et al. 2015. Iterative Ensemble Smoother as an Approximate Solution to a Regularized Minimum-Average-Cost Problem: Theory and Applications.” SPE Journal.
MacKinlay. 2025. The Ensemble Kalman Update Is an Empirical Matheron Update.” In Lecture Notes in Computer Science.
MacKinlay, Tsuchida, Pagendam, et al. 2025. Gaussian Ensemble Belief Propagation for Efficient Inference in High-Dimensional Systems.” In Proceedings of the International Conference on Learning Representations (ICLR).
Mahesh, Collins, Bonev, et al. 2024. Huge Ensembles Part I: Design of Ensemble Weather Forecasts Using Spherical Fourier Neural Operators.”
Malartic, Farchi, and Bocquet. 2021. State, Global and Local Parameter Estimation Using Local Ensemble Kalman Filters: Applications to Online Machine Learning of Chaotic Dynamics.” arXiv:2107.11253 [Nlin, Physics:physics, Stat].
Mandel. 2009. A Brief Tutorial on the Ensemble Kalman Filter.”
Mandel, Cobb, and Beezley. 2011. On the convergence of the ensemble Kalman filter.” Applications of Mathematics.
Masegosa. 2020. Learning Under Model Misspecification: Applications to Variational and Ensemble Methods.” In Proceedings of the 34th International Conference on Neural Information Processing Systems. NIPS’20.
Mitchell, and Houtekamer. 2000. An Adaptive Ensemble Kalman Filter.” Monthly Weather Review.
Moradkhani, Sorooshian, Gupta, et al. 2005. Dual State–Parameter Estimation of Hydrological Models Using Ensemble Kalman Filter.” Advances in Water Resources.
Nott, Marshall, and Ngoc. 2012. The Ensemble Kalman Filter Is an ABC Algorithm.” Statistics and Computing.
Nychka, and Anderson. 2010. “Data Assimilation.” In Handbook of Spatial Statistics.
O’Kane, Sandery, Kitsios, Sakov, Chamberlain, Collier, et al. 2021. CAFE60v1: A 60-Year Large Ensemble Climate Reanalysis. Part I: System Design, Model Configuration and Data Assimilation.” Journal of Climate.
O’Kane, Sandery, Kitsios, Sakov, Chamberlain, Squire, et al. 2021. CAFE60v1: A 60-Year Large Ensemble Climate Reanalysis. Part II: Evaluation.” Journal of Climate.
Oliver. 2022. Hybrid Iterative Ensemble Smoother for History Matching of Hierarchical Models.” Mathematical Geosciences.
Ott, Hunt, Szunyogh, et al. 2004. A Local Ensemble Kalman Filter for Atmospheric Data Assimilation.” Tellus A: Dynamic Meteorology and Oceanography.
Petersen, and Pedersen. 2012. The Matrix Cookbook.”
Pleiss, Gardner, Weinberger, et al. 2018. Constant-Time Predictive Distributions for Gaussian Processes.” In.
Popov. 2022. Combining Data-Driven and Theory-Guided Models in Ensemble Data Assimilation.” ETD.
Raanes, P. 2016. Introduction to Data Assimilation and the Ensemble Kalman Filter.” In.
Raanes, Patrick N., Chen, Grudzien, et al. 2024. DAPPER: Data Assimilation with Python: A Package for Experimental Research.”
Raanes, Patrick Nima, Stordal, and Evensen. 2019. Revising the stochastic iterative ensemble smoother.” Nonlinear Processes in Geophysics.
Reich, and Weissmann. 2019. Fokker-Planck Particle Systems for Bayesian Inference: Computational Approaches.”
Roth, Hendeby, Fritsche, et al. 2017. The Ensemble Kalman Filter: A Signal Processing Perspective.” EURASIP Journal on Advances in Signal Processing.
Routray, Osuri, Pattanayak, et al. 2016. Introduction to Data Assimilation Techniques and Ensemble Kalman Filter.” In Advanced Numerical Modeling and Data Assimilation Techniques for Tropical Cyclone Prediction.
Sainsbury-Dale, Zammit-Mangion, and Huser. 2022. Fast Optimal Estimation with Intractable Models Using Permutation-Invariant Neural Networks.”
Sandery, O’Kane, Kitsios, et al. 2020. Climate Model State Estimation Using Variants of EnKF Coupled Data Assimilation.” Monthly Weather Review.
Schillings, and Stuart. 2017. Analysis of the Ensemble Kalman Filter for Inverse Problems.” SIAM Journal on Numerical Analysis.
Schneider, Stuart, and Wu. 2022. Ensemble Kalman Inversion for Sparse Learning of Dynamical Systems from Time-Averaged Data.” Journal of Computational Physics.
Shumway, and Stoffer. 2011. Time Series Analysis and Its Applications. Springer Texts in Statistics.
Song, Sebe, and Wang. 2022. Fast Differentiable Matrix Square Root.” In.
Spantini, Baptista, and Marzouk. 2022. Coupling Techniques for Nonlinear Ensemble Filtering.” SIAM Review.
Stordal, Moraes, Raanes, et al. 2021. P-Kernel Stein Variational Gradient Descent for Data Assimilation and History Matching.” Mathematical Geosciences.
Stroud, Katzfuss, and Wikle. 2018. A Bayesian Adaptive Ensemble Kalman Filter for Sequential State and Parameter Estimation.” Monthly Weather Review.
Stroud, Stein, Lesht, et al. 2010. An Ensemble Kalman Filter and Smoother for Satellite Data Assimilation.” Journal of the American Statistical Association.
Taghvaei, and Mehta. 2021. An Optimal Transport Formulation of the Ensemble Kalman Filter.” IEEE Transactions on Automatic Control.
Tamang, Ebtehaj, van Leeuwen, et al. 2021. Ensemble Riemannian Data Assimilation over the Wasserstein Space.” Nonlinear Processes in Geophysics.
Tippett, Anderson, Bishop, et al. 2003. Ensemble Square Root Filters.” Monthly Weather Review.
Ubaru, Chen, and Saad. 2017. Fast Estimation of \(tr(f(A))\) via Stochastic Lanczos Quadrature.” SIAM Journal on Matrix Analysis and Applications.
Verlaan, and Heemink. 1997. Tidal Flow Forecasting Using Reduced Rank Square Root Filters.” Stochastic Hydrology and Hydraulics.
Wen, and Li. 2022. Affine-Mapping Based Variational Ensemble Kalman Filter.” Statistics and Computing.
White. 2018. A Model-Independent Iterative Ensemble Smoother for Efficient History-Matching and Uncertainty Quantification in Very High Dimensions.” Environmental Modelling & Software.
Wikle, and Berliner. 2007. A Bayesian Tutorial for Data Assimilation.” Physica D: Nonlinear Phenomena, Data Assimilation.
Wikle, and Hooten. 2010. A General Science-Based Framework for Dynamical Spatio-Temporal Models.” TEST.
Wilson, Borovitskiy, Terenin, et al. 2020. Efficiently Sampling Functions from Gaussian Process Posteriors.” In Proceedings of the 37th International Conference on Machine Learning.
Wilson, Borovitskiy, Terenin, et al. 2021. Pathwise Conditioning of Gaussian Processes.” Journal of Machine Learning Research.
Yang, Stroud, and Huerta. 2018. Sequential Monte Carlo Smoothing with Parameter Estimation.” Bayesian Analysis.
Yegenoglu, Krajsek, Pier, et al. 2020. Ensemble Kalman Filter Optimizing Deep Neural Networks: An Alternative Approach to Non-Performing Gradient Descent.” In Machine Learning, Optimization, and Data Science.
Zhang, Lin, Li, et al. 2018. An Iterative Local Updating Ensemble Smoother for Estimation and Uncertainty Assessment of Hydrologic Model Parameters With Multimodal Distributions.” Water Resources Research.