# Continuous and equilibrium probabilistic graphical models

Placeholder for my notes on probabilistic graphical models over a continuum, i.e. with possibly uncountably many nodes in the graph; put another way, where the random field has an uncountable index set (with some kind of structure, a metric space, say). There is much formalising to be done, which I do not propose to attempt right now; in the meantime, here are some notes.

Normally when we discuss graphical models it is in terms of a finitely indexed set of random variables $$\{X_i;i=1,\ldots,n\}$$. If that index $$i$$ ranges instead over a continuum, what does the graphical model formalism look like?

Here’s a concrete example. Consider a Gaussian process whose covariance kernel $$K$$ is continuous and decays smoothly. Let it be over index space $$\mathcal{T}:=\mathbb{R}^n$$ for the sake of argument. It implicitly defines an undirected graphical model where, for any given observation index $$t_0\in\mathcal{T}$$, the value $$x_0$$ is influenced by the values of the field at $$\operatorname{supp}\{K(\cdot, t_0)\}$$ (or really by a continuum of influence strengths, depending on the magnitude of the kernel). This kind of setup is very important in spatiotemporal modelling.

Does the standard finite-dimensional distribution argument get us anywhere in this setting if we can introduce some conditional independence?

I suspect that it is sufficiently general to cover such cases, but TBH it has been long enough since I read it that I can’t remember.

## Handling continuous index spaces by pretending a continuous field is discrete

is probably an example of what I mean; they construct a continuous-index directed graphical model for point-process fields as a limiting case of a discrete field, which seems like the obvious method of attack.

tackle this by considering SDE influence via limits of discretizations of SDEs, which is, now that I think of it, an intuitive way to approach this problem.
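The discretization idea can be sketched concretely (my own toy construction, under an Euler–Maruyama assumption): discretizing a scalar SDE $$\mathrm{d}X = -\theta X\,\mathrm{d}t + \sigma\,\mathrm{d}W$$ turns it into a directed chain of structural equations, i.e. a linear-Gaussian DAG in which each node has a single parent, and the continuous-index model is the limit as the step size shrinks.

```python
import numpy as np

# Toy sketch: Euler-Maruyama discretization of dX = -theta*X dt + sigma dW
# yields the structural equations
#   X_{k+1} = (1 - theta*dt) * X_k + sigma*sqrt(dt)*eps_k,  eps_k ~ N(0, 1),
# a directed chain (each X_{k+1} has the single parent X_k).
rng = np.random.default_rng(0)
theta, sigma, dt, n = 1.0, 0.5, 0.01, 1000
x = np.empty(n)
x[0] = 0.0
for k in range(n - 1):
    eps = rng.standard_normal()                     # exogenous noise at step k
    x[k + 1] = (1 - theta * dt) * x[k] + sigma * np.sqrt(dt) * eps
```

Refining `dt` while keeping the total time fixed gives ever larger DAGs whose marginals converge to those of the OU process, which is presumably the sense in which a continuous-index directed model arises as a limit of discrete ones.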

## The Central European school

There is a strand of research in this area that I am just starting to notice, spread across Tübingen, Amsterdam, and Zürich: causality on continuous index spaces and, relatedly as it turns out, equilibrium/feedback dynamics.

Maybe start from Schölkopf et al. (2012)? Bongers and Mooij (2018) gives the flavour of a more recent result.

> Uncertainty and random fluctuations are a very common feature of real dynamical systems. For example, most physical, financial, biochemical and engineering systems are subjected to time-varying external or internal random disturbances. These complex disturbances and their associated responses are most naturally described in terms of stochastic processes. A more realistic formulation of a dynamical system in terms of differential equations should involve such stochastic processes. This led to the fields of stochastic and random differential equations, where the latter deals with processes that are sufficiently regular. Random differential equations (RDEs) provide the most natural extension of ordinary differential equations to the stochastic setting and have been widely accepted as an important mathematical tool in modeling…

> Over the years, several attempts have been made to interpret these structural causal models that include cyclic causal relationships. They can be derived from an underlying discrete-time or continuous-time dynamical system. All these methods assume that the dynamical system under consideration converges to a single static equilibrium… These assumptions give rise to a more parsimonious description of the causal relationships of the equilibrium states and ignore the complicated but decaying transient dynamics of the dynamical system. The assumption that the system has to equilibrate to a single static equilibrium is rather strong and limits the applicability of the theory, as many dynamical systems have multiple equilibrium states.

> In this paper, we relax this condition and capture, under certain convergence assumptions, every random equilibrium state of the RDE in an SCM. Conversely, we show that under suitable conditions, every solution of the SCM corresponds to a sample-path solution of the RDE. Intuitively, the idea is that in the limit when time tends to infinity the random differential equations converge exactly to the structural equations of the SCM.

“RDEs” seem to be stochastic differential equations with differentiable sample paths.

## References

Aalen, Odd O., Kjetil Røysland, Jon Michael Gran, and Bruno Ledergerber. 2012. Journal of the Royal Statistical Society: Series A (Statistics in Society) 175 (4): 831–61.
Akbari, Kamal, Stephan Winter, and Martin Tomko. 2023. Geographical Analysis 55 (1): 56–89.
Bishop, Christopher M. 2006. Pattern Recognition and Machine Learning. Information Science and Statistics. New York: Springer.
Blom, Tineke, Stephan Bongers, and Joris M. Mooij. 2020. In Uncertainty in Artificial Intelligence, 585–94. PMLR.
Bongers, Stephan, Patrick Forré, Jonas Peters, Bernhard Schölkopf, and Joris M. Mooij. 2020. arXiv:1611.06221 [Cs, Stat], October.
Bongers, Stephan, and Joris M. Mooij. 2018. “From Random Differential Equations to Structural Causal Models: The Stochastic Case.” arXiv:1803.08784 [Cs, Stat], March.
Bongers, Stephan, Jonas Peters, Bernhard Schölkopf, and Joris M. Mooij. 2016. arXiv:1611.06221 [Cs, Stat], November.
Dash, Denver. 2003.
Dash, Denver, and Marek Druzdzel. 2001. In Symbolic and Quantitative Approaches to Reasoning with Uncertainty, edited by Salem Benferhat and Philippe Besnard, 2143:192–203. Berlin, Heidelberg: Springer Berlin Heidelberg.
Eichler, Michael, Rainer Dahlhaus, and Johannes Dueck. 2016. Journal of Time Series Analysis, January, n/a–.
Glymour, Clark. 2007. Philosophy of Science 74 (3): 330–46.
Hansen, Niels, and Alexander Sokol. 2014. Electronic Journal of Probability 19.
Lauritzen, Steffen L. 1996. Graphical Models. Oxford Statistical Science Series. Clarendon Press.
Lopez-Paz, David, Robert Nishihara, Soumith Chintala, Bernhard Schölkopf, and Léon Bottou. 2016. arXiv:1605.08179 [Cs, Stat], May.
Mogensen, Søren Wengel, Daniel Malinsky, and Niels Richard Hansen. 2018. In UAI2018, 17.
Peters, Jonas, Dominik Janzing, and Bernhard Schölkopf. 2017. Elements of Causal Inference: Foundations and Learning Algorithms. Adaptive Computation and Machine Learning Series. Cambridge, Massachusetts: The MIT Press.
Peters, Jonas, Joris M. Mooij, Dominik Janzing, and Bernhard Schölkopf. 2014. “Causal Discovery with Continuous Additive Noise Models.” The Journal of Machine Learning Research 15 (1): 2009–53.
Peters, Jonas, Joris Mooij, Dominik Janzing, and Bernhard Schoelkopf. 2012. arXiv:1202.3757 [Cs, Stat], February.
Rubenstein, Paul K., Stephan Bongers, Bernhard Schölkopf, and Joris M. Mooij. 2018. In Uncertainty in Artificial Intelligence.
Rubenstein, Paul K., Sebastian Weichwald, Stephan Bongers, Joris M. Mooij, Dominik Janzing, Moritz Grosse-Wentrup, and Bernhard Schölkopf. 2017. arXiv:1707.00819 [Cs, Stat], July.
Runge, Jakob, Sebastian Bathiany, Erik Bollt, Gustau Camps-Valls, Dim Coumou, Ethan Deyle, Clark Glymour, et al. 2019. Nature Communications 10 (1): 2553.
Schölkopf, Bernhard. 2022. In Probabilistic and Causal Inference: The Works of Judea Pearl, 1st ed., 36:765–804. New York, NY, USA: Association for Computing Machinery.
Schölkopf, Bernhard, Dominik Janzing, Jonas Peters, Eleni Sgouritsa, Kun Zhang, and Joris Mooij. 2012. “On Causal and Anticausal Learning.” In ICML 2012.
Schulam, Peter, and Suchi Saria. 2017. In Proceedings of the 31st International Conference on Neural Information Processing Systems, 1696–706. NIPS’17. Red Hook, NY, USA: Curran Associates Inc.
Wang, Sifan, Shyam Sankaran, and Paris Perdikaris. 2022. arXiv.
