Message-passing algorithms in graphical models

Cleaving reality at the joint, then summing it at the marginal

Assumed audience:

People who want to learn the tools of modern approximate graphical inference with the minimum of diversions

See also graph computations for a more general sense of Message Passing, and graph NNs for the same idea but with more neural dust sprinkled on it.

Variational message passing refers to inference in models whose probability density factorizes over some graphical independence structure, which buys us cheap and distributed inference, for some values of cheap and distributed. I am currently particularly interested in this for latent GP models, where it gives rise to Gaussian belief propagation, and more generally in the combination of message passing with variational inference.

I find the manner in which variational message passing is usually introduced to be confusing. It detours via a lot of ideas that turn out to be impractical to use in inference problems that I care about. In this notebook I am trying to shuffle about the graphical models curriculum until I find a minimum viable set of concepts to explain to myself what I actually need to do in my own approximate variational inference practice when I fire up modern probabilistic programming tools. This document is live and subject to revisions and definitely includes errors I have introduced recently in addition to the earlier errors I have not yet found.

For motivation as to why we would attack statistical questions this way, see belief propagation, which is the original setting for message passing, where it turns out there are some nice relations between graphs and integrals of factorised functions. It is worth learning once. Exact belief propagation cannot be implemented except in special cases, so you can likely forget the details soon after. The key take-home is that some integrals can be solved, at least conceptually, by passing messages around a graph whose nodes correspond to terms in a factorised function. It is, IMO, worth not learning too much about, because traditionally there are lots of other side-quests you get sent on at this point, all of which are useless (junction trees? No thank you).

There are two common ways to build up message-passing ideas. One is to start from belief propagation and generalise it in various ways, worrying at each step about how valid the extension is. Alternatively, we can present a specific VMP algorithm in its own right, which is pretty simple and always does something, even though sometimes it might produce nonsense; this is what a lot of application papers actually do. Later we can wonder under what circumstances it performs marginal inference, and what it approximates when it does not. The latter approach is my preference for now, since it leads to hands-on intuition-building with easy implementation, and defers the (to my mind) horrible, grinding stage of theorem-proving to later, when we are already solving useful problems and the pay-off is better.

Advice from Tom P. Minka (2005) tells us we have too many choices here:

The recipe to make a message-passing algorithm has four steps:

• Pick an approximating family for q to be chosen from. For example, the set of fully-factorized distributions, the set of Gaussians, the set of k-component mixtures, etc.
• Pick a divergence measure to minimize. For example, mean-field methods minimize the Kullback-Leibler divergence $$KL(q \| p)$$, expectation propagation minimizes $$KL(p \| q)$$, and power EP minimizes the α-divergence, $$D_{\alpha}(p \| q)$$.
• Construct an optimization algorithm for the chosen divergence measure and approximating family. Usually this is a fixed-point iteration obtained by setting the gradients to zero.
• Distribute the optimization across the network, by dividing the network p into factors, and minimizing local divergence at each factor.
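To make that menu of choices concrete, here is a tiny numpy sketch (all numbers invented for the demo) of the most classic combination: a fully-factorized Gaussian fitted to a correlated Gaussian target by minimizing $$KL(q \| p)$$. The standard closed-form result is that the optimal factorized Gaussian matches the target's *conditional* precisions, not its marginal variances, which is the famous mean-field variance underestimation:

```python
import numpy as np

# Target p: a correlated bivariate Gaussian, N(0, Sigma).
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
Lam = np.linalg.inv(Sigma)  # precision matrix

# The KL(q||p)-optimal fully-factorized Gaussian has q_i = N(0, 1 / Lam[i, i]),
# i.e. it matches the target's conditional, not marginal, variances.
mf_var = 1.0 / np.diag(Lam)
true_marginal_var = np.diag(Sigma)

print("mean-field variances:   ", mf_var)             # [0.36 0.36]
print("true marginal variances:", true_marginal_var)  # [1. 1.]
```

The stronger the correlation, the worse the underestimate; with independent coordinates the two sets of variances coincide.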

Take home message: there are a lot of pluggable parts in this family of algorithms. This can be confusing. I think it is easier to start from a concrete set of assumptions, derive a specific (ideally maximally simple) algorithm, and then generalise outwards to more fiddly/complicated ones. So here I will mention a few of the pluggable choices, but focus on one specific set.

I think the landscape of possible message-passing algorithms looks something like this. I am working on the assumption that starting from the red star is the easiest thing to do.

It is conceptually easier to seek forgiveness for bad approximate inference than permission for exact inference

A classic way to introduce message-passing inference is to spend time wondering when we are permitted to do exact inference with local calculations (“exact belief propagation”), and then introduce ever more caveats and complications.

After this complicated machinery breaks down, we give up, throw most of it out, and introduce a simpler approximation, i.e. variational message passing (VMP), which can be applied to more diverse distributions and graph structures. There is a slightly more complex setup in terms of distributions (what does it mean to variationally approximate a distribution?) but the machinery there is something we can use over and over again, and it is simpler, for me at least, than complicated graph algorithms.

Although VMP might not actually work, when it does, it is simple. And in many special cases the approximation and the exact version end up being the same thing, or at least very close. Certainly, comprehension-wise we should start from VMP.

NB Wand (2017) asserts that there is a difference between Mean Field Variational Bayes and Variational Message Passing. I have no idea what it is; I thought MFVB was a special case of VMP. So, take everything I say with a grain of salt because I am clearly missing at least one distinction of note.

Which graph?

We can pass messages on factor graphs or on DAGs or what-have-you. It comes out simpler on factor graphs (although John M. Winn and Bishop (2005) does a good job of being clear about the structure of the method on DAGs by analogy with expectation maximisation, which produces some lovely symmetries). The intuition here is that the factor graph then corresponds exactly to the variational approximation.

Which divergence?

Kullback-Leibler divergence. In combination with the other assumptions we have introduced, this makes everything come out nice. Minimizing $$KL(q \| p)$$ over a fully-factorized family is the confusingly-named “mean-field” method. I assume the name comes from physicists; it has that air about it.
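Since the choice of divergence is doing a lot of work, a quick numerical reminder (distributions invented for the demo) that the two KL directions disagree: $$KL(q \| p)$$ heavily punishes $$q$$ for putting mass where $$p$$ has almost none, while $$KL(p \| q)$$ punishes $$q$$ for missing $$p$$’s mass.

```python
import numpy as np

def kl(a, b):
    """Discrete KL divergence KL(a || b) in nats."""
    return float(np.sum(a * np.log(a / b)))

# A bimodal target p and a unimodal candidate q, on 5 states.
p = np.array([0.45, 0.05, 0.0001, 0.05, 0.4499])
q = np.array([0.05, 0.25, 0.40, 0.25, 0.05])

print("KL(q||p) =", kl(q, p))  # large: q puts mass where p has nearly none
print("KL(p||q) =", kl(p, q))  # smaller: q overlaps both of p's modes a bit
```

This asymmetry is why minimizing $$KL(q \| p)$$ tends to be mode-seeking while minimizing $$KL(p \| q)$$ tends to be mass-covering.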

Hold that thought.

Which random variates?

We will assume exponential-family RVs; more specifically, that each factor, conditional on the incoming messages, is an exponential-family density. Otherwise stuff gets weird.
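As a concrete instance of what “exponential family” buys us, here is a sketch (parameterization choices mine) of a 1-D Gaussian written in natural-parameter form $$\eta \cdot T(x) - A(\eta)$$ with sufficient statistics $$T(x) = (x, x^2)$$, checked against the familiar formula:

```python
import numpy as np

def to_natural(mu, var):
    """Gaussian (mu, var) -> natural parameters (eta1, eta2)."""
    return mu / var, -0.5 / var

def log_density(x, mu, var):
    # exponential-family form: eta . T(x) - A, with T(x) = (x, x^2)
    eta1, eta2 = to_natural(mu, var)
    log_Z = 0.5 * mu ** 2 / var + 0.5 * np.log(2 * np.pi * var)
    return eta1 * x + eta2 * x ** 2 - log_Z

# agrees with the familiar (x - mu)^2 form of the Gaussian log-density
x, mu, var = 0.7, 1.2, 2.0
direct = -0.5 * np.log(2 * np.pi * var) - 0.5 * (x - mu) ** 2 / var
assert np.isclose(log_density(x, mu, var), direct)
```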

KL variational message-passing for exponential families on factor graphs

Putting this together, I assert that a reasonable choice for our basic starting algorithm is variational message passing on factor graphs with respect to KL divergence. I will also assume that all the graphs are trees, which is to say, there are no loops. Later on, it will turn out that some things break if we have loops, but generally not very badly, and the badness can be quantified and controlled with various tools, so we can often avoid worrying about that.

TODO: compare to an exact algorithm like Gaussian BP.

At first glance these together look like insane restrictions, but we can make it work. We can take a short leap of faith where we seem to be doing arbitrary or weird things, then reassure ourselves that we are solving something like a real inference problem.

Let us now look at the factor graph from Wand (2017).

A factor graph. The factors are represented by square nodes, and the random variates or stochastic nodes by circular ones.

Consider a Bayesian statistical model with observed data $$\boldsymbol{D}$$ and parameter vector $$\boldsymbol{\theta}$$. In a factor graph we can write the joint density of these random nodes as $$p(\boldsymbol{\theta}, \boldsymbol{D})=\prod_{j=1}^{N} f_{j}\left(\boldsymbol{\theta}_{\text {neighbors }(j)}\right) .$$ Our goal in inference is to calculate the posterior density function $$p(\boldsymbol{\theta} \mid \boldsymbol{D}).$$1
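To fix ideas, here is the smallest possible instance of that factorization: one binary parameter, one observed datum, and two factors, with all factor values invented for the demo. The posterior is computed by brute-force enumeration, which is exactly what message passing will let us avoid on larger graphs.

```python
import numpy as np

# One binary parameter theta, one observed datum D, two factors.
prior = np.array([0.7, 0.3])        # f1(theta)
lik = np.array([[0.9, 0.1],         # f2(theta, D), rows: theta, cols: D
                [0.2, 0.8]])
D_obs = 1                           # the observed value of D

# The joint is the product of the factors; the posterior over theta is
# that product with D clamped to its observed value, renormalized.
unnorm = prior * lik[:, D_obs]
posterior = unnorm / unnorm.sum()
print(posterior)  # [0.22580645 0.77419355]
```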

We do that with updates that look like this:

Exact belief propagation updates steps from Ortiz et al’s interactive GBP demo.

In the exact case Ortiz et al’s interpretation of these messages in the belief propagation setting makes sense:

Belief Update: The variable node beliefs are updated by taking a product of the incoming messages from all adjacent factors, each of which represents that factor’s belief on the receiving node’s variables.

Factor-to-variable message: To send a message to an adjacent variable node, a factor aggregates messages from all other adjacent variable nodes and marginalizes over all the other nodes’ variables to produce a message that expresses the factor’s belief over the receiving node’s variables.

Variable-to-factor message: A variable-to-factor message tells the factor what the belief of the variable would be if the receiving factor node did not exist. This is computed by taking the product of the messages the variable node has received from all other factor nodes.

We are working in an approximated setting, though, so this is more of a heuristic.

A fully-factorized approximation to the posterior density function $$p(\boldsymbol{\theta} \mid \boldsymbol{D})$$ is $p(\boldsymbol{\theta} \mid \boldsymbol{D}) \approx q^{*}(\boldsymbol{\theta})$ where $$q^{*}(\boldsymbol{\theta})$$ is the minimizer of the Kullback-Leibler divergence $$\int q(\boldsymbol{\theta}) \log \left\{\frac{q(\boldsymbol{\theta})}{p(\boldsymbol{\theta} \mid \boldsymbol{D})}\right\} d \boldsymbol{\theta}$$. We suppress the dependence on data here because we are holding it fixed when we conduct a posterior update.

For each $$1 \leq i \leq M$$ and $$1 \leq j \leq N$$, the VMP stochastic node to factor message updates are $m_{\boldsymbol{\theta}_{i} \rightarrow f_{j}}\left(\boldsymbol{\theta}_{i}\right) \longleftarrow \propto \prod_{j^{\prime} \neq j: i \in \text { neighbors }\left(j^{\prime}\right)} m_{f_{j^{\prime}} \rightarrow \boldsymbol{\theta}_{i}}\left(\boldsymbol{\theta}_{i}\right)$ and the factor to stochastic node message updates are $m_{f_{j} \rightarrow \boldsymbol{\theta}_{i}}\left(\boldsymbol{\theta}_{i}\right) \longleftarrow \propto \exp \left[E_{f_{j} \rightarrow \boldsymbol{\theta}_{i}}\left\{\log f_{j}\left(\boldsymbol{\theta}_{\text {neighbors }(j)}\right)\right\}\right]$ where $$E_{f_{j} \rightarrow \boldsymbol{\theta}_{i}}$$ denotes expectation with respect to the density function $\frac{\prod_{i^{\prime} \in \text { neighbors }(j) \backslash\{i\}} m_{f_{j} \rightarrow \boldsymbol{\theta}_{i^{\prime}}}\left(\boldsymbol{\theta}_{i^{\prime}}\right) m_{\boldsymbol{\theta}_{i^{\prime}} \rightarrow f_{j}}\left(\boldsymbol{\theta}_{i^{\prime}}\right)}{\prod_{i^{\prime} \in \text { neighbors }(j) \backslash\{i\}} \int m_{f_{j} \rightarrow \boldsymbol{\theta}_{i^{\prime}}}\left(\boldsymbol{\theta}_{i^{\prime}}\right) m_{\boldsymbol{\theta}_{i^{\prime}} \rightarrow f_{j}}\left(\boldsymbol{\theta}_{i^{\prime}}\right) d \boldsymbol{\theta}_{i^{\prime}}}$ That looks like we have made a ghastly mistake, because this latter message is horrible in general. It does simplify somewhat for conditionally-exponential families, where we can exploit conditional conjugacy.

After passing messages around all the nodes in the graph until it seems to have converged, the Kullback-Leibler optimal $$q$$-densities are obtained via $q^{*}\left(\boldsymbol{\theta}_{i}\right) \propto \prod_{j: i \in \text { neighbors }(j)} m_{f_{j} \rightarrow \boldsymbol{\theta}_{i}}\left(\boldsymbol{\theta}_{i}\right).$
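These updates are easiest to sanity-check in the exact, discrete, tree-structured case, where the factor-to-variable message really is a marginalization and no expectations are needed. A sketch on a two-variable chain (factor values made up), verified against brute-force summation of the joint:

```python
import numpy as np

# A two-variable chain: theta1 -- f12 -- theta2, plus unary factors.
f1 = np.array([0.6, 0.4])            # unary factor on theta1
f2 = np.array([0.5, 0.5])            # unary factor on theta2
f12 = np.array([[0.9, 0.1],
                [0.3, 0.7]])         # pairwise factor f12[theta1, theta2]

# variable-to-factor: theta1 -> f12 is the product of theta1's other
# incoming messages, here just the unary factor f1
m_t1_f12 = f1
# factor-to-variable: f12 -> theta2 marginalizes theta1 out
m_f12_t2 = m_t1_f12 @ f12
# belief at theta2: product of all incoming messages, renormalized
q_t2 = f2 * m_f12_t2
q_t2 = q_t2 / q_t2.sum()

# brute-force check: marginalize the explicit joint over theta1
joint = f1[:, None] * f12 * f2[None, :]
marg_t2 = joint.sum(axis=0) / joint.sum()
assert np.allclose(q_t2, marg_t2)
print(q_t2)  # [0.66 0.34]
```

On a tree this recovers the marginal exactly; the variational machinery above is what we fall back on when the marginalization inside a factor is not tractable.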

What just happened?

I found several things confusing about these algorithms when first I heard of them.

Firstly, what is the message actually? It is a probability distribution or, more generally, a measure, possibly over only some of the axes. This can be obscured by the presentation in some texts, IMO, because a couple of pedagogic choices seem to be common that are determined more by historical inertia than by ease of learning.

The fact that we can represent some target distribution, or more often some approximation to it, as a finite-dimensional vector comes down to the fact that some distributions are described by finite-dimensional parameter vectors (e.g. I can pack the mean and covariance parameters of a multivariate Gaussian into a vector).
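For a 1-D Gaussian, for instance, a message can be packed into a two-component vector of natural parameters, and then the pointwise product of two messages, the basic operation of the updates above, becomes a vector sum. A sketch (numbers made up), with a pointwise check that the summed parameters really describe the product density up to a constant:

```python
import numpy as np

def pack(mu, var):
    """Gaussian message (mu, var) as natural parameters (mu/var, -1/(2 var))."""
    return np.array([mu / var, -0.5 / var])

def unpack(eta):
    var = -0.5 / eta[1]
    return eta[0] * var, var

# multiplying two Gaussian messages = adding their parameter vectors
mu, var = unpack(pack(0.0, 4.0) + pack(2.0, 1.0))
print(mu, var)  # precision-weighted combination: 1.6 0.8

# pointwise check on a grid, up to a normalizing constant
x = np.linspace(-3, 5, 9)
def dens(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var)
ratio = dens(x, 0.0, 4.0) * dens(x, 2.0, 1.0) / dens(x, mu, var)
assert np.allclose(ratio, ratio[0])  # constant ratio => same density shape
```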

But what do we actually do with these messages? This turns out to be subtle. tl;dr: it is a combination of multiplying and re-normalising other distributions, and sometimes optimising with respect to them. Longer story: there are several things that we might hope to do in the general case.

What ends up being feasible most often is to introduce that lucky combination of assumptions (KL divergence, fully-factorized, exponential family) which causes the factors to decompose nicely, such that the messages can reasonably be considered over each edge separately. (And if that is infeasible there are other things that can be done, such as using semiparametric models which are still fully factored and exponential, or breaking variables out into observations and auxiliary variables, and possibly other things… so this idea is not as terrible as it first seemed to me.)

Also, we often see the update rules written for multiple i.i.d. exponential-family observations, which is one special case, or for a single observation, which is another. These are important cases, but they are treated as if their relationship to the methodology at its most general were obvious. IMO it is not obvious, but the fact that it comes out so nicely in the special cases means that I feel like an idiot for not “getting” what problem I am solving in general.

tl;dr The magic here is that the update rules for (conditionally) exponential variates under a mean-field assumption and KL divergence are surprisingly simple in this application.
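For the i.i.d. special case, the simplification is that every observation contributes one likelihood “message” of the same form, so the posterior update is just a sum of natural-parameter vectors. A sketch for the conjugate Gaussian-mean model (prior and noise variance invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
s2 = 0.5                                   # known observation noise variance
y = rng.normal(1.0, np.sqrt(s2), size=20)  # i.i.d. observations

# natural parameters (mu/var, -1/(2 var)) of the N(0, 10) prior on the mean
eta = np.array([0.0, -0.5 / 10.0])

# absorbing each observation's likelihood message is a natural-parameter sum
for yi in y:
    eta = eta + np.array([yi / s2, -0.5 / s2])

post_var = -0.5 / eta[1]
post_mean = eta[0] * post_var

# agrees with the textbook conjugate Gaussian update
v_check = 1.0 / (1.0 / 10.0 + len(y) / s2)
m_check = v_check * (y.sum() / s2)
assert np.isclose(post_var, v_check) and np.isclose(post_mean, m_check)
print(post_mean, post_var)
```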

Fully-factored assumption

Here we assume that the density at each factor is well-approximated by a product of independent terms, one for each edge in the factor graph, so that overall $$q(\boldsymbol{\theta})=\prod_{i=1}^{M} q_{i}\left(\boldsymbol{\theta}_{i}\right).$$

Anyway, it is not always a terrible idea to make this assumption, but nor is it the only way to do things, at least conceptually. In practice, everything is much more effort if we cannot make it.

Nonetheless, papers that do not make it explicit are very confusing. Why would we ever expect inference to look anything like that?2

AFAICT this is a problem of history. Not all variational inference must make the mean-field or fully-factored assumptions, but methods that relax these restrictions start to look different and need rather different technology to go, such as local Monte Carlo estimation. For a long while this set of assumptions was the main game in town (or even more onerous assumptions, like assuming everything is discrete and treating inference exactly), so tutorials of a certain vintage take mean-field variational inference as a synonym for variational inference. If I have just learnt some non-mean-field SVI methods from a recent NeurIPS paper before I encounter VMP, I might well be confused by this.

Conditional conjugacy/ exponential families

Under construction.

TODO: note that “natural parameter” form is extra nice here because it summarises the “right” updates.

Attias (1999); Ghahramani and Beal (2001); Winn and Bishop (2005)

Dustin Tran and David Blei mention the importance of the conditional conjugacy relations when we are working in (conditionally) exponential families.

Justin Domke summarises exponential families for graphical model usage.

TBD

Loopiness

OK, we have assumed tree graphs for everything thus far, but this is an onerous assumption. We can allow loopy graphs, which often works, sort of. This choice does introduce extra approximations into the solution, but also allows us to handle more plausible systems within the same setup.
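To see what “sort of works” means, here is a sketch of loopy sum-product on the smallest loopy graph, a 3-cycle of binary variables, with all potential values invented for the demo. The messages converge, and the resulting belief is close to, but not exactly, the true marginal, because evidence gets double-counted around the loop:

```python
import numpy as np

psi = np.array([[1.2, 0.8],
                [0.8, 1.2]])  # mildly attractive pairwise factor, every edge
phi = [np.array([1.5, 0.5]),  # unary factor nudging node 0 towards state 0
       np.ones(2), np.ones(2)]
directed = [(0, 1), (1, 2), (2, 0), (1, 0), (2, 1), (0, 2)]
msgs = {e: np.ones(2) / 2 for e in directed}

for sweep in range(200):
    new = {}
    for (i, j) in directed:
        # product of node i's unary factor and its messages from nodes != j
        inc = phi[i].copy()
        for (k, l) in directed:
            if l == i and k != j:
                inc = inc * msgs[(k, l)]
        m = psi.T @ inc                  # marginalize x_i out
        new[(i, j)] = m / m.sum()
    converged = all(np.allclose(new[e], msgs[e], atol=1e-10) for e in directed)
    msgs = new
    if converged:
        break

belief0 = phi[0].copy()
for (k, l) in directed:
    if l == 0:
        belief0 = belief0 * msgs[(k, l)]
belief0 = belief0 / belief0.sum()
print(sweep, belief0)  # close to, but not exactly, the true marginal [0.75, 0.25]
```

With stronger couplings the double-counting worsens and convergence itself can fail, which is where damping and the free-energy view below come in.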

TBD

Which free energy?

Helmholtz free energy is the classic flavour. On graphs we are concerned with a related, more general (?) free energy called the Bethe free energy (Jonathan S. Yedidia, Freeman, and Weiss 2000, 2005; M. J. Wainwright and Jordan 2008).

TBD

In MCMC sampling

See Huang and Gelman (2005); Vehtari et al. (2019); Wang and Dunson (2013).

Semiparametric

This is the main innovation I think, in the synthesis of Wand (2017).

Without Mean-field assumptions

What do relaxations of the mean-field assumption look like, and do they ever arise in message-passing models? There is structured mean-field inference, below, but I think that often we throw out any hope of message-passing methods when we relax the mean-field assumption. Variational autoencoders do not use this assumption in general, but nor do they use message-passing inference. Re-imposing a graphical structure on VAE models is gruelling and AFAIK only done on a case-by-case basis. Perhaps there are some neural methods that do something else?

TBD.

Loopy belief propagation

From Murphy, Weiss, and Jordan (1999) and analysed in Weiss and Freeman (2001); M. J. Wainwright and Jordan (2008); Jonathan S. Yedidia, Freeman, and Weiss (2005). One day I might be interested in Bethe free energy too?

Structured mean field

Directly generalises mean field. I am not sure how useful this is in practice.

Phase transitions

Thomas Orton’s overview lecture connects message-passing with statistical mechanics of statistics.

Last week, we saw how certain computational problems like 3SAT exhibit a thresholding behavior, similar to a phase transition in a physical system. In this post, we’ll continue to look at this phenomenon by exploring a heuristic method, belief propagation (and the cavity method), which has been used to make hardness conjectures, and also has thresholding properties. In particular, we’ll start by looking at belief propagation for approximate inference on sparse graphs as a purely computational problem. After doing this, we’ll switch perspectives and see belief propagation motivated in terms of Gibbs free energy minimization for physical systems.

More possibly in Barbier (2015).

TBD.

Automatic differentiation as message-passing

TBD

A good introduction is M. J. Wainwright and Jordan (2008), which does some deep stuff with setting up graphical models in exponential family context. Bishop (2006) provides extra application context and a different theoretical development. Dauwels (2007) explains why the factor graph setting is the simplest way to develop the message passing algorithms. I agree with him about this. It is simplest to learn message-passing inference for factor graphs and then translate to and from classic DAGs and MRFs as needed.

Dustin Tran and David Blei advocate for Wand (2017) as a kind of modern milestone in this field. Wingate and Weber (2013) connects us to modern stochastic VI.

Next, see the recommended texts about general graphical model theory in the graphical models notebook.

From a signal processing/coding perspective H.-A. Loeliger (2004) is interesting. Read this one if you know what a turbo code is.

Other overviews I have found useful for various reasons are Tom P. Minka (2005), J. S. Yedidia, Freeman, and Weiss (2003), Sutton and Minka (2006), and Cox, van de Laar, and de Vries (2019).

Recommended: Wee Sun Lee’s NeurIPS lecture about how to integrate neural function approximation into distributed inference: Message Passing In Machine Learning.

Tools

• GAMP.

• ForneyLab looks especially useful for me:

The message passing paradigm offers a convenient method for leveraging model-specific structures, while remaining generally applicable. Message passing can be conveniently formulated on a Forney-style factor graph (FFG) representation of the model. Inference tasks on the model can then be decomposed in local computations, represented by messages that flow across the graph. This locality allows for storing pre-computed message updates in a look-up table that can be re-used across models. Automated algorithm construction then amounts to scheduling these messages in the order required by the inference task (see also this conference paper at JuliaCon).

ForneyLab (GitHub) is introduced in Cox, van de Laar, and de Vries (2019) as a novel Julia package that allows the user to specify a probabilistic model as an FFG and pose inference problems on this FFG. In return, ForneyLab automatically constructs a Julia program that executes a message passing-based (approximate) inference procedure. ForneyLab is designed with a focus on flexibility, extensibility and applicability to biologically plausible models for perception and decision making, such as the hierarchical Gaussian filter (HGF). With ForneyLab, the search for better models for perception and action can be accelerated.

This idea has been baked into various other probabilistic programming frameworks by now, in that many of them seem to infer a reasonable message-passing structure for models.

References

Aji, S.M., and R.J. McEliece. 2000. IEEE Transactions on Information Theory 46 (2): 325–43.
Akbayrak, Semih, Ivan Bocharov, and Bert de Vries. 2021. Entropy 23 (7): 815.
Ambrogioni, Luca, Kate Lin, Emily Fertig, Sharad Vikram, Max Hinne, Dave Moore, and Marcel van Gerven. 2021. In Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, 676–84. PMLR.
Arnold, Barry C., Enrique Castillo, and Jose M. Sarabia. 1999. Conditional Specification of Statistical Models. Springer Science & Business Media.
Attias, Hagai. 1999. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, 21–30. UAI’99. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Bapst, Victor, and Amin Coja-Oghlan. 2016. Random Structures & Algorithms 49 (4): 694–741.
Barber, David. 2012. Bayesian Reasoning and Machine Learning. Cambridge ; New York: Cambridge University Press.
Barbier, Jean. 2015. arXiv:1511.01650 [Cs, Math], November.
Barbier, Jean, Florent Krzakala, Nicolas Macris, Léo Miolane, and Lenka Zdeborová. 2017. arXiv:1708.03395 [Cond-Mat, Physics:math-Ph], August.
Barfoot, Timothy D. 2020. arXiv:2010.08022 [Cs, Math, Stat], October.
Bayati, Mohsen, and Andrea Montanari. 2011. IEEE Transactions on Information Theory 57 (2): 764–85.
Bishop, Christopher M. 2006. Pattern Recognition and Machine Learning. Information Science and Statistics. New York: Springer.
Blake, Andrew, Pushmeet Kohli, and Carsten Rother, eds. 2011. Markov Random Fields for Vision and Image Processing. Cambridge, Mass: MIT Press.
Blei, David M., Alp Kucukelbir, and Jon D. McAuliffe. 2017. Journal of the American Statistical Association 112 (518): 859–77.
Borgerding, Mark, and Philip Schniter. 2016. arXiv:1612.01183 [Cs, Math], December.
Boyd, Stephen. 2010. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Vol. 3. Now Publishers Inc.
Cevher, Volkan, Marco F. Duarte, Chinmay Hegde, and Richard Baraniuk. 2009. In Advances in Neural Information Processing Systems, 257–64. Curran Associates, Inc.
Cox, Marco, Thijs van de Laar, and Bert de Vries. 2019. International Journal of Approximate Reasoning 104 (January): 185–204.
Dauwels, Justin. 2007. In 2007 IEEE International Symposium on Information Theory, 2546–50. Nice: IEEE.
Dehaene, Guillaume P. 2016. arXiv:1612.05053 [Stat], December.
Donoho, David L., A. Maleki, and A. Montanari. 2010. In 2010 IEEE Information Theory Workshop (ITW), 1–5.
Donoho, David L., Arian Maleki, and Andrea Montanari. 2009a. Proceedings of the National Academy of Sciences 106 (45): 18914–19.
———. 2009b. In 2010 IEEE Information Theory Workshop (ITW), 1–5.
Donoho, David L., and Andrea Montanari. 2013. arXiv:1310.7320 [Cs, Math, Stat], October.
Forney, G.D. 2001. IEEE Transactions on Information Theory 47 (2): 520–48.
Frey, B.J., and Nebojsa Jojic. 2005. IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (9): 1392–1416.
Gemp, Ian, Brian McWilliams, Claire Vernade, and Thore Graepel. 2020. In.
Ghahramani, Zoubin, and Matthew Beal. 2001. In Advances in Neural Information Processing Systems. Vol. 13. MIT Press.
Hoffman, Matthew D., and David M. Blei. 2015. In Artificial Intelligence and Statistics, 361–69.
Huang, Zaijing, and Andrew Gelman. 2005. SSRN Electronic Journal.
Jaggi, Martin, Virginia Smith, Martin Takac, Jonathan Terhorst, Sanjay Krishnan, Thomas Hofmann, and Michael I Jordan. 2014. In Advances in Neural Information Processing Systems 27, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 3068–76. Curran Associates, Inc.
Jordan, Michael I., Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. 1999. Machine Learning 37 (2): 183–233.
Jordan, Michael Irwin. 1999. Learning in Graphical Models. Cambridge, Mass.: MIT Press.
Kindermann, Ross, and J. Laurie Snell. 1980. Markov Random Fields and Their Applications. Vol. 1. Contemporary Mathematics. Providence, Rhode Island: American Mathematical Society.
Kirkley, Alec, George T. Cantwell, and M. E. J. Newman. 2020. arXiv:2009.12246 [Cond-Mat], September.
Kjærulff, Uffe B., and Anders L. Madsen. 2008. Bayesian Networks and Influence Diagrams. Information Science and Statistics. New York, NY: Springer New York.
Koller, Daphne, and Nir Friedman. 2009. Probabilistic Graphical Models : Principles and Techniques. Cambridge, MA: MIT Press.
Kschischang, F.R., B.J. Frey, and H.-A. Loeliger. 2001. IEEE Transactions on Information Theory 47 (2): 498–519.
Kuck, Jonathan, Shuvam Chakraborty, Hao Tang, Rachel Luo, Jiaming Song, Ashish Sabharwal, and Stefano Ermon. 2020. arXiv:2007.00295 [Cs, Stat], July.
Laar, Thijs van de, Marco Cox, Ismail Senoz, Ivan Bocharov, and Bert de Vries. 2018. In Conference on Complex Systems, 3.
Lauritzen, Steffen L. 1996. Graphical Models. Oxford Statistical Science Series. Clarendon Press.
Li, Yingzhen, and Richard E Turner. 2016. In Advances in Neural Information Processing Systems, 29:1081–89. Red Hook, NY, USA: Curran Associates, Inc.
Liu, Dong. 2020. “Perspectives on Probabilistic Graphical Models.”
Loeliger, H.-A. 2004. IEEE Signal Processing Magazine 21 (1): 28–41.
Loeliger, Hans-Andrea, Justin Dauwels, Junli Hu, Sascha Korl, Li Ping, and Frank R. Kschischang. 2007. Proceedings of the IEEE 95 (6): 1295–1322.
Ma, Chenxin, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtárik, and Martin Takáč. 2015. arXiv:1502.03508 [Cs], February.
Malioutov, Dmitry M., Jason K. Johnson, and Alan S. Willsky. 2006. Journal of Machine Learning Research 7 (October): 2031–64.
Minka, Thomas P. 2001. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, 362–69. UAI’01. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Minka, Tom P. 2005. Technical report, Microsoft Research.
———. 2008. Techincal Report.
Montanari, Andrea. 2012. Compressed Sensing: Theory and Applications, 394–438.
Montanari, Andrea, and Subhabrata Sen. 2022. April.
Murphy, Kevin P. 2012. Machine learning: a probabilistic perspective. 1 edition. Adaptive computation and machine learning series. Cambridge, MA: MIT Press.
———. 2022. Probabilistic Machine Learning: An Introduction. MIT Press.
Murphy, Kevin P., Yair Weiss, and Michael I. Jordan. 1999. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, 467–75. UAI’99. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Naesseth, Christian Andersson, Fredrik Lindsten, and Thomas B Schön. 2014. In Advances in Neural Information Processing Systems. Vol. 27. Curran Associates, Inc.
Nguyen, Trung V., and Edwin V. Bonilla. 2014. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 1, 1404–12. NIPS’14. Cambridge, MA, USA: MIT Press.
Ortiz, Joseph, Talfan Evans, and Andrew J. Davison. 2021. arXiv:2107.02308 [Cs], July.
Parr, Thomas, Dimitrije Markovic, Stefan J. Kiebel, and Karl J. Friston. 2019. Scientific Reports 9 (1): 1889.
Pearl, Judea. 1982. In Proceedings of the Second AAAI Conference on Artificial Intelligence, 133–36. AAAI’82. Pittsburgh, Pennsylvania: AAAI Press.
———. 1986. Artificial Intelligence 29 (3): 241–88.
———. 2008. Probabilistic reasoning in intelligent systems: networks of plausible inference. Rev. 2. print., 12. [Dr.]. The Morgan Kaufmann series in representation and reasoning. San Francisco, Calif: Kaufmann.
———. 2009. Causality: Models, Reasoning and Inference. Cambridge University Press.
Peleg, Tomer, Yonina C. Eldar, and Michael Elad. 2010. IEEE Transactions on Signal Processing 60 (5): 2286–2303.
Rajaei, Boshra, Sylvain Gigan, Florent Krzakala, and Laurent Daudet. 2017. Image Processing On Line 7 (January): 43–55.
Riegler, Erwin, Gunvor Elisabeth Kirkelund, Carles Navarro Manchón, Mihai-Alin Badiu, and Bernard Henry Fleury. 2012. arXiv:1112.0467 [Cs, Math, Stat], June.
Roychowdhury, Anirban, and Brian Kulis. 2015. In Artificial Intelligence and Statistics, 800–808. PMLR.
Satorras, Victor Garcia, and Max Welling. 2021. arXiv:2003.01998 [Cs, Stat], March.
Schniter, P., and S. Rangan. 2012. In 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 815–22.
Smith, David A., and Jason Eisner. 2008. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 145–56. Association for Computational Linguistics.
Spirtes, Peter, Clark Glymour, and Richard Scheines. 2001. Causation, Prediction, and Search. Second Edition. Adaptive Computation and Machine Learning. The MIT Press.
Studený, Milan. 2005. Probabilistic Conditional Independence Structures. Information Science and Statistics. London: Springer.
Sutton, Charles, and Tom P Minka. 2006. Technical Report TR-2006-121, Microsoft Research.
Vehtari, Aki, Andrew Gelman, Tuomas Sivula, Pasi Jylänki, Dustin Tran, Swupnil Sahai, Paul Blomstedt, John P. Cunningham, David Schiminovich, and Christian Robert. 2019. arXiv:1412.4869 [Stat], November.
Wainwright, Martin J., and Michael I. Jordan. 2008. Graphical Models, Exponential Families, and Variational Inference. Vol. 1. Foundations and Trends® in Machine Learning. Now Publishers.
Wainwright, Martin, and Michael I Jordan. 2005. “A Variational Principle for Graphical Models.” In New Directions in Statistical Signal Processing. Vol. 155. MIT Press.
Wand, M. P. 2017. Journal of the American Statistical Association 112 (517): 137–68.
Wang, Xiangyu, and David B. Dunson. 2013. December.
Weiss, Yair, and William T. Freeman. 2001. Neural Computation 13 (10): 2173–2200.
Welling, Max, Tom P Minka, and Yee Whye Teh. 2012. arXiv:1207.1426 [Cs], July.
Wingate, David, and Theophane Weber. 2013. arXiv:1301.1299 [Cs, Stat], January.
Winn, John M. 2004. “Variational Message Passing and Its Applications.” University of Cambridge.
Winn, John M., and Christopher M. Bishop. 2005. In Journal of Machine Learning Research, 661–94.
Xing, Eric P., Michael I. Jordan, and Stuart Russell. 2003. In Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, 583–91. UAI’03. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Yedidia, Jonathan S., William T. Freeman, and Yair Weiss. 2000. “Generalized Belief Propagation.” In Proceedings of the 13th International Conference on Neural Information Processing Systems, 668–74. NIPS’00. Cambridge, MA, USA: MIT Press.
Yedidia, Jonathan S., W.T. Freeman, and Y. Weiss. 2005. IEEE Transactions on Information Theory 51 (7): 2282–312.
Yedidia, J.S., W.T. Freeman, and Y. Weiss. 2003. In Exploring Artificial Intelligence in the New Millennium, edited by G. Lakemeyer and B. Nebel, 239–69. Morgan Kaufmann Publishers.
Yoshida, Ryo, and Mike West. 2010. Journal of Machine Learning Research 11 (May): 1771–98.
Yuille, Alan. 2011. “Loopy Belief Propagation, Mean Field Theory and Bethe Approximations.” In Markov Random Fields for Vision and Image Processing, edited by Andrew Blake, Pushmeet Kohli, and Carsten Rother. Cambridge, Mass: MIT Press.

1. or some function of that posterior density, but we don’t gain anything by doing that in the current context. If you want to calculate, e.g. the mode, consider looking up “max-product” inference.↩︎

2. I suspect that is a fairly deep question.↩︎
