A particular optimisation method for statistics that gets you a maximum likelihood estimate despite various annoyances such as missing data.
Vague description of the algorithm:
We have an experimental process that generates a random vector \((B, Y)\) according to a parameter \(\theta\). We wish to estimate the parameter of interest \(\theta\) by maximum likelihood, but we only observe i.i.d. samples \(\{b_i\}\) drawn from \(B\); the \(Y\) component is unobserved. The likelihood function of the incomplete data, \(L(\theta, \{b_i\})\), is tedious or intractable to maximise, but the “complete” joint likelihood of both the observed and unobserved components, \(L(\theta, \{b_i\}, y)\), is easier to maximise. Then we are potentially in a situation where expectation maximisation can help.
Call \(\theta^{(k)}\) the estimate of \(\theta\) at step \(k\). Write \(\ell(\theta, \{b_i\}, y)\equiv\log L(\theta, \{b_i\}, y)\), because we will be working in log-likelihoods from here on.
The following simple form of the algorithm works when the log-likelihood \(\ell(\theta, b, y)\) is linear in \(y\). (Which I believe is equivalent to the complete-data model being an exponential family, but I should check.)
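To spell out why linearity in \(y\) is the operative condition (my gloss, not anything from the references): if \(\ell(\theta, b, y) = g(\theta, b) + h(\theta, b)^\top y\) for some functions \(g\) and \(h\), then
\[ E_{\theta^{(k)}}[\ell(\theta, b, Y)|b] = g(\theta, b) + h(\theta, b)^\top E_{\theta^{(k)}}[Y|b] = \ell\left(\theta, b, E_{\theta^{(k)}}[Y|b]\right), \]
so plugging the conditional expectation of \(Y\) into the log-likelihood and maximising is the same as maximising the expected log-likelihood that appears in the general form below.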
At step \(k=0\) we start with an estimate \(\theta^{(0)}\), chosen arbitrarily or by our favourite approximate method.
We attempt to improve our estimate of the parameter of interest by the following iterative algorithm:
“Expectation”: Under the completed data model with joint distribution \(F(b,y,\theta^{(k)})\) we estimate \(y\) as
\[ y^{(k)}=E_{\theta^{(k)}}[Y|b] \]
“Maximisation”: Solve a (hopefully easier) maximisation problem:
\[ \theta^{(k+1)}=\operatorname{arg max}_\theta \ell(\theta, b, y^{(k)}) \]
In the case that this log-likelihood is not linear in \(y\), you are supposed to instead take
\[ \theta^{(k+1)}=\operatorname{arg max}_\theta E_{\theta^{(k)}}[\ell(\theta, b, Y)|b] \]
In practice this nicety is often ignored.
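To make this concrete, here is a minimal sketch of EM for a two-component Gaussian mixture in Python; the model, the synthetic data and all the variable names are my own illustrative choices rather than anything from the references. The latent \(y\) are component-membership indicators, the complete-data log-likelihood is linear in them, so the E-step reduces to computing responsibilities \(E_{\theta^{(k)}}[Y|b]\) and the M-step has a closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observed data b: a two-component Gaussian mixture (ground truth hidden from EM).
b = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.5, 700)])

def log_gauss(x, mu, sigma):
    """Elementwise Gaussian log-density."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

# Arbitrary starting estimate theta^(0) = (mixing weight, means, standard deviations).
pi, mu, sigma = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])

prev_ll = -np.inf
for k in range(200):
    # E-step: responsibilities r_i = E[Y_i | b_i] under theta^(k),
    # where Y_i = 1 indicates membership in component 1.
    log_p1 = np.log(pi) + log_gauss(b, mu[1], sigma[1])
    log_p0 = np.log(1 - pi) + log_gauss(b, mu[0], sigma[0])
    log_norm = np.logaddexp(log_p0, log_p1)
    r = np.exp(log_p1 - log_norm)

    # Observed-data log-likelihood; an EM iteration should never decrease it.
    ll = log_norm.sum()
    assert ll >= prev_ll - 1e-9, "EM decreased the likelihood -- something is wrong"
    if ll - prev_ll < 1e-8:
        break
    prev_ll = ll

    # M-step: maximise the imputed complete-data log-likelihood in closed form.
    pi = r.mean()
    mu = np.array([np.average(b, weights=1 - r), np.average(b, weights=r)])
    sigma = np.sqrt(np.array([
        np.average((b - mu[0]) ** 2, weights=1 - r),
        np.average((b - mu[1]) ** 2, weights=r),
    ]))

print(f"after {k} iterations: weight={pi:.3f}, means={mu.round(3)}, sigmas={sigma.round(3)}")
```

The assert encodes the guarantee discussed just below: an EM iteration can never decrease the observed-data log-likelihood, which makes for a cheap sanity check on any implementation.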
Even if you do the right thing, EM may converge slowly, and possibly only to a local rather than the global maximum, but it can be easy and robust to get started with, and at least it doesn’t make things worse: each iteration is guaranteed not to decrease the observed-data likelihood.
Literature note: apparently the proofs in Dempster, Laird, and Rubin (1977) are dicey; see Wu (1983) for an improved (i.e. correct) version, or Wainwright and Jordan (2008) for an interpretation in terms of graphical models wherein the algorithm is a form of message passing.
A Transparent Interpretation of the EM Algorithm by James Coughlan makes an interesting brief point. We write data \(z\), latent variable \(y\), parameter of interest \(\theta\). Then…
[…] maximizing Neal and Hinton’s joint function of \(\theta\) and a distribution on \(y\) is equivalent to maximum likelihood estimation.
The key point is to note that maximizing \(\log P(z|\theta)\) over \(\theta\) is equivalent to maximizing
\[ \log P (z|\theta)-D(\tilde{P}(y)\|P(y|z,\theta)) \]
jointly over \(\theta\) and \(\tilde{P}(y)\). […]
[…We rewrite this cost function]
\[ H(\tilde{P}) + \sum_y\tilde{P}(y)\log \{P(y|z,\theta)P(z|\theta)\}, \]
where \(H(\tilde{P})=-\sum_y\tilde{P}(y)\log\tilde{P}(y)\) is the entropy of \(\tilde{P}\). This expression is in turn equivalent to
\[ H(\tilde{P}) +\sum_y\tilde{P}(y)\log P(y,z|\theta), \]
which is the same as the function \(F(\tilde{P},\theta)\) given in Neal and Hinton. This function is maximized iteratively, where each iteration consists of two separate maximizations, one over \(\theta\) and another over \(\tilde{P}\).
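A quick numerical way to convince yourself of this decomposition (my own sketch, not part of Coughlan’s note): for any small discrete joint model \(P(y,z|\theta)\), the function \(F(\tilde{P},\theta)=H(\tilde{P})+\sum_y\tilde{P}(y)\log P(y,z|\theta)\) equals \(\log P(z|\theta)-D(\tilde{P}\|P(y|z,\theta))\), and is therefore a lower bound on \(\log P(z|\theta)\) that is tight exactly when \(\tilde{P}\) is the posterior.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny made-up discrete model: 4 latent states y, 3 observable values z.
# p_joint[y, z] = P(y, z | theta) for one fixed (hypothetical) theta.
p_joint = rng.random((4, 3))
p_joint /= p_joint.sum()

z = 2                                   # the observed datum
p_z = p_joint[:, z].sum()               # P(z | theta)
posterior = p_joint[:, z] / p_z         # P(y | z, theta)

def F(p_tilde):
    """Neal and Hinton's F(p_tilde, theta) = H(p_tilde) + E_{p_tilde} log P(y, z | theta)."""
    entropy = -np.sum(p_tilde * np.log(p_tilde))
    return entropy + np.sum(p_tilde * np.log(p_joint[:, z]))

def kl(p, q):
    """Discrete KL divergence D(p || q)."""
    return np.sum(p * np.log(p / q))

# For any distribution p_tilde over y:
# F(p_tilde) = log P(z|theta) - D(p_tilde || posterior) <= log P(z|theta).
p_tilde = rng.dirichlet(np.ones(4))
assert np.isclose(F(p_tilde), np.log(p_z) - kl(p_tilde, posterior))
assert F(p_tilde) <= np.log(p_z) + 1e-12

# Equality holds when p_tilde is the posterior, which is what the E-step sets it to.
assert np.isclose(F(posterior), np.log(p_z))
```

Read this way, the E-step maximises \(F\) over \(\tilde{P}\) by setting it to the posterior and the M-step maximises over \(\theta\), so EM is coordinate ascent on a single objective, which is what makes the Neal and Hinton view attractive.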
Dan Piponi, Expectation-Maximization with Less Arbitrariness
My goal is to fill in the details of one key step in the derivation of the EM algorithm in a way that makes it inevitable rather than arbitrary.