In [[statistics]], an '''expectation–maximization''' ('''EM''') '''algorithm''' is an [[iterative method]] for finding [[maximum likelihood]] or [[maximum a posteriori]] (MAP) estimates of [[parameter]]s in [[statistical model]]s, where the model depends on unobserved [[latent variable]]s. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the [[Likelihood function#Log-likelihood|log-likelihood]] evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the ''E'' step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step.
[[File:EM Clustering of Old Faithful data.gif|right|frame|EM clustering of [[Old Faithful]] eruption data. The random initial model (which due to the different scales of the axes appears to be two very flat and wide spheres) is fit to the observed data. In the first iterations, the model changes substantially, but then converges to the two modes of the [[geyser]]. Visualized using [[ELKI]].]]

==History==
The EM algorithm was explained and given its name in a classic 1977 paper by [[Arthur P. Dempster|Arthur Dempster]], [[Nan Laird]], and [[Donald Rubin]].<ref name="Dempster1977">{{cite journal |last1=Dempster |first1=A.P. |authorlink1=Arthur P. Dempster |last2=Laird |first2=N.M. |authorlink2=Nan Laird |last3=Rubin |first3=D.B. |authorlink3=Donald Rubin |title=Maximum Likelihood from Incomplete Data via the EM Algorithm |journal=[[Journal of the Royal Statistical Society, Series B]] |year=1977 |volume=39 |issue=1 |pages=1–38 |jstor=2984875 |mr=0501537}}</ref> They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. In particular, a very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers<ref name="Sundberg1974">{{cite journal |last=Sundberg |first=Rolf |title=Maximum likelihood theory for incomplete data from an exponential family |journal=Scandinavian Journal of Statistics |volume=1 |year=1974 |issue=2 |pages=49–58 |jstor=4615553 |mr=381110}}</ref><ref name="Sundberg1971">Rolf Sundberg. 1971. ''Maximum likelihood theory and applications for distributions generated when observing a function of an exponential family variable''. Dissertation, Institute for Mathematical Statistics, Stockholm University.</ref><ref name="Sundberg1976">{{cite journal |last=Sundberg |first=Rolf |year=1976 |title=An iterative method for solution of the likelihood equations for incomplete data from exponential families |journal=[[Communications in Statistics]] – Simulation and Computation |volume=5 |issue=1 |pages=55–64 |doi=10.1080/03610917608812007 |mr=443190}}</ref> following his collaboration with [[Per Martin-Löf]] and [[Anders Martin-Löf]].<ref>See the acknowledgement by Dempster, Laird and Rubin on pages 3, 5 and 11.</ref><ref>G. Kulldorff. 1961. ''Contributions to the theory of estimation from grouped and partially grouped samples''. Almqvist & Wiksell.</ref><ref name="Martin-Löf1963">Anders Martin-Löf. 1963. "Utvärdering av livslängder i subnanosekundsområdet" ("Evaluation of sub-nanosecond lifetimes"). ("Sundberg formula")</ref><ref name="Martin-Löf1966">[[Per Martin-Löf]]. 1966. ''Statistics from the point of view of statistical mechanics''. Lecture notes, Mathematical Institute, Aarhus University. ("Sundberg formula" credited to Anders Martin-Löf).</ref><ref name="Martin-Löf1970">[[Per Martin-Löf]]. 1970. ''Statistika Modeller (Statistical Models): Anteckningar från seminarier läsåret 1969–1970 (Notes from seminars in the academic year 1969–1970), with the assistance of Rolf Sundberg.'' Stockholm University. ("Sundberg formula")</ref><!-- * Martin-Löf, P. "Exact tests, confidence regions and estimates", with a discussion by [[A. W. F. Edwards]], [[George A. Barnard|G. A. Barnard]], D. A. Sprott, O. Barndorff-Nielsen, [[D. Basu]] and [[Rasch model|G. Rasch]]. ''Proceedings of Conference on Foundational Questions in Statistical Inference'' (Aarhus, 1973), pp. 121–138. Memoirs, No. 1, Dept. Theoret. Statist., Inst. Math., Univ. Aarhus, Aarhus, 1974. --><ref name="Martin-Löf1974a">Martin-Löf, P. The notion of redundancy and its use as a quantitative measure of the deviation between a statistical hypothesis and a set of observational data. With a discussion by F. Abildgård, [[Arthur P. Dempster|A. P. Dempster]], [[D. Basu]], [[D. R. Cox]], [[A. W. F. Edwards]], D. A. Sprott, [[George A. Barnard|G. A. Barnard]], O. Barndorff-Nielsen, J. D. Kalbfleisch and [[Rasch model|G. Rasch]] and a reply by the author. ''Proceedings of Conference on Foundational Questions in Statistical Inference'' (Aarhus, 1973), pp. 1–42. Memoirs, No. 1, Dept. Theoret. Statist., Inst. Math., Univ. Aarhus, Aarhus, 1974.</ref><ref name="Martin-Löf1974b">Martin-Löf, Per. The notion of redundancy and its use as a quantitative measure of the discrepancy between a statistical hypothesis and a set of observational data. ''Scand. J. Statist.'' 1 (1974), no. 1, 3–18.</ref>

The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems. Regardless of earlier inventions, the innovative Dempster–Laird–Rubin paper in the ''Journal of the Royal Statistical Society'' received an enthusiastic discussion at the Royal Statistical Society meeting, with Sundberg calling the paper "brilliant". The Dempster–Laird–Rubin paper established the EM method as an important tool of statistical analysis.

The convergence analysis of the Dempster–Laird–Rubin paper was flawed, and a correct convergence analysis was published by C. F. Jeff Wu in 1983. Wu's proof established the EM method's convergence outside of the [[exponential family]], as claimed by Dempster–Laird–Rubin.<ref>{{cite journal |first=C. F. Jeff |last=Wu |title=On the Convergence Properties of the EM Algorithm |journal=[[Annals of Statistics]] |volume=11 |issue=1 |date=Mar 1983 |pages=95–103 |jstor=2240463 |doi=10.1214/aos/1176346060 |mr=684867}}</ref>

==Introduction==
The EM algorithm is used to find the [[maximum likelihood]] parameters of a [[statistical model]] in cases where the equations cannot be solved directly. Typically these models involve [[latent variable]]s in addition to unknown [[parameters]] and known data observations. That is, either there are [[missing values]] among the data, or the model can be formulated more simply by assuming the existence of additional unobserved data points. (For example, a [[mixture model]] can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component that it belongs to.)

Finding a maximum likelihood solution requires taking the [[derivative]]s of the [[likelihood function]] with respect to all the unknown values — viz. the parameters and the latent variables — and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually not possible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, and substituting one set of equations into the other produces an equation that cannot be solved directly.

The EM algorithm proceeds from the observation that these two sets of equations can be solved numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and keep alternating between the two until the resulting values both converge to fixed points. It is not obvious that this will work at all, but it can be proven that in this particular context it does, and that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a maximum or a [[saddle point]].{{citation needed|date=July 2012}} In general there may be multiple maxima, with no guarantee that the global maximum will be found. Some likelihoods also have [[Mathematical singularity|singularities]] in them, i.e. nonsensical maxima. For example, one of the "solutions" that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component equal to one of the data points.

==Description==
Given a [[statistical model]] consisting of a set <math>\mathbf{X}</math> of observed data, a set of unobserved latent data or [[missing values]] <math>\mathbf{Z}</math>, and a vector of unknown parameters <math>\boldsymbol\theta</math>, along with a [[likelihood function]] <math>L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}|\boldsymbol\theta)</math>, the [[maximum likelihood estimate]] (MLE) of the unknown parameters is determined by the [[marginal likelihood]] of the observed data
:<math>L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}|\boldsymbol\theta) = \sum_{\mathbf{Z}} p(\mathbf{X},\mathbf{Z}|\boldsymbol\theta).</math>
However, this quantity is often intractable (e.g. if <math>\mathbf{Z}</math> is a sequence of events, so that the number of values grows exponentially with the sequence length, making the exact calculation of the sum extremely difficult).

The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying the following two steps:
:'''Expectation step (E step)''': Calculate the [[expected value]] of the [[log likelihood]] function, with respect to the [[conditional probability distribution|conditional distribution]] of <math>\mathbf{Z}</math> given <math>\mathbf{X}</math> under the current estimate of the parameters <math>\boldsymbol\theta^{(t)}</math>:
::<math>Q(\boldsymbol\theta|\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z}|\mathbf{X},\boldsymbol\theta^{(t)}}\left[ \log L (\boldsymbol\theta;\mathbf{X},\mathbf{Z}) \right] \,</math>
:'''Maximization step (M step)''': Find the parameter that maximizes this quantity:
::<math>\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \ Q(\boldsymbol\theta|\boldsymbol\theta^{(t)}) \, </math>
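The alternation of these two steps can be written as a short, model-agnostic loop. The sketch below is only illustrative: <code>e_step</code> and <code>m_step</code> are hypothetical placeholders that a concrete model must supply (the [[#Examples|Examples]] section below works one out for a Gaussian mixture), and the convergence test is a simple change-in-parameters tolerance rather than anything prescribed by the algorithm itself.
<syntaxhighlight lang="python">
import numpy as np

def expectation_maximization(x, theta0, e_step, m_step, tol=1e-6, max_iter=1000):
    """Generic EM loop (illustrative sketch, not a reference implementation).

    e_step(x, theta) -> statistics (e.g. responsibilities) defining Q(. | theta)
    m_step(x, stats) -> new parameter vector maximizing Q
    Both callables are model-specific and assumed to be supplied by the user.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        stats = e_step(x, theta)                    # E step: expectations under p(Z | X, theta^(t))
        new_theta = np.asarray(m_step(x, stats), dtype=float)  # M step: argmax_theta Q(theta | theta^(t))
        if np.max(np.abs(new_theta - theta)) < tol:  # simple convergence check
            return new_theta
        theta = new_theta
    return theta
</syntaxhighlight>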
Note that in typical models to which EM is applied:
#The observed data points <math>\mathbf{X}</math> may be [[discrete random variable|discrete]] (taking values in a finite or countably infinite set) or [[continuous random variable|continuous]] (taking values in an uncountably infinite set). There may in fact be a vector of observations associated with each data point.
#The [[missing values]] (aka [[latent variables]]) <math>\mathbf{Z}</math> are [[discrete random variable|discrete]], drawn from a fixed number of values, and there is one latent variable per observed data point.
#The parameters are continuous, and are of two kinds: Parameters that are associated with all data points, and parameters associated with a particular value of a latent variable (i.e. associated with all data points whose corresponding latent variable has a particular value).
However, it is possible to apply EM to other sorts of models.

The motivation is as follows. If we know the value of the parameters <math>\boldsymbol\theta</math>, we can usually find the value of the latent variables <math>\mathbf{Z}</math> by maximizing the log-likelihood over all possible values of <math>\mathbf{Z}</math>, either simply by iterating over <math>\mathbf{Z}</math> or through an algorithm such as the [[Viterbi algorithm]] for [[hidden Markov model]]s. Conversely, if we know the value of the latent variables <math>\mathbf{Z}</math>, we can find an estimate of the parameters <math>\boldsymbol\theta</math> fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both <math>\boldsymbol\theta</math> and <math>\mathbf{Z}</math> are unknown:
#First, initialize the parameters <math>\boldsymbol\theta</math> to some random values.
#Compute the best value for <math>\mathbf{Z}</math> given these parameter values.
#Then, use the just-computed values of <math>\mathbf{Z}</math> to compute a better estimate for the parameters <math>\boldsymbol\theta</math>. Parameters associated with a particular value of <math>\mathbf{Z}</math> will use only those data points whose associated latent variable has that value.
#Iterate steps 2 and 3 until convergence.
The algorithm as just described monotonically approaches a local minimum of the cost function, and is commonly called ''hard EM''. The [[k-means algorithm|''k''-means algorithm]] is an example of this class of algorithms.
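As a simplified illustration of this hard-assignment scheme, the sketch below clusters points around ''K'' centroids, which for spherical components with fixed equal variances reduces to the familiar ''k''-means procedure; the initialization (picking ''K'' data points as starting centroids) is an arbitrary choice made here for illustration only.
<syntaxhighlight lang="python">
import numpy as np

def hard_em_kmeans(x, k, n_iter=100, seed=0):
    """Hard EM with spherical components of fixed equal variance,
    i.e. the k-means algorithm (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), size=k, replace=False)]  # arbitrary initialization
    labels = np.zeros(len(x), dtype=int)
    for _ in range(n_iter):
        # Hard "E" step: assign each point to its nearest centroid
        dists = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Hard "M" step: re-estimate each centroid from the points assigned to it
        new_centroids = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

# Example usage on synthetic 2-D data:
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
centers, assignments = hard_em_kmeans(data, k=2)
</syntaxhighlight>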
However, one can do somewhat better with ''soft'' assignments: rather than making a hard choice for <math>\mathbf{Z}</math> given the current parameter values and averaging only over the set of data points associated with a particular value of <math>\mathbf{Z}</math>, one determines the probability of each possible value of <math>\mathbf{Z}</math> for each data point, and then uses the probabilities associated with a particular value of <math>\mathbf{Z}</math> to compute a [[weighted average]] over the entire set of data points. The resulting algorithm is commonly called ''soft EM'', and is the type of algorithm normally associated with EM. The counts used to compute these weighted averages are called ''soft counts'' (as opposed to the ''hard counts'' used in a hard-EM-type algorithm such as ''k''-means). The probabilities computed for <math>\mathbf{Z}</math> are [[posterior probabilities]] and are what is computed in the E step. The soft counts used to compute new parameter values are what is computed in the M step.

== Properties ==
Speaking of an expectation (E) step is a bit of a [[misnomer]]. What is calculated in the first step are the fixed, data-dependent parameters of the function ''Q''. Once the parameters of ''Q'' are known, it is fully determined and is maximized in the second (M) step of an EM algorithm.

Although an EM iteration does increase the observed-data (i.e. marginal) likelihood function, there is no guarantee that the sequence converges to a [[maximum likelihood estimator]]. For [[bimodal distribution|multimodal distributions]], this means that an EM algorithm may converge to a [[local maximum]] of the observed-data likelihood function, depending on starting values. There are a variety of heuristic or [[metaheuristic]] approaches for escaping a local maximum, such as [[Random-restart hill climbing|random restart]] (starting with several different random initial estimates ''θ''<sup>(''t'')</sup>), or applying [[simulated annealing]] methods.

EM is particularly useful when the likelihood is an [[exponential family]]: the E step becomes the sum of expectations of [[sufficient statistic]]s, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive [[Closed-form expression|closed form]] updates for each step, using the Sundberg formula (published by Rolf Sundberg using unpublished results of [[Per Martin-Löf]] and [[Anders Martin-Löf]]).<ref name="Sundberg1971"/><ref name="Sundberg1976"/><ref name="Martin-Löf1963"/><ref name="Martin-Löf1966"/><ref name="Martin-Löf1970"/><ref name="Martin-Löf1974a"/><ref name="Martin-Löf1974b"/>

The EM method was modified to compute [[maximum a posteriori]] (MAP) estimates for [[Bayesian inference]] in the original paper by Dempster, Laird, and Rubin.

There are other methods for finding maximum likelihood estimates, such as [[gradient descent]], [[conjugate gradient]] or variations of the [[Gauss–Newton method]]. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.

== Proof of correctness ==
Expectation–maximization works to improve <math>Q(\boldsymbol\theta|\boldsymbol\theta^{(t)})</math> rather than directly improving <math>\log p(\mathbf{X}|\boldsymbol\theta)</math>. Here we show that improvements to the former imply improvements to the latter.<ref name="Little1987">{{cite book |last1=Little |first1=Roderick J.A. |last2=Rubin |first2=Donald B. |author2-link=Donald Rubin |title=Statistical Analysis with Missing Data |series=Wiley Series in Probability and Mathematical Statistics |year=1987 |publisher=John Wiley & Sons |location=New York |isbn=0-471-80254-9 |pages=134–136}}</ref>

For any <math>\mathbf{Z}</math> with non-zero probability <math>p(\mathbf{Z}|\mathbf{X},\boldsymbol\theta)</math>, we can write
::<math>\log p(\mathbf{X}|\boldsymbol\theta) = \log p(\mathbf{X},\mathbf{Z}|\boldsymbol\theta) - \log p(\mathbf{Z}|\mathbf{X},\boldsymbol\theta) \,.</math>
We take the expectation over values of <math>\mathbf{Z}</math> by multiplying both sides by <math>p(\mathbf{Z}|\mathbf{X},\boldsymbol\theta^{(t)})</math> and summing (or integrating) over <math>\mathbf{Z}</math>. The left-hand side is the expectation of a constant, so we get:
::<math>\begin{align}
\log p(\mathbf{X}|\boldsymbol\theta) &= \sum_{\mathbf{Z}} p(\mathbf{Z}|\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{X},\mathbf{Z}|\boldsymbol\theta) - \sum_{\mathbf{Z}} p(\mathbf{Z}|\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{Z}|\mathbf{X},\boldsymbol\theta) \\
&= Q(\boldsymbol\theta|\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta|\boldsymbol\theta^{(t)}) \,,
\end{align}</math>
where <math>H(\boldsymbol\theta|\boldsymbol\theta^{(t)})</math> is defined by the negated sum it is replacing.
This last equation holds for any value of <math>\boldsymbol\theta</math> including <math>\boldsymbol\theta = \boldsymbol\theta^{(t)}</math>,
::<math>\log p(\mathbf{X}|\boldsymbol\theta^{(t)}) = Q(\boldsymbol\theta^{(t)}|\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta^{(t)}|\boldsymbol\theta^{(t)}) \,,</math>
and subtracting this last equation from the previous equation gives
::<math>\log p(\mathbf{X}|\boldsymbol\theta) - \log p(\mathbf{X}|\boldsymbol\theta^{(t)}) = Q(\boldsymbol\theta|\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}|\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta|\boldsymbol\theta^{(t)}) - H(\boldsymbol\theta^{(t)}|\boldsymbol\theta^{(t)}) \,.</math>
However, [[Gibbs' inequality]] tells us that <math>H(\boldsymbol\theta|\boldsymbol\theta^{(t)}) \ge H(\boldsymbol\theta^{(t)}|\boldsymbol\theta^{(t)})</math>, so we can conclude that
::<math>\log p(\mathbf{X}|\boldsymbol\theta) - \log p(\mathbf{X}|\boldsymbol\theta^{(t)}) \ge Q(\boldsymbol\theta|\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}|\boldsymbol\theta^{(t)}) \,.</math>
In words, choosing <math>\boldsymbol\theta</math> to improve <math>Q(\boldsymbol\theta|\boldsymbol\theta^{(t)})</math> beyond <math>Q(\boldsymbol\theta^{(t)}|\boldsymbol\theta^{(t)})</math> will improve <math>\log p(\mathbf{X}|\boldsymbol\theta)</math> beyond <math>\log p(\mathbf{X}|\boldsymbol\theta^{(t)})</math> at least as much.

== Alternative description ==
Under some circumstances, it is convenient to view the EM algorithm as two alternating maximization steps.<ref name="neal1999">{{cite journal |last1=Neal |first1=Radford |last2=Hinton |first2=Geoffrey |authorlink2=Geoffrey Hinton |title=A view of the EM algorithm that justifies incremental, sparse, and other variants |journal=Learning in Graphical Models |editor=[[Michael I. Jordan]] |pages=355–368 |publisher=MIT Press |location=Cambridge, MA |year=1999 |isbn=0-262-60032-3 |url=ftp://ftp.cs.toronto.edu/pub/radford/emk.pdf |accessdate=2009-03-22}}</ref><ref name="hastie2001">{{cite book |last1=Hastie |first1=Trevor |authorlink1=Trevor Hastie |last2=Tibshirani |first2=Robert |authorlink2=Robert Tibshirani |last3=Friedman |first3=Jerome |year=2001 |title=The Elements of Statistical Learning |isbn=0-387-95284-5 |publisher=Springer |location=New York |chapter=8.5 The EM algorithm |pages=236–243}}</ref> Consider the function:
:<math>F(q,\theta) = \operatorname{E}_q [ \log L (\theta ; x,Z) ] + H(q) = -D_{\mathrm{KL}}\big(q \big\| p_{Z|X}(\cdot|x;\theta ) \big) + \log L(\theta;x), </math>
where ''q'' is an arbitrary probability distribution over the unobserved data ''z'', ''p''<sub>''Z''|''X''</sub>(·|''x'';''θ'') is the conditional distribution of the unobserved data given the observed data ''x'', ''H'' is the [[Entropy (information theory)|entropy]] and ''D''<sub>KL</sub> is the [[Kullback–Leibler divergence]].

Then the steps in the EM algorithm may be viewed as:
:'''Expectation step''': Choose ''q'' to maximize ''F'':
::<math> q^{(t)} = \operatorname*{arg\,max}_q \ F(q,\theta^{(t)}) </math>
:'''Maximization step''': Choose ''θ'' to maximize ''F'':
::<math> \theta^{(t+1)} = \operatorname*{arg\,max}_\theta \ F(q^{(t)},\theta) </math>
Since the Kullback–Leibler divergence is non-negative and is zero only when its two arguments coincide, the maximizing choice in the E step is ''q''<sup>(''t'')</sup> = ''p''<sub>''Z''|''X''</sub>(·|''x'';''θ''<sup>(''t'')</sup>), at which point ''F''(''q''<sup>(''t'')</sup>,''θ''<sup>(''t'')</sup>) = log ''L''(''θ''<sup>(''t'')</sup>;''x''); the subsequent M step over ''θ'' then coincides with the usual maximization of ''Q''.
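The decomposition of ''F'' can be checked numerically on a toy example. The sketch below uses a hypothetical one-dimensional two-component Gaussian mixture with made-up parameter values and verifies, for a single observation, that ''F''(''q'',''θ'') computed as expected complete-data log-likelihood plus entropy equals −''D''<sub>KL</sub> plus the marginal log-likelihood, and that choosing ''q'' equal to the posterior attains the marginal log-likelihood.
<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

# Hypothetical mixture parameters and a single observation x (illustrative values only)
tau = np.array([0.4, 0.6])          # mixing weights
mu = np.array([-1.0, 2.0])          # component means (unit variances assumed)
x = 0.5

joint = tau * norm.pdf(x, loc=mu, scale=1.0)   # p(x, z) for z = 1, 2
marginal = joint.sum()                         # p(x) = sum_z p(x, z)
posterior = joint / marginal                   # p(z | x)

def F(q):
    """F(q, theta) = E_q[log p(x, Z | theta)] + H(q)."""
    return np.sum(q * np.log(joint)) - np.sum(q * np.log(q))

def kl(q, p):
    return np.sum(q * np.log(q / p))

q_arbitrary = np.array([0.5, 0.5])
# Both identities from the text hold up to floating-point error:
assert np.isclose(F(q_arbitrary), -kl(q_arbitrary, posterior) + np.log(marginal))
assert np.isclose(F(posterior), np.log(marginal))   # the E-step choice attains the log-likelihood
</syntaxhighlight>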
== Applications ==
EM is frequently used for [[data clustering]] in [[machine learning]] and [[computer vision]]. In [[natural language processing]], two prominent instances of the algorithm are the [[Baum–Welch algorithm]] (also known as ''forward–backward'') and the [[inside-outside algorithm]] for unsupervised induction of [[probabilistic context-free grammar]]s.

In [[psychometrics]], EM is almost indispensable for estimating item parameters and latent abilities of [[item response theory]] models.

With its ability to deal with missing data and observe unidentified variables, EM is becoming a useful tool to price and manage the risk of a portfolio.

The EM algorithm (and its faster variant [[ordered subset expectation maximization]]) is also widely used in [[medical imaging|medical image]] reconstruction, especially in [[positron emission tomography]] and [[single photon emission computed tomography]]. See below for other faster variants of EM.

== Filtering and smoothing EM algorithms ==
A [[Kalman filter]] is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems.

Filtering and smoothing EM algorithms arise by repeating the following two-step procedure.

;E-step
: Operate a Kalman filter or a minimum-variance smoother designed with current parameter estimates to obtain updated state estimates.

;M-step
: Use the filtered or smoothed state estimates within maximum-likelihood calculations to obtain updated parameter estimates.

Suppose that a [[Kalman filter]] or minimum-variance smoother operates on noisy measurements of a single-input single-output system. An updated measurement noise variance estimate can be obtained from the [[maximum likelihood]] calculation
:<math>\hat{\sigma}^{2}_v = \frac{1}{N} \sum_{k=1}^N {(z_k-\hat{x}_{k})}^{2},</math>
where <math>\hat{x}_k</math> are scalar output estimates calculated by a filter or a smoother from ''N'' scalar measurements <math>z_k</math>. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by
:<math>\hat{\sigma}^{2}_w = \frac{1}{N} \sum_{k=1}^N {(\hat{x}_{k+1}-\hat{F}\hat{x}_{k})}^{2},</math>
where <math>\hat{x}_k</math> and <math>\hat{x}_{k+1}</math> are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via
:<math>\hat{F} = \frac{\sum_{k=1}^N (\hat{x}_{k+1}-\hat{F} \hat{x}_k)}{\sum_{k=1}^N \hat{x}_k^{2}}.</math>
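Given a sequence of measurements and the corresponding filtered or smoothed state estimates, the two noise variance updates above are simple averages of squared residuals. The sketch below computes them exactly as displayed; the state estimates <code>x_hat</code> are assumed to come from a Kalman filter or smoother run beforehand, and the model coefficient <code>F_hat</code> is taken as given rather than re-estimated here.
<syntaxhighlight lang="python">
import numpy as np

def noise_variance_updates(z, x_hat, F_hat):
    """M-step noise variance updates for a scalar state-space model (illustrative sketch).

    z     : measurements z_1..z_N
    x_hat : filtered/smoothed state estimates x_hat_1..x_hat_{N+1}
            (one extra element so that x_hat_{k+1} is available)
    F_hat : current estimate of the state-transition coefficient (assumed given)
    """
    z = np.asarray(z, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    N = len(z)
    # Measurement noise variance: mean squared residual (z_k - x_hat_k)
    sigma_v2 = np.mean((z - x_hat[:N]) ** 2)
    # Process noise variance: mean squared one-step prediction residual
    sigma_w2 = np.mean((x_hat[1:N + 1] - F_hat * x_hat[:N]) ** 2)
    return sigma_v2, sigma_w2
</syntaxhighlight>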
The convergence of parameter estimates such as those above has been studied.<ref>{{Cite journal |last1=Einicke |first1=G.A. |last2=Malos |first2=J.T. |last3=Reid |first3=D.C. |last4=Hainsworth |first4=D.W. |title=Riccati Equation and EM Algorithm Convergence for Inertial Navigation Alignment |journal=IEEE Trans. Signal Processing |volume=57 |issue=1 |pages=370–375 |date=January 2009 |doi=10.1109/TSP.2008.2007090}}</ref><ref>{{Cite journal |last1=Einicke |first1=G.A. |last2=Falco |first2=G. |last3=Malos |first3=J.T. |title=EM Algorithm State Matrix Estimation for Navigation |journal=IEEE Signal Processing Letters |volume=17 |issue=5 |pages=437–440 |date=May 2010 |doi=10.1109/LSP.2010.2043151 |bibcode=2010ISPL...17..437E}}</ref><ref>{{Cite journal |last1=Einicke |first1=G.A. |last2=Falco |first2=G. |last3=Dunn |first3=M.T. |last4=Reid |first4=D.C. |title=Iterative Smoother-Based Variance Estimation |journal=IEEE Signal Processing Letters |volume=19 |issue=5 |pages=275–278 |date=May 2012 |bibcode=2012ISPL...19..275E |doi=10.1109/LSP.2012.2190278}}</ref>

== Variants ==
A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those using [[conjugate gradient]] and modified [[Newton–Raphson]] techniques.<ref>{{cite journal |first1=Mortaza |last1=Jamshidian |first2=Robert I. |last2=Jennrich |title=Acceleration of the EM Algorithm by using Quasi-Newton Methods |year=1997 |journal=[[Journal of the Royal Statistical Society, Series B]] |volume=59 |issue=2 |pages=569–587 |doi=10.1111/1467-9868.00083 |mr=1452026}}</ref> Additionally, EM can be used with constrained estimation techniques.

'''Expectation conditional maximization (ECM)''' replaces each M step with a sequence of conditional maximization (CM) steps in which each parameter ''θ''<sub>''i''</sub> is maximized individually, conditionally on the other parameters remaining fixed.<ref>{{cite journal |last1=Meng |first1=Xiao-Li |last2=Rubin |first2=Donald B. |authorlink2=Donald Rubin |title=Maximum likelihood estimation via the ECM algorithm: A general framework |year=1993 |journal=[[Biometrika]] |volume=80 |issue=2 |pages=267–278 |doi=10.1093/biomet/80.2.267 |mr=1243503}}</ref>

This idea is further extended in the '''generalized expectation maximization (GEM)''' algorithm, in which one seeks only an increase in the objective function ''F'' for both the E step and the M step under the [[#Alternative description|alternative description]].<ref name="neal1999"/>

It is also possible to consider the EM algorithm as a subclass of the '''MM''' (Majorize/Minimize or Minorize/Maximize, depending on context) algorithm,<ref>Hunter DR and Lange K (2004), [http://www.stat.psu.edu/~dhunter/papers/mmtutorial.pdf A Tutorial on MM Algorithms], The American Statistician, 58: 30–37</ref> and therefore use any machinery developed in the more general case.

===α-EM algorithm===
The Q-function used in the EM algorithm is based on the log likelihood. Therefore, it is regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be exactly expressed as an equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step. Its maximization is a generalized M step. This pair is called the α-EM algorithm,<ref>{{cite journal |last=Matsuyama |first=Yasuo |title=The α-EM algorithm: Surrogate likelihood maximization using α-logarithmic information measures |journal=IEEE Transactions on Information Theory |volume=49 |year=2003 |pages=692–706 |issue=3 |doi=10.1109/TIT.2002.808105}}</ref> which contains the log-EM algorithm as its subclass. Thus, the α-EM algorithm by [[Yasuo Matsuyama]] is an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the hidden Markov model estimation algorithm, the α-HMM.<ref>{{cite journal |last=Matsuyama |first=Yasuo |title=Hidden Markov model estimation based on alpha-EM algorithm: Discrete and continuous alpha-HMMs |journal=International Joint Conference on Neural Networks |year=2011 |pages=808–816}}</ref>

== Relation to variational Bayes methods ==
EM is a partially non-Bayesian, maximum likelihood method. Its final result gives a [[probability distribution]] over the latent variables (in the Bayesian style) together with a point estimate for ''θ'' (either a [[maximum likelihood estimation|maximum likelihood estimate]] or a posterior mode). We may want a fully Bayesian version of this, giving a probability distribution over ''θ'' as well as the latent variables. In fact the Bayesian approach to inference is simply to treat ''θ'' as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If we use the factorized Q approximation as described above ([[variational Bayes]]), we may iterate over each latent variable (now including ''θ'') and optimize them one at a time. There are now ''k'' steps per iteration, where ''k'' is the number of latent variables. For [[graphical models]] this is easy to do as each variable's new ''Q'' depends only on its [[Markov blanket]], so local [[message passing]] can be used for efficient inference.

== Geometric interpretation ==
{{details|Information geometry}}
In [[information geometry]], the E step and the M step are interpreted as projections under dual [[affine connection]]s, called the e-connection and the m-connection; the [[Kullback–Leibler divergence]] can also be understood in these terms.

== Examples ==
=== Gaussian mixture === <!--This section is linked from [[Matrix calculus]] -->
[[File:Em old faithful.gif|thumb|240px|An animation demonstrating the EM algorithm fitting a two-component Gaussian [[mixture model]] to the [[Old Faithful]] dataset. The algorithm steps through from a random initialization to convergence.]]

Let '''x''' = ('''x'''<sub>1</sub>,'''x'''<sub>2</sub>,…,'''x'''<sub>''n''</sub>) be a sample of ''n'' independent observations from a [[mixture model|mixture]] of two [[multivariate normal distribution]]s of dimension ''d'', and let '''z''' = (''z''<sub>1</sub>,''z''<sub>2</sub>,…,''z''<sub>''n''</sub>) be the latent variables that determine the component from which the observation originates.<ref name="hastie2001"/>
:<math>X_i |(Z_i = 1) \sim \mathcal{N}_d(\boldsymbol{\mu}_1,\sigma_1)</math> and <math>X_i |(Z_i = 2) \sim \mathcal{N}_d(\boldsymbol{\mu}_2,\sigma_2),</math>
where
:<math>\operatorname{P} (Z_i = 1 ) = \tau_1 \, </math> and <math>\operatorname{P} (Z_i=2) = \tau_2 = 1-\tau_1.</math>

The aim is to estimate the unknown parameters representing the "mixing" value between the Gaussians and the means and covariances of each:
:<math>\theta = \big( \boldsymbol{\tau},\boldsymbol{\mu}_1,\boldsymbol{\mu}_2,\sigma_1,\sigma_2 \big),</math>
where the likelihood function is:
:<math>L(\theta;\mathbf{x},\mathbf{z}) = P(\mathbf{x},\mathbf{z} \vert \theta) = \prod_{i=1}^n \sum_{j=1}^2 \mathbb{I}(z_i=j) \ \tau_j \ f(\mathbf{x}_i;\boldsymbol{\mu}_j,\sigma_j), </math>
where <math>\mathbb{I}</math> is an [[indicator function]] and ''f'' is the [[probability density function]] of a multivariate normal. This may be rewritten in [[exponential family]] form:
:<math>L(\theta;\mathbf{x},\mathbf{z}) = \exp \left\{ \sum_{i=1}^n \sum_{j=1}^2 \mathbb{I}(z_i=j) \big[ \log \tau_j -\tfrac{1}{2} \log |\sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_j)^\top\sigma_j^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_j) -\tfrac{d}{2} \log(2\pi) \big] \right\}. </math>
To see the last equality, note that for each ''i'' all indicators <math>\mathbb{I}(z_i=j)</math> are equal to zero, except for one which is equal to one. The inner sum thus reduces to a single term.

==== E step ====
Given our current estimate of the parameters ''θ''<sup>(''t'')</sup>, the conditional distribution of the ''Z''<sub>''i''</sub> is determined by [[Bayes' theorem]] to be the proportional height of the normal [[probability density function|density]] weighted by ''τ'':
:<math>T_{j,i}^{(t)} := \operatorname{P}(Z_i=j | X_i=\mathbf{x}_i ;\theta^{(t)}) = \frac{\tau_j^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_j^{(t)},\sigma_j^{(t)})}{\tau_1^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_1^{(t)},\sigma_1^{(t)}) + \tau_2^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_2^{(t)},\sigma_2^{(t)})}. </math>

Thus, the E step results in the function:
:<math>\begin{align}Q(\theta|\theta^{(t)})
&= \operatorname{E} [\log L(\theta;\mathbf{x},\mathbf{Z}) ] \\
&= \operatorname{E} \left[\log \prod_{i=1}^{n}L(\theta;\mathbf{x}_i,Z_i) \right] \\
&= \operatorname{E} \left[\sum_{i=1}^n \log L(\theta;\mathbf{x}_i,Z_i) \right] \\
&= \sum_{i=1}^n\operatorname{E} [\log L(\theta;\mathbf{x}_i,Z_i) ] \\
&= \sum_{i=1}^n \sum_{j=1}^2 T_{j,i}^{(t)} \big[ \log \tau_j -\tfrac{1}{2} \log |\sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_j)^\top\sigma_j^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_j) -\tfrac{d}{2} \log(2\pi) \big]
\end{align}</math>

==== M step ====
The quadratic form of ''Q''(''θ''|''θ''<sup>(''t'')</sup>) means that determining the maximizing values of ''θ'' is relatively straightforward. Note that ''τ'', ('''μ'''<sub>1</sub>,''σ''<sub>1</sub>) and ('''μ'''<sub>2</sub>,''σ''<sub>2</sub>) may all be maximized independently of each other since they all appear in separate linear terms.

To begin, consider ''τ'', which has the constraint ''τ''<sub>1</sub> + ''τ''<sub>2</sub> = 1:
:<math>\begin{align}\boldsymbol{\tau}^{(t+1)}
&= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}}\ Q(\theta | \theta^{(t)} ) \\
&= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}} \ \left\{ \left[ \sum_{i=1}^n T_{1,i}^{(t)} \right] \log \tau_1 + \left[ \sum_{i=1}^n T_{2,i}^{(t)} \right] \log \tau_2 \right\}.
\end{align}</math>
This has the same form as the MLE for the [[binomial distribution]], so:
:<math>\tau^{(t+1)}_j = \frac{\sum_{i=1}^n T_{j,i}^{(t)}}{\sum_{i=1}^n (T_{1,i}^{(t)} + T_{2,i}^{(t)} ) } = \frac{1}{n} \sum_{i=1}^n T_{j,i}^{(t)}.</math>

For the next estimates of ('''μ'''<sub>1</sub>,''σ''<sub>1</sub>):
:<math>\begin{align}(\boldsymbol{\mu}_1^{(t+1)},\sigma_1^{(t+1)})
&= \underset{\boldsymbol{\mu}_1,\sigma_1} {\operatorname{arg\,max}}\ Q(\theta | \theta^{(t)} ) \\
&= \underset{\boldsymbol{\mu}_1,\sigma_1} {\operatorname{arg\,max}}\ \sum_{i=1}^n T_{1,i}^{(t)} \left\{ -\tfrac{1}{2} \log |\sigma_1| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_1)^\top\sigma_1^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_1) \right\}.
\end{align}</math>
This has the same form as a weighted MLE for a normal distribution, so
:<math>\boldsymbol{\mu}_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{1,i}^{(t)}} </math> and <math>\sigma_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)}) (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)})^\top }{\sum_{i=1}^n T_{1,i}^{(t)}} </math>
and, by symmetry:
:<math>\boldsymbol{\mu}_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{2,i}^{(t)}} </math> and <math>\sigma_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} (\mathbf{x}_i - \boldsymbol{\mu}_2^{(t+1)}) (\mathbf{x}_i - \boldsymbol{\mu}_2^{(t+1)})^\top }{\sum_{i=1}^n T_{2,i}^{(t)}}. </math>

==== Termination ====
Break the iteration if <math>\log L(\theta^{(t)};\mathbf{x},\mathbf{Z})</math> and <math>\log L(\theta^{(t-1)};\mathbf{x},\mathbf{Z})</math> are close enough (below some preset threshold).

==== Generalization ====
The algorithm illustrated above can be generalized to a mixture of more than two [[multivariate normal distribution]]s.
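Putting the E step, M step and termination test together gives a complete procedure. The following sketch implements the updates derived above for a mixture of ''K'' multivariate normals using NumPy and SciPy; the random initialization, the convergence tolerance and the small ridge term added to keep the covariance estimates invertible are practical illustrative choices, not part of the derivation.
<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import multivariate_normal

def em_gaussian_mixture(x, k, n_iter=500, tol=1e-8, seed=0):
    """EM for a mixture of K d-dimensional Gaussians (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    # Arbitrary initialization: equal weights, random data points as means
    tau = np.full(k, 1.0 / k)                               # mixing weights
    mu = x[rng.choice(n, size=k, replace=False)]            # component means
    sigma = np.array([np.cov(x.T) + 1e-6 * np.eye(d)] * k)  # component covariances
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E step: responsibilities T[i, j] = P(Z_i = j | x_i, theta^(t))
        dens = np.column_stack([
            tau[j] * multivariate_normal.pdf(x, mean=mu[j], cov=sigma[j])
            for j in range(k)
        ])
        ll = np.sum(np.log(dens.sum(axis=1)))               # observed-data log-likelihood
        T = dens / dens.sum(axis=1, keepdims=True)
        # M step: weighted-average updates for tau, mu and sigma
        Nj = T.sum(axis=0)
        tau = Nj / n
        mu = (T.T @ x) / Nj[:, None]
        sigma = np.empty((k, d, d))
        for j in range(k):
            diff = x - mu[j]
            sigma[j] = (T[:, j, None] * diff).T @ diff / Nj[j] + 1e-6 * np.eye(d)
        # Termination: stop when the log-likelihood no longer improves appreciably
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return tau, mu, sigma

# Example usage on synthetic two-component data:
rng = np.random.default_rng(42)
data = np.vstack([rng.normal([-2, 0], 1.0, (150, 2)), rng.normal([3, 4], 0.5, (150, 2))])
weights, means, covs = em_gaussian_mixture(data, k=2)
</syntaxhighlight>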
===Truncated and censored regression===
The EM algorithm has been implemented in the case where there is an underlying [[linear regression]] model explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model.<ref name=Wolynetz>{{cite journal |title=Maximum likelihood estimation in a linear model from confined and censored normal data |last1=Wolynetz |first1=M.S. |journal=[[Journal of the Royal Statistical Society, Series C]] |year=1979 |volume=28 |issue=2 |pages=195–206}}</ref> Special cases of this model include censored or truncated observations from a single [[normal distribution]].<ref name=Wolynetz/>

== Further reading ==
* Robert Hogg, Joseph McKean and [[Allen Craig]]. ''Introduction to Mathematical Statistics''. pp. 359–364. Upper Saddle River, NJ: Pearson Prentice Hall, 2005.
* [http://www.inference.phy.cam.ac.uk/mackay/itila/ The on-line textbook: Information Theory, Inference, and Learning Algorithms], by [[David J.C. MacKay]] includes simple examples of the EM algorithm such as clustering using the soft ''k''-means algorithm, and emphasizes the variational view of the EM algorithm, as described in Chapter 33.7 of version 7.2 (fourth edition).
* {{cite paper |id={{citeseerx|10.1.1.9.9735}} |title=The Expectation Maximization Algorithm |first=Frank |last=Dellaert |authorlink=Frank Dellaert |postscript=,}} gives an easier explanation of the EM algorithm in terms of lower-bound maximization.
* {{cite book |last1=Bishop |first1=Christopher M. |authorlink=Christopher Bishop |title=Pattern Recognition and Machine Learning |year=2006 |publisher=Springer |ref=CITEREFBishop2006 |isbn=0-387-31073-8}}
* {{cite book |id={{doi|10.1561/2000000034}} |title=Theory and Use of the EM Algorithm |author=M. R. Gupta and Y. Chen |year=2010}} A well-written short book on EM, including detailed derivation of EM for GMMs, HMMs, and Dirichlet.
* {{cite paper |id={{citeseerx|10.1.1.28.613}} |title=A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models |first=Jeff |last=Bilmes |postscript=,}} includes a simplified derivation of the EM equations for Gaussian Mixtures and Gaussian Mixture Hidden Markov Models.
* [http://www.cse.buffalo.edu/faculty/mbeal/papers/beal03.pdf Variational Algorithms for Approximate Bayesian Inference], by M. J. Beal includes comparisons of EM to Variational Bayesian EM and derivations of several models including Variational Bayesian HMMs ([http://www.cse.buffalo.edu/faculty/mbeal/thesis/index.html chapters]).
* [http://www.seanborman.com/publications/EM_algorithm.pdf The Expectation Maximization Algorithm: A short tutorial], a self-contained derivation of the EM algorithm by Sean Borman.
* [http://pages.cs.wisc.edu/~jerryzhu/cs838/EM.pdf The EM Algorithm], by Xiaojin Zhu.
* [http://arxiv.org/pdf/1105.1476.pdf EM algorithm and variants: an informal tutorial] by Alexis Roche. A concise and very clear description of EM and many interesting variants.
* {{Cite book |author=Einicke, G.A. |year=2012 |title=Smoothing, Filtering and Prediction: Estimating the Past, Present and Future |publisher=Intech |location=Rijeka, Croatia |isbn=978-953-307-752-9 |url=http://www.intechopen.com/books/smoothing-filtering-and-prediction-estimating-the-past-present-and-future}}

==References==
{{reflist}}

== External links ==
* Various 1D, 2D and 3D [http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_2D_PointSegmentation_EM_Mixture demonstrations of EM together with Mixture Modeling] are provided as part of the paired [[SOCR]] activities and applets. These applets and activities show empirically the properties of the EM algorithm for parameter estimation in diverse settings.
* [https://github.com/l-/CommonDataAnalysis Class hierarchy in C++ (GPL) including Gaussian Mixtures]
* Fast and clean C implementation of the [https://github.com/juandavm/em4gmm Expectation Maximization] (EM) algorithm for estimating [https://github.com/juandavm/em4gmm Gaussian Mixture Models] (GMMs).

{{DEFAULTSORT:Expectation-maximization Algorithm}}
[[Category:Estimation theory]]
[[Category:Machine learning algorithms]]
[[Category:Missing data]]
[[Category:Statistical algorithms]]
[[Category:Optimization algorithms and methods]]
[[Category:Data clustering algorithms]]