The EM Algorithm


The goal of this post is to help others reach a thorough understanding of what EM is and how it works. Below is an outline of how this page will cover the algorithm.

  1. The EM Algorithm
    1. Maximum Likelihood Estimation (MLE)
    2. Latent Variables
  2. Deriving the Formulas
  3. Computing the Algorithm

The EM Algorithm

EM is short for expectation-maximization. The EM algorithm finds the maximum likelihood estimate (MLE) for the parameters of a statistical model (e.g., \(\mu\) and \(\Sigma\) of a multivariate normal distribution).

Maximum likelihood estimation chooses the parameter values that make the observed data most probable. Given data \(x_1, \dots, x_n\) and a model with parameters \(\theta\), the MLE is \(\hat{\theta} = \arg\max_\theta \prod_{i=1}^{n} p(x_i \mid \theta)\); in practice we maximize the log-likelihood \(\sum_{i=1}^{n} \log p(x_i \mid \theta)\), which has the same maximizer and is easier to work with.
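As a minimal sketch of MLE in action (in Python; the data and parameter values below are invented for illustration): for a normal model the MLEs have closed forms, namely the sample mean and the mean squared deviation.

```python
import math
import random

random.seed(0)
# Simulated data from a normal distribution with known mu = 5, sigma = 2
# (a hypothetical example, so we can check the estimates against the truth)
data = [random.gauss(5.0, 2.0) for _ in range(10_000)]

# For a normal model, the MLEs are closed-form:
# mu_hat = sample mean; sigma2_hat = mean squared deviation (divide by n, not n - 1)
n = len(data)
mu_hat = sum(data) / n
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / n
```

With 10,000 points, `mu_hat` and `sigma2_hat` land close to the true values 5 and 4, which is the behavior MLE promises as the sample grows.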

What makes EM different from the usual MLE procedure is that it involves latent or "hidden" variables, whose structure is specified a priori (before the algorithm is run) by the user. Below is an example of how a latent variable may arise in a dataset; however, it is not always so clear what the value of the latent variable should be.

A latent variable is one that shapes the observed data but is never itself recorded. For instance, in a dataset of adult heights drawn from two subpopulations, the subpopulation each person belongs to is latent: it determines which distribution that person's height was drawn from, yet it does not appear as a column in the data.

Deriving the Formulas
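As a sketch of the standard textbook formulation (not a full derivation): write \(X\) for the observed data, \(Z\) for the latent variables, and \(\theta^{(t)}\) for the current parameter estimate. EM alternates two steps. The E-step builds the expected complete-data log-likelihood under the current posterior over \(Z\):

\[Q(\theta \mid \theta^{(t)}) = \mathbb{E}_{Z \mid X, \theta^{(t)}}\left[\log p(X, Z \mid \theta)\right]\]

The M-step then maximizes it:

\[\theta^{(t+1)} = \arg\max_{\theta} Q(\theta \mid \theta^{(t)})\]

Each iteration is guaranteed not to decrease the observed-data log-likelihood \(\log p(X \mid \theta)\), which is why repeating the two steps converges to a (local) maximum.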

Computing the Algorithm
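A minimal runnable sketch of the two steps, in Python, for a one-dimensional two-component Gaussian mixture (the data, initial guesses, and iteration count are all assumptions made for illustration):

```python
import math
import random

random.seed(2)
# Hypothetical data: a 50/50 mixture of N(0, 1) and N(5, 1), so we can
# check whether EM recovers parameters close to the truth.
data = ([random.gauss(0.0, 1.0) for _ in range(500)] +
        [random.gauss(5.0, 1.0) for _ in range(500)])

def normal_pdf(x, mu, sigma2):
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

# Initial guesses (any reasonable starting point works for this example)
pi1, mu1, mu2, s1, s2 = 0.5, 1.0, 4.0, 1.0, 1.0

for _ in range(50):
    # E-step: responsibility of component 1 for each observation,
    # i.e., the posterior probability of the latent label given x.
    r = []
    for x in data:
        p1 = pi1 * normal_pdf(x, mu1, s1)
        p2 = (1 - pi1) * normal_pdf(x, mu2, s2)
        r.append(p1 / (p1 + p2))

    # M-step: re-estimate parameters as responsibility-weighted MLEs.
    n1 = sum(r)
    n2 = len(data) - n1
    mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
    mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
    s1 = sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1
    s2 = sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2
    pi1 = n1 / len(data)
```

After the loop, `mu1` and `mu2` sit near the true means 0 and 5, and `pi1` near 0.5. Note how the M-step formulas are just the closed-form normal MLEs from earlier, weighted by the E-step responsibilities.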
