{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](http://pic1.tsingdataedu.com/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E5%B7%A5%E7%A8%8B%E5%B8%88banner.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The EM Algorithm\n",
    "\n",
    "##### by 加号, 网易云课堂 X 稀牛学院 Machine Learning Engineer supplementary material\n",
    "\n",
    "In many situations where the data themselves contain missing values, we use EM to optimize an objective function.\n",
    "\n",
    "Notation:\n",
    "\n",
    "$Y_o$: observed data\n",
    "\n",
    "$Y_m$: missing data\n",
    "\n",
    "$Y_c = (Y_o, Y_m)$: complete data\n",
    "\n",
    "$f(Y_c|\\theta)$: distribution of the complete data\n",
    "\n",
    "$g(Y_o | \\theta) = \\int f(Y_c|\\theta)\\, dY_m$: distribution of the observed data\n",
    "\n",
    "$k(Y_m | Y_o, \\theta) = \\frac{f(Y_c|\\theta)}{\\int f(Y_c|\\theta) dY_m} =\\frac{f(Y_c|\\theta)}{g(Y_o |\\theta)}$: the distribution of the missing data given the observed data. Naturally, integrating it over the missing data gives 1: $\\int k(Y_m|Y_o,\\theta)\\, dY_m = 1$.\n",
    "\n",
    "In machine learning, the goal is to maximize a log-likelihood even though some data are missing (in other words, to find a $\\theta$ that maximizes the log-likelihood):\n",
    "$$l(\\theta) = \\log g(Y_o| \\theta)$$\n",
    "\n",
    "With missing data this maximization is hard to carry out directly, so in practice we work with another quantity, $Q(\\theta | \\theta^{p})$. It is built from $\\theta^{p}$, the value of $\\theta$ after the $p$-th iteration, and it must have two properties:\n",
    "\n",
    "+ A $\\theta$ that maximizes $Q(\\theta | \\theta^{p})$ must, as the iterations proceed ($p$ can grow without bound), also maximize $l(\\theta)$.\n",
    "+ Given $\\theta^{p}$, maximizing $Q(\\theta | \\theta^{p})$ must be easier than maximizing $l(\\theta)$ directly. (Otherwise there would be no point.)\n",
    "\n",
    "Next we construct a $Q(\\theta | \\theta^{p})$ that meets these requirements.\n",
    "\n",
    "Using the distribution of the missing data given the observed data, $k(Y_m | Y_o, \\theta) =\\frac{f(Y_c|\\theta)}{g(Y_o |\\theta)}$, we obtain the following. (Each step is straightforward if you work through it: in the second line both sides are multiplied by $k(Y_m|Y_o,\\theta')$, where $\\theta'$ is the current iterate; the step from the third to the fourth line uses $\\int k(Y_m|Y_o,\\theta')\\, dY_m = 1$.)\n",
    "\n",
    "$$\\begin{aligned}[t] \\log f(Y_c|\\theta) &= \\log g(Y_o|\\theta) + \\log k(Y_m|Y_o,\\theta) \\\\ \\Leftrightarrow \\log f(Y_c|\\theta)\\, k(Y_m|Y_o,\\theta') &= \\log g(Y_o|\\theta)\\, k(Y_m|Y_o,\\theta') + \\log k(Y_m|Y_o,\\theta)\\, k(Y_m|Y_o,\\theta') \\\\ \\Leftrightarrow \\int \\log f(Y_c|\\theta)\\, k(Y_m|Y_o,\\theta')\\, dY_m &= \\int \\log g(Y_o|\\theta)\\, k(Y_m|Y_o,\\theta')\\, dY_m + \\int \\log k(Y_m|Y_o,\\theta)\\, k(Y_m|Y_o,\\theta')\\, dY_m \\\\ \\Leftrightarrow E_{Y_m} [\\log f(Y_c|\\theta) \\mid Y_o, \\theta'] &= \\log g(Y_o|\\theta) + E_{Y_m} [\\log k(Y_m|Y_o,\\theta) \\mid Y_o, \\theta'] \\\\ \\Leftrightarrow Q(\\theta| \\theta') &= l(\\theta) + H(\\theta | \\theta') \\end{aligned}$$\n",
    "\n",
    "Here the quantity we care about is $Q(\\theta | \\theta')$, not $H(\\theta|\\theta')$.\n",
    "\n",
    "By Jensen's inequality (beyond the scope of this course; feel free to look it up), $H(\\theta|\\theta') \\leq H(\\theta'|\\theta')$ for every $\\theta$, so any $\\theta$ that increases $Q(\\theta|\\theta')$ also increases $l(\\theta)$: maximizing $Q(\\theta|\\theta')$ comes down to exactly the same thing as maximizing $l(\\theta)$.\n",
    "\n",
    "So what does $Q(\\theta | \\theta^{p})$ look like? (Below we write $\\theta'$ for the current iterate $\\theta^{p}$.)\n",
    "\n",
    "$$Q(\\theta | \\theta^{p}) =\\int \\log f(Y_c|\\theta)\\, k(Y_m|Y_o,\\theta')\\, dY_m = E_{Y_m} [\\log f(Y_c|\\theta) \\mid Y_o, \\theta']$$\n",
    "\n",
    "Let's unpack all the ingredients:\n",
    "\n",
    "+ $Y_o$ is a known quantity: it is the observed data, i.e., data we already have in hand.\n",
    "\n",
    "+ $\\theta'$ is also known. In the EM framework we first give $\\theta'$ an arbitrary value and then iterate step by step toward a stable optimum. (This is the machine-learning mindset.)\n",
    "\n",
    "+ $k(Y_m|Y_o,\\theta')$ has come up many times above: it is the distribution of the missing data given the observed data, i.e., a conditional probability. It is the key to the whole EM problem; we will use it to integrate out the $Y_m$ in $\\int \\log f(Y_c|\\theta)\\, k(Y_m|Y_o,\\theta')\\, dY_m$.\n",
    "\n",
    "+ $\\log f(Y_c|\\theta)$ is the complete-data log-likelihood for a given $\\theta$.\n",
    "\n",
    "+ $Q(\\theta | \\theta^{p})$, the star of the show, is what remains of $\\log f(Y_c|\\theta)$ once $Y_m$ has been integrated out."
   ]
  },
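  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The iteration described above can be sketched as a generic loop. This is a minimal sketch, not the implementation used later in this notebook; `e_step` and `m_step` are hypothetical placeholders that a concrete model must supply (they compute the expectations behind $Q(\\theta|\\theta^p)$ and its maximizer, respectively):\n",
    "\n",
    "```python\n",
    "def em(theta, e_step, m_step, tol=1e-6, max_iter=100):\n",
    "    # e_step(theta) returns the expected sufficient statistics under theta'\n",
    "    # m_step(stats) returns the theta that maximizes Q(theta | theta')\n",
    "    for p in range(max_iter):\n",
    "        stats = e_step(theta)               # E-step\n",
    "        theta_new = m_step(stats)           # M-step\n",
    "        if abs(theta_new - theta) < tol:    # stop once theta stabilizes\n",
    "            break\n",
    "        theta = theta_new\n",
    "    return theta\n",
    "```\n",
    "\n",
    "Any concrete EM algorithm, including the coin example below, is this loop with model-specific `e_step` and `m_step` plugged in."
   ]
  },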
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## A Worked Example\n",
    "\n",
    "Now let's walk through a concrete example to see how to actually operate this pile of formulas.\n",
    "\n",
    "You have two coins, A and B. Your eyesight is poor, so you cannot tell which is which, and you just toss away. As physics would have it, coin A and coin B each have their own probability of landing heads, which we denote $\\theta_A$ and $\\theta_B$.\n",
    "\n",
    "You pick a coin at random and toss it 10 times. You repeat this 5 times, for a total of 50 tosses.\n",
    "\n",
    "Your observed data are the heads/tails records of those 50 tosses; this part is known. What is unknown is which coin, A or B, you tossed in each round.\n",
    "\n",
    "Expressed in code, the observations are:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[5, 5],\n",
       "       [9, 1],\n",
       "       [8, 2],\n",
       "       [4, 6],\n",
       "       [7, 3]])"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import numpy as np\n",
    "xs = np.array([(5,5), (9,1), (8,2), (4,6), (7,3)])\n",
    "xs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Those are the known observations.\n",
    "\n",
    "### Setting Up the Expressions\n",
    "\n",
    "Our **unknown data** are:\n",
    "\n",
    "$Y_m = (c_1,c_2,c_3,c_4,c_5)$ where $c_i = [c_{i1}, c_{i2}]\\in \\{ [1,0], [0,1]\\}$\n",
    "\n",
    "This encodes, for each round, whether the coin we picked up was A or B.\n",
    "\n",
    "For example, if on round $i=3$ we picked up B, then $c_{i} = c_{3} = [0,1]$;\n",
    "\n",
    "if on round $i=2$ we picked up A, then $c_{i} = c_{2} = [1,0]$.\n",
    "\n",
    "Going back to the notation from the beginning, the **known data** are:\n",
    "\n",
    "$Y_o = (y_{o1}, y_{o2}, y_{o3}, y_{o4}, y_{o5})$ where $y_{oi} = $ the number of heads observed on round $i$, after picking a coin and tossing it.\n",
    "\n",
    "So the data from our example, written out, are:\n",
    "\n",
    "$Y_o = (5,9,8,4,7)$\n",
    "\n",
    "We also know that any coin lands either heads or tails (ignoring the case where it lands on its edge in a crack). So each round of $n=10$ tosses follows a binomial distribution (each individual toss being a Bernoulli trial), and we have:\n",
    "\n",
    "$$y_{oi}|A \\sim \\mathrm{binom}(n, \\theta_A), \\quad p(y_{oi}|A) \\propto \\theta_A^{y_{oi}} (1-\\theta_A)^{n-y_{oi}} $$$$y_{oi}|B \\sim \\mathrm{binom}(n, \\theta_B)$$\n",
    "\n",
    "(The binomial coefficient $\\binom{n}{y_{oi}}$ does not depend on $\\theta$, so we drop it throughout.)\n",
    "\n",
    "The complete data are the observed and missing data taken together:\n",
    "\n",
    "$Y_c = (Y_m, Y_o)$.\n",
    "\n",
    "And what we ultimately want to find is\n",
    "\n",
    "$\\theta = (\\theta_A, \\theta_B)$ \n",
    "\n",
    "On top of all this, we also need the coin-picking probability (the probability that we grabbed A rather than B), which we write as\n",
    "\n",
    "$p(\\text{pick } A) = \\phi$ and $p(\\text{pick } B)= 1-\\phi$. (In this example we assume picking A vs. B is a 50/50 coin flip of its own, i.e., $\\phi = 0.5$.)\n",
    "\n",
    "With all of this, we can conclude that $f(Y_c|\\theta)$, for given $(\\theta_A, \\theta_B)$, is a product of binomial terms.\n",
    "\n",
    "### The Joint Distribution, Written Out\n",
    "\n",
    "Now let's connect all the expressions in this example to the joint distributions discussed at the beginning:\n",
    "\n",
    "$$\\begin{aligned}[t] p(y_o,y_m| \\theta) &= f(Y_c|\\theta)\\\\ &= p( [y_{o1},y_{o2},y_{o3},y_{o4},y_{o5}], [c_{1},c_{2},c_{3},c_{4},c_{5}] \\mid (\\theta_A, \\theta_B)) \\\\ &= p( [y_{o1},y_{o2},y_{o3},y_{o4},y_{o5}] \\mid [c_{1},c_{2},c_{3},c_{4},c_{5}] , (\\theta_A, \\theta_B))\\, p([c_{1},c_{2},c_{3},c_{4},c_{5}]\\mid (\\theta_A, \\theta_B))\\\\ &= p( [y_{o1},y_{o2},y_{o3},y_{o4},y_{o5}] \\mid [c_{1},c_{2},c_{3},c_{4},c_{5}] , (\\theta_A, \\theta_B))\\, p([c_{1},c_{2},c_{3},c_{4},c_{5}]) \\\\ &= \\prod_{i=1}^5 p(y_{oi}\\mid c_i, (\\theta_A, \\theta_B)) \\prod_{i=1}^5 p(c_i) \\end{aligned}$$\n",
    "\n",
    "(The third step uses the fact that which coin we pick is independent of $\\theta$, so $p(c \\mid \\theta) = p(c)$.)\n",
    "\n",
    "Suppose $Y_c$ = [(A,A,A,B,B), (5,9,8,4,7)]. Then $$f(Y_c|\\theta) = \\mathrm{binom}(5,10,\\theta_A)p(A) \\cdot \\mathrm{binom}(9,10,\\theta_A)p(A) \\cdot \\, ... \\, \\cdot \\mathrm{binom}(7,10,\\theta_B)p(B)$$\n",
    "\n",
    "But here is the problem: we do not know the assignment [A,A,A,B,B]. In our problem setup, our eyesight is too poor to see it.\n",
    "\n",
    "So we bring in EM to solve the problem:"
   ]
  },
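  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick numeric sanity check of the product formula above, we can evaluate $f(Y_c|\\theta)$ for the assumed assignment (A,A,A,B,B). This is just an illustrative sketch: the values $\\theta_A = 0.8$ and $\\theta_B = 0.5$ are made up for the example, and `scipy.stats.binom.pmf` supplies the binomial pmf:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from scipy.stats import binom\n",
    "\n",
    "heads = np.array([5, 9, 8, 4, 7])   # Y_o: heads out of n=10 in each round\n",
    "coins = ['A', 'A', 'A', 'B', 'B']   # assumed hidden assignments c_i\n",
    "theta = {'A': 0.8, 'B': 0.5}        # illustrative (made-up) theta values\n",
    "p_coin = 0.5                        # p(A) = p(B) = 0.5\n",
    "\n",
    "# f(Y_c|theta) = prod_i binom(y_oi, 10, theta_{c_i}) * p(c_i)\n",
    "f = np.prod([binom.pmf(y, 10, theta[c]) * p_coin for y, c in zip(heads, coins)])\n",
    "print(f)  # a tiny number: a product of five small probabilities\n",
    "```\n",
    "\n",
    "Different assumed assignments and different $\\theta$ values give different likelihoods; EM's job is to cope with the fact that the true assignment is unobserved."
   ]
  },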
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Running EM\n",
    "\n",
    "+ Step one: pin down $ k(Y_m|Y_o,\\theta') $\n",
    "\n",
    "$$Q(\\theta | \\theta^{p}) =\\int \\log f(Y_c|\\theta)\\, k(Y_m|Y_o,\\theta')\\, dY_m $$$$k(Y_m|Y_o,\\theta') = p(Y_m \\mid (5,9,8,4,7), (\\theta_A', \\theta_B'))$$\n",
    "\n",
    "where $\\theta_A'$ and $\\theta_B'$ are initialized arbitrarily.\n",
    "\n",
    "Since the five rounds are independent,\n",
    "\n",
    "$$k(Y_m|Y_o,\\theta') = p([c_{1},c_{2},c_{3},c_{4},c_{5}]\\mid (5,9,8,4,7), (\\theta_A', \\theta_B'))$$ $$= \\prod_{i=1}^5 p(c_{i}\\mid y_{oi}, (\\theta_A', \\theta_B'))$$ where $(5,9,8,4,7) = (y_{o1}, y_{o2}, y_{o3},y_{o4},y_{o5})$\n",
    "\n",
    "So what is $\\prod_{i=1}^5 p(c_{i}| y_{oi}, (\\theta_A', \\theta_B'))$?\n",
    "\n",
    "By Bayes' rule,\n",
    "\n",
    "$$p(c_{i}|y_{oi},\\theta') = \\frac{ p( y_{oi} | c_{i}, \\theta')\\, p(c_{i} | \\theta')}{ \\sum_{c_i \\in \\{[1,0],[0,1]\\}} p( y_{oi} | c_{i}, \\theta')\\, p(c_{i} | \\theta')}$$\n",
    "\n",
    "Both ingredients are known: $p( y_{oi} | c_{i}, \\theta') $ and $p(c_{i} | \\theta')$.\n",
    "\n",
    "$p( y_{oi} | c_{i}, \\theta') $ is just a binomial probability here; for example $p(5 \\mid A, \\theta') = \\mathrm{binom}(5,10,\\theta_A') \\propto \\theta_A'^5 (1-\\theta_A')^5$.\n",
    "\n",
    "Also, which coin we pick is unrelated to the coins' heads probabilities, so\n",
    "\n",
    "$p(c_{i} |\\theta') = p(c_{i})$\n",
    "\n",
    "Writing $\\pi_k$ for the probability of picking coin $k$ (so $\\pi_1 + \\pi_2 = 1$; in our example $\\pi_1 = \\pi_2 = 0.5$), and using $c_i \\in \\left\\{ \\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix}, \\begin{bmatrix} 0 \\\\ 1 \\end{bmatrix} \\right\\}$, which makes exactly one exponent $c_{ik}$ equal to 1 per round:\n",
    "\n",
    "$$\\begin{aligned}[t] \\prod_{i=1}^5 p(c_{i}| y_{oi}, (\\theta_A', \\theta_B')) &= \\prod_{i=1}^5 \\frac{ p( y_{oi} | c_{i}, \\theta')\\, p(c_{i} | \\theta')}{ \\sum_{c_i} p( y_{oi} | c_{i}, \\theta')\\, p(c_{i} | \\theta')} \\\\ &= \\prod_{i=1}^5 \\frac{ p( y_{oi} | c_{i}, \\theta')\\, p(c_{i} )}{ \\sum_{c_i} p( y_{oi} | c_{i}, \\theta')\\, p(c_{i}) } \\\\ &= \\prod_{i=1}^5 \\frac{ p \\left( y_{oi} \\big| \\begin{bmatrix} c_{i1} \\\\ c_{i2} \\end{bmatrix}, \\begin{bmatrix} \\theta'_{1} \\\\ \\theta'_{2} \\end{bmatrix} \\right) p\\left(\\begin{bmatrix} c_{i1} \\\\ c_{i2} \\end{bmatrix} \\right)}{ \\sum_{c_i} p \\left( y_{oi} \\big| \\begin{bmatrix} c_{i1} \\\\ c_{i2} \\end{bmatrix}, \\begin{bmatrix} \\theta_{1}' \\\\ \\theta_{2}' \\end{bmatrix} \\right) p\\left(\\begin{bmatrix} c_{i1} \\\\ c_{i2} \\end{bmatrix} \\right) } \\\\ &= \\prod_{i=1}^5 \\prod_{k=1}^2 \\left( \\frac{ \\theta_k'^{y_{oi} } (1-\\theta_k')^{n- y_{oi}}\\, \\pi_k }{ \\theta_1'^{y_{oi} } (1-\\theta_1')^{n- y_{oi}}\\, \\pi_1 + \\theta_2'^{y_{oi} } (1-\\theta_2')^{n- y_{oi}}\\, \\pi_2} \\right)^{c_{ik}} \\end{aligned}$$\n",
    "\n",
    "+ Step two: pin down $\\log f(Y_c|\\theta)$\n",
    "\n",
    "We know: \n",
    "\n",
    "$$\\begin{aligned}[t] f(Y_c|\\theta) &= \\prod_{i=1}^5 p(y_{oi}\\mid c_i, (\\theta_A, \\theta_B)) \\prod_{i=1}^5 p(c_i)\\\\ \\end{aligned}$$\n",
    "\n",
    "and\n",
    "\n",
    "$$\\begin{aligned} p(y_{oi}\\mid c_i, (\\theta_A, \\theta_B)) &= \\left[ \\mathrm{binom}(y_{oi}, n, \\theta_1 ) \\right]^{c_{i1} } \\left[ \\mathrm{binom}(y_{oi}, n, \\theta_2 ) \\right]^{c_{i2} } \\\\ &= \\prod_{k=1}^2 \\left[\\theta_k^{y_{oi}} (1-\\theta_k)^{n - y_{oi}}\\right]^{c_{ik}} \\end{aligned}$$\n",
    "\n",
    "as well as\n",
    "\n",
    "$$\\begin{aligned} p(c_i) &= \\prod_{k=1}^2 \\pi_k^{c_{ik}} \\end{aligned}$$ \n",
    "\n",
    "where $\\pi_k$ is the probability of choosing A or B: $k=1$ means choosing $A$, $k=2$ means choosing $B$. Of course, $\\pi_1 + \\pi_2 = 1$.\n",
    "\n",
    "Therefore\n",
    "\n",
    "$$\\begin{aligned}[t] f(Y_c|\\theta) &= \\prod_{i=1}^5 p(y_{oi}\\mid c_i, (\\theta_A, \\theta_B)) \\prod_{i=1}^5 p(c_i)\\\\ &= \\prod_{i=1}^5 \\prod_{k=1}^2 \\left[\\theta_k^{y_{oi}} (1-\\theta_k)^{n - y_{oi}}\\right]^{c_{ik}} \\prod_{i=1}^5 \\prod_{k=1}^2 \\pi_k^{c_{ik}} \\end{aligned}$$\n",
    "\n",
    "and so\n",
    "\n",
    "$$ \\log f(Y_c|\\theta) = \\sum_{i=1}^5 \\sum_{k=1}^2 c_{ik} \\log \\left[\\theta_k^{y_{oi}} (1-\\theta_k)^{n - y_{oi}} \\right] + \\sum_{i=1}^5 \\sum_{k=1}^2 c_{ik} \\log \\pi_k$$"
   ]
  },
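  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The posterior $p(c_i \\mid y_{oi}, \\theta')$ we just derived can be evaluated directly. A minimal sketch, assuming the starting guess $\\theta_A' = 0.6$, $\\theta_B' = 0.5$ (the same initialization used in the code at the end) and equal picking probabilities; the binomial coefficient cancels between numerator and denominator, so using the full `scipy.stats.binom.pmf` gives the same answer:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from scipy.stats import binom\n",
    "\n",
    "heads = np.array([5, 9, 8, 4, 7])   # y_oi: heads out of n=10 per round\n",
    "theta_A, theta_B = 0.6, 0.5         # theta' (initial guess)\n",
    "\n",
    "# Bayes rule: p(A | y_oi) = binom(y_oi; 10, theta_A') p(A) / normalizer\n",
    "wA = binom.pmf(heads, 10, theta_A) * 0.5\n",
    "wB = binom.pmf(heads, 10, theta_B) * 0.5\n",
    "E_cA = wA / (wA + wB)   # posterior probability that round i used coin A\n",
    "E_cB = 1 - E_cA         # ...and that it used coin B\n",
    "print(np.round(E_cA, 2))\n",
    "```\n",
    "\n",
    "Rounds with many heads (9 or 8) lean toward the coin currently believed to be more heads-prone, while rounds near 50/50 stay uncertain."
   ]
  },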
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### At this point we have two things:\n",
    "\n",
    "$ k(Y_m|Y_o,\\theta') $\n",
    "\n",
    "$\\log f(Y_c|\\theta)$\n",
    "\n",
    "What we need to maximize is $$\\int \\log f(Y_c|\\theta)\\, k(Y_m|Y_o,\\theta')\\, dY_m$$\n",
    "\n",
    "### The E-step (Expectation)\n",
    "\n",
    "$$\\begin{aligned}[t] \\int &\\log f(Y_c|\\theta)\\, k(Y_m|Y_o,\\theta')\\, dY_m \\\\ &= \\int \\left[ \\sum_{i=1}^5 \\sum_{k=1}^2 c_{ik} \\log \\left[\\theta_k^{y_{oi}} (1-\\theta_k)^{n - y_{oi}} \\right] + \\sum_{i=1}^5 \\sum_{k=1}^2 c_{ik} \\log \\pi_k \\right] \\left[ \\prod_{i=1}^5 \\prod_{k=1}^2 \\left( \\frac{ \\theta_k'^{y_{oi} } (1-\\theta_k')^{n- y_{oi}}\\, \\pi_k }{ \\theta_1'^{y_{oi} } (1-\\theta_1')^{n- y_{oi}}\\, \\pi_1 + \\theta_2'^{y_{oi} } (1-\\theta_2')^{n- y_{oi}}\\, \\pi_2} \\right)^{c_{ik}} \\right] dY_m \\end{aligned}$$\n",
    "\n",
    "Because $Y_m = (c_1,...,c_5)$, the integration variables here are the $c_{ik}$ appearing in $\\log f(Y_c|\\theta)$.\n",
    "\n",
    "For the purposes of this integral, all other quantities are constants, and $\\log f(Y_c|\\theta)$ is a linear combination of the $c_{ik}$. Because integration distributes over sums, all we actually need to compute is $$ E(c_{ik}) = \\int c_{ik}\\, k(Y_m | Y_o, \\theta')\\, dY_m$$\n",
    "\n",
    "$$\\begin{aligned}[t] E(c_{ik}) &= \\int c_{ik}\\, k(Y_m | Y_o, \\theta')\\, dY_m \\\\ & = \\int c_{ik}\\, p(c_i \\mid y_{oi}, \\theta')\\, dc_i \\quad \\text{(every factor not involving } c_i \\text{ integrates to 1)} \\\\ &= \\sum_{c_i} c_{ik}\\, p(c_i \\mid y_{oi}, \\theta') \\quad \\text{(since } c_i \\text{ only takes the values } [1,0] \\text{ and } [0,1] \\text{, the integral is just a sum)} \\\\ &= p(c_{ik} = 1 \\mid y_{oi}, \\theta') \\\\ &= \\frac{ \\theta_k'^{y_{oi} } (1-\\theta_k')^{n- y_{oi}}\\, \\pi_k }{ \\theta_1'^{y_{oi} } (1-\\theta_1')^{n- y_{oi}}\\, \\pi_1 + \\theta_2'^{y_{oi} } (1-\\theta_2')^{n- y_{oi}}\\, \\pi_2 } \\end{aligned}$$\n",
    "\n",
    "So in each E-step we compute\n",
    "\n",
    "$$\\begin{aligned}[t] Q(\\theta | \\theta^{p}) &= \\int \\log f(Y_c|\\theta)\\, k(Y_m|Y_o,\\theta')\\, dY_m \\\\ &= \\sum_{i=1}^5 \\sum_{k=1}^2 E(c_{ik}) \\log \\left[\\theta_k^{y_{oi}} (1-\\theta_k)^{n - y_{oi}} \\right] + \\sum_{i=1}^5 \\sum_{k=1}^2 E(c_{ik}) \\log \\pi_k \\end{aligned}$$\n",
    "\n",
    "which we use to check whether the result has converged.\n",
    "\n",
    "### The M-step (Maximization)\n",
    "\n",
    "Here we look for a $\\theta$ that maximizes $Q(\\theta | \\theta^{p})$. Note that $E(c_{ik})$ is a constant here, not a function of $\\theta$.\n",
    "\n",
    "The recipe: take the derivative and set it to zero.\n",
    "\n",
    "$$\\begin{aligned}[t] \\frac{\\partial Q(\\theta|\\theta^p)}{\\partial \\theta_k} &= \\frac{\\partial}{\\partial \\theta_k} \\left[ \\sum_{i=1}^5 \\sum_{k=1}^2 E(c_{ik}) \\log \\left[\\theta_k^{y_{oi}} (1-\\theta_k)^{n - y_{oi}} \\right] + \\sum_{i=1}^5 \\sum_{k=1}^2 E(c_{ik}) \\log \\pi_k \\right] \\\\ &= \\sum_{i=1}^5 E(c_{ik}) \\left[ y_{oi} \\frac{1}{\\theta_k} - (n-y_{oi}) \\frac{1}{1 -\\theta_k} \\right] = 0 \\\\ &\\Rightarrow \\sum_{i=1}^5 E(c_{ik}) \\left[ y_{oi} (1 - \\theta_k) - (n-y_{oi})\\, \\theta_k \\right] = 0 \\quad \\text{(multiplying through by } \\theta_k(1-\\theta_k)\\text{)} \\end{aligned}$$\n",
    "\n",
    "Finally we obtain\n",
    "\n",
    "$$\\theta_k = \\frac{\\sum_{i=1}^5 E(c_{ik})\\, y_{oi}}{n \\sum_{i=1}^5 E(c_{ik})}$$"
   ]
  },
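  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The closed-form update $\\theta_k = \\sum_i E(c_{ik})\\, y_{oi} \\big/ \\big(n \\sum_i E(c_{ik})\\big)$ can be checked numerically with a single E-step plus M-step. A minimal sketch, again assuming the starting guess $\\theta' = (0.6, 0.5)$ and equal picking probabilities:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from scipy.stats import binom\n",
    "\n",
    "heads = np.array([5, 9, 8, 4, 7])   # y_oi, with n = 10 tosses per round\n",
    "n = 10\n",
    "theta_A, theta_B = 0.6, 0.5         # theta' before the step\n",
    "\n",
    "# E-step: responsibilities E(c_ik) via Bayes rule\n",
    "wA = binom.pmf(heads, n, theta_A) * 0.5\n",
    "wB = binom.pmf(heads, n, theta_B) * 0.5\n",
    "E_A = wA / (wA + wB)\n",
    "E_B = 1 - E_A\n",
    "\n",
    "# M-step: theta_k = sum_i E(c_ik) y_oi / (n sum_i E(c_ik))\n",
    "theta_A_new = np.sum(E_A * heads) / (n * np.sum(E_A))\n",
    "theta_B_new = np.sum(E_B * heads) / (n * np.sum(E_B))\n",
    "print(round(theta_A_new, 2), round(theta_B_new, 2))\n",
    "```\n",
    "\n",
    "One such step moves $(0.6, 0.5)$ to roughly $(0.71, 0.58)$, matching the first pair of values printed by the full loop below."
   ]
  },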
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After all that talk, the code turns out to be quite simple:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Iteration: 1\n",
       "theta_A = 0.71, theta_B = 0.58, difference in loglike = -32.69\n",
       "Iteration: 2\n",
       "theta_A = 0.75, theta_B = 0.57, difference in loglike = 1.43\n",
       "Iteration: 3\n",
       "theta_A = 0.77, theta_B = 0.55, difference in loglike = 0.50\n",
       "Iteration: 4\n",
       "theta_A = 0.78, theta_B = 0.53, difference in loglike = 0.43\n",
       "Iteration: 5\n",
       "theta_A = 0.79, theta_B = 0.53, difference in loglike = 0.26\n",
       "Iteration: 6\n",
       "theta_A = 0.79, theta_B = 0.52, difference in loglike = 0.12\n",
       "Iteration: 7\n",
       "theta_A = 0.80, theta_B = 0.52, difference in loglike = 0.05\n",
       "Iteration: 8\n",
       "theta_A = 0.80, theta_B = 0.52, difference in loglike = 0.02\n",
       "Iteration: 9\n",
       "theta_A = 0.80, theta_B = 0.52, difference in loglike = 0.01\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "ys = np.array([(5,5), (9,1), (8,2), (4,6), (7,3)])  # (heads, tails) per round\n",
    "thetas = np.array([[0.6, 0.4], [0.5, 0.5]])  # initial guesses: [theta_A, 1-theta_A], [theta_B, 1-theta_B]\n",
    "pis = np.array([0.5, 0.5])  # probability of grabbing coin A vs. coin B\n",
    "\n",
    "tolerance = 0.01\n",
    "max_iter = 100\n",
    "\n",
    "loglike_old = 0\n",
    "for it in range(max_iter):\n",
    "    E_c1 = []\n",
    "    E_c2 = []\n",
    "    EcY_1 = []\n",
    "    EcY_2 = []\n",
    "    loglike_new = 0\n",
    "    # E-step:\n",
    "    for i in range(len(ys)):\n",
    "\n",
    "        # per-coin log-likelihood (binomial coefficient dropped, as in the derivation)\n",
    "        log_k1 = np.sum(ys[i] * np.log(thetas[0]))  # log[theta_1^{y_oi} (1-theta_1)^{n-y_oi}]\n",
    "        log_k2 = np.sum(ys[i] * np.log(thetas[1]))  # log[theta_2^{y_oi} (1-theta_2)^{n-y_oi}]\n",
    "\n",
    "        # expectation of c_ik (Bayes rule)\n",
    "        denom = np.exp(log_k1) * pis[0] + np.exp(log_k2) * pis[1]\n",
    "        E_ci1 = np.exp(log_k1) * pis[0] / denom\n",
    "        E_ci2 = np.exp(log_k2) * pis[1] / denom\n",
    "\n",
    "        # accumulate the expected complete-data log-likelihood\n",
    "        # (used only for the convergence check; theta is not updated here)\n",
    "        loglike_new += E_ci1 * log_k1 + E_ci2 * log_k2\n",
    "        E_c1.append(E_ci1)\n",
    "        E_c2.append(E_ci2)\n",
    "\n",
    "    # M-step:\n",
    "    for i in range(len(ys)):\n",
    "        EcY_1.append(E_c1[i] * ys[i])\n",
    "        EcY_2.append(E_c2[i] * ys[i])\n",
    "    thetas[0] = np.sum(EcY_1, 0) / np.sum(EcY_1)  # theta_k = sum E(c_ik) y_oi / (n sum E(c_ik))\n",
    "    thetas[1] = np.sum(EcY_2, 0) / np.sum(EcY_2)\n",
    "    print(\"Iteration: %d\" % (it+1))\n",
    "    print(\"theta_A = %.2f, theta_B = %.2f, difference in loglike = %.2f\" % (thetas[0,0], thetas[1,0], loglike_new - loglike_old))\n",
    "\n",
    "    if np.abs(loglike_new - loglike_old) < tolerance:\n",
    "        break\n",
    "    loglike_old = loglike_new"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "### Copyright © 稀牛学院. All rights reserved."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](http://pic1.tsingdataedu.com/%E7%A8%80%E7%89%9B%20x%20%E7%BD%91%E6%98%93.png)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
