{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<br>\n",
    "<center><font face=\"黑体\" size=4>Lab Manual for the Course \"Fundamentals of Machine Learning Practice\"</font></center>\n",
    "<br>\n",
    "<center><font face=\"黑体\" size=4>Chapter 5: Bayesian Classifiers</font></center>\n",
    "\n",
    "$\\textbf{1. Lab Objectives}$\n",
    "\n",
    "After studying this chapter, you should master Bayesian decision theory, the principle and implementation of the naive Bayes classifier, and the principle and implementation of the EM algorithm.\n",
    "\n",
    "$\\textbf{2. Lab Content}$\n",
    "\n",
    "$\\textbf{5.1 Bayesian Decision Theory}$\n",
    "\n",
    "Bayesian decision theory is the basic approach to decision making under a probabilistic framework. Suppose there are $N$ possible class labels $Y=\\{c_1,c_2,\\ldots,c_N\\}$, and let $\\lambda_{ij}$ denote the loss incurred by misclassifying a sample whose true label is $c_j$ as label $c_i$. Based on the posterior probabilities $P(c_j|\\textbf{x})$, the expected loss of classifying sample $\\textbf{x}$ as $c_i$ is\n",
    "\n",
    "\\begin{equation}\n",
    "R(c_i|\\textbf{x})=\\sum_{j=1}^{N}\\lambda_{ij}P(c_j|\\textbf{x}), \\ \\text{(5.1)}\n",
    "\\end{equation}\n",
    "\n",
    "To minimize the classifier's overall expected loss, it suffices to choose for each sample the class label that minimizes the expected loss; this gives the Bayes optimal classifier $h^{*}(\\textbf{x})$,\n",
    "\n",
    "\\begin{equation}\n",
    "h^{*}(\\textbf{x})=\\underset{c \\in Y}{\\arg\\min}\\, R(c|\\textbf{x}), \\ \\text{(5.2)}\n",
    "\\end{equation}\n",
    "\n",
    "When the classification error rate is used as the loss, i.e.,\n",
    "\\begin{equation}\n",
    "\\lambda_{ij}=\n",
    "\\begin{cases}\n",
    "0 & \\text{if } i=j\n",
    "\\cr 1 & \\text{if } i \\ne j\n",
    "\\end{cases}\n",
    "\\end{equation}\n",
    "\n",
    "the expected loss in Eq. (5.1) can be written as\n",
    "\n",
    "\\begin{equation}\n",
    "R(c|\\textbf{x})=1-P(c|\\textbf{x}), \\ \\text{(5.3)}\n",
    "\\end{equation}\n",
    "\n",
    "and the Bayes optimal classifier in Eq. (5.2) becomes\n",
    "\n",
    "\\begin{equation}\n",
    "h^{*}(\\textbf{x})=\\underset{c \\in Y}{\\arg\\max}\\, P(c|\\textbf{x}), \\ \\text{(5.4)}\n",
    "\\end{equation}\n",
    "\n",
    "It can be seen that the key to a Bayesian classifier lies in obtaining the posterior probability $P(c|\\textbf{x})$.\n",
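    "\n",
    "As a minimal numeric sketch of the rule in Eqs. (5.3)-(5.4), with made-up posterior values for illustration, the 0/1-loss Bayes decision simply picks the class with the largest posterior:\n",
    "\n",
    "```python\n",
    "# Hypothetical posteriors P(c|x) for a single sample x\n",
    "posteriors = {\"c1\": 0.2, \"c2\": 0.5, \"c3\": 0.3}\n",
    "# Under 0/1 loss the conditional risk is R(c|x) = 1 - P(c|x)\n",
    "risks = {c: 1.0 - p for c, p in posteriors.items()}\n",
    "# The Bayes optimal decision minimizes the risk, i.e. maximizes the posterior\n",
    "h_star = min(risks, key=risks.get)\n",
    "print(h_star)  # -> c2\n",
    "```\n",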
    "\n",
    "$\\textbf{5.2 The Naive Bayes Classifier}$\n",
    "\n",
    "In a Bayesian classifier, the posterior probability $P(c|\\textbf{x})$ can be computed via Eq. (5.5),\n",
    "\n",
    "\\begin{equation}\n",
    "P(c|\\textbf{x}) = \\frac{P(c,\\textbf{x})}{P(\\textbf{x})}=\\frac{P(c)P(\\textbf{x}|c)}{P(\\textbf{x})}, \\ \\text{(5.5)}\n",
    "\\end{equation}\n",
    "\n",
    "where $P(c)$ is the prior probability of class $c$, i.e., the proportion of each class in the sample space; by the law of large numbers, $P(c)$ can be estimated from the frequency of each class. $P(\\textbf{x})$ is a normalizing factor independent of the class, and $P(\\textbf{x}|c)$ is the class-conditional probability.\n",
    "\n",
    "The main difficulty in computing the class-conditional probability $P(\\textbf{x}|c)$ is that it involves the joint probability over all attributes, which is hard to estimate from a limited number of training samples. The naive Bayes classifier assumes that all attributes are mutually independent and that each attribute influences the classification result independently; the joint probability over attributes then need not be modeled, which simplifies the computation of the class-conditional probability. Under this assumption, the posterior probability $P(c|\\textbf{x})$ is computed as in Eq. (5.6).\n",
    "\n",
    "\\begin{equation}\n",
    "P(c|\\textbf{x}) = \\frac{P(c)P(\\textbf{x}|c)}{P(\\textbf{x})}=\\frac{P(c)}{P(\\textbf{x})}\\prod_{i=1}^{d}P(x_i|c), \\ \\text{(5.6)}\n",
    "\\end{equation}\n",
    "\n",
    "Since $P(\\textbf{x})$ is independent of the class, the decision rule of the naive Bayes classifier can be written as\n",
    "\n",
    "\\begin{equation}\n",
    "h(\\textbf{x})=\\underset{c \\in Y}{\\arg\\max}\\, P(c)\\prod_{i=1}^{d}P(x_i|c), \\ \\text{(5.7)}\n",
    "\\end{equation}\n",
    "\n",
    "\n",
    "Training a naive Bayes classifier amounts to estimating, from the training set $D$, the prior probability $P(c)$ of each class and the class-conditional probability $P(x_i|c)$ of each attribute.\n",
    "\n",
    "Let $D_c$ denote the set of training samples of class $c$. By the law of large numbers, the prior probability of each class can be estimated as\n",
    "\n",
    "\\begin{equation}\n",
    "P(c) = \\frac{|D_c|}{|D|}, \\ \\text{(5.8)}\n",
    "\\end{equation}\n",
    "\n",
    "For a discrete attribute, let $D_{c,x_i}$ denote the subset of $D_c$ whose $i$-th attribute takes the value $x_i$; the class-conditional probability can then be estimated as\n",
    "\n",
    "\\begin{equation}\n",
    "P(x_i|c) = \\frac{|D_{c,x_i}|}{|D_c|}, \\ \\text{(5.9)}\n",
    "\\end{equation}\n",
    "\n",
    "For a continuous attribute, a probability density can be used instead. Assuming $P(x_i|c)$ follows a normal distribution with mean $\\mu_{c,i}$ and variance $\\sigma_{c,i}^{2}$, the class-conditional probability can be estimated as\n",
    "\n",
    "\\begin{equation}\n",
    "P(x_i|c) = \\frac{1}{\\sqrt{2\\pi}\\,\\sigma_{c,i}}\\exp\\left(-\\frac{(x_i-\\mu_{c,i})^2}{2\\sigma_{c,i}^{2}}\\right), \\ \\text{(5.10)}\n",
    "\\end{equation}\n",
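    "\n",
    "The three estimates in Eqs. (5.8)-(5.10) can be sketched on a hypothetical three-sample toy set (the data below are made up; only the formulas come from the text):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "# Hypothetical toy training set: labels, one discrete and one continuous attribute\n",
    "y = [\"yes\", \"yes\", \"no\"]\n",
    "color = [\"A\", \"B\", \"A\"]   # discrete attribute\n",
    "size = [0.3, 0.5, 1.6]    # continuous attribute\n",
    "\n",
    "# Eq. (5.8): class prior as a frequency, |D_c| / |D|\n",
    "p_yes = y.count(\"yes\") / len(y)  # 2/3\n",
    "\n",
    "# Eq. (5.9): P(color='A' | 'yes') = |D_{c,xi}| / |D_c|\n",
    "idx_yes = [i for i, lab in enumerate(y) if lab == \"yes\"]\n",
    "p_A_yes = sum(color[i] == \"A\" for i in idx_yes) / len(idx_yes)  # 1/2\n",
    "\n",
    "# Eq. (5.10): Gaussian density of the continuous attribute under class 'yes'\n",
    "vals = [size[i] for i in idx_yes]\n",
    "mu = sum(vals) / len(vals)\n",
    "var = sum((v - mu) ** 2 for v in vals) / len(vals)\n",
    "p_size = math.exp(-(0.4 - mu) ** 2 / (2 * var)) / (math.sqrt(2 * math.pi) * math.sqrt(var))\n",
    "print(p_yes, p_A_yes, p_size)\n",
    "```\n",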
    "\n",
    "$\\textbf{5.3 Implementing the Naive Bayes Classifier}$\n",
    "\n",
    "This section implements the naive Bayes classification model in Python as a NaiveBayesClassifier class. The code is shown below; some important parts remain to be completed.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(1) Implementation of the NaiveBayesClassifier class"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "class NaiveBayesClassifier:\n",
    "    def __init__(self):\n",
    "        # Class prior probabilities P(c), e.g. {\"yes\": 0.6, \"no\": 0.4}\n",
    "        self.class_prior_probs = {}\n",
    "        # Class-conditional statistics for P(xi|c),\n",
    "        # e.g. {\"yes\": {\"Color\": {\"A\": count('A' | 'yes')}}}\n",
    "        self.class_condt_probs = {}\n",
    "    # Estimate the class prior probabilities from the training set\n",
    "    def estimate_class_prior_probs(self, y_train):\n",
    "        \"\"\"\n",
    "        y_train: class labels of the training samples\n",
    "        \"\"\"\n",
    "        inst_num = len(y_train)\n",
    "        # Count the occurrences of each class in the training set\n",
    "        for i in range(inst_num):\n",
    "            key = y_train[i]\n",
    "            if key not in self.class_prior_probs:\n",
    "                self.class_prior_probs[key] = 0\n",
    "            # Increment the count of this class\n",
    "            self.class_prior_probs[key] += 1\n",
    "        # Convert the counts to frequencies\n",
    "        for key in self.class_prior_probs:\n",
    "            self.class_prior_probs[key] = \\\n",
    "                float(self.class_prior_probs[key]) / inst_num\n",
    "    # Estimate the class-conditional probabilities from the training set\n",
    "    def estimate_class_condt_probs(self, x_train,\n",
    "                                   y_train, feature_names,\n",
    "                                   feature_types, labels):\n",
    "        \"\"\"\n",
    "        x_train: feature matrix of the training set\n",
    "        y_train: class labels of the training samples\n",
    "        feature_names: list of feature names\n",
    "        feature_types: feature types, 1 for discrete, 0 for continuous\n",
    "        labels: all possible class labels\n",
    "        \"\"\"\n",
    "        feature_num = len(feature_names)\n",
    "        inst_num = len(x_train)\n",
    "        # Number of training samples\n",
    "        self.inst_num = inst_num\n",
    "        # Estimate the class-conditional statistics for each class\n",
    "        for label in labels:\n",
    "            # Dictionary of class-conditional statistics for class \"label\"\n",
    "            self.class_condt_probs[label] = {}\n",
    "            # Number of occurrences of class \"label\" in the training set\n",
    "            self.class_condt_probs[label][\"num\"] = 0\n",
    "            for feature in feature_names:\n",
    "                # For class \"label\", store the value counts of a discrete\n",
    "                # feature, or the mean and variance of a continuous feature\n",
    "                self.class_condt_probs[label][feature] = {}\n",
    "            # Count the occurrences of this class\n",
    "            for i in range(inst_num):\n",
    "                if y_train[i] == label:\n",
    "                    self.class_condt_probs[label][\"num\"] += 1\n",
    "            for i in range(feature_num):\n",
    "                if feature_types[i] == 1:  # discrete feature\n",
    "                    for j in range(inst_num):\n",
    "                        if y_train[j] == label:\n",
    "                            if x_train[j, i] not in self.class_condt_probs[label][feature_names[i]]:\n",
    "                                self.class_condt_probs[label][feature_names[i]][x_train[j, i]] = 0\n",
    "                            self.class_condt_probs[label][feature_names[i]][x_train[j, i]] += 1\n",
    "                else:  # continuous feature\n",
    "                    values = []\n",
    "                    for j in range(inst_num):\n",
    "                        if y_train[j] == label:\n",
    "                            values.append(float(x_train[j, i]))\n",
    "                    values = np.array(values)\n",
    "                    self.class_condt_probs[label][feature_names[i]][\"mean\"] = np.mean(values)\n",
    "                    self.class_condt_probs[label][feature_names[i]][\"var\"] = np.var(values)\n",
    "    # For a continuous feature, compute P(xi|c) with a Gaussian density;\n",
    "    # mu and std are the mean and standard deviation of attribute xi\n",
    "    def normal_prob_dense(self, x, mu, std):\n",
    "        import math\n",
    "        # Gaussian density of Eq. (5.10)\n",
    "        p = math.exp(-(x - mu) ** 2 / (2 * std ** 2)) / (math.sqrt(2 * math.pi) * std)\n",
    "        return p\n",
    "    # Train the naive Bayes classifier\n",
    "    def fit(self, x_train, y_train, feature_names, feature_types, labels):\n",
    "        self.estimate_class_prior_probs(y_train)\n",
    "        self.estimate_class_condt_probs(x_train, y_train, feature_names,\n",
    "                                        feature_types, labels)\n",
    "    # Predict class labels for the test samples\n",
    "    def predict(self, x_test, feature_names, feature_types):\n",
    "        m, n = np.shape(x_test)\n",
    "        pred_labels = []\n",
    "        for i in range(m):  # for each test sample\n",
    "            union_probs = {}  # joint probabilities, as in Eq. (5.7)\n",
    "            for label in self.class_condt_probs.keys():\n",
    "                # P(c), estimated as in Eq. (5.8)\n",
    "                prob = float(self.class_condt_probs[label][\"num\"]) / self.inst_num\n",
    "                # Multiply in the factors P(xi|c) of Eq. (5.7)\n",
    "                for j in range(n):\n",
    "                    if feature_types[j] == 1:  # discrete feature, Eq. (5.9)\n",
    "                        prob = prob * float(self.class_condt_probs[label]\\\n",
    "                                            [feature_names[j]][x_test[i, j]])\\\n",
    "                            / self.class_condt_probs[label][\"num\"]\n",
    "                    else:  # continuous feature, Eq. (5.10)\n",
    "                        mu = float(self.class_condt_probs[label][feature_names[j]][\"mean\"])\n",
    "                        # A small epsilon keeps the variance strictly positive\n",
    "                        var = float(self.class_condt_probs[label][feature_names[j]][\"var\"] + 1e-10)\n",
    "                        p_condit = self.normal_prob_dense(float(x_test[i, j]), mu, var ** 0.5)\n",
    "                        prob = prob * p_condit\n",
    "                union_probs[label] = prob\n",
    "            # Choose the class with the largest joint probability\n",
    "            pred_label = None\n",
    "            max_prob = -1.0\n",
    "            for key in union_probs.keys():\n",
    "                if union_probs[key] > max_prob:\n",
    "                    max_prob = union_probs[key]\n",
    "                    pred_label = key\n",
    "            pred_labels.append(pred_label)\n",
    "        return pred_labels\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(2) Testing the implemented naive Bayes classifier on the watermelon dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Training set; the last column is the class label\n",
    "data_train = [['A','A','A','A','A','A',0.0,'yes'],\n",
    "              ['B','A','B','A','A','A',0.0,'yes'],\n",
    "              ['B','A','A','A','A','A',0.0,'yes'],\n",
    "              ['A','A','B','A','A','A',0.0,'yes'],\n",
    "              ['C','A','B','A','A','A',0.0,'yes'],\n",
    "              ['A','B','A','A','B','B',0.0,'yes'],\n",
    "              ['B','B','A','B','B','B',0.0,'yes'],\n",
    "              ['B','B','B','B','B','A',1.6,'no'],\n",
    "              ['A','C','C','A','C','B',1.55,'no'],\n",
    "              ['C','C','C','C','C','A',1.67,'no'],\n",
    "              ['C','A','A','C','C','B',1.88,'no'],\n",
    "              ['A','B','A','B','A','A',2.00,'no'],\n",
    "              ['C','B','B','B','A','A',1.91,'no'],\n",
    "              ['C','A','A','B','C','A',0.88,'no'],\n",
    "              ['A','A','B','C','B','A',1.87,'no']]\n",
    "# Test samples\n",
    "data_test = [['B','B','A','A','B','A',0.0],\n",
    "             ['B','B','A','A','B','B',1.78]]\n",
    "# True labels of the test samples\n",
    "label_test = ['yes','no']\n",
    "# Convert the data to numpy arrays and split features from labels\n",
    "data_train = np.array(data_train)\n",
    "x_train = data_train[:, :-1]\n",
    "y_train = data_train[:, -1]\n",
    "x_test = np.array(data_test)\n",
    "# Feature names\n",
    "feature_names = ['Color','Root','Sound','texture','jibu','cugan','size']\n",
    "# Feature types: 1 for discrete, 0 for continuous\n",
    "feature_types = [1,1,1,1,1,1,0]\n",
    "# All possible class labels\n",
    "labels = ['yes','no']\n",
    "# Build and train the naive Bayes classifier\n",
    "model = NaiveBayesClassifier()\n",
    "model.fit(x_train, y_train, feature_names, feature_types, labels)\n",
    "# Inspect the trained model\n",
    "print(\"class prior probabilities P(c):\")\n",
    "print(model.class_prior_probs)\n",
    "print(\"class-conditional probabilities P(xi|c):\")\n",
    "print(model.class_condt_probs)\n",
    "# Classify the test samples\n",
    "pred_test = model.predict(x_test, feature_names, feature_types)\n",
    "print(\"predicted labels:\")\n",
    "print(pred_test)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{5.4 The EM Algorithm}$\n",
    "\n",
    "The Expectation-Maximization (EM) algorithm finds maximum likelihood or maximum a posteriori estimates of the parameters of a probabilistic model, where the model depends on unobservable latent variables. This section explains the basic idea of the EM algorithm through a coin-tossing example.\n",
    "\n",
    "Suppose there are two coins, A and B, whose different materials give them different probabilities of landing heads. We run 5 rounds of experiments, tossing a coin 5 times per round; the head/tail outcomes are shown in the figure below, but the coin used in each round (A or B) is unknown. The task is to estimate, from the results of the 5 rounds, the probabilities $P_A$ and $P_B$ that coin A and coin B land heads.\n",
    "\n",
    "<img src=picture/NB1.png>\n",
    "\n",
    "Since the coin used in each round is unknown, the coin identity is a latent variable, and $P_A$ and $P_B$ cannot be estimated directly. We first initialize $P_A=0.2$ and $P_B=0.6$. With these initial values we compute, for each round, the probability of the observed outcomes under each coin, as shown in the figure below, and then use the maximum likelihood principle to infer which coin was used in each round. From the results in the figure, the coins used in the 5 rounds are estimated to be (B, B, A, B, B). Coin A was therefore tossed 5 times, yielding 1 head and 4 tails, giving the estimate $P_A=1/5=0.2$; coin B was tossed 20 times, yielding 10 heads and 10 tails, giving the estimate $P_B=10/20=0.5$. Next, the new estimates $P_A=0.2$, $P_B=0.5$ are used to re-infer the coins of the 5 rounds and to update $P_A$ and $P_B$ again; as this process repeats, the estimates of $P_A$ and $P_B$ approach their true values.\n",
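    "\n",
    "The hard-assignment round described above can be sketched as follows. Since the figure is not reproduced here, the per-round head counts below are hypothetical, chosen only to be consistent with the totals quoted in the text:\n",
    "\n",
    "```python\n",
    "def round_likelihood(heads, n, p):\n",
    "    # Probability of observing `heads` heads in n independent tosses with head probability p\n",
    "    return (p ** heads) * ((1 - p) ** (n - heads))\n",
    "\n",
    "# Hypothetical head counts of the 5 rounds (5 tosses each)\n",
    "heads_per_round = [3, 2, 1, 3, 2]\n",
    "p_a, p_b = 0.2, 0.6  # initial estimates\n",
    "\n",
    "# E-step (hard assignment): pick the more likely coin for each round\n",
    "assign = [\"A\" if round_likelihood(h, 5, p_a) > round_likelihood(h, 5, p_b)\n",
    "          else \"B\" for h in heads_per_round]\n",
    "\n",
    "# M-step: re-estimate each coin's head probability from its assigned rounds\n",
    "def reestimate(coin):\n",
    "    rounds = [h for h, c in zip(heads_per_round, assign) if c == coin]\n",
    "    return sum(rounds) / (5 * len(rounds))\n",
    "\n",
    "p_a, p_b = reestimate(\"A\"), reestimate(\"B\")\n",
    "print(assign, p_a, p_b)  # -> ['B', 'B', 'A', 'B', 'B'] 0.2 0.5\n",
    "```\n",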
    "\n",
    "<img src=picture/NB2.png>\n",
    "\n",
    "$\\textbf{The EM parameter-estimation procedure is as follows:}$\n",
    "\n",
    "$\\textbf{EM Algorithm}$\n",
    "\n",
    "$\\textbf{Input:}$\n",
    "\n",
    "Observed data $\\{x_1,x_2,\\ldots,x_m\\}$, joint distribution $P(x,z|\\theta)$, conditional distribution $P(z|x,\\theta)$, where $z$ is the latent variable; maximum number of iterations $T$\n",
    "\n",
    "$\\textbf{Output:}$\n",
    "\n",
    "The estimate of the parameter $\\theta$\n",
    "\n",
    "$\\textbf{Steps:}$\n",
    "\n",
    "(1) Randomly initialize the parameter $\\theta_{0}$ \n",
    "\n",
    "(2) for $t=1 : T$\n",
    "\n",
    "   E-step: infer the latent variables from the observed data, i.e., compute the latent-variable distribution $P(z|x,\\theta_{t})$ under the current parameters, and take the expectation of the log-likelihood with respect to this distribution.\n",
    "   \n",
    "   M-step: find the parameters that maximize the expectation from the E-step, giving the new estimate $\\theta_{t+1}$\n",
    "   \n",
    "(3) end for\n",
    "\n",
    "\n",
    "$\\textbf{5.5 Solving Gaussian Mixtures with the EM Algorithm}$\n",
    "\n",
    "The multivariate Gaussian distribution can be written as:\n",
    "<img src=picture/NB3.png>Eq. (5.11)\n",
    "\n",
    "where $\\textbf{x}$ is a data sample represented by a $d$-dimensional vector, $\\boldsymbol{\\mu}$ is a $d$-dimensional mean vector, and $\\boldsymbol{\\Sigma}$ is a $d\\times d$ covariance matrix. A Gaussian mixture is composed of $K$ different Gaussian components:\n",
    "\n",
    "<img src=picture/NB4.png>Eq. (5.12)\n",
    "\n",
    "Let $\\textbf{x}$ be a point in the sample space. The meaning of the Gaussian mixture is that any sample point can be viewed as generated from the component Gaussians with different probabilities, or equivalently, as belonging to each Gaussian component with a different probability. In practice, however, we know neither the Gaussian parameters $\\boldsymbol{\\mu}_{k},\\boldsymbol{\\Sigma}_{k}$ nor the probabilities $\\alpha_{k}$ with which samples belong to each Gaussian. To characterize the distribution of the dataset, these parameters must be estimated, and the EM algorithm is an important tool for this.\n",
    "\n",
    "Given a training set $D$, let $\\textbf{X}_{N\\times d}$ be the feature matrix of $D$, each row the feature vector of one sample. Assuming $D$ is generated by a mixture of $K$ $d$-dimensional Gaussians, the mean vector $\\boldsymbol{\\mu}_{k}$ and covariance matrix $\\boldsymbol{\\Sigma}_{k}$ of each Gaussian, together with the mixing coefficients $\\alpha_{k}$, can be estimated by maximum likelihood, i.e., by maximizing the likelihood function in Eq. (5.13).\n",
    "\n",
    "<img src=picture/NB5.png>Eq. (5.13)\n",
    "\n",
    "For convenience, we take the logarithm of Eq. (5.13) and maximize the log-likelihood instead:\n",
    "\n",
    "<img src=picture/NB6.png>Eq. (5.14)\n",
    "\n",
    "Taking the partial derivative of Eq. (5.14) with respect to $\\boldsymbol{\\mu}_{k}$ gives\n",
    "\n",
    "<img src=picture/NB7.png>Eq. (5.15)\n",
    "\n",
    "Multiplying both sides of Eq. (5.15) by $\\boldsymbol{\\Sigma}_{k}^{-1}$ and setting the result to zero gives\n",
    "\n",
    "<img src=picture/NB8.png>Eq. (5.16)\n",
    "\n",
    "To simplify Eq. (5.16), let\n",
    "\n",
    "<img src=picture/NB9.png>Eq. (5.17)\n",
    "\n",
    "where $\\gamma_{n,k}$ is called the responsibility of the $k$-th Gaussian component for the $n$-th sample; Eq. (5.16) can then be written as\n",
    "\n",
    "<img src=picture/NB10.png>Eq. (5.18)\n",
    "\n",
    "Similarly, taking the partial derivative of Eq. (5.14) with respect to $\\boldsymbol{\\Sigma}_{k}$ and setting it to zero gives\n",
    "\n",
    "<img src=picture/NB11.png>Eq. (5.19)\n",
    "\n",
    "For the mixing coefficients $\\alpha_{k}$, Eq. (5.14) is a constrained optimization problem, which can be solved with the method of Lagrange multipliers:\n",
    "\n",
    "<img src=picture/NB12.png>Eq. (5.20)\n",
    "\n",
    "Taking the partial derivative of Eq. (5.20) with respect to $\\alpha_{k}$ and setting it to zero gives\n",
    "\n",
    "\\begin{equation}\n",
    "\\alpha_{k} = \\frac{\\sum_{n=1}^{N}\\gamma_{n,k}}{N}, \\ \\text{(5.21)}\n",
    "\\end{equation}\n",
    "\n",
    "Based on the derivations above, each EM iteration updates $\\alpha_{k},\\boldsymbol{\\mu}_{k},\\boldsymbol{\\Sigma}_{k}$ according to Eqs. (5.18), (5.19), and (5.21), eventually yielding estimates of the Gaussian mixture parameters of the dataset.\n",
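    "\n",
    "The three updates can be sketched in NumPy, assuming the responsibilities $\\gamma_{n,k}$ from the E-step are given; the data and responsibilities below are made-up values:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Hypothetical data: N=4 samples in d=2, K=2 components,\n",
    "# with responsibilities gamma whose rows sum to 1\n",
    "X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.2, 2.9]])\n",
    "gamma = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])\n",
    "N, K = X.shape[0], gamma.shape[1]\n",
    "\n",
    "N_k = gamma.sum(axis=0)            # effective sample count per component\n",
    "alpha = N_k / N                    # Eq. (5.21): mixing coefficients\n",
    "mu = (gamma.T @ X) / N_k[:, None]  # Eq. (5.18): component means\n",
    "Sigma = []                         # Eq. (5.19): component covariances\n",
    "for k in range(K):\n",
    "    diff = X - mu[k]\n",
    "    Sigma.append((gamma[:, k, None] * diff).T @ diff / N_k[k])\n",
    "print(alpha)  # -> [0.5 0.5]\n",
    "```\n",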
    "\n",
    "$\\textbf{The implementation of EM for estimating Gaussian mixture parameters is as follows:}$\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "class ExpectationMaximization:\n",
    "    # Multivariate Gaussian density, Eq. (5.11)\n",
    "    def gaussian(self, X, mu, cov):\n",
    "        n = X.shape[1]  # feature dimension\n",
    "        diff = (X - mu).T  # difference between the samples and the mean vector\n",
    "        # Density of each sample under the multivariate Gaussian\n",
    "        gaussian_prob = np.diagonal(1 / ((2 * np.pi) ** (n / 2) * \\\n",
    "                                           np.linalg.det(cov) ** 0.5) \\\n",
    "                                      * np.exp(-0.5 *\\\n",
    "                                               np.dot(np.dot(diff.T, \\\n",
    "                                                             np.linalg.inv(cov)),\\\n",
    "                                                      diff))).reshape(-1, 1)\n",
    "        return gaussian_prob\n",
    "    # Initialize the mixture components\n",
    "    def init_clusters(self, X, n_clusters):\n",
    "        from sklearn.cluster import KMeans\n",
    "        clusters = []\n",
    "        # Initialize the component means with k-means centers\n",
    "        kmeans = KMeans(n_clusters=n_clusters).fit(X)\n",
    "        mu_k = kmeans.cluster_centers_\n",
    "        # Initially all components have equal mixing probability\n",
    "        for i in range(n_clusters):\n",
    "            clusters.append({\n",
    "                'alpha_k': 1.0 / n_clusters,\n",
    "                'mu_k': mu_k[i],\n",
    "                'cov_k': np.identity(X.shape[1], dtype=np.float64)\n",
    "            })\n",
    "        return clusters\n",
    "    # E-step: compute the responsibilities, Eq. (5.17)\n",
    "    def expectation_step(self, X, clusters):\n",
    "        totals = np.zeros((X.shape[0], 1), dtype=np.float64)\n",
    "        for cluster in clusters:\n",
    "            alpha_k = cluster['alpha_k']\n",
    "            mu_k = cluster['mu_k']\n",
    "            cov_k = cluster['cov_k']\n",
    "            gamma_nk = (alpha_k * self.gaussian(X, mu_k, cov_k)).astype(np.float64)\n",
    "            totals += gamma_nk\n",
    "            cluster['gamma_nk'] = gamma_nk\n",
    "            cluster['totals'] = totals\n",
    "        for cluster in clusters:\n",
    "            cluster['gamma_nk'] /= cluster['totals']\n",
    "    # M-step: update the parameters, Eqs. (5.18), (5.19), and (5.21)\n",
    "    def maximization_step(self, X, clusters):\n",
    "        N = float(X.shape[0])\n",
    "        for cluster in clusters:\n",
    "            gamma_nk = cluster['gamma_nk']\n",
    "            cov_k = np.zeros((X.shape[1], X.shape[1]))\n",
    "            N_k = np.sum(gamma_nk, axis=0)\n",
    "            alpha_k = N_k / N\n",
    "            mu_k = np.sum(gamma_nk * X, axis=0) / N_k\n",
    "            for j in range(X.shape[0]):\n",
    "                diff = (X[j] - mu_k).reshape(-1, 1)\n",
    "                cov_k += gamma_nk[j] * np.dot(diff, diff.T)\n",
    "            cov_k /= N_k\n",
    "            # Update the component parameters\n",
    "            cluster['alpha_k'] = alpha_k\n",
    "            cluster['mu_k'] = mu_k\n",
    "            cluster['cov_k'] = cov_k\n",
    "    # Log-likelihood of the data, Eq. (5.14)\n",
    "    def get_likelihood(self, X, clusters):\n",
    "        # Every cluster holds a reference to the same 'totals' array\n",
    "        # (the mixture density of each sample), so one copy suffices\n",
    "        sample_likelihoods = np.log(clusters[0]['totals'])\n",
    "        return np.sum(sample_likelihoods), sample_likelihoods\n",
    "    # Train the Gaussian mixture model\n",
    "    def train_gmm(self, X, n_clusters, n_epochs):\n",
    "        clusters = self.init_clusters(X, n_clusters)\n",
    "        likelihoods = np.zeros((n_epochs, ))\n",
    "        scores = np.zeros((X.shape[0], n_clusters))\n",
    "        history = []\n",
    "        # EM iterations\n",
    "        for i in range(n_epochs):\n",
    "            clusters_snapshot = []\n",
    "            for cluster in clusters:\n",
    "                clusters_snapshot.append({\n",
    "                    'mu_k': cluster['mu_k'].copy(),\n",
    "                    'cov_k': cluster['cov_k'].copy()\n",
    "                })\n",
    "            history.append(clusters_snapshot)\n",
    "            # E-step\n",
    "            self.expectation_step(X, clusters)\n",
    "            # M-step\n",
    "            self.maximization_step(X, clusters)\n",
    "            likelihood, sample_likelihoods = self.get_likelihood(X, clusters)\n",
    "            likelihoods[i] = likelihood\n",
    "            print('Epoch: ', i + 1, 'Likelihood: ', likelihood)\n",
    "        for i, cluster in enumerate(clusters):\n",
    "            scores[:, i] = np.log(cluster['gamma_nk']).reshape(-1)\n",
    "        return clusters, likelihoods, scores, sample_likelihoods, history\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{5.6 Practice Tasks}$\n",
    "\n",
    "Complete the following tasks on the Iris dataset:\n",
    "\n",
    "Use the naive Bayes classifier implemented in this chapter to classify the Iris dataset, and compare it with the GaussianNB model from sklearn;\n",
    "\n",
    "Use the Gaussian mixture clustering model implemented in this chapter to cluster the Iris dataset, and compare it with the GMM model from sklearn.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(1) Use the naive Bayes classifier implemented in this chapter to classify the Iris dataset, and compare it with the GaussianNB model from sklearn;"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Add your code here"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(2) Use the Gaussian mixture clustering model implemented in this chapter to cluster the Iris dataset, and compare it with the GMM model from sklearn."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Add your code here"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
