{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "第6讲 朴素贝叶斯法\n",
    "===\n",
    "主讲教师：高鹏\n",
    "---\n",
    "办公地点：网络空间安全学院407\n",
    "---\n",
    "联系方式：pgao@qfnu.edu.cn\n",
    "---\n",
    "面向专业：软件工程（智能数据）\n",
    "---"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# 兼容python2和python3\n",
    "from __future__ import print_function\n",
    "\n",
    "import numpy as np\n",
    "import scipy as sp\n",
    "import pandas as pd\n",
    "import sklearn\n",
    "import math\n",
    "import matplotlib.pyplot as plt\n",
    "from collections import Counter\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 引言\n",
    "\n",
    "朴素贝叶斯（naive Bayes）法是基于贝叶斯定理与特征条件独立假设的分类方法。对于给定的训练数据集，首先基于特征条件独立假设学习输入输出的联合概率分布；然后基于此模型，对给定的输入$x$，利用贝叶斯定理求出后验概率最大的输出$y$。朴素贝叶斯法实现简单，学习与预测的效率都很高，是一种常用的方法。\n",
    "\n",
    "本讲叙述朴素贝叶斯法，包括朴素贝叶斯法的学习与分类、朴素贝叶斯法的参数估计算法。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 朴素贝叶斯法的学习与分类\n",
    "\n",
    "## 基本方法\n",
    "\n",
    "设输入空间$\\mathcal{X}\\subseteq\\mathbb{R}^n$为$n$维向量的集合，输出空间为类标记集合$\\mathcal{Y}=\\{c_1,c_2,\\ldots,c_K\\}$。输入为特征向量$x\\in\\mathcal{X}$，输出为类标记（class label）$y\\in\\mathcal{Y}$。$X$是定义在输入空间$\\mathcal{X}$上的随机向量，$Y$是定义在输出空间$\\mathcal{Y}$上的随机变量。$P(X,Y)$是$X$和$Y$的联合概率分布，训练数据集\n",
    "\n",
    "$$\n",
    "T=\\{(x_1,y_1),(x_2,y_2),\\ldots,(x_N,y_N)\\}\n",
    "$$\n",
    "\n",
    "由$P(X,Y)$独立同分布产生。\n",
    "\n",
    "朴素贝叶斯法通过训练数据集学习联合概率分布$P(X,Y)$。具体地，学习以下先验概率分布及条件概率分布。先验概率分布\n",
    "\n",
    "$$\n",
    "P(Y=c_k),\\quad k=1,2,\\ldots,K\n",
    "$$\n",
    "\n",
    "条件概率分布\n",
    "\n",
    "$$\n",
    "P(X=x|Y=c_k)=P(X^{(1)}=x^{(1)},\\ldots,X^{(n)}=x^{(n)}|Y=c_k),\\quad k=1,2,\\ldots,K \n",
    "$$\n",
    "\n",
    "于是学习到联合概率分布$P(X,Y)$。\n",
    "\n",
    "条件概率分布$P(X=x|Y=c_k)$有指数级数量的参数，其估计实际是不可行的。事实上，假设$x^{(j)}$可取值有$S_j$个，$j=1,2,\\ldots,n$，$Y$可取值有$K$个，那么参数个数为$K\\displaystyle\\prod^n_{j=1}S_j$。\n",
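    "\n",
    "下面用一小段代码验证上述参数个数的计算（其中特征个数与取值数均为假设的示例值）：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# 假设 n=2 个特征，各有 S_j=3 个取值，类别数 K=2（示例值）\n",
    "K = 2\n",
    "S = [3, 3]\n",
    "# 不作独立性假设时，条件概率分布的参数个数为 K * prod(S_j)\n",
    "print(K * int(np.prod(S)))   # 18\n",
    "# 作条件独立假设后，只需估计 K * sum(S_j) 个条件概率\n",
    "print(K * sum(S))            # 12\n",
    "```\n",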
    "\n",
    "朴素贝叶斯法对条件概率分布作了条件独立性的假设。由于这是一个较强的假设，朴素贝叶斯法也由此得名。具体地，条件独立性假设是\n",
    "\n",
    "$$\n",
    "P(X=x|Y=c_k)=P(X^{(1)}=x^{(1)},\\ldots,X^{(n)}=x^{(n)}|Y=c_k)=\\prod^n_{j=1}P(X^{(j)}=x^{(j)}|Y=c_k)\n",
    "$$\n",
    "\n",
    "朴素贝叶斯法实际上学习到生成数据的机制，所以属于生成模型。条件独立假设等于是说用于分类的特征在类确定的条件下都是条件独立的。这一假设使朴素贝叶斯法变得简单，但有时会牺牲一定的分类准确率。\n",
    "\n",
    "朴素贝叶斯法分类时，对给定的输入$x$，通过学习到的模型计算后验概率分布$P(Y=c_k|X=x)$，将后验概率最大的类作为$x$的类输出。后验概率计算根据贝叶斯定理进行\n",
    "\n",
    "$$\n",
    "P(Y=c_k|X=x)=\\frac{P(X=x|Y=c_k)P(Y=c_k)}{\\displaystyle\\sum_kP(X=x|Y=c_k)P(Y=c_k)}\n",
    "$$\n",
    "\n",
    "由上述两式得\n",
    "\n",
    "$$\n",
    "P(Y=c_k|X=x)=\\frac{P(Y=c_k)\\displaystyle\\prod_jP(X^{(j)}=x^{(j)}|Y=c_k)}{\\displaystyle\\sum_kP(Y=c_k)\\prod_jP(X^{(j)}=x^{(j)}|Y=c_k)},\\quad k=1,2,\\ldots,K\n",
    "$$\n",
    "\n",
    "这是朴素贝叶斯法分类的基本公式。于是，朴素贝叶斯分类器可表示为\n",
    "\n",
    "$$\n",
    "y=f(x)=\\arg\\max_{c_k}\\frac{P(Y=c_k)\\displaystyle\\prod_jP(X^{(j)}=x^{(j)}|Y=c_k)}{\\displaystyle\\sum_kP(Y=c_k)\\prod_jP(X^{(j)}=x^{(j)}|Y=c_k)}\n",
    "$$\n",
    "\n",
    "注意到，上式中的分母对所有$c_k$都是相同的，所以\n",
    "\n",
    "$$\n",
    "y=\\arg\\max_{c_k}P(Y=c_k)\\displaystyle\\prod_jP(X^{(j)}=x^{(j)}|Y=c_k)\n",
    "$$\n",
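    "\n",
    "上式的计算可以用几行代码示意（先验与条件概率的数值取自后文的例题，仅用于说明 argmax 的计算方式）：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# P(Y=c_k) 与各特征的条件概率 P(X^(j)=x^(j)|Y=c_k)（数值取自后文例题，对应 x=(2,S)^T）\n",
    "prior = {1: 9/15, -1: 6/15}\n",
    "cond = {1: [3/9, 1/9], -1: [2/6, 3/6]}\n",
    "\n",
    "# 对每个类计算 P(Y=c_k) * prod_j P(X^(j)=x^(j)|Y=c_k)\n",
    "scores = {k: prior[k] * float(np.prod(cond[k])) for k in prior}\n",
    "print(scores)\n",
    "print(max(scores, key=scores.get))   # 乘积最大的类：-1\n",
    "```\n",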
    "\n",
    "## 后验概率最大化的含义\n",
    "\n",
    "朴素贝叶斯法将实例分到后验概率最大的类中。这等价于期望风险最小化。假设选择0-1损失函数\n",
    "\n",
    "$$\n",
    "L(Y,f(X))=\\left\\{\\begin{array}{ll}\n",
    "1, & Y\\neq f(X)\\\\\n",
    "0, & Y=f(X)\n",
    "\\end{array}\\right.\n",
    "$$\n",
    "\n",
    "式中$f(X)$是分类决策函数。这时，期望风险函数为\n",
    "\n",
    "$$\n",
    "R_{exp}(f)= E[L(Y,f(X))]\n",
    "$$\n",
    "\n",
    "期望是对联合分布$P(X,Y)$取的。由此取条件期望\n",
    "\n",
    "$$\n",
    "R_{exp}(f)=E_X\\sum^K_{k=1}[L(c_k,f(X))]P(c_k|X)\n",
    "$$\n",
    "\n",
    "为了使期望风险最小化，只需对$X=x$逐个极小化，由此得到\n",
    "\n",
    "$$\n",
    "\\begin{aligned}\n",
    "f(x)&=\\arg\\min_{y\\in\\mathcal{Y}}\\sum^K_{k=1}L(c_k,y)P(c_k|X=x)\\\\\n",
    "&=\\arg\\min_{y\\in\\mathcal{Y}}\\sum^K_{k=1}P(y\\neq c_k|X=x)\\\\\n",
    "&=\\arg\\min_{y\\in\\mathcal{Y}}(1-P(y=c_k|X=x))\\\\\n",
    "&=\\arg\\max_{y\\in\\mathcal{Y}}P(y=c_k|X=x)\\\\\n",
    "\\end{aligned}\n",
    "$$\n",
    "\n",
    "这样一来，根据期望风险最小化准则就得到了后验概率最大化准则\n",
    "\n",
    "$$\n",
    "f(x)=\\arg\\max_{c_k}P(c_k|X=x)\n",
    "$$\n",
    "\n",
    "即朴素贝叶斯法所采用的原理。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 朴素贝叶斯法的参数估计\n",
    "\n",
    "## 极大似然估计\n",
    "\n",
    "在朴素贝叶斯法中，学习意味着估计$P(Y=c_k)$和$P(X^{(j)}=x^{(j)}|Y=c_k)$。可以应用极大似然估计法估计相应的概率。先验概率$P(Y=c_k)$的极大似然估计是\n",
    "\n",
    "$$\n",
    "\\displaystyle P(Y=c_k)=\\frac{\\displaystyle\\sum_{i=1}^NI(y_i=c_k)}{N},\\quad k=1,2,\\ldots,K\n",
    "$$\n",
    "\n",
    "设第$j$个特征$x^{(j)}$可能取值的集合为$\\{a_{j1},a_{j2},\\ldots,a_{jS_j}\\}$，条件概率$P(X^{(j)}=a_{jl}|Y=c_k)$的极大似然估计是\n",
    "\n",
    "$$\n",
    "P(X^{(j)}=a_{jl}|Y=c_k)=\\frac{\\displaystyle\\sum^N_{i=1}I(x_i^{(j)}=a_{jl},y_i=c_k)}{\\displaystyle\\sum^N_{i=1}I(y_i=c_k)},\\quad j=1,2,\\ldots,n;\\quad l=1,2,\\ldots,S_j;\\quad k=1,2,\\ldots,K\n",
    "$$\n",
    "\n",
    "式中，$x_i^{(j)}$是第$i$个样本的第$j$个特征；$a_{jl}$是第$j$个特征可能取的第$l$个值；$I$为指示函数。\n",
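    "\n",
    "上面两个估计式本质上都是数频数，可以用后文例题的数据做一个小验证（仅取特征$X^{(1)}$）：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# 例题数据：特征 X^(1) 与类标记 Y，共 N=15 个样本\n",
    "x1 = np.array([1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3])\n",
    "y = np.array([-1, -1, 1, 1, -1, -1, -1, 1, 1, 1, 1, 1, 1, 1, -1])\n",
    "\n",
    "# P(Y=1) 的极大似然估计：类频率 9/15\n",
    "print(np.mean(y == 1))\n",
    "# P(X^(1)=2|Y=1) 的极大似然估计：类内频率 3/9\n",
    "print(np.sum((x1 == 2) & (y == 1)) / np.sum(y == 1))\n",
    "```\n",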
    "\n",
    "### 思考 \n",
    "\n",
    "用极大似然估计法推出朴素贝叶斯法中的概率估计公式。\n",
    "\n",
    "**解**  \n",
    "**第1步：**证明公式$\\displaystyle P(Y=c_k) = \\frac{\\displaystyle \\sum_{i=1}^N I(y_i=c_k)}{N}$  \n",
    "由于朴素贝叶斯法假设$Y$是定义在输出空间$\\mathcal{Y}$上的随机变量，不妨设$P(Y=c_k)=p$。  \n",
    "令$\\displaystyle m=\\sum_{i=1}^NI(y_i=c_k)$，即$N$个样本中有$m$个属于类$c_k$，得出似然函数：$$L(p)=\\binom{N}{m}p^m(1-p)^{(N-m)}$$使用微分求极值，对$p$求导并令其为零：$$\\begin{aligned}\n",
    "0 &= \\binom{N}{m}\\left[mp^{(m-1)}(1-p)^{(N-m)}-(N-m)p^m(1-p)^{(N-m-1)}\\right] \\\\\n",
    "& = \\binom{N}{m}\\left[p^{(m-1)}(1-p)^{(N-m-1)}(m-Np)\\right]\n",
    "\\end{aligned}$$可求解得到$\\displaystyle p=0,p=1,p=\\frac{m}{N}$  \n",
    "显然$\\displaystyle P(Y=c_k)=p=\\frac{m}{N}=\\frac{\\displaystyle \\sum_{i=1}^N I(y_i=c_k)}{N}$，得证。\n",
    "\n",
    "----\n",
    "\n",
    "**第2步：**证明公式$\\displaystyle P(X^{(j)}=a_{jl}|Y=c_k) = \\frac{\\displaystyle \\sum_{i=1}^N I(x_i^{(j)}=a_{jl},y_i=c_k)}{\\displaystyle \\sum_{i=1}^N I(y_i=c_k)}$  \n",
    "令$P(X^{(j)}=a_{jl}|Y=c_k)=p$，令$\\displaystyle m=\\sum_{i=1}^N I(y_i=c_k), q=\\sum_{i=1}^N I(x_i^{(j)}=a_{jl},y_i=c_k)$，得出似然函数：$$L(p)=\\binom{m}{q}p^q(1-p)^{m-q}$$使用微分求极值，对$p$求导并令其为零：$$\\begin{aligned}\n",
    "0 &= \\binom{m}{q}\\left[qp^{(q-1)}(1-p)^{(m-q)}-(m-q)p^q(1-p)^{(m-q-1)}\\right] \\\\\n",
    "& = \\binom{m}{q}\\left[p^{(q-1)}(1-p)^{(m-q-1)}(q-mp)\\right]\n",
    "\\end{aligned}$$可求解得到$\\displaystyle p=0,p=1,p=\\frac{q}{m}$  \n",
    "显然$\\displaystyle P(X^{(j)}=a_{jl}|Y=c_k)=p=\\frac{q}{m}=\\frac{\\displaystyle \\sum_{i=1}^N I(x_i^{(j)}=a_{jl},y_i=c_k)}{\\displaystyle \\sum_{i=1}^N I(y_i=c_k)}$，得证。\n",
    "\n",
    "\n",
    "## 学习与分类算法\n",
    "\n",
    "下面给出朴素贝叶斯法的学习与分类算法。\n",
    "\n",
    "**算法1（朴素贝叶斯算法（naive Bayes algorithm））**\n",
    "\n",
    "输入：训练数据$T=\\{(x_1,y_1),(x_2,y_2),\\ldots,(x_N,y_N)\\}$，其中$x_i=(x_i^{(1)},x_i^{(2)},\\ldots,x_i^{(n)})^T$，$x_i^{(j)}$是第$i$个样本的第$j$个特征，$x_i^{(j)}\\in\\{a_{j1},a_{j2},\\ldots,a_{jS_j}\\}$，$a_{jl}$是第$j$个特征可能取的第$l$个值，$j=1,2,\\ldots,n$，$l=1,2,\\ldots,S_j$，$y_i\\in\\{c_1,c_2,\\ldots,c_K\\}$；实例$x$；\n",
    "\n",
    "输出：实例$x$的分类。\n",
    "\n",
    "(1) 计算先验概率及条件概率\n",
    "\n",
    "$$\n",
    "P(Y=c_k)=\\frac{\\displaystyle\\sum_{i=1}^NI(y_i=c_k)}{N},\\quad k=1,2,\\ldots,K\n",
    "$$\n",
    "\n",
    "$$\n",
    "P(X^{(j)}=a_{jl}|Y=c_k)=\\frac{\\displaystyle\\sum^N_{i=1}I(x^{(j)}_i=a_{jl},y_i=c_k)}{\\displaystyle\\sum^N_{i=1}I(y_i=c_k)},\\quad j=1,2,\\ldots,n;\\quad l=1,2,\\ldots,S_j;\\quad k=1,2,\\ldots,K\n",
    "$$\n",
    "\n",
    "(2) 对于给定的实例$x=(x^{(1)},x^{(2)},\\ldots,x^{(n)})^T$，计算\n",
    "\n",
    "$$\n",
    "P(Y=c_k)\\prod^n_{j=1}P(X^{(j)}=x^{(j)}|Y=c_k),\\quad k=1,2,\\ldots,K\n",
    "$$\n",
    "\n",
    "(3) 确定实例$x$的类\n",
    "\n",
    "$$\n",
    "y=\\arg\\max_{c_k}P(Y=c_k)\\prod^n_{j=1}P(X^{(j)}=x^{(j)}|Y=c_k)\n",
    "$$\n",
    "\n",
    "### 例\n",
    "\n",
    "试由下表的训练数据学习一个朴素贝叶斯分类器并确定$x=(2,S)^T$的类标记$y$。表中$X^{(1)}$，$X^{(2)}$为特征，取值的集合分别为$A_1=\\{1,2,3\\}$，$A_2=\\{S,M,L\\}$，$Y$为类标记，$Y\\in C=\\{1,-1\\}$。\n",
    "\n",
    "<p align=\"center\">\n",
    "  <img width=\"700\" src=\"Lesson6-1.jpg\">\n",
    "</p>\n",
    "\n",
    "**解** 根据算法1，由表中数据容易计算下列概率：\n",
    "\n",
    "$$\n",
    "P(Y=1)=\\frac{9}{15},\\quad P(Y=-1)=\\frac{6}{15}\n",
    "$$\n",
    "\n",
    "$$\n",
    "P(X^{(1)}=1|Y=1)=\\frac{2}{9},\\quad P(X^{(1)}=2|Y=1)=\\frac{3}{9}, \\quad P(X^{(1)}=3|Y=1)=\\frac{4}{9}\n",
    "$$\n",
    "\n",
    "$$\n",
    "P(X^{(2)}=S|Y=1)=\\frac{1}{9},\\quad P(X^{(2)}=M|Y=1)=\\frac{4}{9}, \\quad P(X^{(2)}=L|Y=1)=\\frac{4}{9}\n",
    "$$\n",
    "\n",
    "$$\n",
    "P(X^{(1)}=1|Y=-1)=\\frac{3}{6},\\quad P(X^{(1)}=2|Y=-1)=\\frac{2}{6}, \\quad P(X^{(1)}=3|Y=-1)=\\frac{1}{6}\n",
    "$$\n",
    "\n",
    "$$\n",
    "P(X^{(2)}=S|Y=-1)=\\frac{3}{6},\\quad P(X^{(2)}=M|Y=-1)=\\frac{2}{6}, \\quad P(X^{(2)}=L|Y=-1)=\\frac{1}{6}\n",
    "$$\n",
    "\n",
    "对于给定的$x=(2,S)^T$计算\n",
    "\n",
    "$$\n",
    "P(Y=1)P(X^{(1)}=2|Y=1)P(X^{(2)}=S|Y=1)=\\frac{9}{15}\\cdot\\frac{3}{9}\\cdot\\frac{1}{9}=\\frac{1}{45}\n",
    "$$\n",
    "\n",
    "$$\n",
    "P(Y=-1)P(X^{(1)}=2|Y=-1)P(X^{(2)}=S|Y=-1)=\\frac{6}{15}\\cdot\\frac{2}{6}\\cdot\\frac{3}{6}=\\frac{1}{15}\n",
    "$$\n",
    "\n",
    "因为$P(Y=-1)P(X^{(1)}=2|Y=-1)P(X^{(2)}=S|Y=-1)$最大，所以$y=-1$。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# 例题设置：lambda_ 为贝叶斯估计的平滑参数，x 为待分类实例\n",
    "lambda_ = 0.2\n",
    "x = [2, 'S']\n",
    "\n",
    "X1 = [1,2,3]\n",
    "X2 = ['S', 'M', 'L']\n",
    "Y = [1, -1]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class NaiveBayes:\n",
    "    # 朴素贝叶斯分类器：lambda_>0 时为贝叶斯估计，lambda_=0 时退化为极大似然估计\n",
    "    def __init__(self, lambda_):\n",
    "        self.lambda_ = lambda_\n",
    "\n",
    "    def fit(self, X, y):\n",
    "        N, M = X.shape\n",
    "        data = np.hstack((X, y.reshape(N, 1)))\n",
    "\n",
    "        # 每个特征的全部可能取值；按贝叶斯估计的定义，平滑项应使用 S_j（特征取值总数），\n",
    "        # 而不是该类样本中实际出现的取值数，否则未出现的取值会拿不到平滑概率\n",
    "        self.feature_values = [np.unique(X[:, col]) for col in range(M)]\n",
    "\n",
    "        py = {}\n",
    "        pxy = {}\n",
    "        uniquey, countsy = np.unique(y, return_counts=True)\n",
    "        for k, v in zip(uniquey, countsy):\n",
    "            # 先验概率的贝叶斯估计：(count + lambda) / (N + K*lambda)\n",
    "            py[k] = (v + self.lambda_) / (N + len(uniquey) * self.lambda_)\n",
    "            tmp_data = data[data[:, -1] == k]\n",
    "            for col in range(M):\n",
    "                Sj = len(self.feature_values[col])\n",
    "                for a in self.feature_values[col]:\n",
    "                    cnt = np.sum(tmp_data[:, col] == a)\n",
    "                    # 条件概率的贝叶斯估计：(count + lambda) / (count_k + Sj*lambda)\n",
    "                    pxy['X({})={}|Y={}'.format(col + 1, a, k)] = (cnt + self.lambda_) / (v + Sj * self.lambda_)\n",
    "\n",
    "        self.py = py\n",
    "        self.pxy = pxy\n",
    "\n",
    "    def predict(self, x):\n",
    "        res = {}\n",
    "        for k, v in self.py.items():\n",
    "            # 计算 P(Y=c_k) * prod_j P(X^(j)=x^(j)|Y=c_k)\n",
    "            p = v\n",
    "            for i in range(len(x)):\n",
    "                p = p * self.pxy['X({})={}|Y={}'.format(i + 1, x[i], k)]\n",
    "            res[k] = p\n",
    "        print(res)\n",
    "        # 返回乘积（未归一化后验）最大的类\n",
    "        return max(res, key=res.get)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "lambda_ = 0.2\n",
    "d = {'S':0, 'M':1, 'L':2}\n",
    "\n",
    "X = np.array([[1, d['S']], [1, d['M']], [1, d['M']],\n",
    "             [1, d['S']], [1, d['S']], [2, d['S']],\n",
    "             [2, d['M']], [2, d['M']], [2, d['L']],\n",
    "             [2, d['L']], [3, d['L']], [3, d['M']],\n",
    "             [3, d['M']], [3, d['L']], [3, d['L']]])\n",
    "\n",
    "y = np.array([-1, -1, 1, 1, -1, -1, -1, 1, 1, 1, 1, 1, 1, 1, -1])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X, y"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = NaiveBayes(lambda_)\n",
    "model.fit(X,y)\n",
    "model.predict(np.array([2, d['S']]))  # 待分类实例 x=(2,S)^T"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 贝叶斯估计\n",
    "\n",
    "用极大似然估计可能会出现所要估计的概率值为0的情况，这时会影响到后验概率的计算结果，使分类产生偏差。解决这一问题的方法是采用贝叶斯估计。具体地，条件概率的贝叶斯估计是\n",
    "\n",
    "$$\n",
    "P_{\\lambda}(X^{(j)}=a_{jl}|Y=c_k)=\\frac{\\displaystyle\\sum^N_{i=1}I(x_i^{(j)}=a_{jl},y_i=c_k)+\\lambda}{\\displaystyle\\sum^N_{i=1}I(y_i=c_k)+S_j\\lambda}\n",
    "$$\n",
    "\n",
    "式中$\\lambda\\geq 0$。等价于在随机变量各个取值的频数上赋予一个正数$\\lambda>0$。当$\\lambda=0$时就是极大似然估计。常取$\\lambda=1$，这时称为拉普拉斯平滑（Laplace smoothing）。显然，对任何$l=1,2,\\ldots,S_j$，$k=1,2,\\ldots,K$，有\n",
    "\n",
    "$$\n",
    "P_{\\lambda}(X^{(j)}=a_{jl}|Y=c_k)>0\n",
    "$$\n",
    "\n",
    "$$\n",
    "\\sum_{l=1}^{S_j}P_{\\lambda}(X^{(j)}=a_{jl}|Y=c_k)=1\n",
    "$$\n",
    "\n",
    "表明条件概率的贝叶斯估计确为一种概率分布。同样，先验概率的贝叶斯估计是\n",
    "\n",
    "$$\n",
    "P_{\\lambda}(Y=c_k)=\\frac{\\displaystyle \\sum_{i=1}^NI(y_i=c_k)+\\lambda}{N+K\\lambda}\n",
    "$$\n",
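    "\n",
    "取$\\lambda=1$（拉普拉斯平滑）时，可以用例题数据直接验证上面两个估计式：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "lam = 1\n",
    "K, Sj = 2, 3   # 类别数 K=2；特征 X^(1) 有 S_1=3 个取值\n",
    "x1 = np.array([1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3])\n",
    "y = np.array([-1, -1, 1, 1, -1, -1, -1, 1, 1, 1, 1, 1, 1, 1, -1])\n",
    "N = len(y)\n",
    "\n",
    "# 先验概率的贝叶斯估计：(9+1)/(15+2*1) = 10/17\n",
    "print((np.sum(y == 1) + lam) / (N + K * lam))\n",
    "# 条件概率的贝叶斯估计：(3+1)/(9+3*1) = 4/12\n",
    "print((np.sum((x1 == 2) & (y == 1)) + lam) / (np.sum(y == 1) + Sj * lam))\n",
    "```\n",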
    "\n",
    "### 思考\n",
    "\n",
    "用贝叶斯估计法推出朴素贝叶斯法中的概率估计公式。\n",
    "\n",
    "**解**  \n",
    "**第1步：**证明公式$\\displaystyle P(Y=c_k) = \\frac{\\displaystyle \\sum_{i=1}^N I(y_i=c_k) + \\lambda}{N+K \\lambda}$  \n",
    "加入先验概率，在没有任何信息的情况下，可以假设先验概率为均匀概率（即每个事件的概率是相同的）。  \n",
    "可得$\\displaystyle p=\\frac{1}{K} \\Leftrightarrow pK-1=0\\quad(1)$  \n",
    "根据前面极大似然估计的推导，先验概率满足$\\displaystyle pN - \\sum_{i=1}^N I(y_i=c_k) = 0\\quad(2)$  \n",
    "将约束$(1)$乘以参数$\\lambda$后与$(2)$相加，即令$(1) \\cdot \\lambda + (2) = 0$  \n",
    "所以有$$\\lambda(pK-1) + pN - \\sum_{i=1}^N I(y_i=c_k) = 0$$可得$\\displaystyle P(Y=c_k) = \\frac{\\displaystyle \\sum_{i=1}^N I(y_i=c_k) + \\lambda}{N+K \\lambda}$，得证。  \n",
    "\n",
    "----\n",
    "\n",
    "**第2步：**证明公式$\\displaystyle P_{\\lambda}(X^{(j)}=a_{jl} | Y = c_k) = \\frac{\\displaystyle \\sum_{i=1}^N I(x_i^{(j)}=a_{jl},y_i=c_k) + \\lambda}{\\displaystyle \\sum_{i=1}^N I(y_i=c_k) + S_j \\lambda}$   \n",
    "根据第1步，可同理得到$$\n",
    "P(Y=c_k, x^{(j)}=a_{j l})=\\frac{\\displaystyle \\sum_{i=1}^N I(y_i=c_k, x_i^{(j)}=a_{jl})+\\lambda}{N+K S_j \\lambda}$$  \n",
    "$$\\begin{aligned} \n",
    "P(x^{(j)}=a_{jl} | Y=c_k)\n",
    "&= \\frac{P(Y=c_k, x^{(j)}=a_{j l})}{P(y_i=c_k)} \\\\\n",
    "&= \\frac{\\displaystyle \\frac{\\displaystyle \\sum_{i=1}^N I(y_i=c_k, x_i^{(j)}=a_{jl})+\\lambda}{N+K S_j \\lambda}}{\\displaystyle \\frac{\\displaystyle \\sum_{i=1}^N I(y_i=c_k) + \\lambda}{N+K \\lambda}} \\\\\n",
    "&\\quad (\\text{由于 } \\lambda \\text{ 可以任意取值，不妨在先验概率的估计中取 } \\lambda = S_j \\lambda) \\\\\n",
    "&= \\frac{\\displaystyle \\frac{\\displaystyle \\sum_{i=1}^N I(y_i=c_k, x_i^{(j)}=a_{jl})+\\lambda}{N+K S_j \\lambda}}{\\displaystyle \\frac{\\displaystyle \\sum_{i=1}^N I(y_i=c_k) + \\lambda}{N+K S_j \\lambda}} \\\\ \n",
    "&= \\frac{\\displaystyle \\sum_{i=1}^N I(y_i=c_k, x_i^{(j)}=a_{jl})+\\lambda}{\\displaystyle \\sum_{i=1}^N I(y_i=c_k) + \\lambda} \\quad (\\text{其中分母中的 } \\lambda = S_j \\lambda)\\\\\n",
    "&= \\frac{\\displaystyle \\sum_{i=1}^N I(x_i^{(j)}=a_{jl},y_i=c_k) + \\lambda}{\\displaystyle \\sum_{i=1}^N I(y_i=c_k) + S_j \\lambda}\n",
    "\\end{aligned} $$，得证。\n",
    "\n",
    "### 例\n",
    "\n",
    "按照拉普拉斯平滑估计概率，即取$\\lambda=1$，求解前例。\n",
    "\n",
    "**解**  $A_1=\\{1,2,3\\}$，$A_2=\\{S,M,L\\}$，$C=\\{1,-1\\}$。可计算下列概率：\n",
    "\n",
    "$$\n",
    "P(Y=1)=\\frac{10}{17},\\quad P(Y=-1)=\\frac{7}{17}\n",
    "$$\n",
    "\n",
    "$$\n",
    "P(X^{(1)}=1|Y=1)=\\frac{3}{12},\\quad P(X^{(1)}=2|Y=1)=\\frac{4}{12}, \\quad P(X^{(1)}=3|Y=1)=\\frac{5}{12}\n",
    "$$\n",
    "\n",
    "$$\n",
    "P(X^{(2)}=S|Y=1)=\\frac{2}{12},\\quad P(X^{(2)}=M|Y=1)=\\frac{5}{12}, \\quad P(X^{(2)}=L|Y=1)=\\frac{5}{12}\n",
    "$$\n",
    "\n",
    "$$\n",
    "P(X^{(1)}=1|Y=-1)=\\frac{4}{9},\\quad P(X^{(1)}=2|Y=-1)=\\frac{3}{9}, \\quad P(X^{(1)}=3|Y=-1)=\\frac{2}{9}\n",
    "$$\n",
    "\n",
    "$$\n",
    "P(X^{(2)}=S|Y=-1)=\\frac{4}{9},\\quad P(X^{(2)}=M|Y=-1)=\\frac{3}{9}, \\quad P(X^{(2)}=L|Y=-1)=\\frac{2}{9}\n",
    "$$\n",
    "\n",
    "对于给定的$x=(2,S)^T$计算\n",
    "\n",
    "$$\n",
    "P(Y=1)P(X^{(1)}=2|Y=1)P(X^{(2)}=S|Y=1)=\\frac{10}{17}\\cdot\\frac{4}{12}\\cdot\\frac{2}{12}=\\frac{5}{153}\n",
    "$$\n",
    "\n",
    "$$\n",
    "P(Y=-1)P(X^{(1)}=2|Y=-1)P(X^{(2)}=S|Y=-1)=\\frac{7}{17}\\cdot\\frac{3}{9}\\cdot\\frac{4}{9}=\\frac{28}{459}\n",
    "$$\n",
    "\n",
    "因为$P(Y=-1)P(X^{(1)}=2|Y=-1)P(X^{(2)}=S|Y=-1)$最大，所以$y=-1$。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 小结\n",
    "\n",
    "1．朴素贝叶斯法是典型的生成学习方法。生成方法由训练数据学习联合概率分布\n",
    "$P(X,Y)$，然后求得后验概率分布$P(Y|X)$。具体来说，利用训练数据学习$P(X|Y)$和$P(Y)$的估计，得到联合概率分布：\n",
    "\n",
    "$$P(X,Y)=P(Y)P(X|Y)$$\n",
    "\n",
    "概率估计方法可以是极大似然估计或贝叶斯估计。\n",
    "\n",
    "2．朴素贝叶斯法的基本假设是条件独立性，\n",
    "\n",
    "$$\\begin{aligned} P(X&=x | Y=c_{k} )=P\\left(X^{(1)}=x^{(1)}, \\cdots, X^{(n)}=x^{(n)} | Y=c_{k}\\right) \\\\ &=\\prod_{j=1}^{n} P\\left(X^{(j)}=x^{(j)} | Y=c_{k}\\right) \\end{aligned}$$\n",
    "\n",
    "\n",
    "这是一个较强的假设。由于这一假设，模型包含的条件概率的数量大为减少，朴素贝叶斯法的学习与预测大为简化。因而朴素贝叶斯法高效，且易于实现。其缺点是分类的性能不一定很高。\n",
    "\n",
    "3．朴素贝叶斯法利用贝叶斯定理与学到的联合概率模型进行分类预测。\n",
    "\n",
    "$$P(Y | X)=\\frac{P(X, Y)}{P(X)}=\\frac{P(Y) P(X | Y)}{\\sum_{Y} P(Y) P(X | Y)}$$\n",
    " \n",
    "将输入$x$分到后验概率最大的类$y$。\n",
    "\n",
    "$$y=\\arg \\max _{c_{k}} P\\left(Y=c_{k}\\right) \\prod_{j=1}^{n} P\\left(X^{(j)}=x^{(j)} | Y=c_{k}\\right)$$\n",
    "\n",
    "后验概率最大等价于0-1损失函数时的期望风险最小化。\n",
    "\n",
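    "朴素贝叶斯的常见变体在 sklearn 中均有现成实现，下面是一个示意用法（数据为随手构造，仅演示接口）：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB\n",
    "\n",
    "# 随手构造的小数据集：两个非负特征，两个类别\n",
    "X = np.array([[1, 0], [1, 1], [2, 0], [3, 1], [3, 0], [2, 1]])\n",
    "y = np.array([-1, -1, -1, 1, 1, 1])\n",
    "\n",
    "for Model in (GaussianNB, MultinomialNB, BernoulliNB):\n",
    "    clf = Model().fit(X, y)   # MultinomialNB/BernoulliNB 默认 alpha=1，即拉普拉斯平滑\n",
    "    print(Model.__name__, clf.predict(np.array([[2, 0]])))\n",
    "```\n",
    "\n",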
    "**模型**\n",
    "\n",
    "- 高斯模型\n",
    "- 多项式模型\n",
    "- 伯努利模型"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
