{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The Naive Bayes Classifier\n",
    "\n",
    "Naive Bayes is a probabilistic classifier: given a document $d$, it returns the class $\\hat{c}$ with the highest posterior probability among all classes $c\\in C$:\n",
    "\n",
    "$$\\hat{c}=\\underset{c\\in C}{\\text{argmax}}\\ P(c\\vert d)$$\n",
    "\n",
    "Recall Bayes' rule:\n",
    "\n",
    "$$P(x\\vert y)=\\frac{P(y\\vert x)P(x)}{P(y)}$$\n",
    "\n",
    "It expresses any **conditional probability** in terms of three other probabilities.\n",
    "\n",
    "Here $P(x)$ is the **prior probability** and $P(y)$ is the **marginal probability** (the evidence).\n",
    "\n",
    "Bayes' rule follows from the definition of conditional probability:\n",
    "\n",
    "$$P(A\\vert B) = \\frac{P(A\\cap B)}{P(B)}$$\n",
    "\n",
    "and likewise\n",
    "\n",
    "$$P(A\\vert B)P(B) = P(A\\cap B) = P(B\\vert A)P(A)$$\n",
    "\n",
    "so\n",
    "\n",
    "$$P(A\\vert B) = \\frac{P(B\\vert A)P(A)}{P(B)}$$\n",
    "\n",
    "The middle identity above is also called the **product rule of probability**.\n",
    "\n",
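    "As a quick sanity check, the product rule and Bayes' rule can be verified numerically; the event probabilities below are made up for illustration:\n",
    "\n",
    "```python\n",
    "# Hypothetical joint distribution over two events A and B.\n",
    "p_a_and_b = 0.12   # P(A and B)\n",
    "p_a = 0.30         # P(A)\n",
    "p_b = 0.40         # P(B)\n",
    "\n",
    "# Definition of conditional probability: P(A|B) = P(A and B) / P(B)\n",
    "p_a_given_b = p_a_and_b / p_b\n",
    "p_b_given_a = p_a_and_b / p_a\n",
    "\n",
    "# Product rule: P(A|B) P(B) = P(A and B) = P(B|A) P(A)\n",
    "assert abs(p_a_given_b * p_b - p_a_and_b) < 1e-12\n",
    "assert abs(p_b_given_a * p_a - p_a_and_b) < 1e-12\n",
    "\n",
    "# Bayes' rule: P(A|B) = P(B|A) P(A) / P(B)\n",
    "assert abs(p_b_given_a * p_a / p_b - p_a_given_b) < 1e-12\n",
    "```\n",
    "\n",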
    "Returning to $\\hat{c}$, we now have:\n",
    "\n",
    "$$\\hat{c}=\\underset{c\\in C}{\\text{argmax}}\\ P(c\\vert d)=\\underset{c\\in C}{\\text{argmax}}\\ \\frac{P(d\\vert c)P(c)}{P(d)}$$\n",
    "\n",
    "Since $P(d)$ is the same for every class $c$, it can be dropped:\n",
    "\n",
    "$$\\hat{c}=\\underset{c\\in C}{\\text{argmax}}\\ P(c\\vert d)=\\underset{c\\in C}{\\text{argmax}}\\ P(d\\vert c)P(c)$$\n",
    "\n",
    "Here $P(d\\vert c)$ is called the **likelihood** and $P(c)$ the **prior probability**.\n",
    "\n",
    "Now suppose document $d$ is represented by $n$ features. Then:\n",
    "\n",
    "$$\\hat{c}=\\underset{c\\in C}{\\text{argmax}}\\ \\overbrace{P(f_1,f_2,\\dots,f_n\\vert c)}^{\\text{likelihood}}\\ \\overbrace{P(c)}^{\\text{prior}}$$\n",
    "\n",
    "Estimating this **likelihood** directly would require a huge number of parameters and an impossibly large training set.\n",
    "\n",
    "Naive Bayes therefore makes two assumptions:\n",
    "\n",
    "* position does not matter;\n",
    "* the $P(f_i\\vert c)$ are conditionally independent given the class $c$ (the **naive Bayes assumption**).\n",
    "\n",
    "Under these assumptions the likelihood factorizes:\n",
    "\n",
    "$$P(f_1,f_2,\\dots,f_n\\vert c)=P(f_1\\vert c)P(f_2\\vert c)\\dots P(f_n\\vert c)$$\n",
    "\n",
    "That is:\n",
    "\n",
    "$$c_{NB}=\\underset{c\\in C}{\\text{argmax}}\\ P(c)\\prod_{f\\in F}P(f\\vert c)$$\n",
    "\n",
    "The **bag-of-words** model ignores word position and uses word occurrence counts as features, giving:\n",
    "\n",
    "$$c_{NB}=\\underset{c\\in C}{\\text{argmax}}\\ P(c)\\prod_{i\\in positions}P(w_i\\vert c)$$\n",
    "\n",
    "To avoid numerical underflow and to speed up computation, the log form is normally used:\n",
    "\n",
    "$$c_{NB}=\\underset{c\\in C}{\\text{argmax}}\\left(\\log P(c)+\\sum_{i\\in positions}\\log P(w_i\\vert c)\\right)$$\n",
    "\n",
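    "Under made-up toy parameters (the priors and likelihoods below are illustrative, not learned from data), this decision rule looks like:\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "# Hypothetical class priors and per-class word likelihoods (toy numbers).\n",
    "prior = {'pos': 0.5, 'neg': 0.5}\n",
    "likelihood = {\n",
    "    'pos': {'great': 0.20, 'boring': 0.02},\n",
    "    'neg': {'great': 0.05, 'boring': 0.15},\n",
    "}\n",
    "\n",
    "def predict(words):\n",
    "    # c_NB = argmax_c [ log P(c) + sum_i log P(w_i | c) ]\n",
    "    scores = {}\n",
    "    for c in prior:\n",
    "        scores[c] = math.log(prior[c]) + sum(math.log(likelihood[c][w]) for w in words)\n",
    "    return max(scores, key=scores.get)\n",
    "\n",
    "print(predict(['great', 'great', 'boring']))  # prints pos\n",
    "```\n",
    "\n",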
    "## Training the Naive Bayes Classifier\n",
    "\n",
    "To estimate $P(c)$ and $P(f_i\\vert c)$ we again use **maximum likelihood estimation (MLE)**:\n",
    "\n",
    "$$\\hat{P}(c)=\\frac{N_c}{N_{doc}}$$\n",
    "\n",
    "$$\\hat{P}(w_i\\vert c)=\\frac{count(w_i,c)}{\\sum_{w\\in V}count(w,c)}$$\n",
    "\n",
    "To keep any single probability from being zero (which would zero out the whole product), we apply **Laplace smoothing (add-one smoothing)**:\n",
    "\n",
    "$$\\hat{P}(w_i\\vert c)=\\frac{count(w_i,c)+1}{\\sum_{w\\in V}(count(w,c)+1)}=\\frac{count(w_i,c)+1}{(\\sum_{w\\in V}count(w,c))+\\vert V\\vert}$$\n",
    "\n",
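    "These estimates take only a few lines of Python; the two-document training corpus below is invented for illustration:\n",
    "\n",
    "```python\n",
    "from collections import Counter\n",
    "\n",
    "# Invented toy corpus: (document tokens, class label)\n",
    "train = [\n",
    "    (['good', 'good', 'fun'], 'pos'),\n",
    "    (['bad', 'boring'], 'neg'),\n",
    "]\n",
    "\n",
    "# Prior: P(c) = N_c / N_doc\n",
    "n_doc = len(train)\n",
    "n_c = Counter(label for _, label in train)\n",
    "prior = {c: n_c[c] / n_doc for c in n_c}\n",
    "\n",
    "# Vocabulary V is shared by all classes\n",
    "vocab = {w for words, _ in train for w in words}\n",
    "\n",
    "# Per-class word counts\n",
    "counts = {c: Counter() for c in n_c}\n",
    "for words, label in train:\n",
    "    counts[label].update(words)\n",
    "\n",
    "# Laplace smoothing: P(w|c) = (count(w,c) + 1) / (total count in c + |V|)\n",
    "likelihood = {\n",
    "    c: {w: (counts[c][w] + 1) / (sum(counts[c].values()) + len(vocab))\n",
    "        for w in vocab}\n",
    "    for c in n_c\n",
    "}\n",
    "\n",
    "print(prior['pos'])               # prints 0.5\n",
    "print(likelihood['pos']['good'])  # (2 + 1) / (3 + 4)\n",
    "```\n",
    "\n",
    "Note that the smoothed likelihoods for each class still sum to 1 over the vocabulary.\n",
    "\n",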
    "What about **unknown words**, i.e. test words that never occur in the training data? The answer: **remove them from the test document and ignore them; no probability is computed for them**.\n",
    "\n",
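    "A minimal sketch of this policy at prediction time (the vocabulary and test tokens are hypothetical):\n",
    "\n",
    "```python\n",
    "# Training vocabulary (hypothetical).\n",
    "vocab = {'good', 'fun', 'bad', 'boring'}\n",
    "\n",
    "# 'fantastic' never appeared in training, so it is simply dropped.\n",
    "test_doc = ['good', 'fantastic', 'fun']\n",
    "known = [w for w in test_doc if w in vocab]\n",
    "\n",
    "print(known)  # prints ['good', 'fun']\n",
    "```\n",
    "\n",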
    "## Evaluation\n",
    "\n",
    "### Precision\n",
    "\n",
    "The fraction of items the system labeled positive that are truly positive: $\\text{Precision}=\\frac{TP}{TP+FP}$, where $TP$ and $FP$ are true and false positives.\n",
    "\n",
    "### Recall\n",
    "\n",
    "The fraction of truly positive items the system found: $\\text{Recall}=\\frac{TP}{TP+FN}$, where $FN$ is the number of false negatives.\n",
    "\n",
    "### F-measure\n",
    "\n",
    "$F_1=\\frac{2PR}{P+R}$, the harmonic mean of precision ($P$) and recall ($R$).\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
