{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "66ab5d4d-9e72-4bd5-aea8-cccd2a2d0b0c",
   "metadata": {},
   "source": [
    "# 案例：使用贝叶斯分类器完成文本分类\n",
    "\n",
    "## 一、 案例信息\n",
    "\n",
    "### 1. 实验概述\n",
    "\n",
    "&emsp;&emsp;文本分类是现代机器学习应用中的一大模块，更是自然语言处理的基础之一。我们可以通过将文字数据处理成数字数据，然后使用贝叶斯来帮助我们判断一段话，或者一篇文章中的主题分类，感情倾向，甚至文章体裁。现在，绝大多数社交媒体数据的自动化采集，都是依靠首先将文本编码成数字，然后按分类结果采集需要的信息。虽然现在自然语言处理领域大部分由深度学习所控制，贝叶斯分类器依然是文本分类中的一颗明珠。现在，我们就来学习一下，贝叶斯分类器是怎样实现文本分类的。\n",
    "\n",
    "### 2. 实验目的\n",
    "\n",
    "- 学习了解朴素贝叶斯的基本介绍\n",
    "- 学习掌握朴素贝叶斯算法相关的统计学知识（条件概率、全概率、以及贝叶斯公式推断）\n",
    "- sklearn 中的朴素贝叶斯（高斯朴素贝叶斯、多项式朴素贝叶斯、伯努利朴素贝叶斯）三种方法的详解\n",
    "- 使用贝叶斯分类器完成文本分类\n",
    "\n",
    "### 3. 实验环境\n",
    "\n",
    "- python>=3.6\n",
    "- numpy\n",
    "- pandas\n",
    "- sklearn\n",
    "\n",
    "## 二、 实验指导\n",
    "\n",
    "### 1. 关联技术\n",
    "\n",
    "- 多项式朴素贝叶斯以及其变化\n",
    "  - 多项式朴素贝叶斯\n",
    "  - 伯努利朴素贝叶斯\n",
    "\n",
    "### 2. 实验步骤\n",
    "\n",
    "- 朴素贝叶斯介绍\n",
    "- 朴素贝叶斯 Python 实现\n",
    "- sklearn 中的朴素贝叶斯\n",
    "- 贝叶斯分类器做文本分类\n",
    "\n",
    "### 3. 实验效果\n",
    "\n",
    "---\n",
    "\n",
    "## 三、 实验操作\n",
    "\n",
    "#### 01. 步骤一：朴素贝叶斯介绍\n",
    "\n",
    "#### 步骤操作说明\n",
    "\n",
    "贝叶斯分类算法是统计学中的一种概率分类方法，朴素贝叶斯分类是其中最简单的一种。朴素贝叶斯法（Naive Bayes）是基于贝叶斯定理与特征条件独立假设的分类方法，其分类原理是利用贝叶斯公式，根据某特征的先验概率计算出其后验概率，然后选择具有最大后验概率的类作为该样本所属的类。之所以称之为“朴素”，是因为它只做最原始、最简单的假设：所有特征之间相互独立。\n",
    "\n",
    "假设某样本$X$有 $n$ 个属性$a_1,a_2,\\ldots,a_n$，那么有$P(X)=P(a_1,a_2,\\ldots,a_n)=P(a_1)P(a_2)\\cdots P(a_n)$。满足这样的公式就说明特征之间统计独立。\n",
    "\n",
    "#### 1. 朴素贝叶斯相关的统计学知识\n",
    "\n",
    "在了解朴素贝叶斯算法之前，我们需要对必需的统计学知识做一个回顾。贝叶斯学派很古老，但从诞生起直到一百年前一直不是主流，主流是频率学派。频率学派的权威皮尔逊和费歇尔都对贝叶斯学派不屑一顾，但贝叶斯学派凭借在现代特定领域的出色应用表现，为自己赢得了半壁江山。\n",
    "\n",
    "贝叶斯学派的思想可以概括为“先验概率+数据=后验概率”。也就是说，我们在实际问题中需要得到的后验概率，可以通过先验概率和数据综合得到。数据大家好理解；被频率学派攻击的是先验概率。一般来说，先验概率就是我们对数据所在领域的历史经验，但是这个经验常常难以量化或者模型化，于是贝叶斯学派大胆地假设先验分布的模型，比如正态分布、beta 分布等。这个假设一般没有特定的依据，因此一直被频率学派认为很荒谬。虽然难以从严密的数学逻辑里推出贝叶斯学派的方法，但是在很多实际应用中，贝叶斯理论很好用，比如垃圾邮件分类、文本分类。\n",
    "\n",
    "#### 2. 条件概率公式\n",
    "\n",
    "条件概率公式（Conditional probability），是指在事件 B 发生的情况下，事件 A 发生的概率，用$P(A|B)$来表示。\n",
    "![image-20220103204918462](使用贝叶斯分类器完成文本分类.assets/image-20220103204918462.png)\n",
    "   根据文氏图可知：在事件 B 发生的情况下，事件 A 发生的概率就是$P(A\\cap B)$除以$P(B)$。\n",
    "\n",
    "$$\n",
    "   P(A|B) = \\frac{ P(A\\cap B)}{ P(B) }\\\\\n",
    "   => P(A\\cap B) = P(A|B)P(B)\n",
    "$$\n",
    "\n",
    "同理可得：\n",
    "\n",
    "$$P(A\\cap B) = P(B|A)P(A)$$\n",
    "\n",
    "所以，\n",
    "\n",
    "$$\n",
    "P(A|B)P(B)=P(B|A)P(A)\\\\\n",
    "   => P(A|B)=\\frac{P(B|A)P(A)}{P(B)}\n",
    "$$\n",
    "\n",
    "#### 3. 全概率公式\n",
    "\n",
    "如果事件$A_1,A_2,\\ldots,A_n$构成一个完备事件组且都有正概率，那么对于任意一个事件 B 则有：\n",
    "\n",
    "$$\n",
    "\\begin{aligned}\n",
    "P(B)&=P(BA_1)+P(BA_2)+\\ldots+P(BA_n) \\\\\n",
    "    &=P(B|A_1)P(A_1)+P(B|A_2)P(A_2)+\\ldots+P(B|A_n)P(A_n)\n",
    " \\end{aligned}\n",
    "$$\n",
    "\n",
    "$$P(B)=\\sum_{i=1}^{n}{P(A_i)P(B|A_i)}$$\n",
    "\n",
    "#### 4. 贝叶斯公式推断\n",
    "\n",
    "根据条件概率和全概率公式，可以得到贝叶斯公式如下：\n",
    "$$P(A|B)=P(A)\\frac{P(B|A)}{P(B)}$$\n",
    "   转换为分类任务的表达式：\n",
    "$$P(类别|特征)=P(类别)\\frac{P(特征|类别)}{P(特征)}$$\n",
    "\n",
    "$$P(A_i | B) =P(A_i) \\frac{ P(B|A_i)} {\\sum_{i=1}^{n}{P(A_i)P(B|A_i)}}$$\n",
    "\n",
    "- P(A)称为“先验概率”（Prior probability），即在 B 事件发生之前，我们对 A 事件概率的一个判断。\n",
    "- P(A|B)称为“后验概率”（Posterior probability），即在 B 事件发生之后，我们对 A 事件概率的重新评估。\n",
    "- P(B|A)/P(B)称为“可能性函数”（Likelihood），这是一个调整因子，使得预估概率更接近真实概率。\n",
    "  - 如果“可能性函数”>1，意味着“先验概率”被增强，事件 A 的发生的可能性变大；\n",
    "  - 如果“可能性函数”=1，意味着 B 事件无助于判断事件 A 的可能性；\n",
    "  - 如果“可能性函数”<1，意味着“先验概率”被削弱，事件 A 的发生的可能性变小；\n",
    "\n",
    "所以贝叶斯推断可以理解为：后验概率=先验概率\\*调整因子"
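    ,
    "\n",
    "下面用一组虚构的数字（垃圾邮件先验取 0.3 等，均为演示假设）验证“后验概率=先验概率\\*调整因子”：\n",
    "\n",
    "```python\n",
    "# 演示：贝叶斯公式的数值计算，所有概率均为演示假设\n",
    "p_spam = 0.3             # 先验 P(A)：邮件是垃圾邮件的概率\n",
    "p_word_given_spam = 0.8  # P(B|A)：垃圾邮件中出现某个词的概率\n",
    "p_word = 0.8 * 0.3 + 0.1 * 0.7  # 全概率公式求 P(B)\n",
    "\n",
    "posterior = p_spam * p_word_given_spam / p_word  # 后验 P(A|B)\n",
    "print(posterior)  # 调整因子 0.8/0.31 > 1，先验被增强\n",
    "```"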
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f149a1a3-5a61-4a6a-9904-245a73d52c31",
   "metadata": {},
   "source": [
    "#### 02. 步骤二：朴素贝叶斯 Python 实现\n",
    "\n",
    "#### 步骤操作说明\n",
    "\n",
    "过滤广告、垃圾邮件\n",
    "\n",
    "#### 1. 导入所需包"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "0091bd29-7e84-4540-bad3-7d9738db94c9",
   "metadata": {},
   "outputs": [],
   "source": [
    "from numpy import *  # 引入下文直接使用的 array、ones、log、sum 等函数\n",
    "from functools import reduce"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a23d4496-4c4a-47b6-ad42-64c6054066b9",
   "metadata": {},
   "source": [
    "#### 2. 加载数据集合及其对应的分类\n",
    "\n",
    "从训练数据集中提取出属性矩阵和分类数据\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "41c7a41d-bd01-40ab-9c70-0250f26d8c06",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 广告、垃圾标识\n",
    "adClass = 1\n",
    "\n",
    "def loadDataSet():\n",
    "    wordsList = [['周六', '公司', '一起', '聚餐', '时间'],\n",
    "                     ['优惠', '返利', '打折', '优惠', '金融', '理财'],\n",
    "                     ['喜欢', '机器学习', '一起', '研究', '欢迎', '贝叶斯', '算法', '公式'],\n",
    "                     ['公司', '发票', '税点', '优惠', '增值税', '打折'],\n",
    "                     ['北京', '今天', '雾霾', '不宜', '外出', '时间', '在家', '讨论', '学习'],\n",
    "                     ['招聘', '兼职', '日薪', '保险', '返利']]\n",
    "    # 1 表示广告/垃圾邮件，0 表示正常邮件\n",
    "    classVec = [0, 1, 0, 1, 0, 1]\n",
    "    return wordsList, classVec"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "683ac248-1bdb-4172-8647-dd4e5a8c12be",
   "metadata": {},
   "source": [
    "#### 3. 生成包含所有单词的 list\n",
    "\n",
    "此处生成的单词向量是不重复的，从第一个和第二个集合开始进行并集操作，最后返回一个不重复的并集。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "7f40a77b-28f1-4254-b320-81d826ae98ba",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['喜欢', '理财', '公式', '研究', '北京', '讨论', '一起', '欢迎', '周六', '打折', '学习', '外出', '招聘', '公司', '优惠', '今天', '税点', '兼职', '不宜', '时间', '贝叶斯', '增值税', '发票', '在家', '雾霾', '金融', '算法', '日薪', '聚餐', '保险', '返利', '机器学习']\n"
     ]
    }
   ],
   "source": [
    "docList, classVec = loadDataSet()\n",
    "def doc2VecList(docList):\n",
    "    a = list(reduce(lambda x, y: set(x) | set(y), docList))\n",
    "    return a\n",
    "\n",
    "allWordsVec = doc2VecList(docList)\n",
    "print(allWordsVec)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "89ccea8f-1433-4cb1-a314-5c646aaff3e6",
   "metadata": {},
   "source": [
    "- Python 中的 & | 是位运算符（作用于集合时分别表示交集和并集），and or 是逻辑运算符\n",
    "- a and b：若 a 为假则短路返回 a，否则返回 b；因此整体为真时，返回的是最后参与运算的那个值\n",
    "- a or b：若 a 为真则短路返回 a，否则返回 b，即返回第一个为真的值\n",
    "- a & b：当 a 和 b 为两个 set 时，返回二者的交集；a | b 返回二者的不重复并集"
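    ,
    "\n",
    "上述规则可以用下面的小例子快速验证：\n",
    "\n",
    "```python\n",
    "# 演示：集合的 & | 运算与 and/or 的短路返回值\n",
    "a = {1, 2, 3}\n",
    "b = {3, 4}\n",
    "print(a & b)    # 交集：{3}\n",
    "print(a | b)    # 不重复并集：{1, 2, 3, 4}\n",
    "print(0 and 5)  # and 遇到假值直接返回它：0\n",
    "print(2 and 5)  # 全为真时返回最后一个值：5\n",
    "print(0 or 5)   # or 返回第一个为真的值：5\n",
    "```"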
   ]
  },
  {
   "cell_type": "markdown",
   "id": "10a5a6e2-a323-43c3-be31-f5de7fa9870e",
   "metadata": {},
   "source": [
    "#### 4. 把单词转化为词向量\n",
    "\n",
    "把单词转化为词向量：统计数据集中每一行每个单词出现的次数，单词每出现一次，数组中对应位置的计数加 1\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "3de52290-62ee-447f-a7ed-ea3c4cafd6c0",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[array([0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0,\n",
       "        0, 0, 0, 0, 0, 0, 1, 0, 0, 0]),\n",
       " array([0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0,\n",
       "        0, 0, 0, 1, 0, 0, 0, 0, 1, 0]),\n",
       " array([1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,\n",
       "        0, 0, 0, 0, 1, 0, 0, 0, 0, 1]),\n",
       " array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1,\n",
       "        1, 0, 0, 0, 0, 0, 0, 0, 0, 0]),\n",
       " array([0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0,\n",
       "        0, 1, 1, 0, 0, 0, 0, 0, 0, 0]),\n",
       " array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0,\n",
       "        0, 0, 0, 0, 0, 1, 0, 1, 1, 0])]"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def words2Vec(vecList, inputWords):\n",
    "    # 初始化一个与总词表等长的全 0 一维计数数组\n",
    "    resultVec = [0] * len(vecList)\n",
    "    for word in inputWords:\n",
    "        if word in vecList:\n",
    "            resultVec[vecList.index(word)] += 1  # 在单词出现的位置上的计数加1\n",
    "        else:\n",
    "            print('没有发现此单词')\n",
    "    return array(resultVec)\n",
    "\n",
    "#构建词向量矩阵，计算docList数据集中每一行每个单词出现的次数，其中返回的trainMat是一个数组的数组\n",
    "trainMat = list(map(lambda x: words2Vec(allWordsVec, x), docList))\n",
    "trainMat"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "24512dce-aeb6-43b4-8eab-6c855e061afe",
   "metadata": {},
   "source": [
    "#### 5. 计算生成每个词对于类别上的概率\n",
    "\n",
    "其中概率是以 ln 进行计算的\n",
    "\n",
    "- p0V：每个单词在类别 0（正常）下出现的概率\n",
    "- p1V：每个单词在类别 1（广告/垃圾）下出现的概率\n",
    "- pClass1：训练集中类别为 1 的先验概率\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "515ae5f4-750e-4d82-ba6e-3d1b68a0b0e0",
   "metadata": {},
   "source": [
    "- 统计每个分类的词的总数，以训练数据集的行数作为遍历次数，逐行遍历\n",
    "- 如果当前类别为 1，那么 p1Num 会加上当前单词矩阵行数据，依次遍历\n",
    "- 如果当前类别为 0，那么 p0Num 会加上当前单词矩阵行数据，依次遍历\n",
    "- 同时用 p1Words 和 p0Words 统计对应类别下单词的总个数\n",
    "- 计算每种类型里面， 每个单词出现的概率\n",
    "- 朴素贝叶斯分类中，y=x 是单调递增函数，y=ln(x)也是单调的递增的\n",
    "- 如果 x1>x2 那么 ln(x1)>ln(x2)\n",
    "- 在计算过程中，由于概率的值较小，所以我们就取对数进行比较，根据对数的特性\n",
    "- ln(MN) = ln(M)+ln(N)\n",
    "- ln(M/N) = ln(M)-ln(N)\n",
    "- ln(M\\*\\*n)= nln(M)\n",
    "- 注：其中 ln 可替换为 log 的任意对数底\n",
    "\n",
    "```\n",
    "    p0Vec = log(p0Num / p0Words)\n",
    "    p1Vec = log(p1Num / p1Words)\n",
    "\n",
    "    # 计算在类别中1出现的概率，0出现的概率可通过1-p得到\n",
    "    pClass1 = sum(trainClass) / float(numTrainClass)\n",
    "    return p0Vec, p1Vec, pClass1\n",
    "```\n",
    "\n",
    "训练计算每个词在分类上的概率\n",
    "\n",
    "```\n",
    "p0V, p1V, pClass1 = trainNB(trainMat, classVec)\n",
    "pClass1\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "2f2b3c5a-aa90-4b31-a8a7-a724274dd18f",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.5"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def trainNB(trainMatrix, trainClass):\n",
    "    \"\"\"计算，生成每个词对于类别上的概率\"\"\"\n",
    "    # 类别行数\n",
    "    numTrainClass = len(trainClass)\n",
    "    # 列数\n",
    "    numWords = len(trainMatrix[0])\n",
    "    #重点\n",
    "    # 全部初始化为 1，防止出现概率为 0 的情况影响计算；在数据量较大时，分子和分母同时加上一个小常数不会影响主要结果\n",
    "    p0Num = ones(numWords)\n",
    "    p1Num = ones(numWords)\n",
    "    p0Words = 2.0      # 分母初始化为 2，与分子的 +1 对应，即分子分母同时加上平滑常数 λ\n",
    "    p1Words = 2.0\n",
    "    #统计不同类别下所有样本各个特征的总数\n",
    "    for i in range(numTrainClass): \n",
    "        if trainClass[i] == 1:\n",
    "            # 数组在对应的位置上相加\n",
    "            p1Num += trainMatrix[i] #记录每个特征出现的次数\n",
    "            p1Words += sum(trainMatrix[i]) \n",
    "        else:\n",
    "            p0Num += trainMatrix[i]\n",
    "            p0Words += sum(trainMatrix[i])\n",
    "            \n",
    "    p0Vec = log(p0Num / p0Words)\n",
    "    p1Vec = log(p1Num / p1Words)\n",
    "    # 计算在类别中1出现的概率，0出现的概率可通过1-p得到\n",
    "    pClass1 = sum(trainClass) / float(numTrainClass)\n",
    "    return p0Vec, p1Vec, pClass1\n",
    "p0V, p1V, pClass1 = trainNB(trainMat, classVec)\n",
    "pClass1"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ef6e7167-009e-45d6-b071-747ef53bc49d",
   "metadata": {},
   "source": [
    "#### 6. 判断当前数据的分类情况\n",
    "\n",
    "通过将单词向量 testVec 代入，根据贝叶斯公式比较各个类别的后验概率，取 max(p0, p1) 对应的类别作为推断结果。y=x 是单调递增函数，y=ln(x) 也是单调递增的：如果 x1 > x2，那么 ln(x1) > ln(x2)。由于概率连乘的值太小，我们取对数计算，根据对数特性 ln(ab) = ln(a) + ln(b)，连乘可以化为连加，从而简化计算。sum 是 numpy 的函数；testVec 是单词计数向量，p1Vec 是类别 1 下各单词的对数概率向量，二者逐元素相乘再求和，并加上 log(pClass1)，即得到 ln(p(X1|Yj)*p(X2|Yj)*...*p(Xn|Yj)*p(Yj))，其中 pClass1 即为 p(Yj)。此处计算出的 p1 是对数形式，由于对数变换保持大小关系，而贝叶斯分类只需比较后验概率的相对大小，不需要确切的概率值，因此这样比较是可行的。"
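    ,
    "\n",
    "下面的小实验（数值为演示假设）说明为什么要在对数空间比较概率：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# 1000 个 0.01 连乘会下溢为 0，对数求和则不会\n",
    "probs = np.full(1000, 0.01)\n",
    "print(np.prod(probs))         # 0.0，浮点下溢\n",
    "print(np.sum(np.log(probs)))  # 约 -4605.17，仍可用于比较大小\n",
    "```"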
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "81ac061a-ace7-4daf-9123-49a84ad4de44",
   "metadata": {},
   "outputs": [],
   "source": [
    "def classifyNB(testVec, p0Vec, p1Vec, pClass1):\n",
    "    p1 = sum(testVec * p1Vec) + log(pClass1)\n",
    "    p0 = sum(testVec * p0Vec) + log(1 - pClass1)\n",
    "    if p0 > p1:\n",
    "        return 0\n",
    "    return 1"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0797fb86-0daf-4d7d-9e00-7cae143ae7fa",
   "metadata": {},
   "source": [
    "#### 7. 打印出测试结果"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "1d869cce-7ec8-412c-81ce-fc5734a72fa2",
   "metadata": {},
   "outputs": [],
   "source": [
    "def printClass(words, testClass):\n",
    "    if testClass == adClass:\n",
    "        print(words, '推测为：广告邮件')\n",
    "    else:\n",
    "        print(words, '推测为：正常邮件')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "99778efd-23ec-4b11-94e7-5ac8bf09fbf7",
   "metadata": {},
   "source": [
    "#### 8. 测试新数据，进行分类实现"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "adc07840-7f48-49e7-90eb-17fb7dc759d5",
   "metadata": {},
   "outputs": [],
   "source": [
    "testWords = ['公司', '聚餐', '讨论', '贝叶斯'] # 测试数据集\n",
    "testVec = words2Vec(allWordsVec, testWords) # 转换成由 32 个单词位置构成的计数向量，单词每出现一次对应位置加 1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "8515f8b8-5a2d-402b-a87f-912d87e5898d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['公司', '聚餐', '讨论', '贝叶斯'] 推测为：正常邮件\n"
     ]
    }
   ],
   "source": [
    "testClass = classifyNB(testVec, p0V, p1V, pClass1)\n",
    "printClass(testWords, testClass) # 打印出测试结果"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "19fd0c87-b521-4df2-a034-b4fb57fcb053",
   "metadata": {},
   "source": [
    "#### 03. 步骤三：sklearn 中的朴素贝叶斯\n",
    "\n",
    "#### 步骤操作说明\n",
    "\n",
    "因为我们需要事先假设特征的条件概率服从某种概率分布（共三种），因此 sklearn 中对应有三种朴素贝叶斯算法：\n",
    "\n",
    "- 高斯朴素贝叶斯分类器（条件概率符合高斯分布）\n",
    "- 多项式朴素贝叶斯分类器（条件概率符合多项式分布）\n",
    "- 伯努利朴素贝叶斯分类器（条件概率符合伯努利分布）\n",
    "\n",
    "这三个类适用的分类场景各不相同，一般来说，如果样本特征的分布大部分是连续值，使用 GaussianNB 会比较好。如果样本特征的分布大部分是多元离散值，使用 MultinomialNB 比较合适。而如果样本特征是二元离散值或者很稀疏的多元离散值，应该使用 BernoulliNB。\n",
    "\n",
    "#### 1. 高斯朴素贝叶斯（GaussianNB）\n",
    "\n",
    "在高斯朴素贝叶斯中，假设每个特征都是连续的，并且在每个类别内都呈高斯分布。高斯分布又称为正态分布。GaussianNB 假设特征的条件概率为正态分布，即如下式：\n",
    "\n",
    "$$P(x_j|Y=C_k)=\\frac{1}{\\sqrt{2\\pi\\sigma_k^2}}\\exp\\left(-\\frac{(x_j-\\mu_k)^2}{2\\sigma_k^2}\\right)$$\n",
    "\n",
    "其中$C_k$为 Y 的第 k 类类别。$\\mu_k$和$\\sigma_k^2$为需要从训练集估计的值。\n",
    "\n",
    "GaussianNB 会根据训练集求出$\\mu_k$和$\\sigma_k^2$。$\\mu_k$ 为在样本类别$C_k$中，所有$X_j(j=1,2,3…)$的平均值。$\\sigma_k^2$为在样本类别$C_k$中，所有$X_j(j=1,2,3…)$的方差。\n",
    "\n",
    "GaussianNB 类的主要参数仅有一个，即先验概率 priors ，对应 Y 的各个类别的先验概率$P(Y=C_k)$。这个值默认不给出，如果不给出此时$P(Y=C_k)=\\frac{m_k}{m}$。其中 m 为训练集样本总数量，$m_k$为输出为第 k 类别的训练集样本数。如果给出的话就以 priors 为准。\n",
    "\n",
    "在使用 GaussianNB 的 fit 方法拟合数据后，我们可以进行预测。此时预测有三种方法，包括 predict，predict_log_proba 和 predict_proba。\n",
    "\n",
    "- predict 方法就是我们最常用的预测方法，直接给出测试集的预测类别输出。\n",
    "- predict_proba 则不同，它会给出测试集样本在各个类别上预测的概率。容易理解，predict_proba 预测出的各个类别概率里的最大值对应的类别，也就是 predict 方法得到的类别。\n",
    "- predict_log_proba 和 predict_proba 类似，它会给出测试集样本在各个类别上预测概率的对数转化。转化后，predict_log_proba 预测出的各个类别对数概率里的最大值对应的类别，也就是 predict 方法得到的类别。\n",
    "\n",
    "函数示例如下："
   ]
  },
  {
   "cell_type": "raw",
   "id": "c86ae595-d799-469d-86a5-84e5d3f6a031",
   "metadata": {},
   "source": [
    "class sklearn.naive_bayes.GaussianNB(priors=None)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "296558c7-7c8c-40e2-a836-cb651404e988",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==Predict result by predict==直接给出测试集的预测类别输出\n",
      "[1]\n",
      "==Predict result by predict_proba==给出测试集样本在各个类别上预测的概率\n",
      "[[9.99999949e-01 5.05653254e-08]]\n",
      "==Predict result by predict_log_proba==给出测试集样本在各个类别上预测的概率的一个对数转化\n",
      "[[-5.05653266e-08 -1.67999998e+01]]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])#训练集\n",
    "Y = np.array([1, 1, 1, 2, 2, 2])#每个点的类标签\n",
    "from sklearn.naive_bayes import GaussianNB\n",
    "clf = GaussianNB()\n",
    "\n",
    "clf.fit(X, Y)#要先训练(调用fit方法)才能预测(调用predict方法)\n",
    "print(\"==Predict result by predict==直接给出测试集的预测类别输出\")\n",
    "print(clf.predict([[-0.8, -1]]))#预测该点类别\n",
    "print(\"==Predict result by predict_proba==给出测试集样本在各个类别上预测的概率\")\n",
    "print(clf.predict_proba([[-0.8, -1]]))\n",
    "print(\"==Predict result by predict_log_proba==给出测试集样本在各个类别上预测的概率的一个对数转化\")\n",
    "print(clf.predict_log_proba([[-0.8, -1]]))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3bf2652c-257b-4a2b-b2fa-b14e3a19b1a5",
   "metadata": {},
   "source": [
    "#### 2. 多项分布朴素贝叶斯（MultinomialNB）\n",
    "\n",
    "MultinomialNB 是假设特征的条件概率为多项式分布的贝叶斯算法，是朴素贝叶斯在文本分类中使用的一个经典变种（其中的数据通常表示为词频向量，虽然 TF-IDF 向量在实际项目中也表现得很好）。对于每一个类别 y，分布通过向量$\\theta_y=(\\theta_{y1},\\ldots,\\theta_{yn})$参数化，n 是特征的数目（在文本分类中即词表的大小），$\\theta_{yi}$表示特征 i 出现在属于类别 y 的样本中的概率$P(x_i|y)$。\n",
    "\n",
    "该参数$\\theta_{yi}$是一个平滑的最大似然估计，即相对频率计数：\n",
    "$$\\hat{\\theta}_{yi}=\\frac{N_{ji}+\\alpha}{N_y+\\alpha n}$$\n",
    "\n",
    "其中，$N_{yi}=\\sum_{x\\in T}x_i$表示特征 i 在类别 y 的训练样本集 T 中出现的总次数；\n",
    "\n",
    "$N_y=\\sum_{i=1}^{|T|}N_{yi}$表示所有标签中类别 y 出现的数目;\n",
    "\n",
    "$\\alpha$为一个大于等于 0 的平滑常数，用来处理学习样本中不存在的特征并防止计算中出现概率为 0 的情况。设置$\\alpha = 1$称为拉普拉斯平滑(Laplace smoothing)，$\\alpha < 1$称为利德斯通平滑(Lidstone smoothing)。\n",
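    "\n",
    "可以按上式手工算一遍平滑后的$\\hat{\\theta}_{yi}$（计数为演示假设）：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# 演示：拉普拉斯平滑，某类别 y 下 3 个特征的计数，其中一个从未出现\n",
    "N_yi = np.array([3, 0, 5])\n",
    "alpha = 1.0\n",
    "n = len(N_yi)\n",
    "theta = (N_yi + alpha) / (N_yi.sum() + alpha * n)\n",
    "print(theta)  # 从未出现的特征概率不再是 0，且各分量之和为 1\n",
    "```\n",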
    "\n",
    "函数示例如下："
   ]
  },
  {
   "cell_type": "raw",
   "id": "b061b805-b563-435d-8ada-2cd1bf9e6db9",
   "metadata": {},
   "source": [
    "class sklearn.naive_bayes.MultinomialNB(alpha=1.0, fit_prior=True,class_prior=None)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3b0e3bd6-d603-4646-9fc4-ab1e15b4e0a9",
   "metadata": {},
   "source": [
    "参数说明如下：\n",
    "\n",
    "- alpha：浮点型可选参数，默认为 1.0，其实就是添加拉普拉斯平滑，即为上述公式中的$\\alpha$ ，如果这个参数设置为 0，就是不添加平滑；\n",
    "- fit_prior：布尔型可选参数，默认为 True。布尔参数 fit_prior 表示是否要考虑先验概率，如果是 false,则所有的样本类别输出都有相同的类别先验概率。否则可以自己用第三个参数 class_prior 输入先验概率，或者不输入第三个参数 class_prior 让 MultinomialNB 自己从训练集样本来计算先验概率，此时的先验概率为$P(Y=C_k)=m_k/m$。其中 m 为训练集样本总数量，$m_k$为输出为第 k 类别的训练集样本数。\n",
    "- class_prior：可选参数，默认为 None。\n",
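    "\n",
    "下面用一个虚构的小数据集（数据与变量名均为演示假设）展示 class_prior 的效果：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.naive_bayes import MultinomialNB\n",
    "\n",
    "# 演示：手动指定类别先验，训练数据为演示假设\n",
    "X_demo = np.array([[2, 1], [1, 3], [0, 4], [3, 0]])\n",
    "y_demo = np.array([0, 1, 1, 0])\n",
    "clf_demo = MultinomialNB(class_prior=[0.5, 0.5])\n",
    "clf_demo.fit(X_demo, y_demo)\n",
    "print(np.exp(clf_demo.class_log_prior_))  # [0.5 0.5]，以指定的先验为准\n",
    "```\n",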
    "\n",
    "代码示例如下："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "fc3d242f-bff5-4413-9f16-afcf1d7b0148",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "多项分布朴素贝叶斯，样本总数： 150 错误样本数 : 7\n"
     ]
    }
   ],
   "source": [
    "from sklearn import datasets\n",
    "iris = datasets.load_iris()\n",
    "\n",
    "from sklearn.naive_bayes import MultinomialNB\n",
    "clf = MultinomialNB()\n",
    "clf = clf.fit(iris.data, iris.target)\n",
    "y_pred=clf.predict(iris.data)\n",
    "print(\"多项分布朴素贝叶斯，样本总数： %d 错误样本数 : %d\" % (iris.data.shape[0],(iris.target != y_pred).sum()))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b3f2c860-adc3-4a20-baff-e51244432791",
   "metadata": {},
   "source": [
    "#### 3. 伯努利分布朴素贝叶斯（BernoulliNB）\n",
    "\n",
    "BernoulliNB 是假设特征的条件概率为二元伯努利分布的朴素贝叶斯训练和分类算法，即可以有多个特征，但每个特征都假设是一个二元 (Bernoulli, boolean) 变量。因此，这类算法要求样本以二元值特征向量表示；如果样本含有其他类型的数据，BernoulliNB 实例会将其二值化（取决于 binarize 参数）。\n",
    "\n",
    "伯努利朴素贝叶斯的决策规则基于：\n",
    "$$P(X_j=x_{jl}|Y=C_k)=P(j|Y=C_k)x_{jl}+(1-P(j|Y=C_k))(1-x_{jl})$$\n",
    "\n",
    "其中$x_{jl}$只能取 0 或 1 两种值。与多项分布朴素贝叶斯的规则不同，伯努利朴素贝叶斯会明确地惩罚类别 Y 中没有出现的预测特征$j$，而多项分布朴素贝叶斯只是简单地忽略没出现的特征。\n",
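    "\n",
    "该决策规则可以直接代入数值验证（概率取 0.8，仅作演示假设）：\n",
    "\n",
    "```python\n",
    "# 演示：伯努利似然项，P(j|Y=C_k) 取 0.8\n",
    "p_j_given_y = 0.8\n",
    "for x_jl in (1, 0):\n",
    "    p = p_j_given_y * x_jl + (1 - p_j_given_y) * (1 - x_jl)\n",
    "    print(x_jl, p)  # 特征出现时似然为 0.8，不出现时为 0.2，缺失被显式计入\n",
    "```\n",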
    "\n",
    "函数示例如下："
   ]
  },
  {
   "cell_type": "raw",
   "id": "0cdd69cc-47e5-4e62-8566-ecf67490554b",
   "metadata": {},
   "source": [
    "class sklearn.naive_bayes.BernoulliNB(alpha=1.0, binarize=0.0,fit_prior=True, class_prior=None)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "47909fcb-5034-41a9-9020-b5f207324252",
   "metadata": {},
   "source": [
    "BernoulliNB 一共有 4 个参数，其中 3 个参数的名字和意义和 MultinomialNB 完全相同。唯一增加的一个参数是 binarize。这个参数主要是用来帮 BernoulliNB 处理二项分布的，可以是数值或者不输入。如果不输入，则 BernoulliNB 认为每个数据特征都已经是二元的。否则的话，小于 binarize 的会归为一类，大于 binarize 的会归为另外一类。\n",
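    "\n",
    "binarize 的二值化效果等价于按阈值比较（阈值 3.0 与数据均为演示假设）：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# 演示：大于阈值的归为 1，其余归为 0\n",
    "X_demo = np.array([[1.0, 4.2], [3.5, 2.8]])\n",
    "X_bin = (X_demo > 3.0).astype(int)\n",
    "print(X_bin)  # 对应 [[0, 1], [1, 0]]\n",
    "```\n",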
    "\n",
    "代码示例如下：\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "a664d3cc-ca47-4519-9137-e786e77e67e8",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "伯努利朴素贝叶斯，样本总数： 150 错误样本数 : 100\n"
     ]
    }
   ],
   "source": [
    "from sklearn import datasets\n",
    "iris = datasets.load_iris()\n",
    "\n",
    "from sklearn.naive_bayes import BernoulliNB\n",
    "clf = BernoulliNB()\n",
    "clf = clf.fit(iris.data, iris.target)\n",
    "y_pred=clf.predict(iris.data)\n",
    "print(\"伯努利朴素贝叶斯，样本总数： %d 错误样本数 : %d\" % (iris.data.shape[0],(iris.target != y_pred).sum()))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "de5b9e64-27ce-4fb6-b995-55034967d8f4",
   "metadata": {},
   "source": [
    "#### 04. 步骤四：贝叶斯分类器做文本分类\n",
    "\n",
    "文本分类是现代机器学习应用中的一大模块，更是自然语言处理的基础之一。我们可以通过将文字数据处理成数字数据，然后使用贝叶斯来帮助我们判断一段话，或者一篇文章中的主题分类，感情倾向，甚至文章体裁。现在，绝大多数社交媒体数据的自动化采集，都是依靠首先将文本编码成数字，然后按分类结果采集需要的信息。虽然现在自然语言处理领域大部分由深度学习所控制，贝叶斯分类器依然是文本分类中的一颗明珠。现在，我们就来学习一下，贝叶斯分类器是怎样实现文本分类的。\n",
    "\n",
    "#### 1. 文本编码技术简介\n",
    "\n",
    "**单词计数向量**\n",
    "\n",
    "在开始分类之前，我们必须先将文本编码成数字。一种常用的方法是单词计数向量。在这种技术中，一个样本可以是一段话或一篇文章；语料库中共出现了多少个不同的单词，就有多少个特征（例如 n=10），每个特征代表一个单词，特征的取值表示这个单词在这个样本中总共出现了几次，**是一个离散的、代表出现次数的非负整数**。在 sklearn 当中，单词计数向量可以通过 feature_extraction.text 模块中的 CountVectorizer 类实现，来看下面的例子。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "f969fde7-5452-4fe0-8b11-6be3ca017ed1",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>character</th>\n",
       "      <th>elsa</th>\n",
       "      <th>fascinating</th>\n",
       "      <th>is</th>\n",
       "      <th>it</th>\n",
       "      <th>learning</th>\n",
       "      <th>machine</th>\n",
       "      <th>popular</th>\n",
       "      <th>sensational</th>\n",
       "      <th>techonology</th>\n",
       "      <th>wonderful</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>2</td>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   character  elsa  fascinating  is  it  learning  machine  popular  \\\n",
       "0          0     0            1   2   1         1        1        0   \n",
       "1          0     0            0   1   0         1        1        0   \n",
       "2          1     1            0   1   0         0        0        1   \n",
       "\n",
       "   sensational  techonology  wonderful  \n",
       "0            0            0          1  \n",
       "1            1            1          0  \n",
       "2            0            0          0  "
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sample = [\"Machine learning is fascinating, it is wonderful\",\"Machine learning is a sensational techonology\",\"Elsa is a popular character\"]\n",
    "from sklearn.feature_extraction.text import CountVectorizer\n",
    "vec = CountVectorizer()\n",
    "X = vec.fit_transform(sample)  # 拟合并转换，得到稀疏的词频计数矩阵\n",
    "import pandas as pd\n",
    "#注意稀疏矩阵是无法输入pandas的\n",
    "CVresult = pd.DataFrame(X.toarray(),columns = vec.get_feature_names_out())\n",
    "CVresult"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8e027fa9-8cc1-43f2-a2c9-324f2d91c4ac",
   "metadata": {},
   "source": [
    "从这个编码结果，我们可以发现两个问题。首先，来回忆一下我们多项式朴素贝叶斯的计算公式：\n",
    "\n",
    "$$\n",
    "\\theta_{ci}=\\frac{\\sum_{yi=c}x_{ji}+\\alpha}{\\sum_{i=1}^{n}\\sum_{y_i=c}x_{ji}+\\alpha n}\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "32b0ac7f-442d-48a8-ac24-9d3861b3fbe0",
   "metadata": {},
   "source": [
    "在朴素贝叶斯分类器中，我们通常需要估计每个特征在不同类别下的概率。如果直接使用特征在类别下的出现频率来估计概率，那么特征数量较多的样本会对概率的估计产生较大的影响，可能导致概率估计不准确。\n",
    "\n",
    "为了解决这个问题，补集朴素贝叶斯算法引入了 L2 范数来对特征权重进行归一化。L2 范数是一种衡量向量大小的方法，它等于向量中各个元素平方和的平方根。在这个上下文中，每个特征的权重（即特征在类别下的出现频率）会被除以其 L2 范数，这样可以确保特征数量多的样本不会对概率估计产生过大的影响。\n",
    "\n",
    "通过这种方式，算法能够更公平地对待每个样本，避免因样本特征数量的多少而导致的概率估计偏差。这有助于提高朴素贝叶斯分类器在处理具有不同特征数量的样本时的性能和准确性。补集朴素贝叶斯（complement naive Bayes，CNB）算法是标准多项式朴素贝叶斯算法的改进。 CNB 能够解决样本不平衡问题，并且能够一定程度上忽略朴素假设的补集朴素贝叶斯。 在实验中，CNB 的参数估计已经被证明比普通多项式朴素贝叶斯更稳定，并且它特别适合于样本不平衡的数据集。\n",
    "第二个问题：观察我们的矩阵，会发现 \"is\" 这个单词出现了四次，经过计算，这个单词出现的概率就会最大，但其实它对语义并没有什么影响（除非我们希望判断的是，文章描述的是过去的事件还是现在发生的事件）。可以预见，如果使用单词计数向量，一部分常用词（比如中文中的“的”）可能会频繁出现在矩阵中并占有很高的权重，对分类来说，这明显是对算法的一种误导。为了解决这个问题，比起使用次数，我们使用单词在文档中所占的比重来编码单词，这就是著名的 TF-IDF 方法。\n",
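    "\n",
    "在进入 TF-IDF 之前，也可以通过 CountVectorizer 的 stop_words 参数直接过滤这类高频无意义词（停用词表与变量名均为演示假设）：\n",
    "\n",
    "```python\n",
    "from sklearn.feature_extraction.text import CountVectorizer\n",
    "\n",
    "# 演示：把 is、it 当作停用词过滤掉\n",
    "sample_demo = ['Machine learning is fascinating, it is wonderful']\n",
    "vec_demo = CountVectorizer(stop_words=['is', 'it'])\n",
    "vec_demo.fit_transform(sample_demo)\n",
    "print(vec_demo.get_feature_names_out())  # is、it 已不在特征中\n",
    "```\n",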
    "\n",
    "#### 2. TF-IDF\n",
    "\n",
    "TF-IDF 全称 term frequency-inverse document frequency，词频-逆文档频率，通过单词在语料库各文档中出现的频率来调整其权重：IDF 的大小与一个词的常见程度成反比，这个词越常见，编码后为它设置的权重会倾向于越小，以此压制频繁出现的无意义词。在 sklearn 当中，我们使用 feature_extraction.text 模块中的 TfidfVectorizer 类来执行这种编码。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "3bd4ccbe-d567-4aaa-8591-af5ebb54df04",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "character      0.083071\n",
       "elsa           0.083071\n",
       "fascinating    0.064516\n",
       "is             0.173225\n",
       "it             0.064516\n",
       "learning       0.110815\n",
       "machine        0.110815\n",
       "popular        0.083071\n",
       "sensational    0.081192\n",
       "techonology    0.081192\n",
       "wonderful      0.064516\n",
       "dtype: float64"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.feature_extraction.text import TfidfVectorizer as TFIDF\n",
    "vec = TFIDF()\n",
    "X = vec.fit_transform(sample)\n",
    "#同样使用接口get_feature_names()调用每个列的名称\n",
    "TFIDFresult = pd.DataFrame(X.toarray(),columns=vec.get_feature_names_out())\n",
    "TFIDFresult\n",
    "#使用TF-IDF编码之后，出现得多的单词的权重被降低了么？\n",
    "CVresult.sum(axis=0)/CVresult.sum(axis=0).sum()\n",
    "TFIDFresult.sum(axis=0) / TFIDFresult.sum(axis=0).sum()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "af9151e7-2607-447a-9150-1ef13856d40f",
   "metadata": {},
   "source": [
    "#### 3. 探索文本数据\n",
    "\n",
    "在现实中，文本数据的处理是十分耗时耗力的，尤其是不规则的长文本，其处理方式绝不是一两句话能够说明白的。因此在这里我们将使用 sklearn 中自带的文本数据集 fetch_20newsgroups。这个数据集是 20 个网络新闻组的语料库，其中包含约 2 万篇新闻，全部为英文。如果希望使用中文，处理过程会更加困难，需要自行加载中文语料库。在这个例子中，主要目的是展示贝叶斯的用法和效果，因此我们就使用英文语料库。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "0a00092a-c021-45a9-9d27-1641487faf75",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['alt.atheism',\n",
       " 'comp.graphics',\n",
       " 'comp.os.ms-windows.misc',\n",
       " 'comp.sys.ibm.pc.hardware',\n",
       " 'comp.sys.mac.hardware',\n",
       " 'comp.windows.x',\n",
       " 'misc.forsale',\n",
       " 'rec.autos',\n",
       " 'rec.motorcycles',\n",
       " 'rec.sport.baseball',\n",
       " 'rec.sport.hockey',\n",
       " 'sci.crypt',\n",
       " 'sci.electronics',\n",
       " 'sci.med',\n",
       " 'sci.space',\n",
       " 'soc.religion.christian',\n",
       " 'talk.politics.guns',\n",
       " 'talk.politics.mideast',\n",
       " 'talk.politics.misc',\n",
       " 'talk.religion.misc']"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.datasets import fetch_20newsgroups\n",
    "data = fetch_20newsgroups()\n",
    "data.target_names"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "fe943a0a-d880-4656-a6fd-d204334a2239",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1 0.25749023013460703\n",
      "2 0.23708206686930092\n",
      "3 0.24489795918367346\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "categories = [\"sci.space\" #科学技术 - 太空\n",
    "             ,\"rec.sport.hockey\" #运动 - 曲棍球\n",
    "             ,\"talk.politics.guns\" #政治 - 枪支问题\n",
    "             ,\"talk.politics.mideast\"] #政治 - 中东问题\n",
    "train = fetch_20newsgroups(subset=\"train\",categories = categories)\n",
    "test = fetch_20newsgroups(subset=\"test\",categories = categories)\n",
    "train\n",
    "#可以观察到，里面依然是类字典结构，我们可以通过使用键的方式来提取内容\n",
    "train.target_names\n",
    "#查看总共有多少篇文章存在\n",
    "len(train.data) #随意提取一篇文章来看看\n",
    "train.data[0] #查看一下我们的标签\n",
    "np.unique(train.target)\n",
    "len(train.target) #是否存在样本不平衡问题？\n",
    "for i in [1,2,3]:  # 查看第 1~3 类的占比（第 0 类占比可由 1 减去其余三项得到）\n",
    "  print(i,(train.target == i).sum()/len(train.target))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b77842fd-3933-4788-b235-bc4f57de469f",
   "metadata": {},
   "source": [
    "#### 4. 使用 TF-IDF 将文本数据编码"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "93fb0ecf-e169-4ed0-91e1-e961d16694bf",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(2303, 40725)"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.feature_extraction.text import TfidfVectorizer as TFIDF\n",
    "Xtrain = train.data\n",
    "Xtest = test.data\n",
    "Ytrain = train.target\n",
    "Ytest = test.target\n",
    "tfidf = TFIDF().fit(Xtrain)\n",
    "Xtrain_ = tfidf.transform(Xtrain)\n",
    "Xtest_ = tfidf.transform(Xtest)\n",
    "Xtrain_\n",
    "tosee = pd.DataFrame(Xtrain_.toarray(),columns=tfidf.get_feature_names_out())\n",
    "tosee.head()\n",
    "tosee.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2c5ad55d-aaff-4693-a5ad-e5795b52449c",
   "metadata": {},
   "source": [
    "#### 5. 在贝叶斯上分别建模，查看结果"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "a88e31bd-e0bd-4d8b-acf8-1a4ca7a66cd1",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Multinomial\n",
      "\tBrier under rec.sport.hockey:0.857\n",
      "\tBrier under sci.space:0.033\n",
      "\tBrier under talk.politics.guns:0.169\n",
      "\tBrier under talk.politics.mideast:0.178\n",
      "\tAverage Brier:0.309\n",
      "\tAccuracy:0.975\n",
      "\n",
      "\n",
      "Complement\n",
      "\tBrier under rec.sport.hockey:0.804\n",
      "\tBrier under sci.space:0.039\n",
      "\tBrier under talk.politics.guns:0.137\n",
      "\tBrier under talk.politics.mideast:0.160\n",
      "\tAverage Brier:0.285\n",
      "\tAccuracy:0.986\n",
      "\n",
      "\n",
      "Bournulli\n",
      "\tBrier under rec.sport.hockey:0.925\n",
      "\tBrier under sci.space:0.025\n",
      "\tBrier under talk.politics.guns:0.205\n",
      "\tBrier under talk.politics.mideast:0.193\n",
      "\tAverage Brier:0.337\n",
      "\tAccuracy:0.902\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "from sklearn.naive_bayes import MultinomialNB, ComplementNB, BernoulliNB\n",
    "from sklearn.metrics import brier_score_loss as BS\n",
    "name = [\"Multinomial\",\"Complement\",\"Bournulli\"] #注意高斯朴素贝叶斯不接受稀疏矩阵\n",
    "models = [MultinomialNB(),ComplementNB(),BernoulliNB()]\n",
    "for name,clf in zip(name,models):\n",
    "    clf.fit(Xtrain_,Ytrain)\n",
    "    y_pred = clf.predict(Xtest_)\n",
    "    proba = clf.predict_proba(Xtest_)\n",
    "    score = clf.score(Xtest_,Ytest)\n",
    "    print(name)\n",
    "\n",
    "    #4个不同的标签取值下的布里尔分数\n",
    "    Bscore = []\n",
    "    Ytest_= Ytest.copy()\n",
    "    Ytest_ = pd.get_dummies(Ytest_)\n",
    "    for i in range(len(np.unique(Ytrain))):\n",
    "\n",
    "        bs = BS(Ytest_[i],proba[:,i],pos_label=i)\n",
    "        Bscore.append(bs)\n",
    "        print(\"\\tBrier under {}:{:.3f}\".format(train.target_names[i],bs))\n",
    "\n",
    "    print(\"\\tAverage Brier:{:.3f}\".format(np.mean(Bscore)))\n",
    "    print(\"\\tAccuracy:{:.3f}\".format(score))\n",
    "    print(\"\\n\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f20e1592-32e0-4da7-9ee1-9c05b40fb59b",
   "metadata": {},
   "source": [
    "从结果上来看，两种贝叶斯的效果都很不错。虽然补集贝叶斯的布里尔分数更高，但它的精确度更高。我们可以使用概率校准来试试看能否让模型进一步突破："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "90bbb791-fbf7-40f1-ac51-cafefce67591",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Multinomial\n",
      "\tBrier under rec.sport.hockey:0.857\n",
      "\tBrier under sci.space:0.033\n",
      "\tBrier under talk.politics.guns:0.169\n",
      "\tBrier under talk.politics.mideast:0.178\n",
      "\tAverage Brier:0.309\n",
      "\tAccuracy:0.975\n",
      "\n",
      "\n",
      "Complement\n",
      "\tBrier under rec.sport.hockey:0.804\n",
      "\tBrier under sci.space:0.039\n",
      "\tBrier under talk.politics.guns:0.137\n",
      "\tBrier under talk.politics.mideast:0.160\n",
      "\tAverage Brier:0.285\n",
      "\tAccuracy:0.986\n",
      "\n",
      "\n",
      "Bournulli\n",
      "\tBrier under rec.sport.hockey:0.925\n",
      "\tBrier under sci.space:0.025\n",
      "\tBrier under talk.politics.guns:0.205\n",
      "\tBrier under talk.politics.mideast:0.193\n",
      "\tAverage Brier:0.337\n",
      "\tAccuracy:0.902\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "from sklearn.naive_bayes import MultinomialNB, ComplementNB, BernoulliNB\n",
    "from sklearn.metrics import brier_score_loss as BS\n",
    "name = [\"Multinomial\",\"Complement\",\"Bournulli\"] #注意高斯朴素贝叶斯不接受稀疏矩阵\n",
    "models = [MultinomialNB(),ComplementNB(),BernoulliNB()]\n",
    "for name,clf in zip(name,models):\n",
    "    clf.fit(Xtrain_,Ytrain)\n",
    "    y_pred = clf.predict(Xtest_)\n",
    "    proba = clf.predict_proba(Xtest_)\n",
    "    score = clf.score(Xtest_,Ytest)\n",
    "    print(name)\n",
    "\n",
    "    #4个不同的标签取值下的布里尔分数\n",
    "    Bscore = []\n",
    "    Ytest_= Ytest.copy()\n",
    "    Ytest_ = pd.get_dummies(Ytest_)\n",
    "    for i in range(len(np.unique(Ytrain))):\n",
    "\n",
    "        bs = BS(Ytest_[i],proba[:,i],pos_label=i)\n",
    "        Bscore.append(bs)\n",
    "        print(\"\\tBrier under {}:{:.3f}\".format(train.target_names[i],bs))\n",
    "\n",
    "    print(\"\\tAverage Brier:{:.3f}\".format(np.mean(Bscore)))\n",
    "    print(\"\\tAccuracy:{:.3f}\".format(score))\n",
    "    print(\"\\n\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
