{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from bayes import createVocabList, setOfWords2Vec, trainNB0, classifyNB"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The naive Bayes algorithm can be used for text sentiment classification. The data below, for example, come from a Dalmatian lovers' message board. We first label the collected messages by hand, using 1 for an abusive message and 0 for a non-abusive one. These labeled data are used to train the naive Bayes classifier; training yields three quantities: the frequency of each word in abusive messages (a vector), the frequency of each word in non-abusive messages (also a vector), and the proportion of messages that are abusive."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Toy training data\n",
     "def createDataSet():\n",
     "    postingList = [\n",
     "        ['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],\n",
     "        ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],\n",
     "        ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],\n",
     "        ['stop', 'posting', 'stupid', 'worthless', 'garbage'],\n",
     "        ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],\n",
     "        ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']\n",
     "    ]\n",
     "    classVec = [0, 1, 0, 1, 0, 1]  # 1 = abusive, 0 = normal speech\n",
     "    return postingList, classVec"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Step 1: create the vocabulary list"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['I', 'ate', 'buying', 'cute', 'dalmation']"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dataSet, dataLabels = createDataSet()\n",
    "vocabList = createVocabList(dataSet)\n",
     "# Sorting the vocabulary makes debugging easier\n",
    "vocabList.sort()\n",
    "vocabList[:5]"
   ]
  },
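  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`createVocabList` is defined in `bayes.py` and not shown here. A minimal sketch consistent with its behavior (the union of all words across all documents, returned as a list) might look like the following; the name `createVocabList_sketch` is ours, not part of `bayes.py`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def createVocabList_sketch(dataSet):\n",
    "    # Union of every word that appears in any document\n",
    "    vocabSet = set()\n",
    "    for document in dataSet:\n",
    "        vocabSet = vocabSet | set(document)\n",
    "    return list(vocabSet)"
   ]
  },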
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Step 2: build the training set by converting each document into a vector against the vocabulary"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(6, 32)\n"
     ]
    }
   ],
   "source": [
    "trainMat = []\n",
    "for document in dataSet:\n",
     "    trainMat.append(setOfWords2Vec(vocabList, document))\n",
    "\n",
    "print(np.array(trainMat).shape)"
   ]
  },
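  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`setOfWords2Vec` also comes from `bayes.py`. Judging by the (6, 32) shape above, it maps a document to a 0/1 indicator vector over the 32-word vocabulary (the set-of-words model). A sketch under that assumption, with the `_sketch` name again being ours:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def setOfWords2Vec_sketch(vocabList, inputSet):\n",
    "    # One slot per vocabulary word; set it to 1 if the word occurs\n",
    "    returnVec = [0] * len(vocabList)\n",
    "    for word in inputSet:\n",
    "        if word in vocabList:\n",
    "            returnVec[vocabList.index(word)] = 1\n",
    "    return returnVec"
   ]
  },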
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Step 3: train the classifier to obtain p0V (the frequency of each word in non-abusive messages), p1V (the frequency of each word in abusive messages), and pAbusive (the proportion of abusive messages among all messages)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([-2.56494936, -2.56494936, -3.25809654, -2.56494936, -2.56494936,\n",
       "       -2.56494936, -2.56494936, -3.25809654, -3.25809654, -2.56494936,\n",
       "       -2.56494936, -2.15948425, -2.56494936, -2.56494936, -2.56494936,\n",
       "       -2.56494936, -3.25809654, -2.56494936, -1.87180218, -3.25809654,\n",
       "       -3.25809654, -2.56494936, -3.25809654, -2.56494936, -3.25809654,\n",
       "       -2.56494936, -2.56494936, -2.56494936, -3.25809654, -3.25809654,\n",
       "       -2.56494936, -3.25809654])"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "p0V, p1V, pAbusive = trainNB0(trainMat,dataLabels)\n",
    "p0V"
   ]
  },
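  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The values in `p0V` are logarithms: for example, exp(-2.5649) = 2/26, which is exactly what Laplace-style smoothing (word counts initialized to 1, denominators to 2) gives for a word seen once among the 24 words of the non-abusive class. A sketch of `trainNB0` under those assumptions (the real implementation lives in `bayes.py`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def trainNB0_sketch(trainMatrix, trainCategory):\n",
    "    numDocs = len(trainMatrix)\n",
    "    numWords = len(trainMatrix[0])\n",
    "    pAbusive = sum(trainCategory) / float(numDocs)\n",
    "    # Counts start at 1 and denominators at 2 so no word gets zero probability\n",
    "    p0Num = np.ones(numWords)\n",
    "    p1Num = np.ones(numWords)\n",
    "    p0Denom = 2.0\n",
    "    p1Denom = 2.0\n",
    "    for i in range(numDocs):\n",
    "        if trainCategory[i] == 1:\n",
    "            p1Num += trainMatrix[i]\n",
    "            p1Denom += sum(trainMatrix[i])\n",
    "        else:\n",
    "            p0Num += trainMatrix[i]\n",
    "            p0Denom += sum(trainMatrix[i])\n",
    "    # Log frequencies avoid floating-point underflow in the classifier\n",
    "    return np.log(p0Num / p0Denom), np.log(p1Num / p1Denom), pAbusive"
   ]
  },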
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Step 4: use the trained classifier to classify a new message."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "1"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# A new message\n",
     "testEntry = ['stupid', 'garbage']\n",
     "# Vectorize the new message against the same vocabulary\n",
     "thisDoc = np.array(setOfWords2Vec(vocabList, testEntry))\n",
     "\n",
     "# Classify it with the trained model\n",
     "classifyNB(thisDoc, p0V, p1V, pAbusive)"
   ]
  },
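  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`classifyNB` is also defined in `bayes.py`. Since the trained vectors are log frequencies, classification presumably sums the log-probabilities of the words present in the document and adds the log class prior, returning whichever class scores higher. A sketch under that assumption:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def classifyNB_sketch(vec2Classify, p0Vec, p1Vec, pClass1):\n",
    "    # Sum the log-probabilities of the words present, plus the log prior\n",
    "    p1 = np.sum(vec2Classify * p1Vec) + np.log(pClass1)\n",
    "    p0 = np.sum(vec2Classify * p0Vec) + np.log(1.0 - pClass1)\n",
    "    return 1 if p1 > p0 else 0"
   ]
  },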
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "As this walk-through shows, naive Bayes depends heavily on its training set: the vocabulary itself is derived from the training documents, so words that never appeared during training are invisible to the classifier. Growing the training corpus, and with it the vocabulary, is therefore a prerequisite for keeping the classifier accurate on new messages."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "da",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
