{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Predicting Douban Movie Ratings\n",
    "\n",
    "## Assignment\n",
    "In this project we predict the rating of a movie, which is essentially a classification problem: the input is a piece of text and the output is a rating. The project covers:\n",
    "- Text preprocessing, e.g. filtering stop words, low-frequency words, and special characters\n",
    "- Converting text into vectors in three ways: tf-idf, word2vec, and BERT embeddings\n",
    "- Training logistic regression and naive Bayes models with cross-validation\n",
    "- Evaluating model accuracy\n",
    "\n",
    "Fill in your code in the parts marked ``TODO``.\n",
    "\n",
    "## Files\n",
    "* douban_starter.ipynb: main notebook.\n",
    "* douban_handbook.pdf: lab handbook.\n",
    "* 中文文本预处理.ipynb: reference notebook on Chinese text preprocessing.\n",
    "* word2vec_test.ipynb: word2vec usage example.\n",
    "* Data source: [project-1 douban (Aliyun Drive)](https://www.aliyundrive.com/s/2DrpuSfp1Pp)\n",
    "\n",
    "## Results and Takeaways\n",
    "1. All three models (tf-idf, word2vec, and BERT vectors) reach accuracies above 0.8 on both the training and test sets, so there is no obvious overfitting.\n",
    "2. All three models have F1 scores below 0.8, indicating low recall. Descriptive statistics show that positive reviews (label 1) far outnumber negative ones in both the training and validation sets, i.e. the classes are heavily imbalanced.\n",
    "3. In terms of results, TF-IDF > word2vec > BERT. This is probably because the matrices become increasingly sparse, or because the pretrained models lack domain adaptation; without a GPU, I did not fine-tune the pretrained models.\n",
    "4. Possible improvements: handle the class imbalance with undersampling, oversampling, or SMOTE; continue pretraining on the Douban corpus; or try classifiers other than logistic regression, such as Bayesian networks or LSTMs.\n",
    "5. To finish the BERT part on time and speed up execution, I used fastNLP instead of the original gensim package."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Applications/anaconda3/envs/py36/lib/python3.6/site-packages/gensim/similarities/__init__.py:15: UserWarning: The gensim.similarities.levenshtein submodule is disabled, because the optional Levenshtein package <https://pypi.org/project/python-Levenshtein/> is unavailable. Install Levenhstein (e.g. `pip install python-Levenshtein`) to suppress this warning.\n",
      "  warnings.warn(msg)\n"
     ]
    }
   ],
   "source": [
    "# Basic data-handling packages\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "# Counting utilities\n",
    "from collections import Counter\n",
    "\n",
    "# tf-idf related packages\n",
    "from sklearn.feature_extraction.text import TfidfTransformer\n",
    "from sklearn.feature_extraction.text import CountVectorizer\n",
    "\n",
    "# Model evaluation\n",
    "from sklearn import metrics\n",
    "\n",
    "# word2vec related packages\n",
    "from gensim.models import KeyedVectors\n",
    "\n",
    "# BERT embedding packages; see the lab handbook for notes on installing mxnet\n",
    "from bert_embedding import BertEmbedding\n",
    "import mxnet\n",
    "\n",
    "# tqdm renders a progress bar over iterables, to monitor long-running loops\n",
    "from tqdm import tqdm\n",
    "\n",
    "# Miscellaneous utilities\n",
    "import requests\n",
    "import os"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Load the data and preprocess the text\n",
    "You need to complete the following steps:\n",
    "- Remove useless characters such as ！ and &; the exact character set is up to you\n",
    "- Segment the Chinese text into words\n",
    "- Remove low-frequency words"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>ID</th>\n",
       "      <th>Movie_Name_EN</th>\n",
       "      <th>Movie_Name_CN</th>\n",
       "      <th>Crawl_Date</th>\n",
       "      <th>Number</th>\n",
       "      <th>Username</th>\n",
       "      <th>Date</th>\n",
       "      <th>Star</th>\n",
       "      <th>Comment</th>\n",
       "      <th>Like</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>0</td>\n",
       "      <td>Avengers Age of Ultron</td>\n",
       "      <td>复仇者联盟2</td>\n",
       "      <td>2017-01-22</td>\n",
       "      <td>1</td>\n",
       "      <td>然潘</td>\n",
       "      <td>2015-05-13</td>\n",
       "      <td>3</td>\n",
       "      <td>连奥创都知道整容要去韩国。</td>\n",
       "      <td>2404</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>10</td>\n",
       "      <td>Avengers Age of Ultron</td>\n",
       "      <td>复仇者联盟2</td>\n",
       "      <td>2017-01-22</td>\n",
       "      <td>11</td>\n",
       "      <td>影志</td>\n",
       "      <td>2015-04-30</td>\n",
       "      <td>4</td>\n",
       "      <td>“一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫，开场即高潮、一直到结束，会有人觉...</td>\n",
       "      <td>381</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>20</td>\n",
       "      <td>Avengers Age of Ultron</td>\n",
       "      <td>复仇者联盟2</td>\n",
       "      <td>2017-01-22</td>\n",
       "      <td>21</td>\n",
       "      <td>随时流感</td>\n",
       "      <td>2015-04-28</td>\n",
       "      <td>2</td>\n",
       "      <td>奥创弱爆了弱爆了弱爆了啊！！！！！！</td>\n",
       "      <td>120</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>30</td>\n",
       "      <td>Avengers Age of Ultron</td>\n",
       "      <td>复仇者联盟2</td>\n",
       "      <td>2017-01-22</td>\n",
       "      <td>31</td>\n",
       "      <td>乌鸦火堂</td>\n",
       "      <td>2015-05-08</td>\n",
       "      <td>4</td>\n",
       "      <td>与第一集不同，承上启下，阴郁严肃，但也不会不好看啊，除非本来就不喜欢漫威电影。场面更加宏大...</td>\n",
       "      <td>30</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>40</td>\n",
       "      <td>Avengers Age of Ultron</td>\n",
       "      <td>复仇者联盟2</td>\n",
       "      <td>2017-01-22</td>\n",
       "      <td>41</td>\n",
       "      <td>办公室甜心</td>\n",
       "      <td>2015-05-10</td>\n",
       "      <td>5</td>\n",
       "      <td>看毕，我激动地对友人说，等等奥创要来毁灭台北怎么办厚，她拍了拍我肩膀，没事，反正你买了两份...</td>\n",
       "      <td>16</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   ID           Movie_Name_EN Movie_Name_CN  Crawl_Date  Number Username  \\\n",
       "0   0  Avengers Age of Ultron        复仇者联盟2  2017-01-22       1       然潘   \n",
       "1  10  Avengers Age of Ultron        复仇者联盟2  2017-01-22      11       影志   \n",
       "2  20  Avengers Age of Ultron        复仇者联盟2  2017-01-22      21     随时流感   \n",
       "3  30  Avengers Age of Ultron        复仇者联盟2  2017-01-22      31     乌鸦火堂   \n",
       "4  40  Avengers Age of Ultron        复仇者联盟2  2017-01-22      41    办公室甜心   \n",
       "\n",
       "         Date  Star                                            Comment  Like  \n",
       "0  2015-05-13     3                                      连奥创都知道整容要去韩国。  2404  \n",
       "1  2015-04-30     4   “一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫，开场即高潮、一直到结束，会有人觉...   381  \n",
       "2  2015-04-28     2                                 奥创弱爆了弱爆了弱爆了啊！！！！！！   120  \n",
       "3  2015-05-08     4   与第一集不同，承上启下，阴郁严肃，但也不会不好看啊，除非本来就不喜欢漫威电影。场面更加宏大...    30  \n",
       "4  2015-05-10     5   看毕，我激动地对友人说，等等奥创要来毁灭台北怎么办厚，她拍了拍我肩膀，没事，反正你买了两份...    16  "
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Load the data\n",
    "data = pd.read_csv('data/DMSC.csv')\n",
    "# Inspect the data format\n",
    "data.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<class 'pandas.core.frame.DataFrame'>\n",
      "RangeIndex: 212506 entries, 0 to 212505\n",
      "Data columns (total 10 columns):\n",
      " #   Column         Non-Null Count   Dtype \n",
      "---  ------         --------------   ----- \n",
      " 0   ID             212506 non-null  int64 \n",
      " 1   Movie_Name_EN  212506 non-null  object\n",
      " 2   Movie_Name_CN  212506 non-null  object\n",
      " 3   Crawl_Date     212506 non-null  object\n",
      " 4   Number         212506 non-null  int64 \n",
      " 5   Username       212496 non-null  object\n",
      " 6   Date           212506 non-null  object\n",
      " 7   Star           212506 non-null  int64 \n",
      " 8   Comment        212506 non-null  object\n",
      " 9   Like           212506 non-null  int64 \n",
      "dtypes: int64(4), object(6)\n",
      "memory usage: 16.2+ MB\n"
     ]
    }
   ],
   "source": [
    "# Print summary information about the data\n",
    "data.info()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>Comment</th>\n",
       "      <th>Star</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>连奥创都知道整容要去韩国。</td>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>“一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫，开场即高潮、一直到结束，会有人觉...</td>\n",
       "      <td>4</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>奥创弱爆了弱爆了弱爆了啊！！！！！！</td>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>与第一集不同，承上启下，阴郁严肃，但也不会不好看啊，除非本来就不喜欢漫威电影。场面更加宏大...</td>\n",
       "      <td>4</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>看毕，我激动地对友人说，等等奥创要来毁灭台北怎么办厚，她拍了拍我肩膀，没事，反正你买了两份...</td>\n",
       "      <td>5</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                             Comment  Star\n",
       "0                                      连奥创都知道整容要去韩国。     3\n",
       "1   “一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫，开场即高潮、一直到结束，会有人觉...     4\n",
       "2                                 奥创弱爆了弱爆了弱爆了啊！！！！！！     2\n",
       "3   与第一集不同，承上启下，阴郁严肃，但也不会不好看啊，除非本来就不喜欢漫威电影。场面更加宏大...     4\n",
       "4   看毕，我激动地对友人说，等等奥创要来毁灭台北怎么办厚，她拍了拍我肩膀，没事，反正你买了两份...     5"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Keep only the two columns we need: Comment and Star\n",
    "data = data[['Comment','Star']]\n",
    "# Inspect the new data format\n",
    "data.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>Comment</th>\n",
       "      <th>Star</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>连奥创都知道整容要去韩国。</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>“一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫，开场即高潮、一直到结束，会有人觉...</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>奥创弱爆了弱爆了弱爆了啊！！！！！！</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>与第一集不同，承上启下，阴郁严肃，但也不会不好看啊，除非本来就不喜欢漫威电影。场面更加宏大...</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>看毕，我激动地对友人说，等等奥创要来毁灭台北怎么办厚，她拍了拍我肩膀，没事，反正你买了两份...</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                             Comment  Star\n",
       "0                                      连奥创都知道整容要去韩国。     1\n",
       "1   “一个没有黑暗面的人不值得信任。” 第二部剥去冗长的铺垫，开场即高潮、一直到结束，会有人觉...     1\n",
       "2                                 奥创弱爆了弱爆了弱爆了啊！！！！！！     0\n",
       "3   与第一集不同，承上启下，阴郁严肃，但也不会不好看啊，除非本来就不喜欢漫威电影。场面更加宏大...     1\n",
       "4   看毕，我激动地对友人说，等等奥创要来毁灭台北怎么办厚，她拍了拍我肩膀，没事，反正你买了两份...     1"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Star holds the raw rating, but in this project we predict positive vs. negative sentiment.\n",
    "# Ratings 1 and 2 are treated as negative, and ratings 3, 4, and 5 as positive.\n",
    "# Integer division by 3 implements exactly this mapping: 1,2 -> 0 and 3,4,5 -> 1.\n",
    "data['Star'] = (data.Star / 3).astype(int)\n",
    "data.head()"
   ]
  },
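  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (an illustrative sketch, not part of the original assignment), the integer-division trick can be verified on all five possible ratings:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Verify the star -> sentiment mapping: 1,2 -> 0 (negative) and 3,4,5 -> 1 (positive)\n",
    "assert [int(s / 3) for s in [1, 2, 3, 4, 5]] == [0, 0, 1, 1, 1]"
   ]
  },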
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Task 1: Remove useless characters"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "apply: 100%|██████████| 212506/212506 [00:02<00:00, 88373.87it/s]\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>Comment</th>\n",
       "      <th>Star</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>连奥创都知道整容要去韩国</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>一个没有黑暗面的人不值得信任第二部剥去冗长的铺垫开场即高潮一直到结束会有人觉得只剩动作特技不...</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>奥创弱爆了弱爆了弱爆了啊</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>与第一集不同承上启下阴郁严肃但也不会不好看啊除非本来就不喜欢漫威电影场面更加宏大单打与团战又...</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>看毕我激动地对友人说等等奥创要来毁灭台北怎么办厚她拍了拍我肩膀没事反正你买了两份旅行保险惹</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                             Comment  Star\n",
       "0                                       连奥创都知道整容要去韩国     1\n",
       "1  一个没有黑暗面的人不值得信任第二部剥去冗长的铺垫开场即高潮一直到结束会有人觉得只剩动作特技不...     1\n",
       "2                                       奥创弱爆了弱爆了弱爆了啊     0\n",
       "3  与第一集不同承上启下阴郁严肃但也不会不好看啊除非本来就不喜欢漫威电影场面更加宏大单打与团战又...     1\n",
       "4      看毕我激动地对友人说等等奥创要来毁灭台北怎么办厚她拍了拍我肩膀没事反正你买了两份旅行保险惹     1"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# TODO1: Remove useless characters; define your own character set and strip it from the text\n",
    "# Strip Latin letters, digits, emoji, and other symbols\n",
    "import re\n",
    "def clear_character(sentence):\n",
    "    pattern1 = '[a-zA-Z0-9]'\n",
    "    pattern2 = re.compile(u'[^\\s1234567890:：' + '\\u4e00-\\u9fa5]+')\n",
    "    pattern3 = '[’!\"#$%&\\'()*+,-./:;<=>?@[\\\\]^_`{|}~]+'\n",
    "    line1 = re.sub(pattern1, '', sentence)  # remove Latin letters and digits\n",
    "    line2 = re.sub(pattern2, '', line1)     # remove emoji and other non-Chinese symbols\n",
    "    line3 = re.sub(pattern3, '', line2)     # remove leftover colons and other punctuation\n",
    "    new_sentence = ''.join(line3.split())   # remove whitespace\n",
    "    return new_sentence\n",
    "\n",
    "# Show a progress bar\n",
    "tqdm.pandas(desc='apply')\n",
    "data['Comment'] = data['Comment'].progress_apply(clear_character)\n",
    "data.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Task 2: Segment the text with jieba"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 324,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "apply: 100%|██████████| 209926/209926 [00:01<00:00, 199632.76it/s]\n"
     ]
    }
   ],
   "source": [
    "# TODO2: Import the Chinese word-segmentation package jieba and segment the raw text with it\n",
    "import jieba\n",
    "def comment_cut(content):\n",
    "    # jieba.cut returns a one-shot generator; materialize it into a list so the\n",
    "    # result can be stored in the DataFrame and iterated more than once\n",
    "    return list(jieba.cut(content))\n",
    "\n",
    "# Show a progress bar\n",
    "tqdm.pandas(desc='apply')\n",
    "data['comment_processed'] = data['Comment'].progress_apply(comment_cut)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 325,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>Comment</th>\n",
       "      <th>Star</th>\n",
       "      <th>comment_processed</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>连奥创都知道整容要去韩国</td>\n",
       "      <td>1</td>\n",
       "      <td>&lt;generator object Tokenizer.cut at 0x175fa2bf8&gt;</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>一个没有黑暗面的人不值得信任第二部剥去冗长的铺垫开场即高潮一直到结束会有人觉得只剩动作特技不...</td>\n",
       "      <td>1</td>\n",
       "      <td>&lt;generator object Tokenizer.cut at 0x1961c0308&gt;</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>奥创弱爆了弱爆了弱爆了啊</td>\n",
       "      <td>0</td>\n",
       "      <td>&lt;generator object Tokenizer.cut at 0x16f274af0&gt;</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>与第一集不同承上启下阴郁严肃但也不会不好看啊除非本来就不喜欢漫威电影场面更加宏大单打与团战又...</td>\n",
       "      <td>1</td>\n",
       "      <td>&lt;generator object Tokenizer.cut at 0x16f274d00&gt;</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>看毕我激动地对友人说等等奥创要来毁灭台北怎么办厚她拍了拍我肩膀没事反正你买了两份旅行保险惹</td>\n",
       "      <td>1</td>\n",
       "      <td>&lt;generator object Tokenizer.cut at 0x16f274b48&gt;</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                             Comment  Star  \\\n",
       "0                                       连奥创都知道整容要去韩国     1   \n",
       "1  一个没有黑暗面的人不值得信任第二部剥去冗长的铺垫开场即高潮一直到结束会有人觉得只剩动作特技不...     1   \n",
       "2                                       奥创弱爆了弱爆了弱爆了啊     0   \n",
       "3  与第一集不同承上启下阴郁严肃但也不会不好看啊除非本来就不喜欢漫威电影场面更加宏大单打与团战又...     1   \n",
       "4      看毕我激动地对友人说等等奥创要来毁灭台北怎么办厚她拍了拍我肩膀没事反正你买了两份旅行保险惹     1   \n",
       "\n",
       "                                 comment_processed  \n",
       "0  <generator object Tokenizer.cut at 0x175fa2bf8>  \n",
       "1  <generator object Tokenizer.cut at 0x1961c0308>  \n",
       "2  <generator object Tokenizer.cut at 0x16f274af0>  \n",
       "3  <generator object Tokenizer.cut at 0x16f274d00>  \n",
       "4  <generator object Tokenizer.cut at 0x16f274b48>  "
      ]
     },
     "execution_count": 325,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Inspect the new data format\n",
    "data.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Task 3: Define stop words and remove them"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 326,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "apply: 100%|██████████| 209926/209926 [01:53<00:00, 1849.74it/s]\n"
     ]
    }
   ],
   "source": [
    "# TODO3: Define stop words and remove them from the text\n",
    "\n",
    "# Download a Chinese stop-word list to data/stopWord.json; source: https://github.com/goto456/stopwords/\n",
    "if not os.path.exists('data/stopWord.json'):\n",
    "    stopWord = requests.get(\"https://raw.githubusercontent.com/goto456/stopwords/master/cn_stopwords.txt\")\n",
    "    with open(\"data/stopWord.json\", \"wb\") as f:\n",
    "        f.write(stopWord.content)\n",
    "\n",
    "# Read the downloaded stop-word list into a set for O(1) membership tests\n",
    "with open(\"data/stopWord.json\", \"r\", encoding=\"utf-8\") as f:\n",
    "    stopwords = set(f.read().split(\"\\n\"))\n",
    "\n",
    "# Drop stop words and join the remaining tokens into a space-separated string\n",
    "def rm_stop_word(wordList, stopwords):\n",
    "    return \" \".join(word for word in wordList if word not in stopwords and word != '\\t')\n",
    "\n",
    "# .progress_apply() behaves exactly like .apply(), but lets tqdm display a progress bar\n",
    "data['comment_processed'] = data['comment_processed'].progress_apply(rm_stop_word, stopwords=stopwords)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 328,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>Comment</th>\n",
       "      <th>Star</th>\n",
       "      <th>comment_processed</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>连奥创都知道整容要去韩国</td>\n",
       "      <td>1</td>\n",
       "      <td>奥创 知道 整容 韩国</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>一个没有黑暗面的人不值得信任第二部剥去冗长的铺垫开场即高潮一直到结束会有人觉得只剩动作特技不...</td>\n",
       "      <td>1</td>\n",
       "      <td>一个 没有 黑暗面 值得 信任 第二部 剥去 冗长 铺垫 开场 高潮 一直 结束 会 有人 ...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>奥创弱爆了弱爆了弱爆了啊</td>\n",
       "      <td>0</td>\n",
       "      <td>奥创 弱 爆 弱 爆 弱 爆</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>与第一集不同承上启下阴郁严肃但也不会不好看啊除非本来就不喜欢漫威电影场面更加宏大单打与团战又...</td>\n",
       "      <td>1</td>\n",
       "      <td>第一集 不同 承上启下 阴郁 严肃 不会 好看 本来 喜欢 漫威 电影 场面 更加 宏大 单...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>看毕我激动地对友人说等等奥创要来毁灭台北怎么办厚她拍了拍我肩膀没事反正你买了两份旅行保险惹</td>\n",
       "      <td>1</td>\n",
       "      <td>看毕 激动 友人 说 奥创 毁灭 台北 厚 拍了拍 肩膀 没事 反正 买 两份 旅行 保险 惹</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                             Comment  Star  \\\n",
       "0                                       连奥创都知道整容要去韩国     1   \n",
       "1  一个没有黑暗面的人不值得信任第二部剥去冗长的铺垫开场即高潮一直到结束会有人觉得只剩动作特技不...     1   \n",
       "2                                       奥创弱爆了弱爆了弱爆了啊     0   \n",
       "3  与第一集不同承上启下阴郁严肃但也不会不好看啊除非本来就不喜欢漫威电影场面更加宏大单打与团战又...     1   \n",
       "4      看毕我激动地对友人说等等奥创要来毁灭台北怎么办厚她拍了拍我肩膀没事反正你买了两份旅行保险惹     1   \n",
       "\n",
       "                                   comment_processed  \n",
       "0                                       奥创 知道 整容 韩国   \n",
       "1  一个 没有 黑暗面 值得 信任 第二部 剥去 冗长 铺垫 开场 高潮 一直 结束 会 有人 ...  \n",
       "2                                    奥创 弱 爆 弱 爆 弱 爆   \n",
       "3  第一集 不同 承上启下 阴郁 严肃 不会 好看 本来 喜欢 漫威 电影 场面 更加 宏大 单...  \n",
       "4   看毕 激动 友人 说 奥创 毁灭 台北 厚 拍了拍 肩膀 没事 反正 买 两份 旅行 保险 惹   "
      ]
     },
     "execution_count": 328,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Inspect the new data format\n",
    "data.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Task 4: Remove low-frequency words (those appearing fewer than 10 times)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 329,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "apply: 100%|██████████| 209926/209926 [1:14:31<00:00, 46.94it/s]\n"
     ]
    }
   ],
   "source": [
    "# TODO4: Remove low-frequency words (frequency < 10) and store the result back in data['comment_processed']\n",
    "# Count word occurrences in a dict mapping word -> count\n",
    "counts = {}\n",
    "for i0 in range(len(data)):\n",
    "    x_list = data.iloc[i0, 2].split()\n",
    "    for word in x_list:\n",
    "        if len(word) == 1:    # skip single-character tokens\n",
    "            continue\n",
    "        counts[word] = counts.get(word, 0) + 1\n",
    "\n",
    "# Collect the low-frequency words into a set for O(1) membership tests\n",
    "rare_set = {k for k, v in counts.items() if v < 10}\n",
    "\n",
    "# Drop low-frequency tokens. Filtering whole tokens (rather than calling str.replace\n",
    "# on the raw string) avoids corrupting longer words that merely contain a rare word\n",
    "# as a substring, and runs in time linear in the number of tokens\n",
    "def rm_word(text, rare_set):\n",
    "    return ' '.join(w for w in text.split() if w not in rare_set)\n",
    "\n",
    "data['comment_processed'] = data['comment_processed'].progress_apply(rm_word, rare_set=rare_set)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Split the corpus into training and test sets\n",
    "Use 20% of the corpus as test data and the remaining 80% as training data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 357,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(167940,) (41986,)\n"
     ]
    }
   ],
   "source": [
    "# TODO5: Split the data into training and test sets. comments_train (list) holds the training texts and comments_test (list) the test texts; y_train and y_test are the corresponding labels (0/1)\n",
    "from sklearn.model_selection import train_test_split  # used to split data into train and test sets\n",
    "test_ratio = 0.2\n",
    "comments_train, comments_test, y_train, y_test = train_test_split(data['comment_processed'], data['Star'], test_size=test_ratio, random_state=0)\n",
    "print(comments_train.shape, comments_test.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 434,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Training set\n",
       "Counter({1: 138933, 0: 29007})\n",
       "Validation set\n",
       "Counter({1: 34655, 0: 7331})\n"
     ]
    }
   ],
   "source": [
    "# Check whether the classes are balanced\n",
    "print('Training set')\n",
    "print(Counter(y_train))\n",
    "print('Validation set')\n",
    "print(Counter(y_test))"
   ]
  },
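  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Given this imbalance, accuracy alone can be misleading: always predicting the majority class (1) already yields about 138933/167940 ≈ 0.83 accuracy on the training split. The next cell (an illustrative sketch, not part of the original assignment) computes this majority-class baseline, which any trained model should beat:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Majority-class baseline: accuracy obtained by always predicting the most common label\n",
    "def majority_baseline(labels):\n",
    "    counts = Counter(labels)\n",
    "    return max(counts.values()) / sum(counts.values())\n",
    "\n",
    "print('train baseline: %.3f' % majority_baseline(y_train))\n",
    "print('test baseline:  %.3f' % majority_baseline(y_test))"
   ]
  },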
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Convert the text into vector form\n",
    "\n",
    "In this part we use three different representations:\n",
    "- tf-idf vectors\n",
    "- word2vec vectors\n",
    "- BERT vectors\n",
    "\n",
    "Once the text is converted into vectors, we move on to model training"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Task 6: Convert the text into tf-idf vectors"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 331,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(167940, 14353) (41986, 14353)\n"
     ]
    }
   ],
   "source": [
    "# TODO6: Convert the training and test texts into tf-idf vectors using sklearn's feature_extraction.text module\n",
    "#    Mind the difference between fit_transform and transform: a common mistake is to call\n",
    "#    fit_transform on both the training and test sets, which must be avoided!\n",
    "#    Also note that the result is a sparse matrix\n",
    "\n",
    "# TfidfVectorizer combines CountVectorizer (term counts) and TfidfTransformer (tf-idf weighting) in one step\n",
    "from sklearn.feature_extraction.text import TfidfVectorizer\n",
    "tf_transfomer = TfidfVectorizer(decode_error='ignore')\n",
    "tfidf_train = tf_transfomer.fit_transform(comments_train)\n",
    "tfidf_test = tf_transfomer.transform(comments_test)\n",
    "print(tfidf_train.shape, tfidf_test.shape)"
   ]
  },
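  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Why fit_transform on the training set but only transform on the test set? The vectorizer's vocabulary must be learned from the training data alone; the test set is then projected onto that fixed vocabulary, and unseen words are simply ignored. A minimal sketch on toy data (illustrative, not part of the original assignment):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy demonstration of fit_transform vs transform\n",
    "from sklearn.feature_extraction.text import TfidfVectorizer\n",
    "toy_vect = TfidfVectorizer()\n",
    "toy_train = toy_vect.fit_transform(['好看 电影', '难看 电影'])  # learns the vocabulary\n",
    "toy_test = toy_vect.transform(['好看 新词'])  # reuses it; the unseen word '新词' is dropped\n",
    "# Both matrices share the training vocabulary's column count\n",
    "assert toy_train.shape[1] == toy_test.shape[1]"
   ]
  },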
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Task 7: Convert the text into word2vec vectors"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 139,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Training a good word2vec model requires a very large corpus and substantial compute,\n",
    "# so instead of training our own we use publicly available pretrained vectors.\n",
    "# data/sgns.zhihu.word contains pretrained Chinese word vectors downloaded from\n",
    "# https://github.com/Embedding/Chinese-Word-Vectors\n",
    "# Load the pretrained vectors with KeyedVectors.load_word2vec_format()\n",
    "model = KeyedVectors.load_word2vec_format('data/sgns.zhihu.word')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 335,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([-3.51068e-01,  2.57389e-01, -1.46752e-01, -4.45400e-03,\n",
       "       -1.04235e-01,  3.72475e-01, -4.29349e-01, -2.80470e-02,\n",
       "        1.56651e-01, -1.27600e-01, -1.68833e-01, -2.91350e-02,\n",
       "        4.57850e-02, -3.53735e-01,  1.61205e-01, -1.82645e-01,\n",
       "       -1.35340e-02, -2.42591e-01, -1.33356e-01, -1.31012e-01,\n",
       "       -9.29500e-02, -1.70479e-01, -2.54004e-01, -1.20530e-01,\n",
       "       -1.33690e-01,  7.84360e-02, -1.46603e-01, -2.77378e-01,\n",
       "       -1.36723e-01,  9.29070e-02, -4.00197e-01,  2.80726e-01,\n",
       "       -1.73282e-01,  8.56630e-02,  2.37251e-01,  6.24290e-02,\n",
       "       -1.57132e-01,  2.15685e-01,  9.54770e-02,  1.09896e-01,\n",
       "       -2.05394e-01, -3.37900e-03, -2.77480e-02,  8.16580e-02,\n",
       "        9.65290e-02,  1.23188e-01,  9.55090e-02, -2.31017e-01,\n",
       "       -8.59590e-02, -2.21634e-01, -1.37885e-01, -1.84790e-01,\n",
       "       -2.40127e-01, -2.79150e-01, -4.56200e-03,  1.04099e-01,\n",
       "        3.20523e-01, -6.77270e-02,  1.95719e-01,  4.06145e-01,\n",
       "       -2.98546e-01, -1.67750e-02,  2.74917e-01, -9.02350e-02,\n",
       "       -1.06762e-01, -2.47535e-01, -4.00415e-01,  2.06635e-01,\n",
       "        2.76320e-01, -3.13900e-03,  3.04576e-01,  1.17664e-01,\n",
       "       -2.17286e-01,  7.54650e-02, -1.44985e-01,  6.36960e-02,\n",
       "        1.58869e-01, -4.71568e-01, -1.08640e-01,  4.00144e-01,\n",
       "       -1.83435e-01,  1.88286e-01,  1.32482e-01, -8.50580e-02,\n",
       "       -8.65500e-03, -2.80691e-01, -1.10871e-01,  4.72890e-02,\n",
       "       -1.47635e-01, -5.17090e-02, -4.65100e-03, -1.73998e-01,\n",
       "       -6.15050e-02,  1.14153e-01,  7.09480e-02,  9.88670e-02,\n",
       "       -7.25230e-02,  4.64800e-02, -1.83534e-01, -1.97097e-01,\n",
       "       -7.94430e-02,  2.80280e-01, -2.44620e-01, -3.95528e-01,\n",
       "       -6.10930e-02, -2.53600e-01,  1.49320e-01,  2.82553e-01,\n",
       "        4.33800e-02,  3.50895e-01, -1.42657e-01, -9.72500e-03,\n",
       "       -1.38536e-01, -1.25489e-01, -1.06447e-01, -9.92880e-02,\n",
       "        4.94210e-02,  1.19487e-01, -6.15150e-02,  1.44710e-01,\n",
       "        1.85710e-01,  7.26870e-02,  1.90587e-01,  2.89779e-01,\n",
       "        2.03630e-01, -9.82690e-02,  1.36294e-01, -1.17514e-01,\n",
       "       -3.54500e-01,  3.30250e-02,  3.01922e-01, -6.46030e-02,\n",
       "       -2.21900e-03, -1.35516e-01,  1.81371e-01,  9.43760e-02,\n",
       "        2.73173e-01, -1.90694e-01,  1.20015e-01,  1.08732e-01,\n",
       "       -3.41390e-02,  1.17405e-01,  3.11844e-01, -8.31670e-02,\n",
       "        2.78229e-01,  3.37064e-01,  6.89230e-02,  2.01023e-01,\n",
       "        3.29060e-02, -4.36554e-01, -1.64540e-02,  2.31550e-02,\n",
       "       -1.96904e-01, -1.49370e-01,  7.83610e-02,  3.27980e-02,\n",
       "        2.42316e-01, -1.67102e-01,  2.93025e-01, -7.99780e-02,\n",
       "        5.57970e-02,  4.07600e-02, -1.87006e-01,  1.90802e-01,\n",
       "        1.10987e-01, -2.66690e-02, -1.09340e-01,  2.88753e-01,\n",
       "       -2.08372e-01,  6.85860e-02, -3.21254e-01,  6.55090e-02,\n",
       "       -2.84544e-01, -2.70365e-01,  2.22242e-01, -8.31220e-02,\n",
       "       -1.01721e-01,  3.11709e-01, -1.59856e-01,  3.19859e-01,\n",
       "        5.72180e-02,  3.15010e-01, -7.65140e-02,  3.07237e-01,\n",
       "        4.14023e-01,  9.61900e-02, -8.12400e-03,  3.59550e-01,\n",
       "       -1.05667e-01, -4.35740e-02,  1.97829e-01, -1.71804e-01,\n",
       "        1.21416e-01, -6.59890e-02,  3.14697e-01, -1.31049e-01,\n",
       "       -1.27306e-01, -4.13040e-02,  3.01799e-01, -2.47272e-01,\n",
       "        8.71550e-02, -4.88150e-01, -2.20991e-01,  4.65800e-02,\n",
       "       -1.34422e-01,  1.35731e-01, -1.72283e-01,  1.16328e-01,\n",
       "        2.88320e-02,  3.31440e-02,  9.48420e-02, -3.48560e-02,\n",
       "        7.54000e-02,  3.56407e-01, -2.56189e-01, -1.32000e-04,\n",
       "        1.05849e-01,  4.28803e-01,  2.86090e-02,  7.92700e-03,\n",
       "        3.58461e-01,  2.82804e-01, -5.88800e-02,  1.73850e-02,\n",
       "        9.28060e-02, -3.90392e-01,  1.89097e-01,  2.85916e-01,\n",
       "        1.51707e-01,  2.58823e-01,  1.63509e-01,  1.26390e-01,\n",
       "        1.95748e-01, -9.80750e-02,  9.12650e-02, -8.20320e-02,\n",
       "       -1.50282e-01,  1.10330e-01,  3.82834e-01, -1.21887e-01,\n",
       "       -1.31515e-01, -4.10777e-01,  2.19966e-01, -1.48785e-01,\n",
       "        1.02161e-01,  8.31420e-02,  2.08074e-01,  3.58526e-01,\n",
       "        1.41909e-01,  2.27764e-01,  4.61127e-01, -1.61267e-01,\n",
       "       -1.22107e-01,  1.02524e-01, -6.15770e-02,  2.10200e-02,\n",
       "        1.46990e-02, -2.23617e-01,  1.71110e-02,  1.20386e-01,\n",
       "       -5.65090e-02, -2.34566e-01,  4.34660e-02,  1.97851e-01,\n",
       "        2.37255e-01, -1.44901e-01,  4.41118e-01, -3.86210e-02,\n",
       "       -2.60820e-01,  4.17700e-02, -9.47700e-02,  3.21410e-02,\n",
       "       -1.86014e-01, -1.40884e-01,  2.02842e-01, -4.83673e-01,\n",
       "        2.19995e-01,  3.59395e-01, -1.84255e-01,  1.30998e-01,\n",
       "        1.10280e-01,  1.42483e-01, -2.01510e-01, -1.34156e-01,\n",
       "       -1.25440e-01, -9.89700e-02, -1.45869e-01, -2.23137e-01,\n",
       "        4.83180e-02,  2.55901e-01, -1.25977e-01, -1.36290e-01,\n",
       "       -3.33329e-01, -2.65370e-01, -1.48834e-01,  1.28487e-01,\n",
       "       -7.88080e-02,  1.35266e-01,  2.17841e-01,  6.60870e-02],\n",
       "      dtype=float32)"
      ]
     },
     "execution_count": 335,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Example: look up a pre-trained word vector\n",
    "model['今天']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 358,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "训练集\n",
      "前10000行花费930.16秒\n",
      "前20000行花费1844.12秒\n",
      "前30000行花费2758.68秒\n",
      "前40000行花费3674.82秒\n",
      "前50000行花费4598.19秒\n",
      "前60000行花费5515.28秒\n",
      "前70000行花费6428.56秒\n",
      "前80000行花费7333.20秒\n",
      "前90000行花费8217.91秒\n",
      "前100000行花费9105.13秒\n",
      "前110000行花费9996.44秒\n",
      "前120000行花费10899.80秒\n",
      "前130000行花费11800.79秒\n",
      "前140000行花费12690.13秒\n",
      "前150000行花费13589.60秒\n",
      "前160000行花费14483.87秒\n",
      "测试集\n",
      "前10000行花费914.94秒\n",
      "前20000行花费1830.88秒\n",
      "前30000行花费2753.33秒\n",
      "前40000行花费3661.35秒\n",
      "(167940, 300) (41986, 300)\n"
     ]
    }
   ],
   "source": [
    "# TODO7: For each sentence, build a sentence vector by averaging the vectors of all words it contains.\n",
    "import time\n",
    "\n",
    "\n",
    "# Helper: map each comment to the mean of its in-vocabulary word vectors\n",
    "def get_word2vec(comments_set, model):\n",
    "    # Pre-allocate the output matrix\n",
    "    m = len(comments_set)\n",
    "    n = model.vector_size\n",
    "    output = np.zeros((m, n))\n",
    "    startTime = time.time()\n",
    "    for i0 in range(len(comments_set)):\n",
    "        # Split the whitespace-tokenized comment\n",
    "        content = comments_set.iloc[i0]\n",
    "        cutWords = content.split(\" \")\n",
    "        # Keep only words in the model vocabulary; test membership on the\n",
    "        # dict directly (wrapping it in list() makes every lookup O(|V|))\n",
    "        cutWords = [x for x in cutWords if x in model.key_to_index]\n",
    "        # Average the word vectors; rows with no known word stay all-zero\n",
    "        if len(cutWords) > 0:\n",
    "            output[i0] = np.array(model[cutWords]).mean(axis=0)\n",
    "        # Progress report\n",
    "        if (i0 + 1) % 10000 == 0:\n",
    "            usedTime = time.time() - startTime\n",
    "            print('前%d行花费%.2f秒' % (i0 + 1, usedTime))\n",
    "    return output\n",
    "\n",
    "print('训练集')\n",
    "word2vec_train = get_word2vec(comments_set=comments_train, model=model)\n",
    "print('测试集')\n",
    "word2vec_test = get_word2vec(comments_set=comments_test, model=model)\n",
    "print(word2vec_train.shape, word2vec_test.shape)"
   ]
  },
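  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The averaging in `get_word2vec` can be sketched in isolation with a toy two-word vocabulary standing in for the pre-trained vectors (the dict below is illustrative only, not the real model): out-of-vocabulary tokens are dropped, and a sentence with no known words keeps the zero vector."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Toy stand-in for the pre-trained word vectors (illustrative values only)\n",
    "toy_vectors = {'今天': np.array([1.0, 0.0]), '天气': np.array([0.0, 1.0])}\n",
    "\n",
    "def sentence_vector(sentence, vectors, dim=2):\n",
    "    # Keep only in-vocabulary tokens, then average their vectors\n",
    "    words = [w for w in sentence.split(' ') if w in vectors]\n",
    "    if not words:\n",
    "        return np.zeros(dim)  # no known words -> zero vector\n",
    "    return np.mean([vectors[w] for w in words], axis=0)\n",
    "\n",
    "sentence_vector('今天 天气 不错', toy_vectors)  # -> array([0.5, 0.5])"
   ]
  },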
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Task 8: convert the text into BERT vectors"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "loading vocabulary file /Users/zhuochen/.fastNLP/embedding/bert-base-chinese/vocab.txt\n",
      "Load pre-trained BERT parameters from file /Users/zhuochen/.fastNLP/embedding/bert-base-chinese/pytorch_model.bin.\n",
      "训练集\n",
      "前500行花费52.65秒\n",
      "前1000行花费103.45秒\n",
      "前1500行花费153.96秒\n",
      "前2000行花费203.33秒\n",
      "前2500行花费252.75秒\n",
      "前3000行花费300.27秒\n",
      "前3500行花费348.40秒\n",
      "前4000行花费397.53秒\n",
      "前4500行花费446.81秒\n",
      "前5000行花费496.64秒\n",
      "前5500行花费546.89秒\n",
      "前6000行花费596.65秒\n",
      "前6500行花费647.02秒\n",
      "前7000行花费702.15秒\n",
      "前7500行花费750.89秒\n",
      "前8000行花费800.25秒\n",
      "前8500行花费848.25秒\n",
      "前9000行花费897.51秒\n",
      "前9500行花费946.65秒\n",
      "前10000行花费996.23秒\n",
      "前10500行花费1044.77秒\n",
      "前11000行花费1094.60秒\n",
      "前11500行花费1147.95秒\n",
      "前12000行花费1203.76秒\n",
      "前12500行花费1256.62秒\n",
      "前13000行花费1304.21秒\n",
      "前13500行花费1353.75秒\n",
      "前14000行花费1402.88秒\n",
      "前14500行花费1452.64秒\n",
      "前15000行花费1501.59秒\n",
      "前15500行花费1550.71秒\n",
      "前16000行花费1599.64秒\n",
      "前16500行花费1648.60秒\n",
      "前17000行花费1698.62秒\n",
      "前17500行花费1747.94秒\n",
      "前18000行花费1796.87秒\n",
      "前18500行花费1846.46秒\n",
      "前19000行花费1895.98秒\n",
      "前19500行花费1945.16秒\n",
      "前20000行花费1992.87秒\n",
      "前20500行花费2042.35秒\n",
      "前21000行花费2090.73秒\n",
      "前21500行花费2140.28秒\n",
      "前22000行花费2189.87秒\n",
      "前22500行花费2238.43秒\n",
      "前23000行花费2287.99秒\n",
      "前23500行花费2336.95秒\n",
      "前24000行花费2386.51秒\n",
      "前24500行花费2435.14秒\n",
      "前25000行花费2483.02秒\n",
      "前25500行花费2532.56秒\n",
      "前26000行花费2581.66秒\n",
      "前26500行花费2631.60秒\n",
      "前27000行花费2680.35秒\n",
      "前27500行花费2730.33秒\n",
      "前28000行花费2780.43秒\n",
      "前28500行花费2829.63秒\n",
      "前29000行花费2879.18秒\n",
      "前29500行花费2928.02秒\n",
      "前30000行花费2978.42秒\n",
      "前30500行花费3027.82秒\n",
      "前31000行花费3077.54秒\n",
      "前31500行花费3127.11秒\n",
      "前32000行花费3176.14秒\n",
      "前32500行花费3226.08秒\n",
      "前33000行花费3276.33秒\n",
      "前33500行花费3324.02秒\n",
      "前34000行花费3373.00秒\n",
      "前34500行花费3421.78秒\n",
      "前35000行花费3470.28秒\n",
      "前35500行花费3519.84秒\n",
      "前36000行花费3570.82秒\n",
      "前36500行花费3620.99秒\n",
      "前37000行花费3668.99秒\n",
      "前37500行花费3717.74秒\n",
      "前38000行花费3766.98秒\n",
      "前38500行花费3816.74秒\n",
      "前39000行花费3867.52秒\n",
      "前39500行花费3917.95秒\n",
      "前40000行花费3968.13秒\n",
      "前40500行花费4016.98秒\n",
      "前41000行花费4066.46秒\n",
      "前41500行花费4116.77秒\n",
      "前42000行花费4165.30秒\n",
      "前42500行花费4213.76秒\n",
      "前43000行花费4263.18秒\n",
      "前43500行花费4311.61秒\n",
      "前44000行花费4360.86秒\n",
      "前44500行花费4410.54秒\n",
      "前45000行花费4460.24秒\n",
      "前45500行花费4509.80秒\n",
      "前46000行花费4559.58秒\n",
      "前46500行花费4610.35秒\n",
      "前47000行花费4660.01秒\n",
      "前47500行花费4709.15秒\n",
      "前48000行花费4758.28秒\n",
      "前48500行花费4807.62秒\n",
      "前49000行花费4857.58秒\n",
      "前49500行花费4907.65秒\n",
      "前50000行花费4957.20秒\n",
      "前50500行花费5006.28秒\n",
      "前51000行花费5055.34秒\n",
      "前51500行花费5105.00秒\n",
      "前52000行花费5155.12秒\n",
      "前52500行花费5204.54秒\n",
      "前53000行花费5253.53秒\n",
      "前53500行花费5302.45秒\n",
      "前54000行花费5352.99秒\n",
      "前54500行花费5402.81秒\n",
      "前55000行花费5452.36秒\n",
      "前55500行花费5501.65秒\n",
      "前56000行花费5549.94秒\n",
      "前56500行花费5598.63秒\n",
      "前57000行花费5647.28秒\n",
      "前57500行花费5697.09秒\n",
      "前58000行花费5747.57秒\n",
      "前58500行花费5798.68秒\n",
      "前59000行花费5847.46秒\n",
      "前59500行花费5896.21秒\n",
      "前60000行花费5945.66秒\n",
      "前60500行花费5995.37秒\n",
      "前61000行花费6045.22秒\n",
      "前61500行花费6095.24秒\n",
      "前62000行花费6144.32秒\n",
      "前62500行花费6193.43秒\n",
      "前63000行花费6243.82秒\n",
      "前63500行花费6297.35秒\n",
      "前64000行花费6347.15秒\n",
      "前64500行花费6395.87秒\n",
      "前65000行花费6444.70秒\n",
      "前65500行花费6494.67秒\n",
      "前66000行花费6544.33秒\n",
      "前66500行花费6593.94秒\n",
      "前67000行花费6643.50秒\n",
      "前67500行花费6692.86秒\n",
      "前68000行花费6744.72秒\n",
      "前68500行花费6794.07秒\n",
      "前69000行花费6842.23秒\n",
      "前69500行花费6890.52秒\n",
      "前70000行花费6939.73秒\n",
      "前70500行花费6989.22秒\n",
      "前71000行花费7039.26秒\n",
      "前71500行花费7088.98秒\n",
      "前72000行花费7138.02秒\n",
      "前72500行花费7187.04秒\n",
      "前73000行花费7236.74秒\n",
      "前73500行花费7284.33秒\n",
      "前74000行花费7333.76秒\n",
      "前74500行花费7383.44秒\n",
      "前75000行花费7433.57秒\n",
      "前75500行花费7483.26秒\n",
      "前76000行花费7532.34秒\n",
      "前76500行花费7582.01秒\n",
      "前77000行花费7631.19秒\n",
      "前77500行花费7679.90秒\n",
      "前78000行花费7729.88秒\n",
      "前78500行花费7779.12秒\n",
      "前79000行花费7827.84秒\n",
      "前79500行花费7877.14秒\n",
      "前80000行花费7925.78秒\n",
      "前80500行花费8278.29秒\n",
      "前81000行花费8328.02秒\n"
     ]
    }
   ],
   "source": [
    "# This originally imported the GPU build of the BERT embedding model (mxnet).\n",
    "# Without a GPU, ctx can keep its default cpu(0), but CPU inference is very slow.\n",
    "# The pre-trained model is downloaded on first use, which takes a while.\n",
    "#ctx = mxnet.gpu()\n",
    "#embedding = BertEmbedding(ctx=ctx)\n",
    "\n",
    "# Use fastNLP instead:\n",
    "#https://fastnlp.readthedocs.io/zh/v0.5.0/tutorials/tutorial_3_embedding.html#part-v-bert-embedding\n",
    "import torch\n",
    "from fastNLP import Vocabulary\n",
    "from fastNLP.embeddings import BertEmbedding\n",
    "# Initialization\n",
    "vocab = Vocabulary()\n",
    "model = BertEmbedding(vocab, model_dir_or_name='cn-base', requires_grad=False)\n",
    "# Probe the hidden size with a one-word input\n",
    "words = torch.LongTensor([[vocab.to_index(word) for word in ['今天']]])\n",
    "n = model(words).numpy().shape[2]\n",
    "# Silence warnings\n",
    "import warnings\n",
    "warnings.filterwarnings(\"ignore\")\n",
    "\n",
    "def get_bert(comments_set, model, n):\n",
    "    # Pre-allocate the output matrix\n",
    "    m = len(comments_set)\n",
    "    output = np.zeros((m, n))\n",
    "    startTime = time.time()\n",
    "    for i0 in range(len(comments_set)):\n",
    "        # Tokenize the whitespace-separated comment\n",
    "        content = comments_set.iloc[i0]\n",
    "        words = torch.LongTensor([[vocab.to_index(word) for word in content.split()]])\n",
    "        # Mean-pool over the batch and token axes: (1, seq_len, n) -> (n,)\n",
    "        x = model(words).numpy()  # was `embed(words)`, an undefined name\n",
    "        output[i0] = x.mean(axis=(0, 1))\n",
    "        # Progress report\n",
    "        if (i0 + 1) % 500 == 0:\n",
    "            usedTime = time.time() - startTime\n",
    "            print('前%d行花费%.2f秒' % (i0 + 1, usedTime))\n",
    "    return output\n",
    "\n",
    "# TODO8: As with word2vec, compute vectors for the training and test texts, again by averaging the token vectors.\n",
    "print('训练集')\n",
    "bert_train = get_bert(comments_set=comments_train, model=model, n=n)\n",
    "print('测试集')\n",
    "bert_test = get_bert(comments_set=comments_test, model=model, n=n)\n",
    "print(bert_train.shape, bert_test.shape)"
   ]
  },
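  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The pooling step in `get_bert` reduces one BERT activation of shape `(1, seq_len, hidden)` to a single `(hidden,)` sentence vector. A minimal numpy sketch of that reduction (random values stand in for real BERT outputs):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "x = rng.normal(size=(1, 5, 768))  # stand-in for one BERT output: (batch, seq_len, hidden)\n",
    "\n",
    "# Mean-pool over the batch and token axes in one step\n",
    "pooled = x.mean(axis=(0, 1))\n",
    "\n",
    "# Equivalent to pooling one axis at a time\n",
    "two_step = x.mean(axis=0).mean(axis=0)\n",
    "pooled.shape  # -> (768,)"
   ]
  },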
  {
   "cell_type": "code",
   "execution_count": 408,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(167940, 14353) (41986, 14353)\n",
      "(167940, 300) (41986, 300)\n",
      "(167940, 768) (41986, 768)\n"
     ]
    }
   ],
   "source": [
    "print (tfidf_train.shape, tfidf_test.shape)\n",
    "print (word2vec_train.shape, word2vec_test.shape)\n",
    "print (bert_train.shape, bert_test.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 410,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Back up the computed feature matrices\n",
    "np.save('word2vec_train',word2vec_train)\n",
    "np.save('word2vec_test',word2vec_test)\n",
    "np.save('bert_train',bert_train)\n",
    "np.save('bert_test',bert_test)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. Model training and evaluation\n",
    "For each of the three vector representations above, train a logistic regression model. This involves:\n",
    "- Building the model\n",
    "- Training it (with cross-validation)\n",
    "- Reporting the best result"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 411,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Import logistic regression\n",
    "from sklearn.linear_model import LogisticRegression"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Task 9: train a model with tf-idf features and logistic regression"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 421,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "TF-IDF LR train accuracy 0.8694652852209123\n",
      "TF-IDF LR test accuracy 0.867455818606202\n",
      "TF-IDF LR test F1_score 0.7112511803267086\n"
     ]
    }
   ],
   "source": [
    "# TODO9: tf-idf + logistic regression; use GridSearchCV for cross-validation and pick the best hyperparameters\n",
    "\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "# liblinear supports both the l1 and l2 penalties (the lbfgs default only supports l2)\n",
    "parameters = {'penalty': ('l1', 'l2'), 'C': (0.01, 0.1, 1, 10)}\n",
    "grid_search = GridSearchCV(LogisticRegression(solver='liblinear'), parameters, verbose=0, scoring='accuracy', cv=5)\n",
    "grid = grid_search.fit(tfidf_train, y_train)\n",
    "print('TF-IDF LR train accuracy %s' % grid.best_score_)\n",
    "best_model = grid.best_estimator_\n",
    "tf_idf_y_pred = best_model.predict(tfidf_test)\n",
    "print('TF-IDF LR test accuracy %s' % metrics.accuracy_score(y_test, tf_idf_y_pred))\n",
    "# Macro F1 on the test set\n",
    "print('TF-IDF LR test F1_score %s' % metrics.f1_score(y_test, tf_idf_y_pred, average=\"macro\"))"
   ]
  },
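  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Accuracy can stay high while macro F1 drops, because macro F1 averages the per-class F1 scores with equal weight, so the rare class counts as much as the frequent one. A small hand-worked sketch with made-up labels (8 positives, 2 negatives):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Made-up labels for illustration; the classifier almost always predicts 1\n",
    "y_true = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])\n",
    "y_pred = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 0])\n",
    "\n",
    "def f1(y_true, y_pred, cls):\n",
    "    # Precision/recall/F1 for one class, from raw counts\n",
    "    tp = np.sum((y_pred == cls) & (y_true == cls))\n",
    "    fp = np.sum((y_pred == cls) & (y_true != cls))\n",
    "    fn = np.sum((y_pred != cls) & (y_true == cls))\n",
    "    p = tp / (tp + fp) if tp + fp else 0.0\n",
    "    r = tp / (tp + fn) if tp + fn else 0.0\n",
    "    return 2 * p * r / (p + r) if p + r else 0.0\n",
    "\n",
    "accuracy = np.mean(y_true == y_pred)  # 0.9\n",
    "macro_f1 = (f1(y_true, y_pred, 0) + f1(y_true, y_pred, 1)) / 2  # ~0.80\n",
    "accuracy, macro_f1"
   ]
  },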
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Task 10: train a model with word2vec features and logistic regression"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 422,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "word2vec LR train accuracy 0.8469691556508276\n",
      "Word2vec LR test accuracy 0.84378126042014\n",
      "Word2vec LR test F1_score 0.62832619024489\n"
     ]
    }
   ],
   "source": [
    "# TODO10: word2vec + logistic regression; use GridSearchCV for cross-validation and pick the best hyperparameters\n",
    "\n",
    "# liblinear supports both the l1 and l2 penalties (the lbfgs default only supports l2)\n",
    "parameters = {'penalty': ('l1', 'l2'), 'C': (0.01, 0.1, 1, 10)}\n",
    "grid_search = GridSearchCV(LogisticRegression(solver='liblinear'), parameters, verbose=0, scoring='accuracy', cv=5)\n",
    "grid = grid_search.fit(word2vec_train, y_train)\n",
    "print('word2vec LR train accuracy %s' % grid.best_score_)\n",
    "best_model = grid.best_estimator_\n",
    "word2vec_y_pred = best_model.predict(word2vec_test)\n",
    "\n",
    "print('Word2vec LR test accuracy %s' % metrics.accuracy_score(y_test, word2vec_y_pred))\n",
    "# Macro F1 on the test set\n",
    "print('Word2vec LR test F1_score %s' % metrics.f1_score(y_test, word2vec_y_pred, average=\"macro\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Task 11: train a model with BERT features and logistic regression"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 431,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "bert LR train accuracy 0.8272775991425508\n",
      "Bert LR test accuracy 0.8253941790120516\n",
      "Bert LR test F1_score 0.4521731188267376\n"
     ]
    }
   ],
   "source": [
    "# TODO11: BERT + logistic regression; use GridSearchCV for cross-validation and pick the best hyperparameters\n",
    "\n",
    "# liblinear supports both the l1 and l2 penalties (the lbfgs default only supports l2)\n",
    "parameters = {'penalty': ('l1', 'l2'), 'C': (0.01, 0.1, 1, 10)}\n",
    "grid_search = GridSearchCV(LogisticRegression(solver='liblinear'), parameters, verbose=0, scoring='accuracy', cv=5)\n",
    "\n",
    "# The BERT features contain NaNs (e.g. from empty comments); zero them out first\n",
    "bert_train[np.isnan(bert_train)] = 0\n",
    "bert_test[np.isnan(bert_test)] = 0\n",
    "grid = grid_search.fit(bert_train, y_train)\n",
    "print('bert LR train accuracy %s' % grid.best_score_)\n",
    "best_model = grid.best_estimator_\n",
    "bert_y_pred = best_model.predict(bert_test)\n",
    "\n",
    "print('Bert LR test accuracy %s' % metrics.accuracy_score(y_test, bert_y_pred))\n",
    "# Macro F1 on the test set\n",
    "print('Bert LR test F1_score %s' % metrics.f1_score(y_test, bert_y_pred, average=\"macro\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Task 12: briefly summarize the results above as a few numbered key points, covering:\n",
    "- What do the results indicate?\n",
    "- How could they be improved?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Summary\n",
    "1. All three feature sets reach accuracy above 0.8 on both the training and test sets, so there is no sign of serious overfitting.\n",
    "2. All three models score below 0.8 on macro F1, pointing to low recall on the minority class. Descriptive statistics show that positive reviews (label 1) far outnumber negative ones in both the training and validation sets, i.e. the classes are heavily imbalanced.\n",
    "3. By accuracy, TF-IDF > word2vec > BERT. My guess is that this stems from the feature matrices, or from the pre-trained models lacking domain specificity; since I have no GPU, I did not fine-tune the pre-trained models.\n",
    "4. To improve: handle the class imbalance with undersampling/oversampling/SMOTE; continue pre-training on the Douban corpus; or try classifiers beyond LR, such as Bayesian networks or an LSTM.\n",
    "5. To finish the BERT part on time and speed up the run, I used fastNLP rather than the original mxnet ``BertEmbedding`` package."
   ]
  },
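  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The oversampling mentioned in point 4 can be sketched with plain numpy: duplicate minority-class rows (sampled with replacement) until every class matches the majority count. This is the simplest variant; SMOTE would instead synthesize new minority samples by interpolating between neighbours."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(42)\n",
    "\n",
    "def random_oversample(X, y):\n",
    "    # Grow every class to the size of the largest one by resampling with replacement\n",
    "    classes, counts = np.unique(y, return_counts=True)\n",
    "    n_max = counts.max()\n",
    "    idx = []\n",
    "    for c in classes:\n",
    "        rows = np.flatnonzero(y == c)\n",
    "        idx.extend(rows)\n",
    "        idx.extend(rng.choice(rows, size=n_max - len(rows), replace=True))\n",
    "    idx = np.array(idx)\n",
    "    return X[idx], y[idx]\n",
    "\n",
    "X = np.arange(10).reshape(5, 2)\n",
    "y = np.array([1, 1, 1, 1, 0])\n",
    "X_bal, y_bal = random_oversample(X, y)\n",
    "np.bincount(y_bal)  # -> array([4, 4])"
   ]
  },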
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
