{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Hands-On Natural Language Processing: Text Classification\n",
     "\n",
     "Text classification is an important research area in natural language processing (NLP). It refers to assigning text to categories according to a predefined taxonomy or standard, and covers both binary and multi-class settings. Today, text classification is widely applied in NLP tasks such as sentiment analysis, content moderation, ad filtering, and adult-content detection. A wide range of models is available, from classic machine-learning approaches such as Naive Bayes and SVM to deep-learning architectures such as CNNs, RNNs, and their variants like CNN-LSTM.\n",
     "\n",
     "This walkthrough first introduces the text classification feature of ModelArts, and then uses a BERT model for a text classification task: Chinese sentiment analysis.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## The ModelArts Text Classification Feature\n",
     "\n",
     "This section introduces the text classification labeling feature of ModelArts, which classifies text content by label.\n",
     "\n",
     "Log in to the ModelArts management console and choose `Data Labeling` in the left navigation pane to open the `Datasets` page.\n",
     "\n",
     "Click `Create Dataset` and prepare the text data to be labeled.\n",
    "\n",
    "![](./img/data_tagging.png)\n",
    "\n",
     "#### Prepare an unlabeled dataset\n",
     "\n",
     "First, create a dataset in OBS; subsequent operations such as labeling and publishing are all based on the dataset you create and manage.\n",
     "\n",
     "OBS product page: https://www.huaweicloud.com/product/obs0.html\n",
     "\n",
     "The data labeling feature requires permission to access OBS and cannot be used before agency authorization is granted. On the `Data Labeling` page, click `Service Authorization`, and have an account with authorization privileges click `Agree` to enable the feature.\n",
     "\n",
     "Create an OBS bucket and folders to store the data. In this walkthrough the bucket is named `classification-tagging`. **Create a new bucket with your own name: OBS bucket names are globally unique, so if the name conflicts, choose a different one.**\n",
     "\n",
     "After the bucket is created, create input and output folders in it, and upload the text file to be labeled to the input folder.\n",
     "\n",
     "Requirements for the labeling file: **txt or csv format, no larger than 8 MB, newline-delimited, with each line representing one object to label.**\n",
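     "\n",
     "These requirements can be checked programmatically before uploading. A minimal sketch (the 8 MB limit and the one-object-per-line rule come from the requirements above; the function name is just an example):\n",
     "\n",
     "```python\n",
     "import os\n",
     "\n",
     "MAX_SIZE = 8 * 1024 * 1024  # 8 MB limit from the labeling requirements\n",
     "\n",
     "def check_labeling_file(path):\n",
     "    # Return the number of labeling objects if the file is valid, else raise.\n",
     "    if not path.endswith(('.txt', '.csv')):\n",
     "        raise ValueError('file must be .txt or .csv')\n",
     "    if os.path.getsize(path) > MAX_SIZE:\n",
     "        raise ValueError('file exceeds 8 MB')\n",
     "    with open(path, encoding='utf-8') as f:\n",
     "        return sum(1 for line in f if line.strip())  # one object per non-empty line\n",
     "```\n",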
    "\n",
     "The sample labeling file `text.txt` used in this walkthrough can be [downloaded here](https://modelarts-labs.obs.cn-north-1.myhuaweicloud.com/notebook/DL_nlp_text_classification/text.tar.gz); decompress it and upload it to the input folder to follow the steps in this case.\n",
     "\n",
     "The folder structure created in this walkthrough is as follows:\n",
    "\n",
    "```\n",
    "tagging\n",
    "   │\n",
    "   ├─input\n",
    "   │       └─text.txt\n",
    "   └─output\n",
    "```\n",
    "\n",
     "where\n",
     "\n",
     "- `input` is the input folder for text classification\n",
     "- `text.txt` is the input text file\n",
     "- `output` is the output folder\n",
    "\n",
    "\n",
     "Create a dataset for the text classification task, as shown below.\n",
     "\n",
     "![](./img/tagging_classification_1.png)\n",
     "\n",
     "Note the creation parameters:\n",
     "\n",
     "- Name: a custom dataset name; `classification-tagging` in this case\n",
     "- Input Dataset Path: `/classification-tagging/tagging/input/` in this case\n",
     "- Output Dataset Path: `/classification-tagging/tagging/output/` in this case\n",
     "- Labeling Scene: select `Text`\n",
     "- Labeling Type: select `Text classification`\n",
     "- Label Set: the label names, number, and colors are customizable. This case defines two labels: `正面` (positive) in red and `负面` (negative) in green.\n",
    "\n",
    "![](./img/label_color.png)\n",
    "\n",
     "After completing the settings, click `Create` in the lower-right corner. Once the dataset is created, you are automatically redirected to the dataset management page.\n",
     "\n",
     "![](./img/tagging_classification_2.png)\n",
     "\n",
     "Click the dataset name to open the labeling page. Select an unlabeled object and click a label to annotate it, as shown below.\n",
     "\n",
     "![](./img/tagging_classification_3.png)\n",
     "\n",
     "Select the object `那场比赛易建联打得真好！` (\"Yi Jianlian played really well in that game!\"), choose the `正面` (positive) label from the label set, and click `Save Current Page` at the bottom.\n",
     "\n",
     "Continue labeling the remaining objects in the same way. After all data is labeled (this sample provides only five texts), click `Labeled` to review the results.\n",
     "\n",
     "![](./img/tagging_classification_4.png)\n",
     "\n",
     "Click `Back to Dataset List`; you can see that the whole dataset has been labeled.\n",
     "\n",
     "![](./img/tagging_classification_5.png)\n",
     "\n",
     "A newly created (unpublished) dataset has no version information; it must be published before it can be used for model development or training.\n",
     "\n",
     "Click `Publish` to edit the version name; the default `V001` is used in this case.\n",
     "\n",
     "![](./img/tagging_classification_6.png)\n",
     "\n",
     "A successful publication looks like this.\n",
     "\n",
     "![](./img/tagging_classification_7.png)\n",
     "\n",
     "You can view the version's name, status, total number of files, and number of labeled files, and check the publication time under \"Version Evolution\" on the left.\n",
     "\n",
     "The labeled dataset is now ready to use; the labeling results are stored in the `output` folder.\n",
     "\n",
     "ModelArts will later launch smart labeling for text. If you tried image smart labeling in the second walkthrough, you know it can finish labeling quickly and save more than 70% of the labeling time. Smart labeling selects an existing model in the system and, based on the labels annotated so far, automatically labels the remaining data. Stay tuned for updates to the data labeling feature."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Access ModelArts\n",
     "\n",
     "Open the following link: https://www.huaweicloud.com/product/modelarts.html to go to the ModelArts home page. Click \"Use Now\", log in with your username and password, and you will land on the ModelArts console.\n",
     "\n",
     "### Create a ModelArts notebook\n",
     "\n",
     "Next, we create a notebook development environment in ModelArts. ModelArts notebook provides a web-based Python development environment where you can conveniently write and run code and view the results.\n",
     "\n",
     "Step 1: On the ModelArts main page, click \"DevEnviron\" and then \"Create\"\n",
     "\n",
     "![create_nb_create_button](./img/create_nb_create_button.png)\n",
     "\n",
     "Step 2: Fill in the notebook parameters:\n",
    "\n",
     "| Parameter | Description |\n",
     "| --- | --- |\n",
     "| Billing Mode | Pay-per-use |\n",
     "| Name | Notebook instance name, e.g. text_sentiment_analysis |\n",
     "| Work Environment | Python3 |\n",
     "| Resource Pool | Select \"Public resource pools\" |\n",
     "| Type | This case uses a fairly complex deep neural network that needs more compute, so select \"GPU\" |\n",
     "| Instance Flavor | Select \"8 vCPUs &#124; 64 GiB &#124; 1 x P100\" |\n",
     "| Storage | Select EVS with a 5 GB disk |\n",
    "\n",
     "Step 3: After configuring the parameters, click Next to preview the notebook settings. After confirming them, click \"Create Now\"\n",
     "\n",
     "![create_nb_creation_summary](./img/create_nb_creation_summary.png)\n",
     "\n",
     "Step 4: After creation, return to the DevEnviron page, wait until the notebook is ready, and then open it for the next step.\n",
     "![modelarts_notebook_index](./img/modelarts_notebook_index.png)\n",
     "\n",
     "### Create a development environment in ModelArts\n",
     "\n",
     "Next, we create an actual development environment for the subsequent experiment steps.\n",
     "\n",
     "Step 1: Click the \"Open\" button shown below to enter the notebook you just created\n",
     "![inter_dev_env](img/enter_dev_env.png)\n",
     "\n",
     "Step 2: Create a notebook with a Python3 environment: click \"New\" in the upper-right corner and choose the TensorFlow 1.13.1 environment.\n",
     "\n",
     "Step 3: Click the file name \"Untitled\" at the top left and enter a name related to this experiment\n",
     "![notebook_untitled_filename](./img/notebook_untitled_filename.png)\n",
     "![notebook_name_the_ipynb](./img/notebook_name_the_ipynb.png)\n",
     "\n",
     "\n",
     "### Write and run code in the notebook\n",
     "\n",
     "In the notebook, enter a simple print statement and click the Run button above to see the result:\n",
     "![run_helloworld](./img/run_helloworld.png)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Text Classification: Chinese Sentiment Analysis\n",
     "\n",
     "Sentiment analysis is the analysis and mining of subjective text that expresses opinions, preferences, and emotions. It started with the study of emotionally charged words: for example, \"beautiful\" carries a positive connotation while \"ugly\" carries a negative one. As large amounts of subjective, emotionally colored text appeared on the Internet, researchers gradually moved from analyzing individual sentiment words to studying complete sentiment-bearing documents.\n",
     "\n",
     "To quantify sentiment, a text is usually tagged with a floating-point score between 0 and 1: the closer to 1, the more positive the sentiment; the closer to 0, the more negative.\n",
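     "\n",
     "As an illustration, such a score can be mapped to a discrete label by thresholding. A minimal sketch (the 0.5 threshold is a common convention, not something defined in this walkthrough):\n",
     "\n",
     "```python\n",
     "def score_to_label(score, threshold=0.5):\n",
     "    # Map a sentiment score in [0, 1] to a binary label: 1 = positive, 0 = negative.\n",
     "    if not 0.0 <= score <= 1.0:\n",
     "        raise ValueError('score must be in [0, 1]')\n",
     "    return 1 if score >= threshold else 0\n",
     "```\n",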
    "\n",
     "### Dataset\n",
     "\n",
     "The Chinese dataset used in this walkthrough consists of hotel reviews collected by Songbo Tan from a hotel website: more than 7,000 reviews in total, over 5,000 positive and over 2,000 negative.\n",
     "\n",
     "Data format:\n",
     "\n",
     "| Field | label | review |\n",
     "| --- | --- | --- |\n",
     "| Meaning | sentiment label | review text |\n",
     "\n",
     "### The BERT Model\n",
     "\n",
     "This walkthrough uses **BERT**, one of the most powerful recent models in NLP.\n",
     "\n",
     "The Chinese pre-trained model **BERT-Base, Chinese** can be downloaded from [BERT-Base, Chinese](https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip).\n",
     "\n",
     "#### Prepare the source code and data\n",
     "\n",
     "Prepare the source code and data needed for this case. The resources are stored in OBS, and we download them locally via the ModelArts SDK."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Successfully download file modelarts-labs/notebook/DL_nlp_text_classification/text_classification.tar.gz from OBS to local ./text_classification.tar.gz\n",
      "total 374440\r\n",
      "drwxrwxrwx  4 ma-user ma-group      4096 Sep 12 10:29 .\r\n",
      "drwsrwsr-x 22 ma-user ma-group      4096 Sep 12 09:42 ..\r\n",
      "drwxr-x---  2 ma-user ma-group      4096 Sep 12 09:39 .ipynb_checkpoints\r\n",
      "-rw-r-----  1 ma-user ma-group     33828 Sep 12 10:23 text_classification.ipynb\r\n",
      "-rw-r-----  1 ma-user ma-group 383370868 Sep 12 10:29 text_classification.tar.gz\r\n",
      "drwx------  2 ma-user ma-group      4096 Sep 12 10:04 .Trash-1000\r\n"
     ]
    }
   ],
   "source": [
    "from modelarts.session import Session\n",
    "session = Session()\n",
    "\n",
    "if session.region_name == 'cn-north-1':\n",
    "    bucket_path = 'modelarts-labs/notebook/DL_nlp_text_classification/text_classification.tar.gz'\n",
    "    \n",
    "elif session.region_name == 'cn-north-4':\n",
    "    bucket_path = 'modelarts-labs-bj4/notebook/DL_nlp_text_classification/text_classification.tar.gz'\n",
    "else:\n",
     "    print(\"Please switch the region to CN North-Beijing1 or CN North-Beijing4\")\n",
    "    \n",
    "session.download_data(bucket_path=bucket_path, path='./text_classification.tar.gz')\n",
    "\n",
    "!ls -la"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Extract the archive downloaded from OBS, then delete the archive."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "total 56\r\n",
      "drwxrwxrwx  5 ma-user ma-group  4096 Sep 12 10:29 .\r\n",
      "drwsrwsr-x 22 ma-user ma-group  4096 Sep 12 09:42 ..\r\n",
      "drwxr-x---  2 ma-user ma-group  4096 Sep 12 09:39 .ipynb_checkpoints\r\n",
      "drwxr-x---  6 ma-user ma-group  4096 Sep 11 11:28 text_classification\r\n",
      "-rw-r-----  1 ma-user ma-group 33828 Sep 12 10:23 text_classification.ipynb\r\n",
      "drwx------  2 ma-user ma-group  4096 Sep 12 10:04 .Trash-1000\r\n"
     ]
    }
   ],
   "source": [
    "!tar xf ./text_classification.tar.gz\n",
    "\n",
    "!rm ./text_classification.tar.gz\n",
    "\n",
    "!ls -la"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Import dependencies"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import re\n",
    "import pandas as pd\n",
    "import tensorflow as tf\n",
    "from tensorflow import keras\n",
    "from sklearn.model_selection import train_test_split\n",
    "from text_classification.bert import modeling, optimization, tokenization\n",
    "\n",
    "tf.logging.set_verbosity(tf.logging.INFO)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Define data and model paths"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_dir = './text_classification/data/'\n",
    "output_dir = './text_classification/output/'\n",
    "vocab_file = './text_classification/chinese_L-12_H-768_A-12/vocab.txt'\n",
    "bert_config_file = './text_classification/chinese_L-12_H-768_A-12/bert_config.json'\n",
    "init_checkpoint = './text_classification/chinese_L-12_H-768_A-12/bert_model.ckpt'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Set model parameters"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "batch_size = 64\n",
    "learning_rate = 2e-5 \n",
    "num_train_epochs = 10\n",
    "warmup_proportion = 0.1\n",
    "save_checkpoints_steps = 500 \n",
    "save_summary_steps = 100 \n",
    "max_seq_length = 128\n",
    "label_list = [0, 1]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Read the dataset\n",
     "\n",
     "We need a balanced (non-skewed) dataset in which the labels appear in roughly equal proportion.\n",
     "\n",
     "Randomly display 20 training samples."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Total reviews: 4000\n",
       "Positive reviews: 2000\n",
       "Negative reviews: 2000\n",
       "\n",
       "Training sample examples\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>label</th>\n",
       "      <th>review</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>1885</th>\n",
       "      <td>1</td>\n",
       "      <td>特别感谢工号0649的崔小姐（手机号136134462**），下次到太原出差一定要当面感谢她...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>7548</th>\n",
       "      <td>0</td>\n",
       "      <td>自称4星，实际上顶多2星。贵宾楼比较好，房内还挺干净。</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>129</th>\n",
       "      <td>1</td>\n",
       "      <td>酒店地理位置不错，交通非常便利，房间整体感觉还可以，只是浴室有些小。酒店提供的早餐不够丰富。...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2809</th>\n",
       "      <td>1</td>\n",
       "      <td>装修和服务一流，但房间的床有点偏小，卫生间浴缸旁连放脚垫的地方度没有。另外床头灯不可调节。旁...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>7214</th>\n",
       "      <td>0</td>\n",
       "      <td>怎么说呢。以北京这种地方的房价以及房间质量来说。这价格已经算便宜的了。因为先前住的几个北京的...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3683</th>\n",
       "      <td>1</td>\n",
       "      <td>性价比还可以,装修设备都挺不错~就是有些房间没有窗户~闷~还有就是有乱按门铃的~</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5056</th>\n",
       "      <td>1</td>\n",
       "      <td>如家的标准装潢，没的说，不过还和楼下的一样的老问题，水温是确实的忽冷忽热，隔音也是确实的惨了...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4942</th>\n",
       "      <td>1</td>\n",
       "      <td>住的是豪华单人房,房间很大,但是空调明显不足,冷死了补充点评2008年1月25日：还有,房间...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>6918</th>\n",
       "      <td>0</td>\n",
       "      <td>豪华大床房房间大小一般，墙壁看起来很不舒服，都有掉落的斑驳痕迹，其他一般，在所有住过的经济型...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>6196</th>\n",
       "      <td>0</td>\n",
       "      <td>酒店服务态度还不错，就是感觉地点比较偏，周围没有什么吃饭的地方。早餐感觉也没有以前好了。宾馆...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1448</th>\n",
       "      <td>1</td>\n",
       "      <td>订的是标A，房间装修比较新。只是洗澡的热水要等个快7-8分钟才热起来。附近很热闹，楼下就是文...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5651</th>\n",
       "      <td>0</td>\n",
       "      <td>酒店大堂像住宅公寓一楼大厅，所谓的“套房”的卧室比我住过的任何一个五星级的一间普通标房还小，...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3479</th>\n",
       "      <td>1</td>\n",
       "      <td>1.房间：就是冲着280元的三人间才定的，感觉性价比较高；房间采用复合地板，可惜没有冰箱；是...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>278</th>\n",
       "      <td>1</td>\n",
       "      <td>酒店境和服度亦算不,但房空太小~~不宣容太大件行李~~且房格可以~~中餐的心不太好吃~~要改...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2543</th>\n",
       "      <td>1</td>\n",
       "      <td>交通不便,自己没车的话比较麻烦,而且周围吃饭的地方很少很少......</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5004</th>\n",
       "      <td>1</td>\n",
       "      <td>很好..一如既往的好.除了贵点.地段和服务都是一流的.第一天我只是睡前喝了口水而已,第二天就...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>6061</th>\n",
       "      <td>0</td>\n",
       "      <td>受不了，基本隔壁所有的声音都听的一清二楚。因为隔壁没睡，所以我也跟着无法入睡，见过隔音不好的...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1531</th>\n",
       "      <td>1</td>\n",
       "      <td>建CTRIP提供更惠的格。我的客人均被告知，退房直接去前CHECKIN可得免早餐。一致要求我...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1948</th>\n",
       "      <td>1</td>\n",
       "      <td>第三次入住,前两次是豪华房此次是标准间.豪华间超大,标间虽小也很舒适.前台服务,速度基本满意...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2966</th>\n",
       "      <td>1</td>\n",
       "      <td>酒店服务亲切,我们当时喝红酒没有开瓶器，打电话给客房部，专门给我们送了过来，非常感谢。同时麻...</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "      label                                             review\n",
       "1885      1  特别感谢工号0649的崔小姐（手机号136134462**），下次到太原出差一定要当面感谢她...\n",
       "7548      0                        自称4星，实际上顶多2星。贵宾楼比较好，房内还挺干净。\n",
       "129       1  酒店地理位置不错，交通非常便利，房间整体感觉还可以，只是浴室有些小。酒店提供的早餐不够丰富。...\n",
       "2809      1  装修和服务一流，但房间的床有点偏小，卫生间浴缸旁连放脚垫的地方度没有。另外床头灯不可调节。旁...\n",
       "7214      0  怎么说呢。以北京这种地方的房价以及房间质量来说。这价格已经算便宜的了。因为先前住的几个北京的...\n",
       "3683      1           性价比还可以,装修设备都挺不错~就是有些房间没有窗户~闷~还有就是有乱按门铃的~\n",
       "5056      1  如家的标准装潢，没的说，不过还和楼下的一样的老问题，水温是确实的忽冷忽热，隔音也是确实的惨了...\n",
       "4942      1  住的是豪华单人房,房间很大,但是空调明显不足,冷死了补充点评2008年1月25日：还有,房间...\n",
       "6918      0  豪华大床房房间大小一般，墙壁看起来很不舒服，都有掉落的斑驳痕迹，其他一般，在所有住过的经济型...\n",
       "6196      0  酒店服务态度还不错，就是感觉地点比较偏，周围没有什么吃饭的地方。早餐感觉也没有以前好了。宾馆...\n",
       "1448      1  订的是标A，房间装修比较新。只是洗澡的热水要等个快7-8分钟才热起来。附近很热闹，楼下就是文...\n",
       "5651      0  酒店大堂像住宅公寓一楼大厅，所谓的“套房”的卧室比我住过的任何一个五星级的一间普通标房还小，...\n",
       "3479      1  1.房间：就是冲着280元的三人间才定的，感觉性价比较高；房间采用复合地板，可惜没有冰箱；是...\n",
       "278       1  酒店境和服度亦算不,但房空太小~~不宣容太大件行李~~且房格可以~~中餐的心不太好吃~~要改...\n",
       "2543      1                交通不便,自己没车的话比较麻烦,而且周围吃饭的地方很少很少......\n",
       "5004      1  很好..一如既往的好.除了贵点.地段和服务都是一流的.第一天我只是睡前喝了口水而已,第二天就...\n",
       "6061      0  受不了，基本隔壁所有的声音都听的一清二楚。因为隔壁没睡，所以我也跟着无法入睡，见过隔音不好的...\n",
       "1531      1  建CTRIP提供更惠的格。我的客人均被告知，退房直接去前CHECKIN可得免早餐。一致要求我...\n",
       "1948      1  第三次入住,前两次是豪华房此次是标准间.豪华间超大,标间虽小也很舒适.前台服务,速度基本满意...\n",
       "2966      1  酒店服务亲切,我们当时喝红酒没有开瓶器，打电话给客房部，专门给我们送了过来，非常感谢。同时麻..."
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def get_balance_corpus(corpus_size, corpus_pos, corpus_neg):\n",
    "    sample_size = corpus_size // 2\n",
    "    pd_corpus_balance = pd.concat([corpus_pos.sample(sample_size, replace=corpus_pos.shape[0]<sample_size), \\\n",
    "                                   corpus_neg.sample(sample_size, replace=corpus_neg.shape[0]<sample_size)])\n",
    "    \n",
     "    print('Total reviews: %d' % pd_corpus_balance.shape[0])\n",
     "    print('Positive reviews: %d' % pd_corpus_balance[pd_corpus_balance.label==1].shape[0])\n",
     "    print('Negative reviews: %d' % pd_corpus_balance[pd_corpus_balance.label==0].shape[0])\n",
    "    \n",
    "    return pd_corpus_balance\n",
    "\n",
    "reviews_all = pd.read_csv(data_dir + 'ChnSentiCorp_htl_all.csv')\n",
    "\n",
    "pd_positive = reviews_all[reviews_all.label==1]\n",
    "pd_negative = reviews_all[reviews_all.label==0]\n",
    "\n",
    "reviews_4000 = get_balance_corpus(4000, pd_positive, pd_negative)\n",
    "\n",
    "train, test = train_test_split(reviews_4000, test_size=0.2)\n",
    "\n",
     "print('\\nTraining sample examples\\n')\n",
    "train.sample(20)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Load the Chinese vocabulary of the pre-trained BERT model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['今', '天', '的', '天', '气', '真', '好', '！']"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tokenizer = tokenization.FullTokenizer(vocab_file=vocab_file, do_lower_case=False)\n",
    "\n",
    "tokenizer.tokenize(\"今天的天气真好！\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Define the input data classes"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "class InputExample(object):\n",
    "\n",
    "    def __init__(self, guid, text_a, text_b=None, label=None):\n",
    "        self.guid = guid\n",
    "        self.text_a = text_a\n",
    "        self.text_b = text_b\n",
    "        self.label = label\n",
    "\n",
    "class InputFeatures(object):\n",
    "\n",
    "    def __init__(self,\n",
    "               input_ids,\n",
    "               input_mask,\n",
    "               segment_ids,\n",
    "               label_id,\n",
    "               is_real_example=True):\n",
    "        self.input_ids = input_ids\n",
    "        self.input_mask = input_mask\n",
    "        self.segment_ids = segment_ids\n",
    "        self.label_id = label_id\n",
    "        self.is_real_example = is_real_example\n",
    "    \n",
    "class PaddingInputExample(object):\n",
    "    pass\n",
    "\n",
    "\n",
    "DATA_COLUMN = 'review'\n",
    "LABEL_COLUMN = 'label'\n",
    "\n",
    "train_InputExamples = train.apply(lambda x: InputExample(guid=None,  \n",
    "                                                         text_a = x[DATA_COLUMN], \n",
    "                                                         text_b = None, \n",
    "                                                         label = x[LABEL_COLUMN]), axis = 1)\n",
    "\n",
    "test_InputExamples = test.apply(lambda x: InputExample(guid=None, \n",
    "                                                       text_a = x[DATA_COLUMN], \n",
    "                                                       text_b = None, \n",
    "                                                       label = x[LABEL_COLUMN]), axis = 1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Convert to BERT input features\n",
     "\n",
     "Print the first five sample texts together with their token ids, input masks, segment ids, and labels."
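     "Conceptually, each example becomes three equal-length integer sequences: token ids for `[CLS] tokens [SEP]`, a mask marking real tokens, and segment ids (all zeros for single-sentence input), zero-padded to `max_seq_length`. A simplified sketch with a toy vocabulary (the real conversion uses the BERT tokenizer and vocabulary loaded above):\n",
     "\n",
     "```python\n",
     "def convert_tokens(tokens, vocab, max_seq_length=128):\n",
     "    # Toy version of the BERT feature conversion: ids, mask, segment ids.\n",
     "    tokens = ['[CLS]'] + tokens[:max_seq_length - 2] + ['[SEP]']\n",
     "    input_ids = [vocab[t] for t in tokens]\n",
     "    input_mask = [1] * len(input_ids)\n",
     "    pad = max_seq_length - len(input_ids)  # zero-pad to fixed length\n",
     "    input_ids += [0] * pad\n",
     "    input_mask += [0] * pad\n",
     "    segment_ids = [0] * max_seq_length\n",
     "    return input_ids, input_mask, segment_ids\n",
     "```\n",
     "\n",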
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Writing example 0 of 3200\n",
      "INFO:tensorflow:*** 示例 ***\n",
      "INFO:tensorflow:guid: None\n",
      "INFO:tensorflow:tokens: [CLS] 7 月 16 日 入 住 ， 是 在 携 程 买 的 半 自 助 自 由 行 ， 所 以 携 程 已 经 处 理 好 所 有 的 文 件 ， 只 提 供 了 通 行 证 就 顺 利 入 住 ， 非 常 快 ， 前 台 服 务 生 也 非 常 客 气 ， 很 专 业 。 酒 店 大 堂 有 很 贴 心 的 安 排 ， 大 人 在 办 理 入 住 的 时 候 ， 孩 子 可 以 在 一 旁 看 迪 士 尼 的 动 画 片 。 酒 店 的 房 间 住 2 个 大 人 2 个 孩 子 都 很 宽 敞 ， 到 [SEP]\n",
      "INFO:tensorflow:input_ids: 101 128 3299 8121 3189 1057 857 8024 3221 1762 3025 4923 743 4638 1288 5632 1221 5632 4507 6121 8024 2792 809 3025 4923 2347 5307 1905 4415 1962 2792 3300 4638 3152 816 8024 1372 2990 897 749 6858 6121 6395 2218 7556 1164 1057 857 8024 7478 2382 2571 8024 1184 1378 3302 1218 4495 738 7478 2382 2145 3698 8024 2523 683 689 511 6983 2421 1920 1828 3300 2523 6585 2552 4638 2128 2961 8024 1920 782 1762 1215 4415 1057 857 4638 3198 952 8024 2111 2094 1377 809 1762 671 3178 4692 6832 1894 2225 4638 1220 4514 4275 511 6983 2421 4638 2791 7313 857 123 702 1920 782 123 702 2111 2094 6963 2523 2160 3139 8024 1168 102\n",
      "INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n",
      "INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:label: 1 (id = 1)\n",
      "INFO:tensorflow:*** 示例 ***\n",
      "INFO:tensorflow:guid: None\n",
      "INFO:tensorflow:tokens: [CLS] 酒 店 的 的 布 置 很 舒 适 ， 位 置 也 很 好 ， 在 闹 市 区 ， 有 闹 中 取 静 的 感 觉 。 很 不 错 ！ [SEP]\n",
      "INFO:tensorflow:input_ids: 101 6983 2421 4638 4638 2357 5390 2523 5653 6844 8024 855 5390 738 2523 1962 8024 1762 7317 2356 1277 8024 3300 7317 704 1357 7474 4638 2697 6230 511 2523 679 7231 8013 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:label: 1 (id = 1)\n",
      "INFO:tensorflow:*** 示例 ***\n",
      "INFO:tensorflow:guid: None\n",
      "INFO:tensorflow:tokens: [CLS] 在 本 地 肯 定 是 不 错 的 酒 店 了 ， 去 淮 南 的 话 值 得 一 住 。 [SEP]\n",
      "INFO:tensorflow:input_ids: 101 1762 3315 1765 5507 2137 3221 679 7231 4638 6983 2421 749 8024 1343 3917 1298 4638 6413 966 2533 671 857 511 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:label: 1 (id = 1)\n",
      "INFO:tensorflow:*** 示例 ***\n",
      "INFO:tensorflow:guid: None\n",
      "INFO:tensorflow:tokens: [CLS] 入 住 的 当 天 02 - 26 日 ， 房 间 828 。 这 个 房 间 紧 邻 着 一 个 下 水 管 道 ， 整 夜 的 流 水 ， 难 以 入 睡 ， 我 只 有 睡 另 外 一 张 离 下 水 管 较 远 的 单 人 床 用 被 捂 着 脑 袋 入 睡 。 从 27 日 期 直 到 3 月 2 日 ， 我 每 天 都 在 和 前 台 交 涉 需 要 更 换 一 个 房 间 ， 每 天 都 答 应 但 是 每 天 都 没 有 解 决 。 3 月 1 日 前 台 有 个 个 子 不 高 （ [UNK] ） [SEP]\n",
      "INFO:tensorflow:input_ids: 101 1057 857 4638 2496 1921 8150 118 8153 3189 8024 2791 7313 13120 511 6821 702 2791 7313 5165 6943 4708 671 702 678 3717 5052 6887 8024 3146 1915 4638 3837 3717 8024 7410 809 1057 4717 8024 2769 1372 3300 4717 1369 1912 671 2476 4895 678 3717 5052 6772 6823 4638 1296 782 2414 4500 6158 2926 4708 5554 6150 1057 4717 511 794 8149 3189 3309 4684 1168 124 3299 123 3189 8024 2769 3680 1921 6963 1762 1469 1184 1378 769 3868 7444 6206 3291 2940 671 702 2791 7313 8024 3680 1921 6963 5031 2418 852 3221 3680 1921 6963 3766 3300 6237 1104 511 124 3299 122 3189 1184 1378 3300 702 702 2094 679 7770 8020 100 8021 102\n",
      "INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n",
      "INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:label: 0 (id = 0)\n",
      "INFO:tensorflow:*** 示例 ***\n",
      "INFO:tensorflow:guid: None\n",
      "INFO:tensorflow:tokens: [CLS] 强 烈 建 议 大 家 不 要 住 这 个 酒 店 ， 预 订 的 时 候 没 有 说 酒 店 二 楼 是 卡 拉 [UNK] 厅 ， 到 晚 上 12 点 多 了 还 在 蹦 迪 ， 噪 音 吵 得 我 睡 不 着 觉 。 房 间 设 施 跟 全 国 的 如 家 都 差 不 多 ， 强 烈 抵 制 这 样 的 店 [SEP]\n",
      "INFO:tensorflow:input_ids: 101 2487 4164 2456 6379 1920 2157 679 6206 857 6821 702 6983 2421 8024 7564 6370 4638 3198 952 3766 3300 6432 6983 2421 753 3517 3221 1305 2861 100 1324 8024 1168 3241 677 8110 4157 1914 749 6820 1762 6698 6832 8024 1692 7509 1427 2533 2769 4717 679 4708 6230 511 2791 7313 6392 3177 6656 1059 1744 4638 1963 2157 6963 2345 679 1914 8024 2487 4164 2850 1169 6821 3416 4638 2421 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:label: 0 (id = 0)\n",
      "INFO:tensorflow:Writing example 0 of 800\n",
      "INFO:tensorflow:*** 示例 ***\n",
      "INFO:tensorflow:guid: None\n",
      "INFO:tensorflow:tokens: [CLS] 四 星 级 的 牌 子 他 们 还 真 敢 挂 ！ 北 京 的 酒 店 向 来 这 样 ， 又 贵 又 差 ！ 房 间 里 面 那 个 叫 热 啊 ， 开 了 空 调 制 冷 根 本 不 起 作 用 。 打 电 话 给 客 房 服 务 ， 居 然 说 只 有 自 然 风 ， 没 有 冷 风 ！ 哪 是 什 么 自 然 风 啊 ， 明 显 比 室 外 的 气 温 高 得 多 。 最 后 ， 他 们 给 我 的 建 议 是 把 房 间 的 窗 户 打 开 。 北 京 那 个 鬼 气 候 ， 外 面 正 [SEP]\n",
      "INFO:tensorflow:input_ids: 101 1724 3215 5277 4638 4277 2094 800 812 6820 4696 3140 2899 8013 1266 776 4638 6983 2421 1403 3341 6821 3416 8024 1348 6586 1348 2345 8013 2791 7313 7027 7481 6929 702 1373 4178 1557 8024 2458 749 4958 6444 1169 1107 3418 3315 679 6629 868 4500 511 2802 4510 6413 5314 2145 2791 3302 1218 8024 2233 4197 6432 1372 3300 5632 4197 7599 8024 3766 3300 1107 7599 8013 1525 3221 784 720 5632 4197 7599 1557 8024 3209 3227 3683 2147 1912 4638 3698 3946 7770 2533 1914 511 3297 1400 8024 800 812 5314 2769 4638 2456 6379 3221 2828 2791 7313 4638 4970 2787 2802 2458 511 1266 776 6929 702 7787 3698 952 8024 1912 7481 3633 102\n",
      "INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n",
      "INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:label: 0 (id = 0)\n",
      "INFO:tensorflow:*** 示例 ***\n",
      "INFO:tensorflow:guid: None\n",
      "INFO:tensorflow:tokens: [CLS] 我 们 要 了 2 个 房 间 ， 一 个 有 阳 台 一 个 没 有 ， 感 觉 明 显 有 阳 台 的 房 间 感 觉 比 较 舒 服 ， 没 阳 台 的 就 很 压 抑 。 [SEP]\n",
      "INFO:tensorflow:input_ids: 101 2769 812 6206 749 123 702 2791 7313 8024 671 702 3300 7345 1378 671 702 3766 3300 8024 2697 6230 3209 3227 3300 7345 1378 4638 2791 7313 2697 6230 3683 6772 5653 3302 8024 3766 7345 1378 4638 2218 2523 1327 2829 511 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:label: 1 (id = 1)\n",
      "INFO:tensorflow:*** 示例 ***\n",
      "INFO:tensorflow:guid: None\n",
      "INFO:tensorflow:tokens: [CLS] 175 元 入 住 标 房 [UNK] 。 大 厅 和 电 梯 里 有 异 味 。 房 间 里 桌 子 也 没 擦 干 净 , 洗 浴 间 装 潢 等 设 备 太 旧 。 早 饭 很 差 。 不 值 。 补 充 点 评 2008 年 8 月 6 日 ： 看 了 点 评 去 入 住 的 , 上 当 了 。 估 计 yang ##ming 的 点 评 是 托 ! ! [SEP]\n",
      "INFO:tensorflow:input_ids: 101 9251 1039 1057 857 3403 2791 100 511 1920 1324 1469 4510 3461 7027 3300 2460 1456 511 2791 7313 7027 3430 2094 738 3766 3092 2397 1112 117 3819 3861 7313 6163 4055 5023 6392 1906 1922 3191 511 3193 7649 2523 2345 511 679 966 511 6133 1041 4157 6397 8182 2399 129 3299 127 3189 8038 4692 749 4157 6397 1343 1057 857 4638 117 677 2496 749 511 844 6369 12086 10693 4638 4157 6397 3221 2805 106 106 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:label: 0 (id = 0)\n",
      "INFO:tensorflow:*** 示例 ***\n",
      "INFO:tensorflow:guid: None\n",
      "INFO:tensorflow:tokens: [CLS] 总 体 来 说 不 错 。 交 通 比 较 方 便 ， 早 餐 也 可 以 。 步 行 去 沃 尔 玛 购 物 约 十 分 钟 。 去 旅 游 的 话 还 是 开 车 方 便 ， 景 点 可 以 去 多 去 几 处 。 补 充 点 评 2008 年 4 月 13 日 ： 小 心 电 梯 夹 人 ， 合 上 的 速 度 太 快 了 ， 很 少 见 。 [SEP]\n",
      "INFO:tensorflow:input_ids: 101 2600 860 3341 6432 679 7231 511 769 6858 3683 6772 3175 912 8024 3193 7623 738 1377 809 511 3635 6121 1343 3753 2209 4377 6579 4289 5276 1282 1146 7164 511 1343 3180 3952 4638 6413 6820 3221 2458 6756 3175 912 8024 3250 4157 1377 809 1343 1914 1343 1126 1905 511 6133 1041 4157 6397 8182 2399 125 3299 8124 3189 8038 2207 2552 4510 3461 1931 782 8024 1394 677 4638 6862 2428 1922 2571 749 8024 2523 2208 6224 511 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:label: 1 (id = 1)\n",
      "INFO:tensorflow:*** 示例 ***\n",
      "INFO:tensorflow:guid: None\n",
      "INFO:tensorflow:tokens: [CLS] 房 间 感 觉 不 错 , 就 是 入 住 是 花 费 的 时 间 较 长 , 前 台 说 是 房 间 还 没 有 收 拾 好 . 补 充 点 评 2008 年 8 月 14 日 ： 门 童 真 的 是 很 勤 快 ! [SEP]\n",
      "INFO:tensorflow:input_ids: 101 2791 7313 2697 6230 679 7231 117 2218 3221 1057 857 3221 5709 6589 4638 3198 7313 6772 7270 117 1184 1378 6432 3221 2791 7313 6820 3766 3300 3119 2896 1962 119 6133 1041 4157 6397 8182 2399 129 3299 8122 3189 8038 7305 4997 4696 4638 3221 2523 1249 2571 106 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:label: 1 (id = 1)\n"
     ]
    }
   ],
   "source": [
    "def truncate_seq_pair(tokens_a, tokens_b, max_length):\n",
    "    while True:\n",
    "        total_length = len(tokens_a) + len(tokens_b)\n",
    "        if total_length <= max_length:\n",
    "            break\n",
    "        if len(tokens_a) > len(tokens_b):\n",
    "            tokens_a.pop()\n",
    "        else:\n",
    "            tokens_b.pop()\n",
    "\n",
    "\n",
    "def convert_single_example(ex_index, example, label_list, max_seq_length,\n",
    "                           tokenizer):\n",
    "\n",
    "    if isinstance(example, PaddingInputExample):\n",
    "        return InputFeatures(\n",
    "            input_ids=[0] * max_seq_length,\n",
    "            input_mask=[0] * max_seq_length,\n",
    "            segment_ids=[0] * max_seq_length,\n",
    "            label_id=0,\n",
    "            is_real_example=False)\n",
    "    \n",
    "    label_map = {}\n",
    "    for (i, label) in enumerate(label_list):\n",
    "        label_map[label] = i\n",
    "\n",
    "    tokens_a = tokenizer.tokenize(example.text_a)\n",
    "    tokens_b = None\n",
    "    if example.text_b:\n",
    "        tokens_b = tokenizer.tokenize(example.text_b)\n",
    "\n",
    "    if tokens_b:\n",
    "        truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)\n",
    "    else:\n",
    "        if len(tokens_a) > max_seq_length - 2:\n",
    "            tokens_a = tokens_a[0:(max_seq_length - 2)]\n",
    "\n",
    "    tokens = []\n",
    "    segment_ids = []\n",
    "    tokens.append(\"[CLS]\")  # prepend the [CLS] classification token\n",
    "    segment_ids.append(0)\n",
    "    for token in tokens_a:\n",
    "        tokens.append(token)\n",
    "        segment_ids.append(0)\n",
    "    tokens.append(\"[SEP]\")  # append the [SEP] separator token\n",
    "    segment_ids.append(0)\n",
    "\n",
    "    if tokens_b:\n",
    "        for token in tokens_b:\n",
    "            tokens.append(token)\n",
    "            segment_ids.append(1)\n",
    "        tokens.append(\"[SEP]\")\n",
    "        segment_ids.append(1)\n",
    "\n",
    "    input_ids = tokenizer.convert_tokens_to_ids(tokens)  \n",
    "    input_mask = [1] * len(input_ids)\n",
    "\n",
    "    while len(input_ids) < max_seq_length:\n",
    "        input_ids.append(0)\n",
    "        input_mask.append(0)\n",
    "        segment_ids.append(0)\n",
    "\n",
    "    assert len(input_ids) == max_seq_length\n",
    "    assert len(input_mask) == max_seq_length\n",
    "    assert len(segment_ids) == max_seq_length\n",
    "\n",
    "    label_id = label_map[example.label]\n",
    "    \n",
    "    if ex_index < 5:\n",
    "        tf.logging.info(\"*** 示例 ***\")\n",
    "        tf.logging.info(\"guid: %s\" % (example.guid)) \n",
    "        tf.logging.info(\"tokens: %s\" % \" \".join([tokenization.printable_text(x) for x in tokens])) \n",
    "        tf.logging.info(\"input_ids: %s\" % \" \".join([str(x) for x in input_ids]))  \n",
    "        tf.logging.info(\"input_mask: %s\" % \" \".join([str(x) for x in input_mask])) \n",
    "        tf.logging.info(\"segment_ids: %s\" % \" \".join([str(x) for x in segment_ids])) \n",
    "        tf.logging.info(\"label: %s (id = %d)\" % (example.label, label_id)) \n",
    "\n",
    "    feature = InputFeatures(\n",
    "        input_ids=input_ids,\n",
    "        input_mask=input_mask,\n",
    "        segment_ids=segment_ids,\n",
    "        label_id=label_id,\n",
    "        is_real_example=True)\n",
    "    return feature\n",
    "\n",
    "def convert_examples_to_features(examples, label_list, max_seq_length, tokenizer):\n",
    "\n",
    "    features = []\n",
    "    for (ex_index, example) in enumerate(examples):\n",
    "        if ex_index % 10000 == 0:\n",
    "            tf.logging.info(\"Writing example %d of %d\" % (ex_index, len(examples)))\n",
    "\n",
    "        feature = convert_single_example(ex_index, example, label_list,\n",
    "                                     max_seq_length, tokenizer)\n",
    "\n",
    "        features.append(feature)\n",
    "    return features\n",
    "\n",
    "\n",
    "train_features = convert_examples_to_features(train_InputExamples, label_list, max_seq_length, tokenizer)\n",
    "test_features = convert_examples_to_features(test_InputExamples, label_list, max_seq_length, tokenizer)"
   ]
  },
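  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `input_ids` / `input_mask` layout logged above can be reproduced without BERT. The sketch below uses a tiny character-level vocabulary as a simplified stand-in for the WordPiece tokenizer; `to_features`, `toy_vocab`, and all id values are purely illustrative, not the real BERT vocabulary."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def to_features(text, vocab, max_seq_length=16):\n",
    "    # character-level stand-in for tokenizer.tokenize + convert_tokens_to_ids\n",
    "    tokens = ['[CLS]'] + list(text)[:max_seq_length - 2] + ['[SEP]']\n",
    "    input_ids = [vocab[t] for t in tokens]\n",
    "    input_mask = [1] * len(input_ids)      # 1 marks a real token\n",
    "    pad = max_seq_length - len(input_ids)  # zero-pad to max_seq_length\n",
    "    return input_ids + [0] * pad, input_mask + [0] * pad\n",
    "\n",
    "toy_vocab = {'[CLS]': 101, '[SEP]': 102, '很': 2523, '好': 1962}\n",
    "ids, mask = to_features('很好', toy_vocab)\n",
    "print(ids)\n",
    "print(mask)"
   ]
  },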
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Load the pre-trained parameters and build the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "bert_config = modeling.BertConfig.from_json_file(bert_config_file)\n",
    "\n",
    "def create_model(bert_config, is_training, input_ids, input_mask, segment_ids,\n",
    "                 labels, num_labels, use_one_hot_embeddings):\n",
    "\n",
    "    model = modeling.BertModel(\n",
    "        config=bert_config,\n",
    "        is_training=is_training,\n",
    "        input_ids=input_ids,\n",
    "        input_mask=input_mask,\n",
    "        token_type_ids=segment_ids,\n",
    "        use_one_hot_embeddings=use_one_hot_embeddings)\n",
    "\n",
    "    output_layer = model.get_pooled_output()\n",
    "    hidden_size = output_layer.shape[-1].value\n",
    "\n",
    "    output_weights = tf.get_variable(\n",
    "        \"output_weights\", [num_labels, hidden_size],\n",
    "        initializer=tf.truncated_normal_initializer(stddev=0.02))\n",
    "\n",
    "    output_bias = tf.get_variable(\n",
    "        \"output_bias\", [num_labels], initializer=tf.zeros_initializer())\n",
    "\n",
    "    with tf.variable_scope(\"loss\"):\n",
    "        if is_training:\n",
    "            output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)\n",
    "\n",
    "        logits = tf.matmul(output_layer, output_weights, transpose_b=True)\n",
    "        logits = tf.nn.bias_add(logits, output_bias)\n",
    "\n",
    "        probabilities = tf.nn.softmax(logits, axis=-1)\n",
    "        log_probs = tf.nn.log_softmax(logits, axis=-1)\n",
    "\n",
    "        one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)\n",
    "\n",
    "        per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)\n",
    "\n",
    "        loss = tf.reduce_mean(per_example_loss)\n",
    "        return (loss, per_example_loss, logits, probabilities)\n",
    "\n",
    "\n",
    "def model_fn_builder(bert_config, num_labels, init_checkpoint, learning_rate,\n",
    "                     num_train_steps, num_warmup_steps):\n",
    "\n",
    "    def model_fn(features, labels, mode, params):\n",
    "\n",
    "        input_ids = features[\"input_ids\"]\n",
    "        input_mask = features[\"input_mask\"]\n",
    "        segment_ids = features[\"segment_ids\"]\n",
    "        label_ids = features[\"label_ids\"]\n",
    "        is_real_example = None\n",
    "        if \"is_real_example\" in features:\n",
    "            is_real_example = tf.cast(features[\"is_real_example\"], dtype=tf.float32)\n",
    "        else:\n",
    "            is_real_example = tf.ones(tf.shape(label_ids), dtype=tf.float32)\n",
    "\n",
    "        is_training = (mode == tf.estimator.ModeKeys.TRAIN)\n",
    "        use_one_hot_embeddings = False\n",
    "\n",
    "        (total_loss, per_example_loss, logits, probabilities) = create_model(\n",
    "            bert_config, is_training, input_ids, input_mask, segment_ids, label_ids,\n",
    "            num_labels, use_one_hot_embeddings)\n",
    "\n",
    "        tvars = tf.trainable_variables()\n",
    "        initialized_variable_names = {}\n",
    "        scaffold_fn = None\n",
    "    \n",
    "        if init_checkpoint:\n",
    "            (assignment_map, initialized_variable_names\n",
    "            ) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint)\n",
    "    \n",
    "            tf.train.init_from_checkpoint(init_checkpoint, assignment_map)\n",
    "\n",
    "        output_spec = None\n",
    "   \n",
    "        if mode == tf.estimator.ModeKeys.TRAIN:\n",
    "\n",
    "            train_op = optimization.create_optimizer(\n",
    "              total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu=False)\n",
    "\n",
    "            output_spec = tf.estimator.EstimatorSpec(\n",
    "              mode=mode,\n",
    "              loss=total_loss,\n",
    "              train_op=train_op)\n",
    "\n",
    "        elif mode == tf.estimator.ModeKeys.EVAL:\n",
    "\n",
    "            def metric_fn(per_example_loss, label_ids, logits, is_real_example):\n",
    "                predictions = tf.argmax(logits, axis=-1, output_type=tf.int32)\n",
    "                accuracy = tf.metrics.accuracy(\n",
    "                    labels=label_ids, predictions=predictions, weights=is_real_example)\n",
    "                loss = tf.metrics.mean(values=per_example_loss, weights=is_real_example)\n",
    "                return {\n",
    "                    \"eval_accuracy\": accuracy,\n",
    "                    \"eval_loss\": loss,\n",
    "                }\n",
    "\n",
    "            eval_metrics = metric_fn(per_example_loss, label_ids, logits, is_real_example)\n",
    "            output_spec = tf.estimator.EstimatorSpec(\n",
    "              mode=mode,\n",
    "              loss=total_loss,\n",
    "              eval_metric_ops=eval_metrics)\n",
    "    \n",
    "        else:\n",
    "            output_spec = tf.estimator.EstimatorSpec(\n",
    "              mode=mode,\n",
    "              predictions={\"probabilities\": probabilities})\n",
    "    \n",
    "        return output_spec\n",
    "\n",
    "    return model_fn\n",
    "\n",
    "\n",
    "num_train_steps = int(len(train_features) / batch_size * num_train_epochs)\n",
    "num_warmup_steps = int(num_train_steps * warmup_proportion)\n",
    "\n",
    "model_fn = model_fn_builder(\n",
    "    bert_config=bert_config,\n",
    "    num_labels=len(label_list),\n",
    "    learning_rate=learning_rate,\n",
    "    init_checkpoint=init_checkpoint,\n",
    "    num_train_steps=num_train_steps,\n",
    "    num_warmup_steps=num_warmup_steps)"
   ]
  },
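  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The classification head in `create_model` is just a linear layer on the pooled `[CLS]` vector followed by softmax cross-entropy. Below is a NumPy re-computation of that loss with illustrative values; the hidden size, weights, and label are toy numbers, not real model parameters:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "hidden = np.array([[0.5, -0.2, 0.1]])    # pooled_output, shape (1, hidden_size)\n",
    "weights = np.array([[0.3, 0.1, -0.4],    # output_weights, shape (num_labels, hidden_size)\n",
    "                    [-0.2, 0.5, 0.2]])\n",
    "bias = np.array([0.0, 0.1])\n",
    "logits = hidden @ weights.T + bias       # matmul(..., transpose_b=True) + bias_add\n",
    "log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))\n",
    "one_hot = np.array([[0.0, 1.0]])         # one_hot(labels) for label id 1\n",
    "per_example_loss = -(one_hot * log_probs).sum(axis=-1)\n",
    "print(per_example_loss.mean())           # the scalar loss that training minimizes"
   ]
  },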
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Model training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Using config: {'_model_dir': './text_classification/output/', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 500, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true\n",
      "graph_options {\n",
      "  rewrite_options {\n",
      "    meta_optimizer_iterations: ONE\n",
      "  }\n",
      "}\n",
      ", '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f222ad54da0>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}\n",
      "INFO:tensorflow:Calling model_fn.\n",
      "INFO:tensorflow:Done calling model_fn.\n",
      "INFO:tensorflow:Create CheckpointSaverHook.\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n",
      "INFO:tensorflow:Saving checkpoints for 0 into ./text_classification/output/model.ckpt.\n",
      "INFO:tensorflow:loss = 0.7280515, step = 0\n",
      "INFO:tensorflow:global_step/sec: 0.950324\n",
      "INFO:tensorflow:loss = 0.33555338, step = 100 (105.229 sec)\n",
      "INFO:tensorflow:global_step/sec: 1.14775\n",
      "INFO:tensorflow:loss = 0.030396178, step = 200 (87.125 sec)\n",
      "INFO:tensorflow:global_step/sec: 1.14885\n",
      "INFO:tensorflow:loss = 0.0006962415, step = 300 (87.045 sec)\n",
      "INFO:tensorflow:global_step/sec: 1.14982\n",
      "INFO:tensorflow:loss = 0.00030710583, step = 400 (86.969 sec)\n",
      "INFO:tensorflow:Saving checkpoints for 500 into ./text_classification/output/model.ckpt.\n",
      "INFO:tensorflow:Loss for final step: 0.00025223877.\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<tensorflow.python.estimator.estimator.Estimator at 0x7f222ad54fd0>"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def input_fn_builder(features, seq_length, is_training, drop_remainder):\n",
    "\n",
    "    all_input_ids = []\n",
    "    all_input_mask = []\n",
    "    all_segment_ids = []\n",
    "    all_label_ids = []\n",
    "\n",
    "    for feature in features:\n",
    "        all_input_ids.append(feature.input_ids)\n",
    "        all_input_mask.append(feature.input_mask)\n",
    "        all_segment_ids.append(feature.segment_ids)\n",
    "        all_label_ids.append(feature.label_id)\n",
    "\n",
    "    def input_fn(params):\n",
    "        batch_size = params[\"batch_size\"]\n",
    "\n",
    "        num_examples = len(features)\n",
    "\n",
    "        d = tf.data.Dataset.from_tensor_slices({\n",
    "            \"input_ids\":\n",
    "                tf.constant(\n",
    "                    all_input_ids, shape=[num_examples, seq_length],\n",
    "                    dtype=tf.int32),\n",
    "            \"input_mask\":\n",
    "                tf.constant(\n",
    "                    all_input_mask,\n",
    "                    shape=[num_examples, seq_length],\n",
    "                    dtype=tf.int32),\n",
    "            \"segment_ids\":\n",
    "                tf.constant(\n",
    "                    all_segment_ids,\n",
    "                    shape=[num_examples, seq_length],\n",
    "                    dtype=tf.int32),\n",
    "            \"label_ids\":\n",
    "                tf.constant(all_label_ids, shape=[num_examples], dtype=tf.int32),\n",
    "        })\n",
    "    \n",
    "        if is_training:\n",
    "            d = d.repeat()\n",
    "            d = d.shuffle(buffer_size=100)\n",
    "\n",
    "        d = d.batch(batch_size=batch_size, drop_remainder=drop_remainder)\n",
    "        return d\n",
    "\n",
    "    return input_fn\n",
    "\n",
    "\n",
    "run_config = tf.estimator.RunConfig(\n",
    "    model_dir=output_dir,\n",
    "    save_summary_steps=save_summary_steps,\n",
    "    save_checkpoints_steps=save_checkpoints_steps)\n",
    "\n",
    "estimator = tf.estimator.Estimator(\n",
    "    model_fn=model_fn,\n",
    "    config=run_config,\n",
    "    params={\"batch_size\": batch_size})\n",
    "\n",
    "train_input_fn = input_fn_builder(\n",
    "    features=train_features,\n",
    "    seq_length=max_seq_length,\n",
    "    is_training=True,\n",
    "    drop_remainder=False) \n",
    "\n",
    "estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)"
   ]
  },
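  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`estimator.train` stops at `max_steps=num_train_steps`, which was computed earlier from the dataset size. A worked example of that step arithmetic, using purely illustrative numbers (the real run derives them from `len(train_features)` and the configured hyperparameters):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# illustrative numbers only; separate names so the real variables are untouched\n",
    "examples, batch, epochs, warmup = 3200, 32, 5, 0.1\n",
    "steps = int(examples / batch * epochs)\n",
    "print(steps, int(steps * warmup))  # 500 50"
   ]
  },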
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Evaluate on the test set"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Calling model_fn.\n",
      "INFO:tensorflow:Done calling model_fn.\n",
      "INFO:tensorflow:Starting evaluation at 2019-09-12-02:37:48\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Restoring parameters from ./text_classification/output/model.ckpt-500\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n",
      "INFO:tensorflow:Finished evaluation at 2019-09-12-02:37:53\n",
      "INFO:tensorflow:Saving dict for global step 500: eval_accuracy = 0.90625, eval_loss = 0.6071, global_step = 500, loss = 0.60426533\n",
      "INFO:tensorflow:Saving 'checkpoint_path' summary for global step 500: ./text_classification/output/model.ckpt-500\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "打印测试评估指标\n",
      "eval_accuracy : 0.90625\n",
      "eval_loss : 0.6071\n",
      "loss : 0.60426533\n",
      "global_step : 500\n"
     ]
    }
   ],
   "source": [
    "eval_input_fn = input_fn_builder(\n",
    "    features=test_features,\n",
    "    seq_length=max_seq_length,\n",
    "    is_training=False,\n",
    "    drop_remainder=False)\n",
    "\n",
    "evaluate_info = estimator.evaluate(input_fn=eval_input_fn, steps=None)\n",
    "\n",
    "print(\"\\n打印测试评估指标\")\n",
    "for key in evaluate_info:\n",
    "    print(key+' : '+str(evaluate_info[key]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Online testing\n",
    "\n",
    "Use the model trained above for interactive testing: type any sentence and the model analyzes its sentiment.\n",
    "\n",
    "Enter `再见` to end the online sentiment analysis session."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "在线文本情感分析:\n",
      "\n",
      "前台服务态度很热情\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Writing example 0 of 1\n",
      "INFO:tensorflow:*** 示例 ***\n",
      "INFO:tensorflow:guid: \n",
      "INFO:tensorflow:tokens: [CLS] 前 台 服 务 态 度 很 热 情 [SEP]\n",
      "INFO:tensorflow:input_ids: 101 1184 1378 3302 1218 2578 2428 2523 4178 2658 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:label: 0 (id = 0)\n",
      "INFO:tensorflow:Calling model_fn.\n",
      "INFO:tensorflow:Done calling model_fn.\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Restoring parameters from ./text_classification/output/model.ckpt-500\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "评论： 前台服务态度很热情\n",
      "得分： [1.8808217e-04 9.9981195e-01]\n",
      "评论情感分析： 正面评价\n",
      "房间窗外的风景好，每天醒来都很开心\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Writing example 0 of 1\n",
      "INFO:tensorflow:*** 示例 ***\n",
      "INFO:tensorflow:guid: \n",
      "INFO:tensorflow:tokens: [CLS] 房 间 窗 外 的 风 景 好 ， 每 天 醒 来 都 很 开 心 [SEP]\n",
      "INFO:tensorflow:input_ids: 101 2791 7313 4970 1912 4638 7599 3250 1962 8024 3680 1921 7008 3341 6963 2523 2458 2552 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:label: 0 (id = 0)\n",
      "INFO:tensorflow:Calling model_fn.\n",
      "INFO:tensorflow:Done calling model_fn.\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Restoring parameters from ./text_classification/output/model.ckpt-500\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "评论： 房间窗外的风景好，每天醒来都很开心\n",
      "得分： [1.9213662e-04 9.9980789e-01]\n",
      "评论情感分析： 正面评价\n",
      "房间很脏，看起来像没有打扫的样子\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Writing example 0 of 1\n",
      "INFO:tensorflow:*** 示例 ***\n",
      "INFO:tensorflow:guid: \n",
      "INFO:tensorflow:tokens: [CLS] 房 间 很 脏 ， 看 起 来 像 没 有 打 扫 的 样 子 [SEP]\n",
      "INFO:tensorflow:input_ids: 101 2791 7313 2523 5552 8024 4692 6629 3341 1008 3766 3300 2802 2812 4638 3416 2094 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
      "INFO:tensorflow:label: 0 (id = 0)\n",
      "INFO:tensorflow:Calling model_fn.\n",
      "INFO:tensorflow:Done calling model_fn.\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Restoring parameters from ./text_classification/output/model.ckpt-500\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "评论： 房间很脏，看起来像没有打扫的样子\n",
      "得分： [9.9980527e-01 1.9473233e-04]\n",
      "评论情感分析： 负面评价\n",
      "再见\n",
      "\n",
      "再见\n"
     ]
    }
   ],
   "source": [
    "def getPrediction(in_sentences):\n",
    "    labels = [\"负面评价\", \"正面评价\"]\n",
    "    input_examples = [InputExample(guid=\"\", text_a=x, text_b=None, label=0) for x in in_sentences]  # label 0 is a placeholder; prediction ignores it\n",
    "    input_features = convert_examples_to_features(input_examples, label_list, max_seq_length, tokenizer)\n",
    "    predict_input_fn = input_fn_builder(features=input_features, seq_length=max_seq_length, is_training=False, drop_remainder=False)\n",
    "    predictions = estimator.predict(predict_input_fn)\n",
    "    for sentence, prediction in zip(in_sentences, predictions):\n",
    "        print(\"\\n评论：\", sentence)\n",
    "        print(\"得分：\", prediction['probabilities'])\n",
    "        print(\"评论情感分析：\", labels[int(round(prediction['probabilities'][1]))])\n",
    "\n",
    "def sentiment_analysis():\n",
    "    while True:\n",
    "        pred_sentences = [input()]\n",
    "        if pred_sentences == [\"再见\"]:\n",
    "            print(\"\\n再见\")\n",
    "            return\n",
    "        else:\n",
    "            getPrediction(pred_sentences)\n",
    "\n",
    "print(\"在线文本情感分析:\\n\")            \n",
    "sentiment_analysis()"
   ]
  },
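  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In `getPrediction`, the label comes from rounding `probabilities[1]`, which for two classes is equivalent to taking the argmax. A standalone check using the scores printed in the last prediction above (`toy_labels` and `probs` are local illustration names):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "toy_labels = ['负面评价', '正面评价']\n",
    "probs = [9.9980527e-01, 1.9473233e-04]   # 得分 from the last prediction above\n",
    "print(toy_labels[int(round(probs[1]))])  # rounding picks the negative label\n",
    "best = max(range(len(probs)), key=probs.__getitem__)\n",
    "print(toy_labels[best])                  # same label via argmax"
   ]
  },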
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
