{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## PaddlePaddle: Quick Start with High-Level APIs (7-Day Deep Learning Camp)\n",
     "\n",
     "Lesson 4: Introduction to NLP\n",
     "\n",
     "Course page: https://aistudio.baidu.com/aistudio/course/introduce/6771\n",
     "\n",
     "## I. Course Summary \n",
     "\n",
     "## 1. Review of Basic Concepts\n",
     "\n",
     "What is sentiment analysis?\n",
     "\n",
     "It is a long-standing task in natural language processing. Sentence-level sentiment analysis aims to identify the speaker's emotional tendency, for example a clearly stated attitude on some topic or the emotional state a sentence reflects. It has broad applications such as e-commerce review analysis and public-opinion analysis.\n",
    "\n",
    "<p align=\"center\">\n",
    "<img src=\"https://ai-studio-static-online.cdn.bcebos.com/febb8a1478e34258953e56611ddc76cd20b412fec89845b0a4a2e6b9f8aae774\" hspace='10'/> <br />\n",
    "</p>\n",
    "\n",
     "2. RNN, LSTM, and GRU\n",
    "\n",
     "RNN: when computing the semantic representation of the current token, it takes the semantic representation of the previous token as part of its input:\n",
    "\n",
    "<img src=\"图片/RNN.png\">\n",
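     "\n",
     "As a sketch, one common form of this recurrence is (with $x_t$ the token embedding at step $t$, $h_t$ the hidden state, and $W$, $U$, $b$ learned parameters):\n",
     "\n",
     "$$h_t = \\tanh(W x_t + U h_{t-1} + b)$$\n",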
    "\n",
     "LSTM: LSTM is a variant of RNN. To capture long-term dependencies, LSTM introduces a gating mechanism that controls how information accumulates: it selectively adds new information and selectively forgets previously accumulated information.\n",
    "\n",
    "<img src='图片/LSTM.png'>\n",
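     "\n",
     "Concretely, in one common formulation the LSTM cell computes three gates plus a candidate state ($\\sigma$ is the sigmoid, $\\odot$ the element-wise product):\n",
     "\n",
     "$$i_t = \\sigma(W_i x_t + U_i h_{t-1} + b_i), \\quad f_t = \\sigma(W_f x_t + U_f h_{t-1} + b_f), \\quad o_t = \\sigma(W_o x_t + U_o h_{t-1} + b_o)$$\n",
     "\n",
     "$$\\tilde{c}_t = \\tanh(W_c x_t + U_c h_{t-1} + b_c), \\quad c_t = f_t \\odot c_{t-1} + i_t \\odot \\tilde{c}_t, \\quad h_t = o_t \\odot \\tanh(c_t)$$\n",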
    "\n",
     "GRU: GRU is also a variant of RNN. An LSTM cell has four gated inputs, so it has roughly four times as many parameters as a plain RNN, which makes training slower. GRU simplifies LSTM and speeds up training with little loss in accuracy.\n",
    "\n",
    "<img src='图片/GRU.png'>\n",
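     "\n",
     "For comparison, in one common formulation the GRU keeps only an update gate $z_t$ and a reset gate $r_t$, which is where the parameter savings over LSTM come from:\n",
     "\n",
     "$$z_t = \\sigma(W_z x_t + U_z h_{t-1} + b_z), \\quad r_t = \\sigma(W_r x_t + U_r h_{t-1} + b_r)$$\n",
     "\n",
     "$$\\tilde{h}_t = \\tanh(W_h x_t + U_h (r_t \\odot h_{t-1}) + b_h), \\quad h_t = (1 - z_t) \\odot h_{t-1} + z_t \\odot \\tilde{h}_t$$\n",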
    "\n",
    "\n",
     "## 2. Key Points for the Assignment\n",
     "\n",
     "**Swapping the network model**\n",
     "\n",
     "First, study the source code carefully; see the reference: https://github.com/PaddlePaddle/models/blob/develop/PaddleNLP/paddlenlp/seq2vec/encoder.py\n",
     "\n",
     "Each model there is its own class, whose API names and semantics are documented. When swapping it into the existing code, check whether the LSTM, GRU, and RNN APIs differ. For example, `GRUEncoder`:\n",
    "```\n",
    "class GRUEncoder(nn.Layer):\n",
    "    def __init__(self,\n",
    "                 input_size,\n",
    "                 hidden_size,\n",
    "                 num_layers=1,\n",
    "                 direction=\"forward\",\n",
    "                 dropout=0.0,\n",
    "                 pooling_type=None,\n",
    "                 **kwargs):\n",
    "        super().__init__()\n",
    "        self._input_size = input_size\n",
    "        self._hidden_size = hidden_size\n",
    "        self._direction = direction\n",
    "        self._pooling_type = pooling_type\n",
    "\n",
    "        self.gru_layer = nn.GRU(input_size=input_size,\n",
    "                                hidden_size=hidden_size,\n",
    "                                num_layers=num_layers,\n",
    "                                direction=direction,\n",
    "                                dropout=dropout,\n",
    "                                **kwargs)\n",
    "\n",
    "    def get_input_dim(self):\n",
    "        \"\"\"\n",
    "        Returns the dimension of the vector input for each element in the sequence input\n",
    "        to a `GRUEncoder`. This is not the shape of the input tensor, but the\n",
    "        last element of that shape.\n",
    "        \"\"\"\n",
    "        return self._input_size\n",
    "\n",
    "    def get_output_dim(self):\n",
    "        \"\"\"\n",
    "        Returns the dimension of the final vector output by this `GRUEncoder`.  This is not\n",
    "        the shape of the returned tensor, but the last element of that shape.\n",
    "        \"\"\"\n",
    "        if self._direction == \"bidirect\":\n",
    "            return self._hidden_size * 2\n",
    "        else:\n",
    "            return self._hidden_size\n",
    "\n",
    "    def forward(self, inputs, sequence_length):\n",
    "        \"\"\"\n",
     "        GRUEncoder takes a sequence of vectors and returns a\n",
     "        single vector, which is a combination of multiple GRU layers.\n",
     "        The input to this module is of shape `(batch_size, num_tokens, input_size)`. \n",
     "        The output is of shape `(batch_size, hidden_size*2)` if the GRU is bidirectional;\n",
     "        if not, the output is of shape `(batch_size, hidden_size)`.\n",
    "        Args:\n",
    "            inputs (paddle.Tensor): Shape as `(batch_size, num_tokens, input_size)`.\n",
    "            sequence_length (paddle.Tensor): Shape as `(batch_size)`.\n",
    "        Returns:\n",
    "            last_hidden (paddle.Tensor): Shape as `(batch_size, hidden_size)`.\n",
    "                The hidden state at the last time step for every layer.\n",
    "        \"\"\"\n",
    "        encoded_text, last_hidden = self.gru_layer(\n",
    "            inputs, sequence_length=sequence_length)\n",
    "        if not self._pooling_type:\n",
    "            # We exploit the `last_hidden` (the hidden state at the last time step for every layer)\n",
    "            # to create a single vector.\n",
     "            # If the GRU is not bidirectional, the output is the hidden state of the last time step \n",
     "            # at the last layer, with shape `(batch_size, hidden_size)`.\n",
     "            # If the GRU is bidirectional, the output is the concatenation of the forward and backward \n",
     "            # hidden states of the last time step at the last layer, with shape `(batch_size, hidden_size*2)`.\n",
    "            if self._direction != 'bidirect':\n",
    "                output = last_hidden[-1, :, :]\n",
    "            else:\n",
    "                output = paddle.concat(\n",
    "                    (last_hidden[-2, :, :], last_hidden[-1, :, :]), axis=1)\n",
    "        else:\n",
     "            # We exploit the `encoded_text` (the hidden state at every time step for the last layer)\n",
     "            # to create a single vector. We perform pooling on the encoded text.\n",
     "            # The output shape is `(batch_size, hidden_size*2)` if a bidirectional GRU is used, \n",
     "            # otherwise the output shape is `(batch_size, hidden_size)`.\n",
    "            if self._pooling_type == 'sum':\n",
    "                output = paddle.sum(encoded_text, axis=1)\n",
    "            elif self._pooling_type == 'max':\n",
    "                output = paddle.max(encoded_text, axis=1)\n",
    "            elif self._pooling_type == 'mean':\n",
    "                output = paddle.mean(encoded_text, axis=1)\n",
    "            else:\n",
    "                raise RuntimeError(\n",
    "                    \"Unexpected pooling type %s .\"\n",
    "                    \"Pooling type must be one of sum, max and mean.\" %\n",
    "                    self._pooling_type)\n",
    "        return output\n",
    "```\n",
    "\n",
     "**From binary to three-way classification**\n",
     "\n",
     "Update every place in the original code that touches the labels.\n",
     "\n",
     "Also, train for more epochs and tune the learning rate up or down to search for a better optimum."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Environment Setup\n",
     "\n",
     "- PaddlePaddle framework: AI Studio comes with the latest 2.0 release preinstalled.\n",
     "\n",
     "- PaddleNLP: deeply compatible with framework 2.0, it is the best practice for NLP on Paddle 2.0.\n",
     "\n",
     "First, install it:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [],
   "source": [
     "# Install paddlenlp\n",
    "!pip install --upgrade paddlenlp==2.0.0b4 "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Check the installed versions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2.0.0 2.0.0b4\n"
     ]
    }
   ],
   "source": [
    "import paddle\n",
    "import paddlenlp\n",
    "\n",
    "print(paddle.__version__, paddlenlp.__version__)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## How are PaddleNLP and the Paddle framework related?\n",
    "\n",
    "\n",
    "\n",
    "<p align=\"center\">\n",
    "<img src=\"https://ai-studio-static-online.cdn.bcebos.com/165924e86d9f4b5fa5d6fdee9e8496bf01be524e61f341b3879aceba48ae80fb\" width = \"300\" height = \"250\"  hspace='10'/> <br />\n",
    "</p><br></br>\n",
    "\n",
     "- The Paddle framework is the foundation, providing end-to-end APIs for deep learning tasks. PaddleNLP is built on top of it and targets NLP tasks.\n",
     "\n",
     "PaddleNLP's data-processing, dataset, and network-building APIs are expected to be folded into the framework's `paddle.text` module in the future.\n",
    "\n",
    "\n",
     "- For example, in code the dataset class inherits from the framework:\n",
    "`class TSVDataset(paddle.io.Dataset)`"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## The Typical Workflow for Deep Learning Tasks with Paddle\n",
     "\n",
     "- Dataset and data processing  \n",
     "paddle.io.Dataset   \n",
     "paddle.io.DataLoader   \n",
     "paddlenlp.data   \n",
     "\n",
     "- Network building and configuration\n",
     "\n",
     "paddle.nn.Embedding   \n",
     "paddlenlp.seq2vec   \n",
     "paddle.nn.Linear   \n",
     "paddle.tanh   \n",
     "\n",
     "paddle.nn.CrossEntropyLoss    \n",
     "paddle.metric.Accuracy   \n",
     "paddle.optimizer   \n",
     "\n",
     "model.prepare    \n",
     "\n",
     "- Training and evaluation   \n",
     "model.fit   \n",
     "model.evaluate   \n",
     "\n",
     "- Prediction\n",
     "model.predict   \n",
     "\n",
     "Note: running on a GPU is recommended."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from functools import partial\n",
    "\n",
    "import paddle.nn as nn\n",
    "import paddle.nn.functional as F\n",
    "import paddlenlp as ppnlp\n",
    "from paddlenlp.data import Pad, Stack, Tuple\n",
    "from paddlenlp.datasets import MapDatasetWrapper\n",
    "\n",
    "from utils import load_vocab, convert_example"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Dataset and Data Processing\n",
     "\n",
     "## Custom Datasets\n",
     "\n",
     "A map-style dataset must inherit from `paddle.io.Dataset` and implement:\n",
     "\n",
     "- `__getitem__`: returns the sample at the given index; `paddle.io.DataLoader` uses it to fetch samples by index.\n",
     "\n",
     "- `__len__`: returns the number of samples in the dataset; `paddle.io.BatchSampler` needs this count to generate index sequences.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [],
   "source": [
    "class SelfDefinedDataset(paddle.io.Dataset):\r\n",
    "    def __init__(self, data):\r\n",
    "        super(SelfDefinedDataset, self).__init__()\r\n",
    "        self.data = data\r\n",
    "\r\n",
    "    def __getitem__(self, idx):\r\n",
    "        return self.data[idx]\r\n",
    "\r\n",
    "    def __len__(self):\r\n",
    "        return len(self.data)\r\n",
    "        \r\n",
    "    def get_labels(self):\r\n",
    "        return [\"0\", \"1\"]\r\n",
    "\r\n",
    "def txt_to_list(file_name):\r\n",
    "    res_list = []\r\n",
    "    for line in open(file_name):\r\n",
    "        res_list.append(line.strip().split('\\t'))\r\n",
    "    return res_list\r\n",
    "\r\n",
    "trainlst = txt_to_list('train.txt')\r\n",
    "devlst = txt_to_list('dev.txt')\r\n",
    "testlst = txt_to_list('test.txt')\r\n",
    "\r\n",
     "# get_datasets() converts the lists into dataset objects.\r\n",
     "# get_datasets() accepts either [list] or [str] arguments; choose according to how the custom dataset is written.\r\n",
     "# train_ds, dev_ds, test_ds = ppnlp.datasets.ChnSentiCorp.get_datasets(['train', 'dev', 'test'])\r\n",
    "train_ds, dev_ds, test_ds = SelfDefinedDataset.get_datasets([trainlst, devlst, testlst])\r\n",
    "\r\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "A quick look at the data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['0', '1']\n",
      "['赢在心理，输在出品！杨枝太酸，三文鱼熟了，酥皮焗杏汁杂果可以换个名（九唔搭八）', '0']\n",
      "['服务一般，客人多，服务员少，但食品很不错', '1']\n",
      "['東坡肉竟然有好多毛，問佢地點解，佢地仲話係咁架\\ue107\\ue107\\ue107\\ue107\\ue107\\ue107\\ue107冇天理，第一次食東坡肉有毛，波羅包就幾好食', '0']\n",
      "['父亲节去的，人很多，口味还可以上菜快！但是结账的时候，算错了没有打折，我也忘记拿清单了。说好打8折的，收银员没有打，人太多一时自己也没有想起。不知道收银员忘记，还是故意那钱露入自己钱包。。', '0']\n",
      "['吃野味，吃个新鲜，你当然一定要来广州吃鹿肉啦*价格便宜，量好足，', '1']\n",
      "['味道几好服务都五错推荐鹅肝乳鸽飞鱼', '1']\n",
      "['作为老字号，水准保持算是不错，龟岗分店可能是位置问题，人不算多，基本不用等位，自从抢了券，去过好几次了，每次都可以打85以上的评分，算是可以了～粉丝煲每次必点，哈哈，鱼也不错，还会来帮衬的，楼下还可以免费停车！', '1']\n",
      "['边到正宗啊？味味都咸死人啦，粤菜讲求鲜甜，五知点解感多人话好吃。', '0']\n",
      "['环境卫生差，出品垃圾，冇下次，不知所为', '0']\n",
      "['和苑真是精致粤菜第一家，服务菜品都一流', '1']\n"
     ]
    }
   ],
   "source": [
    "label_list = train_ds.get_labels()\n",
    "print(label_list)\n",
    "\n",
    "for i in range(10):\n",
    "    print (train_ds[i])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Data Processing\n",
     "\n",
     "To turn the raw data into a format the model can read, this project processes it as follows:\n",
     "\n",
     "- First tokenize with jieba, then map each resulting token to its id in the vocabulary.\n",
     "\n",
     "![](https://ai-studio-static-online.cdn.bcebos.com/c538bbd04cb2489ab4ff260133247fa7ab8fb0da17874179bc320d773977cb5d)\n",
     "\n",
     "- Load data asynchronously with multiple workers via the `paddle.io.DataLoader` interface.\n",
     "\n",
     "This relies on PaddleNLP's data-processing APIs, which cover the common steps of building an efficient NLP data pipeline:\n",
    "\n",
     "| API                             | Description                                |\n",
     "| ------------------------------- | :----------------------------------------- |\n",
     "| `paddlenlp.data.Stack`          | Stacks N inputs that share the same shape into one batch; the output is simply the stacked inputs. |\n",
     "| `paddlenlp.data.Pad`            | Stacks N inputs into one batch, padding each input to the maximum length among the N inputs. |\n",
     "| `paddlenlp.data.Tuple`          | Wraps multiple batching functions together. |\n",
    "\n",
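     "A minimal NumPy sketch of what `Pad` and `Stack` do to a mini-batch (the token ids below are made up for illustration):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "# A hypothetical mini-batch of three tokenized texts of unequal length.\n",
     "batch = [[2, 5, 7], [3, 1], [4, 9, 6, 8]]\n",
     "\n",
     "# What Pad(axis=0, pad_val=0) does conceptually:\n",
     "# pad every sample to the longest length in the batch.\n",
     "max_len = max(len(x) for x in batch)\n",
     "padded = np.array([x + [0] * (max_len - len(x)) for x in batch])\n",
     "\n",
     "# What Stack does for the (same-shape) sequence lengths:\n",
     "# stack them into a single array.\n",
     "seq_lens = np.array([len(x) for x in batch], dtype=\"int64\")\n",
     "\n",
     "print(padded.shape)  # (3, 4)\n",
     "print(seq_lens)      # [3 2 4]\n",
     "```\n",
     "\n",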
     "More data-processing operations are documented at: [https://github.com/PaddlePaddle/PaddleNLP/blob/develop/docs/data.md](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/docs/data.md)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[PAD] 0\n"
     ]
    }
   ],
   "source": [
     "# Download the vocabulary file word_dict.txt, which provides the token-to-id mapping.\n",
     "#!wget https://paddlenlp.bj.bcebos.com/data/senta_word_dict.txt\n",
     "\n",
     "# Load the vocabulary\n",
    "vocab = load_vocab('./senta_word_dict.txt')\n",
    "\n",
    "for k, v in vocab.items():\n",
    "    print(k, v)\n",
    "    break"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Building the DataLoader\n",
     "\n",
     "The `create_dataloader` function below builds the `DataLoader` objects needed for training and prediction.\n",
     "\n",
     "* `paddle.io.DataLoader` returns an iterator that yields dataset samples in the order produced by `batch_sampler`, loading the data asynchronously.\n",
     "\n",
     "* `batch_sampler`: the DataLoader uses the mini-batch index lists produced by `batch_sampler` to index into the dataset and assemble mini-batches.\n",
     "\n",
     "* `collate_fn`: specifies how a list of samples is combined into mini-batch data. It must be a callable that implements the batching logic and returns each batch's data. Here the `batchify_fn` defined below is passed in; it pads the samples and also returns their actual lengths."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [],
   "source": [
    "# Reads data and generates mini-batches.\n",
    "def create_dataloader(dataset,\n",
    "                      trans_function=None,\n",
    "                      mode='train',\n",
    "                      batch_size=1,\n",
    "                      pad_token_id=0,\n",
    "                      batchify_fn=None):\n",
    "    if trans_function:\n",
    "        dataset = dataset.apply(trans_function, lazy=True)\n",
    "\n",
     "    # return_list: whether batches are returned as lists\n",
     "    # collate_fn: specifies how a list of samples is combined into mini-batch data; here `batchify_fn` pads the samples and returns their actual lengths.\n",
    "    dataloader = paddle.io.DataLoader(\n",
    "        dataset,\n",
    "        return_list=True,\n",
    "        batch_size=batch_size,\n",
    "        collate_fn=batchify_fn)\n",
    "        \n",
    "    return dataloader\n",
    "\n",
     "# functools.partial pins some of a function's arguments (i.e. sets defaults) and returns a new function that is simpler to call.\n",
    "trans_function = partial(\n",
    "    convert_example,\n",
    "    vocab=vocab,\n",
    "    unk_token_id=vocab.get('[UNK]', 1),\n",
    "    is_test=False)\n",
    "\n",
     "# Batch the input data so the model can compute batch-wise.\n",
     "# Every sentence in a batch is padded to the maximum text length in that batch, batch_max_seq_len.\n",
     "# Texts longer than batch_max_seq_len are truncated to it; shorter texts are padded up to it.\n",
    "batchify_fn = lambda samples, fn=Tuple(\n",
    "    Pad(axis=0, pad_val=vocab['[PAD]']),  # input_ids\n",
    "    Stack(dtype=\"int64\"),  # seq len\n",
    "    Stack(dtype=\"int64\")  # label\n",
    "): [data for data in fn(samples)]\n",
    "\n",
    "\n",
    "train_loader = create_dataloader(\n",
    "    train_ds,\n",
    "    trans_function=trans_function,\n",
    "    batch_size=128,\n",
    "    mode='train',\n",
    "    batchify_fn=batchify_fn)\n",
    "dev_loader = create_dataloader(\n",
    "    dev_ds,\n",
    "    trans_function=trans_function,\n",
    "    batch_size=128,\n",
    "    mode='validation',\n",
    "    batchify_fn=batchify_fn)\n",
    "test_loader = create_dataloader(\n",
    "    test_ds,\n",
    "    trans_function=trans_function,\n",
    "    batch_size=128,\n",
    "    mode='test',\n",
    "    batchify_fn=batchify_fn)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Model Building\n",
     "\n",
     "Use `LSTMEncoder` to build a BiLSTM model that encodes each sentence into a vector representation,\n",
     "\n",
     "followed by a linear layer that performs the binary classification.\n",
     "\n",
     "- `paddle.nn.Embedding` builds the word-embedding layer\n",
     "- `ppnlp.seq2vec.LSTMEncoder` builds the sentence-encoding layer\n",
     "- `paddle.nn.Linear` builds the binary classifier\n",
    "\n",
    "\n",
    "<p align=\"center\">\n",
    "<img src=\"https://ai-studio-static-online.cdn.bcebos.com/ecf309c20e5347399c55f1e067821daa088842fa46ad49be90de4933753cd3cf\" width = \"800\" height = \"450\"  hspace='10'/> <br />\n",
     "</p><br><center>Figure 1: seq2vec overview</center></br>\n",
    "\n",
     "* Besides LSTM, `seq2vec` provides many other semantic-representation methods; for details see: [Introduction to seq2vec](https://aistudio.baidu.com/aistudio/projectdetail/1283423)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**First, get the default LSTM model running**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [],
   "source": [
    "class LSTMModel(nn.Layer):\r\n",
    "    def __init__(self,\r\n",
    "                 vocab_size,\r\n",
    "                 num_classes,\r\n",
    "                 emb_dim=128,\r\n",
    "                 padding_idx=0,\r\n",
    "                 lstm_hidden_size=198,\r\n",
    "                 direction='forward',\r\n",
    "                 lstm_layers=1,\r\n",
    "                 dropout_rate=0,\r\n",
    "                 pooling_type=None,\r\n",
    "                 fc_hidden_size=96):\r\n",
    "        super().__init__()\r\n",
    "\r\n",
     "        # First map the input word ids to word embeddings via table lookup\r\n",
    "        self.embedder = nn.Embedding(\r\n",
    "            num_embeddings=vocab_size,\r\n",
    "            embedding_dim=emb_dim,\r\n",
    "            padding_idx=padding_idx)\r\n",
    "\r\n",
     "        # Transform the word embeddings into the text-semantics space with LSTMEncoder\r\n",
    "        self.lstm_encoder = ppnlp.seq2vec.LSTMEncoder(\r\n",
    "            emb_dim,\r\n",
    "            lstm_hidden_size,\r\n",
    "            num_layers=lstm_layers,\r\n",
    "            direction=direction,\r\n",
    "            dropout=dropout_rate,\r\n",
    "            pooling_type=pooling_type)\r\n",
    "\r\n",
     "        # LSTMEncoder.get_output_dim() returns the hidden size of the encoded text representation\r\n",
    "        self.fc = nn.Linear(self.lstm_encoder.get_output_dim(), fc_hidden_size)\r\n",
    "\r\n",
     "        # Final classifier\r\n",
    "        self.output_layer = nn.Linear(fc_hidden_size, num_classes)\r\n",
    "\r\n",
    "    def forward(self, text, seq_len):\r\n",
    "        # text shape: (batch_size, num_tokens)\r\n",
    "        # print('input :', text.shape)\r\n",
    "        \r\n",
    "        # Shape: (batch_size, num_tokens, embedding_dim)\r\n",
    "        embedded_text = self.embedder(text)\r\n",
    "        # print('after word-embeding:', embedded_text.shape)\r\n",
    "\r\n",
    "        # Shape: (batch_size, num_tokens, num_directions*lstm_hidden_size)\r\n",
    "        # num_directions = 2 if direction is 'bidirectional' else 1\r\n",
    "        text_repr = self.lstm_encoder(embedded_text, sequence_length=seq_len)\r\n",
    "        # print('after lstm:', text_repr.shape)\r\n",
    "\r\n",
    "\r\n",
    "        # Shape: (batch_size, fc_hidden_size)\r\n",
    "        fc_out = paddle.tanh(self.fc(text_repr))\r\n",
    "        # print('after Linear classifier:', fc_out.shape)\r\n",
    "\r\n",
    "        # Shape: (batch_size, num_classes)\r\n",
    "        logits = self.output_layer(fc_out)\r\n",
    "        # print('output:', logits.shape)\r\n",
    "        \r\n",
     "        # probs: class probabilities\r\n",
    "        probs = F.softmax(logits, axis=-1)\r\n",
    "        # print('output probability:', probs.shape)\r\n",
    "        return probs\r\n",
    "\r\n",
     "model = LSTMModel(\r\n",
    "        len(vocab),\r\n",
    "        len(label_list),\r\n",
    "        direction='bidirectional',\r\n",
    "        padding_idx=vocab['[PAD]'])\r\n",
    "model = paddle.Model(model)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Replacing the LSTM network with a GRU**\n",
     "\n",
     "API reference: https://github.com/PaddlePaddle/models/blob/develop/PaddleNLP/paddlenlp/seq2vec/encoder.py\n",
     "\n",
     "```\n",
     "self.gru_encoder = ppnlp.seq2vec.GRUEncoder()\n",
     "```\n",
     "Its arguments are documented as follows:\n",
    "\n",
    "```\n",
    "Args:\n",
    "        input_size (obj:`int`, required): The number of expected features in the input (the last dimension).\n",
    "        hidden_size (obj:`int`, required): The number of features in the hidden state.\n",
    "        num_layers (obj:`int`, optional, defaults to 1): Number of recurrent layers. \n",
    "            E.g., setting num_layers=2 would mean stacking two GRUs together to form a stacked GRU, \n",
    "            with the second GRU taking in outputs of the first GRU and computing the final results.\n",
    "        direction (obj:`str`, optional, defaults to obj:`forward`): The direction of the network. \n",
     "            It can be `forward` or `bidirect` (which means a bidirectional network).\n",
     "            If `bidirect`, it is a bidirectional GRU, and returns the concatenated output from both directions.\n",
    "        dropout (obj:`float`, optional, defaults to 0.0): If non-zero, introduces a Dropout layer \n",
    "            on the outputs of each GRU layer except the last layer, with dropout probability equal to dropout.\n",
    "        pooling_type (obj: `str`, optional, defaults to obj:`None`): If `pooling_type` is None, \n",
    "            then the GRUEncoder will return the hidden state of the last time step at last layer as a single vector.\n",
    "            If pooling_type is not None, it must be one of `sum`, `max` and `mean`. Then it will be pooled on \n",
    "            the GRU output (the hidden state of every time step at last layer) to create a single vector.\n",
    "```\n",
    "\n",
     "In practice, simply replace the LSTM parts with their GRU counterparts"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [],
   "source": [
    "class GRUModel(nn.Layer):\r\n",
    "    def __init__(self,\r\n",
    "                 vocab_size,\r\n",
    "                 num_classes,\r\n",
    "                 emb_dim=128,\r\n",
    "                 padding_idx=0,\r\n",
    "                 gru_hidden_size=198,\r\n",
    "                 direction='forward',\r\n",
    "                 gru_layers=1,\r\n",
    "                 dropout_rate=0,\r\n",
    "                 pooling_type=None,\r\n",
    "                 fc_hidden_size=96):\r\n",
    "        super().__init__()\r\n",
    "\r\n",
     "        # First map the input word ids to word embeddings via table lookup\r\n",
    "        self.embedder = nn.Embedding(\r\n",
    "            num_embeddings=vocab_size,\r\n",
    "            embedding_dim=emb_dim,\r\n",
    "            padding_idx=padding_idx)\r\n",
    "\r\n",
     "        # Transform the word embeddings into the text-semantics space with GRUEncoder\r\n",
    "        self.gru_encoder = ppnlp.seq2vec.GRUEncoder(\r\n",
    "            emb_dim,\r\n",
    "            gru_hidden_size,\r\n",
    "            num_layers=gru_layers,\r\n",
    "            direction=direction,\r\n",
    "            dropout=dropout_rate,\r\n",
    "            pooling_type=pooling_type)\r\n",
    "\r\n",
     "        # GRUEncoder.get_output_dim() returns the hidden size of the encoded text representation\r\n",
    "        self.fc = nn.Linear(self.gru_encoder.get_output_dim(), fc_hidden_size)\r\n",
    "\r\n",
     "        # Final classifier\r\n",
    "        self.output_layer = nn.Linear(fc_hidden_size, num_classes)\r\n",
    "\r\n",
    "    def forward(self, text, seq_len):\r\n",
    "        # text shape: (batch_size, num_tokens)\r\n",
    "        # print('input :', text.shape)\r\n",
    "        \r\n",
    "        # Shape: (batch_size, num_tokens, embedding_dim)\r\n",
    "        embedded_text = self.embedder(text)\r\n",
    "        # print('after word-embeding:', embedded_text.shape)\r\n",
    "\r\n",
     "        # Shape: (batch_size, num_tokens, num_directions*gru_hidden_size)\r\n",
    "        # num_directions = 2 if direction is 'bidirectional' else 1\r\n",
    "        text_repr = self.gru_encoder(embedded_text, sequence_length=seq_len)\r\n",
     "        # print('after gru:', text_repr.shape)\r\n",
    "\r\n",
    "\r\n",
    "        # Shape: (batch_size, fc_hidden_size)\r\n",
    "        fc_out = paddle.tanh(self.fc(text_repr))\r\n",
    "        # print('after Linear classifier:', fc_out.shape)\r\n",
    "\r\n",
    "        # Shape: (batch_size, num_classes)\r\n",
    "        logits = self.output_layer(fc_out)\r\n",
    "        # print('output:', logits.shape)\r\n",
    "        \r\n",
     "        # probs: class probabilities\r\n",
    "        probs = F.softmax(logits, axis=-1)\r\n",
    "        # print('output probability:', probs.shape)\r\n",
    "        return probs\r\n",
    "\r\n",
     "model1 = GRUModel(\r\n",
    "        len(vocab),\r\n",
    "        len(label_list),\r\n",
    "        direction='bidirectional',\r\n",
    "        padding_idx=vocab['[PAD]'])\r\n",
    "model1 = paddle.Model(model1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Replacing the LSTM network with a plain RNN**\n",
     "\n",
     "API reference: https://github.com/PaddlePaddle/models/blob/develop/PaddleNLP/paddlenlp/seq2vec/encoder.py\n",
     "\n",
     "```\n",
     "self.rnn_encoder = ppnlp.seq2vec.RNNEncoder\n",
     "```\n",
     "Its arguments are documented as follows:\n",
    "```\n",
     "Args:\n",
    "        input_size (obj:`int`, required): The number of expected features in the input (the last dimension).\n",
    "        hidden_size (obj:`int`, required): The number of features in the hidden state.\n",
    "        num_layers (obj:`int`, optional, defaults to 1): Number of recurrent layers. \n",
    "            E.g., setting num_layers=2 would mean stacking two RNNs together to form a stacked RNN, \n",
    "            with the second RNN taking in outputs of the first RNN and computing the final results.\n",
    "        direction (obj:`str`, optional, defaults to obj:`forward`): The direction of the network. \n",
     "            It can be `forward` or `bidirect` (which means a bidirectional network).\n",
     "            If `bidirect`, it is a bidirectional RNN, and returns the concatenated output from both directions.\n",
    "        dropout (obj:`float`, optional, defaults to 0.0): If non-zero, introduces a Dropout layer \n",
    "            on the outputs of each RNN layer except the last layer, with dropout probability equal to dropout.\n",
    "        pooling_type (obj: `str`, optional, defaults to obj:`None`): If `pooling_type` is None, \n",
    "            then the RNNEncoder will return the hidden state of the last time step at last layer as a single vector.\n",
    "            If pooling_type is not None, it must be one of `sum`, `max` and `mean`. Then it will be pooled on \n",
    "            the RNN output (the hidden state of every time step at last layer) to create a single vector.\n",
    "```\n",
     "In practice, simply replace the LSTM parts with their RNN counterparts"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [],
   "source": [
    "class RNNModel(nn.Layer):\r\n",
    "    def __init__(self,\r\n",
    "                 vocab_size,\r\n",
    "                 num_classes,\r\n",
    "                 emb_dim=128,\r\n",
    "                 padding_idx=0,\r\n",
    "                 rnn_hidden_size=198,\r\n",
    "                 direction='forward',\r\n",
    "                 rnn_layers=1,\r\n",
    "                 dropout_rate=0,\r\n",
    "                 pooling_type=None,\r\n",
    "                 fc_hidden_size=96):\r\n",
    "        super().__init__()\r\n",
    "\r\n",
     "        # First map the input word ids to word embeddings via table lookup\r\n",
    "        self.embedder = nn.Embedding(\r\n",
    "            num_embeddings=vocab_size,\r\n",
    "            embedding_dim=emb_dim,\r\n",
    "            padding_idx=padding_idx)\r\n",
    "\r\n",
     "        # Transform the word embeddings into the text-semantics space with RNNEncoder\r\n",
    "        self.rnn_encoder = ppnlp.seq2vec.RNNEncoder(\r\n",
    "            emb_dim,\r\n",
    "            rnn_hidden_size,\r\n",
    "            num_layers=rnn_layers,\r\n",
    "            direction=direction,\r\n",
    "            dropout=dropout_rate,\r\n",
    "            pooling_type=pooling_type)\r\n",
    "\r\n",
     "        # RNNEncoder.get_output_dim() returns the hidden size of the encoded text representation\r\n",
    "        self.fc = nn.Linear(self.rnn_encoder.get_output_dim(), fc_hidden_size)\r\n",
    "\r\n",
     "        # Final classifier\r\n",
    "        self.output_layer = nn.Linear(fc_hidden_size, num_classes)\r\n",
    "\r\n",
    "    def forward(self, text, seq_len):\r\n",
    "        # text shape: (batch_size, num_tokens)\r\n",
    "        # print('input :', text.shape)\r\n",
    "        \r\n",
    "        # Shape: (batch_size, num_tokens, embedding_dim)\r\n",
    "        embedded_text = self.embedder(text)\r\n",
    "        # print('after word-embeding:', embedded_text.shape)\r\n",
    "\r\n",
     "        # Shape: (batch_size, num_tokens, num_directions*rnn_hidden_size)\r\n",
    "        # num_directions = 2 if direction is 'bidirectional' else 1\r\n",
    "        text_repr = self.rnn_encoder(embedded_text, sequence_length=seq_len)\r\n",
     "        # print('after rnn:', text_repr.shape)\r\n",
    "\r\n",
    "\r\n",
    "        # Shape: (batch_size, fc_hidden_size)\r\n",
    "        fc_out = paddle.tanh(self.fc(text_repr))\r\n",
    "        # print('after Linear classifier:', fc_out.shape)\r\n",
    "\r\n",
    "        # Shape: (batch_size, num_classes)\r\n",
    "        logits = self.output_layer(fc_out)\r\n",
    "        # print('output:', logits.shape)\r\n",
    "        \r\n",
     "        # probs: class probabilities\r\n",
    "        probs = F.softmax(logits, axis=-1)\r\n",
    "        # print('output probability:', probs.shape)\r\n",
    "        return probs\r\n",
    "\r\n",
     "model2 = RNNModel(\r\n",
    "        len(vocab),\r\n",
    "        len(label_list),\r\n",
    "        direction='bidirectional',\r\n",
    "        padding_idx=vocab['[PAD]'])\r\n",
    "model2 = paddle.Model(model2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Model Configuration and Training"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Model Configuration"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [],
   "source": [
    "optimizer = paddle.optimizer.Adam(\r\n",
    "        parameters=model2.parameters(), learning_rate=5e-5)\r\n",
    "\r\n",
    "loss = paddle.nn.CrossEntropyLoss()\r\n",
    "metric = paddle.metric.Accuracy()\r\n",
    "\r\n",
    "# model.prepare(optimizer, loss, metric)\r\n",
    "# model1.prepare(optimizer, loss, metric)\r\n",
    "model2.prepare(optimizer, loss, metric)"
   ]
  },
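  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`paddle.nn.CrossEntropyLoss` combines a softmax with the negative log-likelihood of the true class. A pure-Python sketch of the per-sample loss, using hypothetical logits and a hypothetical label (not values from this notebook):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def cross_entropy(logits, label):\n",
    "    # Equivalent to -log(softmax(logits)[label]), computed stably\n",
    "    # via the log-sum-exp trick.\n",
    "    m = max(logits)\n",
    "    log_sum_exp = m + math.log(sum(math.exp(x - m) for x in logits))\n",
    "    return log_sum_exp - logits[label]\n",
    "\n",
    "# Hypothetical logits; the true class is index 1.\n",
    "loss = cross_entropy([0.5, 2.0], label=1)\n",
    "```\n",
    "\n",
    "The loss is small when the logit of the true class dominates, and grows as the model assigns that class less probability."
   ]
  },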
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [],
   "source": [
    "# Set the VisualDL log directory\n",
    "log_dir = './visualdl'\n",
    "callback = paddle.callbacks.VisualDL(log_dir=log_dir)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Model Training\n",
    "\n",
    "During training, loss, accuracy, and related metrics are printed at each step. Training runs for 10 epochs and reaches roughly 97% accuracy on the training set.\n",
    "\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/254cc9f80f474181a0f7fd00bb6f431502efdfdf54e54989a26549bc3abbe3c3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, the LSTM model is trained; it reaches an accuracy of about 98% on the training set.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The loss value printed in the log is the current step, and the metric is the average value of previous step.\n",
      "Epoch 1/10\n",
      "step  10/125 - loss: 0.3212 - acc: 0.9812 - 118ms/step\n",
      "step  20/125 - loss: 0.3367 - acc: 0.9852 - 103ms/step\n",
      "step  30/125 - loss: 0.3211 - acc: 0.9857 - 98ms/step\n",
      "step  40/125 - loss: 0.3524 - acc: 0.9848 - 95ms/step\n",
      "step  50/125 - loss: 0.3133 - acc: 0.9850 - 94ms/step\n",
      "step  60/125 - loss: 0.3133 - acc: 0.9854 - 93ms/step\n",
      "step  70/125 - loss: 0.3136 - acc: 0.9864 - 92ms/step\n",
      "step  80/125 - loss: 0.3133 - acc: 0.9861 - 92ms/step\n",
      "step  90/125 - loss: 0.3136 - acc: 0.9860 - 91ms/step\n",
      "step 100/125 - loss: 0.3212 - acc: 0.9859 - 91ms/step\n",
      "step 110/125 - loss: 0.3214 - acc: 0.9860 - 91ms/step\n",
      "step 120/125 - loss: 0.3367 - acc: 0.9861 - 90ms/step\n",
      "step 125/125 - loss: 0.3135 - acc: 0.9861 - 89ms/step\n",
      "save checkpoint at /home/aistudio/checkpoints/0\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3539 - acc: 0.9641 - 85ms/step\n",
      "step 20/84 - loss: 0.3741 - acc: 0.9664 - 70ms/step\n",
      "step 30/84 - loss: 0.3503 - acc: 0.9659 - 66ms/step\n",
      "step 40/84 - loss: 0.3313 - acc: 0.9658 - 64ms/step\n",
      "step 50/84 - loss: 0.3489 - acc: 0.9648 - 62ms/step\n",
      "step 60/84 - loss: 0.3215 - acc: 0.9655 - 61ms/step\n",
      "step 70/84 - loss: 0.3322 - acc: 0.9648 - 60ms/step\n",
      "step 80/84 - loss: 0.3441 - acc: 0.9647 - 59ms/step\n",
      "step 84/84 - loss: 0.3133 - acc: 0.9649 - 57ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 2/10\n",
      "step  10/125 - loss: 0.3211 - acc: 0.9805 - 108ms/step\n",
      "step  20/125 - loss: 0.3368 - acc: 0.9844 - 97ms/step\n",
      "step  30/125 - loss: 0.3290 - acc: 0.9839 - 93ms/step\n",
      "step  40/125 - loss: 0.3529 - acc: 0.9834 - 90ms/step\n",
      "step  50/125 - loss: 0.3134 - acc: 0.9839 - 90ms/step\n",
      "step  60/125 - loss: 0.3133 - acc: 0.9845 - 89ms/step\n",
      "step  70/125 - loss: 0.3133 - acc: 0.9856 - 89ms/step\n",
      "step  80/125 - loss: 0.3133 - acc: 0.9858 - 89ms/step\n",
      "step  90/125 - loss: 0.3133 - acc: 0.9857 - 89ms/step\n",
      "step 100/125 - loss: 0.3211 - acc: 0.9856 - 89ms/step\n",
      "step 110/125 - loss: 0.3211 - acc: 0.9857 - 89ms/step\n",
      "step 120/125 - loss: 0.3368 - acc: 0.9859 - 89ms/step\n",
      "step 125/125 - loss: 0.3133 - acc: 0.9858 - 87ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3540 - acc: 0.9656 - 87ms/step\n",
      "step 20/84 - loss: 0.3755 - acc: 0.9672 - 71ms/step\n",
      "step 30/84 - loss: 0.3495 - acc: 0.9664 - 66ms/step\n",
      "step 40/84 - loss: 0.3325 - acc: 0.9654 - 64ms/step\n",
      "step 50/84 - loss: 0.3479 - acc: 0.9648 - 63ms/step\n",
      "step 60/84 - loss: 0.3134 - acc: 0.9655 - 62ms/step\n",
      "step 70/84 - loss: 0.3382 - acc: 0.9651 - 61ms/step\n",
      "step 80/84 - loss: 0.3520 - acc: 0.9647 - 60ms/step\n",
      "step 84/84 - loss: 0.3133 - acc: 0.9649 - 58ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 3/10\n",
      "step  10/125 - loss: 0.3351 - acc: 0.9781 - 114ms/step\n",
      "step  20/125 - loss: 0.3370 - acc: 0.9824 - 103ms/step\n",
      "step  30/125 - loss: 0.3297 - acc: 0.9833 - 98ms/step\n",
      "step  40/125 - loss: 0.3523 - acc: 0.9830 - 95ms/step\n",
      "step  50/125 - loss: 0.3136 - acc: 0.9834 - 94ms/step\n",
      "step  60/125 - loss: 0.3133 - acc: 0.9842 - 92ms/step\n",
      "step  70/125 - loss: 0.3133 - acc: 0.9854 - 91ms/step\n",
      "step  80/125 - loss: 0.3133 - acc: 0.9856 - 91ms/step\n",
      "step  90/125 - loss: 0.3134 - acc: 0.9857 - 91ms/step\n",
      "step 100/125 - loss: 0.3211 - acc: 0.9856 - 91ms/step\n",
      "step 110/125 - loss: 0.3211 - acc: 0.9857 - 91ms/step\n",
      "step 120/125 - loss: 0.3370 - acc: 0.9859 - 90ms/step\n",
      "step 125/125 - loss: 0.3133 - acc: 0.9859 - 89ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3541 - acc: 0.9633 - 86ms/step\n",
      "step 20/84 - loss: 0.3738 - acc: 0.9672 - 76ms/step\n",
      "step 30/84 - loss: 0.3516 - acc: 0.9661 - 73ms/step\n",
      "step 40/84 - loss: 0.3304 - acc: 0.9662 - 69ms/step\n",
      "step 50/84 - loss: 0.3475 - acc: 0.9659 - 66ms/step\n",
      "step 60/84 - loss: 0.3133 - acc: 0.9663 - 64ms/step\n",
      "step 70/84 - loss: 0.3372 - acc: 0.9655 - 63ms/step\n",
      "step 80/84 - loss: 0.3480 - acc: 0.9653 - 61ms/step\n",
      "step 84/84 - loss: 0.3133 - acc: 0.9655 - 59ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 4/10\n",
      "step  10/125 - loss: 0.3211 - acc: 0.9812 - 114ms/step\n",
      "step  20/125 - loss: 0.3367 - acc: 0.9852 - 102ms/step\n",
      "step  30/125 - loss: 0.3290 - acc: 0.9857 - 97ms/step\n",
      "step  40/125 - loss: 0.3523 - acc: 0.9850 - 95ms/step\n",
      "step  50/125 - loss: 0.3133 - acc: 0.9852 - 94ms/step\n",
      "step  60/125 - loss: 0.3133 - acc: 0.9857 - 93ms/step\n",
      "step  70/125 - loss: 0.3134 - acc: 0.9866 - 92ms/step\n",
      "step  80/125 - loss: 0.3133 - acc: 0.9868 - 92ms/step\n",
      "step  90/125 - loss: 0.3133 - acc: 0.9867 - 91ms/step\n",
      "step 100/125 - loss: 0.3211 - acc: 0.9866 - 91ms/step\n",
      "step 110/125 - loss: 0.3211 - acc: 0.9866 - 90ms/step\n",
      "step 120/125 - loss: 0.3367 - acc: 0.9867 - 90ms/step\n",
      "step 125/125 - loss: 0.3133 - acc: 0.9866 - 88ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3567 - acc: 0.9617 - 86ms/step\n",
      "step 20/84 - loss: 0.3754 - acc: 0.9652 - 71ms/step\n",
      "step 30/84 - loss: 0.3520 - acc: 0.9651 - 66ms/step\n",
      "step 40/84 - loss: 0.3301 - acc: 0.9658 - 64ms/step\n",
      "step 50/84 - loss: 0.3495 - acc: 0.9653 - 63ms/step\n",
      "step 60/84 - loss: 0.3134 - acc: 0.9659 - 62ms/step\n",
      "step 70/84 - loss: 0.3374 - acc: 0.9654 - 61ms/step\n",
      "step 80/84 - loss: 0.3437 - acc: 0.9652 - 59ms/step\n",
      "step 84/84 - loss: 0.3133 - acc: 0.9653 - 57ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 5/10\n",
      "step  10/125 - loss: 0.3212 - acc: 0.9812 - 108ms/step\n",
      "step  20/125 - loss: 0.3367 - acc: 0.9852 - 96ms/step\n",
      "step  30/125 - loss: 0.3290 - acc: 0.9857 - 92ms/step\n",
      "step  40/125 - loss: 0.3523 - acc: 0.9850 - 90ms/step\n",
      "step  50/125 - loss: 0.3133 - acc: 0.9853 - 89ms/step\n",
      "step  60/125 - loss: 0.3133 - acc: 0.9858 - 87ms/step\n",
      "step  70/125 - loss: 0.3133 - acc: 0.9867 - 87ms/step\n",
      "step  80/125 - loss: 0.3133 - acc: 0.9869 - 87ms/step\n",
      "step  90/125 - loss: 0.3133 - acc: 0.9868 - 87ms/step\n",
      "step 100/125 - loss: 0.3211 - acc: 0.9866 - 87ms/step\n",
      "step 110/125 - loss: 0.3211 - acc: 0.9866 - 88ms/step\n",
      "step 120/125 - loss: 0.3367 - acc: 0.9867 - 88ms/step\n",
      "step 125/125 - loss: 0.3133 - acc: 0.9867 - 86ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3577 - acc: 0.9617 - 89ms/step\n",
      "step 20/84 - loss: 0.3751 - acc: 0.9652 - 72ms/step\n",
      "step 30/84 - loss: 0.3520 - acc: 0.9654 - 67ms/step\n",
      "step 40/84 - loss: 0.3302 - acc: 0.9660 - 65ms/step\n",
      "step 50/84 - loss: 0.3494 - acc: 0.9653 - 63ms/step\n",
      "step 60/84 - loss: 0.3134 - acc: 0.9659 - 63ms/step\n",
      "step 70/84 - loss: 0.3373 - acc: 0.9654 - 62ms/step\n",
      "step 80/84 - loss: 0.3440 - acc: 0.9652 - 61ms/step\n",
      "step 84/84 - loss: 0.3133 - acc: 0.9653 - 58ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 6/10\n",
      "step  10/125 - loss: 0.3212 - acc: 0.9812 - 110ms/step\n",
      "step  20/125 - loss: 0.3367 - acc: 0.9852 - 99ms/step\n",
      "step  30/125 - loss: 0.3289 - acc: 0.9857 - 95ms/step\n",
      "step  40/125 - loss: 0.3523 - acc: 0.9850 - 92ms/step\n",
      "step  50/125 - loss: 0.3133 - acc: 0.9853 - 92ms/step\n",
      "step  60/125 - loss: 0.3133 - acc: 0.9858 - 90ms/step\n",
      "step  70/125 - loss: 0.3133 - acc: 0.9867 - 90ms/step\n",
      "step  80/125 - loss: 0.3133 - acc: 0.9869 - 90ms/step\n",
      "step  90/125 - loss: 0.3133 - acc: 0.9868 - 90ms/step\n",
      "step 100/125 - loss: 0.3211 - acc: 0.9866 - 90ms/step\n",
      "step 110/125 - loss: 0.3211 - acc: 0.9866 - 90ms/step\n",
      "step 120/125 - loss: 0.3367 - acc: 0.9867 - 90ms/step\n",
      "step 125/125 - loss: 0.3133 - acc: 0.9867 - 88ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3577 - acc: 0.9617 - 86ms/step\n",
      "step 20/84 - loss: 0.3750 - acc: 0.9652 - 70ms/step\n",
      "step 30/84 - loss: 0.3520 - acc: 0.9651 - 66ms/step\n",
      "step 40/84 - loss: 0.3302 - acc: 0.9656 - 64ms/step\n",
      "step 50/84 - loss: 0.3494 - acc: 0.9648 - 63ms/step\n",
      "step 60/84 - loss: 0.3134 - acc: 0.9655 - 62ms/step\n",
      "step 70/84 - loss: 0.3372 - acc: 0.9651 - 61ms/step\n",
      "step 80/84 - loss: 0.3442 - acc: 0.9649 - 60ms/step\n",
      "step 84/84 - loss: 0.3133 - acc: 0.9651 - 58ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 7/10\n",
      "step  10/125 - loss: 0.3211 - acc: 0.9812 - 106ms/step\n",
      "step  20/125 - loss: 0.3367 - acc: 0.9852 - 96ms/step\n",
      "step  30/125 - loss: 0.3289 - acc: 0.9857 - 92ms/step\n",
      "step  40/125 - loss: 0.3523 - acc: 0.9850 - 89ms/step\n",
      "step  50/125 - loss: 0.3133 - acc: 0.9853 - 88ms/step\n",
      "step  60/125 - loss: 0.3133 - acc: 0.9858 - 88ms/step\n",
      "step  70/125 - loss: 0.3133 - acc: 0.9867 - 87ms/step\n",
      "step  80/125 - loss: 0.3133 - acc: 0.9869 - 87ms/step\n",
      "step  90/125 - loss: 0.3133 - acc: 0.9868 - 87ms/step\n",
      "step 100/125 - loss: 0.3211 - acc: 0.9866 - 87ms/step\n",
      "step 110/125 - loss: 0.3211 - acc: 0.9866 - 87ms/step\n",
      "step 120/125 - loss: 0.3367 - acc: 0.9867 - 87ms/step\n",
      "step 125/125 - loss: 0.3133 - acc: 0.9867 - 85ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3577 - acc: 0.9617 - 85ms/step\n",
      "step 20/84 - loss: 0.3748 - acc: 0.9648 - 69ms/step\n",
      "step 30/84 - loss: 0.3520 - acc: 0.9648 - 65ms/step\n",
      "step 40/84 - loss: 0.3302 - acc: 0.9654 - 63ms/step\n",
      "step 50/84 - loss: 0.3494 - acc: 0.9647 - 62ms/step\n",
      "step 60/84 - loss: 0.3134 - acc: 0.9654 - 61ms/step\n",
      "step 70/84 - loss: 0.3372 - acc: 0.9650 - 60ms/step\n",
      "step 80/84 - loss: 0.3444 - acc: 0.9648 - 58ms/step\n",
      "step 84/84 - loss: 0.3133 - acc: 0.9650 - 56ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 8/10\n",
      "step  10/125 - loss: 0.3211 - acc: 0.9812 - 107ms/step\n",
      "step  20/125 - loss: 0.3367 - acc: 0.9852 - 98ms/step\n",
      "step  30/125 - loss: 0.3289 - acc: 0.9857 - 94ms/step\n",
      "step  40/125 - loss: 0.3523 - acc: 0.9850 - 91ms/step\n",
      "step  50/125 - loss: 0.3133 - acc: 0.9853 - 91ms/step\n",
      "step  60/125 - loss: 0.3133 - acc: 0.9858 - 89ms/step\n",
      "step  70/125 - loss: 0.3133 - acc: 0.9867 - 89ms/step\n",
      "step  80/125 - loss: 0.3133 - acc: 0.9869 - 89ms/step\n",
      "step  90/125 - loss: 0.3133 - acc: 0.9868 - 88ms/step\n",
      "step 100/125 - loss: 0.3211 - acc: 0.9866 - 88ms/step\n",
      "step 110/125 - loss: 0.3211 - acc: 0.9866 - 88ms/step\n",
      "step 120/125 - loss: 0.3367 - acc: 0.9867 - 87ms/step\n",
      "step 125/125 - loss: 0.3133 - acc: 0.9867 - 86ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3577 - acc: 0.9617 - 86ms/step\n",
      "step 20/84 - loss: 0.3745 - acc: 0.9648 - 70ms/step\n",
      "step 30/84 - loss: 0.3520 - acc: 0.9648 - 66ms/step\n",
      "step 40/84 - loss: 0.3301 - acc: 0.9654 - 64ms/step\n",
      "step 50/84 - loss: 0.3493 - acc: 0.9647 - 64ms/step\n",
      "step 60/84 - loss: 0.3134 - acc: 0.9654 - 63ms/step\n",
      "step 70/84 - loss: 0.3372 - acc: 0.9650 - 62ms/step\n",
      "step 80/84 - loss: 0.3445 - acc: 0.9647 - 60ms/step\n",
      "step 84/84 - loss: 0.3133 - acc: 0.9649 - 58ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 9/10\n",
      "step  10/125 - loss: 0.3211 - acc: 0.9812 - 111ms/step\n",
      "step  20/125 - loss: 0.3367 - acc: 0.9852 - 99ms/step\n",
      "step  30/125 - loss: 0.3289 - acc: 0.9857 - 95ms/step\n",
      "step  40/125 - loss: 0.3523 - acc: 0.9850 - 92ms/step\n",
      "step  50/125 - loss: 0.3133 - acc: 0.9853 - 91ms/step\n",
      "step  60/125 - loss: 0.3133 - acc: 0.9858 - 90ms/step\n",
      "step  70/125 - loss: 0.3133 - acc: 0.9867 - 90ms/step\n",
      "step  80/125 - loss: 0.3133 - acc: 0.9869 - 90ms/step\n",
      "step  90/125 - loss: 0.3133 - acc: 0.9868 - 90ms/step\n",
      "step 100/125 - loss: 0.3211 - acc: 0.9866 - 90ms/step\n",
      "step 110/125 - loss: 0.3211 - acc: 0.9866 - 90ms/step\n",
      "step 120/125 - loss: 0.3367 - acc: 0.9867 - 89ms/step\n",
      "step 125/125 - loss: 0.3133 - acc: 0.9867 - 88ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3577 - acc: 0.9617 - 86ms/step\n",
      "step 20/84 - loss: 0.3739 - acc: 0.9648 - 70ms/step\n",
      "step 30/84 - loss: 0.3520 - acc: 0.9648 - 66ms/step\n",
      "step 40/84 - loss: 0.3301 - acc: 0.9652 - 64ms/step\n",
      "step 50/84 - loss: 0.3494 - acc: 0.9645 - 63ms/step\n",
      "step 60/84 - loss: 0.3133 - acc: 0.9652 - 61ms/step\n",
      "step 70/84 - loss: 0.3371 - acc: 0.9648 - 61ms/step\n",
      "step 80/84 - loss: 0.3447 - acc: 0.9646 - 59ms/step\n",
      "step 84/84 - loss: 0.3133 - acc: 0.9648 - 57ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 10/10\n",
      "step  10/125 - loss: 0.3211 - acc: 0.9812 - 106ms/step\n",
      "step  20/125 - loss: 0.3367 - acc: 0.9852 - 95ms/step\n",
      "step  30/125 - loss: 0.3289 - acc: 0.9857 - 92ms/step\n",
      "step  40/125 - loss: 0.3523 - acc: 0.9850 - 89ms/step\n",
      "step  50/125 - loss: 0.3133 - acc: 0.9853 - 89ms/step\n",
      "step  60/125 - loss: 0.3133 - acc: 0.9858 - 88ms/step\n",
      "step  70/125 - loss: 0.3133 - acc: 0.9867 - 88ms/step\n",
      "step  80/125 - loss: 0.3133 - acc: 0.9869 - 88ms/step\n",
      "step  90/125 - loss: 0.3133 - acc: 0.9868 - 88ms/step\n",
      "step 100/125 - loss: 0.3211 - acc: 0.9866 - 88ms/step\n",
      "step 110/125 - loss: 0.3211 - acc: 0.9866 - 88ms/step\n",
      "step 120/125 - loss: 0.3367 - acc: 0.9867 - 87ms/step\n",
      "step 125/125 - loss: 0.3133 - acc: 0.9867 - 86ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3577 - acc: 0.9617 - 85ms/step\n",
      "step 20/84 - loss: 0.3729 - acc: 0.9648 - 69ms/step\n",
      "step 30/84 - loss: 0.3519 - acc: 0.9648 - 66ms/step\n",
      "step 40/84 - loss: 0.3301 - acc: 0.9652 - 64ms/step\n",
      "step 50/84 - loss: 0.3494 - acc: 0.9645 - 63ms/step\n",
      "step 60/84 - loss: 0.3133 - acc: 0.9652 - 61ms/step\n",
      "step 70/84 - loss: 0.3371 - acc: 0.9648 - 60ms/step\n",
      "step 80/84 - loss: 0.3448 - acc: 0.9646 - 59ms/step\n",
      "step 84/84 - loss: 0.3133 - acc: 0.9648 - 57ms/step\n",
      "Eval samples: 10646\n",
      "save checkpoint at /home/aistudio/checkpoints/final\n"
     ]
    }
   ],
   "source": [
    "model.fit(train_loader, dev_loader, epochs=10, save_dir='./checkpoints', save_freq=20, verbose=2, callbacks=callback)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Next, the GRU model is trained, reaching an accuracy of around 97%.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The loss value printed in the log is the current step, and the metric is the average value of previous step.\n",
      "Epoch 1/10\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Building prefix dict from the default dictionary ...\n",
      "Dumping model to file cache /tmp/jieba.cache\n",
      "Loading model cost 0.804 seconds.\n",
      "Prefix dict has been built successfully.\n",
      "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\n",
      "  return (isinstance(seq, collections.Sequence) and\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step  10/125 - loss: 0.6911 - acc: 0.5188 - 211ms/step\n",
      "step  20/125 - loss: 0.6931 - acc: 0.5043 - 149ms/step\n",
      "step  30/125 - loss: 0.6912 - acc: 0.5154 - 129ms/step\n",
      "step  40/125 - loss: 0.6891 - acc: 0.5174 - 119ms/step\n",
      "step  50/125 - loss: 0.6860 - acc: 0.5197 - 113ms/step\n",
      "step  60/125 - loss: 0.6934 - acc: 0.5180 - 109ms/step\n",
      "step  70/125 - loss: 0.6894 - acc: 0.5180 - 106ms/step\n",
      "step  80/125 - loss: 0.6841 - acc: 0.5168 - 104ms/step\n",
      "step  90/125 - loss: 0.6838 - acc: 0.5349 - 103ms/step\n",
      "step 100/125 - loss: 0.6778 - acc: 0.5499 - 101ms/step\n",
      "step 110/125 - loss: 0.6710 - acc: 0.5612 - 101ms/step\n",
      "step 120/125 - loss: 0.6656 - acc: 0.5779 - 100ms/step\n",
      "step 125/125 - loss: 0.6675 - acc: 0.5850 - 98ms/step\n",
      "save checkpoint at /home/aistudio/checkpoints/0\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.6628 - acc: 0.7742 - 97ms/step\n",
      "step 20/84 - loss: 0.6633 - acc: 0.7762 - 77ms/step\n",
      "step 30/84 - loss: 0.6642 - acc: 0.7818 - 72ms/step\n",
      "step 40/84 - loss: 0.6618 - acc: 0.7836 - 68ms/step\n",
      "step 50/84 - loss: 0.6679 - acc: 0.7842 - 66ms/step\n",
      "step 60/84 - loss: 0.6595 - acc: 0.7859 - 66ms/step\n",
      "step 70/84 - loss: 0.6649 - acc: 0.7871 - 65ms/step\n",
      "step 80/84 - loss: 0.6639 - acc: 0.7864 - 63ms/step\n",
      "step 84/84 - loss: 0.6568 - acc: 0.7867 - 61ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 2/10\n",
      "step  10/125 - loss: 0.6692 - acc: 0.7641 - 119ms/step\n",
      "step  20/125 - loss: 0.6440 - acc: 0.7961 - 108ms/step\n",
      "step  30/125 - loss: 0.6293 - acc: 0.8039 - 103ms/step\n",
      "step  40/125 - loss: 0.6115 - acc: 0.8111 - 99ms/step\n",
      "step  50/125 - loss: 0.5831 - acc: 0.8189 - 98ms/step\n",
      "step  60/125 - loss: 0.5344 - acc: 0.8285 - 96ms/step\n",
      "step  70/125 - loss: 0.4825 - acc: 0.8353 - 95ms/step\n",
      "step  80/125 - loss: 0.4362 - acc: 0.8394 - 94ms/step\n",
      "step  90/125 - loss: 0.3906 - acc: 0.8452 - 94ms/step\n",
      "step 100/125 - loss: 0.3960 - acc: 0.8496 - 94ms/step\n",
      "step 110/125 - loss: 0.4132 - acc: 0.8548 - 94ms/step\n",
      "step 120/125 - loss: 0.4088 - acc: 0.8600 - 93ms/step\n",
      "step 125/125 - loss: 0.3999 - acc: 0.8612 - 92ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.4021 - acc: 0.9156 - 90ms/step\n",
      "step 20/84 - loss: 0.4114 - acc: 0.9148 - 77ms/step\n",
      "step 30/84 - loss: 0.3828 - acc: 0.9174 - 76ms/step\n",
      "step 40/84 - loss: 0.3706 - acc: 0.9199 - 73ms/step\n",
      "step 50/84 - loss: 0.3927 - acc: 0.9205 - 71ms/step\n",
      "step 60/84 - loss: 0.4023 - acc: 0.9193 - 69ms/step\n",
      "step 70/84 - loss: 0.4037 - acc: 0.9185 - 68ms/step\n",
      "step 80/84 - loss: 0.3788 - acc: 0.9194 - 69ms/step\n",
      "step 84/84 - loss: 0.4290 - acc: 0.9195 - 66ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 3/10\n",
      "step  10/125 - loss: 0.4350 - acc: 0.9109 - 112ms/step\n",
      "step  20/125 - loss: 0.4100 - acc: 0.9156 - 101ms/step\n",
      "step  30/125 - loss: 0.4013 - acc: 0.9154 - 97ms/step\n",
      "step  40/125 - loss: 0.3898 - acc: 0.9187 - 95ms/step\n",
      "step  50/125 - loss: 0.4026 - acc: 0.9194 - 94ms/step\n",
      "step  60/125 - loss: 0.3575 - acc: 0.9243 - 92ms/step\n",
      "step  70/125 - loss: 0.3478 - acc: 0.9281 - 92ms/step\n",
      "step  80/125 - loss: 0.3693 - acc: 0.9289 - 92ms/step\n",
      "step  90/125 - loss: 0.3460 - acc: 0.9305 - 92ms/step\n",
      "step 100/125 - loss: 0.3637 - acc: 0.9306 - 92ms/step\n",
      "step 110/125 - loss: 0.3679 - acc: 0.9316 - 92ms/step\n",
      "step 120/125 - loss: 0.3857 - acc: 0.9332 - 91ms/step\n",
      "step 125/125 - loss: 0.3751 - acc: 0.9331 - 90ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3803 - acc: 0.9383 - 90ms/step\n",
      "step 20/84 - loss: 0.3775 - acc: 0.9418 - 74ms/step\n",
      "step 30/84 - loss: 0.3670 - acc: 0.9414 - 69ms/step\n",
      "step 40/84 - loss: 0.3596 - acc: 0.9422 - 67ms/step\n",
      "step 50/84 - loss: 0.3702 - acc: 0.9419 - 66ms/step\n",
      "step 60/84 - loss: 0.3508 - acc: 0.9436 - 65ms/step\n",
      "step 70/84 - loss: 0.3800 - acc: 0.9423 - 64ms/step\n",
      "step 80/84 - loss: 0.3660 - acc: 0.9428 - 62ms/step\n",
      "step 84/84 - loss: 0.3480 - acc: 0.9435 - 60ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 4/10\n",
      "step  10/125 - loss: 0.3873 - acc: 0.9336 - 112ms/step\n",
      "step  20/125 - loss: 0.3845 - acc: 0.9375 - 101ms/step\n",
      "step  30/125 - loss: 0.3611 - acc: 0.9393 - 98ms/step\n",
      "step  40/125 - loss: 0.3770 - acc: 0.9420 - 95ms/step\n",
      "step  50/125 - loss: 0.3739 - acc: 0.9428 - 95ms/step\n",
      "step  60/125 - loss: 0.3397 - acc: 0.9460 - 93ms/step\n",
      "step  70/125 - loss: 0.3381 - acc: 0.9491 - 93ms/step\n",
      "step  80/125 - loss: 0.3530 - acc: 0.9493 - 92ms/step\n",
      "step  90/125 - loss: 0.3381 - acc: 0.9497 - 92ms/step\n",
      "step 100/125 - loss: 0.3448 - acc: 0.9499 - 92ms/step\n",
      "step 110/125 - loss: 0.3472 - acc: 0.9505 - 92ms/step\n",
      "step 120/125 - loss: 0.3640 - acc: 0.9518 - 92ms/step\n",
      "step 125/125 - loss: 0.3590 - acc: 0.9516 - 90ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3758 - acc: 0.9477 - 93ms/step\n",
      "step 20/84 - loss: 0.3660 - acc: 0.9516 - 75ms/step\n",
      "step 30/84 - loss: 0.3624 - acc: 0.9516 - 71ms/step\n",
      "step 40/84 - loss: 0.3505 - acc: 0.9531 - 68ms/step\n",
      "step 50/84 - loss: 0.3713 - acc: 0.9527 - 67ms/step\n",
      "step 60/84 - loss: 0.3516 - acc: 0.9535 - 65ms/step\n",
      "step 70/84 - loss: 0.3650 - acc: 0.9523 - 64ms/step\n",
      "step 80/84 - loss: 0.3543 - acc: 0.9523 - 62ms/step\n",
      "step 84/84 - loss: 0.3545 - acc: 0.9530 - 60ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 5/10\n",
      "step  10/125 - loss: 0.3742 - acc: 0.9469 - 109ms/step\n",
      "step  20/125 - loss: 0.3738 - acc: 0.9516 - 100ms/step\n",
      "step  30/125 - loss: 0.3449 - acc: 0.9542 - 97ms/step\n",
      "step  40/125 - loss: 0.3689 - acc: 0.9555 - 95ms/step\n",
      "step  50/125 - loss: 0.3654 - acc: 0.9561 - 94ms/step\n",
      "step  60/125 - loss: 0.3306 - acc: 0.9577 - 93ms/step\n",
      "step  70/125 - loss: 0.3384 - acc: 0.9589 - 93ms/step\n",
      "step  80/125 - loss: 0.3424 - acc: 0.9587 - 94ms/step\n",
      "step  90/125 - loss: 0.3325 - acc: 0.9585 - 94ms/step\n",
      "step 100/125 - loss: 0.3376 - acc: 0.9581 - 94ms/step\n",
      "step 110/125 - loss: 0.3382 - acc: 0.9584 - 94ms/step\n",
      "step 120/125 - loss: 0.3680 - acc: 0.9590 - 94ms/step\n",
      "step 125/125 - loss: 0.3528 - acc: 0.9587 - 92ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3759 - acc: 0.9555 - 91ms/step\n",
      "step 20/84 - loss: 0.3760 - acc: 0.9531 - 74ms/step\n",
      "step 30/84 - loss: 0.3522 - acc: 0.9534 - 69ms/step\n",
      "step 40/84 - loss: 0.3476 - acc: 0.9549 - 67ms/step\n",
      "step 50/84 - loss: 0.3683 - acc: 0.9537 - 66ms/step\n",
      "step 60/84 - loss: 0.3524 - acc: 0.9544 - 65ms/step\n",
      "step 70/84 - loss: 0.3600 - acc: 0.9539 - 64ms/step\n",
      "step 80/84 - loss: 0.3500 - acc: 0.9541 - 62ms/step\n",
      "step 84/84 - loss: 0.4450 - acc: 0.9543 - 60ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 6/10\n",
      "step  10/125 - loss: 0.3715 - acc: 0.9523 - 109ms/step\n",
      "step  20/125 - loss: 0.3694 - acc: 0.9559 - 98ms/step\n",
      "step  30/125 - loss: 0.3461 - acc: 0.9591 - 95ms/step\n",
      "step  40/125 - loss: 0.3702 - acc: 0.9590 - 92ms/step\n",
      "step  50/125 - loss: 0.3495 - acc: 0.9602 - 92ms/step\n",
      "step  60/125 - loss: 0.3294 - acc: 0.9629 - 91ms/step\n",
      "step  70/125 - loss: 0.3314 - acc: 0.9650 - 91ms/step\n",
      "step  80/125 - loss: 0.3390 - acc: 0.9646 - 91ms/step\n",
      "step  90/125 - loss: 0.3297 - acc: 0.9647 - 91ms/step\n",
      "step 100/125 - loss: 0.3322 - acc: 0.9639 - 91ms/step\n",
      "step 110/125 - loss: 0.3332 - acc: 0.9643 - 91ms/step\n",
      "step 120/125 - loss: 0.3584 - acc: 0.9650 - 91ms/step\n",
      "step 125/125 - loss: 0.3505 - acc: 0.9649 - 89ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3645 - acc: 0.9578 - 89ms/step\n",
      "step 20/84 - loss: 0.3603 - acc: 0.9605 - 76ms/step\n",
      "step 30/84 - loss: 0.3581 - acc: 0.9602 - 70ms/step\n",
      "step 40/84 - loss: 0.3467 - acc: 0.9611 - 67ms/step\n",
      "step 50/84 - loss: 0.3673 - acc: 0.9603 - 65ms/step\n",
      "step 60/84 - loss: 0.3342 - acc: 0.9605 - 64ms/step\n",
      "step 70/84 - loss: 0.3655 - acc: 0.9590 - 63ms/step\n",
      "step 80/84 - loss: 0.3523 - acc: 0.9596 - 62ms/step\n",
      "step 84/84 - loss: 0.3492 - acc: 0.9599 - 60ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 7/10\n",
      "step  10/125 - loss: 0.3754 - acc: 0.9555 - 112ms/step\n",
      "step  20/125 - loss: 0.3677 - acc: 0.9625 - 100ms/step\n",
      "step  30/125 - loss: 0.3403 - acc: 0.9638 - 96ms/step\n",
      "step  40/125 - loss: 0.3626 - acc: 0.9635 - 94ms/step\n",
      "step  50/125 - loss: 0.3439 - acc: 0.9645 - 93ms/step\n",
      "step  60/125 - loss: 0.3303 - acc: 0.9664 - 92ms/step\n",
      "step  70/125 - loss: 0.3270 - acc: 0.9681 - 92ms/step\n",
      "step  80/125 - loss: 0.3360 - acc: 0.9679 - 91ms/step\n",
      "step  90/125 - loss: 0.3269 - acc: 0.9679 - 92ms/step\n",
      "step 100/125 - loss: 0.3335 - acc: 0.9673 - 92ms/step\n",
      "step 110/125 - loss: 0.3304 - acc: 0.9678 - 92ms/step\n",
      "step 120/125 - loss: 0.3621 - acc: 0.9686 - 91ms/step\n",
      "step 125/125 - loss: 0.3406 - acc: 0.9683 - 89ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3702 - acc: 0.9617 - 92ms/step\n",
      "step 20/84 - loss: 0.3527 - acc: 0.9637 - 75ms/step\n",
      "step 30/84 - loss: 0.3515 - acc: 0.9635 - 71ms/step\n",
      "step 40/84 - loss: 0.3451 - acc: 0.9645 - 68ms/step\n",
      "step 50/84 - loss: 0.3720 - acc: 0.9633 - 67ms/step\n",
      "step 60/84 - loss: 0.3325 - acc: 0.9635 - 65ms/step\n",
      "step 70/84 - loss: 0.3517 - acc: 0.9623 - 64ms/step\n",
      "step 80/84 - loss: 0.3441 - acc: 0.9621 - 62ms/step\n",
      "step 84/84 - loss: 0.3532 - acc: 0.9624 - 60ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 8/10\n",
      "step  10/125 - loss: 0.3562 - acc: 0.9602 - 115ms/step\n",
      "step  20/125 - loss: 0.3646 - acc: 0.9668 - 103ms/step\n",
      "step  30/125 - loss: 0.3354 - acc: 0.9677 - 98ms/step\n",
      "step  40/125 - loss: 0.3585 - acc: 0.9678 - 95ms/step\n",
      "step  50/125 - loss: 0.3413 - acc: 0.9681 - 95ms/step\n",
      "step  60/125 - loss: 0.3268 - acc: 0.9699 - 94ms/step\n",
      "step  70/125 - loss: 0.3249 - acc: 0.9717 - 93ms/step\n",
      "step  80/125 - loss: 0.3324 - acc: 0.9715 - 93ms/step\n",
      "step  90/125 - loss: 0.3256 - acc: 0.9712 - 93ms/step\n",
      "step 100/125 - loss: 0.3293 - acc: 0.9705 - 93ms/step\n",
      "step 110/125 - loss: 0.3275 - acc: 0.9708 - 93ms/step\n",
      "step 120/125 - loss: 0.3526 - acc: 0.9717 - 92ms/step\n",
      "step 125/125 - loss: 0.3306 - acc: 0.9717 - 91ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3610 - acc: 0.9602 - 92ms/step\n",
      "step 20/84 - loss: 0.3528 - acc: 0.9637 - 75ms/step\n",
      "step 30/84 - loss: 0.3532 - acc: 0.9622 - 71ms/step\n",
      "step 40/84 - loss: 0.3437 - acc: 0.9633 - 68ms/step\n",
      "step 50/84 - loss: 0.3720 - acc: 0.9625 - 68ms/step\n",
      "step 60/84 - loss: 0.3409 - acc: 0.9628 - 66ms/step\n",
      "step 70/84 - loss: 0.3476 - acc: 0.9616 - 65ms/step\n",
      "step 80/84 - loss: 0.3540 - acc: 0.9617 - 63ms/step\n",
      "step 84/84 - loss: 0.3555 - acc: 0.9621 - 61ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 9/10\n",
      "step  10/125 - loss: 0.3559 - acc: 0.9617 - 110ms/step\n",
      "step  20/125 - loss: 0.3596 - acc: 0.9695 - 99ms/step\n",
      "step  30/125 - loss: 0.3345 - acc: 0.9703 - 96ms/step\n",
      "step  40/125 - loss: 0.3566 - acc: 0.9695 - 94ms/step\n",
      "step  50/125 - loss: 0.3390 - acc: 0.9700 - 93ms/step\n",
      "step  60/125 - loss: 0.3240 - acc: 0.9720 - 93ms/step\n",
      "step  70/125 - loss: 0.3230 - acc: 0.9737 - 92ms/step\n",
      "step  80/125 - loss: 0.3268 - acc: 0.9735 - 92ms/step\n",
      "step  90/125 - loss: 0.3249 - acc: 0.9735 - 92ms/step\n",
      "step 100/125 - loss: 0.3294 - acc: 0.9733 - 92ms/step\n",
      "step 110/125 - loss: 0.3260 - acc: 0.9737 - 92ms/step\n",
      "step 120/125 - loss: 0.3477 - acc: 0.9746 - 92ms/step\n",
      "step 125/125 - loss: 0.3330 - acc: 0.9744 - 90ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3599 - acc: 0.9633 - 89ms/step\n",
      "step 20/84 - loss: 0.3526 - acc: 0.9652 - 73ms/step\n",
      "step 30/84 - loss: 0.3519 - acc: 0.9651 - 69ms/step\n",
      "step 40/84 - loss: 0.3442 - acc: 0.9666 - 67ms/step\n",
      "step 50/84 - loss: 0.3690 - acc: 0.9656 - 65ms/step\n",
      "step 60/84 - loss: 0.3300 - acc: 0.9659 - 65ms/step\n",
      "step 70/84 - loss: 0.3494 - acc: 0.9651 - 64ms/step\n",
      "step 80/84 - loss: 0.3435 - acc: 0.9650 - 62ms/step\n",
      "step 84/84 - loss: 0.3451 - acc: 0.9653 - 60ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 10/10\n",
      "step  10/125 - loss: 0.3522 - acc: 0.9633 - 109ms/step\n",
      "step  20/125 - loss: 0.3474 - acc: 0.9715 - 99ms/step\n",
      "step  30/125 - loss: 0.3334 - acc: 0.9734 - 96ms/step\n",
      "step  40/125 - loss: 0.3562 - acc: 0.9721 - 94ms/step\n",
      "step  50/125 - loss: 0.3374 - acc: 0.9728 - 93ms/step\n",
      "step  60/125 - loss: 0.3306 - acc: 0.9741 - 92ms/step\n",
      "step  70/125 - loss: 0.3310 - acc: 0.9756 - 92ms/step\n",
      "step  80/125 - loss: 0.3243 - acc: 0.9755 - 92ms/step\n",
      "step  90/125 - loss: 0.3243 - acc: 0.9753 - 91ms/step\n",
      "step 100/125 - loss: 0.3284 - acc: 0.9750 - 92ms/step\n",
      "step 110/125 - loss: 0.3246 - acc: 0.9754 - 91ms/step\n",
      "step 120/125 - loss: 0.3482 - acc: 0.9762 - 91ms/step\n",
      "step 125/125 - loss: 0.3318 - acc: 0.9760 - 90ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3613 - acc: 0.9594 - 90ms/step\n",
      "step 20/84 - loss: 0.3559 - acc: 0.9637 - 77ms/step\n",
      "step 30/84 - loss: 0.3496 - acc: 0.9648 - 71ms/step\n",
      "step 40/84 - loss: 0.3447 - acc: 0.9658 - 68ms/step\n",
      "step 50/84 - loss: 0.3621 - acc: 0.9655 - 67ms/step\n",
      "step 60/84 - loss: 0.3267 - acc: 0.9660 - 65ms/step\n",
      "step 70/84 - loss: 0.3552 - acc: 0.9653 - 65ms/step\n",
      "step 80/84 - loss: 0.3503 - acc: 0.9651 - 63ms/step\n",
      "step 84/84 - loss: 0.3303 - acc: 0.9655 - 60ms/step\n",
      "Eval samples: 10646\n",
      "save checkpoint at /home/aistudio/checkpoints/final\n"
     ]
    }
   ],
   "source": [
    "model1.fit(train_loader, dev_loader, epochs=10, save_dir='./checkpoints', save_freq=20,verbose=2, callbacks=callback)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Then train the RNN model, which reaches an accuracy of around 98% on the training set (about 96.7% on the dev set)**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The loss value printed in the log is the current step, and the metric is the average value of previous step.\n",
      "Epoch 1/10\n",
      "step  10/125 - loss: 0.3351 - acc: 0.9703 - 106ms/step\n",
      "step  20/125 - loss: 0.3232 - acc: 0.9758 - 93ms/step\n",
      "step  30/125 - loss: 0.3356 - acc: 0.9786 - 89ms/step\n",
      "step  40/125 - loss: 0.3573 - acc: 0.9777 - 87ms/step\n",
      "step  50/125 - loss: 0.3325 - acc: 0.9789 - 86ms/step\n",
      "step  60/125 - loss: 0.3220 - acc: 0.9796 - 85ms/step\n",
      "step  70/125 - loss: 0.3188 - acc: 0.9806 - 85ms/step\n",
      "step  80/125 - loss: 0.3146 - acc: 0.9807 - 84ms/step\n",
      "step  90/125 - loss: 0.3171 - acc: 0.9806 - 84ms/step\n",
      "step 100/125 - loss: 0.3294 - acc: 0.9802 - 84ms/step\n",
      "step 110/125 - loss: 0.3228 - acc: 0.9802 - 84ms/step\n",
      "step 120/125 - loss: 0.3450 - acc: 0.9808 - 83ms/step\n",
      "step 125/125 - loss: 0.3388 - acc: 0.9805 - 82ms/step\n",
      "save checkpoint at /home/aistudio/checkpoints/0\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3495 - acc: 0.9656 - 82ms/step\n",
      "step 20/84 - loss: 0.3478 - acc: 0.9676 - 67ms/step\n",
      "step 30/84 - loss: 0.3402 - acc: 0.9677 - 63ms/step\n",
      "step 40/84 - loss: 0.3350 - acc: 0.9680 - 61ms/step\n",
      "step 50/84 - loss: 0.3608 - acc: 0.9672 - 60ms/step\n",
      "step 60/84 - loss: 0.3298 - acc: 0.9678 - 59ms/step\n",
      "step 70/84 - loss: 0.3419 - acc: 0.9666 - 58ms/step\n",
      "step 80/84 - loss: 0.3451 - acc: 0.9665 - 57ms/step\n",
      "step 84/84 - loss: 0.3208 - acc: 0.9668 - 55ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 2/10\n",
      "step  10/125 - loss: 0.3343 - acc: 0.9719 - 104ms/step\n",
      "step  20/125 - loss: 0.3240 - acc: 0.9789 - 92ms/step\n",
      "step  30/125 - loss: 0.3342 - acc: 0.9810 - 88ms/step\n",
      "step  40/125 - loss: 0.3571 - acc: 0.9793 - 85ms/step\n",
      "step  50/125 - loss: 0.3316 - acc: 0.9802 - 84ms/step\n",
      "step  60/125 - loss: 0.3219 - acc: 0.9806 - 83ms/step\n",
      "step  70/125 - loss: 0.3163 - acc: 0.9816 - 82ms/step\n",
      "step  80/125 - loss: 0.3145 - acc: 0.9817 - 82ms/step\n",
      "step  90/125 - loss: 0.3162 - acc: 0.9817 - 82ms/step\n",
      "step 100/125 - loss: 0.3297 - acc: 0.9812 - 82ms/step\n",
      "step 110/125 - loss: 0.3230 - acc: 0.9812 - 82ms/step\n",
      "step 120/125 - loss: 0.3432 - acc: 0.9818 - 82ms/step\n",
      "step 125/125 - loss: 0.3398 - acc: 0.9815 - 80ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3533 - acc: 0.9633 - 84ms/step\n",
      "step 20/84 - loss: 0.3434 - acc: 0.9664 - 68ms/step\n",
      "step 30/84 - loss: 0.3421 - acc: 0.9674 - 64ms/step\n",
      "step 40/84 - loss: 0.3358 - acc: 0.9688 - 61ms/step\n",
      "step 50/84 - loss: 0.3563 - acc: 0.9683 - 60ms/step\n",
      "step 60/84 - loss: 0.3225 - acc: 0.9689 - 59ms/step\n",
      "step 70/84 - loss: 0.3463 - acc: 0.9679 - 58ms/step\n",
      "step 80/84 - loss: 0.3508 - acc: 0.9677 - 56ms/step\n",
      "step 84/84 - loss: 0.3175 - acc: 0.9682 - 54ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 3/10\n",
      "step  10/125 - loss: 0.3323 - acc: 0.9719 - 102ms/step\n",
      "step  20/125 - loss: 0.3238 - acc: 0.9793 - 92ms/step\n",
      "step  30/125 - loss: 0.3340 - acc: 0.9818 - 88ms/step\n",
      "step  40/125 - loss: 0.3542 - acc: 0.9803 - 86ms/step\n",
      "step  50/125 - loss: 0.3310 - acc: 0.9814 - 86ms/step\n",
      "step  60/125 - loss: 0.3221 - acc: 0.9822 - 84ms/step\n",
      "step  70/125 - loss: 0.3162 - acc: 0.9831 - 84ms/step\n",
      "step  80/125 - loss: 0.3144 - acc: 0.9833 - 84ms/step\n",
      "step  90/125 - loss: 0.3153 - acc: 0.9830 - 83ms/step\n",
      "step 100/125 - loss: 0.3293 - acc: 0.9828 - 83ms/step\n",
      "step 110/125 - loss: 0.3223 - acc: 0.9828 - 83ms/step\n",
      "step 120/125 - loss: 0.3419 - acc: 0.9832 - 82ms/step\n",
      "step 125/125 - loss: 0.3389 - acc: 0.9829 - 81ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3541 - acc: 0.9641 - 81ms/step\n",
      "step 20/84 - loss: 0.3408 - acc: 0.9676 - 66ms/step\n",
      "step 30/84 - loss: 0.3371 - acc: 0.9682 - 62ms/step\n",
      "step 40/84 - loss: 0.3354 - acc: 0.9691 - 60ms/step\n",
      "step 50/84 - loss: 0.3526 - acc: 0.9686 - 59ms/step\n",
      "step 60/84 - loss: 0.3222 - acc: 0.9697 - 58ms/step\n",
      "step 70/84 - loss: 0.3478 - acc: 0.9684 - 57ms/step\n",
      "step 80/84 - loss: 0.3524 - acc: 0.9682 - 56ms/step\n",
      "step 84/84 - loss: 0.3155 - acc: 0.9685 - 54ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 4/10\n",
      "step  10/125 - loss: 0.3312 - acc: 0.9734 - 104ms/step\n",
      "step  20/125 - loss: 0.3199 - acc: 0.9805 - 93ms/step\n",
      "step  30/125 - loss: 0.3328 - acc: 0.9826 - 89ms/step\n",
      "step  40/125 - loss: 0.3583 - acc: 0.9809 - 87ms/step\n",
      "step  50/125 - loss: 0.3298 - acc: 0.9817 - 86ms/step\n",
      "step  60/125 - loss: 0.3221 - acc: 0.9822 - 84ms/step\n",
      "step  70/125 - loss: 0.3153 - acc: 0.9834 - 84ms/step\n",
      "step  80/125 - loss: 0.3141 - acc: 0.9834 - 83ms/step\n",
      "step  90/125 - loss: 0.3146 - acc: 0.9832 - 83ms/step\n",
      "step 100/125 - loss: 0.3294 - acc: 0.9830 - 83ms/step\n",
      "step 110/125 - loss: 0.3223 - acc: 0.9831 - 83ms/step\n",
      "step 120/125 - loss: 0.3408 - acc: 0.9835 - 83ms/step\n",
      "step 125/125 - loss: 0.3349 - acc: 0.9832 - 81ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3538 - acc: 0.9609 - 82ms/step\n",
      "step 20/84 - loss: 0.3440 - acc: 0.9656 - 68ms/step\n",
      "step 30/84 - loss: 0.3323 - acc: 0.9669 - 63ms/step\n",
      "step 40/84 - loss: 0.3380 - acc: 0.9678 - 61ms/step\n",
      "step 50/84 - loss: 0.3557 - acc: 0.9673 - 60ms/step\n",
      "step 60/84 - loss: 0.3265 - acc: 0.9680 - 59ms/step\n",
      "step 70/84 - loss: 0.3477 - acc: 0.9670 - 57ms/step\n",
      "step 80/84 - loss: 0.3551 - acc: 0.9666 - 56ms/step\n",
      "step 84/84 - loss: 0.3183 - acc: 0.9670 - 54ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 5/10\n",
      "step  10/125 - loss: 0.3319 - acc: 0.9742 - 101ms/step\n",
      "step  20/125 - loss: 0.3164 - acc: 0.9812 - 90ms/step\n",
      "step  30/125 - loss: 0.3322 - acc: 0.9831 - 87ms/step\n",
      "step  40/125 - loss: 0.3537 - acc: 0.9816 - 84ms/step\n",
      "step  50/125 - loss: 0.3290 - acc: 0.9825 - 85ms/step\n",
      "step  60/125 - loss: 0.3225 - acc: 0.9824 - 84ms/step\n",
      "step  70/125 - loss: 0.3230 - acc: 0.9819 - 84ms/step\n",
      "step  80/125 - loss: 0.3197 - acc: 0.9814 - 84ms/step\n",
      "step  90/125 - loss: 0.3223 - acc: 0.9804 - 84ms/step\n",
      "step 100/125 - loss: 0.3348 - acc: 0.9795 - 84ms/step\n",
      "step 110/125 - loss: 0.3169 - acc: 0.9798 - 84ms/step\n",
      "step 120/125 - loss: 0.3384 - acc: 0.9805 - 84ms/step\n",
      "step 125/125 - loss: 0.3277 - acc: 0.9804 - 82ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3540 - acc: 0.9648 - 81ms/step\n",
      "step 20/84 - loss: 0.3489 - acc: 0.9680 - 67ms/step\n",
      "step 30/84 - loss: 0.3361 - acc: 0.9688 - 63ms/step\n",
      "step 40/84 - loss: 0.3386 - acc: 0.9688 - 61ms/step\n",
      "step 50/84 - loss: 0.3484 - acc: 0.9686 - 60ms/step\n",
      "step 60/84 - loss: 0.3184 - acc: 0.9689 - 59ms/step\n",
      "step 70/84 - loss: 0.3469 - acc: 0.9680 - 59ms/step\n",
      "step 80/84 - loss: 0.3535 - acc: 0.9679 - 57ms/step\n",
      "step 84/84 - loss: 0.3160 - acc: 0.9684 - 55ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 6/10\n",
      "step  10/125 - loss: 0.3303 - acc: 0.9766 - 99ms/step\n",
      "step  20/125 - loss: 0.3173 - acc: 0.9832 - 90ms/step\n",
      "step  30/125 - loss: 0.3320 - acc: 0.9844 - 86ms/step\n",
      "step  40/125 - loss: 0.3534 - acc: 0.9826 - 84ms/step\n",
      "step  50/125 - loss: 0.3265 - acc: 0.9828 - 84ms/step\n",
      "step  60/125 - loss: 0.3218 - acc: 0.9836 - 83ms/step\n",
      "step  70/125 - loss: 0.3158 - acc: 0.9846 - 82ms/step\n",
      "step  80/125 - loss: 0.3137 - acc: 0.9848 - 82ms/step\n",
      "step  90/125 - loss: 0.3141 - acc: 0.9847 - 81ms/step\n",
      "step 100/125 - loss: 0.3292 - acc: 0.9843 - 81ms/step\n",
      "step 110/125 - loss: 0.3150 - acc: 0.9844 - 81ms/step\n",
      "step 120/125 - loss: 0.3443 - acc: 0.9842 - 81ms/step\n",
      "step 125/125 - loss: 0.3269 - acc: 0.9839 - 80ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3495 - acc: 0.9617 - 83ms/step\n",
      "step 20/84 - loss: 0.3603 - acc: 0.9660 - 67ms/step\n",
      "step 30/84 - loss: 0.3376 - acc: 0.9651 - 64ms/step\n",
      "step 40/84 - loss: 0.3384 - acc: 0.9658 - 62ms/step\n",
      "step 50/84 - loss: 0.3679 - acc: 0.9653 - 61ms/step\n",
      "step 60/84 - loss: 0.3340 - acc: 0.9656 - 59ms/step\n",
      "step 70/84 - loss: 0.3519 - acc: 0.9646 - 58ms/step\n",
      "step 80/84 - loss: 0.3506 - acc: 0.9646 - 57ms/step\n",
      "step 84/84 - loss: 0.3418 - acc: 0.9649 - 55ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 7/10\n",
      "step  10/125 - loss: 0.3249 - acc: 0.9789 - 100ms/step\n",
      "step  20/125 - loss: 0.3154 - acc: 0.9840 - 90ms/step\n",
      "step  30/125 - loss: 0.3320 - acc: 0.9852 - 86ms/step\n",
      "step  40/125 - loss: 0.3536 - acc: 0.9832 - 84ms/step\n",
      "step  50/125 - loss: 0.3238 - acc: 0.9836 - 83ms/step\n",
      "step  60/125 - loss: 0.3217 - acc: 0.9842 - 83ms/step\n",
      "step  70/125 - loss: 0.3158 - acc: 0.9852 - 82ms/step\n",
      "step  80/125 - loss: 0.3137 - acc: 0.9854 - 82ms/step\n",
      "step  90/125 - loss: 0.3215 - acc: 0.9852 - 82ms/step\n",
      "step 100/125 - loss: 0.3294 - acc: 0.9846 - 82ms/step\n",
      "step 110/125 - loss: 0.3139 - acc: 0.9848 - 82ms/step\n",
      "step 120/125 - loss: 0.3331 - acc: 0.9851 - 82ms/step\n",
      "step 125/125 - loss: 0.3252 - acc: 0.9849 - 80ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3541 - acc: 0.9609 - 81ms/step\n",
      "step 20/84 - loss: 0.3605 - acc: 0.9648 - 66ms/step\n",
      "step 30/84 - loss: 0.3368 - acc: 0.9646 - 62ms/step\n",
      "step 40/84 - loss: 0.3383 - acc: 0.9643 - 60ms/step\n",
      "step 50/84 - loss: 0.3554 - acc: 0.9641 - 59ms/step\n",
      "step 60/84 - loss: 0.3464 - acc: 0.9643 - 58ms/step\n",
      "step 70/84 - loss: 0.3521 - acc: 0.9634 - 57ms/step\n",
      "step 80/84 - loss: 0.3591 - acc: 0.9633 - 56ms/step\n",
      "step 84/84 - loss: 0.3648 - acc: 0.9636 - 54ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 8/10\n",
      "step  10/125 - loss: 0.3233 - acc: 0.9766 - 107ms/step\n",
      "step  20/125 - loss: 0.3159 - acc: 0.9840 - 96ms/step\n",
      "step  30/125 - loss: 0.3348 - acc: 0.9852 - 91ms/step\n",
      "step  40/125 - loss: 0.3530 - acc: 0.9828 - 89ms/step\n",
      "step  50/125 - loss: 0.3226 - acc: 0.9834 - 88ms/step\n",
      "step  60/125 - loss: 0.3216 - acc: 0.9841 - 87ms/step\n",
      "step  70/125 - loss: 0.3218 - acc: 0.9848 - 86ms/step\n",
      "step  80/125 - loss: 0.3138 - acc: 0.9838 - 86ms/step\n",
      "step  90/125 - loss: 0.3137 - acc: 0.9837 - 86ms/step\n",
      "step 100/125 - loss: 0.3294 - acc: 0.9835 - 86ms/step\n",
      "step 110/125 - loss: 0.3147 - acc: 0.9838 - 86ms/step\n",
      "step 120/125 - loss: 0.3311 - acc: 0.9843 - 85ms/step\n",
      "step 125/125 - loss: 0.3216 - acc: 0.9841 - 84ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3527 - acc: 0.9609 - 86ms/step\n",
      "step 20/84 - loss: 0.3571 - acc: 0.9656 - 70ms/step\n",
      "step 30/84 - loss: 0.3308 - acc: 0.9667 - 66ms/step\n",
      "step 40/84 - loss: 0.3378 - acc: 0.9672 - 63ms/step\n",
      "step 50/84 - loss: 0.3575 - acc: 0.9664 - 62ms/step\n",
      "step 60/84 - loss: 0.3269 - acc: 0.9667 - 61ms/step\n",
      "step 70/84 - loss: 0.3506 - acc: 0.9658 - 60ms/step\n",
      "step 80/84 - loss: 0.3582 - acc: 0.9661 - 59ms/step\n",
      "step 84/84 - loss: 0.3182 - acc: 0.9665 - 56ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 9/10\n",
      "step  10/125 - loss: 0.3254 - acc: 0.9781 - 101ms/step\n",
      "step  20/125 - loss: 0.3144 - acc: 0.9848 - 89ms/step\n",
      "step  30/125 - loss: 0.3305 - acc: 0.9857 - 86ms/step\n",
      "step  40/125 - loss: 0.3530 - acc: 0.9836 - 83ms/step\n",
      "step  50/125 - loss: 0.3224 - acc: 0.9842 - 83ms/step\n",
      "step  60/125 - loss: 0.3216 - acc: 0.9846 - 82ms/step\n",
      "step  70/125 - loss: 0.3148 - acc: 0.9855 - 82ms/step\n",
      "step  80/125 - loss: 0.3134 - acc: 0.9855 - 82ms/step\n",
      "step  90/125 - loss: 0.3146 - acc: 0.9855 - 82ms/step\n",
      "step 100/125 - loss: 0.3290 - acc: 0.9852 - 83ms/step\n",
      "step 110/125 - loss: 0.3141 - acc: 0.9852 - 83ms/step\n",
      "step 120/125 - loss: 0.3306 - acc: 0.9856 - 82ms/step\n",
      "step 125/125 - loss: 0.3229 - acc: 0.9855 - 81ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3528 - acc: 0.9648 - 82ms/step\n",
      "step 20/84 - loss: 0.3546 - acc: 0.9684 - 67ms/step\n",
      "step 30/84 - loss: 0.3312 - acc: 0.9690 - 63ms/step\n",
      "step 40/84 - loss: 0.3360 - acc: 0.9682 - 60ms/step\n",
      "step 50/84 - loss: 0.3489 - acc: 0.9681 - 59ms/step\n",
      "step 60/84 - loss: 0.3253 - acc: 0.9686 - 58ms/step\n",
      "step 70/84 - loss: 0.3463 - acc: 0.9674 - 57ms/step\n",
      "step 80/84 - loss: 0.3570 - acc: 0.9674 - 56ms/step\n",
      "step 84/84 - loss: 0.3159 - acc: 0.9676 - 54ms/step\n",
      "Eval samples: 10646\n",
      "Epoch 10/10\n",
      "step  10/125 - loss: 0.3230 - acc: 0.9797 - 101ms/step\n",
      "step  20/125 - loss: 0.3169 - acc: 0.9852 - 91ms/step\n",
      "step  30/125 - loss: 0.3309 - acc: 0.9862 - 87ms/step\n",
      "step  40/125 - loss: 0.3529 - acc: 0.9842 - 84ms/step\n",
      "step  50/125 - loss: 0.3220 - acc: 0.9847 - 83ms/step\n",
      "step  60/125 - loss: 0.3227 - acc: 0.9845 - 82ms/step\n",
      "step  70/125 - loss: 0.3140 - acc: 0.9852 - 82ms/step\n",
      "step  80/125 - loss: 0.3136 - acc: 0.9854 - 81ms/step\n",
      "step  90/125 - loss: 0.3137 - acc: 0.9855 - 81ms/step\n",
      "step 100/125 - loss: 0.3290 - acc: 0.9852 - 81ms/step\n",
      "step 110/125 - loss: 0.3168 - acc: 0.9853 - 81ms/step\n",
      "step 120/125 - loss: 0.3299 - acc: 0.9857 - 81ms/step\n",
      "step 125/125 - loss: 0.3200 - acc: 0.9856 - 79ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3547 - acc: 0.9633 - 85ms/step\n",
      "step 20/84 - loss: 0.3577 - acc: 0.9672 - 69ms/step\n",
      "step 30/84 - loss: 0.3338 - acc: 0.9672 - 65ms/step\n",
      "step 40/84 - loss: 0.3359 - acc: 0.9674 - 62ms/step\n",
      "step 50/84 - loss: 0.3521 - acc: 0.9672 - 61ms/step\n",
      "step 60/84 - loss: 0.3268 - acc: 0.9673 - 60ms/step\n",
      "step 70/84 - loss: 0.3481 - acc: 0.9664 - 59ms/step\n",
      "step 80/84 - loss: 0.3546 - acc: 0.9667 - 57ms/step\n",
      "step 84/84 - loss: 0.3186 - acc: 0.9671 - 55ms/step\n",
      "Eval samples: 10646\n",
      "save checkpoint at /home/aistudio/checkpoints/final\n"
     ]
    }
   ],
   "source": [
    "model2.fit(train_loader, dev_loader, epochs=10, save_dir='./checkpoints', save_freq=20,verbose=2, callbacks=callback)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Launch VisualDL to visualize the training process\n",
    "Steps:\n",
    "- 1. Switch to the \"Visualization\" tab on the left side of this page\n",
    "- 2. Set the log directory to 'visualdl'\n",
    "- 3. Click \"Start VisualDL\", then \"Open VisualDL\" to view the results\n",
    "\n",
    "The accuracy and loss curves over training look like this:\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/cb5dff1e17e2407f91b83a1faabd09b4aaa7daac50f44d74a903a576452fbd09)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/84 - loss: 0.3549 - acc: 0.9656 - 91ms/step\n",
      "step 20/84 - loss: 0.3699 - acc: 0.9676 - 73ms/step\n",
      "step 30/84 - loss: 0.3412 - acc: 0.9669 - 69ms/step\n",
      "step 40/84 - loss: 0.3362 - acc: 0.9682 - 66ms/step\n",
      "step 50/84 - loss: 0.3583 - acc: 0.9673 - 64ms/step\n",
      "step 60/84 - loss: 0.3250 - acc: 0.9674 - 63ms/step\n",
      "step 70/84 - loss: 0.3338 - acc: 0.9665 - 61ms/step\n",
      "step 80/84 - loss: 0.3357 - acc: 0.9666 - 60ms/step\n",
      "step 84/84 - loss: 0.3134 - acc: 0.9668 - 58ms/step\n",
      "Eval samples: 10646\n",
      "Finally test acc: 0.96684\n"
     ]
    }
   ],
   "source": [
    "results = model.evaluate(dev_loader)\r\n",
    "print(\"Finally test acc: %.5f\" % results['acc'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Prediction"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Predict begin...\n",
      "step 42/42 [==============================] - 62ms/step\n",
      "Predict samples: 5353\n",
      "Data: 楼面经理服务态度极差，等位和埋单都差，楼面小妹还挺好 \t Label: negative\n",
      "Data: 欺负北方人没吃过鲍鱼是怎么着？简直敷衍到可笑的程度，团购连青菜都是两人份？！难吃到死，菜色还特别可笑，什么时候粤菜的小菜改成拍黄瓜了？！把团购客人当傻子，可这满大厅的傻子谁还会再来？！ \t Label: negative\n",
      "Data: 如果大家有时间而且不怕麻烦的话可以去这里试试，点一个饭等左2个钟，没错！是两个钟！期间催了n遍都说马上到，结果？呵呵。乳鸽的味道，太咸，可能不新鲜吧……要用重口味盖住异味。上菜超级慢！中途还搞什么表演，麻烦有人手的话就上菜啊，表什么演？！？！要大家饿着看表演吗？最后结账还算错单，我真心服了……有一种店叫不会有下次，大概就是指它吧 \t Label: negative\n",
      "Data: 偌大的一个大厅就一个人点菜，点菜速度超级慢，菜牌上多个菜停售，连续点了两个没标停售的菜也告知没有，粥上来是凉的，榴莲酥火大了，格格肉超级油腻而且咸?????? \t Label: negative\n",
      "Data: 泥撕雞超級好吃！！！吃了一個再叫一個還想打包的節奏！ \t Label: positive\n",
      "Data: 作为地道的广州人，从小就跟着家人在西关品尝各式美食，今日带着家中长辈来这个老字号泮溪酒家真实失望透顶，出品差、服务差、洗手间邋遢弥漫着浓郁尿骚味、丢广州人的脸、丢广州老字号的脸。 \t Label: negative\n",
      "Data: 辣味道很赞哦！猪肚鸡一直是我们的最爱，每次来都必点，服务很给力，环境很好，值得分享哦！西洋菜 \t Label: positive\n",
      "Data: 第一次吃到這麼脏的火鍋：吃着吃著吃出一條尾指粗的黑毛毛蟲——惡心！脏！！！第一次吃到這麼無誠信的火鍋服務：我們呼喚人員時，某女部長立即使服務員迅速取走蟲所在的碗，任我們多次叫「放下」論理，她們也置若罔聞轉身將蟲毁屍滅跡，還嘻皮笑臉辯稱只是把碗換走,態度行為惡劣——奸詐！毫無誠信！！爛！！！當然還有剛坐下時的情形：第一次吃到這樣的火鍋：所有肉食熟食都上桌了，鍋底遲遲沒上，足足等了半小時才姍姍來遲；---差！！第一次吃到這樣的火鍋：1元雞鍋、1碟6塊小牛肉、1碟小腐皮、1碟5塊裝的普通肥牛、1碟數片的細碎牛肚結帳便2百多元；---不值！！以下省略千字差評......白云路的稻香是最差、最失禮的稻香，天河城、華廈的都比它好上過萬倍！！白云路的稻香是史上最差的餐廳！！！ \t Label: negative\n",
      "Data: 文昌鸡份量很少且很咸，其他菜味道很一般！服务态度差差差！还要10%的服务费、 \t Label: negative\n",
      "Data: 这个网站的评价真是越来越不可信了，搞不懂为什么这么多好评。真的是很一般，不要迷信什么哪里回来的大厨吧。环境和出品若是当作普通茶餐厅来看待就还说得过去，但是价格又不是茶餐厅的价格，这就很尴尬了。。服务也是有待提高。 \t Label: negative\n"
     ]
    }
   ],
   "source": [
    "label_map = {0: 'negative', 1: 'positive'}\r\n",
    "results = model.predict(test_loader, batch_size=128)[0]\r\n",
    "predictions = []\r\n",
    "\r\n",
    "for batch_probs in results:\r\n",
    "    # Map predicted ids to class labels\r\n",
    "    idx = np.argmax(batch_probs, axis=-1)\r\n",
    "    idx = idx.tolist()\r\n",
    "    labels = [label_map[i] for i in idx]\r\n",
    "    predictions.extend(labels)\r\n",
    "\r\n",
    "# Inspect the classification results for the first 10 test samples\r\n",
    "for idx, data in enumerate(test_ds.data[:10]):\r\n",
    "    print('Data: {} \\t Label: {}'.format(data[0], predictions[idx]))"
   ]
  },
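  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The mapping from softmax probabilities to text labels in the prediction cell above can be sketched with plain NumPy (a minimal sketch; the toy `batch_probs` values are made up for illustration):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "label_map = {0: 'negative', 1: 'positive'}\n",
    "# One batch of softmax outputs, shape (batch_size, num_classes)\n",
    "batch_probs = np.array([[0.9, 0.1],\n",
    "                        [0.2, 0.8]])\n",
    "idx = np.argmax(batch_probs, axis=-1).tolist()\n",
    "labels = [label_map[i] for i in idx]\n",
    "print(labels)  # → ['negative', 'positive']\n",
    "```"
   ]
  },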
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Even with this basic model, we already reach fairly high accuracy.\n",
    "\n",
    "Try a pretrained model for even better results! See [How to fine-tune a pretrained model for downstream tasks](https://aistudio.baidu.com/aistudio/projectdetail/1294333)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Testing the three-class model**\n",
    "\n",
    "The dataloader, model structure, and training procedure are almost the same as before. The key change is that the labels, previously 0 and 1, become 0, 1, and 2.\n",
    "\n",
    "At prediction time, the binary label map must likewise be extended to three classes.\n"
   ]
  },
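  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Concretely, the only change on the prediction side is the label map, which grows from two entries to three (a sketch; the name 'neutral' for id 1 is an assumption about the dataset's label order):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Hypothetical id-to-label order for the three-class data\n",
    "label_map = {0: 'negative', 1: 'neutral', 2: 'positive'}\n",
    "batch_probs = np.array([[0.1, 0.7, 0.2]])\n",
    "idx = np.argmax(batch_probs, axis=-1).tolist()\n",
    "labels = [label_map[i] for i in idx]\n",
    "print(labels)  # → ['neutral']\n",
    "```"
   ]
  },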
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[PAD] 0\n"
     ]
    }
   ],
   "source": [
    "class SelfDefinedDataset(paddle.io.Dataset):\r\n",
    "    def __init__(self, data):\r\n",
    "        super(SelfDefinedDataset, self).__init__()\r\n",
    "        self.data = data\r\n",
    "\r\n",
    "    def __getitem__(self, idx):\r\n",
    "        return self.data[idx]\r\n",
    "\r\n",
    "    def __len__(self):\r\n",
    "        return len(self.data)\r\n",
    "        \r\n",
    "    def get_labels(self):\r\n",
    "        return [\"0\", \"1\",\"2\"]\r\n",
    "\r\n",
    "def txt_to_list(file_name):\r\n",
    "    res_list = []\r\n",
    "    for line in open(file_name):\r\n",
    "        res_list.append(line.strip().split('\\t'))\r\n",
    "    return res_list\r\n",
    "\r\n",
    "trainlst = txt_to_list('3div/train.txt')\r\n",
    "devlst = txt_to_list('3div/dev.txt')\r\n",
    "testlst = txt_to_list('3div/test.txt')\r\n",
    "\r\n",
    "# get_datasets() converts the lists into dataset objects.\r\n",
    "# get_datasets() accepts [list] or [str] arguments; choose whichever fits how the custom dataset is written.\r\n",
    "# train_ds, dev_ds, test_ds = ppnlp.datasets.ChnSentiCorp.get_datasets(['train', 'dev', 'test'])\r\n",
    "train_ds, dev_ds, test_ds = SelfDefinedDataset.get_datasets([trainlst, devlst, testlst])\r\n",
    "label_list = train_ds.get_labels()\r\n",
    "\r\n",
    "vocab = load_vocab('./senta_word_dict.txt')\r\n",
    "\r\n",
    "for k, v in vocab.items():\r\n",
    "    print(k, v)\r\n",
    "    break\r\n",
    "\r\n",
    "def create_dataloader(dataset,\r\n",
    "                      trans_function=None,\r\n",
    "                      mode='train',\r\n",
    "                      batch_size=1,\r\n",
    "                      pad_token_id=0,\r\n",
    "                      batchify_fn=None):\r\n",
    "    if trans_function:\r\n",
    "        dataset = dataset.apply(trans_function, lazy=True)\r\n",
    "\r\n",
    "    # return_list: whether to return the data as Python lists\r\n",
    "    # collate_fn: how to combine a list of samples into a mini-batch. It must be a callable that implements the batching logic and returns the batch data. Here we pass `batchify_fn`, which pads the inputs and returns the actual sequence lengths.\r\n",
    "    dataloader = paddle.io.DataLoader(\r\n",
    "        dataset,\r\n",
    "        return_list=True,\r\n",
    "        batch_size=batch_size,\r\n",
    "        collate_fn=batchify_fn)\r\n",
    "        \r\n",
    "    return dataloader\r\n",
    "\r\n",
    "# functools.partial fixes some of a function's arguments (i.e., gives them defaults) and returns a new function that is simpler to call.\r\n",
    "trans_function = partial(\r\n",
    "    convert_example,\r\n",
    "    vocab=vocab,\r\n",
    "    unk_token_id=vocab.get('[UNK]', 1),\r\n",
    "    is_test=False)\r\n",
    "\r\n",
    "# Batch the input data so the model can run on whole batches at once.\r\n",
    "# Each sentence in a batch is padded to the batch's maximum text length, batch_max_seq_len.\r\n",
    "# Texts longer than batch_max_seq_len are truncated to it; shorter texts are padded up to it.\r\n",
    "batchify_fn = lambda samples, fn=Tuple(\r\n",
    "    Pad(axis=0, pad_val=vocab['[PAD]']),  # input_ids\r\n",
    "    Stack(dtype=\"int64\"),  # seq len\r\n",
    "    Stack(dtype=\"int64\")  # label\r\n",
    "): [data for data in fn(samples)]\r\n",
    "\r\n",
    "\r\n",
    "train_loader = create_dataloader(\r\n",
    "    train_ds,\r\n",
    "    trans_function=trans_function,\r\n",
    "    batch_size=128,\r\n",
    "    mode='train',\r\n",
    "    batchify_fn=batchify_fn)\r\n",
    "dev_loader = create_dataloader(\r\n",
    "    dev_ds,\r\n",
    "    trans_function=trans_function,\r\n",
    "    batch_size=128,\r\n",
    "    mode='validation',\r\n",
    "    batchify_fn=batchify_fn)\r\n",
    "test_loader = create_dataloader(\r\n",
    "    test_ds,\r\n",
    "    trans_function=trans_function,\r\n",
    "    batch_size=128,\r\n",
    "    mode='test',\r\n",
    "    batchify_fn=batchify_fn)\r\n",
    "\r\n",
    "class LSTMModel(nn.Layer):\r\n",
    "    def __init__(self,\r\n",
    "                 vocab_size,\r\n",
    "                 num_classes,\r\n",
    "                 emb_dim=128,\r\n",
    "                 padding_idx=0,\r\n",
    "                 lstm_hidden_size=198,\r\n",
    "                 direction='forward',\r\n",
    "                 lstm_layers=1,\r\n",
    "                 dropout_rate=0,\r\n",
    "                 pooling_type=None,\r\n",
    "                 fc_hidden_size=96):\r\n",
    "        super().__init__()\r\n",
    "\r\n",
    "        # First map the input word ids to word embeddings via a lookup table\r\n",
    "        self.embedder = nn.Embedding(\r\n",
    "            num_embeddings=vocab_size,\r\n",
    "            embedding_dim=emb_dim,\r\n",
    "            padding_idx=padding_idx)\r\n",
    "\r\n",
    "        # Transform the word embeddings into the text semantic representation space with LSTMEncoder\r\n",
    "        self.lstm_encoder = ppnlp.seq2vec.LSTMEncoder(\r\n",
    "            emb_dim,\r\n",
    "            lstm_hidden_size,\r\n",
    "            num_layers=lstm_layers,\r\n",
    "            direction=direction,\r\n",
    "            dropout=dropout_rate,\r\n",
    "            pooling_type=pooling_type)\r\n",
    "\r\n",
    "        # LSTMEncoder.get_output_dim() returns the hidden size of the text representation produced by the encoder\r\n",
    "        self.fc = nn.Linear(self.lstm_encoder.get_output_dim(), fc_hidden_size)\r\n",
    "\r\n",
    "        # Final classifier\r\n",
    "        self.output_layer = nn.Linear(fc_hidden_size, num_classes)\r\n",
    "\r\n",
    "    def forward(self, text, seq_len):\r\n",
    "        # text shape: (batch_size, num_tokens)\r\n",
    "        # print('input :', text.shape)\r\n",
    "        \r\n",
    "        # Shape: (batch_size, num_tokens, embedding_dim)\r\n",
    "        embedded_text = self.embedder(text)\r\n",
    "        # print('after word-embeding:', embedded_text.shape)\r\n",
    "\r\n",
    "        # Shape: (batch_size, num_tokens, num_directions*lstm_hidden_size)\r\n",
    "        # num_directions = 2 if direction is 'bidirectional' else 1\r\n",
    "        text_repr = self.lstm_encoder(embedded_text, sequence_length=seq_len)\r\n",
    "        # print('after lstm:', text_repr.shape)\r\n",
    "\r\n",
    "\r\n",
    "        # Shape: (batch_size, fc_hidden_size)\r\n",
    "        fc_out = paddle.tanh(self.fc(text_repr))\r\n",
    "        # print('after Linear classifier:', fc_out.shape)\r\n",
    "\r\n",
    "        # Shape: (batch_size, num_classes)\r\n",
    "        logits = self.output_layer(fc_out)\r\n",
    "        # print('output:', logits.shape)\r\n",
    "        \r\n",
    "        # probs: class probabilities\r\n",
    "        probs = F.softmax(logits, axis=-1)\r\n",
    "        # print('output probability:', probs.shape)\r\n",
    "        return probs\r\n",
    "\r\n",
    "model= LSTMModel(\r\n",
    "        len(vocab),\r\n",
    "        len(label_list),\r\n",
    "        direction='bidirectional',\r\n",
    "        padding_idx=vocab['[PAD]'])\r\n",
    "model = paddle.Model(model)"
   ]
  },
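  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**A side note (an observation, not part of the original course material):** `paddle.nn.CrossEntropyLoss` applies softmax internally by default, while `forward` above already returns `F.softmax(logits, axis=-1)`. During training softmax is therefore effectively applied twice; the model still learns, but the printed loss stays relatively high even at ~90% accuracy. A minimal sketch of the usual fix, assuming the `LSTMModel` class defined above:\r\n",
    "\r\n",
    "```python\r\n",
    "# In LSTMModel.forward, return raw logits for training and\r\n",
    "# compute probabilities only at inference time:\r\n",
    "#     logits = self.output_layer(fc_out)\r\n",
    "#     return logits  # instead of F.softmax(logits, axis=-1)\r\n",
    "```"
   ]
  },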
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [],
   "source": [
    "optimizer = paddle.optimizer.Adam(\r\n",
    "        parameters=model.parameters(), learning_rate=5e-4)\r\n",
    "\r\n",
    "loss = paddle.nn.CrossEntropyLoss()\r\n",
    "metric = paddle.metric.Accuracy()\r\n",
    "\r\n",
    "model.prepare(optimizer, loss, metric)\r\n",
    "# 设置visualdl路径\r\n",
    "log_dir = './visualdl'\r\n",
    "callback = paddle.callbacks.VisualDL(log_dir=log_dir)"
   ]
  },
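  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `VisualDL` callback above writes training scalars to `./visualdl`. Outside AI Studio (which has a built-in VisualDL panel), they can be inspected with the VisualDL CLI, assuming the `visualdl` package is installed:\r\n",
    "\r\n",
    "```shell\r\n",
    "visualdl --logdir ./visualdl\r\n",
    "```"
   ]
  },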
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The loss value printed in the log is the current step, and the metric is the average value of previous step.\n",
      "Epoch 1/20\n",
      "step 251/251 [==============================] - loss: 0.6654 - acc: 0.8977 - ETA: 27s - 116ms/st - loss: 0.7167 - acc: 0.8844 - ETA: 23s - 102ms/st - loss: 0.6539 - acc: 0.8872 - ETA: 21s - 97ms/step - loss: 0.6357 - acc: 0.8912 - ETA: 20s - 95ms/ste - loss: 0.6145 - acc: 0.8950 - ETA: 19s - 96ms/ste - loss: 0.6805 - acc: 0.8938 - ETA: 18s - 95ms/ste - loss: 0.6251 - acc: 0.8935 - ETA: 17s - 94ms/ste - loss: 0.6374 - acc: 0.8950 - ETA: 16s - 94ms/ste - loss: 0.6596 - acc: 0.8943 - ETA: 15s - 94ms/ste - loss: 0.6443 - acc: 0.8944 - ETA: 14s - 94ms/ste - loss: 0.6640 - acc: 0.8934 - ETA: 13s - 93ms/ste - loss: 0.6635 - acc: 0.8942 - ETA: 12s - 93ms/ste - loss: 0.6464 - acc: 0.8943 - ETA: 11s - 93ms/ste - loss: 0.6543 - acc: 0.8959 - ETA: 10s - 93ms/ste - loss: 0.6502 - acc: 0.8966 - ETA: 9s - 93ms/ste - loss: 0.6069 - acc: 0.8969 - ETA: 8s - 93ms/st - loss: 0.6320 - acc: 0.8970 - ETA: 7s - 93ms/st - loss: 0.6335 - acc: 0.8972 - ETA: 6s - 93ms/st - loss: 0.6451 - acc: 0.8979 - ETA: 5s - 93ms/st - loss: 0.6682 - acc: 0.8975 - ETA: 4s - 93ms/st - loss: 0.6395 - acc: 0.8975 - ETA: 3s - 93ms/st - loss: 0.6074 - acc: 0.8986 - ETA: 2s - 93ms/st - loss: 0.5997 - acc: 0.8993 - ETA: 1s - 92ms/st - loss: 0.6741 - acc: 0.8990 - ETA: 1s - 92ms/st - loss: 0.6348 - acc: 0.8993 - ETA: 0s - 92ms/st - loss: 0.6566 - acc: 0.8993 - 91ms/step          \n",
      "save checkpoint at /home/aistudio/checkpoints/0\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.7753 - acc: 0.7195 - ETA: 1s - 85ms/st - loss: 0.7943 - acc: 0.7211 - ETA: 0s - 72ms/st - loss: 0.7721 - acc: 0.7266 - ETA: 0s - 62ms/st - loss: 0.8567 - acc: 0.7253 - 61ms/step          \n",
      "Eval samples: 3968\n",
      "Epoch 2/20\n",
      "step 251/251 [==============================] - loss: 0.6530 - acc: 0.8930 - ETA: 26s - 110ms/st - loss: 0.7084 - acc: 0.8887 - ETA: 23s - 101ms/st - loss: 0.6351 - acc: 0.8938 - ETA: 21s - 96ms/step - loss: 0.6325 - acc: 0.8951 - ETA: 19s - 93ms/ste - loss: 0.6144 - acc: 0.8980 - ETA: 18s - 92ms/ste - loss: 0.6713 - acc: 0.8966 - ETA: 17s - 92ms/ste - loss: 0.6366 - acc: 0.8961 - ETA: 16s - 92ms/ste - loss: 0.6413 - acc: 0.8976 - ETA: 15s - 92ms/ste - loss: 0.6606 - acc: 0.8961 - ETA: 14s - 92ms/ste - loss: 0.6357 - acc: 0.8964 - ETA: 13s - 92ms/ste - loss: 0.6495 - acc: 0.8956 - ETA: 12s - 91ms/ste - loss: 0.6814 - acc: 0.8962 - ETA: 11s - 91ms/ste - loss: 0.6587 - acc: 0.8962 - ETA: 11s - 92ms/ste - loss: 0.6523 - acc: 0.8973 - ETA: 10s - 92ms/ste - loss: 0.6467 - acc: 0.8974 - ETA: 9s - 92ms/ste - loss: 0.5845 - acc: 0.8978 - ETA: 8s - 92ms/st - loss: 0.6214 - acc: 0.8978 - ETA: 7s - 93ms/st - loss: 0.6293 - acc: 0.8984 - ETA: 6s - 93ms/st - loss: 0.6393 - acc: 0.8988 - ETA: 5s - 93ms/st - loss: 0.6649 - acc: 0.8987 - ETA: 4s - 93ms/st - loss: 0.6437 - acc: 0.8988 - ETA: 3s - 93ms/st - loss: 0.5964 - acc: 0.8998 - ETA: 2s - 93ms/st - loss: 0.5988 - acc: 0.9007 - ETA: 1s - 93ms/st - loss: 0.6606 - acc: 0.9004 - ETA: 1s - 93ms/st - loss: 0.6303 - acc: 0.9006 - ETA: 0s - 92ms/st - loss: 0.6485 - acc: 0.9006 - 92ms/step          \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.7715 - acc: 0.7227 - ETA: 1s - 84ms/st - loss: 0.7767 - acc: 0.7172 - ETA: 0s - 71ms/st - loss: 0.7713 - acc: 0.7247 - ETA: 0s - 62ms/st - loss: 0.8471 - acc: 0.7238 - 60ms/step          \n",
      "Eval samples: 3968\n",
      "Epoch 3/20\n",
      "step 251/251 [==============================] - loss: 0.6691 - acc: 0.9016 - ETA: 26s - 110ms/st - loss: 0.7033 - acc: 0.8965 - ETA: 22s - 99ms/step - loss: 0.6251 - acc: 0.9003 - ETA: 20s - 95ms/ste - loss: 0.6504 - acc: 0.9018 - ETA: 19s - 93ms/ste - loss: 0.6061 - acc: 0.9044 - ETA: 18s - 93ms/ste - loss: 0.6835 - acc: 0.9026 - ETA: 17s - 92ms/ste - loss: 0.6375 - acc: 0.9019 - ETA: 16s - 92ms/ste - loss: 0.6189 - acc: 0.9031 - ETA: 15s - 92ms/ste - loss: 0.6668 - acc: 0.9017 - ETA: 14s - 91ms/ste - loss: 0.6791 - acc: 0.8963 - ETA: 13s - 91ms/ste - loss: 0.7486 - acc: 0.8926 - ETA: 12s - 91ms/ste - loss: 0.6791 - acc: 0.8918 - ETA: 11s - 91ms/ste - loss: 0.6876 - acc: 0.8912 - ETA: 10s - 91ms/ste - loss: 0.6559 - acc: 0.8923 - ETA: 10s - 91ms/ste - loss: 0.6503 - acc: 0.8926 - ETA: 9s - 91ms/ste - loss: 0.6155 - acc: 0.8927 - ETA: 8s - 91ms/st - loss: 0.6375 - acc: 0.8929 - ETA: 7s - 91ms/st - loss: 0.6329 - acc: 0.8932 - ETA: 6s - 91ms/st - loss: 0.6508 - acc: 0.8937 - ETA: 5s - 90ms/st - loss: 0.6742 - acc: 0.8934 - ETA: 4s - 90ms/st - loss: 0.6474 - acc: 0.8934 - ETA: 3s - 90ms/st - loss: 0.6055 - acc: 0.8941 - ETA: 2s - 90ms/st - loss: 0.6057 - acc: 0.8948 - ETA: 1s - 90ms/st - loss: 0.6683 - acc: 0.8942 - ETA: 0s - 90ms/st - loss: 0.6255 - acc: 0.8948 - ETA: 0s - 90ms/st - loss: 0.6604 - acc: 0.8948 - 89ms/step          \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.7553 - acc: 0.7352 - ETA: 1s - 89ms/st - loss: 0.7595 - acc: 0.7223 - ETA: 0s - 75ms/st - loss: 0.7780 - acc: 0.7250 - ETA: 0s - 64ms/st - loss: 0.8478 - acc: 0.7240 - 63ms/step          \n",
      "Eval samples: 3968\n",
      "Epoch 4/20\n",
      "step 251/251 [==============================] - loss: 0.6756 - acc: 0.8953 - ETA: 27s - 112ms/st - loss: 0.6939 - acc: 0.8926 - ETA: 23s - 101ms/st - loss: 0.6613 - acc: 0.8945 - ETA: 21s - 96ms/step - loss: 0.6478 - acc: 0.8967 - ETA: 19s - 93ms/ste - loss: 0.5909 - acc: 0.9012 - ETA: 18s - 93ms/ste - loss: 0.6782 - acc: 0.9001 - ETA: 17s - 92ms/ste - loss: 0.6348 - acc: 0.8992 - ETA: 16s - 91ms/ste - loss: 0.6243 - acc: 0.9006 - ETA: 15s - 91ms/ste - loss: 0.6591 - acc: 0.8996 - ETA: 14s - 91ms/ste - loss: 0.6365 - acc: 0.8998 - ETA: 13s - 91ms/ste - loss: 0.6503 - acc: 0.8986 - ETA: 12s - 91ms/ste - loss: 0.6568 - acc: 0.8990 - ETA: 11s - 91ms/ste - loss: 0.6636 - acc: 0.8989 - ETA: 11s - 91ms/ste - loss: 0.6609 - acc: 0.8995 - ETA: 10s - 91ms/ste - loss: 0.6444 - acc: 0.8996 - ETA: 9s - 91ms/ste - loss: 0.5878 - acc: 0.9002 - ETA: 8s - 91ms/st - loss: 0.6222 - acc: 0.9005 - ETA: 7s - 91ms/st - loss: 0.6308 - acc: 0.9013 - ETA: 6s - 91ms/st - loss: 0.6527 - acc: 0.9014 - ETA: 5s - 91ms/st - loss: 0.6538 - acc: 0.9014 - ETA: 4s - 91ms/st - loss: 0.6300 - acc: 0.9012 - ETA: 3s - 91ms/st - loss: 0.5891 - acc: 0.9020 - ETA: 2s - 91ms/st - loss: 0.6059 - acc: 0.9022 - ETA: 1s - 91ms/st - loss: 0.6613 - acc: 0.9014 - ETA: 0s - 91ms/st - loss: 0.6518 - acc: 0.9017 - ETA: 0s - 90ms/st - loss: 0.6626 - acc: 0.9016 - 90ms/step          \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.7664 - acc: 0.7133 - ETA: 1s - 87ms/st - loss: 0.8123 - acc: 0.7059 - ETA: 0s - 73ms/st - loss: 0.7715 - acc: 0.7169 - ETA: 0s - 63ms/st - loss: 0.8607 - acc: 0.7160 - 62ms/step          \n",
      "Eval samples: 3968\n",
      "Epoch 5/20\n",
      "step 251/251 [==============================] - loss: 0.6685 - acc: 0.8977 - ETA: 27s - 112ms/st - loss: 0.6802 - acc: 0.8914 - ETA: 23s - 101ms/st - loss: 0.6349 - acc: 0.8945 - ETA: 21s - 97ms/step - loss: 0.6338 - acc: 0.8973 - ETA: 20s - 95ms/ste - loss: 0.5998 - acc: 0.9005 - ETA: 19s - 95ms/ste - loss: 0.6800 - acc: 0.8996 - ETA: 17s - 94ms/ste - loss: 0.6303 - acc: 0.8996 - ETA: 16s - 93ms/ste - loss: 0.6229 - acc: 0.9007 - ETA: 15s - 92ms/ste - loss: 0.6612 - acc: 0.9003 - ETA: 14s - 92ms/ste - loss: 0.6329 - acc: 0.9002 - ETA: 13s - 91ms/ste - loss: 0.6508 - acc: 0.8996 - ETA: 12s - 91ms/ste - loss: 0.6470 - acc: 0.9003 - ETA: 11s - 91ms/ste - loss: 0.6588 - acc: 0.8996 - ETA: 11s - 91ms/ste - loss: 0.6584 - acc: 0.9006 - ETA: 10s - 91ms/ste - loss: 0.6377 - acc: 0.9007 - ETA: 9s - 91ms/ste - loss: 0.5878 - acc: 0.9013 - ETA: 8s - 92ms/st - loss: 0.6240 - acc: 0.9014 - ETA: 7s - 92ms/st - loss: 0.6624 - acc: 0.9015 - ETA: 6s - 93ms/st - loss: 0.6431 - acc: 0.9016 - ETA: 5s - 93ms/st - loss: 0.6622 - acc: 0.9013 - ETA: 4s - 93ms/st - loss: 0.6436 - acc: 0.9009 - ETA: 3s - 93ms/st - loss: 0.6067 - acc: 0.9017 - ETA: 2s - 93ms/st - loss: 0.6126 - acc: 0.9022 - ETA: 1s - 92ms/st - loss: 0.6756 - acc: 0.9017 - ETA: 1s - 92ms/st - loss: 0.6464 - acc: 0.9017 - ETA: 0s - 92ms/st - loss: 0.6516 - acc: 0.9017 - 91ms/step          \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.8093 - acc: 0.7109 - ETA: 1s - 84ms/st - loss: 0.7792 - acc: 0.7070 - ETA: 0s - 72ms/st - loss: 0.7775 - acc: 0.7146 - ETA: 0s - 62ms/st - loss: 0.8379 - acc: 0.7145 - 61ms/step          \n",
      "Eval samples: 3968\n",
      "Epoch 6/20\n",
      "step 251/251 [==============================] - loss: 0.6562 - acc: 0.8984 - ETA: 26s - 111ms/st - loss: 0.6761 - acc: 0.8922 - ETA: 23s - 100ms/st - loss: 0.6516 - acc: 0.8964 - ETA: 21s - 96ms/step - loss: 0.6487 - acc: 0.8992 - ETA: 19s - 93ms/ste - loss: 0.5875 - acc: 0.9025 - ETA: 18s - 93ms/ste - loss: 0.6593 - acc: 0.9014 - ETA: 17s - 92ms/ste - loss: 0.6173 - acc: 0.9019 - ETA: 16s - 92ms/ste - loss: 0.6276 - acc: 0.9042 - ETA: 15s - 91ms/ste - loss: 0.6522 - acc: 0.9040 - ETA: 14s - 91ms/ste - loss: 0.6234 - acc: 0.9043 - ETA: 13s - 91ms/ste - loss: 0.6545 - acc: 0.9034 - ETA: 12s - 91ms/ste - loss: 0.6372 - acc: 0.9039 - ETA: 11s - 91ms/ste - loss: 0.6466 - acc: 0.9034 - ETA: 11s - 91ms/ste - loss: 0.6569 - acc: 0.9046 - ETA: 10s - 91ms/ste - loss: 0.6422 - acc: 0.9048 - ETA: 9s - 91ms/ste - loss: 0.5761 - acc: 0.9046 - ETA: 8s - 91ms/st - loss: 0.6344 - acc: 0.9045 - ETA: 7s - 91ms/st - loss: 0.6276 - acc: 0.9045 - ETA: 6s - 91ms/st - loss: 0.6212 - acc: 0.9049 - ETA: 5s - 91ms/st - loss: 0.6558 - acc: 0.9045 - ETA: 4s - 91ms/st - loss: 0.6345 - acc: 0.9042 - ETA: 3s - 92ms/st - loss: 0.5936 - acc: 0.9049 - ETA: 2s - 91ms/st - loss: 0.5909 - acc: 0.9059 - ETA: 1s - 91ms/st - loss: 0.6611 - acc: 0.9056 - ETA: 1s - 91ms/st - loss: 0.6402 - acc: 0.9053 - ETA: 0s - 91ms/st - loss: 0.6461 - acc: 0.9053 - 90ms/step          \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.7643 - acc: 0.7180 - ETA: 1s - 85ms/st - loss: 0.7766 - acc: 0.7137 - ETA: 0s - 72ms/st - loss: 0.8159 - acc: 0.7172 - ETA: 0s - 62ms/st - loss: 0.8821 - acc: 0.7155 - 61ms/step          \n",
      "Eval samples: 3968\n",
      "Epoch 7/20\n",
      "step 251/251 [==============================] - loss: 0.6485 - acc: 0.8969 - ETA: 26s - 110ms/st - loss: 0.6856 - acc: 0.8918 - ETA: 22s - 98ms/step - loss: 0.6512 - acc: 0.8982 - ETA: 20s - 95ms/ste - loss: 0.6308 - acc: 0.9018 - ETA: 19s - 94ms/ste - loss: 0.5851 - acc: 0.9064 - ETA: 18s - 94ms/ste - loss: 0.6592 - acc: 0.9049 - ETA: 17s - 93ms/ste - loss: 0.6227 - acc: 0.9049 - ETA: 16s - 93ms/ste - loss: 0.6267 - acc: 0.9072 - ETA: 15s - 92ms/ste - loss: 0.6463 - acc: 0.9068 - ETA: 14s - 92ms/ste - loss: 0.6305 - acc: 0.9069 - ETA: 13s - 92ms/ste - loss: 0.6388 - acc: 0.9061 - ETA: 12s - 92ms/ste - loss: 0.6401 - acc: 0.9068 - ETA: 12s - 92ms/ste - loss: 0.6481 - acc: 0.9063 - ETA: 11s - 92ms/ste - loss: 0.6463 - acc: 0.9074 - ETA: 10s - 92ms/ste - loss: 0.6516 - acc: 0.9078 - ETA: 9s - 92ms/ste - loss: 0.5789 - acc: 0.9072 - ETA: 8s - 92ms/st - loss: 0.6465 - acc: 0.9062 - ETA: 7s - 92ms/st - loss: 0.6301 - acc: 0.9061 - ETA: 6s - 92ms/st - loss: 0.6008 - acc: 0.9067 - ETA: 5s - 91ms/st - loss: 0.6736 - acc: 0.9064 - ETA: 4s - 91ms/st - loss: 0.6439 - acc: 0.9061 - ETA: 3s - 91ms/st - loss: 0.6081 - acc: 0.9060 - ETA: 2s - 91ms/st - loss: 0.5995 - acc: 0.9067 - ETA: 1s - 91ms/st - loss: 0.6696 - acc: 0.9061 - ETA: 0s - 91ms/st - loss: 0.6298 - acc: 0.9067 - ETA: 0s - 90ms/st - loss: 0.6623 - acc: 0.9066 - 90ms/step          \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.7625 - acc: 0.7156 - ETA: 1s - 85ms/st - loss: 0.7665 - acc: 0.7129 - ETA: 0s - 72ms/st - loss: 0.7667 - acc: 0.7201 - ETA: 0s - 62ms/st - loss: 0.8523 - acc: 0.7190 - 61ms/step          \n",
      "Eval samples: 3968\n",
      "Epoch 8/20\n",
      "step 251/251 [==============================] - loss: 0.6508 - acc: 0.9039 - ETA: 26s - 111ms/st - loss: 0.6851 - acc: 0.9004 - ETA: 22s - 100ms/st - loss: 0.6386 - acc: 0.9021 - ETA: 20s - 94ms/step - loss: 0.6430 - acc: 0.9041 - ETA: 19s - 92ms/ste - loss: 0.5928 - acc: 0.9073 - ETA: 18s - 92ms/ste - loss: 0.6521 - acc: 0.9064 - ETA: 17s - 92ms/ste - loss: 0.6217 - acc: 0.9058 - ETA: 16s - 92ms/ste - loss: 0.6212 - acc: 0.9073 - ETA: 15s - 91ms/ste - loss: 0.6752 - acc: 0.9066 - ETA: 14s - 91ms/ste - loss: 0.6332 - acc: 0.9071 - ETA: 13s - 91ms/ste - loss: 0.6445 - acc: 0.9062 - ETA: 12s - 91ms/ste - loss: 0.6475 - acc: 0.9068 - ETA: 11s - 91ms/ste - loss: 0.6497 - acc: 0.9061 - ETA: 11s - 91ms/ste - loss: 0.6693 - acc: 0.9065 - ETA: 10s - 91ms/ste - loss: 0.6486 - acc: 0.9067 - ETA: 9s - 91ms/ste - loss: 0.5972 - acc: 0.9067 - ETA: 8s - 91ms/st - loss: 0.6556 - acc: 0.9065 - ETA: 7s - 91ms/st - loss: 0.6181 - acc: 0.9068 - ETA: 6s - 91ms/st - loss: 0.6095 - acc: 0.9070 - ETA: 5s - 91ms/st - loss: 0.6487 - acc: 0.9070 - ETA: 4s - 90ms/st - loss: 0.6600 - acc: 0.9065 - ETA: 3s - 91ms/st - loss: 0.5827 - acc: 0.9072 - ETA: 2s - 91ms/st - loss: 0.6034 - acc: 0.9080 - ETA: 1s - 91ms/st - loss: 0.6602 - acc: 0.9074 - ETA: 1s - 91ms/st - loss: 0.6300 - acc: 0.9077 - ETA: 0s - 91ms/st - loss: 0.6540 - acc: 0.9077 - 90ms/step          \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.8019 - acc: 0.7195 - ETA: 1s - 91ms/st - loss: 0.7799 - acc: 0.7109 - ETA: 0s - 76ms/st - loss: 0.7988 - acc: 0.7219 - ETA: 0s - 65ms/st - loss: 0.8263 - acc: 0.7218 - 64ms/step          \n",
      "Eval samples: 3968\n",
      "Epoch 9/20\n",
      "step 251/251 [==============================] - loss: 0.6466 - acc: 0.9062 - ETA: 26s - 111ms/st - loss: 0.6780 - acc: 0.9000 - ETA: 22s - 98ms/step - loss: 0.6325 - acc: 0.9008 - ETA: 20s - 94ms/ste - loss: 0.6396 - acc: 0.9031 - ETA: 19s - 92ms/ste - loss: 0.5974 - acc: 0.9064 - ETA: 18s - 93ms/ste - loss: 0.6403 - acc: 0.9059 - ETA: 17s - 92ms/ste - loss: 0.6304 - acc: 0.9052 - ETA: 16s - 92ms/ste - loss: 0.6186 - acc: 0.9066 - ETA: 15s - 92ms/ste - loss: 0.6667 - acc: 0.9055 - ETA: 14s - 92ms/ste - loss: 0.6254 - acc: 0.9055 - ETA: 13s - 92ms/ste - loss: 0.6537 - acc: 0.9052 - ETA: 12s - 92ms/ste - loss: 0.6583 - acc: 0.9051 - ETA: 12s - 92ms/ste - loss: 0.6792 - acc: 0.9041 - ETA: 11s - 92ms/ste - loss: 0.6677 - acc: 0.9051 - ETA: 10s - 91ms/ste - loss: 0.6654 - acc: 0.9048 - ETA: 9s - 91ms/ste - loss: 0.5978 - acc: 0.9053 - ETA: 8s - 91ms/st - loss: 0.6284 - acc: 0.9053 - ETA: 7s - 92ms/st - loss: 0.6286 - acc: 0.9053 - ETA: 6s - 92ms/st - loss: 0.6105 - acc: 0.9059 - ETA: 5s - 92ms/st - loss: 0.6610 - acc: 0.9057 - ETA: 4s - 92ms/st - loss: 0.6361 - acc: 0.9058 - ETA: 3s - 92ms/st - loss: 0.5847 - acc: 0.9070 - ETA: 2s - 92ms/st - loss: 0.5906 - acc: 0.9078 - ETA: 1s - 92ms/st - loss: 0.6705 - acc: 0.9072 - ETA: 1s - 91ms/st - loss: 0.6359 - acc: 0.9073 - ETA: 0s - 91ms/st - loss: 0.6870 - acc: 0.9071 - 91ms/step          \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.7878 - acc: 0.7203 - ETA: 1s - 84ms/st - loss: 0.7611 - acc: 0.7160 - ETA: 0s - 72ms/st - loss: 0.7798 - acc: 0.7242 - ETA: 0s - 63ms/st - loss: 0.8186 - acc: 0.7248 - 61ms/step          \n",
      "Eval samples: 3968\n",
      "Epoch 10/20\n",
      "step 251/251 [==============================] - loss: 0.6437 - acc: 0.9086 - ETA: 26s - 111ms/st - loss: 0.6721 - acc: 0.8996 - ETA: 23s - 100ms/st - loss: 0.6406 - acc: 0.9010 - ETA: 20s - 95ms/step - loss: 0.6394 - acc: 0.9039 - ETA: 19s - 92ms/ste - loss: 0.5998 - acc: 0.9069 - ETA: 18s - 92ms/ste - loss: 0.6380 - acc: 0.9070 - ETA: 17s - 91ms/ste - loss: 0.6254 - acc: 0.9071 - ETA: 16s - 90ms/ste - loss: 0.6152 - acc: 0.9086 - ETA: 15s - 90ms/ste - loss: 0.6530 - acc: 0.9072 - ETA: 14s - 89ms/ste - loss: 0.6297 - acc: 0.9070 - ETA: 13s - 89ms/ste - loss: 0.6661 - acc: 0.9058 - ETA: 12s - 89ms/ste - loss: 0.6601 - acc: 0.9062 - ETA: 11s - 89ms/ste - loss: 0.6553 - acc: 0.9053 - ETA: 10s - 89ms/ste - loss: 0.6799 - acc: 0.9057 - ETA: 9s - 89ms/ste - loss: 0.6298 - acc: 0.9057 - ETA: 9s - 89ms/st - loss: 0.5762 - acc: 0.9061 - ETA: 8s - 89ms/st - loss: 0.6389 - acc: 0.9057 - ETA: 7s - 89ms/st - loss: 0.6731 - acc: 0.9056 - ETA: 6s - 90ms/st - loss: 0.6147 - acc: 0.9057 - ETA: 5s - 89ms/st - loss: 0.6766 - acc: 0.9055 - ETA: 4s - 89ms/st - loss: 0.6279 - acc: 0.9054 - ETA: 3s - 90ms/st - loss: 0.6018 - acc: 0.9063 - ETA: 2s - 90ms/st - loss: 0.6072 - acc: 0.9070 - ETA: 1s - 90ms/st - loss: 0.6691 - acc: 0.9064 - ETA: 0s - 90ms/st - loss: 0.6220 - acc: 0.9071 - ETA: 0s - 89ms/st - loss: 0.6364 - acc: 0.9071 - 89ms/step          \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.8134 - acc: 0.7148 - ETA: 1s - 84ms/st - loss: 0.7657 - acc: 0.7129 - ETA: 0s - 71ms/st - loss: 0.7917 - acc: 0.7182 - ETA: 0s - 62ms/st - loss: 0.8366 - acc: 0.7180 - 61ms/step          \n",
      "Eval samples: 3968\n",
      "Epoch 11/20\n",
      "step 251/251 [==============================] - loss: 0.6633 - acc: 0.8891 - ETA: 27s - 112ms/st - loss: 0.6712 - acc: 0.8883 - ETA: 23s - 100ms/st - loss: 0.6496 - acc: 0.8948 - ETA: 21s - 95ms/step - loss: 0.6296 - acc: 0.8988 - ETA: 19s - 93ms/ste - loss: 0.5920 - acc: 0.9027 - ETA: 18s - 93ms/ste - loss: 0.6395 - acc: 0.9030 - ETA: 17s - 93ms/ste - loss: 0.6264 - acc: 0.9044 - ETA: 16s - 93ms/ste - loss: 0.6475 - acc: 0.9062 - ETA: 15s - 93ms/ste - loss: 0.6724 - acc: 0.9054 - ETA: 14s - 92ms/ste - loss: 0.6236 - acc: 0.9058 - ETA: 13s - 93ms/ste - loss: 0.6375 - acc: 0.9055 - ETA: 13s - 93ms/ste - loss: 0.6402 - acc: 0.9060 - ETA: 12s - 93ms/ste - loss: 0.6634 - acc: 0.9058 - ETA: 11s - 93ms/ste - loss: 0.6602 - acc: 0.9068 - ETA: 10s - 93ms/ste - loss: 0.6493 - acc: 0.9055 - ETA: 9s - 93ms/ste - loss: 0.5751 - acc: 0.9058 - ETA: 8s - 92ms/st - loss: 0.6226 - acc: 0.9061 - ETA: 7s - 93ms/st - loss: 0.6316 - acc: 0.9066 - ETA: 6s - 93ms/st - loss: 0.6152 - acc: 0.9068 - ETA: 5s - 93ms/st - loss: 0.6749 - acc: 0.9062 - ETA: 4s - 93ms/st - loss: 0.6242 - acc: 0.9067 - ETA: 3s - 93ms/st - loss: 0.5956 - acc: 0.9073 - ETA: 2s - 93ms/st - loss: 0.6276 - acc: 0.9075 - ETA: 1s - 92ms/st - loss: 0.6844 - acc: 0.9069 - ETA: 1s - 92ms/st - loss: 0.6253 - acc: 0.9075 - ETA: 0s - 91ms/st - loss: 0.6636 - acc: 0.9074 - 91ms/step          \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.7703 - acc: 0.7117 - ETA: 1s - 84ms/st - loss: 0.7776 - acc: 0.7023 - ETA: 0s - 71ms/st - loss: 0.7955 - acc: 0.7109 - ETA: 0s - 61ms/st - loss: 0.8333 - acc: 0.7107 - 60ms/step          \n",
      "Eval samples: 3968\n",
      "Epoch 12/20\n",
      "step 251/251 [==============================] - loss: 0.6451 - acc: 0.9055 - ETA: 26s - 111ms/st - loss: 0.6838 - acc: 0.8996 - ETA: 22s - 99ms/step - loss: 0.6249 - acc: 0.9034 - ETA: 21s - 96ms/ste - loss: 0.6307 - acc: 0.9055 - ETA: 19s - 93ms/ste - loss: 0.6036 - acc: 0.9095 - ETA: 18s - 93ms/ste - loss: 0.6495 - acc: 0.9089 - ETA: 17s - 92ms/ste - loss: 0.6220 - acc: 0.9090 - ETA: 16s - 92ms/ste - loss: 0.6214 - acc: 0.9105 - ETA: 15s - 91ms/ste - loss: 0.6631 - acc: 0.9100 - ETA: 14s - 91ms/ste - loss: 0.6350 - acc: 0.9102 - ETA: 13s - 90ms/ste - loss: 0.6376 - acc: 0.9091 - ETA: 12s - 90ms/ste - loss: 0.6465 - acc: 0.9096 - ETA: 11s - 90ms/ste - loss: 0.6534 - acc: 0.9088 - ETA: 10s - 90ms/ste - loss: 0.6624 - acc: 0.9098 - ETA: 9s - 90ms/ste - loss: 0.6512 - acc: 0.9099 - ETA: 9s - 90ms/st - loss: 0.5863 - acc: 0.9099 - ETA: 8s - 90ms/st - loss: 0.6374 - acc: 0.9098 - ETA: 7s - 90ms/st - loss: 0.6287 - acc: 0.9100 - ETA: 6s - 90ms/st - loss: 0.6191 - acc: 0.9103 - ETA: 5s - 90ms/st - loss: 0.6647 - acc: 0.9101 - ETA: 4s - 90ms/st - loss: 0.6462 - acc: 0.9097 - ETA: 3s - 90ms/st - loss: 0.5902 - acc: 0.9101 - ETA: 2s - 90ms/st - loss: 0.6030 - acc: 0.9101 - ETA: 1s - 90ms/st - loss: 0.6766 - acc: 0.9094 - ETA: 0s - 90ms/st - loss: 0.6411 - acc: 0.9095 - ETA: 0s - 89ms/st - loss: 0.6566 - acc: 0.9095 - 89ms/step          \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.7503 - acc: 0.7195 - ETA: 1s - 87ms/st - loss: 0.7727 - acc: 0.7098 - ETA: 0s - 73ms/st - loss: 0.7865 - acc: 0.7193 - ETA: 0s - 63ms/st - loss: 0.8156 - acc: 0.7198 - 61ms/step          \n",
      "Eval samples: 3968\n",
      "Epoch 13/20\n",
      "step 251/251 [==============================] - loss: 0.6513 - acc: 0.9039 - ETA: 27s - 113ms/st - loss: 0.6882 - acc: 0.8973 - ETA: 23s - 100ms/st - loss: 0.6385 - acc: 0.9021 - ETA: 21s - 95ms/step - loss: 0.6413 - acc: 0.9051 - ETA: 19s - 93ms/ste - loss: 0.6042 - acc: 0.9091 - ETA: 18s - 93ms/ste - loss: 0.6538 - acc: 0.9072 - ETA: 17s - 92ms/ste - loss: 0.6291 - acc: 0.9069 - ETA: 16s - 92ms/ste - loss: 0.6216 - acc: 0.9083 - ETA: 15s - 92ms/ste - loss: 0.6531 - acc: 0.9076 - ETA: 14s - 92ms/ste - loss: 0.6663 - acc: 0.9080 - ETA: 13s - 92ms/ste - loss: 0.6375 - acc: 0.9074 - ETA: 12s - 92ms/ste - loss: 0.6303 - acc: 0.9083 - ETA: 12s - 92ms/ste - loss: 0.6673 - acc: 0.9066 - ETA: 11s - 92ms/ste - loss: 0.6611 - acc: 0.9072 - ETA: 10s - 92ms/ste - loss: 0.6339 - acc: 0.9072 - ETA: 9s - 91ms/ste - loss: 0.5865 - acc: 0.9076 - ETA: 8s - 91ms/st - loss: 0.6323 - acc: 0.9074 - ETA: 7s - 91ms/st - loss: 0.6322 - acc: 0.9075 - ETA: 6s - 91ms/st - loss: 0.6223 - acc: 0.9080 - ETA: 5s - 91ms/st - loss: 0.6557 - acc: 0.9081 - ETA: 4s - 91ms/st - loss: 0.6218 - acc: 0.9081 - ETA: 3s - 91ms/st - loss: 0.5905 - acc: 0.9090 - ETA: 2s - 91ms/st - loss: 0.5793 - acc: 0.9098 - ETA: 1s - 91ms/st - loss: 0.6733 - acc: 0.9095 - ETA: 0s - 90ms/st - loss: 0.6219 - acc: 0.9101 - ETA: 0s - 90ms/st - loss: 0.6448 - acc: 0.9101 - 90ms/step          \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.7586 - acc: 0.7133 - ETA: 1s - 85ms/st - loss: 0.7912 - acc: 0.7105 - ETA: 0s - 72ms/st - loss: 0.7482 - acc: 0.7201 - ETA: 0s - 62ms/st - loss: 0.8291 - acc: 0.7203 - 60ms/step          \n",
      "Eval samples: 3968\n",
      "Epoch 14/20\n",
      "step 251/251 [==============================] - loss: 0.6485 - acc: 0.9023 - ETA: 26s - 111ms/st - loss: 0.6686 - acc: 0.9004 - ETA: 23s - 101ms/st - loss: 0.6368 - acc: 0.9060 - ETA: 21s - 97ms/step - loss: 0.6303 - acc: 0.9084 - ETA: 20s - 95ms/ste - loss: 0.5834 - acc: 0.9117 - ETA: 19s - 95ms/ste - loss: 0.6392 - acc: 0.9104 - ETA: 17s - 94ms/ste - loss: 0.6185 - acc: 0.9108 - ETA: 16s - 93ms/ste - loss: 0.6192 - acc: 0.9122 - ETA: 15s - 93ms/ste - loss: 0.6536 - acc: 0.9116 - ETA: 14s - 92ms/ste - loss: 0.6346 - acc: 0.9122 - ETA: 13s - 92ms/ste - loss: 0.6374 - acc: 0.9116 - ETA: 12s - 92ms/ste - loss: 0.6243 - acc: 0.9124 - ETA: 11s - 92ms/ste - loss: 0.6389 - acc: 0.9124 - ETA: 11s - 92ms/ste - loss: 0.6683 - acc: 0.9132 - ETA: 10s - 92ms/ste - loss: 0.6382 - acc: 0.9133 - ETA: 9s - 92ms/ste - loss: 0.5702 - acc: 0.9138 - ETA: 8s - 92ms/st - loss: 0.6322 - acc: 0.9135 - ETA: 7s - 92ms/st - loss: 0.6313 - acc: 0.9130 - ETA: 6s - 92ms/st - loss: 0.6128 - acc: 0.9134 - ETA: 5s - 91ms/st - loss: 0.6369 - acc: 0.9136 - ETA: 4s - 91ms/st - loss: 0.6224 - acc: 0.9135 - ETA: 3s - 91ms/st - loss: 0.5948 - acc: 0.9146 - ETA: 2s - 91ms/st - loss: 0.5805 - acc: 0.9153 - ETA: 1s - 91ms/st - loss: 0.6722 - acc: 0.9147 - ETA: 1s - 91ms/st - loss: 0.6320 - acc: 0.9149 - ETA: 0s - 91ms/st - loss: 0.6434 - acc: 0.9149 - 91ms/step          \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.7885 - acc: 0.7016 - ETA: 1s - 85ms/st - loss: 0.7858 - acc: 0.7051 - ETA: 0s - 72ms/st - loss: 0.7694 - acc: 0.7141 - ETA: 0s - 62ms/st - loss: 0.8205 - acc: 0.7147 - 61ms/step          \n",
      "Eval samples: 3968\n",
      "Epoch 15/20\n",
      "step 251/251 [==============================] - loss: 0.6514 - acc: 0.9102 - ETA: 26s - 110ms/st - loss: 0.6806 - acc: 0.9047 - ETA: 22s - 99ms/step - loss: 0.6320 - acc: 0.9099 - ETA: 20s - 95ms/ste - loss: 0.6301 - acc: 0.9117 - ETA: 19s - 93ms/ste - loss: 0.5964 - acc: 0.9145 - ETA: 18s - 92ms/ste - loss: 0.6365 - acc: 0.9135 - ETA: 17s - 91ms/ste - loss: 0.6276 - acc: 0.9133 - ETA: 16s - 91ms/ste - loss: 0.6231 - acc: 0.9146 - ETA: 15s - 90ms/ste - loss: 0.6455 - acc: 0.9136 - ETA: 14s - 90ms/ste - loss: 0.6258 - acc: 0.9137 - ETA: 13s - 90ms/ste - loss: 0.6370 - acc: 0.9131 - ETA: 12s - 90ms/ste - loss: 0.6421 - acc: 0.9135 - ETA: 11s - 90ms/ste - loss: 0.6474 - acc: 0.9129 - ETA: 10s - 90ms/ste - loss: 0.6527 - acc: 0.9136 - ETA: 9s - 90ms/ste - loss: 0.6401 - acc: 0.9134 - ETA: 9s - 90ms/st - loss: 0.5949 - acc: 0.9135 - ETA: 8s - 90ms/st - loss: 0.6212 - acc: 0.9134 - ETA: 7s - 90ms/st - loss: 0.6343 - acc: 0.9136 - ETA: 6s - 90ms/st - loss: 0.6160 - acc: 0.9140 - ETA: 5s - 90ms/st - loss: 0.6493 - acc: 0.9139 - ETA: 4s - 90ms/st - loss: 0.6219 - acc: 0.9138 - ETA: 3s - 90ms/st - loss: 0.5838 - acc: 0.9148 - ETA: 2s - 91ms/st - loss: 0.5778 - acc: 0.9154 - ETA: 1s - 90ms/st - loss: 0.6705 - acc: 0.9146 - ETA: 0s - 90ms/st - loss: 0.6309 - acc: 0.9150 - ETA: 0s - 90ms/st - loss: 0.6590 - acc: 0.9149 - 90ms/step          \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.8020 - acc: 0.7047 - ETA: 1s - 85ms/st - loss: 0.7631 - acc: 0.7188 - ETA: 0s - 73ms/st - loss: 0.7822 - acc: 0.7263 - ETA: 0s - 62ms/st - loss: 0.8062 - acc: 0.7268 - 61ms/step          \n",
      "Eval samples: 3968\n",
      "Epoch 16/20\n",
      "step 251/251 [==============================] - loss: 0.6555 - acc: 0.9094 - ETA: 27s - 113ms/st - loss: 0.6697 - acc: 0.9027 - ETA: 22s - 100ms/st - loss: 0.6372 - acc: 0.9068 - ETA: 21s - 95ms/step - loss: 0.6297 - acc: 0.9090 - ETA: 19s - 94ms/ste - loss: 0.5911 - acc: 0.9137 - ETA: 18s - 94ms/ste - loss: 0.6538 - acc: 0.9113 - ETA: 18s - 94ms/ste - loss: 0.6209 - acc: 0.9112 - ETA: 17s - 94ms/ste - loss: 0.6305 - acc: 0.9116 - ETA: 16s - 94ms/ste - loss: 0.6522 - acc: 0.9110 - ETA: 15s - 94ms/ste - loss: 0.6143 - acc: 0.9113 - ETA: 14s - 93ms/ste - loss: 0.6408 - acc: 0.9102 - ETA: 13s - 93ms/ste - loss: 0.6289 - acc: 0.9109 - ETA: 12s - 93ms/ste - loss: 0.6458 - acc: 0.9109 - ETA: 11s - 93ms/ste - loss: 0.6559 - acc: 0.9119 - ETA: 10s - 93ms/ste - loss: 0.6491 - acc: 0.9111 - ETA: 9s - 92ms/ste - loss: 0.5697 - acc: 0.9113 - ETA: 8s - 93ms/st - loss: 0.6218 - acc: 0.9112 - ETA: 7s - 93ms/st - loss: 0.6276 - acc: 0.9114 - ETA: 6s - 93ms/st - loss: 0.6594 - acc: 0.9113 - ETA: 5s - 92ms/st - loss: 0.6693 - acc: 0.9109 - ETA: 4s - 92ms/st - loss: 0.6451 - acc: 0.9109 - ETA: 3s - 92ms/st - loss: 0.5912 - acc: 0.9118 - ETA: 2s - 92ms/st - loss: 0.5781 - acc: 0.9124 - ETA: 1s - 91ms/st - loss: 0.6667 - acc: 0.9116 - ETA: 1s - 91ms/st - loss: 0.6260 - acc: 0.9122 - ETA: 0s - 91ms/st - loss: 0.6471 - acc: 0.9122 - 90ms/step          \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.7652 - acc: 0.7109 - ETA: 1s - 91ms/st - loss: 0.7778 - acc: 0.7129 - ETA: 0s - 78ms/st - loss: 0.7607 - acc: 0.7219 - ETA: 0s - 67ms/st - loss: 0.8367 - acc: 0.7215 - 65ms/step          \n",
      "Eval samples: 3968\n",
      "Epoch 17/20\n",
      "step 251/251 [==============================] - loss: 0.6473 - acc: 0.9094 - ETA: 27s - 115ms/st - loss: 0.6640 - acc: 0.9059 - ETA: 23s - 101ms/st - loss: 0.6264 - acc: 0.9109 - ETA: 21s - 96ms/step - loss: 0.6363 - acc: 0.9121 - ETA: 19s - 94ms/ste - loss: 0.5880 - acc: 0.9158 - ETA: 18s - 93ms/ste - loss: 0.6480 - acc: 0.9146 - ETA: 17s - 92ms/ste - loss: 0.6281 - acc: 0.9135 - ETA: 16s - 92ms/ste - loss: 0.6070 - acc: 0.9145 - ETA: 15s - 91ms/ste - loss: 0.6592 - acc: 0.9139 - ETA: 14s - 91ms/ste - loss: 0.6131 - acc: 0.9142 - ETA: 13s - 91ms/ste - loss: 0.6441 - acc: 0.9135 - ETA: 12s - 91ms/ste - loss: 0.6239 - acc: 0.9139 - ETA: 11s - 91ms/ste - loss: 0.6438 - acc: 0.9136 - ETA: 11s - 91ms/ste - loss: 0.6531 - acc: 0.9143 - ETA: 10s - 91ms/ste - loss: 0.6474 - acc: 0.9144 - ETA: 9s - 91ms/ste - loss: 0.5701 - acc: 0.9148 - ETA: 8s - 91ms/st - loss: 0.6143 - acc: 0.9148 - ETA: 7s - 91ms/st - loss: 0.6258 - acc: 0.9151 - ETA: 6s - 91ms/st - loss: 0.6175 - acc: 0.9155 - ETA: 5s - 91ms/st - loss: 0.6450 - acc: 0.9154 - ETA: 4s - 91ms/st - loss: 0.6219 - acc: 0.9154 - ETA: 3s - 91ms/st - loss: 0.5982 - acc: 0.9161 - ETA: 2s - 91ms/st - loss: 0.5775 - acc: 0.9168 - ETA: 1s - 91ms/st - loss: 0.6612 - acc: 0.9162 - ETA: 0s - 91ms/st - loss: 0.6221 - acc: 0.9167 - ETA: 0s - 90ms/st - loss: 0.6408 - acc: 0.9167 - 90ms/step          \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.8066 - acc: 0.7078 - ETA: 1s - 85ms/st - loss: 0.7952 - acc: 0.7121 - ETA: 0s - 77ms/st - loss: 0.7540 - acc: 0.7229 - ETA: 0s - 68ms/st - loss: 0.8518 - acc: 0.7218 - 66ms/step          \n",
      "Eval samples: 3968\n",
      "Epoch 18/20\n",
      "step 251/251 [==============================] - loss: 0.6375 - acc: 0.9133 - ETA: 26s - 111ms/st - loss: 0.6758 - acc: 0.9074 - ETA: 22s - 99ms/step - loss: 0.6287 - acc: 0.9096 - ETA: 21s - 95ms/ste - loss: 0.6327 - acc: 0.9121 - ETA: 19s - 94ms/ste - loss: 0.5851 - acc: 0.9169 - ETA: 19s - 95ms/ste - loss: 0.6413 - acc: 0.9154 - ETA: 18s - 95ms/ste - loss: 0.6139 - acc: 0.9153 - ETA: 17s - 95ms/ste - loss: 0.6033 - acc: 0.9168 - ETA: 16s - 94ms/ste - loss: 0.6457 - acc: 0.9167 - ETA: 15s - 94ms/ste - loss: 0.6065 - acc: 0.9168 - ETA: 14s - 93ms/ste - loss: 0.6372 - acc: 0.9156 - ETA: 13s - 93ms/ste - loss: 0.6221 - acc: 0.9156 - ETA: 12s - 93ms/ste - loss: 0.6478 - acc: 0.9154 - ETA: 11s - 93ms/ste - loss: 0.6449 - acc: 0.9165 - ETA: 10s - 92ms/ste - loss: 0.6420 - acc: 0.9165 - ETA: 9s - 92ms/ste - loss: 0.5683 - acc: 0.9168 - ETA: 8s - 92ms/st - loss: 0.6189 - acc: 0.9167 - ETA: 7s - 92ms/st - loss: 0.6206 - acc: 0.9168 - ETA: 6s - 93ms/st - loss: 0.6350 - acc: 0.9171 - ETA: 5s - 92ms/st - loss: 0.6427 - acc: 0.9171 - ETA: 4s - 92ms/st - loss: 0.6319 - acc: 0.9169 - ETA: 3s - 92ms/st - loss: 0.5950 - acc: 0.9178 - ETA: 2s - 92ms/st - loss: 0.5754 - acc: 0.9186 - ETA: 1s - 92ms/st - loss: 0.6616 - acc: 0.9179 - ETA: 1s - 92ms/st - loss: 0.6218 - acc: 0.9183 - ETA: 0s - 91ms/st - loss: 0.6374 - acc: 0.9183 - 91ms/step          \n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.8425 - acc: 0.7185 - 61ms/step\n",
      "Eval samples: 3968\n",
      "Epoch 19/20\n",
      "step 251/251 [==============================] - loss: 0.6367 - acc: 0.9193 - 90ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.8361 - acc: 0.7223 - 61ms/step\n",
      "Eval samples: 3968\n",
      "Epoch 20/20\n",
      "step 251/251 [==============================] - loss: 0.6447 - acc: 0.9195 - 92ms/step\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 31/31 [==============================] - loss: 0.8373 - acc: 0.7145 - 61ms/step\n",
      "Eval samples: 3968\n",
      "save checkpoint at /home/aistudio/checkpoints/final\n"
     ]
    }
   ],
   "source": [
    "model.fit(train_loader, dev_loader, epochs=20, save_dir='./checkpoints', save_freq=20, verbose=1, callbacks=callback)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 10/31 - loss: 0.7662 - acc: 0.7250 - 92ms/step\n",
      "step 20/31 - loss: 0.8031 - acc: 0.7184 - 76ms/step\n",
      "step 30/31 - loss: 0.7543 - acc: 0.7247 - 65ms/step\n",
      "step 31/31 - loss: 0.8528 - acc: 0.7238 - 64ms/step\n",
      "Eval samples: 3968\n",
      "Final test acc: 0.72379\n"
     ]
    }
   ],
   "source": [
    "results = model.evaluate(dev_loader)\r\n",
    "print(\"Final test acc: %.5f\" % results['acc'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [],
   "source": [
    "label_map = {0: 'negative', 1: 'medium', 2: 'positive'}\r\n",
    "results = model.predict(test_loader, batch_size=128)[0]\r\n",
    "predictions = []\r\n",
    "\r\n",
    "for batch_probs in results:\r\n",
    "    # Map each predicted index to its class label\r\n",
    "    idx = np.argmax(batch_probs, axis=-1)\r\n",
    "    idx = idx.tolist()\r\n",
    "    labels = [label_map[i] for i in idx]\r\n",
    "    predictions.extend(labels)\r\n",
    "\r\n",
    "# Inspect the predicted labels for the first 10 test samples\r\n",
    "for idx, data in enumerate(test_ds.data[:10]):\r\n",
    "    print('Data: {} \\t Label: {}'.format(data[0], predictions[idx]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Using the built-in datasets**\n",
    "\n",
    "Reference: https://github.com/PaddlePaddle/models/blob/develop/PaddleNLP/docs/datasets.md\n",
    "\n",
    "It covers a wide range of datasets:\n",
    "\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/09493b1595004a878c0df02d2eecc9c29d9feff1503946768888674a68782d08)\n",
    "\n",
    "The official documentation includes examples of how to use the built-in datasets.\n",
    "\n",
    "The concrete API available for each dataset can also be found on GitHub:\n",
    "\n",
    "https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/paddlenlp/datasets\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [],
   "source": [
    "from paddlenlp.datasets import ChnSentiCorp\r\n",
    "\r\n",
    "train_ds, dev_ds, test_ds = ChnSentiCorp.get_datasets(['train', 'dev', 'test'])\r\n",
    "\r\n",
    "label_list = train_ds.get_labels()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# More PaddleNLP projects\n",
    "\n",
    " - [Sentiment analysis with seq2vec using PaddleNLP built-in datasets](https://aistudio.baidu.com/aistudio/projectdetail/1283423)\n",
    " - [How to fine-tune a pretrained model on downstream tasks](https://aistudio.baidu.com/aistudio/projectdetail/1294333)\n",
    " - [Express waybill information extraction with a BiGRU-CRF model](https://aistudio.baidu.com/aistudio/projectdetail/1317771)\n",
    " - [Improving waybill information extraction with the pretrained model ERNIE](https://aistudio.baidu.com/aistudio/projectdetail/1329361)\n",
    " - [Automatic couplet generation with a Seq2Seq model](https://aistudio.baidu.com/aistudio/projectdetail/1321118)\n",
    " - [Intelligent poetry writing with the pretrained model ERNIE-GEN](https://aistudio.baidu.com/aistudio/projectdetail/1339888)\n",
    " - [Forecasting COVID-19 case counts with a TCN network](https://aistudio.baidu.com/aistudio/projectdetail/1290873)\n",
    " - [Reading comprehension with a pretrained model](https://aistudio.baidu.com/aistudio/projectdetail/1339612)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python [conda env:paddlepaddle]",
   "language": "python",
   "name": "conda-env-paddlepaddle-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
