{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 文本分类(Text classification)\n",
    "文本分类任务是将一句话或一段话划分到某个具体的类别。比如垃圾邮件识别，文本情绪分类等。\n",
    "\n",
    "Example::   \n",
    "1,商务大床房，房间很大，床有2M宽，整体感觉经济实惠不错!\n",
    "\n",
    "\n",
    "其中开头的1是只这条评论的标签，表示是正面的情绪。我们将使用到的数据可以通过http://dbcloud.irocn.cn:8989/api/public/dl/dataset/chn_senti_corp.zip 下载并解压，当然也可以通过fastNLP自动下载该数据。\n",
    "\n",
    "数据中的内容如下图所示。接下来，我们将用fastNLP在这个数据上训练一个分类网络。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![jupyter](./cn_cls_example.png)"
   ]
  },
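  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal illustration of the layout described above (a sketch added for clarity; note that the files on disk are actually tab-separated, as the manual-loading section later in this tutorial shows), the label can be split off the comma-separated example line like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Split the example line shown above into its label and text parts\n",
    "line = '1,商务大床房，房间很大，床有2M宽，整体感觉经济实惠不错!'\n",
    "label, text = line.split(',', 1)\n",
    "print(label)  # '1' -> positive sentiment\n",
    "print(text)"
   ]
  },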
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 步骤\n",
    "一共有以下的几个步骤  \n",
    "(1) 读取数据  \n",
    "(2) 预处理数据  \n",
    "(3) 选择预训练词向量  \n",
    "(4) 创建模型  \n",
    "(5) 训练模型  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (1) 读取数据\n",
    "fastNLP提供多种数据的自动下载与自动加载功能，对于这里我们要用到的数据，我们可以用\\ref{Loader}自动下载并加载该数据。更多有关Loader的使用可以参考\\ref{Loader}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from fastNLP.io import ChnSentiCorpLoader\n",
    "\n",
    "loader = ChnSentiCorpLoader()  # 初始化一个中文情感分类的loader\n",
    "data_dir = loader.download()  # 这一行代码将自动下载数据到默认的缓存地址, 并将该地址返回\n",
    "data_bundle = loader.load(data_dir)  # 这一行代码将从{data_dir}处读取数据至DataBundle"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "DataBundle的相关介绍，可以参考\\ref{}。我们可以打印该data_bundle的基本信息。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(data_bundle)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "可以看出，该data_bundle中一个含有三个\\ref{DataSet}。通过下面的代码，我们可以查看DataSet的基本情况"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(data_bundle.get_dataset('train')[:2])  # 查看Train集前两个sample"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (2) 预处理数据\n",
    "在NLP任务中，预处理一般包括: (a)将一整句话切分成汉字或者词; (b)将文本转换为index  \n",
    "\n",
    "fastNLP中也提供了多种数据集的处理类，这里我们直接使用fastNLP的ChnSentiCorpPipe。更多关于Pipe的说明可以参考\\ref{Pipe}。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from fastNLP.io import ChnSentiCorpPipe\n",
    "\n",
    "pipe = ChnSentiCorpPipe()\n",
    "data_bundle = pipe.process(data_bundle)  # 所有的Pipe都实现了process()方法，且输入输出都为DataBundle类型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(data_bundle)  # 打印data_bundle，查看其变化"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "可以看到除了之前已经包含的3个\\ref{DataSet}, 还新增了两个\\ref{Vocabulary}。我们可以打印DataSet中的内容"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(data_bundle.get_dataset('train')[:2])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "新增了一列为数字列表的chars，以及变为数字的target列。可以看出这两列的名称和刚好与data_bundle中两个Vocabulary的名称是一致的，我们可以打印一下Vocabulary看一下里面的内容。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "char_vocab = data_bundle.get_vocab('chars')\n",
    "print(char_vocab)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Vocabulary是一个记录着词语与index之间映射关系的类，比如"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "index = char_vocab.to_index('选')\n",
    "print(\"'选'的index是{}\".format(index))  # 这个值与上面打印出来的第一个instance的chars的第一个index是一致的\n",
    "print(\"index:{}对应的汉字是{}\".format(index, char_vocab.to_word(index))) "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (3) 选择预训练词向量  \n",
    "由于Word2vec, Glove, Elmo, Bert等预训练模型可以增强模型的性能，所以在训练具体任务前，选择合适的预训练词向量非常重要。在fastNLP中我们提供了多种Embedding使得加载这些预训练模型的过程变得更加便捷。更多关于Embedding的说明可以参考\\ref{Embedding}。这里我们先给出一个使用word2vec的中文汉字预训练的示例，之后再给出一个使用Bert的文本分类。这里使用的预训练词向量为'cn-fastnlp-100d'，fastNLP将自动下载该embedding至本地缓存，fastNLP支持使用名字指定的Embedding以及相关说明可以参见\\ref{Embedding}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from fastNLP.embeddings import StaticEmbedding\n",
    "\n",
    "word2vec_embed = StaticEmbedding(char_vocab, model_dir_or_name='cn-char-fastnlp-100d')"
   ]
  },
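  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an optional sanity check (a sketch added here, not part of the original tutorial flow), we can embed a short, arbitrary character sequence and inspect the output shape; since the embedding is 100-dimensional, the last dimension should be 100."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# Embeddings accept a LongTensor of indices with shape [batch_size, max_len]\n",
    "chars = torch.LongTensor([[char_vocab.to_index(c) for c in '选择珠江花园']])\n",
    "print(word2vec_embed(chars).shape)  # expected: torch.Size([1, 6, 100])"
   ]
  },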
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (4) 创建模型\n",
    "这里我们使用到的模型结构如下所示，补图"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch import nn\n",
    "from fastNLP.modules import LSTM\n",
    "import torch\n",
    "\n",
    "# 定义模型\n",
    "class BiLSTMMaxPoolCls(nn.Module):\n",
    "    def __init__(self, embed, num_classes, hidden_size=400, num_layers=1, dropout=0.3):\n",
    "        super().__init__()\n",
    "        self.embed = embed\n",
    "        \n",
    "        self.lstm = LSTM(self.embed.embedding_dim, hidden_size=hidden_size//2, num_layers=num_layers, \n",
    "                         batch_first=True, bidirectional=True)\n",
    "        self.dropout_layer = nn.Dropout(dropout)\n",
    "        self.fc = nn.Linear(hidden_size, num_classes)\n",
    "        \n",
    "    def forward(self, chars, seq_len):  # 这里的名称必须和DataSet中相应的field对应，比如之前我们DataSet中有chars，这里就必须为chars\n",
    "        # chars:[batch_size, max_len]\n",
    "        # seq_len: [batch_size, ]\n",
    "        chars = self.embed(chars)\n",
    "        outputs, _ = self.lstm(chars, seq_len)\n",
    "        outputs = self.dropout_layer(outputs)\n",
    "        outputs, _ = torch.max(outputs, dim=1)\n",
    "        outputs = self.fc(outputs)\n",
    "        \n",
    "        return {'pred':outputs}  # [batch_size,], 返回值必须是dict类型，且预测值的key建议设为pred\n",
    "\n",
    "# 初始化模型\n",
    "model = BiLSTMMaxPoolCls(word2vec_embed, len(data_bundle.get_vocab('target')))"
   ]
  },
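  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before training, a quick smoke test (an optional sketch with random dummy inputs, not part of the original tutorial) can confirm that the forward pass runs and that pred has shape [batch_size, num_classes]; ChnSentiCorp has two classes, so num_classes should be 2 here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Dummy batch: random character indices and their sequence lengths\n",
    "dummy_chars = torch.randint(low=1, high=len(char_vocab), size=(2, 10))\n",
    "dummy_seq_len = torch.LongTensor([10, 8])\n",
    "out = model(dummy_chars, dummy_seq_len)\n",
    "print(out['pred'].shape)  # expected: torch.Size([2, 2])"
   ]
  },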
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (5) 训练模型\n",
    "fastNLP提供了Trainer对象来组织训练过程，包括完成loss计算(所以在初始化Trainer的时候需要指定loss类型)，梯度更新(所以在初始化Trainer的时候需要提供优化器optimizer)以及在验证集上的性能验证(所以在初始化时需要提供一个Metric)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from fastNLP import Trainer\n",
    "from fastNLP import CrossEntropyLoss\n",
    "from torch.optim import Adam\n",
    "from fastNLP import AccuracyMetric\n",
    "\n",
    "loss = CrossEntropyLoss()\n",
    "optimizer = Adam(model.parameters(), lr=0.001)\n",
    "metric = AccuracyMetric()\n",
    "device = 0 if torch.cuda.is_available() else 'cpu'  # 如果有gpu的话在gpu上运行，训练速度会更快\n",
    "\n",
    "trainer = Trainer(train_data=data_bundle.get_dataset('train'), model=model, loss=loss, \n",
    "                  optimizer=optimizer, batch_size=32, dev_data=data_bundle.get_dataset('dev'),\n",
    "                  metrics=metric, device=device)\n",
    "trainer.train()  # 开始训练，训练完成之后默认会加载在dev上表现最好的模型\n",
    "\n",
    "# 在测试集上测试一下模型的性能\n",
    "from fastNLP import Tester\n",
    "print(\"Performance on test is:\")\n",
    "tester = Tester(data=data_bundle.get_dataset('test'), model=model, metrics=metric, batch_size=64, device=device)\n",
    "tester.test()"
   ]
  },
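  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With training done, the sketch below (an illustrative addition; the sample sentence and device handling are assumptions, not part of the original tutorial) shows how the trained model and the vocabularies can classify a new sentence."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model.eval()\n",
    "text = '房间很大，性价比不错'  # an illustrative review\n",
    "# index the characters with the same vocabulary used during training\n",
    "chars = torch.LongTensor([[char_vocab.to_index(c) for c in text]])\n",
    "seq_len = torch.LongTensor([chars.size(1)])\n",
    "device = next(model.parameters()).device  # send inputs wherever the model lives\n",
    "with torch.no_grad():\n",
    "    pred = model(chars.to(device), seq_len.to(device))['pred']\n",
    "label = data_bundle.get_vocab('target').to_word(pred.argmax(dim=-1).item())\n",
    "print(label)"
   ]
  },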
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 使用Bert进行文本分类"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 只需要切换一下Embedding即可\n",
    "from fastNLP.embeddings import BertEmbedding\n",
    "\n",
    "# 这里为了演示一下效果，所以默认Bert不更新权重\n",
    "bert_embed = BertEmbedding(char_vocab, model_dir_or_name='cn', auto_truncate=True, requires_grad=False)\n",
    "model = BiLSTMMaxPoolCls(bert_embed, len(data_bundle.get_vocab('target')), )\n",
    "\n",
    "\n",
    "import torch\n",
    "from fastNLP import Trainer\n",
    "from fastNLP import CrossEntropyLoss\n",
    "from torch.optim import Adam\n",
    "from fastNLP import AccuracyMetric\n",
    "\n",
    "loss = CrossEntropyLoss()\n",
    "optimizer = Adam(model.parameters(), lr=2e-5)\n",
    "metric = AccuracyMetric()\n",
    "device = 0 if torch.cuda.is_available() else 'cpu'  # 如果有gpu的话在gpu上运行，训练速度会更快\n",
    "\n",
    "trainer = Trainer(train_data=data_bundle.get_dataset('train'), model=model, loss=loss, \n",
    "                  optimizer=optimizer, batch_size=16, dev_data=data_bundle.get_dataset('test'),\n",
    "                  metrics=metric, device=device, n_epochs=3)\n",
    "trainer.train()  # 开始训练，训练完成之后默认会加载在dev上表现最好的模型\n",
    "\n",
    "# 在测试集上测试一下模型的性能\n",
    "from fastNLP import Tester\n",
    "print(\"Performance on test is:\")\n",
    "tester = Tester(data=data_bundle.get_dataset('test'), model=model, metrics=metric, batch_size=64, device=device)\n",
    "tester.test()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 基于词进行文本分类"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "由于汉字中没有显示的字与字的边界，一般需要通过分词器先将句子进行分词操作。\n",
    "下面的例子演示了如何不基于fastNLP已有的数据读取、预处理代码进行文本分类。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (1) 读取数据"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "这里我们继续以之前的数据为例，但这次我们不使用fastNLP自带的数据读取代码  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from fastNLP.io import ChnSentiCorpLoader\n",
    "\n",
    "loader = ChnSentiCorpLoader()        # 初始化一个中文情感分类的loader\n",
    "data_dir = loader.download()         # 这一行代码将自动下载数据到默认的缓存地址, 并将该地址返回"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面我们先定义一个read_file_to_dataset的函数, 即给定一个文件路径，读取其中的内容，并返回一个DataSet。然后我们将所有的DataSet放入到DataBundle对象中来方便接下来的预处理"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from fastNLP import DataSet, Instance\n",
    "from fastNLP.io import DataBundle\n",
    "\n",
    "\n",
    "def read_file_to_dataset(fp):\n",
    "    ds = DataSet()\n",
    "    with open(fp, 'r') as f:\n",
    "        f.readline()  # 第一行是title名称，忽略掉\n",
    "        for line in f:\n",
    "            line = line.strip()\n",
    "            target, chars = line.split('\\t')\n",
    "            ins = Instance(target=target, raw_chars=chars)\n",
    "            ds.append(ins)\n",
    "    return ds\n",
    "\n",
    "data_bundle = DataBundle()\n",
    "for name in ['train.tsv', 'dev.tsv', 'test.tsv']:\n",
    "    fp = os.path.join(data_dir, name)\n",
    "    ds = read_file_to_dataset(fp)\n",
    "    data_bundle.set_dataset(name=name.split('.')[0], dataset=ds)\n",
    "\n",
    "print(data_bundle)  # 查看以下数据集的情况\n",
    "# In total 3 datasets:\n",
    "#    train has 9600 instances.\n",
    "#    dev has 1200 instances.\n",
    "#    test has 1200 instances."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (2) 数据预处理"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在这里，我们首先把句子通过 [fastHan](http://gitee.com/fastnlp/fastHan) 进行分词操作，然后创建词表，并将词语转换为序号。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from fastHan import FastHan\n",
    "from fastNLP import Vocabulary\n",
    "\n",
    "model=FastHan()\n",
    "# model.set_device('cuda')\n",
    "\n",
    "# 定义分词处理操作\n",
    "def word_seg(ins):\n",
    "    raw_chars = ins['raw_chars']\n",
    "    # 由于有些句子比较长，我们只截取前128个汉字\n",
    "    raw_words = model(raw_chars[:128], target='CWS')[0]\n",
    "    return raw_words\n",
    "\n",
    "for name, ds in data_bundle.iter_datasets():\n",
    "    # apply函数将对内部的instance依次执行word_seg操作，并把其返回值放入到raw_words这个field\n",
    "    ds.apply(word_seg, new_field_name='raw_words')\n",
    "    # 除了apply函数，fastNLP还支持apply_field, apply_more(可同时创建多个field)等操作\n",
    "    # 同时我们增加一个seq_len的field\n",
    "    ds.add_seq_len('raw_words')\n",
    "\n",
    "vocab = Vocabulary()\n",
    "\n",
    "# 对raw_words列创建词表, 建议把非训练集的dataset放在no_create_entry_dataset参数中\n",
    "# 也可以通过add_word(), add_word_lst()等建立词表，请参考http://www.fastnlp.top/docs/fastNLP/tutorials/tutorial_2_vocabulary.html\n",
    "vocab.from_dataset(data_bundle.get_dataset('train'), field_name='raw_words', \n",
    "                no_create_entry_dataset=[data_bundle.get_dataset('dev'), \n",
    "                                            data_bundle.get_dataset('test')]) \n",
    "\n",
    "# 将建立好词表的Vocabulary用于对raw_words列建立词表，并把转为序号的列存入到words列\n",
    "vocab.index_dataset(data_bundle.get_dataset('train'), data_bundle.get_dataset('dev'), \n",
    "                data_bundle.get_dataset('test'), field_name='raw_words', new_field_name='words')\n",
    "\n",
    "# 建立target的词表，target的词表一般不需要padding和unknown\n",
    "target_vocab = Vocabulary(padding=None, unknown=None) \n",
    "# 一般情况下我们可以只用训练集建立target的词表\n",
    "target_vocab.from_dataset(data_bundle.get_dataset('train'), field_name='target') \n",
    "# 如果没有传递new_field_name, 则默认覆盖原词表\n",
    "target_vocab.index_dataset(data_bundle.get_dataset('train'), data_bundle.get_dataset('dev'), \n",
    "                data_bundle.get_dataset('test'), field_name='target')\n",
    "\n",
    "# 我们可以把词表保存到data_bundle中，方便之后使用\n",
    "data_bundle.set_vocab(field_name='words', vocab=vocab)\n",
    "data_bundle.set_vocab(field_name='target', vocab=target_vocab)\n",
    "\n",
    "# 我们把words和target分别设置为input和target，这样它们才会在训练循环中被取出并自动padding, 有关这部分更多的内容参考\n",
    "#  http://www.fastnlp.top/docs/fastNLP/tutorials/tutorial_6_datasetiter.html\n",
    "data_bundle.set_target('target')\n",
    "data_bundle.set_input('words', 'seq_len')  # DataSet也有这两个接口\n",
    "# 如果某些field，您希望它被设置为target或者input，但是不希望fastNLP自动padding或需要使用特定的padding方式，请参考\n",
    "#  http://www.fastnlp.top/docs/fastNLP/fastNLP.core.dataset.html\n",
    "\n",
    "print(data_bundle.get_dataset('train')[:2])  # 我们可以看一下当前dataset的内容\n",
    "\n",
    "# 由于之后需要使用之前定义的BiLSTMMaxPoolCls模型，所以需要将words这个field修改为chars(因为该模型的forward接受chars参数)\n",
    "data_bundle.rename_field('words', 'chars')"
   ]
  },
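  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see what set_input and set_target enable, the optional sketch below (assuming fastNLP's DataSetIter, which the DataSetIter tutorial linked above describes) fetches one automatically padded batch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from fastNLP import DataSetIter\n",
    "\n",
    "# Each batch is a pair of dicts: batch_x holds the input fields, batch_y the target fields;\n",
    "# chars is padded to the longest sequence in the batch\n",
    "data_iter = DataSetIter(data_bundle.get_dataset('train'), batch_size=4)\n",
    "batch_x, batch_y = next(iter(data_iter))\n",
    "print(batch_x['chars'].shape, batch_x['seq_len'])\n",
    "print(batch_y['target'])"
   ]
  },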
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (3) 选择预训练词向量"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "这里我们选择腾讯的预训练中文词向量，可以在 [腾讯词向量](https://ai.tencent.com/ailab/nlp/en/embedding.html) 处下载并解压。这里我们不能直接使用BERT，因为BERT是基于中文字进行预训练的。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from fastNLP.embeddings import StaticEmbedding\n",
    "\n",
    "word2vec_embed = StaticEmbedding(data_bundle.get_vocab('words'), \n",
    "                                 model_dir_or_name='/path/to/Tencent_AILab_ChineseEmbedding.txt')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from fastNLP import Trainer\n",
    "from fastNLP import CrossEntropyLoss\n",
    "from torch.optim import Adam\n",
    "from fastNLP import AccuracyMetric\n",
    "\n",
    "# 初始化模型\n",
    "model = BiLSTMMaxPoolCls(word2vec_embed, len(data_bundle.get_vocab('target')))\n",
    "\n",
    "# 开始训练\n",
    "loss = CrossEntropyLoss()\n",
    "optimizer = Adam(model.parameters(), lr=0.001)\n",
    "metric = AccuracyMetric()\n",
    "device = 0 if torch.cuda.is_available() else 'cpu'  # 如果有gpu的话在gpu上运行，训练速度会更快\n",
    "\n",
    "trainer = Trainer(train_data=data_bundle.get_dataset('train'), model=model, loss=loss, \n",
    "                  optimizer=optimizer, batch_size=32, dev_data=data_bundle.get_dataset('dev'),\n",
    "                  metrics=metric, device=device)\n",
    "trainer.train()  # 开始训练，训练完成之后默认会加载在dev上表现最好的模型\n",
    "\n",
    "# 在测试集上测试一下模型的性能\n",
    "from fastNLP import Tester\n",
    "print(\"Performance on test is:\")\n",
    "tester = Tester(data=data_bundle.get_dataset('test'), model=model, metrics=metric, batch_size=64, device=device)\n",
    "tester.test()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
