{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Classifying Danmu (Bullet-Screen Comments) with Deep Learning\n",
    "\n",
    "We use PaddlePaddle, the deep-learning framework from Baidu. Its API closely mirrors PyTorch's, and you can use GPU compute for free on Baidu AI Studio.\n",
    "\n",
    "Below we fine-tune ERNIE, a BERT-style model developed by Baidu. It is essentially BERT, but instead of masking tokens at random during pretraining, it masks spans guided by a knowledge graph; the authors argue this helps the model pick up more semantic information during pretraining."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Loading the pretrained model ERNIE\n",
    "We use the paddlenlp.transformers pretrained-model library (its API is modeled closely on Hugging Face's transformers library, so usage is almost identical).\n",
    "\n",
    "Load with from_pretrained(model_name); it will automatically download the corresponding pretrained weight files.\n",
    "\n",
    "We need two objects: ernie is the model; tokenizer maps text to a sequence of tokens.\n",
    "\n",
    "The first layer inside a transformer is the Embedding layer, which maps the token sequence to continuous vectors. We won't cover much of the theory here; read up on it if you're interested."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[2023-03-03 18:24:46,739] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/ernie-1.0/ernie_v1_chn_base.pdparams\n",
      "W0303 18:24:46.747129   621 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 11.2\n",
      "W0303 18:24:46.751644   621 gpu_resources.cc:91] device: 0, cuDNN Version: 8.2.\n",
      "[2023-03-03 18:24:49,211] [    INFO] - Weights from pretrained model not used in ErnieModel: ['cls.predictions.layer_norm.weight', 'cls.predictions.decoder_bias', 'cls.predictions.transform.bias', 'cls.predictions.transform.weight', 'cls.predictions.layer_norm.bias']\n",
      "[2023-03-03 18:24:49,596] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/ernie-1.0/vocab.txt\n",
      "[2023-03-03 18:24:49,678] [    INFO] - tokenizer config file saved in /home/aistudio/.paddlenlp/models/ernie-1.0/tokenizer_config.json\n",
      "[2023-03-03 18:24:49,758] [    INFO] - Special tokens file saved in /home/aistudio/.paddlenlp/models/ernie-1.0/special_tokens_map.json\n"
     ]
    }
   ],
   "source": [
    "import paddle\n",
    "import paddlenlp as ppnlp\n",
    "from paddle import nn, Model\n",
    "from paddle.io import DataLoader\n",
    "\n",
    "ernie = ppnlp.transformers.ErnieModel.from_pretrained('ernie-1.0')\n",
    "tokenizer = ppnlp.transformers.ErnieTokenizer.from_pretrained('ernie-1.0')\n",
    "\n",
    "# Model(ernie).summary((1, 32), dtype=paddle.int64)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Define our own dataset class (a Dataset subclass), then use DataLoader to set batch_size and related options.\n",
    "\n",
    "batch_size is the number of samples used per training step. Larger batches give smoother, more stable gradient estimates, but we don't have unlimited RAM/VRAM."
   ]
  },
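  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a toy illustration (plain Python, independent of the notebook's data): batching just means slicing the list of samples into fixed-size chunks, where the last chunk may be smaller."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy illustration of batching: 10 samples, batch_size = 4.\n",
    "samples = list(range(10))\n",
    "batch_size = 4\n",
    "batches = [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]\n",
    "print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]"
   ]
  },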
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import paddle\n",
    "from paddle.io import Dataset, DataLoader\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "\n",
    "class DanmuDataset(Dataset):\n",
    "    def __init__(self, name, tokenizer, seq_len=32):\n",
    "        super().__init__()\n",
    "        self.name = name\n",
    "        self.tokenizer = tokenizer\n",
    "        df = pd.read_csv('work/{}.txt'.format(name), sep='\\t', names=['txt', 'label'])\n",
    "        self.txt = list(df.txt)\n",
    "        self.label = list(df.label)\n",
    "        self.seq_len = seq_len\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        txt = self.txt[idx]\n",
    "        lab = self.label[idx]\n",
    "        d = self.tokenizer.encode(txt, max_seq_len=self.seq_len, pad_to_max_seq_len=True)\n",
    "        txt = d['input_ids']\n",
    "        ttp = d['token_type_ids']\n",
    "        return paddle.to_tensor(txt), paddle.to_tensor(ttp), paddle.to_tensor((lab,))\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.label)\n",
    "\n",
    "train = DanmuDataset('train', tokenizer=tokenizer)\n",
    "dev = DanmuDataset('dev', tokenizer=tokenizer)\n",
    "\n",
    "train_loader = DataLoader(train, batch_size=256, shuffle=True)\n",
    "dev_loader = DataLoader(dev, batch_size=256, shuffle=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Define the model class. It's simple: we feed the text through ERNIE, then pass its pooled output vector to a linear layer that outputs a 2-dimensional vector of scores for the two classes, 0 and 1.\n",
    "\n",
    "For classification we use the cross-entropy loss, CrossEntropyLoss.\n",
    "\n",
    "The optimizer is AdamW, for which we set a learning rate and a weight decay.\n",
    "\n",
    "The learning rate is the size of the step taken at each parameter update: big steps move fast but may overshoot the optimum; small steps may converge very slowly, making little progress even after long training.\n",
    "\n",
    "Weight decay plays the role of L2 regularization and helps prevent overfitting to some extent.\n",
    "\n",
    "We won't go into the underlying theory here; read up on it if you're interested.\n",
    "\n",
    "Accuracy is the metric we will watch."
   ]
  },
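  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the loss concrete, here is a tiny cross-entropy computation in plain Python (a hypothetical 2-class example, not the library implementation): softmax turns the 2-dimensional output into probabilities, and the loss is the negative log-probability of the true class, so a confident wrong prediction is penalized heavily."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "# Toy 2-class cross-entropy: softmax over the logits, then -log(prob of true class).\n",
    "def cross_entropy(logits, label):\n",
    "    exps = [math.exp(z) for z in logits]\n",
    "    probs = [e / sum(exps) for e in exps]\n",
    "    return -math.log(probs[label])\n",
    "\n",
    "print(cross_entropy([4.0, -2.0], 0))  # confident and correct: small loss\n",
    "print(cross_entropy([4.0, -2.0], 1))  # confident and wrong: large loss"
   ]
  },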
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class GameNet(nn.Layer):\n",
    "    def __init__(self, backbone, in_features):\n",
    "        super().__init__()\n",
    "        self.backbone = backbone\n",
    "        self.linear = nn.Linear(in_features=in_features, out_features=2)\n",
    "\n",
    "    def forward(self, txt, ttp):\n",
    "        _, rep = self.backbone(txt, ttp)  # rep: pooled representation of the sequence\n",
    "        out = self.linear(rep)\n",
    "        return out  # return raw logits; CrossEntropyLoss applies softmax internally\n",
    "\n",
    "\n",
    "net = GameNet(ernie, 768)\n",
    "loss = nn.CrossEntropyLoss()\n",
    "m = Model(net)\n",
    "adam = paddle.optimizer.AdamW(learning_rate=1e-5, weight_decay=3e-4, parameters=m.parameters())\n",
    "accuracy = paddle.metric.Accuracy()\n",
    "\n",
    "m.prepare(optimizer=adam, loss=loss, metrics=accuracy)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Start training, for 10 epochs. After each epoch we evaluate on the validation set, and the model we keep in the end is the one that performs best on validation; this helps prevent overfitting."
   ]
  },
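  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The \"keep the best validation model\" idea can be sketched in plain Python: given the per-epoch validation accuracies (the numbers below are copied from this notebook's training log), pick the epoch whose checkpoint to keep."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Per-epoch validation accuracies, taken from this notebook's training log.\n",
    "dev_acc = [0.7000, 0.7625, 0.7800, 0.7925, 0.7900, 0.7950, 0.8000, 0.8050, 0.8050, 0.8150]\n",
    "best_epoch = max(range(len(dev_acc)), key=lambda e: dev_acc[e])\n",
    "print(best_epoch, dev_acc[best_epoch])  # -> 9 0.815"
   ]
  },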
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The loss value printed in the log is the current step, and the metric is the average value of previous steps.\n",
      "Epoch 1/10\n",
      "step 10/16 - loss: 0.6782 - acc: 0.5133 - 618ms/step\n",
      "step 16/16 - loss: 0.6610 - acc: 0.5557 - 519ms/step\n",
      "save checkpoint at /home/aistudio/data/0\n",
      "Eval begin...\n",
      "step 2/2 - loss: 0.6627 - acc: 0.7000 - 198ms/step\n",
      "Eval samples: 400\n",
      "Epoch 2/10\n",
      "step 10/16 - loss: 0.6211 - acc: 0.6992 - 520ms/step\n",
      "step 16/16 - loss: 0.6148 - acc: 0.7140 - 458ms/step\n",
      "save checkpoint at /home/aistudio/data/1\n",
      "Eval begin...\n",
      "step 2/2 - loss: 0.6076 - acc: 0.7625 - 196ms/step\n",
      "Eval samples: 400\n",
      "Epoch 3/10\n",
      "step 10/16 - loss: 0.5706 - acc: 0.7582 - 521ms/step\n",
      "step 16/16 - loss: 0.5650 - acc: 0.7660 - 459ms/step\n",
      "save checkpoint at /home/aistudio/data/2\n",
      "Eval begin...\n",
      "step 2/2 - loss: 0.5551 - acc: 0.7800 - 184ms/step\n",
      "Eval samples: 400\n",
      "Epoch 4/10\n",
      "step 10/16 - loss: 0.5139 - acc: 0.7895 - 519ms/step\n",
      "step 16/16 - loss: 0.5162 - acc: 0.7917 - 458ms/step\n",
      "save checkpoint at /home/aistudio/data/3\n",
      "Eval begin...\n",
      "step 2/2 - loss: 0.5342 - acc: 0.7925 - 198ms/step\n",
      "Eval samples: 400\n",
      "Epoch 5/10\n",
      "step 10/16 - loss: 0.4981 - acc: 0.8113 - 523ms/step\n",
      "step 16/16 - loss: 0.4970 - acc: 0.8123 - 460ms/step\n",
      "save checkpoint at /home/aistudio/data/4\n",
      "Eval begin...\n",
      "step 2/2 - loss: 0.5185 - acc: 0.7900 - 191ms/step\n",
      "Eval samples: 400\n",
      "Epoch 6/10\n",
      "step 10/16 - loss: 0.4637 - acc: 0.8352 - 519ms/step\n",
      "step 16/16 - loss: 0.4649 - acc: 0.8300 - 457ms/step\n",
      "save checkpoint at /home/aistudio/data/5\n",
      "Eval begin...\n",
      "step 2/2 - loss: 0.5126 - acc: 0.7950 - 182ms/step\n",
      "Eval samples: 400\n",
      "Epoch 7/10\n",
      "step 10/16 - loss: 0.4707 - acc: 0.8355 - 520ms/step\n",
      "step 16/16 - loss: 0.4675 - acc: 0.8420 - 458ms/step\n",
      "save checkpoint at /home/aistudio/data/6\n",
      "Eval begin...\n",
      "step 2/2 - loss: 0.5138 - acc: 0.8000 - 186ms/step\n",
      "Eval samples: 400\n",
      "Epoch 8/10\n",
      "step 10/16 - loss: 0.4744 - acc: 0.8562 - 520ms/step\n",
      "step 16/16 - loss: 0.4520 - acc: 0.8570 - 460ms/step\n",
      "save checkpoint at /home/aistudio/data/7\n",
      "Eval begin...\n",
      "step 2/2 - loss: 0.5079 - acc: 0.8050 - 196ms/step\n",
      "Eval samples: 400\n",
      "Epoch 9/10\n",
      "step 10/16 - loss: 0.4579 - acc: 0.8746 - 526ms/step\n",
      "step 16/16 - loss: 0.4273 - acc: 0.8710 - 463ms/step\n",
      "save checkpoint at /home/aistudio/data/8\n",
      "Eval begin...\n",
      "step 2/2 - loss: 0.5080 - acc: 0.8050 - 183ms/step\n",
      "Eval samples: 400\n",
      "Epoch 10/10\n",
      "step 10/16 - loss: 0.4072 - acc: 0.8852 - 522ms/step\n",
      "step 16/16 - loss: 0.4382 - acc: 0.8845 - 461ms/step\n",
      "save checkpoint at /home/aistudio/data/9\n",
      "Eval begin...\n",
      "step 2/2 - loss: 0.5035 - acc: 0.8150 - 199ms/step\n",
      "Eval samples: 400\n",
      "save checkpoint at /home/aistudio/data/final\n"
     ]
    }
   ],
   "source": [
    "m.fit(train_data=train_loader, eval_data=dev_loader, epochs=10, save_dir='data')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Notice that the model improves quickly at first, while by the end the gains have almost vanished.\n",
    "\n",
    "Next, we check performance on the test set."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "test = DanmuDataset('test', tokenizer=tokenizer)\n",
    "test_loader = DataLoader(test, batch_size=256, shuffle=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Eval begin...\n",
      "step 2/2 - loss: 0.5035 - acc: 0.8150 - 196ms/step\n",
      "Eval samples: 400\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'loss': [0.5035329], 'acc': 0.815}"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "m.evaluate(test_loader)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Try it yourself: you can attempt to improve the model by changing the model architecture, swapping the backbone (the pretrained model, i.e. the feature extractor), tuning the optimizer's hyperparameters, changing batch_size, and so on."
   ]
  },
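  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, once training has finished you might probe the model on new strings. Below is a hypothetical helper (the `predict` function and the example string are ours, not a PaddleNLP API); it assumes the `net` and `tokenizer` objects defined in the cells above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical single-string inference helper (assumes `net` and `tokenizer`\n",
    "# from the cells above; `predict` is our own name, not a PaddleNLP API).\n",
    "def predict(text, net, tokenizer, seq_len=32):\n",
    "    import paddle\n",
    "    net.eval()\n",
    "    d = tokenizer.encode(text, max_seq_len=seq_len, pad_to_max_seq_len=True)\n",
    "    ids = paddle.to_tensor([d['input_ids']])\n",
    "    ttp = paddle.to_tensor([d['token_type_ids']])\n",
    "    out = net(ids, ttp)\n",
    "    return int(paddle.argmax(out, axis=-1))\n",
    "\n",
    "# predict('这波操作太秀了', net, tokenizer)  # returns 0 or 1"
   ]
  },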
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
