{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "2a65c88c",
   "metadata": {},
   "source": [
    "# Task 1: AIWIN Competition Registration"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "52093b7e",
   "metadata": {},
   "source": [
    "- Registration link: http://ailab.aiwin.org.cn/competitions/68#\n",
    "- Join the competition community, run the baseline end to end, and submit a result\n",
    "- BERT resources\n",
    "    - BERT & transformers quick tour (https://huggingface.co/transformers/v3.0.2/quicktour.html)\n",
    "    - BERT worked examples (https://github.com/datawhalechina/competition-baseline/tree/master/tutorial/bert)\n",
    "    - BERT glossary (https://huggingface.co/transformers/v3.0.2/glossary.html)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "957b7ab4",
   "metadata": {},
   "source": [
    "# Task 2: BERT and NLP Task Overview"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b4eb6dfc",
   "metadata": {},
   "source": [
    "This competition covers the following four NLP tasks:\n",
    "1. Text classification\n",
    "2. Text similarity\n",
    "3. Named entity recognition\n",
    "4. Reading comprehension"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6acdfa36",
   "metadata": {},
   "source": [
    "How does BERT work?\n",
    "1. BERT is the encoder stack of the Transformer architecture, pre-trained on large amounts of text.\n",
    "2. During pre-training, 15% of the tokens are selected for masking; of those, 80% are replaced with [MASK], 10% are replaced with a random token, and 10% are left unchanged.\n",
    "3. Pre-training yields a strong language model, onto which different task heads can be attached.\n",
    "4. This gives BERT variants for different tasks, e.g. **ForSequenceClassification, **ForTokenClassification, **ForMultipleChoice.\n",
    "5. Here ** stands for the base model, i.e. a checkpoint pre-trained by different providers on different corpora and with different techniques."
   ]
  },
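  {
   "cell_type": "markdown",
   "id": "f1a2b301",
   "metadata": {},
   "source": [
    "The 80/10/10 masking scheme described above can be sketched in plain Python. This is an illustrative toy (the `mask_tokens` helper is made up here), not BERT's actual data pipeline:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f1a2b302",
   "metadata": {},
   "outputs": [],
   "source": [
    "import random\n",
    "\n",
    "# toy BERT-style masking (illustrative helper, not the real pipeline)\n",
    "def mask_tokens(tokens, mask_rate=0.15, seed=0):\n",
    "    rng = random.Random(seed)\n",
    "    out = list(tokens)\n",
    "    n_select = max(1, int(len(tokens) * mask_rate))\n",
    "    selected = rng.sample(range(len(tokens)), n_select)\n",
    "    for i in selected:\n",
    "        r = rng.random()\n",
    "        if r < 0.8:\n",
    "            out[i] = '[MASK]'            # 80%: replace with [MASK]\n",
    "        elif r < 0.9:\n",
    "            out[i] = rng.choice(tokens)  # 10%: replace with a random token\n",
    "        # else: 10% keep the original token\n",
    "    return out, selected\n",
    "\n",
    "masked, positions = mask_tokens(list('我喜欢这家酒店的房间和早餐服务'))\n",
    "print(positions, masked)"
   ]
  },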
  {
   "cell_type": "markdown",
   "id": "d38987e7",
   "metadata": {},
   "source": [
    "# Task 3: Using transformers"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d1952424",
   "metadata": {},
   "source": [
    "Below is a quick run-through of the BERT workflow; full usage appears in the other competition code, so this is only a minimal demo.<br/>\n",
    "The overall steps are:\n",
    "1. Define a custom dataset, since different tasks label data differently.\n",
    "2. Load the pre-trained tokenizer and model; the model's For** suffix must match the task.\n",
    "3. Build the PyTorch dataset and dataloader.\n",
    "4. Iterate over batches to train the model and update its parameters (this step is omitted in the demo below, so the results are not accurate).\n",
    "5. Run prediction."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "b67f47bc",
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import BertTokenizer\n",
    "from transformers import BertForSequenceClassification\n",
    "\n",
    "import torch\n",
    "from torch.utils.data import Dataset, DataLoader"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "abdd60b8",
   "metadata": {},
   "source": [
    "Define a custom dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "3aed697a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# custom dataset: wraps the tokenizer output plus optional labels\n",
    "class NewsDataset(Dataset):\n",
    "    def __init__(self, encodings, labels):\n",
    "        self.encodings = encodings\n",
    "        self.labels = labels\n",
    "        # count samples, not the number of encoding fields\n",
    "        self.len = len(encodings['input_ids'])\n",
    "    \n",
    "    # fetch a single sample as a dict of tensors\n",
    "    def __getitem__(self, idx):\n",
    "        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}\n",
    "        if self.labels is not None:\n",
    "            item['labels'] = torch.tensor(int(self.labels[idx]))\n",
    "        return item\n",
    "    \n",
    "    def __len__(self):\n",
    "        return self.len"
   ]
  },
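  {
   "cell_type": "markdown",
   "id": "f1a2b303",
   "metadata": {},
   "source": [
    "A quick sanity check of `NewsDataset` with a hand-built encoding dict (the token ids here are arbitrary; normally this dict comes from the tokenizer):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f1a2b304",
   "metadata": {},
   "outputs": [],
   "source": [
    "# hand-built encoding dict with arbitrary ids, just to exercise the class\n",
    "enc = {'input_ids': [[101, 2769, 102], [101, 872, 102]],\n",
    "       'attention_mask': [[1, 1, 1], [1, 1, 1]]}\n",
    "ds = NewsDataset(enc, [0, 1])\n",
    "print(len(ds), ds[1])"
   ]
  },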
  {
   "cell_type": "markdown",
   "id": "58cba8d1",
   "metadata": {},
   "source": [
    "Load the BERT model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "ccda2b43",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Some weights of the model checkpoint at bert-base-chinese were not used when initializing BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.bias', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight']\n",
      "- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
      "- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n",
      "Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-chinese and are newly initialized: ['classifier.weight', 'classifier.bias']\n",
      "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
     ]
    }
   ],
   "source": [
    "# load the tokenizer\n",
    "tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')\n",
    "# load the model with a binary-classification head\n",
    "model = BertForSequenceClassification.from_pretrained('bert-base-chinese', num_labels=2)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4171081e",
   "metadata": {},
   "source": [
    "Build the data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "83ba6cd5",
   "metadata": {},
   "outputs": [],
   "source": [
    "train_X = ['这家酒店风景优美，干净卫生，是一家好酒店', '这家酒店太吵了，根本无法睡觉', '这家酒店的床真舒服']\n",
    "train_y = [1, 0, 1]\n",
    "\n",
    "test_X = ['这家酒店风景优美，干净卫生，是一家好酒店', '这家酒店太吵了，根本无法睡觉', '这家酒店的床真舒服']\n",
    "# tokenize and map tokens to vocabulary ids\n",
    "train_encoding = tokenizer(train_X, truncation=True, padding=True, max_length=16)\n",
    "test_encoding = tokenizer(test_X, truncation=True, padding=True, max_length=16)\n",
    "# build the datasets\n",
    "train_dataset = NewsDataset(train_encoding, train_y)\n",
    "test_dataset = NewsDataset(test_encoding, None)\n",
    "# build the dataloaders (keep the test set in order)\n",
    "train_dataloader = DataLoader(train_dataset, batch_size=1, shuffle=True)\n",
    "test_dataloader = DataLoader(test_dataset, batch_size=1, shuffle=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "69e51f35",
   "metadata": {},
   "source": [
    "Train the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "0f44dcfa",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "loss: tensor(0.4921, grad_fn=<NllLossBackward0>)\n",
      "loss: tensor(0.3775, grad_fn=<NllLossBackward0>)\n",
      "loss: tensor(0.9057, grad_fn=<NllLossBackward0>)\n"
     ]
    }
   ],
   "source": [
    "# NOTE: the optimizer step is omitted here, so parameters are never updated\n",
    "for iter_id, batch in enumerate(train_dataloader):\n",
    "    input_ids = batch['input_ids']\n",
    "    attention_mask = batch['attention_mask']\n",
    "    labels = batch['labels']\n",
    "\n",
    "    outputs = model(input_ids, attention_mask=attention_mask, labels=labels)\n",
    "    print('loss:', outputs.loss)"
   ]
  },
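  {
   "cell_type": "markdown",
   "id": "f1a2b305",
   "metadata": {},
   "source": [
    "The parameter-update step omitted above would look roughly like this (a sketch reusing the `model` and `train_dataloader` defined earlier; the learning rate is a typical but placeholder choice):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f1a2b306",
   "metadata": {},
   "outputs": [],
   "source": [
    "# sketch of the omitted update step\n",
    "optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)\n",
    "model.train()\n",
    "for batch in train_dataloader:\n",
    "    optimizer.zero_grad()\n",
    "    outputs = model(batch['input_ids'],\n",
    "                    attention_mask=batch['attention_mask'],\n",
    "                    labels=batch['labels'])\n",
    "    outputs.loss.backward()  # backpropagate the classification loss\n",
    "    optimizer.step()         # update the parameters"
   ]
  },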
  {
   "cell_type": "markdown",
   "id": "95af4324",
   "metadata": {},
   "source": [
    "Predict"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "id": "0593b6f2",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "logits: tensor([[-0.1046,  0.2831]], grad_fn=<AddmmBackward0>)\n",
      "logits: tensor([[-0.0069,  0.4461]], grad_fn=<AddmmBackward0>)\n",
      "logits: tensor([[-0.1164,  0.6632]], grad_fn=<AddmmBackward0>)\n"
     ]
    }
   ],
   "source": [
    "for iter_id, batch in enumerate(test_dataloader):\n",
    "    input_ids = batch['input_ids']\n",
    "    attention_mask = batch['attention_mask']\n",
    "\n",
    "    outputs = model(input_ids, attention_mask=attention_mask)\n",
    "    print('logits:', outputs.logits)"
   ]
  },
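  {
   "cell_type": "markdown",
   "id": "f1a2b307",
   "metadata": {},
   "source": [
    "To turn logits into class predictions, take the argmax (or apply a softmax for probabilities). The values below are copied from the sample output above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f1a2b308",
   "metadata": {},
   "outputs": [],
   "source": [
    "logits = torch.tensor([[-0.1046, 0.2831],\n",
    "                       [-0.0069, 0.4461],\n",
    "                       [-0.1164, 0.6632]])\n",
    "probs = torch.softmax(logits, dim=-1)  # per-class probabilities\n",
    "preds = torch.argmax(logits, dim=-1)   # predicted class per sample\n",
    "print(preds.tolist())  # [1, 1, 1]"
   ]
  },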
  {
   "cell_type": "markdown",
   "id": "745001f1",
   "metadata": {},
   "source": [
    "# Task 4: BERT Downstream Tasks"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "56b41873",
   "metadata": {},
   "source": [
    "The code is in **bert各任务案例** (BERT task examples) in this project."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5cf035ae",
   "metadata": {},
   "source": [
    "# Task 5: BERT Pre-training"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "48146608",
   "metadata": {},
   "source": [
    "The code is in **mlm预训练** (MLM pre-training) in this project."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "89146339",
   "metadata": {},
   "source": [
    "# Task 6: Prompt"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c1f7b477",
   "metadata": {},
   "source": [
    "The code is in **prompt学习.md** (prompt learning) under the mlm预训练 directory in this project."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
