{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# ERNIE实现对话情绪识别\n",
    "## 1 概述\n",
    "### 1.1 对话情绪识别\n",
    "对话情绪识别（Emotion Detection，简称EmoTect），专注于识别智能对话场景中用户的情绪，针对智能对话场景中的用户文本，自动判断该文本的情绪类别并给出相应的置信度，情绪类型分为积极、消极、中性。对话情绪识别适用于聊天、客服等多个场景，能够帮助企业更好地把握对话质量、改善产品的用户交互体验，也能分析客服服务质量、降低人工质检成本。主要实现以下效果：\n",
    "```\n",
    "输入: 今天天气真好\n",
    "正确标签: 积极\n",
    "预测标签: 积极\n",
    "\n",
    "输入: 今天是晴天\n",
    "正确标签: 中性\n",
    "预测标签: 中性\n",
    "\n",
    "输入: 今天天气也太差了\n",
    "正确标签: 消极\n",
    "预测标签: 消极\n",
    "\n",
    "```\n",
    "### 1.2 ERNIE\n",
    "ERNIE: Enhanced Representation through Knowledge Integration 是百度在2019年3月的时候，基于BERT模型做的进一步优化，在中文的NLP任务上得到了state-of-the-art的结果。ERNIE 是一种基于知识增强的持续学习语义理解框架，该框架将大数据预训练与多源丰富知识相结合，通过持续学习技术，不断吸收海量文本数据中词汇、结构、语义等方面的知识，实现模型效果不断进化。ERNIE 在情感分析、文本匹配、自然语言推理、词法分析、阅读理解、智能问答等公开数据集上取得了优异的成绩。ERNIE 在工业界也得到了大规模应用，如搜索引擎、新闻推荐、广告系统、语音交互、智能客服等。ERNIE目前仍在不断优化中，本文基于ERNIE1.0版本开发完成。\n",
    "\n",
    "![ERNIE结构图](https://pic3.zhimg.com/80/v2-9b91ca9e0032c0af65fb3565b6556456_720w.jpg)\n",
    "\n",
    "### 1.3 环境要求\n",
    "本文在MindSpore框架下实现了ERNIE模型，通过加载百度开放的标注机器人聊天数据集完成了模型训练、测试及预测。需要的环境如下：\n",
    "- 硬件\n",
    "  - 10G以上GPU显存或对应配置的Ascend服务器\n",
    "- 软件\n",
    "  - Python 3.5以上\n",
    "  - MindSpore 1.8 (详情查看[MindSpore教程](https://www.mindspore.cn/tutorials/zh-CN/master/index.html))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2 数据准备\n",
    "本节使用百度提供的一份已标注的、经过分词预处理的机器人对话情绪识别数据集，其目录结构如下：\n",
    "```\n",
    ".\n",
    "├── train.tsv       # 训练集\n",
    "├── dev.tsv         # 验证集\n",
    "├── test.tsv        # 测试集\n",
    "├── infer.tsv       # 待预测数据\n",
    "├── vocab.txt       # 词典\n",
    "```\n",
    "\n",
    "数据由两列组成，以制表符（'\\t'）分隔，第一列是情绪分类的类别（0表示消极；1表示中性；2表示积极），第二列是以空格分词的中文文本，如下示例，文件为 utf8 编码。\n",
    "\n",
    "| label | text_a |\n",
    "| :-: | :-: |\n",
    "| 0 | 谁 骂人 了 ？ 我 从来 不 骂人 ， 我 骂 的 都 不是 人 ， 你 是 人 吗 ？ |\n",
    "| 1 | 我 有事 等会儿 就 回来 和 你 聊 |\n",
    "| 2 | 我 见到 你 很高兴 谢谢 你 帮 我 |\n",
    "\n",
    "### 2.1 数据下载\n",
    "为了方便数据集和预训练词向量的下载，首先设计数据下载模块，实现下载流程，并保存至指定路径。我们通过`wget`工具来发起http请求并下载数据集，下载好的数据集为tar.gz文件，利用`tar`工具解压下载下载的数据集到指定位置，并将所有数据和标签分别进行存放。\n",
    ">说明：需要提前配置`wget`和`tar`软件，Ubuntu系列安装命令为:`apt-get install wget tar`\n"
   ]
  },
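  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sketch of this format, one data line (the neutral sample from the table above) splits into its label and its pre-tokenized words like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A single data line in the tsv format described above (a sketch; the\n",
    "# real files also start with a header row 'label\\ttext_a').\n",
    "line = '1\\t我 有事 等会儿 就 回来 和 你 聊\\n'\n",
    "label, text = line.rstrip('\\n').split('\\t')\n",
    "print(label)         # '1' -> neutral\n",
    "print(text.split())  # the space-tokenized words"
   ]
  },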
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--2022-11-10 03:08:52--  https://baidu-nlp.bj.bcebos.com/emotion_detection-dataset-1.0.0.tar.gz\n",
      "Resolving baidu-nlp.bj.bcebos.com (baidu-nlp.bj.bcebos.com)... 110.242.70.39, 110.242.70.3, 2409:8c04:1001:1002:0:ff:b001:368a\n",
      "Connecting to baidu-nlp.bj.bcebos.com (baidu-nlp.bj.bcebos.com)|110.242.70.39|:443... connected.\n",
      "HTTP request sent, awaiting response... 200 OK\n",
      "Length: 1710581 (1.6M) [application/x-gzip]\n",
      "Saving to: ‘emotion_detection-dataset-1.0.0.tar.gz.1’\n",
      "\n",
      "emotion_detection-d 100%[===================>]   1.63M   806KB/s    in 2.1s    \n",
      "\n",
      "2022-11-10 03:08:55 (806 KB/s) - ‘emotion_detection-dataset-1.0.0.tar.gz.1’ saved [1710581/1710581]\n",
      "\n",
      "data/\n",
      "data/test.tsv\n",
      "data/infer.tsv\n",
      "data/dev.tsv\n",
      "data/train.tsv\n",
      "data/vocab.txt\n"
     ]
    }
   ],
   "source": [
    "!wget --no-check-certificate https://baidu-nlp.bj.bcebos.com/emotion_detection-dataset-1.0.0.tar.gz\n",
    "\n",
    "!tar xvf emotion_detection-dataset-1.0.0.tar.gz\n",
    "!/bin/rm emotion_detection-dataset-1.0.0.tar.gz"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.2 MindSpore数据转换\n",
    "下载数据后，现有数据是tsv文件格式，要加载数据到MindSpore，并按照特定格式获取批处理数据，需要转换成MindSpore独有的MindRecord数据格式。\n",
    "运行数据格式转换脚本, 将数据集转为MindRecord格式。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "首先我们定义一个tsv格式的`reader`，按照`tuple`格式读取tsv文件中的每一行数据"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "def csv_reader(fd, delimiter='\\t'):\n",
    "    \"\"\"\n",
    "    csv 文件读取\n",
    "    \"\"\"\n",
    "    def gen():\n",
    "        for i in fd:\n",
    "            slots = i.rstrip('\\n').split(delimiter)\n",
    "            if len(slots) == 1:\n",
    "                yield (slots,)\n",
    "            else:\n",
    "                yield slots\n",
    "    return gen()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在实际处理过程中，数据中还可能会出现一些`bytes`格式等乱码数据。为了处理这些数据中的乱码问题，我们需要定义一系列数据格式转换方法。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "import io\n",
    "import unicodedata\n",
    "import collections\n",
    "\n",
    "\n",
    "def convert_to_unicode(text):\n",
    "    \"\"\"假设输入时utf-8的编码，将`text`转换到Unicode编码\"\"\"\n",
    "    if isinstance(text, str):\n",
    "        text = text\n",
    "    elif isinstance(text, bytes):\n",
    "        text = text.decode(\"utf-8\", \"ignore\")\n",
    "    else:\n",
    "        raise ValueError(\"Unsupported string type: %s\" % (type(text)))\n",
    "    return text\n",
    "\n",
    "def load_vocab(vocab_file):\n",
    "    \"\"\"将vocab文件文件加载为dict格式\"\"\"\n",
    "    vocab = collections.OrderedDict()\n",
    "    fin = io.open(vocab_file, encoding=\"utf8\")\n",
    "    for num, line in enumerate(fin):\n",
    "        items = convert_to_unicode(line.strip()).split(\"\\t\")\n",
    "        if len(items) > 2:\n",
    "            break\n",
    "        token = items[0]\n",
    "        index = items[1] if len(items) == 2 else num\n",
    "        token = token.strip()\n",
    "        vocab[token] = int(index)\n",
    "    return vocab\n",
    "\n",
    "\n",
    "def convert_by_vocab(vocab, items):\n",
    "    \"\"\"通过vocab将items转换为[token|ids]序列\"\"\"\n",
    "    output = []\n",
    "    for item in items:\n",
    "        output.append(vocab[item])\n",
    "    return output\n",
    "\n",
    "\n",
    "def whitespace_tokenize(text):\n",
    "    \"\"\"对一段文本完成基本的空白清理和拆分\"\"\"\n",
    "    text = text.strip()\n",
    "    if not text:\n",
    "        return []\n",
    "    tokens = text.split()\n",
    "    return tokens\n",
    "\n",
    "\n",
    "def _is_whitespace(char):\n",
    "    \"\"\"检查`chars`是否为空白字符\"\"\"\n",
    "    # \\t, \\n, and \\r 是控制字符，但其对文本没有作用，将其认为是空格。\n",
    "    if char in (\" \", \"\\t\", \"\\n\", \"\\r\"):\n",
    "        return True\n",
    "    cat = unicodedata.category(char)\n",
    "    if cat == \"Zs\":\n",
    "        return True\n",
    "    return False\n",
    "\n",
    "\n",
    "def _is_control(char):\n",
    "    \"\"\"判断`chars` 是否是控制字符\"\"\"\n",
    "    # 这些技术上是控制字符，但我们将其视为空白字符\n",
    "    if char in (\"\\t\", \"\\n\", \"\\r\"):\n",
    "        return False\n",
    "    cat = unicodedata.category(char)\n",
    "    if cat.startswith(\"C\"):\n",
    "        return True\n",
    "    return False\n",
    "\n",
    "\n",
    "def _is_punctuation(char):\n",
    "    \"\"\"判断`chars` 是否是标点符号\"\"\"\n",
    "    cp = ord(char)\n",
    "    #我们将所有非字母/数字ASCII视为标点符号。\n",
    "    #“^”、“$”和“`”等字符不在Unicode标点符号类中，但为了一致性，我们将它们视为标点符号。\n",
    "    if ((33 <= cp <= 47) or\n",
    "            (58 <= cp <= 64) or\n",
    "            (91 <= cp <= 96) or\n",
    "            (123 <= cp <= 126)):\n",
    "        return True\n",
    "    cat = unicodedata.category(char)\n",
    "    if cat.startswith(\"P\"):\n",
    "        return True\n",
    "    return False"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "接下来，我们开始进行数据预处理，以便将清洗处理好的输出转为`MindDataset`方便后续训练。\n",
    "\n",
    "首先定义`BasicTokenizer`类，该类能够执行基本分词标记化（标点符号分割、小写等）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "class BasicTokenizer:\n",
    "    \"\"\"运行基本标记化（标点符号拆分、小写等）\"\"\"\n",
    "\n",
    "    def __init__(self, do_lower_case=True):\n",
    "        \"\"\"构造BasicTokenizer.\n",
    "        Args:\n",
    "            do_lower_case: 是否小写输入.\n",
    "        \"\"\"\n",
    "        self.do_lower_case = do_lower_case\n",
    "\n",
    "    def tokenize(self, text):\n",
    "        \"\"\"标记处理一段文本\"\"\"\n",
    "        text = convert_to_unicode(text)\n",
    "        text = self._clean_text(text)\n",
    "\n",
    "        #这是2018年11月1日为多语言和中文车型添加的。\n",
    "        # 这一点现在也适用于英语模型，但这并不重要，因为英语模型没有经过任何中文数据的训练，\n",
    "        # 通常也没有任何中文数据（词汇表中有汉字，因为维基百科在英语维基百科中确实有一些中文单词）。\n",
    "        text = self._tokenize_chinese_chars(text)\n",
    "\n",
    "        orig_tokens = whitespace_tokenize(text)\n",
    "        split_tokens = []\n",
    "        for token in orig_tokens:\n",
    "            if self.do_lower_case:\n",
    "                token = token.lower()\n",
    "                token = self._run_strip_accents(token)\n",
    "            split_tokens.extend(self._run_split_on_punc(token))\n",
    "\n",
    "        output_tokens = whitespace_tokenize(\" \".join(split_tokens))\n",
    "        return output_tokens\n",
    "\n",
    "    def _run_strip_accents(self, text):\n",
    "        \"\"\"从一段文字中去除重音。\"\"\"\n",
    "        text = unicodedata.normalize(\"NFD\", text)\n",
    "        output = []\n",
    "        for char in text:\n",
    "            cat = unicodedata.category(char)\n",
    "            if cat == \"Mn\":\n",
    "                continue\n",
    "            output.append(char)\n",
    "        return \"\".join(output)\n",
    "\n",
    "    def _run_split_on_punc(self, text):\n",
    "        \"\"\"拆分文本上的标点符号\"\"\"\n",
    "        chars = list(text)\n",
    "        i = 0\n",
    "        start_new_word = True\n",
    "        output = []\n",
    "        while i < len(chars):\n",
    "            char = chars[i]\n",
    "            if _is_punctuation(char):\n",
    "                output.append([char])\n",
    "                start_new_word = True\n",
    "            else:\n",
    "                if start_new_word:\n",
    "                    output.append([])\n",
    "                start_new_word = False\n",
    "                output[-1].append(char)\n",
    "            i += 1\n",
    "\n",
    "        return [\"\".join(x) for x in output]\n",
    "\n",
    "    def _tokenize_chinese_chars(self, text):\n",
    "        \"\"\"在CJK字符周围添加空白。\"\"\"\n",
    "        output = []\n",
    "        for char in text:\n",
    "            cp = ord(char)\n",
    "            if self._is_chinese_char(cp):\n",
    "                output.append(\" \")\n",
    "                output.append(char)\n",
    "                output.append(\" \")\n",
    "            else:\n",
    "                output.append(char)\n",
    "        return \"\".join(output)\n",
    "\n",
    "    def _is_chinese_char(self, cp):\n",
    "        \"\"\"检查输入`cp`是否为CJK字符的编码\"\"\"\n",
    "        # 这将“汉字”定义为CJK Unicode块中的字符:\n",
    "        #     https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block)\n",
    "        #\n",
    "        # 请注意，CJK Unicode块并非全部是日语和韩语字符,\n",
    "        # 现代韩语字母是一个不同的块，日语平假名和片假名也是如此。\n",
    "        # 这些字母用于书写空格分隔的单词，所以它们并没有像其他所有语言一样被特殊对待和处理。\n",
    "        if ((0x4E00 <= cp <= 0x9FFF) or\n",
    "                (0x3400 <= cp <= 0x4DBF) or\n",
    "                (0x20000 <= cp <= 0x2A6DF) or\n",
    "                (0x2A700 <= cp <= 0x2B73F) or\n",
    "                (0x2B740 <= cp <= 0x2B81F) or\n",
    "                (0x2B820 <= cp <= 0x2CEAF) or\n",
    "                (0xF900 <= cp <= 0xFAFF) or\n",
    "                (0x2F800 <= cp <= 0x2FA1F)):\n",
    "            return True\n",
    "\n",
    "        return False\n",
    "\n",
    "    def _clean_text(self, text):\n",
    "        \"\"\"对文本执行无效字符删除和空白清理\"\"\"\n",
    "        output = []\n",
    "        for char in text:\n",
    "            cp = ord(char)\n",
    "            if cp == 0 or cp == 0xfffd or _is_control(char):\n",
    "                continue\n",
    "            if _is_whitespace(char):\n",
    "                output.append(\" \")\n",
    "            else:\n",
    "                output.append(char)\n",
    "        return \"\".join(output)"
   ]
  },
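  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The effect of `_tokenize_chinese_chars` can be seen with a minimal standalone sketch that checks only the main CJK Unified Ideographs block (the full method above covers several more ranges):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def tokenize_chinese_chars(text):\n",
    "    # Minimal sketch of BasicTokenizer._tokenize_chinese_chars above: pad\n",
    "    # every CJK ideograph with spaces so that each Chinese character becomes\n",
    "    # its own token (only the main 0x4E00-0x9FFF block is checked here).\n",
    "    output = []\n",
    "    for char in text:\n",
    "        if 0x4E00 <= ord(char) <= 0x9FFF:\n",
    "            output.extend([' ', char, ' '])\n",
    "        else:\n",
    "            output.append(char)\n",
    "    return ''.join(output).split()\n",
    "\n",
    "print(tokenize_chinese_chars('今天天气真好!'))  # ['今', '天', '天', '气', '真', '好', '!']"
   ]
  },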
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "接着我们定义`WordpieceTokenizer`类与上述`BasicTokenizer`一致，用于将一段文本标记为其词条，此类使用贪婪的最长匹配优先算法来执行标记化使用给定的词汇。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "class WordpieceTokenizer:\n",
    "    \"\"\"运行WordpieceTokenizer\"\"\"\n",
    "\n",
    "    def __init__(self, vocab, unk_token=\"[UNK]\", max_input_chars_per_word=100):\n",
    "        self.vocab = vocab\n",
    "        self.unk_token = unk_token\n",
    "        self.max_input_chars_per_word = max_input_chars_per_word\n",
    "\n",
    "    def tokenize(self, text):\n",
    "        \"\"\"将一段文字标记为单词。这使用贪婪的最长匹配优先算法来使用给定词汇表执行标记化。\n",
    "        For example:\n",
    "            input = \"unaffable\"\n",
    "            output = [\"un\", \"##aff\", \"##able\"]\n",
    "        Args:\n",
    "            text: 单个标记或空白分隔的标记。通过`BasicTokenizer`类传递\n",
    "        Returns:\n",
    "            字词标记列表\n",
    "        \"\"\"\n",
    "\n",
    "        text = convert_to_unicode(text)\n",
    "\n",
    "        output_tokens = []\n",
    "        for token in whitespace_tokenize(text):\n",
    "            chars = list(token)\n",
    "            if len(chars) > self.max_input_chars_per_word:\n",
    "                output_tokens.append(self.unk_token)\n",
    "                continue\n",
    "\n",
    "            is_bad = False\n",
    "            start = 0\n",
    "            sub_tokens = []\n",
    "            while start < len(chars):\n",
    "                end = len(chars)\n",
    "                cur_substr = None\n",
    "                while start < end:\n",
    "                    substr = \"\".join(chars[start:end])\n",
    "                    if start > 0:\n",
    "                        substr = \"##\" + substr\n",
    "                    if substr in self.vocab:\n",
    "                        cur_substr = substr\n",
    "                        break\n",
    "                    end -= 1\n",
    "                if cur_substr is None:\n",
    "                    is_bad = True\n",
    "                    break\n",
    "                sub_tokens.append(cur_substr)\n",
    "                start = end\n",
    "\n",
    "            if is_bad:\n",
    "                output_tokens.append(self.unk_token)\n",
    "            else:\n",
    "                output_tokens.extend(sub_tokens)\n",
    "        return output_tokens"
   ]
  },
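  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The greedy longest-match-first matching loop used by `tokenize()` above can be illustrated standalone with a toy vocabulary (the vocabulary entries here are made up for the example):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def wordpiece(token, vocab, unk='[UNK]'):\n",
    "    # Greedy longest-match-first, as in WordpieceTokenizer.tokenize above.\n",
    "    chars = list(token)\n",
    "    start, pieces = 0, []\n",
    "    while start < len(chars):\n",
    "        end = len(chars)\n",
    "        cur_substr = None\n",
    "        while start < end:\n",
    "            substr = ''.join(chars[start:end])\n",
    "            if start > 0:\n",
    "                substr = '##' + substr  # mark a non-initial piece\n",
    "            if substr in vocab:\n",
    "                cur_substr = substr\n",
    "                break\n",
    "            end -= 1\n",
    "        if cur_substr is None:  # no piece matched -> unknown token\n",
    "            return [unk]\n",
    "        pieces.append(cur_substr)\n",
    "        start = end\n",
    "    return pieces\n",
    "\n",
    "toy_vocab = {'un', '##aff', '##able'}\n",
    "print(wordpiece('unaffable', toy_vocab))  # ['un', '##aff', '##able']"
   ]
  },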
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "最后，我们定义一个`FullTokenizer`类将上述的`BasickTokenizer`类和`WordPieceTokenizer`类结合起来。定义`tokenize()`方法将输入的文本分别进行`BasicTokenizer`和`WordPieceTokenizer`的`tokenize()`处理，得到处理分割后的文本词组；定义`convert_tokens_to_ids()`方法通过vocab字典转换成对应id。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "class FullTokenizer:\n",
    "    \"\"\"运行端到端的tokenziation.\"\"\"\n",
    "\n",
    "    def __init__(self, vocab_file, do_lower_case=True):\n",
    "        self.vocab = load_vocab(vocab_file)\n",
    "        self.inv_vocab = {v: k for k, v in self.vocab.items()}\n",
    "        self.basic_tokenizer = BasicTokenizer(do_lower_case=do_lower_case)\n",
    "        self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab)\n",
    "\n",
    "    def tokenize(self, text):\n",
    "        split_tokens = []\n",
    "        for token in self.basic_tokenizer.tokenize(text):\n",
    "            for sub_token in self.wordpiece_tokenizer.tokenize(token):\n",
    "                split_tokens.append(sub_token)\n",
    "\n",
    "        return split_tokens\n",
    "\n",
    "    def convert_tokens_to_ids(self, tokens):\n",
    "        return convert_by_vocab(self.vocab, tokens)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "定义文本预处理相关类后，我们继续定义一个基本的将文本文件转换为`MindDataset`类型的对象`BaseReader`。\n",
    "\n",
    "该对象对文本的处理方法为：\n",
    "1. 将tsv文件中的数据读取出来，并统一句子长度为`max_seq_len`，多的截断，少的补0。\n",
    "2. 为了衡量句子的真实长度，对有内容的部分赋1，从而获得句子真实长度。\n",
    "3. 由于本文是基于文本对话情绪识别，文本数据中存在有多条句子存在的情况。为了衡量对话顺序，使用`[CLS]`作为句子的开头，使用`[SEP]`作为对话间句子分割。\n",
    "4. 为了表明句子的对话发言情况，使用0代表对话中的第一句话，使用1代表该句的回复。\n",
    "\n",
    "并通过如下几个变量定义上述处理：\n",
    "- tokens：英文做wordpiece拆分原型和变形\n",
    "- segment_ids(token_type_id): 表示第一句话还是第二句话\n",
    "- input_ids: look up vocab找的每个字的索引，因为max_seq_len的存在，需要padding长度不够的位置补0\n",
    "- input_mask: mask在有字的地方全是1，没有字的地方是0（0的位置后续不参与attention）"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "import numpy as np\n",
    "from mindspore.mindrecord import FileWriter\n",
    "\n",
    "\n",
    "class BaseReader:\n",
    "    \"\"\"用于分类和序列标记任务BaseReader\"\"\"\n",
    "\n",
    "    def __init__(self,\n",
    "                 vocab_path,\n",
    "                 label_map_config=None,\n",
    "                 max_seq_len=512,\n",
    "                 do_lower_case=True,\n",
    "                 in_tokens=False,\n",
    "                 random_seed=None):\n",
    "        self.max_seq_len = max_seq_len\n",
    "        self.tokenizer = FullTokenizer(\n",
    "            vocab_file=vocab_path, do_lower_case=do_lower_case)\n",
    "        self.vocab = self.tokenizer.vocab\n",
    "        self.pad_id = self.vocab[\"[PAD]\"]\n",
    "        self.cls_id = self.vocab[\"[CLS]\"]\n",
    "        self.sep_id = self.vocab[\"[SEP]\"]\n",
    "        self.in_tokens = in_tokens\n",
    "\n",
    "        np.random.seed(random_seed)\n",
    "\n",
    "        self.current_example = 0\n",
    "        self.current_epoch = 0\n",
    "        self.num_examples = 0\n",
    "\n",
    "        if label_map_config:\n",
    "            with open(label_map_config) as f:\n",
    "                self.label_map = json.load(f)\n",
    "        else:\n",
    "            self.label_map = None\n",
    "\n",
    "    def _read_tsv(self, input_file, quotechar=None):\n",
    "        \"\"\"读取以制表符分隔的tsv文件\"\"\"\n",
    "        with io.open(input_file, \"r\", encoding=\"utf8\") as f:\n",
    "            reader = csv_reader(f, delimiter=\"\\t\")\n",
    "            headers = next(reader)\n",
    "            Example = collections.namedtuple('Example', headers)\n",
    "\n",
    "            examples = []\n",
    "            for line in reader:\n",
    "                example = Example(*line)\n",
    "                examples.append(example)\n",
    "            return examples\n",
    "\n",
    "    def _truncate_seq_pair(self, tokens_a, tokens_b, max_length):\n",
    "        \"\"\"将序列对就地截断到最大长度\"\"\"\n",
    "\n",
    "        #这是一个简单的启发式算法，每次只截短一个较长的序列。\n",
    "        # 这比从每个标记中截取相等百分比的标记更有意义，\n",
    "        # 因为如果一个序列很短，那么被截取的每个标记可能包含比较长序列更多的信息。\n",
    "        while True:\n",
    "            total_length = len(tokens_a) + len(tokens_b)\n",
    "            if total_length <= max_length:\n",
    "                break\n",
    "            if len(tokens_a) > len(tokens_b):\n",
    "                tokens_a.pop()\n",
    "            else:\n",
    "                tokens_b.pop()\n",
    "\n",
    "    def _convert_example_to_record(self, example, max_seq_length, tokenizer):\n",
    "        \"\"\"将单条 `Example` 记录转换为单个`Record`对象.\"\"\"\n",
    "\n",
    "        text_a = convert_to_unicode(example.text_a)\n",
    "        tokens_a = tokenizer.tokenize(text_a)\n",
    "        tokens_b = None\n",
    "        if \"text_b\" in example._fields:\n",
    "            text_b = convert_to_unicode(example.text_b)\n",
    "            tokens_b = tokenizer.tokenize(text_b)\n",
    "\n",
    "        if tokens_b:\n",
    "            # Modifies `tokens_a` and `tokens_b` in place so that the total\n",
    "            # length is less than the specified length.\n",
    "            # Account for [CLS], [SEP], [SEP] with \"- 3\"\n",
    "            self._truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)\n",
    "        else:\n",
    "            # Account for [CLS] and [SEP] with \"- 2\"\n",
    "            if len(tokens_a) > max_seq_length - 2:\n",
    "                tokens_a = tokens_a[0:(max_seq_length - 2)]\n",
    "\n",
    "        # The convention in BERT/ERNIE is:\n",
    "        # (a) For sequence pairs:\n",
    "        #  tokens:   [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]\n",
    "        #  type_ids: 0     0  0    0    0     0       0 0     1  1  1  1   1 1\n",
    "        # (b) For single sequences:\n",
    "        #  tokens:   [CLS] the dog is hairy . [SEP]\n",
    "        #  type_ids: 0     0   0   0  0     0 0\n",
    "        #\n",
    "        # Where \"type_ids\" are used to indicate whether this is the first\n",
    "        # sequence or the second sequence. The embedding vectors for `type=0` and\n",
    "        # `type=1` were learned during pre-training and are added to the wordpiece\n",
    "        # embedding vector (and position vector). This is not *strictly* necessary\n",
    "        # since the [SEP] token unambiguously separates the sequences, but it makes\n",
    "        # it easier for the model to learn the concept of sequences.\n",
    "        #\n",
    "        # For classification tasks, the first vector (corresponding to [CLS]) is\n",
    "        # used as as the \"sentence vector\". Note that this only makes sense because\n",
    "        # the entire model is fine-tuned.\n",
    "        tokens = []\n",
    "        segment_ids = []\n",
    "        tokens.append(\"[CLS]\")\n",
    "        segment_ids.append(0)\n",
    "        for token in tokens_a:\n",
    "            tokens.append(token)\n",
    "            segment_ids.append(0)\n",
    "        tokens.append(\"[SEP]\")\n",
    "        segment_ids.append(0)\n",
    "\n",
    "        if tokens_b:\n",
    "            for token in tokens_b:\n",
    "                tokens.append(token)\n",
    "                segment_ids.append(1)\n",
    "            tokens.append(\"[SEP]\")\n",
    "            segment_ids.append(1)\n",
    "\n",
    "        input_ids = tokenizer.convert_tokens_to_ids(tokens)\n",
    "\n",
    "        input_mask = [1] * len(input_ids)\n",
    "\n",
    "        while len(input_ids) < max_seq_length:\n",
    "            input_ids.append(0)\n",
    "            input_mask.append(0)\n",
    "            segment_ids.append(0)\n",
    "\n",
    "        if self.label_map:\n",
    "            label_id = self.label_map[example.label]\n",
    "        else:\n",
    "            label_id = example.label\n",
    "\n",
    "        Record = collections.namedtuple(\n",
    "            'Record',\n",
    "            ['input_ids', 'input_mask', 'segment_ids', 'label_id'])\n",
    "\n",
    "        record = Record(\n",
    "            input_ids=input_ids,\n",
    "            input_mask=input_mask,\n",
    "            segment_ids=segment_ids,\n",
    "            label_id=label_id)\n",
    "        return record\n",
    "\n",
    "    def get_num_examples(self, input_file):\n",
    "        \"\"\"返回读取数据总数\"\"\"\n",
    "        examples = self._read_tsv(input_file)\n",
    "        return len(examples)\n",
    "\n",
    "    def get_examples(self, input_file):\n",
    "        examples = self._read_tsv(input_file)\n",
    "        return examples\n",
    "\n",
    "    def file_based_convert_examples_to_features(self, input_file, output_file):\n",
    "        \"\"\"\"将`InputExample`数据集转换为 MindDataset 文件\"\"\"\n",
    "        examples = self._read_tsv(input_file)\n",
    "\n",
    "        writer = FileWriter(file_name=output_file, shard_num=1)\n",
    "        nlp_schema = {\n",
    "            \"input_ids\": {\"type\": \"int64\", \"shape\": [-1]},\n",
    "            \"input_mask\": {\"type\": \"int64\", \"shape\": [-1]},\n",
    "            \"segment_ids\": {\"type\": \"int64\", \"shape\": [-1]},\n",
    "            \"label_ids\": {\"type\": \"int64\", \"shape\": [-1]},\n",
    "        }\n",
    "        writer.add_schema(nlp_schema, \"proprocessed classification dataset\")\n",
    "        data = []\n",
    "        for index, example in enumerate(examples):\n",
    "            if index % 10000 == 0:\n",
    "                print(\"Writing example %d of %d\" % (index, len(examples)))\n",
    "            record = self._convert_example_to_record(example, self.max_seq_len, self.tokenizer)\n",
    "            sample = {\n",
    "                \"input_ids\": np.array(record.input_ids, dtype=np.int64),\n",
    "                \"input_mask\": np.array(record.input_mask, dtype=np.int64),\n",
    "                \"segment_ids\": np.array(record.segment_ids, dtype=np.int64),\n",
    "                \"label_ids\": np.array([record.label_id], dtype=np.int64),\n",
    "            }\n",
    "            data.append(sample)\n",
    "        writer.write_raw_data(data)\n",
    "        writer.commit()"
   ]
  },
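  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The record layout built by `_convert_example_to_record` can be worked through by hand for one short single-sentence example (a sketch with a toy vocabulary; the ids are made up for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy vocabulary; the ids are made up for illustration.\n",
    "toy_vocab = {'[PAD]': 0, '[CLS]': 1, '[SEP]': 2, '今': 3, '天': 4}\n",
    "max_seq_length = 8\n",
    "\n",
    "tokens = ['[CLS]', '今', '天', '[SEP]']\n",
    "segment_ids = [0] * len(tokens)          # a single sentence -> all zeros\n",
    "input_ids = [toy_vocab[t] for t in tokens]\n",
    "input_mask = [1] * len(input_ids)        # 1 wherever there is a real token\n",
    "\n",
    "while len(input_ids) < max_seq_length:   # pad up to max_seq_length\n",
    "    input_ids.append(0)\n",
    "    input_mask.append(0)\n",
    "    segment_ids.append(0)\n",
    "\n",
    "print(input_ids)    # [1, 3, 4, 2, 0, 0, 0, 0]\n",
    "print(input_mask)   # [1, 1, 1, 1, 0, 0, 0, 0]\n",
    "print(segment_ids)  # [0, 0, 0, 0, 0, 0, 0, 0]"
   ]
  },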
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "为了拓展适配我们这里使用的tsv文件格式的数据集，我们继承上述`BaseReader`并定义新的`_read_tsv`方法来读取tsv文件数据格式。首先获取tsv文件的列名编号，在利用到csv文件的`reader`将所有行中的数据中的空格去掉，从而形成一句完整的句子。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "class ClassifyReader(BaseReader):\n",
    "    \"\"\"ClassifyReader\"\"\"\n",
    "\n",
    "    def _read_tsv(self, input_file, quotechar=None):\n",
    "        \"\"\"Reads a tab separated value file.\"\"\"\n",
    "        with io.open(input_file, \"r\", encoding=\"utf8\") as f:\n",
    "            reader = csv_reader(f, delimiter=\"\\t\")\n",
    "            headers = next(reader)\n",
    "            text_indices = [\n",
    "                index for index, h in enumerate(headers) if h != \"label\"\n",
    "            ]\n",
    "            Example = collections.namedtuple('Example', headers)\n",
    "\n",
    "            examples = []\n",
    "            for line in reader:\n",
    "                for index, text in enumerate(line):\n",
    "                    if index in text_indices:\n",
    "                        line[index] = text.replace(' ', '')\n",
    "                example = Example(*line)\n",
    "                examples.append(example)\n",
    "            return examples"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "最后我们定义一个数据转换方法，对`train.tsv`、`test.tsv`、`dev.tsv`三个文件分别定义一个数据转换格式的`ClassifyReader`对象加载数据内容，然后调用`file_based_convert_examples_to_features()`方法，从而将每个tsv文件都转换为对应的MindRecord。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "def convert_tsv_2_mindrecord(config):\n",
    "    for file in config:\n",
    "        reader = ClassifyReader(\n",
    "            vocab_path=config[file]['vocab_path'],\n",
    "            label_map_config=config[file]['label_map_config'],\n",
    "            max_seq_len=config[file]['max_seq_len'],\n",
    "            do_lower_case=config[file]['do_lower_case'],\n",
    "            random_seed=config[file]['random_seed']\n",
    "        )\n",
    "        reader.file_based_convert_examples_to_features(input_file=config[file]['input_file'], output_file=config[file]['output_file'])"
   ]
  },
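  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A `config` for `convert_tsv_2_mindrecord` could look as follows (a sketch: the input paths assume the dataset was extracted to `data/` as in section 2.1, and the output file names are our own choice):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical settings shared by all three splits.\n",
    "base = {\n",
    "    'vocab_path': 'data/vocab.txt',\n",
    "    'label_map_config': None,\n",
    "    'max_seq_len': 64,\n",
    "    'do_lower_case': True,\n",
    "    'random_seed': 1,\n",
    "}\n",
    "config = {\n",
    "    name: dict(base,\n",
    "               input_file=f'data/{name}.tsv',\n",
    "               output_file=f'data/{name}.mindrecord')\n",
    "    for name in ('train', 'dev', 'test')\n",
    "}\n",
    "# convert_tsv_2_mindrecord(config)  # would write data/train.mindrecord, ..."
   ]
  },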
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "为了能够尽快学习到更好的模型效果，我们这里直接使用Paddle已经训练好了的ERNIE模型文件，在其上做微调任务。首先下载ERNIE的模型文件"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--2022-11-10 03:27:36--  https://baidu-nlp.bj.bcebos.com/ERNIE_stable-1.0.1.tar.gz\n",
      "Resolving baidu-nlp.bj.bcebos.com (baidu-nlp.bj.bcebos.com)... 110.242.70.39, 110.242.70.3, 2409:8c04:1001:1002:0:ff:b001:368a\n",
      "Connecting to baidu-nlp.bj.bcebos.com (baidu-nlp.bj.bcebos.com)|110.242.70.39|:443... connected.\n",
      "HTTP request sent, awaiting response... 200 OK\n",
      "Length: 374178867 (357M) [application/x-gzip]\n",
      "Saving to: ‘ERNIE_stable-1.0.1.tar.gz’\n",
      "\n",
      "ERNIE_stable-1.0.1. 100%[===================>] 356.84M  10.5MB/s    in 41s     \n",
      "\n",
      "2022-11-10 03:28:17 (8.80 MB/s) - ‘ERNIE_stable-1.0.1.tar.gz’ saved [374178867/374178867]\n",
      "\n",
      "params/\n",
      "params/encoder_layer_5_multi_head_att_key_fc.w_0\n",
      "params/encoder_layer_0_post_ffn_layer_norm_scale\n",
      "params/encoder_layer_0_post_att_layer_norm_bias\n",
      "params/encoder_layer_0_multi_head_att_value_fc.w_0\n",
      "params/sent_embedding\n",
      "params/encoder_layer_11_multi_head_att_query_fc.w_0\n",
      "params/encoder_layer_8_ffn_fc_0.w_0\n",
      "params/encoder_layer_5_ffn_fc_1.w_0\n",
      "params/encoder_layer_6_ffn_fc_1.b_0\n",
      "params/encoder_layer_5_post_ffn_layer_norm_bias\n",
      "params/encoder_layer_10_multi_head_att_output_fc.b_0\n",
      "params/encoder_layer_4_ffn_fc_0.w_0\n",
      "params/encoder_layer_4_post_ffn_layer_norm_bias\n",
      "params/encoder_layer_3_ffn_fc_1.b_0\n",
      "params/encoder_layer_0_multi_head_att_value_fc.b_0\n",
      "params/encoder_layer_11_post_att_layer_norm_bias\n",
      "params/encoder_layer_3_multi_head_att_key_fc.w_0\n",
      "params/encoder_layer_10_multi_head_att_output_fc.w_0\n",
      "params/encoder_layer_5_ffn_fc_1.b_0\n",
      "params/encoder_layer_10_multi_head_att_value_fc.w_0\n",
      "params/encoder_layer_6_multi_head_att_query_fc.w_0\n",
      "params/encoder_layer_8_post_att_layer_norm_bias\n",
      "params/encoder_layer_2_multi_head_att_output_fc.w_0\n",
      "params/encoder_layer_1_multi_head_att_key_fc.w_0\n",
      "params/encoder_layer_4_multi_head_att_key_fc.w_0\n",
      "params/encoder_layer_6_post_ffn_layer_norm_bias\n",
      "params/encoder_layer_9_post_ffn_layer_norm_bias\n",
      "params/encoder_layer_11_post_ffn_layer_norm_scale\n",
      "params/encoder_layer_6_multi_head_att_value_fc.b_0\n",
      "params/encoder_layer_9_ffn_fc_0.w_0\n",
      "params/encoder_layer_2_post_ffn_layer_norm_scale\n",
      "params/encoder_layer_1_multi_head_att_query_fc.w_0\n",
      "params/encoder_layer_1_post_ffn_layer_norm_bias\n",
      "params/next_sent_3cls_fc.w_0\n",
      "params/encoder_layer_9_multi_head_att_key_fc.w_0\n",
      "params/encoder_layer_7_multi_head_att_value_fc.w_0\n",
      "params/encoder_layer_10_ffn_fc_0.b_0\n",
      "params/encoder_layer_2_multi_head_att_value_fc.w_0\n",
      "params/encoder_layer_8_post_ffn_layer_norm_scale\n",
      "params/encoder_layer_3_multi_head_att_output_fc.w_0\n",
      "params/encoder_layer_2_multi_head_att_query_fc.w_0\n",
      "params/encoder_layer_11_multi_head_att_query_fc.b_0\n",
      "params/encoder_layer_1_ffn_fc_0.w_0\n",
      "params/encoder_layer_8_multi_head_att_value_fc.w_0\n",
      "params/word_embedding\n",
      "params/mask_lm_trans_layer_norm_bias\n",
      "params/encoder_layer_8_multi_head_att_query_fc.w_0\n",
      "params/encoder_layer_1_multi_head_att_query_fc.b_0\n",
      "params/encoder_layer_5_ffn_fc_0.b_0\n",
      "params/encoder_layer_3_multi_head_att_key_fc.b_0\n",
      "params/encoder_layer_7_ffn_fc_1.b_0\n",
      "params/encoder_layer_2_post_att_layer_norm_bias\n",
      "params/encoder_layer_8_post_att_layer_norm_scale\n",
      "params/encoder_layer_2_ffn_fc_1.b_0\n",
      "params/encoder_layer_11_post_ffn_layer_norm_bias\n",
      "params/encoder_layer_6_multi_head_att_key_fc.b_0\n",
      "params/mask_lm_trans_layer_norm_scale\n",
      "params/encoder_layer_11_multi_head_att_key_fc.b_0\n",
      "params/encoder_layer_5_post_ffn_layer_norm_scale\n",
      "params/encoder_layer_0_ffn_fc_0.b_0\n",
      "params/encoder_layer_9_multi_head_att_key_fc.b_0\n",
      "params/encoder_layer_9_post_att_layer_norm_scale\n",
      "params/encoder_layer_7_post_ffn_layer_norm_scale\n",
      "params/encoder_layer_4_ffn_fc_0.b_0\n",
      "params/encoder_layer_9_multi_head_att_value_fc.w_0\n",
      "params/pos_embedding\n",
      "params/mask_lm_trans_fc.w_0\n",
      "params/encoder_layer_4_multi_head_att_value_fc.b_0\n",
      "params/encoder_layer_4_multi_head_att_query_fc.w_0\n",
      "params/encoder_layer_5_multi_head_att_value_fc.w_0\n",
      "params/encoder_layer_3_ffn_fc_1.w_0\n",
      "params/encoder_layer_9_post_att_layer_norm_bias\n",
      "params/accuracy_0.tmp_0\n",
      "params/encoder_layer_3_post_att_layer_norm_bias\n",
      "params/encoder_layer_7_multi_head_att_output_fc.b_0\n",
      "params/encoder_layer_7_ffn_fc_1.w_0\n",
      "params/encoder_layer_11_multi_head_att_output_fc.b_0\n",
      "params/encoder_layer_0_multi_head_att_key_fc.w_0\n",
      "params/encoder_layer_6_ffn_fc_0.w_0\n",
      "params/encoder_layer_5_multi_head_att_query_fc.w_0\n",
      "params/encoder_layer_10_post_att_layer_norm_scale\n",
      "params/encoder_layer_2_ffn_fc_1.w_0\n",
      "params/encoder_layer_6_multi_head_att_key_fc.w_0\n",
      "params/encoder_layer_9_ffn_fc_1.w_0\n",
      "params/encoder_layer_10_ffn_fc_0.w_0\n",
      "params/pre_encoder_layer_norm_bias\n",
      "params/encoder_layer_1_ffn_fc_0.b_0\n",
      "params/encoder_layer_1_post_att_layer_norm_scale\n",
      "params/encoder_layer_9_post_ffn_layer_norm_scale\n",
      "params/encoder_layer_9_multi_head_att_query_fc.w_0\n",
      "params/encoder_layer_2_multi_head_att_query_fc.b_0\n",
      "params/tmp_51\n",
      "params/encoder_layer_11_ffn_fc_1.w_0\n",
      "params/encoder_layer_7_multi_head_att_query_fc.b_0\n",
      "params/encoder_layer_11_multi_head_att_key_fc.w_0\n",
      "params/encoder_layer_8_multi_head_att_key_fc.w_0\n",
      "params/encoder_layer_5_multi_head_att_value_fc.b_0\n",
      "params/encoder_layer_6_post_att_layer_norm_scale\n",
      "params/encoder_layer_5_ffn_fc_0.w_0\n",
      "params/encoder_layer_4_multi_head_att_query_fc.b_0\n",
      "params/encoder_layer_10_post_att_layer_norm_bias\n",
      "params/encoder_layer_3_post_att_layer_norm_scale\n",
      "params/encoder_layer_6_ffn_fc_1.w_0\n",
      "params/mask_lm_out_fc.b_0\n",
      "params/encoder_layer_3_ffn_fc_0.w_0\n",
      "params/encoder_layer_6_ffn_fc_0.b_0\n",
      "params/encoder_layer_1_post_att_layer_norm_bias\n",
      "params/encoder_layer_6_multi_head_att_query_fc.b_0\n",
      "params/encoder_layer_3_ffn_fc_0.b_0\n",
      "params/encoder_layer_2_post_att_layer_norm_scale\n",
      "params/encoder_layer_7_ffn_fc_0.w_0\n",
      "params/encoder_layer_8_ffn_fc_1.w_0\n",
      "params/encoder_layer_11_multi_head_att_output_fc.w_0\n",
      "params/encoder_layer_9_multi_head_att_value_fc.b_0\n",
      "params/encoder_layer_3_multi_head_att_output_fc.b_0\n",
      "params/encoder_layer_9_multi_head_att_output_fc.w_0\n",
      "params/encoder_layer_4_multi_head_att_value_fc.w_0\n",
      "params/encoder_layer_4_ffn_fc_1.w_0\n",
      "params/encoder_layer_5_post_att_layer_norm_scale\n",
      "params/encoder_layer_3_post_ffn_layer_norm_bias\n",
      "params/encoder_layer_2_multi_head_att_value_fc.b_0\n",
      "params/encoder_layer_5_multi_head_att_key_fc.b_0\n",
      "params/encoder_layer_0_ffn_fc_1.w_0\n",
      "params/encoder_layer_0_post_ffn_layer_norm_bias\n",
      "params/encoder_layer_11_ffn_fc_0.b_0\n",
      "params/pooled_fc.b_0\n",
      "params/encoder_layer_2_multi_head_att_output_fc.b_0\n",
      "params/encoder_layer_8_multi_head_att_value_fc.b_0\n",
      "params/encoder_layer_5_multi_head_att_output_fc.w_0\n",
      "params/encoder_layer_1_ffn_fc_1.w_0\n",
      "params/encoder_layer_2_ffn_fc_0.b_0\n",
      "params/encoder_layer_5_multi_head_att_output_fc.b_0\n",
      "params/encoder_layer_3_multi_head_att_query_fc.w_0\n",
      "params/encoder_layer_0_ffn_fc_1.b_0\n",
      "params/encoder_layer_7_multi_head_att_key_fc.w_0\n",
      "params/encoder_layer_1_multi_head_att_output_fc.w_0\n",
      "params/encoder_layer_1_multi_head_att_output_fc.b_0\n",
      "params/encoder_layer_6_post_ffn_layer_norm_scale\n",
      "params/encoder_layer_2_multi_head_att_key_fc.b_0\n",
      "params/encoder_layer_7_ffn_fc_0.b_0\n",
      "params/encoder_layer_11_ffn_fc_0.w_0\n",
      "params/encoder_layer_1_ffn_fc_1.b_0\n",
      "params/encoder_layer_10_multi_head_att_key_fc.w_0\n",
      "params/reduce_mean_0.tmp_0\n",
      "params/encoder_layer_7_post_ffn_layer_norm_bias\n",
      "params/encoder_layer_10_multi_head_att_value_fc.b_0\n",
      "params/@LR_DECAY_COUNTER@\n",
      "params/encoder_layer_8_multi_head_att_key_fc.b_0\n",
      "params/encoder_layer_4_post_ffn_layer_norm_scale\n",
      "params/encoder_layer_10_post_ffn_layer_norm_bias\n",
      "params/encoder_layer_9_ffn_fc_1.b_0\n",
      "params/encoder_layer_3_multi_head_att_value_fc.b_0\n",
      "params/encoder_layer_6_multi_head_att_value_fc.w_0\n",
      "params/encoder_layer_8_multi_head_att_query_fc.b_0\n",
      "params/encoder_layer_8_ffn_fc_1.b_0\n",
      "params/encoder_layer_4_post_att_layer_norm_bias\n",
      "params/encoder_layer_0_post_att_layer_norm_scale\n",
      "params/encoder_layer_0_multi_head_att_query_fc.w_0\n",
      "params/encoder_layer_0_multi_head_att_output_fc.b_0\n",
      "params/encoder_layer_4_multi_head_att_output_fc.b_0\n",
      "params/encoder_layer_8_ffn_fc_0.b_0\n",
      "params/pre_encoder_layer_norm_scale\n",
      "params/encoder_layer_11_ffn_fc_1.b_0\n",
      "params/encoder_layer_8_multi_head_att_output_fc.b_0\n",
      "params/encoder_layer_10_multi_head_att_query_fc.b_0\n",
      "params/encoder_layer_1_multi_head_att_key_fc.b_0\n",
      "params/encoder_layer_6_multi_head_att_output_fc.b_0\n",
      "params/mask_lm_trans_fc.b_0\n",
      "params/encoder_layer_9_multi_head_att_output_fc.b_0\n",
      "params/encoder_layer_7_multi_head_att_value_fc.b_0\n",
      "params/encoder_layer_10_multi_head_att_key_fc.b_0\n",
      "params/encoder_layer_8_multi_head_att_output_fc.w_0\n",
      "params/encoder_layer_2_multi_head_att_key_fc.w_0\n",
      "params/encoder_layer_10_multi_head_att_query_fc.w_0\n",
      "params/encoder_layer_0_multi_head_att_query_fc.b_0\n",
      "params/encoder_layer_11_multi_head_att_value_fc.w_0\n",
      "params/pooled_fc.w_0\n",
      "params/encoder_layer_3_multi_head_att_value_fc.w_0\n",
      "params/encoder_layer_0_multi_head_att_key_fc.b_0\n",
      "params/encoder_layer_3_multi_head_att_query_fc.b_0\n",
      "params/encoder_layer_11_multi_head_att_value_fc.b_0\n",
      "params/next_sent_3cls_fc.b_0\n",
      "params/encoder_layer_2_ffn_fc_0.w_0\n",
      "params/encoder_layer_1_multi_head_att_value_fc.w_0\n",
      "params/encoder_layer_7_multi_head_att_query_fc.w_0\n",
      "params/encoder_layer_3_post_ffn_layer_norm_scale\n",
      "params/encoder_layer_1_post_ffn_layer_norm_scale\n",
      "params/encoder_layer_6_post_att_layer_norm_bias\n",
      "params/encoder_layer_4_multi_head_att_output_fc.w_0\n",
      "params/encoder_layer_6_multi_head_att_output_fc.w_0\n",
      "params/encoder_layer_7_multi_head_att_output_fc.w_0\n",
      "params/encoder_layer_10_ffn_fc_1.b_0\n",
      "params/encoder_layer_11_post_att_layer_norm_scale\n",
      "params/encoder_layer_4_post_att_layer_norm_scale\n",
      "params/encoder_layer_5_multi_head_att_query_fc.b_0\n",
      "params/encoder_layer_4_multi_head_att_key_fc.b_0\n",
      "params/encoder_layer_4_ffn_fc_1.b_0\n",
      "params/encoder_layer_0_ffn_fc_0.w_0\n",
      "params/encoder_layer_7_multi_head_att_key_fc.b_0\n",
      "params/encoder_layer_5_post_att_layer_norm_bias\n",
      "params/encoder_layer_9_ffn_fc_0.b_0\n",
      "params/encoder_layer_1_multi_head_att_value_fc.b_0\n",
      "params/encoder_layer_10_post_ffn_layer_norm_scale\n",
      "params/encoder_layer_2_post_ffn_layer_norm_bias\n",
      "params/encoder_layer_7_post_att_layer_norm_bias\n",
      "params/encoder_layer_10_ffn_fc_1.w_0\n",
      "params/encoder_layer_0_multi_head_att_output_fc.w_0\n",
      "params/encoder_layer_9_multi_head_att_query_fc.b_0\n",
      "params/encoder_layer_8_post_ffn_layer_norm_bias\n",
      "params/encoder_layer_7_post_att_layer_norm_scale\n",
      "vocab.txt\n",
      "ernie_config.json\n"
     ]
    }
   ],
   "source": [
    "# create the target directory for the pretrained model\n",
    "!mkdir -p pretrain_models/ernie\n",
    "\n",
    "# download the ERNIE 1.0 pretrained model archive released by Baidu\n",
    "!wget --no-check-certificate https://baidu-nlp.bj.bcebos.com/ERNIE_stable-1.0.1.tar.gz -O ERNIE_stable-1.0.1.tar.gz\n",
    "\n",
    "# extract it into pretrain_models/ernie, then remove the archive\n",
    "!tar -zxvf ERNIE_stable-1.0.1.tar.gz -C pretrain_models/ernie\n",
    "!/bin/rm ERNIE_stable-1.0.1.tar.gz"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After downloading and extracting the model archive, we obtain the pretrained Paddle ERNIE model weights, which then need to be converted into a checkpoint format that MindSpore can load."
   ]
  },
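  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The conversion is essentially a parameter-name mapping, typically plus a transpose of the `w_0` dense weights (Paddle stores `Linear` weights as (in, out), while MindSpore `Dense` expects (out, in), matching the target shapes printed below). The encoder-layer part of the name mapping can be sketched as follows; `paddle_to_mindspore` and `SUFFIX_MAP` are illustrative names for this sketch, not the actual conversion script:\n",
    "```python\n",
    "import re\n",
    "\n",
    "# Paddle suffix -> MindSpore suffix (illustrative subset, read off the\n",
    "# `paddle_name -> mindspore_name` pairs printed by the conversion step)\n",
    "SUFFIX_MAP = {\n",
    "    'multi_head_att_query_fc.w_0': 'attention.attention.query_layer.weight',\n",
    "    'multi_head_att_key_fc.w_0': 'attention.attention.key_layer.weight',\n",
    "    'multi_head_att_value_fc.w_0': 'attention.attention.value_layer.weight',\n",
    "    'multi_head_att_output_fc.w_0': 'attention.output.dense.weight',\n",
    "    'post_att_layer_norm_scale': 'attention.output.layernorm.gamma',\n",
    "    'post_att_layer_norm_bias': 'attention.output.layernorm.beta',\n",
    "    'ffn_fc_0.w_0': 'intermediate.weight',\n",
    "    'ffn_fc_1.w_0': 'output.dense.weight',\n",
    "    'post_ffn_layer_norm_scale': 'output.layernorm.gamma',\n",
    "    'post_ffn_layer_norm_bias': 'output.layernorm.beta',\n",
    "}\n",
    "\n",
    "def paddle_to_mindspore(paddle_name):\n",
    "    # Encoder-layer names follow a fixed pattern; embeddings, the pooler\n",
    "    # and the pretraining heads need their own (separate) rules.\n",
    "    m = re.match(r'encoder_layer_(\\d+)_(.+)$', paddle_name)\n",
    "    if m and m.group(2) in SUFFIX_MAP:\n",
    "        return 'ernie.ernie.ernie_encoder.layers.{}.{}'.format(\n",
    "            m.group(1), SUFFIX_MAP[m.group(2)])\n",
    "    return None\n",
    "```"
   ]
  },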
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "====================save vocab file====================\n",
      "====================extract weights====================\n",
      "encoder_layer_1_multi_head_att_query_fc.b_0 -> ernie.ernie.ernie_encoder.layers.1.attention.attention.query_layer.bias (768,)\n",
      "encoder_layer_9_ffn_fc_1.b_0 -> ernie.ernie.ernie_encoder.layers.9.output.dense.bias (768,)\n",
      "encoder_layer_2_multi_head_att_key_fc.b_0 -> ernie.ernie.ernie_encoder.layers.2.attention.attention.key_layer.bias (768,)\n",
      "encoder_layer_0_multi_head_att_key_fc.b_0 -> ernie.ernie.ernie_encoder.layers.0.attention.attention.key_layer.bias (768,)\n",
      "encoder_layer_6_ffn_fc_0.b_0 -> ernie.ernie.ernie_encoder.layers.6.intermediate.bias (3072,)\n",
      "encoder_layer_2_multi_head_att_value_fc.b_0 -> ernie.ernie.ernie_encoder.layers.2.attention.attention.value_layer.bias (768,)\n",
      "encoder_layer_4_multi_head_att_output_fc.b_0 -> ernie.ernie.ernie_encoder.layers.4.attention.output.dense.bias (768,)\n",
      "encoder_layer_10_multi_head_att_output_fc.b_0 -> ernie.ernie.ernie_encoder.layers.10.attention.output.dense.bias (768,)\n",
      "encoder_layer_2_multi_head_att_key_fc.w_0 -> ernie.ernie.ernie_encoder.layers.2.attention.attention.key_layer.weight (768, 768)\n",
      "encoder_layer_10_ffn_fc_0.w_0 -> ernie.ernie.ernie_encoder.layers.10.intermediate.weight (3072, 768)\n",
      "encoder_layer_10_multi_head_att_key_fc.b_0 -> ernie.ernie.ernie_encoder.layers.10.attention.attention.key_layer.bias (768,)\n",
      "encoder_layer_2_post_ffn_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.2.output.layernorm.gamma (768,)\n",
      "encoder_layer_5_multi_head_att_query_fc.w_0 -> ernie.ernie.ernie_encoder.layers.5.attention.attention.query_layer.weight (768, 768)\n",
      "encoder_layer_3_post_ffn_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.3.output.layernorm.gamma (768,)\n",
      "encoder_layer_3_multi_head_att_output_fc.b_0 -> ernie.ernie.ernie_encoder.layers.3.attention.output.dense.bias (768,)\n",
      "encoder_layer_10_post_att_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.10.attention.output.layernorm.beta (768,)\n",
      "encoder_layer_11_multi_head_att_query_fc.b_0 -> ernie.ernie.ernie_encoder.layers.11.attention.attention.query_layer.bias (768,)\n",
      "encoder_layer_10_multi_head_att_output_fc.w_0 -> ernie.ernie.ernie_encoder.layers.10.attention.output.dense.weight (768, 768)\n",
      "encoder_layer_11_post_att_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.11.attention.output.layernorm.gamma (768,)\n",
      "encoder_layer_9_multi_head_att_query_fc.b_0 -> ernie.ernie.ernie_encoder.layers.9.attention.attention.query_layer.bias (768,)\n",
      "encoder_layer_7_ffn_fc_0.b_0 -> ernie.ernie.ernie_encoder.layers.7.intermediate.bias (3072,)\n",
      "encoder_layer_3_ffn_fc_0.b_0 -> ernie.ernie.ernie_encoder.layers.3.intermediate.bias (3072,)\n",
      "encoder_layer_3_ffn_fc_0.w_0 -> ernie.ernie.ernie_encoder.layers.3.intermediate.weight (3072, 768)\n",
      "encoder_layer_9_post_ffn_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.9.output.layernorm.gamma (768,)\n",
      "encoder_layer_9_multi_head_att_output_fc.w_0 -> ernie.ernie.ernie_encoder.layers.9.attention.output.dense.weight (768, 768)\n",
      "encoder_layer_3_multi_head_att_value_fc.w_0 -> ernie.ernie.ernie_encoder.layers.3.attention.attention.value_layer.weight (768, 768)\n",
      "encoder_layer_10_multi_head_att_query_fc.b_0 -> ernie.ernie.ernie_encoder.layers.10.attention.attention.query_layer.bias (768,)\n",
      "encoder_layer_8_ffn_fc_0.w_0 -> ernie.ernie.ernie_encoder.layers.8.intermediate.weight (3072, 768)\n",
      "encoder_layer_11_post_ffn_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.11.output.layernorm.gamma (768,)\n",
      "encoder_layer_9_multi_head_att_key_fc.b_0 -> ernie.ernie.ernie_encoder.layers.9.attention.attention.key_layer.bias (768,)\n",
      "pre_encoder_layer_norm_bias -> ernie.ernie.ernie_embedding_postprocessor.layernorm.beta (768,)\n",
      "encoder_layer_8_multi_head_att_value_fc.b_0 -> ernie.ernie.ernie_encoder.layers.8.attention.attention.value_layer.bias (768,)\n",
      "encoder_layer_9_post_ffn_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.9.output.layernorm.beta (768,)\n",
      "word_embedding -> ernie.ernie.ernie_embedding_lookup.embedding_table (18000, 768)\n",
      "encoder_layer_0_multi_head_att_query_fc.w_0 -> ernie.ernie.ernie_encoder.layers.0.attention.attention.query_layer.weight (768, 768)\n",
      "encoder_layer_1_multi_head_att_value_fc.w_0 -> ernie.ernie.ernie_encoder.layers.1.attention.attention.value_layer.weight (768, 768)\n",
      "encoder_layer_2_ffn_fc_1.w_0 -> ernie.ernie.ernie_encoder.layers.2.output.dense.weight (768, 3072)\n",
      "encoder_layer_10_ffn_fc_1.w_0 -> ernie.ernie.ernie_encoder.layers.10.output.dense.weight (768, 3072)\n",
      "encoder_layer_0_ffn_fc_0.w_0 -> ernie.ernie.ernie_encoder.layers.0.intermediate.weight (3072, 768)\n",
      "encoder_layer_5_post_ffn_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.5.output.layernorm.beta (768,)\n",
      "encoder_layer_5_multi_head_att_value_fc.w_0 -> ernie.ernie.ernie_encoder.layers.5.attention.attention.value_layer.weight (768, 768)\n",
      "encoder_layer_9_multi_head_att_output_fc.b_0 -> ernie.ernie.ernie_encoder.layers.9.attention.output.dense.bias (768,)\n",
      "encoder_layer_1_ffn_fc_1.w_0 -> ernie.ernie.ernie_encoder.layers.1.output.dense.weight (768, 3072)\n",
      "encoder_layer_4_ffn_fc_0.b_0 -> ernie.ernie.ernie_encoder.layers.4.intermediate.bias (3072,)\n",
      "encoder_layer_9_multi_head_att_query_fc.w_0 -> ernie.ernie.ernie_encoder.layers.9.attention.attention.query_layer.weight (768, 768)\n",
      "encoder_layer_9_post_att_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.9.attention.output.layernorm.gamma (768,)\n",
      "encoder_layer_10_multi_head_att_query_fc.w_0 -> ernie.ernie.ernie_encoder.layers.10.attention.attention.query_layer.weight (768, 768)\n",
      "encoder_layer_8_multi_head_att_output_fc.w_0 -> ernie.ernie.ernie_encoder.layers.8.attention.output.dense.weight (768, 768)\n",
      "encoder_layer_7_multi_head_att_key_fc.b_0 -> ernie.ernie.ernie_encoder.layers.7.attention.attention.key_layer.bias (768,)\n",
      "encoder_layer_8_multi_head_att_query_fc.w_0 -> ernie.ernie.ernie_encoder.layers.8.attention.attention.query_layer.weight (768, 768)\n",
      "encoder_layer_5_post_att_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.5.attention.output.layernorm.beta (768,)\n",
      "sent_embedding -> ernie.ernie.ernie_embedding_postprocessor.token_type_embedding.embedding_table (2, 768)\n",
      "encoder_layer_2_multi_head_att_output_fc.b_0 -> ernie.ernie.ernie_encoder.layers.2.attention.output.dense.bias (768,)\n",
      "encoder_layer_0_post_att_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.0.attention.output.layernorm.gamma (768,)\n",
      "encoder_layer_5_multi_head_att_output_fc.b_0 -> ernie.ernie.ernie_encoder.layers.5.attention.output.dense.bias (768,)\n",
      "encoder_layer_6_post_ffn_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.6.output.layernorm.gamma (768,)\n",
      "encoder_layer_5_ffn_fc_0.b_0 -> ernie.ernie.ernie_encoder.layers.5.intermediate.bias (3072,)\n",
      "encoder_layer_7_post_att_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.7.attention.output.layernorm.gamma (768,)\n",
      "pooled_fc.w_0 -> ernie.ernie.dense.weight (768, 768)\n",
      "encoder_layer_0_ffn_fc_1.b_0 -> ernie.ernie.ernie_encoder.layers.0.output.dense.bias (768,)\n",
      "encoder_layer_4_post_ffn_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.4.output.layernorm.gamma (768,)\n",
      "encoder_layer_5_multi_head_att_key_fc.w_0 -> ernie.ernie.ernie_encoder.layers.5.attention.attention.key_layer.weight (768, 768)\n",
      "encoder_layer_6_ffn_fc_0.w_0 -> ernie.ernie.ernie_encoder.layers.6.intermediate.weight (3072, 768)\n",
      "encoder_layer_6_multi_head_att_key_fc.b_0 -> ernie.ernie.ernie_encoder.layers.6.attention.attention.key_layer.bias (768,)\n",
      "encoder_layer_8_multi_head_att_key_fc.b_0 -> ernie.ernie.ernie_encoder.layers.8.attention.attention.key_layer.bias (768,)\n",
      "encoder_layer_8_multi_head_att_value_fc.w_0 -> ernie.ernie.ernie_encoder.layers.8.attention.attention.value_layer.weight (768, 768)\n",
      "encoder_layer_11_multi_head_att_key_fc.b_0 -> ernie.ernie.ernie_encoder.layers.11.attention.attention.key_layer.bias (768,)\n",
      "encoder_layer_6_multi_head_att_key_fc.w_0 -> ernie.ernie.ernie_encoder.layers.6.attention.attention.key_layer.weight (768, 768)\n",
      "pre_encoder_layer_norm_scale -> ernie.ernie.ernie_embedding_postprocessor.layernorm.gamma (768,)\n",
      "encoder_layer_11_multi_head_att_value_fc.w_0 -> ernie.ernie.ernie_encoder.layers.11.attention.attention.value_layer.weight (768, 768)\n",
      "encoder_layer_5_multi_head_att_query_fc.b_0 -> ernie.ernie.ernie_encoder.layers.5.attention.attention.query_layer.bias (768,)\n",
      "encoder_layer_11_multi_head_att_query_fc.w_0 -> ernie.ernie.ernie_encoder.layers.11.attention.attention.query_layer.weight (768, 768)\n",
      "encoder_layer_10_post_ffn_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.10.output.layernorm.beta (768,)\n",
      "encoder_layer_10_multi_head_att_key_fc.w_0 -> ernie.ernie.ernie_encoder.layers.10.attention.attention.key_layer.weight (768, 768)\n",
      "encoder_layer_7_post_ffn_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.7.output.layernorm.beta (768,)\n",
      "encoder_layer_1_multi_head_att_value_fc.b_0 -> ernie.ernie.ernie_encoder.layers.1.attention.attention.value_layer.bias (768,)\n",
      "encoder_layer_4_post_att_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.4.attention.output.layernorm.beta (768,)\n",
      "encoder_layer_6_ffn_fc_1.w_0 -> ernie.ernie.ernie_encoder.layers.6.output.dense.weight (768, 3072)\n",
      "encoder_layer_0_post_ffn_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.0.output.layernorm.beta (768,)\n",
      "encoder_layer_0_multi_head_att_key_fc.w_0 -> ernie.ernie.ernie_encoder.layers.0.attention.attention.key_layer.weight (768, 768)\n",
      "encoder_layer_11_multi_head_att_key_fc.w_0 -> ernie.ernie.ernie_encoder.layers.11.attention.attention.key_layer.weight (768, 768)\n",
      "encoder_layer_11_ffn_fc_0.b_0 -> ernie.ernie.ernie_encoder.layers.11.intermediate.bias (3072,)\n",
      "encoder_layer_3_multi_head_att_key_fc.b_0 -> ernie.ernie.ernie_encoder.layers.3.attention.attention.key_layer.bias (768,)\n",
      "encoder_layer_3_multi_head_att_query_fc.b_0 -> ernie.ernie.ernie_encoder.layers.3.attention.attention.query_layer.bias (768,)\n",
      "encoder_layer_9_ffn_fc_0.b_0 -> ernie.ernie.ernie_encoder.layers.9.intermediate.bias (3072,)\n",
      "encoder_layer_2_post_att_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.2.attention.output.layernorm.gamma (768,)\n",
      "encoder_layer_1_multi_head_att_key_fc.b_0 -> ernie.ernie.ernie_encoder.layers.1.attention.attention.key_layer.bias (768,)\n",
      "encoder_layer_3_multi_head_att_value_fc.b_0 -> ernie.ernie.ernie_encoder.layers.3.attention.attention.value_layer.bias (768,)\n",
      "encoder_layer_8_multi_head_att_query_fc.b_0 -> ernie.ernie.ernie_encoder.layers.8.attention.attention.query_layer.bias (768,)\n",
      "encoder_layer_11_ffn_fc_0.w_0 -> ernie.ernie.ernie_encoder.layers.11.intermediate.weight (3072, 768)\n",
      "encoder_layer_9_ffn_fc_1.w_0 -> ernie.ernie.ernie_encoder.layers.9.output.dense.weight (768, 3072)\n",
      "encoder_layer_1_ffn_fc_0.b_0 -> ernie.ernie.ernie_encoder.layers.1.intermediate.bias (3072,)\n",
      "encoder_layer_11_multi_head_att_output_fc.b_0 -> ernie.ernie.ernie_encoder.layers.11.attention.output.dense.bias (768,)\n",
      "encoder_layer_2_post_ffn_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.2.output.layernorm.beta (768,)\n",
      "encoder_layer_1_post_att_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.1.attention.output.layernorm.gamma (768,)\n",
      "encoder_layer_3_multi_head_att_output_fc.w_0 -> ernie.ernie.ernie_encoder.layers.3.attention.output.dense.weight (768, 768)\n",
      "encoder_layer_2_multi_head_att_value_fc.w_0 -> ernie.ernie.ernie_encoder.layers.2.attention.attention.value_layer.weight (768, 768)\n",
      "encoder_layer_10_ffn_fc_1.b_0 -> ernie.ernie.ernie_encoder.layers.10.output.dense.bias (768,)\n",
      "encoder_layer_6_post_att_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.6.attention.output.layernorm.beta (768,)\n",
      "encoder_layer_8_ffn_fc_0.b_0 -> ernie.ernie.ernie_encoder.layers.8.intermediate.bias (3072,)\n",
      "encoder_layer_8_ffn_fc_1.b_0 -> ernie.ernie.ernie_encoder.layers.8.output.dense.bias (768,)\n",
      "encoder_layer_7_ffn_fc_0.w_0 -> ernie.ernie.ernie_encoder.layers.7.intermediate.weight (3072, 768)\n",
      "encoder_layer_7_post_att_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.7.attention.output.layernorm.beta (768,)\n",
      "encoder_layer_2_multi_head_att_output_fc.w_0 -> ernie.ernie.ernie_encoder.layers.2.attention.output.dense.weight (768, 768)\n",
      "encoder_layer_7_multi_head_att_query_fc.w_0 -> ernie.ernie.ernie_encoder.layers.7.attention.attention.query_layer.weight (768, 768)\n",
      "encoder_layer_7_multi_head_att_key_fc.w_0 -> ernie.ernie.ernie_encoder.layers.7.attention.attention.key_layer.weight (768, 768)\n",
      "encoder_layer_5_multi_head_att_value_fc.b_0 -> ernie.ernie.ernie_encoder.layers.5.attention.attention.value_layer.bias (768,)\n",
      "encoder_layer_4_multi_head_att_query_fc.w_0 -> ernie.ernie.ernie_encoder.layers.4.attention.attention.query_layer.weight (768, 768)\n",
      "encoder_layer_10_multi_head_att_value_fc.b_0 -> ernie.ernie.ernie_encoder.layers.10.attention.attention.value_layer.bias (768,)\n",
      "encoder_layer_5_post_att_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.5.attention.output.layernorm.gamma (768,)\n",
      "encoder_layer_0_post_ffn_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.0.output.layernorm.gamma (768,)\n",
      "encoder_layer_11_multi_head_att_value_fc.b_0 -> ernie.ernie.ernie_encoder.layers.11.attention.attention.value_layer.bias (768,)\n",
      "encoder_layer_5_multi_head_att_key_fc.b_0 -> ernie.ernie.ernie_encoder.layers.5.attention.attention.key_layer.bias (768,)\n",
      "encoder_layer_11_ffn_fc_1.b_0 -> ernie.ernie.ernie_encoder.layers.11.output.dense.bias (768,)\n",
      "encoder_layer_0_multi_head_att_output_fc.w_0 -> ernie.ernie.ernie_encoder.layers.0.attention.output.dense.weight (768, 768)\n",
      "encoder_layer_10_ffn_fc_0.b_0 -> ernie.ernie.ernie_encoder.layers.10.intermediate.bias (3072,)\n",
      "encoder_layer_3_ffn_fc_1.b_0 -> ernie.ernie.ernie_encoder.layers.3.output.dense.bias (768,)\n",
      "encoder_layer_5_multi_head_att_output_fc.w_0 -> ernie.ernie.ernie_encoder.layers.5.attention.output.dense.weight (768, 768)\n",
      "encoder_layer_6_post_att_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.6.attention.output.layernorm.gamma (768,)\n",
      "encoder_layer_8_ffn_fc_1.w_0 -> ernie.ernie.ernie_encoder.layers.8.output.dense.weight (768, 3072)\n",
      "encoder_layer_0_multi_head_att_value_fc.b_0 -> ernie.ernie.ernie_encoder.layers.0.attention.attention.value_layer.bias (768,)\n",
      "encoder_layer_2_ffn_fc_0.b_0 -> ernie.ernie.ernie_encoder.layers.2.intermediate.bias (3072,)\n",
      "encoder_layer_1_ffn_fc_0.w_0 -> ernie.ernie.ernie_encoder.layers.1.intermediate.weight (3072, 768)\n",
      "encoder_layer_4_multi_head_att_output_fc.w_0 -> ernie.ernie.ernie_encoder.layers.4.attention.output.dense.weight (768, 768)\n",
      "encoder_layer_10_post_att_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.10.attention.output.layernorm.gamma (768,)\n",
      "encoder_layer_0_ffn_fc_0.b_0 -> ernie.ernie.ernie_encoder.layers.0.intermediate.bias (3072,)\n",
      "encoder_layer_1_multi_head_att_output_fc.b_0 -> ernie.ernie.ernie_encoder.layers.1.attention.output.dense.bias (768,)\n",
      "encoder_layer_4_ffn_fc_0.w_0 -> ernie.ernie.ernie_encoder.layers.4.intermediate.weight (3072, 768)\n",
      "encoder_layer_5_ffn_fc_1.b_0 -> ernie.ernie.ernie_encoder.layers.5.output.dense.bias (768,)\n",
      "encoder_layer_4_multi_head_att_key_fc.b_0 -> ernie.ernie.ernie_encoder.layers.4.attention.attention.key_layer.bias (768,)\n",
      "pos_embedding -> ernie.ernie.ernie_embedding_postprocessor.full_position_embedding.embedding_table (513, 768)\n",
      "encoder_layer_6_ffn_fc_1.b_0 -> ernie.ernie.ernie_encoder.layers.6.output.dense.bias (768,)\n",
      "encoder_layer_9_multi_head_att_key_fc.w_0 -> ernie.ernie.ernie_encoder.layers.9.attention.attention.key_layer.weight (768, 768)\n",
      "encoder_layer_0_ffn_fc_1.w_0 -> ernie.ernie.ernie_encoder.layers.0.output.dense.weight (768, 3072)\n",
      "encoder_layer_5_ffn_fc_0.w_0 -> ernie.ernie.ernie_encoder.layers.5.intermediate.weight (3072, 768)\n",
      "encoder_layer_1_post_ffn_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.1.output.layernorm.gamma (768,)\n",
      "encoder_layer_7_multi_head_att_query_fc.b_0 -> ernie.ernie.ernie_encoder.layers.7.attention.attention.query_layer.bias (768,)\n",
      "encoder_layer_6_multi_head_att_output_fc.b_0 -> ernie.ernie.ernie_encoder.layers.6.attention.output.dense.bias (768,)\n",
      "encoder_layer_6_post_ffn_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.6.output.layernorm.beta (768,)\n",
      "encoder_layer_5_post_ffn_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.5.output.layernorm.gamma (768,)\n",
      "encoder_layer_0_multi_head_att_output_fc.b_0 -> ernie.ernie.ernie_encoder.layers.0.attention.output.dense.bias (768,)\n",
      "encoder_layer_11_ffn_fc_1.w_0 -> ernie.ernie.ernie_encoder.layers.11.output.dense.weight (768, 3072)\n",
      "encoder_layer_4_multi_head_att_key_fc.w_0 -> ernie.ernie.ernie_encoder.layers.4.attention.attention.key_layer.weight (768, 768)\n",
      "encoder_layer_10_multi_head_att_value_fc.w_0 -> ernie.ernie.ernie_encoder.layers.10.attention.attention.value_layer.weight (768, 768)\n",
      "encoder_layer_6_multi_head_att_output_fc.w_0 -> ernie.ernie.ernie_encoder.layers.6.attention.output.dense.weight (768, 768)\n",
      "encoder_layer_8_multi_head_att_key_fc.w_0 -> ernie.ernie.ernie_encoder.layers.8.attention.attention.key_layer.weight (768, 768)\n",
      "encoder_layer_1_post_att_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.1.attention.output.layernorm.beta (768,)\n",
      "encoder_layer_1_multi_head_att_query_fc.w_0 -> ernie.ernie.ernie_encoder.layers.1.attention.attention.query_layer.weight (768, 768)\n",
      "encoder_layer_11_post_att_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.11.attention.output.layernorm.beta (768,)\n",
      "encoder_layer_4_multi_head_att_value_fc.w_0 -> ernie.ernie.ernie_encoder.layers.4.attention.attention.value_layer.weight (768, 768)\n",
      "encoder_layer_1_multi_head_att_output_fc.w_0 -> ernie.ernie.ernie_encoder.layers.1.attention.output.dense.weight (768, 768)\n",
      "encoder_layer_7_multi_head_att_output_fc.b_0 -> ernie.ernie.ernie_encoder.layers.7.attention.output.dense.bias (768,)\n",
      "encoder_layer_1_multi_head_att_key_fc.w_0 -> ernie.ernie.ernie_encoder.layers.1.attention.attention.key_layer.weight (768, 768)\n",
      "encoder_layer_7_multi_head_att_output_fc.w_0 -> ernie.ernie.ernie_encoder.layers.7.attention.output.dense.weight (768, 768)\n",
      "encoder_layer_3_post_ffn_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.3.output.layernorm.beta (768,)\n",
      "encoder_layer_9_ffn_fc_0.w_0 -> ernie.ernie.ernie_encoder.layers.9.intermediate.weight (3072, 768)\n",
      "encoder_layer_10_post_ffn_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.10.output.layernorm.gamma (768,)\n",
      "encoder_layer_4_multi_head_att_query_fc.b_0 -> ernie.ernie.ernie_encoder.layers.4.attention.attention.query_layer.bias (768,)\n",
      "encoder_layer_2_multi_head_att_query_fc.b_0 -> ernie.ernie.ernie_encoder.layers.2.attention.attention.query_layer.bias (768,)\n",
      "encoder_layer_6_multi_head_att_query_fc.b_0 -> ernie.ernie.ernie_encoder.layers.6.attention.attention.query_layer.bias (768,)\n",
      "encoder_layer_6_multi_head_att_value_fc.b_0 -> ernie.ernie.ernie_encoder.layers.6.attention.attention.value_layer.bias (768,)\n",
      "encoder_layer_3_multi_head_att_query_fc.w_0 -> ernie.ernie.ernie_encoder.layers.3.attention.attention.query_layer.weight (768, 768)\n",
      "encoder_layer_4_multi_head_att_value_fc.b_0 -> ernie.ernie.ernie_encoder.layers.4.attention.attention.value_layer.bias (768,)\n",
      "encoder_layer_2_ffn_fc_1.b_0 -> ernie.ernie.ernie_encoder.layers.2.output.dense.bias (768,)\n",
      "encoder_layer_1_ffn_fc_1.b_0 -> ernie.ernie.ernie_encoder.layers.1.output.dense.bias (768,)\n",
      "encoder_layer_8_post_att_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.8.attention.output.layernorm.beta (768,)\n",
      "pooled_fc.b_0 -> ernie.ernie.dense.bias (768,)\n",
      "encoder_layer_0_multi_head_att_query_fc.b_0 -> ernie.ernie.ernie_encoder.layers.0.attention.attention.query_layer.bias (768,)\n",
      "encoder_layer_7_ffn_fc_1.b_0 -> ernie.ernie.ernie_encoder.layers.7.output.dense.bias (768,)\n",
      "encoder_layer_9_multi_head_att_value_fc.b_0 -> ernie.ernie.ernie_encoder.layers.9.attention.attention.value_layer.bias (768,)\n",
      "encoder_layer_0_post_att_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.0.attention.output.layernorm.beta (768,)\n",
      "encoder_layer_3_multi_head_att_key_fc.w_0 -> ernie.ernie.ernie_encoder.layers.3.attention.attention.key_layer.weight (768, 768)\n",
      "encoder_layer_8_post_ffn_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.8.output.layernorm.gamma (768,)\n",
      "encoder_layer_6_multi_head_att_query_fc.w_0 -> ernie.ernie.ernie_encoder.layers.6.attention.attention.query_layer.weight (768, 768)\n",
      "encoder_layer_2_multi_head_att_query_fc.w_0 -> ernie.ernie.ernie_encoder.layers.2.attention.attention.query_layer.weight (768, 768)\n",
      "encoder_layer_3_post_att_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.3.attention.output.layernorm.beta (768,)\n",
      "encoder_layer_2_ffn_fc_0.w_0 -> ernie.ernie.ernie_encoder.layers.2.intermediate.weight (3072, 768)\n",
      "encoder_layer_1_post_ffn_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.1.output.layernorm.beta (768,)\n",
      "encoder_layer_9_post_att_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.9.attention.output.layernorm.beta (768,)\n",
      "encoder_layer_4_ffn_fc_1.b_0 -> ernie.ernie.ernie_encoder.layers.4.output.dense.bias (768,)\n",
      "encoder_layer_3_post_att_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.3.attention.output.layernorm.gamma (768,)\n",
      "encoder_layer_2_post_att_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.2.attention.output.layernorm.beta (768,)\n",
      "encoder_layer_7_multi_head_att_value_fc.b_0 -> ernie.ernie.ernie_encoder.layers.7.attention.attention.value_layer.bias (768,)\n",
      "encoder_layer_11_post_ffn_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.11.output.layernorm.beta (768,)\n",
      "encoder_layer_8_post_att_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.8.attention.output.layernorm.gamma (768,)\n",
      "encoder_layer_0_multi_head_att_value_fc.w_0 -> ernie.ernie.ernie_encoder.layers.0.attention.attention.value_layer.weight (768, 768)\n",
      "encoder_layer_8_multi_head_att_output_fc.b_0 -> ernie.ernie.ernie_encoder.layers.8.attention.output.dense.bias (768,)\n",
      "encoder_layer_3_ffn_fc_1.w_0 -> ernie.ernie.ernie_encoder.layers.3.output.dense.weight (768, 3072)\n",
      "encoder_layer_7_post_ffn_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.7.output.layernorm.gamma (768,)\n",
      "encoder_layer_4_ffn_fc_1.w_0 -> ernie.ernie.ernie_encoder.layers.4.output.dense.weight (768, 3072)\n",
      "encoder_layer_6_multi_head_att_value_fc.w_0 -> ernie.ernie.ernie_encoder.layers.6.attention.attention.value_layer.weight (768, 768)\n",
      "encoder_layer_9_multi_head_att_value_fc.w_0 -> ernie.ernie.ernie_encoder.layers.9.attention.attention.value_layer.weight (768, 768)\n",
      "encoder_layer_5_ffn_fc_1.w_0 -> ernie.ernie.ernie_encoder.layers.5.output.dense.weight (768, 3072)\n",
      "encoder_layer_11_multi_head_att_output_fc.w_0 -> ernie.ernie.ernie_encoder.layers.11.attention.output.dense.weight (768, 768)\n",
      "encoder_layer_7_ffn_fc_1.w_0 -> ernie.ernie.ernie_encoder.layers.7.output.dense.weight (768, 3072)\n",
      "encoder_layer_7_multi_head_att_value_fc.w_0 -> ernie.ernie.ernie_encoder.layers.7.attention.attention.value_layer.weight (768, 768)\n",
      "encoder_layer_8_post_ffn_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.8.output.layernorm.beta (768,)\n",
      "encoder_layer_4_post_ffn_layer_norm_bias -> ernie.ernie.ernie_encoder.layers.4.output.layernorm.beta (768,)\n",
      "encoder_layer_4_post_att_layer_norm_scale -> ernie.ernie.ernie_encoder.layers.4.attention.output.layernorm.gamma (768,)\n"
     ]
    }
   ],
   "source": [
    "import collections\n",
    "import os\n",
    "import json\n",
    "import shutil\n",
    "import paddle.fluid.dygraph as D\n",
    "from paddle import fluid\n",
    "from mindspore import Tensor\n",
    "from mindspore.train.serialization import save_checkpoint\n",
    "\n",
    "def build_params_map(attention_num=12):\n",
    "    \"\"\"\n",
    "    Build a parameter-name map from the PaddlePaddle ERNIE checkpoint to the MindSpore ERNIE model.\n",
    "    \"\"\"\n",
    "    weight_map = collections.OrderedDict({\n",
    "        'word_embedding': \"ernie.ernie.ernie_embedding_lookup.embedding_table\",\n",
    "        'pos_embedding': \"ernie.ernie.ernie_embedding_postprocessor.full_position_embedding.embedding_table\",\n",
    "        'sent_embedding': \"ernie.ernie.ernie_embedding_postprocessor.token_type_embedding.embedding_table\",\n",
    "        'pre_encoder_layer_norm_scale': 'ernie.ernie.ernie_embedding_postprocessor.layernorm.gamma',\n",
    "        'pre_encoder_layer_norm_bias': 'ernie.ernie.ernie_embedding_postprocessor.layernorm.beta',\n",
    "    })\n",
    "    # add attention layers\n",
    "    for i in range(attention_num):\n",
    "        weight_map[f'encoder_layer_{i}_multi_head_att_query_fc.w_0'] = \\\n",
    "            f'ernie.ernie.ernie_encoder.layers.{i}.attention.attention.query_layer.weight'\n",
    "        weight_map[f'encoder_layer_{i}_multi_head_att_query_fc.b_0'] = \\\n",
    "            f'ernie.ernie.ernie_encoder.layers.{i}.attention.attention.query_layer.bias'\n",
    "        weight_map[f'encoder_layer_{i}_multi_head_att_key_fc.w_0'] = \\\n",
    "            f'ernie.ernie.ernie_encoder.layers.{i}.attention.attention.key_layer.weight'\n",
    "        weight_map[f'encoder_layer_{i}_multi_head_att_key_fc.b_0'] = \\\n",
    "            f'ernie.ernie.ernie_encoder.layers.{i}.attention.attention.key_layer.bias'\n",
    "        weight_map[f'encoder_layer_{i}_multi_head_att_value_fc.w_0'] = \\\n",
    "            f'ernie.ernie.ernie_encoder.layers.{i}.attention.attention.value_layer.weight'\n",
    "        weight_map[f'encoder_layer_{i}_multi_head_att_value_fc.b_0'] = \\\n",
    "            f'ernie.ernie.ernie_encoder.layers.{i}.attention.attention.value_layer.bias'\n",
    "        weight_map[f'encoder_layer_{i}_multi_head_att_output_fc.w_0'] = \\\n",
    "            f'ernie.ernie.ernie_encoder.layers.{i}.attention.output.dense.weight'\n",
    "        weight_map[f'encoder_layer_{i}_multi_head_att_output_fc.b_0'] = \\\n",
    "            f'ernie.ernie.ernie_encoder.layers.{i}.attention.output.dense.bias'\n",
    "        weight_map[f'encoder_layer_{i}_post_att_layer_norm_scale'] = \\\n",
    "            f'ernie.ernie.ernie_encoder.layers.{i}.attention.output.layernorm.gamma'\n",
    "        weight_map[f'encoder_layer_{i}_post_att_layer_norm_bias'] = \\\n",
    "            f'ernie.ernie.ernie_encoder.layers.{i}.attention.output.layernorm.beta'\n",
    "        weight_map[f'encoder_layer_{i}_ffn_fc_0.w_0'] = f'ernie.ernie.ernie_encoder.layers.{i}.intermediate.weight'\n",
    "        weight_map[f'encoder_layer_{i}_ffn_fc_0.b_0'] = f'ernie.ernie.ernie_encoder.layers.{i}.intermediate.bias'\n",
    "        weight_map[f'encoder_layer_{i}_ffn_fc_1.w_0'] = f'ernie.ernie.ernie_encoder.layers.{i}.output.dense.weight'\n",
    "        weight_map[f'encoder_layer_{i}_ffn_fc_1.b_0'] = f'ernie.ernie.ernie_encoder.layers.{i}.output.dense.bias'\n",
    "        weight_map[f'encoder_layer_{i}_post_ffn_layer_norm_scale'] = \\\n",
    "            f'ernie.ernie.ernie_encoder.layers.{i}.output.layernorm.gamma'\n",
    "        weight_map[f'encoder_layer_{i}_post_ffn_layer_norm_bias'] = \\\n",
    "            f'ernie.ernie.ernie_encoder.layers.{i}.output.layernorm.beta'\n",
    "\n",
    "    weight_map.update(\n",
    "        {\n",
    "            'pooled_fc.w_0': 'ernie.ernie.dense.weight',\n",
    "            'pooled_fc.b_0': 'ernie.ernie.dense.bias',\n",
    "            'cls_out_w': 'ernie.dense_1.weight',\n",
    "            'cls_out_b': 'ernie.dense_1.bias'\n",
    "        }\n",
    "    )\n",
    "    return weight_map\n",
    "\n",
    "def extract_and_convert(input_dir, output_dir):\n",
    "    \"\"\"extract ckpt and convert\"\"\"\n",
    "    if not os.path.exists(output_dir):\n",
    "        os.makedirs(output_dir)\n",
    "    config = json.load(open(os.path.join(input_dir, 'ernie_config.json'), 'rt', encoding='utf-8'))\n",
    "    print('=' * 20 + 'save vocab file' + '=' * 20)\n",
    "    shutil.copyfile(os.path.join(input_dir, 'vocab.txt'), os.path.join(output_dir, 'vocab.txt'))\n",
    "    print('=' * 20 + 'extract weights' + '=' * 20)\n",
    "    state_dict = []\n",
    "    weight_map = build_params_map(attention_num=config['num_hidden_layers'])\n",
    "    with fluid.dygraph.guard():\n",
    "        paddle_paddle_params, _ = D.load_dygraph(os.path.join(input_dir, 'params'))\n",
    "    for weight_name, weight_value in paddle_paddle_params.items():\n",
    "        if weight_name not in weight_map.keys():\n",
    "            continue\n",
    "        if 'w_0' in weight_name \\\n",
    "            or 'post_att_layer_norm_scale' in weight_name \\\n",
    "            or 'post_ffn_layer_norm_scale' in weight_name \\\n",
    "            or 'cls_out_w' in weight_name:\n",
    "            weight_value = weight_value.transpose()\n",
    "        state_dict.append({'name': weight_map[weight_name], 'data': Tensor(weight_value)})\n",
    "        print(weight_name, '->', weight_map[weight_name], weight_value.shape)\n",
    "    save_checkpoint(state_dict, os.path.join(output_dir, \"ernie.ckpt\"))\n",
    "\n",
    "\n",
    "extract_and_convert('./pretrain_models/ernie', './pretrain_models/converted')"
   ]
  },
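  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check on the transpose rule above: PaddlePaddle stores fully connected weights as [in_features, out_features], while MindSpore's `nn.Dense` expects [out_features, in_features]. A minimal NumPy sketch (the shapes here are illustrative, not read from a real checkpoint):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# e.g. encoder_layer_0_ffn_fc_0.w_0 stored as [in_features, out_features]\n",
    "paddle_fc_w = np.zeros((768, 3072))\n",
    "mindspore_w = paddle_fc_w.transpose()  # layout expected by nn.Dense\n",
    "assert mindspore_w.shape == (3072, 768)\n",
    "\n",
    "# 1-D layer-norm scales are unchanged by transpose, so matching\n",
    "# '*_layer_norm_scale' in the condition above is harmless\n",
    "gamma = np.ones(768)\n",
    "assert gamma.transpose().shape == (768,)\n",
    "```"
   ]
  },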
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, we define the training configuration and convert the original data into `MindRecord` datasets according to the extracted `vocab.txt`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Writing example 0 of 9655\n",
      "Writing example 0 of 1036\n",
      "Writing example 0 of 1080\n"
     ]
    }
   ],
   "source": [
    "# Define the training configuration\n",
    "config = {\n",
    "    'train':{\n",
    "        'vocab_path':'./pretrain_models/converted/vocab.txt',\n",
    "        'label_map_config':None,\n",
    "        'max_seq_len':64,\n",
    "        'do_lower_case':'true',\n",
    "        'random_seed':1,\n",
    "        'input_file':'./data/train.tsv',\n",
    "        'output_file':'./data/train.mindrecord'\n",
    "    },\n",
    "    'test':{\n",
    "        'vocab_path':'./pretrain_models/converted/vocab.txt',\n",
    "        'label_map_config':None,\n",
    "        'max_seq_len':64,\n",
    "        'do_lower_case':'true',\n",
    "        'random_seed':1,\n",
    "        'input_file':'./data/test.tsv',\n",
    "        'output_file':'./data/test.mindrecord'\n",
    "    },\n",
    "    'dev':{\n",
    "        'vocab_path':'./pretrain_models/converted/vocab.txt',\n",
    "        'label_map_config':None,\n",
    "        'max_seq_len':64,\n",
    "        'do_lower_case':'true',\n",
    "        'random_seed':1,\n",
    "        'input_file':'./data/dev.tsv',\n",
    "        'output_file':'./data/dev.mindrecord'\n",
    "    }\n",
    "}\n",
    "\n",
    "convert_tsv_2_mindrecord(config=config)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.3 Loading the dialogue dataset\n",
    "After converting the data to `MindRecord`, we load it into a `MindDataset` object, which reads the records into memory according to the data types declared in the schema. The dataset then enters MindSpore's data processing pipeline: the `map` interface applies operations to the specified columns, and the `batch` interface sets the batch size and whether to drop the remaining samples that cannot fill a full batch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "import mindspore.dataset as ds\n",
    "import mindspore.dataset.transforms as T\n",
    "import mindspore.common.dtype as mstype\n",
    "\n",
    "def create_classification_dataset(batch_size=1,\n",
    "                                  repeat_count=1,\n",
    "                                  data_file_path=None,\n",
    "                                  schema_file_path=None,\n",
    "                                  do_shuffle=True,\n",
    "                                  drop_remainder=True):\n",
    "    \"\"\"create finetune or evaluation dataset\"\"\"\n",
    "    type_cast_op = T.TypeCast(mstype.int32)\n",
    "    data_set = ds.MindDataset([data_file_path],\n",
    "                              columns_list=[\"input_ids\", \"input_mask\", \"segment_ids\", \"label_ids\"],\n",
    "                              shuffle=do_shuffle)\n",
    "    data_set = data_set.map(operations=type_cast_op, input_columns=\"label_ids\")\n",
    "    data_set = data_set.map(operations=type_cast_op, input_columns=\"segment_ids\")\n",
    "    data_set = data_set.map(operations=type_cast_op, input_columns=\"input_mask\")\n",
    "    data_set = data_set.map(operations=type_cast_op, input_columns=\"input_ids\")\n",
    "    data_set = data_set.repeat(repeat_count)\n",
    "    # apply batch operations\n",
    "    data_set = data_set.batch(batch_size, drop_remainder=drop_remainder)\n",
    "    return data_set\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.4 Configuration class\n",
    "\n",
    "Since the model has a large number of hyperparameters, we define a configuration class `ErnieConfig` that holds all of them, so they can be set freely for later training and evaluation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "class ErnieConfig:\n",
    "    \"\"\"\n",
    "    Configuration for `ErnieModel`.\n",
    "    Args:\n",
    "        seq_length (int): Length of the input sequence. Default: 128.\n",
    "        vocab_size (int): Size of the vocabulary. Default: 32000.\n",
    "        hidden_size (int): Size of the ERNIE encoder layers. Default: 768.\n",
    "        num_hidden_layers (int): Number of hidden layers in the ErnieTransformer encoder\n",
    "                           cell. Default: 12.\n",
    "        num_attention_heads (int): Number of attention heads in the ErnieTransformer\n",
    "                             encoder cell. Default: 12.\n",
    "        intermediate_size (int): Size of the intermediate layer in the ErnieTransformer\n",
    "                           encoder cell. Default: 3072.\n",
    "        hidden_act (str): Activation function used in the ErnieTransformer encoder\n",
    "                    cell. Default: \"gelu\".\n",
    "        hidden_dropout_prob (float): Dropout probability for ErnieOutput. Default: 0.1.\n",
    "        attention_probs_dropout_prob (float): Dropout probability for ErnieAttention. Default: 0.1.\n",
    "        max_position_embeddings (int): Maximum length of sequences used in this model. Default: 512.\n",
    "        type_vocab_size (int): Size of the token type vocab. Default: 16.\n",
    "        initializer_range (float): Initialization value of TruncatedNormal. Default: 0.02.\n",
    "        use_relative_positions (bool): Specifies whether to use relative positions. Default: False.\n",
    "        dtype (:class:`mindspore.dtype`): Data type of the input. Default: mstype.float32.\n",
    "        compute_type (:class:`mindspore.dtype`): Compute type in ErnieTransformer. Default: mstype.float32.\n",
    "    \"\"\"\n",
    "    def __init__(self,\n",
    "                 seq_length=128,\n",
    "                 vocab_size=32000,\n",
    "                 hidden_size=768,\n",
    "                 num_hidden_layers=12,\n",
    "                 num_attention_heads=12,\n",
    "                 intermediate_size=3072,\n",
    "                 hidden_act=\"gelu\",\n",
    "                 hidden_dropout_prob=0.1,\n",
    "                 attention_probs_dropout_prob=0.1,\n",
    "                 max_position_embeddings=512,\n",
    "                 type_vocab_size=16,\n",
    "                 initializer_range=0.02,\n",
    "                 use_relative_positions=False,\n",
    "                 dtype=mstype.float32,\n",
    "                 compute_type=mstype.float32):\n",
    "        self.seq_length = seq_length\n",
    "        self.vocab_size = vocab_size\n",
    "        self.hidden_size = hidden_size\n",
    "        self.num_hidden_layers = num_hidden_layers\n",
    "        self.num_attention_heads = num_attention_heads\n",
    "        self.hidden_act = hidden_act\n",
    "        self.intermediate_size = intermediate_size\n",
    "        self.hidden_dropout_prob = hidden_dropout_prob\n",
    "        self.attention_probs_dropout_prob = attention_probs_dropout_prob\n",
    "        self.max_position_embeddings = max_position_embeddings\n",
    "        self.type_vocab_size = type_vocab_size\n",
    "        self.initializer_range = initializer_range\n",
    "        self.use_relative_positions = use_relative_positions\n",
    "        self.dtype = dtype\n",
    "        self.compute_type = compute_type"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3 Building the ERNIE model\n",
    "With the dataset prepared, we now design the model structure for dialogue emotion detection. The input text (a list of serialized index ids) is first converted to vector representations via a table lookup, using an `nn.Embedding` layer on the `input_ids`; the ERNIE encoder, a stack of Transformer layers, then extracts contextual features; finally the pooled output is connected to a fully connected layer, i.e. `nn.Dense`, which projects the features to a size equal to the number of classes for subsequent training and optimization.\n",
    "\n",
    "The model is detailed below:\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.1 Embedding\n",
    "#### 3.1.1 EmbeddingLookup\n",
    "It looks up, for each index id, the corresponding row of a weight matrix; given a sequence of index ids, it returns a matrix of the same length. This is equivalent to a simple linear map applied to the one-hot encoding of the ids.\n",
    "```\n",
    "ernie_embedding_lookup = nn.Embedding(\n",
    "            vocab_size=config.vocab_size,\n",
    "            embedding_size=self.embedding_size,\n",
    "            use_one_hot=use_one_hot_embeddings)\n",
    "```"
   ]
  },
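  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The equivalence between the table lookup and a linear map of one-hot vectors can be sketched in NumPy (vocabulary and embedding sizes are toy values):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "vocab_size, embedding_size = 10, 4\n",
    "table = rng.normal(size=(vocab_size, embedding_size))  # embedding table\n",
    "\n",
    "input_ids = np.array([3, 1, 3])          # a toy index-id sequence\n",
    "lookup = table[input_ids]                # direct row lookup\n",
    "\n",
    "one_hot = np.eye(vocab_size)[input_ids]  # [seq_len, vocab_size]\n",
    "linear = one_hot @ table                 # one-hot times table\n",
    "\n",
    "assert np.allclose(lookup, linear)       # both give [seq_len, embedding_size]\n",
    "```"
   ]
  },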
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.1.2 Embedding_postprocess\n",
    "This layer has two parts: a token_type_embedding layer carrying inter-sentence information, and a full_position_embedding layer carrying intra-sentence position information.\n",
    "\n",
    "1. token_type_embedding layer\n",
    "\n",
    "The token type table has shape [2, 768] (only the two ids `0` and `1` occur). The `token_type_id` is one-hot encoded and multiplied with this table to obtain `token_type_embedding` of shape [8, 128, 768], which is then added to the earlier `ernie_embedding_lookup` output, injecting sentence-level information into the input.\n",
    "\n",
    "2. full_position_embedding layer\n",
    "\n",
    "The position embedding table is randomly initialized with shape [128, 768], one row per position from 0 to 127; the same position_embedding is added to every sample in the batch.\n",
    "\n",
    "3. layer normalization and dropout\n",
    "\n",
    "When optimizing with gradient descent, the feature distribution of each layer's input keeps shifting as the network gets deeper. Layer Normalization is added to keep this distribution stable: it normalizes the features before they enter the activation function, transforming them to zero mean and unit variance, which prevents values from falling into the activation's saturated regions and reduces vanishing gradients.\n",
    "\n",
    "Dropout randomly masks some neurons to prevent overfitting."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import mindspore.nn as nn\n",
    "import mindspore.common.dtype as mstype\n",
    "from mindspore.common.tensor import Tensor\n",
    "from mindspore.ops import operations as P\n",
    "\n",
    "class EmbeddingPostprocessor(nn.Cell):\n",
    "    \"\"\"\n",
    "    Postprocessors apply positional and token type embeddings to word embeddings.\n",
    "    Args:\n",
    "        embedding_size (int): The size of each embedding vector.\n",
    "        embedding_shape (list): [batch_size, seq_length, embedding_size], the shape of\n",
    "                         each embedding vector.\n",
    "        use_token_type (bool): Specifies whether to use token type embeddings. Default: False.\n",
    "        token_type_vocab_size (int): Size of token type vocab. Default: 16.\n",
    "        use_one_hot_embeddings (bool): Specifies whether to use one hot encoding form. Default: False.\n",
    "        initializer_range (float): Initialization value of TruncatedNormal. Default: 0.02.\n",
    "        max_position_embeddings (int): Maximum length of sequences used in this\n",
    "                                 model. Default: 512.\n",
    "        dropout_prob (float): The dropout probability. Default: 0.1.\n",
    "    \"\"\"\n",
    "    def __init__(self,\n",
    "                 embedding_size,\n",
    "                 embedding_shape,\n",
    "                 use_relative_positions=False,\n",
    "                 use_token_type=False,\n",
    "                 token_type_vocab_size=16,\n",
    "                 use_one_hot_embeddings=False,\n",
    "                 initializer_range=0.02,\n",
    "                 max_position_embeddings=512,\n",
    "                 dropout_prob=0.1):\n",
    "        super(EmbeddingPostprocessor, self).__init__()\n",
    "        self.use_token_type = use_token_type\n",
    "        self.token_type_vocab_size = token_type_vocab_size\n",
    "        self.use_one_hot_embeddings = use_one_hot_embeddings\n",
    "        self.max_position_embeddings = max_position_embeddings\n",
    "        self.token_type_embedding = nn.Embedding(\n",
    "            vocab_size=token_type_vocab_size,\n",
    "            embedding_size=embedding_size,\n",
    "            use_one_hot=use_one_hot_embeddings)\n",
    "        self.shape_flat = (-1,)\n",
    "        self.one_hot = P.OneHot()\n",
    "        self.on_value = Tensor(1.0, mstype.float32)\n",
    "        self.off_value = Tensor(0.0, mstype.float32)\n",
    "        self.array_mul = P.MatMul()\n",
    "        self.reshape = P.Reshape()\n",
    "        self.shape = tuple(embedding_shape)\n",
    "        self.dropout = nn.Dropout(1 - dropout_prob)\n",
    "        self.gather = P.Gather()\n",
    "        self.use_relative_positions = use_relative_positions\n",
    "        self.slice = P.StridedSlice()\n",
    "        _, seq, _ = self.shape\n",
    "        self.full_position_embedding = nn.Embedding(\n",
    "            vocab_size=max_position_embeddings,\n",
    "            embedding_size=embedding_size,\n",
    "            use_one_hot=False)\n",
    "        self.layernorm = nn.LayerNorm((embedding_size,))\n",
    "        self.position_ids = Tensor(np.arange(seq).reshape(-1, seq).astype(np.int32))\n",
    "        self.add = P.Add()\n",
    "\n",
    "    def construct(self, token_type_ids, word_embeddings):\n",
    "        \"\"\"Postprocessors apply positional and token type embeddings to word embeddings.\"\"\"\n",
    "        output = word_embeddings\n",
    "        if self.use_token_type:\n",
    "            token_type_embeddings = self.token_type_embedding(token_type_ids)\n",
    "            output = self.add(output, token_type_embeddings)\n",
    "        if not self.use_relative_positions:\n",
    "            position_embeddings = self.full_position_embedding(self.position_ids)\n",
    "            output = self.add(output, position_embeddings)\n",
    "        output = self.layernorm(output)\n",
    "        output = self.dropout(output)\n",
    "        return output"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.2 Encoder\n",
    "#### 3.2.1 Attention Mask\n",
    "As noted earlier, the input mask is 1 wherever there is a token and 0 elsewhere. Starting from a 2-D [8, 128] matrix of token ids, each position is expanded into a length-128 vector holding the mask of that token's sentence, roughly [1,1,1,1...1,0,...0]. The 2-D matrix thus becomes a 3-D [8, 128, 128] tensor; this vector tells each token which positions it can attend to (positions holding 0 take no part in attention)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "class CreateAttentionMaskFromInputMask(nn.Cell):\n",
    "    \"\"\"\n",
    "    Create attention mask according to input mask.\n",
    "    Args:\n",
    "        config (Class): Configuration for ErnieModel.\n",
    "    \"\"\"\n",
    "    def __init__(self, config):\n",
    "        super(CreateAttentionMaskFromInputMask, self).__init__()\n",
    "        self.input_mask = None\n",
    "\n",
    "        self.cast = P.Cast()\n",
    "        self.reshape = P.Reshape()\n",
    "        self.shape = (-1, 1, config.seq_length)\n",
    "\n",
    "    def construct(self, input_mask):\n",
    "        attention_mask = self.cast(self.reshape(input_mask, self.shape), mstype.float32)\n",
    "        return attention_mask"
   ]
  },
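  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The reshape-and-cast above turns a [batch, seq_len] input mask into a [batch, 1, seq_len] tensor that broadcasts over the query dimension of the attention scores. A NumPy sketch with toy sizes (the large negative bias shown is one common way such a mask is consumed; it is illustrative, not this model's exact code):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "batch, seq_len = 2, 4\n",
    "input_mask = np.array([[1, 1, 1, 0],\n",
    "                       [1, 1, 0, 0]])\n",
    "attention_mask = input_mask.reshape(batch, 1, seq_len).astype(np.float32)\n",
    "\n",
    "# Broadcasting against [batch, seq_len, seq_len] attention scores gives\n",
    "# every query row the same key mask; padded positions get a large\n",
    "# negative value so softmax assigns them ~0 weight.\n",
    "scores = np.zeros((batch, seq_len, seq_len))\n",
    "masked = scores * attention_mask + (attention_mask - 1) * 1e9\n",
    "assert masked.shape == (batch, seq_len, seq_len)\n",
    "```"
   ]
  },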
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.2.2 ERNIE Self-Attention\n",
    "\n",
    "\n",
    "This is the core of the model, and the only part that involves the formulas, so a fair amount of code is shown.\n",
    "\n",
    "\n",
    "We first introduce three utility classes:\n",
    "- `RelaPosMatrixGenerator` generates the matrix of relative positions between inputs;\n",
    "- `RelaPosEmbeddingsGenerator` generates a tensor of shape [length, length, depth];\n",
    "- `SaturateCast` performs a safe type cast, clamping the value range appropriately before conversion to avoid overflow or underflow errors.\n",
    "\n",
    "\n",
    "1. RelaPosMatrixGenerator"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "from mindspore.ops import composite as C\n",
    "from mindspore.ops import functional as F\n",
    "\n",
    "class RelaPosMatrixGenerator(nn.Cell):\n",
    "    \"\"\"\n",
    "    Generates the matrix of relative positions between inputs.\n",
    "    Args:\n",
    "        length (int): Length of one dim for the matrix to be generated.\n",
    "        max_relative_position (int): Max value of relative position.\n",
    "    \"\"\"\n",
    "    def __init__(self, length, max_relative_position):\n",
    "        super(RelaPosMatrixGenerator, self).__init__()\n",
    "        self._length = length\n",
    "        self._max_relative_position = max_relative_position\n",
    "        self._min_relative_position = -max_relative_position\n",
    "        self.range_length = -length + 1\n",
    "\n",
    "        self.tile = P.Tile()\n",
    "        self.range_mat = P.Reshape()\n",
    "        self.sub = P.Sub()\n",
    "        self.expanddims = P.ExpandDims()\n",
    "        self.cast = P.Cast()\n",
    "\n",
    "    def construct(self):\n",
    "        \"\"\"Generates matrix of relative positions between inputs.\"\"\"\n",
    "        range_vec_row_out = self.cast(F.tuple_to_array(F.make_range(self._length)), mstype.int32)\n",
    "        range_vec_col_out = self.range_mat(range_vec_row_out, (self._length, -1))\n",
    "        tile_row_out = self.tile(range_vec_row_out, (self._length,))\n",
    "        tile_col_out = self.tile(range_vec_col_out, (1, self._length))\n",
    "        range_mat_out = self.range_mat(tile_row_out, (self._length, self._length))\n",
    "        transpose_out = self.range_mat(tile_col_out, (self._length, self._length))\n",
    "        distance_mat = self.sub(range_mat_out, transpose_out)\n",
    "\n",
    "        distance_mat_clipped = C.clip_by_value(distance_mat,\n",
    "                                               self._min_relative_position,\n",
    "                                               self._max_relative_position)\n",
    "\n",
    "        # Shift values to be >=0. Each integer still uniquely identifies a\n",
    "        # relative position difference.\n",
    "        final_mat = distance_mat_clipped + self._max_relative_position\n",
    "        return final_mat"
   ]
  },
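  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What `construct` computes can be reproduced in NumPy to make the index arithmetic concrete (`length` and `max_relative_position` are toy values):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "length, max_relative_position = 4, 2\n",
    "pos = np.arange(length)\n",
    "distance = pos[np.newaxis, :] - pos[:, np.newaxis]   # distance_mat: j - i\n",
    "clipped = np.clip(distance, -max_relative_position, max_relative_position)\n",
    "final_mat = clipped + max_relative_position          # shift so ids are >= 0\n",
    "\n",
    "assert final_mat.min() >= 0                          # valid embedding indices\n",
    "assert final_mat.max() <= 2 * max_relative_position\n",
    "```"
   ]
  },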
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2. RelaPosEmbeddingsGenerator\n",
    "\n",
    "When relative position encoding is needed, this class uses the `RelaPosMatrixGenerator` above to generate the relative position encodings of the embedding vectors.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "from mindspore.common.parameter import Parameter\n",
    "from mindspore.common.initializer import TruncatedNormal, initializer\n",
    "\n",
    "class RelaPosEmbeddingsGenerator(nn.Cell):\n",
    "    \"\"\"\n",
    "    Generates a tensor of shape [length, length, depth].\n",
    "    Args:\n",
    "        length (int): Length of one dim for the matrix to be generated.\n",
    "        depth (int): Size of each attention head.\n",
    "        max_relative_position (int): Maximum value of relative position.\n",
    "        initializer_range (float): Initialization value of TruncatedNormal.\n",
    "        use_one_hot_embeddings (bool): Specifies whether to use one hot encoding form. Default: False.\n",
    "    \"\"\"\n",
    "    def __init__(self,\n",
    "                 length,\n",
    "                 depth,\n",
    "                 max_relative_position,\n",
    "                 initializer_range,\n",
    "                 use_one_hot_embeddings=False):\n",
    "        super(RelaPosEmbeddingsGenerator, self).__init__()\n",
    "        self.depth = depth\n",
    "        self.vocab_size = max_relative_position * 2 + 1\n",
    "        self.use_one_hot_embeddings = use_one_hot_embeddings\n",
    "\n",
    "        self.embeddings_table = Parameter(\n",
    "            initializer(TruncatedNormal(initializer_range),\n",
    "                        [self.vocab_size, self.depth]))\n",
    "\n",
    "        self.relative_positions_matrix = RelaPosMatrixGenerator(length=length,\n",
    "                                                                max_relative_position=max_relative_position)\n",
    "        self.reshape = P.Reshape()\n",
    "        self.one_hot = nn.OneHot(depth=self.vocab_size)\n",
    "        self.shape = P.Shape()\n",
    "        self.gather = P.Gather()  # index_select\n",
    "        self.matmul = P.BatchMatMul()\n",
    "\n",
    "    def construct(self):\n",
    "        \"\"\"Generates embeddings for each relative position of dimension depth.\"\"\"\n",
    "        relative_positions_matrix_out = self.relative_positions_matrix()\n",
    "\n",
    "        if self.use_one_hot_embeddings:\n",
    "            flat_relative_positions_matrix = self.reshape(relative_positions_matrix_out, (-1,))\n",
    "            one_hot_relative_positions_matrix = self.one_hot(\n",
    "                flat_relative_positions_matrix)\n",
    "            embeddings = self.matmul(one_hot_relative_positions_matrix, self.embeddings_table)\n",
    "            my_shape = self.shape(relative_positions_matrix_out) + (self.depth,)\n",
    "            embeddings = self.reshape(embeddings, my_shape)\n",
    "        else:\n",
    "            embeddings = self.gather(self.embeddings_table,\n",
    "                                     relative_positions_matrix_out, 0)\n",
    "        return embeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "3. SaturateCast\n",
    "\n",
    "Performs a safe type conversion, clamping the value range appropriately before the cast to guard against overflow or underflow."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "class SaturateCast(nn.Cell):\n",
    "    \"\"\"\n",
    "    Performs a safe saturating cast. This operation clamps the value range before conversion to avoid the risk of overflow or underflow.\n",
    "    Args:\n",
    "        src_type (:class:`mindspore.dtype`): The type of the elements of the input tensor. Default: mstype.float32.\n",
    "        dst_type (:class:`mindspore.dtype`): The type of the elements of the output tensor. Default: mstype.float32.\n",
    "    \"\"\"\n",
    "    def __init__(self, src_type=mstype.float32, dst_type=mstype.float32):\n",
    "        super(SaturateCast, self).__init__()\n",
    "        np_type = mstype.dtype_to_nptype(dst_type)\n",
    "\n",
    "        self.tensor_min_type = float(np.finfo(np_type).min)\n",
    "        self.tensor_max_type = float(np.finfo(np_type).max)\n",
    "\n",
    "        self.min_op = P.Minimum()\n",
    "        self.max_op = P.Maximum()\n",
    "        self.cast = P.Cast()\n",
    "        self.dst_type = dst_type\n",
    "\n",
    "    def construct(self, x):\n",
    "        out = self.max_op(x, self.tensor_min_type)\n",
    "        out = self.min_op(out, self.tensor_max_type)\n",
    "        return self.cast(out, self.dst_type)"
   ]
  },
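  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A NumPy sketch of the same saturating behavior when narrowing to float16 (the helper name and values are illustrative):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def saturate_cast(x, dst=np.float16):\n",
    "    info = np.finfo(dst)\n",
    "    # clamp into the destination range first, then cast, so\n",
    "    # out-of-range values saturate instead of becoming inf\n",
    "    return np.clip(x, float(info.min), float(info.max)).astype(dst)\n",
    "\n",
    "x = np.array([1.0, 1e9, -1e9], dtype=np.float32)\n",
    "y = saturate_cast(x)\n",
    "assert np.isfinite(y).all()              # no inf after the cast\n",
    "assert y[1] == np.finfo(np.float16).max  # saturated, not overflowed\n",
    "```"
   ]
  },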
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.2.2.1 ErnieAttention\n",
    "\n",
    "The attention mechanism\n",
    "\n",
    "As anyone familiar with attention knows, it can be expressed in the following form:\n",
    "\n",
    "$$\n",
    "attention\\_output=Attention(Q,K,V)\n",
    "$$\n",
    "\n",
    "Multi-Head Attention projects $Q$, $K$ and $V$ with $h$ different linear transforms and concatenates the resulting attention outputs:\n",
    "\n",
    "$$\n",
    "MultiHead(Q,K,V)=Concat(head_1, ..., head_h)W^O \\\\\n",
    "head_i=Attention(QW^Q_i, KW^K_i, VW^V_i)\n",
    "$$\n",
    "\n",
    "Self-Attention is the special case where $Q$, $K$ and $V$ are taken to be the same.\n",
    "\n",
    "In addition, ERNIE computes attention with the scaled dot-product:\n",
    "$$\n",
    "Attention(Q,K,V)=softmax(\\frac{QK^T}{\\sqrt{d_k}})V\n",
    "$$\n",
    "\n",
    "In short, the core idea is to learn a weight for every token of the input. Given a task-dependent Query vector, the similarity (or relevance) between the Query and each Key yields the attention distribution, i.e. a weight coefficient for each Key's corresponding Value; the weighted sum of the Values is the final attention output.\n",
    "\n",
    "Unlike the other models, the attention here adds a `use_relative_positions` option, which controls whether functional relative position encoding is applied to the input.\n",
    "\n",
    "Functional relative position encoding was first introduced in Huawei's NeZha model [citation]; official implementation: [huawei-noah/Pretrained-Language-Model](https://github.com/huawei-noah/Pretrained-Language-Model)\n",
    "\n",
    "The original Multi-Head Attention is built on Scaled Dot-Product Attention, whose structure is shown in the figure below:\n",
    "\n",
    "<center>\n",
    "    <img style=\"border-radius: 0.3125em;\n",
    "    box-shadow: 0 2px 4px 0 rgba(34,36,38,.12),0 2px 10px 0 rgba(34,36,38,.08); width:50%; height:50%\" \n",
    "    src=\"https://ai-studio-static-online.cdn.bcebos.com/20e8f5bf1dae494e9f0027288b19346797ba508032fe45cda0511fd4c8da3837\">\n",
    "    <br>\n",
    "    <div style=\"color:orange; border-bottom: 1px solid #d9d9d9;\n",
    "    display: inline-block;\n",
    "    color: #999;\n",
    "    padding: 2px;\">Figure 2. Scaled Dot-Product Attention structure</div>\n",
    "</center>\n",
    "\n",
    "The inputs $Q$, $K$ and $V$ are obtained by multiplying the actual input sequence $x=(x_1,x_2,...,x_n)$ by the different weight matrices $W^Q$, $W^K$ and $W^V$; the output sequence $z=(z_1,z_2,...,z_n)$ has the same length as the input. Each output $z_i$ is computed as:\n",
    "$$\n",
    "z_i=\\sum_{j=1}^n\\alpha_{ij}(x_jW^V)\n",
    "$$\n",
    "where $\\alpha_{ij}$ is obtained by a softmax over the hidden states at positions $i$ and $j$:\n",
    "\n",
    "$$\n",
    "\\alpha_{ij}=\\frac{\\exp e_{ij}}{\\sum_k \\exp e_{ik}}\n",
    "$$\n",
    "\n",
    "where $e_{ij}$ is the scaled dot product of the inputs transformed by $W^Q$ and $W^K$:\n",
    "\n",
    "$$\n",
    "e_{ij}=\\frac{(x_iW^Q)(x_jW^K)^T}{\\sqrt{d_z}}\n",
    "$$\n",
    "\n",
    "在相对位置编码方案中，将输出$z_i$加入两个位置之间相对距离的参数，在上述公式1和公式3中，分别加入两个 token 的相对位置信息，修改如下得到：\n",
    "\n",
    "$$\n",
    "z_i=\\sum_{j=1}^n\\alpha_{ij}(x_jW^V+\\alpha_{ij}^V)\n",
    "$$\n",
    "\n",
    "$$\n",
    "e_{ij}=\\frac{(x_iW^Q)(x_jW^K+\\alpha_{ij}^K)^T}{\\sqrt{d_z}}\n",
    "$$\n",
    "\n",
    "where $\\alpha_{ij}^V$ and $\\alpha_{ij}^K$ are the relative position encodings between positions $i$ and $j$, whose entries are defined as:\n",
    "\n",
    "$$\n",
    "\\alpha_{ij}[2k]=\\sin\\left(\\frac{j-i}{10000^{\\frac{2k}{d_z}}}\\right)\n",
    "$$\n",
    "\n",
    "$$\n",
    "\\alpha_{ij}[2k+1]=\\cos\\left(\\frac{j-i}{10000^{\\frac{2k}{d_z}}}\\right)\n",
    "$$\n",
    "\n",
    "\n",
    "If the attention layer here uses functional relative position encoding (Relative Position), the preceding embedding layer no longer needs to apply position embeddings."
   ]
  },
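  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The sinusoidal relative-position table consumed by the attention layer can be sketched in plain NumPy (an illustrative simplification, not the MindSpore `RelaPosEmbeddingsGenerator`; the clipping distance mirrors the `max_relative_position=16` used later):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def relative_position_embeddings(length, depth, max_relative_position=16):\n",
    "    # signed distance j - i, clipped to [-max, max]\n",
    "    rng = np.arange(length)\n",
    "    distance = np.clip(rng[None, :] - rng[:, None],\n",
    "                       -max_relative_position, max_relative_position)\n",
    "    # sinusoidal encoding of each clipped distance, per the formulas above\n",
    "    k = np.arange(0, depth, 2)\n",
    "    angle = distance[..., None] / np.power(10000.0, k / depth)\n",
    "    emb = np.empty((length, length, depth))\n",
    "    emb[..., 0::2] = np.sin(angle)\n",
    "    emb[..., 1::2] = np.cos(angle)\n",
    "    return emb  # shape [length, length, depth]\n",
    "```\n",
    "\n",
    "Because the distance is clipped, tokens farther apart than the maximum share the same encoding, which keeps the table small regardless of sequence length."
   ]
  },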
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math \n",
    "from mindspore.ops import functional as F\n",
    "\n",
    "class ErnieAttention(nn.Cell):\n",
    "    \"\"\"\n",
    "    Apply multi-head attention from \"from_tensor\" to \"to_tensor\".\n",
    "    Args:\n",
    "        from_tensor_width (int): Size of last dim of from_tensor.\n",
    "        to_tensor_width (int): Size of last dim of to_tensor.\n",
    "        from_seq_length (int): Length of from_tensor sequence.\n",
    "        to_seq_length (int): Length of to_tensor sequence.\n",
    "        num_attention_heads (int): Number of attention heads. Default: 1.\n",
    "        size_per_head (int): Size of each attention head. Default: 512.\n",
    "        query_act (str): Activation function for the query transform. Default: None.\n",
    "        key_act (str): Activation function for the key transform. Default: None.\n",
    "        value_act (str): Activation function for the value transform. Default: None.\n",
    "        has_attention_mask (bool): Specifies whether to use attention mask. Default: False.\n",
    "        attention_probs_dropout_prob (float): The dropout probability for\n",
    "                                      ErnieAttention. Default: 0.0.\n",
    "        use_one_hot_embeddings (bool): Specifies whether to use one hot encoding form. Default: False.\n",
    "        initializer_range (float): Initialization value of TruncatedNormal. Default: 0.02.\n",
    "        do_return_2d_tensor (bool): True for return 2d tensor. False for return 3d\n",
    "                             tensor. Default: False.\n",
    "        use_relative_positions (bool): Specifies whether to use relative positions. Default: False.\n",
    "        compute_type (:class:`mindspore.dtype`): Compute type in ErnieAttention. Default: mstype.float32.\n",
    "    \"\"\"\n",
    "    def __init__(self,\n",
    "                 from_tensor_width,\n",
    "                 to_tensor_width,\n",
    "                 from_seq_length,\n",
    "                 to_seq_length,\n",
    "                 num_attention_heads=1,\n",
    "                 size_per_head=512,\n",
    "                 query_act=None,\n",
    "                 key_act=None,\n",
    "                 value_act=None,\n",
    "                 has_attention_mask=False,\n",
    "                 attention_probs_dropout_prob=0.0,\n",
    "                 use_one_hot_embeddings=False,\n",
    "                 initializer_range=0.02,\n",
    "                 do_return_2d_tensor=False,\n",
    "                 use_relative_positions=False,\n",
    "                 compute_type=mstype.float32):\n",
    "\n",
    "        super(ErnieAttention, self).__init__()\n",
    "        self.from_seq_length = from_seq_length\n",
    "        self.to_seq_length = to_seq_length\n",
    "        self.num_attention_heads = num_attention_heads\n",
    "        self.size_per_head = size_per_head\n",
    "        self.has_attention_mask = has_attention_mask\n",
    "        self.use_relative_positions = use_relative_positions\n",
    "\n",
    "        self.scores_mul = 1.0 / math.sqrt(float(self.size_per_head))\n",
    "        self.reshape = P.Reshape()\n",
    "        self.shape_from_2d = (-1, from_tensor_width)\n",
    "        self.shape_to_2d = (-1, to_tensor_width)\n",
    "        weight = TruncatedNormal(initializer_range)\n",
    "        units = num_attention_heads * size_per_head\n",
    "        self.query_layer = nn.Dense(from_tensor_width,\n",
    "                                    units,\n",
    "                                    activation=query_act,\n",
    "                                    weight_init=weight).to_float(compute_type)\n",
    "        self.key_layer = nn.Dense(to_tensor_width,\n",
    "                                  units,\n",
    "                                  activation=key_act,\n",
    "                                  weight_init=weight).to_float(compute_type)\n",
    "        self.value_layer = nn.Dense(to_tensor_width,\n",
    "                                    units,\n",
    "                                    activation=value_act,\n",
    "                                    weight_init=weight).to_float(compute_type)\n",
    "\n",
    "        self.shape_from = (-1, from_seq_length, num_attention_heads, size_per_head)\n",
    "        self.shape_to = (-1, to_seq_length, num_attention_heads, size_per_head)\n",
    "\n",
    "        self.matmul_trans_b = P.BatchMatMul(transpose_b=True)\n",
    "        self.multiply = P.Mul()\n",
    "        self.transpose = P.Transpose()\n",
    "        self.trans_shape = (0, 2, 1, 3)\n",
    "        self.trans_shape_relative = (2, 0, 1, 3)\n",
    "        self.trans_shape_position = (1, 2, 0, 3)\n",
    "        self.multiply_data = -10000.0\n",
    "        self.matmul = P.BatchMatMul()\n",
    "\n",
    "        self.softmax = nn.Softmax()\n",
    "        self.dropout = nn.Dropout(1 - attention_probs_dropout_prob)\n",
    "\n",
    "        if self.has_attention_mask:\n",
    "            self.expand_dims = P.ExpandDims()\n",
    "            self.sub = P.Sub()\n",
    "            self.add = P.Add()\n",
    "            self.cast = P.Cast()\n",
    "            self.get_dtype = P.DType()\n",
    "        if do_return_2d_tensor:\n",
    "            self.shape_return = (-1, num_attention_heads * size_per_head)\n",
    "        else:\n",
    "            self.shape_return = (-1, from_seq_length, num_attention_heads * size_per_head)\n",
    "\n",
    "        self.cast_compute_type = SaturateCast(dst_type=compute_type)\n",
    "        if self.use_relative_positions:\n",
    "            self._generate_relative_positions_embeddings = \\\n",
    "                RelaPosEmbeddingsGenerator(length=to_seq_length,\n",
    "                                           depth=size_per_head,\n",
    "                                           max_relative_position=16,\n",
    "                                           initializer_range=initializer_range,\n",
    "                                           use_one_hot_embeddings=use_one_hot_embeddings)\n",
    "\n",
    "    def construct(self, from_tensor, to_tensor, attention_mask):\n",
    "        \"\"\"reshape 2d/3d input tensors to 2d\"\"\"\n",
    "        from_tensor_2d = self.reshape(from_tensor, self.shape_from_2d)\n",
    "        to_tensor_2d = self.reshape(to_tensor, self.shape_to_2d)\n",
    "        query_out = self.query_layer(from_tensor_2d)\n",
    "        key_out = self.key_layer(to_tensor_2d)\n",
    "        value_out = self.value_layer(to_tensor_2d)\n",
    "\n",
    "        query_layer = self.reshape(query_out, self.shape_from)\n",
    "        query_layer = self.transpose(query_layer, self.trans_shape)\n",
    "        key_layer = self.reshape(key_out, self.shape_to)\n",
    "        key_layer = self.transpose(key_layer, self.trans_shape)\n",
    "\n",
    "        attention_scores = self.matmul_trans_b(query_layer, key_layer)\n",
    "\n",
    "        # additional logic when relative position encoding is used\n",
    "        if self.use_relative_positions:\n",
    "            # relations_keys is [F|T, F|T, H]\n",
    "            relations_keys = self._generate_relative_positions_embeddings()\n",
    "            relations_keys = self.cast_compute_type(relations_keys)\n",
    "            # query_layer_t is [F, B, N, H]\n",
    "            query_layer_t = self.transpose(query_layer, self.trans_shape_relative)\n",
    "            # query_layer_r is [F, B * N, H]\n",
    "            query_layer_r = self.reshape(query_layer_t,\n",
    "                                         (self.from_seq_length,\n",
    "                                          -1,\n",
    "                                          self.size_per_head))\n",
    "            # key_position_scores is [F, B * N, F|T]\n",
    "            key_position_scores = self.matmul_trans_b(query_layer_r,\n",
    "                                                      relations_keys)\n",
    "            # key_position_scores_r is [F, B, N, F|T]\n",
    "            key_position_scores_r = self.reshape(key_position_scores,\n",
    "                                                 (self.from_seq_length,\n",
    "                                                  -1,\n",
    "                                                  self.num_attention_heads,\n",
    "                                                  self.from_seq_length))\n",
    "            # key_position_scores_r_t is [B, N, F, F|T]\n",
    "            key_position_scores_r_t = self.transpose(key_position_scores_r,\n",
    "                                                     self.trans_shape_position)\n",
    "            attention_scores = attention_scores + key_position_scores_r_t\n",
    "\n",
    "        attention_scores = self.multiply(self.scores_mul, attention_scores)\n",
    "\n",
    "        if self.has_attention_mask:\n",
    "            attention_mask = self.expand_dims(attention_mask, 1)\n",
    "            multiply_out = self.sub(self.cast(F.tuple_to_array((1.0,)), self.get_dtype(attention_scores)),\n",
    "                                    self.cast(attention_mask, self.get_dtype(attention_scores)))\n",
    "\n",
    "            adder = self.multiply(multiply_out, self.multiply_data)\n",
    "            attention_scores = self.add(adder, attention_scores)\n",
    "\n",
    "        attention_probs = self.softmax(attention_scores)\n",
    "        attention_probs = self.dropout(attention_probs)\n",
    "\n",
    "        value_layer = self.reshape(value_out, self.shape_to)\n",
    "        value_layer = self.transpose(value_layer, self.trans_shape)\n",
    "        context_layer = self.matmul(attention_probs, value_layer)\n",
    "\n",
    "        # additional logic when relative position encoding is used\n",
    "        if self.use_relative_positions:\n",
    "            # relations_values is [F|T, F|T, H]\n",
    "            relations_values = self._generate_relative_positions_embeddings()\n",
    "            relations_values = self.cast_compute_type(relations_values)\n",
    "            # attention_probs_t is [F, B, N, T]\n",
    "            attention_probs_t = self.transpose(attention_probs, self.trans_shape_relative)\n",
    "            # attention_probs_r is [F, B * N, T]\n",
    "            attention_probs_r = self.reshape(\n",
    "                attention_probs_t,\n",
    "                (self.from_seq_length,\n",
    "                 -1,\n",
    "                 self.to_seq_length))\n",
    "            # value_position_scores is [F, B * N, H]\n",
    "            value_position_scores = self.matmul(attention_probs_r,\n",
    "                                                relations_values)\n",
    "            # value_position_scores_r is [F, B, N, H]\n",
    "            value_position_scores_r = self.reshape(value_position_scores,\n",
    "                                                   (self.from_seq_length,\n",
    "                                                    -1,\n",
    "                                                    self.num_attention_heads,\n",
    "                                                    self.size_per_head))\n",
    "            # value_position_scores_r_t is [B, N, F, H]\n",
    "            value_position_scores_r_t = self.transpose(value_position_scores_r,\n",
    "                                                       self.trans_shape_position)\n",
    "            context_layer = context_layer + value_position_scores_r_t\n",
    "\n",
    "        context_layer = self.transpose(context_layer, self.trans_shape)\n",
    "        context_layer = self.reshape(context_layer, self.shape_return)\n",
    "\n",
    "        return context_layer\n"
   ]
  },
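  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The reshape/transpose bookkeeping in `ErnieAttention` — splitting the hidden dimension into heads with `shape_from` and reordering axes with `trans_shape = (0, 2, 1, 3)` — can be traced with a small NumPy sketch (the sizes below are hypothetical):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "batch, seq_len, num_heads, size_per_head = 2, 6, 4, 16\n",
    "hidden = num_heads * size_per_head  # 64\n",
    "\n",
    "# the query dense layer operates on 2D input: [batch * seq, hidden]\n",
    "query_out = np.random.rand(batch * seq_len, hidden)\n",
    "\n",
    "# shape_from = (-1, seq_len, num_heads, size_per_head)\n",
    "query_layer = query_out.reshape(-1, seq_len, num_heads, size_per_head)\n",
    "# trans_shape = (0, 2, 1, 3) moves the head axis before the sequence axis\n",
    "query_layer = query_layer.transpose(0, 2, 1, 3)  # [batch, heads, seq, size]\n",
    "```\n",
    "\n",
    "After this step, a batched matmul over the last two axes computes all heads' attention scores in parallel."
   ]
  },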
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.2.2.2 Output\n",
    "For convenience and later reuse, we wrap the final linear projection and residual connection of the self-attention module into a separate class. It is a Dense + Dropout + LayerNorm structure: the dense layer is initialized with `TruncatedNormal` (a truncated normal/Gaussian distribution), dropout is applied to improve generalization, the result is added to the input through a residual connection, and finally `LayerNorm` normalizes the output."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ErnieOutput(nn.Cell):\n",
    "    \"\"\"\n",
    "    Apply a linear computation to hidden status and a residual computation to input.\n",
    "    Args:\n",
    "        in_channels (int): Input channels.\n",
    "        out_channels (int): Output channels.\n",
    "        initializer_range (float): Initialization value of TruncatedNormal. Default: 0.02.\n",
    "        dropout_prob (float): The dropout probability. Default: 0.1.\n",
    "        compute_type (:class:`mindspore.dtype`): Compute type in ErnieTransformer. Default: mstype.float32.\n",
    "    \"\"\"\n",
    "    def __init__(self,\n",
    "                 in_channels,\n",
    "                 out_channels,\n",
    "                 initializer_range=0.02,\n",
    "                 dropout_prob=0.1,\n",
    "                 compute_type=mstype.float32):\n",
    "        super(ErnieOutput, self).__init__()\n",
    "        self.dense = nn.Dense(in_channels, out_channels,\n",
    "                              weight_init=TruncatedNormal(initializer_range)).to_float(compute_type)\n",
    "        self.dropout = nn.Dropout(1 - dropout_prob)\n",
    "        self.dropout_prob = dropout_prob\n",
    "        self.add = P.Add()\n",
    "        self.layernorm = nn.LayerNorm((out_channels,)).to_float(compute_type)\n",
    "        self.cast = P.Cast()\n",
    "\n",
    "    def construct(self, hidden_status, input_tensor):\n",
    "        output = self.dense(hidden_status)\n",
    "        output = self.dropout(output)\n",
    "        output = self.add(input_tensor, output)\n",
    "        output = self.layernorm(output)\n",
    "        return output"
   ]
  },
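  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The Dense → Dropout → residual → LayerNorm pattern of `ErnieOutput` can be checked with a NumPy sketch (dropout omitted for clarity, and a hand-rolled layer norm without learnable gamma/beta rather than the MindSpore one):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def layer_norm(x, eps=1e-7):\n",
    "    # normalize over the last axis, as nn.LayerNorm((out_channels,)) does\n",
    "    mean = x.mean(axis=-1, keepdims=True)\n",
    "    var = x.var(axis=-1, keepdims=True)\n",
    "    return (x - mean) / np.sqrt(var + eps)\n",
    "\n",
    "def output_block(hidden_status, input_tensor, W, b):\n",
    "    out = hidden_status @ W + b   # dense projection\n",
    "    out = out + input_tensor      # residual connection\n",
    "    return layer_norm(out)        # normalized output\n",
    "```\n",
    "\n",
    "Each output row has zero mean and unit variance over the feature axis, regardless of the scale of the residual sum."
   ]
  },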
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.2.2.3 ErnieSelfAttention\n",
    "Assembling the `Attention` class above with the `Output` module completes the `ErnieSelfAttention` module."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ErnieSelfAttention(nn.Cell):\n",
    "    \"\"\"\n",
    "    Apply self-attention.\n",
    "    Args:\n",
    "        seq_length (int): Length of input sequence.\n",
    "        hidden_size (int): Size of the Ernie encoder layers.\n",
    "        num_attention_heads (int): Number of attention heads. Default: 12.\n",
    "        attention_probs_dropout_prob (float): The dropout probability for\n",
    "                                      ErnieAttention. Default: 0.1.\n",
    "        use_one_hot_embeddings (bool): Specifies whether to use one_hot encoding form. Default: False.\n",
    "        initializer_range (float): Initialization value of TruncatedNormal. Default: 0.02.\n",
    "        hidden_dropout_prob (float): The dropout probability for ErnieOutput. Default: 0.1.\n",
    "        use_relative_positions (bool): Specifies whether to use relative positions. Default: False.\n",
    "        compute_type (:class:`mindspore.dtype`): Compute type in ErnieSelfAttention. Default: mstype.float32.\n",
    "    \"\"\"\n",
    "    def __init__(self,\n",
    "                 seq_length,\n",
    "                 hidden_size,\n",
    "                 num_attention_heads=12,\n",
    "                 attention_probs_dropout_prob=0.1,\n",
    "                 use_one_hot_embeddings=False,\n",
    "                 initializer_range=0.02,\n",
    "                 hidden_dropout_prob=0.1,\n",
    "                 use_relative_positions=False,\n",
    "                 compute_type=mstype.float32):\n",
    "        super(ErnieSelfAttention, self).__init__()\n",
    "        if hidden_size % num_attention_heads != 0:\n",
    "            raise ValueError(\"The hidden size (%d) is not a multiple of the number \"\n",
    "                             \"of attention heads (%d)\" % (hidden_size, num_attention_heads))\n",
    "\n",
    "        self.size_per_head = int(hidden_size / num_attention_heads)\n",
    "\n",
    "        self.attention = ErnieAttention(\n",
    "            from_tensor_width=hidden_size,\n",
    "            to_tensor_width=hidden_size,\n",
    "            from_seq_length=seq_length,\n",
    "            to_seq_length=seq_length,\n",
    "            num_attention_heads=num_attention_heads,\n",
    "            size_per_head=self.size_per_head,\n",
    "            attention_probs_dropout_prob=attention_probs_dropout_prob,\n",
    "            use_one_hot_embeddings=use_one_hot_embeddings,\n",
    "            initializer_range=initializer_range,\n",
    "            use_relative_positions=use_relative_positions,\n",
    "            has_attention_mask=True,\n",
    "            do_return_2d_tensor=True,\n",
    "            compute_type=compute_type)\n",
    "\n",
    "        self.output = ErnieOutput(in_channels=hidden_size,\n",
    "                                  out_channels=hidden_size,\n",
    "                                  initializer_range=initializer_range,\n",
    "                                  dropout_prob=hidden_dropout_prob,\n",
    "                                  compute_type=compute_type)\n",
    "        self.reshape = P.Reshape()\n",
    "        self.shape = (-1, hidden_size)\n",
    "\n",
    "    def construct(self, input_tensor, attention_mask):\n",
    "        input_tensor = self.reshape(input_tensor, self.shape)\n",
    "        attention_output = self.attention(input_tensor, input_tensor, attention_mask)\n",
    "        output = self.output(attention_output, input_tensor)\n",
    "        return output"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.2.3 ERNIETransformer\n",
    "A key to ERNIE's success is the Transformer model, introduced in a 2017 paper [2]. The attention-based encoder-decoder architecture proposed there has been enormously successful in natural language processing. The model structure is shown in Figure 2 below:\n",
    "\n",
    "<center>\n",
    "    <img style=\"border-radius: 0.3125em;\n",
    "    box-shadow: 0 2px 4px 0 rgba(34,36,38,.12),0 2px 10px 0 rgba(34,36,38,.08); width:50%; height:50%\" \n",
    "    src=\"https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r1.8/tutorials/application/source_zh_cn/cv/images/transformer_architecture.png\">\n",
    "    <br>\n",
    "    <div style=\"color:orange; border-bottom: 1px solid #d9d9d9;\n",
    "    display: inline-block;\n",
    "    color: #999;\n",
    "    padding: 2px;\">Figure 2. Transformer architecture</div>\n",
    "</center>\n",
    "\n",
    "The model consists of stacked Encoder and Decoder modules; the detailed structure of the Encoder and Decoder is shown in Figure 3 [2]:\n",
    "\n",
    "\n",
    "<center>\n",
    "    <img style=\"border-radius: 0.3125em;\n",
    "    box-shadow: 0 2px 4px 0 rgba(34,36,38,.12),0 2px 10px 0 rgba(34,36,38,.08); width:50%;height:50%\" \n",
    "    src=\"https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r1.8/tutorials/application/source_zh_cn/cv/images/encoder_decoder.png\">\n",
    "    <br>\n",
    "    <div style=\"color:orange; border-bottom: 1px solid #d9d9d9;\n",
    "    display: inline-block;\n",
    "    color: #999;\n",
    "    padding: 2px;\">Figure 3. Encoder and Decoder structure</div>\n",
    "</center>\n",
    "\n",
    "The Encoder and Decoder are built from several components: multi-head attention layers, feed-forward layers, normalization layers, and residual connections (the \"Add\" in the figure). The most important of these is the multi-head attention structure, which is based on the self-attention mechanism and runs multiple self-attention heads in parallel.\n",
    "\n",
    "#### 3.2.3.1 ErnieEncoderCell\n",
    "ERNIE is a pre-trained model and uses only the Encoder part of the Transformer, so we first build the Encoder cell used in ERNIE.\n",
    "\n",
    "This layer wraps the Self-Attention built above, the intermediate layer, and ErnieOutput (the FFN part after attention); the cross-attention part (used when BERT acts as a decoder) is simply omitted here.\n",
    "\n",
    "In principle, it just calls the three submodules in sequence, so there is little to explain.\n",
    "\n",
    "After attention there is another fully connected + activation step; this dense layer expands the dimension to 3072, four times the original 768. The default activation is GELU (Gaussian Error Linear Units), which involves the Gaussian error function and is commonly approximated by a tanh-based expression. See Figure 4:\n",
    "\n",
    "<center>\n",
    "    <img style=\"border-radius: 0.3125em;\n",
    "    box-shadow: 0 2px 4px 0 rgba(34,36,38,.12),0 2px 10px 0 rgba(34,36,38,.08);\" \n",
    "    src=\"http://static.article.iis7.com/imgcj/2021/14/2023055824.png\">\n",
    "    <br>\n",
    "    <div style=\"color:orange; border-bottom: 1px solid #d9d9d9;\n",
    "    display: inline-block;\n",
    "    color: #999;\n",
    "    padding: 2px;\">Figure 4. GELU, ReLU, and ELU curves</div>\n",
    "</center>\n",
    "\n",
    "As for why the Transformer uses this activation: GELU empirically outperforms ReLU and its variants, so subsequent language models have kept it.\n",
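    "\n",
    "The exact GELU and its tanh approximation can be compared directly (a standalone sketch using only the standard library's `math.erf`):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def gelu_exact(x):\n",
    "    # GELU(x) = x * Phi(x), where Phi is the standard normal CDF\n",
    "    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))\n",
    "\n",
    "def gelu_tanh(x):\n",
    "    # tanh-based approximation used by many implementations\n",
    "    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)\n",
    "                                      * (x + 0.044715 * x ** 3)))\n",
    "```\n",
    "\n",
    "Over a typical activation range the two agree to roughly three decimal places.\n",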
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ErnieEncoderCell(nn.Cell):\n",
    "    \"\"\"\n",
    "    Encoder cell used in ErnieTransformer.\n",
    "    Args:\n",
    "        hidden_size (int): Size of the Ernie encoder layers. Default: 768.\n",
    "        seq_length (int): Length of input sequence. Default: 512.\n",
    "        num_attention_heads (int): Number of attention heads. Default: 12.\n",
    "        intermediate_size (int): Size of intermediate layer. Default: 3072.\n",
    "        attention_probs_dropout_prob (float): The dropout probability for\n",
    "                                      ErnieAttention. Default: 0.02.\n",
    "        use_one_hot_embeddings (bool): Specifies whether to use one hot encoding form. Default: False.\n",
    "        initializer_range (float): Initialization value of TruncatedNormal. Default: 0.02.\n",
    "        hidden_dropout_prob (float): The dropout probability for ErnieOutput. Default: 0.1.\n",
    "        use_relative_positions (bool): Specifies whether to use relative positions. Default: False.\n",
    "        hidden_act (str): Activation function. Default: \"gelu\".\n",
    "        compute_type (:class:`mindspore.dtype`): Compute type in attention. Default: mstype.float32.\n",
    "    \"\"\"\n",
    "    def __init__(self,\n",
    "                 hidden_size=768,\n",
    "                 seq_length=512,\n",
    "                 num_attention_heads=12,\n",
    "                 intermediate_size=3072,\n",
    "                 attention_probs_dropout_prob=0.02,\n",
    "                 use_one_hot_embeddings=False,\n",
    "                 initializer_range=0.02,\n",
    "                 hidden_dropout_prob=0.1,\n",
    "                 use_relative_positions=False,\n",
    "                 hidden_act=\"gelu\",\n",
    "                 compute_type=mstype.float32):\n",
    "        super(ErnieEncoderCell, self).__init__()\n",
    "        self.attention = ErnieSelfAttention(\n",
    "            hidden_size=hidden_size,\n",
    "            seq_length=seq_length,\n",
    "            num_attention_heads=num_attention_heads,\n",
    "            attention_probs_dropout_prob=attention_probs_dropout_prob,\n",
    "            use_one_hot_embeddings=use_one_hot_embeddings,\n",
    "            initializer_range=initializer_range,\n",
    "            hidden_dropout_prob=hidden_dropout_prob,\n",
    "            use_relative_positions=use_relative_positions,\n",
    "            compute_type=compute_type)\n",
    "        self.intermediate = nn.Dense(in_channels=hidden_size,\n",
    "                                     out_channels=intermediate_size,\n",
    "                                     activation=hidden_act,\n",
    "                                     weight_init=TruncatedNormal(initializer_range)).to_float(compute_type)\n",
    "        self.output = ErnieOutput(in_channels=intermediate_size,\n",
    "                                  out_channels=hidden_size,\n",
    "                                  initializer_range=initializer_range,\n",
    "                                  dropout_prob=hidden_dropout_prob,\n",
    "                                  compute_type=compute_type)\n",
    "\n",
    "    def construct(self, hidden_states, attention_mask):\n",
    "        # compute the attention output\n",
    "        attention_output = self.attention(hidden_states, attention_mask)\n",
    "        # expand through the intermediate (feed-forward) layer\n",
    "        intermediate_output = self.intermediate(attention_output)\n",
    "        # residual add and layer normalization\n",
    "        output = self.output(intermediate_output, attention_output)\n",
    "        return output"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.2.3.2 Transformer\n",
    "\n",
    "With the Encoder cell above in place, the Transformer portion used in ERNIE is essentially complete.\n",
    "\n",
    "Here we build the Transformer part of ERNIE as follows, stacking `num_hidden_layers` (12 by default) ERNIE Encoder cells as the encoder."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ErnieTransformer(nn.Cell):\n",
    "    \"\"\"\n",
    "    Multi-layer Ernie transformer.\n",
    "    Args:\n",
    "        hidden_size (int): Size of the encoder layers.\n",
    "        seq_length (int): Length of input sequence.\n",
    "        num_hidden_layers (int): Number of hidden layers in encoder cells.\n",
    "        num_attention_heads (int): Number of attention heads in encoder cells. Default: 12.\n",
    "        intermediate_size (int): Size of intermediate layer in encoder cells. Default: 3072.\n",
    "        attention_probs_dropout_prob (float): The dropout probability for\n",
    "                                      ErnieAttention. Default: 0.1.\n",
    "        use_one_hot_embeddings (bool): Specifies whether to use one hot encoding form. Default: False.\n",
    "        initializer_range (float): Initialization value of TruncatedNormal. Default: 0.02.\n",
    "        hidden_dropout_prob (float): The dropout probability for ErnieOutput. Default: 0.1.\n",
    "        use_relative_positions (bool): Specifies whether to use relative positions. Default: False.\n",
    "        hidden_act (str): Activation function used in the encoder cells. Default: \"gelu\".\n",
    "        compute_type (:class:`mindspore.dtype`): Compute type in ErnieTransformer. Default: mstype.float32.\n",
    "        return_all_encoders (bool): Specifies whether to return all encoders. Default: False.\n",
    "    \"\"\"\n",
    "    def __init__(self,\n",
    "                 hidden_size,\n",
    "                 seq_length,\n",
    "                 num_hidden_layers,\n",
    "                 num_attention_heads=12,\n",
    "                 intermediate_size=3072,\n",
    "                 attention_probs_dropout_prob=0.1,\n",
    "                 use_one_hot_embeddings=False,\n",
    "                 initializer_range=0.02,\n",
    "                 hidden_dropout_prob=0.1,\n",
    "                 use_relative_positions=False,\n",
    "                 hidden_act=\"gelu\",\n",
    "                 compute_type=mstype.float32,\n",
    "                 return_all_encoders=False):\n",
    "        super(ErnieTransformer, self).__init__()\n",
    "        self.return_all_encoders = return_all_encoders\n",
    "\n",
    "        layers = []\n",
    "        for _ in range(num_hidden_layers):\n",
    "            layer = ErnieEncoderCell(hidden_size=hidden_size,\n",
    "                                     seq_length=seq_length,\n",
    "                                     num_attention_heads=num_attention_heads,\n",
    "                                     intermediate_size=intermediate_size,\n",
    "                                     attention_probs_dropout_prob=attention_probs_dropout_prob,\n",
    "                                     use_one_hot_embeddings=use_one_hot_embeddings,\n",
    "                                     initializer_range=initializer_range,\n",
    "                                     hidden_dropout_prob=hidden_dropout_prob,\n",
    "                                     use_relative_positions=use_relative_positions,\n",
    "                                     hidden_act=hidden_act,\n",
    "                                     compute_type=compute_type)\n",
    "            layers.append(layer)\n",
    "\n",
    "        self.layers = nn.CellList(layers)\n",
    "\n",
    "        self.reshape = P.Reshape()\n",
    "        self.shape = (-1, hidden_size)\n",
    "        self.out_shape = (-1, seq_length, hidden_size)\n",
    "\n",
    "    def construct(self, input_tensor, attention_mask):\n",
    "        \"\"\"Multi-layer Ernie transformer.\"\"\"\n",
    "        prev_output = self.reshape(input_tensor, self.shape)\n",
    "\n",
    "        all_encoder_layers = ()\n",
    "        for layer_module in self.layers:\n",
    "            layer_output = layer_module(prev_output, attention_mask)\n",
    "            prev_output = layer_output\n",
    "\n",
    "            if self.return_all_encoders:\n",
    "                layer_output = self.reshape(layer_output, self.out_shape)\n",
    "                all_encoder_layers = all_encoder_layers + (layer_output,)\n",
    "\n",
    "        if not self.return_all_encoders:\n",
    "            prev_output = self.reshape(prev_output, self.out_shape)\n",
    "            all_encoder_layers = all_encoder_layers + (prev_output,)\n",
    "        return all_encoder_layers"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.2.4 ErnieModel\n",
    "Combining the subclasses above, such as `EmbeddingPostprocessor` and `ErnieTransformer`, we build ErnieModel as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "import copy\n",
    "\n",
    "class ErnieModel(nn.Cell):\n",
    "    \"\"\"\n",
    "    Bidirectional Encoder Representations from Transformers: the standard ERNIE model.\n",
    "    Args:\n",
    "        config (Class): Configuration for ErnieModel.\n",
    "        is_training (bool): True for training mode. False for eval mode.\n",
    "        use_one_hot_embeddings (bool): Specifies whether to use one hot encoding form. Default: False.\n",
    "    \"\"\"\n",
    "    def __init__(self,\n",
    "                 config,\n",
    "                 is_training,\n",
    "                 use_one_hot_embeddings=False):\n",
    "        super(ErnieModel, self).__init__()\n",
    "        config = copy.deepcopy(config)\n",
    "        if not is_training:\n",
    "            config.hidden_dropout_prob = 0.0\n",
    "            config.attention_probs_dropout_prob = 0.0\n",
    "\n",
    "        self.seq_length = config.seq_length\n",
    "        self.hidden_size = config.hidden_size\n",
    "        self.num_hidden_layers = config.num_hidden_layers\n",
    "        self.embedding_size = config.hidden_size\n",
    "        self.token_type_ids = None\n",
    "\n",
    "        self.last_idx = self.num_hidden_layers - 1\n",
    "        output_embedding_shape = [-1, self.seq_length, self.embedding_size]\n",
    "\n",
    "        self.ernie_embedding_lookup = nn.Embedding(\n",
    "            vocab_size=config.vocab_size,\n",
    "            embedding_size=self.embedding_size,\n",
    "            use_one_hot=use_one_hot_embeddings)\n",
    "\n",
    "        self.ernie_embedding_postprocessor = EmbeddingPostprocessor(\n",
    "            embedding_size=self.embedding_size,\n",
    "            embedding_shape=output_embedding_shape,\n",
    "            use_relative_positions=config.use_relative_positions,\n",
    "            use_token_type=True,\n",
    "            token_type_vocab_size=config.type_vocab_size,\n",
    "            use_one_hot_embeddings=use_one_hot_embeddings,\n",
    "            initializer_range=0.02,\n",
    "            max_position_embeddings=config.max_position_embeddings,\n",
    "            dropout_prob=config.hidden_dropout_prob)\n",
    "\n",
    "        self.ernie_encoder = ErnieTransformer(\n",
    "            hidden_size=self.hidden_size,\n",
    "            seq_length=self.seq_length,\n",
    "            num_attention_heads=config.num_attention_heads,\n",
    "            num_hidden_layers=self.num_hidden_layers,\n",
    "            intermediate_size=config.intermediate_size,\n",
    "            attention_probs_dropout_prob=config.attention_probs_dropout_prob,\n",
    "            use_one_hot_embeddings=use_one_hot_embeddings,\n",
    "            initializer_range=config.initializer_range,\n",
    "            hidden_dropout_prob=config.hidden_dropout_prob,\n",
    "            use_relative_positions=config.use_relative_positions,\n",
    "            hidden_act=config.hidden_act,\n",
    "            compute_type=config.compute_type,\n",
    "            return_all_encoders=True)\n",
    "\n",
    "        self.cast = P.Cast()\n",
    "        self.dtype = config.dtype\n",
    "        self.cast_compute_type = SaturateCast(dst_type=config.compute_type)\n",
    "        self.slice = P.StridedSlice()\n",
    "\n",
    "        self.squeeze_1 = P.Squeeze(axis=1)\n",
    "        self.dense = nn.Dense(self.hidden_size, self.hidden_size,\n",
    "                              activation=\"tanh\",\n",
    "                              weight_init=TruncatedNormal(config.initializer_range)).to_float(config.compute_type)\n",
    "        self._create_attention_mask_from_input_mask = CreateAttentionMaskFromInputMask(config)\n",
    "\n",
    "    def construct(self, input_ids, token_type_ids, input_mask):\n",
    "        \"\"\"构建来自Transformers的双向 Encoder 表示.\"\"\"\n",
    "        # embedding\n",
    "        word_embeddings = self.ernie_embedding_lookup(input_ids)\n",
    "        embedding_output = self.ernie_embedding_postprocessor(token_type_ids,\n",
    "                                                              word_embeddings)\n",
    "\n",
    "        # attention mask [batch_size, seq_length, seq_length]\n",
    "        attention_mask = self._create_attention_mask_from_input_mask(input_mask)\n",
    "\n",
    "        # ernie encoder\n",
    "        encoder_output = self.ernie_encoder(self.cast_compute_type(embedding_output),\n",
    "                                            attention_mask)\n",
    "\n",
    "        sequence_output = self.cast(encoder_output[self.last_idx], self.dtype)\n",
    "\n",
    "        # pooler\n",
    "        batch_size = P.Shape()(input_ids)[0]\n",
    "        sequence_slice = self.slice(sequence_output,\n",
    "                                    (0, 0, 0),\n",
    "                                    (batch_size, 1, self.hidden_size),\n",
    "                                    (1, 1, 1))\n",
    "        first_token = self.squeeze_1(sequence_slice)\n",
    "        pooled_output = self.dense(first_token)\n",
    "        pooled_output = self.cast(pooled_output, self.dtype)\n",
    "\n",
    "        return sequence_output, pooled_output\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.2.5 构建分类评估模型\n",
    "在构造好原始模型后，我们为其增加文本分类任务评估对象。该类负责分类任务评估，即XNLI（num_tlabels=3），LCQMC（num_labels=2），Chnsenti（num_tlabels=2）。返回的输出表示最终log _softmax的结果与softmax的值成正比。"
   ]
  },
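  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "补充说明：输出取log_softmax不会改变分类结果。由于对数函数单调递增，各类别得分经log变换后大小顺序保持不变：\n",
    "$$\n",
    "\\arg\\max_{c}\\log(softmax(z)_{c})=\\arg\\max_{c}softmax(z)_{c}\n",
    "$$\n",
    "因此后续评估时直接对模型输出按行取argmax，即可得到预测类别。"
   ]
  },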
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ErnieCLSModel(nn.Cell):\n",
    "    \"\"\"\n",
    "    该类负责分类任务评估，即XNLI（num_labels=3）、LCQMC（num_labels=2）、Chnsenti（num.labels=2）。\n",
    "    返回的输出表示最终的逻辑，因为log_softmax的结果与softmax的成比例。\n",
    "    \"\"\"\n",
    "    def __init__(self, config, is_training, num_labels=2, dropout_prob=0.0, use_one_hot_embeddings=False,\n",
    "                 assessment_method=\"\"):\n",
    "        super(ErnieCLSModel, self).__init__()\n",
    "        if not is_training:\n",
    "            config.hidden_dropout_prob = 0.0\n",
    "            config.hidden_probs_dropout_prob = 0.0\n",
    "        self.ernie = ErnieModel(config, is_training, use_one_hot_embeddings)\n",
    "        self.cast = P.Cast()\n",
    "        self.weight_init = TruncatedNormal(config.initializer_range)\n",
    "        self.log_softmax = P.LogSoftmax(axis=-1)\n",
    "        self.dtype = config.dtype\n",
    "        self.num_labels = num_labels\n",
    "        self.dense_1 = nn.Dense(config.hidden_size, self.num_labels, weight_init=self.weight_init,\n",
    "                                has_bias=True).to_float(config.compute_type)\n",
    "        self.dropout = nn.Dropout(1 - dropout_prob)\n",
    "        self.assessment_method = assessment_method\n",
    "\n",
    "    def construct(self, input_ids, input_mask, token_type_id):\n",
    "        _, pooled_output = \\\n",
    "            self.ernie(input_ids, token_type_id, input_mask)\n",
    "        cls = self.cast(pooled_output, self.dtype)\n",
    "        cls = self.dropout(cls)\n",
    "        logits = self.dense_1(cls)\n",
    "        logits = self.cast(logits, self.dtype)\n",
    "        if self.assessment_method != \"spearman_correlation\":\n",
    "            logits = self.log_softmax(logits)\n",
    "        return logits\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.3 损失函数与优化器\n",
    "#### 3.3.1 CrossEntropyCalculation\n",
    "对于ERNIE模型，其损失函数是由交叉熵损失函数构成的，为了配合后续计算，我们定义交叉熵损失计算对象`CrossEntropyCalculation`,该对象能够通过交叉熵计算模型损失。"
   ]
  },
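  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "交叉熵损失的计算公式如下（传入的logits已是log_softmax的输出，即$\\log p_{i,c}$）：\n",
    "$$\n",
    "loss=-\\frac{1}{N}\\sum_{i=1}^{N}\\sum_{c=1}^{C}y_{i,c}\\log p_{i,c}\n",
    "$$\n",
    "其中$N$为batch大小，$C$为类别数（即`num_labels`），$y_{i,c}$为one-hot标签。下面代码中先用`OneHot`算子展开标签，与logits逐元素相乘并按最后一维求和后取负，得到逐样本损失，再对batch求均值。"
   ]
  },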
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "class CrossEntropyCalculation(nn.Cell):\n",
    "    \"\"\"\n",
    "    Cross Entropy loss\n",
    "    \"\"\"\n",
    "    def __init__(self, is_training=True):\n",
    "        super(CrossEntropyCalculation, self).__init__()\n",
    "        self.onehot = P.OneHot()\n",
    "        self.on_value = Tensor(1.0, mstype.float32)\n",
    "        self.off_value = Tensor(0.0, mstype.float32)\n",
    "        self.reduce_sum = P.ReduceSum()\n",
    "        self.reduce_mean = P.ReduceMean()\n",
    "        self.reshape = P.Reshape()\n",
    "        self.last_idx = (-1,)\n",
    "        self.neg = P.Neg()\n",
    "        self.cast = P.Cast()\n",
    "        self.is_training = is_training\n",
    "\n",
    "    def construct(self, logits, label_ids, num_labels):\n",
    "        if self.is_training:\n",
    "            label_ids = self.reshape(label_ids, self.last_idx)\n",
    "            one_hot_labels = self.onehot(label_ids, num_labels, self.on_value, self.off_value)\n",
    "            per_example_loss = self.neg(self.reduce_sum(one_hot_labels * logits, self.last_idx))\n",
    "            loss = self.reduce_mean(per_example_loss, self.last_idx)\n",
    "            return_value = self.cast(loss, mstype.float32)\n",
    "        else:\n",
    "            return_value = logits * 1.0\n",
    "        return return_value"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.3.2 ErnieLearningRate\n",
    "这里定义学习率对象`ErnieLearningRate`来作为ERNIE模型的学习率参数。在`ErnieLearningRate`中，首先使用基于多项式衰减函数来计算学习率的`PolynomialDecayLR`,根据设定的`decay_steps`从`learing_rate`逐步变化`end_learning_rate`。对于当前step，`PolynomialDecayLR`计算学习率的公式为：\n",
    "$$\n",
    "decayed\\_learning\\_rate=(learning\\_rate−end\\_learning\\_rate)∗(1−tmp\\_step/tmp\\_decay\\_steps)power+end\\_learning\\_rate\n",
    "$$\n",
    "其中，$tmp\\_step=min(current\\_step,decay\\_steps)$,\n",
    "如果设置 `update_decay_steps` 为`true`，则每 `decay_steps` 更新 `tmp_decay_step` 的值。公式为：\n",
    "$$\n",
    "tmp\\_decay\\_steps=decay\\_steps∗ceil(current\\_step/decay\\_steps)\n",
    "$$\n",
    "\n",
    "另外，由于刚开始训练时,模型的权重(weights)是随机初始化的，此时若选择一个较大的学习率,可能带来模型的不稳定(振荡)，选择Warmup预热学习率的方式，可以使得开始训练的几个epoches或者一些steps内学习率较小,在预热的小学习率下，模型可以慢慢趋于稳定,等模型相对稳定后再选择预先设置的学习率进行训练,使得模型收敛速度变得更快，模型效果更佳[引用]。我们这里使用MindSpore中的`WarmUpLR`函数来预热学习率。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "from mindspore.nn.learning_rate_schedule import LearningRateSchedule, PolynomialDecayLR, WarmUpLR\n",
    "\n",
    "class ErnieLearningRate(LearningRateSchedule):\n",
    "    \"\"\"\n",
    "    Warmup-decay learning rate for Ernie network.\n",
    "    \"\"\"\n",
    "    def __init__(self, learning_rate, end_learning_rate, warmup_steps, decay_steps, power):\n",
    "        super(ErnieLearningRate, self).__init__()\n",
    "        self.warmup_flag = False\n",
    "        if warmup_steps > 0:\n",
    "            self.warmup_flag = True\n",
    "            self.warmup_lr = WarmUpLR(learning_rate, warmup_steps)\n",
    "        self.decay_lr = PolynomialDecayLR(learning_rate, end_learning_rate, decay_steps, power)\n",
    "        self.warmup_steps = Tensor(np.array([warmup_steps]).astype(np.float32))\n",
    "\n",
    "        self.greater = P.Greater()\n",
    "        self.one = Tensor(np.array([1.0]).astype(np.float32))\n",
    "        self.cast = P.Cast()\n",
    "\n",
    "    def construct(self, global_step):\n",
    "        decay_lr = self.decay_lr(global_step)\n",
    "        if self.warmup_flag:\n",
    "            is_warmup = self.cast(self.greater(self.warmup_steps, global_step), mstype.float32)\n",
    "            warmup_lr = self.warmup_lr(global_step)\n",
    "            lr = (self.one - is_warmup) * decay_lr + is_warmup * warmup_lr\n",
    "        else:\n",
    "            lr = decay_lr\n",
    "        return lr"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.3.3 训练分类接口\n",
    "本文将对话情感分析模型定义为分类任务，为了能够直接利用上述定义的ERNIE 模型完成情感分类任务，我们构建了一个ERNIE模型的分类训练器，该模型通过集成ERNIE分类模型利用`CrossEntropyCalculation`交叉熵分类模型的损失，从而不断优化分类模型，提高模型准确率。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ErnieCLS(nn.Cell):\n",
    "    \"\"\"\n",
    "    Train interface for classification finetuning task.\n",
    "    \"\"\"\n",
    "    def __init__(self, config, is_training, num_labels=2, dropout_prob=0.0, use_one_hot_embeddings=False,\n",
    "                 assessment_method=\"\"):\n",
    "        super(ErnieCLS, self).__init__()\n",
    "        self.ernie = ErnieCLSModel(config, is_training, num_labels, dropout_prob, use_one_hot_embeddings)\n",
    "        self.loss = CrossEntropyCalculation(is_training)\n",
    "        self.num_labels = num_labels\n",
    "        self.is_training = is_training\n",
    "\n",
    "    def construct(self, input_ids, input_mask, token_type_id, label_ids):\n",
    "        logits = self.ernie(input_ids, input_mask, token_type_id)\n",
    "        loss = self.loss(logits, label_ids, self.num_labels)\n",
    "        return loss"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.4 评估指标\n",
    "训练逻辑完成后，需要对模型进行评估。即使用模型的预测结果和测试集的正确标签进行对比，求出预测的准确率。对话情感分析为三分类问题，通过对获得的概率分布求得最大概率的分类标签(0、1或2)，判断是否与正确标签(ground truth)是否相等即可。"
   ]
  },
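  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "举一个简单的数值例子：若某批次3条样本的模型输出按行取argmax得到预测标签[2, 0, 1]，而正确标签为[2, 1, 1]，则本批次命中2条，`acc_num`累加2、`total_num`累加3，该批次准确率为2/3。"
   ]
  },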
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Accuracy():\n",
    "    '''\n",
    "    calculate accuracy\n",
    "    '''\n",
    "    def __init__(self):\n",
    "        self.acc_num = 0\n",
    "        self.total_num = 0\n",
    "    def update(self, logits, labels):\n",
    "        labels = labels.asnumpy()\n",
    "        labels = np.reshape(labels, -1)\n",
    "        logits = logits.asnumpy()\n",
    "        logit_id = np.argmax(logits, axis=-1)\n",
    "        self.acc_num += np.sum(labels == logit_id)\n",
    "        self.total_num += len(labels)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4 模型训练与保存\n",
    "在前面我们完成了模型构建和训练、评估逻辑的设计，下面进行模型训练。这里我们设置模型训练轮数为3轮。\n",
    "\n",
    "### 4.1 模型保存\n",
    "为了尽可能保存的模型参数，通常在训练模型的过程中，每隔一段时间就会将训练模型信息保存一次，这些模型信息包含模型的参数信息，还包含其他信息，如当前的迭代次数，优化器的参数等，以便用于快速加载模型，这个保存模型信息的时间点就叫做CheckPoint。\n",
    "\n",
    "MindSpore提供了callback机制，可以在训练过程中执行自定义逻辑。这里使用框架提供的ModelCheckpoint、TimeMonitor和LossCallBack三个函数。 ModelCheckpoint可以保存网络模型和参数，以便进行后续的fine-tuning操作。 TimeMonitor是MindSpore官方提供的callback函数，可以用于监控训练过程中单步迭代时间。LossCallBack是我们自己定义的展示每一个step的loss的函数,代码如下。"
   ]
  },
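  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "其中epoch进度的计算依赖`math.modf`：它把`cur_step_num / dataset_size`拆分为小数部分（当前epoch内的进度percent）和整数部分（已完成的epoch数）。例如每轮459步时，第918步有$918/459=2.0$，`modf`返回$(0.0, 2.0)$；按代码中对整除情形的修正，显示为第1个epoch（从0计数）、进度1.000，与后文的训练日志一致。"
   ]
  },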
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "from mindspore.train.callback import Callback\n",
    "\n",
    "class LossCallBack(Callback):\n",
    "    \"\"\"\n",
    "    Monitor the loss in training.\n",
    "    If the loss in NAN or INF terminating training.\n",
    "    Note:\n",
    "        if per_print_times is 0 do not print loss.\n",
    "    Args:\n",
    "        per_print_times (int): Print loss every times. Default: 1.\n",
    "    \"\"\"\n",
    "    def __init__(self, dataset_size=-1):\n",
    "        super(LossCallBack, self).__init__()\n",
    "        self._dataset_size = dataset_size\n",
    "    def step_end(self, run_context):\n",
    "        \"\"\"\n",
    "        Print loss after each step\n",
    "        \"\"\"\n",
    "        cb_params = run_context.original_args()\n",
    "        if self._dataset_size > 0:\n",
    "            percent, epoch_num = math.modf(cb_params.cur_step_num / self._dataset_size)\n",
    "            if percent == 0:\n",
    "                percent = 1\n",
    "                epoch_num -= 1\n",
    "            print(\"epoch: {}, current epoch percent: {}, step: {}, outputs are {}\"\n",
    "                  .format(int(epoch_num), \"%.3f\" % percent, cb_params.cur_step_num, str(cb_params.net_outputs)),\n",
    "                  flush=True)\n",
    "        else:\n",
    "            print(\"epoch: {}, step: {}, outputs are {}\".format(cb_params.cur_epoch_num, cb_params.cur_step_num,\n",
    "                                                               str(cb_params.net_outputs)), flush=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.2 训练回调函数配置\n",
    "接下来我们将训练过程中需要的回调函数callbacks封装成为一个集合，从而模型在训练的时候能够在每一个epoch执行这些回调函数，执行打印或是保存模型等操作。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [],
   "source": [
    "from mindspore.train.callback import CheckpointConfig, ModelCheckpoint, TimeMonitor\n",
    "\n",
    "def get_callbacks(steps_per_epoch, save_checkpoint_path, dataset):\n",
    "    ckpt_config = CheckpointConfig(save_checkpoint_steps=steps_per_epoch, keep_checkpoint_max=10)\n",
    "    ckpoint_cb = ModelCheckpoint(prefix=\"classifier\",\n",
    "                                 directory=None if save_checkpoint_path == \"\" else save_checkpoint_path,\n",
    "                                 config=ckpt_config)\n",
    "    callbacks = [TimeMonitor(dataset.get_dataset_size()), LossCallBack(dataset.get_dataset_size()), ckpoint_cb]\n",
    "    return callbacks"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.3 模型训练\n",
    "MindSpore中将优化器应用在模型中的方法是使用`TrainOneStepWithLossScaleCell`类，该类能够使用混合精度功能的训练网络。\n",
    "\n",
    "实现了包含损失缩放（loss scale）的单次训练。它使用网络、优化器和用于更新损失缩放系数（loss scale）的Cell(或一个Tensor)作为参数。可在host侧或device侧更新损失缩放系数。 如果需要在host侧更新，使用Tensor作为 scale_sense ，否则，使用可更新损失缩放系数的Cell实例作为 scale_sense 。\n",
    "\n",
    "这里我们定义ERNIE模型定义微调训练模块，首先加载原有网络参数，随后根据输入数据对原有网络参数求解梯度变化，并观察梯度是否变化，如果梯度变小，则继续优化直到梯度不发生变化。"
   ]
  },
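  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "损失缩放的基本原理：反向传播前先将损失放大$s$倍（$s$为损失缩放系数），避免float16下较小的梯度下溢为0；求得梯度后再乘以$1/s$还原：\n",
    "$$\n",
    "grad=\\frac{\\partial (s\\cdot loss)}{\\partial w}\\times\\frac{1}{s}\n",
    "$$\n",
    "这正是下面代码中`grad_scale`配合`Reciprocal`算子所做的工作；若检测到梯度溢出，则跳过本次参数更新，并由损失缩放管理单元调整$s$。"
   ]
  },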
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "from mindspore.ops import composite as C\n",
    "\n",
    "GRADIENT_CLIP_TYPE = 1\n",
    "GRADIENT_CLIP_VALUE = 1.0\n",
    "grad_scale = C.MultitypeFuncGraph(\"grad_scale\")\n",
    "reciprocal = P.Reciprocal()\n",
    "\n",
    "clip_grad = C.MultitypeFuncGraph(\"clip_grad\")\n",
    "\n",
    "\n",
    "@clip_grad.register(\"Number\", \"Number\", \"Tensor\")\n",
    "def _clip_grad(clip_type, clip_value, grad):\n",
    "    \"\"\"\n",
    "    Clip gradients.\n",
    "    Inputs:\n",
    "        clip_type (int): The way to clip, 0 for 'value', 1 for 'norm'.\n",
    "        clip_value (float): Specifies how much to clip.\n",
    "        grad (tuple[Tensor]): Gradients.\n",
    "    Outputs:\n",
    "        tuple[Tensor], clipped gradients.\n",
    "    \"\"\"\n",
    "    if clip_type not in (0, 1):\n",
    "        return grad\n",
    "    dt = F.dtype(grad)\n",
    "    if clip_type == 0:\n",
    "        new_grad = C.clip_by_value(grad, F.cast(F.tuple_to_array((-clip_value,)), dt),\n",
    "                                   F.cast(F.tuple_to_array((clip_value,)), dt))\n",
    "    else:\n",
    "        new_grad = nn.ClipByNorm()(grad, F.cast(F.tuple_to_array((clip_value,)), dt))\n",
    "    return new_grad\n",
    "\n",
    "@grad_scale.register(\"Tensor\", \"Tensor\")\n",
    "def tensor_grad_scale(scale, grad):\n",
    "    return grad * reciprocal(scale)\n",
    "\n",
    "_grad_overflow = C.MultitypeFuncGraph(\"_grad_overflow\")\n",
    "grad_overflow = P.FloatStatus()\n",
    "@_grad_overflow.register(\"Tensor\")\n",
    "def _tensor_grad_overflow(grad):\n",
    "    return grad_overflow(grad)\n",
    "\n",
    "\n",
    "class ErnieFinetuneCell(nn.TrainOneStepWithLossScaleCell):\n",
    "    \"\"\"\n",
    "    Especially defined for finetuning where only four inputs tensor are needed.\n",
    "    Append an optimizer to the training network after that the construct\n",
    "    function can be called to create the backward graph.\n",
    "    Different from the builtin loss_scale wrapper cell, we apply grad_clip before the optimization.\n",
    "    Args:\n",
    "        network (Cell): The training network. Note that loss function should have been added.\n",
    "        optimizer (Optimizer): Optimizer for updating the weights.\n",
    "        scale_update_cell (Cell): Cell to do the loss scale. Default: None.\n",
    "    \"\"\"\n",
    "    def __init__(self, network, optimizer, scale_update_cell=None):\n",
    "        super(ErnieFinetuneCell, self).__init__(network, optimizer, scale_update_cell)\n",
    "        self.cast = P.Cast()\n",
    "\n",
    "    def construct(self,\n",
    "                  input_ids,\n",
    "                  input_mask,\n",
    "                  token_type_id,\n",
    "                  label_ids,\n",
    "                  sens=None):\n",
    "        \"\"\"Ernie Finetune\"\"\"\n",
    "\n",
    "        weights = self.weights\n",
    "        loss = self.network(input_ids,\n",
    "                            input_mask,\n",
    "                            token_type_id,\n",
    "                            label_ids)\n",
    "        if sens is None:\n",
    "            scaling_sens = self.scale_sense\n",
    "        else:\n",
    "            scaling_sens = sens\n",
    "\n",
    "        status, scaling_sens = self.start_overflow_check(loss, scaling_sens)\n",
    "        grads = self.grad(self.network, weights)(input_ids,\n",
    "                                                 input_mask,\n",
    "                                                 token_type_id,\n",
    "                                                 label_ids,\n",
    "                                                 self.cast(scaling_sens,\n",
    "                                                           mstype.float32))\n",
    "        grads = self.hyper_map(F.partial(grad_scale, scaling_sens), grads)\n",
    "        grads = self.hyper_map(F.partial(clip_grad, GRADIENT_CLIP_TYPE, GRADIENT_CLIP_VALUE), grads)\n",
    "        if self.reducer_flag:\n",
    "            grads = self.grad_reducer(grads)\n",
    "        cond = self.get_overflow_status(status, grads)\n",
    "        overflow = self.process_loss_scale(cond)\n",
    "        if not overflow:\n",
    "            self.optimizer(grads)\n",
    "        return (loss, cond)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.4 设定训练流程\n",
    "MindSpore在构建训练过程时有多种方法，我们这里使用MindSpore的`Model`对象来配置一个模型训练过程。`Model`是模型训练或推理的高阶接口。 `Model`会根据用户传入的参数封装可训练或推理的实例。\n",
    "\n",
    "`Model`对象的实例拥有`train`、`eval`等实现方法，当我们在GPU或者Ascend上调用模型训练接口`train`时，模型流程可以通过下沉模式执行。我们只需要将需要训练的轮次`epoch`、一个训练数据迭代器`train_dataset`和训练过程中需要执行的回调对象或者回调对象列表`callbacks`传入`train`接口，即可开始模型训练。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [],
   "source": [
    "from mindspore.train.serialization import load_checkpoint, load_param_into_net\n",
    "from mindspore.nn.optim import Adam, AdamWeightDecay, Adagrad\n",
    "from mindspore.nn.wrap.loss_scale import DynamicLossScaleUpdateCell\n",
    "from mindspore.train.model import Model\n",
    "\n",
    "def do_train(dataset=None, optimizer_cfg=None, network=None, load_checkpoint_path=\"\", save_checkpoint_path=\"\", epoch_num=1):\n",
    "    \"\"\" do train \"\"\"\n",
    "    if load_checkpoint_path == \"\":\n",
    "        raise ValueError(\"Pretrain model missed, finetune task must load pretrain model!\")\n",
    "    steps_per_epoch = 500\n",
    "    # optimizer\n",
    "    lr_schedule = ErnieLearningRate(learning_rate=optimizer_cfg.AdamWeightDecay.learning_rate,\n",
    "                                    end_learning_rate=optimizer_cfg.AdamWeightDecay.end_learning_rate,\n",
    "                                    warmup_steps=int(steps_per_epoch * epoch_num * 0.1),\n",
    "                                    decay_steps=steps_per_epoch * epoch_num,\n",
    "                                    power=optimizer_cfg.AdamWeightDecay.power)\n",
    "    params = network.trainable_params()\n",
    "    decay_params = list(filter(optimizer_cfg.AdamWeightDecay.decay_filter, params))\n",
    "    other_params = list(filter(lambda x: not optimizer_cfg.AdamWeightDecay.decay_filter(x), params))\n",
    "    group_params = [{'params': decay_params, 'weight_decay': optimizer_cfg.AdamWeightDecay.weight_decay},\n",
    "                    {'params': other_params, 'weight_decay': 0.0}]\n",
    "\n",
    "    optimizer = AdamWeightDecay(group_params, lr_schedule, eps=optimizer_cfg.AdamWeightDecay.eps)\n",
    "    # load checkpoint into network\n",
    "    param_dict = load_checkpoint(load_checkpoint_path)\n",
    "    unloaded_params = load_param_into_net(network, param_dict)\n",
    "    if len(unloaded_params) > 2:\n",
    "        print(unloaded_params)\n",
    "        print('Loading ernie model failed, please check the checkpoint file.')\n",
    "\n",
    "    update_cell = DynamicLossScaleUpdateCell(loss_scale_value=2**32, scale_factor=2, scale_window=1000)\n",
    "    netwithgrads = ErnieFinetuneCell(network, optimizer=optimizer, scale_update_cell=update_cell)\n",
    "    model = Model(netwithgrads)\n",
    "    callbacks = get_callbacks(steps_per_epoch, save_checkpoint_path, dataset)\n",
    "    model.train(epoch_num, dataset, callbacks=callbacks)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.5 开始训练\n",
    "首先根据我们需要设定的模型参数，如整体训练参数，包括类型数量`num_class`、训练次数`epoch_num`等参数；如ernie网络模型参数，如`seq_length`，`hidden_size`等参数；还有一些优化器参数，如`optimizer`，`learning_rate`等参数，设定好参数，即可开始训练。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[WARNING] ME(27247:140498728199680,MainProcess):2022-10-17-15:25:51.651.665 [mindspore/dataset/engine/datasets.py:2440] Repeat is located before batch, data from two epochs can be batched together.\n",
      "[WARNING] ME(27247:140498728199680,MainProcess):2022-10-17-15:25:53.125.29 [mindspore/train/serialization.py:712] For 'load_param_into_net', 2 parameters in the 'net' are not loaded, because they are not in the 'parameter_dict', please check whether the network structure is consistent when training and loading checkpoint.\n",
      "[WARNING] ME(27247:140498728199680,MainProcess):2022-10-17-15:25:53.139.68 [mindspore/train/serialization.py:714] ernie.dense_1.weight is not loaded.\n",
      "[WARNING] ME(27247:140498728199680,MainProcess):2022-10-17-15:25:53.151.35 [mindspore/train/serialization.py:714] ernie.dense_1.bias is not loaded.\n",
      "[WARNING] ME(27247:140498728199680,MainProcess):2022-10-17-15:25:53.257.24 [mindspore/train/model.py:1077] For LossCallBack callback, {'step_end'} methods may not be supported in later version, Use methods prefixed with 'on_train' or 'on_eval' instead when using customized callbacks.\n",
      "[WARNING] KERNEL(27247,7fc868c92600,python):2022-10-17-15:26:05.132.199 [mindspore/ccsrc/plugin/device/gpu/kernel/gpu_kernel.cc:40] CheckDeviceSm] It is recommended to use devices with a computing capacity >= 7, but the current device's computing capacity is 6\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch: 0, current epoch percent: 1.000, step: 459, outputs are (Tensor(shape=[], dtype=Float32, value= 0.843228), Tensor(shape=[], dtype=Bool, value= False))\n",
      "Train epoch time: 198780.280 ms, per step time: 433.073 ms\n",
      "epoch: 1, current epoch percent: 1.000, step: 918, outputs are (Tensor(shape=[], dtype=Float32, value= 0.0612363), Tensor(shape=[], dtype=Bool, value= False))\n",
      "Train epoch time: 190296.364 ms, per step time: 414.589 ms\n",
      "epoch: 2, current epoch percent: 1.000, step: 1377, outputs are (Tensor(shape=[], dtype=Float32, value= 0.0730695), Tensor(shape=[], dtype=Bool, value= False))\n",
      "Train epoch time: 187904.293 ms, per step time: 409.378 ms\n"
     ]
    }
   ],
   "source": [
    "from easydict import EasyDict as edict\n",
    "\n",
    "num_class = 3\n",
    "epoch_num = 3\n",
    "train_batch_size = 21\n",
    "\n",
    "train_data_file_path = './data/train.mindrecord'\n",
    "schema_file_path = './ms_log/train_classifier_log.txt'\n",
    "train_data_shuffle = True\n",
    "\n",
    "load_pretrain_checkpoint_path = './pretrain_models/converted/ernie.ckpt'\n",
    "save_finetune_checkpoint_path = './save_models'\n",
    "\n",
    "\n",
    "\n",
    "ernie_net_cfg = ErnieConfig(\n",
    "    seq_length=64,\n",
    "    vocab_size=18000,\n",
    "    hidden_size=768,\n",
    "    num_hidden_layers=12,\n",
    "    num_attention_heads=12,\n",
    "    intermediate_size=3072,\n",
    "    hidden_act=\"relu\",\n",
    "    hidden_dropout_prob=0.1,\n",
    "    attention_probs_dropout_prob=0.1,\n",
    "    max_position_embeddings=513,\n",
    "    type_vocab_size=2,\n",
    "    initializer_range=0.02,\n",
    "    use_relative_positions=False,\n",
    "    dtype=mstype.float32,\n",
    "    compute_type=mstype.float16,\n",
    ")\n",
    "\n",
    "optimizer_cfg = edict({\n",
    "    'optimizer': 'AdamWeightDecay',\n",
    "    'AdamWeightDecay': edict({\n",
    "        'learning_rate': 2e-5,\n",
    "        'end_learning_rate': 1e-7,\n",
    "        'power': 1.0,\n",
    "        'weight_decay': 1e-5,\n",
    "        'decay_filter': lambda x: 'layernorm' not in x.name.lower() and 'bias' not in x.name.lower(),\n",
    "        'eps': 1e-6,\n",
    "    })\n",
    "})\n",
    "\n",
    "\n",
    "netwithloss = ErnieCLS(ernie_net_cfg, True, num_labels=num_class, dropout_prob=0.1)\n",
    "train_dataset = create_classification_dataset(batch_size=train_batch_size, repeat_count=1,\n",
    "                                           data_file_path=train_data_file_path,\n",
    "                                           schema_file_path=schema_file_path,\n",
    "                                           do_shuffle=train_data_shuffle)\n",
    "\n",
    "do_train(train_dataset, optimizer_cfg, netwithloss, load_pretrain_checkpoint_path, save_finetune_checkpoint_path, epoch_num)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5 模型加载与评估\n",
    "使用训练过程中保存的checkpoint文件进行推理，验证模型的泛化能力。首先通过load_checkpoint接口加载模型文件，然后调用Model的eval接口对输入图片类别作出预测，再与输入图片的真实类别做比较，得出最终的预测精度值。\n",
    "### 5.1 模型加载\n",
    "首先使用`load_checkpoint`方法将模型文件加载到内存，然后使用`load_param_into_net`将参数加载到网络中，返回网络中没有被加载的参数列表。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "def LoadNewestCkpt(load_finetune_checkpoint_dir, steps_per_epoch, epoch_num, prefix):\n",
    "    \"\"\"\n",
    "    Find the ckpt finetune generated and load it into eval network.\n",
    "    \"\"\"\n",
    "    files = os.listdir(load_finetune_checkpoint_dir)\n",
    "    pre_len = len(prefix)\n",
    "    max_num = 0\n",
    "    for filename in files:\n",
    "        name_ext = os.path.splitext(filename)\n",
    "        if name_ext[-1] != \".ckpt\":\n",
    "            continue\n",
    "        if filename.find(prefix) == 0 and not filename[pre_len].isalpha():\n",
    "            index = filename[pre_len:].find(\"-\")\n",
    "            if index == 0 and max_num == 0:\n",
    "                load_finetune_checkpoint_path = os.path.join(load_finetune_checkpoint_dir, filename)\n",
    "            elif index not in (0, -1):\n",
    "                name_split = name_ext[-2].split('_')\n",
    "                if (steps_per_epoch != int(name_split[len(name_split)-1])) \\\n",
    "                        or (epoch_num != int(filename[pre_len + index + 1:pre_len + index + 2])):\n",
    "                    continue\n",
    "                num = filename[pre_len + 1:pre_len + index]\n",
    "                if int(num) > max_num:\n",
    "                    max_num = int(num)\n",
    "                    load_finetune_checkpoint_path = os.path.join(load_finetune_checkpoint_dir, filename)\n",
    "    return load_finetune_checkpoint_path    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5.2 模型评估流程\n",
    "1. 使用`load_checkpoint`接口加载模型文件。\n",
    "2. 使用`dataset.create_dict_iterator`根据需要评估的`epoch`来划分数据集批次。\n",
    "3. 使用定义的原生模型按批次读入测试数据集，进行推理。\n",
    "4. 使用之前定义的`Accuracy`不断根据模型返回结果计算精度值。\n",
    "5. 计算完成，返回模型的准确率和推理时间。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "def do_eval(dataset=None, network=None, num_class=3, load_checkpoint_path=\"\"):\n",
    "    if load_checkpoint_path == \"\":\n",
    "        raise ValueError(\"Finetune model missed, evaluation task must load finetune model!\")\n",
    "    net_for_pretraining = network(ernie_net_cfg, False, num_class)\n",
    "    net_for_pretraining.set_train(False)\n",
    "    param_dict = load_checkpoint(load_checkpoint_path)\n",
    "    load_param_into_net(net_for_pretraining, param_dict)\n",
    "\n",
    "    callback = Accuracy()\n",
    "\n",
    "    evaluate_times = []\n",
    "    columns_list = [\"input_ids\", \"input_mask\", \"segment_ids\", \"label_ids\"]\n",
    "    for data in dataset.create_dict_iterator(num_epochs=1):\n",
    "        input_data = []\n",
    "        for i in columns_list:\n",
    "            input_data.append(data[i])\n",
    "        input_ids, input_mask, token_type_id, label_ids = input_data\n",
    "        # print(input_ids)\n",
    "        time_begin = time.time()\n",
    "        logits = net_for_pretraining(input_ids, input_mask, token_type_id, label_ids)\n",
    "        time_end = time.time()\n",
    "        evaluate_times.append(time_end - time_begin)\n",
    "        callback.update(logits, label_ids)\n",
    "    print(\"==============================================================\")\n",
    "    print(\"acc_num {} , total_num {}, accuracy {:.6f}\".format(callback.acc_num, callback.total_num,\n",
    "                                                              callback.acc_num / callback.total_num))\n",
    "    print(\"(w/o first and last) elapsed time: {}, per step time : {}\".format(\n",
    "        sum(evaluate_times[1:-1]), sum(evaluate_times[1:-1])/(len(evaluate_times) - 2)))\n",
    "    print(\"==============================================================\")"
   ]
  },
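  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `Accuracy` callback used above was defined earlier in the notebook; its core job is simply to accumulate the number of correct argmax predictions. The following self-contained sketch (class and attribute names mirror the callback but are assumptions, not the notebook's actual implementation) illustrates the idea:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "class AccuracySketch:\n",
    "    # Minimal stand-in for the Accuracy callback: counts correct argmax predictions.\n",
    "    def __init__(self):\n",
    "        self.acc_num = 0\n",
    "        self.total_num = 0\n",
    "\n",
    "    def update(self, logits, labels):\n",
    "        # logits: (batch, num_class) scores; labels: (batch,) integer class ids\n",
    "        preds = np.argmax(logits, axis=-1)\n",
    "        self.acc_num += int(np.sum(preds == labels.flatten()))\n",
    "        self.total_num += labels.size\n",
    "\n",
    "metric = AccuracySketch()\n",
    "metric.update(np.array([[0.1, 0.7, 0.2], [0.9, 0.05, 0.05]]), np.array([1, 2]))\n",
    "print(metric.acc_num, metric.total_num)  # 1 2\n",
    "```"
   ]
  },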
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5.3 开始评估"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[WARNING] ME(27247:140498728199680,MainProcess):2022-10-17-15:42:48.576.67 [mindspore/dataset/engine/datasets.py:2440] Repeat is located before batch, data from two epochs can be batched together.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==============================================================\n",
      "acc_num 925 , total_num 1036, accuracy 0.892857\n",
      "(w/o first and last) elapsed time: 3.0740649700164795, per step time : 0.09916338612956385\n",
      "==============================================================\n"
     ]
    }
   ],
   "source": [
    "num_class = 3\n",
    "eval_batch_size = 32\n",
    "eval_data_file_path = './data/test.mindrecord'\n",
    "schema_file_path = './ms_log/eval_classifier_log.txt'\n",
    "eval_data_shuffle = 'false'\n",
    "\n",
    "\n",
    "## 根据输入的保存路径找到上层文件夹\n",
    "# load_finetune_checkpoint_dir = make_directory(save_finetune_checkpoint_path)\n",
    "load_finetune_checkpoint_dir = './save_models'\n",
    "## 找到上层文件夹后，找到下层的\n",
    "load_finetune_checkpoint_path = LoadNewestCkpt(load_finetune_checkpoint_dir,\n",
    "                                                train_dataset.get_dataset_size(), epoch_num, \"classifier\")\n",
    "\n",
    "\n",
    "test_dataset = create_classification_dataset(batch_size=eval_batch_size, repeat_count=1,\n",
    "                                           data_file_path=eval_data_file_path,\n",
    "                                           schema_file_path=schema_file_path,\n",
    "                                           do_shuffle=(eval_data_shuffle.lower() == \"true\"),\n",
    "                                           drop_remainder=False)\n",
    "do_eval(test_dataset, ErnieCLS, num_class, load_finetune_checkpoint_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "## 6 模型测试\n",
    "最后我们设计一个预测函数，实现开头描述的效果，输入一句对话，获得对话的情绪分类。具体包含以下步骤:\n",
    "1. 将输入句子进行分词；\n",
    "2. 使用词表获取对应的index id序列；\n",
    "3. index id序列转为Tensor；\n",
    "4. 通过dataset.PaddedDataset加载数据\n",
    "4. 将填充数据送入模型获得预测结果；\n",
    "5. 反馈输出预测结果。"
   ]
  },
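  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Steps 1-3 above can be sketched independently of the full pipeline. The toy vocabulary below stands in for `vocab.txt`, and `sentence_to_ids` is an illustrative helper, not part of the notebook's `ClassifyReader`:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Toy vocabulary; real ids come from the ERNIE vocab.txt file.\n",
    "toy_vocab = {'[PAD]': 0, '[UNK]': 1, '[CLS]': 2, '[SEP]': 3,\n",
    "             '今天': 4, '天气': 5, '真': 6, '好': 7}\n",
    "\n",
    "def sentence_to_ids(sentence, vocab, max_seq_len=8):\n",
    "    # 1. The input is already whitespace-tokenized, matching the dataset format.\n",
    "    tokens = ['[CLS]'] + sentence.split() + ['[SEP]']\n",
    "    # 2. Look up each token; unknown words map to [UNK].\n",
    "    ids = [vocab.get(t, vocab['[UNK]']) for t in tokens][:max_seq_len]\n",
    "    # 3. Pad to a fixed length; the mask marks the real tokens.\n",
    "    mask = [1] * len(ids) + [0] * (max_seq_len - len(ids))\n",
    "    ids = ids + [vocab['[PAD]']] * (max_seq_len - len(ids))\n",
    "    return np.array(ids, dtype=np.int64), np.array(mask, dtype=np.int64)\n",
    "\n",
    "ids, mask = sentence_to_ids('今天 天气 真 好', toy_vocab)\n",
    "print(ids)   # [2 4 5 6 7 3 0 0]\n",
    "print(mask)  # [1 1 1 1 1 1 0 0]\n",
    "```"
   ]
  },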
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "当前模型预测句子情感为： Negative\n"
     ]
    }
   ],
   "source": [
    "\n",
    "score_map = {\n",
    "    0:'Negative',\n",
    "    1:'Natural',\n",
    "    2:'Positive'\n",
    "}\n",
    "\n",
    "load_finetune_checkpoint_dir = './save_models'\n",
    "## 找到上层文件夹后，找到下层的\n",
    "load_finetune_checkpoint_path = LoadNewestCkpt(load_finetune_checkpoint_dir,\n",
    "                                                train_dataset.get_dataset_size(), epoch_num, \"classifier\")\n",
    "\n",
    "def predict_sentiment(network=None, sentence=None, load_checkpoint_path=''):\n",
    "\n",
    "    ## 数据转换\n",
    "    Example = collections.namedtuple('Example', 'text_a label')\n",
    "    example = Example(*sentence)\n",
    "\n",
    "    reader = ClassifyReader(\n",
    "            vocab_path=config['test']['vocab_path'],\n",
    "            label_map_config=config['test']['label_map_config'],\n",
    "            max_seq_len=config['test']['max_seq_len'],\n",
    "            do_lower_case=config['test']['do_lower_case'],\n",
    "            random_seed=config['test']['random_seed']\n",
    "        )\n",
    "    record = reader._convert_example_to_record(example=example, max_seq_length=reader.max_seq_len, tokenizer=reader.tokenizer)\n",
    "    sample = {\n",
    "        \"input_ids\": np.array(record.input_ids, dtype=np.int64),\n",
    "        \"input_mask\": np.array(record.input_mask, dtype=np.int64),\n",
    "        \"segment_ids\": np.array(record.segment_ids, dtype=np.int64),\n",
    "        \"label_ids\": np.array([record.label_id], dtype=np.int64),\n",
    "    }\n",
    "\n",
    "    # 配置训练用dataset\n",
    "    dataset = ds.PaddedDataset(padded_samples=[sample])\n",
    "    columns_list = [\"input_ids\", \"input_mask\", \"segment_ids\", \"label_ids\"]\n",
    "    for data in dataset.create_dict_iterator(num_epochs=1):\n",
    "\n",
    "        input_data = []\n",
    "        for i in columns_list:\n",
    "            input_data.append(data[i])\n",
    "        input_ids, input_mask, token_type_id, label_ids = input_data\n",
    "        # input_ids\n",
    "\n",
    "        # 构造模型参数\n",
    "        net_for_pretraining = network(ernie_net_cfg, False, num_class)\n",
    "        net_for_pretraining.set_train(False)\n",
    "        param_dict = load_checkpoint(load_checkpoint_path)\n",
    "        load_param_into_net(net_for_pretraining, param_dict)\n",
    "\n",
    "        # 开始预测\n",
    "        logits = net_for_pretraining(input_ids, input_mask, token_type_id, label_ids)\n",
    "\n",
    "        logits = logits.asnumpy()\n",
    "        logit_id = np.argmax(logits, axis=-1)\n",
    "        print('当前模型预测句子情感为：', score_map[logit_id[0]])\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "sentence = ('你 见过 说 驯鹿 我 是 牛鹿党 我讨厌 你', 0)\n",
    "\n",
    "predict_sentiment(ErnieCLS, sentence, load_finetune_checkpoint_path)"
   ]
  },
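  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Section 1.1 mentioned that the classifier also reports a confidence for its prediction. Given the raw logits the model returns, a softmax turns them into per-class probabilities; the helper below is an illustrative sketch (the example logits are hypothetical, not real model output):\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def softmax_confidence(logits):\n",
    "    # Subtract the row max for numerical stability before exponentiating.\n",
    "    shifted = logits - np.max(logits, axis=-1, keepdims=True)\n",
    "    probs = np.exp(shifted) / np.sum(np.exp(shifted), axis=-1, keepdims=True)\n",
    "    pred = int(np.argmax(probs, axis=-1)[0])\n",
    "    return pred, float(probs[0, pred])\n",
    "\n",
    "# Hypothetical logits for one sentence over (negative, neutral, positive).\n",
    "pred, conf = softmax_confidence(np.array([[2.0, 0.5, -1.0]]))\n",
    "print(pred, round(conf, 3))\n",
    "```"
   ]
  },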
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 7. 总结\n",
    "本案例完成了ERNIE模型在百度公开的机器人聊天数据上进行训练，验证和推理的过程。其中，对关键的ERNIE模型结构和原理作了讲解。通过学习本案例，理解源码可以帮助用户掌握Multi-Head Attention，TransformerEncoder，pos_embedding等关键概念，如果要详细理解ERNIE的模型原理，建议基于源码更深层次的详细阅读，可以参考MindSpore实现的Model Zoo中的EmoTect项目:[Gitee:EmoTect](https://gitee.com/mindspore/models/tree/master/official/nlp/emotect)\n",
    "## 8. 引用\n",
    "[1] Sun Y, Wang S, Li Y, et al. Ernie: Enhanced representation through knowledge integration[J]. arXiv preprint arXiv:1904.09223, 2019.\n",
    "[2] Vaswani A ,  Shazeer N ,  Parmar N , et al. Attention Is All You Need[C]// arXiv. arXiv, 2017."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.7.5 64-bit",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.5"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "949777d72b0d2535278d3dc13498b2535136f6dfe0678499012e853ee9abcab1"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
