{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# FastText实现文本分类\n",
    "\n",
    "在本教程中，我们将在MindSpore中使用`MindRecord`加载并构建文本数据集，用户可以从本教程中了解到如何：\n",
    "\n",
    "- 创建迭代数据集\n",
    "- 将文本转换为向量\n",
    "- 对数据进行shuffle等操作\n",
    "\n",
    "此外，本教程使用N-Gram，即N元语法模型来判断语句单词的构成顺序。N-Gram可以按照字节顺序，将文本内容进行大小为N的划窗操作，最终形成长度为N的字节片段序列。实践中经常使用二元或三元模型，本教程通过将`ngram`参数设定为2，将二元模型应用在文本分类案例中。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 数据处理\n",
    "\n",
    "点击下载[文本分类AG_NEWS数据集](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/middleclass/ag_news_csv.tgz) ，在教程的同级目录下新建`data`文件夹，将下载好的数据集存放在`data`中。目录如下：\n",
    "\n",
    "\n",
    "(!!!介绍数据集啊，里面数据内容，是个大题什么情况)\n",
    "\n",
    "```\n",
    "project\n",
    "│  text_sentiment_ngrams_tutorial.ipynb      \n",
    "└─data\n",
    "   │   train.csv\n",
    "   │   test.csv\n",
    "```\n",
    "\n",
    "在进行其他操作之前，需要先安装`sklearn`和`spacy`工具包，并导入所需要的库并进行参数设置。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "ename": "ModuleNotFoundError",
     "evalue": "No module named 'spacy'",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mModuleNotFoundError\u001b[0m                       Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-2-f7473034d740>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m      5\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mast\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      6\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mhtml\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 7\u001b[0;31m \u001b[0;32mimport\u001b[0m \u001b[0mspacy\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m      8\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mnumpy\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      9\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;31mModuleNotFoundError\u001b[0m: No module named 'spacy'"
     ]
    }
   ],
   "source": [
    "import csv\n",
    "import os\n",
    "import re\n",
    "import argparse\n",
    "import ast\n",
    "import html\n",
    "import spacy\n",
    "import numpy as np\n",
    "\n",
    "from mindspore import nn\n",
    "from mindspore import context\n",
    "import mindspore.ops as ops\n",
    "from mindspore import dataset as ds\n",
    "from mindspore.mindrecord import FileWriter\n",
    "import mindspore.common.dtype as mstype\n",
    "from mindspore import Tensor,Model,ParameterTuple\n",
    "from mindspore.context import ParallelMode\n",
    "import mindspore.dataset.transforms.c_transforms as deC\n",
    "from mindspore.common.initializer import XavierUniform\n",
    "from sklearn.feature_extraction import FeatureHasher\n",
    "from sklearn.metrics import accuracy_score, classification_report"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "ename": "SyntaxError",
     "evalue": "invalid syntax (<ipython-input-3-839bf17031b0>, line 8)",
     "output_type": "error",
     "traceback": [
      "\u001b[0;36m  File \u001b[0;32m\"<ipython-input-3-839bf17031b0>\"\u001b[0;36m, line \u001b[0;32m8\u001b[0m\n\u001b[0;31m    (!!!去掉去掉，简化再简化，一看就是liuxiao的冗余代码风格)\u001b[0m\n\u001b[0m     ^\u001b[0m\n\u001b[0;31mSyntaxError\u001b[0m\u001b[0;31m:\u001b[0m invalid syntax\n"
     ]
    }
   ],
   "source": [
    "parser = argparse.ArgumentParser()\n",
    "parser.add_argument('--ngram', type=int, default=2, required=False)\n",
    "parser.add_argument('--max_len', type=int, required=False, help='max length sentence in dataset')\n",
    "parser.add_argument('--bucket', type=ast.literal_eval, default=[64, 128, 467], help='bucket sequence length.')\n",
    "parser.add_argument('--test_bucket', type=ast.literal_eval, default=[64, 128, 467], help='bucket sequence length.')\n",
    "parser.add_argument('--feature_size', type=int, default=10000000, help='hash feature size')\n",
    "parser.add_argument('--device_target', type=str, default=\"GPU\", choices=['Ascend', 'GPU'])\n",
    "(!!!去掉去掉，简化再简化，一看就是liuxiao的冗余代码风格)\n",
    "args = parser.parse_known_args()[0]\n",
    "context.set_context(\n",
    "    mode=context.GRAPH_MODE,\n",
    "    save_graphs=False,\n",
    "    device_target=args.device_target)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 读取数据\n",
    "\n",
    "定义数据预处理函数，填充至训练集与测试集。(!!!下面100行代码，你就给我一句话解释完了？)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "class FastTextDataPreProcess():\n",
    "    \"\"\"FastText数据预处理\"\"\"\n",
    "    \n",
    "    def __init__(self, train_path, test_file, max_length,class_num,ngram, train_feature_dict,\n",
    "                 buckets, test_feature_dict, test_bucket,feature_size):\n",
    "        self.train_path = train_path\n",
    "        self.test_path = test_file\n",
    "        self.max_length = max_length\n",
    "        self.class_num = class_num\n",
    "        self.train_feature_dict = train_feature_dict\n",
    "        self.test_feature_dict = test_feature_dict\n",
    "        self.test_bucket = test_bucket\n",
    "        self.feature_size = feature_size\n",
    "        self.buckets = buckets\n",
    "        self.ngram = ngram\n",
    "        self.text_greater = '>'\n",
    "        self.text_less = '<'\n",
    "        self.word2vec = dict()\n",
    "        self.vec2words = dict()\n",
    "        self.non_str = '\\\\'\n",
    "        self.end_string = ['.', '?', '!']\n",
    "        self.word2vec['PAD'] = 0\n",
    "        self.vec2words[0] = 'PAD'\n",
    "        self.word2vec['UNK'] = 1\n",
    "        self.vec2words[1] = 'UNK'\n",
    "        self.str_html = re.compile(r'<[^>]+>')\n",
    "\n",
    "    def common_block(self, _pair_sen, spacy_nlp):\n",
    "        \"\"\"数据通用模块\"\"\"\n",
    "        (!!!多通用，用来解决什么问题，这么多if else就通用了？)\n",
    "        label_idx = int(_pair_sen[0]) - 1\n",
    "        if len(_pair_sen) == 3:\n",
    "            src_tokens = self.input_preprocess(src_text1=_pair_sen[1],\n",
    "                                               src_text2=_pair_sen[2],\n",
    "                                               spacy_nlp=spacy_nlp,\n",
    "                                               train_mode=True)\n",
    "            src_tokens_length = len(src_tokens)\n",
    "        elif len(_pair_sen) == 2:\n",
    "            src_tokens = self.input_preprocess(src_text1=_pair_sen[1],\n",
    "                                               src_text2=None,\n",
    "                                               spacy_nlp=spacy_nlp,\n",
    "                                               train_mode=True)\n",
    "            src_tokens_length = len(src_tokens)\n",
    "        elif len(_pair_sen) == 4:\n",
    "            if _pair_sen[2]:\n",
    "                sen_o_t = _pair_sen[1] + ' ' + _pair_sen[2]\n",
    "            else:\n",
    "                sen_o_t = _pair_sen[1]\n",
    "            src_tokens = self.input_preprocess(src_text1=sen_o_t,\n",
    "                                               src_text2=_pair_sen[3],\n",
    "                                               spacy_nlp=spacy_nlp,\n",
    "                                               train_mode=True)\n",
    "            src_tokens_length = len(src_tokens)\n",
    "        return src_tokens, src_tokens_length, label_idx\n",
    "\n",
    "    (!!!拆分出来，再加介绍)\n",
    "\n",
    "    def load(self):\n",
    "        \"\"\"数据读取\"\"\"\n",
    "        train_dataset_list = []\n",
    "        test_dataset_list = []\n",
    "        spacy_nlp = spacy.load('en_core_web_sm', disable=['parser', 'tagger', 'ner','lemmatizer'])\n",
    "        spacy_nlp.add_pipe('sentencizer')\n",
    "        print(\"开始处理训练数据\")\n",
    "        with open(self.train_path, 'r', newline='', encoding='utf-8') as src_file:\n",
    "            reader = csv.reader(src_file, delimiter=\",\", quotechar='\"')\n",
    "            for _, _pair_sen in enumerate(reader):\n",
    "                src_tokens, src_tokens_length, label_idx = self.common_block(_pair_sen=_pair_sen,\n",
    "                                                                             spacy_nlp=spacy_nlp)\n",
    "                train_dataset_list.append([src_tokens, src_tokens_length, label_idx])\n",
    "\n",
    "        print(\"开始处理测试数据\")\n",
    "        (!!!上面通用模块类似的？那通用模块就不通用了又？)\n",
    "        with open(self.test_path, 'r', newline='', encoding='utf-8') as test_file:\n",
    "            reader2 = csv.reader(test_file, delimiter=\",\", quotechar='\"')\n",
    "            for _, _test_sen in enumerate(reader2):\n",
    "                label_idx = int(_test_sen[0]) - 1\n",
    "                if len(_test_sen) == 3:\n",
    "                    src_tokens = self.input_preprocess(src_text1=_test_sen[1],\n",
    "                                                       src_text2=_test_sen[2],\n",
    "                                                       spacy_nlp=spacy_nlp,\n",
    "                                                       train_mode=False)\n",
    "                    src_tokens_length = len(src_tokens)\n",
    "                elif len(_test_sen) == 2:\n",
    "                    src_tokens = self.input_preprocess(src_text1=_test_sen[1],\n",
    "                                                       src_text2=None,\n",
    "                                                       spacy_nlp=spacy_nlp,\n",
    "                                                       train_mode=False)\n",
    "                    src_tokens_length = len(src_tokens)\n",
    "                elif len(_test_sen) == 4:\n",
    "                    if _test_sen[2]:\n",
    "                        sen_o_t = _test_sen[1] + ' ' + _test_sen[2]\n",
    "                    else:\n",
    "                        sen_o_t = _test_sen[1]\n",
    "                    src_tokens = self.input_preprocess(src_text1=sen_o_t,\n",
    "                                                       src_text2=_test_sen[3],\n",
    "                                                       spacy_nlp=spacy_nlp,\n",
    "                                                       train_mode=False)\n",
    "                    src_tokens_length = len(src_tokens)\n",
    "\n",
    "                test_dataset_list.append([src_tokens, src_tokens_length, label_idx])\n",
    "                \n",
    "        (!!!看到就生气，你300行代码往这里面噻就可以了，别写了，直接让用户自己看就完了)\n",
    "        # 填充训练数据，(!!!怎么填充，说明清楚啊)\n",
    "        train_dataset_list_length = len(train_dataset_list)\n",
    "        test_dataset_list_length = len(test_dataset_list)\n",
    "        for l in range(train_dataset_list_length):\n",
    "            bucket_length = self._get_bucket_length(train_dataset_list[l][0], self.buckets)\n",
    "            while len(train_dataset_list[l][0]) < bucket_length:\n",
    "                train_dataset_list[l][0].append(self.word2vec['PAD'])\n",
    "            train_dataset_list[l][1] = len(train_dataset_list[l][0])\n",
    "            \n",
    "        # 填充测试数据(!!!怎么填充说明清楚啊)\n",
    "        for j in range(test_dataset_list_length):\n",
    "            test_bucket_length = self._get_bucket_length(test_dataset_list[j][0], self.test_bucket)\n",
    "            while len(test_dataset_list[j][0]) < test_bucket_length:\n",
    "                test_dataset_list[j][0].append(self.word2vec['PAD'])\n",
    "            test_dataset_list[j][1] = len(test_dataset_list[j][0])\n",
    "\n",
    "        train_example_data = []\n",
    "        test_example_data = []\n",
    "        for idx in range(train_dataset_list_length):\n",
    "            train_example_data.append({\n",
    "                \"src_tokens\": train_dataset_list[idx][0],\n",
    "                \"src_tokens_length\": train_dataset_list[idx][1],\n",
    "                \"label_idx\": train_dataset_list[idx][2],\n",
    "            })\n",
    "            for key in self.train_feature_dict:\n",
    "                if key == train_example_data[idx]['src_tokens_length']:\n",
    "                    self.train_feature_dict[key].append(train_example_data[idx])\n",
    "        for h in range(test_dataset_list_length):\n",
    "            test_example_data.append({\n",
    "                \"src_tokens\": test_dataset_list[h][0],\n",
    "                \"src_tokens_length\": test_dataset_list[h][1],\n",
    "                \"label_idx\": test_dataset_list[h][2],\n",
    "            })\n",
    "            for key in self.test_feature_dict:\n",
    "                if key == test_example_data[h]['src_tokens_length']:\n",
    "                    self.test_feature_dict[key].append(test_example_data[h])\n",
    "        print(\"train vocab size is \", len(self.word2vec))\n",
    "\n",
    "        return self.train_feature_dict, self.test_feature_dict\n",
    "\n",
    "(!!!拆分出来，再加介绍)\n",
    "    \n",
    "    def input_preprocess(self, src_text1, src_text2, spacy_nlp, train_mode):\n",
    "        \"\"\"数据处理函数\"\"\"\n",
    "        src_text1 = src_text1.strip()\n",
    "        if src_text1 and src_text1[-1] not in self.end_string:\n",
    "            src_text1 = src_text1 + '.'\n",
    "\n",
    "        if src_text2:\n",
    "            src_text2 = src_text2.strip()\n",
    "            sent_describe = src_text1 + ' ' + src_text2\n",
    "        else:\n",
    "            sent_describe = src_text1\n",
    "        if self.non_str in sent_describe:\n",
    "            sent_describe = sent_describe.replace(self.non_str, ' ')\n",
    "       \n",
    "        (!!!中文注释)\n",
    "        sent_describe = html.unescape(sent_describe)\n",
    "\n",
    "        if self.text_less in sent_describe and self.text_greater in sent_describe:\n",
    "            sent_describe = self.str_html.sub('', sent_describe)\n",
    "\n",
    "        (!!!中文注释)\n",
    "        doc = spacy_nlp(sent_describe)\n",
    "        bows_token = [token.text for token in doc]\n",
    "\n",
    "        (!!!中文注释)\n",
    "        try:\n",
    "            tagged_sent_desc = '<p> ' + ' </s> '.join([s.text for s in doc.sents]) + ' </p>'\n",
    "        except ValueError:\n",
    "            tagged_sent_desc = '<p> ' + sent_describe + ' </p>'\n",
    "        doc = spacy_nlp(tagged_sent_desc)\n",
    "        ngrams = self.generate_gram([token.text for token in doc], num=self.ngram)\n",
    "\n",
    "        bo_ngrams = bows_token + ngrams\n",
    "        \n",
    "        (!!!中文注释)\n",
    "        if train_mode is True:\n",
    "            for ngms in bo_ngrams:\n",
    "                idx = self.word2vec.get(ngms)\n",
    "                if idx is None:\n",
    "                    idx = len(self.word2vec)\n",
    "                    self.word2vec[ngms] = idx\n",
    "                    self.vec2words[idx] = ngms\n",
    "\n",
    "        (!!!中文注释)\n",
    "        processed_out = [self.word2vec[ng] if ng in self.word2vec else self.word2vec['UNK'] for ng in bo_ngrams]\n",
    "        return processed_out\n",
    "\n",
    "    (!!!拆分出来，再加介绍)\n",
    "\n",
    "    def _get_bucket_length(self, x, bts):\n",
    "        (!!!中文注释)\n",
    "        x_len = len(x)\n",
    "        for index in range(1, len(bts)):\n",
    "            if bts[index - 1] < x_len <= bts[index]:\n",
    "                return bts[index]\n",
    "        return bts[0]\n",
    "\n",
    "    def generate_gram(self, words, num=2):\n",
    "        (!!!中文注释)\n",
    "        return [' '.join(words[i: i + num]) for i in range(len(words) - num + 1)]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 生成预处理数据\n",
    "\n",
    "现在调用上一步定义好的`FastTextDataPreProcess`函数(!!!函数命名不规范)，获取训练与测试的预处理数据，以便于下一步使用`mindspore.dataset.MindRecord`接口进一步转换数据格式。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "开始处理训练数据\n",
      "开始处理测试数据\n",
      "train vocab size is  1071957\n"
     ]
    }
   ],
   "source": [
    "train_feature_dicts = {}\n",
    "\n",
    "# 通过循环将bucket中的长度都加载到空字典\n",
    "for i in args.bucket:\n",
    "    train_feature_dicts[i] = []\n",
    "test_feature_dicts = {}\n",
    "for i in args.test_bucket:\n",
    "    test_feature_dicts[i] = []\n",
    "\n",
    "# 读取bucket的test和train数据进行处理\n",
    "g_d = FastTextDataPreProcess(train_path=os.path.join(\"./data/\", \"train.csv\"),\n",
    "                             test_file=os.path.join(\"./data/\", \"test.csv\"),\n",
    "                             max_length=args.max_len,\n",
    "                             ngram=args.ngram,\n",
    "                             class_num=True,\n",
    "                             train_feature_dict=train_feature_dicts,\n",
    "                             buckets=args.bucket,\n",
    "                             test_feature_dict=test_feature_dicts,\n",
    "                             test_bucket=args.test_bucket,\n",
    "                             feature_size=args.feature_size)\n",
    "train_data_example, test_data_example = g_d.load()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 完成MindRecord转换"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "接下来我们通过定义`write_to_mindrecord`方法来将预处理后的基本数据转换为MindRecord格式，该方法提供两个参数：\n",
    "\n",
    "data：AG_NEWS数据集的路径。\n",
    "\n",
    "path：定义生成MindRecord格式文件路径。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "def write_to_mindrecord(data, path, shared_num=1):\n",
    "    \"\"\"生成MindRecord\"\"\"\n",
    "    if not os.path.isabs(path):\n",
    "        path = os.path.abspath(path)\n",
    "\n",
    "    writer = FileWriter(path, shared_num)\n",
    "    data_schema = {\n",
    "        \"src_tokens\": {\"type\": \"int32\", \"shape\": [-1]},\n",
    "        \"src_tokens_length\": {\"type\": \"int32\", \"shape\": [-1]},\n",
    "        \"label_idx\": {\"type\": \"int32\", \"shape\": [-1]}\n",
    "    }\n",
    "    \n",
    "    writer.add_schema(data_schema, \"fasttext\")\n",
    "    for item in data:\n",
    "        item['src_tokens'] = np.array(item['src_tokens'], dtype=np.int32)\n",
    "        item['src_tokens_length'] = np.array(item['src_tokens_length'], dtype=np.int32)\n",
    "        item['label_idx'] = np.array(item['label_idx'], dtype=np.int32)\n",
    "        writer.write_raw_data([item])\n",
    "    \n",
    "    writer.commit()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "遍历原始数据集，将所有数据全部写为MindRecord数据格式。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Writing train data to MindRecord file.....\n",
      "Writing test data to MindRecord file.....\n"
     ]
    }
   ],
   "source": [
    "# 通过循环来将文件转换成拼接的MindRecord文件\n",
    "print(\"Writing train data to MindRecord file.....\")\n",
    "for i in args.bucket:\n",
    "    write_to_mindrecord(train_data_example[i], './train/train_dataset_bs_' + str(i) + '.mindrecord', 1)\n",
    "\n",
    "print(\"Writing test data to MindRecord file.....\")\n",
    "for k in args.test_bucket:\n",
    "    write_to_mindrecord(test_data_example[k], './test/test_dataset_bs_' + str(k) + '.mindrecord', 1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 生成统一数据集\n",
    "\n",
    "经过`write_to_mindrecord`，现在我们已经得到了全部数据的MindRecord格式的数据集，接下来进一步调用`load_dataset`方法，实现如下功能：\n",
    "\n",
    "1. 循环遍历所有MindRecord文件。\n",
    "2. 将读取的数据通过`batch_per_bucket`合并到统一数据集。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "def load_dataset(dataset_path,batch_size,epoch_count=1, rank_size=1,rank_id=0,bucket=None, shuffle=True):\n",
    "    \"\"\"数据集读取\"\"\"\n",
    "\n",
    "    (!!!代码挪出来单独介绍)\n",
    "    def batch_per_bucket(bucket_length, input_file):\n",
    "        input_file = input_file + 'train/train_dataset_bs_' + str(bucket_length) + '.mindrecord'\n",
    "        if not input_file:\n",
    "            raise FileNotFoundError(\"input file parameter must not be empty.\")\n",
    "\n",
    "        data_set = ds.MindDataset(input_file,\n",
    "                                  columns_list=['src_tokens', 'src_tokens_length', 'label_idx'],\n",
    "                                  shuffle=shuffle,\n",
    "                                  num_shards=rank_size,\n",
    "                                  shard_id=rank_id,\n",
    "                                  num_parallel_workers=4)\n",
    "        ori_dataset_size = data_set.get_dataset_size()\n",
    "        print(f\"Dataset size: {ori_dataset_size}\")\n",
    "        repeat_count = epoch_count\n",
    "\n",
    "        data_set = data_set.rename(input_columns=['src_tokens', 'src_tokens_length', 'label_idx'],\n",
    "                                   output_columns=['src_token_text', 'src_tokens_text_length', 'label_idx_tag'])\n",
    "        data_set = data_set.batch(batch_size, drop_remainder=False)\n",
    "        data_set = data_set.repeat(repeat_count)\n",
    "        return data_set\n",
    "\n",
    "    for i, _ in enumerate(bucket):\n",
    "        bucket_len = bucket[i]\n",
    "        ds_per = batch_per_bucket(bucket_len, dataset_path)\n",
    "        if i == 0:\n",
    "            data_set = ds_per\n",
    "        else:\n",
    "            data_set = data_set + ds_per\n",
    "    data_set = data_set.shuffle(data_set.get_dataset_size())\n",
    "    data_set.channel_name = 'fasttext'\n",
    "\n",
    "    return data_set\n",
    "(!!!代码格式到底是不是错了，怎么这么奇怪。函数套函数就算了，"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 生成训练数据\n",
    "\n",
    "通过`load_dataset`生成训练数据，其中的四个参数为：\n",
    "\n",
    "dataset：文件存取路径。\n",
    "\n",
    "batch_size：设定训练的batch。\n",
    "\n",
    "epoch_count：设定训练进行的epoch。\n",
    "\n",
    "bucket：数据中bucket的拼接长度。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Dataset size: 4780\n",
      "Dataset size: 73255\n",
      "Dataset size: 6706\n"
     ]
    }
   ],
   "source": [
    "preprocessed_data = load_dataset(dataset_path=\"\",\n",
    "                                     batch_size=512,\n",
    "                                     epoch_count=1,\n",
    "                                     bucket=[64,128,467])\n",
    "\n",
    "(!!!代码格式"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 定义模型\n",
    "\n",
    "论文[Bag of Tricks for Efficient Text Classification](https://arxiv.org/pdf/1607.01759.pdf)中详细阐述了FastText模型的实现原理，模型结构如图所示：\n",
    "\n",
    "![ ](images/fasttext.png)\n",
    "\n",
    "图统一风格，让海燕去画\n",
    "\n",
    "FastText模型主要由输入层、隐藏层和输出层组成。其中输入是单词序列，通常以文本或句子的形式出现。输出层是词序列属于不同类别的概率。隐藏层是多个词向量的叠加平均。特征通过线性变换映射到隐藏层，再从隐藏层映射到标签。(!!!介绍图中的x1,x2，看图说话，你在自说自话）\n",
    "\n",
    "下面定义FastText网络。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "class FastText(nn.Cell):\n",
    "    \n",
    "    (!!!你好好自己看看，下面construct用了6个参数，你的init写了十几个，很明显是多了）\n",
    "    def __init__(self, vocab_size, embedding_dims, num_class):\n",
    "        \"\"\"定义FastText网络\"\"\"\n",
    "        super(FastText, self).__init__()\n",
    "        self.vocab_size = vocab_size\n",
    "        self.embeding_dims = embedding_dims\n",
    "        self.num_class = num_class\n",
    "        self.embeding_func = nn.Embedding(vocab_size=self.vocab_size,\n",
    "                                          embedding_size=self.embeding_dims,\n",
    "                                          padding_idx=0, embedding_table='Zeros')\n",
    "        self.fc = nn.Dense(self.embeding_dims, out_channels=self.num_class,\n",
    "                           weight_init=XavierUniform(1)).to_float(mstype.float16)\n",
    "        self.reducesum = ops.operations.ReduceSum()\n",
    "        self.expand_dims = ops.operations.ExpandDims()\n",
    "        self.squeeze = ops.operations.Squeeze(axis=1)\n",
    "        self.cast = ops.operations.Cast()\n",
    "        self.tile = ops.operations.Tile()\n",
    "        self.realdiv = ops.operations.RealDiv()\n",
    "        self.fill = ops.operations.Fill()\n",
    "        self.log_softmax = nn.LogSoftmax(axis=1)\n",
    "        \n",
    "    def construct(self, src_tokens, src_token_length):\n",
    "        \"\"\" FastText网络构建 \"\"\"\n",
    "        src_tokens = self.embeding_func(src_tokens)\n",
    "        embeding = self.reducesum(src_tokens, 1)\n",
    "        embeding = self.realdiv(embeding, src_token_length)\n",
    "        embeding = self.cast(embeding, mstype.float16)\n",
    "        classifier = self.fc(embeding)\n",
    "        classifier = self.cast(classifier, mstype.float32)\n",
    "        return classifier"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 启动实例\n",
    "\n",
    "`AG_NEWS`数据集具有四个标签，因此类别数是四个。\n",
    "\n",
    "```py\n",
    "1 : World\n",
    "2 : Sports\n",
    "3 : Business\n",
    "4 : Sci/Tec\n",
    "\n",
    "```\n",
    "\n",
    "在网络中，`vocab_size`为词汇数据的长度，其中包括单个单词和N元组。类的数量等于标签的数量，在`AG_NEWS`情况下为4。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "fast_text_net = FastText(1383812, 16, 4)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 用于生成批量的函数\n",
    "\n",
    "由于文本条目的长度不同，所以使用自定义函数`batch_per_bucket`生成批量数据和偏移量。该函数传递到`mindspore.dataset.MindDataset`中的`inpufile`。`inputfile`的输入是张量文件，其大小为`batch_size`，函数将它们打包成一个小批量。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "def batch_per_bucket(bucket_length, input_file):\n",
    "    (!!!格式根本不对，这个函数根本跑不通，别忽悠开发者，非常影响体验\n",
    "     (!!!加注释\n",
    "        input_file = input_file + 'train/train_dataset_bs_' + str(bucket_length) + '.mindrecord'\n",
    "        if not input_file:\n",
    "            raise FileNotFoundError(\"input file parameter must not be empty.\")\n",
    "\n",
    "      (!!!加注释\n",
    "        data_set = ds.MindDataset(input_file,\n",
    "                                  columns_list=['src_tokens', 'src_tokens_length', 'label_idx'],\n",
    "                                  shuffle=shuffle,\n",
    "                                  num_shards=rank_size,\n",
    "                                  shard_id=rank_id,\n",
    "                                  num_parallel_workers=4)\n",
    "        ori_dataset_size = data_set.get_dataset_size()\n",
    "        print(f\"Dataset size: {ori_dataset_size}\")\n",
    "        repeat_count = epoch_count\n",
    "       \n",
    "        (!!!加注释    \n",
    "        data_set = data_set.rename(input_columns=['src_tokens', 'src_tokens_length', 'label_idx'],\n",
    "                                   output_columns=['src_token_text', 'src_tokens_text_length', 'label_idx_tag'])\n",
    "        data_set = data_set.batch(batch_size, drop_remainder=False)\n",
    "        data_set = data_set.repeat(repeat_count)\n",
    "         \n",
    "        return data_set"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 模型训练\n",
    "\n",
    "我们在此处使用MindSpore数据集接口`MindDataset`加载`AG_NEWS`数据集，并将其发送到模型以进行训练/验证。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 提供FastTextloss计算\n",
    "\n",
    "我们已经在前面定义了一个完整的`FastText`网络，现在需要来为网络提供一个计算loss值的方法，这一过程由`FastTextNetWithLoss`类来实现。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "class FastTextNetWithLoss(nn.Cell):\n",
    "    \"\"\"\n",
    "   提供FastText的loss运算\n",
    "    \"\"\"\n",
    "    def __init__(self,network, vocab_size, embedding_dims, num_class):\n",
    "        super(FastTextNetWithLoss, self).__init__()\n",
    "        \n",
    "        self.fasttext = network\n",
    "        self.loss_func = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')\n",
    "        self.squeeze = ops.operations.Squeeze(axis=1)\n",
    "        self.print = ops.operations.Print()\n",
    "\n",
    "    def construct(self, src_tokens, src_tokens_lengths, label_idx):\n",
    "        \"\"\"\n",
    "        带有loss的FastText网络\n",
    "        \"\"\"\n",
    "        predict_score = self.fasttext(src_tokens, src_tokens_lengths)\n",
    "        label_idx = self.squeeze(label_idx)\n",
    "        predict_score = self.loss_func(predict_score, label_idx)\n",
    "\n",
    "        return predict_score"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 创建网络计算loss值\n",
    "\n",
    "在这一步中实例化`FastTextNetWithLoss`类。将定义好的网络`FastTextNet`、vocab的大小、embedding的数量和类别数放入到实例中。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{Parameter (name=fasttext.embeding_func.embedding_table, shape=(1383812, 16), dtype=Float32, requires_grad=True): Parameter (name=fasttext.embeding_func.embedding_table, shape=(1383812, 16), dtype=Float32, requires_grad=True),\n",
       " Parameter (name=fasttext.fc.weight, shape=(4, 16), dtype=Float32, requires_grad=True): Parameter (name=fasttext.fc.weight, shape=(4, 16), dtype=Float32, requires_grad=True),\n",
       " Parameter (name=fasttext.fc.bias, shape=(4,), dtype=Float32, requires_grad=True): Parameter (name=fasttext.fc.bias, shape=(4,), dtype=Float32, requires_grad=True)}"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "net_with_loss = FastTextNetWithLoss(fast_text_net, 1383812, 16, 4)\n",
    "net_with_loss.init_parameters_data()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 设置学习率和优化器\n",
    "\n",
    "现在我们需要为`mindspore.nn.optim.Adam`优化器来定义一个学习率变化方式，以此来为优化器提供所需学习率参数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<class 'mindspore.common.tensor.Tensor'>\n"
     ]
    }
   ],
   "source": [
    "from mindspore.nn.optim import Adam\n",
    "from mindspore.nn import piecewise_constant_lr\n",
    "\n",
    "(!!!注释\n",
    "learn_rate = 0.2\n",
    "min_lr = 0.000001\n",
    "decay_steps = preprocessed_data.get_dataset_size()\n",
    "update_steps = 5 * preprocessed_data.get_dataset_size()\n",
    "lr_step = [i+1 for i in range(update_steps)]\n",
    "lr_list = [learn_rate - min_lr * i for i in range(update_steps)]\n",
    "lr = Tensor(piecewise_constant_lr(lr_step,lr_list), dtype=mstype.float32)\n",
    "\n",
    " (!!!注释\n",
    "optimizer = Adam(net_with_loss.trainable_params(), lr, beta1=0.9, beta2=0.999)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 定义训练pipeline\n",
    "\n",
    "当所有准备完毕后，我们要规划一次训练所需要的pipeline，于是定义了`TrainOneStepCell`类，该类主要实现以下方法：\n",
    "\n",
    "- set_sens：将获取值转为sens类型方便后续传入`tuple_to_array`转换\n",
    "- construct：定义一次训练结算所需要的流程"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "class FastTextTrainOneStepCell(nn.Cell):\n",
    "    \n",
    "    def __init__(self, network, optimizer, sens=1.0):\n",
    "        super(FastTextTrainOneStepCell, self).__init__(auto_prefix=False)\n",
    "        \n",
    "        self.network = network\n",
    "        self.weights = ParameterTuple(network.trainable_params())\n",
    "        self.optimizer = optimizer\n",
    "        self.grad = ops.composite.GradOperation(get_by_list=True, sens_param=True)\n",
    "        self.sens = sens\n",
    "        self.reducer_flag = False\n",
    "        \n",
    "        # Check the parallel mode; data-parallel and hybrid-parallel training need gradient reduction\n",
    "        self.parallel_mode = context.get_auto_parallel_context(\"parallel_mode\")\n",
    "        if self.parallel_mode not in ParallelMode.MODE_LIST:\n",
    "            raise ValueError(\"Parallel mode does not support: \", self.parallel_mode)\n",
    "        if self.parallel_mode in [ParallelMode.DATA_PARALLEL, ParallelMode.HYBRID_PARALLEL]:\n",
    "            self.reducer_flag = True\n",
    "        self.grad_reducer = None\n",
    "\n",
    "        # In distributed mode, create a reducer that averages gradients across devices\n",
    "        if self.reducer_flag:\n",
    "            mean = context.get_auto_parallel_context(\"gradients_mean\")\n",
    "            degree = get_group_size()\n",
    "            self.grad_reducer = DistributedGradReducer(optimizer.parameters, mean, degree)\n",
    "\n",
    "        # Helpers for mapping a function over every gradient and for dtype casts\n",
    "        self.hyper_map = ops.composite.HyperMap()\n",
    "        self.cast = ops.operations.Cast()\n",
    "\n",
    "    def set_sens(self, value):\n",
    "        self.sens = value\n",
    "\n",
    "    def construct(self,\n",
    "                  src_token_text,\n",
    "                  src_tokens_text_length,\n",
    "                  label_idx_tag):\n",
    "        \"\"\"Run one training step: forward, backward, clip, reduce, and update.\"\"\"\n",
    "        weights = self.weights\n",
    "        loss = self.network(src_token_text,\n",
    "                            src_tokens_text_length,\n",
    "                            label_idx_tag)\n",
    "        grads = self.grad(self.network, weights)(src_token_text,\n",
    "                                                 src_tokens_text_length,\n",
    "                                                 label_idx_tag,\n",
    "                                                 self.cast(ops.functional.tuple_to_array((self.sens,)),\n",
    "                                                           mstype.float32))\n",
    "        grads = self.hyper_map(ops.functional.partial(clip_grad, GRADIENT_CLIP_TYPE, GRADIENT_CLIP_VALUE), grads)\n",
    "\n",
    "        # In distributed training, average the clipped gradients across devices\n",
    "        if self.reducer_flag:\n",
    "            grads = self.grad_reducer(grads)\n",
    "\n",
    "        succ = self.optimizer(grads)\n",
    "        return ops.functional.depend(loss, succ)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Define gradient clipping\n",
    "\n",
    "Because the gradients may arrive in different formats, we register `_clip_grad` on the `clip_grad` `MultitypeFuncGraph` and declare the types of its arguments:\n",
    "\n",
    "- `clip_type` is a number.\n",
    "\n",
    "- `clip_value` is a number.\n",
    "\n",
    "- `grad` is a tensor."
   ]
  },
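  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two clipping modes can be illustrated with a NumPy sketch of the math (not MindSpore's implementation): clip-by-value bounds every element independently, while clip-by-norm rescales the whole tensor only when its L2 norm exceeds the threshold."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "demo_grad = np.array([0.5, -2.0, 3.0])  # made-up gradient values\n",
    "clip_value = 1.0\n",
    "\n",
    "# clip_type == 0: bound each element to [-clip_value, clip_value]\n",
    "by_value = np.clip(demo_grad, -clip_value, clip_value)\n",
    "\n",
    "# clip_type == 1: rescale the tensor if its L2 norm exceeds clip_value\n",
    "norm = np.linalg.norm(demo_grad)\n",
    "by_norm = demo_grad * clip_value / max(norm, clip_value)\n",
    "\n",
    "print(by_value, by_norm)"
   ]
  },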
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "GRADIENT_CLIP_TYPE = 1\n",
    "GRADIENT_CLIP_VALUE = 1.0\n",
    "\n",
    "# clip_grad dispatches on its argument types; register an implementation for (Number, Number, Tensor)\n",
    "clip_grad = ops.composite.MultitypeFuncGraph(\"clip_grad\")\n",
    "\n",
    "@clip_grad.register(\"Number\", \"Number\", \"Tensor\")\n",
    "def _clip_grad(clip_type, clip_value, grad):\n",
    "    # Clip types other than 0 or 1 pass the gradient through unchanged\n",
    "    if clip_type not in (0, 1):\n",
    "        return grad\n",
    "    dt = ops.functional.dtype(grad)\n",
    "\n",
    "    # Type 0 clips each element to [-clip_value, clip_value]; type 1 clips by the gradient's L2 norm\n",
    "    if clip_type == 0:\n",
    "        new_grad = ops.composite.clip_by_value(grad, ops.functional.cast(ops.functional.tuple_to_array((-clip_value,)), dt),\n",
    "                                               ops.functional.cast(ops.functional.tuple_to_array((clip_value,)), dt))\n",
    "    else:\n",
    "        new_grad = nn.ClipByNorm()(grad, ops.functional.cast(ops.functional.tuple_to_array((clip_value,)), dt))\n",
    "    return new_grad"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Train the model\n",
    "\n",
    "Instantiate the `FastTextTrainOneStepCell` defined above and iterate over the dataset to train the model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "FastTextTrainOneStepCell<\n",
       "  (network): FastTextNetWithLoss<\n",
       "    (fasttext): FastText<\n",
       "      (embeding_func): Embedding<vocab_size=1383812, embedding_size=16, use_one_hot=False, embedding_table=Parameter (name=fasttext.embeding_func.embedding_table, shape=(1383812, 16), dtype=Float32, requires_grad=True), dtype=Float32, padding_idx=0>\n",
       "      (fc): Dense<input_channels=16, output_channels=4, has_bias=True>\n",
       "      (log_softmax): LogSoftmax<>\n",
       "      >\n",
       "    (loss_func): SoftmaxCrossEntropyWithLogits<>\n",
       "    >\n",
       "  (optimizer): Adam<\n",
       "    (learning_rate): _IteratorLearningRate<>\n",
       "    >\n",
       "  >"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "net_with_grads = FastTextTrainOneStepCell(net_with_loss, optimizer=optimizer)\n",
    "net_with_grads.set_train(True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.3239299\n",
      "1.2918508\n",
      "1.236133\n",
      "1.1651388\n",
      "1.074889\n",
      "1.1294309\n",
      "0.9561551\n",
      "0.9522176\n",
      "0.91801494\n",
      "0.8881521\n",
      "0.80080545\n",
      "0.7337659\n",
      "0.6696707\n",
      "0.63573897\n",
      "0.5883118\n",
      "0.23005332\n",
      "0.4515081\n",
      "0.20126605\n",
      "0.4553006\n",
      "0.21953695\n",
      "0.15097088\n",
      "0.22751673\n",
      "0.299681\n",
      "0.23459665\n",
      "0.17367001\n",
      "0.32614958\n",
      "0.24170385\n",
      "0.18644962\n",
      "0.14626658\n",
      "0.18693896\n",
      "0.22911525\n",
      "0.30018106\n",
      "0.28360566\n",
      "0.22088502\n",
      "0.21194872\n",
      "0.17272016\n",
      "0.21119592\n",
      "0.21003135\n",
      "0.17690946\n",
      "0.18701789\n",
      "0.22161637\n",
      "0.18359481\n",
      "0.25332585\n",
      "0.1607348\n",
      "0.18905574\n",
      "0.21450931\n",
      "0.4525343\n",
      "0.048400477\n",
      "0.06543859\n",
      "0.04598104\n",
      "0.046952773\n",
      "0.05878158\n",
      "0.05802965\n",
      "0.021141667\n",
      "0.016563205\n",
      "0.0599133\n",
      "0.03379585\n",
      "0.020350233\n",
      "0.033926312\n",
      "0.10194215\n",
      "0.034460913\n",
      "0.055590115\n",
      "0.014893334\n",
      "0.060085252\n",
      "0.028355705\n",
      "0.056327038\n",
      "0.024952719\n",
      "0.032113466\n",
      "0.023740696\n",
      "0.01511923\n",
      "0.034571428\n",
      "0.037790537\n",
      "0.07907674\n",
      "0.032159526\n",
      "0.046872605\n",
      "0.028533353\n",
      "0.0076825884\n",
      "0.0077427584\n",
      "0.040141877\n",
      "0.013469651\n",
      "0.029853245\n",
      "1.1512634\n",
      "0.010118321\n",
      "0.025405075\n",
      "0.026934445\n",
      "0.031721305\n",
      "0.042373456\n",
      "0.0452683\n",
      "0.07718848\n",
      "0.06898584\n",
      "0.06665465\n",
      "0.030750485\n",
      "0.039185237\n",
      "0.017627863\n",
      "0.04209162\n",
      "0.020786878\n",
      "0.021398135\n",
      "0.018585052\n",
      "0.018579647\n",
      "0.012931412\n",
      "0.018248955\n",
      "0.019529575\n",
      "0.0103960065\n",
      "0.018511338\n",
      "0.014498311\n",
      "0.015237848\n",
      "0.0048193294\n",
      "0.012299601\n",
      "0.0012418798\n",
      "0.0017256059\n",
      "0.017027915\n",
      "0.010947452\n",
      "0.0053985277\n",
      "0.005133066\n"
     ]
    }
   ],
   "source": [
    "# Train for 20 epochs\n",
    "for i in range(20):\n",
    "    for d in preprocessed_data.create_dict_iterator():\n",
    "        net_with_grads(d[\"src_token_text\"], len(d[\"src_token_text\"]), d[\"label_idx_tag\"])\n",
    "        # Print the loss for this batch\n",
    "        print(net_with_loss(d[\"src_token_text\"], len(d[\"src_token_text\"]), d[\"label_idx_tag\"]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Evaluate the model on the test dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Load the test dataset\n",
    "\n",
    "Just as with the training data, we define a `load_infer_dataset` function to load the test dataset. Its parameters are:\n",
    "\n",
    "- `batch_size`: number of samples per batch\n",
    "- `datafile`: path to the test data\n",
    "- `bucket`: list of sequence-length buckets the data is grouped into"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "def load_infer_dataset(batch_size, datafile, bucket):\n",
    "    \"\"\"Load the test dataset.\"\"\"\n",
    "\n",
    "    # Helper: load and batch the MindRecord file for one bucket length\n",
    "    def batch_per_bucket(bucket_length, input_file):\n",
    "        input_file = input_file + 'test/test_dataset_bs_' + str(bucket_length) + '.mindrecord'\n",
    "\n",
    "        data_set = ds.MindDataset(input_file,\n",
    "                                  columns_list=['src_tokens', 'src_tokens_length', 'label_idx'])\n",
    "        type_cast_op = deC.TypeCast(mstype.int32)\n",
    "        data_set = data_set.map(operations=type_cast_op, input_columns=\"src_tokens\")\n",
    "        data_set = data_set.map(operations=type_cast_op, input_columns=\"src_tokens_length\")\n",
    "        data_set = data_set.map(operations=type_cast_op, input_columns=\"label_idx\")\n",
    "\n",
    "        data_set = data_set.batch(batch_size, drop_remainder=False)\n",
    "        return data_set\n",
    "\n",
    "    # Build a dataset per bucket and concatenate them into one dataset\n",
    "    for i, _ in enumerate(bucket):\n",
    "        bucket_len = bucket[i]\n",
    "        ds_per = batch_per_bucket(bucket_len, datafile)\n",
    "        if i == 0:\n",
    "            data_set = ds_per\n",
    "        else:\n",
    "            data_set = data_set + ds_per\n",
    "\n",
    "    return data_set"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Define the inference cell\n",
    "\n",
    "`FastTextInferCell` wraps the trained `network` and returns the predicted class index for each input."
   ]
  },
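  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that applying `LogSoftmax` before `ArgMaxWithValue` does not change which class wins, since log-softmax is a monotonic transform of the logits. A small NumPy sketch of this property, with made-up logits:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "logits = np.array([[1.0, 3.0, 0.5, 2.0]])  # hypothetical logits for one sample\n",
    "log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))\n",
    "# The winning index is identical before and after log-softmax\n",
    "assert log_probs.argmax(axis=1) == logits.argmax(axis=1)"
   ]
  },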
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "class FastTextInferCell(nn.Cell):\n",
    "\n",
    "    def __init__(self, network):\n",
    "        super(FastTextInferCell, self).__init__(auto_prefix=False)\n",
    "        self.network = network\n",
    "        self.argmax = ops.operations.ArgMaxWithValue(axis=1, keep_dims=True)\n",
    "        self.log_softmax = nn.LogSoftmax(axis=1)\n",
    "\n",
    "    def construct(self, src_tokens, src_tokens_lengths):\n",
    "        prediction = self.network(src_tokens, src_tokens_lengths)\n",
    "        predicted_idx = self.log_softmax(prediction)\n",
    "        predicted_idx, _ = self.argmax(predicted_idx)\n",
    "\n",
    "        return predicted_idx"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Load the data and run inference\n",
    "\n",
    "Finally, call `load_infer_dataset` and instantiate `FastTextInferCell` to run inference:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "load_test_data = load_infer_dataset(batch_size=512,\n",
    "                                     datafile=\"\",\n",
    "                                     bucket=[64,128,467])\n",
    "\n",
    "# Wrap the trained network in the inference cell\n",
    "ft_infer = FastTextInferCell(fast_text_net)\n",
    "predictions = []\n",
    "target_sens = []\n",
    "\n",
    "# Iterate over the test batches, collecting the labels and the model's predictions\n",
    "for batch in load_test_data.create_dict_iterator(output_numpy=True, num_epochs=1):\n",
    "    target_sens.append(batch['label_idx'])\n",
    "    src_tokens = Tensor(batch['src_tokens'], mstype.int32)\n",
    "    src_tokens_length = Tensor(batch['src_tokens_length'], mstype.int32)\n",
    "    predicted_idx = ft_infer(src_tokens, src_tokens_length)\n",
    "    predictions.append(predicted_idx.asnumpy())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Evaluate the model\n",
    "\n",
    "Compare the model's predictions against the ground-truth labels and print the accuracy for each batch."
   ]
  },
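  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Per-batch accuracy is simply the fraction of predictions that match the labels. A NumPy sketch with made-up values, equivalent to what `accuracy_score` computes for one batch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "demo_preds = np.array([0, 1, 2, 3, 1])   # hypothetical predicted class indices\n",
    "demo_labels = np.array([0, 1, 1, 3, 1])  # hypothetical ground-truth labels\n",
    "print((demo_preds == demo_labels).mean())  # 4 of 5 correct -> 0.8"
   ]
  },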
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Accuracy:  0.8404494382022472\n",
      "Accuracy:  0.9140625\n",
      "Accuracy:  0.912109375\n",
      "Accuracy:  0.91796875\n",
      "Accuracy:  0.923828125\n",
      "Accuracy:  0.93359375\n",
      "Accuracy:  0.9453125\n",
      "Accuracy:  0.923828125\n",
      "Accuracy:  0.90625\n",
      "Accuracy:  0.9140625\n",
      "Accuracy:  0.9375\n",
      "Accuracy:  0.91796875\n",
      "Accuracy:  0.923828125\n",
      "Accuracy:  0.9050772626931567\n",
      "Accuracy:  0.912109375\n",
      "Accuracy:  0.9347826086956522\n"
     ]
    }
   ],
   "source": [
    "predictions = np.array(predictions).flatten()\n",
    "merge_predictions = []\n",
    "\n",
    "# Flatten the per-batch arrays into flat lists of predictions and labels\n",
    "for prediction in predictions:\n",
    "    merge_predictions.extend([prediction])\n",
    "predictions = merge_predictions\n",
    "target_sens = np.array(target_sens).flatten()\n",
    "merge_target_sens = []\n",
    "for target_sen in target_sens:\n",
    "    merge_target_sens.extend([target_sen])\n",
    "target_sens = merge_target_sens\n",
    "\n",
    "# Each printed value is the classification accuracy of one test batch\n",
    "for i in range(len(target_sens)):\n",
    "    acc = accuracy_score(target_sens[i], predictions[i])\n",
    "    print(\"Accuracy: \", acc)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
