{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "MindSpore is a deep learning framework recently open-sourced by Huawei. According to the official documentation, it was developed mainly to make full use of Huawei's in-house Ascend AI processors, although it also runs on CPU and GPU. Since the framework has only reached version 0.3, there is still relatively little material about it online, so this post walks through a small project that shows how to train a deep learning model with MindSpore. To learn more about MindSpore, visit the official site: https://www.mindspore.cn and the code repository: https://gitee.com/mindspore/mindspore"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This notebook shows how to use MindSpore for sentiment analysis on movie reviews from the IMDB dataset. The idea is straightforward: embed the words of each review, feed the embedded sequences into an LSTM, and have the model tag each review as positive or negative."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The pipeline has three parts:\n",
    "- Data preparation: this tutorial uses the IMDB movie review dataset, available at http://ai.stanford.edu/~amaas/data/sentiment/ . To run this notebook, download the archive and extract it into ./data/imdb. Because the review words must be embedded, we also need pretrained word vectors; rather than training our own, we use GloVe, available at http://nlp.stanford.edu/data/glove.6B.zip. The archive contains several txt files, each holding vectors for common words at a different dimensionality (50, 100, 200, or 300); the higher the dimension, the more expressive the vectors. Pick one according to your needs and place it under ./data/glove. With both IMDB and GloVe in place, the raw text is tokenized, embedded, and padded to a fixed length, then saved in MindRecord format.\n",
    "\n",
    "- Model training: MindSpore already defines many common models; we can use the LSTM-based SentimentNet directly from model_zoo.\n",
    "- Model evaluation: MindSpore's built-in interfaces make it easy to evaluate the trained model, e.g. its accuracy.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For the detailed processing flow, see the code below. The full code can be downloaded from: https://gitee.com/aierwiki/toy-project-for-mindspore"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Preparing the data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import math\n",
    "from itertools import chain\n",
    "import gensim\n",
    "import numpy as np\n",
    "from mindspore.mindrecord import FileWriter"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "def read_imdb(path, seg='train'):\n",
    "    labels = ['pos', 'neg']\n",
    "    data = []\n",
    "    for label in labels:\n",
    "        files = os.listdir(os.path.join(path, seg, label))\n",
    "        for file in files:\n",
    "            with open(os.path.join(path, seg, label, file), 'r', encoding='utf8') as rf:\n",
    "                review = rf.read().replace('\\n', '')\n",
    "                if label == 'pos':\n",
    "                    data.append([review, 1])\n",
    "                elif label == 'neg':\n",
    "                    data.append([review, 0])\n",
    "    return data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "def tokenize_samples(raw_data):\n",
    "    tokenized_data = []\n",
    "    for review in raw_data:\n",
    "        tokenized_data.append([tok.lower() for tok in review.split()])\n",
    "    return tokenized_data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "def encode_samples(tokenized_samples, word_to_idx):\n",
    "    \"\"\"\n",
    "    tokenized_samples: [[word, word, ...]]\n",
    "    word_to_idx: {word:idx, word:idx, ...}\n",
    "    features: [[idx, idx, ...], [idx, idx, ...], ...]\n",
    "    \"\"\"\n",
    "    features = []\n",
    "    for sample in tokenized_samples:\n",
    "        feature = []\n",
    "        for token in sample:\n",
    "            feature.append(word_to_idx.get(token, 0))\n",
    "        features.append(feature)\n",
    "    return features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "def pad_samples(features, maxlen=500, pad=0):\n",
    "    padded_features = []\n",
    "    for feature in features:\n",
    "        if len(feature) >= maxlen:\n",
    "            padded_feature = feature[:maxlen]\n",
    "        else:\n",
    "            padded_feature = list(feature)  # copy so the caller's list is not mutated\n",
    "            while len(padded_feature) < maxlen:\n",
    "                padded_feature.append(pad)\n",
    "        padded_features.append(padded_feature)\n",
    "    return padded_features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "def prepare_data(imdb_data_path='./data/imdb/aclImdb'):\n",
    "    raw_data_train = read_imdb(imdb_data_path, seg='train')\n",
    "    raw_data_test = read_imdb(imdb_data_path, seg='test')\n",
    "    y_train = np.array([label for _, label in raw_data_train]).astype(np.int32)\n",
    "    y_test = np.array([label for _, label in raw_data_test]).astype(np.int32)\n",
    "    tokenized_data_train = tokenize_samples([review for review, _ in raw_data_train])\n",
    "    tokenized_data_test = tokenize_samples([review for review, _ in raw_data_test])\n",
    "    vocab = set(chain(*tokenized_data_train))\n",
    "    word_to_idx = {word: i+1 for i, word in enumerate(vocab)}\n",
    "    word_to_idx['<unk>'] = 0\n",
    "    X_train = np.array(pad_samples(encode_samples(tokenized_data_train, word_to_idx))).astype(np.int32)\n",
    "    X_test = np.array(pad_samples(encode_samples(tokenized_data_test, word_to_idx))).astype(np.int32)\n",
    "    return X_train, y_train, X_test, y_test, word_to_idx"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train, y_train, X_test, y_test, word_to_idx = prepare_data()"
   ]
  },
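  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the preprocessing steps concrete, the cell below walks a single made-up review through the same tokenize, encode, and pad steps with a tiny hypothetical vocabulary. It is purely illustrative and not part of the pipeline; the names (review, toy_word_to_idx) are invented for this demo."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy walk-through of tokenize -> encode -> pad on one review (illustration only)\n",
    "review = 'This movie was great great fun'\n",
    "tokens = [tok.lower() for tok in review.split()]  # tokenize, as in tokenize_samples\n",
    "toy_word_to_idx = {w: i + 1 for i, w in enumerate(sorted(set(tokens)))}  # index 0 is reserved for <unk>\n",
    "feature = [toy_word_to_idx.get(tok, 0) for tok in tokens]  # encode, as in encode_samples\n",
    "maxlen = 10\n",
    "padded = (feature + [0] * maxlen)[:maxlen]  # pad/truncate, as in pad_samples\n",
    "print(padded)  # [4, 3, 5, 2, 2, 1, 0, 0, 0, 0]\n"
   ]
  },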
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "# GloVe's txt files lack the word2vec header line; prepend vocab size and dimension once so gensim can parse them:\n",
    "#!sed -i '1i\\400000 50' ./data/glove/glove.6B.50d.txt\n",
    "def load_embeddings(glove_file_path, word_to_idx, embed_size=50):\n",
    "    word2vector = gensim.models.KeyedVectors.load_word2vec_format(\n",
    "        glove_file_path, binary=False, encoding='utf-8')\n",
    "    assert embed_size == word2vector.vector_size\n",
    "    embeddings = np.zeros((len(word_to_idx), embed_size)).astype(np.float32)\n",
    "    for word, idx in word_to_idx.items():\n",
    "        try:\n",
    "            embeddings[idx, :] = word2vector.word_vec(word)\n",
    "        except KeyError:\n",
    "            continue\n",
    "    return embeddings"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "embeddings = load_embeddings('./data/glove/glove.6B.50d.txt', word_to_idx)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_json_data_list(X, y):\n",
    "    data_list = []\n",
    "    for i, (feature, label) in enumerate(zip(X, y)):\n",
    "        data_json = {\"id\": i, \"feature\": feature.reshape(-1), \"label\": int(label)}\n",
    "        data_list.append(data_json)\n",
    "    return data_list"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "def convert_np_to_mindrecord(X_train, y_train, X_test, y_test, mindrecord_save_path=\"./data/mindrecord\"):\n",
    "    schema_json = {\"id\": {\"type\": \"int32\"},\n",
    "                  \"label\": {\"type\": \"int32\"},\n",
    "                  \"feature\": {\"type\": \"int32\", \"shape\": [-1]}}\n",
    "    writer = FileWriter(os.path.join(mindrecord_save_path, \"aclImdb_train.mindrecord\"), shard_num=4)\n",
    "    data_train = get_json_data_list(X_train, y_train)\n",
    "    writer.add_schema(schema_json, \"nlp_schema\")\n",
    "    writer.add_index([\"id\", \"label\"])\n",
    "    writer.write_raw_data(data_train)\n",
    "    writer.commit()\n",
    "    \n",
    "    writer = FileWriter(os.path.join(mindrecord_save_path, \"aclImdb_test.mindrecord\"), shard_num=4)\n",
    "    data_test = get_json_data_list(X_test, y_test)\n",
    "    writer.add_schema(schema_json, \"nlp_schema\")\n",
    "    writer.add_index([\"id\", \"label\"])\n",
    "    writer.write_raw_data(data_test)\n",
    "    writer.commit()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "aclImdb_test.mindrecord0     aclImdb_train.mindrecord0\n",
      "aclImdb_test.mindrecord0.db  aclImdb_train.mindrecord0.db\n",
      "aclImdb_test.mindrecord1     aclImdb_train.mindrecord1\n",
      "aclImdb_test.mindrecord1.db  aclImdb_train.mindrecord1.db\n",
      "aclImdb_test.mindrecord2     aclImdb_train.mindrecord2\n",
      "aclImdb_test.mindrecord2.db  aclImdb_train.mindrecord2.db\n",
      "aclImdb_test.mindrecord3     aclImdb_train.mindrecord3\n",
      "aclImdb_test.mindrecord3.db  aclImdb_train.mindrecord3.db\n"
     ]
    }
   ],
   "source": [
    "!ls ./data/mindrecord"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "np.savetxt(\"./data/mindrecord/weight.txt\", embeddings)"
   ]
  },
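  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check on storing the embedding table as text, the next cell round-trips a small random stand-in matrix through np.savetxt / np.loadtxt with the same float32 cast used when the weights are reloaded later. The shapes and buffer are made up for illustration; the real table is written to ./data/mindrecord/weight.txt above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import io\n",
    "import numpy as np\n",
    "\n",
    "emb = np.random.rand(5, 50).astype(np.float32)  # stand-in for the real embedding table\n",
    "buf = io.StringIO()  # in-memory file instead of weight.txt\n",
    "np.savetxt(buf, emb)\n",
    "buf.seek(0)\n",
    "back = np.loadtxt(buf).astype(np.float32)  # same reload + cast as the training section\n",
    "print(back.shape, np.allclose(emb, back))  # (5, 50) True\n"
   ]
  },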
  {
   "cell_type": "code",
   "execution_count": 98,
   "metadata": {},
   "outputs": [],
   "source": [
    "convert_np_to_mindrecord(X_train, y_train, X_test, y_test)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Creating the dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import mindspore.dataset as mds"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_dataset(base_path, batch_size, num_epochs, is_train):\n",
    "    columns_list = [\"feature\", \"label\"]\n",
    "    num_consumer = 4\n",
    "    if is_train:\n",
    "        path = os.path.join(base_path, \"aclImdb_train.mindrecord0\")\n",
    "    else:\n",
    "        path = os.path.join(base_path, \"aclImdb_test.mindrecord0\")\n",
    "    dataset = mds.MindDataset(path, columns_list=columns_list, num_parallel_workers=num_consumer)\n",
    "    dataset = dataset.shuffle(buffer_size=dataset.get_dataset_size())\n",
    "    dataset = dataset.batch(batch_size=batch_size, drop_remainder=True)\n",
    "    dataset = dataset.repeat(count=num_epochs)\n",
    "    return dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset_train = create_dataset(\"./data/mindrecord\", batch_size=32, num_epochs=10, is_train=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Defining and training the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "from mindspore import Tensor, nn, Model, context, Parameter\n",
    "from mindspore.common.initializer import initializer\n",
    "from mindspore.ops import operations as P\n",
    "from mindspore.nn import Accuracy\n",
    "from mindspore.train.callback import LossMonitor, CheckpointConfig, ModelCheckpoint, TimeMonitor\n",
    "from mindspore.model_zoo.lstm import SentimentNet"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "embedding_table = np.loadtxt(os.path.join(\"./data/mindrecord\", \"weight.txt\")).astype(np.float32)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "network = SentimentNet(vocab_size=embedding_table.shape[0],\n",
    "                       embed_size=50,\n",
    "                       num_hiddens=100,\n",
    "                       num_layers=2,\n",
    "                       bidirectional=False,\n",
    "                       num_classes=2,\n",
    "                       weight=Tensor(embedding_table),\n",
    "                       batch_size=32)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "loss = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "opt = nn.Momentum(network.trainable_params(), 0.1, 0.9)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [],
   "source": [
    "loss_callback = LossMonitor(per_print_times=60)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = Model(network, loss, opt, {'acc': Accuracy()})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [],
   "source": [
    "config_ck = CheckpointConfig(save_checkpoint_steps=390, keep_checkpoint_max=10)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [],
   "source": [
    "checkpoint_cb = ModelCheckpoint(prefix=\"lstm\", directory=\"./model\", config=config_ck)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [],
   "source": [
    "context.set_context(mode=context.GRAPH_MODE, save_graphs=False, device_target=\"GPU\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch: 1 step: 60, loss is 0.68534297\n",
      "epoch: 1 step: 120, loss is 0.6540929\n",
      "epoch: 1 step: 180, loss is 0.6991941\n",
      "epoch: 1 step: 240, loss is 0.6952911\n",
      "epoch: 1 step: 300, loss is 0.6947917\n",
      "epoch: 1 step: 360, loss is 0.6916472\n",
      "epoch: 1 step: 420, loss is 0.7108066\n",
      "epoch: 1 step: 480, loss is 0.70178443\n",
      "epoch: 1 step: 540, loss is 0.6881235\n",
      "epoch: 1 step: 600, loss is 0.6866909\n",
      "epoch: 1 step: 660, loss is 0.6784966\n",
      "epoch: 1 step: 720, loss is 0.7008598\n",
      "epoch: 1 step: 780, loss is 0.6816678\n",
      "epoch: 2 step: 59, loss is 0.689169\n",
      "epoch: 2 step: 119, loss is 0.6890258\n",
      "epoch: 2 step: 179, loss is 0.68908715\n",
      "epoch: 2 step: 239, loss is 0.693069\n",
      "epoch: 2 step: 299, loss is 0.70114046\n",
      "epoch: 2 step: 359, loss is 0.70493597\n",
      "epoch: 2 step: 419, loss is 0.6845702\n",
      "epoch: 2 step: 479, loss is 0.6932264\n",
      "epoch: 2 step: 539, loss is 0.65646595\n",
      "epoch: 2 step: 599, loss is 0.6929465\n",
      "epoch: 2 step: 659, loss is 0.7343909\n",
      "epoch: 2 step: 719, loss is 0.74130523\n",
      "epoch: 2 step: 779, loss is 0.6994195\n",
      "epoch: 3 step: 58, loss is 0.7015147\n",
      "epoch: 3 step: 118, loss is 0.6912332\n",
      "epoch: 3 step: 178, loss is 0.69253314\n",
      "epoch: 3 step: 238, loss is 0.6845928\n",
      "epoch: 3 step: 298, loss is 0.69672406\n",
      "epoch: 3 step: 358, loss is 0.693559\n",
      "epoch: 3 step: 418, loss is 0.68099916\n",
      "epoch: 3 step: 478, loss is 0.69984835\n",
      "epoch: 3 step: 538, loss is 0.69197893\n",
      "epoch: 3 step: 598, loss is 0.7402815\n",
      "epoch: 3 step: 658, loss is 0.6900856\n",
      "epoch: 3 step: 718, loss is 0.7192722\n",
      "epoch: 3 step: 778, loss is 0.6806815\n",
      "epoch: 4 step: 57, loss is 0.7227373\n",
      "epoch: 4 step: 117, loss is 0.6730433\n",
      "epoch: 4 step: 177, loss is 0.6573717\n",
      "epoch: 4 step: 237, loss is 0.7085013\n",
      "epoch: 4 step: 297, loss is 0.6785747\n",
      "epoch: 4 step: 357, loss is 0.7435396\n",
      "epoch: 4 step: 417, loss is 0.6762891\n",
      "epoch: 4 step: 477, loss is 0.70481503\n",
      "epoch: 4 step: 537, loss is 0.6838855\n",
      "epoch: 4 step: 597, loss is 0.6782035\n",
      "epoch: 4 step: 657, loss is 0.69722736\n",
      "epoch: 4 step: 717, loss is 0.69401675\n",
      "epoch: 4 step: 777, loss is 0.6923896\n",
      "epoch: 5 step: 56, loss is 0.6755858\n",
      "epoch: 5 step: 116, loss is 0.6902178\n",
      "epoch: 5 step: 176, loss is 0.6967071\n",
      "epoch: 5 step: 236, loss is 0.7098597\n",
      "epoch: 5 step: 296, loss is 0.75929666\n",
      "epoch: 5 step: 356, loss is 0.70480347\n",
      "epoch: 5 step: 416, loss is 0.67271775\n",
      "epoch: 5 step: 476, loss is 0.6790585\n",
      "epoch: 5 step: 536, loss is 0.6885497\n",
      "epoch: 5 step: 596, loss is 0.6823237\n",
      "epoch: 5 step: 656, loss is 0.69205844\n",
      "epoch: 5 step: 716, loss is 0.72124106\n",
      "epoch: 5 step: 776, loss is 0.77457017\n",
      "epoch: 6 step: 55, loss is 0.70404315\n",
      "epoch: 6 step: 115, loss is 0.667035\n",
      "epoch: 6 step: 175, loss is 0.7099749\n",
      "epoch: 6 step: 235, loss is 0.67157024\n",
      "epoch: 6 step: 295, loss is 0.7066524\n",
      "epoch: 6 step: 355, loss is 0.6628231\n",
      "epoch: 6 step: 415, loss is 0.71774\n",
      "epoch: 6 step: 475, loss is 0.7003077\n",
      "epoch: 6 step: 535, loss is 0.7007376\n",
      "epoch: 6 step: 595, loss is 0.6750086\n",
      "epoch: 6 step: 655, loss is 0.66500735\n",
      "epoch: 6 step: 715, loss is 0.6554856\n",
      "epoch: 6 step: 775, loss is 0.6798543\n",
      "epoch: 7 step: 54, loss is 0.68250155\n",
      "epoch: 7 step: 114, loss is 0.6809841\n",
      "epoch: 7 step: 174, loss is 0.72653234\n",
      "epoch: 7 step: 234, loss is 0.7061486\n",
      "epoch: 7 step: 294, loss is 0.70829767\n",
      "epoch: 7 step: 354, loss is 0.66803557\n",
      "epoch: 7 step: 414, loss is 0.6876676\n",
      "epoch: 7 step: 474, loss is 0.67877734\n",
      "epoch: 7 step: 534, loss is 0.6720035\n",
      "epoch: 7 step: 594, loss is 0.6820434\n",
      "epoch: 7 step: 654, loss is 0.68805\n",
      "epoch: 7 step: 714, loss is 0.8011002\n",
      "epoch: 7 step: 774, loss is 0.70042866\n",
      "epoch: 8 step: 53, loss is 0.72881436\n",
      "epoch: 8 step: 113, loss is 0.6926464\n",
      "epoch: 8 step: 173, loss is 0.6929503\n",
      "epoch: 8 step: 233, loss is 0.70876867\n",
      "epoch: 8 step: 293, loss is 0.6721176\n",
      "epoch: 8 step: 353, loss is 0.6962265\n",
      "epoch: 8 step: 413, loss is 0.68623626\n",
      "epoch: 8 step: 473, loss is 0.6974403\n",
      "epoch: 8 step: 533, loss is 0.67096335\n",
      "epoch: 8 step: 593, loss is 0.71321845\n",
      "epoch: 8 step: 653, loss is 0.66914654\n",
      "epoch: 8 step: 713, loss is 0.67352813\n",
      "epoch: 8 step: 773, loss is 0.6924113\n",
      "epoch: 9 step: 52, loss is 0.64445335\n",
      "epoch: 9 step: 112, loss is 0.68303764\n",
      "epoch: 9 step: 172, loss is 0.6797687\n",
      "epoch: 9 step: 232, loss is 0.67689335\n",
      "epoch: 9 step: 292, loss is 0.70505637\n",
      "epoch: 9 step: 352, loss is 0.6834617\n",
      "epoch: 9 step: 412, loss is 0.6920756\n",
      "epoch: 9 step: 472, loss is 0.7067552\n",
      "epoch: 9 step: 532, loss is 0.6869732\n",
      "epoch: 9 step: 592, loss is 0.6928568\n",
      "epoch: 9 step: 652, loss is 0.70847225\n",
      "epoch: 9 step: 712, loss is 0.68950903\n",
      "epoch: 9 step: 772, loss is 0.72626096\n",
      "epoch: 10 step: 51, loss is 0.7197725\n",
      "epoch: 10 step: 111, loss is 0.7169821\n",
      "epoch: 10 step: 171, loss is 0.68675727\n",
      "epoch: 10 step: 231, loss is 0.6730936\n",
      "epoch: 10 step: 291, loss is 0.68884194\n",
      "epoch: 10 step: 351, loss is 0.68834925\n",
      "epoch: 10 step: 411, loss is 0.6844593\n",
      "epoch: 10 step: 471, loss is 0.69052047\n",
      "epoch: 10 step: 531, loss is 0.71014285\n",
      "epoch: 10 step: 591, loss is 0.69257176\n",
      "epoch: 10 step: 651, loss is 0.67768323\n",
      "epoch: 10 step: 711, loss is 0.7152304\n",
      "epoch: 10 step: 771, loss is 0.70117944\n"
     ]
    }
   ],
   "source": [
    "model.train(10, dataset_train, callbacks=[checkpoint_cb, loss_callback], dataset_sink_mode=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Evaluating the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [],
   "source": [
    "# num_epochs here only controls dataset.repeat(); a single pass (num_epochs=1) would suffice for evaluation\n",
    "dataset_test = create_dataset(\"./data/mindrecord\", batch_size=32, num_epochs=10, is_train=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [],
   "source": [
    "acc = model.eval(dataset_test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "accuracy:{'acc': 0.6604833546734955}\n"
     ]
    }
   ],
   "source": [
    "print(\"accuracy:{}\".format(acc))"
   ]
  },
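  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "SentimentNet's final layer emits two logits per review, one per class, and the accuracy metric compares the arg-max class with the label. The cell below is a minimal numpy sketch of that last step using made-up logit values; it does not call the trained model, and the assumed [negative, positive] ordering simply mirrors the 0/1 labels assigned in read_imdb."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "logits = np.array([1.2, -0.3])  # hypothetical [score_negative, score_positive] for one review\n",
    "probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the two classes\n",
    "label = int(np.argmax(probs))  # 0 = negative, 1 = positive, matching read_imdb's labels\n",
    "print(label)  # 0\n"
   ]
  },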
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}