{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# <center/>Experiencing Lineage Analysis and Comparison Analysis with MindInsight"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Overview\n",
    "\n",
    "When tuning a model, you typically adjust its hyperparameters and retrain many times, manually recording the parameters and results of each run. To relieve this burden, MindSpore can automatically record model parameters, training information, and evaluation metrics, and MindInsight visualizes them. This walkthrough covers how MindInsight records data, what its visualizations look like, and how it streamlines the end-to-end workflow of tuning both models and datasets.\n",
    "\n",
    "The steps below follow the normal MindSpore training workflow, using `SummaryCollector` to save the data. The overall flow of this walkthrough is:\n",
    "\n",
    "1. Prepare the dataset; the MNIST dataset is used here.\n",
    "\n",
    "2. Build a network; the LeNet network is used here.\n",
    "\n",
    "3. Record data and start training.\n",
    "\n",
    "4. Start the MindInsight service.\n",
    "\n",
    "5. Use lineage analysis.\n",
    "\n",
    "6. Use comparison analysis.\n",
    "\n",
    "\n",
    "> This document applies to GPU and Ascend environments."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Preparing the Dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Downloading the Dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Run the following commands to download the dataset files into the specified directories and verify the resulting directory structure."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "./datasets/MNIST_Data\n",
      "├── test\n",
      "│   ├── t10k-images-idx3-ubyte\n",
      "│   └── t10k-labels-idx1-ubyte\n",
      "└── train\n",
      "    ├── train-images-idx3-ubyte\n",
      "    └── train-labels-idx1-ubyte\n",
      "\n",
      "2 directories, 4 files\n"
     ]
    }
   ],
   "source": [
    "!mkdir -p ./datasets/MNIST_Data/train ./datasets/MNIST_Data/test\n",
    "!wget -NP ./datasets/MNIST_Data/train https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-labels-idx1-ubyte \n",
    "!wget -NP ./datasets/MNIST_Data/train https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-images-idx3-ubyte\n",
    "!wget -NP ./datasets/MNIST_Data/test https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-labels-idx1-ubyte\n",
    "!wget -NP ./datasets/MNIST_Data/test https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-images-idx3-ubyte\n",
    "!tree ./datasets/MNIST_Data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Processing the Dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Dataset processing is very important for training: a well-prepared dataset can effectively improve training accuracy and efficiency. We usually apply some processing to a dataset before loading it.\n",
    "<br/>We define a function `create_dataset` to create the dataset. In this function, we define the data augmentation and processing operations to perform:\n",
    "\n",
    "1. Define the dataset.\n",
    "2. Define the parameters needed for data augmentation and processing.\n",
    "3. Generate the corresponding augmentation operations according to the parameters.\n",
    "4. Apply the operations to the dataset using the `map` function.\n",
    "5. Process the generated dataset.\n",
    "\n",
    "The specific dataset operations can be visualized and analyzed on MindInsight's dataset lineage page."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import mindspore.dataset.vision.c_transforms as CV\n",
    "import mindspore.dataset.transforms.c_transforms as C\n",
    "from mindspore.dataset.vision import Inter\n",
    "from mindspore.common import dtype as mstype\n",
    "import mindspore.dataset as ds\n",
    "\n",
    "def create_dataset(data_path, batch_size=16, repeat_size=1,\n",
    "                   num_parallel_workers=1):\n",
    "    \"\"\" create dataset for train or test\n",
    "    Args:\n",
    "        data_path (str): Data path\n",
    "        batch_size (int): The number of data records in each group\n",
    "        repeat_size (int): The number of replicated data records\n",
    "        num_parallel_workers (int): The number of parallel workers\n",
    "    \"\"\"\n",
    "    # define dataset\n",
    "    mnist_ds = ds.MnistDataset(data_path)\n",
    "\n",
    "    # define some parameters needed for data augmentation and normalization\n",
    "    resize_height, resize_width = 32, 32\n",
    "    rescale = 1.0 / 255.0\n",
    "    shift = 0.0\n",
    "    rescale_nml = 1 / 0.3081\n",
    "    shift_nml = -1 * 0.1307 / 0.3081\n",
    "\n",
    "    # according to the parameters, generate the corresponding data enhancement method\n",
    "    resize_op = CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR)\n",
    "    # if you need to use SummaryCollector to extract image data, do not apply the following normalization operation\n",
    "    rescale_nml_op = CV.Rescale(rescale_nml, shift_nml)\n",
    "    rescale_op = CV.Rescale(rescale, shift)\n",
    "    hwc2chw_op = CV.HWC2CHW()\n",
    "    type_cast_op = C.TypeCast(mstype.int32)\n",
    "\n",
    "    # using map method to apply operations to a dataset\n",
    "    mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns=\"label\", num_parallel_workers=num_parallel_workers)\n",
    "    mnist_ds = mnist_ds.map(operations=resize_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
    "    mnist_ds = mnist_ds.map(operations=rescale_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
    "    mnist_ds = mnist_ds.map(operations=rescale_nml_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
    "    mnist_ds = mnist_ds.map(operations=hwc2chw_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
    "    \n",
    "    # process the generated dataset\n",
    "    buffer_size = 10000\n",
    "    mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size)  # 10000 as in LeNet train script\n",
    "    mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)\n",
    "    mnist_ds = mnist_ds.repeat(repeat_size)\n",
    "\n",
    "    return mnist_ds"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Defining the LeNet5 Network"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This example uses the LeNet5 network, which performs very well on handwritten digit classification. The network is defined as follows."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import mindspore.ops as ops\n",
    "import mindspore.nn as nn\n",
    "from mindspore.common.initializer import Normal\n",
    "\n",
    "class LeNet5(nn.Cell):\n",
    "    \"\"\"Lenet network structure.\"\"\"\n",
    "    def __init__(self):\n",
    "        super(LeNet5, self).__init__()\n",
    "        self.conv1 = nn.Conv2d(1, 6, 5, pad_mode=\"valid\")\n",
    "        self.conv2 = nn.Conv2d(6, 16, 5, pad_mode=\"valid\")\n",
    "        self.fc1 = nn.Dense(16 * 5 * 5, 120, Normal(0.02), Normal(0.02))\n",
    "        self.fc2 = nn.Dense(120, 84, Normal(0.02), Normal(0.02))\n",
    "        self.fc3 = nn.Dense(84, 10)\n",
    "        self.relu = nn.ReLU()\n",
    "        self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)\n",
    "        self.flatten = nn.Flatten()\n",
    "        self.image_summary = ops.ImageSummary()\n",
    "        self.tensor_summary = ops.TensorSummary()\n",
    "\n",
    "    def construct(self, x):\n",
    "        self.image_summary(\"image\", x)\n",
    "        self.tensor_summary(\"tensor\", x)\n",
    "        x = self.max_pool2d(self.relu(self.conv1(x)))\n",
    "        x = self.max_pool2d(self.relu(self.conv2(x)))\n",
    "        x = self.flatten(x)\n",
    "        x = self.relu(self.fc1(x))\n",
    "        x = self.relu(self.fc2(x))\n",
    "        x = self.fc3(x)\n",
    "        return x"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Recording Data and Starting Training"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "MindSpore provides the `SummaryCollector` callback to record information during training.\n",
    "\n",
    "To better demonstrate lineage analysis and comparison analysis, we train the model multiple times while varying the learning rate (`learning_rate`), number of epochs (`epoch_size`), and batch size (`batch_size`), and use `SummaryCollector` to save the data of each run.\n",
    "\n",
    "`learning_rate` takes the values 0.01 and 0.05.\n",
    "\n",
    "`epoch_size` takes the values 2 and 5.\n",
    "\n",
    "`batch_size` takes the values 16 and 32.\n",
    "\n",
    "Training with every combination of these values gives 2 x 2 x 2 = 8 parameter groups in total.\n",
    "\n",
    "For more usage of `SummaryCollector`, see the [API documentation](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.train.html#mindspore.train.callback.SummaryCollector)."
   ]
  },
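  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The training loop for each parameter group can be sketched as follows. This is a minimal sketch rather than the exact script that produced the logs below: the `train_net` helper and the `summary_dir` naming are assumptions for illustration, and `create_dataset` and `LeNet5` come from the cells above.\n",
    "\n",
    "```python\n",
    "import mindspore.nn as nn\n",
    "from mindspore import Model\n",
    "from mindspore.nn.metrics import Accuracy\n",
    "from mindspore.train.callback import LossMonitor, SummaryCollector\n",
    "\n",
    "def train_net(learning_rate, epoch_size, batch_size, summary_dir):\n",
    "    # build the training dataset with the batch size under test (assumed paths)\n",
    "    ds_train = create_dataset(\"./datasets/MNIST_Data/train\", batch_size=batch_size)\n",
    "    network = LeNet5()\n",
    "    net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction=\"mean\")\n",
    "    net_opt = nn.Momentum(network.trainable_params(), learning_rate, 0.9)\n",
    "    model = Model(network, net_loss, net_opt, metrics={\"Accuracy\": Accuracy()})\n",
    "    # SummaryCollector records lineage and training data for MindInsight\n",
    "    summary_collector = SummaryCollector(summary_dir=summary_dir)\n",
    "    model.train(epoch_size, ds_train, callbacks=[LossMonitor(125), summary_collector])\n",
    "```\n",
    "\n",
    "Writing each run into its own `summary_dir` is what lets MindInsight list the 8 runs side by side for lineage and comparison analysis."
   ]
  },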
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "================= The Situation 1 =================\n",
      "== learning_rate:0.01, epoch_size:2, batch_size:16 ==\n",
      "================ Starting Training ================\n",
      "epoch: 1 step: 125, loss is 2.296706\n",
      "epoch: 1 step: 250, loss is 2.2627764\n",
      "epoch: 1 step: 375, loss is 2.3244872\n",
      "epoch: 1 step: 500, loss is 2.3250148\n",
      "epoch: 1 step: 625, loss is 2.2620986\n",
      "epoch: 1 step: 750, loss is 2.3121898\n",
      "epoch: 1 step: 875, loss is 2.3026366\n",
      "epoch: 1 step: 1000, loss is 2.2881525\n",
      "epoch: 1 step: 1125, loss is 2.3082426\n",
      "epoch: 1 step: 1250, loss is 2.2710335\n",
      "epoch: 1 step: 1375, loss is 2.270731\n",
      "epoch: 1 step: 1500, loss is 0.9243496\n",
      "epoch: 1 step: 1625, loss is 0.6205256\n",
      "epoch: 1 step: 1750, loss is 0.24102339\n",
      "epoch: 1 step: 1875, loss is 0.22378448\n",
      "epoch: 1 step: 2000, loss is 0.29994896\n",
      "epoch: 1 step: 2125, loss is 0.17046359\n",
      "epoch: 1 step: 2250, loss is 0.0100224735\n",
      "epoch: 1 step: 2375, loss is 0.48139822\n",
      "epoch: 1 step: 2500, loss is 0.06742601\n",
      "epoch: 1 step: 2625, loss is 0.0052048573\n",
      "epoch: 1 step: 2750, loss is 0.3211555\n",
      "epoch: 1 step: 2875, loss is 0.31865978\n",
      "epoch: 1 step: 3000, loss is 0.01020497\n",
      "epoch: 1 step: 3125, loss is 0.50366527\n",
      "epoch: 1 step: 3250, loss is 0.009847044\n",
      "epoch: 1 step: 3375, loss is 0.090079\n",
      "epoch: 1 step: 3500, loss is 0.011234925\n",
      "epoch: 1 step: 3625, loss is 0.10852169\n",
      "epoch: 1 step: 3750, loss is 0.10376892\n",
      "epoch: 2 step: 125, loss is 0.02567438\n",
      "epoch: 2 step: 250, loss is 0.04415862\n",
      "epoch: 2 step: 375, loss is 0.23834515\n",
      "epoch: 2 step: 500, loss is 0.0027683496\n",
      "epoch: 2 step: 625, loss is 0.03725496\n",
      "epoch: 2 step: 750, loss is 0.0075114276\n",
      "epoch: 2 step: 875, loss is 0.18687367\n",
      "epoch: 2 step: 1000, loss is 0.037377894\n",
      "epoch: 2 step: 1125, loss is 0.06776899\n",
      "epoch: 2 step: 1250, loss is 0.4728808\n",
      "epoch: 2 step: 1375, loss is 0.0015088853\n",
      "epoch: 2 step: 1500, loss is 0.11711723\n",
      "epoch: 2 step: 1625, loss is 0.102257855\n",
      "epoch: 2 step: 1750, loss is 0.0031716113\n",
      "epoch: 2 step: 1875, loss is 0.061183207\n",
      "epoch: 2 step: 2000, loss is 0.019151682\n",
      "epoch: 2 step: 2125, loss is 0.021350795\n",
      "epoch: 2 step: 2250, loss is 0.106557846\n",
      "epoch: 2 step: 2375, loss is 0.3371804\n",
      "epoch: 2 step: 2500, loss is 0.283691\n",
      "epoch: 2 step: 2625, loss is 0.009455755\n",
      "epoch: 2 step: 2750, loss is 0.20017545\n",
      "epoch: 2 step: 2875, loss is 0.009389517\n",
      "epoch: 2 step: 3000, loss is 0.04983216\n",
      "epoch: 2 step: 3125, loss is 0.03779413\n",
      "epoch: 2 step: 3250, loss is 0.0079332255\n",
      "epoch: 2 step: 3375, loss is 0.2990877\n",
      "epoch: 2 step: 3500, loss is 0.01983778\n",
      "epoch: 2 step: 3625, loss is 0.09528342\n",
      "epoch: 2 step: 3750, loss is 0.008239745\n",
      "================ Starting Testing ================\n",
      "============ Accuracy:{'Accuracy': 0.9797} ============\n",
      "\n",
      "\n",
      "================= The Situation 2 =================\n",
      "== learning_rate:0.01, epoch_size:2, batch_size:32 ==\n",
      "================ Starting Training ================\n",
      "epoch: 1 step: 125, loss is 2.2925339\n",
      "epoch: 1 step: 250, loss is 2.3008182\n",
      "epoch: 1 step: 375, loss is 2.3030884\n",
      "epoch: 1 step: 500, loss is 2.2976336\n",
      "epoch: 1 step: 625, loss is 2.3130703\n",
      "epoch: 1 step: 750, loss is 2.300329\n",
      "epoch: 1 step: 875, loss is 2.2933068\n",
      "epoch: 1 step: 1000, loss is 2.3033197\n",
      "epoch: 1 step: 1125, loss is 2.2951014\n",
      "epoch: 1 step: 1250, loss is 2.295413\n",
      "epoch: 1 step: 1375, loss is 2.3133006\n",
      "epoch: 1 step: 1500, loss is 0.85154563\n",
      "epoch: 1 step: 1625, loss is 0.47536567\n",
      "epoch: 1 step: 1750, loss is 0.26820138\n",
      "epoch: 1 step: 1875, loss is 0.4543515\n",
      "epoch: 2 step: 125, loss is 0.32313684\n",
      "epoch: 2 step: 250, loss is 0.22960262\n",
      "epoch: 2 step: 375, loss is 0.046680164\n",
      "epoch: 2 step: 500, loss is 0.05865948\n",
      "epoch: 2 step: 625, loss is 0.0072424933\n",
      "epoch: 2 step: 750, loss is 0.086514264\n",
      "epoch: 2 step: 875, loss is 0.11134705\n",
      "epoch: 2 step: 1000, loss is 0.020027155\n",
      "epoch: 2 step: 1125, loss is 0.12832528\n",
      "epoch: 2 step: 1250, loss is 0.055560835\n",
      "epoch: 2 step: 1375, loss is 0.028572561\n",
      "epoch: 2 step: 1500, loss is 0.19585766\n",
      "epoch: 2 step: 1625, loss is 0.14577985\n",
      "epoch: 2 step: 1750, loss is 0.23607145\n",
      "epoch: 2 step: 1875, loss is 0.0840621\n",
      "================ Starting Testing ================\n",
      "============ Accuracy:{'Accuracy': 0.9787} ============\n",
      "\n",
      "\n",
      "================= The Situation 3 =================\n",
      "== learning_rate:0.01, epoch_size:5, batch_size:16 ==\n",
      "================ Starting Training ================\n",
      "epoch: 1 step: 125, loss is 2.328815\n",
      "epoch: 1 step: 250, loss is 2.3232577\n",
      "epoch: 1 step: 375, loss is 2.2851524\n",
      "epoch: 1 step: 500, loss is 2.2837648\n",
      "epoch: 1 step: 625, loss is 2.3328993\n",
      "epoch: 1 step: 750, loss is 2.2846725\n",
      "epoch: 1 step: 875, loss is 2.3305407\n",
      "epoch: 1 step: 1000, loss is 2.3256888\n",
      "epoch: 1 step: 1125, loss is 2.3163714\n",
      "epoch: 1 step: 1250, loss is 2.2763608\n",
      "epoch: 1 step: 1375, loss is 2.3155422\n",
      "epoch: 1 step: 1500, loss is 1.2162496\n",
      "epoch: 1 step: 1625, loss is 0.53659093\n",
      "epoch: 1 step: 1750, loss is 0.23527911\n",
      "epoch: 1 step: 1875, loss is 0.105321795\n",
      "epoch: 1 step: 2000, loss is 0.19657795\n",
      "epoch: 1 step: 2125, loss is 0.5824721\n",
      "epoch: 1 step: 2250, loss is 0.38761842\n",
      "epoch: 1 step: 2375, loss is 0.10887136\n",
      "epoch: 1 step: 2500, loss is 0.2810255\n",
      "epoch: 1 step: 2625, loss is 0.9004075\n",
      "epoch: 1 step: 2750, loss is 0.13873589\n",
      "epoch: 1 step: 2875, loss is 0.010646933\n",
      "epoch: 1 step: 3000, loss is 0.073572345\n",
      "epoch: 1 step: 3125, loss is 0.25893953\n",
      "epoch: 1 step: 3250, loss is 0.028899945\n",
      "epoch: 1 step: 3375, loss is 0.3362317\n",
      "epoch: 1 step: 3500, loss is 0.02972875\n",
      "epoch: 1 step: 3625, loss is 0.0014936002\n",
      "epoch: 1 step: 3750, loss is 0.18348369\n",
      "epoch: 2 step: 125, loss is 0.0075014555\n",
      "epoch: 2 step: 250, loss is 0.08570729\n",
      "epoch: 2 step: 375, loss is 0.12431516\n",
      "epoch: 2 step: 500, loss is 0.18875955\n",
      "epoch: 2 step: 625, loss is 0.01166816\n",
      "epoch: 2 step: 750, loss is 0.45471027\n",
      "epoch: 2 step: 875, loss is 0.07407855\n",
      "epoch: 2 step: 1000, loss is 0.47525182\n",
      "epoch: 2 step: 1125, loss is 0.02400005\n",
      "epoch: 2 step: 1250, loss is 0.010517514\n",
      "epoch: 2 step: 1375, loss is 0.02913664\n",
      "epoch: 2 step: 1500, loss is 0.25256392\n",
      "epoch: 2 step: 1625, loss is 0.21558005\n",
      "epoch: 2 step: 1750, loss is 0.013623273\n",
      "epoch: 2 step: 1875, loss is 0.020157713\n",
      "epoch: 2 step: 2000, loss is 0.00023730143\n",
      "epoch: 2 step: 2125, loss is 0.04196192\n",
      "epoch: 2 step: 2250, loss is 0.22700204\n",
      "epoch: 2 step: 2375, loss is 0.15068744\n",
      "epoch: 2 step: 2500, loss is 0.18599582\n",
      "epoch: 2 step: 2625, loss is 0.11737528\n",
      "epoch: 2 step: 2750, loss is 0.003812017\n",
      "epoch: 2 step: 2875, loss is 0.008812527\n",
      "epoch: 2 step: 3000, loss is 0.035302274\n",
      "epoch: 2 step: 3125, loss is 0.18453324\n",
      "epoch: 2 step: 3250, loss is 0.0103479475\n",
      "epoch: 2 step: 3375, loss is 0.009817297\n",
      "epoch: 2 step: 3500, loss is 0.032968633\n",
      "epoch: 2 step: 3625, loss is 0.0034950136\n",
      "epoch: 2 step: 3750, loss is 0.0057869614\n",
      "epoch: 3 step: 125, loss is 0.27577397\n",
      "epoch: 3 step: 250, loss is 0.007953547\n",
      "epoch: 3 step: 375, loss is 0.21745506\n",
      "epoch: 3 step: 500, loss is 0.0020471578\n",
      "epoch: 3 step: 625, loss is 0.009939543\n",
      "epoch: 3 step: 750, loss is 0.032122627\n",
      "epoch: 3 step: 875, loss is 0.03780477\n",
      "epoch: 3 step: 1000, loss is 0.0076191444\n",
      "epoch: 3 step: 1125, loss is 0.007801161\n",
      "epoch: 3 step: 1250, loss is 0.0006592998\n",
      "epoch: 3 step: 1375, loss is 0.07005897\n",
      "epoch: 3 step: 1500, loss is 0.016776687\n",
      "epoch: 3 step: 1625, loss is 0.18362688\n",
      "epoch: 3 step: 1750, loss is 0.080620855\n",
      "epoch: 3 step: 1875, loss is 0.6229161\n",
      "epoch: 3 step: 2000, loss is 0.0055219308\n",
      "epoch: 3 step: 2125, loss is 0.0009366708\n",
      "epoch: 3 step: 2250, loss is 0.16341054\n",
      "epoch: 3 step: 2375, loss is 0.0015036274\n",
      "epoch: 3 step: 2500, loss is 0.013156504\n",
      "epoch: 3 step: 2625, loss is 0.0027046946\n",
      "epoch: 3 step: 2750, loss is 0.0009584853\n",
      "epoch: 3 step: 2875, loss is 0.22423576\n",
      "epoch: 3 step: 3000, loss is 0.05459709\n",
      "epoch: 3 step: 3125, loss is 0.00039554507\n",
      "epoch: 3 step: 3250, loss is 0.010483981\n",
      "epoch: 3 step: 3375, loss is 0.032579858\n",
      "epoch: 3 step: 3500, loss is 0.000750014\n",
      "epoch: 3 step: 3625, loss is 0.00826493\n",
      "epoch: 3 step: 3750, loss is 0.049514227\n",
      "epoch: 4 step: 125, loss is 0.015774932\n",
      "epoch: 4 step: 250, loss is 0.06803825\n",
      "epoch: 4 step: 375, loss is 0.00016382817\n",
      "epoch: 4 step: 500, loss is 0.078078\n",
      "epoch: 4 step: 625, loss is 0.14985096\n",
      "epoch: 4 step: 750, loss is 0.12369352\n",
      "epoch: 4 step: 875, loss is 0.021003578\n",
      "epoch: 4 step: 1000, loss is 0.0004177717\n",
      "epoch: 4 step: 1125, loss is 0.03918505\n",
      "epoch: 4 step: 1250, loss is 0.16861053\n",
      "epoch: 4 step: 1375, loss is 0.19486608\n",
      "epoch: 4 step: 1500, loss is 0.024210513\n",
      "epoch: 4 step: 1625, loss is 0.00055875443\n",
      "epoch: 4 step: 1750, loss is 0.021766845\n",
      "epoch: 4 step: 1875, loss is 0.04386355\n",
      "epoch: 4 step: 2000, loss is 0.6126808\n",
      "epoch: 4 step: 2125, loss is 0.00016478299\n",
      "epoch: 4 step: 2250, loss is 0.045052838\n",
      "epoch: 4 step: 2375, loss is 0.009033074\n",
      "epoch: 4 step: 2500, loss is 0.083323196\n",
      "epoch: 4 step: 2625, loss is 0.0013404265\n",
      "epoch: 4 step: 2750, loss is 0.00039283157\n",
      "epoch: 4 step: 2875, loss is 0.023555582\n",
      "epoch: 4 step: 3000, loss is 0.03309316\n",
      "epoch: 4 step: 3125, loss is 0.00038078718\n",
      "epoch: 4 step: 3250, loss is 0.0011988003\n",
      "epoch: 4 step: 3375, loss is 0.1094174\n",
      "epoch: 4 step: 3500, loss is 0.4831129\n",
      "epoch: 4 step: 3625, loss is 4.419859e-05\n",
      "epoch: 4 step: 3750, loss is 0.000106370804\n",
      "epoch: 5 step: 125, loss is 0.031739432\n",
      "epoch: 5 step: 250, loss is 0.0023120884\n",
      "epoch: 5 step: 375, loss is 0.19174251\n",
      "epoch: 5 step: 500, loss is 0.115054466\n",
      "epoch: 5 step: 625, loss is 0.00063831004\n",
      "epoch: 5 step: 750, loss is 0.011749344\n",
      "epoch: 5 step: 875, loss is 0.00043033107\n",
      "epoch: 5 step: 1000, loss is 0.1209258\n",
      "epoch: 5 step: 1125, loss is 0.085516274\n",
      "epoch: 5 step: 1250, loss is 0.011499016\n",
      "epoch: 5 step: 1375, loss is 0.0013453395\n",
      "epoch: 5 step: 1500, loss is 0.1783311\n",
      "epoch: 5 step: 1625, loss is 0.000960443\n",
      "epoch: 5 step: 1750, loss is 0.00059457694\n",
      "epoch: 5 step: 1875, loss is 0.08647974\n",
      "epoch: 5 step: 2000, loss is 0.0013335779\n",
      "epoch: 5 step: 2125, loss is 0.02167116\n",
      "epoch: 5 step: 2250, loss is 0.0005232549\n",
      "epoch: 5 step: 2375, loss is 0.016557036\n",
      "epoch: 5 step: 2500, loss is 0.046004463\n",
      "epoch: 5 step: 2625, loss is 0.00019306582\n",
      "epoch: 5 step: 2750, loss is 0.0066435235\n",
      "epoch: 5 step: 2875, loss is 0.0028824392\n",
      "epoch: 5 step: 3000, loss is 0.24145652\n",
      "epoch: 5 step: 3125, loss is 0.063728176\n",
      "epoch: 5 step: 3250, loss is 0.0018528743\n",
      "epoch: 5 step: 3375, loss is 0.005786577\n",
      "epoch: 5 step: 3500, loss is 0.0063151703\n",
      "epoch: 5 step: 3625, loss is 9.56385e-05\n",
      "epoch: 5 step: 3750, loss is 0.005796452\n",
      "================ Starting Testing ================\n",
      "============ Accuracy:{'Accuracy': 0.9819} ============\n",
      "\n",
      "\n",
      "================= The Situation 4 =================\n",
      "== learning_rate:0.01, epoch_size:5, batch_size:32 ==\n",
      "================ Starting Training ================\n",
      "epoch: 1 step: 125, loss is 2.3141277\n",
      "epoch: 1 step: 250, loss is 2.299097\n",
      "epoch: 1 step: 375, loss is 2.2934532\n",
      "epoch: 1 step: 500, loss is 2.3099198\n",
      "epoch: 1 step: 625, loss is 2.305512\n",
      "epoch: 1 step: 750, loss is 2.3175468\n",
      "epoch: 1 step: 875, loss is 2.3058007\n",
      "epoch: 1 step: 1000, loss is 2.3117945\n",
      "epoch: 1 step: 1125, loss is 2.3218691\n",
      "epoch: 1 step: 1250, loss is 2.3033545\n",
      "epoch: 1 step: 1375, loss is 2.2944286\n",
      "epoch: 1 step: 1500, loss is 0.5123912\n",
      "epoch: 1 step: 1625, loss is 0.14081886\n",
      "epoch: 1 step: 1750, loss is 0.137348\n",
      "epoch: 1 step: 1875, loss is 0.332155\n",
      "epoch: 2 step: 125, loss is 0.029168233\n",
      "epoch: 2 step: 250, loss is 0.2399086\n",
      "epoch: 2 step: 375, loss is 0.18014185\n",
      "epoch: 2 step: 500, loss is 0.2132935\n",
      "epoch: 2 step: 625, loss is 0.0040447153\n",
      "epoch: 2 step: 750, loss is 0.13248429\n",
      "epoch: 2 step: 875, loss is 0.16978796\n",
      "epoch: 2 step: 1000, loss is 0.042082515\n",
      "epoch: 2 step: 1125, loss is 0.043927424\n",
      "epoch: 2 step: 1250, loss is 0.15354133\n",
      "epoch: 2 step: 1375, loss is 0.06834163\n",
      "epoch: 2 step: 1500, loss is 0.045728613\n",
      "epoch: 2 step: 1625, loss is 0.016941896\n",
      "epoch: 2 step: 1750, loss is 0.05370252\n",
      "epoch: 2 step: 1875, loss is 0.011741843\n",
      "epoch: 3 step: 125, loss is 0.00913367\n",
      "epoch: 3 step: 250, loss is 0.15724385\n",
      "epoch: 3 step: 375, loss is 0.067094825\n",
      "epoch: 3 step: 500, loss is 0.061788365\n",
      "epoch: 3 step: 625, loss is 0.0050505553\n",
      "epoch: 3 step: 750, loss is 0.0023197087\n",
      "epoch: 3 step: 875, loss is 0.028508047\n",
      "epoch: 3 step: 1000, loss is 0.039646797\n",
      "epoch: 3 step: 1125, loss is 0.1460342\n",
      "epoch: 3 step: 1250, loss is 0.0054985345\n",
      "epoch: 3 step: 1375, loss is 0.3982027\n",
      "epoch: 3 step: 1500, loss is 0.010748535\n",
      "epoch: 3 step: 1625, loss is 0.015157141\n",
      "epoch: 3 step: 1750, loss is 0.0019374305\n",
      "epoch: 3 step: 1875, loss is 0.058262732\n",
      "epoch: 4 step: 125, loss is 0.29354185\n",
      "epoch: 4 step: 250, loss is 0.019852865\n",
      "epoch: 4 step: 375, loss is 0.044506036\n",
      "epoch: 4 step: 500, loss is 0.038882047\n",
      "epoch: 4 step: 625, loss is 0.010133128\n",
      "epoch: 4 step: 750, loss is 0.0055175046\n",
      "epoch: 4 step: 875, loss is 0.086619824\n",
      "epoch: 4 step: 1000, loss is 0.010645878\n",
      "epoch: 4 step: 1125, loss is 0.025731985\n",
      "epoch: 4 step: 1250, loss is 0.10762554\n",
      "epoch: 4 step: 1375, loss is 0.006392666\n",
      "epoch: 4 step: 1500, loss is 0.0022511086\n",
      "epoch: 4 step: 1625, loss is 0.020254254\n",
      "epoch: 4 step: 1750, loss is 0.007738711\n",
      "epoch: 4 step: 1875, loss is 0.021094736\n",
      "epoch: 5 step: 125, loss is 0.15432167\n",
      "epoch: 5 step: 250, loss is 0.009095187\n",
      "epoch: 5 step: 375, loss is 0.09194406\n",
      "epoch: 5 step: 500, loss is 0.02482254\n",
      "epoch: 5 step: 625, loss is 0.072574414\n",
      "epoch: 5 step: 750, loss is 0.0033603504\n",
      "epoch: 5 step: 875, loss is 0.014673766\n",
      "epoch: 5 step: 1000, loss is 0.10280271\n",
      "epoch: 5 step: 1125, loss is 0.017723871\n",
      "epoch: 5 step: 1250, loss is 0.0246438\n",
      "epoch: 5 step: 1375, loss is 0.0056467657\n",
      "epoch: 5 step: 1500, loss is 0.009505681\n",
      "epoch: 5 step: 1625, loss is 0.030743863\n",
      "epoch: 5 step: 1750, loss is 0.1039285\n",
      "epoch: 5 step: 1875, loss is 0.0149848955\n",
      "================ Starting Testing ================\n",
      "============ Accuracy:{'Accuracy': 0.9836} ============\n",
      "\n",
      "\n",
      "================= The Situation 5 =================\n",
      "== learning_rate:0.05, epoch_size:2, batch_size:16 ==\n",
      "================ Starting Training ================\n",
      "epoch: 1 step: 125, loss is 2.3003526\n",
      "epoch: 1 step: 250, loss is 2.267969\n",
      "epoch: 1 step: 375, loss is 2.295865\n",
      "epoch: 1 step: 500, loss is 1.685572\n",
      "epoch: 1 step: 625, loss is 2.1919081\n",
      "epoch: 1 step: 750, loss is 2.2844672\n",
      "epoch: 1 step: 875, loss is 2.2945147\n",
      "epoch: 1 step: 1000, loss is 2.3321033\n",
      "epoch: 1 step: 1125, loss is 2.3237975\n",
      "epoch: 1 step: 1250, loss is 2.337674\n",
      "epoch: 1 step: 1375, loss is 2.3723369\n",
      "epoch: 1 step: 1500, loss is 2.328748\n",
      "epoch: 1 step: 1625, loss is 2.3221745\n",
      "epoch: 1 step: 1750, loss is 2.3402386\n",
      "epoch: 1 step: 1875, loss is 2.2624133\n",
      "epoch: 1 step: 2000, loss is 2.2845757\n",
      "epoch: 1 step: 2125, loss is 2.2816522\n",
      "epoch: 1 step: 2250, loss is 2.2604764\n",
      "epoch: 1 step: 2375, loss is 2.293416\n",
      "epoch: 1 step: 2500, loss is 2.2869396\n",
      "epoch: 1 step: 2625, loss is 2.2734303\n",
      "epoch: 1 step: 2750, loss is 2.2904344\n",
      "epoch: 1 step: 2875, loss is 2.3431993\n",
      "epoch: 1 step: 3000, loss is 2.3309033\n",
      "epoch: 1 step: 3125, loss is 2.3322077\n",
      "epoch: 1 step: 3250, loss is 2.321935\n",
      "epoch: 1 step: 3375, loss is 2.3091938\n",
      "epoch: 1 step: 3500, loss is 2.3223789\n",
      "epoch: 1 step: 3625, loss is 2.3160322\n",
      "epoch: 1 step: 3750, loss is 2.30167\n",
      "epoch: 2 step: 125, loss is 2.3138895\n",
      "epoch: 2 step: 250, loss is 2.3254342\n",
      "epoch: 2 step: 375, loss is 2.3004107\n",
      "epoch: 2 step: 500, loss is 2.27686\n",
      "epoch: 2 step: 625, loss is 2.2919784\n",
      "epoch: 2 step: 750, loss is 2.3029525\n",
      "epoch: 2 step: 875, loss is 2.2823474\n",
      "epoch: 2 step: 1000, loss is 2.3258169\n",
      "epoch: 2 step: 1125, loss is 2.2833183\n",
      "epoch: 2 step: 1250, loss is 2.3104324\n",
      "epoch: 2 step: 1375, loss is 2.271712\n",
      "epoch: 2 step: 1500, loss is 2.2836237\n",
      "epoch: 2 step: 1625, loss is 2.2735772\n",
      "epoch: 2 step: 1750, loss is 2.3267956\n",
      "epoch: 2 step: 1875, loss is 2.2562587\n",
      "epoch: 2 step: 2000, loss is 2.3003142\n",
      "epoch: 2 step: 2125, loss is 2.3798678\n",
      "epoch: 2 step: 2250, loss is 2.2594686\n",
      "epoch: 2 step: 2375, loss is 2.3176265\n",
      "epoch: 2 step: 2500, loss is 2.318133\n",
      "epoch: 2 step: 2625, loss is 2.2887654\n",
      "epoch: 2 step: 2750, loss is 2.3572085\n",
      "epoch: 2 step: 2875, loss is 2.2714615\n",
      "epoch: 2 step: 3000, loss is 2.3420625\n",
      "epoch: 2 step: 3125, loss is 2.3499656\n",
      "epoch: 2 step: 3250, loss is 2.2610397\n",
      "epoch: 2 step: 3375, loss is 2.3557587\n",
      "epoch: 2 step: 3500, loss is 2.361361\n",
      "epoch: 2 step: 3625, loss is 2.3162065\n",
      "epoch: 2 step: 3750, loss is 2.338607\n",
      "================ Starting Testing ================\n",
      "============ Accuracy:{'Accuracy': 0.1028} ============\n",
      "\n",
      "\n",
      "================= The Situation 6 =================\n",
      "== learning_rate:0.05, epoch_size:2, batch_size:32 ==\n",
      "================ Starting Training ================\n",
      "epoch: 1 step: 125, loss is 2.3143704\n",
      "epoch: 1 step: 250, loss is 2.3132532\n",
      "epoch: 1 step: 375, loss is 2.2908692\n",
      "epoch: 1 step: 500, loss is 0.83405465\n",
      "epoch: 1 step: 625, loss is 0.7648193\n",
      "epoch: 1 step: 750, loss is 0.77581483\n",
      "epoch: 1 step: 875, loss is 0.63934445\n",
      "epoch: 1 step: 1000, loss is 1.0165555\n",
      "epoch: 1 step: 1125, loss is 0.20264903\n",
      "epoch: 1 step: 1250, loss is 0.4031322\n",
      "epoch: 1 step: 1375, loss is 0.22567266\n",
      "epoch: 1 step: 1500, loss is 0.5009518\n",
      "epoch: 1 step: 1625, loss is 0.30227607\n",
      "epoch: 1 step: 1750, loss is 0.4046876\n",
      "epoch: 1 step: 1875, loss is 0.13460635\n",
      "epoch: 2 step: 125, loss is 0.47336528\n",
      "epoch: 2 step: 250, loss is 2.1019025\n",
      "epoch: 2 step: 375, loss is 2.3308382\n",
      "epoch: 2 step: 500, loss is 2.3199062\n",
      "epoch: 2 step: 625, loss is 2.281591\n",
      "epoch: 2 step: 750, loss is 2.3075724\n",
      "epoch: 2 step: 875, loss is 2.3032534\n",
      "epoch: 2 step: 1000, loss is 2.2849927\n",
      "epoch: 2 step: 1125, loss is 2.3171089\n",
      "epoch: 2 step: 1250, loss is 2.2753448\n",
      "epoch: 2 step: 1375, loss is 2.3221805\n",
      "epoch: 2 step: 1500, loss is 2.3242655\n",
      "epoch: 2 step: 1625, loss is 2.3066783\n",
      "epoch: 2 step: 1750, loss is 2.3138652\n",
      "epoch: 2 step: 1875, loss is 2.3345938\n",
      "================ Starting Testing ================\n",
      "============ Accuracy:{'Accuracy': 0.0974} ============\n",
      "\n",
      "\n",
      "================= The Situation 7 =================\n",
      "== learning_rate:0.05, epoch_size:5, batch_size:16 ==\n",
      "================ Starting Training ================\n",
      "epoch: 1 step: 125, loss is 2.295558\n",
      "epoch: 1 step: 250, loss is 2.38386\n",
      "epoch: 1 step: 375, loss is 2.33319\n",
      "epoch: 1 step: 500, loss is 1.438849\n",
      "epoch: 1 step: 625, loss is 1.4208732\n",
      "epoch: 1 step: 750, loss is 1.1754154\n",
      "epoch: 1 step: 875, loss is 0.7132174\n",
      "epoch: 1 step: 1000, loss is 1.0798488\n",
      "epoch: 1 step: 1125, loss is 2.4280946\n",
      "epoch: 1 step: 1250, loss is 2.3117175\n",
      "epoch: 1 step: 1375, loss is 2.3256335\n",
      "epoch: 1 step: 1500, loss is 2.2663872\n",
      "epoch: 1 step: 1625, loss is 2.3064473\n",
      "epoch: 1 step: 1750, loss is 2.2814608\n",
      "epoch: 1 step: 1875, loss is 2.312989\n",
      "epoch: 1 step: 2000, loss is 2.3795862\n",
      "epoch: 1 step: 2125, loss is 2.3190327\n",
      "epoch: 1 step: 2250, loss is 2.3067005\n",
      "epoch: 1 step: 2375, loss is 2.3292706\n",
      "epoch: 1 step: 2500, loss is 2.3708742\n",
      "epoch: 1 step: 2625, loss is 2.3234503\n",
      "epoch: 1 step: 2750, loss is 2.286217\n",
      "epoch: 1 step: 2875, loss is 2.3187988\n",
      "epoch: 1 step: 3000, loss is 2.2813363\n",
      "epoch: 1 step: 3125, loss is 2.3160567\n",
      "epoch: 1 step: 3250, loss is 2.3587837\n",
      "epoch: 1 step: 3375, loss is 2.3024836\n",
      "epoch: 1 step: 3500, loss is 2.3151147\n",
      "epoch: 1 step: 3625, loss is 2.3327696\n",
      "epoch: 1 step: 3750, loss is 2.3304598\n",
      "epoch: 2 step: 125, loss is 2.3098416\n",
      "epoch: 2 step: 250, loss is 2.2828104\n",
      "epoch: 2 step: 375, loss is 2.312215\n",
      "epoch: 2 step: 500, loss is 2.2553732\n",
      "epoch: 2 step: 625, loss is 2.3105173\n",
      "epoch: 2 step: 750, loss is 2.339398\n",
      "epoch: 2 step: 875, loss is 2.2900229\n",
      "epoch: 2 step: 1000, loss is 2.292558\n",
      "epoch: 2 step: 1125, loss is 2.3165226\n",
      "epoch: 2 step: 1250, loss is 2.2258747\n",
      "epoch: 2 step: 1375, loss is 2.367465\n",
      "epoch: 2 step: 1500, loss is 2.3556745\n",
      "epoch: 2 step: 1625, loss is 2.3215854\n",
      "epoch: 2 step: 1750, loss is 2.2786517\n",
      "epoch: 2 step: 1875, loss is 2.2869582\n",
      "epoch: 2 step: 2000, loss is 2.2685075\n",
      "epoch: 2 step: 2125, loss is 2.334608\n",
      "epoch: 2 step: 2250, loss is 2.294904\n",
      "epoch: 2 step: 2375, loss is 2.3460655\n",
      "epoch: 2 step: 2500, loss is 2.2993896\n",
      "epoch: 2 step: 2625, loss is 2.3113718\n",
      "epoch: 2 step: 2750, loss is 2.2953403\n",
      "epoch: 2 step: 2875, loss is 2.3484921\n",
      "epoch: 2 step: 3000, loss is 2.3252711\n",
      "epoch: 2 step: 3125, loss is 2.3128834\n",
      "epoch: 2 step: 3250, loss is 2.3085055\n",
      "epoch: 2 step: 3375, loss is 2.2696073\n",
      "epoch: 2 step: 3500, loss is 2.2517495\n",
      "epoch: 2 step: 3625, loss is 2.332074\n",
      "epoch: 2 step: 3750, loss is 2.288159\n",
      "epoch: 3 step: 125, loss is 2.278061\n",
      "epoch: 3 step: 250, loss is 2.2659266\n",
      "epoch: 3 step: 375, loss is 2.3351808\n",
      "epoch: 3 step: 500, loss is 2.3183289\n",
      "epoch: 3 step: 625, loss is 2.3381956\n",
      "epoch: 3 step: 750, loss is 2.3140006\n",
      "epoch: 3 step: 875, loss is 2.4133265\n",
      "epoch: 3 step: 1000, loss is 2.2901528\n",
      "epoch: 3 step: 1125, loss is 2.2979116\n",
      "epoch: 3 step: 1250, loss is 2.310516\n",
      "epoch: 3 step: 1375, loss is 2.3049035\n",
      "epoch: 3 step: 1500, loss is 2.2720628\n",
      "epoch: 3 step: 1625, loss is 2.3208938\n",
      "epoch: 3 step: 1750, loss is 2.2830434\n",
      "epoch: 3 step: 1875, loss is 2.30417\n",
      "epoch: 3 step: 2000, loss is 2.2737663\n",
      "epoch: 3 step: 2125, loss is 2.2822623\n",
      "epoch: 3 step: 2250, loss is 2.3083425\n",
      "epoch: 3 step: 2375, loss is 2.31658\n",
      "epoch: 3 step: 2500, loss is 2.2714338\n",
      "epoch: 3 step: 2625, loss is 2.3353026\n",
      "epoch: 3 step: 2750, loss is 2.2701824\n",
      "epoch: 3 step: 2875, loss is 2.3068202\n",
      "epoch: 3 step: 3000, loss is 2.3071563\n",
      "epoch: 3 step: 3125, loss is 2.3619137\n",
      "epoch: 3 step: 3250, loss is 2.2972512\n",
      "epoch: 3 step: 3375, loss is 2.307385\n",
      "epoch: 3 step: 3500, loss is 2.25137\n",
      "epoch: 3 step: 3625, loss is 2.3223963\n",
      "epoch: 3 step: 3750, loss is 2.332354\n",
      "epoch: 4 step: 125, loss is 2.3525374\n",
      "epoch: 4 step: 250, loss is 2.2607126\n",
      "epoch: 4 step: 375, loss is 2.3337207\n",
      "epoch: 4 step: 500, loss is 2.2943015\n",
      "epoch: 4 step: 625, loss is 2.322392\n",
      "epoch: 4 step: 750, loss is 2.3488765\n",
      "epoch: 4 step: 875, loss is 2.3072693\n",
      "epoch: 4 step: 1000, loss is 2.2509954\n",
      "epoch: 4 step: 1125, loss is 2.267654\n",
      "epoch: 4 step: 1250, loss is 2.3125684\n",
      "epoch: 4 step: 1375, loss is 2.2700844\n",
      "epoch: 4 step: 1500, loss is 2.3357136\n",
      "epoch: 4 step: 1625, loss is 2.3254232\n",
      "epoch: 4 step: 1750, loss is 2.3321593\n",
      "epoch: 4 step: 1875, loss is 2.3218544\n",
      "epoch: 4 step: 2000, loss is 2.2537644\n",
      "epoch: 4 step: 2125, loss is 2.350479\n",
      "epoch: 4 step: 2250, loss is 2.2925644\n",
      "epoch: 4 step: 2375, loss is 2.2582018\n",
      "epoch: 4 step: 2500, loss is 2.3031194\n",
      "epoch: 4 step: 2625, loss is 2.2963529\n",
      "epoch: 4 step: 2750, loss is 2.3857465\n",
      "epoch: 4 step: 2875, loss is 2.3052728\n",
      "epoch: 4 step: 3000, loss is 2.3019109\n",
      "epoch: 4 step: 3125, loss is 2.345898\n",
      "epoch: 4 step: 3250, loss is 2.3057108\n",
      "epoch: 4 step: 3375, loss is 2.3092058\n",
      "epoch: 4 step: 3500, loss is 2.263299\n",
      "epoch: 4 step: 3625, loss is 2.2924554\n",
      "epoch: 4 step: 3750, loss is 2.3009706\n",
      "epoch: 5 step: 125, loss is 2.363699\n",
      "epoch: 5 step: 250, loss is 2.340525\n",
      "epoch: 5 step: 375, loss is 2.3687658\n",
      "epoch: 5 step: 500, loss is 2.3060727\n",
      "epoch: 5 step: 625, loss is 2.3061423\n",
      "epoch: 5 step: 750, loss is 2.3200512\n",
      "epoch: 5 step: 875, loss is 2.296088\n",
      "epoch: 5 step: 1000, loss is 2.3382936\n",
      "epoch: 5 step: 1125, loss is 2.3020995\n",
      "epoch: 5 step: 1250, loss is 2.3069475\n",
      "epoch: 5 step: 1375, loss is 2.2993364\n",
      "epoch: 5 step: 1500, loss is 2.2792392\n",
      "epoch: 5 step: 1625, loss is 2.3670845\n",
      "epoch: 5 step: 1750, loss is 2.237617\n",
      "epoch: 5 step: 1875, loss is 2.3088713\n",
      "epoch: 5 step: 2000, loss is 2.305958\n",
      "epoch: 5 step: 2125, loss is 2.2708802\n",
      "epoch: 5 step: 2250, loss is 2.3196752\n",
      "epoch: 5 step: 2375, loss is 2.2655036\n",
      "epoch: 5 step: 2500, loss is 2.2821996\n",
      "epoch: 5 step: 2625, loss is 2.3173587\n",
      "epoch: 5 step: 2750, loss is 2.31017\n",
      "epoch: 5 step: 2875, loss is 2.2813506\n",
      "epoch: 5 step: 3000, loss is 2.3327284\n",
      "epoch: 5 step: 3125, loss is 2.3425179\n",
      "epoch: 5 step: 3250, loss is 2.3141623\n",
      "epoch: 5 step: 3375, loss is 2.345585\n",
      "epoch: 5 step: 3500, loss is 2.2416115\n",
      "epoch: 5 step: 3625, loss is 2.2807086\n",
      "epoch: 5 step: 3750, loss is 2.2743173\n",
      "================ Starting Testing ================\n",
      "============ Accuracy:{'Accuracy': 0.1028} ============\n",
      "\n",
      "\n",
      "================= The Situation 8 =================\n",
      "== learning_rate:0.05, epoch_size:5, batch_size:32 ==\n",
      "================ Starting Training ================\n",
      "epoch: 1 step: 125, loss is 2.3025277\n",
      "epoch: 1 step: 250, loss is 2.3404844\n",
      "epoch: 1 step: 375, loss is 2.0081742\n",
      "epoch: 1 step: 500, loss is 0.7698262\n",
      "epoch: 1 step: 625, loss is 0.84001845\n",
      "epoch: 1 step: 750, loss is 0.858749\n",
      "epoch: 1 step: 875, loss is 1.1702987\n",
      "epoch: 1 step: 1000, loss is 0.32887033\n",
      "epoch: 1 step: 1125, loss is 1.203075\n",
      "epoch: 1 step: 1250, loss is 0.3252069\n",
      "epoch: 1 step: 1375, loss is 0.6137168\n",
      "epoch: 1 step: 1500, loss is 0.20187378\n",
      "epoch: 1 step: 1625, loss is 0.31883952\n",
      "epoch: 1 step: 1750, loss is 0.51499724\n",
      "epoch: 1 step: 1875, loss is 0.35917458\n",
      "epoch: 2 step: 125, loss is 1.1622694\n",
      "epoch: 2 step: 250, loss is 0.5028538\n",
      "epoch: 2 step: 375, loss is 0.6323484\n",
      "epoch: 2 step: 500, loss is 0.31263918\n",
      "epoch: 2 step: 625, loss is 0.81616145\n",
      "epoch: 2 step: 750, loss is 0.24894318\n",
      "epoch: 2 step: 875, loss is 0.87633514\n",
      "epoch: 2 step: 1000, loss is 0.51267153\n",
      "epoch: 2 step: 1125, loss is 2.2888105\n",
      "epoch: 2 step: 1250, loss is 2.3071456\n",
      "epoch: 2 step: 1375, loss is 2.2765212\n",
      "epoch: 2 step: 1500, loss is 2.3278954\n",
      "epoch: 2 step: 1625, loss is 2.3195877\n",
      "epoch: 2 step: 1750, loss is 2.3329341\n",
      "epoch: 2 step: 1875, loss is 2.3095658\n",
      "epoch: 3 step: 125, loss is 2.304574\n",
      "epoch: 3 step: 250, loss is 2.3350236\n",
      "epoch: 3 step: 375, loss is 2.3366516\n",
      "epoch: 3 step: 500, loss is 2.3036337\n",
      "epoch: 3 step: 625, loss is 2.3146763\n",
      "epoch: 3 step: 750, loss is 2.3325539\n",
      "epoch: 3 step: 875, loss is 2.3182425\n",
      "epoch: 3 step: 1000, loss is 2.2901216\n",
      "epoch: 3 step: 1125, loss is 2.271974\n",
      "epoch: 3 step: 1250, loss is 2.3013616\n",
      "epoch: 3 step: 1375, loss is 2.3093197\n",
      "epoch: 3 step: 1500, loss is 2.288068\n",
      "epoch: 3 step: 1625, loss is 2.3186734\n",
      "epoch: 3 step: 1750, loss is 2.3295755\n",
      "epoch: 3 step: 1875, loss is 2.2763002\n",
      "epoch: 4 step: 125, loss is 2.3041005\n",
      "epoch: 4 step: 250, loss is 2.329235\n",
      "epoch: 4 step: 375, loss is 2.2897174\n",
      "epoch: 4 step: 500, loss is 2.2791119\n",
      "epoch: 4 step: 625, loss is 2.3264925\n",
      "epoch: 4 step: 750, loss is 2.3077648\n",
      "epoch: 4 step: 875, loss is 2.301721\n",
      "epoch: 4 step: 1000, loss is 2.2765012\n",
      "epoch: 4 step: 1125, loss is 2.2757645\n",
      "epoch: 4 step: 1250, loss is 2.2934148\n",
      "epoch: 4 step: 1375, loss is 2.3058321\n",
      "epoch: 4 step: 1500, loss is 2.3203738\n",
      "epoch: 4 step: 1625, loss is 2.319434\n",
      "epoch: 4 step: 1750, loss is 2.2785978\n",
      "epoch: 4 step: 1875, loss is 2.3218942\n",
      "epoch: 5 step: 125, loss is 2.300228\n",
      "epoch: 5 step: 250, loss is 2.3302739\n",
      "epoch: 5 step: 375, loss is 2.302813\n",
      "epoch: 5 step: 500, loss is 2.3095956\n",
      "epoch: 5 step: 625, loss is 2.306519\n",
      "epoch: 5 step: 750, loss is 2.2943096\n",
      "epoch: 5 step: 875, loss is 2.316216\n",
      "epoch: 5 step: 1000, loss is 2.3018808\n",
      "epoch: 5 step: 1125, loss is 2.2752795\n",
      "epoch: 5 step: 1250, loss is 2.300592\n",
      "epoch: 5 step: 1375, loss is 2.313322\n",
      "epoch: 5 step: 1500, loss is 2.2971704\n",
      "epoch: 5 step: 1625, loss is 2.328839\n",
      "epoch: 5 step: 1750, loss is 2.2877312\n",
      "epoch: 5 step: 1875, loss is 2.3039935\n",
      "================ Starting Testing ================\n",
      "============ Accuracy:{'Accuracy': 0.1135} ============\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "from mindspore.train.callback import SummaryCollector\n",
    "from mindspore.nn.metrics import Accuracy\n",
    "from mindspore import context, Model\n",
    "from mindspore.nn.loss import SoftmaxCrossEntropyWithLogits\n",
    "from mindspore import load_checkpoint, load_param_into_net\n",
    "from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor\n",
    "import os\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    context.set_context(mode=context.GRAPH_MODE, device_target=\"GPU\")\n",
    "    if os.name == \"nt\":\n",
    "        os.system(\"del /f /s /q *.ckpt *.meta\")\n",
    "    else:\n",
    "        os.system(\"rm -f *.ckpt *.meta *.pb\")\n",
    "\n",
    "    mnist_path = \"./datasets/MNIST_Data/\"\n",
    "    model_path = \"./models/ckpt/lineage_and_scalars_comparison/\"\n",
    "    repeat_size = 1\n",
    "    config_ck = CheckpointConfig(save_checkpoint_steps=1875, keep_checkpoint_max=10)\n",
    "    ckpoint_cb = ModelCheckpoint(prefix=\"checkpoint_lenet\", directory=model_path, config=config_ck)\n",
    "    # define the optimizer\n",
    "    \n",
    "    lrs = [0.01, 0.05]\n",
    "    epoch_sizes = [2, 5]\n",
    "    batch_sizes = [16, 32]\n",
    "    situations = [(i, j, k) for i in lrs for j in epoch_sizes for k in batch_sizes]\n",
    "    count = 1\n",
    "    \n",
    "    for lr, epoch_size, batch_size in situations:\n",
    "        momentum = 0.9\n",
    "        network = LeNet5()\n",
    "        net_loss = SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')\n",
    "        net_opt = nn.Momentum(network.trainable_params(), lr, momentum)\n",
    "        model = Model(network, net_loss, net_opt, metrics={\"Accuracy\": Accuracy()})\n",
    "        summary_collector = SummaryCollector(summary_dir=\"./summary_base/LeNet-MNIST_Data,lr:{},epoch:{},batch_size:{}\"\n",
    "                                             .format(lr, epoch_size, batch_size), collect_freq=1)\n",
    "        # Start to train\n",
    "        print(\"================= The Situation {} =================\".format(count))\n",
    "        print(\"== learning_rate:{}, epoch_size:{}, batch_size:{} ==\".format(lr, epoch_size, batch_size))\n",
    "        print(\"================ Starting Training ================\")\n",
    "        ds_train = create_dataset(os.path.join(mnist_path, \"train\"), batch_size, repeat_size)\n",
    "        model.train(epoch_size, ds_train, callbacks=[ckpoint_cb, summary_collector, LossMonitor(125)], dataset_sink_mode=True)\n",
    "\n",
    "        print(\"================ Starting Testing ================\")\n",
    "        # load testing dataset\n",
    "        ds_eval = create_dataset(os.path.join(mnist_path, \"test\"))\n",
    "        acc = model.eval(ds_eval, callbacks=[summary_collector], dataset_sink_mode=True)\n",
    "        print(\"============ Accuracy:{} ============\\n\\n\".format(acc))\n",
    "        count += 1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Starting and Stopping the MindInsight Service"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This section shows how to start and stop the MindInsight service. For the full command set, see the MindSpore official website: <https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/visualization_tutorials.html>.\n",
    "\n",
    "Start the MindInsight service:\n",
    "```\n",
    "mindinsight start --summary-base-dir=./summary_base --port=8080\n",
    "```\n",
    "\n",
    "- `--summary-base-dir`: the root directory from which MindInsight reads summary data; here `./summary_base` is the parent of the per-run directories passed to `SummaryCollector` through its `summary_dir` parameter.\n",
    "- `--port`: the port the MindInsight service listens on; any value in the range 1-65535 is accepted."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Stop the MindInsight service:\n",
    "```\n",
    "mindinsight stop --port=8080\n",
    "```\n",
    "- `mindinsight stop`: the command that shuts down the MindInsight service.\n",
    "- `--port=8080`: the service was started on port `8080`, so the same port must be given here."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Lineage Analysis\n",
    "\n",
    "### Opening the lineage analysis page\n",
    "\n",
    "Enter `http://127.0.0.1:8080` in a browser to open the MindInsight home page, shown below:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image](https://gitee.com/mindspore/docs/raw/master/tutorials/notebook/mindinsight/images/mindinsight_homepage_for_lineage.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Model lineage page"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Entries 1-8 in the training list above are the training records saved for the eight combinations of training parameters. Click Lineage Analysis in the upper right corner to enter it. Lineage analysis consists of model lineage and data lineage; the model lineage page is shown first:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image](https://gitee.com/mindspore/docs/raw/master/tutorials/notebook/mindinsight/images/model_lineage_page.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Optimization objective area\n",
    "\n",
    "Either the model accuracy (Accuracy) or the model loss (loss) can be selected as the optimization objective."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image](https://gitee.com/mindspore/docs/raw/master/tutorials/notebook/mindinsight/images/optimization_target_page_of_model_lineage.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The chart shows at a glance how important each of the three parameters `learning_rate`, `epoch`, and `batch_size` is to the accuracy and loss of the trained models (an importance value close to 1 indicates a strong influence on the selected objective, while a value close to 0 indicates a weak influence), helping users decide which parameters to adjust during training."
   ]
  },
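  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "MindInsight computes these importance values itself; as a rough illustration of the underlying idea (a hypothetical sketch, not MindInsight's actual algorithm), one simple proxy is the absolute Pearson correlation between a hyperparameter and the final accuracy across runs:\n",
    "\n",
    "```python\n",
    "def pearson(xs, ys):\n",
    "    \"\"\"Pearson correlation coefficient of two equal-length sequences.\"\"\"\n",
    "    n = len(xs)\n",
    "    mx, my = sum(xs) / n, sum(ys) / n\n",
    "    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))\n",
    "    sx = sum((x - mx) ** 2 for x in xs) ** 0.5\n",
    "    sy = sum((y - my) ** 2 for y in ys) ** 0.5\n",
    "    return cov / (sx * sy)\n",
    "\n",
    "# Hypothetical (lr, accuracy) pairs for illustration only; these are not\n",
    "# the values recorded by this notebook.\n",
    "lrs = [0.01, 0.01, 0.05, 0.05]\n",
    "accs = [0.95, 0.96, 0.10, 0.11]\n",
    "print(round(abs(pearson(lrs, accs)), 3))  # a value near 1: lr strongly affects accuracy\n",
    "```"
   ]
  },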
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Detailed training parameters"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This page lists the key parameters of every training run, including the network, optimizer, number of training samples, number of test samples, learning rate, number of epochs, `batch_size`, number of `device`s, model size, and loss function. Users can visualize a single run or compare several runs side by side, which makes analysis more efficient."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image](https://gitee.com/mindspore/docs/raw/master/tutorials/notebook/mindinsight/images/detailed_information_page_of_model_lineage.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Data lineage page"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Data lineage shows the data augmentation pipeline applied before model training, with the operations listed in the order they were applied."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image](https://gitee.com/mindspore/docs/raw/master/tutorials/notebook/mindinsight/images/data_lineage_page.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this example the pipeline consists of the operations `MnistDataset`, `Map_TypeCast`, `Map_Resize`, `Map_Rescale`, `Map_HWC2CHW`, `Shuffle`, and `Batch`:\n",
    "\n",
    "- Dataset loading (`MnistDataset`)\n",
    "- Label type conversion (`Map_TypeCast`)\n",
    "- Image resizing (`Map_Resize`)\n",
    "- Pixel value rescaling (`Map_Rescale`)\n",
    "- Tensor layout conversion (`Map_HWC2CHW`)\n",
    "- Shuffling (`Shuffle`)\n",
    "- Batching (`Batch`)"
   ]
  },
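  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a concrete illustration of two of these operations, `Map_Rescale` maps each pixel `v` to `v * rescale + shift`, and `Map_HWC2CHW` reorders the image axes from `[H, W, C]` to `[C, H, W]`. The sketch below uses `rescale = 1/255`, the value commonly used in MindSpore's LeNet-on-MNIST examples; it is an assumption for illustration, not read from the lineage page:\n",
    "\n",
    "```python\n",
    "RESCALE = 1.0 / 255.0  # assumed Map_Rescale factor: [0, 255] -> [0, 1]\n",
    "SHIFT = 0.0\n",
    "\n",
    "def rescale_pixel(v):\n",
    "    \"\"\"Apply the Rescale transform to one pixel value.\"\"\"\n",
    "    return v * RESCALE + SHIFT\n",
    "\n",
    "def hwc_to_chw(image):\n",
    "    \"\"\"Reorder a nested [H][W][C] list of pixels into [C][H][W].\"\"\"\n",
    "    h, w, c = len(image), len(image[0]), len(image[0][0])\n",
    "    return [[[image[y][x][ch] for x in range(w)] for y in range(h)]\n",
    "            for ch in range(c)]\n",
    "\n",
    "img = [[[255, 0, 128], [0, 255, 64]]]  # a 1x2 image with 3 channels\n",
    "print(round(rescale_pixel(255), 4))    # 1.0\n",
    "chw = hwc_to_chw(img)\n",
    "print(len(chw), len(chw[0]), len(chw[0][0]))  # 3 1 2\n",
    "```"
   ]
  },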
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Comparison Analysis"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Opening the comparison analysis page\n",
    "\n",
    "Enter the comparison analysis page from the MindInsight home page."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image](https://gitee.com/mindspore/docs/raw/master/tutorials/notebook/mindinsight/images/mindinsight_homepage_for_scalars_comparison.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The comparison analysis page lets you compare scalar data across training runs. In this example `SummaryCollector` automatically recorded the loss values; for how to record other scalar data, see the [official documentation](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScalarSummary.html)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image](https://gitee.com/mindspore/docs/raw/master/tutorials/notebook/mindinsight/images/scalars_comparison_page.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The comparison dashboard offers the following choices:\n",
    "\n",
    "- Training selection: this example provides records for 8 combinations of training parameters; here two runs, with learning rates (lr) of 0.01 and 0.05 respectively, were selected for comparison.\n",
    "- Tag selection: this example saved a single scalar tag, the loss value.\n",
    "\n",
    "> The display of the comparison curves can be improved by adjusting the smoothness."
   ]
  },
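  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The smoothness control applies a running average to the raw curves. Below is a minimal sketch of the exponential-moving-average style of smoothing that dashboards of this kind typically use (an assumption about the formula, not MindInsight's documented implementation):\n",
    "\n",
    "```python\n",
    "def smooth(values, weight=0.6):\n",
    "    \"\"\"Exponential moving average: weight=0 returns the raw curve;\n",
    "    values closer to 1 flatten the curve more.\"\"\"\n",
    "    smoothed, last = [], values[0]\n",
    "    for v in values:\n",
    "        last = last * weight + (1 - weight) * v\n",
    "        smoothed.append(last)\n",
    "    return smoothed\n",
    "\n",
    "noisy_loss = [2.3, 0.5, 2.3, 0.3, 2.3]\n",
    "print([round(s, 3) for s in smooth(noisy_loss)])  # [2.3, 1.58, 1.868, 1.241, 1.664]\n",
    "```"
   ]
  },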
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Summary"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This notebook used MindSpore's data collection API `SummaryCollector` to record training information under different sets of training parameters, and started the MindInsight service to visualize the lineage and scalar data. That concludes this walkthrough."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "MindSpore-1.0.1",
   "language": "python",
   "name": "mindspore-1.0.1"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
