{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "322849d6",
   "metadata": {},
   "source": [
    "# 应用感知量化训练\n",
    "\n",
    "## 背景\n",
    "\n",
    "越来越多的应用选择在移动设备或者边缘设备上使用深度学习技术。以手机为例，为了提供人性化和智能的服务，现在操作系统和应用都开始集成深度学习功能。而使用该功能，涉及训练或者推理，自然包含大量的模型及权重文件。经典的AlexNet，原始权重文件已经超过了200MB，而最近出现的新模型正往结构更复杂、参数更多的方向发展。由于移动设备、边缘设备的硬件资源有限，需要对模型进行精简，而量化（Quantization）技术就是应对该类问题衍生出的技术之一。\n",
    "\n",
    "## 概念\n",
    "\n",
    "\n",
    "\n",
    "量化即以较低的推理精度损失将连续取值（或者大量可能的离散取值）的浮点型模型权重或流经模型的张量数据定点近似（通常为INT8）为有限多个（或较少的）离散值的过程，它是以更少位数的数据类型用于近似表示32位有限范围浮点型数据的过程，而模型的输入输出依然是浮点型。这样的好处是可以减小模型尺寸大小，减少模型内存占用，加快模型推理速度，降低功耗等。\n",
    "\n",
    "如上所述，与FP32类型相比，FP16、INT8、INT4等低精度数据表达类型所占用空间更小。使用低精度数据表达类型替换高精度数据表达类型，可以大幅降低存储空间和传输时间。而低比特的计算性能也更高，INT8相对比FP32的加速比可达到3倍甚至更高，对于相同的计算，功耗上也有明显优势。\n",
    "\n",
    "当前业界量化方案主要分为两种：感知量化训练（Quantization Aware Training）和训练后量化（Post-training Quantization）。感知量化训练需要训练数据，在模型准确率上通常表现更好，适用于对模型压缩率和模型准确率要求较高的场景；训练后量化简单易用，只需少量校准数据，适用于追求高易用性和缺乏训练资源的场景。\n",
    "\n",
    "伪量化节点是指感知量化训练中插入的节点，用以寻找网络数据分布，并反馈损失精度，具体作用如下：\n",
    "\n",
    "- 找到网络数据的分布，即找到待量化参数的最大值和最小值；\n",
    "\n",
    "- 模拟量化为低比特时的精度损失，把该损失作用到网络模型中，传递给损失函数，让优化器在训练过程中对该损失值进行优化。"
   ]
  },
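  {
   "cell_type": "markdown",
   "id": "3e7d1a20",
   "metadata": {},
   "source": [
    "The round trip performed by a fake quantization node can be sketched in plain Python. This is a minimal illustration of asymmetric INT8 quantization, not MindSpore's implementation: from the observed minimum and maximum, compute a scale and zero point, map each float to an integer in `[0, 255]`, then map it back. The gap between the original value and the round-tripped value is the simulated precision loss.\n",
    "\n",
    "```python\n",
    "def fake_quant(values, num_bits=8):\n",
    "    \"\"\"Quantize then dequantize floats, simulating INT8 precision loss.\"\"\"\n",
    "    qmin, qmax = 0, 2 ** num_bits - 1\n",
    "    lo, hi = min(values), max(values)\n",
    "    scale = (hi - lo) / (qmax - qmin)\n",
    "    zero_point = round(qmin - lo / scale)\n",
    "    quantized = [min(qmax, max(qmin, round(v / scale) + zero_point)) for v in values]\n",
    "    return [(q - zero_point) * scale for q in quantized]\n",
    "\n",
    "x = [-1.0, -0.3, 0.0, 0.42, 1.5]\n",
    "print(fake_quant(x))  # close to x, but snapped to the INT8 grid\n",
    "```"
   ]
  },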
  {
   "cell_type": "markdown",
   "id": "c9fbf550",
   "metadata": {},
   "source": [
    "本文将介绍在MindSpore中如何应用感知量化训练来对模型进行量化，主要流程如下：\n",
    "\n",
    "1. 数据集和预训练模型的准备。\n",
    "\n",
    "2. 构建数据预处理函数。\n",
    "\n",
    "2. 量化网络模型的构建。\n",
    "\n",
    "3. 量化网络模型的微调训练。\n",
    "\n",
    "4. 量化网络模型的保存及导出。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "92c98009",
   "metadata": {},
   "source": [
    "## 准备工作"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fc1db470",
   "metadata": {},
   "source": [
    "### 数据集准备\n",
    "\n",
    "下载MNIST数据集并将其放置在指定位置。为后续微调所需要用到的数据集做准备，在Jupyter Notebook中执行如下命令。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "c240466b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "./datasets/MNIST_Data\n",
      "├── test\n",
      "│   ├── t10k-images-idx3-ubyte\n",
      "│   └── t10k-labels-idx1-ubyte\n",
      "└── train\n",
      "    ├── train-images-idx3-ubyte\n",
      "    └── train-labels-idx1-ubyte\n",
      "\n",
      "2 directories, 4 files\n"
     ]
    }
   ],
   "source": [
    "!mkdir -p ./datasets/MNIST_Data/train ./datasets/MNIST_Data/test\n",
    "!wget -NP ./datasets/MNIST_Data/train https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-labels-idx1-ubyte\n",
    "!wget -NP ./datasets/MNIST_Data/train https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-images-idx3-ubyte\n",
    "!wget -NP ./datasets/MNIST_Data/test https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-labels-idx1-ubyte\n",
    "!wget -NP ./datasets/MNIST_Data/test https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-images-idx3-ubyte\n",
    "!tree ./datasets/MNIST_Data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "33171c49",
   "metadata": {},
   "source": [
    "### 预训练模型准备\n",
    "\n",
    "下载预训练好的模型LeNet5网络的模型文件，为后续预训练模型转化为量化模型做准备"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "8187bd94",
   "metadata": {},
   "outputs": [],
   "source": [
    "!wget https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/models/checkpoint_lenet.ckpt"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4e23e2c2",
   "metadata": {},
   "source": [
    "## 构建数据预处理函数\n",
    "\n",
    "数据预处理函数可以参考[快速入门篇章](https://www.mindspore.cn/tutorial/training/zh-CN/master/quick_start/quick_start.html)将微调模型所用的数据集从单张`28*28`大小的图片，处理成`32*32`大小的图片。\n",
    "\n",
    "将数据集增强为符合网络模型LeNet5训练要求的数据数据--即将6万张大小为`28*28`的数据集，增强为1875个batch，每个batch为32张图片，每张图片大小为`32*32`的数据集。增强后batch数据的张量为`32*1*32*32`。"
   ]
  },
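  {
   "cell_type": "markdown",
   "id": "5b9e2c47",
   "metadata": {},
   "source": [
    "The batching arithmetic above can be checked directly. A quick sketch (plain Python, independent of MindSpore): with `drop_remainder=True`, 60,000 training images at batch size 32 give exactly 1875 full batches, and after `HWC2CHW` each batch has shape `32*1*32*32` (N, C, H, W).\n",
    "\n",
    "```python\n",
    "num_images, batch_size = 60000, 32\n",
    "num_batches = num_images // batch_size  # drop_remainder=True discards any partial batch\n",
    "batch_shape = (batch_size, 1, 32, 32)   # N, C, H, W after Resize to 32x32 and HWC2CHW\n",
    "print(num_batches, batch_shape)\n",
    "```"
   ]
  },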
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "f9dc928d",
   "metadata": {},
   "outputs": [],
   "source": [
    "import mindspore.dataset as ds\n",
    "import mindspore.dataset.vision.c_transforms as CV\n",
    "import mindspore.dataset.transforms.c_transforms as C\n",
    "from mindspore.dataset.vision import Inter\n",
    "from mindspore import dtype as mstype\n",
    "\n",
    "\n",
    "def create_dataset(data_path, batch_size=32, repeat_size=1,\n",
    "                   num_parallel_workers=1):\n",
    "    \"\"\"\n",
    "    create dataset for train or test\n",
    "    \"\"\"\n",
    "    # define dataset\n",
    "    mnist_ds = ds.MnistDataset(data_path)\n",
    "\n",
    "    resize_height, resize_width = 32, 32\n",
    "    rescale = 1.0 / 255.0\n",
    "    shift = 0.0\n",
    "    rescale_nml = 1 / 0.3081\n",
    "    shift_nml = -1 * 0.1307 / 0.3081\n",
    "\n",
    "    # define map operations\n",
    "    C_trans = [\n",
    "        CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR),  # Bilinear mode\n",
    "        CV.Rescale(rescale_nml, shift_nml),\n",
    "        CV.Rescale(rescale, shift),\n",
    "        CV.HWC2CHW()\n",
    "    ]\n",
    "    type_cast_op = C.TypeCast(mstype.int32)\n",
    "\n",
    "    # apply map operations on images\n",
    "    mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns=\"label\", num_parallel_workers=num_parallel_workers)\n",
    "    mnist_ds = mnist_ds.map(operations=C_trans, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
    "\n",
    "    # apply DatasetOps\n",
    "    buffer_size = 10000\n",
    "    mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size)  # 10000 as in LeNet train script\n",
    "    mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)\n",
    "    mnist_ds = mnist_ds.repeat(repeat_size)\n",
    "\n",
    "    return mnist_ds"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dc4820af",
   "metadata": {},
   "source": [
    "## 构建量化前的融合网络模型\n",
    "\n",
    "在MindSpore中的量化网络构建主要分为自动量化网络构建和手动量化网络构建，本文将以自动量化网络构建为例，完成感知量化训练。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bba49089",
   "metadata": {},
   "source": [
    "### 自动量化网络\n",
    "\n",
    "自动量化网络需要分两步执行完成量化。\n",
    "\n",
    "1. 构造含有融合算子的网络\n",
    "\n",
    "    与一般的LeNet5网络构建相比，自动量化网络需要使用到融合算子来构建。主要使用了`nn.Conv2dBnAct`和`nn.DenseBnAct`替换了原来的卷积层和全连接层，这里融合算子将多种操作融合在了一起，会提升运算性。\n",
    "\n",
    "    - `nn.Conv2dBnAct`：融合了2维卷积、Batch Normolization和激活操作，其参数`activatation`中设置`relu`，即在卷积后，自动采用`relu`函数进行激活。\n",
    "\n",
    "    - `nn.DenseBnAct`：融合了全连接、Batch Normolization和激活操作，其参数`activation`中设置`relu`，即在全连接后，自动采用`relu`函数进行激活。"
   ]
  },
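  {
   "cell_type": "markdown",
   "id": "9d4f6b12",
   "metadata": {},
   "source": [
    "One reason the fused operators perform better can be shown with the standard BN folding identity. The sketch below is plain Python with scalar stand-ins for tensors, an illustration rather than MindSpore's operator: because Batch Normalization applies an affine transform to the convolution output, its parameters can be folded into the convolution's weight and bias ahead of time, so a single convolution does the work of two operations.\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def fold_bn(w, b, gamma, beta, mean, var, eps=1e-5):\n",
    "    \"\"\"Fold BatchNorm parameters into a (scalar) conv weight and bias.\"\"\"\n",
    "    inv_std = gamma / math.sqrt(var + eps)\n",
    "    return w * inv_std, (b - mean) * inv_std + beta\n",
    "\n",
    "w, b = 0.8, 0.1\n",
    "gamma, beta, mean, var = 1.2, -0.5, 0.3, 0.04\n",
    "x = 2.0\n",
    "\n",
    "# unfused: convolution followed by Batch Normalization\n",
    "y = w * x + b\n",
    "bn_out = gamma * (y - mean) / math.sqrt(var + 1e-5) + beta\n",
    "\n",
    "# fused: a single convolution with folded parameters\n",
    "wf, bf = fold_bn(w, b, gamma, beta, mean, var)\n",
    "print(abs(bn_out - (wf * x + bf)) < 1e-9)  # True: both paths agree\n",
    "```"
   ]
  },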
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "034a0553",
   "metadata": {},
   "outputs": [],
   "source": [
    "import mindspore.nn as nn\n",
    "\n",
    "class LeNet5(nn.Cell):\n",
    "    def __init__(self, num_class=10):\n",
    "        super(LeNet5, self).__init__()\n",
    "        self.num_class = num_class\n",
    "\n",
    "        self.conv1 = nn.Conv2dBnAct(1, 6, kernel_size=5, pad_mode=\"valid\", activation='relu')\n",
    "        self.conv2 = nn.Conv2dBnAct(6, 16, kernel_size=5, pad_mode=\"valid\", activation='relu')\n",
    "\n",
    "        self.fc1 = nn.DenseBnAct(16 * 5 * 5, 120, activation='relu')\n",
    "        self.fc2 = nn.DenseBnAct(120, 84, activation='relu')\n",
    "        self.fc3 = nn.DenseBnAct(84, self.num_class)\n",
    "        self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)\n",
    "        self.flatten = nn.Flatten()\n",
    "\n",
    "    def construct(self, x):\n",
    "        x = self.max_pool2d(self.conv1(x))\n",
    "        x = self.max_pool2d(self.conv2(x))\n",
    "        x = self.flatten(x)\n",
    "        x = self.fc1(x)\n",
    "        x = self.fc2(x)\n",
    "        x = self.fc3(x)\n",
    "        return x"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2f8bc06e",
   "metadata": {},
   "source": [
    "2. 量化算子融合网络\n",
    "\n",
    "    在`QuantizationAwareTraining`接口中设置网络量化的参数，然后使用`QuantizationAwareTraining.quantize`接口，将算子融合网络自动插入伪量化节点，完成对模型的量化。\n",
    "\n",
    "    其中接口`QuantizationAwareTraining`中参数:\n",
    "\n",
    "    - `quant_delay`：推理评估期间量化权重和量化激活数的步骤数。\n",
    "    - `bn_fold`：使用bn fold算子进行模拟推理的标志位。默认True。\n",
    "    - `per_channel`：基于层或通道的量化粒度，第一个元素值如果为True，则基于每个通道量化，否则基于层量化。第二个元素值代表数据流必须为False。\n",
    "    - `symmetric`：量化算法是否对称。第一个元素值如果为True，则基于对称算法，否则基于不对称算法。第二个权重代表数据流设置为False。"
   ]
  },
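  {
   "cell_type": "markdown",
   "id": "6c8e0f35",
   "metadata": {},
   "source": [
    "The difference between the granularity options can be sketched in plain Python (hypothetical weight values, symmetric quantization only, for illustration, not MindSpore's implementation): per-layer quantization derives one scale from the extremes of the whole weight tensor, while per-channel quantization derives one scale per output channel, which tracks each channel's range more tightly.\n",
    "\n",
    "```python\n",
    "def symmetric_scale(values, num_bits=8):\n",
    "    \"\"\"One scale covering [-max|v|, +max|v|] on a signed integer grid.\"\"\"\n",
    "    return max(abs(v) for v in values) / (2 ** (num_bits - 1) - 1)\n",
    "\n",
    "weights = [\n",
    "    [0.02, -0.01, 0.03],   # channel 0: small values\n",
    "    [1.5, -2.0, 0.7],      # channel 1: large values\n",
    "]\n",
    "\n",
    "per_layer = symmetric_scale([v for ch in weights for v in ch])\n",
    "per_channel = [symmetric_scale(ch) for ch in weights]\n",
    "print(per_layer)    # one coarse scale, dominated by channel 1\n",
    "print(per_channel)  # channel 0 gets a much finer scale\n",
    "```"
   ]
  },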
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "43c7913e",
   "metadata": {},
   "outputs": [],
   "source": [
    "from mindspore import context\n",
    "from mindspore.compression.quant import QuantizationAwareTraining\n",
    "\n",
    "context.set_context(mode=context.GRAPH_MODE, device_target=\"GPU\")\n",
    "\n",
    "network = LeNet5(10)\n",
    "quantizer = QuantizationAwareTraining(quant_delay=900,\n",
    "                                      bn_fold=False,\n",
    "                                      per_channel=[True, False],\n",
    "                                      symmetric=[True, False])\n",
    "\n",
    "quant_network = quantizer.quantize(network)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9be4fe58",
   "metadata": {},
   "source": [
    "> 除了自动量化网络外，还能[手动模式构建量化网络](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/apply_quantization_aware_training.html#id10)，而且手动模式构建量化网络的方法由于引入了专门的量化参数[quant_config](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.compression.html?#mindspore.compression.quant.create_quant_config)，可以更细粒度的调节模型量化程度，比如量化的类型，指定量化的通道等。并且使用了专门的量化计算节点，在构建网络时就已经插入了伪量化节点，可以不必使用`QuantizationAwareTraining.quantize`接口来进行量化。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "73563f94",
   "metadata": {},
   "source": [
    "## 载入预训练模型权重文件\n",
    "\n",
    "由于预训练文件是未量化的模型文件，而待载入的网络为量化网络，这里需使用专用接口`load_nonquant_param_into_quant_net`来完成预训练模型的载入。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "12bdbcf2",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "init model param conv1.weight with checkpoint param conv1.weight\n",
      "init model param conv2.weight with checkpoint param conv2.weight\n",
      "init model param fc1.weight with checkpoint param fc1.weight\n",
      "init model param fc1.bias with checkpoint param fc1.bias\n",
      "init model param fc2.weight with checkpoint param fc2.weight\n",
      "init model param fc2.bias with checkpoint param fc2.bias\n",
      "init model param fc3.weight with checkpoint param fc3.weight\n",
      "init model param fc3.bias with checkpoint param fc3.bias\n"
     ]
    }
   ],
   "source": [
    "from mindspore import load_checkpoint\n",
    "from mindspore.compression.quant import load_nonquant_param_into_quant_net\n",
    "\n",
    "# load quantization aware network checkpoint\n",
    "param_dict = load_checkpoint(\"./checkpoint_lenet.ckpt\")\n",
    "load_nonquant_param_into_quant_net(quant_network, param_dict)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a3cc20cb",
   "metadata": {},
   "source": [
    "完成模型的载入和初始化后，其余微调训练，模型保存等操作方式，跟快速入门中的样例一致。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c909d572",
   "metadata": {},
   "source": [
    "## 模型微调\n",
    "\n",
    "微调过程跟训练过程相差不大，需要先定义损失函数，优化器等超参，然后调用`Model`接口，将量化网络，损失函数，优化器结合成完整的计算网络，然后送入微调用的数据集，完成对模型微调，并将微调后的模型保存出来。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "5d4ebcdb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch: 1 step: 375, loss is 0.03274906\n",
      "epoch: 1 step: 750, loss is 0.34685582\n",
      "epoch: 1 step: 1125, loss is 0.0022193685\n",
      "epoch: 1 step: 1500, loss is 0.15521993\n",
      "epoch: 1 step: 1875, loss is 0.05880319\n"
     ]
    }
   ],
   "source": [
    "import mindspore\n",
    "from mindspore import export, Model\n",
    "from mindspore.train.callback import LossMonitor, ModelCheckpoint, CheckpointConfig\n",
    "\n",
    "lr = 0.01\n",
    "momentum = 0.9\n",
    "epoch_size = 1\n",
    "\n",
    "# define fusion network\n",
    "net_opt = nn.Momentum(quant_network.trainable_params(), lr, momentum)\n",
    "net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction=\"mean\")\n",
    "model = Model(quant_network, net_loss, net_opt)\n",
    "\n",
    "config_ckpt = CheckpointConfig(save_checkpoint_steps=epoch_size * 1875,\n",
    "                                   keep_checkpoint_max=10)\n",
    "ckpoint = ModelCheckpoint(prefix=\"quant_checkpoint_lenet\", config=config_ckpt)\n",
    "\n",
    "ds_train = create_dataset(\"./datasets/MNIST_Data/train\")\n",
    "model.train(epoch_size, ds_train, callbacks=[ckpoint, LossMonitor(375)], dataset_sink_mode=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "19725123",
   "metadata": {},
   "source": [
    "## 查看模型大小\n",
    "\n",
    "对比微调后的量化网络模型权重文件和原本的网络模型权重文件，在大小上的区别。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "8e973c23",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The original model is 482 KB\n",
      "After quant the model size is 482 KB\n"
     ]
    }
   ],
   "source": [
    "import os \n",
    "\n",
    "original_model_size = os.path.getsize(\"./checkpoint_lenet.ckpt\")\n",
    "quant_model_size = os.path.getsize(\"./quant_checkpoint_lenet-1_1875.ckpt\")\n",
    "print(\"The original model is\", original_model_size//1024, \"KB\")\n",
    "print(\"After quant the model size is\", quant_model_size//1024, \"KB\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cbdc2017",
   "metadata": {},
   "source": [
    "先查看量化后的模型中的计算节点。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "0e024ad5",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Original model calculation node number: 24\n",
      "{'conv1.weight': Parameter (name=conv1.weight),\n",
      " 'conv2.weight': Parameter (name=conv2.weight),\n",
      " 'fc1.add.fake_quant_act.maxq': Parameter (name=fc1.add.fake_quant_act.maxq),\n",
      " 'fc1.add.fake_quant_act.minq': Parameter (name=fc1.add.fake_quant_act.minq),\n",
      " 'fc1.bias': Parameter (name=fc1.bias),\n",
      " 'fc1.weight': Parameter (name=fc1.weight),\n",
      " 'fc2.add.fake_quant_act.maxq': Parameter (name=fc2.add.fake_quant_act.maxq),\n",
      " 'fc2.add.fake_quant_act.minq': Parameter (name=fc2.add.fake_quant_act.minq),\n",
      " 'fc2.bias': Parameter (name=fc2.bias),\n",
      " 'fc2.weight': Parameter (name=fc2.weight),\n",
      " 'fc3.add.fake_quant_act.maxq': Parameter (name=fc3.add.fake_quant_act.maxq),\n",
      " 'fc3.add.fake_quant_act.minq': Parameter (name=fc3.add.fake_quant_act.minq),\n",
      " 'fc3.bias': Parameter (name=fc3.bias),\n",
      " 'fc3.weight': Parameter (name=fc3.weight),\n",
      " 'learning_rate': Parameter (name=learning_rate),\n",
      " 'moments.conv1.weight': Parameter (name=moments.conv1.weight),\n",
      " 'moments.conv2.weight': Parameter (name=moments.conv2.weight),\n",
      " 'moments.fc1.bias': Parameter (name=moments.fc1.bias),\n",
      " 'moments.fc1.weight': Parameter (name=moments.fc1.weight),\n",
      " 'moments.fc2.bias': Parameter (name=moments.fc2.bias),\n",
      " 'moments.fc2.weight': Parameter (name=moments.fc2.weight),\n",
      " 'moments.fc3.bias': Parameter (name=moments.fc3.bias),\n",
      " 'moments.fc3.weight': Parameter (name=moments.fc3.weight),\n",
      " 'momentum': Parameter (name=momentum)}\n"
     ]
    }
   ],
   "source": [
    "import pprint\n",
    "\n",
    "quant_params = load_checkpoint(\"./quant_checkpoint_lenet-1_1875.ckpt\")\n",
    "print(\"Original model calculation node number:\",len(quant_params))\n",
    "pprint.pprint(quant_params)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3fc90d6a",
   "metadata": {},
   "source": [
    "再查看未量化的模型网络计算节点。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "cd67743e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "quant model calculation node number: 18\n",
      "{'conv1.weight': Parameter (name=conv1.weight),\n",
      " 'conv2.weight': Parameter (name=conv2.weight),\n",
      " 'fc1.bias': Parameter (name=fc1.bias),\n",
      " 'fc1.weight': Parameter (name=fc1.weight),\n",
      " 'fc2.bias': Parameter (name=fc2.bias),\n",
      " 'fc2.weight': Parameter (name=fc2.weight),\n",
      " 'fc3.bias': Parameter (name=fc3.bias),\n",
      " 'fc3.weight': Parameter (name=fc3.weight),\n",
      " 'learning_rate': Parameter (name=learning_rate),\n",
      " 'moments.conv1.weight': Parameter (name=moments.conv1.weight),\n",
      " 'moments.conv2.weight': Parameter (name=moments.conv2.weight),\n",
      " 'moments.fc1.bias': Parameter (name=moments.fc1.bias),\n",
      " 'moments.fc1.weight': Parameter (name=moments.fc1.weight),\n",
      " 'moments.fc2.bias': Parameter (name=moments.fc2.bias),\n",
      " 'moments.fc2.weight': Parameter (name=moments.fc2.weight),\n",
      " 'moments.fc3.bias': Parameter (name=moments.fc3.bias),\n",
      " 'moments.fc3.weight': Parameter (name=moments.fc3.weight),\n",
      " 'momentum': Parameter (name=momentum)}\n"
     ]
    }
   ],
   "source": [
    "no_quant_params = load_checkpoint(\"./checkpoint_lenet.ckpt\")\n",
    "print(\"quant model calculation node number:\", len(no_quant_params))\n",
    "pprint.pprint(no_quant_params)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "380227a6",
   "metadata": {},
   "source": [
    "从上面在量化后和量化前的对比可以看出，模型量化前和量化后的变化：\n",
    "\n",
    "|模型量化前|模型大小|模型计算节点\n",
    "|:---|:---|:---\n",
    "|量化前|482 KB| 18\n",
    "|量化后|482 KB| 24\n",
    "\n",
    "量化后的模型大小并未变化，另外模型的计算节点比量化前的计算节点增加了6个，这些增加的计算节点均为全连接层中插入的伪量化节点。\n",
    "\n",
    "为什么量化后模型并未缩小？\n",
    "\n",
    "原因是MindSpore中采用了伪量化节点并不是压缩训练网络用的，而是在后续的模型部署部分，在将有伪量化节点的模型文件，转化为用于推理的模型`.ms`文件时才会将插入伪量化节点的float32的存储数据和计算数据转换为int8或者int4类型的数据，从而将部署的网络模型小型化。"
   ]
  },
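  {
   "cell_type": "markdown",
   "id": "2a7c4e58",
   "metadata": {},
   "source": [
    "The size reduction to expect at deployment can be estimated with simple arithmetic. The figures below are illustrative (the parameter count is an approximate LeNet5 total, not read from the checkpoint): storing each weight as INT8 instead of FP32 cuts the weight payload to a quarter, ignoring file-format overhead and the per-layer or per-channel scale and zero-point parameters.\n",
    "\n",
    "```python\n",
    "num_params = 61684            # approximate LeNet5 weight/bias count (illustrative)\n",
    "fp32_bytes = num_params * 4   # 4 bytes per FP32 value\n",
    "int8_bytes = num_params * 1   # 1 byte per INT8 value\n",
    "print(fp32_bytes // 1024, \"KB ->\", int8_bytes // 1024, \"KB\")  # 240 KB -> 60 KB\n",
    "```"
   ]
  },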
  {
   "cell_type": "markdown",
   "id": "5af8ada3",
   "metadata": {},
   "source": [
    "## 导出模型\n",
    "\n",
    "使用export接口将`.ckpt`的模型文件导出为`.mindir`文件，除了导出`mindir`外，还能将模型导出为`.onnx`和`.air`等推理用的模型文件，详细导出方式可以参考官网的《[保存模型](https://www.mindspore.cn/tutorial/training/zh-CN/master/use/save_model.html)》。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "64c13a9a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from mindspore import Tensor, export\n",
    "\n",
    "# export network\n",
    "inputs = Tensor(np.ones([1, 1, 32, 32]), mindspore.float32)\n",
    "export(quant_network, inputs, file_name=\"lenet_quant\", file_format='MINDIR', quant_mode='AUTO')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "db99a33a",
   "metadata": {},
   "source": [
    "`.mindir`模型文件导出后，查看其大小。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "751cbbfc",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "mindir file size is 248 KB\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "\n",
    "mindir_size = os.path.getsize(\"./lenet_quant.mindir\")\n",
    "\n",
    "print(\"mindir file size is\", mindir_size//1024, \"KB\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b7264981",
   "metadata": {},
   "source": [
    "> `.mindir`模型文件大小为248KB，比`.ckpt`模型文件小了一半，主要是由于转化为`.mindir`模型文件时，只保留了模型前向传播中用于推理网络，反向传播部分的网络被省略掉导致的。并非量化的原因。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b9a3efa8",
   "metadata": {},
   "source": [
    "## 转化模型\n",
    "\n",
    "将`.mindir`文件转化为部署推理用的`.ms`文件需要使用到转换工具`MindConvert_Lite`，详情可参考官网《[推理模型转换](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/converter_tool.html#)》，模型转换工具会自动识别模型文件中的伪量化节点，完成推理模型的量化，得到最终的`.ms`推理模型文件。"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "MindSpore-1.1.1",
   "language": "python",
   "name": "mindspore-1.1.1"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
