{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# [模型保存与加载](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/beginner/model_save_load_cn.html)\n",
    "在模型训练过程中，通常会在如下场景中用到模型的保存与加载功能：\n",
    "\n",
    "* 训练调优场景：\n",
    "\n",
    "    * 模型训练过程中定期保存模型，以便后续对不同时期的模型恢复训练或进行研究；\n",
    "\n",
    "    * 模型训练完毕，需要保存模型方便进行评估测试；\n",
    "\n",
    "    * 载入预训练模型，并对模型进行微调（fine-tune）。\n",
    "\n",
    "* 推理部署场景：\n",
    "\n",
    "    * 模型训练完毕，在云、边、端不同的硬件环境中部署使用，飞桨提供了服务器端部署的 Paddle Inference、移动端/IoT端部署的 Paddle Lite、服务化部署的 Paddle Serving 等，以实现模型的快速部署上线。\n",
    "\n",
    "针对以上场景，飞桨框架推荐使用的模型保存与加载基础 API 主要包括：\n",
    "\n",
    "* paddle.save\n",
    "\n",
    "* paddle.load\n",
    "\n",
    "* paddle.jit.save\n",
    "\n",
    "* paddle.jit.load\n",
    "\n",
    "模型保存与加载高层 API 主要包括：\n",
    "\n",
    "* paddle.Model.save\n",
    "\n",
    "* paddle.Model.load\n"
   ]
  },
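  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Of the basic APIs above, `paddle.save`/`paddle.load` are demonstrated in the cells below; `paddle.jit.save`/`paddle.jit.load` serialize a dynamic-graph `Layer` into a static graph for deployment. A minimal sketch (the `../output/example` path and the `Linear` layer here are illustrative choices, not taken from this notebook):\n",
    "\n",
    "```python\n",
    "import paddle\n",
    "from paddle.static import InputSpec\n",
    "\n",
    "layer = paddle.nn.Linear(784, 10)\n",
    "# Declare the input shape/dtype so the forward pass can be traced and serialized\n",
    "paddle.jit.save(layer, \"../output/example\",\n",
    "                input_spec=[InputSpec(shape=[None, 784], dtype=\"float32\")])\n",
    "\n",
    "# Reload as a static-graph Layer usable for inference, with no class definition needed\n",
    "loaded = paddle.jit.load(\"../output/example\")\n",
    "out = loaded(paddle.randn([1, 784]))  # shape [1, 10]\n",
    "```"
   ]
  },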
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](https://www.paddlepaddle.org.cn/documentation/docs/zh/_images/paddle_save_load_2.3.png)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/lib/python3/dist-packages/urllib3/util/selectors.py:14: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working\n",
      "  from collections import namedtuple, Mapping\n",
      "/usr/lib/python3/dist-packages/urllib3/_collections.py:2: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working\n",
      "  from collections import Mapping, MutableMapping\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 0 batch 0: loss = 2.6345319747924805\n",
      "Epoch 0 batch 1: loss = 2.345216989517212\n",
      "Epoch 0 batch 2: loss = 2.4752840995788574\n",
      "Epoch 0 batch 3: loss = 2.4144530296325684\n",
      "Epoch 1 batch 0: loss = 2.384279251098633\n",
      "Epoch 1 batch 1: loss = 2.330089569091797\n",
      "Epoch 1 batch 2: loss = 2.211522102355957\n",
      "Epoch 1 batch 3: loss = 2.1438331604003906\n",
      "Epoch 2 batch 0: loss = 2.648134231567383\n",
      "Epoch 2 batch 1: loss = 2.6120200157165527\n",
      "Epoch 2 batch 2: loss = 2.2029805183410645\n",
      "Epoch 2 batch 3: loss = 2.152506113052368\n",
      "Epoch 3 batch 0: loss = 2.4099395275115967\n",
      "Epoch 3 batch 1: loss = 2.368257999420166\n",
      "Epoch 3 batch 2: loss = 2.418806791305542\n",
      "Epoch 3 batch 3: loss = 2.3663992881774902\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "import paddle\n",
    "import paddle.nn as nn\n",
    "import paddle.optimizer as opt\n",
    "\n",
    "BATCH_SIZE = 16\n",
    "BATCH_NUM = 4\n",
    "EPOCH_NUM = 4\n",
    "\n",
    "IMAGE_SIZE = 784\n",
    "CLASS_NUM = 10\n",
    "\n",
    "final_checkpoint = dict()\n",
    "\n",
    "\n",
    "# 定义一个随机数据集\n",
    "class RandomDataset(paddle.io.Dataset):\n",
    "    def __init__(self, num_samples):\n",
    "        self.num_samples = num_samples\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        image = np.random.random([IMAGE_SIZE]).astype(\"float32\")\n",
    "        label = np.random.randint(0, CLASS_NUM - 1, (1,)).astype(\"int64\")\n",
    "        return image, label\n",
    "\n",
    "    def __len__(self):\n",
    "        return self.num_samples\n",
    "\n",
    "\n",
    "class LinearNet(nn.Layer):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self._linear = nn.Linear(IMAGE_SIZE, CLASS_NUM)\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self._linear(x)\n",
    "\n",
    "\n",
    "def train(layer, loader, loss_fn, opt):\n",
    "    for epoch_id in range(EPOCH_NUM):\n",
    "        for batch_id, (image, label) in enumerate(loader()):\n",
    "            out = layer(image)\n",
    "            loss = loss_fn(out, label)\n",
    "            loss.backward()\n",
    "            opt.step()\n",
    "            opt.clear_grad()\n",
    "            print(\n",
    "                \"Epoch {} batch {}: loss = {}\".format(\n",
    "                    epoch_id, batch_id, np.mean(loss.numpy())\n",
    "                )\n",
    "            )\n",
    "        # 最后一个epoch保存检查点checkpoint\n",
    "        if epoch_id == EPOCH_NUM - 1:\n",
    "            final_checkpoint[\"epoch\"] = epoch_id\n",
    "            final_checkpoint[\"loss\"] = loss\n",
    "\n",
    "\n",
    "# 创建网络、loss和优化器\n",
    "layer = LinearNet()\n",
    "loss_fn = nn.CrossEntropyLoss()\n",
    "adam = opt.Adam(learning_rate=0.001, parameters=layer.parameters())\n",
    "\n",
    "# 创建用于载入数据的DataLoader\n",
    "dataset = RandomDataset(BATCH_NUM * BATCH_SIZE)\n",
    "loader = paddle.io.DataLoader(\n",
    "    dataset, batch_size=BATCH_SIZE, shuffle=True, drop_last=True, num_workers=2\n",
    ")\n",
    "\n",
    "# 开始训练\n",
    "train(layer, loader, loss_fn, adam)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
   ],
   "source": [
    "# 保存Layer参数\n",
    "paddle.save(layer.state_dict(), \"../output/linear_net.pdparams\")\n",
    "# 保存优化器参数\n",
    "paddle.save(adam.state_dict(), \"../output/adam.pdopt\")\n",
    "# 保存检查点checkpoint信息\n",
    "paddle.save(final_checkpoint, \"../output/final_checkpoint.pkl\")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Loaded Final Checkpoint. Epoch : 3, Loss : [2.3663993]\n"
     ]
    }
   ],
   "source": [
    "# 载入模型参数、优化器参数和最后一个epoch保存的检查点\n",
    "layer_state_dict = paddle.load(\"../output/linear_net.pdparams\")\n",
    "opt_state_dict = paddle.load(\"../output/adam.pdopt\")\n",
    "final_checkpoint_dict = paddle.load(\"../output/final_checkpoint.pkl\")\n",
    "\n",
    "# 将load后的参数与模型关联起来\n",
    "layer.set_state_dict(layer_state_dict)\n",
    "adam.set_state_dict(opt_state_dict)\n",
    "\n",
    "# 打印出来之前保存的 checkpoint 信息\n",
    "print(\n",
    "    \"Loaded Final Checkpoint. Epoch : {}, Loss : {}\".format(\n",
    "        final_checkpoint_dict[\"epoch\"], final_checkpoint_dict[\"loss\"].numpy()\n",
    "    )\n",
    ")\n"
   ]
  },
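  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once `set_state_dict` has reattached both state dicts, training can simply resume: the optimizer continues with its saved moment estimates and learning-rate state. A minimal sketch reusing `layer`, `adam`, `loader`, and `loss_fn` from the first cell (the one-extra-epoch loop is an illustrative choice):\n",
    "\n",
    "```python\n",
    "# Continue for one more epoch after the saved one\n",
    "start_epoch = final_checkpoint_dict[\"epoch\"] + 1\n",
    "for epoch_id in range(start_epoch, start_epoch + 1):\n",
    "    for batch_id, (image, label) in enumerate(loader()):\n",
    "        loss = loss_fn(layer(image), label)\n",
    "        loss.backward()\n",
    "        adam.step()\n",
    "        adam.clear_grad()\n",
    "```"
   ]
  },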
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The loss value printed in the log is the current step, and the metric is the average value of previous steps.\n",
      "Epoch 1/1\n",
      "step   10/1875 - loss: 3.2755 - 12ms/step\n",
      "step   20/1875 - loss: 2.2269 - 12ms/step\n",
      "step   30/1875 - loss: 2.3683 - 10ms/step\n",
      "step   40/1875 - loss: 2.0954 - 10ms/step\n",
      "step   50/1875 - loss: 2.2487 - 10ms/step\n",
      "step   60/1875 - loss: 1.6199 - 10ms/step\n",
      "step   70/1875 - loss: 1.7036 - 10ms/step\n",
      "step   80/1875 - loss: 1.9259 - 10ms/step\n",
      "step   90/1875 - loss: 1.8833 - 10ms/step\n",
      "step  100/1875 - loss: 1.8027 - 10ms/step\n",
      "step  110/1875 - loss: 1.4439 - 9ms/step\n",
      "step  120/1875 - loss: 1.2307 - 9ms/step\n",
      "step  130/1875 - loss: 1.3870 - 9ms/step\n",
      "step  140/1875 - loss: 1.3461 - 9ms/step\n",
      "step  150/1875 - loss: 1.3342 - 9ms/step\n",
      "step  160/1875 - loss: 1.1689 - 9ms/step\n",
      "step  170/1875 - loss: 1.1764 - 9ms/step\n",
      "step  180/1875 - loss: 0.9474 - 9ms/step\n",
      "step  190/1875 - loss: 1.2324 - 9ms/step\n",
      "step  200/1875 - loss: 1.1690 - 9ms/step\n",
      "step  210/1875 - loss: 1.3451 - 9ms/step\n",
      "step  220/1875 - loss: 1.2803 - 9ms/step\n",
      "step  230/1875 - loss: 1.1379 - 9ms/step\n",
      "step  240/1875 - loss: 0.8125 - 9ms/step\n",
      "step  250/1875 - loss: 1.0822 - 9ms/step\n",
      "step  260/1875 - loss: 0.8527 - 9ms/step\n",
      "step  270/1875 - loss: 0.8929 - 9ms/step\n",
      "step  280/1875 - loss: 0.8095 - 9ms/step\n",
      "step  290/1875 - loss: 0.8340 - 9ms/step\n",
      "step  300/1875 - loss: 1.0866 - 8ms/step\n",
      "step  310/1875 - loss: 0.8878 - 8ms/step\n",
      "step  320/1875 - loss: 0.9051 - 8ms/step\n",
      "step  330/1875 - loss: 1.1166 - 8ms/step\n",
      "step  340/1875 - loss: 0.7181 - 8ms/step\n",
      "step  350/1875 - loss: 0.9334 - 8ms/step\n",
      "step  360/1875 - loss: 0.7764 - 8ms/step\n",
      "step  370/1875 - loss: 1.0136 - 8ms/step\n",
      "step  380/1875 - loss: 1.2091 - 8ms/step\n",
      "step  390/1875 - loss: 0.9193 - 8ms/step\n",
      "step  400/1875 - loss: 1.2819 - 8ms/step\n",
      "step  410/1875 - loss: 1.0141 - 8ms/step\n",
      "step  420/1875 - loss: 0.6434 - 8ms/step\n",
      "step  430/1875 - loss: 1.0579 - 8ms/step\n",
      "step  440/1875 - loss: 0.5979 - 8ms/step\n",
      "step  450/1875 - loss: 0.5957 - 8ms/step\n",
      "step  460/1875 - loss: 0.7318 - 8ms/step\n",
      "step  470/1875 - loss: 0.7947 - 8ms/step\n",
      "step  480/1875 - loss: 0.7781 - 8ms/step\n",
      "step  490/1875 - loss: 0.7335 - 8ms/step\n",
      "step  500/1875 - loss: 0.7538 - 8ms/step\n",
      "step  510/1875 - loss: 0.7014 - 8ms/step\n",
      "step  520/1875 - loss: 0.4308 - 8ms/step\n",
      "step  530/1875 - loss: 0.6902 - 8ms/step\n",
      "step  540/1875 - loss: 0.6258 - 8ms/step\n",
      "step  550/1875 - loss: 0.6820 - 8ms/step\n",
      "step  560/1875 - loss: 0.8763 - 8ms/step\n",
      "step  570/1875 - loss: 0.3483 - 8ms/step\n",
      "step  580/1875 - loss: 0.6824 - 8ms/step\n",
      "step  590/1875 - loss: 0.3151 - 8ms/step\n",
      "step  600/1875 - loss: 0.5704 - 8ms/step\n",
      "step  610/1875 - loss: 0.4774 - 8ms/step\n",
      "step  620/1875 - loss: 0.5818 - 8ms/step\n",
      "step  630/1875 - loss: 0.5834 - 8ms/step\n",
      "step  640/1875 - loss: 0.4374 - 8ms/step\n",
      "step  650/1875 - loss: 1.1827 - 8ms/step\n",
      "step  660/1875 - loss: 0.7557 - 8ms/step\n",
      "step  670/1875 - loss: 0.5705 - 8ms/step\n",
      "step  680/1875 - loss: 0.7360 - 8ms/step\n",
      "step  690/1875 - loss: 0.4685 - 8ms/step\n",
      "step  700/1875 - loss: 0.5552 - 8ms/step\n",
      "step  710/1875 - loss: 0.8858 - 8ms/step\n",
      "step  720/1875 - loss: 0.5387 - 8ms/step\n",
      "step  730/1875 - loss: 0.4687 - 8ms/step\n",
      "step  740/1875 - loss: 0.4723 - 8ms/step\n",
      "step  750/1875 - loss: 0.4872 - 8ms/step\n",
      "step  760/1875 - loss: 0.6064 - 8ms/step\n",
      "step  770/1875 - loss: 0.8087 - 8ms/step\n",
      "step  780/1875 - loss: 0.5691 - 8ms/step\n",
      "step  790/1875 - loss: 0.6615 - 8ms/step\n",
      "step  800/1875 - loss: 0.8721 - 8ms/step\n",
      "step  810/1875 - loss: 0.6494 - 8ms/step\n",
      "step  820/1875 - loss: 0.4199 - 8ms/step\n",
      "step  830/1875 - loss: 0.7269 - 8ms/step\n",
      "step  840/1875 - loss: 0.4262 - 8ms/step\n",
      "step  850/1875 - loss: 0.2565 - 8ms/step\n",
      "step  860/1875 - loss: 0.7143 - 8ms/step\n",
      "step  870/1875 - loss: 0.5917 - 8ms/step\n",
      "step  880/1875 - loss: 0.6817 - 8ms/step\n",
      "step  890/1875 - loss: 0.5390 - 8ms/step\n",
      "step  900/1875 - loss: 0.6579 - 8ms/step\n",
      "step  910/1875 - loss: 0.5139 - 8ms/step\n",
      "step  920/1875 - loss: 0.5771 - 8ms/step\n",
      "step  930/1875 - loss: 0.5221 - 8ms/step\n",
      "step  940/1875 - loss: 0.6577 - 8ms/step\n",
      "step  950/1875 - loss: 0.2966 - 8ms/step\n",
      "step  960/1875 - loss: 0.4149 - 8ms/step\n",
      "step  970/1875 - loss: 0.4186 - 8ms/step\n",
      "step  980/1875 - loss: 0.4918 - 8ms/step\n",
      "step  990/1875 - loss: 0.3008 - 8ms/step\n",
      "step 1000/1875 - loss: 0.5120 - 8ms/step\n",
      "step 1010/1875 - loss: 0.6266 - 8ms/step\n",
      "step 1020/1875 - loss: 0.3144 - 8ms/step\n",
      "step 1030/1875 - loss: 0.5971 - 8ms/step\n",
      "step 1040/1875 - loss: 0.1952 - 8ms/step\n",
      "step 1050/1875 - loss: 0.5790 - 8ms/step\n",
      "step 1060/1875 - loss: 0.5557 - 8ms/step\n",
      "step 1070/1875 - loss: 0.5677 - 8ms/step\n",
      "step 1080/1875 - loss: 0.3019 - 8ms/step\n",
      "step 1090/1875 - loss: 0.5152 - 8ms/step\n",
      "step 1100/1875 - loss: 0.6149 - 8ms/step\n",
      "step 1110/1875 - loss: 0.3739 - 8ms/step\n",
      "step 1120/1875 - loss: 0.4072 - 8ms/step\n",
      "step 1130/1875 - loss: 0.3601 - 8ms/step\n",
      "step 1140/1875 - loss: 0.5889 - 8ms/step\n",
      "step 1150/1875 - loss: 0.4335 - 8ms/step\n",
      "step 1160/1875 - loss: 0.2624 - 8ms/step\n",
      "step 1170/1875 - loss: 0.4712 - 8ms/step\n",
      "step 1180/1875 - loss: 0.4617 - 8ms/step\n",
      "step 1190/1875 - loss: 0.2753 - 8ms/step\n",
      "step 1200/1875 - loss: 0.3375 - 8ms/step\n",
      "step 1210/1875 - loss: 0.4218 - 8ms/step\n",
      "step 1220/1875 - loss: 0.3771 - 8ms/step\n",
      "step 1230/1875 - loss: 0.2318 - 8ms/step\n",
      "step 1240/1875 - loss: 0.4782 - 8ms/step\n",
      "step 1250/1875 - loss: 0.3439 - 8ms/step\n",
      "step 1260/1875 - loss: 0.4158 - 8ms/step\n",
      "step 1270/1875 - loss: 0.2968 - 8ms/step\n",
      "step 1280/1875 - loss: 0.2645 - 8ms/step\n",
      "step 1290/1875 - loss: 0.4430 - 8ms/step\n",
      "step 1300/1875 - loss: 0.4425 - 8ms/step\n",
      "step 1310/1875 - loss: 0.5596 - 8ms/step\n",
      "step 1320/1875 - loss: 0.5344 - 8ms/step\n",
      "step 1330/1875 - loss: 0.6385 - 8ms/step\n",
      "step 1340/1875 - loss: 0.3167 - 8ms/step\n",
      "step 1350/1875 - loss: 0.2665 - 8ms/step\n",
      "step 1360/1875 - loss: 0.1504 - 8ms/step\n",
      "step 1370/1875 - loss: 0.7179 - 8ms/step\n",
      "step 1380/1875 - loss: 0.3530 - 8ms/step\n",
      "step 1390/1875 - loss: 0.7321 - 8ms/step\n",
      "step 1400/1875 - loss: 0.4815 - 8ms/step\n",
      "step 1410/1875 - loss: 0.7968 - 8ms/step\n",
      "step 1420/1875 - loss: 0.6513 - 8ms/step\n",
      "step 1430/1875 - loss: 0.2936 - 8ms/step\n",
      "step 1440/1875 - loss: 0.4791 - 8ms/step\n",
      "step 1450/1875 - loss: 0.2776 - 8ms/step\n",
      "step 1460/1875 - loss: 0.4959 - 8ms/step\n",
      "step 1470/1875 - loss: 0.4982 - 8ms/step\n",
      "step 1480/1875 - loss: 0.3774 - 8ms/step\n",
      "step 1490/1875 - loss: 0.4094 - 8ms/step\n",
      "step 1500/1875 - loss: 0.4402 - 8ms/step\n",
      "step 1510/1875 - loss: 0.5917 - 8ms/step\n",
      "step 1520/1875 - loss: 0.3316 - 8ms/step\n",
      "step 1530/1875 - loss: 0.2973 - 8ms/step\n",
      "step 1540/1875 - loss: 0.3902 - 8ms/step\n",
      "step 1550/1875 - loss: 0.4680 - 8ms/step\n",
      "step 1560/1875 - loss: 0.4359 - 8ms/step\n",
      "step 1570/1875 - loss: 0.2991 - 8ms/step\n",
      "step 1580/1875 - loss: 0.3240 - 8ms/step\n",
      "step 1590/1875 - loss: 0.6997 - 8ms/step\n",
      "step 1600/1875 - loss: 0.2630 - 8ms/step\n",
      "step 1610/1875 - loss: 0.3145 - 8ms/step\n",
      "step 1620/1875 - loss: 0.3839 - 8ms/step\n",
      "step 1630/1875 - loss: 0.2617 - 8ms/step\n",
      "step 1640/1875 - loss: 0.4326 - 8ms/step\n",
      "step 1650/1875 - loss: 0.2732 - 8ms/step\n",
      "step 1660/1875 - loss: 0.4507 - 8ms/step\n",
      "step 1670/1875 - loss: 0.3472 - 8ms/step\n",
      "step 1680/1875 - loss: 0.4124 - 8ms/step\n",
      "step 1690/1875 - loss: 0.3961 - 8ms/step\n",
      "step 1700/1875 - loss: 0.4141 - 8ms/step\n",
      "step 1710/1875 - loss: 0.3882 - 8ms/step\n",
      "step 1720/1875 - loss: 0.3178 - 8ms/step\n",
      "step 1730/1875 - loss: 0.1238 - 8ms/step\n",
      "step 1740/1875 - loss: 0.2351 - 8ms/step\n",
      "step 1750/1875 - loss: 0.3784 - 8ms/step\n",
      "step 1760/1875 - loss: 0.1806 - 8ms/step\n",
      "step 1770/1875 - loss: 0.2585 - 8ms/step\n",
      "step 1780/1875 - loss: 0.7346 - 8ms/step\n",
      "step 1790/1875 - loss: 0.2447 - 8ms/step\n",
      "step 1800/1875 - loss: 0.2506 - 8ms/step\n",
      "step 1810/1875 - loss: 0.4151 - 8ms/step\n",
      "step 1820/1875 - loss: 0.3752 - 8ms/step\n",
      "step 1830/1875 - loss: 0.4452 - 8ms/step\n",
      "step 1840/1875 - loss: 0.3519 - 8ms/step\n",
      "step 1850/1875 - loss: 0.2378 - 8ms/step\n",
      "step 1860/1875 - loss: 0.3873 - 8ms/step\n",
      "step 1870/1875 - loss: 0.2092 - 8ms/step\n",
      "step 1875/1875 - loss: 0.2920 - 8ms/step\n",
      "save checkpoint at /home/AI/paddlepaddle_learn/output/test/0\n",
      "save checkpoint at /home/AI/paddlepaddle_learn/output/test/final\n"
     ]
    }
   ],
   "source": [
    "import paddle\n",
    "import paddle.nn as nn\n",
    "import paddle.vision.transforms as T\n",
    "from paddle.vision.models import LeNet\n",
    "\n",
    "model = paddle.Model(LeNet())\n",
    "optim = paddle.optimizer.SGD(learning_rate=1e-3, parameters=model.parameters())\n",
    "model.prepare(optim, paddle.nn.CrossEntropyLoss())\n",
    "\n",
    "transform = T.Compose([T.Transpose(), T.Normalize([127.5], [127.5])])\n",
    "data = paddle.vision.datasets.MNIST(mode=\"train\", transform=transform)\n",
    "\n",
    "# 方式一：设置训练过程中保存模型\n",
    "model.fit(data, epochs=1, batch_size=32, save_freq=1, save_dir=\"../output/test\")\n",
    "\n",
    "# 方式二：设置训练后保存模型\n",
    "model.save(\"../output/test\")  # save for training\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The loss value printed in the log is the current step, and the metric is the average value of previous steps.\n",
      "Epoch 1/1\n",
      "step   10/1875 - loss: 0.3920 - 15ms/step\n",
      "step   20/1875 - loss: 0.2840 - 13ms/step\n",
      "step   30/1875 - loss: 0.3344 - 12ms/step\n",
      "step   40/1875 - loss: 0.1848 - 11ms/step\n",
      "step   50/1875 - loss: 0.5409 - 11ms/step\n",
      "step   60/1875 - loss: 0.1764 - 10ms/step\n",
      "step   70/1875 - loss: 0.3999 - 10ms/step\n",
      "step   80/1875 - loss: 0.4119 - 10ms/step\n",
      "step   90/1875 - loss: 0.4331 - 10ms/step\n",
      "step  100/1875 - loss: 0.5062 - 10ms/step\n",
      "step  110/1875 - loss: 0.5357 - 10ms/step\n",
      "step  120/1875 - loss: 0.1287 - 10ms/step\n",
      "step  130/1875 - loss: 0.2899 - 10ms/step\n",
      "step  140/1875 - loss: 0.5676 - 10ms/step\n",
      "step  150/1875 - loss: 0.3726 - 10ms/step\n",
      "step  160/1875 - loss: 0.2526 - 10ms/step\n",
      "step  170/1875 - loss: 0.2470 - 9ms/step\n",
      "step  180/1875 - loss: 0.1971 - 9ms/step\n",
      "step  190/1875 - loss: 0.2872 - 9ms/step\n",
      "step  200/1875 - loss: 0.2360 - 9ms/step\n",
      "step  210/1875 - loss: 0.4807 - 9ms/step\n",
      "step  220/1875 - loss: 0.3982 - 9ms/step\n",
      "step  230/1875 - loss: 0.2423 - 9ms/step\n",
      "step  240/1875 - loss: 0.2143 - 9ms/step\n",
      "step  250/1875 - loss: 0.2155 - 9ms/step\n",
      "step  260/1875 - loss: 0.2357 - 9ms/step\n",
      "step  270/1875 - loss: 0.2268 - 9ms/step\n",
      "step  280/1875 - loss: 0.1810 - 9ms/step\n",
      "step  290/1875 - loss: 0.2210 - 9ms/step\n",
      "step  300/1875 - loss: 0.7611 - 9ms/step\n",
      "step  310/1875 - loss: 0.5478 - 9ms/step\n",
      "step  320/1875 - loss: 0.4263 - 9ms/step\n",
      "step  330/1875 - loss: 0.4916 - 9ms/step\n",
      "step  340/1875 - loss: 0.2452 - 9ms/step\n",
      "step  350/1875 - loss: 0.2104 - 9ms/step\n",
      "step  360/1875 - loss: 0.1508 - 9ms/step\n",
      "step  370/1875 - loss: 0.4791 - 9ms/step\n",
      "step  380/1875 - loss: 0.6856 - 9ms/step\n",
      "step  390/1875 - loss: 0.4468 - 9ms/step\n",
      "step  400/1875 - loss: 0.9016 - 9ms/step\n",
      "step  410/1875 - loss: 0.5517 - 9ms/step\n",
      "step  420/1875 - loss: 0.3082 - 9ms/step\n",
      "step  430/1875 - loss: 0.5741 - 9ms/step\n",
      "step  440/1875 - loss: 0.1923 - 9ms/step\n",
      "step  450/1875 - loss: 0.2840 - 9ms/step\n",
      "step  460/1875 - loss: 0.2119 - 9ms/step\n",
      "step  470/1875 - loss: 0.4033 - 9ms/step\n",
      "step  480/1875 - loss: 0.4754 - 9ms/step\n",
      "step  490/1875 - loss: 0.2610 - 9ms/step\n",
      "step  500/1875 - loss: 0.1813 - 9ms/step\n",
      "step  510/1875 - loss: 0.2567 - 9ms/step\n",
      "step  520/1875 - loss: 0.1863 - 9ms/step\n",
      "step  530/1875 - loss: 0.3551 - 9ms/step\n",
      "step  540/1875 - loss: 0.3029 - 9ms/step\n",
      "step  550/1875 - loss: 0.3872 - 9ms/step\n",
      "step  560/1875 - loss: 0.5483 - 9ms/step\n",
      "step  570/1875 - loss: 0.1798 - 9ms/step\n",
      "step  580/1875 - loss: 0.3853 - 9ms/step\n",
      "step  590/1875 - loss: 0.0618 - 9ms/step\n",
      "step  600/1875 - loss: 0.2820 - 9ms/step\n",
      "step  610/1875 - loss: 0.3385 - 9ms/step\n",
      "step  620/1875 - loss: 0.2443 - 9ms/step\n",
      "step  630/1875 - loss: 0.2035 - 9ms/step\n",
      "step  640/1875 - loss: 0.1664 - 9ms/step\n",
      "step  650/1875 - loss: 0.7331 - 9ms/step\n",
      "step  660/1875 - loss: 0.4985 - 9ms/step\n",
      "step  670/1875 - loss: 0.2394 - 9ms/step\n",
      "step  680/1875 - loss: 0.4270 - 9ms/step\n",
      "step  690/1875 - loss: 0.1790 - 9ms/step\n",
      "step  700/1875 - loss: 0.2749 - 9ms/step\n",
      "step  710/1875 - loss: 0.5508 - 9ms/step\n",
      "step  720/1875 - loss: 0.1977 - 9ms/step\n",
      "step  730/1875 - loss: 0.1920 - 9ms/step\n",
      "step  740/1875 - loss: 0.3086 - 9ms/step\n",
      "step  750/1875 - loss: 0.1999 - 9ms/step\n",
      "step  760/1875 - loss: 0.3510 - 9ms/step\n",
      "step  770/1875 - loss: 0.4088 - 9ms/step\n",
      "step  780/1875 - loss: 0.4549 - 9ms/step\n",
      "step  790/1875 - loss: 0.5072 - 9ms/step\n",
      "step  800/1875 - loss: 0.4813 - 9ms/step\n",
      "step  810/1875 - loss: 0.3535 - 9ms/step\n",
      "step  820/1875 - loss: 0.2414 - 9ms/step\n",
      "step  830/1875 - loss: 0.4718 - 9ms/step\n",
      "step  840/1875 - loss: 0.2315 - 9ms/step\n",
      "step  850/1875 - loss: 0.1517 - 9ms/step\n",
      "step  860/1875 - loss: 0.4515 - 9ms/step\n",
      "step  870/1875 - loss: 0.3752 - 9ms/step\n",
      "step  880/1875 - loss: 0.4403 - 9ms/step\n",
      "step  890/1875 - loss: 0.2560 - 9ms/step\n",
      "step  900/1875 - loss: 0.5129 - 9ms/step\n",
      "step  910/1875 - loss: 0.3341 - 9ms/step\n",
      "step  920/1875 - loss: 0.3126 - 9ms/step\n",
      "step  930/1875 - loss: 0.3069 - 9ms/step\n",
      "step  940/1875 - loss: 0.4983 - 9ms/step\n",
      "step  950/1875 - loss: 0.1332 - 9ms/step\n",
      "step  960/1875 - loss: 0.2099 - 9ms/step\n",
      "step  970/1875 - loss: 0.2478 - 9ms/step\n",
      "step  980/1875 - loss: 0.2556 - 9ms/step\n",
      "step  990/1875 - loss: 0.1538 - 9ms/step\n",
      "step 1000/1875 - loss: 0.2690 - 9ms/step\n",
      "step 1010/1875 - loss: 0.4059 - 9ms/step\n",
      "step 1020/1875 - loss: 0.1184 - 9ms/step\n",
      "step 1030/1875 - loss: 0.4548 - 9ms/step\n",
      "step 1040/1875 - loss: 0.0920 - 9ms/step\n",
      "step 1050/1875 - loss: 0.3426 - 9ms/step\n",
      "step 1060/1875 - loss: 0.3376 - 9ms/step\n",
      "step 1070/1875 - loss: 0.3478 - 9ms/step\n",
      "step 1080/1875 - loss: 0.1371 - 9ms/step\n",
      "step 1090/1875 - loss: 0.2473 - 9ms/step\n",
      "step 1100/1875 - loss: 0.4502 - 9ms/step\n",
      "step 1110/1875 - loss: 0.2439 - 9ms/step\n",
      "step 1120/1875 - loss: 0.2776 - 9ms/step\n",
      "step 1130/1875 - loss: 0.2342 - 9ms/step\n",
      "step 1140/1875 - loss: 0.3923 - 9ms/step\n",
      "step 1150/1875 - loss: 0.2070 - 9ms/step\n",
      "step 1160/1875 - loss: 0.1362 - 9ms/step\n",
      "step 1170/1875 - loss: 0.3303 - 9ms/step\n",
      "step 1180/1875 - loss: 0.2903 - 9ms/step\n",
      "step 1190/1875 - loss: 0.1431 - 9ms/step\n",
      "step 1200/1875 - loss: 0.1964 - 9ms/step\n",
      "step 1210/1875 - loss: 0.2295 - 9ms/step\n",
      "step 1220/1875 - loss: 0.1926 - 9ms/step\n",
      "step 1230/1875 - loss: 0.0977 - 9ms/step\n",
      "step 1240/1875 - loss: 0.3104 - 9ms/step\n",
      "step 1250/1875 - loss: 0.1605 - 9ms/step\n",
      "step 1260/1875 - loss: 0.2973 - 9ms/step\n",
      "step 1270/1875 - loss: 0.2422 - 9ms/step\n",
      "step 1280/1875 - loss: 0.1205 - 9ms/step\n",
      "step 1290/1875 - loss: 0.2687 - 9ms/step\n",
      "step 1300/1875 - loss: 0.3235 - 9ms/step\n",
      "step 1310/1875 - loss: 0.2793 - 9ms/step\n",
      "step 1320/1875 - loss: 0.3142 - 9ms/step\n",
      "step 1330/1875 - loss: 0.5290 - 9ms/step\n",
      "step 1340/1875 - loss: 0.1775 - 9ms/step\n",
      "step 1350/1875 - loss: 0.1787 - 9ms/step\n",
      "step 1360/1875 - loss: 0.0631 - 9ms/step\n",
      "step 1370/1875 - loss: 0.5330 - 9ms/step\n",
      "step 1380/1875 - loss: 0.1812 - 9ms/step\n",
      "step 1390/1875 - loss: 0.5768 - 9ms/step\n",
      "step 1400/1875 - loss: 0.2694 - 9ms/step\n",
      "step 1410/1875 - loss: 0.7464 - 9ms/step\n",
      "step 1420/1875 - loss: 0.5706 - 9ms/step\n",
      "step 1430/1875 - loss: 0.1650 - 9ms/step\n",
      "step 1440/1875 - loss: 0.3077 - 9ms/step\n",
      "step 1450/1875 - loss: 0.1516 - 9ms/step\n",
      "step 1460/1875 - loss: 0.3543 - 9ms/step\n",
      "step 1470/1875 - loss: 0.3337 - 9ms/step\n",
      "step 1480/1875 - loss: 0.2563 - 9ms/step\n",
      "step 1490/1875 - loss: 0.2328 - 9ms/step\n",
      "step 1500/1875 - loss: 0.3190 - 9ms/step\n",
      "step 1510/1875 - loss: 0.4749 - 9ms/step\n",
      "step 1520/1875 - loss: 0.1989 - 9ms/step\n",
      "step 1530/1875 - loss: 0.2229 - 9ms/step\n",
      "step 1540/1875 - loss: 0.2336 - 9ms/step\n",
      "step 1550/1875 - loss: 0.3280 - 9ms/step\n",
      "step 1560/1875 - loss: 0.2472 - 9ms/step\n",
      "step 1570/1875 - loss: 0.1759 - 9ms/step\n",
      "step 1580/1875 - loss: 0.2350 - 9ms/step\n",
      "step 1590/1875 - loss: 0.5010 - 9ms/step\n",
      "step 1600/1875 - loss: 0.1289 - 9ms/step\n",
      "step 1610/1875 - loss: 0.2257 - 9ms/step\n",
      "step 1620/1875 - loss: 0.2824 - 9ms/step\n",
      "step 1630/1875 - loss: 0.1619 - 9ms/step\n",
      "step 1640/1875 - loss: 0.3739 - 9ms/step\n",
      "step 1650/1875 - loss: 0.1692 - 9ms/step\n",
      "step 1660/1875 - loss: 0.3087 - 9ms/step\n",
      "step 1670/1875 - loss: 0.2297 - 9ms/step\n",
      "step 1680/1875 - loss: 0.2915 - 9ms/step\n",
      "step 1690/1875 - loss: 0.2639 - 9ms/step\n",
      "step 1700/1875 - loss: 0.3299 - 9ms/step\n",
      "step 1710/1875 - loss: 0.2671 - 9ms/step\n",
      "step 1720/1875 - loss: 0.2138 - 9ms/step\n",
      "step 1730/1875 - loss: 0.0594 - 9ms/step\n",
      "step 1740/1875 - loss: 0.1670 - 9ms/step\n",
      "step 1750/1875 - loss: 0.2205 - 9ms/step\n",
      "step 1760/1875 - loss: 0.1044 - 9ms/step\n",
      "step 1770/1875 - loss: 0.1702 - 9ms/step\n",
      "step 1780/1875 - loss: 0.4956 - 9ms/step\n",
      "step 1790/1875 - loss: 0.1456 - 10ms/step\n",
      "step 1800/1875 - loss: 0.1196 - 10ms/step\n",
      "step 1810/1875 - loss: 0.3439 - 10ms/step\n",
      "step 1820/1875 - loss: 0.2659 - 10ms/step\n",
      "step 1830/1875 - loss: 0.3633 - 10ms/step\n",
      "step 1840/1875 - loss: 0.2221 - 9ms/step\n",
      "step 1850/1875 - loss: 0.1557 - 9ms/step\n",
      "step 1860/1875 - loss: 0.3318 - 9ms/step\n",
      "step 1870/1875 - loss: 0.0942 - 9ms/step\n",
      "step 1875/1875 - loss: 0.1826 - 9ms/step\n",
      "save checkpoint at /home/AI/paddlepaddle_learn/output/test_f/0\n",
      "save checkpoint at /home/AI/paddlepaddle_learn/output/test_f/final\n"
     ]
    }
   ],
   "source": [
    "import paddle\n",
    "import paddle.nn as nn\n",
    "import paddle.vision.transforms as T\n",
    "from paddle.vision.models import LeNet\n",
    "\n",
    "model = paddle.Model(LeNet())\n",
    "optim = paddle.optimizer.SGD(learning_rate=1e-3, parameters=model.parameters())\n",
    "model.prepare(optim, paddle.nn.CrossEntropyLoss())\n",
    "\n",
    "transform = T.Compose([T.Transpose(), T.Normalize([127.5], [127.5])])\n",
    "data = paddle.vision.datasets.MNIST(mode=\"train\", transform=transform)\n",
    "# 加载模型参数和优化器参数\n",
    "model.load(\"../output/test\")\n",
    "model.fit(data, epochs=1, batch_size=32, save_freq=1, save_dir=\"../output/test_f\")\n",
    "\n",
    "model.save(\"../output/test_1\")  # save for training\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.8/dist-packages/ipykernel/ipkernel.py:283: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.\n",
      "  and should_run_async(code)\n",
      "/usr/local/lib/python3.8/dist-packages/paddle/hapi/model.py:1956: UserWarning: 'inputs' was not specified when Model initialization, so the input shape to be saved will be the shape derived from the user's actual inputs. The input shape to be saved is [[32, 1, 28, 28]]. For saving correct input shapes, please provide 'inputs' for Model initialization.\n",
      "  warnings.warn(\n"
     ]
    }
   ],
   "source": [
    "model.save(\"../output/inference_model\", False)  # save for inference\n"
   ]
  },
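  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A model saved for inference this way (a `*.pdmodel` graph file plus `*.pdiparams` weights) can be reloaded as a static-graph `Layer` with `paddle.jit.load`. A minimal sketch, assuming the files written by the cell above exist at `../output/inference_model`:\n",
    "\n",
    "```python\n",
    "import paddle\n",
    "\n",
    "# Load the exported inference model; no LeNet class definition is needed\n",
    "loaded = paddle.jit.load(\"../output/inference_model\")\n",
    "loaded.eval()\n",
    "\n",
    "# LeNet expects NCHW MNIST-shaped input: [batch, 1, 28, 28]\n",
    "x = paddle.randn([1, 1, 28, 28], dtype=\"float32\")\n",
    "logits = loaded(x)  # shape [1, 10]\n",
    "```"
   ]
  },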
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "import paddle\n",
    "import paddle.static as static\n",
    "\n",
    "# 开启静态图模式\n",
    "paddle.enable_static()\n",
    "\n",
    "# 创建输入数据和网络\n",
    "x = paddle.static.data(name=\"x\", shape=[None, 224], dtype=\"float32\")\n",
    "z = paddle.static.nn.fc(x, 10)\n",
    "\n",
    "# 设置执行器开始训练\n",
    "place = paddle.CPUPlace()\n",
    "exe = paddle.static.Executor(place)\n",
    "exe.run(paddle.static.default_startup_program())\n",
    "prog = paddle.static.default_main_program()\n"
   ]
  },
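  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In static graph mode, a `Program`'s parameters are saved and restored with `paddle.static.save` and `paddle.static.load`. A minimal sketch continuing from the `prog` and `exe` created above (the `../output/static_example` path is an illustrative choice):\n",
    "\n",
    "```python\n",
    "# Save the Program's persistable parameters (produces .pdparams/.pdopt files)\n",
    "paddle.static.save(prog, \"../output/static_example\")\n",
    "\n",
    "# Restore them into the same Program via the executor\n",
    "paddle.static.load(prog, \"../output/static_example\", exe)\n",
    "```"
   ]
  },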
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{ // block 0\n",
      "    var x : LOD_TENSOR.shape(-1, 224).dtype(float32).stop_gradient(True)\n",
      "    persist trainable param fc_0.w_0 : LOD_TENSOR.shape(224, 10).dtype(float32).stop_gradient(False)\n",
      "    var fc_0.tmp_0 : LOD_TENSOR.shape(-1, 10).dtype(float32).stop_gradient(False)\n",
      "    persist trainable param fc_0.b_0 : LOD_TENSOR.shape(10,).dtype(float32).stop_gradient(False)\n",
      "    var fc_0.tmp_1 : LOD_TENSOR.shape(-1, 10).dtype(float32).stop_gradient(False)\n",
      "\n",
      "    {Out=['fc_0.tmp_0']} = mul(inputs={X=['x'], Y=['fc_0.w_0']}, force_fp32_output = False, op_device = , op_namescope = /, op_role = 0, op_role_var = [], scale_out = 1.0, scale_x = 1.0, scale_y = [1.0], use_mkldnn = False, x_num_col_dims = 1, y_num_col_dims = 1)\n",
      "    {Out=['fc_0.tmp_1']} = elementwise_add(inputs={X=['fc_0.tmp_0'], Y=['fc_0.b_0']}, Scale_out = 1.0, Scale_x = 1.0, Scale_y = 1.0, axis = 1, mkldnn_data_type = float32, op_device = , op_namescope = /, op_role = 0, op_role_var = [], use_mkldnn = False, use_quantizer = False, x_data_format = , y_data_format = )\n",
      "}\n",
      "\n"
     ]
     }
   ],
   "source": [
    "print(prog)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.8/dist-packages/ipykernel/ipkernel.py:283: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.\n",
      "  and should_run_async(code)\n"
     ]
    }
   ],
   "source": [
    "# 保存模型参数\n",
    "paddle.save(prog.state_dict(), \"../output/model.pdparams\")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.8/dist-packages/ipykernel/ipkernel.py:283: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.\n",
      "  and should_run_async(code)\n"
     ]
    }
   ],
   "source": [
    "# 保存模型结构（program）\n",
    "paddle.save(prog, \"../output/model.pdmodel\")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.8/dist-packages/ipykernel/ipkernel.py:283: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.\n",
      "  and should_run_async(code)\n"
     ]
    }
   ],
   "source": [
    "# 载入模型结构（program）\n",
    "prog = paddle.load(\"../output/model.pdmodel\")\n",
    "\n",
    "# 载入模型参数\n",
    "state_dict = paddle.load(\"../output/model.pdparams\")\n",
    "# 将load后的参数与模型program关联起来\n",
    "prog.set_state_dict(state_dict)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[]"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import paddle\n",
    "import numpy as np\n",
    "\n",
    "# 开启静态图模式\n",
    "paddle.enable_static()\n",
    "\n",
    "# 创建输入数据和网络\n",
    "startup_prog = paddle.static.default_startup_program()\n",
    "main_prog = paddle.static.default_main_program()\n",
    "with paddle.static.program_guard(main_prog, startup_prog):\n",
    "    image = paddle.static.data(name=\"img\", shape=[64, 784])\n",
    "    w = paddle.create_parameter(shape=[784, 200], dtype=\"float32\")\n",
    "    b = paddle.create_parameter(shape=[200], dtype=\"float32\")\n",
    "    hidden_w = paddle.matmul(x=image, y=w)\n",
    "    hidden_b = paddle.add(hidden_w, b)\n",
    "# 设置执行器开始训练\n",
    "exe = paddle.static.Executor(paddle.CPUPlace())\n",
    "exe.run(startup_prog)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.8/dist-packages/ipykernel/ipkernel.py:283: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.\n",
      "  and should_run_async(code)\n"
     ]
    }
   ],
   "source": [
    "# 保存静态图推理模型\n",
    "path_prefix = \"../output/infer_model\"\n",
    "paddle.static.save_inference_model(path_prefix, [image], [hidden_b], exe)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 载入静态图推理模型\n",
    "[\n",
    "    inference_program,\n",
    "    feed_target_names,\n",
    "    fetch_targets,\n",
    "] = paddle.static.load_inference_model(path_prefix, exe)\n",
    "tensor_img = np.array(np.random.random((64, 784)), dtype=np.float32)\n",
    "results = exe.run(\n",
    "    inference_program,\n",
    "    feed={feed_target_names[0]: tensor_img},\n",
    "    fetch_list=fetch_targets,\n",
    ")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(['img'],\n",
       " [var save_infer_model/scale_0.tmp_0 : LOD_TENSOR.shape(64, 200).dtype(float32).stop_gradient(False)])"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "feed_target_names, fetch_targets"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 五、总结\n",
    "飞桨框架同时支持动态图和静态图，优先推荐使用动态图训练，兼容支持静态图。\n",
    "\n",
    "1. 如果用于训练调优场景，动态图和静态图均使用 paddle.save和paddle.load保存和加载模型参数，或者在高层 API 训练场景下使用 paddle.Model.save和 paddle.Model.load。\n",
    "\n",
    "2. 如果用于推理部署场景，动态图模型需先转为静态图模型再保存，使用 paddle.jit.save和paddle.jit.load保存和加载模型结构和参数；静态图模型直接使用 paddle.static.save_inference_model和paddle.static.load_inference_model保存和加载模型结构和参数。"
   ]
  }
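  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of item 2 above: a dynamic graph `Layer` can be converted to a static graph and saved with `paddle.jit.save` by providing an `InputSpec`, then loaded back with `paddle.jit.load` for inference. `SimpleNet` and the save path here are illustrative assumptions, not part of the tutorial above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import paddle\n",
    "from paddle.static import InputSpec\n",
    "\n",
    "# Return to dynamic graph mode (the cells above enabled static mode)\n",
    "paddle.disable_static()\n",
    "\n",
    "# A minimal illustrative network (hypothetical; any paddle.nn.Layer works)\n",
    "class SimpleNet(paddle.nn.Layer):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.fc = paddle.nn.Linear(784, 10)\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.fc(x)\n",
    "\n",
    "net = SimpleNet()\n",
    "# Convert to static graph and save structure + parameters in one step\n",
    "paddle.jit.save(\n",
    "    net,\n",
    "    \"../output/jit_model\",\n",
    "    input_spec=[InputSpec(shape=[None, 784], dtype=\"float32\")],\n",
    ")\n",
    "\n",
    "# Load the saved model back and run inference\n",
    "loaded = paddle.jit.load(\"../output/jit_model\")\n",
    "loaded.eval()\n",
    "out = loaded(paddle.rand([1, 784]))\n",
    "print(out.shape)\n"
   ]
  }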
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
