{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "f5c95f47",
   "metadata": {},
   "source": [
    "# Typical Resource Development\n",
    "\n",
    "[TOC]\n",
    "\n",
    "---\n",
    "\n",
    "## 1. Resource Development Flowchart\n",
    "\n",
    "![Typical resource development flow](./典型资源开发流程.PNG)\n",
    "\n",
    "## 2. Environment Setup\n",
    "\n",
    "### 2.1 Install Packages\n",
    "\n",
    "- python==3.7.5\n",
    "- cann==5.1.T100\n",
    "- torch==1.5.0 (Ascend)\n",
    "- torchvision==0.6.0\n",
    "- noah_vega==1.8.1\n",
    "\n",
    "Choose the CANN package matching your server architecture, `x86_64` or `aarch64`; check it with the following command:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d96a22d2",
   "metadata": {},
   "outputs": [],
   "source": [
    "!arch "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1afd50e5",
   "metadata": {},
   "source": [
    "For the CANN installation guide and packages, see https://support.huaweicloud.com/cann/index.html  \n",
    "For torch installation and deployment, see https://gitee.com/ascend/pytorch/tree/v1.5.0-3.0.rc1/  \n",
    "\n",
    "Install the remaining packages with pip as follows (if a package cannot be fetched, switch pip to a domestic mirror):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "992885da",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install torchvision==0.6.0 \n",
    "!pip install noah_vega==1.8.1"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "53759f28",
   "metadata": {},
   "source": [
    "### 2.2 Environment Variables\n",
    "\n",
    "1. Set `PYTHONPATH` according to the Python environment your program uses:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7e92e8df",
   "metadata": {},
   "outputs": [],
   "source": [
    "!export PYTHONPATH=/root/.../lib/python3.7/site-packages/:$PYTHONPATH"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5361e6d9",
   "metadata": {},
   "source": [
    "2. Installing CANN normally generates environment setup scripts under `/usr/local/Ascend`; `source` the following two scripts:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "90a54e61",
   "metadata": {},
   "outputs": [],
   "source": [
    "!source /usr/local/Ascend/driver/bin/setenv.bash\n",
    "!source /usr/local/Ascend/ascend-toolkit/set_env.sh"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e3286dc0",
   "metadata": {},
   "source": [
    "3. Configure the environment variables for single-card single-model and multi-card multi-model parallel runs. Under `/ascend-toolkit` you can point at any installed CANN version folder, e.g. `5.1.T100` in this example, or the `latest` symlink that resolves to `5.1.T100`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7c111be9",
   "metadata": {},
   "outputs": [],
   "source": [
    "!export install_path=/usr/local/Ascend/ascend-toolkit/5.1.T100\n",
    "\n",
    "!export LD_LIBRARY_PATH=${install_path}/compiler/lib64/:$LD_LIBRARY_PATH\n",
    "!export TBE_IMPL_PATH=${install_path}/opp/op_impl/built-in/ai_core/tbe:$TBE_IMPL_PATH\n",
    "\n",
    "!export JOB_ID=10087  # changing this value is not recommended\n",
    "!export DEVICE_ID=2  # device_id used for single-card training\n",
    "\n",
    "!export NPU_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 "
   ]
  },
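  {
   "cell_type": "markdown",
   "id": "b2e41a10",
   "metadata": {},
   "source": [
    "Each `!export` above runs in its own throwaway subshell, so these variables do not persist between notebook cells; run the commands in the shell that launches the notebook. Alternatively, as a minimal sketch reusing the example values above, the same settings can be applied to the notebook's own process via `os.environ`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2e41a11",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "# example value from this tutorial; adjust to your installed CANN version\n",
    "install_path = \"/usr/local/Ascend/ascend-toolkit/5.1.T100\"\n",
    "\n",
    "def prepend_env(name, value):\n",
    "    # prepend value to a colon-separated environment variable\n",
    "    old = os.environ.get(name, \"\")\n",
    "    os.environ[name] = value + (\":\" + old if old else \"\")\n",
    "\n",
    "prepend_env(\"LD_LIBRARY_PATH\", install_path + \"/compiler/lib64/\")\n",
    "prepend_env(\"TBE_IMPL_PATH\", install_path + \"/opp/op_impl/built-in/ai_core/tbe\")\n",
    "os.environ[\"JOB_ID\"] = \"10087\"  # changing this value is not recommended\n",
    "os.environ[\"DEVICE_ID\"] = \"2\"  # device used for single-card training\n",
    "os.environ[\"NPU_VISIBLE_DEVICES\"] = \"0,1,2,3,4,5,6,7\""
   ]
  },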
  {
   "cell_type": "markdown",
   "id": "b5c6370c",
   "metadata": {},
   "source": [
    "#### QA\n",
    "\n",
    "If the following error appears when running later code, add the environment variable LD_PRELOAD:\n",
    "\n",
    "```python\n",
    "ImportError: /root/.../lib/python3.7/site-packages/sklearn/__check_build/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0: cannot allocate memory in static TLS \n",
    "```\n",
    "\n",
    "Entry to add (**replace the path with the one reported in your own error**):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f5ea7b5",
   "metadata": {},
   "outputs": [],
   "source": [
    "#!export LD_PRELOAD=/root/.../lib/python3.7/site-packages/sklearn/__check_build/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0:$LD_PRELOAD"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dd7f5968",
   "metadata": {},
   "source": [
    "## 3. Vega AutoML Workflow\n",
    "\n",
    "1. **Fine-tuning stage (fine_tune)**\n",
    "    Load the pretrained weights, then train the model and evaluate its accuracy.\n",
    "\n",
    "2. **Pruning stage (nas)**\n",
    "    Sample a pruned model.\n",
    "    Fine-tune the pruned model and evaluate its accuracy.\n",
    "    Repeat the two steps above; after N rounds of sampling and training, pruning ends and several top-accuracy pruned models are selected as the output for the next stage.\n",
    "\n",
    "  **Note**: training of the sampled models in this stage can run on multiple cards in parallel, one sampled model per card, e.g. 3 different pruned models training on three cards simultaneously.\n",
    "\n",
    "3. **Post-pruning training stage (fully_train)**\n",
    "    Train the pruned models.\n",
    "\n",
    "  **Note**: multi-model multi-card parallelism here works on the same principle as in the nas stage.\n",
    "\n",
    "### 3.1 Model and Dataset Preparation\n",
    "\n",
    "- **Model**: the resnet18 model provided by pytorch\n",
    "- **Pretrained weights**: the resnet18 pretrained weights provided by Ascend ModelZoo\n",
    "- **Dataset**: cifar10\n",
    "\n",
    "Download the cifar10 dataset to the `./data` directory:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "709256c2",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torchvision\n",
    "\n",
    "# download the cifar10 dataset\n",
    "trainset = torchvision.datasets.CIFAR10(\n",
    "    root='./data', train=True, download=True)\n",
    "testset = torchvision.datasets.CIFAR10(\n",
    "    root='./data', train=False, download=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "337e0d44",
   "metadata": {},
   "source": [
    "ModelZoo trains its models in DP/DDP parallel mode, while vega by default loads weights saved in non-DP/DDP mode; torch's official state_dict storage differs slightly between the two, so the weights must be re-saved as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7ab37024",
   "metadata": {},
   "outputs": [],
   "source": [
    "!python pth_trans.py"
   ]
  },
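  {
   "cell_type": "markdown",
   "id": "c7d92b20",
   "metadata": {},
   "source": [
    "`pth_trans.py` is not reproduced here, but the conversion it performs can be sketched: a DP/DDP checkpoint prefixes every key in the state_dict with `module.`, so stripping that prefix yields weights loadable in non-DP/DDP mode. A minimal sketch (the file names below are hypothetical; adjust to your own checkpoint):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c7d92b21",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "def strip_ddp_prefix(state_dict):\n",
    "    # DP/DDP wraps the model, so saved keys look like \"module.conv1.weight\"\n",
    "    return {k[len(\"module.\"):] if k.startswith(\"module.\") else k: v\n",
    "            for k, v in state_dict.items()}\n",
    "\n",
    "# hypothetical file names; adjust to your own paths\n",
    "if os.path.exists(\"./model/resnet18_ddp.pth\"):\n",
    "    import torch\n",
    "    state = torch.load(\"./model/resnet18_ddp.pth\", map_location=\"cpu\")\n",
    "    torch.save(strip_ddp_prefix(state), \"./model/resnet18_all.pth\")"
   ]
  },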
  {
   "cell_type": "markdown",
   "id": "47b264f8",
   "metadata": {},
   "source": [
    "### 3.2 Hyperparameter File Configuration\n",
    "\n",
    "Hyperparameter configuration mainly follows the corresponding sample under `{CANN install path}/ascend-toolkit/latest/tools/ascend_automl/examples/`. Taking resnet18 as the example, the reference configuration file is:  \n",
    "`{CANN install path}/ascend-toolkit/latest/tools/ascend_automl/examples/classification/pytorch/resnet18/resnet18_dag.yml`\n",
    "\n",
    "This sample already provides `./resnet18_dag.yml`; Section 5 walks through the configuration section by section.\n",
    "\n",
    "### 3.3 Running Vega\n",
    "\n",
    "Run the following:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "853bb161",
   "metadata": {},
   "outputs": [],
   "source": [
    "!vega resnet18_dag.yml -d NPU"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3b66f9cf",
   "metadata": {},
   "source": [
    "## 4. AutoML Output\n",
    "\n",
    "1. **Fine-tuning stage (fine_tune)**\n",
    "\n",
    "   With batch_size=128, the run times of the first three logged batches are:\n",
    "   \n",
    "   ```sh\n",
    "   2022-03-30 11:08:40,479 INFO worker id [0], epoch [1/10], train step [  0/390], loss [  10.130,   10.130], lr [   0.0100000],  time pre batch [116.002s] , total mean time per batch [116.002s]\n",
    "   2022-03-30 11:09:08,920 INFO worker id [0], epoch [1/10], train step [ 10/390], loss [   1.784,    4.728], lr [   0.0100000],  time pre batch [2.836s] , total mean time per batch [2.836s]\n",
    "   2022-03-30 11:09:11,665 INFO worker id [0], epoch [1/10], train step [ 20/390], loss [   1.036,    3.143], lr [   0.0100000],  time pre batch [0.261s] , total mean time per batch [1.549s]\n",
    "   ```\n",
    "   \n",
    "   The final average time per batch is:\n",
    "   \n",
    "   ```sh\n",
    "   total mean time per batch [0.319s]\n",
    "   ```\n",
    "   \n",
    "   The stage writes output/fine_tune/performance_0.json; the performance and accuracy are:\n",
    "   \n",
    "   ```python\n",
    "   {\"flops\": 1.819066368, \"params\": 11689.512, \"accuracy\": 0.9493189102564102, \"accuracy_top1\": 0.9493189102564102, \"accuracy_top5\": 0.9990985576923077, \"latency\": 27.226957998936996} \n",
    "   ```\n",
    "   \n",
    "2. **Pruning stage (nas)**\n",
    "\n",
    "   Of the 64 sampled models in the nas stage, the first three are shown here; the run times of the first three logged batches of each model are:\n",
    "\n",
    "   ```sh\n",
    "   2022-03-30 11:36:22,682 INFO flops: 1.5552983040000001 , params:7716.776\n",
    "   2022-03-30 11:38:41,748 INFO worker id [2], epoch [1/1], train step [  0/390], loss [   0.364,    0.364], lr [   0.0030000],  time pre batch [138.025s] , total mean time per batch [138.025s]\n",
    "   2022-03-30 11:39:08,431 INFO worker id [2], epoch [1/1], train step [ 10/390], loss [   0.169,    0.259], lr [   0.0030000],  time pre batch [2.658s] , total mean time per batch [2.658s]\n",
    "   2022-03-30 11:39:10,970 INFO worker id [2], epoch [1/1], train step [ 20/390], loss [   0.086,    0.208], lr [   0.0030000],  time pre batch [0.246s] , total mean time per batch [1.452s]\n",
    "   ```\n",
    "\n",
    "   ```sh\n",
    "   2022-03-30 11:45:54,426 INFO flops: 1.127917056 , params:7021.992\n",
    "   2022-03-30 11:48:37,170 INFO worker id [12], epoch [1/1], train step [  0/390], loss [   1.904,    1.904], lr [   0.0030000],  time pre batch [161.527s] , total mean time per batch [161.527s]\n",
    "   2022-03-30 11:49:06,416 INFO worker id [12], epoch [1/1], train step [ 10/390], loss [   1.027,    1.389], lr [   0.0030000],  time pre batch [2.917s] , total mean time per batch [2.917s]\n",
    "   2022-03-30 11:49:09,102 INFO worker id [12], epoch [1/1], train step [ 20/390], loss [   0.659,    1.148], lr [   0.0030000],  time pre batch [0.261s] , total mean time per batch [1.589s]\n",
    "   ```\n",
    "\n",
    "   ```sh\n",
    "   2022-03-30 12:06:13,240 INFO flops: 1.61018368 , params:10132.264000000001\n",
    "   2022-03-30 12:08:25,222 INFO worker id [32], epoch [1/1], train step [  0/390], loss [   0.354,    0.354], lr [   0.0030000],  time pre batch [130.673s] , total mean time per batch [130.673s]\n",
    "   2022-03-30 12:08:51,048 INFO worker id [32], epoch [1/1], train step [ 10/390], loss [   0.250,    0.265], lr [   0.0030000],  time pre batch [2.575s] , total mean time per batch [2.575s]\n",
    "   2022-03-30 12:08:53,545 INFO worker id [32], epoch [1/1], train step [ 20/390], loss [   0.034,    0.215], lr [   0.0030000],  time pre batch [0.241s] , total mean time per batch [1.408s]\n",
    "   ```\n",
    "\n",
    "   Because the pruned models differ, the average time per batch varies noticeably between them; the average per-batch times of the top-performing pruned models are therefore reported in the third stage (fully_train), and for the same reason the models' FLOPs and params are shown there as well.  \n",
    "\n",
    "3. **Post-pruning training stage (fully_train)**\n",
    "\n",
    "   All the relatively good models from the nas stage produce the following results after fully_train:\n",
    "\n",
    "   ```sh\n",
    "   2022-03-30 13:49:17,103 INFO   result:\n",
    "   2022-03-30 13:49:17,109 INFO    22:  {'flops': 1.1599616000000001, 'params': 3905.512, 'accuracy': 0.9242788461538461, 'accuracy_top1': 0.9242788461538461, 'accuracy_top5': 0.9975961538461539, 'latency': 28.30104000022402}\n",
    "   2022-03-30 13:49:17,109 INFO    11:  {'flops': 1.0725550080000001, 'params': 5236.808, 'accuracy': 0.9293870192307693, 'accuracy_top1': 0.9293870192307693, 'accuracy_top5': 0.9977964743589743, 'latency': 26.74245099624386}\n",
    "   2022-03-30 13:49:17,109 INFO    41:  {'flops': 1.819066368, 'params': 11689.512, 'accuracy': 0.9496193910256411, 'accuracy_top1': 0.9496193910256411, 'accuracy_top5': 0.9990985576923077, 'latency': 26.701773998502176}\n",
    "   2022-03-30 13:49:17,109 INFO     6:  {'flops': 1.6167065600000001, 'params': 7559.72, 'accuracy': 0.9415064102564102, 'accuracy_top1': 0.9415064102564102, 'accuracy_top5': 0.9987980769230769, 'latency': 26.371871000446845}\n",
    "   2022-03-30 13:49:17,109 INFO    61:  {'flops': 1.819066368, 'params': 11689.512, 'accuracy': 0.9496193910256411, 'accuracy_top1': 0.9496193910256411, 'accuracy_top5': 0.9990985576923077, 'latency': 26.746704000106547}\n",
    "   2022-03-30 13:49:17,109 INFO    35:  {'flops': 1.6421027840000002, 'params': 7827.496, 'accuracy': 0.9447115384615384, 'accuracy_top1': 0.9447115384615384, 'accuracy_top5': 0.9988982371794872, 'latency': 28.03112300025532}\n",
    "   2022-03-30 13:49:17,109 INFO     8:  {'flops': 1.352813056, 'params': 6020.328, 'accuracy': 0.9340945512820513, 'accuracy_top1': 0.9340945512820513, 'accuracy_top5': 0.9981971153846154, 'latency': 28.510089000337757}\n",
    "   2022-03-30 13:49:17,109 INFO    57:  {'flops': 1.7034106880000002, 'params': 11099.432, 'accuracy': 0.9437099358974359, 'accuracy_top1': 0.9437099358974359, 'accuracy_top5': 0.9990985576923077, 'latency': 26.797360001364723}\n",
    "   2022-03-30 13:49:17,110 INFO     2:  {'flops': 1.5552983040000001, 'params': 7716.776, 'accuracy': 0.9419070512820513, 'accuracy_top1': 0.9419070512820513, 'accuracy_top5': 0.9990985576923077, 'latency': 26.695608996669762}\n",
    "   2022-03-30 13:49:17,110 INFO    45:  {'flops': 1.732261888, 'params': 11578.792, 'accuracy': 0.9453125, 'accuracy_top1': 0.9453125, 'accuracy_top5': 0.9990985576923077, 'latency': 26.531805004924536}\n",
    "   2022-03-30 13:49:17,110 INFO    62:  {'flops': 1.202912256, 'params': 7117.832, 'accuracy': 0.9354967948717948, 'accuracy_top1': 0.9354967948717948, 'accuracy_top5': 0.9983974358974359, 'latency': 26.88416899909498}\n",
    "   2022-03-30 13:49:17,110 INFO ------------------------------------------------\n",
    "   ```\n",
    "\n",
    "   Models 22, 35 and 62 are picked here; the run times of their first three logged batches are:\n",
    "\n",
    "   Model 22\n",
    "\n",
    "   ```sh\n",
    "   2022-03-30 12:57:16,095 INFO flops: 1.1599616000000001 , params:3905.512\n",
    "   2022-03-30 12:59:50,541 INFO worker id [22], epoch [1/10], train step [  0/390], loss [   0.171,    0.171], lr [   0.0030000],  time pre batch [153.234s] , total mean time per batch [153.234s]\n",
    "   2022-03-30 13:00:17,384 INFO worker id [22], epoch [1/10], train step [ 10/390], loss [   0.143,    0.139], lr [   0.0030000],  time pre batch [2.673s] , total mean time per batch [2.673s]\n",
    "   2022-03-30 13:00:20,048 INFO worker id [22], epoch [1/10], train step [ 20/390], loss [   0.102,    0.131], lr [   0.0030000],  time pre batch [0.259s] , total mean time per batch [1.466s]\n",
    "   ```\n",
    "   \n",
    "   Model 35\n",
    "   \n",
    "   ```sh\n",
    "   2022-03-30 12:57:03,237 INFO flops: 1.6421027840000002 , params:7827.496\n",
    "   2022-03-30 12:59:12,515 INFO worker id [35], epoch [1/10], train step [  0/390], loss [   0.013,    0.013], lr [   0.0030000],  time pre batch [128.455s] , total mean time per batch [128.455s]\n",
    "   2022-03-30 12:59:38,651 INFO worker id [35], epoch [1/10], train step [ 10/390], loss [   0.034,    0.014], lr [   0.0030000],  time pre batch [2.606s] , total mean time per batch [2.606s]\n",
    "   2022-03-30 12:59:41,290 INFO worker id [35], epoch [1/10], train step [ 20/390], loss [   0.013,    0.014], lr [   0.0030000],  time pre batch [0.256s] , total mean time per batch [1.431s]\n",
    "   ```\n",
    "\n",
    "   Model 62\n",
    "   \n",
    "   ```sh\n",
    "   2022-03-30 13:22:42,174 INFO flops: 1.202912256 , params:7117.832\n",
    "   2022-03-30 13:25:35,541 INFO worker id [62], epoch [1/10], train step [  0/390], loss [   0.053,    0.053], lr [   0.0030000],  time pre batch [171.911s] , total mean time per batch [171.911s]\n",
    "   2022-03-30 13:26:07,707 INFO worker id [62], epoch [1/10], train step [ 10/390], loss [   0.090,    0.054], lr [   0.0030000],  time pre batch [3.208s] , total mean time per batch [3.208s]\n",
    "   2022-03-30 13:26:10,425 INFO worker id [62], epoch [1/10], train step [ 20/390], loss [   0.011,    0.043], lr [   0.0030000],  time pre batch [0.263s] , total mean time per batch [1.736s]\n",
    "   ```\n",
    "   \n",
    "   The average time per batch for models 22, 35 and 62 is:\n",
    "   \n",
    "   Model 22\n",
    "   \n",
    "   ```sh\n",
    "   total mean time per batch [0.264s]\n",
    "   ```\n",
    "   \n",
    "   Model 35\n",
    "   \n",
    "   ```sh\n",
    "   total mean time per batch [0.263s]\n",
    "   ```\n",
    "   \n",
    "   Model 62\n",
    "   \n",
    "   ```sh\n",
    "   total mean time per batch [0.281s]\n",
    "   ```\n",
    "\n",
    "Overall AutoML time consumption; in this test the nas stage ran on a single card while fully_train ran on 8 cards in parallel:\n",
    "\n",
    "```\n",
    "2022-03-30 13:49:17,094 INFO   running time:\n",
    "2022-03-30 13:49:17,094 INFO          fine_tune:  0:30:03  [2022-03-30 11:04:11.576198 - 2022-03-30 11:34:15.145749]\n",
    "2022-03-30 13:49:17,094 INFO                nas:  1:20:57  [2022-03-30 11:34:15.169738 - 2022-03-30 12:55:12.519387]\n",
    "2022-03-30 13:49:17,094 INFO        fully_train:  0:53:49  [2022-03-30 12:55:12.532375 - 2022-03-30 13:49:01.808672]\n",
    "```"
   ]
  },
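  {
   "cell_type": "markdown",
   "id": "d4a81c30",
   "metadata": {},
   "source": [
    "The per-model records printed above can also be ranked programmatically, which is essentially what the top-model selection does. A minimal sketch using three (rounded) entries from the fully_train table above, sorted by accuracy:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4a81c31",
   "metadata": {},
   "outputs": [],
   "source": [
    "# three rounded entries from the fully_train results above\n",
    "results = {\n",
    "    \"22\": {\"flops\": 1.160, \"params\": 3905.5, \"accuracy\": 0.9243, \"latency\": 28.30},\n",
    "    \"41\": {\"flops\": 1.819, \"params\": 11689.5, \"accuracy\": 0.9496, \"latency\": 26.70},\n",
    "    \"62\": {\"flops\": 1.203, \"params\": 7117.8, \"accuracy\": 0.9355, \"latency\": 26.88},\n",
    "}\n",
    "\n",
    "# rank model ids by accuracy, highest first\n",
    "top = sorted(results, key=lambda mid: results[mid][\"accuracy\"], reverse=True)\n",
    "print(top)"
   ]
  },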
  {
   "cell_type": "markdown",
   "id": "783892e0",
   "metadata": {},
   "source": [
    "## 5. Section-by-Section Hyperparameter Configuration\n",
    "\n",
    "Main settings in the **general** section:\n",
    "\n",
    "```yaml\n",
    "general:\n",
    "  backend: pytorch # framework selection\n",
    "  parallel_search: True # whether the nas stage searches in parallel\n",
    "  parallel_fully_train: True # whether the fully_train stage uses the parallel strategy\n",
    "  worker:\n",
    "    timeout: 72000\n",
    "  logger:\n",
    "    level: info # logger level; can be raised to debug\n",
    "  task:\n",
    "    local_base_path: ./dag_output # automl output directory chosen by the developer\n",
    "```\n",
    "\n",
    "Main settings in the **pipeline** section:\n",
    "\n",
    "```yaml\n",
    "pipeline: [fine_tune, nas, fully_train] # list the stages you need; to skip a stage, adjust the input file paths of the following stage\n",
    "```\n",
    "\n",
    "Main settings in the **fine_tune** section:\n",
    "\n",
    "```yaml\n",
    "fine_tune:\n",
    "  pipe_step:\n",
    "    type: TrainPipeStep\n",
    "  model:\n",
    "    pretrained_model_file: \"./model/resnet18_all.pth\" # pretrained weights; this sample uses pytorch pretrained weights\n",
    "    model_desc:\n",
    "      type: Script2Vega\n",
    "      path: \"./resnet18.py\" # model script path; the file must provide a get_model() method that returns your model. resnet18 is used here as the sample and can be replaced with your own model, as long as it fits the nas pruning search and the dataset\n",
    "      image_size: 224\n",
    "\n",
    "  trainer:\n",
    "    type: Trainer\n",
    "    with_train: True\n",
    "    model_statistics: True\n",
    "    epochs: 10\n",
    "    mixup: False\n",
    "    optimizer:\n",
    "      type: SGD\n",
    "      params:\n",
    "        lr: 0.01\n",
    "        momentum: 0.9\n",
    "        weight_decay: !!float 1e-4\n",
    "    lr_scheduler:\n",
    "      type: MultiStepLR\n",
    "      params:\n",
    "        milestones: [30, 60, 90]\n",
    "        gamma: 0.1\n",
    "    loss:\n",
    "      type: CrossEntropyLoss\n",
    "\n",
    "  evaluator:\n",
    "    type: Evaluator\n",
    "    #device_evaluator:\n",
    "    host_evaluator:\n",
    "      #type: DeviceEvaluator\n",
    "      type: HostEvaluator\n",
    "      metric:\n",
    "        type: accuracy\n",
    "```\n",
    "\n",
    "The **dataset** part inside **fine_tune**:\n",
    "\n",
    "```yaml\n",
    "dataset:\n",
    "    type: Cifar10 # change to another dataset such as imagenet if needed\n",
    "    common:\n",
    "      data_path: \"./data\" # path of the cifar10 dataset downloaded earlier\n",
    "      batch_size: 128\n",
    "    train:\n",
    "      transforms: # adapt these transforms to your dataset as needed; the nas and fully_train stages reference this dataset section directly\n",
    "        -   type: Resize\n",
    "            size: [224, 224]\n",
    "            #size: [256, 256]\n",
    "        #-   type: RandomCrop\n",
    "        #-   type: RandomHorizontalFlip\n",
    "        -   type: ToTensor\n",
    "        -   type: Normalize\n",
    "            mean: [0.4914, 0.4822, 0.4465]\n",
    "            std: [0.2023, 0.1994, 0.2010]\n",
    "    val:\n",
    "      transforms:\n",
    "        -   type: Resize\n",
    "            size: [224, 224]\n",
    "        -   type: ToTensor\n",
    "        -   type: Normalize\n",
    "            mean: [0.4914, 0.4822, 0.4465]\n",
    "            std: [0.2023, 0.1994, 0.2010]\n",
    "    test:\n",
    "      transforms:\n",
    "        -   type: Resize\n",
    "            size: [224, 224]\n",
    "        -   type: ToTensor\n",
    "        -   type: Normalize\n",
    "            mean: [0.4914, 0.4822, 0.4465]\n",
    "            std: [0.2023, 0.1994, 0.2010]\n",
    "```\n",
    "\n",
    "Main settings in the **nas** section:\n",
    "\n",
    "```yaml\n",
    "nas:\n",
    "  pipe_step:\n",
    "    type: SearchPipeStep\n",
    "  search_algorithm:\n",
    "    type: RandomSearch\n",
    "    objective_keys: [ 'accuracy', 'latency' ] # criteria used to rank the pruned models\n",
    "    #objective_keys: ['accuracy', 'params']\n",
    "    policy:\n",
    "      num_sample: 64 # number of pruned models to sample\n",
    "  search_space:\n",
    "    type: PruneDAGSearchSpace\n",
    "\n",
    "  model:\n",
    "    model_desc_file: \"{local_base_path}/output/fine_tune/desc_0.json\"\n",
    "    pretrained_model_file: \"{local_base_path}/output/fine_tune/model_0.pth\"\n",
    "\n",
    "  trainer:\n",
    "    type: Trainer\n",
    "    epochs: 1\n",
    "    #callbacks: AdaptiveBatchNormalizationCallback\n",
    "    callbacks: NetworkLoaderCreateCallback\n",
    "    optimizer:\n",
    "      type: SGD\n",
    "      params:\n",
    "        lr: 0.003\n",
    "        momentum: 0.9\n",
    "        weight_decay: !!float 1e-4\n",
    "    lr_scheduler:\n",
    "      type: MultiStepLR\n",
    "      params:\n",
    "        milestones: [ 30, 60, 90 ]\n",
    "        gamma: 0.1\n",
    "    loss:\n",
    "      type: CrossEntropyLoss\n",
    "      params:\n",
    "        sparse: True\n",
    "  dataset:\n",
    "    ref: fine_tune.dataset\n",
    "  evaluator:\n",
    "    ref: fine_tune.evaluator\n",
    "```\n",
    "\n",
    "Main settings in the **fully_train** section:\n",
    "\n",
    "```yaml\n",
    "fully_train:\n",
    "  pipe_step:\n",
    "    type: TrainPipeStep\n",
    "    #type: HcclTrainStep\n",
    "    models_folder: \"{local_base_path}/output/nas/\"\n",
    "\n",
    "  trainer:\n",
    "    ref: fine_tune.trainer\n",
    "    epochs: 10 # number of epochs to continue training the selected pruned models\n",
    "    optimizer:\n",
    "      type: SGD\n",
    "      params:\n",
    "        lr: 0.003\n",
    "        momentum: 0.9\n",
    "        weight_decay: !!float 1e-4\n",
    "    lr_scheduler:\n",
    "      type: MultiStepLR\n",
    "      params:\n",
    "        milestones: [10, 20, 30]\n",
    "        gamma: 0.1\n",
    "    loss:\n",
    "      type: CrossEntropyLoss\n",
    "  dataset:\n",
    "    ref: fine_tune.dataset\n",
    "  evaluator:\n",
    "    ref: fine_tune.evaluator\n",
    "```\n",
    "\n",
    "**Note**: most type values can be swapped flexibly and replaced with other options to match your own model; for the available methods and updates see https://github.com/huawei-noah/vega/blob/master/docs/cn/developer/developer_guide.md"
   ]
  },
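  {
   "cell_type": "markdown",
   "id": "e9f23d40",
   "metadata": {},
   "source": [
    "The `Script2Vega` model description above expects the script given by `path` to expose a `get_model()` function. A minimal sketch of what `./resnet18.py` might contain, assuming torchvision's resnet18 and the 10 cifar10 classes:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e9f23d41",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torchvision.models as models\n",
    "\n",
    "def get_model():\n",
    "    # return the model instance that vega will fine-tune and prune\n",
    "    return models.resnet18(num_classes=10)"
   ]
  },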
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f4faa05f",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
