{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "f5c95f47",
   "metadata": {},
   "source": [
    "# resnext50样例开发\n",
    "\n",
    "[TOC]\n",
    "\n",
    "---\n",
    "\n",
    "## 1. MindStudio安装\n",
    "\n",
    "### 1.1 下载软件包\n",
    "\n",
    "cann 安装流程及软件包获取参看https://support.huaweicloud.com/cann/index.html  \n",
    "MindStudio软件包获取链接https://www.hiascend.com/software/mindstudio/download\n",
    "\n",
    "### 1.2 安装依赖\n",
    "\n",
    "检查系统是否安装python依赖以及gcc等软件。\n",
    "分别使用如下命令检查是否安装gcc，make以及python依赖软件等。\n",
    "```\n",
    "gcc --version\n",
    "g++ --version\n",
    "make --version\n",
    "cmake --version\n",
    "dpkg -l zlib1g-dev| grep zlib1g-dev| grep ii\n",
    "dpkg -l libbz2-dev| grep libbz2-dev| grep ii\n",
    "dpkg -l libsqlite3-dev| grep libsqlite3-dev| grep ii\n",
    "dpkg -l libssl-dev| grep libssl-dev| grep ii\n",
    "dpkg -l libffi-dev| grep libffi-dev| grep ii\n",
    "dpkg -l unzip| grep unzip| grep ii\n",
    "dpkg -l pciutils| grep pciutils| grep ii\n",
    "dpkg -l net-tools| grep net-tools| grep ii\n",
    "dpkg -l libblas-dev| grep libblas-dev| grep ii\n",
    "dpkg -l gfortran| grep gfortran| grep ii\n",
    "dpkg -l libblas3| grep libblas3| grep ii\n",
    "dpkg -l liblapack-dev| grep liblapack-dev| grep ii\n",
    "dpkg -l openssh-server| grep openssh-server| grep ii\n",
    "dpkg -l xterm| grep xterm| grep ii\n",
    "dpkg -l firefox| grep firefox| grep ii\n",
    "dpkg -l xdg-utils| grep xdg-utils| grep ii\n",
    "dpkg -l libdbus-glib-1-dev | grep libdbus-glib-1-dev  | grep ii\n",
    "dpkg -l gdb | grep gdb  | grep ii\n",
    "```\n",
    "\n",
    "若分别返回如下信息则说明已经安装，进入下一步（以下回显仅为示例，请以实际情况为准）。\n",
    "```\n",
    "gcc (Ubuntu 7.3.0-3ubuntu1~18.04) 7.3.0\n",
    "g++ (Ubuntu 7.3.0-3ubuntu1~18.04) 7.3.0\n",
    "GNU Make 4.1\n",
    "cmake version 3.10.2\n",
    "zlib1g-dev:arm64 1:1.2.11.dfsg-0ubuntu2 arm64        compression library - development\n",
    "libbz2-dev:arm64 1.0.6-8.1ubuntu0.2 arm64        high-quality block-sorting file compressor library - development\n",
    "libsqlite3-dev:arm64 3.22.0-1ubuntu0.3 arm64        SQLite 3 development files\n",
    "libssl-dev:arm64 1.1.1-1ubuntu2.1~18.04.6 arm64     Secure Sockets Layer toolkit - development files\n",
    "libffi-dev:arm64 3.2.1-8      arm64        Foreign Function Interface library (development files)\n",
    "unzip          6.0-21ubuntu1 arm64        De-archiver for .zip files\n",
    "pciutils       1:3.5.2-1ubuntu1 arm64        Linux PCI Utilities\n",
    "net-tools      1.60+git20161116.90da8a0-1ubuntu1 arm64        NET-3 networking toolkit\n",
    "libblas-dev:arm64 3.7.1-4ubuntu1 arm64        Basic Linear Algebra Subroutines 3, static library\n",
    "gfortran       4:7.4.0-1ubuntu2.3 arm64        GNU Fortran 95 compiler\n",
    "libblas3:arm64 3.7.1-4ubuntu1 arm64     Basic Linear Algebra Reference implementations, shared library\n",
    "liblapack-dev:arm64 3.7.1-4ubuntu1 arm64        Library of linear algebra routines 3 - static version\n",
    "openssh-server 1:7.6p1-4ubuntu0.5 arm64        secure shell (SSH) server, for secure access from remote machines\n",
    "xterm          330-1ubuntu2 arm64        X terminal emulator\n",
    "firefox        83.0+build2-0ubuntu0.18.04.2 arm64        Safe and easy web browser from Mozilla\n",
    "xdg-utils      1.1.2-1ubuntu2.5 all          desktop integration utilities from freedesktop.org\n",
    "ii  libdbus-glib-1-dev 0.110-2      arm64        deprecated library for D-Bus IPC (development files)\n",
    "ii  gdb            8.1.1-0ubuntu1 arm64        GNU Debugger\n",
    "```\n",
    "否则请执行如下安装命令（如果只有部分软件未安装，则如下命令修改为只安装还未安装的软件即可）："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "38dbec57",
   "metadata": {},
   "outputs": [],
   "source": [
    "!apt-get install -y gcc g++ make cmake zlib1g-dev libbz2-dev libsqlite3-dev libssl-dev libffi-dev unzip pciutils net-tools libblas-dev gfortran libblas3 liblapack-dev openssh-server xterm firefox xdg-utils libdbus-glib-1-dev gdb"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cfb222c6",
   "metadata": {},
   "source": [
    "### 1.3 安装MindStudio\n",
    "\n",
    "解压MindStudio_{version}_linux.tar.gz软件包:\n",
    "```\n",
    "tar -zxvf MindStudio_{version}_linux.tar.gz\n",
    "```\n",
    "解压jbr至MindStudio安装根目录，jbr下载链接请根据操作系统架构选择。\n",
    "\n",
    "x86_64链接：https://cache-redirector.jetbrains.com/intellij-jbr/jbr_dcevm-11_0_10-linux-x64-b1341.35.tar.gz  \n",
    "aarch64链接：https://cache-redirector.jetbrains.com/intellij-jbr/jbr-11_0_10-linux-aarch64-b1341.35.tar.gz  \n",
    "解压jbr至MindStudio根目录后目录结构如下：\n",
    "```\n",
    "├── bin\n",
    "├── jbr        //确认将压缩包中的jbr文件夹解压至MindStudio根目录           \n",
    "├── lib                              \n",
    "├── ......  \n",
    "```\n",
    "使用MindStudio的安装用户进入软件包解压后的MindStudio/bin目录，执行如下命令：\n",
    "```\n",
    "cd MindStudio/bin\n",
    "./MindStudio.sh\n",
    "```\n",
    "\n",
    "**备注：MindStudio安装流程详细请参照https://support.huaweicloud.com/devg-mindstudio304/atlasms_02_0002.html**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6dae9da8",
   "metadata": {},
   "source": [
    "## 2. 样例开发流程图\n",
    "\n",
    "![典型样例开发](./流程.PNG)\n",
    "\n",
    "## 3. 环境配置\n",
    "\n",
    "### 3.1 安装依赖包\n",
    "\n",
    "- python==3.7.5\n",
    "- cann==5.0.4 (开发套件包）\n",
    "- torch==1.5.0 (Ascend)\n",
    "- torchvision==0.6.0\n",
    "- noah_vega==1.8.1 (AutoML工具)\n",
    "\n",
    "cann安装版本需根据自身服务器类型选择`x86_64`或者`aarch64`,使用以下命令可查看："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d96a22d2",
   "metadata": {},
   "outputs": [],
   "source": [
    "!arch "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1afd50e5",
   "metadata": {},
   "source": [
    "cann 安装流程及软件包获取参看《CANN 软件安装指南》https://www.hiascend.com/document/detail/zh/canncommercial/504/envdeployment/instg/instg_000046.html  \n",
    "torch 安装流程及部署参看https://gitee.com/ascend/pytorch/tree/v1.5.0-3.0.rc1/  \n",
    "\n",
    "pip安装流程，如下（如出现包无法获取，可更换pip国内源获取）："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "992885da",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install torchvision==0.6.0 \n",
    "!pip install noah_vega==1.8.1"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "53759f28",
   "metadata": {},
   "source": [
    "### 3.2 环境变量配置\n",
    "\n",
    "本段编辑环境变量文件过程均在**命令行**中完成，请打开并执行`cd /{文件路径}/resnext50/`进到目录下\n",
    "\n",
    "#### 环境变量配置\n",
    "\n",
    "为防止多开发者使用同一服务器，环境搭建存在冲突，当前全部环境变量配置在shell脚本中，下面先创建一个新脚本文件：\n",
    "\n",
    "```\n",
    "vim env.sh\n",
    "```\n",
    "\n",
    "#### env.sh脚本内环境变量配置\n",
    "\n",
    "现在进入到`env.sh`的编辑界面，按`i`，界面最下方出现`-- INSERT --`表示进入**编辑模式**\n",
    "\n",
    "1. 根据自己程序的所使用的python环境配置PYTHONPATH:\n",
    "```\n",
    "export PYTHONPATH=/{文件路径}/lib/python3.7/site-packages/:$PYTHONPATH\n",
    "```\n",
    "2. 驱动与cann安装后默认会在`/usr/local/Ascend`目录下生成自己的环境变量配置脚本（如cann安装在自定义目录，请配置自定义目录），`source`以下两个脚本:\n",
    "```\n",
    "source /usr/local/Ascend/driver/bin/setenv.bash\n",
    "source /usr/local/Ascend/ascend-toolkit/set_env.sh\n",
    "```\n",
    "3. 配置单卡单模型，多卡多模型并行环境变量，`/ascend-toolkit`目录下可选取安装好的cann版本文件夹，如本样例的`5.0.4`，也可以是软链接到`5.0.4`的`latest`，如下:\n",
    "```\n",
    "export install_path=/usr/local/Ascend/ascend-toolkit/5.0.4\n",
    "```\n",
    "```\n",
    "export LD_LIBRARY_PATH=${install_path}/compiler/lib64/:$LD_LIBRARY_PATH\n",
    "export TBE_IMPL_PATH=${install_path}/opp/op_impl/built-in/ai_core/tbe:$TBE_IMPL_PATH\n",
    "```\n",
    "```\n",
    "export JOB_ID=10087  #该项不建议改动\n",
    "export DEVICE_ID=2  #单卡训练使用的device_id\n",
    "```\n",
    "```\n",
    "export NPU_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \n",
    "```\n",
    "配置完成，按`Esc`按键，下方`-- INSERT --`消失，表示退出**编辑模式**，按`:wq`然后按`Enter`结束文件编辑。\n",
    "\n",
    "#### 执行环境变量脚本\n",
    "```\n",
    "source env.sh\n",
    "```\n",
    "#### QA\n",
    "\n",
    "如果后续执行代码运行出现如下错误时，添加环境变量LD_PRELOAD\n",
    "\n",
    "```python\n",
    "ImportError: /root/.../lib/python3.7/site-packages/sklearn/__check_build/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0: cannot allocate memory in static TLS \n",
    "```\n",
    "\n",
    "环境变量增加项（**路径根据自己实际出错的路劲做修改**）:\n",
    "```\n",
    "export LD_PRELOAD=/root/.../lib/python3.7/site-packages/sklearn/__check_build/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0:$LD_PRELOAD\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dd7f5968",
   "metadata": {},
   "source": [
    "## 4. AutoML过程\n",
    "\n",
    "1. **微调阶段（fine_tune）**\n",
    "加载权重后，训练并评估模型精度。\n",
    "\n",
    "2. **剪枝阶段（nas）**\n",
    "    采样剪枝模型。\n",
    "    微调训练剪枝模型，评估模型精度。\n",
    "    重复1~2，经过N次采样和学习，剪枝结束，选取若干个top精度的剪枝模型作为输出结果，进行下一步训练。\n",
    "\n",
    "  **备注**：改阶段采样的训练过程可多卡并行，每个采样的样本模型在一张卡上运行，即3个不同的剪枝后的模型在三张卡上同时训练，即将支持单模型多卡并行。\n",
    "\n",
    "3. **剪枝后训练阶段（fully_train）**\n",
    "    训练剪枝后的模型。\n",
    "\n",
    "  **备注**：该部分多模型多卡并行原理，与nas部分相同\n",
    "\n",
    "### 4.1 模型及数据集准备\n",
    "\n",
    "- **模型**：Ascend ModelZoo提供的resnext50模型\n",
    "- **数据集**：cifar10\n",
    "- **预训练模型权重**：Ascend ModelZoo提供的resnext50预训练权重(数据集使用imagenet)\n",
    "\n",
    "#### 模型添加获取方法\n",
    "\n",
    "使用vim工具打开{下载目录}/ResNeXt-50-32x4d_for_PyTorch/models/resnet_0_6_0.py, 添加如下代码(AutoML获取模型通过get_model())：\n",
    "```\n",
    "def get_model():\n",
    "    return resnext50_32x4d()\n",
    "```\n",
    "\n",
    "#### 数据集获取\n",
    "cifar10数据集下载到`./data`目录下："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "709256c2",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torchvision\n",
    "\n",
    "# cifar10数据集下载\n",
    "trainset = torchvision.datasets.CIFAR10(\n",
    "    root='./data', train=True, download=True)\n",
    "testset = torchvision.datasets.CIFAR10(\n",
    "    root='./data', train=False, download=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "337e0d44",
   "metadata": {},
   "source": [
    "#### 预训练权重处理\n",
    "因为ModelZoo训练模型是在DP/DDP模式下的并行训练，vega默认加载的模型权重为非DP/DDP模式下的，torch官方在存储state_dict上有小的差异，需采用如下方式重新存储："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7ab37024",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "checkpoint = torch.load('../ResNeXt-50-32x4d_for_PyTorch/model_best.pth.tar', map_location='cpu') # 读取从Ascend ModelZoo下载的预训练模型参数\n",
    "checkpoint_trans = {k.replace('module.',''):v for k,v in checkpoint['state_dict'].items()}\n",
    "torch.save(checkpoint_trans, './model/resnext50.pth') # 重新保存"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "47b264f8",
   "metadata": {},
   "source": [
    "### 4.2 模型调优文件配置\n",
    "\n",
    "模型调优配置主要参照对应的样例目录`{CANN包安装路径}/ascend-toolkit/latest/tools/ascend_automl/examples/`，配置文件参照目录如下：  \n",
    "`{CANN包安装路径}/ascend-toolkit/latest/tools/ascend_automl/examples/classification/pytorch/resnext50/resnext50_dag.yml`\n",
    "\n",
    "当前样例已给出`./resnext50_dag.yml`，配置介绍[6.分段模型调优配置介绍](##6.分段模型调优配置介绍)\n",
    "\n",
    "### 4.3 执行AutoML工具\n",
    "\n",
    "执行如下："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "853bb161",
   "metadata": {},
   "outputs": [],
   "source": [
    "!vega resnext50_dag.yml -d NPU"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3b66f9cf",
   "metadata": {},
   "source": [
    "### 4.4 AutoML输出展示\n",
    "\n",
    "1. **微调阶段（fine_tune）**\n",
    "\n",
    "   当前batch_size=64，前三个batch运行时间如下：\n",
    "   \n",
    "   ```sh\n",
    "2022-06-29 16:27:37,188 INFO worker id [0], epoch [1/5], train step [  0/781], loss [   8.812,    8.812], lr [   0.0100000],  time pre batch [231.097s] , total mean time per batch [231.097s]\n",
    "2022-06-29 16:28:09,865 INFO worker id [0], epoch [1/5], train step [ 10/781], loss [   2.025,    4.449], lr [   0.0100000],  time pre batch [3.264s] , total mean time per batch [3.264s]\n",
    "2022-06-29 16:28:11,047 INFO worker id [0], epoch [1/5], train step [ 20/781], loss [   1.839,    3.048], lr [   0.0100000],  time pre batch [0.114s] , total mean time per batch [1.689s]\n",
    "   ```\n",
    "   \n",
    "   最终平均每个batch运行时间如下：\n",
    "   \n",
    "   ```sh\n",
    "   total mean time per batch [0.126s]\n",
    "   ```\n",
    "   \n",
    "   输出output/fine_tune/performance_0.json文件，具体性能精度如下：\n",
    "   \n",
    "   ```python\n",
    "{\"flops\": 4.259383296, \"params\": 25028.904000000002, \"accuracy\": 0.9520232371794872, \"accuracy_top1\": 0.9520232371794872, \"accuracy_top5\": 0.9992988782051282, \"latency\": 9.004661993822083}\n",
    "   ```\n",
    "   \n",
    "2. **剪枝阶段（nas）**\n",
    "\n",
    "   nas部分抽取了32个样本模型中的前三个模型，每个模型的前三个batch运行时间如下：\n",
    "\n",
    "   ```sh\n",
    "2022-06-29 16:41:39,124 INFO flops: 3.9502991360000004 , params:23846.696\n",
    "2022-06-29 16:44:59,860 INFO worker id [1], epoch [1/1], train step [  0/781], loss [   3.255,    3.255], lr [   0.0100000],  time pre batch [112.170s] , total mean time per batch [112.170s]\n",
    "2022-06-29 16:45:03,020 INFO worker id [1], epoch [1/1], train step [ 10/781], loss [   3.215,    3.625], lr [   0.0100000],  time pre batch [0.312s] , total mean time per batch [0.312s]\n",
    "2022-06-29 16:45:03,495 INFO worker id [1], epoch [1/1], train step [ 20/781], loss [   3.446,    3.512], lr [   0.0100000],  time pre batch [0.043s] , total mean time per batch [0.178s]\n",
    "   ```\n",
    "\n",
    "   ```sh\n",
    "2022-06-29 16:50:22,951 INFO flops: 3.3290485760000004 , params:16128.424\n",
    "2022-06-29 16:55:12,593 INFO worker id [13], epoch [1/1], train step [  0/781], loss [   2.701,    2.701], lr [   0.0100000],  time pre batch [182.512s] , total mean time per batch [182.512s]\n",
    "2022-06-29 16:55:15,923 INFO worker id [13], epoch [1/1], train step [ 10/781], loss [   2.630,    2.575], lr [   0.0100000],  time pre batch [0.328s] , total mean time per batch [0.328s]\n",
    "2022-06-29 16:55:16,520 INFO worker id [13], epoch [1/1], train step [ 20/781], loss [   2.260,    2.410], lr [   0.0100000],  time pre batch [0.053s] , total mean time per batch [0.190s]\n",
    "   ```\n",
    "\n",
    "   ```sh\n",
    "2022-06-29 16:58:53,571 INFO flops: 3.40834816 , params:22467.88\n",
    "2022-06-29 17:03:04,025 INFO worker id [20], epoch [1/1], train step [  0/781], loss [   2.800,    2.800], lr [   0.0100000],  time pre batch [145.841s] , total mean time per batch [145.841s]\n",
    "2022-06-29 17:03:07,246 INFO worker id [20], epoch [1/1], train step [ 10/781], loss [   2.834,    2.786], lr [   0.0100000],  time pre batch [0.318s] , total mean time per batch [0.318s]\n",
    "2022-06-29 17:03:07,779 INFO worker id [20], epoch [1/1], train step [ 20/781], loss [   2.171,    2.423], lr [   0.0100000],  time pre batch [0.048s] , total mean time per batch [0.183s]\n",
    "   ```\n",
    "\n",
    "   由于剪枝后模型不同，平均每个batch运行时间具有明显差异性，将在第三部分（fully_train）输出性能top的剪枝模型平均每个batch运行时间，模型的FLOPs和params也因同样原因在第三部分展示。  \n",
    "\n",
    "3. **剪枝后训练阶段（fully_train）**\n",
    "\n",
    "   所有nas后结果相对好的模型，在fully_train后结果如下：\n",
    "   \n",
    "   | 剪枝训练后模型 | flops              | params    | accuracy_top1      | accuracy_top5      | latency            |\n",
    "| -------------- | ------------------ | --------- | ------------------ | ------------------ | ------------------ |\n",
    "| 32             | 4.259383296 | 25028.904000000002  | 0.9655448717948718 | 0.9985977564102564 | 8.932083990657702  |\n",
    "| 12             | 3.432009728 | 17441.704  | 0.9618389423076923 | 0.9988982371794872 | 9.759492002194747 |\n",
    "\n",
    "\n",
    "\n",
    "   ```sh\n",
    "2022-06-29 17:44:53,397 INFO   result:\n",
    "2022-06-29 17:44:53,404 INFO    32:  {'flops': 4.259383296, 'params': 25028.904000000002, 'accuracy': 0.9655448717948718, 'accuracy_top1': 0.9655448717948718, 'accuracy_top5': 0.9985977564102564, 'latency': 8.932083990657702}\n",
    "2022-06-29 17:44:53,404 INFO    12:  {'flops': 3.432009728, 'params': 17441.704, 'accuracy': 0.9618389423076923, 'accuracy_top1': 0.9618389423076923, 'accuracy_top5': 0.9988982371794872, 'latency': 9.759492002194747}\n",
    "   ```\n",
    "\n",
    "   此处挑选了模型32, 12给出模型的前三个batch运行时间如下：\n",
    "\n",
    "   模型32\n",
    "\n",
    "   ```sh\n",
    "2022-06-29 17:16:37,033 INFO flops: 4.259383296 , params:25028.904000000002\n",
    "2022-06-29 17:21:38,379 INFO worker id [32], epoch [1/10], train step [  0/781], loss [   0.021,    0.021], lr [   0.0100000],  time pre batch [219.593s] , total mean time per batch [219.593s]\n",
    "2022-06-29 17:22:11,274 INFO worker id [32], epoch [1/10], train step [ 10/781], loss [   0.004,    0.019], lr [   0.0100000],  time pre batch [3.284s] , total mean time per batch [3.284s]\n",
    "2022-06-29 17:22:12,459 INFO worker id [32], epoch [1/10], train step [ 20/781], loss [   0.016,    0.020], lr [   0.0100000],  time pre batch [0.114s] , total mean time per batch [1.699s]\n",
    "   ```\n",
    "   \n",
    "   模型12\n",
    "   \n",
    "   ```sh\n",
    "2022-06-29 17:16:34,652 INFO flops: 3.432009728 , params:17441.704\n",
    "2022-06-29 17:22:02,543 INFO worker id [12], epoch [1/10], train step [  0/781], loss [   0.147,    0.147], lr [   0.0100000],  time pre batch [236.749s] , total mean time per batch [236.749s]\n",
    "2022-06-29 17:22:32,975 INFO worker id [12], epoch [1/10], train step [ 10/781], loss [   0.306,    0.206], lr [   0.0100000],  time pre batch [3.039s] , total mean time per batch [3.039s]\n",
    "2022-06-29 17:22:34,086 INFO worker id [12], epoch [1/10], train step [ 20/781], loss [   0.271,    0.198], lr [   0.0100000],  time pre batch [0.106s] , total mean time per batch [1.572s]\n",
    "   ```\n",
    "   \n",
    "   模型32，12平均每个batch运行时间如下：\n",
    "   \n",
    "   模型32\n",
    "   \n",
    "   ```sh\n",
    "   total mean time per batch [0.131s]\n",
    "   ```\n",
    "   \n",
    "   模型12\n",
    "   \n",
    "   ```sh\n",
    "   total mean time per batch [0.128s]\n",
    "   ```\n",
    "   \n",
    "\n",
    "AutoML整体时间消耗，本次测试nas部分为单卡运行，fully_train部分为8卡并行\n",
    "\n",
    "```\n",
    "2022-06-29 17:44:53,387 INFO   running time:\n",
    "2022-06-29 17:44:53,387 INFO          fine_tune:  0:20:26  [2022-06-29 16:20:03.816056 - 2022-06-29 16:40:30.078978]\n",
    "2022-06-29 17:44:53,387 INFO                nas:  0:35:20  [2022-06-29 16:40:30.100223 - 2022-06-29 17:15:50.113236]\n",
    "2022-06-29 17:44:53,387 INFO        fully_train:  0:28:48  [2022-06-29 17:15:50.123342 - 2022-06-29 17:44:38.289061]\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d8215099",
   "metadata": {},
   "source": [
    "### 4.5 AutoML输出模型\n",
    "\n",
    "文件配置修改`{文件路径}/ascend-toolkit/5.0.4/tools/ascend_automl/core/common/callbacks/onnx_export.py\n",
    "\n",
    "onnx格式模型输出（仅支持PyTorch框架）\n",
    "基于训练脚本的模型自动调优支持输出调优后模型的onnx格式，操作参考如下：\n",
    "\n",
    "1. 在yml文件中nas阶段的trainer字段中配置callbacks字段：\n",
    "```\n",
    "callbacks:OnnxModelExportCallback\n",
    "```\n",
    "2. 启动AutoML调优任务。\n",
    "任务结束后会在`{task_id}/output/nas/`路径下输出model_{worker_id}.onnx文件。\n",
    "\n",
    "若用户需将onnx文件转换成PyTorch脚本定义的模型，具体步骤可参考https://github.com/fumihwh/onnx-pytorch.\n",
    "\n",
    "## 5. 推理评估服务\n",
    "\n",
    "模型评估服务是用于评估模型在特定硬件设备上的性能，AutoML过程通常在训练服务器执行，模型结果会考察推理服务器的结果，评估服务目前支持的硬件设备为Davinci推理芯片（Atlas 200 DK、Ascend 310系列产品和开发板环境Evb)和手机，后继会扩展支持更多的设备。\n",
    "\n",
    "### 5.1 安装依赖包\n",
    "\n",
    "- cann==5.0.4 (开发套件包）\n",
    "- evaluate_service==1.8.1 (Automl评估服务工具)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fe02155d",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install evaluate-service==1.8.1"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d5db5d91",
   "metadata": {},
   "source": [
    "### 5.2 环境文件配置\n",
    "\n",
    "环境配置脚本内容如下，参照[3.2 环境变量配置](##3.2环境变量配置)方式配置\n",
    "```\n",
    "source {CANN包安装路径}/ascend-toolkit/set_env.sh\n",
    "\n",
    "export install_path={CANN包安装路径}/ascend-toolkit/latest\n",
    "export DDK_PATH=${install_path}\n",
    "export PATH=${install_path}/compiler/bin:${install_path}/compiler/ccec_compiler/bin:${PATH}\n",
    "export LD_LIBRARY_PATH=${install_path}/lib64:/usr/local/Ascend/driver/lib64:/usr/local/Ascend/driver/lib64/stub:${LD_LIBRARY_PATH}\n",
    "export PYTHONPATH=${install_path}/python/site-packages:${install_path}/compiler/python/site-packages:${install_path}/compiler/python/site-packages/auto_tune.egg:${install_path}/compiler/python/site-packages/schedule_search.egg:${install_path}/opp/op_impl/built-in/ai_core/tbe:${PYTHONPATH}\n",
    "export ASCEND_OPP_PATH=${install_path}/opp\n",
    "export SOC_VERSION=Ascend310\n",
    "export NPU_HOST_LIB=${install_path}/acllib/lib64/stub/\n",
    "\n",
    "export JOB_ID=10087  #该项不建议改动\n",
    "```\n",
    "### 5.3 推理评估服务配置\n",
    "\n",
    "参照配置介绍[分段模型调优配置介绍](##5.分段模型调优配置介绍)\n",
    "\n",
    "### 5.4 启动推理评估服务\n",
    "使用如下命令启动评估服务：\n",
    "\n",
    "`vega-evaluate_service-service -i {your_ip_adress} -p {port} -w {your_work_path} -t {davinci_environment_type} -s`\n",
    "其中：\n",
    "  \n",
    "`-i`参数 指定当前使用的服务器的ip地址  \n",
    "`-p`参数 指定当前使用的服务器的的监听端口，默认值8888  \n",
    "`-w`参数 指定工作路径， 程序运行时的中间文件将存储在该目录下，请使用绝对路径  \n",
    "`-t`参数 表示推理芯片型号，默认值Ascend310  \n",
    "`-s`参数 表示启动安全模式  "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "783892e0",
   "metadata": {},
   "source": [
    "## 6. 分段模型调优配置介绍\n",
    "\n",
    "**general**部分主要配置如下：\n",
    "\n",
    "```yaml\n",
    "general:\n",
    "    backend: pytorch # 模型框架选择\n",
    "    parallel_search: True # nas部分是否使用并行搜索\n",
    "    parallel_fully_train: True # fully_train部分是否使用并行策略\n",
    "    worker:\n",
    "        timeout: 72000\n",
    "    logger:\n",
    "        level: info # logger等级可调为debug\n",
    "    task:\n",
    "        local_base_path: ./resnext50_dag_output # 开发者自行给出automl输出目录\n",
    "    cluster: {}\n",
    "```\n",
    "\n",
    "**pipline**部分主要配置如下：\n",
    "\n",
    "```yaml\n",
    "pipeline: [fine_tune, nas, fully_train] # 配置好你所需要的流程，如果要跳过某部分，需要对下个部分的文件输入做路径修改\n",
    "```\n",
    "\n",
    "**fine_tune**部分主要配置如下：\n",
    "\n",
    "```yaml\n",
    "fine_tune:\n",
    "    pipe_step:\n",
    "        type: TrainPipeStep\n",
    "    model:\n",
    "        pretrained_model_file: ./model/resnext50.pth\n",
    "        model_desc:\n",
    "            type: Script2Vega\n",
    "            path: ./ResNeXt-50-32x4d_for_PyTorch/models/resnet_0_6_0.py\n",
    "            image_size: 224\n",
    "    trainer:\n",
    "        type: Trainer\n",
    "        with_train: true\n",
    "        model_statistics: true\n",
    "        epochs: 5\n",
    "        mixup: false\n",
    "        optimizer:\n",
    "            type: SGD\n",
    "            params:\n",
    "                lr: 0.01\n",
    "                momentum: 0.9\n",
    "                weight_decay: 0.0001\n",
    "        lr_scheduler:\n",
    "            type: MultiStepLR\n",
    "            params:\n",
    "                milestones:[30, 60, 90]\n",
    "                gamma: 0.1\n",
    "        loss:\n",
    "            type: CrossEntropyLoss\n",
    "    evaluator:\n",
    "        type: Evaluator\n",
    "        host_evaluator:\n",
    "            type: HostEvaluator\n",
    "            metric:\n",
    "                type: accuracy\n",
    "```\n",
    "\n",
    "**fine_tune**当中的**dataset**部分主要配置如下：\n",
    "\n",
    "```yaml\n",
    "dataset:\n",
    "    type: Cifar10 # 根据个人需要可修改其他数据集，如imagenet\n",
    "    common:\n",
    "      data_path: \"./data\" # 个人之前下载好的cifar10数据集路径\n",
    "      batch_size: 64\n",
    "    train:\n",
    "      transforms: # 数据转换方式这里可以根据数据集以及个人实际情况做修改，后续nas和fully_train部分的数据集会直接调用此处\n",
    "        -   type: Resize\n",
    "          #    size: [256, 256]\n",
    "            #-   type: RandomCrop\n",
    "            size: [224, 224]\n",
    "            #-   type: RandomHorizontalFlip\n",
    "        -   type: ToTensor\n",
    "        -   type: Normalize\n",
    "            mean: [0.4914, 0.4822, 0.4465]\n",
    "            std: [0.2023, 0.1994, 0.2010]\n",
    "    val:\n",
    "      transforms:\n",
    "        -   type: Resize\n",
    "            size: [224, 224]\n",
    "        -   type: ToTensor\n",
    "        -   type: Normalize\n",
    "            mean: [0.4914, 0.4822, 0.4465]\n",
    "            std: [0.2023, 0.1994, 0.2010]\n",
    "    test:\n",
    "      transforms:\n",
    "        -   type: Resize\n",
    "            size: [224, 224]\n",
    "        -   type: ToTensor\n",
    "        -   type: Normalize\n",
    "            mean: [0.4914, 0.4822, 0.4465]\n",
    "            std: [0.2023, 0.1994, 0.2010]\n",
    "```\n",
    "\n",
    "**nas**部分主要配置如下：\n",
    "\n",
    "```yaml\n",
    "nas:\n",
    "    pipe_step:\n",
    "        type: SearchPipeStep\n",
    "    search_algorithm:\n",
    "        type: RandomSearch\n",
    "        objective_keys: [ 'accuracy', 'latency' ] # 评价剪枝后模型的标准\n",
    "        policy:\n",
    "            num_sample: 32 # 剪枝后的测试样本\n",
    "    search_space:\n",
    "        type: PruneDAGSearchSpace\n",
    "    model:\n",
    "        model_desc_file: '{local_base_path}/output/fine_tune/desc_0.json'\n",
    "        pretrained_model_file: '{local_base_path}/output/fine_tune/model_0.pth'\n",
    "    trainer:\n",
    "        type: Trainer\n",
    "        epochs: 1\n",
    "        callbacks: AdaptiveBatchNormalizationCallback\n",
    "        optimizer:\n",
    "            type: SGD\n",
    "            params:\n",
    "                lr: 0.01\n",
    "                momentum: 0.9\n",
    "                weight_decay: 0.0001\n",
    "        lr_scheduler:\n",
    "            type: MultiStepLR\n",
    "            params:\n",
    "                milestones: [ 30, 60, 90 ]\n",
    "                gamma: 0.1\n",
    "        loss:\n",
    "            type: CrossEntropyLoss\n",
    "            params:\n",
    "                sparse: true\n",
    "    dataset:\n",
    "        ref: fine_tune.dataset\n",
    "    evaluator:\n",
    "        ref: fine_tune.evaluator\n",
    "```\n",
    "\n",
    "**fully_train**部分主要配置如下：\n",
    "\n",
    "```yaml\n",
    "fully_train:\n",
    "    pipe_step:\n",
    "        type: TrainPipeStep\n",
    "        models_folder: '{local_base_path}/output/nas/'\n",
    "    trainer:\n",
    "        ref: fine_tune.trainer\n",
    "        epochs: 10 # 剪枝后较为合适的模型，继续训练的epoch数量\n",
    "        optimizer:\n",
    "            type: SGD\n",
    "            params:\n",
    "                lr: 0.01\n",
    "                momentum: 0.9\n",
    "                weight_decay: 0.0001\n",
    "        lr_scheduler:\n",
    "            type: MultiStepLR\n",
    "            params:\n",
    "                milestones: [10, 20, 30]\n",
    "                gamma: 0.1\n",
    "        loss:\n",
    "            type: CrossEntropyLoss\n",
    "    dataset:\n",
    "        ref: fine_tune.dataset\n",
    "    evaluator:\n",
    "        ref: fine_tune.evaluator\n",
    "```\n",
    "\n",
    "**备注**：type可以灵活替换，可根据个人训练模型的实际情况替换为其他参数，现有方法及更新内容可以参看https://github.com/huawei-noah/vega/blob/master/docs/cn/developer/developer_guide.md"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
