{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "fde28515",
   "metadata": {},
   "source": [
    "# yolov5 Sample Development\n",
    "\n",
    "[TOC]\n",
    "\n",
    "---\n",
    "\n",
    "## 1. Installing MindStudio\n",
    "\n",
    "### 1.1 Downloading the Packages\n",
    "\n",
    "For the CANN installation procedure and package downloads, see https://support.huawei.com/enterprise/zh/ascend-computing/cann-pid-251168373/software/256539485?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251168373 \n",
    "The MindStudio package can be downloaded from https://www.hiascend.com/software/mindstudio/download\n",
    "\n",
    "### 1.2 Installing Dependencies\n",
    "\n",
    "Check whether the Python dependencies and tools such as gcc are installed on the system.\n",
    "Use the following commands to check gcc, make, the Python dependencies, and so on:\n",
    "```\n",
    "gcc --version\n",
    "g++ --version\n",
    "make --version\n",
    "cmake --version\n",
    "dpkg -l zlib1g-dev| grep zlib1g-dev| grep ii\n",
    "dpkg -l libbz2-dev| grep libbz2-dev| grep ii\n",
    "dpkg -l libsqlite3-dev| grep libsqlite3-dev| grep ii\n",
    "dpkg -l libssl-dev| grep libssl-dev| grep ii\n",
    "dpkg -l libffi-dev| grep libffi-dev| grep ii\n",
    "dpkg -l unzip| grep unzip| grep ii\n",
    "dpkg -l pciutils| grep pciutils| grep ii\n",
    "dpkg -l net-tools| grep net-tools| grep ii\n",
    "dpkg -l libblas-dev| grep libblas-dev| grep ii\n",
    "dpkg -l gfortran| grep gfortran| grep ii\n",
    "dpkg -l libblas3| grep libblas3| grep ii\n",
    "dpkg -l liblapack-dev| grep liblapack-dev| grep ii\n",
    "dpkg -l openssh-server| grep openssh-server| grep ii\n",
    "dpkg -l xterm| grep xterm| grep ii\n",
    "dpkg -l firefox| grep firefox| grep ii\n",
    "dpkg -l xdg-utils| grep xdg-utils| grep ii\n",
    "dpkg -l libdbus-glib-1-dev | grep libdbus-glib-1-dev  | grep ii\n",
    "dpkg -l gdb | grep gdb  | grep ii\n",
    "```\n",
    "\n",
    "If each command returns output like the following, the package is already installed and you can move on to the next step (the output below is an example only; your actual output may differ).\n",
    "```\n",
    "gcc (Ubuntu 7.3.0-3ubuntu1~18.04) 7.3.0\n",
    "g++ (Ubuntu 7.3.0-3ubuntu1~18.04) 7.3.0\n",
    "GNU Make 4.1\n",
    "cmake version 3.10.2\n",
    "zlib1g-dev:arm64 1:1.2.11.dfsg-0ubuntu2 arm64        compression library - development\n",
    "libbz2-dev:arm64 1.0.6-8.1ubuntu0.2 arm64        high-quality block-sorting file compressor library - development\n",
    "libsqlite3-dev:arm64 3.22.0-1ubuntu0.3 arm64        SQLite 3 development files\n",
    "libssl-dev:arm64 1.1.1-1ubuntu2.1~18.04.6 arm64     Secure Sockets Layer toolkit - development files\n",
    "libffi-dev:arm64 3.2.1-8      arm64        Foreign Function Interface library (development files)\n",
    "unzip          6.0-21ubuntu1 arm64        De-archiver for .zip files\n",
    "pciutils       1:3.5.2-1ubuntu1 arm64        Linux PCI Utilities\n",
    "net-tools      1.60+git20161116.90da8a0-1ubuntu1 arm64        NET-3 networking toolkit\n",
    "libblas-dev:arm64 3.7.1-4ubuntu1 arm64        Basic Linear Algebra Subroutines 3, static library\n",
    "gfortran       4:7.4.0-1ubuntu2.3 arm64        GNU Fortran 95 compiler\n",
    "libblas3:arm64 3.7.1-4ubuntu1 arm64     Basic Linear Algebra Reference implementations, shared library\n",
    "liblapack-dev:arm64 3.7.1-4ubuntu1 arm64        Library of linear algebra routines 3 - static version\n",
    "openssh-server 1:7.6p1-4ubuntu0.5 arm64        secure shell (SSH) server, for secure access from remote machines\n",
    "xterm          330-1ubuntu2 arm64        X terminal emulator\n",
    "firefox        83.0+build2-0ubuntu0.18.04.2 arm64        Safe and easy web browser from Mozilla\n",
    "xdg-utils      1.1.2-1ubuntu2.5 all          desktop integration utilities from freedesktop.org\n",
    "ii  libdbus-glib-1-dev 0.110-2      arm64        deprecated library for D-Bus IPC (development files)\n",
    "ii  gdb            8.1.1-0ubuntu1 arm64        GNU Debugger\n",
    "```\n",
    "Otherwise, run the installation command below (if only some of the packages are missing, trim the command down to install just those):"
   ]
  },
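  {
   "cell_type": "markdown",
   "id": "a3f1c0de",
   "metadata": {},
   "source": [
    "The checks above can also be scripted. Below is a minimal Python sketch (the helper `missing_packages` is our own, not part of any Ascend tooling) that parses `dpkg -l` output and reports which required packages are not in the installed (`ii`) state:\n",
    "```python\n",
    "def missing_packages(dpkg_output, required):\n",
    "    # Collect names of packages whose dpkg status is 'ii' (installed).\n",
    "    installed = set()\n",
    "    for line in dpkg_output.splitlines():\n",
    "        parts = line.split()\n",
    "        if len(parts) >= 2 and parts[0] == 'ii':\n",
    "            # dpkg may print 'name:arch'; keep only the name.\n",
    "            installed.add(parts[1].split(':')[0])\n",
    "    return [pkg for pkg in required if pkg not in installed]\n",
    "```\n",
    "Feed it the output of `dpkg -l` (for example via `subprocess.check_output(['dpkg', '-l'], text=True)`) together with the package list from the commands above."
   ]
  },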
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "88d29667",
   "metadata": {},
   "outputs": [],
   "source": [
    "!apt-get install -y gcc g++ make cmake zlib1g-dev libbz2-dev libsqlite3-dev libssl-dev libffi-dev unzip pciutils net-tools libblas-dev gfortran libblas3 liblapack-dev openssh-server xterm firefox xdg-utils libdbus-glib-1-dev gdb"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cfb222c6",
   "metadata": {},
   "source": [
    "### 1.3 Installing MindStudio\n",
    "\n",
    "Extract the MindStudio_{version}_linux.tar.gz package:\n",
    "```\n",
    "tar -zxvf MindStudio_{version}_linux.tar.gz\n",
    "```\n",
    "Extract jbr into the MindStudio installation root directory; choose the jbr download link that matches your OS architecture.\n",
    "\n",
    "x86_64: https://cache-redirector.jetbrains.com/intellij-jbr/jbr_dcevm-11_0_10-linux-x64-b1341.35.tar.gz  \n",
    "aarch64: https://cache-redirector.jetbrains.com/intellij-jbr/jbr-11_0_10-linux-aarch64-b1341.35.tar.gz  \n",
    "After extracting jbr into the MindStudio root directory, the layout looks like this:\n",
    "```\n",
    "├── bin\n",
    "├── jbr        // make sure the jbr folder from the archive is extracted into the MindStudio root directory\n",
    "├── lib\n",
    "├── ......\n",
    "```\n",
    "As the user who installed MindStudio, enter the extracted MindStudio/bin directory and run:\n",
    "```\n",
    "cd MindStudio/bin\n",
    "./MindStudio.sh\n",
    "```\n",
    "\n",
    "**Note: for the detailed MindStudio installation procedure, see https://support.huaweicloud.com/devg-mindstudio304/atlasms_02_0002.html**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6dae9da8",
   "metadata": {},
   "source": [
    "## 2. Sample Development Flowchart\n",
    "\n",
    "![Typical sample development](./流程.PNG)\n",
    "\n",
    "## 3. Environment Setup\n",
    "\n",
    "### 3.1 Installing Dependency Packages\n",
    "\n",
    "- python==3.7.5\n",
    "- cann==5.1.T201\n",
    "- mindspore==1.7.0 (Ascend)\n",
    "- noah_vega==1.8.1 (AutoML tool)\n",
    "\n",
    "Choose the `x86_64` or `aarch64` CANN package according to your server type; the following command shows which one you need:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d96a22d2",
   "metadata": {},
   "outputs": [],
   "source": [
    "!arch "
   ]
  },
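  {
   "cell_type": "markdown",
   "id": "b7e2d4a1",
   "metadata": {},
   "source": [
    "The same check can be done from Python inside this notebook, which avoids depending on the shell:\n",
    "```python\n",
    "import platform\n",
    "\n",
    "# Typically 'x86_64' or 'aarch64' on Linux hosts; pick the matching CANN package.\n",
    "print(platform.machine())\n",
    "```"
   ]
  },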
  {
   "cell_type": "markdown",
   "id": "53759f28",
   "metadata": {},
   "source": [
    "### 3.2 Environment Variable Configuration\n",
    "\n",
    "All editing of the environment variable file in this section is done on the **command line**; open a terminal and run `cd /{path}/workspace/` to enter the working directory.\n",
    "\n",
    "#### Creating the environment variable script\n",
    "\n",
    "To avoid conflicts when multiple developers set up environments on the same server, all environment variables are kept in a shell script. First create a new script file:\n",
    "\n",
    "```\n",
    "vim env.sh\n",
    "```\n",
    "\n",
    "#### Environment variables inside env.sh\n",
    "\n",
    "You are now in the `env.sh` editing screen. Press `i`; when `-- INSERT --` appears at the bottom of the screen you are in **insert mode**.\n",
    "\n",
    "1. Set PYTHONPATH according to the Python environment your program uses:\n",
    "```\n",
    "export PYTHONPATH=/{path}/lib/python3.7/site-packages/:$PYTHONPATH\n",
    "```\n",
    "2. After installation, the driver and CANN generate their own environment scripts under `/usr/local/Ascend` by default (if CANN is installed in a custom directory, use that directory instead); `source` the following two scripts:\n",
    "```\n",
    "source /usr/local/Ascend/driver/bin/setenv.bash\n",
    "source /usr/local/Ascend/ascend-toolkit/set_env.sh\n",
    "```\n",
    "3. Configure the environment variables for single-card single-model and multi-card multi-model parallel runs. Under the `/ascend-toolkit` directory, pick the installed CANN version folder, e.g. `5.1.T100` in this sample, or `latest`, which is a symlink to `5.1.T100`:\n",
    "```\n",
    "export install_path=/usr/local/Ascend/ascend-toolkit/5.1.T100\n",
    "```\n",
    "```\n",
    "export LD_LIBRARY_PATH=${install_path}/compiler/lib64/:$LD_LIBRARY_PATH\n",
    "export TBE_IMPL_PATH=${install_path}/opp/op_impl/built-in/ai_core/tbe:$TBE_IMPL_PATH\n",
    "```\n",
    "```\n",
    "export JOB_ID=10087  # changing this value is not recommended\n",
    "export DEVICE_ID=2  # device_id used for single-card training\n",
    "```\n",
    "```\n",
    "export NPU_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \n",
    "```\n",
    "When the configuration is done, press `Esc`; `-- INSERT --` disappears, meaning you have left **insert mode**. Type `:wq` and press `Enter` to save and close the file.\n",
    "\n",
    "#### Running the environment variable script\n",
    "```\n",
    "source env.sh\n",
    "```\n",
    "#### Q&A\n",
    "\n",
    "If running code later produces the following error, add the LD_PRELOAD environment variable.\n",
    "\n",
    "```python\n",
    "ImportError: /root/.../lib/python3.7/site-packages/sklearn/__check_build/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0: cannot allocate memory in static TLS \n",
    "```\n",
    "\n",
    "Add the following environment variable (**adjust the path to the one shown in your actual error message**):\n",
    "```\n",
    "export LD_PRELOAD=/root/.../lib/python3.7/site-packages/sklearn/__check_build/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0:$LD_PRELOAD\n",
    "```"
   ]
  },
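  {
   "cell_type": "markdown",
   "id": "c9d4e6f2",
   "metadata": {},
   "source": [
    "For reference, the pieces above combine into a single `env.sh` along the following lines (the site-packages path and the CANN version folder are placeholders; adapt them to your machine):\n",
    "```sh\n",
    "# Python packages used by your program\n",
    "export PYTHONPATH=/{path}/lib/python3.7/site-packages/:$PYTHONPATH\n",
    "# driver and CANN environment scripts\n",
    "source /usr/local/Ascend/driver/bin/setenv.bash\n",
    "source /usr/local/Ascend/ascend-toolkit/set_env.sh\n",
    "# installed CANN version folder (or the 'latest' symlink)\n",
    "export install_path=/usr/local/Ascend/ascend-toolkit/5.1.T100\n",
    "export LD_LIBRARY_PATH=${install_path}/compiler/lib64/:$LD_LIBRARY_PATH\n",
    "export TBE_IMPL_PATH=${install_path}/opp/op_impl/built-in/ai_core/tbe:$TBE_IMPL_PATH\n",
    "export JOB_ID=10087   # changing this value is not recommended\n",
    "export DEVICE_ID=2    # device_id used for single-card training\n",
    "export NPU_VISIBLE_DEVICES=0,1,2,3,4,5,6,7\n",
    "```"
   ]
  },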
  {
   "cell_type": "markdown",
   "id": "4869bbe9",
   "metadata": {},
   "source": [
    "## 4. Automatic Model Pruning and Tuning\n",
    "\n",
    "1. **Fine-tuning stage (fine_tune)**\n",
    "Load the weights, then train and evaluate model accuracy. If transfer learning is not needed, the training step can be skipped; this sample skips it.\n",
    "\n",
    "2. **Pruning stage (nas)**  \n",
    "    Step 1: sample a pruned model.  \n",
    "    Step 2: fine-tune the pruned model and evaluate its accuracy.  \n",
    "    Step 3: send the pruned model to the evaluation server to measure latency.  \n",
    "    Steps 1-3 repeat; after N rounds of sampling and learning, pruning ends and several pruned models with top accuracy are selected as output for the next training stage.  \n",
    "\n",
    "  **Note**: training of the sampled models in this stage can run on multiple cards in parallel, one sampled model per card (e.g. 3 different pruned models training on three cards at the same time). Single-model multi-card training will be supported soon.\n",
    "\n",
    "3. **Post-pruning training stage (fully_train)**\n",
    "    Train the pruned models.\n",
    "\n",
    "  **Note**: multi-model multi-card parallelism here works the same way as in the nas stage.\n",
    "\n",
    "### 4.1 Preparing the Model and Dataset\n",
    "\n",
    "- **Model**: the MindSpore yolov5 model, taken from the MindSpore ModelZoo.\n",
    "- **Model weights and dataset**: a model weight file trained on the COCO dataset.\n",
    "\n",
    "### 4.2 Configuring the Model Tuning File\n",
    "\n",
    "The hyperparameter configuration mainly follows the corresponding sample directory `{CANN install path}/ascend-toolkit/latest/tools/ascend_automl/examples/`. Taking yolov5 as the example, the reference configuration file is:  \n",
    "`{CANN install path}/ascend-toolkit/latest/tools/ascend_automl/examples/detection/mindspore/yolov5/yolov5_prune.yml`\n",
    "\n",
    "This sample already provides `./yolov5_prune.yml`; the configuration is described in section 6, Segmented Model Tuning Configuration.\n",
    "\n",
    "### 4.3 Running the AutoML Tool\n",
    "\n",
    "Run the following:"
   ]
  },
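  {
   "cell_type": "markdown",
   "id": "d1f8a9b3",
   "metadata": {},
   "source": [
    "The sample-train-evaluate loop of the nas stage described above can be sketched as follows (`train_briefly`, `evaluate_map` and `evaluate_latency` are hypothetical stand-ins for the real trainer and evaluators; vega implements this loop internally):\n",
    "```python\n",
    "import random\n",
    "\n",
    "def nas_search(num_sample, top_k, train_briefly, evaluate_map, evaluate_latency):\n",
    "    # Steps 1-3 repeated num_sample times: sample a prune rate, fine-tune the\n",
    "    # pruned model, then score it on accuracy and device latency.\n",
    "    results = []\n",
    "    for _ in range(num_sample):\n",
    "        prune_d_rate = random.randint(75, 95)  # search range from yolov5_prune.yml\n",
    "        model = train_briefly(prune_d_rate)\n",
    "        results.append((prune_d_rate, evaluate_map(model), evaluate_latency(model)))\n",
    "    # Keep the top_k samples by mAP as input to fully_train.\n",
    "    results.sort(key=lambda r: r[1], reverse=True)\n",
    "    return results[:top_k]\n",
    "```"
   ]
  },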
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "853bb161",
   "metadata": {},
   "outputs": [],
   "source": [
    "!vega yolov5_prune.yml -d NPU"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3b66f9cf",
   "metadata": {},
   "source": [
    "### 4.4 AutoML Output\n",
    "\n",
    "1. **Fine-tuning stage (fine_tune)**\n",
    "\n",
    " The model graph is built and the model description file is saved to output/finetune/desc_0.json.\n",
    "\n",
    " The model accuracy is evaluated and output/finetune/performance_0.json is written; the performance figures are:\n",
    "   \n",
    "   ```python\n",
    "{\"flops\": 0.0, \"params\": 7276.6050000000005, \"mAP\": 36.974134359673116, \"AP50\": 57.23831751178715, \"AP_small\": 21.36575440617437, \"AP_medium\": 41.90543095875907, \"AP_large\": 49.02437854698212}\n",
    "   ```\n",
    "   \n",
    "2. **Pruning stage (nas)**\n",
    "\n",
    "   Of the 64 sampled models, the first three are shown here; the early training batches of each model ran as follows:\n",
    "\n",
    "   ```sh\n",
    "2022-06-08 17:59:55,078:INFO:Finish loading dataset\n",
    "2022-06-08 17:59:55,078:INFO:total 117266 images to train\n",
    "2022-06-08 18:03:32,829:INFO:epoch[0], iter[0], loss:385.016205, fps:0.15 imgs/sec, lr:0.0005000000237487257\n",
    "2022-06-08 18:04:42,205:INFO:epoch[0], iter[100], loss:233.498535, fps:46.13 imgs/sec, lr:0.0005000000237487257\n",
    "2022-06-08 18:05:56,320:INFO:epoch[0], iter[200], loss:206.302001, fps:43.18 imgs/sec, lr:0.0005000000237487257\n",
    "2022-06-08 18:07:09,201:INFO:epoch[0], iter[300], loss:192.717242, fps:43.91 imgs/sec, lr:0.0005000000237487257\n",
    "2022-06-08 18:08:20,673:INFO:epoch[0], iter[400], loss:188.474033, fps:44.77 imgs/sec, lr:0.0005000000237487257\n",
    "   ```\n",
    "\n",
    "   ```sh\n",
    "2022-06-08 22:10:53,282:INFO:Finish loading dataset\n",
    "2022-06-08 22:10:53,282:INFO:total 117266 images to train\n",
    "2022-06-08 22:14:16,274:INFO:epoch[0], iter[0], loss:364.341492, fps:0.16 imgs/sec, lr:0.0005000000237487257\n",
    "2022-06-08 22:15:30,048:INFO:epoch[0], iter[100], loss:202.106732, fps:43.38 imgs/sec, lr:0.0005000000237487257\n",
    "2022-06-08 22:16:46,747:INFO:epoch[0], iter[200], loss:158.671489, fps:41.72 imgs/sec, lr:0.0005000000237487257\n",
    "2022-06-08 22:18:05,326:INFO:epoch[0], iter[300], loss:149.247922, fps:40.72 imgs/sec, lr:0.0005000000237487257\n",
    "2022-06-08 22:19:20,009:INFO:epoch[0], iter[400], loss:140.635285, fps:42.85 imgs/sec, lr:0.0005000000237487257\n",
    "   ```\n",
    "\n",
    "   ```sh\n",
    "2022-06-09 02:24:21,589:INFO:Finish loading dataset\n",
    "2022-06-09 02:24:21,589:INFO:total 117266 images to train\n",
    "2022-06-09 02:27:39,637:INFO:epoch[0], iter[0], loss:351.244415, fps:0.16 imgs/sec, lr:0.0005000000237487257\n",
    "2022-06-09 02:28:47,090:INFO:epoch[0], iter[100], loss:221.918710, fps:47.45 imgs/sec, lr:0.0005000000237487257\n",
    "2022-06-09 02:29:58,247:INFO:epoch[0], iter[200], loss:174.836227, fps:44.97 imgs/sec, lr:0.0005000000237487257\n",
    "2022-06-09 02:31:08,645:INFO:epoch[0], iter[300], loss:154.926831, fps:45.46 imgs/sec, lr:0.0005000000237487257\n",
    "2022-06-09 02:32:14,857:INFO:epoch[0], iter[400], loss:151.100575, fps:48.33 imgs/sec, lr:0.0005000000237487257\n",
    "   ```\n",
    "\n",
    "After training, each model's accuracy is evaluated; the results for the first 3 models are:\n",
    "   ```sh\n",
    "2022-06-09 02:23:46.378 INFO evaluate performance: {'mAP': 27.226492994409657, 'AP50': 47.14294727359325, 'AP_small': 15.384970683402818, 'AP_medium': 31.516061532412298, 'AP_large': 34.936458737272005}\n",
    "2022-06-09 02:23:47.698 INFO finished host evaluation, id: 1, performance: {'mAP': 27.226492994409657, 'AP50': 47.14294727359325, 'AP_small': 15.384970683402818, 'AP_medium': 31.516061532412298, 'AP_large': 34.936458737272005}\n",
    "2022-06-09 02:23:47.871 INFO Update Success. step_name=nas, worker_id=1\n",
    "   ```\n",
    "   \n",
    "   ```sh\n",
    "2022-06-09 06:31:46.840 INFO evaluate performance: {'mAP': 28.263199866545484, 'AP50': 48.03461916129006, 'AP_small': 16.490354732184045, 'AP_medium': 32.64256614685292, 'AP_large': 35.686381025275494}\n",
    "2022-06-09 06:31:48.349 INFO finished host evaluation, id: 2, performance: {'mAP': 28.263199866545484, 'AP50': 48.03461916129006, 'AP_small': 16.490354732184045, 'AP_medium': 32.64256614685292, 'AP_large': 35.686381025275494}\n",
    "2022-06-09 06:31:48.803 INFO Update Success. step_name=nas, worker_id=2\n",
    "   ```\n",
    "   \n",
    "   ```sh\n",
    "2022-06-09 10:31:15.300 INFO evaluate performance: {'mAP': 27.73701460399998, 'AP50': 47.618911573403096, 'AP_small': 14.863946870064948, 'AP_medium': 32.12352441250157, 'AP_large': 35.32185901118241}\n",
    "2022-06-09 10:31:16.153 INFO finished host evaluation, id: 3, performance: {'mAP': 27.73701460399998, 'AP50': 47.618911573403096, 'AP_small': 14.863946870064948, 'AP_medium': 32.12352441250157, 'AP_large': 35.32185901118241}\n",
    "2022-06-09 10:31:16.709 INFO Update Success. step_name=nas, worker_id=3\n",
    "   ```\n",
    "\n",
    "\n",
    "3. **Post-pruning training stage (fully_train)**\n",
    "\n",
    "   All models with relatively good nas results yield the following after fully_train:\n",
    "  \n",
    "\n",
    "\n",
    "   ```sh\n",
    "   2022-03-30 13:49:17,103 INFO   result:\n",
    "   2022-03-30 13:49:17,109 INFO    29:  {'flops': 0.0, 'params': 6905.533, 'mAP': 36.1596, 'AP50': 56.2335, 'AP_small': 20.7887, 'AP_medium': 41.5907, 'AP_large': 47.3583}\n",
    "   2022-03-30 13:49:17,109 INFO    17:  {'flops': 0.0, 'params': 6570.749, 'mAP': 36.1856, 'AP50': 54.4875, 'AP_small': 20.7276, 'AP_medium': 41.2647, 'AP_large': 47.600}\n",
    "   2022-03-30 13:49:17,109 INFO    32:  {'flops': 0.0, 'params': 6274.365, 'mAP': 35.5045, 'AP50': 55.5692, 'AP_small': 18.8815, 'AP_medium': 40.5062, 'AP_large': 47.0003}\n",
    "   ```\n",
    "\n",
    "   Models 29, 17, and 32 were picked; the running times of each model's early training batches are shown below:\n",
    "\n",
    "   Model 29\n",
    "\n",
    "   ```sh\n",
    "2022-06-13 07:36:33,717:INFO:epoch[0], iter[100], loss:100.586519, fps:25.60 imgs/sec, lr:0.0002998684358317405\n",
    "2022-06-13 07:37:09,531:INFO:epoch[0], iter[200], loss:101.453274, fps:25.94 imgs/sec, lr:0.0002998684358317405\n",
    "2022-06-13 07:38:42,219:INFO:epoch[0], iter[300], loss:100.930251, fps:24.90 imgs/sec, lr:0.0002998684358317405\n",
    "   ```\n",
    "   \n",
    "   Model 17\n",
    "   \n",
    "   ```sh\n",
    "2022-06-13 08:42:48,189:INFO:epoch[4], iter[100], loss:103.495490, fps:28.18 imgs/sec, lr:0.0002998684358317405\n",
    "2022-06-13 08:44:48,597:INFO:epoch[4], iter[200], loss:101.421028, fps:26.58 imgs/sec, lr:0.0002998684358317405\n",
    "2022-06-13 08:46:50,335:INFO:epoch[4], iter[300], loss:102.045392, fps:26.29 imgs/sec, lr:0.0002998684358317405\n",
    "   ```\n",
    "\n",
    "   Model 32\n",
    "   \n",
    "   ```sh\n",
    "2022-06-13 07:48:52,285:INFO:epoch[4], iter[100], loss:104.277821, fps:26.24 imgs/sec, lr:0.0002998684358317405\n",
    "2022-06-13 07:50:58,568:INFO:epoch[4], iter[200], loss:103.933939, fps:25.34 imgs/sec, lr:0.0002998684358317405\n",
    "2022-06-13 07:52:59,939:INFO:epoch[4], iter[300], loss:103.927755, fps:26.37 imgs/sec, lr:0.0002998684358317405\n",
    "   ```\n",
    "\n",
    "Overall AutoML time consumption. In this test the nas stage ran on a single card and the fully_train stage ran on 4 cards in parallel:\n",
    "\n",
    "```\n",
    "2022-06-13 12:02:29.491 INFO   running time:\n",
    "2022-06-13 12:02:29.491 INFO           finetune:  0:37:16  [2022-06-08 17:11:00.973423 - 2022-06-08 17:48:17.636953]\n",
    "2022-06-13 12:02:29.492 INFO                nas:  103:17:44  [2022-06-08 17:48:17.664279 - 2022-06-13 01:06:02.729006]\n",
    "2022-06-13 12:02:29.492 INFO        fully_train:  58:56:11  [2022-06-13 01:06:02.737215 - 2022-06-15 12:02:14.376295]\n",
    "```\n",
    "\n",
    "## 5. Deploying the Inference Service (Optional)\n",
    "To tune model performance, prepare an inference server (310 or 710) and deploy the inference service on it. For detailed steps see https://support.huaweicloud.com/usermanual-mindstudio304/atlasms_02_0322.html under AutoML tool -> Installation and deployment -> Starting the inference service.  \n",
    "For the CANN installation procedure and package downloads, see https://support.huawei.com/enterprise/zh/ascend-computing/cann-pid-251168373/software/256539485?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251168373 \n",
    "For the MindSpore installation procedure and deployment, see https://www.mindspore.cn/install\n",
    "\n",
    "## 6. Segmented Model Tuning Configuration\n",
    "\n",
    "The main **general** settings are:\n",
    "\n",
    "```yml\n",
    "general:\n",
    "  backend: mindspore  # model framework\n",
    "  parallel_search: True  # whether the nas stage uses parallel search\n",
    "  parallel_fully_train: True  # whether the fully_train stage runs in parallel\n",
    "  dataset_sink_mode: True\n",
    "  task:\n",
    "    local_base_path: ./tasks  # AutoML output directory, chosen by the developer\n",
    "    task_id: \"yolov5_prune_parallel\"  # task name, chosen by the developer\n",
    "  logger:\n",
    "    level: info  # logger level; can be raised to debug\n",
    "  worker:\n",
    "    timeout: 7200000\n",
    "```\n",
    "\n",
    "The main **register** settings are:\n",
    "```yaml\n",
    "register:\n",
    "  pkg_path: [ \"/home/automl/models/yolov5_prune/\" ]\n",
    "  modules:\n",
    "    - module: \"src.yolo\" # module import\n",
    "      script_network: [\"get_model\", \"get_eval_model\"]\n",
    "    - module: \"train\"\n",
    "      ori_train_func: [ \"run_train\" ]\n",
    "    - module: \"eval\" # module import\n",
    "      ori_eval_func: [ \"run_eval\" ]\n",
    "```\n",
    "\n",
    "The main **pipeline** settings are:\n",
    "\n",
    "```yaml\n",
    "pipeline: [finetune, nas, fully_train] # list the stages you need; to skip a stage, adjust the input file paths of the following stage\n",
    "```\n",
    "\n",
    "The main **finetune** settings are:\n",
    "\n",
    "```yaml\n",
    "finetune:\n",
    "  pipe_step:\n",
    "    type: TrainPipeStep\n",
    "  model:\n",
    "    pretrained_model_file: /home/cache/yolov5/pre_train/0-300_274800.1130.ckpt\n",
    "    model_desc:\n",
    "      type: Script2Vega\n",
    "      ori_network_config:\n",
    "        type: ScriptModelGen\n",
    "        common:\n",
    "          multiple_inputs:\n",
    "            - shape: [ 1, 12, 320, 320 ]\n",
    "              dtype: fp32\n",
    "            - [ 640, 640 ]\n",
    "        train:\n",
    "          network:\n",
    "            type: get_model\n",
    "        evaluate:\n",
    "          network:\n",
    "            type: get_eval_model\n",
    "\n",
    "  trainer:\n",
    "    type: OriTrainer\n",
    "    with_train: False  # set to True if fine-tuning is required\n",
    "    ori_trainer: \n",
    "      type: run_train  # model training entry point registered under register\n",
    "      config: # parameters required for model training\n",
    "        max_epoch: 5\n",
    "        data_dir: /home/cache/datasets/coco-all/data/coco/\n",
    "        per_batch_size: 32\n",
    "        lr: 0.0003\n",
    "\n",
    "  evaluator:\n",
    "    type: Evaluator\n",
    "    host_evaluator:\n",
    "      type: OriHostEvaluator\n",
    "      ori_eval:\n",
    "        type: run_eval  # accuracy evaluation entry point registered under register\n",
    "        config: # parameters required for accuracy evaluation\n",
    "          data_dir: /home/cache/datasets/coco-all/data/coco/\n",
    "          per_batch_size: 32\n",
    "    device_evaluator:\n",
    "      type: DeviceEvaluator\n",
    "      custom: CustomEvaluator\n",
    "      hardware: \"Davinci\"\n",
    "      remote_host: \"http://x.x.x.x:8888\"\n",
    "      muti_input: True\n",
    "```\n",
    "\n",
    "The main **nas** settings are:\n",
    "\n",
    "```yaml\n",
    "nas:\n",
    "  pipe_step:\n",
    "    type: SearchPipeStep\n",
    "  search_algorithm:\n",
    "    type: RandomSearch\n",
    "    objective_keys: ['mAP', 'latency']\n",
    "    policy:\n",
    "      num_sample: 64 # number of samples to draw; adjust to your situation\n",
    "  search_space:\n",
    "    type: SCOPDAGSearchSpace\n",
    "    hyperparameters:\n",
    "      - key: prune_d_rate\n",
    "        type: INT\n",
    "        range: [ 75, 95 ]\n",
    "  model:\n",
    "    model_desc_file: \"{local_base_path}/output/finetune/desc_0.json\"\n",
    "    pretrained_model_file: \"{local_base_path}/output/finetune/model_0.ckpt\"\n",
    "  trainer:\n",
    "    ref: finetune.trainer\n",
    "    with_train: True\n",
    "\n",
    "  evaluator:\n",
    "    ref: finetune.evaluator\n",
    "```\n",
    "\n",
    "The main **fully_train** settings are:\n",
    "\n",
    "```yaml\n",
    "fully_train:\n",
    "  pipe_step:\n",
    "    type: TrainPipeStep\n",
    "    models_folder: \"{local_base_path}/output/nas/\"\n",
    "\n",
    "  trainer:\n",
    "    ref: finetune.trainer\n",
    "    with_train: True\n",
    "    ori_trainer:\n",
    "      type: run_train  # model training entry point registered under register\n",
    "      config:  # parameters required for model training\n",
    "        max_epoch: 60\n",
    "        data_dir: /home/cache/datasets/coco-all/data/coco/\n",
    "        per_batch_size: 32\n",
    "        lr: 0.0003\n",
    "        warmup_epochs: 0\n",
    "\n",
    "  evaluator:\n",
    "    ref: finetune.evaluator\n",
    "```"
   ]
  },
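  {
   "cell_type": "markdown",
   "id": "e5c7b2a8",
   "metadata": {},
   "source": [
    "A quick sanity check before launching vega can catch mismatches between the `pipeline` list and the section names (a sketch; `unresolved_pipeline_steps` is our own helper, and the config dict would come from e.g. PyYAML's `yaml.safe_load`):\n",
    "```python\n",
    "def unresolved_pipeline_steps(cfg):\n",
    "    # cfg is the parsed yml as a dict. Every step listed under 'pipeline'\n",
    "    # must have a top-level section of the same name, otherwise the run fails.\n",
    "    return [step for step in cfg.get('pipeline', []) if step not in cfg]\n",
    "```"
   ]
  },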
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5a01acdf",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
