{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# Lane Line Segmentation with PaddleSeg 2.0 (飞桨领航团 Project)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## I. Project Background\n",
    "\n",
    "This project is fairly challenging for beginners. Since my research area is machine vision, I chose the lane-line segmentation task as a learning exercise.\n",
    "\n",
    "### Example results (lane lines in the image are detected and marked with color):\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/5e11a73ba5fc480b84166b61b35c2ff082b2546e8bca46258f11329cdafd6fb4)\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/6bc4bb06b40a49689dcc64902f3dee3b47fcccb90aff4803834a19114891c865)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## II. Dataset Overview\n",
    "\n",
    "The dataset comes from the lane-line detection track of the National Undergraduate Intelligent Car Competition. It contains training, validation, and test splits, and all images have already been annotated."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 1. Extract the Dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "!unzip data/data68698/智能车数据集.zip"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 2. Dataset Description\n",
    "\n",
    "+ image_4000: raw images (4000 training samples)\n",
    "+ mask_4000: annotated, class-labeled mask dataset\n",
    "\n",
    "**Annotation scheme: lane-line regions (ROIs) in the mask are classified by *pixel gray value*. There are 15 classes in total: all background pixels have gray value 0 and form one class, and the remaining lane-line types use gray values 1-14.**\n",
    ">  'Background': 0,\n",
    "  'Single solid line - yellow': 1, \n",
    "  'Single solid line - white': 2, \n",
    "  'Double solid line - yellow (incl. one yellow + one white)': 3,\n",
    "  'Double solid line - white': 4, \n",
    "  'Single dashed line - yellow': 5, \n",
    "  'Single dashed line - white': 6, \n",
    "  'Double dashed line - yellow (incl. one yellow + one white)': 7, \n",
    "  'Double dashed line - white': 8, \n",
    "  'One solid + one dashed - yellow (incl. one yellow + one white)': 9,\n",
    "  'One solid + one dashed - white': 10, \n",
    "  'Multiple lane lines (>2 lines, style and color not distinguished)': 11, \n",
    "  'Solid deceleration lane line': 12, \n",
    "  'Dashed deceleration lane line': 13,\n",
    "  'Sawtooth line (variable lane)': 14\n"
   ]
  },
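  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "Since the class IDs 0-14 are stored directly as mask gray values, the classes present in a single mask can be checked with a short script. This is an illustrative sketch (the file name is just one example from the dataset), not part of the original pipeline:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from PIL import Image\n",
    "\n",
    "# Load one annotation mask; each pixel's gray value is its class ID (0-14).\n",
    "mask = np.array(Image.open('mask_4000/1149.png').convert('L'))\n",
    "\n",
    "# List which class IDs appear and how many pixels each one covers.\n",
    "ids, counts = np.unique(mask, return_counts=True)\n",
    "for cid, cnt in zip(ids, counts):\n",
    "    print('class {}: {} pixels'.format(cid, cnt))\n",
    "```\n"
   ]
  },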
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 3. Build the Custom Dataset and Test Set"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[('image_4000/1149.png', 'mask_4000/1149.png'), ('image_4000/600.png', 'mask_4000/600.png'), ('image_4000/3347.png', 'mask_4000/3347.png')]\n",
      "4000\n"
     ]
    }
   ],
   "source": [
    "%run make_list.py"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 4. Class Distribution Analysis"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "100%|█████████████████████████████████████| 4000/4000 [1:18:36<00:00,  1.18s/it]\n",
      "类别按图片分布 [0.2879, 0.0329, 0.2474, 0.0578, 0.0346, 0.0612, 0.2157, 0.0085, 0.0023, 0.0131, 0.0153, 0.0148, 0.0043, 0.0017, 0.0024]\n",
      "类别按像素点分布 [0.9838, 0.0003, 0.0072, 0.0014, 0.0012, 0.0005, 0.0041, 0.0001, 0.0, 0.0003, 0.0002, 0.0005, 0.0001, 0.0001, 0.0001]\n",
      "[9437184000, 1078198272, 8108900352, 1894514688, 1134821376, 2005401600, 7070810112, 278396928, 75497472, 429391872, 502530048, 483655680, 141557760, 56623104, 77856768]\n"
     ]
    }
   ],
   "source": [
    "# The statistics take a long time, so run the script in the shell instead of inline\n",
    "!python statistics.py"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 5. Analysis Results\n",
    "\n",
    "> 'Background', 'Single solid - yellow', 'Single solid - white', 'Double solid - yellow (incl. one yellow + one white)', 'Double solid - white', 'Single dashed - yellow', 'Single dashed - white', 'Double dashed - yellow (incl. one yellow + one white)', 'Double dashed - white', 'One solid + one dashed - yellow (incl. one yellow + one white)', 'One solid + one dashed - white', 'Multiple lane lines (>2 lines, style and color not distinguished)', 'Solid deceleration lane line', 'Dashed deceleration lane line', 'Sawtooth line (variable lane)'\n",
    "\n",
    "#### Number of images containing each class (unit: images)\n",
    "[4000, 457, 3437, 803, 481, 850, 2997, 118, 32, 182, 213, 205, 60, 24, 33]\n",
    "#### Number of ROI pixels belonging to each class\n",
    "[9284543967, 2595404, 68411608, 12860131, 11705492, 5098954, 38357780, 1066241, 383440, 2419091, 2312722, 4459757, 1332924, 927368, 709121]\n",
    "\n",
    "#### Total pixel count of the images containing each class\n",
    "[9437184000, 1078198272, 8108900352, 1894514688, 1134821376, 2005401600, 7070810112, 278396928, 75497472, 429391872, 502530048, 483655680, 141557760, 56623104, 77856768]\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 6. Proportion of Pixels per ROI Class\n",
    "\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/e8c8d7d71b6c4c48b49f0be94e5030d1d4c5a863121c4176ab97fbbaf752b8db)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "total: 9437184000\n",
      "proportion: [0.9838, 0.0003, 0.0072, 0.0014, 0.0012, 0.0005, 0.0041, 0.0001, 0.0, 0.0003, 0.0002, 0.0005, 0.0001, 0.0001, 0.0001]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\r\n",
    "\r\n",
    "pixel = [9284543967, 2595404, 68411608, 12860131, 11705492, 5098954, 38357780, 1066241, 383440, 2419091, 2312722, 4459757, 1332924, 927368, 709121]\r\n",
    "proportion = [0] * len(pixel)  # pre-allocate the result list\r\n",
    "total = np.array(pixel).sum()\r\n",
    "\r\n",
    "for i in range(len(pixel)):\r\n",
    "    proportion[i] = round(pixel[i]/total, 4)\r\n",
    "\r\n",
    "print('total: {}'.format(total))\r\n",
    "print('proportion: {}'.format(proportion))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 7. Handling Class Imbalance\n",
    "\n",
    "Uneven class distributions are common in image segmentation tasks, for example in industrial defect detection, road extraction, and lesion segmentation.\n",
    "\n",
    "For this problem, PaddleSeg mainly provides two solutions: weighted softmax loss and Lovasz loss.\n",
    "\n",
    "### [Weighted softmax loss](https://gitee.com/paddlepaddle/PaddleSeg/blob/release/v0.6.0/docs/loss_select.md#weighted-softmax-loss)\n",
    "\n",
    "Weighted softmax loss is a softmax loss with per-class weights.\n",
    "\n",
    "It is enabled through the `cfg.SOLVER.CROSS_ENTROPY_WEIGHT` parameter.  \n",
    "The default is None. If set to 'dynamic', the class weights are adjusted dynamically according to the per-class counts in each batch. A static weight list can also be given; for example, with 3 classes the weights could be \\[0.1, 2.0, 0.9\\].\n",
    "```yaml\n",
    "SOLVER:\n",
    "    LR: 0.005\n",
    "    LR_POLICY: \"poly\"\n",
    "    OPTIMIZER: \"sgd\"\n",
    "    NUM_EPOCHS: 40\n",
    "    CROSS_ENTROPY_WEIGHT: [0.1, 2, 0.5, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2] # loss weight for each class\n",
    "```\n",
    "### [Lovasz loss](https://gitee.com/paddlepaddle/PaddleSeg/blob/release/v2.0/docs/lovasz_loss.md#lovasz-loss)\n",
    "Lovasz loss optimizes the network's mean IoU loss via the convex Lovasz extension of submodular losses. Depending on the number of target classes, it comes in two variants: lovasz hinge loss for binary segmentation and lovasz softmax loss for multi-class segmentation. The method was published at CVPR 2018; see the [references](https://gitee.com/paddlepaddle/PaddleSeg/blob/release/v2.0/docs/lovasz_loss.md#%E5%8F%82%E8%80%83%E6%96%87%E7%8C%AE) for the underlying theory.\n",
    "\n",
    "Note that training directly with Lovasz loss alone does not always work well; PaddleSeg recommends two alternative training schemes:\n",
    "- (1) Combine it with cross entropy loss or bce loss (binary cross-entropy loss) using weights.\n",
    "- (2) Train with cross entropy loss or bce loss first, then fine-tune with lovasz softmax loss or lovasz hinge loss.\n",
    "\n",
    "The `coef` parameter sets the weight of each loss, allowing flexible tuning.\n",
    "\n",
    "```yaml\n",
    "loss:\n",
    "  types:\n",
    "    - type: MixedLoss\n",
    "      losses:\n",
    "        - type: CrossEntropyLoss\n",
    "        - type: LovaszSoftmaxLoss\n",
    "      coef: [0.8, 0.2]\n",
    "    - type: MixedLoss\n",
    "      losses:\n",
    "        - type: CrossEntropyLoss\n",
    "        - type: LovaszSoftmaxLoss\n",
    "      coef: [0.8, 0.2]\n",
    "  coef: [1, 0.4]\n",
    "\n",
    "SOLVER:\n",
    "    LR: 0.005\n",
    "    LR_POLICY: \"poly\"\n",
    "    OPTIMIZER: \"sgd\"\n",
    "    NUM_EPOCHS: 40\n",
    "    CROSS_ENTROPY_WEIGHT: dynamic\n",
    "```"
   ]
  },
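  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "One common way to derive a static `CROSS_ENTROPY_WEIGHT` list from the pixel statistics above is median-frequency balancing (a sketch of the general technique, not PaddleSeg's own recipe): rare classes receive weights above 1 and frequent classes weights below 1.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Per-class ROI pixel counts from the statistics step above.\n",
    "pixel = [9284543967, 2595404, 68411608, 12860131, 11705492, 5098954,\n",
    "         38357780, 1066241, 383440, 2419091, 2312722, 4459757,\n",
    "         1332924, 927368, 709121]\n",
    "\n",
    "freq = np.array(pixel) / np.sum(pixel)\n",
    "# Weight each class by (median class frequency) / (class frequency).\n",
    "weights = np.median(freq) / freq\n",
    "print([round(float(w), 2) for w in weights])\n",
    "```\n"
   ]
  },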
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## III. Preparing for Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Make a single GPU visible\n",
    "%set_env CUDA_VISIBLE_DEVICES=0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/home/aistudio/PaddleSeg\n"
     ]
    }
   ],
   "source": [
    "%cd PaddleSeg/"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 1. Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2021-08-07 19:06:12 [INFO]\t\n",
      "------------Environment Information-------------\n",
      "platform: Linux-4.4.0-150-generic-x86_64-with-debian-stretch-sid\n",
      "Python: 3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0]\n",
      "Paddle compiled with cuda: True\n",
      "NVCC: Cuda compilation tools, release 10.1, V10.1.243\n",
      "cudnn: 7.6\n",
      "GPUs used: 1\n",
      "CUDA_VISIBLE_DEVICES: None\n",
      "GPU: ['GPU 0: Tesla V100-SXM2-32GB']\n",
      "GCC: gcc (Ubuntu 7.5.0-3ubuntu1~16.04) 7.5.0\n",
      "PaddlePaddle: 2.1.0\n",
      "OpenCV: 4.1.1\n",
      "------------------------------------------------\n",
      "2021-08-07 19:06:12 [INFO]\t\n",
      "---------------Config Information---------------\n",
      "SOLVER:\n",
      "  CROSS_ENTROPY_WEIGHT: dynamic\n",
      "  LR: 0.005\n",
      "  LR_POLICY: poly\n",
      "  NUM_EPOCHS: 40\n",
      "  OPTIMIZER: sgd\n",
      "batch_size: 4\n",
      "iters: 35000\n",
      "learning_rate:\n",
      "  decay:\n",
      "    end_lr: 0.0\n",
      "    power: 0.9\n",
      "    type: poly\n",
      "  value: 0.0025\n",
      "loss:\n",
      "  coef:\n",
      "  - 1\n",
      "  - 0.4\n",
      "  types:\n",
      "  - coef:\n",
      "    - 0.8\n",
      "    - 0.2\n",
      "    losses:\n",
      "    - type: CrossEntropyLoss\n",
      "    - type: LovaszSoftmaxLoss\n",
      "    type: MixedLoss\n",
      "  - coef:\n",
      "    - 0.8\n",
      "    - 0.2\n",
      "    losses:\n",
      "    - type: CrossEntropyLoss\n",
      "    - type: LovaszSoftmaxLoss\n",
      "    type: MixedLoss\n",
      "model:\n",
      "  backbone:\n",
      "    pretrained: https://bj.bcebos.com/paddleseg/dygraph/hrnet_w18_ssld.tar.gz\n",
      "    type: HRNet_W18\n",
      "  backbone_indices:\n",
      "  - 0\n",
      "  type: OCRNet\n",
      "optimizer:\n",
      "  momentum: 0.9\n",
      "  type: sgd\n",
      "  weight_decay: 4.0e-05\n",
      "train_dataset:\n",
      "  dataset_root: /home/aistudio/\n",
      "  mode: train\n",
      "  num_classes: 15\n",
      "  train_path: /home/aistudio/train_list.txt\n",
      "  transforms:\n",
      "  - max_scale_factor: 2.0\n",
      "    min_scale_factor: 0.5\n",
      "    scale_step_size: 0.25\n",
      "    type: ResizeStepScaling\n",
      "  - max_rotation: 30\n",
      "    type: RandomRotation\n",
      "  - type: RandomHorizontalFlip\n",
      "  - type: RandomVerticalFlip\n",
      "  - crop_size:\n",
      "    - 1024\n",
      "    - 512\n",
      "    type: RandomPaddingCrop\n",
      "  - type: RandomBlur\n",
      "  - brightness_range: 0.4\n",
      "    contrast_range: 0.4\n",
      "    saturation_range: 0.4\n",
      "    type: RandomDistort\n",
      "  - type: Normalize\n",
      "  type: Dataset\n",
      "val_dataset:\n",
      "  dataset_root: /home/aistudio/\n",
      "  mode: val\n",
      "  num_classes: 15\n",
      "  transforms:\n",
      "  - type: Normalize\n",
      "  type: Dataset\n",
      "  val_path: /home/aistudio/val_list.txt\n",
      "------------------------------------------------\n",
      "W0807 19:06:12.665663  3018 device_context.cc:404] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W0807 19:06:12.665731  3018 device_context.cc:422] device: 0, cuDNN Version: 7.6.\n",
      "2021-08-07 19:06:17 [INFO]\tLoading pretrained model from https://bj.bcebos.com/paddleseg/dygraph/hrnet_w18_ssld.tar.gz\n",
      "2021-08-07 19:06:18 [INFO]\tThere are 1525/1525 variables loaded into HRNet.\n",
      "2021-08-07 19:06:18 [WARNING]\t`learning_rate` in configuration file will be deprecated, please use `lr_scheduler` instead. E.g\n",
      "            lr_scheduler:\n",
      "                type: PolynomialDecay\n",
      "                learning_rate: 0.01\n",
      "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/layer/norm.py:641: UserWarning: When training, we now always track global mean and variance.\n",
      "  \"When training, we now always track global mean and variance.\")\n",
      "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/framework.py:689: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.\n",
      "Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\n",
      "  elif dtype == np.bool:\n",
      "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/math_op_patch.py:239: UserWarning: The dtype of left and right variables are not the same, left dtype is paddle.float32, but right dtype is paddle.int64, the right dtype will convert to paddle.float32\n",
      "  format(lhs_dtype, rhs_dtype, lhs_dtype))\n",
      "2021-08-07 19:06:27 [INFO]\t[TRAIN] epoch: 1, iter: 10/35000, loss: 2.1252, lr: 0.002499, batch_cost: 0.8803, reader_cost: 0.12985, ips: 4.5440 samples/sec | ETA 08:33:20\n",
      "2021-08-07 19:06:35 [INFO]\t[TRAIN] epoch: 1, iter: 20/35000, loss: 0.7221, lr: 0.002499, batch_cost: 0.7899, reader_cost: 0.02694, ips: 5.0640 samples/sec | ETA 07:40:30\n",
      "2021-08-07 19:06:43 [INFO]\t[TRAIN] epoch: 1, iter: 30/35000, loss: 0.4626, lr: 0.002498, batch_cost: 0.7918, reader_cost: 0.05014, ips: 5.0516 samples/sec | ETA 07:41:30\n",
      "2021-08-07 19:06:51 [INFO]\t[TRAIN] epoch: 1, iter: 40/35000, loss: 0.3710, lr: 0.002497, batch_cost: 0.7763, reader_cost: 0.00080, ips: 5.1526 samples/sec | ETA 07:32:19\n",
      "2021-08-07 19:06:59 [INFO]\t[TRAIN] epoch: 1, iter: 50/35000, loss: 0.3471, lr: 0.002497, batch_cost: 0.7730, reader_cost: 0.01174, ips: 5.1744 samples/sec | ETA 07:30:17\n",
      "2021-08-07 19:07:06 [INFO]\t[TRAIN] epoch: 1, iter: 60/35000, loss: 0.3511, lr: 0.002496, batch_cost: 0.7281, reader_cost: 0.00725, ips: 5.4939 samples/sec | ETA 07:03:59\n",
      "2021-08-07 19:07:14 [INFO]\t[TRAIN] epoch: 1, iter: 70/35000, loss: 0.3643, lr: 0.002496, batch_cost: 0.8154, reader_cost: 0.07590, ips: 4.9054 samples/sec | ETA 07:54:43\n",
      "2021-08-07 19:07:22 [INFO]\t[TRAIN] epoch: 1, iter: 80/35000, loss: 0.3259, lr: 0.002495, batch_cost: 0.7458, reader_cost: 0.01825, ips: 5.3632 samples/sec | ETA 07:14:04\n",
      "2021-08-07 19:07:29 [INFO]\t[TRAIN] epoch: 1, iter: 90/35000, loss: 0.3147, lr: 0.002494, batch_cost: 0.7586, reader_cost: 0.02122, ips: 5.2730 samples/sec | ETA 07:21:22\n",
      "2021-08-07 19:07:38 [INFO]\t[TRAIN] epoch: 1, iter: 100/35000, loss: 0.3092, lr: 0.002494, batch_cost: 0.8620, reader_cost: 0.09998, ips: 4.6401 samples/sec | ETA 08:21:25\n",
      "2021-08-07 19:07:46 [INFO]\t[TRAIN] epoch: 1, iter: 110/35000, loss: 0.3278, lr: 0.002493, batch_cost: 0.8297, reader_cost: 0.06704, ips: 4.8209 samples/sec | ETA 08:02:28\n",
      "2021-08-07 19:07:54 [INFO]\t[TRAIN] epoch: 1, iter: 120/35000, loss: 0.3133, lr: 0.002492, batch_cost: 0.7955, reader_cost: 0.02098, ips: 5.0285 samples/sec | ETA 07:42:25\n",
      "2021-08-07 19:08:01 [INFO]\t[TRAIN] epoch: 1, iter: 130/35000, loss: 0.3141, lr: 0.002492, batch_cost: 0.7025, reader_cost: 0.00063, ips: 5.6941 samples/sec | ETA 06:48:15\n",
      "2021-08-07 19:08:09 [INFO]\t[TRAIN] epoch: 1, iter: 140/35000, loss: 0.3162, lr: 0.002491, batch_cost: 0.8010, reader_cost: 0.02306, ips: 4.9935 samples/sec | ETA 07:45:24\n",
      "2021-08-07 19:08:17 [INFO]\t[TRAIN] epoch: 1, iter: 150/35000, loss: 0.3099, lr: 0.002490, batch_cost: 0.7570, reader_cost: 0.00021, ips: 5.2838 samples/sec | ETA 07:19:42\n",
      "2021-08-07 19:08:24 [INFO]\t[TRAIN] epoch: 1, iter: 160/35000, loss: 0.3125, lr: 0.002490, batch_cost: 0.7265, reader_cost: 0.00019, ips: 5.5057 samples/sec | ETA 07:01:51\n",
      "2021-08-07 19:08:32 [INFO]\t[TRAIN] epoch: 1, iter: 170/35000, loss: 0.3313, lr: 0.002489, batch_cost: 0.7948, reader_cost: 0.00035, ips: 5.0330 samples/sec | ETA 07:41:21\n",
      "2021-08-07 19:08:39 [INFO]\t[TRAIN] epoch: 1, iter: 180/35000, loss: 0.2932, lr: 0.002488, batch_cost: 0.7390, reader_cost: 0.01337, ips: 5.4126 samples/sec | ETA 07:08:52\n",
      "2021-08-07 19:08:47 [INFO]\t[TRAIN] epoch: 1, iter: 190/35000, loss: 0.3103, lr: 0.002488, batch_cost: 0.7255, reader_cost: 0.00828, ips: 5.5132 samples/sec | ETA 07:00:55\n",
      "2021-08-07 19:08:54 [INFO]\t[TRAIN] epoch: 1, iter: 200/35000, loss: 0.3083, lr: 0.002487, batch_cost: 0.7929, reader_cost: 0.05278, ips: 5.0449 samples/sec | ETA 07:39:52\n",
      "2021-08-07 19:09:03 [INFO]\t[TRAIN] epoch: 1, iter: 210/35000, loss: 0.3433, lr: 0.002487, batch_cost: 0.8130, reader_cost: 0.04356, ips: 4.9199 samples/sec | ETA 07:51:25\n",
      "2021-08-07 19:09:10 [INFO]\t[TRAIN] epoch: 1, iter: 220/35000, loss: 0.3702, lr: 0.002486, batch_cost: 0.7780, reader_cost: 0.00334, ips: 5.1416 samples/sec | ETA 07:30:57\n",
      "2021-08-07 19:09:18 [INFO]\t[TRAIN] epoch: 1, iter: 230/35000, loss: 0.3196, lr: 0.002485, batch_cost: 0.7587, reader_cost: 0.00026, ips: 5.2723 samples/sec | ETA 07:19:39\n"
     ]
    }
   ],
   "source": [
    "!python train.py \\\r\n",
    "       --config configs/ocrnet/ocrnet_hrnetw18_cityscapes_1024x512_160k_lovasz_softmax.yml \\\r\n",
    "       --do_eval \\\r\n",
    "       --use_vdl \\\r\n",
    "       --save_interval 1000 \\\r\n",
    "       --save_dir output"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 2. Evaluate the Trained Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "!python val.py \\\r\n",
    "       --config configs/ocrnet/ocrnet_hrnetw18_cityscapes_1024x512_160k_lovasz_softmax.yml \\\r\n",
    "       --model_path output/iter_35000/model.pdparams"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 3. Evaluation Results\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/9058209218a2438daa2087201180cca587c25a3b862b451897f0e1625206a229)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### 4. Prediction"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "!python predict.py \\\r\n",
    "       --config configs/ocrnet/ocrnet_hrnetw18_cityscapes_1024x512_160k_lovasz_softmax.yml \\\r\n",
    "       --model_path output/iter_35000/model.pdparams \\\r\n",
    "       --image_path ../infer/4346.png \\\r\n",
    "       --save_dir output/result"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### Input image:\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/f199db561c134f88bb16d2bcc2b896ba1e9c89859d6f49e5a7bbfa192cb4d620)\n",
    "\n",
    "### Prediction results:\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/481db2466d084034bf87786ce39d47b1dbf66f275dde4c71b388a8bbd8a5b5c1)\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/d5b380b7e1de4ed582626738455f8320ac31efd6eac14467888af8d3c8a41c13)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## IV. Lessons Learned\n",
    "**First submission:**\n",
    "   Following the reference project \"PaddleSeg 2.0 dynamic graph: lane-line image segmentation\", I became familiar with the deep-learning workflow, analyzed the class distribution of the dataset, and learned where the training configuration file lives and how to adjust the training parameters. However, just as I was about to start training, the dataset mount failed, so training was not possible. I am submitting the assignment now and will continue optimizing tomorrow.\n",
    "\n",
    "**Second submission:**\n",
    "   While fixing the dataset-mount problem, I found that the reference project had modified PaddleSeg's base configuration file and mounted the dataset there, which is why mounting the dataset only in ocrnet_hrnetw18_cityscapes_1024x512_160k_lovasz_softmax.yml kept failing. After resolving this, I gained a much deeper understanding of PaddleSeg's training configuration.\n",
    "\n",
    "**Dataset lessons:**\n",
    "When I first inspected the dataset masks, the gray values of the road mask regions were so small (since classes are encoded as gray values) that I assumed the masks were pure black, and made no progress for a long time. Only after downloading a mask and opening it did I notice a faint, barely visible road, and finally understood how segmentation datasets encode their classes. I doubt I will ever forget this.\n",
    "\n",
    "\n",
    "Time spent on the per-class statistics (far too long; the script uses three nested loops, and I believe there is room for optimization):\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/baf2446f5ffd41a596f517ae84990c4193c4e4bde6a14e169c7f1f8aeea45f48)\n"
   ]
  },
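  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "On the optimization point above: the innermost per-pixel loops can be replaced by a vectorized count. A sketch (assuming each mask is loaded as a 2-D uint8 array of class IDs) might look like:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def class_pixel_counts(mask, num_classes=15):\n",
    "    \"\"\"Count pixels per class in a mask whose gray values are class IDs.\"\"\"\n",
    "    return np.bincount(mask.ravel(), minlength=num_classes)\n",
    "\n",
    "# Example on a tiny synthetic mask.\n",
    "mask = np.array([[0, 0, 1, 2],\n",
    "                 [0, 0, 1, 2],\n",
    "                 [0, 0, 0, 14]], dtype=np.uint8)\n",
    "print(class_pixel_counts(mask))  # counts for classes 0..14\n",
    "```\n"
   ]
  },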
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## V. About Me\n",
    "I am a first-year graduate student working on machine vision, currently focusing on structured-light 3D reconstruction. This 飞桨领航团 course got me started with deep learning. I will keep studying it and apply deep learning to structured-light 3D reconstruction to improve reconstruction accuracy and speed while lowering compute requirements, broadening its range of applications.\n",
    "\n",
    "### Homepage:\n",
    "[NeeeaL](https://aistudio.baidu.com/aistudio/usercenter)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "PaddlePaddle 2.1.2 (Python 3.5)",
   "language": "python",
   "name": "py35-paddle1.2.0"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
