{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 目标检测基础概念\n",
    "\n",
    "> 先介绍检测相关的基本概念\n",
    "\n",
    "1.包括边界框\n",
    "\n",
    "2.锚框\n",
    "\n",
    "3.交并比\n",
    "\n",
    "### 边界框（bounding box）\n",
    "\n",
    "检测任务需要同时预测物体的**类别**和**位置**\n",
    "\n",
    "因此需要引入一些跟位置相关的概念  --> 通常使用边界框（bounding box，bbox）来表示物体的位置，框能正好包含物体的矩形框\n",
    "\n",
    "\n",
    "<br></br>\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/f581e1bfd07a414596368c9c03a1b30ea115a2e30a014be68b2c26961f5c38fa\" width = \"400\"  div align=center\"></center>\n",
    "<center><br>图4：边界框</br></center>\n",
    "<br></br>\n",
    "\n",
    "### 边界框的两种格式来表示\n",
    "\n",
    "\n",
    "1. **xyxy**，即$(x_1, y_1, x_2, y_2)$，\n",
    "  x_1是左上角的坐标\n",
    "\n",
    "  y_1是左下角的坐标\n",
    "\n",
    "  x_2是右上角的坐标\n",
    "  \n",
    "  y_2是右下角的坐标\n",
    "  \n",
    "\n",
    "2. **xywh**，即$(x, y, w, h)$，\n",
    "  \n",
    "  $(x, y)$是矩形框中心点的坐标，\n",
    "  \n",
    "  w是矩形框的宽度\n",
    "  \n",
    "  h是矩形框的高度\n",
    "  \n",
    "  \n",
    "### 边界框的其他概念\n",
    "\n",
    "#### 真实框（ground truth box）\n",
    "\n",
    "> 在检测任务中，训练数据集的标签里会给出目标物体真实边界框所对应的$(x_1, y_1, x_2, y_2)$，这样的边界框也被称为真实框（ground truth box）\n",
    "  \n",
    "#### 预测框（prediction box）\n",
    "\n",
    "> 模型会对目标物体可能出现的位置进行预测，由模型预测出的边界框则称为预测框（prediction box）\n",
    "\n",
    "  \n",
    "### 边界框的方向描述\n",
    "\n",
    "> 图片坐标的原点在左上角，x轴向右为正方向，y轴向下为正方向。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 锚框（Anchor box）\n",
    "\n",
    "锚框与物体边界框不同，是由人们假想出来的一种框，然后有模型经过训练，学习修改参数最终找到目的物体并标记为真实框假象态。\n",
    "\n",
    "#### 锚框的思路\n",
    "\n",
    "1. 先设定好锚框的大小和形状，再以图像上某一个点为中心画出矩形框。\n",
    "\n",
    "2. 将这些锚框当成可能的候选区域。\n",
    "\n",
    "3. 模型对这些候选区域是否包含物体进行预测，\n",
    "\t\n",
    "    3.1 如果包含目标物体，则还需要进一步预测出物体所属的类别。\n",
    "\n",
    "4.由于锚框位置是固定的，它不大可能刚好跟物体边界框重合，所以需要在锚框的基础上进行微调以形成能准确描述物体位置的预测框。\n",
    "\n",
    "5. 模型需要预测出微调的幅度。在训练过程中，模型通过学习不断的调整参数，最终能学会如何判别出锚框所代表的候选区域是否包含物体\n",
    "\n",
    "\t5.1如果包含物体的话，物体属于哪个类别，以及物体边界框相对于锚框位置需要调整的幅度。\n",
    "\n",
    "**不同的模型往往有着不同的生成锚框的方式** 在后面的内容中，会详细介绍YOLO-V3算法里面产生锚框的规则\n",
    "\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/5a5805290f614a39ab8594a3df9cb44faf4fbe331afe4e4c98c9e316452ebc99)\n",
    "\n",
    "\n",
    "在下图中，以像素点[300, 500]为中心可以使用下面的程序生成3个框，如图中蓝色框所示，其中锚框A1跟人像区域非常接近。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "300.0 500.0 282.84271247461896 141.4213562373095\n",
      "300.0 500.0 200.0 200.0\n",
      "300.0 500.0 141.42135623730948 282.842712474619\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<Figure size 1000x1000 with 1 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# 画图展示如何绘制边界框和锚框\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import matplotlib.patches as patches\n",
    "from matplotlib.image import imread\n",
    "import math\n",
    "\n",
    "# 定义画矩形框的程序    \n",
    "def draw_rectangle(currentAxis, bbox, edgecolor = 'k', facecolor = 'y', fill=False, linestyle='-'):\n",
    "    # currentAxis，坐标轴，通过plt.gca()获取\n",
    "    # bbox，边界框，包含四个数值的list， [x1, y1, x2, y2]\n",
    "    # edgecolor，边框线条颜色\n",
    "    # facecolor，填充颜色\n",
    "    # fill, 是否填充\n",
    "    # linestype，边框线型\n",
    "    # patches.Rectangle需要传入左上角坐标、矩形区域的宽度、高度等参数\n",
    "    rect=patches.Rectangle((bbox[0], bbox[1]), bbox[2]-bbox[0]+1, bbox[3]-bbox[1]+1, linewidth=1,\n",
    "                           edgecolor=edgecolor,facecolor=facecolor,fill=fill, linestyle=linestyle)\n",
    "    currentAxis.add_patch(rect)\n",
    "\n",
    "    \n",
    "plt.figure(figsize=(10, 10))\n",
    "\n",
    "filename = '/home/aistudio/work/images/section3/000000086956.jpg'\n",
    "im = imread(filename)\n",
    "plt.imshow(im)\n",
    "\n",
    "# 使用xyxy格式表示物体真实框\n",
    "bbox1 = [214.29, 325.03, 399.82, 631.37]\n",
    "bbox2 = [40.93, 141.1, 226.99, 515.73]\n",
    "bbox3 = [247.2, 131.62, 480.0, 639.32]\n",
    "\n",
    "currentAxis=plt.gca()\n",
    "\n",
    "draw_rectangle(currentAxis, bbox1, edgecolor='r')\n",
    "draw_rectangle(currentAxis, bbox2, edgecolor='r')\n",
    "draw_rectangle(currentAxis, bbox3,edgecolor='r')\n",
    "\n",
    "# 绘制锚框\n",
    "def draw_anchor_box(center, length, scales, ratios, img_height, img_width):\n",
    "    \"\"\"\n",
    "    以center为中心，产生一系列锚框\n",
    "    其中length指定了一个基准的长度\n",
    "    scales是包含多种尺寸比例的list\n",
    "    ratios是包含多种长宽比的list\n",
    "    img_height和img_width是图片的尺寸，生成的锚框范围不能超出图片尺寸之外\n",
    "    \"\"\"\n",
    "    bboxes = []\n",
    "    for scale in scales:\n",
    "        for ratio in ratios:\n",
    "            h = length*scale*math.sqrt(ratio)\n",
    "            w = length*scale/math.sqrt(ratio) \n",
    "            x1 = max(center[0] - w/2., 0.)\n",
    "            y1 = max(center[1] - h/2., 0.)\n",
    "            x2 = min(center[0] + w/2. - 1.0, img_width - 1.0)\n",
    "            y2 = min(center[1] + h/2. - 1.0, img_height - 1.0)\n",
    "            print(center[0], center[1], w, h)\n",
    "            bboxes.append([x1, y1, x2, y2])\n",
    "\n",
    "    for bbox in bboxes:\n",
    "        draw_rectangle(currentAxis, bbox, edgecolor = 'b')\n",
    "\n",
    "img_height = im.shape[0]\n",
    "img_width = im.shape[1]        \n",
    "draw_anchor_box([300., 500.], 100., [2.0], [0.5, 1.0, 2.0], img_height, img_width)\n",
    "\n",
    "\n",
    "################# 以下为添加文字说明和箭头###############################\n",
    "\n",
    "plt.text(285, 285, 'G1', color='red', fontsize=20)\n",
    "plt.arrow(300, 288, 30, 40, color='red', width=0.001, length_includes_head=True, \\\n",
    "         head_width=5, head_length=10, shape='full')\n",
    "\n",
    "plt.text(190, 320, 'A1', color='blue', fontsize=20)\n",
    "plt.arrow(200, 320, 30, 40, color='blue', width=0.001, length_includes_head=True, \\\n",
    "         head_width=5, head_length=10, shape='full')\n",
    "\n",
    "plt.text(160, 370, 'A2', color='blue', fontsize=20)\n",
    "plt.arrow(170, 370, 30, 40, color='blue', width=0.001, length_includes_head=True, \\\n",
    "         head_width=5, head_length=10, shape='full')\n",
    "\n",
    "plt.text(115, 420, 'A3', color='blue', fontsize=20)\n",
    "plt.arrow(127, 420, 30, 40, color='blue', width=0.001, length_includes_head=True, \\\n",
    "         head_width=5, head_length=10, shape='full')\n",
    "\n",
    "#draw_anchor_box([200., 200.], 100., [2.0], [0.5, 1.0, 2.0])    \n",
    "plt.show()\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 交并比\n",
    "\n",
    "锚框根真实框的重合程度：\n",
    "\n",
    "\n",
    "\n",
    "$$IoU = \\frac{A\\cap B}{A \\cup B}$$\n",
    "\n",
    "我们将用这个概念来描述两个框之间的重合度。两个框可以看成是两个像素的集合，它们的交并比等于两个框重合部分的面积除以它们合并起来的面积。下图“交集”中青色区域是两个框的重合面积，图“并集”中蓝色区域是两个框的相并面积。用这两个面积相除即可得到它们之间的交并比，如 **图5** 所示。\n",
    "\n",
    "<br></br>\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/c2095c01997044f8a054d676ab585f3beed4400961ea40379771a1fd6d8bf2ea\" width = \"500\"></center>\n",
    "<center><br>图5：交并比</br></center>\n",
    "<br></br>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "假设两个矩形框A和B的位置分别为：\n",
    "$$A:  [x_{a1}, y_{a1}, x_{a2}, y_{a2}]$$\n",
    "\n",
    "$$B:  [x_{b1}, y_{b1}, x_{b2}, y_{b2}]$$\n",
    "\n",
    "假如位置关系如 **图6** 所示：\n",
    "<br></br>\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/8abb3525989044fa8f512f82dfcd0f2f244594ff79cc4f14a900a09d464b22b8\" width = \"300\"></center>\n",
    "<center><br>图6：计算交并比</br></center>\n",
    "<br></br>\n",
    "\n",
    "如果二者有相交部分，则相交部分左上角坐标为：\n",
    "$$x_1 = max(x_{a1}, x_{b1}), \\ \\ \\ \\ \\ y_1 = max(y_{a1}, y_{b1})$$\n",
    "\n",
    "相交部分右下角坐标为：\n",
    "$$x_2 = min(x_{a2}, x_{b2}), \\ \\ \\ \\ \\ y_2 = min(y_{a2}, y_{b2})$$\n",
    "\n",
    "计算先交部分面积：\n",
    "$$intersection = max(x_2 - x_1 + 1.0, 0) \\cdot max(y_2 - y_1 + 1.0, 0)$$\n",
    "\n",
    "矩形框A和B的面积分别是：\n",
    "$$S_A = (x_{a2} - x_{a1} + 1.0) \\cdot (y_{a2} - y_{a1} + 1.0)$$\n",
    "\n",
    "$$S_B = (x_{b2} - x_{b1} + 1.0) \\cdot (y_{b2} - y_{b1} + 1.0)$$\n",
    "\n",
    "计算相并部分面积：\n",
    "$$union = S_A + S_B - intersection$$\n",
    "\n",
    "计算交并比：\n",
    "\n",
    "$$IoU = \\frac{intersection}{union}$$\n",
    "\n",
    "\n",
    "\n",
    "交并比计算程序如下："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "IoU is 0.47402644317607107\n"
     ]
    }
   ],
   "source": [
    "# 计算IoU，矩形框的坐标形式为xyxy，这个函数会被保存在box_utils.py文件中\n",
    "def box_iou_xyxy(box1, box2):\n",
    "    # 获取box1左上角和右下角的坐标\n",
    "    x1min, y1min, x1max, y1max = box1[0], box1[1], box1[2], box1[3]\n",
    "    # 计算box1的面积\n",
    "    s1 = (y1max - y1min + 1.) * (x1max - x1min + 1.)\n",
    "    # 获取box2左上角和右下角的坐标\n",
    "    x2min, y2min, x2max, y2max = box2[0], box2[1], box2[2], box2[3]\n",
    "    # 计算box2的面积\n",
    "    s2 = (y2max - y2min + 1.) * (x2max - x2min + 1.)\n",
    "    \n",
    "    # 计算相交矩形框的坐标\n",
    "    xmin = np.maximum(x1min, x2min)\n",
    "    ymin = np.maximum(y1min, y2min)\n",
    "    xmax = np.minimum(x1max, x2max)\n",
    "    ymax = np.minimum(y1max, y2max)\n",
    "    # 计算相交矩形行的高度、宽度、面积\n",
    "    inter_h = np.maximum(ymax - ymin + 1., 0.)\n",
    "    inter_w = np.maximum(xmax - xmin + 1., 0.)\n",
    "    intersection = inter_h * inter_w\n",
    "    # 计算相并面积\n",
    "    union = s1 + s2 - intersection\n",
    "    # 计算交并比\n",
    "    iou = intersection / union\n",
    "    return iou\n",
    "\n",
    "\n",
    "bbox1 = [100., 100., 200., 200.]\n",
    "bbox2 = [120., 120., 220., 220.]\n",
    "iou = box_iou_xyxy(bbox1, bbox2)\n",
    "print('IoU is {}'.format(iou))  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 计算IoU，矩形框的坐标形式为xywh\n",
    "def box_iou_xywh(box1, box2):\n",
    "    x1min, y1min = box1[0] - box1[2]/2.0, box1[1] - box1[3]/2.0\n",
    "    x1max, y1max = box1[0] + box1[2]/2.0, box1[1] + box1[3]/2.0\n",
    "    s1 = box1[2] * box1[3]\n",
    "\n",
    "    x2min, y2min = box2[0] - box2[2]/2.0, box2[1] - box2[3]/2.0\n",
    "    x2max, y2max = box2[0] + box2[2]/2.0, box2[1] + box2[3]/2.0\n",
    "    s2 = box2[2] * box2[3]\n",
    "\n",
    "    xmin = np.maximum(x1min, x2min)\n",
    "    ymin = np.maximum(y1min, y2min)\n",
    "    xmax = np.minimum(x1max, x2max)\n",
    "    ymax = np.minimum(y1max, y2max)\n",
    "    inter_h = np.maximum(ymax - ymin, 0.)\n",
    "    inter_w = np.maximum(xmax - xmin, 0.)\n",
    "    intersection = inter_h * inter_w\n",
    "\n",
    "    union = s1 + s2 - intersection\n",
    "    iou = intersection / union\n",
    "    return iou"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "为了直观的展示交并比的大小跟重合程度之间的关系，**图7** 示意了不同交并比下两个框之间的相对位置关系，从 IoU = 0.95 到 IoU = 0.\n",
    "<br></br>\n",
    "<center><img src=\"https://ai-studio-static-online.cdn.bcebos.com/52faaa317c6c4de59088de70e7ba0d478969aacffa6e40b4998a3ca9203a9ee2\" width = \"500\"></center>\n",
    "<center><br>图7：不同交并比下两个框之间相对位置示意图</br></center>\n",
    "<br></br>\n",
    "\n",
    "---- "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 林业病虫害数据集和数据预处理方法介绍\n",
    "\n",
    "在本次的课程中，将使用百度与林业大学合作开发的林业病虫害防治项目中用到昆虫数据集，关于该项目和数据集的更多信息，可以参考[相关报道](补充报道文章链接)。在这一小节中将为读者介绍该数据集，以及计算机视觉任务中常用的数据预处理方法。\n",
    "   \n",
    "## 读取AI识虫数据集标注信息\n",
    "\n",
    "AI识虫数据集结构如下：\n",
    "\n",
    "* 提供了2183张图片，其中训练集1693张，验证集245，测试集245张。\n",
    "* 包含7种昆虫，分别是Boerner、Leconte、Linnaeus、acuminatus、armandi、coleoptera和linnaeus。\n",
    "* 包含了图片和标注，请读者先将数据解压，并存放在insects目录下。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 解压数据脚本，第一次运行时打开注释，将文件解压到work目录下\n",
    "!unzip -d /home/aistudio/work /home/aistudio/data/data19638/insects.zip"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "将数据解压之后，可以看到insects目录下的结构如下所示。\n",
    "\n",
    "        insects\n",
    "            |---train\n",
    "            |         |---annotations\n",
    "            |         |         |---xmls\n",
    "            |         |                  |---100.xml\n",
    "            |         |                  |---101.xml\n",
    "            |         |                  |---...\n",
    "            |         |\n",
    "            |         |---images\n",
    "            |                   |---100.jpeg\n",
    "            |                   |---101.jpeg\n",
    "            |                   |---...\n",
    "            |\n",
    "            |---val\n",
    "            |        |---annotations\n",
    "            |        |         |---xmls\n",
    "            |        |                  |---1221.xml\n",
    "            |        |                  |---1277.xml\n",
    "            |        |                  |---...\n",
    "            |        |\n",
    "            |        |---images\n",
    "            |                  |---1221.jpeg\n",
    "            |                  |---1277.jpeg\n",
    "            |                  |---...\n",
    "            |\n",
    "            |---test\n",
    "                     |---images\n",
    "                               |---1833.jpeg\n",
    "                               |---1838.jpeg\n",
    "                               |---...\n",
    "\n",
    "insects包含train、val和test三个文件夹。train/annotations/xmls目录下存放着图片的标注。每个xml文件是对一张图片的说明，包括图片尺寸、包含的昆虫名称、在图片上出现的位置等信息。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```\n",
    "<annotation>\n",
    "        <folder>刘霏霏</folder>\n",
    "        <filename>100.jpeg</filename>\n",
    "        <path>/home/fion/桌面/刘霏霏/100.jpeg</path>\n",
    "        <source>\n",
    "                <database>Unknown</database>\n",
    "        </source>\n",
    "        <size>\n",
    "                <width>1336</width>\n",
    "                <height>1336</height>\n",
    "                <depth>3</depth>\n",
    "        </size>\n",
    "        <segmented>0</segmented>\n",
    "        <object>\n",
    "                <name>Boerner</name>\n",
    "                <pose>Unspecified</pose>\n",
    "                <truncated>0</truncated>\n",
    "                <difficult>0</difficult>\n",
    "                <bndbox>\n",
    "                        <xmin>500</xmin>\n",
    "                        <ymin>893</ymin>\n",
    "                        <xmax>656</xmax>\n",
    "                        <ymax>966</ymax>\n",
    "                </bndbox>\n",
    "        </object>\n",
    "        <object>\n",
    "                <name>Leconte</name>\n",
    "                <pose>Unspecified</pose>\n",
    "                <truncated>0</truncated>\n",
    "                <difficult>0</difficult>\n",
    "                <bndbox>\n",
    "                        <xmin>622</xmin>\n",
    "                        <ymin>490</ymin>\n",
    "                        <xmax>756</xmax>\n",
    "                        <ymax>610</ymax>\n",
    "                </bndbox>\n",
    "        </object>\n",
    "        <object>\n",
    "                <name>armandi</name>\n",
    "                <pose>Unspecified</pose>\n",
    "                <truncated>0</truncated>\n",
    "                <difficult>0</difficult>\n",
    "                <bndbox>\n",
    "                        <xmin>432</xmin>\n",
    "                        <ymin>663</ymin>\n",
    "                        <xmax>517</xmax>\n",
    "                        <ymax>729</ymax>\n",
    "                </bndbox>\n",
    "        </object>\n",
    "        <object>\n",
    "                <name>coleoptera</name>\n",
    "                <pose>Unspecified</pose>\n",
    "                <truncated>0</truncated>\n",
    "                <difficult>0</difficult>\n",
    "                <bndbox>\n",
    "                        <xmin>624</xmin>\n",
    "                        <ymin>685</ymin>\n",
    "                        <xmax>697</xmax>\n",
    "                        <ymax>771</ymax>\n",
    "                </bndbox>\n",
    "        </object>\n",
    "        <object>\n",
    "                <name>linnaeus</name>\n",
    "                <pose>Unspecified</pose>\n",
    "                <truncated>0</truncated>\n",
    "                <difficult>0</difficult>\n",
    "                <bndbox>\n",
    "                        <xmin>783</xmin>\n",
    "                        <ymin>700</ymin>\n",
    "                        <xmax>856</xmax>\n",
    "                        <ymax>802</ymax>\n",
    "                </bndbox>\n",
    "        </object>\n",
    "</annotation>\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "上面列出的xml文件中的主要参数说明如下：\n",
    "\n",
    "size：图片尺寸\n",
    "\n",
    "object：图片中包含的物体，一张图片可能中包含多个物体\n",
    "\n",
    "* name：昆虫名称\n",
    "* bndbox：物体真实框\n",
    "* difficult：识别是否困难\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面我们将从数据集中读取xml文件，将每张图片的标注信息读取出来。在读取具体的标注文件之前，我们先完成一件事情，就是将昆虫的类别名字（字符串）转化成数字表示的类别。因为神经网络里面计算时需要的输入类型是数值型的，所以需要将字符串表示的类别转化成具体的数字。昆虫类别名称的列表是：['Boerner', 'Leconte', 'Linnaeus', 'acuminatus', 'armandi', 'coleoptera', 'linnaeus']，这里我们约定此列表中：'Boerner'对应类别0，'Leconte'对应类别1，...，'linnaeus'对应类别6。使用下面的程序可以得到表示名称字符串和数字类别之间映射关系的字典。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "INSECT_NAMES = ['Boerner', 'Leconte', 'Linnaeus', \n",
    "                'acuminatus', 'armandi', 'coleoptera', 'linnaeus']\n",
    "\n",
    "def get_insect_names():\n",
    "    \"\"\"\n",
    "    return a dict, as following,\n",
    "        {'Boerner': 0,\n",
    "         'Leconte': 1,\n",
    "         'Linnaeus': 2, \n",
    "         'acuminatus': 3,\n",
    "         'armandi': 4,\n",
    "         'coleoptera': 5,\n",
    "         'linnaeus': 6\n",
    "        }\n",
    "    It can map the insect name into an integer label.\n",
    "    \"\"\"\n",
    "    insect_category2id = {}\n",
    "    for i, item in enumerate(INSECT_NAMES):\n",
    "        insect_category2id[item] = i\n",
    "\n",
    "    return insect_category2id"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'Boerner': 0,\n",
       " 'Leconte': 1,\n",
       " 'Linnaeus': 2,\n",
       " 'acuminatus': 3,\n",
       " 'armandi': 4,\n",
       " 'coleoptera': 5,\n",
       " 'linnaeus': 6}"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "cname2cid = get_insect_names()\n",
    "cname2cid"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "调用get_insect_names函数返回一个dict，其键-值对描述了昆虫名称-数字类别之间的映射关系。\n",
    "\n",
    "下面的程序从annotations/xml目录下面读取所有文件标注信息。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import numpy as np\n",
    "import xml.etree.ElementTree as ET\n",
    "\n",
    "def get_annotations(cname2cid, datadir):\n",
    "    filenames = os.listdir(os.path.join(datadir, 'annotations', 'xmls'))\n",
    "    records = []\n",
    "    ct = 0\n",
    "    for fname in filenames:\n",
    "        fid = fname.split('.')[0]\n",
    "        fpath = os.path.join(datadir, 'annotations', 'xmls', fname)\n",
    "        img_file = os.path.join(datadir, 'images', fid + '.jpeg')\n",
    "        tree = ET.parse(fpath)\n",
    "\n",
    "        if tree.find('id') is None:\n",
    "            im_id = np.array([ct])\n",
    "        else:\n",
    "            im_id = np.array([int(tree.find('id').text)])\n",
    "\n",
    "        objs = tree.findall('object')\n",
    "        im_w = float(tree.find('size').find('width').text)\n",
    "        im_h = float(tree.find('size').find('height').text)\n",
    "        gt_bbox = np.zeros((len(objs), 4), dtype=np.float32)\n",
    "        gt_class = np.zeros((len(objs), ), dtype=np.int32)\n",
    "        is_crowd = np.zeros((len(objs), ), dtype=np.int32)\n",
    "        difficult = np.zeros((len(objs), ), dtype=np.int32)\n",
    "        for i, obj in enumerate(objs):\n",
    "            cname = obj.find('name').text\n",
    "            gt_class[i] = cname2cid[cname]\n",
    "            _difficult = int(obj.find('difficult').text)\n",
    "            x1 = float(obj.find('bndbox').find('xmin').text)\n",
    "            y1 = float(obj.find('bndbox').find('ymin').text)\n",
    "            x2 = float(obj.find('bndbox').find('xmax').text)\n",
    "            y2 = float(obj.find('bndbox').find('ymax').text)\n",
    "            x1 = max(0, x1)\n",
    "            y1 = max(0, y1)\n",
    "            x2 = min(im_w - 1, x2)\n",
    "            y2 = min(im_h - 1, y2)\n",
    "            # 这里使用xywh格式来表示目标物体真实框\n",
    "            gt_bbox[i] = [(x1+x2)/2.0 , (y1+y2)/2.0, x2-x1+1., y2-y1+1.]\n",
    "            is_crowd[i] = 0\n",
    "            difficult[i] = _difficult\n",
    "\n",
    "        voc_rec = {\n",
    "            'im_file': img_file,\n",
    "            'im_id': im_id,\n",
    "            'h': im_h,\n",
    "            'w': im_w,\n",
    "            'is_crowd': is_crowd,\n",
    "            'gt_class': gt_class,\n",
    "            'gt_bbox': gt_bbox,\n",
    "            'gt_poly': [],\n",
    "            'difficult': difficult\n",
    "            }\n",
    "        if len(objs) != 0:\n",
    "            records.append(voc_rec)\n",
    "        ct += 1\n",
    "    return records"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "TRAINDIR = '/home/aistudio/work/insects/train'\n",
    "TESTDIR = '/home/aistudio/work/insects/test'\n",
    "VALIDDIR = '/home/aistudio/work/insects/val'\n",
    "cname2cid = get_insect_names()\n",
    "records = get_annotations(cname2cid, TRAINDIR)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "1693"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(records)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'im_file': '/home/aistudio/work/insects/train/images/1877.jpeg',\n",
       " 'im_id': array([0]),\n",
       " 'h': 1244.0,\n",
       " 'w': 1244.0,\n",
       " 'is_crowd': array([0, 0, 0, 0, 0, 0, 0, 0], dtype=int32),\n",
       " 'gt_class': array([1, 1, 0, 2, 3, 4, 5, 5], dtype=int32),\n",
       " 'gt_bbox': array([[934.5, 594. , 196. , 171. ],\n",
       "        [593.5, 500.5, 150. , 110. ],\n",
       "        [748.5, 814. ,  76. , 147. ],\n",
       "        [471.5, 633. ,  90. ,  83. ],\n",
       "        [545. , 831.5,  89. ,  60. ],\n",
       "        [764. , 500.5,  85. , 106. ],\n",
       "        [623.5, 688. ,  76. ,  59. ],\n",
       "        [614.5, 859.5,  42. ,  56. ]], dtype=float32),\n",
       " 'gt_poly': [],\n",
       " 'difficult': array([0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)}"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "records[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "通过上面的程序，将所有训练数据集的标注数据全部读取出来了，存放在records列表下面，其中每一个元素是一张图片的标注数据，包含了图片存放地址，图片id，图片高度和宽度，图片中所包含的目标物体的种类和位置。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 数据读取和预处理\n",
    "\n",
    "数据预处理是训练神经网络时非常重要的步骤。合适的预处理方法，可以帮助模型更好的收敛并防止过拟合。首先我们需要从磁盘读入数据，然后需要对这些数据进行预处理，为了保证网络运行的速度通常还要对数据预处理进行加速。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 数据读取\n",
    "\n",
    "前面已经将图片的所有描述信息保存在records中了，其中的每一个元素包含了一张图片的描述，下面的程序展示了如何根据records里面的描述读取图片及标注。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "### 数据读取\n",
    "import cv2\n",
    "\n",
    "def get_bbox(gt_bbox, gt_class):\n",
    "    # 对于一般的检测任务来说，一张图片上往往会有多个目标物体\n",
    "    # 设置参数MAX_NUM = 50， 即一张图片最多取50个真实框；如果真实\n",
    "    # 框的数目少于50个，则将不足部分的gt_bbox, gt_class和gt_score的各项数值全设置为0\n",
    "    MAX_NUM = 50\n",
    "    gt_bbox2 = np.zeros((MAX_NUM, 4))\n",
    "    gt_class2 = np.zeros((MAX_NUM,))\n",
    "    for i in range(len(gt_bbox)):\n",
    "        gt_bbox2[i, :] = gt_bbox[i, :]\n",
    "        gt_class2[i] = gt_class[i]\n",
    "        if i >= MAX_NUM:\n",
    "            break\n",
    "    return gt_bbox2, gt_class2\n",
    "\n",
    "def get_img_data_from_file(record):\n",
    "    \"\"\"\n",
    "    record is a dict as following,\n",
    "      record = {\n",
    "            'im_file': img_file,\n",
    "            'im_id': im_id,\n",
    "            'h': im_h,\n",
    "            'w': im_w,\n",
    "            'is_crowd': is_crowd,\n",
    "            'gt_class': gt_class,\n",
    "            'gt_bbox': gt_bbox,\n",
    "            'gt_poly': [],\n",
    "            'difficult': difficult\n",
    "            }\n",
    "    \"\"\"\n",
    "    im_file = record['im_file']\n",
    "    h = record['h']\n",
    "    w = record['w']\n",
    "    is_crowd = record['is_crowd']\n",
    "    gt_class = record['gt_class']\n",
    "    gt_bbox = record['gt_bbox']\n",
    "    difficult = record['difficult']\n",
    "\n",
    "    img = cv2.imread(im_file)\n",
    "    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n",
    "\n",
    "    # check if h and w in record equals that read from img\n",
    "    assert img.shape[0] == int(h), \\\n",
    "             \"image height of {} inconsistent in record({}) and img file({})\".format(\n",
    "               im_file, h, img.shape[0])\n",
    "\n",
    "    assert img.shape[1] == int(w), \\\n",
    "             \"image width of {} inconsistent in record({}) and img file({})\".format(\n",
    "               im_file, w, img.shape[1])\n",
    "\n",
    "    gt_boxes, gt_labels = get_bbox(gt_bbox, gt_class)\n",
    "\n",
    "    # gt_bbox 用相对值\n",
    "    gt_boxes[:, 0] = gt_boxes[:, 0] / float(w)\n",
    "    gt_boxes[:, 1] = gt_boxes[:, 1] / float(h)\n",
    "    gt_boxes[:, 2] = gt_boxes[:, 2] / float(w)\n",
    "    gt_boxes[:, 3] = gt_boxes[:, 3] / float(h)\n",
    "  \n",
    "    return img, gt_boxes, gt_labels, (h, w)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "record = records[0]\n",
    "img, gt_boxes, gt_labels, scales = get_img_data_from_file(record)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(1244, 1244, 3)"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "img.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(50, 4)"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "gt_boxes.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([1., 1., 0., 2., 3., 4., 5., 5., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
       "       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
       "       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "gt_labels"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(1244.0, 1244.0)"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "scales"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "get_img_data_from_file()函数可以返回图片数据的数据，它们是图像数据img, 真实框坐标gt_boxes, 真实框包含的物体类别gt_labels, 图像尺寸scales。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 数据预处理\n",
    "\n",
    "在计算机视觉中，通常会对图像做一些随机的变化，产生相似但又不完全相同的样本。主要作用是扩大训练数据集，抑制过拟合，提升模型的泛化能力，常用的方法见下面的程序。\n",
    "\n",
    "\n",
    "#### **随机改变亮暗、对比度和颜色等**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import cv2\n",
    "from PIL import Image, ImageEnhance\n",
    "import random\n",
    "\n",
    "# 随机改变亮暗、对比度和颜色等\n",
    "def random_distort(img):\n",
    "    # 随机改变亮度\n",
    "    def random_brightness(img, lower=0.5, upper=1.5):\n",
    "        e = np.random.uniform(lower, upper)\n",
    "        return ImageEnhance.Brightness(img).enhance(e)\n",
    "    # 随机改变对比度\n",
    "    def random_contrast(img, lower=0.5, upper=1.5):\n",
    "        e = np.random.uniform(lower, upper)\n",
    "        return ImageEnhance.Contrast(img).enhance(e)\n",
    "    # 随机改变颜色\n",
    "    def random_color(img, lower=0.5, upper=1.5):\n",
    "        e = np.random.uniform(lower, upper)\n",
    "        return ImageEnhance.Color(img).enhance(e)\n",
    "\n",
    "    ops = [random_brightness, random_contrast, random_color]\n",
    "    np.random.shuffle(ops)\n",
    "\n",
    "    img = Image.fromarray(img)\n",
    "    img = ops[0](img)\n",
    "    img = ops[1](img)\n",
    "    img = ops[2](img)\n",
    "    img = np.asarray(img)\n",
    "\n",
    "    return img"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### **随机填充**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 随机填充\n",
    "def random_expand(img,\n",
    "                  gtboxes,\n",
    "                  max_ratio=4.,\n",
    "                  fill=None,\n",
    "                  keep_ratio=True,\n",
    "                  thresh=0.5):\n",
    "    if random.random() > thresh:\n",
    "        return img, gtboxes\n",
    "\n",
    "    if max_ratio < 1.0:\n",
    "        return img, gtboxes\n",
    "\n",
    "    h, w, c = img.shape\n",
    "    ratio_x = random.uniform(1, max_ratio)\n",
    "    if keep_ratio:\n",
    "        ratio_y = ratio_x\n",
    "    else:\n",
    "        ratio_y = random.uniform(1, max_ratio)\n",
    "    oh = int(h * ratio_y)\n",
    "    ow = int(w * ratio_x)\n",
    "    off_x = random.randint(0, ow - w)\n",
    "    off_y = random.randint(0, oh - h)\n",
    "\n",
    "    out_img = np.zeros((oh, ow, c))\n",
    "    if fill and len(fill) == c:\n",
    "        for i in range(c):\n",
    "            out_img[:, :, i] = fill[i] * 255.0\n",
    "\n",
    "    out_img[off_y:off_y + h, off_x:off_x + w, :] = img\n",
    "    gtboxes[:, 0] = ((gtboxes[:, 0] * w) + off_x) / float(ow)\n",
    "    gtboxes[:, 1] = ((gtboxes[:, 1] * h) + off_y) / float(oh)\n",
    "    gtboxes[:, 2] = gtboxes[:, 2] / ratio_x\n",
    "    gtboxes[:, 3] = gtboxes[:, 3] / ratio_y\n",
    "\n",
    "    return out_img.astype('uint8'), gtboxes"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### **随机裁剪**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "随机裁剪之前需要先定义两个函数，multi_box_iou_xywh和box_crop这两个函数将被保存在box_utils.py文件中。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def multi_box_iou_xywh(box1, box2):\n",
    "    \"\"\"\n",
    "    In this case, box1 or box2 can contain multi boxes.\n",
    "    Only two cases can be processed in this method:\n",
    "       1, box1 and box2 have the same shape, box1.shape == box2.shape\n",
    "       2, either box1 or box2 contains only one box, len(box1) == 1 or len(box2) == 1\n",
    "    If the shape of box1 and box2 does not match, and both of them contain multi boxes, it will be wrong.\n",
    "    \"\"\"\n",
    "    assert box1.shape[-1] == 4, \"Box1 shape[-1] should be 4.\"\n",
    "    assert box2.shape[-1] == 4, \"Box2 shape[-1] should be 4.\"\n",
    "\n",
    "\n",
    "    b1_x1, b1_x2 = box1[:, 0] - box1[:, 2] / 2, box1[:, 0] + box1[:, 2] / 2\n",
    "    b1_y1, b1_y2 = box1[:, 1] - box1[:, 3] / 2, box1[:, 1] + box1[:, 3] / 2\n",
    "    b2_x1, b2_x2 = box2[:, 0] - box2[:, 2] / 2, box2[:, 0] + box2[:, 2] / 2\n",
    "    b2_y1, b2_y2 = box2[:, 1] - box2[:, 3] / 2, box2[:, 1] + box2[:, 3] / 2\n",
    "\n",
    "    inter_x1 = np.maximum(b1_x1, b2_x1)\n",
    "    inter_x2 = np.minimum(b1_x2, b2_x2)\n",
    "    inter_y1 = np.maximum(b1_y1, b2_y1)\n",
    "    inter_y2 = np.minimum(b1_y2, b2_y2)\n",
    "    inter_w = inter_x2 - inter_x1\n",
    "    inter_h = inter_y2 - inter_y1\n",
    "    inter_w = np.clip(inter_w, a_min=0., a_max=None)\n",
    "    inter_h = np.clip(inter_h, a_min=0., a_max=None)\n",
    "\n",
    "    inter_area = inter_w * inter_h\n",
    "    b1_area = (b1_x2 - b1_x1) * (b1_y2 - b1_y1)\n",
    "    b2_area = (b2_x2 - b2_x1) * (b2_y2 - b2_y1)\n",
    "\n",
    "    return inter_area / (b1_area + b2_area - inter_area)\n",
    "\n",
    "def box_crop(boxes, labels, crop, img_shape):\n",
    "    x, y, w, h = map(float, crop)\n",
    "    im_w, im_h = map(float, img_shape)\n",
    "\n",
    "    boxes = boxes.copy()\n",
    "    boxes[:, 0], boxes[:, 2] = (boxes[:, 0] - boxes[:, 2] / 2) * im_w, (\n",
    "        boxes[:, 0] + boxes[:, 2] / 2) * im_w\n",
    "    boxes[:, 1], boxes[:, 3] = (boxes[:, 1] - boxes[:, 3] / 2) * im_h, (\n",
    "        boxes[:, 1] + boxes[:, 3] / 2) * im_h\n",
    "\n",
    "    crop_box = np.array([x, y, x + w, y + h])\n",
    "    centers = (boxes[:, :2] + boxes[:, 2:]) / 2.0\n",
    "    mask = np.logical_and(crop_box[:2] <= centers, centers <= crop_box[2:]).all(\n",
    "        axis=1)\n",
    "\n",
    "    boxes[:, :2] = np.maximum(boxes[:, :2], crop_box[:2])\n",
    "    boxes[:, 2:] = np.minimum(boxes[:, 2:], crop_box[2:])\n",
    "    boxes[:, :2] -= crop_box[:2]\n",
    "    boxes[:, 2:] -= crop_box[:2]\n",
    "\n",
    "    mask = np.logical_and(mask, (boxes[:, :2] < boxes[:, 2:]).all(axis=1))\n",
    "    boxes = boxes * np.expand_dims(mask.astype('float32'), axis=1)\n",
    "    labels = labels * mask.astype('float32')\n",
    "    boxes[:, 0], boxes[:, 2] = (boxes[:, 0] + boxes[:, 2]) / 2 / w, (\n",
    "        boxes[:, 2] - boxes[:, 0]) / w\n",
    "    boxes[:, 1], boxes[:, 3] = (boxes[:, 1] + boxes[:, 3]) / 2 / h, (\n",
    "        boxes[:, 3] - boxes[:, 1]) / h\n",
    "\n",
    "    return boxes, labels, mask.sum()"
   ]
  },
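  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the xywh convention used by multi_box_iou_xywh concrete, here is a hand-worked IoU check on two made-up boxes. The scalar helper iou_xywh below is a throwaway illustration, not part of box_utils.py.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Two boxes in normalized xywh; both 0.4 x 0.4, shifted by 0.2 along x.\n",
    "b1 = np.array([0.5, 0.5, 0.4, 0.4])\n",
    "b2 = np.array([0.7, 0.5, 0.4, 0.4])\n",
    "\n",
    "def iou_xywh(a, b):\n",
    "    # Convert centers/sizes to corners, then intersect.\n",
    "    ax1, ax2 = a[0] - a[2] / 2, a[0] + a[2] / 2\n",
    "    ay1, ay2 = a[1] - a[3] / 2, a[1] + a[3] / 2\n",
    "    bx1, bx2 = b[0] - b[2] / 2, b[0] + b[2] / 2\n",
    "    by1, by2 = b[1] - b[3] / 2, b[1] + b[3] / 2\n",
    "    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))\n",
    "    ih = max(0.0, min(ay2, by2) - max(ay1, by1))\n",
    "    inter = iw * ih\n",
    "    union = a[2] * a[3] + b[2] * b[3] - inter\n",
    "    return inter / union\n",
    "\n",
    "# Overlap is 0.2 x 0.4 = 0.08; union is 0.16 + 0.16 - 0.08 = 0.24.\n",
    "print(iou_xywh(b1, b2))  # ~0.3333\n",
    "```"
   ]
  },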
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 随机裁剪\n",
    "def random_crop(img,\n",
    "                boxes,\n",
    "                labels,\n",
    "                scales=[0.3, 1.0],\n",
    "                max_ratio=2.0,\n",
    "                constraints=None,\n",
    "                max_trial=50):\n",
    "    if len(boxes) == 0:\n",
    "        return img, boxes\n",
    "\n",
    "    if not constraints:\n",
    "        constraints = [(0.1, 1.0), (0.3, 1.0), (0.5, 1.0), (0.7, 1.0),\n",
    "                       (0.9, 1.0), (0.0, 1.0)]\n",
    "\n",
    "    img = Image.fromarray(img)\n",
    "    w, h = img.size\n",
    "    crops = [(0, 0, w, h)]\n",
    "    for min_iou, max_iou in constraints:\n",
    "        for _ in range(max_trial):\n",
    "            scale = random.uniform(scales[0], scales[1])\n",
    "            aspect_ratio = random.uniform(max(1 / max_ratio, scale * scale), \\\n",
    "                                          min(max_ratio, 1 / scale / scale))\n",
    "            crop_h = int(h * scale / np.sqrt(aspect_ratio))\n",
    "            crop_w = int(w * scale * np.sqrt(aspect_ratio))\n",
    "            crop_x = random.randrange(w - crop_w)\n",
    "            crop_y = random.randrange(h - crop_h)\n",
    "            crop_box = np.array([[(crop_x + crop_w / 2.0) / w,\n",
    "                                  (crop_y + crop_h / 2.0) / h,\n",
    "                                  crop_w / float(w), crop_h / float(h)]])\n",
    "\n",
    "            iou = multi_box_iou_xywh(crop_box, boxes)\n",
    "            if min_iou <= iou.min() and max_iou >= iou.max():\n",
    "                crops.append((crop_x, crop_y, crop_w, crop_h))\n",
    "                break\n",
    "\n",
    "    while crops:\n",
    "        crop = crops.pop(np.random.randint(0, len(crops)))\n",
    "        crop_boxes, crop_labels, box_num = box_crop(boxes, labels, crop, (w, h))\n",
    "        if box_num < 1:\n",
    "            continue\n",
    "        img = img.crop((crop[0], crop[1], crop[0] + crop[2],\n",
    "                        crop[1] + crop[3])).resize(img.size, Image.LANCZOS)\n",
    "        img = np.asarray(img)\n",
    "        return img, crop_boxes, crop_labels\n",
    "    img = np.asarray(img)\n",
    "    return img, boxes, labels"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### **随机缩放**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 随机缩放\n",
    "def random_interp(img, size, interp=None):\n",
    "    interp_method = [\n",
    "        cv2.INTER_NEAREST,\n",
    "        cv2.INTER_LINEAR,\n",
    "        cv2.INTER_AREA,\n",
    "        cv2.INTER_CUBIC,\n",
    "        cv2.INTER_LANCZOS4,\n",
    "    ]\n",
    "    if not interp or interp not in interp_method:\n",
    "        interp = interp_method[random.randint(0, len(interp_method) - 1)]\n",
    "    h, w, _ = img.shape\n",
    "    im_scale_x = size / float(w)\n",
    "    im_scale_y = size / float(h)\n",
    "    img = cv2.resize(\n",
    "        img, None, None, fx=im_scale_x, fy=im_scale_y, interpolation=interp)\n",
    "    return img"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### **随机翻转**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 随机翻转\n",
    "def random_flip(img, gtboxes, thresh=0.5):\n",
    "    if random.random() > thresh:\n",
    "        img = img[:, ::-1, :]\n",
    "        gtboxes[:, 0] = 1.0 - gtboxes[:, 0]\n",
    "    return img, gtboxes"
   ]
  },
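  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A tiny worked example (dummy data) of what random_flip does when the coin flip succeeds: image columns are reversed, and the normalized center x of each box is mirrored to 1 - x, while widths and heights are untouched.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Dummy image (H=2, W=3, C=1) and one box with normalized center x = 0.2.\n",
    "img = np.array([[[1], [2], [3]],\n",
    "                [[4], [5], [6]]])\n",
    "boxes = np.array([[0.2, 0.5, 0.1, 0.1]], dtype='float32')\n",
    "\n",
    "# The same operations random_flip applies.\n",
    "flipped = img[:, ::-1, :]\n",
    "boxes[:, 0] = 1.0 - boxes[:, 0]\n",
    "\n",
    "print(flipped[0, :, 0])  # columns reversed: [3 2 1]\n",
    "print(boxes[0, 0])       # mirrored center x: 0.8\n",
    "```"
   ]
  },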
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### **随机打乱真实框排列顺序**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 随机打乱真实框排列顺序\n",
    "def shuffle_gtbox(gtbox, gtlabel):\n",
    "    gt = np.concatenate(\n",
    "        [gtbox, gtlabel[:, np.newaxis]], axis=1)\n",
    "    idx = np.arange(gt.shape[0])\n",
    "    np.random.shuffle(idx)\n",
    "    gt = gt[idx, :]\n",
    "    return gt[:, :4], gt[:, 4]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### **图像增广方法**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 图像增广方法汇总\n",
    "def image_augment(img, gtboxes, gtlabels, size, means=None):\n",
    "    # 随机改变亮暗、对比度和颜色等\n",
    "    img = random_distort(img)\n",
    "    # 随机填充\n",
    "    img, gtboxes = random_expand(img, gtboxes, fill=means)\n",
    "    # 随机裁剪\n",
    "    img, gtboxes, gtlabels, = random_crop(img, gtboxes, gtlabels)\n",
    "    # 随机缩放\n",
    "    img = random_interp(img, size)\n",
    "    # 随机翻转\n",
    "    img, gtboxes = random_flip(img, gtboxes)\n",
    "    # 随机打乱真实框排列顺序\n",
    "    gtboxes, gtlabels = shuffle_gtbox(gtboxes, gtlabels)\n",
    "\n",
    "    return img.astype('float32'), gtboxes.astype('float32'), gtlabels.astype('int32')\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "img, gt_boxes, gt_labels, scales = get_img_data_from_file(record)\n",
    "size = 512\n",
    "img, gt_boxes, gt_labels = image_augment(img, gt_boxes, gt_labels, size)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(512, 512, 3)"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "img.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(50, 4)"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "gt_boxes.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(50,)"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "gt_labels.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "这里得到的img数据数值需要调整，需要除以255，并且减去均值和方差，再将维度从[H, W, C]调整为[C, H, W]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[[-2.117904 , -2.117904 , -2.117904 , ..., -2.117904 ,\n",
       "         -2.117904 , -2.117904 ],\n",
       "        [-2.117904 , -2.117904 , -2.117904 , ..., -2.117904 ,\n",
       "         -2.117904 , -2.117904 ],\n",
       "        [-2.117904 , -2.117904 , -2.117904 , ..., -2.117904 ,\n",
       "         -2.117904 , -2.117904 ],\n",
       "        ...,\n",
       "        [-2.117904 , -2.117904 , -2.117904 , ..., -2.117904 ,\n",
       "         -2.117904 , -2.117904 ],\n",
       "        [-2.117904 , -2.117904 , -2.117904 , ..., -2.117904 ,\n",
       "         -2.117904 , -2.117904 ],\n",
       "        [-2.117904 , -2.117904 , -2.117904 , ..., -2.117904 ,\n",
       "         -2.117904 , -2.117904 ]],\n",
       "\n",
       "       [[-2.0357144, -2.0357144, -2.0357144, ..., -2.0357144,\n",
       "         -2.0357144, -2.0357144],\n",
       "        [-2.0357144, -2.0357144, -2.0357144, ..., -2.0357144,\n",
       "         -2.0357144, -2.0357144],\n",
       "        [-2.0357144, -2.0357144, -2.0357144, ..., -2.0357144,\n",
       "         -2.0357144, -2.0357144],\n",
       "        ...,\n",
       "        [-2.0357144, -2.0357144, -2.0357144, ..., -2.0357144,\n",
       "         -2.0357144, -2.0357144],\n",
       "        [-2.0357144, -2.0357144, -2.0357144, ..., -2.0357144,\n",
       "         -2.0357144, -2.0357144],\n",
       "        [-2.0357144, -2.0357144, -2.0357144, ..., -2.0357144,\n",
       "         -2.0357144, -2.0357144]],\n",
       "\n",
       "       [[-1.8044444, -1.8044444, -1.8044444, ..., -1.8044444,\n",
       "         -1.8044444, -1.8044444],\n",
       "        [-1.8044444, -1.8044444, -1.8044444, ..., -1.8044444,\n",
       "         -1.8044444, -1.8044444],\n",
       "        [-1.8044444, -1.8044444, -1.8044444, ..., -1.8044444,\n",
       "         -1.8044444, -1.8044444],\n",
       "        ...,\n",
       "        [-1.8044444, -1.8044444, -1.8044444, ..., -1.8044444,\n",
       "         -1.8044444, -1.8044444],\n",
       "        [-1.8044444, -1.8044444, -1.8044444, ..., -1.8044444,\n",
       "         -1.8044444, -1.8044444],\n",
       "        [-1.8044444, -1.8044444, -1.8044444, ..., -1.8044444,\n",
       "         -1.8044444, -1.8044444]]], dtype=float32)"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "img, gt_boxes, gt_labels, scales = get_img_data_from_file(record)\n",
    "size = 512\n",
    "img, gt_boxes, gt_labels = image_augment(img, gt_boxes, gt_labels, size)\n",
    "mean = [0.485, 0.456, 0.406]\n",
    "std = [0.229, 0.224, 0.225]\n",
    "mean = np.array(mean).reshape((1, 1, -1))\n",
    "std = np.array(std).reshape((1, 1, -1))\n",
    "img = (img / 255.0 - mean) / std\n",
    "img = img.astype('float32').transpose((2, 0, 1))\n",
    "img"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "将上面的过程整理成一个函数get_img_data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_img_data(record, size=640):\n",
    "    img, gt_boxes, gt_labels, scales = get_img_data_from_file(record)\n",
    "    img, gt_boxes, gt_labels = image_augment(img, gt_boxes, gt_labels, size)\n",
    "    mean = [0.485, 0.456, 0.406]\n",
    "    std = [0.229, 0.224, 0.225]\n",
    "    mean = np.array(mean).reshape((1, 1, -1))\n",
    "    std = np.array(std).reshape((1, 1, -1))\n",
    "    img = (img / 255.0 - mean) / std\n",
    "    img = img.astype('float32').transpose((2, 0, 1))\n",
    "    return img, gt_boxes, gt_labels, scales"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "TRAINDIR = '/home/aistudio/work/insects/train'\n",
    "TESTDIR = '/home/aistudio/work/insects/test'\n",
    "VALIDDIR = '/home/aistudio/work/insects/val'\n",
    "cname2cid = get_insect_names()\n",
    "records = get_annotations(cname2cid, TRAINDIR)\n",
    "\n",
    "record = records[0]\n",
    "img, gt_boxes, gt_labels, scales = get_img_data(record, size=480)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(3, 480, 480)"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "img.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(50, 4)"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "gt_boxes.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
       "       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0,\n",
       "       0, 4, 0, 0, 0, 0], dtype=int32)"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "gt_labels"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(1244.0, 1244.0)"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "scales"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 批量数据读取与加速\n",
    "\n",
    "上面的程序展示了如何读取一张图片的数据并加速，下面的代码实现了批量数据读取。\n",
    "\n"
   ]
  },
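  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Images within one batch have to share a size because make_array (defined below) stacks them with np.array: equally shaped images collapse into one dense (N, C, H, W) tensor, which is what the network consumes. A minimal sketch with dummy arrays:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Two dummy images of identical shape stack into one batch tensor.\n",
    "batch = [np.zeros((3, 320, 320), dtype='float32') for _ in range(2)]\n",
    "stacked = np.array(batch)\n",
    "print(stacked.shape)  # (2, 3, 320, 320)\n",
    "```"
   ]
  },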
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 获取一个批次内样本随机缩放的尺寸\n",
    "def get_img_size(mode):\n",
    "    if (mode == 'train') or (mode == 'valid'):\n",
    "        inds = np.array([0,1,2,3,4,5,6,7,8,9])\n",
    "        ii = np.random.choice(inds)\n",
    "        img_size = 320 + ii * 32\n",
    "    else:\n",
    "        img_size = 608\n",
    "    return img_size\n",
    "\n",
    "# 将 list形式的batch数据 转化成多个array构成的tuple\n",
    "def make_array(batch_data):\n",
    "    img_array = np.array([item[0] for item in batch_data], dtype = 'float32')\n",
    "    gt_box_array = np.array([item[1] for item in batch_data], dtype = 'float32')\n",
    "    gt_labels_array = np.array([item[2] for item in batch_data], dtype = 'int32')\n",
    "    img_scale = np.array([item[3] for item in batch_data], dtype='int32')\n",
    "    return img_array, gt_box_array, gt_labels_array, img_scale\n",
    "\n",
    "# 批量读取数据，同一批次内图像的尺寸大小必须是一样的，\n",
    "# 不同批次之间的大小是随机的，\n",
    "# 由上面定义的get_img_size函数产生\n",
    "def data_loader(datadir, batch_size= 10, mode='train'):\n",
    "    cname2cid = get_insect_names()\n",
    "    records = get_annotations(cname2cid, datadir)\n",
    "\n",
    "    def reader():\n",
    "        if mode == 'train':\n",
    "            np.random.shuffle(records)\n",
    "        batch_data = []\n",
    "        img_size = get_img_size(mode)\n",
    "        for record in records:\n",
    "            #print(record)\n",
    "            img, gt_bbox, gt_labels, im_shape = get_img_data(record, \n",
    "                                                             size=img_size)\n",
    "            batch_data.append((img, gt_bbox, gt_labels, im_shape))\n",
    "            if len(batch_data) == batch_size:\n",
    "                yield make_array(batch_data)\n",
    "                batch_data = []\n",
    "                img_size = get_img_size(mode)\n",
    "        if len(batch_data) > 0:\n",
    "            yield make_array(batch_data)\n",
    "\n",
    "    return reader"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "d = data_loader('/home/aistudio/work/insects/train', batch_size=2, mode='train')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "img, gt_boxes, gt_labels, im_shape = next(d())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "((2, 3, 352, 352), (2, 50, 4), (2, 50), (2, 2))"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "img.shape, gt_boxes.shape, gt_labels.shape, im_shape.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "由于数据预处理耗时较长，可能会成为网络训练速度的瓶颈，所以需要对预处理部分进行优化。通过使用飞桨提供的API [paddle.reader.xmap_readers](https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/io_cn/xmap_readers_cn.html#xmap-readers)可以开启多线程读取数据，具体实现代码如下。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import functools\n",
    "import paddle\n",
    "\n",
    "# 使用paddle.reader.xmap_readers实现多线程读取数据\n",
    "def multithread_loader(datadir, batch_size= 10, mode='train'):\n",
    "    cname2cid = get_insect_names()\n",
    "    records = get_annotations(cname2cid, datadir)\n",
    "    def reader():\n",
    "        if mode == 'train':\n",
    "            np.random.shuffle(records)\n",
    "        img_size = get_img_size(mode)\n",
    "        batch_data = []\n",
    "        for record in records:\n",
    "            batch_data.append((record, img_size))\n",
    "            if len(batch_data) == batch_size:\n",
    "                yield batch_data\n",
    "                batch_data = []\n",
    "                img_size = get_img_size(mode)\n",
    "        if len(batch_data) > 0:\n",
    "            yield batch_data\n",
    "\n",
    "    def get_data(samples):\n",
    "        batch_data = []\n",
    "        for sample in samples:\n",
    "            record = sample[0]\n",
    "            img_size = sample[1]\n",
    "            img, gt_bbox, gt_labels, im_shape = get_img_data(record, size=img_size)\n",
    "            batch_data.append((img, gt_bbox, gt_labels, im_shape))\n",
    "        return make_array(batch_data)\n",
    "\n",
    "    mapper = functools.partial(get_data, )\n",
    "\n",
    "    return paddle.reader.xmap_readers(mapper, reader, 8, 10)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "d = multithread_loader('/home/aistudio/work/insects/train', batch_size=2, mode='train')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "img, gt_boxes, gt_labels, im_shape = next(d())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "((2, 3, 320, 320), (2, 50, 4), (2, 50), (2, 2))"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "img.shape, gt_boxes.shape, gt_labels.shape, im_shape.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "至此，我们完成了如何查看数据集中的数据、提取数据标注信息、从文件读取图像和标注数据、图像增广、批量读取和加速等过程，通过multithread_loader可以返回img, gt_boxes, gt_labels, im_shape等数据，接下来就可以将它们输入到神经网络，应用到具体算法上了。\n",
    "\n",
    "在开始具体的算法讲解之前，先补充一下读取测试数据的代码。测试数据没有标注信息，也不需要做图像增广，代码如下所示。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 测试数据读取\n",
    "\n",
    "# 将 list形式的batch数据 转化成多个array构成的tuple\n",
    "def make_test_array(batch_data):\n",
    "    img_name_array = np.array([item[0] for item in batch_data])\n",
    "    img_data_array = np.array([item[1] for item in batch_data], dtype = 'float32')\n",
    "    img_scale_array = np.array([item[2] for item in batch_data], dtype='int32')\n",
    "    return img_name_array, img_data_array, img_scale_array\n",
    "\n",
    "# 测试数据读取\n",
    "def test_data_loader(datadir, batch_size= 10, test_image_size=608, mode='test'):\n",
    "    \"\"\"\n",
    "    加载测试用的图片，测试数据没有groundtruth标签\n",
    "    \"\"\"\n",
    "    image_names = os.listdir(datadir)\n",
    "    def reader():\n",
    "        batch_data = []\n",
    "        img_size = test_image_size\n",
    "        for image_name in image_names:\n",
    "            file_path = os.path.join(datadir, image_name)\n",
    "            img = cv2.imread(file_path)\n",
    "            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n",
    "            H = img.shape[0]\n",
    "            W = img.shape[1]\n",
    "            img = cv2.resize(img, (img_size, img_size))\n",
    "\n",
    "            mean = [0.485, 0.456, 0.406]\n",
    "            std = [0.229, 0.224, 0.225]\n",
    "            mean = np.array(mean).reshape((1, 1, -1))\n",
    "            std = np.array(std).reshape((1, 1, -1))\n",
    "            out_img = (img / 255.0 - mean) / std\n",
    "            out_img = out_img.astype('float32').transpose((2, 0, 1))\n",
    "            img = out_img #np.transpose(out_img, (2,0,1))\n",
    "            im_shape = [H, W]\n",
    "\n",
    "            batch_data.append((image_name.split('.')[0], img, im_shape))\n",
    "            if len(batch_data) == batch_size:\n",
    "                yield make_test_array(batch_data)\n",
    "                batch_data = []\n",
    "        if len(batch_data) > 0:\n",
    "            yield make_test_array(batch_data)\n",
    "\n",
    "    return reader\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
