{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## What Is Object Detection?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Object detection is an important task in computer vision. In recent years, traditional detection methods have struggled to meet expectations for detection quality, and with the enormous progress deep learning has made on vision tasks, deep-learning-based detection algorithms have become the mainstream.  \n",
    "Compared with deep-learning-based image classification, object detection is considerably harder.  \n",
    "The difference is illustrated in the figure below.  \n",
    "Image classification: only needs to decide whether the input image contains a known object.  \n",
    "Object detection: besides recognizing the categories of the targets in the image, it must also **precisely localize each target** and mark it with a **bounding rectangle**.  \n",
    "![](https://raw.githubusercontent.com/datawhalechina/dive-into-cv-pytorch/master/markdown_imgs/chapter03/3-1.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The Idea Behind Object Detection"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since Alex Krizhevsky won the 2012 ImageNet image classification challenge with AlexNet, deep learning has shone in image recognition, especially image classification, and public attention has returned to deep neural networks. Ever deeper and more complex networks soon followed, repeatedly breaking the ImageNet classification record.  \n",
    "\n",
    "People realized that, with suitable construction, neural networks can be used to make predictions for all kinds of practical problems, so research on CNN-based object detection began. Further exploration, however, showed that CNNs do not seem to be good at directly predicting coordinates, and since the number of objects in an image is not fixed, designing the model is also tricky.  \n",
    "\n",
    "So people reasoned: if we knew that an object exists at some position in the image, we could feed the corresponding local region into a classification network; then we would know the position and category of every object in the image.  \n",
    "\n",
    "But how can we know where each object is? Obviously we cannot know in advance, but we can guess! Guessing here means **sliding windows**: enumerate all plausible regions in the image and try them one by one, feeding each into a classification network to obtain its category while also fine-tuning the current bounding box. Every region thus yields five attributes (class, x1, y1, x2, y2); aggregating them gives the categories and coordinates of the objects in the image.  \n",
    "\n",
    "To summarize this scheme: first lay out many candidate boxes, then classify and refine each of them.  \n",
    "![](https://raw.githubusercontent.com/datawhalechina/dive-into-cv-pytorch/master/markdown_imgs/chapter03/3-2.png)  \n",
    "The figure above illustrates completing detection by traversing regions and classifying them one by one. A box of preset size is slid over the image pixel by pixel, producing a large number of candidate boxes (this is only a sketch: just 3 boxes are drawn for illustration, while the actual number depends on the image size and the preset box size). Each box is fed into the classification network and receives a score (the confidence that the box contains a boat); the highest-scoring box is the most accurate one, and its position is the location of the detected target.  \n",
    "\n",
    "This was the original deep-learning approach to object detection; many classic models such as R-CNN, YOLO and SSD evolved and improved along this line.  "
   ]
  },
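  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The sliding-window enumeration described above can be sketched as follows (an illustrative outline only; `classify_crop` is a hypothetical placeholder standing in for a real classification network):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def classify_crop(image, box):\n",
    "    \"\"\"Hypothetical placeholder for a classification network: returns (class, score).\"\"\"\n",
    "    return \"boat\", 0.5\n",
    "\n",
    "\n",
    "def sliding_window_detect(image, img_w, img_h, win_w, win_h, stride=1):\n",
    "    # enumerate every window position and classify each crop\n",
    "    results = []\n",
    "    for y1 in range(0, img_h - win_h + 1, stride):\n",
    "        for x1 in range(0, img_w - win_w + 1, stride):\n",
    "            x2, y2 = x1 + win_w, y1 + win_h\n",
    "            cls, score = classify_crop(image, (x1, y1, x2, y2))\n",
    "            results.append((cls, x1, y1, x2, y2, score))\n",
    "    # the highest-scoring window is taken as the detection\n",
    "    return max(results, key=lambda r: r[-1])"
   ]
  },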
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Bounding Box Formats"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Training data for any image task consists of two parts: the **images** and the **ground-truth labels**, usually called GT.  \n",
    "In image classification, the label is the class of the image.  \n",
    "In object detection, besides the class, the label must also contain the target's location (the target's bounding box).  \n",
    "Two formats are commonly used to represent a bounding box: (x1, y1, x2, y2) and (c_x, c_y, w, h), as shown below.  \n",
    "![](https://raw.githubusercontent.com/datawhalechina/dive-into-cv-pytorch/master/markdown_imgs/chapter03/3-3.png)  \n",
    "Two different formats are used because each turns out to be computationally convenient in different scenarios.  \n",
    "The two formats can be converted into each other; the code is as follows.  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "\n",
    "def xy_to_cxcy(xy):\n",
    "    \"\"\"\n",
    "    Convert bounding boxes from boundary coordinates (x_min, y_min, x_max, y_max) to center-size coordinates (c_x, c_y, w, h).\n",
    "\n",
    "    :param xy: bounding boxes in boundary coordinates, a tensor of size (n_boxes, 4)\n",
    "    :return: bounding boxes in center-size coordinates, a tensor of size (n_boxes, 4)\n",
    "    \"\"\"\n",
    "    # torch.cat([A, B], 1) concatenates along dim 1 (columns)\n",
    "    return torch.cat([(xy[:, 2:] + xy[:, :2]) / 2,  # c_x, c_y\n",
    "                      xy[:, 2:] - xy[:, :2]], 1)    # w, h\n",
    "\n",
    "\n",
    "def cxcy_to_xy(cxcy):\n",
    "    \"\"\"\n",
    "    Convert bounding boxes from center-size coordinates (c_x, c_y, w, h) to boundary coordinates (x_min, y_min, x_max, y_max).\n",
    "\n",
    "    :param cxcy: bounding boxes in center-size coordinates, a tensor of size (n_boxes, 4)\n",
    "    :return: bounding boxes in boundary coordinates, a tensor of size (n_boxes, 4)\n",
    "    \"\"\"\n",
    "    return torch.cat([cxcy[:, :2] - (cxcy[:, 2:] / 2),   # x_min, y_min\n",
    "                      cxcy[:, :2] + (cxcy[:, 2:] / 2)], 1)  # x_max, y_max"
   ]
  },
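  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check of `xy_to_cxcy` (the numbers are illustrative, not from the text): the corner-format box (10, 20, 50, 80) has center (30, 50) and width/height (40, 60)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "boxes_xy = torch.tensor([[10., 20., 50., 80.]])\n",
    "print(xy_to_cxcy(boxes_xy))  # tensor([[30., 50., 40., 60.]])"
   ]
  },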
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Intersection over Union (IoU)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "IoU (Intersection over Union) is the ratio of the intersection of two boxes to their union.  \n",
    "![](https://raw.githubusercontent.com/datawhalechina/dive-into-cv-pytorch/master/markdown_imgs/chapter03/3-4.png)  \n",
    "In the figure, the yellow region in the numerator is the intersection of the red and green bounding boxes, while the yellow + red + green region in the denominator is their union; their ratio is the IoU.  \n",
    "The computation proceeds as follows:  \n",
    "```\n",
    "1. Get the coordinates of the two bounding boxes. Red box: top-left (red_x1, red_y1), bottom-right (red_x2, red_y2); green box: top-left (green_x1, green_y1), bottom-right (green_x2, green_y2)  \n",
    "2. Compute the element-wise maximum of the two top-left corners: (max(red_x1, green_x1), max(red_y1, green_y1)), and the element-wise minimum of the two bottom-right corners: (min(red_x2, green_x2), min(red_y2, green_y2))  \n",
    "3. Use the result of step 2 to compute the area of the yellow (intersection) box: yellow_area  \n",
    "4. Compute the areas of the red and green boxes: red_area and green_area  \n",
    "5. IoU = yellow_area / (red_area + green_area - yellow_area)\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "def find_intersection(set_1, set_2):\n",
    "    \"\"\" \n",
    "    Find the intersection of every box combination between two sets of boxes that are in boundary coordinates.\n",
    "\n",
    "    :param set_1: set 1, a tensor of dimensions (n1, 4)                                                                                                           \n",
    "    :param set_2: set 2, a tensor of dimensions (n2, 4)\n",
    "    :return: intersection of each of the boxes in set 1 with respect to each of the boxes in set 2, a tensor of dimensions (n1, n2)\n",
    "    \"\"\"\n",
    "\n",
    "    # PyTorch auto-broadcasts singleton dimensions\n",
    "    lower_bounds = torch.max(set_1[:, :2].unsqueeze(1), set_2[:, :2].unsqueeze(0))  # (n1, n2, 2)\n",
    "    upper_bounds = torch.min(set_1[:, 2:].unsqueeze(1), set_2[:, 2:].unsqueeze(0))  # (n1, n2, 2)\n",
    "    intersection_dims = torch.clamp(upper_bounds - lower_bounds, min=0)  # (n1, n2, 2)\n",
    "    return intersection_dims[:, :, 0] * intersection_dims[:, :, 1]  # (n1, n2)\n",
    "\n",
    "\n",
    "def find_jaccard_overlap(set_1, set_2):\n",
    "    \"\"\" \n",
    "    Find the Jaccard Overlap (IoU) of every box combination between two sets of boxes that are in boundary coordinates.\n",
    "\n",
    "    :param set_1: set 1, a tensor of dimensions (n1, 4)\n",
    "    :param set_2: set 2, a tensor of dimensions (n2, 4)\n",
    "    :return: Jaccard Overlap of each of the boxes in set 1 with respect to each of the boxes in set 2, a tensor of dimensions (n1, n2)\n",
    "    \"\"\"\n",
    "\n",
    "    # Find intersections\n",
    "    intersection = find_intersection(set_1, set_2)  # (n1, n2)\n",
    "\n",
    "    # Find areas of each box in both sets\n",
    "    areas_set_1 = (set_1[:, 2] - set_1[:, 0]) * (set_1[:, 3] - set_1[:, 1])  # (n1)\n",
    "    areas_set_2 = (set_2[:, 2] - set_2[:, 0]) * (set_2[:, 3] - set_2[:, 1])  # (n2)\n",
    "\n",
    "    # Find the union\n",
    "    # PyTorch auto-broadcasts singleton dimensions\n",
    "    union = areas_set_1.unsqueeze(1) + areas_set_2.unsqueeze(0) - intersection  # (n1, n2)\n",
    "\n",
    "    return intersection / union  # (n1, n2)"
   ]
  },
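  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small worked example for `find_jaccard_overlap` (illustrative numbers): two 2x2 boxes offset by (1, 1) overlap in a unit square, so IoU = 1 / (4 + 4 - 1) = 1/7 ≈ 0.143."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "set_1 = torch.tensor([[0., 0., 2., 2.]])\n",
    "set_2 = torch.tensor([[1., 1., 3., 3.]])\n",
    "print(find_jaccard_overlap(set_1, set_2))  # tensor([[0.1429]])"
   ]
  },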
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prior Boxes"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Most classic detection models share the notion of prior boxes. Some papers (e.g. Faster R-CNN) call them anchors, others (e.g. SSD) call them prior bounding boxes; they are the same concept.  \n",
    "What are prior boxes for? In the original detection pipeline, every possible target box in the image had to be traversed, then classified and refined. Prior boxes answer the question of which positions to use as candidate boxes.  \n",
    "Three concepts:  \n",
    "- prior boxes at different scales  \n",
    "- the correspondence between prior boxes and the feature map  \n",
    "- assigning class labels to prior boxes  \n",
    "\n",
    "**Prior boxes at different scales**  \n",
    "Usually, to cover more possible cases, several prior boxes of different scales are placed at the same image position. \"Scale\" here refers not only to size but also to aspect ratio, as shown below:  \n",
    "![](https://raw.githubusercontent.com/datawhalechina/dive-into-cv-pytorch/master/markdown_imgs/chapter03/3-15.jpg)  \n",
    "**Correspondence between prior boxes and the feature map**  \n",
    "Besides different scales, we of course need to place prior boxes at different positions in the image, but traversing every pixel of the original image yields far too many of them. As shown below, for a 224x224 image with 3 prior boxes of different sizes at each position, there are 224x224x3 = 150528 prior boxes, which is far too many. What if, instead of traversing the original image, we traverse a feature map obtained by downsampling it? With a VGG-16 backbone, for example, 5 downsamplings yield a 7x7 feature map, so only 7x7x3 = 147 prior boxes are needed. This design greatly reduces the number of prior boxes while still covering most cases.  \n",
    "![](https://raw.githubusercontent.com/datawhalechina/dive-into-cv-pytorch/master/markdown_imgs/chapter03/3-13.png)  \n",
    "We therefore build a mapping between prior-box positions and the feature map, and use the feature map to output the class and coordinate information of all prior boxes in one pass.  \n",
    "**Assigning class labels to prior boxes**  \n",
    "Having laid out many prior boxes, we first need to assign each one a class label so that the model can learn to predict whether a prior box corresponds to a target object.  \n",
    "Many of these prior boxes have little or no overlap with the targets we want to detect in the image.  \n",
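    "The 7x7-feature-map computation above can be sketched in a few lines (an illustrative sketch only, not the exact layout of any particular paper; it assumes one base scale, 3 aspect ratios per cell, and (c_x, c_y, w, h) coordinates normalized to [0, 1]):  \n",
    "```python\n",
    "import torch\n",
    "\n",
    "fmap_size = 7                    # 7x7 feature map from a VGG-16-style backbone\n",
    "scale = 0.2                      # assumed base scale, relative to the image size\n",
    "aspect_ratios = [0.5, 1.0, 2.0]  # 3 prior boxes per feature-map cell\n",
    "\n",
    "priors = []\n",
    "for i in range(fmap_size):\n",
    "    for j in range(fmap_size):\n",
    "        # each feature-map cell maps back to a center position in the image\n",
    "        cx = (j + 0.5) / fmap_size\n",
    "        cy = (i + 0.5) / fmap_size\n",
    "        for ratio in aspect_ratios:\n",
    "            # keep the area fixed while varying the aspect ratio\n",
    "            w = scale * ratio ** 0.5\n",
    "            h = scale / ratio ** 0.5\n",
    "            priors.append([cx, cy, w, h])\n",
    "\n",
    "priors = torch.tensor(priors)\n",
    "print(priors.shape)  # torch.Size([147, 4]) -> 7 x 7 x 3 prior boxes\n",
    "```\n",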
    "Our approach is to set an IoU threshold, e.g. IoU = 0.5: a prior box whose IoU with a ground-truth box exceeds the threshold is labeled with that object's class, while the rest are treated as background."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
