{"nbformat":4,"nbformat_minor":0,"metadata":{"accelerator":"GPU","colab":{"name":"Copy of single_stage_detector_yolo.ipynb","provenance":[{"file_id":"16-Al3cM1PnKjYK9fY7vV9H2QdM_dMPAG","timestamp":1606219709403}],"collapsed_sections":[],"toc_visible":true},"kernelspec":{"display_name":"Python 3","name":"python3"}},"cells":[{"cell_type":"markdown","metadata":{"id":"DDJwQPZcupab"},"source":["# EECS 498-007/598-005 Assignment 5-1: Single-Stage Object Detector - YOLO\n","\n","Before we start, please put your name and UMID in the following format\n","\n",": Firstname LASTNAME, #00000000   //   e.g.) Justin JOHNSON, #12345678"]},{"cell_type":"markdown","metadata":{"id":"2KMxqLt1h2kx"},"source":["**Your Answer:**   \n","Hello WORLD, #XXXXXXXX"]},{"cell_type":"markdown","metadata":{"id":"BRIqwJUr2HuN"},"source":["# Single-Stage Object Detector\n","In this exercise you will implement a **single-stage** object detector, based on YOLO ([v1](https://arxiv.org/pdf/1506.02640.pdf) and [v2](https://arxiv.org/pdf/1612.08242.pdf)), and use it to train a model that can detect objects in novel images. We will also evaluate the detection accuracy using the classic metric mean Average Precision ([mAP](https://github.com/Cartucho/mAP)). In Part II of A5, you will implement a **two-stage** object detector, based on [Faster R-CNN](https://arxiv.org/pdf/1506.01497.pdf). The main difference between the two is that single-stage detectors perform region proposal and classification simultaneously, while two-stage detectors decouple them. "]},{"cell_type":"markdown","metadata":{"id":"LfBk3NtRgqaV"},"source":["# Getting Started"]},{"cell_type":"markdown","metadata":{"id":"ubB_0e-UAOVK"},"source":["## Install starter code\n","We will continue using the utility functions that we've used for previous assignments: the [`coutils` package](https://github.com/deepvision-class/starter-code). 
Run this cell to download and install it.\n"]},{"cell_type":"code","metadata":{"id":"ASkY27ZtA7Is"},"source":["!pip install git+https://github.com/deepvision-class/starter-code"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"MzqbYcKdz6ew"},"source":["## Setup code\n","Run some setup code for this notebook: Import some useful packages and increase the default figure size."]},{"cell_type":"code","metadata":{"id":"HzRdJ3uhe1CR","tags":["pdf-ignore"]},"source":["import math\n","import torch\n","import torch.nn as nn\n","import torch.nn.functional as F\n","import torchvision\n","import coutils\n","from coutils import extract_drive_file_id, register_colab_notebooks, \\\n","                    fix_random_seed, rel_error\n","import matplotlib.pyplot as plt\n","import numpy as np\n","import cv2\n","import copy\n","import time\n","import shutil\n","import os\n","\n","# for plotting\n","plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\n","plt.rcParams['image.interpolation'] = 'nearest'\n","plt.rcParams['image.cmap'] = 'gray'\n","\n","# data type and device for torch.tensor\n","to_float = {'dtype': torch.float, 'device': 'cpu'}\n","to_float_cuda = {'dtype': torch.float, 'device': 'cuda'}\n","to_double = {'dtype': torch.double, 'device': 'cpu'}\n","to_double_cuda = {'dtype': torch.double, 'device': 'cuda'}\n","to_long = {'dtype': torch.long, 'device': 'cpu'}\n","to_long_cuda = {'dtype': torch.long, 'device': 'cuda'}\n","\n","# for mAP evaluation\n","!rm -rf mAP\n","!git clone https://github.com/Cartucho/mAP.git\n","!rm -rf mAP/input/*"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"OvUDZWGU3VLV"},"source":["We will use GPUs to accelerate our computation in this notebook. 
Run the following to make sure GPUs are enabled:"]},{"cell_type":"code","metadata":{"id":"RrAX9FOLpr9k"},"source":["if torch.cuda.is_available():\n","  print('Good to go!')\n","else:\n","  print('Please set GPU via Edit -> Notebook Settings.')"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"MjJ3uyYBg3Lw"},"source":["## Load PASCAL VOC 2007 data\n","In most of our homework assignments so far, we have used the CIFAR-10 dataset for image classification tasks.\n","\n","For object detection we need a new dataset. In order to train and evaluate object detection models, we need a dataset where each image is annotated with a *set* of *bounding boxes*, where each box gives the category label and spatial extent of some object in the image.\n","\n","We will use the [PASCAL VOC 2007](http://host.robots.ox.ac.uk/pascal/VOC/) dataset, which provides annotations of this form. PASCAL VOC ran a series of yearly computer vision competitions from 2005 to 2012, predating the ImageNet challenge which we have discussed in class.\n","\n","The data from the 2007 challenge used to be one of the most popular datasets for evaluating object detection. 
It is much smaller than more recent object detection datasets such as [COCO](http://cocodataset.org/#home), and thus easier to manage in a homework assignment.\n","\n","The following function will download the PASCAL VOC 2007 dataset and return it as a PyTorch Dataset object:"]},{"cell_type":"code","metadata":{"id":"fmD9Qrs2g7fI"},"source":["def get_pascal_voc2007_data(image_root, split='train'):\n","  \"\"\"\n","  Load the PASCAL VOC 2007 dataset for the given split via torchvision.datasets\n","  https://pytorch.org/docs/stable/torchvision/datasets.html#torchvision.datasets.VOCDetection\n","  \"\"\"\n","  from torchvision import datasets\n","\n","  dataset = datasets.VOCDetection(image_root, year='2007', image_set=split,\n","                                  download=True)\n","  \n","  return dataset"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"XXc_Hw3JhVxw"},"source":["Run the following cell to download the training and validation sets for the PASCAL VOC 2007 dataset.\n","\n","The [`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) objects returned from the above function provide annotations for each image as a nested set of dictionary objects:"]},{"cell_type":"code","metadata":{"id":"MmEP5KQJzk0d"},"source":["# uncomment below to use the mirror link if the original link is broken\n","# !wget http://pjreddie.com/media/files/VOCtrainval_06-Nov-2007.tar\n","train_dataset = get_pascal_voc2007_data('/content', 'train')\n","val_dataset = get_pascal_voc2007_data('/content', 'val')\n","\n","# an example of the raw annotation\n","import json\n","print(json.dumps(train_dataset[1][1]['annotation'], indent=2))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"J5MjBX9bkBtA"},"source":["In order to use these annotations to train our model, we need to convert this nested dictionary data structure into a set of PyTorch tensors.\n","\n","We also need to preprocess the image, converting it to a PyTorch tensor and resizing 
it to 224x224. Real object detection systems typically work with much higher-resolution images, but we will use a low resolution for computational efficiency in this assignment.\n","\n","We also want to train our models using minibatches of data, so we need to group the annotations from several images into minibatches.\n","\n","We handle both of these steps with a customized PyTorch [DataLoader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) object, which we have written for you:"]},{"cell_type":"code","metadata":{"id":"OfwTGpZn1L5U"},"source":["def pascal_voc2007_loader(dataset, batch_size, num_workers=0):\n","  \"\"\"\n","  Data loader for PASCAL VOC 2007.\n","  https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader\n","  \"\"\"\n","  from torch.utils.data import DataLoader\n","  # turn off shuffle so we can index the original image\n","  loader = DataLoader(dataset,\n","                      batch_size=batch_size,\n","                      shuffle=False, pin_memory=True,\n","                      num_workers=num_workers,\n","                      collate_fn=voc_collate_fn)\n","  return loader\n","\n","\n","class_to_idx = {'aeroplane':0, 'bicycle':1, 'bird':2, 'boat':3, 'bottle':4,\n","                'bus':5, 'car':6, 'cat':7, 'chair':8, 'cow':9, 'diningtable':10,\n","                'dog':11, 'horse':12, 'motorbike':13, 'person':14, 'pottedplant':15,\n","                'sheep':16, 'sofa':17, 'train':18, 'tvmonitor':19\n","}\n","idx_to_class = {i:c for c, i in class_to_idx.items()}\n","\n","\n","from torchvision import transforms\n","\n","def voc_collate_fn(batch_lst, reshape_size=224):\n","    preprocess = transforms.Compose([\n","      transforms.Resize((reshape_size, reshape_size)),\n","      transforms.ToTensor(),\n","      transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\n","      ])\n","    \n","    batch_size = len(batch_lst)\n","    \n","    img_batch = torch.zeros(batch_size, 3, reshape_size, reshape_size)\n","    \n","    max_num_box = max(len(batch_lst[i][1]['annotation']['object']) \\\n","                      for i in range(batch_size))\n","\n","    box_batch = torch.Tensor(batch_size, max_num_box, 5).fill_(-1.)\n","    w_list = []\n","    h_list = []\n","    img_id_list = []\n","    \n","    for i in range(batch_size):\n","      img, ann = batch_lst[i]\n","      w_list.append(img.size[0]) # image width\n","      h_list.append(img.size[1]) # image height\n","      img_id_list.append(ann['annotation']['filename'])\n","      img_batch[i] = preprocess(img)\n","      all_bbox = ann['annotation']['object']\n","      if type(all_bbox) == dict: # inconsistency in the annotation file\n","        all_bbox = [all_bbox]\n","      for bbox_idx, one_bbox in enumerate(all_bbox):\n","        bbox = one_bbox['bndbox']\n","        obj_cls = one_bbox['name']\n","        box_batch[i][bbox_idx] = torch.Tensor([float(bbox['xmin']), float(bbox['ymin']),\n","          float(bbox['xmax']), float(bbox['ymax']), class_to_idx[obj_cls]])\n","    \n","    h_batch = torch.tensor(h_list)\n","    w_batch = torch.tensor(w_list)\n","\n","    return img_batch, box_batch, w_batch, h_batch, img_id_list"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"0ad8hHvAlGdA"},"source":["Training with the entire PASCAL VOC will be too computationally expensive for this homework assignment, so we can subsample the dataset by wrapping each `Dataset` object in a [`Subset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Subset) object:"]},{"cell_type":"code","metadata":{"id":"XL-7Em_A1kdS"},"source":["train_dataset = torch.utils.data.Subset(train_dataset, torch.arange(0, 2500)) # use 2500 samples for training\n","train_loader = pascal_voc2007_loader(train_dataset, 10)\n","val_loader = pascal_voc2007_loader(val_dataset, 
10)"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"HTyRHqwlC1Au"},"source":["The `DataLoader` objects return batches of data.\n","\n","The first output from the `DataLoader` is a Tensor `img` of shape `(B, 3, 224, 224)`. This is a batch of `B` images, similar to what we have seen in classification datasets.\n","\n","The second output from the `DataLoader` is a Tensor `ann` of shape `(B, N, 5)` giving information about all objects in all images of the batch. `ann[i, j] = (x_tl, y_tl, x_br, y_br, class)` gives information about the `j`th object in `img[i]`. The position of the top-left corner of the box is `(x_tl, y_tl)` and the position of the bottom-right corner of the box is `(x_br, y_br)`. These positions are in the coordinate system of the original image (before it was resized to 224 x 224). `class` is an integer giving the category label for this bounding box.\n","\n","Each image can have a different number of objects. If `img[i]` has $N_i$ objects, then $N=\\max_i N_i$ is the maximum number of objects in any image of the batch; this value can vary from batch to batch. 
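\n","\n","For instance, the real (non-padded) annotations can be recovered with a mask on the class column, since padded rows hold -1 (a minimal sketch, separate from the assignment code):\n","\n","```python\n","import torch\n","\n","# hypothetical padded annotation batch: 2 images, up to 2 boxes each\n","ann = torch.tensor([[[10., 20., 50., 60., 6.], [-1., -1., -1., -1., -1.]],\n","                    [[ 5., 5., 40., 40., 14.], [30., 10., 90., 70., 11.]]])\n","valid = ann[:, :, 4] >= 0      # (B, N) mask of real annotations\n","num_boxes = valid.sum(dim=1)   # tensor([1, 2])\n","boxes_img0 = ann[0][valid[0]]  # shape (1, 5): padding stripped for image 0\n","```\n","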
For the images that have fewer than $N$ annotated objects, only the first $N_i$ rows of `ann[i]` contain annotations; the remaining rows are padded with -1."]},{"cell_type":"code","metadata":{"id":"nZVYFJD32I_l"},"source":["train_loader_iter = iter(train_loader)\n","img, ann, _, _, _ = next(train_loader_iter)\n","\n","print('img has shape: ', img.shape)\n","print('ann has shape: ', ann.shape)\n","\n","print('Image 1 has only two annotated objects, so ann[1] is padded with -1:')\n","print(ann[1])\n","\n","print('\\nImage 2 has six annotated objects, so ann[2] is not padded:')\n","print(ann[2])\n","\n","print('\\nEach row in the annotation tensor indicates (x_tl, y_tl, x_br, y_br, class).')"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"GqISg-cs6vKM"},"source":["# Coordinate transformation\n","It is good practice to use a consistent coordinate system for all spatial computations (e.g., anchors, proposals). **In this assignment, we use the coordinate system defined by the CNN activation map (of shape 7x7), where the top-left corner is (0, 0) and the bottom-right corner is (7, 7). The horizontal axis is the x axis and the vertical axis is the y axis.**\n","\n","The following function defines the transformation from the original image coordinate system (pixels, with the top-left corner at (0, 0)) to the activation map coordinate system and vice versa.\n","\n","Notes: All the coordinates are in float precision. In later sections, we use the activation map coordinate system for all computations except for visualization."]},{"cell_type":"code","metadata":{"id":"ggnqmAXh6vJv"},"source":["def coord_trans(bbox, w_pixel, h_pixel, w_amap=7, h_amap=7, mode='a2p'):\n","  \"\"\"\n","  Coordinate transformation function. 
It converts box coordinates from\n","  the image coordinate system to the activation map coordinate system and vice versa.\n","  In our case, the input image will have a few hundred pixels in\n","  width/height while the activation map is of size 7x7.\n","\n","  Input:\n","  - bbox: Could be either bbox, anchor, or proposal, of shape Bx*x4\n","  - w_pixel: Width of the original image in pixels, of shape B\n","  - h_pixel: Height of the original image in pixels, of shape B\n","  - w_amap: Width of the activation map in cells, scalar\n","  - h_amap: Height of the activation map in cells, scalar\n","  - mode: Whether to transform from the original image to the activation map ('p2a')\n","          or the opposite ('a2p')\n","\n","  Output:\n","  - resized_bbox: Resized box coordinates, of the same shape as the input bbox\n","  \"\"\"\n","\n","  assert mode in ('p2a', 'a2p'), 'invalid coordinate transformation mode!'\n","  assert bbox.shape[-1] >= 4, 'the transformation is applied to the first 4 values of dim -1'\n","  \n","  if bbox.shape[0] == 0: # corner cases\n","    return bbox\n","\n","  resized_bbox = bbox.clone()\n","  # could still work if the first dim of bbox is not batch size\n","  # in that case, w_pixel and h_pixel will be scalars\n","  resized_bbox = resized_bbox.view(bbox.shape[0], -1, bbox.shape[-1])\n","  invalid_bbox_mask = (resized_bbox == -1) # indicating invalid bbox\n","\n","  if mode == 'p2a':\n","    # pixel to activation\n","    width_ratio = w_pixel * 1. / w_amap\n","    height_ratio = h_pixel * 1. / h_amap\n","    resized_bbox[:, :, [0, 2]] /= width_ratio.view(-1, 1, 1)\n","    resized_bbox[:, :, [1, 3]] /= height_ratio.view(-1, 1, 1)\n","  else:\n","    # activation to pixel\n","    width_ratio = w_pixel * 1. / w_amap\n","    height_ratio = h_pixel * 1. 
/ h_amap\n","    resized_bbox[:, :, [0, 2]] *= width_ratio.view(-1, 1, 1)\n","    resized_bbox[:, :, [1, 3]] *= height_ratio.view(-1, 1, 1)\n","\n","  resized_bbox.masked_fill_(invalid_bbox_mask, -1)\n","  resized_bbox.resize_as_(bbox)\n","  return resized_bbox"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"48Cocwotg9Ly"},"source":["# Data Visualizer\n","This function will help us visualize boxes on top of images."]},{"cell_type":"code","metadata":{"id":"fm3vPcQnhmRU"},"source":["def data_visualizer(img, idx_to_class, bbox=None, pred=None):\n","  \"\"\"\n","  Data visualizer on the original image. Support both GT box input and proposal input.\n","  \n","  Input:\n","  - img: PIL Image input\n","  - idx_to_class: Mapping from the index (0-19) to the class name\n","  - bbox: GT bbox (in red, optional), a tensor of shape Nx5, where N is\n","          the number of GT boxes, 5 indicates (x_tl, y_tl, x_br, y_br, class)\n","  - pred: Predicted bbox (in green, optional), a tensor of shape N'x6, where\n","          N' is the number of predicted boxes, 6 indicates\n","          (x_tl, y_tl, x_br, y_br, class, object confidence score)\n","  \"\"\"\n","\n","  img_copy = np.array(img).astype('uint8')\n","\n","  if bbox is not None:\n","    for bbox_idx in range(bbox.shape[0]):\n","      one_bbox = bbox[bbox_idx][:4]\n","      cv2.rectangle(img_copy, (one_bbox[0], one_bbox[1]), (one_bbox[2],\n","                  one_bbox[3]), (255, 0, 0), 2)\n","      if bbox.shape[1] > 4: # if class info provided\n","        obj_cls = idx_to_class[bbox[bbox_idx][4].item()]\n","        cv2.putText(img_copy, '%s' % (obj_cls),\n","                  (one_bbox[0], one_bbox[1]+15),\n","                  cv2.FONT_HERSHEY_PLAIN, 1.0, (0, 0, 255), thickness=1)\n","\n","  if pred is not None:\n","    for bbox_idx in range(pred.shape[0]):\n","      one_bbox = pred[bbox_idx][:4]\n","      cv2.rectangle(img_copy, (one_bbox[0], one_bbox[1]), (one_bbox[2],\n","              
    one_bbox[3]), (0, 255, 0), 2)\n","      \n","      if pred.shape[1] > 4: # if class and conf score info provided\n","        obj_cls = idx_to_class[pred[bbox_idx][4].item()]\n","        conf_score = pred[bbox_idx][5].item()\n","        cv2.putText(img_copy, '%s, %.2f' % (obj_cls, conf_score),\n","                    (one_bbox[0], one_bbox[1]+15),\n","                    cv2.FONT_HERSHEY_PLAIN, 1.0, (0, 0, 255), thickness=1)\n","\n","  plt.imshow(img_copy)\n","  plt.axis('off')\n","  plt.show()"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"X4WmocEyiXWa"},"source":["## Visualize PASCAL VOC 2007\n","It is always good practice to try and visualize parts of your dataset before you build a model.\n","\n","Here we sample some  images from the PASCAL VOC 2007 training set, and visualize the ground-truth object boxes and category labels:"]},{"cell_type":"code","metadata":{"id":"ld1s28Z4fyL5"},"source":["# default examples for visualization\n","fix_random_seed(0)\n","batch_size = 3\n","sampled_idx = torch.linspace(0, len(train_dataset)-1, steps=batch_size).long()\n","\n","# get the size of each image first\n","h_list = []\n","w_list = []\n","img_list = [] # list of images\n","MAX_NUM_BBOX = 40\n","box_list = torch.LongTensor(batch_size, MAX_NUM_BBOX, 5).fill_(-1) # PADDED GT boxes\n","\n","for idx, i in enumerate(sampled_idx):\n","  # hack to get the original image so we don't have to load from local again...\n","  img, ann = train_dataset.__getitem__(i)\n","  img_list.append(img)\n","\n","  all_bbox = ann['annotation']['object']\n","  if type(all_bbox) == dict:\n","    all_bbox = [all_bbox]\n","  for bbox_idx, one_bbox in enumerate(all_bbox):\n","    bbox = one_bbox['bndbox']\n","    obj_cls = one_bbox['name']\n","    box_list[idx][bbox_idx] = torch.LongTensor([int(bbox['xmin']), int(bbox['ymin']),\n","      int(bbox['xmax']), int(bbox['ymax']), class_to_idx[obj_cls]])\n","\n","  # get sizes\n","  img = np.array(img)\n","  
w_list.append(img.shape[1])\n","  h_list.append(img.shape[0])\n","\n","w_list = torch.as_tensor(w_list, **to_float_cuda)\n","h_list = torch.as_tensor(h_list, **to_float_cuda)\n","box_list = torch.as_tensor(box_list, **to_float_cuda)\n","resized_box_list = coord_trans(box_list, w_list, h_list, mode='p2a') # on activation map coordinate system"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"v04D-gwEiWqY"},"source":["# visualize GT boxes\n","for i in range(len(img_list)):\n","  valid_box = sum([1 if j != -1 else 0 for j in box_list[i][:, 0]])\n","  data_visualizer(img_list[i], idx_to_class, box_list[i][:valid_box])"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"IAa1Kvl2P_2k"},"source":["# Detector Backbone Network\n","Here, we use [MobileNet v2](https://pytorch.org/hub/pytorch_vision_mobilenet_v2/) for image feature extraction. The code is based on Assignment 4-2."]},{"cell_type":"code","metadata":{"id":"VLmU_CiURha7"},"source":["class FeatureExtractor(nn.Module):\n","  \"\"\"\n","  Image feature extraction with MobileNet.\n","  \"\"\"\n","  def __init__(self, reshape_size=224, pooling=False, verbose=False):\n","    super().__init__()\n","\n","    from torchvision import models\n","    from torchsummary import summary\n","\n","    self.mobilenet = models.mobilenet_v2(pretrained=True)\n","    self.mobilenet = nn.Sequential(*list(self.mobilenet.children())[:-1]) # Remove the last classifier\n","\n","    # average pooling\n","    if pooling:\n","      self.mobilenet.add_module('LastAvgPool', nn.AvgPool2d(math.ceil(reshape_size/32.))) # input: N x 1280 x 7 x 7\n","\n","    for i in self.mobilenet.named_parameters():\n","      i[1].requires_grad = True # fine-tune all\n","\n","    if verbose:\n","      summary(self.mobilenet.cuda(), (3, reshape_size, reshape_size))\n","  \n","  def forward(self, img, verbose=False):\n","    \"\"\"\n","    Inputs:\n","    - img: Batch of resized images, of shape Nx3x224x224\n"," 
   \n","    Outputs:\n","    - feat: Image feature, of shape Nx1280 (pooled) or Nx1280x7x7\n","    \"\"\"\n","    num_img = img.shape[0]\n","    \n","    img_prepro = img\n","\n","    feat = []\n","    process_batch = 500\n","    for b in range(math.ceil(num_img/process_batch)):\n","      feat.append(self.mobilenet(img_prepro[b*process_batch:(b+1)*process_batch]\n","                              ).squeeze(-1).squeeze(-1)) # forward and squeeze\n","    feat = torch.cat(feat)\n","    \n","    if verbose:\n","      print('Output feature shape: ', feat.shape)\n","    \n","    return feat"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"qHeRMCcjx6v0"},"source":["Now, let's see what's inside MobileNet v2. Assume we have a 3x224x224 image input."]},{"cell_type":"code","metadata":{"id":"_pV0Lau_yDwX"},"source":["model = FeatureExtractor(verbose=True)"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"zz32q025joxy"},"source":["# Anchor and Proposal\n","The concept of an *anchor* was introduced in [Faster R-CNN](https://arxiv.org/pdf/1506.01497.pdf) and later adopted in numerous works, including [YOLO v2](https://arxiv.org/pdf/1612.08242.pdf). The definition of an anchor from the original paper is summarized as follows:\n","\n","After passing the input image through the backbone network, we have a convolutional feature map of shape $(C, 7, 7)$ which we interpret as a 7x7 grid of $C$-dimensional features. At each point in this grid, we consider a set of $A$ *anchor boxes* of different sizes and shapes; for each anchor box we classify it as either an object or background box. The anchor boxes we consider for the entire input image thus form a grid of shape $(A, 7, 7)$; we predict classification scores of this shape by applying a sequence of convolutional layers to the backbone features.\n","\n","We slide a small network (e.g., a 3x3 conv layer) over the CNN activation feature map. We call this 3x3 conv a sliding window. 
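\n","\n","In code, such a sliding window is just a 3x3 convolution over the backbone activation map. A minimal sketch (the channel widths and two-layer head here are illustrative assumptions, not the assignment's prediction network):\n","\n","```python\n","import torch\n","import torch.nn as nn\n","\n","A = 9                                  # anchors per grid location\n","feat = torch.randn(2, 1280, 7, 7)      # hypothetical backbone features (B, C, 7, 7)\n","sliding = nn.Conv2d(1280, 256, kernel_size=3, padding=1)  # the 3x3 'sliding window'\n","scores = nn.Conv2d(256, A, kernel_size=1)                 # objectness score per anchor\n","out = scores(torch.relu(sliding(feat)))\n","print(out.shape)                       # torch.Size([2, 9, 7, 7])\n","```\n","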
At each sliding-window location (i.e., centered at each position of the 7x7 activation grid), we simultaneously predict multiple region *proposals*, where the number of proposals per location is $A=9$.\n","\n","Later, we will add an object proposal layer that outputs an $A$-dimensional vector of scores estimating the probability that each proposal contains an object, a bounding-box regression layer that produces $4A$-dimensional outputs encoding the coordinates of the $A$ boxes, and a region classification layer that produces 20-dimensional outputs giving the probability of each object category (shared by all $A$ anchors). The $A$ *proposals* are parameterized relative to $A$ reference boxes, which we call *anchors*. An anchor is centered at the sliding window in question and is associated with a shape (e.g., 1x1, 3x3, 5x5). The list of anchor shapes is provided next.\n"]},{"cell_type":"markdown","metadata":{"id":"etBYc7rbj35F"},"source":["## Anchor shapes\n","At each spatial position of the 7x7 backbone features, we consider a set of $A$ anchor boxes. Different spatial positions all use anchors of the same shape.\n","\n","The shapes of the anchor boxes are a hyperparameter. We will provide the anchor shapes for you, and we will re-use the same anchor shapes for both A5-1 and A5-2. In some papers (e.g. 
YOLO v2), the anchor shapes are determined in a data-driven way by clustering the set of ground-truth box sizes, but for simplicity we will not use that approach in this assignment.\n","\n","Note that anchors can be much larger than the 3x3 sliding window (e.g., 5x5), since the receptive field of an activation cell on the original image can be large."]},{"cell_type":"code","metadata":{"id":"O5w-EUJekJj-"},"source":["# Declare variables for anchor priors, an Ax2 Tensor where A is the number of anchors.\n","# Hand-picked, same as our two-stage detector.\n","anchor_list = torch.tensor([[1, 1], [2, 2], [3, 3], [4, 4], [5, 5], [2, 3], [3, 2], [3, 5], [5, 3]], **to_float_cuda)\n","print(anchor_list.shape)"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"uochvAlqkgr8"},"source":["## Activation Grid Generator\n","In order to place anchors centered at each position of the 7x7 grid of backbone features, we need to know the spatial position of the center of each cell in the 7x7 grid of features.\n","\n","This function will compute these center coordinates for us."]},{"cell_type":"code","metadata":{"id":"fC9-TKRykof7"},"source":["def GenerateGrid(batch_size, w_amap=7, h_amap=7, dtype=torch.float32, device='cuda'):\n","  \"\"\"\n","  Return the (x, y) center coordinates of each grid cell, for a batch.\n","\n","  Inputs:\n","  - batch_size, B\n","  - w_amap: or W', width of the activation map (number of grid cells in the horizontal dimension)\n","  - h_amap: or H', height of the activation map (number of grid cells in the vertical dimension)\n","  - W' and H' are always 7 in our case while w and h might vary.\n","  \n","  Outputs:\n","  grid: A float32 tensor of shape (B, H', W', 2) giving the (x, y) coordinates\n","        of the centers of each feature for a feature map of shape (B, D, H', W')\n","  \"\"\"\n","  w_range = torch.arange(0, w_amap, dtype=dtype, device=device) + 0.5\n","  h_range = torch.arange(0, h_amap, dtype=dtype, device=device) + 0.5\n","\n","  w_grid_idx = w_range.unsqueeze(0).repeat(h_amap, 1)\n","  h_grid_idx = h_range.unsqueeze(1).repeat(1, w_amap)\n","  grid = torch.stack([w_grid_idx, h_grid_idx], dim=-1)\n","  grid = grid.unsqueeze(0).repeat(batch_size, 1, 1, 1)\n","\n","  return grid\n"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"UCHpDXV0sdEX"},"source":["# visualization\n","# simply create an activation grid where the cells are in green and the centers in red\n","# you should see the entire image divided into a 7x7 grid, with no gaps on the edges\n","\n","grid_list = GenerateGrid(w_list.shape[0])\n","\n","center = torch.cat((grid_list, grid_list), dim=-1)\n","grid_cell = center.clone()\n","grid_cell[:, :, :, [0, 1]] -= 1. / 2.\n","grid_cell[:, :, :, [2, 3]] += 1. / 2.\n","center = coord_trans(center, w_list, h_list)\n","grid_cell = coord_trans(grid_cell, w_list, h_list)\n","\n","for img, anc, grid in zip(img_list, center, grid_cell):\n","  data_visualizer(img, idx_to_class, anc.reshape(-1, 4), grid.reshape(-1, 4))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"R0knLi5KkxoS"},"source":["## Anchor Generator\n","At this point we have defined the spatial sizes of the anchors that we consider at each grid point, and we have computed the center of each grid point. We now combine these two pieces of information to compute the positions of all anchor boxes for the entire image.\n","\n","We denote the anchor coordinates as ($x_{tl}^a$, $y_{tl}^a$, $x_{br}^a$, $y_{br}^a$), indicating the coordinates of the top-left corner and the bottom-right corner, respectively. The following function returns all the anchors given the anchor shapes and the grid cell. 
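\n","\n","For instance (a quick sketch of the center-and-shape to corner conversion, not the assignment solution): an anchor of shape (w, h) centered at (x_c, y_c) has corners (x_c - w/2, y_c - h/2, x_c + w/2, y_c + h/2).\n","\n","```python\n","# center-and-shape to corner conversion for one anchor (illustration only)\n","xc, yc = 3.5, 3.5   # a grid-cell center on the 7x7 activation map\n","w, h = 2.0, 3.0     # one of the hand-picked anchor shapes\n","box = (xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2)\n","print(box)          # (2.5, 2.0, 4.5, 5.0)\n","```\n","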
**Note that the center of an anchor overlaps a grid cell center.**"]},{"cell_type":"code","metadata":{"id":"AzuHo-WRrew-"},"source":["def GenerateAnchor(anc, grid):\n","  \"\"\"\n","  Anchor generator.\n","\n","  Inputs:\n","  - anc: Tensor of shape (A, 2) giving the shapes of anchor boxes to consider at\n","    each point in the grid. anc[a] = (w, h) gives the width and height of the\n","    a'th anchor shape.\n","  - grid: Tensor of shape (B, H', W', 2) giving the (x, y) coordinates of the\n","    center of each feature from the backbone feature map. This is the tensor\n","    returned from GenerateGrid.\n","  \n","  Outputs:\n","  - anchors: Tensor of shape (B, A, H', W', 4) giving the positions of all\n","    anchor boxes for the entire image. anchors[b, a, h, w] is an anchor box\n","    centered at grid[b, h, w], whose shape is given by anc[a]; we parameterize\n","    boxes as anchors[b, a, h, w] = (x_tl, y_tl, x_br, y_br), where (x_tl, y_tl)\n","    and (x_br, y_br) give the xy coordinates of the top-left and bottom-right\n","    corners of the box.\n","  \"\"\"\n","  anchors = None\n","  ##############################################################################\n","  # TODO: Given a set of anchor shapes and a grid cell on the activation map,  #\n","  # generate all the anchor coordinates for each image. Support batch input.   #\n","  ##############################################################################\n","  # Replace \"pass\" statement with your code\n","  pass\n","  ##############################################################################\n","  #                               END OF YOUR CODE                             #\n","  ##############################################################################\n","\n","  return anchors"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"9an-jXEr62yq"},"source":["Run the following to check your implementation. 
You should see errors on the order of 1e-8 or less."]},{"cell_type":"code","metadata":{"id":"gffaPg4Dsfux"},"source":["# sanity check\n","anc_list = GenerateAnchor(anchor_list, grid_list)\n","assert anc_list.shape == torch.Size([3, 9, 7, 7, 4]), 'shape mismatch!'\n","\n","expected_anc_list_mean = torch.tensor([[-1.25000000, -0.87500000,  2.25000000,  1.87500000],\n","                                       [ 1.75000000, -0.87500000,  5.25000000,  1.87500000],\n","                                       [ 4.75000000, -0.87500000,  8.25000000,  1.87500000],\n","                                       [-1.25000000,  1.12500000,  2.25000000,  3.87500000],\n","                                       [ 1.75000000,  1.12500000,  5.25000000,  3.87500000],\n","                                       [ 4.75000000,  1.12500000,  8.25000000,  3.87500000],\n","                                       [-1.25000000,  3.12500000,  2.25000000,  5.87500000],\n","                                       [ 1.75000000,  3.12500000,  5.25000000,  5.87500000],\n","                                       [ 4.75000000,  3.12500000,  8.25000000,  5.87500000],\n","                                       [-1.25000000,  5.12500000,  2.25000000,  7.87500000],\n","                                       [ 1.75000000,  5.12500000,  5.25000000,  7.87500000],\n","                                       [ 4.75000000,  5.12500000,  8.25000000,  7.87500000]], **to_float_cuda)\n","print('rel error: ', rel_error(expected_anc_list_mean, anc_list[0, [1,3,6,8], ::2, ::3, :].view(-1, 12, 4).mean(0)))"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"xbxTvazYyV7P"},"source":["# visualization\n","print('*'*80)\n","print('All nine anchors should be exactly centered:')\n","anc_list = GenerateAnchor(anchor_list, grid_list[:, 3:4, 3:4])\n","for img, anc in zip(img_list, coord_trans(anc_list, w_list, h_list)):\n","  print(anc.shape)\n","  data_visualizer(img, idx_to_class, anc.reshape(-1, 
4))\n","\n","print('*'*80)\n","print('All anchors of the image (cluttered):')\n","anc_list = GenerateAnchor(anchor_list, grid_list) # all\n","for img, anc in zip(img_list, coord_trans(anc_list, w_list, h_list)):\n","  print(anc.shape)\n","  data_visualizer(img, idx_to_class, anc.reshape(-1, 4))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"KCzcNX3Zsi3n"},"source":["## Proposal Generator\n","If we only use anchors to propose object locations, we can only cover 9x7x7=441 regions in the image. What if an object does not fall into any of the regions?\n","\n","Hence, in the recent literature (e.g., [Fast R-CNN](https://arxiv.org/pdf/1504.08083.pdf)), the detector predicts transformations that convert anchor boxes into *region proposals*.\n","\n","So far, we have represented anchor boxes by the coordinates of their top-left and bottom-right corners $(x_{tl}^a, y_{tl}^a, x_{br}^a, y_{br}^a)$. When converting anchors to proposals, it will be more convenient to parameterize boxes by the xy coordinate of their center, and their width and height: $(x_c^a,y_c^a,w^a,h^a)$.\n","\n","Now, consider an anchor box with center, width and height $(x_c^a,y_c^a,w^a,h^a)$.\n","The network will predict a *transformation* $(t^x, t^y, t^w, t^h)$; applying this transformation to the anchor yields a *region proposal* with center, width and height $(x_c^p,y_c^p,w^p,h^p)$.\n","YOLO and Faster R-CNN use slightly different formulas to convert anchors into proposals. Here you need to implement both formulations.\n","\n","### YOLO\n","For YOLO, we assume that $t^x$ and $t^y$ are both in the range $-0.5\\leq t^x,t^y\\leq 0.5$, while $t^w$ and $t^h$ are real numbers in the range $(-\\infty, \\infty)$. 
Then we have:\n","- $x_c^p = x_c^a + t^x$\n","- $y_c^p = y_c^a + t^y$\n","- $w^p = w^a \\exp(t^w)$\n","- $h^p = h^a \\exp(t^h)$\n","\n","### Faster R-CNN\n","For Faster R-CNN, we assume that all transformation parameters $t^x, t^y, t^w, t^h$ are real numbers in the range $(-\\infty, \\infty)$. Then we have:\n","- $x_c^p = x_c^a + t^x w^a$\n","- $y_c^p = y_c^a + t^y h^a$\n","- $w^p = w^a \\exp(t^w)$\n","- $h^p = h^a \\exp(t^h)$\n","\n","\n","### Training\n","During training, we compute the ground-truth transformation $(\\hat{t^x}, \\hat{t^y}, \\hat{t^w}, \\hat{t^h})$ that would transform the anchor box $(x_c^a,y_c^a,w^a,h^a)$ into the ground-truth box $(x_c^{gt},y_c^{gt},w^{gt},h^{gt})$. We then apply a regression loss that penalizes differences between the predicted transform $(t^x, t^y, t^w, t^h)$ and the ground-truth transform.\n"]},{"cell_type":"code","metadata":{"id":"Mx9BlyIXspZy"},"source":["def GenerateProposal(anchors, offsets, method='YOLO'):\n","  \"\"\"\n","  Proposal generator.\n","\n","  Inputs:\n","  - anchors: Anchor boxes, of shape (B, A, H', W', 4). Anchors are represented\n","    by the coordinates of their top-left and bottom-right corners.\n","  - offsets: Transformations of shape (B, A, H', W', 4) that will be used to\n","    convert anchor boxes into region proposals. The transformation\n","    offsets[b, a, h, w] = (tx, ty, tw, th) will be applied to the anchor\n","    anchors[b, a, h, w]. For YOLO, assume that tx and ty are in the range\n","    (-0.5, 0.5).\n","  - method: Which transformation formula to use, either 'YOLO' or 'FasterRCNN'\n","  \n","  Outputs:\n","  - proposals: Region proposals of shape (B, A, H', W', 4), represented by the\n","    coordinates of their top-left and bottom-right corners. 
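For example (made-up\n","    numbers): with method='YOLO', the anchor (0, 0, 2, 2) has center (1, 1)\n","    and w = h = 2; the offset (0.5, -0.5, 0, 0) moves the center to\n","    (1.5, 0.5) and leaves the size unchanged, so the proposal is\n","    (0.5, -0.5, 2.5, 1.5). 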
Applying the\n","    transform offsets[b, a, h, w] to the anchor [b, a, h, w] should give the\n","    proposal proposals[b, a, h, w].\n","  \n","  \"\"\"\n","  assert(method in ['YOLO', 'FasterRCNN'])\n","  proposals = None\n","  ##############################################################################\n","  # TODO: Given anchor coordinates and the proposed offset for each anchor,    #\n","  # compute the proposal coordinates using the transformation formulas above.  #\n","  ##############################################################################\n","  # Replace \"pass\" statement with your code\n","  pass\n","  ##############################################################################\n","  #                               END OF YOUR CODE                             #\n","  ##############################################################################\n","\n","  return proposals"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"1IglWagADIb6"},"source":["Run the following to check your implementation. You should see errors on the order of 1e-7 or less."]},{"cell_type":"code","metadata":{"id":"beEUhlCHtFAN"},"source":["print('-' * 80)\n","print('Sanity check for YOLO.')\n","yolo_offset_list = torch.cat([torch.ones_like(anc_list[:, :, :, :, 0:2]).fill_(.5), torch.ones_like(anc_list[:, :, :, :, 2:4])], dim=-1)\n","yolo_proposal_list = GenerateProposal(anc_list, yolo_offset_list, 'YOLO') # no scaling\n","\n","print('1. Center moved by ~0.5 cell')\n","print('rel error: ', rel_error(anc_list[0, 0, 0, :, 0:2] + (anc_list[0, 0, 0, :, 2:4] - anc_list[0, 0, 0, :, 0:2])/2.0 + 0.5, \\\n","                               (yolo_proposal_list[0, 0, 0, :, 0:2] + (yolo_proposal_list[0, 0, 0, :, 2:4] - yolo_proposal_list[0, 0, 0, :, 0:2]) / 2.0)))\n","\n","print('2. 
w/h changed by e')\n","print('rel error: ', rel_error((anc_list[0, 0, 0, :, 2:4] - anc_list[0, 0, 0, :, 0:2]) * torch.exp(torch.ones_like(anc_list[0, 0, 0, :, 0:2])), \\\n","      (yolo_proposal_list[0, 0, 0, :, 2:4] - yolo_proposal_list[0, 0, 0, :, 0:2])))\n","\n","\n","print('-' * 80)\n","print('Sanity check for FasterRCNN.')\n","rcnn_offset_list = torch.ones_like(anc_list)\n","rcnn_proposal_list = GenerateProposal(anc_list, rcnn_offset_list, 'FasterRCNN')\n","\n","print('1. x/y shifted by wh')\n","print('rel error: ', rel_error(anc_list[0, 0, 0, :, 0:2] + (anc_list[0, 0, 0, :, 2:4] - anc_list[0, 0, 0, :, 0:2]) * 3.0 /2.0, \\\n","      (rcnn_proposal_list[0, 0, 0, :, 0:2] + (rcnn_proposal_list[0, 0, 0, :, 2:4] - rcnn_proposal_list[0, 0, 0, :, 0:2]) / 2.0)))\n","\n","print('2. w/h should change by e')\n","print('rel error: ', rel_error((anc_list[0, 0, 0, :, 2:4] - anc_list[0, 0, 0, :, 0:2]) * torch.exp(torch.ones_like(anc_list[0, 0, 0, :, 0:2])), \\\n","      (rcnn_proposal_list[0, 0, 0, :, 2:4] - rcnn_proposal_list[0, 0, 0, :, 0:2])))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"gYzP-8tGo6bp"},"source":["As an additional sanity check, we visualize an anchor (in red) and the corresponding proposal (in green) that results from applying the transform $(0.5, 0.5, 0, 0)$. 
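\n\nThe same arithmetic can be checked by hand with plain Python (a standalone sketch of the YOLO formulas above, separate from your `GenerateProposal`; the anchor values echo one row of the expected table earlier):\n\n```python\nimport math\n\n# Anchor corners (x_tl, y_tl, x_br, y_br) and a YOLO offset (tx, ty, tw, th)\nx_tl, y_tl, x_br, y_br = 1.75, 1.125, 5.25, 3.875\ntx, ty, tw, th = 0.5, 0.5, 0.0, 0.0\n\n# Corner -> center/size parameterization\nxc, yc = (x_tl + x_br) / 2, (y_tl + y_br) / 2\nw, h = x_br - x_tl, y_br - y_tl\n\n# YOLO: shift the center by (tx, ty); scale the size by exp(tw), exp(th)\nxc_p, yc_p = xc + tx, yc + ty\nw_p, h_p = w * math.exp(tw), h * math.exp(th)\n\n# Back to the corner parameterization\nproposal = (xc_p - w_p / 2, yc_p - h_p / 2, xc_p + w_p / 2, yc_p + h_p / 2)\nprint(proposal)  # (2.25, 1.625, 5.75, 4.375): shifted by 0.5, same size\n```\n\n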
The proposal should shift down and to the right (when using the YOLO formulation)."]},{"cell_type":"code","metadata":{"id":"YQZH2b7fhS49"},"source":["yolo_offset_list = torch.cat([torch.ones_like(anc_list[:, :, :, :, 0:2]).fill_(.5), torch.zeros_like(anc_list[:, :, :, :, 2:4])], dim=-1)\n","yolo_proposal_list = GenerateProposal(anc_list, yolo_offset_list, 'YOLO')\n","\n","for img, anc, prop in zip(img_list, coord_trans(anc_list[:, 0:1, 3:4, 3:4, :], w_list, h_list), \\\n","                          coord_trans(yolo_proposal_list[:, 0:1, 3:4, 3:4, :], w_list, h_list)):\n","  data_visualizer(img, idx_to_class, anc.reshape(-1, 4), prop.reshape(-1, 4))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"WvR1zYcMpdYU"},"source":["Next we visualize the effect of applying the transform $(0, 0, 1, 1)$ (with the YOLO formula) to the same anchor box (in red). Now the proposal (in green) should have the same center as the anchor, but the proposal should be larger."]},{"cell_type":"code","metadata":{"id":"3LtrWZCeyYVS"},"source":["yolo_offset_list = torch.cat([torch.zeros_like(anc_list[:, :, :, :, 0:2]), torch.ones_like(anc_list[:, :, :, :, 2:4]).fill_(1.)], dim=-1)\n","yolo_proposal_list = GenerateProposal(anc_list, yolo_offset_list, 'YOLO')\n","\n","for img, anc, prop in zip(img_list, coord_trans(anc_list[:, 0:1, 3:4, 3:4, :], w_list, h_list), \\\n","                          coord_trans(yolo_proposal_list[:, 0:1, 3:4, 3:4, :], w_list, h_list)):\n","  data_visualizer(img, idx_to_class, anc.reshape(-1, 4), prop.reshape(-1, 4))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"d7WJQkOkp6E5"},"source":["Next to sanity-check our implementation of the Faster R-CNN transformation formulas, we visualize the effect of applying the transform $(1, 1, 0, 0)$ to the same anchor (in red). 
The proposal (in green) should shift down and to the right by an amount equal to the size of the anchor."]},{"cell_type":"code","metadata":{"id":"Fk6FWCcYh1hZ"},"source":["# visualization (shift by wh, Faster R-CNN)\n","# anchors in red and proposals in green\n","rcnn_offset_list = torch.cat([torch.ones_like(anc_list[:, :, :, :, 0:2]), torch.zeros_like(anc_list[:, :, :, :, 2:4])], dim=-1)\n","rcnn_proposal_list = GenerateProposal(anc_list, rcnn_offset_list, 'FasterRCNN')\n","\n","for img, anc, prop in zip(img_list, coord_trans(anc_list[:, 0:1, 3:4, 3:4, :], w_list, h_list), \\\n","                          coord_trans(rcnn_proposal_list[:, 0:1, 3:4, 3:4, :], w_list, h_list)):\n","  data_visualizer(img, idx_to_class, anc.reshape(-1, 4), prop.reshape(-1, 4))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"Ex-G6inJqOgv"},"source":["We further check our implementation of the Faster R-CNN transformation formula, and visualize the effect of applying the transformation $(0, 0, 1, 1)$ to the same anchor. 
Now the proposal (in green) should have the same center as the anchor (in red), but the proposal should be larger by a factor of $e$."]},{"cell_type":"code","metadata":{"id":"rzDVtYz1hjVR"},"source":["# visualization (no shift and then scale by e, Faster R-CNN)\n","# anchors in red and proposals in green\n","rcnn_offset_list = torch.cat([torch.zeros_like(anc_list[:, :, :, :, 0:2]), torch.ones_like(anc_list[:, :, :, :, 2:4]).fill_(1)], dim=-1)\n","rcnn_proposal_list = GenerateProposal(anc_list, rcnn_offset_list, 'FasterRCNN')\n","\n","for img, anc, prop in zip(img_list, coord_trans(anc_list[:, 0:1, 3:4, 3:4, :], w_list, h_list), \\\n","                          coord_trans(rcnn_proposal_list[:, 0:1, 3:4, 3:4, :], w_list, h_list)):\n","  data_visualizer(img, idx_to_class, anc.reshape(-1, 4), prop.reshape(-1, 4))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"_lBcOWGWtNka"},"source":["# Prediction Networks\n","This module outputs the prediction scores (see figure below). We have provided the code to determine activated/negative anchors for you, though you need to implement the IoU function for it to work. You will also need to compute the loss function. The loss function consists of three parts: confidence score regression, bounding box offset regression, and object classification.\n","\n","![pred_scores](https://miro.medium.com/max/1055/1*YG6heD55fEmZeUKRSlsqlA.png)\n","\n","\n","Image credit: [towardsdatascience](https://towardsdatascience.com/yolov1-you-only-look-once-object-detection-e1f3ffec8a89). 
In this example the number of anchor shapes is $A=2$, while we use $A=9$."]},{"cell_type":"markdown","metadata":{"id":"Q8HqHqojtXhE"},"source":["## Intersection Over Union (IoU)\n","The definition of IoU and instructions on how to compute IoU can be found in the lecture slides (p35-39): https://web.eecs.umich.edu/~justincj/slides/eecs498/498_FA2019_lecture15.pdf"]},{"cell_type":"code","metadata":{"id":"mJOBdG2ltgT5"},"source":["def IoU(proposals, bboxes):\n","  \"\"\"\n","  Compute intersection over union between sets of bounding boxes.\n","\n","  Inputs:\n","  - proposals: Proposals of shape (B, A, H', W', 4)\n","  - bboxes: Ground-truth boxes from the DataLoader of shape (B, N, 5).\n","    Each ground-truth box is represented as a tuple (x_tl, y_tl, x_br, y_br, class).\n","    If image i has fewer than N boxes, then bboxes[i] will be padded with extra\n","    rows of -1.\n","  \n","  Outputs:\n","  - iou_mat: IoU matrix of shape (B, A*H'*W', N) where iou_mat[b, i, n] gives\n","    the IoU between one element of proposals[b] and bboxes[b, n].\n","\n","  For this implementation you DO NOT need to filter invalid proposals or boxes;\n","  in particular you don't need any special handling for bboxes that are padded\n","  with -1.\n","  \"\"\"\n","  iou_mat = None\n","  ##############################################################################\n","  # TODO: Compute the Intersection over Union (IoU) on proposals and GT boxes. #\n","  # No need to filter invalid proposals/bboxes (i.e., allow region area <= 0). #\n","  # You need to ensure your implementation is efficient (no for loops).        #\n","  # HINT:                                                                      #\n","  # IoU = Area of Intersection / Area of Union, where                          #\n","  # Area of Union = Area of Proposal + Area of BBox - Area of Intersection     #\n","  # and the Area of Intersection can be computed using the top-left corner and #\n","  # bottom-right corner of proposal and bbox. 
Think about their relationships. #\n","  ##############################################################################\n","  # Replace \"pass\" statement with your code\n","  pass\n","  ##############################################################################\n","  #                               END OF YOUR CODE                             #\n","  ##############################################################################\n","  return iou_mat"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"sY52QCKlTBsR"},"source":["Run the following to check your implementation. You should see errors on the order of 1e-8 or less."]},{"cell_type":"code","metadata":{"id":"IRXHJXFIbbDs"},"source":["# simple sanity check\n","width, height = torch.tensor([35, 35], **to_float_cuda), torch.tensor([40, 40], **to_float_cuda)\n","sample_bbox = torch.tensor([[[1,1,11,11,0], [20,20,30,30,0]]], **to_float_cuda)\n","sample_proposals = torch.tensor([[[[[5,5,15,15], [27,27,37,37]]]]], **to_float_cuda)\n","\n","result = IoU(sample_proposals, sample_bbox)\n","\n","# check 1\n","expected_result = torch.tensor([[[0.21951219, 0.00000000],\n","                                 [0.00000000, 0.04712042]]], **to_float_cuda)\n","print('simple iou_mat error: ', rel_error(expected_result, result))\n","\n","# check 2\n","iou_mat = IoU(anc_list, resized_box_list)\n","expected_iou_mat = torch.tensor([0.11666405, 0.15146968, 0.02956639], **to_float_cuda)\n","print('iou_mat error: ', rel_error(expected_iou_mat, iou_mat[:, :, 0].mean(1)))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"WNSwO-wDwzoQ"},"source":["## Activated (positive) and negative anchors\n","During training we need to match the ground-truth boxes against the anchors to determine the classification labels for the anchors -- which anchors should be classified as containing an object and which should be classified as background? 
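\n\nDeciding this relies on the IoU matrix you just implemented; as a scalar reminder of the computation (a plain-Python sketch using the same two boxes as the sanity check above):\n\n```python\n# IoU of two boxes given as corners (x_tl, y_tl, x_br, y_br)\nbox_a, box_b = (1.0, 1.0, 11.0, 11.0), (5.0, 5.0, 15.0, 15.0)\n\n# Intersection rectangle: max of top-left corners, min of bottom-right corners\nix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))\niy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))\ninter = ix * iy  # 6 * 6 = 36\n\narea_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])  # 100\narea_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])  # 100\niou = inter / (area_a + area_b - inter)  # 36 / 164, matching 0.21951219 above\n```\n\n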
We have written this part for you.\n","\n","Read and digest the input/output definition carefully. We highly recommend reading through the code, as later parts rely heavily on this function."]},{"cell_type":"code","metadata":{"id":"z7uXbDraMkHR"},"source":["def ReferenceOnActivatedAnchors(anchors, bboxes, grid, iou_mat, pos_thresh=0.7, neg_thresh=0.3, method='FasterRCNN'):\n","  \"\"\"\n","  Determine the activated (positive) and negative anchors for model training.\n","\n","  For YOLO - A grid cell is responsible for predicting a GT box if the center of\n","  the box falls into that cell.\n","  Implementation details: First compute the Manhattan distance between grid cell\n","  centers (BxH’xW’) and GT box centers (BxN). This gives a matrix of shape\n","  Bx(H'xW')xN; performing torch.min(dim=1)[1] on it gives the indices of the\n","  activated grid cells responsible for the GT boxes (convert to x and y). Among\n","  all the anchors associated with the activated grids, the anchor with the largest\n","  IoU with the GT box is responsible for predicting (regressing to) the GT box.\n","  Note: One anchor might match multiple GT boxes.\n","\n","  For Faster R-CNN - Positive anchors are defined as either of the two:\n","  (i) the anchor/anchors with the highest IoU overlap with a GT box, or\n","  (ii) an anchor that has an IoU overlap higher than 0.7 with any GT box.\n","  Note: One anchor can match at most one GT box (the one with the largest IoU overlap).\n","\n","  For both - We assign a negative label to an anchor if its IoU ratio is lower than\n","  a threshold value for all GT boxes. 
Anchors that are neither positive nor negative\n","  do not contribute to the training objective.\n","\n","  Main steps include:\n","  i) Decide activated and negative anchors based on the IoU matrix.\n","  ii) Compute GT confidence score/offsets/object class on the positive proposals.\n","  iii) Compute GT confidence score on the negative proposals.\n","  \n","  Inputs:\n","  - anchors: Anchor boxes, of shape BxAxH’xW’x4\n","  - bboxes: GT boxes of shape BxNx5, where N is the number of PADDED GT boxes,\n","            5 indicates (x_{tl}^{gt}, y_{tl}^{gt}, x_{br}^{gt}, y_{br}^{gt}) and the class index\n","  - grid (float): A cell grid of shape BxH'xW'x2 where 2 indicates the (x, y) coord\n","  - iou_mat: IoU matrix of shape Bx(AxH’xW’)xN\n","  - pos_thresh: Positive threshold value\n","  - neg_thresh: Negative threshold value\n","  - method: Switch between 'YOLO' mode and 'FasterRCNN' mode\n","  \n","  Outputs:\n","  - activated_anc_ind: Indices of activated anchors, of shape M, where M indicates the \n","                       number of activated anchors\n","  - negative_anc_ind: Indices of negative anchors, of shape M\n","  - GT_conf_scores: GT IoU confidence scores on activated anchors, of shape M\n","  - GT_offsets: GT offsets on activated anchors, of shape Mx4. 
They are denoted as\n","                \\hat{t^x}, \\hat{t^y}, \\hat{t^w}, \\hat{t^h} in the formulation earlier.\n","  - GT_class: GT class category on activated anchors, essentially indexed from bboxes[:, :, 4],\n","              of shape M\n","  - activated_anc_coord: Coordinates on activated anchors (mainly for visualization purposes)\n","  - negative_anc_coord: Coordinates on negative anchors (mainly for visualization purposes)\n","  \"\"\"\n","  \n","  assert(method in ['FasterRCNN', 'YOLO'])\n","\n","  B, A, h_amap, w_amap, _ = anchors.shape\n","  N = bboxes.shape[1]\n","\n","  # activated/positive anchors\n","  max_iou_per_anc, max_iou_per_anc_ind = iou_mat.max(dim=-1)\n","  if method == 'FasterRCNN':\n","    max_iou_per_box = iou_mat.max(dim=1, keepdim=True)[0]\n","    activated_anc_mask = (iou_mat == max_iou_per_box) & (max_iou_per_box > 0)\n","    activated_anc_mask |= (iou_mat > pos_thresh) # using the pos_thresh condition as well\n","    # if an anchor matches multiple GT boxes, choose the box with the largest iou\n","    activated_anc_mask = activated_anc_mask.max(dim=-1)[0] # Bx(AxH’xW’)\n","    activated_anc_ind = torch.nonzero(activated_anc_mask.view(-1)).squeeze(-1)\n","\n","    # GT conf scores\n","    GT_conf_scores = max_iou_per_anc[activated_anc_mask] # M\n","\n","    # GT class\n","    box_cls = bboxes[:, :, 4].view(B, 1, N).expand((B, A*h_amap*w_amap, N))\n","    GT_class = torch.gather(box_cls, -1, max_iou_per_anc_ind.unsqueeze(-1)).squeeze(-1) # M\n","    GT_class = GT_class[activated_anc_mask].long()\n","\n","    bboxes_expand = bboxes[:, :, :4].view(B, 1, N, 4).expand((B, A*h_amap*w_amap, N, 4))\n","    bboxes = torch.gather(bboxes_expand, -2, max_iou_per_anc_ind.unsqueeze(-1) \\\n","      .unsqueeze(-1).expand(B, A*h_amap*w_amap, 1, 4)).view(-1, 4)\n","    bboxes = bboxes[activated_anc_ind]\n","  else:\n","    bbox_mask = (bboxes[:, :, 0] != -1) # BxN, True for valid (non-padded) boxes\n","    bbox_centers = (bboxes[:, :, 2:4] - bboxes[:, :, :2]) 
/ 2. + bboxes[:, :, :2] # BxNx2\n","\n","    mah_dist = torch.abs(grid.view(B, -1, 2).unsqueeze(2) - bbox_centers.unsqueeze(1)).sum(dim=-1) # Bx(H'xW')xN\n","    min_mah_dist = mah_dist.min(dim=1, keepdim=True)[0] # Bx1xN\n","    grid_mask = (mah_dist == min_mah_dist).unsqueeze(1) # Bx1x(H'xW')xN\n","\n","    reshaped_iou_mat = iou_mat.view(B, A, -1, N)\n","    anc_with_largest_iou = reshaped_iou_mat.max(dim=1, keepdim=True)[0] # Bx1x(H’xW’)xN\n","    anc_mask = (anc_with_largest_iou == reshaped_iou_mat) # BxAx(H’xW’)xN\n","    activated_anc_mask = (grid_mask & anc_mask).view(B, -1, N)\n","    activated_anc_mask &= bbox_mask.unsqueeze(1)\n","    \n","    # one anchor could match multiple GT boxes\n","    activated_anc_ind = torch.nonzero(activated_anc_mask.view(-1)).squeeze(-1)\n","    GT_conf_scores = iou_mat.view(-1)[activated_anc_ind]\n","    bboxes = bboxes.view(B, 1, N, 5).repeat(1, A*h_amap*w_amap, 1, 1).view(-1, 5)[activated_anc_ind]\n","    GT_class = bboxes[:, 4].long()\n","    bboxes = bboxes[:, :4]\n","    activated_anc_ind = (activated_anc_ind / activated_anc_mask.shape[-1]).long()\n","\n","  print('number of pos proposals: ', activated_anc_ind.shape[0])\n","  activated_anc_coord = anchors.view(-1, 4)[activated_anc_ind]\n","\n","  # GT offsets\n","  # bbox and anchor coordinates are x_tl, y_tl, x_br, y_br\n","  # offsets are t_x, t_y, t_w, t_h\n","  wh_offsets = torch.log((bboxes[:, 2:4] - bboxes[:, :2]) \\\n","    / (activated_anc_coord[:, 2:4] - activated_anc_coord[:, :2]))\n","\n","  xy_offsets = (bboxes[:, :2] + bboxes[:, 2:4] - \\\n","    activated_anc_coord[:, :2] - activated_anc_coord[:, 2:4]) / 2.\n","\n","  if method == \"FasterRCNN\":\n","    xy_offsets /= (activated_anc_coord[:, 2:4] - activated_anc_coord[:, :2])\n","  else:\n","    assert torch.max(torch.abs(xy_offsets)) <= 0.5, \\\n","      \"x and y offsets should be between -0.5 and 0.5! 
Got {}\".format( \\\n","      torch.max(torch.abs(xy_offsets)))\n","\n","  GT_offsets = torch.cat((xy_offsets, wh_offsets), dim=-1)\n","\n","  # negative anchors\n","  negative_anc_mask = (max_iou_per_anc < neg_thresh) # Bx(AxH’xW’)\n","  negative_anc_ind = torch.nonzero(negative_anc_mask.view(-1)).squeeze(-1)\n","  negative_anc_ind = negative_anc_ind[torch.randint(0, negative_anc_ind.shape[0], (activated_anc_ind.shape[0],))]\n","  negative_anc_coord = anchors.view(-1, 4)[negative_anc_ind.view(-1)]\n","  \n","  # activated_anc_coord and negative_anc_coord are mainly for visualization purposes\n","  return activated_anc_ind, negative_anc_ind, GT_conf_scores, GT_offsets, GT_class, \\\n","         activated_anc_coord, negative_anc_coord"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"6OHpu5SMXrio"},"source":["Run the following to check your implementation. You should see errors on the order of 1e-7 or less."]},{"cell_type":"code","metadata":{"id":"fK_USCuaXSzh"},"source":["# sanity check on YOLO (the one on Faster R-CNN is in A5-2)\n","activated_anc_ind, negative_anc_ind, GT_conf_scores, GT_offsets, GT_class, \\\n","  activated_anc_coord, negative_anc_coord = ReferenceOnActivatedAnchors(anc_list, resized_box_list, grid_list, iou_mat, neg_thresh=0.2, method='YOLO')\n","\n","expected_GT_conf_scores = torch.tensor([0.74538743, 0.72793430, 0.76044953, 0.37116671], **to_float_cuda)\n","expected_GT_offsets = torch.tensor([[ 0.04900002,  0.35735703, -0.09431065,  0.19244696],\n","                                    [-0.14700007,  0.37299442, -0.00250307,  0.25213102],\n","                                    [-0.40600014,  0.09625626,  0.20863886, -0.07974572],\n","                                    [ 0.15399981, -0.42933345, -0.03459148, -0.86750042]], **to_float_cuda)\n","expected_GT_class = torch.tensor([ 6,  7, 19,  6], **to_long_cuda)\n","print('conf scores error: ', rel_error(GT_conf_scores, expected_GT_conf_scores))\n","print('offsets 
error: ', rel_error(GT_offsets, expected_GT_offsets))\n","print('class prob error: ', rel_error(GT_class, expected_GT_class))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"YsPZw2mTutWP"},"source":["We can sanity check this function by visualizing ground-truth boxes (in red) along with positive / negative anchors (in green)."]},{"cell_type":"code","metadata":{"id":"2TuJNoCvUuqc"},"source":["# visualize activated and negative anchors\n","anc_per_img = torch.prod(torch.tensor(anc_list.shape[1:-1]))\n","\n","print('*'*80)\n","print('Activated (positive) anchors:')\n","for img, bbox, idx in zip(img_list, box_list, torch.arange(box_list.shape[0])):\n","  anc_ind_in_img = (activated_anc_ind >= idx * anc_per_img) & (activated_anc_ind < (idx+1) * anc_per_img)\n","  print('{} activated anchors!'.format(torch.sum(anc_ind_in_img)))\n","  data_visualizer(img, idx_to_class, bbox[:, :4], coord_trans(activated_anc_coord[anc_ind_in_img], w_list[idx], h_list[idx]))\n","\n","print('*'*80)\n","print('Negative anchors:')\n","for img, bbox, idx in zip(img_list, box_list, torch.arange(box_list.shape[0])):\n","  anc_ind_in_img = (negative_anc_ind >= idx * anc_per_img) & (negative_anc_ind < (idx+1) * anc_per_img)\n","  print('{} negative anchors!'.format(torch.sum(anc_ind_in_img)))\n","  data_visualizer(img, idx_to_class, bbox[:, :4], coord_trans(negative_anc_coord[anc_ind_in_img], w_list[idx], h_list[idx]))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"XW_Zek3_dgfF"},"source":["## Prediction Network\n","The prediction network inputs the features from the backbone network, and outputs the classification scores and transformations for each anchor.\n","\n","For each position in the 7x7 grid of features from the backbone, the prediction network outputs `C` numbers to be interpreted as classification scores over the `C` object categories for the anchors at that position.\n","\n","In addition, for each of the `A` anchors at 
each position, the prediction network outputs a transformation (4 numbers, to convert the anchor box into a region proposal) and a confidence score (where large positive values indicate high probability that the anchor contains an object, and large negative values indicate low probability that the anchor contains an object).\n","\n","Collecting all of these outputs, we see that for each position in the 7x7 grid of features we need to output a total of `5A+C` numbers, so the prediction network receives an input tensor of shape `(B, 1280, 7, 7)` and produces an output tensor of shape `(B, 5A+C, 7, 7)`. We can achieve this with two `1x1` convolution layers operating on the input tensor, where the number of filters in the second layer is `5A+C`.\n","\n","During training, we do not apply the loss on the full set of anchor boxes for the image; instead we designate a subset of anchors as positive and negative by matching them with ground-truth boxes as in the function above. The Prediction network is also responsible for picking out the outputs corresponding to the positive and negative anchors."]},{"cell_type":"code","metadata":{"id":"vMkvupmCdnYH"},"source":["class PredictionNetwork(nn.Module):\n","  def __init__(self, in_dim, hidden_dim=128, num_anchors=9, num_classes=20, drop_ratio=0.3):\n","    super().__init__()\n","\n","    assert(num_classes != 0 and num_anchors != 0)\n","    self.num_classes = num_classes\n","    self.num_anchors = num_anchors\n","\n","    ##############################################################################\n","    # TODO: Set up a network that will predict outputs for all anchors. This     #\n","    # network should have a 1x1 convolution with hidden_dim filters, followed    #\n","    # by a Dropout layer with p=drop_ratio, a Leaky ReLU nonlinearity, and       #\n","    # finally another 1x1 convolution layer to predict all outputs. You can      #\n","    # use an nn.Sequential for this network, and store it in a member variable.  
#\n","    # HINT: The output should be of shape (B, 5*A+C, 7, 7), where                #\n","    # A=self.num_anchors and C=self.num_classes.                                 #\n","    ##############################################################################\n","    # Replace \"pass\" statement with your code\n","    pass\n","    ##############################################################################\n","    #                               END OF YOUR CODE                             #\n","    ##############################################################################\n","\n","  def _extract_anchor_data(self, anchor_data, anchor_idx):\n","    \"\"\"\n","    Inputs:\n","    - anchor_data: Tensor of shape (B, A, D, H, W) giving a vector of length\n","      D for each of A anchors at each point in an H x W grid.\n","    - anchor_idx: int64 Tensor of shape (M,) giving anchor indices to extract\n","\n","    Returns:\n","    - extracted_anchors: Tensor of shape (M, D) giving anchor data for each\n","      of the anchors specified by anchor_idx.\n","    \"\"\"\n","    B, A, D, H, W = anchor_data.shape\n","    anchor_data = anchor_data.permute(0, 1, 3, 4, 2).contiguous().view(-1, D)\n","    extracted_anchors = anchor_data[anchor_idx]\n","    return extracted_anchors\n","  \n","  def _extract_class_scores(self, all_scores, anchor_idx):\n","    \"\"\"\n","    Inputs:\n","    - all_scores: Tensor of shape (B, C, H, W) giving classification scores for\n","      C classes at each point in an H x W grid.\n","    - anchor_idx: int64 Tensor of shape (M,) giving the indices of anchors at\n","      which to extract classification scores\n","\n","    Returns:\n","    - extracted_scores: Tensor of shape (M, C) giving the classification scores\n","      for each of the anchors specified by anchor_idx.\n","    \"\"\"\n","    B, C, H, W = all_scores.shape\n","    A = self.num_anchors\n","    all_scores = all_scores.contiguous().permute(0, 2, 3, 1).contiguous()\n","    
all_scores = all_scores.view(B, 1, H, W, C).expand(B, A, H, W, C)\n","    all_scores = all_scores.reshape(B * A * H * W, C)\n","    extracted_scores = all_scores[anchor_idx]\n","    return extracted_scores\n","\n","  def forward(self, features, pos_anchor_idx=None, neg_anchor_idx=None):\n","    \"\"\"\n","    Run the forward pass of the network to predict outputs given features\n","    from the backbone network.\n","\n","    Inputs:\n","    - features: Tensor of shape (B, in_dim, 7, 7) giving image features computed\n","      by the backbone network.\n","    - pos_anchor_idx: int64 Tensor of shape (M,) giving the indices of anchors\n","      marked as positive. These are only given during training; at test-time\n","      this should be None.\n","    - neg_anchor_idx: int64 Tensor of shape (M,) giving the indices of anchors\n","      marked as negative. These are only given during training; at test-time this\n","      should be None.\n","    \n","    The outputs from this method are different during training and inference.\n","    \n","    During training, pos_anchor_idx and neg_anchor_idx are given and identify\n","    which anchors should be positive and negative, and this forward pass needs\n","    to extract only the predictions for the positive and negative anchors.\n","\n","    During inference, only features are provided and this method needs to return\n","    predictions for all anchors.\n","\n","    Outputs (During training):\n","    - conf_scores: Tensor of shape (2*M, 1) giving the predicted confidence\n","      scores for positive anchors and negative anchors (in that order).\n","    - offsets: Tensor of shape (M, 4) giving predicted transformations for\n","      positive anchors.\n","    - class_scores: Tensor of shape (M, C) giving classification scores for\n","      positive anchors.\n","\n","    Outputs (During inference):\n","    - conf_scores: Tensor of shape (B, A, H, W) giving predicted confidence\n","      scores for all anchors.\n","    - 
offsets: Tensor of shape (B, A, 4, H, W) giving predicted transformations\n","      for all anchors.\n","    - class_scores: Tensor of shape (B, C, H, W) giving classification scores for\n","      each spatial position.\n","    \"\"\"\n","    conf_scores, offsets, class_scores = None, None, None\n","    ############################################################################\n","    # TODO: Use backbone features to predict conf_scores, offsets, and         #\n","    # class_scores. Make sure conf_scores is between 0 and 1 by squashing the  #\n","    # network output with a sigmoid. Also make sure the first two elements t^x #\n","    # and t^y of offsets are between -0.5 and 0.5 by squashing with a sigmoid  #\n","    # and subtracting 0.5.                                                     #\n","    #                                                                          #\n","    # During training you need to extract the outputs for only the positive    #\n","    # and negative anchors as specified above.                                 #\n","    #                                                                          #\n","    # HINT: You can use the provided helper methods self._extract_anchor_data  #\n","    # and self._extract_class_scores to extract information for positive and   #\n","    # negative anchors specified by pos_anchor_idx and neg_anchor_idx.         
#\n","    ############################################################################\n","    # Replace \"pass\" statement with your code\n","    pass\n","    ##############################################################################\n","    #                               END OF YOUR CODE                             #\n","    ##############################################################################\n","    return conf_scores, offsets, class_scores"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"tqPhOOjiWuSM"},"source":["Run the following to check your implementation. You should see errors on the order of 1e-7 or less."]},{"cell_type":"code","metadata":{"id":"4lgXGvq7JOmS"},"source":["# sanity check\n","fix_random_seed(0)\n","pred_network = PredictionNetwork(1280, drop_ratio=0).to(**to_float_cuda)\n","features = torch.linspace(-10., 10., steps=3*1280*7*7, **to_float_cuda).view(3, 1280, 7, 7)\n","pos_anchor_idx = torch.tensor([122, 605, 871, 955], **to_long_cuda)\n","neg_anchor_idx = torch.tensor([1048, 1292, 1124, 1092], **to_long_cuda)\n","conf_scores, offsets, class_prob = pred_network(features, pos_anchor_idx, neg_anchor_idx)\n","\n","expected_conf_scores = torch.tensor([[0.85080749], [0.55230302], [0.45239496], [0.51819414],\n","                                     [0.38467780], [0.82627463], [0.53837817], [0.538446366]], **to_float_cuda)\n","expected_offsets = torch.tensor([[-0.16344640,  0.45598251, -1.56108809, -1.96177566],\n","                                 [ 0.09087485, -0.11321104,  0.46109992,  0.13557276],\n","                                 [ 0.14098871, -0.04003078, -0.14689390,  0.01480492],\n","                                 [-0.22611487,  0.14513946, -1.26124716,  0.58137202]], **to_float_cuda)\n","expected_class_prob = torch.tensor([0.88649291, 1.08752346, 0.90241265, 1.28021181, 0.77324629,\n","                                    -0.93609941, 1.44269419, -1.01072836, 0.09888625, -0.09179375,\n","   
                                 -0.48814785, -1.14749694, 0.24533349, -0.19522685, 0.21445289,\n","                                    0.91779679, 0.97305167, 1.01984429, -0.11932681, 0.34998628], **to_float_cuda)\n","\n","print('conf scores error: ', rel_error(conf_scores, expected_conf_scores))\n","print('offsets error: ', rel_error(offsets, expected_offsets))\n","print('class prob error: ', rel_error(class_prob.mean(0), expected_class_prob))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"ah05Gd6BOKG2"},"source":["## Loss Function\n","The confidence-score regression loss applies to both activated and negative anchors, while the bounding-box regression loss and the object-classification loss apply to activated anchors only. These are implemented for you."]},{"cell_type":"markdown","metadata":{"id":"ZDwpyHZBxNRn"},"source":["### Confidence score regression\n"]},{"cell_type":"code","metadata":{"id":"cmVyv6NrxTiM"},"source":["def ConfScoreRegression(conf_scores, GT_conf_scores):\n","  \"\"\"\n","  Use sum-squared error as in YOLO\n","\n","  Inputs:\n","  - conf_scores: Predicted confidence scores\n","  - GT_conf_scores: GT confidence scores\n","  \n","  Outputs:\n","  - conf_score_loss\n","  \"\"\"\n","  # the target conf_scores for negative samples are zeros\n","  GT_conf_scores = torch.cat((torch.ones_like(GT_conf_scores), \\\n","                              torch.zeros_like(GT_conf_scores)), dim=0).view(-1, 1)\n","  conf_score_loss = torch.sum((conf_scores - GT_conf_scores)**2) * 1. 
/ GT_conf_scores.shape[0]\n","  return conf_score_loss"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"sRyF6HDGxT7P"},"source":["### Bounding box regression"]},{"cell_type":"code","metadata":{"id":"yecLoQLjxcx7"},"source":["def BboxRegression(offsets, GT_offsets):\n","  \"\"\"\n","  Use sum-squared error as in YOLO\n","  For both xy and wh\n","\n","  Inputs:\n","  - offsets: Predicted box offsets\n","  - GT_offsets: GT box offsets\n","  \n","  Outputs:\n","  - bbox_reg_loss\n","  \"\"\"\n","  bbox_reg_loss = torch.sum((offsets - GT_offsets)**2) * 1. / GT_offsets.shape[0]\n","  return bbox_reg_loss"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"lADrqUuoxdRb"},"source":["### Object classification"]},{"cell_type":"code","metadata":{"id":"2FoLOeypxpC8"},"source":["def ObjectClassification(class_prob, GT_class, batch_size, anc_per_img, activated_anc_ind):\n","  \"\"\"\n","  Use softmax loss\n","\n","  Inputs:\n","  - class_prob: Predicted softmax class probability\n","  - GT_class: GT box class label\n","  \n","  Outputs:\n","  - object_cls_loss\n","  \"\"\"\n","  # average within sample and then average across batch\n","  # so that the class predictions are not biased towards dense popular objects like `person`\n","  all_loss = F.cross_entropy(class_prob, GT_class, reduction='none') # , reduction='sum') * 1. / batch_size\n","  object_cls_loss = 0\n","  for idx in range(batch_size):\n","    anc_ind_in_img = (activated_anc_ind >= idx * anc_per_img) & (activated_anc_ind < (idx+1) * anc_per_img)\n","    object_cls_loss += all_loss[anc_ind_in_img].sum() * 1. / torch.sum(anc_ind_in_img)\n","  object_cls_loss /= batch_size\n","  # object_cls_loss = F.cross_entropy(class_prob, GT_class, reduction='sum') * 1. / batch_size\n","\n","  return object_cls_loss"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"CJQGhhMTVi3k"},"source":["Run the following to check your implementation. 
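As a standalone intuition check (all numbers here are made up, and this is separate from the provided sanity-check cell), the target construction inside `ConfScoreRegression` can be re-derived on a toy batch:

```python
import torch

# Toy illustration of ConfScoreRegression's target construction: the first M
# predictions correspond to positive anchors (target 1) and the next M to
# negative anchors (target 0); the loss is a mean of squared errors over 2M.
M = 2
conf_scores = torch.tensor([[0.9], [0.8], [0.1], [0.2]])  # 2 positives, then 2 negatives
GT_conf_scores = torch.ones(M)  # only its shape is used when building targets
targets = torch.cat((torch.ones_like(GT_conf_scores),
                     torch.zeros_like(GT_conf_scores)), dim=0).view(-1, 1)
loss = torch.sum((conf_scores - targets) ** 2) / targets.shape[0]
print(loss.item())  # ≈ 0.025 = (0.01 + 0.04 + 0.01 + 0.04) / 4
```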
You should see errors on the order of 1e-8 or less."]},{"cell_type":"code","metadata":{"id":"B0iG-DAUOQ56"},"source":["# sanity check\n","conf_loss = ConfScoreRegression(conf_scores, GT_conf_scores)\n","reg_loss = BboxRegression(offsets, GT_offsets)\n","cls_loss = ObjectClassification(class_prob, GT_class, w_list.shape[0], anc_per_img, activated_anc_ind)\n","print('conf loss: {:.4f}, reg loss: {:.4f}, cls loss: {:.4f}'.format(conf_loss, reg_loss, cls_loss))\n","\n","loss_all = torch.tensor([conf_loss.data, reg_loss.data, cls_loss.data], **to_float_cuda)\n","expected_loss = torch.tensor([0.27064770460128784, 2.9750006198883057, 2.3919472694396973], **to_float_cuda)\n","\n","print('loss error: ', rel_error(loss_all, expected_loss))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"MIt5AxlAxwKz"},"source":["# Train an object detector"]},{"cell_type":"markdown","metadata":{"id":"7yCYzKIxx2qB"},"source":["## Object detection module\n","\n","We will now combine everything into the `SingleStageDetector` class:"]},{"cell_type":"code","metadata":{"id":"7OsS-KZex6uK"},"source":["class SingleStageDetector(nn.Module):\n","  def __init__(self):\n","    super().__init__()\n","\n","    self.anchor_list = torch.tensor([[1, 1], [2, 2], [3, 3], [4, 4], [5, 5], [2, 3], [3, 2], [3, 5], [5, 3]]) # READ ONLY\n","    self.feat_extractor = FeatureExtractor()\n","    self.num_classes = 20\n","    self.pred_network = PredictionNetwork(1280, num_anchors=self.anchor_list.shape[0], \\\n","                                          num_classes=self.num_classes)\n","  def forward(self):\n","    raise NotImplementedError\n","  \n","  def inference(self):\n","    raise NotImplementedError"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"5Im5jy1QRaeV"},"source":["Implement the `forward` function of our detector. 
This implements the training-time forward pass: it receives the input images and the ground-truth bounding boxes, and returns the total loss for the minibatch."]},{"cell_type":"code","metadata":{"id":"MsBG9yqNRWhu"},"source":["  def detector_forward(self, images, bboxes):\n","    \"\"\"\n","    Training-time forward pass for the single-stage detector.\n","\n","    Inputs:\n","    - images: Input images, of shape (B, 3, 224, 224)\n","    - bboxes: GT bounding boxes of shape (B, N, 5) (padded)\n","\n","    Outputs:\n","    - total_loss: Torch scalar giving the total loss for the batch.\n","    \"\"\"\n","    # weights to multiply each loss term by\n","    w_conf = 1 # for conf_scores\n","    w_reg = 1 # for offsets\n","    w_cls = 1 # for class_prob\n","\n","    total_loss = None\n","    ##############################################################################\n","    # TODO: Implement the forward pass of SingleStageDetector.                   #\n","    # A few key steps are outlined as follows:                                   #\n","    # i) Image feature extraction,                                               #\n","    # ii) Grid and anchor generation,                                            #\n","    # iii) Compute IoU between anchors and GT boxes and then determine activated/#\n","    #      negative anchors, and GT_conf_scores, GT_offsets, GT_class,           #\n","    # iv) Compute conf_scores, offsets, class_prob through the prediction network#\n","    # v) Compute the total_loss which is formulated as:                          #\n","    #    total_loss = w_conf * conf_loss + w_reg * reg_loss + w_cls * cls_loss,  #\n","    #    where conf_loss is computed by ConfScoreRegression, reg_loss by         #\n","    #    BboxRegression, and cls_loss by ObjectClassification.                   #\n","    # HINT: Set `neg_thresh=0.2` in ReferenceOnActivatedAnchors in this notebook #\n","    #       (A5-1) for a better performance than with the default value.    
     #\n","    ##############################################################################\n","    # Replace \"pass\" statement with your code\n","    pass\n","    ##############################################################################\n","    #                               END OF YOUR CODE                             #\n","    ##############################################################################\n","\n","    return total_loss\n","  \n","  SingleStageDetector.forward = detector_forward"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"JXZAaDklx7Bs"},"source":["## Object detection solver\n","The `DetectionSolver` object runs the training loop to train a single-stage detector."]},{"cell_type":"code","metadata":{"id":"s8KX5ohHyBFA"},"source":["def DetectionSolver(detector, train_loader, learning_rate=3e-3,\n","                    lr_decay=1, num_epochs=20, **kwargs):\n","  \"\"\"\n","  Run optimization to train the model.\n","  \"\"\"\n","\n","  # ship model to GPU\n","  detector.to(**to_float_cuda)\n","\n","  # optimizer setup\n","  from torch import optim\n","  # optimizer = optim.Adam(\n","  optimizer = optim.SGD(\n","    filter(lambda p: p.requires_grad, detector.parameters()),\n","    learning_rate) # (if using Adam instead, leave betas and eps at their defaults)\n","  lr_scheduler = optim.lr_scheduler.LambdaLR(optimizer,\n","                                             lambda epoch: lr_decay ** epoch)\n","\n","  # sample minibatch data\n","  loss_history = []\n","  detector.train()\n","  for i in range(num_epochs):\n","    start_t = time.time()\n","    for iter_num, data_batch in enumerate(train_loader):\n","      images, boxes, w_batch, h_batch, _ = data_batch\n","      resized_boxes = coord_trans(boxes, w_batch, h_batch, mode='p2a')\n","      images = images.to(**to_float_cuda)\n","      resized_boxes = resized_boxes.to(**to_float_cuda)\n","\n","      loss = detector(images, resized_boxes)\n","      optimizer.zero_grad()\n","      
loss.backward()\n","      loss_history.append(loss.item())\n","      optimizer.step()\n","\n","      print('(Iter {} / {})'.format(iter_num, len(train_loader)))\n","\n","    end_t = time.time()\n","    print('(Epoch {} / {}) loss: {:.4f} time per epoch: {:.1f}s'.format(\n","        i, num_epochs, loss.item(), end_t-start_t))\n","\n","    lr_scheduler.step()\n","\n","  # plot the training losses\n","  plt.plot(loss_history)\n","  plt.xlabel('Iteration')\n","  plt.ylabel('Loss')\n","  plt.title('Training loss history')\n","  plt.show()"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"D-9nFPtLyDE_"},"source":["## Overfit small data\n","To make sure that everything is working as expected, we can try to overfit the detector to a small subset of data.\n","\n","After 200 epochs of training you should see a total loss of around 0.3 or less."]},{"cell_type":"code","metadata":{"id":"eNKLRL7HyHO0"},"source":["# monitor the training loss\n","fix_random_seed(0)\n","num_sample = 10\n","small_dataset = torch.utils.data.Subset(train_dataset, torch.linspace(0, len(train_dataset)-1, steps=num_sample).long())\n","small_train_loader = pascal_voc2007_loader(small_dataset, 10) # a new loader\n","\n","for lr in [1e-2]:\n","  print('lr: ', lr)\n","  detector = SingleStageDetector()\n","  DetectionSolver(detector, small_train_loader, learning_rate=lr, num_epochs=200)"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"CuSBfcGWyHlD"},"source":["## Train a net\n","\n","Now that we are confident that the training code is working properly, let's train the network on more data and for longer. We will train for 50 epochs; this should take about 35 minutes on a K80 GPU. You should see a total loss of around 0.27 or less.\n","\n","Note that real object detection systems typically train for 12-24 hours, distribute training over multiple GPUs, and use much faster GPUs. 
As such our result will be far from the state of the art, but it should give some reasonable results!\n"]},{"cell_type":"code","metadata":{"id":"Aipf7-XQyJ28"},"source":["# monitor the training loss\n","train_loader = pascal_voc2007_loader(train_dataset, 100) # a new loader\n","\n","lr = 5e-2\n","num_epochs = 50\n","yolo_detector = SingleStageDetector()\n","DetectionSolver(yolo_detector, train_loader, learning_rate=lr, num_epochs=num_epochs)"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"j_K7nL8eviXV"},"source":["# (optional) load/save checkpoint\n","# torch.save(yolo_detector.state_dict(), 'yolo_detector.pt') # uncomment to save your checkpoint\n","# yolo_detector.load_state_dict(torch.load('yolo_detector.pt')) # uncomment to load your previous checkpoint"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"xzU71hc_y9Ij"},"source":["# Use an object detector"]},{"cell_type":"markdown","metadata":{"id":"TxwNNfV-zLkJ"},"source":["## Thresholding\n","We will implement this in the object detection module."]},{"cell_type":"markdown","metadata":{"id":"e42TAEcpjeKW"},"source":["## Non-Maximum Suppression (NMS)\n","The definition of NMS and instructions on how to compute NMS can be found in the lecture slides (p40-44): https://web.eecs.umich.edu/~justincj/slides/eecs498/498_FA2019_lecture15.pdf"]},{"cell_type":"code","metadata":{"id":"zeWgWrzfYgm_"},"source":["def nms(boxes, scores, iou_threshold=0.5, topk=None):\n","  \"\"\"\n","  Non-maximum suppression removes overlapping bounding boxes.\n","\n","  Inputs:\n","  - boxes: top-left and bottom-right coordinate values of the bounding boxes\n","    to perform NMS on, of shape Nx4\n","  - scores: scores for each one of the boxes, of shape N\n","  - iou_threshold: discards all overlapping boxes with IoU > iou_threshold; float\n","  - topk: If this is not None, then return only the topk highest-scoring boxes.\n","    Otherwise if this is None, then return all boxes that 
pass NMS.\n","\n","  Outputs:\n","  - keep: torch.long tensor with the indices of the elements that have been\n","    kept by NMS, sorted in decreasing order of scores; of shape [num_kept_boxes]\n","  \"\"\"\n","\n","  if (not boxes.numel()) or (not scores.numel()):\n","    return torch.zeros(0, dtype=torch.long)\n","\n","  keep = None\n","  #############################################################################\n","  # TODO: Implement non-maximum suppression which iterates the following:     #\n","  #       1. Select the highest-scoring box among the remaining ones,         #\n","  #          which has not been chosen in this step before                    #\n","  #       2. Eliminate boxes with IoU > threshold                             #\n","  #       3. If any boxes remain, GOTO 1                                      #\n","  #       Your implementation should not depend on a specific device type;    #\n","  #       you can use the device of the input if necessary.                   #\n","  # HINT: You can refer to the torchvision library code:                      #\n","  #   github.com/pytorch/vision/blob/master/torchvision/csrc/cpu/nms_cpu.cpp  #\n","  #############################################################################\n","  # Replace \"pass\" statement with your code\n","  pass\n","  #############################################################################\n","  #                              END OF YOUR CODE                             #\n","  #############################################################################\n","  return keep"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"hq1biRRs6Rqf"},"source":["We will now compare your implementation of NMS with the implementation in torchvision. Most likely, your implementation will be faster on CPU than on CUDA, and the torchvision implementation will likely be much faster than yours. 
This is expected, but your implementation should produce the same outputs as the torchvision version."]},{"cell_type":"code","metadata":{"id":"wqXkUdvdHh-U"},"source":["fix_random_seed(0)\n","boxes = (100. * torch.rand(5000, 4)).round()\n","boxes[:,2] = boxes[:,2] + boxes[:,0] + 1.\n","boxes[:,3] = boxes[:,3] + boxes[:,1] + 1.\n","scores = torch.randn(5000)\n","\n","names = ['your_cpu', 'torchvision_cpu', 'torchvision_cuda']\n","iou_thresholds = [0.3, 0.5, 0.7]\n","elapsed = dict(zip(names, [0.]*len(names)))\n","intersects = dict(zip(names[1:], [0.]*(len(names)-1)))\n","\n","for iou_threshold in iou_thresholds:\n","  tic = time.time()\n","  my_keep = nms(boxes, scores, iou_threshold)\n","  elapsed['your_cpu'] += time.time() - tic\n"," \n","  tic = time.time()\n","  tv_keep = torchvision.ops.nms(boxes, scores, iou_threshold)\n","  elapsed['torchvision_cpu'] += time.time() - tic\n","  intersect = len(set(tv_keep.tolist()).intersection(my_keep.tolist()))\n","  intersects['torchvision_cpu'] += intersect / (len(my_keep) + len(tv_keep) - intersect)\n","\n","  tic = time.time()\n","  tv_cuda_keep = torchvision.ops.nms(boxes.cuda(), scores.cuda(), iou_threshold).to(my_keep.device)\n","  torch.cuda.synchronize()\n","  elapsed['torchvision_cuda'] += time.time() - tic\n","  intersect = len(set(tv_cuda_keep.tolist()).intersection(my_keep.tolist()))\n","  intersects['torchvision_cuda'] += intersect / (len(my_keep) + len(tv_cuda_keep) - intersect)\n","\n","for key in intersects:\n","  intersects[key] /= len(iou_thresholds)\n","\n","# You should see < 1% difference\n","print('Testing NMS:')\n","print('Your        CPU  implementation: %fs' % elapsed['your_cpu'])\n","print('torchvision CPU  implementation: %fs' % elapsed['torchvision_cpu'])\n","print('torchvision CUDA implementation: %fs' % elapsed['torchvision_cuda'])\n","print('Speedup CPU : %fx' % (elapsed['your_cpu'] / elapsed['torchvision_cpu']))\n","print('Speedup CUDA: %fx' % (elapsed['your_cpu'] / 
elapsed['torchvision_cuda'])\n","print('Difference CPU : ', 1. - intersects['torchvision_cpu']) # in the order of 1e-3 or less\n","print('Difference CUDA: ', 1. - intersects['torchvision_cuda']) # in the order of 1e-3 or less"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"9JSTPMsqzEnr"},"source":["## Inference"]},{"cell_type":"markdown","metadata":{"id":"d97pmEHLSDyK"},"source":["Now, implement the inference part of the `SingleStageDetector` module."]},{"cell_type":"code","metadata":{"id":"wqc-YGmOSG2Y"},"source":["  def detector_inference(self, images, thresh=0.5, nms_thresh=0.7):\n","    \"\"\"\n","    Inference-time forward pass for the single-stage detector.\n","\n","    Inputs:\n","    - images: Input images\n","    - thresh: Threshold value on confidence scores\n","    - nms_thresh: IoU threshold value for NMS\n","\n","    Outputs:\n","    - final_proposals: Kept proposals after confidence score thresholding and NMS,\n","                       a list of B (*x4) tensors\n","    - final_conf_scores: Corresponding confidence scores, a list of B (*x1) tensors\n","    - final_class: Corresponding class predictions, a list of B (*x1) tensors\n","    \"\"\"\n","    final_proposals, final_conf_scores, final_class = [], [], []\n","    ##############################################################################\n","    # TODO: Predict the final proposal coordinates `final_proposals`,            #\n","    # confidence scores `final_conf_scores`, and the class index `final_class`.  #\n","    # The overall steps are similar to the forward pass but now you do not need  #\n","    # to determine the activated or negative anchors.                            #\n","    # HINT: Threshold the conf_scores based on the threshold value `thresh`.     #\n","    # Then, apply NMS (torchvision.ops.nms) to the filtered proposals given the  #\n","    # threshold `nms_thresh`.                                                    
#\n","    # The class index is determined by the class with the maximal probability.   #\n","    # Note that `final_proposals`, `final_conf_scores` and `final_class` are all #\n","    # lists of B 2-D tensors (you may need to unsqueeze dim=1 for the last two). #\n","    ##############################################################################\n","    # Replace \"pass\" statement with your code\n","    pass\n","    ##############################################################################\n","    #                               END OF YOUR CODE                             #\n","    ##############################################################################\n","    return final_proposals, final_conf_scores, final_class\n","  SingleStageDetector.inference = detector_inference"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"M6abC15U1Wtu"},"source":["def DetectionInference(detector, data_loader, dataset, idx_to_class, thresh=0.8, nms_thresh=0.3, output_dir=None):\n","\n","  # ship model to GPU\n","  detector.to(**to_float_cuda)\n"," \n","  detector.eval()\n","  start_t = time.time()\n","\n","  if output_dir is not None:\n","    det_dir = 'mAP/input/detection-results'\n","    gt_dir = 'mAP/input/ground-truth'\n","    if os.path.exists(det_dir):\n","      shutil.rmtree(det_dir)\n","    os.mkdir(det_dir)\n","    if os.path.exists(gt_dir):\n","      shutil.rmtree(gt_dir)\n","    os.mkdir(gt_dir)\n","\n","  for iter_num, data_batch in enumerate(data_loader):\n","    images, boxes, w_batch, h_batch, img_ids = data_batch\n","    images = images.to(**to_float_cuda)\n","\n","    final_proposals, final_conf_scores, final_class = detector.inference(images, thresh=thresh, nms_thresh=nms_thresh)\n","\n","    # clamp on the proposal coordinates\n","    batch_size = len(images)\n","    for idx in range(batch_size):\n","      torch.clamp_(final_proposals[idx][:, 0::2], min=0, max=w_batch[idx])\n","      torch.clamp_(final_proposals[idx][:, 
1::2], min=0, max=h_batch[idx])\n","\n","      # visualization\n","      # get the original image\n","      # hack to get the original image so we don't have to load from local again...\n","      i = batch_size*iter_num + idx\n","      img, _ = dataset.__getitem__(i)\n","\n","      valid_box = sum([1 if j != -1 else 0 for j in boxes[idx][:, 0]])\n","      final_all = torch.cat((final_proposals[idx], \\\n","        final_class[idx].float(), final_conf_scores[idx]), dim=-1).cpu()\n","      resized_proposals = coord_trans(final_all, w_batch[idx], h_batch[idx])\n","\n","      # write results to file for evaluation (use mAP API https://github.com/Cartucho/mAP for now...)\n","      if output_dir is not None:\n","        file_name = img_ids[idx].replace('.jpg', '.txt')\n","        with open(os.path.join(det_dir, file_name), 'w') as f_det, \\\n","          open(os.path.join(gt_dir, file_name), 'w') as f_gt:\n","          print('{}: {} GT bboxes and {} proposals'.format(img_ids[idx], valid_box, resized_proposals.shape[0]))\n","          for b in boxes[idx][:valid_box]:\n","            f_gt.write('{} {:.2f} {:.2f} {:.2f} {:.2f}\\n'.format(idx_to_class[b[4].item()], b[0], b[1], b[2], b[3]))\n","          for b in resized_proposals:\n","            f_det.write('{} {:.6f} {:.2f} {:.2f} {:.2f} {:.2f}\\n'.format(idx_to_class[b[4].item()], b[5], b[0], b[1], b[2], b[3]))\n","      else:\n","        data_visualizer(img, idx_to_class, boxes[idx][:valid_box], resized_proposals)\n","\n","  end_t = time.time()\n","  print('Total inference time: {:.1f}s'.format(end_t-start_t))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"kG4PPW1XEDTm"},"source":["### Inference - overfit small data"]},{"cell_type":"code","metadata":{"id":"gp_Hmt-Km5bl"},"source":["# visualize the output from the overfitted model on small dataset\n","# the bounding boxes should be really accurate\n","DetectionInference(detector, small_train_loader, small_dataset, idx_to_class, 
thresh=0.8)"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"ifdEPmd9EMCP"},"source":["### Inference - train a net"]},{"cell_type":"code","metadata":{"id":"J7ArGiLTnHta"},"source":["# visualize the same output from the model trained on the entire training set\n","# some bounding boxes might not make sense\n","DetectionInference(yolo_detector, small_train_loader, small_dataset, idx_to_class)"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"ETU6ev7aydIY"},"source":["## Evaluation\n","Compute mean Average Precision (mAP). For an introduction to mAP, see the lecture slides (p46-57): https://web.eecs.umich.edu/~justincj/slides/eecs498/498_FA2019_lecture15.pdf\n"]},{"cell_type":"markdown","metadata":{"id":"1fGptrealquF"},"source":["Run the following to evaluate your detector on the PASCAL VOC validation set. You should see mAP at around 11% or above.\n","\n","The state of the art on this dataset is >80% mAP! To achieve these results we would need to use a much bigger network, and train with more data and for much longer, but that is beyond the scope of this assignment.\n","\n","(Optional) If you train the model longer (e.g., 100 epochs), you should see a better mAP. But make sure you revert the code back for grading purposes."]},{"cell_type":"code","metadata":{"id":"FvDb7uwqyhAK"},"source":["DetectionInference(yolo_detector, val_loader, val_dataset, idx_to_class, output_dir='mAP/input', thresh=0.8, nms_thresh=0.3)\n","# DetectionInference(yolo_detector, train_loader, train_dataset, idx_to_class, output_dir='mAP/input', thresh=0.8, nms_thresh=0.3) # uncomment to see training mAP\n","!cd mAP && python main.py"],"execution_count":null,"outputs":[]}]}