{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "61f626a7",
   "metadata": {},
   "source": [
    "# faster-rcnn 实现"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f82a5a5d",
   "metadata": {},
   "source": [
    "## 加载 Pascal VOC 数据集"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "62660506",
   "metadata": {},
   "source": [
    "首先下载 pascal voc 2012 数据集:\n",
    "```bash\n",
    "curl -fLO \"http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar\"\n",
    "```\n",
    "下载完成之后解压, 文件呈以下结构:\n",
    "\n",
    "```plaintext\n",
    ".\n",
    "└── VOCdevkit\n",
    "    └── VOC2012\n",
    "        ├── Annotations\n",
    "        ├── ImageSets\n",
    "        │   ├── Action\n",
    "        │   ├── Layout\n",
    "        │   ├── Main\n",
    "        │   └── Segmentation\n",
    "        ├── JPEGImages\n",
    "        ├── SegmentationClass\n",
    "        └── SegmentationObject\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6feaf170",
   "metadata": {},
   "source": [
    "假设 `VOCdevkit` 位于 `VOC2012` 路径下, 则按照以下方式加载数据集"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "fe4ad5c9",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/corona/.conda/envs/faster-rcnn/lib/python3.8/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
      "  from .autonotebook import tqdm as notebook_tqdm\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "import torchvision.datasets as datasets\n",
    "from torch.utils.data import DataLoader\n",
    "import torchvision.transforms as transforms\n",
    "from utils.vocdataset import PascalVOCDataset\n",
    "from torchvision.models.detection.faster_rcnn import FasterRCNN, fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights\n",
    "\n",
    "data_path: str = \"./data/VOC2012\"\n",
    "transform = transforms.Compose([\n",
    "    transforms.Resize((440, 500))\n",
    "])\n",
    "voc_train = PascalVOCDataset(\n",
    "        root=data_path,\n",
    "        year=\"2012\",\n",
    "        image_set=\"train\",\n",
    "        transform=transform\n",
    ")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "ae1054f4",
   "metadata": {},
   "source": [
    "## 使用 pytorch 内置的 faster rcnn"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "7a8d1abe",
   "metadata": {},
   "source": [
    "pytorch 中有已经实现好的 faster-rcnn:\n",
    "```python\n",
    "torchvision.models.detection.fasterrcnn_resnet50_fpn(...)\n",
    "```\n",
    "`fasterrcnn_resnet50_fpn` 的输入是一个 tensor 的列表，每个 tensor 的形状为 `[C, H, W]`.\n",
    "\n",
    "在训练时, 除了 input tensors, 还需要 targets, targets 是一个列表, 列表中的每个元素都是一个 dict.\n",
    "\n",
    "每个 dict 至少有两个键, 一个 `boxes`, 对应的值是 `FloatTensor`; 另一个是 `labels`, 对应的值是 `Int64Tensor`."
   ]
  },
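  {
   "cell_type": "markdown",
   "id": "b2f1c0aa",
   "metadata": {},
   "source": [
    "As a quick sanity check of this input format, the model can be called on a random image with a hand-built target dict. This is a minimal sketch: the image size, box, and label below are made up for illustration, and the model is constructed without downloading any weights:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "from torchvision.models.detection import fasterrcnn_resnet50_fpn\n",
    "\n",
    "# Build the model without pretrained weights (21 = 20 VOC classes + background)\n",
    "model = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=21)\n",
    "model.train()\n",
    "\n",
    "# One random 3-channel image and one target containing a single box\n",
    "image = torch.rand(3, 300, 400)\n",
    "target = {\n",
    "    'boxes': torch.tensor([[50.0, 60.0, 200.0, 220.0]]),  # FloatTensor [N, 4], (x1, y1, x2, y2)\n",
    "    'labels': torch.tensor([1], dtype=torch.int64),       # Int64Tensor [N]\n",
    "}\n",
    "\n",
    "# In training mode the model returns a dict of losses\n",
    "losses = model([image], [target])\n",
    "print(sorted(losses.keys()))\n",
    "```"
   ]
  },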
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "4c872af0",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Downloading: \"https://download.pytorch.org/models/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth\" to /home/corona/.var/app/com.visualstudio.code/cache/torch/hub/checkpoints/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth\n",
      "100%|██████████| 160M/160M [00:46<00:00, 3.58MB/s] \n"
     ]
    }
   ],
   "source": [
    "model = fasterrcnn_resnet50_fpn(pretrained=True, weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "8ffda707",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'loss_classifier': tensor(0.8296, grad_fn=<NllLossBackward0>), 'loss_box_reg': tensor(0.1784, grad_fn=<DivBackward0>), 'loss_objectness': tensor(0.0059, grad_fn=<BinaryCrossEntropyWithLogitsBackward0>), 'loss_rpn_box_reg': tensor(0.0125, dtype=torch.float64, grad_fn=<DivBackward0>)}\n"
     ]
    }
   ],
   "source": [
    "for index, data in enumerate(voc_train):\n",
    "    image, target = data[0], data[1]\n",
    "\n",
    "    images = [ image ]\n",
    "    targets = [ target ]\n",
    "\n",
    "    output = model(images, targets)\n",
    "    \n",
    "    print(output)\n",
    "    break"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "a6716476",
   "metadata": {},
   "source": [
    "由于 object detection 的数据集中, 每张图片上 object 的数量是不同的, 因此 `target['label']` 和 `target['boxes']` 的长度也就不同.\n",
    "这使得在使用 `dataloader` 时, 同一 batch 内的数据 tensor 的形状不同, 导致报错.\n",
    "\n",
    "根据 [这里的讨论](https://discuss.pytorch.org/t/dataloader-gives-stack-expects-each-tensor-to-be-equal-size-due-to-different-image-has-different-objects-number/91941/6), `Dataloader` 尝试对同一 batch 内的 tensor 调用 `torch.stack`, 而不同的大小会致使报错. 因此, 需要自定义 `collate_fn`, 以替代默认的 `torch.stack` 操作."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "a915bda2",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'loss_classifier': tensor(0.5586, grad_fn=<NllLossBackward0>),\n",
       " 'loss_box_reg': tensor(0.1174, grad_fn=<DivBackward0>),\n",
       " 'loss_objectness': tensor(0.0088, grad_fn=<BinaryCrossEntropyWithLogitsBackward0>),\n",
       " 'loss_rpn_box_reg': tensor(0.0079, dtype=torch.float64, grad_fn=<DivBackward0>)}"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def collate_fn(data):\n",
    "    images, targets = tuple(zip(*data))\n",
    "    return (images, targets)\n",
    "\n",
    "dataloader = DataLoader(dataset=voc_train, batch_size=4, shuffle=True, collate_fn=collate_fn)\n",
    "\n",
    "batch = next(iter(dataloader))\n",
    "\n",
    "images, targets = batch\n",
    "\n",
    "model(images, targets)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "4cf37e7d",
   "metadata": {},
   "source": [
    "## 训练模型"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "f063ebe1",
   "metadata": {},
   "source": [
    "通常, 我们需要自己编写训练步骤, 即加载 dataloader, 编写循环运行每个 epoch, 定义每个 epoch 的行为等. 为了节省时间, 可以使用 pytorch-lightning 等集成好的框架."
   ]
  },
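  {
   "cell_type": "markdown",
   "id": "d4e9a1b3",
   "metadata": {},
   "source": [
    "The hand-written variant can be sketched roughly as follows. This is illustrative only: `ToyDetectionDataset` is a made-up stand-in for `PascalVOCDataset` so the snippet runs without the VOC files, and the hyperparameters are arbitrary:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "from torch.utils.data import DataLoader, Dataset\n",
    "from torchvision.models.detection import fasterrcnn_resnet50_fpn\n",
    "\n",
    "class ToyDetectionDataset(Dataset):\n",
    "    # Stand-in for PascalVOCDataset: random images with one fixed box each\n",
    "    def __len__(self):\n",
    "        return 4\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        image = torch.rand(3, 240, 320)\n",
    "        target = {\n",
    "            'boxes': torch.tensor([[30.0, 40.0, 120.0, 160.0]]),\n",
    "            'labels': torch.tensor([1], dtype=torch.int64),\n",
    "        }\n",
    "        return image, target\n",
    "\n",
    "def collate_fn(batch):\n",
    "    return tuple(zip(*batch))\n",
    "\n",
    "model = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=21)\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)\n",
    "loader = DataLoader(ToyDetectionDataset(), batch_size=2, collate_fn=collate_fn)\n",
    "\n",
    "model.train()\n",
    "num_epochs = 1  # a real run would use many more\n",
    "for epoch in range(num_epochs):\n",
    "    for images, targets in loader:\n",
    "        loss_dict = model(list(images), list(targets))\n",
    "        loss = sum(loss_dict.values())  # total loss drives a single backward pass\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "    print(f'epoch {epoch}: total loss {loss.item():.4f}')\n",
    "```"
   ]
  },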
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c6bbec19",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "faster-rcnn",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.0"
  },
  "vscode": {
   "interpreter": {
    "hash": "1207557d46b56f8f0da9384bfb6e7f4bf35359087e0bfc07cb17c922bf95f668"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
