{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "continent-gilbert",
   "metadata": {},
   "source": [
    "# cvpods tutorials\n",
    "\n",
    "In this tutorial, we will show the most common operations you might need when using cvpods.\n",
    "Here we assume **you have successfully cloned and built cvpods on your local machine**."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "consolidated-convertible",
   "metadata": {},
   "source": [
    "## 1. Why do we need cvpods?\n",
    "\n",
    "* Support various tasks (classification, self-supervised, detection, segmentation (semantic, panoptic, instance), keypoints, etc.)\n",
    "* Support various data sets (ImageNet, COCO, Objects365, CityScapes, CityPerson, WiderFace, VOC, LVIS, etc.)\n",
    "* Support various data augmentations (cvpack2 has implemented most common augmentations and is compatible with all augmentations in torchvision)\n",
    "* Seamless switching / collaboration across tasks\n",
    "* Taking self-supervised learning as an example: pre-training, fine-tuning, and training on downstream tasks (such as detection) are completed by the same training pipeline, which eliminates parameter-format conversion and manual hand-offs between codebases; removing this complicated process significantly improves research efficiency\n",
    "* Conveniently share / open-source your own results with others and the community\n",
    "* Support your company's in-house tools such as nori / dpflow\n",
    "* The same experiment-organization conventions as cvpack and basedet\n",
    "* Continuously absorb advances from the open-source community to stay state-of-the-art\n",
    "* Fast development, fast training (TODO: fast release)\n",
    "\n",
    "### 1.1 Major Features\n",
    "* **Clean & simple & flexible development**: When using detectron2, if you want to implement a new module such as CustomRetinanet, you need to register it to meta_arch, specify it in xxx_config.yaml, and still explicitly invoke 'from net import CustomRetinanet' so that the registry can retrieve your module. That means you have to copy train_net.py from the common tools directory and insert the line above; cvpods removes this boilerplate.\n",
    "* **Flexible and easy-to-use configuration system**: When adding new config options in detectron2, you need to add them to config/defaults.py first and then modify the config.yaml, but in cvpods you only add them to config.py once. Also, when debugging a component you may need to set SOLVER.IMS_PER_BATCH to 2 and then change it back to 16 once everything runs correctly; editing the config file each time is inconvenient, so cvpods allows you to dynamically update the config from the command line, for example: `pods_train --num-gpus 1 SOLVER.IMS_PER_BATCH 2`.\n",
    "* **Task-specific incremental updating**: For example, if you need to modify RetinaNet-related configurations, you only need to edit retinanet_config.py and don't have to care about other common configs. We also avoid putting every method's configuration into one base config file (such as detectron2/config/defaults.py) as detectron2, maskrcnn_benchmark, and mmdetection do, so retinanet_config does not include ROI_HEADS or MASK_HEADS options, but only the components it actually needs.\n",
    "* **Efficient experiments management**: When you need to implement a new model, you can either copy a project from examples, inherit an existing network such as RetinaNet or FasterRCNN, and define your custom functions; or you can add a new base / commonly used model (such as FCOS) into `cvpods/modeling/meta_arch` and use it like a library.\n",
    "* **Versatile tasks & datasets support**:\n",
    "  * Detection, Segmentation (Semantic, Panoptic, Instance), Keypoint, Self-supervised Learning, 3D Detection & Segmentation, etc.\n",
    "  * COCO, Objects365, WiderFace, VOC, LVIS, CityPersons, ImageNet, CrowdHuman, CityScapes, ModelNet40, ScanNet, KITTI, nuScenes, etc.\n",
    "* **Global training / testing scripts**: you just need to invoke `pods_train/pods_test --num-gpus x` in your playground; your projects only need to include project-specific configs and network modules.\n",
    "* **Compatible with detectron2**: All models in detectron2 can be easily migrated into cvpods."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "knowing-jackson",
   "metadata": {},
   "source": [
    "## 2. Recommended usage\n",
    "Before we introduce the specific details, let's first get a rough feel for how easy cvpods is to use. Below we list all the models in cvpods; the playground is organized in `Task/Dataset/Method/Experiment` order."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "colored-skiing",
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[01;34m/home/poodarchu/Developments/repos/cvpods/playground\u001b[00m\r\n",
      "|-- \u001b[01;34mdetection\u001b[00m\r\n",
      "|   |-- \u001b[01;34mcitypersons\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34mfcos\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34mfcos.res50.fpn.citypersons.640size.1x\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34mrcnn\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34mfaster_rcnn.res50.fpn.citypersons.640size.1x\u001b[00m\r\n",
      "|   |   `-- \u001b[01;34mretinanet\u001b[00m\r\n",
      "|   |       `-- \u001b[01;34mretinanet.res50.fpn.citypersons.640size.1x\u001b[00m\r\n",
      "|   |-- \u001b[01;34mcoco\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34matss\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34matss.res50.fpn.coco.800size.1x\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34mcenternet\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34mcenternet.res18.coco.512size\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34mcenternet.res50.coco.512size\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34mdetr\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34mdetr.res50.c5.coco.multiscale.150e.bs16\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34mefficientdet\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34meffdet0.effnet0.bifpn.coco.512size.300e\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34meffdet0.effnet0.bifpn.coco.512size.300e.syncbn\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34meffdet1.effnet1.bifpn.coco.640size.300e\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34meffdet1.effnet1.bifpn.coco.640size.300e.syncbn\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34mfcos\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34mfcos.res50.fpn.coco.800size.1x\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34mfree_anchor\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34mfree_anchor.res50.fpn.coco.800size.1x\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34mrcnn\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34mfaster_rcnn.res50.c4.coco.multiscale.1x\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34mfaster_rcnn.res50.fpn.coco.multiscale.1x\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34mfaster_rcnn.res50.fpn.coco.multiscale.1x.syncbn\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34mreppoints\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34mreppoints.res50.fpn.coco.800size.1x.partial_minmax\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34mretinanet\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34mretinanet.res50.fpn.coco.multiscale.1x\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34msparse_rcnn\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34msparse_rcnn.res50.fpn.coco.multiscale.3x\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34mssd\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34mssd.vgg16.coco.300size\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34mssd.vgg16.coco.300size.expand_aug\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34mssd.vgg16.coco.512size\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34mssd.vgg16.coco.512size.expand_aug\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34mtridentnet\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34mtridentnet_base\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34mtridentnet.res50.C4.coco.800size.1x\u001b[00m\r\n",
      "|   |   `-- \u001b[01;34myolo\u001b[00m\r\n",
      "|   |       `-- \u001b[01;34myolov3.darknet53.coco.multiscale.syncbn\u001b[00m\r\n",
      "|   |-- \u001b[01;34mcrowdhuman\u001b[00m\r\n",
      "|   |   `-- \u001b[01;34mrcnn\u001b[00m\r\n",
      "|   |       `-- \u001b[01;34mfaster_rcnn.res50.fpn.crowdhuman.800size.1x\u001b[00m\r\n",
      "|   |-- \u001b[01;34mvoc\u001b[00m\r\n",
      "|   |   `-- \u001b[01;34mrcnn\u001b[00m\r\n",
      "|   |       `-- \u001b[01;34mfaster_rcnn.res50.fpn.voc.multiscale.1x\u001b[00m\r\n",
      "|   `-- \u001b[01;34mwiderface\u001b[00m\r\n",
      "|       |-- \u001b[01;34mfcos\u001b[00m\r\n",
      "|       |   `-- \u001b[01;34mfcos.res50.fpn.widerface.600size.0.5x_crop.plus.norm_sync\u001b[00m\r\n",
      "|       `-- \u001b[01;34mretinanet\u001b[00m\r\n",
      "|           `-- \u001b[01;34mretinanet.res50.fpn.widerface.600size.0.5x_crop\u001b[00m\r\n",
      "|-- \u001b[01;34mkeypoints\u001b[00m\r\n",
      "|   `-- \u001b[01;34mcoco_person\u001b[00m\r\n",
      "|       `-- \u001b[01;34mrcnn\u001b[00m\r\n",
      "|           `-- \u001b[01;34mkeypoint_rcnn.res50.FPN.coco_person.multiscale.1x\u001b[00m\r\n",
      "|-- \u001b[01;34mpanoptic_segmentation\u001b[00m\r\n",
      "|   `-- \u001b[01;34mcoco\u001b[00m\r\n",
      "|       |-- \u001b[01;34mpanoptic_fpn.res50.fpn.coco.800size.1x\u001b[00m\r\n",
      "|       `-- \u001b[01;34mpanoptic_fpn.res50.fpn.coco.multiscale.1x\u001b[00m\r\n",
      "|-- \u001b[01;34msegmentation\u001b[00m\r\n",
      "|   |-- \u001b[01;34mcityscapes\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34mpointrend\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34mpointrend.res50.fpn.cityscapes.multiscale.1x\u001b[00m\r\n",
      "|   |   `-- \u001b[01;34mrcnn\u001b[00m\r\n",
      "|   |       `-- \u001b[01;34mmask_rcnn.res50.fpn.cityscapes.multiscales.1x\u001b[00m\r\n",
      "|   |-- \u001b[01;34mcoco\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34mpointrend\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34mpointrend.res50.fpn.coco.multiscale.1x\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34mpointrend.res50.fpn.coco.multiscale.3x\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34mrcnn\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34mcascade_rcnn.res50.fpn.coco.800size.1x\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34mmask_rcnn.res50.c4.coco.multiscale.1x\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34mmask_rcnn.res50.c4.coco.multiscale.1x.syncbn.extra_norm\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34mmask_rcnn.res50.c4.coco.multiscale.1x.syncbn.extra_norm.wider_range\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34mmask_rcnn.res50.c4.coco.multiscale.2x\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34mmask_rcnn.res50.c4.coco.multiscale.2x.syncbn\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34mmask_rcnn.res50.c4.coco.multiscale.2x.syncbn.extra_norm\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34mmask_rcnn.res50.fpn.coco.multiscale.1x\u001b[00m\r\n",
      "|   |   |-- \u001b[01;34msolo\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34mdecoupled_solo.res50.fpn.coco.800size.1x\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34mdecoupled_solo.res50.fpn.coco.multiscale.3x\u001b[00m\r\n",
      "|   |   |   |-- \u001b[01;34msolo.res50.fpn.coco.800size.1x\u001b[00m\r\n",
      "|   |   |   `-- \u001b[01;34msolo.res50.fpn.coco.multiscale.3x\u001b[00m\r\n",
      "|   |   `-- \u001b[01;34mtensormask\u001b[00m\r\n",
      "|   |       `-- \u001b[01;34mtensormask.res50.fpn.coco.800size.1x\u001b[00m\r\n",
      "|   `-- \u001b[01;34mlvis\u001b[00m\r\n",
      "|       `-- \u001b[01;34mrcnn\u001b[00m\r\n",
      "|           |-- \u001b[01;34mmask_rcnn.res50.fpn.lvis.800size.1x\u001b[00m\r\n",
      "|           |-- \u001b[01;34mmask_rcnn.res50.fpn.lvis.800size.1x.data_resampling\u001b[00m\r\n",
      "|           `-- \u001b[01;34mmask_rcnn.res50.fpn.lvis.multiscale.1x.data_resampling\u001b[00m\r\n",
      "`-- \u001b[01;34msemantic_segmentation\u001b[00m\r\n",
      "    |-- \u001b[01;34mcityscapes\u001b[00m\r\n",
      "    |   |-- \u001b[01;34mdynamic_routing\u001b[00m\r\n",
      "    |   |   |-- \u001b[01;34mSeg.Layer16\u001b[00m\r\n",
      "    |   |   |-- \u001b[01;34mSeg.Layer16.SmallGate.Dynamic_A\u001b[00m\r\n",
      "    |   |   |-- \u001b[01;34mSeg.Layer16.SmallGate.Dynamic_B\u001b[00m\r\n",
      "    |   |   `-- \u001b[01;34mSeg.Layer16.SmallGate.Dynamic_C\u001b[00m\r\n",
      "    |   |-- \u001b[01;34mfcn\u001b[00m\r\n",
      "    |   |   |-- \u001b[01;34mfcn.res101.cityscapes.multiscale.1x.s16\u001b[00m\r\n",
      "    |   |   |-- \u001b[01;34mfcn.res101.cityscapes.multiscale.1x.s32\u001b[00m\r\n",
      "    |   |   `-- \u001b[01;34mfcn.res101.cityscapes.multiscale.1x.s8\u001b[00m\r\n",
      "    |   `-- \u001b[01;34mpointrend\u001b[00m\r\n",
      "    |       `-- \u001b[01;34mpointrend.res101.fpn.cityscapes.multiscale.1x\u001b[00m\r\n",
      "    `-- \u001b[01;34mcoco\u001b[00m\r\n",
      "        `-- \u001b[01;34msemanticfpn\u001b[00m\r\n",
      "            `-- \u001b[01;34msemanticfpn.res50.fpn.coco.multiscale.1x\u001b[00m\r\n",
      "\r\n",
      "111 directories\r\n"
     ]
    }
   ],
   "source": [
    "!tree -d /home/poodarchu/Developments/repos/cvpods/playground -I \"__pycache__\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "sensitive-program",
   "metadata": {},
   "source": [
    "Let's take `FCOS` as an example.\n",
    "\n",
    "```shell\n",
    "cd playground/detection/coco/fcos/fcos.res50.fpn.coco.800size.1x\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "electronic-coverage",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "README.md  config.py  net.py\n",
      "# fcos.res50.fpn.coco.800size.1x\n",
      "\n",
      "## Evaluation results for bbox:  \n",
      "\n",
      "```  \n",
      " Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.387\n",
      " Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.575\n",
      " Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.417\n",
      " Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.226\n",
      " Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.427\n",
      " Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.499\n",
      " Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.320\n",
      " Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.534\n",
      " Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.570\n",
      " Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.364\n",
      " Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.617\n",
      " Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.720\n",
      "```  \n",
      "|   AP   |  AP50  |  AP75  |  APs   |  APm   |  APl   |\n",
      "|:------:|:------:|:------:|:------:|:------:|:------:|\n",
      "| 38.719 | 57.490 | 53.539 | 46.949 | 41.692 | 35.188 |\n",
      "\n",
      "### Per-category bbox AP:  \n",
      "\n",
      "| category      | AP     | category     | AP     | category       | AP     |\n",
      "|:--------------|:-------|:-------------|:-------|:---------------|:-------|\n",
      "| person        | 53.282 | bicycle      | 29.202 | car            | 42.108 |\n",
      "| motorcycle    | 38.985 | airplane     | 63.561 | bus            | 63.563 |\n",
      "| train         | 57.491 | truck        | 31.925 | boat           | 21.811 |\n",
      "| traffic light | 25.712 | fire hydrant | 64.749 | stop sign      | 64.309 |\n",
      "| parking meter | 42.604 | bench        | 20.516 | bird           | 34.071 |\n",
      "| cat           | 64.635 | dog          | 61.370 | horse          | 52.967 |\n",
      "| sheep         | 51.680 | cow          | 56.638 | elephant       | 62.866 |\n",
      "| bear          | 71.860 | zebra        | 66.408 | giraffe        | 65.609 |\n",
      "| backpack      | 14.180 | umbrella     | 37.956 | handbag        | 13.694 |\n",
      "| tie           | 27.788 | suitcase     | 34.438 | frisbee        | 64.418 |\n",
      "| skis          | 18.604 | snowboard    | 29.170 | sports ball    | 45.519 |\n",
      "| kite          | 40.818 | baseball bat | 25.096 | baseball glove | 34.597 |\n",
      "| skateboard    | 47.097 | surfboard    | 30.247 | tennis racket  | 45.019 |\n",
      "| bottle        | 35.221 | wine glass   | 34.911 | cup            | 41.024 |\n",
      "| fork          | 27.317 | knife        | 14.199 | spoon          | 13.405 |\n",
      "| bowl          | 39.203 | banana       | 23.568 | apple          | 18.976 |\n",
      "| sandwich      | 31.181 | orange       | 31.731 | broccoli       | 22.755 |\n",
      "| carrot        | 19.831 | hot dog      | 27.377 | pizza          | 49.126 |\n",
      "| donut         | 43.932 | cake         | 35.290 | chair          | 26.128 |\n",
      "| couch         | 41.758 | potted plant | 26.208 | bed            | 39.194 |\n",
      "| dining table  | 25.756 | toilet       | 57.755 | tv             | 51.880 |\n",
      "| laptop        | 55.564 | mouse        | 59.272 | remote         | 26.730 |\n",
      "| keyboard      | 44.607 | cell phone   | 33.582 | microwave      | 58.325 |\n",
      "| oven          | 29.844 | toaster      | 26.954 | sink           | 33.510 |\n",
      "| refrigerator  | 51.442 | book         | 12.788 | clock          | 48.099 |\n",
      "| vase          | 34.989 | scissors     | 19.893 | teddy bear     | 43.942 |\n",
      "| hair drier    | 7.573  | toothbrush   | 16.101 |                |        |\n"
     ]
    }
   ],
   "source": [
    "!ls ~/Developments/repos/cvpods/playground/detection/coco/fcos/fcos.res50.fpn.coco.800size.1x\n",
    "!cat ~/Developments/repos/cvpods/playground/detection/coco/fcos/fcos.res50.fpn.coco.800size.1x/README.md"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "accessible-wednesday",
   "metadata": {},
   "source": [
    "This is a typical experiment in cvpods, which has three files (README.md is generated automatically). After you run the project, you'll find a softlink `log` pointing to your config.OUTPUT_DIR, which stores all the logs / tensorboard files / predictions of this experiment.\n",
    "\n",
    "We think these features are very useful for helping researchers keep track of their results and get rid of annoying details. As far as we know, we're the first codebase to provide them."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "center-penny",
   "metadata": {},
   "source": [
    "To run this project:\n",
    "```shell\n",
    "pods_train --num-gpus 8\n",
    "```\n",
    "To eval your result:\n",
    "```shell\n",
    "pods_test --num-gpus 8\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "hybrid-devon",
   "metadata": {},
   "source": [
    "We save you the time of typing long commands like\n",
    "```shell\n",
    "python main.py -a resnet50 --dist-url 'tcp://127.0.0.1:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0 [imagenet-folder with train and val folders]\n",
    "```\n",
    "every time you run an experiment."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "convenient-comfort",
   "metadata": {},
   "source": [
    "The benefits of this kind of organization are that:\n",
    "* Whenever you want to try an idea, you can copy an existing experiment and modify the code **without** affecting your other experiments, which also helps avoid introducing bugs.\n",
    "* You can easily write your own analyzers to analyze a group of experiments.\n",
    "* Even after a long time, you can still quickly find the code and experimental results from that time.\n",
    "\n",
    "Basically, all modules that are frequently modified support customization in their own experiments, e.g., `Optimizers / Schedulers / Datasets / Augmentations`. You can also specify your own train loop in a specific experiment."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "traditional-madness",
   "metadata": {},
   "source": [
    "## 3. Project organization\n",
    "First, let's have a look at the overall structure of cvpods."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "accompanied-middle",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[01;34m/home/poodarchu/Developments/repos/cvpods/cvpods\u001b[00m\r\n",
      "|-- \u001b[01;34manalyser\u001b[00m\r\n",
      "|   `-- \u001b[01;34mtide\u001b[00m\r\n",
      "|       `-- \u001b[01;34merrors\u001b[00m\r\n",
      "|-- \u001b[01;34mcheckpoint\u001b[00m\r\n",
      "|-- \u001b[01;34mconfigs\u001b[00m\r\n",
      "|-- \u001b[01;34mdata\u001b[00m\r\n",
      "|   |-- \u001b[01;34mdatasets\u001b[00m\r\n",
      "|   |-- \u001b[01;34msamplers\u001b[00m\r\n",
      "|   `-- \u001b[01;34mtransforms\u001b[00m\r\n",
      "|-- \u001b[01;34mengine\u001b[00m\r\n",
      "|-- \u001b[01;34mevaluation\u001b[00m\r\n",
      "|-- \u001b[01;34mexport\u001b[00m\r\n",
      "|-- \u001b[01;34mlayers\u001b[00m\r\n",
      "|   `-- \u001b[01;34mcsrc\u001b[00m\r\n",
      "|       |-- \u001b[01;34mborder_align\u001b[00m\r\n",
      "|       |-- \u001b[01;34mbox_iou_rotated\u001b[00m\r\n",
      "|       |-- \u001b[01;34mcocoeval\u001b[00m\r\n",
      "|       |-- \u001b[01;34mdeformable\u001b[00m\r\n",
      "|       |-- \u001b[01;34mml_nms\u001b[00m\r\n",
      "|       |-- \u001b[01;34mnms_rotated\u001b[00m\r\n",
      "|       |-- \u001b[01;34mPSROIPool\u001b[00m\r\n",
      "|       |-- \u001b[01;34mROIAlign\u001b[00m\r\n",
      "|       |-- \u001b[01;34mROIAlignRotated\u001b[00m\r\n",
      "|       |-- \u001b[01;34msigmoid_focal_loss\u001b[00m\r\n",
      "|       |-- \u001b[01;34mSwapAlign2Nat\u001b[00m\r\n",
      "|       `-- \u001b[01;34mtree_filter\u001b[00m\r\n",
      "|-- \u001b[01;34mmodeling\u001b[00m\r\n",
      "|   |-- \u001b[01;34mbackbone\u001b[00m\r\n",
      "|   |   `-- \u001b[01;34mdynamic_arch\u001b[00m\r\n",
      "|   |-- \u001b[01;34mbasenet\u001b[00m\r\n",
      "|   |-- \u001b[01;34mlosses\u001b[00m\r\n",
      "|   |-- \u001b[01;34mmeta_arch\u001b[00m\r\n",
      "|   |-- \u001b[01;34mnn_utils\u001b[00m\r\n",
      "|   |-- \u001b[01;34mproposal_generator\u001b[00m\r\n",
      "|   `-- \u001b[01;34mroi_heads\u001b[00m\r\n",
      "|-- \u001b[01;34msolver\u001b[00m\r\n",
      "|-- \u001b[01;34mstructures\u001b[00m\r\n",
      "`-- \u001b[01;34mutils\u001b[00m\r\n",
      "    |-- \u001b[01;34mbenchmark\u001b[00m\r\n",
      "    |-- \u001b[01;34mdistributed\u001b[00m\r\n",
      "    |-- \u001b[01;34mdump\u001b[00m\r\n",
      "    |-- \u001b[01;34menv\u001b[00m\r\n",
      "    |-- \u001b[01;34mfile\u001b[00m\r\n",
      "    |-- \u001b[01;34mmetrics\u001b[00m\r\n",
      "    `-- \u001b[01;34mvisualizer\u001b[00m\r\n",
      "\r\n",
      "45 directories\r\n"
     ]
    }
   ],
   "source": [
    "!tree -d /home/poodarchu/Developments/repos/cvpods/cvpods -I \"__pycache__\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "experienced-christmas",
   "metadata": {},
   "source": [
    "* The configuration system is defined in `cvpods/configs`.\n",
    "* All experiments in cvpods are managed by the common `DefaultTrainer` in cvpods/engine. So you can find all components related to the train loop here (e.g., Hooks, Trainer, DDP).\n",
    "* Optimizers and LR Schedulers are defined in `cvpods/solver`.\n",
    "* Datasets / Augmentations / Dataset Samplers are all defined in `cvpods/data`. You can easily implement your own datasets / augmentations following our guides.\n",
    "* Evaluation is an individual module in cvpods, which is defined in `cvpods/evaluation`.\n",
    "* All custom operators like `DeformConv` are defined in `cvpods/layers`."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "primary-ghost",
   "metadata": {},
   "source": [
    "## 4. Guides of each component of cvpods"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "collected-grammar",
   "metadata": {},
   "source": [
    "### 4.1 Use Config\n",
    "\n",
    "Let's have a look at the configs. \n",
    "> One important thing to notice is that all of our configs assume 8 GPUs by default. When you use a different number of GPUs, cvpods automatically scales related options such as the number of iterations; for SGD optimizers, it also scales the learning rate automatically."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "affiliated-liquid",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[01;34m/home/poodarchu/Developments/repos/cvpods/cvpods/configs\u001b[00m\r\n",
      "|-- base_classification_config.py\r\n",
      "|-- base_config.py\r\n",
      "|-- base_detection_config.py\r\n",
      "|-- config_helper.py\r\n",
      "|-- dynamic_routing_config.py\r\n",
      "|-- efficientdet_config.py\r\n",
      "|-- fcos_config.py\r\n",
      "|-- __init__.py\r\n",
      "|-- keypoint_config.py\r\n",
      "|-- panoptic_seg_config.py\r\n",
      "|-- pointrend_config.py\r\n",
      "|-- rcnn_config.py\r\n",
      "|-- rcnn_fpn_config.py\r\n",
      "|-- retinanet_config.py\r\n",
      "|-- segm_config.py\r\n",
      "|-- solo_config.py\r\n",
      "|-- ssd_config.py\r\n",
      "`-- yolo_config.py\r\n",
      "\r\n",
      "0 directories, 18 files\r\n"
     ]
    }
   ],
   "source": [
    "!tree /home/poodarchu/Developments/repos/cvpods/cvpods/configs"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "focal-advance",
   "metadata": {},
   "source": [
    "The configuration system of cvpods is designed around a logical inheritance relationship. The root of all configs is ```base_config.py```: BaseConfig holds task-independent attributes, such as configurations of the trainer, input, seed, etc. To support detection, we add the components shared by current detection models, such as backbones, FPN, NMS, etc. Both one-stage and two-stage detectors can then inherit BaseDetectionConfig and add their own attributes; for example, RCNN models usually have RCNN heads, while RetinaNet does not need those attributes. Any new task can be added in this way, with each task carrying only its unique attributes such as `Keypoint`, `Mask`, etc.\n",
    "\n",
    "The benefit of this paradigm is that the set of components needed to train a model is cleaner and more logically organized."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "fossil-alpha",
   "metadata": {},
   "outputs": [],
   "source": [
    "from cvpods.configs.base_config import config"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "blond-resort",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<class 'cvpods.configs.base_config.BaseConfig'>\n",
      "True\n",
      "['CUDNN_BENCHMARK', 'DATALOADER', 'DATASETS', 'GLOBAL', 'INPUT', 'MODE', 'MODEL', 'OUTPUT_DIR', 'SEED', 'SOLVER', 'TEST', 'TRAINER', 'VIS_PERIOD', '__class__', '__contains__', '__delattr__', '__delitem__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_register_configuration', 'clear', 'copy', 'diff', 'find', 'fromkeys', 'funcname_not_in_attr', 'get', 'items', 'keys', 'link_log', 'merge', 'merge_from_list', 'pop', 'popitem', 'setdefault', 'update', 'values']\n"
     ]
    }
   ],
   "source": [
    "print(type(config))\n",
    "print(isinstance(config, dict))\n",
    "print(dir(config))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "musical-diagram",
   "metadata": {},
   "source": [
    "You can see that our `config` is a subclass of the Python `dict`, which means you can easily manipulate it with familiar APIs."
   ]
  },
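  {
   "cell_type": "markdown",
   "id": "config-dict-sketch",
   "metadata": {},
   "source": [
    "For example, since `config` is a `dict` subclass, everyday dict operations just work. The sketch below assumes `merge_from_list` takes an alternating key/value list, detectron2-style (the method appears in the `dir(config)` output above):\n",
    "\n",
    "```python\n",
    "from cvpods.configs.base_config import config\n",
    "\n",
    "# plain dict APIs\n",
    "print(list(config.keys()))\n",
    "print(config.get(\"OUTPUT_DIR\"))\n",
    "\n",
    "# bulk override with dotted keys, mirroring what\n",
    "# `pods_train --num-gpus 1 SOLVER.IMS_PER_BATCH 2` does\n",
    "config.merge_from_list([\"SOLVER.IMS_PER_BATCH\", 2])\n",
    "```"
   ]
  },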
  {
   "cell_type": "markdown",
   "id": "bacterial-oxford",
   "metadata": {},
   "source": [
    "#### 4.1.1 Frequently used options of Config\n",
    "* Input-related\n",
    "  * `config.INPUT.FORMAT`: specifies the channel format (\"RGB\" / \"BGR\" / other) for 2D images. By default, we use \"RGB\" for classification and \"BGR\" for other tasks for convenience.\n",
    "  * `config.DATALOADER.NUM_WORKERS`: when you need to debug the data loading module, set it to 0.\n",
    "* Trainer-related\n",
    "  * `config.SOLVER.LR_SCHEDULER.MAX_ITER/MAX_EPOCH`: specify **one of the two** options to get iter-wise or epoch-wise training; the training log automatically adopts the corresponding format. Once you choose ITER / EPOCH, other options such as `WARMUP_ITERS / EVAL_PERIOD` should follow the same logic.\n",
    "  * `config.SOLVER.OPTIMIZER.NAME`: by default, we use \"D2SGD\" for detection and \"SGD\" for other tasks.\n",
    "  * `config.SOLVER.IMS_PER_DEVICE/IMS_PER_BATCH`: as stated above, `IMS_PER_BATCH` should be `8x` `IMS_PER_DEVICE` under the default 8-GPU setting.\n",
    "  * `config.SOLVER.BATCH_SUBDIVISIONS`: we support subdivision training. For example, if you set it to 16, cvpods uses 16 iterations to simulate one `16 x IMS_PER_BATCH` update; the LR is scaled automatically.\n",
    "  * `config.TRAINER.FP16`: set `ENABLED=True` to run your experiments in FP16 mode.\n",
    "  * `config.SOLVER.LR_SCHEDULER.EPOCH_WISE`: defaults to false; if set to `True`, the LR is updated per epoch.\n",
    "\n",
    "For more detailed options of different tasks, please refer to corresponding configs."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "imposed-browser",
   "metadata": {},
   "source": [
    "#### 4.1.2 Write your own `config.py`\n",
    "\n",
    "Your config should inherit from any of the configs in cvpods (all configs inherit from BaseConfig).\n",
    "If an option already exists in the parent config, you can either overwrite it or simply use the parent setting.\n",
    "Besides existing options, you can also add new options in your own config file **without defining one option in many files**: every setting in cvpods is added in one place and then incrementally updated.\n"
   ]
  },
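  {
   "cell_type": "markdown",
   "id": "custom-config-sketch",
   "metadata": {},
   "source": [
    "A minimal sketch of such a config follows; the option values are placeholders, and we assume `BaseDetectionConfig` is importable from the `base_detection_config.py` module listed above (`_register_configuration` appears in the `dir(config)` output shown earlier):\n",
    "\n",
    "```python\n",
    "from cvpods.configs.base_detection_config import BaseDetectionConfig\n",
    "\n",
    "# only new or overridden options; everything else is inherited\n",
    "_config_dict = dict(\n",
    "    MODEL=dict(\n",
    "        RESNETS=dict(DEPTH=50),\n",
    "    ),\n",
    "    SOLVER=dict(\n",
    "        IMS_PER_BATCH=16,\n",
    "    ),\n",
    ")\n",
    "\n",
    "\n",
    "class CustomConfig(BaseDetectionConfig):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self._register_configuration(_config_dict)\n",
    "\n",
    "\n",
    "config = CustomConfig()\n",
    "```"
   ]
  },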
  {
   "cell_type": "markdown",
   "id": "extended-sailing",
   "metadata": {},
   "source": [
    "### 4.2 Add new models\n",
    "We recommend reusing the modules in cvpods whenever they already exist. Model definition is nothing special in cvpods:\n",
    "you can migrate your previous model directly into cvpods.\n",
    "\n",
    "There are a few things to notice:\n",
    "* The `__init__` function of your class should take only the config as input, so put all your custom options in the config and parse them in `__init__`. The benefit of this design is that your model can access the state of the training loop if necessary, even though there's no training code in your experiment directory.\n",
    "* Set `self.device` at the top of `__init__` and call `self.to(self.device)` at the end of `__init__`.\n",
    "* By default, the `collate` function of the data_loader only puts all items together, so you need to take care of them (e.g., concatenating, moving to GPU) in the model's `forward`."
   ]
  },
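  {
   "cell_type": "markdown",
   "id": "custom-model-sketch",
   "metadata": {},
   "source": [
    "Under these rules, a custom model might look like the following minimal sketch (the config keys `MODEL.DEVICE`, `MODEL.IN_FEATURES`, and `MODEL.NUM_CLASSES` are illustrative placeholders, not real cvpods options):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "\n",
    "class CustomNet(nn.Module):\n",
    "    def __init__(self, cfg):\n",
    "        # __init__ takes only the config; parse all custom options here\n",
    "        super().__init__()\n",
    "        self.device = torch.device(cfg.MODEL.DEVICE)\n",
    "        self.linear = nn.Linear(cfg.MODEL.IN_FEATURES, cfg.MODEL.NUM_CLASSES)\n",
    "        self.to(self.device)\n",
    "\n",
    "    def forward(self, batched_inputs):\n",
    "        # collate only gathers items, so batch and move them here\n",
    "        x = torch.stack([d[\"image\"] for d in batched_inputs]).to(self.device)\n",
    "        return self.linear(x)\n",
    "```"
   ]
  },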
  {
   "cell_type": "markdown",
   "id": "national-measure",
   "metadata": {},
   "source": [
    "### 4.3 Add new datasets\n",
    "\n",
    "#### 4.3.1 Add basic dataset\n",
    "If you need to add a new data set, please add it as follows:\n",
    "\n",
    "1. In the cvpods/data/datasets/paths_route.py file, add the path of the current data set, evaluator_type and other information. Here we take imagenet as an example:\n",
    "  ```python\n",
    "  _PREDEFINED_SPLITS_IMAGENET = {\n",
    "      \"dataset_type\": \"ImageNetDataset\",\n",
    "      \"evaluator_type\": {\n",
    "          \"imagenet\": \"classification\"\n",
    "      },\n",
    "      \"imagenet\": {\n",
    "          \"imagenet_train\": (\"imagenet\", \"train\"),\n",
    "          \"imagenet_val\": (\"imagenet\", \"val\"),\n",
    "      }\n",
    "  }\n",
    "  PATH_ROUTES.register(_PREDEFINED_SPLITS_IMAGENET, \"IMAGENET\")\n",
    "  ```\n",
    "  Among them, `dataset_type` is the class name of XxxDataset in cvpods/data/datasets/xxx.py you want to create; `imagenet: {datasplit_name: (image_root, label_path)}` specifies each training set you need and its corresponding data , The path of the label; for each newly added dataset, you need to specify the type of evaluator used in the evaluator_type. All currently supported evaluators can be viewed here. It should be noted that the prefixes of datasplit_name and PREDEFINEDSPLITS_NAME must be the same to be correctly indexed.\n",
    "\n",
    "2. \n",
    "Inherit `BaseDataset` and add xxx.py under the cvpods/data/datasets directory (or your own experiment dir). Where xxx is the name of the corresponding data set, such as `coco.py`. Several methods need to be implemented: `getitem`,       `len`, `load_annotations`, and `get_metadata`.\n",
    "`_load_annotations` specifies how to read the data, and returns a List, each element in the list represents a sample, and its format specification is defined in `cvpods/data/base_dataset.py`.\n",
    "`_get_meta` specifies how to obtain the meta information of the current dataset (such as category-id mapping, the color used to visualize each category, etc.), and returns an object in dict format.\n",
    "`__getitem__` read image: read data / handle read exception\n",
    "`apply transfrom`: enhance the current index sample in the order of self.transforms\n",
    "formatter: unify the output format, such as converting all to Instance type\n",
    "3. Add Dataset to `datasets/__init__.py` in the same level of datasets folder."
   ]
  },
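  {
   "cell_type": "markdown",
   "id": "added-dataset-skeleton",
   "metadata": {},
   "source": [
    "Putting the pieces together, a skeleton dataset could look like this (a pure-Python sketch: the real `BaseDataset` in cvpods provides transforms, read-exception handling, and the exact annotation format):\n",
    "\n",
    "```python\n",
    "# Skeleton of a custom dataset following the steps above. This is a\n",
    "# self-contained sketch; in cvpods you would inherit BaseDataset, and\n",
    "# the annotation format is specified in base_dataset.py.\n",
    "class ToyDataset:\n",
    "    def __init__(self):\n",
    "        self.dataset_dicts = self._load_annotations()\n",
    "        self.meta = self._get_meta()\n",
    "\n",
    "    def _load_annotations(self):\n",
    "        # Normally read from image_root / label_path; hard-coded here.\n",
    "        return [\n",
    "            {'file_name': 'img_0.jpg', 'category_id': 0},\n",
    "            {'file_name': 'img_1.jpg', 'category_id': 1},\n",
    "        ]\n",
    "\n",
    "    def _get_meta(self):\n",
    "        # Meta info such as the category-id mapping.\n",
    "        return {'thing_classes': ['cat', 'dog']}\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.dataset_dicts)\n",
    "\n",
    "    def __getitem__(self, index):\n",
    "        # Read the sample, apply self.transforms, format the output.\n",
    "        return self.dataset_dicts[index]\n",
    "\n",
    "ds = ToyDataset()\n",
    "```"
   ]
  },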
  {
   "cell_type": "markdown",
   "id": "living-authentication",
   "metadata": {},
   "source": [
    "#### 4.3.2 Add composed dataset\n",
    "The above shows how to add a new data-based dataset. This section shows how to combine an existing dataset to form a more complex/variable dataset. The currently supported extended dataset can be found in `cvpods/data/wrapped_dataset.py`.\n",
    "\n",
    "Here we use the `ConcatDatset` as example, it concats multiple dataset pipelines and compose the final dataset.\n",
    "\n",
    "```python\n",
    "class ConcatDataset(_ConcatDataset):\n",
    "    \"\"\"A wrapper of concatenated dataset.\n",
    "    Same as :obj:`torch.utils.data.dataset.ConcatDataset`, but\n",
    "    concat the group flag for image aspect ratio.\n",
    "    Args:\n",
    "        datasets (list[:obj:`Dataset`]): A list of datasets.\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, datasets):\n",
    "        super(ConcatDataset, self).__init__(datasets)\n",
    "        if hasattr(self.datasets[0], 'aspect_ratios'):\n",
    "            aspect_ratios = [d.aspect_ratios for d in self.datasets]\n",
    "            self.aspect_ratios = np.concatenate(aspect_ratios)\n",
    "        if hasattr(self.datasets[0], 'meta'):\n",
    "            self.meta = {}\n",
    "            for d in self.datasets:\n",
    "                self.meta.update(d.meta)\n",
    "            self.meta = SimpleNamespace(**self.meta)\n",
    "```\n",
    "\n",
    "Mixup can also be added in this way."
   ]
  },
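  {
   "cell_type": "markdown",
   "id": "added-concat-usage-sketch",
   "metadata": {},
   "source": [
    "The wrapper can be exercised with two toy datasets (the `Toy` class below is an invented stand-in for real cvpods datasets, and the wrapper is repeated so the snippet is self-contained):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from types import SimpleNamespace\n",
    "from torch.utils.data import ConcatDataset as _ConcatDataset\n",
    "\n",
    "# Same wrapper as above, repeated here so the example runs standalone.\n",
    "class ConcatDataset(_ConcatDataset):\n",
    "    def __init__(self, datasets):\n",
    "        super().__init__(datasets)\n",
    "        if hasattr(self.datasets[0], 'aspect_ratios'):\n",
    "            self.aspect_ratios = np.concatenate(\n",
    "                [d.aspect_ratios for d in self.datasets])\n",
    "        if hasattr(self.datasets[0], 'meta'):\n",
    "            merged = {}\n",
    "            for d in self.datasets:\n",
    "                merged.update(d.meta)\n",
    "            self.meta = SimpleNamespace(**merged)\n",
    "\n",
    "# Invented toy dataset with aspect ratios and meta information.\n",
    "class Toy:\n",
    "    def __init__(self, items, meta):\n",
    "        self.items = items\n",
    "        self.aspect_ratios = np.ones(len(items))\n",
    "        self.meta = meta\n",
    "    def __len__(self):\n",
    "        return len(self.items)\n",
    "    def __getitem__(self, i):\n",
    "        return self.items[i]\n",
    "\n",
    "ds = ConcatDataset([Toy([1, 2], {'a': 1}), Toy([3], {'b': 2})])\n",
    "# len(ds) == 3; samples index across both datasets; meta dicts are merged\n",
    "```"
   ]
  },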
  {
   "cell_type": "markdown",
   "id": "beneficial-requirement",
   "metadata": {},
   "source": [
    "#### 4.3.3 Use the existing Dataset\n",
    "How to use the supported data set for training? \n",
    "Taking ImageNet classification as an example, you only need to add the corresponding datasplit_name in the config file. The code is here:\n",
    "```python\n",
    "    DATASETS=dict(\n",
    "        TRAIN=(\"imagenet_train\", ),\n",
    "        TEST=(\"imagenet_val\", ),\n",
    "    ),\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "swiss-bhutan",
   "metadata": {},
   "source": [
    "### 4.4 Add new augmentations\n",
    "We implemented many augmentations used in various tasks, besides, **we support all torchvision augmentations** as well.\n",
    "\n",
    "#### 4.4.1 Use existing augmentations\n",
    "Here we use an complex augmentation pipeline to show the power of cvpods:\n",
    "```python\n",
    "INPUT=dict(\n",
    "        AUG=dict(\n",
    "            TRAIN_PIPELINES=dict(\n",
    "                input_stream1=[(\"RepeatList\", dict(transforms=[\n",
    "                    (\"Torch_Compose\", transforms.Compose([\n",
    "                        transforms.RandomResizedCrop(64, scale=(0.2, 1.)),\n",
    "                        transforms.RandomApply([\n",
    "                                transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),\n",
    "                    ])),\n",
    "                    (\"GaussianBlur\", dict(sigma=[.1, 2.], p=0.5)),\n",
    "                    (\"Torch_Compose\", transforms.Compose([\n",
    "                        transforms.RandomGrayscale(p=0.2),\n",
    "                        transforms.RandomHorizontalFlip(),\n",
    "                    ])),\n",
    "                ], repeat_times=3)),],\n",
    "                input_stream2=[(\"RepeatList\", dict(transforms=[\n",
    "                    (\"Torch_Compose\", transforms.Compose([\n",
    "                        transforms.RandomResizedCrop(64, scale=(0.2, 1.)),\n",
    "                        transforms.RandomHorizontalFlip(),\n",
    "                    ])),\n",
    "                    (\"RandomList\", dict(\n",
    "                        transforms=[\n",
    "                            (\"AutoAugment\", dict(\n",
    "                                name=name,\n",
    "                                prob=0.5,\n",
    "                                magnitude=10,\n",
    "                                hparams=dict(\n",
    "                                    translate_const=int(224 * 0.45),\n",
    "                                    img_mean=tuple([min(255, round(255 * x)) for x in [0.406, 0.456, 0.485]]),\n",
    "                                    magnitude_std=0.5))\n",
    "                            ) for name in _RAND_TRANSFORMS],\n",
    "                        num_layers=2,\n",
    "                        choice_weights=[_RAND_CHOICE_WEIGHTS_0[k] / sum([_RAND_CHOICE_WEIGHTS_0[k] for k in _RAND_TRANSFORMS]) for k in _RAND_TRANSFORMS],\n",
    "                    )),\n",
    "                ], repeat_times=3)),],\n",
    "            )\n",
    "        )\n",
    "    ),\n",
    "```\n",
    "Don't be afraid; here is a simpler pipeline from a classification task in cvpods:\n",
    "```python\n",
    "    INPUT=dict(\n",
    "        AUG=dict(\n",
    "            TRAIN_PIPELINES=[\n",
    "                (\"Torch_Compose\", transforms.Compose([\n",
    "                    transforms.RandomResizedCrop(224),\n",
    "                    transforms.RandomHorizontalFlip(),\n",
    "                    transforms.ToTensor(),\n",
    "                    transforms.Normalize(\n",
    "                        mean=[0.485, 0.456, 0.406],\n",
    "                        std=[0.229, 0.224, 0.225])\n",
    "                    ])\n",
    "                ),\n",
    "            ],\n",
    "            TEST_PIPELINES=[\n",
    "                (\"Torch_Compose\", transforms.Compose([\n",
    "                    transforms.Resize(256),\n",
    "                    transforms.CenterCrop(224),\n",
    "                    transforms.ToTensor(),\n",
    "                    transforms.Normalize(\n",
    "                        mean=[0.485, 0.456, 0.406],\n",
    "                        std=[0.229, 0.224, 0.225])\n",
    "                    ])\n",
    "                ),\n",
    "            ],\n",
    "        )\n",
    "    ),\n",
    "```\n",
    "\n",
    "As you can see, the `config.INPUT.AUG.TRAIN/TEST_PIPELIES` can be list or dict. If you need multiple input streams (which is very common in self-supervised learing tasks), you can use the first example above to specify multiple inputs. While most tasks do not require multiple pipelines, so you can refer to the second example above to specify your specific augmentations."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "running-clear",
   "metadata": {},
   "source": [
    "All current support augmentations can be found in `cvpods/data/transforms/transform_gen.py`.\n",
    "As you can see, for each augmentaion in the pipeline, it follows the format \n",
    "```python\n",
    "(name_str, args_dict)\n",
    "```\n",
    "For example, the `GaussianBlur` is defined:\n",
    "```python\n",
    "(\"GaussianBlur\", dict(sigma=[.1, 2.], p=0.5))\n",
    "```\n",
    "\n",
    "If you need to use torchvision augmentations, make sure `name_str` starts with `Torch_`, and just put the augmentation in `args_dict` like above."
   ]
  },
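  {
   "cell_type": "markdown",
   "id": "added-pipeline-builder-sketch",
   "metadata": {},
   "source": [
    "To make the `(name_str, args_dict)` convention concrete, here is a toy sketch of how such pairs could be resolved into callable transforms through a registry (the registry and `build_pipeline` helper below are illustrative inventions, not cvpods' actual code):\n",
    "\n",
    "```python\n",
    "# Toy registry resolving (name_str, args_dict) pairs into callables.\n",
    "# cvpods' real TRANSFORMS registry and builder differ in details.\n",
    "TRANSFORMS = {}\n",
    "\n",
    "def register(cls):\n",
    "    TRANSFORMS[cls.__name__] = cls\n",
    "    return cls\n",
    "\n",
    "@register\n",
    "class Scale:\n",
    "    def __init__(self, factor=1.0):\n",
    "        self.factor = factor\n",
    "    def __call__(self, x):\n",
    "        return x * self.factor\n",
    "\n",
    "def build_pipeline(pairs):\n",
    "    # Each entry follows the (name_str, args_dict) format shown above.\n",
    "    return [TRANSFORMS[name](**args) for name, args in pairs]\n",
    "\n",
    "pipeline = build_pipeline([('Scale', dict(factor=2.0)),\n",
    "                           ('Scale', dict(factor=0.5))])\n",
    "value = 3.0\n",
    "for t in pipeline:\n",
    "    value = t(value)\n",
    "# value is back to 3.0 after scaling by 2.0 then 0.5\n",
    "```"
   ]
  },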
  {
   "cell_type": "markdown",
   "id": "equal-burning",
   "metadata": {},
   "source": [
    "#### 4.4.2 Add new augmentations\n",
    "\n",
    "It's very easy to add your own augmentations.\n",
    "Here we take 3D point cloud filter points as example:\n",
    "```python\n",
    "@TRANSFORMS.register()\n",
    "class FilterNumPoints(object):\n",
    "    def __init__(self, num_points=-1):\n",
    "        self.num_points = num_points\n",
    "\n",
    "    def __call__(self, points, info):\n",
    "\n",
    "        point_counts = box_ops.points_count_rbbox(\n",
    "            points, info[\"annotations\"][\"point_boxes\"])\n",
    "        # TODO: check why point_counts does not equal to info.annotations.num_points_in_gt\n",
    "        mask = point_counts >= self.num_points\n",
    "        _dict_select(info[\"annotations\"], mask)\n",
    "\n",
    "        return points, info\n",
    "```\n",
    "You just need to make sure your augmentation is callable, and handle your data and annotations as you wish."
   ]
  },
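  {
   "cell_type": "markdown",
   "id": "added-filter-points-toy",
   "metadata": {},
   "source": [
    "Here is a self-contained toy version of the same idea, filtering annotations by a per-box point count (the `box_ops.points_count_rbbox` call is replaced by a precomputed `num_points` field, an assumption made so the snippet runs standalone):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "class FilterNumPoints:\n",
    "    # Callable augmentation: drop annotations whose boxes contain fewer\n",
    "    # than num_points points. The counts are read from the annotations\n",
    "    # here; the real transform computes them with box_ops.\n",
    "    def __init__(self, num_points=1):\n",
    "        self.num_points = num_points\n",
    "\n",
    "    def __call__(self, points, info):\n",
    "        anns = info['annotations']\n",
    "        counts = np.asarray(anns['num_points'])\n",
    "        mask = counts >= self.num_points\n",
    "        # Keep only the annotation entries selected by the mask.\n",
    "        info['annotations'] = {k: np.asarray(v)[mask] for k, v in anns.items()}\n",
    "        return points, info\n",
    "\n",
    "aug = FilterNumPoints(num_points=3)\n",
    "points = np.zeros((10, 3))\n",
    "info = {'annotations': {'num_points': [5, 1, 4], 'labels': [0, 1, 2]}}\n",
    "points, info = aug(points, info)\n",
    "# two annotations survive (counts 5 and 4)\n",
    "```"
   ]
  },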
  {
   "cell_type": "markdown",
   "id": "standard-innocent",
   "metadata": {},
   "source": [
    "### 4.5 Add new evaluation metrics\n",
    "The order of adding a new Evaluator is as follows:\n",
    "\n",
    "* Inherit DatasetEvaluator and implement the `process`, `evaluate`.\n",
    "  * `process` prediction in the process to obtain intermediate results used to calculate various metrics, and store them in self.predictions\n",
    "  * `evaluate` gets the results of the current batch from self.predictions, calculates various indicators, and stores them in self.results\n",
    "* In tools/train(test)_net.py, specify the evaluator to be used according to the corresponding evaluator_type added in paths_route. A dataset can use multiple evalutors."
   ]
  },
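  {
   "cell_type": "markdown",
   "id": "added-evaluator-sketch",
   "metadata": {},
   "source": [
    "A minimal evaluator following this `process`/`evaluate` split might look like the pure-Python sketch below (a toy accuracy metric; in real code you would inherit cvpods' `DatasetEvaluator`, whose exact interface may differ):\n",
    "\n",
    "```python\n",
    "# Toy classification evaluator with the process/evaluate split described\n",
    "# above. Inherit cvpods' DatasetEvaluator in real code.\n",
    "class ToyClassificationEvaluator:\n",
    "    def __init__(self):\n",
    "        self.predictions = []\n",
    "        self.results = {}\n",
    "\n",
    "    def process(self, inputs, outputs):\n",
    "        # Store only what the metric needs: (predicted, ground-truth).\n",
    "        for inp, out in zip(inputs, outputs):\n",
    "            self.predictions.append((out['pred'], inp['label']))\n",
    "\n",
    "    def evaluate(self):\n",
    "        correct = sum(p == g for p, g in self.predictions)\n",
    "        self.results['accuracy'] = correct / len(self.predictions)\n",
    "        return self.results\n",
    "\n",
    "ev = ToyClassificationEvaluator()\n",
    "ev.process([{'label': 0}, {'label': 1}], [{'pred': 0}, {'pred': 0}])\n",
    "result = ev.evaluate()\n",
    "# one of two predictions is correct, so accuracy is 0.5\n",
    "```"
   ]
  },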
  {
   "cell_type": "markdown",
   "id": "narrative-scholarship",
   "metadata": {},
   "source": [
    "## 5. Conclusion\n",
    "\n",
    "Congratulations, now you should be very comfortable with how to use cvpods. If you encounter any other problems in use, or want to contribute code to cvpods, please refer to our README."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
