{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "8cb270cc",
   "metadata": {},
   "source": [
    "## ResNet\n",
    "\n",
    "We start with a fundamental question: Is learning better networks as easy as stacking more layers?\n",
    "\n",
     "Stacking more layers adds expressive power, and hence one may expect deeper networks to perform at least as well. However, in practice, deeper neural networks are more difficult to train.\n",
    "\n",
     "In practice, even when deeper networks are able to start converging, a degradation problem emerges as depth increases: the accuracy saturates and then degrades rapidly.\n",
     "There are two common reasons why this was thought to happen:\n",
     "1. Vanishing/exploding gradients: the deeper the network, the higher the chance that gradients vanish or explode as they propagate through the layers.\n",
     "2. Overfitting: with more parameters, the network is more prone to overfitting.\n",
    "\n",
     "However, the authors of the ResNet paper show that the degradation problem is not due to overfitting: adding more layers leads to a higher training error. If the network were overfitting, we would expect the training error to be low but the validation/test error to be high. Additionally, they inspect the gradients during training to confirm that they are healthy, i.e. not vanishing or exploding.\n",
    "\n",
    "\n",
    "How do we solve the degradation problem?\n",
    "\n",
     "Let us consider a shallower architecture with $n$ layers and a deeper counterpart that adds more layers onto it ($n+m$ layers). The deeper architecture should be able to achieve no higher loss than the shallow one: a trivial solution is to copy the $n$ layers of the shallow architecture and learn the identity function for the additional $m$ layers. The fact that this doesn't happen in practice indicates that neural network layers have a hard time learning the identity function. The paper therefore proposes \"shortcut/skip connections\", which let the layers learn the identity function easily. This “identity shortcut connection” is the core idea of ResNet.\n",
    "\n",
     "ResNet is a key architecture in deep learning. It has inspired several variants and remains one of the most popular architectures in use. Details can be found in the original paper: https://arxiv.org/pdf/1512.03385.pdf.\n",
    "\n",
     "The subsequent sections of this notebook work through the key ideas behind ResNet."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3a6c54e7",
   "metadata": {},
   "source": [
    "## Residual Learning\n",
    "\n",
    "Let $g(x)$ be the function learned by a stack of layers (not necessarily the entire network).\n",
     "Let us consider $h(x) = g(x) + x$, i.e. the output of the stack of layers with a skip connection. The $+x$ term denotes the skip connection: the input $x$ is added directly to the output of the stack of layers. This is called a skip connection because the input bypasses the intermediate layers and is fed to the deeper layer through a direct path.\n",
    "\n",
     "The output $h(x)$ already contains information about the input $x$, so the layers only need to learn $g(x) = h(x) - x$, i.e. the change relative to the input: the residual. Hence the name residual networks.\n",
    "\n",
     "Now let us revisit the degradation problem. We posited that ordinary neural network layers have a hard time learning the identity function. With residual learning, if the identity function is optimal, i.e. $h(x)=x$, the layers only need to learn $g(x)=0$, which can easily be done by driving all the weights of the layers to 0.\n",
    "\n",
     "Another way to think about it: if we initialize a regular neural network’s weights and biases to 0 at the start, then every stack of layers starts as the “zero” function, i.e. $g(x)=0$. The output of every stack with a shortcut connection, $h(x)=g(x)+x$, is then already the identity function, $h(x)=x$.\n",
    "\n",
     "In practice, it is unlikely that identity mappings are optimal; the network layers will want to learn actual features. The reformulation does not prevent them from doing so: the layers can still learn any function a regular stack of layers could. We can think of the reformulation as a preconditioning that makes learning the identity function easy if it is needed.\n",
    "\n",
    "\n",
     "Additionally, skip connections give the gradient a direct path to flow from layer to layer, i.e. the deeper layer has a direct path to $x$. This allows for better learning, as information from the lower layers passes directly into the higher layers."
   ]
  },
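  {
   "cell_type": "markdown",
   "id": "d3adb101",
   "metadata": {},
   "source": [
    "As a quick sanity check of this argument (a minimal sketch, with arbitrary channel and feature map sizes), we can zero out the weights of a conv layer standing in for $g$ and verify that the skip connection turns the block into the identity function:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d3adb102",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "# Toy residual unit: g is a conv layer whose weights we drive to 0,\n",
    "# so h(x) = g(x) + x collapses to the identity function\n",
    "g = nn.Conv2d(8, 8, kernel_size=3, padding=1, bias=False)\n",
    "nn.init.zeros_(g.weight)\n",
    "\n",
    "x = torch.rand(1, 8, 14, 14)\n",
    "h = g(x) + x  # skip connection\n",
    "\n",
    "assert torch.allclose(h, x)  # h is exactly the identity when g(x) = 0"
   ]
  },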
  {
   "cell_type": "markdown",
   "id": "34c82299",
   "metadata": {},
   "source": [
    "Let us now consider how the shortcut connection works. We will do so by implementing the basic skip connection block."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "77210664",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torchvision\n",
    "\n",
    "from torch import nn\n",
    "from torchsummary import summary"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "e877ed83",
   "metadata": {},
   "outputs": [],
   "source": [
    "class BasicBlock(nn.Module):\n",
    "    def __init__(self, in_channels, num_features, stride=1, downsample=None):\n",
    "        super(BasicBlock, self).__init__()\n",
    "        self.conv1 = nn.Sequential(\n",
    "                        nn.Conv2d(in_channels, num_features, kernel_size=3, stride=stride, padding=1, bias=False),\n",
    "                        nn.BatchNorm2d(num_features, eps=0.001),\n",
    "                        nn.ReLU(inplace=True))\n",
    "        self.conv2 = nn.Sequential(\n",
    "                        nn.Conv2d(num_features, num_features, kernel_size=3, stride=1, padding=1, bias=False),\n",
    "                        nn.BatchNorm2d(num_features, eps=0.001))\n",
    "        self.downsample = downsample\n",
    "        self.relu = nn.ReLU(inplace=True)\n",
    "        \n",
    "        \n",
    "    def forward(self, x):\n",
    "        conv_out = self.conv2(self.conv1(x))\n",
    "        identity = x\n",
    "        if self.downsample is not None:\n",
    "            identity = self.downsample(x)\n",
    "        assert identity.shape == conv_out.shape, f\"Identity {identity.shape} and conv out {conv_out.shape} have different shapes\"\n",
    "        \n",
    "        # Skip connection\n",
    "        out = self.relu(conv_out+identity)\n",
    "        return out"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "9cfdb3ea",
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.rand([1, 64, 28, 28])\n",
    "residual_block = BasicBlock(in_channels=64, num_features=64, stride=1)\n",
    "\n",
    "out = residual_block(x)\n",
    "assert out.shape == torch.Size([1, 64, 28, 28])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c9f9cf0a",
   "metadata": {},
   "source": [
     "Notice how the output of the residual block is a function of both the input and the conv output, i.e. $ReLU(conv\\_out + x)$. This assumes that $x$ and $conv\\_out$ have the same shape. We will see shortly what needs to be done when this isn't the case.\n",
    "\n",
     "Also, note that adding skip connections does not increase the number of parameters: the shortcut connections are parameter-free. This makes them computationally cheap, which is one of their charms."
   ]
  },
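  {
   "cell_type": "markdown",
   "id": "d3adb103",
   "metadata": {},
   "source": [
    "To make the parameter-free claim concrete (an illustrative count, using the same layer sizes as the block above), we can count the parameters of a residual-style conv stack; the `+ identity` addition contributes none:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d3adb104",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "# The same conv/BN layers as a BasicBlock with 64 features\n",
    "stack = nn.Sequential(\n",
    "    nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False),\n",
    "    nn.BatchNorm2d(64),\n",
    "    nn.ReLU(inplace=True),\n",
    "    nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False),\n",
    "    nn.BatchNorm2d(64),\n",
    ")\n",
    "n_params = sum(p.numel() for p in stack.parameters())\n",
    "# Each conv: 64*64*3*3 = 36864 weights; each BN: 64 weights + 64 biases = 128\n",
    "assert n_params == 2 * 36864 + 2 * 128\n",
    "# Wrapping this stack in a shortcut (out = stack(x) + x) adds zero\n",
    "# parameters, since the skip connection is a plain tensor addition"
   ]
  },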
  {
   "cell_type": "markdown",
   "id": "7deff792",
   "metadata": {},
   "source": [
    "## ResNet Architecture\n",
    "\n",
    "Now that we have studied the basic building block i.e a stack of conv layers with a skip connection, let us delve deeper into the architecture of ResNet. \n",
    "\n",
     "ResNet architectures are constructed by stacking multiple building blocks on top of each other. They follow the same design principles as VGG:\n",
     "1. The convolutional layers mostly use 3×3 filters.\n",
     "2. Layers producing the same output feature map size use the same number of filters.\n",
     "3. If the feature map size is halved, the number of filters is doubled, so as to preserve the time complexity per layer.\n",
    "\n",
     "Unlike VGG (which uses multiple max pooling layers), ResNet mostly downsamples with conv layers of stride=2.\n",
    "\n",
    "The core architecture consists of the following components:\n",
    "\n",
    "1. 5 Convolutional Layer blocks:\n",
    "\n",
     "   The first convolutional block consists of a 7x7 conv with stride=2, padding=3, num_features=64, followed by a max pooling layer with a 3x3 kernel, stride=2, padding=1. The feature map size is reduced from (224, 224) to (56, 56).\n",
    "   \n",
    "   The remaining convolutional blocks (ResidualConvBlock) are built by stacking multiple basic shortcut blocks together. Each basic block uses 3x3 filters as described above.\n",
    "   \n",
    "  \n",
     "2. Classifier: An average pooling layer that runs on top of the conv block output, followed by a fully connected layer used for classification.\n",
    "\n"
   ]
  },
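  {
   "cell_type": "markdown",
   "id": "d3adb105",
   "metadata": {},
   "source": [
    "Before implementing the network, we can verify the stem's shape arithmetic with a small standalone sketch (batch norm and ReLU are omitted here since they do not change shapes):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d3adb106",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "# First conv block: 7x7 conv with stride=2, then 3x3 max pooling with stride=2\n",
    "stem = nn.Sequential(\n",
    "    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),\n",
    "    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),\n",
    ")\n",
    "x = torch.rand(1, 3, 224, 224)\n",
    "# (224, 224) -> conv stride 2 -> (112, 112) -> pool stride 2 -> (56, 56)\n",
    "assert stem(x).shape == torch.Size([1, 64, 56, 56])"
   ]
  },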
  {
   "cell_type": "markdown",
   "id": "6fa7e5ae",
   "metadata": {},
   "source": [
     "Let us now implement ResNet-34 from scratch. In practice, this is seldom done: `torchvision.models` already provides ready-made implementations of all the ResNet architectures. However, by building the network from scratch, we will gain a deeper understanding of the architecture.\n",
    "\n",
    "\n",
    "We have already looked at the BasicBlock which implements the shortcut connection. Now we will implement a residual conv block which consists of a number of basic blocks stacked on top of each other.\n",
    "\n",
     "We have to handle two cases when it comes to basic blocks.\n",
    " \n",
    "#### Case 1: Output feature map size = Input Feature map size &  Number of output features = Number of input features\n",
    "\n",
    "This is the most common case. Since there is no change in either the num_features or the feature_map size, we can easily add the input and output via shortcut connections.\n",
    "\n",
    "#### Case 2: Output feature map size =  1/2 *  Input Feature map size &  Number of output features = 2 * Number of input features\n",
    "\n",
     "Remember that ResNet uses conv layers with stride=2 to downsample. Additionally, the number of features is doubled.\n",
    "\n",
     "This is done by the first basic block of every conv block (except the 2nd conv block, which does not downsample). The output feature map size is reduced by using a 3x3 convolution with stride=2.\n",
    "\n",
     "In this case, the input and output do not have the same shape, so how do we add them in the skip connection? 1x1 convs are the answer.\n",
     "The input is projected to the required shape using a 1x1 conv with stride=2 and num_features = 2 * number of input features, which halves the feature map size and doubles the number of features."
   ]
  },
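  {
   "cell_type": "markdown",
   "id": "d3adb107",
   "metadata": {},
   "source": [
    "A minimal sketch of the Case 2 projection (with illustrative sizes): a 1x1 conv with stride=2 halves the feature map size and doubles the number of features, making the shortcut shape-compatible with the main conv branch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d3adb108",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "# Projection shortcut: 1x1 conv, stride 2, doubling the channel count\n",
    "proj = nn.Conv2d(64, 128, kernel_size=1, stride=2, bias=False)\n",
    "x = torch.rand(1, 64, 56, 56)\n",
    "# (1, 64, 56, 56) -> (1, 128, 28, 28)\n",
    "assert proj(x).shape == torch.Size([1, 128, 28, 28])"
   ]
  },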
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "be313ef0",
   "metadata": {},
   "outputs": [],
   "source": [
    "class ResidualConvBlock(nn.Module):\n",
    "    def __init__(self, in_channels, num_blocks, reduce_fm_size=True):\n",
    "        super(ResidualConvBlock, self).__init__()\n",
    "        \n",
    "        num_features = in_channels * 2 if reduce_fm_size else in_channels\n",
    "        modules = []\n",
    "    \n",
    "        for i in range(num_blocks):\n",
    "            if i == 0 and reduce_fm_size:\n",
    "                # Case 2\n",
    "                stride = 2 \n",
    "                downsample = nn.Sequential(\n",
    "                    nn.Conv2d(in_channels, num_features, kernel_size=1, stride=stride, bias=False),\n",
    "                    nn.BatchNorm2d(num_features, eps=0.001),\n",
    "                )\n",
    "                basic_block = BasicBlock(in_channels=in_channels, num_features=num_features, stride=stride,\n",
    "                                        downsample=downsample)\n",
    "            else:\n",
    "                # Case 1\n",
    "                basic_block = BasicBlock(in_channels=num_features, num_features=num_features, stride=1)\n",
    "            modules.append(basic_block)\n",
    "        \n",
    "        self.conv_block = nn.Sequential(*modules)\n",
    "    \n",
    "    \n",
    "    def forward(self, x):\n",
     "        return self.conv_block(x)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "ec670895",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Case 1\n",
    "x = torch.rand([1, 64, 56, 56])\n",
    "conv_block = ResidualConvBlock(64, 3, reduce_fm_size=False)\n",
    "\n",
    "y = conv_block(x)\n",
    "assert y.shape == torch.Size([1, 64, 56, 56])\n",
    "\n",
    "\n",
    "# Case 2\n",
    "x = torch.rand([1, 64, 56, 56])\n",
    "conv_block = ResidualConvBlock(64, 3, reduce_fm_size=True)\n",
    "\n",
    "y = conv_block(x)\n",
    "assert y.shape == torch.Size([1, 64*2, 56//2, 56//2])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c12e376c",
   "metadata": {},
   "source": [
    "Now we are ready to implement ResNet-34. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "3062057a",
   "metadata": {},
   "outputs": [],
   "source": [
    "class ResNet(nn.Module):\n",
    "    def __init__(self, num_basic_blocks, num_classes):\n",
    "        super(ResNet, self).__init__()\n",
    "        conv1 = nn.Sequential(\n",
    "                              nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),\n",
    "                              nn.BatchNorm2d(64, eps=0.001),\n",
    "                              nn.ReLU(inplace=True),\n",
    "                              nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n",
    "        )\n",
    "                \n",
    "        assert len(num_basic_blocks) == 4\n",
    "        # num_basic_blocks is a list of size 4, which specifies the number of basic blocks per ResidualConvBlock\n",
    "        conv2 = ResidualConvBlock(in_channels=64, num_blocks=num_basic_blocks[0], reduce_fm_size=False)\n",
    "        conv3 = ResidualConvBlock(in_channels=64, num_blocks=num_basic_blocks[1], reduce_fm_size=True)\n",
    "        conv4 = ResidualConvBlock(in_channels=128, num_blocks=num_basic_blocks[2], reduce_fm_size=True)\n",
    "        conv5 = ResidualConvBlock(in_channels=256, num_blocks=num_basic_blocks[3], reduce_fm_size=True)\n",
    "        \n",
    "        self.conv_backbone = nn.Sequential(*[conv1, conv2, conv3, conv4, conv5])\n",
    "        \n",
    "        self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))\n",
    "        self.classifier = nn.Linear(512, num_classes)\n",
    "        \n",
    "    \n",
    "    def forward(self, x):\n",
    "        conv_out = self.conv_backbone(x)\n",
    "        conv_out = self.avg_pool(conv_out)\n",
    "        # We need to flatten the conv features before passing it to the classifier\n",
    "        logits = self.classifier(conv_out.view(conv_out.shape[0], -1)) \n",
    "        return logits"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "90e9cc74",
   "metadata": {},
   "outputs": [],
   "source": [
    "num_classes = 1000\n",
    "resnet34 = ResNet([3, 4, 6, 3], num_classes)\n",
    "\n",
    "x = torch.rand([1, 3, 224, 224])\n",
    "logits = resnet34(x)\n",
    "assert logits.shape == torch.Size([1, num_classes])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "6fecd680",
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "ResNet(\n",
       "  (conv_backbone): Sequential(\n",
       "    (0): Sequential(\n",
       "      (0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)\n",
       "      (1): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "      (2): ReLU(inplace=True)\n",
       "      (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)\n",
       "    )\n",
       "    (1): ResidualConvBlock(\n",
       "      (conv_block): Sequential(\n",
       "        (0): BasicBlock(\n",
       "          (conv1): Sequential(\n",
       "            (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "            (2): ReLU(inplace=True)\n",
       "          )\n",
       "          (conv2): Sequential(\n",
       "            (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (relu): ReLU(inplace=True)\n",
       "        )\n",
       "        (1): BasicBlock(\n",
       "          (conv1): Sequential(\n",
       "            (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "            (2): ReLU(inplace=True)\n",
       "          )\n",
       "          (conv2): Sequential(\n",
       "            (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (relu): ReLU(inplace=True)\n",
       "        )\n",
       "        (2): BasicBlock(\n",
       "          (conv1): Sequential(\n",
       "            (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "            (2): ReLU(inplace=True)\n",
       "          )\n",
       "          (conv2): Sequential(\n",
       "            (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (relu): ReLU(inplace=True)\n",
       "        )\n",
       "      )\n",
       "    )\n",
       "    (2): ResidualConvBlock(\n",
       "      (conv_block): Sequential(\n",
       "        (0): BasicBlock(\n",
       "          (conv1): Sequential(\n",
       "            (0): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "            (2): ReLU(inplace=True)\n",
       "          )\n",
       "          (conv2): Sequential(\n",
       "            (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (downsample): Sequential(\n",
       "            (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)\n",
       "            (1): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (relu): ReLU(inplace=True)\n",
       "        )\n",
       "        (1): BasicBlock(\n",
       "          (conv1): Sequential(\n",
       "            (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "            (2): ReLU(inplace=True)\n",
       "          )\n",
       "          (conv2): Sequential(\n",
       "            (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (relu): ReLU(inplace=True)\n",
       "        )\n",
       "        (2): BasicBlock(\n",
       "          (conv1): Sequential(\n",
       "            (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "            (2): ReLU(inplace=True)\n",
       "          )\n",
       "          (conv2): Sequential(\n",
       "            (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (relu): ReLU(inplace=True)\n",
       "        )\n",
       "        (3): BasicBlock(\n",
       "          (conv1): Sequential(\n",
       "            (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "            (2): ReLU(inplace=True)\n",
       "          )\n",
       "          (conv2): Sequential(\n",
       "            (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (relu): ReLU(inplace=True)\n",
       "        )\n",
       "      )\n",
       "    )\n",
       "    (3): ResidualConvBlock(\n",
       "      (conv_block): Sequential(\n",
       "        (0): BasicBlock(\n",
       "          (conv1): Sequential(\n",
       "            (0): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "            (2): ReLU(inplace=True)\n",
       "          )\n",
       "          (conv2): Sequential(\n",
       "            (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (downsample): Sequential(\n",
       "            (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)\n",
       "            (1): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (relu): ReLU(inplace=True)\n",
       "        )\n",
       "        (1): BasicBlock(\n",
       "          (conv1): Sequential(\n",
       "            (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "            (2): ReLU(inplace=True)\n",
       "          )\n",
       "          (conv2): Sequential(\n",
       "            (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (relu): ReLU(inplace=True)\n",
       "        )\n",
       "        (2): BasicBlock(\n",
       "          (conv1): Sequential(\n",
       "            (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "            (2): ReLU(inplace=True)\n",
       "          )\n",
       "          (conv2): Sequential(\n",
       "            (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (relu): ReLU(inplace=True)\n",
       "        )\n",
       "        (3): BasicBlock(\n",
       "          (conv1): Sequential(\n",
       "            (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "            (2): ReLU(inplace=True)\n",
       "          )\n",
       "          (conv2): Sequential(\n",
       "            (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (relu): ReLU(inplace=True)\n",
       "        )\n",
       "        (4): BasicBlock(\n",
       "          (conv1): Sequential(\n",
       "            (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "            (2): ReLU(inplace=True)\n",
       "          )\n",
       "          (conv2): Sequential(\n",
       "            (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (relu): ReLU(inplace=True)\n",
       "        )\n",
       "        (5): BasicBlock(\n",
       "          (conv1): Sequential(\n",
       "            (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "            (2): ReLU(inplace=True)\n",
       "          )\n",
       "          (conv2): Sequential(\n",
       "            (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (relu): ReLU(inplace=True)\n",
       "        )\n",
       "      )\n",
       "    )\n",
       "    (4): ResidualConvBlock(\n",
       "      (conv_block): Sequential(\n",
       "        (0): BasicBlock(\n",
       "          (conv1): Sequential(\n",
       "            (0): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(512, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "            (2): ReLU(inplace=True)\n",
       "          )\n",
       "          (conv2): Sequential(\n",
       "            (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(512, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (downsample): Sequential(\n",
       "            (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)\n",
       "            (1): BatchNorm2d(512, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (relu): ReLU(inplace=True)\n",
       "        )\n",
       "        (1): BasicBlock(\n",
       "          (conv1): Sequential(\n",
       "            (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(512, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "            (2): ReLU(inplace=True)\n",
       "          )\n",
       "          (conv2): Sequential(\n",
       "            (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(512, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (relu): ReLU(inplace=True)\n",
       "        )\n",
       "        (2): BasicBlock(\n",
       "          (conv1): Sequential(\n",
       "            (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(512, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "            (2): ReLU(inplace=True)\n",
       "          )\n",
       "          (conv2): Sequential(\n",
       "            (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n",
       "            (1): BatchNorm2d(512, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n",
       "          )\n",
       "          (relu): ReLU(inplace=True)\n",
       "        )\n",
       "      )\n",
       "    )\n",
       "  )\n",
       "  (avg_pool): AdaptiveAvgPool2d(output_size=(1, 1))\n",
       "  (classifier): Linear(in_features=512, out_features=1000, bias=True)\n",
       ")"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "resnet34"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "e51ee2e3",
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "----------------------------------------------------------------\n",
      "        Layer (type)               Output Shape         Param #\n",
      "================================================================\n",
      "            Conv2d-1         [-1, 64, 112, 112]           9,408\n",
      "       BatchNorm2d-2         [-1, 64, 112, 112]             128\n",
      "              ReLU-3         [-1, 64, 112, 112]               0\n",
      "         MaxPool2d-4           [-1, 64, 56, 56]               0\n",
      "            Conv2d-5           [-1, 64, 56, 56]          36,864\n",
      "       BatchNorm2d-6           [-1, 64, 56, 56]             128\n",
      "              ReLU-7           [-1, 64, 56, 56]               0\n",
      "            Conv2d-8           [-1, 64, 56, 56]          36,864\n",
      "       BatchNorm2d-9           [-1, 64, 56, 56]             128\n",
      "             ReLU-10           [-1, 64, 56, 56]               0\n",
      "       BasicBlock-11           [-1, 64, 56, 56]               0\n",
      "           Conv2d-12           [-1, 64, 56, 56]          36,864\n",
      "      BatchNorm2d-13           [-1, 64, 56, 56]             128\n",
      "             ReLU-14           [-1, 64, 56, 56]               0\n",
      "           Conv2d-15           [-1, 64, 56, 56]          36,864\n",
      "      BatchNorm2d-16           [-1, 64, 56, 56]             128\n",
      "             ReLU-17           [-1, 64, 56, 56]               0\n",
      "       BasicBlock-18           [-1, 64, 56, 56]               0\n",
      "           Conv2d-19           [-1, 64, 56, 56]          36,864\n",
      "      BatchNorm2d-20           [-1, 64, 56, 56]             128\n",
      "             ReLU-21           [-1, 64, 56, 56]               0\n",
      "           Conv2d-22           [-1, 64, 56, 56]          36,864\n",
      "      BatchNorm2d-23           [-1, 64, 56, 56]             128\n",
      "             ReLU-24           [-1, 64, 56, 56]               0\n",
      "       BasicBlock-25           [-1, 64, 56, 56]               0\n",
      "ResidualConvBlock-26           [-1, 64, 56, 56]               0\n",
      "           Conv2d-27          [-1, 128, 28, 28]          73,728\n",
      "      BatchNorm2d-28          [-1, 128, 28, 28]             256\n",
      "             ReLU-29          [-1, 128, 28, 28]               0\n",
      "           Conv2d-30          [-1, 128, 28, 28]         147,456\n",
      "      BatchNorm2d-31          [-1, 128, 28, 28]             256\n",
      "           Conv2d-32          [-1, 128, 28, 28]           8,192\n",
      "      BatchNorm2d-33          [-1, 128, 28, 28]             256\n",
      "             ReLU-34          [-1, 128, 28, 28]               0\n",
      "       BasicBlock-35          [-1, 128, 28, 28]               0\n",
      "           Conv2d-36          [-1, 128, 28, 28]         147,456\n",
      "      BatchNorm2d-37          [-1, 128, 28, 28]             256\n",
      "             ReLU-38          [-1, 128, 28, 28]               0\n",
      "           Conv2d-39          [-1, 128, 28, 28]         147,456\n",
      "      BatchNorm2d-40          [-1, 128, 28, 28]             256\n",
      "             ReLU-41          [-1, 128, 28, 28]               0\n",
      "       BasicBlock-42          [-1, 128, 28, 28]               0\n",
      "           Conv2d-43          [-1, 128, 28, 28]         147,456\n",
      "      BatchNorm2d-44          [-1, 128, 28, 28]             256\n",
      "             ReLU-45          [-1, 128, 28, 28]               0\n",
      "           Conv2d-46          [-1, 128, 28, 28]         147,456\n",
      "      BatchNorm2d-47          [-1, 128, 28, 28]             256\n",
      "             ReLU-48          [-1, 128, 28, 28]               0\n",
      "       BasicBlock-49          [-1, 128, 28, 28]               0\n",
      "           Conv2d-50          [-1, 128, 28, 28]         147,456\n",
      "      BatchNorm2d-51          [-1, 128, 28, 28]             256\n",
      "             ReLU-52          [-1, 128, 28, 28]               0\n",
      "           Conv2d-53          [-1, 128, 28, 28]         147,456\n",
      "      BatchNorm2d-54          [-1, 128, 28, 28]             256\n",
      "             ReLU-55          [-1, 128, 28, 28]               0\n",
      "       BasicBlock-56          [-1, 128, 28, 28]               0\n",
      "ResidualConvBlock-57          [-1, 128, 28, 28]               0\n",
      "           Conv2d-58          [-1, 256, 14, 14]         294,912\n",
      "      BatchNorm2d-59          [-1, 256, 14, 14]             512\n",
      "             ReLU-60          [-1, 256, 14, 14]               0\n",
      "           Conv2d-61          [-1, 256, 14, 14]         589,824\n",
      "      BatchNorm2d-62          [-1, 256, 14, 14]             512\n",
      "           Conv2d-63          [-1, 256, 14, 14]          32,768\n",
      "      BatchNorm2d-64          [-1, 256, 14, 14]             512\n",
      "             ReLU-65          [-1, 256, 14, 14]               0\n",
      "       BasicBlock-66          [-1, 256, 14, 14]               0\n",
      "           Conv2d-67          [-1, 256, 14, 14]         589,824\n",
      "      BatchNorm2d-68          [-1, 256, 14, 14]             512\n",
      "             ReLU-69          [-1, 256, 14, 14]               0\n",
      "           Conv2d-70          [-1, 256, 14, 14]         589,824\n",
      "      BatchNorm2d-71          [-1, 256, 14, 14]             512\n",
      "             ReLU-72          [-1, 256, 14, 14]               0\n",
      "       BasicBlock-73          [-1, 256, 14, 14]               0\n",
      "           Conv2d-74          [-1, 256, 14, 14]         589,824\n",
      "      BatchNorm2d-75          [-1, 256, 14, 14]             512\n",
      "             ReLU-76          [-1, 256, 14, 14]               0\n",
      "           Conv2d-77          [-1, 256, 14, 14]         589,824\n",
      "      BatchNorm2d-78          [-1, 256, 14, 14]             512\n",
      "             ReLU-79          [-1, 256, 14, 14]               0\n",
      "       BasicBlock-80          [-1, 256, 14, 14]               0\n",
      "           Conv2d-81          [-1, 256, 14, 14]         589,824\n",
      "      BatchNorm2d-82          [-1, 256, 14, 14]             512\n",
      "             ReLU-83          [-1, 256, 14, 14]               0\n",
      "           Conv2d-84          [-1, 256, 14, 14]         589,824\n",
      "      BatchNorm2d-85          [-1, 256, 14, 14]             512\n",
      "             ReLU-86          [-1, 256, 14, 14]               0\n",
      "       BasicBlock-87          [-1, 256, 14, 14]               0\n",
      "           Conv2d-88          [-1, 256, 14, 14]         589,824\n",
      "      BatchNorm2d-89          [-1, 256, 14, 14]             512\n",
      "             ReLU-90          [-1, 256, 14, 14]               0\n",
      "           Conv2d-91          [-1, 256, 14, 14]         589,824\n",
      "      BatchNorm2d-92          [-1, 256, 14, 14]             512\n",
      "             ReLU-93          [-1, 256, 14, 14]               0\n",
      "       BasicBlock-94          [-1, 256, 14, 14]               0\n",
      "           Conv2d-95          [-1, 256, 14, 14]         589,824\n",
      "      BatchNorm2d-96          [-1, 256, 14, 14]             512\n",
      "             ReLU-97          [-1, 256, 14, 14]               0\n",
      "           Conv2d-98          [-1, 256, 14, 14]         589,824\n",
      "      BatchNorm2d-99          [-1, 256, 14, 14]             512\n",
      "            ReLU-100          [-1, 256, 14, 14]               0\n",
      "      BasicBlock-101          [-1, 256, 14, 14]               0\n",
      "ResidualConvBlock-102          [-1, 256, 14, 14]               0\n",
      "          Conv2d-103            [-1, 512, 7, 7]       1,179,648\n",
      "     BatchNorm2d-104            [-1, 512, 7, 7]           1,024\n",
      "            ReLU-105            [-1, 512, 7, 7]               0\n",
      "          Conv2d-106            [-1, 512, 7, 7]       2,359,296\n",
      "     BatchNorm2d-107            [-1, 512, 7, 7]           1,024\n",
      "          Conv2d-108            [-1, 512, 7, 7]         131,072\n",
      "     BatchNorm2d-109            [-1, 512, 7, 7]           1,024\n",
      "            ReLU-110            [-1, 512, 7, 7]               0\n",
      "      BasicBlock-111            [-1, 512, 7, 7]               0\n",
      "          Conv2d-112            [-1, 512, 7, 7]       2,359,296\n",
      "     BatchNorm2d-113            [-1, 512, 7, 7]           1,024\n",
      "            ReLU-114            [-1, 512, 7, 7]               0\n",
      "          Conv2d-115            [-1, 512, 7, 7]       2,359,296\n",
      "     BatchNorm2d-116            [-1, 512, 7, 7]           1,024\n",
      "            ReLU-117            [-1, 512, 7, 7]               0\n",
      "      BasicBlock-118            [-1, 512, 7, 7]               0\n",
      "          Conv2d-119            [-1, 512, 7, 7]       2,359,296\n",
      "     BatchNorm2d-120            [-1, 512, 7, 7]           1,024\n",
      "            ReLU-121            [-1, 512, 7, 7]               0\n",
      "          Conv2d-122            [-1, 512, 7, 7]       2,359,296\n",
      "     BatchNorm2d-123            [-1, 512, 7, 7]           1,024\n",
      "            ReLU-124            [-1, 512, 7, 7]               0\n",
      "      BasicBlock-125            [-1, 512, 7, 7]               0\n",
      "ResidualConvBlock-126            [-1, 512, 7, 7]               0\n",
      "AdaptiveAvgPool2d-127            [-1, 512, 1, 1]               0\n",
      "          Linear-128                 [-1, 1000]         513,000\n",
      "================================================================\n",
      "Total params: 21,797,672\n",
      "Trainable params: 21,797,672\n",
      "Non-trainable params: 0\n",
      "----------------------------------------------------------------\n",
      "Input size (MB): 0.57\n",
      "Forward/backward pass size (MB): 99.16\n",
      "Params size (MB): 83.15\n",
      "Estimated Total Size (MB): 182.89\n",
      "----------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "# We can now look at the summary to inspect the layers, their output shapes, and parameter counts\n",
    "summary(resnet34, input_size=(3, 224, 224))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "a253a12f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# As an elementary check, let us compare the number of parameters in our implementation\n",
    "# against the official torchvision implementation and assert that they are equal\n",
    "\n",
    "num_resnet_params = sum(p.numel() for p in resnet34.parameters() if p.requires_grad)\n",
    "\n",
    "torch_resnet34 = torchvision.models.resnet34()\n",
    "num_torch_resnet_params = sum(p.numel() for p in torch_resnet34.parameters() if p.requires_grad)\n",
    "\n",
    "assert num_resnet_params == num_torch_resnet_params"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1fe6ae78",
   "metadata": {},
   "source": [
    "And there we have our own simple implementation of ResNet-34. Note that this is a bare-bones implementation meant to convey the broad architecture.\n",
    "\n",
    "Deeper ResNets (ResNet-50, ResNet-101, ResNet-152) use a different building block called the bottleneck block, which keeps the parameter count manageable at greater depths. There are also several other variants inspired by ResNet, such as ResNeXt and Wide ResNet, but the core idea behind them remains the same."
   ]
  }
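,
  {
   "cell_type": "markdown",
   "id": "bottleneck-note",
   "metadata": {},
   "source": [
    "As a quick illustration (a simplified sketch, not the torchvision implementation), a bottleneck block replaces the two 3x3 convolutions of the basic block with a 1x1 -> 3x3 -> 1x1 stack: the first 1x1 convolution reduces the channel width, the 3x3 convolution operates at that reduced width, and the last 1x1 convolution expands the width back (by 4x in the paper). The class and parameter names below are hypothetical."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bottleneck-sketch",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class Bottleneck(nn.Module):\n",
    "    # Sketch of a bottleneck residual block: a 1x1 conv reduces channels,\n",
    "    # a 3x3 conv operates at the reduced width, and a final 1x1 conv\n",
    "    # expands channels back by `expansion`\n",
    "    expansion = 4\n",
    "\n",
    "    def __init__(self, in_channels, width, stride=1):\n",
    "        super().__init__()\n",
    "        out_channels = width * self.expansion\n",
    "        self.conv1 = nn.Conv2d(in_channels, width, kernel_size=1, bias=False)\n",
    "        self.bn1 = nn.BatchNorm2d(width)\n",
    "        self.conv2 = nn.Conv2d(width, width, kernel_size=3, stride=stride, padding=1, bias=False)\n",
    "        self.bn2 = nn.BatchNorm2d(width)\n",
    "        self.conv3 = nn.Conv2d(width, out_channels, kernel_size=1, bias=False)\n",
    "        self.bn3 = nn.BatchNorm2d(out_channels)\n",
    "        self.relu = nn.ReLU(inplace=True)\n",
    "        # Projection shortcut when the input and output shapes differ\n",
    "        self.downsample = None\n",
    "        if stride != 1 or in_channels != out_channels:\n",
    "            self.downsample = nn.Sequential(\n",
    "                nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False),\n",
    "                nn.BatchNorm2d(out_channels),\n",
    "            )\n",
    "\n",
    "    def forward(self, x):\n",
    "        identity = x if self.downsample is None else self.downsample(x)\n",
    "        out = self.relu(self.bn1(self.conv1(x)))\n",
    "        out = self.relu(self.bn2(self.conv2(out)))\n",
    "        out = self.bn3(self.conv3(out))\n",
    "        # The shortcut addition is the same identity mapping as in the basic block\n",
    "        return self.relu(out + identity)\n",
    "\n",
    "block = Bottleneck(in_channels=256, width=64)\n",
    "print(block(torch.randn(1, 256, 56, 56)).shape)  # torch.Size([1, 256, 56, 56])"
   ]
  }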
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
