{
  "cells": [
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Transformer Encoder (with Scaled Dot Product) from Scratch"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## First, What is BERT?\n",
        "\n",
        "BERT stands for Bidirectional Encoder Representations from Transformers. The name itself gives us several clues to what BERT is all about.\n",
        "\n",
        "BERT architecture consists of several Transformer encoders stacked together. Each Transformer encoder encapsulates two sub-layers: a self-attention layer and a feed-forward layer.\n",
        "\n",
        "### There are two different BERT models:\n",
        "\n",
        "- BERT base, which consists of 12 layers of Transformer encoders, 12 attention heads, a hidden size of 768, and 110M parameters.\n",
        "\n",
        "- BERT large, which consists of 24 layers of Transformer encoders, 16 attention heads, a hidden size of 1024, and 340M parameters.\n",
        "\n",
        "\n",
        "\n",
        "### BERT Input and Output\n",
        "\n",
        "The BERT model expects a sequence of tokens (words) as input. In each sequence of tokens, there are two special tokens that BERT expects:\n",
        "\n",
        "- [CLS]: This is the first token of every sequence, which stands for classification token.\n",
        "- [SEP]: This is the token that lets BERT know which token belongs to which sequence. This special token is particularly important for next sentence prediction and question-answering tasks. If we only have one sequence, this token is appended to the end of the sequence.\n",
        "\n",
        "\n",
        "It is also important to note that the maximum size of tokens that can be fed into BERT model is 512. If the tokens in a sequence are less than 512, we can use padding to fill the unused token slots with [PAD] token. If the tokens in a sequence are longer than 512, then we need to do a truncation.\n",
        "\n",
        "And that’s all that BERT expects as input.\n",
        "\n",
        "The BERT model then outputs an embedding vector of size 768 for each of the tokens. We can use these vectors as input for different kinds of NLP applications, whether it is text classification, next sentence prediction, Named-Entity Recognition (NER), or question-answering.\n",
        "\n",
        "\n",
        "------------\n",
        "\n",
        "**For a text classification task**, we focus our attention on the embedding vector output from the special [CLS] token. This means that we’re going to use the embedding vector of size 768 from the [CLS] token as input for our classifier, which will then output a vector whose size is the number of classes in our classification task.\n",
        "\n",
        "-----------------------\n",
        "\n",
        "![Imgur](https://imgur.com/NpeB9vb.png)\n",
        "\n",
        "-------------------------"
      ]
    },
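    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The special-token behaviour described above is easy to see directly. Below is a minimal sketch (it assumes the `transformers` library is available, which a later cell installs): we tokenize a short sentence with padding and inspect the resulting tokens.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from transformers import AutoTokenizer\n",
        "\n",
        "demo_tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\n",
        "\n",
        "# Pad to a fixed length so the [PAD] tokens become visible\n",
        "enc = demo_tokenizer('Hello world', padding='max_length', max_length=8)\n",
        "\n",
        "demo_tokenizer.convert_ids_to_tokens(enc.input_ids)\n",
        "# ['[CLS]', 'hello', 'world', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]']"
      ]
    },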
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "C4m41VSRhUdD"
      },
      "source": [
        "![](assets/2022-07-04-21-46-12.png)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "_4jdjLi7hUdH"
      },
      "source": [
        "![](assets/2022-07-08-02-58-52.png)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "aOdC0XxShUdH"
      },
      "source": [
        "![](assets/2022-07-08-03-07-18.png)\n",
        "\n",
        "\n",
        "![](assets/2022-07-08-03-08-46.png)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 20,
      "metadata": {
        "id": "lbPP17I6hUdI"
      },
      "outputs": [],
      "source": [
        "!pip install bertviz transformers -q"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 753
        },
        "id": "8qjMPFVyhUdK",
        "outputId": "d1739101-4867-4485-9368-6d2d1249fba0"
      },
      "outputs": [],
      "source": [
        "#hide_output\n",
        "from transformers import AutoTokenizer\n",
        "\n",
        "from bertviz.transformers_neuron_view import BertModel\n",
        "from bertviz.neuron_view import show\n",
        "\n",
        "model_ckpt = 'bert-base-uncased'\n",
        "\n",
        "tokenizer = AutoTokenizer.from_pretrained(model_ckpt)\n",
        "\n",
        "text = \"As the aircraft becomes lighter, it flies higher in air of lower density to maintain the same airspeed.\"\n",
        "\n",
        "model = BertModel.from_pretrained(model_ckpt)\n",
        "\n",
        "# show(model, 'bert', tokenizer, text, display_mode = 'light', layer=0, head=8 )\n",
        "# Commenting out the above line as Github will NOT render the bertviz plot and for that reason anything below this line was NOT getting rendered in the  notebook at all\n",
        "\n"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "Pv6qjQpQhUdL"
      },
      "source": [
        "## Tokenization"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 22,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "MvUW2e39hUdL",
        "outputId": "2c1e21d5-9bc5-4aa9-c152-3af3fcba8310"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "tensor([[ 2004,  1996,  2948,  4150,  9442,  1010,  2009, 10029,  3020,  1999,\n",
              "          2250,  1997,  2896,  4304,  2000,  5441,  1996,  2168, 14369, 25599,\n",
              "          1012]])"
            ]
          },
          "execution_count": 22,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "inputs = tokenizer(text, return_tensors='pt', add_special_tokens=False)\n",
        "\n",
        "inputs.input_ids"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 23,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "upf81-ZThUdM",
        "outputId": "274f1bb6-b5b3-4b2b-ad08-9e057d2cb969"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "BertConfig {\n",
              "  \"_name_or_path\": \"bert-base-uncased\",\n",
              "  \"architectures\": [\n",
              "    \"BertForMaskedLM\"\n",
              "  ],\n",
              "  \"attention_probs_dropout_prob\": 0.1,\n",
              "  \"classifier_dropout\": null,\n",
              "  \"gradient_checkpointing\": false,\n",
              "  \"hidden_act\": \"gelu\",\n",
              "  \"hidden_dropout_prob\": 0.1,\n",
              "  \"hidden_size\": 768,\n",
              "  \"initializer_range\": 0.02,\n",
              "  \"intermediate_size\": 3072,\n",
              "  \"layer_norm_eps\": 1e-12,\n",
              "  \"max_position_embeddings\": 512,\n",
              "  \"model_type\": \"bert\",\n",
              "  \"num_attention_heads\": 12,\n",
              "  \"num_hidden_layers\": 12,\n",
              "  \"pad_token_id\": 0,\n",
              "  \"position_embedding_type\": \"absolute\",\n",
              "  \"transformers_version\": \"4.20.1\",\n",
              "  \"type_vocab_size\": 2,\n",
              "  \"use_cache\": true,\n",
              "  \"vocab_size\": 30522\n",
              "}"
            ]
          },
          "execution_count": 23,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "from torch import nn\n",
        "from transformers import AutoConfig\n",
        "\n",
        "config = AutoConfig.from_pretrained(model_ckpt)\n",
        "config"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 24,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "fbdsdg_mhUdM",
        "outputId": "4c4c9aac-fcba-4f3a-ef04-ff7e8748f42a"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "Embedding(30522, 768)"
            ]
          },
          "execution_count": 24,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "token_emb = nn.Embedding(config.vocab_size, config.hidden_size)\n",
        "token_emb"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 25,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Yx-v3_eFhUdN",
        "outputId": "4a1c55c4-3b90-4b8f-8dc9-fe61647e33d9"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "tensor([[[ 0.6336, -1.8910,  0.5547,  ..., -0.5884, -1.5728, -0.3602],\n",
            "         [ 0.4609, -0.6503,  0.2237,  ..., -0.0850, -1.1233,  0.2282],\n",
            "         [ 0.6410, -0.9294,  1.3242,  ...,  1.4695, -1.0404,  0.1205],\n",
            "         ...,\n",
            "         [ 0.0038, -0.0762,  0.4624,  ...,  1.0815, -0.2495, -0.1189],\n",
            "         [ 0.2080,  0.9414, -0.3307,  ...,  0.3679, -0.7962,  0.9216],\n",
            "         [ 0.0468, -0.0521, -0.5550,  ...,  0.7277,  1.0729, -1.5228]]],\n",
            "       grad_fn=<EmbeddingBackward0>)\n"
          ]
        },
        {
          "data": {
            "text/plain": [
              "torch.Size([1, 21, 768])"
            ]
          },
          "execution_count": 25,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "inputs_embeds = token_emb(inputs.input_ids)\n",
        "print(inputs_embeds)\n",
        "inputs_embeds.size()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 26,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "zhzzQMILhUdN",
        "outputId": "c79253da-8c43-411a-ca45-8935e4b26284"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "torch.Size([1, 21, 21])"
            ]
          },
          "execution_count": 26,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "import torch\n",
        "from math import sqrt\n",
        "\n",
        "query = key = value = inputs_embeds\n",
        "\n",
        "dim_k = key.size(-1)\n",
        "\n",
        "scores = torch.bmm(query, key.transpose(1, 2)) /sqrt(dim_k)\n",
        "\n",
        "scores.size()"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "#### This has created a 21 × 21 matrix of attention scores per sample in the batch (one score for each pair of the 21 tokens in our sequence).\n",
        "\n",
        "\n",
        "-----------------\n",
        "\n",
        "## torch.bmm() function \n",
        "\n",
        "The torch.bmm() function performs a batch matrix-matrix product that simplifies the\n",
        "computation of the attention scores when the query and key tensors have the\n",
        "shape [batch_size, seq_len, hidden_dim].\n",
        "\n",
        "If we ignored the batch dimension, we could calculate the dot product between each query and key vector by simply transposing the key tensor to have the shape [hidden_dim, seq_len] and then using the matrix product (i.e. torch.matmul()) to collect all the dot products in a [seq_len, seq_len] matrix.\n",
        "\n",
        "### Since we want to do this for all sequences in the batch independently, we use torch.bmm(), which takes two batches of matrices and multiplies each matrix from the first batch with the corresponding matrix in the second batch.\n"
      ]
    },
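    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A small sketch of the point above (the shapes are illustrative): torch.bmm() multiplies each matrix in the first batch with the corresponding matrix in the second batch. Note that for 3-D tensors, torch.matmul() broadcasts over the leading batch dimension and produces the same result.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch\n",
        "\n",
        "batch_size, seq_len, hidden_dim = 2, 4, 6\n",
        "q = torch.randn(batch_size, seq_len, hidden_dim)\n",
        "k = torch.randn(batch_size, seq_len, hidden_dim)\n",
        "\n",
        "scores_bmm = torch.bmm(q, k.transpose(1, 2))        # one (4, 4) matrix per batch element\n",
        "scores_matmul = torch.matmul(q, k.transpose(1, 2))  # same values via broadcasting\n",
        "\n",
        "scores_bmm.shape, torch.allclose(scores_bmm, scores_matmul)\n",
        "# (torch.Size([2, 4, 4]), True)"
      ]
    },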
    {
      "cell_type": "code",
      "execution_count": 27,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "WWfrweFnhUdN",
        "outputId": "f071dc0b-2d0d-4fbc-8cc1-afee95fe22f7"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
              "         1., 1., 1.]], grad_fn=<SumBackward1>)"
            ]
          },
          "execution_count": 27,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "import torch.nn.functional as F\n",
        "\n",
        "weights = F.softmax(scores, dim=-1)\n",
        "\n",
        "weights.sum(dim=-1)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 28,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "k1rfT9b8hUdO",
        "outputId": "32a5cb28-07b9-4f40-aa54-cf6f1d243cd9"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "torch.Size([1, 21, 768])"
            ]
          },
          "execution_count": 28,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "attn_outputs = torch.bmm(weights, value)\n",
        "\n",
        "attn_outputs.shape"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 29,
      "metadata": {
        "id": "5z2XtBZ5hUdO"
      },
      "outputs": [],
      "source": [
        "def scaled_dot_product_attention(query, key, value):\n",
        "    \"\"\"\n",
        "    Compute scaled dot product attention.\n",
        "\n",
        "    Args:\n",
        "        query (torch.Tensor): Query tensor of shape (batch_size, seq_len_q, dim_q).\n",
        "        key (torch.Tensor): Key tensor of shape (batch_size, seq_len_k, dim_k).\n",
        "        value (torch.Tensor): Value tensor of shape (batch_size, seq_len_v, dim_v).\n",
        "\n",
        "    Returns:\n",
        "        torch.Tensor: Output tensor after applying scaled dot product attention of shape (batch_size, seq_len_q, dim_v).\n",
        "\n",
        "    \"\"\"\n",
        "    #first calculates the dimension of the key tensor (dim_k).\n",
        "    dim_k = key.size(-1)\n",
        "    # computes the attention scores by performing the dot product between the query and the transposed key tensor. The result is divided by the square root of dim_k.\n",
        "    scores = torch.bmm(query, key.transpose(1, 2)) /sqrt(dim_k)\n",
        "    # Next, the attention scores are normalized using the softmax function along the last dimension, which represents the sequence length (seq_len_k).\n",
        "    weights = F.softmax(scores, dim=-1)\n",
        "    \n",
        "    \"\"\" Finally, the attention weights are applied to the value tensor by performing a batch matrix multiplication. The resulting tensor is the output of the scaled dot product attention and has the shape (batch_size, seq_len_q, dim_v).\n",
        "\n",
        "    The output tensor represents the attended values corresponding to each query element based on their similarity to the key elements. \"\"\"\n",
        "    return torch.bmm(weights, value)"
      ]
    },
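    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A quick sanity check of the function defined above, on random tensors (the shapes here are illustrative): the output keeps the query's sequence length and the value's feature dimension.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "q = torch.randn(1, 5, 16)\n",
        "k = torch.randn(1, 5, 16)\n",
        "v = torch.randn(1, 5, 16)\n",
        "\n",
        "scaled_dot_product_attention(q, k, v).shape\n",
        "# torch.Size([1, 5, 16])"
      ]
    },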
    {
      "cell_type": "code",
      "execution_count": 30,
      "metadata": {
        "id": "9lAi55V0hUdO"
      },
      "outputs": [],
      "source": [
        "class AttentionHead(nn.Module):\n",
        "    \"\"\"\n",
        "    Attention head module for the Transformer model. Encapsulates the operations required to compute attention within a single attention head of the Transformer model.\n",
        "\n",
        "    Args:\n",
        "        embed_dim (int): Dimensionality of the input embeddings.\n",
        "        head_dim (int): Dimensionality of the attention head.\n",
        "\n",
        "    \"\"\"\n",
        "    def __init__(self, embed_dim, head_dim):\n",
        "        super().__init__()\n",
        "        self.q = nn.Linear(embed_dim, head_dim)\n",
        "        self.k = nn.Linear(embed_dim, head_dim)\n",
        "        self.v = nn.Linear(embed_dim, head_dim)\n",
        "        \n",
        "    def forward(self, hidden_state):\n",
        "        \"\"\"\n",
        "        Perform forward pass through the attention head.\n",
        "\n",
        "        Args:\n",
        "            hidden_state (torch.Tensor): Input tensor of shape (batch_size, seq_len, embed_dim).\n",
        "\n",
        "        Returns:\n",
        "            torch.Tensor: Output tensor after applying scaled dot product attention of shape (batch_size, seq_len, head_dim).\n",
        "\n",
        "        \"\"\"\n",
        "        attn_outputs = scaled_dot_product_attention(\n",
        "            self.q(hidden_state), self.k(hidden_state), self.v(hidden_state)\n",
        "        )\n",
        "        return attn_outputs"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 31,
      "metadata": {
        "id": "XPdlVPrVwhqk"
      },
      "outputs": [],
      "source": [
        "class MultiHeadAttention(nn.Module):\n",
        "    \"\"\"\n",
        "    Multi-head attention module for the Transformer model. Combines the outputs of multiple attention heads and applies a linear transformation to produce the final output of the attention mechanism in the Transformer model.\n",
        "\n",
        "    Args:\n",
        "        config (object): Configuration object containing model parameters.\n",
        "\n",
        "    \"\"\"\n",
        "    def __init__(self, config):\n",
        "        super().__init__()\n",
        "        embed_dim = config.hidden_size\n",
        "        num_heads = config.num_attention_heads\n",
        "        head_dim = embed_dim // num_heads # Dimensionality of each individual attention head\n",
        "        \n",
        "        self.heads = nn.ModuleList(\n",
        "            [AttentionHead(embed_dim, head_dim) for _ in range(num_heads)]\n",
        "        )\n",
        "        self.output_linear = nn.Linear(embed_dim, embed_dim)\n",
        "        \n",
        "    def forward(self, hidden_state):\n",
        "        \"\"\"\n",
        "        Perform forward pass through the multi-head attention module.\n",
        "        \n",
        "        For each attention head, the input tensor is passed through the corresponding AttentionHead instance, and the outputs are concatenated along the last dimension. The concatenated output is then passed through the output_linear layer to obtain the final output tensor.\n",
        "\n",
        "        Args:\n",
        "            hidden_state (torch.Tensor): Input tensor of shape (batch_size, seq_len, embed_dim).\n",
        "\n",
        "        Returns:\n",
        "            torch.Tensor: Output tensor after applying multi-head attention and linear transformation\n",
        "                of shape (batch_size, seq_len, embed_dim).\n",
        "\n",
        "        \"\"\"\n",
        "        concatenated_output = torch.cat([h(hidden_state) for h in self.heads], dim=-1 )\n",
        "        concatenated_output = self.output_linear(concatenated_output)\n",
        "        return concatenated_output\n",
        "        "
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 31,
      "metadata": {
        "id": "0LdEqX-4whn_"
      },
      "outputs": [],
      "source": [
        "multihead_attn = MultiHeadAttention(config)\n",
        "attn_output = multihead_attn(inputs_embeds)\n",
        "attn_output.size()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 31,
      "metadata": {
        "id": "W83p51VJwhk2"
      },
      "outputs": [],
      "source": [
        "from bertviz import head_view\n",
        "\n",
        "from transformers import AutoModel\n",
        "\n",
        "model = AutoModel.from_pretrained(model_ckpt, output_attentions=True)\n",
        "\n",
        "sentence_a = \"As the aircraft becomes lighter, it flies higher in air of lower density to maintain the same airspeed.\"\n",
        "\n",
        "sentence_b = \"The corn fields are full of flies.\"\n",
        "\n",
        "viz_inputs = tokenizer(sentence_a, sentence_b, return_tensors='pt')\n",
        "# Set the return_tensors parameter to either pt for PyTorch, or tf for TensorFlow:\n",
        "\n",
        "attention = model(**viz_inputs).attentions\n",
        "\n",
        "sentence_b_start = (viz_inputs.token_type_ids == 0).sum(dim=1)\n",
        "\n",
        "tokens = tokenizer.convert_ids_to_tokens(viz_inputs.input_ids[0])\n",
        "\n",
        "head_view(attention, tokens, sentence_b_start, heads=[8])"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### token_type_ids\n",
        "\n",
        "token_type_ids: list of token type ids to be fed to a model\n",
        "\n",
        "https://huggingface.co/transformers/v2.11.0/main_classes/tokenizer.html\n",
        "\n",
        "https://huggingface.co/transformers/v3.2.0/glossary.html#token-type-ids\n",
        "\n",
        "\n",
        "### convert_ids_to_tokens\n",
        "\n",
        "`convert_ids_to_tokens(ids: Union[int, List[int]], skip_special_tokens: bool = False) → Union[str, List[str]]`\n",
        "\n",
        "Converts a single index (integer) into a token (str), or a sequence of indices into a sequence of tokens, using the vocabulary and added tokens.\n",
        "\n",
        "--------\n",
        "\n",
        "This visualization shows the attention weights as lines connecting the token\n",
        "whose embedding is getting updated (left) with every word that is being\n",
        "attended to (right). The intensity of the lines indicates the strength of the\n",
        "attention weights, with dark lines representing values close to 1, and faint lines\n",
        "representing values close to 0.\n",
        "\n",
        "\n",
        "In this example, the input consists of two sentences, and [CLS] and [SEP] are the special tokens of BERT’s tokenizer.\n",
        "\n",
        "One thing we can see from the visualization is that the attention\n",
        "weights are strongest between words that belong to the same sentence, which\n",
        "suggests BERT can tell that it should attend to words in the same sentence.\n",
        "However, for the word “flies”, the context differs sharply between the two\n",
        "sentences: in the first it is a verb (the aircraft flies higher), while in\n",
        "the second it is a noun (the insects in the corn fields). These attention\n",
        "weights over each sentence's context are what allow the model to distinguish\n",
        "the use of “flies” as a verb or noun, depending on the context in which it occurs!\n"
      ]
    },
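    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The token_type_ids described above can be inspected directly. A minimal sketch using the tokenizer loaded earlier (the sentence pair is illustrative): the ids are 0 for the first segment, including [CLS] and the first [SEP], and 1 for the second segment.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "pair = tokenizer('Birds fly.', 'Fish swim.')\n",
        "\n",
        "print(pair.token_type_ids)\n",
        "print(tokenizer.convert_ids_to_tokens(pair.input_ids))"
      ]
    },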
    {
      "cell_type": "code",
      "execution_count": 31,
      "metadata": {
        "id": "vGZf3-96whhg"
      },
      "outputs": [],
      "source": [
        "class FeedForward(nn.Module):\n",
        "    \"\"\"\n",
        "    This class implements the Feed Forward neural network layer within the Transformer model.\n",
        "    \n",
        "    Feed Forward layer is a crucial part of the Transformer's architecture, responsible for the actual \n",
        "    transformation of the input data. It consists of two linear layers with a GELU activation function \n",
        "    in between, followed by a dropout layer for regularization.\n",
        "\n",
        "    Parameters\n",
        "    ----------\n",
        "    config : object\n",
        "        The configuration object containing model parameters. It should have the following attributes:\n",
        "        - hidden_size: The size of the hidden layer in the transformer model.\n",
        "        - intermediate_size: The size of the intermediate layer in the Feed Forward network.\n",
        "        - hidden_dropout_prob: The dropout probability for the hidden layer.\n",
        "\n",
        "    Attributes\n",
        "    ----------\n",
        "    linear1 : torch.nn.Module\n",
        "        The first linear transformation layer.\n",
        "    linear2 : torch.nn.Module\n",
        "        The second linear transformation layer.\n",
        "    gelu : torch.nn.Module\n",
        "        The Gaussian Error Linear Unit (GELU) activation function.\n",
        "    dropout : torch.nn.Module\n",
        "        The dropout layer for regularization.\n",
        "    \"\"\"\n",
        "    def __init__(self, config):\n",
        "        super().__init__()\n",
        "        self.linear1 = nn.Linear(config.hidden_size, config.intermediate_size)\n",
        "        self.linear2 = nn.Linear(config.intermediate_size, config.hidden_size)\n",
        "        self.gelu = nn.GELU()\n",
        "        self.dropout = nn.Dropout(config.hidden_dropout_prob)\n",
        "        \n",
        "    def forward(self, x):\n",
        "        \"\"\"\n",
        "        Defines the computation performed at every call.\n",
        "\n",
        "        Parameters\n",
        "        ----------\n",
        "        x : torch.Tensor\n",
        "            The input tensor to the Feed Forward network layer.\n",
        "\n",
        "        Returns\n",
        "        -------\n",
        "        x : torch.Tensor\n",
        "            The output tensor after passing through the Feed Forward network layer.\n",
        "        \"\"\"\n",
        "        x = self.linear1(x)\n",
        "        x = self.gelu(x)\n",
        "        x = self.linear2(x)\n",
        "        x = self.dropout(x)\n",
        "        return x"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Class definition of nn.Linear in pytorch?\n",
        "\n",
        "`CLASS torch.nn.Linear(in_features, out_features, bias=True)`\n",
        "\n",
        "Applies a linear transformation to the incoming data: `y = x*W^T + b`\n",
        "\n",
        "Parameters:\n",
        "\n",
        " - **in_features** – size of each input sample (i.e. size of x)\n",
        " - **out_features** – size of each output sample (i.e. size of y)\n",
        "\n",
        "---\n",
        "\n",
        "Note that a feed-forward layer such as nn.Linear is usually applied to a tensor of\n",
        "shape (batch_size, input_dim), where it acts on each element of the batch\n",
        "dimension independently. \n",
        "\n",
        "This is actually true for any dimension except the last one, so when we pass a tensor of shape (batch_size, seq_len, hidden_dim) the layer is applied to all token embeddings of the batch and sequence independently, which is exactly what we want. Let’s test this by passing the attention outputs:\n"
      ]
    },
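    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a small check of the formula above (a sketch with illustrative shapes): applying nn.Linear to a 3-D tensor is the same as multiplying the last dimension by W^T and adding b, independently for every batch element and sequence position.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "layer = nn.Linear(6, 3)\n",
        "x3d = torch.randn(2, 4, 6)  # (batch_size, seq_len, hidden_dim)\n",
        "\n",
        "# y = x @ W^T + b, applied to the last dimension only\n",
        "manual = torch.matmul(x3d, layer.weight.T) + layer.bias\n",
        "\n",
        "layer(x3d).shape, torch.allclose(layer(x3d), manual, atol=1e-6)\n",
        "# (torch.Size([2, 4, 3]), True)"
      ]
    },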
    {
      "cell_type": "code",
      "execution_count": 31,
      "metadata": {
        "id": "mhNla13Nwhdd"
      },
      "outputs": [],
      "source": [
        "feed_forward = FeedForward(config)\n",
        "\n",
        "ff_outputs = feed_forward(attn_outputs)\n",
        "\n",
        "ff_outputs.size()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 31,
      "metadata": {
        "id": "ZLO3rXSSwhaI"
      },
      "outputs": [],
      "source": [
        "class TransformerEncoderLayer(nn.Module):\n",
        "    \"\"\"\n",
        "    This class implements the Transformer Encoder Layer as part of the Transformer model.\n",
        "    \n",
        "    Each encoder layer consists of a Multi-Head Attention mechanism followed by a Position-wise \n",
        "    Feed Forward neural network. Additionally, residual connections around each of the two \n",
        "    sub-layers are employed, followed by layer normalization.\n",
        "\n",
        "    Parameters\n",
        "    ----------\n",
        "    config : object\n",
        "        The configuration object containing model parameters. It should have the following attributes:\n",
        "        - hidden_size: The size of the hidden layer in the transformer model.\n",
        "\n",
        "    Attributes\n",
        "    ----------\n",
        "    layer_norm_1 : torch.nn.Module\n",
        "        The first layer normalization.\n",
        "    layer_norm_2 : torch.nn.Module\n",
        "        The second layer normalization.\n",
        "    attention : MultiHeadAttention\n",
        "        The MultiHeadAttention mechanism in the encoder layer.\n",
        "    feed_forward : FeedForward\n",
        "        The FeedForward neural network in the encoder layer.\n",
        "    \"\"\"\n",
        "    def __init__(self, config):\n",
        "        super().__init__()\n",
        "        self.layer_norm_1 = nn.LayerNorm(config.hidden_size)\n",
        "        self.layer_norm_2 = nn.LayerNorm(config.hidden_size)\n",
        "        self.attention = MultiHeadAttention(config)\n",
        "        self.feed_forward = FeedForward(config)\n",
        "        \n",
        "    def forward(self, x):\n",
        "        \"\"\"\n",
        "        Defines the computation performed at every call.\n",
        "\n",
        "        Parameters\n",
        "        ----------\n",
        "        x : torch.Tensor\n",
        "            The input tensor to the Transformer Encoder Layer.\n",
        "\n",
        "        Returns\n",
        "        -------\n",
        "        x : torch.Tensor\n",
        "            The output tensor after passing through the Transformer Encoder Layer.\n",
        "        \"\"\"\n",
        "        hidden_state = self.layer_norm_1(x)\n",
        "        x = x + self.attention(hidden_state)\n",
        "        x = x + self.feed_forward(self.layer_norm_2(x))\n",
        "        return x"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "encoder_layer = TransformerEncoderLayer(config)\n",
        "inputs_embeds.shape, encoder_layer(inputs_embeds).size()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "class Embeddings(nn.Module):\n",
        "    \"\"\"\n",
        "    This class implements the Embeddings layer as part of the Transformer model.\n",
        "    \n",
        "    The Embeddings layer is responsible for converting input tokens and their corresponding positions \n",
        "    into dense vectors of fixed size. The token embeddings and position embeddings are summed up \n",
        "    and subsequently layer-normalized and passed through a dropout layer for regularization.\n",
        "\n",
        "    Parameters\n",
        "    ----------\n",
        "    config : object\n",
        "        The configuration object containing model parameters. It should have the following attributes:\n",
        "        - vocab_size: The size of the vocabulary.\n",
        "        - hidden_size: The size of the hidden layer in the transformer model.\n",
        "        - max_position_embeddings: The maximum number of positions that the model can accept.\n",
        "\n",
        "    Attributes\n",
        "    ----------\n",
        "    token_embeddings : torch.nn.Module\n",
        "        The embedding layer for the tokens.\n",
        "    position_embeddings : torch.nn.Module\n",
        "        The embedding layer for the positions.\n",
        "    layer_norm : torch.nn.Module\n",
        "        The layer normalization.\n",
        "    dropout : torch.nn.Module\n",
        "        The dropout layer for regularization.\n",
        "    \"\"\"\n",
        "    def __init__(self, config):\n",
        "        super().__init__()\n",
        "        self.token_embeddings = nn.Embedding(config.vocal_size, config.hidden_size)\n",
        "        self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size )\n",
        "        self.layer_norm = nn.LayerNorm(config.hidden_size, eps=1e-12)\n",
        "        self.dropout = nn.Dropout()\n",
        "        \n",
        "    def forward(self, input_ids):\n",
        "        \"\"\"\n",
        "        Defines the computation performed at every call.\n",
        "\n",
        "        Parameters\n",
        "        ----------\n",
        "        input_ids : torch.Tensor\n",
        "            The input tensor to the Embeddings layer, typically the token ids.\n",
        "\n",
        "        Returns\n",
        "        -------\n",
        "        embeddings : torch.Tensor\n",
        "            The output tensor after passing through the Embeddings layer.\n",
        "        \"\"\"\n",
        "        seq_length = input_ids.size(1)\n",
        "        # Create position ids on the same device as the input so the model also works on GPU\n",
        "        position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device).unsqueeze(0)\n",
        "        token_embeddings = self.token_embeddings(input_ids)\n",
        "        position_embeddings = self.position_embeddings(position_ids)\n",
        "        embeddings = token_embeddings + position_embeddings\n",
        "        embeddings = self.layer_norm(embeddings)\n",
        "        embeddings = self.dropout(embeddings)\n",
        "        return embeddings\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "embedding_layer = Embeddings(config)\n",
        "embedding_layer(inputs.input_ids).size()"
      ]
    },
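    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick standalone sketch (toy sizes, not the BERT config), the lookup-and-sum that `Embeddings` performs over token ids and position ids looks like this:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Toy illustration with made-up sizes: vocab of 10 tokens, hidden size 4, 8 positions\n",
        "import torch\n",
        "from torch import nn\n",
        "\n",
        "tok_emb = nn.Embedding(10, 4)\n",
        "pos_emb = nn.Embedding(8, 4)\n",
        "ids = torch.tensor([[1, 5, 2]])                # one sequence of length 3\n",
        "pos = torch.arange(ids.size(1)).unsqueeze(0)   # position ids [[0, 1, 2]]\n",
        "out = tok_emb(ids) + pos_emb(pos)              # summed embeddings, shape [1, 3, 4]\n",
        "out.shape"
      ]
    },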
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "class TransformerEncode(nn.Module):\n",
        "    \"\"\"\n",
        "    This class implements the Transformer Encoder as part of the Transformer model.\n",
        "    \n",
        "    The Transformer Encoder consists of a series of identical layers, each with a self-attention mechanism \n",
        "    and a position-wise fully connected feed-forward network. The input to each layer is first processed by \n",
        "    the Embeddings layer which converts input tokens and their corresponding positions into dense vectors of \n",
        "    fixed size.\n",
        "\n",
        "    Parameters\n",
        "    ----------\n",
        "    config : object\n",
        "        The configuration object containing model parameters. It should have the following attributes:\n",
        "        - num_hidden_layers: The number of hidden layers in the encoder.\n",
        "\n",
        "    Attributes\n",
        "    ----------\n",
        "    embeddings : Embeddings\n",
        "        The embedding layer which converts input tokens and positions into dense vectors.\n",
        "    layers : torch.nn.ModuleList\n",
        "        The list of Transformer Encoder Layers.\n",
        "    \"\"\"\n",
        "    def __init__(self, config):\n",
        "        super().__init__()\n",
        "        self.embeddings = Embeddings(config)\n",
        "        # Initialize a list of Transformer Encoder Layers. The number of layers is defined by config.num_hidden_layers\n",
        "        self.layers = nn.ModuleList([TransformerEncoderLayer(config) for _ in range(config.num_hidden_layers)])\n",
        "        \n",
        "    def forward(self, x):\n",
        "        \"\"\"\n",
        "        Defines the computation performed at every call.\n",
        "\n",
        "        Parameters\n",
        "        ----------\n",
        "        x : torch.Tensor\n",
        "            The input tensor to the Transformer Encoder.\n",
        "\n",
        "        Returns\n",
        "        -------\n",
        "        x : torch.Tensor\n",
        "            The output tensor after passing through the Transformer Encoder.\n",
        "        \"\"\"\n",
        "        x = self.embeddings(x)\n",
        "        for layer in self.layers:\n",
        "            x = layer(x)\n",
        "        return x"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "encoder = TransformerEncode(config)\n",
        "encoder(inputs.input_ids).size()"
      ]
    },
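    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A handy sanity check is counting trainable parameters. The one-liner below is shown on a toy `nn.Linear(768, 768)` so the cell is self-contained; applying the same expression to the `encoder` instance above reports the size of this from-scratch model (the intro quotes roughly 110M parameters for BERT base):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from torch import nn\n",
        "\n",
        "# Parameter count of a single 768x768 linear layer: 768*768 weights + 768 biases\n",
        "toy = nn.Linear(768, 768)\n",
        "n_params = sum(p.numel() for p in toy.parameters())\n",
        "n_params  # replace `toy` with `encoder` to count the full model"
      ]
    },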
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Adding a Classification Head"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "class TransformerForSequenceClassification(nn.Module):\n",
        "    \"\"\"\n",
        "    This class implements the Transformer model for sequence classification tasks.\n",
        "    \n",
        "    The model architecture consists of a Transformer encoder, followed by a dropout layer for regularization, \n",
        "    and a linear layer for classification. The output from the [CLS] token's embedding is used for the classification task.\n",
        "\n",
        "    Parameters\n",
        "    ----------\n",
        "    config : object\n",
        "        The configuration object containing model parameters. It should have the following attributes:\n",
        "        - hidden_size: The size of the hidden layer in the transformer model.\n",
        "        - hidden_dropout_prob: The dropout probability for the hidden layer.\n",
        "        - num_labels: The number of labels in the classification task.\n",
        "\n",
        "    Attributes\n",
        "    ----------\n",
        "    encoder : TransformerEncode\n",
        "        The Transformer Encoder.\n",
        "    dropout : torch.nn.Module\n",
        "        The dropout layer for regularization.\n",
        "    classifier : torch.nn.Module\n",
        "        The classification layer.\n",
        "    \"\"\"\n",
        "    def __init__(self, config):\n",
        "        super().__init__()\n",
        "        self.encoder = TransformerEncode(config)\n",
        "        self.dropout = nn.Dropout(config.hidden_dropout_prob)\n",
        "        self.classifier = nn.Linear(config.hidden_size, config.num_labels)\n",
        "        \n",
        "    def forward(self, x):\n",
        "        \"\"\"\n",
        "        Defines the computation performed at every call.\n",
        "\n",
        "        Parameters\n",
        "        ----------\n",
        "        x : torch.Tensor\n",
        "            The input tensor to the Transformer model.\n",
        "\n",
        "        Returns\n",
        "        -------\n",
        "        x : torch.Tensor\n",
        "            The output tensor after passing through the Transformer model and the classification layer.\n",
        "        \"\"\"\n",
        "        x = self.encoder(x)[:, 0, :]  # select the hidden state of the [CLS] token\n",
        "        x = self.dropout(x)\n",
        "        x = self.classifier(x)\n",
        "        return x\n",
        "\n",
        "config.num_labels = 3\n",
        "encoder_classifier = TransformerForSequenceClassification(config)\n",
        "encoder_classifier(inputs.input_ids).size()      "
      ]
    },
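    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The classifier head returns raw logits of shape `[batch_size, num_labels]`. As a self-contained sketch with dummy logits (not the model's actual output), such logits can be turned into probabilities, a predicted label, and a training loss:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import torch\n",
        "import torch.nn.functional as F\n",
        "\n",
        "logits = torch.tensor([[2.0, 0.5, -1.0]])          # dummy logits for num_labels = 3\n",
        "probs = F.softmax(logits, dim=-1)                  # probabilities, each row sums to 1\n",
        "pred = probs.argmax(dim=-1)                        # predicted class index\n",
        "loss = F.cross_entropy(logits, torch.tensor([0]))  # loss against a dummy gold label\n",
        "probs, pred, loss"
      ]
    },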
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": []
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "collapsed_sections": [],
      "name": "Transformer_From_Scratch.ipynb",
      "provenance": []
    },
    "gpuClass": "standard",
    "kernelspec": {
      "display_name": "Python 3.9.13 64-bit",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.9.13"
    },
    "orig_nbformat": 4,
    "vscode": {
      "interpreter": {
        "hash": "f9f85f796d01129d0dd105a088854619f454435301f6ffec2fea96ecbd9be4ac"
      }
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
