{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "d9c7eb01",
   "metadata": {},
   "source": [
    "# Decompose BERT Layer\n",
    "The Hugging Face transformers library provides pretrained BERT models that we use to generate ground-truth results. This notebook decomposes the operations inside a BERT encoder layer to show exactly what our design must accelerate."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3ae33920",
   "metadata": {},
   "source": [
    "## Ground Truth"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "377e277d",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/hao/.conda/envs/torch/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
      "  from .autonotebook import tqdm as notebook_tqdm\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "from transformers import BertModel, BertTokenizer\n",
    "import math"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "00c3f0e3",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Downloading model.safetensors: 100%|██████████| 440M/440M [00:45<00:00, 9.76MB/s]\n",
      "Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.seq_relationship.bias', 'cls.predictions.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.weight']\n",
      "- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
      "- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n"
     ]
    }
   ],
   "source": [
    "tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\n",
    "model = BertModel.from_pretrained('bert-base-uncased')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0bb2c3f3",
   "metadata": {},
   "source": [
    "Our text input must first be tokenized: BERT's WordPiece vocabulary maps each word (or subword) to an integer token ID."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "ad5ea852",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([1, 512])"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Input is the first 512 tokens generated from the proposal for this project.\n",
    "text_512 = 'This project aims to implement a transformer layer on a cluster of FPGAs. In recent years transformers have outperformed traditional convolutional neural networks in many fields, but serial performance is dismal and parallel GPU performance is power-intensive. Specialized architectures have been studied little, especially using FPGA platforms. This research will improve transformer inference performance by offloading computationally intensive sections of the network to reconfigurable accelerators running on a cluster of multiple FPGA devices. This research will result in an acceleration architecture for a single layer of a transformer network along with a performance comparison with CPU and GPU baselines. We propose the investigation of distributed transformer inference across a cluster of multiple field programmable gate arrays (FPGAs). This research will investigate the partitioning of a transformer layer across multiple FPGA devices along with networking between FPGAs in the cluster. Transformers have become a dominant machine learning architecture for many domains such as natural language processing, therefore high speed inference is desirable. However, networks sizes and limited FPGA resources often make inference on a single FPGA slow due to limited parallelism and pipeline depth or impossible due to limited resources. The purpose of this research is to explore methods to overcome these challenges by introducing parallelism through multi-FPGA clusters. Transformers are highly parallel neural network architectures which consist of stacks of encoder and decoder layers. These layers consist of many linear transformations on matrices which are represented by matrix-matrix multiplication. Within an encoder/decoder layer there is an opportunity to parallelize both between concurrent general matrix multiplies (GeMM) and within each GeMM. Attempting to serialize these operations on a CPU leads to high execution time and is a poor utilization of the CPU\\'s general purpose architecture. GPUs can deliver high throughput inference for transformers, though they are power-hungry and do not achieve the low latency required by some applications. Both in the datacenter and at the edge, low-latency and efficient inference is desired. Optimally, there would be an architecture that could scale between these two extremes of computational demand. State-of-the-art transformers can contain upwards of 12 layers and multiply matrices on the order of 1024x1024 elements. In addition, the trend of increasing transformer size does not show signs of slowing. This large use of memory and FLOPs leads to difficulty mapping an entire transformer network to a '\n",
    "text_128 = 'This project aims to implement a transformer layer on a cluster of FPGAs. In recent years transformers have outperformed traditional convolutional neural networks in many fields, but serial performance is dismal and parallel GPU performance is power-intensive. Specialized architectures have been studied little, especially using FPGA platforms. This research will improve transformer inference performance by offloading computationally intensive sections of the network to reconfigurable accelerators running on a cluster of multiple FPGA devices. This research will result in an acceleration architecture for a single layer of a transformer network along with a  '\n",
    "text = text_512\n",
    "encoded_input = tokenizer(text, return_tensors='pt')\n",
    "encoded_input['input_ids'].shape"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0b4ddc8b",
   "metadata": {},
   "source": [
    "This output hidden state is what we want to validate against: the result of passing the tokenized input through BERT's embedder and its 12 encoder layers. We want to show that our implementation yields the same `last_hidden_state`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "13c08252",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(torch.Size([1, 512, 768]),\n",
       " tensor([[[-3.7457e-01, -6.9887e-01, -4.4098e-04,  ..., -3.0715e-01,\n",
       "           -3.8659e-01,  4.7352e-01],\n",
       "          [-6.7209e-01, -7.5042e-01, -6.9455e-01,  ...,  1.4919e-01,\n",
       "            1.1460e+00,  1.7025e-01],\n",
       "          [-8.8504e-01, -6.3164e-01, -5.9148e-01,  ...,  2.0482e-01,\n",
       "            1.7474e-01,  2.4267e-01],\n",
       "          ...,\n",
       "          [-2.5008e-01,  4.4047e-02, -2.1806e-01,  ...,  1.0061e-01,\n",
       "            2.7695e-01,  8.8145e-01],\n",
       "          [-7.5948e-01,  7.5723e-02, -3.9088e-01,  ..., -4.3433e-01,\n",
       "            2.8015e-01,  7.4720e-01],\n",
       "          [-3.3422e-01, -5.3718e-02,  5.4829e-01,  ...,  5.3513e-01,\n",
       "           -3.9397e-01, -2.6217e-01]]], grad_fn=<NativeLayerNormBackward0>))"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "output = model(**encoded_input)\n",
    "output.last_hidden_state.shape, output.last_hidden_state"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "247e91ca",
   "metadata": {},
   "source": [
    "## Our Attention and FFN Implementation\n",
    "\n",
    "The cells below re-implement BERT's multi-head attention and feed-forward blocks using only basic torch operations, reading the pretrained weights from each encoder layer."
   ]
  },
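  {
   "cell_type": "markdown",
   "id": "e1f2a3b4",
   "metadata": {},
   "source": [
    "As a reference for the code below, each head computes scaled dot-product attention:\n",
    "\n",
    "$$\\mathrm{Attention}(Q, K, V) = \\mathrm{softmax}\\!\\left(\\frac{QK^T}{\\sqrt{d_{head}}}\\right)V$$\n",
    "\n",
    "where $Q$, $K$, and $V$ are linear projections of the hidden states and $d_{head} = d_{model} / \\mathrm{num\\_heads} = 768 / 12 = 64$ for bert-base."
   ]
  },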
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "bd0d14c8",
   "metadata": {},
   "outputs": [],
   "source": [
    "def attention(layer, hidden_states):\n",
    "    '''\n",
    "    Pass in an encoder layer (which holds pretrained weights) and a hidden_states input;\n",
    "    this function performs the same operations as the layer, but in a readable fashion.\n",
    "    \n",
    "    hidden_states: <bs, seqlen, dmodel>\n",
    "    '''\n",
    "    bs, seqlen, dmodel = hidden_states.size()\n",
    "    num_heads = layer.attention.self.num_attention_heads\n",
    "    dhead = layer.attention.self.attention_head_size\n",
    "    \n",
    "    # Linear transforms that produce Q, K, V for all heads at once. This is a major MAC consumer.\n",
    "    # Each is an nn.Linear call on hidden_states; for the query we spell out the\n",
    "    # equivalent matmul-plus-bias explicitly.\n",
    "    # query_layer = layer.attention.self.query(hidden_states) # <bs, seqlen, dmodel>\n",
    "    query_layer = torch.matmul(hidden_states, layer.attention.self.query.weight.T) + layer.attention.self.query.bias\n",
    "    key_layer = layer.attention.self.key(hidden_states)     # <bs, seqlen, dmodel>\n",
    "    value_layer = layer.attention.self.value(hidden_states) # <bs, seqlen, dmodel>\n",
    "    \n",
    "    # Reshape and transpose for multi-head\n",
    "    new_shape = (bs, seqlen, num_heads, dhead)\n",
    "    \n",
    "    query_layer = query_layer.view(new_shape)\n",
    "    value_layer = value_layer.view(new_shape)\n",
    "    key_layer = key_layer.view(new_shape)\n",
    "    \n",
    "    query_layer = query_layer.permute(0,2,1,3) # <bs, num_head, seqlen, dhead>\n",
    "    value_layer = value_layer.permute(0,2,1,3) # <bs, num_head, seqlen, dhead>\n",
    "    # Key is transposed to match dimensions of Query for matmul\n",
    "    key_layer = key_layer.permute(0,2,3,1)     # <bs, num_head, dhead, seqlen>\n",
    "    \n",
    "    # The attention main course\n",
    "    attention_scores = torch.matmul(query_layer, key_layer)\n",
    "    attention_scores /= math.sqrt(dhead)\n",
    "    \n",
    "    attention_probs = nn.functional.softmax(attention_scores, dim=-1)\n",
    "    attention_probs = layer.attention.self.dropout(attention_probs)\n",
    "    \n",
    "    # Weighted sum of Values from softmax attention\n",
    "    attention_out = torch.matmul(attention_probs, value_layer)\n",
    "    \n",
    "    attention_out = attention_out.permute(0,2,1,3).contiguous()\n",
    "    attention_out = attention_out.view(bs, seqlen, dmodel)\n",
    "    \n",
    "    # It's time for one more linear transform and layer norm\n",
    "    dense_out = layer.attention.output.dense(attention_out)\n",
    "    dense_out = layer.attention.output.dropout(dense_out)\n",
    "    \n",
    "    # LayerNorm also implements the residual connection\n",
    "    layer_out = layer.attention.output.LayerNorm(dense_out + hidden_states)\n",
    "    \n",
    "    return layer_out"
   ]
  },
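  {
   "cell_type": "markdown",
   "id": "a7b8c9d0",
   "metadata": {},
   "source": [
    "To make the head split above concrete, here is a minimal sketch with made-up toy sizes (bert-base itself uses 12 heads of size 64): the `view`/`permute` pair only reinterprets the projection output so that head `h` sees columns `h*dhead` through `(h+1)*dhead` of every token."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b1c2d3e4",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy sizes for illustration only (not BERT's real dimensions).\n",
    "toy_bs, toy_seqlen, toy_heads, toy_dhead = 1, 4, 2, 3\n",
    "toy = torch.arange(toy_bs * toy_seqlen * toy_heads * toy_dhead, dtype=torch.float32)\n",
    "toy = toy.view(toy_bs, toy_seqlen, toy_heads * toy_dhead)  # stands in for a Q/K/V projection output\n",
    "heads = toy.view(toy_bs, toy_seqlen, toy_heads, toy_dhead).permute(0, 2, 1, 3)\n",
    "# Head 1 holds columns [dhead : 2*dhead] of every token's projection.\n",
    "assert torch.equal(heads[0, 1], toy[0, :, toy_dhead:2 * toy_dhead])\n",
    "heads.shape  # <bs, num_heads, seqlen, dhead>"
   ]
  },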
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "ca8ec33a",
   "metadata": {},
   "outputs": [],
   "source": [
    "def ffn(layer, attention_out):\n",
    "    '''\n",
    "    Pass in an encoder layer and the attention output; returns the same result as the layer's feed-forward block.\n",
    "    '''\n",
    "    # Layer 1\n",
    "    output = layer.intermediate.dense(attention_out)\n",
    "    output = nn.functional.gelu(output)\n",
    "    \n",
    "    # Layer 2\n",
    "    output = layer.output.dense(output)\n",
    "    output = layer.output.dropout(output)\n",
    "    output = layer.output.LayerNorm(output + attention_out)\n",
    "    \n",
    "    return output"
   ]
  },
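  {
   "cell_type": "markdown",
   "id": "c9d0e1f2",
   "metadata": {},
   "source": [
    "In equation form, the block above computes\n",
    "\n",
    "$$\\mathrm{FFN}(x) = \\mathrm{LayerNorm}\\big(\\mathrm{GELU}(xW_1 + b_1)W_2 + b_2 + x\\big)$$\n",
    "\n",
    "where $W_1$ expands from $d_{model} = 768$ to the intermediate size 3072 and $W_2$ projects back to 768 (dropout omitted for clarity)."
   ]
  },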
  {
   "cell_type": "markdown",
   "id": "331494a1",
   "metadata": {},
   "source": [
    "### Show that it gives the same output\n",
    "The cell below loops through each encoder layer in the pretrained BERT model and passes the hidden state through our `attention` and `ffn` implementations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "adb467bc",
   "metadata": {},
   "outputs": [],
   "source": [
    "'''\n",
    "First, get the embeddings (we did not implement this since it's basically a lookup table)\n",
    "'''\n",
    "embedding_output = model.embeddings(\n",
    "    input_ids=encoded_input['input_ids'],\n",
    "    position_ids=None,\n",
    "    token_type_ids=encoded_input['token_type_ids'],\n",
    "    inputs_embeds=None,\n",
    "    past_key_values_length=0,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "d0f0102e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[[-3.7457e-01, -6.9887e-01, -4.4096e-04,  ..., -3.0714e-01,\n",
      "          -3.8659e-01,  4.7352e-01],\n",
      "         [-6.7209e-01, -7.5042e-01, -6.9455e-01,  ...,  1.4919e-01,\n",
      "           1.1460e+00,  1.7025e-01],\n",
      "         [-8.8504e-01, -6.3164e-01, -5.9148e-01,  ...,  2.0482e-01,\n",
      "           1.7474e-01,  2.4267e-01],\n",
      "         ...,\n",
      "         [-2.5008e-01,  4.4047e-02, -2.1806e-01,  ...,  1.0060e-01,\n",
      "           2.7695e-01,  8.8145e-01],\n",
      "         [-7.5948e-01,  7.5723e-02, -3.9088e-01,  ..., -4.3433e-01,\n",
      "           2.8014e-01,  7.4720e-01],\n",
      "         [-3.3422e-01, -5.3718e-02,  5.4829e-01,  ...,  5.3513e-01,\n",
      "          -3.9397e-01, -2.6217e-01]]], grad_fn=<NativeLayerNormBackward0>)\n"
     ]
    }
   ],
   "source": [
    "'''\n",
    "Now pass embeddings through 12 layers of encoder\n",
    "'''\n",
    "hidden_states = embedding_output\n",
    "\n",
    "for layer_module in model.encoder.layer:\n",
    "    # MHA + LayerNorm\n",
    "    attention_out = attention(layer_module, hidden_states)\n",
    "    ff_out = ffn(layer_module, attention_out)\n",
    "    hidden_states = ff_out\n",
    "    \n",
    "print(hidden_states)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "a9718802",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(False)"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "'''\n",
    "Verify agreement between our implementation and the ground truth.\n",
    "Note: torch.isclose with default tolerances is strict; differences in\n",
    "floating-point accumulation order (e.g. our explicit matmul for the query\n",
    "projection) can make the elementwise check report False even though the\n",
    "printed tensors agree to roughly four decimal places. Loosening the\n",
    "tolerance (e.g. torch.allclose(..., atol=1e-4)) is a fairer check here.\n",
    "'''\n",
    "torch.isclose(hidden_states, model(**encoded_input).last_hidden_state).all()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "torch",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
