{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "a31865d3-dc1b-4b03-89b7-30b624621d4b",
   "metadata": {},
   "source": [
    "# KV cache\n",
    "\n",
    "The goal of caching the Key (K) and Value (V) states is to speed up inference in autoregressive decoders such as GPT.\n",
    "\n",
    "The goal of this practical is to adapt the code of [minGPT](https://github.com/karpathy/minGPT/) from [Karpathy](https://karpathy.ai/) in order to incorporate KV-caching. We will only need the two main files [`model.py`](https://github.com/karpathy/minGPT/blob/master/mingpt/model.py) and [`trainer.py`](https://github.com/karpathy/minGPT/blob/master/mingpt/trainer.py) from this repo.\n",
    "\n",
    "Using [Named Tensor Notation](https://hackmd.io/@mlelarge/HkVlvrc8j), we write (see the paper by [Chiang, Rush and Barak](https://arxiv.org/abs/2102.13196))\n",
    "\\begin{align*}\n",
    "\\newcommand{\\namedtensorstrut}{\\vphantom{fg}}\n",
    "\\newcommand{\\nfun}[2]{\\mathop{\\underset{\\substack{#1}}{\\namedtensorstrut\\mathrm{#2}}}}\n",
    "\\newcommand{\\name}[1]{\\mathsf{\\namedtensorstrut #1}}\n",
    "\\newcommand{\\ndef}[2]{\\newcommand{#1}{\\name{#2}}}\n",
    "\\ndef{\\ax}{ax}\n",
    "\\ndef{\\bx}{bx}\n",
    "\\newcommand{\\reals}{\\mathbb{R}}\n",
    "\\ndef{\\batch}{batch}\n",
    "\\ndef{\\layer}{layer}\n",
    "\\ndef{\\chans}{chans}\n",
    "\\ndef{\\key}{key}\n",
    "\\ndef{\\seq}{seq}\n",
    "\\ndef{\\val}{val}\n",
    "\\ndef{\\heads}{heads}\n",
    "\\ndef{\\hidden}{hidden}\n",
    "\\ndef{\\height}{height}\n",
    "\\ndef{\\width}{width}\n",
    "\\newcommand{\\nbin}[2]{\\mathbin{\\underset{\\substack{#1}}{\\namedtensorstrut #2}}}\n",
    "\\newcommand{\\ndot}[1]{\\nbin{#1}{\\odot}}\n",
    "\\text{Attention} \\colon \\mathbb{R}^{\\key} \\times \\mathbb{R}^{\\seq \\times\\key} \\times \\mathbb{R}^{\\seq \\times\\val} &\\rightarrow \\mathbb{R}^{\\val} \\\\\n",
    "  \\text{Attention}(Q,K,V) &= \\left( \\nfun{\\seq}{softmax} \\frac{Q \\ndot{\\key} K}{\\sqrt{|\\key|}} \\right) \\ndot{\\seq} V.\n",
    "\\end{align*}\n",
    "\n",
    "During inference, when we compute the attention for the $t$-th token of a sequence, we get:\n",
    "\\begin{align*}\n",
    "\\text{Attention} \\colon \\mathbb{R}^{\\key} \\times \\mathbb{R}^{\\seq(t-b:t) \\times\\key} \\times \\mathbb{R}^{\\seq(t-b:t) \\times\\val} &\\rightarrow \\mathbb{R}^{\\val} \\\\\n",
    "  \\text{Attention}(Q_t,K_t,V_t) &= \\left( \\nfun{\\seq}{softmax} \\frac{Q_t \\ndot{\\key} K_t}{\\sqrt{|\\key|}} \\right) \\ndot{\\seq} V_t,\n",
    "\\end{align*}\n",
    "where $b$ is the size of a block and $t-b$ should be interpreted as $\\max(t-b,0)$.\n",
    "\n",
    "For the computation at time $t+1$, note that to obtain $K_{t+1}$ and $V_{t+1}$ from $K_t$ and $V_t$, we only need to compute the last index of $\\seq(t-b+1:t+1)$, provided we have stored all the other indices $\\seq(t-b+1:t)$. This is exactly what the KV cache does!\n",
    "\n",
    "![](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*uyuyOW1VBqmF5Gtv225XHQ.gif)"
   ]
  },
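  {
   "cell_type": "markdown",
   "id": "3f2d8c1a-7b4e-4a9d-9c0e-5a1b2c3d4e5f",
   "metadata": {},
   "source": [
    "The incremental computation above can be checked on plain tensors: compute full causal attention over a sequence, then recompute only the last position against the cached keys and values, and compare. A minimal single-head sketch (no projections, hypothetical sizes):\n",
    "```python\n",
    "import torch\n",
    "import torch.nn.functional as F\n",
    "\n",
    "torch.manual_seed(0)\n",
    "T, d = 5, 8\n",
    "q, k, v = torch.randn(T, d), torch.randn(T, d), torch.randn(T, d)\n",
    "\n",
    "# full causal attention over all T positions\n",
    "mask = torch.tril(torch.ones(T, T)).bool()\n",
    "att = (q @ k.T) / d**0.5\n",
    "att = att.masked_fill(~mask, float('-inf'))\n",
    "full = F.softmax(att, dim=-1) @ v\n",
    "\n",
    "# last position only, reusing the cached k and v\n",
    "# (the last row of the causal mask is all ones, so no masking is needed)\n",
    "att_t = (q[-1:] @ k.T) / d**0.5\n",
    "cached = F.softmax(att_t, dim=-1) @ v\n",
    "\n",
    "print(torch.allclose(full[-1:], cached, atol=1e-6))  # True\n",
    "```"
   ]
  },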
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "11df31b3-34e2-4634-9e7d-d1e0309e1482",
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "from dataclasses import dataclass\n",
    "import time\n",
    "import numpy as np\n",
    "\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "from torch.nn import functional as F"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f6e5dc9b-78d7-4eef-b4f3-d47a5d526e9b",
   "metadata": {},
   "source": [
    "## Modifying Self-attention\n",
    "\n",
    "We start from Karpathy's code:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "31018dff-dd45-4ebe-bfc3-bd86eb8f9b3e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# source: https://github.com/karpathy/minGPT/blob/master/mingpt/model.py\n",
    "class CausalSelfAttention(nn.Module):\n",
    "    \"\"\"\n",
    "    A vanilla multi-head masked self-attention layer with a projection at the end.\n",
    "    It is possible to use torch.nn.MultiheadAttention here but I am including an\n",
    "    explicit implementation here to show that there is nothing too scary here.\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, config):\n",
    "        super().__init__()\n",
    "        assert config.n_embd % config.n_head == 0\n",
    "        # key, query, value projections for all heads, but in a batch\n",
    "        self.c_attn = nn.Linear(config.n_embd, 3 * config.n_embd)\n",
    "        # output projection\n",
    "        self.c_proj = nn.Linear(config.n_embd, config.n_embd)\n",
    "        # regularization\n",
    "        self.attn_dropout = nn.Dropout(config.attn_pdrop)\n",
    "        self.resid_dropout = nn.Dropout(config.resid_pdrop)\n",
    "        # causal mask to ensure that attention is only applied to the left in the input sequence\n",
    "        self.register_buffer(\"bias\", torch.tril(torch.ones(config.block_size, config.block_size))\n",
    "                                     .view(1, 1, config.block_size, config.block_size))\n",
    "        self.n_head = config.n_head\n",
    "        self.n_embd = config.n_embd\n",
    "\n",
    "    def forward(self, x):\n",
    "        B, T, C = x.size() # batch size, sequence length, embedding dimensionality (n_embd)\n",
    "\n",
    "        # calculate query, key, values for all heads in batch and move head forward to be the batch dim\n",
    "        q, k ,v  = self.c_attn(x).split(self.n_embd, dim=2)\n",
    "        k = k.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)\n",
    "        q = q.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)\n",
    "        v = v.view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)\n",
    "\n",
    "        # causal self-attention; Self-attend: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T)\n",
    "        att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))\n",
    "        att = att.masked_fill(self.bias[:,:,:T,:T] == 0, float('-inf'))\n",
    "        att = F.softmax(att, dim=-1)\n",
    "        att = self.attn_dropout(att)\n",
    "        y = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)\n",
    "        y = y.transpose(1, 2).contiguous().view(B, T, C) # re-assemble all head outputs side by side\n",
    "\n",
    "        # output projection\n",
    "        y = self.resid_dropout(self.c_proj(y))\n",
    "        return y"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f95c9770-10de-4996-8ac0-fe3b763daf00",
   "metadata": {},
   "outputs": [],
   "source": [
    "@dataclass\n",
    "class Config:\n",
    "    n_head = 3\n",
    "    n_embd = 15\n",
    "    block_size = 11\n",
    "    # dropout hyperparameters\n",
    "    embd_pdrop = 0.1\n",
    "    resid_pdrop = 0.1\n",
    "    attn_pdrop = 0.1\n",
    "    \n",
    "config = Config()\n",
    "csa = CausalSelfAttention(config)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "60039c6e-6a22-43b4-8018-05f9d7fac1b9",
   "metadata": {},
   "outputs": [],
   "source": [
    "bs = 6\n",
    "x = torch.randn(bs, config.block_size, config.n_embd)\n",
    "out = csa(x)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d7fd1521-2e1b-4a86-b59a-3e0c84e3b443",
   "metadata": {},
   "outputs": [],
   "source": [
    "out.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bad18671-6ad5-4039-a062-9e43a0f79adc",
   "metadata": {},
   "outputs": [],
   "source": [
    "csa.bias.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "18fdac7d-0673-4a36-8406-46a762e5aead",
   "metadata": {},
   "source": [
    "Now, we need to modify the code in order to add a KV cache. We propose a simple modification where the forward pass takes, in addition to `x`, the `kv_cache` as a list of tensors `[k, v]`, and returns the output `y` together with the updated `kv_cache`:"
   ]
  },
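  {
   "cell_type": "markdown",
   "id": "c7a91f2e-0d35-4b6a-8e44-1f6b9d2a3c58",
   "metadata": {},
   "source": [
    "As a hint, the cache update itself is just a concatenation along the sequence dimension; a minimal sketch on raw tensors (the `(B, T, C)` shapes below are assumptions matching the config used in this notebook):\n",
    "```python\n",
    "import torch\n",
    "\n",
    "B, C = 6, 15\n",
    "k_new = torch.randn(B, 1, C)  # keys of the newly fed token\n",
    "v_new = torch.randn(B, 1, C)  # values of the newly fed token\n",
    "kv_cache = [torch.randn(B, 10, C), torch.randn(B, 10, C)]  # cached k, v\n",
    "\n",
    "if kv_cache is not None:\n",
    "    k = torch.cat([kv_cache[0], k_new], dim=1)  # (B, 11, C)\n",
    "    v = torch.cat([kv_cache[1], v_new], dim=1)  # (B, 11, C)\n",
    "else:\n",
    "    k, v = k_new, v_new\n",
    "kv_cache = [k, v]  # store for the next step\n",
    "print(k.shape, v.shape)\n",
    "```"
   ]
  },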
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d298b80b-f835-4756-9d68-756b09f3f500",
   "metadata": {},
   "outputs": [],
   "source": [
    "class CausalSelfAttention_kv(nn.Module):\n",
    "    \"\"\"\n",
    "    A vanilla multi-head masked self-attention layer with a projection at the end.\n",
    "    It is possible to use torch.nn.MultiheadAttention here but I am including an\n",
    "    explicit implementation here to show that there is nothing too scary here.\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, config):\n",
    "        super().__init__()\n",
    "        assert config.n_embd % config.n_head == 0\n",
    "        # key, query, value projections for all heads, but in a batch\n",
    "        self.c_attn = nn.Linear(config.n_embd, 3 * config.n_embd)\n",
    "        # output projection\n",
    "        self.c_proj = nn.Linear(config.n_embd, config.n_embd)\n",
    "        # regularization\n",
    "        self.attn_dropout = nn.Dropout(config.attn_pdrop)\n",
    "        self.resid_dropout = nn.Dropout(config.resid_pdrop)\n",
    "        # causal mask to ensure that attention is only applied to the left in the input sequence\n",
    "        self.register_buffer(\"bias\", torch.tril(torch.ones(config.block_size, config.block_size))\n",
    "                                     .view(1, 1, config.block_size, config.block_size))\n",
    "        self.n_head = config.n_head\n",
    "        self.n_embd = config.n_embd\n",
    "        self.block_size = config.block_size\n",
    "\n",
    "    def forward(self, x, kv_cache=None):\n",
    "        B, T, C = x.size() # batch size, sequence length, embedding dimensionality (n_embd)\n",
    "        \n",
    "        # calculate query, key, values for all heads in batch and move head forward to be the batch dim\n",
    "        q, k ,v  = self.c_attn(x).split(self.n_embd, dim=2)\n",
    "        \n",
    "        ###\n",
    "        # your code here\n",
    "        ###\n",
    "        \n",
    "        # output projection\n",
    "        y = self.resid_dropout(self.c_proj(y))\n",
    "        return y, kv_cache"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0803d18e-f407-4917-981e-8f2ec75a5e83",
   "metadata": {},
   "outputs": [],
   "source": [
    "config = Config()\n",
    "csa = CausalSelfAttention_kv(config)\n",
    "csa.eval()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9c2a1578-5628-4424-bad2-9e38fadd14e1",
   "metadata": {},
   "outputs": [],
   "source": [
    "out, kv = csa(x)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ba0feab0-e775-4ee9-8115-349eb4967fec",
   "metadata": {},
   "outputs": [],
   "source": [
    "x.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8a469868-5d3b-4787-863b-2820da0f8af4",
   "metadata": {},
   "source": [
    "Check the shape of the kv cache."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b4b6ac48-9ee5-4bdb-a88b-220b5f639d73",
   "metadata": {},
   "outputs": [],
   "source": [
    "kv[0][:,:-1,:].shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "436f44ed-ac0c-4712-a884-2f642550bc54",
   "metadata": {},
   "outputs": [],
   "source": [
    "first = x[:,:10,:]\n",
    "last = x[:,[10],:]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "395786a0-1eca-48d0-8c0f-ef3f94e4a3ba",
   "metadata": {},
   "outputs": [],
   "source": [
    "out_kv, kv_cache = csa(last, kv_cache=[kv[0][:,:-1,:], kv[1][:,:-1,:]])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "84d42450-0cfe-4c9f-91fb-5dde014458ba",
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.isclose(out[:,-1,:], out_kv[:,0,:])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "15bec824-a5c2-4d62-98ca-1ab2d48c06f9",
   "metadata": {},
   "outputs": [],
   "source": [
    "for k in range(10):\n",
    "    out_kv, kv_cache = csa(x[:,-k:,:], kv_cache=[kv[0][:,:-k,:], kv[1][:,:-k,:]])\n",
    "    print(k, torch.allclose(out[:,-k,:], out_kv[:,0,:], rtol=1e-4))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9655cdbd-dd8f-434c-a9e8-40570da62cde",
   "metadata": {},
   "source": [
    "## Modifying the Block\n",
    "\n",
    "Here is Karpathy's original code:\n",
    "```python\n",
    "class Block(nn.Module):\n",
    "    \"\"\" an unassuming Transformer block \"\"\"\n",
    "\n",
    "    def __init__(self, config):\n",
    "        super().__init__()\n",
    "        self.ln_1 = nn.LayerNorm(config.n_embd)\n",
    "        self.attn = CausalSelfAttention(config)\n",
    "        self.ln_2 = nn.LayerNorm(config.n_embd)\n",
    "        self.mlp = nn.ModuleDict(dict(\n",
    "            c_fc    = nn.Linear(config.n_embd, 4 * config.n_embd),\n",
    "            c_proj  = nn.Linear(4 * config.n_embd, config.n_embd),\n",
    "            act     = NewGELU(),\n",
    "            dropout = nn.Dropout(config.resid_pdrop),\n",
    "        ))\n",
    "        m = self.mlp\n",
    "        self.mlpf = lambda x: m.dropout(m.c_proj(m.act(m.c_fc(x)))) # MLP forward\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = x + self.attn(self.ln_1(x))\n",
    "        x = x + self.mlpf(self.ln_2(x))\n",
    "        return x\n",
    "```\n",
    "\n",
    "and how it is used in the GPT class:\n",
    "```python\n",
    "class GPT(nn.Module):\n",
    "    def __init__(self, config):\n",
    "        ...\n",
    "        self.transformer = nn.ModuleDict(dict(\n",
    "            wte = nn.Embedding(config.vocab_size, config.n_embd),\n",
    "            wpe = nn.Embedding(config.block_size, config.n_embd),\n",
    "            drop = nn.Dropout(config.embd_pdrop),\n",
    "            h = nn.ModuleList([Block(config) for _ in range(config.n_layer)]),\n",
    "            ln_f = nn.LayerNorm(config.n_embd),\n",
    "        ))\n",
    "        self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)\n",
    "        ...\n",
    "        \n",
    "    def forward(self, idx, targets=None):\n",
    "        device = idx.device\n",
    "        b, t = idx.size()\n",
    "        assert t <= self.block_size, f\"Cannot forward sequence of length {t}, block size is only {self.block_size}\"\n",
    "        pos = torch.arange(0, t, dtype=torch.long, device=device).unsqueeze(0) # shape (1, t)\n",
    "\n",
    "        # forward the GPT model itself\n",
    "        tok_emb = self.transformer.wte(idx) # token embeddings of shape (b, t, n_embd)\n",
    "        pos_emb = self.transformer.wpe(pos) # position embeddings of shape (1, t, n_embd)\n",
    "        x = self.transformer.drop(tok_emb + pos_emb)\n",
    "        for block in self.transformer.h:\n",
    "            x = block(x)\n",
    "        x = self.transformer.ln_f(x)\n",
    "        logits = self.lm_head(x)\n",
    "\n",
    "        # if we are given some desired targets also calculate the loss\n",
    "        loss = None\n",
    "        if targets is not None:\n",
    "            loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1), ignore_index=-1)\n",
    "\n",
    "        return logits, loss\n",
    "\n",
    "```\n",
    "\n",
    "You first need to adapt the `Block` to include the KV cache. Provide some tests for your code."
   ]
  },
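  {
   "cell_type": "markdown",
   "id": "9e5b3a60-2c8f-4d17-b0a9-7d4e6f1c2b83",
   "metadata": {},
   "source": [
    "The block only needs to thread the cache through its attention sub-module; the residual and MLP parts are unchanged. A structural sketch with a stand-in attention (not the real `CausalSelfAttention_kv`):\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class TinyBlock(nn.Module):\n",
    "    def __init__(self, d):\n",
    "        super().__init__()\n",
    "        self.ln_1 = nn.LayerNorm(d)\n",
    "        self.ln_2 = nn.LayerNorm(d)\n",
    "        self.mlp = nn.Linear(d, d)  # stand-in for the real MLP\n",
    "        self.attn = lambda x, kv_cache=None: (x, kv_cache)  # stand-in attention\n",
    "\n",
    "    def forward(self, x, kv_cache=None):\n",
    "        y, kv_cache = self.attn(self.ln_1(x), kv_cache=kv_cache)\n",
    "        x = x + y\n",
    "        x = x + self.mlp(self.ln_2(x))\n",
    "        return x, kv_cache\n",
    "\n",
    "blk = TinyBlock(15)\n",
    "out, cache = blk(torch.randn(6, 11, 15))\n",
    "print(out.shape, cache)  # torch.Size([6, 11, 15]) None\n",
    "```"
   ]
  },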
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3616106c-4c02-4f46-8669-625404386c1b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from mingpt.model import NewGELU\n",
    "\n",
    "class Block_kv(nn.Module):\n",
    "    \"\"\" an unassuming Transformer block \"\"\"\n",
    "\n",
    "    def __init__(self, config):\n",
    "        super().__init__()\n",
    "        self.ln_1 = nn.LayerNorm(config.n_embd)\n",
    "        self.attn = CausalSelfAttention_kv(config)\n",
    "        self.ln_2 = nn.LayerNorm(config.n_embd)\n",
    "        self.mlp = nn.ModuleDict(dict(\n",
    "            c_fc    = nn.Linear(config.n_embd, 4 * config.n_embd),\n",
    "            c_proj  = nn.Linear(4 * config.n_embd, config.n_embd),\n",
    "            act     = NewGELU(),\n",
    "            dropout = nn.Dropout(config.resid_pdrop),\n",
    "        ))\n",
    "        m = self.mlp\n",
    "        self.mlpf = lambda x: m.dropout(m.c_proj(m.act(m.c_fc(x)))) # MLP forward\n",
    "\n",
    "    def forward(self, x, kv_cache=None):\n",
    "        ###\n",
    "        # your code here\n",
    "        ###"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ef36719f-5b91-4a43-817f-bae5bf691879",
   "metadata": {},
   "outputs": [],
   "source": [
    "bkv = Block_kv(config)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d7170627-fe45-408a-a5a3-e9c281896e2c",
   "metadata": {},
   "outputs": [],
   "source": [
    "bkv.eval()\n",
    "out, kv = bkv(x)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7333b465-7ee3-49e2-8012-15f08d5a8406",
   "metadata": {},
   "outputs": [],
   "source": [
    "first = x[:,:10,:]\n",
    "last = x[:,[10],:]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0ee3bf1b-f10c-46fd-9abe-0d8e583edc0b",
   "metadata": {},
   "outputs": [],
   "source": [
    "out_first, kv_first = bkv(first)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3280ad92-6ac4-4c4f-92db-02b35f826902",
   "metadata": {},
   "outputs": [],
   "source": [
    "out_kv, kv_cache = bkv(last, kv_cache=kv_first)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "11d1087e-cc50-4deb-bc23-3bec2f048529",
   "metadata": {},
   "outputs": [],
   "source": [
    "out_kv, kv_cache = bkv(last, kv_cache=[kv[0][:,:-1,:], kv[1][:,:-1,:]])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2231f1e7-109a-4ac7-9fc6-3aa37115d98d",
   "metadata": {},
   "outputs": [],
   "source": [
    "kv[0].shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2fd97b65-8d5c-43f9-af35-0e875d8a75c3",
   "metadata": {},
   "outputs": [],
   "source": [
    "out_kv.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "368b56b5-ac77-4100-a422-7c3400d7955c",
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.isclose(out[:,-1,:], out_kv[:,0,:])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2b1a45d5-8eaf-43a1-bcc2-34c448bc7bba",
   "metadata": {},
   "source": [
    "## Modifying the GPT class\n",
    "\n",
    "Now we need to adapt the main class to include the KV cache. The only change needed in `__init__` has already been made: it uses `Block_kv` instead of `Block`.\n",
    "Then you need to override the `forward` method (see above) and the `generate` method shown below:\n",
    "```python\n",
    "    @torch.no_grad()\n",
    "    def generate(self, idx, max_new_tokens, temperature=1.0, do_sample=False, top_k=None):\n",
    "        \"\"\"\n",
    "        Take a conditioning sequence of indices idx (LongTensor of shape (b,t)) and complete\n",
    "        the sequence max_new_tokens times, feeding the predictions back into the model each time.\n",
    "        Most likely you'll want to make sure to be in model.eval() mode of operation for this.\n",
    "        \"\"\"\n",
    "        for _ in range(max_new_tokens):\n",
    "            # if the sequence context is growing too long we must crop it at block_size\n",
    "            idx_cond = idx if idx.size(1) <= self.block_size else idx[:, -self.block_size:]\n",
    "            # forward the model to get the logits for the index in the sequence\n",
    "            logits, _ = self(idx_cond)\n",
    "            # pluck the logits at the final step and scale by desired temperature\n",
    "            logits = logits[:, -1, :] / temperature\n",
    "            # optionally crop the logits to only the top k options\n",
    "            if top_k is not None:\n",
    "                v, _ = torch.topk(logits, top_k)\n",
    "                logits[logits < v[:, [-1]]] = -float('Inf')\n",
    "            # apply softmax to convert logits to (normalized) probabilities\n",
    "            probs = F.softmax(logits, dim=-1)\n",
    "            # either sample from the distribution or take the most likely element\n",
    "            if do_sample:\n",
    "                idx_next = torch.multinomial(probs, num_samples=1)\n",
    "            else:\n",
    "                _, idx_next = torch.topk(probs, k=1, dim=-1)\n",
    "            # append sampled index to the running sequence and continue\n",
    "            idx = torch.cat((idx, idx_next), dim=1)\n",
    "\n",
    "        return idx\n",
    "```"
   ]
  },
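  {
   "cell_type": "markdown",
   "id": "5d0c7e21-8f4a-4b3d-a6e9-2b1c8d7f3a40",
   "metadata": {},
   "source": [
    "One subtlety in `forward`: when only the new token is fed, its positional embedding must still use its absolute position in the sequence, not position 0. A sketch of the index computation (the variable names are assumptions):\n",
    "```python\n",
    "import torch\n",
    "\n",
    "t_cached = 6  # number of positions already stored in the kv cache\n",
    "t_new = 1     # number of new tokens being fed\n",
    "pos = torch.arange(t_cached, t_cached + t_new, dtype=torch.long).unsqueeze(0)\n",
    "print(pos)  # tensor([[6]])\n",
    "```"
   ]
  },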
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b468a7b9-27b8-4b54-af0d-b5a7385b5b8e",
   "metadata": {},
   "outputs": [],
   "source": [
    "from mingpt.model import GPT\n",
    "\n",
    "class GPT_kv(GPT):\n",
    "    def __init__(self, config):\n",
    "        super().__init__(config)\n",
    "        self.transformer = nn.ModuleDict(dict(\n",
    "            wte = nn.Embedding(config.vocab_size, config.n_embd),\n",
    "            wpe = nn.Embedding(config.block_size, config.n_embd),\n",
    "            drop = nn.Dropout(config.embd_pdrop),\n",
    "            h = nn.ModuleList([Block_kv(config) for _ in range(config.n_layer)]),\n",
    "            ln_f = nn.LayerNorm(config.n_embd),\n",
    "        ))\n",
    "        self.n_layer = config.n_layer\n",
    "        # init all weights, and apply a special scaled init to the residual projections, per GPT-2 paper\n",
    "        self.apply(self._init_weights)\n",
    "        for pn, p in self.named_parameters():\n",
    "            if pn.endswith('c_proj.weight'):\n",
    "                torch.nn.init.normal_(p, mean=0.0, std=0.02/math.sqrt(2 * config.n_layer))\n",
    "    \n",
    "    def forward(self, idx, targets=None, kv_cache=None, compute_first=False):\n",
    "        device = idx.device\n",
    "        b, t = idx.size()\n",
    "        assert t <= self.block_size, f\"Cannot forward sequence of length {t}, block size is only {self.block_size}\"\n",
    "        pos = torch.arange(0, t, dtype=torch.long, device=device).unsqueeze(0) # shape (1, t)\n",
    "\n",
    "        ###\n",
    "        # your code here\n",
    "        ###\n",
    "        x = self.transformer.ln_f(x)\n",
    "        logits = self.lm_head(x)\n",
    "\n",
    "        # if we are given some desired targets also calculate the loss\n",
    "        loss = None\n",
    "        if targets is not None:\n",
    "            loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1), ignore_index=-1)\n",
    "\n",
    "        if kv_cache is None:\n",
    "            return logits, loss\n",
    "        else:\n",
    "            return logits, loss, new_kv_cache\n",
    "\n",
    "    @torch.no_grad()\n",
    "    def generate_kv(self, idx, max_new_tokens, temperature=1.0, do_sample=False, top_k=None):\n",
    "        \"\"\"\n",
    "        Take a conditioning sequence of indices idx (LongTensor of shape (b,t)) and complete\n",
    "        the sequence max_new_tokens times, feeding the predictions back into the model each time.\n",
    "        Most likely you'll want to make sure to be in model.eval() mode of operation for this.\n",
    "        \"\"\"\n",
    "        ###\n",
    "        # your code here\n",
    "        ###"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "02b21e8c-8607-4f0d-bd89-89e6ddfd06e6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# create a GPT instance\n",
    "model_config = GPT.get_default_config()\n",
    "model_config.model_type = 'gpt-nano'\n",
    "model_config.vocab_size = 3\n",
    "model_config.block_size = 100\n",
    "model = GPT_kv(model_config)\n",
    "model.eval();"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7c551292-7256-4ed0-9501-df268bae27cb",
   "metadata": {},
   "source": [
    "Here is a sample of length 7 to run some tests for the `forward` method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a0263909-fe17-45da-87fd-4f7743c9f9fe",
   "metadata": {},
   "outputs": [],
   "source": [
    "inp = torch.tensor([[0, 0, 2, 1, 0, 1, 2]], dtype=torch.long)\n",
    "inp.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "71dd772f-63ae-400b-9d32-8d9c2884d732",
   "metadata": {},
   "outputs": [],
   "source": [
    "logits, _ = model(inp)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3fab0e84-f924-4e33-b1cf-107329d18c13",
   "metadata": {},
   "outputs": [],
   "source": [
    "kv_cache = [None] * model_config.n_layer\n",
    "logits_kv, _, kv_cache = model(inp[:,[0]], kv_cache=kv_cache)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "da7232fc-ed3a-47ba-95a0-fb424ab0aa63",
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.isclose(logits[:,0,:], logits_kv[:,0,:])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2df68ef7-9826-4674-8384-87d62217efab",
   "metadata": {},
   "outputs": [],
   "source": [
    "logits_kv, _, kv_cache = model(inp[:,0:2], kv_cache=kv_cache)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "80b4dbf2-2709-45ba-adcf-06da65fe7230",
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.isclose(logits[:,1,:], logits_kv[:,0,:])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c09f7544-5f76-476b-8e37-50e52e046365",
   "metadata": {},
   "outputs": [],
   "source": [
    "logits_kv, _, kv_cache = model(inp[:,0:3], kv_cache=kv_cache)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dc163d94-c3e5-4d6d-8762-066525364e95",
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.isclose(logits[:,2,:], logits_kv[:,0,:])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "46dafe6f-af1c-459c-b692-1bcb6a899fcb",
   "metadata": {},
   "outputs": [],
   "source": [
    "logits_kv[:,0,:].shape"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6f90fb0c-4ecf-4f66-bb23-9df6cbe44c9e",
   "metadata": {},
   "source": [
    "Another test related to the `forward` method before testing `generate`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5cd71707-6df5-4283-96c4-beb04edb19cc",
   "metadata": {},
   "outputs": [],
   "source": [
    "kv_cache = [None] * model_config.n_layer\n",
    "logits_kv1, _, kv_cache1 = model(inp[:,0:2], kv_cache=kv_cache, compute_first=True) #you might want to modify this line "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b1bf1b41-2b87-4dcf-ae5d-0109a321f8ef",
   "metadata": {},
   "outputs": [],
   "source": [
    "logits_kv2, _, kv_cache2 = model(inp[:,0:3], kv_cache=kv_cache1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ebd12af6-464e-40ba-ad5a-27a5afc74f85",
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.isclose(logits_kv2[:,0,:], logits_kv[:,0,:])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "63f34747-d186-4afe-ae34-59eba3bd404b",
   "metadata": {},
   "outputs": [],
   "source": [
    "with torch.no_grad():\n",
    "    cat = model.generate_kv(inp, 10, do_sample=False)                                       \n",
    "cat"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "429b0ef3-15cc-472b-b174-4f7539d65374",
   "metadata": {},
   "outputs": [],
   "source": [
    "cat.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ae8000ca-5a43-4c28-b108-737fb5151240",
   "metadata": {},
   "outputs": [],
   "source": [
    "inp"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "229c8120-946c-4283-a00f-d1fac1e26ea8",
   "metadata": {},
   "outputs": [],
   "source": [
    "out, _ = model(cat)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "df98b75e-e8a4-48b0-b4e7-1ee994c42fea",
   "metadata": {},
   "outputs": [],
   "source": [
    "out.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "40eecbc4-4648-452b-966f-1a8067ff24d3",
   "metadata": {},
   "source": [
    "## Learning to sort\n",
    "\n",
    "We use the [demo](https://github.com/karpathy/minGPT/blob/master/demo.ipynb) to check that our code is running fine!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ff8ae55b-8a50-45ca-b6c6-8f116e24dbfd",
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch.utils.data import Dataset\n",
    "from torch.utils.data.dataloader import DataLoader\n",
    "from mingpt.utils import set_seed\n",
    "set_seed(3407)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a8a3f45d-9f34-489e-9230-c95402cc5959",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pickle\n",
    "\n",
    "class SortDataset(Dataset):\n",
    "    \"\"\" \n",
    "    Dataset for the Sort problem. E.g. for problem length 6:\n",
    "    Input: 0 0 2 1 0 1 -> Output: 0 0 0 1 1 2\n",
    "    Which will feed into the transformer concatenated as:\n",
    "    input:  0 0 2 1 0 1 0 0 0 1 1\n",
    "    output: I I I I I 0 0 0 1 1 2\n",
    "    where I is \"ignore\", as the transformer is reading the input sequence\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, split, length=6, num_digits=3):\n",
    "        assert split in {'train', 'test'}\n",
    "        self.split = split\n",
    "        self.length = length\n",
    "        self.num_digits = num_digits\n",
    "    \n",
    "    def __len__(self):\n",
    "        return 10000 # ...\n",
    "    \n",
    "    def get_vocab_size(self):\n",
    "        return self.num_digits\n",
    "    \n",
    "    def get_block_size(self):\n",
    "        # the length of the sequence that will feed into transformer, \n",
    "        # containing concatenated input and the output, but -1 because\n",
    "        # the transformer starts making predictions at the last input element\n",
    "        return self.length * 2 - 1\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        \n",
    "        # use rejection sampling to generate an input example from the desired split\n",
    "        while True:\n",
    "            # generate some random integers\n",
    "            inp = torch.randint(self.num_digits, size=(self.length,), dtype=torch.long)\n",
    "            # half of the time let's try to boost the number of examples that \n",
    "            # have a large number of repeats, as this is what the model seems to struggle\n",
    "            # with later in training, and they are kind of rare\n",
    "            if torch.rand(1).item() < 0.5:\n",
    "                if inp.unique().nelement() > self.length // 2:\n",
    "                    # too many unique digits, re-sample\n",
    "                    continue\n",
    "            # figure out if this generated example is train or test based on its hash\n",
    "            h = hash(pickle.dumps(inp.tolist()))\n",
    "            inp_split = 'test' if h % 4 == 0 else 'train' # designate 25% of examples as test\n",
    "            if inp_split == self.split:\n",
    "                break # ok\n",
    "        \n",
    "        # solve the task: i.e. sort\n",
    "        sol = torch.sort(inp)[0]\n",
    "\n",
    "        # concatenate the problem specification and the solution\n",
    "        cat = torch.cat((inp, sol), dim=0)\n",
    "\n",
    "        # the inputs to the transformer will be the offset sequence\n",
    "        x = cat[:-1].clone()\n",
    "        y = cat[1:].clone()\n",
    "        # we only want to predict at output locations, mask out the loss at the input locations\n",
    "        y[:self.length-1] = -1\n",
    "        return x, y"
   ]
  },
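  {
   "cell_type": "markdown",
   "id": "kv-note-dataset-offsets",
   "metadata": {},
   "source": [
    "As a sanity check (not part of the original minGPT code), here is a minimal sketch of how `__getitem__` builds the offset `(x, y)` pair, assuming `length = 3` and a hand-picked input `[2, 0, 1]`:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "length = 3\n",
    "inp = torch.tensor([2, 0, 1])\n",
    "sol = torch.sort(inp)[0]            # [0, 1, 2]\n",
    "cat = torch.cat((inp, sol), dim=0)  # [2, 0, 1, 0, 1, 2]\n",
    "x, y = cat[:-1].clone(), cat[1:].clone()\n",
    "y[:length - 1] = -1                 # mask the loss at input positions\n",
    "print(x.tolist())  # [2, 0, 1, 0, 1]\n",
    "print(y.tolist())  # [-1, -1, 0, 1, 2]\n",
    "```\n",
    "\n",
    "So `y` is `x` shifted left by one, with `-1` marking positions where no loss is computed."
   ]
  },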
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0d25f5c-07e4-4d7c-a05e-5b0c8b596540",
   "metadata": {},
   "outputs": [],
   "source": [
    "# print an example instance of the dataset\n",
    "train_dataset = SortDataset('train')\n",
    "test_dataset = SortDataset('test')\n",
    "x, y = train_dataset[0]\n",
    "for a, b in zip(x, y):\n",
    "    print(int(a),int(b))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8db3c7c2-7156-47ab-b804-644ff763a5bd",
   "metadata": {},
   "outputs": [],
   "source": [
    "model_config = GPT.get_default_config()\n",
    "model_config.model_type = 'gpt-nano'\n",
    "model_config.vocab_size = train_dataset.get_vocab_size()\n",
    "model_config.block_size = train_dataset.get_block_size()\n",
    "model = GPT_kv(model_config)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8863cb9a-c0cc-435a-a8b6-ae7706014d32",
   "metadata": {},
   "outputs": [],
   "source": [
    "# create a Trainer object\n",
    "from mingpt.trainer import Trainer\n",
    "\n",
    "train_config = Trainer.get_default_config()\n",
    "train_config.learning_rate = 5e-4 # the model we're using is so small that we can go a bit faster\n",
    "train_config.max_iters = 1000\n",
    "train_config.num_workers = 0\n",
    "trainer = Trainer(train_config, model, train_dataset)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "97d77ac0-bad9-421a-90ee-99bbcd651d4b",
   "metadata": {},
   "outputs": [],
   "source": [
    "def batch_end_callback(trainer):\n",
    "    if trainer.iter_num % 100 == 0:\n",
    "        print(f\"iter_dt {trainer.iter_dt * 1000:.2f}ms; iter {trainer.iter_num}: train loss {trainer.loss.item():.5f}\")\n",
    "trainer.set_callback('on_batch_end', batch_end_callback)\n",
    "\n",
    "trainer.run()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2c835078-d3b6-4863-979f-8d68cb79f13a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# now let's perform some evaluation\n",
    "model.eval();"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9ccd52fa-f56e-4bdc-95bd-5b3870045bdf",
   "metadata": {},
   "outputs": [],
   "source": [
    "loader = DataLoader(train_dataset, batch_size=10, num_workers=0, drop_last=False)\n",
    "x, y = next(iter(loader))\n",
    "n = train_dataset.length\n",
    "x = x.to(trainer.device)\n",
    "y = y.to(trainer.device)\n",
    "# isolate the input pattern alone\n",
    "inp = x[:, :n]\n",
    "sol = y[:, -n:]\n",
    "# let the model sample the rest of the sequence\n",
    "cat = model.generate(inp, n, do_sample=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "96d6a034-0475-4ca5-9282-9d2b1b5a2c7a",
   "metadata": {},
   "outputs": [],
   "source": [
    "def eval_split(trainer, split, max_batches):\n",
    "    dataset = {'train':train_dataset, 'test':test_dataset}[split]\n",
    "    n = train_dataset.length # naughty direct access, shrug\n",
    "    results = []\n",
    "    mistakes_printed_already = 0\n",
    "    loader = DataLoader(dataset, batch_size=100, num_workers=0, drop_last=False)\n",
    "    for b, (x, y) in enumerate(loader):\n",
    "        x = x.to(trainer.device)\n",
    "        y = y.to(trainer.device)\n",
    "        # isolate the input pattern alone\n",
    "        inp = x[:, :n]\n",
    "        sol = y[:, -n:]\n",
    "        # let the model sample the rest of the sequence\n",
    "        cat = model.generate_kv(inp, n, do_sample=False) # using greedy argmax, not sampling\n",
    "        sol_candidate = cat[:, -n:] # isolate the filled in sequence\n",
    "        # compare the predicted sequence to the true sequence\n",
    "        correct = (sol == sol_candidate).all(1).cpu() # Software 1.0 vs. Software 2.0 fight RIGHT on this line haha\n",
    "        for i in range(x.size(0)):\n",
    "            results.append(int(correct[i]))\n",
    "            if not correct[i] and mistakes_printed_already < 3: # only print up to 3 mistakes to get a sense\n",
    "                mistakes_printed_already += 1\n",
    "                print(\"GPT claims that %s sorted is %s but gt is %s\" % (inp[i].tolist(), sol_candidate[i].tolist(), sol[i].tolist()))\n",
    "        if max_batches is not None and b+1 >= max_batches:\n",
    "            break\n",
    "    rt = torch.tensor(results, dtype=torch.float)\n",
    "    print(\"%s final score: %d/%d = %.2f%% correct\" % (split, rt.sum(), len(results), 100*rt.mean()))\n",
    "    return rt.sum()\n",
    "\n",
    "# run a lot of examples from both train and test through the model and verify the output correctness\n",
    "with torch.no_grad():\n",
    "    train_score = eval_split(trainer, 'train', max_batches=50)\n",
    "    test_score  = eval_split(trainer, 'test',  max_batches=50)"
   ]
  },
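  {
   "cell_type": "markdown",
   "id": "kv-note-cache-equivalence",
   "metadata": {},
   "source": [
    "Why is caching safe? In causal attention the keys and values of past positions never change, so appending them to a cache one step at a time yields exactly the same K and V matrices as recomputing them from scratch. A minimal standalone sketch (single head, no projection weights, not using the model above):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "torch.manual_seed(0)\n",
    "T, d = 5, 4\n",
    "K = torch.randn(T, d)\n",
    "V = torch.randn(T, d)\n",
    "q = torch.randn(d)  # query of the last position\n",
    "\n",
    "# full recomputation: attend over all T keys at once\n",
    "att_full = torch.softmax(q @ K.T / d**0.5, dim=-1) @ V\n",
    "\n",
    "# cached: keys/values appended one step at a time\n",
    "K_cache = torch.empty(0, d)\n",
    "V_cache = torch.empty(0, d)\n",
    "for t in range(T):\n",
    "    K_cache = torch.cat((K_cache, K[t:t+1]), dim=0)\n",
    "    V_cache = torch.cat((V_cache, V[t:t+1]), dim=0)\n",
    "att_cached = torch.softmax(q @ K_cache.T / d**0.5, dim=-1) @ V_cache\n",
    "\n",
    "assert torch.allclose(att_full, att_cached)\n",
    "```\n",
    "\n",
    "This is why `generate_kv` can feed the model only the newest token at each step and still match `generate` under greedy decoding."
   ]
  },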
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7aa88806-98fa-44e2-915c-b3b10fc7368d",
   "metadata": {},
   "outputs": [],
   "source": [
    "cat.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "41756ffa-c7bc-424a-8bdb-e224e25b9c32",
   "metadata": {},
   "outputs": [],
   "source": [
    "# let's run a random given sequence through the model as well\n",
    "n = train_dataset.length # naughty direct access, shrug\n",
    "inp = torch.tensor([[0, 0, 2, 1, 0, 1]], dtype=torch.long).to(trainer.device)\n",
    "assert inp[0].nelement() == n\n",
    "with torch.no_grad():\n",
    "    cat = model.generate_kv(inp, n, do_sample=False)\n",
    "sol = torch.sort(inp[0])[0]\n",
    "sol_candidate = cat[:, n:]\n",
    "print('input sequence  :', inp.tolist())\n",
    "print('predicted sorted:', sol_candidate.tolist())\n",
    "print('gt sort         :', sol.tolist())\n",
    "print('matches         :', bool((sol == sol_candidate).all()))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "396e6016-c64b-4d83-8b0e-23d220db11f9",
   "metadata": {},
   "outputs": [],
   "source": [
    "inp = torch.tensor([[0, 0, 2, 1, 0, 1, 2]], dtype=torch.long)\n",
    "model_config = GPT.get_default_config()\n",
    "model_config.model_type = 'gpt-mini'\n",
    "model_config.vocab_size = 9\n",
    "model_config.block_size = 500\n",
    "model = GPT_kv(model_config)\n",
    "if torch.backends.mps.is_available():\n",
    "    device = \"mps\"\n",
    "elif torch.cuda.is_available():\n",
    "    device = \"cuda\"\n",
    "else:\n",
    "    device = \"cpu\"\n",
    "model = model.to(device)\n",
    "inp = inp.to(device)\n",
    "print(\"running on device\", device)\n",
    "model.eval();"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5efe2a8d-8926-4d0b-b30f-ad5a982bf25a",
   "metadata": {},
   "outputs": [],
   "source": [
    "n = 1000\n",
    "for use_kv in (False, True):\n",
    "    times = []\n",
    "    for _ in range(10):  # measuring 10 generations\n",
    "        start = time.time()\n",
    "        with torch.no_grad():\n",
    "            if use_kv:\n",
    "                cat = model.generate_kv(inp, n, do_sample=False)\n",
    "            else:\n",
    "                cat = model.generate(inp, n, do_sample=False)\n",
    "        times.append(time.time() - start)\n",
    "    print(f\"{'with' if use_kv else 'without'} KV caching: {round(np.mean(times), 3)} +- {round(np.std(times), 3)} seconds\")"
   ]
  },
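  {
   "cell_type": "markdown",
   "id": "kv-note-complexity",
   "metadata": {},
   "source": [
    "A back-of-the-envelope account of the speedup (counting only attention score computations, ignoring constants and the MLP): without the cache, step $t$ re-encodes the whole prefix, so attention does on the order of $t^2$ work; with the cache, only the new token's query attends over the $t$ cached keys, i.e. order $t$ work. Summed over the $n$ generated tokens:\n",
    "\n",
    "```python\n",
    "n = 1000  # number of generated tokens, as in the timing cell above\n",
    "without = sum(t * t for t in range(1, n + 1))  # ~ n**3 / 3\n",
    "with_kv = sum(t for t in range(1, n + 1))      # ~ n**2 / 2\n",
    "print(without // with_kv)  # 667, i.e. roughly 2n/3\n",
    "```\n",
    "\n",
    "The measured wall-clock ratio is much smaller than this, since per-step overhead and the non-attention layers dominate for a model this small."
   ]
  },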
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0b32491-1770-4c4b-a147-65b66815a897",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "dldiy",
   "language": "python",
   "name": "dldiy"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
