{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Paged Attention in cuDNN Frontend\n",
    "\n",
    "This notebook illustrates how the cuDNN's frontend scaled dot product attention operator can be used with paged K/V caches, specifically for decode. For a simpler introduction to the scaled dot product attention operator, please refer to [samples/python/50_scaled_dot_product_attention.ipynb](https://github.com/NVIDIA/cudnn-frontend/blob/main/samples/python/50_scaled_dot_product_attention.ipynb)\n",
    "\n",
    "The full documentation of cuDNN's scaled dot production attention operator can be found in: [docs/operations/Attention.md#scaled-dot-product-attention](https://github.com/NVIDIA/cudnn-frontend/blob/main/docs/operations/Attention.md#scaled-dot-product-attention). The python test code for the full set of features can be found in: [test/python/test_mhas.py](https://github.com/NVIDIA/cudnn-frontend/blob/main/test/python/test_mhas.py)\n",
    "\n",
    "More details on paged attention can be found in the [PagedAttention paper](https://arxiv.org/abs/2309.06180).\n",
    "\n",
    "\n",
    "This notebook specifically illustrates the following:\n",
    "- SDPA Decode (s_q=1)\n",
    "- Paged Attention\n",
    "- Variable sequence lengths for KV\n",
    "- Packed Block Tables"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NVIDIA/cudnn-frontend/blob/main/samples/python/52_scaled_dot_product_attention.ipynb)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Prerequisites and Setup\n",
    "This notebook requires an NVIDIA GPU A100 or newer. If running on Colab, go to Runtime → Change runtime type → Hardware accelerator and select a GPU."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# get_ipython().system('nvidia-smi')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# get_ipython().system('pip install nvidia-cudnn-cu12')\n",
    "# get_ipython().system('pip install nvidia-cudnn-frontend')\n",
    "# get_ipython().system('pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu128')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import cudnn\n",
    "import torch\n",
    "import math\n",
    "\n",
    "torch.manual_seed(42)\n",
    "handle = cudnn.create_handle()\n",
    "\n",
    "assert torch.cuda.is_available()\n",
    "assert (\n",
    "    torch.cuda.get_device_capability()[0] >= 8\n",
    "), \"SDPA operation is only supported on SM80 architecture (Ampere) or above\"\n",
    "\n",
    "assert (\n",
    "    cudnn.backend_version() >= 90500\n",
    "), \"SDPA operation is only supported cuDNN version 9.5.0 or above\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Problem sizes and Q/K/V setup"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Create the query, key, value, and output GPU tensors using PyTorch. However, the user may use any DLPack compatible tensor instead."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "b = 2  # batch size\n",
    "h = 12  # query number of heads\n",
    "s_q = 1  # For decode, we only have one query token\n",
    "s_kv = 1024  # maximum sequence length\n",
    "d = 64  # embedding dimension per head\n",
    "\n",
    "block_size_k = block_size_v = (\n",
    "    64  # block size to be used by the non contiguous K/V containers\n",
    ")\n",
    "\n",
    "attn_scale = 1.0 / math.sqrt(d)\n",
    "\n",
    "# BSHD (batch, sequence_length, num_head, dims_per_head) logcial and physical tensor layouts\n",
    "dims_qo = (b, h, s_q, d)\n",
    "strides_qo = (s_q * h * d, s_q * d, d, 1)\n",
    "\n",
    "dims_kv = (b, h, s_kv, d)\n",
    "strides_kv = (s_kv * h * d, s_kv * d, d, 1)\n",
    "\n",
    "q_gpu = torch.randn(b * s_q * h * d).half().cuda().as_strided(dims_qo, strides_qo)\n",
    "k_gpu = torch.randn(b * s_kv * h * d).half().cuda().as_strided(dims_kv, strides_kv)\n",
    "v_gpu = torch.randn(b * s_kv * h * d).half().cuda().as_strided(dims_kv, strides_kv)\n",
    "o_gpu = torch.empty(b * s_q * h * d).half().cuda().as_strided(dims_qo, strides_qo)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "####  Generate containers and block tables for K and V\n",
    "\n",
    "In a real world scenario, container and block table tensors are generated by other parts of the model. For illustration purposes in this example, we provide a helper function to generate a trivial container from contiguous K and V caches. \n",
    "The helper function basically takes e.g., the K-cache and splits up the sequence (`S`) dimension in different blocks of length `block_size`. The resulting block table then helps identify which block belongs to which sequence ID."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# @brief Helper function to create a non contiguous container in blocks of block_size from a contiguous tensor\n",
    "def create_container_and_block_table(tensor, block_size):\n",
    "    B, H, S, D = tensor.shape\n",
    "    # num_blocks = math.ceil(S/block_size) * B\n",
    "    blocks_per_batch = math.ceil(S / block_size)\n",
    "\n",
    "    # Only needed if S is not a multiple of block_size\n",
    "    padding_seq = (blocks_per_batch * block_size) - S\n",
    "    if padding_seq > 0:\n",
    "        zeros = torch.zeros(B, H, padding_seq, D, device=\"cuda\", dtype=tensor.dtype)\n",
    "        cat_tensor = torch.cat((tensor, zeros), axis=2)\n",
    "    else:\n",
    "        cat_tensor = tensor\n",
    "\n",
    "    # Create a container by splitting on the S dimension and concatenating at the block dimension\n",
    "    # Its dimensions are [num_blocks, H, block_size, D] with num_blocks = B * blocks_per_batch\n",
    "    container = torch.cat((cat_tensor.clone()).chunk(blocks_per_batch, dim=2), dim=0)\n",
    "\n",
    "    # Create the block table\n",
    "    table_size = math.ceil(S / block_size)\n",
    "    block_table_temp = torch.linspace(\n",
    "        0, B * table_size - 1, B * table_size, device=\"cuda\", dtype=torch.int32\n",
    "    ).reshape(table_size, 1, B, 1)\n",
    "    block_table_temp = torch.transpose(block_table_temp, 0, 2)\n",
    "\n",
    "    # Make batch size outer dimension (cuDNN backend preference)\n",
    "    block_table = (\n",
    "        torch.zeros(blocks_per_batch * B)\n",
    "        .int()\n",
    "        .cuda()\n",
    "        .as_strided(\n",
    "            (B, 1, blocks_per_batch, 1), (blocks_per_batch, blocks_per_batch, 1, 1)\n",
    "        )\n",
    "    )\n",
    "    block_table.copy_(block_table_temp)\n",
    "\n",
    "    return (container, block_table)\n",
    "\n",
    "\n",
    "# Create non contiguous containers with block tables for K and V from the contiguous k_gpu and v_gpu\n",
    "container_k_gpu, block_table_k = create_container_and_block_table(k_gpu, block_size_k)\n",
    "container_v_gpu, block_table_v = create_container_and_block_table(v_gpu, block_size_v)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Variable KV sequence lengths and packed block tables\n",
    "Note that we created block tables containing block offsets for every sequence ID up to s_kv-1. However, with variable sequence lengths, we don't need block offsets for sequence ID's beyond the actual sequence length per batch. Therefore, we can consider \"packing\" the block tables, by only storing the block offsets that are needed, similar to how ragged tensors work. It should be noted that due to the small size of block tables, the amount of memory transfer reduction is minimal, and performance is expected to slightly degrade with this technique (this is because packing block tables removes the ability to user vectorized loads). However, for compatibility reasons with many of the frameworks, this feature can still be useful."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's start with creating the actual sequence length tensors."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# In decode, s_q is set to 1 for all batches\n",
    "seq_len_q_gpu = torch.ones((b, 1, 1, 1), device=\"cuda\", dtype=torch.int32)\n",
    "\n",
    "# Create an actual sequence length tensor for KV\n",
    "seq_len_kv_gpu = torch.randint(1, s_kv, (b, 1, 1, 1), device=\"cuda\", dtype=torch.int32)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's pack the previously created block tables. We use the following helper function to do so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# @brief Helper function to convert a padded block table into a packed block table\n",
    "# @return packed_block_table: packed block table\n",
    "# @return ragged_offset: offset into the packed block table\n",
    "def convert_uniform_to_ragged_block_tables(uniform_tensor, seq_len, block_size):\n",
    "    [B, H, S, D] = uniform_tensor.size()\n",
    "    ragged_offset = torch.zeros(\n",
    "        B + 1, 1, 1, 1, dtype=torch.int32, device=uniform_tensor.device\n",
    "    )  # Initialize with first offset as 0\n",
    "    for i in range(1, B + 1):\n",
    "        prev_seq_len = seq_len[i - 1]\n",
    "        num_pages_prev_batch = (prev_seq_len + block_size - 1) // block_size\n",
    "        next_batch_offset = ragged_offset[i - 1] + num_pages_prev_batch\n",
    "        ragged_offset[i, 0, 0, 0] = next_batch_offset\n",
    "\n",
    "    ragged_offset.to(dtype=torch.int64)\n",
    "\n",
    "    packed_block_table = torch.zeros(B * S, H, D).to(\n",
    "        dtype=uniform_tensor.dtype, device=uniform_tensor.device\n",
    "    )\n",
    "\n",
    "    uniform_tensor_thd = torch.einsum(\"bhsd->bshd\", uniform_tensor).reshape(B * S, H, D)\n",
    "\n",
    "    t_0 = 0\n",
    "    for b, t_1 in enumerate(ragged_offset.flatten()[1:]):\n",
    "        packed_block_table[t_0:t_1, :, :] = uniform_tensor_thd[\n",
    "            b * S : b * S + (t_1 - t_0), :, :\n",
    "        ]\n",
    "        t_0 = t_1\n",
    "\n",
    "    packed_block_table = packed_block_table.reshape(B, S, H, D)\n",
    "    packed_block_table = torch.einsum(\"bshd->bhsd\", packed_block_table)\n",
    "\n",
    "    return packed_block_table, ragged_offset\n",
    "\n",
    "\n",
    "block_table_k_packed_gpu, block_table_k_ragged_offset_gpu = (\n",
    "    convert_uniform_to_ragged_block_tables(block_table_k, seq_len_kv_gpu, block_size_k)\n",
    ")\n",
    "block_table_v_packed_gpu, block_table_v_ragged_offset_gpu = (\n",
    "    convert_uniform_to_ragged_block_tables(block_table_v, seq_len_kv_gpu, block_size_v)\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`block_table_{k,v}_packed_gpu` are now packed block tables, containing only the block offsets that are needed for the actual sequence lengths.\n",
    "`block_table_{k,v}_ragged_offset_gpu` are the ragged offsets into the packed block tables. They indicate the start of the offsets for each sequence. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To illustrate this, consider a scenario where `seq_len_kv_gpu = {250,300}`, and assume further that the container has blocks contiguously allocated per batch (so the block table offsets are just linear increments).\n",
    "\n",
    "A padded page table would be:\n",
    "block_table = {B,1, max_s_kv/block_size, 1} = {2,1,16,1}\n",
    "```\n",
    "b = 0 : \n",
    "    block_table_k[0,0] = 0\n",
    "    block_table_k[0,1] = 1\n",
    "    block_table_k[0,2] = 2\n",
    "    block_table_k[0,3] = 3\n",
    "    block_table_k[0,4] = x\n",
    "    block_table_k[0,5] = x\n",
    "    ...\n",
    "    block_table_k[0,15] = x\n",
    "\n",
    "b = 1 : \n",
    "    block_table_k[1,0] = 16\n",
    "    block_table_k[1,1] = 17\n",
    "    block_table_k[1,2] = 18\n",
    "    block_table_k[1,3] = 19\n",
    "    block_table_k[1,4] = 20\n",
    "    block_table_k[1,5] = x\n",
    "    block_table_k[1,6] = x\n",
    "    ...\n",
    "    block_table_k[1,15] = x\n",
    "\n",
    "```\n",
    "Since only 4 and 5 elements contain meaningful block offsets, for batch 0 and 1 respectively (seq_len_kv_gpu/block_size=[4,5]), a packed page table would be:\n",
    "``` \n",
    "block_table_k_packed_gpu = [0,1,2,3,16,17,18,19,20]\n",
    "```\n",
    "With ragged offests\n",
    "```\n",
    "block_table_k,_ragged_offset_gpu = [0, 4, 9]\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Graph creation and execution\n",
    "\n",
    "Create the graph"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "graph = cudnn.pygraph(\n",
    "    io_data_type=cudnn.data_type.HALF,\n",
    "    intermediate_data_type=cudnn.data_type.FLOAT,\n",
    "    compute_data_type=cudnn.data_type.FLOAT,\n",
    ")\n",
    "\n",
    "q = graph.tensor_like(q_gpu)\n",
    "\n",
    "container_k = graph.tensor_like(container_k_gpu)\n",
    "container_v = graph.tensor_like(container_v_gpu)\n",
    "block_table_k_packed = graph.tensor_like(block_table_k_packed_gpu)\n",
    "block_table_v_packed = graph.tensor_like(block_table_v_packed_gpu)\n",
    "\n",
    "seq_len_q = graph.tensor_like(seq_len_q_gpu)\n",
    "seq_len_kv = graph.tensor_like(seq_len_kv_gpu)\n",
    "\n",
    "# Add ragged offset tensors to the block tables\n",
    "block_table_k_ragged_offset = graph.tensor_like(block_table_k_ragged_offset_gpu)\n",
    "block_table_k_packed.set_ragged_offset(block_table_k_ragged_offset)\n",
    "block_table_v_ragged_offset = graph.tensor_like(block_table_v_ragged_offset_gpu)\n",
    "block_table_v_packed.set_ragged_offset(block_table_v_ragged_offset)\n",
    "\n",
    "o, _ = graph.sdpa(\n",
    "    name=\"sdpa\",\n",
    "    q=q,\n",
    "    k=container_k,  # Container K: non contiguous container with K blocks\n",
    "    v=container_v,  # Container V: non contiguous container with V blocks\n",
    "    is_inference=True,\n",
    "    attn_scale=attn_scale,\n",
    "    use_causal_mask=False,\n",
    "    use_padding_mask=True,\n",
    "    seq_len_q=seq_len_q,\n",
    "    seq_len_kv=seq_len_kv,\n",
    "    paged_attention_k_table=block_table_k_packed,  # Block Table K: Tensor containing offsets to the container with K blocks\n",
    "    paged_attention_v_table=block_table_v_packed,  # Block Table V: Tensor containing offsets to the container with V blocks\n",
    "    paged_attention_max_seq_len_kv=s_kv,  # The maximum sequence length for K caches (this is optional, but recommended)\n",
    ")\n",
    "\n",
    "o.set_output(True).set_dim(dims_qo).set_stride(strides_qo)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Build the graph"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "graph.build([cudnn.heur_mode.A, cudnn.heur_mode.FALLBACK])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Execute the graph"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "variant_pack = {\n",
    "    q: q_gpu,\n",
    "    container_k: container_k_gpu,\n",
    "    container_v: container_v_gpu,\n",
    "    block_table_k_packed: block_table_k_packed_gpu,\n",
    "    block_table_v_packed: block_table_v_packed_gpu,\n",
    "    block_table_k_ragged_offset: block_table_k_ragged_offset_gpu,  # Ragged offset for K's block table\n",
    "    block_table_v_ragged_offset: block_table_v_ragged_offset_gpu,  # Ragged offset for V's block table\n",
    "    seq_len_q: seq_len_q_gpu,\n",
    "    seq_len_kv: seq_len_kv_gpu,\n",
    "    o: o_gpu,\n",
    "}\n",
    "\n",
    "workspace = torch.empty(graph.get_workspace_size(), device=\"cuda\", dtype=torch.uint8)\n",
    "graph.execute(variant_pack, workspace)\n",
    "torch.cuda.synchronize()\n",
    "\n",
    "cudnn.destroy_handle(handle)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Run the PyTorch reference and compare against cuDNN's output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "q_ref = q_gpu.detach().float().requires_grad_()\n",
    "k_ref = k_gpu.detach().float().requires_grad_()\n",
    "v_ref = v_gpu.detach().float().requires_grad_()\n",
    "\n",
    "# Create attention mask for variable lengths in KV\n",
    "mask = torch.ones(b, s_kv, dtype=torch.bool, device=\"cuda\")\n",
    "\n",
    "for i in range(b):\n",
    "    seqlen = seq_len_kv_gpu[i, 0, 0, 0].item()\n",
    "    mask[i, seqlen:] = False\n",
    "\n",
    "# Expand mask (B,s_kv) -> (B,1,1,s_kv) to match attention shape\n",
    "mask = mask.unsqueeze(1)\n",
    "mask = mask.unsqueeze(1)\n",
    "\n",
    "o_ref = torch.nn.functional.scaled_dot_product_attention(\n",
    "    q_ref, k_ref, v_ref, is_causal=False, scale=attn_scale, attn_mask=mask\n",
    ")\n",
    "\n",
    "torch.testing.assert_close(o_ref, o_gpu.float(), atol=5e-3, rtol=3e-3)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.15"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
