{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "b9702787-10f3-41b2-bd93-7b40d3e53383",
   "metadata": {},
   "source": [
    "# TT-NN Tracer and BERT Model Visualization Tutorial\n",
    "\n",
    "This tutorial demonstrates how to use TT-NN's tracer functionality to visualize\n",
    "tensor operations and computational graphs. We'll explore:\n",
    "1. Basic tensor operation tracing with PyTorch tensors\n",
    "2. TT-NN tensor operations with reshaping\n",
    "3. Tracing a BERT self-attention layer\n",
    "4. Running and tracing a full BERT model for question answering\n",
    "\n",
    "The tracer is a powerful debugging and optimization tool that helps understand\n",
    "how operations are executed on Tenstorrent hardware.\n",
    "\n",
    "## Import Libraries\n",
    "\n",
    "Disable fast runtime mode to ensure all operations are properly traced. Fast runtime mode may skip some operations for performance, which we don't want when debugging."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dbd63fbb-f61c-48a4-85a8-1003586be3fc",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "import os\n",
    "from pathlib import Path\n",
    "os.environ[\"TTNN_CONFIG_OVERRIDES\"] = \"{\\\"enable_fast_runtime_mode\\\": false}\"\n",
    "\n",
    "import torch\n",
    "import transformers\n",
    "\n",
    "import ttnn\n",
    "from ttnn.tracer import trace, visualize"
   ]
  },
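  {
   "cell_type": "markdown",
   "id": "a1b2c3d4-0f1e-4a5b-9c8d-1234567890ab",
   "metadata": {},
   "source": [
    "As an aside (a minimal sketch, not required by the tutorial): `TTNN_CONFIG_OVERRIDES` is a JSON string, so it can also be built with `json.dumps` instead of hand-escaping the quotes. Note that the variable must be set before `ttnn` is imported in order to take effect."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1b2c3d4-0f1e-4a5b-9c8d-1234567890ac",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "# Build the same override string programmatically;\n",
    "# json.dumps quotes the keys and lowercases Python booleans.\n",
    "overrides = json.dumps({\"enable_fast_runtime_mode\": False})\n",
    "assert overrides == '{\"enable_fast_runtime_mode\": false}'\n",
    "print(overrides)"
   ]
  },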
  {
   "cell_type": "markdown",
   "id": "4ef1d72a",
   "metadata": {},
   "source": [
    "## Set program config\n",
    "\n",
    "Suppress transformer library warnings for cleaner output."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "361f2a1e-bb43-4c8a-ab4e-34d0dd424ebd",
   "metadata": {},
   "outputs": [],
   "source": [
    "transformers.logging.set_verbosity_error()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "162b41bd-cd48-44cc-90cb-0040393ff179",
   "metadata": {},
   "source": [
    "# Example 1: Tracing PyTorch Operations\n",
    "\n",
    "The tracer context manager captures all operations performed within its scope.\n",
    "\n",
    "This example shows how basic PyTorch operations are tracked."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2d48f30d-de73-41a9-83b3-30e7a0dadd89",
   "metadata": {},
   "outputs": [],
   "source": [
    "with trace():\n",
    "    # Create a random integer tensor of shape (1, 64) with values between 0-99\n",
    "    tensor = torch.randint(0, 100, (1, 64))\n",
    "    # Apply exponential function element-wise\n",
    "    # This demonstrates how mathematical operations are captured\n",
    "    tensor = torch.exp(tensor)\n",
    "    \n",
    "# Visualize the computational graph of the traced operations\n",
    "# This will show the flow from random tensor creation to exp operation\n",
    "visualize(tensor)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1b15becb-cbe3-4392-8957-ec3a304be505",
   "metadata": {},
   "source": [
    "# Example 2: Tracing TT-NN Tensor Operations\n",
    "\n",
    "This example demonstrates tracing operations that involve TT-NN tensors."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cc31f32f-5daf-4130-995d-fbe72a473d73",
   "metadata": {},
   "outputs": [],
   "source": [
    "with trace():\n",
    "    # Create a PyTorch tensor with shape (4, 64)\n",
    "    tensor = torch.randint(0, 100, (4, 64))\n",
    "    \n",
    "    # Convert PyTorch tensor to TT-NN format\n",
    "    # This operation moves data to the TT-NN representation\n",
    "    tensor = ttnn.from_torch(tensor)\n",
    "    \n",
    "    # Reshape the tensor from (4, 64) to (2, 4, 32)\n",
    "    # This demonstrates how reshape operations are handled in TT-NN\n",
    "    tensor = ttnn.reshape(tensor, (2, 4, 32))\n",
    "    \n",
    "    # Convert back to PyTorch for visualization\n",
    "    tensor = ttnn.to_torch(tensor)\n",
    "\n",
    "# Visualize the graph showing PyTorch → TT-NN → reshape → PyTorch conversion\n",
    "visualize(tensor)"
   ]
  },
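  {
   "cell_type": "markdown",
   "id": "b2c3d4e5-1a2b-4c5d-8e9f-abcdef012345",
   "metadata": {},
   "source": [
    "As a quick sanity check (a sketch using plain PyTorch, independent of TT-NN): a reshape is only valid when it preserves the total element count, and (4, 64) and (2, 4, 32) both hold 256 elements."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2c3d4e5-1a2b-4c5d-8e9f-abcdef012346",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# Reshape preserves the total number of elements: 4 * 64 == 2 * 4 * 32 == 256\n",
    "t = torch.arange(4 * 64).reshape(4, 64)\n",
    "r = t.reshape(2, 4, 32)\n",
    "assert t.numel() == r.numel() == 256\n",
    "print(t.shape, \"->\", r.shape)"
   ]
  },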
  {
   "cell_type": "markdown",
   "id": "2d6e4078",
   "metadata": {},
   "source": [
    "## Model and Config downloading\n",
    "\n",
    "We define three functions to download the weights and configuration from Hugging Face.\n",
    "\n",
    "For practical purposes, we can also specify a `TTNN_TUTORIALS_MODELS_CLIP_PATH` environment variable to avoid downloading the model.\n",
    "If it is defined, then model and configuration will be loaded from the location indicated by `TTNN_TUTORIALS_MODELS_CLIP_PATH`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fb7a85fb",
   "metadata": {},
   "outputs": [],
   "source": [
    "def download_google_bert_model_and_config(\n",
    "    model_name: str,\n",
    ") -> tuple[transformers.models.bert.modeling_bert.BertSelfOutput, transformers.BertConfig]:\n",
    "    model_location = model_name  # By default, download from Hugging Face\n",
    "\n",
    "    # If TTNN_TUTORIALS_MODELS_TRACER_PATH is set, use it as the cache directory to avoid requests to Hugging Face\n",
    "    cache_dir = os.getenv(\"TTNN_TUTORIALS_MODELS_TRACER_PATH\")\n",
    "    if cache_dir is not None:\n",
    "        model_location = Path(cache_dir) / Path(\"config_google_bert.json\")\n",
    "\n",
    "    # Load model weights (download if cache_dir was not set)\n",
    "    config = transformers.BertConfig.from_pretrained(model_location)\n",
    "    model = transformers.models.bert.modeling_bert.BertSelfOutput(config).eval()\n",
    "\n",
    "    return model, config\n",
    "\n",
    "\n",
    "def download_ttnn_bert_config(model_name: str) -> transformers.BertConfig:\n",
    "    config_location = model_name  # By default, download from Hugging Face\n",
    "\n",
    "    # If TTNN_TUTORIALS_MODELS_TRACER_PATH is set, use it as the cache directory to avoid requests to Hugging Face\n",
    "    cache_dir = os.getenv(\"TTNN_TUTORIALS_MODELS_TRACER_PATH\")\n",
    "    if cache_dir is not None:\n",
    "        config_location = Path(cache_dir) / Path(\"config_ttnn_bert.json\")\n",
    "\n",
    "    # Load config (download if cache_dir was not set)\n",
    "    config = transformers.BertConfig.from_pretrained(config_location)\n",
    "\n",
    "    return config\n",
    "\n",
    "\n",
    "def download_ttnn_bert_model(model_name: str, config: transformers.BertConfig) -> transformers.BertForQuestionAnswering:\n",
    "    model_location = model_name  # By default, download from Hugging Face\n",
    "\n",
    "    # If TTNN_TUTORIALS_MODELS_TRACER_PATH is set, use it as the cache directory to avoid requests to Hugging Face\n",
    "    cache_dir = os.getenv(\"TTNN_TUTORIALS_MODELS_TRACER_PATH\")\n",
    "    if cache_dir is not None:\n",
    "        model_location = Path(cache_dir)\n",
    "\n",
    "    # Load model weights (download if cache_dir was not set)\n",
    "    model = transformers.BertForQuestionAnswering.from_pretrained(model_location, config=config).eval()\n",
    "\n",
    "    return model"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0c696468-19e6-459b-b17d-81301acb2ce2",
   "metadata": {},
   "source": [
    "# Example 3: Tracing a BERT Layer\n",
    "\n",
    "Load a small BERT configuration for demonstration. This is a tiny BERT model with only 4 layers, 256 hidden dimensions, and 4 attention heads."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "93c05dbb-2f7f-47a8-966c-7eb1827754e3",
   "metadata": {},
   "outputs": [],
   "source": [
    "model_name = \"google/bert_uncased_L-4_H-256_A-4\"\n",
    "model, config = download_google_bert_model_and_config(model_name)\n",
    "\n",
    "# Trace the BERT self-output layer operations\n",
    "with trace():\n",
    "    # Create dummy inputs matching the expected dimensions\n",
    "    # hidden_states: output from self-attention (batch=1, seq_len=64, hidden_size=256)\n",
    "    hidden_states = torch.rand((1, 64, config.hidden_size))\n",
    "    # input_tensor: residual connection input\n",
    "    input_tensor = torch.rand((1, 64, config.hidden_size))\n",
    "    \n",
    "    # Run the layer forward pass\n",
    "    hidden_states = model(hidden_states, input_tensor)\n",
    "    \n",
    "    # Convert output to TT-NN format for visualization\n",
    "    output = ttnn.from_torch(hidden_states)\n",
    "\n",
    "# Visualize the BERT layer computation graph\n",
    "visualize(output)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "82c31fdf-6604-4e90-8bca-beca9a32f543",
   "metadata": {},
   "source": [
    "## Example 4: Trace models written using ttnn"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "44c29371-cad0-421c-87b1-883522268151",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Configure the dispatch core type based on the architecture\n",
    "# ETH cores are used on newer architectures, WORKER cores on Grayskull\n",
    "dispatch_core_type = ttnn.device.DispatchCoreType.ETH\n",
    "if os.environ.get(\"ARCH_NAME\") and \"grayskull\" in os.environ.get(\"ARCH_NAME\"):\n",
    "    dispatch_core_type = ttnn.device.DispatchCoreType.WORKER\n",
    "\n",
    "# Open device with custom configuration\n",
    "# - l1_small_size: Set L1 memory allocation to 8KB for small tensors\n",
    "# - dispatch_core_config: Configure which cores handle dispatch operations\n",
    "device = ttnn.open_device(\n",
    "    device_id=0, \n",
    "    l1_small_size=8192, \n",
    "    dispatch_core_config=ttnn.device.DispatchCoreConfig(dispatch_core_type)\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c07b5ae4-3194-4605-983a-cf1228c7584e",
   "metadata": {},
   "outputs": [],
   "source": [
    "from models.demos.bert.tt import ttnn_bert\n",
    "from models.demos.bert.tt import ttnn_optimized_bert\n",
    "from ttnn.model_preprocessing import preprocess_model_parameters\n",
    "\n",
    "\n",
    "def ttnn_bert(bert):\n",
    "    \"\"\"\n",
    "    Run and trace a complete BERT model for question answering.\n",
    "\n",
    "    Args:\n",
    "        bert: Either ttnn_bert or ttnn_optimized_bert module\n",
    "    \"\"\"\n",
    "    # Use a larger BERT model fine-tuned for question answering\n",
    "    model_name = \"phiyodr/bert-large-finetuned-squad2\"\n",
    "    config = download_ttnn_bert_config(model_name)\n",
    "\n",
    "    # Limit to 1 layer for faster execution in this demo\n",
    "    # Full BERT-large has 24 layers\n",
    "    config.num_hidden_layers = 1\n",
    "\n",
    "    # Set batch size and sequence length for input\n",
    "    batch_size = 8\n",
    "    sequence_size = 384  # Standard for question answering tasks\n",
    "\n",
    "    # ===== Model Parameter Preprocessing =====\n",
    "    # Convert model parameters to TT-NN format and optimize for device\n",
    "    # This includes weight packing, layout conversion, and memory placement\n",
    "    model = download_ttnn_bert_model(model_name, config)\n",
    "    parameters = preprocess_model_parameters(\n",
    "        initialize_model=lambda: model,\n",
    "        custom_preprocessor=bert.custom_preprocessor,\n",
    "        device=device,\n",
    "    )\n",
    "\n",
    "    # ===== Trace BERT Inference =====\n",
    "    with trace():\n",
    "        # Create dummy input tensors\n",
    "        # input_ids: Token IDs from vocabulary\n",
    "        input_ids = torch.randint(0, config.vocab_size, (batch_size, sequence_size)).to(torch.int32)\n",
    "\n",
    "        # token_type_ids: Segment IDs (0 for question, 1 for context in QA)\n",
    "        torch_token_type_ids = torch.zeros((batch_size, sequence_size), dtype=torch.int32)\n",
    "\n",
    "        # position_ids: Position embeddings (usually just 0 to sequence_length-1)\n",
    "        torch_position_ids = torch.zeros((batch_size, sequence_size), dtype=torch.int32)\n",
    "\n",
    "        # attention_mask: Mask for padding tokens (only for optimized version)\n",
    "        # Shape differs between regular and optimized BERT implementations\n",
    "        torch_attention_mask = torch.zeros(1, sequence_size) if bert == ttnn_optimized_bert else None\n",
    "\n",
    "        # Preprocess inputs for TT-NN format\n",
    "        # This converts PyTorch tensors to device tensors with appropriate layout\n",
    "        ttnn_bert_inputs = bert.preprocess_inputs(\n",
    "            input_ids,\n",
    "            torch_token_type_ids,\n",
    "            torch_position_ids,\n",
    "            torch_attention_mask,\n",
    "            device=device,\n",
    "        )\n",
    "\n",
    "        # Run BERT model for question answering\n",
    "        # Returns start and end logits for answer span prediction\n",
    "        output = bert.bert_for_question_answering(\n",
    "            config,\n",
    "            *ttnn_bert_inputs,\n",
    "            parameters=parameters,\n",
    "        )\n",
    "\n",
    "        # Move output back from device to host for visualization\n",
    "        output = ttnn.from_device(output)\n",
    "\n",
    "    # Visualize the complete BERT computation graph\n",
    "    return visualize(output)\n",
    "\n",
    "\n",
    "# Run the optimized BERT implementation\n",
    "# This version includes TT-NN specific optimizations for better performance\n",
    "ttnn_bert(ttnn_optimized_bert)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5637b989",
   "metadata": {},
   "source": [
    "## Close the device"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "85bc0083-0724-42df-b1ae-acda2b69fbcd",
   "metadata": {},
   "outputs": [],
   "source": [
    "ttnn.close_device(device)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
