{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "1f2517b1-26c2-48aa-8c8f-2f21ea100557",
   "metadata": {},
   "source": [
    "# Quantization-Aware Training (QAT)\n",
    "This notebook provides a working example of **Quantization-Aware Training (QAT)** of a **Vision Transformer (ViT)** using **AIMET**.\n",
    "QAT improves the accuracy of quantized models by fine-tuning model weights and/or quantization parameters while simulating quantization effects during training."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2ba3dbb7-a5a0-4d84-8915-8146404cbc3f",
   "metadata": {},
   "source": [
    "## 📁 Before Getting Started: Prepare the ImageNet Dataset\n",
    "To run this notebook, you need the ImageNet dataset downloaded and organized in a specific directory structure. The notebook reads the dataset location from the environment variable IMAGENET_DIR and, because it loads the data with torchvision's ImageFolder, expects the following layout:\n",
    "\n",
    "```\n",
    "IMAGENET_DIR/\n",
    "├── train/\n",
    "│   ├── n01440764/\n",
    "│   │   ├── image1.JPEG\n",
    "│   │   ├── image2.JPEG\n",
    "│   │   └── ...\n",
    "│   ├── n01443537/\n",
    "│   │   └── ...\n",
    "│   └── ...\n",
    "├── val/\n",
    "│   ├── n01440764/\n",
    "│   │   ├── image1.JPEG\n",
    "│   │   └── ...\n",
    "│   └── ...\n",
    "├── test/\n",
    "│   └── ...\n",
    "```\n",
    "\n",
    "For more information, see [torchvision.datasets.ImageFolder](https://docs.pytorch.org/vision/main/generated/torchvision.datasets.ImageFolder.html?highlight=imagefolder#torchvision.datasets.ImageFolder) and [torchvision.datasets.DatasetFolder](https://docs.pytorch.org/vision/main/generated/torchvision.datasets.DatasetFolder.html#torchvision.datasets.DatasetFolder).\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "13f56952-d691-41d7-b1af-33aaca0c2770",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "os.environ[\"IMAGENET_DIR\"] = \"/path/to/imagenet\"  # TODO: Replace with the path to your local ImageNet directory"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6b3e14e2",
   "metadata": {},
   "source": [
    "## 1. Set Random Seeds and Instantiate ImageNet Data Loaders\n",
    "To ensure reproducibility, we set random seeds for Python, NumPy, and PyTorch.\n",
    "We then define a function to load the ImageNet dataset with standard preprocessing steps."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1601237c",
   "metadata": {},
   "outputs": [],
   "source": [
    "import random\n",
    "import numpy as np\n",
    "import os\n",
    "import torch\n",
    "from torch.utils.data import DataLoader, Dataset\n",
    "from torchvision.datasets import ImageFolder\n",
    "from torchvision import transforms\n",
    "\n",
    "# Set device to GPU if available\n",
    "device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
    "\n",
    "# Set random seeds for reproducibility\n",
    "torch.manual_seed(1234)\n",
    "torch.cuda.manual_seed(1234)\n",
    "random.seed(1234)\n",
    "np.random.seed(1234)\n",
    "\n",
    "\n",
    "def imagenet_dataset(split: str) -> Dataset:\n",
    "    # Load ImageNet directory from environment variable\n",
    "    IMAGENET_DIR = os.getenv(\"IMAGENET_DIR\")\n",
    "    if not IMAGENET_DIR:\n",
    "        raise RuntimeError(\n",
    "            \"Environment variable 'IMAGENET_DIR' has not been set. \"\n",
    "            \"Please set this variable to the path where the ImageNet dataset is downloaded \"\n",
    "            \"and organized in the following directory structure:\\n\\n\"\n",
    "            \"<IMAGENET_DIR>\\n\"\n",
    "            \" ├── test\\n\"\n",
    "            \" ├── train\\n\"\n",
    "            \" └── val\\n\"\n",
    "        )\n",
    "\n",
    "    # Define preprocessing transformations\n",
    "    transform = transforms.Compose(\n",
    "        [\n",
    "            transforms.Resize(256),\n",
    "            transforms.CenterCrop(224),\n",
    "            transforms.ToTensor(),\n",
    "            transforms.Normalize(\n",
    "                mean=[0.485, 0.456, 0.406],\n",
    "                std=[0.229, 0.224, 0.225],\n",
    "            ),\n",
    "        ]\n",
    "    )\n",
    "    return ImageFolder(root=os.path.join(IMAGENET_DIR, split), transform=transform)\n",
    "\n",
    "# Create DataLoaders for training and testing\n",
    "test_data_loader = DataLoader(imagenet_dataset(\"test\"), batch_size=128, shuffle=False)\n",
    "train_data_loader = DataLoader(imagenet_dataset(\"train\"), batch_size=32, shuffle=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8519d9ac",
   "metadata": {},
   "source": [
    "## 2. Create W4A8 QuantizationSimModel with Vision Transformer (ViT)\n",
    "We load a pretrained ViT model and simulate quantization using AIMET's QuantizationSimModel.\n",
    "This simulates 4-bit weights and 8-bit activations (W4A8), targeting Qualcomm's Hexagon NPU (HTP V81)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9b248174",
   "metadata": {},
   "outputs": [],
   "source": [
    "import aimet_torch\n",
    "from torchvision.models import vit_b_16\n",
    "\n",
    "model = vit_b_16(weights=\"IMAGENET1K_V1\").to(device=device).eval()\n",
    "dummy_input, _ = next(iter(train_data_loader))\n",
    "dummy_input = dummy_input.to(device=device)\n",
    "\n",
    "# Create QuantizationSimModel with W4A8 configuration\n",
    "sim = aimet_torch.QuantizationSimModel(\n",
    "    model,\n",
    "    dummy_input,\n",
    "    default_param_bw=4,    # 4-bit weights\n",
    "    default_output_bw=8,   # 8-bit activations\n",
    "    in_place=True,\n",
    "    config_file=\"htp_v81\", # AIMET config for Hexagon NPU with HTP V81\n",
    ")\n",
    "\n",
    "# Compute quantization encodings using the first 8 training batches\n",
    "with torch.no_grad(), aimet_torch.nn.compute_encodings(sim.model):\n",
    "    for i, (images, _) in enumerate(train_data_loader):\n",
    "        if i == 8:\n",
    "            break\n",
    "        _ = sim.model(images.to(device=device))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5e14a028",
   "metadata": {},
   "source": [
    "## 3. Evaluate Initial Accuracy Before QAT\n",
    "Before applying QAT, we evaluate the model's accuracy in two scenarios:\n",
    "\n",
    "* **Floating-point accuracy**: by temporarily removing all quantizers.\n",
    "* **Fake-quantized accuracy**: using the quantized simulation model.\n",
    "\n",
    "This helps establish a baseline to compare the impact of QAT later."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "566c1bdc",
   "metadata": {},
   "outputs": [],
   "source": [
    "@torch.no_grad()\n",
    "def evaluate(model: torch.nn.Module, data_loader: DataLoader):\n",
    "    from tqdm import tqdm\n",
    "\n",
    "    top1 = top5 = 0.0\n",
    "    n_images = 0\n",
    "\n",
    "    pbar = tqdm(data_loader)\n",
    "    for images, labels in pbar:\n",
    "        images = images.to(device=device)\n",
    "        labels = labels.unsqueeze(-1).to(device=device)\n",
    "        logits = model(images)\n",
    "\n",
    "        top1 += (logits.topk(1).indices == labels).sum().item()\n",
    "        top5 += (logits.topk(5).indices == labels).sum().item()\n",
    "        n_images += images.size(0)\n",
    "\n",
    "        top1_accuracy = top1 / n_images\n",
    "        top5_accuracy = top5 / n_images\n",
    "        pbar.set_description(\n",
    "            f\"Top-1: {top1_accuracy * 100:.2f}%, Top-5: {top5_accuracy * 100:.2f}%\"\n",
    "        )\n",
    "\n",
    "    top1_accuracy = top1 / n_images\n",
    "    top5_accuracy = top5 / n_images\n",
    "    return top1_accuracy, top5_accuracy\n",
    "\n",
    "\n",
    "from aimet_torch.v2.utils import remove_all_quantizers\n",
    "\n",
    "# Evaluate floating-point accuracy\n",
    "with remove_all_quantizers(sim.model):\n",
    "    top1, top5 = evaluate(sim.model, test_data_loader)\n",
    "    print(\"FP Accuracy:\")\n",
    "    print(f\"  * Top-1: {top1 * 100:.2f}%\")\n",
    "    print(f\"  * Top-5: {top5 * 100:.2f}%\")\n",
    "\n",
    "# Evaluate fake-quantized accuracy before QAT\n",
    "top1, top5 = evaluate(sim.model, test_data_loader)\n",
    "print(\"Fake-quantized Accuracy (before QAT):\")\n",
    "print(f\"  * Top-1: {top1 * 100:.2f}%\")\n",
    "print(f\"  * Top-5: {top5 * 100:.2f}%\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f5794b47",
   "metadata": {},
   "source": [
    "## 4. Run QAT and Evaluate Post-QAT Accuracy\n",
    "In this notebook, we perform QAT by training only the quantization parameters (the quantizer scales and offsets); the base model weights are not updated by the optimizer.\n",
    "Training uses the AdamW optimizer for 2000 iterations, after which we re-evaluate the fake-quantized accuracy."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2634ebf",
   "metadata": {},
   "outputs": [],
   "source": [
    "def train(model: torch.nn.Module, data_loader: DataLoader, n_iter: int):\n",
    "    from tqdm import tqdm\n",
    "    from aimet_torch.quantization.affine import AffineQuantizerBase\n",
    "\n",
    "    # Optimize only the quantization parameters (AffineQuantizerBase modules), not the model weights\n",
    "    optimizer = torch.optim.AdamW(\n",
    "        params={\n",
    "            param\n",
    "            for module in model.modules()\n",
    "            for param in module.parameters()\n",
    "            if isinstance(module, AffineQuantizerBase)\n",
    "        },\n",
    "        lr=0.001,\n",
    "    )\n",
    "    pbar = tqdm(data_loader, total=n_iter)\n",
    "    for i, (images, labels) in enumerate(pbar):\n",
    "        if i == n_iter:\n",
    "            break\n",
    "        optimizer.zero_grad()\n",
    "        images = images.to(device=device)\n",
    "        labels = labels.to(device=device)\n",
    "\n",
    "        logits = model(images)\n",
    "        loss = torch.nn.functional.cross_entropy(logits, labels)\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "\n",
    "        pbar.set_description(f\"loss: {loss:.2f}\")\n",
    "\n",
    "# Run QAT training\n",
    "train(sim.model.train(), train_data_loader, n_iter=2000)\n",
    "\n",
    "# Evaluate accuracy after QAT\n",
    "top1, top5 = evaluate(sim.model.eval(), test_data_loader)\n",
    "print(\"Fake-quantized Accuracy (after QAT):\")\n",
    "print(f\"  * Top-1: {top1 * 100:.2f}%\")\n",
    "print(f\"  * Top-5: {top5 * 100:.2f}%\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c0728e98-50c1-4768-b8a9-fb6c2448937c",
   "metadata": {},
   "source": [
    "## 🏁 Conclusion\n",
    "In this notebook, we demonstrated how to apply **Quantization-Aware Training (QAT)** to a pretrained **Vision Transformer (ViT)** model using **AIMET**. Starting from a floating-point baseline, we simulated W4A8 quantization and observed a severe drop in accuracy (top-1 falling from 81.07% to 8.16%). By fine-tuning the quantization parameters through QAT, we recovered most of the lost performance, reaching near-floating-point accuracy (79.21% top-1) in the quantized model.\n",
    "This workflow highlights the effectiveness of QAT in preparing models for deployment on resource-constrained hardware such as Qualcomm's Hexagon NPU. With proper dataset preparation, quantization simulation, and targeted fine-tuning, high-performance deep learning models can be made both efficient and accurate for real-world applications.\n",
    "\n",
    "| Model Type                        | Top-1 accuracy | Top-5 accuracy |\n",
    "|-----------------------------------|---------------:|---------------:|\n",
    "| Floating-point                    |    81.07%      |      95.32%    |\n",
    "| Fake-quantized (W4A8, before QAT) |     8.16%      |      20.06%    |\n",
    "| Fake-quantized (W4A8, after QAT)  |  **79.21%**    |    **94.54%**  |"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
