{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "2c45f1e3",
   "metadata": {},
   "source": [
    "### Step 4: Comparing Depth and Width Pruned Models\n",
    "\n",
    "Here is a plot of the validation loss from running distillation for both models:\n",
    "\n",
    "<img src=\"./imgs/val_loss_comparison.png\" width=\"600px\" alt=\"Validation Loss comparison between depth and width pruned models\">\n",
    "\n",
    "\n",
    "#### Step 4.1: Convert Pruned Models to Hugging Face Format\n",
    "Let's convert the pruned models back to Hugging Face format and evaluate them on the MMLU benchmark using [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness/)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a367b03c",
   "metadata": {},
   "outputs": [],
   "source": [
    "ROOT_DIR = \"/workspace\"\n",
    "DEPTH_PRUNED_MODEL_DIR = f\"{ROOT_DIR}/Qwen3-8B-nemo-depth-pruned-distill\"\n",
    "WIDTH_PRUNED_MODEL_DIR = f\"{ROOT_DIR}/Qwen3-8B-nemo-width-pruned-distill\"\n",
    "\n",
    "!python -c 'from nemo.collections import llm; llm.export_ckpt(path=\"{DEPTH_PRUNED_MODEL_DIR}/checkpoints/best\", target=\"hf\", output_path=\"{DEPTH_PRUNED_MODEL_DIR}/checkpoints/best_hf\")'\n",
    "!python -c 'from nemo.collections import llm; llm.export_ckpt(path=\"{WIDTH_PRUNED_MODEL_DIR}/checkpoints/best\", target=\"hf\", output_path=\"{WIDTH_PRUNED_MODEL_DIR}/checkpoints/best_hf\")'"
   ]
  },
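  {
   "cell_type": "markdown",
   "id": "f3a1c9d2",
   "metadata": {},
   "source": [
    "Optionally, as a quick sanity check that the export produced loadable Hugging Face checkpoints, we can read back the model configs (this assumes the export above completed successfully):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b4d2e8a1",
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import AutoConfig\n",
    "\n",
    "# Sanity check: the exported directories should load as HF configs\n",
    "for d in (DEPTH_PRUNED_MODEL_DIR, WIDTH_PRUNED_MODEL_DIR):\n",
    "    cfg = AutoConfig.from_pretrained(f\"{d}/checkpoints/best_hf\")\n",
    "    print(d, cfg.model_type, cfg.num_hidden_layers, cfg.hidden_size)"
   ]
  },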
  {
   "cell_type": "markdown",
   "id": "8b67e080",
   "metadata": {},
   "source": [
    "#### Step 4.2: Evaluate MMLU using LM Evaluation Harness\n",
    "\n",
    "Let's first install the LM Evaluation Harness library:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "873531e1",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip uninstall -y nvidia_lm_eval\n",
    "!pip install git+https://github.com/EleutherAI/lm-evaluation-harness.git@v0.4.8"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e391b025",
   "metadata": {},
   "source": [
    "Now, let's evaluate the MMLU benchmark for the original 8B model and the pruned models:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "39bcfd89",
   "metadata": {},
   "outputs": [],
   "source": [
    "!accelerate launch -m lm_eval --model hf --model_args pretrained=qwen/Qwen3-8B --batch_size 4 --seed 1234 --tasks mmlu --num_fewshot 5"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ff1d9b68",
   "metadata": {},
   "outputs": [],
   "source": [
    "!accelerate launch -m lm_eval --model hf --model_args pretrained=\"{DEPTH_PRUNED_MODEL_DIR}/checkpoints/best_hf\" --batch_size 4 --seed 1234 --tasks mmlu --num_fewshot 5"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "65453e52",
   "metadata": {},
   "outputs": [],
   "source": [
    "!accelerate launch -m lm_eval --model hf --model_args pretrained=\"{WIDTH_PRUNED_MODEL_DIR}/checkpoints/best_hf\" --batch_size 4 --seed 1234 --tasks mmlu --num_fewshot 5"
   ]
  },
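  {
   "cell_type": "markdown",
   "id": "a9c4e7f1",
   "metadata": {},
   "source": [
    "The `lm_eval` runs above print a results table to stdout. If you also pass `--output_path results/`, the harness writes a JSON results file per run. Below is a minimal sketch for collecting the aggregate MMLU accuracy from such files; it assumes the v0.4.x results schema, where the score lives under `results -> mmlu -> \"acc,none\"`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e5b7d209",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "from pathlib import Path\n",
    "\n",
    "def extract_mmlu_acc(results: dict) -> float:\n",
    "    # lm-evaluation-harness v0.4.x stores the aggregate MMLU accuracy\n",
    "    # under results[\"results\"][\"mmlu\"][\"acc,none\"]\n",
    "    return results[\"results\"][\"mmlu\"][\"acc,none\"]\n",
    "\n",
    "# Stub payload shaped like a harness results file, for illustration\n",
    "stub = {\"results\": {\"mmlu\": {\"acc,none\": 0.749}}}\n",
    "print(f\"MMLU: {extract_mmlu_acc(stub) * 100:.1f}\")\n",
    "\n",
    "# With real runs, loop over the JSON files written via --output_path:\n",
    "# for f in Path(\"results\").rglob(\"*.json\"):\n",
    "#     print(f, extract_mmlu_acc(json.loads(f.read_text())))"
   ]
  },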
  {
   "cell_type": "markdown",
   "id": "77f55c91",
   "metadata": {},
   "source": [
    "Here is a summary of the MMLU results after distillation on the small WikiText dataset (only ~125M tokens):\n",
    "\n",
    "| Model | MMLU |\n",
    "|-------|------|\n",
    "| Qwen3-8B | 74.9 |\n",
    "| Depth-Pruned 6B | 62.6 |\n",
    "| Width-Pruned 6B | 56.4 |\n",
    "| Qwen3-4B | 70.0 |\n",
    "\n",
    "> **NOTE:** The dataset used here is fairly small, so these results are not conclusive. In practice, with larger datasets, width-pruned models achieve higher MMLU scores, while depth-pruned models are faster at inference for the same parameter count. The accuracy gap narrows when higher-quality datasets are used for longer distillation.\n",
    "\n",
    "#### Importance of Dataset Quality\n",
    "\n",
    "If, instead of WikiText, we used higher-quality datasets such as [ClimbMix](https://huggingface.co/datasets/OptimalScale/ClimbMix) or [Nemotron-Pretraining-SFT-v1](https://huggingface.co/datasets/nvidia/Nemotron-Pretraining-SFT-v1), the MMLU score for the depth-pruned 6B model would be around **72.5** and **73**, respectively; the width-pruned model's score could be even higher. In that setup, distillation is performed for ~6K H100 GPU hours (96 nodes * 8 H100 GPUs * 8 hours) using **~90B tokens**, and further distillation on these datasets could yield even better results. Since Nemotron-Pretraining-SFT-v1 also contains high-quality coding, multilingual, and other task data, it would also improve other pre-training benchmarks beyond MMLU."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6d3304d1",
   "metadata": {},
   "source": [
    "### Next Steps\n",
    "\n",
    "So far, we have distilled the pruned models on a pre-training dataset, so the resulting models are base variants; hence, we compared all the models only on base-model benchmarks like MMLU. To use these models for reasoning tasks in practice, we also need to post-train them, which is something we will add to this tutorial in the near future.\n",
    "\n",
    "We can also further quantize these models to FP8 precision using [Model Optimizer](https://github.com/NVIDIA/Model-Optimizer/tree/main/examples/llm_ptq) and measure tokens per second (TPS) for inference. We observed that the depth-pruned 6B model is ~30% faster than Qwen3-4B and ~60% faster than Qwen3-8B when all three are quantized to FP8 precision on a single H100 GPU."
   ]
  }
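,
  {
   "cell_type": "markdown",
   "id": "c7e91f40",
   "metadata": {},
   "source": [
    "As a starting point for the FP8 experiment, here is a minimal post-training quantization sketch using Model Optimizer's Python API (`modelopt.torch.quantization`). The single-prompt calibration loop is a placeholder; refer to the linked `llm_ptq` examples for a complete recipe:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d8f2a6b3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal FP8 PTQ sketch with TensorRT Model Optimizer (nvidia-modelopt).\n",
    "# The calibration loop below is a placeholder; use a representative dataset.\n",
    "import modelopt.torch.quantization as mtq\n",
    "from transformers import AutoModelForCausalLM, AutoTokenizer\n",
    "\n",
    "model_path = f\"{DEPTH_PRUNED_MODEL_DIR}/checkpoints/best_hf\"\n",
    "tokenizer = AutoTokenizer.from_pretrained(model_path)\n",
    "model = AutoModelForCausalLM.from_pretrained(model_path, device_map=\"cuda\")\n",
    "\n",
    "def forward_loop(model):\n",
    "    # Run calibration data through the model to collect activation stats\n",
    "    batch = tokenizer([\"Hello, world!\"], return_tensors=\"pt\").to(model.device)\n",
    "    model(**batch)\n",
    "\n",
    "model = mtq.quantize(model, mtq.FP8_DEFAULT_CFG, forward_loop)"
   ]
  }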
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "base",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "name": "python",
   "version": "3.12.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
