{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "09d30e35-8e9d-4d2e-bd14-738c627a3963",
   "metadata": {},
   "source": [
    "### Step 3: Distill knowledge from teacher into pruned students\n",
    "In this step, we will distill the depth and width pruned models using Knowledge Distillation. For usage details, please refer to the [distillation docs](https://docs.nvidia.com/nemo-framework/user-guide/latest/model-optimization/distillation/distillation.html) for more details.\n",
    "\n",
    "Let's define the common parameters for distillation of depth or width pruned models first.\n",
    "\n",
    "> `NOTE:` While this notebook uses the `wikitext` dataset as it is the most easy to get started with, in practice, we recommend using bigger, more recent and much higher quality datasets like [ClimbMix](https://huggingface.co/datasets/OptimalScale/ClimbMix) or [Nemotron-Pretraining-SFT-v1](https://huggingface.co/datasets/nvidia/Nemotron-Pretraining-SFT-v1). The WikiText dataset only has ~125M tokens while in practice, we recommend distilling the pruned model for ~50-100B tokens. Generally, the larger the dataset, the better the pruned model will perform; and the more aggressive the pruning, the more tokens are needed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "17ad5860",
   "metadata": {},
   "outputs": [],
   "source": [
    "from math import ceil\n",
    "\n",
    "\n",
    "NEMO_ROOT = \"/opt/NeMo\"\n",
    "ROOT_DIR = \"/workspace\"\n",
    "TEACHER_MODEL_PATH = f\"{ROOT_DIR}/Qwen3-8B-nemo\"\n",
    "\n",
    "##### Set data paths\n",
    "# NOTE: If you have multiple partitioned datasets, you can pass in a space-separated list of paths below.\n",
    "DATA_PATH = f\"{ROOT_DIR}/wikitext-data\"\n",
    "DATA_PATHS = f\"{DATA_PATH}/wikitext-train_text_document\"\n",
    "INDEX_MAPPING_DIR = f\"{DATA_PATH}/index_mappings\"\n",
    "# NOTE: Update this to the number according to your dataset\n",
    "NUM_TOKENS = int(125e6)\n",
    "NUM_VAL_TOKENS = int(NUM_TOKENS * 0.01)\n",
    "\n",
    "##### Set Training Parameters\n",
    "# NOTE: Use 4096 or 8192 Seq Len depending on whether your dataset texts are short or long\n",
    "SEQ_LENGTH = 4096\n",
    "# NOTE: GBS 768 and LR 1e-4 to 1e-5 generally works fine so dont change them unless you know what you are doing\n",
    "GLOBAL_BATCH_SIZE = 768\n",
    "LR = 1e-4\n",
    "MIN_LR = 1e-5\n",
    "\n",
    "MAX_STEPS = ceil(NUM_TOKENS / (SEQ_LENGTH * GLOBAL_BATCH_SIZE))\n",
    "WARMUP_STEPS = min(100, ceil(MAX_STEPS / 10))\n",
    "LOG_INTERVAL = min(100, ceil(MAX_STEPS / 10))\n",
    "VAL_CHECK_INTERVAL = min(100, ceil(MAX_STEPS / 10))\n",
    "LIMIT_VAL_BATCHES = min(32, ceil(NUM_VAL_TOKENS / (SEQ_LENGTH * GLOBAL_BATCH_SIZE)))\n",
    "\n",
    "# Change these to accommodate your resources\n",
    "DEVICES = 8\n",
    "NODES = 1\n",
    "TENSOR_PARALLEL_SIZE = DEVICES\n",
    "PIPELINE_PARALLEL_SIZE = 1\n",
    "# NOTE: Use as large of a micro batch size as your GPU can handle for better utilization\n",
    "MICRO_BATCH_SIZE = 8\n",
    "\n",
    "\n",
    "print(\"Training parameters:\")\n",
    "for k, v in list(locals().items()):\n",
    "    if not k.startswith('_') and k.upper() == k:\n",
    "        print(\"\\t\", k, v)"
   ]
  },
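  {
   "cell_type": "markdown",
   "id": "a1b2c3d4-5e6f-4a7b-8c9d-0e1f2a3b4c5d",
   "metadata": {},
   "source": [
    "As a quick sanity check of the schedule above (assuming the default values set in this notebook): each optimizer step consumes `SEQ_LENGTH * GLOBAL_BATCH_SIZE` = 4096 * 768 = 3,145,728 tokens, so `MAX_STEPS = ceil(125e6 / 3145728) = 40`. On a ~100B-token dataset, the same formula gives roughly 31,800 steps, which is why larger datasets require substantially longer runs."
   ]
  },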
  {
   "cell_type": "markdown",
   "id": "c33cf641-0d27-417f-b3ee-c06701698184",
   "metadata": {},
   "source": [
    "#### Step 3a: Distilling depth-pruned student\n",
    "While distilling knowledge from the teacher to depth-pruned model, the `student_model_path` model would be  `<ROOT_DIR>/Qwen3-8B-nemo-depth-pruned` as produced by the depth-pruning step in the [pruning](./02_pruning.ipynb) notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5d23a01e-4912-47cb-bf21-b4fd72007ec1",
   "metadata": {
    "scrolled": true,
    "tags": []
   },
   "outputs": [],
   "source": [
    "STUDENT_MODEL_PATH = f\"{ROOT_DIR}/Qwen3-8B-nemo-depth-pruned\"\n",
    "LOG_DIR = ROOT_DIR\n",
    "EXP_NAME = \"Qwen3-8B-nemo-depth-pruned-distill\"\n",
    "\n",
    "!torchrun --nproc_per_node \"{DEVICES}\" \"{NEMO_ROOT}/scripts/llm/gpt_train.py\" \\\n",
    "    --name \"{EXP_NAME}\" \\\n",
    "    --devices \"{DEVICES}\" \\\n",
    "    --num_nodes \"{NODES}\" \\\n",
    "    --tp_size \"{TENSOR_PARALLEL_SIZE}\" \\\n",
    "    --pp_size \"{PIPELINE_PARALLEL_SIZE}\" \\\n",
    "    --model_path \"{STUDENT_MODEL_PATH}\" \\\n",
    "    --teacher_path \"{TEACHER_MODEL_PATH}\" \\\n",
    "    --legacy_ckpt \\\n",
    "    --max_steps \"{MAX_STEPS}\" \\\n",
    "    --warmup_steps \"{WARMUP_STEPS}\" \\\n",
    "    --gbs \"{GLOBAL_BATCH_SIZE}\" \\\n",
    "    --mbs \"{MICRO_BATCH_SIZE}\" \\\n",
    "    --lr \"{LR}\" \\\n",
    "    --min_lr \"{MIN_LR}\" \\\n",
    "    --seq_length \"{SEQ_LENGTH}\" \\\n",
    "    --log_dir \"{LOG_DIR}\" \\\n",
    "    --log_interval \"{LOG_INTERVAL}\" \\\n",
    "    --val_check_interval \"{VAL_CHECK_INTERVAL}\" \\\n",
    "    --limit_val_batches \"{LIMIT_VAL_BATCHES}\" \\\n",
    "    --data_paths \"{DATA_PATHS}\" \\\n",
    "    --index_mapping_dir \"{INDEX_MAPPING_DIR}\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "42d910d9-14dd-44ba-bf2c-0064737c70fa",
   "metadata": {},
   "source": [
    "This will create the final distilled model at something like `<ROOT_DIR>/Qwen3-8B-nemo-depth-distilled/checkpoints/{model_name}--{val_loss:.2f}-{step}-{consumed_samples}`. Exact path depends on your distillation run. For simpicity in next steps, we can rename it to `<ROOT_DIR>/Qwen3-8B-nemo-depth-distilled/checkpoints/best`.\n",
    "\n",
    "> `NOTE:`This script takes about 1 hour on 8x H100 to generate the final distilled model."
   ]
  },
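  {
   "cell_type": "markdown",
   "id": "b2c3d4e5-6f7a-4b8c-9d0e-1f2a3b4c5d6e",
   "metadata": {},
   "source": [
    "A minimal sketch of that rename, assuming checkpoints land under `{LOG_DIR}/{EXP_NAME}/checkpoints` and that the most recently modified entry is the final checkpoint (verify the paths against your own run before executing):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c3d4e5f6-7a8b-4c9d-0e1f-2a3b4c5d6e7f",
   "metadata": {},
   "outputs": [],
   "source": [
    "import glob\n",
    "import os\n",
    "\n",
    "# Rename the most recently written checkpoint to \"best\" (assumed layout)\n",
    "ckpt_dir = f\"{LOG_DIR}/{EXP_NAME}/checkpoints\"\n",
    "ckpts = sorted(glob.glob(f\"{ckpt_dir}/*\"), key=os.path.getmtime)\n",
    "if ckpts and not os.path.exists(f\"{ckpt_dir}/best\"):\n",
    "    os.rename(ckpts[-1], f\"{ckpt_dir}/best\")\n",
    "    print(f\"Renamed {ckpts[-1]} -> {ckpt_dir}/best\")"
   ]
  },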
  {
   "cell_type": "markdown",
   "id": "cfa05542",
   "metadata": {},
   "source": [
    "#### Step 3b: Distilling width-pruned student\n",
    "While distilling knowledge from the teacher to width-pruned model, the `student_model_path` model would be  `<ROOT_DIR>/Qwen3-8B-nemo-width-pruned` as produced by the width-pruning step in the [pruning](./02_pruning.ipynb) notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "66c2c44a",
   "metadata": {},
   "outputs": [],
   "source": [
    "STUDENT_MODEL_PATH = f\"{ROOT_DIR}/Qwen3-8B-nemo-width-pruned\"\n",
    "LOG_DIR = ROOT_DIR\n",
    "EXP_NAME = \"Qwen3-8B-nemo-width-pruned-distill\"\n",
    "\n",
    "!torchrun --nproc_per_node \"{DEVICES}\" \"{NEMO_ROOT}/scripts/llm/gpt_train.py\" \\\n",
    "    --name \"{EXP_NAME}\" \\\n",
    "    --devices \"{DEVICES}\" \\\n",
    "    --num_nodes \"{NODES}\" \\\n",
    "    --tp_size \"{TENSOR_PARALLEL_SIZE}\" \\\n",
    "    --pp_size \"{PIPELINE_PARALLEL_SIZE}\" \\\n",
    "    --model_path \"{STUDENT_MODEL_PATH}\" \\\n",
    "    --teacher_path \"{TEACHER_MODEL_PATH}\" \\\n",
    "    --legacy_ckpt \\\n",
    "    --max_steps \"{MAX_STEPS}\" \\\n",
    "    --warmup_steps \"{WARMUP_STEPS}\" \\\n",
    "    --gbs \"{GLOBAL_BATCH_SIZE}\" \\\n",
    "    --mbs \"{MICRO_BATCH_SIZE}\" \\\n",
    "    --lr \"{LR}\" \\\n",
    "    --min_lr \"{MIN_LR}\" \\\n",
    "    --seq_length \"{SEQ_LENGTH}\" \\\n",
    "    --log_dir \"{LOG_DIR}\" \\\n",
    "    --log_interval \"{LOG_INTERVAL}\" \\\n",
    "    --val_check_interval \"{VAL_CHECK_INTERVAL}\" \\\n",
    "    --limit_val_batches \"{LIMIT_VAL_BATCHES}\" \\\n",
    "    --data_paths \"{DATA_PATHS}\" \\\n",
    "    --index_mapping_dir \"{INDEX_MAPPING_DIR}\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0638f6d9",
   "metadata": {},
   "source": [
    "This will create the final distilled model at something like `<ROOT_DIR>/Qwen3-8B-nemo-width-distilled/checkpoints/{model_name}--{val_loss:.2f}-{step}-{consumed_samples}`. Exact path depends on your distillation run. For simpicity in next steps, we can rename it to `<ROOT_DIR>/Qwen3-8B-nemo-width-distilled/checkpoints/best`.\n",
    "\n",
    "> `NOTE:`This script takes about 1 hour on 8x H100 to generate the final distilled model.\n"
   ]
  },
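  {
   "cell_type": "markdown",
   "id": "d4e5f6a7-8b9c-4d0e-1f2a-3b4c5d6e7f8a",
   "metadata": {},
   "source": [
    "The rename for the width-pruned run can be sketched the same way; this again assumes checkpoints land under `{LOG_DIR}/{EXP_NAME}/checkpoints` and that the most recently modified entry is the final one, so double-check the paths for your run:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e5f6a7b8-9c0d-4e1f-2a3b-4c5d6e7f8a9b",
   "metadata": {},
   "outputs": [],
   "source": [
    "import glob\n",
    "import os\n",
    "\n",
    "# Rename the most recently written checkpoint to \"best\" (assumed layout)\n",
    "ckpt_dir = f\"{LOG_DIR}/{EXP_NAME}/checkpoints\"\n",
    "ckpts = sorted(glob.glob(f\"{ckpt_dir}/*\"), key=os.path.getmtime)\n",
    "if ckpts and not os.path.exists(f\"{ckpt_dir}/best\"):\n",
    "    os.rename(ckpts[-1], f\"{ckpt_dir}/best\")\n",
    "    print(f\"Renamed {ckpts[-1]} -> {ckpt_dir}/best\")"
   ]
  },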
  {
   "cell_type": "markdown",
   "id": "c75df30d",
   "metadata": {},
   "source": [
    "Checkout the next notebook to compare the depth and width pruned models."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
