{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Fine-tuning gpt-oss with NeMo Framework"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This notebook demonstrates how to apply LoRA fine-tuning to **gpt-oss-20b** using the [multilingual-customer-support-tickets](https://www.kaggle.com/datasets/tobiasbueck/multilingual-customer-support-tickets) dataset. Each entry in the dataset includes a customer email's subject and body, the priority level, the queue it was assigned to, and the agent's response.\n",
    "\n",
    "In multi-agent customer care systems, routing customer queries is a crucial task. It involves evaluating a query and directing it to the appropriate sub-agent for resolution. In this example, we will fine-tune the model to perform agent ticket routing, which involves determining the correct queue for a ticket based on its email subject and body."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Prerequisites"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> **NOTE:** Run this notebook inside the [NeMo Framework container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo) tag `25.07.gpt_oss` which includes all required dependencies. See the tutorial README for instructions on downloading the container."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following cell installs dependencies to visualize the run configurations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture\n",
    "\n",
    "!apt-get update && apt-get install -y graphviz\n",
    "!pip install ipywidgets"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Part I: Prepare the Dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import json\n",
    "import random\n",
    "import pandas as pd\n",
    "random.seed(42)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following cell inspects the dataset and drops rows with missing data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "DATA_DIR = \"/nemo-experiments/data/customer-ticket-routing\"\n",
    "\n",
    "# Load the customer support data\n",
    "df = pd.read_csv(os.path.join(DATA_DIR, \"aa_dataset-tickets-multi-lang-5-2-50-version.csv\"))\n",
    "\n",
    "# Remove rows with missing values\n",
    "df = df.dropna(subset=['subject', 'body', 'queue', 'type'])\n",
    "df.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Configure the train/validation/test split ratios and the output directory:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Set your split ratios\n",
    "TRAIN_RATIO = 0.9\n",
    "VAL_RATIO = 0.09\n",
    "TEST_RATIO = 0.01\n",
    "\n",
    "PREPARED_DATA_DIR = os.path.join(DATA_DIR, \"prepared-data\")\n",
    "os.makedirs(PREPARED_DATA_DIR, exist_ok=True)"
   ]
  },
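  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small guard can catch misconfigured ratios before the splits are generated (a minimal sketch using the variables defined above):\n",
    "\n",
    "```python\n",
    "# The three ratios should cover the whole dataset\n",
    "assert abs(TRAIN_RATIO + VAL_RATIO + TEST_RATIO - 1.0) < 1e-9, \\\n",
    "    \"Split ratios must sum to 1.0\"\n",
    "```"
   ]
  },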
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Transform each row into a prompt/completion pair that defines the routing task:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# This list will hold all of our transformed data points.\n",
    "transformed_data = []\n",
    "\n",
    "def create_prompt(subject, body):\n",
    "    \"\"\"\n",
    "    Creates a standardized prompt for the language model.\n",
    "    \"\"\"\n",
    "    return f\"A customer has submitted a support ticket. Please route it to the correct department.\\n\\nSubject: {subject}\\n\\nBody: {body}\\n\\nDepartment:\"\n",
    "\n",
    "\n",
    "# Iterate over each row of the DataFrame to create the prompt-completion pairs.\n",
    "for index, row in df.iterrows():\n",
    "    prompt = create_prompt(row['subject'], row['body'])\n",
     "    completion = row['queue']\n",
    "    \n",
    "    transformed_data.append({\n",
    "        \"input\": prompt,\n",
    "        \"output\": f\"{completion}\"\n",
    "    })\n",
    "\n",
    "\n",
    "random.shuffle(transformed_data)\n",
    "n = len(transformed_data)\n",
    "\n",
    "# Calculate split indices\n",
    "train_end = int(n * TRAIN_RATIO)\n",
    "val_end = train_end + int(n * VAL_RATIO)\n",
    "\n",
    "train_data = transformed_data[:train_end]\n",
    "val_data = transformed_data[train_end:val_end]\n",
    "test_data = transformed_data[val_end:]\n",
    "\n",
    "def save_jsonl(data, filename):\n",
    "    with open(filename, 'w') as f:\n",
    "        for entry in data:\n",
    "            json.dump(entry, f)\n",
    "            f.write('\\n')\n",
    "\n",
    "# Save each split\n",
    "save_jsonl(train_data, os.path.join(PREPARED_DATA_DIR, \"training.jsonl\"))\n",
    "save_jsonl(val_data, os.path.join(PREPARED_DATA_DIR, \"validation.jsonl\"))\n",
    "save_jsonl(test_data, os.path.join(PREPARED_DATA_DIR, \"test.jsonl\"))\n",
    "\n",
    "print(f\"Total records: {n}\")\n",
    "print(f\"Train: {len(train_data)}, Val: {len(val_data)}, Test: {len(test_data)}\")\n",
    "print(f\"Saved to {PREPARED_DATA_DIR}\")"
   ]
  },
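  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each record in the JSONL files is a simple input/output pair, which is the format the fine-tuning data module consumes later. As a quick sanity check, you can peek at the first training record (a minimal sketch, assuming the files above were written successfully):\n",
    "\n",
    "```python\n",
    "import json\n",
    "import os\n",
    "\n",
    "# Read and pretty-print the first prompt/completion pair\n",
    "with open(os.path.join(PREPARED_DATA_DIR, \"training.jsonl\")) as f:\n",
    "    first_record = json.loads(f.readline())\n",
    "\n",
    "print(json.dumps(first_record, indent=2)[:500])\n",
    "```"
   ]
  },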
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inspect the prepared data\n",
    "!ls {PREPARED_DATA_DIR}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "## Part II: Finetune with NeMo Framework"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "\n",
    "import nemo_run as run\n",
    "from nemo import lightning as nl\n",
    "from nemo.collections import llm\n",
    "from nemo.collections.llm.recipes.precision.mixed_precision import bf16_mixed"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define directories for intermediate artifacts\n",
    "NEMO_MODELS_CACHE = \"/nemo-experiments/models-cache\"\n",
    "NEMO_DATASETS_CACHE = \"/nemo-experiments/data-cache\"\n",
    "\n",
    "os.environ[\"NEMO_DATASETS_CACHE\"] = NEMO_DATASETS_CACHE\n",
    "os.environ[\"NEMO_MODELS_CACHE\"] = NEMO_MODELS_CACHE\n",
    "\n",
    "\n",
    "# Configure the number of GPUs to use\n",
    "NUM_GPU_DEVICES = 1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(Required) Configure your Hugging Face token"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from getpass import getpass\n",
    "from huggingface_hub import login\n",
    "\n",
    "login(token=getpass(\"Input your HF Access Token\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(Optional) Configure your [WandB](https://wandb.ai/) token for experiment tracking.\n",
    "\n",
    "Leave empty and press \"Enter\" / skip this step if you don't wish to track with WandB."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "import wandb\n",
     "\n",
     "WANDB_API_KEY = getpass(\"Your Wandb API Key:\")\n",
     "\n",
     "# Only attempt login if a key was provided\n",
     "if WANDB_API_KEY:\n",
     "    wandb.login(key=WANDB_API_KEY)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Step 1. Import the Hugging Face Checkpoint\n",
     "The following code uses the `llm.import_ckpt` API, which accepts a source in the `hf://` URL format: either a Hugging Face model ID (which is downloaded automatically) or, as in this example, the path to a locally downloaded checkpoint. It then converts the model into NeMo 2.0 format."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "```python\n",
    "llm.import_ckpt(model=llm.GPTOSSModel(llm.GPTOSSConfig20B()), source=\"hf:///nemo-experiments/models/gpt-oss-20b\")\n",
    "```\n",
     "Below, we wrap this call with `run.Partial` to configure it, and then execute it. The `run.*` primitives are part of [NeMo-Run](https://github.com/NVIDIA-NeMo/Run), which lets you configure, launch, and manage experiments at scale, whether locally, on SLURM clusters, or in cloud environments, all from a Jupyter notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# You can just as easily swap out the model with the 120B variant, or execute this on a remote cluster.\n",
    "\n",
    "def configure_checkpoint_conversion():\n",
    "    return run.Partial(\n",
    "        llm.import_ckpt,\n",
     "        model=run.Config(llm.GPTOSSModel, config=run.Config(llm.GPTOSSConfig20B)),\n",
    "        source=\"hf:///nemo-experiments/models/gpt-oss-20b\",\n",
    "        overwrite=False,\n",
    "    )\n",
    "    \n",
    "# Run your experiment locally\n",
    "run.run(configure_checkpoint_conversion(), executor=run.LocalExecutor())\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The above step converts the checkpoint to NeMo format and saves it to the directory specified by the `NEMO_MODELS_CACHE` environment variable."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!ls $NEMO_MODELS_CACHE/gpt-oss-20b"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "### Step 2. Configure the Fine-tuning Run"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "NeMo Framework provides recipes for fine-tuning and pretraining supported models. Below, we instantiate the fine-tuning recipe for `gpt-oss-20b`.\n",
     "\n",
     "> **NOTE**: We specify LoRA below, but full supervised fine-tuning (SFT) can also be performed by setting `peft_scheme='none'`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "recipe = llm.gpt_oss_20b.finetune_recipe(\n",
    "    name=\"gpt_oss_20b_finetuning\",\n",
    "    dir=\"/nemo-experiments/\",\n",
    "    num_nodes=1,\n",
    "    num_gpus_per_node=NUM_GPU_DEVICES,\n",
    "    peft_scheme='lora',  # 'lora', 'none' (for SFT)\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "#### 2.1: Configure the Dataloader"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since we already have the data in input/output format, we can use the [`FineTuningDataModule`](https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/llm/gpt/data/fine_tuning.py) directly. \n",
    "\n",
     "You can also subclass this module (e.g., [DollyDataModule](https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/llm/gpt/data/dolly.py) for the Dolly dataset) to define your own data preparation format and logic."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from nemo.collections.llm.gpt.data.fine_tuning import FineTuningDataModule\n",
    "\n",
    "dataloader = run.Config(\n",
    "        FineTuningDataModule,\n",
    "        dataset_root=PREPARED_DATA_DIR,\n",
    "        seq_length=2048,\n",
    "        micro_batch_size=4,\n",
    "        global_batch_size=64\n",
    "    )\n",
    "\n",
    "# Configure the recipe\n",
    "recipe.data = dataloader\n",
    "\n",
    "# Visualize the dataloader\n",
    "dataloader\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "#### 2.2: Configure the Logger"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "The following example demonstrates how to set up the logger with a specific WandB project and run name. Additional configurations, such as checkpointing details, can also be specified."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from lightning.pytorch.loggers import WandbLogger\n",
    "\n",
    "LOG_DIR = \"/nemo-experiments/results\"\n",
    "LOG_NAME = \"nemo2_gpt_oss_sft_customer_ticket_routing\"\n",
    "\n",
    "def logger() -> run.Config[nl.NeMoLogger]:\n",
    "    ckpt = run.Config(\n",
    "        nl.ModelCheckpoint,\n",
    "        save_last=True,\n",
    "        every_n_train_steps=200,\n",
    "        monitor=\"reduced_train_loss\",\n",
    "        save_top_k=1,\n",
    "        save_on_train_epoch_end=True,\n",
    "        save_optim_on_train_end=True,\n",
    "    )\n",
    "\n",
    "    # Since WANDB was optional\n",
    "    if WANDB_API_KEY is not None and WANDB_API_KEY != \"\":\n",
    "        wandb_config = run.Config(\n",
    "            WandbLogger, project=\"NeMo_LoRA_Customer_Ticket_Routing\", name=\"Customer_Ticket_Routing\"\n",
    "        )\n",
    "    else:\n",
    "        wandb_config = None\n",
    "\n",
    "    return run.Config(\n",
    "        nl.NeMoLogger,\n",
    "        name=LOG_NAME,\n",
    "        log_dir=LOG_DIR,\n",
    "        use_datetime_version=False,\n",
    "        ckpt=ckpt,\n",
    "        wandb=wandb_config,\n",
    "    )\n",
    "\n",
    "recipe.log = logger()\n",
    "\n",
    "logger()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "#### 2.3: Configure AutoResume"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def resume() -> run.Config[nl.AutoResume]:\n",
    "    return run.Config(\n",
    "        nl.AutoResume,\n",
    "        restore_config=run.Config(\n",
    "            nl.RestoreConfig, path=f\"nemo:///{NEMO_MODELS_CACHE}/gpt-oss-20b\"\n",
    "        ),\n",
    "        resume_if_exists=True,\n",
    "    )\n",
    "    \n",
    "recipe.resume = resume()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "#### 2.4: Trainer Configurations"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "You can also adjust individual trainer and optimizer settings as needed. For example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "recipe.trainer.max_steps = 100\n",
    "recipe.trainer.val_check_interval = 25\n",
    "recipe.trainer.limit_val_batches = 2\n",
    "recipe.optim.config.lr = 2e-4\n",
    "\n",
    "# Let's visualize the recipe\n",
    "recipe\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
     "Several other parameters (e.g., optimizer and LoRA settings) are available to tweak. For example -\n",
    "\n",
    "```python\n",
    "# You may also configure the learning rate, optimizer, etc.\n",
    "recipe.optim.config.lr = 1e-4\n",
    "\n",
    "# Or tweak the LoRA parameters\n",
    "recipe.peft.dim = 8\n",
    "recipe.peft.alpha = 32\n",
    "recipe.peft.dropout = 0.1\n",
    "recipe.peft.target_modules = ['linear_qkv', 'linear_proj']\n",
    "\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "### Step 3. Execute Finetuning"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The following cell executes the configured recipe to run fine-tuning locally.\n",
    "\n",
    "\n",
    "> **NOTE**: You can replace `run.LocalExecutor` with `run.SlurmExecutor` for SLURM cluster execution or `run.SkypilotExecutor` for cloud-based execution. For additional options and detailed guidance, please consult the [NeMo documentation](https://docs.nvidia.com/nemo-framework/user-guide/latest/nemorun/guides/execution.html)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "run.run(recipe, executor=run.LocalExecutor(ntasks_per_node=NUM_GPU_DEVICES))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "### Step 4. Run In-Framework Generation\n",
    "\n",
     "For a sanity check, we use the `llm.generate` API in NeMo 2.0 to generate samples from the trained checkpoint. Find your last saved checkpoint in your experiment's `results` directory:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "peft_ckpt_path = str(\n",
    "    next(\n",
    "        (\n",
    "            d\n",
    "            for d in Path(\n",
    "                LOG_DIR + \"/\" + LOG_NAME + \"/checkpoints/\"\n",
    "            ).iterdir()\n",
    "            if d.is_dir() and d.name.endswith(\"-last\")\n",
    "        ),\n",
    "        None,\n",
    "    )\n",
    ")\n",
    "print(\"We will load the PEFT checkpoint from:\", peft_ckpt_path)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# You should see weights and context directories\n",
    "!ls -ltr {peft_ckpt_path}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When using the `llm.generate` API, you can provide the dataloader (as we configured earlier), for example: `input_dataset=dataloader`. This will use the test set from the specified data module to generate predictions. In the example below, the generated predictions are saved to the `peft_predictions.txt` file.\n",
    "\n",
    "Generating predictions needs only 1 GPU (`tensor_model_parallel_size=1`). However, using multiple GPU devices can speed up inference.\n",
    "\n",
    "> **Note:** The execution of the following cell may take up to 10 minutes to complete, based on tests conducted using a single H100-80GB GPU. This in-framework inference is intended primarily for validation or sanity checks. For optimized inference, consider using solutions like NVIDIA NIM."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "RESULTS_DIR = \"/nemo-experiments/results/\"\n",
    "os.makedirs(RESULTS_DIR, exist_ok=True)\n",
    "\n",
    "\n",
    "OUTPUT_FILE = os.path.join(RESULTS_DIR, \"ctr-peft_prediction.jsonl\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true,
    "tags": []
   },
   "outputs": [],
   "source": [
    "from megatron.core.inference.common_inference_params import CommonInferenceParams\n",
    "\n",
    "\n",
    "def trainer() -> run.Config[nl.Trainer]:\n",
    "    strategy = run.Config(\n",
    "        nl.MegatronStrategy,\n",
    "        tensor_model_parallel_size=1,\n",
    "    )\n",
    "    trainer = run.Config(\n",
    "        nl.Trainer,\n",
    "        accelerator=\"gpu\",\n",
    "        devices=NUM_GPU_DEVICES,\n",
    "        num_nodes=1,\n",
    "        strategy=strategy,\n",
    "        plugins=bf16_mixed(),\n",
    "    )\n",
    "    return trainer\n",
    "\n",
    "\n",
    "def configure_inference():\n",
    "    return run.Partial(\n",
    "        llm.generate,\n",
    "        path=str(peft_ckpt_path),\n",
    "        trainer=trainer(),\n",
    "        input_dataset=dataloader,\n",
    "        inference_params=CommonInferenceParams(num_tokens_to_generate=50, top_k=1, return_log_probs=False, top_n_logprobs=0),\n",
    "        output_path=OUTPUT_FILE,\n",
    "        enable_flash_decode=False\n",
    "    )\n",
    "\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    run.run(\n",
    "        configure_inference(), executor=run.LocalExecutor(ntasks_per_node=NUM_GPU_DEVICES)\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "After the inference is complete, inspect the first couple of predictions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "!head -n 2 {OUTPUT_FILE} | jq"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You should see output similar to the following:\n",
    "```json\n",
    "{\n",
     "  \"input\": \"A customer has submitted a support ticket. Please route it to the correct department.\\n\\nSubject: Support for ClickUp\\n\\nBody: have encountered recurring crashes when using ClickUp with Microsoft SQL Server 2019. The problem could be related to compatibility issues between software versions.\\n\\nDepartment:\",\n",
    "  \"label\": \"Technical Support\",\n",
    "  \"prediction\": \" Technical Support\"\n",
    "}\n",
    "\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "### Step 5. Calculate Evaluation Metric\n",
    "\n",
    "We can evaluate the model's predictions by calculating the F1 score."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "import json\n",
    "from sklearn.metrics import f1_score\n",
    "\n",
    "labels = []\n",
    "predictions = []\n",
    "\n",
    "# Read the jsonl file and extract labels and predictions\n",
    "with open(OUTPUT_FILE, \"r\") as f:\n",
    "    for line in f:\n",
    "        item = json.loads(line)\n",
    "        labels.append(item[\"label\"])\n",
    "        predictions.append(item[\"prediction\"])\n",
    "\n",
    "\n",
    "# Clean up whitespace for fair comparison\n",
    "clean_labels = [label.strip().lower() for label in labels]\n",
    "clean_preds = [pred.strip().lower() for pred in predictions]\n",
    "\n",
    "f1 = f1_score(clean_labels, clean_preds, average='micro')\n",
    "\n",
    "print(f\"F1 score (micro): {f1:.4f}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**NOTE**: If you inspect the dataset, you will notice that some of the ground-truth labels are ambiguous even to a human annotator, as is common in real-world datasets. For example, the distinction between \"Technical Support\" and \"Product Support\" can at times be hard to tell."
   ]
  },
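  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since each ticket has exactly one ground-truth label, the micro-averaged F1 score above is equivalent to plain accuracy. For a per-queue breakdown that surfaces confusable classes, a minimal sketch using scikit-learn's `classification_report` (assuming `clean_labels` and `clean_preds` from the previous cell):\n",
    "\n",
    "```python\n",
    "from sklearn.metrics import classification_report\n",
    "\n",
    "# Per-class precision/recall/F1; zero_division=0 silences warnings\n",
    "# for queues the model never predicted.\n",
    "print(classification_report(clean_labels, clean_preds, zero_division=0))\n",
    "```"
   ]
  },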
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "### Step 6. Export to Hugging Face Format\n",
    "\n",
    "The next step is to export the model to Hugging Face `.safetensors` format. This format can be ingested by NVIDIA NIM or vLLM for deployment."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before exporting, let's merge the LoRA weights into the base model weights to have a unified finetuned checkpoint."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "### Merge LoRA Weights with Base Model Weights\n",
    "\n",
    "\n",
    "def merge_lora_with_base_model():\n",
    "    return run.Partial(\n",
    "        llm.peft.merge_lora,\n",
    "        lora_checkpoint_path=peft_ckpt_path,\n",
    "        output_path=peft_ckpt_path + \"_merged\",\n",
    "    )\n",
    "\n",
    "\n",
    "local_executor = run.LocalExecutor()\n",
    "run.run(merge_lora_with_base_model(), executor=local_executor)\n",
    "print(f\"Merged LoRA weights with base model weights to: {peft_ckpt_path + '_merged'}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The following cell uses the `llm.export_ckpt` API, wrapped in NeMo-Run's `run.Partial` primitive, and then executes it.\n",
     "\n",
     "It's worth noting that `target=\"hf\"` indicates exporting a full-weights checkpoint to Hugging Face format."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Configure the export directory\n",
    "EXPORT_DIR = \"/nemo-experiments/models/gpt-oss-ctr-finetuned\"\n",
    "\n",
    "def configure_export_ckpt():\n",
    "    return run.Partial(\n",
    "        llm.export_ckpt,\n",
    "        path=peft_ckpt_path + \"_merged\", # Use the merged checkpoint path\n",
    "        target=\"hf\",\n",
    "        output_path=EXPORT_DIR,\n",
    "        overwrite=True\n",
    "    )\n",
    "\n",
    "\n",
    "local_executor = run.LocalExecutor()\n",
    "run.run(configure_export_ckpt(), executor=local_executor)\n",
    "print(f\"Exported Hugging Face model to: {EXPORT_DIR}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!ls {EXPORT_DIR}"
   ]
  },
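  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check that the export produced a loadable Hugging Face checkpoint, you can parse its `config.json` (a minimal sketch; the exact keys present depend on the model architecture):\n",
    "\n",
    "```python\n",
    "import json\n",
    "import os\n",
    "\n",
    "# A valid Hugging Face checkpoint directory contains a parseable config.json\n",
    "with open(os.path.join(EXPORT_DIR, \"config.json\")) as f:\n",
    "    hf_config = json.load(f)\n",
    "\n",
    "print(hf_config.get(\"model_type\"), hf_config.get(\"architectures\"))\n",
    "```"
   ]
  },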
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "At this point, we have a finetuned `gpt-oss-20b` checkpoint ready to deploy!"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
