{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/AI-Hypercomputer/maxtext/blob/main/src/MaxText/examples/sft_llama3_demo.ipynb)\n",
    "\n",
    "# Llama3.1-8B-Instruct Supervised Fine-Tuning (SFT) Demo\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Overview\n",
    "\n",
    "This notebook can run on **TPU v5e-8** or **v5p-8**\n",
    "\n",
    "This notebook demonstrates how to perform Supervised Fine-Tuning (SFT) on Llama3.1-8B-Instruct using the Hugging Face ultrachat_200k dataset with MaxText and Tunix integration for efficient training.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prerequisites\n",
    "\n",
    "### Change Runtime Type\n",
    "\n",
    "**Instructions:**\n",
    "1.  Navigate to the menu at the top of the screen.\n",
    "2.  Click on **Runtime**.\n",
    "3.  Select **Change runtime type** from the dropdown menu.\n",
    "4.  Select **v5e-8** or **v5p-8 TPU** as the **Hardware accelerator**.\n",
    "5. Click on **Save**.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Get Your Hugging Face Token\n",
    "\n",
    "To access model checkpoint from the Hugging Face Hub, you need to authenticate with a personal access token.\n",
    "\n",
    "**Follow these steps to get your token:**\n",
    "\n",
    "1.  **Navigate to the Access Tokens page** in your Hugging Face account settings. You can go there directly by visiting this URL:\n",
    "    *   [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens)\n",
    "\n",
    "2.  **Create a new token** by clicking the **\"+ Create new token\"** button.\n",
    "\n",
    "3.  **Give your token a name** and assign it a **`read` role**. The `read` role is sufficient for downloading models.\n",
    "\n",
    "4.  **Copy the generated token**. You will need to paste it in the next step.\n",
    "\n",
    "**Follow these steps to store your token:**\n",
    "\n",
    "Just put your token in the cell below"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "HF_TOKEN=\"\""
   ]
  },
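  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before moving on, you can optionally sanity-check the token. A minimal sketch (the `hf_` prefix check is a heuristic for user access tokens, not an official guarantee):\n",
    "\n",
    "```python\n",
    "# Hypothetical fail-fast check: user access tokens normally start with 'hf_'\n",
    "assert HF_TOKEN.startswith('hf_'), 'HF_TOKEN looks empty or malformed'\n",
    "print('Token format looks OK')\n",
    "```\n"
   ]
  },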
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Install Dependencies\n",
    "\n",
    "https://maxtext.readthedocs.io/en/latest/tutorials/posttraining/sft.html#install-dependencies"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Set up the maxtext environment"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# If you have cloned the MaxText repo, change into the maxtext/src folder;\n",
     "# otherwise, skip this cell.\n",
     "# Note: use the %cd magic rather than !cd -- a !cd runs in a subshell and does not persist.\n",
     "%cd ~/maxtext/src/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import MaxText\n",
    "from MaxText import pyconfig\n",
    "from MaxText.sft.sft_trainer import train as sft_train\n",
    "import jax\n",
    "import os\n",
    "# Hugging Face Authentication Setup\n",
    "from huggingface_hub import login\n",
    "\n",
    "\n",
    "MAXTEXT_REPO_ROOT = os.path.dirname(MaxText.__file__)\n",
    "print(f\"MaxText installation path: {MAXTEXT_REPO_ROOT}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "if HF_TOKEN:\n",
    "    login(token=HF_TOKEN)\n",
    "    print(\"Authenticated with Hugging Face\")\n",
    "else:\n",
    "    print(\"Authentication failed: Hugging Face token not set\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## Set the model, checkpoint path and output directory\n",
    "MODEL_NAME = \"llama3.1-8b\"\n",
    "# set the path to the model checkpoint or leave empty to download from HuggingFace\n",
    "MODEL_CHECKPOINT_PATH = \"\"\n",
    "if not MODEL_CHECKPOINT_PATH:\n",
    "   MODEL_CHECKPOINT_PATH = f\"{MAXTEXT_REPO_ROOT}/llama_checkpoint\"\n",
    "   print(\"Model checkpoint will be downloaded from HuggingFace at: \",  MODEL_CHECKPOINT_PATH)\n",
    "   print(\"Set MODEL_CHECKPOINT_PATH if you do not wish to download the checkpoint.\")\n",
    "\n",
    "BASE_OUTPUT_DIRECTORY = \"\"\n",
    "if not BASE_OUTPUT_DIRECTORY:\n",
    "   print(\"Please set BASE_OUTPUT_DIRECTORY to store output/logs.\")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# This is the command to convert the HF model to the MaxText format \n",
    "if not os.path.exists(MODEL_CHECKPOINT_PATH):\n",
    "    !python3 -m MaxText.utils.ckpt_conversion.to_maxtext \\\n",
    "        $MAXTEXT_REPO_ROOT/configs/base.yml \\\n",
    "        model_name=$MODEL_NAME \\\n",
    "        base_output_directory=$MODEL_CHECKPOINT_PATH \\\n",
    "        hf_access_token=$HF_TOKEN \\\n",
    "        use_multimodal=false \\\n",
    "        scan_layers=false"
   ]
  },
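  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After the conversion finishes, you can quickly confirm the checkpoint landed where the training config expects it. A minimal sketch (the `0/items` layout is assumed from the `load_parameters_path` used later in this notebook):\n",
    "\n",
    "```python\n",
    "import os\n",
    "\n",
    "# The SFT config below loads parameters from <checkpoint>/0/items/\n",
    "ckpt_items = os.path.join(MODEL_CHECKPOINT_PATH, '0', 'items')\n",
    "print('found' if os.path.isdir(ckpt_items) else 'missing', ckpt_items)\n",
    "```\n"
   ]
  },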
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "CxzKMBQd_U5-"
   },
   "outputs": [],
   "source": [
    "# this is the code to initialize jax if it's not initialized in the cell above\n",
    "if not jax.distributed.is_initialized():\n",
    "  jax.distributed.initialize()\n",
    "print(f\"JAX version: {jax.__version__}\")\n",
    "print(f\"JAX devices: {jax.devices()}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# MaxText imports\n",
    "try:\n",
    "  MAXTEXT_AVAILABLE = True\n",
    "  print(\"✓ MaxText imports successful\")\n",
    "except ImportError as e:\n",
    "  print(f\"⚠️ MaxText not available: {e}\")\n",
    "  MAXTEXT_AVAILABLE = False"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "In-jdp1AAwrL"
   },
   "outputs": [],
   "source": [
    "# Fixed configuration setup for Llama3.1-8B on TPU\n",
    "if MAXTEXT_AVAILABLE:\n",
    "  config_argv = [\n",
    "      \"\",\n",
    "      f\"{MAXTEXT_REPO_ROOT}/configs/sft.yml\",  # base SFT config\n",
    "      f\"load_parameters_path={MODEL_CHECKPOINT_PATH}/0/items/\",  # Load pre-trained weights!, replace with your checkpoint path\n",
    "      f\"model_name={MODEL_NAME}\",\n",
    "      \"steps=100\",  # adjust for your training needs\n",
    "      \"per_device_batch_size=1\",  # minimal to avoid OOM\n",
    "      \"max_target_length=1024\",\n",
    "      \"learning_rate=2.0e-5\",  # safe small LR\n",
    "      \"eval_steps=5\",\n",
    "      \"weight_dtype=bfloat16\",\n",
    "      \"dtype=bfloat16\",\n",
    "      \"hf_path=HuggingFaceH4/ultrachat_200k\",  # HuggingFace dataset\n",
    "      f\"hf_access_token={HF_TOKEN}\",\n",
    "      f\"base_output_directory={BASE_OUTPUT_DIRECTORY}\",\n",
    "      \"run_name=sft_llama3_8b_test\",\n",
    "      \"tokenizer_path=meta-llama/Llama-3.1-8B-Instruct\",  # Llama tokenizer\n",
    "      \"eval_interval=10\",\n",
    "      \"profiler=xplane\",\n",
    "  ]\n",
    "\n",
    "  # Initialize configuration using MaxText's pyconfig\n",
    "  config = pyconfig.initialize(config_argv)\n",
    "\n",
    "  print(\"✓ Fixed configuration loaded:\")\n",
    "  print(f\"  - Model: {config.model_name}\")\n",
    "  print(f\"  - Dataset: {config.hf_path}\")\n",
    "  print(f\"  - Steps: {config.steps}\")\n",
    "  print(f\"  - Use SFT: {config.use_sft}\")\n",
    "  print(f\"  - Learning Rate: {config.learning_rate}\")\n",
    "else:\n",
    "  print(\"MaxText not available - cannot load configuration\")"
   ]
  },
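  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With this configuration, the effective token budget per step is easy to estimate. A rough sketch (assumes all 8 chips of a v5e-8/v5p-8 are used for data parallelism):\n",
    "\n",
    "```python\n",
    "# Values taken from the config above\n",
    "per_device_batch_size = 1\n",
    "num_devices = 8          # v5e-8 / v5p-8\n",
    "max_target_length = 1024\n",
    "\n",
    "tokens_per_step = per_device_batch_size * num_devices * max_target_length\n",
    "print(tokens_per_step)  # 8192 tokens per training step\n",
    "```\n"
   ]
  },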
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "mgwpNgQYCJEd"
   },
   "outputs": [],
   "source": [
    "#  Execute the training using MaxText SFT trainer's train() function\n",
    "if MAXTEXT_AVAILABLE:\n",
    "  print(\"=\" * 60)\n",
    "  print(\"EXECUTING ACTUAL TRAINING\")\n",
    "  print(\"=\" * 60)\n",
    "\n",
    "  trainer, mesh = sft_train(config)\n",
    "\n",
    "print(\"Training complete!\")\n",
    "print(\"Model saved at: \", BASE_OUTPUT_DIRECTORY)"
   ]
  }
 ],
 "metadata": {
  "accelerator": "TPU",
  "colab": {
   "gpuType": "V5E1",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "base",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
