{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Daisj3x4Jfn3"
   },
   "source": [
    "# SFT Base Model on Science Papers"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "WYWW2_-HtG86"
   },
   "source": [
    "**NB**: Tools used in this assignment.\n",
    "\n",
    "### Applications\n",
    "* GitHub Copilot was used only to accelerate very basic completions and was **not** used as an engine for function development\n",
    "* ChatGPT-4 was used only for generating matplotlib/seaborn code and is clearly cited wherever used\n",
    "\n",
    "### Key Packages\n",
    "* transformers\n",
    "    * *Note:* The GPT2 model was replaced with the excellent 2.7B param microsoft/phi-2 base model for improved performance (https://huggingface.co/microsoft/phi-2)\n",
    "* accelerate\n",
    "    * *Note:* This notebook runs on both single- and multi-GPU setups (we train on 4x GPUs here)\n",
    "* torch\n",
    "    * *Note:* Raw PyTorch code is used for finetuning, pairing nicely with accelerate\n",
    "\n",
    "### Compute\n",
    "* VSCode used for non-training development\n",
    "    * Apple Silicon M1 Pro, 16 GB CPU RAM\n",
    "* RunPod used for training development (https://www.runpod.io/)\n",
    "    * Python 3 Engine via RunPod, 4x A100 SXM4 (80 GB) GPUs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "b93lTmq3_Fcy",
    "outputId": "6996ad69-8cdf-4016-9250-a76ae215479e"
   },
   "outputs": [],
   "source": [
    "!nvidia-smi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "wyDkZ-yr_Fcz",
    "outputId": "69effdb3-c664-4da9-8c74-00d31bdc5a1c"
   },
   "outputs": [],
   "source": [
    "!nvcc --version"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "wu3ri7EP_Fcz"
   },
   "outputs": [],
   "source": [
    "#!python -m pip install --upgrade pip"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "djiqw6lY_Fcz"
   },
   "outputs": [],
   "source": [
    "# install for training, don't for local dev\n",
    "#!pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "PiDGcs5t_Fcz"
   },
   "outputs": [],
   "source": [
    "#!pip list"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "PoEQITvS_Fcz"
   },
   "outputs": [],
   "source": [
    "# import torch\n",
    "# torch._C._cuda_getDeviceCount()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "FrJs3XcUtG87"
   },
   "outputs": [],
   "source": [
    "# VSCode local setup\n",
    "#!python3 -m venv .venv_atla\n",
    "#!source .venv_atla/bin/activate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "YaC7PbnRT0mi"
   },
   "outputs": [],
   "source": [
    "# %%capture\n",
    "#!pip3 install -q -U transformers accelerate datasets cleantext matplotlib seaborn evaluate transformers[sentencepiece]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "4mde3R2Y6JOw"
   },
   "outputs": [],
   "source": [
    "!git config --global user.email \"dryanfurman@gmail.com\"\n",
    "!git config --global user.name \"Daniel Furman\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "O-0hsK-t7A95",
    "outputId": "3b8e214d-c318-40d0-8ef8-c7e7bb6170ea"
   },
   "outputs": [],
   "source": [
    "from huggingface_hub import login\n",
    "\n",
    "# from google.colab import userdata\n",
    "\n",
    "login(\"\")  # userdata.get('HF_TOKEN')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "AxcotYvwO_QR"
   },
   "source": [
    "## Data Exploration"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "_gNFok8eeV1x"
   },
   "outputs": [],
   "source": [
    "import json\n",
    "import numpy as np\n",
    "import cleantext\n",
    "import re\n",
    "from tqdm.notebook import tqdm\n",
    "import time"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "cmK1Vd2wJYzy"
   },
   "outputs": [],
   "source": [
    "# first, upload your file\n",
    "file_path = \"scientific papers.txt\"\n",
    "\n",
    "data = []\n",
    "\n",
    "with open(file_path, \"r\") as file:\n",
    "    data = json.load(file)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "sQEV0BJNLpj8",
    "outputId": "dffc823a-50b3-4d68-a30d-dc31ac7debda"
   },
   "outputs": [],
   "source": [
    "# explore the dataset\n",
    "print(\"**Step 0.** Size of the dataset:\\n\")\n",
    "print(f\"There are {len(data)} elements in the dataset\", \"\\n\\n\")\n",
    "\n",
    "print(\"**Step 1.** Inspect an element at random:\\n\")\n",
    "rand_int = np.random.randint(len(data))\n",
    "print(f\"This is element {rand_int}'s keys: {data[rand_int].keys()}\")\n",
    "# for key in data[rand_int].keys():\n",
    "#    if key == \"article_text\":\n",
    "#        print(f\"This is the first element's {key}: {data[rand_int][key][:50]}\\n\")\n",
    "#    else:\n",
    "#        print(f\"This is the first element's {key}: {data[rand_int][key]}\\n\")\n",
    "\n",
    "# inspected element and looks good"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "d69gQe1ztG8-",
    "outputId": "edf85e9c-f093-4757-91fb-fc071b1e2728"
   },
   "outputs": [],
   "source": [
    "print(\"**Step 2.** Basic descriptive stats on article_text:\")\n",
    "print(\"**NB**: Remove punctuation in word counts to match cleaning step\\n\")\n",
    "num_words = []\n",
    "for itr in range(len(data)):\n",
    "    num_words.append(\n",
    "        len(\n",
    "            cleantext.clean_words(\n",
    "                \" \".join(data[itr][\"article_text\"]),\n",
    "                clean_all=False,  # Execute all cleaning operations\n",
    "                extra_spaces=True,  # Remove extra white spaces\n",
    "                stemming=False,  # Stem the words\n",
    "                stopwords=False,  # Remove stop words\n",
    "                lowercase=False,  # Convert to lowercase\n",
    "                numbers=False,  # Remove all digits\n",
    "                punct=True,  # Remove all punctuations\n",
    "                stp_lang=\"english\",  # Language for stop words\n",
    "            )\n",
    "        )\n",
    "    )\n",
    "\n",
    "print(f\"Mean of number of words (no punct) {np.mean(num_words)}\")\n",
    "print(f\"Std of number of words (no punct) {np.std(num_words)}\", \"\\n\")\n",
    "np.save(\"num_words.npy\", np.array(num_words))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "lDQz6MAGaxB6",
    "outputId": "dcc020b5-ad78-4453-ca19-63e5223ea1fd"
   },
   "outputs": [],
   "source": [
    "# Distribution plot of num words per article\n",
    "\n",
    "# Reference: ChatGPT-4 generation with slight modifications, dated Feb 10, 2024 PST\n",
    "# Prompt used: (attached num_words.npy as file) \"Make me an excellent distribution plot of the attached numpy array. It contains the number of words contained in scientific papers for 1000 different papers. Make it a very nice, professional plot. Use gridlines.\"\n",
    "# Generated code:\n",
    "\n",
    "# Load the numpy array from the uploaded file\n",
    "num_words = np.load(\"num_words.npy\")\n",
    "\n",
    "# Display the first few elements to understand its structure\n",
    "# print(num_words[:10])\n",
    "\n",
    "import matplotlib.pyplot as plt\n",
    "import seaborn as sns\n",
    "\n",
    "# Setting the style for the plot\n",
    "sns.set_style(\"darkgrid\")\n",
    "\n",
    "# Creating the distribution plot\n",
    "plt.figure(figsize=(10, 6))\n",
    "sns.histplot(num_words, kde=True, bins=50, edgecolor=\"black\")\n",
    "plt.axvline(\n",
    "    x=np.mean(num_words), color=\"tab:orange\", label=\"Mean\", linestyle=\"--\", alpha=0.75\n",
    ")\n",
    "plt.axvline(\n",
    "    x=np.mean(num_words) + np.std(num_words),\n",
    "    color=\"tab:green\",\n",
    "    label=\"Standard Dev\",\n",
    "    linestyle=\"--\",\n",
    "    alpha=0.75,\n",
    ")\n",
    "plt.axvline(\n",
    "    x=np.mean(num_words) - np.std(num_words),\n",
    "    color=\"tab:green\",\n",
    "    linestyle=\"--\",\n",
    "    alpha=0.75,\n",
    ")\n",
    "\n",
    "# Adding titles and labels\n",
    "plt.title(\"Distribution of Word Counts in Scientific Papers\", fontsize=16)\n",
    "plt.xlabel(\"Number of Words\", fontsize=14)\n",
    "plt.ylabel(\"Frequency\", fontsize=14)\n",
    "plt.legend()\n",
    "\n",
    "# Adding gridlines\n",
    "plt.grid(True, which=\"both\", linestyle=\"--\", linewidth=0.5)\n",
    "\n",
    "# Show the plot\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* *Note*: We can see that the distribution is right-skewed, with a long tail to the right of the mean. These longer articles will occupy a larger share of our training dataset once we chunk to self.context_length-sized pieces - given more time, I'd investigate the longest articles to ensure that they are high quality."
   ]
  },
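  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Quick numeric check of the right skew noted above (illustrative sketch, plain numpy;\n",
    "# assumes num_words.npy was saved by the descriptive stats cell earlier)\n",
    "num_words = np.load(\"num_words.npy\")\n",
    "centered = num_words - np.mean(num_words)\n",
    "skewness = np.mean(centered**3) / (np.std(num_words) ** 3)\n",
    "print(f\"Sample skewness: {skewness:.2f} (positive => right-skewed)\")\n",
    "print(f\"Median {np.median(num_words):.0f} vs mean {np.mean(num_words):.0f} (median < mean also indicates right skew)\")"
   ]
  },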
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "w0d1JlvbPAsW"
   },
   "source": [
    "## Data Cleaning\n",
    "\n",
    "1. Replace all mathematical formulas and the references to them with _[math formula]_ e.g.\n",
    "    * _@xmath2..._ -> _[math formula]_\n",
    "2. Eliminate all punctuation marks"
   ]
  },
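  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative regex sketch of the two-pass tagging in Step 1 on a toy snippet;\n",
    "# the \"$ ]\" / \"] ]\" delimiters are assumptions confirmed by inspecting papers below,\n",
    "# and the full clean_text function handles them (plus edge cases) explicitly\n",
    "toy = \"we define @xmath0 x = y^2 $ ] and reference @xmath0 in eq. 1.\"\n",
    "step1 = re.sub(r\"@xmath\\d+.*?(\\$ \\]|\\] \\])\", \"[math formula]\", toy)  # full formulas -> tag\n",
    "step1 = re.sub(r\"@xmath\\d+\", \"[math formula]\", step1)  # bare references -> tag\n",
    "print(step1)  # we define [math formula] and reference [math formula] in eq. 1."
   ]
  },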
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "vyf2kY0ztG8_"
   },
   "outputs": [],
   "source": [
    "# check a few papers for their formula structure\n",
    "# paper at index 125 has formula references as \"@xmathi\" and formulas as \"@xmathi _formula_ $ ]\"\n",
    "# paper at index 455 has formula references as \"@xmathi\" and formulas as \"@xmathi _formula_ ] ]\"\n",
    "# paper at index 984 has formula references as \"@xmathi\" and formulas as \"@xmathi _formula_ ] ]\"\n",
    "# paper at index 95 has formula references as \"@xmathi\" and formulas as both \"@xmathi _formula_ ] ]\" and \"@xmathi _formula_ $ ]\"\n",
    "# paper at index 684 has formula references as \"@xmathi\" and no formulas\n",
    "# paper at index 427 has formula references as \"@xmathi\" and formulas as \"@xmathi _formula_ $ ]\"\n",
    "\n",
    "# after checking a handful of papers, it seems we can extract formula references with \"@xmathi\" for\n",
    "# each i index in test.split(\"@xmath\") and formulas with \"$ ]\" and \"] ]\" delimiters\n",
    "# we'd want to check more papers given more time for any edge cases or other delimiters\n",
    "\n",
    "rand_int = np.random.randint(len(data))\n",
    "test = \" \".join(data[rand_int][\"article_text\"])\n",
    "# test.split(\"@xmath\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "VcVhMad2PBqT"
   },
   "outputs": [],
   "source": [
    "def clean_text(element: dict) -> str:\n",
    "    # clean one data element's article_text and return the cleaned string\n",
    "\n",
    "    # Step 1:\n",
    "    # replace math formulas with the [math formula] tag\n",
    "    new_data_element = \" \".join(element[\"article_text\"])\n",
    "    # if formulas are present, remove them and replace with tag\n",
    "    if \"@xmath\" in new_data_element:\n",
    "        math_splits = new_data_element.split(\"@xmath\")\n",
    "        good_elements = []\n",
    "        # grab zeroth element after split, before the first math formula\n",
    "        # for loop to replace formula with tag\n",
    "        good_elements.append(math_splits[0])\n",
    "        for math_element in math_splits[1:]:\n",
    "            if \"] ]\" in math_element:\n",
    "                math_element = \"xmath\" + math_element\n",
    "                formula_content = re.search(r\"xmath(.*?)] ]\", math_element).group(1)\n",
    "                content_after_formula = \" \".join(\n",
    "                    math_element.split(formula_content)[1:]\n",
    "                ).replace(\"] ]\", \"\")\n",
    "                good_elements.append(\"[math formula] \" + content_after_formula)\n",
    "            elif \"$ ]\" in math_element:\n",
    "                math_element = \"xmath\" + math_element\n",
    "                formula_content = re.search(r\"xmath(.*?)\\$ ]\", math_element).group(1)\n",
    "                content_after_formula = \" \".join(\n",
    "                    math_element.split(formula_content)[1:]\n",
    "                ).replace(\"$ ]\", \"\")\n",
    "                good_elements.append(\"[math formula] \" + content_after_formula)\n",
    "            else:\n",
    "                content_after_formula = math_element.lstrip(\"0123456789.- \")\n",
    "                good_elements.append(\"[math formula] \" + content_after_formula)\n",
    "        new_data_element = \" \".join(good_elements)\n",
    "\n",
    "    # Step 2:\n",
    "    # remove punct\n",
    "    new_data_element = cleantext.clean(\n",
    "        new_data_element,\n",
    "        clean_all=False,  # Execute all cleaning operations\n",
    "        extra_spaces=False,  # Remove extra white spaces\n",
    "        stemming=False,  # Stem the words\n",
    "        stopwords=False,  # Remove stop words\n",
    "        lowercase=False,  # Convert to lowercase\n",
    "        numbers=False,  # Remove all digits\n",
    "        punct=True,  # Remove all punctuations\n",
    "        stp_lang=\"english\",  # Language for stop words\n",
    "    )\n",
    "\n",
    "    # Step 3:\n",
    "    # remove extra spaces\n",
    "    new_data_element = cleantext.clean(\n",
    "        new_data_element,\n",
    "        clean_all=False,  # Execute all cleaning operations\n",
    "        extra_spaces=True,  # Remove extra white spaces\n",
    "        stemming=False,  # Stem the words\n",
    "        stopwords=False,  # Remove stop words\n",
    "        lowercase=False,  # Convert to lowercase\n",
    "        numbers=False,  # Remove all digits\n",
    "        punct=False,  # Remove all punctuations\n",
    "        stp_lang=\"english\",  # Language for stop words\n",
    "    )\n",
    "\n",
    "    # add brackets back to math formula tags after removing punct above\n",
    "    new_data_element = new_data_element.replace(\"math formula\", \"[math formula]\")\n",
    "    return new_data_element"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "2HUIHqQStG8_"
   },
   "outputs": [],
   "source": [
    "# let's now clean the dataset and check some test indices along the way\n",
    "# these indices correspond to the ones checked above\n",
    "\n",
    "check_indices = [125, 455, 984, 95, 684, 427]\n",
    "for itr in range(len(data)):\n",
    "    if itr in check_indices:\n",
    "        # print(f\"Original text at index {itr}: {' '.join(data[itr]['article_text'])}\")\n",
    "        pass\n",
    "    data[itr][\"article_text\"] = clean_text(data[itr])\n",
    "    if itr in check_indices:\n",
    "        # print(f\"Cleaned text at index {itr}: {data[itr]['article_text']}\", \"\\n\")\n",
    "        pass\n",
    "\n",
    "# checked and looks good"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "ODM-CWKbtG8_"
   },
   "outputs": [],
   "source": [
    "# the cleaning pipeline looks good, let's proceed to training"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "lxNxFxjAPCHo"
   },
   "source": [
    "## Training Class"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "DGT41gQOPEcL"
   },
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch.utils.data import DataLoader\n",
    "from torch.optim import AdamW\n",
    "from torch.nn import CrossEntropyLoss\n",
    "from transformers import (\n",
    "    AutoTokenizer,\n",
    "    AutoModelForCausalLM,\n",
    "    AutoConfig,\n",
    "    get_scheduler,\n",
    ")\n",
    "from accelerate import Accelerator\n",
    "from datasets import DatasetDict\n",
    "\n",
    "\n",
    "class FineTuner:\n",
    "    def __init__(self):\n",
    "        self.tokenizer = AutoTokenizer.from_pretrained(\"microsoft/phi-2\")\n",
    "        self.tokenizer.pad_token = self.tokenizer.eos_token\n",
    "        self.context_length = 256\n",
    "        config = AutoConfig.from_pretrained(\n",
    "            \"microsoft/phi-2\",\n",
    "            vocab_size=len(self.tokenizer),\n",
    "            n_ctx=self.context_length,\n",
    "            bos_token_id=self.tokenizer.bos_token_id,\n",
    "            eos_token_id=self.tokenizer.eos_token_id,\n",
    "        )\n",
    "        self.model = AutoModelForCausalLM.from_config(config)\n",
    "\n",
    "    def train(self, dataset, batch_size=8, num_train_epochs=5, learning_rate=5e-4):\n",
    "        \"\"\"\n",
    "        Train the model on the provided dataset without using Hugging Face Trainer.\n",
    "\n",
    "        Args:\n",
    "            dataset (Dataset): Huggingface dataset object.\n",
    "            batch_size (int): Training batch size.\n",
    "            num_train_epochs (int): Number of training epochs.\n",
    "            learning_rate (float): Learning rate for optimizer.\n",
    "        \"\"\"\n",
    "\n",
    "        # Sources used:\n",
    "        # * https://huggingface.co/learn/nlp-course/chapter7/6\n",
    "        # * https://huggingface.co/docs/accelerate/en/basic_tutorials/notebook\n",
    "\n",
    "        # Training loop\n",
    "        accelerator = Accelerator(mixed_precision=\"bf16\")\n",
    "        model_name = \"phi-2-scientific-papers-base-v0.1\"\n",
    "        gradient_accumulation_steps = 1\n",
    "        save_chkpt_steps = 300\n",
    "        eval_steps = 100\n",
    "        log_steps = 25\n",
    "\n",
    "        dataset = dataset.shuffle(seed=43)\n",
    "        ds_train = dataset.select(range(750))\n",
    "        ds_valid = dataset.select(range(750, 1000))\n",
    "\n",
    "        # assert there is no leakage between train and val slices\n",
    "        ds_train_pandas = ds_train.to_pandas()\n",
    "        ds_valid_pandas = ds_valid.to_pandas()\n",
    "        assert (\n",
    "            ds_train_pandas[\"article_text\"].isin(ds_valid_pandas[\"article_text\"]).sum()\n",
    "            == 0\n",
    "        )\n",
    "\n",
    "        # create one dataset dict with train/valid splits\n",
    "        raw_datasets = DatasetDict(\n",
    "            {\n",
    "                \"train\": ds_train,\n",
    "                \"valid\": ds_valid,\n",
    "            }\n",
    "        )\n",
    "\n",
    "        # creates chunks out of the article_text with self.context_length number of tokens each\n",
    "        # these are the examples we will pass for language modeling\n",
    "        def tokenize(element):\n",
    "            outputs = self.tokenizer(\n",
    "                element[\"article_text\"],\n",
    "                truncation=True,\n",
    "                max_length=self.context_length,\n",
    "                return_overflowing_tokens=True,\n",
    "                return_length=True,\n",
    "            )\n",
    "            input_batch = []\n",
    "            for length, input_ids in zip(outputs[\"length\"], outputs[\"input_ids\"]):\n",
    "                if length == self.context_length:\n",
    "                    input_batch.append(input_ids)\n",
    "            return {\"input_ids\": input_batch}\n",
    "\n",
    "        tokenized_dataset = raw_datasets.map(\n",
    "            tokenize, batched=True, remove_columns=raw_datasets[\"train\"].column_names\n",
    "        )\n",
    "\n",
    "        model_size = sum(t.numel() for t in self.model.parameters())\n",
    "        print(f\"Model size: {model_size/1000**2:.1f}M parameters\")\n",
    "\n",
    "        def loss_fcn(inputs, logits):\n",
    "            # Shift so that tokens < n predict n\n",
    "            shift_labels = inputs[..., 1:].contiguous()\n",
    "            shift_logits = logits[..., :-1, :].contiguous()\n",
    "            # Calculate per-token loss\n",
    "            loss_fct = CrossEntropyLoss(reduction=\"none\")\n",
    "            loss = loss_fct(\n",
    "                shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)\n",
    "            )\n",
    "            # Resize and average loss per sample\n",
    "            loss_per_sample = loss.view(\n",
    "                shift_logits.size(0), shift_logits.size(1)\n",
    "            ).mean(axis=1)\n",
    "            final_loss = loss_per_sample.mean()\n",
    "            return final_loss\n",
    "\n",
    "        tokenized_dataset.set_format(\"torch\")\n",
    "        train_dataloader = DataLoader(\n",
    "            tokenized_dataset[\"train\"], batch_size=batch_size, shuffle=True\n",
    "        )\n",
    "        eval_dataloader = DataLoader(tokenized_dataset[\"valid\"], batch_size=batch_size)\n",
    "\n",
    "        weight_decay = 0.1\n",
    "\n",
    "        def get_grouped_params(model, no_decay=[\"bias\", \"layernorm\"]):  # phi-2 layer norm param names contain \"layernorm\"\n",
    "            params_with_wd, params_without_wd = [], []\n",
    "            for n, p in model.named_parameters():\n",
    "                if any(nd in n for nd in no_decay):\n",
    "                    params_without_wd.append(p)\n",
    "                else:\n",
    "                    params_with_wd.append(p)\n",
    "            return [\n",
    "                {\"params\": params_with_wd, \"weight_decay\": weight_decay},\n",
    "                {\"params\": params_without_wd, \"weight_decay\": 0.0},\n",
    "            ]\n",
    "\n",
    "        def evaluate():\n",
    "            self.model.eval()\n",
    "            losses = []\n",
    "            for eval_step, batch in enumerate(eval_dataloader):\n",
    "                with torch.no_grad():\n",
    "                    outputs = self.model(batch[\"input_ids\"], labels=batch[\"input_ids\"])\n",
    "\n",
    "                losses.append(accelerator.gather(outputs.loss.reshape(1)))\n",
    "            loss = torch.mean(torch.cat(losses))\n",
    "            try:\n",
    "                perplexity = torch.exp(loss)\n",
    "            except OverflowError:\n",
    "                perplexity = float(\"inf\")\n",
    "            return loss.item(), perplexity.item()\n",
    "\n",
    "        optimizer = AdamW(get_grouped_params(self.model), lr=learning_rate)\n",
    "        self.model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(\n",
    "            self.model, optimizer, train_dataloader, eval_dataloader\n",
    "        )\n",
    "\n",
    "        num_update_steps_per_epoch = len(train_dataloader)\n",
    "        num_training_steps = num_train_epochs * num_update_steps_per_epoch\n",
    "        print(f\"num_training_steps: {num_training_steps}\")\n",
    "        lr_scheduler = get_scheduler(\n",
    "            name=\"cosine\",\n",
    "            optimizer=optimizer,\n",
    "            num_warmup_steps=500,\n",
    "            num_training_steps=num_training_steps,\n",
    "        )\n",
    "\n",
    "        self.model.train()\n",
    "        global_step = 0\n",
    "        train_logs = []\n",
    "        val_logs = []\n",
    "        for epoch in tqdm(range(num_train_epochs)):\n",
    "            if accelerator.is_main_process:\n",
    "                print(f\"Started epoch {epoch + 1} of {num_train_epochs}\")\n",
    "            for epoch_step, batch in tqdm(\n",
    "                enumerate(train_dataloader, start=1),\n",
    "                total=num_training_steps // num_train_epochs,\n",
    "            ):\n",
    "                logits = self.model(batch[\"input_ids\"]).logits\n",
    "                loss = loss_fcn(batch[\"input_ids\"], logits)\n",
    "                loss = loss / gradient_accumulation_steps\n",
    "                accelerator.backward(loss)\n",
    "                if global_step % gradient_accumulation_steps == 0:\n",
    "                    accelerator.clip_grad_norm_(self.model.parameters(), 1.0)\n",
    "                    optimizer.step()\n",
    "                    lr_scheduler.step()\n",
    "                    optimizer.zero_grad()\n",
    "                    global_step += 1\n",
    "\n",
    "                # train logging\n",
    "                if global_step % log_steps == 0:\n",
    "                    train_log = {\n",
    "                        \"steps\": global_step,\n",
    "                        \"loss/train\": loss.item() * gradient_accumulation_steps,\n",
    "                        \"last_lr\": lr_scheduler.get_last_lr()[0],\n",
    "                    }\n",
    "                    accelerator.print(train_log)\n",
    "                    train_logs.append(train_log)\n",
    "\n",
    "                # save chkpt logging at save_chkpt_steps and last step\n",
    "                if (\n",
    "                    (global_step % (save_chkpt_steps * gradient_accumulation_steps))\n",
    "                    == 0\n",
    "                ) or (global_step == num_training_steps):\n",
    "                    eval_loss, perplexity = evaluate()\n",
    "                    val_log = {\n",
    "                        \"steps\": global_step,\n",
    "                        \"loss/eval\": eval_loss,\n",
    "                        \"perplexity/eval\": perplexity,\n",
    "                    }\n",
    "                    accelerator.print(val_log)\n",
    "                    val_logs.append(val_log)\n",
    "                    self.model.train()\n",
    "                    accelerator.wait_for_everyone()\n",
    "                    unwrapped_model = accelerator.unwrap_model(self.model)\n",
    "                    unwrapped_model.save_pretrained(\n",
    "                        model_name, save_function=accelerator.save\n",
    "                    )\n",
    "                    time.sleep(5)\n",
    "                    try:\n",
    "                        if accelerator.is_main_process:\n",
    "                            self.tokenizer.save_pretrained(model_name)\n",
    "                            # push to hub\n",
    "                            model_id_load = f\"dfurman/{model_name}\"\n",
    "                            # tokenizer\n",
    "                            tokenizer_push = AutoTokenizer.from_pretrained(model_name)\n",
    "                            tokenizer_push.push_to_hub(\n",
    "                                model_id_load, token=True\n",
    "                            )\n",
    "                            # model\n",
    "                            model_push = AutoModelForCausalLM.from_pretrained(\n",
    "                                model_name,\n",
    "                            )\n",
    "                            model_push.push_to_hub(\n",
    "                                model_id_load,\n",
    "                                token=True,\n",
    "                                safe_serialization=True,\n",
    "                                commit_message=f\"Training in progress step {global_step} of {num_training_steps}\",\n",
    "                                blocking=False,\n",
    "                            )\n",
    "                    except Exception as e:\n",
    "                        print(f\"ERROR: Chkpt saving failed for this step: {e}\")\n",
    "\n",
    "                # eval logging\n",
    "                elif (global_step % (eval_steps * gradient_accumulation_steps)) == 0:\n",
    "                    eval_loss, perplexity = evaluate()\n",
    "                    val_log = {\n",
    "                        \"steps\": global_step,\n",
    "                        \"loss/eval\": eval_loss,\n",
    "                        \"perplexity/eval\": perplexity,\n",
    "                    }\n",
    "                    accelerator.print(val_log)\n",
    "                    val_logs.append(val_log)\n",
    "                    self.model.train()\n",
    "                    accelerator.wait_for_everyone()\n",
    "\n",
    "        # save train_logs & val_logs\n",
    "        with open(\"train_logs.json\", \"w\") as fout:\n",
    "            json.dump(train_logs, fout)\n",
    "        with open(\"val_logs.json\", \"w\") as fout:\n",
    "            json.dump(val_logs, fout)"
   ]
  },
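  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Tiny standalone illustration of the shifted-label objective used in loss_fcn above:\n",
    "# position i's logits predict token i+1, so logits drop the last step and labels drop the first\n",
    "example_ids = torch.tensor([[5, 2, 7, 1]])  # one sequence of 4 token ids\n",
    "example_logits = torch.randn(1, 4, 10)  # dummy model output, vocab size 10\n",
    "shift_labels = example_ids[..., 1:].contiguous()  # targets: tokens at positions 2..4\n",
    "shift_logits = example_logits[..., :-1, :].contiguous()  # predictions from positions 1..3\n",
    "demo_loss = CrossEntropyLoss()(shift_logits.view(-1, 10), shift_labels.view(-1))\n",
    "print(f\"demo loss over 3 predicted positions: {demo_loss.item():.3f}\")"
   ]
  },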
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "YiDJX4mmlq5q"
   },
   "outputs": [],
   "source": [
    "from datasets import Dataset\n",
    "\n",
    "# Keep the full dataset; memory is not a constraint on the A100 GPUs\n",
    "# Extract only 'article_text' from each dictionary\n",
    "article_texts = [d[\"article_text\"] for d in data]\n",
    "\n",
    "# Create a dictionary with 'article_text' as the key\n",
    "data_dict = {\"article_text\": article_texts}\n",
    "\n",
    "# Create the Hugging Face Dataset\n",
    "dataset = Dataset.from_dict(data_dict)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "60vZZ_6qtG9A",
    "outputId": "233f20b1-4615-435a-9c9b-96e0e1f5939a"
   },
   "outputs": [],
   "source": [
    "dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "JIpEtOdBtG9A"
   },
   "outputs": [],
   "source": [
    "# dataset[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "X0s1OlkXrxwb"
   },
   "outputs": [],
   "source": [
    "def training_function():\n",
    "    fine_tuner = FineTuner()\n",
    "    fine_tuner.train(dataset)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "referenced_widgets": [
      "91b8e461c2784a8ea29fc8543268e38f",
      "bffb568e02904f7c8b97fb30db50a024",
      "924a332824504b778f5eddef508eed14",
      "21d6a77e435940b8b69e626bf0067fe0",
      "faed37a28bca4a2eb483d235e68f3f00",
      "ed1d0518f8d441749d71b1e572b52f60",
      "c86963ed265246d9982e49954acb69a1",
      "536bc5d403c644209387c8ad7db24b40",
      "3884d47bd0ef4f1fb9785ce4c8783225",
      "36147f2907d54598952f6ebd78204552",
      "05f6a087e5a24e468a9ea4b03f0e5be0",
      "4d7118a33e4e4529888a4964a3639665",
      "7a8888f8020e433b94a8f285ed51ae69",
      "6378be4cc1d148aaaafcdb8a514f55b3",
      "713f0303d2244a31b1eb10af6a91da44",
      "67da0fa40ebe431db8e29f177a03fb33",
      "6ca3278547ff4285aaa5c42d8005b8d3",
      "6ecf9c9a9d1c413d852ad2794d93f1e9",
      "63998a343ec449c9b5ae0dad47f81447",
      "645fb67ccdea4fd5a130a399cd6e9177",
      "33a1c1f188a2409a849e81ecb9098a8a",
      "7c6b5036c34d483ca52ed74e25c4cdd7",
      "4972406feb3849e396c3ef39a7f532a2",
      "f7731323f8c74dfca5963994024a7b43",
      "6859cfe41a014f07813b0ede85edc3a4",
      "0d95bd1956de4fea85f25febbb8e7e4c",
      "53e5d826af1a48ef8b2aaf81dd47c743",
      "cf8d007e7ebc46119be63686f04815ba",
      "e7a20512fdab47248717059cf14bcbcd",
      "bbd7a6b05c2346008c75290ad0ab498b",
      "f2763d0bf6ef4b05aa7c7de500aa8917",
      "2b157c55c3b9430fb82fdfad984eea5b",
      "0020e744428f4e6087b8446c220fd2a4",
      "36dc26bed0224be89e0a913a6487ca6a",
      "321d80c2dea14b72ae53007ddadd41e4",
      "2850c9997d8d4235a19a3731afb53df0",
      "4a26ceda3dfa4f75a3932d5cb0730f61",
      "d3bac2ac914b48cda3899965a0664755",
      "5728c6b7c4ae4db1a03158cdf0772bb7",
      "83a211a857154942a616510db2cf1348",
      "8b5d567ef559409892b49afc2fff370c",
      "430bf64120854322ad9283fdfb9facd8",
      "9c48d19dc37c4a6c80d4fde3bb1e7236",
      "33b9572e5ce541119f5b321a1d363e65",
      "bb1fe030043a4447b61ec3b15c2664d6",
      "8da9bb07ee754d34b8f3d0e7db23caa4",
      "d78c8ceb52694a7483b4e9862235bcf6",
      "55a8e1feb87648d3b7a543e1b81ecd95",
      "8c5db4fd82bb472aa09f6b8c0269b2e7",
      "6154bf23ae1e486fa61e4c838e988877",
      "44e7105865f44ef8bde562cf08f9f3ea",
      "233ae6d1eb694e429c2e488a8ca27f9f",
      "1421236f408a4917ab3b1ffc0694e8bb",
      "b02cea6b9412436baa14b078da976f63",
      "1b069fdecbfd47979b760d1df1868241",
      "21e594f0b2224f41a51cdc68ee093efb",
      "2a416ea34e8344b9ac29f39c74268ac6",
      "ea0f8a1a53df4b1582b21a5179c657f4",
      "b7dec47b085f44388642ab828a3c5ed3",
      "a5480af495bc43c2a9ebe184d2b89056",
      "71cebfa0cc0849a3b1585c0ae02b89c3",
      "dc9c1d3e4efa4b1c943027c8d372a96e",
      "e154891996f8454e933919ac6df324ee",
      "29c7412c95b24f849b92204ed574db01",
      "9abba1e2dc8f44c18bbfd826a158e4ad",
      "36216892e2ef4090907cf7a32d17f189",
      "c106451b56ae47559023d70bc6914a34",
      "803fc3a62b344e80a895afeda3247ea4",
      "250e109b0a344f98bfe548a46f4654dd",
      "84fac425e8e8454e8a309f0305cdee8f",
      "80d8e6265e2e4946a3d3864692b9a3b8",
      "80bd3d8e391c415187e70bf6aefc758d",
      "7bae0cc7036642138fbac825f49ec80c",
      "819b6d3a394e40d29f366eba0c69c788",
      "81d827a3e9584721afea1f13fc797ec7",
      "a6806f02c4e3406fac65c15eff150ca9",
      "f5f8384b7f114795919e7ee731903eaa",
      "b6fbe747fb23405589a84e5dea405c73"
     ]
    },
    "id": "UDfbg6YjAqnY",
    "outputId": "c6fd53c7-c0a7-4275-841b-45fc2fdf234f"
   },
   "outputs": [],
   "source": [
    "from accelerate import notebook_launcher\n",
    "\n",
    "notebook_launcher(training_function, num_processes=4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "AKLNobgm_Fc3"
   },
   "outputs": [],
   "source": [
    "# prints during training look good!\n",
    "# the only error: checkpoint saving failed on the last upload\n",
    "# we captured 2700/3000 global steps, skipping only the last 300 steps of the final epoch\n",
    "# proceeding due to time constraints"
   ]
  },
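  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To avoid losing the final checkpoint next time, the upload could be wrapped in a simple retry helper. A minimal sketch (`save_fn` is a hypothetical stand-in for whatever push/upload call the FineTuner makes):\n",
    "\n",
    "```python\n",
    "import time\n",
    "\n",
    "def save_with_retry(save_fn, retries=3, delay=5.0):\n",
    "    \"\"\"Retry a flaky checkpoint upload a few times before giving up.\"\"\"\n",
    "    for attempt in range(retries):\n",
    "        try:\n",
    "            return save_fn()\n",
    "        except Exception:\n",
    "            if attempt == retries - 1:\n",
    "                raise\n",
    "            time.sleep(delay)\n",
    "```"
   ]
  },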
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "miG6FYw3_Fc3",
    "outputId": "a73d2d53-106e-4e08-b89a-7ab18b0c8a09"
   },
   "outputs": [],
   "source": [
    "print(\"done\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "tNjKBovp_Fc3"
   },
   "source": [
    "## GPU Usage During Training Run\n",
    "\n",
    "* We can see that all 4 GPUs are utilized efficiently (~99% avg VRAM consumption)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "n4hq4fhq_Fc3"
   },
   "source": [
    "![](../assets/mid_training_GPU_usage.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "OVeVn0_y762n"
   },
   "source": [
    "# Logging\n",
    "Implement logging in the above FineTuner class and visualise the logs\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "P7H7M8mC_FdC"
   },
   "source": [
    "### Three kinds of logs were captured at varying rates during training\n",
    "\n",
    "* Train logs (completed steps, loss/train, last_lr)\n",
    "    * 120 such logs were captured in this run\n",
    "* Validation log (completed steps, loss/eval, perplexity/eval)\n",
    "    * 30 such logs were captured in this run\n",
    "* Checkpoint caching (saves model to remote repo at each checkpoint step)\n",
    "    * 10 such checkpoints were captured in this run\n",
    "    * Logged to https://huggingface.co/dfurman/phi-2-scientific-papers-base-v0.1/commits/main\n",
    "\n",
    "The plan is to visualize 1) loss/train and loss/eval on the same plot, 2) perplexity/eval on its own plot, and 3) the lr progression on its own plot"
   ]
  },
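  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The logging itself can be as simple as appending dicts to a JSON list on disk. A minimal sketch (the `append_log` helper is assumed, not the FineTuner's actual implementation; the record keys mirror the ones the plotting code reads):\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "def append_log(path, record):\n",
    "    \"\"\"Append one log record to a JSON list on disk, creating the file if missing.\"\"\"\n",
    "    try:\n",
    "        with open(path, \"r\") as f:\n",
    "            logs = json.load(f)\n",
    "    except FileNotFoundError:\n",
    "        logs = []\n",
    "    logs.append(record)\n",
    "    with open(path, \"w\") as f:\n",
    "        json.dump(logs, f)\n",
    "\n",
    "# example records with the fields the plots consume\n",
    "append_log(\"train_logs.json\", {\"steps\": 25, \"loss/train\": 2.71, \"last_lr\": 2e-05})\n",
    "append_log(\"val_logs.json\", {\"steps\": 100, \"loss/eval\": 2.45, \"perplexity/eval\": 11.59})\n",
    "```"
   ]
  },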
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "3mCjwBAI_FdC"
   },
   "outputs": [],
   "source": [
    "import json\n",
    "import matplotlib.pyplot as plt\n",
    "import seaborn as sns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "YrWq2RMe_FdC",
    "outputId": "419ff5f0-65a4-40d6-e7ad-aebd5b8ef2ea"
   },
   "outputs": [],
   "source": [
    "# loss plot\n",
    "\n",
    "# Reference: ChatGPT-4 generation with slight modifications, dated Feb 11, 2024 PST\n",
    "# Prompt used: (attached logs files) \"Make me an excellent logging plot from the attached json file. It contains the logs from training a causal language model. Plot the loss/train and eval loss/loss on the same plot each against steps. Use gridlines and make it a very professional plot.\"\n",
    "# Generated code:\n",
    "\n",
    "# Setting the style for the plot\n",
    "sns.set_style(\"darkgrid\")\n",
    "\n",
    "# Load the data from the training and validation logs\n",
    "train_logs_path = \"logs/train_logs.json\"\n",
    "val_logs_path = \"logs/val_logs.json\"\n",
    "\n",
    "with open(train_logs_path, \"r\") as file:\n",
    "    train_logs = json.load(file)\n",
    "\n",
    "with open(val_logs_path, \"r\") as file:\n",
    "    val_logs = json.load(file)\n",
    "\n",
    "# Extracting the required data\n",
    "steps_train, loss_train = zip(\n",
    "    *[(log[\"steps\"], log[\"loss/train\"]) for log in train_logs]\n",
    ")\n",
    "steps_val, loss_eval = zip(*[(log[\"steps\"], log[\"loss/eval\"]) for log in val_logs])\n",
    "\n",
    "# Create the plot\n",
    "plt.figure(figsize=(10, 6))\n",
    "\n",
    "# Plot training loss\n",
    "plt.plot(steps_train, loss_train, \"--.\", label=\"loss/train\")\n",
    "\n",
    "# Plot evaluation loss\n",
    "plt.plot(steps_val, loss_eval, \"--.\", label=\"loss/eval\")\n",
    "\n",
    "# Adding titles and labels\n",
    "plt.title(\"Training and Evaluation Loss Logs\", fontsize=16)\n",
    "plt.xlabel(\"global_step\", fontsize=14)\n",
    "plt.ylabel(\"loss\", fontsize=14)\n",
    "plt.legend()\n",
    "\n",
    "# Adding gridlines for better readability\n",
    "plt.grid(True)\n",
    "\n",
    "# Display the plot\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* We observe a fairly standard-looking loss plot. We can see potential overfitting on the train set past ~1500 global_steps, with the validation loss drifting further from the train loss in the right half of the plot. We'd want to explore the impact of this by running vibe-check prompts for each checkpoint saved during training; see the \"Evaluations\" section for more."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "m_aZ2Yy7_FdC",
    "outputId": "33ec703e-f8dd-4c06-f8da-e2b9fe593085"
   },
   "outputs": [],
   "source": [
    "# eval perplexity\n",
    "# modified from loss plot code above\n",
    "\n",
    "# Setting the style for the plot\n",
    "sns.set_style(\"darkgrid\")\n",
    "\n",
    "# Load the data from the training and validation logs\n",
    "val_logs_path = \"logs/val_logs.json\"\n",
    "\n",
    "with open(val_logs_path, \"r\") as file:\n",
    "    val_logs = json.load(file)\n",
    "\n",
    "# Extracting the required data\n",
    "steps_val, perplexity_eval = zip(\n",
    "    *[(log[\"steps\"], log[\"perplexity/eval\"]) for log in val_logs]\n",
    ")\n",
    "\n",
    "# Create the plot\n",
    "plt.figure(figsize=(10, 6))\n",
    "\n",
    "# Plot evaluation loss\n",
    "plt.plot(steps_val, perplexity_eval, \"--.\", label=\"perplexity/eval\", color=\"tab:green\")\n",
    "\n",
    "# Adding titles and labels\n",
    "plt.title(\"Evaluation Perplexity Logs\", fontsize=16)\n",
    "plt.xlabel(\"global_step\", fontsize=14)\n",
    "plt.ylabel(\"perplexity/eval\", fontsize=14)\n",
    "plt.legend()\n",
    "\n",
    "# Adding gridlines for better readability\n",
    "plt.grid(True)\n",
    "\n",
    "# Display the plot\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* The perplexity plot also looks reasonable, and it likewise shows that, in the right half of the graph, we are making little progress on validation-set performance (which is what we care about)."
   ]
  },
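  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, perplexity/eval here is just the exponential of the mean cross-entropy eval loss, so the loss and perplexity plots carry the same signal on different scales:\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def perplexity(mean_ce_loss: float) -> float:\n",
    "    \"\"\"Perplexity is the exponential of the mean cross-entropy loss.\"\"\"\n",
    "    return math.exp(mean_ce_loss)\n",
    "\n",
    "print(round(perplexity(2.0), 3))  # -> 7.389\n",
    "```"
   ]
  },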
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "_nc7EX7__FdC",
    "outputId": "c625af6e-32f7-440c-923d-5de231dfd337"
   },
   "outputs": [],
   "source": [
    "# learning rate\n",
    "# modified from loss plot code above\n",
    "\n",
    "# Setting the style for the plot\n",
    "sns.set_style(\"darkgrid\")\n",
    "\n",
    "# Load the data from the training and validation logs\n",
    "train_logs_path = \"logs/train_logs.json\"\n",
    "\n",
    "with open(train_logs_path, \"r\") as file:\n",
    "    train_logs = json.load(file)\n",
    "\n",
    "# Extracting the required data\n",
    "steps_train, lr_train = zip(*[(log[\"steps\"], log[\"last_lr\"]) for log in train_logs])\n",
    "\n",
    "# Create the plot\n",
    "plt.figure(figsize=(10, 6))\n",
    "\n",
    "# Plot evaluation loss\n",
    "plt.plot(steps_train, lr_train, \"--\", label=\"learning rate\", color=\"tab:red\")\n",
    "\n",
    "# Adding titles and labels\n",
    "plt.title(\"Learning Rate Logs\", fontsize=16)\n",
    "plt.xlabel(\"global_step\", fontsize=14)\n",
    "plt.ylabel(\"learning rate\", fontsize=14)\n",
    "plt.legend()\n",
    "\n",
    "# Adding gridlines for better readability\n",
    "plt.grid(True)\n",
    "\n",
    "# Display the plot\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* The learning rate plot looks good, and it matches the cosine schedule set for the run. Cosine scheduling has worked well for me in the past, which is why it is my baseline choice. We'd want to test other lr values and schedules given more time."
   ]
  },
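  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The shape above can be reproduced analytically. A minimal sketch of linear warmup followed by cosine decay to zero, mirroring the behavior of `transformers.get_cosine_schedule_with_warmup` (the helper name and hyperparameter values are illustrative):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def cosine_lr(step, max_lr, warmup_steps, total_steps):\n",
    "    \"\"\"Linear warmup to max_lr, then cosine decay to zero.\"\"\"\n",
    "    if step < warmup_steps:\n",
    "        return max_lr * step / max(1, warmup_steps)\n",
    "    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)\n",
    "    return max_lr * 0.5 * (1.0 + math.cos(math.pi * progress))\n",
    "```"
   ]
  },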
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "sk235fgGYZ4u"
   },
   "source": [
    "# Evaluations\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "dg7dcZktOwLf"
   },
   "source": [
    "\"**Building solid evals should be the starting point** for any LLM-based system or product (as well as conventional machine learning systems).\" - Eugene Yan (https://eugeneyan.com/writing/llm-patterns/#evals-to-measure-performance)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "5JhQ5kzSFd6h"
   },
   "source": [
    "**Run \"vibe checks\"** (one such example below in \"Sample usage\")\n",
    "* at each logged checkpoint, run 1-2 vibe-check prompts to explore how performance evolves over training\n",
    "    * the eval loss and eval perplexity suggest we may have overfit on the train set, so these checkpoint vibe checks may reveal that the best model is actually at the ~1500th global step\n",
    "* for the best model identified, run ~20 representative vibe-check prompts to explore its performance\n",
    "\n",
    "**Run any existing evals related to the task**\n",
    "* look in EleutherAI's lm-eval package for existing evals that reflect our use case; run these and compare against other LLMs' performance\n",
    "\n",
    "**Run a high-quality custom eval on held-out examples**\n",
    "\n",
    "**NB** This is the most important step!\n",
    "\n",
    "* create ~1k test-set examples that represent a diverse and high-quality snapshot of your production / test-time use case\n",
    "* conduct manual and automated testing on the above"
   ]
  },
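  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A first pass on the checkpoint comparison can be automated from the validation logs before any manual vibe checking. A minimal sketch (the `best_checkpoint` helper and the sample numbers are illustrative, not this run's actual logs):\n",
    "\n",
    "```python\n",
    "def best_checkpoint(val_logs, checkpoint_steps):\n",
    "    \"\"\"Pick the saved checkpoint step with the lowest loss/eval (earlier step wins ties).\"\"\"\n",
    "    losses = {log[\"steps\"]: log[\"loss/eval\"] for log in val_logs}\n",
    "    return min(checkpoint_steps, key=lambda s: (losses[s], s))\n",
    "\n",
    "val_logs = [\n",
    "    {\"steps\": 900, \"loss/eval\": 2.10},\n",
    "    {\"steps\": 1500, \"loss/eval\": 1.95},\n",
    "    {\"steps\": 2100, \"loss/eval\": 1.98},\n",
    "]\n",
    "print(best_checkpoint(val_logs, [900, 1500, 2100]))  # -> 1500\n",
    "```"
   ]
  },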
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "9wXdSqT1_FdD"
   },
   "source": [
    "# Sample usage"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "OKVPt3aE_FdD"
   },
   "outputs": [],
   "source": [
    "import torch\n",
    "from transformers import AutoModelForCausalLM, AutoTokenizer"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "iMlTQmKI_FdD"
   },
   "outputs": [],
   "source": [
    "# load model and tok\n",
    "model = AutoModelForCausalLM.from_pretrained(\n",
    "    \"dfurman/phi-2-scientific-papers-base-v0.1\",\n",
    "    device_map=\"auto\",\n",
    "    trust_remote_code=True,\n",
    "    torch_dtype=torch.bfloat16,\n",
    ")\n",
    "tokenizer = AutoTokenizer.from_pretrained(\n",
    "    \"dfurman/phi-2-scientific-papers-base-v0.1\", trust_remote_code=True\n",
    ")\n",
    "\n",
    "model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "MimowkaI_FdD",
    "outputId": "eb31e3fe-580d-4867-f2c0-289b290a9494"
   },
   "outputs": [],
   "source": [
    "# run a vibe check prompt\n",
    "\n",
    "input_sample = \"We suggest that [math formula] proves\"\n",
    "inputs = tokenizer(input_sample, return_tensors=\"pt\", return_attention_mask=False)\n",
    "\n",
    "outputs = model.generate(**inputs, max_new_tokens=10, temperature=0.1, do_sample=True)\n",
    "text = tokenizer.batch_decode(outputs)[0]\n",
    "print(text)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "6S3sPJY9_FdD"
   },
   "source": [
    "# Next steps\n",
    "\n",
    "We have effectively created a base model here by language modeling solely on unstructured article text from scientific papers. This is analogous to continued pretraining for the scientific-paper domain. In other words, at this stage of the assistant training process we are essentially creating a scientific-paper completer. Next, we want to create an assistant model capable of Q&A (https://karpathy.ai/stateofgpt.pdf, slide 3).\n",
    "\n",
    "Here are the next steps I would follow:\n",
    "\n",
    "**Task-relevant SFT and DPO**\n",
    "1. Create a test set of ~300-1000 Q&A examples, hold these out as your eval\n",
    "2. Curate a training set of Q&A examples, ideally 100k examples, at least 10k\n",
    "3. Starting from the base-model weights, instruction-tune to yield an SFT model (train on assistant completions only) (https://huggingface.co/docs/trl/sft_trainer#train-on-completions-only)\n",
    "4. Create a preference dataset for DPO\n",
    "5. Starting from the SFT model weights, perform DPO alignment training\n",
    "\n",
    "**NB** Consider leveraging a strong LLM to generate synthetic data, either as augmentation or as the primary driver of data curation\n",
    "\n",
    "**To obtain better performance**\n",
    "* More compute evenly split between growing model size and dataset size\n",
    "* More compute on chunking with a larger context window (self.context_length)\n",
    "\n",
    "**Better logging**\n",
    "* Move logging from JSON to Weights & Biases; add additional metrics such as GPU usage stats, elapsed time, etc."
   ]
  }
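  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For step 3, the core of training on assistant completions only is masking the prompt tokens out of the labels, which is the idea behind TRL's completion-only collator. A minimal sketch (the helper name and token ids are illustrative):\n",
    "\n",
    "```python\n",
    "IGNORE_INDEX = -100  # ignored by PyTorch's cross-entropy loss\n",
    "\n",
    "def mask_prompt_labels(input_ids, response_start):\n",
    "    \"\"\"Copy input_ids into labels, masking everything before the completion.\"\"\"\n",
    "    return [IGNORE_INDEX if i < response_start else tok\n",
    "            for i, tok in enumerate(input_ids)]\n",
    "\n",
    "print(mask_prompt_labels([11, 22, 33, 44, 55], 3))  # -> [-100, -100, -100, 44, 55]\n",
    "```"
   ]
  }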
 ],
 "metadata": {
  "accelerator": "GPU",
  "colab": {
   "gpuType": "A100",
   "machine_shape": "hm",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
