{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "C2EgqEPDQ8v6"
   },
   "source": [
    "## Fine-tune Llama-2-7B using QLoRA on Google Colab\n",
    "\n",
    "Running large language models (LLMs) requires a lot of GPU power and memory, which can be costly. To improve performance and reduce costs, lightweight approaches to working with LLMs are being explored. This notebook covers key techniques for fine-tuning and deploying LLMs more efficiently and affordably.\n",
    "\n",
    "This example shows **how to fine-tune the Llama-2-7B model using instruction tuning with PEFT and QLoRA**."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "i-tTvEF1RT3y"
   },
   "source": [
    "## Setup\n",
    "\n",
    "Run the cells below to set up and install the required libraries. For our experiment we will need `accelerate`, `peft`, `transformers`, `datasets` and `trl` to leverage the recent [`SFTTrainer`](https://huggingface.co/docs/trl/main/en/sft_trainer). We will use `bitsandbytes` to [quantize the base model into 4-bit](https://huggingface.co/blog/4bit-transformers-bitsandbytes). We will also install `einops`, which some model architectures (such as Falcon) require at load time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "mNnkgBq7Q3EU",
    "outputId": "9cc72eec-a77f-402b-8fd5-57a14653383c"
   },
    "outputs": [],
   "source": [
    "# install required libraries\n",
    "\n",
    "!pip install -q -U trl transformers accelerate git+https://github.com/huggingface/peft.git\n",
    "!pip install -q datasets bitsandbytes einops wandb"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "ODB783kwKlgJ"
   },
   "outputs": [],
   "source": [
    "# import libraries and modules\n",
    "\n",
    "import json\n",
    "import os\n",
    "from pprint import pprint\n",
    "\n",
    "import bitsandbytes as bnb\n",
    "import pandas as pd\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import transformers\n",
    "from datasets import load_dataset\n",
    "from huggingface_hub import notebook_login\n",
    "\n",
    "from peft import (\n",
    "    LoraConfig,\n",
    "    PeftConfig,\n",
    "    PeftModel,\n",
    "    get_peft_model,\n",
    "    prepare_model_for_kbit_training,\n",
    ")\n",
    "from transformers import (\n",
    "    AutoConfig,\n",
    "    AutoModelForCausalLM,\n",
    "    AutoTokenizer,\n",
    "    BitsAndBytesConfig,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "hzFhMow6ZY44"
   },
   "source": [
    "### Connect with HF account"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Av5JpWe1LTXE"
   },
   "source": [
    "Follow the instructions below to generate a Hugging Face token with write access.\n",
    "\n",
    "If you don't already have a Hugging Face account, create one first. Then:\n",
    "\n",
    "***Go to Profile -> Access Tokens -> Create a token with write access -> Copy it and use it for login***"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 145,
     "referenced_widgets": [
      "db0da7cffbea4d97a21968560229cc63",
      "127213df2edb4289b139e64ebfbb935f",
      "bd14014f8d854e5894faa105830cd657",
      "64c3c85242b2491986074f35d502159c",
      "2c56801548ca4329a24d83e9548653e4",
      "e69c61e625f14e0cb1e986dabab577f6",
      "5dbc8bd6ddd5408499e3a69fd1e421f5",
      "82956148127e4286abbae222449f0ce4",
      "5fb09536c5774f649a898950c3729088",
      "1495f2e0648549afa30100f8167f88be",
      "84b96a8b50394d1da8465a8722440fbe",
      "faf00a2cb92047eb87905d41855e4c34",
      "1b6fbc996c5a4e5fbce0e5cfcc764f57",
      "f50495d0cc3d42419796c3a1f9627644",
      "b51dc9418dac4dc9910c68c9ed0af11d",
      "22a7fd916801433c83857efd160b8504",
      "e9a1402d61204f22b691cc1b8f3fab98",
      "ac82aa13b6a64cd2ad3fd79abb52ff84",
      "bc86932701f64eb1b84b873d1b9ebed9",
      "70a04906833e4855a2c9ebcff55f76a9",
      "68be93836c054703b4da20fe1d24146b",
      "085a92d31c874f9a8bc120ee0f5553c8",
      "19c133334a2d4f8e863da22ba8e36727",
      "56021216344347f8ad0d91787bf42074",
      "f6690cc9083243568699024635372541",
      "db78a7dabf06431db8d380042a040bdc",
      "8d709eace6d94abcb4481c06eaba168e",
      "7dc5279d1c61472ca4cb2dd87a1ec73d",
      "27187ef4ee04435ca4fbbc8e17ba1ef7",
      "4fc760ceaaea4ec7ba282d634bc3ed95",
      "329bf711b3d046b3b4dfeff7eb89a751",
      "0fea75df70d9444c8dcc5407184f61fe"
     ]
    },
    "id": "euiJoW3-Kliu",
    "outputId": "2004d455-2f8d-45ce-973a-cf265d769f85"
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "db0da7cffbea4d97a21968560229cc63",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "VBox(children=(HTML(value='<center> <img\\nsrc=https://huggingface.co/front/assets/huggingface_logo-noborder.sv…"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# login with hugging face token\n",
    "notebook_login()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "tY_wTbS7PETU",
    "outputId": "347dfec4-7a8a-4e24-aa2a-d776c35753e6"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Wed Jan 24 11:09:45 2024       \n",
      "+---------------------------------------------------------------------------------------+\n",
      "| NVIDIA-SMI 535.104.05             Driver Version: 535.104.05   CUDA Version: 12.2     |\n",
      "|-----------------------------------------+----------------------+----------------------+\n",
      "| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |\n",
      "| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |\n",
      "|                                         |                      |               MIG M. |\n",
      "|=========================================+======================+======================|\n",
      "|   0  Tesla T4                       Off | 00000000:00:04.0 Off |                    0 |\n",
      "| N/A   46C    P8              10W /  70W |      3MiB / 15360MiB |      0%      Default |\n",
      "|                                         |                      |                  N/A |\n",
      "+-----------------------------------------+----------------------+----------------------+\n",
      "                                                                                         \n",
      "+---------------------------------------------------------------------------------------+\n",
      "| Processes:                                                                            |\n",
      "|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |\n",
      "|        ID   ID                                                             Usage      |\n",
      "|=======================================================================================|\n",
      "|  No running processes found                                                           |\n",
      "+---------------------------------------------------------------------------------------+\n"
     ]
    }
   ],
   "source": [
    "# check the available GPU and its VRAM\n",
    "!nvidia-smi"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "daHiXTX19JEc"
   },
   "source": [
    "### Download data for tuning"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "UVosPiQTKllv",
    "outputId": "a2708f06-c47f-4425-9882-8c22c3471f6f"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading...\n",
      "From: https://drive.google.com/uc?id=1tiAscG941evQS8RzjznoPu8meu4unw5A\n",
      "To: /content/ecommerce-faq.json\n",
      "\r",
      "  0% 0.00/21.0k [00:00<?, ?B/s]\r",
      "100% 21.0k/21.0k [00:00<00:00, 45.3MB/s]\n"
     ]
    }
   ],
   "source": [
    "# download ecommerce-faq.json\n",
    "\n",
    "!gdown 1tiAscG941evQS8RzjznoPu8meu4unw5A"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "7YRSakbl9PQ8"
   },
   "source": [
    "### Data loading and understanding"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "8WSc6kUyKloc"
   },
   "outputs": [],
   "source": [
    "# load the Q&A data from ecommerce-faq.json\n",
    "\n",
    "with open(\"ecommerce-faq.json\") as json_file:\n",
    "    data = json.load(json_file)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "_17jsWYqKlro",
    "outputId": "556130be-dba9-4531-a4e1-ed9db674c16b"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'question': 'How can I create an account?',\n",
      " 'answer': \"To create an account, click on the 'Sign Up' button on the top \"\n",
      "           'right corner of our website and follow the instructions to '\n",
      "           'complete the registration process.'}\n"
     ]
    }
   ],
   "source": [
    "pprint(data[\"questions\"][0], sort_dicts=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "XBsoKRdsKluv",
    "outputId": "bedfd712-a80c-4786-bc19-bc2f48fb7e55"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'question': 'What payment methods do you accept?',\n",
      " 'answer': 'We accept major credit cards, debit cards, and PayPal as payment '\n",
      "           'methods for online orders.'}\n"
     ]
    }
   ],
   "source": [
    "pprint(data[\"questions\"][1], sort_dicts=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "E29dFIihKlxW",
    "outputId": "de95a5e3-2513-44fb-af3b-a9b4278383fe"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'question': 'What is your return policy?',\n",
      " 'answer': 'Our return policy allows you to return products within 30 days of '\n",
      "           'purchase for a full refund, provided they are in their original '\n",
      "           'condition and packaging. Please refer to our Returns page for '\n",
      "           'detailed instructions.'}\n"
     ]
    }
   ],
   "source": [
    "pprint(data[\"questions\"][3], sort_dicts=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "9cE25RS5Mk1D"
   },
   "outputs": [],
   "source": [
    "# write the Q&A pairs to dataset.json for training\n",
    "with open(\"dataset.json\", \"w\") as f:\n",
    "    json.dump(data[\"questions\"], f)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 206
    },
    "id": "AgP372eXMk4N",
    "outputId": "9ef08fe0-347e-45a5-db78-479ae4994480"
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "\n",
       "  <div id=\"df-14a3910e-b017-4aeb-a630-2da7370e37f1\" class=\"colab-df-container\">\n",
       "    <div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>question</th>\n",
       "      <th>answer</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>How can I create an account?</td>\n",
       "      <td>To create an account, click on the 'Sign Up' b...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>What payment methods do you accept?</td>\n",
       "      <td>We accept major credit cards, debit cards, and...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>How can I track my order?</td>\n",
       "      <td>You can track your order by logging into your ...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>What is your return policy?</td>\n",
       "      <td>Our return policy allows you to return product...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>Can I cancel my order?</td>\n",
       "      <td>You can cancel your order if it has not been s...</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>\n",
       "    <div class=\"colab-df-buttons\">\n",
       "\n",
       "  <div class=\"colab-df-container\">\n",
       "    <button class=\"colab-df-convert\" onclick=\"convertToInteractive('df-14a3910e-b017-4aeb-a630-2da7370e37f1')\"\n",
       "            title=\"Convert this dataframe to an interactive table.\"\n",
       "            style=\"display:none;\">\n",
       "\n",
       "  <svg xmlns=\"http://www.w3.org/2000/svg\" height=\"24px\" viewBox=\"0 -960 960 960\">\n",
       "    <path d=\"M120-120v-720h720v720H120Zm60-500h600v-160H180v160Zm220 220h160v-160H400v160Zm0 220h160v-160H400v160ZM180-400h160v-160H180v160Zm440 0h160v-160H620v160ZM180-180h160v-160H180v160Zm440 0h160v-160H620v160Z\"/>\n",
       "  </svg>\n",
       "    </button>\n",
       "\n",
       "  <style>\n",
       "    .colab-df-container {\n",
       "      display:flex;\n",
       "      gap: 12px;\n",
       "    }\n",
       "\n",
       "    .colab-df-convert {\n",
       "      background-color: #E8F0FE;\n",
       "      border: none;\n",
       "      border-radius: 50%;\n",
       "      cursor: pointer;\n",
       "      display: none;\n",
       "      fill: #1967D2;\n",
       "      height: 32px;\n",
       "      padding: 0 0 0 0;\n",
       "      width: 32px;\n",
       "    }\n",
       "\n",
       "    .colab-df-convert:hover {\n",
       "      background-color: #E2EBFA;\n",
       "      box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\n",
       "      fill: #174EA6;\n",
       "    }\n",
       "\n",
       "    .colab-df-buttons div {\n",
       "      margin-bottom: 4px;\n",
       "    }\n",
       "\n",
       "    [theme=dark] .colab-df-convert {\n",
       "      background-color: #3B4455;\n",
       "      fill: #D2E3FC;\n",
       "    }\n",
       "\n",
       "    [theme=dark] .colab-df-convert:hover {\n",
       "      background-color: #434B5C;\n",
       "      box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\n",
       "      filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\n",
       "      fill: #FFFFFF;\n",
       "    }\n",
       "  </style>\n",
       "\n",
       "    <script>\n",
       "      const buttonEl =\n",
       "        document.querySelector('#df-14a3910e-b017-4aeb-a630-2da7370e37f1 button.colab-df-convert');\n",
       "      buttonEl.style.display =\n",
       "        google.colab.kernel.accessAllowed ? 'block' : 'none';\n",
       "\n",
       "      async function convertToInteractive(key) {\n",
       "        const element = document.querySelector('#df-14a3910e-b017-4aeb-a630-2da7370e37f1');\n",
       "        const dataTable =\n",
       "          await google.colab.kernel.invokeFunction('convertToInteractive',\n",
       "                                                    [key], {});\n",
       "        if (!dataTable) return;\n",
       "\n",
       "        const docLinkHtml = 'Like what you see? Visit the ' +\n",
       "          '<a target=\"_blank\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\n",
       "          + ' to learn more about interactive tables.';\n",
       "        element.innerHTML = '';\n",
       "        dataTable['output_type'] = 'display_data';\n",
       "        await google.colab.output.renderOutput(dataTable, element);\n",
       "        const docLink = document.createElement('div');\n",
       "        docLink.innerHTML = docLinkHtml;\n",
       "        element.appendChild(docLink);\n",
       "      }\n",
       "    </script>\n",
       "  </div>\n",
       "\n",
       "\n",
       "<div id=\"df-8797e326-54cf-49d9-91ff-b19046e3e21c\">\n",
       "  <button class=\"colab-df-quickchart\" onclick=\"quickchart('df-8797e326-54cf-49d9-91ff-b19046e3e21c')\"\n",
       "            title=\"Suggest charts\"\n",
       "            style=\"display:none;\">\n",
       "\n",
       "<svg xmlns=\"http://www.w3.org/2000/svg\" height=\"24px\"viewBox=\"0 0 24 24\"\n",
       "     width=\"24px\">\n",
       "    <g>\n",
       "        <path d=\"M19 3H5c-1.1 0-2 .9-2 2v14c0 1.1.9 2 2 2h14c1.1 0 2-.9 2-2V5c0-1.1-.9-2-2-2zM9 17H7v-7h2v7zm4 0h-2V7h2v10zm4 0h-2v-4h2v4z\"/>\n",
       "    </g>\n",
       "</svg>\n",
       "  </button>\n",
       "\n",
       "<style>\n",
       "  .colab-df-quickchart {\n",
       "      --bg-color: #E8F0FE;\n",
       "      --fill-color: #1967D2;\n",
       "      --hover-bg-color: #E2EBFA;\n",
       "      --hover-fill-color: #174EA6;\n",
       "      --disabled-fill-color: #AAA;\n",
       "      --disabled-bg-color: #DDD;\n",
       "  }\n",
       "\n",
       "  [theme=dark] .colab-df-quickchart {\n",
       "      --bg-color: #3B4455;\n",
       "      --fill-color: #D2E3FC;\n",
       "      --hover-bg-color: #434B5C;\n",
       "      --hover-fill-color: #FFFFFF;\n",
       "      --disabled-bg-color: #3B4455;\n",
       "      --disabled-fill-color: #666;\n",
       "  }\n",
       "\n",
       "  .colab-df-quickchart {\n",
       "    background-color: var(--bg-color);\n",
       "    border: none;\n",
       "    border-radius: 50%;\n",
       "    cursor: pointer;\n",
       "    display: none;\n",
       "    fill: var(--fill-color);\n",
       "    height: 32px;\n",
       "    padding: 0;\n",
       "    width: 32px;\n",
       "  }\n",
       "\n",
       "  .colab-df-quickchart:hover {\n",
       "    background-color: var(--hover-bg-color);\n",
       "    box-shadow: 0 1px 2px rgba(60, 64, 67, 0.3), 0 1px 3px 1px rgba(60, 64, 67, 0.15);\n",
       "    fill: var(--button-hover-fill-color);\n",
       "  }\n",
       "\n",
       "  .colab-df-quickchart-complete:disabled,\n",
       "  .colab-df-quickchart-complete:disabled:hover {\n",
       "    background-color: var(--disabled-bg-color);\n",
       "    fill: var(--disabled-fill-color);\n",
       "    box-shadow: none;\n",
       "  }\n",
       "\n",
       "  .colab-df-spinner {\n",
       "    border: 2px solid var(--fill-color);\n",
       "    border-color: transparent;\n",
       "    border-bottom-color: var(--fill-color);\n",
       "    animation:\n",
       "      spin 1s steps(1) infinite;\n",
       "  }\n",
       "\n",
       "  @keyframes spin {\n",
       "    0% {\n",
       "      border-color: transparent;\n",
       "      border-bottom-color: var(--fill-color);\n",
       "      border-left-color: var(--fill-color);\n",
       "    }\n",
       "    20% {\n",
       "      border-color: transparent;\n",
       "      border-left-color: var(--fill-color);\n",
       "      border-top-color: var(--fill-color);\n",
       "    }\n",
       "    30% {\n",
       "      border-color: transparent;\n",
       "      border-left-color: var(--fill-color);\n",
       "      border-top-color: var(--fill-color);\n",
       "      border-right-color: var(--fill-color);\n",
       "    }\n",
       "    40% {\n",
       "      border-color: transparent;\n",
       "      border-right-color: var(--fill-color);\n",
       "      border-top-color: var(--fill-color);\n",
       "    }\n",
       "    60% {\n",
       "      border-color: transparent;\n",
       "      border-right-color: var(--fill-color);\n",
       "    }\n",
       "    80% {\n",
       "      border-color: transparent;\n",
       "      border-right-color: var(--fill-color);\n",
       "      border-bottom-color: var(--fill-color);\n",
       "    }\n",
       "    90% {\n",
       "      border-color: transparent;\n",
       "      border-bottom-color: var(--fill-color);\n",
       "    }\n",
       "  }\n",
       "</style>\n",
       "\n",
       "  <script>\n",
       "    async function quickchart(key) {\n",
       "      const quickchartButtonEl =\n",
       "        document.querySelector('#' + key + ' button');\n",
       "      quickchartButtonEl.disabled = true;  // To prevent multiple clicks.\n",
       "      quickchartButtonEl.classList.add('colab-df-spinner');\n",
       "      try {\n",
       "        const charts = await google.colab.kernel.invokeFunction(\n",
       "            'suggestCharts', [key], {});\n",
       "      } catch (error) {\n",
       "        console.error('Error during call to suggestCharts:', error);\n",
       "      }\n",
       "      quickchartButtonEl.classList.remove('colab-df-spinner');\n",
       "      quickchartButtonEl.classList.add('colab-df-quickchart-complete');\n",
       "    }\n",
       "    (() => {\n",
       "      let quickchartButtonEl =\n",
       "        document.querySelector('#df-8797e326-54cf-49d9-91ff-b19046e3e21c button');\n",
       "      quickchartButtonEl.style.display =\n",
       "        google.colab.kernel.accessAllowed ? 'block' : 'none';\n",
       "    })();\n",
       "  </script>\n",
       "</div>\n",
       "\n",
       "    </div>\n",
       "  </div>\n"
      ],
      "text/plain": [
       "                              question  \\\n",
       "0         How can I create an account?   \n",
       "1  What payment methods do you accept?   \n",
       "2            How can I track my order?   \n",
       "3          What is your return policy?   \n",
       "4               Can I cancel my order?   \n",
       "\n",
       "                                              answer  \n",
       "0  To create an account, click on the 'Sign Up' b...  \n",
       "1  We accept major credit cards, debit cards, and...  \n",
       "2  You can track your order by logging into your ...  \n",
       "3  Our return policy allows you to return product...  \n",
       "4  You can cancel your order if it has not been s...  "
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pd.DataFrame(data[\"questions\"]).head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ZXUqtrpcMpDN"
   },
   "source": [
    "We're using a sharded model: the checkpoint is split into multiple files (around 14 shards in this case). Shards can be loaded one at a time and placed into different memory types, such as GPU or CPU memory, so you can load and fine-tune a large model even with limited memory. That is why we use the sharded approach."
   ]
  },
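  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch (the model id and flags below are illustrative, not the exact loading cell used later), a sharded checkpoint can be loaded shard by shard and spread across devices with `device_map=\"auto\"`:\n",
    "\n",
    "```python\n",
    "from transformers import AutoModelForCausalLM\n",
    "\n",
    "# illustrative: load a sharded checkpoint, letting accelerate place\n",
    "# shards across GPU and CPU memory as they are read from disk\n",
    "model = AutoModelForCausalLM.from_pretrained(\n",
    "    \"meta-llama/Llama-2-7b-hf\",  # assumed model id\n",
    "    device_map=\"auto\",\n",
    "    load_in_4bit=True,\n",
    ")\n",
    "```"
   ]
  },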
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "wEXXPFrOaSZz"
   },
   "source": [
    "### Changing the quantization type\n",
    "\n",
    "The 4-bit integration comes with two different quantization types: FP4 and NF4. The NF4 dtype stands for 4-bit NormalFloat and was introduced in the [QLoRA paper](https://arxiv.org/abs/2305.14314).\n",
    "\n",
    "You can switch between these two dtypes using the `bnb_4bit_quant_type` argument of `BitsAndBytesConfig`. By default, FP4 quantization is used."
   ]
  },
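  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch (the compute dtype and double-quantization flag below are illustrative assumptions, not this notebook's final settings), switching to NF4 looks like this:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "from transformers import BitsAndBytesConfig\n",
    "\n",
    "# illustrative NF4 config: 4-bit weights, NF4 quantization,\n",
    "# half-precision compute, and nested (double) quantization\n",
    "bnb_config = BitsAndBytesConfig(\n",
    "    load_in_4bit=True,\n",
    "    bnb_4bit_quant_type=\"nf4\",\n",
    "    bnb_4bit_compute_dtype=torch.float16,\n",
    "    bnb_4bit_use_double_quant=True,\n",
    ")\n",
    "```\n",
    "\n",
    "Passing this config as `quantization_config` to `AutoModelForCausalLM.from_pretrained` loads the base model in NF4."
   ]
  },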
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 862,
     "referenced_widgets": [
      "7e376b45695948239817b8e5195b0265",
      "53f8306f762c40208bd7ecfac6ea9fe4",
      "6ae60739c3f440738905090310f8a861",
      "5a2d250e95d149c5b0b48a9002426321",
      "4727720977fc46269e4deada686c5c4f",
      "759cd1bc90ad4b368f075b242d1770d5",
      "5ba79457f39a4092a1a7da4819c544f0",
      "cb086d61be174c7391223cd9278500a2",
      "618e6c61e6cc44c2948ce80ddb9a50ba",
      "4fa2a6de891849319290c2402dd93f32",
      "46f5065a034041d4ba1236d405f13c66",
      "c81fea829eb84022a6e462d786dec34c",
      "1a8b3120850d4b3896f31e7fca6a736e",
      "1e9dcf25e49b4c97bfd3256328f4ac82",
      "d14e56c713a54fbd838670a2dccc3cb6",
      "b00804101e7443f2b477e9faaf606810",
      "921309e80c024a76a90a54a918c344f6",
      "d53d6040c3074d1881dd3c1792fc8656",
      "52c97d6930b54b4d87be048c4121cfdd",
      "000bdae16fec436d83edb0e0b19f7248",
      "acd9a195cd31483ba90252f12b9b30b3",
      "e9131760ade24fcf8fec1de32afda46c",
      "729e45577f1a46cebb0416ffc401fbd4",
      "7afbe388f27b42fd93aa0828008087eb",
      "164d66ad6c054ba7908ee86fe2f74a6e",
      "89d34c9f97f54d55ab63d664e69cbf83",
      "a3453ba657654ab1bdba56d6ddfbe796",
      "1100ac4fdd82422a855d1e0fc675fd41",
      "08a2161434af4a1eaa5b612bfb12413b",
      "742fd80539204da0a599e6844a440dd3",
      "b690c22dc34f4b74817a8c454a6f0a6c",
      "813f87700d924cc2b3ae88047f8e4f11",
      "3584515e4d254db1a3e737a187e70594",
      "8ddfd3bcde8f4993aeb279db4ba8c715",
      "8be6272f61b04f8d850a4c852af2987b",
      "d7667212b28640e998c8980d0925699f",
      "9fcd72c7dc394faa87f9d4afea9a935d",
      "5514c80df18340c6ac6f1f5760aef2b9",
      "736baab93b1445e794372cab9b6faf18",
      "b0e8c0b5f6f54dfe8ff2d730424dfd33",
      "1ae51a8caed749b8b2458cc67782a852",
      "95e95c4526cb4c3888d8f1ba09dbb4ae",
      "1340163f1a694c068f14ec8f8eaed76f",
      "adcaac2088ab4b618617c9e483e27395",
      "68c57995c72c437d93abec28e97f662e",
      "07c8aed30b2b423fb64021a8d1250520",
      "e35fe935e6e34442a727a8035e04a8e4",
      "79efbd99303b4c95b1479cfd91b2569d",
      "fe45154a516e4273bf4055466e10d78f",
      "29b450c77f1941eeb6435e5a44c78c2f",
      "4c113de390d14a57b429b87db2aa8dbe",
      "018b438798774bd7a1ae582c5abd3485",
      "cf81d87c3fb04b31948fbe00b08002d3",
      "ef92c17bb3b6461d85aed96cf89ab07f",
      "e36df3d115844b518fd6f58b16a7dfd5",
      "7933b6e8fd8846e58f5610af96a5bc02",
      "c66d66807c57475aa64ce4a476e79dfe",
      "2e9f22501ce64989954f99043a3bc4c7",
      "e7164efc23f14fbca4adf6015abf2c03",
      "70802a0837a54dfb8925ad1233955207",
      "35f9130b437d4742b232692cf9df6d24",
      "b248895d24c949fcb10751359e7dc093",
      "a7c62cab596a473a874127158b8c9429",
      "916524fe436a436681d2faa14cb6910c",
      "60f6f492435f4eae9fc3dda2abf0cb1e",
      "47c8baac239a44d99e321c666c09c662",
      "e30c583f86584d67b64b4b97f249f6de",
      "2ea63b0b946f42a2b05a826acf71a85a",
      "e01284eb49744e56811f524c4957b7ac",
      "5f75a0480d734099929d2b3a35429d04",
      "498d30011222411d85e7378fa386ce4d",
      "41f5b6b898204da78a80bd0a51962345",
      "5b4c3ae017a641158399df89b479488d",
      "deede946a64a42138a7192d3624a2454",
      "745a6199154e4eb596624809a80c9b44",
      "3745c1609b8742cab64c87dc0c1a7ae7",
      "275fdcea1d6047ff9b2783f940673fd2",
      "f12b38ff9a804cad83b969b967778f0a",
      "ebfb497e967a4164b71b76c47410c547",
      "def0dd87d80c46a1b8e6fedf074ae8e0",
      "82132875e3ad4a0a9569644b4c40ed93",
      "e382df177b9f403abba003a15930209d",
      "988c4541b01d4d949597b23899bc6bb4",
      "a672cc307f6343bb90f25b678d3b9e53",
      "3ad5f8e829e94dc8b7646317e1bd26e0",
      "c9498a51bd4c428ca97ec0fe652e2d55",
      "c4896f188e35473bbde42c0ef0d5311a",
      "5edf1ffd7a194333b329b15c00d72589",
      "75210ffe4d3846dfa71158c8af3f4bdc",
      "969e5875225f4f7e82711e9d1a508889",
      "3bf30262624644a38d8c14adc3b6341d",
      "c35250df27284962bf4ea9dd49766a7c",
      "d8b240ebf8954351963ae9c13eea1146",
      "f6a703af8d0b4e2db0a1f10316300f85",
      "e85916bdce554d2aa47d24865bef8e7c",
      "113d85b0351942b48122b649b0dbe1de",
      "bd3ee08051b449ac8a4b5b547bf4a76c",
      "d24dc54ed22548538ec1d9a2fb5cce53",
      "124f9dff9c804e9dbabb5a8fafd82603",
      "8b77bf1730dc4e0caa034ec75d9c8a93",
      "1926a0a06f4345479d3ed257414f7e03",
      "8a3494842bca4e528ed9c143ea41ce1a",
      "b40067357d4a4d6691a9a83b3e48490e",
      "b312f83f5f8c4d51b6f68de7516202fc",
      "ca0fa43f6cfb493f9ff341ac4d05c884",
      "7fb17605c91847f08a8deaef91b7f20c",
      "1c95c715444a46068edc7d05067d2c41",
      "a68ecb19feea494ba254c0eb7e66fe4f",
      "9a5dec01480740449bd17585068c013a",
      "ba02fea9e6f0495bba5b87c1a135cc26",
      "6c5af26338fb4079a82bb0201b68efde",
      "f8f722ac1a3442719f4cd96305b94d99",
      "89410337c62e4e29a6bc3b84c735375f",
      "f9f55d83fd4249fd9c8e200e1cd6aada",
      "7114cf4f77d04df9805c5207943dac32",
      "b122dc1e8a8649d1be3b7716611608a7",
      "1ee3253d4a714f9db8f9b8d5ff9d9d13",
      "a6a3c4e60e484d12b42eeb5893802202",
      "5df4025cbb714a1b9e9f2ceaa0ff5519",
      "441e65db5e3c43f7a0f9b51231114f7b",
      "135198ada2894674b651cf6ea47ebe65",
      "024c62e1a0e94c68b5c585a018adde2b",
      "a6750217ac9a4766b0aabcab77544872",
      "1ef4852c2e4d447e8b666e48937cd482",
      "1ccfce2abd1e402090f0b0e90bf76ff6",
      "aedb84fa458040adae8e6d3613ac8547",
      "bc84f464a79b45c783bd8afeee6fee8f",
      "38fe5bf0a60648628443bde1dacd3228",
      "a2f4ad8e9d0140fd9f97ea48c57eef87",
      "d3ff9a4c593b42bb940662646b0086cb",
      "8e20584f3b934e979607e5a2ee743fdf",
      "de6bbff52e7e4fe3ba8fba87a2145e19",
      "e77f3c96ca4a42a6969108c23d59b6d6",
      "107322a867614535a794746898c9f9f2",
      "582d7adb0bfe4abe851c85b8f41e39b4",
      "21546d5c83f7461284b58de1b06b70a5",
      "e4a0e97acdec42759f2db3ba2069bbde",
      "4d4e64061acf4f2787839e07d72ef4f5",
      "b00c114a99e74cc0aa02913be667ca88",
      "a1d25f07e0534dddb12cadf9529e9432",
      "36001d04af14412ea24cc0eaee568af3",
      "55f5bc5cf02e4818b099f9fcb310cf52",
      "2554aa220e7b45f58bab1973eeec052a",
      "0101774ee5b04733816b37bed0a8718f",
      "de9f5a374a7e4c4ea6004537b5fa592f",
      "d6c854d084994108a80e0c222b9cb40c",
      "02379b2c6671496ebb2708470219bfa4",
      "9c95304c7d6d4d33a8d67f8474c6ca1f",
      "934a0c90544f40d09da8587a15b169a3",
      "8f5e05b4428c487dbc1d85ee8392fdc0",
      "2f1a8d6f72364760a9d1cf9a3c2077cf",
      "010091aac936471baaf540866333e905",
      "fe4479df735e4ce7bd118a377e7d3d30",
      "a924a11b71714d1a987681c1493d3696",
      "d57d692cf9f1472d99d4e89d0e03d9eb",
      "bbdd1c96b9b5404cb79fe7fae4d128e3",
      "9670ee9091c14561ae4f90a644fded61",
      "528c1a53b5b6421ba7a5597047cb425f",
      "41c33ef46e144825b304c2cddd7a820a",
      "4843ed950e19430e832c4c3ffe171872",
      "4c3ba306bf49409a95e1f20697f4b939",
      "08e978370b4c46f5b3c9f6766d39c475",
      "6dfb7f632c6b46bb95ab3fab5a25731b",
      "9c60639463664238a15c45922810c1ef",
      "f70fa89c94d54856a977b0d115f1b16f",
      "2409a58a1a184e44997cd08309087bb7",
      "bcca7f5c95e64a299b2c566b49ea9d8e",
      "2664f5896c7548c98dec4adacafc1773",
      "1a0b776d0683427ab3c2d0336090fc07",
      "da42b875ac5c445b8568f08deb70f2d7",
      "8bd6df78e23946c2a654996a7bb151d1",
      "99d810d349ea45f3b5974d546451716d",
      "092068270e1645b58a8199df31830104",
      "0772fe28de6f4ede9be2e009168d6440",
      "4ed3dd6adde6471c9f3bb968a9042a6c",
      "445a34b3b3cc4d79a5439fc31327d401",
      "db9dbbf55ac6453190a8e9d3276415c7",
      "ac86550d992f4f52872d239def42f7f7",
      "1062d77445a645ba85733911ef5492ea",
      "e43a2e9cdc744425bf8da2e572075e6f",
      "5fc541b7b8684a80afa5b93bea656518",
      "78e2fafee89644ef88e3929c9dce257e",
      "0063a1c656d3466cac6f94b44b685150",
      "6500bbf2ed8348aa84ea6c8a6e2d5f56",
      "5f8d3a8fcb7b42998210cefe90762ac7",
      "65537c62866f4e8fa740b574a4e79e4b",
      "9caa5c6e2c1547029a8473e5c1e97873",
      "c650fd87ef804ee3b6775d2afbacc8a2",
      "93b4cfcf4c3347068d050ed0879163bb",
      "921c5285939944e6af199235bc84e283",
      "a907179506f3484da21438176d320441",
      "ab23442cd7484bc6a4873c0492bfc130",
      "8d49d6edb84d476382f397981eff1955",
      "fce620ead2124b3292f440a586d7c352",
      "6474ee1f52654f4e8f2483bb7e17cb4f",
      "8d480f4ddf0c4ad49e08dcc3d3978756",
      "5918b970798e4131bf871ffcdf3d1a3e",
      "e6a7791cdfb5438c9fb607f0b86d7a77",
      "d0e5256910d04cc8a5f6a27cb0ae9314",
      "b54f855f1df0422c874bda0da5be30a0",
      "e1d671173a094e6ca82d5d98c913a34a",
      "b680bbe6bb924703aa81966c44fb44c5",
      "100d854f151e4c1ab19a57442dc0c648",
      "0ef96e0ad97f4b2180e91e649aec4b14",
      "cef32900d66147bc83861e367de53d43",
      "b8a501f289524d1ab413e2dbd7054e8a",
      "57e878a0f0814255ab877a5b99f0ba46",
      "6651654128c8494487523de9e9462f93",
      "d4aa32a889a14029ad6824dce5b552ac",
      "e968170092424db885496476020c19a3",
      "6dd7063dda0b4ba2810f968b388b103b",
      "815e5297e36843268d52f87181627e54",
      "3894e6fca1d24eb7ad4215291bf17c69",
      "d297a2d53ae24ae6be67380086aecf67",
      "0f8a83e4c96e475ebf8fff5a0a74825a",
      "a2b35805c80b491298b2e3079b72f99b",
      "de03fb55b67c4040a06b38c63b482203",
      "ccbf46622cf14879bb6d72bf3cee40a8",
      "0cc2d95d223244df8f7a37cee028aa5c",
      "3aabb7373794469f831f692b70f0ce39",
      "34795cc3d44d4ed3b52839d815aaa71e",
      "352f9ecf4fc64a4fae9d9cc507848865",
      "d6260183890e4cbfb821387e76743c53",
      "1e04fae4e50f400bbc4a027e9623abce",
      "4f7c42616b2d4abd86af3f1feb81984e",
      "d0de28d203c34cfca54e95a2d39994f7",
      "614805637cbc43c2bebfa56196e3b7b1",
      "4b4566c507ce4f9292f8d121776ccd55",
      "9c6c6fc03fc04b94b6300aaed02c336a",
      "9328eb252a5442cbbc61076da40d43fe",
      "53c519a74a5a4ebc89b4e5d8a4fc355c",
      "556b1e5bb1364189acc6ab399320e869",
      "fd86c77c01464ffcb897bcd2747c649e",
      "4d44fdce5825457ba23ffbe0d6461a9e",
      "a01a51db34a0413f9a02ff390f706858",
      "ec59b989de8543f3b31c04fb2ade85f8",
      "14f7fa4cf8494aacbf914eae68033b88",
      "3b8fe7d02a8f41c2ae2df89c98b36845",
      "420c0ca380194243baf2d35c32a74cf0",
      "0fb08803a958442ea9b3e811f3d613fd",
      "ac215a2f754a42f0bb184deac83b5591",
      "4faf605688c646f2bd002fd086b60aa2",
      "90a73d86ca8e4c77aea19bccb8fd654b",
      "b32a5eae40274ed3ba224a5244ee901b",
      "502c7b36cf6649b1a7e4c38a394e6712",
      "71c3b8925ee54952b93e02a736f8bb59",
      "2ad724c423ba4c7bb0fffb4c1ff6bbc7",
      "e4d09be6e10244679d93f324a475db7c",
      "47714bd097074f53b3840afc7cd84074",
      "2f7ffa207bda4402b46859fe41a14dfa",
      "dc4873a72b04410ea60264066129b821",
      "3a7da27e98124c4e908bcca0e1fe1e21",
      "0630de2afbf54b96a0144a66dfae67d0"
     ]
    },
    "id": "-v9TmkByVVSn",
    "outputId": "1242b674-f165-4e2a-aa62-6e1b51918c40"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_token.py:88: UserWarning: \n",
      "The secret `HF_TOKEN` does not exist in your Colab secrets.\n",
      "To authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.\n",
      "You will be able to reuse this secret in all of your notebooks.\n",
      "Please note that authentication is recommended but still optional to access public models or datasets.\n",
      "  warnings.warn(\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "7e376b45695948239817b8e5195b0265",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config.json:   0%|          | 0.00/626 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "c81fea829eb84022a6e462d786dec34c",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model.safetensors.index.json:   0%|          | 0.00/28.1k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "729e45577f1a46cebb0416ffc401fbd4",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Downloading shards:   0%|          | 0/14 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "8ddfd3bcde8f4993aeb279db4ba8c715",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model-00001-of-00014.safetensors:   0%|          | 0.00/981M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "68c57995c72c437d93abec28e97f662e",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model-00002-of-00014.safetensors:   0%|          | 0.00/967M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "7933b6e8fd8846e58f5610af96a5bc02",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model-00003-of-00014.safetensors:   0%|          | 0.00/967M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "e30c583f86584d67b64b4b97f249f6de",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model-00004-of-00014.safetensors:   0%|          | 0.00/990M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "f12b38ff9a804cad83b969b967778f0a",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model-00005-of-00014.safetensors:   0%|          | 0.00/944M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "75210ffe4d3846dfa71158c8af3f4bdc",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model-00006-of-00014.safetensors:   0%|          | 0.00/990M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "8b77bf1730dc4e0caa034ec75d9c8a93",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model-00007-of-00014.safetensors:   0%|          | 0.00/967M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "6c5af26338fb4079a82bb0201b68efde",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model-00008-of-00014.safetensors:   0%|          | 0.00/967M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "024c62e1a0e94c68b5c585a018adde2b",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model-00009-of-00014.safetensors:   0%|          | 0.00/990M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "e77f3c96ca4a42a6969108c23d59b6d6",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model-00010-of-00014.safetensors:   0%|          | 0.00/944M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "0101774ee5b04733816b37bed0a8718f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model-00011-of-00014.safetensors:   0%|          | 0.00/990M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "d57d692cf9f1472d99d4e89d0e03d9eb",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model-00012-of-00014.safetensors:   0%|          | 0.00/967M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "2409a58a1a184e44997cd08309087bb7",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model-00013-of-00014.safetensors:   0%|          | 0.00/967M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "db9dbbf55ac6453190a8e9d3276415c7",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model-00014-of-00014.safetensors:   0%|          | 0.00/847M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "c650fd87ef804ee3b6775d2afbacc8a2",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/14 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "d0e5256910d04cc8a5f6a27cb0ae9314",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "generation_config.json:   0%|          | 0.00/132 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "e968170092424db885496476020c19a3",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer_config.json:   0%|          | 0.00/676 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "34795cc3d44d4ed3b52839d815aaa71e",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer.model:   0%|          | 0.00/500k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "556b1e5bb1364189acc6ab399320e869",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer.json:   0%|          | 0.00/1.84M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "90a73d86ca8e4c77aea19bccb8fd654b",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "special_tokens_map.json:   0%|          | 0.00/411 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Load the sharded Llama-2-7B bf16 model and its tokenizer\n",
    "\n",
    "MODEL_NAME = \"TinyPixel/Llama-2-7B-bf16-sharded\"\n",
    "\n",
    "# Quantize the base model to 4-bit NF4, computing in bfloat16\n",
    "bnb_config = BitsAndBytesConfig(\n",
    "    load_in_4bit=True,\n",
    "    bnb_4bit_quant_type=\"nf4\",\n",
    "    bnb_4bit_compute_dtype=torch.bfloat16,\n",
    ")\n",
    "\n",
    "# model\n",
    "model = AutoModelForCausalLM.from_pretrained(\n",
    "    MODEL_NAME,\n",
    "    trust_remote_code=True,\n",
    "    quantization_config=bnb_config,\n",
    ")\n",
    "\n",
    "# tokenizer (Llama has no pad token, so reuse the EOS token for padding)\n",
    "tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)\n",
    "tokenizer.pad_token = tokenizer.eos_token"
   ]
  },
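  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For intuition about what 4-bit quantization does, here is a toy, pure-Python sketch of block-wise absmax quantization. This is a simplification: the NF4 data type used above maps values to a codebook tuned for normally distributed weights rather than the uniform levels shown here.\n",
    "\n",
    "```python\n",
    "# Toy block-wise absmax quantization to 4 bits (intuition only; real\n",
    "# NF4 uses a normal-distribution-aware codebook, not uniform levels).\n",
    "def quantize_block(block):\n",
    "    scale = max(abs(x) for x in block) or 1.0  # one fp scale per block\n",
    "    q = [round(x / scale * 7) for x in block]  # 15 signed levels: -7..7\n",
    "    return q, scale\n",
    "\n",
    "def dequantize_block(q, scale):\n",
    "    return [v / 7 * scale for v in q]\n",
    "\n",
    "block = [0.1, -0.4, 0.25, 0.05]\n",
    "q, scale = quantize_block(block)\n",
    "print(q, scale)                    # small integer codes plus the scale\n",
    "print(dequantize_block(q, scale))  # approximately the original values\n",
    "```\n",
    "\n",
    "Each block of weights is stored as small integer codes plus one higher-precision scale per block, which is where the memory saving comes from; `bnb_4bit_compute_dtype` controls the dtype the weights are dequantized to for the actual matrix multiplies."
   ]
  },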
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "ZHizPE78Mk-r"
   },
   "outputs": [],
   "source": [
    "# Helper function to print the number of trainable parameters\n",
    "\n",
    "\n",
    "def print_trainable_parameters(model):\n",
    "    \"\"\"\n",
    "    Prints the number of trainable parameters in the model.\n",
    "    \"\"\"\n",
    "    trainable_params = 0\n",
    "    all_param = 0\n",
    "    for _, param in model.named_parameters():\n",
    "        all_param += param.numel()\n",
    "        if param.requires_grad:\n",
    "            trainable_params += param.numel()\n",
    "    print(\n",
    "        f\"Trainable params: {trainable_params} || All params: {all_param} || Trainable%: {100 * trainable_params / all_param}\"\n",
    "    )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "HptrfjwCMlBs"
   },
   "outputs": [],
   "source": [
    "# Enable gradient checkpointing to trade compute for memory, then\n",
    "# prepare the quantized model for k-bit (QLoRA) training\n",
    "model.gradient_checkpointing_enable()\n",
    "model = prepare_model_for_kbit_training(model)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "6fJ9W-hje4sz"
   },
   "source": [
    "### LoRA Config\n",
    "\n",
    "`LoraConfig` controls how LoRA is applied to the base model through the following parameters:\n",
    "\n",
    "***r*** - the rank of the update matrices (an int). A lower rank gives smaller update matrices with fewer trainable parameters.\n",
    "\n",
    "***target_modules*** - the modules (for example, attention blocks) to which the LoRA update matrices are applied.\n",
    "\n",
    "***lora_alpha*** - the LoRA scaling factor.\n",
    "\n",
    "***bias*** - specifies whether the bias parameters should be trained. Can be `'none'`, `'all'` or `'lora_only'`.\n",
    "\n",
    "***modules_to_save*** - list of modules, apart from the LoRA layers, to be set as trainable and saved in the final checkpoint. This typically includes the model's custom head, which is randomly initialized for the fine-tuning task.\n",
    "\n",
    "***layers_to_transform*** - list of layers to be transformed by LoRA. If not specified, all layers in `target_modules` are transformed.\n",
    "\n",
    "***layers_pattern*** - pattern used to match layer names in `target_modules` when `layers_to_transform` is specified. By default, `PeftModel` looks for common layer patterns (`layers`, `h`, `blocks`, etc.); set this for exotic or custom models.\n",
    "\n",
    "***rank_pattern*** - a mapping from layer names or regular expressions to ranks that differ from the default rank `r`.\n",
    "\n",
    "***alpha_pattern*** - a mapping from layer names or regular expressions to alphas that differ from the default `lora_alpha`."
   ]
  },
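  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the roles of `r` and `lora_alpha` concrete, here is a toy, pure-Python sketch (not PEFT code) of the LoRA update: the frozen weight matrix `W` is adapted by a low-rank product `B @ A` scaled by `alpha / r`, and only `A` and `B` are trained.\n",
    "\n",
    "```python\n",
    "# Toy LoRA update: W_adapted = W + (alpha / r) * (B @ A)\n",
    "r, alpha = 1, 16\n",
    "scaling = alpha / r\n",
    "W = [[1.0, 0.0], [0.0, 1.0]]  # frozen base weight, shape (2, 2)\n",
    "B = [[0.1], [0.2]]            # trainable, shape (2, r)\n",
    "A = [[0.5, -0.5]]             # trainable, shape (r, 2)\n",
    "delta = [[sum(B[i][k] * A[k][j] for k in range(r)) for j in range(2)]\n",
    "         for i in range(2)]\n",
    "W_adapted = [[W[i][j] + scaling * delta[i][j] for j in range(2)]\n",
    "             for i in range(2)]\n",
    "print(W_adapted)\n",
    "```\n",
    "\n",
    "Because `A` and `B` together have far fewer entries than `W` when `r` is small, only a small fraction of the model's parameters ends up trainable, which is what `print_trainable_parameters` reports."
   ]
  },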
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "lSXo09-4MlEi",
    "outputId": "d01356bc-b71f-46b6-cb8a-7dc8d2685555"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Trainable params: 33554432 || All params: 3533967360 || Trainable%: 0.9494833591219133\n"
     ]
    }
   ],
   "source": [
    "from peft import LoraConfig, get_peft_model\n",
    "\n",
    "# Set the LoraConfig parameter values. target_modules is omitted here,\n",
    "# so PEFT falls back to its default for Llama models (the q_proj and\n",
    "# v_proj attention projections).\n",
    "\n",
    "lora_alpha = 16\n",
    "lora_dropout = 0.1\n",
    "lora_r = 64\n",
    "\n",
    "config = LoraConfig(\n",
    "    lora_alpha=lora_alpha,\n",
    "    lora_dropout=lora_dropout,\n",
    "    r=lora_r,\n",
    "    bias=\"none\",\n",
    "    task_type=\"CAUSAL_LM\",\n",
    ")\n",
    "\n",
    "model = get_peft_model(model, config)\n",
    "print_trainable_parameters(model)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "9UnhUWHm-YHh"
   },
   "source": [
    "### Inference Before Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "xxyk3cGDMlJO",
    "outputId": "ba282dee-f7a6-4574-ebe8-efaaf758f52d"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<human>: How can I create an account?\n",
      "<assistant>:\n"
     ]
    }
   ],
   "source": [
    "prompt = f\"\"\"\n",
    "<human>: How can I create an account?\n",
    "<assistant>:\n",
    "\"\"\".strip()\n",
    "print(prompt)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Zal7MzltMlNc"
   },
   "outputs": [],
   "source": [
    "# Adjust the generation configuration\n",
    "\n",
    "generation_config = model.generation_config\n",
    "generation_config.max_new_tokens = 80\n",
    "# Note: temperature and top_p only take effect when do_sample=True;\n",
    "# with the default do_sample=False, decoding is greedy.\n",
    "generation_config.temperature = 0.7\n",
    "generation_config.top_p = 0.7\n",
    "generation_config.num_return_sequences = 1\n",
    "generation_config.pad_token_id = tokenizer.eos_token_id\n",
    "generation_config.eos_token_id = tokenizer.eos_token_id"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "AF9pGCJ2MlUH",
    "outputId": "46da0b1a-00b2-42c0-b78c-3ba4f40345db"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "GenerationConfig {\n",
       "  \"bos_token_id\": 1,\n",
       "  \"eos_token_id\": 2,\n",
       "  \"max_new_tokens\": 80,\n",
       "  \"pad_token_id\": 2,\n",
       "  \"temperature\": 0.7,\n",
       "  \"top_p\": 0.7\n",
       "}"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Inspect the generation configuration\n",
    "generation_config"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "cComMNlMNIYx",
    "outputId": "69fc1b92-880f-43e8-fced-0d138c0ba8cf"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:392: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.7` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.\n",
      "  warnings.warn(\n",
      "/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:397: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.7` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.\n",
      "  warnings.warn(\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      ": How can I create an account?\n",
      ": How can I create an account?\n",
      ": How can I create an account? : How can I create an account?\n",
      "CPU times: user 6.25 s, sys: 826 ms, total: 7.08 s\n",
      "Wall time: 11.4 s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "# Specify the target device for model execution, typically a GPU.\n",
    "device = \"cuda:0\"\n",
    "\n",
    "# Tokenize the input prompt and move it to the specified device.\n",
    "encoding = tokenizer(prompt, return_tensors=\"pt\").to(device)\n",
    "\n",
    "# Disable gradient tracking with torch.inference_mode() for faster generation.\n",
    "with torch.inference_mode():\n",
    "    outputs = model.generate(\n",
    "        input_ids=encoding.input_ids,\n",
    "        attention_mask=encoding.attention_mask,\n",
    "        generation_config=generation_config,\n",
    "    )\n",
    "\n",
    "\n",
    "# Decode the generated output and print it, excluding special tokens.\n",
    "print(tokenizer.decode(outputs[0], skip_special_tokens=True))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "yostlhu9PKF2"
   },
   "source": [
    "### Build the Hugging Face Dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 49,
     "referenced_widgets": [
      "0c2449f48a474e6f93cde3d312d3a5dc",
      "5ba64ba2f4c24866b7adbb99fd40d314",
      "4213e9108c5e4974b4b62401c4687447",
      "cc4d09c3a9214bfb867eec0af3f2d7b1",
      "a6c7363109fa41b5ad9dc462165c013a",
      "ec0c6f59b8bf4eb6bf0bfec533eab08d",
      "dc589955e6744383ad572bd704ab3328",
      "feea409ab6e444f992ea4fa895e66169",
      "e9429b3be5db42db9fda06f0f5aa34e9",
      "d07fb77c8e10407dbeda4130156b8f2a",
      "16a438b7ce6342b3b0a1b4f3f87d248b"
     ]
    },
    "id": "d_WmYk94NIfa",
    "outputId": "7490170d-8b15-4e8f-a366-9ce1fd4755b4"
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "0c2449f48a474e6f93cde3d312d3a5dc",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Generating train split: 0 examples [00:00, ? examples/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "data = load_dataset(\"json\", data_files=\"dataset.json\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "TJkbZpXzNIiS",
    "outputId": "eef50255-f92f-49ae-b9ba-bfed60617139"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "DatasetDict({\n",
       "    train: Dataset({\n",
       "        features: ['answer', 'question'],\n",
       "        num_rows: 79\n",
       "    })\n",
       "})"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "fSHb0QqWNIlh",
    "outputId": "47a0dea2-5958-48b2-f7bc-b0ac63123bc7"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'answer': \"To create an account, click on the 'Sign Up' button on the top right corner of our website and follow the instructions to complete the registration process.\",\n",
       " 'question': 'How can I create an account?'}"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "data[\"train\"][0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "6PGm88MhNIp6"
   },
   "outputs": [],
   "source": [
    "def generate_prompt(data_point):\n",
    "    return f\"\"\"\n",
    "<human>: {data_point[\"question\"]}\n",
    "<assistant>: {data_point[\"answer\"]}\n",
    "\"\"\".strip()\n",
    "\n",
    "\n",
    "def generate_and_tokenize_prompt(data_point):\n",
    "    full_prompt = generate_prompt(data_point)\n",
    "    # Note: truncation=True has no effect here, because no max_length is\n",
    "    # given and this tokenizer has no predefined maximum length.\n",
    "    tokenized_full_prompt = tokenizer(full_prompt, padding=True, truncation=True)\n",
    "    return tokenized_full_prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 67,
     "referenced_widgets": [
      "7689dc301af34e50b31eb03199044994",
      "7997a206e4e9417ca48175d7c958aa49",
      "10cde979ec914e6eb52f297db23c297d",
      "779616cd217a4d1382f3955611276d84",
      "b621686032114e0dbd15d9e9cadd8c1b",
      "b11780aa116a493d92570db8a16013e3",
      "cbe0d6ad82e14f3a9cdac3ef93879ac4",
      "fcd98d26ebf84cc4b3674e0b341de679",
      "a4082600c40f486eb1b82df41dfd5d8d",
      "3754c400c4c742c19e0fcea32d92515f",
      "08b6cdda31414d74b07d890b62d4c8ba"
     ]
    },
    "id": "byzQ1CmQNItJ",
    "outputId": "5f275dc5-ba06-475d-bdb9-836f25168358"
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "7689dc301af34e50b31eb03199044994",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Map:   0%|          | 0/79 [00:00<?, ? examples/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.\n"
     ]
    }
   ],
   "source": [
    "data = data[\"train\"].shuffle().map(generate_and_tokenize_prompt)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "xh8z2xhHKl0z",
    "outputId": "6514f74f-06b9-442d-da26-f3841cd46ddb"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Dataset({\n",
       "    features: ['answer', 'question', 'input_ids', 'attention_mask'],\n",
       "    num_rows: 79\n",
       "})"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "oNSwV8ECQ3pe"
   },
   "outputs": [],
   "source": [
    "OUTPUT_DIR = \"experiments\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 1000
    },
    "id": "TO-zrLspQ347",
    "outputId": "434b300d-f633-4307-d9d2-53042d3ad88f"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.10/dist-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.\n",
      "  warnings.warn(\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='80' max='80' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [80/80 07:21, Epoch 4/5]\n",
       "    </div>\n",
       "    <table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       " <tr style=\"text-align: left;\">\n",
       "      <th>Step</th>\n",
       "      <th>Training Loss</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <td>1</td>\n",
       "      <td>2.275600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>2</td>\n",
       "      <td>2.245900</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>3</td>\n",
       "      <td>1.933500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>4</td>\n",
       "      <td>1.858800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>5</td>\n",
       "      <td>2.012600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>6</td>\n",
       "      <td>1.801800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>7</td>\n",
       "      <td>1.794000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>8</td>\n",
       "      <td>1.489300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>9</td>\n",
       "      <td>1.587700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>10</td>\n",
       "      <td>1.560400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>11</td>\n",
       "      <td>1.471600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>12</td>\n",
       "      <td>1.551800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>13</td>\n",
       "      <td>1.598900</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>14</td>\n",
       "      <td>1.403500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>15</td>\n",
       "      <td>1.195500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>16</td>\n",
       "      <td>1.334300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>17</td>\n",
       "      <td>1.191300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>18</td>\n",
       "      <td>1.072000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>19</td>\n",
       "      <td>1.151500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>20</td>\n",
       "      <td>1.109000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>21</td>\n",
       "      <td>1.135800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>22</td>\n",
       "      <td>1.122000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>23</td>\n",
       "      <td>0.953200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>24</td>\n",
       "      <td>1.027600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>25</td>\n",
       "      <td>0.940800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>26</td>\n",
       "      <td>0.907100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>27</td>\n",
       "      <td>0.784400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>28</td>\n",
       "      <td>0.880200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>29</td>\n",
       "      <td>1.014100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>30</td>\n",
       "      <td>0.843800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>31</td>\n",
       "      <td>1.039000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>32</td>\n",
       "      <td>0.733400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>33</td>\n",
       "      <td>0.676000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>34</td>\n",
       "      <td>0.628600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>35</td>\n",
       "      <td>0.906400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>36</td>\n",
       "      <td>0.530600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>37</td>\n",
       "      <td>0.678700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>38</td>\n",
       "      <td>0.595400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>39</td>\n",
       "      <td>0.748500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>40</td>\n",
       "      <td>0.590200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>41</td>\n",
       "      <td>0.563200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>42</td>\n",
       "      <td>0.639400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>43</td>\n",
       "      <td>0.513500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>44</td>\n",
       "      <td>0.645800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>45</td>\n",
       "      <td>0.542300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>46</td>\n",
       "      <td>0.364400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>47</td>\n",
       "      <td>0.481800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>48</td>\n",
       "      <td>0.647700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>49</td>\n",
       "      <td>0.489400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>50</td>\n",
       "      <td>0.634600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>51</td>\n",
       "      <td>0.365600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>52</td>\n",
       "      <td>0.420700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>53</td>\n",
       "      <td>0.487100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>54</td>\n",
       "      <td>0.533600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>55</td>\n",
       "      <td>0.361700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>56</td>\n",
       "      <td>0.460900</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>57</td>\n",
       "      <td>0.515300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>58</td>\n",
       "      <td>0.547600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>59</td>\n",
       "      <td>0.514300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>60</td>\n",
       "      <td>0.547600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>61</td>\n",
       "      <td>0.409700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>62</td>\n",
       "      <td>0.347000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>63</td>\n",
       "      <td>0.467800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>64</td>\n",
       "      <td>0.429700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>65</td>\n",
       "      <td>0.441100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>66</td>\n",
       "      <td>0.406900</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>67</td>\n",
       "      <td>0.505200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>68</td>\n",
       "      <td>0.405800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>69</td>\n",
       "      <td>0.427400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>70</td>\n",
       "      <td>0.528000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>71</td>\n",
       "      <td>0.290200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>72</td>\n",
       "      <td>0.301500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>73</td>\n",
       "      <td>0.484300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>74</td>\n",
       "      <td>0.383900</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>75</td>\n",
       "      <td>0.444400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>76</td>\n",
       "      <td>0.424000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>77</td>\n",
       "      <td>0.486000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>78</td>\n",
       "      <td>0.480600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>79</td>\n",
       "      <td>0.397400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>80</td>\n",
       "      <td>0.419400</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table><p>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "TrainOutput(global_step=80, training_loss=0.8391439635306597, metrics={'train_runtime': 447.6633, 'train_samples_per_second': 0.715, 'train_steps_per_second': 0.179, 'total_flos': 649997819142144.0, 'train_loss': 0.8391439635306597, 'epoch': 4.05})"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# training\n",
    "training_args = transformers.TrainingArguments(\n",
    "    per_device_train_batch_size=1,\n",
    "    gradient_accumulation_steps=4,\n",
    "    num_train_epochs=1,\n",
    "    learning_rate=2e-4,\n",
    "    fp16=True,\n",
    "    save_total_limit=3,\n",
    "    logging_steps=1,\n",
    "    output_dir=OUTPUT_DIR,\n",
    "    max_steps=80,\n",
    "    optim=\"paged_adamw_8bit\",\n",
    "    lr_scheduler_type=\"cosine\",\n",
    "    warmup_ratio=0.05,\n",
    "    report_to=\"tensorboard\",\n",
    ")\n",
    "\n",
    "trainer = transformers.Trainer(\n",
    "    model=model,\n",
    "    train_dataset=data,\n",
    "    args=training_args,\n",
    "    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),\n",
    ")\n",
    "model.config.use_cache = False\n",
    "trainer.train()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "NKLu3hYCYWbW"
   },
   "source": [
    "I trained it for 100 epochs, and as you can observe, the loss consistently decreases, indicating room for further improvement.\n",
    "\n",
    "NOTE: ***Consider extending the training to a higher number of epochs for potential enhancements***"
   ]
  },
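   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "If you do want to train for longer, one option (a sketch -- the step count is illustrative, and it reuses the `training_args`, `model`, `data`, and `tokenizer` objects defined above) is to raise `max_steps` and rebuild the trainer; since `model` already holds the fine-tuned weights in memory, calling `train()` again continues from where the previous run left off (note that the learning-rate schedule restarts):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Sketch: train the already fine-tuned in-memory model for more steps\n",
     "# (illustrative value; the LR schedule restarts with the new trainer).\n",
     "training_args.max_steps = 160\n",
     "trainer = transformers.Trainer(\n",
     "    model=model,\n",
     "    train_dataset=data,\n",
     "    args=training_args,\n",
     "    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),\n",
     ")\n",
     "trainer.train()"
    ]
   },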
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Bm-4ny2SYgYz"
   },
   "source": [
    "### Save model in local system"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "YQ4VipiaQ38Q"
   },
   "outputs": [],
   "source": [
    "model.save_pretrained(\"trained-model\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "gEWFr9vKYbvI"
   },
   "source": [
    "### Push trained model in Hugging face\n",
    "\n",
    "NOTE: ***Here you have to change directory where you want to push your model***.\n",
    "\n",
    "For me it is \"Prasant/Llama2-7b-qlora-chat-support-bot-faq\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 85,
     "referenced_widgets": [
      "c91ff953b85847fd9bcbc952b259d13e",
      "80055b5100d0410ab59ba6c000e5f65c",
      "eb05122c60014feca974b9be59a89570",
      "d3a6a5bcce6a4e7eaa2db04a53f5b933",
      "c5ccd8c9f94749d391fb0815180cc7ed",
      "64d26285c5d5413dab6ff9649e5b3b11",
      "100084059b6d41858478218c3ffec02a",
      "157518129a4d46c2b23b86223ae34ee4",
      "e44f488773994690a9505db3c7f5ad6f",
      "d6e500ae93e04beb8d5b80162df8cf9a",
      "4f4ac78e20d64c12a4d3d394f7ab78ae"
     ]
    },
    "id": "bvkTqEZFQ3_W",
    "outputId": "4d5e4b9e-a8ec-4514-e8e6-a390dfa39dd9"
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "c91ff953b85847fd9bcbc952b259d13e",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "adapter_model.safetensors:   0%|          | 0.00/134M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.google.colaboratory.intrinsic+json": {
       "type": "string"
      },
      "text/plain": [
       "CommitInfo(commit_url='https://huggingface.co/Prasant/Llama2-7b-qlora-chat-support-bot-faq/commit/afdc083726f49ccf925eda01e564e2a9520d92f3', commit_message='Upload model', commit_description='', oid='afdc083726f49ccf925eda01e564e2a9520d92f3', pr_url=None, pr_revision=None, pr_num=None)"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model.push_to_hub(\"Prasant/Llama2-7b-qlora-chat-support-bot-faq\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "RFiESDCCRXG6"
   },
   "source": [
    "In our approach, we've split the large model TinyPixel/Llama-2-7B-bf16 into more than 14 smaller parts, a method known as sharding. This strategy works well with the `accelerate` framework by huggingface.\n",
    "\n",
    "Each shard holds part of the model's data, and Accelerate helps distribute these parts across different memory types, like GPU and CPU. This way, we can handle large models without needing too much memory."
   ]
  },
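   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "Conceptually, the placement is a greedy fill: put each shard on the fastest device that still has room, and spill the rest to slower memory. The toy function below illustrates the idea in plain Python (hypothetical shard names, sizes, and budgets -- `accelerate`'s real algorithm is more sophisticated):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Toy greedy shard placement (illustrative, not accelerate's exact logic)\n",
     "def place_shards(shard_sizes, budgets):\n",
     "    remaining = dict(budgets)  # device -> free capacity\n",
     "    placement = {}\n",
     "    for name, size in shard_sizes.items():\n",
     "        for device in remaining:\n",
     "            if remaining[device] >= size:  # first device with room wins\n",
     "                remaining[device] -= size\n",
     "                placement[name] = device\n",
     "                break\n",
     "        else:\n",
     "            placement[name] = \"disk\"  # nothing fits: offload to disk\n",
     "    return placement\n",
     "\n",
     "shards = {\"shard_0\": 4, \"shard_1\": 4, \"shard_2\": 4, \"shard_3\": 2}\n",
     "print(place_shards(shards, {\"gpu\": 8, \"cpu\": 5}))\n",
     "# {'shard_0': 'gpu', 'shard_1': 'gpu', 'shard_2': 'cpu', 'shard_3': 'disk'}"
    ]
   },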
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "JuTrGGWg_zRY"
   },
   "source": [
    "### Load pushed model\n",
    "\n",
    "Load model from the directory you pushed, for me it is \"Prasant/Llama2-7b-qlora-chat-support-bot-faq\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 113,
     "referenced_widgets": [
      "25e4e057e547452fa7870203ecf304af",
      "9d48543aaf23498f85b8ad162ed21a51",
      "1227f4ab5ce745898d952c79afe2e118",
      "c51db07388824a71980ea8b0734639d8",
      "70c4820f69984344acfe5becc487057a",
      "78477d0ebafe4bbe8f3404d846f05a0d",
      "3696a81f9ea9425682caed310d9763c1",
      "8dff8fc4d7f44193bc48e6121c1a63c2",
      "752e1118c25e4b6db51049b679d851b6",
      "0bc37e819ce74a99afd7ce4ebb73c245",
      "0a2ba91f95e14c049d7616ed5cb0c73b",
      "804eec09657b479183c258825f10d02d",
      "f02fca9c66324f7ea611b29ddc862f75",
      "b4e8aece348f400089c3f37781c119d6",
      "fafb33d86b5a4da4ab37b121350cad8d",
      "1e0286d9297c4c859fbbce69bf123971",
      "1801009a0b30418d8fa4bc623cf290da",
      "174ea14f5eba4cfe8d68a26f7d239a08",
      "2134a7e4dca14686a564bd724efac364",
      "325bf139f1aa4dbc9bafe500c1f77172",
      "a16aa5dfef6049d5a3f71b7fc501ebb7",
      "d213614cb3d34bbc833327333a4e7964",
      "ee9f15735568419389482445f330a2cc",
      "c5f05746bd6e49df8035419a6e2d247b",
      "a81437a7cd1f44e693f9c43ded243d6b",
      "6c91ae9a068b4aed94df30c352825341",
      "0a34f2f771a947849b5e526a42584eb6",
      "1c1ab77f2f2b4560863ee8dd51add0cf",
      "335af2e8e0134e34b49328da24c73294",
      "61f6ab50052c475797c37936f5616ed0",
      "3e6d92c3b05d48fe8fc6f4a6da34e1ed",
      "5fae4a9789234b69bab42d72d5dd2136",
      "77884c522402433f95d1cda7d10c3f04"
     ]
    },
    "id": "Fq9phlfIQ4E5",
    "outputId": "69adcb7b-ee40-4834-adf4-2eca74ee5284"
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "25e4e057e547452fa7870203ecf304af",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "adapter_config.json:   0%|          | 0.00/608 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "804eec09657b479183c258825f10d02d",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/14 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "ee9f15735568419389482445f330a2cc",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "adapter_model.safetensors:   0%|          | 0.00/134M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "PEFT_MODEL = \"Prasant/Llama2-7b-qlora-chat-support-bot-faq\"\n",
    "\n",
    "# loading trained model from hugging face\n",
    "config = PeftConfig.from_pretrained(PEFT_MODEL)\n",
    "model = AutoModelForCausalLM.from_pretrained(\n",
    "    config.base_model_name_or_path,\n",
    "    return_dict=True,\n",
    "    quantization_config=bnb_config,\n",
    "    device_map=\"auto\",\n",
    "    trust_remote_code=True,\n",
    ")\n",
    "tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)\n",
    "tokenizer.pad_token = tokenizer.eos_token\n",
    "\n",
    "model = PeftModel.from_pretrained(model, PEFT_MODEL)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "LP_1aPCYYyvx"
   },
   "source": [
    "### Do experiments with parameters and see what works for you and your data best"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "O-umn8k_RZ7h"
   },
   "outputs": [],
   "source": [
    "# model configuration, you can try changing these parameters\n",
    "generation_config = model.generation_config\n",
    "generation_config.max_new_tokens = 50\n",
    "\n",
    "# try using temperature parameter by uncommenting following\n",
    "# generation_config.temperature = 0.3\n",
    "generation_config.top_p = 0.7\n",
    "generation_config.num_return_sequences = 1\n",
    "generation_config.pad_token_id = tokenizer.eos_token_id\n",
    "generation_config.eos_token_id = tokenizer.eos_token_id"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "dqP2fFuDRZ-o"
   },
   "outputs": [],
   "source": [
    "# device configuration\n",
    "DEVICE = \"cuda:0\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "TScya6W0RaDL",
    "outputId": "f4e599b4-9674-4f92-a2e7-7aabc66f28a3"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      ": How can I create an account?\n",
      ": To create an account, click on the 'Sign Up' button on the top right corner of the website. Follow the instructions to complete the registration process.\n",
      ": You can place an order by adding items to your shopping cart and proceeding to\n",
      "CPU times: user 4.37 s, sys: 252 ms, total: 4.62 s\n",
      "Wall time: 4.68 s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "prompt = f\"\"\"\n",
    ": How can I create an account?\n",
    ":\n",
    "\"\"\".strip()\n",
    "\n",
    "encoding = tokenizer(prompt, return_tensors=\"pt\").to(DEVICE)\n",
    "with torch.inference_mode():\n",
    "    outputs = model.generate(\n",
    "        input_ids=encoding.input_ids,\n",
    "        attention_mask=encoding.attention_mask,\n",
    "        generation_config=generation_config,\n",
    "    )\n",
    "print(tokenizer.decode(outputs[0], skip_special_tokens=True))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "6ZQc3XvhRaGa"
   },
   "outputs": [],
   "source": [
    "# helper function to generate responses\n",
    "def generate_response(question: str) -> str:\n",
    "    prompt = f\"\"\"\n",
    ": {question}\n",
    ":\n",
    "\"\"\".strip()\n",
    "    encoding = tokenizer(prompt, return_tensors=\"pt\").to(DEVICE)\n",
    "    with torch.inference_mode():\n",
    "        outputs = model.generate(\n",
    "            input_ids=encoding.input_ids,\n",
    "            attention_mask=encoding.attention_mask,\n",
    "            generation_config=generation_config,\n",
    "        )\n",
    "    response = tokenizer.decode(outputs[0], skip_special_tokens=True)\n",
    "\n",
    "    assistant_start = \":\"\n",
    "    response_start = response.find(assistant_start)\n",
    "    return response[response_start + len(assistant_start) :].strip()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "IXiY44KGRaJv",
    "outputId": "357e5eaa-f72c-4025-d8a7-6e739e70ad8a"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Question: Can I return a product if it was a clearance or final sale item?\n",
      ": Clearance or final sale items are typically non-returnable. Please refer to the product description or contact our customer support team for specific return instructions.\n",
      ": You can request a return by contacting our customer support team. We will provide you with\n"
     ]
    }
   ],
   "source": [
    "# prompt\n",
    "prompt = \"Question: Can I return a product if it was a clearance or final sale item?\"\n",
    "print(generate_response(prompt))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "yJE1BrB8RaM8",
    "outputId": "83ae00c0-bdff-40f0-ccfb-d37d6348cf61"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Question: What happens when I return a clearance item?\n",
      ": Clearance items are non-refundable and non-exchangeable. However, you can request a store credit for the full value of the item. Please contact our customer support team for assistance.\n",
      ": We accept returns within 30 days\n"
     ]
    }
   ],
   "source": [
    "# prompt\n",
    "prompt = \"Question: What happens when I return a clearance item?\"\n",
    "print(generate_response(prompt))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "sDkq0PbhRw-2",
    "outputId": "be1753a4-2e91-42fb-dac6-cae6830594d4"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Question: How do I know when I'll receive my order?\n",
      ": Once you place an order, we will send you a confirmation email with your order details and estimated delivery time. You can track your order's progress by logging into your account or checking your order confirmation email.\n",
      ": If you need to\n"
     ]
    }
   ],
   "source": [
    "# prompt\n",
    "prompt = \"Question: How do I know when I'll receive my order?\"\n",
    "print(generate_response(prompt))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "3L3UxKlVRaPy"
   },
   "outputs": [],
   "source": [
    "################ falcon with lama2\n",
    "# https://github.com/curiousily/Get-Things-Done-with-Prompt-Engineering-and-LangChain/blob/master/07.falcon-qlora-fine-tuning.ipynb"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "fspW3KiWaaCO"
   },
   "source": [
    "## That's it; you can try to play with these hyperparameters to achieve better results 🎉\n",
    "\n",
    "If you liked this guide, do consider giving a 🌟 to LanceDB's [vector-recipes](https://github.com/lancedb/vectordb-recipes)"
   ]
  }
 ],
 "metadata": {
  "accelerator": "GPU",
  "colab": {
   "gpuType": "T4",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
