{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PnA661gpWtFy"
      },
      "source": [
        "## This Phi-3 Fine-Tuning Notebook provides instructions on how to:\n",
        "\n",
        "- Fine-tune the Phi-3 mini model using the QLoRA and LoRA techniques\n",
        "- Quantize the Phi-3 mini model with BitsandBytes and GPTQ for memory-efficient use\n",
        "- Run the Phi-3 mini model with Hugging Face's Transformers library\n",
        "\n",
        "Each section of this notebook is designed to run independently, so you can focus on the specific task you need.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Fine-Tuning Phi-3\n",
        "Welcome to this guide on fine-tuning the Phi-3 model. Phi-3 is a powerful language model developed by Microsoft, designed to generate human-like text from the input it receives. Fine-tuning is the process of training a pre-trained model such as Phi-3 on a specific task, letting it adapt its pre-learned knowledge to new tasks.\n",
        "\n",
        "In this guide, we walk you through the steps of fine-tuning the Phi-3 model. Fine-tuning can improve the model's performance on specific tasks or domains that its original training data did not cover.\n",
        "\n",
        "The process involves several steps: setting up the environment, loading the pre-trained model, preparing the training data, and finally training the model on the new data.\n",
        "\n",
        "By the end of this guide, you should have a good understanding of how to fine-tune Phi-3 for your specific needs. Let's get started!\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VKMVLej2Fx6T"
      },
      "source": [
        "# Inference\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0BJnJ48qOYSP"
      },
      "source": [
        "This section demonstrates how to run inference with the Phi-3 mini model, specifically the 16-bit version, using Hugging Face's Transformers library.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ydzzo_AwOhIg",
        "outputId": "9367c36e-ef45-4188-d99b-b9925bbb5fa0"
      },
      "outputs": [],
      "source": [
        "%%bash\n",
        "# The '%%bash' cell magic runs this cell in a bash shell; it must be the first line of the cell.\n",
        "# 'pip install -qqq' installs the packages with pip, Python's package installer, in a less verbose mode.\n",
        "# 'accelerate', 'transformers', 'auto-gptq', and 'optimum' are the packages being installed.\n",
        "# These packages are necessary for the fine-tuning and inference of the Phi-3 model.\n",
        "pip install -qqq accelerate transformers auto-gptq optimum"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-doiRu9FL4o5"
      },
      "source": [
        "Using the original model (16-bit version)\n",
        "\n",
        "It requires 7.4 GB of GPU memory.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 156,
          "referenced_widgets": [
            "d7d3fc8c16a844a7bdad297bc7a76546",
            "6a08690bc594415e91f0a5bc0148d093",
            "780bca400b83462f84fa965fc5aa21a6",
            "967d69ea7e61481281db28cbd45b897a",
            "f1bb4cba99c94580893910bea8fe3d40",
            "ef4c83fa2850481eb2c8bfc522a63401",
            "ed8e4ab7c0ad4e91a9dda4f493f85d4d",
            "8a7e6cafd50b4d88a3a85019af2b1561",
            "2e51fa700c1944edbb5fbce97de9092b",
            "348b7399643c433b900d92808b7de19a",
            "09c513ba74794a57a82a7f9ccb0042fa"
          ]
        },
        "id": "xBD2kd0wL4LL",
        "outputId": "3a246aa0-1c42-485a-bbd0-163b2064be0a"
      },
      "outputs": [],
      "source": [
        "# Import necessary libraries\n",
        "import torch\n",
        "from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed\n",
        "\n",
        "# Set a seed for reproducibility\n",
        "set_seed(2024)\n",
        "\n",
        "# Define the prompt for the model\n",
        "prompt = \"insert your prompt here\"\n",
        "\n",
        "# Define the model checkpoint; replace it with the Phi-3 variant you need\n",
        "model_checkpoint = \"microsoft/Phi-3-mini-4k-instruct\"\n",
        "\n",
        "# Load the tokenizer from the model checkpoint\n",
        "# trust_remote_code=True allows the execution of code from the model files\n",
        "tokenizer = AutoTokenizer.from_pretrained(model_checkpoint,trust_remote_code=True)\n",
        "\n",
        "# Load the model from the model checkpoint\n",
        "# trust_remote_code=True allows the execution of code from the model files\n",
        "# torch_dtype=\"auto\" automatically determines the appropriate torch.dtype\n",
        "# device_map=\"cuda\" specifies that the model should be loaded to the GPU\n",
        "model = AutoModelForCausalLM.from_pretrained(model_checkpoint,\n",
        "                                             trust_remote_code=True,\n",
        "                                             torch_dtype=\"auto\",\n",
        "                                             device_map=\"cuda\")\n",
        "\n",
        "# Tokenize the prompt and move the tensors to the GPU\n",
        "inputs = tokenizer(prompt,\n",
        "                   return_tensors=\"pt\").to(\"cuda\")\n",
        "\n",
        "# Generate a response from the model\n",
        "# do_sample=True means the model will generate text by sampling from the distribution of possible outputs\n",
        "# max_new_tokens=120 limits the length of the generated text to 120 tokens\n",
        "outputs = model.generate(**inputs,\n",
        "                         do_sample=True, max_new_tokens=120)\n",
        "\n",
        "# Decode the generated tokens and remove any special tokens\n",
        "response = tokenizer.decode(outputs[0], skip_special_tokens=True)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "q7IKK-0xHq6P",
        "outputId": "7b2b1bf1-1c79-48dd-ed8a-71502a1ff2f1"
      },
      "outputs": [],
      "source": [
        "# Print the generated response from the model\n",
        "print(response)"
      ]
    },
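    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The inference cell above passes the raw prompt straight to the tokenizer. Phi-3 instruct checkpoints are trained on a chat format, so as a sketch (assuming the checkpoint ships a chat template, which the Phi-3 instruct models do), the prompt can be wrapped with the tokenizer's `apply_chat_template` before generation:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Wrap the prompt in the chat format the instruct model was trained on.\n",
        "# This reuses the 'tokenizer' and 'model' loaded in the cells above.\n",
        "messages = [{\"role\": \"user\", \"content\": \"insert your prompt here\"}]\n",
        "\n",
        "# apply_chat_template renders the messages with the model's chat template and tokenizes them.\n",
        "# add_generation_prompt=True appends the assistant turn marker so the model starts replying.\n",
        "chat_inputs = tokenizer.apply_chat_template(messages,\n",
        "                                            add_generation_prompt=True,\n",
        "                                            return_tensors=\"pt\").to(\"cuda\")\n",
        "\n",
        "chat_outputs = model.generate(chat_inputs, do_sample=True, max_new_tokens=120)\n",
        "\n",
        "# Decode only the newly generated tokens, skipping the prompt portion.\n",
        "print(tokenizer.decode(chat_outputs[0][chat_inputs.shape[-1]:], skip_special_tokens=True))"
      ]
    },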
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ikXZjUiyKSSb"
      },
      "source": [
        "## Code Generation\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 121,
          "referenced_widgets": [
            "b1c02b6a844748d896c68d231e7b49ab",
            "1c91b47f9df94852972be0a7d0899233",
            "2cd72050e4a64e3d8a50af99463e9864",
            "2e2740c3ffc04be18f464b9a347edc7b",
            "72d86577268341dfa656fbabb7807e86",
            "7b659540e031480bad94e2ae0b709663",
            "2e8531f7923e40fea430a46f15da9020",
            "d18d5bea3fb246eaa8960bc14bd52ff8",
            "d9460fe3db8f400a9bdb24454516e1e1",
            "090c57feb7a1451fb14ebb4e3f7fbe22",
            "bc5dc14387824eafad12b9d8e64dde12"
          ]
        },
        "id": "p0kNIpmAKUBT",
        "outputId": "c049cbb2-4228-4c14-e8f8-86db949676ca"
      },
      "outputs": [],
      "source": [
        "# Import necessary libraries\n",
        "import torch\n",
        "from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed\n",
        "\n",
        "# Set a seed for reproducibility\n",
        "set_seed(2024)\n",
        "\n",
        "# Define the prompt for the model. In this case, the prompt is a request for C# code.\n",
        "prompt = \"Write a C# code that reads the content of multiple text files and save the result as CSV\"\n",
        "\n",
        "# Define the model checkpoint; replace it with the Phi-3 variant you need\n",
        "model_checkpoint = \"microsoft/Phi-3-mini-4k-instruct\"\n",
        "\n",
        "# Load the tokenizer from the model checkpoint\n",
        "# trust_remote_code=True allows the execution of code from the model files\n",
        "tokenizer = AutoTokenizer.from_pretrained(model_checkpoint,trust_remote_code=True)\n",
        "\n",
        "# Load the model from the model checkpoint\n",
        "# trust_remote_code=True allows the execution of code from the model files\n",
        "# torch_dtype=\"auto\" automatically determines the appropriate torch.dtype\n",
        "# device_map=\"cuda\" specifies that the model should be loaded to the GPU\n",
        "model = AutoModelForCausalLM.from_pretrained(model_checkpoint,\n",
        "                                             trust_remote_code=True,\n",
        "                                             torch_dtype=\"auto\",\n",
        "                                             device_map=\"cuda\")\n",
        "\n",
        "# Tokenize the prompt and move the tensors to the GPU\n",
        "inputs = tokenizer(prompt,\n",
        "                   return_tensors=\"pt\").to(\"cuda\")\n",
        "\n",
        "# Generate a response from the model\n",
        "# do_sample=True means the model will generate text by sampling from the distribution of possible outputs\n",
        "# max_new_tokens=200 limits the length of the generated text to 200 tokens\n",
        "outputs = model.generate(**inputs,\n",
        "                         do_sample=True, max_new_tokens=200)\n",
        "\n",
        "# Decode the generated tokens and remove any special tokens\n",
        "response = tokenizer.decode(outputs[0], skip_special_tokens=True)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Xi-o38J0KhKn",
        "outputId": "2d232ea7-f9d4-470f-c72c-2beb91f55062"
      },
      "outputs": [],
      "source": [
        "# Print the generated response from the model\n",
        "print(response)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eQE0EZNnF5BI"
      },
      "source": [
        "# Quantization\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7JeieNRaOR-p"
      },
      "source": [
        "When the Phi-3 model is fine-tuned with Hugging Face's Transformers and then quantized with 4-bit GPTQ, it requires 2.7 GB of GPU memory.\n",
        "\n",
        "\"Bitsandbytes NF4\" is a specific quantization configuration in the Bitsandbytes library. Quantization reduces the numerical precision of a model's weights to make the model smaller and faster.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Bitsandbytes NF4: load the model with its linear-layer weights quantized to\n",
        "# 4-bit NormalFloat (NF4), using bfloat16 for the matrix-multiply compute.\n",
        "from transformers import AutoModelForCausalLM, BitsAndBytesConfig\n",
        "import torch\n",
        "\n",
        "nf4_config = BitsAndBytesConfig(\n",
        "    load_in_4bit=True,\n",
        "    bnb_4bit_quant_type=\"nf4\",\n",
        "    bnb_4bit_compute_dtype=torch.bfloat16,\n",
        "    bnb_4bit_use_double_quant=True,\n",
        ")\n",
        "\n",
        "nf4_model = AutoModelForCausalLM.from_pretrained(\"microsoft/Phi-3-mini-4k-instruct\",\n",
        "                                                 quantization_config=nf4_config,\n",
        "                                                 trust_remote_code=True,\n",
        "                                                 device_map=\"cuda\")"
      ]
    },
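    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The 2.7 GB figure above refers to the 4-bit GPTQ version of the model. As a minimal sketch (assuming the 'auto-gptq' and 'optimum' packages installed earlier are available), Transformers' `GPTQConfig` can quantize the model at load time, using a calibration dataset such as 'c4':\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sketch: quantize Phi-3 mini to 4-bit GPTQ at load time.\n",
        "# Requires the 'auto-gptq' and 'optimum' packages; the calibration pass takes a while.\n",
        "from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig\n",
        "\n",
        "gptq_checkpoint = \"microsoft/Phi-3-mini-4k-instruct\"\n",
        "gptq_tokenizer = AutoTokenizer.from_pretrained(gptq_checkpoint, trust_remote_code=True)\n",
        "\n",
        "# bits=4 selects 4-bit quantization; the 'c4' dataset is used to calibrate the quantized weights.\n",
        "gptq_config = GPTQConfig(bits=4, dataset=\"c4\", tokenizer=gptq_tokenizer)\n",
        "\n",
        "gptq_model = AutoModelForCausalLM.from_pretrained(gptq_checkpoint,\n",
        "                                                  quantization_config=gptq_config,\n",
        "                                                  trust_remote_code=True,\n",
        "                                                  device_map=\"cuda\")"
      ]
    },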
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "TEOhsfGVSkRQ",
        "outputId": "f1ea90a2-7754-49d2-a565-774e4ffd58dc"
      },
      "outputs": [],
      "source": [
        "# This command is used to install and upgrade necessary Python packages using pip, Python's package installer.\n",
        "# The '!' at the beginning allows you to run shell commands in the notebook.\n",
        "# '-qqq' is used to make the installation process less verbose.\n",
        "# '--upgrade' ensures that if the packages are already installed, they are upgraded to the latest version.\n",
        "# 'transformers', 'bitsandbytes', 'accelerate', and 'datasets' are the packages being installed/upgraded.\n",
        "!pip install -qqq --upgrade transformers bitsandbytes accelerate datasets"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TcoqOxjgFKhH"
      },
      "source": [
        "# Phi-3 Fine-Tuning\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "k6_NXV4VbLpL",
        "outputId": "16d79b66-dbcb-4f0f-cbf9-e4cd52d98947"
      },
      "outputs": [],
      "source": [
        "%%bash\n",
        "# The '%%bash' cell magic runs this cell in a bash shell; it must be the first line of the cell.\n",
        "# 'pip -q install' installs Python packages in quiet mode, which reduces the output verbosity.\n",
        "# 'huggingface_hub', 'transformers', 'peft', and 'bitsandbytes' are installed by the first command.\n",
        "# These packages are necessary for the fine-tuning and inference of the Phi-3 model.\n",
        "# 'trl' (which provides SFTTrainer) and 'xformers' are installed by the second command.\n",
        "# 'datasets' provides access to a vast range of datasets and is installed by the third command.\n",
        "# The last command ensures 'torch' is at least version 1.10; the requirement is quoted so that\n",
        "# bash does not interpret '>=' as output redirection.\n",
        "pip -q install huggingface_hub transformers peft bitsandbytes\n",
        "pip -q install trl xformers\n",
        "pip -q install datasets\n",
        "pip install \"torch>=1.10\""
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3EAOk-wZbB3G"
      },
      "outputs": [],
      "source": [
        "# Import necessary modules from the transformers library\n",
        "# AutoModelForCausalLM: This is a class for causal language models. It's used for tasks like text generation.\n",
        "# AutoTokenizer: This class is used for tokenizing input data, a necessary step before feeding data into a model.\n",
        "# TrainingArguments: This class is used for defining the parameters for model training, like learning rate, batch size, etc.\n",
        "# BitsAndBytesConfig: This class is used for configuring the BitsAndBytes quantization process.\n",
        "from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, BitsAndBytesConfig\n",
        "\n",
        "# Import necessary modules from the huggingface_hub library\n",
        "# ModelCard: This class is used for creating a model card, which provides information about a model.\n",
        "# ModelCardData: This class is used for defining the data of a model card.\n",
        "# HfApi: This class provides an interface to the Hugging Face API, allowing you to interact with the Hugging Face Model Hub.\n",
        "from huggingface_hub import ModelCard, ModelCardData, HfApi\n",
        "\n",
        "# Import the load_dataset function from the datasets library. This function is used for loading datasets.\n",
        "from datasets import load_dataset\n",
        "\n",
        "# Import the Template class from the jinja2 library. This class is used for text templating.\n",
        "from jinja2 import Template\n",
        "\n",
        "# Import the SFTTrainer class from the trl library. This class is used for training models.\n",
        "from trl import SFTTrainer\n",
        "\n",
        "# Import the yaml module. This module is used for working with YAML files.\n",
        "import yaml\n",
        "\n",
        "# Import the torch library. This library provides tools for training and running deep learning models.\n",
        "import torch"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5keZmmhJbBzJ"
      },
      "outputs": [],
      "source": [
        "# MODEL_ID is a string that specifies the identifier of the pre-trained model that will be fine-tuned. \n",
        "# In this case, the model is 'Phi-3-mini-4k-instruct' from Microsoft.\n",
        "MODEL_ID = \"microsoft/Phi-3-mini-4k-instruct\"\n",
        "\n",
        "# NEW_MODEL_NAME is a string that specifies the name of the new model after fine-tuning.\n",
        "# Here, the new model will be named 'New-Model-phi-3-mini-4k'.\n",
        "NEW_MODEL_NAME = \"New-Model-phi-3-mini-4k\""
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "TtO_Q4OwbBvr"
      },
      "outputs": [],
      "source": [
        "# DATASET_NAME is a string that specifies the name of the dataset to be used for fine-tuning.\n",
        "# Replace \"replace with your dataset\" with the actual name of your dataset.\n",
        "DATASET_NAME = \"replace with your dataset\"\n",
        "\n",
        "# SPLIT specifies the portion of the dataset to be used. In this case, the 'train' split of the dataset will be used.\n",
        "SPLIT = \"train\"\n",
        "\n",
        "# MAX_SEQ_LENGTH is an integer that specifies the maximum length of the sequences that the model will handle.\n",
        "MAX_SEQ_LENGTH = 2048\n",
        "\n",
        "# num_train_epochs is an integer that specifies the number of times the training process will go through the entire dataset.\n",
        "num_train_epochs = 1\n",
        "\n",
        "# license is a string that specifies the license under which the model is distributed. In this case, it's Apache License 2.0.\n",
        "license = \"apache-2.0\"\n",
        "\n",
        "# username is a string that specifies the Hugging Face Hub username of the person who is fine-tuning the model.\n",
        "username = \"HuggingFaceUsername\"\n",
        "\n",
        "# learning_rate is a float that specifies the learning rate to be used during training.\n",
        "learning_rate = 1.41e-5\n",
        "\n",
        "# per_device_train_batch_size is an integer that specifies the number of samples to work through before updating the internal model parameters.\n",
        "per_device_train_batch_size = 4\n",
        "\n",
        "# gradient_accumulation_steps is an integer that specifies the number of steps to accumulate gradients before performing a backward/update pass.\n",
        "gradient_accumulation_steps = 1"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "R2tmmOeVdGez"
      },
      "outputs": [],
      "source": [
        "# This code checks if the current CUDA device supports bfloat16 (Brain Floating Point) computations.\n",
        "# If bfloat16 is supported, it sets the compute_dtype to torch.bfloat16.\n",
        "# If not, it sets the compute_dtype to torch.float16.\n",
        "# Both are half-precision formats, but bfloat16 keeps float32's exponent range, making it more numerically stable for training on hardware that supports it.\n",
        "if torch.cuda.is_bf16_supported():\n",
        "  compute_dtype = torch.bfloat16\n",
        "else:\n",
        "  compute_dtype = torch.float16"
      ]
    },
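    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The introduction mentions LoRA and QLoRA, and 'peft' is installed above, but the trainer below updates the full model. As a sketch, a `LoraConfig` from 'peft' could be passed to `SFTTrainer` through its `peft_config` argument so that only low-rank adapter weights are trained; the target module names below are an assumption and should be verified against the Phi-3 architecture:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sketch of a LoRA adapter configuration using the 'peft' library.\n",
        "from peft import LoraConfig\n",
        "\n",
        "peft_config = LoraConfig(\n",
        "    r=16,                # rank of the low-rank update matrices\n",
        "    lora_alpha=32,       # scaling factor applied to the adapter update\n",
        "    lora_dropout=0.05,   # dropout applied to the adapter inputs\n",
        "    task_type=\"CAUSAL_LM\",\n",
        "    # Assumed attention/MLP projection names for Phi-3; verify against the loaded model's modules.\n",
        "    target_modules=[\"qkv_proj\", \"o_proj\", \"gate_up_proj\", \"down_proj\"],\n",
        ")"
      ]
    },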
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 260,
          "referenced_widgets": [
            "3bfe80fa3d8744fb90fe08d5416b1bbc",
            "ae383e2094594370849cf3409ae56fb5",
            "1d4e2797a9964ed08c8e237c462e90ac",
            "a29197379b264fe7a9d644ab090d2f6c",
            "f631c0450ce14cdeb99d37bdb8c1a3c6",
            "f5402736c73142caaa323cd70aec8ae1",
            "029b6e32a4164189a09381268e8dc57f",
            "11f85ad9fe0b442282d468b1da8c0437",
            "15ec765469404365ae541c926ae917f8",
            "624eeb8629284a10bd98740121076553",
            "0e4e003ca04744dd87bc29a5c77c13a6"
          ]
        },
        "id": "D4KIQBMocYkS",
        "outputId": "98b75683-3a44-4642-c15b-b26f2a563b56"
      },
      "outputs": [],
      "source": [
        "# Load the pre-trained model specified by MODEL_ID using the AutoModelForCausalLM class.\n",
        "# The 'trust_remote_code=True' argument allows the execution of code from the model card (if any).\n",
        "model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)\n",
        "\n",
        "# Load the tokenizer associated with the pre-trained model specified by MODEL_ID using the AutoTokenizer class.\n",
        "# The 'trust_remote_code=True' argument allows the execution of code from the model card (if any).\n",
        "tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)\n",
        "\n",
        "# Load the dataset specified by DATASET_NAME using the load_dataset function.\n",
        "# The 'split=SPLIT' argument selects which split of the dataset to load ('train' here).\n",
        "dataset = load_dataset(DATASET_NAME, split=SPLIT)\n",
        "\n",
        "# Get the end-of-sequence (EOS) token string from the tokenizer and store it in EOS_TOKEN.\n",
        "# The token string (rather than its integer ID) is what gets appended to each formatted training example.\n",
        "EOS_TOKEN = tokenizer.eos_token"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "83DkqVQ5cipS",
        "outputId": "14f46138-6407-44c7-9787-e8d270f5c116"
      },
      "outputs": [],
      "source": [
        "# This line simply prints the contents of the 'dataset' variable.\n",
        "# 'dataset' is expected to be a Dataset object loaded from the 'datasets' library.\n",
        "# Printing it will display information about the dataset such as the number of samples, the features, and a few example data points.\n",
        "dataset"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "vU6t6xlJckts"
      },
      "outputs": [],
      "source": [
        "# Select a subset of the data for faster processing\n",
        "dataset = dataset.select(range(100))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "iyOxewwpcnlc",
        "outputId": "7eac88bf-4a5e-417c-ca16-696408e9a7c1"
      },
      "outputs": [],
      "source": [
        "# This line simply prints the contents of the 'dataset' variable.\n",
        "# 'dataset' is expected to be a Dataset object loaded from the 'datasets' library.\n",
        "# Printing it will display information about the dataset such as the number of samples, the features, and a few example data points.\n",
        "dataset"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "mLYxbQ7RcpUY",
        "outputId": "4c3c302b-607b-49d5-ff73-d57cb683a20e"
      },
      "outputs": [],
      "source": [
        "# Define a function to format the prompts in the dataset.\n",
        "# This function takes a batch of examples and returns a dictionary with the key 'text' and the value being a list of formatted texts.\n",
        "def formatting_prompts_func(examples):\n",
        "    # Extract the conversations from the examples.\n",
        "    convos = examples[\"conversations\"]\n",
        "    # Initialize an empty list to store the formatted texts.\n",
        "    texts = []\n",
        "    # Define a dictionary to map the 'from' field in the conversation to a prefix.\n",
        "    mapper = {\"system\": \"system\\n\", \"human\": \"\\nuser\\n\", \"gpt\": \"\\nassistant\\n\"}\n",
        "    # Define a dictionary to map the 'from' field in the conversation to a suffix.\n",
        "    end_mapper = {\"system\": \"\", \"human\": \"\", \"gpt\": \"\"}\n",
        "    # Iterate over each conversation.\n",
        "    for convo in convos:\n",
        "        # Format the conversation by joining each turn with its corresponding prefix and suffix.\n",
        "        # Append the EOS token to the end of the conversation.\n",
        "        text = \"\".join(f\"{mapper[(turn := x['from'])]} {x['value']}\\n{end_mapper[turn]}\" for x in convo)\n",
        "        texts.append(f\"{text}{EOS_TOKEN}\")\n",
        "    # Return the formatted texts.\n",
        "    return {\"text\": texts}\n",
        "\n",
        "# Apply the formatting function to the dataset using the map method.\n",
        "# The 'batched=True' argument means that the function is applied to batches of examples.\n",
        "dataset = dataset.map(formatting_prompts_func, batched=True)\n",
        "\n",
        "# Print the 9th example from the 'text' field of the dataset to check the result.\n",
        "print(dataset['text'][8])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "124j2HGTctc7"
      },
      "outputs": [],
      "source": [
        "# Create a TrainingArguments object, which is used to define the parameters for model training.\n",
        "\n",
        "args = TrainingArguments(\n",
        "    # 'evaluation_strategy' is set to \"no\" because no evaluation dataset is passed to the trainer;\n",
        "    # with \"steps\" and no eval dataset, the Trainer would raise an error at initialization.\n",
        "    evaluation_strategy=\"no\",\n",
        "\n",
        "    # 'per_device_train_batch_size' is set to 7, which means each training batch will contain 7 samples per device.\n",
        "    per_device_train_batch_size=7,\n",
        "\n",
        "    # 'gradient_accumulation_steps' is set to 4, which means gradients are accumulated for 4 steps before performing a backward/update pass.\n",
        "    gradient_accumulation_steps=4,\n",
        "\n",
        "    # 'gradient_checkpointing' is set to True, which recomputes activations during the backward pass instead of storing them, trading extra compute for lower memory usage.\n",
        "    gradient_checkpointing=True,\n",
        "\n",
        "    # 'learning_rate' is set to 1e-4, which is the learning rate for the optimizer.\n",
        "    learning_rate=1e-4,\n",
        "\n",
        "    # 'fp16' is set to True if bfloat16 is not supported, which means the model will use 16-bit floating point precision for training if possible.\n",
        "    fp16 = not torch.cuda.is_bf16_supported(),\n",
        "\n",
        "    # 'bf16' is set to True if bfloat16 is supported, which means the model will use bfloat16 precision for training if possible.\n",
        "    bf16 = torch.cuda.is_bf16_supported(),\n",
        "\n",
        "    # 'max_steps' is set to -1, which means the training length is controlled by 'num_train_epochs' rather than a fixed step count.\n",
        "    max_steps=-1,\n",
        "\n",
        "    # 'num_train_epochs' is set to 3, which means the training process will go through the entire dataset 3 times.\n",
        "    num_train_epochs=3,\n",
        "\n",
        "    # 'save_strategy' is set to \"epoch\", which means the model is saved at the end of each epoch.\n",
        "    save_strategy=\"epoch\",\n",
        "\n",
        "    # 'logging_steps' is set to 10, which means logging is done every 10 steps.\n",
        "    logging_steps=10,\n",
        "\n",
        "    # 'output_dir' is set to NEW_MODEL_NAME, which is the directory where the model and its configuration will be saved.\n",
        "    output_dir=NEW_MODEL_NAME,\n",
        "\n",
        "    # 'optim' is set to \"paged_adamw_32bit\", which is the optimizer to be used for training.\n",
        "    optim=\"paged_adamw_32bit\",\n",
        "\n",
        "    # 'lr_scheduler_type' is set to \"linear\", which means the learning rate scheduler type is linear.\n",
        "    lr_scheduler_type=\"linear\"\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "sNn4_hJDCUxd",
        "outputId": "9fa5170f-0e86-4676-d80c-a56886872112"
      },
      "outputs": [],
      "source": [
        "# Create an instance of the SFTTrainer class, which is used to fine-tune the model.\n",
        "\n",
        "trainer = SFTTrainer(\n",
        "    # 'model' is the pre-trained model that will be fine-tuned.\n",
        "    model=model,\n",
        "\n",
        "    # 'args' are the training arguments that specify the training parameters.\n",
        "    args=args,\n",
        "\n",
        "    # 'train_dataset' is the dataset that will be used for training.\n",
        "    train_dataset=dataset,\n",
        "\n",
        "    # 'dataset_text_field' is the key in the dataset that contains the text data.\n",
        "    dataset_text_field=\"text\",\n",
        "\n",
        "    # 'max_seq_length' is the maximum sequence length the trainer will handle; a small value (128) is used here for faster training, at the cost of truncating longer examples.\n",
        "    max_seq_length=128,\n",
        "\n",
        "    # 'formatting_func' is the function that will be used to format the prompts in the dataset.\n",
        "    formatting_func=formatting_prompts_func\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "_bMqB8QQEdJa"
      },
      "outputs": [],
      "source": [
        "# 'device' is set to 'cuda', which means the CUDA device will be used for computations if available.\n",
        "device = 'cuda'\n",
        "\n",
        "# Import the 'gc' module, which provides an interface to the garbage collector.\n",
        "import gc\n",
        "\n",
        "# Import the 'os' module, which provides a way of using operating system dependent functionality.\n",
        "import os\n",
        "\n",
        "# Call the 'collect' method of the 'gc' module to start a garbage collection, which can help free up memory.\n",
        "gc.collect()\n",
        "\n",
        "# Call the 'empty_cache' method of 'torch.cuda' to release all unused cached memory from PyTorch so that it can be used by other GPU applications.\n",
        "torch.cuda.empty_cache()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 269
        },
        "id": "inuNQbzKcwiB",
        "outputId": "3f5db482-9ba1-4e3f-b52c-ffe1e61667dc"
      },
      "outputs": [],
      "source": [
        "# Call the 'train' method of the 'trainer' object to start the training process.\n",
        "# This method will fine-tune the model on the training dataset according to the parameters specified in the 'args' object.\n",
        "trainer.train()"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "gpuType": "A100",
      "machine_shape": "hm",
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    },
    "widgets": {
      "application/vnd.jupyter.widget-state+json": {
        "029b6e32a4164189a09381268e8dc57f": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "090c57feb7a1451fb14ebb4e3f7fbe22": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "09c513ba74794a57a82a7f9ccb0042fa": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "0e4e003ca04744dd87bc29a5c77c13a6": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "11f85ad9fe0b442282d468b1da8c0437": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "15ec765469404365ae541c926ae917f8": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "ProgressStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "ProgressStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "bar_color": null,
            "description_width": ""
          }
        },
        "1c91b47f9df94852972be0a7d0899233": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_7b659540e031480bad94e2ae0b709663",
            "placeholder": "​",
            "style": "IPY_MODEL_2e8531f7923e40fea430a46f15da9020",
            "value": "Loading checkpoint shards: 100%"
          }
        },
        "1d4e2797a9964ed08c8e237c462e90ac": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "FloatProgressModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "FloatProgressModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "ProgressView",
            "bar_style": "success",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_11f85ad9fe0b442282d468b1da8c0437",
            "max": 2,
            "min": 0,
            "orientation": "horizontal",
            "style": "IPY_MODEL_15ec765469404365ae541c926ae917f8",
            "value": 2
          }
        },
        "2cd72050e4a64e3d8a50af99463e9864": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "FloatProgressModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "FloatProgressModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "ProgressView",
            "bar_style": "success",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_d18d5bea3fb246eaa8960bc14bd52ff8",
            "max": 2,
            "min": 0,
            "orientation": "horizontal",
            "style": "IPY_MODEL_d9460fe3db8f400a9bdb24454516e1e1",
            "value": 2
          }
        },
        "2e2740c3ffc04be18f464b9a347edc7b": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_090c57feb7a1451fb14ebb4e3f7fbe22",
            "placeholder": "​",
            "style": "IPY_MODEL_bc5dc14387824eafad12b9d8e64dde12",
            "value": " 2/2 [00:03&lt;00:00,  1.43s/it]"
          }
        },
        "2e51fa700c1944edbb5fbce97de9092b": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "ProgressStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "ProgressStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "bar_color": null,
            "description_width": ""
          }
        },
        "2e8531f7923e40fea430a46f15da9020": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "348b7399643c433b900d92808b7de19a": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "3bfe80fa3d8744fb90fe08d5416b1bbc": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HBoxModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HBoxModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HBoxView",
            "box_style": "",
            "children": [
              "IPY_MODEL_ae383e2094594370849cf3409ae56fb5",
              "IPY_MODEL_1d4e2797a9964ed08c8e237c462e90ac",
              "IPY_MODEL_a29197379b264fe7a9d644ab090d2f6c"
            ],
            "layout": "IPY_MODEL_f631c0450ce14cdeb99d37bdb8c1a3c6"
          }
        },
        "624eeb8629284a10bd98740121076553": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "6a08690bc594415e91f0a5bc0148d093": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_ef4c83fa2850481eb2c8bfc522a63401",
            "placeholder": "​",
            "style": "IPY_MODEL_ed8e4ab7c0ad4e91a9dda4f493f85d4d",
            "value": "Loading checkpoint shards: 100%"
          }
        },
        "72d86577268341dfa656fbabb7807e86": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "780bca400b83462f84fa965fc5aa21a6": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "FloatProgressModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "FloatProgressModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "ProgressView",
            "bar_style": "success",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_8a7e6cafd50b4d88a3a85019af2b1561",
            "max": 2,
            "min": 0,
            "orientation": "horizontal",
            "style": "IPY_MODEL_2e51fa700c1944edbb5fbce97de9092b",
            "value": 2
          }
        },
        "7b659540e031480bad94e2ae0b709663": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "8a7e6cafd50b4d88a3a85019af2b1561": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "967d69ea7e61481281db28cbd45b897a": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_348b7399643c433b900d92808b7de19a",
            "placeholder": "​",
            "style": "IPY_MODEL_09c513ba74794a57a82a7f9ccb0042fa",
            "value": " 2/2 [00:03&lt;00:00,  1.42s/it]"
          }
        },
        "a29197379b264fe7a9d644ab090d2f6c": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_624eeb8629284a10bd98740121076553",
            "placeholder": "​",
            "style": "IPY_MODEL_0e4e003ca04744dd87bc29a5c77c13a6",
            "value": " 2/2 [00:37&lt;00:00, 18.01s/it]"
          }
        },
        "ae383e2094594370849cf3409ae56fb5": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_f5402736c73142caaa323cd70aec8ae1",
            "placeholder": "​",
            "style": "IPY_MODEL_029b6e32a4164189a09381268e8dc57f",
            "value": "Loading checkpoint shards: 100%"
          }
        },
        "b1c02b6a844748d896c68d231e7b49ab": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HBoxModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HBoxModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HBoxView",
            "box_style": "",
            "children": [
              "IPY_MODEL_1c91b47f9df94852972be0a7d0899233",
              "IPY_MODEL_2cd72050e4a64e3d8a50af99463e9864",
              "IPY_MODEL_2e2740c3ffc04be18f464b9a347edc7b"
            ],
            "layout": "IPY_MODEL_72d86577268341dfa656fbabb7807e86"
          }
        },
        "bc5dc14387824eafad12b9d8e64dde12": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "d18d5bea3fb246eaa8960bc14bd52ff8": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "d7d3fc8c16a844a7bdad297bc7a76546": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HBoxModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HBoxModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HBoxView",
            "box_style": "",
            "children": [
              "IPY_MODEL_6a08690bc594415e91f0a5bc0148d093",
              "IPY_MODEL_780bca400b83462f84fa965fc5aa21a6",
              "IPY_MODEL_967d69ea7e61481281db28cbd45b897a"
            ],
            "layout": "IPY_MODEL_f1bb4cba99c94580893910bea8fe3d40"
          }
        },
        "d9460fe3db8f400a9bdb24454516e1e1": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "ProgressStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "ProgressStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "bar_color": null,
            "description_width": ""
          }
        },
        "ed8e4ab7c0ad4e91a9dda4f493f85d4d": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "ef4c83fa2850481eb2c8bfc522a63401": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "f1bb4cba99c94580893910bea8fe3d40": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "f5402736c73142caaa323cd70aec8ae1": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "f631c0450ce14cdeb99d37bdb8c1a3c6": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        }
      }
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}