{
  "cells": [
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "grcCpefeV-vG"
      },
      "source": [
        "# Memory Tuning Example 🦙\n",
        "Memory Tuning is a powerful feature offered by Lamini that enables effective fact memorization. While it's user-friendly, the process requires iteration. In this example, we'll walk you through the process of creating a model that can accurately answer questions about [Lamini's Python library](https://github.com/lamini-ai/lamini/blob/c1d52643dd129841bf508c1fb09721cafebeab67/lamini/api/lamini.py). You are encouraged to apply it to your own use case and data as well. Here are the steps we'll cover for effective memory tuning:\n",
        "\n",
        "\n",
        "1. Create an evaluation set\n",
        "1. Benchmark the base model\n",
        "1. Construct a dataset\n",
        "1. Review the generated Q&A\n",
        "1. Run a tuning job\n",
        "1. Evaluate using the evaluation set\n",
        "1. Update Q&A generation\n",
        "1. Repeat steps 4-7\n",
        "1. Expand the dataset\n",
        "\n",
        "\n"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "fbgy7Oc_WKve"
      },
      "source": [
        "## Create an evaluation set 🖌\n",
        "\n",
        "Before we dive in, it's essential to clarify what we want the model to learn and the specific improvements we aim for. One effective approach is to create an \"evaluation set\" or a \"golden set.\" In this example, our goal is to develop a model that can accurately answer questions about Lamini's Python library.\n",
        "\n",
        "Below are some questions, along with their ideal answers, arranged by increasing complexity:\n",
        "\n",
        "| Question | Gold Answer |\n",
        "| :-- | :-- |\n",
        "| What type is model_name? | A string. |\n",
        "| What does upload_data return? | The dataset ID, a string.|\n",
        "| What is the difference between .tune and .train? | They are aliases to each other, i.e. the same. Both functions will tune a model. |\n",
        "| What arguments are required to make an inference call to Lamini? | To make a call to Lamini, you need to instantiate Lamini with a model name. Then, the generate call only needs a prompt to run inference on the set model. |\n",
        "\n"
      ]
    },
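    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a minimal sketch, the evaluation set can be kept as plain data so any model's answers are easy to score against the gold answers later. The `eval_set` structure below is our own convention, not part of the Lamini SDK:\n",
        "\n",
        "```python\n",
        "# Keep the evaluation set as plain data: question/gold-answer pairs.\n",
        "# This structure is our own convention, not part of the Lamini SDK.\n",
        "eval_set = [\n",
        "    {\"question\": \"What type is model_name?\",\n",
        "     \"gold\": \"A string.\"},\n",
        "    {\"question\": \"What does upload_data return?\",\n",
        "     \"gold\": \"The dataset ID, a string.\"},\n",
        "    {\"question\": \"What is the difference between .tune and .train?\",\n",
        "     \"gold\": \"They are aliases to each other, i.e. the same. \"\n",
        "             \"Both functions will tune a model.\"},\n",
        "    {\"question\": \"What arguments are required to make an inference call to Lamini?\",\n",
        "     \"gold\": \"To make a call to Lamini, you need to instantiate Lamini with a model name. \"\n",
        "             \"Then, the generate call only needs a prompt to run inference on the set model.\"},\n",
        "]\n",
        "\n",
        "print(f\"{len(eval_set)} evaluation questions\")\n",
        "```"
      ]
    },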
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "RNrgb2rQV1vv"
      },
      "source": [
        "## Benchmark 🪑\n",
        "\n",
        "Since our goal is to improve a base model, let's first evaluate how the base model performs on our evaluation set. In this example, we will be using [Llama 3.1](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) as our base model.\n",
        "\n",
        "![Base Llama Response](https://drive.google.com/uc?export=view&id=140NIXK2dkHVyd_7jycPRMy-E6bJLLhtO)\n",
        "\n",
        "As expected, the base model struggled with most of the responses, given that the questions are tailored to our specific use case and aren't part of the model's training data.\n",
        "\n",
        "Here are the results for our entire evaluation set:\n",
        "\n",
        "| Question | Llama 3.1 Answer | Gold Answer |\n",
        "| :-- | :-- | :-- |\n",
        "| What type is model_name? ✅ | In the Lamini Python API, `model_name` is a string. It represents the name of a model, which can be used to identify and load a specific model from the Lamini model registry. | A string. |\n",
        "| What does upload_data return? ❌ | In the Lamini Python API, the `upload_data` function is used to upload data to a Lamini instance. The `upload_data` function returns a dictionary containing the response from the Lamini server, which includes the ID of the uploaded data. | The dataset ID, a string. |\n",
        "| What is the difference between .tune and .train? ❌ | In the Lamini Python API, `.tune` and `.train` are two distinct methods used for hyperparameter tuning and model training, respectively. While they share some similarities, they serve different purposes and have different behaviors. | They are aliases to each other, i.e. the same. Both functions will tune a model. |\n",
        "| What arguments are required to make an inference call to Lamini? ❌ | I don't have the information about the Lamini Python API. | To make a call to Lamini, you need to instantiate Lamini with a model name. Then, the generate call only needs a prompt to run inference on the set model. |\n",
        "\n",
        "<details>\n",
        "  <summary>Generation details</summary>\n",
        "  We generated these responses using <a href=\"https://app.lamini.ai/playground\" target=\"_blank\">the Lamini playground</a>, with the following System Prompt:\n",
        "  \"You are an expert on Lamini. Answer the following question about the Lamini Python API.\" 🦙\n",
        "</details>\n",
        "\n",
        "But don’t worry—we can only improve from here! 🫡\n",
        "\n",
        "| Model | Score |\n",
        "| :-- | :-- |\n",
        "| Llama 3.1 | 1/4 --> 25% |"
      ]
    },
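    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The playground handled the prompt template for us. If you'd rather script the benchmark, here is a hedged sketch that only builds the prompts (pure string work, no API call), assuming the Llama 3.1 chat-template tags from the model card and the system prompt quoted above:\n",
        "\n",
        "```python\n",
        "# Sketch: build a benchmark prompt per eval question with the Llama 3.1\n",
        "# chat-template tags. The system prompt matches the playground run above;\n",
        "# grading each answer against the gold answer is still done by hand.\n",
        "SYSTEM = (\"You are an expert on Lamini. \"\n",
        "          \"Answer the following question about the Lamini Python API.\")\n",
        "\n",
        "def make_eval_prompt(question):\n",
        "    return (\n",
        "        \"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\"\n",
        "        f\"{SYSTEM}<|eot_id|>\"\n",
        "        \"<|start_header_id|>user<|end_header_id|>\"\n",
        "        f\"{question}<|eot_id|>\"\n",
        "        \"<|start_header_id|>assistant<|end_header_id|>\"\n",
        "    )\n",
        "\n",
        "prompt = make_eval_prompt(\"What type is model_name?\")\n",
        "# Each prompt can then be passed to llm.generate(prompt) and the answer\n",
        "# compared against the gold answer.\n",
        "```"
      ]
    },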
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "NLOr2doJpE-S"
      },
      "source": [
        "## Construct a tuning dataset 🎵\n",
        "\n",
        "For this example, we'll generate Q&A pairs from [our `lamini.py` code file](https://github.com/lamini-ai/lamini/blob/c1d52643dd129841bf508c1fb09721cafebeab67/lamini/api/lamini.py).\n",
        "\n",
        "Let's begin with a straightforward approach: we'll pass the entire file to an LLM and ask it to generate some questions and answers.\n",
        "\n",
        "<details>\n",
        "  <summary>Prompt templates</summary>\n",
        "  Meta Llama 3.1 Instruct uses a prompt template, with special tags indicating the user query and system prompt.\n",
        "You can find the documentation in this <a href=\"https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1\" target=\"_blank\">model card</a>.\n",
        "</details>"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "cJshb0UQuHGS",
        "outputId": "70999377-2355-42e8-8185-4d04545171a7"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Collecting lamini\n",
            "  Downloading lamini-3.1.1-9-py3-none-any.whl.metadata (1.6 kB)\n",
            "Collecting lamini-configuration[yaml] (from lamini)\n",
            "  Downloading lamini_configuration-0.8.3-py3-none-any.whl.metadata (11 kB)\n",
            "Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from lamini) (2.32.3)\n",
            "Requirement already satisfied: tqdm in /usr/local/lib/python3.10/dist-packages (from lamini) (4.66.5)\n",
            "Requirement already satisfied: numpy<2.0.0 in /usr/local/lib/python3.10/dist-packages (from lamini) (1.26.4)\n",
            "Collecting jsonlines (from lamini)\n",
            "  Downloading jsonlines-4.0.0-py3-none-any.whl.metadata (1.6 kB)\n",
            "Requirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from lamini) (2.2.2)\n",
            "Collecting azure-storage-blob (from lamini)\n",
            "  Downloading azure_storage_blob-12.23.1-py3-none-any.whl.metadata (26 kB)\n",
            "Requirement already satisfied: scikit-learn in /usr/local/lib/python3.10/dist-packages (from lamini) (1.5.2)\n",
            "Requirement already satisfied: aiohttp in /usr/local/lib/python3.10/dist-packages (from lamini) (3.10.8)\n",
            "Requirement already satisfied: aiohappyeyeballs>=2.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->lamini) (2.4.3)\n",
            "Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp->lamini) (1.3.1)\n",
            "Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->lamini) (24.2.0)\n",
            "Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp->lamini) (1.4.1)\n",
            "Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp->lamini) (6.1.0)\n",
            "Requirement already satisfied: yarl<2.0,>=1.12.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->lamini) (1.13.1)\n",
            "Requirement already satisfied: async-timeout<5.0,>=4.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->lamini) (4.0.3)\n",
            "Collecting azure-core>=1.30.0 (from azure-storage-blob->lamini)\n",
            "  Downloading azure_core-1.31.0-py3-none-any.whl.metadata (39 kB)\n",
            "Requirement already satisfied: cryptography>=2.1.4 in /usr/local/lib/python3.10/dist-packages (from azure-storage-blob->lamini) (43.0.1)\n",
            "Requirement already satisfied: typing-extensions>=4.6.0 in /usr/local/lib/python3.10/dist-packages (from azure-storage-blob->lamini) (4.12.2)\n",
            "Collecting isodate>=0.6.1 (from azure-storage-blob->lamini)\n",
            "  Downloading isodate-0.6.1-py2.py3-none-any.whl.metadata (9.6 kB)\n",
            "Requirement already satisfied: pyyaml<7.0,>=6.0 in /usr/local/lib/python3.10/dist-packages (from lamini-configuration[yaml]->lamini) (6.0.2)\n",
            "Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.10/dist-packages (from pandas->lamini) (2.8.2)\n",
            "Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->lamini) (2024.2)\n",
            "Requirement already satisfied: tzdata>=2022.7 in /usr/local/lib/python3.10/dist-packages (from pandas->lamini) (2024.2)\n",
            "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->lamini) (3.3.2)\n",
            "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->lamini) (3.10)\n",
            "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->lamini) (2.2.3)\n",
            "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->lamini) (2024.8.30)\n",
            "Requirement already satisfied: scipy>=1.6.0 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->lamini) (1.13.1)\n",
            "Requirement already satisfied: joblib>=1.2.0 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->lamini) (1.4.2)\n",
            "Requirement already satisfied: threadpoolctl>=3.1.0 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->lamini) (3.5.0)\n",
            "Requirement already satisfied: six>=1.11.0 in /usr/local/lib/python3.10/dist-packages (from azure-core>=1.30.0->azure-storage-blob->lamini) (1.16.0)\n",
            "Requirement already satisfied: cffi>=1.12 in /usr/local/lib/python3.10/dist-packages (from cryptography>=2.1.4->azure-storage-blob->lamini) (1.17.1)\n",
            "Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.12->cryptography>=2.1.4->azure-storage-blob->lamini) (2.22)\n",
            "Downloading lamini-3.1.1-9-py3-none-any.whl (693 kB)\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m693.9/693.9 kB\u001b[0m \u001b[31m28.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hDownloading azure_storage_blob-12.23.1-py3-none-any.whl (405 kB)\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m405.6/405.6 kB\u001b[0m \u001b[31m26.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hDownloading jsonlines-4.0.0-py3-none-any.whl (8.7 kB)\n",
            "Downloading azure_core-1.31.0-py3-none-any.whl (197 kB)\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m197.4/197.4 kB\u001b[0m \u001b[31m15.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hDownloading isodate-0.6.1-py2.py3-none-any.whl (41 kB)\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m41.7/41.7 kB\u001b[0m \u001b[31m3.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hDownloading lamini_configuration-0.8.3-py3-none-any.whl (22 kB)\n",
            "Installing collected packages: lamini-configuration, jsonlines, isodate, azure-core, azure-storage-blob, lamini\n",
            "Successfully installed azure-core-1.31.0 azure-storage-blob-12.23.1 isodate-0.6.1 jsonlines-4.0.0 lamini-3.1.1 lamini-configuration-0.8.3\n",
            "\n",
            "\n",
            "Here are some questions and answers for the provided code:\n",
            "\n",
            "**Q: What is the purpose of the Lamini class?**\n",
            "A: The Lamini class is the main interface for the Lamini platform, providing functionality for generation, data upload, training, and evaluation.\n",
            "\n",
            "**Q: What are the key features of the Lamini class?**\n",
            "A: The key features of the Lamini class are:\n",
            "\n",
            "1. Generation calls\n",
            "2. Data Upload/Downloading\n",
            "3. Training orchestration\n",
            "4. Evaluation\n",
            "\n",
            "**Q: What is the `version` method used for?**\n",
            "A: The `version` method is used to get the version of the Lamini platform.\n",
            "\n",
            "**Q: What is the `generate` method used for?**\n",
            "A: The `generate` method is used to make a generation request to the LLM with the provided prompt.\n",
            "\n",
            "**Q: What is the `async_generate` method used for?**\n",
            "A: The `async_generate` method is used to make an asynchronous generation request to the LLM with the provided prompt.\n",
            "\n",
            "**Q: What is the `upload_data` method used for?**\n",
            "A: The `upload_data` method is used to upload the provided data to the Lamini Platform.\n",
            "\n",
            "**Q: What is the `upload_file` method used for?**\n",
            "A: The `upload_file` method is used to upload a provided file to the Lamini Platform.\n",
            "\n",
            "**Q: What is the `train` method used for?**\n",
            "A: The `train` method is used to submit a training job request to the platform using the provided data.\n",
            "\n",
            "**Q: What is the `train_and_wait` method used for?**\n",
            "A: The `train_and_wait` method is used to submit a training job request to the platform using the provided data and continuously poll until the job is completed.\n",
            "\n",
            "**Q: What is the `cancel_job` method used for?**\n",
            "A: The `cancel_job` method is used to cancel a job specified by the id.\n",
            "\n",
            "**Q: What is the `cancel_all_jobs` method used for?**\n",
            "A: The `cancel_all_jobs` method is used to cancel all jobs from this user on the platform.\n",
            "\n",
            "**Q: What is the `resume_job` method used for?**\n",
            "A: The `resume_job` method is used to resume a specific job on the Lamini platform.\n",
            "\n",
            "**Q: What is the `check_job_status` method used for?**\n",
            "A: The `check_job_status` method is used to check the status of a specific job on the Lamini platform.\n",
            "\n",
            "**Q\n"
          ]
        }
      ],
      "source": [
        "!pip install lamini\n",
        "import lamini\n",
        "\n",
        "# Get your API key from https://app.lamini.ai/account\n",
        "lamini.api_key = \"<YOUR-LAMINI-API-KEY>\"\n",
        "llm = lamini.Lamini(model_name=\"meta-llama/Meta-Llama-3.1-8B-Instruct\")\n",
        "\n",
        "with open('lamini_file.py', 'r') as file:\n",
        "  code_file = file.read()\n",
        "\n",
        "prompt = f\"Generate some questions and answers for this code: {code_file}\"\n",
        "# Apply prompt template\n",
        "formatted_prompt = f\"<|begin_of_text|><|start_header_id|>user<|end_header_id|>{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\"\n",
        "response = llm.generate(formatted_prompt)\n",
        "print(response)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "fYC1lo3H_gv8"
      },
      "source": [
        "Not bad! Let's use JSON output so we won't have to worry about parsing the response."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 19,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "wNotqcwY_mT4",
        "outputId": "3048d4b8-b1e0-4e5e-fe13-657259a667db"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "{'question1': 'What are the arguments required to make an inference call to Lamini', 'answer1': 'The arguments required to make an inference call to Lamini are:', 'question2': 'What are the arguments required to make an inference call to Lamini?', 'answer2': 'The arguments required to make an inference call to Lamini are:', 'question3': 'What are the arguments required to make an inference call to Lamini?', 'answer3': 'The arguments required to make an inference call to Lamini are:'}\n"
          ]
        }
      ],
      "source": [
        "json_response = llm.generate(formatted_prompt, output_type={\n",
        "    \"question1\": \"str\",\n",
        "    \"answer1\": \"str\",\n",
        "    \"question2\": \"str\",\n",
        "    \"answer2\": \"str\",\n",
        "    \"question3\": \"str\",\n",
        "    \"answer3\": \"str\"\n",
        "})\n",
        "print(json_response)"
      ]
    },
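    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A structured response like this is easy to flatten into input/output pairs, the shape `upload_data` accepts for tuning data. A minimal sketch, assuming the `question1`/`answer1` key numbering from the `output_type` above (`to_pairs` is a hypothetical helper of our own, not part of the SDK):\n",
        "\n",
        "```python\n",
        "# Flatten numbered question/answer keys into {\"input\", \"output\"} pairs.\n",
        "# Assumes the question1/answer1 numbering used in the output_type above;\n",
        "# to_pairs is our own helper, not part of the Lamini SDK.\n",
        "def to_pairs(json_response):\n",
        "    pairs = []\n",
        "    i = 1\n",
        "    while f\"question{i}\" in json_response and f\"answer{i}\" in json_response:\n",
        "        pairs.append({\"input\": json_response[f\"question{i}\"],\n",
        "                      \"output\": json_response[f\"answer{i}\"]})\n",
        "        i += 1\n",
        "    return pairs\n",
        "\n",
        "print(to_pairs({\"question1\": \"What is the `generate` method used for?\",\n",
        "                \"answer1\": \"It makes a generation request to the LLM.\"}))\n",
        "```"
      ]
    },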
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "B-0FbrM1AotR"
      },
      "source": [
        "## Review your tuning dataset 🔎\n",
        "Even if you're using your own data while following this notebook, it's important to review it closely. Understanding your data is crucial for tuning an accurate model—the quality of the input directly influences the quality of the output.\n",
        "\n",
        "![You and your data fusion](https://drive.google.com/uc?export=view&id=184mbTYK0DtBaNs4H2vZ6Ni370HVZn7BF)\n",
        "\n",
        "\n",
        "Now, let's take a look at the data we generated in the previous step:\n",
        "\n",
        "```\n",
        "{\n",
        "  'question1': 'What is the purpose of the Lamini class in this code',\n",
        "  'answer1': 'The Lamini class is the main interface for the Lamini platform functionality, providing methods for generation, data upload, training, and evaluation',\n",
        "  'question2': 'What is the `generate` method used for?',\n",
        "  'answer2': 'The `generate` method is used to make a generation request to the LLM with the provided prompt, allowing for various options such as specifying the model name, output type, and maximum tokens',\n",
        "  'question3': 'What is the `upload_data` method used for?',\n",
        "  'answer3': 'The `upload_data` method is used to upload the provided data to the Lamini Platform, allowing for the option to specify whether the dataset should be public or not'\n",
        "}\n",
        "\n",
        "```\n",
        "And our evaluation set:\n",
        "\n",
        "| Question | Gold Answer |\n",
        "| :-- | :-- |\n",
        "| What type is model_name? | A string. |\n",
        "| What does upload_data return? | The dataset ID, a string.|\n",
        "| What is the difference between .tune and .train? | They are aliases to each other, i.e. the same. Both functions will tune a model. |\n",
        "| What arguments are required to make an inference call to Lamini? | To make a call to Lamini, you need to instantiate Lamini with a model name. Then, the generate call only needs a prompt to run inference on the set model. |\n",
        "\n",
        "What stands out in our analysis? A few notable issues emerge when we compare the generated data with our evaluation set:\n",
        "1. The distribution and diversity of questions differ significantly; for instance, the generated Q&A primarily focuses on the \"purpose\" and the methods used.\n",
        "1. The generated data lacks coverage of key concepts from the evaluation set, such as parameter types and return types.\n",
        "\n",
        "![Eval set concepts](https://drive.google.com/uc?export=view&id=1GveYb9Nc9ZteoxJUDslthC6Gof_fnHr-)\n",
        "\n",
        "To address this, let's create a more diverse dataset that covers a broader range of the concepts we want the model to learn, using chunking and prompt tuning. You'll find that sometimes simply asking the model to be more diverse will work. Other times, the model will need more guidance to ensure specific concepts are adequately covered.\n",
        "\n",
        "<details>\n",
        "  <summary>Chunking</summary>\n",
        "  For simplicity, we use brute chunking in this example (i.e. dividing the code into chunks of 200 lines). If you notice that your generated data is incorrect or lacking key information, it's worth reviewing the input being sent to the model. In some cases, brute chunking can inadvertently cut off words or sentences, resulting in poor-quality data. To mitigate this, consider implementing smarter chunking strategies, such as chunking at the end of each paragraph or section, tailored to your specific data.\n",
        "</details>"
      ]
    },
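    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The brute chunking described above can be sketched in a few lines. `chunk_lines` is an illustrative helper of our own, not part of the Lamini SDK; smarter, boundary-aware splitting can drop in as a replacement:\n",
        "\n",
        "```python\n",
        "# Brute chunking sketch: fixed chunks of 200 lines each; the last chunk\n",
        "# may be shorter. chunk_lines is our own helper, not part of the SDK.\n",
        "def chunk_lines(lines, chunk_size=200):\n",
        "    return [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]\n",
        "\n",
        "# Demo on a synthetic 450-line file\n",
        "sample = [f\"line {i}\" for i in range(450)]\n",
        "print([len(c) for c in chunk_lines(sample)])  # [200, 200, 50]\n",
        "```"
      ]
    },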
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "a5gv9BN0N6sX",
        "outputId": "b22bb735-eced-450e-b97d-28993c33534f"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Generate three diverse questions and answers for this code: ['import json', 'import jsonlines', 'import logging', 'import os', 'import pandas as pd', 'import time', '', 'from lamini.api.lamini_config import get_config', 'from lamini.api.rest_requests import get_version', 'from lamini.api.train import Train', 'from lamini.api.utils.completion import Completion', 'from lamini.api.utils.upload_client import upload_to_blob', 'from lamini.error.error import (', '    DownloadingModelError,', ')', 'from typing import Dict, Iterable, List, Optional, Union, Any, Generator', '', 'logger = logging.getLogger(__name__)', '', '', 'class Lamini:', '    \"\"\"Main interface for Lamini platform functionality. Key features are:', '        1. Generation calls', '        2. Data Upload/Downloading', '        3. Training orchestration', '        4. Evaluation', '', '    Parameters', '    ----------', '    model_name: str = None', '        LLM hugging face ID', '', '    api_key: Optional[str]', '        Lamini platform API key, if not provided the key stored', '        within ~.lamini/configure.yaml will be used. If either', \"        don't exist then an error is raised.\", '', '    api_url: Optional[str]', '        Lamini platform api url, only needed if a different url is needed outside of the', '        defined ones here: https://github.com/lamini-ai/lamini-platform/blob/main/sdk/lamini/api/lamini_config.py#L68', '            i.e. localhost, staging.lamini.ai, or api.lamini.ai', '            Additionally, LLAMA_ENVIRONMENT can be set as an environment variable', '            that will be grabbed for the url before any of the above defaults', '    \"\"\"', '', '    def __init__(', '        self,', '        model_name: str,', '        api_key: Optional[str] = None,', '        api_url: Optional[str] = None,', '    ):', '        self.config = get_config()', '        self.model_name = model_name', '        self.api_key = api_key', '        self.api_url = api_url', '        self.completion = Completion(api_key, api_url)', '        self.trainer = Train(api_key, api_url)', '        self.upload_file_path = None', '        self.upload_base_path = None', '', '    def version(self) -> str:', '        \"\"\"Get the version of the Lamini platform', '', '        Parameters', '        ----------', '        None', '', '        Returns', '        -------', '        str', '            Returned version fo the platform', '        \"\"\"', '', '        return get_version(self.api_key, self.api_url, self.config)', '', '    def generate(', '        self,', '        prompt: Union[str, List[str]],', '        model_name: Optional[str] = None,', '        output_type: Optional[dict] = None,', '        max_tokens: Optional[int] = None,', '        max_new_tokens: Optional[int] = None,', '    ) -> Union[str, Dict[str, Any]]:', '        \"\"\"Generation request to the LLM with the provided prompt.', '        Model name will specify which LLM from hugging face to use.', '        Output type is used to handle structured output of the response.', '        max_tokens and max_new_tokens are related to the total amount of tokens', '        the model can use and generate. max_new_tokens is recommended to be used', '        over max_tokens to adjust model output.', '', '        Parameters', '        ----------', '        prompt: Union[str, List[str]]', '            Prompt to send to LLM', '', '        model_name: Optional[str] = None', '            Which model to use from hugging face', '', '        output_type: Optional[dict] = None', '            Structured output format', '', '        max_tokens: Optional[int] = None', \"            Max number of tokens for the model's generation\", '', '        max_new_tokens: Optional[int] = None', \"            Max number of new tokens from the model's generation\", '', '        Raises', '        ------', '        DownloadingModelError', '            Raised when an issue occurs with the model_name provided has failed to download', '', '        Returns', '        -------', '        result: Union[str, Dict[str, Any]]', '            Generated response from the LLM, strings are returned when output_type is not', '            specified, otherwise a dictionary matching the output_type is returned.', '        \"\"\"', '', '        result = None', '        try:', '            result = self.completion.generate(', '                prompt=prompt,', '                model_name=model_name or self.model_name,', '                output_type=output_type,', '                max_tokens=max_tokens,', '                max_new_tokens=max_new_tokens,', '            )', '        except DownloadingModelError as e:', '            return e', '        if output_type is None:', '            if isinstance(prompt, list):', '                result = [single_result[\"output\"] for single_result in result]', '            else:', '                result = result[\"output\"]', '        return result', '', '    async def async_generate(', '        self,', '        prompt: Union[str, List[str]],', '        model_name: Optional[str] = None,', '        output_type: Optional[dict] = None,', '        max_tokens: Optional[int] = None,', '        max_new_tokens: Optional[int] = None,', '    ):', '        \"\"\"Asynchronous call for a generation request to the LLM with the provided prompt.', '        Model name will specify which LLM from hugging face to use.', '        Output type is used to handle structured output of the response.', '        max_tokens and max_new_tokens are related to the total amount of tokens', '        the model can use and generate. max_new_tokens is recommended to be used', '        over max_tokens to adjust model output.', '', '        Parameters', '        ----------', '        prompt: Union[str, List[str]]', '            Prompt to send to LLM', '', '        model_name: Optional[str] = None', '            Which model to use from hugging face', '', '        output_type: Optional[dict] = None', '            Structured output format', '', '        max_tokens: Optional[int] = None', \"            Max number of tokens for the model's generation\", '', '        max_new_tokens: Optional[int] = None', \"            Max number of new tokens from the model's generation\", '', '        Raises', '        ------', '        DownloadingModelError', '            Raised when an issue occurs with the model_name provided has failed to download', '', '        Returns', '        -------', '        result: Union[str, Dict[str, Any]]', '            Generated response from the LLM, strings are returned when output_type is not', '            specified, otherwise a dictionary matching the output_type is returned.', '        \"\"\"', '', '        req_data = self.completion.make_llm_req_map(', '            prompt=prompt,', '            model_name=model_name or self.model_name,', '            output_type=output_type,', '            max_tokens=max_tokens,', '            max_new_tokens=max_new_tokens,', '        )', '        result = await self.completion.async_generate(req_data)', '        if output_type is None:', '            if isinstance(prompt, list):', '                result = [single_result[\"output\"] for single_result in result]', '            else:', '                result = result[\"output\"]', '        return result', '', '    def upload_data(', '        self,', '        data: Iterable[Dict[str, Union[int, float, str, bool, Dict, List]]],', '        is_public: Optional[bool] = None,']\n",
            "Generate three diverse questions and answers for this code: ['    ) -> str:', '        \"\"\"Upload the provide data to the Lamini Platform', '', '        Parameters', '        ----------', '        data: Iterable[Dict[str, Union[int, float, str, bool, Dict, List]]]', '            Data to upload', '', '        is_public: Optional[bool] = None', '            Flag to indicate if the platform should allow the dataset to be', '            publically shared.', '', '        Raises', '        ------', '        ValueError', '            Raised in data is None', '', '        Exception', '            Raised if there was a failure during upload', '', '        Returns', '        -------', '        str', '            Dataset designation within the platform', '        \"\"\"', '', '        num_datapoints = 0', '', '        def get_data_str(d):', '            nonlocal num_datapoints', '            for item in d:', '                num_datapoints += 1', '                yield json.dumps(item) + \"\\\\n\"', '', '        if not data:', '            raise ValueError(\"Data pairs cannot be empty.\")', '', '        output = self.trainer.get_upload_base_path()', '        self.upload_base_path = output[\"upload_base_path\"]', '', '        try:', '            if self.upload_base_path == \"azure\":', '                data_str = get_data_str(data)', '                response = self.trainer.create_blob_dataset_location(', '                    self.upload_base_path, is_public', '                )', '                self.upload_file_path = response[\"dataset_location\"]', '                upload_to_blob(data_str, self.upload_file_path)', '                self.trainer.update_blob_dataset_num_datapoints(', '                    response[\"dataset_id\"], num_datapoints', '                )', '                print(\"Data pairs uploaded to blob.\")', '            else:', '                response = self.trainer.upload_dataset_locally(', '                    self.upload_base_path, is_public, data', '                )', '                self.upload_file_path = response[\"dataset_location\"]', '                print(\"Data pairs uploaded to local.\")', '', '            print(', '                f\"\\\\nYour dataset id is: {response[\\'dataset_id\\']} . Consider using this in the future to train using the same data. \\\\nEg: \"', '                f\"llm.train(data_or_dataset_id=\\'{response[\\'dataset_id\\']}\\')\"', '            )', '', '        except Exception as e:', '            print(f\"Error uploading data pairs: {e}\")', '            raise e', '', '        return response[\"dataset_id\"]', '', '    def upload_file(', '        self, file_path: str, input_key: str = \"input\", output_key: str = \"output\"', '    ) -> None:', '        \"\"\"Upload a provided file to the Lamini Platform', '', '        Parameters', '        ----------', '        file_path: str', '            File path location to upload', '', '        input_key: str = \"input\"', '            Key of the json dictionary to use as the input', '', '        output_key: str = \"output\"', '            Key of the json dictionary to use as the output', '', '        Raises', '        ------', '        Exception', '            Raised if there is an issue with upload', '', '        Returns', '        -------', '        None', '        \"\"\"', '', '        items = self._upload_file_impl(file_path, input_key, output_key)', '        try:', '            dataset_id = self.upload_data(items)', '            return dataset_id', '        except Exception as e:', '            print(f\"Error reading data file: {e}\")', '            raise e', '', '    def _upload_file_impl(', '        self, file_path: str, input_key: str = \"input\", output_key: str = \"output\"', '    ) -> Generator[Dict[str, Any], None, None]:', '        \"\"\"Private function to handle file types and loading for upload_file', '', '        Parameters', '        ----------', '        file_path: str', '            File path location to upload', '', '        input_key: str = \"input\"', '            Key of the json dictionary to use as the input', '', '        output_key: str = \"output\"', '            Key of the json dictionary to use as the output', '', '        Raises', '        ------', '        ValueError', '            Raised if input_key is not within the file contents provided', '', '        KeyError', '            Raises if input_key or output_key is not within the file contents provided', '', '        Exception', '            If a file type outside of csv or jsonlines is provided', '', '        Yields', '        -------', '        items: Dict[str, Any]', '            Contents of the file provided', '        \"\"\"', '', '        if os.path.getsize(file_path) > 1e10:', '            raise Exception(\"File size is too large, please upload file less than 10GB\")', '', '        # Convert file records to appropriate format before uploading file', '        items = []', '        if file_path.endswith(\".jsonl\") or file_path.endswith(\".jsonlines\"):', '            with open(file_path) as dataset_file:', '', '                for row in jsonlines.Reader(dataset_file):', '                    yield {\"input\": row[input_key], \"output\": row.get(output_key, \"\")}', '', '        elif file_path.endswith(\".csv\"):', '            df = pd.read_csv(file_path).fillna(\"\")', '            data_keys = df.columns', '            if input_key not in data_keys:', '                raise ValueError(', '                    f\"File must have input_key={input_key} as a column (and optionally output_key={output_key}). 
You \"', '                    \"can pass in different input_key and output_keys.\"', '                )', '', '            try:', '                for _, row in df.iterrows():', '                    yield {', '                        \"input\": row[input_key],', '                        \"output\": row.get(output_key, \"\"),', '                    }', '            except KeyError:', '                raise ValueError(\"Each object must have \\'input\\' and \\'output\\' as keys\")', '', '        else:', '            raise Exception(', '                \"Upload of only csv and jsonlines file supported at the moment.\"', '            )', '        return items', '', '    def train(', '        self,', '        data_or_dataset_id: Union[', '            str, Iterable[Dict[str, Union[int, float, str, bool, Dict, List]]]', '        ],', '        finetune_args: Optional[dict] = None,', '        gpu_config: Optional[dict] = None,', '        is_public: Optional[bool] = None,', '    ) -> str:', '        \"\"\"Handler for training jobs through the Trainer object. This submits a training', '        job request to the platform using the provided data.', '', '        Parameters', '        ----------', '        data_or_dataset_id: Union[', '            str, Iterable[Dict[str, Union[int, float, str, bool, Dict, List]]]', '        ]', '            Data or Id to use for the training job', '', '        finetune_args: Optional[dict] = None', '            Arguments that are passed into the Trainer.train function', '', '        gpu_config: Optional[dict] = None', '            Configuration for the GPUs on the platform', '', '        is_public: Optional[bool] = None', '            Allow public access to the model and dataset', '']\n",
            "Generate three diverse questions and answers for this code: ['        Raises', '        ------', '        AssertionError', '            Raises if dataset_id is None, a dataset_id is generated when data is provided', '            to this function instead of an id', '', '        Returns', '        -------', '        job: str', '            Job id for the train job on the platform', '        \"\"\"', '', '        if isinstance(data_or_dataset_id, str):', '            dataset_id = data_or_dataset_id', '        else:', '            dataset_id = self.upload_data(data_or_dataset_id, is_public=is_public)', '        assert dataset_id is not None', '        base_path = self.trainer.get_upload_base_path()', '        self.upload_base_path = base_path[\"upload_base_path\"]', '        existing_dataset = self.trainer.get_existing_dataset(', '            dataset_id, self.upload_base_path', '        )', '        self.upload_file_path = existing_dataset[\"dataset_location\"]', '', '        job = self.trainer.train(', '            model_name=self.model_name,', '            dataset_id=dataset_id,', '            upload_file_path=self.upload_file_path,', '            finetune_args=finetune_args,', '            gpu_config=gpu_config,', '            is_public=is_public,', '        )', '        job[\"dataset_id\"] = dataset_id', '        return job', '', '    # Add alias for tune', '    tune = train', '', '    # continuously poll until the job is completed', '    def train_and_wait(', '        self,', '        data_or_dataset_id: Union[', '            str, Iterable[Dict[str, Union[int, float, str, bool, Dict, List]]]', '        ],', '        finetune_args: Optional[dict] = None,', '        gpu_config: Optional[dict] = None,', '        is_public: Optional[bool] = None,', '        **kwargs,', '    ) -> str:', '        \"\"\"Handler for training jobs through the Trainer object. This submits a training', '        job request to the platform using the provided data. 
This differs from the train', '        function in that this function will continuously poll until the job is completed.', '', '        Parameters', '        ----------', '        data_or_dataset_id: Union[', '            str, Iterable[Dict[str, Union[int, float, str, bool, Dict, List]]]', '        ]', '            Data or Id to use for the training job', '', '        finetune_args: Optional[dict] = None', '            Arguments that are passed into the Trainer.train function', '', '        gpu_config: Optional[dict] = None', '            Configuration for the GPUs on the platform', '', '        is_public: Optional[bool] = None', '            Allow public access to the model and dataset', '', '        kwargs: Dict[str, Any]', '            Key word arguments', '                verbose', '                    output text indicating the job is still runing', '', '        Raises', '        ------', '        KeyboardInterrupt', '            Raised when keyboard interrupt is called', '', '        Returns', '        -------', '        status: str', '            Job status on the platform', '        \"\"\"', '', '        job = self.train(', '            data_or_dataset_id,', '            finetune_args=finetune_args,', '            gpu_config=gpu_config,', '            is_public=is_public,', '        )', '', '        try:', '            status = self.check_job_status(job[\"job_id\"])', '            if status[\"status\"] == \"FAILED\":', '                print(f\"Job failed: {status}\")', '                return status', '', '            while status[\"status\"] not in (', '                \"COMPLETED\",', '                \"PARTIALLY COMPLETED\",', '                \"FAILED\",', '                \"CANCELLED\",', '            ):', '                if kwargs.get(\"verbose\", False):', '                    print(f\"job not done. waiting... 
{status}\")', '                time.sleep(30)', '                status = self.check_job_status(job[\"job_id\"])', '                if status[\"status\"] == \"FAILED\":', '                    print(f\"Job failed: {status}\")', '                    return status', '                elif status[\"status\"] == \"CANCELLED\":', '                    print(f\"Job canceled: {status}\")', '                    return status', '            print(', '                f\"Finetuning process completed, model name is: {status[\\'model_name\\']}\"', '            )', '        except KeyboardInterrupt as e:', '            print(\"Cancelling job\")', '            return self.cancel_job(job[\"job_id\"])', '', '        return status', '', '    # Add alias for tune', '    tune_and_wait = train_and_wait', '', '    def cancel_job(self, job_id: str = None) -> str:', '        \"\"\"Cancel to job specified by the id', '', '        Parameters', '        ----------', '        job_id: str=None', '            job id to cancel', '', '        Returns', '        -------', '        str', '            Output from platform of the confirming cancelling of the job', '        \"\"\"', '', '        return self.trainer.cancel_job(job_id)', '', '    def cancel_all_jobs(', '        self,', '    ) -> str:', '        \"\"\"Cancel all jobs from this user on the platform', '', '        Parameters', '        ----------', '        None', '', '        Returns', '        -------', '        str', '            Output from platform of the confirming cancelling of the job', '        \"\"\"', '', '        return self.trainer.cancel_all_jobs()', '', '    def resume_job(self, job_id: str = None) -> str:', '        \"\"\"Resume the specific job on the Lamini platform', '', '        Parameters', '        ----------', '        job_id: str=None', '            Job to be resumed', '', '        Returns', '        -------', '        str:', '            Returned status of the platform for the job', '        \"\"\"', '', '        
return self.trainer.resume_job(job_id)', '', '    def check_job_status(self, job_id: str = None) -> str:', '        \"\"\"Check the specified job on the Lamini platform', '', '        Parameters', '        ----------', '        job_id: str=None', '            Job to check status', '', '        Returns', '        -------', '        str', '            Returned status of the platform job', '        \"\"\"', '', '        return self.trainer.check_job_status(job_id)', '', '    def get_jobs(self) -> List[str]:', '        \"\"\"Get all jobs for this user on the Lamini Platform', '', '        Parameters', '        ----------', '        None', '', '        Returns', '        -------']\n",
            "Generate three diverse questions and answers for this code: ['        List[str]:', '            Returned list of all jobs', '        \"\"\"', '', '        return self.trainer.get_jobs()', '', '    def evaluate(self, job_id: str = None) -> str:', '        \"\"\"Run an evaluation job on the specified training job', '', '        Parameters', '        ----------', '        job_id: str=None', '            Job to evaluate', '', '        Returns', '        -------', '        str:', '            Status of the job on the platform', '        \"\"\"', '', '        return self.trainer.evaluate(job_id)']\n",
            "[{'input': 'What is the purpose of the `get_version` function in the Lamini class', 'output': 'The `get_version` function is used to retrieve the version of the Lamini platform, taking into account the provided API key, API URL, and configuration'}, {'input': 'What is the difference between `max_tokens` and `max_new_tokens` in the `generate` method', 'output': 'max_tokens is the total number of tokens the model can use for generation, while max_new_tokens is the maximum number of new tokens the model can generate, allowing for more fine-grained control over the output'}, {'input': 'What is the purpose of the `upload_data` method in the Lamini class', 'output': 'The `upload_data` method is used to upload data to the Lamini platform, taking in an iterable of dictionaries and an optional boolean indicating whether the data should be made public'}, {'input': 'What is the purpose of the `upload_data` method in the provided code', 'output': 'The `upload_data` method is used to upload data to the Lamini Platform, either to Azure or locally, depending on the `upload_base_path` value. 
It takes in data, an optional `is_public` flag, and returns the dataset ID'}, {'input': 'What is the maximum file size allowed for upload in the `_upload_file_impl` method?', 'output': '10GB'}, {'input': 'What is the purpose of the `train` method in the provided code?', 'output': 'The `train` method is used to handle training jobs through the Trainer object, submitting a training job request to the platform using the provided data'}, {'input': 'What is the purpose of the `train` function in this code', 'output': 'The `train` function submits a training job request to the platform using the provided data and continuously polls until the job is completed'}, {'input': 'What is the `tune` alias for in this code', 'output': 'The `tune` alias is an alias for the `train` function'}, {'input': 'What is the purpose of the `cancel_job` function in this code', 'output': 'The `cancel_job` function cancels a job specified by the job id on the platform'}, {'input': 'What is the purpose of the `evaluate` method in this code', 'output': 'The purpose of the `evaluate` method is to run an evaluation job on the specified training job and return the status of the job on the platform'}, {'input': 'What is the type of the `job_id` parameter in the `evaluate` method', 'output': 'The `job_id` parameter is a string'}, {'input': 'What is the return type of the `evaluate` method', 'output': 'The return type of the `evaluate` method is a string'}]\n"
          ]
        }
      ],
      "source": [
        "with open('lamini_file.py', 'r') as file:\n",
        "  code_file = file.read()\n",
        "\n",
        "# Generate 3 Q&A pairs for every 200 lines of the code (a chunk)\n",
        "responses = []\n",
        "code_lines = code_file.splitlines()\n",
        "for i in range(0, len(code_lines), 200):\n",
        "  chunk = code_lines[i:i + 200]\n",
        "  # Update the prompt to ask for more diverse Q&A\n",
        "  prompt = f\"Generate three diverse questions and answers for this code: {chunk}\"\n",
        "  print(prompt)\n",
        "  # Apply prompt template\n",
        "  formatted_prompt = f\"<|begin_of_text|><|start_header_id|>user<|end_header_id|>{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\"\n",
        "  # Generate the questions and answers\n",
        "  r = llm.generate(formatted_prompt, output_type={\n",
        "      \"question1\": \"str\",\n",
        "      \"answer1\": \"str\",\n",
        "      \"question2\": \"str\",\n",
        "      \"answer2\": \"str\",\n",
        "      \"question3\": \"str\",\n",
        "      \"answer3\": \"str\"\n",
        "  })\n",
        "  # Save the responses to a list\n",
        "  responses.append({\"input\": r[\"question1\"], \"output\": r[\"answer1\"]})\n",
        "  responses.append({\"input\": r[\"question2\"], \"output\": r[\"answer2\"]})\n",
        "  responses.append({\"input\": r[\"question3\"], \"output\": r[\"answer3\"]})\n",
        "\n",
        "print(responses)"
      ]
    },
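    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before reviewing by hand, a quick sanity check can catch empty or malformed pairs. This is a minimal sketch, assuming `responses` is the list of Q&A dicts built in the cell above:\n",
        "\n",
        "```python\n",
        "# Sanity-check the generated pairs before manual review:\n",
        "# every item should have non-empty 'input' and 'output' strings.\n",
        "def check_pairs(pairs):\n",
        "    bad = []\n",
        "    for i, p in enumerate(pairs):\n",
        "        ok = isinstance(p.get('input'), str) and p['input'].strip()\n",
        "        ok = ok and isinstance(p.get('output'), str) and p['output'].strip()\n",
        "        if not ok:\n",
        "            bad.append(i)\n",
        "    return bad\n",
        "\n",
        "# Run on the generated data:\n",
        "# check_pairs(responses)\n",
        "```\n",
        "\n",
        "An empty result means every pair is at least structurally sound; the content still needs a human read."
      ]
    },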
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "FEGtbhPARu-a"
      },
      "source": [
        "Now let's revisit the generated data. By carefully reviewing our data and updating the generation code when the dataset is manageable (i.e., something a human can review in just a few minutes), we lay the groundwork to scale the process for larger datasets in future tuning iterations.\n",
        "\n",
        "```\n",
        "[\n",
        "  {\n",
        "    'input': 'What is the purpose of the `get_version` function in the Lamini class',\n",
        "    'output': 'The `get_version` function is used to retrieve the version of the Lamini platform, taking into account the provided API key, API URL, and configuration'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the difference between `max_tokens` and `max_new_tokens` in the `generate` method',\n",
        "    'output': 'max_tokens is the total number of tokens the model can use for generation, while max_new_tokens is the maximum number of new tokens the model can generate, allowing for more fine-grained control over the output'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the purpose of the `upload_data` method in the Lamini class',\n",
        "    'output': 'The `upload_data` method is used to upload data to the Lamini platform, taking in an iterable of dictionaries and an optional boolean indicating whether the data should be made public'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the purpose of the `upload_data` method in the provided code',\n",
        "    'output': 'The `upload_data` method is used to upload data to the Lamini Platform, either to Azure or locally, depending on the `upload_base_path` value. It takes in data, an optional `is_public` flag, and returns the dataset ID'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the maximum file size allowed for upload in the `_upload_file_impl` method?',\n",
        "    'output': '10GB'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the purpose of the `train` method in the provided code?',\n",
        "    'output': 'The `train` method is used to handle training jobs through the Trainer object, submitting a training job request to the platform using the provided data'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the purpose of the `train` function in this code',\n",
        "    'output': 'The `train` function submits a training job request to the platform using the provided data and continuously polls until the job is completed'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the `tune` alias for in this code',\n",
        "    'output': 'The `tune` alias is an alias for the `train` function'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the purpose of the `cancel_job` function in this code',\n",
        "    'output': 'The `cancel_job` function cancels a job specified by the job id on the platform'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the purpose of the `evaluate` method in this code',\n",
        "    'output': 'The purpose of the `evaluate` method is to run an evaluation job on the specified training job and return the status of the job on the platform'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the type of the `job_id` parameter in the `evaluate` method',\n",
        "    'output': 'The `job_id` parameter is a string'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the return type of the `evaluate` method',\n",
        "    'output': 'The return type of the `evaluate` method is a string'\n",
        "  }\n",
        "]\n",
        "```\n"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "povgnW7lUOQP"
      },
      "source": [
        "With our improved prompting and chunking, the questions are now more diverse and cover more concepts from the evaluation set. For example, there are now questions about return types and parameter types—key concepts that were not addressed in the earlier generated data."
      ]
    },
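    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "One thing to still watch for: the generated set contains near-duplicate questions (for example, two `upload_data` and two `train` questions). A simple token-overlap check, sketched below against the `responses` list from the generation cell, can flag them before tuning:\n",
        "\n",
        "```python\n",
        "# Flag near-duplicate questions with a token-overlap (Jaccard) score.\n",
        "def jaccard(a, b):\n",
        "    sa, sb = set(a.lower().split()), set(b.lower().split())\n",
        "    return len(sa & sb) / len(sa | sb)\n",
        "\n",
        "def find_near_duplicates(pairs, threshold=0.5):\n",
        "    dupes = []\n",
        "    for i in range(len(pairs)):\n",
        "        for j in range(i + 1, len(pairs)):\n",
        "            if jaccard(pairs[i]['input'], pairs[j]['input']) >= threshold:\n",
        "                dupes.append((pairs[i]['input'], pairs[j]['input']))\n",
        "    return dupes\n",
        "\n",
        "# Run on the generated data:\n",
        "# find_near_duplicates(responses)\n",
        "```\n",
        "\n",
        "Dropping or rewriting flagged pairs keeps the tuning data from over-weighting a single concept."
      ]
    },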
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "NPOFu18LUru-"
      },
      "source": [
        "## Tuning a model 🏰\n",
        "Finally, we have some solid data and are ready for a tuning run! This is simple with Lamini. Try uncommenting the tune command to create your own model, or continue with the public model we have already tuned for you."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "jLgMJYm8U7Kt",
        "outputId": "2c4c7767-466f-4b78-cc83-6f6e82f4142d"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Uncomment the tune command to tune your own model!\n",
            "Data pairs uploaded to local.\n",
            "\n",
            "Your dataset id is: 83eac7d5e636ed35639272cbc3a338e4c505b49613fad1c65456203926eb546d . Consider using this in the future to train using the same data. \n",
            "Eg: llm.train(data_or_dataset_id='83eac7d5e636ed35639272cbc3a338e4c505b49613fad1c65456203926eb546d')\n",
            "Tuning job submitted! Check status of job 12656 here: https://api.lamini.ai/train/12656\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "{'job_id': 12656,\n",
              " 'status': 'CREATED',\n",
              " 'dataset_id': '83eac7d5e636ed35639272cbc3a338e4c505b49613fad1c65456203926eb546d'}"
            ]
          },
          "metadata": {},
          "execution_count": 1
        }
      ],
      "source": [
        "# Note that responses has already been formatted as [{\"input\": ..., \"output\": ...}, ...]\n",
        "# We want to further format the data using Llama 3.1's prompt template: https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1/\n",
        "formatted_responses = [\n",
        "  {\n",
        "    \"input\": f'<|begin_of_text|><|start_header_id|>user<|end_header_id|>{r[\"input\"]}<|eot_id|><|start_header_id|>assistant<|end_header_id|>',\n",
        "    \"output\": f'{r[\"output\"]}',\n",
        "  }\n",
        "  for r in responses\n",
        "]\n",
        "\n",
        "print(\"Uncomment the tune command to tune your own model!\")\n",
        "## Uncomment to tune your own model! ##\n",
        "# llm.tune(data_or_dataset_id=formatted_responses)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "iquo7gqcWa4R"
      },
      "source": [
        "## Evaluate 🤓\n",
        "\n",
        "After the model has completed tuning, we should benchmark it against our evaluation set to measure its performance and, hopefully, see some improvement."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "QcpHBpFVXCG-",
        "outputId": "0fbc2f12-0f3e-4391-8a2d-d4891eb2ef2b"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "What type is model_name? The model_name is a string\n",
            "What does upload_data return? The `upload_data` method returns the dataset ID of the uploaded data\n",
            "What is the difference between .tune and .train? The `tune` alias is an alias for the `train` function\n",
            "What arguments are required to make an inference call to Lamini? The arguments required to make an inference call to Lamini are:\n",
            "\n",
            "* The ID of the job function to run\n",
            "* The input data for the job function\n",
            "* Any additional configuration options for the job function\n"
          ]
        }
      ],
      "source": [
        "# If you tuned your own model, replace model_name once it has finished tuning: https://api.lamini.ai/tune\n",
        "# You can also use the UI to test this: https://api.lamini.ai/tune\n",
        "tuned_llm = lamini.Lamini(model_name=\"a98a7a00623b14b71cf3b53c8f02f7a3c23d95d57d41efbbb8dc79bbb5cb836d\")\n",
        "questions = [\"What type is model_name?\", \"What does upload_data return?\", \"What is the difference between .tune and .train?\", \"What arguments are required to make an inference call to Lamini?\"]\n",
        "for q in questions:\n",
        "  formatted_prompt = f\"<|begin_of_text|><|start_header_id|>user<|end_header_id|>{q}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\"\n",
        "  response = tuned_llm.generate(formatted_prompt)\n",
        "  print(q, response)\n"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "AGYLPUpLX5LJ"
      },
      "source": [
        "| Question | Tuning Run #1 Answer | Gold Answer |\n",
        "| :-- | :-- | :-- |\n",
        "| What type is model_name? | The model_name is a string | A string. |\n",
        "| What does upload_data return? | The upload_data method returns the dataset ID | The dataset ID, a string. |\n",
        "| What is the difference between .tune and .train? | The `tune` alias is an alias for the `train` function | They are aliases to each other, i.e. the same. Both functions<br> will tune a model. |\n",
        "| What arguments are required to make an inference call to Lamini? | The arguments required to make an inference call to<br>Lamini are:<br>- The ID of the job function to run <br>- The input data for the job function<br>- Any additional configuration options for the job function | To make a call to Lamini, you need to instantiate Lamini with<br> a model name. Then, the generate call only needs a prompt<br> to run inference on the set model. |\n",
        "\n",
        "Let's review the model's answers against the gold answers:\n",
        "1. What type is model_name? --> Correct ✅\n",
        "1. What does upload_data return? --> Correct ✅\n",
        "1. What is the difference between .tune and .train? --> Correct ✅\n",
        "1. What arguments are required to make an inference call to Lamini? --> Incorrect, does not reference the generate function ❌\n",
        "\n",
        "And now we can score our first tuning run:\n",
        "\n",
        "| Model | Score |\n",
        "| :-- | :-- |\n",
        "| Llama 3.1 | 1/4 --> 25% |\n",
        "| Tuning Run #1 | 3/4 --> 75% |\n",
        "\n",
        "Sweet, that's a great improvement!\n",
        "\n",
        "But is this the best we can achieve? Not necessarily. Memory tuning is an iterative process. By analyzing why the model made incorrect responses and iterating on our approach, we can enhance the model's quality even further based on our evaluation set, using the techniques we've just covered."
      ]
    },
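    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The grading above was done by eye, which works for four questions but not for a larger evaluation set. A rough first pass can be automated with a keyword check. The `must_mention` field below is a convention invented for this sketch, not part of Lamini's API, and a human should still spot-check the results:\n",
        "\n",
        "```python\n",
        "# Rough automatic grading: an answer passes if it mentions every\n",
        "# required keyword from the corresponding gold entry.\n",
        "eval_set = [\n",
        "    {'question': 'What type is model_name?', 'must_mention': ['string']},\n",
        "    {'question': 'What does upload_data return?', 'must_mention': ['dataset', 'id']},\n",
        "]\n",
        "\n",
        "def keyword_score(answers, eval_set):\n",
        "    correct = 0\n",
        "    for answer, entry in zip(answers, eval_set):\n",
        "        if all(kw.lower() in answer.lower() for kw in entry['must_mention']):\n",
        "            correct += 1\n",
        "    return correct / len(eval_set)\n",
        "\n",
        "answers = ['The model_name is a string', 'It returns the dataset ID of the uploaded data']\n",
        "print(f'{keyword_score(answers, eval_set):.0%}')  # 100%\n",
        "```\n",
        "\n",
        "Keyword checks over-credit answers that mention the right terms for the wrong reasons, so keep the manual review for anything borderline."
      ]
    },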
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "OTITgfHnbqrr"
      },
      "source": [
        "## Iterate ♻\n",
        "Let's look at the evaluation question that the model got wrong and examine why.\n",
        "\n",
        "| Question | Tuning Run #1 Answer | Gold Answer |\n",
        "| :-- | :-- | :-- |\n",
        "| What arguments are required to make an inference call to Lamini? ❌ | The arguments required to make an inference call to<br>Lamini are:<br>- The ID of the job function to run <br>- The input data for the job function<br>- Any additional configuration options for the job function | To make a call to Lamini, you need to instantiate Lamini with<br> a model name. Then, the generate call only needs a prompt<br> to run inference on the set model. |\n",
        "\n",
        "A good starting point is to check the data—does the tuning data contain the information needed to answer this question?\n",
        "\n",
        "<details>\n",
        "  <summary><i> Tuning data </i></summary>\n",
        "  <pre><code>[\n",
        "  {\n",
        "    'input': 'What is the purpose of the `get_version` function in the Lamini class',\n",
        "    'output': 'The `get_version` function is used to retrieve the version of the Lamini platform, taking into account the provided API key, API URL, and configuration'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the difference between `max_tokens` and `max_new_tokens` in the `generate` method',\n",
        "    'output': 'max_tokens is the total number of tokens the model can use for generation, while max_new_tokens is the maximum number of new tokens the model can generate, allowing for more fine-grained control over the output'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the purpose of the `upload_data` method in the Lamini class',\n",
        "    'output': 'The `upload_data` method is used to upload data to the Lamini platform, taking in an iterable of dictionaries and an optional boolean indicating whether the data should be made public'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the purpose of the `upload_data` method in the provided code',\n",
        "    'output': 'The `upload_data` method is used to upload data to the Lamini Platform, either to Azure or locally, depending on the `upload_base_path` value. It takes in data, an optional `is_public` flag, and returns the dataset ID'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the maximum file size allowed for upload in the `_upload_file_impl` method?',\n",
        "    'output': '10GB'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the purpose of the `train` method in the provided code?',\n",
        "    'output': 'The `train` method is used to handle training jobs through the Trainer object, submitting a training job request to the platform using the provided data'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the purpose of the `train` function in this code',\n",
        "    'output': 'The `train` function submits a training job request to the platform using the provided data and continuously polls until the job is completed'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the `tune` alias for in this code',\n",
        "    'output': 'The `tune` alias is an alias for the `train` function'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the purpose of the `cancel_job` function in this code',\n",
        "    'output': 'The `cancel_job` function cancels a job specified by the job id on the platform'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the purpose of the `evaluate` method in this code',\n",
        "    'output': 'The purpose of the `evaluate` method is to run an evaluation job on the specified training job and return the status of the job on the platform'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the type of the `job_id` parameter in the `evaluate` method',\n",
        "    'output': 'The `job_id` parameter is a string'\n",
        "  },\n",
        "  {\n",
        "    'input': 'What is the return type of the `evaluate` method',\n",
        "    'output': 'The return type of the `evaluate` method is a string'\n",
        "  }\n",
        "]</code></pre>\n",
        "</details>\n",
        "\n",
        "We see that the tuning data includes some information about the `generate` method and its parameters:\n",
        "```\n",
        "'input': 'What is the difference between `max_tokens` and `max_new_tokens` in the `generate` method',\n",
        "'output': 'max_tokens is the total number of tokens the model can use for generation, while max_new_tokens is the maximum number of new tokens the model can generate, allowing for more fine-grained control over the output'\n",
        "```\n",
        "However, since the model didn't mention the `generate` method, it likely didn't make the connection between \"inference call\" and a call to `generate`. The data is also missing the required arguments for `generate`.\n",
        "\n",
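        "For example, Q&A pairs covering the gap might look like this (a minimal sketch; `tuning_data` stands in for the generated list, and we assume `generate`'s required argument is the prompt):\n",
        "```python\n",
        "# Hypothetical Q&A pairs that tie \"inference\" to the `generate` method\n",
        "extra_pairs = [\n",
        "    {\n",
        "        'input': 'Which method of the Lamini class makes an inference call?',\n",
        "        'output': 'The `generate` method.',\n",
        "    },\n",
        "    {\n",
        "        'input': 'What is the required argument of the `generate` method?',\n",
        "        'output': 'The prompt, a string (or a list of strings).',\n",
        "    },\n",
        "]\n",
        "\n",
        "tuning_data = []  # stands in for the generated Q&A list from earlier\n",
        "tuning_data.extend(extra_pairs)\n",
        "```\n",
        "\n",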
        "**Fix**: Manually add Q&A pairs to the tuning data to cover this concept gap (`generate` means inference). Alternatively, update the data generation process to ensure this concept is included. After making these adjustments, re-tune the model and evaluate the results against the evaluation set."
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "YEMzOulXtWw_"
      },
      "source": [
        "## Conclusion 👑\n",
        "\n",
        "In this example, we've reached step 7 in the memory tuning process and demonstrated many of the essential tools to get started. Your next step is to revisit the creation of the tuning set (steps 3 and 4) to generate additional questions and answers that address the missing concepts and continue iterating.\n",
        "\n",
        "Here’s a quick recap of the steps:\n",
        "1. Create an evaluation set\n",
        "1. Benchmark the base model\n",
        "1. Construct a dataset\n",
        "1. Review the generated Q&A\n",
        "1. Tuning run\n",
        "1. Evaluate using the evaluation set\n",
        "1. Update Q&A generation\n",
        "1. Repeat 4-7\n",
        "1. Expand the dataset\n",
        "\n",
        "Once we think the model is \"good enough\", or we see that the data is there but the model isn't memorizing it correctly, we can update the hyperparameters and iterate again, trying different learning rates, max steps, etc. For more information on hyperparameters, check out the documentation here: [https://lamini-ai.github.io/tuning/hyperparameters/](https://lamini-ai.github.io/tuning/hyperparameters/).\n",
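        "\n",
        "As a sketch of that iteration (hypothetical values; the `finetune_args` keys follow the hyperparameters docs above, and `llm` would be a `lamini.Lamini` client):\n",
        "```python\n",
        "# Candidate hyperparameter settings to try, one tuning run each\n",
        "candidate_args = [\n",
        "    {'learning_rate': 3.0e-4, 'max_steps': 60},\n",
        "    {'learning_rate': 1.0e-4, 'max_steps': 120},\n",
        "]\n",
        "\n",
        "def sweep(llm, dataset_id, candidate_args):\n",
        "    # Submit one tuning job per setting; score each against the eval set afterwards\n",
        "    for args in candidate_args:\n",
        "        llm.tune(data_or_dataset_id=dataset_id, finetune_args=args)\n",
        "```\n",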
        "\n",
        "Finally, once you have a model that performs well on a small dataset, you can start to expand and add more data. By focusing on debugging a subset of your data, you can iterate and refine the process more effectively. For this example, that would mean incrementally expanding the dataset to cover all the files in the Lamini Python package.\n",
        "\n",
        "As you use more data, these tools from Lamini may be helpful: [data generation pipeline](https://github.com/lamini-ai/lamini-examples/blob/main/05_data_pipeline/README.md), evaluation pipeline (WIP)."
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "yubC6ww1Hyhh"
      },
      "source": [
        "## Note from the author 🙋🏻‍♀️\n",
        "While this notebook is meant to provide a straightforward example of memory tuning, even its creation required a lot of iteration. We had to experiment with various prompts, numbers of data points, chunk sizes, tuning hyperparameters, etc. Remember, tuning is as much an art as it is a science. Stay persistent and break the task down into manageable pieces. Embrace the data and start small for the best results!\n",
        "\n",
        "Here are a few of the over two dozen models created while making this notebook.\n",
        "\n",
        "![Model iterations](https://drive.google.com/uc?export=view&id=1m21fR0frarN2U3gU4w_mf1G98uwsiQIy)"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.10.10"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
