{
  "cells": [
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "UMqBL77hMXP2"
      },
      "source": [
        "# Setting Up Closed-Source and Open-Source LLMs\n",
        "\n",
        "Authors:  \n",
        " - [Lior Gazit](https://www.linkedin.com/in/liorgazit).  \n",
        " - [Meysam Ghaffari](https://www.linkedin.com/in/meysam-ghaffari-ph-d-a2553088/).  \n",
        "\n",
        "This notebook is taught and reviewed in our book:  \n",
        "**[Mastering NLP from Foundations to LLMs](https://www.amazon.com/dp/1804619183)**  \n",
        "\n",
        "This Colab notebook is referenced in our book's GitHub repo:   \n",
        "https://github.com/PacktPublishing/Mastering-NLP-from-Foundations-to-LLMs   \n",
        "<a target=\"_blank\" href=\"https://colab.research.google.com/github/PacktPublishing/Mastering-NLP-from-Foundations-to-LLMs/blob/liors_branch/Chapter8_notebooks/Ch8_Setting_Up_Close_Source_and_Open_Source_LLMs.ipynb\">\n",
        "  <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
        "</a>"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "K3nyaPpW4GVh"
      },
      "source": [
        "**The purpose of this notebook:**   \n",
        "In this notebook we demonstrate how to set up LLMs for prompting via code.  \n",
        "We start by experimenting with OpenAI's GPT via its API,  \n",
        "and then we import a free open-source LLM and experiment with it locally.  \n",
        "\n",
        "\n",
        "**Requirements:**  \n",
        "* When running in Colab, use this runtime notebook setting: `Python 3, CPU`  \n",
        "* An OpenAI API key is preferable, so that all of the notebook's functionality can be used  "
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "g54Uf66Vz9Fi"
      },
      "source": [
        ">*```Disclaimer: The content and ideas presented in this notebook are solely those of the authors and do not represent the views or intellectual property of the authors' employers.```*"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "ZsuY22Ld4RLh"
      },
      "source": [
        "## Part 1 - Using LLMs in Python via an API\n",
        "In this part we cover the process of setting up access to OpenAI's `gpt-3.5-turbo` via their API.  \n",
        "We lay out the steps per the description in the book.  \n",
        "> **Note that an API key needs to be generated ahead of time via OpenAI's website, and is available for you once you set up an account.**  \n",
        " If you don't have a key, skip to part 2 for setting up open source LLMs for free.  \n"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "5vHoDoKTE4tt"
      },
      "source": [
        "Install:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "FV2EqicfE4tu",
        "outputId": "307656e0-c6b0-48cf-a652-7ce12805933b"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m227.4/227.4 kB\u001b[0m \u001b[31m3.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m75.6/75.6 kB\u001b[0m \u001b[31m4.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m77.8/77.8 kB\u001b[0m \u001b[31m4.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m3.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h"
          ]
        }
      ],
      "source": [
        "# REMARK:\n",
        "# If the code below errors out due to a Python package discrepancy, newer package versions may be the cause.\n",
        "# In that case, set \"default_installations\" to False to revert to the original image:\n",
        "default_installations = True\n",
        "if default_installations:\n",
        "  !pip -q install --upgrade openai\n",
        "else:\n",
        "  import requests\n",
        "  text_file_path = \"requirements__Ch8_Setting_Up_Close_Source_and_Open_Source_LLMs.txt\"\n",
        "  url = \"https://raw.githubusercontent.com/PacktPublishing/Mastering-NLP-from-Foundations-to-LLMs/main/Chapter8_notebooks/\" + text_file_path           \n",
        "  res = requests.get(url)\n",
        "  with open(text_file_path, \"w\") as f:\n",
        "    f.write(res.text)\n",
        "    \n",
        "  !pip install -r requirements__Ch8_Setting_Up_Close_Source_and_Open_Source_LLMs.txt"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "lKvVWqxUE4tv"
      },
      "source": [
        "Imports:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "H0Y60k1fE4tv"
      },
      "outputs": [],
      "source": [
        "import openai\n",
        "import re\n",
        "import time"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "Ym-8iF3xE4tw"
      },
      "source": [
        "Define OpenAI's API key:  \n",
        "**You must provide a key and paste it as a string!**  \n",
        "If you don't have a key, skip to part 2 for setting up open source LLMs for free.  "
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "OLJdgPbFE4tw"
      },
      "outputs": [],
      "source": [
        "# paste your key here as a string: \"...\"\n",
        "api_key = \"...\""
      ]
    },
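    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A safer alternative (a sketch on our part, not required by the book): read the key from the `OPENAI_API_KEY` environment variable, which is the convention the `openai` library itself follows, instead of pasting it into the notebook."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Optional sketch: read the key from the environment instead of hardcoding it.\n",
        "# Falls back to the value pasted above if the variable is not set.\n",
        "import os\n",
        "api_key = os.environ.get(\"OPENAI_API_KEY\", api_key)"
      ]
    },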
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "cmNWLxJ3E4tx"
      },
      "source": [
        "### Experiment with `gpt-3.5-turbo`"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "UeyM5927E4ty"
      },
      "source": [
        "Code Settings:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "JiyW9REEE4ty"
      },
      "outputs": [],
      "source": [
        "openai_model = \"gpt-3.5-turbo\"\n",
        "temperature = 0\n",
        "max_attempts = 5\n",
        "attempts = 0"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "sFXvAGQqui3d"
      },
      "outputs": [],
      "source": [
        "client = openai.OpenAI(api_key=api_key)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "YkKHwQ5hE4tx"
      },
      "source": [
        "#### Define your prompt\n",
        "Here you define two messages to the model:  \n",
        "1. System prompt: Priming the model by telling it how you'd like it to behave.  \n",
        " This capability is not available when you use the ChatGPT interface in a web browser.  \n",
        "1. User prompt.\n",
        "\n",
        "Note:  \n",
        "OpenAI offers further options for more sophisticated priming.  "
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3mfrh7UYE4tx"
      },
      "outputs": [],
      "source": [
        "system_prompt = \"You are an insightful assistant. When you are asked a question, you articulate an answer. Only after you have finished answering the question, you carefully review all the words in the prompt and you state the typos that exist in the prompt, finally, you provide corrections to them.\"\n",
        "user_prompt_oai = \"If neuroscience could extract the last thoughts a person had before they dyed, how would the world be diferent?\""
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "VDFtY0fPE4tz"
      },
      "source": [
        "Define the priming message:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ShsIkMVeE4tz"
      },
      "outputs": [],
      "source": [
        "# Create message:\n",
        "messages = []\n",
        "messages.append({\"role\": \"system\",\n",
        "                 \"content\": system_prompt})\n",
        "messages.append({\"role\": \"user\",\n",
        "                 \"content\": user_prompt_oai})"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "ug5QhUa1E4tz"
      },
      "source": [
        "#### Experiment with the model"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "BZqme68_E4t0",
        "outputId": "eef9865f-b7f5-409e-83e5-a61621ad087c"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Prompt: If neuroscience could extract the last thoughts a person had before they dyed, how would the world be diferent?\n",
            "\n",
            "gpt-3.5-turbo's Response: \n",
            "If neuroscience could extract the last thoughts a person had before they died, it would have profound implications for various aspects of society. \n",
            "This ability could potentially revolutionize fields such as psychology, criminology, and end-of-life care. \n",
            "Understanding a person's final thoughts could provide valuable insights into their state of mind, emotional well-being, and potentially help unravel mysteries surrounding their death. \n",
            "It could also offer comfort to loved ones by providing a glimpse into the innermost thoughts of the deceased. \n",
            "However, such technology would raise significant ethical concerns regarding privacy, consent, and the potential misuse of this information. \n",
            "Overall, the world would be both fascinated and apprehensive about the implications of this groundbreaking capability.\n",
            "\n",
            "Typos in the prompt:\n",
            "1. \n",
            "\"dyed\" should be \"died\"\n",
            "2. \n",
            "\"diferent\" should be \"different\"\n",
            "\n",
            "Corrections:\n",
            "If neuroscience could extract the last thoughts a person had before they died, how would the world be different?\n"
          ]
        }
      ],
      "source": [
        "while True:\n",
        "    try:\n",
        "        response = client.chat.completions.create(\n",
        "            model=openai_model,\n",
        "            messages=messages,\n",
        "            temperature=temperature)\n",
        "        response_oai = response.choices[0].message.content.strip()\n",
        "        response_oai = re.sub(r'\\. ', r'. \\n', response_oai)\n",
        "        print(f\"Prompt: {user_prompt_oai}\\n\\n{openai_model}'s Response: \\n{response_oai}\")\n",
        "        break\n",
        "    except Exception as output:\n",
        "        attempts += 1\n",
        "        if attempts >= max_attempts:\n",
        "            print(f\"Quitting due to {openai_model} error: {output}\")\n",
        "            break\n",
        "        print(f\"Attempt #{attempts} failed: {output}\")\n",
        "        time.sleep(1)"
      ]
    },
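    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The retry loop above waits a fixed one second between attempts. A common refinement, sketched here on our own initiative rather than taken from the book, is exponential backoff: doubling the wait after each failed call so that repeated failures back off quickly."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sketch: retry a callable with exponential backoff (1s, 2s, 4s, ...).\n",
        "def call_with_backoff(make_call, max_attempts=5, base_wait=1):\n",
        "    wait_seconds = base_wait\n",
        "    for attempt in range(1, max_attempts + 1):\n",
        "        try:\n",
        "            return make_call()\n",
        "        except Exception as error:\n",
        "            if attempt == max_attempts:\n",
        "                raise\n",
        "            print(f\"Attempt #{attempt} failed: {error}; retrying in {wait_seconds}s\")\n",
        "            time.sleep(wait_seconds)\n",
        "            wait_seconds *= 2\n",
        "\n",
        "# Example usage:\n",
        "# call_with_backoff(lambda: client.chat.completions.create(\n",
        "#     model=openai_model, messages=messages, temperature=temperature))"
      ]
    },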
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "ajtE_AXTE4t0"
      },
      "source": [
        "****"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "8r0QrKVv4z0D"
      },
      "source": [
        "## Part 2 - Using Open Source LLMs Locally\n",
        "Here we set up an open-source LLM via the Hugging Face Hub.  "
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "bfdxZJU24p3J"
      },
      "source": [
        "Install:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "WcvU5auS4DYW",
        "outputId": "ee4a5a98-1be7-4a93-f4d7-90247c6e75df"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Requirement already satisfied: transformers in /usr/local/lib/python3.10/dist-packages (4.38.1)\n",
            "Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from transformers) (3.13.1)\n",
            "Requirement already satisfied: huggingface-hub<1.0,>=0.19.3 in /usr/local/lib/python3.10/dist-packages (from transformers) (0.20.3)\n",
            "Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from transformers) (1.25.2)\n",
            "Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from transformers) (23.2)\n",
            "Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from transformers) (6.0.1)\n",
            "Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from transformers) (2023.12.25)\n",
            "Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from transformers) (2.31.0)\n",
            "Requirement already satisfied: tokenizers<0.19,>=0.14 in /usr/local/lib/python3.10/dist-packages (from transformers) (0.15.2)\n",
            "Requirement already satisfied: safetensors>=0.4.1 in /usr/local/lib/python3.10/dist-packages (from transformers) (0.4.2)\n",
            "Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.10/dist-packages (from transformers) (4.66.2)\n",
            "Requirement already satisfied: fsspec>=2023.5.0 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.19.3->transformers) (2023.6.0)\n",
            "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.19.3->transformers) (4.9.0)\n",
            "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->transformers) (3.3.2)\n",
            "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->transformers) (3.6)\n",
            "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->transformers) (2.0.7)\n",
            "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->transformers) (2024.2.2)\n"
          ]
        }
      ],
      "source": [
        "%pip -q install --upgrade transformers"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "NbaKGrqK4vEk"
      },
      "source": [
        "Imports:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Jpb8ql3W4uO_"
      },
      "outputs": [],
      "source": [
        "from transformers import AutoTokenizer, AutoModelForCausalLM"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "O1Bbe-2z4zhy"
      },
      "source": [
        "### Experiment with Microsoft's `DialoGPT-medium`"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "pi9rh3TyEhmP"
      },
      "source": [
        "Settings:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0Ff8IhAKEhvv"
      },
      "outputs": [],
      "source": [
        "hf_model = \"microsoft/DialoGPT-medium\"\n",
        "max_length = 1000\n",
        "\n",
        "tokenizer = AutoTokenizer.from_pretrained(hf_model)\n",
        "model = AutoModelForCausalLM.from_pretrained(hf_model)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "UIH64_Lk6CqU"
      },
      "source": [
        "#### Define your prompt\n",
        "  "
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "v-CVoV5T6C0W"
      },
      "outputs": [],
      "source": [
        "user_prompt_hf = \"If dinosaurs were alive today, would they pose a threat to people?\""
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "HIdIkcWFEr5G"
      },
      "source": [
        "#### Experiment with the model"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "egxAV51KHzsM",
        "outputId": "1d33121d-62d5-4eae-9c0a-70943b51a483"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n",
            "*  Note to user: Feel free to ignore the above warning about padding side.\n",
            "\n",
            "\n",
            "Prompt: If dinosaurs were alive today, would they pose a threat to people?\n",
            "\n",
            "microsoft/DialoGPT-medium's Response: \n",
            "I think they would be more afraid of the humans.\n"
          ]
        }
      ],
      "source": [
        "print(\"\\n*  Note to user: Feel free to ignore the above warning about padding side.\")\n",
        "user_input_ids = tokenizer.encode(user_prompt_hf + tokenizer.eos_token, return_tensors='pt')\n",
        "response_hf_encoded = model.generate(user_input_ids,\n",
        "                             max_length=max_length,\n",
        "                             pad_token_id=tokenizer.eos_token_id)\n",
        "response_hf = tokenizer.decode(response_hf_encoded[:, user_input_ids.shape[-1]:][0], skip_special_tokens=True)\n",
        "print(f\"\\n\\nPrompt: {user_prompt_hf}\\n\\n{hf_model}'s Response: \\n{response_hf}\")"
      ]
    },
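    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "DialoGPT is a conversational model, so it can also hold a multi-turn exchange by feeding the accumulated history back into `generate`. The sketch below follows the pattern from the model's Hugging Face model card; the example turns are our own."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sketch: a short multi-turn chat, keeping the encoded history between turns.\n",
        "import torch\n",
        "\n",
        "chat_history_ids = None\n",
        "for user_turn in [\"Does money buy happiness?\", \"How would you find out?\"]:\n",
        "    new_input_ids = tokenizer.encode(user_turn + tokenizer.eos_token, return_tensors='pt')\n",
        "    # Append the new user turn to the running conversation, if any:\n",
        "    if chat_history_ids is None:\n",
        "        bot_input_ids = new_input_ids\n",
        "    else:\n",
        "        bot_input_ids = torch.cat([chat_history_ids, new_input_ids], dim=-1)\n",
        "    chat_history_ids = model.generate(bot_input_ids,\n",
        "                                      max_length=max_length,\n",
        "                                      pad_token_id=tokenizer.eos_token_id)\n",
        "    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0],\n",
        "                             skip_special_tokens=True)\n",
        "    print(f\"User: {user_turn}\\nBot: {reply}\\n\")"
      ]
    },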
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "UhWH2S_eqO2z"
      },
      "outputs": [],
      "source": []
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
