{
  "cells": [
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "UMqBL77hMXP2"
      },
      "source": [
        "# Applying advanced methods with chains\n",
        "\n",
        "Authors:  \n",
        " - [Lior Gazit](https://www.linkedin.com/in/liorgazit).  \n",
        " - [Meysam Ghaffari](https://www.linkedin.com/in/meysam-ghaffari-ph-d-a2553088/).  \n",
        "\n",
        "This notebook is taught and reviewed in our book:  \n",
        "**[Mastering NLP from Foundations to LLMs](https://www.amazon.com/dp/1804619183)**  \n",
        "\n",
        "This Colab notebook is referenced in our book's GitHub repo:   \n",
        "https://github.com/PacktPublishing/Mastering-NLP-from-Foundations-to-LLMs   \n",
        "<a target=\"_blank\" href=\"https://colab.research.google.com/github/PacktPublishing/Mastering-NLP-from-Foundations-to-LLMs/blob/liors_branch/Chapter9_notebooks/Ch9_Advanced_Methods_with_Chains.ipynb\">\n",
        "  <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
        "</a>"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "_O6mghyBdyHh"
      },
      "source": [
        "**Purpose of this notebook:**  \n",
        "Exploring some of the methods that **LangChain** lets us employ:  \n",
        "* Asking the LLM general knowledge questions  \n",
        "* Deriving structured data formats from LLM responses  \n",
        "* Setting up a memory feature within the conversation  \n",
        "\n",
        "**Requirements:**  \n",
        "* When running in Colab, use this runtime notebook setting: `Python 3, CPU`  \n",
        "* This code uses OpenAI's API as the LLM, so a paid **API key** is required."
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "g54Uf66Vz9Fi"
      },
      "source": [
        ">*```Disclaimer: The content and ideas presented in this notebook are solely those of the authors and do not represent the views or intellectual property of the authors' employers.```*"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "wc4H7Oz-TKQQ"
      },
      "source": [
        "Install:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "g9iHIKT2HUi-",
        "outputId": "155f0faf-7aec-460b-d0ea-ebf992fadfe2"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m817.0/817.0 kB\u001b[0m \u001b[31m9.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m227.4/227.4 kB\u001b[0m \u001b[31m17.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.7/1.7 MB\u001b[0m \u001b[31m13.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m252.4/252.4 kB\u001b[0m \u001b[31m18.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m63.1/63.1 kB\u001b[0m \u001b[31m5.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m75.6/75.6 kB\u001b[0m \u001b[31m2.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m49.4/49.4 kB\u001b[0m \u001b[31m5.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m77.8/77.8 kB\u001b[0m \u001b[31m7.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m5.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m138.5/138.5 kB\u001b[0m \u001b[31m13.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.8/1.8 MB\u001b[0m \u001b[31m23.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h"
          ]
        }
      ],
      "source": [
        "# REMARK:\n",
        "# If the code below errors out due to a Python package discrepancy, it may be because newer versions broke compatibility.\n",
        "# In that case, set \"default_installations\" to False to revert to the original environment:\n",
        "default_installations = True\n",
        "if default_installations:\n",
        "  !pip -q install langchain openai\n",
        "  !pip -q install -U langchain-openai\n",
        "else:\n",
        "  import requests\n",
        "  text_file_path = \"requirements__Ch9_Advanced_Methods_with_Chains.txt\"\n",
        "  url = \"https://raw.githubusercontent.com/PacktPublishing/Mastering-NLP-from-Foundations-to-LLMs/main/Chapter9_notebooks/\" + text_file_path           \n",
        "  res = requests.get(url)\n",
        "  with open(text_file_path, \"w\") as f:\n",
        "    f.write(res.text)\n",
        "    \n",
        "  !pip install -r requirements__Ch9_Advanced_Methods_with_Chains.txt\n"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "yACZPlFPTKQR"
      },
      "source": [
        "Imports:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ZBduT51rHcb7"
      },
      "outputs": [],
      "source": [
        "from langchain import PromptTemplate\n",
        "from langchain.chains import LLMChain\n",
        "from langchain_openai import OpenAI\n",
        "from langchain_community.chat_models import ChatOpenAI\n",
        "from langchain.chains import ConversationChain\n",
        "from langchain.memory import ConversationBufferMemory\n",
        "from langchain.output_parsers import CommaSeparatedListOutputParser\n",
        "import os\n",
        "import pandas as pd\n",
        "import json"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "e-kYVkbxwJ3I"
      },
      "source": [
        "Code Settings:"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "wDtEMZQS81xG"
      },
      "source": [
        "Provide OpenAI's API key:  \n",
        "Remember, if you'd prefer a free LLM, the book includes examples of replacing a paid LLM with a free one.  "
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "hnqlJfXxJZTr"
      },
      "outputs": [],
      "source": [
        "os.environ[\"OPENAI_API_KEY\"] = \"...\"\n",
        "llm = OpenAI(model_name='gpt-3.5-turbo-instruct')"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "2ROPMNWh9Tuz"
      },
      "source": [
        "### Asking the LLM a general knowledge question"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "MBp4EJs5HdDg",
        "outputId": "236d4a8f-990c-42bd-d92f-fb5b695645cc"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n",
            "\n",
            "James Hetfield, Lars Ulrich, Kirk Hammett, Robert Trujillo\n"
          ]
        }
      ],
      "source": [
        "simple_question = \"\"\"Who are the members of Metallica? List them as a comma-separated list.\"\"\"\n",
        "\n",
        "llm_chain = LLMChain(\n",
        "    llm=llm,\n",
        "    prompt=PromptTemplate(template=simple_question, input_variables=[]))\n",
        "\n",
        "print(llm_chain.predict())"
      ]
    },
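    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "`PromptTemplate` generalizes beyond a fixed question: a template can declare input variables that are filled in at call time. Here is a plain-Python sketch of that substitution (`format_prompt` is our own illustrative helper, not the LangChain API):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def format_prompt(template, **variables):\n",
        "    # Minimal stand-in for PromptTemplate: substitute the named\n",
        "    # input variables into the template string.\n",
        "    return template.format(**variables)\n",
        "\n",
        "format_prompt('Who are the members of {band}? List them comma separated.', band='Metallica')"
      ]
    },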
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "u9SMZ_D798_T"
      },
      "source": [
        "### Requesting output structure: Making the LLM provide output in a particular data format"
      ]
    },
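    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Under the hood, `CommaSeparatedListOutputParser` essentially splits the raw LLM text on commas and strips whitespace. A minimal plain-Python sketch of that behavior (`parse_comma_separated` is our own illustrative helper, not part of LangChain):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def parse_comma_separated(text):\n",
        "    # Split the raw LLM text on commas and strip surrounding whitespace,\n",
        "    # mirroring what CommaSeparatedListOutputParser does with a response.\n",
        "    return [item.strip() for item in text.split(',') if item.strip()]\n",
        "\n",
        "parse_comma_separated('Hydrogen, Helium, Lithium')"
      ]
    },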
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ZbfwnQIK-Pbz",
        "outputId": "0099dce0-8dc9-4e72-d583-5e0248feb7e6"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "['Hydrogen', 'Helium', 'Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', 'Oxygen', 'Fluorine', 'Neon']\n"
          ]
        }
      ],
      "source": [
        "request_list_format = \"\"\"List the first 10 elements of the periodic table as a comma-separated list.\"\"\"\n",
        "\n",
        "output_parser = CommaSeparatedListOutputParser()\n",
        "\n",
        "conversation = LLMChain(\n",
        "    llm=llm,\n",
        "    output_parser=output_parser,\n",
        "    prompt=PromptTemplate(template=request_list_format, input_variables=[]))\n",
        "\n",
        "print(conversation.predict())"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "YdpIOMzv-5nY"
      },
      "source": [
        "### Evolving to a fluent conversation: inserting a memory element so that previous interactions serve as reference and context for follow-up prompts"
      ]
    },
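    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "`ConversationBufferMemory` works by storing every exchange verbatim and injecting the transcript into the `{history}` slot of the prompt on each call. A minimal sketch of the idea (`SimpleBufferMemory` is our own illustrative class, not LangChain's implementation):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "class SimpleBufferMemory:\n",
        "    # Illustrative stand-in for ConversationBufferMemory: it keeps every\n",
        "    # exchange and renders them as a transcript for the {history} slot.\n",
        "    def __init__(self):\n",
        "        self.turns = []\n",
        "\n",
        "    def save_context(self, human, ai):\n",
        "        self.turns.append(('Human', human))\n",
        "        self.turns.append(('AI', ai))\n",
        "\n",
        "    def load_history(self):\n",
        "        return '\\n'.join(f'{role}: {text}' for role, text in self.turns)\n",
        "\n",
        "memory = SimpleBufferMemory()\n",
        "memory.save_context('List three holidays.', 'Christmas, Easter, Hanukkah')\n",
        "print(memory.load_history())"
      ]
    },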
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "OEUAeJSQHhJ3",
        "outputId": "cad053dd-96d9-4253-85a4-0f731f1477b7"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py:316: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.\n",
            "  warnings.warn(\n"
          ]
        },
        {
          "data": {
            "text/plain": [
              "['1. Christmas',\n",
              " '\\n2. Thanksgiving',\n",
              " '\\n3. Halloween',\n",
              " \"\\n4. New Year's Day\",\n",
              " \"\\n5. Valentine's Day\",\n",
              " '\\n6. Easter',\n",
              " '\\n7. Independence Day',\n",
              " '\\n8. Labor Day',\n",
              " '\\n9. Memorial Day',\n",
              " '\\n10. Hanukkah']"
            ]
          },
          "execution_count": 11,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "request_for_continuous_conversation = \"\"\"\n",
        "Current conversation:\n",
        "{history}\n",
        "\n",
        "Your task:\n",
        "{input}\"\"\"\n",
        "\n",
        "conversation = ConversationChain(\n",
        "    llm=llm,\n",
        "    prompt=PromptTemplate(template=request_for_continuous_conversation, input_variables=[\"history\", \"input\"], output_parser=output_parser),\n",
        "    memory=ConversationBufferMemory())\n",
        "\n",
        "conversation.predict_and_parse(input=\"Write the first 10 holidays you know, as a comma separated list.\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "vc-Vhk9BHkcQ",
        "outputId": "2f09380f-1a2a-4ba3-b128-9b749463849f"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py:316: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.\n",
            "  warnings.warn(\n"
          ]
        },
        {
          "data": {
            "text/plain": [
              "['1. Christmas', '\\n2. Easter', '\\n3. Hanukkah']"
            ]
          },
          "execution_count": 12,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "conversation.predict_and_parse(input=\"Observe the list of holidays you printed and remove all the non-religious holidays from the list.\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "pxCfQpq4JMFa",
        "outputId": "39408aee-0dc9-4c55-d223-baabcc1f6d3b"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n",
            "\n",
            "{\n",
            "  \"holidays\": [\n",
            "    {\n",
            "      \"name\": \"Christmas\",\n",
            "      \"description\": \"Christmas is a Christian holiday commemorating the birth of Jesus Christ, celebrated on December 25th.\"\n",
            "    },\n",
            "    {\n",
            "      \"name\": \"Easter\",\n",
            "      \"description\": \"Easter is a Christian holiday celebrating the resurrection of Jesus Christ, typically observed on the first Sunday after the full moon following the vernal equinox.\"\n",
            "    },\n",
            "    {\n",
            "      \"name\": \"Hanukkah\",\n",
            "      \"description\": \"Hanukkah is a Jewish holiday celebrating the rededication of the Second Temple in Jerusalem, observed for eight days and nights beginning on the 25th day of Kislev in the Hebrew calendar.\"\n",
            "    }\n",
            "  ]\n",
            "}\n"
          ]
        }
      ],
      "source": [
        "advanced_data_structure = \"\"\"For each of these, tell about the holiday in 2 sentences. Form the output in a json format table. The table's name is \"holidays\" and the fields are \"name\" and \"description\". For each row, the \"name\" is the holiday's name, and the \"description\" is the description you generated. The syntax of the output should be a json format, without newline characters.\n",
        "For instance:\n",
        "```\n",
        "{\"holidays\": [\n",
        "        {\"name\": \"holiday_name\",\n",
        "         \"description\": \"holiday_description\"\n",
        "        }\n",
        "        ]}\n",
        "```\n",
        "\"\"\"\n",
        "\n",
        "output = conversation.predict(input=advanced_data_structure)\n",
        "print(output)"
      ]
    },
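    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "LLMs don't always return perfectly clean JSON: the reply may include leading blank lines or surrounding prose. Before calling `json.loads`, it can help to defensively extract just the JSON object. A sketch of one such guard (`extract_json` is our own helper, assuming the payload is a single `{...}` object):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import json\n",
        "\n",
        "def extract_json(text):\n",
        "    # Keep only the substring between the first '{' and the last '}',\n",
        "    # so stray prose or blank lines around the JSON object are ignored.\n",
        "    start = text.find('{')\n",
        "    end = text.rfind('}')\n",
        "    if start == -1 or end == -1:\n",
        "        raise ValueError('no JSON object found in LLM output')\n",
        "    return json.loads(text[start:end + 1])\n",
        "\n",
        "extract_json('Sure! {\"holidays\": []} Hope this helps.')"
      ]
    },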
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 143
        },
        "id": "cut-qr_SEkcq",
        "outputId": "b1d5a20d-813d-4d2a-ba49-fde38abe0027"
      },
      "outputs": [
        {
          "data": {
            "text/html": [
              "<style type=\"text/css\">\n",
              "#T_94418_row0_col0, #T_94418_row0_col1, #T_94418_row1_col0, #T_94418_row1_col1, #T_94418_row2_col0, #T_94418_row2_col1 {\n",
              "  text-align: left;\n",
              "}\n",
              "</style>\n",
              "<table id=\"T_94418\" class=\"dataframe\">\n",
              "  <thead>\n",
              "    <tr>\n",
              "      <th class=\"blank level0\" >&nbsp;</th>\n",
              "      <th id=\"T_94418_level0_col0\" class=\"col_heading level0 col0\" >name</th>\n",
              "      <th id=\"T_94418_level0_col1\" class=\"col_heading level0 col1\" >description</th>\n",
              "    </tr>\n",
              "  </thead>\n",
              "  <tbody>\n",
              "    <tr>\n",
              "      <th id=\"T_94418_level0_row0\" class=\"row_heading level0 row0\" >0</th>\n",
              "      <td id=\"T_94418_row0_col0\" class=\"data row0 col0\" >Christmas</td>\n",
              "      <td id=\"T_94418_row0_col1\" class=\"data row0 col1\" >Christmas is a Christian holiday commemorating the birth of Jesus Christ, celebrated on December 25th.</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th id=\"T_94418_level0_row1\" class=\"row_heading level0 row1\" >1</th>\n",
              "      <td id=\"T_94418_row1_col0\" class=\"data row1 col0\" >Easter</td>\n",
              "      <td id=\"T_94418_row1_col1\" class=\"data row1 col1\" >Easter is a Christian holiday celebrating the resurrection of Jesus Christ, typically observed on the first Sunday after the full moon following the vernal equinox.</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th id=\"T_94418_level0_row2\" class=\"row_heading level0 row2\" >2</th>\n",
              "      <td id=\"T_94418_row2_col0\" class=\"data row2 col0\" >Hanukkah</td>\n",
              "      <td id=\"T_94418_row2_col1\" class=\"data row2 col1\" >Hanukkah is a Jewish holiday celebrating the rededication of the Second Temple in Jerusalem, observed for eight days and nights beginning on the 25th day of Kislev in the Hebrew calendar.</td>\n",
              "    </tr>\n",
              "  </tbody>\n",
              "</table>\n"
            ],
            "text/plain": [
              "<pandas.io.formats.style.Styler at 0x7f3dd46d7340>"
            ]
          },
          "execution_count": 14,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "pd.set_option('display.max_colwidth', None)\n",
        "\n",
        "parsed = json.loads(output)  # 'parsed' avoids shadowing the built-in dict\n",
        "pd.json_normalize(parsed[\"holidays\"]).style.set_properties(**{'text-align': 'left'})"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "provenance": [],
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
