{
  "cells": [
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/generation/llm-field-guide/open-llama/retrieval-augmentation-open-llama-langchain.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/generation/llm-field-guide/open-llama/retrieval-augmentation-open-llama-langchain.ipynb)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "_nM1Ik-GPKJT"
      },
      "source": [
        "# Retrieval Augmentation with Open-Llama and LangChain"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "RYoCxcBvPYjy"
      },
      "source": [
        "Large Language Models (LLMs) have a data freshness problem. Even the most powerful LLMs in the world, like GPT-4, know nothing about recent world events.\n",
        "\n",
        "The world of LLMs is frozen in time. Their knowledge is a static snapshot of the world as it appeared in their training data.\n",
        "\n",
        "A solution to this problem is retrieval augmentation: we retrieve relevant information from an external knowledge base and pass that information to our LLM. In this notebook we will learn how to do that with the open-source `Open-Llama` model from Hugging Face and the `LangChain` library.\n",
        "<br><br>\n",
        "\n",
        "---\n",
        "\n",
        "\ud83d\udea8 _Note that running this on CPU is practically impossible, as it would take a very long time. You need ~28GB of GPU memory to run this notebook. If running on Google Colab, go to **Runtime > Change runtime type > Hardware accelerator > GPU > GPU type > A100 > Runtime shape > High RAM**._\n",
        "\n",
        "---\n",
        "\n",
        "<br><br>\n",
        "We start with a small workaround for Colab's locale settings, then `pip install` all required libraries.\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "id": "PpZvwSo3hKVd"
      },
      "outputs": [],
      "source": [
        "import locale\n",
        "locale.getpreferredencoding = lambda: \"UTF-8\""
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "NTdsmU0AOZkL",
        "outputId": "1bac0b5b-7cb5-48bb-fc7f-206791dd8d80"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m7.2/7.2 MB\u001b[0m \u001b[31m81.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m86.0/86.0 kB\u001b[0m \u001b[31m11.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m1.3/1.3 MB\u001b[0m \u001b[31m64.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m244.2/244.2 kB\u001b[0m \u001b[31m27.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m42.2/42.2 kB\u001b[0m \u001b[31m5.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m1.3/1.3 MB\u001b[0m \u001b[31m80.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m109.1/109.1 MB\u001b[0m \u001b[31m13.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m92.5/92.5 MB\u001b[0m \u001b[31m16.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m268.8/268.8 kB\u001b[0m \u001b[31m30.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m7.8/7.8 MB\u001b[0m \u001b[31m104.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m1.3/1.3 MB\u001b[0m \u001b[31m78.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m90.0/90.0 kB\u001b[0m \u001b[31m11.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m49.1/49.1 kB\u001b[0m \u001b[31m6.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h  Building wheel for sentence-transformers (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
          ]
        }
      ],
      "source": [
        "!pip install -qU \\\n",
        "    transformers \\\n",
        "    sentence-transformers \\\n",
        "    sentencepiece \\\n",
        "    accelerate \\\n",
        "    einops \\\n",
        "    langchain \\\n",
        "    xformers \\\n",
        "    bitsandbytes \\\n",
        "    pinecone-client==3.1.0"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "POJyq_bIQuy1"
      },
      "source": [
        "## Initializing the Hugging Face Pipeline\n",
        "\n",
        "The first thing we need to do is initialize a `text-generation` pipeline with Hugging Face transformers. The pipeline requires three things that we must initialize first:\n",
        "\n",
        "* An LLM, in this case `openlm-research/open_llama_7b_v2`.\n",
        "\n",
        "* The respective tokenizer for the model.\n",
        "\n",
        "* A stopping criteria object.\n",
        "\n",
        "We'll explain these as we get to them. Let's begin with our model.\n",
        "\n",
        "We initialize the model and load it onto our CUDA-enabled GPU. On Colab, downloading and initializing the model can take 5-10 minutes."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 258,
          "referenced_widgets": [
            "f124483ad3874b19870d53628ef6c4dc",
            "29ad78bd342d4be58042ca904c41a7e9",
            "001d4d9efe364bdba699c2a34c3cc238",
            "f48b6cfabc18418b866072caf4db8ca0",
            "60fd848abeae46f882f623bb6ccf821d",
            "4122206deb8b4713961c2a38fb537a1e",
            "129b6acee86c4ea08c0ac460aed6f7ce",
            "2763f79398134f61af5f315071d0c70e",
            "075893d8193b411c89b1ebb3281e1d1a",
            "c4819a68c67d433dbba2193e35d880e9",
            "d72fb9416f404231b2f1730acab8d6ea",
            "03b8dd5186874adeb8045b766304fe70",
            "f658145a5ecd452da9b727c5439e4fe7",
            "2e8d3c6a81be446eb893579f93c39f55",
            "200b58f6d48e41e79c2841c71853d38a",
            "f0d52473ec8049399699786d0e27aaf0",
            "817f81197a554d05851fc10505d6b57e",
            "8449ee426a2347da99001a72070217e5",
            "dc8e5470104b45c183e79c7685e799bd",
            "4d338afd3c86467eb29668addcec16b3",
            "c4955eba6bda43a582a8719d8bbd5147",
            "8bbab20fc74f49bd8d7cc388f77e9b43",
            "29b8ce5264f14ba5afe675dc0797cd74",
            "733e852e197f4ca8aded0d3bd94ff319",
            "69b6c893f9e44781b79741e6e34901f6",
            "f5f495715a6149c9b275ecc152f98990",
            "c2e6d9f9ed094d56b446b8cfd1c1ab14",
            "ef234d7e8f614dcab03c6c638409f0b2",
            "846d6b9f840e4cc3b49a53a0c53dda1b",
            "d835e39eb26d43148427ad7d6b4f8d2a",
            "72164f66531842698e11b764074efbb8",
            "da540b0fd96a406586293fad8bc57865",
            "70abd77bb3544406bed91b10cf7261b0",
            "f4fa1c54bbd942cd98734cc467994b87",
            "488a2779e377473f9cf76493fe912991",
            "583b2306d521430f8db03134f501f803",
            "359456b26e804c2197e8ddcf89f53823",
            "beba38f6f9334b8b9c77391ea8f43bf5",
            "46b5c52c200d44d9919f73a57a348ceb",
            "4cd99d41162f437db13b98a56eda132d",
            "f2e8373e2d394984b087216c7695a193",
            "f55e6360ba7b404dad043377472774cc",
            "87984c192a134f33a52e2edc8cf9c266",
            "08e6c18e9a19411baa10e5ca49796551",
            "c10cd6b137e34325accdccd60dfdb412",
            "ff8473d2ea484e92813232903c8ba098",
            "a2ad67fb23464a2b80945bf71a48b1b4",
            "921eac3f554c4f7285e4e7fdf203fe2e",
            "828335d7fc164a96ba03434e7abcb9be",
            "18074cac4d194ccca8a1442dfd03fdb3",
            "696aae7fb47749c785379605f71f1b8a",
            "ba3fa52117f04d62a12951d9d7c37b25",
            "2bf6c66d589f4b9bb6cea02b9684fa18",
            "895bfe5f85e144948b0085276ae3985a",
            "10316d75b4a44c6994ee7fe15e36ae28",
            "13175bd77faa449db9ba9fdd1c78d260",
            "3ad8608d8bd24dd7b0188ea420dbea5d",
            "492973fc5cfd4185ad5d8b3a44111820",
            "35294eecb7c84acebaa035635fd7f386",
            "476d05edb115423fb6d0557341291251",
            "486f9cb799b441f98ce921d3e4e3e982",
            "05dc95ca5de74ed5870866c4bb61a294",
            "ba04913942df4d6cb367c7c64c282c3e",
            "bbefd53024894546abda249d96e3953b",
            "4cc8a144a29e4f9dbf29f734335ee02a",
            "2315a4f4184248dc97de96908853d29c",
            "95e3e619f84c417db14557c7c804c445",
            "1c30625b4ce64710ae30f6e6b42e073f",
            "2e0c111a26f84519b7426b56b76054c1",
            "22625a0c748a426ebd17e01163386055",
            "2b64b12f06524bc0974a547f41886b98",
            "85eddc51f48f41de89e6ecd167b875b3",
            "ae6eda36b005417caa3efc34e22ccaaf",
            "1da6d6b1c0da49b88d23b503595390cb",
            "87002b4df7874324841365537634aeff",
            "302bca54d17c484fbe3d14dc522a3979",
            "20f5801a5d27448bb1b73ed740e98e0f"
          ]
        },
        "id": "JRQ_LV5CQ-pE",
        "outputId": "4d4fbe22-4da9-4b50-d1c2-35ae1411b29a"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "f124483ad3874b19870d53628ef6c4dc",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)lve/main/config.json:   0%|          | 0.00/502 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "03b8dd5186874adeb8045b766304fe70",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)model.bin.index.json:   0%|          | 0.00/26.8k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "29b8ce5264f14ba5afe675dc0797cd74",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading shards:   0%|          | 0/2 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "f4fa1c54bbd942cd98734cc467994b87",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)l-00001-of-00002.bin:   0%|          | 0.00/9.98G [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "c10cd6b137e34325accdccd60dfdb412",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)l-00002-of-00002.bin:   0%|          | 0.00/3.50G [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "13175bd77faa449db9ba9fdd1c78d260",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "95e3e619f84c417db14557c7c804c445",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)neration_config.json:   0%|          | 0.00/132 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Model loaded on cuda:0\n"
          ]
        }
      ],
      "source": [
        "from torch import cuda, bfloat16\n",
        "import transformers\n",
        "\n",
        "model_name = 'openlm-research/open_llama_7b_v2'\n",
        "\n",
        "device = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu'\n",
        "\n",
        "# set quantization configuration to load large model with less GPU memory\n",
        "# this requires the `bitsandbytes` library\n",
        "bnb_config = transformers.BitsAndBytesConfig(\n",
        "    load_in_4bit=True,\n",
        "    bnb_4bit_quant_type='nf4',\n",
        "    bnb_4bit_use_double_quant=True,\n",
        "    bnb_4bit_compute_dtype=bfloat16\n",
        ")\n",
        "\n",
        "model = transformers.AutoModelForCausalLM.from_pretrained(\n",
        "    model_name,\n",
        "    trust_remote_code=True,\n",
        "    quantization_config=bnb_config,\n",
        "    device_map='auto'\n",
        ")\n",
        "model.eval()\n",
        "print(f\"Model loaded on {device}\")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "QOx8nSKvQ9Tn"
      },
      "source": [
        "The pipeline requires a tokenizer, which handles the translation of human-readable plaintext into LLM-readable token IDs. The Open-Llama model was trained using the `openlm-research/open_llama_7b_v2` tokenizer, which we initialize like so:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 113,
          "referenced_widgets": [
            "3406c63ab5b84cfca6bac2f56d31f709",
            "dd0d8fbe85c2480fb7f0218e06d5c7cb",
            "ab84e65d0e4d4a7b923cc78c59f536f8",
            "7839b84f18f64493b4a85c521631a3ea",
            "368b2d6878854c7a8066f89a482f2c30",
            "3e64c8a32cb74a08ba8ca8f5c154e99b",
            "b893b4e4cc6447b7a47376d961cc25f1",
            "11024d3ab5e24256b36c4beb05216610",
            "ab2e7b220ef4446291e98c2cf95b60fb",
            "771632ad1f7943fb87efa1fa82b7fd43",
            "e59088cb1e6348bdb2bf2a183c718307",
            "688d3204476d4dcbb49fc2a93c93b288",
            "4581d3e552de47d28038bfd2586081bc",
            "ab9e35aabbdb4c3097783479ad74d42c",
            "79e6955559da4809acc0cce7e808efbf",
            "cf51600f056a48ff9930c28396b267e3",
            "4d443a6acca0423f9e00e8a40a62c74d",
            "d0e61ede6e8e41f6ba5843d394744aef",
            "86c420ec478940348344d5646a60241a",
            "23c65a01cd97454395c8d5932118e76d",
            "05d3f7d2cfaf48488abac06f13a44bb1",
            "56135ddc4e9c452db686d444bd7b6722",
            "b939f1f8e00b4a5fad21474cf0c458a6",
            "adf2f7ef7fda49f5b44106da4d68c815",
            "cb3ab8e65a57494286334791b08a5215",
            "99880a2ae11746ae9e22969ffb126ece",
            "7d33aa4e90c34dc9b0614afa24907abc",
            "be08baa7ca8c43a9b4d8a895ebb4b75b",
            "1d1889bd10aa4068b0dd9949b0bb7cab",
            "b9a8b6731d39406c9fc9b361e43cab32",
            "bcc2a408208d414d83a4aa87e9847339",
            "2ff51d5ba6914a7e9ca36d2f098c33a8",
            "4f92a1efc1964ca4b8970ff901f82474"
          ]
        },
        "id": "Wsdxk8OIRLg3",
        "outputId": "83813c08-dcc3-4ce0-aa31-25ccc783812c"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "3406c63ab5b84cfca6bac2f56d31f709",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)okenizer_config.json:   0%|          | 0.00/593 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "688d3204476d4dcbb49fc2a93c93b288",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading tokenizer.model:   0%|          | 0.00/512k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "b939f1f8e00b4a5fad21474cf0c458a6",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)cial_tokens_map.json:   0%|          | 0.00/330 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "tokenizer = transformers.AutoTokenizer.from_pretrained(\n",
        "    model_name,\n",
        "    use_fast=False\n",
        ")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "ZU38y0bJRS36"
      },
      "source": [
        "Finally, we need to define the _stopping criteria_ of the model. Stopping criteria let us specify *when* the model should stop generating text. If we don't provide stopping criteria, the model tends to go off on a tangent after answering the initial question.\n",
        "\n",
        "To figure out what the stopping criteria should be, we can start with the *end of sequence* token, `'</s>'`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "izpx9X-KRW7_",
        "outputId": "8f4500ca-61fe-484d-f817-9873571d9892"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[2]"
            ]
          },
          "execution_count": 5,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "tokenizer.convert_tokens_to_ids(['</s>'])"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "AnXIxf_3RaJ_"
      },
      "source": [
        "But this alone is not usually a satisfactory stopping criterion, particularly for less sophisticated models. Instead, we need to find typical finish points for the model. For example, if we are generating a chatbot conversation we might see something like:\n",
        "\n",
        "```\n",
        "User: {some query}\n",
        "Assistant: {the generated answer}\n",
        "User: ...\n",
        "```\n",
        "\n",
        "Here everything past the first `Assistant:` is generated, including the next line of `User:`. The LLM may continue generating the conversation beyond the `Assistant:` output because it is simply predicting the conversation; it doesn't necessarily know that it should stop after providing the *one* `Assistant:` response.\n",
        "\n",
        "With that in mind, we can specify `User:` as a stopping condition, whose token IDs we can find with:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Tob-suhrRa-Q",
        "outputId": "2c45a790-ace8-4767-c168-3fbf9d6cbab0"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[4051, 29537]"
            ]
          },
          "execution_count": 6,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "tokenizer.convert_tokens_to_ids(['User', ':'])"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "RQxSlr26Reio"
      },
      "source": [
        "The reason we don't write `'User:'` directly is that it produces an **unknown** token: no single `'User:'` token exists in the vocabulary, so the string is instead represented by the two tokens `['User', ':']`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "H1YpbnrIRfI7",
        "outputId": "45b30f6c-9d5f-4406-82da-d26aafb25c0e"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "[0] ['<unk>']\n"
          ]
        }
      ],
      "source": [
        "unk_token = tokenizer.convert_tokens_to_ids(['User:'])\n",
        "unk_token_id = tokenizer.convert_ids_to_tokens(unk_token)\n",
        "print(unk_token, unk_token_id)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "FazFx8UsRiho"
      },
      "source": [
        "We repeat this for various possible stopping conditions to create our `stop_token_ids` list:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "2hobBlq6RjU5",
        "outputId": "6ce78fe0-8727-4c18-ee48-f5f3868bd553"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[[2], [4051, 29537], [9533, 29537], [9427, 29537]]"
            ]
          },
          "execution_count": 8,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "stop_token_ids = [\n",
        "    tokenizer.convert_tokens_to_ids(x) for x in [\n",
        "        ['</s>'], ['User', ':'], ['system', ':'],\n",
        "        [tokenizer.convert_ids_to_tokens([9427])[0], ':']\n",
        "    ]\n",
        "]\n",
        "\n",
        "stop_token_ids"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "kghZyq-cRmGg"
      },
      "source": [
        "We also need to convert these to `LongTensor` objects:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "nXQiVN6NRplg",
        "outputId": "f7720a02-f990-41dc-f7be-403603b4a3bc"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[tensor([2], device='cuda:0'),\n",
              " tensor([ 4051, 29537], device='cuda:0'),\n",
              " tensor([ 9533, 29537], device='cuda:0'),\n",
              " tensor([ 9427, 29537], device='cuda:0')]"
            ]
          },
          "execution_count": 9,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "import torch\n",
        "\n",
        "stop_token_ids = [torch.LongTensor(x).to(device) for x in stop_token_ids]\n",
        "stop_token_ids"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "Z3vKpDY_RswI"
      },
      "source": [
        "We can do a quick spot check that no `<unk>` token IDs (`0`) appear in `stop_token_ids`. There are none, so we can move on to building the stopping criteria object, which will check whether any of these token ID combinations have been generated."
      ]
    },
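    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "That spot check can be written as a one-line assertion (an illustrative sketch, not part of the original run):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# illustrative spot check: the <unk> token ID (0) should never\n",
        "# appear in any of our stop sequences\n",
        "assert all(0 not in ids for ids in stop_token_ids)"
      ]
    },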
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "id": "Iii5kR5vRxbx"
      },
      "outputs": [],
      "source": [
        "import torch\n",
        "from transformers import StoppingCriteria, StoppingCriteriaList\n",
        "\n",
        "# define custom stopping criteria object\n",
        "class StopOnTokens(StoppingCriteria):\n",
        "    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:\n",
        "        for stop_ids in stop_token_ids:\n",
        "            if torch.eq(input_ids[0][-len(stop_ids):], stop_ids).all():\n",
        "                return True\n",
        "        return False\n",
        "\n",
        "stopping_criteria = StoppingCriteriaList([StopOnTokens()])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "S1OIj_L2R0MS",
        "outputId": "aa7f4b19-d7d5-4d92-88b7-0ec991763c44"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "False"
            ]
          },
          "execution_count": 11,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# this should return False because there are no \"stop criteria\" tokens\n",
        "stopping_criteria(\n",
        "    torch.LongTensor([[1, 2, 3, 5000, 90000]]).to(device),\n",
        "    torch.FloatTensor([0.0])\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "90quhK3kR2Yp",
        "outputId": "2a6021bb-2098-4536-9aa4-494d6fca3869"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "True"
            ]
          },
          "execution_count": 12,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# this should return True because there ARE \"stop criteria\" tokens\n",
        "stopping_criteria(\n",
        "    torch.LongTensor([[1, 2, 3, 4051, 29537]]).to(device),\n",
        "    torch.FloatTensor([0.0])\n",
        ")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "XI5tzEZ4SGcV"
      },
      "source": [
        "Now we're ready to initialize the HF pipeline. There are a few additional parameters that we must define here. Comments explaining these have been included in the code."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 13,
      "metadata": {
        "id": "RYj_AoLGSIZh"
      },
      "outputs": [],
      "source": [
        "generate_text = transformers.pipeline(\n",
        "    model=model, tokenizer=tokenizer,\n",
        "    return_full_text=True,  # langchain expects the full text\n",
        "    task='text-generation',\n",
        "    # we pass model parameters here too\n",
        "    stopping_criteria=stopping_criteria,  # without this model will ramble\n",
        "    temperature=0.0,  # 'randomness' of outputs, 0.0 is the min and 1.0 the max\n",
        "    top_p=0.15,  # select from top tokens whose probability add up to 15%\n",
        "    top_k=0,  # 0 disables top-k filtering, so we rely on top_p instead\n",
        "    max_new_tokens=256,  # max number of tokens to generate in the output\n",
        "    repetition_penalty=1.2  # without this output begins repeating\n",
        ")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "DCMpG6efSKWh"
      },
      "source": [
        "Confirm this is working:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 14,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "0Xd63N2XSKf_",
        "outputId": "06769feb-3b83-47d2-edc1-10db6dac5c6f"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Do I need to get my pet tested for COVID-19?\n",
            "No. There is no evidence that pets can contract or spread the virus, and there are currently not any tests available in Canada specifically designed for animals..\n"
          ]
        }
      ],
      "source": [
        "res = generate_text(\"Do I need to get my pet tested for COVID-19?\")\n",
        "print(res[0][\"generated_text\"])"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "CkVvbOzfSW8p"
      },
      "source": [
        "..."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 15,
      "metadata": {
        "id": "AnvbJEf8SiaJ"
      },
      "outputs": [],
      "source": [
        "from langchain.llms import HuggingFacePipeline\n",
        "\n",
        "llm = HuggingFacePipeline(pipeline=generate_text)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "JxOKqQ49U6jY"
      },
      "source": [
        "## Retrieval Augmentation\n",
        "\n",
        "### Building the Knowledge Base"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 16,
      "metadata": {
        "id": "Z8Dj7mwE2X1I"
      },
      "outputs": [],
      "source": [
        "!pip install -qU kaggle==1.5.15"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 17,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "duBNcR5RORT-",
        "outputId": "356fc58a-11d3-4537-d15e-0b719bebbdb8"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Could not find kaggle.json. Make sure it's located in /root/.kaggle. Or use the environment method.\n"
          ]
        }
      ],
      "source": [
        "try:\n",
        "    import kaggle\n",
        "except OSError as e:\n",
        "    print(e)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 18,
      "metadata": {
        "id": "n4vsWPXVOTy0"
      },
      "outputs": [],
      "source": [
        "import json\n",
        "\n",
        "KAGGLE_USERNAME = \"YOUR_KAGGLE_USERNAME\"\n",
        "KAGGLE_KEY = \"YOUR_KAGGLE_KEY\"\n",
        "\n",
        "with open('/root/.kaggle/kaggle.json', 'w') as fp:\n",
        "    fp.write(json.dumps({\"username\": KAGGLE_USERNAME,\"key\": KAGGLE_KEY}))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 19,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "K3WoO1dPU_ew",
        "outputId": "57546829-379f-488f-e158-9aed253caaa8"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'\n",
            "Downloading covid19-related-faqs.zip to /content\n",
            "  0% 0.00/29.9k [00:00<?, ?B/s]\n",
            "100% 29.9k/29.9k [00:00<00:00, 1.74MB/s]\n"
          ]
        }
      ],
      "source": [
        "!kaggle datasets download -d deepann/covid19-related-faqs"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 20,
      "metadata": {
        "id": "M1KaULRZOqRS"
      },
      "outputs": [],
      "source": [
        "import zipfile\n",
        "\n",
        "with zipfile.ZipFile(\"/content/covid19-related-faqs.zip\", 'r') as zip_ref:\n",
        "        zip_ref.extractall('./')"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 21,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 206
        },
        "id": "86OzEg8wOv1I",
        "outputId": "e5062651-010a-4557-f157-8f8ac00392db"
      },
      "outputs": [
        {
          "data": {
            "text/html": [
              "\n",
              "\n",
              "  <div id=\"df-dd8b2f42-6abb-4cc2-ac62-a89f071647b6\">\n",
              "    <div class=\"colab-df-container\">\n",
              "      <div>\n",
              "<style scoped>\n",
              "    .dataframe tbody tr th:only-of-type {\n",
              "        vertical-align: middle;\n",
              "    }\n",
              "\n",
              "    .dataframe tbody tr th {\n",
              "        vertical-align: top;\n",
              "    }\n",
              "\n",
              "    .dataframe thead th {\n",
              "        text-align: right;\n",
              "    }\n",
              "</style>\n",
              "<table border=\"1\" class=\"dataframe\">\n",
              "  <thead>\n",
              "    <tr style=\"text-align: right;\">\n",
              "      <th></th>\n",
              "      <th>questions</th>\n",
              "      <th>answers</th>\n",
              "    </tr>\n",
              "  </thead>\n",
              "  <tbody>\n",
              "    <tr>\n",
              "      <th>0</th>\n",
              "      <td>What is a novel coronavirus?</td>\n",
              "      <td>A novel coronavirus is a new coronavirus that ...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>1</th>\n",
              "      <td>Why is the disease being called coronavirus di...</td>\n",
              "      <td>On February 11, 2020 the World Health Organiza...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>2</th>\n",
              "      <td>How does the virus spread?</td>\n",
              "      <td>The virus that causes COVID-19 is thought to s...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>3</th>\n",
              "      <td>Can I get COVID-19 from food (including restau...</td>\n",
              "      <td>Currently there is no evidence that people can...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>4</th>\n",
              "      <td>Will warm weather stop the outbreak of COVID-19?</td>\n",
              "      <td>It is not yet known whether weather and temper...</td>\n",
              "    </tr>\n",
              "  </tbody>\n",
              "</table>\n",
              "</div>\n",
              "      <button class=\"colab-df-convert\" onclick=\"convertToInteractive('df-dd8b2f42-6abb-4cc2-ac62-a89f071647b6')\"\n",
              "              title=\"Convert this dataframe to an interactive table.\"\n",
              "              style=\"display:none;\">\n",
              "\n",
              "  <svg xmlns=\"http://www.w3.org/2000/svg\" height=\"24px\"viewBox=\"0 0 24 24\"\n",
              "       width=\"24px\">\n",
              "    <path d=\"M0 0h24v24H0V0z\" fill=\"none\"/>\n",
              "    <path d=\"M18.56 5.44l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94zm-11 1L8.5 8.5l.94-2.06 2.06-.94-2.06-.94L8.5 2.5l-.94 2.06-2.06.94zm10 10l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94z\"/><path d=\"M17.41 7.96l-1.37-1.37c-.4-.4-.92-.59-1.43-.59-.52 0-1.04.2-1.43.59L10.3 9.45l-7.72 7.72c-.78.78-.78 2.05 0 2.83L4 21.41c.39.39.9.59 1.41.59.51 0 1.02-.2 1.41-.59l7.78-7.78 2.81-2.81c.8-.78.8-2.07 0-2.86zM5.41 20L4 18.59l7.72-7.72 1.47 1.35L5.41 20z\"/>\n",
              "  </svg>\n",
              "      </button>\n",
              "\n",
              "\n",
              "\n",
              "    <div id=\"df-6d473895-4abe-43c7-9eda-046f1f0b3ca0\">\n",
              "      <button class=\"colab-df-quickchart\" onclick=\"quickchart('df-6d473895-4abe-43c7-9eda-046f1f0b3ca0')\"\n",
              "              title=\"Suggest charts.\"\n",
              "              style=\"display:none;\">\n",
              "\n",
              "<svg xmlns=\"http://www.w3.org/2000/svg\" height=\"24px\"viewBox=\"0 0 24 24\"\n",
              "     width=\"24px\">\n",
              "    <g>\n",
              "        <path d=\"M19 3H5c-1.1 0-2 .9-2 2v14c0 1.1.9 2 2 2h14c1.1 0 2-.9 2-2V5c0-1.1-.9-2-2-2zM9 17H7v-7h2v7zm4 0h-2V7h2v10zm4 0h-2v-4h2v4z\"/>\n",
              "    </g>\n",
              "</svg>\n",
              "      </button>\n",
              "    </div>\n",
              "\n",
              "<style>\n",
              "  .colab-df-quickchart {\n",
              "    background-color: #E8F0FE;\n",
              "    border: none;\n",
              "    border-radius: 50%;\n",
              "    cursor: pointer;\n",
              "    display: none;\n",
              "    fill: #1967D2;\n",
              "    height: 32px;\n",
              "    padding: 0 0 0 0;\n",
              "    width: 32px;\n",
              "  }\n",
              "\n",
              "  .colab-df-quickchart:hover {\n",
              "    background-color: #E2EBFA;\n",
              "    box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\n",
              "    fill: #174EA6;\n",
              "  }\n",
              "\n",
              "  [theme=dark] .colab-df-quickchart {\n",
              "    background-color: #3B4455;\n",
              "    fill: #D2E3FC;\n",
              "  }\n",
              "\n",
              "  [theme=dark] .colab-df-quickchart:hover {\n",
              "    background-color: #434B5C;\n",
              "    box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\n",
              "    filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\n",
              "    fill: #FFFFFF;\n",
              "  }\n",
              "</style>\n",
              "\n",
              "    <script>\n",
              "      async function quickchart(key) {\n",
              "        const containerElement = document.querySelector('#' + key);\n",
              "        const charts = await google.colab.kernel.invokeFunction(\n",
              "            'suggestCharts', [key], {});\n",
              "      }\n",
              "    </script>\n",
              "\n",
              "      <script>\n",
              "\n",
              "function displayQuickchartButton(domScope) {\n",
              "  let quickchartButtonEl =\n",
              "    domScope.querySelector('#df-6d473895-4abe-43c7-9eda-046f1f0b3ca0 button.colab-df-quickchart');\n",
              "  quickchartButtonEl.style.display =\n",
              "    google.colab.kernel.accessAllowed ? 'block' : 'none';\n",
              "}\n",
              "\n",
              "        displayQuickchartButton(document);\n",
              "      </script>\n",
              "      <style>\n",
              "    .colab-df-container {\n",
              "      display:flex;\n",
              "      flex-wrap:wrap;\n",
              "      gap: 12px;\n",
              "    }\n",
              "\n",
              "    .colab-df-convert {\n",
              "      background-color: #E8F0FE;\n",
              "      border: none;\n",
              "      border-radius: 50%;\n",
              "      cursor: pointer;\n",
              "      display: none;\n",
              "      fill: #1967D2;\n",
              "      height: 32px;\n",
              "      padding: 0 0 0 0;\n",
              "      width: 32px;\n",
              "    }\n",
              "\n",
              "    .colab-df-convert:hover {\n",
              "      background-color: #E2EBFA;\n",
              "      box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\n",
              "      fill: #174EA6;\n",
              "    }\n",
              "\n",
              "    [theme=dark] .colab-df-convert {\n",
              "      background-color: #3B4455;\n",
              "      fill: #D2E3FC;\n",
              "    }\n",
              "\n",
              "    [theme=dark] .colab-df-convert:hover {\n",
              "      background-color: #434B5C;\n",
              "      box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\n",
              "      filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\n",
              "      fill: #FFFFFF;\n",
              "    }\n",
              "  </style>\n",
              "\n",
              "      <script>\n",
              "        const buttonEl =\n",
              "          document.querySelector('#df-dd8b2f42-6abb-4cc2-ac62-a89f071647b6 button.colab-df-convert');\n",
              "        buttonEl.style.display =\n",
              "          google.colab.kernel.accessAllowed ? 'block' : 'none';\n",
              "\n",
              "        async function convertToInteractive(key) {\n",
              "          const element = document.querySelector('#df-dd8b2f42-6abb-4cc2-ac62-a89f071647b6');\n",
              "          const dataTable =\n",
              "            await google.colab.kernel.invokeFunction('convertToInteractive',\n",
              "                                                     [key], {});\n",
              "          if (!dataTable) return;\n",
              "\n",
              "          const docLinkHtml = 'Like what you see? Visit the ' +\n",
              "            '<a target=\"_blank\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\n",
              "            + ' to learn more about interactive tables.';\n",
              "          element.innerHTML = '';\n",
              "          dataTable['output_type'] = 'display_data';\n",
              "          await google.colab.output.renderOutput(dataTable, element);\n",
              "          const docLink = document.createElement('div');\n",
              "          docLink.innerHTML = docLinkHtml;\n",
              "          element.appendChild(docLink);\n",
              "        }\n",
              "      </script>\n",
              "    </div>\n",
              "  </div>\n"
            ],
            "text/plain": [
              "                                           questions  \\\n",
              "0                       What is a novel coronavirus?   \n",
              "1  Why is the disease being called coronavirus di...   \n",
              "2                         How does the virus spread?   \n",
              "3  Can I get COVID-19 from food (including restau...   \n",
              "4   Will warm weather stop the outbreak of COVID-19?   \n",
              "\n",
              "                                             answers  \n",
              "0  A novel coronavirus is a new coronavirus that ...  \n",
              "1  On February 11, 2020 the World Health Organiza...  \n",
              "2  The virus that causes COVID-19 is thought to s...  \n",
              "3  Currently there is no evidence that people can...  \n",
              "4  It is not yet known whether weather and temper...  "
            ]
          },
          "execution_count": 21,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "import pandas as pd\n",
        "\n",
        "data = pd.read_csv(\"/content/covid_faq.csv\")\n",
        "data.head()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 22,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 206
        },
        "id": "JHmwNycBQbIi",
        "outputId": "96482ef6-7d8a-468a-8522-aa62029f321f"
      },
      "outputs": [
        {
          "data": {
            "text/html": [
              "\n",
              "\n",
              "  <div id=\"df-935c9c9f-9fd2-4bac-90c6-7ce0490de0a0\">\n",
              "    <div class=\"colab-df-container\">\n",
              "      <div>\n",
              "<style scoped>\n",
              "    .dataframe tbody tr th:only-of-type {\n",
              "        vertical-align: middle;\n",
              "    }\n",
              "\n",
              "    .dataframe tbody tr th {\n",
              "        vertical-align: top;\n",
              "    }\n",
              "\n",
              "    .dataframe thead th {\n",
              "        text-align: right;\n",
              "    }\n",
              "</style>\n",
              "<table border=\"1\" class=\"dataframe\">\n",
              "  <thead>\n",
              "    <tr style=\"text-align: right;\">\n",
              "      <th></th>\n",
              "      <th>question</th>\n",
              "      <th>answer</th>\n",
              "      <th>id</th>\n",
              "    </tr>\n",
              "  </thead>\n",
              "  <tbody>\n",
              "    <tr>\n",
              "      <th>0</th>\n",
              "      <td>What is a novel coronavirus?</td>\n",
              "      <td>A novel coronavirus is a new coronavirus that ...</td>\n",
              "      <td>0</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>1</th>\n",
              "      <td>Why is the disease being called coronavirus di...</td>\n",
              "      <td>On February 11, 2020 the World Health Organiza...</td>\n",
              "      <td>1</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>2</th>\n",
              "      <td>How does the virus spread?</td>\n",
              "      <td>The virus that causes COVID-19 is thought to s...</td>\n",
              "      <td>2</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>3</th>\n",
              "      <td>Can I get COVID-19 from food (including restau...</td>\n",
              "      <td>Currently there is no evidence that people can...</td>\n",
              "      <td>3</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>4</th>\n",
              "      <td>Will warm weather stop the outbreak of COVID-19?</td>\n",
              "      <td>It is not yet known whether weather and temper...</td>\n",
              "      <td>4</td>\n",
              "    </tr>\n",
              "  </tbody>\n",
              "</table>\n",
              "</div>\n",
              "      <button class=\"colab-df-convert\" onclick=\"convertToInteractive('df-935c9c9f-9fd2-4bac-90c6-7ce0490de0a0')\"\n",
              "              title=\"Convert this dataframe to an interactive table.\"\n",
              "              style=\"display:none;\">\n",
              "\n",
              "  <svg xmlns=\"http://www.w3.org/2000/svg\" height=\"24px\"viewBox=\"0 0 24 24\"\n",
              "       width=\"24px\">\n",
              "    <path d=\"M0 0h24v24H0V0z\" fill=\"none\"/>\n",
              "    <path d=\"M18.56 5.44l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94zm-11 1L8.5 8.5l.94-2.06 2.06-.94-2.06-.94L8.5 2.5l-.94 2.06-2.06.94zm10 10l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94z\"/><path d=\"M17.41 7.96l-1.37-1.37c-.4-.4-.92-.59-1.43-.59-.52 0-1.04.2-1.43.59L10.3 9.45l-7.72 7.72c-.78.78-.78 2.05 0 2.83L4 21.41c.39.39.9.59 1.41.59.51 0 1.02-.2 1.41-.59l7.78-7.78 2.81-2.81c.8-.78.8-2.07 0-2.86zM5.41 20L4 18.59l7.72-7.72 1.47 1.35L5.41 20z\"/>\n",
              "  </svg>\n",
              "      </button>\n",
              "\n",
              "\n",
              "\n",
              "    <div id=\"df-cd023eb2-c019-4ac5-bbe5-6e42e69e2240\">\n",
              "      <button class=\"colab-df-quickchart\" onclick=\"quickchart('df-cd023eb2-c019-4ac5-bbe5-6e42e69e2240')\"\n",
              "              title=\"Suggest charts.\"\n",
              "              style=\"display:none;\">\n",
              "\n",
              "<svg xmlns=\"http://www.w3.org/2000/svg\" height=\"24px\"viewBox=\"0 0 24 24\"\n",
              "     width=\"24px\">\n",
              "    <g>\n",
              "        <path d=\"M19 3H5c-1.1 0-2 .9-2 2v14c0 1.1.9 2 2 2h14c1.1 0 2-.9 2-2V5c0-1.1-.9-2-2-2zM9 17H7v-7h2v7zm4 0h-2V7h2v10zm4 0h-2v-4h2v4z\"/>\n",
              "    </g>\n",
              "</svg>\n",
              "      </button>\n",
              "    </div>\n",
              "\n",
              "<style>\n",
              "  .colab-df-quickchart {\n",
              "    background-color: #E8F0FE;\n",
              "    border: none;\n",
              "    border-radius: 50%;\n",
              "    cursor: pointer;\n",
              "    display: none;\n",
              "    fill: #1967D2;\n",
              "    height: 32px;\n",
              "    padding: 0 0 0 0;\n",
              "    width: 32px;\n",
              "  }\n",
              "\n",
              "  .colab-df-quickchart:hover {\n",
              "    background-color: #E2EBFA;\n",
              "    box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\n",
              "    fill: #174EA6;\n",
              "  }\n",
              "\n",
              "  [theme=dark] .colab-df-quickchart {\n",
              "    background-color: #3B4455;\n",
              "    fill: #D2E3FC;\n",
              "  }\n",
              "\n",
              "  [theme=dark] .colab-df-quickchart:hover {\n",
              "    background-color: #434B5C;\n",
              "    box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\n",
              "    filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\n",
              "    fill: #FFFFFF;\n",
              "  }\n",
              "</style>\n",
              "\n",
              "    <script>\n",
              "      async function quickchart(key) {\n",
              "        const containerElement = document.querySelector('#' + key);\n",
              "        const charts = await google.colab.kernel.invokeFunction(\n",
              "            'suggestCharts', [key], {});\n",
              "      }\n",
              "    </script>\n",
              "\n",
              "      <script>\n",
              "\n",
              "function displayQuickchartButton(domScope) {\n",
              "  let quickchartButtonEl =\n",
              "    domScope.querySelector('#df-cd023eb2-c019-4ac5-bbe5-6e42e69e2240 button.colab-df-quickchart');\n",
              "  quickchartButtonEl.style.display =\n",
              "    google.colab.kernel.accessAllowed ? 'block' : 'none';\n",
              "}\n",
              "\n",
              "        displayQuickchartButton(document);\n",
              "      </script>\n",
              "      <style>\n",
              "    .colab-df-container {\n",
              "      display:flex;\n",
              "      flex-wrap:wrap;\n",
              "      gap: 12px;\n",
              "    }\n",
              "\n",
              "    .colab-df-convert {\n",
              "      background-color: #E8F0FE;\n",
              "      border: none;\n",
              "      border-radius: 50%;\n",
              "      cursor: pointer;\n",
              "      display: none;\n",
              "      fill: #1967D2;\n",
              "      height: 32px;\n",
              "      padding: 0 0 0 0;\n",
              "      width: 32px;\n",
              "    }\n",
              "\n",
              "    .colab-df-convert:hover {\n",
              "      background-color: #E2EBFA;\n",
              "      box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\n",
              "      fill: #174EA6;\n",
              "    }\n",
              "\n",
              "    [theme=dark] .colab-df-convert {\n",
              "      background-color: #3B4455;\n",
              "      fill: #D2E3FC;\n",
              "    }\n",
              "\n",
              "    [theme=dark] .colab-df-convert:hover {\n",
              "      background-color: #434B5C;\n",
              "      box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\n",
              "      filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\n",
              "      fill: #FFFFFF;\n",
              "    }\n",
              "  </style>\n",
              "\n",
              "      <script>\n",
              "        const buttonEl =\n",
              "          document.querySelector('#df-935c9c9f-9fd2-4bac-90c6-7ce0490de0a0 button.colab-df-convert');\n",
              "        buttonEl.style.display =\n",
              "          google.colab.kernel.accessAllowed ? 'block' : 'none';\n",
              "\n",
              "        async function convertToInteractive(key) {\n",
              "          const element = document.querySelector('#df-935c9c9f-9fd2-4bac-90c6-7ce0490de0a0');\n",
              "          const dataTable =\n",
              "            await google.colab.kernel.invokeFunction('convertToInteractive',\n",
              "                                                     [key], {});\n",
              "          if (!dataTable) return;\n",
              "\n",
              "          const docLinkHtml = 'Like what you see? Visit the ' +\n",
              "            '<a target=\"_blank\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\n",
              "            + ' to learn more about interactive tables.';\n",
              "          element.innerHTML = '';\n",
              "          dataTable['output_type'] = 'display_data';\n",
              "          await google.colab.output.renderOutput(dataTable, element);\n",
              "          const docLink = document.createElement('div');\n",
              "          docLink.innerHTML = docLinkHtml;\n",
              "          element.appendChild(docLink);\n",
              "        }\n",
              "      </script>\n",
              "    </div>\n",
              "  </div>\n"
            ],
            "text/plain": [
              "                                            question  \\\n",
              "0                       What is a novel coronavirus?   \n",
              "1  Why is the disease being called coronavirus di...   \n",
              "2                         How does the virus spread?   \n",
              "3  Can I get COVID-19 from food (including restau...   \n",
              "4   Will warm weather stop the outbreak of COVID-19?   \n",
              "\n",
              "                                              answer  id  \n",
              "0  A novel coronavirus is a new coronavirus that ...   0  \n",
              "1  On February 11, 2020 the World Health Organiza...   1  \n",
              "2  The virus that causes COVID-19 is thought to s...   2  \n",
              "3  Currently there is no evidence that people can...   3  \n",
              "4  It is not yet known whether weather and temper...   4  "
            ]
          },
          "execution_count": 22,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "data = data.rename(columns={\"questions\":\"question\", \"answers\":\"answer\"})\n",
        "data[\"id\"] = data.index\n",
        "data.head()"
      ]
    },
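    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Each record can later be merged into a single question-plus-answer passage before embedding. The snippet below is just a sketch of that idea on a tiny hand-made frame (`sample` is illustrative, not part of the dataset):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import pandas as pd\n",
        "\n",
        "# illustrative frame mirroring the renamed columns above\n",
        "sample = pd.DataFrame({\n",
        "    \"question\": [\"What is a novel coronavirus?\"],\n",
        "    \"answer\": [\"A new coronavirus not previously identified.\"],\n",
        "})\n",
        "\n",
        "# one text chunk per Q&A pair, ready to be embedded\n",
        "texts = (sample[\"question\"] + \"\\n\" + sample[\"answer\"]).tolist()\n",
        "print(texts[0])"
      ]
    },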
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "DFniAj1sVX9A"
      },
      "source": [
        "### Creating Embeddings\n",
        "\n",
        "Building embeddings using LangChain's HuggingFaceEmbeddings is fairly straightforward.\n",
        "To create our embeddings we will use the `MiniLM-L6` sentence transformer model. We initialize it like so:\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 23,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 465,
          "referenced_widgets": [
            "dc90357fe7144420bc631ad9b0730e65",
            "d892163a170444ee97e512555bab4919",
            "3e0b5598b66449579523dac1dadd2e70",
            "f4c95cb0c14542188f54fb28b53a7033",
            "ffa5a3bf6f3043998822eb357fd74aad",
            "fa82b1529ed04a2b875d9ff7baa41b45",
            "0e792c576bed44c88725bdf95bbeff26",
            "6b462bfc69834307a23ef613838b675f",
            "7d70f87d061140ec83f591a49806150b",
            "28a96746b4114512b49e61f283a3106e",
            "8f4f69a507d140afa882103d39d3c02c",
            "b0c47f988ec649288491ab23d6c08a22",
            "850e52c8260f48c6ae77d63ae16db029",
            "b00b74cd66b8481aad7b9c91949cd8a0",
            "d3848bc8d7294ddfb761e8d0461f85ce",
            "f913744ef61d4b2586cc8069498ee4e3",
            "b8ad2dc09f6c495d9b199dda44ca8bfc",
            "f799fb7a042a4bea9df056862068cac3",
            "8bda64bc723541cd9faafb700e6ec3a5",
            "0e82d84f3e114dcfa46ec2f30d601458",
            "5a4aa7b01f5347a582c6b6c5350f217d",
            "339cf1c481e4405abdc6363132074bf8",
            "c1cce943bd404dc792648f304fd9729f",
            "90c5b1f5a8c042b7881e4c5f5203c020",
            "339ab7d25cce4bbf89abdfadc5d25ad2",
            "3938a9f3c5b14d78968a6c6c3658eba6",
            "4e00c5fb8aae4b218d49505fffc637de",
            "ecd0f535c70843499cb76b62aee87c09",
            "0a1b593d2cd149a7877d09b625e641c2",
            "ff256b6f00f34e1d80315f0f49516802",
            "e91fa278e365469e9b269f2c5b8a5586",
            "d599306618f84b57848d4bf58e795335",
            "2d3dc7565cf74564a153ba00260f063c",
            "c7f2d1ac17ac4df1a545631ba54a5512",
            "53c34643a73d42ebb95905141983855a",
            "eae4e99610ee49348e6427cad7f45c50",
            "eefe04ab28c34520ab64ba7633ece349",
            "b4750da77c25428da72c39a3d560e550",
            "2d0ad42ef0404678b4ffe6f384d8bfe4",
            "8eb1c2ca81a147ae81097f0fbc91584d",
            "36f7c79634c94cc4b41595dcd9bca4d9",
            "04ca2341bb334a5fb39bb4c663ac4513",
            "4bb0cbd3bad64b109f64c589bb95bd57",
            "2c51ffdfc2174d26bb1e8abe09219cfa",
            "b102760383af42a3a2ed28887dbb8ddb",
            "0f1566002dae4903a3bcf8a1d7eff3b1",
            "b874bb2071ad45849b2a972672401cfd",
            "38a02e84618f424b8f1d2f48cd739015",
            "c73ef5627bae444c917d9c63b9804782",
            "c1271f01789c45b78e701dd110addd7f",
            "9fdc7d40c7ac4e6294ca68d2f54c3f6e",
            "3e9e207fbfc14825a1e5badfbe16cfc6",
            "524658cc620e432e8803ad8132f3ea04",
            "0d7e9b24d3244473a225ebcc2858346f",
            "bdb383aa57a94a6ba4ac11f2b9dc573b",
            "68d0f5b081824ed6a8ff456c02ab357f",
            "cf87b1a3ca4c43c0bf23bc99c5f9bfdc",
            "37e11a31d3f04cf29da3de27444ca9ac",
            "f9d43d35ea7343bbba93b8084b06d213",
            "7871307db6dd49afafbebd21c30ef641",
            "51ee214c784e46f48f89bd3b839d39d8",
            "9b25d270621449979cf8f6bcde74e9d5",
            "a19ed37e37f2414eb28f337c5f7cb629",
            "558fddcbfcfe4e10a45dc5d7eec8064d",
            "7d584fc66ab24b69af8a20cdc6ab1ab3",
            "2940dad6f889420d93c8d256a4bddb19",
            "3aeb6791bb9b4535be98b86afe0622de",
            "49d3bd7e938d485488440febf776f834",
            "4caac00dd913417981493f91f1904806",
            "6c68f71999974b919bf14327fda250c4",
            "3e9987ee556d45feb1127342e36b6d9c",
            "3c46940790d04af7930377a092695e3b",
            "1d85413bab3243c5a6430d0f2f6d97ac",
            "a0696f9fc07a4d96bdd56b284752d907",
            "56194ee25b1348c5aa44ea473f9732e6",
            "118976ac4b2241298bff291a56dcbb01",
            "8429db93056c46e3941e0576723b16ba",
            "753d074e2a55449e9193542b3e100adf",
            "324db9053c8a4066be23da5ffc226a1e",
            "d22de04c3da64b33ad78da2cad8bfda3",
            "76df82b7637644bebc1ac51408978e95",
            "c344e78f64d9496fb0521e8de04302bd",
            "031255e6b38f41d58438bbaa789ca31a",
            "9dd96334510549ecad27435b1ac72910",
            "057141ec52bf426e856e8f99341babff",
            "42589ad608054b00a30aa40416af2311",
            "79b993e1b79c4f99b503e0db31f3c4cc",
            "1cf85f3d29a84c4782cf5dd2d17e944c",
            "1f62fdd87e3a44ccb70234368a6547f6",
            "8e226f873a9e4396887f814ac2ba0c04",
            "b0546c9e52124de183bb0a60cabb09f1",
            "78ccd03f21be480fad7a14ec0b441204",
            "afec020c326e43a4b1cf15d7e344350e",
            "fdc730f0ec4545e0803154d04b7c2626",
            "26799611a34d47589350249323a927f5",
            "693508fa6fa041348ae8d3d8d210d28d",
            "ce1547a14f304d618f3f02e9b41956ec",
            "7bf2be8063004ad8bb0c84ae178dc448",
            "b331d3606cc94015bc9502ffc7a84972",
            "18e6a00dfc54404c9de40920ad97228b",
            "4ba8d3b0afa841f3b6cf4a1840395e1d",
            "ce47b87b8d3e4317b56b5fb65e0fcd72",
            "2c25a96f1d914b23a897644b1a43d6ac",
            "00f9d26c350b4bc38f7d4f7257be793c",
            "275234d2be4343c3a6714c6d86459739",
            "8bc9e42d023b4d6eac2457941291b12e",
            "c0f223cf54cc4fd4b0e01a67caaa79ae",
            "a5ea70f428384b889762b964030fc719",
            "51b7c29f22ce4993bfd97d186f330cfd",
            "bcb81b1f7bf742b0b8b894f6fba329a7",
            "0d04bc6253214501ac33a805f2837513",
            "5e75faba30b64909b5d4be59dda2f381",
            "3b8e38da5d0440aeb6959703a1e7959f",
            "42da8f52a3f646128933201c0d2e5135",
            "f7f624edb5cc4a3c8ccaa9d46edd171e",
            "565759ebbb2a433abe960e8e868f144e",
            "74f891d0e6d24ce2a97d4a30e5b9dbd2",
            "6bcf10a6862540699b0413eb5196f7f6",
            "8406a3118275438dab5b3530aac5a3bf",
            "ae157d3916e14abb8ab85d2deabbc7d9",
            "307a3a0ccb2f4412b594ba330fec6309",
            "14afb84842ea4241ab673a6074c742ac",
            "83c21a9c88c34b14b7fbf9098a52d00b",
            "c350d9fed1cb48f89dc0fc2750be7c2c",
            "d9642b0874b742e98460e7802003f20e",
            "bdf7e2de768447c0b202998f37e51da3",
            "f25413ca4907470f8c359d75b46c3632",
            "8f5cef7d7cc84a74a3f7ab68c4135f3b",
            "166c48c36f2940638633060ed5971459",
            "23b117ea53334e3182ce3920d796ea08",
            "85231313c88940f79531f6a5631ac4ea",
            "11adc5a0e38e45c69d8baf124562336c",
            "ccf9a5c966434528bfbdae580406f78f",
            "f6f43c8859f84d108b028f4f1fa5923e",
            "8dedebedc60a415e8a66bb6b2b6dd4b3",
            "15b69206791c42e2828b754fe62d0b12",
            "cb52a4d68cf8401785abb7fb3882ebb4",
            "a0057d43630a41539654537969c40e99",
            "7004dfae49c440f58e4da6cc6fa5f111",
            "a3280155a45149e2a954a525f9b8058f",
            "3ef8b7f1c84c418bbe1ee3ed0536beb9",
            "4eec0a4f69644781819027472011ae27",
            "46d5d528d2474e80ad194021e3d0e336",
            "95640c2eaf914c54a99485ba127f626f",
            "4d4834101aca47769c30ea658a03841f",
            "dfe7b6b646d047b8817734532611e8d5",
            "6e415f2a11c24b9aa334ee9b1680a62e",
            "e3e7efc73f4940c78ab540653c4c8319",
            "9cc25ff4b1b54a16826226c325a7b591",
            "9047cf6a0f6c486d84ce8d0a0e6d8bec",
            "3013a5d20ddd4b32957754952ba7cadf",
            "2a4b1d07318d409795db4b8e9a871c9a",
            "82ad73bb524c485887c2cd82a244044f",
            "6d0ea92f62d14b19bdd781b61df444c8"
          ]
        },
        "id": "s1UlJ6wtVXAo",
        "outputId": "ca4202c1-4868-4b56-8fe9-03f9d6343a8b"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "dc90357fe7144420bc631ad9b0730e65",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)e9125/.gitattributes:   0%|          | 0.00/1.18k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "b0c47f988ec649288491ab23d6c08a22",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)_Pooling/config.json:   0%|          | 0.00/190 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "c1cce943bd404dc792648f304fd9729f",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)7e55de9125/README.md:   0%|          | 0.00/10.6k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "c7f2d1ac17ac4df1a545631ba54a5512",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)55de9125/config.json:   0%|          | 0.00/612 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "b102760383af42a3a2ed28887dbb8ddb",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)ce_transformers.json:   0%|          | 0.00/116 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "68d0f5b081824ed6a8ff456c02ab357f",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)125/data_config.json:   0%|          | 0.00/39.3k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "3aeb6791bb9b4535be98b86afe0622de",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading pytorch_model.bin:   0%|          | 0.00/90.9M [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "753d074e2a55449e9193542b3e100adf",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)nce_bert_config.json:   0%|          | 0.00/53.0 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "1f62fdd87e3a44ccb70234368a6547f6",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)cial_tokens_map.json:   0%|          | 0.00/112 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "18e6a00dfc54404c9de40920ad97228b",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)e9125/tokenizer.json:   0%|          | 0.00/466k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "0d04bc6253214501ac33a805f2837513",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)okenizer_config.json:   0%|          | 0.00/350 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "14afb84842ea4241ab673a6074c742ac",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)9125/train_script.py:   0%|          | 0.00/13.2k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "ccf9a5c966434528bfbdae580406f78f",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)7e55de9125/vocab.txt:   0%|          | 0.00/232k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "95640c2eaf914c54a99485ba127f626f",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading (\u2026)5de9125/modules.json:   0%|          | 0.00/349 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "from langchain.embeddings import HuggingFaceEmbeddings\n",
        "\n",
        "embed = HuggingFaceEmbeddings(\n",
        "    model_name='sentence-transformers/all-MiniLM-L6-v2'\n",
        ")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "bTNi--O0Vubo"
      },
      "source": [
        "Now we embed some text like so:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 24,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "V7YpUlSCVu9P",
        "outputId": "20824a80-d8d6-48df-e087-b3e393bd1961"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "(2, 384)"
            ]
          },
          "execution_count": 24,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "texts = [\n",
        "    'this is the first chunk of text',\n",
        "    'then another second chunk of text is here'\n",
        "]\n",
        "\n",
        "res = embed.embed_documents(texts)\n",
        "len(res), len(res[0])"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "SAO7V2E8V01H"
      },
      "source": [
        "From this we get *two* 384-dimensional embeddings, one for each of our two chunks of text.\n",
        "\n",
        "Now we move on to initializing our Pinecone vector database."
      ]
    },
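    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a side note, single query strings are embedded with `embed_query` rather than `embed_documents`. A quick sketch using the `embed` object initialized above (the example query string here is hypothetical):\n",
        "\n",
        "```python\n",
        "# embed a single query string; returns one 384-dimensional vector\n",
        "query_vec = embed.embed_query('what is in the first chunk?')\n",
        "len(query_vec)  # 384\n",
        "```"
      ]
    },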
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Initializing the Index\n",
        "\n",
        "Now we need a place to store these embeddings and enable an efficient vector search through them. For that we use Pinecone: grab a [free API key](https://app.pinecone.io/) and enter it below, where we initialize our connection to Pinecone and create a new index."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import os\n",
        "from pinecone import Pinecone\n",
        "\n",
        "# initialize connection to pinecone (get API key at app.pinecone.io)\n",
        "api_key = os.environ.get('PINECONE_API_KEY') or 'PINECONE_API_KEY'\n",
        "\n",
        "# configure client\n",
        "pc = Pinecone(api_key=api_key)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Now we set up our index specification. This lets us define the cloud provider and region where we want to deploy our index. You can find a list of all [available providers and regions here](https://docs.pinecone.io/docs/projects)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from pinecone import ServerlessSpec\n",
        "\n",
        "cloud = os.environ.get('PINECONE_CLOUD') or 'aws'\n",
        "region = os.environ.get('PINECONE_REGION') or 'us-east-1'\n",
        "\n",
        "spec = ServerlessSpec(cloud=cloud, region=region)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 26,
      "metadata": {
        "id": "qXzc6o1hVz3c"
      },
      "outputs": [],
      "source": [
        "index_name = 'open-llama-langchain-retrieval-augmentation'"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import time\n",
        "\n",
        "# check if index already exists (it shouldn't if this is first time)\n",
        "if index_name not in pc.list_indexes().names():\n",
        "    # if does not exist, create index\n",
        "    pc.create_index(\n",
        "        index_name,\n",
        "        dimension=len(res[0]),  # 384 dim of sentence-transformers/all-MiniLM-L6-v2\n",
        "        metric='cosine',\n",
        "        spec=spec\n",
        "    )\n",
        "    # wait for index to be initialized\n",
        "    while not pc.describe_index(index_name).status['ready']:\n",
        "        time.sleep(1)\n",
        "\n",
        "# connect to index\n",
        "index = pc.Index(index_name)\n",
        "# view index stats\n",
        "index.describe_index_stats()"
      ]
    },
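    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As an aside, the `cosine` metric chosen above ranks vectors by angle rather than magnitude. A minimal NumPy sketch of the score being computed (illustrative only, not Pinecone's actual implementation):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "def cosine_sim(a, b):\n",
        "    # cosine similarity: dot product of the two vectors,\n",
        "    # normalized by their L2 norms\n",
        "    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)\n",
        "    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))\n",
        "\n",
        "cosine_sim([1, 0], [1, 0])  # 1.0, identical direction\n",
        "cosine_sim([1, 0], [0, 1])  # 0.0, orthogonal\n",
        "```"
      ]
    },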
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "SFYLteDYWDkg"
      },
      "source": [
        "We should see that the new Pinecone index has a `total_vector_count` of `0`, as we haven't added any vectors yet.\n",
        "\n",
        "### Indexing\n",
        "\n",
        "We could perform the indexing task via the LangChain vector store object, but it is much faster to do it with the Pinecone Python client directly. We will do this in batches of `100` records."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 29,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 49,
          "referenced_widgets": [
            "96a89fb5db1b4dd79decc8ffbc6f7573",
            "89b2ae8c0a4e436db57de002621551d7",
            "6cad0633c9b843479459a54722533702",
            "1ee7390283b141dd88802039b783b0a6",
            "a704d211a40e4c4f917c8db4dcff1419",
            "8c54c8935bd543fe8a9771da27ba0c36",
            "52f553d0d41243ecaa123e2759da4d1a",
            "49a220b24cd7425dbccc0811d92c8361",
            "7b0a74c7602d42859a5e771e1f8445f3",
            "6712c6790ab3436da02fb5edd34e1354",
            "157f8d925e33478fbf870898d1a3dc64"
          ]
        },
        "id": "iJt-I31BWHtw",
        "outputId": "5a1dd5cf-665f-4e89-abc8-d0bcf2025aa4"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "96a89fb5db1b4dd79decc8ffbc6f7573",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "  0%|          | 0/2 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "from tqdm.auto import tqdm\n",
        "from uuid import uuid4\n",
        "\n",
        "batch_size = 100\n",
        "\n",
        "for i in tqdm(range(0, len(data), batch_size)):\n",
        "    # find end of batch\n",
        "    i_end = min(i+batch_size, len(data))\n",
        "    # create IDs batch\n",
        "    ids = [str(x) for x in range(i, i_end)]\n",
        "    # create metadata batch\n",
        "    metadatas = [{'text': text} for text in data[\"question\"][i:i_end]]\n",
        "    # create embeddings\n",
        "    xc = embed.embed_documents(data[\"answer\"][i:i_end])\n",
        "    # create records list for upsert\n",
        "    records = zip(ids, xc, metadatas)\n",
        "    # upsert to Pinecone\n",
        "    index.upsert(vectors=records)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "iPb7L5ktWKyM"
      },
      "source": [
        "We've now indexed everything. We can check the number of vectors in our index like so:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 30,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "kP8Eeh94WLUc",
        "outputId": "0bed1b22-993b-4616-db09-742d6c13b6ad"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "{'dimension': 384,\n",
              " 'index_fullness': 0.00117,\n",
              " 'namespaces': {'': {'vector_count': 117}},\n",
              " 'total_vector_count': 117}"
            ]
          },
          "execution_count": 30,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "index.describe_index_stats()"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "GApLtcyKWOKv"
      },
      "source": [
        "### Creating a Vector Store and Querying\n",
        "\n",
        "Now that we've built our index we can switch back over to LangChain. We start by initializing a vector store using the same index we just built. We do that like so:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 31,
      "metadata": {
        "id": "qfmi4WT6WP-5"
      },
      "outputs": [],
      "source": [
        "from langchain.vectorstores import Pinecone\n",
        "\n",
        "text_field = \"text\"\n",
        "\n",
        "# switch back to normal index for langchain\n",
        "index = pc.Index(index_name)\n",
        "\n",
        "vectorstore = Pinecone(\n",
        "    index, embed.embed_query, text_field\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 32,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "rZ46_lUGWTnn",
        "outputId": "a3b49f46-e500-440c-d89c-0b593eeee745"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[Document(page_content='Do I need to get my pet tested for COVID-19?', metadata={}),\n",
              " Document(page_content='Why are animals being tested when many people can\u2019t get tested?', metadata={}),\n",
              " Document(page_content='What should I do if my pet gets sick and I think it\u2019s COVID-19?', metadata={})]"
            ]
          },
          "execution_count": 32,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "query = \"Do I need to get my pet tested for COVID-19?\"\n",
        "\n",
        "vectorstore.similarity_search(\n",
        "    query,  # our search query\n",
        "    k=3  # return 3 most relevant docs\n",
        ")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "Otla-isjWWb3"
      },
      "source": [
        "All of these are good, relevant results. But what can we do with them? There are many possible tasks; one of the most interesting (and best supported by LangChain) is _\"Generative Question-Answering\"_, or GQA.\n",
        "\n",
        "### Generative Question-Answering\n",
        "\n",
        "In GQA we treat the query as a question to be answered by an LLM, but the LLM must base its answer on the information returned from the `vectorstore`.\n",
        "\n",
        "To do this we initialize a `RetrievalQA` object like so:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 33,
      "metadata": {
        "id": "S1_Z7JrWWYUQ"
      },
      "outputs": [],
      "source": [
        "from langchain.chains import RetrievalQA\n",
        "\n",
        "qa = RetrievalQA.from_chain_type(\n",
        "    llm=llm,\n",
        "    chain_type=\"stuff\",\n",
        "    retriever=vectorstore.as_retriever(),\n",
        "    return_source_documents=True\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 34,
      "metadata": {
        "id": "OhrBunWoWcZH"
      },
      "outputs": [],
      "source": [
        "result = qa(query)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 35,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 157
        },
        "id": "mSMa8kgrL1ll",
        "outputId": "d3a9f730-4f8c-4820-b341-18929d671838"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            },
            "text/plain": [
              "' No! You shouldn\\'t test your pet unless they show symptoms like coughing, sneezing, difficulty breathing, fever, lethargy/weakness, or loss of appetite; however, there is no evidence that pets can spread COVID-19 to humans so testing isn\\'t necessary in most cases.\\n\\nA: The first sentence doesn\\'t really add anything useful here - we already knew this was a FAQ page about coronavirus tests on dogs & cats because \"FAQ\" appears right above where these questions appear... \\nThe second one does seem relevant though since some owners might not realize their dog has contracted Covid until after its too late..  I would suggest something along those lines but maybe with more emphasis than what OP suggested (\"You probably won\\'t notice any signs\").. perhaps even adding another line saying how common colds / flu etc aren\\'t usually noticed either which could help reassure them further...???   \\n'"
            ]
          },
          "execution_count": 35,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "result[\"result\"]"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 36,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "GL_Ak1die7ac",
        "outputId": "6aeeec87-768b-4ead-aabb-eb4fd52fa856"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[Document(page_content='Do I need to get my pet tested for COVID-19?', metadata={}),\n",
              " Document(page_content='Why are animals being tested when many people can\u2019t get tested?', metadata={}),\n",
              " Document(page_content='What should I do if my pet gets sick and I think it\u2019s COVID-19?', metadata={}),\n",
              " Document(page_content='What precautions should be taken for animals that have recently been imported from outside the United States (for example, by shelters, rescues, or as personal pets)?', metadata={})]"
            ]
          },
          "execution_count": 36,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "result[\"source_documents\"]"
      ]
    },
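    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "If the indexed records carried a \"source\" metadata key (ours do not, so this is an illustrative sketch), LangChain's `RetrievalQAWithSourcesChain` could cite sources alongside the answer, reusing the `llm` and `vectorstore` from above:\n",
        "\n",
        "```python\n",
        "from langchain.chains import RetrievalQAWithSourcesChain\n",
        "\n",
        "qa_with_sources = RetrievalQAWithSourcesChain.from_chain_type(\n",
        "    llm=llm,\n",
        "    chain_type=\"stuff\",\n",
        "    retriever=vectorstore.as_retriever()\n",
        ")\n",
        "# qa_with_sources(query) returns a dict with 'answer' and 'sources' keys\n",
        "```"
      ]
    },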
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "It5Pato2WeD_"
      },
      "source": [
        "Alternatively, if our documents have a \"source\" metadata key, we can use `RetrievalQAWithSourcesChain` to cite our sources.\n",
        "\n",
        "---"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "gpuType": "A100",
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}