{
  "cells": [
    {
      "attachments": {},
      "cell_type": "markdown",
      "id": "split-aluminum",
      "metadata": {
        "id": "split-aluminum",
        "papermill": {
          "duration": 0.048394,
          "end_time": "2021-04-15T21:06:39.560571",
          "exception": false,
          "start_time": "2021-04-15T21:06:39.512177",
          "status": "completed"
        },
        "tags": []
      },
      "source": [
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/search/question-answering/extractive-question-answering.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/search/question-answering/extractive-question-answering.ipynb)\n",
        "\n",
        "# Extractive Question Answering"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "prospective-turner",
      "metadata": {
        "id": "prospective-turner",
        "papermill": {
          "duration": 0.045573,
          "end_time": "2021-04-15T21:06:39.651272",
          "exception": false,
          "start_time": "2021-04-15T21:06:39.605699",
          "status": "completed"
        },
        "tags": []
      },
      "source": [
        "This notebook demonstrates how to use Pinecone to build an extractive question-answering application. Such a system needs three main components:\n",
        "\n",
        "- A vector index to store and run semantic search\n",
        "- A retriever model for embedding context passages\n",
        "- A reader model to extract answers\n",
        "\n",
        "We will use the SQuAD dataset, which consists of **questions** and **context** paragraphs that contain the **answers** to those questions. We generate embeddings for the context passages with the retriever, index them in the vector database, and use semantic search to retrieve the top k contexts most likely to contain an answer to a given question. The reader model then extracts the answer from the returned contexts."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "oC3GG-dWkZJ6",
      "metadata": {
        "id": "oC3GG-dWkZJ6"
      },
      "source": [
        "Let's get started by installing the packages needed for the notebook to run:"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "terminal-export",
      "metadata": {
        "id": "terminal-export",
        "papermill": {
          "duration": 0.044413,
          "end_time": "2021-04-15T21:06:39.741951",
          "exception": false,
          "start_time": "2021-04-15T21:06:39.697538",
          "status": "completed"
        },
        "tags": []
      },
      "source": [
        "# Install Dependencies"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "id": "expressed-executive",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "execution": {
          "iopub.execute_input": "2021-04-15T21:06:39.845309Z",
          "iopub.status.busy": "2021-04-15T21:06:39.842494Z",
          "iopub.status.idle": "2021-04-15T21:08:22.163939Z",
          "shell.execute_reply": "2021-04-15T21:08:22.164616Z"
        },
        "id": "expressed-executive",
        "outputId": "20686536-bb69-4415-9cd1-c4b921468be3",
        "papermill": {
          "duration": 102.376674,
          "end_time": "2021-04-15T21:08:22.165052",
          "exception": false,
          "start_time": "2021-04-15T21:06:39.788378",
          "status": "completed"
        },
        "tags": []
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 365 kB 39.2 MB/s \n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 120 kB 74.9 MB/s \n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 115 kB 74.4 MB/s \n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 212 kB 77.0 MB/s \n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 127 kB 65.4 MB/s \n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 175 kB 24.9 MB/s \n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 58 kB 5.7 MB/s \n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 269 kB 66.6 MB/s \n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 85 kB 4.7 MB/s \n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4.7 MB 61.6 MB/s \n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.3 MB 59.8 MB/s \n",
            "\u001b[K     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6.6 MB 59.9 MB/s \n",
            "\u001b[?25h  Building wheel for sentence-transformers (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
          ]
        }
      ],
      "source": [
        "!pip install -qU datasets pinecone-client sentence-transformers torch"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "29ad3840",
      "metadata": {
        "id": "29ad3840"
      },
      "source": [
        "# Load Dataset"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "hgIieQukgagu",
      "metadata": {
        "id": "hgIieQukgagu"
      },
      "source": [
        "Now let's load the SQuAD dataset from the Hugging Face Hub. We load the dataset into a pandas dataframe, keep only the title and context columns, and drop any duplicate context passages."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "id": "J250IJeh7NIb",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 264,
          "referenced_widgets": [
            "121cbdb2c7f0402f88ad7d89621aeb68",
            "8beaa2e13b9b45cbb8fe3c0714914bbb",
            "d9fa2fabb33c42c3867fd78fca138095",
            "39c3804c2e1a41b59cbe709379a3a38f",
            "9efc14a437ce4187b0087aaf93a873d0",
            "2d3be841bfd14d3cb187088af4ca2a19",
            "d52ed00e4d68497da2b4a6920a332f7c",
            "138273c0bdd048f2b792d4381169e650",
            "1bcb2c168672417c9b00e3fe1930e2d2",
            "1af71e8a29344e76b7af7d679a876532",
            "2147a4a74d0c44398947743a19431885",
            "e684bff451c649b7859714933fdc189a",
            "f14926e807d740dd80d361564c581142",
            "430371cecc4141e49429222f6cf681ae",
            "2c1bd26c0f1042acbad2efe92b555ddf",
            "49f45246ba3840249b5ec5108cdc9d1f",
            "7dfc719d10184aed8f8ad0b15b7868e6",
            "f2aca85cec984d09b084842f7c68e13c",
            "9144bf8687c7444e93048ff6b6a147e1",
            "9b1486542867402382ed2c1354cf92a7",
            "22470cc18a824f2cb59e1285819d1794",
            "b3310b2a26e5420db192691ee868a92a",
            "b90b8b76519d4492a5e2045824e9e71d",
            "229ff9c12cd140489ff57b0d0dcb0192",
            "a39fd85c1a87496baaad41e422961702",
            "df01b9695f25438a97b836c2aff94f9c",
            "9ee1a52b7fe64dfe8526425d3b7b8a5b",
            "f4c90ec901ad45b59c55384b2fb20102",
            "5aca6e21b2454b38a1acc0a0b6cd0528",
            "e30aec88c34b4b4ea232281f25c0adc6",
            "ed84d4c4e43f42d4bd4b0ef748839c4f",
            "adbb9e2a4f0e4145ba4cb8187406d2f4",
            "4aa3613a6f734b4eb7e8e8ac0f0a32db",
            "ae5a9804e2fe438a9d6d8923bd00a293",
            "19c7df6f7d5d4ced8ee15bea3e517cff",
            "a4540dd1d75446569ea325fa623817f0",
            "7bdc110dc32e4959b07ced52ae1a4f32",
            "8fa979c46ecd41aea2a4f75ee3a8c1ef",
            "e610bf35b2d04f9488e3ee733689597f",
            "4694ab9ad6774cb984437b1bb17ad4be",
            "ec1e1878e96b4af6bc10de515862e585",
            "01869946f3174adb9608566ad8c9bd4c",
            "3b3c7fe9408c4508a1378e5cb8fcf131",
            "9e27b412341a4c8ebfc05d90708bc8cb",
            "c7aed31721a940a59b21a1fb79c864eb",
            "159faa667c2b4e4891a7e9a5abf1747f",
            "0fc3fc67cf384a08b895f5f3e6d7acfe",
            "359e126cf4544ce09233743c7568757c",
            "dd3cefe4336249d881b46fe369aba401",
            "d0e96747d60c403883e888e21bbd9139",
            "c44255eda6c04c339655c288cc74197b",
            "a99a1eacf4ad4c0e9ad977b3d5d93e9c",
            "2de3a7931c704fc599732371408bfb1e",
            "c7af90e55c0a49608e0fcb6ff4fa119a",
            "743371a554234ed79aaf9f359955b173",
            "c904502cf289474f9226a6c0c7e83134",
            "ebbdc28595294117951785460e0470e6",
            "9715d6ba384a4e078590536d244920a3",
            "050cb1d01a204f8284057cf057992214",
            "1123e3cc87ab487ca17c5a8daa35a98d",
            "1bcc4afe110e4cc387c7745079c4e813",
            "a689c0ccf8774f04a1641a87459f5689",
            "72916034cf3246909ebd00426dfe93fc",
            "09cca083cf6448579c07d183b8ec46a2",
            "74be0f9f24194fe2a917e4c976376470",
            "a379bd78fec544c7a8c94caa17265e15",
            "d073610ce10747b6aac5d934f655ec11",
            "ce0c8dd357444b41b8af12f8cca0cec9",
            "efe9117a7e524f068c1044a0c5ac4ed3",
            "f3e7b226e0a148dd89f557a3bf024052",
            "9fb61f4814e64ce4a54787187025a375",
            "36948a2f5a61450bbdb8ca64363e9c25",
            "0216dee6fd5b4ded893ee6e4f79b262d",
            "0124f278236246e2a8814cf32e32a9cf",
            "430ffc6858ca4dae841f8588c9ead58b",
            "fd2410da1b404e1da320e395c6341fb9",
            "ba9949cadc354160a0ca34c2e2450884",
            "297faa1dc974403cbe428d13dd5a8cda",
            "938fc4a87c4a4e25a2004aafe5cdf56e",
            "ab60294f20c64c21920580adf26ffea8",
            "4e357ad3d276408db583de1085fb7801",
            "ae255a23e0214992b44901840e74c34a",
            "e1d3a5505aa34bdda8055a5b2c36339d",
            "e89baa21c92a4812b48665bc80d8ad23",
            "1207bdc981234f4e8e082afe13cd430a",
            "438b8054a9974705911aed0b4e1572b4",
            "119699e71cbc4898bfd44a9295e55bd2",
            "8c6e6c6f7a3a416081a20de28f4dfb4a"
          ]
        },
        "id": "J250IJeh7NIb",
        "outputId": "6f249347-51f8-48be-ab90-a202eee648a7"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "121cbdb2c7f0402f88ad7d89621aeb68",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading builder script:   0%|          | 0.00/1.97k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "e684bff451c649b7859714933fdc189a",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading metadata:   0%|          | 0.00/1.02k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Downloading and preparing dataset squad/plain_text (download: 33.51 MiB, generated: 85.63 MiB, post-processed: Unknown size, total: 119.14 MiB) to /root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "b90b8b76519d4492a5e2045824e9e71d",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading data files:   0%|          | 0/2 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "ae5a9804e2fe438a9d6d8923bd00a293",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading data:   0%|          | 0.00/8.12M [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "c7aed31721a940a59b21a1fb79c864eb",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading data:   0%|          | 0.00/1.05M [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "c904502cf289474f9226a6c0c7e83134",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Extracting data files:   0%|          | 0/2 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "d073610ce10747b6aac5d934f655ec11",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Generating train split:   0%|          | 0/87599 [00:00<?, ? examples/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "297faa1dc974403cbe428d13dd5a8cda",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Generating validation split:   0%|          | 0/10570 [00:00<?, ? examples/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Dataset squad downloaded and prepared to /root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453. Subsequent calls will reuse this data.\n"
          ]
        }
      ],
      "source": [
        "from datasets import load_dataset\n",
        "\n",
        "# load the squad dataset into a pandas dataframe\n",
        "df = load_dataset(\"squad\", split=\"train\").to_pandas()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "id": "FcmeNO97dHDO",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 424
        },
        "id": "FcmeNO97dHDO",
        "outputId": "11af1908-8950-4433-f66d-7d363d010c97"
      },
      "outputs": [
        {
          "data": {
            "text/html": [
              "\n",
              "  <div id=\"df-5935beba-26a2-40e4-bfca-a3dda53e101c\">\n",
              "    <div class=\"colab-df-container\">\n",
              "      <div>\n",
              "<style scoped>\n",
              "    .dataframe tbody tr th:only-of-type {\n",
              "        vertical-align: middle;\n",
              "    }\n",
              "\n",
              "    .dataframe tbody tr th {\n",
              "        vertical-align: top;\n",
              "    }\n",
              "\n",
              "    .dataframe thead th {\n",
              "        text-align: right;\n",
              "    }\n",
              "</style>\n",
              "<table border=\"1\" class=\"dataframe\">\n",
              "  <thead>\n",
              "    <tr style=\"text-align: right;\">\n",
              "      <th></th>\n",
              "      <th>title</th>\n",
              "      <th>context</th>\n",
              "    </tr>\n",
              "  </thead>\n",
              "  <tbody>\n",
              "    <tr>\n",
              "      <th>0</th>\n",
              "      <td>University_of_Notre_Dame</td>\n",
              "      <td>Architecturally, the school has a Catholic cha...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>5</th>\n",
              "      <td>University_of_Notre_Dame</td>\n",
              "      <td>As at most other universities, Notre Dame's st...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>10</th>\n",
              "      <td>University_of_Notre_Dame</td>\n",
              "      <td>The university is the major seat of the Congre...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>15</th>\n",
              "      <td>University_of_Notre_Dame</td>\n",
              "      <td>The College of Engineering was established in ...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>20</th>\n",
              "      <td>University_of_Notre_Dame</td>\n",
              "      <td>All of Notre Dame's undergraduate students are...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>...</th>\n",
              "      <td>...</td>\n",
              "      <td>...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>87574</th>\n",
              "      <td>Kathmandu</td>\n",
              "      <td>Institute of Medicine, the central college of ...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>87579</th>\n",
              "      <td>Kathmandu</td>\n",
              "      <td>Football and Cricket are the most popular spor...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>87584</th>\n",
              "      <td>Kathmandu</td>\n",
              "      <td>The total length of roads in Nepal is recorded...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>87589</th>\n",
              "      <td>Kathmandu</td>\n",
              "      <td>The main international airport serving Kathman...</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>87594</th>\n",
              "      <td>Kathmandu</td>\n",
              "      <td>Kathmandu Metropolitan City (KMC), in order to...</td>\n",
              "    </tr>\n",
              "  </tbody>\n",
              "</table>\n",
              "<p>18891 rows \u00d7 2 columns</p>\n",
              "</div>\n",
              "      <button class=\"colab-df-convert\" onclick=\"convertToInteractive('df-5935beba-26a2-40e4-bfca-a3dda53e101c')\"\n",
              "              title=\"Convert this dataframe to an interactive table.\"\n",
              "              style=\"display:none;\">\n",
              "        \n",
              "  <svg xmlns=\"http://www.w3.org/2000/svg\" height=\"24px\"viewBox=\"0 0 24 24\"\n",
              "       width=\"24px\">\n",
              "    <path d=\"M0 0h24v24H0V0z\" fill=\"none\"/>\n",
              "    <path d=\"M18.56 5.44l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94zm-11 1L8.5 8.5l.94-2.06 2.06-.94-2.06-.94L8.5 2.5l-.94 2.06-2.06.94zm10 10l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94z\"/><path d=\"M17.41 7.96l-1.37-1.37c-.4-.4-.92-.59-1.43-.59-.52 0-1.04.2-1.43.59L10.3 9.45l-7.72 7.72c-.78.78-.78 2.05 0 2.83L4 21.41c.39.39.9.59 1.41.59.51 0 1.02-.2 1.41-.59l7.78-7.78 2.81-2.81c.8-.78.8-2.07 0-2.86zM5.41 20L4 18.59l7.72-7.72 1.47 1.35L5.41 20z\"/>\n",
              "  </svg>\n",
              "      </button>\n",
              "      \n",
              "  <style>\n",
              "    .colab-df-container {\n",
              "      display:flex;\n",
              "      flex-wrap:wrap;\n",
              "      gap: 12px;\n",
              "    }\n",
              "\n",
              "    .colab-df-convert {\n",
              "      background-color: #E8F0FE;\n",
              "      border: none;\n",
              "      border-radius: 50%;\n",
              "      cursor: pointer;\n",
              "      display: none;\n",
              "      fill: #1967D2;\n",
              "      height: 32px;\n",
              "      padding: 0 0 0 0;\n",
              "      width: 32px;\n",
              "    }\n",
              "\n",
              "    .colab-df-convert:hover {\n",
              "      background-color: #E2EBFA;\n",
              "      box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\n",
              "      fill: #174EA6;\n",
              "    }\n",
              "\n",
              "    [theme=dark] .colab-df-convert {\n",
              "      background-color: #3B4455;\n",
              "      fill: #D2E3FC;\n",
              "    }\n",
              "\n",
              "    [theme=dark] .colab-df-convert:hover {\n",
              "      background-color: #434B5C;\n",
              "      box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\n",
              "      filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\n",
              "      fill: #FFFFFF;\n",
              "    }\n",
              "  </style>\n",
              "\n",
              "      <script>\n",
              "        const buttonEl =\n",
              "          document.querySelector('#df-5935beba-26a2-40e4-bfca-a3dda53e101c button.colab-df-convert');\n",
              "        buttonEl.style.display =\n",
              "          google.colab.kernel.accessAllowed ? 'block' : 'none';\n",
              "\n",
              "        async function convertToInteractive(key) {\n",
              "          const element = document.querySelector('#df-5935beba-26a2-40e4-bfca-a3dda53e101c');\n",
              "          const dataTable =\n",
              "            await google.colab.kernel.invokeFunction('convertToInteractive',\n",
              "                                                     [key], {});\n",
              "          if (!dataTable) return;\n",
              "\n",
              "          const docLinkHtml = 'Like what you see? Visit the ' +\n",
              "            '<a target=\"_blank\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\n",
              "            + ' to learn more about interactive tables.';\n",
              "          element.innerHTML = '';\n",
              "          dataTable['output_type'] = 'display_data';\n",
              "          await google.colab.output.renderOutput(dataTable, element);\n",
              "          const docLink = document.createElement('div');\n",
              "          docLink.innerHTML = docLinkHtml;\n",
              "          element.appendChild(docLink);\n",
              "        }\n",
              "      </script>\n",
              "    </div>\n",
              "  </div>\n",
              "  "
            ],
            "text/plain": [
              "                          title  \\\n",
              "0      University_of_Notre_Dame   \n",
              "5      University_of_Notre_Dame   \n",
              "10     University_of_Notre_Dame   \n",
              "15     University_of_Notre_Dame   \n",
              "20     University_of_Notre_Dame   \n",
              "...                         ...   \n",
              "87574                 Kathmandu   \n",
              "87579                 Kathmandu   \n",
              "87584                 Kathmandu   \n",
              "87589                 Kathmandu   \n",
              "87594                 Kathmandu   \n",
              "\n",
              "                                                 context  \n",
              "0      Architecturally, the school has a Catholic cha...  \n",
              "5      As at most other universities, Notre Dame's st...  \n",
              "10     The university is the major seat of the Congre...  \n",
              "15     The College of Engineering was established in ...  \n",
              "20     All of Notre Dame's undergraduate students are...  \n",
              "...                                                  ...  \n",
              "87574  Institute of Medicine, the central college of ...  \n",
              "87579  Football and Cricket are the most popular spor...  \n",
              "87584  The total length of roads in Nepal is recorded...  \n",
              "87589  The main international airport serving Kathman...  \n",
              "87594  Kathmandu Metropolitan City (KMC), in order to...  \n",
              "\n",
              "[18891 rows x 2 columns]"
            ]
          },
          "execution_count": 3,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# select only title and context column\n",
        "df = df[[\"title\", \"context\"]]\n",
        "# drop rows containing duplicate context passages\n",
        "df = df.drop_duplicates(subset=\"context\")\n",
        "df"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "57bbcb57",
      "metadata": {
        "id": "57bbcb57"
      },
      "source": [
        "# Initialize Pinecone Index"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "e24d904c",
      "metadata": {
        "id": "e24d904c"
      },
      "source": [
        "The Pinecone index stores vector representations of our context passages, which we can later retrieve using a query vector. We first need to initialize our connection to Pinecone and create a vector index. For this, we need a free [API key](https://app.pinecone.io/), and then we initialize the connection like so:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "id": "092d1e71",
      "metadata": {
        "id": "092d1e71"
      },
      "outputs": [],
      "source": [
        "from pinecone import Pinecone\n",
        "\n",
        "# initialize the connection to Pinecone (get a free API key at app.pinecone.io)\n",
        "pc = Pinecone(api_key=\"YOUR_API_KEY\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "58028e12",
      "metadata": {
        "id": "58028e12"
      },
      "source": [
        "Now we create a new index called \"extractive-question-answering\" (we can name the index anything we want). We specify the metric type as \"cosine\" and the dimension as 384 because the retriever we use to generate context embeddings is optimized for cosine similarity and outputs 384-dimensional vectors."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "id": "b3206184",
      "metadata": {
        "id": "b3206184"
      },
      "outputs": [],
      "source": [
        "from pinecone import ServerlessSpec\n",
        "\n",
        "index_name = \"extractive-question-answering\"\n",
        "\n",
        "# check if the extractive-question-answering index exists\n",
        "if index_name not in pc.list_indexes().names():\n",
        "    # create the index if it does not exist\n",
        "    pc.create_index(\n",
        "        name=index_name,\n",
        "        dimension=384,\n",
        "        metric=\"cosine\",\n",
        "        # serverless spec; adjust cloud/region for your project\n",
        "        spec=ServerlessSpec(cloud=\"aws\", region=\"us-east-1\")\n",
        "    )\n",
        "\n",
        "# connect to the extractive-question-answering index we created\n",
        "index = pc.Index(index_name)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "6e84a3e5",
      "metadata": {
        "id": "6e84a3e5"
      },
      "source": [
        "# Initialize Retriever"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "oZzhGS1Lpj0g",
      "metadata": {
        "id": "oZzhGS1Lpj0g"
      },
      "source": [
        "Next, we need to initialize our retriever. The retriever will mainly do two things:\n",
        "\n",
        "- Generate embeddings for all context passages (context vectors/embeddings)\n",
        "- Generate embeddings for our questions (query vector/embedding)\n",
        "\n",
        "The retriever generates embeddings such that questions and the context passages containing their answers end up close together in vector space. We can then use cosine similarity between the query embedding and the context embeddings to find the passages most likely to contain an answer to our question.\n",
        "\n",
        "As our retriever, we will use the SentenceTransformer model ``multi-qa-MiniLM-L6-cos-v1``, which is designed for semantic search and was trained on 215M (question, answer) pairs from diverse sources."
      ]
    },
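    {
      "cell_type": "markdown",
      "id": "cosine-sim-sketch",
      "metadata": {
        "id": "cosine-sim-sketch"
      },
      "source": [
        "As a quick illustration of the similarity computation the retriever relies on (a minimal sketch with toy 3-dimensional vectors standing in for the retriever's 384-dimensional embeddings):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "# toy embeddings: one query and two context passages\n",
        "query = np.array([0.1, 0.9, 0.2])\n",
        "contexts = np.array([[0.1, 0.8, 0.3],   # relevant passage\n",
        "                     [0.9, 0.1, 0.1]])  # unrelated passage\n",
        "\n",
        "# cosine similarity = dot product of L2-normalized vectors\n",
        "query = query / np.linalg.norm(query)\n",
        "contexts = contexts / np.linalg.norm(contexts, axis=1, keepdims=True)\n",
        "scores = contexts @ query\n",
        "```\n",
        "\n",
        "Here `scores[0]` comes out higher than `scores[1]`, so the relevant passage ranks first. Pinecone runs this same comparison at scale when we query an index created with `metric=\"cosine\"`."
      ]
    },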
    {
      "cell_type": "code",
      "execution_count": 6,
      "id": "31a85bb3",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 569,
          "referenced_widgets": [
            "47629dc7a874460480ad0cef1ff27e94",
            "ff552c4e226c43fa804ede054a33c6b4",
            "63b327099d4e445091e5dab554faa0c6",
            "802718d5bc434126849f00b48fbe3826",
            "a246245d64c24d04a69e1e5235ec0816",
            "72a00d70acf64554ad420f8d37bde06a",
            "de46d49fd6114323b79ca8e7153e7695",
            "f0946600285c48f4a40754b53b9b9523",
            "8f9be189a5d34792b3f5d6b2fcc9b347",
            "77b6a3431d9c4323b3443a3bd99068bf",
            "1440106ff95444948d34b4c3ab56c77c",
            "a9679dbe891f4f45848bd75dce967a07",
            "5b01a1c3a55c419ab601c1e626c0ddfd",
            "9406321001ca446488ca42483746400d",
            "0d5f35b5839645229ff4059854f4ba0c",
            "54a2af96f81d476ea718e0adcdf458c3",
            "6d9078d8b6594fe68a8db22dec6be279",
            "2f1b00135ee747f0bf0c0936987775d0",
            "94f0f4bbf33a4f41aa967cd3cf631b2b",
            "9cbee33e80334e4a8689ea301e2292db",
            "55df9f5332324bd9bd6b9961caff3e04",
            "f76cf25a14cc40b1b59b9f7e5e979431",
            "d556f8812a444222b792e5616efcea58",
            "78fc6a9c33ec488bba8f5124ca1dbad8",
            "71f105410a454a78a9246f766493db1b",
            "701f2f2c34874a36bb002a686b087d35",
            "e3dcf34534c543c19ed8e810cdfa678d",
            "dc00845a7c2b43518ade2b011aeb3c1a",
            "9cec7c39e40a4861b8b4aaefe6f5c710",
            "44b12ec810ff46ffa2bfaedd74fd4602",
            "479055d7598b4cea8390a7856fa03e22",
            "29a448fe68544bd6b9084b4a01286262",
            "cc2c70ccfd7144d4a29974ee77ceb03a",
            "f64332b0634d4194ab329bdfc96a65c1",
            "ec7f0f161fd541909604f72e050d034e",
            "ae44446d37bc47b3af2ed124f2263e15",
            "8b691f63cdb84460b9c5b0e92912569d",
            "6e541f770a194a939cc32023bbb8b27e",
            "db981ad551664afb8af5c6dd9b6e532e",
            "71cdb7316a644a90b0ba1856104f1f33",
            "5760ab65e0a54c9483ce02fd7aea9492",
            "d2ed720a9c554f598981ed446c02d62c",
            "db45031e100b4ef485676a70f0b4741a",
            "440bb2a05ccc47faa6f18f5ef5b75d97",
            "50bd181314f04eb695439cc4fb71aaf4",
            "b61c4de1b5ec45668d5ca5f10980dffd",
            "41e8cda347424490b0cbba52da04c948",
            "1d9cf72279a9474488daca9782b774b5",
            "0257c7de28834cb7a28e6d1fd6348b93",
            "26d15a9aa1984a67a1cfde60fefa8d23",
            "2133b4ce2f864c72b4f731cec3d556f5",
            "c964111a11d946db9117cbb8cd293e6d",
            "7f38d55301414a5dab61ef58dad3034e",
            "371efcfcc40142e881f686d56d9d2ab0",
            "fb90efe799a140c1a52c9b2589878c7e",
            "d1336e9a722b442d95bd30a996f324e7",
            "727a1ef2e749479cbf7e8ed0aaba62c3",
            "3f7118b2a00c4bf296b0c7ce6a943551",
            "5b1c6411fa89412db2b37c0ec6a07d89",
            "26fd398b386b4e8fb0f3d4306318fe38",
            "e09e80ba801e456a8802985f3d0eeed8",
            "721e98439ff743418e1f1a1b286a7b08",
            "0b9e8d12d8354f619ee2935125961b5e",
            "9db547afb9a34149a88c7b45d590f106",
            "75ed412a691d4477abde140d25abd881",
            "37a7a7c69bba465a8e1ec7d1dd7a0684",
            "40e06eb38c77497885aa9d3777bde0b1",
            "e5fa51fb79124e4e8573d1803300f661",
            "b44631b7c599488694a9d8869dba86e2",
            "49b580bf6f384b638f705c2ec760db8f",
            "1790fd5b9a1a4f848d53013a987023f9",
            "6f3d4ed6477f445b8f7263096d4a1563",
            "a93068057b514608ba619c1d553c65af",
            "fe862323c2c84eb883446552e1ad5291",
            "82d14c53607447edb947a5de91deb27a",
            "ee199701bdb140548da37e446eac0e32",
            "fb269219a24b4fce8d9118c86a7e949a",
            "02737419addc43f780130d4774708bfc",
            "26fbe474c3d345eb818eaf6edc9e2f0a",
            "b73724a5997c4f92b7a6eac328549c48",
            "5c43dcd15ff54507b7f9681ed661b2d8",
            "966e8baba94d45409e1e01080a966377",
            "a400335a81784e288123ff1ac6527e74",
            "f62e4b08efc24c38ae042db8c24a60aa",
            "83eb8152d0ae4f28809c45f85ad36682",
            "82e2d1efbec54dafad142faa559f45b0",
            "91af9bf242d6450aaea934f050dc1775",
            "3663fc84732246299fffcdb7bf3c0e2e",
            "6d21ff1cf22f405493fdd5a855e419b3",
            "49db86b539b1427584e21f69ec49f9be",
            "5b21bf065ba1420fae0469a1d084b268",
            "48a8918eefa54061b3135768563d4a84",
            "1d0a8360553047cebce9881b30023b03",
            "2b93b9eb77f749fdadea146291e999ed",
            "79bcdb3f77e444b893827d2fec6ed538",
            "0cb58cbcfa45466693a3930accb07ec9",
            "ba7e6c6eb47d44ec8f21064d67335984",
            "9416a5475d89402c80f0b9a32626793b",
            "3b916728cdd1489585b07b927a725cb4",
            "fb65d1cc80eb4d1d94e90e63a390338c",
            "eae3e8f40d7749008c55c0513f793043",
            "f61913e6f7ff41dea85fc0f5daefe093",
            "9c86e315b16d4000b659aaba6e3f0a44",
            "dd7c8e90ce5f451bacb31be17dc311b7",
            "b50cb9caf81c49e2bc1ade580334db4a",
            "0c07bd4793d747339ead82c3d49d4ec6",
            "fb988057951844acadefc70d65a3f1e5",
            "f84c26236600477db291dad52a413ed0",
            "c2105d458ca3413d988153be2d8396ed",
            "cc5b2bdfbb6948f4a414dea44fe43ded",
            "86f335210c8240679c637b92a9fc4d1b",
            "0fb93cb08df541c89b91b383821e66b3",
            "266639058e544a22bd069044d57489e7",
            "9e7f4ae31e8647b698e150c1e5ef0663",
            "e555734b7b4a4b819340f57fb366048d",
            "06e8f2fd69b8491ab4027709cb133007",
            "02c7fa05d0ac4c8fb27cb94d90dcd8bd",
            "2c8125ac905143f2ad519d67a7b6e3df",
            "6b02e9228c0749f6b7fec94a9792ec96",
            "456c00761e984c079075efbe9c12e176",
            "51807116612e4d1f82f04c80c4e6f7e3",
            "3ac638d5ac0b467eb7d4b7f85e395709",
            "56b5df4e3ac441bfab1135854cc7cb85",
            "88a9c394ed0e40ada590645d5cca4ff1",
            "fd3ae1b7fc8044b4a49e2c0615974789",
            "c839cf19f65049cabad7d9414f779143",
            "1295677618a64524b67740d404c60b00",
            "6b48f2ce9cd641f98607c82d3fd0a560",
            "c309ecddabb34c8c98a51106b132d02b",
            "2ceabe4ecdaa49c5b1546154cc4d2724",
            "46cc32c3404a47cab852974e755d9277",
            "91a930bd32eb4e7da9176e33fbdebefa",
            "bdb944ccc3d644abbe4dea4c4f1a4751",
            "6b90922a3c2149699779e31fb20392cc",
            "6657478077524d4b92cb1f53e155b0fc",
            "1cbffd072d86465aae60bcf39bc4af23",
            "b92341205c02421085b7c67ff370ca29",
            "3b302633f0ad44f1a5593f5a720b9ead",
            "53a4b0a61f8a45b1aadedd66548137ee",
            "dc19cdcc735c4e6cb5eb783ac388d477",
            "b734287daa3f4de5aa71762ec21bbb79",
            "954d0cd42c714097a32a52608f4ebe5f",
            "c414494cbb0440bcbc14f23d422d7d7c",
            "b135f9edea2e413b93a1bea39c3f3b2d",
            "1c419311a32e4f6e9a25e17127fd9c32",
            "022d3cae1f4a49789a2a62f38f890841",
            "0192c51ac06843b18f6f8f548b9a6bf6",
            "1ae15a7b49cf456d860a55a570794b0c",
            "14d80a2cf1764ed780b330f6e5a119a8",
            "a1cec21a8e214cfdbd74365eaa189216",
            "093bdade945f4970b5a8b99bd061c792",
            "88fa767bfa42485b9d60a1c5bc674150",
            "ecefacb9541d4bf8abfc08a7b8aafb45",
            "2d3f5e7705754fcfba0a0aea131ec136"
          ]
        },
        "id": "31a85bb3",
        "outputId": "6ee65615-6cd5-4e06-f5b6-97263e4d7474"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "47629dc7a874460480ad0cef1ff27e94",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/737 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "a9679dbe891f4f45848bd75dce967a07",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/190 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "d556f8812a444222b792e5616efcea58",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/11.5k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "f64332b0634d4194ab329bdfc96a65c1",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/612 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "50bd181314f04eb695439cc4fb71aaf4",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/116 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "d1336e9a722b442d95bd30a996f324e7",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/25.5k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "40e06eb38c77497885aa9d3777bde0b1",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/90.9M [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "02737419addc43f780130d4774708bfc",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/53.0 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "6d21ff1cf22f405493fdd5a855e419b3",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/112 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "fb65d1cc80eb4d1d94e90e63a390338c",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/466k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "86f335210c8240679c637b92a9fc4d1b",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/383 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "3ac638d5ac0b467eb7d4b7f85e395709",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/13.8k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "bdb944ccc3d644abbe4dea4c4f1a4751",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/232k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "b135f9edea2e413b93a1bea39c3f3b2d",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading:   0%|          | 0.00/349 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/plain": [
              "SentenceTransformer(\n",
              "  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel \n",
              "  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})\n",
              "  (2): Normalize()\n",
              ")"
            ]
          },
          "execution_count": 6,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "import torch\n",
        "from sentence_transformers import SentenceTransformer\n",
        "\n",
        "# set device to GPU if available\n",
        "device = 'cuda' if torch.cuda.is_available() else 'cpu'\n",
        "# load the retriever model from huggingface model hub\n",
        "retriever = SentenceTransformer('multi-qa-MiniLM-L6-cos-v1', device=device)\n",
        "retriever"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "8aaad0a2",
      "metadata": {
        "id": "8aaad0a2"
      },
      "source": [
        "# Generate Embeddings and Upsert"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "Hgy7AagJtO_p",
      "metadata": {
        "id": "Hgy7AagJtO_p"
      },
      "source": [
        "Next, we generate embeddings for the context passages. We do this in batches to speed up both embedding generation and uploading to the Pinecone index. Each record we upsert needs a unique id, the context embedding, and metadata. The metadata is a dictionary containing data relevant to the embedding, such as the article title and the context passage itself."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "id": "a17824ef",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 118,
          "referenced_widgets": [
            "7f5b8b6ca55847369f22df39310a566c",
            "f39a74f0e131496c9e1744ccb3db5aef",
            "441e2241d5f14f05bc33e2d398c39999",
            "80b87a0f8b1143eba0619c1e16f11486",
            "3cf1ecf87e1b4b3292f12b161532f1be",
            "6451622918bf42bb839e4160163bf949",
            "1ce14d3ad4444229acea213fc908216e",
            "4e39819276ce439ead72f98a6300b6f9",
            "b02b2219ce6a4e729791fffef80e1ff4",
            "14bdead5027147529c8cf404a73d524a",
            "8de66722d12e4d89aeff9952642486c8"
          ]
        },
        "id": "a17824ef",
        "outputId": "c2b46e15-4648-4377-a5aa-19cb01d92a25"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "7f5b8b6ca55847369f22df39310a566c",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "  0%|          | 0/296 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/plain": [
              "{'dimension': 384,\n",
              " 'index_fullness': 0.0,\n",
              " 'namespaces': {'': {'vector_count': 18891}},\n",
              " 'total_vector_count': 18891}"
            ]
          },
          "execution_count": 7,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "from tqdm.auto import tqdm\n",
        "\n",
        "# we will use batches of 64\n",
        "batch_size = 64\n",
        "\n",
        "for i in tqdm(range(0, len(df), batch_size)):\n",
        "    # find end of batch\n",
        "    i_end = min(i+batch_size, len(df))\n",
        "    # extract batch\n",
        "    batch = df.iloc[i:i_end]\n",
        "    # generate embeddings for batch\n",
        "    emb = retriever.encode(batch['context'].tolist()).tolist()\n",
        "    # get metadata\n",
        "    meta = batch.to_dict(orient='records')\n",
        "    # create unique IDs\n",
        "    ids = [f\"{idx}\" for idx in range(i, i_end)]\n",
        "    # add all to upsert list\n",
        "    to_upsert = list(zip(ids, emb, meta))\n",
        "    # upsert/insert these records to pinecone\n",
        "    _ = index.upsert(vectors=to_upsert)\n",
        "\n",
        "# check that we have all vectors in index\n",
        "index.describe_index_stats()"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "YFyBYafuJ0y0",
      "metadata": {
        "id": "YFyBYafuJ0y0"
      },
      "source": [
        "# Initialize Reader"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "HgdiLCz5ynOk",
      "metadata": {
        "id": "HgdiLCz5ynOk"
      },
      "source": [
        "We use the `deepset/electra-base-squad2` model from the HuggingFace model hub as our reader model. We load this model into a \"question-answering\" pipeline from HuggingFace transformers and feed it our questions and context passages individually. The model gives a prediction for each context we pass through the pipeline."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "id": "hg9XTDkIJzH_",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 194,
          "referenced_widgets": [
            "96956a440c09482a94b64bc9b0c465f8",
            "e3f02318010545ba9ed85580bec3368d",
            "bfe707bbd44041b1857e36653e59073d",
            "55d647815e35427ea978c55a38a1ee6a",
            "a6ad8ac3d5984bb7b6f1ce55260790ae",
            "bcf413c29278448a8c7bf31acd1281fb",
            "9b5d45497e2b47ad8e49d24b38ba0647",
            "edf4cbedc6fc4bea8bddd5e690125c45",
            "d50a0739c90b4608b33c6b1f487c3336",
            "e77730d2139743538a6ec08123c1ab3a",
            "c6ebc4612bc64710bdd2fd11b509bed3",
            "aad74f875d18425cb8d1ebee1b916109",
            "45e70c2ced504bb9a598322ab800581b",
            "0f6bd14519684eaab5e1a0ea0d728506",
            "ecd2687927fa446e8c54c59d215c6c8b",
            "3bb6e6a4ea4247959adab4a3b7bd718a",
            "e25643454c504174873cd1ee0ede15df",
            "48687ceb03a146f6b12b41bfd1fbdabf",
            "20bb9cdeb87540c5a2ce1c955245ec9a",
            "24d40d2960574a7795821616c544bd9f",
            "210add0eec544de98b2d1e307299667b",
            "05e29faf6ff248e08c7ccc9e00c27b72",
            "ef17c83a58e244459d4d237b56dd5fdc",
            "b4d1dfa6b0ab452ab7a6ed30d79379c5",
            "a714bc6b6f3a485f9c5d7a137bf482b7",
            "8f0b26de8b7f47a9a30e723604d61f41",
            "1273622b6c054667a8ddb55e41661889",
            "e29f16cf4d5843309575535a57f67116",
            "53602496a3774618aa1089dc22b0de30",
            "ef5b7ace430041cea364077c6d123b51",
            "6071c35d1af7441c88bacd96fa227258",
            "1993f091e76744b3884d4026312e9104",
            "33a22aaf172d460cbb2efef52f00e971",
            "1ad54484c811425bafec4941524f4980",
            "6e7dc6ea7c93498f8e5bf5c35f392acb",
            "7935dcf94ca24dc39487c489c5225f8d",
            "3d5613ff391c41ce81a867876b95e28b",
            "ade7b8e25267415793e94476d7931746",
            "245d529f70bf465aa3d188bfc18f51d0",
            "04596412249948f49a49a1bcf32a8000",
            "88a8034544f74e15a11230477e401755",
            "27f9aa17b3c54f59bd14119f2322b934",
            "aa22fff203e04d34b246291e60c0bd1d",
            "3dd4dfba64b84ba8abdb39168d3b3be7",
            "119f25d80b0e4603af07d53315c2391f",
            "331fc72a95f841a4989e151cd687eeb0",
            "430ec9c79ab34c8ea8ada9f89fcc4bb1",
            "49f2e4881029490fb24321b6b612f623",
            "4e8bd94da2064d3c90e4d0dce0860be8",
            "00b344135da443ac90e6e6a0f53cfe0f",
            "b7a722597caf4924826bf4e26a9f88a9",
            "f79ca95608f84facba4c66f5b78f500e",
            "abbef0387df142c38c28fd862392ebec",
            "5377a48b4a1c4320971169bda6e6f5c0",
            "3d64495af2984517a41410c70c66f38d"
          ]
        },
        "id": "hg9XTDkIJzH_",
        "outputId": "0309c40e-b037-44f0-b2d1-8da457201b93"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "96956a440c09482a94b64bc9b0c465f8",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading config.json:   0%|          | 0.00/635 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "aad74f875d18425cb8d1ebee1b916109",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading pytorch_model.bin:   0%|          | 0.00/415M [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "ef17c83a58e244459d4d237b56dd5fdc",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading tokenizer_config.json:   0%|          | 0.00/200 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "1ad54484c811425bafec4941524f4980",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading vocab.txt:   0%|          | 0.00/226k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "119f25d80b0e4603af07d53315c2391f",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Downloading special_tokens_map.json:   0%|          | 0.00/112 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/plain": [
              "<transformers.pipelines.question_answering.QuestionAnsweringPipeline at 0x7effcf322f90>"
            ]
          },
          "execution_count": 8,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "from transformers import pipeline\n",
        "\n",
        "model_name = 'deepset/electra-base-squad2'\n",
        "# load the reader model into a question-answering pipeline\n",
        "reader = pipeline(tokenizer=model_name, model=model_name, task='question-answering', device=device)\n",
        "reader"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "e14d89d6",
      "metadata": {
        "id": "e14d89d6"
      },
      "source": [
        "Now all the components we need are ready. Let's write some helper functions to execute our queries. The `get_context` function retrieves the context passages most likely to contain an answer from the Pinecone index, and the `extract_answer` function extracts answers from these passages."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "id": "lyYaY3QEQiHZ",
      "metadata": {
        "id": "lyYaY3QEQiHZ"
      },
      "outputs": [],
      "source": [
        "# gets context passages from the pinecone index\n",
        "def get_context(question, top_k):\n",
        "    # generate embeddings for the question\n",
        "    xq = retriever.encode(question).tolist()\n",
        "    # search pinecone index for context passage with the answer\n",
        "    xc = index.query(vector=xq, top_k=top_k, include_metadata=True)\n",
        "    # extract the context passage from pinecone search result\n",
        "    c = [x[\"metadata\"]['context'] for x in xc[\"matches\"]]\n",
        "    return c"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "id": "Dc9VYOiUQA7B",
      "metadata": {
        "id": "Dc9VYOiUQA7B"
      },
      "outputs": [],
      "source": [
        "from pprint import pprint\n",
        "\n",
        "# extracts answer from the context passage\n",
        "def extract_answer(question, context):\n",
        "    results = []\n",
        "    for c in context:\n",
        "        # feed the reader the question and contexts to extract answers\n",
        "        answer = reader(question=question, context=c)\n",
        "        # add the context to answer dict for printing both together\n",
        "        answer[\"context\"] = c\n",
        "        results.append(answer)\n",
        "    # sort the results by the reader model's confidence score and print them\n",
        "    pprint(sorted(results, key=lambda x: x['score'], reverse=True))"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "id": "5E3a3dkJ5ZQD",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "5E3a3dkJ5ZQD",
        "outputId": "9c49972e-d87b-47f8-b17a-92c616d14f3f"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "['Egypt was producing 691,000 bbl/d of oil and 2,141.05 Tcf of natural gas (in 2013), which makes Egypt as the largest oil producer not member of the Organization of the Petroleum Exporting Countries (OPEC) and the second-largest dry natural gas producer in Africa. In 2013, Egypt was the largest consumer of oil and natural gas in Africa, as more than 20% of total oil consumption and more than 40% of total dry natural gas consumption in Africa. Also, Egypt possesses the largest oil refinery capacity in Africa 726,000 bbl/d (in 2012). Egypt is currently planning to build its first nuclear power plant in El Dabaa city, northern Egypt.']"
            ]
          },
          "execution_count": 11,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "question = \"How much oil is Egypt producing in a day?\"\n",
        "context = get_context(question, top_k=1)\n",
        "context"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "heKNVbWQ_LtC",
      "metadata": {
        "id": "heKNVbWQ_LtC"
      },
      "source": [
        "As we can see, the retriever works as expected, returning the context passage that contains the answer to our question. Now let's use the reader to extract the exact answer from it."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "id": "DQ4GWdbMSjPl",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "DQ4GWdbMSjPl",
        "outputId": "73eabb60-e42a-4983-ed7f-a90e51eb9781"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "[{'answer': '691,000 bbl/d',\n",
            "  'context': 'Egypt was producing 691,000 bbl/d of oil and 2,141.05 Tcf of '\n",
            "             'natural gas (in 2013), which makes Egypt as the largest oil '\n",
            "             'producer not member of the Organization of the Petroleum '\n",
            "             'Exporting Countries (OPEC) and the second-largest dry natural '\n",
            "             'gas producer in Africa. In 2013, Egypt was the largest consumer '\n",
            "             'of oil and natural gas in Africa, as more than 20% of total oil '\n",
            "             'consumption and more than 40% of total dry natural gas '\n",
            "             'consumption in Africa. Also, Egypt possesses the largest oil '\n",
            "             'refinery capacity in Africa 726,000 bbl/d (in 2012). Egypt is '\n",
            "             'currently planning to build its first nuclear power plant in El '\n",
            "             'Dabaa city, northern Egypt.',\n",
            "  'end': 33,\n",
            "  'score': 0.9999852180480957,\n",
            "  'start': 20}]\n"
          ]
        }
      ],
      "source": [
        "extract_answer(question, context)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "fMD_ABuDAyhN",
      "metadata": {
        "id": "fMD_ABuDAyhN"
      },
      "source": [
        "The reader model extracted the correct answer, *691,000 bbl/d*, from the context passage with a confidence score above 0.99. Let's run a few more queries."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 13,
      "id": "_4NRgV4mGWoj",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "_4NRgV4mGWoj",
        "outputId": "4140fe3e-32b9-42ad-8631-303d07df6dda"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "[{'answer': 'Hurley and Chen',\n",
            "  'context': 'According to a story that has often been repeated in the media, '\n",
            "             'Hurley and Chen developed the idea for YouTube during the early '\n",
            "             'months of 2005, after they had experienced difficulty sharing '\n",
            "             \"videos that had been shot at a dinner party at Chen's apartment \"\n",
            "             'in San Francisco. Karim did not attend the party and denied that '\n",
            "             'it had occurred, but Chen commented that the idea that YouTube '\n",
            "             'was founded after a dinner party \"was probably very strengthened '\n",
            "             'by marketing ideas around creating a story that was very '\n",
            "             'digestible\".',\n",
            "  'end': 79,\n",
            "  'score': 0.9999276399612427,\n",
            "  'start': 64}]\n"
          ]
        }
      ],
      "source": [
        "question = \"What are the first names of the men that invented youtube?\"\n",
        "context = get_context(question, top_k=1)\n",
        "extract_answer(question, context)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 14,
      "id": "juXlctWgJgMF",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "juXlctWgJgMF",
        "outputId": "e25d311a-5ff5-4b5a-f1f2-f07bf606a076"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "[{'answer': 'his theories of special relativity and general relativity',\n",
            "  'context': 'Albert Einstein is known for his theories of special relativity '\n",
            "             'and general relativity. He also made important contributions to '\n",
            "             'statistical mechanics, especially his mathematical treatment of '\n",
            "             'Brownian motion, his resolution of the paradox of specific '\n",
            "             'heats, and his connection of fluctuations and dissipation. '\n",
            "             'Despite his reservations about its interpretation, Einstein also '\n",
            "             'made contributions to quantum mechanics and, indirectly, quantum '\n",
            "             'field theory, primarily through his theoretical studies of the '\n",
            "             'photon.',\n",
            "  'end': 86,\n",
            "  'score': 0.9500371217727661,\n",
            "  'start': 29}]\n"
          ]
        }
      ],
      "source": [
        "question = \"What is Albert Einstein famous for?\"\n",
        "context = get_context(question, top_k=1)\n",
        "extract_answer(question, context)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "OhCgeny_BVno",
      "metadata": {
        "id": "OhCgeny_BVno"
      },
      "source": [
        "Let's run another question, this time retrieving the top three context passages from the retriever."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 15,
      "id": "iXACn71xmett",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "iXACn71xmett",
        "outputId": "18463177-9144-4145-ef00-1bef3ae175c4"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "[{'answer': 'Armstrong',\n",
            "  'context': 'The trip to the Moon took just over three days. After achieving '\n",
            "             'orbit, Armstrong and Aldrin transferred into the Lunar Module, '\n",
            "             'named Eagle, and after a landing gear inspection by Collins '\n",
            "             'remaining in the Command/Service Module Columbia, began their '\n",
            "             'descent. After overcoming several computer overload alarms '\n",
            "             'caused by an antenna switch left in the wrong position, and a '\n",
            "             'slight downrange error, Armstrong took over manual flight '\n",
            "             'control at about 180 meters (590 ft), and guided the Lunar '\n",
            "             'Module to a safe landing spot at 20:18:04 UTC, July 20, 1969 '\n",
            "             '(3:17:04 pm CDT). The first humans on the Moon would wait '\n",
            "             'another six hours before they ventured out of their craft. At '\n",
            "             '02:56 UTC, July 21 (9:56 pm CDT July 20), Armstrong became the '\n",
            "             'first human to set foot on the Moon.',\n",
            "  'end': 80,\n",
            "  'score': 0.9998037815093994,\n",
            "  'start': 71},\n",
            " {'answer': 'Aldrin',\n",
            "  'context': 'The first step was witnessed by at least one-fifth of the '\n",
            "             'population of Earth, or about 723 million people. His first '\n",
            "             \"words when he stepped off the LM's landing footpad were, \"\n",
            "             '\"That\\'s one small step for [a] man, one giant leap for '\n",
            "             'mankind.\" Aldrin joined him on the surface almost 20 minutes '\n",
            "             'later. Altogether, they spent just under two and one-quarter '\n",
            "             'hours outside their craft. The next day, they performed the '\n",
            "             'first launch from another celestial body, and rendezvoused back '\n",
            "             'with Columbia.',\n",
            "  'end': 246,\n",
            "  'score': 0.6958656907081604,\n",
            "  'start': 240},\n",
            " {'answer': 'Frank Borman',\n",
            "  'context': 'On December 21, 1968, Frank Borman, James Lovell, and William '\n",
            "             'Anders became the first humans to ride the Saturn V rocket into '\n",
            "             'space on Apollo 8. They also became the first to leave low-Earth '\n",
            "             'orbit and go to another celestial body, and entered lunar orbit '\n",
            "             'on December 24. They made ten orbits in twenty hours, and '\n",
            "             'transmitted one of the most watched TV broadcasts in history, '\n",
            "             'with their Christmas Eve program from lunar orbit, that '\n",
            "             'concluded with a reading from the biblical Book of Genesis. Two '\n",
            "             'and a half hours after the broadcast, they fired their engine to '\n",
            "             'perform the first trans-Earth injection to leave lunar orbit and '\n",
            "             'return to the Earth. Apollo 8 safely landed in the Pacific ocean '\n",
            "             \"on December 27, in NASA's first dawn splashdown and recovery.\",\n",
            "  'end': 34,\n",
            "  'score': 0.49247056245803833,\n",
            "  'start': 22}]\n"
          ]
        }
      ],
      "source": [
        "question = \"Who was the first person to set foot on the Moon?\"\n",
        "context = get_context(question, top_k=3)\n",
        "extract_answer(question, context)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "c9oCWHy0tPpV",
      "metadata": {
        "id": "c9oCWHy0tPpV"
      },
      "source": [
        "The results look good: the reader correctly extracts Armstrong from the most relevant context with a near-certain score, while the less relevant contexts yield lower-scoring answers."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "l8l-jZ_ut_zs",
      "metadata": {
        "id": "l8l-jZ_ut_zs"
      },
      "source": [
        "# Example Application"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "NDjFUSphuHBE",
      "metadata": {
        "id": "NDjFUSphuHBE"
      },
      "source": [
        "To try a live demo of this approach, see this [example application](https://huggingface.co/spaces/pinecone/extractive-question-answering)."
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "collapsed_sections": [],
      "name": "question_answering.ipynb",
      "provenance": []
    },
    "environment": {
      "name": "tf2-gpu.2-3.m65",
      "type": "gcloud",
      "uri": "gcr.io/deeplearning-platform-release/tf2-gpu.2-3:m65"
    },
    "gpuClass": "standard",
    "kernelspec": {
      "display_name": "base",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.8.13 (default, Mar 28 2022, 06:59:08) [MSC v.1916 64 bit (AMD64)]"
    },
    "papermill": {
      "default_parameters": {},
      "duration": 333.240754,
      "end_time": "2021-04-15T21:12:11.363566",
      "environment_variables": {},
      "exception": null,
      "input_path": "/notebooks/question_answering/question_answering.ipynb",
      "output_path": "/notebooks/tmp/question_answering/question_answering.ipynb",
      "parameters": {},
      "start_time": "2021-04-15T21:06:38.122812",
      "version": "2.3.3"
    },
    "vscode": {
      "interpreter": {
        "hash": "5fe10bf018ef3e697f9035d60bf60847932a12bface18908407fd371fe880db9"
      }
    }
  },
  "nbformat": 4,
  "nbformat_minor": 5
}