{
  "cells": [
    {
      "attachments": {},
      "cell_type": "markdown",
      "id": "3b9756e4",
      "metadata": {},
      "source": [
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/search/semantic-search/deduplication/deduplication_scholarly_articles.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/search/semantic-search/deduplication/deduplication_scholarly_articles.ipynb)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "a4e2fc01",
      "metadata": {
        "id": "a4e2fc01",
        "papermill": {
          "duration": 0.063978,
          "end_time": "2021-04-22T02:05:48.011289",
          "exception": false,
          "start_time": "2021-04-22T02:05:47.947311",
          "status": "completed"
        },
        "tags": []
      },
      "source": [
        "# Document Deduplication with Similarity Search"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "12df02b4",
      "metadata": {
        "id": "12df02b4",
        "papermill": {
          "duration": 0.048328,
          "end_time": "2021-04-22T02:05:48.113684",
          "exception": false,
          "start_time": "2021-04-22T02:05:48.065356",
          "status": "completed"
        },
        "tags": []
      },
      "source": [
        "This notebook demonstrates how to use Pinecone's similarity search to create a simple application to identify duplicate documents. \n",
        "\n",
        "The goal is to create a data deduplication application for eliminating near-duplicate copies of academic texts. In this example, we will perform the deduplication of a given text in two steps. First, we will sift out a small set of candidate texts using a similarity-search service. Then, we will apply a near-duplicate detector to these candidates.\n",
        "\n",
        "The similarity search will use a vector representation of the texts. With this, semantic similarity is translated to proximity in a vector space. For detecting near-duplicates, we will employ a classification model that examines the raw text. "
      ]
    },
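    {
      "cell_type": "markdown",
      "id": "near-dup-sketch",
      "metadata": {},
      "source": [
        "Before building the pipeline, it helps to see the second step (near-duplicate detection over raw text) in miniature. The sketch below uses plain character-shingle Jaccard similarity; it is an illustration only, not the detector used later in this notebook, and the example strings are made up:\n",
        "\n",
        "```python\n",
        "import re\n",
        "\n",
        "def shingles(text, n=3):\n",
        "    # Character n-gram shingles of the normalized text\n",
        "    text = re.sub(r\"\\s+\", \" \", text.lower()).strip()\n",
        "    return {text[i:i + n] for i in range(len(text) - n + 1)}\n",
        "\n",
        "def jaccard(a, b):\n",
        "    sa, sb = shingles(a), shingles(b)\n",
        "    return len(sa & sb) / len(sa | sb)\n",
        "\n",
        "# Near-identical texts score close to 1; unrelated texts score near 0\n",
        "dup_score = jaccard(\n",
        "    \"unobstructed vision requires a particular refractive lens\",\n",
        "    \"unobstructed vision requires a particular refractive lens.\",\n",
        ")\n",
        "```\n",
        "\n",
        "A similarity-search index makes this tractable at scale: instead of scoring every pair of documents, we score each query only against its top-k nearest candidates."
      ]
    },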
    {
      "cell_type": "markdown",
      "id": "d653024a",
      "metadata": {
        "id": "d653024a",
        "papermill": {
          "duration": 0.050676,
          "end_time": "2021-04-22T02:05:48.215811",
          "exception": false,
          "start_time": "2021-04-22T02:05:48.165135",
          "status": "completed"
        },
        "tags": []
      },
      "source": [
        "## Install Dependencies"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "id": "08cd7b6e",
      "metadata": {
        "id": "08cd7b6e",
        "papermill": {
          "duration": 113.78609,
          "end_time": "2021-04-22T02:07:42.055032",
          "exception": false,
          "start_time": "2021-04-22T02:05:48.268942",
          "status": "completed"
        },
        "tags": []
      },
      "outputs": [],
      "source": [
        "!pip install -qU \\\n",
        "    datasets==3.4.1 \\\n",
        "    datasketch==1.6.5 \\\n",
        "    gensim==4.3.3 \\\n",
        "    ipywidgets==8.1.5 \\\n",
        "    mmh3==5.1.0 \\\n",
        "    pinecone==6.0.2 \\\n",
        "    sentence-transformers==3.4.1"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "256e64e8",
      "metadata": {
        "id": "256e64e8",
        "papermill": {
          "duration": 0.046393,
          "end_time": "2021-04-22T02:08:11.899649",
          "exception": false,
          "start_time": "2021-04-22T02:08:11.853256",
          "status": "completed"
        },
        "tags": []
      },
      "source": [
        "## Download and Process Dataset"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "003340f7",
      "metadata": {
        "id": "003340f7",
        "papermill": {
          "duration": 0.043329,
          "end_time": "2021-04-22T02:08:11.990600",
          "exception": false,
          "start_time": "2021-04-22T02:08:11.947271",
          "status": "completed"
        },
        "tags": []
      },
      "source": [
        "This tutorial will use the [Deduplication Dataset 2020](https://core.ac.uk/documentation/dataset), which consists of 100,000 scholarly documents. We will use Hugging Face Datasets to download the dataset found at [*pinecone/core-2020-05-10-deduplication*](https://huggingface.co/datasets/pinecone/core-2020-05-10-deduplication)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "id": "cbbff1b5",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "cbbff1b5",
        "outputId": "dc197a02-8e6a-4a04-f424-4b65efaab817"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "Dataset({\n",
              "    features: ['core_id', 'doi', 'original_abstract', 'original_title', 'processed_title', 'processed_abstract', 'cat', 'labelled_duplicates'],\n",
              "    num_rows: 100000\n",
              "})"
            ]
          },
          "execution_count": 2,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "from datasets import load_dataset\n",
        "\n",
        "core = load_dataset(\"pinecone/core-2020-05-10-deduplication\", split=\"train\")\n",
        "core"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "7d835cb9",
      "metadata": {
        "id": "7d835cb9"
      },
      "source": [
        "We convert the dataset into a Pandas DataFrame like so:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "id": "1c57f15b",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 337
        },
        "id": "1c57f15b",
        "outputId": "742ca1e2-2e18-4bfd-a51b-5a9eb406e73f"
      },
      "outputs": [
        {
          "data": {
            "text/html": [
              "<div>\n",
              "<style scoped>\n",
              "    .dataframe tbody tr th:only-of-type {\n",
              "        vertical-align: middle;\n",
              "    }\n",
              "\n",
              "    .dataframe tbody tr th {\n",
              "        vertical-align: top;\n",
              "    }\n",
              "\n",
              "    .dataframe thead th {\n",
              "        text-align: right;\n",
              "    }\n",
              "</style>\n",
              "<table border=\"1\" class=\"dataframe\">\n",
              "  <thead>\n",
              "    <tr style=\"text-align: right;\">\n",
              "      <th></th>\n",
              "      <th>core_id</th>\n",
              "      <th>doi</th>\n",
              "      <th>original_abstract</th>\n",
              "      <th>original_title</th>\n",
              "      <th>processed_title</th>\n",
              "      <th>processed_abstract</th>\n",
              "      <th>cat</th>\n",
              "      <th>labelled_duplicates</th>\n",
              "    </tr>\n",
              "  </thead>\n",
              "  <tbody>\n",
              "    <tr>\n",
              "      <th>0</th>\n",
              "      <td>11251086</td>\n",
              "      <td>10.1016/j.ajhg.2007.12.013</td>\n",
              "      <td>Unobstructed vision requires a particular refr...</td>\n",
              "      <td>Mutation of solute carrier SLC16A12 associates...</td>\n",
              "      <td>mutation of solute carrier slc16a12 associates...</td>\n",
              "      <td>unobstructed vision refractive lens differenti...</td>\n",
              "      <td>exact_dup</td>\n",
              "      <td>[82332306]</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>1</th>\n",
              "      <td>11309751</td>\n",
              "      <td>10.1103/PhysRevLett.101.193002</td>\n",
              "      <td>Two-color multiphoton ionization of atomic hel...</td>\n",
              "      <td>Polarization control in two-color above-thresh...</td>\n",
              "      <td>polarization control in two-color above-thresh...</td>\n",
              "      <td>multiphoton ionization helium combining extrem...</td>\n",
              "      <td>exact_dup</td>\n",
              "      <td>[147599753]</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>2</th>\n",
              "      <td>11311385</td>\n",
              "      <td>10.1016/j.ab.2011.02.013</td>\n",
              "      <td>Lectin\u2019s are proteins capable of recognising a...</td>\n",
              "      <td>Optimisation of the enzyme-linked lectin assay...</td>\n",
              "      <td>optimisation of the enzyme-linked lectin assay...</td>\n",
              "      <td>lectin\u2019s capable recognising oligosaccharide t...</td>\n",
              "      <td>exact_dup</td>\n",
              "      <td>[147603441]</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>3</th>\n",
              "      <td>11992240</td>\n",
              "      <td>10.1016/j.jpcs.2007.07.063</td>\n",
              "      <td>In this work, we present a detailed transmissi...</td>\n",
              "      <td>Vertical composition fluctuations in (Ga,In)(N...</td>\n",
              "      <td>vertical composition fluctuations in (ga,in)(n...</td>\n",
              "      <td>microscopy interfacial uniformity wells grown ...</td>\n",
              "      <td>exact_dup</td>\n",
              "      <td>[148653623]</td>\n",
              "    </tr>\n",
              "    <tr>\n",
              "      <th>4</th>\n",
              "      <td>11994990</td>\n",
              "      <td>10.1016/S0169-5983(03)00013-3</td>\n",
              "      <td>Three-dimensional (3D) oscillatory boundary la...</td>\n",
              "      <td>Three-dimensional streaming flows driven by os...</td>\n",
              "      <td>three-dimensional streaming flows driven by os...</td>\n",
              "      <td>oscillatory attached deformable walls boundari...</td>\n",
              "      <td>exact_dup</td>\n",
              "      <td>[148656283]</td>\n",
              "    </tr>\n",
              "  </tbody>\n",
              "</table>\n",
              "</div>"
            ],
            "text/plain": [
              "    core_id                             doi  \\\n",
              "0  11251086      10.1016/j.ajhg.2007.12.013   \n",
              "1  11309751  10.1103/PhysRevLett.101.193002   \n",
              "2  11311385        10.1016/j.ab.2011.02.013   \n",
              "3  11992240      10.1016/j.jpcs.2007.07.063   \n",
              "4  11994990   10.1016/S0169-5983(03)00013-3   \n",
              "\n",
              "                                   original_abstract  \\\n",
              "0  Unobstructed vision requires a particular refr...   \n",
              "1  Two-color multiphoton ionization of atomic hel...   \n",
              "2  Lectin\u2019s are proteins capable of recognising a...   \n",
              "3  In this work, we present a detailed transmissi...   \n",
              "4  Three-dimensional (3D) oscillatory boundary la...   \n",
              "\n",
              "                                      original_title  \\\n",
              "0  Mutation of solute carrier SLC16A12 associates...   \n",
              "1  Polarization control in two-color above-thresh...   \n",
              "2  Optimisation of the enzyme-linked lectin assay...   \n",
              "3  Vertical composition fluctuations in (Ga,In)(N...   \n",
              "4  Three-dimensional streaming flows driven by os...   \n",
              "\n",
              "                                     processed_title  \\\n",
              "0  mutation of solute carrier slc16a12 associates...   \n",
              "1  polarization control in two-color above-thresh...   \n",
              "2  optimisation of the enzyme-linked lectin assay...   \n",
              "3  vertical composition fluctuations in (ga,in)(n...   \n",
              "4  three-dimensional streaming flows driven by os...   \n",
              "\n",
              "                                  processed_abstract        cat  \\\n",
              "0  unobstructed vision refractive lens differenti...  exact_dup   \n",
              "1  multiphoton ionization helium combining extrem...  exact_dup   \n",
              "2  lectin\u2019s capable recognising oligosaccharide t...  exact_dup   \n",
              "3  microscopy interfacial uniformity wells grown ...  exact_dup   \n",
              "4  oscillatory attached deformable walls boundari...  exact_dup   \n",
              "\n",
              "  labelled_duplicates  \n",
              "0          [82332306]  \n",
              "1         [147599753]  \n",
              "2         [147603441]  \n",
              "3         [148653623]  \n",
              "4         [148656283]  "
            ]
          },
          "execution_count": 3,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "df = core.to_pandas()\n",
        "df.head()"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "PNZmOyrPe_7X",
      "metadata": {
        "id": "PNZmOyrPe_7X"
      },
      "source": [
        "We will use the following columns from the dataset for our task.\n",
        "1. **core_id** - Unique identifier for each article\n",
        "\n",
        "2. **processed_abstract** - Obtained by applying preprocessing steps (such as those in a [spaCy pipeline](https://spacy.io/usage/processing-pipelines)) to the article's **original_abstract** column.\n",
        "\n",
        "3. **processed_title** - Same as the abstract but for the title of the article.\n",
        "\n",
        "4. **cat** - Every article falls into one of three possible categories: 'exact_dup', 'near_dup', 'non_dup'\n",
        "\n",
        "5. **labelled_duplicates** - A list of core_ids of articles that are duplicates of the current article"
      ]
    },
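    {
      "cell_type": "markdown",
      "id": "labelled-duplicates-toy",
      "metadata": {},
      "source": [
        "To make **labelled_duplicates** concrete, here is a toy frame with the same columns (the IDs are made up, not from the real dataset) showing how an article's duplicates can be looked up:\n",
        "\n",
        "```python\n",
        "import pandas as pd\n",
        "\n",
        "# Toy frame mirroring the dataset's columns (IDs are made up)\n",
        "toy = pd.DataFrame({\n",
        "    \"core_id\": [\"1\", \"2\", \"3\"],\n",
        "    \"cat\": [\"exact_dup\", \"exact_dup\", \"non_dup\"],\n",
        "    \"labelled_duplicates\": [[\"2\"], [\"1\"], []],\n",
        "})\n",
        "\n",
        "# Fetch an article and every article listed as its duplicate\n",
        "article = toy[toy.core_id == \"1\"].iloc[0]\n",
        "dups = toy[toy.core_id.isin(article.labelled_duplicates)]\n",
        "```"
      ]
    },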
    {
      "cell_type": "markdown",
      "id": "92eb9871",
      "metadata": {
        "id": "92eb9871",
        "papermill": {
          "duration": 0.055556,
          "end_time": "2021-04-22T02:08:54.513132",
          "exception": false,
          "start_time": "2021-04-22T02:08:54.457576",
          "status": "completed"
        },
        "tags": []
      },
      "source": [
        "Let's calculate the frequency of duplicates per article. Observe that half of the articles have no duplicates, and only a small fraction of the articles have more than ten duplicates."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "id": "2262a535",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "2262a535",
        "outputId": "7359505f-a9d7-4dc3-c00a-c68b46081069",
        "papermill": {
          "duration": 0.101526,
          "end_time": "2021-04-22T02:08:54.677891",
          "exception": false,
          "start_time": "2021-04-22T02:08:54.576365",
          "status": "completed"
        },
        "tags": []
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "labelled_duplicates\n",
              "0     50000\n",
              "1     36166\n",
              "2      7620\n",
              "3      3108\n",
              "4      1370\n",
              "5       756\n",
              "6       441\n",
              "7       216\n",
              "8       108\n",
              "10       66\n",
              "9        60\n",
              "11       48\n",
              "13       28\n",
              "12       13\n",
              "Name: count, dtype: int64"
            ]
          },
          "execution_count": 4,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "lens = df.labelled_duplicates.apply(len)\n",
        "lens.value_counts()"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "6c22e6f5",
      "metadata": {
        "id": "6c22e6f5"
      },
      "source": [
        "Truncate the processed abstracts so that each record stays within Pinecone's metadata size limit when we upsert later."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "id": "e752cc5f",
      "metadata": {
        "id": "e752cc5f"
      },
      "outputs": [],
      "source": [
        "# Ensure that no processed abstracts exceed the maximum length for upsert to Pinecone\n",
        "df[\"processed_abstract\"] = df[\"processed_abstract\"].str[:8000]"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "fe615876",
      "metadata": {
        "id": "fe615876",
        "papermill": {
          "duration": 0.048605,
          "end_time": "2021-04-22T02:08:54.777369",
          "exception": false,
          "start_time": "2021-04-22T02:08:54.728764",
          "status": "completed"
        },
        "tags": []
      },
      "source": [
        "We will make use of the text data to create vectors for every article. We combine the **processed_abstract** and **processed_title** of the article to create a new **combined_text** column. "
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "id": "040eaa39",
      "metadata": {
        "id": "040eaa39",
        "papermill": {
          "duration": 2.404613,
          "end_time": "2021-04-22T02:08:57.227625",
          "exception": false,
          "start_time": "2021-04-22T02:08:54.823012",
          "status": "completed"
        },
        "tags": []
      },
      "outputs": [],
      "source": [
        "# Define a new column for calculating embeddings\n",
        "df[\"combined_text\"] = df[\"processed_title\"] + \" \" + df[\"processed_abstract\"]"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "3062b39e",
      "metadata": {},
      "source": [
        "## Initialize the Client\n",
        "\n",
        "Now we need a place to store these embeddings and to run an efficient vector search over them. For that we use Pinecone: grab a [free API key](https://app.pinecone.io/) and enter it below, where we initialize the Pinecone client."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 18,
      "id": "2a862ba8",
      "metadata": {},
      "outputs": [],
      "source": [
        "import os\n",
        "from pinecone import Pinecone\n",
        "\n",
        "# Get API key from https://app.pinecone.io\n",
        "api_key = os.environ.get(\"PINECONE_API_KEY\")\n",
        "\n",
        "# Configure client\n",
        "pc = Pinecone(api_key=api_key)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "fe50e7e8",
      "metadata": {},
      "source": [
        "## Create the Index\n",
        "\n",
        "Now we set up our index specification, which defines the cloud provider and region where the index will be deployed. You can find a list of all [available providers and regions here](https://docs.pinecone.io/troubleshooting/available-cloud-regions)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "037a3a3b",
      "metadata": {},
      "outputs": [],
      "source": [
        "index_name = \"deduplication\""
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "50a7ca9f",
      "metadata": {},
      "outputs": [],
      "source": [
        "from pinecone import ServerlessSpec\n",
        "\n",
        "# Check if index already exists (it shouldn't if this is first time running the notebook)\n",
        "if not pc.has_index(name=index_name):\n",
        "    # If does not exist, create index\n",
        "    pc.create_index(\n",
        "        index_name,\n",
        "        dimension=300,\n",
        "        metric=\"cosine\",\n",
        "        spec=ServerlessSpec(cloud=\"aws\", region=\"us-east-1\"),\n",
        "    )\n",
        "\n",
        "# Instantiate the index client\n",
        "index = pc.Index(name=index_name)\n",
        "# View index stats\n",
        "index.describe_index_stats()"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "2b39da8f",
      "metadata": {
        "id": "2b39da8f",
        "papermill": {
          "duration": 0.05703,
          "end_time": "2021-04-22T02:08:57.347510",
          "exception": false,
          "start_time": "2021-04-22T02:08:57.290480",
          "status": "completed"
        },
        "tags": []
      },
      "source": [
        "## Initialize Embedding Model"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "e3f5f737",
      "metadata": {
        "id": "e3f5f737",
        "papermill": {
          "duration": 0.046633,
          "end_time": "2021-04-22T02:08:57.442378",
          "exception": false,
          "start_time": "2021-04-22T02:08:57.395745",
          "status": "completed"
        },
        "tags": []
      },
      "source": [
        "We will use the [Average Word Embeddings GloVe](https://nlp.stanford.edu/projects/glove/) model (300-dimensional, matching the index dimension above) to transform text into vector embeddings. We then upload the embeddings into the Pinecone vector index."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "bfae598d",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "bfae598d",
        "outputId": "f7190757-f6f4-432c-d373-167a3a134898",
        "papermill": {
          "duration": 110.529503,
          "end_time": "2021-04-22T02:10:48.017756",
          "exception": false,
          "start_time": "2021-04-22T02:08:57.488253",
          "status": "completed"
        },
        "tags": []
      },
      "outputs": [],
      "source": [
        "import torch\n",
        "from sentence_transformers import SentenceTransformer\n",
        "\n",
        "# set device to GPU if available\n",
        "device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
        "model = SentenceTransformer(\"average_word_embeddings_glove.6B.300d\", device=device)\n",
        "model"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "53d9cb5c",
      "metadata": {
        "id": "53d9cb5c"
      },
      "source": [
        "## Generate Embeddings and Upsert"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "K8QdvkFeF2ri",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 101,
          "referenced_widgets": [
            "d0d195ed3cb14569b7e47356319394f5",
            "0c34aba8a1e94d078456517c8edcaf45",
            "f59514b176834a0bbc5302a2e8e972d8",
            "b86c02abd2454a448c0b2ba5b902ad91",
            "459291497dc349beba7c19ffde670457",
            "8d7cd6ea4b7c47049c0f972f0a3f8341",
            "1d0d7fa557a843cea91ef3948f89b251",
            "9640d3f2d3144870a160458c2ed3b93a",
            "488bfd3ac6f447e59c5a4fb16cfed5da",
            "0229c86e24874276a8be09c5e689dace",
            "72dcf6d1a3ee42429cd31e69b376b2be"
          ]
        },
        "id": "K8QdvkFeF2ri",
        "outputId": "af65779e-d94b-45bf-9ef3-f674245197a4"
      },
      "outputs": [],
      "source": [
        "from tqdm.auto import tqdm\n",
        "\n",
        "# We will use batches of 256\n",
        "batch_size = 256\n",
        "for i in tqdm(range(0, len(df), batch_size)):\n",
        "    # Find end of batch\n",
        "    i_end = min(i + batch_size, len(df))\n",
        "\n",
        "    # Extract batch\n",
        "    batch = df.iloc[i:i_end]\n",
        "\n",
        "    # Generate embeddings for batch\n",
        "    emb = model.encode(batch[\"combined_text\"].to_list()).tolist()\n",
        "\n",
        "    # extract both indexed and not indexed metadata\n",
        "    meta = batch[[\"processed_abstract\"]].to_dict(orient=\"records\")\n",
        "\n",
        "    # create IDs\n",
        "    ids = batch.core_id.astype(str)\n",
        "\n",
        "    # add all to upsert list\n",
        "    to_upsert = list(zip(ids, emb, meta))\n",
        "\n",
        "    # upsert/insert these records to pinecone\n",
        "    _ = index.upsert(vectors=to_upsert)\n",
        "\n",
        "# check that we have all vectors in index\n",
        "index.describe_index_stats()"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "065632df",
      "metadata": {
        "id": "065632df",
        "papermill": {
          "duration": 0.052509,
          "end_time": "2021-04-22T02:12:47.521808",
          "exception": false,
          "start_time": "2021-04-22T02:12:47.469299",
          "status": "completed"
        },
        "tags": []
      },
      "source": [
        "## Searching for Candidates\n",
        "\n",
        "Now that we have created vectors for the articles and inserted them into the index, we will create a test set for querying. For each article in the test set we will query the index for the most similar articles; these are the candidates on which we will perform the near-duplicate classification step.\n",
        "\n",
        "Below, we list statistics of the number of duplicates per article in the resulting test set."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "1cc9e04c",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "1cc9e04c",
        "outputId": "190ee1e7-7660-442c-f031-b82eed05f214",
        "papermill": {
          "duration": 0.542214,
          "end_time": "2021-04-22T02:12:48.113744",
          "exception": false,
          "start_time": "2021-04-22T02:12:47.571530",
          "status": "completed"
        },
        "tags": []
      },
      "outputs": [],
      "source": [
        "import math\n",
        "\n",
        "# Create a sample from the dataset\n",
        "SAMPLE_FRACTION = 0.002\n",
        "test_documents = (\n",
        "    df.groupby(df.labelled_duplicates.map(len))\n",
        "    .apply(lambda x: x.head(math.ceil(len(x) * SAMPLE_FRACTION)))\n",
        "    .reset_index(drop=True)\n",
        ")\n",
        "\n",
        "print(\"Number of documents with specified number of duplicates:\")\n",
        "lens = test_documents.labelled_duplicates.apply(len)\n",
        "lens.value_counts()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "e7845a8f",
      "metadata": {
        "id": "e7845a8f",
        "papermill": {
          "duration": 1.784915,
          "end_time": "2021-04-22T02:12:49.951451",
          "exception": false,
          "start_time": "2021-04-22T02:12:48.166536",
          "status": "completed"
        },
        "tags": []
      },
      "outputs": [],
      "source": [
        "# Use the model to create embeddings for test articles, which will be the query vectors\n",
        "query_vectors = model.encode(test_documents.combined_text.to_list()).tolist()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "83203179",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 49,
          "referenced_widgets": [
            "276ccf94f4434df28cf2bde9a5acabe6",
            "3351594361214bd8ac8810fa489a1e6a",
            "217ffc1cb9934c7bafec4afe290691e6",
            "982b6ff167af4625a1615d0a074a726c",
            "58565b3d6baf477f9ed0e1450476ced8",
            "4d27d3e4e55f491ebae82881c45d759c",
            "b3c0fc09a8684987869271af4cc8a6d5",
            "126b2929a87440e4aa581de452139810",
            "0d147f8afd2049c1aa4d33ca1cc5fff3",
            "57ffad82cd3e4b4b9aab8355a40068a2",
            "cf7ae2bc4705457ebc5ec419f4db7c69"
          ]
        },
        "id": "83203179",
        "outputId": "05ebf6f2-186e-484a-8c94-aacac5df9d9d",
        "papermill": {
          "duration": 2.611295,
          "end_time": "2021-04-22T02:12:52.620658",
          "exception": false,
          "start_time": "2021-04-22T02:12:50.009363",
          "status": "completed"
        },
        "tags": []
      },
      "outputs": [],
      "source": [
        "# Query the vector index\n",
        "query_results = []\n",
        "for xq in tqdm(query_vectors):\n",
        "    query_res = index.query(vector=xq, top_k=100, include_metadata=True)\n",
        "    query_results.append(query_res)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "f07953c6",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 49,
          "referenced_widgets": [
            "9a00c7a642f64dfca4fa4322d6ff7fd6",
            "70f54eba2c9a44ac81d76a949685a59c",
            "039f100022de4ef9b99737ab9c4dfa9d",
            "2bbd1c4e52094a398bc754ff3cd42e76",
            "302b0ee16d0c4c0ab6b2cb2725054e5b",
            "368a611a07524465a6a30d8b32c01e10",
            "dcbf8fb4b81f4c69abdeaaf1639da134",
            "1f7b99a26ddc428c8eca1a45e84ebb15",
            "b8a4db069f2643aab0d80ff6a8b3f1dd",
            "886e76782a5a441f942557db1d490637",
            "f319abb792634f2c8bf3d9155ca3ce3b"
          ]
        },
        "id": "f07953c6",
        "outputId": "55616a87-0ec7-4b22-cfc2-511a43e5eb87",
        "papermill": {
          "duration": 8.784046,
          "end_time": "2021-04-22T02:13:01.458467",
          "exception": false,
          "start_time": "2021-04-22T02:12:52.674421",
          "status": "completed"
        },
        "tags": []
      },
      "outputs": [],
      "source": [
        "# Save all retrieval recalls into a list\n",
        "recalls = []\n",
        "\n",
        "for id, res in tqdm(list(zip(test_documents.core_id.values, query_results))):\n",
        "    # Find document with id in labelled dataset\n",
        "    labeled_df = df[df.core_id.astype(str) == str(id)]\n",
        "    # Calculate the retrieval recall\n",
        "    top_k_ids = {match.id for match in res.matches}\n",
        "    labelled_duplicates = set(labeled_df.labelled_duplicates.values[0])\n",
        "    intersection = top_k_ids.intersection(labelled_duplicates)\n",
        "    if len(labelled_duplicates) != 0:\n",
        "        recalls.append(len(intersection) / len(labelled_duplicates))"
      ]
    },
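    {
      "cell_type": "markdown",
      "id": "recall-definition",
      "metadata": {},
      "source": [
        "For each query article, the retrieval recall computed above is the fraction of its labelled duplicates that appear among the top-100 search results: $\\mathrm{recall} = |\\text{top-}k \\cap \\text{duplicates}| \\, / \\, |\\text{duplicates}|$. Articles with no labelled duplicates are skipped, since recall is undefined for them."
      ]
    },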
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "e96d8971",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "e96d8971",
        "outputId": "1a002fd6-30fa-446e-f7c8-e210465ced88"
      },
      "outputs": [],
      "source": [
        "import statistics\n",
        "\n",
        "print(f\"Mean retrieval recall: {statistics.mean(recalls)}\")\n",
        "print(f\"Standard deviation: {statistics.stdev(recalls)}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "53ab548f",
      "metadata": {
        "id": "53ab548f",
        "papermill": {
          "duration": 0.09082,
          "end_time": "2021-04-22T02:13:01.816697",
          "exception": false,
          "start_time": "2021-04-22T02:13:01.725877",
          "status": "completed"
        },
        "tags": []
      },
      "source": [
        "### Running the Classifier \n",
        "\n",
        "We mentioned earlier that deduplication proceeds in two steps: searching to produce candidates, then classifying those candidates.\n",
        "\n",
        "We will use a deduplication classifier based on [LSH](https://en.wikipedia.org/wiki/Locality-sensitive_hashing) to detect duplicates among the results from the previous step. We will run it on a sample of those query results; feel free to try it on the entire set."
      ]
    },
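    {
      "cell_type": "markdown",
      "id": "lsh-toy-demo-md",
      "metadata": {},
      "source": [
        "Before running the classifier on the query results, here is a minimal, self-contained sketch of the idea (the `demo_texts` below are made-up examples): because MinHash operates on token sets, two texts made of the same words collide in an LSH index built with a Jaccard similarity threshold, while an unrelated text does not."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "lsh-toy-demo",
      "metadata": {},
      "outputs": [],
      "source": [
        "from datasketch.minhash import MinHash\n",
        "from datasketch.lsh import MinHashLSH\n",
        "\n",
        "# Toy corpus: doc-b uses the same words as doc-a in a different order,\n",
        "# so their token sets are identical; doc-c shares no words with either\n",
        "demo_texts = {\n",
        "    \"doc-a\": \"semantic search finds near duplicate scholarly articles\",\n",
        "    \"doc-b\": \"scholarly articles near duplicate semantic search finds\",\n",
        "    \"doc-c\": \"an unrelated sentence about cooking pasta at home\",\n",
        "}\n",
        "\n",
        "# Build a MinHash signature for each document from its token set\n",
        "demo_hashes = {}\n",
        "for doc_id, text in demo_texts.items():\n",
        "    m = MinHash(num_perm=128)\n",
        "    for token in set(text.split()):\n",
        "        m.update(token.encode(\"utf8\"))\n",
        "    demo_hashes[doc_id] = m\n",
        "\n",
        "# Index the signatures; pairs whose estimated Jaccard similarity\n",
        "# exceeds the threshold land in the same LSH buckets\n",
        "demo_lsh = MinHashLSH(threshold=0.7, num_perm=128)\n",
        "for doc_id, m in demo_hashes.items():\n",
        "    demo_lsh.insert(doc_id, m)\n",
        "\n",
        "# Querying with doc-a returns doc-a itself and its duplicate doc-b\n",
        "demo_lsh.query(demo_hashes[\"doc-a\"])"
      ]
    },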
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "KqsUt5S39xc2",
      "metadata": {
        "id": "KqsUt5S39xc2"
      },
      "outputs": [],
      "source": [
        "import pandas as pd\n",
        "from gensim.utils import tokenize\n",
        "from datasketch.minhash import MinHash\n",
        "from datasketch.lsh import MinHashLSH"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "dc1598e5",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "dc1598e5",
        "outputId": "93a0328e-1708-4512-a048-df7ffdf4e883",
        "papermill": {
          "duration": 90.772565,
          "end_time": "2021-04-22T02:14:32.677767",
          "exception": false,
          "start_time": "2021-04-22T02:13:01.905202",
          "status": "completed"
        },
        "scrolled": true,
        "tags": []
      },
      "outputs": [],
      "source": [
        "# Counters for correct/false predictions\n",
        "all_predictions = {\"Correct\": 0, \"False\": 0}\n",
        "predictions_per_category = {}\n",
        "\n",
        "# From the results in the previous step, we will take a subset to test our classifier\n",
        "query_sample = query_results[::10]\n",
        "ids_sample = test_documents.core_id.to_list()[::10]\n",
        "\n",
        "for id, res in zip(ids_sample, query_sample):\n",
        "\n",
        "    # Find document with id from the labelled dataset\n",
        "    labeled_df = df[df.core_id.astype(str) == str(id)]\n",
        "\n",
        "    # For every query article, collect the ids, abstracts, and similarity scores\n",
        "    # of its most similar articles, as returned by the search in the previous step\n",
        "\n",
        "    df_result = pd.DataFrame(\n",
        "        {\n",
        "            \"id\": [match.id for match in res.matches],\n",
        "            \"document\": [\n",
        "                match[\"metadata\"][\"processed_abstract\"] for match in res.matches\n",
        "            ],\n",
        "            \"score\": [match.score for match in res.matches],\n",
        "        }\n",
        "    )\n",
        "\n",
        "    print(df_result.head())\n",
        "\n",
        "    # Get the content and labels for our classifier from df_result\n",
        "    content = df_result.document.values\n",
        "    labels = list(df_result.id.values)\n",
        "\n",
        "    # Create MinHash for each of the documents in result set\n",
        "    min_hashes = {}\n",
        "    for label, text in zip(labels, content):\n",
        "        m = MinHash(num_perm=128, seed=5)\n",
        "        tokens = set(tokenize(text))\n",
        "        for d in tokens:\n",
        "            m.update(d.encode(\"utf8\"))\n",
        "        min_hashes[label] = m\n",
        "\n",
        "    # Create LSH index\n",
        "    lsh = MinHashLSH(\n",
        "        threshold=0.7,\n",
        "        num_perm=128,\n",
        "    )\n",
        "    for i, j in min_hashes.items():\n",
        "        lsh.insert(str(i), j)\n",
        "\n",
        "    query_minhash = min_hashes[str(id)]\n",
        "    duplicates = lsh.query(query_minhash)\n",
        "    duplicates.remove(str(id))\n",
        "\n",
        "    # Check whether the prediction matches the labeled duplicates. Here the ground truth is the set of duplicates from our original dataset\n",
        "    prediction = (\n",
        "        \"Correct\"\n",
        "        if set(labeled_df.labelled_duplicates.values[0]) == set(duplicates)\n",
        "        else \"False\"\n",
        "    )\n",
        "\n",
        "    # Add to all predictions\n",
        "    all_predictions[prediction] += 1\n",
        "\n",
        "    # Create and/or add to the specific category based on number of duplicates in original dataset\n",
        "    num_of_duplicates = len(labeled_df.labelled_duplicates.values[0])\n",
        "    if num_of_duplicates not in predictions_per_category:\n",
        "        predictions_per_category[num_of_duplicates] = [0, 0]\n",
        "\n",
        "    if prediction == \"Correct\":\n",
        "        predictions_per_category[num_of_duplicates][0] += 1\n",
        "    else:\n",
        "        predictions_per_category[num_of_duplicates][1] += 1\n",
        "\n",
        "    # Print the results for a document\n",
        "    print(\n",
        "        \"{}: expected: {}, predicted: {}, prediction: {}\".format(\n",
        "            id, labeled_df.labelled_duplicates.values[0], duplicates, prediction\n",
        "        )\n",
        "    )"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "d766dd65",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "d766dd65",
        "outputId": "059ce267-643c-48b1-d08a-1ede03c94942",
        "papermill": {
          "duration": 0.096165,
          "end_time": "2021-04-22T02:14:32.862012",
          "exception": false,
          "start_time": "2021-04-22T02:14:32.765847",
          "status": "completed"
        },
        "tags": []
      },
      "outputs": [],
      "source": [
        "all_predictions"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "3409a3ea",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "3409a3ea",
        "outputId": "e787be0b-7931-40e4-8c4d-3c58bd425c38",
        "papermill": {
          "duration": 0.104606,
          "end_time": "2021-04-22T02:14:33.059554",
          "exception": false,
          "start_time": "2021-04-22T02:14:32.954948",
          "status": "completed"
        },
        "tags": []
      },
      "outputs": [],
      "source": [
        "# Overall accuracy on the test sample\n",
        "accuracy = round(\n",
        "    all_predictions[\"Correct\"]\n",
        "    / (all_predictions[\"Correct\"] + all_predictions[\"False\"]),\n",
        "    4,\n",
        ")\n",
        "accuracy"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "f8a4409a",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 206
        },
        "id": "f8a4409a",
        "outputId": "e9e8bd9d-e581-45c9-b73b-ea5dcfd488cf",
        "papermill": {
          "duration": 0.103948,
          "end_time": "2021-04-22T02:14:33.250325",
          "exception": false,
          "start_time": "2021-04-22T02:14:33.146377",
          "status": "completed"
        },
        "tags": []
      },
      "outputs": [],
      "source": [
        "# Print the prediction count for each class depending on the number of duplicates in labeled dataset\n",
        "pd.DataFrame.from_dict(\n",
        "    predictions_per_category, orient=\"index\", columns=[\"Correct\", \"False\"]\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "88377277",
      "metadata": {
        "id": "88377277",
        "papermill": {
          "duration": 0.087503,
          "end_time": "2021-04-22T02:14:33.422505",
          "exception": false,
          "start_time": "2021-04-22T02:14:33.335002",
          "status": "completed"
        },
        "tags": []
      },
      "source": [
        "## Delete the Index\n",
        "Delete the index once you no longer need it. Once an index is deleted, it cannot be recovered.\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "418a418c",
      "metadata": {
        "id": "418a418c",
        "papermill": {
          "duration": 12.913733,
          "end_time": "2021-04-22T02:14:46.427686",
          "exception": false,
          "start_time": "2021-04-22T02:14:33.513953",
          "status": "completed"
        },
        "tags": []
      },
      "outputs": [],
      "source": [
        "# Delete the index if it's not going to be used anymore\n",
        "pc.delete_index(index_name)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "f2280d6e",
      "metadata": {
        "id": "f2280d6e",
        "papermill": {
          "duration": 0.096714,
          "end_time": "2021-04-22T02:14:46.625379",
          "exception": false,
          "start_time": "2021-04-22T02:14:46.528665",
          "status": "completed"
        },
        "tags": []
      },
      "source": [
        "## Summary\n",
        "\n",
        "In this notebook we demonstrated how to deduplicate a collection of over 100,000 articles using Pinecone. With articles embedded as vectors, you can use Pinecone's vector index to find similar articles. For each query article, we then apply an LSH classifier to the similar articles to identify duplicates. Overall, we showed that it is easy to combine Pinecone with article embedding models and duplicate classifiers to build a deduplication service.\n"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "collapsed_sections": [],
      "name": "deduplication_scholarly_articles1.ipynb",
      "provenance": []
    },
    "environment": {
      "name": "tf2-gpu.2-4.m61",
      "type": "gcloud",
      "uri": "gcr.io/deeplearning-platform-release/tf2-gpu.2-4:m61"
    },
    "gpuClass": "standard",
    "interpreter": {
      "hash": "b8e7999f96e1b425e2d542f21b571f5a4be3e97158b0b46ea1b2500df63956ce"
    },
    "kernelspec": {
      "display_name": "Python 3 (ipykernel)",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.12.9"
    },
    "papermill": {
      "default_parameters": {},
      "duration": 542.747435,
      "end_time": "2021-04-22T02:14:49.477923",
      "environment_variables": {},
      "exception": null,
      "input_path": "/notebooks/deduplication/deduplication_scholarly_articles.ipynb",
      "output_path": "/notebooks/tmp/deduplication/deduplication_scholarly_articles.ipynb",
      "parameters": {},
      "start_time": "2021-04-22T02:05:46.730488",
      "version": "2.3.3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 5
}