{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/search/semantic-search/spotify-podcast-search/spotify-podcast-search.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/search/semantic-search/spotify-podcast-search/spotify-podcast-search.ipynb)\n",
    "\n",
    "# Podcast Search\n",
    "\n",
    "In this notebook we will work through the techniques described by Spotify R&D on [how they implemented semantic search](https://engineering.atspotify.com/2022/03/introducing-natural-language-search-for-podcast-episodes/) (or *natural language search*) to improve the podcast discovery process for users."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Spotify used four data sources for training and evaluation:\n",
    "\n",
    "1. *(query, episode)* pairs from successful podcast searches (found in past search logs)\n",
     "2. Where a successful podcast search occurred *after* an initially unsuccessful search, *(query_prior_to_successful_reformulation, episode)* pairs were created. The idea is that these initial queries may be natural language queries that the user then reformulated to fit a more rigid search format.\n",
     "3. Synthetic queries generated from popular episode titles and descriptions. Spotify fine-tuned a BART model on MS MARCO and then used it to generate the queries, creating *(synthetic_query, episode)* pairs.\n",
     "4. A small curated set of *semantic* queries manually written for popular podcast episodes; this set was used for evaluation only.\n",
    "\n",
     "Unfortunately, we don't have access to Spotify's past search logs, which rules out emulating sources **1** and **2**. Source **4** would require manually curating a set of semantic queries, which we won't do here (feel free to do this yourself if you want).\n",
    "\n",
    "That leaves us with option **3**. This is probably the most interesting technique used by Spotify, and fortunately we can replicate it. All we need is a dataset containing podcast metadata, which we can [find here](https://www.kaggle.com/datasets/listennotes/all-podcast-episodes-published-in-december-2017).\n",
    "\n",
    "Before getting started we must install all prerequisites:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install -U kaggle sentence-transformers pinecone-client tqdm"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Data Download"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "We need the Kaggle API to download our podcast metadata dataset (installed earlier via `pip install kaggle`). An account and API key are needed; the key should be stored in the location displayed when attempting to `import kaggle` (if no error appears, the API key has already been added)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import kaggle"
   ]
  },
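  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If the import above raises an error, the credentials file is missing. As a rough sketch (assuming the default configuration, i.e. no `KAGGLE_CONFIG_DIR` override), the API looks for a `kaggle.json` file inside `~/.kaggle`, so we can check for it directly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "\n",
    "# default credentials location used by the Kaggle API (assumes the\n",
    "# KAGGLE_CONFIG_DIR environment variable is not set to override it)\n",
    "cred_path = Path.home() / '.kaggle' / 'kaggle.json'\n",
    "print('found' if cred_path.exists() else f'place kaggle.json at {cred_path}')"
   ]
  },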
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Once the API key is added, we move on to the data download."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from kaggle.api.kaggle_api_extended import KaggleApi\n",
    "api = KaggleApi()\n",
    "api.authenticate()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Kaggle hosts two different types of datasets, competition and standalone. We need to know which of these our podcasts dataset is because the download method differs between them. Fortunately, it's easy to identify from nothing more than the URL: competition datasets contain `/c/` at the start of the URL path, whereas standalone datasets contain `/datasets/`. Here are two examples:\n",
    "\n",
    "Competition: [https://www.kaggle.com/c/titanic](https://www.kaggle.com/c/titanic)\n",
    "\n",
    "Standalone: [https://www.kaggle.com/datasets/anandaramg/taxi-trip-data-nyc](https://www.kaggle.com/datasets/anandaramg/taxi-trip-data-nyc)\n",
    "\n",
    "If we take a look at the podcasts dataset page we will see that it is a *standalone dataset*:\n",
    "\n",
    "[https://www.kaggle.com/datasets/listennotes/all-podcast-episodes-published-in-december-2017](https://www.kaggle.com/datasets/listennotes/all-podcast-episodes-published-in-december-2017)\n",
    "\n",
    "And so we can download it using the `dataset_download_file` method, for which we must pass the dataset location (`listennotes/all-podcast-episodes-published-in-december-2017`), filename(s) (`podcasts.csv`, `episodes.csv`), and target save location (current directory, `./`)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "api.dataset_download_file(\n",
    "    'listennotes/all-podcast-episodes-published-in-december-2017',\n",
    "    file_name='podcasts.csv',\n",
    "    path='./'\n",
    ")\n",
    "api.dataset_download_file(\n",
    "    'listennotes/all-podcast-episodes-published-in-december-2017',\n",
    "    file_name='episodes.csv',\n",
    "    path='./'\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This will download both of our files as zip files, which we extract using the `zipfile` library."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import zipfile\n",
    "\n",
    "with zipfile.ZipFile('podcasts.csv.zip', 'r') as zipref:\n",
    "    zipref.extractall('./')\n",
    "with zipfile.ZipFile('episodes.csv.zip', 'r') as zipref:\n",
    "    zipref.extractall('./')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "We have two datasets here: `podcasts` describes the podcast shows themselves (title, description, and author), while `episodes` details specific episodes from those podcasts, including the episode title, description, publication date, etc."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>uuid</th>\n",
       "      <th>title</th>\n",
       "      <th>image</th>\n",
       "      <th>description</th>\n",
       "      <th>language</th>\n",
       "      <th>categories</th>\n",
       "      <th>website</th>\n",
       "      <th>author</th>\n",
       "      <th>itunes_id</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>8d62d3880db2425b890b986e58aca393</td>\n",
       "      <td>Ecommerce Conversations, by Practical Ecommerce</td>\n",
       "      <td>http://is4.mzstatic.com/image/thumb/Music6/v4/...</td>\n",
       "      <td>Listen in as the Practical Ecommerce editorial...</td>\n",
       "      <td>English</td>\n",
       "      <td>Technology</td>\n",
       "      <td>http://www.practicalecommerce.com</td>\n",
       "      <td>Practical Ecommerce</td>\n",
       "      <td>874457373</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>cbbefd691915468c90f87ab2f00473f9</td>\n",
       "      <td>Eat Sleep Code Podcast</td>\n",
       "      <td>http://is4.mzstatic.com/image/thumb/Music71/v4...</td>\n",
       "      <td>On the show we’ll be talking to passionate peo...</td>\n",
       "      <td>English</td>\n",
       "      <td>Tech News | Technology</td>\n",
       "      <td>http://developer.telerik.com/</td>\n",
       "      <td>Telerik</td>\n",
       "      <td>1015556393</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>73626ad1edb74dbb8112cd159bda86cf</td>\n",
       "      <td>SoundtrackAlley</td>\n",
       "      <td>http://is5.mzstatic.com/image/thumb/Music71/v4...</td>\n",
       "      <td>A podcast about soundtracks and movies from my...</td>\n",
       "      <td>English</td>\n",
       "      <td>Podcasting | Technology</td>\n",
       "      <td>https://soundtrackalley.podbean.com</td>\n",
       "      <td>Randy Andrews</td>\n",
       "      <td>1158188937</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>0f50631ebad24cedb2fee80950f37a1a</td>\n",
       "      <td>The Tech M&amp;A Podcast</td>\n",
       "      <td>http://is1.mzstatic.com/image/thumb/Music71/v4...</td>\n",
       "      <td>The Tech M&amp;A Podcast pulls from the best of th...</td>\n",
       "      <td>English</td>\n",
       "      <td>Business News | Technology | Tech News | Business</td>\n",
       "      <td>http://www.corumgroup.com</td>\n",
       "      <td>Timothy Goddard</td>\n",
       "      <td>538160025</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>69580e7b419045839ca07af06cf0d653</td>\n",
       "      <td>The Tech Informist - For fans of Apple, Google...</td>\n",
       "      <td>http://is4.mzstatic.com/image/thumb/Music62/v4...</td>\n",
       "      <td>The tech news show with two guys shooting the ...</td>\n",
       "      <td>English</td>\n",
       "      <td>Gadgets | Tech News | Technology</td>\n",
       "      <td>http://techinformist.com</td>\n",
       "      <td>The Tech Informist</td>\n",
       "      <td>916080498</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                               uuid  \\\n",
       "0  8d62d3880db2425b890b986e58aca393   \n",
       "1  cbbefd691915468c90f87ab2f00473f9   \n",
       "2  73626ad1edb74dbb8112cd159bda86cf   \n",
       "3  0f50631ebad24cedb2fee80950f37a1a   \n",
       "4  69580e7b419045839ca07af06cf0d653   \n",
       "\n",
       "                                               title  \\\n",
       "0    Ecommerce Conversations, by Practical Ecommerce   \n",
       "1                             Eat Sleep Code Podcast   \n",
       "2                                    SoundtrackAlley   \n",
       "3                               The Tech M&A Podcast   \n",
       "4  The Tech Informist - For fans of Apple, Google...   \n",
       "\n",
       "                                               image  \\\n",
       "0  http://is4.mzstatic.com/image/thumb/Music6/v4/...   \n",
       "1  http://is4.mzstatic.com/image/thumb/Music71/v4...   \n",
       "2  http://is5.mzstatic.com/image/thumb/Music71/v4...   \n",
       "3  http://is1.mzstatic.com/image/thumb/Music71/v4...   \n",
       "4  http://is4.mzstatic.com/image/thumb/Music62/v4...   \n",
       "\n",
       "                                         description language  \\\n",
       "0  Listen in as the Practical Ecommerce editorial...  English   \n",
       "1  On the show we’ll be talking to passionate peo...  English   \n",
       "2  A podcast about soundtracks and movies from my...  English   \n",
       "3  The Tech M&A Podcast pulls from the best of th...  English   \n",
       "4  The tech news show with two guys shooting the ...  English   \n",
       "\n",
       "                                          categories  \\\n",
       "0                                         Technology   \n",
       "1                             Tech News | Technology   \n",
       "2                            Podcasting | Technology   \n",
       "3  Business News | Technology | Tech News | Business   \n",
       "4                   Gadgets | Tech News | Technology   \n",
       "\n",
       "                               website               author   itunes_id  \n",
       "0    http://www.practicalecommerce.com  Practical Ecommerce   874457373  \n",
       "1        http://developer.telerik.com/              Telerik  1015556393  \n",
       "2  https://soundtrackalley.podbean.com        Randy Andrews  1158188937  \n",
       "3            http://www.corumgroup.com      Timothy Goddard   538160025  \n",
       "4             http://techinformist.com   The Tech Informist   916080498  "
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import pandas as pd\n",
    "\n",
    "podcasts = pd.read_csv('podcasts.csv')\n",
    "podcasts.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>title</th>\n",
       "      <th>audio</th>\n",
       "      <th>audio_length</th>\n",
       "      <th>description</th>\n",
       "      <th>pub_date</th>\n",
       "      <th>uuid</th>\n",
       "      <th>podcast_uuid</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>Piątek - 01 grudnia</td>\n",
       "      <td>https://cdneu.modlitwawdrodze.pl/prayers/MWD_2...</td>\n",
       "      <td>490</td>\n",
       "      <td>święci męczennicy jezuiccy Edmund Campion SJ, ...</td>\n",
       "      <td>2017-12-01 00:00:00+00</td>\n",
       "      <td>fd5d891411174c7ca953c1f54657c3eb</td>\n",
       "      <td>811c18cf575841b3bef4601978f17ca9</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>Sobota - 02 grudnia</td>\n",
       "      <td>https://cdneu.modlitwawdrodze.pl/prayers/MWD_2...</td>\n",
       "      <td>481</td>\n",
       "      <td>bł. Rafał Chyliński, prezbiter, Łk 21, 34-36</td>\n",
       "      <td>2017-12-02 00:00:00+00</td>\n",
       "      <td>5c28fa0a27b342cd92ff03c16a8019c2</td>\n",
       "      <td>811c18cf575841b3bef4601978f17ca9</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>Niedziela - 03 grudnia</td>\n",
       "      <td>https://cdneu.modlitwawdrodze.pl/prayers/MWD_2...</td>\n",
       "      <td>667</td>\n",
       "      <td>Pierwsza Niedziela Adwentu, Mk 13, 33-37</td>\n",
       "      <td>2017-12-03 00:00:00+00</td>\n",
       "      <td>efdc9f4f07fa4c4883f8848256066cec</td>\n",
       "      <td>811c18cf575841b3bef4601978f17ca9</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>Introduction to Luke</td>\n",
       "      <td>http://www.wgcr.net/images/TimelessTruths/TTT-...</td>\n",
       "      <td>1691</td>\n",
       "      <td>Luke 1:1-4 -</td>\n",
       "      <td>2017-12-03 11:30:05+00</td>\n",
       "      <td>cc2860165fa84d1092f6b45f19255a87</td>\n",
       "      <td>36ed4e62dcd94412a5211cc9bd76ba7c</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>Dear Science: Lightning, Dead Cats and Hand Sa...</td>\n",
       "      <td>http://95bfm.com/sites/default/files/291117_De...</td>\n",
       "      <td>1152</td>\n",
       "      <td>&lt;p&gt;Today on Dear Science with AUT's Allan Blac...</td>\n",
       "      <td>2017-12-27 11:00:00+00</td>\n",
       "      <td>69bd409e0469433581ccc76cf7b664ad</td>\n",
       "      <td>fa36a26a1879453f95da1379c737cd6d</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                               title  \\\n",
       "0                                Piątek - 01 grudnia   \n",
       "1                                Sobota - 02 grudnia   \n",
       "2                             Niedziela - 03 grudnia   \n",
       "3                               Introduction to Luke   \n",
       "4  Dear Science: Lightning, Dead Cats and Hand Sa...   \n",
       "\n",
       "                                               audio  audio_length  \\\n",
       "0  https://cdneu.modlitwawdrodze.pl/prayers/MWD_2...           490   \n",
       "1  https://cdneu.modlitwawdrodze.pl/prayers/MWD_2...           481   \n",
       "2  https://cdneu.modlitwawdrodze.pl/prayers/MWD_2...           667   \n",
       "3  http://www.wgcr.net/images/TimelessTruths/TTT-...          1691   \n",
       "4  http://95bfm.com/sites/default/files/291117_De...          1152   \n",
       "\n",
       "                                         description                pub_date  \\\n",
       "0  święci męczennicy jezuiccy Edmund Campion SJ, ...  2017-12-01 00:00:00+00   \n",
       "1       bł. Rafał Chyliński, prezbiter, Łk 21, 34-36  2017-12-02 00:00:00+00   \n",
       "2           Pierwsza Niedziela Adwentu, Mk 13, 33-37  2017-12-03 00:00:00+00   \n",
       "3                                       Luke 1:1-4 -  2017-12-03 11:30:05+00   \n",
       "4  <p>Today on Dear Science with AUT's Allan Blac...  2017-12-27 11:00:00+00   \n",
       "\n",
       "                               uuid                      podcast_uuid  \n",
       "0  fd5d891411174c7ca953c1f54657c3eb  811c18cf575841b3bef4601978f17ca9  \n",
       "1  5c28fa0a27b342cd92ff03c16a8019c2  811c18cf575841b3bef4601978f17ca9  \n",
       "2  efdc9f4f07fa4c4883f8848256066cec  811c18cf575841b3bef4601978f17ca9  \n",
       "3  cc2860165fa84d1092f6b45f19255a87  36ed4e62dcd94412a5211cc9bd76ba7c  \n",
       "4  69bd409e0469433581ccc76cf7b664ad  fa36a26a1879453f95da1379c737cd6d  "
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "episodes = pd.read_csv('episodes.csv')\n",
    "episodes.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Data Preparation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Spotify stated that their episode data consists of the podcast title, podcast description, episode title, episode description, and other metadata concatenated together. We will replicate this by first merging the *episodes* and *podcasts* dataframes with an *inner join* on the podcast ID features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>title_ep</th>\n",
       "      <th>audio</th>\n",
       "      <th>audio_length</th>\n",
       "      <th>description_ep</th>\n",
       "      <th>pub_date</th>\n",
       "      <th>uuid_ep</th>\n",
       "      <th>podcast_uuid</th>\n",
       "      <th>uuid_pod</th>\n",
       "      <th>title_pod</th>\n",
       "      <th>image</th>\n",
       "      <th>description_pod</th>\n",
       "      <th>language</th>\n",
       "      <th>categories</th>\n",
       "      <th>website</th>\n",
       "      <th>author</th>\n",
       "      <th>itunes_id</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>Piątek - 01 grudnia</td>\n",
       "      <td>https://cdneu.modlitwawdrodze.pl/prayers/MWD_2...</td>\n",
       "      <td>490</td>\n",
       "      <td>święci męczennicy jezuiccy Edmund Campion SJ, ...</td>\n",
       "      <td>2017-12-01 00:00:00+00</td>\n",
       "      <td>fd5d891411174c7ca953c1f54657c3eb</td>\n",
       "      <td>811c18cf575841b3bef4601978f17ca9</td>\n",
       "      <td>811c18cf575841b3bef4601978f17ca9</td>\n",
       "      <td>Modlitwa w drodze</td>\n",
       "      <td>http://is4.mzstatic.com/image/thumb/Music62/v4...</td>\n",
       "      <td>\\n\\t\\t\\tModlitwa w drodze to propozycja duchow...</td>\n",
       "      <td>Polish</td>\n",
       "      <td>Training | Spirituality | Education | Christia...</td>\n",
       "      <td>http://www.modlitwawdrodze.pl</td>\n",
       "      <td>Modlitwa w drodze</td>\n",
       "      <td>412783872</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>Sobota - 02 grudnia</td>\n",
       "      <td>https://cdneu.modlitwawdrodze.pl/prayers/MWD_2...</td>\n",
       "      <td>481</td>\n",
       "      <td>bł. Rafał Chyliński, prezbiter, Łk 21, 34-36</td>\n",
       "      <td>2017-12-02 00:00:00+00</td>\n",
       "      <td>5c28fa0a27b342cd92ff03c16a8019c2</td>\n",
       "      <td>811c18cf575841b3bef4601978f17ca9</td>\n",
       "      <td>811c18cf575841b3bef4601978f17ca9</td>\n",
       "      <td>Modlitwa w drodze</td>\n",
       "      <td>http://is4.mzstatic.com/image/thumb/Music62/v4...</td>\n",
       "      <td>\\n\\t\\t\\tModlitwa w drodze to propozycja duchow...</td>\n",
       "      <td>Polish</td>\n",
       "      <td>Training | Spirituality | Education | Christia...</td>\n",
       "      <td>http://www.modlitwawdrodze.pl</td>\n",
       "      <td>Modlitwa w drodze</td>\n",
       "      <td>412783872</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>Niedziela - 03 grudnia</td>\n",
       "      <td>https://cdneu.modlitwawdrodze.pl/prayers/MWD_2...</td>\n",
       "      <td>667</td>\n",
       "      <td>Pierwsza Niedziela Adwentu, Mk 13, 33-37</td>\n",
       "      <td>2017-12-03 00:00:00+00</td>\n",
       "      <td>efdc9f4f07fa4c4883f8848256066cec</td>\n",
       "      <td>811c18cf575841b3bef4601978f17ca9</td>\n",
       "      <td>811c18cf575841b3bef4601978f17ca9</td>\n",
       "      <td>Modlitwa w drodze</td>\n",
       "      <td>http://is4.mzstatic.com/image/thumb/Music62/v4...</td>\n",
       "      <td>\\n\\t\\t\\tModlitwa w drodze to propozycja duchow...</td>\n",
       "      <td>Polish</td>\n",
       "      <td>Training | Spirituality | Education | Christia...</td>\n",
       "      <td>http://www.modlitwawdrodze.pl</td>\n",
       "      <td>Modlitwa w drodze</td>\n",
       "      <td>412783872</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>Poniedziałek - 04 grudnia</td>\n",
       "      <td>https://cdneu.modlitwawdrodze.pl/prayers/MWD_2...</td>\n",
       "      <td>654</td>\n",
       "      <td>św. Jan Damasceński, prezbiter i doktor Kościo...</td>\n",
       "      <td>2017-12-04 00:00:00+00</td>\n",
       "      <td>a6034db279244d21a34c0723d1495fb8</td>\n",
       "      <td>811c18cf575841b3bef4601978f17ca9</td>\n",
       "      <td>811c18cf575841b3bef4601978f17ca9</td>\n",
       "      <td>Modlitwa w drodze</td>\n",
       "      <td>http://is4.mzstatic.com/image/thumb/Music62/v4...</td>\n",
       "      <td>\\n\\t\\t\\tModlitwa w drodze to propozycja duchow...</td>\n",
       "      <td>Polish</td>\n",
       "      <td>Training | Spirituality | Education | Christia...</td>\n",
       "      <td>http://www.modlitwawdrodze.pl</td>\n",
       "      <td>Modlitwa w drodze</td>\n",
       "      <td>412783872</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>Wtorek - 05 grudnia</td>\n",
       "      <td>https://cdneu.modlitwawdrodze.pl/prayers/MWD_2...</td>\n",
       "      <td>535</td>\n",
       "      <td>św. Saba Jerozolimski, prezbiter, Łk 10, 21-24</td>\n",
       "      <td>2017-12-05 00:00:00+00</td>\n",
       "      <td>c35d5236d451454fa0bb5e95a7137d35</td>\n",
       "      <td>811c18cf575841b3bef4601978f17ca9</td>\n",
       "      <td>811c18cf575841b3bef4601978f17ca9</td>\n",
       "      <td>Modlitwa w drodze</td>\n",
       "      <td>http://is4.mzstatic.com/image/thumb/Music62/v4...</td>\n",
       "      <td>\\n\\t\\t\\tModlitwa w drodze to propozycja duchow...</td>\n",
       "      <td>Polish</td>\n",
       "      <td>Training | Spirituality | Education | Christia...</td>\n",
       "      <td>http://www.modlitwawdrodze.pl</td>\n",
       "      <td>Modlitwa w drodze</td>\n",
       "      <td>412783872</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                    title_ep  \\\n",
       "0        Piątek - 01 grudnia   \n",
       "1        Sobota - 02 grudnia   \n",
       "2     Niedziela - 03 grudnia   \n",
       "3  Poniedziałek - 04 grudnia   \n",
       "4        Wtorek - 05 grudnia   \n",
       "\n",
       "                                               audio  audio_length  \\\n",
       "0  https://cdneu.modlitwawdrodze.pl/prayers/MWD_2...           490   \n",
       "1  https://cdneu.modlitwawdrodze.pl/prayers/MWD_2...           481   \n",
       "2  https://cdneu.modlitwawdrodze.pl/prayers/MWD_2...           667   \n",
       "3  https://cdneu.modlitwawdrodze.pl/prayers/MWD_2...           654   \n",
       "4  https://cdneu.modlitwawdrodze.pl/prayers/MWD_2...           535   \n",
       "\n",
       "                                      description_ep                pub_date  \\\n",
       "0  święci męczennicy jezuiccy Edmund Campion SJ, ...  2017-12-01 00:00:00+00   \n",
       "1       bł. Rafał Chyliński, prezbiter, Łk 21, 34-36  2017-12-02 00:00:00+00   \n",
       "2           Pierwsza Niedziela Adwentu, Mk 13, 33-37  2017-12-03 00:00:00+00   \n",
       "3  św. Jan Damasceński, prezbiter i doktor Kościo...  2017-12-04 00:00:00+00   \n",
       "4     św. Saba Jerozolimski, prezbiter, Łk 10, 21-24  2017-12-05 00:00:00+00   \n",
       "\n",
       "                            uuid_ep                      podcast_uuid  \\\n",
       "0  fd5d891411174c7ca953c1f54657c3eb  811c18cf575841b3bef4601978f17ca9   \n",
       "1  5c28fa0a27b342cd92ff03c16a8019c2  811c18cf575841b3bef4601978f17ca9   \n",
       "2  efdc9f4f07fa4c4883f8848256066cec  811c18cf575841b3bef4601978f17ca9   \n",
       "3  a6034db279244d21a34c0723d1495fb8  811c18cf575841b3bef4601978f17ca9   \n",
       "4  c35d5236d451454fa0bb5e95a7137d35  811c18cf575841b3bef4601978f17ca9   \n",
       "\n",
       "                           uuid_pod          title_pod  \\\n",
       "0  811c18cf575841b3bef4601978f17ca9  Modlitwa w drodze   \n",
       "1  811c18cf575841b3bef4601978f17ca9  Modlitwa w drodze   \n",
       "2  811c18cf575841b3bef4601978f17ca9  Modlitwa w drodze   \n",
       "3  811c18cf575841b3bef4601978f17ca9  Modlitwa w drodze   \n",
       "4  811c18cf575841b3bef4601978f17ca9  Modlitwa w drodze   \n",
       "\n",
       "                                               image  \\\n",
       "0  http://is4.mzstatic.com/image/thumb/Music62/v4...   \n",
       "1  http://is4.mzstatic.com/image/thumb/Music62/v4...   \n",
       "2  http://is4.mzstatic.com/image/thumb/Music62/v4...   \n",
       "3  http://is4.mzstatic.com/image/thumb/Music62/v4...   \n",
       "4  http://is4.mzstatic.com/image/thumb/Music62/v4...   \n",
       "\n",
       "                                     description_pod language  \\\n",
       "0  \\n\\t\\t\\tModlitwa w drodze to propozycja duchow...   Polish   \n",
       "1  \\n\\t\\t\\tModlitwa w drodze to propozycja duchow...   Polish   \n",
       "2  \\n\\t\\t\\tModlitwa w drodze to propozycja duchow...   Polish   \n",
       "3  \\n\\t\\t\\tModlitwa w drodze to propozycja duchow...   Polish   \n",
       "4  \\n\\t\\t\\tModlitwa w drodze to propozycja duchow...   Polish   \n",
       "\n",
       "                                          categories  \\\n",
       "0  Training | Spirituality | Education | Christia...   \n",
       "1  Training | Spirituality | Education | Christia...   \n",
       "2  Training | Spirituality | Education | Christia...   \n",
       "3  Training | Spirituality | Education | Christia...   \n",
       "4  Training | Spirituality | Education | Christia...   \n",
       "\n",
       "                         website             author  itunes_id  \n",
       "0  http://www.modlitwawdrodze.pl  Modlitwa w drodze  412783872  \n",
       "1  http://www.modlitwawdrodze.pl  Modlitwa w drodze  412783872  \n",
       "2  http://www.modlitwawdrodze.pl  Modlitwa w drodze  412783872  \n",
       "3  http://www.modlitwawdrodze.pl  Modlitwa w drodze  412783872  \n",
       "4  http://www.modlitwawdrodze.pl  Modlitwa w drodze  412783872  "
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "episodes = episodes.merge(\n",
    "    podcasts,\n",
    "    left_on='podcast_uuid',\n",
    "    right_on='uuid',\n",
    "    suffixes=('_ep', '_pod')\n",
    ")\n",
    "episodes.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "We now have all the features we'd like to concatenate in one place: *title_ep*, *description_ep*, *title_pod*, and *description_pod*. The remaining features can be ignored.\n",
     "\n",
     "Before concatenation, we should drop any record that contains a null/empty value in any of these features, and strip excess whitespace from the start and end of each feature."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "features = ['title_ep', 'description_ep', 'title_pod', 'description_pod']\n",
    "# strip whitespace\n",
    "episodes[features] = episodes[features].apply(lambda x: x.str.strip())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Before: 873820\n",
      "After: 778182\n"
     ]
    }
   ],
   "source": [
    "print(f\"Before: {len(episodes)}\")\n",
    "episodes = episodes[\n",
    "    ~episodes[features].isnull().any(axis=1)\n",
    "]\n",
    "print(f\"After: {len(episodes)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Now we concatenate the four features, separating them with periods."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "episodes = episodes['title_ep'] + '. ' + episodes['description_ep'] + '. ' \\\n",
    "    + episodes['title_pod'] + '. ' + episodes['description_pod']\n",
    "episodes = episodes.to_list()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['Fancy New Band: Running Stitch. <p>Running Stitch join Hannah to play sme new tracks ahead of their EP release next year. Cheers NZ On Air Music!</p>. 95bFM. Audio on demand from selected shows',\n",
       " \"Political Commentary w/ David Slack: December 21, 2017. <p>It's the end of the year, and let's face it... 2017 hasn't been a great one for empathy. From\\xa0the public treatment of our politicians\\xa0to the treament of our least fortunate citizens, David Slack reckons it's about time we all took pause. It is Christmas, after all.</p>. 95bFM. Audio on demand from selected shows\",\n",
       " 'From the Crate w/ Troy Ferguson: December 21, 2017. <p>LP exploration with the ever-knowledgeable Troy, featuring the following new cakes and/or tasty re-releases:</p>\\n\\n<ul>\\n\\t<li>Ken Boothe - <em>You Keep Me Hangin\\' On</em></li>\\n\\t<li>The New Sounds -\\xa0<em>The Big Score</em></li>\\n\\t<li>Jitwam -\\xa0<em>Keepyourbusinesstoyourself</em></li>\\n</ul>\\n\\n<p>All available from and thanks to\\xa0<a href=\"http://www.southbound.co.nz/shop/\">Southbound Records</a>.</p>. 95bFM. Audio on demand from selected shows']"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "episodes[50:53]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's shuffle our data too."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "from random import shuffle\n",
    "\n",
    "shuffle(episodes)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Query Generation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We now have episodes, but no queries, and we need *(query, episode)* pairs to fine-tune a model. Spotify generated synthetic queries from the episode text (which we have). To do this they fine-tuned a BART model on MS MARCO, then used it to generate the queries.\n",
    "\n",
    "We don't need to fine-tune the BART model as there are already plenty of models that are readily available and have been fine-tuned on the exact same (MS MARCO) dataset, so we will initialize one of these using the HuggingFace *transformers* library."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from transformers import T5Tokenizer, T5ForConditionalGeneration\n",
    "import torch\n",
    "\n",
    "device = 'cuda' if torch.cuda.is_available() else 'cpu'\n",
    "\n",
    "# after testing many BART and T5 query generation models, this seemed best\n",
    "model_name = 'doc2query/all-t5-base-v1'\n",
    "\n",
    "tokenizer = T5Tokenizer.from_pretrained(model_name)\n",
    "model = T5ForConditionalGeneration.from_pretrained(\n",
    "    model_name\n",
    ").to(device)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can begin generating queries. The Spotify article doesn't state whether they produced a set number of queries for each episode, so we will assume three queries per episode, in line with the approach taken by the GenQ and GPL techniques."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "# (OPTIONAL) it will take a long time to produce queries for the entire dataset, let's drop some episodes\n",
    "episodes = episodes[:100_000]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "2ed4811ef70d478280550edcb16d1859",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/100000 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from tqdm.auto import tqdm\n",
    "\n",
    "batch_size = 128  # larger batch size == faster processing\n",
    "num_queries = 3  # number of queries to generate for each episode\n",
    "pairs = []\n",
    "ep_batch = []\n",
    "\n",
    "for ep in tqdm(episodes):\n",
    "    # remove tab + newline characters if present\n",
    "    ep_batch.append(ep.replace('\\t', ' ').replace('\\n', ' '))\n",
    "    \n",
    "    # we encode in batches\n",
    "    if len(ep_batch) == batch_size:\n",
    "        # tokenize the passage\n",
    "        inputs = tokenizer(\n",
    "            ep_batch,\n",
    "            truncation=True,\n",
    "            padding=True,\n",
    "            max_length=256,\n",
    "            return_tensors='pt'\n",
    "        )\n",
    "\n",
    "        # generate three queries per episode\n",
    "        outputs = model.generate(\n",
    "            input_ids=inputs['input_ids'].to(device),\n",
    "            attention_mask=inputs['attention_mask'].to(device),\n",
    "            max_length=64,\n",
    "            do_sample=True,\n",
    "            top_p=0.95,\n",
    "            num_return_sequences=num_queries\n",
    "        )\n",
    "\n",
    "        # decode query to human readable text\n",
    "        decoded_output = tokenizer.batch_decode(\n",
    "            outputs,\n",
    "            skip_special_tokens=True\n",
    "        )\n",
    "\n",
    "        # loop through to pair query and episodes\n",
    "        for i, query in enumerate(decoded_output):\n",
    "            query = query.replace('\\t', ' ').replace('\\n', ' ')  # remove newline + tabs\n",
    "            ep_idx = int(i/num_queries)  # get index of episode to match query\n",
    "            pairs.append([query, ep_batch[ep_idx]])\n",
    "        \n",
    "        ep_batch = []"
   ]
  },
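  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see how the flat list of generated queries maps back to episodes: `generate` with `num_return_sequences=3` returns its outputs grouped per input, in order. Here is a minimal sketch of the grouping logic above, using toy data rather than real model outputs:\n",
    "\n",
    "```python\n",
    "# toy illustration of the query -> episode index mapping used above\n",
    "num_queries = 3\n",
    "ep_batch = ['ep0', 'ep1']  # hypothetical episode texts\n",
    "decoded_output = ['q0a', 'q0b', 'q0c', 'q1a', 'q1b', 'q1c']\n",
    "\n",
    "pairs = []\n",
    "for i, query in enumerate(decoded_output):\n",
    "    ep_idx = i // num_queries  # integer division maps query index to episode index\n",
    "    pairs.append([query, ep_batch[ep_idx]])\n",
    "\n",
    "# pairs[3] -> ['q1a', 'ep1']\n",
    "```"
   ]
  },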
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(['what is psalm 51 verse',\n",
       "  'Psalm 51:19h. Psalm 51. SermonAudio.com: MP3. The latest MP3 feed from SermonAudio.com.'],\n",
       " ['who is the host of bat and fleming',\n",
       "  'BART & FLEMING 16: \"Indies at the Oscars\". <p>Deadline Hollywood\\'s Peter Bart and Mike Fleming Jr discuss the recent waves of sexual harassment and assault allegations in Hollywood, as well as the trend for the Oscars to embrace independent films in recent years. Produced by David Janove.</p>. The Deadline Podcast. The Deadline Podcast is the one stop shop for all of Deadline Hollywood\\'s podcasts including BART & FLEMING and TV TALK with Dominic Patten and Pete Hammond.'],\n",
       " ['елитике оруие : акрта секретна рорамма \"лма\"?',\n",
       "  'Космическое оружие СССР: почему была закрыта секретная программа \"Алмаз\"? (831). <table width=\"100%\"><tr><td><div style=\"float:left;width:235px;\"><table cellpadding=0 cellspacing=0><tr><td style=\"border-bottom:0px;\"><img src=\"http://file2.podfm.ru/37/374/3746/37462/images/pod_28169.jpg?2\" ></td></tr></table></div>Полковники Виктор Баранец и Михаил Тимошенко рассказывают в эфире программы \"Военное ревю\" на Радио \"Комсомольская правда\". Выпуск от 2017-12-15 17:05:00. Ведущие: Виктор Баранец, Михаил Тимошенко.</td></tr></table>. Военное ревю. Информационно-аналитическая программа. И запредельно откровенный разговор с кадровыми военными и отставниками об армии и ее проблемах: призыв, дедовщина, тяготы военной службы и все главные моменты нелегкой ратной жизни. На ваши вопросы в прямом эфире отвечает военный обозреватель \"КП\", полковник Виктор Баранец.'],\n",
       " ['trafik stärning Stockholm utgivare',\n",
       "  'Trafik P4 Stockholm 20171204 17.13 (01.28). <br/><br/>. Trafikredaktionen Stockholm. Information om störningar i trafiken där du är. Ansvarig utgivare: Jan Peterson'],\n",
       " ['wrmf airdates kvj',\n",
       "  'The KVJ Show And After The Show Podcast on 979 WRMF 12-22-17. <p>No Name Movie Game, Drunk Girl Trivia, KVJ\\xa0Predictions For 2018 and\\xa0Feel Good Friday (ATS Starts 2HR 1Min 18+ Only)</p>. The KVJ Show. Hot Topics...Dysfunctional Characters.'])"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pairs[0], pairs[100], pairs[130], pairs[250], pairs[550]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We now have *(synthetic_query, episode)* pairs that we can use for fine-tuning a sentence embedding model."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Models\n",
    "\n",
    "Spotify tested using pretrained BERT models but, as these are not fine-tuned for producing sentence embeddings, did not use them. It seems they also tested the original SBERT model, which *is* fine-tuned for sentence embeddings, but were not happy with the results.\n",
    "\n",
    "In the end, they used the Universal Sentence Encoder (USE) model, taken from TFHub. To keep things as simple as possible we will instead use a DistilUSE model supported by the *sentence-transformers* library, `distiluse-base-multilingual-cased-v2`. This also lets us use the *sentence-transformers* fine-tuning utilities.\n",
    "\n",
    "To initialize this model we do:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "SentenceTransformer(\n",
       "  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel \n",
       "  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})\n",
       "  (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})\n",
       ")"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sentence_transformers import SentenceTransformer\n",
    "\n",
    "model = SentenceTransformer(\n",
    "    'distiluse-base-multilingual-cased-v2'\n",
    ").to(device)\n",
    "model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When fine-tuning with the sentence transformers library we need to reformat our data into a list of `InputExample` objects. The exact format varies by training task; ours is a reranking task (more on that soon), so all we need are two text items, i.e. our *(query, episode)* pairs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Eval samples: 2999\n",
      "Test samples: 56981\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "271ea0e18c8f42ed8f09e7c37b7b64e1",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/239924 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from sentence_transformers import InputExample\n",
    "\n",
    "eval_split = int(0.01 * len(pairs))\n",
    "test_split = int(0.19 * len(pairs))\n",
    "print(\"Eval samples: \" + str(eval_split) + \"\\nTest samples: \" + str(test_split))\n",
    "\n",
    "# we separate a number of these for testing\n",
    "test_pairs = pairs[-test_split:]\n",
    "pairs = pairs[:-test_split]\n",
    "         \n",
    "# and take a small number of samples for evaluation\n",
    "eval_pairs = pairs[-eval_split:]\n",
    "pairs = pairs[:-eval_split]\n",
    "\n",
    "train = []\n",
    "\n",
    "for (query, episode) in tqdm(pairs):\n",
    "    train.append(InputExample(texts=[query, episode]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As mentioned, we are going to use a ranking optimization function. That means the model is tasked with identifying the correct *episode* from a batch of episodes when given a *query*. It does this by embedding matching *(query, episode)* pairs as closely as possible in vector space. We measure the proximity of these embeddings using *cosine similarity*, i.e. the angle between the two embedding vectors.\n",
    "\n",
    "Because we are using this ranking optimization function, we must ensure we do not place duplicate queries or episodes in the same training batch; otherwise the model is told that, of two identical queries/episodes, one is correct and the other is not.\n",
    "\n",
    "Sentence transformers handles this no-duplicates-within-a-batch requirement with the `NoDuplicatesDataLoader`. We can initialize it with a `batch_size` parameter (larger is generally better for in-batch negatives), like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sentence_transformers.datasets import NoDuplicatesDataLoader\n",
    "\n",
    "batch_size = 64\n",
    "\n",
    "loader = NoDuplicatesDataLoader(train, batch_size=batch_size)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we initialize the loss function. As we're optimizing by ranking (as described above), we will use `MultipleNegativesRankingLoss`, known as *MNR loss*."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sentence_transformers.losses import MultipleNegativesRankingLoss\n",
    "\n",
    "loss = MultipleNegativesRankingLoss(model)"
   ]
  },
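  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For intuition, MNR loss treats each *(query, episode)* pair in a batch as the positive example, and every other episode in the same batch as a negative for that query. With a similarity score $s(q, e)$, cosine similarity by default (scaled by a constant factor in the *sentence-transformers* implementation):\n",
    "\n",
    "$$s(q, e) = \\cos(q, e) = \\frac{q \\cdot e}{\\lVert q \\rVert \\, \\lVert e \\rVert}$$\n",
    "\n",
    "the loss over a batch of size $B$ is a cross-entropy over the in-batch scores:\n",
    "\n",
    "$$\\mathcal{L} = -\\frac{1}{B} \\sum_{i=1}^{B} \\log \\frac{\\exp(s(q_i, e_i))}{\\sum_{j=1}^{B} \\exp(s(q_i, e_j))}$$\n",
    "\n",
    "i.e. the model is rewarded for ranking the true episode $e_i$ above the other $B-1$ episodes in the batch, which is also why duplicates within a batch are harmful."
   ]
  },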
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One final thing before moving on to fine-tuning: we need to set up our training metrics. Spotify describe in-batch metrics, and we will do the same by adding an evaluator to the fit function. Again, sentence transformers provides strong support for this via the `RerankingEvaluator`.\n",
    "\n",
    "Before initializing the evaluator we need to remove any duplicate episodes, of which there will be plenty (as we created three queries per episode)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1001 unique eval pairs\n"
     ]
    }
   ],
   "source": [
    "dedup_eval_pairs = []\n",
    "seen_eps = []\n",
    "\n",
    "for (query, episode) in eval_pairs:\n",
    "    if episode not in seen_eps:\n",
    "        seen_eps.append(episode)\n",
    "        dedup_eval_pairs.append((query, episode))\n",
    "\n",
    "eval_pairs = dedup_eval_pairs\n",
    "print(f\"{len(eval_pairs)} unique eval pairs\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sentence_transformers.evaluation import RerankingEvaluator\n",
    "\n",
    "# we must format samples into a list of:\n",
    "# {'query': '<query>', 'positive': ['<positive>'], 'negative': [<all negatives>]}\n",
    "eval_set = []\n",
    "eval_episodes = [pair[1] for pair in eval_pairs]\n",
    "\n",
    "for i, (query, episode) in enumerate(eval_pairs):\n",
    "    negatives = eval_episodes[:i] + eval_episodes[i+1:]\n",
    "    eval_set.append(\n",
    "        {'query': query, 'positive': [episode], 'negative': negatives}\n",
    "    )\n",
    "    \n",
    "evaluator = RerankingEvaluator(eval_set, mrr_at_k=5, batch_size=batch_size)"
   ]
  },
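  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, `mrr_at_k=5` means the evaluator scores each query by the reciprocal rank of its positive episode among the reranked candidates, counting queries whose positive falls outside the top 5 as zero:\n",
    "\n",
    "$$\\mathrm{MRR@5} = \\frac{1}{|Q|} \\sum_{i=1}^{|Q|} \\frac{1}{\\mathrm{rank}_i}, \\qquad \\text{where } \\frac{1}{\\mathrm{rank}_i} = 0 \\ \\text{if} \\ \\mathrm{rank}_i > 5$$"
   ]
  },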
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's check the zero-shot performance of the model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "2b320553075e41e3a94e072cedabc6e2",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Batches:   0%|          | 0/16 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "0.6827534406474566"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "evaluator(model, output_path='./')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We're now ready to fine-tune our model. The Spotify article doesn't give any detail on the parameters used here, so we will use typical values: training for *one epoch* and warming up the learning rate for the first *10%* of steps."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/opt/conda/lib/python3.7/site-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\n",
      "  FutureWarning,\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "7921cd3990a54827bff2120183a93a5d",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Epoch:   0%|          | 0/1 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "836b37f506304368ab6204f9716bb7c7",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Iteration:   0%|          | 0/3748 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "47d0ac8f13a84589b70c6f209b82382a",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Batches:   0%|          | 0/16 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "epochs = 1\n",
    "warmup_steps = int(len(loader) * epochs * 0.1)\n",
    "\n",
    "model.fit(\n",
    "    train_objectives=[(loader, loss)],\n",
    "    evaluator=evaluator,\n",
    "    epochs=epochs,\n",
    "    warmup_steps=warmup_steps,\n",
    "    output_path='distiluse-podcast-nq',\n",
    "    show_progress_bar=True\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Evaluation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For the final evaluation step we want to emulate a more *real-world* scenario. That is, rather than calculating MRR@5 across small batches of data (as done with the evaluation set), we should index many episodes and calculate similar metrics when searching across this larger index.\n",
    "\n",
    "Earlier we separated out the test data as `test_pairs`; we can use that now.\n",
    "\n",
    "We will encode episodes using `model`; the embeddings will then be indexed in a Pinecone vector database (you can sign up for free [here](https://app.pinecone.io)).\n",
    "\n",
    "Start by initializing the vector index."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pinecone import Pinecone, ServerlessSpec\n",
    "\n",
    "# initialize the Pinecone client (get your API key at app.pinecone.io)\n",
    "pc = Pinecone(api_key='<<YOUR_API_KEY>>')\n",
    "\n",
    "# check if an evaluation index already exists, if not, create it\n",
    "if 'evaluation' not in pc.list_indexes().names():\n",
    "    pc.create_index(\n",
    "        'evaluation', dimension=model.get_sentence_embedding_dimension(),\n",
    "        metric='cosine',\n",
    "        # cloud/region here are examples; choose whatever suits your project\n",
    "        spec=ServerlessSpec(cloud='aws', region='us-east-1')\n",
    "    )\n",
    "\n",
    "# now connect to the index\n",
    "index = pc.Index('evaluation')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before indexing our test data, we should remove duplicates (as we did before for the eval set)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "18579 unique test pairs\n"
     ]
    }
   ],
   "source": [
    "dedup_test_pairs = []\n",
    "seen_eps = []\n",
    "\n",
    "for (query, episode) in test_pairs:\n",
    "    if episode not in seen_eps:\n",
    "        seen_eps.append(episode)\n",
    "        dedup_test_pairs.append((query, episode))\n",
    "\n",
    "test_pairs = dedup_test_pairs\n",
    "print(f\"{len(test_pairs)} unique test pairs\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can begin encoding and indexing embeddings. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "a8be55ef936048dca652a6eb3dfbad03",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/18579 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "{'dimension': 512,\n",
       " 'index_fullness': 0.0,\n",
       " 'namespaces': {'': {'vector_count': 18560}}}"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "eps_seen = []\n",
    "queries = []\n",
    "eps_batch = []\n",
    "id_batch = []\n",
    "upsert_batch = 64\n",
    "\n",
    "for i, (query, episode) in enumerate(tqdm(test_pairs)):\n",
    "    # do this to avoid episode duplication in index\n",
    "    if episode not in eps_seen:\n",
    "        eps_seen.append(episode)\n",
    "        queries.append((query, str(i)))\n",
    "        eps_batch.append(episode)\n",
    "        id_batch.append(str(i))\n",
    "    # on reaching upsert_batch we encode and upsert\n",
    "    if len(eps_batch) == upsert_batch:\n",
    "        embeds = model.encode(eps_batch).tolist()\n",
    "        # insert to index\n",
    "        index.upsert(vectors=list(zip(id_batch, embeds)))\n",
    "        # refresh batch\n",
    "        eps_batch = []\n",
    "        id_batch = []\n",
    "\n",
    "# upsert any remaining episodes in the final partial batch\n",
    "if eps_batch:\n",
    "    embeds = model.encode(eps_batch).tolist()\n",
    "    index.upsert(vectors=list(zip(id_batch, embeds)))\n",
    "\n",
    "# (optional) take a look at the index stats\n",
    "index.describe_index_stats()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "recall_at_k = []\n",
    "\n",
    "for (query, i) in queries:\n",
    "    # encode the query to an embedding\n",
    "    xq = model.encode(query).tolist()\n",
    "    res = index.query(vector=xq, top_k=30)\n",
    "    # get the IDs of the returned matches\n",
    "    ids = [match['id'] for match in res['matches']]\n",
    "    recall_at_k.append(1 if i in ids else 0)"
   ]
  },
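  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The metric we are computing here is *Recall@K* with $K = 30$: the fraction of queries for which the true episode appears anywhere in the top $K$ retrieved results:\n",
    "\n",
    "$$\\mathrm{Recall@K} = \\frac{1}{|Q|} \\sum_{i=1}^{|Q|} \\mathbf{1}\\left[\\mathrm{rank}_i \\leq K\\right]$$"
   ]
  },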
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.883309112438775"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sum(recall_at_k)/len(recall_at_k)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So far this looks great, but it assumes our synthetic queries are perfect, and they are not. Instead, we should measure model performance on more realistic queries, which in this case we must write ourselves. Let's take a set of episodes and manually write a query that we believe should match each episode."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [],
   "source": [
    "curated = {\n",
    "    \"funny show about after uni party house\": 1,\n",
    "    \"interview with cookbook author\": 8,\n",
    "    \"eat better during xmas holidays\": 14,\n",
    "    \"superhero film analysis\": 27,\n",
    "    \"how to tell more engaging stories\": 33,\n",
    "    \"how to make money with online content\": 34,\n",
    "    \"why is technology so addictive\": 38\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.5714285714285714"
      ]
     },
     "execution_count": 39,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "recall_at_k = []\n",
    "\n",
    "for query, i in curated.items():\n",
    "    # encode the query to an embedding\n",
    "    xq = model.encode(query).tolist()\n",
    "    res = index.query(vector=xq, top_k=30)\n",
    "    # get the IDs of the returned matches (index IDs are strings)\n",
    "    ids = [match['id'] for match in res['matches']]\n",
    "    recall_at_k.append(1 if str(i) in ids else 0)\n",
    "    \n",
    "sum(recall_at_k)/len(recall_at_k)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's compare this to the zero-shot performance, for which we will need to create a new index."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [],
   "source": [
    "zero_model = SentenceTransformer(\n",
    "    'distiluse-base-multilingual-cased-v2'\n",
    ").to(device)\n",
    "\n",
    "# check if the zero-shot evaluation index already exists, if not, create it\n",
    "if 'eval-zero' not in pc.list_indexes().names():\n",
    "    pc.create_index(\n",
    "        'eval-zero', dimension=zero_model.get_sentence_embedding_dimension(),\n",
    "        metric='cosine',\n",
    "        # cloud/region here are examples; choose whatever suits your project\n",
    "        spec=ServerlessSpec(cloud='aws', region='us-east-1')\n",
    "    )\n",
    "\n",
    "# now connect to the index\n",
    "index = pc.Index('eval-zero')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "ccf531dec04a401bb0553440d337549b",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/18579 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "{'dimension': 512,\n",
       " 'index_fullness': 0.0,\n",
       " 'namespaces': {'': {'vector_count': 18560}}}"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "eps_seen = []\n",
    "queries = []\n",
    "eps_batch = []\n",
    "id_batch = []\n",
    "upsert_batch = 64\n",
    "\n",
    "for i, (query, episode) in enumerate(tqdm(test_pairs)):\n",
    "    # do this to avoid episode duplication in index\n",
    "    if episode not in eps_seen:\n",
    "        eps_seen.append(episode)\n",
    "        queries.append((query, str(i)))\n",
    "        eps_batch.append(episode)\n",
    "        id_batch.append(str(i))\n",
    "    # on reaching upsert_batch we encode and upsert\n",
    "    if len(eps_batch) == upsert_batch:\n",
    "        embeds = zero_model.encode(eps_batch).tolist()\n",
    "        # insert to index\n",
    "        index.upsert(vectors=list(zip(id_batch, embeds)))\n",
    "        # refresh batch\n",
    "        eps_batch = []\n",
    "        id_batch = []\n",
    "\n",
    "# upsert any remaining episodes in the final partial batch\n",
    "if eps_batch:\n",
    "    embeds = zero_model.encode(eps_batch).tolist()\n",
    "    index.upsert(vectors=list(zip(id_batch, embeds)))\n",
    "\n",
    "# (optional) take a look at the index stats\n",
    "index.describe_index_stats()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.2857142857142857"
      ]
     },
     "execution_count": 44,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "recall_at_k = []\n",
    "\n",
    "for query, i in curated.items():\n",
    "    # encode the query to an embedding\n",
    "    xq = zero_model.encode(query).tolist()\n",
    "    res = index.query(vector=xq, top_k=30)\n",
    "    # get the IDs of the returned matches (index IDs are strings)\n",
    "    ids = [match['id'] for match in res['matches']]\n",
    "    recall_at_k.append(1 if str(i) in ids else 0)\n",
    "    \n",
    "sum(recall_at_k)/len(recall_at_k)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That's a pretty huge difference. Despite not being able to follow Spotify's training process exactly, due to a lack of data, we were able to work with synthetic queries alone and still produce an impressive performance gain."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "id": "1YY3hiL3xNQ5"
   },
   "source": [
    "# Delete the Index\n",
    "\n",
    "Once you're done with the indexes, delete them to save resources."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "DnGtG5iaUArB"
   },
   "outputs": [],
   "source": [
    "pc.delete_index('evaluation')\n",
    "pc.delete_index('eval-zero')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---"
   ]
  }
 ],
 "metadata": {
  "environment": {
   "kernel": "python3",
   "name": "common-cu110.m91",
   "type": "gcloud",
   "uri": "gcr.io/deeplearning-platform-release/base-cu110:m91"
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.7 (main, Sep 14 2022, 22:38:23) [Clang 14.0.0 (clang-1400.0.29.102)]"
  },
  "vscode": {
   "interpreter": {
    "hash": "b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
