{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Retriever Customization"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Authors - Aditya Malte, Vinay Raman, Ali Taghibakhshi"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup Instructions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is the second notebook of this two-notebook tutorial.\n",
    "It runs in the Docker container `nemo:24.01.01`.\n",
    "\n",
    "Run Docker from inside the `synthetic-data-retriever-customization` directory using this command:\n",
    "\n",
    "`docker run -it --rm --gpus all --ipc=host --network host -v $(pwd):/workspace nvcr.io/nvidia/nemo:24.01.01`\n",
    "\n",
    "This notebook was tested on a setup with 2x A6000 GPUs and CUDA installed.\n",
    "\n",
    "Use the command `ngc registry model download-version \"ohlfw0olaadg/ea-participants/nv-embed-qa:4\"` to download the NeMo Retriever model. It must be downloaded to the directory `files/models`. This notebook uses the NeMo Retriever model as its example. If you do not have NVAIE access, you may instead download and convert a Hugging Face embedding model such as `intfloat/e5-large-unsupervised` as follows:\n",
    "```\n",
    "python /NeMo/scripts/nlp_language_modeling/convert_bert_hf_to_nemo.py \\\n",
    "       --input_name_or_path \"intfloat/e5-large-unsupervised\" \\\n",
    "       --output_path /workspace/files/models/my_model.nemo\n",
    "```\n",
    "\n",
    "If you use another model, or convert an HF model, make sure the model path used below is updated accordingly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n",
      "                                 Dload  Upload   Total   Spent    Left  Speed\n",
      "100 33505  100 33505    0     0   154k      0 --:--:-- --:--:-- --:--:--  154k\n"
     ]
    }
   ],
   "source": [
    "!rm /opt/NeMo/nemo/collections/nlp/models/information_retrieval/megatron_sbert_model.py\n",
    "!curl -o /opt/NeMo/nemo/collections/nlp/models/information_retrieval/megatron_sbert_model.py https://raw.githubusercontent.com/NVIDIA/NeMo/main/nemo/collections/nlp/models/information_retrieval/megatron_sbert_model.py\n",
    "!ln -s /opt/NeMo /NeMo"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com\n",
      "Collecting ipywidgets\n",
      "  Downloading ipywidgets-8.1.2-py3-none-any.whl.metadata (2.4 kB)\n",
      "Requirement already satisfied: comm>=0.1.3 in /usr/local/lib/python3.10/dist-packages (from ipywidgets) (0.2.1)\n",
      "Requirement already satisfied: ipython>=6.1.0 in /usr/local/lib/python3.10/dist-packages (from ipywidgets) (8.20.0)\n",
      "Requirement already satisfied: traitlets>=4.3.1 in /usr/local/lib/python3.10/dist-packages (from ipywidgets) (5.9.0)\n",
      "Collecting widgetsnbextension~=4.0.10 (from ipywidgets)\n",
      "  Downloading widgetsnbextension-4.0.10-py3-none-any.whl.metadata (1.6 kB)\n",
      "Collecting jupyterlab-widgets~=3.0.10 (from ipywidgets)\n",
      "  Downloading jupyterlab_widgets-3.0.10-py3-none-any.whl.metadata (4.1 kB)\n",
      "Requirement already satisfied: decorator in /usr/local/lib/python3.10/dist-packages (from ipython>=6.1.0->ipywidgets) (5.1.1)\n",
      "Requirement already satisfied: jedi>=0.16 in /usr/local/lib/python3.10/dist-packages (from ipython>=6.1.0->ipywidgets) (0.19.1)\n",
      "Requirement already satisfied: matplotlib-inline in /usr/local/lib/python3.10/dist-packages (from ipython>=6.1.0->ipywidgets) (0.1.6)\n",
      "Requirement already satisfied: prompt-toolkit<3.1.0,>=3.0.41 in /usr/local/lib/python3.10/dist-packages (from ipython>=6.1.0->ipywidgets) (3.0.43)\n",
      "Requirement already satisfied: pygments>=2.4.0 in /usr/local/lib/python3.10/dist-packages (from ipython>=6.1.0->ipywidgets) (2.17.2)\n",
      "Requirement already satisfied: stack-data in /usr/local/lib/python3.10/dist-packages (from ipython>=6.1.0->ipywidgets) (0.6.3)\n",
      "Requirement already satisfied: exceptiongroup in /usr/local/lib/python3.10/dist-packages (from ipython>=6.1.0->ipywidgets) (1.2.0)\n",
      "Requirement already satisfied: pexpect>4.3 in /usr/local/lib/python3.10/dist-packages (from ipython>=6.1.0->ipywidgets) (4.9.0)\n",
      "Requirement already satisfied: parso<0.9.0,>=0.8.3 in /usr/local/lib/python3.10/dist-packages (from jedi>=0.16->ipython>=6.1.0->ipywidgets) (0.8.3)\n",
      "Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.10/dist-packages (from pexpect>4.3->ipython>=6.1.0->ipywidgets) (0.7.0)\n",
      "Requirement already satisfied: wcwidth in /usr/local/lib/python3.10/dist-packages (from prompt-toolkit<3.1.0,>=3.0.41->ipython>=6.1.0->ipywidgets) (0.2.13)\n",
      "Requirement already satisfied: executing>=1.2.0 in /usr/local/lib/python3.10/dist-packages (from stack-data->ipython>=6.1.0->ipywidgets) (2.0.1)\n",
      "Requirement already satisfied: asttokens>=2.1.0 in /usr/local/lib/python3.10/dist-packages (from stack-data->ipython>=6.1.0->ipywidgets) (2.4.1)\n",
      "Requirement already satisfied: pure-eval in /usr/local/lib/python3.10/dist-packages (from stack-data->ipython>=6.1.0->ipywidgets) (0.2.2)\n",
      "Requirement already satisfied: six>=1.12.0 in /usr/local/lib/python3.10/dist-packages (from asttokens>=2.1.0->stack-data->ipython>=6.1.0->ipywidgets) (1.16.0)\n",
      "Downloading ipywidgets-8.1.2-py3-none-any.whl (139 kB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m139.4/139.4 kB\u001b[0m \u001b[31m8.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading jupyterlab_widgets-3.0.10-py3-none-any.whl (215 kB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m215.0/215.0 kB\u001b[0m \u001b[31m329.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading widgetsnbextension-4.0.10-py3-none-any.whl (2.3 MB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.3/2.3 MB\u001b[0m \u001b[31m118.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hInstalling collected packages: widgetsnbextension, jupyterlab-widgets, ipywidgets\n",
      "Successfully installed ipywidgets-8.1.2 jupyterlab-widgets-3.0.10 widgetsnbextension-4.0.10\n",
      "\u001b[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\u001b[0m\u001b[33m\n",
      "\u001b[0m\n",
      "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.3.2\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n",
      "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpython -m pip install --upgrade pip\u001b[0m\n",
      "Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com\n",
      "Collecting beir\n",
      "  Downloading beir-2.0.0.tar.gz (53 kB)\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m53.6/53.6 kB\u001b[0m \u001b[31m3.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25ldone\n",
      "\u001b[?25hRequirement already satisfied: sentence-transformers in /usr/local/lib/python3.10/dist-packages (from beir) (2.5.1)\n",
      "Collecting pytrec_eval (from beir)\n",
      "  Downloading pytrec_eval-0.5.tar.gz (15 kB)\n",
      "  Preparing metadata (setup.py) ... \u001b[?25ldone\n",
      "\u001b[?25hRequirement already satisfied: faiss_cpu in /usr/local/lib/python3.10/dist-packages (from beir) (1.8.0)\n",
      "Collecting elasticsearch==7.9.1 (from beir)\n",
      "  Downloading elasticsearch-7.9.1-py2.py3-none-any.whl.metadata (8.0 kB)\n",
      "Requirement already satisfied: datasets in /usr/local/lib/python3.10/dist-packages (from beir) (2.18.0)\n",
      "Requirement already satisfied: urllib3>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from elasticsearch==7.9.1->beir) (1.26.18)\n",
      "Requirement already satisfied: certifi in /usr/local/lib/python3.10/dist-packages (from elasticsearch==7.9.1->beir) (2023.11.17)\n",
      "Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from datasets->beir) (3.13.1)\n",
      "Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from datasets->beir) (1.24.4)\n",
      "Requirement already satisfied: pyarrow>=12.0.0 in /usr/local/lib/python3.10/dist-packages (from datasets->beir) (12.0.1)\n",
      "Requirement already satisfied: pyarrow-hotfix in /usr/local/lib/python3.10/dist-packages (from datasets->beir) (0.6)\n",
      "Requirement already satisfied: dill<0.3.9,>=0.3.0 in /usr/local/lib/python3.10/dist-packages (from datasets->beir) (0.3.8)\n",
      "Requirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from datasets->beir) (1.5.3)\n",
      "Requirement already satisfied: requests>=2.19.0 in /usr/local/lib/python3.10/dist-packages (from datasets->beir) (2.31.0)\n",
      "Requirement already satisfied: tqdm>=4.62.1 in /usr/local/lib/python3.10/dist-packages (from datasets->beir) (4.62.3)\n",
      "Requirement already satisfied: xxhash in /usr/local/lib/python3.10/dist-packages (from datasets->beir) (3.4.1)\n",
      "Requirement already satisfied: multiprocess in /usr/local/lib/python3.10/dist-packages (from datasets->beir) (0.70.16)\n",
      "Requirement already satisfied: fsspec<=2024.2.0,>=2023.1.0 in /usr/local/lib/python3.10/dist-packages (from fsspec[http]<=2024.2.0,>=2023.1.0->datasets->beir) (2023.12.2)\n",
      "Requirement already satisfied: aiohttp in /usr/local/lib/python3.10/dist-packages (from datasets->beir) (3.9.1)\n",
      "Requirement already satisfied: huggingface-hub>=0.19.4 in /usr/local/lib/python3.10/dist-packages (from datasets->beir) (0.21.4)\n",
      "Requirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (from datasets->beir) (23.2)\n",
      "Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from datasets->beir) (6.0.1)\n",
      "Requirement already satisfied: transformers<5.0.0,>=4.32.0 in /usr/local/lib/python3.10/dist-packages (from sentence-transformers->beir) (4.38.2)\n",
      "Requirement already satisfied: torch>=1.11.0 in /usr/local/lib/python3.10/dist-packages (from sentence-transformers->beir) (2.2.0a0+81ea7a4)\n",
      "Requirement already satisfied: scikit-learn in /usr/local/lib/python3.10/dist-packages (from sentence-transformers->beir) (1.4.1.post1)\n",
      "Requirement already satisfied: scipy in /usr/local/lib/python3.10/dist-packages (from sentence-transformers->beir) (1.12.0)\n",
      "Requirement already satisfied: Pillow in /usr/local/lib/python3.10/dist-packages (from sentence-transformers->beir) (9.3.0)\n",
      "Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->beir) (23.2.0)\n",
      "Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->beir) (6.0.4)\n",
      "Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->beir) (1.9.4)\n",
      "Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->beir) (1.4.1)\n",
      "Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->beir) (1.3.1)\n",
      "Requirement already satisfied: async-timeout<5.0,>=4.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->beir) (4.0.3)\n",
      "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub>=0.19.4->datasets->beir) (4.9.0)\n",
      "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19.0->datasets->beir) (3.3.2)\n",
      "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19.0->datasets->beir) (3.6)\n",
      "Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->sentence-transformers->beir) (1.12)\n",
      "Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->sentence-transformers->beir) (3.2.1)\n",
      "Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->sentence-transformers->beir) (3.1.3)\n",
      "Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from transformers<5.0.0,>=4.32.0->sentence-transformers->beir) (2023.12.25)\n",
      "Requirement already satisfied: tokenizers<0.19,>=0.14 in /usr/local/lib/python3.10/dist-packages (from transformers<5.0.0,>=4.32.0->sentence-transformers->beir) (0.15.2)\n",
      "Requirement already satisfied: safetensors>=0.4.1 in /usr/local/lib/python3.10/dist-packages (from transformers<5.0.0,>=4.32.0->sentence-transformers->beir) (0.4.2)\n",
      "Requirement already satisfied: python-dateutil>=2.8.1 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets->beir) (2.8.2)\n",
      "Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets->beir) (2023.3.post1)\n",
      "Requirement already satisfied: joblib>=1.2.0 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->sentence-transformers->beir) (1.3.2)\n",
      "Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->sentence-transformers->beir) (3.2.0)\n",
      "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.8.1->pandas->datasets->beir) (1.16.0)\n",
      "Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch>=1.11.0->sentence-transformers->beir) (2.1.4)\n",
      "Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch>=1.11.0->sentence-transformers->beir) (1.3.0)\n",
      "Downloading elasticsearch-7.9.1-py2.py3-none-any.whl (219 kB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m219.2/219.2 kB\u001b[0m \u001b[31m293.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hBuilding wheels for collected packages: beir, pytrec_eval\n",
      "  Building wheel for beir (setup.py) ... \u001b[?25ldone\n",
      "\u001b[?25h  Created wheel for beir: filename=beir-2.0.0-py3-none-any.whl size=63550 sha256=4038b7ffd035217622b08f7bd3a340b657b294da0d73c222e53dad5fb0dd535a\n",
      "  Stored in directory: /tmp/pip-ephem-wheel-cache-i7wq8f85/wheels/1c/14/96/c606ede3c10e9300ef771a6183af09d389459195ff5f854862\n",
      "  Building wheel for pytrec_eval (setup.py) ... \u001b[?25ldone\n",
      "\u001b[?25h  Created wheel for pytrec_eval: filename=pytrec_eval-0.5-cp310-cp310-linux_x86_64.whl size=308218 sha256=c5b1109901e3af0058f9da3965b940f7a30fbe8187f4525bf6edd173e1a1e351\n",
      "  Stored in directory: /tmp/pip-ephem-wheel-cache-i7wq8f85/wheels/51/3a/cd/dcc1ddfc763987d5cb237165d8ac249aa98a23ab90f67317a8\n",
      "Successfully built beir pytrec_eval\n",
      "Installing collected packages: pytrec_eval, elasticsearch, beir\n",
      "Successfully installed beir-2.0.0 elasticsearch-7.9.1 pytrec_eval-0.5\n",
      "\u001b[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\u001b[0m\u001b[33m\n",
      "\u001b[0m\n",
      "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.3.2\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n",
      "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpython -m pip install --upgrade pip\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "!pip install ipywidgets\n",
    "!pip install beir"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Please restart the kernel after installing these libraries."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Import libraries and set configuration"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "import torch\n",
    "import numpy as np\n",
    "import json\n",
    "import math\n",
    "from tqdm import tqdm\n",
    "import pandas as pd\n",
    "from collections import OrderedDict\n",
    "import os"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "QA_PAIRS_PATH = \"/workspace/files/data/qa_pairs_meta_llama_Llama_2_13b_chat_hf_num_questions_300_BeIR_nfcorpus.csv\" \n",
    "HARD_NEGATIVE_MINING_MODEL_NAME_OR_PATH = 'intfloat/e5-large-unsupervised'\n",
    "\n",
    "OUTPUT_DATA_PATH = \"/tmp/data/output_data.json\"\n",
    "output_dir_path = os.path.dirname(OUTPUT_DATA_PATH)\n",
    "# makedirs creates parent directories as needed and does not fail if the directory exists\n",
    "os.makedirs(output_dir_path, exist_ok=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "NUM_DEVICES = 2 # number of GPUs to train on\n",
    "CONFIG_PATH = \"/NeMo/examples/nlp/information_retrieval/conf/\"\n",
    "CONFIG_NAME = \"megatron_sbert_config\"\n",
    "PATH_TO_NEMO_MODEL = \"/workspace/files/models/NV-Embed-QA-4.nemo\" # path to the NeMo model; update this if you converted an HF model or use a different model\n",
    "DATASET_PATH = OUTPUT_DATA_PATH # path to the JSON dataset\n",
    "SAVE_DIR = \"/tmp/trained_model/\" # where the checkpoint and logs are saved"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Read QA Pairs file"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>question</th>\n",
       "      <th>positive_chunk</th>\n",
       "      <th>positive_chunk_id</th>\n",
       "      <th>paragraph_id</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>What are the chronic effects of coffee consump...</td>\n",
       "      <td>Coffee and endothelial function: a battle betw...</td>\n",
       "      <td>0</td>\n",
       "      <td>67</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>Can poor blood supply to the intervertebral di...</td>\n",
       "      <td>Symptomatic disc herniation and serum lipid le...</td>\n",
       "      <td>0</td>\n",
       "      <td>16</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>What are the structural diversity and molecula...</td>\n",
       "      <td>An update on bioactive plant lignans.\\nLignans...</td>\n",
       "      <td>0</td>\n",
       "      <td>78</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>What are the recent advances in natural and or...</td>\n",
       "      <td>Beyond celery and starter culture: advances in...</td>\n",
       "      <td>0</td>\n",
       "      <td>95</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>What are the factors that contribute to the de...</td>\n",
       "      <td>Constipation and a Low-Fiber Diet are Not Asso...</td>\n",
       "      <td>0</td>\n",
       "      <td>32</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>...</th>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>295</th>\n",
       "      <td>How does galactose consumption affect the risk...</td>\n",
       "      <td>Adolescent milk fat and galactose consumption ...</td>\n",
       "      <td>0</td>\n",
       "      <td>18</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>296</th>\n",
       "      <td>How does dietary cholesterol affect LDL choles...</td>\n",
       "      <td>Maintenance of the LDL cholesterol:HDL cholest...</td>\n",
       "      <td>0</td>\n",
       "      <td>82</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>297</th>\n",
       "      <td>What are the patterns of monoclonal immunoglob...</td>\n",
       "      <td>Clinical Trials and Observations: Monoclonal g...</td>\n",
       "      <td>0</td>\n",
       "      <td>46</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>298</th>\n",
       "      <td>How do the prevalence of ideal cardiovascular ...</td>\n",
       "      <td>Status of Cardiovascular Health in US Adults: ...</td>\n",
       "      <td>0</td>\n",
       "      <td>85</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>299</th>\n",
       "      <td>How does indomethacin affect natural killer ce...</td>\n",
       "      <td>Natural killer cell activity in peripheral blo...</td>\n",
       "      <td>0</td>\n",
       "      <td>25</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>300 rows × 4 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "                                              question  \\\n",
       "0    What are the chronic effects of coffee consump...   \n",
       "1    Can poor blood supply to the intervertebral di...   \n",
       "2    What are the structural diversity and molecula...   \n",
       "3    What are the recent advances in natural and or...   \n",
       "4    What are the factors that contribute to the de...   \n",
       "..                                                 ...   \n",
       "295  How does galactose consumption affect the risk...   \n",
       "296  How does dietary cholesterol affect LDL choles...   \n",
       "297  What are the patterns of monoclonal immunoglob...   \n",
       "298  How do the prevalence of ideal cardiovascular ...   \n",
       "299  How does indomethacin affect natural killer ce...   \n",
       "\n",
       "                                        positive_chunk  positive_chunk_id  \\\n",
       "0    Coffee and endothelial function: a battle betw...                  0   \n",
       "1    Symptomatic disc herniation and serum lipid le...                  0   \n",
       "2    An update on bioactive plant lignans.\\nLignans...                  0   \n",
       "3    Beyond celery and starter culture: advances in...                  0   \n",
       "4    Constipation and a Low-Fiber Diet are Not Asso...                  0   \n",
       "..                                                 ...                ...   \n",
       "295  Adolescent milk fat and galactose consumption ...                  0   \n",
       "296  Maintenance of the LDL cholesterol:HDL cholest...                  0   \n",
       "297  Clinical Trials and Observations: Monoclonal g...                  0   \n",
       "298  Status of Cardiovascular Health in US Adults: ...                  0   \n",
       "299  Natural killer cell activity in peripheral blo...                  0   \n",
       "\n",
       "     paragraph_id  \n",
       "0              67  \n",
       "1              16  \n",
       "2              78  \n",
       "3              95  \n",
       "4              32  \n",
       "..            ...  \n",
       "295            18  \n",
       "296            82  \n",
       "297            46  \n",
       "298            85  \n",
       "299            25  \n",
       "\n",
       "[300 rows x 4 columns]"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "qa_pairs = pd.read_csv(QA_PAIRS_PATH).sample(frac=1).reset_index(drop=True)\n",
    "qa_pairs"
   ]
  },
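  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`sample(frac=1)` returns all rows in random order, and `reset_index(drop=True)` renumbers them from 0. A minimal sketch with toy data (the column values here are made up; pass `random_state` if you need a reproducible shuffle):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "df = pd.DataFrame({'question': ['a', 'b', 'c'], 'positive_chunk': ['A', 'B', 'C']})\n",
    "# frac=1 samples every row without replacement, i.e. a full shuffle\n",
    "shuffled = df.sample(frac=1, random_state=0).reset_index(drop=True)\n",
    "print(shuffled.index.tolist())  # [0, 1, 2] - the index is renumbered after the shuffle\n",
    "```"
   ]
  },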
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Convert pandas dataframe to qrels, queries and passages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "passages = OrderedDict()\n",
    "queries = []\n",
    "positive_passage_ids = []\n",
    "for _, row in qa_pairs.iterrows():\n",
    "    queries.append(row[\"question\"])\n",
    "    positive_passage_str = row[\"positive_chunk\"]\n",
    "    if positive_passage_str not in passages:\n",
    "        # assign the next integer id to a passage seen for the first time\n",
    "        passages[positive_passage_str] = len(passages)\n",
    "    positive_passage_ids.append(passages[positive_passage_str])"
   ]
  },
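  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The loop above deduplicates passages by mapping each distinct passage string to a stable integer id, so queries that share a positive passage point at the same id. The same idea as a minimal, self-contained sketch (toy rows, not from the notebook's dataset):\n",
    "\n",
    "```python\n",
    "from collections import OrderedDict\n",
    "\n",
    "# toy (question, positive passage) rows; two rows share one passage\n",
    "rows = [('q1', 'passage A'), ('q2', 'passage B'), ('q3', 'passage A')]\n",
    "\n",
    "passages = OrderedDict()   # passage string -> integer id, in first-seen order\n",
    "positive_passage_ids = []  # one id per query, aligned with the query order\n",
    "for _, passage in rows:\n",
    "    if passage not in passages:\n",
    "        passages[passage] = len(passages)\n",
    "    positive_passage_ids.append(passages[passage])\n",
    "\n",
    "print(positive_passage_ids)  # [0, 1, 0] - duplicate passages reuse the same id\n",
    "```"
   ]
  },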
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "300"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(queries)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Perform Embedding Generation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "GPU available: True (cuda), used: True\n",
      "TPU available: False, using: 0 TPU cores\n",
      "IPU available: False, using: 0 IPUs\n",
      "HPU available: False, using: 0 HPUs\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: context_parallel_size in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: expert_model_parallel_size in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: gradient_accumulation_fusion in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_overlap in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_split_ag in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_atomic_ag in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_split_rs in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_atomic_rs in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_bulk_wgrad in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_bulk_dgrad in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: finalize_model_grads_func in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: overlap_p2p_comm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: batch_p2p_comm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: pipeline_model_parallel_split_rank in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_num_layers in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: _cpu_offloading_context in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_activations in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_weights in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: barrier_with_L1_time in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2024-03-15 22:59:21 megatron_init:253] Rank 0 has data parallel group : [0, 1]\n",
      "[NeMo I 2024-03-15 22:59:21 megatron_init:259] Rank 0 has combined group of data parallel and context parallel : [0, 1]\n",
      "[NeMo I 2024-03-15 22:59:21 megatron_init:264] All data parallel group ranks with context parallel combined: [[0, 1]]\n",
      "[NeMo I 2024-03-15 22:59:21 megatron_init:267] Ranks 0 has data parallel rank: 0\n",
      "[NeMo I 2024-03-15 22:59:21 megatron_init:284] Rank 0 has context parallel group: [0]\n",
      "[NeMo I 2024-03-15 22:59:21 megatron_init:287] All context parallel group ranks: [[0], [1]]\n",
      "[NeMo I 2024-03-15 22:59:21 megatron_init:288] Ranks 0 has context parallel rank: 0\n",
      "[NeMo I 2024-03-15 22:59:21 megatron_init:299] Rank 0 has model parallel group: [0]\n",
      "[NeMo I 2024-03-15 22:59:21 megatron_init:300] All model parallel group ranks: [[0], [1]]\n",
      "[NeMo I 2024-03-15 22:59:21 megatron_init:310] Rank 0 has tensor model parallel group: [0]\n",
      "[NeMo I 2024-03-15 22:59:21 megatron_init:314] All tensor model parallel group ranks: [[0], [1]]\n",
      "[NeMo I 2024-03-15 22:59:21 megatron_init:315] Rank 0 has tensor model parallel rank: 0\n",
      "[NeMo I 2024-03-15 22:59:21 megatron_init:344] Rank 0 has pipeline model parallel group: [0]\n",
      "[NeMo I 2024-03-15 22:59:21 megatron_init:356] Rank 0 has embedding group: [0]\n",
      "[NeMo I 2024-03-15 22:59:21 megatron_init:362] All pipeline model parallel group ranks: [[0], [1]]\n",
      "[NeMo I 2024-03-15 22:59:21 megatron_init:363] Rank 0 has pipeline model parallel rank 0\n",
      "[NeMo I 2024-03-15 22:59:21 megatron_init:364] All embedding group ranks: [[0], [1]]\n",
      "[NeMo I 2024-03-15 22:59:21 megatron_init:365] Rank 0 has embedding rank: 0\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[W init.cpp:767] Warning: nvfuser is no longer supported in torch script, use _jit_set_nvfuser_enabled is deprecated and a no-op (function operator())\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: context_parallel_size in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: expert_model_parallel_size in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: gradient_accumulation_fusion in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_overlap in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_split_ag in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_atomic_ag in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_split_rs in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_atomic_rs in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_bulk_wgrad in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_bulk_dgrad in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: finalize_model_grads_func in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: overlap_p2p_comm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: batch_p2p_comm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: pipeline_model_parallel_split_rank in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_num_layers in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: _cpu_offloading_context in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_activations in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_weights in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:21 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: barrier_with_L1_time in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2024-03-15 22:59:21 tokenizer_utils:198] Getting Megatron tokenizer for pretrained model name: intfloat/e5-large-unsupervised, custom vocab file: None, and merges file: None\n",
      "[NeMo I 2024-03-15 22:59:21 tokenizer_utils:127] Getting HuggingFace AutoTokenizer with pretrained_model_name: intfloat/e5-large-unsupervised, vocab_file: None, merges_files: None, special_tokens_dict: {}, and use_fast: False\n"
     ]
    },
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.005921602249145508,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": 11,
       "postfix": null,
       "prefix": "tokenizer_config.json",
       "rate": null,
       "total": 372,
       "unit": "B",
       "unit_divisor": 1000,
       "unit_scale": true
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "621528cf707f4d9bb229d629b70d9ae9",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer_config.json:   0%|          | 0.00/372 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.005434989929199219,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": 11,
       "postfix": null,
       "prefix": "vocab.txt",
       "rate": null,
       "total": 231508,
       "unit": "B",
       "unit_divisor": 1000,
       "unit_scale": true
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "f8b2168c35f64110a7eba690286567b2",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "vocab.txt:   0%|          | 0.00/232k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.005540609359741211,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": 11,
       "postfix": null,
       "prefix": "special_tokens_map.json",
       "rate": null,
       "total": 112,
       "unit": "B",
       "unit_divisor": 1000,
       "unit_scale": true
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "46f397a737434014a676a0994b44b215",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "special_tokens_map.json:   0%|          | 0.00/112 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/json": {
       "ascii": false,
       "bar_format": null,
       "colour": null,
       "elapsed": 0.0051577091217041016,
       "initial": 0,
       "n": 0,
       "ncols": null,
       "nrows": 11,
       "postfix": null,
       "prefix": "tokenizer.json",
       "rate": null,
       "total": 466081,
       "unit": "B",
       "unit_divisor": 1000,
       "unit_scale": true
      },
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "9c4785db6b004d3c94f930acfeb7d4ae",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer.json:   0%|          | 0.00/466k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo I 2024-03-15 22:59:22 megatron_base_model:574] Padded vocab_size: 30592, original vocab_size: 30522, dummy tokens: 70.\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: context_parallel_size in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: expert_model_parallel_size in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: gradient_accumulation_fusion in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_overlap in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_split_ag in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_atomic_ag in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_split_rs in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_atomic_rs in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_bulk_wgrad in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_bulk_dgrad in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: finalize_model_grads_func in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: overlap_p2p_comm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: batch_p2p_comm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: pipeline_model_parallel_split_rank in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_num_layers in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: _cpu_offloading_context in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_activations in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_weights in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: barrier_with_L1_time in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: num_query_groups in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: attention_dropout in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: add_qkv_bias in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: num_moe_experts in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: rotary_interleaved in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: window_size in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: masked_softmax_fusion in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: persist_layer_norm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: memory_efficient_layer_norm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: fp8_margin in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: fp8_interval in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: fp8_amax_history_len in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: fp8_amax_compute_algo in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: fp8_wgrad in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: clone_scatter_output_in_embedding in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: moe_router_load_balancing_type in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: moe_router_topk in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: moe_grouped_gemm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: moe_aux_loss_coeff in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: moe_z_loss_coeff in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: moe_input_jitter_eps in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: moe_token_dropping in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:22 megatron_sbert_model:406] Model is running inference mode as training data is not specified, or could not be loaded\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Random seed set as 42\n",
      "[NeMo I 2024-03-15 22:59:23 nlp_overrides:1119] Model MegatronSBertModel was successfully restored from /workspace/files/models/NV-Embed-QA-4.nemo.\n"
     ]
    }
   ],
   "source": [
    "import math\n",
    "from tqdm import tqdm\n",
    "import torch\n",
    "\n",
    "from nemo.collections.nlp.models.information_retrieval.megatron_sbert_model import MegatronSBertModel\n",
    "from pytorch_lightning.trainer.trainer import Trainer\n",
    "\n",
    "model = MegatronSBertModel.restore_from(\n",
    "            PATH_TO_NEMO_MODEL,\n",
    "            trainer=Trainer()\n",
    "        ).to(\"cuda:1\")\n",
    "\n",
    "def encode_text(model, texts, batch_size=1, device=\"cuda:0\"):\n",
    "    with torch.no_grad():\n",
    "        tokenized_texts = model.tokenize(texts)\n",
    "        model = model.to(device).eval()\n",
    "\n",
    "        input_ids = tokenized_texts[\"input_ids\"].to(device)\n",
    "        attention_mask = tokenized_texts[\"attention_mask\"].to(device)\n",
    "        token_type_ids = tokenized_texts[\"token_type_ids\"].to(device)\n",
    "\n",
    "        num_batches = int(math.ceil(len(texts)/batch_size))\n",
    "\n",
    "        embeddings = []\n",
    "        for batch_id in tqdm(range(num_batches)):\n",
    "            start = batch_size * batch_id\n",
    "            end = batch_size * (batch_id+1)\n",
    "\n",
    "            batch_embeddings = model(input_ids[start:end, :], attention_mask[start:end, :], token_type_ids[start:end, :])\n",
    "            embeddings.append(batch_embeddings)\n",
    "        return torch.cat(embeddings, dim=1).swapaxes(0,1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|███████████████████| 60/60 [00:01<00:00, 32.25it/s]\n",
      "100%|███████████████████| 20/20 [00:01<00:00, 10.48it/s]\n"
     ]
    }
   ],
   "source": [
    "query_embeddings = encode_text(model, [(\"query: \"+query) for query in queries], device = \"cuda:0\", batch_size=5)\n",
    "passage_embeddings = encode_text(model, [(\"passage: \"+passage) for passage in list(passages)],  device = \"cuda:0\", batch_size=5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Mine Hard Negatives"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Hard negative mining refers to the creation of negative examples that are 'hard'. Essentially, what this means is that rather than performing random sampling - which would lead to easy negatives - we mine for harder negative examples.\n",
    "\n",
    "This has an advantage that the negatives would not be obvious to the model during training, and hence would actually be more helpful.\n",
    "\n",
    "However, hard negative mining has a higher probability of generating false negatives. To avoid this, we set a safety `margin`. This margin is a hyperparameter and you may change it depending on if more false negatives are being generated. For instance, a larger corpus has a higher probability of generating false negatives than a smaller one, as the probability of finding another positive increases. In such cases a lower `margin` value may be more helpful."
   ]
  },
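  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The margin-based filtering described above can be sketched in plain NumPy. The snippet below is a minimal toy illustration, not part of this notebook's pipeline; the `mine_hard_negatives` helper and its scores are hypothetical:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def mine_hard_negatives(scores, positive_idx, margin, k):\n",
    "    \"\"\"Toy margin-based miner. scores has shape (num_queries, num_passages).\"\"\"\n",
    "    out = []\n",
    "    for q, pos in enumerate(positive_idx):\n",
    "        row = scores[q].astype(float).copy()\n",
    "        threshold = row[pos] * margin       # safety threshold for this query\n",
    "        row[pos] = -np.inf                  # never select the positive itself\n",
    "        row[row > threshold] = -np.inf      # too similar: likely a false negative\n",
    "        out.append(sorted(np.argsort(row)[::-1][:k].tolist()))\n",
    "    return out\n",
    "\n",
    "scores = np.array([[0.90, 0.70, 0.85, 0.20],\n",
    "                   [0.30, 0.95, 0.40, 0.92]])\n",
    "mine_hard_negatives(scores, positive_idx=[0, 1], margin=0.95, k=2)\n",
    "# -> [[1, 2], [0, 2]]; passage 3 of the second query (0.92 > 0.95 * 0.95) is excluded\n",
    "```\n",
    "Lowering `margin` shrinks the threshold, trading away the very hardest candidates for a lower false-negative risk."
   ]
  },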
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "def hard_negative_mining(\n",
    "        query_embeddings,\n",
    "        passage_embeddings,\n",
    "        batch_size,\n",
    "        margin,\n",
    "        num_negs,\n",
    "        query_positive_paragraph_idxs\n",
    "):\n",
    "    hard_negative_idxs = []\n",
    "    num_batches = int(math.ceil(query_embeddings.shape[0] / batch_size))\n",
    "    # Split the query embeddings into batches of given batch size\n",
    "    for current_batch_idx in range(num_batches):\n",
    "        start = (current_batch_idx)*batch_size\n",
    "        end = (current_batch_idx+1)*(batch_size)\n",
    "        batch_query_embeddings = query_embeddings[start:end]\n",
    "        batch_query_positive_paragraph_idxs = query_positive_paragraph_idxs[start:end]\n",
    "        \n",
    "        # Find minimum query-positive_chunk similarity score for each query in a batch\n",
    "        query_passage_pos_scores = np.matmul(batch_query_embeddings, passage_embeddings.T)\n",
    "\n",
    "        min_pos_scores = []\n",
    "        for query_id, row in enumerate(query_passage_pos_scores):\n",
    "            min_value = float(\"inf\")\n",
    "            for query_positive_paragraph_idx in query_positive_paragraph_idxs[query_id+start]:\n",
    "                min_value = min(min_value, row[query_positive_paragraph_idx])\n",
    "            min_pos_scores.append(min_value)\n",
    "        min_pos_scores = np.array(min_pos_scores)\n",
    "            \n",
    "        # For each query set minimum threshold as margin*minimum_batch_positive_score \n",
    "        mining_thresholds = min_pos_scores*margin\n",
    "        \n",
    "        # Filter out all chunks belonging to the same paragraph as positive passage OR those manually labelled as positives\n",
    "        for query_idx, positive_paragraph_idxs in enumerate(batch_query_positive_paragraph_idxs):\n",
    "            batch_query_idx = query_idx%batch_size\n",
    "            query_passage_pos_scores[batch_query_idx][positive_paragraph_idxs] = -float(\"inf\")\n",
    "        \n",
    "        # Filter out all chunks with score>mining_threshold\n",
    "        for row_idx in range(query_passage_pos_scores.shape[0]):\n",
    "            row = query_passage_pos_scores[row_idx]\n",
    "            row[row>mining_thresholds[row_idx]] = -float(\"inf\")\n",
    "            \n",
    "        # For each query get top_k hard negatives from all that remains\n",
    "        for row in query_passage_pos_scores:\n",
    "            top_k_hard_negative_idxs = np.argpartition(row, -num_negs)[-num_negs:]\n",
    "            hard_negative_idxs.append(list(top_k_hard_negative_idxs))\n",
    "            \n",
    "    return hard_negative_idxs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "positive_passage_ids_list = [[element] for element in positive_passage_ids]\n",
    "hard_negative_idxs = hard_negative_mining(query_embeddings=query_embeddings.cpu().numpy(), passage_embeddings=passage_embeddings.cpu().numpy(), query_positive_paragraph_idxs=positive_passage_ids_list,\n",
    "                    batch_size=32, num_negs=10, margin=0.95)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Construct training data in the format\n",
    "```\n",
    "[\n",
    "    {\n",
    "        \"question\": \"Query\",\n",
    "        \"pos_doc\": [\"Positive\"],\n",
    "        \"neg_doc\": [\"Negative_1\", \"Negative_2\", ..., \"Negative_n\"]\n",
    "    },\n",
    "    {\n",
    "        // Next data instance\n",
    "    },\n",
    "    ...,\n",
    "    {\n",
    "        // Subsequent data instance\n",
    "    }\n",
    "]\n",
    "```"
   ]
  },
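  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal illustration of this schema (the `make_record` helper is hypothetical, shown only for clarity):\n",
    "```python\n",
    "import json\n",
    "\n",
    "def make_record(question, positive, negatives):\n",
    "    # One training instance: a single positive and a list of mined hard negatives\n",
    "    return {\"question\": question, \"pos_doc\": [positive], \"neg_doc\": list(negatives)}\n",
    "\n",
    "record = make_record(\"What is hard negative mining?\",\n",
    "                     \"Hard negative mining selects difficult negatives.\",\n",
    "                     [\"An unrelated passage.\", \"Another unrelated passage.\"])\n",
    "json.dumps([record])  # the output file is a single JSON list of such records\n",
    "```"
   ]
  },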
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "data = []\n",
    "for query_id, query in enumerate(queries):\n",
    "    hard_negative_passages = []\n",
    "    for hard_negative_idx in hard_negative_idxs[query_id]:\n",
    "        for key, val in passages.items():\n",
    "            if val == hard_negative_idx:\n",
    "                hard_negative_passage = key\n",
    "                hard_negative_passages.append(hard_negative_passage)\n",
    "    \n",
    "    for key, val in passages.items():\n",
    "        if val == positive_passage_ids[query_id]:\n",
    "            positive_passage = key\n",
    "            break\n",
    "\n",
    "    datapoint = {\n",
    "        \"question\" : query,\n",
    "        \"pos_doc\" : [positive_passage],\n",
    "        \"neg_doc\" : hard_negative_passages\n",
    "    }\n",
    "    data.append(datapoint)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "300"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Saving data to: /tmp/data/output_data.json\n"
     ]
    }
   ],
   "source": [
    "print(f\"Saving data to: {OUTPUT_DATA_PATH}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "with open(OUTPUT_DATA_PATH, \"w\") as file:\n",
    "    json.dump(data, file)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "python /opt/NeMo/examples/nlp/information_retrieval/megatron_sbert_finetune.py --config-path=/NeMo/examples/nlp/information_retrieval/conf/ --config-name=megatron_sbert_config restore_from_path=/workspace/files/models/NV-Embed-QA-4.nemo trainer.devices=2 trainer.val_check_interval=10 trainer.max_epochs=1 +trainer.num_sanity_val_steps=0 model.global_batch_size=8 model.micro_batch_size=4 model.tokenizer.library=huggingface model.tokenizer.type=intfloat/e5-large-unsupervised ++model.data.data_prefix=/tmp/data/output_data.json ++model.tokenizer.do_lower_case=False ++model.data.evaluation_sample_size=50 ++model.data.hard_negatives_to_train=4 ++model.data.evaluation_steps=100 exp_manager.explicit_log_dir=/tmp/trained_model/ exp_manager.create_wandb_logger=False ++exp_manager.checkpoint_callback_params.save_best_model=True exp_manager.resume_if_exists=False\n"
     ]
    }
   ],
   "source": [
    "COMMAND = f\"python /opt/NeMo/examples/nlp/information_retrieval/megatron_sbert_finetune.py \\\n",
    "--config-path={CONFIG_PATH} \\\n",
    "--config-name={CONFIG_NAME} \\\n",
    "restore_from_path={PATH_TO_NEMO_MODEL} \\\n",
    "trainer.devices={NUM_DEVICES} \\\n",
    "trainer.val_check_interval=10 \\\n",
    "trainer.max_epochs=1 \\\n",
    "+trainer.num_sanity_val_steps=0 \\\n",
    "model.global_batch_size=8 \\\n",
    "model.micro_batch_size=4 \\\n",
    "model.tokenizer.library=huggingface \\\n",
    "model.tokenizer.type=intfloat/e5-large-unsupervised \\\n",
    "++model.data.data_prefix={DATASET_PATH} \\\n",
    "++model.tokenizer.do_lower_case=False \\\n",
    "++model.data.evaluation_sample_size=50 \\\n",
    "++model.data.hard_negatives_to_train=4 \\\n",
    "++model.data.evaluation_steps=100 \\\n",
    "exp_manager.explicit_log_dir={SAVE_DIR} \\\n",
    "exp_manager.create_wandb_logger=False \\\n",
    "++exp_manager.checkpoint_callback_params.save_best_model=True \\\n",
    "exp_manager.resume_if_exists=False\"\n",
    "\n",
    "print(COMMAND)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[NeMo W 2024-03-15 22:59:40 nemo_logging:349] /usr/local/lib/python3.10/dist-packages/hydra/_internal/hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.\n",
      "    See https://hydra.cc/docs/next/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.\n",
      "      ret = run_job(\n",
      "    \n",
      "[NeMo I 2024-03-15 22:59:40 megatron_sbert_finetune:31] \n",
      "    \n",
      "    ************** Experiment configuration ***********\n",
      "[NeMo I 2024-03-15 22:59:40 megatron_sbert_finetune:32] \n",
      "    name: megatron_bert\n",
      "    restore_from_path: /workspace/files/models/NV-Embed-QA-4.nemo\n",
      "    trainer:\n",
      "      devices: 2\n",
      "      num_nodes: 1\n",
      "      accelerator: gpu\n",
      "      precision: 16\n",
      "      logger: false\n",
      "      enable_checkpointing: false\n",
      "      use_distributed_sampler: false\n",
      "      max_epochs: 1\n",
      "      max_steps: 100000\n",
      "      log_every_n_steps: 10\n",
      "      val_check_interval: 10\n",
      "      limit_val_batches: 50\n",
      "      limit_test_batches: 500\n",
      "      accumulate_grad_batches: 1\n",
      "      gradient_clip_val: 1.0\n",
      "      benchmark: false\n",
      "      num_sanity_val_steps: 0\n",
      "    exp_manager:\n",
      "      explicit_log_dir: /tmp/trained_model/\n",
      "      exp_dir: null\n",
      "      name: megatron_bert\n",
      "      create_wandb_logger: false\n",
      "      wandb_logger_kwargs:\n",
      "        project: null\n",
      "        name: null\n",
      "      resume_if_exists: false\n",
      "      resume_ignore_no_checkpoint: true\n",
      "      create_checkpoint_callback: true\n",
      "      checkpoint_callback_params:\n",
      "        monitor: val_loss\n",
      "        save_top_k: 10\n",
      "        mode: min\n",
      "        always_save_nemo: false\n",
      "        filename: megatron_bert--{val_loss:.2f}-{step}-{consumed_samples}\n",
      "        model_parallel_size: ${multiply:${model.tensor_model_parallel_size}, ${model.pipeline_model_parallel_size}}\n",
      "        save_best_model: true\n",
      "    model:\n",
      "      mcore_bert: false\n",
      "      micro_batch_size: 4\n",
      "      global_batch_size: 8\n",
      "      tensor_model_parallel_size: 1\n",
      "      pipeline_model_parallel_size: 1\n",
      "      virtual_pipeline_model_parallel_size: null\n",
      "      encoder_seq_length: 512\n",
      "      max_position_embeddings: ${.encoder_seq_length}\n",
      "      position_embedding_type: learned_absolute\n",
      "      num_layers: 24\n",
      "      hidden_size: 1024\n",
      "      ffn_hidden_size: 4096\n",
      "      num_attention_heads: 16\n",
      "      skip_head: true\n",
      "      transformer_block_type: post_ln\n",
      "      init_method_std: 0.02\n",
      "      hidden_dropout: 0.1\n",
      "      kv_channels: null\n",
      "      apply_query_key_layer_scaling: false\n",
      "      normalization: layernorm\n",
      "      layernorm_epsilon: 1.0e-12\n",
      "      make_vocab_size_divisible_by: 128\n",
      "      pre_process: true\n",
      "      post_process: true\n",
      "      bert_binary_head: true\n",
      "      megatron_legacy: true\n",
      "      tokenizer:\n",
      "        library: huggingface\n",
      "        type: intfloat/e5-large-unsupervised\n",
      "        model: null\n",
      "        vocab_file: null\n",
      "        merge_file: null\n",
      "        do_lower_case: false\n",
      "      native_amp_init_scale: 4294967296\n",
      "      native_amp_growth_interval: 1000\n",
      "      fp32_residual_connection: false\n",
      "      fp16_lm_cross_entropy: false\n",
      "      megatron_amp_O2: false\n",
      "      grad_allreduce_chunk_size_mb: 125\n",
      "      grad_div_ar_fusion: false\n",
      "      seed: 1234\n",
      "      use_cpu_initialization: false\n",
      "      onnx_safe: false\n",
      "      gradient_as_bucket_view: true\n",
      "      activations_checkpoint_granularity: null\n",
      "      activations_checkpoint_method: null\n",
      "      activations_checkpoint_num_layers: null\n",
      "      num_micro_batches_with_partial_activation_checkpoints: null\n",
      "      activations_checkpoint_layers_per_pipeline: null\n",
      "      sequence_parallel: false\n",
      "      data:\n",
      "        data_prefix: /tmp/data/output_data.json\n",
      "        index_mapping_dir: null\n",
      "        data_impl: mmap\n",
      "        splits_string: 900,50,50\n",
      "        seq_length: ${model.encoder_seq_length}\n",
      "        skip_warmup: true\n",
      "        num_workers: 0\n",
      "        dataloader_type: single\n",
      "        reset_position_ids: false\n",
      "        reset_attention_mask: false\n",
      "        eod_mask_loss: false\n",
      "        masked_lm_prob: 0.15\n",
      "        short_seq_prob: 0.1\n",
      "        evaluation_sample_size: 50\n",
      "        hard_negatives_to_train: 4\n",
      "        evaluation_steps: 100\n",
      "      optim:\n",
      "        name: fused_adam\n",
      "        lr: 0.0002\n",
      "        weight_decay: 0.01\n",
      "        betas:\n",
      "        - 0.9\n",
      "        - 0.98\n",
      "        sched:\n",
      "          name: CosineAnnealing\n",
      "          warmup_steps: 500\n",
      "          constant_steps: 50000\n",
      "          min_lr: 2.0e-05\n",
      "    \n",
      "[NeMo W 2024-03-15 22:59:40 nemo_logging:349] /usr/local/lib/python3.10/dist-packages/lightning_fabric/connector.py:554: UserWarning: 16 is supported for historical reasons but its usage is discouraged. Please set your precision to 16-mixed instead!\n",
      "      rank_zero_warn(\n",
      "    \n",
      "GPU available: True (cuda), used: True\n",
      "TPU available: False, using: 0 TPU cores\n",
      "IPU available: False, using: 0 IPUs\n",
      "HPU available: False, using: 0 HPUs\n",
      "[NeMo I 2024-03-15 22:59:40 exp_manager:396] Experiments will be logged at /tmp/trained_model\n",
      "[NeMo I 2024-03-15 22:59:40 exp_manager:856] TensorboardLogger has been set up\n",
      "[NeMo W 2024-03-15 22:59:40 exp_manager:966] The checkpoint callback was told to monitor a validation value and trainer's max_steps was set to 100000. Please ensure that max_steps will run for at least 1 epochs to ensure that checkpointing will not error out.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: context_parallel_size in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: expert_model_parallel_size in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: gradient_accumulation_fusion in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_overlap in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_split_ag in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_atomic_ag in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_split_rs in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_atomic_rs in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_bulk_wgrad in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_bulk_dgrad in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: finalize_model_grads_func in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: overlap_p2p_comm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: batch_p2p_comm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: pipeline_model_parallel_split_rank in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_num_layers in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: _cpu_offloading_context in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_activations in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_weights in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: barrier_with_L1_time in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[W init.cpp:767] Warning: nvfuser is no longer supported in torch script, use _jit_set_nvfuser_enabled is deprecated and a no-op (function operator())\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:253] Rank 0 has data parallel group : [0, 1]\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:259] Rank 0 has combined group of data parallel and context parallel : [0, 1]\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:264] All data parallel group ranks with context parallel combined: [[0, 1]]\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:267] Ranks 0 has data parallel rank: 0\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:284] Rank 0 has context parallel group: [0]\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:287] All context parallel group ranks: [[0], [1]]\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:288] Ranks 0 has context parallel rank: 0\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:299] Rank 0 has model parallel group: [0]\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:300] All model parallel group ranks: [[0], [1]]\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:310] Rank 0 has tensor model parallel group: [0]\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:314] All tensor model parallel group ranks: [[0], [1]]\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:315] Rank 0 has tensor model parallel rank: 0\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:344] Rank 0 has pipeline model parallel group: [0]\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:356] Rank 0 has embedding group: [0]\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:362] All pipeline model parallel group ranks: [[0], [1]]\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:363] Rank 0 has pipeline model parallel rank 0\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:364] All embedding group ranks: [[0], [1]]\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_init:365] Rank 0 has embedding rank: 0\n",
      "24-03-15 22:59:41 - PID:1085 - rank:(0, 0, 0, 0) - microbatches.py:39 - INFO - setting number of micro-batches to constant 1\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: context_parallel_size in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: expert_model_parallel_size in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: gradient_accumulation_fusion in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_overlap in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_split_ag in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_atomic_ag in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_split_rs in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_atomic_rs in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_bulk_wgrad in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_bulk_dgrad in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: finalize_model_grads_func in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: overlap_p2p_comm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: batch_p2p_comm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: pipeline_model_parallel_split_rank in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_num_layers in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: _cpu_offloading_context in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_activations in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_weights in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: barrier_with_L1_time in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo I 2024-03-15 22:59:41 tokenizer_utils:176] Getting HuggingFace AutoTokenizer with pretrained_model_name: intfloat/e5-large-unsupervised\n",
      "[NeMo I 2024-03-15 22:59:41 megatron_base_model:574] Padded vocab_size: 30592, original vocab_size: 30522, dummy tokens: 70.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: context_parallel_size in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: expert_model_parallel_size in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: gradient_accumulation_fusion in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_overlap in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_split_ag in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_atomic_ag in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_split_rs in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_atomic_rs in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_bulk_wgrad in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: tp_comm_bulk_dgrad in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: finalize_model_grads_func in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: overlap_p2p_comm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: batch_p2p_comm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: pipeline_model_parallel_split_rank in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_num_layers in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: _cpu_offloading_context in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_activations in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: cpu_offloading_weights in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:1139] The model: MegatronSBertModel() does not have field.name: barrier_with_L1_time in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: num_query_groups in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: attention_dropout in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: add_qkv_bias in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: num_moe_experts in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: rotary_interleaved in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: window_size in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: masked_softmax_fusion in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: persist_layer_norm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: memory_efficient_layer_norm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: fp8_margin in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: fp8_interval in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: fp8_amax_history_len in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: fp8_amax_compute_algo in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: fp8_wgrad in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: clone_scatter_output_in_embedding in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: moe_router_load_balancing_type in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: moe_router_topk in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: moe_grouped_gemm in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: moe_aux_loss_coeff in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: moe_z_loss_coeff in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: moe_input_jitter_eps in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_base_model:546] The model: MegatronSBertModel() does not have field.name: moe_token_dropping in its cfg. Add this key to cfg or config_mapping to make to make it configurable.\n",
      "Random seed set as 42\n",
      "[NeMo W 2024-03-15 22:59:41 megatron_sbert_model:404] Model is running in training mode\n",
      "[NeMo I 2024-03-15 22:59:42 nlp_overrides:1119] Model MegatronSBertModel was successfully restored from /workspace/files/models/NV-Embed-QA-4.nemo.\n",
      "[NeMo W 2024-03-15 22:59:42 nemo_logging:349] /usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/configuration_validator.py:153: UserWarning: The `batch_idx` argument in `MegatronSBertModel.on_train_batch_start` hook may not match with the actual batch index when using a `dataloader_iter` argument in your `training_step`.\n",
      "      rank_zero_warn(\n",
      "    \n",
      "[NeMo W 2024-03-15 22:59:42 nemo_logging:349] /usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/configuration_validator.py:153: UserWarning: The `batch_idx` argument in `MegatronSBertModel.on_train_batch_end` hook may not match with the actual batch index when using a `dataloader_iter` argument in your `training_step`.\n",
      "      rank_zero_warn(\n",
      "    \n",
      "Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/2\n"
     ]
    }
   ],
   "source": [
    "!{COMMAND}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Model Evaluation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "from beir import util, LoggingHandler\n",
    "from beir.retrieval import models\n",
    "from beir.datasets.data_loader import GenericDataLoader\n",
    "from beir.retrieval.evaluation import EvaluateRetrieval\n",
    "from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES\n",
    "\n",
    "import logging\n",
    "import pathlib, os\n",
    "\n",
    "#### Just some code to print debug information to stdout\n",
    "logging.basicConfig(format='%(asctime)s - %(message)s',\n",
    "                    datefmt='%Y-%m-%d %H:%M:%S',\n",
    "                    level=logging.INFO,\n",
    "                    handlers=[LoggingHandler()])\n",
    "#### /print debug information to stdout\n",
    "\n",
    "#### Download the nfcorpus dataset and unzip it\n",
    "dataset = \"nfcorpus\"\n",
    "url = \"https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/{}.zip\".format(dataset)\n",
    "out_dir = os.path.join(\"/tmp\", \"datasets\")\n",
    "data_path = util.download_and_unzip(url, out_dir)\n",
    "\n",
    "#### Provide the data_path where nfcorpus has been downloaded and unzipped\n",
    "corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split=\"test\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Create a wrapper NeMo model for retrieval evaluation on this dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES\n",
    "from nemo.collections.nlp.models.information_retrieval.megatron_sbert_model import MegatronSBertModel\n",
    "from pytorch_lightning.trainer.trainer import Trainer\n",
    "from typing import List, Dict\n",
    "import numpy as np\n",
    "\n",
    "import torch\n",
    "import math\n",
    "from tqdm import tqdm\n",
    "\n",
    "class NeMoModel:\n",
    "    def __init__(self, model_path=None, **kwargs):\n",
    "        self.model = MegatronSBertModel.restore_from(\n",
    "            model_path,\n",
    "            trainer=Trainer())\n",
    "        self.model = self.model.to(\"cuda:0\").half()\n",
    "    \n",
    "    def encode_text(self, texts, batch_size=1, device=\"cuda:0\"):\n",
    "        with torch.no_grad():\n",
    "            tokenized_texts = self.model.tokenize(texts)\n",
    "            \n",
    "            input_ids = tokenized_texts[\"input_ids\"].to(device)\n",
    "            attention_mask = tokenized_texts[\"attention_mask\"].to(device)\n",
    "            token_type_ids = tokenized_texts[\"token_type_ids\"].to(device)\n",
    "\n",
    "            num_batches = int(math.ceil(len(texts)/batch_size))\n",
    "\n",
    "            embeddings = []\n",
    "            for batch_id in tqdm(range(num_batches)):\n",
    "                start = batch_size * batch_id\n",
    "                end = batch_size * (batch_id+1)\n",
    "\n",
    "                batch_embeddings = self.model(input_ids[start:end, :], attention_mask[start:end, :], token_type_ids[start:end, :])\n",
    "                embeddings.append(batch_embeddings)\n",
    "            return torch.cat(embeddings, dim=1).swapaxes(0,1)\n",
    "\n",
    "    # Encode queries (returned here as a torch tensor, which DRES also accepts)\n",
    "    def encode_queries(self, queries: List[str], batch_size: int, **kwargs) -> np.ndarray:\n",
    "        queries = [f\"query: {query}\" for query in queries]\n",
    "        embeddings = self.encode_text(texts=queries, batch_size=batch_size)\n",
    "        return embeddings\n",
    "    \n",
    "    # Encode corpus documents (returned here as a torch tensor, which DRES also accepts)\n",
    "    def encode_corpus(self, corpus: List[Dict[str, str]], batch_size: int, **kwargs) -> np.ndarray:\n",
    "        # Each BEIR corpus entry is a dict with \"title\" and \"text\" keys\n",
    "        corpus = [f\"passage: {doc.get('title', '')} {doc['text']}\".strip() for doc in corpus]\n",
    "        embeddings = self.encode_text(texts=corpus, batch_size=batch_size)\n",
    "        return embeddings"
   ]
  },
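  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `encode_text` method above splits the tokenized inputs into fixed-size batches using `math.ceil`. A minimal, NeMo-free sketch of the same slicing logic shows that every input is covered exactly once, even when the batch size does not divide the input count (tensor slicing simply clamps the last end index):\n",
    "```python\n",
    "import math\n",
    "\n",
    "def batch_slices(n_items, batch_size):\n",
    "    # Mirror the loop in encode_text: ceil-divide, then slice [start:end)\n",
    "    num_batches = int(math.ceil(n_items / batch_size))\n",
    "    return [(batch_size * b, batch_size * (b + 1)) for b in range(num_batches)]\n",
    "\n",
    "# 5 texts with batch_size=2 -> three batches; the last slice is short\n",
    "print(batch_slices(5, 2))  # [(0, 2), (2, 4), (4, 6)]\n",
    "```"
   ]
  },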
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Our fine-tuned model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "new_model = DRES(NeMoModel(model_path=\"/tmp/trained_model/checkpoints/megatron_bert.nemo\"), batch_size=1)\n",
    "retriever = EvaluateRetrieval(new_model, score_function=\"dot\") # or \"cos_sim\" for cosine similarity\n",
    "results = retriever.retrieve(corpus, queries)\n",
    "\n",
    "# Evaluate the model with NDCG@K, MAP@K, Recall@K and Precision@K, where K = [1, 3, 5, 10, 100, 1000]\n",
    "ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)"
   ]
  },
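  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`EvaluateRetrieval` scores query/document pairs with either a dot product (`\"dot\"`) or cosine similarity (`\"cos_sim\"`); the two agree only when the embeddings are L2-normalized. A small numpy sketch, independent of NeMo, illustrates the difference on toy vectors:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "q = np.array([1.0, 2.0, 2.0])  # toy query embedding (norm 3)\n",
    "d = np.array([2.0, 0.0, 0.0])  # toy document embedding (norm 2)\n",
    "\n",
    "dot = float(q @ d)\n",
    "cos = float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))\n",
    "print(dot, round(cos, 4))  # 2.0 0.3333\n",
    "```"
   ]
  },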
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The original model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# The original model\n",
    "old_model = DRES(NeMoModel(model_path=PATH_TO_NEMO_MODEL), batch_size=1)\n",
    "retriever = EvaluateRetrieval(old_model, score_function=\"dot\") # or \"cos_sim\" for cosine similarity\n",
    "results = retriever.retrieve(corpus, queries)\n",
    "\n",
    "# Evaluate the model with NDCG@K, MAP@K, Recall@K and Precision@K, where K = [1, 3, 5, 10, 100, 1000]\n",
    "ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As you can see, evaluation shows some improvement in the results. How large the gains are depends on the domain used for synthetic data generation and on the hyperparameters. Sampling a larger amount of data for synthetic data generation may improve results further, although various hyperparameters (such as those for hard negative mining and the batch size) may need to be re-tuned in that case."
   ]
  },
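  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The NDCG@K metric reported above discounts each relevant hit by the log of its rank, then normalizes by the best achievable ordering. A toy computation in plain Python, independent of BEIR, makes the formula concrete:\n",
    "```python\n",
    "import math\n",
    "\n",
    "def dcg(relevances):\n",
    "    # DCG = sum(rel_i / log2(rank_i + 1)) over 1-based ranks\n",
    "    return sum(r / math.log2(i + 2) for i, r in enumerate(relevances))\n",
    "\n",
    "def ndcg_at_k(relevances, k):\n",
    "    ideal = sorted(relevances, reverse=True)  # best possible ordering\n",
    "    return dcg(relevances[:k]) / dcg(ideal[:k])\n",
    "\n",
    "# One relevant document retrieved at rank 2 out of the top 3\n",
    "print(round(ndcg_at_k([0, 1, 0], 3), 4))  # 0.6309\n",
    "```"
   ]
  },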
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
