{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "-eXVaWuemZD8"
   },
   "source": [
    "## Build RAG with Matryoshka Embeddings and LlamaIndex"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "b5SBrcgCjCar"
   },
   "source": [
    "Let's split the RAG pipeline into five parts:\n",
    "\n",
    "1. Loading data from a URL\n",
    "2. Chunking the documents and converting the chunks into Matryoshka embeddings of different sizes\n",
    "3. Storing these embeddings in LanceDB as the vector store\n",
    "4. Building the query engine\n",
    "5. Generating answers using the query engine"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "2vU-CCvBir1K"
   },
   "source": [
    "In this notebook, we will build a Retrieval-Augmented Generation (RAG) pipeline with Matryoshka embeddings using LlamaIndex.\n",
    "\n",
    "RAG is a technique that retrieves documents related to the user's question, combines them with an LLM prompt, and sends them to an LLM such as GPT to produce a more factually accurate response."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "uCj-5zusk_Q6"
   },
   "source": [
    "Here is an image illustrating the RAG process:\n",
    "\n",
    "![Screenshot from 2024-06-19 00-01-51.png]()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "akzjS6WxmKJ-",
    "outputId": "18a772b6-404f-4194-b7f4-0b928cf8ef2e"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m23.1/23.1 MB\u001b[0m \u001b[31m12.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m290.4/290.4 kB\u001b[0m \u001b[31m26.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m81.3/81.3 kB\u001b[0m \u001b[31m7.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m97.6/97.6 kB\u001b[0m \u001b[31m10.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m7.4/7.4 MB\u001b[0m \u001b[31m94.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m467.7/467.7 kB\u001b[0m \u001b[31m30.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m77.9/77.9 kB\u001b[0m \u001b[31m8.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m6.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m98.7/98.7 kB\u001b[0m \u001b[31m10.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m21.3/21.3 MB\u001b[0m \u001b[31m55.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m49.2/49.2 kB\u001b[0m \u001b[31m4.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25h  Building wheel for tinysegmenter (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "  Building wheel for spider-client (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "  Building wheel for feedfinder2 (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "  Building wheel for jieba3k (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
      "  Building wheel for sgmllib3k (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
     ]
    }
   ],
   "source": [
    "# install dependencies\n",
    "!pip install llama-index llama-index-embeddings-huggingface llama-index-readers-web llama-index-vector-stores-lancedb diffusers huggingface-hub -q"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "id": "V27JTgfEluaB"
   },
   "outputs": [],
   "source": [
    "# import modules\n",
    "from llama_index.core import VectorStoreIndex, Settings, StorageContext\n",
    "from llama_index.readers.web import SimpleWebPageReader\n",
    "from llama_index.vector_stores.lancedb import LanceDBVectorStore\n",
    "from llama_index.llms.openai import OpenAI\n",
    "from llama_index.embeddings.huggingface import HuggingFaceEmbedding"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "69WdUVxJoA3O"
   },
   "source": [
    "### Set OPENAI API KEY as env variable"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "id": "l5ORqt6kA5vu"
   },
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "# Set your OpenAI API key as env variable\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"sk-proj-....\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "UNuDJdQ1eYoC"
   },
   "source": [
    "### Enter source URL for knowledgebase"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "cellView": "form",
    "id": "ZSpWsbCaeBBL"
   },
   "outputs": [],
   "source": [
    "source_url = \"https://en.wikipedia.org/wiki/Deadpool_(film)\"  # @param {type:\"string\"}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "-rnU7bDBqQur"
   },
   "source": [
    "### Matryoshka Embeddings\n",
    "\n",
    "As research progressed, new state-of-the-art text **embedding models began producing embeddings with increasingly higher output dimensions**. While this enhances performance, it also reduces the efficiency of downstream tasks such as search or classification due to the larger number of values representing each input text."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "OdS1KxvOqrK9"
   },
   "source": [
    "![image.png]()\n",
    "*Image Source: HuggingFace*\n",
    "\n",
    "Matryoshka embedding models address this problem: they are trained so that even small, truncated embeddings remain useful. In short, a Matryoshka embedding model can generate effective embeddings of various dimensions."
   ]
  },
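  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The core idea can be sketched in a few lines of plain NumPy: keep only the first `k` dimensions of an embedding and re-normalize. The 768-dimensional vector below is random, purely to illustrate the truncation step; a real Matryoshka model front-loads the important information into the leading dimensions.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "\n",
    "def truncate_embedding(embedding, dim):\n",
    "    # Keep the first `dim` values, then re-normalize to unit length\n",
    "    # so cosine similarity still behaves as expected.\n",
    "    truncated = np.asarray(embedding, dtype=float)[:dim]\n",
    "    return truncated / np.linalg.norm(truncated)\n",
    "\n",
    "\n",
    "full = np.random.rand(768)\n",
    "small = truncate_embedding(full, 256)\n",
    "print(small.shape)  # (256,)\n",
    "```"
   ]
  },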
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ErJfXPyceNbT"
   },
   "source": [
    "### Select Matryoshka Embedding Model and respective embedding size"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "cellView": "form",
    "id": "mZo32k5rYW3Q"
   },
   "outputs": [],
   "source": [
    "matryoshka_embedding_model = \"tomaarsen/mpnet-base-nli-matryoshka\"  # @param [\"tomaarsen/mpnet-base-nli-matryoshka\",\"yoshinori-sano/mpnet-base-nli-matryoshka-v2\",\"neuml/pubmedbert-base-embeddings-matryoshka\"]\n",
    "\n",
    "matryoshka_embedding_size = \"256\"  # @param [768, 512, 256, 128, 64]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "hgVHOEBZ2lS5",
    "outputId": "040e1ebe-affe-46fa-c56c-bde19655180a"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.10/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.\n",
      "  warnings.warn(\n"
     ]
    }
   ],
   "source": [
    "def get_doc_from_url(url):\n",
    "    \"\"\"\n",
    "    Load documents from the given URL and return them.\n",
    "    \"\"\"\n",
    "    documents = SimpleWebPageReader(html_to_text=True).load_data([url])\n",
    "    return documents\n",
    "\n",
    "\n",
    "def build_RAG(\n",
    "    url, matryoshka_embedding_model, matryoshka_embedding_size, uri=\"~/tmp/lancedb\"\n",
    "):\n",
    "    \"\"\"\n",
    "    Configure the embedding model, LLM, and vector store, build the RAG index,\n",
    "    and return a chat engine over it.\n",
    "    \"\"\"\n",
    "\n",
    "    Settings.embed_model = HuggingFaceEmbedding(\n",
    "        model_name=matryoshka_embedding_model,\n",
    "        truncate_dim=int(matryoshka_embedding_size),\n",
    "    )\n",
    "    Settings.llm = OpenAI()  # This will now use the API key from the environment\n",
    "    documents = get_doc_from_url(url)\n",
    "    vector_store = LanceDBVectorStore(uri=uri)\n",
    "    storage_context = StorageContext.from_defaults(vector_store=vector_store)\n",
    "    index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)\n",
    "    # as_chat_engine returns a conversational engine; queries go through .chat()\n",
    "    query_engine = index.as_chat_engine()\n",
    "\n",
    "    return query_engine\n",
    "\n",
    "\n",
    "query_engine = build_RAG(\n",
    "    source_url, matryoshka_embedding_model, matryoshka_embedding_size\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "4Ofe-DVi-Sq9",
    "outputId": "99582338-1cf7-446d-8899-0959c8e8644a"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Ask a question relevant to the given context:\n",
      "Query: Who is writer of Deadpool and what is this idea behind deadpool character\n",
      "Response: The writer of Deadpool is Rhett Reese. \n",
      "\n",
      "The idea behind the Deadpool character was to create a unique and irreverent superhero who breaks the fourth wall, known for his sarcastic humor and self-awareness. The character was designed to be different from traditional superheroes, often making fun of superhero tropes and popular culture references. If you have any more questions or need further information, feel free to ask!\n",
      "Query: How character of Deadpool is different from other Marvel superheros\n",
      "Response: Deadpool is different from other Marvel superheroes in several ways. He is known for his unique characteristics such as sarcastic humor, breaking the fourth wall, and making pop culture references. Deadpool is portrayed as an antihero, engaging in morally ambiguous actions and having an unconventional approach to crime-fighting. If you have any more questions or need further information, feel free to ask!\n",
      "Query: exit\n",
      "Response: If you have any more questions in the future, feel free to ask. Have a great day! Goodbye!\n"
     ]
    }
   ],
   "source": [
    "print(\"Ask a question relevant to the given context:\")\n",
    "while True:\n",
    "    # Type \"exit\" to stop; the engine still answers that final query before the loop ends.\n",
    "    query = input(\"Query: \")\n",
    "    response = query_engine.chat(query)\n",
    "    print(\"Response:\", response)\n",
    "    if query == \"exit\":\n",
    "        break"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "P33Vtz8Zr0lP"
   },
   "source": [
    "For comparison, a Matryoshka model and a standard embedding model were both trained on the AllNLI dataset, a combination of the SNLI and MultiNLI datasets, and then evaluated on the STSBenchmark test set across multiple embedding dimensions. The results are illustrated in the following figure:\n",
    "\n",
    "![image.png]()\n",
    "\n",
    "**Results:**\n",
    "\n",
    "1. **Top Figure:** The Matryoshka model consistently achieves a higher Spearman similarity than the standard model across all dimensions, indicating its superiority in this task.\n",
    "\n",
    "2. **Second Figure:** The Matryoshka model's performance declines much less rapidly than the standard model's. Even at just 8.3% of the full embedding size, the Matryoshka model retains 98.37% of its performance, compared to 96.46% for the standard model.\n",
    "\n",
    "These findings suggest that truncating embeddings with a Matryoshka model can significantly:\n",
    "1. Speed up downstream tasks such as retrieval.\n",
    "2. Save on storage space.\n",
    "\n",
    "All of this is achieved without a notable performance loss."
   ]
  }
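,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a side note, Spearman similarity compares the *ranking* of predicted similarities against the gold labels rather than their raw values. A minimal sketch with SciPy, using made-up score lists purely for illustration:\n",
    "\n",
    "```python\n",
    "from scipy.stats import spearmanr\n",
    "\n",
    "# Hypothetical gold similarity labels and model cosine similarities\n",
    "gold_scores = [4.5, 3.8, 2.0, 0.5]\n",
    "cosine_sims = [0.90, 0.70, 0.40, 0.10]\n",
    "\n",
    "rho, _ = spearmanr(gold_scores, cosine_sims)\n",
    "print(rho)  # identical rankings give a perfect correlation of 1.0\n",
    "```"
   ]
  }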
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
