{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "a32a6ee5",
   "metadata": {},
   "source": [
    "# HuggingFace Embeddings\n",
    "\n",
    "- Author: [liniar](https://github.com/namyoungkim)\n",
    "- Design: [liniar](https://github.com/namyoungkim)\n",
    "- Peer Review : [byoon](https://github.com/acho98), [Sun Hyoung Lee](https://github.com/LEE1026icarus)\n",
    "- Proofread : [Youngjun cho](https://github.com/choincnp)\n",
    "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n",
    "\n",
    "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/08-Embedding/03-HuggingFaceEmbeddings.ipynb)[![Open in GitHub](https://img.shields.io/badge/Open%20in%20GitHub-181717?style=flat-square&logo=github&logoColor=white)](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/08-Embedding/03-HuggingFaceEmbeddings.ipynb)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c5799378",
   "metadata": {},
   "source": [
    "## Overview  \n",
    "- ```Hugging Face``` offers a wide range of **embedding models** for free, enabling various embedding tasks with ease.\n",
    "- In this tutorial, we’ll use ```langchain_huggingface``` to build a **simple text embedding-based search system.** \n",
    "- The following models will be used for **Text Embedding**  \n",
    "\n",
    "    - 1️⃣ **multilingual-e5-large-instruct:** A multilingual instruction-based embedding model.  \n",
    "    - 2️⃣ **multilingual-e5-large:** A powerful multilingual embedding model.  \n",
    "    - 3️⃣ **bge-m3:** Optimized for large-scale text processing.  \n",
    "\n",
    "![](./assets/03-huggingfaceembeddings-workflow.png)  "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3d94e164",
   "metadata": {},
   "source": [
    "### Table of Contents  \n",
    "\n",
    "- [Overview](#overview)\n",
    "- [Environment Setup](#environment-setup)\n",
    "- [Data Preparation for Embedding-Based Search Tutorial](#data-preparation-for-embedding-based-search-tutorial)\n",
    "- [Which Text Embedding Model Should You Use?](#which-text-embedding-model-should-you-use) \n",
    "- [Similarity Calculation](#similarity-calculation)\n",
    "- [HuggingFaceEndpointEmbeddings Overview](#huggingfaceendpointembeddings-overview)\n",
    "- [HuggingFaceEmbeddings Overview](#huggingfaceembeddings-overview)\n",
    "- [FlagEmbedding Usage Guide](#flagembedding-usage-guide)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f60b3831",
   "metadata": {},
   "source": [
    "### References\n",
    "- [LangChain: Embedding Models](https://python.langchain.com/docs/concepts/embedding_models)\n",
    "- [LangChain: Text Embedding](https://python.langchain.com/docs/integrations/text_embedding)\n",
    "- [HuggingFace MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard)\n",
    "- [MTEB GitHub](https://github.com/embeddings-benchmark/mteb)\n",
    "- [Hugging Face Model Hub](https://huggingface.co/models)\n",
    "- [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct)\n",
    "- [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large)\n",
    "- [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3)\n",
    "- [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/README.md)\n",
    "----"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1e7a3526",
   "metadata": {},
   "source": [
    "## Environment Setup  \n",
    "\n",
    "Set up the environment. You may refer to [Environment Setup](https://wikidocs.net/257836) for more details.  \n",
    "\n",
    "**[Note]**  \n",
    "- ```langchain-opentutorial``` is a package that provides a set of **easy-to-use environment setup,** **useful functions,** and **utilities for tutorials.**  \n",
    "- You can check out the [```langchain-opentutorial```](https://github.com/LangChain-OpenTutorial/langchain-opentutorial-pypi) for more details.  \n",
    "\n",
    "---\n",
    "\n",
    "### 🛠️ **The following configurations will be set up**  \n",
    "\n",
    "- **Jupyter Notebook Output Settings**\n",
    "    - Display standard error ( ```stderr``` ) messages directly instead of capturing them.  \n",
    "- **Install Required Packages** \n",
    "    - Ensure all necessary dependencies are installed.  \n",
    "- **API Key Setup** \n",
    "    - Configure the API key for authentication.  \n",
    "- **PyTorch Device Selection Setup** \n",
    "    - Automatically select the optimal computing device (CPU, CUDA, or MPS).\n",
    "        - ```{\"device\": \"mps\"}``` : Perform embedding calculations on the **Apple Silicon GPU** via MPS. (For macOS users)\n",
    "        - ```{\"device\": \"cuda\"}``` : Perform embedding calculations on an **NVIDIA GPU.** (For Linux and Windows users; requires a CUDA installation)\n",
    "        - ```{\"device\": \"cpu\"}``` : Perform embedding calculations on the **CPU.** (Available to all users)\n",
    "- **Embedding Model Local Storage Path** \n",
    "    - Define a local path for storing embedding models.  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "21943adb",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture --no-stderr\n",
    "%pip install langchain-opentutorial"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "f25ec196",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Install required packages\n",
    "from langchain_opentutorial import package\n",
    "\n",
    "package.install(\n",
    "    [\n",
    "        \"langsmith\",\n",
    "        \"langchain_huggingface\",\n",
    "        \"torch\",\n",
    "        \"numpy\",\n",
    "        \"scikit-learn\",\n",
    "    ],\n",
    "    verbose=False,\n",
    "    upgrade=False,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "7f9065ea",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Environment variables have been set successfully.\n"
     ]
    }
   ],
   "source": [
    "# Set environment variables\n",
    "from langchain_opentutorial import set_env\n",
    "\n",
    "set_env(\n",
    "    {\n",
    "        \"OPENAI_API_KEY\": \"\",\n",
    "        \"LANGCHAIN_API_KEY\": \"\",\n",
    "        \"LANGCHAIN_TRACING_V2\": \"true\",\n",
    "        \"LANGCHAIN_ENDPOINT\": \"https://api.smith.langchain.com\",\n",
    "        \"LANGCHAIN_PROJECT\": \"HuggingFace Embeddings\",  # Please set it the same as the title\n",
    "        \"HUGGINGFACEHUB_API_TOKEN\": \"\",\n",
    "    }\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "690a9ae0",
   "metadata": {},
   "source": [
    "Alternatively, you can set the required API keys (e.g., ```HUGGINGFACEHUB_API_TOKEN``` ) in a ```.env``` file and load them.\n",
    "\n",
    "**[Note]** \n",
    "- This is not necessary if you've already set the keys in the previous step."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "4f99b5b6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from dotenv import load_dotenv\n",
    "\n",
    "load_dotenv(override=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "71b0e4a1",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ Using MPS (Metal Performance Shaders) on macOS\n",
      "🖥️ Current device in use: mps\n"
     ]
    }
   ],
   "source": [
    "# Automatically select the appropriate device\n",
    "import torch\n",
    "import platform\n",
    "\n",
    "\n",
    "def get_device():\n",
    "    if platform.system() == \"Darwin\":  # macOS specific\n",
    "        if hasattr(torch.backends, \"mps\") and torch.backends.mps.is_available():\n",
    "            print(\"✅ Using MPS (Metal Performance Shaders) on macOS\")\n",
    "            return \"mps\"\n",
    "    if torch.cuda.is_available():\n",
    "        print(\"✅ Using CUDA (NVIDIA GPU)\")\n",
    "        return \"cuda\"\n",
    "    else:\n",
    "        print(\"✅ Using CPU\")\n",
    "        return \"cpu\"\n",
    "\n",
    "\n",
    "# Set the device\n",
    "device = get_device()\n",
    "print(\"🖥️ Current device in use:\", device)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "647c0c07",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Embedding Model Local Storage Path\n",
    "import os\n",
    "import warnings\n",
    "\n",
    "# Ignore warnings\n",
    "warnings.filterwarnings(\"ignore\")\n",
    "\n",
    "# Set the download path to ./cache/\n",
    "os.environ[\"HF_HOME\"] = \"./cache/\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7285cf35",
   "metadata": {},
   "source": [
    "## Data Preparation for Embedding-Based Search Tutorial\n",
    "\n",
    "To perform **embedding-based search,** we prepare both a **Query** and **Documents.**  \n",
    "\n",
    "1. Query  \n",
    "- Write a **key question** that will serve as the basis for the search.  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "2aeae907",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Query\n",
    "q = \"Please tell me more about LangChain.\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ae9d0180",
   "metadata": {},
   "source": [
    "2. Documents  \n",
    "- Prepare **multiple documents (texts)** that will serve as the target for the search.  \n",
    "- Each document will be **embedded** to enable semantic search capabilities.  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "0588d470",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Documents for Text Embedding\n",
    "docs = [\n",
    "    \"Hi, nice to meet you.\",\n",
    "    \"LangChain simplifies the process of building applications with large language models.\",\n",
    "    \"The LangChain English tutorial is structured based on LangChain's official documentation, cookbook, and various practical examples to help users utilize LangChain more easily and effectively.\",\n",
    "    \"LangChain simplifies the process of building applications with large-scale language models.\",\n",
    "    \"Retrieval-Augmented Generation (RAG) is an effective technique for improving AI responses.\",\n",
    "]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "82008642",
   "metadata": {},
   "source": [
    "## Which Text Embedding Model Should You Use?\n",
    "- Leverage the **MTEB leaderboard** and **free embedding models** to confidently select and utilize the **best-performing text embedding models** for your projects! 🚀  \n",
    "\n",
    "---\n",
    "\n",
    "### 🚀 **What is MTEB (Massive Text Embedding Benchmark)?**  \n",
    "- **MTEB** is a benchmark designed to **systematically and objectively evaluate** the performance of text embedding models.  \n",
    "    - **Purpose:** To **fairly compare** the performance of embedding models.  \n",
    "    - **Evaluation Tasks:** Includes tasks like **Classification,**  **Retrieval,**  **Clustering,**  and **Semantic Similarity.**  \n",
    "    - **Supported Models:** A wide range of **text embedding models available on Hugging Face.**  \n",
    "    - **Results:** Displayed as **scores,**  with top-performing models ranked on the **leaderboard.**  \n",
    "\n",
    "🔗 [ **MTEB Leaderboard (Hugging Face)** ](https://huggingface.co/spaces/mteb/leaderboard)  \n",
    "\n",
    "---\n",
    "\n",
    "### 🛠️ **Models Used in This Tutorial**  \n",
    "\n",
    "| **Embedding Model** | **Description** |\n",
    "|----------|----------|\n",
    "| 1️⃣ **multilingual-e5-large-instruct** | Offers strong multilingual support with consistent results. |\n",
    "| 2️⃣ **multilingual-e5-large** | A powerful multilingual embedding model. |\n",
    "| 3️⃣ **bge-m3** | Optimized for large-scale text processing, excelling in retrieval and semantic similarity tasks. |\n",
    "\n",
    "1️⃣ **multilingual-e5-large-instruct**\n",
    "![](./assets/03-huggingfaceembeddings-leaderboard-01.png)\n",
    "\n",
    "2️⃣ **multilingual-e5-large**\n",
    "![](./assets/03-huggingfaceembeddings-leaderboard-02.png)\n",
    "\n",
    "3️⃣ **bge-m3**\n",
    "![](./assets/03-huggingfaceembeddings-leaderboard-03.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "887b63bc",
   "metadata": {},
   "source": [
    "## Similarity Calculation\n",
    "\n",
    "**Similarity Calculation Using Vector Dot Product**  \n",
    "- Similarity is determined using the **dot product** of vectors. (When the embeddings are L2-normalized, the dot product is identical to **cosine similarity.** )  \n",
    "\n",
    "- **Similarity Calculation Formula:**  \n",
    "\n",
    "$$ \\text{similarities} = \\mathbf{query} \\cdot \\mathbf{documents}^T $$  \n",
    "\n",
    "---\n",
    "\n",
    "### 📐 **Mathematical Significance of the Vector Dot Product**  \n",
    "\n",
    "**Definition of Vector Dot Product**  \n",
    "\n",
    "The **dot product** of two vectors, $\\mathbf{a}$ and $\\mathbf{b}$, is mathematically defined as:  \n",
    "\n",
    "$$ \\mathbf{a} \\cdot \\mathbf{b} = \\sum_{i=1}^{n} a_i b_i $$  \n",
    "\n",
    "---\n",
    "\n",
    "**Relationship with Cosine Similarity**  \n",
    "\n",
    "The **dot product** also relates to **cosine similarity** and follows this property:  \n",
    "\n",
    "$$ \\mathbf{a} \\cdot \\mathbf{b} = \\|\\mathbf{a}\\| \\|\\mathbf{b}\\| \\cos \\theta $$  \n",
    "\n",
    "Where:  \n",
    "- $\\|\\mathbf{a}\\|$ and $\\|\\mathbf{b}\\|$ represent the **magnitudes** (**norms,**  specifically Euclidean norms) of vectors $\\mathbf{a}$ and $\\mathbf{b}$.  \n",
    "- $\\theta$ is the **angle between the two vectors.**  \n",
    "- $\\cos \\theta$ represents the **cosine similarity** between the two vectors.  \n",
    "\n",
    "---\n",
    "\n",
    "**🔍 Interpretation of Vector Dot Product in Similarity**  \n",
    "\n",
    "A **large positive dot product** arises when:  \n",
    "- the **magnitudes** ($\\|\\mathbf{a}\\|$ and $\\|\\mathbf{b}\\|$) of the two vectors are large, and/or  \n",
    "- the **angle** ($\\theta$) between the two vectors is small ( **$\\cos \\theta$ approaches 1** ).  \n",
    "\n",
    "For vectors of comparable magnitude, a larger dot product therefore indicates that the two vectors point in a **similar direction** and are **more semantically similar.**  \n",
    "\n",
    "---\n",
    "\n",
    "### 📏 **Calculation of Vector Magnitude (Norm)**  \n",
    "\n",
    "**Definition of Euclidean Norm**  \n",
    "\n",
    "For a vector $\\mathbf{a} = [a_1, a_2, \\ldots, a_n]$, the **Euclidean norm** $\\|\\mathbf{a}\\|$ is calculated as:  \n",
    "\n",
    "$$ \\|\\mathbf{a}\\| = \\sqrt{a_1^2 + a_2^2 + \\cdots + a_n^2} $$  \n",
    "\n",
    "This **magnitude** represents the **length** or **size** of the vector in multi-dimensional space.  \n",
    "\n",
    "---\n",
    "\n",
    "Understanding these mathematical foundations helps ensure precise similarity calculations, enabling better performance in tasks like **semantic search,**  **retrieval systems,**  and **recommendation engines.**  🚀"
   ]
  },
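  {
   "cell_type": "markdown",
   "id": "4f2a9c1e",
   "metadata": {},
   "source": [
    "As a quick sanity check of the identities above, the following sketch verifies with ```numpy``` that the dot product of two small example vectors equals $\\|\\mathbf{a}\\| \\|\\mathbf{b}\\| \\cos \\theta$. The vectors are hypothetical, chosen purely for illustration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7b8d2e5f",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Two small example vectors (hypothetical, for illustration only)\n",
    "a = np.array([1.0, 2.0, 3.0])\n",
    "b = np.array([2.0, 0.0, 1.0])\n",
    "\n",
    "# Dot product computed directly\n",
    "dot = a @ b\n",
    "\n",
    "# Dot product reconstructed from the norms and the cosine of the angle\n",
    "cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))\n",
    "reconstructed = np.linalg.norm(a) * np.linalg.norm(b) * cos_theta\n",
    "\n",
    "print(dot, reconstructed)  # both equal 5.0"
   ]
  },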
  {
   "cell_type": "markdown",
   "id": "bf5d3874",
   "metadata": {},
   "source": [
    "----\n",
    "### Similarity calculation between ```embedded_query``` and ```embedded_documents``` \n",
    "- ```embed_documents``` : For embedding multiple texts (documents)\n",
    "- ```embed_query``` : For embedding a single text (query)\n",
    "\n",
    "We've implemented a method to search for the most relevant documents using **text embeddings.** \n",
    "- Let's use ```search_similar_documents(q, docs, hf_embeddings)``` to find the most relevant documents."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "f1e1a612",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "\n",
    "def search_similar_documents(q, docs, hf_embeddings):\n",
    "    \"\"\"\n",
    "    Search for the most relevant documents based on a query using text embeddings.\n",
    "\n",
    "    Args:\n",
    "        q (str): The query string for which relevant documents are to be found.\n",
    "        docs (list of str): A list of document strings to compare against the query.\n",
    "        hf_embeddings: An embedding model object with `embed_query` and `embed_documents` methods.\n",
    "\n",
    "    Returns:\n",
    "        tuple:\n",
    "            - embedded_query (numpy.ndarray): The embedding vector of the query.\n",
    "            - embedded_documents (numpy.ndarray): The embedding matrix of the documents.\n",
    "\n",
    "    Workflow:\n",
    "        1. Embed the query string into a numerical vector using `embed_query`.\n",
    "        2. Embed each document into numerical vectors using `embed_documents`.\n",
    "        3. Calculate similarity scores between the query and documents using the dot product.\n",
    "        4. Sort the documents based on their similarity scores in descending order.\n",
    "        5. Print the query and display the sorted documents by their relevance.\n",
    "        6. Return the query and document embeddings for further analysis if needed.\n",
    "    \"\"\"\n",
    "    # Embed the query and documents using the embedding model\n",
    "    embedded_query = hf_embeddings.embed_query(q)\n",
    "    embedded_documents = hf_embeddings.embed_documents(docs)\n",
    "\n",
    "    # Calculate similarity scores using dot product\n",
    "    similarity_scores = np.array(embedded_query) @ np.array(embedded_documents).T\n",
    "\n",
    "    # Sort documents by similarity scores in descending order\n",
    "    sorted_idx = similarity_scores.argsort()[::-1]\n",
    "\n",
    "    # Display the results\n",
    "    print(f\"[Query] {q}\\n\" + \"=\" * 40)\n",
    "    for i, idx in enumerate(sorted_idx):\n",
    "        print(f\"[{i}] {docs[idx]}\")\n",
    "        print()\n",
    "\n",
    "    # Return embeddings for potential further processing or analysis\n",
    "    return embedded_query, embedded_documents"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7a965e60",
   "metadata": {},
   "source": [
    "## HuggingFaceEndpointEmbeddings Overview\n",
    "\n",
    "**HuggingFaceEndpointEmbeddings** is a feature in the **LangChain** library that leverages **Hugging Face’s Inference API endpoint** to generate text embeddings seamlessly.\n",
    "\n",
    "---\n",
    "\n",
    "### 📚 **Key Concepts**\n",
    "\n",
    "1. **Hugging Face Inference API**  \n",
    "   - Access pre-trained embedding models via Hugging Face’s API.  \n",
    "   - No need to download models locally; embeddings are generated directly through the API.  \n",
    "\n",
    "2. **LangChain Integration**  \n",
    "   - Easily integrate embedding results into LangChain workflows using its standardized interface.  \n",
    "\n",
    "3. **Use Cases**  \n",
    "   - Text-query and document similarity calculation  \n",
    "   - Search and recommendation systems  \n",
    "   - Natural Language Understanding (NLU) applications  \n",
    "\n",
    "---\n",
    "\n",
    "### ⚙️ **Key Parameters**\n",
    "\n",
    "- ```model``` : The Hugging Face model ID (e.g., ```BAAI/bge-m3``` )  \n",
    "- ```task``` : The task to perform (usually ```\"feature-extraction\"``` )  \n",
    "- ```huggingfacehub_api_token``` : Your Hugging Face API token  \n",
    "- ```model_kwargs``` : Additional model configuration parameters  \n",
    "\n",
    "---\n",
    "\n",
    "### 💡 **Advantages**  \n",
    "- **No Local Model Download:** Instant access via API.  \n",
    "- **Scalability:** Supports a wide range of pre-trained Hugging Face models.  \n",
    "- **Seamless Integration:** Effortlessly integrates embeddings into LangChain workflows.  \n",
    "\n",
    "---\n",
    "\n",
    "### ⚠️ **Caveats**  \n",
    "- **API Support:** Not all models support API inference.  \n",
    "- **Speed & Cost:** Free APIs may have slower response times and usage limitations.  \n",
    "\n",
    "---\n",
    "\n",
    "With **HuggingFaceEndpointEmbeddings,**  you can easily integrate Hugging Face’s powerful embedding models into your **LangChain workflows** for efficient and scalable NLP solutions. 🚀"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bd6c1ed1",
   "metadata": {},
   "source": [
    "---\n",
    "Let’s use the ```intfloat/multilingual-e5-large-instruct``` model via the API to search for the most relevant documents using text embeddings.\n",
    "\n",
    "- [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "ebeeab2c",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_huggingface.embeddings import HuggingFaceEndpointEmbeddings\n",
    "\n",
    "model_name = \"intfloat/multilingual-e5-large-instruct\"\n",
    "\n",
    "hf_endpoint_embeddings = HuggingFaceEndpointEmbeddings(\n",
    "    model=model_name,\n",
    "    task=\"feature-extraction\",\n",
    "    huggingfacehub_api_token=os.environ[\"HUGGINGFACEHUB_API_TOKEN\"],\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f2f0d4ff",
   "metadata": {},
   "source": [
    "Search for the most relevant documents based on a query using text embeddings."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "f6910f88",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 7.18 ms, sys: 2.32 ms, total: 9.5 ms\n",
      "Wall time: 1.21 s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "# Embed the query and documents using the embedding model\n",
    "embedded_query = hf_endpoint_embeddings.embed_query(q)\n",
    "embedded_documents = hf_endpoint_embeddings.embed_documents(docs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "19ed03cf",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Calculate similarity scores using dot product\n",
    "similarity_scores = np.array(embedded_query) @ np.array(embedded_documents).T\n",
    "\n",
    "# Sort documents by similarity scores in descending order\n",
    "sorted_idx = similarity_scores.argsort()[::-1]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "2e288dbd",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Query] Please tell me more about LangChain.\n",
      "========================================\n",
      "[0] LangChain simplifies the process of building applications with large language models.\n",
      "\n",
      "[1] LangChain simplifies the process of building applications with large-scale language models.\n",
      "\n",
      "[2] The LangChain English tutorial is structured based on LangChain's official documentation, cookbook, and various practical examples to help users utilize LangChain more easily and effectively.\n",
      "\n",
      "[3] Retrieval-Augmented Generation (RAG) is an effective technique for improving AI responses.\n",
      "\n",
      "[4] Hi, nice to meet you.\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# Display the results\n",
    "print(f\"[Query] {q}\\n\" + \"=\" * 40)\n",
    "for i, idx in enumerate(sorted_idx):\n",
    "    print(f\"[{i}] {docs[idx]}\")\n",
    "    print()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "665a8443",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[HuggingFace Endpoint Embedding]\n",
      "Model: \t\tintfloat/multilingual-e5-large-instruct\n",
      "Document Dimension: \t1024\n",
      "Query Dimension: \t1024\n"
     ]
    }
   ],
   "source": [
    "print(\"[HuggingFace Endpoint Embedding]\")\n",
    "print(f\"Model: \\t\\t{model_name}\")\n",
    "print(f\"Document Dimension: \\t{len(embedded_documents[0])}\")\n",
    "print(f\"Query Dimension: \\t{len(embedded_query)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3e999ad0",
   "metadata": {},
   "source": [
    "We can verify that the dimensions of ```embedded_documents``` and ```embedded_query``` are consistent.  "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1893bdf5",
   "metadata": {},
   "source": [
    "You can also perform searches using the ```search_similar_documents``` method we implemented earlier.  \n",
    "From now on, let's use this method for our searches.  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "1cf3767e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Query] Please tell me more about LangChain.\n",
      "========================================\n",
      "[0] LangChain simplifies the process of building applications with large language models.\n",
      "\n",
      "[1] LangChain simplifies the process of building applications with large-scale language models.\n",
      "\n",
      "[2] The LangChain English tutorial is structured based on LangChain's official documentation, cookbook, and various practical examples to help users utilize LangChain more easily and effectively.\n",
      "\n",
      "[3] Retrieval-Augmented Generation (RAG) is an effective technique for improving AI responses.\n",
      "\n",
      "[4] Hi, nice to meet you.\n",
      "\n",
      "CPU times: user 7.25 ms, sys: 3.26 ms, total: 10.5 ms\n",
      "Wall time: 418 ms\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "embedded_query, embedded_documents = search_similar_documents(q, docs, hf_endpoint_embeddings)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e220112a",
   "metadata": {},
   "source": [
    "## HuggingFaceEmbeddings Overview\n",
    "\n",
    "- **HuggingFaceEmbeddings** is a feature in the **LangChain** library that enables the conversion of text data into vectors using **Hugging Face embedding models.** \n",
    "- This class downloads and operates Hugging Face models **locally** for efficient processing.\n",
    "\n",
    "---\n",
    "\n",
    "### 📚 **Key Concepts**\n",
    "\n",
    "1. **Hugging Face Pre-trained Models**  \n",
    "   - Leverages pre-trained embedding models provided by Hugging Face.  \n",
    "   - Downloads models locally for direct embedding operations.  \n",
    "\n",
    "2. **LangChain Integration**  \n",
    "   - Seamlessly integrates with LangChain workflows using its standardized interface.  \n",
    "\n",
    "3. **Use Cases**  \n",
    "   - Text-query and document similarity calculation  \n",
    "   - Search and recommendation systems  \n",
    "   - Natural Language Understanding (NLU) applications  \n",
    "\n",
    "---\n",
    "\n",
    "### ⚙️ **Key Parameters**\n",
    "\n",
    "- ```model_name``` : The Hugging Face model ID (e.g., ```sentence-transformers/all-MiniLM-L6-v2``` )\n",
    "- ```model_kwargs``` : Additional model configuration parameters (e.g., GPU/CPU device settings)\n",
    "- ```encode_kwargs``` : Extra settings for embedding generation\n",
    "\n",
    "---\n",
    "\n",
    "### 💡 **Advantages**  \n",
    "- **Local Embedding Operations:** Perform embeddings locally without requiring an internet connection.  \n",
    "- **High Performance:** Utilize GPU settings for faster embedding generation.  \n",
    "- **Model Variety:** Supports a wide range of Hugging Face models.  \n",
    "\n",
    "---\n",
    "\n",
    "### ⚠️ **Caveats**  \n",
    "- **Local Storage Requirement:** Pre-trained models must be downloaded locally.  \n",
    "- **Environment Configuration:** Performance may vary depending on GPU/CPU device settings.  \n",
    "\n",
    "---\n",
    "\n",
    "With **HuggingFaceEmbeddings,** you can efficiently leverage **Hugging Face's powerful embedding models** in a **local environment,** enabling flexible and scalable NLP solutions. 🚀"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9f0bbecb",
   "metadata": {},
   "source": [
    "---\n",
    "Let's download the embedding model locally, perform embeddings, and search for the most relevant documents.\n",
    "\n",
    "```intfloat/multilingual-e5-large-instruct``` \n",
    "\n",
    "- [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "33c80d76",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "120f0abc878740eab866b3c6ac6174de",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "modules.json:   0%|          | 0.00/349 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "576bde51a7e5429081fe8ca25be485a1",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config_sentence_transformers.json:   0%|          | 0.00/128 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "b65bb288a10d4b6ca570b295df2e9489",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "README.md:   0%|          | 0.00/140k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "241ac0b490ad45ed82ab5166a03f0e42",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config.json:   0%|          | 0.00/690 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "e65e9daa06854e619322a38b4d05a141",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model.safetensors:   0%|          | 0.00/1.12G [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "f9274e892df24ad1999b5f5ccfd37111",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer_config.json:   0%|          | 0.00/1.18k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "f5a6bdf6628d48eea1830e2d0a47f2f0",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "sentencepiece.bpe.model:   0%|          | 0.00/5.07M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "07be2d5361824e2a84470111c25733fb",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer.json:   0%|          | 0.00/17.1M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "393ae9475f554abeabd0387e7ede0141",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "special_tokens_map.json:   0%|          | 0.00/964 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "9f12edada76440809205236b96557ee7",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "1_Pooling/config.json:   0%|          | 0.00/271 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from langchain_huggingface.embeddings import HuggingFaceEmbeddings\n",
    "\n",
    "model_name = \"intfloat/multilingual-e5-large-instruct\"\n",
    "\n",
    "hf_embeddings_e5_instruct = HuggingFaceEmbeddings(\n",
    "    model_name=model_name,\n",
    "    model_kwargs={\"device\": device},  # mps, cuda, cpu\n",
    "    encode_kwargs={\"normalize_embeddings\": True},\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "52ea2e5a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Query] Please tell me more about LangChain.\n",
      "========================================\n",
      "[0] LangChain simplifies the process of building applications with large language models.\n",
      "\n",
      "[1] LangChain simplifies the process of building applications with large-scale language models.\n",
      "\n",
      "[2] The LangChain English tutorial is structured based on LangChain's official documentation, cookbook, and various practical examples to help users utilize LangChain more easily and effectively.\n",
      "\n",
      "[3] Retrieval-Augmented Generation (RAG) is an effective technique for improving AI responses.\n",
      "\n",
      "[4] Hi, nice to meet you.\n",
      "\n",
      "CPU times: user 326 ms, sys: 120 ms, total: 446 ms\n",
      "Wall time: 547 ms\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "embedded_query, embedded_documents = search_similar_documents(q, docs, hf_embeddings_e5_instruct)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "fa283de3",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \t\tintfloat/multilingual-e5-large-instruct\n",
      "Document Dimension: \t1024\n",
      "Query Dimension: \t1024\n"
     ]
    }
   ],
   "source": [
    "print(f\"Model: \\t\\t{model_name}\")\n",
    "print(f\"Document Dimension: \\t{len(embedded_documents[0])}\")\n",
    "print(f\"Query Dimension: \\t{len(embedded_query)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "51605b3c",
   "metadata": {},
   "source": [
    "---\n",
    "```intfloat/multilingual-e5-large``` \n",
    "\n",
    "- [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "c85d5b92",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "9bf8485ade564180aa1fd64c933355e1",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "modules.json:   0%|          | 0.00/387 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "44b9a9438711423cb9f685e4d52d53bc",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "README.md:   0%|          | 0.00/160k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "07eb4d424fa547f48dab5bdb64b7d25a",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "sentence_bert_config.json:   0%|          | 0.00/57.0 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "2789d046962c4a72bab78ceaa0ab7f23",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config.json:   0%|          | 0.00/690 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "b1b98e59b3db4b088163c9339033f3b3",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model.safetensors:   0%|          | 0.00/2.24G [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "2865de5038644575874d6d869d0793e3",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer_config.json:   0%|          | 0.00/418 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "5518d1906144480cbd8cbabec3c058de",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "sentencepiece.bpe.model:   0%|          | 0.00/5.07M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "3c11fbd3dd3e43deadb605814d4b9437",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer.json:   0%|          | 0.00/17.1M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "d39d895603ef40d78c42e4552867f7ad",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "special_tokens_map.json:   0%|          | 0.00/280 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "e3b5c216804c49758a4d079d1ef5273c",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "1_Pooling/config.json:   0%|          | 0.00/201 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from langchain_huggingface.embeddings import HuggingFaceEmbeddings\n",
    "\n",
    "model_name = \"intfloat/multilingual-e5-large\"\n",
    "\n",
    "hf_embeddings_e5_large = HuggingFaceEmbeddings(\n",
    "    model_name=model_name,\n",
    "    model_kwargs={\"device\": device},  # mps, cuda, cpu\n",
    "    encode_kwargs={\"normalize_embeddings\": True},\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "42fb702b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Query] Please tell me more about LangChain.\n",
      "========================================\n",
      "[0] LangChain simplifies the process of building applications with large-scale language models.\n",
      "\n",
      "[1] LangChain simplifies the process of building applications with large language models.\n",
      "\n",
      "[2] The LangChain English tutorial is structured based on LangChain's official documentation, cookbook, and various practical examples to help users utilize LangChain more easily and effectively.\n",
      "\n",
      "[3] Retrieval-Augmented Generation (RAG) is an effective technique for improving AI responses.\n",
      "\n",
      "[4] Hi, nice to meet you.\n",
      "\n",
      "CPU times: user 84.1 ms, sys: 511 ms, total: 595 ms\n",
      "Wall time: 827 ms\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "embedded_query, embedded_documents = search_similar_documents(q, docs, hf_embeddings_e5_large)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "bb62912b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \t\tintfloat/multilingual-e5-large\n",
      "Document Dimension: \t1024\n",
      "Query Dimension: \t1024\n"
     ]
    }
   ],
   "source": [
    "print(f\"Model: \\t\\t{model_name}\")\n",
    "print(f\"Document Dimension: \\t{len(embedded_documents[0])}\")\n",
    "print(f\"Query Dimension: \\t{len(embedded_query)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "69abf61b",
   "metadata": {},
   "source": [
    "---\n",
    "```BAAI/bge-m3``` \n",
    "\n",
    "- [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "ac698306",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "220641d0c0a14704b59fb89a02990f32",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "modules.json:   0%|          | 0.00/349 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "88cfaeafa8b5442bb3b207942f0605de",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config_sentence_transformers.json:   0%|          | 0.00/123 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "a9ca58803e1c4d8181bc7104b688bfd7",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "README.md:   0%|          | 0.00/15.8k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "99e2b4fdd4434ea2bd3ab97b8bc9e5a0",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "sentence_bert_config.json:   0%|          | 0.00/54.0 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "cf40cb0cb8cd4f8d8e61aa9e1bbc53a1",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config.json:   0%|          | 0.00/687 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "629a48e3132846c0839b06d6bbb3c69e",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "pytorch_model.bin:   0%|          | 0.00/2.27G [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "f427f87441bf40eb99ebf3516b812f3f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer_config.json:   0%|          | 0.00/444 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "9a70a867f5224a8e83ac08ae074c54e3",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "sentencepiece.bpe.model:   0%|          | 0.00/5.07M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "10313c0b57304bb79ff97b04234d73f1",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer.json:   0%|          | 0.00/17.1M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "9432edda58a84a5aa21e7e3dedc59bc6",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "special_tokens_map.json:   0%|          | 0.00/964 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "2007c23ff89542279a8b322f505190a9",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "1_Pooling/config.json:   0%|          | 0.00/191 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from langchain_huggingface import HuggingFaceEmbeddings\n",
    "\n",
    "model_name = \"BAAI/bge-m3\"\n",
    "model_kwargs = {\"device\": device}  # mps, cuda, cpu\n",
    "encode_kwargs = {\"normalize_embeddings\": True}\n",
    "\n",
    "hf_embeddings_bge_m3 = HuggingFaceEmbeddings(\n",
    "    model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "53a49ee5",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Query] Please tell me more about LangChain.\n",
      "========================================\n",
      "[0] LangChain simplifies the process of building applications with large language models.\n",
      "\n",
      "[1] LangChain simplifies the process of building applications with large-scale language models.\n",
      "\n",
      "[2] The LangChain English tutorial is structured based on LangChain's official documentation, cookbook, and various practical examples to help users utilize LangChain more easily and effectively.\n",
      "\n",
      "[3] Hi, nice to meet you.\n",
      "\n",
      "[4] Retrieval-Augmented Generation (RAG) is an effective technique for improving AI responses.\n",
      "\n",
      "CPU times: user 81.1 ms, sys: 1.29 s, total: 1.37 s\n",
      "Wall time: 1.5 s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "embedded_query, embedded_documents = search_similar_documents(q, docs, hf_embeddings_bge_m3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "0d1598f3",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \t\tBAAI/bge-m3\n",
      "Document Dimension: \t1024\n",
      "Query Dimension: \t1024\n"
     ]
    }
   ],
   "source": [
    "print(f\"Model: \\t\\t{model_name}\")\n",
    "print(f\"Document Dimension: \\t{len(embedded_documents[0])}\")\n",
    "print(f\"Query Dimension: \\t{len(embedded_query)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "42890bcc",
   "metadata": {},
   "source": [
    "## FlagEmbedding Usage Guide\n",
    "\n",
    "- **FlagEmbedding** is an advanced embedding framework developed by **BAAI (Beijing Academy of Artificial Intelligence).**\n",
    "- It supports **various embedding approaches** and is primarily used with the **BGE (BAAI General Embedding) model.**\n",
    "- FlagEmbedding excels in tasks such as **semantic search**, **natural language processing (NLP)**, and **recommendation systems.**\n",
    "\n",
    "---\n",
    "\n",
    "### 📚 **Core Concepts of FlagEmbedding**\n",
    "\n",
    "1️⃣ ```Dense Embedding``` \n",
    "- Definition: Represents the overall meaning of a text as a single high-density vector.  \n",
    "- Advantages: Effectively captures semantic similarity.  \n",
    "- Use Cases: Semantic search, document similarity computation.  \n",
    "\n",
    "2️⃣ ```Lexical Embedding``` \n",
    "- Definition: Breaks text into word-level components, emphasizing word matching.  \n",
    "- Advantages: Ensures precise matching of specific words or phrases.  \n",
    "- Use Cases: Keyword-based search, exact word matching.  \n",
    "\n",
    "3️⃣ ```Multi-Vector Embedding``` \n",
    "- Definition: Splits a document into multiple vectors for representation.  \n",
    "- Advantages: Allows more granular representation of lengthy texts or diverse topics.  \n",
    "- Use Cases: Complex document structure analysis, detailed topic matching.  "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8bb697bc",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "FlagEmbedding offers a **flexible and powerful toolkit** for leveraging embeddings across a wide range of **NLP tasks and semantic search applications.** 🚀"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "60259624",
   "metadata": {},
   "source": [
    "The following code is used to control **tokenizer parallelism** in Hugging Face's ```transformers``` library:\n",
    "\n",
    "- ```TOKENIZERS_PARALLELISM = \"true\"```  → **Optimized for speed,** suitable for large-scale data processing.  \n",
    "- ```TOKENIZERS_PARALLELISM = \"false\"```  → **Ensures stability,** prevents conflicts and race conditions.  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "956c667a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "os.environ[\"TOKENIZERS_PARALLELISM\"] = \"true\"  # \"false\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "726df20d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# install FlagEmbedding\n",
    "%pip install -qU FlagEmbedding"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a8224874",
   "metadata": {},
   "source": [
    "### ⚙️ **Key Parameter**\n",
    "\n",
    "```BGEM3FlagModel``` \n",
    "-  ```model_name``` : The Hugging Face **model ID** (e.g., ```BAAI/bge-m3``` ).\n",
    "-  ```use_fp16``` : When set to **True,** reduces **memory usage** and improves **encoding speed.**\n",
    "\n",
    "```bge_embeddings.encode``` \n",
    "- ```batch_size``` : Defines the **number of documents** to process at once.  \n",
    "- ```max_length``` : Sets the **maximum token length** for encoding documents.  \n",
    "   - Increase for longer documents to ensure full content encoding.  \n",
    "   - Excessively large values may **degrade performance.**\n",
    "- ```return_dense``` : When set to **True**, returns **Dense Vectors** only.  \n",
    "- ```return_sparse``` : When set to **True**, returns **Sparse Vectors.**\n",
    "- ```return_colbert_vecs``` : When set to **True,** returns **ColBERT-style vectors.**\n",
    "\n",
    "\n",
    "\n",
    "### 1️⃣ **Dense Vector Embedding Example**\n",
    "- Definition: Represents the overall meaning of a text as a single high-density vector.  \n",
    "- Advantages: Effectively captures semantic similarity.  \n",
    "- Use Cases: Semantic search, document similarity computation.  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "6b3624be",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "01bf90a4e376451fa42848cb9fdec85f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Fetching 30 files:   0%|          | 0/30 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "74a97f05ce8a401eb895d9f38ced73e3",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "imgs/mkqa.jpg:   0%|          | 0.00/608k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "069d0d9a20834a16b5c09e775d5a80ee",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "imgs/.DS_Store:   0%|          | 0.00/6.15k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "5c2276dae01249148eeeea1486300d66",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "imgs/long.jpg:   0%|          | 0.00/485k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "4f88fbcea07a42f0b927c9b665b16d2c",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "imgs/bm25.jpg:   0%|          | 0.00/132k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "d9e85c188dd24f79bca12fe8927b2173",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "imgs/miracl.jpg:   0%|          | 0.00/576k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "b386913bf83643c6983cd356395bbcd5",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "imgs/nqa.jpg:   0%|          | 0.00/158k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "a0b6bd0ec09d4d8f9191a5b7ef599a4f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       ".gitattributes:   0%|          | 0.00/1.63k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "bda3666fe0ef4f6a95906e67d60221cd",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "colbert_linear.pt:   0%|          | 0.00/2.10M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "1df70716e2f44988be1fb19faf7023da",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "imgs/others.webp:   0%|          | 0.00/21.0k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "2d05eed702944f4f97ed62acee8ad3ad",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "long.jpg:   0%|          | 0.00/127k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "7caa9837bf7f4823a49dae66e9a254fc",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "onnx/Constant_7_attr__value:   0%|          | 0.00/65.6k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "d5771da619ec403b8e3008900c37514f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "onnx/config.json:   0%|          | 0.00/698 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "22b88cdb500742138861696050f1e101",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model.onnx:   0%|          | 0.00/725k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "8ad27a74f07640668d44014e03435e7c",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "model.onnx_data:   0%|          | 0.00/2.27G [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "5965f5ade52240648493251ba526c620",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "onnx/tokenizer_config.json:   0%|          | 0.00/1.17k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "700d9883252140d1bb593ec26be2ff37",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer.json:   0%|          | 0.00/17.1M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "3fd00b14b2314048ba27713c3f7ae80e",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "sparse_linear.pt:   0%|          | 0.00/3.52k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "You're using a XLMRobertaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\n"
     ]
    }
   ],
   "source": [
    "from FlagEmbedding import BGEM3FlagModel\n",
    "\n",
    "model_name = \"BAAI/bge-m3\"\n",
    "\n",
    "bge_embeddings = BGEM3FlagModel(\n",
    "    model_name,\n",
    "    use_fp16=True,  # Enabling fp16 improves encoding speed with minimal precision trade-off.\n",
    ")\n",
    "\n",
    "# Encode documents with specified parameters\n",
    "embedded_documents_dense_vecs = bge_embeddings.encode(\n",
    "    sentences=docs,\n",
    "    batch_size=12,\n",
    "    max_length=8192,  # Reduce this value if your documents are shorter to speed up encoding.\n",
    ")[\"dense_vecs\"]\n",
    "\n",
    "# Query Encoding\n",
    "embedded_query_dense_vecs = bge_embeddings.encode(\n",
    "    sentences=[q],\n",
    "    batch_size=12,\n",
    "    max_length=8192,  # Reduce this value if your documents are shorter to speed up encoding.\n",
    ")[\"dense_vecs\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "23f3de31",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[-0.0271  ,  0.003561, -0.0506  , ...,  0.00911 , -0.04565 ,\n",
       "         0.02028 ],\n",
       "       [-0.02242 , -0.01398 , -0.00946 , ...,  0.01851 ,  0.01907 ,\n",
       "        -0.01917 ],\n",
       "       [ 0.01386 , -0.02118 ,  0.01807 , ..., -0.01463 ,  0.04373 ,\n",
       "        -0.011856],\n",
       "       [-0.02365 , -0.008675, -0.000806, ...,  0.01537 ,  0.01438 ,\n",
       "        -0.02342 ],\n",
       "       [-0.01289 , -0.007313, -0.0121  , ..., -0.00561 ,  0.03787 ,\n",
       "         0.006016]], dtype=float16)"
      ]
     },
     "execution_count": 28,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "embedded_documents_dense_vecs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "2b63cf4a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[-0.02156 , -0.01993 , -0.01706 , ..., -0.01994 ,  0.0318  ,\n",
       "        -0.003395]], dtype=float16)"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "embedded_query_dense_vecs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "410698d3",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(5, 1024)"
      ]
     },
     "execution_count": 30,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# docs embedding dimension\n",
    "embedded_documents_dense_vecs.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "33c160b8",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(1, 1024)"
      ]
     },
     "execution_count": 31,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# query embedding dimension\n",
    "embedded_query_dense_vecs.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "6410ce6c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Question: Please tell me more about LangChain.\n",
      "Most similar document: LangChain simplifies the process of building applications with large language models.\n"
     ]
    }
   ],
   "source": [
    "# Calculating Similarity Between Documents and Query\n",
    "from sklearn.metrics.pairwise import cosine_similarity\n",
    "\n",
    "similarities = cosine_similarity(\n",
    "    embedded_query_dense_vecs, embedded_documents_dense_vecs\n",
    ")\n",
    "most_similar_idx = similarities.argmax()\n",
    "\n",
    "# Display the Most Similar Document\n",
    "print(f\"Question: {q}\")\n",
    "print(f\"Most similar document: {docs[most_similar_idx]}\")"
   ]
  },
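  {
   "cell_type": "markdown",
   "id": "d1e2f3a4",
   "metadata": {},
   "source": [
    "Because BGE-M3's dense vectors come L2-normalized, cosine similarity reduces to a plain dot product. The cell below is a minimal sanity check of that equivalence (the ```1e-3``` tolerance is an assumption to absorb fp16 rounding):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d1e2f3a5",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# For L2-normalized vectors, cosine similarity equals the dot product.\n",
    "dot_scores = embedded_query_dense_vecs @ embedded_documents_dense_vecs.T\n",
    "\n",
    "assert np.allclose(dot_scores, similarities, atol=1e-3)\n",
    "assert dot_scores.argmax() == most_similar_idx"
   ]
  },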
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "4dfb7c88",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "34fca22cdafa4ce4b01a1680405d35d0",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Fetching 30 files:   0%|          | 0/30 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "You're using a XLMRobertaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\n"
     ]
    }
   ],
   "source": [
    "from FlagEmbedding import BGEM3FlagModel\n",
    "\n",
    "model_name = \"BAAI/bge-m3\"\n",
    "\n",
    "bge_embeddings = BGEM3FlagModel(\n",
    "    model_name,\n",
    "    use_fp16=True,  # Enabling fp16 improves encoding speed with minimal precision trade-off.\n",
    ")\n",
    "\n",
    "# Encode documents with specified parameters\n",
    "embedded_documents_dense_vecs_default = bge_embeddings.encode(\n",
    "    sentences=docs, return_dense=True\n",
    ")[\"dense_vecs\"]\n",
    "\n",
    "# Query Encoding\n",
    "embedded_query_dense_vecs_default = bge_embeddings.encode(\n",
    "    sentences=[q], return_dense=True\n",
    ")[\"dense_vecs\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "id": "89d3b1e1",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Question: Please tell me more about LangChain.\n",
      "Most similar document: LangChain simplifies the process of building applications with large language models.\n"
     ]
    }
   ],
   "source": [
    "# Calculating Similarity Between Documents and Query\n",
    "from sklearn.metrics.pairwise import cosine_similarity\n",
    "\n",
    "similarities = cosine_similarity(\n",
    "    embedded_query_dense_vecs_default, embedded_documents_dense_vecs_default\n",
    ")\n",
    "most_similar_idx = similarities.argmax()\n",
    "\n",
    "# Display the Most Similar Document\n",
    "print(f\"Question: {q}\")\n",
    "print(f\"Most similar document: {docs[most_similar_idx]}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fbd8142e",
   "metadata": {},
   "source": [
    "### 2️⃣ **Sparse (Lexical) Vector Embedding Example**\n",
    "\n",
    "**Sparse Embedding (Lexical Weight)**\n",
    "- **Sparse embedding** is an embedding method that utilizes **high-dimensional vectors where most values are zero.**\n",
    "- The approach using **lexical weight** generates embeddings by considering the **importance of each word.**\n",
    "\n",
    "**How It Works**  \n",
    "1. Calculate the **lexical weight** for each word. Techniques like **TF-IDF** or **BM25** can be used.\n",
    "2. For each word in a document or query, assign a value to the corresponding dimension of the **sparse vector** based on its lexical weight.\n",
    "3. As a result, documents and queries are represented as **high-dimensional vectors where most values are zero.** \n",
    "\n",
    "**Advantages**  \n",
    "- Directly reflects the **importance of words.** \n",
    "- Enables **precise matching** of specific words or phrases.  \n",
    "- **Efficient scoring:** only the tokens shared between a query and a document contribute, which is typically cheaper than dense similarity search.  "
   ]
  },
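  {
   "cell_type": "markdown",
   "id": "b1c2d3e4",
   "metadata": {},
   "source": [
    "The steps above can be sketched in plain Python: treat each text as a ```{token: weight}``` mapping and score a query against a document with a dot product over the tokens they share. The weights below are made up for illustration; this is a minimal sketch, not the ```FlagEmbedding``` implementation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e4d3c2b1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch of lexical matching: a sparse vector is a {token: weight}\n",
    "# mapping, and the score is a dot product over the tokens two texts share.\n",
    "def toy_lexical_score(query_weights: dict, doc_weights: dict) -> float:\n",
    "    return sum(\n",
    "        weight * doc_weights.get(token, 0.0)\n",
    "        for token, weight in query_weights.items()\n",
    "    )\n",
    "\n",
    "\n",
    "# Hypothetical weights for illustration only\n",
    "query_w = {\"langchain\": 0.8, \"about\": 0.2}\n",
    "doc_w = {\"langchain\": 0.7, \"applications\": 0.5}\n",
    "\n",
    "toy_lexical_score(query_w, doc_w)  # only \"langchain\" overlaps"
   ]
  },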
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "c7fe20d7",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "757be6b2efce438ba041d6c21ed6d404",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Fetching 30 files:   0%|          | 0/30 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "You're using a XLMRobertaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\n"
     ]
    }
   ],
   "source": [
    "from FlagEmbedding import BGEM3FlagModel\n",
    "\n",
    "model_name = \"BAAI/bge-m3\"\n",
    "\n",
    "bge_embeddings = BGEM3FlagModel(\n",
    "    model_name,\n",
    "    use_fp16=True,  # Enabling fp16 improves encoding speed with minimal precision trade-off.\n",
    ")\n",
    "\n",
    "# Encode documents with specified parameters\n",
    "embedded_documents_sparse_vecs = bge_embeddings.encode(\n",
    "    sentences=docs, return_sparse=True\n",
    ")\n",
    "\n",
    "# Query Encoding\n",
    "embedded_query_sparse_vecs = bge_embeddings.encode(sentences=[q], return_sparse=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "id": "6ec7e781",
   "metadata": {},
   "outputs": [],
   "source": [
    "lexical_scores_0 = bge_embeddings.compute_lexical_matching_score(\n",
    "    embedded_query_sparse_vecs[\"lexical_weights\"][0],\n",
    "    embedded_documents_sparse_vecs[\"lexical_weights\"][0],\n",
    ")\n",
    "\n",
    "lexical_scores_1 = bge_embeddings.compute_lexical_matching_score(\n",
    "    embedded_query_sparse_vecs[\"lexical_weights\"][0],\n",
    "    embedded_documents_sparse_vecs[\"lexical_weights\"][1],\n",
    ")\n",
    "\n",
    "lexical_scores_2 = bge_embeddings.compute_lexical_matching_score(\n",
    "    embedded_query_sparse_vecs[\"lexical_weights\"][0],\n",
    "    embedded_documents_sparse_vecs[\"lexical_weights\"][2],\n",
    ")\n",
    "\n",
    "lexical_scores_3 = bge_embeddings.compute_lexical_matching_score(\n",
    "    embedded_query_sparse_vecs[\"lexical_weights\"][0],\n",
    "    embedded_documents_sparse_vecs[\"lexical_weights\"][3],\n",
    ")\n",
    "\n",
    "lexical_scores_4 = bge_embeddings.compute_lexical_matching_score(\n",
    "    embedded_query_sparse_vecs[\"lexical_weights\"][0],\n",
    "    embedded_documents_sparse_vecs[\"lexical_weights\"][4],\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "id": "1e2e168f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "question: Please tell me more about LangChain.\n",
      "====================\n",
      "Hi, nice to meet you. : 0.0118865966796875\n",
      "LangChain simplifies the process of building applications with large language models. : 0.2313995361328125\n",
      "The LangChain English tutorial is structured based on LangChain's official documentation, cookbook, and various practical examples to help users utilize LangChain more easily and effectively. : 0.18797683715820312\n",
      "LangChain simplifies the process of building applications with large-scale language models. : 0.2268962860107422\n",
      "Retrieval-Augmented Generation (RAG) is an effective technique for improving AI responses. : 0.002368927001953125\n"
     ]
    }
   ],
   "source": [
    "print(f\"question: {q}\")\n",
    "print(\"====================\")\n",
    "\n",
    "# Collect the scores in a list rather than reconstructing names with eval()\n",
    "lexical_scores = [\n",
    "    lexical_scores_0,\n",
    "    lexical_scores_1,\n",
    "    lexical_scores_2,\n",
    "    lexical_scores_3,\n",
    "    lexical_scores_4,\n",
    "]\n",
    "for doc, score in zip(docs, lexical_scores):\n",
    "    print(doc, f\": {score}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3f9079bb",
   "metadata": {},
   "source": [
    "### 3️⃣ **Multi-Vector (ColBERT) Embedding Example**\n",
    "\n",
    "**ColBERT** (Contextualized Late Interaction over BERT) is an efficient approach for **document retrieval.** \n",
    "- This method uses a **multi-vector strategy** to represent both documents and queries with multiple vectors.  \n",
    "\n",
    "**How It Works**  \n",
    "1. Generate a **separate vector** for each **token in a document,** resulting in multiple vectors per document.  \n",
    "2. Similarly, generate a **separate vector** for each **token in a query.** \n",
    "3. During retrieval, calculate the **similarity** between each query token vector and all document token vectors.  \n",
    "4. For each query token, keep only its **maximum similarity** over the document's token vectors, then sum (or average) these maxima into a **final retrieval score** (the *MaxSim* operation).  \n",
    "\n",
    "**Advantages**  \n",
    "- Enables **fine-grained token-level matching.**  \n",
    "- Captures **contextual embeddings** effectively.  \n",
    "- Performs efficiently even with **long documents.** "
   ]
  },
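  {
   "cell_type": "markdown",
   "id": "f1a2b3c4",
   "metadata": {},
   "source": [
    "The aggregation in steps 3 and 4 can be sketched with ```numpy```: compute all pairwise token similarities, keep each query token's best match, and average the maxima. The token vectors below are hypothetical (and assumed L2-normalized); this is a minimal sketch, not the ```FlagEmbedding``` internals."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c4b3a2f1",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "\n",
    "# Sketch of ColBERT-style late interaction (MaxSim), assuming each row is an\n",
    "# L2-normalized token embedding.\n",
    "def toy_maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:\n",
    "    sim = query_vecs @ doc_vecs.T  # (n_query_tokens, n_doc_tokens)\n",
    "    # Best-matching document token per query token, averaged over the query\n",
    "    return float(sim.max(axis=1).mean())\n",
    "\n",
    "\n",
    "# Hypothetical token vectors for illustration only\n",
    "q_vecs = np.array([[1.0, 0.0], [0.0, 1.0]])\n",
    "d_vecs = np.array([[0.6, 0.8], [1.0, 0.0]])\n",
    "\n",
    "toy_maxsim_score(q_vecs, d_vecs)"
   ]
  },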
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "ca03e851",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "f78fcf46adca4cbf868d684692e15b80",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Fetching 30 files:   0%|          | 0/30 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "You're using a XLMRobertaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\n"
     ]
    }
   ],
   "source": [
    "from FlagEmbedding import BGEM3FlagModel\n",
    "\n",
    "model_name = \"BAAI/bge-m3\"\n",
    "\n",
    "bge_embeddings = BGEM3FlagModel(\n",
    "    model_name,\n",
    "    use_fp16=True,  # Enabling fp16 improves encoding speed with minimal precision trade-off.\n",
    ")\n",
    "\n",
    "# Encode documents with specified parameters\n",
    "embedded_documents_colbert_vecs = bge_embeddings.encode(\n",
    "    sentences=docs, return_colbert_vecs=True\n",
    ")\n",
    "\n",
    "# Query Encoding\n",
    "embedded_query_colbert_vecs = bge_embeddings.encode(\n",
    "    sentences=[q], return_colbert_vecs=True\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "id": "7e5b332f",
   "metadata": {},
   "outputs": [],
   "source": [
    "colbert_scores_0 = bge_embeddings.colbert_score(\n",
    "    embedded_query_colbert_vecs[\"colbert_vecs\"][0],\n",
    "    embedded_documents_colbert_vecs[\"colbert_vecs\"][0],\n",
    ")\n",
    "\n",
    "colbert_scores_1 = bge_embeddings.colbert_score(\n",
    "    embedded_query_colbert_vecs[\"colbert_vecs\"][0],\n",
    "    embedded_documents_colbert_vecs[\"colbert_vecs\"][1],\n",
    ")\n",
    "\n",
    "colbert_scores_2 = bge_embeddings.colbert_score(\n",
    "    embedded_query_colbert_vecs[\"colbert_vecs\"][0],\n",
    "    embedded_documents_colbert_vecs[\"colbert_vecs\"][2],\n",
    ")\n",
    "\n",
    "colbert_scores_3 = bge_embeddings.colbert_score(\n",
    "    embedded_query_colbert_vecs[\"colbert_vecs\"][0],\n",
    "    embedded_documents_colbert_vecs[\"colbert_vecs\"][3],\n",
    ")\n",
    "\n",
    "colbert_scores_4 = bge_embeddings.colbert_score(\n",
    "    embedded_query_colbert_vecs[\"colbert_vecs\"][0],\n",
    "    embedded_documents_colbert_vecs[\"colbert_vecs\"][4],\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "1cee99b2",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "question: Please tell me more about LangChain.\n",
      "====================\n",
      "Hi, nice to meet you. : 0.509117841720581\n",
      "LangChain simplifies the process of building applications with large language models. : 0.7039894461631775\n",
      "The LangChain English tutorial is structured based on LangChain's official documentation, cookbook, and various practical examples to help users utilize LangChain more easily and effectively. : 0.6632840037345886\n",
      "LangChain simplifies the process of building applications with large-scale language models. : 0.7057777643203735\n",
      "Retrieval-Augmented Generation (RAG) is an effective technique for improving AI responses. : 0.38082367181777954\n"
     ]
    }
   ],
   "source": [
    "print(f\"question: {q}\")\n",
    "print(\"====================\")\n",
    "\n",
    "# Collect the scores in a list rather than reconstructing names with eval()\n",
    "colbert_scores = [\n",
    "    colbert_scores_0,\n",
    "    colbert_scores_1,\n",
    "    colbert_scores_2,\n",
    "    colbert_scores_3,\n",
    "    colbert_scores_4,\n",
    "]\n",
    "for doc, score in zip(docs, colbert_scores):\n",
    "    print(doc, f\": {score}\")"
   ]
  },
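  {
   "cell_type": "markdown",
   "id": "d5e6f7a8",
   "metadata": {},
   "source": [
    "Because ```bge-m3``` can return all three representations from a single ```encode``` call, the dense, lexical, and ColBERT scores can also be blended into one hybrid ranking signal. The weights below are illustrative assumptions, not values from the model or this tutorial."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a8f7e6d5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch of hybrid scoring: a weighted sum of the three BGE-M3 signals.\n",
    "# The weights are hypothetical; tune them for your own retrieval task.\n",
    "def toy_hybrid_score(dense, lexical, colbert, weights=(0.4, 0.2, 0.4)):\n",
    "    w_dense, w_lex, w_col = weights\n",
    "    return w_dense * dense + w_lex * lexical + w_col * colbert\n",
    "\n",
    "\n",
    "# Hypothetical per-document scores for illustration only\n",
    "toy_hybrid_score(dense=0.85, lexical=0.23, colbert=0.70)"
   ]
  },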
  {
   "cell_type": "markdown",
   "id": "3d8d5f52",
   "metadata": {},
   "source": [
    "### 💡 **Advantages of FlagEmbedding**  \n",
    "\n",
    "- **Diverse Embedding Options:** Supports the **Dense,** **Lexical,** and **Multi-Vector** approaches.  \n",
    "- **High-Performance Models:** Utilizes powerful pre-trained models like **BGE.**  \n",
    "- **Flexibility:** Choose the optimal embedding method based on your **use case.**  \n",
    "- **Scalability:** Capable of performing embeddings on **large-scale datasets.**  \n",
    "\n",
    "---\n",
    "\n",
    "### ⚠️ **Considerations**  \n",
    "\n",
    "- **Model Size:** Some models may require **significant storage capacity.**  \n",
    "- **Resource Requirements:** **GPU usage is recommended** for large-scale vector computations.  \n",
    "- **Configuration Needs:** Optimal performance may require **parameter tuning.**   \n",
    "\n",
    "---\n",
    "\n",
    "### 📊 **FlagEmbedding Vector Comparison**  \n",
    "\n",
    "| **Embedding Type** | **Strengths**         | **Use Cases**              |\n",
    "|---------------------|-----------------------|----------------------------|\n",
    "| **Dense Vector**   | Emphasizes semantic similarity | Semantic search, document matching |\n",
    "| **Lexical Vector** | Precise word matching        | Keyword search, exact matches      |\n",
    "| **Multi-Vector**   | Captures complex meanings    | Long document analysis, topic classification |\n",
    "\n",
    "---"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "langchain-opentutorial-gmgjIYR5-py3.11",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
