{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "gpuType": "T4"
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "<a href=\"https://colab.research.google.com/drive/1aaU4YZC-fswSImo1fV-w67FXPQg5Ictm?usp=sharing\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"></a>"
      ],
      "metadata": {
        "id": "AWgDx0D0ED7z"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 📊 What are Vector Embeddings?\n",
        "\n",
        "A vector embedding represents words, phrases, or entire texts as numerical vectors in a multi-dimensional space. This helps a model understand language by capturing the meanings of words and the relationships between them.\n",
        "\n",
        "![Vector Embedding](https://qdrant.tech/articles_data/what-are-embeddings/BERT-model.jpg)\n",
        "Source: [Qdrant Blog](https://qdrant.tech/articles/what-are-embeddings/)"
      ],
      "metadata": {
        "id": "HxxOWz9dpsYj"
      }
    },
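    {
      "cell_type": "markdown",
      "source": [
        "The idea can be illustrated with a minimal, hand-made sketch (not a real embedding model): a few words are assigned tiny 3-dimensional vectors so that related words point in similar directions, and cosine similarity, the same measure used later in this notebook, gives related words a higher score."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Toy illustration: hand-crafted 3-d \"embeddings\" (real models produce\n",
        "# hundreds of dimensions), compared with cosine similarity.\n",
        "import numpy as np\n",
        "\n",
        "toy_vectors = {\n",
        "    \"king\": np.array([0.9, 0.8, 0.1]),\n",
        "    \"queen\": np.array([0.85, 0.75, 0.2]),\n",
        "    \"banana\": np.array([0.1, 0.05, 0.9]),\n",
        "}\n",
        "\n",
        "def cosine(a, b):\n",
        "    # Cosine similarity: close to 1.0 means same direction, close to 0.0 means unrelated\n",
        "    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n",
        "\n",
        "print(\"king vs queen :\", round(cosine(toy_vectors[\"king\"], toy_vectors[\"queen\"]), 3))\n",
        "print(\"king vs banana:\", round(cosine(toy_vectors[\"king\"], toy_vectors[\"banana\"]), 3))"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },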
    {
      "cell_type": "markdown",
      "source": [
        "## 📝 Step 1: Learn the Basics  \n",
        "👉 **Embedding Models • Vector Stores • Vector Embeddings (Guide)**: [Read the PDF](https://github.com/genieincodebottle/generative-ai/blob/main/docs/vector-embeddings-guide.pdf)  "
      ],
      "metadata": {
        "id": "p6DyV_mX_5s7"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## 📦 Step 2: Install & Import required libraries\n",
        "\n",
        "If you hit a library installation error, simply re-run the next cell; this usually resolves it."
      ],
      "metadata": {
        "id": "Y3axTI0sp5Hg"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Install required libraries:\n",
        "# - langchain: Core framework for building LLM-based apps (RAG, agents, chains, etc.)\n",
        "# - langchain-chroma: Integration with Chroma vector store\n",
        "# - langchain-community: Community-contributed loaders, tools, and integrations\n",
        "# - langchain-google-genai: LangChain integration package to use Google’s Gemini LLMs and Embedding models\n",
        "# - einops: Tensor operations library (used in some embedding/LLM models)\n",
        "\n",
        "!pip install -qU \\\n",
        "     langchain \\\n",
        "     langchain-chroma \\\n",
        "     langchain-community \\\n",
        "     langchain-google-genai \\\n",
        "     einops"
      ],
      "metadata": {
        "id": "ShxTNxM5gqtr",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "7ebfd36b-b55b-4d6c-e11a-addce3c2c461",
        "collapsed": true
      },
      "execution_count": 1,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "\u001b[?25l     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/67.3 kB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m67.3/67.3 kB\u001b[0m \u001b[31m5.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h  Installing build dependencies ... \u001b[?25l\u001b[?25hdone\n",
            "  Getting requirements to build wheel ... \u001b[?25l\u001b[?25hdone\n",
            "  Preparing metadata (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.5/2.5 MB\u001b[0m \u001b[31m57.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m49.4/49.4 kB\u001b[0m \u001b[31m4.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m19.8/19.8 MB\u001b[0m \u001b[31m113.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.4/1.4 MB\u001b[0m \u001b[31m73.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m64.7/64.7 kB\u001b[0m \u001b[31m6.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m284.2/284.2 kB\u001b[0m \u001b[31m28.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.9/1.9 MB\u001b[0m \u001b[31m96.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m50.9/50.9 kB\u001b[0m \u001b[31m5.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m103.3/103.3 kB\u001b[0m \u001b[31m10.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m16.5/16.5 MB\u001b[0m \u001b[31m119.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m72.5/72.5 kB\u001b[0m \u001b[31m5.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m131.9/131.9 kB\u001b[0m \u001b[31m13.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m65.7/65.7 kB\u001b[0m \u001b[31m7.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m208.0/208.0 kB\u001b[0m \u001b[31m20.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m105.4/105.4 kB\u001b[0m \u001b[31m11.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m71.6/71.6 kB\u001b[0m \u001b[31m7.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m510.8/510.8 kB\u001b[0m \u001b[31m36.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m4.7/4.7 MB\u001b[0m \u001b[31m119.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m452.2/452.2 kB\u001b[0m \u001b[31m39.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m46.0/46.0 kB\u001b[0m \u001b[31m4.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m86.8/86.8 kB\u001b[0m \u001b[31m8.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h  Building wheel for pypika (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n",
            "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
            "google-colab 1.0.0 requires requests==2.32.4, but you have requests 2.32.5 which is incompatible.\n",
            "google-generativeai 0.8.5 requires google-ai-generativelanguage==0.6.15, but you have google-ai-generativelanguage 0.6.18 which is incompatible.\u001b[0m\u001b[31m\n",
            "\u001b[0m"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "import os   # For environment variable handling and file system operations\n",
        "import getpass   # For securely entering API keys or passwords in notebooks\n",
        "\n",
        "from langchain_chroma import Chroma   # Vector store for efficient similarity search (langchain.vectorstores.Chroma is deprecated)\n",
        "from langchain_community.document_loaders import WebBaseLoader   # Loader to fetch and load documents from web URLs\n",
        "from langchain.text_splitter import RecursiveCharacterTextSplitter   # Splits large text into smaller overlapping chunks\n",
        "from langchain.prompts import PromptTemplate   # Helps create reusable prompt templates\n",
        "from langchain.chains import RetrievalQA   # Chain to perform RAG (retrieval + generation) workflow\n",
        "from sklearn.metrics.pairwise import cosine_similarity   # For evaluating embeddings via cosine similarity"
      ],
      "metadata": {
        "id": "RL-3LsYogoH5",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "32d8c068-7db3-4802-b31e-f4924c9bfe1e"
      },
      "execution_count": 2,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "WARNING:langchain_community.utils.user_agent:USER_AGENT environment variable not set, consider setting it to identify your requests.\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## 🧪 Step 3: Experiment with Google's Embedding Model (Free-tier)\n",
        "\n",
        "**Embedding Models:**\n",
        "- `text-embedding-004`: Use this model for better free-tier availability\n",
        "- `gemini-embedding-001`: State-of-the-art performance across English, multilingual, and code tasks. It unifies previously specialized models such as `text-embedding-005` and `text-multilingual-embedding-002`, and outperforms them in their respective domains  \n",
        "\n",
        "🔗 [Docs](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api#generative-ai-get-text-embedding-python_vertex_ai_sdk)  \n",
        "\n",
        "🔗 [Research Paper - Gemini Embedding](https://arxiv.org/abs/2503.07891)"
      ],
      "metadata": {
        "id": "F6UeDlrgqI2A"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# ChatGoogleGenerativeAI → Wrapper to use Google Gemini LLM for chat/QA tasks\n",
        "# GoogleGenerativeAIEmbeddings → Wrapper to use Google’s embedding models for vector representations\n",
        "from langchain_google_genai import ChatGoogleGenerativeAI, GoogleGenerativeAIEmbeddings\n"
      ],
      "metadata": {
        "id": "Zqf1sMFbKE_5"
      },
      "execution_count": 3,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## 🔑 Step 4: Generate Google API Key  \n",
        "\n",
        "- The same API key works for both **Gemini LLM** and **Google Embedding Models**.  \n",
        "- You can create your key here:  \n",
        "\n",
        "  - [Get Google API Key](https://aistudio.google.com/apikey)  \n",
        "\n",
        "- Once generated, **paste your API key** in the next step.  \n"
      ],
      "metadata": {
        "id": "hab7APdb-fgN"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "os.environ[\"GOOGLE_API_KEY\"] = getpass.getpass()"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "yobvrD3glfd4",
        "outputId": "f263e586-fcd9-404b-9e3b-b0a44988025e"
      },
      "execution_count": 4,
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "··········\n"
          ]
        }
      ]
    },
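    {
      "cell_type": "markdown",
      "source": [
        "Optional sanity check (a minimal sketch, assuming the `GOOGLE_API_KEY` entered above is valid): `embed_query` turns a single piece of text into one vector, so you can inspect its dimensionality before building the full pipeline."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# This call hits the Google API and requires the GOOGLE_API_KEY set above\n",
        "test_embeddings = GoogleGenerativeAIEmbeddings(model=\"models/text-embedding-004\")\n",
        "test_vector = test_embeddings.embed_query(\"What is a vector embedding?\")\n",
        "print(len(test_vector))  # text-embedding-004 produces 768-dimensional vectors"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },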
    {
      "cell_type": "markdown",
      "source": [
        "# Step 5: Implementing Basic RAG with Key Components\n",
        "\n",
        "This implementation demonstrates the core workflow of a Basic RAG system:  \n",
        "\n",
        "1. Chunking and embedding source documents  \n",
        "2. Retrieving relevant documents via similarity search  \n",
        "3. Generating responses using the retrieved context  \n",
        "4. Evaluating response quality using similarity scores  \n",
        "\n",
        "🛠️ **Tech Stack**:\n",
        "\n",
        "- **Chroma:** Vector store for efficient similarity search  \n",
        "- **Embedding Model:** Google’s text-embedding model  \n",
        "- **ChatGoogleGenerativeAI:** Gemini LLM for response generation  \n",
        "- **Cosine Similarity:** For evaluating query–response–context relevance  \n",
        "\n",
        "🔗 **References**:  \n",
        "- [LangChain Chunking Strategies](https://python.langchain.com/v0.1/docs/modules/data_connection/document_transformers/)  \n",
        "- [LangChain Vectorstores](https://python.langchain.com/v0.1/docs/modules/data_connection/vectorstores/)  \n"
      ],
      "metadata": {
        "id": "D5cQdItbNMsl"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Step 1: Initialize the Gemini language model\n",
        "llm = ChatGoogleGenerativeAI(\n",
        "    model=\"gemini-2.0-flash\",\n",
        "    temperature=0.3  # Adjust temperature or other parameters as needed\n",
        ")\n",
        "\n",
        "# Step 2: Load documents from a web URL\n",
        "url = \"https://en.wikipedia.org/wiki/Artificial_intelligence\"\n",
        "loader = WebBaseLoader(url)\n",
        "data = loader.load()\n",
        "\n",
        "# Step 3: Split text into chunks\n",
        "# (Experiment with chunk_size and chunk_overlap for optimal results)\n",
        "text_splitter = RecursiveCharacterTextSplitter(\n",
        "    chunk_size=500,\n",
        "    chunk_overlap=50\n",
        ")\n",
        "chunks = text_splitter.split_documents(data)\n",
        "\n",
        "# Add unique IDs to each text chunk\n",
        "for idx, chunk in enumerate(chunks):\n",
        "    chunk.metadata[\"id\"] = idx\n",
        "\n",
        "# Step 4: Get embedding model\n",
        "# Options:\n",
        "# 1. text-embedding-004  (Use this for better free tier)\n",
        "# 2. Stable: gemini-embedding-001\n",
        "# 3. Experimental: gemini-embedding-exp-03-07\n",
        "gemini_embeddings = GoogleGenerativeAIEmbeddings(\n",
        "    model=\"models/text-embedding-004\"\n",
        ")\n",
        "\n",
        "# Step 5: Create vector store using embeddings\n",
        "vectorstore = Chroma.from_documents(chunks, gemini_embeddings)\n",
        "\n",
        "# Step 6: Define query\n",
        "query = \"What are the main applications of artificial intelligence in healthcare?\"\n",
        "\n",
        "# Step 7: Retrieve relevant documents\n",
        "docs = vectorstore.similarity_search(query, k=5)\n",
        "context = \"\\n\\n\".join([doc.page_content for doc in docs])\n",
        "retrieval_method = \"Basic similarity search\"\n",
        "\n",
        "# Step 8: Generate response\n",
        "prompt = f\"{context}\\n\\nQuestion: {query}\\nAnswer:\"\n",
        "final_response = llm.invoke(prompt).content\n",
        "\n",
        "# Step 9: Print results\n",
        "print(f\"Query: {query}\")\n",
        "print(\"=========================\")\n",
        "print(f\"Final Answer: {final_response}\")\n",
        "print(\"=========================\")\n",
        "print(f\"Retrieval Method: {retrieval_method}\")"
      ],
      "metadata": {
        "id": "B5I5PCsKNP2N",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "7ff16fce-5f17-4664-febd-f2a6c8ae8505"
      },
      "execution_count": 5,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Query: What are the main applications of artificial intelligence in healthcare?\n",
            "=========================\n",
            "Final Answer: Based on the provided text, the main applications of artificial intelligence in healthcare are:\n",
            "\n",
            "*   **Processing and integrating big data:** This is particularly important for organoid and tissue engineering development, which rely heavily on microscopy imaging.\n",
            "*   **Overcoming discrepancies in funding:** AI may help to address imbalances in funding allocation across different research fields.\n",
            "*   **Deepening the understanding of biomedically relevant pathways:** AI tools like AlphaFold 2 can provide new insights into these pathways.\n",
            "*   **Increasing patient care and quality of life:** AI has the potential to improve healthcare outcomes.\n",
            "*   **More accurately diagnosing and treating patients:** Medical professionals are ethically obligated to use AI if it leads to better diagnoses and treatments.\n",
            "=========================\n",
            "Retrieval Method: Basic similarity search\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Step 6: RAG Evaluation  \n",
        "\n",
        "1. Generate embeddings for **query**, **response**, and **context**  \n",
        "2. Measure **cosine similarity** between query–response and response–context  \n",
        "3. Derive an **overall relevance score** as the average of these similarities  \n"
      ],
      "metadata": {
        "id": "C23HOdS0IO_X"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from sklearn.metrics.pairwise import cosine_similarity\n",
        "\n",
        "# Step 10: Define evaluation function\n",
        "def evaluate_response(query, embeddings, response, context):\n",
        "    \"\"\"\n",
        "    Evaluate the relevance of the model's response by comparing embeddings.\n",
        "\n",
        "    - Computes embeddings for query, response, and context\n",
        "    - Calculates cosine similarities\n",
        "    - Returns an average relevance score\n",
        "    \"\"\"\n",
        "    # Compute embeddings\n",
        "    query_embedding = embeddings.embed_query(query)\n",
        "    response_embedding = embeddings.embed_query(response)\n",
        "    context_embedding = embeddings.embed_query(context)\n",
        "\n",
        "    # Compute cosine similarities\n",
        "    query_response_similarity = cosine_similarity(\n",
        "        [query_embedding], [response_embedding]\n",
        "    )[0][0]\n",
        "\n",
        "    response_context_similarity = cosine_similarity(\n",
        "        [response_embedding], [context_embedding]\n",
        "    )[0][0]\n",
        "\n",
        "    # Compute overall relevance score (average)\n",
        "    relevance_score = (\n",
        "        query_response_similarity + response_context_similarity\n",
        "    ) / 2\n",
        "\n",
        "    return {\n",
        "        \"query_response_similarity\": query_response_similarity,\n",
        "        \"response_context_similarity\": response_context_similarity,\n",
        "        \"relevance_score\": relevance_score,\n",
        "    }\n",
        "\n",
        "# Step 11: Evaluate the response\n",
        "evaluation = evaluate_response(query, gemini_embeddings, final_response, context)\n",
        "\n",
        "# Step 12: Print evaluation results\n",
        "print(\"\\nEvaluation Results\")\n",
        "print(\"=========================\")\n",
        "print(f\"Query-Response Similarity   : {evaluation['query_response_similarity']:.4f}\")\n",
        "print(f\"Response-Context Similarity : {evaluation['response_context_similarity']:.4f}\")\n",
        "print(f\"Overall Relevance Score     : {evaluation['relevance_score']:.4f}\")"
      ],
      "metadata": {
        "id": "aIXlHaWqgcza",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "08888a5e-5d80-4ec4-c0be-a90fc97a6d14"
      },
      "execution_count": 6,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "\n",
            "Evaluation Results\n",
            "=========================\n",
            "Query-Response Similarity   : 0.8698\n",
            "Response-Context Similarity : 0.8791\n",
            "Overall Relevance Score     : 0.8745\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Similarly, OpenAI and Hugging Face embedding models can be used as shown below"
      ],
      "metadata": {
        "id": "8vnE9a3jN1U6"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## B. OpenAI Embedding Model\n",
        "\n",
        "### Provide OpenAI API Key\n",
        "\n",
        "If you want to use OpenAI embeddings, you can create an OpenAI API key using the following link:\n",
        "\n",
        "- [OpenAI API Key](https://platform.openai.com/settings/organization/api-keys)"
      ],
      "metadata": {
        "id": "dVOq9sU--Vj6"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install -qU langchain-openai"
      ],
      "metadata": {
        "id": "BluoWUGsHxes"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()"
      ],
      "metadata": {
        "id": "e3TOKmRoCzzj"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 💰 Paid: OpenAI Embedding Models  \n",
        "\n",
        "**Models:** `text-embedding-3-small`, `text-embedding-3-large`, `text-embedding-ada-002` (legacy)  \n",
        "\n",
        "**Pros:** High-quality embeddings, multiple model sizes, seamless API integration, batch processing for cost efficiency, regularly updated, versatile across NLP tasks.  \n",
        "\n",
        "**Cons:** Paid (costs can scale), closed-source, requires API key + internet, limited customization, data privacy considerations, subject to OpenAI policies.  \n",
        "\n",
        "🔗 [Docs](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings)  \n"
      ],
      "metadata": {
        "id": "gCE5wgpGDTD3"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from langchain_openai import OpenAIEmbeddings\n",
        "\n",
        "openai_embeddings = OpenAIEmbeddings(model=\"text-embedding-3-large\")"
      ],
      "metadata": {
        "id": "QJESKKcjDhgW"
      },
      "execution_count": null,
      "outputs": []
    },
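    {
      "cell_type": "markdown",
      "source": [
        "Usage sketch (assumes a valid `OPENAI_API_KEY` was entered above): the resulting object exposes the same interface as the Google embeddings, so it can be dropped into `Chroma.from_documents` or used directly via `embed_query`."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# This call hits the OpenAI API and requires the OPENAI_API_KEY set above\n",
        "openai_vector = openai_embeddings.embed_query(\"What is a vector embedding?\")\n",
        "print(len(openai_vector))  # text-embedding-3-large produces 3072-dimensional vectors"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },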
    {
      "cell_type": "markdown",
      "source": [
        "## C. Hugging Face Embedding Model\n",
        "\n",
        "### Provide Hugging Face API Key\n",
        "\n",
        "If you want to use Hugging Face embedding models, you can create a Hugging Face API key (access token) using the following link:\n",
        "\n",
        "- [Hugging Face API Key](https://huggingface.co/settings/tokens)\n",
        "\n",
        "\n"
      ],
      "metadata": {
        "id": "mbH7Tl5fI5qR"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install -qU sentence-transformers \\\n",
        "                 langchain-huggingface"
      ],
      "metadata": {
        "id": "ILNgaQmfJH79"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "os.environ[\"HF_TOKEN\"] = getpass.getpass()"
      ],
      "metadata": {
        "id": "iOoRf7vrDArL"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🔓 Free: Hugging Face Open-Source Embeddings  \n",
        "\n",
        "**Models:** gte-large-en-v1.5, bge-multilingual-gemma2, snowflake-arctic-embed-l, nomic-embed-text-v1.5, e5-mistral-7b-instruct, etc. → [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard)  \n",
        "\n",
        "**Pros:** Open-source, customizable, community-backed, Hugging Face integration, supports fine-tuning, broad NLP use cases.  \n",
        "**Cons:** May underperform vs. commercial models, variable quality, limited official support, high compute needs, community-dependent maintenance.\n"
      ],
      "metadata": {
        "id": "VhVYI4olDrCx"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Hugging Face: Nomic AI Embedding Model  \n",
        "\n",
        "You can choose from various Hugging Face open-source embedding models depending on your use case, performance needs, and system constraints. Model rankings and benchmarks are available on the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard).  \n",
        "\n",
        "**Popular Models:**  \n",
        "1. `nomic-ai/nomic-embed-text-v1.5`  \n",
        "2. `nomic-ai/nomic-embed-text-v1`  \n",
        "3. `sentence-transformers/all-MiniLM-L12-v2`  \n",
        "4. `sentence-transformers/all-MiniLM-L6-v2`  \n"
      ],
      "metadata": {
        "id": "Co0T5mEdDx5H"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from langchain_huggingface import HuggingFaceEmbeddings\n",
        "\n",
        "# Change model_name to your chosen Hugging Face embedding model\n",
        "nomic_embeddings = HuggingFaceEmbeddings(model_name=\"nomic-ai/nomic-embed-text-v1.5\", model_kwargs={\"trust_remote_code\": True})"
      ],
      "metadata": {
        "id": "_m3Ff9WTDrkE"
      },
      "execution_count": null,
      "outputs": []
    }
  ]
}