{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "av9C5VQh7LPR"
      },
      "source": [
        "#### Copyright 2024 Google LLC.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "5mvpC35m61BC"
      },
      "outputs": [],
      "source": [
        "# @title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "# https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QMWaKHtB7Txj"
      },
      "source": [
        "# Local Agentic RAG without APIs - using FastEmbed, Ollama-Gemma 3 and Qdrant Vector database\n",
        "\n",
        "Author: Tarun Jain\n",
        "\n",
        "- GitHub: [lucifertrj](https://github.com/lucifertrj/)\n",
        "- Twitter: [TRJ_0751](https://x.com/trj_0751)\n",
        "\n",
        "## Overview\n",
        "\n",
        "We will explore how to build a 100% local agentic RAG system using open-source stack. This system allows you to create a knowledge base from web data and answer user queries without relying on external APIs, ensuring data privacy and flexibility.\n",
        "\n",
        "Running AI inference locally — processing AI models on an organization’s own hardware, such as on-premises servers or devices, rather than relying on cloud-based services has become an increasingly popular choice across various industries. The primary appeal lies in the enhanced control and security it offers over sensitive data.  \n",
        "\n",
        "![1_pas4gyU_facwVjqpjENt_Q.webp]()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JB357StN70-S"
      },
      "source": [
        "## Installation\n",
        "\n",
        "Install Ollama through the offical installation script."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "4zeg8fEB7xOf"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            ">>> Cleaning up old version at /usr/local/lib/ollama\n",
            ">>> Installing ollama to /usr/local\n",
            ">>> Downloading Linux amd64 bundle\n",
            "######################################################################## 100.0%\n",
            ">>> Adding ollama user to video group...\n",
            ">>> Adding current user to ollama group...\n",
            ">>> Creating ollama systemd service...\n",
            "\u001b[1m\u001b[31mWARNING:\u001b[m systemd is not running\n",
            "\u001b[1m\u001b[31mWARNING:\u001b[m Unable to detect NVIDIA/AMD GPU. Install lspci or lshw to automatically detect and install GPU dependencies.\n",
            ">>> The Ollama API is now available at 127.0.0.1:11434.\n",
            ">>> Install complete. Run \"ollama\" from the command line.\n"
          ]
        }
      ],
      "source": [
        "!curl -fsSL https://ollama.com/install.sh | sh"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UHQqBXXu73Ty"
      },
      "source": [
        "Install:\n",
        "\n",
        "- Ollama: To get the open source model inference i.e., Gemma-3-4b\n",
        "- Langchain: To orchestrate the Retriever pipeline by indexing the data into the knowledge base.\n",
        "- FastEmbed: For the lightweight embeddings\n",
        "- Qdrant: To save the vector embeddings and index the data.\n",
        "- Agno: For the Agentic AI capabilites\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "iwYSJp_074y5"
      },
      "outputs": [],
      "source": [
        "!pip install ollama\n",
        "!pip install langchain langchain-community\n",
        "!pip install fastembed langchain-qdrant==0.2.0\n",
        "!pip install agno"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "liiXQRe178O-"
      },
      "source": [
        "## Start Ollama\n",
        "\n",
        "Start Ollama in background using nohup."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ukInL5AB77e-"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "nohup: redirecting stderr to stdout\n"
          ]
        }
      ],
      "source": [
        "!nohup ollama serve > ollama.log &"
      ]
    },
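    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Optionally, we can verify the server is up before pulling any models. This is a minimal sketch that assumes the default Ollama port `11434`; the root endpoint returns a short status string when the server is running."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import urllib.request\n",
        "\n",
        "# Hit the root endpoint of the local Ollama server (default port 11434).\n",
        "# If the server started correctly, this prints a short status message.\n",
        "with urllib.request.urlopen(\"http://127.0.0.1:11434\") as resp:\n",
        "    print(resp.read().decode())"
      ]
    },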
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QuEUMKoNAYgL"
      },
      "source": [
        "## Switch the Runtime\n",
        "\n",
        "- Since we are using Colab, lets switch to GPU\n",
        "- Click on `Runtime` and select `Change Runtime type`\n",
        "- Choose `T4 GPU` one can access it for free."
      ]
    },
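    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check after switching (assuming an NVIDIA runtime such as the T4), the cell below should list the GPU:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "!nvidia-smi"
      ]
    },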
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HTRC0-th8C0D"
      },
      "source": [
        "## Get Gemma 3\n",
        "\n",
        "- Gemma 3 is available in 4 variants: 1B, 4B, 12B, and 27B.\n",
        "- Pull the gemma3 model to use with the library: ollama pull gemma3:12b\n",
        "- See [https://ollama.com/search](https://ollama.com/search) for more information on the models available."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "4wJk-SPU8Umm"
      },
      "outputs": [],
      "source": [
        "import ollama"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "pzWEdaiH8aDN"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "ProgressResponse(status='success', completed=None, total=None, digest=None)"
            ]
          },
          "execution_count": 11,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "ollama.pull('gemma3:4b') # get the model from here: https://ollama.com/library/gemma3"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "-x8gyi7Z-U_u"
      },
      "outputs": [],
      "source": [
        "from agno.agent import Agent\n",
        "from agno.models.ollama import Ollama"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "NvAT0-Sw-aW8"
      },
      "outputs": [],
      "source": [
        "test_ollama = Agent(\n",
        "    model=Ollama(id=\"gemma3:4b\")\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "MSad7fz--gKn"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "f47af978650d49ea8f555dfadbedd7f9",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Output()"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"></pre>\n"
            ],
            "text/plain": []
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "test_ollama.print_response(\"who is Virat Kohli. give brief intro\",stream=True)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0180E14VArBn"
      },
      "source": [
        "Alright, the model is running, its time to build the Agentic RAG completely local, lets start by saving our external data into Vector database and prepare the knowledge base."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "7BJnFZ3WA08N"
      },
      "outputs": [],
      "source": [
        "from langchain_community.document_loaders import WebBaseLoader\n",
        "from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
        "from langchain_community.embeddings.fastembed import FastEmbedEmbeddings\n",
        "\n",
        "from langchain_qdrant import QdrantVectorStore\n",
        "from qdrant_client import QdrantClient\n",
        "from qdrant_client.http.models import Distance, VectorParams\n",
        "from qdrant_client.http.exceptions import UnexpectedResponse"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JnHOeXo9A86_"
      },
      "source": [
        "## External Data\n",
        "\n",
        "External data can be in PDF, Docx, YouTube, CSV, or any other format. For this demonstration, we’ll use a web page to demonstrate how to chat with a website.\n",
        "\n",
        "The key objective is to load data from the URL using LangChain loaders and extract raw text. Since LLMs cannot process an entire document at once, we need to chunk the data into smaller parts. Once we have the raw text, we divide it into smaller chunks and store it in a knowledge base.\n",
        "\n",
        "We will use Gemma-3 release blog as the external data to build the Agentic RAG."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "-r1pK5hYBAno"
      },
      "outputs": [],
      "source": [
        "urls = [\n",
        "    \"https://blog.google/technology/developers/gemma-3/\",\n",
        "]"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "718mzbYlBHxt"
      },
      "outputs": [],
      "source": [
        "loader = WebBaseLoader(urls)\n",
        "data = loader.load()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "FJbBejsEBMw6"
      },
      "outputs": [],
      "source": [
        "text_splitter = RecursiveCharacterTextSplitter(\n",
        "    chunk_size=1024, chunk_overlap=50\n",
        ")\n",
        "chunks = text_splitter.split_documents(data)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Unqpw0ceBQDd"
      },
      "source": [
        "## Setup your Vector database\n",
        "\n",
        "While defining the vector database, it’s crucial to consider that data may change over time, and preserving previous versions is important. To manage this, we define a unique collection name each time we store new data. Whenever data changes, a new collection should be used."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "jRmawlwrBPbR"
      },
      "outputs": [],
      "source": [
        "client = QdrantClient(path=\"/tmp/app\")\n",
        "collection_name = \"agent-rag\"\n",
        "\n",
        "try:\n",
        "    collection_info = client.get_collection(collection_name=collection_name)\n",
        "except (UnexpectedResponse, ValueError):\n",
        "    client.create_collection(\n",
        "        collection_name=collection_name,\n",
        "        vectors_config=VectorParams(size=1024, distance=Distance.COSINE),\n",
        "    )"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zrXklg7IBNv7"
      },
      "source": [
        "## Index your document\n",
        "\n",
        "Now that we’ve defined the vector database client, its time to define the embedding model. The final step to index the data is to initialize the vector store and add the chunked data for efficient retrieval."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3pIJTuDIBLo3"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "/usr/local/lib/python3.11/dist-packages/langchain_community/embeddings/fastembed.py:109: UserWarning: The model thenlper/gte-large now uses mean pooling instead of CLS embedding. In order to preserve the previous behaviour, consider either pinning fastembed version to 0.5.1 or using `add_custom_model` functionality.\n",
            "  values[\"model\"] = fastembed.TextEmbedding(\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "f34292490a714f2bb9930d4a4fc0d58e",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Fetching 5 files:   0%|          | 0/5 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "59e884604968460eae6ca627ff15a0ae",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "config.json:   0%|          | 0.00/660 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "9e6fd37e8b764c6694885819ed80c6ec",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "model.onnx:   0%|          | 0.00/1.34G [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "ca4907b8de784cfcad95412eb36f4fa0",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "tokenizer_config.json:   0%|          | 0.00/1.41k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "a75ece95a9c84e7f90e203086d2c4da9",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "tokenizer.json:   0%|          | 0.00/712k [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "e470872477cc410ab6b55c147845856b",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "special_tokens_map.json:   0%|          | 0.00/695 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "embeddings = FastEmbedEmbeddings(model_name=\"thenlper/gte-large\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "2poQRoggBcDL"
      },
      "outputs": [],
      "source": [
        "vector_store = QdrantVectorStore(\n",
        "    client=client,\n",
        "    collection_name=collection_name,\n",
        "    embedding=embeddings,\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "1fxtS4GjCGqh"
      },
      "outputs": [],
      "source": [
        "vector_store.add_documents(documents=chunks)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "lA51rfIsCbaC"
      },
      "outputs": [],
      "source": [
        "retriever = vector_store.as_retriever()"
      ]
    },
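    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before wiring the retriever into the agent, we can invoke it directly to confirm it returns relevant chunks. This is a quick sketch; the sample query and the exact chunks returned depend on the indexed blog content."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Retrieve the chunks most similar to a sample query and peek at the top hit.\n",
        "sample_docs = retriever.invoke(\"What languages does Gemma 3 support?\")\n",
        "print(f\"Retrieved {len(sample_docs)} chunks\")\n",
        "print(sample_docs[0].page_content[:200])"
      ]
    },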
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aKQ3qDfbCHtC"
      },
      "source": [
        "Hurray! The data is now saved in vector database, lets cross check:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qkUg9_10CNgd"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "collection  meta.json\n"
          ]
        }
      ],
      "source": [
        "!ls /tmp/app"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "hp7NO1vrCQN4"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "agent-rag\n"
          ]
        }
      ],
      "source": [
        "!ls /tmp/app/collection/"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Gq_DRinECSCT"
      },
      "source": [
        "We have the agent-rag inside the collection directory, thats saved locally."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZZNQboJgCZA7"
      },
      "source": [
        "## Agentic Pipeline set\n",
        "\n",
        "Agno offers multiple ways to define a knowledge base. In this case, we will load the LangChain retriever pipeline into Agno as a knowledge base object, enabling seamless retrieval and decision-making within the Agentic workflow."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "moYxj-I5DDw7"
      },
      "outputs": [],
      "source": [
        "def search_and_augment(user_query):\n",
        "    docs_content = retriever.invoke(user_query)\n",
        "\n",
        "    context = \"\"\n",
        "    for data in docs_content:\n",
        "        context += data.page_content\n",
        "\n",
        "    prompt = f\"\"\"\n",
        "    Answer to the USER QUESTION from the provided CONTEXT.\n",
        "    The given CONTEXT is the only source of information, if USER QUESTION is not from the given CONTEXT, just say `I don't know, no enough information`\n",
        "    -----\n",
        "    CONTEXT: {context}\n",
        "    -----\n",
        "    USER QUESTION: {user_query}\n",
        "    \"\"\"\n",
        "\n",
        "    return prompt"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Oiupk4GIC3-w"
      },
      "outputs": [],
      "source": [
        "user_query = \"How many global languages is supported?\" # question taken from external data\n",
        "user_query2 = \"who is Virat Kohli?\" # not from the given data"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "xI21AaC9Coep"
      },
      "outputs": [],
      "source": [
        "agent = Agent(\n",
        "    model=Ollama(id=\"gemma3:4b\")\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Kag1StkPCWrY"
      },
      "outputs": [],
      "source": [
        "prompt1 = search_and_augment(user_query)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "PTDZE1ofCqlB"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Gemma 3 offers out-of-the-box support for over 35 languages and pretrained support for over 140 languages.\n"
          ]
        }
      ],
      "source": [
        "response = agent.run(prompt1)\n",
        "print(response.content)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "F5YgM2T2Czp8"
      },
      "source": [
        "Lets ask out of context question"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0F-oYc2_C8dA"
      },
      "outputs": [],
      "source": [
        "prompt2 = search_and_augment(user_query2)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "nYVoMELxC_yM"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "I don't know, no enough information\n"
          ]
        }
      ],
      "source": [
        "response = agent.run(prompt2)\n",
        "print(response.content)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2A5SgGvtDp6-"
      },
      "source": [
        "If you notice, earlier when we tested Gemma, I asked who is Virat Kohli it answered, not it didn't answer, thats what RAG is capable off. With Agentic capability it have the improved decision making ability."
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "name": "[Gemma_3]Local_Agentic_RAG.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
