{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "13cb272e"
   },
   "source": [
    "# 🧠 Multi-Agent QA System with RAG and Multi-Agent Collaboration\n",
    "\n",
    "🚀 **_If you haven’t signed up for LanceDB Cloud yet, click [here](https://cloud.lancedb.com) to get started!_**\n",
    "\n",
    "This notebook demonstrates an end-to-end question answering system that combines retrieval-augmented generation (RAG) with multi-agent collaboration to achieve accurate, context-aware answers. Built on the [SQuAD v2](https://huggingface.co/datasets/rajpurkar/squad_v2) benchmark, The implementation uses  the following key components:\n",
    "\n",
    "- **LanceDB-powered** context retrieval\n",
    "- **AutoGen-managed** agent validation cycles\n",
    "- **LangChain-optimized** document processing\n",
    "\n",
    "\n",
    "**💡 Example Output**  \n",
    "```python  \n",
    "QUESTION: What is the capital of France?\n",
    "INSTRUCTIONS: Provide verified answer using the context retrieval workflow\n",
    "\n",
    "--------------------------------------------------------------------------------\n",
    "\n",
    "Next speaker: QA_Specialist\n",
    "\n",
    "QA_Specialist (to chat_manager):\n",
    "\n",
    "1. Context 1: \"Paris is the most populous city in France and the capital of the country.\"\n",
    "   Context 2: \"The capital and the most populous city of France is Paris, often called the City of Light.\"\n",
    "   Context 3: \"France, officially the French Republic, has its capital in Paris.\"\n",
    "\n",
    "2. Answers from Contexts:\n",
    "    -From Context 1: Paris\n",
    "    -From Context 2: Paris\n",
    "    -From Context 3: Paris\n",
    "\n",
    "3. Final Answer: Paris.\n",
    "\n",
    "--------------------------------------------------------------------------------\n",
    "\n",
    "Next speaker: Fact_Checker\n",
    "\n",
    "Fact_Checker (to chat_manager):\n",
    "\n",
    "Paris\n",
    "\n",
    "--------------------------------------------------------------------------------\n",
    "\n",
    "Next speaker: Coordinator\n",
    "\n",
    "Coordinator (to chat_manager):\n",
    "\n",
    "TERMINATE\n",
    "\n",
    "--------------------------------------------------------------------------------\n",
    "\n",
    "Question: What is the capital of France?\n",
    "Answer: paris\n",
    "Exact Match: 10000%\n",
    "F1: 10000%\n",
    "\n",
    "```\n",
    "\n",
    "### 🛠️ What You'll Build\n",
    "\n",
    "*   RAG Pipeline with LanceDB vector store (all-MiniLM-L6-v2 embeddings)\n",
    "*   AutoGen GroupChat with specialized agent roles (retrieve → generate → verify)\n",
    "*   Conversation Visualizer showing agent interactions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "9N9tRap0YYp4"
   },
   "source": [
    "## Step 1: Install Required Libraries\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "AZHJyHexyHY3"
   },
   "outputs": [],
   "source": [
    "!pip install autogen lancedb langchain sentence-transformers datasets langchain-community langchain-openai langchain_huggingface evaluate"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Qm0a39PCPYDs"
   },
   "source": [
    "## Step 2: Obtain the API key from the [dashboard](https://cloud.lancedb.com)\n",
    "\n",
    "*  Get the `db uri`\n",
    "\n",
    "`db uri` starts with `db://`, which can be obtained from the project page on the dashboard. In the following example, `db uri` is `db://test-sfifxz`.\n",
    "\n",
    "![db-uri.png]()\n",
    "\n",
    "*  Get the `API Key`\n",
    "Obtain a LanceDB Cloud API key by clicking on the `GENERATE API KEY` from the `table` page.\n",
    "\n",
    "💡 Copy the code block for connecting to LanceDB Cloud that is shown at the last step of API key generation.\n",
    "![image.png]()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "metadata": {
    "id": "p_TeVfKloPwF"
   },
   "outputs": [],
   "source": [
    "uri = \"db://your-db-uri\"  # @param {type:\"string\"}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "metadata": {
    "id": "P8mDac8moPwG"
   },
   "outputs": [],
   "source": [
    "api_key = \"sk_...\"  # @param {type:\"string\"}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "kI_FMG_O79xy"
   },
   "source": [
    "paste your OPEN_AI_KEY"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "metadata": {
    "id": "vIVUoGCi8MD2"
   },
   "outputs": [],
   "source": [
    "openai_api_key = \"your-openai-api-key\"  # @param {type:\"string\"}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "1zpjUSNExfmM"
   },
   "source": [
    "## Step 3: Import libraries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 69,
   "metadata": {
    "id": "aOTHe1SDynUJ"
   },
   "outputs": [],
   "source": [
    "import lancedb\n",
    "from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager\n",
    "from datasets import load_dataset\n",
    "from evaluate import load\n",
    "from langchain_community.vectorstores import LanceDB\n",
    "from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
    "from langchain_huggingface import HuggingFaceEmbeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "HUBMW4Zgxvxi"
   },
   "source": [
    "## Step 4: Load SQuAD dataset and chunck contexts\n",
    "\n",
    "Note: We loaded the first 1000 data from the training set to speed up this example."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 70,
   "metadata": {
    "id": "JoJwaZdUz0Xf"
   },
   "outputs": [],
   "source": [
    "# Load full dataset\n",
    "dataset = load_dataset(\"squad_v2\", split=\"train[:1000]\")\n",
    "\n",
    "# Text splitting with overlap\n",
    "text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=64)\n",
    "\n",
    "# Process contexts with answer positions\n",
    "docs = []\n",
    "for item in dataset:\n",
    "    context = item[\"context\"]\n",
    "    # Find all answer spans in this context\n",
    "    answer_spans = item[\"answers\"][\"answer_start\"]\n",
    "    docs.extend(text_splitter.split_text(context))"
   ]
  },
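  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To build intuition for what `chunk_overlap` does, here is a minimal fixed-window sketch. This is a simplification for illustration only: the real `RecursiveCharacterTextSplitter` also prefers paragraph and sentence boundaries, so its chunk sizes vary."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def chunk_with_overlap(text, chunk_size=512, overlap=64):\n",
    "    # Simplified fixed-window chunking: consecutive chunks share `overlap` characters\n",
    "    step = chunk_size - overlap\n",
    "    return [text[i : i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]\n",
    "\n",
    "\n",
    "chunks = chunk_with_overlap(\"x\" * 1000)\n",
    "print([len(c) for c in chunks])  # [512, 512, 104]\n",
    "print(chunks[0][-64:] == chunks[1][:64])  # True: adjacent chunks overlap by 64 characters"
   ]
  },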
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "v27Drvl0yJsS"
   },
   "source": [
    "## Step 5: Create embeddings and ingest them to LanceDB Cloud\n",
    "\n",
    "This step might have a few minutes for generating embeddings."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 71,
   "metadata": {
    "id": "qmPxg0GJyqXN"
   },
   "outputs": [],
   "source": [
    "# Create embeddings\n",
    "embeddings = HuggingFaceEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n",
    "\n",
    "\n",
    "# Store in LanceDB Cloud\n",
    "api_key = api_key\n",
    "uri = uri\n",
    "\n",
    "db = lancedb.connect(uri=uri, api_key=api_key, region=\"us-east-1\")\n",
    "\n",
    "# alternatively, you can paste the code block from UI for LanceDB Cloud connection here\n",
    "# db = lancedb.connect(\n",
    "#   uri=\"db://your-db-uri\",\n",
    "#   api_key=\"sk_...\",\n",
    "#   region=\"us-east-1\"\n",
    "# )\n",
    "\n",
    "table_name = \"squad_contexts_new\"\n",
    "vector_store = LanceDB.from_texts(\n",
    "    texts=docs,\n",
    "    embedding=embeddings,\n",
    "    uri=uri,  # your remote database URI, e.g. db://...\n",
    "    api_key=api_key,\n",
    "    region=\"us-east-1\",\n",
    "    table_name=table_name,  # Optional, defaults to \"vectors\"\n",
    ")\n",
    "table = db.open_table(table_name)\n",
    "# drop the unused column\n",
    "table.drop_columns([\"metadata\"])"
   ]
  },
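  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Quick intuition for what `similarity_search` does: stored chunks are ranked by the similarity between embedding vectors. The toy 3-dimensional vectors below are made up for illustration; the real `all-MiniLM-L6-v2` embeddings have 384 dimensions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "\n",
    "def cosine_similarity(a, b):\n",
    "    # Cosine similarity: closer to 1.0 means the vectors point in similar directions\n",
    "    dot = sum(x * y for x, y in zip(a, b))\n",
    "    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))\n",
    "\n",
    "\n",
    "query = [0.1, 0.9, 0.2]  # toy \"question\" embedding\n",
    "chunk_close = [0.12, 0.85, 0.25]  # toy embedding of a relevant chunk\n",
    "chunk_far = [0.9, 0.1, 0.0]  # toy embedding of an unrelated chunk\n",
    "print(cosine_similarity(query, chunk_close) > cosine_similarity(query, chunk_far))  # True"
   ]
  },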
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Z5wc35pZz13a"
   },
   "source": [
    "## Step 6: Configure Agent\n",
    "\n",
    "In the setup, we have:\n",
    "\n",
    "\n",
    "1.   QA agent: to generate answers from context\n",
    "2.   Review agent: to validate answers\n",
    "3.   User proxy: to manage the conversation flow\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 72,
   "metadata": {
    "id": "9yq9oEe81gbc"
   },
   "outputs": [],
   "source": [
    "# Configuration\n",
    "config_list = [{\"model\": \"gpt-4\", \"api_key\": openai_api_key}]\n",
    "\n",
    "# Define QA Specialist\n",
    "qa_agent = AssistantAgent(\n",
    "    name=\"QA_Specialist\",\n",
    "    system_message=\"\"\"Answer ONLY the original question using:\n",
    "1. Retrieve top 3 contexts\n",
    "2. Provide exact answer from context\n",
    "3. Format: \"Final Answer: [text]\"\n",
    "DO NOT generate new questions\"\"\",\n",
    "    llm_config={\"config_list\": config_list},\n",
    "    function_map={\n",
    "        \"retrieve_contexts\": lambda q: vector_store.similarity_search(q, k=3)\n",
    "    },\n",
    ")\n",
    "\n",
    "# Configure Fact Checker\n",
    "review_agent = AssistantAgent(\n",
    "    name=\"Fact_Checker\",\n",
    "    system_message=\"\"\"Verify answers by:\n",
    "1. Check context alignment\n",
    "2. Reject unsubstantiated claims\n",
    "3. Require direct context support\"\"\",\n",
    "    llm_config={\"config_list\": config_list},\n",
    ")\n",
    "\n",
    "# Set up coordinator\n",
    "user_proxy = UserProxyAgent(\n",
    "    name=\"Coordinator\",\n",
    "    human_input_mode=\"NEVER\",\n",
    "    code_execution_config=False,\n",
    "    default_auto_reply=\"TERMINATE\",  # Stops after final answer\n",
    "    max_consecutive_auto_reply=1,\n",
    ")"
   ]
  },
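  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The coordinator above stops after one auto-reply (`max_consecutive_auto_reply=1`) and responds with the `TERMINATE` sentinel. AutoGen's `ConversableAgent` also accepts an `is_termination_msg` predicate to end a chat as soon as the sentinel appears; a minimal sketch of such a predicate (assuming AutoGen's message dict with a `content` key):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def is_termination_msg(message):\n",
    "    # End the chat when the message body contains the TERMINATE sentinel.\n",
    "    # `content` can be None for some message types, so guard against that.\n",
    "    return \"TERMINATE\" in (message.get(\"content\") or \"\")\n",
    "\n",
    "\n",
    "print(is_termination_msg({\"content\": \"TERMINATE\"}))  # True\n",
    "print(is_termination_msg({\"content\": None}))  # False"
   ]
  },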
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "2y590kAP0RaS"
   },
   "source": [
    "## Step 7 : Orchestrate workflow\n",
    "\n",
    "The agents collaborate to: generate the initial answer -> verify against context -> refine the answer if needed.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "metadata": {
    "id": "i0s7vuDc0eh3"
   },
   "outputs": [],
   "source": [
    "# --- Group Chat Setup ---\n",
    "groupchat = GroupChat(\n",
    "    agents=[user_proxy, qa_agent, review_agent],\n",
    "    messages=[],\n",
    "    max_round=4,  # Reduced from 8 to prevent question drift\n",
    "    speaker_selection_method=\"auto\",\n",
    "    allow_repeat_speaker=False,\n",
    ")\n",
    "# Initialize manager\n",
    "manager = GroupChatManager(groupchat=groupchat, llm_config={\"config_list\": config_list})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "gyvJ7b4Y0nih"
   },
   "source": [
    "## Step 8 : Set up evaluation pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "zhyat-vpvwoX"
   },
   "outputs": [],
   "source": [
    "import re\n",
    "import random\n",
    "from evaluate import squad_metric\n",
    "\n",
    "\n",
    "def extract_final_answer(conversation):\n",
    "    for msg in reversed(conversation):\n",
    "        if \"Final Answer:\" in msg[\"content\"]:\n",
    "            answer = msg[\"content\"].split(\"Final Answer:\")[-1]\n",
    "            return re.sub(r\"\\[\\d+\\]\", \"\", answer).strip()\n",
    "    return \"No answer found\"\n",
    "\n",
    "\n",
    "def normalize_eval_text(text):\n",
    "    text = text.lower().strip()\n",
    "    text = re.sub(r\"[^\\w\\s]\", \"\", text)  # Keep basic words/whitespace\n",
    "    text = re.sub(r\"\\s+\", \" \", text)  # Single spaces\n",
    "    return text\n",
    "\n",
    "\n",
    "def run_qa_pipeline(question, ground_truth):\n",
    "    user_proxy.reset()\n",
    "    user_proxy.initiate_chat(\n",
    "        manager,\n",
    "        message=f\"QUESTION: {question}\\nINSTRUCTIONS: Provide verified answer using the context retrieval workflow\",\n",
    "    )\n",
    "\n",
    "    conversation = user_proxy.chat_messages[manager]\n",
    "    raw_answer = extract_final_answer(conversation)\n",
    "    clean_answer = normalize_eval_text(raw_answer)\n",
    "\n",
    "    prediction = {\n",
    "        \"id\": str(random.randint(1000, 9999)),\n",
    "        \"prediction_text\": clean_answer,\n",
    "        \"no_answer_probability\": 0.0,\n",
    "    }\n",
    "\n",
    "    metrics = squad_metric.compute(\n",
    "        predictions=[prediction],\n",
    "        references=[\n",
    "            {\n",
    "                \"id\": prediction[\"id\"],\n",
    "                \"answers\": {\n",
    "                    \"text\": [normalize_eval_text(ground_truth)],\n",
    "                    \"answer_start\": [0],\n",
    "                },\n",
    "            }\n",
    "        ],\n",
    "    )\n",
    "\n",
    "    return prediction, metrics, conversation"
   ]
  },
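  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For intuition, the SQuAD F1 score used below is a token-level F1 over the normalized prediction and reference. A simplified sketch of the computation (the official `squad_v2` metric also handles no-answer cases and reports scores on a 0-100 scale):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import Counter\n",
    "\n",
    "\n",
    "def token_f1(prediction, reference):\n",
    "    # Token-level F1: harmonic mean of token precision and recall (simplified sketch)\n",
    "    pred_tokens = prediction.split()\n",
    "    ref_tokens = reference.split()\n",
    "    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())\n",
    "    if overlap == 0:\n",
    "        return 0.0\n",
    "    precision = overlap / len(pred_tokens)\n",
    "    recall = overlap / len(ref_tokens)\n",
    "    return 2 * precision * recall / (precision + recall)\n",
    "\n",
    "\n",
    "print(token_f1(\"william shakespeare\", \"william shakespeare\"))  # 1.0\n",
    "print(round(token_f1(\"shakespeare\", \"william shakespeare\"), 2))  # 0.67"
   ]
  },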
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "UHd3AuUR0uLy"
   },
   "source": [
    "## Step 9 : Let's test!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 76,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "BgJhMBa-240o",
    "outputId": "79582b8e-aedf-4040-f2fa-ed5c061506ae"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Coordinator (to chat_manager):\n",
      "\n",
      "QUESTION: What is the capital of France?\n",
      "INSTRUCTIONS: Provide verified answer using the context retrieval workflow\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\n",
      "Next speaker: QA_Specialist\n",
      "\n",
      "QA_Specialist (to chat_manager):\n",
      "\n",
      "1. Context 1: \"Paris is the most populous city in France and the capital of the country.\"\n",
      "   Context 2: \"The capital and the most populous city of France is Paris, often called the City of Light.\"\n",
      "   Context 3: \"France, officially the French Republic, has its capital in Paris.\"\n",
      "\n",
      "2. Answers from Contexts: \n",
      "    -From Context 1: Paris\n",
      "    -From Context 2: Paris\n",
      "    -From Context 3: Paris\n",
      "\n",
      "3. Final Answer: Paris.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\n",
      "Next speaker: Fact_Checker\n",
      "\n",
      "Fact_Checker (to chat_manager):\n",
      "\n",
      "Paris\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\n",
      "Next speaker: Coordinator\n",
      "\n",
      "Coordinator (to chat_manager):\n",
      "\n",
      "TERMINATE\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\n",
      "Question: What is the capital of France?\n",
      "Answer: paris\n",
      "Exact Match: 10000%\n",
      "F1: 10000%\n",
      "-----------\n",
      "Coordinator (to chat_manager):\n",
      "\n",
      "QUESTION: Who wrote Romeo and Juliet?\n",
      "INSTRUCTIONS: Provide verified answer using the context retrieval workflow\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\n",
      "Next speaker: QA_Specialist\n",
      "\n",
      "QA_Specialist (to chat_manager):\n",
      "\n",
      "1. Retrieved Context 1: \"Romeo and Juliet is a tragedy written by William Shakespeare early in his career about two young star-crossed lovers whose deaths ultimately reconcile their feuding families.\"\n",
      "   Retrieved Context 2: \"The Shakespearean work Romeo and Juliet is among the most popularly known pieces by the playwright.\"\n",
      "   Retrieved Context 3: \"Shakespeare became famous for several plays, including Romeo and Juliet which remains one of his most performed and well-known works.\"\n",
      "\n",
      "2. Exact Answer from Context: William Shakespeare\n",
      "\n",
      "3. Final Answer: William Shakespeare\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\n",
      "Next speaker: Fact_Checker\n",
      "\n",
      "Fact_Checker (to chat_manager):\n",
      "\n",
      "William Shakespeare\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\n",
      "Next speaker: Coordinator\n",
      "\n",
      "Coordinator (to chat_manager):\n",
      "\n",
      "TERMINATE\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\n",
      "Question: Who wrote Romeo and Juliet?\n",
      "Answer: william shakespeare\n",
      "Exact Match: 10000%\n",
      "F1: 10000%\n",
      "-----------\n"
     ]
    }
   ],
   "source": [
    "# Load metrics\n",
    "squad_metric = load(\"squad_v2\")\n",
    "\n",
    "test_samples = [\n",
    "    {\"question\": \"What is the capital of France?\", \"answer\": \"Paris\"},\n",
    "    {\"question\": \"Who wrote Romeo and Juliet?\", \"answer\": \"William Shakespeare\"},\n",
    "]\n",
    "\n",
    "\n",
    "for sample in test_samples:\n",
    "    prediction, metrics, _ = run_qa_pipeline(sample[\"question\"], sample[\"answer\"])\n",
    "    print(f\"\\nQuestion: {sample['question']}\")\n",
    "    print(f\"Answer: {prediction['prediction_text']}\")\n",
    "    print(f\"Exact Match: {metrics['exact']:.0%}\")\n",
    "    print(f\"F1: {metrics['f1']:.0%}\")\n",
    "    print(\"-----------\")"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
