{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KeKWVpg_135y"
      },
      "source": [
        "# From Zero 🙎🏾 to Hero 🦸🏾: Mastering Generative AI with MongoDB\n",
        "\n",
        "---\n",
        "\n",
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mongodb-developer/GenAI-Showcase/blob/main/notebooks/agents/zero_to_hero_with_genai_with_mongodb_openai.ipynb)\n",
        "\n",
        "[![AI Learning Hub For Developers](https://img.shields.io/badge/AI%20Learning%20Hub%20For%20Developers-Click%20Here-blue)](https://www.mongodb.com/resources/use-cases/artificial-intelligence?utm_campaign=ai_learning_hub&utm_source=github&utm_medium=referral)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "79P5T4Un23_D"
      },
      "source": [
        "**What to Expect**\n",
        "\n",
        "[**Part 1: Foundations of Generative AI & Search**](#part1)\n",
        "- **Comprehensive understanding of Generative AI applications**\n",
        "- **In-depth code walkthroughs** of various retrieval mechanisms including text search, vector search, and hybrid search\n",
        "- **Exploration of Voyage AI** and embedding generation techniques\n",
        "\n",
        "[**Part 2: Building Intelligent Search Systems**](#part2)\n",
        "- **Hands-on implementation** of semantic search mechanisms\n",
        "- **Practical development** of Retrieval Augmented Generation (RAG) systems\n",
        "\n",
        "[**Part 3: Advanced AI Agents & Integration**](#part3)\n",
        "- **Introduction to AI Agents** and their capabilities\n",
        "- **Step-by-step implementation** of Agentic RAG with MongoDB\n",
        "- **OpenAI Agents SDK**: Build AI agents with the OpenAI Agents SDK\n",
        "\n",
        "[**Part 4: Agentic Chat System**](#part4)\n",
        "- Agentic Chatbot that can answer queries\n",
        "- Implement persistent chat history tracking\n",
        "- Preserve conversation context across interactions\n",
        "- Implement advanced query-answering mechanisms\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vCeJ6-LGiPNF"
      },
      "source": [
        "\n",
        "\n",
        "---\n",
        "\n",
        "\n",
        "**How to use this notebook:**\n",
        "- Execute each cell block sequentially\n",
        "- Look out for checkpoints ⛳ for key learning takeaways\n",
        "- Look out for key information 🔑 for insights that are useful in LLM application development\n",
        "- Use the external links provided to get access to a free MongoDB Atlas account, a Voyage AI API key, and any other resources required\n",
        "\n",
        "---\n",
        "\n",
        "\n",
        "* Don't forget to Star 🌟 us on [GitHub](https://github.com/mongodb-developer/GenAI-Showcase)\n",
        "* And Checkout the [AI Learning Hub](https://www.mongodb.com/resources/use-cases/artificial-intelligence)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lYWiq6EcW3LP"
      },
      "source": [
        "# 💼 Use Case: Virtual Primary Care Assistant for a Medical Pharmacy\n",
        "\n",
        "\n",
        "---\n",
        "\n",
        "\n",
        "\n",
        "## Overview\n",
        "The Virtual Primary Care Assistant leverages MongoDB's vector search capabilities to provide pharmacy customers with reliable medical information and personalized guidance based on medication reviews and health conditions. This intelligent assistant integrates with a medical pharmacy's existing customer data infrastructure to offer a comprehensive health support experience.\n",
        "\n",
        "## Key Features\n",
        "- **Medication Information Retrieval**: Users can ask questions about medications and receive accurate information about dosage, side effects, and drug interactions.\n",
        "- **Experience-Based Insights**: Leverages real patient reviews and experiences to provide context-rich responses about medication effectiveness for specific conditions.\n",
        "- **Symptom Assessment**: Helps users understand possible conditions based on symptoms and suggests when to seek professional medical care.\n",
        "- **Personalized Recommendations**: Provides tailored guidance by considering the user's prescription history, health profile, and previous interactions.\n",
        "\n",
        "## Technical Implementation\n",
        "- MongoDB serves as the knowledge base, storing structured medication data and vector embeddings of patient reviews\n",
        "- Vector search enables semantic understanding of user queries about medications and conditions\n",
        "- Hybrid search combines keyword and semantic matching for optimal retrieval of relevant information\n",
        "- RAG architecture integrates retrieval results with LLM processing to generate accurate, contextual responses\n",
        "- Agentic capabilities allow the system to determine when to search for information versus when to recommend professional consultation\n",
        "\n",
        "## Business Value\n",
        "- Reduces call center volume by answering common medication questions\n",
        "- Improves medication adherence through accessible information and reminders\n",
        "- Enhances customer satisfaction by providing 24/7 access to reliable health guidance\n",
        "- Generates insights on common customer concerns to inform product offerings and services"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bp-Bs9Gy3tGB"
      },
      "source": [
        "## Part 1: Foundations of Generative AI & Search\n",
        "<a name=\"part1\"></a>\n",
        "\n",
        "---\n",
        "- **Understanding Generative AI Applications**\n",
        "  - Core concepts and architecture\n",
        "  - LLMs and their capabilities\n",
        "  - Real-world use cases and limitations\n",
        "- **Retrieval Mechanisms Deep Dive**\n",
        "  - Traditional text search techniques\n",
        "  - Vector search fundamentals\n",
        "  - Hybrid search approaches and when to use each\n",
        "- **Embedding Generation with Voyage AI**\n",
        "  - Introduction to embeddings and their importance\n",
        "  - Working with Voyage AI embedding models\n",
        "  - Optimizing embedding generation for different content types\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0zDqC4Ys3CD8"
      },
      "source": [
        "### Step 1: Installing Libraries\n",
        "\n",
        "Install the necessary libraries for the notebook:\n",
        "- pymongo: The MongoDB Python driver, used to connect to the MongoDB Atlas cluster.\n",
        "- voyageai: The Voyage AI Python client, used to generate embeddings for the datasets.\n",
        "- pandas: Data manipulation and analysis, used to load and prepare the datasets for vector search.\n",
        "- datasets: The Hugging Face library used to load the healthcare and drug review datasets.\n",
        "- matplotlib: Plotting and visualizing data."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "4gW6KP8-1jKl",
        "outputId": "1abb280d-8840-47af-959a-b1bb4dcae11e"
      },
      "outputs": [],
      "source": [
        "!pip install -Uq pymongo voyageai pandas datasets matplotlib"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jL-eBYML4ITf"
      },
      "source": [
        "Create the helper function `set_env_securely`, which prompts for a secret without echoing it and stores it as an environment variable."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 17,
      "metadata": {
        "id": "z5RcEGsh4Iuc"
      },
      "outputs": [],
      "source": [
        "import getpass\n",
        "import os\n",
        "\n",
        "\n",
        "# Function to securely get and set environment variables\n",
        "def set_env_securely(var_name, prompt):\n",
        "    value = getpass.getpass(prompt)\n",
        "    os.environ[var_name] = value"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qh0FKPwc4Wn4"
      },
      "source": [
        "### Step 2: Data Loading and Preparation\n",
        "\n",
        "For this Virtual Primary Care Assistant, we're working with two complementary datasets:\n",
        "\n",
        "1. **[ChatDoctor-HealthCareMagic-100k](https://huggingface.co/datasets/lavita/ChatDoctor-HealthCareMagic-100k)**\n",
        "  - This dataset contains doctor-patient conversations about medical conditions and treatments\n",
        "  - It provides authentic patient questions and professional medical responses\n",
        "  - We use this data as retrieval context so the assistant can ground its answers in professional medical responses\n",
        "\n",
        "2. **[Drug Reviews Dataset](https://huggingface.co/datasets/Reboot87/drugs_reviews_dataset)**\n",
        "  - Contains patient-reported experiences with various medications\n",
        "  - Includes information about conditions treated, effectiveness ratings, and detailed reviews\n",
        "  - Provides valuable real-world insights on medication effects and side effects\n",
        "\n",
        "The structure of these datasets is as follows:\n",
        "\n",
        "**Healthcare Conversation Dataset:**\n",
        "- `input`: Patient's medical question or symptom description\n",
        "- `output`: Doctor's medical advice or response\n",
        "\n",
        "**Drug Reviews Dataset:**\n",
        "- `drugName`: Name of the medication\n",
        "- `condition`: Medical condition being treated\n",
        "- `review`: Patient's detailed experience with the medication\n",
        "- `rating`: Numerical rating (1-10) of the patient's satisfaction\n",
        "\n",
        "These datasets provide complementary information that allows our system to understand medical questions, provide contextual information about medications, and offer personalized guidance based on real patient experiences."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 18,
      "metadata": {
        "id": "fPV1rmDYqQbL"
      },
      "outputs": [],
      "source": [
        "# Import necessary libraries\n",
        "# datasets is a Hugging Face library for accessing and working with datasets\n",
        "# pandas is used for data manipulation and analysis\n",
        "import pandas as pd\n",
        "from datasets import load_dataset"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 19,
      "metadata": {
        "id": "II-QAXRVYNhM"
      },
      "outputs": [],
      "source": [
        "# Load the healthcare conversation dataset from Hugging Face repository\n",
        "# This dataset contains doctor-patient conversations for medical advice\n",
        "# 'lavita/ChatDoctor-HealthCareMagic-100k' is a dataset with 100k medical conversations\n",
        "healthcare_conversation_dataset = load_dataset(\n",
        "    \"lavita/ChatDoctor-HealthCareMagic-100k\", streaming=True, split=\"train\"\n",
        ")\n",
        "\n",
        "# Limit the dataset to 1,000 examples for processing efficiency\n",
        "# Using the .take() method, which is memory-efficient as it streams the data\n",
        "# This is important for large datasets to avoid memory issues\n",
        "healthcare_conversation_dataset = healthcare_conversation_dataset.take(1000)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 20,
      "metadata": {
        "id": "ib-WCzbbYZur"
      },
      "outputs": [],
      "source": [
        "# Load the drug reviews dataset from Hugging Face repository\n",
        "# This dataset contains patient reviews of various medications\n",
        "# 'Reboot87/drugs_reviews_dataset' contains structured data about drug experiences\n",
        "drug_reviews_dataset = load_dataset(\n",
        "    \"Reboot87/drugs_reviews_dataset\", streaming=True, split=\"train\"\n",
        ")\n",
        "\n",
        "# Limit the dataset to 1,000 examples to manage memory usage and processing time\n",
        "# This sample size is sufficient for our demonstration\n",
        "# The streaming=True parameter ensures we don't load the entire dataset into memory\n",
        "drug_reviews_dataset = drug_reviews_dataset.take(1000)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 21,
      "metadata": {
        "id": "MAbp2-4OYtaW"
      },
      "outputs": [],
      "source": [
        "# Convert datasets to dataframes for easier manipulation and analysis\n",
        "# Pandas DataFrames provide powerful tools for data exploration and preprocessing\n",
        "# This transformation allows us to use pandas' rich functionality for data cleaning and feature engineering\n",
        "healthcare_conversation_dataset = pd.DataFrame(healthcare_conversation_dataset)\n",
        "\n",
        "# Similarly convert the drug reviews dataset to a DataFrame\n",
        "# This enables SQL-like operations, filtering, and statistical analysis\n",
        "# Having both datasets as DataFrames ensures consistent data handling approaches\n",
        "drug_reviews_dataset = pd.DataFrame(drug_reviews_dataset)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 22,
      "metadata": {
        "id": "JBDsRBtyZrKW"
      },
      "outputs": [],
      "source": [
        "# Remove the 'instruction' attribute from the healthcare_conversation_dataset\n",
        "# The 'instruction' column contains generic prompts that aren't needed for our conversational data analysis\n",
        "# Removing it helps focus on the actual patient inputs and doctor responses\n",
        "healthcare_conversation_dataset = healthcare_conversation_dataset.drop(\n",
        "    columns=[\"instruction\"]\n",
        ")\n",
        "\n",
        "# Remove the attributes patientId, date, usefulCount and review_length from the drug_reviews_dataset\n",
        "# patientId: Removed to ensure data anonymization and privacy protection\n",
        "# date: Temporal information isn't critical for our current analysis\n",
        "# usefulCount: Engagement metrics aren't relevant for our semantic understanding\n",
        "# review_length: This is a derived feature that can be recalculated if needed\n",
        "drug_reviews_dataset = drug_reviews_dataset.drop(\n",
        "    columns=[\"patientId\", \"date\", \"usefulCount\", \"review_length\"]\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 206
        },
        "id": "iKZJXs-qYxiC",
        "outputId": "fc669f0c-3112-42e4-d74d-c8e5458e361f"
      },
      "outputs": [],
      "source": [
        "healthcare_conversation_dataset.head()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 206
        },
        "id": "PP2h06A6YzcF",
        "outputId": "84749222-22b1-49eb-e828-d8a5fe862112"
      },
      "outputs": [],
      "source": [
        "drug_reviews_dataset.head()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OHdxOtNN7VEA"
      },
      "source": [
        "### Step 4: Embedding Generation with Voyage AI\n",
        "\n",
        "In this step, we generate embeddings for our datasets using the Voyage AI API.\n",
        "\n",
        "We will use the `voyage-3-large` model to generate the embeddings.\n",
        "\n",
        "One important thing to note: although a credit card is expected for the Voyage API, the first 200 million tokens are free for every account, and subsequent usage is priced on a per-token basis.\n",
        "\n",
        "Go [here](https://docs.voyageai.com/docs/api-key-and-installation) for more information on getting your API key and setting it in the environment variables."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 29,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "tB-tG8h47ZeY",
        "outputId": "dafc520a-4ba6-4859-f521-d4e62ed7bd7c"
      },
      "outputs": [],
      "source": [
        "set_env_securely(\"VOYAGE_API_KEY\", \"Enter your Voyage API Key: \")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 30,
      "metadata": {
        "id": "FpnKu9qX7Yp3"
      },
      "outputs": [],
      "source": [
        "import voyageai\n",
        "\n",
        "# Initialize the Voyage AI client.\n",
        "voyageai_client = voyageai.Client()\n",
        "\n",
        "\n",
        "def get_embedding(text, task_prefix=\"document\"):\n",
        "    \"\"\"\n",
        "    Generate embeddings for a text string with a task-specific prefix using the voyage-3-large model.\n",
        "\n",
        "    Parameters:\n",
        "        text (str): The input text to be embedded.\n",
        "        task_prefix (str): A prefix describing the task; this is prepended to the text.\n",
        "\n",
        "    Returns:\n",
        "        list: The embedding vector as a list of floats (or ints if another output_dtype is chosen).\n",
        "    \"\"\"\n",
        "    if not text.strip():\n",
        "        print(\"Attempted to get embedding for empty text.\")\n",
        "        return []\n",
        "\n",
        "    # Call the Voyage API to generate the embedding.\n",
        "    # Here, we wrap the text in a list since the API expects a list of texts.\n",
        "    # Default output embedding: 1024\n",
        "    result = voyageai_client.embed(\n",
        "        [text], model=\"voyage-3-large\", input_type=task_prefix\n",
        "    )\n",
        "\n",
        "    # Return the first embedding from the result.\n",
        "    return result.embeddings[0]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aLi_aITj7bKL"
      },
      "source": [
        "The `get_embedding` function generates embeddings for text using the voyage-3-large model.\n",
        "\n",
        "It takes a text string and a task prefix as input and returns the embedding vector as a list of floats.\n",
        "\n",
        "The `task_prefix` argument is passed to the Voyage API as `input_type`; use `\"document\"` when embedding content to store and `\"query\"` when embedding a search query.\n",
        "\n"
      ]
    },
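    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To build intuition for how these embeddings are compared at query time, we can compute cosine similarity directly. A minimal sketch (the `cosine_similarity` helper and the toy 2-D vectors are illustrative stand-ins for real 1024-dimensional Voyage embeddings):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "\n",
        "def cosine_similarity(a, b):\n",
        "    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)\n",
        "    # Dot product of the vectors divided by the product of their L2 norms\n",
        "    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n",
        "\n",
        "\n",
        "print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0\n",
        "print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0\n",
        "```\n",
        "\n",
        "This is the same measure that the `\"similarity\": \"cosine\"` setting in the vector index definition below uses to rank results."
      ]
    },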
    {
      "cell_type": "code",
      "execution_count": 31,
      "metadata": {
        "id": "anKEePjVZc15"
      },
      "outputs": [],
      "source": [
        "# Define a function to generate an embedding from a conversation row.\n",
        "def generate_embedding_for_healthcare_dataset(row):\n",
        "    \"\"\"\n",
        "    Generate an embedding for a conversation by concatenating the patient's input\n",
        "    and the medical practitioner's response.\n",
        "\n",
        "    Parameters:\n",
        "      row (pd.Series): A row from the healthcare conversation dataset containing:\n",
        "          - 'input': The patient's message.\n",
        "          - 'output': The practitioner's response.\n",
        "\n",
        "    Returns:\n",
        "      embedding: The embedding vector generated from the concatenated conversation.\n",
        "    \"\"\"\n",
        "    # Concatenate the input and output with descriptive text.\n",
        "    conversation_text = (\n",
        "        f\"This is the input from the patient: {row['input']}. \"\n",
        "        f\"This is the response from the medical practitioner: {row['output']}\"\n",
        "    )\n",
        "\n",
        "    # Generate and return the embedding using the get_embedding function.\n",
        "    return get_embedding(conversation_text)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "vSL0mIIcbhGE",
        "outputId": "57867be7-417a-4ee1-e1db-0262f39843d3"
      },
      "outputs": [],
      "source": [
        "from tqdm import tqdm\n",
        "\n",
        "# Enable the tqdm progress_apply method on pandas DataFrames\n",
        "tqdm.pandas()\n",
        "\n",
        "# Apply the embedding generation function with a progress bar.\n",
        "# Each row is processed with generate_embedding_for_healthcare_dataset, and the resulting\n",
        "# embeddings are stored in the new \"embedding\" column.\n",
        "healthcare_conversation_dataset[\"embedding\"] = (\n",
        "    healthcare_conversation_dataset.progress_apply(\n",
        "        generate_embedding_for_healthcare_dataset, axis=1\n",
        "    )\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 206
        },
        "id": "lt3Xjwo1btMS",
        "outputId": "2839fd06-449e-4abc-a30c-07472fd3aaf8"
      },
      "outputs": [],
      "source": [
        "healthcare_conversation_dataset.head()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "6CKrotVKb3zH",
        "outputId": "5d71ee0c-828e-42f0-dced-a01deaa85934"
      },
      "outputs": [],
      "source": [
        "# Generate embeddings for the drug_reviews_dataset using the review attribute\n",
        "drug_reviews_dataset[\"embedding\"] = drug_reviews_dataset[\"review\"].progress_apply(\n",
        "    get_embedding\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 206
        },
        "id": "H6L6ZZfbcU3m",
        "outputId": "018ab8d5-433c-4dca-fc01-ce0f670265f2"
      },
      "outputs": [],
      "source": [
        "drug_reviews_dataset.head()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HZ8IpncE7tXd"
      },
      "source": [
        "### Step 5: MongoDB (Operational and Vector Database)\n",
        "\n",
        "MongoDB acts as both an operational and vector database for the RAG system.\n",
        "MongoDB Atlas specifically provides a database solution that efficiently stores, queries and retrieves vector embeddings.\n",
        "\n",
        "Creating a database and collection within MongoDB is made simple with MongoDB Atlas.\n",
        "\n",
        "1. First, register for a [MongoDB Atlas account](https://www.mongodb.com/cloud/atlas/register). For existing users, sign into MongoDB Atlas.\n",
        "2. [Follow the instructions](https://www.mongodb.com/docs/atlas/tutorial/deploy-free-tier-cluster/). Select Atlas UI as the procedure to deploy your first cluster.\n",
        "\n",
        "Follow MongoDB’s [steps to get the connection](https://www.mongodb.com/docs/manual/reference/connection-string/) string from the Atlas UI. After setting up the database and obtaining the Atlas cluster connection URI, securely store the URI within your development environment.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 36,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "cOSIEWUW7t-L",
        "outputId": "847f8b36-e036-4a8a-f3d5-a3cc4ac1a70c"
      },
      "outputs": [],
      "source": [
        "# Set MongoDB URI\n",
        "set_env_securely(\"MONGO_URI\", \"Enter your MONGO URI: \")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 37,
      "metadata": {
        "id": "FYpEYJTM7xyc"
      },
      "outputs": [],
      "source": [
        "import pymongo\n",
        "\n",
        "\n",
        "def get_mongo_client(mongo_uri):\n",
        "    \"\"\"Establish and validate connection to the MongoDB.\"\"\"\n",
        "\n",
        "    client = pymongo.MongoClient(\n",
        "        mongo_uri, appname=\"devrel.showcase.zero_to_hero_genai.python\"\n",
        "    )\n",
        "\n",
        "    # Validate the connection\n",
        "    ping_result = client.admin.command(\"ping\")\n",
        "    if ping_result.get(\"ok\") == 1.0:\n",
        "        # Connection successful\n",
        "        print(\"Connection to MongoDB successful\")\n",
        "        return client\n",
        "    else:\n",
        "        print(\"Connection to MongoDB failed\")\n",
        "    return None\n",
        "\n",
        "\n",
        "MONGO_URI = os.environ[\"MONGO_URI\"]\n",
        "if not MONGO_URI:\n",
        "    print(\"MONGO_URI not set in environment variables\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "htR2RZRl7444",
        "outputId": "b90603ec-71e8-4fc3-c2f9-acccfe17f5b7"
      },
      "outputs": [],
      "source": [
        "from pymongo.errors import CollectionInvalid\n",
        "\n",
        "# Connect to MongoDB using the connection string from environment variables\n",
        "mongo_client = get_mongo_client(MONGO_URI)\n",
        "\n",
        "# Define database and collection names\n",
        "DB_NAME = \"virtual_primary_care_assistant\"\n",
        "DRUG_REVIEW_COLLECTION_NAME = \"drug_reviews\"\n",
        "CONVERSATION_COLLECTION_NAME = \"conversations\"\n",
        "\n",
        "\n",
        "# Get a reference to the database (creates it if it doesn't exist)\n",
        "db = mongo_client[DB_NAME]\n",
        "\n",
        "# Check if each required collection exists and create if needed\n",
        "for collection_name in [\n",
        "    DRUG_REVIEW_COLLECTION_NAME,\n",
        "    CONVERSATION_COLLECTION_NAME,\n",
        "]:\n",
        "    if collection_name not in db.list_collection_names():\n",
        "        try:\n",
        "            # Create the collection explicitly (this ensures it exists before we use it)\n",
        "            db.create_collection(collection_name)\n",
        "            print(f\"Collection '{collection_name}' created successfully.\")\n",
        "        except CollectionInvalid as e:\n",
        "            # Handle case where collection creation fails (e.g., if another process created it)\n",
        "            print(f\"Error creating collection: {e}\")\n",
        "    else:\n",
        "        # Collection already exists, no need to create it\n",
        "        print(f\"Collection '{collection_name}' already exists.\")\n",
        "\n",
        "# Get a reference to collections for later use\n",
        "drug_reviews_collection = db[DRUG_REVIEW_COLLECTION_NAME]\n",
        "healthcare_conversation_collection = db[CONVERSATION_COLLECTION_NAME]\n",
        "collections_list = [drug_reviews_collection, healthcare_conversation_collection]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XiUO0uRn9YgP"
      },
      "source": [
        "### Step 6: Index Creation\n",
        "\n",
        "#### What is a Vector Search Index and Why Do We Need It?\n",
        "A vector search index organizes high-dimensional embeddings for efficient similarity searches. Without it, finding similar vectors would require exhaustive comparisons against every vector in your database—becoming impractical at scale. These indexes enable fast semantic searches by organizing vectors based on their geometric relationships, essential for RAG, recommendation systems, and semantic search.\n",
        "\n",
        "#### Understanding HNSW (Hierarchical Navigable Small Worlds)\n",
        "HNSW is MongoDB Atlas Vector Search's algorithm of choice for approximate nearest neighbor searches:\n",
        "- Creates a multi-layered graph connecting vectors to their nearest neighbors\n",
        "- Enables logarithmic search complexity through a hierarchical approach\n",
        "- Balances speed and accuracy via configurable parameters\n",
        "- Provides excellent performance characteristics for production applications\n",
        "\n",
        "#### What is a Search Index and Why Do We Need It?\n",
        "Traditional search indexes improve retrieval speed for non-vector operations:\n",
        "- Fast filtering on metadata fields (dates, categories, etc.)\n",
        "- Supporting hybrid search combining keywords and semantics\n",
        "- Optimizing sorting and standard database operations"
      ]
    },
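    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "At query time, a vector index is used through the `$vectorSearch` aggregation stage. A minimal sketch of such a pipeline (the index name matches the one created below; the zero vector is a placeholder for a real query embedding from `get_embedding`):\n",
        "\n",
        "```python\n",
        "query_vector = [0.0] * 1024  # placeholder for get_embedding(user_query, \"query\")\n",
        "\n",
        "pipeline = [\n",
        "    {\n",
        "        \"$vectorSearch\": {\n",
        "            \"index\": \"vector_index_float32_ann\",\n",
        "            \"path\": \"embedding\",\n",
        "            \"queryVector\": query_vector,\n",
        "            \"numCandidates\": 100,  # ANN candidates HNSW inspects\n",
        "            \"limit\": 5,  # top matches to return\n",
        "        }\n",
        "    },\n",
        "    {\"$project\": {\"_id\": 0, \"review\": 1, \"score\": {\"$meta\": \"vectorSearchScore\"}}},\n",
        "]\n",
        "\n",
        "# results = list(drug_reviews_collection.aggregate(pipeline))\n",
        "```\n",
        "\n",
        "`numCandidates` trades accuracy for speed: higher values make the approximate search closer to exact, at the cost of latency."
      ]
    },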
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "d04q0U5b_BNL"
      },
      "source": [
        "In this step, we will create two critical indexes for our datasets:\n",
        "\n",
        "1. A vector search index (Float32 ANN Index) for the embedding field to enable semantic similarity searches\n",
        "2. A traditional search index on text fields to support keyword-based filtering and hybrid search approaches\n",
        "\n",
        "Together, these indexes will form the foundation of our information retrieval system, allowing for both precise keyword matching and nuanced semantic understanding."
      ]
    },
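    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For the keyword side of hybrid search, a standard Atlas Search index maps the text fields that should be searchable. A hedged sketch of such a definition for the drug reviews collection (field names follow the dataset columns above; adjust them to your schema):\n",
        "\n",
        "```python\n",
        "search_index_definition = {\n",
        "    \"mappings\": {\n",
        "        \"dynamic\": False,  # index only the fields listed below\n",
        "        \"fields\": {\n",
        "            \"drugName\": {\"type\": \"string\"},\n",
        "            \"condition\": {\"type\": \"string\"},\n",
        "            \"review\": {\"type\": \"string\"},\n",
        "        },\n",
        "    }\n",
        "}\n",
        "```\n",
        "\n",
        "Keeping `dynamic` set to `False` keeps the index small and the keyword search targeted at the fields that matter."
      ]
    },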
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-GufBgm0_KGU"
      },
      "source": [
        "#### Create vector search indexes"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 39,
      "metadata": {
        "id": "V666fTeT9bpp"
      },
      "outputs": [],
      "source": [
        "from pymongo.operations import SearchIndexModel\n",
        "\n",
        "\n",
        "def setup_vector_search_index(collection, index_definition, index_name=\"vector_index\"):\n",
        "    \"\"\"\n",
        "    Setup a vector search index for a MongoDB collection.\n",
        "\n",
        "    Args:\n",
        "    collection: MongoDB collection object\n",
        "    index_definition: Dictionary containing the index definition\n",
        "    index_name: Name of the index (default: \"vector_index\")\n",
        "    \"\"\"\n",
        "    new_vector_search_index_model = SearchIndexModel(\n",
        "        definition=index_definition, name=index_name, type=\"vectorSearch\"\n",
        "    )\n",
        "\n",
        "    # Create the new index\n",
        "    try:\n",
        "        result = collection.create_search_index(model=new_vector_search_index_model)\n",
        "        print(f\"Creating index '{index_name}' for {collection.name} collection\")\n",
        "\n",
        "        return result\n",
        "\n",
        "    except Exception as e:\n",
        "        print(f\"Error creating new vector search index '{index_name}': {e!s}\")\n",
        "        return None"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 40,
      "metadata": {
        "id": "uk9ICFQn9iez"
      },
      "outputs": [],
      "source": [
        "# Define the configuration for a vector index using float32 precision with approximate nearest neighbor (ANN) search.\n",
        "vector_index_definition_float32_ann = {\n",
        "    # 'fields' holds a list of field configurations that specify how to interpret the data for indexing.\n",
        "    \"fields\": [\n",
        "        {\n",
        "            # The field is of type 'vector', indicating that it contains vectorized (numerical) data.\n",
        "            \"type\": \"vector\",\n",
        "            # 'path' specifies the key in the data where the vector (embedding) is stored.\n",
        "            \"path\": \"embedding\",\n",
        "            # 'numDimensions' indicates the number of dimensions in the embedding vector.\n",
        "            # Here, it is set to 1024, which is the default dimension size of embeddings generated by the model.\n",
        "            \"numDimensions\": 1024,\n",
        "            # 'similarity' defines the method used to compare vectors; in this case, cosine similarity is used.\n",
        "            \"similarity\": \"cosine\",\n",
        "        }\n",
        "    ]\n",
        "}"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 41,
      "metadata": {
        "id": "uPOuR2en9liA"
      },
      "outputs": [],
      "source": [
        "# The name of the vector search index, reused across all collections\n",
        "vector_search_float32_ann_index_name = \"vector_index_float32_ann\""
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ZfkFKMjr9roG",
        "outputId": "2278f3cd-df78-4757-953b-f8e792c99eb2"
      },
      "outputs": [],
      "source": [
        "# Iterate over a list of collections to set up a vector search index for each collection.\n",
        "\n",
        "for specific_collection in collections_list:\n",
        "    # Call the function setup_vector_search_index to configure the vector search index.\n",
        "    # Parameters:\n",
        "    #   - collection_name: The current collection (drug review or conversation data).\n",
        "    #   - vector_index_definition_float32_ann: The definition settings for the vector index,\n",
        "    #     using float32 precision for approximate nearest neighbor (ANN) search.\n",
        "    #   - vector_search_float32_ann_index_name: The designated name for the vector search index.\n",
        "    setup_vector_search_index(\n",
        "        specific_collection,\n",
        "        vector_index_definition_float32_ann,\n",
        "        vector_search_float32_ann_index_name,\n",
        "    )"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3wsq9kBk-O_N"
      },
      "source": [
        "#### Create text search indexes"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 43,
      "metadata": {
        "id": "Vvi5R9n1eqcM"
      },
      "outputs": [],
      "source": [
        "def setup_text_search_index(collection, definition, index_name=\"text_search_index\"):\n",
        "    \"\"\"\n",
        "    Setup a text search index for a MongoDB collection in Atlas.\n",
        "\n",
        "    Args:\n",
        "        collection (Collection): MongoDB collection object.\n",
        "        definition (dict): The search index definition configuration.\n",
        "        index_name (str): Name of the index (default: \"text_search_index\").\n",
        "    \"\"\"\n",
        "    # Construct the search index model using the provided definition.\n",
        "    # This model specifies the configuration for how MongoDB will index and search the text content.\n",
        "    search_index_model = {\n",
        "        \"name\": index_name,  # Unique identifier for the index.\n",
        "        \"type\": \"search\",  # Specifies that we're creating a full-text search index.\n",
        "        \"definition\": definition,  # Use the passed definition for mapping configuration.\n",
        "    }\n",
        "\n",
        "    # Attempt to create the search index on the MongoDB collection.\n",
        "    try:\n",
        "        result = collection.create_search_index(search_index_model)\n",
        "        print(f\"Creating index '{index_name}' for {collection.name} collection\")\n",
        "        return result\n",
        "    except Exception as e:\n",
        "        # Handle any errors that might occur during index creation.\n",
        "        # Common issues might include duplicate index names or permission errors.\n",
        "        print(f\"Error creating text search index '{index_name}': {e}\")\n",
        "        return None"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 44,
      "metadata": {
        "id": "8WdlGeyHe3hJ"
      },
      "outputs": [],
      "source": [
        "# Define the text search index definition for the drugs_review dataset.\n",
        "# This configuration specifies that only the \"drugName\", \"condition\" and \"review\" fields will be indexed,\n",
        "# and automatic field detection is disabled.\n",
        "drug_review_text_search_definition = {\n",
        "    \"mappings\": {\n",
        "        \"dynamic\": False,  # Disable automatic detection; only explicitly defined fields are indexed.\n",
        "        \"fields\": {\n",
        "            \"drugName\": {\n",
        "                \"type\": \"string\"\n",
        "            },  # Index the \"drugName\" field as searchable text.\n",
        "            \"condition\": {\n",
        "                \"type\": \"string\"\n",
        "            },  # Index the \"condition\" field as searchable text.\n",
        "            \"review\": {\n",
        "                \"type\": \"string\"\n",
        "            },  # Index the \"review\" field as searchable text.\n",
        "        },\n",
        "    }\n",
        "}"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 45,
      "metadata": {
        "id": "CxuOohMPfNX5"
      },
      "outputs": [],
      "source": [
        "# Define the text search index definition for the conversations dataset.\n",
        "# This configuration specifies that only the \"input\" fields will be indexed,\n",
        "# and automatic field detection is disabled.\n",
        "conversation_text_search_definition = {\n",
        "    \"mappings\": {\n",
        "        \"dynamic\": False,  # Disable automatic detection; only explicitly defined fields are indexed.\n",
        "        \"fields\": {\n",
        "            \"input\": {\n",
        "                \"type\": \"string\"\n",
        "            },  # Index the \"input\" field as searchable text.\n",
        "        },\n",
        "    }\n",
        "}"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "dURrHiWG-TRe",
        "outputId": "a66fcc3b-e59e-41a9-cc96-0a4c46611120"
      },
      "outputs": [],
      "source": [
        "setup_text_search_index(\n",
        "    drug_reviews_collection, drug_review_text_search_definition, \"text_search_index\"\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 53
        },
        "id": "AtzZL-0thWas",
        "outputId": "e429dc4a-0aea-4db7-a5a5-c8e3945ea668"
      },
      "outputs": [],
      "source": [
        "setup_text_search_index(\n",
        "    healthcare_conversation_collection,\n",
        "    conversation_text_search_definition,\n",
        "    \"text_search_index\",\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "h0CsJdxo93eD"
      },
      "source": [
        "### Step 7: Data Ingestion"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "wC_nvaPu95bT",
        "outputId": "3d226528-3468-4fc9-e6b5-e1c89661d4c2"
      },
      "outputs": [],
      "source": [
        "# Convert the pandas DataFrame to a list of dictionaries\n",
        "# Each row becomes a dictionary where column names are keys\n",
        "healthcare_conversation_dataset = healthcare_conversation_dataset.to_dict(\"records\")\n",
        "drug_reviews_dataset = drug_reviews_dataset.to_dict(\"records\")\n",
        "\n",
        "# Insert all documents into MongoDB in a single bulk operation\n",
        "# This is much more efficient than inserting documents one at a time\n",
        "healthcare_conversation_collection.insert_many(healthcare_conversation_dataset)\n",
        "drug_reviews_collection.insert_many(drug_reviews_dataset)\n",
        "\n",
        "# Confirm successful data ingestion to the user\n",
        "print(\"Data ingestion into MongoDB completed\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cvpnJxBr_-GF"
      },
      "source": [
        "### Step 8: Implementing Powerful Full-Text Search Capabilities\n",
        "\n",
        "In this step, we'll develop a robust full-text search function that leverages MongoDB's text search capabilities. This function will enable precise keyword matching across our drug reviews dataset, allowing users to find exact information quickly and efficiently."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 49,
      "metadata": {
        "id": "g7ViOdXIBA2z"
      },
      "outputs": [],
      "source": [
        "def text_search_with_mongodb(query_text, collection, top_n=5, paths=\"review\"):\n",
        "    \"\"\"\n",
        "    Perform a text search in the MongoDB collection based on the user query.\n",
        "\n",
        "    Args:\n",
        "        query_text (str): The user's query string.\n",
        "        collection (MongoCollection): The MongoDB collection to search.\n",
        "        top_n (int): The number of top results to return.\n",
        "        paths (str or list): The field(s) to search within. This can be a single field (as a string)\n",
        "                             or multiple fields (as a list of strings).\n",
        "\n",
        "    Returns:\n",
        "        list: A list of matching documents.\n",
        "    \"\"\"\n",
        "    # If a single field is provided as a string, convert it to a list for consistency.\n",
        "    if not isinstance(paths, list):\n",
        "        paths = [paths]\n",
        "\n",
        "    # Define the text search stage using MongoDB's $search operator.\n",
        "    # This is part of Atlas Search and provides more powerful text search capabilities\n",
        "    # than MongoDB's standard text index.\n",
        "    text_search_stage = {\n",
        "        \"$search\": {\n",
        "            \"index\": \"text_search_index\",  # Reference the previously created search index.\n",
        "            \"text\": {\n",
        "                \"query\": query_text,  # The actual search term provided by the user.\n",
        "                \"path\": paths,  # Search within the specified field(s).\n",
        "            },\n",
        "        }\n",
        "    }\n",
        "\n",
        "    # Limit the number of results returned to improve performance.\n",
        "    # This is especially important for large collections.\n",
        "    limit_stage = {\"$limit\": top_n}\n",
        "\n",
        "    # Define which fields to include in the returned documents.\n",
        "    # Excluding unnecessary fields reduces bandwidth and processing overhead.\n",
        "    project_stage = {\n",
        "        \"$project\": {\n",
        "            \"_id\": 0,  # Exclude MongoDB's internal ID field.\n",
        "            \"embedding\": 0,  # Exclude the embedding field.\n",
        "        }\n",
        "    }\n",
        "\n",
        "    # Combine all stages into a MongoDB aggregation pipeline.\n",
        "    # The pipeline will execute stages in sequence: search, limit, then project.\n",
        "    pipeline = [text_search_stage, limit_stage, project_stage]\n",
        "\n",
        "    # Execute the search by running the aggregation pipeline against the specified collection.\n",
        "    # Convert the cursor to a list to ensure results are fully fetched before the function returns.\n",
        "    results = collection.aggregate(pipeline)\n",
        "\n",
        "    return list(results)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 50,
      "metadata": {
        "id": "yScTQsExBOJu"
      },
      "outputs": [],
      "source": [
        "# Define our search query text\n",
        "query_text = \"cough\"\n",
        "\n",
        "# Execute the full-text search using our previously defined function.\n",
        "# This searches through the MongoDB collection for documents where any of the specified fields\n",
        "# (\"review\", \"drugName\", \"condition\") match the query text \"cough\".\n",
        "get_knowledge_full_text_mdb = text_search_with_mongodb(\n",
        "    query_text, drug_reviews_collection, paths=[\"review\", \"drugName\", \"condition\"]\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 206
        },
        "id": "HEUhKmSPBW00",
        "outputId": "db88cd55-6313-4df8-ef40-aa9e1b8e9424"
      },
      "outputs": [],
      "source": [
        "pd.DataFrame(get_knowledge_full_text_mdb).head()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5s5kvmDeBjUt"
      },
      "source": [
        "### Step 9: Define Semantic Search Function (Vector Search)\n",
        "\n",
        "The `semantic_search_with_mongodb` function performs a vector search in the MongoDB collection based on the user query.\n",
        "\n",
        "**Semantic search and vector search are intrinsically connected—semantic search is the application of vector search technology to understand the meaning behind queries rather than just matching keywords. Vector search powers semantic search by converting text into numerical vector representations (embeddings) that capture semantic meaning, allowing the system to find content with similar meanings even when the exact words differ.**\n",
        "\n",
        "- `user_query` parameter is the user's query string.\n",
        "- `collection` parameter is the MongoDB collection to search.\n",
        "- `top_n` parameter is the number of top results to return.\n",
        "- `vector_search_index_name` parameter is the name of the vector search index to use for the search.\n",
        "\n",
        "The `numCandidates` parameter is the number of candidate matches to consider. This is set to 150 to match the number of candidate matches to consider in the Elasticsearch vector search.\n",
        "\n",
        "Another point to note is the queries in MongoDB are performed using the `aggregate` function enabled by the MongoDB Query Language(MQL).\n",
        "\n",
        "This allows for more flexibility in the queries and the ability to perform more complex searches. And data processing operations can be defined as stages in the pipeline. If you are a data engineer, data scientist or ML Engineer, the concept of pipeline processing is a key concept."
      ]
    },
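    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The cosine similarity metric configured on our vector index can be sketched in plain Python. This is a toy illustration of the underlying math (not MongoDB's internal implementation), and the 3-dimensional vectors below stand in for the 1024-dimensional embeddings used in this notebook. Documents whose embeddings point in nearly the same direction as the query embedding score close to 1.0 and are ranked highest by `$vectorSearch`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import math\n",
        "\n",
        "\n",
        "def cosine_similarity(a, b):\n",
        "    # Dot product of the two vectors divided by the product of their norms.\n",
        "    dot = sum(x * y for x, y in zip(a, b))\n",
        "    norm_a = math.sqrt(sum(x * x for x in a))\n",
        "    norm_b = math.sqrt(sum(x * x for x in b))\n",
        "    return dot / (norm_a * norm_b)\n",
        "\n",
        "\n",
        "# Toy 3-dimensional vectors standing in for real embeddings.\n",
        "query_vec = [0.1, 0.9, 0.2]\n",
        "doc_similar = [0.15, 0.85, 0.25]\n",
        "doc_unrelated = [0.9, 0.1, 0.0]\n",
        "\n",
        "print(cosine_similarity(query_vec, doc_similar))  # close to 1.0\n",
        "print(cosine_similarity(query_vec, doc_unrelated))  # much lower"
      ]
    },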
    {
      "cell_type": "code",
      "execution_count": 52,
      "metadata": {
        "id": "G2ebEXaeBkOY"
      },
      "outputs": [],
      "source": [
        "def semantic_search_with_mongodb(\n",
        "    user_query, collection, top_n=5, vector_search_index_name=\"vector_index\"\n",
        "):\n",
        "    \"\"\"\n",
        "    Perform a vector search in the MongoDB collection based on the user query.\n",
        "\n",
        "    Args:\n",
        "    user_query (str): The user's query string.\n",
        "    collection (MongoCollection): The MongoDB collection to search.\n",
        "    top_n (int): The number of top results to return.\n",
        "    vector_search_index_name (str): The name of the vector search index.\n",
        "\n",
        "    Returns:\n",
        "    list: A list of matching documents.\n",
        "    \"\"\"\n",
        "\n",
        "    # Generate an embedding for the user's query with the embedding model\n",
        "    # This embedding represents the semantic meaning of the query as a vector\n",
        "    query_embedding = get_embedding(user_query)\n",
        "\n",
        "    # Check if we have a valid embedding for the query\n",
        "    if query_embedding is None:\n",
        "        return \"Invalid query or embedding generation failed.\"\n",
        "\n",
        "    # Define the vector search stage using MongoDB's $vectorSearch operator\n",
        "    # This stage performs the semantic similarity search\n",
        "    vector_search_stage = {\n",
        "        \"$vectorSearch\": {\n",
        "            \"index\": vector_search_index_name,  # The vector index we created earlier\n",
        "            \"queryVector\": query_embedding,  # The numerical vector representing our query\n",
        "            \"path\": \"embedding\",  # The field containing document embeddings\n",
        "            \"numCandidates\": 100,  # Explore this many vectors for potential matches\n",
        "            \"limit\": top_n,  # Return only the top N most similar results\n",
        "        }\n",
        "    }\n",
        "\n",
        "    # Define which fields to include in the results and their format\n",
        "    project_stage = {\n",
        "        \"$project\": {\n",
        "            \"_id\": 0,  # Exclude MongoDB's internal ID\n",
        "            \"embedding\": 0,\n",
        "            \"score\": {\n",
        "                \"$meta\": \"vectorSearchScore\"  # Include similarity score from vector search\n",
        "            },\n",
        "        }\n",
        "    }\n",
        "\n",
        "    # Combine the search and projection stages into a complete pipeline\n",
        "    pipeline = [vector_search_stage, project_stage]\n",
        "\n",
        "    # Execute the pipeline against our collection and get results\n",
        "    results = collection.aggregate(pipeline)\n",
        "\n",
        "    # Convert cursor to a Python list for easier handling\n",
        "    return list(results)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 53,
      "metadata": {
        "id": "VoS2qMMoCERk"
      },
      "outputs": [],
      "source": [
        "# Define our search query about cough treatment.\n",
        "# The query asks for a recommendation on what drug to use for a cough.\n",
        "query_text = \"I have a cough, what drug can I use?\"\n",
        "\n",
        "# Execute a semantic search using our MongoDB collection.\n",
        "# Unlike keyword search, semantic search retrieves documents that have a similar meaning to the query,\n",
        "# even if they don't contain the exact same words.\n",
        "get_knowledge_semantic_mdb = semantic_search_with_mongodb(\n",
        "    query_text,  # The natural language query for semantic search.\n",
        "    drug_reviews_collection,  # The MongoDB collection containing drug review documents.\n",
        "    vector_search_index_name=vector_search_float32_ann_index_name,  # The reference name of our vector index for semantic search.\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 206
        },
        "id": "mdKF1ORgCDn-",
        "outputId": "a972f311-890d-41c4-8151-53555ba93e36"
      },
      "outputs": [],
      "source": [
        "# The results will contain semantically relevant documents related to cough treatment,\n",
        "# ranked by their vector similarity scores to the query embedding generated from our query.\n",
        "pd.DataFrame(get_knowledge_semantic_mdb).head()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fD4lUsBICTb_"
      },
      "source": [
        "#### ⛳ Knowledge Checkpoint:\n",
        "\n",
        "You now understand semantic search and vector search, including:\n",
        "\n",
        "- How semantic search leverages vector search technology to find content based on meaning rather than exact keyword matches\n",
        "- The relationship between text embeddings and vector search functionality\n",
        "- How MongoDB implements vector search through the $vectorSearch operator\n",
        "- The role of similarity metrics in determining relevance between queries and documents\n",
        "- Why vector search enables more natural language understanding in search systems\n",
        "- The practical implementation of semantic search in a MongoDB pipeline\n",
        "\n",
        "This foundation will be essential as we progress toward building more sophisticated retrieval and generation systems."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BCkQFhSFDW9d"
      },
      "source": [
        "### Step 10: Define Hybrid Search Function\n",
        "\n",
        "\n",
        "The `hybrid_search_with_mongodb` function conducts a hybrid search on a MongoDB Atlas collection that combines a vector search and a full-text search using Atlas Search.\n",
        "\n",
        "In the MongoDB hybrid search function, there are two weights:\n",
        "\n",
        "- vector_weight = 0.5: This weight scales the score obtained from the vector search portion.\n",
        "- full_text_weight = 0.5: This weight scales the score from the full-text search portion."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "G7S2dEzeDZ6W"
      },
      "source": [
        "#### Note: The MongoDB hybrid search function uses two weights:\n",
        "\n",
        "- `vector_weight`\n",
        "- `full_text_weight`\n",
        "\n",
        "These control the influence of each search component on the final score.\n",
        "\n",
        "Here's how they work:\n",
        "\n",
        "**Purpose:**\n",
        "The weights allow you to adjust how much the vector (semantic) search and the full-text search contribute to the overall ranking.\n",
        "For example, a higher full_text_weight means that the full-text search results will have a larger impact on the final score, whereas a higher vector_weight would give more importance to the vector similarity score.\n",
        "\n",
        "**Usage in the Pipeline:**\n",
        "Within the aggregation pipeline, after retrieving results from each search type, the function computes a reciprocal ranking score for each result (using an expression like `1/(rank + 60)`).\n",
        "This score is then multiplied by the corresponding weight:\n",
        "\n",
        "**Vector Search:**\n",
        "\n",
        "```\n",
        "\"vs_score\": {\n",
        "  \"$multiply\": [ vector_weight, { \"$divide\": [1.0, { \"$add\": [\"$rank\", 60] } ] } ]\n",
        "}\n",
        "```\n",
        "\n",
        "\n",
        "**Full-Text Search:**\n",
        "```\n",
        "\"fts_score\": {\n",
        "  \"$multiply\": [ full_text_weight, { \"$divide\": [1.0, { \"$add\": [\"$rank\", 60] } ] } ]\n",
        "}\n",
        "```\n",
        "\n",
        "Finally, these weighted scores are combined (typically by adding them together) to produce a final score that determines the ranking of the documents.\n",
        "\n",
        "**Impact:**\n",
        "By adjusting these weights, you can fine-tune the search results to better match your application's needs. For instance, if the full-text component is more reliable for your dataset, you might set full_text_weight higher than vector_weight.\n",
        "\n",
        "The weights in the MongoDB function allow you to balance the contributions from vector-based and full-text search components, ensuring that the final ranking score reflects the desired importance of each search method."
      ]
    },
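    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before looking at the full pipeline, the reciprocal-rank scoring described above can be reproduced in a few lines of plain Python. The ranks here are hypothetical examples (rank 0 means the document came first in a result list, matching the zero-based `includeArrayIndex` from `$unwind`). A document missing from one list simply contributes 0 from that component, which is what the `$ifNull` defaults accomplish in the pipeline."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def rrf_score(rank, weight, k=60):\n",
        "    # Reciprocal rank fusion term: weight * 1 / (rank + k),\n",
        "    # mirroring the $multiply / $divide / $add expression in the pipeline.\n",
        "    return weight * (1.0 / (rank + k))\n",
        "\n",
        "\n",
        "# Hypothetical example: a document ranked 1st (rank 0) by vector search\n",
        "# and 4th (rank 3) by full-text search, with equal weights of 0.5.\n",
        "combined = rrf_score(0, 0.5) + rrf_score(3, 0.5)\n",
        "print(combined)  # 0.5/60 + 0.5/63"
      ]
    },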
    {
      "cell_type": "code",
      "execution_count": 55,
      "metadata": {
        "id": "s48NMn6cCxCU"
      },
      "outputs": [],
      "source": [
        "def hybrid_search_with_mongodb(\n",
        "    user_query,\n",
        "    collection,\n",
        "    vector_search_index_name=\"vector_index\",\n",
        "    text_search_index_name=\"text_search_index\",\n",
        "    vector_weight=0.5,\n",
        "    full_text_weight=0.5,\n",
        "    top_k=10,\n",
        "    text_search_paths=[\"review\"],\n",
        "):\n",
        "    \"\"\"\n",
        "    Conduct a hybrid search on a MongoDB Atlas collection that combines a vector search\n",
        "    and a full-text search using Atlas Search.\n",
        "\n",
        "    Args:\n",
        "        user_query (str): The user's query string.\n",
        "        collection (MongoCollection): The MongoDB collection to search.\n",
        "        vector_search_index_name (str): The name of the vector search index.\n",
        "        text_search_index_name (str): The name of the text search index.\n",
        "        vector_weight (float): The weight of the vector search.\n",
        "        full_text_weight (float): The weight of the full-text search.\n",
        "        top_k (int): Number of results to return.\n",
        "\n",
        "    Returns:\n",
        "        list: A list of documents (dict) with combined scores.\n",
        "    \"\"\"\n",
        "\n",
        "    # Get the collection name from the collection object\n",
        "    collection_name = collection.name\n",
        "\n",
        "    # Generate the embedding vector for the user's query\n",
        "    query_vector = get_embedding(user_query)\n",
        "\n",
        "    # Create a MongoDB aggregation pipeline to perform hybrid search\n",
        "    pipeline = [\n",
        "        # PART 1: VECTOR SEARCH\n",
        "        # Perform semantic vector search using the query embedding\n",
        "        {\n",
        "            \"$vectorSearch\": {\n",
        "                \"index\": vector_search_index_name,  # Name of the vector search index\n",
        "                \"path\": \"embedding\",  # Field containing document embeddings\n",
        "                \"queryVector\": query_vector,  # The query vector to compare against\n",
        "                \"numCandidates\": 100,  # Number of candidates to consider for similarity\n",
        "                \"limit\": top_k,  # Initial limit of results\n",
        "            }\n",
        "        },\n",
        "        # Group all vector search results into a single document\n",
        "        # This prepares for the ranking step\n",
        "        {\n",
        "            \"$group\": {\n",
        "                \"_id\": None,\n",
        "                \"docs\": {\"$push\": \"$$ROOT\"},  # Push all documents into an array\n",
        "            }\n",
        "        },\n",
        "        # Unwind the array of documents to process each individually\n",
        "        # This adds a rank based on the original vector search order\n",
        "        {\n",
        "            \"$unwind\": {\n",
        "                \"path\": \"$docs\",\n",
        "                \"includeArrayIndex\": \"rank\",  # Add the array index as a rank field\n",
        "            }\n",
        "        },\n",
        "        # Calculate a vector search score based on rank\n",
        "        # Higher ranks get lower scores via division formula\n",
        "        {\n",
        "            \"$addFields\": {\n",
        "                \"vs_score\": {\n",
        "                    \"$multiply\": [\n",
        "                        vector_weight,  # Apply configurable weight to vector scores\n",
        "                        {\n",
        "                            \"$divide\": [1.0, {\"$add\": [\"$rank\", 60]}]\n",
        "                        },  # Score formula: 1/(rank+60)\n",
        "                    ]\n",
        "                }\n",
        "            }\n",
        "        },\n",
        "        # Project only the needed fields from each document\n",
        "        # Including the calculated vector search score\n",
        "        {\n",
        "            \"$project\": {\n",
        "                \"vs_score\": 1,\n",
        "                \"_id\": \"$docs._id\",\n",
        "                \"review\": \"$docs.review\",\n",
        "                \"drugName\": \"$docs.drugName\",\n",
        "                \"condition\": \"$docs.condition\",\n",
        "            }\n",
        "        },\n",
        "        # PART 2: TEXT SEARCH\n",
        "        # Combine with full-text search results using unionWith\n",
        "        {\n",
        "            \"$unionWith\": {\n",
        "                \"coll\": collection_name,  # Collection to search\n",
        "                \"pipeline\": [\n",
        "                    # Perform full text search using Atlas Search\n",
        "                    {\n",
        "                        \"$search\": {\n",
        "                            \"index\": text_search_index_name,  # Name of the text search index\n",
        "                            \"text\": {\n",
        "                                \"query\": user_query,  # Raw text query from user\n",
        "                                \"path\": text_search_paths,  # Field(s) to search in\n",
        "                            },\n",
        "                        }\n",
        "                    },\n",
        "                    {\"$limit\": top_k},  # Limit initial text search results\n",
        "                    # Group text search results similar to vector search\n",
        "                    {\"$group\": {\"_id\": None, \"docs\": {\"$push\": \"$$ROOT\"}}},\n",
        "                    # Unwind and add ranking just like in vector search\n",
        "                    {\"$unwind\": {\"path\": \"$docs\", \"includeArrayIndex\": \"rank\"}},\n",
        "                    # Calculate a full-text search score based on rank\n",
        "                    # Using the same formula as vector search\n",
        "                    {\n",
        "                        \"$addFields\": {\n",
        "                            \"fts_score\": {\n",
        "                                \"$multiply\": [\n",
        "                                    full_text_weight,  # Apply configurable weight to text scores\n",
        "                                    {\"$divide\": [1.0, {\"$add\": [\"$rank\", 60]}]},\n",
        "                                ]\n",
        "                            }\n",
        "                        }\n",
        "                    },\n",
        "                    # Project only the needed fields for text search results\n",
        "                    {\n",
        "                        \"$project\": {\n",
        "                            \"fts_score\": 1,\n",
        "                            \"_id\": \"$docs._id\",\n",
        "                            \"review\": \"$docs.review\",\n",
        "                            \"drugName\": \"$docs.drugName\",\n",
        "                            \"condition\": \"$docs.condition\",\n",
        "                        }\n",
        "                    },\n",
        "                ],\n",
        "            }\n",
        "        },\n",
        "        # PART 3: COMBINING RESULTS\n",
        "        # Group by document ID to handle duplicates from both searches\n",
        "        # This ensures we don't return the same document twice\n",
        "        {\n",
        "            \"$group\": {\n",
        "                \"_id\": \"$_id\",\n",
        "                \"review\": {\"$first\": \"$review\"},\n",
        "                \"drugName\": {\"$first\": \"$drugName\"},\n",
        "                \"condition\": {\"$first\": \"$condition\"},\n",
        "                \"vs_score\": {\n",
        "                    \"$max\": \"$vs_score\"\n",
        "                },  # Take highest vector score if present in both\n",
        "                \"fts_score\": {\n",
        "                    \"$max\": \"$fts_score\"\n",
        "                },  # Take highest text score if present in both\n",
        "            }\n",
        "        },\n",
        "        # Handle documents that only appeared in one search type\n",
        "        # by setting missing scores to 0\n",
        "        {\n",
        "            \"$project\": {\n",
        "                \"_id\": 1,\n",
        "                \"review\": 1,\n",
        "                \"drugName\": 1,\n",
        "                \"condition\": 1,\n",
        "                \"vs_score\": {\n",
        "                    \"$ifNull\": [\"$vs_score\", 0]\n",
        "                },  # Default to 0 if not in vector results\n",
        "                \"fts_score\": {\n",
        "                    \"$ifNull\": [\"$fts_score\", 0]\n",
        "                },  # Default to 0 if not in text results\n",
        "            }\n",
        "        },\n",
        "        # Calculate the final combined score and remove _id from results\n",
        "        {\n",
        "            \"$project\": {\n",
        "                \"score\": {\"$add\": [\"$fts_score\", \"$vs_score\"]},  # Combined final score\n",
        "                \"_id\": 0,  # Exclude MongoDB ID\n",
        "                \"review\": 1,\n",
        "                \"drugName\": 1,\n",
        "                \"condition\": 1,\n",
        "                \"vs_score\": 1,  # Keep individual scores for analysis\n",
        "                \"fts_score\": 1,\n",
        "            }\n",
        "        },\n",
        "        # Sort by the combined score in descending order\n",
        "        {\"$sort\": {\"score\": -1}},\n",
        "        # Return only the top k results based on combined score\n",
        "        {\"$limit\": top_k},\n",
        "    ]\n",
        "\n",
        "    # Execute the aggregation pipeline and convert results to a list\n",
        "    results = list(collection.aggregate(pipeline))\n",
        "    return results"
      ]
    },
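    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The rank-based scoring in the pipeline above is a form of Reciprocal Rank Fusion (RRF): each result contributes `weight / (rank + 60)`, where 60 is the conventional RRF smoothing constant. As a rough illustration (the real computation happens inside MongoDB's aggregation stages; `rrf_score` here is just a hypothetical helper), the combined score behaves like this in plain Python:\n",
        "\n",
        "```python\n",
        "def rrf_score(rank: int, weight: float = 1.0, k: int = 60) -> float:\n",
        "    # Reciprocal Rank Fusion: lower rank indices (better positions) score higher\n",
        "    return weight * (1.0 / (rank + k))\n",
        "\n",
        "# A document ranked 0th by vector search and 2nd by text search,\n",
        "# with equal 0.5 weights, gets the combined score:\n",
        "combined = rrf_score(0, weight=0.5) + rrf_score(2, weight=0.5)\n",
        "print(round(combined, 5))\n",
        "```\n",
        "\n",
        "Because ranks enter through a reciprocal, the fused score rewards documents that appear near the top of either result list without letting any single raw score dominate."
      ]
    },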
    {
      "cell_type": "code",
      "execution_count": 56,
      "metadata": {
        "id": "BqnPf3sQEcHy"
      },
      "outputs": [],
      "source": [
        "# Define our query about cough treatments\n",
        "# This query asks for medication recommendations for a specific symptom\n",
        "query_text = \"I have a cough, what drug would be best?\"\n",
        "\n",
        "# Execute a hybrid search that combines both vector (semantic) and full-text search\n",
        "# We weight the two strategies equally (0.5 each) since:\n",
        "#   1. Semantic matching helps relate \"cough\" to conditions and symptoms in reviews\n",
        "#   2. Exact keyword matches on drug names and conditions are equally valuable\n",
        "get_knowledge_hybrid_mdb = hybrid_search_with_mongodb(\n",
        "    query_text,  # Our natural language query\n",
        "    drug_reviews_collection,  # The MongoDB collection containing our data\n",
        "    vector_weight=0.5,  # Equal weight for the semantic/vector search component\n",
        "    full_text_weight=0.5,  # Equal weight for the keyword/text search component\n",
        "    top_k=10,  # Return the top 10 most relevant results\n",
        "    text_search_paths=[\n",
        "        \"review\",\n",
        "        \"condition\",\n",
        "        \"drugName\",\n",
        "    ],  # Search within the reviews, conditions and drugNames fields\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 206
        },
        "id": "LsKsBeGsEjxg",
        "outputId": "a94066e6-9e09-41b7-84d3-6456e492689e"
      },
      "outputs": [],
      "source": [
        "pd.DataFrame(get_knowledge_hybrid_mdb).head()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "885V43-MEu4a"
      },
      "source": [
        "#### ⛳ Knowledge Checkpoint:\n",
        "\n",
        "You now understand how to implement hybrid search by:\n",
        "1. Combining vector search for semantic understanding with text search for keyword matching\n",
        "2. Weighting these different search strategies based on query characteristics\n",
        "3. Using MongoDB's aggregation pipeline to merge and rank results from different search methods\n",
        "4. Calculating combined relevance scores that leverage both search technologies"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8JvT235rFKrF"
      },
      "source": [
        "## Part 2: Building Intelligent Search Systems (RAG)\n",
        "<a name=\"part2\"></a>\n",
        "\n",
        "---\n",
        "\n",
        "- Practical development of Retrieval Augmented Generation (RAG) systems"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4UB25DHQGrjb"
      },
      "source": [
        "### Step 1: Importing Libraries\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "9oNeIwG-GybG",
        "outputId": "c9e75955-ad15-4fd1-e30d-76abe1844416"
      },
      "outputs": [],
      "source": [
        "!pip install -Uq openai"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 59,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "GXr2mMHcG34t",
        "outputId": "0ba17020-de3d-46da-be1f-e7ce9802bd19"
      },
      "outputs": [],
      "source": [
        "set_env_securely(\"OPENAI_API_KEY\", \"Enter your OpenAI API Key: \")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wB0zzI8JFyGX"
      },
      "source": [
        "### Step 2: Setting up the LLM"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 60,
      "metadata": {
        "id": "H8ivA_5GEvUK"
      },
      "outputs": [],
      "source": [
        "# Import the OpenAI Python client library\n",
        "from openai import OpenAI\n",
        "\n",
        "# Initialize the OpenAI client\n",
        "# This will use the API key set in your environment variables (OPENAI_API_KEY)\n",
        "openai_client = OpenAI()\n",
        "\n",
        "# Create a chat completion request to the OpenAI API\n",
        "# This sends a conversation to GPT-4o and gets a response\n",
        "completion = openai_client.chat.completions.create(\n",
        "    model=\"gpt-4o\",  # Specify the GPT-4o model\n",
        "    messages=[\n",
        "        # Set the developer message to define the assistant's role and behavior\n",
        "        {\n",
        "            \"role\": \"developer\",\n",
        "            \"content\": \"You are a medical primary care virtual assistant.\",\n",
        "        },\n",
        "        # The user's initial message to start the conversation\n",
        "        {\"role\": \"user\", \"content\": \"Hello!\"},\n",
        "    ],\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "pllL1ICdHGP5",
        "outputId": "19f49612-7c23-4298-c41e-90f7ca891410"
      },
      "outputs": [],
      "source": [
        "# The response from this API call will contain the assistant's reply\n",
        "# which you would typically process with something like:\n",
        "print(completion.choices[0].message.content)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tgXbra6XNnTm"
      },
      "source": [
        "### Step 3: Setting Up The RAG Pipeline\n",
        "\n",
        "This step establishes our Retrieval-Augmented Generation (RAG) system, which enhances LLM responses with contextually relevant information:\n",
        "\n",
        "1. **Define the `custom_rag_pipeline` function**\n",
        "  * Create a comprehensive function that orchestrates all components of our RAG system\n",
        "  * Accept the user's query and the MongoDB collection to search as inputs\n",
        "\n",
        "2. **Implement the Retrieval component**\n",
        "  * Process the user's query to identify key information needs\n",
        "  * Execute our hybrid search mechanism (combining vector and keyword search)\n",
        "  * Use the top-ranked results so the strongest matches inform the answer\n",
        "\n",
        "3. **Process retrieved documents for context**\n",
        "  * Extract and consolidate the most relevant information from search results\n",
        "  * Format the retrieved content to optimize context window usage\n",
        "  * Structure the information to provide clear attribution and sources\n",
        "\n",
        "4. **Augment LLM prompt with retrieved context**\n",
        "  * Combine the user's original query with the retrieved information\n",
        "  * Apply prompt engineering techniques to guide the model's use of context\n",
        "  * Ensure the model distinguishes between provided context and its own knowledge\n",
        "\n",
        "5. **Generate and refine the final response**\n",
        "  * Process the LLM's output to ensure accuracy and relevance\n",
        "  * Format the response according to user preferences\n",
        "  * Include citations and references to source documents when appropriate"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 62,
      "metadata": {
        "id": "yyTCfDTeHKpP"
      },
      "outputs": [],
      "source": [
        "def custom_rag_pipeline(user_query, collection):\n",
        "    \"\"\"\n",
        "    Implements a custom Retrieval-Augmented Generation (RAG) pipeline.\n",
        "\n",
        "    Args:\n",
        "        user_query (str): The user's question or query.\n",
        "        collection (MongoCollection): MongoDB collection to search for relevant context.\n",
        "\n",
        "    Returns:\n",
        "        str: The LLM-generated response with citations.\n",
        "    \"\"\"\n",
        "    # 1. Retrieve relevant documents using the hybrid search method.\n",
        "    # NOTE: You can switch the retrieval mechanism between text and vector search as needed.\n",
        "    retrieved_docs = hybrid_search_with_mongodb(\n",
        "        user_query,\n",
        "        collection,\n",
        "        vector_search_index_name=vector_search_float32_ann_index_name,\n",
        "    )\n",
        "\n",
        "    # 2. Format the retrieved documents into context for the LLM.\n",
        "    formatted_context = \"\"\n",
        "\n",
        "    # Check if any documents were retrieved.\n",
        "    if retrieved_docs and len(retrieved_docs) > 0:\n",
        "        # Add a header for the context section.\n",
        "        formatted_context = \"\\n\\nRelevant information from drug reviews:\\n\\n\"\n",
        "\n",
        "        # Process each retrieved document and format its content.\n",
        "        for i, doc in enumerate(retrieved_docs):\n",
        "            # Extract key fields from the document.\n",
        "            review = doc.get(\"review\", \"No review available\")\n",
        "            condition = doc.get(\"condition\", \"No condition available\")\n",
        "            drug_name = doc.get(\"drugName\", \"No drug name available\")\n",
        "\n",
        "            # Append the formatted document with a citation reference.\n",
        "            formatted_context += f\"[{i+1}] Review: {review}\\nCondition: {condition}\\nDrug Name: {drug_name}\\n\\n\"\n",
        "\n",
        "    # 3. Craft the prompt for the LLM using the user query and the formatted context.\n",
        "    prompt = f\"\"\"\n",
        "Based on the following information, please answer the user's question:\n",
        "User Question: {user_query}\n",
        "{formatted_context}\n",
        "Please provide a comprehensive answer based on the information above.\n",
        "If the provided information does not contain the answer, state that clearly.\n",
        "Include citation numbers [X] to indicate which sources were used for specific details.\n",
        "\"\"\"\n",
        "    # 4. Send the prompt to the LLM and get the response.\n",
        "    response = openai_client.chat.completions.create(\n",
        "        model=\"gpt-4o\",\n",
        "        messages=[\n",
        "            {\n",
        "                \"role\": \"system\",\n",
        "                \"content\": \"You are a helpful assistant that provides accurate information based on the provided context. Always cite your sources.\",\n",
        "            },\n",
        "            {\"role\": \"user\", \"content\": prompt},\n",
        "        ],\n",
        "        temperature=0.3,  # Lower temperature for more factual responses.\n",
        "    )\n",
        "\n",
        "    # 5. Return the LLM's response.\n",
        "    return response.choices[0].message.content"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 157
        },
        "id": "M3FfbSKyO3US",
        "outputId": "45b67722-4cb6-454a-b1e8-a65e3a0ad8f9"
      },
      "outputs": [],
      "source": [
        "user_query = \"I have a cough, can you help me with some medications\"\n",
        "\n",
        "custom_rag_pipeline(user_query, drug_reviews_collection)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "C6popSUjQlCO"
      },
      "source": [
        "#### ⛳ Knowledge Checkpoint: RAG Pipeline Implementation\n",
        "\n",
        "You now understand how to build a complete Retrieval-Augmented Generation pipeline with MongoDB, including:\n",
        "\n",
        "- Retrieving relevant documents using hybrid search that combines semantic and keyword matching\n",
        "- Formatting retrieved documents with proper citations and source attribution\n",
        "- Creating effective prompts that guide the LLM to use the retrieved context appropriately\n",
        "- Configuring the LLM to prioritize factual responses based on provided information\n",
        "- Managing the end-to-end flow from user query to contextualized LLM response\n",
        "\n",
        "This pattern enables applications to leverage both the structured data in your MongoDB collections and the reasoning capabilities of large language models while maintaining accuracy and traceability."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8QKZrM4KRBqQ"
      },
      "source": [
        "## Part 3: Advanced AI Agents & Integration\n",
        "\n",
        "<a name=\"part3\"></a>\n",
        "\n",
        "\n",
        "---\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xAJ-jS4VRL68"
      },
      "source": [
        "### Step 1: Importing Libraries\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "AKSCMkCTPEzy",
        "outputId": "917c1b9a-4aea-477a-b3e5-2b6fd47aee2d"
      },
      "outputs": [],
      "source": [
        "!pip install -Uq openai-agents"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2xiC9mjDRcgh"
      },
      "source": [
        "### Step 2: Creating A Minimal Agent"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-I_JP0FaRbfD"
      },
      "source": [
        "An agent is a computational entity capable of acting autonomously on behalf of another entity to achieve specific objectives. It accomplishes these goals by processing inputs from its environment and leveraging available technical resources such as microservices, REST APIs, and functions.\n",
        "\n",
        "In the context of generative AI, the definition extends to include large language models (LLMs) that are guided by system instructions, equipped with various tools, and augmented with memory components.\n",
        "\n",
        "It is important to note that the definition of an agent is not standardized. Nonetheless, there is a growing consensus that various software systems can exhibit agentic characteristics, suggesting that agency exists on a spectrum."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VgybwYtMRiD6"
      },
      "source": [
        "Two main modules from the OpenAI SDK are used:\n",
        "\n",
        "1. **Agent**: The Agent module in the OpenAI SDK provides a robust framework for creating autonomous computational entities. It streamlines the process of building intelligent agents through a well-defined structure that supports customization, scalability, and integration with external tools and services. Every agent has common properties such as name, instructions, model, and tools.\n",
        "\n",
        "2. **Runner**: The execution engine that drives agent interactions. It handles the entire lifecycle of an agent’s run, from initiating LLM calls to processing outputs and managing transitions.\n",
        "  - Runner Execution Methods:\n",
        "    - ```run()```: An asynchronous method that executes the agent’s process and returns a RunResult.\n",
        "    - ```run_sync()```: A synchronous version that internally calls run().\n",
        "    - ```run_streamed()```: Executes the agent asynchronously in streaming mode, returning events as they are generated by the LLM, and ultimately a complete RunResultStreaming object.\n",
        "\n",
        "Note: Using ```run_sync()``` within a Jupyter Notebook or Google Colab environment will not work, as a Jupyter environment already runs its own event loop."
      ]
    },
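    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The distinction matters because a notebook cell already executes inside an event loop, so coroutine-based APIs must be awaited directly rather than driven through ```asyncio.run()```. A minimal sketch of the pattern, using a plain stand-in coroutine (`fake_run` is hypothetical, not part of the SDK):\n",
        "\n",
        "```python\n",
        "import asyncio\n",
        "\n",
        "async def fake_run(prompt: str) -> str:\n",
        "    # Stand-in for an asynchronous call such as Runner.run(...)\n",
        "    return f\"response to: {prompt}\"\n",
        "\n",
        "# In a notebook cell, await the coroutine directly:\n",
        "#   result = await fake_run(\"hello\")\n",
        "# In a standalone script (no running loop), drive it with asyncio.run():\n",
        "result = asyncio.run(fake_run(\"hello\"))\n",
        "print(result)\n",
        "```\n",
        "\n",
        "This is why the cells below call `await Runner.run(...)` directly instead of `Runner.run_sync(...)`."
      ]
    },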
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eg7OQQzDRkGj"
      },
      "source": [
        "Below, we will create a Minimal Agent.\n",
        "\n",
        "**A Minimal Agent is a large language model equipped with an instructional or system prompt that continuously operates in a loop until the desired outcome is achieved.**\n",
        "\n",
        "Our minimal agent is named \"Virtual Primary Care Assistant\", assigned the OpenAI gpt-4o model, and provided with detailed instructions on how it is meant to behave and respond."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 65,
      "metadata": {
        "id": "-wSPNO7o6-NK"
      },
      "outputs": [],
      "source": [
        "OPENAI_MODEL = \"gpt-4o\""
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 66,
      "metadata": {
        "id": "VAp9tIZjRkcT"
      },
      "outputs": [],
      "source": [
        "from agents import Agent, Runner\n",
        "\n",
        "virtual_primary_care_assistant = Agent(\n",
        "    name=\"Virtual Primary Care Assistant\",\n",
        "    model=OPENAI_MODEL,\n",
        "    instructions=\"\"\"\n",
        "      You are a virtual primary care assistant dedicated to providing reliable, compassionate,\n",
        "      and evidence-based health guidance. Your role is to help patients understand and manage\n",
        "      their primary care needs, triage symptoms, answer common health questions, and advise on\n",
        "      when to seek further medical care. Ensure that your responses are clear, empathetic,\n",
        "      and informed by current medical guidelines, always prioritizing patient safety and accurate information.\n",
        "    \"\"\",\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5esjP4J4RqCV"
      },
      "outputs": [],
      "source": [
        "run_result = await Runner.run(\n",
        "    starting_agent=virtual_primary_care_assistant,\n",
        "    input=\"Get me information on cough medications and their reviews.\",\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "-w8DBAU8Rrvi",
        "outputId": "56ea8e68-96e4-4a39-b16b-a2e0085a27e3"
      },
      "outputs": [],
      "source": [
        "print(run_result.final_output)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gTr-yyVMSKQz"
      },
      "source": [
        "### Step 3: Agentic RAG: AI Agents with Retrieval Tools"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "6QUlpq8ERuGw"
      },
      "outputs": [],
      "source": [
        "from datetime import datetime\n",
        "\n",
        "from agents.tool import function_tool\n",
        "\n",
        "\n",
        "@function_tool\n",
        "def get_medication_reviews(user_query: str) -> str:\n",
        "    \"\"\"\n",
        "    Retrieves patient reviews and information about medications related to the query.\n",
        "\n",
        "    This tool searches a database of medication reviews to find relevant patient experiences\n",
        "    with drugs that match the symptoms, conditions, or medication names in the user query.\n",
        "    Use this tool when discussing specific medications or treatment options.\n",
        "\n",
        "    Args:\n",
        "        user_query (str): The medication name, condition, or symptom to search for reviews about.\n",
        "\n",
        "    Returns:\n",
        "        str: Patient reviews and experiences with relevant medications.\n",
        "    \"\"\"\n",
        "    # Execute the hybrid search to find medication reviews\n",
        "    retrieved_context = hybrid_search_with_mongodb(\n",
        "        user_query=user_query,\n",
        "        collection=drug_reviews_collection,\n",
        "        vector_search_index_name=vector_search_float32_ann_index_name,\n",
        "    )\n",
        "\n",
        "    return str(retrieved_context)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 141,
      "metadata": {
        "id": "9kK7piOvTDzb"
      },
      "outputs": [],
      "source": [
        "virtual_primary_care_assistant.tools.append(get_medication_reviews)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 142,
      "metadata": {
        "id": "MQMJWZlVTGg3"
      },
      "outputs": [],
      "source": [
        "run_result_with_tool = await Runner.run(\n",
        "    starting_agent=virtual_primary_care_assistant,\n",
        "    input=\"Get me information on cough medications and their reviews\",\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "vrE5jI3aTefH",
        "outputId": "bac7b72c-e3e9-46ce-c883-d8f8780a34a0"
      },
      "outputs": [],
      "source": [
        "print(run_result_with_tool.final_output)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "qqg22u80TjK4",
        "outputId": "8a1ddf1a-1063-4f3c-8039-3e26f264f142"
      },
      "outputs": [],
      "source": [
        "run_result_with_tool.raw_responses"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YXY5-ChymwXd"
      },
      "source": [
        "### Step 4: Robust Agent (Multiple Tools)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 145,
      "metadata": {
        "id": "JRZNHCUBT1En"
      },
      "outputs": [],
      "source": [
        "# Add a retrieval tool to provide our agent with past conversation history of medical engagements.\n",
        "# This will give our agent the ability to look up past scenarios to inform responses\n",
        "\n",
        "\n",
        "@function_tool\n",
        "def get_past_medical_conversations(user_query: str) -> str:\n",
        "    \"\"\"\n",
        "    Retrieves relevant past medical conversations between doctors and patients related to the query.\n",
        "\n",
        "    This tool searches a database of real doctor-patient interactions to find conversations\n",
        "    that match the symptoms, conditions, or questions in the user query. Use this tool to provide\n",
        "    examples of how medical professionals have addressed similar concerns.\n",
        "\n",
        "    Args:\n",
        "        user_query (str): The medical condition, symptom, or question to search for.\n",
        "\n",
        "    Returns:\n",
        "        str: Examples of relevant doctor-patient conversations matching the query.\n",
        "    \"\"\"\n",
        "    # Use semantic search to find relevant past conversations\n",
        "    lookup_scenario_history = semantic_search_with_mongodb(\n",
        "        user_query,\n",
        "        healthcare_conversation_collection,\n",
        "        vector_search_index_name=vector_search_float32_ann_index_name,\n",
        "    )\n",
        "\n",
        "    return str(lookup_scenario_history)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ivE6C-LQs1G3"
      },
      "source": [
        "Let's update our agent instructions to ensure it knows when to use the right tools."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 146,
      "metadata": {
        "id": "Mt0yMnN-siSV"
      },
      "outputs": [],
      "source": [
        "upgraded_virtual_primary_care_assistant = Agent(\n",
        "    name=\"Virtual Primary Care Assistant\",\n",
        "    model=OPENAI_MODEL,\n",
        "    instructions=\"\"\"\n",
        "     MANDATORY TOOL USAGE PROTOCOL:\n",
        "\n",
        "     You have access to two essential tools that you must use appropriately:\n",
        "\n",
        "     1. get_medication_reviews:\n",
        "        - ALWAYS use this tool when users ask about medications, treatments, or remedies\n",
        "        - ALWAYS use this tool if you plan to mention any medication names in your response\n",
        "        - Example queries: \"What helps with cough?\", \"Tell me about allergy medications\"\n",
        "        - Command: get_medication_reviews with search terms like \"cough medications\" or \"allergy treatments\"\n",
        "\n",
        "     2. get_past_medical_conversations:\n",
        "        - ALWAYS use this tool when users ask about medical conditions, symptoms, or doctor advice\n",
        "        - ALWAYS use this tool if a user wants examples of past conversations or scenarios\n",
        "        - Example queries: \"How do doctors treat coughs?\", \"Show me conversations about headaches\"\n",
        "        - Command: get_past_medical_conversations with search terms like \"cough treatment\" or \"headache advice\"\n",
        "\n",
        "     CRITICAL INSTRUCTION: When a user's message contains BOTH medication questions AND requests for\n",
        "     past medical conversations, you MUST use BOTH tools, one after another.\n",
        "\n",
        "     For example, with a query like \"I have a cough, can you help me with medications and show me\n",
        "     relevant conversations\", you MUST call:\n",
        "     1. get_medication_reviews with \"cough medications\"\n",
        "     2. get_past_medical_conversations with \"cough treatment conversations\"\n",
        "\n",
        "     After using the appropriate tools, provide a helpful response that:\n",
        "     - Clearly distinguishes between medication information and past conversation examples\n",
        "     - Reminds users that this information is educational and not personalized medical advice\n",
        "     - Advises consulting healthcare professionals for specific medical concerns\n",
        "\n",
        "     Always prioritize patient safety and provide compassionate, evidence-based guidance.\n",
        "   \"\"\",\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 147,
      "metadata": {
        "id": "qRCd19sgpGG3"
      },
      "outputs": [],
      "source": [
        "upgraded_virtual_primary_care_assistant.tools.append(get_past_medical_conversations)\n",
        "upgraded_virtual_primary_care_assistant.tools.append(get_medication_reviews)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "yvG7cMNdvlGX",
        "outputId": "d2d6a36f-146d-46b6-c7f8-a66cde681576"
      },
      "outputs": [],
      "source": [
        "upgraded_virtual_primary_care_assistant.tools"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 149,
      "metadata": {
        "id": "-GBTyzyFpi4U"
      },
      "outputs": [],
      "source": [
        "run_result_with_tools = await Runner.run(\n",
        "    starting_agent=upgraded_virtual_primary_care_assistant,\n",
        "    input=\"I have a cough, can you help me with some medications, and get me some relevant past scenarios and conversations related to cough.\",\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "mYLQ2VGFrHfF",
        "outputId": "8f088da2-92a5-431f-c71e-41e3ff3ea04d"
      },
      "outputs": [],
      "source": [
        "print(run_result_with_tools.final_output)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NEfYXBaQygXq"
      },
      "source": [
        "### Step 5: Agents as Tools (Orchestration)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 101,
      "metadata": {
        "id": "-2GqxGQuyl-F"
      },
      "outputs": [],
      "source": [
        "# Define specialized agents for different information retrieval tasks\n",
        "medication_agent = Agent(\n",
        "    name=\"medication_information_agent\",\n",
        "    instructions=\"You provide detailed information about medications, their effectiveness, and side effects based on patient reviews. Always cite your sources.\",\n",
        "    handoff_description=\"A medication information specialist with access to patient reviews\",\n",
        "    tools=[get_medication_reviews],\n",
        ")\n",
        "\n",
        "conversation_agent = Agent(\n",
        "    name=\"medical_conversation_agent\",\n",
        "    instructions=\"You provide examples of doctor-patient conversations related to specific medical conditions or symptoms. Always present this as educational content, not medical advice.\",\n",
        "    handoff_description=\"A specialist with access to past doctor-patient conversations\",\n",
        "    tools=[get_past_medical_conversations],\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 102,
      "metadata": {
        "id": "Xbrwjru4yp66"
      },
      "outputs": [],
      "source": [
        "# Create an orchestrator agent that can use both specialized agents as tools\n",
        "orchestrator_agent = Agent(\n",
        "    name=\"medical_assistant_orchestrator\",\n",
        "    instructions=(\n",
        "        \"You are a virtual primary care assistant. Your job is to help patients by retrieving relevant information using your tools.\\n\\n\"\n",
        "        \"IMPORTANT RULES:\\n\"\n",
        "        \"1. ALWAYS use translate_to_medication_information when a query mentions medications, treatments, or remedies\\n\"\n",
        "        \"2. ALWAYS use translate_to_medical_conversations when a query mentions medical conditions or asks for conversation examples\\n\"\n",
        "        \"3. If a query requires BOTH medication information AND medical conversations, use BOTH tools in sequence\\n\"\n",
        "        \"4. NEVER attempt to provide medical information without using your tools\\n\"\n",
        "        \"5. Each tool provides different types of information - use all appropriate tools for complete assistance\"\n",
        "    ),\n",
        "    tools=[\n",
        "        medication_agent.as_tool(\n",
        "            tool_name=\"translate_to_medication_information\",\n",
        "            tool_description=\"Get information about medications, treatments, and patient reviews\",\n",
        "        ),\n",
        "        conversation_agent.as_tool(\n",
        "            tool_name=\"translate_to_medical_conversations\",\n",
        "            tool_description=\"Get examples of doctor-patient conversations about medical conditions\",\n",
        "        ),\n",
        "    ],\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 103,
      "metadata": {
        "id": "oM5P_DVtzg1z"
      },
      "outputs": [],
      "source": [
        "# Final agent to synthesize information from all sources\n",
        "synthesizer_agent = Agent(\n",
        "    name=\"medical_response_synthesizer\",\n",
        "    instructions=(\n",
        "        \"You create comprehensive, well-organized responses for patients by combining information from multiple sources.\\n\\n\"\n",
        "        \"When organizing your response:\\n\"\n",
        "        \"1. Clearly separate medication information from doctor-patient conversation examples\\n\"\n",
        "        \"2. Provide a concise summary at the beginning highlighting key points\\n\"\n",
        "        \"3. Include appropriate disclaimers about medical advice\\n\"\n",
        "        \"4. Format the information for easy reading, using bullet points where appropriate\\n\"\n",
        "        \"5. Ensure your tone is empathetic, clear, and professional\"\n",
        "    ),\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 129,
      "metadata": {
        "id": "FqdOHcsXzpa2"
      },
      "outputs": [],
      "source": [
        "from agents import ItemHelpers, MessageOutputItem, trace\n",
        "\n",
        "\n",
        "async def virtual_primary_care_assistant(user_query):\n",
        "    \"\"\"Run the complete virtual primary care assistant workflow\"\"\"\n",
        "    # First, have the orchestrator determine which tools to use\n",
        "    with trace(\"Orchestrator evaluator\"):\n",
        "        orchestrator_result = await Runner.run(orchestrator_agent, user_query)\n",
        "\n",
        "        # Print intermediate steps for debugging/transparency\n",
        "        print(\"\\n--- Orchestrator Processing Steps ---\")\n",
        "        for item in orchestrator_result.new_items:\n",
        "            if isinstance(item, MessageOutputItem):\n",
        "                text = ItemHelpers.text_message_output(item)\n",
        "                if text:\n",
        "                    print(f\"  - Information gathering step: {text}\")\n",
        "\n",
        "        # Then synthesize all the gathered information into a cohesive response\n",
        "        synthesizer_result = await Runner.run(\n",
        "            synthesizer_agent, orchestrator_result.to_input_list()\n",
        "        )\n",
        "\n",
        "        print(f\"\\n\\n--- Final Medical Response ---\\n{synthesizer_result.final_output}\")\n",
        "        print()\n",
        "\n",
        "    return synthesizer_result.final_output"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 130,
      "metadata": {
        "id": "1m791Z5ozypU"
      },
      "outputs": [],
      "source": [
        "import asyncio\n",
        "\n",
        "import nest_asyncio\n",
        "\n",
        "# Apply nest_asyncio to patch the event loop\n",
        "nest_asyncio.apply()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 132,
      "metadata": {
        "id": "dVnG6oGk0LM-"
      },
      "outputs": [],
      "source": [
        "def run_virtual_primary_care_assistant(query):\n",
        "    # Create a new event loop\n",
        "    loop = asyncio.new_event_loop()\n",
        "    asyncio.set_event_loop(loop)\n",
        "\n",
        "    # Run the async function and get the result\n",
        "    result = loop.run_until_complete(virtual_primary_care_assistant(query))\n",
        "\n",
        "    # Clean up\n",
        "    loop.close()\n",
        "\n",
        "    return result"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 1000
        },
        "id": "NE8M_N7E0VXW",
        "outputId": "f9d95255-66cd-4116-8128-99574829f53b"
      },
      "outputs": [],
      "source": [
        "# Now call the function this way\n",
        "query = input(\"What health concern can I help you with today? \")\n",
        "run_virtual_primary_care_assistant(query)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uc_Em_q_pSyf"
      },
      "source": [
        "## Part 4: Agentic Chat System\n",
        "\n",
        "---\n",
        "\n",
        "\n",
        "\n",
        "<a name=\"part4\"></a>\n",
        "\n",
        "\n",
        "This section demonstrates an Agentic Chat System that enhances the virtual primary care assistant by maintaining a complete conversation history. The system features:\n",
        "\n",
        "- **Persistent Chat History:** Every interaction, including the user’s input and the agent’s response, is stored along with a timestamp.\n",
        "- **Contextual Input:** On each turn, the complete conversation history is appended to the agent's input, ensuring that the context is preserved throughout the conversation.\n",
        "- **Session Management with Thread IDs:** Each message is tagged with a thread ID to uniquely identify the session, making it easy to track and retrieve conversation history.\n",
        "- **Ordered Retrieval:** The chat history can be retrieved by providing a thread ID, with all records ordered by their timestamps.\n",
        "\n",
        "Below is the complete code implementation for the Agentic Chat System.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "gkBKm_S7_jO_",
        "outputId": "0035ed32-5346-4bd2-a679-b181d4eac4ad"
      },
      "outputs": [],
      "source": [
        "# Get a reference to the database (MongoDB creates it lazily on first write)\n",
        "db = mongo_client[DB_NAME]\n",
        "\n",
        "# Create a chat_history collection in the MongoDB Database\n",
        "chat_history_collection_name = \"chat_history\"\n",
        "\n",
        "if chat_history_collection_name not in db.list_collection_names():\n",
        "    db.create_collection(chat_history_collection_name)\n",
        "    print(f\"Collection '{chat_history_collection_name}' created successfully.\")\n",
        "else:\n",
        "    # Collection already exists, no need to create it\n",
        "    print(f\"Collection '{chat_history_collection_name}' already exists.\")\n",
        "\n",
        "# Get a reference to collections for later use\n",
        "chat_history_collection = db[chat_history_collection_name]"
      ]
    },
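    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As the `chat_history` collection grows, each turn's history lookup filters on `thread_id` and sorts by `timestamp`. A compound index covering both fields lets MongoDB serve that query from the index instead of a collection scan. This is an optional optimization sketch, not part of the original walkthrough."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Optional: compound index so per-thread history retrieval and its\n",
        "# timestamp ordering are served by the index rather than a full scan.\n",
        "chat_history_collection.create_index([(\"thread_id\", 1), (\"timestamp\", 1)])"
      ]
    },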
    {
      "cell_type": "code",
      "execution_count": 153,
      "metadata": {
        "id": "1WUEtIw1CAIZ"
      },
      "outputs": [],
      "source": [
        "import datetime\n",
        "import uuid\n",
        "\n",
        "\n",
        "async def virtual_primary_care_assistant(user_query, thread_id=None):\n",
        "    \"\"\"\n",
        "    Run the complete virtual primary care assistant workflow.\n",
        "\n",
        "    For each conversation turn:\n",
        "      - Stores the user's input and the assistant's output in the MongoDB collection along with a timestamp and thread_id.\n",
        "      - Retrieves and appends previous conversation history (ordered by timestamp) to the agent's input.\n",
        "\n",
        "    If no thread_id is provided, a new conversation session is started.\n",
        "\n",
        "    Returns:\n",
        "      tuple: (final_output, thread_id) where thread_id is the session identifier.\n",
        "    \"\"\"\n",
        "    # Generate a new thread id if not provided.\n",
        "    if thread_id is None:\n",
        "        thread_id = str(uuid.uuid4())\n",
        "        print(f\"New conversation started with thread id: {thread_id}\")\n",
        "    else:\n",
        "        print(f\"Continuing conversation with thread id: {thread_id}\")\n",
        "\n",
        "    # --- Step 1: Store the new user query ---\n",
        "    now = datetime.datetime.now(datetime.timezone.utc)\n",
        "    chat_history_collection.insert_one(\n",
        "        {\n",
        "            \"thread_id\": thread_id,\n",
        "            \"role\": \"user\",\n",
        "            \"message\": user_query,\n",
        "            \"timestamp\": now,\n",
        "        }\n",
        "    )\n",
        "\n",
        "    # --- Step 2: Retrieve full conversation history for context ---\n",
        "    chat_history = list(\n",
        "        chat_history_collection.find({\"thread_id\": thread_id}).sort(\"timestamp\", 1)\n",
        "    )\n",
        "    conversation_context = \"\"\n",
        "    for entry in chat_history:\n",
        "        if entry[\"role\"] == \"user\":\n",
        "            conversation_context += f\"User: {entry['message']}\\n\"\n",
        "        else:\n",
        "            conversation_context += f\"Assistant: {entry['message']}\\n\"\n",
        "\n",
        "    # --- Step 3: Run the orchestrator agent with the conversation context ---\n",
        "    with trace(\"Orchestrator evaluator\"):\n",
        "        orchestrator_result = await Runner.run(orchestrator_agent, conversation_context)\n",
        "\n",
        "    # Print intermediate processing steps for debugging/transparency.\n",
        "    print(\"\\n--- Orchestrator Processing Steps ---\")\n",
        "    for item in orchestrator_result.new_items:\n",
        "        if isinstance(item, MessageOutputItem):\n",
        "            text = ItemHelpers.text_message_output(item)\n",
        "            if text:\n",
        "                print(f\"  - Information gathering step: {text}\")\n",
        "\n",
        "    # --- Step 4: Run the synthesizer agent to produce a cohesive response ---\n",
        "    synthesizer_result = await Runner.run(\n",
        "        synthesizer_agent, orchestrator_result.to_input_list()\n",
        "    )\n",
        "\n",
        "    # --- Step 5: Store the assistant's final output in the chat history ---\n",
        "    now = datetime.datetime.now(datetime.timezone.utc)\n",
        "    chat_history_collection.insert_one(\n",
        "        {\n",
        "            \"thread_id\": thread_id,\n",
        "            \"role\": \"assistant\",\n",
        "            \"message\": synthesizer_result.final_output,\n",
        "            \"timestamp\": now,\n",
        "        }\n",
        "    )\n",
        "\n",
        "    print(f\"\\n\\n--- Final Medical Response ---\\n{synthesizer_result.final_output}\\n\")\n",
        "    return synthesizer_result.final_output, thread_id"
      ]
    },
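    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The history-to-context step above (Step 2) can be factored into a small, pure helper that is easy to unit test in isolation. This is a sketch; the `format_history` name is our own and is not used by the workflow above."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def format_history(entries):\n",
        "    # Render chat-history documents (dicts with 'role' and 'message',\n",
        "    # already sorted by timestamp) into the plain-text context string\n",
        "    # that is fed to the orchestrator agent.\n",
        "    lines = []\n",
        "    for entry in entries:\n",
        "        speaker = 'User' if entry['role'] == 'user' else 'Assistant'\n",
        "        lines.append(speaker + ': ' + entry['message'])\n",
        "    return '\\n'.join(lines) + '\\n' if lines else ''\n",
        "\n",
        "# Quick check with an in-memory history\n",
        "sample = [\n",
        "    {'role': 'user', 'message': 'I have a cough.'},\n",
        "    {'role': 'assistant', 'message': 'How long have you had it?'},\n",
        "]\n",
        "print(format_history(sample))"
      ]
    },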
    {
      "cell_type": "code",
      "execution_count": 154,
      "metadata": {
        "id": "PSs1OkIsCLEJ"
      },
      "outputs": [],
      "source": [
        "def run_virtual_primary_care_assistant(query, thread_id=None):\n",
        "    \"\"\"\n",
        "    Run the virtual primary care assistant synchronously.\n",
        "\n",
        "    Optionally, a thread_id can be provided to continue an existing conversation.\n",
        "    Returns a tuple (final_output, thread_id).\n",
        "    \"\"\"\n",
        "    # Create a new event loop\n",
        "    loop = asyncio.new_event_loop()\n",
        "    asyncio.set_event_loop(loop)\n",
        "\n",
        "    # Run the async function and get the result\n",
        "    result, thread_id = loop.run_until_complete(\n",
        "        virtual_primary_care_assistant(query, thread_id=thread_id)\n",
        "    )\n",
        "\n",
        "    # Clean up the loop\n",
        "    loop.close()\n",
        "\n",
        "    return result, thread_id"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 155,
      "metadata": {
        "id": "K0DxcWPKCQHQ"
      },
      "outputs": [],
      "source": [
        "def chat_session():\n",
        "    \"\"\"\n",
        "    Launches a chat session that continues until the user enters 'q', 'exit', or 'quit'.\n",
        "    The session uses a persistent thread_id to preserve conversation history.\n",
        "    \"\"\"\n",
        "    print(\n",
        "        \"Starting Virtual Primary Care Assistant Chat. Type 'q', 'exit', or 'quit' to exit.\"\n",
        "    )\n",
        "    session_thread_id = None\n",
        "    while True:\n",
        "        query = input(\"What health concern can I help you with today? \")\n",
        "        if query.lower() in [\"q\", \"exit\", \"quit\"]:\n",
        "            print(\"Exiting chat session.\")\n",
        "            break\n",
        "        response, session_thread_id = run_virtual_primary_care_assistant(\n",
        "            query, thread_id=session_thread_id\n",
        "        )\n",
        "        print(\"Assistant:\", response)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "CDr-U28SC4gU",
        "outputId": "05499ffa-4b8a-48a2-8e0e-8bbd69c98af9"
      },
      "outputs": [],
      "source": [
        "# Start the chat session\n",
        "chat_session()"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "provenance": [],
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "base",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.11.5"
    },
    "widgets": {
      "application/vnd.jupyter.widget-state+json": {
        "state": {}
      }
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
