{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "On2aMyOsOpOw"
      },
      "source": [
        "# Emergency Response System: Intelligent Crisis Management with MongoDB Vector Search, LangChain, and LangGraph\n",
        "\n",
        "-------------"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xM8q0Fia3Dsp"
      },
      "source": [
        "## **Use Case Overview**\n",
        "\n",
        "In today's complex technical environment, organizations face critical incidents—ranging from network outages and security breaches to infrastructure failures and service disruptions. When these crises occur, teams must rapidly mobilize the right expertise, access relevant knowledge resources, and coordinate response efforts under significant time pressure.\n",
        "\n",
        "Imagine:\n",
        "- a critical 5G network outage affecting multiple metropolitan areas,\n",
        "- a data center hardware failure impacting enterprise customers,\n",
        "- or a security breach requiring immediate containment.\n",
        "\n",
        "Each crisis demands rapid response spanning multiple technical domains, requiring organizations to quickly assemble the right experts, access relevant procedures, and coordinate complex actions—all while business-critical services remain offline.\n",
        "\n",
        "This solution transforms Emergency Response Management by:\n",
        "\n",
        "* **Accelerating crisis detection**: Automatically parsing incident reports to extract critical parameters, affected systems, and required skill sets.\n",
        "* **Assembling optimal response teams**: Identifying available experts with the precise skills needed for each unique crisis situation.\n",
        "* **Mobilizing knowledge resources**: Retrieving relevant technical procedures, best practices, and previous incident documentation.\n",
        "* **Orchestrating coordinated response**: Generating comprehensive response plans with prioritized action items, team assignments, and communication protocols.\n",
        "\n",
        "Built on `MongoDB Atlas Vector Search` for high-performance semantic search and document retrieval, with `LangChain` and `LangGraph` for agentic workflow orchestration, this approach delivers an intelligent emergency response system that dramatically reduces incident resolution time and business impact.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6IWjBgukBwPD"
      },
      "source": [
        "**Key Components**\n",
        "\n",
        "1. Crisis Detection: Analyzes unstructured incident reports to extract structured data about the crisis type, severity, affected systems, and required expertise.\n",
        "2. Expert Identification: Searches employee records using semantic matching to identify personnel with crisis-relevant skills and availability.\n",
        "3. Knowledge Resource Gathering: Retrieves technical documentation, recovery procedures, and best practices specifically relevant to the current crisis.\n",
        "4. Response Plan Generation: Creates comprehensive response plans with team assignments, prioritized action items, communication protocols, and estimated resolution timelines.\n",
        "\n",
        "**Business Impact**\n",
        "\n",
        "* Reduced Average Time to Resolution: Accelerates response time by automating the most time-consuming aspects of crisis management.\n",
        "* Optimal Team Composition: Ensures the most qualified experts are engaged based on real-time availability and precise skill matching.\n",
        "* Enhanced Decision Support: Provides response teams with only the most relevant knowledge resources and procedures.\n",
        "* Improved Stakeholder Communication: Generates structured briefings and updates for both technical teams and business stakeholders.\n",
        "\n",
        "This intelligent system transforms crisis management from a reactive, often chaotic process into a structured, data-driven workflow that minimizes business impact and accelerates service restoration.\n",
        "\n",
        "\n",
        "**Cross-Industry Applications**\n",
        "\n",
        "This emergency response architecture can be readily adapted to various industries:\n",
        "\n",
        "**1. Healthcare**\n",
        "\n",
        "- Mobilizing specialized medical teams for rare conditions or mass casualty events\n",
        "- Coordinating expertise during disease outbreaks or public health emergencies\n",
        "\n",
        "**2. Financial Services**\n",
        "\n",
        "- Assembling fraud response teams for complex financial incidents\n",
        "- Coordinating technical and business experts during trading system failures\n",
        "\n",
        "**3. Energy and Utilities**\n",
        "\n",
        "- Mobilizing technical teams during power grid failures or outages\n",
        "- Assembling environmental specialists during contamination events\n",
        "\n",
        "**4. Manufacturing**\n",
        "\n",
        "- Coordinating experts to minimize downtime on critical production equipment\n",
        "- Assembling cross-functional teams for supply chain or quality control crises\n",
        "\n",
        "**5. Transportation**\n",
        "\n",
        "- Mobilizing aviation or maritime experts during system failures or safety incidents\n",
        "- Coordinating response teams for logistics network disruptions\n",
        "\n",
        "**6. Government**\n",
        "\n",
        "- Assembling emergency management teams during natural disasters\n",
        "- Mobilizing technical expertise for infrastructure failures or cybersecurity incidents"
      ]
    },
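    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Conceptually, the four key components chain into a single pipeline. The sketch below is illustrative only; the function names and return values are hypothetical placeholders for the agentic workflow built later in this notebook:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Illustrative sketch only: the four workflow stages as plain functions.\n",
        "# All names and return values here are hypothetical placeholders.\n",
        "\n",
        "\n",
        "def detect_crisis(report):\n",
        "    # Stage 1: extract structured parameters from the raw incident report.\n",
        "    return {\"type\": \"network_outage\", \"severity\": \"critical\", \"skills\": [\"5G\", \"routing\"]}\n",
        "\n",
        "\n",
        "def identify_experts(crisis):\n",
        "    # Stage 2: semantically match required skills against employee records.\n",
        "    return [\"employees-1\", \"employees-4\"]\n",
        "\n",
        "\n",
        "def gather_knowledge(crisis):\n",
        "    # Stage 3: retrieve relevant procedures and documentation.\n",
        "    return [\"knowledge_assets-2\"]\n",
        "\n",
        "\n",
        "def generate_plan(crisis, experts, assets):\n",
        "    # Stage 4: combine everything into a response plan.\n",
        "    return {\"team\": experts, \"resources\": assets, \"severity\": crisis[\"severity\"]}\n",
        "\n",
        "\n",
        "crisis = detect_crisis(\"5G outage in metro region\")\n",
        "plan = generate_plan(crisis, identify_experts(crisis), gather_knowledge(crisis))\n",
        "print(plan)"
      ]
    },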
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UonQhN9jPQ_8"
      },
      "source": [
        "**Objective**:\n",
        "\n",
        "Enable enterprise users to query and explore organizational knowledge across FAQs, project details, and employee expertise in natural language.\n",
        "\n",
        "**Key Benefits:**\n",
        "\n",
        "- Reduced time-to-insight: Semantic search surfaces relevant results even when keywords differ.\n",
        "\n",
        "- Contextual reasoning: Agents chain multi-step queries (e.g., “Which engineer led Project P123?”).\n",
        "\n",
        "- Scalable architecture: Easily extend to new data sources (Confluence, emails, design documents).\n",
        "\n",
        "**Key Components:**\n",
        "\n",
        "- MongoDB Atlas Vector Search: Dense vector indexing for semantic relevance.\n",
        "\n",
        "- Voyage AI: State-of-the-art embedding models and rerankers.\n",
        "\n",
        "- LangChain: Embedding pipelines and workflow management.\n",
        "\n",
        "- LangGraph: Agentic, graph-driven decision making for complex queries.\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "id": "z0f92qK2aPTO"
      },
      "outputs": [],
      "source": [
        "!pip install -qU openai pymongo voyageai langchain_voyageai langchain_openai langchain-mongodb langgraph-checkpoint-mongodb langchain-core langgraph"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "id": "faQsbVuhY5HD"
      },
      "outputs": [],
      "source": [
        "import getpass\n",
        "import os\n",
        "\n",
        "\n",
        "# Function to securely get and set environment variables\n",
        "def set_env_securely(var_name, prompt):\n",
        "    value = getpass.getpass(prompt)\n",
        "    os.environ[var_name] = value"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "EyU2HRqCY_mT",
        "outputId": "881924bc-f8ba-4d8b-f09e-568bf8ece809"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Enter your OPENAI API KEY: ··········\n"
          ]
        }
      ],
      "source": [
        "# Set your OpenAI API Key\n",
        "# An OpenAI API key can be created at https://platform.openai.com/api-keys\n",
        "set_env_securely(\"OPENAI_API_KEY\", \"Enter your OPENAI API KEY: \")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dkyacdN5X4CS"
      },
      "source": [
        "## Part 0: Synthetic Data Creation"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Wtzw958o0IWv"
      },
      "outputs": [],
      "source": [
        "import json\n",
        "from typing import Any, List, Optional\n",
        "\n",
        "from pydantic import BaseModel, Field"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "_0ghVFwxX7Cz"
      },
      "outputs": [],
      "source": [
        "# Define Pydantic data models for datasets\n",
        "class FAQ(BaseModel):\n",
        "    faq_id: str = Field(..., description=\"Unique FAQ identifier\")\n",
        "    question: str = Field(..., description=\"FAQ question text\")\n",
        "    answer: str = Field(..., description=\"FAQ answer text\")\n",
        "    tags: List[str] = Field(default_factory=list, description=\"Related tags\")\n",
        "\n",
        "\n",
        "class Project(BaseModel):\n",
        "    project_id: str = Field(..., description=\"Unique project identifier\")\n",
        "    name: str = Field(..., description=\"Project name\")\n",
        "    description: str = Field(..., description=\"Detailed project description\")\n",
        "    status: str = Field(..., description=\"Current project status\")\n",
        "    start_date: str = Field(..., description=\"ISO format start date\")\n",
        "    end_date: str = Field(..., description=\"ISO format end date\")\n",
        "\n",
        "    # Team information\n",
        "    project_manager: str = Field(..., description=\"Employee ID of project manager\")\n",
        "    team_members: List[str] = Field(..., description=\"List of employee IDs\")\n",
        "\n",
        "    # Technical details\n",
        "    technologies: List[str] = Field(..., description=\"Technology stack\")\n",
        "    skills_required: List[str] = Field(..., description=\"Required skills\")\n",
        "\n",
        "    # Project relationships\n",
        "    dependencies: List[str] = Field(\n",
        "        default_factory=list, description=\"Dependent project IDs\"\n",
        "    )\n",
        "    related_projects: List[str] = Field(\n",
        "        default_factory=list, description=\"Related project IDs\"\n",
        "    )\n",
        "\n",
        "\n",
        "class Employee(BaseModel):\n",
        "    emp_id: str = Field(..., description=\"Unique employee identifier\")\n",
        "    name: str = Field(..., description=\"Employee full name\")\n",
        "    role: str = Field(..., description=\"Employee role/title\")\n",
        "    department: str = Field(..., description=\"Department name\")\n",
        "    skills: List[str] = Field(default_factory=list, description=\"List of skills\")\n",
        "    bio: Optional[str] = Field(None, description=\"Short professional biography\")\n",
        "    manager: Optional[str] = Field(None, description=\"Employee ID of manager\")\n",
        "    start_date: str = Field(..., description=\"ISO format start date\")\n",
        "    end_date: str = Field(..., description=\"ISO format end date\")\n",
        "\n",
        "    # Project relationships\n",
        "    current_projects: List[str] = Field(\n",
        "        default_factory=list, description=\"Current project IDs\"\n",
        "    )\n",
        "    past_projects: List[str] = Field(\n",
        "        default_factory=list, description=\"Past project IDs\"\n",
        "    )\n",
        "\n",
        "    # Team relationships\n",
        "    mentors: List[str] = Field(\n",
        "        default_factory=list, description=\"Employee IDs of mentors\"\n",
        "    )\n",
        "    mentees: List[str] = Field(\n",
        "        default_factory=list, description=\"Employee IDs of mentees\"\n",
        "    )\n",
        "    frequent_collaborators: List[str] = Field(\n",
        "        default_factory=list, description=\"Frequent collaborators\"\n",
        "    )\n",
        "\n",
        "\n",
        "class KnowledgeAsset(BaseModel):\n",
        "    asset_id: str = Field(..., description=\"Unique knowledge asset ID\")\n",
        "    title: str = Field(..., description=\"Title of the knowledge asset\")\n",
        "    content: str = Field(..., description=\"Content or description\")\n",
        "    type: str = Field(..., description=\"Type (documentation, best_practice, solution)\")\n",
        "    author: str = Field(..., description=\"Employee ID of author\")\n",
        "    creation_date: str = Field(..., description=\"ISO creation date\")\n",
        "    tags: List[str] = Field(default_factory=list, description=\"Tags for categorization\")\n",
        "    related_projects: List[str] = Field(\n",
        "        default_factory=list, description=\"Related project IDs\"\n",
        "    )\n",
        "    related_employees: List[str] = Field(\n",
        "        default_factory=list, description=\"Related employee IDs\"\n",
        "    )\n",
        "\n",
        "\n",
        "# Mapping from dataset type to Pydantic model\n",
        "model_map = {\n",
        "    \"faqs\": FAQ,\n",
        "    \"projects\": Project,\n",
        "    \"employees\": Employee,\n",
        "    \"knowledge_assets\": KnowledgeAsset,\n",
        "}"
      ]
    },
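    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check, `model_map` routes a dataset type to its Pydantic schema, which then validates a plain dict. The field values below are made up purely for illustration:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Quick sanity check: route a dataset type through model_map and\n",
        "# validate a plain dict (field values here are made up).\n",
        "sample = {\n",
        "    \"faq_id\": \"faqs-0\",\n",
        "    \"question\": \"When is the next all-hands?\",\n",
        "    \"answer\": \"The first Friday of the quarter.\",\n",
        "}\n",
        "faq = model_map[\"faqs\"](**sample)\n",
        "print(faq.faq_id, faq.tags)  # tags falls back to its default empty list"
      ]
    },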
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "XKYfNOhTn-x4"
      },
      "outputs": [],
      "source": [
        "class DatasetReference:\n",
        "    \"\"\"Track generated IDs for cross-referencing between datasets\"\"\"\n",
        "\n",
        "    def __init__(self):\n",
        "        self.employee_ids = set()\n",
        "        self.project_ids = set()\n",
        "        self.faq_ids = set()\n",
        "        self.knowledge_asset_ids = set()"
      ]
    },
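    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To see how this bookkeeping feeds the generation prompt, here is a small standalone sketch (using plain sets rather than the class) of how a valid-ID reference context string is assembled:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Standalone sketch of the reference-context string built during generation.\n",
        "employee_ids = {\"employees-0\", \"employees-1\"}\n",
        "project_ids = set()\n",
        "\n",
        "reference_context = (\n",
        "    \"Valid reference IDs:\\n\"\n",
        "    + \"- Employee IDs: \" + str(sorted(employee_ids)) + \"\\n\"\n",
        "    + \"- Project IDs: \" + (str(sorted(project_ids)) if project_ids else \"None yet\")\n",
        ")\n",
        "print(reference_context)"
      ]
    },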
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "QN-xqD9voCkI"
      },
      "outputs": [],
      "source": [
        "data_refs = DatasetReference()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "id": "JcB_eYwvzZKz"
      },
      "outputs": [],
      "source": [
        "from openai import OpenAI\n",
        "\n",
        "openai_client = OpenAI()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Co4G8bmEoG-O"
      },
      "outputs": [],
      "source": [
        "def generate_synthetic_data_with_refs(\n",
        "    dataset_type: str, instructions: str, entry: int, data_refs\n",
        ") -> BaseModel:\n",
        "    \"\"\"Generate synthetic data with valid references to other datasets\"\"\"\n",
        "\n",
        "    if dataset_type not in model_map:\n",
        "        raise ValueError(f\"Unsupported dataset type: {dataset_type}\")\n",
        "\n",
        "    selected_model = model_map[dataset_type]\n",
        "    current_id = f\"{dataset_type}-{entry}\"\n",
        "\n",
        "    # Build reference context based on dataset type\n",
        "    if dataset_type == \"employees\":\n",
        "        data_refs.employee_ids.add(current_id)\n",
        "        reference_context = f\"\"\"\n",
        "        Valid reference IDs:\n",
        "        - Employee IDs: {list(data_refs.employee_ids) if data_refs.employee_ids else 'None yet'}\n",
        "        - Project IDs: {list(data_refs.project_ids) if data_refs.project_ids else 'None yet'}\n",
        "\n",
        "        Instructions:\n",
        "        - For mentors/mentees: Use existing employee IDs from the list above\n",
        "        - For current_projects/past_projects: Use existing project IDs\n",
        "        - Leave lists empty if no valid IDs are available yet\n",
        "        \"\"\"\n",
        "\n",
        "    elif dataset_type == \"projects\":\n",
        "        data_refs.project_ids.add(current_id)\n",
        "        reference_context = f\"\"\"\n",
        "        Valid reference IDs:\n",
        "        - Employee IDs: {list(data_refs.employee_ids)}\n",
        "        - Project IDs: {list(data_refs.project_ids) if data_refs.project_ids else 'None yet'}\n",
        "\n",
        "        Instructions:\n",
        "        - For team_members/project_manager: Use employee IDs from the list above\n",
        "        - For dependencies/related_projects: Use project IDs (leave empty for first few projects)\n",
        "        - Ensure all referenced IDs exist in the lists above\n",
        "        \"\"\"\n",
        "\n",
        "    elif dataset_type == \"knowledge_assets\":\n",
        "        data_refs.knowledge_asset_ids.add(current_id)\n",
        "        reference_context = f\"\"\"\n",
        "        Valid reference IDs:\n",
        "        - Employee IDs: {list(data_refs.employee_ids)}\n",
        "        - Project IDs: {list(data_refs.project_ids)}\n",
        "\n",
        "        Instructions:\n",
        "        - For author: Use one employee ID from the list above\n",
        "        - For related_projects: Use valid project IDs\n",
        "        - For related_employees: Use valid employee IDs\n",
        "        \"\"\"\n",
        "\n",
        "    else:  # FAQs don't need references\n",
        "        reference_context = \"This entity type doesn't need ID references.\"\n",
        "\n",
        "    # Enhanced instructions with reference context\n",
        "    full_instructions = f\"\"\"\n",
        "    {instructions}\n",
        "\n",
        "    REFERENCE CONTEXT:\n",
        "    {reference_context}\n",
        "\n",
        "    IMPORTANT: Only use IDs from the valid reference lists above.\n",
        "    \"\"\"\n",
        "\n",
        "    # Generate the data\n",
        "    response = openai_client.responses.parse(\n",
        "        model=\"gpt-4.1\",\n",
        "        input=f\"Generate a synthetic {dataset_type} record with the id '{current_id}'. {full_instructions}\",\n",
        "        text_format=selected_model,\n",
        "    )\n",
        "\n",
        "    return response.output_parsed"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "8hOfBpHoodEU"
      },
      "outputs": [],
      "source": [
        "from tqdm import tqdm\n",
        "\n",
        "\n",
        "def generate_all_synthetic_data():\n",
        "    \"\"\"Generate all synthetic datasets with proper ID references\"\"\"\n",
        "\n",
        "    # Clear previous references\n",
        "    global data_refs\n",
        "    data_refs = DatasetReference()\n",
        "\n",
        "    all_datasets = {}\n",
        "\n",
        "    # 1. Generate Employees first (they're referenced by others)\n",
        "    print(\"Generating Employees...\")\n",
        "    employee_dataset_generation_instruction = \"\"\"\n",
        "    Generate Employee record for a telecommunications company:\n",
        "    - Include roles like network engineer, system administrator, project manager\n",
        "    - Add mentorship relationships (using valid employee IDs)\n",
        "    - Add project assignments (using valid project IDs)\n",
        "    - Include bio and skills relevant to telecom industry\n",
        "    \"\"\"\n",
        "\n",
        "    number_of_employee_datapoints = 10\n",
        "    employee_datapoints = []\n",
        "\n",
        "    for i in tqdm(\n",
        "        range(number_of_employee_datapoints), desc=\"Generating Employee datapoints\"\n",
        "    ):\n",
        "        generated_employee = generate_synthetic_data_with_refs(\n",
        "            \"employees\",\n",
        "            employee_dataset_generation_instruction,\n",
        "            entry=i,\n",
        "            data_refs=data_refs,\n",
        "        )\n",
        "        employee_datapoints.append(generated_employee)\n",
        "\n",
        "    all_datasets[\"employees\"] = employee_datapoints\n",
        "\n",
        "    # 2. Generate Projects (referenced by knowledge assets)\n",
        "    print(\"Generating Projects...\")\n",
        "    project_dataset_generation_instruction = \"\"\"\n",
        "    Generate Project record for telecommunications company:\n",
        "    - Include valid team member IDs from employees\n",
        "    - Add project dependencies (using valid project IDs)\n",
        "    - Include technology stacks relevant to telecom\n",
        "    - Reference valid project manager from employees\n",
        "    \"\"\"\n",
        "\n",
        "    number_of_project_datapoints = 8\n",
        "    project_datapoints = []\n",
        "\n",
        "    for i in tqdm(\n",
        "        range(number_of_project_datapoints), desc=\"Generating Project datapoints\"\n",
        "    ):\n",
        "        generated_project = generate_synthetic_data_with_refs(\n",
        "            \"projects\",\n",
        "            project_dataset_generation_instruction,\n",
        "            entry=i,\n",
        "            data_refs=data_refs,\n",
        "        )\n",
        "        project_datapoints.append(generated_project)\n",
        "\n",
        "    all_datasets[\"projects\"] = project_datapoints\n",
        "\n",
        "    # 3. Generate FAQs (no ID references needed)\n",
        "    print(\"Generating FAQs...\")\n",
        "    faq_dataset_generation_instruction = \"\"\"\n",
        "    Generate FAQs based on a company all-hands meeting at a telecommunications company like Cisco.\n",
        "    \"\"\"\n",
        "\n",
        "    number_of_faq_datapoints = 5\n",
        "    faq_datapoints = []\n",
        "\n",
        "    for i in tqdm(range(number_of_faq_datapoints), desc=\"Generating FAQ datapoints\"):\n",
        "        generated_faq = generate_synthetic_data_with_refs(\n",
        "            \"faqs\", faq_dataset_generation_instruction, entry=i, data_refs=data_refs\n",
        "        )\n",
        "        faq_datapoints.append(generated_faq)\n",
        "\n",
        "    all_datasets[\"faqs\"] = faq_datapoints\n",
        "\n",
        "    # 4. Generate Knowledge Assets (reference both employees and projects)\n",
        "    print(\"Generating Knowledge Assets...\")\n",
        "    knowledge_asset_dataset_generation_instruction = \"\"\"\n",
        "    Generate Knowledge Asset record:\n",
        "    - Use valid author employee ID\n",
        "    - Reference valid related projects\n",
        "    - Include valid related employees\n",
        "    - Document technical procedures and best practices\n",
        "    \"\"\"\n",
        "\n",
        "    number_of_knowledge_asset_datapoints = 6\n",
        "    knowledge_asset_datapoints = []\n",
        "\n",
        "    for i in tqdm(\n",
        "        range(number_of_knowledge_asset_datapoints),\n",
        "        desc=\"Generating Knowledge Asset datapoints\",\n",
        "    ):\n",
        "        generated_knowledge_asset = generate_synthetic_data_with_refs(\n",
        "            \"knowledge_assets\",\n",
        "            knowledge_asset_dataset_generation_instruction,\n",
        "            entry=i,\n",
        "            data_refs=data_refs,\n",
        "        )\n",
        "        knowledge_asset_datapoints.append(generated_knowledge_asset)\n",
        "\n",
        "    all_datasets[\"knowledge_assets\"] = knowledge_asset_datapoints\n",
        "\n",
        "    return all_datasets"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "PXQ9tS1rojcN"
      },
      "outputs": [],
      "source": [
        "synthetic_datasets = generate_all_synthetic_data()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "R9u7FApihlVP"
      },
      "outputs": [],
      "source": [
        "import pandas as pd\n",
        "\n",
        "\n",
        "def export_datapoints_to_json(\n",
        "    datapoints: List[Any],\n",
        "    dataset_type: str,\n",
        "    output_dir: str = \"synthetic_data\",\n",
        "    indent: int = 2,\n",
        ") -> str:\n",
        "    \"\"\"\n",
        "    Exports a list of Pydantic model instances (or dict-like objects) to a JSON file.\n",
        "\n",
        "    Args:\n",
        "      datapoints (List[Any]): List of Pydantic instances or dicts.\n",
        "      dataset_type (str): Identifier for the dataset (e.g., 'faqs', 'projects').\n",
        "      indent (int): Number of spaces for JSON indentation.\n",
        "\n",
        "    Returns:\n",
        "      str: The full path to the saved JSON file.\n",
        "    \"\"\"\n",
        "\n",
        "    # Convert models to dicts\n",
        "    list_of_dicts = [\n",
        "        dp.model_dump()\n",
        "        if hasattr(dp, \"model_dump\")\n",
        "        else getattr(dp, \"dict\", lambda: dp)()\n",
        "        for dp in datapoints\n",
        "    ]\n",
        "\n",
        "    # Create DataFrame\n",
        "    df = pd.DataFrame(list_of_dicts)\n",
        "\n",
        "    # Ensure output directory exists\n",
        "    os.makedirs(output_dir, exist_ok=True)\n",
        "\n",
        "    # Build filename and path\n",
        "    filename = f\"{dataset_type}_datapoints.json\"\n",
        "    output_path = os.path.join(output_dir, filename)\n",
        "\n",
        "    # Export to JSON\n",
        "    df.to_json(output_path, orient=\"records\", indent=indent)\n",
        "\n",
        "    print(f\"Saved {len(df)} records to {output_path}\")\n",
        "    return output_path"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "BQ2fJFMRiv0v"
      },
      "outputs": [],
      "source": [
        "export_datapoints_to_json(datapoints=synthetic_datasets[\"faqs\"], dataset_type=\"faqs\")\n",
        "\n",
        "export_datapoints_to_json(\n",
        "    datapoints=synthetic_datasets[\"employees\"], dataset_type=\"employees\"\n",
        ")\n",
        "\n",
        "export_datapoints_to_json(\n",
        "    datapoints=synthetic_datasets[\"projects\"], dataset_type=\"projects\"\n",
        ")\n",
        "\n",
        "export_datapoints_to_json(\n",
        "    datapoints=synthetic_datasets[\"knowledge_assets\"], dataset_type=\"knowledge_assets\"\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qjmma6xtVtab"
      },
      "source": [
        "## Part 1: Data Loading, Cleaning and Preparation"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Vu_MMZt5qv0E",
        "outputId": "5749a9ba-537f-4b22-aad5-6c0132a7512f"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Enter your VOYAGE AI API key: ··········\n"
          ]
        }
      ],
      "source": [
        "set_env_securely(\"VOYAGE_API_KEY\", \"Enter your VOYAGE AI API key: \")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "CPwcb6_0WTkw"
      },
      "outputs": [],
      "source": [
        "# TODO: Change to hugging face\n",
        "employees_data_df = pd.read_json(\"synthetic_data/employees_datapoints.json\")\n",
        "faqs_data_df = pd.read_json(\"synthetic_data/faqs_datapoints.json\")\n",
        "projects_data_df = pd.read_json(\"synthetic_data/projects_datapoints.json\")\n",
        "knowledge_assets_data_df = pd.read_json(\n",
        "    \"synthetic_data/knowledge_assets_datapoints.json\"\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "XxAevTA7qHRT"
      },
      "outputs": [],
      "source": [
        "employees_data_df.head()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "PJDAGdA3sP_k"
      },
      "outputs": [],
      "source": [
        "faqs_data_df.head()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ouMGUZX1k4_2"
      },
      "source": [
        "### Generating embeddings for datapoints"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "id": "LMXWu0Jrk9sp"
      },
      "outputs": [],
      "source": [
        "from typing import Optional\n",
        "\n",
        "import voyageai\n",
        "\n",
        "VOYAGE_AI_EMBEDDING_MODEL = \"voyage-3-large\"\n",
        "VOYAGE_AI_EMBEDDING_MODEL_DIMENSION = 1024\n",
        "\n",
        "# Initialize the Voyage AI client.\n",
        "voyageai_client = voyageai.Client()\n",
        "\n",
        "\n",
        "def get_embedding(text, task_prefix=\"document\"):\n",
        "    \"\"\"\n",
        "    Generate embeddings for a text string with a task-specific prefix using the voyage-3-large model.\n",
        "\n",
        "    Parameters:\n",
        "      text (str): The input text to be embedded.\n",
        "      task_prefix (str): A prefix describing the task; this is prepended to the text.\n",
        "\n",
        "    Returns:\n",
        "      list: The embedding vector as a list of floats (or ints if another output_dtype is chosen).\n",
        "    \"\"\"\n",
        "    if not text.strip():\n",
        "        print(\"Attempted to get embedding for empty text.\")\n",
        "        return []\n",
        "\n",
        "    # Call the Voyage API to generate the embedding.\n",
        "    # Here, we wrap the text in a list since the API expects a list of texts.\n",
        "    # Default output embedding: 1024\n",
        "    result = voyageai_client.embed(\n",
        "        [text], model=VOYAGE_AI_EMBEDDING_MODEL, input_type=task_prefix\n",
        "    )\n",
        "\n",
        "    # Return the first embedding from the result.\n",
        "    return result.embeddings[0]"
      ]
    },
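    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Downstream, the vector index ranks documents by similarity between these embedding vectors, typically cosine similarity. As a self-contained illustration of the metric itself, here is a plain-Python cosine similarity on two toy vectors (not real Voyage embeddings):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import math\n",
        "\n",
        "\n",
        "# Toy illustration of cosine similarity, the metric a vector index\n",
        "# typically uses to rank document embeddings against a query embedding.\n",
        "def cosine_similarity(a, b):\n",
        "    dot = sum(x * y for x, y in zip(a, b))\n",
        "    norm_a = math.sqrt(sum(x * x for x in a))\n",
        "    norm_b = math.sqrt(sum(y * y for y in b))\n",
        "    return dot / (norm_a * norm_b)\n",
        "\n",
        "\n",
        "query_vec = [0.1, 0.9, 0.2]  # stand-in for get_embedding(text, \"query\")\n",
        "doc_vec = [0.12, 0.85, 0.25]  # stand-in for get_embedding(text, \"document\")\n",
        "print(round(cosine_similarity(query_vec, doc_vec), 3))"
      ]
    },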
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "wL9X3dkIqQH8"
      },
      "outputs": [],
      "source": [
        "from tqdm import tqdm\n",
        "\n",
        "\n",
        "def generate_employee_embedding(employee_row):\n",
        "    \"\"\"\n",
        "    Generate an embedding for an employee by concatenating relevant fields.\n",
        "\n",
        "    Parameters:\n",
        "        employee_row (pandas.Series): A row from the employees DataFrame\n",
        "\n",
        "    Returns:\n",
        "        list: The embedding vector\n",
        "    \"\"\"\n",
        "    try:\n",
        "        # Ensure each field exists and handle possible missing values\n",
        "        name = employee_row.get(\"name\", \"\")\n",
        "        role = employee_row.get(\"role\", \"\")\n",
        "        department = employee_row.get(\"department\", \"\")\n",
        "\n",
        "        # Handle skills which should be a list\n",
        "        skills = employee_row.get(\"skills\", [])\n",
        "        if not isinstance(skills, list):\n",
        "            skills = [] if pd.isna(skills) else [str(skills)]\n",
        "\n",
        "        # Handle bio which might be None/NaN\n",
        "        bio = employee_row.get(\"bio\", \"\")\n",
        "        if pd.isna(bio):\n",
        "            bio = \"\"\n",
        "\n",
        "        # Concatenate relevant fields with spaces in between\n",
        "        concatenated_text = (\n",
        "            f\"Name: {name} \"\n",
        "            f\"Role: {role} \"\n",
        "            f\"Department: {department} \"\n",
        "            f\"Skills: {', '.join(skills)} \"\n",
        "            f\"Bio: {bio}\"\n",
        "        )\n",
        "\n",
        "        # Generate and return the embedding\n",
        "        return get_embedding(concatenated_text)\n",
        "    except Exception as e:\n",
        "        print(\n",
        "            f\"Error generating embedding for {employee_row.get('emp_id', 'unknown')}: {e}\"\n",
        "        )\n",
        "        # Return empty list or None to indicate error\n",
        "        return []"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qCV3AtbytNCR"
      },
      "outputs": [],
      "source": [
        "def generate_faq_embedding(faq_row):\n",
        "    \"\"\"\n",
        "    Generate an embedding for an FAQ by concatenating question and answer fields.\n",
        "\n",
        "    Parameters:\n",
        "        faq_row (pandas.Series): A row from the FAQs DataFrame\n",
        "\n",
        "    Returns:\n",
        "        list: The embedding vector\n",
        "    \"\"\"\n",
        "    try:\n",
        "        # Ensure each field exists and handle possible missing values\n",
        "        question = faq_row.get(\"question\", \"\")\n",
        "        answer = faq_row.get(\"answer\", \"\")\n",
        "\n",
        "        # Concatenate question and answer fields with space in between\n",
        "        concatenated_text = f\"Question: {question} Answer: {answer}\"\n",
        "\n",
        "        # Generate and return the embedding\n",
        "        return get_embedding(concatenated_text)\n",
        "    except Exception as e:\n",
        "        print(f\"Error generating embedding for {faq_row.get('faq_id', 'unknown')}: {e}\")\n",
        "        # Return empty list or None to indicate error\n",
        "        return []"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "UUiO4_PykpbZ"
      },
      "outputs": [],
      "source": [
        "def generate_knowledge_asset_embedding(knowledge_asset_row):\n",
        "    \"\"\"\n",
        "    Generate an embedding for a knowledge asset by concatenating title and content fields.\n",
        "\n",
        "    Parameters:\n",
        "        knowledge_asset_row (pandas.Series): A row from the Knowledge Assets DataFrame\n",
        "\n",
        "    Returns:\n",
        "        list: The embedding vector\n",
        "    \"\"\"\n",
        "\n",
        "    try:\n",
        "        # Ensure each field exists and handle possible missing values\n",
        "        title = knowledge_asset_row.get(\"title\", \"\")\n",
        "        content = knowledge_asset_row.get(\"content\", \"\")\n",
        "\n",
        "        # Concatenate title and content fields with space in between\n",
        "        concatenated_text = f\"Title: {title} Content: {content}\"\n",
        "\n",
        "        # Generate and return the embedding\n",
        "        return get_embedding(concatenated_text)\n",
        "    except Exception as e:\n",
        "        print(\n",
        "            f\"Error generating embedding for {knowledge_asset_row.get('asset_id', 'unknown')}: {e}\"\n",
        "        )\n",
        "        # Return empty list or None to indicate error\n",
        "        return []"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "1Y-YWIQKk12Q"
      },
      "outputs": [],
      "source": [
        "def generate_project_embedding(project_row):\n",
        "    \"\"\"\n",
        "    Generate an embedding for a project by concatenating name and description fields.\n",
        "\n",
        "    Parameters:\n",
        "        project_row (pandas.Series): A row from the Projects DataFrame\n",
        "\n",
        "    Returns:\n",
        "        list: The embedding vector\n",
        "    \"\"\"\n",
        "    try:\n",
        "        # Ensure each field exists and handle possible missing values\n",
        "        name = project_row.get(\"name\", \"\")\n",
        "        description = project_row.get(\"description\", \"\")\n",
        "        status = project_row.get(\"status\", \"\")\n",
        "\n",
        "        # Concatenate name, description, and status fields with spaces in between\n",
        "        concatenated_text = f\"Name: {name} Description: {description} Status: {status}\"\n",
        "\n",
        "        # Generate and return the embedding\n",
        "        return get_embedding(concatenated_text)\n",
        "    except Exception as e:\n",
        "        print(\n",
        "            f\"Error generating embedding for {project_row.get('project_id', 'unknown')}: {e}\"\n",
        "        )\n",
        "        # Return empty list or None to indicate error\n",
        "        return []"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3TsYGgkSsdty"
      },
      "outputs": [],
      "source": [
        "# Apply the function to each row in the employees DataFrame\n",
        "tqdm.pandas(desc=\"Generating employee embeddings\")\n",
        "employees_data_df[\"embedding\"] = employees_data_df.progress_apply(\n",
        "    generate_employee_embedding, axis=1\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "9cj26uMystG0"
      },
      "outputs": [],
      "source": [
        "employees_data_df.head()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "KkFgItJRtVHn"
      },
      "outputs": [],
      "source": [
        "# Apply the function to each row in the FAQs DataFrame with tqdm progress bar\n",
        "tqdm.pandas(desc=\"Generating FAQ embeddings\")\n",
        "faqs_data_df[\"embedding\"] = faqs_data_df.progress_apply(generate_faq_embedding, axis=1)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bFBq7s0jtY5z"
      },
      "outputs": [],
      "source": [
        "faqs_data_df.head()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "OjghOcHJlic5"
      },
      "outputs": [],
      "source": [
        "tqdm.pandas(desc=\"Generating Knowledge Asset embeddings\")\n",
        "knowledge_assets_data_df[\"embedding\"] = knowledge_assets_data_df.progress_apply(\n",
        "    generate_knowledge_asset_embedding, axis=1\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ILomW7gtlnCS"
      },
      "outputs": [],
      "source": [
        "knowledge_assets_data_df.head()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "GDWIakPUlqh8"
      },
      "outputs": [],
      "source": [
        "tqdm.pandas(desc=\"Generating Project embeddings\")\n",
        "projects_data_df[\"embedding\"] = projects_data_df.progress_apply(\n",
        "    generate_project_embedding, axis=1\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "SgjZWi-glvdn"
      },
      "outputs": [],
      "source": [
        "projects_data_df.head()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dQiZ9ErAV31p"
      },
      "source": [
        "## Part 2: Database Connection, Collections, and Indexes"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZZb2BWwlt_WP"
      },
      "source": [
        "### Connecting to MongoDB\n",
        "\n",
        "MongoDB acts as both the operational and the vector database for the RAG system.\n",
        "MongoDB Atlas efficiently stores, indexes, and queries vector embeddings alongside operational data.\n",
        "\n",
        "#### Setup\n",
        "\n",
        "To use MongoDB Atlas for this system, complete the following steps:\n",
        "\n",
        "1. Register for a MongoDB Account:\n",
        "   - Go to the MongoDB website (https://www.mongodb.com/cloud/atlas/register).\n",
        "   - Click on the \"Try Free\" or \"Get Started Free\" button.\n",
        "   - Fill out the registration form with your details and create an account.\n",
        "\n",
        "2. Create a [MongoDB Cluster](https://www.mongodb.com/docs/atlas/tutorial/deploy-free-tier-cluster/#procedure)\n",
        "\n",
        "3. Set Up [Database Access](https://www.mongodb.com/docs/atlas/security-add-mongodb-users/#add-database-users):\n",
        "   - In the left sidebar, click on \"Database Access\" under \"Security\".\n",
        "   - Click \"Add New Database User\".\n",
        "   - Create a username and a strong password. Save these credentials securely.\n",
        "   - Set the appropriate permissions for the user (e.g., \"Read and write to any database\").\n",
        "\n",
        "4. Configure Network Access:\n",
        "   - In the left sidebar, click on \"Network Access\" under \"Security\".\n",
        "   - Click \"Add IP Address\".\n",
        "   - To allow access from anywhere (not recommended for production), enter 0.0.0.0/0.\n",
        "   - For better security, whitelist only the specific IP addresses that need access.\n",
        "\n",
        "5. Follow MongoDB’s [steps to get the connection string](https://www.mongodb.com/docs/manual/reference/connection-string/) from the Atlas UI. After setting up the database and obtaining the Atlas cluster connection URI, securely store the URI within your development environment.\n"
      ]
    },
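    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The connection string obtained in step 5 typically follows the SRV format sketched below; the actual hostname and query options come from your Atlas UI:\n",
        "\n",
        "```\n",
        "mongodb+srv://<username>:<password>@<cluster-name>.<hash>.mongodb.net/?retryWrites=true&w=majority\n",
        "```"
      ]
    },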
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "2_o7gYMbt-hh",
        "outputId": "6f56d51b-ecd0-4d32-8697-246f2bd1face"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Enter your MongoDB URI: ··········\n"
          ]
        }
      ],
      "source": [
        "set_env_securely(\"MONGODB_URI\", \"Enter your MongoDB URI: \")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "id": "BaZLGJaYuqYM"
      },
      "outputs": [],
      "source": [
        "import pymongo\n",
        "\n",
        "\n",
        "def get_mongo_client(mongo_uri):\n",
        "    \"\"\"Establish and validate a connection to MongoDB.\"\"\"\n",
        "\n",
        "    client = pymongo.MongoClient(\n",
        "        mongo_uri,\n",
        "        appname=\"devrel.showcase.partners.langchain.knowledge_discovery.python\",\n",
        "    )\n",
        "\n",
        "    # Validate the connection\n",
        "    ping_result = client.admin.command(\"ping\")\n",
        "    if ping_result.get(\"ok\") == 1.0:\n",
        "        # Connection successful\n",
        "        print(\"Connection to MongoDB successful\")\n",
        "        return client\n",
        "    else:\n",
        "        print(\"Connection to MongoDB failed\")\n",
        "    return None"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "bXmeIpwKts8T",
        "outputId": "2442396d-3217-410a-f173-ac15e5958331"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Connection to MongoDB successful\n"
          ]
        }
      ],
      "source": [
        "DB_NAME = \"enterprise_knowledge_discovery\"\n",
        "db_client = get_mongo_client(os.environ.get(\"MONGODB_URI\"))\n",
        "db = db_client[DB_NAME]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "02tb3D5zvb-S"
      },
      "source": [
        "### Create collections"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "id": "axKY_DkSwFjs"
      },
      "outputs": [],
      "source": [
        "# Collection names\n",
        "EMPLOYEES_COLLECTION = \"employees\"\n",
        "FAQS_COLLECTION = \"faqs\"\n",
        "KNOWLEDGE_ASSETS_COLLECTION = \"knowledge_assets\"\n",
        "PROJECTS_COLLECTION = \"projects\"\n",
        "\n",
        "EMPLOYEES_VECTOR_INDEX_NAME = \"employees_vector_search_index\"\n",
        "FAQS_VECTOR_INDEX_NAME = \"faqs_vector_search_index\"\n",
        "KNOWLEDGE_ASSETS_VECTOR_INDEX_NAME = \"knowledge_assets_vector_search_index\"\n",
        "PROJECT_VECTOR_INDEX_NAME = \"projects_vector_search_index\"\n",
        "\n",
        "EMPLOYEES_SEARCH_INDEX_NAME = \"employees_text_search_index\"\n",
        "FAQS_SEARCH_INDEX_NAME = \"faqs_text_search_index\"\n",
        "KNOWLEDGE_ASSETS_SEARCH_INDEX_NAME = \"knowledge_assets_text_search_index\"\n",
        "PROJECTS_SEARCH_INDEX_NAME = \"projects_text_search_index\"\n",
        "\n",
        "VECTOR_DIMENSION = 1024"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Pk-cWqJ_vddl"
      },
      "outputs": [],
      "source": [
        "# Create collections with validation if they don't exist\n",
        "def create_collections():\n",
        "    # Get list of existing collections\n",
        "    existing_collections = db.list_collection_names()\n",
        "    print(f\"Existing collections: {existing_collections}\")\n",
        "\n",
        "    # Create employees collection with schema validation if it doesn't exist\n",
        "    if EMPLOYEES_COLLECTION not in existing_collections:\n",
        "        db.create_collection(\n",
        "            EMPLOYEES_COLLECTION,\n",
        "            validator={\n",
        "                \"$jsonSchema\": {\n",
        "                    \"bsonType\": \"object\",\n",
        "                    \"required\": [\n",
        "                        \"emp_id\",\n",
        "                        \"name\",\n",
        "                        \"role\",\n",
        "                        \"department\",\n",
        "                        \"start_date\",\n",
        "                        \"end_date\",\n",
        "                    ],\n",
        "                    \"properties\": {\n",
        "                        \"emp_id\": {\"bsonType\": \"string\"},\n",
        "                        \"name\": {\"bsonType\": \"string\"},\n",
        "                        \"role\": {\"bsonType\": \"string\"},\n",
        "                        \"department\": {\"bsonType\": \"string\"},\n",
        "                        \"skills\": {\"bsonType\": \"array\"},\n",
        "                        \"bio\": {\"bsonType\": [\"string\", \"null\"]},\n",
        "                        \"manager\": {\"bsonType\": [\"string\", \"null\"]},\n",
        "                        \"current_projects\": {\"bsonType\": \"array\"},\n",
        "                        \"past_projects\": {\"bsonType\": \"array\"},\n",
        "                        \"mentors\": {\"bsonType\": \"array\"},\n",
        "                        \"mentees\": {\"bsonType\": \"array\"},\n",
        "                        \"frequent_collaborators\": {\"bsonType\": \"array\"},\n",
        "                        \"start_date\": {\"bsonType\": \"string\"},\n",
        "                        \"end_date\": {\"bsonType\": \"string\"},\n",
        "                        \"embedding\": {\"bsonType\": \"array\"},\n",
        "                    },\n",
        "                }\n",
        "            },\n",
        "            validationLevel=\"moderate\",\n",
        "        )\n",
        "        print(f\"Created {EMPLOYEES_COLLECTION} collection with schema validation\")\n",
        "\n",
        "    # Create FAQs collection with schema validation if it doesn't exist\n",
        "    if FAQS_COLLECTION not in existing_collections:\n",
        "        db.create_collection(\n",
        "            FAQS_COLLECTION,\n",
        "            validator={\n",
        "                \"$jsonSchema\": {\n",
        "                    \"bsonType\": \"object\",\n",
        "                    \"required\": [\n",
        "                        \"faq_id\",\n",
        "                        \"question\",\n",
        "                        \"answer\",\n",
        "                    ],\n",
        "                    \"properties\": {\n",
        "                        \"faq_id\": {\"bsonType\": \"string\"},\n",
        "                        \"question\": {\"bsonType\": \"string\"},\n",
        "                        \"answer\": {\"bsonType\": \"string\"},\n",
        "                        \"tags\": {\"bsonType\": \"array\"},\n",
        "                        \"embedding\": {\"bsonType\": \"array\"},\n",
        "                    },\n",
        "                }\n",
        "            },\n",
        "            validationLevel=\"moderate\",\n",
        "        )\n",
        "        print(f\"Created {FAQS_COLLECTION} collection with schema validation\")\n",
        "\n",
        "    # Create Projects collection with schema validation\n",
        "    if PROJECTS_COLLECTION not in existing_collections:\n",
        "        db.create_collection(\n",
        "            PROJECTS_COLLECTION,\n",
        "            validator={\n",
        "                \"$jsonSchema\": {\n",
        "                    \"bsonType\": \"object\",\n",
        "                    \"required\": [\"project_id\", \"name\", \"description\", \"status\"],\n",
        "                    \"properties\": {\n",
        "                        \"project_id\": {\"bsonType\": \"string\"},\n",
        "                        \"name\": {\"bsonType\": \"string\"},\n",
        "                        \"description\": {\"bsonType\": \"string\"},\n",
        "                        \"status\": {\"bsonType\": \"string\"},\n",
        "                        \"team_members\": {\"bsonType\": \"array\"},\n",
        "                        \"technologies\": {\"bsonType\": \"array\"},\n",
        "                        \"dependencies\": {\"bsonType\": \"array\"},\n",
        "                        \"related_projects\": {\"bsonType\": \"array\"},\n",
        "                        \"embedding\": {\"bsonType\": \"array\"},\n",
        "                    },\n",
        "                }\n",
        "            },\n",
        "            validationLevel=\"moderate\",\n",
        "        )\n",
        "        print(f\"Created {PROJECTS_COLLECTION} collection with schema validation\")\n",
        "\n",
        "    # Create Projects collection with schema validation\n",
        "    if KNOWLEDGE_ASSETS_COLLECTION not in existing_collections:\n",
        "        db.create_collection(\n",
        "            KNOWLEDGE_ASSETS_COLLECTION,\n",
        "            validator={\n",
        "                \"$jsonSchema\": {\n",
        "                    \"bsonType\": \"object\",\n",
        "                    \"required\": [\"asset_id\", \"title\", \"content\", \"type\"],\n",
        "                    \"properties\": {\n",
        "                        \"asset_id\": {\"bsonType\": \"string\"},\n",
        "                        \"title\": {\"bsonType\": \"string\"},\n",
        "                        \"content\": {\"bsonType\": \"string\"},\n",
        "                        \"type\": {\"bsonType\": \"string\"},\n",
        "                        \"author\": {\"bsonType\": \"string\"},\n",
        "                        \"creation_date\": {\"bsonType\": \"string\"},\n",
        "                        \"last_updated\": {\"bsonType\": \"string\"},\n",
        "                        \"tags\": {\"bsonType\": \"array\"},\n",
        "                        \"related_projects\": {\"bsonType\": \"array\"},\n",
        "                        \"related_employees\": {\"bsonType\": \"array\"},\n",
        "                        \"relevance_score\": {\"bsonType\": \"number\"},\n",
        "                        \"embedding\": {\"bsonType\": \"array\"},\n",
        "                    },\n",
        "                }\n",
        "            },\n",
        "            validationLevel=\"moderate\",\n",
        "        )\n",
        "        print(\n",
        "            f\"Created {KNOWLEDGE_ASSETS_COLLECTION} collection with schema validation\"\n",
        "        )"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "gvJN9mOrxmWT"
      },
      "outputs": [],
      "source": [
        "# Call function to create collections\n",
        "create_collections()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UWx6LsrVvsCo"
      },
      "source": [
        "### Create Indexes"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kXrBnLyOmjv0"
      },
      "source": [
        "Create the vector search indexes"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "z0x3KzlVxhe7"
      },
      "outputs": [],
      "source": [
        "import time\n",
        "\n",
        "from pymongo.operations import SearchIndexModel\n",
        "\n",
        "\n",
        "# Create vector search index if it doesn't exist\n",
        "def create_vector_search_index(collection, vector_index_name):\n",
        "    # Check if index already exists\n",
        "    try:\n",
        "        existing_indexes = collection.list_search_indexes()\n",
        "        for index in existing_indexes:\n",
        "            if index[\"name\"] == vector_index_name:\n",
        "                print(f\"Vector search index '{vector_index_name}' already exists.\")\n",
        "                return\n",
        "    except Exception as e:\n",
        "        print(f\"Could not list search indexes: {e}\")\n",
        "        return\n",
        "\n",
        "    # Create vector search index\n",
        "    search_index_model = SearchIndexModel(\n",
        "        definition={\n",
        "            \"fields\": [\n",
        "                {\n",
        "                    \"type\": \"vector\",\n",
        "                    \"path\": \"embedding\",\n",
        "                    \"numDimensions\": VECTOR_DIMENSION,\n",
        "                    \"similarity\": \"cosine\",\n",
        "                }\n",
        "            ]\n",
        "        },\n",
        "        name=vector_index_name,\n",
        "        type=\"vectorSearch\",\n",
        "    )\n",
        "\n",
        "    try:\n",
        "        result = collection.create_search_index(model=search_index_model)\n",
        "        print(f\"New search index named '{result}' is building.\")\n",
        "    except Exception as e:\n",
        "        print(f\"Error creating vector search index: {e}\")\n",
        "        return\n",
        "\n",
        "    # Wait for initial sync to complete\n",
        "    print(\n",
        "        f\"Polling to check if the index '{result}' is ready. This may take up to a minute.\"\n",
        "    )\n",
        "    predicate = lambda index: index.get(\"queryable\") is True\n",
        "\n",
        "    while True:\n",
        "        try:\n",
        "            indices = list(collection.list_search_indexes(result))\n",
        "            if indices and predicate(indices[0]):\n",
        "                break\n",
        "            time.sleep(5)\n",
        "        except Exception as e:\n",
        "            print(f\"Error checking index readiness: {e}\")\n",
        "            time.sleep(5)\n",
        "\n",
        "    print(f\"{result} is ready for querying.\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "p9-idkGHx9ZL"
      },
      "outputs": [],
      "source": [
        "create_vector_search_index(db[EMPLOYEES_COLLECTION], EMPLOYEES_VECTOR_INDEX_NAME)\n",
        "create_vector_search_index(db[FAQS_COLLECTION], FAQS_VECTOR_INDEX_NAME)\n",
        "create_vector_search_index(\n",
        "    db[KNOWLEDGE_ASSETS_COLLECTION], KNOWLEDGE_ASSETS_VECTOR_INDEX_NAME\n",
        ")\n",
        "create_vector_search_index(db[PROJECTS_COLLECTION], PROJECT_VECTOR_INDEX_NAME)"
      ]
    },
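    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Once an index above reports as queryable, it can be used in a `$vectorSearch` aggregation stage. A sketch of such a stage is shown below; the field values are illustrative, and `queryVector` (truncated here for brevity) would be a full 1024-dimension vector produced by `get_embedding`:\n",
        "\n",
        "```json\n",
        "{\n",
        "  \"$vectorSearch\": {\n",
        "    \"index\": \"employees_vector_search_index\",\n",
        "    \"path\": \"embedding\",\n",
        "    \"queryVector\": [0.013, -0.021],\n",
        "    \"numCandidates\": 100,\n",
        "    \"limit\": 5\n",
        "  }\n",
        "}\n",
        "```"
      ]
    },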
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "P1i4jQoVmeD9"
      },
      "source": [
        "Create the search indexes"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "zjq2xnKGmdj0"
      },
      "outputs": [],
      "source": [
        "def create_text_search_index(collection, index_definition, index_name):\n",
        "    \"\"\"\n",
        "    Create a search index for a MongoDB Atlas collection.\n",
        "\n",
        "    Args:\n",
        "    collection: MongoDB collection object\n",
        "    index_definition: Dictionary defining the index mappings\n",
        "    index_name: String name for the index\n",
        "\n",
        "    Returns:\n",
        "    str: Result of the index creation operation\n",
        "    \"\"\"\n",
        "\n",
        "    try:\n",
        "        search_index_model = SearchIndexModel(\n",
        "            definition=index_definition, name=index_name\n",
        "        )\n",
        "\n",
        "        result = collection.create_search_index(model=search_index_model)\n",
        "        print(f\"Search index '{index_name}' created successfully\")\n",
        "        return result\n",
        "    except Exception as e:\n",
        "        print(f\"Error creating search index: {e!s}\")\n",
        "        return None"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "IYAMWS1hmu1s"
      },
      "outputs": [],
      "source": [
        "employees_collection_search_index_definition = {\n",
        "    \"mappings\": {\n",
        "        \"dynamic\": True,\n",
        "        \"fields\": {\n",
        "            \"name\": {\"type\": \"string\"},\n",
        "            \"role\": {\"type\": \"string\"},\n",
        "            \"department\": {\"type\": \"string\"},\n",
        "            \"bio\": {\"type\": \"string\"},\n",
        "        },\n",
        "    }\n",
        "}\n",
        "\n",
        "projects_collection_search_index_definition = {\n",
        "    \"mappings\": {\n",
        "        \"dynamic\": True,\n",
        "        \"fields\": {\n",
        "            \"name\": {\"type\": \"string\"},\n",
        "            \"description\": {\"type\": \"string\"},\n",
        "            \"status\": {\"type\": \"string\"},\n",
        "        },\n",
        "    }\n",
        "}\n",
        "\n",
        "faqs_collection_search_index_definition = {\n",
        "    \"mappings\": {\n",
        "        \"dynamic\": True,\n",
        "        \"fields\": {\"question\": {\"type\": \"string\"}, \"answer\": {\"type\": \"string\"}},\n",
        "    }\n",
        "}\n",
        "\n",
        "knowledge_assets_collection_search_index_definition = {\n",
        "    \"mappings\": {\n",
        "        \"dynamic\": True,\n",
        "        \"fields\": {\n",
        "            \"title\": {\"type\": \"string\"},\n",
        "            \"content\": {\"type\": \"string\"},\n",
        "            \"type\": {\"type\": \"string\"},\n",
        "        },\n",
        "    }\n",
        "}"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qx-1-M_TnLgQ"
      },
      "outputs": [],
      "source": [
        "create_text_search_index(\n",
        "    db[EMPLOYEES_COLLECTION],\n",
        "    employees_collection_search_index_definition,\n",
        "    EMPLOYEES_SEARCH_INDEX_NAME,\n",
        ")\n",
        "create_text_search_index(\n",
        "    db[FAQS_COLLECTION], faqs_collection_search_index_definition, FAQS_SEARCH_INDEX_NAME\n",
        ")\n",
        "create_text_search_index(\n",
        "    db[KNOWLEDGE_ASSETS_COLLECTION],\n",
        "    knowledge_assets_collection_search_index_definition,\n",
        "    KNOWLEDGE_ASSETS_SEARCH_INDEX_NAME,\n",
        ")\n",
        "create_text_search_index(\n",
        "    db[PROJECTS_COLLECTION],\n",
        "    projects_collection_search_index_definition,\n",
        "    PROJECTS_SEARCH_INDEX_NAME,\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZQEtLOZsyS6C"
      },
      "source": [
        "### Data Ingestion"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bsow5pu7yUcn"
      },
      "outputs": [],
      "source": [
        "employee_documents = employees_data_df.to_dict(orient=\"records\")\n",
        "faq_documents = faqs_data_df.to_dict(orient=\"records\")\n",
        "knowledge_asset_documents = knowledge_assets_data_df.to_dict(orient=\"records\")\n",
        "project_documents = projects_data_df.to_dict(orient=\"records\")\n",
        "\n",
        "db[EMPLOYEES_COLLECTION].insert_many(employee_documents)\n",
        "db[FAQS_COLLECTION].insert_many(faq_documents)\n",
        "db[KNOWLEDGE_ASSETS_COLLECTION].insert_many(knowledge_asset_documents)\n",
        "db[PROJECTS_COLLECTION].insert_many(project_documents)\n",
        "\n",
        "print(f\"Inserted {len(employee_documents)} documents into {EMPLOYEES_COLLECTION}.\")\n",
        "print(f\"Inserted {len(faq_documents)} documents into {FAQS_COLLECTION}.\")\n",
        "print(\n",
        "    f\"Inserted {len(knowledge_asset_documents)} documents into {KNOWLEDGE_ASSETS_COLLECTION}.\"\n",
        ")\n",
        "print(f\"Inserted {len(project_documents)} documents into {PROJECTS_COLLECTION}.\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kGdVCWNYV377"
      },
      "source": [
        "## Part 3: Creating and Testing Retrieval Methods With LangChain"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "188tPfOcysDC"
      },
      "source": [
        "### Text Search"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "id": "vbLHl5KY4Dp6"
      },
      "outputs": [],
      "source": [
        "# Test lexical search with MongoDB Atlas\n",
        "from typing import Any, List, Tuple\n",
        "\n",
        "from langchain.schema import Document\n",
        "from langchain_mongodb.retrievers.full_text_search import (\n",
        "    MongoDBAtlasFullTextSearchRetriever,\n",
        ")\n",
        "\n",
        "\n",
        "def full_text_search(\n",
        "    collection, search_field, query: str, top_k: int = 10\n",
        ") -> List[Document]:\n",
        "    # Dynamically get the search index name from the collection name\n",
        "    collection_name = collection.name\n",
        "    search_index_name = f\"{collection_name}_text_search_index\"\n",
        "\n",
        "    full_text_search = MongoDBAtlasFullTextSearchRetriever(\n",
        "        collection=collection,\n",
        "        search_index_name=search_index_name,\n",
        "        search_field=search_field,\n",
        "        k=top_k,\n",
        "        include_scores=True,  # This will include a score in each record\n",
        "    )\n",
        "    result = full_text_search.get_relevant_documents(query)\n",
        "\n",
        "    # Remove the emmbedding attribute from each document in the results\n",
        "    for doc in result:\n",
        "        del doc.metadata[\"embedding\"]\n",
        "\n",
        "    return result"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "FkQW50p44Hdj",
        "outputId": "208a562f-a23d-4856-b107-296166cd7c98"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[Document(metadata={'_id': '68c02694dc3b288b36954711', 'emp_id': 'employees-0', 'role': 'Network Engineer', 'department': 'Network Operations', 'skills': ['IP networking', 'routing and switching', 'fiber optics', 'network security', 'VoIP'], 'bio': 'Jordan Singh is a seasoned network engineer specializing in telecom network infrastructure with expertise in routing, switching, and optical transmission technologies.', 'manager': None, 'start_date': '2020-07-15', 'end_date': '', 'current_projects': [], 'past_projects': [], 'mentors': [], 'mentees': [], 'frequent_collaborators': [], 'score': 0.9914655685424805}, page_content='Jordan Singh'),\n",
              " Document(metadata={'_id': '68fa0cb57e65d1c84f9e0465', 'emp_id': 'employees-4', 'role': 'Network Engineer', 'department': 'Network Operations', 'skills': ['Network Design', 'Cisco Routers', 'VoIP Implementation', 'VPN Configuration', 'Troubleshooting', 'Telecom Infrastructure'], 'bio': 'Jordan Kim is an experienced Network Engineer with expertise in designing, implementing, and maintaining large-scale telecommunication networks. Skilled in troubleshooting and optimizing network infrastructure.', 'manager': 'employees-2', 'start_date': '2021-05-17', 'end_date': '', 'current_projects': [], 'past_projects': [], 'mentors': ['employees-1'], 'mentees': ['employees-3'], 'frequent_collaborators': ['employees-0'], 'score': 0.9914655685424805}, page_content='Jordan Kim'),\n",
              " Document(metadata={'_id': '68fa0cb57e65d1c84f9e0461', 'emp_id': 'employees-0', 'role': 'Network Engineer', 'department': 'Network Operations', 'skills': ['Network Design', 'Cisco Routing & Switching', 'Fiber Optic Communication', 'Telecommunications Protocols', 'Network Security'], 'bio': 'Jordan Lee is an experienced network engineer specializing in the design and maintenance of robust telecommunications infrastructure. Proven expertise in optimizing large-scale networks and ensuring high reliability for carrier-grade operations.', 'manager': None, 'start_date': '2021-03-15', 'end_date': '', 'current_projects': [], 'past_projects': [], 'mentors': [], 'mentees': [], 'frequent_collaborators': [], 'score': 0.9914655685424805}, page_content='Jordan Lee')]"
            ]
          },
          "execution_count": 12,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "full_text_search(\n",
        "    collection=db[EMPLOYEES_COLLECTION], search_field=\"name\", query=\"Jordan\"\n",
        ")"
      ]
    },
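    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Under the hood, the retriever issues an Atlas Search `$search` aggregation stage against the `<collection>_text_search_index` index. The sketch below is a rough, hypothetical equivalent of the query above (`build_text_search_pipeline` is an illustrative helper, not part of `langchain-mongodb`, and the retriever's actual pipeline may differ):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Hypothetical sketch of an equivalent raw $search pipeline\n",
        "# (assumed shape, not the retriever's exact internals)\n",
        "def build_text_search_pipeline(collection_name, search_field, query, top_k=10):\n",
        "    # Same index-naming convention used throughout this notebook\n",
        "    index_name = f\"{collection_name}_text_search_index\"\n",
        "    return [\n",
        "        {\n",
        "            \"$search\": {\n",
        "                \"index\": index_name,\n",
        "                \"text\": {\"query\": query, \"path\": search_field},\n",
        "            }\n",
        "        },\n",
        "        {\"$limit\": top_k},\n",
        "        {\"$addFields\": {\"score\": {\"$meta\": \"searchScore\"}}},\n",
        "    ]\n",
        "\n",
        "\n",
        "pipeline = build_text_search_pipeline(\"employees\", \"name\", \"Jordan\")\n",
        "# Would be executed as: list(db[EMPLOYEES_COLLECTION].aggregate(pipeline))"
      ]
    },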
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eULInEOJyybT"
      },
      "source": [
        "### Vector Search"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 13,
      "metadata": {
        "id": "igiULcXx4VlW"
      },
      "outputs": [],
      "source": [
        "from langchain_mongodb import MongoDBAtlasVectorSearch\n",
        "from langchain_voyageai import VoyageAIEmbeddings\n",
        "\n",
        "# Initialize embeddings model\n",
        "embedding_model = VoyageAIEmbeddings(\n",
        "    batch_size=1,\n",
        "    model=VOYAGE_AI_EMBEDDING_MODEL,\n",
        "    voyage_api_key=os.environ.get(\"VOYAGE_API_KEY\"),\n",
        "    output_dimension=VOYAGE_AI_EMBEDDING_MODEL_DIMENSION,\n",
        "    show_progress_bar=True,\n",
        ")\n",
        "\n",
        "\n",
        "def semantic_search(\n",
        "    collection, text_key: str, query: str, top_k: int = 10\n",
        ") -> List[Tuple[Any, float]]:\n",
        "    # Dynamically get the vector search index name from the collection name\n",
        "    collection_name = collection.name\n",
        "    vector_search_index_name = f\"{collection_name}_vector_search_index\"\n",
        "\n",
        "    vector_store = MongoDBAtlasVectorSearch.from_connection_string(\n",
        "        connection_string=os.environ.get(\"MONGODB_URI\"),\n",
        "        namespace=f\"{DB_NAME}.{collection_name}\",\n",
        "        embedding=embedding_model,\n",
        "        index_name=vector_search_index_name,\n",
        "        text_key=text_key,\n",
        "    )\n",
        "\n",
        "    return vector_store.similarity_search_with_score(query=query, k=top_k)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 14,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 466,
          "referenced_widgets": [
            "56bca67a0f6f4658bdfea47f24d78706",
            "ba34f08e153040b6b66c255bb3b6f54e",
            "5d500724b68941e9821aeee92fb3f5eb",
            "210b99e07eb04dadb3930d95966e44b4",
            "c422cea214cc4d96b234dedc08ff4abe",
            "dca43ee32eec4886b03482ca15743c34",
            "2a5bf18548444ff0bd5397fa9e409a2c",
            "3265fdce9fa64da5870cb7384ae974f5",
            "be6558b7fdd1486d8c933064a97b4b10",
            "06fb49b9cae941f59b978b9e022ee944",
            "72aea2635cdf428d91b4133802e514a7"
          ]
        },
        "id": "dS4oLGg9NPAN",
        "outputId": "e3e9eb40-146b-4c32-ae8e-db2e20680b6a"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "56bca67a0f6f4658bdfea47f24d78706",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "  0%|          | 0/1 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/plain": [
              "[(Document(id='68fa0cb57e65d1c84f9e0464', metadata={'_id': '68fa0cb57e65d1c84f9e0464', 'emp_id': 'employees-3', 'name': 'Priya Deshmukh', 'role': 'Network Engineer', 'department': 'Network Operations', 'skills': ['Cisco networking', 'VoIP installation', 'Fiber optic infrastructure', 'Network security', 'BGP & OSPF routing', 'Troubleshooting WAN/LAN'], 'manager': 'employees-1', 'start_date': '2021-02-15', 'end_date': '', 'current_projects': [], 'past_projects': [], 'mentors': ['employees-2'], 'mentees': ['employees-0'], 'frequent_collaborators': ['employees-2', 'employees-1']}, page_content='Priya is a seasoned Network Engineer with 7 years of experience in designing, implementing, and optimizing telecom network infrastructures. She specializes in VoIP systems and high-capacity fiber-optic deployments for enterprise clients.'),\n",
              "  0.7213080525398254),\n",
              " (Document(id='68921a051d77d2d9c2b14100', metadata={'_id': '68921a051d77d2d9c2b14100', 'emp_id': 'employees-7', 'name': 'Sophia Kim', 'role': 'Network Engineer', 'department': 'Network Operations', 'skills': ['IP routing', 'network design', 'fiber optics', 'troubleshooting', 'VoIP'], 'manager': 'employees-3', 'start_date': '2021-04-12', 'end_date': '', 'current_projects': [], 'past_projects': [], 'mentors': ['employees-6'], 'mentees': ['employees-4'], 'frequent_collaborators': ['employees-0', 'employees-1']}, page_content='Sophia Kim is a dedicated network engineer with more than 5 years’ experience designing and maintaining high-capacity telecom networks. She specializes in optimizing network infrastructure for performance and reliability, and has strong expertise with fiber optic systems.'),\n",
              "  0.7185817956924438),\n",
              " (Document(id='68921a051d77d2d9c2b140ff', metadata={'_id': '68921a051d77d2d9c2b140ff', 'emp_id': 'employees-6', 'name': 'Samantha Riley', 'role': 'Network Engineer', 'department': 'Network Operations', 'skills': ['Network Design', 'Cisco Routers', 'VoIP Configuration', 'Telecommunications Protocols', 'Firewall Management', 'Network Troubleshooting'], 'manager': 'employees-1', 'start_date': '2019-04-15', 'end_date': '', 'current_projects': [], 'past_projects': [], 'mentors': ['employees-1'], 'mentees': ['employees-0'], 'frequent_collaborators': ['employees-3', 'employees-5']}, page_content='Samantha Riley is a skilled network engineer with over 7 years of experience in designing and maintaining large-scale telecommunications networks. She specializes in VoIP solutions and has a strong background in network security and troubleshooting complex network issues.'),\n",
              "  0.7175770998001099)]"
            ]
          },
          "execution_count": 14,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "semantic_search(\n",
        "    collection=db[EMPLOYEES_COLLECTION],\n",
        "    text_key=\"bio\",\n",
        "    query=\"Get me someone that is good at speaking to clients\",\n",
        "    top_k=3,\n",
        ")"
      ]
    },
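    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Each result above is a `(Document, score)` pair. If the vector index uses cosine similarity, Atlas Vector Search normalizes the raw cosine value from [-1, 1] into [0, 1] as `score = (1 + cosine) / 2`, which is why the scores cluster around 0.72. A self-contained sketch of that normalization (verify the similarity function configured on your index before relying on this formula):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import math\n",
        "\n",
        "\n",
        "def cosine_similarity(a, b):\n",
        "    # Raw cosine similarity in [-1, 1]\n",
        "    dot = sum(x * y for x, y in zip(a, b))\n",
        "    norm_a = math.sqrt(sum(x * x for x in a))\n",
        "    norm_b = math.sqrt(sum(y * y for y in b))\n",
        "    return dot / (norm_a * norm_b)\n",
        "\n",
        "\n",
        "def atlas_cosine_score(a, b):\n",
        "    # Atlas Vector Search normalization for cosine similarity\n",
        "    return (1 + cosine_similarity(a, b)) / 2\n",
        "\n",
        "\n",
        "print(atlas_cosine_score([1.0, 0.0], [1.0, 0.0]))  # identical vectors -> 1.0\n",
        "print(atlas_cosine_score([1.0, 0.0], [0.0, 1.0]))  # orthogonal vectors -> 0.5"
      ]
    },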
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "A525FGZGyymn"
      },
      "source": [
        "### Hybrid Search"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "WumKdiC-NXDG"
      },
      "outputs": [],
      "source": [
        "from langchain_mongodb.retrievers import MongoDBAtlasHybridSearchRetriever\n",
        "\n",
        "\n",
        "def hybrid_search(\n",
        "    collection, text_key: str, query: str, top_k: int = 10\n",
        ") -> List[Document]:\n",
        "    # Dynamically get the vector search index name from the collection name\n",
        "    collection_name = collection.name\n",
        "    vector_search_index_name = f\"{collection_name}_vector_search_index\"\n",
        "    search_index_name = f\"{collection_name}_text_search_index\"\n",
        "\n",
        "    # intilaize the vector store first\n",
        "    vector_store = MongoDBAtlasVectorSearch.from_connection_string(\n",
        "        connection_string=os.environ.get(\"MONGODB_URI\"),\n",
        "        namespace=f\"{DB_NAME}.{collection_name}\",\n",
        "        embedding=embedding_model,\n",
        "        index_name=vector_search_index_name,\n",
        "        text_key=\"bio\",\n",
        "    )\n",
        "\n",
        "    hybrid_search = MongoDBAtlasHybridSearchRetriever(\n",
        "        vectorstore=vector_store, search_index_name=search_index_name, top_k=top_k\n",
        "    )\n",
        "\n",
        "    return hybrid_search.get_relevant_documents(query)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 16,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 448,
          "referenced_widgets": [
            "3fd986c252fd40ccb77ed9b185946d0b",
            "a1e329ee64e74c1baa37fb3c27580892",
            "212f3f25669d4fcdb37bae2121bcf04d",
            "8c7d71782d284afd93c491446c319254",
            "11ec5659934c469e8a96fc008b72813b",
            "9fd4422f56e24e41b0cca88c957250a5",
            "e61fe1515d1d4326a9ba069f003a8bc5",
            "85768a1ecb714ba7a29ad01659b99449",
            "e89cd05f2f5744eab740a4d77bc418e0",
            "85a57236d48541f9af1bfc8f60266f2e",
            "f9c72b84091543669df9a6a4d3f86935"
          ]
        },
        "id": "ZPvFiOrlNYvm",
        "outputId": "802fafe0-1ed1-489f-bf4e-9fbee407313a"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "3fd986c252fd40ccb77ed9b185946d0b",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "  0%|          | 0/1 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/plain": [
              "[Document(metadata={'_id': '68fa0cb57e65d1c84f9e0465', 'emp_id': 'employees-4', 'name': 'Jordan Kim', 'role': 'Network Engineer', 'department': 'Network Operations', 'skills': ['Network Design', 'Cisco Routers', 'VoIP Implementation', 'VPN Configuration', 'Troubleshooting', 'Telecom Infrastructure'], 'manager': 'employees-2', 'start_date': '2021-05-17', 'end_date': '', 'current_projects': [], 'past_projects': [], 'mentors': ['employees-1'], 'mentees': ['employees-3'], 'frequent_collaborators': ['employees-0'], 'vector_score': 0.01639344262295082, 'rank': 0, 'fulltext_score': 0, 'score': 0.01639344262295082}, page_content='Jordan Kim is an experienced Network Engineer with expertise in designing, implementing, and maintaining large-scale telecommunication networks. Skilled in troubleshooting and optimizing network infrastructure.'),\n",
              " Document(metadata={'_id': '68921a051d77d2d9c2b140fd', 'emp_id': 'employees-4', 'name': 'Maya Patel', 'role': 'Network Engineer', 'department': 'Network Operations', 'skills': ['Network Design', 'Cisco Routers & Switches', 'VoIP', 'Telecommunications Infrastructure', 'Network Security', 'Fiber Optic Communication', 'Linux Administration'], 'manager': 'employees-1', 'start_date': '2017-03-12', 'end_date': '', 'current_projects': [], 'past_projects': [], 'mentors': ['employees-1'], 'mentees': ['employees-2'], 'frequent_collaborators': ['employees-0', 'employees-3'], 'vector_score': 0.016129032258064516, 'rank': 1, 'fulltext_score': 0, 'score': 0.016129032258064516}, page_content='Maya Patel is a seasoned network engineer with over 7 years of experience in the telecommunications sector. Her expertise spans network design, deployment, and ongoing optimization, focusing on delivering high-availability communication platforms. Passionate about mentoring new engineers, she contributes to building robust technical teams.'),\n",
              " Document(metadata={'_id': '68c02694dc3b288b36954718', 'emp_id': 'employees-7', 'name': 'Anjali Patel', 'role': 'System Administrator', 'department': 'IT Operations', 'skills': ['Linux administration', 'Network security', 'Telecommunications systems', 'Firewall configuration', 'Cloud infrastructure', 'Incident response'], 'manager': 'employees-3', 'start_date': '2021-03-15', 'end_date': '', 'current_projects': [], 'past_projects': [], 'mentors': ['employees-5'], 'mentees': ['employees-6'], 'frequent_collaborators': ['employees-1', 'employees-4'], 'vector_score': 0.015873015873015872, 'rank': 2, 'fulltext_score': 0, 'score': 0.015873015873015872}, page_content='Anjali Patel is an experienced System Administrator specializing in telecom infrastructure. With a strong focus on network security and high-availability systems, Anjali ensures seamless IT operations and supports large-scale telecommunications environments.')]"
            ]
          },
          "execution_count": 16,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "hybrid_search(\n",
        "    collection=db[EMPLOYEES_COLLECTION],\n",
        "    text_key=\"bio\",\n",
        "    query=\"Get me someone that know android development\",\n",
        "    top_k=3,\n",
        ")"
      ]
    },
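    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Note the `vector_score`, `fulltext_score`, and `rank` fields in the metadata above. `MongoDBAtlasHybridSearchRetriever` merges the two ranked result lists with reciprocal rank fusion (RRF): each list contributes `1 / (rank + penalty)` per document, with a default penalty of 60, and contributions are summed. The scores above are consistent with this: the metadata's `rank` field is zero-based, so the top hit scores `1 / (1 + 60) = 0.01639...`. A minimal sketch of the fusion (toy document names, not library code):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def rrf_scores(ranked_lists, penalty=60):\n",
        "    # Reciprocal rank fusion: each list contributes 1 / (rank + penalty)\n",
        "    # per document (ranks are 1-based here); contributions are summed\n",
        "    fused = {}\n",
        "    for ranked in ranked_lists:\n",
        "        for rank, doc_id in enumerate(ranked, start=1):\n",
        "            fused[doc_id] = fused.get(doc_id, 0.0) + 1.0 / (rank + penalty)\n",
        "    return dict(sorted(fused.items(), key=lambda kv: kv[1], reverse=True))\n",
        "\n",
        "\n",
        "vector_hits = [\"Jordan Kim\", \"Maya Patel\", \"Anjali Patel\"]\n",
        "fulltext_hits = []  # no lexical matches, as in the output above (fulltext_score: 0)\n",
        "print(rrf_scores([vector_hits, fulltext_hits]))\n",
        "# Top score: 1 / 61 = 0.01639344..., matching vector_score in the results above"
      ]
    },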
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PKVAET0yyz2f"
      },
      "source": [
        "### Graph Search"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 17,
      "metadata": {
        "id": "aalHMPB446GW"
      },
      "outputs": [],
      "source": [
        "from langchain.chat_models import init_chat_model\n",
        "from langchain_mongodb.graphrag.graph import MongoDBGraphStore\n",
        "from langchain_openai import OpenAI\n",
        "\n",
        "# For best results, use latest models such as gpt-4o and Claude Sonnet 3.5+, etc.\n",
        "chat_model = init_chat_model(\"gpt-4o\", model_provider=\"openai\", temperature=0)\n",
        "\n",
        "\n",
        "def graph_traversal(collection, query):\n",
        "    \"\"\"\n",
        "    Execute a Graph RAG query against a MongoDB collection.\n",
        "\n",
        "    Args:\n",
        "    collection: MongoDB collection object\n",
        "    query: String query to execute\n",
        "\n",
        "    Returns:\n",
        "    str: Result of the query execution\n",
        "\n",
        "    \"\"\"\n",
        "\n",
        "    collection_name = collection.name\n",
        "\n",
        "    graph_store = MongoDBGraphStore(\n",
        "        connection_string=os.environ.get(\"MONGODB_URI\"),\n",
        "        database_name=DB_NAME,\n",
        "        collection_name=collection_name,\n",
        "        entity_extraction_model=chat_model,\n",
        "    )\n",
        "\n",
        "    results = graph_store.chat_response(query)\n",
        "\n",
        "    return results"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 18,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "jVycw_Bo7-O_",
        "outputId": "3910c1a0-4c61-4b36-88ba-a7bc6d9fe296"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "AIMessage(content='There are no entities related to the query about projects that share team members with good communication.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 472, 'total_tokens': 490, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_cbf1785567', 'id': 'chatcmpl-CTni54CdmYkU5yr8PO7CNRmaSYXfV', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--dfb7c5f1-8ae2-4f2b-a559-8aaa151349dd-0', usage_metadata={'input_tokens': 472, 'output_tokens': 18, 'total_tokens': 490, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})"
            ]
          },
          "execution_count": 18,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "graph_traversal(\n",
        "    collection=db[PROJECTS_COLLECTION],\n",
        "    query=\"Find all projects that share team members with good communication\",\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "C79Iw4SCRKjD"
      },
      "source": [
        "Cross-Team Project Knowledge Discovery\n",
        "\n",
        "- Finding how different projects interconnect through shared team members\n",
        "- Identifying knowledge transfer paths when employees move between projects\n",
        "- Discovering dependencies between projects that aren't documented but exist through shared personnel\n",
        "\n",
        "\n",
        "Expert Network Mapping\n",
        "\n",
        "- Tracing expertise flows when experts collaborate on projects\n",
        "- Finding indirect expertise paths (e.g., \"Who can John reach out to for\n",
        "Android development help through his network?\")\n",
        "- Discovering emerging expertise clusters around specific technologies"
      ]
    },
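    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Since employee documents already carry relationship fields (`mentors`, `mentees`, `frequent_collaborators`), indirect expertise paths can also be explored with a plain breadth-first search over those edges. A minimal, hypothetical sketch over in-memory records shaped like the employees collection (`find_expertise_path` is illustrative, not part of the notebook's pipeline):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from collections import deque\n",
        "\n",
        "\n",
        "def find_expertise_path(employees, start_id, skill):\n",
        "    \"\"\"Breadth-first search over mentor/mentee/collaborator edges to the\n",
        "    nearest employee listing the requested skill.\"\"\"\n",
        "    by_id = {e[\"emp_id\"]: e for e in employees}\n",
        "    parents = {start_id: None}\n",
        "    queue = deque([start_id])\n",
        "    while queue:\n",
        "        emp_id = queue.popleft()\n",
        "        emp = by_id[emp_id]\n",
        "        if skill in emp[\"skills\"] and emp_id != start_id:\n",
        "            # Reconstruct the path back to the starting employee\n",
        "            path = []\n",
        "            while emp_id is not None:\n",
        "                path.append(emp_id)\n",
        "                emp_id = parents[emp_id]\n",
        "            return list(reversed(path))\n",
        "        for n in emp[\"mentors\"] + emp[\"mentees\"] + emp[\"frequent_collaborators\"]:\n",
        "            if n in by_id and n not in parents:\n",
        "                parents[n] = emp_id\n",
        "                queue.append(n)\n",
        "    return None\n",
        "\n",
        "\n",
        "# Toy records (hypothetical), shaped like the employees collection\n",
        "team = [\n",
        "    {\"emp_id\": \"employees-0\", \"skills\": [\"VoIP\"], \"mentors\": [], \"mentees\": [], \"frequent_collaborators\": [\"employees-1\"]},\n",
        "    {\"emp_id\": \"employees-1\", \"skills\": [\"Network Design\"], \"mentors\": [], \"mentees\": [\"employees-2\"], \"frequent_collaborators\": []},\n",
        "    {\"emp_id\": \"employees-2\", \"skills\": [\"Android development\"], \"mentors\": [], \"mentees\": [], \"frequent_collaborators\": []},\n",
        "]\n",
        "print(find_expertise_path(team, \"employees-0\", \"Android development\"))\n",
        "# -> ['employees-0', 'employees-1', 'employees-2']"
      ]
    },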
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "o8O8jBCNWCZB"
      },
      "source": [
        "## Part 4: Automated Workflow and Agentic AI Implementation"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "t7TtixBr9diP"
      },
      "source": [
        "#### **AUTOMATION SCENARIO : Critical 5G Network Issue Response ( Workflow Automation)**\n",
        "    \n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KE3RL2wZNEP1"
      },
      "source": [
        "- Context: A major 5G network outage affects multiple regions. The system needs to\n",
        "quickly assemble an emergency response team with specific expertise.\n",
        "\n",
        "- Workflow Steps:\n",
        "  - Step 1: Crisis Detection and Skill Requirements\n",
        "  - Step 2: Expert Identification\n",
        "  - Step 3: Team Composition Analysis\n",
        "  - Step 4: Knowledge Asset Preparation\n",
        "  - Step 5: Team Activation and Brief"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "r6MxAAF_vCY4"
      },
      "source": [
        "##### Overview"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BtV2CFnNCx-g"
      },
      "source": [
        "![image.png]()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9k1x0Y2VIp44"
      },
      "source": [
        "##### Create Collections and Indexes [Crisis]"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 19,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "y82X2bU4IsnB",
        "outputId": "78b1f327-cd78-466f-e6ed-7bba03626619"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "crisis_events collection already exists\n"
          ]
        }
      ],
      "source": [
        "CRISIS_EVENT_COLLECTION = \"crisis_events\"\n",
        "\n",
        "existing_collections = db.list_collection_names()\n",
        "\n",
        "# Create collection\n",
        "if CRISIS_EVENT_COLLECTION not in existing_collections:\n",
        "    db.create_collection(CRISIS_EVENT_COLLECTION)\n",
        "    print(f\"Created {CRISIS_EVENT_COLLECTION} collection\")\n",
        "else:\n",
        "    print(f\"{CRISIS_EVENT_COLLECTION} collection already exists\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5wxv4FW8JLZJ"
      },
      "outputs": [],
      "source": [
        "# Create search and vector indexes\n",
        "create_vector_search_index(\n",
        "    db[CRISIS_EVENT_COLLECTION], f\"{CRISIS_EVENT_COLLECTION}_vector_search_index\"\n",
        ")\n",
        "create_text_search_index(\n",
        "    db[CRISIS_EVENT_COLLECTION],\n",
        "    {\"mappings\": {\"dynamic\": True}},\n",
        "    f\"{CRISIS_EVENT_COLLECTION}_text_search_index\",\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tmBq9SnbEeqO"
      },
      "source": [
        "##### Data Models"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 22,
      "metadata": {
        "id": "5-P60UjqEOkR"
      },
      "outputs": [],
      "source": [
        "from datetime import datetime\n",
        "from enum import Enum\n",
        "\n",
        "\n",
        "# Minimal enums\n",
        "class CrisisType(str, Enum):\n",
        "    NETWORK_OUTAGE = \"Network Outage\"\n",
        "    SECURITY_BREACH = \"Security Breach\"\n",
        "    SYSTEM_FAILURE = \"System Failure\"\n",
        "    INFRASTRUCTURE_FAILURE = \"Infrastructure Failure\"\n",
        "\n",
        "\n",
        "class SeverityLevel(str, Enum):\n",
        "    LOW = \"low\"\n",
        "    MEDIUM = \"medium\"\n",
        "    HIGH = \"high\"\n",
        "    CRITICAL = \"critical\"\n",
        "\n",
        "\n",
        "# Minimal CrisisEvent Model\n",
        "class CrisisEvent(BaseModel):\n",
        "    event_id: str = Field(..., description=\"Unique crisis identifier\")\n",
        "    event_type: CrisisType = Field(..., description=\"Type of crisis\")\n",
        "    severity: SeverityLevel = Field(..., description=\"Crisis severity level\")\n",
        "    title: str = Field(..., description=\"Brief crisis description\")\n",
        "    description: str = Field(..., description=\"Detailed crisis description\")\n",
        "    affected_systems: List[str] = Field(\n",
        "        default_factory=list, description=\"Affected systems/services\"\n",
        "    )\n",
        "    affected_regions: List[str] = Field(\n",
        "        default_factory=list, description=\"Affected geographical regions\"\n",
        "    )\n",
        "    customer_impact: str = Field(\n",
        "        ..., description=\"Estimated customer impact description\"\n",
        "    )\n",
        "    required_skills: List[str] = Field(\n",
        "        default_factory=list, description=\"Skills needed for response\"\n",
        "    )"
      ]
    },
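    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Because the enums subclass `str`, members compare and serialize as plain strings, and Pydantic coerces matching string values on input. A quick illustration with made-up field values (assuming Pydantic v2's `model_dump`):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Illustrative instantiation with made-up values\n",
        "event = CrisisEvent(\n",
        "    event_id=\"CRISIS-20250505-001\",\n",
        "    event_type=\"Network Outage\",  # coerced to CrisisType.NETWORK_OUTAGE\n",
        "    severity=\"critical\",  # coerced to SeverityLevel.CRITICAL\n",
        "    title=\"Example outage\",\n",
        "    description=\"Illustrative event only.\",\n",
        "    customer_impact=\"None - sample data\",\n",
        ")\n",
        "\n",
        "print(event.severity is SeverityLevel.CRITICAL)  # True\n",
        "print(event.model_dump(mode=\"json\")[\"event_type\"])  # \"Network Outage\"\n",
        "print(isinstance(event.event_type, str))  # True"
      ]
    },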
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rrHWE9MuEg9L"
      },
      "source": [
        "##### Incident Report Parser"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 23,
      "metadata": {
        "id": "f608A1QAEjlf"
      },
      "outputs": [],
      "source": [
        "class IncidentReportParser:\n",
        "    def parse_incident_report(self, incident_report: str) -> CrisisEvent:\n",
        "        \"\"\"Parse incident report text and create CrisisEvent object\"\"\"\n",
        "\n",
        "        # Create a prompt for GPT-4.1 to parse the incident report\n",
        "        prompt = f\"\"\"\n",
        "    Parse the following incident report and extract information to create a CrisisEvent.\n",
        "\n",
        "    Incident Report:\n",
        "    {incident_report}\n",
        "\n",
        "    Extract the following information:\n",
        "    1. event_type: Determine the type (Network Outage, Security Breach, System Failure, Infrastructure Failure)\n",
        "    2. severity: Determine severity level (low, medium, high, critical)\n",
        "    3. title: Create a brief title (max 100 characters)\n",
        "    4. description: Extract or create a detailed description (max 500 characters)\n",
        "    5. affected_systems: List of affected systems/services\n",
        "    6. affected_regions: List of affected geographical regions\n",
        "    7. customer_impact: Estimated impact on customers\n",
        "    8. required_skills: Skills needed to respond to this crisis\n",
        "\n",
        "    Generate a unique event_id in the format: CRISIS-YYYYMMDD-XXX\n",
        "\n",
        "    Return ONLY a JSON object that matches the CrisisEvent schema exactly.\n",
        "    \"\"\"\n",
        "\n",
        "        try:\n",
        "            # Parse with GPT-4.1\n",
        "            response = openai_client.responses.parse(\n",
        "                model=\"gpt-4.1\", input=prompt, text_format=CrisisEvent\n",
        "            )\n",
        "\n",
        "            crisis_event = response.output_parsed\n",
        "\n",
        "            return crisis_event\n",
        "\n",
        "        except Exception as e:\n",
        "            print(f\"Error parsing incident report: {e}\")\n",
        "            # Fallback to basic crisis event\n",
        "            return self._create_fallback_crisis(incident_report)\n",
        "\n",
        "    def _create_fallback_crisis(self, incident_report: str) -> CrisisEvent:\n",
        "        \"\"\"Create a basic crisis event if parsing fails\"\"\"\n",
        "        today = datetime.now()\n",
        "        event_id = f\"CRISIS-{today.strftime('%Y%m%d')}-001\"\n",
        "\n",
        "        return CrisisEvent(\n",
        "            event_id=event_id,\n",
        "            event_type=CrisisType.SYSTEM_FAILURE,\n",
        "            severity=SeverityLevel.MEDIUM,\n",
        "            title=\"Unknown Crisis Event\",\n",
        "            description=incident_report[:500],  # Truncate to 500 chars\n",
        "            affected_systems=[\"Unknown\"],\n",
        "            affected_regions=[\"Unknown\"],\n",
        "            customer_impact=\"Assessment required\",\n",
        "            required_skills=[\"General IT Support\"],\n",
        "        )"
      ]
    },
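    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The `CRISIS-YYYYMMDD-XXX` format in the prompt (and the fallback's hard-coded `-001` suffix) will collide if several incidents occur on the same day. Below is a small sketch of a sequence-aware generator; `next_event_id` and its `existing_ids` parameter are illustrative, not part of the notebook's pipeline, and in practice the issued identifiers could come from a query against the `crisis_events` collection:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import re\n",
        "from datetime import datetime\n",
        "\n",
        "\n",
        "def next_event_id(existing_ids, today=None):\n",
        "    \"\"\"Generate the next CRISIS-YYYYMMDD-XXX identifier, incrementing\n",
        "    the per-day sequence based on identifiers already issued.\"\"\"\n",
        "    today = today or datetime.now()\n",
        "    prefix = f\"CRISIS-{today.strftime('%Y%m%d')}-\"\n",
        "    taken = [\n",
        "        int(m.group(1))\n",
        "        for eid in existing_ids\n",
        "        if (m := re.fullmatch(re.escape(prefix) + r\"(\\d{3})\", eid))\n",
        "    ]\n",
        "    return f\"{prefix}{(max(taken, default=0) + 1):03d}\"\n",
        "\n",
        "\n",
        "day = datetime(2025, 5, 5)\n",
        "print(next_event_id([], today=day))  # CRISIS-20250505-001\n",
        "print(next_event_id([\"CRISIS-20250505-001\"], today=day))  # CRISIS-20250505-002"
      ]
    },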
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RKZhEBw6v4xk"
      },
      "source": [
        "##### Example Incident Report"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 24,
      "metadata": {
        "id": "IBI4zT8NGFNi"
      },
      "outputs": [],
      "source": [
        "# Example incident report (single string)\n",
        "incident_report = \"\"\"\n",
        "  NETWORK CRISIS REPORT - PRIORITY CRITICAL\n",
        "\n",
        "  Incident #: INC-20250505-3547\n",
        "  Service: 5G Network Service\n",
        "  Status: ACTIVE OUTAGE\n",
        "\n",
        "  SUMMARY:\n",
        "  Complete 5G network failure reported across North America region\n",
        "\n",
        "  AFFECTED AREAS:\n",
        "  - New York City metro area\n",
        "  - Boston metropolitan region\n",
        "  - Philadelphia and surrounding counties\n",
        "\n",
        "  IMPACT ASSESSMENT:\n",
        "  - Estimated 2 million customers unable to access 5G services\n",
        "  - Enterprise customers reporting business-critical service disruptions\n",
        "  - Mobile data speeds degraded to 4G in surrounding areas\n",
        "\n",
        "  TECHNICAL DETAILS:\n",
        "  - Core Network Status: DOWN\n",
        "  - gNodeB Stations: 3/5 nodes failed\n",
        "  - Data Center: Primary facility shows hardware failures\n",
        "  - Root Cause: Equipment overheating during maintenance window\n",
        "\n",
        "  TIMELINE:\n",
        "  15:00 EST - Maintenance window begins\n",
        "  15:25 EST - First customer complaints received\n",
        "  15:30 EST - Network monitoring alerts triggered\n",
        "  15:45 EST - Service outage confirmed\n",
        "\n",
        "  REQUIRED RESPONSE:\n",
        "  - Network engineers with 5G expertise\n",
        "  - Hardware repair technicians\n",
        "  - Crisis management team\n",
        "  - Customer communications team\n",
        "\n",
        "  BUSINESS IMPACT:\n",
        "  - Revenue impact: $5,000/minute\n",
        "  - SLA breach: Yes (2-hour response requirement)\n",
        "  - Media attention: High (local news coverage)\n",
        "\n",
        "  NEXT STEPS:\n",
        "  1. Activate emergency response protocol\n",
        "  2. Dispatch on-site technicians\n",
        "  3. Prepare customer communications\n",
        "  4. Assess backup systems deployment\n",
        "\"\"\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yXuiT2eD0TyB"
      },
      "source": [
        "The incident report can be a text document such as a PDF, and if images and tables are included in the PDF then we advice leveraging voyage multimodal embedding models"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eanzEPe3wMdt"
      },
      "source": [
        "##### Testing the Incident Response Parser"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 25,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "OJq8UQPtGMgD",
        "outputId": "3494fa6b-9611-4f7f-86b7-502e0006627c"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "=== Processing Incident Report ===\n",
            "Event ID: CRISIS-20250505-001\n",
            "Type: CrisisType.NETWORK_OUTAGE\n",
            "Severity: SeverityLevel.CRITICAL\n",
            "Title: Critical 5G Network Failure Across Major US Cities\n",
            "Description: A complete 5G network outage affects the North America region, with service down in New York City, Boston, and Philadelphia due to equipment overheating during maintenance. Primary data center and majority of gNodeB stations have failed, causing major business and consumer disruptions.\n",
            "Affected Systems: 5G Network Service, Core Network, gNodeB stations, Primary Data Center\n",
            "Affected Regions: New York City metro area, Boston metropolitan region, Philadelphia and surrounding counties\n",
            "Customer Impact: Approximately 2 million customers unable to access 5G services; enterprise customers experience business-critical disruptions; mobile data speeds reduced to 4G in adjacent areas.\n",
            "Required Skills: 5G network engineering, Hardware repair, Crisis management, Customer communications\n",
            "\n"
          ]
        }
      ],
      "source": [
        "parser = IncidentReportParser()\n",
        "\n",
        "# Parse incident report\n",
        "print(\"=== Processing Incident Report ===\")\n",
        "\n",
        "# Parse incident report into CrisisEvent\n",
        "crisis_event = parser.parse_incident_report(incident_report)\n",
        "\n",
        "# Display results\n",
        "print(f\"Event ID: {crisis_event.event_id}\")\n",
        "print(f\"Type: {crisis_event.event_type}\")\n",
        "print(f\"Severity: {crisis_event.severity}\")\n",
        "print(f\"Title: {crisis_event.title}\")\n",
        "print(f\"Description: {crisis_event.description}\")\n",
        "print(f\"Affected Systems: {', '.join(crisis_event.affected_systems)}\")\n",
        "print(f\"Affected Regions: {', '.join(crisis_event.affected_regions)}\")\n",
        "print(f\"Customer Impact: {crisis_event.customer_impact}\")\n",
        "print(f\"Required Skills: {', '.join(crisis_event.required_skills)}\")\n",
        "print()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CquHe2xKwQfk"
      },
      "source": [
        "##### Issue Response Engine (Brings all processes together)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 26,
      "metadata": {
        "id": "zamxR7EcGX7j"
      },
      "outputs": [],
      "source": [
        "class IssueResponseEngine:\n",
        "    def __init__(self, db):\n",
        "        self.db = db\n",
        "\n",
        "    def crisis_detection_and_parsing(self, incident_report):\n",
        "        \"\"\"Integrates incident report parsing with emergency response\"\"\"\n",
        "\n",
        "        # Initialize a parser\n",
        "        parser = IncidentReportParser()\n",
        "\n",
        "        # Parse incident into CrisisEvent\n",
        "        crisis_event = parser.parse_incident_report(incident_report)\n",
        "\n",
        "        # Save to MongoDB\n",
        "        self.db.crisis_events.insert_one(crisis_event.model_dump())\n",
        "        print(\"Crisis event saved into records\")\n",
        "\n",
        "        # Initialize emergency response\n",
        "        crisis_data = crisis_event.model_dump()\n",
        "\n",
        "        print(\"Crisis Event Generated:\")\n",
        "        print(json.dumps(crisis_event.model_dump(), indent=2))\n",
        "\n",
        "        return crisis_data\n",
        "\n",
        "    def experts_identification(self, crisis_data, limit=5):\n",
        "        \"\"\"Identifies experts with required skills\"\"\"\n",
        "        # Use the skills data in the crisis_data to search the employees for the right skills\n",
        "        skills_to_search_against = crisis_data[\"required_skills\"]\n",
        "\n",
        "        # Create search query focusing on the skills array and bio fields\n",
        "        search_query = f\"Find experts with {', '.join(skills_to_search_against)} in their skills and experience\"\n",
        "\n",
        "        print(f\"Search Query: {search_query}\")\n",
        "\n",
        "        # Use hybrid search to retrieve employees with the right skills\n",
        "        results = hybrid_search(\n",
        "            collection=self.db[EMPLOYEES_COLLECTION],\n",
        "            text_key=\"bio\",\n",
        "            query=search_query,\n",
        "            top_k=limit,\n",
        "        )\n",
        "\n",
        "        return results\n",
        "\n",
        "    def knowledge_asset_gathering(self, crisis_data, limit=5):\n",
        "        \"\"\"Gather relevant knowledge assets for the team\"\"\"\n",
        "\n",
        "        # Look for knowledge assets that are semantically similar to the crisis event description\n",
        "        search_query = crisis_data[\"description\"]\n",
        "\n",
        "        # Use semantic search to retrieve knowledge assets\n",
        "        results = semantic_search(\n",
        "            collection=self.db[KNOWLEDGE_ASSETS_COLLECTION],\n",
        "            text_key=\"content\",\n",
        "            query=search_query,\n",
        "            top_k=limit,\n",
        "        )\n",
        "\n",
        "        return results\n",
        "\n",
        "    def team_activation_and_brief(\n",
        "        self, crisis_data, experts_identified, knowledge_assets\n",
        "    ):\n",
        "        \"\"\"Create response plan and activate team\"\"\"\n",
        "\n",
        "        try:\n",
        "            # Prepare the prompt with crisis and team information\n",
        "            prompt = f\"\"\"\n",
        "      CRISIS EVENT BRIEFING\n",
        "\n",
        "      Crisis Details:\n",
        "      - Event ID: {crisis_data.get('event_id')}\n",
        "      - Type: {crisis_data.get('event_type')}\n",
        "      - Severity: {crisis_data.get('severity')}\n",
        "      - Title: {crisis_data.get('title')}\n",
        "      - Description: {crisis_data.get('description')}\n",
        "      - Affected Systems: {', '.join(crisis_data.get('affected_systems', []))}\n",
        "      - Affected Regions: {', '.join(crisis_data.get('affected_regions', []))}\n",
        "      - Customer Impact: {crisis_data.get('customer_impact')}\n",
        "\n",
        "      Response Team:\n",
        "      {self._format_team_members(experts_identified)}\n",
        "\n",
        "      Relevant Knowledge Assets:\n",
        "      {self._format_knowledge_assets(knowledge_assets)}\n",
        "\n",
        "      INSTRUCTIONS:\n",
        "      Create a detailed briefing for the emergency response team that includes:\n",
        "      1. Executive summary of the crisis\n",
        "      2. Team assignments with specific roles\n",
        "      3. Priority action items\n",
        "      4. Available resources and documentation\n",
        "      5. Expected timeline and milestones\n",
        "      6. Communication protocols\n",
        "      7. Success criteria for resolution\n",
        "\n",
        "      Keep the briefing concise but comprehensive.\n",
        "      \"\"\"\n",
        "\n",
        "            # Call GPT-4.1 to generate briefing\n",
        "            response = openai_client.responses.create(\n",
        "                model=\"gpt-4.1\",\n",
        "                input=prompt,\n",
        "            )\n",
        "\n",
        "            briefing_text = response.output_text\n",
        "\n",
        "            print(\"Briefing Generated:\")\n",
        "            print(briefing_text)\n",
        "\n",
        "            return briefing_text\n",
        "\n",
        "        except Exception as e:\n",
        "            print(f\"Error generating briefing: {e}\")\n",
        "\n",
        "    def _format_team_members(self, experts):\n",
        "        \"\"\"Format expert list for briefing\"\"\"\n",
        "        formatted = []\n",
        "        for expert in experts:\n",
        "            role = self._assign_crisis_role(expert)\n",
        "            formatted.append(\n",
        "                f\"- {expert.get('name')} ({expert.get('role')}) - Crisis Role: {role}\"\n",
        "            )\n",
        "        return \"\\n\".join(formatted)\n",
        "\n",
        "    def _format_knowledge_assets(self, assets):\n",
        "        \"\"\"Format knowledge assets for briefing\"\"\"\n",
        "        if not assets:\n",
        "            return \"No specific knowledge assets found.\"\n",
        "        formatted = []\n",
        "        for asset in assets[:5]:  # Limit to top 5\n",
        "            formatted.append(f\"- {asset.get('title')} (Type: {asset.get('type')})\")\n",
        "        return \"\\n\".join(formatted)\n",
        "\n",
        "    def _assign_crisis_role(self, expert):\n",
        "        \"\"\"Assign crisis-specific role based on expert skills\"\"\"\n",
        "        skills = expert.get(\"matching_skills\", [])\n",
        "        if any(\"5G\" in skill for skill in skills):\n",
        "            return \"Technical Lead\"\n",
        "        elif any(\"Security\" in skill for skill in skills):\n",
        "            return \"Security Specialist\"\n",
        "        elif any(\"Network\" in skill for skill in skills):\n",
        "            return \"Network Specialist\"\n",
        "        else:\n",
        "            return \"Technical Support\""
      ]
    },
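    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The engine above delegates retrieval to the `hybrid_search` and `semantic_search` helpers defined earlier in the notebook. As a rough illustration of how hybrid retrieval typically merges a vector-ranked list with a keyword-ranked list, here is a minimal reciprocal rank fusion (RRF) sketch in plain Python. The `vector_hits`/`text_hits` inputs and the `k=60` constant are illustrative assumptions, not the notebook's actual implementation:\n",
        "\n",
        "```python\n",
        "def reciprocal_rank_fusion(ranked_lists, k=60):\n",
        "    \"\"\"Merge ranked id lists by summing 1 / (k + rank) per id.\"\"\"\n",
        "    scores = {}\n",
        "    for ranked in ranked_lists:\n",
        "        for rank, doc_id in enumerate(ranked, start=1):\n",
        "            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)\n",
        "    # Highest fused score first\n",
        "    return sorted(scores, key=scores.get, reverse=True)\n",
        "\n",
        "# Illustrative ids from a vector search and a keyword search\n",
        "vector_hits = [\"emp_2\", \"emp_5\", \"emp_1\"]\n",
        "text_hits = [\"emp_5\", \"emp_3\", \"emp_2\"]\n",
        "print(reciprocal_rank_fusion([vector_hits, text_hits]))\n",
        "```\n",
        "\n",
        "Ids that rank high in both lists rise to the top, which is why hybrid search tends to surface experts whose bios match the query both semantically and lexically."
      ]
    },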
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "huQbfnUmwcuD"
      },
      "source": [
        "##### LangGraph State"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "FVTtGMipAvc3"
      },
      "outputs": [],
      "source": [
        "import operator\n",
        "from datetime import datetime\n",
        "from typing import Annotated, Any, Dict, List, Optional, TypedDict\n",
        "\n",
        "import langgraph.graph as lg\n",
        "from langchain.schema import BaseMessage\n",
        "from langchain_core.messages import AIMessage, HumanMessage\n",
        "from langgraph.checkpoint.mongodb import MongoDBSaver\n",
        "\n",
        "\n",
        "# Define state for the crisis response graph\n",
        "class EmergencyResponseState(TypedDict):\n",
        "    \"\"\"State for the emergency response workflow\"\"\"\n",
        "\n",
        "    incident_report: str  # Incident report text\n",
        "    crisis_event: Dict[str, Any]  # Crisis details\n",
        "    skill_requirements: Optional[List[str]]  # Required skills\n",
        "    available_experts: Optional[List[Dict[str, Any]]]  # Identified experts\n",
        "    selected_team: Optional[List[Dict[str, Any]]]  # Final team composition\n",
        "    relevant_knowledge: Optional[List[Dict[str, Any]]]  # Knowledge assets\n",
        "    response_plan: Optional[Dict[str, Any]]  # Final response plan\n",
        "    # messages: List[Union[HumanMessage, AIMessage]]  # Conversation history\n",
        "    messages: Annotated[List[BaseMessage], operator.add]\n",
        "    errors: Optional[List[str]]  # Any errors encountered"
      ]
    },
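    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The `messages` field is annotated with `operator.add`, which tells LangGraph to merge message updates by concatenation rather than replacement. A minimal sketch of that reducer semantics in plain Python (strings stand in for `BaseMessage` objects purely for illustration):\n",
        "\n",
        "```python\n",
        "import operator\n",
        "\n",
        "# Annotated[List[BaseMessage], operator.add] means: combine successive\n",
        "# updates to `messages` with `+` (list concatenation).\n",
        "reducer = operator.add\n",
        "\n",
        "previous = [\"incident report received\"]\n",
        "update = [\"crisis detected: network outage\"]\n",
        "\n",
        "# Each node returns only its new messages; the reducer appends them,\n",
        "# so earlier history is never overwritten.\n",
        "merged = reducer(previous, update)\n",
        "print(merged)\n",
        "```\n",
        "\n",
        "Fields without a reducer annotation (such as `crisis_event`) are simply overwritten by the latest node to set them."
      ]
    },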
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "I3b4SO4Rwlha"
      },
      "source": [
        "##### Workflow Definition"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ibC8zBtuAvgH"
      },
      "outputs": [],
      "source": [
        "class EmergencyResponseWorkflow:\n",
        "    \"\"\"Orchestrates the emergency response workflow using LangGraph\"\"\"\n",
        "\n",
        "    def __init__(self, client, db, collection_names):\n",
        "        self.client = client\n",
        "        self.db = db\n",
        "        self.employees_collection = db[collection_names[\"employees\"]]\n",
        "        self.projects_collection = db[collection_names[\"projects\"]]\n",
        "        self.knowledge_collection = db[collection_names[\"knowledge_assets\"]]\n",
        "\n",
        "        # Initialize IssueResponseEngine\n",
        "        self.issue_engine = IssueResponseEngine(db)\n",
        "\n",
        "        # Create MongoDB checkpointer to save state of the workflow at any given time\n",
        "        self.checkpoint_store = MongoDBSaver(client, DB_NAME, \"checkpoints\")\n",
        "\n",
        "        # Build the graph\n",
        "        self.workflow = self._build_graph()\n",
        "\n",
        "    def _detect_crisis_and_requirements(\n",
        "        self, state: EmergencyResponseState\n",
        "    ) -> EmergencyResponseState:\n",
        "        \"\"\"Detect crisis parameters and determine skill requirements\"\"\"\n",
        "        print(\"\\n\")\n",
        "        print(\"1. Beginning crisis detection and parsing of the provided information...\")\n",
        "        try:\n",
        "            incident_report = state[\"incident_report\"]\n",
        "\n",
        "            # Use IssueResponseEngine to parse incident and create crisis data\n",
        "            crisis_data = self.issue_engine.crisis_detection_and_parsing(\n",
        "                incident_report\n",
        "            )\n",
        "\n",
        "            state[\"crisis_event\"] = crisis_data\n",
        "\n",
        "            # Update skill requirements from crisis data\n",
        "            state[\"skill_requirements\"] = crisis_data.get(\"required_skills\", [])\n",
        "\n",
        "            state[\"messages\"].append(\n",
        "                AIMessage(\n",
        "                    content=f\"Crisis detected: {crisis_data.get('event_type')}. Severity: {crisis_data.get('severity')}. \"\n",
        "                    f\"Required skills identified: {', '.join(state['skill_requirements'])}\"\n",
        "                )\n",
        "            )\n",
        "\n",
        "        except Exception as e:\n",
        "            error_msg = f\"Error detecting crisis requirements: {e!s}\"\n",
        "            state[\"errors\"] = state.get(\"errors\", []) + [error_msg]\n",
        "            state[\"messages\"].append(AIMessage(content=error_msg))\n",
        "\n",
        "        return state\n",
        "\n",
        "    def _identify_experts(\n",
        "        self, state: EmergencyResponseState\n",
        "    ) -> EmergencyResponseState:\n",
        "        \"\"\"Identify available experts with required skills\"\"\"\n",
        "        print(\"\\n\")\n",
        "        print(\n",
        "            \"2. Identifying experts within records suitable to handle crisis event...\"\n",
        "        )\n",
        "        try:\n",
        "            # Use IssueResponseEngine to identify experts\n",
        "            crisis_data = state[\"crisis_event\"]\n",
        "            available_experts = self.issue_engine.experts_identification(crisis_data, 5)\n",
        "\n",
        "            # Format experts for consistency with existing state structure\n",
        "            formatted_experts = []\n",
        "            for expert in available_experts:\n",
        "                expert = expert.metadata\n",
        "                formatted_experts.append(\n",
        "                    {\n",
        "                        \"emp_id\": expert.get(\"emp_id\"),\n",
        "                        \"name\": expert.get(\"name\"),\n",
        "                        \"role\": expert.get(\"role\"),\n",
        "                        \"department\": expert.get(\"department\"),\n",
        "                        \"bio\": expert.get(\"bio\"),\n",
        "                        \"skills\": expert.get(\"skills\", []),\n",
        "                        \"current_projects\": expert.get(\"current_projects\", []),\n",
        "                    }\n",
        "                )\n",
        "\n",
        "            print(\"Below are the experts identified ⬇️\")\n",
        "            print(formatted_experts)\n",
        "\n",
        "            # The available experts are the selected team\n",
        "            # TODO: Create a process that organizes the team based on the available experts\n",
        "            state[\"selected_team\"] = formatted_experts\n",
        "            state[\"messages\"].append(\n",
        "                AIMessage(\n",
        "                    content=f\"Identified {len(formatted_experts)} available experts with required skills.\"\n",
        "                )\n",
        "            )\n",
        "\n",
        "        except Exception as e:\n",
        "            print(e)\n",
        "            error_msg = f\"Error identifying experts: {e!s}\"\n",
        "            # Guard against an uninitialized error list before appending\n",
        "            if state.get(\"errors\") is None:\n",
        "                state[\"errors\"] = [error_msg]\n",
        "            else:\n",
        "                state[\"errors\"] = state.get(\"errors\", []) + [error_msg]\n",
        "            state[\"messages\"].append(AIMessage(content=error_msg))\n",
        "\n",
        "        return state\n",
        "\n",
        "    def _gather_knowledge_assets(\n",
        "        self, state: EmergencyResponseState\n",
        "    ) -> EmergencyResponseState:\n",
        "        \"\"\"Gather relevant knowledge assets for the team\"\"\"\n",
        "        print(\"\\n\")\n",
        "        print(\"3. Gathering knowledge assets to prep team on...\")\n",
        "\n",
        "        try:\n",
        "            # Use IssueResponseEngine to gather knowledge assets\n",
        "            crisis_data = state[\"crisis_event\"]\n",
        "            knowledge_assets = self.issue_engine.knowledge_asset_gathering(crisis_data)\n",
        "            print(\"Below are the knowledge assets gathered ⬇️\")\n",
        "\n",
        "            # Format knowledge assets for consistency\n",
        "            formatted_assets = []\n",
        "            for asset in knowledge_assets:\n",
        "                # Handle tuple format (Document, score)\n",
        "                if isinstance(asset, tuple):\n",
        "                    doc = asset[0]\n",
        "                    if hasattr(doc, \"metadata\"):\n",
        "                        asset_data = doc.metadata\n",
        "                    else:\n",
        "                        asset_data = doc\n",
        "                elif hasattr(asset, \"metadata\"):\n",
        "                    asset_data = asset.metadata\n",
        "                else:\n",
        "                    asset_data = asset\n",
        "\n",
        "                formatted_assets.append(\n",
        "                    {\n",
        "                        \"asset_id\": asset_data.get(\"asset_id\", \"unknown\"),\n",
        "                        \"title\": asset_data.get(\"title\", \"Untitled Asset\"),\n",
        "                        \"type\": asset_data.get(\"type\", \"documentation\"),\n",
        "                        \"author\": asset_data.get(\"author\", \"Unknown\"),\n",
        "                        \"content\": asset_data.get(\"content\", \"\"),\n",
        "                        \"creation_date\": asset_data.get(\"creation_date\", \"\"),\n",
        "                    }\n",
        "                )\n",
        "\n",
        "            print(formatted_assets)\n",
        "\n",
        "            state[\"relevant_knowledge\"] = formatted_assets\n",
        "            state[\"messages\"].append(\n",
        "                AIMessage(\n",
        "                    content=f\"Gathered {len(formatted_assets)} relevant knowledge assets for the team.\"\n",
        "                )\n",
        "            )\n",
        "\n",
        "        except Exception as e:\n",
        "            print(\"Error gathering knowledge assets ❌\")\n",
        "            print(e)\n",
        "            error_msg = f\"Error gathering knowledge assets: {e!s}\"\n",
        "            state[\"errors\"] = state.get(\"errors\", []) + [error_msg]\n",
        "            state[\"messages\"].append(AIMessage(content=error_msg))\n",
        "\n",
        "        return state\n",
        "\n",
        "    def _estimate_resolution_time(self, crisis_event: Dict[str, Any]) -> str:\n",
        "        \"\"\"Estimate resolution time based on crisis severity\"\"\"\n",
        "        severity = crisis_event.get(\"severity\", \"low\")\n",
        "        if severity == \"critical\":\n",
        "            return \"1-2 hours\"\n",
        "        elif severity == \"high\":\n",
        "            return \"4-8 hours\"\n",
        "        else:\n",
        "            return \"24-48 hours\"\n",
        "\n",
        "    def _create_activation_summary(self, response_plan: Dict[str, Any]) -> str:\n",
        "        \"\"\"Create summary for team activation\"\"\"\n",
        "        team_size = len(response_plan[\"team_members\"])\n",
        "        team_lead = (\n",
        "            response_plan[\"team_lead\"][\"name\"] if response_plan[\"team_lead\"] else \"None\"\n",
        "        )\n",
        "        crisis_type = response_plan[\"crisis_details\"].get(\"event_type\", \"Unknown\")\n",
        "        resolution_time = response_plan[\"expected_resolution_time\"]\n",
        "\n",
        "        return (\n",
        "            f\"Crisis {response_plan['crisis_id']}: {crisis_type} response team activated. \"\n",
        "            f\"Team of {team_size} led by {team_lead}. Expected resolution: {resolution_time}\"\n",
        "        )\n",
        "\n",
        "    def _activate_team_and_create_plan(\n",
        "        self, state: EmergencyResponseState\n",
        "    ) -> EmergencyResponseState:\n",
        "        \"\"\"Create response plan and activate team\"\"\"\n",
        "        print(\"4. Activating team and creating a response plan...\")\n",
        "\n",
        "        try:\n",
        "            # TODO: Send an email to all employees selected and include response plan\n",
        "            selected_team = state[\"selected_team\"]\n",
        "            crisis_event = state[\"crisis_event\"]\n",
        "            relevant_knowledge = state[\"relevant_knowledge\"]\n",
        "\n",
        "            # Use IssueResponseEngine to create team briefing\n",
        "            briefing_text = self.issue_engine.team_activation_and_brief(\n",
        "                crisis_event, selected_team, relevant_knowledge\n",
        "            )\n",
        "\n",
        "            # Create response plan\n",
        "            response_plan = {\n",
        "                \"crisis_id\": f\"CRISIS-{datetime.now().strftime('%Y%m%d-%H%M%S')}\",\n",
        "                \"team_lead\": selected_team[0] if selected_team else None,\n",
        "                \"team_members\": selected_team,\n",
        "                \"crisis_details\": crisis_event,\n",
        "                \"briefing\": briefing_text,\n",
        "                \"action_items\": self._generate_action_items(\n",
        "                    crisis_event, selected_team\n",
        "                ),\n",
        "                \"knowledge_resources\": relevant_knowledge,\n",
        "                \"status\": \"active\",\n",
        "                \"created_at\": datetime.now().isoformat(),\n",
        "                \"expected_resolution_time\": self._estimate_resolution_time(\n",
        "                    crisis_event\n",
        "                ),\n",
        "            }\n",
        "\n",
        "            # Create summary for team activation\n",
        "            activation_summary = self._create_activation_summary(response_plan)\n",
        "\n",
        "            state[\"response_plan\"] = response_plan\n",
        "            state[\"messages\"].append(\n",
        "                AIMessage(\n",
        "                    content=f\"Emergency response team activated. Plan created: {activation_summary}\"\n",
        "                )\n",
        "            )\n",
        "\n",
        "        except Exception as e:\n",
        "            error_msg = f\"Error activating team: {e!s}\"\n",
        "            state[\"errors\"] = state.get(\"errors\", []) + [error_msg]\n",
        "            state[\"messages\"].append(AIMessage(content=error_msg))\n",
        "\n",
        "        return state\n",
        "\n",
        "    def _should_continue(self, state: EmergencyResponseState) -> str:\n",
        "        \"\"\"Determine if workflow should continue or end\"\"\"\n",
        "        if state.get(\"errors\") and len(state[\"errors\"]) < 3:\n",
        "            # Retry if we have minor errors\n",
        "            print(\"There were minor errors so retrying...\")\n",
        "            print(state[\"errors\"])\n",
        "            return \"retry\"\n",
        "        elif not state.get(\"selected_team\"):\n",
        "            # End if no team could be formed\n",
        "            return \"end_failure\"\n",
        "        else:\n",
        "            # Continue to completion\n",
        "            return \"end_success\"\n",
        "\n",
        "    def _build_graph(self):\n",
        "        \"\"\"Build the LangGraph workflow\"\"\"\n",
        "        # Define the graph\n",
        "        builder = lg.StateGraph(EmergencyResponseState)\n",
        "\n",
        "        # Add nodes\n",
        "        builder.add_node(\"detect_crisis\", self._detect_crisis_and_requirements)\n",
        "        builder.add_node(\"identify_experts\", self._identify_experts)\n",
        "        builder.add_node(\"gather_knowledge\", self._gather_knowledge_assets)\n",
        "        builder.add_node(\"activate_team\", self._activate_team_and_create_plan)\n",
        "\n",
        "        # Define edges\n",
        "        builder.add_edge(\"detect_crisis\", \"identify_experts\")\n",
        "        builder.add_edge(\"identify_experts\", \"gather_knowledge\")\n",
        "        builder.add_edge(\"gather_knowledge\", \"activate_team\")\n",
        "\n",
        "        # Add conditional edge for completion\n",
        "        builder.add_conditional_edges(\n",
        "            \"activate_team\",\n",
        "            self._should_continue,\n",
        "            {\"retry\": \"detect_crisis\", \"end_success\": lg.END, \"end_failure\": lg.END},\n",
        "        )\n",
        "\n",
        "        # Set entry point\n",
        "        builder.set_entry_point(\"detect_crisis\")\n",
        "\n",
        "        # Compile the graph with MongoDB checkpointing\n",
        "        return builder.compile(checkpointer=self.checkpoint_store)\n",
        "\n",
        "    def respond_to_crisis(self, incident_report) -> Dict[str, Any]:\n",
        "        \"\"\"Respond to an emergency crisis event\"\"\"\n",
        "        # Initialize state\n",
        "        initial_state = EmergencyResponseState(\n",
        "            incident_report=incident_report,\n",
        "            crisis_event={},\n",
        "            skill_requirements=None,\n",
        "            available_experts=None,\n",
        "            selected_team=None,\n",
        "            relevant_knowledge=None,\n",
        "            response_plan=None,\n",
        "            messages=[HumanMessage(content=incident_report)],\n",
        "            errors=[],\n",
        "        )\n",
        "\n",
        "        # Run the workflow\n",
        "        config = {\"configurable\": {\"thread_id\": 1}}\n",
        "        final_state = self.workflow.invoke(initial_state, config)\n",
        "\n",
        "        return final_state"
      ]
    },
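    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The conditional edge added in `_build_graph` routes on `_should_continue`. That decision logic can be exercised on its own; the sketch below mirrors the method as a standalone function for illustration (it is a restatement, not part of the workflow class):\n",
        "\n",
        "```python\n",
        "def route_after_activation(state):\n",
        "    \"\"\"Mirror of _should_continue: retry on a few errors, otherwise end.\"\"\"\n",
        "    errors = state.get(\"errors\") or []\n",
        "    if errors and len(errors) < 3:\n",
        "        return \"retry\"  # loop back to detect_crisis\n",
        "    if not state.get(\"selected_team\"):\n",
        "        return \"end_failure\"  # no team could be formed\n",
        "    return \"end_success\"\n",
        "\n",
        "print(route_after_activation({\"errors\": [\"transient\"], \"selected_team\": []}))\n",
        "print(route_after_activation({\"errors\": [], \"selected_team\": [{\"name\": \"Ada\"}]}))\n",
        "```\n",
        "\n",
        "Note that three or more accumulated errors skip the retry branch, which keeps the `retry -> detect_crisis` edge from looping indefinitely."
      ]
    },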
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0pUNuOSnw91Q"
      },
      "source": [
        "##### Workflow Execution"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 29,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 1000,
          "referenced_widgets": [
            "e67d2ed9f7304842added60be1d66a3e",
            "5843086730774128bb9e656db4a65e83",
            "1a942ca9422e478898ebcbb9de43eabd",
            "9f456078b9b34e5c9fe15a39126badf5",
            "e27c57fefa25464d9c8793f834e32908",
            "c08bbb35d71b40b7a9ce238313a91e36",
            "b9b2a9e1095e450dbb82d4052ec85c91",
            "7fd24343103d43b38d9358c32963108a",
            "74ac542ed5de4227b660ee80a7c9673e",
            "b0327ec8ad8a48adbc13f572da0426da",
            "9dba347cdc7240b7992f924f9500383c",
            "7d9ef38e785843b7af33e6b61f0ce725",
            "c1ab584e38b8471d9093414375e87b59",
            "840e1c25008e4fbcb48a39a52b905ae5",
            "a76c6fc5624c4a16b154ae28bf2836b6",
            "6bb5168734b4481da4a2aa25cf3e5539",
            "2b66fe785e63464b95a6094d8077a37a",
            "2ae5036a8d2c4598a3c6f1989e0f5494",
            "4ee74006f7a54af086fda3747c9416ae",
            "f24448dc76b5426a894a1fd3a9f3a8d8",
            "c29883fcb4e641b3950bce9552500e78",
            "d7e20546e14e478481116d39455fe0c4",
            "f556ad6f36534503bdda98e1da38fd7b",
            "49c2c3d4bbc94e9386e7255f6c0ed89c",
            "c70e0ec05b374e73a08f7746d6bceac6",
            "b934e2c7168447ce9cf1accf9c86ab70",
            "e1441a03c912498891c384f088a9540f",
            "d0e6db037f8f414cb805c88981cdc731",
            "534ce430982b410f929d5dc434f46688",
            "c8c6de07b28a42d1aa78dccd4617980c",
            "8bd3488abf424ef5821b1f775285cb98",
            "5f2b022ca0c540c294d906143ef51f77",
            "29347ee87c5e4f148bc2800f0ecdfbc5",
            "41584d6e4abb4d37a34860a6b31cffe1",
            "9ae343d11e6a4112b93a1becada22293",
            "1e85166a7916452bae86108ae64361d4",
            "e1f993f548604a19932ea3e92120aba6",
            "202946dab5af445291fa726a58ce66d1",
            "ca155adeaecb4164a911f33c6c2b7fa8",
            "dd7b275e29cf4112b8da21ac364c0ad5",
            "5572add8abee448289005ada46d6c0d2",
            "469c5fb526b34f789cc03aa02d30b69c",
            "ab58a73920d741b194ad26dfb17fdd3b",
            "6465d1842d774ec6801e5a5f1eb70a53"
          ]
        },
        "id": "sdgRQy7dAvig",
        "outputId": "a4bbc8dd-a11a-4bcd-bdc7-143f9c1e0097"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "=== Emergency Response System Activated ===\n",
            "\n",
            "\n",
            "1. Beginning crisis detection and parsing of the provided information...\n",
            "Crisis event saved into records\n",
            "Crisis Event Generated:\n",
            "{\n",
            "  \"event_id\": \"CRISIS-20250505-001\",\n",
            "  \"event_type\": \"Network Outage\",\n",
            "  \"severity\": \"critical\",\n",
            "  \"title\": \"Critical 5G Network Failure Across Major North American Cities\",\n",
            "  \"description\": \"A complete 5G network outage has affected the NYC, Boston, and Philadelphia regions. Core network is down due to equipment overheating during maintenance. Estimated 2 million customers impacted, including major enterprise clients. Mobile data speeds are degraded in surrounding areas. Immediate emergency technical and customer response required.\",\n",
            "  \"affected_systems\": [\n",
            "    \"5G Network Service\",\n",
            "    \"Core Network\",\n",
            "    \"gNodeB Stations\",\n",
            "    \"Primary Data Center\"\n",
            "  ],\n",
            "  \"affected_regions\": [\n",
            "    \"New York City metro area\",\n",
            "    \"Boston metropolitan region\",\n",
            "    \"Philadelphia and surrounding counties\"\n",
            "  ],\n",
            "  \"customer_impact\": \"Estimated 2 million customers without 5G access; business-critical disruptions reported; mobile data reduced to 4G in surrounding areas; significant revenue and SLA impacts.\",\n",
            "  \"required_skills\": [\n",
            "    \"5G network engineering\",\n",
            "    \"Hardware repair\",\n",
            "    \"Crisis management\",\n",
            "    \"Customer communications\"\n",
            "  ]\n",
            "}\n",
            "\n",
            "\n",
            "2. Identifying experts within records suitable to handle crisis event...\n",
            "Search Query: Find experts with 5G network engineering, Hardware repair, Crisis management, Customer communications in their skills and experience\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Below are the experts identified ⬇️\n",
            "[{'emp_id': 'employees-9', 'name': 'Aisha Patel', 'role': 'Network Engineer', 'department': 'Network Operations', 'bio': None, 'skills': ['LTE/5G Networking', 'Network Security', 'Cisco Routers & Switches', 'RF Planning', 'Troubleshooting', 'Fiber Optic Communications', 'Data Center Networking'], 'current_projects': []}, {'emp_id': 'employees-5', 'name': 'Sarah Kim', 'role': 'Network Engineer', 'department': 'Network Operations', 'bio': None, 'skills': ['Network Design', 'Telecommunications Infrastructure', 'Fiber Optic Networking', 'Routing & Switching', 'VoIP', 'Troubleshooting', 'Cisco Certified'], 'current_projects': []}, {'emp_id': 'employees-7', 'name': 'Sophia Kim', 'role': 'Network Engineer', 'department': 'Network Operations', 'bio': None, 'skills': ['IP routing', 'network design', 'fiber optics', 'troubleshooting', 'VoIP'], 'current_projects': []}, {'emp_id': 'employees-4', 'name': 'Ravi Sharma', 'role': 'Network Engineer', 'department': 'Network Operations', 'bio': None, 'skills': ['Network Design', 'Troubleshooting', 'Cisco Routers', 'Optical Fiber Communication', 'Telecommunications Protocols', 'Packet Switching'], 'current_projects': []}, {'emp_id': 'employees-2', 'name': 'Priya Raman', 'role': 'Network Engineer', 'department': 'Network Operations', 'bio': None, 'skills': ['Network Design', 'Routing & Switching', 'Telecommunications Infrastructure', 'LAN/WAN Optimization', 'Fiber Optics', 'VoIP', 'Troubleshooting'], 'current_projects': []}]\n",
            "\n",
            "\n",
            "3. Gathering knowledge assets to prep team on...\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Below are the knowledge assets gathered ⬇️\n",
            "[{'asset_id': 'knowledge_assets-2', 'title': 'Technical Procedures and Best Practices for Cloud Data Migration', 'type': 'documentation', 'author': 'employees-6', 'content': '', 'creation_date': '2024-06-20'}, {'asset_id': 'knowledge_assets-4', 'title': 'Secure API Integration Procedures and Best Practices', 'type': 'best_practice', 'author': 'employees-3', 'content': '', 'creation_date': '2024-06-16T10:00:00Z'}, {'asset_id': 'knowledge_assets-3', 'title': 'Best Practices for API Deployment and Version Management', 'type': 'best_practice', 'author': 'employees-6', 'content': '', 'creation_date': '2024-06-18'}, {'asset_id': 'knowledge_assets-5', 'title': 'Automating Deployment Pipelines: Technical Procedures & Best Practices', 'type': 'documentation', 'author': 'employees-3', 'content': '', 'creation_date': '2024-05-12T09:30:00Z'}, {'asset_id': 'knowledge_assets-3', 'title': 'Standard Procedures and Best Practices for Secure API Development', 'type': 'best_practice', 'author': 'employees-4', 'content': '', 'creation_date': '2024-06-15T09:25:00Z'}]\n",
            "4. Activating team and creating a response plan...\n",
            "Briefing Generated:\n",
            "---\n",
            "## CRISIS RESPONSE TEAM BRIEFING  \n",
            "**Event ID:** CRISIS-20250505-001  \n",
            "**Event:** Critical 5G Network Failure Across Major North American Cities  \n",
            "**Severity:** CRITICAL\n",
            "\n",
            "---\n",
            "\n",
            "### 1. Executive Summary\n",
            "\n",
            "A total outage of the 5G network has simultaneously impacted New York City, Boston, and Philadelphia metropolitan areas due to equipment overheating during scheduled maintenance at the core network. This outage affects approximately 2 million customers, severely degrading service for enterprise clients and reducing mobile data speeds to 4G in adjacent regions. Immediate resolution is vital to reduce further SLA, financial, and customer trust damages.\n",
            "\n",
            "### 2. Team Assignments & Roles\n",
            "\n",
            "| Name           | Role               | Assignment                                 |\n",
            "|----------------|--------------------|--------------------------------------------|\n",
            "| **Aisha Patel**   | Technical Support  | Lead technical triage & data center escalation |\n",
            "| **Sarah Kim**    | Technical Support  | Fault isolation: gNodeB & radio access analysis |\n",
            "| **Sophia Kim**   | Technical Support  | Core network diagnostics & configuration review |\n",
            "| **Ravi Sharma**  | Technical Support  | Overheating investigation and equipment coordination |\n",
            "| **Priya Raman**  | Technical Support  | Customer/enterprise impact mapping & technical comms |\n",
            "\n",
            "**All engineers**: On rotating shifts for continuous coverage, status escalation responsibility as per situation criticality.\n",
            "\n",
            "### 3. Priority Action Items\n",
            "\n",
            "1. **Immediate Core Network Restoration**\n",
            "   - Diagnose overheating incident, restore failed components, and reroute traffic if possible.\n",
            "2. **Outage Containment**\n",
            "   - Isolate affected zones, stabilize gNodeB stations, and prevent cascading failures.\n",
            "3. **Service Continuity**\n",
            "   - Deploy fallback/temporary solutions to partially restore service or escalate to 4G fallback.\n",
            "4. **Customer Impact Minimization**\n",
            "   - Map affected business clients; prioritize mission-critical sectors.\n",
            "5. **Root Cause Analysis**\n",
            "   - Collect forensic data for post-mortem and long-term remediation.\n",
            "6. **Ongoing Status Updates**\n",
            "   - Maintain near real-time crisis dashboard for executives and customer service.\n",
            "\n",
            "### 4. Available Resources & Documentation\n",
            "\n",
            "- **Technical Procedures and Best Practices for Cloud Data Migration**\n",
            "- **Secure API Integration Procedures and Best Practices**\n",
            "- **Best Practices for API Deployment & Version Management**\n",
            "- **Automating Deployment Pipelines: Technical Procedures & Best Practices**\n",
            "- **Standard Procedures and Best Practices for Secure API Development**\n",
            "\n",
            "(All resources are available on the shared crisis drive and may provide applicable guidance for rapid deployment or temporary reroute solutions.)\n",
            "\n",
            "### 5. Expected Timeline & Milestones\n",
            "\n",
            "- **0–1 hrs:** Situation triage, core isolation, and first technical update  \n",
            "- **1–3 hrs:** Action on preliminary fix and start of targeted restoration  \n",
            "- **3–6 hrs:** Progress update, phased service restoration (reprioritize if delays), post-outage impact assessment initiation  \n",
            "- **6+ hrs:** Full network restoration, incident review, and executive summary\n",
            "\n",
            "### 6. Communication Protocols\n",
            "\n",
            "- **Incident Command:** Led by Aisha Patel; status via secure team Slack #crisis-response and standby phone bridge\n",
            "- **Update Frequency:** Every 30 minutes or at major milestone completion\n",
            "- **Stakeholder Reports:** Every 1 hour to executive leadership and customer service liaisons\n",
            "- **Customer Messaging:** Drafted by Priya Raman in sync with PR; distributed via website, SMS, and enterprise client portals\n",
            "\n",
            "### 7. Success Criteria\n",
            "\n",
            "- **Restoration:** 5G network service fully restored to impacted metro areas\n",
            "- **Stabilization:** No lingering or recurrent outages detected for 24 hours\n",
            "- **Root Cause:** Documented and communicated, with preventive actions identified\n",
            "- **Customer Communication:** Timely, clear, and accurate updates delivered throughout\n",
            "- **SLA Compliance:** Post-crisis SLA review completed and breach instances mitigated\n",
            "\n",
            "---\n",
            "\n",
            "**ALL HANDS: Be vigilant, submit all findings through assigned channels, and prepare escalation summaries at each milestone.**\n",
            "There were minor errors so retrying...\n",
            "[\"Error activating team: 'EmergencyResponseWorkflow' object has no attribute '_generate_action_items'\"]\n",
            "\n",
            "\n",
            "1. Beginning crisis detecting and parsing provided information...\n",
            "Crisis event saved into records\n",
            "Crisis Event Generated:\n",
            "{\n",
            "  \"event_id\": \"CRISIS-20250505-001\",\n",
            "  \"event_type\": \"Network Outage\",\n",
            "  \"severity\": \"critical\",\n",
            "  \"title\": \"Critical 5G Network Failure in North America Impacting Millions\",\n",
            "  \"description\": \"A complete 5G network failure has occurred across major North American cities due to equipment overheating during a maintenance window, causing core network and multiple gNodeB node failures. Business-critical outages and severe degradation of mobile data speeds are being reported.\",\n",
            "  \"affected_systems\": [\n",
            "    \"5G Network Service\",\n",
            "    \"Core Network\",\n",
            "    \"gNodeB Stations\",\n",
            "    \"Primary Data Center\"\n",
            "  ],\n",
            "  \"affected_regions\": [\n",
            "    \"New York City metro area\",\n",
            "    \"Boston metropolitan region\",\n",
            "    \"Philadelphia and surrounding counties\"\n",
            "  ],\n",
            "  \"customer_impact\": \"Approximately 2 million customers cannot access 5G services. Enterprise clients face business-critical disruptions; mobile data speed reduced to 4G in surrounding areas.\",\n",
            "  \"required_skills\": [\n",
            "    \"Network engineering (5G expertise)\",\n",
            "    \"Hardware repair\",\n",
            "    \"Crisis management\",\n",
            "    \"Customer communications\"\n",
            "  ]\n",
            "}\n",
            "\n",
            "\n",
            "2. Identifying experts within records suitable to handle crisis event...\n",
            "Search Query: Find experts with Network engineering (5G expertise), Hardware repair, Crisis management, Customer communications in their skills and experience\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Below are the experts identified ⬇️\n",
            "[{'emp_id': 'employees-9', 'name': 'Aisha Patel', 'role': 'Network Engineer', 'department': 'Network Operations', 'bio': None, 'skills': ['LTE/5G Networking', 'Network Security', 'Cisco Routers & Switches', 'RF Planning', 'Troubleshooting', 'Fiber Optic Communications', 'Data Center Networking'], 'current_projects': []}, {'emp_id': 'employees-5', 'name': 'Sarah Kim', 'role': 'Network Engineer', 'department': 'Network Operations', 'bio': None, 'skills': ['Network Design', 'Telecommunications Infrastructure', 'Fiber Optic Networking', 'Routing & Switching', 'VoIP', 'Troubleshooting', 'Cisco Certified'], 'current_projects': []}, {'emp_id': 'employees-7', 'name': 'Sophia Kim', 'role': 'Network Engineer', 'department': 'Network Operations', 'bio': None, 'skills': ['IP routing', 'network design', 'fiber optics', 'troubleshooting', 'VoIP'], 'current_projects': []}, {'emp_id': 'employees-4', 'name': 'Ravi Sharma', 'role': 'Network Engineer', 'department': 'Network Operations', 'bio': None, 'skills': ['Network Design', 'Troubleshooting', 'Cisco Routers', 'Optical Fiber Communication', 'Telecommunications Protocols', 'Packet Switching'], 'current_projects': []}, {'emp_id': 'employees-0', 'name': 'Jordan Singh', 'role': 'Network Engineer', 'department': 'Network Operations', 'bio': None, 'skills': ['IP networking', 'routing and switching', 'fiber optics', 'network security', 'VoIP'], 'current_projects': []}]\n",
            "\n",
            "\n",
            "3. Gathering knowledge assets to prep team on...\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Below are the knowledge assets gathered ⬇️\n",
            "[{'asset_id': 'knowledge_assets-2', 'title': 'Technical Procedures and Best Practices for Cloud Data Migration', 'type': 'documentation', 'author': 'employees-6', 'content': '', 'creation_date': '2024-06-20'}, {'asset_id': 'knowledge_assets-4', 'title': 'Secure API Integration Procedures and Best Practices', 'type': 'best_practice', 'author': 'employees-3', 'content': '', 'creation_date': '2024-06-16T10:00:00Z'}, {'asset_id': 'knowledge_assets-3', 'title': 'Best Practices for API Deployment and Version Management', 'type': 'best_practice', 'author': 'employees-6', 'content': '', 'creation_date': '2024-06-18'}, {'asset_id': 'knowledge_assets-5', 'title': 'Automating Deployment Pipelines: Technical Procedures & Best Practices', 'type': 'documentation', 'author': 'employees-3', 'content': '', 'creation_date': '2024-05-12T09:30:00Z'}, {'asset_id': 'knowledge_assets-3', 'title': 'Standard Procedures and Best Practices for Secure API Development', 'type': 'best_practice', 'author': 'employees-4', 'content': '', 'creation_date': '2024-06-15T09:25:00Z'}]\n",
            "4. Activating team and creating a response plan...\n"
          ]
        },
        {
          "ename": "KeyboardInterrupt",
          "evalue": "",
          "output_type": "error",
          "traceback": [
            "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
            "\u001b[0;31mKeyboardInterrupt\u001b[0m                         Traceback (most recent call last)",
            "\u001b[0;32m/tmp/ipython-input-1171894299.py\u001b[0m in \u001b[0;36m<cell line: 0>\u001b[0;34m()\u001b[0m\n\u001b[1;32m     60\u001b[0m \u001b[0;31m# Execute the complete workflow starting from incident report\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     61\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"=== Emergency Response System Activated ===\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 62\u001b[0;31m \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0memergency_workflow\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrespond_to_crisis\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mincident_report\u001b[0m\u001b[0;34m)\u001b[0m  \u001b[0;31m# Empty dict to trigger detection & parsing\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     63\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     64\u001b[0m \u001b[0;31m# Print results\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/tmp/ipython-input-1663767310.py\u001b[0m in \u001b[0;36mrespond_to_crisis\u001b[0;34m(self, incident_report)\u001b[0m\n\u001b[1;32m    297\u001b[0m         \u001b[0;31m# Run the workflow\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    298\u001b[0m         \u001b[0mconfig\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m{\u001b[0m\u001b[0;34m\"configurable\"\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0;34m{\u001b[0m\u001b[0;34m\"thread_id\"\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m}\u001b[0m\u001b[0;34m}\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 299\u001b[0;31m         \u001b[0mfinal_state\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mworkflow\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minvoke\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minitial_state\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mconfig\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    300\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    301\u001b[0m         \u001b[0;32mreturn\u001b[0m \u001b[0mfinal_state\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.12/dist-packages/langgraph/pregel/main.py\u001b[0m in \u001b[0;36minvoke\u001b[0;34m(self, input, config, context, stream_mode, print_mode, output_keys, interrupt_before, interrupt_after, durability, **kwargs)\u001b[0m\n\u001b[1;32m   3092\u001b[0m         \u001b[0minterrupts\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mlist\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mInterrupt\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   3093\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 3094\u001b[0;31m         for chunk in self.stream(\n\u001b[0m\u001b[1;32m   3095\u001b[0m             \u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   3096\u001b[0m             \u001b[0mconfig\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.12/dist-packages/langgraph/pregel/main.py\u001b[0m in \u001b[0;36mstream\u001b[0;34m(self, input, config, context, stream_mode, print_mode, output_keys, interrupt_before, interrupt_after, durability, subgraphs, debug, **kwargs)\u001b[0m\n\u001b[1;32m   2677\u001b[0m                     \u001b[0;32mfor\u001b[0m \u001b[0mtask\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mloop\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmatch_cached_writes\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   2678\u001b[0m                         \u001b[0mloop\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0moutput_writes\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtask\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mid\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtask\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mwrites\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcached\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 2679\u001b[0;31m                     for _ in runner.tick(\n\u001b[0m\u001b[1;32m   2680\u001b[0m                         \u001b[0;34m[\u001b[0m\u001b[0mt\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mt\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mloop\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtasks\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mvalues\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0mt\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mwrites\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   2681\u001b[0m                         \u001b[0mtimeout\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mstep_timeout\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.12/dist-packages/langgraph/pregel/_runner.py\u001b[0m in \u001b[0;36mtick\u001b[0;34m(self, tasks, reraise, timeout, retry_policy, get_waiter, schedule_task)\u001b[0m\n\u001b[1;32m    165\u001b[0m             \u001b[0mt\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtasks\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    166\u001b[0m             \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 167\u001b[0;31m                 run_with_retry(\n\u001b[0m\u001b[1;32m    168\u001b[0m                     \u001b[0mt\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    169\u001b[0m                     \u001b[0mretry_policy\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.12/dist-packages/langgraph/pregel/_retry.py\u001b[0m in \u001b[0;36mrun_with_retry\u001b[0;34m(task, retry_policy, configurable)\u001b[0m\n\u001b[1;32m     40\u001b[0m             \u001b[0mtask\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mwrites\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mclear\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     41\u001b[0m             \u001b[0;31m# run the task\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 42\u001b[0;31m             \u001b[0;32mreturn\u001b[0m \u001b[0mtask\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mproc\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minvoke\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtask\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mconfig\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     43\u001b[0m         \u001b[0;32mexcept\u001b[0m \u001b[0mParentCommand\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mexc\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     44\u001b[0m             \u001b[0mns\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mstr\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mconfig\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mCONF\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mCONFIG_KEY_CHECKPOINT_NS\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.12/dist-packages/langgraph/_internal/_runnable.py\u001b[0m in \u001b[0;36minvoke\u001b[0;34m(self, input, config, **kwargs)\u001b[0m\n\u001b[1;32m    654\u001b[0m                     \u001b[0;31m# run in context\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    655\u001b[0m                     \u001b[0;32mwith\u001b[0m \u001b[0mset_config_context\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mconfig\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mrun\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mcontext\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 656\u001b[0;31m                         \u001b[0minput\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mcontext\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrun\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mstep\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minvoke\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mconfig\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    657\u001b[0m                 \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    658\u001b[0m                     \u001b[0minput\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mstep\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minvoke\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mconfig\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.12/dist-packages/langgraph/_internal/_runnable.py\u001b[0m in \u001b[0;36minvoke\u001b[0;34m(self, input, config, **kwargs)\u001b[0m\n\u001b[1;32m    398\u001b[0m                 \u001b[0mrun_manager\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mon_chain_end\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mret\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    399\u001b[0m         \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 400\u001b[0;31m             \u001b[0mret\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mfunc\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    401\u001b[0m         \u001b[0;32mif\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrecurse\u001b[0m \u001b[0;32mand\u001b[0m \u001b[0misinstance\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mret\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mRunnable\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    402\u001b[0m             \u001b[0;32mreturn\u001b[0m \u001b[0mret\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minvoke\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mconfig\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/tmp/ipython-input-1663767310.py\u001b[0m in \u001b[0;36m_activate_team_and_create_plan\u001b[0;34m(self, state)\u001b[0m\n\u001b[1;32m    174\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    175\u001b[0m             \u001b[0;31m# Use IssueResponseEngine to create team briefing\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 176\u001b[0;31m             briefing_text = self.issue_engine.team_activation_and_brief(\n\u001b[0m\u001b[1;32m    177\u001b[0m                 \u001b[0mcrisis_event\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mselected_team\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mrelevant_knowledge\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    178\u001b[0m             )\n",
            "\u001b[0;32m/tmp/ipython-input-1941078503.py\u001b[0m in \u001b[0;36mteam_activation_and_brief\u001b[0;34m(self, crisis_data, experts_identified, knowledge_assets)\u001b[0m\n\u001b[1;32m    100\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    101\u001b[0m       \u001b[0;31m# Call GPT-4.1 to generate briefing\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 102\u001b[0;31m       response = openai_client.responses.create(\n\u001b[0m\u001b[1;32m    103\u001b[0m         \u001b[0mmodel\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m\"gpt-4.1\"\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    104\u001b[0m         \u001b[0minput\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mprompt\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;31mKeyboardInterrupt\u001b[0m: "
          ]
        }
      ],
      "source": [
        "# Initialize the workflow\n",
        "collection_names = {\n",
        "    \"employees\": EMPLOYEES_COLLECTION,\n",
        "    \"projects\": PROJECTS_COLLECTION,\n",
        "    \"knowledge_assets\": KNOWLEDGE_ASSETS_COLLECTION,\n",
        "}\n",
        "\n",
        "incident_report = \"\"\"\n",
        "  NETWORK CRISIS REPORT - PRIORITY CRITICAL\n",
        "\n",
        "  Incident #: INC-20250505-3547\n",
        "  Service: 5G Network Service\n",
        "  Status: ACTIVE OUTAGE\n",
        "\n",
        "  SUMMARY:\n",
        "  Complete 5G network failure reported across North America region\n",
        "\n",
        "  AFFECTED AREAS:\n",
        "  - New York City metro area\n",
        "  - Boston metropolitan region\n",
        "  - Philadelphia and surrounding counties\n",
        "\n",
        "  IMPACT ASSESSMENT:\n",
        "  - Estimated 2 million customers unable to access 5G services\n",
        "  - Enterprise customers reporting business-critical service disruptions\n",
        "  - Mobile data speeds degraded to 4G in surrounding areas\n",
        "\n",
        "  TECHNICAL DETAILS:\n",
        "  - Core Network Status: DOWN\n",
        "  - gNodeB Stations: 3/5 nodes failed\n",
        "  - Data Center: Primary facility shows hardware failures\n",
        "  - Root Cause: Equipment overheating during maintenance window\n",
        "\n",
        "  TIMELINE:\n",
        "  15:00 EST - Maintenance window begins\n",
        "  15:25 EST - First customer complaints received\n",
        "  15:30 EST - Network monitoring alerts triggered\n",
        "  15:45 EST - Service outage confirmed\n",
        "\n",
        "  REQUIRED RESPONSE:\n",
        "  - Network engineers with 5G expertise\n",
        "  - Hardware repair technicians\n",
        "  - Crisis management team\n",
        "  - Customer communications team\n",
        "\n",
        "  BUSINESS IMPACT:\n",
        "  - Revenue impact: $5,000/minute\n",
        "  - SLA breach: Yes (2-hour response requirement)\n",
        "  - Media attention: High (local news coverage)\n",
        "\n",
        "  NEXT STEPS:\n",
        "  1. Activate emergency response protocol\n",
        "  2. Dispatch on-site technicians\n",
        "  3. Prepare customer communications\n",
        "  4. Assess backup systems deployment\n",
        "\"\"\"\n",
        "\n",
        "emergency_workflow = EmergencyResponseWorkflow(db_client, db, collection_names)\n",
        "\n",
        "# Execute the complete workflow starting from incident report\n",
        "print(\"=== Emergency Response System Activated ===\")\n",
        "result = emergency_workflow.respond_to_crisis(incident_report)\n",
        "\n",
        "# Print results\n",
        "if result.get(\"response_plan\"):\n",
        "    print(f\"\\nCrisis ID: {result['response_plan']['crisis_id']}\")\n",
        "    print(f\"Team Lead: {result['response_plan']['team_lead']['name']}\")\n",
        "    print(f\"Team Size: {len(result['response_plan']['team_members'])}\")\n",
        "    print(f\"Expected Resolution: {result['response_plan']['expected_resolution_time']}\")\n",
        "\n",
        "    print(\"\\nTeam Composition:\")\n",
        "    for member in result[\"response_plan\"][\"team_members\"]:\n",
        "        print(f\"- {member['name']} ({member['assigned_role']})\")\n",
        "\n",
        "    print(\"\\nAction Items:\")\n",
        "    for action in result[\"response_plan\"][\"action_items\"]:\n",
        "        print(f\"- {action['title']} (Priority: {action['priority']})\")\n",
        "\n",
        "    print(\"\\nBriefing:\")\n",
        "    print(result[\"response_plan\"][\"briefing\"])\n",
        "else:\n",
        "    print(\"Failed to create emergency response plan\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MQNnCJJKQRF0"
      },
      "source": [
        "#### **AUTONOMY SCENARIO: Critical 5G Network Issue Response (Agentic AI)**"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QvrsWJ6NTIkN"
      },
      "source": [
        "##### Overview"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Plkt_f8PNbC-"
      },
      "source": [
        "![image.png]()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SxaFSSEhxSK8"
      },
      "source": [
        "##### Define Tools"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 30,
      "metadata": {
        "id": "9U0JIOhHQGdT"
      },
      "outputs": [],
      "source": [
        "from langchain.agents import tool\n",
        "\n",
        "emergency_workflow = EmergencyResponseWorkflow(db_client, db, collection_names)\n",
        "issue_engine = IssueResponseEngine(db)\n",
        "\n",
        "\n",
        "@tool\n",
        "def detect_crisis(incident_report: str) -> Dict[str, Any]:\n",
        "    \"\"\"\n",
        "    Analyzes an incident report to detect crisis parameters and determine required skills.\n",
        "\n",
        "    This tool parses unstructured incident reports to extract critical information about\n",
        "    the crisis event, including its type, severity, affected systems, and the skills needed\n",
        "    for an effective response. It utilizes a specialized parsing engine to convert freeform\n",
        "    text into structured data that can be used for emergency response coordination.\n",
        "\n",
        "    Args:\n",
        "        incident_report (str): The full text of the incident report describing the emergency.\n",
        "\n",
        "    Returns:\n",
        "        Dict[str, Any]: A dictionary containing:\n",
        "            - event_id (str): Unique identifier for the crisis\n",
        "            - event_type (str): Type of crisis (e.g., Network Outage, Security Breach)\n",
        "            - severity (str): Severity level (critical, high, medium, low)\n",
        "            - title (str): Short descriptive title\n",
        "            - description (str): Detailed crisis description\n",
        "            - affected_systems (List[str]): Systems impacted by the crisis\n",
        "            - affected_regions (List[str]): Geographical regions affected\n",
        "            - customer_impact (str): Description of impact on customers\n",
        "            - required_skills (List[str]): Skills needed to address the crisis\n",
        "\n",
        "    Raises:\n",
        "        Exception: If there is an error parsing the incident report or extracting required information.\n",
        "\n",
        "    Example:\n",
        "        >>> incident_report = \"NETWORK CRISIS REPORT - PRIORITY CRITICAL\\\\n...\"\n",
        "        >>> crisis_data = detect_crisis(incident_report)\n",
        "        >>> print(f\"Crisis: {crisis_data['event_type']}, Severity: {crisis_data['severity']}\")\n",
        "        Crisis: Network Outage, Severity: critical\n",
        "    \"\"\"\n",
        "    try:\n",
        "        # Use IssueResponseEngine to parse incident and create crisis data\n",
        "        crisis_data = issue_engine.crisis_detection_and_parsing(incident_report)\n",
        "\n",
        "        # Log the detection\n",
        "        print(\"\\n1. Crisis detected and parsed:\")\n",
        "        print(f\"- Type: {crisis_data.get('event_type')}\")\n",
        "        print(f\"- Severity: {crisis_data.get('severity')}\")\n",
        "        print(f\"- Required skills: {', '.join(crisis_data.get('required_skills', []))}\")\n",
        "\n",
        "        return crisis_data\n",
        "\n",
        "    except Exception as e:\n",
        "        error_msg = f\"Error detecting crisis requirements: {e!s}\"\n",
        "        print(f\"⚠️ {error_msg}\")\n",
        "        raise Exception(error_msg)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 31,
      "metadata": {
        "id": "AGgrAJy8TJoS"
      },
      "outputs": [],
      "source": [
        "@tool\n",
        "def identify_experts(crisis_data: str) -> List[Dict[str, Any]]:\n",
        "    \"\"\"\n",
        "    Identifies and ranks available experts with skills relevant to the crisis.\n",
        "\n",
        "    This tool searches the employee database to find personnel with the specific skills\n",
        "    required to address the current crisis. It evaluates their expertise, availability,\n",
        "    and relevance to create a ranked list of suitable experts who can be assembled into\n",
        "    a response team.\n",
        "\n",
        "    Args:\n",
        "        crisis_data (str): JSON string containing structured data about the crisis event, including required skills.\n",
        "\n",
        "    Returns:\n",
        "        List[Dict[str, Any]]: A list of expert profiles containing:\n",
        "            - emp_id (str): Employee identifier\n",
        "            - name (str): Employee name\n",
        "            - role (str): Job role/title\n",
        "            - department (str): Department\n",
        "            - bio (str): Brief professional biography\n",
        "            - skills (List[str]): List of professional skills\n",
        "            - current_projects (List[str]): Currently assigned projects\n",
        "\n",
        "    Raises:\n",
        "        Exception: If there is an error searching for experts or processing expert data.\n",
        "\n",
        "    Example:\n",
        "        >>> crisis_data = {\"event_type\": \"Network Outage\", \"required_skills\": [\"5G network engineering\"]}\n",
        "        >>> experts = identify_experts(json.dumps(crisis_data))\n",
        "        >>> print(f\"Found {len(experts)} suitable experts\")\n",
        "        Found 5 suitable experts\n",
        "    \"\"\"\n",
        "    try:\n",
        "        # Use IssueResponseEngine to identify experts\n",
        "        print(\"\\n2. Identifying experts with required skills:\")\n",
        "\n",
        "        # Parse the crisis data JSON string into a dictionary\n",
        "        crisis_data = json.loads(crisis_data)\n",
        "\n",
        "        available_experts = issue_engine.experts_identification(crisis_data)\n",
        "\n",
        "        # Format experts for consistency\n",
        "        formatted_experts = []\n",
        "        for expert in available_experts:\n",
        "            if hasattr(expert, \"metadata\"):\n",
        "                expert_data = expert.metadata\n",
        "            else:\n",
        "                expert_data = expert\n",
        "\n",
        "            formatted_experts.append(\n",
        "                {\n",
        "                    \"emp_id\": expert_data.get(\"emp_id\"),\n",
        "                    \"name\": expert_data.get(\"name\"),\n",
        "                    \"role\": expert_data.get(\"role\"),\n",
        "                    \"department\": expert_data.get(\"department\"),\n",
        "                    \"bio\": expert_data.get(\"bio\", \"\"),\n",
        "                    \"skills\": expert_data.get(\"skills\", []),\n",
        "                    \"current_projects\": expert_data.get(\"current_projects\", []),\n",
        "                }\n",
        "            )\n",
        "\n",
        "        print(f\"- Identified {len(formatted_experts)} experts with relevant skills\")\n",
        "        for i, expert in enumerate(formatted_experts, 1):\n",
        "            print(f\"  {i}. {expert['name']} ({expert['role']})\")\n",
        "\n",
        "        return formatted_experts\n",
        "\n",
        "    except Exception as e:\n",
        "        error_msg = f\"Error identifying experts: {e!s}\"\n",
        "        print(f\"⚠️ {error_msg}\")\n",
        "        raise Exception(error_msg)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 32,
      "metadata": {
        "id": "1HVZkayrTPCi"
      },
      "outputs": [],
      "source": [
        "@tool\n",
        "def gather_knowledge(crisis_data: str) -> List[Dict[str, Any]]:\n",
        "    \"\"\"\n",
        "    Retrieves relevant knowledge assets to support the emergency response team.\n",
        "\n",
        "    This tool searches the knowledge base for documentation, procedures, and historical\n",
        "    information relevant to the current crisis. It identifies technical guides, best\n",
        "    practices, and previous incident reports that can assist the response team in\n",
        "    addressing the crisis effectively.\n",
        "\n",
        "    Args:\n",
        "        crisis_data (str): JSON string containing structured data about the crisis event.\n",
        "\n",
        "    Returns:\n",
        "        List[Dict[str, Any]]: A list of knowledge assets containing:\n",
        "            - asset_id (str): Unique identifier for the asset\n",
        "            - title (str): Title of the knowledge asset\n",
        "            - type (str): Type of asset (e.g., documentation, best_practice, procedure)\n",
        "            - author (str): Creator of the asset\n",
        "            - content (str): Content of the knowledge asset\n",
        "            - creation_date (str): When the asset was created\n",
        "\n",
        "    Raises:\n",
        "        Exception: If there is an error retrieving or processing knowledge assets.\n",
        "\n",
        "    Example:\n",
        "        >>> crisis_data = {\"event_type\": \"Network Outage\", \"affected_systems\": [\"5G Network\"]}\n",
        "        >>> assets = gather_knowledge(crisis_data, issue_engine)\n",
        "        >>> print(f\"Found {len(assets)} relevant knowledge assets\")\n",
        "        Found 5 relevant knowledge assets\n",
        "    \"\"\"\n",
        "    try:\n",
        "        # Use IssueResponseEngine to gather knowledge assets\n",
        "        print(\"\\n3. Gathering relevant knowledge assets:\")\n",
        "\n",
        "        crisis_data = json.loads(crisis_data)\n",
        "\n",
        "        knowledge_assets = issue_engine.knowledge_asset_gathering(crisis_data)\n",
        "\n",
        "        # Format knowledge assets for consistency\n",
        "        formatted_assets = []\n",
        "        for asset in knowledge_assets:\n",
        "            # Handle tuple format (Document, score)\n",
        "            if isinstance(asset, tuple):\n",
        "                doc = asset[0]\n",
        "                score = asset[1] if len(asset) > 1 else 1.0\n",
        "                if hasattr(doc, \"metadata\"):\n",
        "                    asset_data = doc.metadata\n",
        "                else:\n",
        "                    asset_data = doc\n",
        "            elif hasattr(asset, \"metadata\"):\n",
        "                asset_data = asset.metadata\n",
        "                score = 1.0\n",
        "            else:\n",
        "                asset_data = asset\n",
        "                score = 1.0\n",
        "\n",
        "            formatted_assets.append(\n",
        "                {\n",
        "                    \"asset_id\": asset_data.get(\"asset_id\", \"unknown\"),\n",
        "                    \"title\": asset_data.get(\"title\", \"Untitled Asset\"),\n",
        "                    \"type\": asset_data.get(\"type\", \"documentation\"),\n",
        "                    \"author\": asset_data.get(\"author\", \"Unknown\"),\n",
        "                    \"content\": asset_data.get(\"content\", \"\"),\n",
        "                    \"creation_date\": asset_data.get(\"creation_date\", \"\"),\n",
        "                    \"relevance_score\": score,\n",
        "                }\n",
        "            )\n",
        "\n",
        "        print(f\"- Retrieved {len(formatted_assets)} knowledge assets\")\n",
        "        for i, asset in enumerate(formatted_assets, 1):\n",
        "            print(f\"  {i}. {asset['title']} ({asset['type']})\")\n",
        "\n",
        "        # Serialize so downstream tools can json.loads the result\n",
        "        return json.dumps(formatted_assets)\n",
        "\n",
        "    except Exception as e:\n",
        "        error_msg = f\"Error gathering knowledge assets: {e!s}\"\n",
        "        print(f\"⚠️ {error_msg}\")\n",
        "        raise Exception(error_msg)"
      ]
    },
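    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The tools above exchange structured data as JSON strings: the agent passes a JSON-encoded payload into each tool, and the tool decodes it with `json.loads` before working with it. A minimal sketch of that round-trip contract (the field names here are illustrative, not fixed by the engine):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import json\n",
        "\n",
        "# What the agent passes to a tool: a JSON string, not a dict\n",
        "crisis_data = {\"event_type\": \"Network Outage\", \"affected_systems\": [\"5G Network\"]}\n",
        "payload = json.dumps(crisis_data)\n",
        "\n",
        "# What the tool reconstructs on its side\n",
        "parsed = json.loads(payload)\n",
        "assert parsed == crisis_data\n",
        "print(type(payload).__name__, \"->\", type(parsed).__name__)  # str -> dict"
      ]
    },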
    {
      "cell_type": "code",
      "execution_count": 33,
      "metadata": {
        "id": "zjRC6RFeTTuO"
      },
      "outputs": [],
      "source": [
        "@tool\n",
        "def activate_team_and_generate_plan(\n",
        "    crisis_data: str,\n",
        "    selected_experts: str,\n",
        "    knowledge_assets: str,\n",
        ") -> str:\n",
        "    \"\"\"\n",
        "    Creates a comprehensive response plan and activates the emergency response team.\n",
        "\n",
        "    This tool assembles the identified experts into a cohesive team, generates a detailed\n",
        "    response plan with action items, creates a briefing document, and activates the team\n",
        "    for crisis response. It establishes communication protocols, timelines, and success\n",
        "    criteria for effective crisis management.\n",
        "\n",
        "    Args:\n",
        "        crisis_data (str): JSON string with structured data about the crisis event.\n",
        "        selected_experts (str): JSON string listing the experts selected for the response team.\n",
        "        knowledge_assets (str): JSON string listing relevant knowledge assets for the crisis.\n",
        "\n",
        "    Returns:\n",
        "        str: JSON string encoding a comprehensive response plan containing:\n",
        "            - crisis_id (str): Unique identifier for the response plan\n",
        "            - team_lead (Dict[str, Any]): Team lead information\n",
        "            - team_members (List[Dict[str, Any]]): Full team composition\n",
        "            - crisis_details (Dict[str, Any]): Crisis event details\n",
        "            - briefing (str): Detailed team briefing document\n",
        "            - knowledge_resources (List[Dict[str, Any]]): Relevant knowledge assets\n",
        "            - status (str): Current status of the response (e.g., \"active\")\n",
        "            - created_at (str): Timestamp of plan creation\n",
        "            - expected_resolution_time (str): Estimated time to resolution\n",
        "\n",
        "    Raises:\n",
        "        Exception: If there is an error creating the team, generating the briefing, or assembling the plan.\n",
        "        ValueError: If no experts are provided to form a team.\n",
        "\n",
        "    Example:\n",
        "        >>> crisis_data = '{\"event_type\": \"Network Outage\", \"severity\": \"critical\"}'\n",
        "        >>> experts = '[{\"name\": \"Jane Smith\", \"role\": \"Network Engineer\"}]'\n",
        "        >>> assets = '[{\"title\": \"Network Recovery Procedures\"}]'\n",
        "        >>> plan = json.loads(activate_team_and_generate_plan(crisis_data, experts, assets))\n",
        "        >>> print(f\"Response plan created: {plan['crisis_id']}\")\n",
        "        Response plan created: CRISIS-20250507-120000\n",
        "    \"\"\"\n",
        "    try:\n",
        "        print(\"\\n4. Activating team and creating response plan:\")\n",
        "\n",
        "        # Convert JSON string inputs back into Python objects before validating\n",
        "        crisis_data = json.loads(crisis_data)\n",
        "        selected_experts = json.loads(selected_experts)\n",
        "        knowledge_assets = json.loads(knowledge_assets)\n",
        "\n",
        "        if not selected_experts:\n",
        "            raise ValueError(\"No experts available to form a team\")\n",
        "\n",
        "        # Use IssueResponseEngine to create team briefing\n",
        "        briefing_text = issue_engine.team_activation_and_brief(\n",
        "            crisis_data, selected_experts, knowledge_assets\n",
        "        )\n",
        "\n",
        "        # Create response plan\n",
        "        response_plan = {\n",
        "            \"crisis_id\": f\"CRISIS-{datetime.now().strftime('%Y%m%d-%H%M%S')}\",\n",
        "            \"team_lead\": selected_experts[0] if selected_experts else None,\n",
        "            \"team_members\": selected_experts,\n",
        "            \"crisis_details\": crisis_data,\n",
        "            \"briefing\": briefing_text,\n",
        "            \"knowledge_resources\": knowledge_assets,\n",
        "            \"status\": \"active\",\n",
        "            \"created_at\": datetime.now().isoformat(),\n",
        "            \"expected_resolution_time\": _estimate_resolution_time(crisis_data),\n",
        "        }\n",
        "\n",
        "        # Generate activation summary\n",
        "        summary = _create_activation_summary(response_plan)\n",
        "        print(f\"- {summary}\")\n",
        "        print(f\"- Team size: {len(selected_experts)} experts\")\n",
        "        print(\n",
        "            f\"- Expected resolution time: {response_plan['expected_resolution_time']}\"\n",
        "        )\n",
        "        print(\"- Response plan status: ACTIVE\")\n",
        "\n",
        "        # Serialize so the result is returned to the agent as a JSON string\n",
        "        return json.dumps(response_plan)\n",
        "\n",
        "    except Exception as e:\n",
        "        error_msg = f\"Error activating team: {e!s}\"\n",
        "        print(f\"⚠️ {error_msg}\")\n",
        "        raise Exception(error_msg)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "b-FFyoiBTi87"
      },
      "source": [
        "##### Aggregate Tools"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 34,
      "metadata": {
        "id": "XAkPEPB4Tn40"
      },
      "outputs": [],
      "source": [
        "toolbox = [\n",
        "    detect_crisis,\n",
        "    identify_experts,\n",
        "    gather_knowledge,\n",
        "    activate_team_and_generate_plan,\n",
        "]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hmLCa_hBWabP"
      },
      "source": [
        "##### LLM Definition"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 35,
      "metadata": {
        "id": "4_FZck2bWd4U"
      },
      "outputs": [],
      "source": [
        "from langchain.chat_models import init_chat_model\n",
        "\n",
        "llm = init_chat_model(\"openai:gpt-4.1\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pwYhl2FRio87"
      },
      "source": [
        "##### Agent Definition"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 36,
      "metadata": {
        "id": "vC8GsYTyixwl"
      },
      "outputs": [],
      "source": [
        "emergency_resposne_agent = llm.bind_tools(toolbox)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Pz1KEjxnWzLg"
      },
      "source": [
        "##### Node Definition"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 37,
      "metadata": {
        "id": "VvflgAwAT1p8"
      },
      "outputs": [],
      "source": [
        "import functools\n",
        "\n",
        "from langchain_core.messages import AIMessage, ToolMessage\n",
        "\n",
        "\n",
        "def agent_node(state, agent, name):\n",
        "    # Extract just the messages from the state to pass to the agent\n",
        "    messages = state[\"messages\"]\n",
        "\n",
        "    # Ensure all message names are properly sanitized before sending to the agent\n",
        "    for msg in messages:\n",
        "        if hasattr(msg, \"name\"):\n",
        "            msg.name = sanitize_name(msg.name or \"anonymous\")\n",
        "\n",
        "    result = agent.invoke(messages)\n",
        "\n",
        "    if isinstance(result, ToolMessage):\n",
        "        # Sanitize tool message name\n",
        "        result.name = sanitize_name(result.name)\n",
        "    else:\n",
        "        # Use a fixed, compliant name for the AI\n",
        "        result = AIMessage(**result.model_dump(exclude={\"type\", \"name\"}), name=\"assistant\")\n",
        "\n",
        "    return {\n",
        "        \"messages\": [result],\n",
        "        \"sender\": sanitize_name(name),\n",
        "    }"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 38,
      "metadata": {
        "id": "Ui5zeHQKT_pg"
      },
      "outputs": [],
      "source": [
        "from langgraph.prebuilt import ToolNode\n",
        "\n",
        "chatbot_node = functools.partial(\n",
        "    agent_node, agent=emergency_resposne_agent, name=\"Emergency Response Agent\"\n",
        ")\n",
        "tool_node = ToolNode(toolbox, name=\"tools\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "I0CGXZn-xhyu"
      },
      "source": [
        "##### Autonomous Graph Agent Definition"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 39,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "YIR988csT7y4",
        "outputId": "c7f38e04-f3fd-4dd0-b41f-95970e0c6da3"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "<langgraph.graph.state.StateGraph at 0x78b771387710>"
            ]
          },
          "execution_count": 39,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "from langgraph.graph import END, StateGraph\n",
        "from langgraph.prebuilt import tools_condition\n",
        "\n",
        "workflow = StateGraph(EmergencyResponseState)\n",
        "\n",
        "workflow.add_node(\"chatbot\", chatbot_node)\n",
        "workflow.add_node(\"tools\", tool_node)\n",
        "\n",
        "workflow.set_entry_point(\"chatbot\")\n",
        "workflow.add_conditional_edges(\"chatbot\", tools_condition, {\"tools\": \"tools\", END: END})\n",
        "\n",
        "workflow.add_edge(\"tools\", \"chatbot\")"
      ]
    },
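    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "`tools_condition` is what closes the loop: it inspects the last AI message and routes to the `tools` node when a tool call is pending, otherwise to `END`. A simplified stand-in for that decision (the real predicate lives in `langgraph.prebuilt`):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Simplified stand-in for tools_condition (illustrative only):\n",
        "# keep routing to \"tools\" while the model requests tool calls, else stop.\n",
        "def route(tool_calls):\n",
        "    return \"tools\" if tool_calls else \"__end__\"\n",
        "\n",
        "\n",
        "print(route([{\"name\": \"detect_crisis\", \"args\": {}}]))  # tools\n",
        "print(route([]))  # __end__"
      ]
    },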
    {
      "cell_type": "code",
      "execution_count": 40,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Se5llvnrUMNh",
        "outputId": "c9b94703-2e22-4eaf-fcd6-8419e5aec1df"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "/tmp/ipython-input-3211739412.py:5: DeprecationWarning: AsyncMongoDBSaver is deprecated and will be removed in 0.3.0 release. Please use the async methods of MongoDBSaver instead.\n",
            "  mongodb_checkpointer = AsyncMongoDBSaver(async_mongodb_client)\n"
          ]
        }
      ],
      "source": [
        "from langgraph.checkpoint.mongodb import AsyncMongoDBSaver\n",
        "from pymongo import AsyncMongoClient\n",
        "\n",
        "async_mongodb_client = AsyncMongoClient(os.getenv(\"MONGODB_URI\"))\n",
        "mongodb_checkpointer = AsyncMongoDBSaver(async_mongodb_client)\n",
        "\n",
        "graph = workflow.compile(checkpointer=mongodb_checkpointer)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 41,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 266
        },
        "id": "W1GNAJg1UORL",
        "outputId": "0ce5a795-fb62-4568-e223-c48be6f53c8f"
      },
      "outputs": [
        {
          "data": {
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAANgAAAD5CAIAAADKsmwpAAAQAElEQVR4nOydB2AUxffHZ/daekJITwhJCAkQSuiiiEgRlSIoigQQKYLwpyhFUGkGRJD6Q6kqIBaKdARBUIoGKQGB0EJLJ4WEtEtydff/9ja5XJK7SIC7zGbnYzz2Zmb37va+NzPvzcwbKcuyiECobaSIQMAAIkQCFhAhErCACJGABUSIBCwgQiRgARFiZbKSNVdO5xXm6DUqvVaj12tM8iiWoiiWQYhmEUNxCTTivF9MaT5LsbShAJduTIRkriiFKjrKoDCFKidSUpbVURVPh1eiDXllzxFLlz4pReFAS6SUwkkaEGLXprsbEiAU8SPypMar/t6f/TBLrdcxtIRycJIq7LmvW6dmyguBcDidsRTNPXIJNCcI/thQgOL0ZlLAUAbusUE3Fe80TVMM3PxKiTKK0VY4nVNr5a+IqnQthb1Ep2e1KkZdzGh1LLxzvyD73qN9kHAgQkSZidqDG9OKi3QePnYRnVxbdHZGgkaPTuzMTriuLClmvBso3pjkj4SA2IW4fXnag7SShk2d+472RnWLnPv6X79LKVHqXxjo3bS9E8IbUQtx/cf35HJ6xGdBqO5y/UzRqd0ZAWGOffBuqcUrxG8/vecf6vjKiLpWEZrl21kJ7V9yb9XFFeGKSIUIdWFoC+fuUZ5INHwzK8ErwO61930RltBIfGycmxgY5igqFQLvLQh+kKKK2ZODsER0Qty/Ph28dCJpkSsxOjr4379yEZaITIgMSo4vGjE3CIkTCWrYxGnL/CSEH+IS4paFSZ4N7JGI6TvGpzBPe+uCEmGGuIRY8FA7aLIwHLzWwy/E/q99DxBmiEiI+9el2ztIbfyJZ86cuW/fPlRzevbsmZaWhqxA3zH+4OVGmCEiIWamqIOaOyDbcv36dVRz0tPTc3OtZVVIZTA2Tf+xLRvhhIiEqFXr23arj6xDTEzM2LFjO3fu3L9//7lz52Znc19zu3bt7t+/P3/+/K5du8JTpVK5bt264cOH88VWrFihUqn407t3775169b33nsPTjl58mTfvn0h8bXXXps6dSqyAm6eivR7xQgnxCLEu1eKKQq5eUmQFbh58+bkyZPbt2+/c+fOjz766NatW/PmzUMGdcLj7NmzT5w4AQfbtm3bvHnzsGHDVq5cCeWPHj26YcMG/goymWzPnj3h4eGrV69+7rnnoAAkQpu+bNkyZAU8AxQlSh3CCbHMR0xPKJHIKGQdLl26ZGdnN3LkSJqmfXx8mjVrdufOnarFhg4dCjVfcHAw//Ty5cunT5+eNGkSMkwfc3V1nTZtGrIJvkGKG+fwGlETixBVRQwttVb1HxkZCY3sBx980LFjxy5dujRo0ABa2KrFoNr7559/oOGGKlOn4yokd3d3Yy7IF9kKd08Fo2cQToiladYxDMtYy1Rs0qTJqlWrPD09v/rqqwEDBowfPx5qu6rFIBfaYiiwd+/e2NjYESNGmObK5XJkM6QSbhY4TohFiA6OUlZvraYZePbZZ6EveODAAegd5ufnQ+3I13lGWJbdtWvXoEGDQIjQfENKYWEhqiXys1SUFW/G4yAWIXr5y7UaazVGFy5cgN4eHECl2KdPHzB1QWTggjEto9VqS0pKvLy8+KcajebUqVOolshIVlMSvJQoFiE26eDMskhdYpUeOjTEYCzv3r0bnH9Xr14F6xgU6evrq1AoQHlnzpyBhhjsmKCgoP3796empubl5UVHR0PPsqCgoKioqOoFoSQ8glkNV0NWIDOpxM7RKg6Ex0ZEfkSZnD532CqToMAchgZ36dKlMBwyZswYR0dH6AtKpZwhCKb0+fPnoY6E6nDhwoVgXA8cOBCciB06dJgwYQI87dGjB/gaK10wICAAXIngdIRuJbICOelqbz87hBMimhi7fVlKcaFuxLxgJHq+/vDOiM9CHF0wqoZEVCO+NNSnKB
+7MVbbc2hTulRBYaVCJKoF9vW8ZQoHeu+a+/3H+5ktoNfrweFsNgtsC/ACUuZMzZCQkI0bNyLrsNmA2SwnJycYMzSbFRERASM0yAJJN4rbdHVHmCGuNSv376h2rU6duCLUYoEq3TUe+MrhizebBX1Boy381Ck0YDYLXOjQxTSbBb8ZsJbMZh3bmpVwVfne5yEIM0S3eGrHslQ9iwZPC0CiZPXUOwPGBfqF2tB5/miIbs3KW1MDCrLVZ3/DdOmGVdk4NzEg1AFDFSJxruIbu6hR7B85BQ/E1RRs/TJVbid9bZwfwhLxLrBfPe1uj0E+4e0dkQjYMj/Z3U/eZxS+wR5EHXJkzdS7fsH2/SdgWkk8Lb6bnQDjKENmBiKMEXsQpk3zEjUqpuPL9SO74huO47HZtzY99W5xaCvnXsOsZdc/LUhYOnT6QM7lU3kyBe0bbPfKO760DAmdO5eKYv94mHNf7VRPNnxmQ0E4i4kQS/lrT86Nc/lqlV4qo8Hv7eQqd3SWSmSMVmNyfwxhOOGGGf4xWHoMomnEGKb18Ad8VmkBVBY61lCyPA6sydNKpxtOMQ3yWX7Ml2FReTBZI1IZpddRJUpdUb6upEgPBVw9ZC+87hkQJphF3ESIlYnZn5Nyq1hVqNfquHuj15XfH+5m0eXBhuGQMdUcf2CI5lop0SAnXsWgNobm4sxysWfZssTy000OKh3zTxFCVb8xqRxJJLTCXuLiLg1r7RyOfTTEqhAh2pqJEydGRUV16tQJEUwgwdxtjU6n42eIEUwhd8TWECGahdwRW0OEaBZyR2yNVquVyYTvInraECHaGlIjmoXcEVtDhGgWckdsDRGiWcgdsTUgRNJHrAoRoq0hNaJZyB2xNUSIZiF3xNYQIZqF3BFbQ4RoFnJHbA04tIkQq0LuiE1hWZZhGIkErwBIOECEaFNIu2wJclNsChGiJchNsSlkxoMliBBtCqkRLUFuik0hQrQEuSk2hQjREuSm2BQiREuQm2JTiLFiCSJEm0JqREuQm2JrLMVyFTlEiDYFBvcyMjIQoQpEiDYF2uVKW6MReIgQbQoRoiWIEG0KEaIliBBtChGiJYgQbQoRoiWIEG0KEaIliBBtChGiJYgQbQoRoiWIEG0KCFGvJzukmkGMO0/VLjC4QrRYFSJEW0NaZ7MQIdoaIkSzkD6irSFCNAsRoq0hQjQLEaKtIUI0CxGirSFCNAvZecpGREZG0nSpacjtpEbT8NinT5/o6GhEIFazzWjZsiXidnXkAFciRVG+vr5Dhw5FBANEiDbinXfecXR0NE1p1apVWFgYIhggQrQRPXr0MJVd/fr1Bw8ejAhlECHajnfffdfFxYU/btKkSYsWLRChDCJE2/H888+Hh4fDgaur65AhQxDBBLFbzQ+SNFf/yS8u1jN6bl94WkIxeu6GSGSUXssdGDeWN+QixjBdQSLldgLn0+HYuLk4GCF6femxVErpjOlSWq/jSucX5MbFXXN0tG8d2db4HowvWlq47KXLU0wuy1NpZ3vDSyB9FaeQTC6p5ynv+Go9hD2iFuL385OLC3UyBa3XMLyqjLIzao5rM8qESIHLhaEQrwPEotKSLKOnKhXgjiUsW55eqhuKRno9w7lx2PK2yPSsShcsLSBlWV3FFBqxTIXPUv6GTZDbgYIRo2NDWji9NMwLYYx4hfjd7ARXT7tew31RXacwS39gY0rLzi6dersjXBGpEDfNS/b0dXjhbQ8kGrYvTWza1uW5/phqUYzGSnysSq3Si0qFQHhrtxvn8hGuiFKIF3PtHUT3wSO7umm0+LZ+YhRiiZLRiXCuPmfNsPkPMP3kYpx9o9OXOmtEB4vvpybTwAhYQIRIwAIxCpGmWUqUQ5uGkSKEJ2IUIsNQGHeWrAyudrM4a0REURQSH9xnJkLEBxhNJgskcEOUNSLF/y9C8G0IRFkjsvz/IgTfhoC4bwhYIFJjBYmzZUb4fnCR1oi0KK1mDlzbZpE6dp+W1fzmoF
e+/W41egLmzvto6rRxyCawUB/i+oWLUohQHdZqr/2z6JmHftuHnoA9e3d8sXguqiFU2fIGDBGjEBmGZWu1rxQffx09GU9+BdwgVvMjodfrf9n50/dbNsBxs6Yt3h0+tkWLSD5LKpXt3rN93fqVcrm8efPIj2dGu7q4QnpCwt39B3Ze/Pd8Rsb9oIYhr77a/7V+AyH9xe7t4HHJ0vlr1604sO8E4ipoKvbC2e3bt1y9drlRo7BJEz8Ka9yEv3hMzEl40aTkBFdXt9DQ8MkTZ3h7+3wwZczlyxch9/ffDx77/axEIkHCR4w1IjizqRp22jd889W+fb9Ef7Z01iefe3p6z/h4YnJyIp918tSxoiLl4kVfTZ825+rVS5s2reXTV69Zdv78P5MnzVj0xSpQ4f9WLT5zNgbSDx/iHqdPm82rEACd7d23IypqxMLPVzIMM2v2FL4LC+qcM2/6Sy/13rHt0NzZizIz01euWgTpK5dvaNq0OaQf/yO2piokDm2MMKzrrME3UlBYsOOXHz+YPLN9u2fgaceOzxUXF+U8zA4MDIKnDg6Ow4aO4kvGnD55Je5f/nj27C+gmK+PHxy3jmx3+PD+c+dPP9PxuarXz819+MGkmR4e3D7O7wx77+NPJkOFFxnZduOmtV2e7zbwjSjErcl3Gz9uyrTp42/GX28S3gw9FtAhYcnsG3xg2ZrZKslJCYgLEhLBP5VKpdGfLTHmtmgeaTx2dXHTqNXGl9m9e9vZczEpKUl8gq+vv9nrNwppzKsQaB7RCh7vp6eCEO/du/1Cl+7GYuFhnP5u3rz22ELE2VgRoxC52pCqgRKLiovg0U5hZzYXdGly5dKKFlrYmZ9M1mo1742eEBnZztnJeeLkUZau7+joZDx2cHCAx4KCfKVSqVarFSYvymcVG95M3UOMfUSuOmRr0DQ72NdYAbdu34Sqa9z7Hz7f+UVQIaQolYWWCpeoSozHyiIlPLq4uNrZcRJUmWTxv4f67nVzFawYhQjVVo0GVkJCGkO1d/nKRf4pWBJQ2x058ms1p+Tn58Gjp0dplI/ExHvwZ6lwcnKCSqXij3m/TIB/ILxieFjTa9euGIvxxyGNGqPHhzi0caKmw3uOjo49e7wKVvNvh/f/eyn2q6+XXLhwFuzWak4Bfw0oafuOH8DQAfsaTgFDJyMzHbIUCoWnp1ds7Bm4FB9M287Ofumy+VAyLy/3p583enl5876hAf0H/R1zYteurZAFhdesXd6mdfvGoVw8MX//BjduXAXfUA2HiFgyxIcRhomxNToDgRcGunrLln8+Zer7cXGXouct4U1mS4C379NPFly/Efda/26fzPpw9Kj/69dvIEhn+AjOlTgkaiRoaPacqdAoa3VaMFACA4PffOtlGDAEh+WC+cv5viY4aEaNHL/9lx/gIou/nNeyRes5s7/gr9+39+tQZvpH/1fj3dRwFaIYY9/8tCS5OF/39vQQJDK+n3d72Cchrp44OsBFO7Ii2nlgmCJGIcJgBF0XRsUeBzKyghGcQ1ucEUfIyApeiHUF26YFZAAAEABJREFUHxlZwQuGxXgRkVgR6ZoVWrS7KZA1K/gAHUSGhBzBDLG6b0S6eArfIT6xzr4RqcFCjBWsoES6nJQlfkSsAJOZEWUnkcK4IRBl04zICB92iDUIE0uUiBdiFKKdvUSvEqOxQktpiRzTUXYxOnbdPeVaNRIbOfc1NE05uSI8EaMQXxzkqdboLK8hqZtcOJrj5IZvAyjSoa6wVi4HVicg0XD7siortWTox4EIV8S7TW78haITv2R6BTo2CHOQSNiKG3NzxkylFaeVfR+VnrP85JYyc9xwZCzC7e5MlZekLAWaqNbRV3o10zImx8Y3bJovlaDCh0zSDWVxgWbMIqxnpIt64/D4C8XnDmWXFOvVKl0lCVSSGUVVWXhEsdx/5eXBFqcrn2UUoiH8WNlTljKJ70+VSYetdJZBvKbHZi9rfFcU903yjqnyEFMSGSWV0h
4+dq9Pwn1balELkWfFihXw+OGHHyKbMHny5EGDBj377LPICuzYsQM+jkwmc3R09PT0DAoKioyMbGoA4Y2ohRgXF9eiRYtr165FREQgWzF//vx+/fq1atUKWQdQ+e3bt2ma5kePKIpydXV1dnbet++JIjJaG5EaK/DzGz9+fEZGBhzbUoWIC84023oqBHr37s1HiaANgBALCgpSUlIQ3oixRszJyYGv586dOx06dEA2B9Rfr149hUKBrENJScmwYcMSExONKQ4ODqdOnUJ4I64aUa1Wjx07Fr4qd3f3WlEhMGPGDPgNIKthb2/fs2dP03BQCxYsQNgjLiEePHhwzJgxAQEBqPbw9vbm43pZj9dff93HxwcZVHjx4sW9e/euXbsW4Y0ohJifnz9t2jRk+Ibatm2LapUvv/wyODgYWROwl7t27QoHfn5cmNDly5fL5fKJEycijBGFEKOjo0eNGoXwIC0tjY+9ZFWmTp0KPdFffy0NWQYfPyoqqlu3bqmpqQhL6rKxAmbBiRMn3n77bYQT4LtZt24dX1fZGDCf33nnnXHjxvXq1QthRp2tEYuLi0ePHt2lSxeEGdB7A3sC1QYuLi7QXwQLmvfhY0UdrBHT09MLCwv9/f1hdAERzPHzzz//+eef3377LcKGulYj3rhxg7eLsVVhcnJyra+Ygf4i2C6dOnW6desWwoO6I8T79+8jg6fwwIED1vaPPAlDhw41BiquRWB0B9roefPmQWONMKCOCBHEN3fuXDiAMX6EN2CmgDMFYYBMJoM2+urVq59//jmqbQTfR8zLy3Nzc9u9ezf4CBHhsdizZ8/OnTu3bNlSi7upCVuI33zzDdy7kSNHIuGQlJTUsGFDhBnx8fHDhw9fv369VSdkVINQm2boC+bk5ECvX1gqhN7hkCFDEH6Eh4efOXNm1apVW7duRbWBIIW4YcMGsD2hRR47diwSFND+hITgO2X/u+++A5tv1qxZyOYIT4iHDh2Cx8aNGwtxe1hwZUNXDGEMjA127twZOtzgi0U2REh9RPgKYYQqPz/f1RXX1bn/hV6vB3977U7/eRSgwYEu46JFizp27IhsgmBqxBkzZvATj4WrQuDBgwfvv/8+wp7AwMDjx4/DL3/jxo3IJghAiDEx3E7bU6ZMeeutt5DAoSgKQ5PZEqtXrwajEBprZH2wFqJOp+vXrx8/q97b2xsJH/gU8O0i4TBu3Dj4Cl5++eWsrCxkTfDtI2ZkZMAIBPg7amXGlJXQaDTZ2dmC+0TwnqF3vnjx4hYtWiDrgGmNCENPcXFx7u7udUmFyLCyCYYiBTeI4OHhAc4K8DJmZmYi64CpEKE6BOsY1TnA0lqzZg2MjAsxZO2lS5es10EikR5qh5SUFJqm/f39kUC4ffv2nDlzrDfugmmNqDeA6i4NGjQYP358UVEREgggRBhEQFYDUyFC+/XTTz+hOs2+ffvi4+OVSiUSAnfv3g0NDUVWA1MhWi8QAla0adMmLS3t9OnTCHugRrSqEDENITpmzBgkDsLDwydNmtSyZUsnJyeEMXfu3BFjjVjn+4imgFukoKAA2xXHyBChAIZYvLy8kNXAVIgwyrlu3TokGsBdmpubW1tzAf8Ta1eHCOc+IiWyXcpg0OL+/fvg8Ub4YQMhEj8iXhQXF9+8eROMGIQTCxYsaN68ef/+/ZHVIH1EvHBwcLCzs1u4cCHCCagRrepERNgKcc+ePUuWLEGipFmzZk2aNEE4Id4+olwuF1sf0RR+aez+/fsRBsBopKenp7U9u5gKsV+/fjNmzEDiBswXPqxj7WLtwT0eTIXIMIwNgghiTnBw8LvvvotqGxu0ywhbIR49epQPISJywFZFZTvB1BaiFqJMJqNpkW69URWoF2txyZVtmmbiRxQGhYWFzs7O0F2RSrnpAS+//DL8Vg8cOICsDIzsdevWjV+/ZlVIH1EYgAqRYfV7UVFRnz59srOzYUjwyJEjyMrYwIPIg6kQz5w5Y5tVjMLif/
/73yuvvMJvmAWDgX/88QeyMtae/WUE3z6imP2Ilhg0aBCMAfLHcH/i4+N5UVoP21gqCFshtm/ffuXKlYhgQlRU1N27d01TMjMzT548iayJbSwVhK0QwYTSarWIYAL0mwMCAkxDT2k0GvBzIWti7RUCRjCdoR0XFwc1os0CrwiCbdu2Xbx48fz582fPnlUqlenp6d6ObdgC96O7b/n5+hj2F6+4G7kB2rClOTJUOQyquiN6haeUoTC/HznFogJlYZDHCynXqVSqgK1yQfNUvCBNU14BCg///w7VjJf7ZvTo0XCL4S3BI1iFXl5eUA1Ar+jYsWOIYMKm6HvF+XqKRnrOtcB1p7mv0aC1ymos2/2e3+LekMgY5IQYCtEsX5g1FEfGXjlbVp5/SlMUY9SJ8YJVinH/0Ig1WbEtlUE2JZNTLZ+r1/FVt2o+EV41YrNmzX788UejK5ufPQ8j7ohgwoaP73kG2g8c54uwiAn/31w7nR8X89A3SBHYzOJOR3j1EYcOHVo1dmBt7WeLJxs+ude0Xf0eUYJRIRDxrOug6cEHv0+P/d1i9A68hAhtce/evU1T6tevj2fQ6Vrht++zpDJJZA9BRohs1tHt0skcS7nYWc2DBw82rRQjIyPDwsIQwUBmssrD1w4Jkzbd3bVaVmMhngB2QnRxcenbty8/ouru7j5s2DBEKEOr1kntBDwXhGFQdqb51WE4fipjpdjcACKUodOwOo2A3auMnmUszCB4IqtZXYL+OfggPUFVotRpNaWvVGbWl7qjWINfii3zKFBlDiMw+w3OI0RL4KzSC1I0xTIsLaG6NvxCH6CXSqRrP7pXejrnGyh1HCDOQWVwdZV5tCjei0AbCjGcP4Ey8UpJpEgioSVSysGZbhDu2Km3OyJgxmMK8fCWzOSbRVoVQ8vgK6YlCqnCiQYHksEtxZYqjitY6rwqd0Ihg5bYUulwKeWep9I08FrJHGTGJMM4i/Hk0uvQnJDLfaD8S/Cj06zJxUs/pFQCaTqV7mGWNivtYeyxhw7O0rA2zs/3r48IeFBjIf62KTPhmhIqLWcvZ/9mgqxa9Bom9XrOlb/zrvyd2/ZF92eEU0HCj1bQU0G4N2/h/ddMiOtnJkBFE9jC18lLwNG6JHK6YSQ4yT2z7uZfOJ577UzBqPlBSAhwbY6Q5zGzbMUBRhMe1VhJiS/5esodZ0+nJl0DBa1CU7wauUZ0D6IkkjVT7yIhUE2NIggqDCNW5JGEmPdAu299WrNuwX7CbIurJ6Sjn0+452ohaLGaGkUQsKzF39F/C/He5eKti1Oa9wymhbf13aPi3sAxpH3g6mm4a5GiBF0hctWhpRj2/y3E376/36gj7nvHPTn2rrRHQ7d1M+4hjGFZQVeIHNTj9RHXf5Lg7OUkd6q7laEJ3qFuEoVk+1J8A2bWYaoT4omd2XotE9hKRLOwGncKeJCmykjEdPTCYKwIuHHm50WazapOiFdP53kG10Miw9Hd/sA3mFaKQm+Xufdvwf9kUYgx+3JomvIMxnTG0aW4Y9Nmd1QW5aKnTXA7H1WxLj8by+iMLLK9I7H/6z22/PAtehqUjquZw6IQb14odHAT6oyjJ0QqlxzZko7qBJ9Fzzz02z6EB+xjGCslSp13qEiHYl28nHMy1AhDau6+iY+/joSA+SG+m2eLoE9p7ypD1iEx+crvx79NSb3u5FivaXjnl14cbWfnCOkxZ345enLjuJFrt2z7ODPrnq93aJdnB7dv04c/69fDX8VePqSQO7Ru2cvLIxBZDd9Qt9w0LLekrKH75sXu7eBxydL5a9etOLDvBOJ2YT/5/ZYNSckJrq5uoaHhkyfO8Pb24QtXk1X64iy7a/fWI0d+TUlNahgY3K7dMyNHjDNd3vokmK8R710rpKXWctlk56Ss3zxRq1VPGPPt8KjF6Zm3124cpzcsR5NIZSUlhXsPLn2r/ydLos+0bN5tx94FuXlcMIPT53adPrfz9d7TJ4/dVL+e39Hj3y
GrQctpiqbizwtjc7JqOHyIC540fdpsXoWxF87OmTf9pZd679h2aO7sRZmZ6StXLeJLVpNlZPfubT/+tHHgG1Hbfv61b983Dh7au237FlQT+GV+ZjGfXFzASGXWEuLFy4elEtm7gxd7ewb5eIW8+dqnaenxV2+c5HP1em3PF0c3bNCCoqh2kb3hV5iWfgvS//5nR8uI7iBNBwcXqCNDQ9oha0LTdGaqCuFGNb39R2DjprVdnu8GSoI6LyKi5fhxU86c+fumoe2uJsvI5SsXw8Ob9erVx82tXp/eA1Z/vbljh+dQTeCqcwtDK+aFqNHqrWebQbvcIKCZo2PpKlf3er713QMSki4ZCwT6R/AHDvYu8FiiKgQ5Zj9M8fYKNpYJ8LNyuHOKLSnCLxxZNb39R+DevdtNmkQYn4aHNYPHmzevVZ9lpHnzVhcunP1ySfThIwfyC/L9/QJCQ5/aciLzfcTqF/M/ISUqZUradXC+mCYWFJav76oafkmlLmIYvULhYEyRy+2RNYH3IMFvcP1JxpqVSqVarVYoyj0hDg7c/SwuLqomy/QKUF86ODjGnD65+MvPpFJp1649x743ycPj6Yx3mBeiXC6jkLXqA2fn+sENI3t1q7Dto6NjdQ5LO4UjTUu02vK2Uq0pRtYE6mCFA3YLep6kdrCz43SmUpWvXSoy6Ky+u0c1WaZXgO4KtMjwl5h47+LFc5u3bCgqUi5cUIOwytX0LMwL0cVT+sBq/gs/78YXLh8KCWptjOiQkXXPs351VjDUT/XcfBOT414o65PciLduDFNGz/oFOyDceIJJD1CHhYc1vXbtijGFPw5p1LiaLNMrgL0cFtY0OLhRUFAI/BUqCw8e2oNqBFXDIb5GzZ30GmsNLYBHhmGY/b+t0GhUWQ+Sfj3y9bKvo9Iz71R/VqvmPeKuH4cBFTj+868tSalXkdXQKPXQp27Uyrqt/2NQ06ZZoVB4enrFxp7591KsTqcb0H/Q3zEndu3aWlBYAClr1i5v07p949BwKGIi5O8AAAS2SURBVFlNlpE//jwMlvXp06eggwimzF9//9k8ohWqCdx8RAudPvM1YkhLB/jQhdkqZ4+nP7gCZu+0CT8f/+uHleuGZz1IDAyIeLP/p/9pfPR4YURRUe7eQ8t+3PEptOz9Xvng51/mWCmCVFZCrswOxwlHjzENbEjUyE2b1507f3rrz7+Cd+ZBdtb2X374es0y8BG2a/vMe6Mn8MWqyTIydcqsr1cv/XT2FMQtOa8PbfSbA4eip4TFaGCbo5P0rKRRB18kPuJPpfg2tOv3vg/CjHUz7/qH2Hcd5IeEyeZ5dwa87x8Qbqapsdgfb92lnlqJ5TCX9dGotP3GYqdCDlbYa1YQa3EWm8VVfK26upw5nJ0Rn+sTbn4mWF5+5tKvo8xm2SucStTmhyV8PEMmjPkGPT1mfd7dUhaM1kgkZj5gUGDL0cMs2np3zqa71lPg+X1T1aw+EgJcIFAL77+65aRte7qf+y3HkhCdnepPGf+D2SywQuRy851Lmn7KERktvQfubWjVcpmZBYdSSXUR3dSF6lFfNEJYwjBI0PvisHxwD3NUJ4t23d2uxeQnxGYEtzPTTkFl416v9jsrT/c93DqV4tfInsI19CBFCXuBfTX8h892+JyGqkJ1QYZ1vceYkBr3AAZTBozH1xRgWWEvsK+G/x48GPdFSMrVLFTXSb+RW5hdNHpBEMIY+J1QVN3covARPpUEvb+4UdzRhIdpRaiOknolp+BBwbgvMe0aGmH0UCMySMCwTxTpAUzPictD79/ISoitIxPoTbkVk1qcrxz7RQjCHqEvsKcs2io1CdQ5YVkoq9fdOJ6UcevpL1mqFRL/zbp6NKGeu3TMQgGoECEk8Igj1blBa+ZMGTkv6NyRvEunch+mFdg7K7xD3R3chBPcvozcNGVOYr6qRCNXSAaMbeAfLpiYUgaruW6azTX26nXo5QZ/scfyrsbkJV64z8XV5Pb4prgbZL
LZC2UScKfUC8uUTgEyhtukqQrLI7lgm1wRQ9TXCj98tnxPGyPl+9hUDEpLs4gpL8y/Fi2BnhUNKTq1lmG4wKEu9eU9BgcERQhwmaKQzeZq/PGP6V5u18MN/uDgzr/Ku1eL8rM16iJWr2fKe9JU+fg8F17SxPVgDBHL+bYZzklrLGYYAjKEvCq/DioNGUuXzTKnDIGJOWWzxqvRtOEZnC5lWZ3hKVP+WlIZRcsoB0eJk5tdxDPOfqHYTat5RAwxeQUMiyz2LZ50nCO0tRP8IYKtoAStRMtguikkwSwyuUQiF3BALKmUQhYWYBAhCgmZHaUuFrAfEbpOASHmrdu66aavqwQ1xTUExSNwen+2wl6CLFToRIhC4oU33MHV8OfPghxxTbpW0O1NL0u5eO3XTHgUtixIomhJ664eDYXgflLmsRePPUi6WTh8VpCjq8UOLhGiIPllZdrDDI1ex+gtREIwemQfE7bKXOqqoyJVUqgqzhlawu0GZu8kfWmIt19odT8bIkQho0ElJSaLLU2d/nTFIAll280bnrAV9rg3eplNxwCoMlGxZVc26q7CUEHZq1DGEYyyA/5EicT+0Zx7RIgELCDuGwIWECESsIAIkYAFRIgELCBCJGABESIBC/4fAAD//+m/VkAAAAAGSURBVAMAHxjwZPJOmpMAAAAASUVORK5CYII=",
            "text/plain": [
              "<IPython.core.display.Image object>"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "from IPython.display import Image, display\n",
        "\n",
        "try:\n",
        "    display(Image(graph.get_graph(xray=True).draw_mermaid_png()))\n",
        "except Exception:\n",
        "    # This requires some extra dependencies and is optional\n",
        "    pass"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 42,
      "metadata": {
        "id": "RKefwkwqe_lX"
      },
      "outputs": [],
      "source": [
        "import re\n",
        "\n",
        "\n",
        "def sanitize_name(name: str) -> str:\n",
        "    \"\"\"Sanitize the name to match OpenAI's pattern requirements.\"\"\"\n",
        "    # Remove any spaces, <, |, \\, /, and >\n",
        "    sanitized = re.sub(r\"[\\s<|\\\\/>]\", \"_\", name)\n",
        "    # Ensure the name isn't empty\n",
        "    return sanitized or \"anonymous\""
      ]
    },
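    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A quick sanity check of the sanitization rules (the same logic is redefined inline here so the cell can run on its own):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import re\n",
        "\n",
        "\n",
        "def _sanitize(name: str) -> str:\n",
        "    # Same rule as sanitize_name above: whitespace and <, |, \\, /, > become underscores\n",
        "    return re.sub(r\"[\\s<|\\\\/>]\", \"_\", name) or \"anonymous\"\n",
        "\n",
        "\n",
        "print(_sanitize(\"Emergency Response Agent\"))  # Emergency_Response_Agent\n",
        "print(_sanitize(\"\"))  # anonymous"
      ]
    },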
    {
      "cell_type": "code",
      "execution_count": 43,
      "metadata": {
        "id": "Df6i8Bw8e9H4"
      },
      "outputs": [],
      "source": [
        "import asyncio\n",
        "\n",
        "from langchain_core.messages import AIMessage, HumanMessage, ToolMessage\n",
        "\n",
        "\n",
        "async def chat_loop():\n",
        "    config = {\"configurable\": {\"thread_id\": \"01010\"}}\n",
        "\n",
        "    while True:\n",
        "        user_input = await asyncio.get_event_loop().run_in_executor(\n",
        "            None, input, \"User: \"\n",
        "        )\n",
        "        if user_input.lower() in [\"quit\", \"exit\", \"q\"]:\n",
        "            print(\"Goodbye!\")\n",
        "            break\n",
        "\n",
        "        # Use a sanitized name for the human\n",
        "        state = {\"messages\": [HumanMessage(content=user_input, name=\"human\")]}\n",
        "\n",
        "        print(\"Assistant: \", end=\"\", flush=True)\n",
        "\n",
        "        max_retries = 3\n",
        "        retry_delay = 1\n",
        "\n",
        "        for attempt in range(max_retries):\n",
        "            try:\n",
        "                async for chunk in graph.astream(state, config, stream_mode=\"values\"):\n",
        "                    if chunk.get(\"messages\"):\n",
        "                        last_message = chunk[\"messages\"][-1]\n",
        "                        if isinstance(last_message, AIMessage):\n",
        "                            # Ensure the AI name is properly sanitized\n",
        "                            last_message.name = \"assistant\"\n",
        "                            print(last_message.content, end=\"\", flush=True)\n",
        "                        elif isinstance(last_message, ToolMessage):\n",
        "                            # Sanitize tool names as well\n",
        "                            tool_name = sanitize_name(last_message.name)\n",
        "                            print(f\"\\n[Tool Used: {tool_name}]\")\n",
        "                            print(f\"Tool Call ID: {last_message.tool_call_id}\")\n",
        "                            print(f\"Content: {last_message.content}\")\n",
        "                            print(\"Assistant: \", end=\"\", flush=True)\n",
        "                break\n",
        "            except Exception as e:\n",
        "                if attempt < max_retries - 1:\n",
        "                    print(f\"\\nAn unexpected error occurred: {e!s}\")\n",
        "                    print(f\"\\nRetrying in {retry_delay} seconds...\")\n",
        "                    await asyncio.sleep(retry_delay)\n",
        "                    retry_delay *= 2\n",
        "                else:\n",
        "                    print(f\"\\nMax retries reached. Error: {e!s}\")\n",
        "                    break\n",
        "\n",
        "        print(\"\\n\")  # New line after the complete response"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kIqVy1UEyZDU"
      },
      "source": [
        "##### Executing the Agent"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 44,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "EQtccaKqfw0G",
        "outputId": "991528fb-cbd5-416e-f708-aa8c5b5cd5ac"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "User: Can you run this incident report \"\"\"   NETWORK CRISIS REPORT - PRIORITY CRITICAL    Incident #: INC-20250505-3547   Service: 5G Network Service   Status: ACTIVE OUTAGE    SUMMARY:   Complete 5G network failure reported across North America region    AFFECTED AREAS:   - New York City metro area   - Boston metropolitan region   - Philadelphia and surrounding counties    IMPACT ASSESSMENT:   - Estimated 2 million customers unable to access 5G services   - Enterprise customers reporting business-critical service disruptions   - Mobile data speeds degraded to 4G in surrounding areas    TECHNICAL DETAILS:   - Core Network Status: DOWN   - gNodeB Stations: 3/5 nodes failed   - Data Center: Primary facility shows hardware failures   - Root Cause: Equipment overheating during maintenance window    TIMELINE:   15:00 EST - Maintenance window begins   15:25 EST - First customer complaints received   15:30 EST - Network monitoring alerts triggered   15:45 EST - Service outage confirmed    REQUIRED RESPONSE:   - Network engineers with 5G expertise   - Hardware repair technicians   - Crisis management team   - Customer communications team    BUSINESS IMPACT:   - Revenue impact: $5,000/minute   - SLA breach: Yes (2-hour response requirement)   - Media attention: High (local news coverage)    NEXT STEPS:   1. Activate emergency response protocol   2. Dispatch on-site technicians   3. Prepare customer communications   4. Assess backup systems deployment \"\"\"\n",
            "Assistant: "
          ]
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "/tmp/ipython-input-3900299550.py:20: PydanticDeprecatedSince20: The `dict` method is deprecated; use `model_dump` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.11/migration/\n",
            "  result = AIMessage(**result.dict(exclude={\"type\", \"name\"}), name=\"assistant\")\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Crisis event saved into records\n",
            "Crisis Event Generated:\n",
            "{\n",
            "  \"event_id\": \"CRISIS-20250505-001\",\n",
            "  \"event_type\": \"Network Outage\",\n",
            "  \"severity\": \"critical\",\n",
            "  \"title\": \"Critical 5G Network Outage in Major US Metro Areas - North America\",\n",
            "  \"description\": \"Complete 5G network failure across North American metro regions, with core network and gNodeB station failures due to equipment overheating during maintenance. Affects millions and enterprise customers, causing business-critical disruptions and media attention.\",\n",
            "  \"affected_systems\": [\n",
            "    \"5G Network Service\",\n",
            "    \"Core Network\",\n",
            "    \"gNodeB Stations\",\n",
            "    \"Primary Data Center\"\n",
            "  ],\n",
            "  \"affected_regions\": [\n",
            "    \"New York City metro area\",\n",
            "    \"Boston metropolitan region\",\n",
            "    \"Philadelphia and surrounding counties\"\n",
            "  ],\n",
            "  \"customer_impact\": \"Estimated 2 million customers unable to access 5G services; enterprise and business customers suffer critical disruptions; 4G data speeds in surrounding areas.\",\n",
            "  \"required_skills\": [\n",
            "    \"Network engineers with 5G expertise\",\n",
            "    \"Hardware repair technicians\",\n",
            "    \"Crisis management team\",\n",
            "    \"Customer communications team\"\n",
            "  ]\n",
            "}\n",
            "\n",
            "1. Crisis detected and parsed:\n",
            "- Type: CrisisType.NETWORK_OUTAGE\n",
            "- Severity: SeverityLevel.CRITICAL\n",
            "- Required skills: Network engineers with 5G expertise, Hardware repair technicians, Crisis management team, Customer communications team\n",
            "\n",
            "[Tool Used: detect_crisis]\n",
            "Tool Call ID: call_d8DN8mKlas8SZv8NVVKoo5yv\n",
            "Content: {\"event_id\": \"CRISIS-20250505-001\", \"event_type\": \"Network Outage\", \"severity\": \"critical\", \"title\": \"Critical 5G Network Outage in Major US Metro Areas - North America\", \"description\": \"Complete 5G network failure across North American metro regions, with core network and gNodeB station failures due to equipment overheating during maintenance. Affects millions and enterprise customers, causing business-critical disruptions and media attention.\", \"affected_systems\": [\"5G Network Service\", \"Core Network\", \"gNodeB Stations\", \"Primary Data Center\"], \"affected_regions\": [\"New York City metro area\", \"Boston metropolitan region\", \"Philadelphia and surrounding counties\"], \"customer_impact\": \"Estimated 2 million customers unable to access 5G services; enterprise and business customers suffer critical disruptions; 4G data speeds in surrounding areas.\", \"required_skills\": [\"Network engineers with 5G expertise\", \"Hardware repair technicians\", \"Crisis management team\", \"Customer communications team\"]}\n",
            "Assistant: Incident detected and analyzed:\n",
            "\n",
            "- Crisis Type: Network Outage (Critical)\n",
            "- Title: Critical 5G Network Outage in Major US Metro Areas - North America\n",
            "- Description: Complete failure of the 5G network across New York, Boston, and Philadelphia due to equipment overheating during maintenance. Affects millions of customers and businesses, with major media coverage.\n",
            "- Impact: 2 million customers affected, significant business disruption, degraded services, and high revenue loss.\n",
            "- Affected Systems: 5G Network Service, Core Network, gNodeB Stations, Primary Data Center\n",
            "- Required Response: \n",
            "  - Network engineers with 5G expertise\n",
            "  - Hardware repair technicians\n",
            "  - Crisis management team\n",
            "  - Customer communications team\n",
            "\n",
            "The crisis requires immediate activation of emergency protocols and multidisciplinary experts for rapid response. Would you like me to proceed gathering a crisis response team, related knowledge assets, and initiate response planning?"
          ]
        },
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "/tmp/ipython-input-3900299550.py:20: PydanticDeprecatedSince20: The `dict` method is deprecated; use `model_dump` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.11/migration/\n",
            "  result = AIMessage(**result.dict(exclude={\"type\", \"name\"}), name=\"assistant\")\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n",
            "\n",
            "User: q\n",
            "Goodbye!\n"
          ]
        }
      ],
      "source": [
        "# nest_asyncio patches the already-running Jupyter/IPython event loop\n",
        "# so that nested `await` calls work inside the kernel\n",
        "import nest_asyncio\n",
        "\n",
        "nest_asyncio.apply()\n",
        "\n",
        "# Start the interactive agent chat loop (enter 'q' to quit)\n",
        "await chat_loop()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "sP5u3V_ZAvrf"
      },
      "outputs": [],
      "source": [
        "# Sample incident report, kept here for convenience — paste it into the chat prompt above\n",
        "incident_report = \"\"\"\n",
        "  NETWORK CRISIS REPORT - PRIORITY CRITICAL\n",
        "\n",
        "  Incident #: INC-20250505-3547\n",
        "  Service: 5G Network Service\n",
        "  Status: ACTIVE OUTAGE\n",
        "\n",
        "  SUMMARY:\n",
        "  Complete 5G network failure reported across North America region\n",
        "\n",
        "  AFFECTED AREAS:\n",
        "  - New York City metro area\n",
        "  - Boston metropolitan region\n",
        "  - Philadelphia and surrounding counties\n",
        "\n",
        "  IMPACT ASSESSMENT:\n",
        "  - Estimated 2 million customers unable to access 5G services\n",
        "  - Enterprise customers reporting business-critical service disruptions\n",
        "  - Mobile data speeds degraded to 4G in surrounding areas\n",
        "\n",
        "  TECHNICAL DETAILS:\n",
        "  - Core Network Status: DOWN\n",
        "  - gNodeB Stations: 3/5 nodes failed\n",
        "  - Data Center: Primary facility shows hardware failures\n",
        "  - Root Cause: Equipment overheating during maintenance window\n",
        "\n",
        "  TIMELINE:\n",
        "  15:00 EST - Maintenance window begins\n",
        "  15:25 EST - First customer complaints received\n",
        "  15:30 EST - Network monitoring alerts triggered\n",
        "  15:45 EST - Service outage confirmed\n",
        "\n",
        "  REQUIRED RESPONSE:\n",
        "  - Network engineers with 5G expertise\n",
        "  - Hardware repair technicians\n",
        "  - Crisis management team\n",
        "  - Customer communications team\n",
        "\n",
        "  BUSINESS IMPACT:\n",
        "  - Revenue impact: $5,000/minute\n",
        "  - SLA breach: Yes (2-hour response requirement)\n",
        "  - Media attention: High (local news coverage)\n",
        "\n",
        "  NEXT STEPS:\n",
        "  1. Activate emergency response protocol\n",
        "  2. Dispatch on-site technicians\n",
        "  3. Prepare customer communications\n",
        "  4. Assess backup systems deployment\n",
        "\"\"\""
      ]
    }
  ],
  "metadata": {
    "colab": {
      "provenance": [],
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "base",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "name": "python",
      "version": "3.11.5"
    },
    "widgets": {
      "application/vnd.jupyter.widget-state+json": {
        "06fb49b9cae941f59b978b9e022ee944": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "11ec5659934c469e8a96fc008b72813b": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "1a942ca9422e478898ebcbb9de43eabd": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "FloatProgressModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "FloatProgressModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "ProgressView",
            "bar_style": "success",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_7fd24343103d43b38d9358c32963108a",
            "max": 1,
            "min": 0,
            "orientation": "horizontal",
            "style": "IPY_MODEL_74ac542ed5de4227b660ee80a7c9673e",
            "value": 1
          }
        },
        "1e85166a7916452bae86108ae64361d4": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "FloatProgressModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "FloatProgressModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "ProgressView",
            "bar_style": "success",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_5572add8abee448289005ada46d6c0d2",
            "max": 1,
            "min": 0,
            "orientation": "horizontal",
            "style": "IPY_MODEL_469c5fb526b34f789cc03aa02d30b69c",
            "value": 1
          }
        },
        "202946dab5af445291fa726a58ce66d1": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "210b99e07eb04dadb3930d95966e44b4": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_06fb49b9cae941f59b978b9e022ee944",
            "placeholder": "​",
            "style": "IPY_MODEL_72aea2635cdf428d91b4133802e514a7",
            "value": " 1/1 [00:00&lt;00:00,  6.60it/s]"
          }
        },
        "212f3f25669d4fcdb37bae2121bcf04d": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "FloatProgressModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "FloatProgressModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "ProgressView",
            "bar_style": "success",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_85768a1ecb714ba7a29ad01659b99449",
            "max": 1,
            "min": 0,
            "orientation": "horizontal",
            "style": "IPY_MODEL_e89cd05f2f5744eab740a4d77bc418e0",
            "value": 1
          }
        },
        "29347ee87c5e4f148bc2800f0ecdfbc5": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "2a5bf18548444ff0bd5397fa9e409a2c": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "2ae5036a8d2c4598a3c6f1989e0f5494": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "2b66fe785e63464b95a6094d8077a37a": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "3265fdce9fa64da5870cb7384ae974f5": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "3fd986c252fd40ccb77ed9b185946d0b": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HBoxModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HBoxModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HBoxView",
            "box_style": "",
            "children": [
              "IPY_MODEL_a1e329ee64e74c1baa37fb3c27580892",
              "IPY_MODEL_212f3f25669d4fcdb37bae2121bcf04d",
              "IPY_MODEL_8c7d71782d284afd93c491446c319254"
            ],
            "layout": "IPY_MODEL_11ec5659934c469e8a96fc008b72813b"
          }
        },
        "41584d6e4abb4d37a34860a6b31cffe1": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HBoxModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HBoxModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HBoxView",
            "box_style": "",
            "children": [
              "IPY_MODEL_9ae343d11e6a4112b93a1becada22293",
              "IPY_MODEL_1e85166a7916452bae86108ae64361d4",
              "IPY_MODEL_e1f993f548604a19932ea3e92120aba6"
            ],
            "layout": "IPY_MODEL_202946dab5af445291fa726a58ce66d1"
          }
        },
        "469c5fb526b34f789cc03aa02d30b69c": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "ProgressStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "ProgressStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "bar_color": null,
            "description_width": ""
          }
        },
        "49c2c3d4bbc94e9386e7255f6c0ed89c": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_d0e6db037f8f414cb805c88981cdc731",
            "placeholder": "​",
            "style": "IPY_MODEL_534ce430982b410f929d5dc434f46688",
            "value": "100%"
          }
        },
        "4ee74006f7a54af086fda3747c9416ae": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "534ce430982b410f929d5dc434f46688": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "5572add8abee448289005ada46d6c0d2": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "56bca67a0f6f4658bdfea47f24d78706": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HBoxModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HBoxModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HBoxView",
            "box_style": "",
            "children": [
              "IPY_MODEL_ba34f08e153040b6b66c255bb3b6f54e",
              "IPY_MODEL_5d500724b68941e9821aeee92fb3f5eb",
              "IPY_MODEL_210b99e07eb04dadb3930d95966e44b4"
            ],
            "layout": "IPY_MODEL_c422cea214cc4d96b234dedc08ff4abe"
          }
        },
        "5843086730774128bb9e656db4a65e83": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_c08bbb35d71b40b7a9ce238313a91e36",
            "placeholder": "​",
            "style": "IPY_MODEL_b9b2a9e1095e450dbb82d4052ec85c91",
            "value": "100%"
          }
        },
        "5d500724b68941e9821aeee92fb3f5eb": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "FloatProgressModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "FloatProgressModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "ProgressView",
            "bar_style": "success",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_3265fdce9fa64da5870cb7384ae974f5",
            "max": 1,
            "min": 0,
            "orientation": "horizontal",
            "style": "IPY_MODEL_be6558b7fdd1486d8c933064a97b4b10",
            "value": 1
          }
        },
        "5f2b022ca0c540c294d906143ef51f77": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "6465d1842d774ec6801e5a5f1eb70a53": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "6bb5168734b4481da4a2aa25cf3e5539": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "72aea2635cdf428d91b4133802e514a7": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "74ac542ed5de4227b660ee80a7c9673e": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "ProgressStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "ProgressStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "bar_color": null,
            "description_width": ""
          }
        },
        "7d9ef38e785843b7af33e6b61f0ce725": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HBoxModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HBoxModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HBoxView",
            "box_style": "",
            "children": [
              "IPY_MODEL_c1ab584e38b8471d9093414375e87b59",
              "IPY_MODEL_840e1c25008e4fbcb48a39a52b905ae5",
              "IPY_MODEL_a76c6fc5624c4a16b154ae28bf2836b6"
            ],
            "layout": "IPY_MODEL_6bb5168734b4481da4a2aa25cf3e5539"
          }
        },
        "7fd24343103d43b38d9358c32963108a": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "840e1c25008e4fbcb48a39a52b905ae5": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "FloatProgressModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "FloatProgressModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "ProgressView",
            "bar_style": "success",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_4ee74006f7a54af086fda3747c9416ae",
            "max": 1,
            "min": 0,
            "orientation": "horizontal",
            "style": "IPY_MODEL_f24448dc76b5426a894a1fd3a9f3a8d8",
            "value": 1
          }
        },
        "85768a1ecb714ba7a29ad01659b99449": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "85a57236d48541f9af1bfc8f60266f2e": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "8bd3488abf424ef5821b1f775285cb98": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "ProgressStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "ProgressStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "bar_color": null,
            "description_width": ""
          }
        },
        "8c7d71782d284afd93c491446c319254": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_85a57236d48541f9af1bfc8f60266f2e",
            "placeholder": "​",
            "style": "IPY_MODEL_f9c72b84091543669df9a6a4d3f86935",
            "value": " 1/1 [00:00&lt;00:00,  9.29it/s]"
          }
        },
        "9ae343d11e6a4112b93a1becada22293": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_ca155adeaecb4164a911f33c6c2b7fa8",
            "placeholder": "​",
            "style": "IPY_MODEL_dd7b275e29cf4112b8da21ac364c0ad5",
            "value": "100%"
          }
        },
        "9dba347cdc7240b7992f924f9500383c": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "9f456078b9b34e5c9fe15a39126badf5": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_b0327ec8ad8a48adbc13f572da0426da",
            "placeholder": "​",
            "style": "IPY_MODEL_9dba347cdc7240b7992f924f9500383c",
            "value": " 1/1 [00:00&lt;00:00,  6.24it/s]"
          }
        },
        "9fd4422f56e24e41b0cca88c957250a5": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "a1e329ee64e74c1baa37fb3c27580892": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_9fd4422f56e24e41b0cca88c957250a5",
            "placeholder": "​",
            "style": "IPY_MODEL_e61fe1515d1d4326a9ba069f003a8bc5",
            "value": "100%"
          }
        },
        "a76c6fc5624c4a16b154ae28bf2836b6": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_c29883fcb4e641b3950bce9552500e78",
            "placeholder": "​",
            "style": "IPY_MODEL_d7e20546e14e478481116d39455fe0c4",
            "value": " 1/1 [00:00&lt;00:00,  8.60it/s]"
          }
        },
        "ab58a73920d741b194ad26dfb17fdd3b": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "b0327ec8ad8a48adbc13f572da0426da": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "b934e2c7168447ce9cf1accf9c86ab70": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_5f2b022ca0c540c294d906143ef51f77",
            "placeholder": "​",
            "style": "IPY_MODEL_29347ee87c5e4f148bc2800f0ecdfbc5",
            "value": " 1/1 [00:00&lt;00:00,  2.86it/s]"
          }
        },
        "b9b2a9e1095e450dbb82d4052ec85c91": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "ba34f08e153040b6b66c255bb3b6f54e": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_dca43ee32eec4886b03482ca15743c34",
            "placeholder": "​",
            "style": "IPY_MODEL_2a5bf18548444ff0bd5397fa9e409a2c",
            "value": "100%"
          }
        },
        "be6558b7fdd1486d8c933064a97b4b10": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "ProgressStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "ProgressStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "bar_color": null,
            "description_width": ""
          }
        },
        "c08bbb35d71b40b7a9ce238313a91e36": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "c1ab584e38b8471d9093414375e87b59": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_2b66fe785e63464b95a6094d8077a37a",
            "placeholder": "​",
            "style": "IPY_MODEL_2ae5036a8d2c4598a3c6f1989e0f5494",
            "value": "100%"
          }
        },
        "c29883fcb4e641b3950bce9552500e78": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "c422cea214cc4d96b234dedc08ff4abe": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "c70e0ec05b374e73a08f7746d6bceac6": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "FloatProgressModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "FloatProgressModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "ProgressView",
            "bar_style": "success",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_c8c6de07b28a42d1aa78dccd4617980c",
            "max": 1,
            "min": 0,
            "orientation": "horizontal",
            "style": "IPY_MODEL_8bd3488abf424ef5821b1f775285cb98",
            "value": 1
          }
        },
        "c8c6de07b28a42d1aa78dccd4617980c": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "ca155adeaecb4164a911f33c6c2b7fa8": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "d0e6db037f8f414cb805c88981cdc731": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "d7e20546e14e478481116d39455fe0c4": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "dca43ee32eec4886b03482ca15743c34": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "dd7b275e29cf4112b8da21ac364c0ad5": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "e1441a03c912498891c384f088a9540f": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "e1f993f548604a19932ea3e92120aba6": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HTMLModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HTMLModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HTMLView",
            "description": "",
            "description_tooltip": null,
            "layout": "IPY_MODEL_ab58a73920d741b194ad26dfb17fdd3b",
            "placeholder": "​",
            "style": "IPY_MODEL_6465d1842d774ec6801e5a5f1eb70a53",
            "value": " 1/1 [00:00&lt;00:00,  2.85it/s]"
          }
        },
        "e27c57fefa25464d9c8793f834e32908": {
          "model_module": "@jupyter-widgets/base",
          "model_module_version": "1.2.0",
          "model_name": "LayoutModel",
          "state": {
            "_model_module": "@jupyter-widgets/base",
            "_model_module_version": "1.2.0",
            "_model_name": "LayoutModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "LayoutView",
            "align_content": null,
            "align_items": null,
            "align_self": null,
            "border": null,
            "bottom": null,
            "display": null,
            "flex": null,
            "flex_flow": null,
            "grid_area": null,
            "grid_auto_columns": null,
            "grid_auto_flow": null,
            "grid_auto_rows": null,
            "grid_column": null,
            "grid_gap": null,
            "grid_row": null,
            "grid_template_areas": null,
            "grid_template_columns": null,
            "grid_template_rows": null,
            "height": null,
            "justify_content": null,
            "justify_items": null,
            "left": null,
            "margin": null,
            "max_height": null,
            "max_width": null,
            "min_height": null,
            "min_width": null,
            "object_fit": null,
            "object_position": null,
            "order": null,
            "overflow": null,
            "overflow_x": null,
            "overflow_y": null,
            "padding": null,
            "right": null,
            "top": null,
            "visibility": null,
            "width": null
          }
        },
        "e61fe1515d1d4326a9ba069f003a8bc5": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "e67d2ed9f7304842added60be1d66a3e": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HBoxModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HBoxModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HBoxView",
            "box_style": "",
            "children": [
              "IPY_MODEL_5843086730774128bb9e656db4a65e83",
              "IPY_MODEL_1a942ca9422e478898ebcbb9de43eabd",
              "IPY_MODEL_9f456078b9b34e5c9fe15a39126badf5"
            ],
            "layout": "IPY_MODEL_e27c57fefa25464d9c8793f834e32908"
          }
        },
        "e89cd05f2f5744eab740a4d77bc418e0": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "ProgressStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "ProgressStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "bar_color": null,
            "description_width": ""
          }
        },
        "f24448dc76b5426a894a1fd3a9f3a8d8": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "ProgressStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "ProgressStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "bar_color": null,
            "description_width": ""
          }
        },
        "f556ad6f36534503bdda98e1da38fd7b": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "HBoxModel",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HBoxModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HBoxView",
            "box_style": "",
            "children": [
              "IPY_MODEL_49c2c3d4bbc94e9386e7255f6c0ed89c",
              "IPY_MODEL_c70e0ec05b374e73a08f7746d6bceac6",
              "IPY_MODEL_b934e2c7168447ce9cf1accf9c86ab70"
            ],
            "layout": "IPY_MODEL_e1441a03c912498891c384f088a9540f"
          }
        },
        "f9c72b84091543669df9a6a4d3f86935": {
          "model_module": "@jupyter-widgets/controls",
          "model_module_version": "1.5.0",
          "model_name": "DescriptionStyleModel",
          "state": {
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "DescriptionStyleModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/base",
            "_view_module_version": "1.2.0",
            "_view_name": "StyleView",
            "description_width": ""
          }
        },
        "state": {}
      }
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
