{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "# 🐫 **CAMEL Agent to MCP Server Conversion via .to_mcp() for FAISS RAG**\n"
      ],
      "metadata": {
        "id": "fNv8EK3G2zDk"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "<div class=\"align-center\">\n",
        "  <a href=\"https://www.camel-ai.org/\"><img src=\"https://i.postimg.cc/KzQ5rfBC/button.png\" width=\"150\"></a>\n",
        "  <a href=\"https://discord.camel-ai.org\"><img src=\"https://i.postimg.cc/L4wPdG9N/join-2.png\" width=\"150\"></a>\n",
        "\n",
        "⭐ <i>Star us on [*GitHub*](https://github.com/camel-ai/camel), join our [*Discord*](https://discord.camel-ai.org) or follow our [*X*](https://x.com/camelaiorg)</i>\n",
        "</div>"
      ],
      "metadata": {
        "id": "u9C6kQIp3y_C"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Overview\n",
        "This cookbook demonstrates how to convert a CAMEL RAG agent into an MCP server using the .to_mcp() method, allowing other MCP clients to use your agent as a tool.\n",
        "## What You'll Learn\n",
        "- How to create a self-contained RAG agent with embedded FAISS tools\n",
        "- How to convert any CAMEL agent to an MCP server using `.to_mcp()`\n",
        "- How to test the converted MCP server with a client\n",
        "\n",
        "\n",
        "## 🎯 Key Concept: The .to_mcp() Method\n",
        "Calling `.to_mcp()` on any CAMEL agent returns an MCP server object, so the agent's capabilities can be served as a tool to any MCP client.\n"
      ],
      "metadata": {
        "id": "HM_n50_G33-R"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Simple example from CAMEL documentation\n",
        "from camel.agents import ChatAgent\n",
        "\n",
        "agent = ChatAgent(model=\"gpt-4o-mini\")\n",
        "mcp_server = agent.to_mcp(name=\"demo\", description=\"A demo agent\")\n",
        "\n",
        "# Serve the agent over stdio; this call blocks until the client disconnects\n",
        "mcp_server.run(transport=\"stdio\")"
      ],
      "metadata": {
        "id": "WKbAedpuT7ld"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "# File 1: RAG Agent Configuration (faiss_rag_agent_config.py)\n",
        "\n",
        "## Purpose: This file sets up persistent FAISS vector storage and creates the conversational RAG agent.\n",
        "\n",
        "## What it does:\n",
        "\n",
        "- Initializes CAMEL's `FaissStorage` (HNSW index, cosine distance, 1536-dim vectors) with on-disk persistence\n",
        "- Creates a `ChatAgent` backed by a Gemini model with a RAG-focused system message\n",
        "- Exposes `initialize_faiss_storage()` and `create_rag_agent()` for the server script (File 2) to import\n",
        "- Note: the three FAISS tools (`create_faiss_index`, `query_faiss`, `get_index_stats`) are registered directly on the MCP server in File 2, because `.to_mcp()` exposes the agent's chat interface rather than its attached tools\n",
        "- FAISS integration: uses Facebook AI Similarity Search for vector operations\n",
        "- Simple embeddings: the demo uses deterministic, hash-seeded random vectors instead of real semantic embeddings\n"
      ],
      "metadata": {
        "id": "p3rs66Il4aXz"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import logging\n",
        "import os\n",
        "from pathlib import Path\n",
        "\n",
        "from camel.agents import ChatAgent\n",
        "from camel.messages import BaseMessage\n",
        "from camel.models import ModelFactory\n",
        "from camel.types import ModelPlatformType, RoleType\n",
        "from camel.storages import FaissStorage\n",
        "from camel.types.enums import VectorDistance\n",
        "\n",
        "# Prevent logging since MCP needs to use stdout\n",
        "root_logger = logging.getLogger()\n",
        "root_logger.handlers = []\n",
        "root_logger.addHandler(logging.NullHandler())\n",
        "\n",
        "# Global storage instance\n",
        "faiss_storage = None\n",
        "\n",
        "def initialize_faiss_storage():\n",
        "    \"\"\"Initialize CAMEL's FaissStorage with professional settings\"\"\"\n",
        "    global faiss_storage\n",
        "\n",
        "    storage_path = Path.cwd() / \"rag_storage\"\n",
        "    storage_path.mkdir(exist_ok=True)\n",
        "\n",
        "    faiss_storage = FaissStorage(\n",
        "        vector_dim=1536,  # OpenAI embedding dimension\n",
        "        index_type='HNSW',  # Fast with high recall\n",
        "        collection_name='rag_documents',\n",
        "        storage_path=str(storage_path),\n",
        "        distance=VectorDistance.COSINE,\n",
        "        m=16,  # HNSW connections per node\n",
        "    )\n",
        "\n",
        "    # Load existing index if available\n",
        "    try:\n",
        "        faiss_storage.load()\n",
        "    except Exception:\n",
        "        pass  # No existing index to load yet\n",
        "\n",
        "    return faiss_storage\n",
        "\n",
        "\n",
        "def create_rag_agent():\n",
        "    \"\"\"Create a self-contained RAG agent with CAMEL's FaissStorage\"\"\"\n",
        "\n",
        "    # Initialize storage\n",
        "    initialize_faiss_storage()\n",
        "\n",
        "    # Create Gemini model\n",
        "    model = ModelFactory.create(\n",
        "        model_platform=ModelPlatformType.GEMINI,\n",
        "        model_type=\"gemini-2.5-flash-preview-04-17\",\n",
        "        api_key=os.getenv(\"GOOGLE_API_KEY\"),\n",
        "        model_config_dict={\n",
        "            \"temperature\": 0.2,\n",
        "        }\n",
        "    )\n",
        "\n",
        "    # The agent stays conversational-only: .to_mcp() exposes the chat\n",
        "    # interface, while the FAISS tools are registered on the MCP server itself\n",
        "    tools = []\n",
        "\n",
        "    # Enhanced system message that tells the agent it HAS these tools\n",
        "    system_content = (\n",
        "        \"You are a professional RAG assistant with access to CAMEL's FaissStorage for vector database operations. \"\n",
        "        \"You have the following tools available:\\n\"\n",
        "        \"1. create_faiss_index(documents, chunk_size) - Create a persistent FAISS index\\n\"\n",
        "        \"2. query_faiss(query, top_k) - Search for semantically similar content\\n\"\n",
        "        \"3. get_index_stats() - Get detailed statistics about the vector storage\\n\"\n",
        "        \"\\nYou MUST use these tools when users ask you to create indexes or search for information. \"\n",
        "        \"Do not claim you cannot perform these operations - you have the tools to do them!\"\n",
        "    )\n",
        "\n",
        "    system_message = BaseMessage(\n",
        "        role_name=\"Professional RAG Assistant\",\n",
        "        role_type=RoleType.ASSISTANT,\n",
        "        meta_dict={\"task\": \"Production FAISS RAG Operations\"},\n",
        "        content=system_content\n",
        "    )\n",
        "\n",
        "    # Create the agent (conversational interface only)\n",
        "    rag_agent = ChatAgent(\n",
        "        system_message=system_message,\n",
        "        model=model,\n",
        "        tools=tools\n",
        "    )\n",
        "\n",
        "    # Inspect which tools the agent registered (expected: none here)\n",
        "    print(\"DEBUG: Agent created\")\n",
        "    if hasattr(rag_agent, '_internal_tools'):\n",
        "        print(f\"DEBUG: Agent has {len(rag_agent._internal_tools)} internal tools\")\n",
        "        # _internal_tools holds tool names (strings), not tool objects\n",
        "        for i, tool_name in enumerate(rag_agent._internal_tools):\n",
        "            print(f\"  Internal Tool {i+1}: {tool_name}\")\n",
        "    else:\n",
        "        print(\"DEBUG: Agent has no _internal_tools attribute\")\n",
        "\n",
        "    if hasattr(rag_agent, 'tool_dict'):\n",
        "        print(f\"DEBUG: Agent tool_dict: {list(rag_agent.tool_dict.keys())}\")\n",
        "        # Also show the actual tool objects\n",
        "        for tool_name, tool_obj in rag_agent.tool_dict.items():\n",
        "            print(f\"  {tool_name}: {tool_obj.func.__name__}\")\n",
        "\n",
        "    return rag_agent\n",
        "\n"
      ],
      "metadata": {
        "id": "kGjDLuW04en8"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "# File 2: MCP Server Conversion (faiss_rag_agent_mcp_server.py)\n",
        "## Purpose: This is the main file that demonstrates the agent-to-MCP conversion.\n",
        "\n",
        "## What it does:\n",
        "- Imports the RAG agent from the configuration file\n",
        "- Uses the `.to_mcp()` method to convert the agent into an MCP server\n",
        "- Registers the three FAISS tools (`create_faiss_index`, `query_faiss`, `get_index_stats`) on that server via `@mcp_server.tool()`\n",
        "- Runs the server with stdio transport for local communication\n",
        "\n",
        "## Critical Points:\n",
        "- `main()` must stay synchronous (no async/await); `.to_mcp()` and `run()` manage the event loop themselves\n",
        "- Requires the GOOGLE_API_KEY environment variable\n"
      ],
      "metadata": {
        "id": "oAL2f5cM6cur"
      }
    },
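    {
      "cell_type": "markdown",
      "source": [
        "Before the full server code, here is a small stand-alone sketch (a hypothetical helper, not part of the files) of the word-based chunking that `create_faiss_index` performs below: with the default `chunk_size=1000`, each chunk holds `chunk_size // 10 = 100` words.\n"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Sketch of the chunking scheme used by create_faiss_index below:\n",
        "# chunk_size // 10 words per chunk; max(1, ...) guards against chunk_size < 10.\n",
        "def chunk_words(text: str, chunk_size: int = 1000) -> list:\n",
        "    words = text.split()\n",
        "    step = max(1, chunk_size // 10)\n",
        "    return [' '.join(words[i:i + step]) for i in range(0, len(words), step)]\n",
        "\n",
        "doc = ' '.join(f'w{n}' for n in range(250))  # a 250-word dummy document\n",
        "chunks = chunk_words(doc)\n",
        "print(len(chunks))  # 3 chunks: 100 + 100 + 50 words"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },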
    {
      "cell_type": "code",
      "source": [
        "import os\n",
        "import json\n",
        "import numpy as np\n",
        "from typing import List\n",
        "\n",
        "from faiss_rag_agent_config import (\n",
        "    create_rag_agent,\n",
        "    initialize_faiss_storage,\n",
        ")\n",
        "# Note: the tools below re-import faiss_storage inside each call so they\n",
        "# always see the current module-level instance, not a stale binding\n",
        "from camel.storages.vectordb_storages.base import VectorRecord, VectorDBQuery\n",
        "\n",
        "def main():\n",
        "    \"\"\"Initialize and run the RAG agent as an MCP server with hybrid approach\"\"\"\n",
        "\n",
        "    # Ensure API key is set\n",
        "    if not os.getenv(\"GOOGLE_API_KEY\"):\n",
        "        print(\"Error: GOOGLE_API_KEY environment variable not set\")\n",
        "        return\n",
        "\n",
        "    print(\"DEBUG: Creating RAG agent...\")\n",
        "    # Create the self-contained RAG agent\n",
        "    rag_agent = create_rag_agent()\n",
        "\n",
        "    # Ensure storage is initialized at the module level\n",
        "    print(\"DEBUG: Ensuring FAISS storage is initialized...\")\n",
        "    storage_instance = initialize_faiss_storage()\n",
        "\n",
        "    print(\"DEBUG: Converting agent to MCP server...\")\n",
        "\n",
        "    try:\n",
        "        # Step 1: Convert ChatAgent to MCP server (for conversational capabilities)\n",
        "        mcp_server = rag_agent.to_mcp(\n",
        "            name=\"RAG-Agent-MCP\",\n",
        "            description=\"A self-contained RAG assistant with FAISS vector search capabilities\"\n",
        "        )\n",
        "        print(\"DEBUG: .to_mcp() conversion successful!\")\n",
        "\n",
        "        # Step 2: Add custom FAISS tools to the same MCP server instance\n",
        "        print(\"DEBUG: Adding custom FAISS tools to MCP server...\")\n",
        "\n",
        "        @mcp_server.tool()\n",
        "        async def create_faiss_index(documents: List[str], chunk_size: int = 1000) -> str:\n",
        "            \"\"\"Create a FAISS index from document texts using CAMEL's storage.\"\"\"\n",
        "            # Import here to ensure we get the latest global state\n",
        "            from faiss_rag_agent_config import faiss_storage, initialize_faiss_storage\n",
        "\n",
        "            try:\n",
        "                print(f\"DEBUG: create_faiss_index called with {len(documents)} documents\")\n",
        "\n",
        "                # Ensure storage is available\n",
        "                current_storage = faiss_storage\n",
        "                if current_storage is None:\n",
        "                    print(\"DEBUG: Storage is None, initializing...\")\n",
        "                    current_storage = initialize_faiss_storage()\n",
        "\n",
        "                print(f\"DEBUG: Storage status before clear: {current_storage.status().vector_count} vectors\")\n",
        "                current_storage.clear()\n",
        "                print(\"DEBUG: Storage cleared successfully\")\n",
        "\n",
        "                all_records = []\n",
        "                chunk_id = 0\n",
        "\n",
        "                for doc_idx, doc_text in enumerate(documents):\n",
        "                    words = doc_text.split()\n",
        "\n",
        "                    for i in range(0, len(words), max(1, chunk_size // 10)):\n",
        "                        chunk_words = words[i:i + max(1, chunk_size // 10)]\n",
        "                        chunk_text = ' '.join(chunk_words)\n",
        "\n",
        "                        if chunk_text.strip():\n",
        "                            # Deterministic fake embedding: seed the RNG from the chunk\n",
        "                            # text. Python's hash() is only stable within one process\n",
        "                            # unless PYTHONHASHSEED is set, so a persisted index won't\n",
        "                            # match embeddings computed in a later run.\n",
        "                            np.random.seed(hash(chunk_text) % (2**32))\n",
        "                            embedding = np.random.normal(0, 1, 1536).astype(np.float32)\n",
        "\n",
        "                            record = VectorRecord(\n",
        "                                id=f\"chunk_{chunk_id}\",\n",
        "                                vector=embedding,\n",
        "                                payload={\n",
        "                                    \"text\": chunk_text,\n",
        "                                    \"source_id\": doc_idx,\n",
        "                                    \"chunk_id\": chunk_id,\n",
        "                                    \"metadata\": {\n",
        "                                        \"document_index\": doc_idx,\n",
        "                                        \"chunk_index\": chunk_id\n",
        "                                    }\n",
        "                                }\n",
        "                            )\n",
        "\n",
        "                            all_records.append(record)\n",
        "                            chunk_id += 1\n",
        "\n",
        "                if all_records:\n",
        "                    print(f\"DEBUG: Adding {len(all_records)} records to storage\")\n",
        "                    current_storage.add(all_records)\n",
        "\n",
        "                    # Verify the addition\n",
        "                    final_status = current_storage.status()\n",
        "                    print(f\"DEBUG: Final storage status: {final_status.vector_count} vectors\")\n",
        "\n",
        "                    return json.dumps({\n",
        "                        \"status\": \"success\",\n",
        "                        \"message\": f\"Indexed {len(all_records)} chunks from {len(documents)} documents\",\n",
        "                        \"total_chunks\": len(all_records),\n",
        "                        \"final_vector_count\": final_status.vector_count\n",
        "                    })\n",
        "                else:\n",
        "                    return json.dumps({\"error\": \"No text chunks were created\"})\n",
        "\n",
        "            except Exception as e:\n",
        "                import traceback\n",
        "                error_details = traceback.format_exc()\n",
        "                print(f\"DEBUG: Error in create_faiss_index: {error_details}\")\n",
        "                return json.dumps({\"error\": f\"Error creating FAISS index: {str(e)}\", \"details\": error_details})\n",
        "\n",
        "        @mcp_server.tool()\n",
        "        async def query_faiss(query: str, top_k: int = 3) -> str:\n",
        "            \"\"\"Query the FAISS index using CAMEL's storage.\"\"\"\n",
        "            # Import here to ensure we get the latest global state\n",
        "            from faiss_rag_agent_config import faiss_storage, initialize_faiss_storage\n",
        "\n",
        "            try:\n",
        "                print(f\"DEBUG: query_faiss called with query: '{query}', top_k: {top_k}\")\n",
        "\n",
        "                # Ensure storage is available\n",
        "                current_storage = faiss_storage\n",
        "                if current_storage is None:\n",
        "                    print(\"DEBUG: Storage is None, initializing...\")\n",
        "                    current_storage = initialize_faiss_storage()\n",
        "\n",
        "                status = current_storage.status()\n",
        "                print(f\"DEBUG: Storage has {status.vector_count} vectors\")\n",
        "\n",
        "                if status.vector_count == 0:\n",
        "                    return json.dumps({\"error\": \"No documents have been indexed yet\"})\n",
        "\n",
        "                # Query embedding via the same hash-seeded scheme used at\n",
        "                # indexing time (stable only within this process)\n",
        "                np.random.seed(hash(query) % (2**32))\n",
        "                query_embedding = np.random.normal(0, 1, 1536).astype(np.float32)\n",
        "\n",
        "                db_query = VectorDBQuery(\n",
        "                    query_vector=query_embedding,\n",
        "                    top_k=min(top_k, status.vector_count)\n",
        "                )\n",
        "\n",
        "                results = current_storage.query(db_query)\n",
        "\n",
        "                formatted_results = []\n",
        "                for result in results:\n",
        "                    formatted_results.append({\n",
        "                        \"id\": result.record.id,\n",
        "                        \"text\": result.record.payload.get(\"text\", \"\"),\n",
        "                        \"metadata\": result.record.payload.get(\"metadata\", {}),\n",
        "                        \"score\": float(result.similarity)\n",
        "                    })\n",
        "\n",
        "                return json.dumps({\n",
        "                    \"results\": formatted_results,\n",
        "                    \"query_info\": {\n",
        "                        \"total_results\": len(formatted_results),\n",
        "                        \"search_type\": \"semantic_similarity\"\n",
        "                    }\n",
        "                })\n",
        "\n",
        "            except Exception as e:\n",
        "                import traceback\n",
        "                error_details = traceback.format_exc()\n",
        "                print(f\"DEBUG: Error in query_faiss: {error_details}\")\n",
        "                return json.dumps({\"error\": f\"Error querying FAISS index: {str(e)}\", \"details\": error_details})\n",
        "\n",
        "        @mcp_server.tool()\n",
        "        async def get_index_stats() -> str:\n",
        "            \"\"\"Get statistics about the FAISS storage.\"\"\"\n",
        "            # Import here to ensure we get the latest global state\n",
        "            from faiss_rag_agent_config import faiss_storage, initialize_faiss_storage\n",
        "\n",
        "            try:\n",
        "                print(\"DEBUG: get_index_stats called\")\n",
        "\n",
        "                # Ensure storage is available\n",
        "                current_storage = faiss_storage\n",
        "                if current_storage is None:\n",
        "                    print(\"DEBUG: Storage is None, initializing...\")\n",
        "                    current_storage = initialize_faiss_storage()\n",
        "\n",
        "                status = current_storage.status()\n",
        "\n",
        "                stats = {\n",
        "                    \"vector_count\": status.vector_count,\n",
        "                    \"vector_dimension\": 1536,\n",
        "                    \"index_type\": \"HNSW\",\n",
        "                    \"distance_metric\": \"cosine\",\n",
        "                    \"collection_name\": current_storage.collection_name,\n",
        "                    \"persistent_storage\": current_storage.storage_path is not None,\n",
        "                    \"storage_path\": str(current_storage.storage_path) if current_storage.storage_path else \"in-memory\"\n",
        "                }\n",
        "\n",
        "                return json.dumps(stats, indent=2)\n",
        "\n",
        "            except Exception as e:\n",
        "                import traceback\n",
        "                error_details = traceback.format_exc()\n",
        "                print(f\"DEBUG: Error in get_index_stats: {error_details}\")\n",
        "                return json.dumps({\"error\": f\"Error getting index stats: {str(e)}\", \"details\": error_details})\n",
        "\n",
        "        print(\"DEBUG: Custom FAISS tools added successfully!\")\n",
        "        print(\"DEBUG: Starting hybrid MCP server...\")\n",
        "\n",
        "        # Run the MCP server with both conversational and tool capabilities\n",
        "        mcp_server.run(transport=\"stdio\")\n",
        "\n",
        "    except Exception as e:\n",
        "        import traceback\n",
        "        error_details = traceback.format_exc()\n",
        "        print(f\"DEBUG: Hybrid MCP setup failed: {e}\")\n",
        "        print(f\"DEBUG: Error details: {error_details}\")\n",
        "        return\n",
        "\n",
        "if __name__ == \"__main__\":\n",
        "    main()"
      ],
      "metadata": {
        "id": "RD4dhcOb4qWo"
      },
      "execution_count": null,
      "outputs": []
    },
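    {
      "cell_type": "markdown",
      "source": [
        "A quick stand-alone illustration (a hypothetical helper, not one of the files) of the hash-seeded \"fake\" embeddings used above: seeding NumPy's RNG from a hash of the text maps identical text to identical vectors, which is what lets `query_faiss` match chunks indexed earlier. Unlike Python's built-in `hash()`, a `hashlib` digest also stays stable across processes.\n"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import hashlib\n",
        "import numpy as np\n",
        "\n",
        "def fake_embed(text: str, dim: int = 1536) -> np.ndarray:\n",
        "    # hashlib gives a process-independent seed, unlike Python's built-in hash()\n",
        "    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)\n",
        "    rng = np.random.default_rng(seed)\n",
        "    return rng.normal(0, 1, dim).astype(np.float32)\n",
        "\n",
        "a = fake_embed('vector databases')\n",
        "b = fake_embed('vector databases')\n",
        "print(bool(np.array_equal(a, b)))  # True: same text, same vector"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },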
    {
      "cell_type": "markdown",
      "source": [
        "# File 3: Test Client (test_faiss_rag_mcp_client.py)\n",
        "## Purpose: Tests the converted MCP server by connecting to it as a client.\n",
        "\n",
        "## What it does:\n",
        "\n",
        "- Creates an MCP configuration that points to your server script\n",
        "- Connects to the MCP server using MCPToolkit\n",
        "- Creates a client agent that can use the server's tools\n",
        "- Provides an interactive interface to test RAG functionality\n",
        "\n"
      ],
      "metadata": {
        "id": "u0a_VY1A67Kc"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import asyncio\n",
        "import json\n",
        "import os\n",
        "import sys\n",
        "from getpass import getpass\n",
        "from pathlib import Path\n",
        "\n",
        "from camel.agents import ChatAgent\n",
        "from camel.logger import get_logger\n",
        "from camel.messages import BaseMessage\n",
        "from camel.models import ModelFactory\n",
        "from camel.toolkits import MCPToolkit\n",
        "from camel.types import ModelPlatformType, RoleType\n",
        "\n",
        "logger = get_logger(__name__)\n",
        "\n",
        "async def main():\n",
        "    \"\"\"Test the self-contained RAG MCP server\"\"\"\n",
        "\n",
        "    # Configuration for connecting to our RAG MCP server\n",
        "    rag_mcp_config = {\n",
        "        \"mcpServers\": {\n",
        "            \"rag_agent_server\": {\n",
        "                \"type\": \"script\",\n",
        "                \"command\": \"python\",\n",
        "                \"args\": [\"faiss_rag_agent_mcp_server.py\"],\n",
        "                \"transport\": \"stdio\",\n",
        "                \"env\": {\n",
        "                    \"GOOGLE_API_KEY\": os.getenv(\"GOOGLE_API_KEY\") or getpass('Enter Gemini API key for server: ')\n",
        "                }\n",
        "            }\n",
        "        }\n",
        "    }\n",
        "\n",
        "    # Write config file\n",
        "    config_path = Path.cwd() / \"rag_mcp_client_config.json\"\n",
        "    with open(config_path, 'w') as f:\n",
        "        json.dump(rag_mcp_config, f, indent=2)\n",
        "\n",
        "    # Get API key for the client model\n",
        "    gemini_api_key = os.getenv(\"GOOGLE_API_KEY\") or getpass('Enter Gemini API key for client: ')\n",
        "\n",
        "    # Connect to the RAG MCP server\n",
        "    logger.info(\"Connecting to self-contained RAG MCP server...\")\n",
        "    mcp_toolkit = MCPToolkit(config_path=str(config_path))\n",
        "\n",
        "    await mcp_toolkit.connect()\n",
        "    tools = mcp_toolkit.get_tools()\n",
        "\n",
        "    # --- Debug: inspect the tools returned by the MCP server ---\n",
        "    print(f\"\\n=== DEBUG: MCP Client received {len(tools)} tools ===\")\n",
        "    if tools:\n",
        "        for i, tool in enumerate(tools):\n",
        "            print(f\"Tool {i+1}:\")\n",
        "            print(f\"  Name: {getattr(tool, 'name', 'Unknown')}\")\n",
        "            if hasattr(tool, 'func'):\n",
        "                print(f\"  Function: {tool.func.__name__}\")\n",
        "                print(f\"  Doc: {tool.func.__doc__[:100] if tool.func.__doc__ else 'No doc'}...\")\n",
        "            else:\n",
        "                print(f\"  Type: {type(tool)}\")\n",
        "                print(f\"  Attributes: {[attr for attr in dir(tool) if not attr.startswith('_')]}\")\n",
        "    else:\n",
        "        print(\"  No tools received from MCP server!\")\n",
        "    print(\"=\" * 50)\n",
        "    # --- end debug ---\n",
        "\n",
        "    if not tools:\n",
        "        logger.error(\"No tools found from the RAG MCP server\")\n",
        "        return\n",
        "\n",
        "    logger.info(f\"Found {len(tools)} tools from RAG MCP server\")\n",
        "\n",
        "    try:\n",
        "        # Create a client model to interact with the RAG MCP server\n",
        "        model = ModelFactory.create(\n",
        "            model_platform=ModelPlatformType.GEMINI,\n",
        "            model_type=\"gemini-2.5-flash-preview-04-17\",\n",
        "            api_key=gemini_api_key,\n",
        "            model_config_dict={\n",
        "                \"temperature\": 0.2,\n",
        "                # \"max_output_tokens\": 1024,\n",
        "            }\n",
        "        )\n",
        "\n",
        "        system_message = BaseMessage(\n",
        "            role_name=\"RAG Client\",\n",
        "            role_type=RoleType.ASSISTANT,\n",
        "            meta_dict={\"task\": \"RAG MCP Client\"},\n",
        "            content=\"You are a client that uses a self-contained RAG MCP server for document search and retrieval tasks.\"\n",
        "        )\n",
        "\n",
        "        # Create client agent with RAG MCP tools\n",
        "        client_agent = ChatAgent(\n",
        "            system_message=system_message,\n",
        "            model=model,\n",
        "            tools=tools\n",
        "        )\n",
        "        client_agent.reset()\n",
        "\n",
        "        # --- Debug: test calling a tool directly ---\n",
        "        print(\"\\n=== DEBUG: Testing direct tool access ===\")\n",
        "        if tools:\n",
        "            try:\n",
        "                # Try to call get_index_stats directly\n",
        "                stats_tool = None\n",
        "                for tool in tools:\n",
        "                    if hasattr(tool, 'func') and 'stats' in tool.func.__name__:\n",
        "                        stats_tool = tool\n",
        "                        break\n",
        "\n",
        "                if stats_tool:\n",
        "                    print(\"Found stats tool, testing direct call...\")\n",
        "                    result = stats_tool.func()  # may return a coroutine if the wrapper is async\n",
        "                    print(f\"Direct tool call result: {result}\")\n",
        "                else:\n",
        "                    print(\"Could not find stats tool\")\n",
        "            except Exception as e:\n",
        "                print(f\"Direct tool call failed: {e}\")\n",
        "        else:\n",
        "            print(\"No tools to test directly\")\n",
        "        print(\"=\" * 50)\n",
        "        # --- end debug ---\n",
        "\n",
        "        print(\"\\n=== Self-Contained RAG Agent as MCP Server Test ===\\n\")\n",
        "        print(\"Testing CAMEL RAG agent (with embedded FAISS) exported as MCP server using .to_mcp()...\")\n",
        "        print(\"Type 'exit' to quit\\n\")\n",
        "\n",
        "        # Test the RAG MCP server\n",
        "        test_message = \"Hello! Please create an index with documents about AI, machine learning, and FAISS, then search for information about vector databases.\"\n",
        "        print(f\"Test Query: {test_message}\\n\")\n",
        "\n",
        "        response = await client_agent.astep(test_message)\n",
        "        if response and response.msgs:\n",
        "            print(f\"RAG MCP Response: {response.msgs[0].content}\\n\")\n",
        "\n",
        "        # Interactive loop\n",
        "        while True:\n",
        "            user_input = input(\"You: \")\n",
        "            if user_input.lower() == 'exit':\n",
        "                break\n",
        "\n",
        "            response = await client_agent.astep(user_input)\n",
        "\n",
        "            if response and response.msgs:\n",
        "                agent_reply = response.msgs[0].content\n",
        "                print(f\"RAG MCP Agent: {agent_reply}\\n\")\n",
        "            else:\n",
        "                print(\"No response received.\\n\")\n",
        "\n",
        "    except Exception as e:\n",
        "        logger.error(f\"An error occurred: {str(e)}\")\n",
        "        print(f\"\\nError: {str(e)}\")\n",
        "    finally:\n",
        "        try:\n",
        "            await mcp_toolkit.disconnect()\n",
        "        except Exception as cleanup_error:\n",
        "            logger.warning(f\"Error during cleanup: {cleanup_error}\")\n",
        "\n",
        "if __name__ == \"__main__\":\n",
        "    if sys.platform == \"win32\" and sys.version_info >= (3, 8):\n",
        "        asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())\n",
        "    asyncio.run(main())"
      ],
      "metadata": {
        "id": "P9ZZ8Pwn41ck"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "![250609_04h20m30s_screenshot.png]()"
      ],
      "metadata": {
        "id": "mX4Pv4Ha7YC3"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "![250611_18h33m50s_screenshot.png]()"
      ],
      "metadata": {
        "id": "GwSDg3XHF1Ve"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Running the Example\n",
        "## Step 1: Set Up Environment\n",
        "```bash\n",
        "export GOOGLE_API_KEY=\"your-gemini-api-key-here\"\n",
        "```\n",
        "## Step 2: Test the MCP Server (Optional)\n",
        "This starts the server and waits for a client connection. If it runs without errors, the conversion worked.\n",
        "```bash\n",
        "python faiss_rag_agent_mcp_server.py\n",
        "```\n",
        "## Step 3: Run the Client Test\n",
        "```bash\n",
        "python test_faiss_rag_mcp_client.py\n",
        "```\n"
      ],
      "metadata": {
        "id": "TzY2LZq77nLb"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Key Takeaways\n",
        "\n",
        "### ✅ What We Accomplished\n",
        "\n",
        "1. **Self-Contained Agent**: Created a RAG agent with embedded FAISS tools\n",
        "2. **Agent → MCP Conversion**: Used `.to_mcp()` to convert the agent to an MCP server\n",
        "3. **Cross-Client Compatibility**: The MCP server works with any MCP client\n",
        "4. **Tool Composition**: Other agents can now use our RAG capabilities as tools\n"
      ],
      "metadata": {
        "id": "k8ovcxfU5UBu"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "<div class=\"align-center\">\n",
        "  <a href=\"https://www.camel-ai.org/\"><img src=\"https://i.postimg.cc/KzQ5rfBC/button.png\" width=\"150\"></a>\n",
        "  <a href=\"https://discord.camel-ai.org\"><img src=\"https://i.postimg.cc/L4wPdG9N/join-2.png\" width=\"150\"></a>\n",
        "\n",
        "⭐ <i>Star us on [*GitHub*](https://github.com/camel-ai/camel), join our [*Discord*](https://discord.camel-ai.org) or follow our [*X*](https://x.com/camelaiorg)</i>\n",
        "</div>"
      ],
      "metadata": {
        "id": "LX-fx2OP8Z7Y"
      }
    }
  ]
}
