{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Week 8 Exercise: The Price is Right - Autonomous Deal-Hunting AI\n",
        "\n",
        "## Overview\n",
        "This notebook implements a complete autonomous agentic AI system that:\n",
        "- Scans online deals from RSS feeds\n",
        "- Estimates fair market prices using multiple AI/ML models\n",
        "- Identifies great deals by comparing listed vs estimated prices\n",
        "- Sends push notifications for good opportunities\n",
        "- Displays everything in a Gradio UI\n",
        "\n",
        "## Architecture\n",
        "- **SpecialistAgent**: Fine-tuned LLM deployed on Modal\n",
        "- **FrontierAgent**: RAG + GPT-4o-mini/DeepSeek\n",
        "- **RandomForestAgent**: ML model on embeddings\n",
        "- **EnsembleAgent**: Weighted combination of all pricers\n",
        "- **ScannerAgent**: RSS feed deal scraper\n",
        "- **MessagingAgent**: Push notifications\n",
        "- **PlanningAgent**: Orchestrates everything\n",
        "\n",
        "---\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Core imports\n",
        "import os\n",
        "import sys\n",
        "import json\n",
        "import pickle\n",
        "import logging\n",
        "from dotenv import load_dotenv\n",
        "from pathlib import Path\n",
        "\n",
        "# Add parent directory to path to access week8 modules\n",
        "parent_dir = Path.cwd().parent\n",
        "if str(parent_dir) not in sys.path:\n",
        "    sys.path.insert(0, str(parent_dir))\n",
        "\n",
        "print(f\"Working directory: {Path.cwd()}\")\n",
        "print(f\"Parent directory added to path: {parent_dir}\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Environment setup\n",
        "load_dotenv(override=True)\n",
        "\n",
        "# Verify required environment variables\n",
        "required_vars = ['OPENAI_API_KEY', 'HF_TOKEN']\n",
        "optional_vars = ['DEEPSEEK_API_KEY', 'PUSHOVER_USER', 'PUSHOVER_TOKEN']\n",
        "\n",
        "print(\"Required environment variables:\")\n",
        "for var in required_vars:\n",
        "    status = \"SET\" if os.getenv(var) else \"MISSING\"\n",
        "    print(f\"  {var}: {status}\")\n",
        "\n",
        "print(\"\\nOptional environment variables:\")\n",
        "for var in optional_vars:\n",
        "    status = \"SET\" if os.getenv(var) else \"NOT SET\"\n",
        "    print(f\"  {var}: {status}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Modal Setup\n",
        "\n",
        "Before proceeding, ensure Modal is configured:\n",
        "\n",
        "1. If this is your first time, run the next cell to set up Modal\n",
        "2. This will open a browser for authentication\n",
        "3. Alternatively, run `modal setup` from command line in an activated environment\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Modal Authentication - Run this to authenticate\n",
        "# This will open a browser for you to sign in to Modal and create a token\n",
        "\n",
        "!modal token new\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Check if Modal is configured and import it\n",
        "import modal\n",
        "from pathlib import Path\n",
        "\n",
        "print(f\"Modal version: {modal.__version__}\")\n",
        "\n",
        "# Check if Modal token exists\n",
        "modal_config = Path.home() / \".modal.toml\"\n",
        "if modal_config.exists():\n",
        "    print(\"Modal configuration found - you're all set!\")\n",
        "else:\n",
        "    print(\"WARNING: Modal configuration not found. You need to run 'modal setup' first.\")\n",
        "    print(\"Please follow the instructions in the cell above.\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Configure HuggingFace Secret in Modal\n",
        "\n",
        "Before deploying, you need to set up your HuggingFace token as a secret in Modal:\n",
        "\n",
        "1. Go to https://modal.com and sign in\n",
        "2. Navigate to **Secrets** in the sidebar\n",
        "3. Click **Create new secret**\n",
        "4. Select **Hugging Face**\n",
        "5. Name it **hf-secret** (important: this is referenced in the code)\n",
        "6. Add your HF_TOKEN value\n",
        "7. Save the secret\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Review the Pricer Service Configuration\n",
        "\n",
        "Let's examine the Modal deployment configuration:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Read and display the pricer_service.py configuration\n",
        "pricer_service_path = parent_dir / \"pricer_service.py\"\n",
        "\n",
        "with open(pricer_service_path, 'r') as f:\n",
        "    content = f.read()\n",
        "    \n",
        "# Show the key configuration details\n",
        "print(\"Pricer Service Configuration:\")\n",
        "print(\"=\"*50)\n",
        "for line in content.split('\\n')[:30]:\n",
        "    if any(keyword in line for keyword in ['BASE_MODEL', 'HF_USER', 'RUN_NAME', 'GPU', 'FINETUNED_MODEL', 'REVISION']):\n",
        "        print(line)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Deploy the pricer service to Modal\n",
        "# (equivalent to running `modal deploy ../pricer_service2.py` from a terminal)\n",
        "\n",
        "!modal deploy ../pricer_service2.py\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Test the deployed pricer service\n",
        "Pricer = modal.Cls.from_name(\"pricer-service\", \"Pricer\")\n",
        "pricer = Pricer()\n",
        "\n",
        "test_description = \"Quadcast HyperX condenser mic, connects via usb-c to your computer for crystal clear audio\"\n",
        "\n",
        "print(f\"Testing pricer with: {test_description}\")\n",
        "print(\"\\nCalling Modal (this may take 30 seconds on first call as container wakes up)...\")\n",
        "\n",
        "result = pricer.price.remote(test_description)\n",
        "\n",
        "print(f\"\\nEstimated price: ${result:.2f}\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Import the SpecialistAgent\n",
        "from agents.specialist_agent import SpecialistAgent\n",
        "\n",
        "# Initialize logging to see agent messages\n",
        "logging.basicConfig(level=logging.INFO, format='%(message)s')\n",
        "\n",
        "print(\"Initializing SpecialistAgent...\")\n",
        "specialist = SpecialistAgent()\n",
        "print(\"\\nAgent ready!\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Test the SpecialistAgent with multiple products\n",
        "test_products = [\n",
        "    \"iPad Pro 2nd generation with 256GB storage\",\n",
        "    \"Sony WH-1000XM5 wireless noise-cancelling headphones\",\n",
        "    \"Nintendo Switch OLED model with neon controllers\"\n",
        "]\n",
        "\n",
        "print(\"Testing SpecialistAgent with sample products:\\n\")\n",
        "for product in test_products:\n",
        "    price = specialist.price(product)\n",
        "    print(f\"Product: {product}\")\n",
        "    print(f\"Estimated Price: ${price:.2f}\")\n",
        "    print(\"-\" * 70)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Additional imports for RAG\n",
        "import numpy as np\n",
        "from tqdm import tqdm\n",
        "from sentence_transformers import SentenceTransformer\n",
        "import chromadb\n",
        "from huggingface_hub import login\n",
        "\n",
        "# Import items and testing modules\n",
        "from items import Item\n",
        "from testing import Tester\n",
        "\n",
        "print(\"RAG imports complete\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Set up constants\n",
        "DB = \"products_vectorstore\"\n",
        "\n",
        "# Log in to HuggingFace\n",
        "hf_token = os.environ['HF_TOKEN']\n",
        "login(hf_token, add_to_git_credential=True)\n",
        "\n",
        "print(f\"Vector database name: {DB}\")\n",
        "print(\"HuggingFace login successful\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Load Training Data\n",
        "\n",
        "We need the `train.pkl` and `test.pkl` files from Week 6. These files contain the curated product data.\n",
        "\n",
        "**Options:**\n",
        "1. Copy them from your `week6` folder to `week8/philip` folder\n",
        "2. Or download from: https://drive.google.com/drive/folders/1f_IZGybvs9o0J5sb3xmtTEQB3BXllzrW\n",
        "\n",
        "Place the files in the `week8/philip` directory before running the next cell.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Load train.pkl and test.pkl from the current directory\n",
        "# (copy them here from week6, or download them via the link above)\n",
        "with open('train.pkl', 'rb') as file:\n",
        "    train = pickle.load(file)\n",
        "with open('test.pkl', 'rb') as file:\n",
        "    test = pickle.load(file)\n",
        "print(\"Loaded train.pkl and test.pkl from current directory\")\n",
        "\n",
        "print(f\"\\nTraining set: {len(train):,} items\")\n",
        "print(f\"Test set: {len(test):,} items\")\n",
        "print(f\"\\nSample item: {train[0].title}\")\n",
        "print(f\"Price: ${train[0].price:.2f}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Initialize ChromaDB\n",
        "\n",
        "ChromaDB will store our product embeddings for fast similarity search.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Initialize ChromaDB client\n",
        "client = chromadb.PersistentClient(path=DB)\n",
        "\n",
        "# Check if collection exists and delete it if needed (for fresh start)\n",
        "collection_name = \"products\"\n",
        "existing_collections = client.list_collections()\n",
        "\n",
        "if collection_name in [getattr(col, \"name\", col) for col in existing_collections]:  # handles chromadb versions that return names or objects\n",
        "    print(f\"Deleting existing collection: {collection_name}\")\n",
        "    client.delete_collection(collection_name)\n",
        "\n",
        "# Create new collection\n",
        "collection = client.create_collection(collection_name)\n",
        "print(f\"Created collection: {collection_name}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Initialize SentenceTransformer\n",
        "\n",
        "We'll use `all-MiniLM-L6-v2` which maps text to 384-dimensional vectors. It's fast and runs locally.\n"
      ]
    },
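    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before loading the model, here is a minimal, self-contained sketch of the cosine-similarity ranking that underlies vector search. The vectors are hand-made 4-dimensional examples for illustration, not real 384-dimensional embeddings:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Illustrative sketch: cosine similarity is the ranking signal behind\n",
        "# vector search; these vectors are made up, not real embeddings\n",
        "import numpy as np\n",
        "\n",
        "def cosine_similarity(a, b):\n",
        "    \"\"\"Cosine of the angle between two vectors (1.0 = same direction).\"\"\"\n",
        "    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n",
        "\n",
        "query = np.array([1.0, 0.2, 0.0, 0.5])\n",
        "headphones = np.array([0.9, 0.3, 0.1, 0.4])  # points the same way -> high score\n",
        "toaster = np.array([-0.5, 0.8, 0.9, -0.2])   # points elsewhere -> low score\n",
        "\n",
        "sim_head = cosine_similarity(query, headphones)\n",
        "sim_toast = cosine_similarity(query, toaster)\n",
        "print(f\"headphones: {sim_head:.3f}, toaster: {sim_toast:.3f}\")\n"
      ]
    },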
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Load the SentenceTransformer model\n",
        "model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')\n",
        "\n",
        "# Test it with a sample text\n",
        "test_vector = model.encode([\"iPad Pro with 256GB storage\"])[0]\n",
        "print(f\"Model loaded successfully\")\n",
        "print(f\"Vector dimensions: {len(test_vector)}\")\n",
        "print(f\"Sample vector (first 10 values): {test_vector[:10]}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Helper Function: Extract Product Description\n",
        "\n",
        "We need to extract clean product descriptions from our Item objects.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Helper function to extract description from Item\n",
        "def description(item):\n",
        "    \"\"\"Extract the product description without the question and price\"\"\"\n",
        "    text = item.prompt.replace(\"How much does this cost to the nearest dollar?\\n\\n\", \"\")\n",
        "    return text.split(\"\\n\\nPrice is $\")[0]\n",
        "\n",
        "# Test it\n",
        "sample_desc = description(train[0])\n",
        "print(f\"Sample description ({len(sample_desc)} chars):\")\n",
        "print(sample_desc[:200] + \"...\" if len(sample_desc) > 200 else sample_desc)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Populate the Vector Database\n",
        "\n",
        "Now we'll vectorize and store all products in ChromaDB.\n",
        "\n",
        "**Options:**\n",
        "- Full dataset: 400,000 products (takes ~30-45 minutes)\n",
        "- Subset: 20,000 products (takes ~3-5 minutes, still gives great results)\n",
        "\n",
        "Uncomment your preferred option in the next cell.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# NUMBER_OF_DOCUMENTS = len(train)  # Full dataset (~400k)\n",
        "NUMBER_OF_DOCUMENTS = 20000  # Smaller subset (faster, still effective)\n",
        "\n",
        "print(f\"Will process {NUMBER_OF_DOCUMENTS:,} documents\")\n",
        "print(f\"Processing in batches of 1000...\")\n",
        "print(f\"Estimated time: {NUMBER_OF_DOCUMENTS // 1000 * 7} seconds\")\n",
        "print(\"\\nStarting vectorization...\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Populate ChromaDB with product vectors\n",
        "for i in tqdm(range(0, NUMBER_OF_DOCUMENTS, 1000)):\n",
        "    batch_items = train[i: i+1000]\n",
        "    \n",
        "    documents = [description(item) for item in batch_items]\n",
        "    \n",
        "    vectors = model.encode(documents).astype(float).tolist()\n",
        "    \n",
        "    metadatas = [{\"category\": item.category, \"price\": item.price} for item in batch_items]\n",
        "    \n",
        "    ids = [f\"doc_{j}\" for j in range(i, i+len(documents))]\n",
        "    \n",
        "    collection.add(\n",
        "        ids=ids,\n",
        "        documents=documents,\n",
        "        embeddings=vectors,\n",
        "        metadatas=metadatas\n",
        "    )\n",
        "\n",
        "print(f\"\\nComplete! Added {NUMBER_OF_DOCUMENTS:,} products to the vector database.\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Test query: Find wireless headphones\n",
        "test_query = \"Sony wireless noise-cancelling headphones\"\n",
        "\n",
        "query_vector = model.encode([test_query])\n",
        "\n",
        "results = collection.query(\n",
        "    query_embeddings=query_vector.astype(float).tolist(),\n",
        "    n_results=5\n",
        ")\n",
        "\n",
        "print(f\"Query: '{test_query}'\")\n",
        "print(f\"\\nTop 5 similar products:\\n\")\n",
        "\n",
        "for i, (doc, metadata) in enumerate(zip(results['documents'][0], results['metadatas'][0]), 1):\n",
        "    print(f\"{i}. Price: ${metadata['price']:.2f} | Category: {metadata['category']}\")\n",
        "    print(f\"   Description: {doc[:100]}...\")\n",
        "    print()\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Test with another query\n",
        "test_query2 = \"gaming laptop with RTX graphics card\"\n",
        "\n",
        "query_vector2 = model.encode([test_query2])\n",
        "results2 = collection.query(\n",
        "    query_embeddings=query_vector2.astype(float).tolist(),\n",
        "    n_results=5\n",
        ")\n",
        "\n",
        "print(f\"Query: '{test_query2}'\")\n",
        "print(f\"\\nTop 5 similar products:\\n\")\n",
        "\n",
        "for i, (doc, metadata) in enumerate(zip(results2['documents'][0], results2['metadatas'][0]), 1):\n",
        "    print(f\"{i}. Price: ${metadata['price']:.2f} | Category: {metadata['category']}\")\n",
        "    print(f\"   Description: {doc[:100]}...\")\n",
        "    print()\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Import the FrontierAgent\n",
        "from agents.frontier_agent import FrontierAgent\n",
        "\n",
        "print(\"FrontierAgent imported successfully\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Initialize the FrontierAgent with our ChromaDB collection\n",
        "frontier = FrontierAgent(collection)\n",
        "\n",
        "print(\"FrontierAgent initialized and ready!\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Test FrontierAgent with Sample Products\n",
        "\n",
        "Let's test the FrontierAgent with a few products and see how it performs!\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Test FrontierAgent with sample products\n",
        "test_products_frontier = [\n",
        "    \"Apple iPad Pro 12.9-inch with 256GB storage and Apple Pencil support\",\n",
        "    \"Sony WH-1000XM5 wireless noise-cancelling headphones with 30-hour battery\",\n",
        "    \"Nintendo Switch OLED model with vibrant 7-inch screen and neon Joy-Con controllers\"\n",
        "]\n",
        "\n",
        "print(\"Testing FrontierAgent with sample products:\\n\")\n",
        "print(\"=\"*80)\n",
        "\n",
        "for product in test_products_frontier:\n",
        "    print(f\"\\nProduct: {product}\")\n",
        "    print(\"-\"*80)\n",
        "    \n",
        "    # Get price estimate\n",
        "    estimate = frontier.price(product)\n",
        "    \n",
        "    print(f\"FrontierAgent Estimate: ${estimate:.2f}\")\n",
        "    print(\"=\"*80)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Compare both agents on the same products\n",
        "comparison_products = [\n",
        "    \"Wireless gaming mouse with RGB lighting\",\n",
        "    \"USB-C charging cable 6 feet braided\",\n",
        "    \"Bluetooth speaker waterproof portable\"\n",
        "]\n",
        "\n",
        "print(\"Agent Comparison: SpecialistAgent vs FrontierAgent\\n\")\n",
        "print(\"=\"*80)\n",
        "\n",
        "for product in comparison_products:\n",
        "    print(f\"\\nProduct: {product}\")\n",
        "    print(\"-\"*80)\n",
        "    \n",
        "    # Get predictions from both agents\n",
        "    specialist_price = specialist.price(product)\n",
        "    frontier_price = frontier.price(product)\n",
        "    \n",
        "    print(f\"Specialist (Fine-tuned LLM): ${specialist_price:.2f}\")\n",
        "    print(f\"Frontier (RAG + GPT-4o-mini): ${frontier_price:.2f}\")\n",
        "    print(f\"Difference: ${abs(specialist_price - frontier_price):.2f}\")\n",
        "    print(\"=\"*80)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Evaluate on a small sample of test data\n",
        "num_test_samples = 10  # Small sample to keep it fast\n",
        "\n",
        "print(f\"Evaluating both agents on {num_test_samples} test samples...\\n\")\n",
        "\n",
        "specialist_errors = []\n",
        "frontier_errors = []\n",
        "\n",
        "for i in range(num_test_samples):\n",
        "    item = test[i]\n",
        "    actual_price = item.price\n",
        "    desc = description(item)\n",
        "    \n",
        "    # Get predictions\n",
        "    specialist_pred = specialist.price(desc)\n",
        "    frontier_pred = frontier.price(desc)\n",
        "    \n",
        "    # Calculate errors\n",
        "    specialist_error = abs(specialist_pred - actual_price)\n",
        "    frontier_error = abs(frontier_pred - actual_price)\n",
        "    \n",
        "    specialist_errors.append(specialist_error)\n",
        "    frontier_errors.append(frontier_error)\n",
        "    \n",
        "    print(f\"Item {i+1}: {item.title[:50]}...\")\n",
        "    print(f\"  Actual: ${actual_price:.2f}\")\n",
        "    print(f\"  Specialist: ${specialist_pred:.2f} (error: ${specialist_error:.2f})\")\n",
        "    print(f\"  Frontier: ${frontier_pred:.2f} (error: ${frontier_error:.2f})\")\n",
        "    print()\n",
        "\n",
        "# Calculate average errors\n",
        "avg_specialist_error = np.mean(specialist_errors)\n",
        "avg_frontier_error = np.mean(frontier_errors)\n",
        "\n",
        "print(\"=\"*80)\n",
        "print(\"RESULTS:\")\n",
        "print(f\"Specialist Agent - Average Error: ${avg_specialist_error:.2f}\")\n",
        "print(f\"Frontier Agent - Average Error: ${avg_frontier_error:.2f}\")\n",
        "print(f\"Better performer: {'Specialist' if avg_specialist_error < avg_frontier_error else 'Frontier'}\")\n",
        "print(\"=\"*80)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Understand How FrontierAgent Works\n",
        "\n",
        "Let's peek inside to see how the FrontierAgent uses RAG to build context.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Let's see what similar products the FrontierAgent finds\n",
        "test_product = \"MacBook Pro 14-inch with M2 chip and 16GB RAM\"\n",
        "\n",
        "print(f\"Test product: {test_product}\\n\")\n",
        "print(\"=\"*80)\n",
        "\n",
        "# Find similar products (this is what FrontierAgent does internally)\n",
        "documents, prices = frontier.find_similars(test_product)\n",
        "\n",
        "print(\"\\nSimilar products found by RAG:\\n\")\n",
        "for i, (doc, price) in enumerate(zip(documents, prices), 1):\n",
        "    print(f\"{i}. ${price:.2f}\")\n",
        "    print(f\"   {doc[:150]}...\")\n",
        "    print()\n",
        "\n",
        "# Now get the actual price prediction\n",
        "final_price = frontier.price(test_product)\n",
        "print(\"=\"*80)\n",
        "print(f\"\\nFinal FrontierAgent prediction: ${final_price:.2f}\")\n",
        "print(\"=\"*80)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Phase 4: Train Random Forest Model\n",
        "\n",
        "In this phase, we'll:\n",
        "1. Extract embeddings from ChromaDB \n",
        "2. Train a Random Forest Regressor on the embeddings\n",
        "3. Evaluate its performance\n",
        "4. Save the model as `random_forest_model.pkl`\n",
        "5. Test the RandomForestAgent\n",
        "\n",
        "Random Forest works directly on the vector embeddings, learning patterns without needing an LLM!\n"
      ]
    },
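    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a warm-up, here is a tiny self-contained sketch (synthetic vectors, not our real embeddings) showing that a Random Forest can regress a price-like target directly from vector inputs:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Toy sketch on synthetic data: the 'price' is driven by one direction\n",
        "# in a fake embedding space, and the forest recovers that pattern\n",
        "import numpy as np\n",
        "from sklearn.ensemble import RandomForestRegressor\n",
        "\n",
        "rng = np.random.default_rng(42)\n",
        "X_toy = rng.normal(size=(500, 16))  # 500 fake 'embeddings', 16-d for speed\n",
        "y_toy = 50 + 30 * X_toy[:, 0] + rng.normal(scale=2, size=500)\n",
        "\n",
        "toy_rf = RandomForestRegressor(n_estimators=50, random_state=42)\n",
        "toy_rf.fit(X_toy, y_toy)\n",
        "\n",
        "toy_pred = toy_rf.predict(X_toy[:1])[0]\n",
        "print(f\"first item - actual: {y_toy[0]:.2f}, predicted: {toy_pred:.2f}\")\n"
      ]
    },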
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Additional imports for ML\n",
        "from sklearn.ensemble import RandomForestRegressor\n",
        "from sklearn.metrics import mean_squared_error, r2_score\n",
        "import joblib\n",
        "\n",
        "print(\"ML imports complete\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Extract all data from ChromaDB\n",
        "result = collection.get(include=['embeddings', 'documents', 'metadatas'])\n",
        "\n",
        "# Convert to numpy arrays\n",
        "vectors = np.array(result['embeddings'])\n",
        "documents = result['documents']\n",
        "prices = np.array([metadata['price'] for metadata in result['metadatas']])\n",
        "\n",
        "print(f\"Extracted data from ChromaDB:\")\n",
        "print(f\"  Vectors shape: {vectors.shape}\")\n",
        "print(f\"  Number of products: {len(documents):,}\")\n",
        "print(f\"  Price range: ${prices.min():.2f} - ${prices.max():.2f}\")\n",
        "print(f\"  Mean price: ${prices.mean():.2f}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Train Random Forest Model\n",
        "\n",
        "Now we'll train a Random Forest Regressor on the embeddings to predict prices.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Train Random Forest\n",
        "print(\"Training Random Forest model...\")\n",
        "print(\"This may take 1-2 minutes...\\n\")\n",
        "\n",
        "rf_model = RandomForestRegressor(\n",
        "    n_estimators=100,      # Number of trees\n",
        "    max_depth=20,          # Max depth of each tree\n",
        "    random_state=42,       # For reproducibility\n",
        "    n_jobs=-1,             # Use all CPU cores\n",
        "    verbose=1              # Show progress\n",
        ")\n",
        "\n",
        "# Train the model\n",
        "rf_model.fit(vectors, prices)\n",
        "\n",
        "print(\"\\nRandom Forest training complete!\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Evaluate on training data\n",
        "train_predictions = rf_model.predict(vectors)\n",
        "\n",
        "# Calculate metrics\n",
        "train_mse = mean_squared_error(prices, train_predictions)\n",
        "train_rmse = np.sqrt(train_mse)\n",
        "train_r2 = r2_score(prices, train_predictions)\n",
        "\n",
        "print(\"Random Forest - Training Set Performance:\")\n",
        "print(\"=\"*80)\n",
        "print(f\"Root Mean Squared Error (RMSE): ${train_rmse:.2f}\")\n",
        "print(f\"R² Score: {train_r2:.4f}\")\n",
        "print(f\"Mean Absolute Error: ${np.mean(np.abs(prices - train_predictions)):.2f}\")\n",
        "print(\"=\"*80)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Test on Sample Products\n",
        "\n",
        "Let's test the Random Forest directly with some product descriptions.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Test Random Forest with sample products\n",
        "rf_test_products = [\n",
        "    \"Wireless Bluetooth headphones with noise cancellation\",\n",
        "    \"USB-C laptop charger 65W power adapter\",\n",
        "    \"Mechanical gaming keyboard RGB backlit\"\n",
        "]\n",
        "\n",
        "print(\"Testing Random Forest model:\\n\")\n",
        "print(\"=\"*80)\n",
        "\n",
        "for product_desc in rf_test_products:\n",
        "    # Encode the description\n",
        "    product_vector = model.encode([product_desc])\n",
        "    \n",
        "    # Predict price\n",
        "    predicted_price = max(0, rf_model.predict(product_vector)[0])\n",
        "    \n",
        "    print(f\"Product: {product_desc}\")\n",
        "    print(f\"Random Forest Prediction: ${predicted_price:.2f}\")\n",
        "    print(\"-\"*80)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Save the Random Forest Model\n",
        "\n",
        "Save the trained model so the RandomForestAgent can use it.\n"
      ]
    },
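    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Persistence with joblib is a simple dump/load round-trip. Here is a minimal sketch with a tiny throwaway model and a temporary file, before we save the real one:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sketch: joblib round-trip on a tiny throwaway model, showing that\n",
        "# a loaded model reproduces the original's predictions exactly\n",
        "import os\n",
        "import tempfile\n",
        "import numpy as np\n",
        "import joblib\n",
        "from sklearn.linear_model import LinearRegression\n",
        "\n",
        "X_demo = np.array([[1.0], [2.0], [3.0]])\n",
        "y_demo = np.array([10.0, 20.0, 30.0])\n",
        "demo_model = LinearRegression().fit(X_demo, y_demo)\n",
        "\n",
        "tmp_path = os.path.join(tempfile.gettempdir(), 'demo_model.pkl')\n",
        "joblib.dump(demo_model, tmp_path)\n",
        "restored = joblib.load(tmp_path)\n",
        "\n",
        "same = np.allclose(demo_model.predict(X_demo), restored.predict(X_demo))\n",
        "print(f\"round-trip predictions identical: {same}\")\n",
        "os.remove(tmp_path)\n"
      ]
    },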
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Save the model\n",
        "model_path = 'random_forest_model.pkl'\n",
        "joblib.dump(rf_model, model_path)\n",
        "\n",
        "print(f\"Random Forest model saved to: {model_path}\")\n",
        "print(f\"File size: {os.path.getsize(model_path) / 1024 / 1024:.2f} MB\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Test the RandomForestAgent\n",
        "\n",
        "Now let's use the RandomForestAgent class which loads and uses our saved model.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Import and initialize RandomForestAgent\n",
        "from agents.random_forest_agent import RandomForestAgent\n",
        "\n",
        "rf_agent = RandomForestAgent()\n",
        "print(\"RandomForestAgent initialized!\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Test RandomForestAgent\n",
        "agent_test_products = [\n",
        "    \"Apple AirPods Pro with active noise cancellation\",\n",
        "    \"Samsung Galaxy S23 smartphone 128GB\",\n",
        "    \"LG 55-inch 4K OLED smart TV\"\n",
        "]\n",
        "\n",
        "print(\"Testing RandomForestAgent:\\n\")\n",
        "print(\"=\"*80)\n",
        "\n",
        "for product in agent_test_products:\n",
        "    price = rf_agent.price(product)\n",
        "    print(f\"Product: {product}\")\n",
        "    print(f\"RandomForestAgent Prediction: ${price:.2f}\")\n",
        "    print(\"-\"*80)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Compare all three agents\n",
        "comparison_products_all = [\n",
        "    \"Dell XPS 15 laptop with Intel i7 and 16GB RAM\",\n",
        "    \"Sony PlayStation 5 console with controller\",\n",
        "    \"Bose QuietComfort 45 wireless headphones\"\n",
        "]\n",
        "\n",
        "print(\"THREE-WAY AGENT COMPARISON\\n\")\n",
        "print(\"=\"*80)\n",
        "\n",
        "for product in comparison_products_all:\n",
        "    print(f\"\\nProduct: {product}\")\n",
        "    print(\"-\"*80)\n",
        "    \n",
        "    specialist_price = specialist.price(product)\n",
        "    frontier_price = frontier.price(product)\n",
        "    rf_price = rf_agent.price(product)\n",
        "    \n",
        "    print(f\"Specialist (Fine-tuned LLM):  ${specialist_price:>8.2f}\")\n",
        "    print(f\"Frontier (RAG + GPT-4o-mini): ${frontier_price:>8.2f}\")\n",
        "    print(f\"RandomForest (ML on vectors): ${rf_price:>8.2f}\")\n",
        "    print(f\"Average:                      ${np.mean([specialist_price, frontier_price, rf_price]):>8.2f}\")\n",
        "    print(\"=\"*80)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Additional imports for ensemble\n",
        "# (numpy, tqdm, and the sklearn metrics are used below; importing them here\n",
        "# keeps this section self-contained even if earlier cells already did so)\n",
        "import numpy as np\n",
        "import pandas as pd\n",
        "from tqdm import tqdm\n",
        "from sklearn.linear_model import LinearRegression\n",
        "from sklearn.metrics import mean_squared_error, r2_score\n",
        "\n",
        "print(\"Ensemble imports ready\")\n",
        "print(f\"Pandas version: {pd.__version__}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Collect Predictions from All Three Agents\n",
        "\n",
        "We'll get predictions from all three agents on a subset of test data to train the ensemble.\n",
        "\n",
        "**Note**: This will take several minutes as we need to call the Modal API and OpenAI API for each test sample.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Collect predictions from all three agents\n",
        "# Using a subset to keep training time reasonable\n",
        "NUM_ENSEMBLE_SAMPLES = 50  # Adjust if you want more/fewer samples\n",
        "\n",
        "print(f\"Collecting predictions from all three agents on {NUM_ENSEMBLE_SAMPLES} test samples...\")\n",
        "print(\"This will take a few minutes...\\n\")\n",
        "\n",
        "specialist_predictions = []\n",
        "frontier_predictions = []\n",
        "rf_predictions = []\n",
        "actual_prices = []\n",
        "\n",
        "for i in tqdm(range(NUM_ENSEMBLE_SAMPLES)):\n",
        "    item = test[i]\n",
        "    desc = description(item)\n",
        "    \n",
        "    # Get predictions from each agent\n",
        "    spec_pred = specialist.price(desc)\n",
        "    front_pred = frontier.price(desc)\n",
        "    rf_pred = rf_agent.price(desc)\n",
        "    \n",
        "    specialist_predictions.append(spec_pred)\n",
        "    frontier_predictions.append(front_pred)\n",
        "    rf_predictions.append(rf_pred)\n",
        "    actual_prices.append(item.price)\n",
        "\n",
        "print(f\"\\nCollected {NUM_ENSEMBLE_SAMPLES} predictions from each agent!\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Prepare Training Data for Ensemble\n",
        "\n",
        "Create a DataFrame with all the predictions and engineered features.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create training data for ensemble\n",
        "ensemble_data = pd.DataFrame({\n",
        "    'Specialist': specialist_predictions,\n",
        "    'Frontier': frontier_predictions,\n",
        "    'RandomForest': rf_predictions,\n",
        "})\n",
        "\n",
        "# Add min and max features (the spread between agents is a rough signal of uncertainty)\n",
        "ensemble_data['Min'] = ensemble_data[['Specialist', 'Frontier', 'RandomForest']].min(axis=1)\n",
        "ensemble_data['Max'] = ensemble_data[['Specialist', 'Frontier', 'RandomForest']].max(axis=1)\n",
        "\n",
        "print(\"Ensemble training data:\")\n",
        "print(ensemble_data.head(10))\n",
        "print(f\"\\nShape: {ensemble_data.shape}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Train the Ensemble Model\n",
        "\n",
        "Train a Linear Regression to learn the best way to combine the three agent predictions.\n",
        "\n",
        "**Note**: We fit and evaluate on the same samples, so the metrics below are in-sample and somewhat optimistic; a held-out split would give a fairer estimate.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Train ensemble model\n",
        "print(\"Training Ensemble Model...\\n\")\n",
        "\n",
        "ensemble_model = LinearRegression()\n",
        "ensemble_model.fit(ensemble_data, actual_prices)\n",
        "\n",
        "# Make predictions\n",
        "ensemble_predictions = ensemble_model.predict(ensemble_data)\n",
        "\n",
        "# Evaluate\n",
        "ensemble_mae = np.mean(np.abs(np.array(actual_prices) - ensemble_predictions))\n",
        "ensemble_rmse = np.sqrt(mean_squared_error(actual_prices, ensemble_predictions))\n",
        "ensemble_r2 = r2_score(actual_prices, ensemble_predictions)\n",
        "\n",
        "print(\"Ensemble Model Performance:\")\n",
        "print(\"=\"*80)\n",
        "print(f\"Mean Absolute Error: ${ensemble_mae:.2f}\")\n",
        "print(f\"Root Mean Squared Error: ${ensemble_rmse:.2f}\")\n",
        "print(f\"R² Score: {ensemble_r2:.4f}\")\n",
        "print(\"=\"*80)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Analyze Ensemble Weights\n",
        "\n",
        "Let's see how the ensemble weights each agent's predictions.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Show the learned weights\n",
        "print(\"Ensemble Model Weights:\")\n",
        "print(\"=\"*80)\n",
        "for feature, coef in zip(ensemble_data.columns, ensemble_model.coef_):\n",
        "    print(f\"{feature:15s}: {coef:8.4f}\")\n",
        "print(f\"{'Intercept':15s}: {ensemble_model.intercept_:8.4f}\")\n",
        "print(\"=\"*80)\n",
        "print(\"\\nInterpretation: Higher weights mean that agent has more influence on the final prediction.\")\n"
      ]
    },
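    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The ensemble's output is just the linear combination of its inputs:\n",
        "\n",
        "`price = intercept + w_spec*Specialist + w_front*Frontier + w_rf*RandomForest + w_min*Min + w_max*Max`\n",
        "\n",
        "So a coefficient near 1 on one agent with near-zero coefficients elsewhere means the ensemble is effectively deferring to that agent. Keep in mind the Min/Max columns overlap heavily with the agent columns, so individual coefficients can be unstable and should be read with care.\n"
      ]
    },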
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Compare Individual vs Ensemble Performance\n",
        "\n",
        "Let's see if the ensemble actually improves over individual agents!\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Compare performance of all models\n",
        "spec_mae = np.mean(np.abs(np.array(actual_prices) - np.array(specialist_predictions)))\n",
        "front_mae = np.mean(np.abs(np.array(actual_prices) - np.array(frontier_predictions)))\n",
        "rf_mae = np.mean(np.abs(np.array(actual_prices) - np.array(rf_predictions)))\n",
        "\n",
        "print(\"Performance Comparison (Mean Absolute Error):\")\n",
        "print(\"=\"*80)\n",
        "print(f\"Specialist Agent:     ${spec_mae:8.2f}\")\n",
        "print(f\"Frontier Agent:       ${front_mae:8.2f}\")\n",
        "print(f\"RandomForest Agent:   ${rf_mae:8.2f}\")\n",
        "print(f\"Ensemble Agent:       ${ensemble_mae:8.2f}  <-- Combined!\")\n",
        "print(\"=\"*80)\n",
        "\n",
        "best_individual = min(spec_mae, front_mae, rf_mae)\n",
        "improvement = ((best_individual - ensemble_mae) / best_individual) * 100\n",
        "\n",
        "if ensemble_mae < best_individual:\n",
        "    print(f\"\\nEnsemble improves over best individual by {improvement:.1f}%\")\n",
        "else:\n",
        "    print(f\"\\nEnsemble is within {abs(improvement):.1f}% of best individual\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Save the Ensemble Model\n",
        "\n",
        "Save the trained ensemble so the EnsembleAgent can use it.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Save the ensemble model\n",
        "import joblib\n",
        "\n",
        "ensemble_model_path = 'ensemble_model.pkl'\n",
        "joblib.dump(ensemble_model, ensemble_model_path)\n",
        "\n",
        "print(f\"Ensemble model saved to: {ensemble_model_path}\")\n",
        "print(f\"File size: {os.path.getsize(ensemble_model_path) / 1024:.2f} KB\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Test the EnsembleAgent\n",
        "\n",
        "Now let's use the EnsembleAgent class which orchestrates all three agents!\n"
      ]
    },
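    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Illustrative sketch only (not the actual EnsembleAgent source): the agent\n",
        "# plausibly rebuilds the same 5-feature row we trained on and applies the\n",
        "# saved LinearRegression. Feature names and order must match training.\n",
        "def ensemble_price_sketch(description: str) -> float:\n",
        "    s = specialist.price(description)\n",
        "    f = frontier.price(description)\n",
        "    r = rf_agent.price(description)\n",
        "    X = pd.DataFrame({'Specialist': [s], 'Frontier': [f], 'RandomForest': [r]})\n",
        "    X['Min'] = X.min(axis=1)\n",
        "    X['Max'] = X.max(axis=1)\n",
        "    # Clamp at zero, since a linear model can predict negative prices\n",
        "    return max(0.0, ensemble_model.predict(X)[0])\n"
      ]
    },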
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Import and initialize EnsembleAgent\n",
        "from agents.ensemble_agent import EnsembleAgent\n",
        "\n",
        "print(\"Initializing EnsembleAgent (this creates all three sub-agents)...\")\n",
        "ensemble_agent = EnsembleAgent(collection)\n",
        "print(\"EnsembleAgent ready!\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Test the EnsembleAgent\n",
        "ensemble_test_products = [\n",
        "    \"Apple MacBook Air M2 chip 13-inch with 256GB SSD\",\n",
        "    \"Nintendo Switch OLED with Mario Kart bundle\",\n",
        "    \"Dyson V15 cordless vacuum cleaner\"\n",
        "]\n",
        "\n",
        "print(\"Testing EnsembleAgent (calls all 3 agents internally):\\n\")\n",
        "print(\"=\"*80)\n",
        "\n",
        "for product in ensemble_test_products:\n",
        "    print(f\"\\nProduct: {product}\")\n",
        "    print(\"-\"*80)\n",
        "    \n",
        "    # The ensemble agent calls all three agents and combines their predictions\n",
        "    final_price = ensemble_agent.price(product)\n",
        "    \n",
        "    print(f\"Final Ensemble Prediction: ${final_price:.2f}\")\n",
        "    print(\"=\"*80)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Import deal-related classes\n",
        "from agents.deals import ScrapedDeal, DealSelection, Deal, Opportunity\n",
        "from agents.scanner_agent import ScannerAgent\n",
        "\n",
        "print(\"Scanner imports complete\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Test RSS Feed Scraping\n",
        "\n",
        "First, let's see what deals are available from RSS feeds.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Fetch deals from RSS feeds\n",
        "print(\"Fetching deals from RSS feeds...\")\n",
        "print(\"This scrapes from multiple deal websites (DealNews, etc.)\")\n",
        "print(\"May take 1-2 minutes...\\n\")\n",
        "\n",
        "deals = ScrapedDeal.fetch(show_progress=True)\n",
        "\n",
        "print(f\"\\nFetched {len(deals)} deals from RSS feeds!\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Look at a sample deal\n",
        "sample_deal = deals[0] if deals else None\n",
        "\n",
        "if sample_deal:\n",
        "    print(\"\\nSample Deal #1:\")\n",
        "    print(\"=\"*80)\n",
        "    print(sample_deal.describe())\n",
        "    print(\"=\"*80)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Test the ScannerAgent\n",
        "\n",
        "The ScannerAgent uses GPT-4o-mini to intelligently select and parse the best deals.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Initialize ScannerAgent\n",
        "scanner = ScannerAgent()\n",
        "\n",
        "print(\"ScannerAgent initialized!\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Use the scanner to select and parse the best deals\n",
        "print(\"ScannerAgent is analyzing deals with GPT-4o-mini...\")\n",
        "print(\"This will select the 5 best deals with clear prices and descriptions\\n\")\n",
        "\n",
        "# Empty memory means it won't filter out any deals\n",
        "selected_deals = scanner.scan(memory=[])\n",
        "\n",
        "if selected_deals:\n",
        "    print(f\"ScannerAgent selected {len(selected_deals.deals)} high-quality deals:\\n\")\n",
        "    print(\"=\"*80)\n",
        "    \n",
        "    for i, deal in enumerate(selected_deals.deals, 1):\n",
        "        print(f\"\\nDeal {i}:\")\n",
        "        print(f\"  Description: {deal.product_description[:100]}...\")\n",
        "        print(f\"  Price: ${deal.price:.2f}\")\n",
        "        print(f\"  URL: {deal.url}\")\n",
        "        print(\"-\"*80)\n",
        "else:\n",
        "    print(\"No deals found or parsed successfully\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Import messaging and planning agents\n",
        "from agents.messaging_agent import MessagingAgent\n",
        "from agents.planning_agent import PlanningAgent\n",
        "\n",
        "print(\"Messaging and Planning imports complete\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Set Up Pushover\n",
        "\n",
        "For push notifications, you'll need Pushover:\n",
        "1. Sign up at https://pushover.net (free tier available)\n",
        "2. Create an application to get your PUSHOVER_TOKEN\n",
        "3. Get your PUSHOVER_USER key from your account\n",
        "4. Add both to your `.env` file\n",
        "\n",
        "If you don't have Pushover set up, the agent will still work - it just won't send notifications.\n"
      ]
    },
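    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Under the hood, a Pushover push is a single HTTP POST -- a sketch of\n",
        "# what MessagingAgent likely does (not its actual source):\n",
        "import requests\n",
        "\n",
        "def push_sketch(message: str):\n",
        "    response = requests.post(\n",
        "        \"https://api.pushover.net/1/messages.json\",\n",
        "        data={\n",
        "            \"token\": os.getenv(\"PUSHOVER_TOKEN\"),\n",
        "            \"user\": os.getenv(\"PUSHOVER_USER\"),\n",
        "            \"message\": message,\n",
        "        },\n",
        "    )\n",
        "    response.raise_for_status()\n"
      ]
    },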
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Initialize MessagingAgent\n",
        "messenger = MessagingAgent()\n",
        "\n",
        "print(\"MessagingAgent initialized!\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Test push notification (only if you have Pushover configured)\n",
        "\n",
        "pushover_configured = os.getenv('PUSHOVER_USER') and os.getenv('PUSHOVER_TOKEN')\n",
        "\n",
        "if pushover_configured:\n",
        "    print(\"Testing push notification...\")\n",
        "    messenger.push(\"Test from Price is Right system!\")\n",
        "    print(\"Check your phone/device for the notification!\")\n",
        "else:\n",
        "    print(\"Pushover not configured - skipping notification test\")\n",
        "    print(\"To enable: add PUSHOVER_USER and PUSHOVER_TOKEN to your .env file\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Initialize the PlanningAgent\n",
        "\n",
        "The PlanningAgent coordinates all the other agents!\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Initialize PlanningAgent\n",
        "planner = PlanningAgent(collection)\n",
        "\n",
        "print(\"PlanningAgent initialized and ready to coordinate all agents!\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Test Single Deal Processing\n",
        "\n",
        "Let's test how the planner processes a single deal.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Create a test deal\n",
        "test_deal = Deal(\n",
        "    product_description=\"Sony WH-1000XM5 wireless noise-cancelling headphones, premium sound quality, 30-hour battery life\",\n",
        "    price=299.99,\n",
        "    url=\"https://example.com/deal\"\n",
        ")\n",
        "\n",
        "print(\"Testing PlanningAgent with a single deal...\")\n",
        "print(f\"\\nDeal: {test_deal.product_description[:80]}...\")\n",
        "print(f\"Listed Price: ${test_deal.price:.2f}\\n\")\n",
        "\n",
        "# Process the deal\n",
        "opportunity = planner.run(test_deal)\n",
        "\n",
        "print(f\"\\nResults:\")\n",
        "print(\"=\"*80)\n",
        "print(f\"Deal Price:     ${opportunity.deal.price:.2f}\")\n",
        "print(f\"Estimate:       ${opportunity.estimate:.2f}\")\n",
        "print(f\"Discount:       ${opportunity.discount:.2f}\")\n",
        "print(f\"Good deal?      {'YES!' if opportunity.discount > 50 else 'Not quite'}\")\n",
        "print(\"=\"*80)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Run Complete Planning Cycle\n",
        "\n",
        "Now let's run a full planning cycle: scan, price, and identify opportunities!\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Run a full planning cycle\n",
        "print(\"Running full planning cycle...\")\n",
        "print(\"This will:\")\n",
        "print(\"  1. Scan RSS feeds for deals\")\n",
        "print(\"  2. Parse with GPT-4o-mini\")\n",
        "print(\"  3. Price each deal with EnsembleAgent\")\n",
        "print(\"  4. Identify best opportunity\")\n",
        "print(\"  5. Send notification if discount > $50\\n\")\n",
        "\n",
        "print(\"=\"*80)\n",
        "\n",
        "best_opportunity = planner.plan(memory=[])\n",
        "\n",
        "if best_opportunity:\n",
        "    print(f\"\\nBEST OPPORTUNITY FOUND:\")\n",
        "    print(\"=\"*80)\n",
        "    print(f\"Product: {best_opportunity.deal.product_description[:100]}...\")\n",
        "    print(f\"Deal Price: ${best_opportunity.deal.price:.2f}\")\n",
        "    print(f\"Estimated Value: ${best_opportunity.estimate:.2f}\")\n",
        "    print(f\"Potential Savings: ${best_opportunity.discount:.2f}\")\n",
        "    print(f\"URL: {best_opportunity.deal.url}\")\n",
        "    print(\"=\"*80)\n",
        "    \n",
        "    if pushover_configured:\n",
        "        print(\"\\nNotification sent to your device!\")\n",
        "else:\n",
        "    print(\"\\nNo opportunities found with discount > $50\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Import Gradio and framework\n",
        "import gradio as gr\n",
        "from deal_agent_framework import DealAgentFramework\n",
        "\n",
        "print(\"Gradio and framework imports complete\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Initialize the Agent Framework\n",
        "\n",
        "The DealAgentFramework manages the entire system including memory persistence.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Initialize the agent framework\n",
        "agent_framework = DealAgentFramework()\n",
        "agent_framework.init_agents_as_needed()\n",
        "\n",
        "print(\"Agent framework initialized and ready!\")\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "\n",
        "with gr.Blocks(title=\"The Price is Right\", fill_width=True) as ui:\n",
        "    \n",
        "    gr.Markdown('<div style=\"text-align: center;font-size:24px\">The Price is Right - Deal Hunting AI</div>')\n",
        "    gr.Markdown('<div style=\"text-align: center;font-size:14px\">Autonomous agent framework for finding great deals</div>')\n",
        "    \n",
        "    # Display current opportunities\n",
        "    opportunities_display = gr.Dataframe(\n",
        "        headers=[\"Description\", \"Price\", \"Estimate\", \"Discount\", \"URL\"],\n",
        "        label=\"Opportunities Found\",\n",
        "        wrap=True,\n",
        "        column_widths=[4, 1, 1, 1, 2],\n",
        "        row_count=10,\n",
        "    )\n",
        "    \n",
        "    # Button to trigger a scan\n",
        "    scan_button = gr.Button(\"Run Scan Cycle\", variant=\"primary\")\n",
        "    status_text = gr.Textbox(label=\"Status\", lines=3)\n",
        "    \n",
        "    def run_scan():\n",
        "        try:\n",
        "            # Run the planning cycle (returns full memory list)\n",
        "            memory_before_count = len(agent_framework.memory)\n",
        "            all_opportunities = agent_framework.run()\n",
        "            \n",
        "            if all_opportunities and len(all_opportunities) > 0:\n",
        "                # Create table data from all opportunities\n",
        "                table_data = [[\n",
        "                    opp.deal.product_description[:80] + \"...\",\n",
        "                    f\"${opp.deal.price:.2f}\",\n",
        "                    f\"${opp.estimate:.2f}\",\n",
        "                    f\"${opp.discount:.2f}\",\n",
        "                    opp.deal.url\n",
        "                ] for opp in all_opportunities]\n",
        "                \n",
        "                # Check if new opportunity was added\n",
        "                if len(all_opportunities) > memory_before_count:\n",
        "                    latest = all_opportunities[-1]\n",
        "                    status = f\"New opportunity found! Discount: ${latest.discount:.2f}\"\n",
        "                else:\n",
        "                    status = \"Scan complete. No new opportunities found (discount < $50)\"\n",
        "                    \n",
        "                return table_data, status\n",
        "            else:\n",
        "                status = \"Scan complete. No opportunities found.\"\n",
        "                return gr.update(), status\n",
        "        except Exception as e:\n",
        "            import traceback\n",
        "            error_details = traceback.format_exc()\n",
        "            return gr.update(), f\"Error: {str(e)}\\n\\nDetails:\\n{error_details}\"\n",
        "    \n",
        "    scan_button.click(run_scan, outputs=[opportunities_display, status_text])\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Launch the UI\n",
        "ui.launch(inbrowser=True)\n"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": ".venv",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.12.12"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}
