{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# ADVANCED WEB SCRAPING LAB\n",
    "## Multi-Strategy Scraping + Async Batch Analysis\n",
    "\n",
    "### Building on Day 1: A Real-World Extension\n",
    "\n",
    "In the original Day 1 lab, we built a simple web scraper that summarizes websites. However, we hit a wall with JavaScript-heavy sites like OpenAI.com. This lab extends that work with **production-ready techniques** that you'd actually use in real applications.\n",
    "\n",
    "## What You'll Build\n",
    "\n",
    "1. **Multi-Strategy Website Handler**: Automatically detects whether a site needs JavaScript rendering and switches between `requests` (fast) and Playwright (robust)\n",
    "2. **Async Batch Analyzer**: Process multiple URLs concurrently to compare summaries across sites\n",
    "3. **Practical Applications**: Competitive analysis, research automation, and more\n",
    "\n",
    "<table style=\"margin: 0; text-align: left; width: 100%;\">\n",
    "    <tr>\n",
    "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
    "            <img src=\"../assets/important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
    "        </td>\n",
    "        <td style=\"vertical-align: middle; padding-left: 20px;\">\n",
    "            <h2 style=\"color:#900;\">Prerequisites</h2>\n",
    "            <span style=\"color:#900;\">This builds on Day 1, so make sure you've completed <b>day1.ipynb</b> first! You'll need the same setup (OpenAI API key, .env file, etc.). We'll also install Playwright for handling JavaScript-rendered sites.</span>\n",
    "        </td>\n",
    "    </tr>\n",
    "</table>\n",
    "\n",
    "<table style=\"margin: 20px 0 0 0; text-align: left; width: 100%;\">\n",
    "    <tr>\n",
    "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
    "            <img src=\"../assets/business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
    "        </td>\n",
    "        <td style=\"vertical-align: middle; padding-left: 20px;\">\n",
    "            <h2 style=\"color:#181;\">Business value of this exercise</h2>\n",
    "            <span style=\"color:#181;\">Real-world scraping requires robustness and efficiency. The techniques here teach you:<br/>\n",
    "            • <b>Fallback strategies</b>: Essential for production systems<br/>\n",
    "            • <b>Async/concurrent processing</b>: Critical for scaling<br/>\n",
    "            • <b>Comparative analysis</b>: Competitive intelligence, market research, etc.<br/>\n",
    "            These patterns apply to any data pipeline, not just web scraping!</span>\n",
    "        </td>\n",
    "    </tr>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 1: Setup and Installation\n",
    "\n",
    "**Before running this notebook**, you need to install Playwright and BeautifulSoup for handling JavaScript-heavy websites.\n",
    "\n",
    "### Recommended: Install via terminal (before running this notebook)\n",
    "\n",
    "Open your terminal and run:\n",
    "\n",
    "```bash\n",
    "cd /.../llm_engineering\n",
    "uv add playwright beautifulsoup4\n",
    "uv run playwright install chromium\n",
    "```\n",
    "\n",
    "**Note:** When using `uv`, you need to use `uv run` to execute commands in the managed environment. This ensures Playwright's browser binaries are installed correctly.\n",
    "\n",
    "Then select the `.venv` kernel in this notebook and you're ready to go!\n",
    "\n",
    "### Alternative: Install from within the notebook\n",
    "\n",
    "If you prefer, you can uncomment and run the code cell below to install directly into the kernel environment this notebook is using."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# OPTION 2: Install from within the notebook\n",
    "# (Only if you didn't already install via terminal above)\n",
    "\n",
    "# %pip install playwright beautifulsoup4   # %pip installs into this notebook's kernel\n",
    "# !playwright install chromium"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Import our dependencies"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Standard imports from Day 1\n",
    "import os\n",
    "from dotenv import load_dotenv\n",
    "from IPython.display import Markdown, display\n",
    "from openai import OpenAI\n",
    "\n",
    "# New imports for advanced scraping\n",
    "import asyncio\n",
    "from typing import List, Dict, Optional\n",
    "import requests\n",
    "from bs4 import BeautifulSoup\n",
    "from playwright.async_api import async_playwright\n",
    "import time"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Load environment variables and connect to OpenAI"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load environment variables\n",
    "load_dotenv(override=True)\n",
    "api_key = os.getenv('OPENAI_API_KEY')\n",
    "\n",
    "# Check the key (same as Day 1)\n",
    "if not api_key:\n",
    "    print(\"No API key was found - please head over to the troubleshooting notebook!\")\n",
    "elif not api_key.startswith(\"sk-proj-\"):\n",
    "    print(\"An API key was found, but it doesn't start with sk-proj-; please check you're using the right key\")\n",
    "elif api_key.strip() != api_key:\n",
    "    print(\"An API key was found, but it looks like it might have space or tab characters\")\n",
    "else:\n",
    "    print(\"API key found and looks good so far!\")\n",
    "\n",
    "# Initialize OpenAI client\n",
    "openai = OpenAI()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 2: Multi-Strategy Website Fetcher\n",
    "\n",
    "This is where it gets interesting! We'll build a smart scraper that:\n",
    "1. First tries the fast approach (simple HTTP request)\n",
    "2. Detects if JavaScript rendering is needed\n",
    "3. Falls back to Playwright if necessary\n",
    "\n",
    "### How do we detect if a site needs JavaScript?\n",
    "\n",
    "We use a few heuristics:\n",
    "- Check if the page content is suspiciously short\n",
    "- Look for common JS framework markers (React, Vue, Angular)\n",
    "- Check for loading indicators or empty root divs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def fetch_with_requests(url: str) -> Optional[str]:\n",
    "    \"\"\"\n",
    "    Try to fetch a website using simple HTTP requests.\n",
    "    Fast and cheap, but won't work for JS-heavy sites.\n",
    "    \"\"\"\n",
    "    try:\n",
    "        headers = {\n",
    "            'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36'\n",
    "        }\n",
    "        response = requests.get(url, headers=headers, timeout=10)\n",
    "        response.raise_for_status()\n",
    "        \n",
    "        soup = BeautifulSoup(response.content, 'html.parser')\n",
    "        \n",
    "        # Remove script and style elements\n",
    "        for script in soup([\"script\", \"style\"]):\n",
    "            script.decompose()\n",
    "        \n",
    "        text = soup.get_text(separator='\\n')\n",
    "        # Clean up whitespace\n",
    "        lines = (line.strip() for line in text.splitlines())\n",
    "        text = '\\n'.join(line for line in lines if line)\n",
    "        \n",
    "        return text\n",
    "    \n",
    "    except Exception as e:\n",
    "        print(f\"Request failed: {e}\")\n",
    "        return None"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def needs_javascript(content: str, url: str) -> bool:\n",
    "    \"\"\"\n",
    "    Heuristic to detect if a website needs JavaScript rendering.\n",
    "    This is not perfect, but works well in practice!\n",
    "    \"\"\"\n",
    "    if not content:\n",
    "        return True\n",
    "    \n",
    "    # If content is very short, probably need JS\n",
    "    if len(content.strip()) < 200:\n",
    "        return True\n",
    "    \n",
    "    # Check for common JS framework indicators combined with minimal content\n",
    "    # (we're matching against extracted text, so markup-only markers rarely survive;\n",
    "    # the word-count and loading-message checks below do most of the real work)\n",
    "    js_indicators = [\n",
    "        'root',  # React often uses <div id=\"root\">\n",
    "        '__NEXT_DATA__',  # Next.js\n",
    "        'nuxt',  # Nuxt.js\n",
    "        'ng-version',  # Angular\n",
    "    ]\n",
    "    \n",
    "    content_lower = content.lower()\n",
    "    \n",
    "    # If we see these AND very little actual content, it's probably JS-rendered\n",
    "    has_framework = any(indicator.lower() in content_lower for indicator in js_indicators)\n",
    "    has_little_text = len(content.strip().split()) < 50\n",
    "    \n",
    "    # Additional check: look for common \"loading\" or placeholder patterns\n",
    "    loading_indicators = [\n",
    "        'loading...',\n",
    "        'please enable javascript',\n",
    "        'javascript is required',\n",
    "        'you need to enable javascript'\n",
    "    ]\n",
    "    has_loading = any(indicator in content_lower for indicator in loading_indicators)\n",
    "    \n",
    "    return (has_framework and has_little_text) or has_loading"
   ]
  },
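  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before moving on, a quick sanity check of the heuristic on hand-made snippets (illustrative stand-ins, not real fetched pages):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity-check the heuristic on illustrative strings (not real pages)\n",
    "short_shell = \"Loading...\"  # the kind of placeholder an unrendered SPA returns\n",
    "real_article = \"word \" * 300  # plenty of text, no JS markers\n",
    "\n",
    "print(needs_javascript(short_shell, \"https://example.com\"))   # True\n",
    "print(needs_javascript(real_article, \"https://example.com\"))  # False"
   ]
  },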
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "async def fetch_with_playwright(url: str) -> Optional[str]:\n",
    "    \"\"\"\n",
    "    Fetch a website using Playwright, which runs a real browser.\n",
    "    Slower and more resource-intensive, but handles JavaScript!\n",
    "    \"\"\"\n",
    "    try:\n",
    "        async with async_playwright() as p:\n",
    "            browser = await p.chromium.launch(headless=True)\n",
    "            page = await browser.new_page()\n",
    "            \n",
    "            # Set a more realistic user agent to avoid bot detection\n",
    "            await page.set_extra_http_headers({\n",
    "                'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'\n",
    "            })\n",
    "            \n",
    "            # Navigate to the page with more lenient wait condition\n",
    "            # Using 'domcontentloaded' instead of 'networkidle' for sites with lots of tracking/ads\n",
    "            try:\n",
    "                await page.goto(url, wait_until='domcontentloaded', timeout=15000)\n",
    "            except Exception as goto_error:\n",
    "                # If domcontentloaded fails, try again with the 'load' event\n",
    "                print(f\"  → {goto_error}; retrying with 'load' event...\")\n",
    "                await page.goto(url, wait_until='load', timeout=15000)\n",
    "            \n",
    "            # Wait a bit for any lazy-loaded content\n",
    "            await page.wait_for_timeout(3000)\n",
    "            \n",
    "            # Get the text content\n",
    "            content = await page.inner_text('body')\n",
    "            \n",
    "            await browser.close()\n",
    "            return content\n",
    "            \n",
    "    except Exception as e:\n",
    "        print(f\"Playwright failed: {e}\")\n",
    "        return None"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "async def fetch_website_smart(url: str, verbose: bool = True) -> str:\n",
    "    \"\"\"\n",
    "    Smart website fetcher that tries requests first, then falls back to Playwright.\n",
    "    This is the 'multi-strategy' approach!\n",
    "    \"\"\"\n",
    "    if verbose:\n",
    "        print(f\"Fetching {url}...\")\n",
    "    \n",
    "    # Strategy 1: Try simple requests first (fast!)\n",
    "    if verbose:\n",
    "        print(\"  → Trying simple HTTP request...\")\n",
    "    \n",
    "    content = fetch_with_requests(url)\n",
    "    \n",
    "    # Check if we need JavaScript rendering\n",
    "    if content and not needs_javascript(content, url):\n",
    "        if verbose:\n",
    "            print(\"  ✓ Success with simple request!\")\n",
    "        return content\n",
    "    \n",
    "    # Strategy 2: Fall back to Playwright\n",
    "    if verbose:\n",
    "        print(\"  → Falling back to Playwright (JavaScript rendering)...\")\n",
    "    \n",
    "    content = await fetch_with_playwright(url)\n",
    "    \n",
    "    if content:\n",
    "        if verbose:\n",
    "            print(\"  ✓ Success with Playwright!\")\n",
    "        return content\n",
    "    else:\n",
    "        if verbose:\n",
    "            print(\"  ✗ Failed to fetch website\")\n",
    "        return \"\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Let's test our smart fetcher!\n",
    "\n",
    "**Important: The Modern Web Landscape (2025)**\n",
    "\n",
    "Most modern websites now use **Server-Side Rendering (SSR)** or **hybrid rendering**, which means:\n",
    "- ✅ React/Vue/Angular sites often render HTML on the server\n",
    "- ✅ Simple HTTP requests work for most sites!\n",
    "- ✅ Playwright is needed less often than you might think\n",
    "\n",
    "**When do you REALLY need Playwright?**\n",
    "1. **Bot protection**: Sites with Cloudflare, reCAPTCHA, etc.\n",
    "2. **Dynamic interactions**: Infinite scroll, \"Load More\" buttons, login walls\n",
    "3. **Pure CSR**: Older Single-Page Apps without SSR\n",
    "4. **Complex scraping**: Google Maps reviews, social media feeds, etc.\n",
    "\n",
    "**Our Strategy:**\n",
    "- Try simple requests first (fast, cheap)\n",
    "- Detect if meaningful content was returned\n",
    "- Fall back to Playwright only when needed\n",
    "\n",
    "Let's test this in action!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Test with a simple site\n",
    "simple_content = await fetch_website_smart(\"https://edwarddonner.com\")\n",
    "print(f\"\\nGot {len(simple_content)} characters\\n\")\n",
    "print(simple_content[:500] + \"...\")  # Preview"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Test with different types of sites\n",
    "# Most modern sites now use SSR (Server-Side Rendering) so simple requests work!\n",
    "\n",
    "# Example 1: Simple static site\n",
    "# example_url_a = \"https://edwarddonner.com\"\n",
    "# print(f\"=== Testing {example_url_a} (static site) ===\")\n",
    "# static_content = await fetch_website_smart(example_url_a)\n",
    "# print(f\"Got {len(static_content)} characters\\n\")\n",
    "\n",
    "# Example 2: The JavaScript-heavy site that tripped us up in Day 1\n",
    "example_url_b = \"https://openai.com\"\n",
    "print(f\"=== Testing {example_url_b} ===\")\n",
    "react_content = await fetch_website_smart(example_url_b)\n",
    "print(f\"Got {len(react_content)} characters\\n\")\n",
    "print(react_content[:500] + \"...\")  # Preview\n",
    "\n",
    "# Note: Most modern sites use SSR or hybrid rendering, so Playwright is rarely needed!\n",
    "# The main use cases for Playwright are:\n",
    "# 1. Sites that explicitly block non-browser agents (bot protection)\n",
    "# 2. Sites with heavy client-side interactivity (SPAs without SSR)\n",
    "# 3. Sites requiring scrolling/clicking to load content (infinite scroll, etc.)\n",
    "#\n",
    "# See the Google Maps example in community-contributions for advanced Playwright usage!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 3: Async Batch Analysis\n",
    "\n",
    "Now let's build the second feature: analyzing multiple URLs concurrently!\n",
    "\n",
    "### Why async?\n",
    "\n",
    "When fetching multiple websites:\n",
    "- **Sequential**: Fetch site 1 (2s) → Fetch site 2 (2s) → Fetch site 3 (2s) = **6 seconds total**\n",
    "- **Async**: Fetch all 3 at once = **~2 seconds total**\n",
    "\n",
    "Async is critical for production systems!"
   ]
  },
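  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The timing claim above is easy to verify with `asyncio.sleep` standing in for network latency. This is a toy sketch; the 0.5-second delay and the example URLs are arbitrary stand-ins, not real fetches:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy demo: three concurrent \"fetches\" finish in ~0.5s total, not ~1.5s\n",
    "async def fake_fetch(url: str) -> str:\n",
    "    await asyncio.sleep(0.5)  # stand-in for network latency\n",
    "    return f\"content from {url}\"\n",
    "\n",
    "demo_urls = [\"https://a.example\", \"https://b.example\", \"https://c.example\"]\n",
    "\n",
    "start = time.time()\n",
    "pages = await asyncio.gather(*(fake_fetch(u) for u in demo_urls))\n",
    "print(f\"Fetched {len(pages)} pages in {time.time() - start:.2f}s\")"
   ]
  },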
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# First, let's update our prompts from Day 1\n",
    "\n",
    "system_prompt = \"\"\"\n",
    "You are an assistant that analyzes the contents of a website,\n",
    "and provides a short summary, ignoring text that might be navigation related.\n",
    "Respond in markdown. Do not wrap the markdown in a code block - respond just with the markdown.\n",
    "\"\"\"\n",
    "\n",
    "user_prompt_prefix = \"\"\"\n",
    "Here are the contents of a website.\n",
    "Provide a short summary of this website.\n",
    "If it includes news or announcements, then summarize these too.\n",
    "\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "async def summarize_website(url: str, verbose: bool = False) -> Dict:\n",
    "    \"\"\"\n",
    "    Fetch and summarize a single website.\n",
    "    Returns a dictionary with url, content, and summary.\n",
    "    \"\"\"\n",
    "    try:\n",
    "        # Fetch the website\n",
    "        content = await fetch_website_smart(url, verbose=verbose)\n",
    "        \n",
    "        if not content:\n",
    "            return {\"url\": url, \"error\": \"Failed to fetch content\", \"summary\": \"\"}\n",
    "        \n",
    "        # Create messages for OpenAI\n",
    "        messages = [\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": user_prompt_prefix + content}\n",
    "        ]\n",
    "        \n",
    "        # Call OpenAI API\n",
    "        # (note: this client is synchronous, so the LLM calls run one at a time\n",
    "        # even under asyncio.gather; the website fetches are still concurrent)\n",
    "        response = openai.chat.completions.create(\n",
    "            model=\"gpt-4.1-mini\",\n",
    "            messages=messages\n",
    "        )\n",
    "        \n",
    "        summary = response.choices[0].message.content\n",
    "        \n",
    "        return {\n",
    "            \"url\": url,\n",
    "            \"summary\": summary,\n",
    "            \"content_length\": len(content)\n",
    "        } # type: ignore\n",
    "        \n",
    "    except Exception as e:\n",
    "        return {\"url\": url, \"error\": str(e), \"summary\": \"\"}"
   ]
  },
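  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One caveat about `summarize_website`: the `openai` client we created is synchronous, so the LLM calls block the event loop and run one at a time even inside `asyncio.gather` (the website fetches are still concurrent). If you want the API calls concurrent too, the library also ships an `AsyncOpenAI` client. Here's a sketch of what that would look like; it isn't wired into the rest of the lab:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: fully concurrent LLM calls via AsyncOpenAI (sketch, not used below)\n",
    "from openai import AsyncOpenAI\n",
    "\n",
    "async_openai = AsyncOpenAI()\n",
    "\n",
    "async def summarize_content_async(content: str) -> str:\n",
    "    response = await async_openai.chat.completions.create(\n",
    "        model=\"gpt-4.1-mini\",\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": user_prompt_prefix + content},\n",
    "        ],\n",
    "    )\n",
    "    return response.choices[0].message.content"
   ]
  },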
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "async def analyze_websites_batch(urls: List[str], verbose: bool = True) -> List[Dict[str, str]]:\n",
    "    \"\"\"\n",
    "    Analyze multiple websites concurrently using asyncio.\n",
    "    This is MUCH faster than doing them one at a time!\n",
    "    \"\"\"\n",
    "    if verbose:\n",
    "        print(f\"Starting batch analysis of {len(urls)} websites...\\n\")\n",
    "    \n",
    "    start_time = time.time()\n",
    "    \n",
    "    # Run all summarizations concurrently\n",
    "    tasks = [summarize_website(url, verbose=verbose) for url in urls]\n",
    "    results = await asyncio.gather(*tasks)\n",
    "    \n",
    "    elapsed = time.time() - start_time\n",
    "    \n",
    "    if verbose:\n",
    "        print(f\"\\n✓ Completed in {elapsed:.2f} seconds\")\n",
    "        print(f\"  Average: {elapsed/len(urls):.2f} seconds per site\\n\")\n",
    "    \n",
    "    return results"
   ]
  },
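  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`analyze_websites_batch` launches every URL at once, which is fine for three sites but impolite (and fragile) for thirty. A common pattern is to cap in-flight work with `asyncio.Semaphore`. Here's a sketch of a bounded variant; the limit of 3 is an arbitrary choice:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Bounded-concurrency variant: at most `limit` sites in flight at once\n",
    "async def analyze_websites_batch_limited(urls: List[str], limit: int = 3) -> List[Dict[str, str]]:\n",
    "    semaphore = asyncio.Semaphore(limit)\n",
    "\n",
    "    async def bounded(url: str) -> Dict[str, str]:\n",
    "        async with semaphore:  # waits here if `limit` tasks are already running\n",
    "            return await summarize_website(url)\n",
    "\n",
    "    return await asyncio.gather(*(bounded(url) for url in urls))"
   ]
  },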
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def display_batch_results(results: List[Dict[str, str]]):\n",
    "    \"\"\"\n",
    "    Display the results in a nice formatted way.\n",
    "    \"\"\"\n",
    "    for i, result in enumerate(results, 1):\n",
    "        print(f\"\\n{'='*80}\")\n",
    "        print(f\"Result {i}: {result['url']}\")\n",
    "        print(f\"{'='*80}\\n\")\n",
    "        \n",
    "        if 'error' in result:\n",
    "            print(f\"❌ Error: {result['error']}\")\n",
    "        else:\n",
    "            print(f\"📊 Content length: {result['content_length']:,} characters\\n\")\n",
    "            display(Markdown(result['summary']))\n",
    "            print()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Let's try our batch analyzer!\n",
    "\n",
    "We'll analyze several AI company websites at once."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# List of websites to analyze\n",
    "# Using sites that are generally scraper-friendly for educational purposes\n",
    "example_sites = [\n",
    "    \"https://anthropic.com\",\n",
    "    \"https://www.deepmind.com\",\n",
    "    \"https://openai.com\",\n",
    "]\n",
    "\n",
    "# Analyze them all at once!\n",
    "results = await analyze_websites_batch(example_sites)\n",
    "\n",
    "# Display the results\n",
    "display_batch_results(results)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 4: Comparative Analysis\n",
    "\n",
    "Now that we can analyze multiple sites, let's do something really useful:\n",
    "**Ask GPT to compare them!**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_comparative_analysis(results: List[Dict[str, str]], question: str) -> str:\n",
    "    \"\"\"\n",
    "    Use GPT to perform comparative analysis across multiple website summaries.\n",
    "    \"\"\"\n",
    "    # Build the context from all summaries\n",
    "    context = \"\"\n",
    "    for i, result in enumerate(results, 1):\n",
    "        if 'error' not in result:\n",
    "            context += f\"\\n\\nWebsite {i}: {result['url']}\\n\"\n",
    "            context += f\"Summary: {result['summary']}\\n\"\n",
    "    \n",
    "    # Create the comparison prompt\n",
    "    messages = [\n",
    "        {\n",
    "            \"role\": \"system\",\n",
    "            \"content\": \"You are an analyst that compares and contrasts information from multiple sources.\"\n",
    "        },\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": f\"{context}\\n\\nQuestion: {question}\\n\\nProvide a detailed comparison.\"\n",
    "        }\n",
    "    ]\n",
    "    \n",
    "    response = openai.chat.completions.create(\n",
    "        model=\"gpt-4.1-mini\",\n",
    "        messages=messages\n",
    "    )\n",
    "    \n",
    "    return response.choices[0].message.content"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's compare the AI companies!\n",
    "comparison = create_comparative_analysis(\n",
    "    results,\n",
    "    \"How do these companies differ in their approach to AI safety and their products?\"\n",
    ")\n",
    "\n",
    "display(Markdown(\"## Comparative Analysis\\n\\n\" + comparison))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 5: Try It Yourself!\n",
    "\n",
    "Now it's your turn to experiment. Here are some ideas:\n",
    "\n",
    "1. **Competitive Analysis**: Compare your company's website with competitors\n",
    "2. **News Aggregation**: Fetch multiple news sites and find common themes\n",
    "3. **Product Research**: Compare product features across different vendors\n",
    "4. **Job Market Research**: Analyze job postings from multiple companies\n",
    "\n",
    "Use the cells below to try your own analysis!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Your turn! Try analyzing websites of your choice\n",
    "# Add your URLs to this list\n",
    "my_urls = []\n",
    "\n",
    "# Example\n",
    "# my_urls = [\n",
    "#     \"https://msnbc.com\",\n",
    "#     \"https://bbc.com\",\n",
    "#     \"https://cnn.com\",\n",
    "#     \"https://foxnews.com\"\n",
    "# ]\n",
    "\n",
    "# analyze_websites_batch divides by len(urls), so guard against an empty list\n",
    "if my_urls:\n",
    "    my_results = await analyze_websites_batch(my_urls)\n",
    "    display_batch_results(my_results)\n",
    "else:\n",
    "    print(\"Add some URLs to my_urls first!\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Try your own comparative question!\n",
    "my_question = \"\"\n",
    "\n",
    "# Example\n",
    "# my_question = \"How neutral and unbiased are the news reporting styles of these websites? Do they lean towards any particular political perspective?\"\n",
    "\n",
    "if my_question:\n",
    "    my_comparison = create_comparative_analysis(my_results, my_question)\n",
    "    display(Markdown(my_comparison))\n",
    "else:\n",
    "    print(\"Set my_question above first!\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<table style=\"margin: 0; text-align: left; width: 100%;\">\n",
    "    <tr>\n",
    "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
    "            <img src=\"../assets/business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
    "        </td>\n",
    "        <td style=\"vertical-align: middle; padding-left: 20px;\">\n",
    "            <h2 style=\"color:#181;\">What You Learned</h2>\n",
    "            <span style=\"color:#181;\">\n",
    "                <b>1. Fallback Strategies</b>: Try fast methods first, fall back to robust ones<br/>\n",
    "                <b>2. Async Programming</b>: Process multiple items concurrently for efficiency<br/>\n",
    "                <b>3. Detection Heuristics</b>: Use simple rules to make smart decisions<br/>\n",
    "                <b>4. Comparative Analysis</b>: Combine multiple data sources for insights<br/><br/>\n",
    "                These patterns apply far beyond web scraping - they're fundamental to production systems!\n",
    "            </span>\n",
    "        </td>\n",
    "    </tr>\n",
    "</table>\n",
    "\n",
    "<table style=\"margin: 20px 0 0 0; text-align: left; width: 100%;\">\n",
    "    <tr>\n",
    "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
    "            <img src=\"../assets/important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
    "        </td>\n",
    "        <td style=\"vertical-align: middle; padding-left: 20px;\">\n",
    "            <h2 style=\"color:#900;\">Next Steps</h2>\n",
    "            <span style=\"color:#900;\">\n",
    "                • Try improving the JS detection heuristics<br/>\n",
    "                • Add error handling and retry logic<br/>\n",
    "                • Implement rate limiting for batch requests<br/>\n",
    "                • Store results in a database for historical analysis<br/>\n",
    "                • Build a monitoring system for competitor websites<br/><br/>\n",
    "                Share your improvements via Pull Request to the community-contributions folder!\n",
    "            </span>\n",
    "        </td>\n",
    "    </tr>\n",
    "</table>"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "llm-engineering (3.12.10)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
