{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 🚀 Web Data Extraction with WaterCrawl, LiteLLM, and Rank-BM25\n",
    "\n",
    "Welcome to this **step-by-step Jupyter Notebook** where we explore goal-oriented web crawling. This tutorial will guide you through using WaterCrawl to map a website, filter URLs with Rank-BM25 and LLMs, scrape content, and analyze it to meet a specific objective.\n",
    "\n",
    "#### What’s inside?\n",
    "| 🔧 Component | 💡 Why we’re using it |\n",
    "|--------------|----------------------|\n",
    "| **WaterCrawl** | For mapping websites and scraping content with precision. |\n",
    "| **LiteLLM** | To interact with various LLM providers for strategy generation and content analysis. |\n",
    "| **Rank-BM25** | For efficient keyword-based URL filtering and ranking. |\n",
    "\n",
    "#### Notebook Flow 🗺️\n",
    "1. **Setup**: Install dependencies and configure API keys.\n",
    "2. **Initialization**: Set up the target URL and objective.\n",
    "3. **Sitemap Extraction**: Use WaterCrawl to fetch the website's sitemap.\n",
    "4. **URL Filtering**: Apply Rank-BM25 and LLM-based filtering to select relevant URLs.\n",
    "5. **Content Scraping**: Scrape content from top URLs using WaterCrawl.\n",
    "6. **Content Analysis**: Analyze scraped content with LLMs to meet the objective.\n",
    "7. **Results**: Compile and display the final structured response.\n",
    "\n",
    "#### Why you’ll ❤️ this approach\n",
    "- **Efficiency**: Quickly process complex websites.\n",
    "- **Flexibility**: Switch between LLM providers for different tasks.\n",
    "- **Precision**: Combine keyword and semantic analysis for accurate results.\n",
    "\n",
    "> **Tip:** If you’re new to WaterCrawl, check out the [WaterCrawl Documentation](https://docs.watercrawl.dev/intro) for more details.\n",
    "\n",
    "Ready? Let’s start by setting up our environment! 🏁"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### ➡️ **Install all the dependencies:**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install -r requirements.txt"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### ➡️ **API keys you’ll need (grab these first!)** \n",
    "\n",
    "| Service | What it’s for | Where to generate |\n",
    "|---------|---------------|-------------------|\n",
    "| **WaterCrawl** | Auth for crawling endpoints | <https://app.watercrawl.dev/dashboard/api-keys> |\n",
    "| **OpenAI/Other LLMs** | LLM interactions | Depends on provider (e.g., OpenAI, Anthropic) |\n",
    "\n",
    "---\n",
    "**Option 1 – Keep it clean: Use a `.env` file** ⚠️\n",
    "\n",
    "Create the file **once**, store your keys, and everything else “just works”.\n",
    "\n",
    "```python\n",
    "# Create .env file\n",
    "env_text = \"\"\"\n",
    "WATERCRAWL_API_KEY=your_watercrawl_api_key_here\n",
    "OPENAI_API_KEY=your_openai_api_key_here\n",
    "ANTHROPIC_API_KEY=your_anthropic_api_key_here\n",
    "DEEPSEEK_API_KEY=your_deepseek_api_key_here\n",
    "\"\"\".strip()\n",
    "\n",
    "with open(\".env\", \"w\") as f:\n",
    "    f.write(env_text)\n",
    "print(\".env file created — now edit it with your real keys ✏️\")\n",
    "```\n",
    "\n",
    "**Option 2 – Quick-and-dirty: Hard-code in the notebook** ⚠️\n",
    "\n",
    "Not recommended — anyone who sees or commits the notebook can read your keys.\n",
    "\n",
    "WATERCRAWL_API_KEY=your_watercrawl_api_key_here\n",
    "OPENAI_API_KEY=your_openai_api_key_here\n",
    "ANTHROPIC_API_KEY=your_anthropic_api_key_here\n",
    "DEEPSEEK_API_KEY=your_deepseek_api_key_here"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### ➡️ **If you’re using a `.env` file, load the API keys with dotenv**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from dotenv import load_dotenv\n",
    "import os\n",
    "\n",
    "load_dotenv()  # pulls everything from .env for any API keys for LLMS\n",
    "\n",
    "\n",
    "WATERCRAWL_API_KEY = os.environ.get(\"WATERCRAWL_API_KEY\")\n",
    "OPENAI_API_KEY = os.environ.get(\"OPENAI_API_KEY\")\n",
    "ANTHROPIC_API_KEY = os.environ.get(\"ANTHROPIC_API_KEY\")\n",
    "DEEPSEEK_API_KEY = os.environ.get(\"DEEPSEEK_API_KEY\")\n"
   ]
  },
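  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### ➡️ **Optional: sanity-check that the keys loaded**\n",
    "\n",
    "Before going further, it can save debugging time to confirm which keys actually made it into the environment. This is a plain `os.environ` check with no extra dependencies; you only need the keys for the providers you plan to use. The `missing_keys` helper below is just local illustration code, not part of any library."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "def missing_keys(names):\n",
    "    \"\"\"Return the environment variable names that are unset or empty.\"\"\"\n",
    "    return [name for name in names if not os.environ.get(name)]\n",
    "\n",
    "expected = [\"WATERCRAWL_API_KEY\", \"OPENAI_API_KEY\", \"ANTHROPIC_API_KEY\", \"DEEPSEEK_API_KEY\"]\n",
    "missing = missing_keys(expected)\n",
    "if missing:\n",
    "    print(\"Missing keys:\", \", \".join(missing))\n",
    "else:\n",
    "    print(\"All expected API keys are set ✅\")"
   ]
  },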
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### ➡️ **Import necessary packages:**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import sys\n",
    "import os\n",
    "sys.path.append(os.path.abspath('./objective_crawler'))\n",
    "from core import ObjectiveCrawler\n",
    "from clients import WaterCrawler, LLMClient\n",
    "from config import DEFAULT_MODEL, DEFAULT_TOP_K, DEFAULT_STRATEGIES"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### ➡️ **Set up the Objective and URL:**\n",
    "Define the website URL you want to crawl and the objective you want to achieve."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Example URL and Objective\n",
    "target_url = \"https://watercrawl.dev/\"\n",
    "objective = \"Find pricing information\"\n",
    "\n",
    "# You can modify these to test with your own inputs\n",
    "print(f\"Target URL: {target_url}\")\n",
    "print(f\"Objective: {objective}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### ➡️ **Initialize the Crawler:**\n",
    "Set up the crawler with configurable options for LLM model, number of URLs to scrape, and search strategies."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Initialize LLM Client with your preferred model\n",
    "llm_client = LLMClient(model=DEFAULT_MODEL, api_key=LITELLM_API_KEY)\n",
    "\n",
    "# Initialize WaterCrawler with your API key\n",
    "water_crawler = WaterCrawler(api_key=WATERCRAWL_API_KEY)\n",
    "\n",
    "# Create ObjectiveCrawler instance\n",
    "crawler = ObjectiveCrawler(\n",
    "    water_crawler=water_crawler,\n",
    "    llm_client=llm_client,\n",
    "    top_k=DEFAULT_TOP_K,\n",
    "    num_strategies=DEFAULT_STRATEGIES\n",
    ")\n",
    "\n",
    "print(\"Crawler initialized with model:\", DEFAULT_MODEL)\n",
    "print(f\"Top K URLs to scrape: {DEFAULT_TOP_K}\")\n",
    "print(f\"Number of search strategies: {DEFAULT_STRATEGIES}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### ➡️ **Fetch Sitemap with WaterCrawl:**\n",
    "Use WaterCrawl to map the entire website and retrieve all URLs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Fetch sitemap\n",
    "sitemap_urls = crawler.get_sitemap(target_url)\n",
    "\n",
    "print(f\"Total URLs in sitemap: {len(sitemap_urls)}\")\n",
    "print(\"Sample URLs:\", sitemap_urls[:5] if sitemap_urls else [])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### ➡️ **Generate Search Strategies with LLM:**\n",
    "Generate multiple search strategies to filter URLs based on the objective."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Generate search strategies\n",
    "strategies = crawler.generate_search_strategies(objective)\n",
    "\n",
    "print(\"Generated Search Strategies:\")\n",
    "for i, strategy in enumerate(strategies, 1):\n",
    "    print(f\"{i}. {strategy}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### ➡️ **Filter URLs with Rank-BM25:**\n",
    "Use Rank-BM25 to perform keyword-based filtering and ranking of URLs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Filter URLs with BM25\n",
    "filtered_urls_bm25 = crawler.filter_urls_bm25(sitemap_urls, objective, strategies)\n",
    "\n",
    "print(f\"URLs after BM25 filtering: {len(filtered_urls_bm25)}\")\n",
    "print(\"Top URLs:\", filtered_urls_bm25[:5] if filtered_urls_bm25 else [])"
   ]
  },
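  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Curious what BM25 is doing under the hood? Below is a minimal, self-contained sketch of Okapi BM25 scoring over URL tokens. It is illustrative only: the example URLs and the crude tokenizer (splitting on non-word characters) are assumptions for the demo, while the notebook's actual filtering uses the `rank-bm25` package inside `ObjectiveCrawler`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "import re\n",
    "\n",
    "def bm25_scores(corpus, query, k1=1.5, b=0.75):\n",
    "    \"\"\"Okapi BM25 score of each tokenized document against a tokenized query.\"\"\"\n",
    "    n = len(corpus)\n",
    "    avgdl = sum(len(doc) for doc in corpus) / n\n",
    "    df = {}  # document frequency of each term\n",
    "    for doc in corpus:\n",
    "        for term in set(doc):\n",
    "            df[term] = df.get(term, 0) + 1\n",
    "    scores = []\n",
    "    for doc in corpus:\n",
    "        score = 0.0\n",
    "        for term in query:\n",
    "            if term not in df:\n",
    "                continue\n",
    "            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))\n",
    "            tf = doc.count(term)\n",
    "            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))\n",
    "        scores.append(score)\n",
    "    return scores\n",
    "\n",
    "# Hypothetical URLs, tokenized by splitting on non-word characters\n",
    "demo_urls = [\n",
    "    \"https://watercrawl.dev/pricing\",\n",
    "    \"https://watercrawl.dev/blog/hello-world\",\n",
    "    \"https://watercrawl.dev/docs/intro\",\n",
    "]\n",
    "corpus = [[t for t in re.split(r\"\\W+\", u.lower()) if t] for u in demo_urls]\n",
    "query = [\"pricing\", \"plans\", \"cost\"]\n",
    "\n",
    "ranked = sorted(zip(bm25_scores(corpus, query), demo_urls), key=lambda p: p[0], reverse=True)\n",
    "for score, url in ranked:\n",
    "    print(f\"{score:.3f}  {url}\")"
   ]
  },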
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### ➡️ **Further Filter URLs with LLM:**\n",
    "Use an LLM to refine the list of URLs based on relevance to the objective."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Filter URLs with LLM\n",
    "filtered_urls_llm = crawler.filter_urls_llm(filtered_urls_bm25, objective)\n",
    "\n",
    "print(f\"URLs after LLM filtering: {len(filtered_urls_llm)}\")\n",
    "print(\"Top URLs for scraping:\", filtered_urls_llm)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### ➡️ **Scrape Content from Top URLs:**\n",
    "Scrape the content from the selected URLs using WaterCrawl."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Scrape content\n",
    "scraped_contents = crawler.scrape_urls(filtered_urls_llm)\n",
    "\n",
    "print(f\"Scraped content from {len(scraped_contents)} URLs\")\n",
    "for url, content in scraped_contents.items():\n",
    "    print(f\"URL: {url}\")\n",
    "    print(f\"Content length: {len(content) if content else 0} characters\")"
   ]
  },
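  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Scraped pages can be long, and LLM context windows are finite. Before sending content to a model, you may want to clip each document to a character budget. The helper name and budget below are assumptions for illustration; whether `ObjectiveCrawler` trims content internally depends on its implementation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def truncate_for_llm(text, max_chars=12000):\n",
    "    \"\"\"Clip a document to a rough character budget, marking the cut point.\"\"\"\n",
    "    if not text:\n",
    "        return \"\"\n",
    "    if len(text) <= max_chars:\n",
    "        return text\n",
    "    return text[:max_chars] + \"\\n\\n[... truncated ...]\"\n",
    "\n",
    "sample = \"word \" * 5000  # about 25,000 characters\n",
    "clipped = truncate_for_llm(sample)\n",
    "print(f\"Original: {len(sample)} chars, clipped: {len(clipped)} chars\")"
   ]
  },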
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### ➡️ **Analyze Content with LLM:**\n",
    "Analyze the scraped content to extract information relevant to the objective."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Analyze content\n",
    "individual_analyses = crawler.analyze_content(scraped_contents, objective)\n",
    "\n",
    "print(\"Individual Analyses:\")\n",
    "for url, analysis in individual_analyses.items():\n",
    "    print(f\"URL: {url}\")\n",
    "    print(f\"Analysis: {analysis[:200]}...\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### ➡️ **Compile Final Results:**\n",
    "Compile all analyses into a structured JSON response that answers the objective."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Compile final result\n",
    "final_result = crawler.generate_final_result(objective, individual_analyses)\n",
    "\n",
    "print(\"Final Result:\")\n",
    "import json\n",
    "print(json.dumps(final_result, indent=2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 🌟 Conclusion\n",
    "Congratulations! You've successfully used WaterCrawl, LiteLLM, and Rank-BM25 to crawl a website, filter URLs, scrape content, and analyze it to meet a specific objective. \n",
    "\n",
    "#### What you’ve learned:\n",
    "- How to set up and configure tools for web data extraction.\n",
    "- Mapping a website and filtering URLs with BM25 and LLMs.\n",
    "- Scraping and analyzing content to answer targeted questions.\n",
    "\n",
    "#### Next Steps:\n",
    "- Experiment with different URLs and objectives.\n",
    "- Try different LLM models for strategy generation and analysis.\n",
    "- Scale up by integrating with other tools or larger datasets.\n",
    "\n",
    "If you found this tutorial helpful, consider starring the [WaterCrawl repo](https://github.com/watercrawl/watercrawl) on GitHub! ⭐"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
