{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "7zFw1gUY7AQr"
   },
   "source": [
    "# Web Scraping using Oxylabs Web Scraper API"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "xMXm6w3w7AQs"
   },
   "source": [
    "This notebook shows how to use [Oxylabs Web Scraper API](https://oxylabs.io/products/scraper-api/web-data) with AutoGen agents to scrape data and generate automated reports.\n",
    "\n",
    "First, you need to have Python installed on your system and access to an LLM provider. For this tutorial, we'll use OpenAI's API and the gpt-5-nano model. Start by creating a virtual environment:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Yu7YkIh27AQt"
   },
   "outputs": [],
   "source": [
    "! python -m venv venv\n",
    "! source venv/bin/activate"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "mDY_2gTL7AQt"
   },
   "source": [
    "Install the required dependencies:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Cdvx_hDP7AQu"
   },
   "outputs": [],
   "source": [
     "! pip install aiohttp \"ag2[openai]\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "a-doTNZh7AQu"
   },
   "source": [
    "As an example, let’s use the [Amazon source](https://developers.oxylabs.io/scraper-apis/web-scraper-api/amazon) to search for items on Amazon based on a provided query using your credentials from the Oxylabs dashboard.\n",
    "\n",
    "Import `aiohttp` components and define an `AmazonScraper` class with a constructor, Web Scraper API endpoint URL, and credentials:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Z4_2WF3w7AQu"
   },
   "outputs": [],
   "source": [
    "from aiohttp import BasicAuth, ClientSession\n",
    "\n",
    "\n",
    "class AmazonScraper:\n",
    "    def __init__(self) -> None:\n",
    "        self._base_url = \"https://realtime.oxylabs.io/v1/queries\"\n",
    "        self._auth = BasicAuth(\"USERNAME\", \"PASSWORD\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "N_kRij8m7AQu"
   },
   "source": [
    "**NOTE:** Don't forget to replace placeholders with your credentials.\n",
    "\n",
     "Define an asynchronous `get_amazon_search_data` method that takes a query parameter. Add type hints (`str -> list[dict]`) for better readability."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "MeLR7NRe7AQu"
   },
   "outputs": [],
   "source": [
    "async def get_amazon_search_data(self, query: str) -> list[dict]:\n",
    "    \"\"\"Gets search data for provided query from Amazon.\"\"\"\n",
    "    ..."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "WlS2xxyM7AQv"
   },
   "source": [
    "Next, define your API payload together with the API call:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "ATYJ3z0v7AQv"
   },
   "outputs": [],
   "source": [
    "print(f\"Fetching data for query: {query}\")\n",
    "payload = {\n",
    "    \"source\": \"amazon_search\",\n",
    "    \"domain\": \"com\",\n",
    "    \"query\": query,\n",
    "    \"start_page\": 1,\n",
    "    \"pages\": 1,\n",
    "    \"parse\": True,\n",
    "}\n",
    "session = ClientSession()\n",
     "\n",
    "try:\n",
    "    response = await session.post(\n",
    "        self._base_url,\n",
    "        auth=self._auth,\n",
    "        json=payload,\n",
    "    )\n",
    "    response.raise_for_status()\n",
    "    data = await response.json()\n",
    "finally:\n",
    "    await session.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ZYPMGgn27AQv"
   },
   "source": [
     "The `self._auth` parameter provides Web Scraper API authentication, and the `finally` clause ensures the session is closed regardless of the outcome.\n",
     "\n",
     "You can trim the response slightly to make it easier for AI processing. Add the following part to return a list of dictionaries representing the Amazon data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "n4TNlm0S7AQv"
   },
   "outputs": [],
   "source": [
    "results = data[\"results\"][0][\"content\"][\"results\"]\n",
    "return [*results.values()]"
   ]
  },
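  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see what this flattening step does, here is a minimal sketch with a hypothetical `data` dictionary that mimics the nesting used above. A real API response contains many more fields, and depending on the source, the values under `results` may themselves be lists:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical, trimmed-down response shape for illustration only\n",
    "data = {\n",
    "    \"results\": [\n",
    "        {\n",
    "            \"content\": {\n",
    "                \"results\": {\n",
    "                    \"organic\": {\"title\": \"Laptop A\", \"price\": 499.99},\n",
    "                    \"paid\": {\"title\": \"Laptop B\", \"price\": 899.99},\n",
    "                }\n",
    "            }\n",
    "        }\n",
    "    ]\n",
    "}\n",
    "\n",
    "# Same flattening step as above: drop the outer nesting, keep the values\n",
    "results = data[\"results\"][0][\"content\"][\"results\"]\n",
    "print([*results.values()])"
   ]
  },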
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "nv1blbvc7AQv"
   },
   "source": [
    "You now have a class for scraping Amazon results. This can be expanded for different sources or multiple pages.\n",
    "For what you have now, the full class should look like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "dOuRIiZg7AQv"
   },
   "outputs": [],
   "source": [
     "from aiohttp import BasicAuth, ClientSession\n",
    "\n",
    "\n",
    "class AmazonScraper:\n",
    "    def __init__(self) -> None:\n",
    "        self._base_url = \"https://realtime.oxylabs.io/v1/queries\"\n",
    "        self._auth = BasicAuth(\"USERNAME\", \"PASSWORD\")\n",
    "\n",
    "    async def get_amazon_search_data(self, query: str) -> list[dict]:\n",
    "        \"\"\"Gets search data for provided query from Amazon.\"\"\"\n",
    "        print(f\"Fetching data for query: {query}\")\n",
    "        payload = {\n",
    "            \"source\": \"amazon_search\",\n",
    "            \"domain\": \"com\",\n",
    "            \"query\": query,\n",
    "            \"start_page\": 1,\n",
    "            \"pages\": 1,\n",
    "            \"parse\": True,\n",
    "        }\n",
    "        session = ClientSession()\n",
    "\n",
    "        try:\n",
    "            response = await session.post(\n",
    "                self._base_url,\n",
    "                auth=self._auth,\n",
    "                json=payload,\n",
    "            )\n",
    "            response.raise_for_status()\n",
    "            data = await response.json()\n",
    "        finally:\n",
    "            await session.close()\n",
    "\n",
    "        results = data[\"results\"][0][\"content\"][\"results\"]\n",
    "        return [*results.values()]"
   ]
  },
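  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a side note, the session can also be managed with `aiohttp`'s `async with` context manager, which closes it automatically without an explicit `finally` block. Here is a sketch of the same request flow; the `build_payload` and `fetch_search_data` helpers are illustrative names, not part of the class above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from aiohttp import BasicAuth, ClientSession\n",
    "\n",
    "\n",
    "def build_payload(query: str) -> dict:\n",
    "    \"\"\"Builds the same Web Scraper API payload as the class above.\"\"\"\n",
    "    return {\n",
    "        \"source\": \"amazon_search\",\n",
    "        \"domain\": \"com\",\n",
    "        \"query\": query,\n",
    "        \"start_page\": 1,\n",
    "        \"pages\": 1,\n",
    "        \"parse\": True,\n",
    "    }\n",
    "\n",
    "\n",
    "async def fetch_search_data(query: str) -> dict:\n",
    "    \"\"\"Posts the payload; the session and response are closed automatically.\"\"\"\n",
    "    async with ClientSession() as session:\n",
    "        async with session.post(\n",
    "            \"https://realtime.oxylabs.io/v1/queries\",\n",
    "            auth=BasicAuth(\"USERNAME\", \"PASSWORD\"),\n",
    "            json=build_payload(query),\n",
    "        ) as response:\n",
    "            response.raise_for_status()\n",
    "            return await response.json()"
   ]
  },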
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "5u2Ea_lz7AQv"
   },
   "source": [
    "Test it in `main.py` by initializing and calling the `get_amazon_search_data` method:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "BR7adrV47AQv"
   },
   "outputs": [],
   "source": [
    "import asyncio\n",
    "from pprint import pprint\n",
    "\n",
    "from scraper import AmazonScraper\n",
    "\n",
    "\n",
    "async def main():\n",
    "    scraper = AmazonScraper()\n",
     "    pprint(await scraper.get_amazon_search_data(\"laptop\"))\n",
    "\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    asyncio.run(main())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "9Ai76t-q7AQv"
   },
   "source": [
     "Running `python main.py` prints a list of Amazon laptop results.\n",
     "\n",
     "Now, create the `AmazonDataSummarizer` class to implement AutoGen agents:\n",
     "\n",
     "1. Build AI agents that summarize the scraped data by defining the `AmazonDataSummarizer` class and its constructor variables for later use.\n",
     "2. Use dependency injection to link the parts of the code in a structured way.\n",
     "3. Import and define the OpenAI client that AutoGen uses to communicate with OpenAI models.\n",
     "\n",
     "Use your OpenAI API key here:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "_xnv9SUH7AQv"
   },
   "outputs": [],
   "source": [
    "from autogen_ext.models.openai import OpenAIChatCompletionClient\n",
    "\n",
    "\n",
    "class AmazonDataSummarizer:\n",
    "    def __init__(self, scraper: AmazonScraper) -> None:\n",
    "        self._client = OpenAIChatCompletionClient(\n",
    "            model=\"gpt-5-nano\",\n",
    "            api_key=\"YOUR_API_KEY\",\n",
    "        )\n",
    "        self._scraper = scraper"
   ]
  },
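  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Hardcoding the key is fine for a quick test. A common alternative, sketched below as an optional step, is to read it from an environment variable such as `OPENAI_API_KEY` and pass `api_key=api_key` to the client instead of the literal string:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "# Falls back to the placeholder so a missing variable is easy to spot\n",
    "api_key = os.environ.get(\"OPENAI_API_KEY\", \"YOUR_API_KEY\")"
   ]
  },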
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "UpGZws6m7AQw"
   },
   "source": [
    "Define AI agent names using an Enum class for easier tracking:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "1QU2cfet7AQw"
   },
   "outputs": [],
   "source": [
    "from enum import Enum\n",
    "\n",
    "\n",
    "class AgentName(str, Enum):\n",
    "    \"\"\"Enum for AI agent names.\"\"\"\n",
    "\n",
    "    PRICE_SUMMARIZER = \"Price_Summarizer\"\n",
    "    DEAL_FINDER = \"Deal_Finder\""
   ]
  },
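  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because `AgentName` subclasses `str`, its members compare equal to plain strings, which is what makes filtering messages by `message.source` work later in the tutorial:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A str-based Enum member is interchangeable with its string value\n",
    "print(AgentName.PRICE_SUMMARIZER == \"Price_Summarizer\")  # True\n",
    "print(\"Deal_Finder\" in {AgentName.PRICE_SUMMARIZER, AgentName.DEAL_FINDER})  # True"
   ]
  },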
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "o1tCo8-l7AQw"
   },
   "source": [
     "This makes agent tracking easier. Next, import `AssistantAgent` and define the `_initialize_agents` method with the agent configuration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "LjAKM_nv7AQw"
   },
   "outputs": [],
   "source": [
     "from autogen_agentchat.agents import AssistantAgent\n",
     "\n",
     "\n",
     "def _initialize_agents(self) -> list[AssistantAgent]:\n",
    "    \"\"\"Initializes the agents.\"\"\"\n",
    "    price_summarizer_agent = AssistantAgent(\n",
    "        name=AgentName.PRICE_SUMMARIZER,\n",
    "        model_client=self._client,\n",
    "        reflect_on_tool_use=True,\n",
    "        tools=[self._scraper.get_amazon_search_data],\n",
     "        system_message=\"You are an expert in analyzing prices from online shopping data. Summarize the key price statistics, including average, min, max, and any interesting price patterns. Share your summary with the group.\",\n",
    "    )\n",
    "\n",
    "    deal_finder_agent = AssistantAgent(\n",
    "        name=AgentName.DEAL_FINDER,\n",
    "        model_client=self._client,\n",
    "        tools=[self._scraper.get_amazon_search_data],\n",
    "        reflect_on_tool_use=True,\n",
    "        system_message=\"You are a skilled deal finder in online shopping data. Find the best possible deals based on price, availability, and general value. Share your findings with the group. Respond with 'SUMMARY_COMPLETE' when you've shared your findings.\",\n",
    "    )\n",
    "\n",
    "    return [price_summarizer_agent, deal_finder_agent]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "b07DWdsg7AQw"
   },
   "source": [
     "`AssistantAgent` needs:\n",
     "\n",
     "* An agent name\n",
     "* The OpenAI client\n",
     "* Scraping tools\n",
     "* A system message (clear instructions, much like ChatGPT prompts)\n",
     "\n",
     "The system message defines what each agent should do. `SUMMARY_COMPLETE` signals when to stop running the agents, while the `reflect_on_tool_use=True` flag makes agents use the tool's returned data as response context.\n",
     "\n",
     "This is how data sources integrate with AutoGen agents: define an async function in `tools`, and agents can call it. Prompts handle everything else.\n",
     "\n",
     "Next, make the agents work together using AutoGen teams, which enable collaboration on shared tasks. `RoundRobinGroupChat` runs agents sequentially, so they take turns analyzing Amazon data and sharing findings.\n",
     "\n",
     "Finally, the `SUMMARY_COMPLETE` termination condition tells AutoGen when the agents have finished and the team should stop. Here's how it should look:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "pt5JUydf7AQw"
   },
   "outputs": [],
   "source": [
     "from autogen_agentchat.conditions import TextMentionTermination\n",
     "from autogen_agentchat.messages import BaseChatMessage\n",
     "from autogen_agentchat.teams import RoundRobinGroupChat\n",
     "\n",
     "\n",
     "async def generate_summary(self, query: str) -> None:\n",
    "    \"\"\"Generates a summary using AI agents based on the given query\"\"\"\n",
    "    agents = self._initialize_agents()\n",
    "\n",
    "    text_termination = TextMentionTermination(\"SUMMARY_COMPLETE\")\n",
    "    team = RoundRobinGroupChat(\n",
    "        participants=agents,\n",
    "        termination_condition=text_termination,\n",
    "    )\n",
    "\n",
    "    task = f\"Search for products for the query {query} and provide a summary in formatted Markdown of your findings.\"\n",
    "    messages = []\n",
    "\n",
    "    async for message in team.run_stream(task=task):\n",
    "        if isinstance(message, BaseChatMessage) and message.source in {\n",
    "            AgentName.PRICE_SUMMARIZER,\n",
    "            AgentName.DEAL_FINDER,\n",
    "        }:\n",
    "            messages.append(message.to_text())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Y2AY-Gwn7AQw"
   },
   "source": [
     "Set up the team with the agents and a termination condition. Pass the task to `run_stream` and collect the agent messages. This produces a complete price and deal summary in Markdown format.\n",
    "\n",
    "Save results to a Markdown file using this method:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "jbz-Eq3Q7AQw"
   },
   "outputs": [],
   "source": [
    "def _write_to_md(self, messages: list[str]) -> None:\n",
    "    \"\"\"Writes the messages to a Markdown file.\"\"\"\n",
    "    with open(\"summary.md\", \"w\") as f:\n",
    "        for message in messages:\n",
    "            f.write(f\"{message}\\n\\n\")"
   ]
  },
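  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick, self-contained way to check this write logic, using a temporary directory instead of the hardcoded `summary.md` path:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tempfile\n",
    "from pathlib import Path\n",
    "\n",
    "messages = [\"# Price summary\", \"Best deal: Laptop A\"]\n",
    "\n",
    "with tempfile.TemporaryDirectory() as tmp:\n",
    "    path = Path(tmp) / \"summary.md\"\n",
    "    # Same write pattern as _write_to_md, with a configurable path\n",
    "    with open(path, \"w\") as f:\n",
    "        for message in messages:\n",
    "            f.write(f\"{message}\\n\\n\")\n",
    "    print(path.read_text())"
   ]
  },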
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "yns_qleQ7AQw"
   },
   "source": [
     "Call this at the end of `generate_summary` to save the results. The complete method:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "UrX8IAGW7AQw"
   },
   "outputs": [],
   "source": [
    "async def generate_summary(self, query: str) -> None:\n",
    "    \"\"\"Generates a summary using AI agents based on the given query\"\"\"\n",
    "    agents = self._initialize_agents()\n",
    "\n",
    "    text_termination = TextMentionTermination(\"SUMMARY_COMPLETE\")\n",
    "    team = RoundRobinGroupChat(\n",
    "        participants=agents,\n",
    "        termination_condition=text_termination,\n",
    "    )\n",
    "\n",
    "    task = f\"Search for products for the query {query} and provide a summary in formatted Markdown of your findings.\"\n",
    "    messages = []\n",
    "\n",
    "    async for message in team.run_stream(task=task):\n",
    "        if isinstance(message, BaseChatMessage) and message.source in {\n",
    "            AgentName.PRICE_SUMMARIZER,\n",
    "            AgentName.DEAL_FINDER,\n",
    "        }:\n",
    "            messages.append(message.to_text())\n",
    "\n",
    "    self._write_to_md(messages)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "_7pmNk1t7AQw"
   },
   "source": [
    "Now you have the complete tool for generating summaries using AutoGen agents with Web Scraper API data. Combine everything in the main file:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "REXKTL_p7AQw"
   },
   "outputs": [],
   "source": [
    "import asyncio\n",
    "\n",
    "from scraper import AmazonScraper\n",
    "from summary import AmazonDataSummarizer\n",
    "\n",
    "\n",
    "async def main():\n",
    "    scraper = AmazonScraper()\n",
    "    summarizer = AmazonDataSummarizer(scraper=scraper)\n",
    "    await summarizer.generate_summary(query=\"laptop\")\n",
    "\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    asyncio.run(main())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "UaMSwnHD7AQx"
   },
   "source": [
    "Running `python main.py` creates a `summary.md` file in your directory where you can view results in a Markdown-capable text editor."
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "front_matter": {
   "description": "Use AG2 to Web Scrape with Oxylabs",
   "tags": [
    "llm",
    "openai",
    "web scraping",
    "oxylabs"
   ]
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
