{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "source": [
    "# Structured Report Generation\n",
    "[![ Click here to deploy.](https://brev-assets.s3.us-west-1.amazonaws.com/nv-lb-dark.svg)](https://console.brev.dev/launchable/deploy?launchableID=env-2qzwjeLSpDcjU68lNQyDcZwxmn9)\n",
    "\n",
    "Deploy with Launchables: pre-configured, fully optimized environments that you can deploy with a single click.\n",
    "\n",
    "In this notebook, you will use the Llama-3.3-70B-Instruct model to generate a report on a given topic. You will use LangChain's LangGraph to build an Agent that takes in a user-defined topic and report structure, then plans the topics of the sections indicated in that structure. Next, the Agent uses Tavily to search the web on those topics and uses the results to write the sections and synthesize the final report. \n",
    "\n",
    "Rather than deploying the model locally, you leverage NVIDIA API Catalog by calling the model's NIM API Endpoint. As you don't need a GPU to run the model, you can run this notebook anywhere!\n",
    "\n",
    "You can find the original notebook on LangChain's GitHub [here](https://github.com/langchain-ai/report-mAIstro).\n",
    "\n",
    "Below is the architecture diagram.\n",
    "\n",
    "![Architecture Diagram]()\n",
    "\n",
    "\n",
    "\n",
    "The Agent takes in user-defined topics and a report structure, then plans the topics of the sections indicated in the structure. The Agent then uses Tavily to search the web on those topics and uses this information to write the sections and synthesize the final report. \n",
    "\n",
    " \n",
    "A two-phase approach is used for planning and research: \n",
    "\n",
    "\n",
    "Phase 1 - Planning \n",
    "- Analyzes user inputs \n",
    "- Maps out report sections \n",
    "\n",
    " \n",
    "Phase 2 - Research \n",
    "- Conducts parallel web research via Tavily API \n",
    "- Processes relevant data for each section \n",
    "\n",
    " \n",
    "\n",
    "The report is then written in a strategic sequence: \n",
    "- Write research-based sections in parallel \n",
    "- Write introductions, conclusions, and connect each of the sections  \n",
    "\n",
    "\n",
    "All sections maintain awareness of each other's content for consistency. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Content Overview\n",
    ">[Prerequisites](#Prerequisites)  \n",
    ">[Writing Report Plan](#Writing-Report-Plan)  \n",
    ">[Research and Writing](#Research-and-Writing)  \n",
    ">[Write Single Section](#Write-Single-Section)  \n",
    ">[Validate Single Section](#Validate-Single-Section)  \n",
    ">[Write All Sections](#Write-All-Sections)  \n",
    ">[Final Report](#Final-Report)\n",
    "________________________\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prerequisites\n",
    "\n",
    "### Install Dependencies"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture --no-stderr\n",
    "%pip install --quiet -U langgraph langchain_community langchain_core tavily-python langchain_nvidia_ai_endpoints"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## API Keys\n",
    "Prior to getting started, you will need to create API Keys for the NVIDIA API Catalog, Tavily, and LangChain.\n",
    "\n",
    "- NVIDIA NIM Trial API Key\n",
    "  1. Click [here](https://build.nvidia.com/meta/llama-3_3-70b-instruct?signin=true&api_key=true) to sign in and generate an API key for the NVIDIA NIM trial hosted endpoints.\n",
    "  2. If you don’t have an NVIDIA account, you will be asked to sign up.\n",
    "- LangChain\n",
    "  1. Go to **[LangChain Settings page](https://smith.langchain.com/settings)**. You will need to create an account if you have not already.\n",
    "  2. On the left panel, navigate to \"API Keys\".\n",
    "  3. Click on the \"Create API Key\" on the top right of the page.\n",
    "- Tavily\n",
    "  1. Go to the **[Tavily homepage](https://tavily.com/)** and click on \"Get Started\"\n",
    "  2. Sign in or create an account.\n",
    "  3. Create an API Key.\n",
    "\n",
    "### Export API Keys\n",
    "\n",
    "Save these API Keys as environment variables.\n",
    "\n",
    "First, set the NVIDIA API Key as the environment variable."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "import getpass\n",
    "import os\n",
    "\n",
    "if not os.environ.get(\"NVIDIA_API_KEY\", \"\").startswith(\"nvapi-\"):\n",
    "    nvapi_key = getpass.getpass(\"Enter your NVIDIA API key: \")\n",
    "    assert nvapi_key.startswith(\"nvapi-\"), f\"{nvapi_key[:5]}... is not a valid key\"\n",
    "    os.environ[\"NVIDIA_API_KEY\"] = nvapi_key"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "source": [
    "Next, set the LangChain API Key as an environment variable. You will use [LangSmith](https://docs.smith.langchain.com/) for [tracing](https://docs.smith.langchain.com/concepts/tracing)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os, getpass\n",
    "\n",
    "def _set_env(var: str):\n",
    "    if not os.environ.get(var):\n",
    "        os.environ[var] = getpass.getpass(f\"{var}: \")\n",
    "        \n",
    "_set_env(\"LANGCHAIN_API_KEY\")\n",
    "os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
    "os.environ[\"LANGCHAIN_PROJECT\"] = \"report-mAIstro\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, set the Tavily API Key as an environment variable. You will use the [Tavily API](https://tavily.com/) as the web search tool."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "_set_env(\"TAVILY_API_KEY\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from tavily import TavilyClient, AsyncTavilyClient\n",
    "tavily_client = TavilyClient()\n",
    "tavily_async_client = AsyncTavilyClient()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Working with the NVIDIA API Catalog\n",
    "\n",
    "Let's test the API endpoint.\n",
    "\n",
    "In this notebook, you will use the Llama-3.3-70B-Instruct model (`meta/llama-3.3-70b-instruct`) as the LLM. Define the LLM below and test the API Catalog endpoint.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## Core LC Chat Interface\n",
    "from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
    "\n",
    "llm = ChatNVIDIA(model=\"meta/llama-3.3-70b-instruct\", temperature=0)\n",
    "result = llm.invoke(\"Write a ballad about LangChain.\")\n",
    "print(result.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Optional: Locally Run NVIDIA NIM Microservices"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once you are familiar with this blueprint, you may want to self-host models with NVIDIA NIM Microservices under an NVIDIA AI Enterprise software license. This lets you run models anywhere and gives you ownership of your customizations and full control of your intellectual property (IP) and AI applications.\n",
    "\n",
    "[Learn more about NIM Microservices](https://developer.nvidia.com/blog/nvidia-nim-offers-optimized-inference-microservices-for-deploying-ai-models-at-scale/)\n",
    "\n",
    "<div class=\"alert alert-block alert-success\">\n",
    "<b>NOTE:</b> Run the following cell only if you're using a local NIM Microservice instead of the API Catalog Endpoint.\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
    "\n",
    "# connect to a chat NIM running at localhost:8000, specifying a model\n",
    "llm = ChatNVIDIA(base_url=\"http://localhost:8000/v1\", model=\"meta/llama3-8b-instruct\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Writing Report Plan"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Creating Utility Functions\n",
    "\n",
    "Next, you will create utility functions that will be used for web research during report generation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import asyncio\n",
    "from langsmith import traceable\n",
    "from pydantic import BaseModel, Field\n",
    "\n",
    "class Section(BaseModel):\n",
    "    name: str = Field(\n",
    "        description=\"Name for this section of the report.\",\n",
    "    )\n",
    "    description: str = Field(\n",
    "        description=\"Brief overview of the main topics and concepts to be covered in this section.\",\n",
    "    )\n",
    "    research: bool = Field(\n",
    "        description=\"Whether to perform web research for this section of the report.\"\n",
    "    )\n",
    "    content: str = Field(\n",
    "        description=\"The content of the section.\"\n",
    "    ) \n",
    "\n",
    "def deduplicate_and_format_sources(search_response, max_tokens_per_source, include_raw_content=True):\n",
    "    \"\"\"\n",
    "    Takes either a single search response or list of responses from Tavily API and formats them.\n",
    "    Limits the raw_content to approximately max_tokens_per_source.\n",
    "    include_raw_content specifies whether to include the raw_content from Tavily in the formatted string.\n",
    "    \n",
    "    Args:\n",
    "        search_response: Either:\n",
    "            - A dict with a 'results' key containing a list of search results\n",
    "            - A list of dicts, each containing search results\n",
    "            \n",
    "    Returns:\n",
    "        str: Formatted string with deduplicated sources\n",
    "    \"\"\"\n",
    "    # Convert input to list of results\n",
    "    if isinstance(search_response, dict):\n",
    "        sources_list = search_response['results']\n",
    "    elif isinstance(search_response, list):\n",
    "        sources_list = []\n",
    "        for response in search_response:\n",
    "            if isinstance(response, dict) and 'results' in response:\n",
    "                sources_list.extend(response['results'])\n",
    "            else:\n",
    "                sources_list.extend(response)\n",
    "    else:\n",
    "        raise ValueError(\"Input must be either a dict with 'results' or a list of search results\")\n",
    "    \n",
    "    # Deduplicate by URL\n",
    "    unique_sources = {}\n",
    "    for source in sources_list:\n",
    "        if source['url'] not in unique_sources:\n",
    "            unique_sources[source['url']] = source\n",
    "    \n",
    "    # Format output\n",
    "    formatted_text = \"Sources:\\n\\n\"\n",
    "    for i, source in enumerate(unique_sources.values(), 1):\n",
    "        formatted_text += f\"Source {source['title']}:\\n===\\n\"\n",
    "        formatted_text += f\"URL: {source['url']}\\n===\\n\"\n",
    "        formatted_text += f\"Most relevant content from source: {source['content']}\\n===\\n\"\n",
    "        if include_raw_content:\n",
    "            # Using rough estimate of 4 characters per token\n",
    "            char_limit = max_tokens_per_source * 4\n",
    "            # Handle None raw_content\n",
    "            raw_content = source.get('raw_content', '')\n",
    "            if raw_content is None:\n",
    "                raw_content = ''\n",
    "                print(f\"Warning: No raw_content found for source {source['url']}\")\n",
    "            if len(raw_content) > char_limit:\n",
    "                raw_content = raw_content[:char_limit] + \"... [truncated]\"\n",
    "            formatted_text += f\"Full source content limited to {max_tokens_per_source} tokens: {raw_content}\\n\\n\"\n",
    "                \n",
    "    return formatted_text.strip()\n",
    "\n",
    "def format_sections(sections: list[Section]) -> str:\n",
    "    \"\"\" Format a list of sections into a string \"\"\"\n",
    "    formatted_str = \"\"\n",
    "    for idx, section in enumerate(sections, 1):\n",
    "        formatted_str += f\"\"\"\n",
    "{'='*60}\n",
    "Section {idx}: {section.name}\n",
    "{'='*60}\n",
    "Description:\n",
    "{section.description}\n",
    "Requires Research: \n",
    "{section.research}\n",
    "\n",
    "Content:\n",
    "{section.content if section.content else '[Not yet written]'}\n",
    "\n",
    "\"\"\"\n",
    "    return formatted_str\n",
    "\n",
    "@traceable\n",
    "def tavily_search(query):\n",
    "    \"\"\" Search the web using the Tavily API.\n",
    "    \n",
    "    Args:\n",
    "        query (str): The search query to execute\n",
    "        \n",
    "    Returns:\n",
    "        dict: Tavily search response containing:\n",
    "            - results (list): List of search result dictionaries, each containing:\n",
    "                - title (str): Title of the search result\n",
    "                - url (str): URL of the search result\n",
    "                - content (str): Snippet/summary of the content\n",
    "                - raw_content (str): Full content of the page if available\"\"\"\n",
    "     \n",
    "    return tavily_client.search(query, \n",
    "                         max_results=5, \n",
    "                         include_raw_content=True)\n",
    "\n",
    "@traceable\n",
    "async def tavily_search_async(search_queries, tavily_topic, tavily_days):\n",
    "    \"\"\"\n",
    "    Performs concurrent web searches using the Tavily API.\n",
    "\n",
    "    Args:\n",
    "        search_queries (List[SearchQuery]): List of search queries to process\n",
    "        tavily_topic (str): Type of search to perform ('news' or 'general')\n",
    "        tavily_days (int): Number of days to look back for news articles (only used when tavily_topic='news')\n",
    "\n",
    "    Returns:\n",
    "        List[dict]: List of search results from Tavily API, one per query\n",
    "\n",
    "    Note:\n",
    "        For news searches, each result will include articles from the last `tavily_days` days.\n",
    "        For general searches, the time range is unrestricted.\n",
    "    \"\"\"\n",
    "    \n",
    "    search_tasks = []\n",
    "    for query in search_queries:\n",
    "        if tavily_topic == \"news\":\n",
    "            search_tasks.append(\n",
    "                tavily_async_client.search(\n",
    "                    query,\n",
    "                    max_results=5,\n",
    "                    include_raw_content=True,\n",
    "                    topic=\"news\",\n",
    "                    days=tavily_days\n",
    "                )\n",
    "            )\n",
    "        else:\n",
    "            search_tasks.append(\n",
    "                tavily_async_client.search(\n",
    "                    query,\n",
    "                    max_results=5,\n",
    "                    include_raw_content=True,\n",
    "                    topic=\"general\"\n",
    "                )\n",
    "            )\n",
    "\n",
    "    # Execute all searches concurrently\n",
    "    search_docs = await asyncio.gather(*search_tasks)\n",
    "\n",
    "    return search_docs"
   ]
  },
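  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick, standalone illustration of the deduplication and truncation logic above, the following cell sketches the dedup-by-URL rule and the rough 4-characters-per-token limit on a hypothetical two-result response (the titles and URLs are made up for the example):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical Tavily-style response containing a duplicate URL\n",
    "sample_response = {\n",
    "    \"results\": [\n",
    "        {\"title\": \"CPU basics\", \"url\": \"https://example.com/cpu\", \"content\": \"snippet\", \"raw_content\": \"x\" * 100},\n",
    "        {\"title\": \"CPU basics (mirror)\", \"url\": \"https://example.com/cpu\", \"content\": \"snippet\", \"raw_content\": \"x\" * 100},\n",
    "    ]\n",
    "}\n",
    "\n",
    "# Deduplicate by URL, keeping the first occurrence (same rule as deduplicate_and_format_sources)\n",
    "unique_sources = {}\n",
    "for source in sample_response[\"results\"]:\n",
    "    unique_sources.setdefault(source[\"url\"], source)\n",
    "\n",
    "# Truncate raw content to ~10 tokens at a rough 4 characters per token\n",
    "char_limit = 10 * 4\n",
    "for source in unique_sources.values():\n",
    "    raw_content = (source.get(\"raw_content\") or \"\")[:char_limit]\n",
    "    print(f\"{source['title']}: kept {len(raw_content)} of {len(source['raw_content'])} characters\")\n"
   ]
  },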
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Planning\n",
    "\n",
    "First, let's define the Schema for report sections."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing_extensions import TypedDict\n",
    "from typing import Annotated, List, Optional, Literal\n",
    "  \n",
    "class Sections(BaseModel):\n",
    "    sections: List[Section] = Field(\n",
    "        description=\"Sections of the report.\",\n",
    "    )\n",
    "class SearchQuery(BaseModel):\n",
    "    search_query: str = Field(\n",
    "        None, description=\"Query for web search.\"\n",
    "    )\n",
    "class Queries(BaseModel):\n",
    "    queries: List[SearchQuery] = Field(\n",
    "        description=\"List of search queries.\",\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now you will define the LangGraph state. Each state will have the following fields. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import operator\n",
    "\n",
    "class ReportState(TypedDict):\n",
    "    topic: str # Report topic\n",
    "    tavily_topic: Literal[\"general\", \"news\"] # Tavily search topic\n",
    "    tavily_days: Optional[int] # Only applicable for news topic\n",
    "    report_structure: str # Report structure\n",
    "    number_of_queries: int # Number of web search queries to perform per section    \n",
    "    sections: list[Section] # List of report sections \n",
    "    completed_sections: Annotated[list, operator.add] # Sections completed in parallel via the Send() API\n",
    "    report_sections_from_research: str # String of any completed sections from research to write final sections\n",
    "    final_report: str # Final report"
   ]
  },
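  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `completed_sections` field uses `Annotated[list, operator.add]` as a reducer: when parallel section-writing branches each return a one-element list, LangGraph merges the updates by list concatenation rather than overwriting. A minimal sketch of that merge rule (the branch contents here are hypothetical):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import operator\n",
    "\n",
    "# Two parallel branches each contribute one completed section\n",
    "branch_a_update = [\"Introduction\"]\n",
    "branch_b_update = [\"GPU Section\"]\n",
    "\n",
    "# LangGraph applies the reducer (operator.add) to combine the state updates\n",
    "merged = operator.add(branch_a_update, branch_b_update)\n",
    "print(merged)  # ['Introduction', 'GPU Section']\n"
   ]
  },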
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next you will write the report planner instructions, and a function that will generate the report sections."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.messages import HumanMessage, SystemMessage\n",
    "\n",
    "# Prompt to generate a search query to help with planning the report outline\n",
    "report_planner_query_writer_instructions=\"\"\"You are an expert technical writer, helping to plan a report. \n",
    "\n",
    "The report will be focused on the following topic:\n",
    "\n",
    "{topic}\n",
    "\n",
    "The report structure will follow these guidelines:\n",
    "\n",
    "{report_organization}\n",
    "\n",
    "Your goal is to generate {number_of_queries} search queries that will help gather comprehensive information for planning the report sections. \n",
    "\n",
    "The query should:\n",
    "\n",
    "1. Be related to the topic \n",
    "2. Help satisfy the requirements specified in the report organization\n",
    "\n",
    "Make the query specific enough to find high-quality, relevant sources while covering the breadth needed for the report structure.\"\"\"\n",
    "\n",
    "# Prompt generating the report outline\n",
    "report_planner_instructions=\"\"\"You are an expert technical writer, helping to plan a report.\n",
    "\n",
    "Your goal is to generate the outline of the sections of the report. \n",
    "\n",
    "The overall topic of the report is:\n",
    "\n",
    "{topic}\n",
    "\n",
    "The report should follow this organization: \n",
    "\n",
    "{report_organization}\n",
    "\n",
    "You should reflect on this information to plan the sections of the report: \n",
    "\n",
    "{context}\n",
    "\n",
    "Now, generate the sections of the report. Each section should have the following fields:\n",
    "\n",
    "- Name - Name for this section of the report.\n",
    "- Description - Brief overview of the main topics and concepts to be covered in this section.\n",
    "- Research - Whether to perform web research for this section of the report.\n",
    "- Content - The content of the section, which you will leave blank for now.\n",
    "\n",
    "Consider which sections require web research. For example, introduction and conclusion will not require research because they will distill information from other parts of the report.\"\"\"\n",
    "\n",
    "\n",
    "def invoke_structured_llm_with_retry(structured_llm, messages, max_attempts=3):\n",
    "    \"\"\"\n",
    "    Not all LLMs reliably support structured generation, so a call\n",
    "    may come back empty. Retry up to max_attempts times to get a result.\n",
    "    \"\"\"\n",
    "    results = None\n",
    "    for _ in range(max_attempts):\n",
    "        results = structured_llm.invoke(messages)\n",
    "        if results:\n",
    "            return results\n",
    "    return results\n",
    "\n",
    "async def generate_report_plan(state: ReportState):\n",
    "\n",
    "    # Inputs\n",
    "    topic = state[\"topic\"]\n",
    "    report_structure = state[\"report_structure\"]\n",
    "    number_of_queries = state[\"number_of_queries\"]\n",
    "    tavily_topic = state[\"tavily_topic\"]\n",
    "    tavily_days = state.get(\"tavily_days\", None)\n",
    "\n",
    "    # Convert JSON object to string if necessary\n",
    "    if isinstance(report_structure, dict):\n",
    "        report_structure = str(report_structure)\n",
    "\n",
    "    # Generate search query\n",
    "    structured_llm = llm.with_structured_output(Queries)\n",
    "    \n",
    "    # Format system instructions\n",
    "    system_instructions_query = report_planner_query_writer_instructions.format(topic=topic, report_organization=report_structure, number_of_queries=number_of_queries)\n",
    "    \n",
    "    # Generate queries  \n",
    "    results = invoke_structured_llm_with_retry(structured_llm,\n",
    "                                              [SystemMessage(content=system_instructions_query)]+[HumanMessage(content=\"Generate search queries that will help with planning the sections of the report.\")])\n",
    "    \n",
    "    # Web search\n",
    "    query_list = [query.search_query for query in results.queries]\n",
    "    search_docs = await tavily_search_async(query_list, tavily_topic, tavily_days)\n",
    "\n",
    "    # Deduplicate and format sources\n",
    "    source_str = deduplicate_and_format_sources(search_docs, max_tokens_per_source=1000, include_raw_content=True)\n",
    "\n",
    "    # Format system instructions\n",
    "    system_instructions_sections = report_planner_instructions.format(topic=topic, report_organization=report_structure, context=source_str)\n",
    "\n",
    "    # Generate sections \n",
    "    structured_llm = llm.with_structured_output(Sections)\n",
    "    report_sections = invoke_structured_llm_with_retry(structured_llm,\n",
    "                                                      [SystemMessage(content=system_instructions_sections)]+[HumanMessage(content=\"Generate the sections of the report. Your response must include a 'sections' field containing a list of sections. Each section must have: name, description, research, and content fields.\")])\n",
    "    \n",
    "    return {\"sections\": report_sections.sections}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Structure\n",
    "report_structure = \"\"\"This report type focuses on comparative analysis.\n",
    "\n",
    "The report structure should include:\n",
    "1. Introduction (no research needed)\n",
    "   - Brief overview of the topic area\n",
    "   - Context for the comparison\n",
    "\n",
    "2. Main Body Sections:\n",
    "   - One dedicated section for EACH offering being compared in the user-provided list\n",
    "   - Each section should examine:\n",
    "     - Core Features (bulleted list)\n",
    "     - Architecture & Implementation (2-3 sentences)\n",
    "     - One example use case (2-3 sentences)\n",
    "   \n",
    "3. No Main Body Sections other than the ones dedicated to each offering in the user-provided list\n",
    "\n",
    "4. Conclusion with Comparison Table (no research needed)\n",
    "   - Structured comparison table that:\n",
    "     * Compares all offerings from the user-provided list across key dimensions\n",
    "     * Highlights relative strengths and weaknesses\n",
    "   - Final recommendations\"\"\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, choose the topic of your report. The default is CPU vs. GPU, but feel free to change the topic to something that interests you. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Topic \n",
    "report_topic = \"Give an overview of capabilities and specific use case examples for these processing units: CPU, GPU.\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's run the Agent. We set the Tavily topic to \"general\", but you can set it to \"news\" if you want Tavily to retrieve the latest news results. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Tavily search parameters\n",
    "tavily_topic = \"general\"\n",
    "tavily_days = None # Only applicable for news topic\n",
    "\n",
    "# Generate report plan\n",
    "sections = await generate_report_plan({\"topic\": report_topic, \"report_structure\": report_structure, \"number_of_queries\": 2, \"tavily_topic\": tavily_topic, \"tavily_days\": tavily_days})\n",
    "\n",
    "# Print sections\n",
    "for section in sections['sections']:\n",
    "    print(f\"{'='*50}\")\n",
    "    print(f\"Name: {section.name}\")\n",
    "    print(f\"Description: {section.description}\")\n",
    "    print(f\"Research: {section.research}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Research and Writing"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "Now you are ready to give the Agent details about which sections require research and the number of queries needed per section.\n",
    "\n",
    "Let's define the LangGraph state for a single section. Each state will have the following fields:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class SectionState(TypedDict):\n",
    "    tavily_topic: Literal[\"general\", \"news\"] # Tavily search topic\n",
    "    tavily_days: Optional[int] # Only applicable for news topic\n",
    "    number_of_queries: int # Number of web search queries to perform per section \n",
    "    section: Section # Report section   \n",
    "    search_queries: list[SearchQuery] # List of search queries\n",
    "    source_str: str # String of formatted source content from web search\n",
    "    report_sections_from_research: str # String of any completed sections from research to write final sections\n",
    "    completed_sections: list[Section] # Final key we duplicate in outer state for Send() API\n",
    "\n",
    "class SectionOutputState(TypedDict):\n",
    "    completed_sections: list[Section] # Final key we duplicate in outer state for Send() API"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Write Single Section\n",
    "Now you will define the query writer instructions and the Agent's node functions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import Image, display\n",
    "from langgraph.graph import START, END, StateGraph\n",
    "\n",
    "# Query writer instructions\n",
    "query_writer_instructions=\"\"\"Your goal is to generate targeted web search queries that will gather comprehensive information for writing a technical report section.\n",
    "\n",
    "Topic for this section:\n",
    "{section_topic}\n",
    "\n",
    "When generating {number_of_queries} search queries, ensure they:\n",
    "1. Cover different aspects of the topic (e.g., core features, real-world applications, technical architecture)\n",
    "2. Include specific technical terms related to the topic\n",
    "3. Target recent information by including year markers where relevant (e.g., \"2024\")\n",
    "4. Look for comparisons or differentiators from similar technologies/approaches\n",
    "5. Search for both official documentation and practical implementation examples\n",
    "\n",
    "Your queries should be:\n",
    "- Specific enough to avoid generic results\n",
    "- Technical enough to capture detailed implementation information\n",
    "- Diverse enough to cover all aspects of the section plan\n",
    "- Focused on authoritative sources (documentation, technical blogs, academic papers)\"\"\"\n",
    "\n",
    "# Section writer instructions\n",
    "section_writer_instructions = \"\"\"You are an expert technical writer crafting one section of a technical report.\n",
    "\n",
    "Topic for this section:\n",
    "{section_topic}\n",
    "\n",
    "Guidelines for writing:\n",
    "\n",
    "1. Technical Accuracy:\n",
    "- Include specific version numbers\n",
    "- Reference concrete metrics/benchmarks\n",
    "- Cite official documentation\n",
    "- Use technical terminology precisely\n",
    "\n",
    "2. Length and Style:\n",
    "- Strict 150-200 word limit\n",
    "- No marketing language\n",
    "- Technical focus\n",
    "- Write in simple, clear language\n",
    "- Start with your most important insight in **bold**\n",
    "- Use short paragraphs (2-3 sentences max)\n",
    "\n",
    "3. Structure:\n",
    "- Use ## for section title (Markdown format)\n",
    "- Only use ONE structural element IF it helps clarify your point:\n",
    "  * Either a focused table comparing 2-3 key items (using Markdown table syntax)\n",
    "  * Or a short list (3-5 items) using proper Markdown list syntax:\n",
    "    - Use `*` or `-` for unordered lists\n",
    "    - Use `1.` for ordered lists\n",
    "    - Ensure proper indentation and spacing\n",
    "- End with ### Sources that references the below source material formatted as:\n",
    "  * List each source with title, date, and URL\n",
    "  * Format: `- Title : URL`\n",
    "\n",
    "4. Writing Approach:\n",
    "- Include at least one specific example or case study\n",
    "- Use concrete details over general statements\n",
    "- Make every word count\n",
    "- No preamble prior to creating the section content\n",
    "- Focus on your single most important point\n",
    "\n",
    "5. Use this source material to help write the section:\n",
    "{context}\n",
    "\n",
    "6. Quality Checks:\n",
    "- Exactly 150-200 words (excluding title and sources)\n",
    "- Careful use of only ONE structural element (table or list) and only if it helps clarify your point\n",
    "- One specific example / case study\n",
    "- Starts with bold insight\n",
    "- No preamble prior to creating the section content\n",
    "- Sources cited at end\"\"\"\n",
    "\n",
    "def generate_queries(state: SectionState):\n",
    "    \"\"\" Generate search queries for a section \"\"\"\n",
    "\n",
    "    # Get state \n",
    "    number_of_queries = state[\"number_of_queries\"]\n",
    "    section = state[\"section\"]\n",
    "\n",
    "    # Generate queries \n",
    "    structured_llm = llm.with_structured_output(Queries)\n",
    "\n",
    "    # Format system instructions\n",
    "    system_instructions = query_writer_instructions.format(section_topic=section.description, number_of_queries=number_of_queries)\n",
    "\n",
    "    # Generate queries  \n",
    "    queries = invoke_structured_llm_with_retry(structured_llm,\n",
    "                                              [SystemMessage(content=system_instructions)]+[HumanMessage(content=\"Generate search queries on the provided topic.\")])\n",
    "\n",
    "    return {\"search_queries\": queries.queries}\n",
    "\n",
    "async def search_web(state: SectionState):\n",
    "    \"\"\" Search the web for each query, then return a list of raw sources and a formatted string of sources.\"\"\"\n",
    "    \n",
    "    # Get state \n",
    "    search_queries = state[\"search_queries\"]\n",
    "    tavily_topic = state[\"tavily_topic\"]\n",
    "    tavily_days = state.get(\"tavily_days\", None)\n",
    "\n",
    "    # Web search\n",
    "    query_list = [query.search_query for query in search_queries]\n",
    "    search_docs = await tavily_search_async(query_list, tavily_topic, tavily_days)\n",
    "\n",
    "    # Deduplicate and format sources\n",
    "    source_str = deduplicate_and_format_sources(search_docs, max_tokens_per_source=5000, include_raw_content=True)\n",
    "\n",
    "    return {\"source_str\": source_str}\n",
    "\n",
    "def write_section(state: SectionState):\n",
    "    \"\"\" Write a section of the report \"\"\"\n",
    "\n",
    "    # Get state \n",
    "    section = state[\"section\"]\n",
    "    source_str = state[\"source_str\"]\n",
    "\n",
    "    # Format system instructions\n",
    "    system_instructions = section_writer_instructions.format(section_title=section.name, section_topic=section.description, context=source_str)\n",
    "\n",
    "    # Generate section  \n",
    "    section_content = llm.invoke([SystemMessage(content=system_instructions)]+[HumanMessage(content=\"Generate a report section based on the provided sources.\")])\n",
    "    \n",
    "    # Write content to the section object  \n",
    "    section.content = section_content.content\n",
    "\n",
    "    # Write the updated section to completed sections\n",
    "    return {\"completed_sections\": [section]}\n",
    "\n",
    "# Add nodes and edges \n",
    "section_builder = StateGraph(SectionState, output=SectionOutputState)\n",
    "section_builder.add_node(\"generate_queries\", generate_queries)\n",
    "section_builder.add_node(\"search_web\", search_web)\n",
    "section_builder.add_node(\"write_section\", write_section)\n",
    "\n",
    "section_builder.add_edge(START, \"generate_queries\")\n",
    "section_builder.add_edge(\"generate_queries\", \"search_web\")\n",
    "section_builder.add_edge(\"search_web\", \"write_section\")\n",
    "section_builder.add_edge(\"write_section\", END)\n",
    "\n",
    "# Compile\n",
    "section_builder_graph = section_builder.compile()\n",
    "\n",
    "# View\n",
    "display(Image(section_builder_graph.get_graph(xray=1).draw_mermaid_png()))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Validate Single Section\n",
    "\n",
    "Run the Agent on a single section to verify that the content is generated as expected."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Test with one section\n",
    "section_list = sections['sections']  # keep the original `sections` dict intact so this cell can be re-run\n",
    "test_section = section_list[1]  # pick one section to test\n",
    "print(f\"{'='*50}\")\n",
    "print(f\"Name: {test_section.name}\")\n",
    "print(f\"Description: {test_section.description}\")\n",
    "print(f\"Research: {test_section.research}\")\n",
    "\n",
    "# Run\n",
    "report_section = await section_builder_graph.ainvoke({\"section\": test_section, \"number_of_queries\": 2, \"tavily_topic\": tavily_topic, \"tavily_days\": tavily_days})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import Markdown\n",
    "section = report_section['completed_sections'][0]\n",
    "Markdown(section.content)"
   ]
  },
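  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, you can count the words in the generated section against the 150-200 word target from the writer prompt. This is only a rough check: `split()` counts Markdown tokens (including the title and sources lines), not strict words."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Rough word count of the generated section (Markdown tokens, not strict words)\n",
    "print(f\"Approximate word count: {len(section.content.split())}\")"
   ]
  },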
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Write All Sections"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class ReportStateOutput(TypedDict):\n",
    "    final_report: str # Final report"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langgraph.constants import Send\n",
    "\n",
    "final_section_writer_instructions = \"\"\"You are an expert technical writer crafting a section that synthesizes information from the rest of the report.\n",
    "\n",
    "Section to write: \n",
    "{section_topic}\n",
    "\n",
    "Available report content:\n",
    "{context}\n",
    "\n",
    "1. Section-Specific Approach:\n",
    "\n",
    "For Introduction:\n",
    "- Use # for report title (Markdown format)\n",
    "- 50-100 word limit\n",
    "- Write in simple and clear language\n",
    "- Focus on the core motivation for the report in 1-2 paragraphs\n",
    "- Use a clear narrative arc to introduce the report\n",
    "- Include NO structural elements (no lists or tables)\n",
    "- No sources section needed\n",
    "\n",
    "For Conclusion/Summary:\n",
    "- Use ## for section title (Markdown format)\n",
    "- 100-150 word limit\n",
    "- For comparative reports:\n",
    "    * Must include a focused comparison table using Markdown table syntax\n",
    "    * Table should distill insights from the report\n",
    "    * Keep table entries clear and concise\n",
    "- For non-comparative reports: \n",
    "    * Only use ONE structural element IF it helps distill the points made in the report:\n",
    "    * Either a focused table comparing items present in the report (using Markdown table syntax)\n",
    "    * Or a short list using proper Markdown list syntax:\n",
    "      - Use `*` or `-` for unordered lists\n",
    "      - Use `1.` for ordered lists\n",
    "      - Ensure proper indentation and spacing\n",
    "- End with specific next steps or implications\n",
    "- No sources section needed\n",
    "\n",
    "2. Writing Approach:\n",
    "- Use concrete details over general statements\n",
    "- Make every word count\n",
    "- Focus on your single most important point\n",
    "\n",
    "3. Quality Checks:\n",
    "- For introduction: 50-100 word limit, # for report title, no structural elements, no sources section\n",
    "- For conclusion: 100-150 word limit, ## for section title, only ONE structural element at most, no sources section\n",
    "- Markdown format\n",
    "- Do not include word count or any preamble in your response\"\"\"\n",
    "\n",
    "def initiate_section_writing(state: ReportState):\n",
    "    \"\"\" This is the \"map\" step when we kick off web research for some sections of the report \"\"\"    \n",
    "    \n",
    "    # Kick off section writing in parallel via Send() API for any sections that require research\n",
    "    return [\n",
    "        Send(\"build_section_with_web_research\", {\"section\": s, \n",
    "                                                 \"number_of_queries\": state[\"number_of_queries\"], \n",
    "                                                 \"tavily_topic\": state[\"tavily_topic\"], \n",
    "                                                 \"tavily_days\": state.get(\"tavily_days\", None)}) \n",
    "        for s in state[\"sections\"] \n",
    "        if s.research\n",
    "    ]\n",
    "\n",
    "def write_final_sections(state: SectionState):\n",
    "    \"\"\" Write final sections of the report, which do not require web search and use the completed sections as context \"\"\"\n",
    "\n",
    "    # Get state \n",
    "    section = state[\"section\"]\n",
    "    completed_report_sections = state[\"report_sections_from_research\"]\n",
    "    \n",
    "    # Format system instructions\n",
    "    system_instructions = final_section_writer_instructions.format(section_title=section.name, section_topic=section.description, context=completed_report_sections)\n",
    "\n",
    "    # Generate section  \n",
    "    section_content = llm.invoke([SystemMessage(content=system_instructions)]+[HumanMessage(content=\"Generate a report section based on the provided sources.\")])\n",
    "    \n",
    "    # Write content to section \n",
    "    section.content = section_content.content\n",
    "\n",
    "    # Write the updated section to completed sections\n",
    "    return {\"completed_sections\": [section]}\n",
    "\n",
    "def gather_completed_sections(state: ReportState):\n",
    "    \"\"\" Gather completed sections from research \"\"\"    \n",
    "\n",
    "    # List of completed sections\n",
    "    completed_sections = state[\"completed_sections\"]\n",
    "\n",
    "    # Format completed section to str to use as context for final sections\n",
    "    completed_report_sections = format_sections(completed_sections)\n",
    "\n",
    "    return {\"report_sections_from_research\": completed_report_sections}\n",
    "\n",
    "def initiate_final_section_writing(state: ReportState):\n",
    "    \"\"\" This is the \"map\" step when we kick off writing, via the Send API, for any sections that do not require research \"\"\"    \n",
    "\n",
    "    # Kick off section writing in parallel via Send() API for any sections that do not require research\n",
    "    return [\n",
    "        Send(\"write_final_sections\", {\"section\": s, \"report_sections_from_research\": state[\"report_sections_from_research\"]}) \n",
    "        for s in state[\"sections\"] \n",
    "        if not s.research\n",
    "    ]\n",
    "\n",
    "def compile_final_report(state: ReportState):\n",
    "    \"\"\" Compile the final report \"\"\"    \n",
    "\n",
    "    # Get sections\n",
    "    sections = state[\"sections\"]\n",
    "    completed_sections = {s.name: s.content for s in state[\"completed_sections\"]}\n",
    "\n",
    "    # Update sections with completed content while maintaining original order\n",
    "    for section in sections:\n",
    "        section.content = completed_sections[section.name]\n",
    "\n",
    "    # Compile final report\n",
    "    all_sections = \"\\n\\n\".join([s.content for s in sections])\n",
    "\n",
    "    return {\"final_report\": all_sections}\n",
    "\n",
    "# Add nodes and edges \n",
    "builder = StateGraph(ReportState, output=ReportStateOutput)\n",
    "builder.add_node(\"generate_report_plan\", generate_report_plan)\n",
    "builder.add_node(\"build_section_with_web_research\", section_builder.compile())\n",
    "builder.add_node(\"gather_completed_sections\", gather_completed_sections)\n",
    "builder.add_node(\"write_final_sections\", write_final_sections)\n",
    "builder.add_node(\"compile_final_report\", compile_final_report)\n",
    "builder.add_edge(START, \"generate_report_plan\")\n",
    "builder.add_conditional_edges(\"generate_report_plan\", initiate_section_writing, [\"build_section_with_web_research\"])\n",
    "builder.add_edge(\"build_section_with_web_research\", \"gather_completed_sections\")\n",
    "builder.add_conditional_edges(\"gather_completed_sections\", initiate_final_section_writing, [\"write_final_sections\"])\n",
    "builder.add_edge(\"write_final_sections\", \"compile_final_report\")\n",
    "builder.add_edge(\"compile_final_report\", END)\n",
    "\n",
    "graph = builder.compile()\n",
    "display(Image(graph.get_graph(xray=1).draw_mermaid_png()))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Structure\n",
    "report_structure = \"\"\"This report type focuses on comparative analysis.\n",
    "\n",
    "The report structure should include:\n",
    "1. Introduction (no research needed)\n",
    "   - Brief overview of the topic area\n",
    "   - Context for the comparison\n",
    "\n",
    "2. Main Body Sections:\n",
    "   - One dedicated section for EACH offering being compared in the user-provided list\n",
    "   - Each section should examine:\n",
    "     - Core Features (bulleted list)\n",
    "     - Architecture & Implementation (2-3 sentences)\n",
    "     - One example use case (2-3 sentences)\n",
    "   \n",
    "3. No Main Body Sections other than the ones dedicated to each offering in the user-provided list\n",
    "\n",
    "4. Conclusion with Comparison Table (no research needed)\n",
    "   - Structured comparison table that:\n",
    "     * Compares all offerings from the user-provided list across key dimensions\n",
    "     * Highlights relative strengths and weaknesses\n",
    "   - Final recommendations\"\"\"\n",
    "\n",
    "# Tavily search parameters\n",
    "tavily_topic = \"general\"\n",
    "tavily_days = None # Only applicable for news topic"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once again, choose the topic of your report. The default is CPU vs. GPU, but feel free to change it to a topic of your choice. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Topic \n",
    "report_topic = \"Give an overview of capabilities and specific use case examples for these processing units: CPU, GPU.\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "report = await graph.ainvoke({\"topic\": report_topic, \n",
    "                                   \"report_structure\": report_structure, \n",
    "                                   \"number_of_queries\": 2, \n",
    "                                   \"tavily_topic\": tavily_topic, \n",
    "                                   \"tavily_days\": tavily_days})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Final Report\n",
    "This is the final report the Agent has written. Let's validate all sections by rendering it as Markdown. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import Markdown\n",
    "Markdown(report['final_report'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Print the final report in plain text\n",
    "report['final_report']"
   ]
  }
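,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally, save the report to disk so it can be shared outside the notebook. The filename `final_report.md` below is an arbitrary choice, and this assumes write access to the current working directory."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Save the final report to a Markdown file (the filename is arbitrary)\n",
    "with open(\"final_report.md\", \"w\", encoding=\"utf-8\") as f:\n",
    "    f.write(report[\"final_report\"])"
   ]
  }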
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
