{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "f269b895",
   "metadata": {},
   "source": [
    "# LangGraph Fundamentals\n",
    "\n",
    "Welcome to this comprehensive guide on **LangGraph**, a powerful framework for building stateful, multi-agent applications with LLMs. \n",
    "\n",
    "LangGraph allows you to orchestrate complex AI workflows by organizing your application as a **graph** - where nodes represent actions (like calling an LLM or executing a tool) and edges define how data flows between them.\n",
    "\n",
    "In this notebook, you'll learn the **7 core building blocks** that make up every LangGraph application:\n",
    "\n",
    "1. **State** - The data container that flows through your graph\n",
    "2. **Nodes** - Functions that perform work and update the state\n",
    "3. **Edges** - Connections that define the flow between nodes\n",
    "4. **Reducers** - Strategies for merging state updates from parallel nodes\n",
    "5. **Tools** - Functions that LLMs can call to interact with external systems\n",
    "6. **Memory (Checkpointers)** - Persistence layer to save and resume workflows\n",
    "7. **Routing Functions** - Logic to dynamically choose paths through your graph\n",
    "8. **Superstep** - one complete round of execution \n",
    "\n",
    "By the end of this notebook, you'll understand how these components work together and be able to build your own agent workflows - from simple chatbots to complex multi-agent systems.\n",
    "\n",
    "Let's get started! 🚀\n",
    "\n",
    "---\n",
    "📢 Discover more Agentic AI notebooks on my [GitHub repository](https://github.com/lisekarimi/agentverse) and explore additional AI projects on my [portfolio](https://lisekarimi.com)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4d6633c0",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Standard library imports\n",
    "import os\n",
    "import operator\n",
    "import sqlite3\n",
    "from typing import TypedDict, Annotated, Literal\n",
    "\n",
    "# Third-party imports\n",
    "import gradio as gr\n",
    "from dotenv import load_dotenv\n",
    "from IPython.display import Image, display\n",
    "\n",
    "# LangChain imports\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_core.tools import tool\n",
    "from langchain_core.messages import HumanMessage, AIMessage\n",
    "\n",
    "# LangGraph imports\n",
    "from langgraph.graph import StateGraph, START, END\n",
    "from langgraph.graph.message import add_messages\n",
    "from langgraph.checkpoint.sqlite import SqliteSaver\n",
    "from langgraph.prebuilt import ToolNode, tools_condition\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "28c72920",
   "metadata": {},
   "source": [
    "## 🗺️ State, Nodes, and Edges\n",
    "\n",
    "Think of LangGraph like a road trip:\n",
    "\n",
    "- State = Your car (carries data like name, count, messages)\n",
    "- Nodes = Cities where you stop and do something (greet someone, calculate, call a tool)\n",
    "- Edges = Roads connecting the cities (define which city comes next)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "aa165ec3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# STATE = Your car carrying data\n",
    "class State(TypedDict):\n",
    "    message: str\n",
    "\n",
    "# NODES = Cities where work happens\n",
    "def city_a(state: State):\n",
    "    print(\"🏙️ City A: Processing...\")\n",
    "    return {\"message\": state[\"message\"] + \" → visited A\"}\n",
    "\n",
    "def city_b(state: State):\n",
    "    print(\"🏙️ City B: Processing...\")\n",
    "    return {\"message\": state[\"message\"] + \" → visited B\"}\n",
    "\n",
    "# BUILD THE MAP\n",
    "graph = StateGraph(State)\n",
    "graph.add_node(\"A\", city_a)\n",
    "graph.add_node(\"B\", city_b)\n",
    "\n",
    "# EDGES = Roads between cities\n",
    "graph.add_edge(START, \"A\")  # Start → City A\n",
    "graph.add_edge(\"A\", \"B\")     # City A → City B\n",
    "graph.add_edge(\"B\", END)     # City B → End\n",
    "\n",
    "app = graph.compile()\n",
    "\n",
    "# TAKE THE TRIP\n",
    "result = app.invoke({\"message\": \"Starting\"})\n",
    "print(f\"\\n📍 Final: {result['message']}\")\n",
    "\n",
    "# Generate and display the graph\n",
    "graph_image = app.get_graph().draw_mermaid_png()\n",
    "display(Image(graph_image))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "94c5f9c4",
   "metadata": {},
   "source": [
    "## 🔀 Reducer\n",
    "\n",
    "You need a reducer (which is a function) when multiple nodes want to update the SAME field in the state, and you want to combine/merge their updates instead of having the last node overwrite everyone else's work.\n",
    "\n",
    "**Common cases:**\n",
    "\n",
    "1. **Multiple agents collecting data** - Each agent finds something and you want to keep ALL findings, not just the last one\n",
    "   - Example: 3 researchers each add their findings to a list\n",
    "\n",
    "2. **Accumulating messages** - Multiple nodes adding messages to a conversation history\n",
    "   - Example: User message + Agent 1 response + Agent 2 response = full conversation\n",
    "\n",
    "3. **Aggregating scores/metrics** - Multiple nodes calculating partial results\n",
    "   - Example: Sum up scores from different evaluators\n",
    "\n",
    "4. **Parallel processing** - When nodes run in parallel and all contribute to the same field\n",
    "\n",
    "**Without reducer:** Last node wins, previous updates are lost ❌  \n",
    "**With reducer:** All updates are combined/merged ✅\n",
    "\n",
    "**`add_messages`** is the most common for chat applications (handles message-specific logic like deduplication), while **`operator.add`** is the most common for general list appending in any other use case."
   ]
  },
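  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f8a2c1b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Conceptual sketch only (LangGraph calls the reducer internally): a reducer\n",
    "# is just a plain function that takes the existing value and an update, and\n",
    "# returns the merged result. This is what operator.add effectively does when\n",
    "# two parallel nodes both return a list for the same state field.\n",
    "import operator\n",
    "\n",
    "current = [10, 20]   # state after node A's update\n",
    "update = [30, 40]    # node B's update to the same field\n",
    "merged = operator.add(current, update)  # same as current + update\n",
    "print(merged)  # [10, 20, 30, 40] - both updates kept, nothing overwritten"
   ]
  },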
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ec406050",
   "metadata": {},
   "outputs": [],
   "source": [
    "# ========== STATE WITH operator.add REDUCER ==========\n",
    "class State(TypedDict):\n",
    "    # WITH reducer: values accumulate ✅\n",
    "    scores: Annotated[list[int], operator.add]\n",
    "\n",
    "# ========== TWO NODES THAT RUN IN PARALLEL ==========\n",
    "def node_a(state: State):\n",
    "    print(\"Node A running...\")\n",
    "    return {\"scores\": [10, 20]}\n",
    "\n",
    "def node_b(state: State):\n",
    "    print(\"Node B running...\")\n",
    "    return {\"scores\": [30, 40]}\n",
    "\n",
    "# ========== BUILD GRAPH ==========\n",
    "graph = StateGraph(State)\n",
    "graph.add_node(\"a\", node_a)\n",
    "graph.add_node(\"b\", node_b)\n",
    "\n",
    "# Both run in parallel from START\n",
    "graph.add_edge(START, \"a\")\n",
    "graph.add_edge(START, \"b\")\n",
    "graph.add_edge(\"a\", END)\n",
    "graph.add_edge(\"b\", END)\n",
    "\n",
    "app = graph.compile()\n",
    "\n",
    "# ========== RUN ==========\n",
    "result = app.invoke({\"scores\": []})\n",
    "\n",
    "print(\"\\nScores (with operator.add reducer):\", result[\"scores\"])\n",
    "\n",
    "# Generate and display the graph\n",
    "graph_image = app.get_graph().draw_mermaid_png()\n",
    "display(Image(graph_image))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c7bc8ac5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# ========== STATE FOR AGENT ==========\n",
    "class AgentState(TypedDict):\n",
    "    messages: Annotated[list, add_messages]\n",
    "\n",
    "# ========== AGENT NODES ==========\n",
    "def analyzer_agent(state: AgentState):\n",
    "    \"\"\"Analyzes user input\"\"\"\n",
    "    last_message = state[\"messages\"][-1].content\n",
    "\n",
    "    return {\n",
    "        \"messages\": [AIMessage(content=f\"Analysis: '{last_message}' looks like a question\")]\n",
    "    }\n",
    "\n",
    "def responder_agent(state: AgentState):\n",
    "    \"\"\"Responds based on analysis\"\"\"\n",
    "    return {\n",
    "        \"messages\": [AIMessage(content=\"Here's my answer based on the analysis...\")]\n",
    "    }\n",
    "\n",
    "# ========== BUILD GRAPH ==========\n",
    "graph = StateGraph(AgentState)\n",
    "graph.add_node(\"analyze\", analyzer_agent)\n",
    "graph.add_node(\"respond\", responder_agent)\n",
    "\n",
    "graph.add_edge(START, \"analyze\")\n",
    "graph.add_edge(\"analyze\", \"respond\")\n",
    "graph.add_edge(\"respond\", END)\n",
    "\n",
    "app = graph.compile()\n",
    "\n",
    "# ========== RUN ==========\n",
    "result = app.invoke({\n",
    "    \"messages\": [HumanMessage(content=\"What is LangGraph?\")]\n",
    "})\n",
    "\n",
    "print(\"Full conversation:\")\n",
    "for msg in result[\"messages\"]:\n",
    "    print(f\"  [{msg.type}]: {msg.content}\")\n",
    "\n",
    "# Generate and display the graph\n",
    "graph_image = app.get_graph().draw_mermaid_png()\n",
    "display(Image(graph_image))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "46fee521",
   "metadata": {},
   "source": [
    "## 🛠️ Tools\n",
    "Tools are functions that agents can call to perform specific tasks"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dd5af568",
   "metadata": {},
   "outputs": [],
   "source": [
    "# ========== SIMPLE TOOL ==========\n",
    "@tool\n",
    "def send_notification(message: str) -> str:\n",
    "    \"\"\"Send a notification with a message\"\"\"\n",
    "    print(f\"📩 Notification sent: {message}\")\n",
    "    return f\"Sent: {message}\"\n",
    "\n",
    "# ========== STATE ==========\n",
    "class State(TypedDict):\n",
    "    messages: Annotated[list, add_messages]\n",
    "\n",
    "# ========== LLM WITH TOOLS ==========\n",
    "tools = [send_notification]\n",
    "llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
    "llm_with_tools = llm.bind_tools(tools)\n",
    "\n",
    "# ========== NODES ==========\n",
    "def chatbot(state: State):\n",
    "    \"\"\"LLM decides whether to use tools\"\"\"\n",
    "    return {\"messages\": [llm_with_tools.invoke(state[\"messages\"])]}\n",
    "\n",
    "tool_node = ToolNode(tools)\n",
    "\n",
    "# ========== BUILD GRAPH ==========\n",
    "graph = StateGraph(State)\n",
    "graph.add_node(\"chatbot\", chatbot)\n",
    "graph.add_node(\"tools\", tool_node)\n",
    "\n",
    "graph.add_edge(START, \"chatbot\")\n",
    "graph.add_conditional_edges(\"chatbot\", tools_condition)\n",
    "graph.add_edge(\"tools\", \"chatbot\")\n",
    "\n",
    "app = graph.compile()\n",
    "\n",
    "# ========== RUN ==========\n",
    "result = app.invoke({\n",
    "    \"messages\": [(\"user\", \"Send me a notification saying 'Hello World'\")]\n",
    "})\n",
    "\n",
    "print(\"\\nFinal messages:\")\n",
    "for msg in result[\"messages\"]:\n",
    "    print(f\"  {msg.type}: {msg.content}\")\n",
    "\n",
    "# Generate and display the graph\n",
    "graph_image = app.get_graph().draw_mermaid_png()\n",
    "display(Image(graph_image))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a4e5a9fc",
   "metadata": {},
   "source": [
    "## 💾 Memory\n",
    "\n",
    "- Memory (checkpointer) allows the chatbot to remember past conversations:\n",
    "    - SqliteSaver stores conversation history in a database file (memory.db)\n",
    "    - thread_id identifies different conversation sessions (like a chat room ID)\n",
    "    - Same thread_id = same conversation history\n",
    "    - Different thread_id = separate, independent conversations\n",
    "    - When you restart the code, it loads the saved history from the database\n",
    "    - add_messages reducer ensures messages are appended (not overwritten)"
   ]
  },
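  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7e9d4b2a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Conceptual sketch (NOT the real SqliteSaver internals): a checkpointer is\n",
    "# essentially a store keyed by thread_id. Same thread_id = same history;\n",
    "# a different thread_id starts a separate, independent history.\n",
    "checkpoints: dict[str, list[str]] = {}\n",
    "\n",
    "def save_message(thread_id: str, message: str) -> list[str]:\n",
    "    history = checkpoints.setdefault(thread_id, [])\n",
    "    history.append(message)  # append, never overwrite (like add_messages)\n",
    "    return history\n",
    "\n",
    "save_message(\"1\", \"Hi, I'm Alice\")\n",
    "save_message(\"1\", \"What's my name?\")  # thread 1 keeps its history\n",
    "save_message(\"2\", \"Hello\")            # thread 2 is separate\n",
    "\n",
    "print(checkpoints[\"1\"])  # two messages\n",
    "print(checkpoints[\"2\"])  # one message"
   ]
  },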
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "679785f8",
   "metadata": {},
   "outputs": [],
   "source": [
    "load_dotenv()\n",
    "\n",
    "# State\n",
    "class State(TypedDict):\n",
    "    messages: Annotated[list, add_messages]\n",
    "\n",
    "# LLM\n",
    "llm = ChatOpenAI(model=\"gpt-4o-mini\", api_key=os.getenv(\"OPENAI_API_KEY\"))\n",
    "\n",
    "# Node\n",
    "def chatbot(state: State):\n",
    "    return {\"messages\": [llm.invoke(state[\"messages\"])]} # Pass the entire conversation history\n",
    "\n",
    "# Graph with SQLite\n",
    "graph = StateGraph(State)\n",
    "graph.add_node(\"chatbot\", chatbot)\n",
    "graph.add_edge(START, \"chatbot\")\n",
    "graph.add_edge(\"chatbot\", END)\n",
    "\n",
    "conn = sqlite3.connect(\"memory.db\", check_same_thread=False)\n",
    "memory = SqliteSaver(conn)\n",
    "app = graph.compile(checkpointer=memory)\n",
    "\n",
    "# Gradio\n",
    "def chat(message, _):\n",
    "    config = {\"configurable\": {\"thread_id\": \"1\"}}\n",
    "    response = app.invoke({\"messages\": [(\"user\", message)]}, config) # Runs the entire graph workflow\n",
    "    return response[\"messages\"][-1].content\n",
    "\n",
    "demo = gr.ChatInterface(chat, title=\"LangGraph Chatbot with Memory\")\n",
    "demo.launch()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b2fb4383",
   "metadata": {},
   "source": [
    "## 🚦 Routing Functions\n",
    "A routing function is like a traffic light that decides which path to take based on conditions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1a173cfb",
   "metadata": {},
   "outputs": [],
   "source": [
    "# STATE\n",
    "class State(TypedDict):\n",
    "    question: str\n",
    "    info_collected: list[str]\n",
    "    answer: str\n",
    "\n",
    "# NODES\n",
    "def search_page(state: State):\n",
    "    \"\"\"Scrape a webpage\"\"\"\n",
    "    print(\"🔍 Scraping webpage...\")\n",
    "    new_info = f\"Info from page {len(state['info_collected']) + 1}\"\n",
    "    return {\"info_collected\": state[\"info_collected\"] + [new_info]}\n",
    "\n",
    "def give_answer(state: State):\n",
    "    \"\"\"Provide final answer\"\"\"\n",
    "    print(\"✅ Enough info! Giving answer...\")\n",
    "    return {\"answer\": f\"Answer based on {len(state['info_collected'])} pages\"}\n",
    "\n",
    "# ROUTING FUNCTION - Do we have enough info?\n",
    "def check_if_enough(state: State) -> Literal[\"search_more\", \"answer\"]:\n",
    "    \"\"\"Decide: need more info or ready to answer?\"\"\"\n",
    "    if len(state[\"info_collected\"]) < 3:\n",
    "        print(f\"⚠️ Only {len(state['info_collected'])} pages. Need more!\")\n",
    "        return \"search_more\"\n",
    "    else:\n",
    "        print(f\"✅ Got {len(state['info_collected'])} pages. Ready!\")\n",
    "        return \"answer\"\n",
    "\n",
    "# BUILD GRAPH\n",
    "graph = StateGraph(State)\n",
    "graph.add_node(\"search\", search_page)\n",
    "graph.add_node(\"answer\", give_answer)\n",
    "\n",
    "graph.add_edge(START, \"search\")\n",
    "\n",
    "# CONDITIONAL EDGE - Router decides: more scraping or answer?\n",
    "graph.add_conditional_edges(\n",
    "    \"search\",\n",
    "    check_if_enough,\n",
    "    {\"search_more\": \"search\", \"answer\": \"answer\"}  # Loop back or finish\n",
    ")\n",
    "\n",
    "graph.add_edge(\"answer\", END)\n",
    "\n",
    "app = graph.compile()\n",
    "\n",
    "# RUN\n",
    "result = app.invoke({\n",
    "    \"question\": \"What is LangGraph?\",\n",
    "    \"info_collected\": [],\n",
    "    \"answer\": \"\"\n",
    "})\n",
    "\n",
    "print(f\"\\n📄 Pages scraped: {len(result['info_collected'])}\")\n",
    "print(f\"💬 Answer: {result['answer']}\")\n",
    "\n",
    "# Generate and display the graph\n",
    "graph_image = app.get_graph().draw_mermaid_png()\n",
    "display(Image(graph_image))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "13c65a3e",
   "metadata": {},
   "source": [
    "## ⚡ Superstep\n",
    "\n",
    "Here's a SIMPLE analogy:\n",
    "\n",
    "**Making a Sandwich (Sequential)**\n",
    "\n",
    "**Steps:**\n",
    "1. Get bread\n",
    "2. Add peanut butter\n",
    "3. Add jelly\n",
    "4. Close sandwich\n",
    "\n",
    "**Each step happens ONE AT A TIME** (you can't add jelly before bread!)\n",
    "\n",
    "**In LangGraph terms:**\n",
    "- Each step = 1 node\n",
    "- Since they're sequential, each is also called a \"superstep\"\n",
    "- **3 steps = 3 supersteps**\n",
    "\n",
    "---\n",
    "\n",
    "**Making 3 Sandwiches with 3 People (Parallel)**\n",
    "\n",
    "**Superstep 1:** (Everyone works at SAME TIME)\n",
    "- Person A gets bread\n",
    "- Person B gets bread\n",
    "- Person C gets bread\n",
    "**(3 steps happen in 1 superstep!)**\n",
    "\n",
    "**Superstep 2:** (Everyone waits for everyone to finish, then all work together)\n",
    "- Person A adds peanut butter\n",
    "- Person B adds peanut butter  \n",
    "- Person C adds peanut butter\n",
    "**(3 steps happen in 1 superstep!)**\n",
    "\n",
    "**Superstep 3:**\n",
    "- Person A adds jelly\n",
    "- Person B adds jelly\n",
    "- Person C adds jelly\n",
    "\n",
    "---\n",
    "\n",
    "**The Rule:**\n",
    "\n",
    "**Superstep = A \"round\" where everyone works, then everyone waits**\n",
    "\n",
    "- **Sequential** → 1 person working = 1 step = 1 superstep\n",
    "- **Parallel** → 3 people working = 3 steps = 1 superstep\n",
    "\n",
    "---\n",
    "\n",
    "A superstep is **one complete** round of execution in LangGraph where:\n",
    "\n",
    "- All nodes that are ready to run execute (potentially in parallel)\n",
    "- Their updates get applied to the state\n",
    "- LangGraph checks what to do next"
   ]
  },
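  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5c6f1e8d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Conceptual sketch of the superstep rule in plain Python (not LangGraph\n",
    "# internals): in each round, every node that is ready runs, ALL of their\n",
    "# updates are merged into the state, and only then does the next round start.\n",
    "state = {\"scores\": []}\n",
    "\n",
    "# Superstep 1: three \"parallel\" nodes all run in the same round\n",
    "round_1_updates = [[10], [20], [30]]  # one update per node\n",
    "for update in round_1_updates:\n",
    "    state[\"scores\"] = state[\"scores\"] + update  # reducer merges each update\n",
    "print(\"After superstep 1:\", state)  # 3 node runs, but only 1 superstep\n",
    "\n",
    "# Superstep 2: a single node runs in its own round\n",
    "state[\"scores\"] = state[\"scores\"] + [sum(state[\"scores\"])]\n",
    "print(\"After superstep 2:\", state)"
   ]
  },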
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0e5c47c6",
   "metadata": {},
   "outputs": [],
   "source": [
    "class State(TypedDict):\n",
    "    count: int\n",
    "\n",
    "def step_1(state: State):\n",
    "    print(f\"Step 1 (superstep 1) - count: {state['count']}\")\n",
    "    return {\"count\": state[\"count\"] + 1}\n",
    "\n",
    "def step_2(state: State):\n",
    "    print(f\"Step 2 (superstep 2) - count: {state['count']}\")\n",
    "    return {\"count\": state[\"count\"] + 1}\n",
    "\n",
    "def step_3(state: State):\n",
    "    print(f\"Step 3 (superstep 3) - count: {state['count']}\")\n",
    "    return {\"count\": state[\"count\"] + 1}\n",
    "\n",
    "graph = StateGraph(State)\n",
    "graph.add_node(\"s1\", step_1)\n",
    "graph.add_node(\"s2\", step_2)\n",
    "graph.add_node(\"s3\", step_3)\n",
    "\n",
    "graph.add_edge(START, \"s1\")\n",
    "graph.add_edge(\"s1\", \"s2\")\n",
    "graph.add_edge(\"s2\", \"s3\")\n",
    "graph.add_edge(\"s3\", END)\n",
    "\n",
    "app = graph.compile()\n",
    "\n",
    "result = app.invoke({\"count\": 0})\n",
    "\n",
    "# Generate and display the graph\n",
    "graph_image = app.get_graph().draw_mermaid_png()\n",
    "display(Image(graph_image))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
