{
  "cells": [
    {
      "cell_type": "markdown",
      "id": "3631f2b9-aa79-472e-a9d6-9125a90ee704",
      "metadata": {},
      "source": [
        "# How to convert LangGraph calls to LangGraph Cloud calls"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "2e9edff6-38a4-45b8-a612-fb594a226879",
      "metadata": {},
      "source": [
        "So you're used to interacting with your graph locally, but now you've deployed it with LangGraph Cloud. How do you update all the places in your codebase that call LangGraph directly so they call LangGraph Cloud instead? This notebook contains side-by-side comparisons so you can easily transition from calling LangGraph to calling LangGraph Cloud."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "7c2f84f1-0751-4779-97d4-5cbb286093b7",
      "metadata": {},
      "source": [
        "## Setup"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "323db423-b644-40bd-9c2d-976a53f602f7",
      "metadata": {},
      "source": [
        "We'll be using a simple ReAct agent for this how-to guide. You will also need to set up a project with `agent.py` and `langgraph.json` files. See [quick start](https://langchain-ai.github.io/langgraph/cloud/quick_start/#develop) for setting this up."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "id": "6b4285e4-7434-4971-bde0-aabceef8ee7e",
      "metadata": {},
      "outputs": [],
      "source": [
        "%%capture --no-stderr\n",
        "%pip install -U langgraph langchain-openai"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "id": "f7f9f24a-e3d0-422b-8924-47950b2facd6",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "OPENAI_API_KEY:  ········\n"
          ]
        }
      ],
      "source": [
        "import getpass\n",
        "import os\n",
        "\n",
        "\n",
        "def _set_env(var: str):\n",
        "    if not os.environ.get(var):\n",
        "        os.environ[var] = getpass.getpass(f\"{var}: \")\n",
        "\n",
        "\n",
        "_set_env(\"OPENAI_API_KEY\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "id": "ef5a3ec6-0cd0-4541-ab1b-d63ede22720e",
      "metadata": {},
      "outputs": [],
      "source": [
        "# This is all that's needed for agent.py\n",
        "from typing import Literal\n",
        "from langchain_core.tools import tool\n",
        "from langchain_openai import ChatOpenAI\n",
        "from langgraph.prebuilt import create_react_agent\n",
        "\n",
        "\n",
        "@tool\n",
        "def get_weather(city: Literal[\"nyc\", \"sf\"]):\n",
        "    \"\"\"Use this to get weather information.\"\"\"\n",
        "    if city == \"nyc\":\n",
        "        return \"It might be cloudy in nyc\"\n",
        "    elif city == \"sf\":\n",
        "        return \"It's always sunny in sf\"\n",
        "    else:\n",
        "        raise AssertionError(\"Unknown city\")\n",
        "\n",
        "\n",
        "tools = [get_weather]\n",
        "\n",
        "model = ChatOpenAI(model=\"gpt-4o\", temperature=0)\n",
        "graph = create_react_agent(model, tools)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "eb9e138e-cb0e-480a-a32a-d63019720262",
      "metadata": {},
      "source": [
        "Now we'll set up the LangGraph client. By default, the client assumes the LangGraph Cloud server is running on `localhost:8123`; if your server is running elsewhere, pass its URL via `get_client(url=...)`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "id": "3ab06b39-7bd1-4611-a37e-9b94e25643d2",
      "metadata": {},
      "outputs": [],
      "source": [
        "from langgraph_sdk import get_client\n",
        "\n",
        "client = get_client()"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "ee4e8d83-e68d-40b8-a128-5c85e0aafc85",
      "metadata": {},
      "source": [
        "## Invoking the graph"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "ed935900-1ecc-4f39-9dc9-70a92f179d00",
      "metadata": {},
      "source": [
        "The examples below show how to mirror the `.invoke()` / `.ainvoke()` methods of LangGraph's `CompiledGraph` runnable, i.e. create a blocking graph execution."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "a3e5c63f-d33b-4d27-b90c-205ee30f1197",
      "metadata": {},
      "source": [
        "### With LangGraph"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "id": "a11bd693-662e-42ad-aa8f-99531f0d091e",
      "metadata": {},
      "outputs": [],
      "source": [
        "inputs = {\"messages\": [(\"human\", \"what's the weather in sf\")]}\n",
        "invoke_output = await graph.ainvoke(inputs)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "id": "9f649fcc-81f0-4c94-9ee9-b2db31bfc5d4",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "================================\u001b[1m Human Message \u001b[0m=================================\n",
            "\n",
            "what's the weather in sf\n",
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "Tool Calls:\n",
            "  get_weather (call_GOKlsBY2XKm7pZnmAzJweYDU)\n",
            " Call ID: call_GOKlsBY2XKm7pZnmAzJweYDU\n",
            "  Args:\n",
            "    city: sf\n",
            "=================================\u001b[1m Tool Message \u001b[0m=================================\n",
            "Name: get_weather\n",
            "\n",
            "It's always sunny in sf\n",
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "\n",
            "The weather in San Francisco is currently sunny.\n"
          ]
        }
      ],
      "source": [
        "for m in invoke_output[\"messages\"]:\n",
        "    m.pretty_print()"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "ec78ae0f-c474-472e-8273-658bb56f1476",
      "metadata": {},
      "source": [
        "### With LangGraph Cloud"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "id": "72bd6ac6-ace3-43b4-99b8-559b3d2a614f",
      "metadata": {},
      "outputs": [],
      "source": [
        "# NOTE: we're not specifying a thread here -- the server creates a new thread just for this run\n",
        "wait_output = await client.runs.wait(None, \"agent\", input=inputs)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "id": "0d81c783-2c7b-421a-a0f9-752e22039472",
      "metadata": {},
      "outputs": [],
      "source": [
        "# we'll use this for pretty message formatting\n",
        "from langchain_core.messages import convert_to_messages"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "id": "da0f4d54-662c-42b0-ba35-c52f30a2fb1e",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "================================\u001b[1m Human Message \u001b[0m=================================\n",
            "\n",
            "what's the weather in sf\n",
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "Tool Calls:\n",
            "  get_weather (call_pQJsT9uLG3nVppN8Dt2OhnFx)\n",
            " Call ID: call_pQJsT9uLG3nVppN8Dt2OhnFx\n",
            "  Args:\n",
            "    city: sf\n",
            "=================================\u001b[1m Tool Message \u001b[0m=================================\n",
            "Name: get_weather\n",
            "\n",
            "It's always sunny in sf\n",
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "\n",
            "The weather in San Francisco is currently sunny.\n"
          ]
        }
      ],
      "source": [
        "for m in convert_to_messages(wait_output[\"messages\"]):\n",
        "    m.pretty_print()"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "129853ea-83e2-4adf-b5e4-60f70c0ccb73",
      "metadata": {},
      "source": [
        "## Streaming"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "1468248f-f50b-43f3-b566-48ae4a1b643b",
      "metadata": {},
      "source": [
        "The examples below show how to mirror the `.stream()` / `.astream()` methods for streaming partial graph execution results.  \n",
        "Note: LangGraph's `stream_mode=values/updates/debug` options behave nearly identically in LangGraph Cloud, except that the cloud stream also emits additional chunks with `metadata` / `end` event types."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "3ed2aae5-d137-4a5b-868b-ed4d551aefaa",
      "metadata": {},
      "source": [
        "### With LangGraph"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "id": "e9e9ffb0-2cd5-466f-b70b-b6ed51b852d1",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "================================\u001b[1m Human Message \u001b[0m=================================\n",
            "\n",
            "what's the weather in sf\n",
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "Tool Calls:\n",
            "  get_weather (call_302y9671bqMkMcpLZOWLNAnq)\n",
            " Call ID: call_302y9671bqMkMcpLZOWLNAnq\n",
            "  Args:\n",
            "    city: sf\n",
            "=================================\u001b[1m Tool Message \u001b[0m=================================\n",
            "Name: get_weather\n",
            "\n",
            "It's always sunny in sf\n",
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "\n",
            "The weather in San Francisco is currently sunny.\n"
          ]
        }
      ],
      "source": [
        "inputs = {\"messages\": [(\"human\", \"what's the weather in sf\")]}\n",
        "async for chunk in graph.astream(inputs, stream_mode=\"values\"):\n",
        "    chunk[\"messages\"][-1].pretty_print()"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "5da024a7-1d7e-4212-9251-aaadaba6acbd",
      "metadata": {},
      "source": [
        "### With LangGraph Cloud"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "id": "a3c02bf7-af0c-47b9-8339-1e303571220e",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "================================\u001b[1m Human Message \u001b[0m=================================\n",
            "\n",
            "what's the weather in sf\n",
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "Tool Calls:\n",
            "  get_weather (call_NYVNSiBeF0oTAYnaDrlEAG7a)\n",
            " Call ID: call_NYVNSiBeF0oTAYnaDrlEAG7a\n",
            "  Args:\n",
            "    city: sf\n",
            "=================================\u001b[1m Tool Message \u001b[0m=================================\n",
            "Name: get_weather\n",
            "\n",
            "It's always sunny in sf\n",
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "\n",
            "The weather in San Francisco is currently sunny.\n"
          ]
        }
      ],
      "source": [
        "inputs = {\"messages\": [(\"human\", \"what's the weather in sf\")]}\n",
        "async for chunk in client.runs.stream(\n",
        "    None, \"agent\", input=inputs, stream_mode=\"values\"\n",
        "):\n",
        "    if chunk.event == \"values\":\n",
        "        messages = convert_to_messages(chunk.data[\"messages\"])\n",
        "        messages[-1].pretty_print()"
      ]
    },
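    {
      "cell_type": "markdown",
      "id": "c1a9d2e4-5f60-4b7a-9c31-0e8f2a6d4b10",
      "metadata": {},
      "source": [
        "In addition to `values`, the cloud stream also emits `metadata` and `end` event types, which is why the loop above filters on `chunk.event`. Here is a rough, self-contained sketch of that filtering pattern using mock chunks (no live server involved):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "c1a9d2e4-5f60-4b7a-9c31-0e8f2a6d4b11",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Hypothetical mock of the event types a cloud stream can emit;\n",
        "# `metadata` and `end` are extra relative to local `stream_mode=\"values\"`\n",
        "from dataclasses import dataclass, field\n",
        "\n",
        "\n",
        "@dataclass\n",
        "class MockChunk:\n",
        "    event: str\n",
        "    data: dict = field(default_factory=dict)\n",
        "\n",
        "\n",
        "mock_stream = [\n",
        "    MockChunk(\"metadata\", {\"run_id\": \"...\"}),\n",
        "    MockChunk(\"values\", {\"messages\": [{\"type\": \"human\", \"content\": \"hi\"}]}),\n",
        "    MockChunk(\"values\", {\"messages\": [{\"type\": \"ai\", \"content\": \"hello!\"}]}),\n",
        "    MockChunk(\"end\"),\n",
        "]\n",
        "\n",
        "# only `values` chunks carry graph state, so skip the rest\n",
        "for chunk in mock_stream:\n",
        "    if chunk.event == \"values\":\n",
        "        print(chunk.data[\"messages\"][-1][\"content\"])"
      ]
    },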
    {
      "cell_type": "markdown",
      "id": "d693e1b8-bb65-439f-bbbe-b6a12cc26f1a",
      "metadata": {},
      "source": [
        "## Persistence"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "bcd1a770-8f4a-4c5e-9e54-9851a6acb985",
      "metadata": {},
      "source": [
        "In LangGraph, you need to provide a `checkpointer` object when compiling your graph to persist state across interactions with your graph (i.e. threads). In LangGraph Cloud, you don't need to create a checkpointer -- the server already implements one for you. You can also directly manage the threads from a client."
      ]
    },
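    {
      "cell_type": "markdown",
      "id": "d4b7e8f1-2a3c-4d5e-8f90-1b2c3d4e5f60",
      "metadata": {},
      "source": [
        "Before comparing the two, here is a toy, dict-based sketch of the core idea (not the real checkpointer interface): state is stored per `thread_id`, so runs on the same thread share history while other threads start fresh."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "d4b7e8f1-2a3c-4d5e-8f90-1b2c3d4e5f61",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Toy illustration only -- not the real checkpointer API\n",
        "toy_checkpoints = {}\n",
        "\n",
        "\n",
        "def toy_run(thread_id, message):\n",
        "    # fetch (or create) the saved history for this thread and append to it\n",
        "    history = toy_checkpoints.setdefault(thread_id, [])\n",
        "    history.append(message)\n",
        "    return history\n",
        "\n",
        "\n",
        "toy_run(\"1\", \"what's the weather in nyc\")\n",
        "print(toy_run(\"1\", \"what's it known for?\"))  # thread 1 sees both messages\n",
        "print(toy_run(\"2\", \"what's it known for?\"))  # thread 2 sees only its own"
      ]
    },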
    {
      "cell_type": "markdown",
      "id": "3afcc9e4-e650-497c-be05-1d6a6ad06af3",
      "metadata": {},
      "source": [
        "### With LangGraph"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "id": "ac145136-410c-41fe-a936-00c8f6d9116f",
      "metadata": {},
      "outputs": [],
      "source": [
        "from langgraph.checkpoint.memory import MemorySaver"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 13,
      "id": "5fc8fb7d-989d-42a4-88eb-f037cf64f8d3",
      "metadata": {},
      "outputs": [],
      "source": [
        "checkpointer = MemorySaver()\n",
        "graph_with_memory = create_react_agent(model, tools, checkpointer=checkpointer)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 14,
      "id": "a93f76d3-3d97-435f-8718-bacd56002872",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "\n",
            "The weather in NYC might be cloudy.\n"
          ]
        }
      ],
      "source": [
        "inputs = {\"messages\": [(\"human\", \"what's the weather in nyc\")]}\n",
        "invoke_output = await graph_with_memory.ainvoke(\n",
        "    inputs, config={\"configurable\": {\"thread_id\": \"1\"}}\n",
        ")\n",
        "invoke_output[\"messages\"][-1].pretty_print()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 15,
      "id": "ba3a2d61-ecdd-4a6e-b275-e7cb5f465def",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "\n",
            "New York City (NYC) is known for a variety of iconic landmarks, cultural institutions, and vibrant neighborhoods. Some of the most notable things NYC is known for include:\n",
            "\n",
            "1. **Statue of Liberty**: A symbol of freedom and democracy.\n",
            "2. **Times Square**: Famous for its bright lights, Broadway theaters, and bustling atmosphere.\n",
            "3. **Central Park**: A large urban park offering a green oasis in the middle of the city.\n",
            "4. **Empire State Building**: An iconic skyscraper with an observation deck offering panoramic views of the city.\n",
            "5. **Broadway**: Renowned for its world-class theater productions.\n",
            "6. **Wall Street**: The financial hub of the United States.\n",
            "7. **Museums**: Including the Metropolitan Museum of Art, the Museum of Modern Art (MoMA), and the American Museum of Natural History.\n",
            "8. **Diverse Cuisine**: A melting pot of culinary experiences from around the world.\n",
            "9. **Cultural Diversity**: A rich tapestry of cultures, languages, and traditions.\n",
            "10. **Fashion**: A global fashion capital, home to numerous designers and fashion events.\n",
            "\n",
            "These are just a few highlights, but NYC offers countless other attractions and experiences.\n"
          ]
        }
      ],
      "source": [
        "inputs = {\"messages\": [(\"human\", \"what's it known for?\")]}\n",
        "invoke_output = await graph_with_memory.ainvoke(\n",
        "    inputs, config={\"configurable\": {\"thread_id\": \"1\"}}\n",
        ")\n",
        "invoke_output[\"messages\"][-1].pretty_print()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 16,
      "id": "990a9557-894f-4b1d-a9dd-8d089cf6e06b",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "\n",
            "Could you please specify what \"it\" refers to? Are you asking about a specific city, person, object, or something else?\n"
          ]
        }
      ],
      "source": [
        "inputs = {\"messages\": [(\"human\", \"what's it known for?\")]}\n",
        "invoke_output = await graph_with_memory.ainvoke(\n",
        "    inputs, config={\"configurable\": {\"thread_id\": \"2\"}}\n",
        ")\n",
        "invoke_output[\"messages\"][-1].pretty_print()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 17,
      "id": "98a998ac-1ff2-4eb7-8ff6-5a513c098807",
      "metadata": {},
      "outputs": [
        {
          "data": {
            "text/plain": [
              "{'v': 1,\n",
              " 'ts': '2024-06-22T02:31:49.722569+00:00',\n",
              " 'id': '1ef303f9-4149-6b56-8001-a80d1e3c9dc6',\n",
              " 'channel_values': {'messages': [HumanMessage(content=\"what's it known for?\", id='ea0d1672-05e9-4d77-9dff-b33bd5c824e7'),\n",
              "   AIMessage(content='Could you please specify what \"it\" refers to? Are you asking about a specific city, person, object, or something else?', response_metadata={'token_usage': {'completion_tokens': 28, 'prompt_tokens': 57, 'total_tokens': 85}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_3e7d703517', 'finish_reason': 'stop', 'logprobs': None}, id='run-f0381dc0-d891-4203-8f77-3155ba17998c-0', usage_metadata={'input_tokens': 57, 'output_tokens': 28, 'total_tokens': 85})],\n",
              "  'agent': 'agent'},\n",
              " 'channel_versions': {'__start__': 2,\n",
              "  'messages': 3,\n",
              "  'start:agent': 3,\n",
              "  'agent': 3},\n",
              " 'versions_seen': {'__start__': {'__start__': 1},\n",
              "  'agent': {'start:agent': 2},\n",
              "  'tools': {}},\n",
              " 'pending_sends': []}"
            ]
          },
          "execution_count": 17,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# get the state of the thread\n",
        "checkpointer.get({\"configurable\": {\"thread_id\": \"2\"}})"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "bd07ea78-12e9-475a-84e9-0d34f99c6024",
      "metadata": {},
      "source": [
        "### With LangGraph Cloud\n",
        "\n",
        "Let's now reproduce the same behavior with LangGraph Cloud. Note that instead of creating a checkpointer, we create a new thread on the server and pass its ID to the API."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 18,
      "id": "90b312d3-4b51-4953-8c78-8263a90b397a",
      "metadata": {},
      "outputs": [],
      "source": [
        "thread = await client.threads.create()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 19,
      "id": "3e523086-29ab-4b21-b762-21136d32e6fa",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "\n",
            "The weather in NYC might be cloudy.\n"
          ]
        }
      ],
      "source": [
        "inputs = {\"messages\": [(\"human\", \"what's the weather in nyc\")]}\n",
        "wait_output = await client.runs.wait(thread[\"thread_id\"], \"agent\", input=inputs)\n",
        "convert_to_messages(wait_output[\"messages\"])[-1].pretty_print()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 20,
      "id": "f430c8ec-782c-4003-9e30-0736cbdd37ce",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "\n",
            "New York City (NYC) is known for a variety of iconic landmarks, cultural institutions, and vibrant neighborhoods. Some of the most notable features include:\n",
            "\n",
            "1. **Statue of Liberty**: A symbol of freedom and democracy.\n",
            "2. **Times Square**: Known for its bright lights, Broadway theaters, and bustling atmosphere.\n",
            "3. **Central Park**: A large urban park offering a natural retreat in the middle of the city.\n",
            "4. **Empire State Building**: An iconic skyscraper with an observation deck offering panoramic views of the city.\n",
            "5. **Broadway**: Famous for its world-class theater productions.\n",
            "6. **Wall Street**: The financial hub of the United States.\n",
            "7. **Museums**: Including the Metropolitan Museum of Art, the Museum of Modern Art (MoMA), and the American Museum of Natural History.\n",
            "8. **Diverse Cuisine**: A melting pot of culinary experiences from around the world.\n",
            "9. **Cultural Diversity**: A rich tapestry of cultures, languages, and traditions.\n",
            "10. **Fashion**: A global fashion capital, home to New York Fashion Week.\n",
            "\n",
            "These are just a few highlights of what makes NYC a unique and vibrant city.\n"
          ]
        }
      ],
      "source": [
        "inputs = {\"messages\": [(\"human\", \"what's it known for?\")]}\n",
        "wait_output = await client.runs.wait(thread[\"thread_id\"], \"agent\", input=inputs)\n",
        "convert_to_messages(wait_output[\"messages\"])[-1].pretty_print()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 21,
      "id": "cb34efef-805d-455c-be3e-e2234d97b7cf",
      "metadata": {},
      "outputs": [],
      "source": [
        "thread = await client.threads.create()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 22,
      "id": "7628e108-338b-4eaf-9d57-defc2c7e2b46",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "\n",
            "Could you please specify what \"it\" refers to? Are you asking about a specific city, person, object, or something else?\n"
          ]
        }
      ],
      "source": [
        "inputs = {\"messages\": [(\"human\", \"what's it known for?\")]}\n",
        "wait_output = await client.runs.wait(thread[\"thread_id\"], \"agent\", input=inputs)\n",
        "convert_to_messages(wait_output[\"messages\"])[-1].pretty_print()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 23,
      "id": "22a3f9e6-a550-4074-95eb-be3866b77718",
      "metadata": {},
      "outputs": [
        {
          "data": {
            "text/plain": [
              "{'values': {'messages': [{'content': \"what's it known for?\",\n",
              "    'additional_kwargs': {},\n",
              "    'response_metadata': {},\n",
              "    'type': 'human',\n",
              "    'name': None,\n",
              "    'id': 'b62078f1-7c44-4a0e-b7b0-05e475ae3188',\n",
              "    'example': False},\n",
              "   {'content': 'Could you please specify what \"it\" refers to? Are you asking about a specific city, person, object, or something else?',\n",
              "    'additional_kwargs': {},\n",
              "    'response_metadata': {'finish_reason': 'stop'},\n",
              "    'type': 'ai',\n",
              "    'name': None,\n",
              "    'id': 'run-502c6cf3-d584-4e31-98a6-5e59f1d2a72f',\n",
              "    'example': False,\n",
              "    'tool_calls': [],\n",
              "    'invalid_tool_calls': [],\n",
              "    'usage_metadata': None}]},\n",
              " 'next': [],\n",
              " 'config': {'configurable': {'thread_id': 'fcff410c-9adb-416f-a7ce-09b230afcac9',\n",
              "   'thread_ts': '1ef303f9-7d8e-6d0a-8001-2b4ce14235da'}},\n",
              " 'metadata': {'step': 1,\n",
              "  'run_id': '1ef303f9-73e6-6c6b-b407-39938d3dfd7e',\n",
              "  'source': 'loop',\n",
              "  'writes': {'agent': {'messages': [{'id': 'run-502c6cf3-d584-4e31-98a6-5e59f1d2a72f',\n",
              "      'name': None,\n",
              "      'type': 'ai',\n",
              "      'content': 'Could you please specify what \"it\" refers to? Are you asking about a specific city, person, object, or something else?',\n",
              "      'example': False,\n",
              "      'tool_calls': [],\n",
              "      'usage_metadata': None,\n",
              "      'additional_kwargs': {},\n",
              "      'response_metadata': {'finish_reason': 'stop'},\n",
              "      'invalid_tool_calls': []}]}},\n",
              "  'user_id': '',\n",
              "  'graph_id': 'agent',\n",
              "  'thread_id': 'fcff410c-9adb-416f-a7ce-09b230afcac9',\n",
              "  'created_by': 'system',\n",
              "  'assistant_id': 'fe096781-5601-53d2-b2f6-0d3403f7e9ca'},\n",
              " 'created_at': '2024-06-22T02:31:56.042330+00:00',\n",
              " 'parent_config': {'configurable': {'thread_id': 'fcff410c-9adb-416f-a7ce-09b230afcac9',\n",
              "   'thread_ts': '1ef303f9-7400-6e2c-8000-e8d5075bfa2a'}}}"
            ]
          },
          "execution_count": 23,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# get the state of the thread\n",
        "await client.threads.get_state(thread[\"thread_id\"])"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "6b0d8ec5-316f-48e5-bebf-8d66c8dbd450",
      "metadata": {},
      "source": [
        "## Breakpoints\n",
        "\n",
        "### With LangGraph"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 24,
      "id": "b3722e75-b9f2-4a55-aae3-f6829a7a929e",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "================================\u001b[1m Human Message \u001b[0m=================================\n",
            "\n",
            "what's the weather in sf\n",
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "Tool Calls:\n",
            "  get_weather (call_cYp3BijeW2JNQ9RqJRdkrbMu)\n",
            " Call ID: call_cYp3BijeW2JNQ9RqJRdkrbMu\n",
            "  Args:\n",
            "    city: sf\n"
          ]
        }
      ],
      "source": [
        "inputs = {\"messages\": [(\"human\", \"what's the weather in sf\")]}\n",
        "async for chunk in graph_with_memory.astream(\n",
        "    inputs,\n",
        "    stream_mode=\"values\",\n",
        "    interrupt_before=[\"tools\"],\n",
        "    config={\"configurable\": {\"thread_id\": \"3\"}},\n",
        "):\n",
        "    chunk[\"messages\"][-1].pretty_print()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 25,
      "id": "6a58a513-7adc-4523-9145-6777f20521e4",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "=================================\u001b[1m Tool Message \u001b[0m=================================\n",
            "Name: get_weather\n",
            "\n",
            "It's always sunny in sf\n",
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "\n",
            "The weather in San Francisco is sunny!\n"
          ]
        }
      ],
      "source": [
        "async for chunk in graph_with_memory.astream(\n",
        "    None,\n",
        "    stream_mode=\"values\",\n",
        "    interrupt_before=[\"tools\"],\n",
        "    config={\"configurable\": {\"thread_id\": \"3\"}},\n",
        "):\n",
        "    chunk[\"messages\"][-1].pretty_print()"
      ]
    },
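    {
      "cell_type": "markdown",
      "id": "e5c8f9a2-3b4d-4e6f-9a01-2c3d4e5f6a70",
      "metadata": {},
      "source": [
        "One way to think about what just happened: a breakpoint is a pause-and-resume. The first call stops before the `tools` node; the second call (with `None` input) continues from the saved state. A loose, stand-alone analogy using a plain Python generator:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "e5c8f9a2-3b4d-4e6f-9a01-2c3d4e5f6a71",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Loose analogy only -- a generator pauses at `yield` much like the graph\n",
        "# pauses at `interrupt_before=[\"tools\"]`, then resumes where it left off\n",
        "def tiny_graph():\n",
        "    yield \"agent decided to call get_weather\"  # paused before \"tools\"\n",
        "    yield \"tool returned: It's always sunny in sf\"\n",
        "\n",
        "\n",
        "run = tiny_graph()\n",
        "print(next(run))  # first invocation stops at the breakpoint\n",
        "print(next(run))  # second invocation resumes past it"
      ]
    },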
    {
      "cell_type": "markdown",
      "id": "a84f11dd-e6db-4fc7-bde6-67db5bc01d0f",
      "metadata": {},
      "source": [
        "### With LangGraph Cloud"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "addd1bf8-d0da-40c2-9913-e145deed6b6d",
      "metadata": {},
      "source": [
        "Similar to the persistence example, we need to create a thread so we can persist state and continue from the breakpoint."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 26,
      "id": "96591cf8-98fc-4fa0-a03a-29e18e672126",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "================================\u001b[1m Human Message \u001b[0m=================================\n",
            "\n",
            "what's the weather in sf\n",
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "Tool Calls:\n",
            "  get_weather (call_MVQEJtPYAj1nJ7J6YaCeLX8a)\n",
            " Call ID: call_MVQEJtPYAj1nJ7J6YaCeLX8a\n",
            "  Args:\n",
            "    city: sf\n"
          ]
        }
      ],
      "source": [
        "thread = await client.threads.create()\n",
        "\n",
        "async for chunk in client.runs.stream(\n",
        "    thread[\"thread_id\"],\n",
        "    \"agent\",\n",
        "    input=inputs,\n",
        "    stream_mode=\"values\",\n",
        "    interrupt_before=[\"tools\"],\n",
        "):\n",
        "    if chunk.event == \"values\":\n",
        "        messages = convert_to_messages(chunk.data[\"messages\"])\n",
        "        messages[-1].pretty_print()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 27,
      "id": "bb7f74bd-7ce1-4cd3-9203-2a8aed7b8620",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "=================================\u001b[1m Tool Message \u001b[0m=================================\n",
            "Name: get_weather\n",
            "\n",
            "It's always sunny in sf\n",
            "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
            "\n",
            "The weather in San Francisco is currently sunny.\n"
          ]
        }
      ],
      "source": [
        "async for chunk in client.runs.stream(\n",
        "    thread[\"thread_id\"],\n",
        "    \"agent\",\n",
        "    input=None,\n",
        "    stream_mode=\"values\",\n",
        "    interrupt_before=[\"tools\"],\n",
        "):\n",
        "    if chunk.event == \"values\":\n",
        "        messages = convert_to_messages(chunk.data[\"messages\"])\n",
        "        messages[-1].pretty_print()"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "6af293ee-7866-4326-ba17-e3ffbb0c96c7",
      "metadata": {},
      "source": [
        "## Streaming events"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "cb4072c9-775e-4bf5-8a1a-fb822e6de9d7",
      "metadata": {},
      "source": [
        "To stream events in LangGraph, use the `.astream_events` method on the `CompiledGraph`. In LangGraph Cloud, pass `stream_mode=\"events\"` to `client.runs.stream` instead."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "4a158573-240a-44a3-b0ef-0cf9334042f2",
      "metadata": {},
      "source": [
        "### With LangGraph"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 28,
      "id": "94b815e4-1dd2-4999-9e73-6e29836d9160",
      "metadata": {},
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "/Users/vadymbarda/.virtualenvs/langgraph-example-dev/lib/python3.11/site-packages/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: This API is in beta and may change in the future.\n",
            "  warn_beta(\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "Invalid Tool Calls:\n",
            "  get_weather (call_dsr61w9qcahi8CC7LV2S29O3)\n",
            " Call ID: call_dsr61w9qcahi8CC7LV2S29O3\n",
            "  Args:\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "Tool Calls:\n",
            "   (None)\n",
            " Call ID: None\n",
            "  Args:\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "Invalid Tool Calls:\n",
            "  None (None)\n",
            " Call ID: None\n",
            "  Args:\n",
            "    city\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "Invalid Tool Calls:\n",
            "  None (None)\n",
            " Call ID: None\n",
            "  Args:\n",
            "    \":\"\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "Invalid Tool Calls:\n",
            "  None (None)\n",
            " Call ID: None\n",
            "  Args:\n",
            "    sf\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "Invalid Tool Calls:\n",
            "  None (None)\n",
            " Call ID: None\n",
            "  Args:\n",
            "    \"}\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "\n",
            "The\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "\n",
            " weather\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "\n",
            " in\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "\n",
            " San\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "\n",
            " Francisco\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "\n",
            " is\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "\n",
            " currently\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "\n",
            " sunny\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "\n",
            ".\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "\n",
            " Enjoy\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "\n",
            " the\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "\n",
            " sunshine\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n",
            "\n",
            "!\n",
            "============================\u001b[1m Aimessagechunk Message \u001b[0m============================\n"
          ]
        }
      ],
      "source": [
        "inputs = {\"messages\": [(\"human\", \"what's the weather in sf\")]}\n",
        "async for chunk in graph.astream_events(inputs, version=\"v2\"):\n",
        "    if chunk[\"event\"] == \"on_chat_model_stream\":\n",
        "        chunk[\"data\"][\"chunk\"].pretty_print()"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "08996d90-a3ff-4655-9763-1dd4971344d4",
      "metadata": {},
      "source": [
        "### With LangGraph Cloud"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 32,
      "id": "547bfcd2-01fe-4e7c-8734-0d02beb2c36e",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "{'content': '', 'additional_kwargs': {'tool_calls': [{'index': 0, 'id': 'call_JYWaAecaAV92cOlZwRHi9B7M', 'function': {'arguments': '', 'name': 'get_weather'}, 'type': 'function'}]}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-855fec3d-15df-4ae8-b74d-208a0e463be9', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [{'name': 'get_weather', 'args': '', 'id': 'call_JYWaAecaAV92cOlZwRHi9B7M', 'error': None}], 'usage_metadata': None, 'tool_call_chunks': [{'name': 'get_weather', 'args': '', 'id': 'call_JYWaAecaAV92cOlZwRHi9B7M', 'index': 0}]}\n",
            "{'content': '', 'additional_kwargs': {'tool_calls': [{'index': 0, 'id': None, 'function': {'arguments': '{\"', 'name': None}, 'type': None}]}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-855fec3d-15df-4ae8-b74d-208a0e463be9', 'example': False, 'tool_calls': [{'name': '', 'args': {}, 'id': None}], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': [{'name': None, 'args': '{\"', 'id': None, 'index': 0}]}\n",
            "{'content': '', 'additional_kwargs': {'tool_calls': [{'index': 0, 'id': None, 'function': {'arguments': 'city', 'name': None}, 'type': None}]}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-855fec3d-15df-4ae8-b74d-208a0e463be9', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [{'name': None, 'args': 'city', 'id': None, 'error': None}], 'usage_metadata': None, 'tool_call_chunks': [{'name': None, 'args': 'city', 'id': None, 'index': 0}]}\n",
            "{'content': '', 'additional_kwargs': {'tool_calls': [{'index': 0, 'id': None, 'function': {'arguments': '\":\"', 'name': None}, 'type': None}]}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-855fec3d-15df-4ae8-b74d-208a0e463be9', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [{'name': None, 'args': '\":\"', 'id': None, 'error': None}], 'usage_metadata': None, 'tool_call_chunks': [{'name': None, 'args': '\":\"', 'id': None, 'index': 0}]}\n",
            "{'content': '', 'additional_kwargs': {'tool_calls': [{'index': 0, 'id': None, 'function': {'arguments': 'sf', 'name': None}, 'type': None}]}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-855fec3d-15df-4ae8-b74d-208a0e463be9', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [{'name': None, 'args': 'sf', 'id': None, 'error': None}], 'usage_metadata': None, 'tool_call_chunks': [{'name': None, 'args': 'sf', 'id': None, 'index': 0}]}\n",
            "{'content': '', 'additional_kwargs': {'tool_calls': [{'index': 0, 'id': None, 'function': {'arguments': '\"}', 'name': None}, 'type': None}]}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-855fec3d-15df-4ae8-b74d-208a0e463be9', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [{'name': None, 'args': '\"}', 'id': None, 'error': None}], 'usage_metadata': None, 'tool_call_chunks': [{'name': None, 'args': '\"}', 'id': None, 'index': 0}]}\n",
            "{'content': '', 'additional_kwargs': {}, 'response_metadata': {'finish_reason': 'tool_calls'}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-855fec3d-15df-4ae8-b74d-208a0e463be9', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}\n",
            "{'content': '', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-19a0bdff-8724-4730-8052-c3ac89525461', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}\n",
            "{'content': 'The', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-19a0bdff-8724-4730-8052-c3ac89525461', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}\n",
            "{'content': ' weather', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-19a0bdff-8724-4730-8052-c3ac89525461', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}\n",
            "{'content': ' in', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-19a0bdff-8724-4730-8052-c3ac89525461', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}\n",
            "{'content': ' San', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-19a0bdff-8724-4730-8052-c3ac89525461', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}\n",
            "{'content': ' Francisco', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-19a0bdff-8724-4730-8052-c3ac89525461', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}\n",
            "{'content': ' is', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-19a0bdff-8724-4730-8052-c3ac89525461', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}\n",
            "{'content': ' currently', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-19a0bdff-8724-4730-8052-c3ac89525461', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}\n",
            "{'content': ' sunny', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-19a0bdff-8724-4730-8052-c3ac89525461', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}\n",
            "{'content': '.', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-19a0bdff-8724-4730-8052-c3ac89525461', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}\n",
            "{'content': ' Enjoy', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-19a0bdff-8724-4730-8052-c3ac89525461', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}\n",
            "{'content': ' the', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-19a0bdff-8724-4730-8052-c3ac89525461', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}\n",
            "{'content': ' sunshine', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-19a0bdff-8724-4730-8052-c3ac89525461', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}\n",
            "{'content': '!', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-19a0bdff-8724-4730-8052-c3ac89525461', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}\n",
            "{'content': '', 'additional_kwargs': {}, 'response_metadata': {'finish_reason': 'stop'}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-19a0bdff-8724-4730-8052-c3ac89525461', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}\n"
          ]
        }
      ],
      "source": [
        "inputs = {\"messages\": [(\"human\", \"what's the weather in sf\")]}\n",
        "async for chunk in client.runs.stream(\n",
        "    None, \"agent\", input=inputs, stream_mode=\"events\"\n",
        "):\n",
        "    if chunk.event == \"events\" and chunk.data[\"event\"] == \"on_chat_model_stream\":\n",
        "        print(chunk.data[\"data\"][\"chunk\"])"
      ]
    }
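    ,
    {
      "cell_type": "markdown",
      "id": "f0e9d8c7-1a2b-4c3d-9e5f-0123456789ab",
      "metadata": {},
      "source": [
        "Note that with `stream_mode=\"events\"`, LangGraph Cloud yields raw dictionaries rather than message objects. If you want the assembled text, one option is to concatenate the `content` fields yourself. A minimal sketch (plain Python, no extra dependencies; the helper name is ours, and the dict shape follows the streamed output above):\n",
        "\n",
        "```python\n",
        "def accumulate_content(chunks):\n",
        "    \"\"\"Concatenate the 'content' of streamed AIMessageChunk dicts,\n",
        "    grouped by run id so tool-call and answer streams stay separate.\"\"\"\n",
        "    by_run = {}\n",
        "    for chunk in chunks:\n",
        "        run_id = chunk.get(\"id\")\n",
        "        by_run[run_id] = by_run.get(run_id, \"\") + chunk.get(\"content\", \"\")\n",
        "    return by_run\n",
        "\n",
        "\n",
        "# Dicts shaped like the streamed output above (illustrative values)\n",
        "chunks = [\n",
        "    {\"id\": \"run-1\", \"content\": \"The\"},\n",
        "    {\"id\": \"run-1\", \"content\": \" weather\"},\n",
        "    {\"id\": \"run-1\", \"content\": \" is sunny.\"},\n",
        "]\n",
        "print(accumulate_content(chunks))  # {'run-1': 'The weather is sunny.'}\n",
        "```"
      ]
    }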
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3 (ipykernel)",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.11.1"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 5
}
