{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "e327e9bd-effc-4bee-a875-1c383c17f43d",
   "metadata": {},
   "source": [
    "# Complex data extraction with function calling\n",
    "\n",
    "Function calling is a core primitive for integrating LLMs within your software stack. We use it throughout the LangGraph docs, since developing with function calling (aka tool usage) tends to be much more stress-free than the traditional way of writing custom string parsers.\n",
    "\n",
    "However, even GPT-4, Opus, and other powerful models still struggle with complex functions, especially if your schema involves any nesting or if you have more advanced data validation rules.\n",
    "\n",
    "There are three basic ways to increase reliability: better prompting, constrained decoding, and **validation with re-prompting**.\n",
    "\n",
    "We will cover two approaches to the last technique here, since it is generally applicable across any LLM that supports tool calling.\n",
    "\n",
    "## Regular Extraction with Retries\n",
    "\n",
    "Both examples here invoke a simple looping graph that takes the following approach:\n",
    "1. Prompt the LLM to respond.\n",
    "2. If it responds with tool calls, validate those.\n",
    "3. If the calls are correct, return. Otherwise, format the validation error as a new [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html#langchain_core.messages.tool.ToolMessage) and prompt the LLM to fix the errors, returning us to step (1).\n",
    "\n",
    "\n",
    "The techniques differ only in step (3). In this first section, we will prompt the original LLM to regenerate the function calls to fix the validation errors. In the next section, we will instead prompt the LLM to generate a **patch** to fix the errors, meaning it doesn't have to re-generate data that is already valid."
   ]
  },
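  {
   "cell_type": "markdown",
   "id": "f0a1b2c3-d4e5-4f60-8a9b-0c1d2e3f4a5b",
   "metadata": {},
   "source": [
    "The loop above can be sketched in plain Python (an illustration only; `call_llm`, `validate`, and `format_errors_as_tool_message` are hypothetical helpers, not the LangGraph implementation below):\n",
    "\n",
    "```python\n",
    "for attempt in range(max_attempts):\n",
    "    message = call_llm(messages)  # 1. prompt the LLM to respond\n",
    "    errors = validate(message.tool_calls)  # 2. validate any tool calls\n",
    "    if not errors:\n",
    "        break  # 3. the calls are correct; return the message\n",
    "    # 3. otherwise, surface the errors and loop back to step (1)\n",
    "    messages.append(format_errors_as_tool_message(errors))\n",
    "```"
   ]
  },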
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "0ada5e8f-3f2f-459e-83aa-6cd8861770dd",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture --no-stderr\n",
    "%pip install -U langchain-anthropic langgraph\n",
    "# Or do langchain-{groq|openai|etc.} for another package with tool calling"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "27b25a1c-f437-482a-97d3-c7f168986df5",
   "metadata": {},
   "source": [
    "Set up your environment. If you are using Groq, OpenAI, etc., you will need to set the corresponding API keys."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "c0acb818-b6fd-48ab-97e6-fc2de2d03e87",
   "metadata": {},
   "outputs": [],
   "source": [
    "import getpass\n",
    "import os\n",
    "\n",
    "\n",
    "def _set_env(var: str):\n",
    "    if not os.environ.get(var):\n",
    "        os.environ[var] = getpass.getpass(f\"{var}: \")\n",
    "\n",
    "\n",
    "_set_env(\"ANTHROPIC_API_KEY\")\n",
    "# Recommended to visualize the retry steps\n",
    "_set_env(\"LANGCHAIN_API_KEY\")\n",
    "os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
    "os.environ[\"LANGCHAIN_PROJECT\"] = \"Extraction Notebook\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a6973d34-561c-410c-9362-25f55eaf2c3e",
   "metadata": {},
   "source": [
    "### Define the Validator + Retry Graph"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "baf669a0-04ee-492d-80d8-8fcb658ed128",
   "metadata": {},
   "outputs": [],
   "source": [
    "import operator\n",
    "import uuid\n",
    "from typing import (\n",
    "    Annotated,\n",
    "    Any,\n",
    "    Callable,\n",
    "    Dict,\n",
    "    List,\n",
    "    Literal,\n",
    "    Optional,\n",
    "    Sequence,\n",
    "    Type,\n",
    "    Union,\n",
    ")\n",
    "\n",
    "from langchain_core.language_models import BaseChatModel\n",
    "from langchain_core.messages import (\n",
    "    AIMessage,\n",
    "    AnyMessage,\n",
    "    BaseMessage,\n",
    "    HumanMessage,\n",
    "    ToolCall,\n",
    ")\n",
    "from langchain_core.prompt_values import PromptValue\n",
    "from langchain_core.runnables import (\n",
    "    Runnable,\n",
    "    RunnableLambda,\n",
    ")\n",
    "from typing_extensions import TypedDict\n",
    "\n",
    "from langgraph.graph import StateGraph, START, END\n",
    "from langgraph.graph.message import add_messages\n",
    "from langgraph.prebuilt import ValidationNode\n",
    "\n",
    "\n",
    "def _default_aggregator(messages: Sequence[AnyMessage]) -> AIMessage:\n",
    "    for m in messages[::-1]:\n",
    "        if m.type == \"ai\":\n",
    "            return m\n",
    "    raise ValueError(\"No AI message found in the sequence.\")\n",
    "\n",
    "\n",
    "class RetryStrategy(TypedDict, total=False):\n",
    "    \"\"\"The retry strategy for a tool call.\"\"\"\n",
    "\n",
    "    max_attempts: int\n",
    "    \"\"\"The maximum number of attempts to make.\"\"\"\n",
    "    fallback: Optional[\n",
    "        Union[\n",
    "            Runnable[Sequence[AnyMessage], AIMessage],\n",
    "            Runnable[Sequence[AnyMessage], BaseMessage],\n",
    "            Callable[[Sequence[AnyMessage]], AIMessage],\n",
    "        ]\n",
    "    ]\n",
    "    \"\"\"The function to use once validation fails.\"\"\"\n",
    "    aggregate_messages: Optional[Callable[[Sequence[AnyMessage]], AIMessage]]\n",
    "\n",
    "\n",
    "def _bind_validator_with_retries(\n",
    "    llm: Union[\n",
    "        Runnable[Sequence[AnyMessage], AIMessage],\n",
    "        Runnable[Sequence[BaseMessage], BaseMessage],\n",
    "    ],\n",
    "    *,\n",
    "    validator: ValidationNode,\n",
    "    retry_strategy: RetryStrategy,\n",
    "    tool_choice: Optional[str] = None,\n",
    ") -> Runnable[Union[List[AnyMessage], PromptValue], AIMessage]:\n",
    "    \"\"\"Binds a tool validator + retry logic to create a runnable validation graph.\n",
    "\n",
    "    LLMs that support tool calling can generate structured JSON. However, they may not always\n",
    "    perfectly follow your requested schema, especially if the schema is nested or has complex\n",
    "    validation rules. This method allows you to bind a validation function to the LLM's output,\n",
    "    so that any time the LLM generates a message, the validation function is run on it. If\n",
    "    the validation fails, the method will retry the LLM with a fallback strategy, the simplest\n",
    "    being just to add a message to the output with the validation errors and a request to fix them.\n",
    "\n",
    "    The resulting runnable expects a list of messages as input and returns a single AI message.\n",
    "    By default, the LLM can optionally NOT invoke tools, making this easier to incorporate into\n",
    "    your existing chat bot. You can specify a tool_choice to force the validator to be run on\n",
    "    the outputs.\n",
    "\n",
    "    Args:\n",
    "        llm (Runnable): The llm that will generate the initial messages (and any retries or fallbacks).\n",
    "        validator (ValidationNode): The validation logic.\n",
    "        retry_strategy (RetryStrategy): The retry strategy to use.\n",
    "            Possible keys:\n",
    "            - max_attempts: The maximum number of attempts to make.\n",
    "            - fallback: The LLM or function to use in case of validation failure.\n",
    "            - aggregate_messages: A function to aggregate the messages over multiple turns.\n",
    "                Defaults to fetching the last AI message.\n",
    "        tool_choice: If provided, always run the validator on the tool output.\n",
    "\n",
    "    Returns:\n",
    "        Runnable: A runnable that can be invoked with a list of messages and returns a single AI message.\n",
    "    \"\"\"\n",
    "\n",
    "    def add_or_overwrite_messages(left: list, right: Union[list, dict]) -> list:\n",
    "        \"\"\"Append messages. If the update is a 'finalized' output, replace the whole list.\"\"\"\n",
    "        if isinstance(right, dict) and \"finalize\" in right:\n",
    "            finalized = right[\"finalize\"]\n",
    "            if not isinstance(finalized, list):\n",
    "                finalized = [finalized]\n",
    "            for m in finalized:\n",
    "                if m.id is None:\n",
    "                    m.id = str(uuid.uuid4())\n",
    "            return finalized\n",
    "        res = add_messages(left, right)\n",
    "        if not isinstance(res, list):\n",
    "            return [res]\n",
    "        return res\n",
    "\n",
    "    class State(TypedDict):\n",
    "        messages: Annotated[list, add_or_overwrite_messages]\n",
    "        attempt_number: Annotated[int, operator.add]\n",
    "        initial_num_messages: int\n",
    "        input_format: Literal[\"list\", \"dict\"]\n",
    "\n",
    "    builder = StateGraph(State)\n",
    "\n",
    "    def dedict(x: State) -> list:\n",
    "        \"\"\"Get the messages from the state.\"\"\"\n",
    "        return x[\"messages\"]\n",
    "\n",
    "    model = dedict | llm | (lambda msg: {\"messages\": [msg], \"attempt_number\": 1})\n",
    "    fbrunnable = retry_strategy.get(\"fallback\")\n",
    "    if fbrunnable is None:\n",
    "        fb_runnable = llm\n",
    "    elif isinstance(fbrunnable, Runnable):\n",
    "        fb_runnable = fbrunnable  # type: ignore\n",
    "    else:\n",
    "        fb_runnable = RunnableLambda(fbrunnable)\n",
    "    fallback = (\n",
    "        dedict | fb_runnable | (lambda msg: {\"messages\": [msg], \"attempt_number\": 1})\n",
    "    )\n",
    "\n",
    "    def count_messages(state: State) -> dict:\n",
    "        return {\"initial_num_messages\": len(state.get(\"messages\", []))}\n",
    "\n",
    "    builder.add_node(\"count_messages\", count_messages)\n",
    "    builder.add_node(\"llm\", model)\n",
    "    builder.add_node(\"fallback\", fallback)\n",
    "\n",
    "    # To support patch-based retries, we need to be able to\n",
    "    # aggregate the messages over multiple turns.\n",
    "    # The next sequence selects only the relevant messages\n",
    "    # and then applies the validator\n",
    "    select_messages = retry_strategy.get(\"aggregate_messages\") or _default_aggregator\n",
    "\n",
    "    def select_generated_messages(state: State) -> list:\n",
    "        \"\"\"Select only the messages generated within this loop.\"\"\"\n",
    "        selected = state[\"messages\"][state[\"initial_num_messages\"] :]\n",
    "        return [select_messages(selected)]\n",
    "\n",
    "    def endict_validator_output(x: Sequence[AnyMessage]) -> dict:\n",
    "        if tool_choice and not x:\n",
    "            return {\n",
    "                \"messages\": [\n",
    "                    HumanMessage(\n",
    "                        content=f\"ValidationError: please respond with a valid tool call [tool_choice={tool_choice}].\",\n",
    "                        additional_kwargs={\"is_error\": True},\n",
    "                    )\n",
    "                ]\n",
    "            }\n",
    "        return {\"messages\": x}\n",
    "\n",
    "    validator_runnable = select_generated_messages | validator | endict_validator_output\n",
    "    builder.add_node(\"validator\", validator_runnable)\n",
    "\n",
    "    class Finalizer:\n",
    "        \"\"\"Pick the final message to return from the retry loop.\"\"\"\n",
    "\n",
    "        def __init__(self, aggregator: Optional[Callable[[list], AIMessage]] = None):\n",
    "            self._aggregator = aggregator or _default_aggregator\n",
    "\n",
    "        def __call__(self, state: State) -> dict:\n",
    "            \"\"\"Return just the AI message.\"\"\"\n",
    "            initial_num_messages = state[\"initial_num_messages\"]\n",
    "            generated_messages = state[\"messages\"][initial_num_messages:]\n",
    "            return {\n",
    "                \"messages\": {\n",
    "                    \"finalize\": self._aggregator(generated_messages),\n",
    "                }\n",
    "            }\n",
    "\n",
    "    # We only want to emit the final message\n",
    "    builder.add_node(\"finalizer\", Finalizer(retry_strategy.get(\"aggregate_messages\")))\n",
    "\n",
    "    # Define the connectivity\n",
    "    builder.add_edge(START, \"count_messages\")\n",
    "    builder.add_edge(\"count_messages\", \"llm\")\n",
    "\n",
    "    def route_validator(state: State) -> Literal[\"validator\", \"__end__\"]:\n",
    "        if state[\"messages\"][-1].tool_calls or tool_choice is not None:\n",
    "            return \"validator\"\n",
    "        return \"__end__\"\n",
    "\n",
    "    builder.add_conditional_edges(\"llm\", route_validator)\n",
    "    builder.add_edge(\"fallback\", \"validator\")\n",
    "    max_attempts = retry_strategy.get(\"max_attempts\", 3)\n",
    "\n",
    "    def route_validation(state: State) -> Literal[\"finalizer\", \"fallback\"]:\n",
    "        if state[\"attempt_number\"] > max_attempts:\n",
    "            raise ValueError(\n",
    "                f\"Could not extract a valid value in {max_attempts} attempts.\"\n",
    "            )\n",
    "        for m in state[\"messages\"][::-1]:\n",
    "            if m.type == \"ai\":\n",
    "                break\n",
    "            if m.additional_kwargs.get(\"is_error\"):\n",
    "                return \"fallback\"\n",
    "        return \"finalizer\"\n",
    "\n",
    "    builder.add_conditional_edges(\"validator\", route_validation)\n",
    "\n",
    "    builder.add_edge(\"finalizer\", END)\n",
    "\n",
    "    # These functions let the step be used in a MessageGraph\n",
    "    # or a StateGraph with 'messages' as the key.\n",
    "    def encode(x: Union[Sequence[AnyMessage], PromptValue]) -> dict:\n",
    "        \"\"\"Ensure the input is the correct format.\"\"\"\n",
    "        if isinstance(x, PromptValue):\n",
    "            return {\"messages\": x.to_messages(), \"input_format\": \"list\"}\n",
    "        if isinstance(x, list):\n",
    "            return {\"messages\": x, \"input_format\": \"list\"}\n",
    "        raise ValueError(f\"Unexpected input type: {type(x)}\")\n",
    "\n",
    "    def decode(x: State) -> AIMessage:\n",
    "        \"\"\"Ensure the output is in the expected format.\"\"\"\n",
    "        return x[\"messages\"][-1]\n",
    "\n",
    "    return (\n",
    "        encode | builder.compile().with_config(run_name=\"ValidationGraph\") | decode\n",
    "    ).with_config(run_name=\"ValidateWithRetries\")\n",
    "\n",
    "\n",
    "def bind_validator_with_retries(\n",
    "    llm: BaseChatModel,\n",
    "    *,\n",
    "    tools: list,\n",
    "    tool_choice: Optional[str] = None,\n",
    "    max_attempts: int = 3,\n",
    ") -> Runnable[Union[List[AnyMessage], PromptValue], AIMessage]:\n",
    "    \"\"\"Binds validators + retry logic to ensure validity of generated tool calls.\n",
    "\n",
    "    LLMs that support tool calling are good at generating structured JSON. However, they may\n",
    "    not always perfectly follow your requested schema, especially if the schema is nested or\n",
    "    has complex validation rules. This method allows you to bind a validation function to\n",
    "    the LLM's output, so that any time the LLM generates a message, the validation function\n",
    "    is run on it. If the validation fails, the method will retry the LLM with a fallback\n",
    "    strategy, the simplest being just to add a message to the output with the validation\n",
    "    errors and a request to fix them.\n",
    "\n",
    "    The resulting runnable expects a list of messages as input and returns a single AI message.\n",
    "    By default, the LLM can optionally NOT invoke tools, making this easier to incorporate into\n",
    "    your existing chat bot. You can specify a tool_choice to force the validator to be run on\n",
    "    the outputs.\n",
    "\n",
    "    Args:\n",
    "        llm (BaseChatModel): The llm that will generate the initial messages (and any retries).\n",
    "        tools (list): The tool schemas to bind to the LLM; each generated call is validated against them.\n",
    "        tool_choice: If provided, always run the validator on the tool output.\n",
    "        max_attempts: The maximum number of attempts to make.\n",
    "\n",
    "    Returns:\n",
    "        Runnable: A runnable that can be invoked with a list of messages and returns a single AI message.\n",
    "    \"\"\"\n",
    "    bound_llm = llm.bind_tools(tools, tool_choice=tool_choice)\n",
    "    retry_strategy = RetryStrategy(max_attempts=max_attempts)\n",
    "    validator = ValidationNode(tools)\n",
    "    return _bind_validator_with_retries(\n",
    "        bound_llm,\n",
    "        validator=validator,\n",
    "        tool_choice=tool_choice,\n",
    "        retry_strategy=retry_strategy,\n",
    "    ).with_config(metadata={\"retry_strategy\": \"default\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1e140fe4-dd92-43a5-91bb-35758a747121",
   "metadata": {},
   "source": [
    "### Try it out\n",
    "\n",
    "Now we'll ask our model to call a function. We'll add a validator to illustrate how the LLM is able to use the validation error to fix its results."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "5df33c17-ee1a-409e-b5ec-f24e116da7d1",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.pydantic_v1 import BaseModel, Field, validator\n",
    "\n",
    "\n",
    "class Respond(BaseModel):\n",
    "    \"\"\"Use to generate the response. Always use when responding to the user\"\"\"\n",
    "\n",
    "    reason: str = Field(description=\"Step-by-step justification for the answer.\")\n",
    "    answer: str\n",
    "\n",
    "    @validator(\"answer\")\n",
    "    def answer_contains_llama_ad(cls, answer: str):\n",
    "        if \"llama\" not in answer.lower():\n",
    "            raise ValueError(\n",
    "                \"You MUST start with a gimmicky, rhyming advertisement for using a Llama V3 (an LLM) in your **answer** field.\"\n",
    "                \" Must be an instant hit. Must be weaved into the answer.\"\n",
    "            )\n",
    "        # Pydantic v1 validators must return the value, or the field is set to None\n",
    "        return answer\n",
    "\n",
    "\n",
    "tools = [Respond]"
   ]
  },
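  {
   "cell_type": "markdown",
   "id": "a1d2e3f4-1111-4b22-8c33-bbbb22223333",
   "metadata": {},
   "source": [
    "As a quick sanity check, you can trigger the validator by hand (a sketch; the exact error text depends on your pydantic version). This is the kind of error the retry loop feeds back to the LLM:\n",
    "\n",
    "```python\n",
    "try:\n",
    "    Respond(reason=\"demo\", answer=\"No one knows.\")\n",
    "except Exception as e:\n",
    "    print(e)  # surfaces the 'llama' advertisement requirement\n",
    "```"
   ]
  },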
  {
   "cell_type": "markdown",
   "id": "38df0dc2-cad1-4df6-9b82-b74c5a04a6ae",
   "metadata": {},
   "source": [
    "Create the LLM."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "38231a5b-d018-41ee-a92c-2f2248edf417",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_anthropic import ChatAnthropic\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "# Or you can use ChatGroq, ChatOpenAI, ChatGoogleGemini, ChatCohere, etc.\n",
    "# See https://python.langchain.com/v0.2/docs/integrations/chat/ for more info on tool calling\n",
    "llm = ChatAnthropic(model=\"claude-3-haiku-20240307\")\n",
    "bound_llm = bind_validator_with_retries(llm, tools=tools)\n",
    "prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", \"Respond directly by calling the Respond function.\"),\n",
    "        (\"placeholder\", \"{messages}\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "chain = prompt | bound_llm"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "04e93401-50e2-42d0-8373-326006badebb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "[{'id': 'toolu_01GZKS2VryaDKtU56fVtuDbL', 'input': {'answer': 'Tired of those boring, gray computers? Introducing the Llama V3, the super-smart AI that can solve any puzzle, from P to NP! This furry friend will have you saying \"Woohoo, it\\'s a llama!\" as it tackles the trickiest problems with ease. So don\\'t delay, get your Llama V3 today and let it work its magic on the P vs NP conundrum!', 'reason': 'The P vs NP problem is one of the most famous unsolved problems in computer science and mathematics. It asks whether every problem that can be quickly verified can also be quickly solved. \\n\\nIf P = NP, it would mean that every problem in the complexity class NP, which includes many important problems like finding the shortest route or determining if a number is prime, could be quickly solved. This would have major implications, but most experts believe that P ≠ NP, meaning there are problems in NP that cannot be quickly solved.\\n\\nDespite extensive research, a formal proof one way or the other has eluded computer scientists. The P vs NP problem remains a tantalizing open question, and a major goal for researchers in the field. The Llama V3 AI is the perfect tool to tackle this challenge - its furry logic and computational prowess are sure to make quick work of this perplexing problem!'}, 'name': 'Respond', 'type': 'tool_use'}]\n",
      "Tool Calls:\n",
      "  Respond (toolu_01GZKS2VryaDKtU56fVtuDbL)\n",
      " Call ID: toolu_01GZKS2VryaDKtU56fVtuDbL\n",
      "  Args:\n",
      "    answer: Tired of those boring, gray computers? Introducing the Llama V3, the super-smart AI that can solve any puzzle, from P to NP! This furry friend will have you saying \"Woohoo, it's a llama!\" as it tackles the trickiest problems with ease. So don't delay, get your Llama V3 today and let it work its magic on the P vs NP conundrum!\n",
      "    reason: The P vs NP problem is one of the most famous unsolved problems in computer science and mathematics. It asks whether every problem that can be quickly verified can also be quickly solved. \n",
      "\n",
      "If P = NP, it would mean that every problem in the complexity class NP, which includes many important problems like finding the shortest route or determining if a number is prime, could be quickly solved. This would have major implications, but most experts believe that P ≠ NP, meaning there are problems in NP that cannot be quickly solved.\n",
      "\n",
      "Despite extensive research, a formal proof one way or the other has eluded computer scientists. The P vs NP problem remains a tantalizing open question, and a major goal for researchers in the field. The Llama V3 AI is the perfect tool to tackle this challenge - its furry logic and computational prowess are sure to make quick work of this perplexing problem!\n"
     ]
    }
   ],
   "source": [
    "results = chain.invoke({\"messages\": [(\"user\", \"Does P = NP?\")]})\n",
    "results.pretty_print()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c9e5bb81-0ee4-4def-b28c-01e84fd2fd68",
   "metadata": {},
   "source": [
    "#### Nested Examples\n",
    "\n",
    "So you can see that it's able to recover when its first generation is incorrect, great! But is it bulletproof?\n",
    "\n",
    "Not so much. Let's try it out on a complex nested schema."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "f4f7438b-b6c1-48fd-b70f-185af7a2f64a",
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List, Optional\n",
    "\n",
    "\n",
    "class OutputFormat(BaseModel):\n",
    "    sources: str = Field(\n",
    "        ...,\n",
    "        description=\"The raw transcript / span you could cite to justify the choice.\",\n",
    "    )\n",
    "    content: str = Field(..., description=\"The chosen value.\")\n",
    "\n",
    "\n",
    "class Moment(BaseModel):\n",
    "    quote: str = Field(..., description=\"The relevant quote from the transcript.\")\n",
    "    description: str = Field(..., description=\"A description of the moment.\")\n",
    "    expressed_preference: OutputFormat = Field(\n",
    "        ..., description=\"The preference expressed in the moment.\"\n",
    "    )\n",
    "\n",
    "\n",
    "class BackgroundInfo(BaseModel):\n",
    "    factoid: OutputFormat = Field(\n",
    "        ..., description=\"Important factoid about the member.\"\n",
    "    )\n",
    "    professions: list\n",
    "    why: str = Field(..., description=\"Why this is important.\")\n",
    "\n",
    "\n",
    "class KeyMoments(BaseModel):\n",
    "    topic: str = Field(..., description=\"The topic of the key moments.\")\n",
    "    happy_moments: List[Moment] = Field(\n",
    "        ..., description=\"A list of key moments related to the topic.\"\n",
    "    )\n",
    "    tense_moments: List[Moment] = Field(\n",
    "        ..., description=\"Moments where things were a bit tense.\"\n",
    "    )\n",
    "    sad_moments: List[Moment] = Field(\n",
    "        ..., description=\"Moments where everyone was downtrodden.\"\n",
    "    )\n",
    "    background_info: List[BackgroundInfo]\n",
    "    moments_summary: str = Field(..., description=\"A summary of the key moments.\")\n",
    "\n",
    "\n",
    "class Member(BaseModel):\n",
    "    name: OutputFormat = Field(..., description=\"The name of the member.\")\n",
    "    role: Optional[str] = Field(None, description=\"The role of the member.\")\n",
    "    age: Optional[int] = Field(None, description=\"The age of the member.\")\n",
    "    background_details: List[BackgroundInfo] = Field(\n",
    "        ..., description=\"A list of background details about the member.\"\n",
    "    )\n",
    "\n",
    "\n",
    "class InsightfulQuote(BaseModel):\n",
    "    quote: OutputFormat = Field(\n",
    "        ..., description=\"An insightful quote from the transcript.\"\n",
    "    )\n",
    "    speaker: str = Field(..., description=\"The name of the speaker who said the quote.\")\n",
    "    analysis: str = Field(\n",
    "        ..., description=\"An analysis of the quote and its significance.\"\n",
    "    )\n",
    "\n",
    "\n",
    "class TranscriptMetadata(BaseModel):\n",
    "    title: str = Field(..., description=\"The title of the transcript.\")\n",
    "    location: OutputFormat = Field(\n",
    "        ..., description=\"The location where the interview took place.\"\n",
    "    )\n",
    "    duration: str = Field(..., description=\"The duration of the interview.\")\n",
    "\n",
    "\n",
    "class TranscriptSummary(BaseModel):\n",
    "    metadata: TranscriptMetadata = Field(\n",
    "        ..., description=\"Metadata about the transcript.\"\n",
    "    )\n",
    "    participants: List[Member] = Field(\n",
    "        ..., description=\"A list of participants in the interview.\"\n",
    "    )\n",
    "    key_moments: List[KeyMoments] = Field(\n",
    "        ..., description=\"A list of key moments from the interview.\"\n",
    "    )\n",
    "    insightful_quotes: List[InsightfulQuote] = Field(\n",
    "        ..., description=\"A list of insightful quotes from the interview.\"\n",
    "    )\n",
    "    overall_summary: str = Field(\n",
    "        ..., description=\"An overall summary of the interview.\"\n",
    "    )\n",
    "    next_steps: List[str] = Field(\n",
    "        ..., description=\"A list of next steps or action items based on the interview.\"\n",
    "    )\n",
    "    other_stuff: List[OutputFormat]"
   ]
  },
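  {
   "cell_type": "markdown",
   "id": "c4b5a6d7-2222-4c33-9d44-cccc33334444",
   "metadata": {},
   "source": [
    "To preview the kind of error the retry loop will be asked to fix, you can validate an incomplete payload by hand (a sketch; the exact message text depends on your pydantic version):\n",
    "\n",
    "```python\n",
    "try:\n",
    "    TranscriptSummary(metadata={\"title\": \"Demo\"}, participants=[])\n",
    "except Exception as e:\n",
    "    print(e)  # enumerates every missing or invalid nested field\n",
    "```"
   ]
  },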
  {
   "cell_type": "markdown",
   "id": "4d686d69-1ce1-4b76-8d99-44d00eeb2874",
   "metadata": {},
   "source": [
    "Let's see how it does on this made-up transcript."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "e2d10886-7b1e-485f-91cd-1184a1c99303",
   "metadata": {},
   "outputs": [],
   "source": [
    "transcript = [\n",
    "    (\n",
    "        \"Pete\",\n",
    "        \"Hey Xu, Laura, thanks for hopping on this call. I've been itching to talk about this Drake and Kendrick situation.\",\n",
    "    ),\n",
    "    (\n",
    "        \"Xu\",\n",
    "No problem. As it's my job, I've got some thoughts on this beef.\",\n",
    "    ),\n",
    "    (\n",
    "        \"Laura\",\n",
    "        \"Yeah, I've got some insider info so this should be interesting.\",\n",
    "    ),\n",
    "    (\"Pete\", \"Dope. So, when do you think this whole thing started?\"),\n",
    "    (\n",
    "        \"Pete\",\n",
    "        \"Definitely was Kendrick's 'Control' verse that kicked it off.\",\n",
    "    ),\n",
    "    (\n",
    "        \"Laura\",\n",
    "        \"Truth, but Drake never went after him directly. Just some subtle jabs here and there.\",\n",
    "    ),\n",
    "    (\n",
    "        \"Xu\",\n",
    "That's the thing with beefs like this, though. They've always been a thing, pushing artists to step up their game.\",\n",
    "    ),\n",
    "    (\n",
    "        \"Pete\",\n",
    "        \"For sure, and this beef has got the fans taking sides. Some are all about Drake's mainstream appeal, while others are digging Kendrick's lyrical skills.\",\n",
    "    ),\n",
    "    (\n",
    "        \"Laura\",\n",
    "        \"I mean, Drake knows how to make a hit that gets everyone hyped. That's his thing.\",\n",
    "    ),\n",
    "    (\n",
    "        \"Pete\",\n",
    "        \"I hear you, Laura, but I gotta give it to Kendrick when it comes to straight-up bars. The man's a beast on the mic.\",\n",
    "    ),\n",
    "    (\n",
    "        \"Xu\",\n",
    "        \"It's wild how this beef is shaping fans.\",\n",
    "    ),\n",
    "    (\"Pete\", \"do you think these beefs can actually be good for hip-hop?\"),\n",
    "    (\n",
    "        \"Xu\",\n",
    "        \"Hell yeah, Pete. When it's done right, a beef can push the genre forward and make artists level up.\",\n",
    "    ),\n",
    "    (\"Laura\", \"eh\"),\n",
    "    (\"Pete\", \"So, where do you see this beef going?\"),\n",
    "    (\n",
    "        \"Laura\",\n",
    "        \"Honestly, I think it'll stay a hot topic for the fans, but unless someone drops a straight-up diss track, it's not gonna escalate.\",\n",
    "    ),\n",
    "    (\"Laura\", \"ehhhhhh not sure\"),\n",
    "    (\n",
    "        \"Pete\",\n",
    "        \"I feel that. I just want both of them to keep dropping heat, beef or no beef.\",\n",
    "    ),\n",
    "    (\n",
    "        \"Xu\",\n",
    "        \"I'm curious. May influence a lot of people. Make things more competitive. Bring on a whole new wave of lyricism.\",\n",
    "    ),\n",
    "    (\n",
    "        \"Pete\",\n",
    "        \"Word. Hey, thanks for chopping it up with me, Xu and Laura. This was dope.\",\n",
    "    ),\n",
    "    (\"Xu\", \"Where are you going so fast?\"),\n",
    "    (\n",
    "        \"Laura\",\n",
    "        \"For real, I had a good time. Nice to get different perspectives on the situation.\",\n",
    "    ),\n",
    "]\n",
    "\n",
    "formatted = \"\\n\".join(f\"{x[0]}: {x[1]}\" for x in transcript)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c48ce9bc-0fcc-4019-ba3a-fa70a7717567",
   "metadata": {},
   "source": [
    "Now, run our model. We **expect** the model to still fail on this challenging template."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "f4752239-2aa3-4367-b777-8478c16b9471",
   "metadata": {},
   "outputs": [
    {
     "ename": "ValueError",
     "evalue": "Could not extract a valid value in 3 attempts.",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mValueError\u001b[0m                                Traceback (most recent call last)",
      "Cell \u001b[0;32mIn[12], line 14\u001b[0m\n\u001b[1;32m      5\u001b[0m prompt \u001b[38;5;241m=\u001b[39m ChatPromptTemplate\u001b[38;5;241m.\u001b[39mfrom_messages(\n\u001b[1;32m      6\u001b[0m     [\n\u001b[1;32m      7\u001b[0m         (\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124msystem\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mRespond directly using the TranscriptSummary function.\u001b[39m\u001b[38;5;124m\"\u001b[39m),\n\u001b[1;32m      8\u001b[0m         (\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mplaceholder\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;132;01m{messages}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m),\n\u001b[1;32m      9\u001b[0m     ]\n\u001b[1;32m     10\u001b[0m )\n\u001b[1;32m     12\u001b[0m chain \u001b[38;5;241m=\u001b[39m prompt \u001b[38;5;241m|\u001b[39m bound_llm\n\u001b[0;32m---> 14\u001b[0m results \u001b[38;5;241m=\u001b[39m \u001b[43mchain\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43minvoke\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m     15\u001b[0m \u001b[43m    \u001b[49m\u001b[43m{\u001b[49m\n\u001b[1;32m     16\u001b[0m \u001b[43m        \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mmessages\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43m[\u001b[49m\n\u001b[1;32m     17\u001b[0m \u001b[43m            \u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m     18\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43muser\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m     19\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43mf\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mExtract the summary from the following 
conversation:\u001b[39;49m\u001b[38;5;130;43;01m\\n\u001b[39;49;00m\u001b[38;5;130;43;01m\\n\u001b[39;49;00m\u001b[38;5;124;43m<convo>\u001b[39;49m\u001b[38;5;130;43;01m\\n\u001b[39;49;00m\u001b[38;5;132;43;01m{\u001b[39;49;00m\u001b[43mformatted\u001b[49m\u001b[38;5;132;43;01m}\u001b[39;49;00m\u001b[38;5;130;43;01m\\n\u001b[39;49;00m\u001b[38;5;124;43m</convo>\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\n\u001b[1;32m     20\u001b[0m \u001b[43m                \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;130;43;01m\\n\u001b[39;49;00m\u001b[38;5;130;43;01m\\n\u001b[39;49;00m\u001b[38;5;124;43mRemember to respond using the TranscriptSummary function.\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m     21\u001b[0m \u001b[43m            \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     22\u001b[0m \u001b[43m        \u001b[49m\u001b[43m]\u001b[49m\n\u001b[1;32m     23\u001b[0m \u001b[43m    \u001b[49m\u001b[43m}\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m     24\u001b[0m \u001b[43m)\u001b[49m\n\u001b[1;32m     25\u001b[0m results\u001b[38;5;241m.\u001b[39mpretty_print()\n",
      "File \u001b[0;32m~/code/lc/langgraph/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py:2499\u001b[0m, in \u001b[0;36mRunnableSequence.invoke\u001b[0;34m(self, input, config)\u001b[0m\n\u001b[1;32m   2497\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m   2498\u001b[0m     \u001b[38;5;28;01mfor\u001b[39;00m i, step \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28menumerate\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39msteps):\n\u001b[0;32m-> 2499\u001b[0m         \u001b[38;5;28minput\u001b[39m \u001b[38;5;241m=\u001b[39m \u001b[43mstep\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43minvoke\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m   2500\u001b[0m \u001b[43m            \u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m   2501\u001b[0m \u001b[43m            \u001b[49m\u001b[38;5;66;43;03m# mark each step as a child run\u001b[39;49;00m\n\u001b[1;32m   2502\u001b[0m \u001b[43m            \u001b[49m\u001b[43mpatch_config\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m   2503\u001b[0m \u001b[43m                \u001b[49m\u001b[43mconfig\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcallbacks\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrun_manager\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget_child\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43mf\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mseq:step:\u001b[39;49m\u001b[38;5;132;43;01m{\u001b[39;49;00m\u001b[43mi\u001b[49m\u001b[38;5;241;43m+\u001b[39;49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[38;5;132;43;01m}\u001b[39;49;00m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m   2504\u001b[0m \u001b[43m            \u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   2505\u001b[0m \u001b[43m        \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   2506\u001b[0m \u001b[38;5;66;03m# finish the root run\u001b[39;00m\n\u001b[1;32m   2507\u001b[0m 
\u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mBaseException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n",
      "File \u001b[0;32m~/code/lc/langgraph/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py:4525\u001b[0m, in \u001b[0;36mRunnableBindingBase.invoke\u001b[0;34m(self, input, config, **kwargs)\u001b[0m\n\u001b[1;32m   4519\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21minvoke\u001b[39m(\n\u001b[1;32m   4520\u001b[0m     \u001b[38;5;28mself\u001b[39m,\n\u001b[1;32m   4521\u001b[0m     \u001b[38;5;28minput\u001b[39m: Input,\n\u001b[1;32m   4522\u001b[0m     config: Optional[RunnableConfig] \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m,\n\u001b[1;32m   4523\u001b[0m     \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs: Optional[Any],\n\u001b[1;32m   4524\u001b[0m ) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Output:\n\u001b[0;32m-> 4525\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbound\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43minvoke\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m   4526\u001b[0m \u001b[43m        \u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m   4527\u001b[0m \u001b[43m        \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_merge_configs\u001b[49m\u001b[43m(\u001b[49m\u001b[43mconfig\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   4528\u001b[0m \u001b[43m        \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43m{\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m}\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   4529\u001b[0m \u001b[43m    \u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/code/lc/langgraph/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py:2499\u001b[0m, in \u001b[0;36mRunnableSequence.invoke\u001b[0;34m(self, input, config)\u001b[0m\n\u001b[1;32m   2497\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m   2498\u001b[0m     \u001b[38;5;28;01mfor\u001b[39;00m i, step \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28menumerate\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39msteps):\n\u001b[0;32m-> 2499\u001b[0m         \u001b[38;5;28minput\u001b[39m \u001b[38;5;241m=\u001b[39m \u001b[43mstep\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43minvoke\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m   2500\u001b[0m \u001b[43m            \u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m   2501\u001b[0m \u001b[43m            \u001b[49m\u001b[38;5;66;43;03m# mark each step as a child run\u001b[39;49;00m\n\u001b[1;32m   2502\u001b[0m \u001b[43m            \u001b[49m\u001b[43mpatch_config\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m   2503\u001b[0m \u001b[43m                \u001b[49m\u001b[43mconfig\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcallbacks\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrun_manager\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget_child\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43mf\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mseq:step:\u001b[39;49m\u001b[38;5;132;43;01m{\u001b[39;49;00m\u001b[43mi\u001b[49m\u001b[38;5;241;43m+\u001b[39;49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[38;5;132;43;01m}\u001b[39;49;00m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m   2504\u001b[0m \u001b[43m            \u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   2505\u001b[0m \u001b[43m        \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   2506\u001b[0m \u001b[38;5;66;03m# finish the root run\u001b[39;00m\n\u001b[1;32m   2507\u001b[0m 
\u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mBaseException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n",
      "File \u001b[0;32m~/code/lc/langgraph/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py:4525\u001b[0m, in \u001b[0;36mRunnableBindingBase.invoke\u001b[0;34m(self, input, config, **kwargs)\u001b[0m\n\u001b[1;32m   4519\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21minvoke\u001b[39m(\n\u001b[1;32m   4520\u001b[0m     \u001b[38;5;28mself\u001b[39m,\n\u001b[1;32m   4521\u001b[0m     \u001b[38;5;28minput\u001b[39m: Input,\n\u001b[1;32m   4522\u001b[0m     config: Optional[RunnableConfig] \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m,\n\u001b[1;32m   4523\u001b[0m     \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs: Optional[Any],\n\u001b[1;32m   4524\u001b[0m ) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Output:\n\u001b[0;32m-> 4525\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbound\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43minvoke\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m   4526\u001b[0m \u001b[43m        \u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m   4527\u001b[0m \u001b[43m        \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_merge_configs\u001b[49m\u001b[43m(\u001b[49m\u001b[43mconfig\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   4528\u001b[0m \u001b[43m        \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43m{\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m}\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   4529\u001b[0m \u001b[43m    \u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/code/lc/langgraph/langgraph/pregel/__init__.py:1283\u001b[0m, in \u001b[0;36mPregel.invoke\u001b[0;34m(self, input, config, stream_mode, output_keys, input_keys, interrupt_before, interrupt_after, debug, **kwargs)\u001b[0m\n\u001b[1;32m   1281\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m   1282\u001b[0m     chunks \u001b[38;5;241m=\u001b[39m []\n\u001b[0;32m-> 1283\u001b[0m \u001b[43m\u001b[49m\u001b[38;5;28;43;01mfor\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mchunk\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;129;43;01min\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mstream\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m   1284\u001b[0m \u001b[43m    \u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1285\u001b[0m \u001b[43m    \u001b[49m\u001b[43mconfig\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1286\u001b[0m \u001b[43m    \u001b[49m\u001b[43mstream_mode\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream_mode\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1287\u001b[0m \u001b[43m    \u001b[49m\u001b[43moutput_keys\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43moutput_keys\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1288\u001b[0m \u001b[43m    \u001b[49m\u001b[43minput_keys\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43minput_keys\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1289\u001b[0m \u001b[43m    \u001b[49m\u001b[43minterrupt_before\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43minterrupt_before\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1290\u001b[0m \u001b[43m    \u001b[49m\u001b[43minterrupt_after\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43minterrupt_after\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1291\u001b[0m \u001b[43m    \u001b[49m\u001b[43mdebug\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mdebug\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1292\u001b[0m 
\u001b[43m    \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1293\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\u001b[43m:\u001b[49m\n\u001b[1;32m   1294\u001b[0m \u001b[43m    \u001b[49m\u001b[38;5;28;43;01mif\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mstream_mode\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m==\u001b[39;49m\u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mvalues\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\n\u001b[1;32m   1295\u001b[0m \u001b[43m        \u001b[49m\u001b[43mlatest\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43m \u001b[49m\u001b[43mchunk\u001b[49m\n",
      "File \u001b[0;32m~/code/lc/langgraph/langgraph/pregel/__init__.py:847\u001b[0m, in \u001b[0;36mPregel.stream\u001b[0;34m(self, input, config, stream_mode, output_keys, input_keys, interrupt_before, interrupt_after, debug)\u001b[0m\n\u001b[1;32m    840\u001b[0m done, inflight \u001b[38;5;241m=\u001b[39m concurrent\u001b[38;5;241m.\u001b[39mfutures\u001b[38;5;241m.\u001b[39mwait(\n\u001b[1;32m    841\u001b[0m     futures,\n\u001b[1;32m    842\u001b[0m     return_when\u001b[38;5;241m=\u001b[39mconcurrent\u001b[38;5;241m.\u001b[39mfutures\u001b[38;5;241m.\u001b[39mFIRST_EXCEPTION,\n\u001b[1;32m    843\u001b[0m     timeout\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstep_timeout,\n\u001b[1;32m    844\u001b[0m )\n\u001b[1;32m    846\u001b[0m \u001b[38;5;66;03m# panic on failure or timeout\u001b[39;00m\n\u001b[0;32m--> 847\u001b[0m \u001b[43m_panic_or_proceed\u001b[49m\u001b[43m(\u001b[49m\u001b[43mdone\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43minflight\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstep\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    849\u001b[0m \u001b[38;5;66;03m# combine pending writes from all tasks\u001b[39;00m\n\u001b[1;32m    850\u001b[0m pending_writes \u001b[38;5;241m=\u001b[39m deque[\u001b[38;5;28mtuple\u001b[39m[\u001b[38;5;28mstr\u001b[39m, Any]]()\n",
      "File \u001b[0;32m~/code/lc/langgraph/langgraph/pregel/__init__.py:1372\u001b[0m, in \u001b[0;36m_panic_or_proceed\u001b[0;34m(done, inflight, step)\u001b[0m\n\u001b[1;32m   1370\u001b[0m             inflight\u001b[38;5;241m.\u001b[39mpop()\u001b[38;5;241m.\u001b[39mcancel()\n\u001b[1;32m   1371\u001b[0m         \u001b[38;5;66;03m# raise the exception\u001b[39;00m\n\u001b[0;32m-> 1372\u001b[0m         \u001b[38;5;28;01mraise\u001b[39;00m exc\n\u001b[1;32m   1373\u001b[0m         \u001b[38;5;66;03m# TODO this is where retry of an entire step would happen\u001b[39;00m\n\u001b[1;32m   1375\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m inflight:\n\u001b[1;32m   1376\u001b[0m     \u001b[38;5;66;03m# if we got here means we timed out\u001b[39;00m\n",
      "File \u001b[0;32m~/.pyenv/versions/3.11.2/lib/python3.11/concurrent/futures/thread.py:58\u001b[0m, in \u001b[0;36m_WorkItem.run\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m     55\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m\n\u001b[1;32m     57\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m---> 58\u001b[0m     result \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfn\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     59\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mBaseException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m exc:\n\u001b[1;32m     60\u001b[0m     \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mfuture\u001b[38;5;241m.\u001b[39mset_exception(exc)\n",
      "File \u001b[0;32m~/code/lc/langgraph/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py:2499\u001b[0m, in \u001b[0;36mRunnableSequence.invoke\u001b[0;34m(self, input, config)\u001b[0m\n\u001b[1;32m   2497\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m   2498\u001b[0m     \u001b[38;5;28;01mfor\u001b[39;00m i, step \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28menumerate\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39msteps):\n\u001b[0;32m-> 2499\u001b[0m         \u001b[38;5;28minput\u001b[39m \u001b[38;5;241m=\u001b[39m \u001b[43mstep\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43minvoke\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m   2500\u001b[0m \u001b[43m            \u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m   2501\u001b[0m \u001b[43m            \u001b[49m\u001b[38;5;66;43;03m# mark each step as a child run\u001b[39;49;00m\n\u001b[1;32m   2502\u001b[0m \u001b[43m            \u001b[49m\u001b[43mpatch_config\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m   2503\u001b[0m \u001b[43m                \u001b[49m\u001b[43mconfig\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcallbacks\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrun_manager\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget_child\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43mf\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mseq:step:\u001b[39;49m\u001b[38;5;132;43;01m{\u001b[39;49;00m\u001b[43mi\u001b[49m\u001b[38;5;241;43m+\u001b[39;49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[38;5;132;43;01m}\u001b[39;49;00m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m   2504\u001b[0m \u001b[43m            \u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   2505\u001b[0m \u001b[43m        \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m   2506\u001b[0m \u001b[38;5;66;03m# finish the root run\u001b[39;00m\n\u001b[1;32m   2507\u001b[0m 
\u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mBaseException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n",
      "File \u001b[0;32m~/code/lc/langgraph/langgraph/utils.py:89\u001b[0m, in \u001b[0;36mRunnableCallable.invoke\u001b[0;34m(self, input, config)\u001b[0m\n\u001b[1;32m     83\u001b[0m     context\u001b[38;5;241m.\u001b[39mrun(var_child_runnable_config\u001b[38;5;241m.\u001b[39mset, config)\n\u001b[1;32m     84\u001b[0m     kwargs \u001b[38;5;241m=\u001b[39m (\n\u001b[1;32m     85\u001b[0m         {\u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mkwargs, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mconfig\u001b[39m\u001b[38;5;124m\"\u001b[39m: config}\n\u001b[1;32m     86\u001b[0m         \u001b[38;5;28;01mif\u001b[39;00m accepts_config(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mfunc)\n\u001b[1;32m     87\u001b[0m         \u001b[38;5;28;01melse\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mkwargs\n\u001b[1;32m     88\u001b[0m     )\n\u001b[0;32m---> 89\u001b[0m     ret \u001b[38;5;241m=\u001b[39m \u001b[43mcontext\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrun\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfunc\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     90\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(ret, Runnable) \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mrecurse:\n\u001b[1;32m     91\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m ret\u001b[38;5;241m.\u001b[39minvoke(\u001b[38;5;28minput\u001b[39m, config)\n",
      "File \u001b[0;32m~/code/lc/langgraph/langgraph/graph/graph.py:70\u001b[0m, in \u001b[0;36mBranch._route\u001b[0;34m(self, input, config, reader, writer)\u001b[0m\n\u001b[1;32m     62\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_route\u001b[39m(\n\u001b[1;32m     63\u001b[0m     \u001b[38;5;28mself\u001b[39m,\n\u001b[1;32m     64\u001b[0m     \u001b[38;5;28minput\u001b[39m: Any,\n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m     68\u001b[0m     writer: Callable[[\u001b[38;5;28mlist\u001b[39m[\u001b[38;5;28mstr\u001b[39m]], Optional[Runnable]],\n\u001b[1;32m     69\u001b[0m ) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Runnable:\n\u001b[0;32m---> 70\u001b[0m     result \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mpath\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43minvoke\u001b[49m\u001b[43m(\u001b[49m\u001b[43mreader\u001b[49m\u001b[43m(\u001b[49m\u001b[43mconfig\u001b[49m\u001b[43m)\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mif\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mreader\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01melse\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mconfig\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     71\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(result, \u001b[38;5;28mlist\u001b[39m):\n\u001b[1;32m     72\u001b[0m         result \u001b[38;5;241m=\u001b[39m [result]\n",
      "File \u001b[0;32m~/code/lc/langgraph/langgraph/utils.py:77\u001b[0m, in \u001b[0;36mRunnableCallable.invoke\u001b[0;34m(self, input, config)\u001b[0m\n\u001b[1;32m     75\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21minvoke\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28minput\u001b[39m: Any, config: Optional[RunnableConfig] \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Any:\n\u001b[1;32m     76\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mtrace:\n\u001b[0;32m---> 77\u001b[0m         ret \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_call_with_config\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m     78\u001b[0m \u001b[43m            \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfunc\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mmerge_configs\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mconfig\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mconfig\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mkwargs\u001b[49m\n\u001b[1;32m     79\u001b[0m \u001b[43m        \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     80\u001b[0m     \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m     81\u001b[0m         config \u001b[38;5;241m=\u001b[39m merge_configs(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mconfig, config)\n",
      "File \u001b[0;32m~/code/lc/langgraph/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py:1626\u001b[0m, in \u001b[0;36mRunnable._call_with_config\u001b[0;34m(self, func, input, config, run_type, **kwargs)\u001b[0m\n\u001b[1;32m   1622\u001b[0m     context \u001b[38;5;241m=\u001b[39m copy_context()\n\u001b[1;32m   1623\u001b[0m     context\u001b[38;5;241m.\u001b[39mrun(var_child_runnable_config\u001b[38;5;241m.\u001b[39mset, child_config)\n\u001b[1;32m   1624\u001b[0m     output \u001b[38;5;241m=\u001b[39m cast(\n\u001b[1;32m   1625\u001b[0m         Output,\n\u001b[0;32m-> 1626\u001b[0m         \u001b[43mcontext\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrun\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m   1627\u001b[0m \u001b[43m            \u001b[49m\u001b[43mcall_func_with_variable_args\u001b[49m\u001b[43m,\u001b[49m\u001b[43m  \u001b[49m\u001b[38;5;66;43;03m# type: ignore[arg-type]\u001b[39;49;00m\n\u001b[1;32m   1628\u001b[0m \u001b[43m            \u001b[49m\u001b[43mfunc\u001b[49m\u001b[43m,\u001b[49m\u001b[43m  \u001b[49m\u001b[38;5;66;43;03m# type: ignore[arg-type]\u001b[39;49;00m\n\u001b[1;32m   1629\u001b[0m \u001b[43m            \u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m  \u001b[49m\u001b[38;5;66;43;03m# type: ignore[arg-type]\u001b[39;49;00m\n\u001b[1;32m   1630\u001b[0m \u001b[43m            \u001b[49m\u001b[43mconfig\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1631\u001b[0m \u001b[43m            \u001b[49m\u001b[43mrun_manager\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1632\u001b[0m \u001b[43m            \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1633\u001b[0m \u001b[43m        \u001b[49m\u001b[43m)\u001b[49m,\n\u001b[1;32m   1634\u001b[0m     )\n\u001b[1;32m   1635\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mBaseException\u001b[39;00m 
\u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m   1636\u001b[0m     run_manager\u001b[38;5;241m.\u001b[39mon_chain_error(e)\n",
      "File \u001b[0;32m~/code/lc/langgraph/.venv/lib/python3.11/site-packages/langchain_core/runnables/config.py:347\u001b[0m, in \u001b[0;36mcall_func_with_variable_args\u001b[0;34m(func, input, config, run_manager, **kwargs)\u001b[0m\n\u001b[1;32m    345\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m run_manager \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mand\u001b[39;00m accepts_run_manager(func):\n\u001b[1;32m    346\u001b[0m     kwargs[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mrun_manager\u001b[39m\u001b[38;5;124m\"\u001b[39m] \u001b[38;5;241m=\u001b[39m run_manager\n\u001b[0;32m--> 347\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mfunc\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n",
      "Cell \u001b[0;32mIn[3], line 204\u001b[0m, in \u001b[0;36m_bind_validator_with_retries.<locals>.route_validation\u001b[0;34m(state)\u001b[0m\n\u001b[1;32m    202\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mroute_validation\u001b[39m(state: State) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Literal[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mfinalizer\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mfallback\u001b[39m\u001b[38;5;124m\"\u001b[39m]:\n\u001b[1;32m    203\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m state[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mattempt_number\u001b[39m\u001b[38;5;124m\"\u001b[39m] \u001b[38;5;241m>\u001b[39m max_attempts:\n\u001b[0;32m--> 204\u001b[0m         \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\n\u001b[1;32m    205\u001b[0m             \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mCould not extract a valid value in \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mmax_attempts\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m attempts.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m    206\u001b[0m         )\n\u001b[1;32m    207\u001b[0m     \u001b[38;5;28;01mfor\u001b[39;00m m \u001b[38;5;129;01min\u001b[39;00m state[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmessages\u001b[39m\u001b[38;5;124m\"\u001b[39m][::\u001b[38;5;241m-\u001b[39m\u001b[38;5;241m1\u001b[39m]:\n\u001b[1;32m    208\u001b[0m         \u001b[38;5;28;01mif\u001b[39;00m m\u001b[38;5;241m.\u001b[39mtype \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mai\u001b[39m\u001b[38;5;124m\"\u001b[39m:\n",
      "\u001b[0;31mValueError\u001b[0m: Could not extract a valid value in 3 attempts."
     ]
    }
   ],
   "source": [
    "tools = [TranscriptSummary]\n",
    "bound_llm = bind_validator_with_retries(\n",
    "    llm,\n",
    "    tools=tools,\n",
    ")\n",
    "prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", \"Respond directly using the TranscriptSummary function.\"),\n",
    "        (\"placeholder\", \"{messages}\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "chain = prompt | bound_llm\n",
    "\n",
    "results = chain.invoke(\n",
    "    {\n",
    "        \"messages\": [\n",
    "            (\n",
    "                \"user\",\n",
    "                f\"Extract the summary from the following conversation:\\n\\n<convo>\\n{formatted}\\n</convo>\"\n",
    "                \"\\n\\nRemember to respond using the TranscriptSummary function.\",\n",
    "            )\n",
    "        ]\n",
    "    },\n",
    ")\n",
    "results.pretty_print()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "914e1962-7f23-463d-b91d-8907c1330369",
   "metadata": {},
   "source": [
    "## JSONPatch\n",
    "\n",
    "The regular retry method worked well for our simple case, but it still was unable to self-correct when populating a complex schema.\n",
    "\n",
    "LLMs work best on narrow tasks. A tried-and-true principle of LLM interface design is to simplify the task for each LLM run.\n",
    "\n",
    "One way to do this is to **patch** the state instead of completely regenerating the state. One way to do this is with `JSONPatch` operations. Let's try it out!\n",
    "\n",
    "Below, create a JSONPatch retry graph. This works as follows:\n",
    "1. First pass: try to generate the full output.\n",
    "2. Retries: prompt the LLM to generate **JSON patches** on top of the first output to heal the erroneous generation.\n",
    "\n",
    "The fallback LLM just has to generate a list of paths, ops (add, remove, replace), and optional values. Since the pydantic validation errors include the path in their errors, the LLM should be more reliable."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "49344104-3ffa-4c66-97fc-5b093a621f70",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture --no-stderr\n",
    "%pip install -U jsonpatch"
   ]
  },
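  {
   "cell_type": "markdown",
   "id": "jsonpatch-usage-sketch-md",
   "metadata": {},
   "source": [
    "As a quick illustration of what the fallback LLM will be asked to produce (a minimal sketch with a hypothetical payload, unrelated to the `TranscriptSummary` schema used above), applying a list of patch operations with the `jsonpatch` library looks like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "jsonpatch-usage-sketch-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "import jsonpatch\n",
    "\n",
    "# Hypothetical invalid generation: \"rating\" is missing and \"topic\" is null\n",
    "doc = {\"summary\": \"Brief call recap\", \"topic\": None}\n",
    "\n",
    "# The ops the fallback LLM would emit: op + path (+ value, except for 'remove')\n",
    "patches = [\n",
    "    {\"op\": \"add\", \"path\": \"/rating\", \"value\": 5},\n",
    "    {\"op\": \"replace\", \"path\": \"/topic\", \"value\": \"pricing\"},\n",
    "]\n",
    "\n",
    "# apply_patch returns a new, corrected document by default (in_place=False)\n",
    "fixed = jsonpatch.apply_patch(doc, patches)\n",
    "fixed"
   ]
  },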
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "af3d5543-1fd4-4e54-b0f9-f1ab42773cfb",
   "metadata": {},
   "outputs": [],
   "source": [
    "import logging\n",
    "\n",
    "logger = logging.getLogger(\"extraction\")\n",
    "\n",
    "\n",
    "def bind_validator_with_jsonpatch_retries(\n",
    "    llm: BaseChatModel,\n",
    "    *,\n",
    "    tools: list,\n",
    "    tool_choice: Optional[str] = None,\n",
    "    max_attempts: int = 3,\n",
    ") -> Runnable[Union[List[AnyMessage], PromptValue], AIMessage]:\n",
    "    \"\"\"Binds validators + retry logic ensure validity of generated tool calls.\n",
    "\n",
    "    This method is similar to `bind_validator_with_retries`, but uses JSONPatch to correct\n",
    "    validation errors caused by passing in incorrect or incomplete parameters in a previous\n",
    "    tool call. This method requires the 'jsonpatch' library to be installed.\n",
    "\n",
    "    Using patch-based function healing can be more efficient than repopulating the entire\n",
    "    tool call from scratch, and it can be an easier task for the LLM to perform, since it typically\n",
    "    only requires a few small changes to the existing tool call.\n",
    "\n",
    "    Args:\n",
    "        llm (Runnable): The llm that will generate the initial messages (and optionally fallba)\n",
    "        tools (list): The tools to bind to the LLM.\n",
    "        tool_choice (Optional[str]): The tool choice to use.\n",
    "        max_attempts (int): The number of attempts to make.\n",
    "\n",
    "    Returns:\n",
    "        Runnable: A runnable that can be invoked with a list of messages and returns a single AI message.\n",
    "    \"\"\"\n",
    "\n",
    "    try:\n",
    "        import jsonpatch  # type: ignore[import-untyped]\n",
    "    except ImportError:\n",
    "        raise ImportError(\n",
    "            \"The 'jsonpatch' library is required for JSONPatch-based retries.\"\n",
    "            \" Please install it with 'pip install -U jsonpatch'.\"\n",
    "        )\n",
    "\n",
    "    class JsonPatch(BaseModel):\n",
    "        \"\"\"A JSON Patch document represents an operation to be performed on a JSON document.\n",
    "\n",
    "        Note that the op and path are ALWAYS required. Value is required for ALL operations except 'remove'.\n",
    "        Examples:\n",
    "\n",
    "        ```json\n",
    "        {\"op\": \"add\", \"path\": \"/a/b/c\", \"value\": 1}\n",
    "        {\"op\": \"replace\", \"path\": \"/a/b/c\", \"value\": 2}\n",
    "        {\"op\": \"remove\", \"path\": \"/a/b/c\"}\n",
    "        ```\n",
    "        \"\"\"\n",
    "\n",
    "        op: Literal[\"add\", \"remove\", \"replace\"] = Field(\n",
    "            ...,\n",
    "            description=\"The operation to be performed. Must be one of 'add', 'remove', 'replace'.\",\n",
    "        )\n",
    "        path: str = Field(\n",
    "            ...,\n",
    "            description=\"A JSON Pointer path that references a location within the target document where the operation is performed.\",\n",
    "        )\n",
    "        value: Any = Field(\n",
    "            ...,\n",
    "            description=\"The value to be used within the operation. REQUIRED for the 'add' and 'replace' operations.\",\n",
    "        )\n",
    "\n",
    "    class PatchFunctionParameters(BaseModel):\n",
    "        \"\"\"Respond with all JSONPatch operations needed to correct validation errors caused by passing incorrect or incomplete parameters in a previous tool call.\"\"\"\n",
    "\n",
    "        tool_call_id: str = Field(\n",
    "            ...,\n",
    "            description=\"The ID of the original tool call that generated the error. Must NOT be an ID of a PatchFunctionParameters tool call.\",\n",
    "        )\n",
    "        reasoning: str = Field(\n",
    "            ...,\n",
    "            description=\"Think step-by-step, listing each validation error and the\"\n",
    "            \" JSONPatch operation needed to correct it. \"\n",
    "            \"Cite the fields in the JSONSchema you referenced in developing this plan.\",\n",
    "        )\n",
    "        patches: list[JsonPatch] = Field(\n",
    "            ...,\n",
    "            description=\"A list of JSONPatch operations to be applied to the previous tool call's response.\",\n",
    "        )\n",
    "\n",
    "    bound_llm = llm.bind_tools(tools, tool_choice=tool_choice)\n",
    "    fallback_llm = llm.bind_tools([PatchFunctionParameters])\n",
    "\n",
    "    def aggregate_messages(messages: Sequence[AnyMessage]) -> AIMessage:\n",
    "        # Get all the AI messages and apply json patches\n",
    "        resolved_tool_calls: Dict[Union[str, None], ToolCall] = {}\n",
    "        content: Union[str, List[Union[str, dict]]] = \"\"\n",
    "        for m in messages:\n",
    "            if m.type != \"ai\":\n",
    "                continue\n",
    "            if not content:\n",
    "                content = m.content\n",
    "            for tc in m.tool_calls:\n",
    "                if tc[\"name\"] == PatchFunctionParameters.__name__:\n",
    "                    tcid = tc[\"args\"][\"tool_call_id\"]\n",
    "                    if tcid not in resolved_tool_calls:\n",
    "                        logger.debug(\n",
    "                            f\"JsonPatch tool call ID {tcid} not found. \"\n",
    "                            f\"Valid tool call IDs: {list(resolved_tool_calls.keys())}\"\n",
    "                        )\n",
    "                        tcid = next(iter(resolved_tool_calls.keys()), None)\n",
    "                        if tcid is None:\n",
    "                            # No prior tool call exists to patch; skip this patch.\n",
    "                            continue\n",
    "                    orig_tool_call = resolved_tool_calls[tcid]\n",
    "                    current_args = orig_tool_call[\"args\"]\n",
    "                    patches = tc[\"args\"].get(\"patches\") or []\n",
    "                    orig_tool_call[\"args\"] = jsonpatch.apply_patch(\n",
    "                        current_args,\n",
    "                        patches,\n",
    "                    )\n",
    "                    orig_tool_call[\"id\"] = tc[\"id\"]\n",
    "                else:\n",
    "                    resolved_tool_calls[tc[\"id\"]] = tc.copy()\n",
    "        return AIMessage(\n",
    "            content=content,\n",
    "            tool_calls=list(resolved_tool_calls.values()),\n",
    "        )\n",
    "\n",
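    "    # Illustrative aside (derived from aggregate_messages above): given an\n",
    "    # AIMessage whose tool call has id=\"x\" and args={\"a\": 1}, followed by a\n",
    "    # PatchFunctionParameters call with tool_call_id=\"x\" and\n",
    "    # patches=[{\"op\": \"add\", \"path\": \"/b\", \"value\": 2}], the aggregated\n",
    "    # result is a single AIMessage whose tool call has args={\"a\": 1, \"b\": 2}.\n",
    "\n",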
    "    def format_exception(error: BaseException, call: ToolCall, schema: Type[BaseModel]):\n",
    "        return (\n",
    "            f\"Error:\\n\\n```\\n{repr(error)}\\n```\\n\"\n",
    "            \"Expected Parameter Schema:\\n\\n\" + f\"```json\\n{schema.schema_json()}\\n```\\n\"\n",
    "            f\"Please respond with a JSONPatch to correct the error for tool_call_id=[{call['id']}].\"\n",
    "        )\n",
    "\n",
    "    validator = ValidationNode(\n",
    "        tools + [PatchFunctionParameters],\n",
    "        format_error=format_exception,\n",
    "    )\n",
    "    retry_strategy = RetryStrategy(\n",
    "        max_attempts=max_attempts,\n",
    "        fallback=fallback_llm,\n",
    "        aggregate_messages=aggregate_messages,\n",
    "    )\n",
    "    return _bind_validator_with_retries(\n",
    "        bound_llm,\n",
    "        validator=validator,\n",
    "        retry_strategy=retry_strategy,\n",
    "        tool_choice=tool_choice,\n",
    "    ).with_config(metadata={\"retry_strategy\": \"jsonpatch\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "b01891c4-4187-4a75-9eda-644a7c2355f3",
   "metadata": {},
   "outputs": [],
   "source": [
    "bound_llm = bind_validator_with_jsonpatch_retries(llm, tools=tools)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "746b409c-693d-49af-8c2b-bea0a4b0028d",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/jpeg": "/9j/4AAQSkZJRgABAQAAAQABAAD/4gHYSUNDX1BST0ZJTEUAAQEAAAHIAAAAAAQwAABtbnRyUkdCIFhZWiAH4AABAAEAAAAAAABhY3NwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAA9tYAAQAAAADTLQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAlkZXNjAAAA8AAAACRyWFlaAAABFAAAABRnWFlaAAABKAAAABRiWFlaAAABPAAAABR3dHB0AAABUAAAABRyVFJDAAABZAAAAChnVFJDAAABZAAAAChiVFJDAAABZAAAAChjcHJ0AAABjAAAADxtbHVjAAAAAAAAAAEAAAAMZW5VUwAAAAgAAAAcAHMAUgBHAEJYWVogAAAAAAAAb6IAADj1AAADkFhZWiAAAAAAAABimQAAt4UAABjaWFlaIAAAAAAAACSgAAAPhAAAts9YWVogAAAAAAAA9tYAAQAAAADTLXBhcmEAAAAAAAQAAAACZmYAAPKnAAANWQAAE9AAAApbAAAAAAAAAABtbHVjAAAAAAAAAAEAAAAMZW5VUwAAACAAAAAcAEcAbwBvAGcAbABlACAASQBuAGMALgAgADIAMAAxADb/2wBDAAMCAgMCAgMDAwMEAwMEBQgFBQQEBQoHBwYIDAoMDAsKCwsNDhIQDQ4RDgsLEBYQERMUFRUVDA8XGBYUGBIUFRT/2wBDAQMEBAUEBQkFBQkUDQsNFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBT/wAARCALSASwDASIAAhEBAxEB/8QAHQABAAMAAwADAAAAAAAAAAAAAAUGBwMECAECCf/EAF4QAAEEAQIDAgkECg4HBAkFAAEAAgMEBQYRBxIhEzEIFBUWIkFVlNEyUXGTFyNCU1ZhdYGR0gkzNjc4UlR2oaOys7ThJTVicnSCsSQmksE0OUNEY3eVorVGR1dzwv/EABsBAQACAwEBAAAAAAAAAAAAAAABAwIEBQYH/8QANxEBAAEDAAYHBgYCAwEAAAAAAAECAxEEEyExUVISFBVBkaHwYXGBscHRBSIyU2LhM2M0QvFy/9oADAMBAAIRAxEAPwD9U0REBERAREQEREBERAREQEREBERAREQfDnBjS5xDWgbknuCi/OrCe2KHvTPiubP/AOosl/w0n9krJNM4HGP03inOx1RzjUiJJgaSTyD8Spv37ejW4uVxM5nGxuaPo+vztxhqvnVhPbFD3pnxTzqwntih70z4rPPN/F+zaf1Dfgnm/i/ZtP6hvwXP7V0fkq8YbvZ38vJofnVhPbFD3pnxTzqwntih70z4rPPN/F+zaf1Dfgnm/i/ZtP6hvwTtXR+Srxg7O/l5ND86sJ7Yoe9M+KedWE9sUPemfFZ55v4v2bT+ob8E838X7Np/UN+Cdq6PyVeMHZ38vJofnVhPbFD3pnxTzqwntih70z4rPPN/F+zaf1Dfgnm/i/ZtP6hvwTtXR+Srxg7O/l5ND86sJ7Yoe9M+KedWE9sUPemfFZ55v4v2bT+ob8E838X7Np/UN+Cdq6PyVeMHZ38vJofnVhPbFD3pnxTzqwntih70z4rPPN/F+zaf1Dfgnm/i/ZtP6hvwTtXR+Srxg7O/l5ND86sJ7Yoe9M+KkopWTxMkje2SN4Dmvadw4HuIPrCyl+n8XyO/0bU7vvDfgrnww/e00l+SKn9yxb+j6Rb0qiqu3Ex0ZiNvtz9mnpGjdXiNucrMiIr2kIiICIiAiIgIiICIiAiIgIiICIiAiIgIiIOhn/8AUWS/4aT+yVmWl/3M4j/g4f7AWm5//UWS/wCGk/slZlpf9zOI/wCDh/sBcr8V/wCPT/8AX0dn8O31KFf8IXAY+3Zkdis5Lp6rbNGxqeKkHY2KUSdm4F/PzlrZPQL2sLA
QfS6FcOQ8I7A42zlzLhM+7F4fJnE5LMMqMNSnMHtZu53aczm7vad2NdsHDmDVm2iOAw0leOnszwfwOrq4ycskerp5aoL6kkxk5pmPBlMrGvLeUNIPKBzDvU5qLhVqm/we4yYODF9plNQZ25dxkHjEQ7eJ5h5Hcxdyt35HdHEHp3dy4fQsxOM+bd6d2Yz9Ggai4147C6nvYDH4DUGqb+ObG/I+QqbZmUucczGyFz27uLfS5GcztiOi62rOPuE0pmMtRGHzuZZhYmTZi5iqQmgxjXN5x2pLwSQz0yIw8hvUhQVbGa44Xa61rPg9It1hiNSXm5WCeLJQ1X1Z+xZE+KYS7Es3jBDmBxAJGyjM3pXXmlb3EjH6e0tDn6es5DbrX35CKFmPnkrMgkbYa8hzmNLA8dmHEgkbDvWMUW8x7uPuz7u9nNdeP6965aq46YfD32Y3EY7MarvyY9uSeNP1W2G1azwezlkc5zQA7YlrRu4gEhpVd4fcdJfsXaCsZWpldW6vzeJZflp4WpG6ZzBsHzvG8ccbOZwHUjcnZoPVQ2luHmsuCeduw4PT41picrhMZj32Y78VWWpYp1vF93iU+lG9oa7du5B5vRKqWN4AZvCY/h3lsxw/x+un4/S8eByenbk9btakrJDIyeJ8h7J3y3tcOYdCCCVZFFnGM+fsnwYTXcznHrMeL0nofW+N4gYEZTGCeNjZpK09a3EYp608bi2SKRh6tc0jYj84JBBVe4va2n0fHpaKGTI0/KudpUHXaNOCzGztJmN7OUSSNLWyc3LzsDnN6kDcBdXS+pdFcKtOUcbl26Y4Y3LQdbdgXZGtCGczi3mHyQ8kNG7gNtwRudt1HcRLdLjBhdPN0TlMZqfyXqjE37pxuQglEEMVlsj3OIft0a1x5e87dAVr00RFzOPy+vgumqZoxna5ch4R2BxtnLmXCZ92Lw+TOJyWYZUYalOYPazdzu05nN3e07sa7YOHMGrn1Dx9x2A1BqbFM0xqXKjTfYnJ3MdUikggZJC2YPG8oc4Bjuoa0uGx6bbE0vUXCrVN/g9xkwcGL7TKagzty7jIPGIh28TzDyO5i7lbvyO6OIPTu7l0JMzq+nxZ42Y/SukfOGe9Lj4RakyENeGrI7HRNDpWvIc5o339AOJ2I2Herot26s47vb7vvKqa64xn5e/7Q2XTHE/Cax1HYxGKdNZMWMqZdtwNHYTV7PadkWHfm32jJILR3jv67Zra8JO3ks/w2k07pTMZXBaooXLzmMiri0ez5Q1jeaw1oLSS5+5PRzOUk8wHW0Zw21ZwUz+POGwY1bUl0vj8LJPHdirCC1WdLu94kIPZO7Xfdgc4cvySovSnDfXGhNKcGMjFprytldLU71LJ4aK9BHK0WGtAeyRzuzdymMbjm7ndN9kii1EzMTEx3bfZP1wTXcmIzs+Htj6PScnyHfQVZOGH72mkvyRU/uWKshzn1w57DG8t3LCd+U7d26s3DD97TSX5Iqf3LF2fwj/Dd99Pyqaf4jup+KzIiLsOIIiICIiAiIgIiICIiAiIgIiICIiAiIgIiIOhn/8AUWS/4aT+yVmWl/3M4j/g4f7AWr2q7LlaaCTcxysLHbHY7EbFUqvwixtWCOGLK5qOKNoYxou9AANgO5a2laNGlWot9LExOfJv6LpFNjPS73RRSX2KaPtjN++/5J9imj7Yzfvv+S5XY/8Atjwl0e0LXCUaikvsU0fbGb99/wAk+xTR9sZv33/JOx/9seEnaFrhKNRSX2KaPtjN++/5KvcRtCRaa4e6nzFLM5ht3H4u1bgL7fM0SRxOc3cbdRuB0Tsf/bHhJ2ha4S7UtaGdwMkTJCOm7mgr5igjgBEcbIwe/laBuqzwG00/iDwZ0ZqXL5rLvymVxcFuy6G1yMMjmgnZu3QfiV8+xTR9sZv33/JOyP8AbHhKO0LXCUaupVxFGjeu3a1KvXuXXMdasRRNbJYLWhrTI4DdxDQGjffYABTv2KaPtjN++/5J9im
j7Yzfvv8AknZE/ux4SdftcJRqKS+xTR9sZv33/JPsU0fbGb99/wAk7H/2x4SntC1wlFyfId9BVk4YfvaaS/JFT+5Yo88KaBG3ljN++/5K1YbFV8FiKONqhwq04I68QcdyGMaGt3PrOwC6miaNGiW66Olmapjyz92hpekU34jo9zuIiLac4REQEREBERAREQEREBERAREQEREBERAREQEREBERAREQFTuMv70GufyFe/w71cVTuMv70GufyFe/w70FY8E/+DVw0/IVX+wFrCyfwT/4NXDT8hVf7AWsICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgKncZf3oNc/kK9/h3q4rIePPF7QmD0HrrT2S1rp3H58YW3F5KtZWCK1zvrOMbeyc8O3cHNIG3XmG3eg5vBP/g1cNPyFV/sBawsC8EPido7I8D+HGnaurMHZ1BHh68D8TDkoX22yNiLnMMQdzBwax5I23AY4+orfUBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERARFFah1NQ0zWZLdkeXyEthrwxmSWZ3zNY3qfxnuHeSB1WVNM1TimNqYiZnEJVFnc+v8AUNqQmnhKVKDfob9sulI/GyNpaPzPK4fPLV38nwv6Zlbqsb6o8fs2o0W9P/VpSLNfPLV38nwv6Zk88tXfyfC/pmTVRzR4p6pe4NKRZr55au/k+F/TMnnlq7+T4X9MyaqOaPE6pe4NKX5y/spHAvxe7iOKeLrehY5cZmeQfdgfaJj09bQYyT0HLGPWvaHnlq7+T4X9MyrfEejluKmhs1pPPUsPLisrXdXm7MyB7PW17CQQHtcGuaSCAWjoU1Uc0eJ1S9weMf2LzgnLqDX+R4lXY3sx+AY+lQcDsJbcsZbJ9IZE8gg+uVp9S/TxYHwi01keCegMZpDTlXF+TKAfyy2nyPmme5xc58jgGguJPqAG2wAAAVy88tXfyfC/pmTVRzR4nVL3BpSLNfPLV38nwv6Zk88tXfyfC/pmTVRzR4nVL3BpSLNfPLV38nwv6Zk88tXfyfC/pmTVRzR4nVL3BpSLNfPLV38nwv6Zlyw681JWcDZw+PuR7jcVLb45NvWQHsIP0Fw+lNVwqjx+6J0W9H/VoqKG05qyhqaOQVnSQ2oQDPSst5J4d99i5vrB2OzgS07HYnYqZVVVM0zipqzExOJERFigREQEREBERAREQEREBERAREQEREHUy2Tr4TF28hbfyVqsTppHesNaNz/0WY1PGr878pkh/pKy0F7ObmbXb3iJnzAesj5R3JVo4svc3Rj2j5Et6jFJuN/Qdbia4fnB2/OoJW1fltRMd8z5Y+/ydfQaIxNc70QNWYo6tOmfGv8ATgojJGr2b/8A0cydnz823L8obbb7/i2Uuskb/C0k/mQz/HuURxumv6m1nBprTs2ofLdPFOyU/k3PnE068TnuZHJI5rHulkLmP2ZyluzTzbLVdHp4iZ9rckXmHQmbzfF/NcNK2X1FmaVfJ6GlyV5mIvPpmzYZPXjEhMZBafTcfR2+b5O4PQ0LkM/jdG8LtWy6tz+SymT1QcHdivX3SVp6vbWIADD8jnAiY7tNuYu3JJ3TDDXZ7vWz7vTGmtVYvV9Ge5iLXjdaG1NTkf2b2cs0Uhjkbs4A9HNI37jt0JC7GbzNPTmFv5bIzeL4+hXktWZuVzuSJjS57tmgk7AE7AEryTDSyWjOD+ruImF1NmqOXxGqMlLDjBcJx9keUnsMD6+3K7tOYjm+UHOGx9Sseu6+Q4oYLjhk8jqTM4yHTMd3GUMJjbpggEcdJsplnYP27tS9w9LcBo2HXqJwjWzjdtemMbka+Xx1W/Uk7WraiZPDJylvMxwDmnY7EbgjvXYXlLWWSyud09XqaVtaijymm9IU7tyennjjaFMuge+JxjDHmxIQxxLXDk5WtG4JK9F8N87Z1Rw70tmbhabmRxVW3MWjYF8kLXu2Hq6kqFl
FzpThYkWB8YctqvUvGXGaHwj5WUI8E7MSRVs7Jh5LMhnMW3bxwyPcGAA8jeXftASTsAonyTrqDP8ADLSmq9SX6pv3cuyZ+Iyr3TT1GQiSCOWcRxlz27cpkDWu2G4IJJRE3dsxh6SReWq+q9ROFfQfnNk6tKfXtvTxzsljnvMpx1RZZAJ3bntHud2YkO7th86uWvsJf0b5k6NxuqdQw4/U2edBcytzJPnuQxNrPl8Xinfu9naOiAB3Lhu7Y9RsIu5jOG5qKh1Vi7GqLWnY7XNmatSO9NW7N45YZHPYx3Ntynd0bxsDuNuo6hebNQ6w1Do7O6m4eY/U+Sfj5M9g8bBnL1jxi7jYrzJHTMEz9y5w7EchfuW9uOvQKP11YyHAvVHFG9gstk8ner6UxRguZy463LXMt6aIv7R4J5WBzpPS5gDv026KcMJvY249/n9nrhFiPC/RnETTutaVq9bedNS1ZW3ob+qJsy+aTYGKWLtK0fZEHcENdykO6NGwW3KF9NXSjOMOrchna+K5RkEGTrbugl32B7iY3/PG/YBw+gjZzWkaNpzOQ6kwdPJQtMbLEfMYnEF0bx0cw7dN2uBadvWFQ1L8KHu8kZeL/wBlFlbAj27tncr3f/e9/wCfdbVH5rUxPd9fXzc3TqIxFfeu6IiqcYREQEREBERAREQEREBERAREQEREETqvB+cmnb+NEnYyTxERS/e5B1Y78zg0/mWe4y669VD5YjXssPZz13Hd0Mg+Uw/Qf0jY9xWsKrao0UcrYdkMZPHQypAa98kfPFYaO4SNBB3A6BwO4/2h6KtjFdPQqnHD17W9ot+LMzFW6WQa34I6O4i52HM53HWZ8nDWFNlirkrVR3YhxfyEQysBHM4nquvNwA0JZr4yGbCyzNx0L60LpMhZc98L5DI6KV5k5poy5zjySFzep6bK8zx5+hIY7emrj9jt21CWKeJ30bua/wDSwLg8oZD8G837qP1lHV7vdHnH3dbp2J25hB6Z4V6W0dcx1rD4vxKbHU5sfUIsSvENeWYTPjDXOI252tI6eiAA3YdF81uFel6eEw2IhxfJjsPkPKlGHxiU9jZ7R8nacxdu70pHnZxI692wCm/KGQ/BvN+6j9ZPKGQ/BvN+6j9ZOr3eDLWWeMKZX8H3h/VzseXZp5jrrLj8gBJanfD4y55eZjC55jL+Y7hxbuOm22w25dXcB9C66zNzK5nBCxfuwCtblhtz1xZjDeUCVsb2iTYdAXAkdNiNgrd5QyH4N5v3UfrJ5QyH4N5v3UfrJ1e7wR07GMZjyVTL8DdD561UsXsEyd9apHQDfGJmslrx/IjmYHhszW9dhIHd5XFDo/V2mK8GJ0jltO4rTdKJkFGlkMVatzQxtaBymXxxvMAd9vRGw2Hq3Vw8oZD8G837qP1l1snqKbDY23kLuBzFalUhfPPM+r6McbQXOcfS7gASnV7vA6dnumFcy3Cqjr/GY8a9r0Mzl6Mj317+JZYxzoQ7oQxzZnSN3GwcO02Ow6KVx3DLTWJfp11TGCA6eEwxnLNJ9o7ZpbKervTLgTuX79Tv3rn09rFurMHRzOIw2Xv4u9C2etaiq+hLG4bhw69xUh5QyH4N5v3UfrJ1e7wOnZ35hX8jwh0hl8RmcZdwsVmlmL5ylyOSSQl9ota3tmu5t43bMbsWFu23Tbc79WLgfomPS1nTxwpmxdmy25I2xbnlmM7QA2UTOeZA8BrQHBwIA6K1eUMh+Deb91H6yeUMh+Deb91H6ydXu8Dp2eMKzW4KaJq6PyOl24CCXC5GTtrkNiSSaSxJuCJHyvcZHPBa3ZxduNhsRsvpp3gjorSzsmaGFDjlKjaF3x21Nb8ZgHNsx/bPdzD03Dr122HcABafKGQ/BvN+6j9ZPKGQ/BvN+6j9ZOr3eB07PGPJTcLwaw/D6rbn0JVrYbMyxNrxWco+zfhiiDw4xiN04LW7Do1jmgHY7HbZSmCoa9hysD81nNOXMYObtoaGGsV5neieXle
628D0tid2ncAjpvuJ7yhkPwbzfuo/WXNCM5ecG1dNXwSR6dx0UEY/GSXF36Gkp1e53x5wjWWad1UR8X2yF1mOqPneHP22a2Ng3dI9xDWsaPW5ziGgeskBXjQ+Cm09pmrVskG68vsWSDzDtZHF7wD6wC4tB+Zo7lH6Z0TJTtR5LMTR3MhGD2MMLf8As9UnoSzfq5+xI5zt03DQ3mdzW5TOKKehE54+vX35WlaRF2Ypp3QIiKpoCIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICp3GX96DXP5Cvf4d6uKp3GX96DXP5Cvf4d6CseCf/Bq4afkKr/YC1hZP4J/8Grhp+Qqv9gLWEBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQFTuMv70GufyFe/w71cVTuMv70GufyFe/w70FY8E/8Ag1cNPyFV/sBawsn8E/8Ag1cNPyFV/sBawgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiKOm1HiazyybKU4njva+wwH+krKKZq3QJFFFedWE9sUPemfFPOrCe2KHvTPistXXyynEpVFFedWE9sUPemfFPOrCe2KHvTPimrr5ZMSlV4L8OfwreJ3BrWWX0VDg9PS6Pz+KLaN+xWsOsujki7KcF7Zms52v59hy9GlhIO/X2/51YT2xQ96Z8V5w8O/hdieOHBSzJirlK3qnT7nZDHMhmY+WZu326BoBJJe0AhoG5dGwJq6+WTEsU8Ajwr9f641ZpLhV5Bwz9L4nHPbYyFeGYWoa0MLgxziZS0l0pgaSG7ekRt1BH6Hrx3+x48JMdwd4W2c9qCavjtV6jk55qtyVsc1WtGSIonMcd2uJ5nkdDs5gI3avV/nVhPbFD3pnxTV18smJSqKK86sJ7Yoe9M+KedWE9sUPemfFNXXyyYlKoorzqwntih70z4p51YT2xQ96Z8U1dfLJiUqijYtS4id4ZHlaUjj3NbYYT/1Ul3rGaZp3wgREWIIiICIiAiIgIiICIiAiIgIiICIiAiIgKN1Bn62m8a+5ZEknpCOKCEAyTSH5LGgkDc/jIAAJJABIklmmrLRymuzA4gwYqqxzG9f26Uu5nfN0Y1oB7/Td3b9bbdMTmqrdG318V9m3ra4pR+Sgt6oc6TPT9vG7uxsLyKkY37iOhlPqLn9/XZrQdl9GadxMY2ZjKbB37NrsH/ku3atQ0as1mxKyCvCwySSyHZrGgbkk+oABZ/pLjdjtYOFmtgNQUsA+CS1DqG/SbDRmhYOYyBxfztaQNwXsbuO5V1XrlXfiOHc9DFNFvFMRhd/IGM9nVPqG/BPIGM9nVPqG/BULSfH3B6szGHpDE5zE18217sPkspTENbJBrS/7UQ8uBLAXtEjWFwG43XaqcbcHc0Np3VTKmQGOzuSgxdaJ0cfasllsGBpeOfYNDxuSCTt6ieix1lfNKYronvXPyBjPZ1T6hvwTyBjPZ1T6hvwWb1vCNwNidjjhc/BivK78HLmJajBUhticwBrndpzFrngbPa0tHOA4tO4Hzovi1mdRcYdaaTs6avxYzEWYYK+QayERxA1+0Lpj2xce0PVnKw+iW8wad01lfNPidOjZho/kDGezqn1DfgnkDGezqn1Dfgu8sbg8KHBWKeIvR6Y1U7G5a06hRutoRmOe2C4dg0CXm5iWOAdtybg+lsCQ1lfNKaqqad7VvIGM9nVPqG/BPIGM9nVPqG/BUmnx0wE2nM1lLtXJ4ixh7rMdbxFysDdFl/J2UTGRueJHSdozl5HEHm7+h2jbfhHYDD0MrLnMNn9P5DHwQ2PJWQptFqzHLK2GN0IY9zH7yOaz5Q5S4c2w6prK+aUdOiO9pHkDGezqn1DfgnkDGezqn1DfgqHk+OdPEY7EPtaV1NFmMtblp08CacXjsro4+0keB2vZ8gb15uf6F3dS8W/NvGY275m6rybLlXxySOhjmufTZsCRMHSN5X
jf5A5ndD0TWV80nTpW/yBjPZ1T6hvwTyBjPZ1T6hvwVIyvHXA1amnpcTSyuqrGdonJ0qWErCSY1QGl0zw9zAxoL2jYncuOwBPRdOl4QeEyumtP5TGYfNZe3nWTT0sPQrxyXDBFJyPmeO0DI2g8vyng7vDdubcBrK+aTp0cWgyadxMrCx+Mpvae9rq7CP+iY/HTabcJNP2HY0NO5pA71Jf9kx9zPpj5T9I6HO5vCL0/wCJ4nxLE53KZfI27NJmCq02i9DNXaHTtkY97Wt5A5pPpHcOby77rS8Xe8qYypcFeeqLELJuwtR9nLHzNB5Xt+5cN9iPUQVlF65T/wBtnl4ImLdyMTtaBpnUcOpaDpmRvrWIndnYqyEF0T/m6d4I2II7wR9Cl1l+CtnEa7xj27NjyrJKMo6+k9jHTRH8wZMP+dagrK4iMVRumM/T5w8/ftaquaYERFU1xERAREQEREBERAREQEREBERAREQFmGermlxCyocCG3atezG7boeXmjeN/nGzCf8AeC09V3WWmH5+rBYpujjytJxfWfKSGOBGz43kbkNcAOoB2Ia7Y8uxttzG2me+MfX5w2NHuRauRVO5nOttODWOjM/gDMawyuPsUTMBuY+1jczm/NzbrL9N47XGe4eu4d6j0ezDVnYSXDWNQV8nDLXd/wBnMLZIYm/bPS6HlcG8vzlbBTyUdqWSu9j6t6H9upWAGzRH/aAJ3HzOBLT3gkdV2lr1UzRPRqja9D0Yr/NEsCwekteaos8M8Nn9NQ6ex2jLEdy1lGZCKdt6WGs+CNtdjDztY7tC49oG7AbdSoKnw+1/R0VozQ7dJiStp/VVS/PmvKNcRT1I75m7SOPm5+YMcC5rg3uPLzHYL00ixyw1McXn21wr1RJwOzmn24vfMWdWuycVfxiL0q5zDbAfzc3KPtQLtid/Vtv0VmpVM5w/4wa1zdrERzaRz7alybOG/DDHjRXrdlJ2zJHB22zA7mbuNid9tlri+HND2lrgHNI2IPcUZauIxMTu9fVTq/Grh7bsRQQa80zNPK4MjijzFdznuJ2AAD9ySfUsvwHCvVFLhfwqxE2L5MjhNVDJZCHxiI9jX7a07n3DtndJWHZpJ9Lu6HbfBRrNIIrxAjuIYFzImaOltqec+I3A7UGsslxHsx42nZbYz+IzWMqZCRhrZNtWpFHLBIBuWNcWyM9IDrse7qpLFcPqY0tqSRnAPD4mexXiqDFNt0mS5CJ0gdM0yRgsYG8rHt3du5zR8ggFb0iZY6qnOXmytw+1WeHb8ZntD5HUsAzE1jD0XaiiblMFWEbRCW2y8czg/tNtpCWtcAS7uXDb4Za+yB0s7WmAj4jth0+2nJUlyUcVepke1cXWJ2vIEu8Zjb2jWvcCxxDfS3PplERqY4vIVnTeruGGC4VinD5H1bjcJcw93sclji+Wu2WPYNjsysa9pIbIHtcS3cNcz0unLgeG2K1VhtCau0xohmutOY/HW8Fb0/nZa3jTZG2S51qOR5ML39q2XchwBD/R+Yen9R6L09rGOKPP4LGZyOE80bclTjsBh+cB4O35lJ0qVfG1IqtSvFVqwtDI4YWBjGNHcA0dAPxBTlhqIzv2f+fZhGpuHPNw8xNLG8HoKVp9ixc8T0/l69GzibOwZFPHMOQF7mhvMWnpygbPAWu8Pqebx2hdP1dS2W3NQQ0YY79hh3Ek4YA877Dfrv126qwLq3snBjzEyRxdPM7kgrxjmlmd38rG97j/ANB1OwU00zXPRpjMropiiellyUq7r+t9NQsBPi0s16TYdAxsL4uvzelOz9BWqKr6K0xNifGMjkGsGVuNa17I3czYImklkYPrPpEuI7yfmAVoWzcmPy0R3R9c/VwNJuRduTMbhERUtUREQEREBERAREQEREBERAREQEREBERBE53SuJ1NGxuSpR2HR79nLuWSx79/K9pDm/mIUBJwpxjnEx5LMwtJ35W33uH/AN25/pV1RW03blMYidjOm5XT+mc
KR9ifH+18376fgn2J8f7Xzfvp+Cu6LLX3OLPXXOaVI+xPj/a+b99PwT7E+P8Aa+b99PwV3RNfc4muuc0qR9ifH+18376fgq7xG0FDprh7qfL0sxmG3cfi7VuAvuczRJHE5zdxt1G4HRayqdxl/eg1z+Qr3+HemvucTXXOaWe8BtLu4g8GNGaly+Zy78plcXBbsuitcjDI5oJ2bt0H4lfPsT4/2vm/fT8FXvBP/g1cNPyFV/sBawmvucTXXOaVI+xPj/a+b99PwT7E+P8Aa+b99PwV3RNfc4muuc0qR9ifH+18376fgn2J8f7Xzfvp+Cu6Jr7nE11zmlSW8KMb3Pyeakb62m+9v9Ldj/Sp3AaPw+mC9+OpNinkHLJZkc6WeQfM6R5L3D6SplFFV65VGJnYxquV1bKpERFSrEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBU7jL+9Brn8hXv8ADvVxVO4y/vQa5/IV7/DvQVjwT/4NXDT8hVf7AWsLJ/BP/g1cNPyFV/sBawgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAsh488XtCYPQeutPZLWuncfnxhbcXkq1lYIrXO+s4xt7Jzw7dwc0gbdeYbd615fnL+yk8C/F7uI4p4ut6FjlxmZLB92B/2eY9PW0GMk9Byxj1oPTvgh8TtHZHgfw407V1Zg7OoI8PXgfiYclC+22RsRc5hiDuYODWPJG24DHH1Fb6vzE/Yu+CUuoNe5HiXdY9lDANfSx5HQS25Yy2Q7+sMieQQfXK0+pfp2gIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiIC45546sEk00jIYY2l75JHBrWtA3JJPcAPWuRZtq3JO1Jn58bvvica5glYHdLFnbm2cPW1gLCB3FxO43YFZRTFWZndC61bm7V0Yd67xKntOLcFiXXItyPHL0hrRO/G0crnuH4+UA94JCpnEilluKuhs1pPO0cO/FZau6vMI3yF8fra9hIID2uDXNJBALR0K7eqtVYvRWCsZnNWvE8bXcxsk3Zvk5S97WN9FoJO7nNHQev5ki1Vi5tU2NOMtc2Zr0478tbs3+jA972Mfzbcp3dG8bA79Oo6hNdEfpoj5+vJ2Y0SzTsnbKH4Q6byHBPQGM0hp2pjDi6AfyyWpJHTzPc4uc+RwaAXEn1AADYAAABX2nxJu03gZrCmODoDbxkpstb+N0Za14H+6H/AD9Ou0eia6J/VRHyTOh2pjZGGj0r1fJVIbVSeOzWmaHxzQuDmPae4gjoQudZlgcqdLZ6u3mIxWTmEEsZd6MFhx9CRo9XO48jgO9zmHp6ROmpVTEYqp3S4t61NmroyIiKtSIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAsd06901W7K/9tkyN10nT7rxmQbfm22/MtG1prXDcPdL5LUWfueI4jHRdtanET5TG3cDfkYC49SO4FZrh8tUyMzMhR7byPn4m5jGvsMdG4tlHNLGWOALXNfu4g+qQfMVdH5rVVMb9k+Gfvn4OjoVURcmJ71A8Kc7cC9Qk9AJqJPvsCrmtrc1DjDxKsVppK1mLh3FLFLE8texzZrpDmkdQQduo7ltOo9O43V2Du4bMU4shjLsZisVphu2Rp9X/nuOoI3Cr1ng9pC5Lj5psRzz0MY/DQTmzN2nibmFjoXv595BsT1eSQTzAg9VqOtXRNU5j1v+7EMPgcpcznB6GbWurJItXYSe1mGeWZW9tJHWhlaY9iOx9J537LlJA2J6nfVPB+y+RyOkMxSyWQsZWXDagyeIiuXH888sMFl7IzI77pwaACT1O3VWyvw/wFSzpqxFQ5ZdOVn1MW7tpD4vE+Nsbm/K9PdrGjd256d++67mndK4vScN6LFVfFY7t2fI2B2j389iZ5fK/wBInbdxJ2GwHqARFFuaZz67nDrV7odLZGZ
n7bDGJoj6+0aQ5m34+YBbSsnOPdqPNUMRGOaISx27rgf2uBjuYA//ANj2hm3rHaEfJK1hbc7LVNM78zPjj7OXp1UTXER3CIipc0REQEREBERAREQEREBERAREQEREBERAREQEREBEUflM/j8NBclt2Ws8UrPuTRMBklbC0El4jaC5w6HuB3PQblBIL4JDQSTsB1JKyOfinqriXwzxWo+EuFqz2L94wnzwbNRZFWaXh0/I0Fzg7laW7ep4O24LVaRw2e7ip57Samzrw2j4lFgBaAxrN9ueTsuXcvOzeu/Tb9AROd444uXQeW1HoSnLxOkoXBjjS01NHMTYPJu0v32DWh7C5w5tgd9tgdu7Yxmu81rnS+ZrZyrgNIw0u0ymnJqLZ7Viw9jhyGcO2Y1nMw+h91Gd+YO6WjTmlMLo7HeIYHEUcLR5i/xfH1mQRlx73FrQBufWe9SqCnaE4S6Z4b3NQW8FSlgs5+6b+QlntSzmaUkkEB7iGgc2wDQBsB8ysGewFPUePNS4wlocJI5WHaSF47nsd6nDc/mJB3BIMiimJmmcwmJmJzDMbuA1JhHFppNz9cE8s9FzIptvVzRSOA3/ABtd17+Udy6BvZFvR2m80D6wKwO35w7Za6it6dE/qoj4Zj+vBvU6bdiMTtZEL+QcdhpvNb/jqgf/AOl3KeG1LmXhkWMGFgO3NZyUjHuA/wBmKNx3P+85v5/XqKJ06I/TR45knTbsxiNjPG6z0jwy1hp7RFy7NDn9Rtmmq2LMDj47JHtzB0waGc+x9Fg22DdgAOUHQ1wz1IbL4nyRMdLC4vikc0F0bi0tLmkjodnEfQSPWshx2sbng/6fw+O4martaqmzOefj8dmosUW9kyUkwMsmPdoO/o8wA3LgNtmucK6qpqnMtGZmZzLZERFigREQEREBERAREQEREBERAREQEREBERARFk8HFfO8TdEaps8N8JNUz+Ou+IUn6ypTU6dpwc0Plbt6bo2gvHTY8zNiBuCg1hUKxxlwF+LWVbS0zdZah0tFzXsHiZGunEvp8sO59HnJjeCNyQW7Eb7A8E/Cy1qjN6H1JqPP5ODOafrh0+Pw158WLtWnMAfI+Ijd4B5w3fb0XbFXbHYPG4eW3LQx9WlLclM9l9aFsbp5D3veQBzOPznqgzyxS4icRMLofKV8ieGE7LDbmdwcteHIyzRte0trdsCA0ENO7m7HZ/cCCFY8Hwn0ppzX2d1rj8QyvqfNsZFfyHavc6VjWsAaGlxa0fa2n0QNyNz1KtyICIiAiIgIiICIiAiIgL6yRMlaGvY17QQ7Zw3G4O4P5iAfzL7IgyDL5ux4PVLW+sdaatyuo9JW8jDYqVBju1lw7JCGSN5ox1haSD1A5Wt+6c70tZp24b9SC1XeJIJmNkjeBtzNI3B/QV95YmTxPjkY2SN4LXMcNw4HvBCzTUkOZ4bam1fxDvamymY0dFhu1dpKCi2Z8E0I3Mldzdj6TQ7dp7y7cuDWjlDTkUFofWmK4i6QxOpsHO6zicpXbZryPYWOLT6i09QQdwR84U6gIiICIiAiIgIiICIiAiIgIiICIiCI1hWyNzSWbgxF6PGZaWjOynem+RXmMbhHI7oejXbE9D3dy6vD2nmMfoXAVdQZOHNZyGjCy9ka+3Z2ZgwB8jdgOjjue4d/cvFHhz+FdxM4Qa0zmhGYDTs+js9iiynds17DrEkMsPZTgvbM1oe1/abAN6AsJ336x/gI+FnxA4i6x0twxGncE3SmHxrmWLtSGcWIK0MBZGS50zmkulMDSeXucenXcB+hCIiAiIgIiICIiAiIgIiICIiAiIgLpZqW7Bh78mNhZYyLIJHVoZDs18oaeRp6joXbDvH0hd1R2oou30/k4vH/ACVz1ZW+P83L4tuw/bN9xty/K33Hd3hBE8Mr+qcpoLC29bY2rh9VywB2Ro0nB0MMu59FpD3gjbb7t30qzql8Gcd5J4Xacp+d/n92NUN85e27Xyh1P2zn7STf5t+
d3d3q6ICIiAiIgIiICIiAiIgIi4Lt2DHVJrVmVsNeFhkkkeejWgbkqYjOyBzrpW81j6D+S1frVn/xZpmsP9JWc5XKZDWDi+ea1jMS79rx8TzFJIP40z2nm6/xGkDY7O5u4dCvpbDVGcsOJpRj18tdnX19enVWzFujZVOZ9n3/AK+LpW9CqqjNU4ab51YT2xQ96Z8U86sJ7Yoe9M+KzjyBjPZ1T6hvwTyBjPZ1T6hvwUdKz7fJb1D+TMfD14V4rjfwZls4e1Suaq06916hHDMx8s8e200DQCSS5oDg0DcujaB3qM/Y9+FGO4NcJZsxnbFbH6r1JIJ7Na3K2OarXYSIYnNJ3a7q55B2PpgEbtWw+QMZ7OqfUN+CeQMZ7OqfUN+CdKz7fI6h/Jo/nVhPbFD3pnxTzqwntih70z4rOPIGM9nVPqG/BPIGM9nVPqG/BOlZ9vkdQ/k06rncbekEdbIVbDz9zFM1x/QCu8sen0xh7LeWXFUpBtt6Vdh/8l3cZkMhpBwkoyWMhjW7dpi5ZOchvrMLndQ7/ZceU7bejvzKcW69lM4n2/f171Veg1UxmmctURdbHZGtlqMFypKJq0zQ9jwCNx9B6g/OD1B6FdlVTExOJc0REUAiIgIiICIiAiIgKK1XLSg0vmJMlC+xjmU5nWYYzs58QYedo6jqW7jvH0hSq6WaluwYe/JjYWWMiyCR1aGQ7NfKGnkaeo6F2w7x9IQUvwf7+lspwa0nb0TjbWH0pLTDsdRuuLpoYuY+i4l7yTvv9276VoKrHDK/qnKaCwtvW2Nq4fVcsAdkaNJwdDDLufRaQ94I22+7d9Ks6AiIgIiICIiAiIgIiICofE22bNrBYXp2VqV9ydp+7jg5SG/WSRH/AJdvWr4s/wCI9d0OpdNXyD2RZaoEgdA6QRyt3+bpXcPpIV9n9WfZPylsaPETdpy6Kz2Tjfg4tKZDNmnkS6lmTgH4wRR+Nvudu2FsbW8/KeYva8Hm+Q4Hp3LQlj1rg3dm4+R6kErPNF7WZealuOuXjjdWjk2+bsX793yo2lab0Nc1Rjop7I8aqGG1dXwmT09qLG1rN9uMgzdmi1tCWw47MYH85fs53RriwNJI6qqWOKWZxum+NOQtWL1iPTmRlr0n46pXlmpQCnBIZAx7mNkEbpHyEPduQCOvQKgah4OaxyGUmt2dFMzmo6mqosw3VFjKQl09GO22SOvXY53NEREGt5HBjPRJ5iSN71lOH2qY6HHTDwYhtqrqqrZt4m7HajHazy0W1/F3McQWODmb8x9HZ3eFKjpVz69krNa410MG3BYuPH5zVudt4uLIzwYeix8sUDmgCeZvO1jOZ3Ns1riSQQ0HZV3hxxtyx4OaMzmWwGotYZXK1JZ7E+Dx8TuTkkI3eOaNrTtts1vU7HYHYrgwul9a8MtbOzWL0uNT083g8bSuwx5CGvNj7NVj2de0Oz43CTqWEkFp6H10vT3B7V+L05oClqHRY1bi8dhZqs2nX5SCOCpfdYc4Tygu5JW9mQ0Ec5b1IbuUJqrz69jQtXcf5q1zhla0thLupcJqt80jnU4ou2fG2tJI2OMSyxhsgc0Fwd05WOG++wM9qnjxidL5a9jm4PP5mxi6sdvLHFU2TNxkb2lze2JePS5QXcrOc7DfbZZxp3hlrXR/DrhO+LTzMhm9F5G0bWJhuws8YgkZYh7SGRzgzulY8NeWnbcHYjZTFzC6+0tntb3sJo5mZZrOCvaaH5KCI4u2KrYJI5+Y/bGDla7mj5vuht3FExVXjM/L2fdb8tx4w1TUWOwmKxOZ1RfyOIjzlQYaCJ7Jqr3lofzSSMDe4H0th6TQCSdlpSxfhdwly/D/AF7gDKwWsVjND1cE7ItkbtJajsFzmhhPOBt1B2222G+62hQuomqYzUkOG9o1MvnMOCBA3s8hCwb+j2peJB/44y/6ZCr8s84f13WNYZ24A7soKteoCR0Mm8kjh+Zr4/0rQ1uXv1RPsj5Q8/pMRF2rAiI
qGsIiICIiAiIgIiICjtRRdvp/JxeP+SuerK3x/m5fFt2H7ZvuNuX5W+47u8KRUVquWlBpfMSZKF9jHMpzOswxnZz4gw87R1HUt3HePpCCA4M47yTwu05T87/P7saob5y9t2vlDqftnP2km/zb87u7vV0WfeD/AH9LZTg1pO3onG2sPpSWmHY6jdcXTQxcx9FxL3knff7t30rQUBERAREQEREBERAREQFGajwMGpMRNQnLmB5a9krPlRSNIcx4/GHAH+hSaLKmqaZiqN6YnE5hkwtTY++MXlWx1cpsS1rSezsNHe+In5Q+cd7e4+ontqT4z6txekOH2TymR07c1fFWfHG3EY2r41PLO9zWxtDfuTzPb6XeN9x6t6zQ4NXclqupnI9R5nCaYlxsf/dZzy6aOy7cuL5nPeWhoLRyN6bg9duhzmm1XtzifdmPXrLr29OjGK42pNFL/Ynx/tbN+/H4J9ifH+1s378fgmqt8/ks69b4SiEWfeFDlsRwA4N5rVIy+VfldhUxdea8SJrT9wwbdNw0BzyNxu1h6qP8EjUWN8IXg3j9QWsvlWZ6q91HLQw3C1rbDADzNG3Rr2uY75gXEepNVb5/I69b4S1FFL/Ynx/tbN+/H4J9ifH+1s378fgmqt8/kdet8JRC6clyWzd8m4yNl3LOAIg5tmxA90kpG/Iz8fedtmgnopbOcF6WWwt6lBqDPY+zPC6OK7FeJfA4jYPDSOV23zOBChsTbzPBDG6B0rJgcrrpuQmNPJ6nxdOCBteUloZNYhaRsw7kF/qEfUucQCim1Rtz0vKPv63q69OjH5I2tH0xp6LTOJZTjeZpC90087hsZZXHmc4j1dTsBv0AA7gFLLp0cvQyctuKndr25akvY2GQSte6GTbfkeAfRdsR0PXqu4sKqpqmapciZmZzIiIsUCIiAiIgIiICIiAulmpbsGHvyY2FljIsgkdWhkOzXyhp5GnqOhdsO8fSF3VHaii7fT+Ti8f8lc9WVvj/ADcvi27D9s33G3L8rfcd3eEETwyv6pymgsLb1tjauH1XLAHZGjScHQwy7n0WkPeCNtvu3fSrOqXwZx3knhdpyn53+f3Y1Q3zl7btfKHU/bOftJN/m353d3erogIiICIiAiIgIiICIqXxF4nwcPLGna7sHms9YzmRZj4WYamZxAT1fLKQdmMa0OcfXs07DoSAs2bzeP03ibeUy16vjcbUjMti3akEcUTB3uc49AFRvPnUesMtofJaCq4XNaAysb7eSzlm29kjYeXZjIYg3fnJO+7ug5HNcGnYrtY7h9mbmp9ZT6r1FFqfS2aZHWp6Znx0batSANPMH77mVzi5wJPQgDp0AbeKtWGjWir1oY69eJoZHFE0NYxoGwAA6AAepBUOHPCTT/C6bUM+FFyS1nshJkr9m9bksSSyuJ2G7idmtBDQO/YDck9VdERAREQfn9+yK8MOL3FjULLOH0u6Xh5pTHyXDe8o1WCV5Z2k83ZulDyGNaGAFm+7HEbhyiv2ObhZxd4a6wrZy1pss4b6txomlunIViG7MMlafsmyGTc7lm3L0EpJ226e3eO/7x/EP+buR/w0iiPBd/g4cMv5u0f7lqDUEREBERBn2T4Q0MTBrPJ6DioaN1pqWIdvnY6Ym3mBcWyviJDXHd7yT6yd3c2y6jOJtvQNrQGl9bQW8lqXPxmtLl8JjJXY1ttob6DndTHz7uI3G2zHOPIO7TEQfAcHDcEEd3RfKy6fhNNw8xmu8rwxZBW1dqKYXzHnrdiegbXMS9xbzEs5+Z+/L6+X1AASFbi9jcBl9G6U1naq4bXeoaXbMx1cSSQOmaG9pGyXl2OxJ2BO5DT+LcNBREQEREBERAREQFFarlpQaXzEmShfYxzKczrMMZ2c+IMPO0dR1Ldx3j6QpVdLNS3YMPfkxsLLGRZBI6tDIdmvlDTyNPUdC7Yd4+kIKX4P9/S2U4NaTt6JxtrD6Ulph2Oo3XF00MXMfRcS95J33+7d9K0
FVjhlf1TlNBYW3rbG1cPquWAOyNGk4Ohhl3PotIe8Ebbfdu+lWdAREQEREBERAREQUfjZrB+hOGOczDMJldRBkbYHUMG9zbj2yvbEXRFvpBzefm3b1G247l3OFvDjDcJ9EY/TWA8cONrcz2vvzvmmke9xe973O9bnOc4gADcnYBTeoosjPp/Jx4iZlbLPqytpzSNDmxzFh7NxB6EB2x2VK0hxBh0rhNJ6c4k6u05U4j26kDJ6XlCGKS7M5xjDoYjyF3O9pADW7c24Hcg0ZERAREQEUPq7WGF0Fp65ndQ5OviMRTZzz27T+VjR6h85JPQNG5JIABK85Ova78MMuix7sjw44MydHXiOyy+oYz3iMH9ogcPuj1cD6w4hod7i7xrvcX7Ge4T8I8fBqjKWq0uPzmop3kYrDRStLHh0jf2ybYu2YzfY/PyuaNv4X6KHDfhxpjSot+P+RcdBQ8a7Ps+27OMN5+Xc7b7b7bnb512NCaA09wy0xU09pjFV8PiKrdo69du259bnE9XOPrc4kn1lWBAREQEREBERAXDNUgsSwSywxySQOL4nvYCY3FpaS0+o8rnDceoketcyjtQ6jxOksPYy2cylLC4qvy9teyFhkEEXM4NbzPeQ0buc0Dc9SQPWgy3h6/AcN+NGouH9bN6ky2YzdN+rm18tY8YqUoXTmJ7IHE8zeaVxdy7Hv71sawN3hRaPHGtmNGr9BHRZ0+bB1B5wU+3F7xjl8V/bt+Ts/T+T3+v1LbMFqDF6oxcOTw2SqZfGzFwjuUJ2zwvLXFjg17SQdnNc07HoQR3hBIIiICIiAiIgKO1FF2+n8nF4/wCSuerK3x/m5fFt2H7ZvuNuX5W+47u8KRUVquWlBpfMSZKF9jHMpzOswxnZz4gw87R1HUt3HePpCCA4M47yTwu05T87/P7saob5y9t2vlDqftnP2km/zb87u7vV0WfeD/f0tlODWk7eicbaw+lJaYdjqN1xdNDFzH0XEveSd9/u3fStBQEREBERARF1clk6uHoy3Ls7K1aIAukeeg3OwH4ySQAB1JIA6lTETM4gdpFn9riNk7jicRhGtgI3bPlJjC53X1RNa5wHr9ItP4l1PPLV33jCf1yu1Ux+qqI+LajRb0xnost8Prgrq3i/wkbJpHLZBljEdpYt6frTvbDlod2P5XRtO0kkbomvjB36823pEL84vA60P5+eEzoLFyMLooMiL8wI6clcGch34j2Yb/zbL9b/ADy1f94wn9d8VkWlOCMei+Omb4pYmnjKuZytV0EtFheKjZHua6Wdrdtw9/KN+u3pPOxLujVRzR4suqXuD1QizTzy1f8AeMJ/XfFPPLV/3jCf13xTVRzR4nVL3BpazHjP4QGnuDVepUnjsZ7VeS9DFaYxTe1vXnnoNmjflZuDu89Bsdtz0XWzGqNc3cVbr0J8LjbksTmRXBFJKYHEbB4Y47OI79j0+dULgzw/h4O5LIZ7LYyfWGrMmS7I6sluCxfkH8Rsb2sEcQ2GzI3b7ADZ3K1NVndVE/H7sZ0W9EZ6Ln0lwA1DxX1DU1txylr5GzXd22J0NVdz4vE/M6Ud1ibboXHdo67bjl5fRgAaAAAAOgAXSw2bpagoMuUJxPA4lu/KWua4d7XNIBa4etpAI9YXeVMxNM4lq7t4iIoBERAREQEVLy/EmOOeSthKD81NGS18/aiGqxwOxaZSCXEHoeRrtiCDsRsop+tNWOO7KmGiG/yTJK/+nYf9Ffqpj9UxHvn6b2zTo92uMxS0lVDi5w3x/F7hrqHR+U6VMtVdD2m25ikBDo5APWWPaxwHztUH55av+8YT+u+KeeWr/vGE/rvimqjmjxZ9UvcH4qycL9RR8TXaBNBx1MMn5J8VHXeftOz232+Tv913bde5fuHwc4Z0ODnDHTujca4yVsTVERlI27WUkvlk29XNI57tvVzbLEn8G4X8emcXfEMYNVNq+L9mHv8AFi/kMfblvLzdr2Z5N+bl2+536rVPPLV
/3jCf13xTVRzR4nVL3BpaLNPPLV/3jCf13xX2ZrTVjTu6phpRv8kPmZ/Tsf8AomqjmjxOqXuDSUVKxXEqN08dfOY9+FlkcGMsCUT1HOPQN7UAFp/32tB3ABJOyuqrqoqo3+vi1q6KqJxVGBERYMBdLNS3YMPfkxsLLGRZBI6tDIdmvlDTyNPUdC7Yd4+kLuqO1FF2+n8nF4/5K56srfH+bl8W3Yftm+425flb7ju7wgieGV/VOU0FhbetsbVw+q5YA7I0aTg6GGXc+i0h7wRtt9276VZ1S+DOO8k8LtOU/O/z+7GqG+cvbdr5Q6n7Zz9pJv8ANvzu7u9XRAREQEREBZVeyZ1dmX5CQ8+Ppyvix8W+7dx6L59v4zjzNafUzu25376TmJpK+IvSw7maOB7mbfxg0kLKdKMZHpfENj25BTh2IG2/oBXR+W1NUb52ff173S0GiKqpqnufGqdVYrRWDsZfN3WUMdByh8zwXdXODWtDWglziSAGgEkkABdjCZmrqHE1slSMxq2Wc8Zngkgft+OORrXN+ggFZB4UmnYc9h9CtluZCoPO3F1z4hdlr7tlssaXeg4ek3bdru9p6jYqt8bbGTvZfM4vSV3U5yOlcDHZt2otROo1KpLZXRPe3ke61M4RuLg8cpDRu4EkrUdSq5NMzsbydWYpuqX6cNr/AEyyiMk6t2b+lcvMYfzbcvygRtvv+LZRmL4paYzNbS1ink+2h1Pz+SHeLyt8Z5Y3Su6Fo5NmMcfT5e7bv6LK+Hmds6o4v4PMXC11zJcM6VublGwL5LBc7YfS5UHSOFbqTQHgyY5167j2T+NB1nG2DBO0DHzkhsg6t322JHXYnYg9UY62e71tj7vXiLzFPqu/pevr3QtzN6k1BHTzeNxuCt1L4iyUstqITeKvtkdA3ldvIfSDHHrvsoWDXGt9KYrVekLmXt46dup8PiI8lZyXlKxi611jHSEWZI2l5HXlL2+iZNtzsCpwnXRG+HqLU2qsXo7Gsv5e14pUfYhqtk7N795ZZGxxt2aCer3NG/cN9zsFKrzxx20g3hvwhtzVcnqLUpOaw8zauUyL7spcy9EeWIydQXnptvtuBtsrh4PuZyWo8fqbI6it226sOVlr5PDTTl0OKLOkMELNy3kMRY/tB+2F5cT3AQyi5PT6Ew0x2TdpO95biJbXbsMhFzbMfB65CP40Y9IHvIBb6xtrAIIBB3BWW24o7FWaKUAxPY5rwe7Yjqrlw7sS3OH+mJ5yXTy4uq+QnvLjE0n+lbf67XSnfE4+E7vDEuZp1ERVFUd6woiKlyxERAVH19mpZ70Gn6sr4e1i8ZuyxP5Xsh5i1jAR1Bkc1w3H3LH9xIIvCym490uu9Uuf8qOavCzf72K8bh+bme/+lXW9kVV98R9Yj6tvRaIuXYiXNFEyCJkUTGxxsaGtYwbBoHcAPUFSq/GrRtvGZXIwZZ82Oxk0dezcZSsGHnfL2TRG8M5ZfT9EmMuA9ZCteZxzMxiblGSaxXZYidE6WpO6GZgI23ZI0hzXD1EEELydp3FT6a8CzS2ZxmcztO+LOMuCSLLTtDTJcigfEAH7CIse77X8nc77brV3u5crmmdnCZ8Hr1F5i1j5avwceM5Fq3UFGzpSZ1nEV6mQfHXgdHjoZiDGOj2ucOrH7t6kgAucT1+L2qc5qqLO5HS1rUNfK6c07Dkr1itnjQx1OV8DrDNoAx/jLy3q5r9m8oaOYElRhjN7Gdnr1D0gNVYs6rdpoWv9NNpDImr2b/8A0cyGMP5tuX5QI233/FsvnTepsbq7FjI4mwbVPtpYO0Mb4/TjkdG8bOAPRzXDfbY7bjcLHtLXZNV8cqNq098U2T4c1pZXV3mNzTJZcXFjgd2kF3QjqOipGOkNjwfcJlslqLV1/UXlHI4jE1aOobUE2SsHITxwRyPa/d/K1g3c4nlYxx7ghrZ+f0erEXlzL4jW+Cz2ieF9fOZLOTjCT5fIXrGpLGPsZCyJmNLG2hHLJyR
8xIjby7tIJPokHt5DE8Qce/hvpvUepruPkv6ntw9ti8q+ad+PFOWRsM0/ZxmR4LXt5yzcbNcDzDcMJ1vselpYmTxvjkY2SN4LXMcNw4HvBCl9A5mWtfm0/ZkfK1kPjNGWV/M90QcGvjJPU8hczYn1PaO8EqAxWOZh8ZVoxS2J468bYmy253TSvAG275Hkuc75ySSV9qzzFrnSbmdHSWZ4X7fezVlcfzczGf0LZsfmmaJ3TEz4RlXpVEV2pmd8NWREVbzoorVctKDS+YkyUL7GOZTmdZhjOznxBh52jqOpbuO8fSFKrpZqW7Bh78mNhZYyLIJHVoZDs18oaeRp6joXbDvH0hBS/B/v6WynBrSdvRONtYfSktMOx1G64umhi5j6LiXvJO+/3bvpWgqscMr+qcpoLC29bY2rh9VywB2Ro0nB0MMu59FpD3gjbb7t30qzoCIiAiIg+CA4EEbg9CCsjxtJ+AmsYGbcSUDywF53Mtc/tTx+b0D/ALTHLXVC6m0tW1LDCXvdVvVyXVrkXy4iduYf7THbDmaeh2B6FrSLaZiYmirdPzbWj3tTXmd0s21FpXF6sioxZWr40yjdgyNcdo9nJYheHxv9EjfZwB2O4PrBUFqng9o/WubGWzWGZdvGFteR3bysZYiaSWsmja4MmaCTsJA4Dcq3WsZqTEOLLGHOVYB0s4uRmzuvrjkcHN6eoF30rqeP5Ef/AKazXuw/WUdXud234w7ets1xnMIDG8JdKYe7py5TxIht6drOp4ycWJS+CBzS0xEl272bHo1/MB0I2ICi5vB+0DNiYMZ5AEVGvblv14YLc8QrzyAB74i14Me4HQN2A3OwG5Vy8oZD8Gs37qP1k8oZD8Gs37qP1k6vd4HTs8YViPgnomHRz9Lx4GJmGfZFx0bZpRMbAIIm7bm7XtNwPT5ubptvsuKjwK0Jj8fmaMWnoX1MzCyHIxTyyzC2GFzmOk53HmkBcT2h9Pu9LoNrZ5QyH4NZv3UfrJ5QyH4NZv3UfrJ1e7wOnZ4x5KjjuBOiMVi7GPgxErq1ietZl7fIWZnufXk7SD03yF2zHdQ3fbvG2xKtFTSmKoalyGoK9QQ5bIQRV7c7HuAmZFzdnzM35SW87hzbc2x232AC5xfyBIHm3mhv89YfrLsVqWo8q8Mq4CSi099nKzMjY36GMc97j+IhoPzj1Or3O/Z8YRrbNMZzDrZmKfIwsxNNxF7I7143NPWJh6Pl+hjST9PKPWFrFSrFRqw1oGCOGFjY2MH3LQNgP0BQ2l9JRadbJPNN4/lJgBNccwMJH8RjevIwHubufnJJ6qfU1TEUxRT3ef8ATjaTf11WzdAiIqmoIiICzrW2PdiNUxZUDajkYmVJnb7NjnYT2ZP++HFu/wA7GDvctFXBeo18nUlq24I7NaVpZJFK0Oa4fMQrKKopzE7p2T681tq5NquKoZwRuNlVBws0u3QVXRYxn/dqqIRDR8Yl9HspWyx+nzc52exp6u67bHcdFcMjpDO4Fx8nNGfoD5EUkojtxj+LzO2ZJ9Li09BuXHcqMfbycR2fprMh2+2wgY7+lryE1Fc/oxPx+m936b9m5Gc+KIscONO2qmqasuO5oNUcwy7O3kHjPNCIT15t2fa2hvocvdv39VEZrgXobUV9lvI4FlmVtaOm9psTNjnijG0bZow8Mm5R3GQOIVs8oZD8Gs37qP1k8oZD8Gs37qP1k6vd4JmuzO+YQlLhXpfH5DTl+vi+zu6eqmljbHbymSGAt5OzLi7eRu3cH82x6jr1UFlfB20DmqWIqWcRaEGJlsz0m18rcgMEliR0kzgWStJLnOd1JOwOw2HRTx4h1BrMaTOMynnGaHlQY7xb7Z4r2nZ9r37cvP6P0qa8oZD8Gs37qP1k6vd4HTszvmPJT7XATQ97A0cPZxM9irRnfZqyzZK0+1BI8bOLLBl7VoIA3Aft07lLY/hbpfF19PQVcWIotPzy2saBPKexlka9sjyS7d5cJZN+fm6
uJ7+qmvKGQ/BrN+6j9ZfZlzJyHZmmcy52+2xgY3+lzwE6vd4fI6dmNuYd5djRePdmNVyZPYmljYn1onb7tkneR2hH+41vLv8API8d7Vx47SOezzh5QYNP0D8uNkrZbcg/i7t3ZH9ILz37cp2K0KhQrYunDUqQMrVom8scUbdmtH4gsop1MTmc1Tw24+m71lo6VpNNVPQodhERUuQKO1FF2+n8nF4/5K56srfH+bl8W3Yftm+425flb7ju7wpFRWq5aUGl8xJkoX2McynM6zDGdnPiDDztHUdS3cd4+kIIDgzjvJPC7TlPzv8AP7saob5y9t2vlDqftnP2km/zb87u7vV0WfeD/f0tlODWk7eicbaw+lJaYdjqN1xdNDFzH0XEveSd9/u3fStBQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREHnaX/wBYRB/8sXf/AJReiV52l/8AWEQf/LF3/wCUXolAREQEREBERAXSzUt2DD35MbCyxkWQSOrQyHZr5Q08jT1HQu2HePpC7qjtRRdvp/JxeP8Akrnqyt8f5uXxbdh+2b7jbl+VvuO7vCCJ4ZX9U5TQWFt62xtXD6rlgDsjRpODoYZdz6LSHvBG233bvpVnVL4M47yTwu05T87/AD+7GqG+cvbdr5Q6n7Zz9pJv82/O7u71dEBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERB5wyFyCh+yE4wWZmVzd4bvr1u1cG9vKMiXljN/lODGudsOuwJ7l6PWa8duCOO42aXhqvtSYbUeMlFzCZ+r0sY603Yte0jYlpIAc3fqAO4hpFe4Acbsjqy5kdBa9rx4bijp9oF+o3pFkIe5t2t/Gjf0JA+ST3DcBBtaIiAiIgIiICi9UuqM0zl3ZCvLboCnMbFeHfnlj5DzNbsQdyNwOo7+9d63bgoVJrVqaOtWgY6SWaVwayNgG5c4noAACSSsywmZtcZM7pDWui9cs+x5Wbb8ao1aRD8nOCYmhz5BuI2kPOwaDu0EE7gtCU4CWtM3uDmk7GjcVcwmlpaTX47H3yTPDCSSA4l7yT6/lO6EdVf18ABoAA2A7gF8oCIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAsg8ILgU/ilTx2f03f83OI+nnGxgs7GNuV3rrzdPShf1BBB23J2ILmu19VriTp65qzQWdw2P1Jb0hdvVHwQ5yjy9tTc4bB7eb9B2LXbE8r2O5XNDx5xL/AGSOfQGiamKl0k6pxbr2vE8zhsix7adHsy0vlDwQZBK0/aw13TcucSGtEvtrB5mrqLC4/LUZO2o368dqCQfdRvaHNP5wQvxy4ueBBr/hjJZsUZMbrLExbkWsJaZJNy+rmrk9pzfPyhwHzr9CPAZ4lw5jwb9N0c7bjx+XwZkxE8N1wheGxO3i2a7Y7CJ0Q3+cFWauvllOJek0UT52YP2zj/emfFPOzB+2cf70z4pq6+WTEpZebPDN8LG34MNTRxxeLpZi9lrz3Wattzm704g3tQxzT6EjjIwNeQ4DZ3oO9W++dmD9s4/3pnxX5ieHbW1Rx68JqTCaWxNrK0sHRgoQzx7Nque5vbveZXEMb1lDdy4A8gTV18smJe5tEcU8rx7yOjNT8P8AJ4eXhjNXs+X4bsZdkW2g0BtR8fdE5vOHE8xBA3HM1zC/YMViaWCxtbHY2nBj6FWMRQVasYjiiYBsGtaAAAB6gvD3gM+CtrjgrrLziy2usVRpXa5jvaWxlhts2/Rd2YmcDyMdG93M10fOduZvMA9wPuxYTE07JhAiIoBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERBV9XarlxUjMdjmtkyszOfnkbzRVmb7c7wCCd9jytB3cQeoAJFFm05UyFjxnK8+bt7k9tkSJeUn+IzbkjH4mNA7/AJyvnA2zmI7WZeeaXJzvsB3/AMLflhb+aNrB9O59ak1dcrqs1TbonGN
k+3j8HodHsU26YmY2uj5BxgAHk6psP/gN+CeQcZ7OqfUN+C7yh9XasxuiNP28zlZXRU6/KCI2GSSR7nBrGMaOrnuc5rQB3khUayvjLbnEbZdryDjPZ1T6hvwTyDjPZ1T6hvwWeP8ACEw1GhnZcxgdQYC5icXLmXY7J1I2T2asfR74S2RzHbEtBBcCC4bgbqS05xoxGf1A3EWMZlsBLNRfk6c+ZrtgiuVmFofIwh5LeXnaS2QMcA4EtTWV8ZYdOie9cfIOM9nVPqG/BPIOM9nVPqG/BYpf8IOfU+qeHkGm8dnMfg8zmzA7K3sexlTJVhXnd9qc4l4Bc1jgS1hcASNxut5TWV80ppqpqzhGzaZw9lpbLiqUgI29Kuw/+S7eMvZDSJEmPfPfx7du0xc0vMeUd5gc7q13zNJ5Dtt6G/MOdFnTerjZM5jhO715ort0XIxVDRMbkq2YoQXacomrTND2PAI3H4weoI7iD1BBB6rtKgcOLRp5zN4gECAiPIQsG/omQvbIPzuYHdPW8q/rK5TFNWzdv8Xm7lGrrmngIiKtUIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgxnSMDqOBr49+4lx5fReCNjvE4x7/QeUEfOCD601JrPT+jYYZc/ncbg4p3FsT8lcjrtkI6kNLyNz9CtOsNOzYzI2M5QgdYr2AHX60LS6TmaA0TMaPlHlADmjqQ1pHUbOhYJqeXrRzxPhuQO6tkaQ9p+gqy/E1VTdjdPz4PSWLsXaIxvVMcb+HJYXjX+l+UEAu8s1tgT3D5f4j+hUvi7Jpzj5oa1pzSWp9NahzleeDKQ41uQhsR2BBKx5jlaxzj2bh6JJGwLhuti8Qq/yaL/wBfeOrDC7mjhjY7u3a0ArWWzTNUYnc872eFcmZ4ca9q4ng5i9CZy7g5qFJ1exTdPakkY4Oj5ovRazcR7Fzhv6wNlZte8LcxrHUekI2wmDHRaZy+IvXGyM3rSWYa8cfo827urH9W7gcvUjcLZkRGqpxh5xx+E1/PBwrxud0dDh6Oi70cmQzLMrWfWkhhpzQiaNnMHhp5gSHAFu/cRuRrH2ceHB/wD3A0t/9arfrq7EBwII3B7wVweT6p/92h+rCFNE07pVD7OXDj/+QNLf/Wq366uwII3HULg8Qq/yaH/wBcUt2Se4MdjYm3ss4DauHbCIHuklP3DB8+2522aHHYHOmia5xTDKZ6MZrlLaBrus6zzNwA9lXqQVeYjoXlz5HD8zTGf+ZaIojS+nYtM4ltRknbzOe6axYLdjNK47udt12HqA3OwAHqUur7lUTVs3RiPB5u9XrLk1CIiqUiIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICreX4eYHM25LklN1W7J1fZozPrSSHu3eYyOf/m3VkRZ011UTmmcJiZpnMSpJ4T44nplc00fMLzvgvj7E+P9rZv34/BXdFZr7nFbrrnNKkfYnx/tbN+/H4LIeA9G3xA1LxVpZfNZSSDTuqJ8VQEVnkLYGsYQHED0juT1XpVed/BR/dtx7/n1a/u41OvucTXXOaWo/Ynx/tbN+/H4J9ifH+1s378fgruia+5xNdc5pUuPhPiA77fdy9pncWSZGVoP08harNh8Hj9P1PFsdThpwb7lkTduY/OT3k/jPVd5FhVdrrjFU7GFVdVX6pyIiKpgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgLzv4KP7tuPf8+rX93GvRC87+Cj+7bj3/Pq1/dxoPRCIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiIK7q7iPpPh/wCKedGqMLpvxvn8X8r5CGr23Jy8/J2jhzcvM3fbu5h84XmjwYuMugMRrHjZJf1zpulHf1jauVH2MvXjFmDsmHtYyXjnZ6J9IbjoevRWfw9eBY4z8Db1mjX7bUem+fJ0C0em9gH2+Iev0mDcAdS6NgX
5neCrwSl498acJpp7H+SI3eO5WVnTkqxkF43HUFxLYwfUXgoP3DREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREHWyGRrYmlLbuTsr1ohu+SQ7Aer/r029ZKpNviRkbjv8AQ2EBrnus5WZ1cu/GIg1z/wDx8h/EozK5N2q85PYeS7G0J3wU4g7dr3t9GSYj1nmD2t+YDcfKKjNQ6sxWlDixlbXipyd6PG1Ptb39pYkBLGeiDtvynqdh06lXTNNqejMZn5f3/wCYdaxolM09O4mzrLV2/SDCj65fHnlq/wC8YT+u+K+iLHXzyx4Nzqtnlffzy1f94wn9d8U88tX/AHjCf13xX0UVX1Vi7Wp7mnYrXNmKdWK5PW7N45IZHPax3NtyncxvGwO426jqE188seB1WzHcmPPLV/3jCf13xTzy1f8AeMJ/XfFfRRWntVYvVceQfirXjTaF2bHWT2b2dnYidyyM9IDfY+sbg+olNfPLHgdVs8Ex546uP/sMJ/XfFZXwZ4Nw8B83qrKaWoY2KxqKz287bD3uZWYHOc2GANa3kjBeeh3PduTsNtRRNfPLHgdVs8r7+eWr/vGE/rvinnlq/wC8YT+u+Kh4dVYuxqi1p2O1zZmrUjvTVuzeOWGRz2Mdzbcp3dG8bA7jbqOoUqmvnljwOq2Z7n388tX/AHjCf13xTzy1f94wn9d8V9ETXzyx4HVbPK+/nlq/7xhP674p55av+8YT+u+Kh83qrF6cuYirkbXi8+Wt+I0mdm93azcj5OXdoIb6Mbzu7YdO/chSqa+eWPA6rZ4OaPW2q4ju+hh7I/iNnli3/Pyu/wCinsBr+rlbcdG9Wlw+Rk6Rw2HB0c523IikHRx2B9E7O2BPLsN1W1xWqsV2B0MzeZjtj0JBBB3BBHUEEAgjqCAR1TW01bK6Y98b/t63q69Dt1R+XZLU0VV0DqCbJ1beOvSGXI41zY3yuI5p4nDeOU7dxOzmnoPSY4gAEK1KK6ZonEuHVTNFU0z3CIiwYiIiAiIgIiICIiAiIgIiICIiAuOdzmQSOY3neGktb8526BciIMX0SQdHYNwdz89KF5fttzEsBLvzkk/nWfeEL+28MP5747+zMtOqUHaevXMHIC0VXufVLj+2VnHdhH+7v2Z/Gz8YURr3hzp/ibia+M1HSkvU69llyJsVqau5kzQ4NeHxPa7cBzvX61nf/wAtU8Zz4vTxOstxNKr8dsuK+Dw2Ggdmn5fNZFtSjWwWQ8QlmeI3yOD7GxMcYYxznFvpdBtusYp6n1hHoC/gr+dydHIY7iJQwTbkOSNmzHVmdXLozYLGmXbtnjme3qNgQdltVfwddA1sXPj24m3JXlnitc0+XuyyxSxh4Y+KR0xfE4CR43YW7g7Hdd7H8DdD4qB8NPBtrxPv1co9jLMwDrVcgxTEc/V+4BcT8sj0+Za7Cqiuqc+vkxnLabydTM8Y8fX1tq6OrpfE18niWnNTPdBPJXmkcXvcS6VnNC3Zjy5oBd069ObT+npuLXFWzcu5/NYK3Z0Phbb58FedTcZZH2HFxLflAEnZp9Hqdwem29z6AwNm3qO1JQ5p9RVmU8o/tpB4xExj2NbtzbN2bI8bt2PXv6BVzNeD7oLUD6772De98FGHGsfFesxO8ViBDIXFkgLmjmO4O/N0332CnKJtT3et7HdA601Jxlfw801mdR38VXnxGRyVzI4ec1J8s+tc8ViDZGbFjSz7c4M233HqUbppz6Gnb2jaNrUmRzmQ1zmo6gx+YOPmssgO8j7NoNLg0AtceUczncvQ9V6F1Hwe0dqvE4bG5DBwiphthjhUkkqvpjlDdonxOa5oIABAOx2G/co8cANBswcOJjwZgpwXZcjCYLtiOaKxINpHslbIJG8w6FocAfmRGqr4+tiK8GzO5zL6HytTUFmS5kMNnL2K7aex4zIWRS7ND5eVvaFoPLzloLtgSAV2fCA1Hk8JpjBUMVkH4aXP52lhZcpDt2l
OKZx53sJ6B5DeQE9xeD3hd6tw1s6FqGpw4OE01UszOs3YL9GxcZJKWsaHRtbYjEZ2Z6XfzHY9+5PYdojK6ww2SwvEGXAajwttjW+KUcZNV9IO33Ln2JO4gEFvKQRvuoWYq6HQ73nniDPkeB+ruIs+BzOTv3fN3DRRZDOXnWZKomvzQuf2r2uIa0Oc4FwcGk77EeirHk4uKPB7TmqdTiQvxVPBWpXVMjqSbNyG20AxTs7StGWNb6fM0O5SNug2Wv4LgXofTrcq2pgxK3K1G0LwvWprfjMDS4tY/tnu325j179thvsBt2NH8G9H6Ebdbh8R2bbkAqzttWprQdCN/tQ7Z79mdT6I2H4lKuLVWd+Pp5M+zmIyHCXg/qHW+M1Zn9T5uLASWQ/JZB1mpJKWB/jDIT6DA3qQGbN5dwQe9QGeyeX4IZjSVvG6mzOrhmsRk57tTL3XWo55K9I2WTxA/tQL2hhDNm7SgbbgLWdK8DdEaLtTz4jBiAzV5KjopbU08LYXkF8TI5HuYxh2G7WgDoFy6O4K6L0Dkn38JhG1rboDVbLNYmsdlCTuYohK9wjZuB6LNh0HRQy1dWzGz19WG1tNWhNwJ1Zf1ZmtRZLOZiG3aFu6X0+eWhYk3hh+TEG9Wjl26E77nu6vD08V+KGDx+ucXd8Xv277pd59UStpxRMsFj6rscKpjGzGlm/Pz7+lz79FtmE8Hnh9pzMUcnjdP+KWqFl1yoGXbBirSuDmuMcRk5GAh7t2hob3dOg27cXA3Q9fVjtSQ4NsGVdaF5zorMzIXWO/tjAH9kX79ebl3367qWMWqvU+72L2iob8ZxPL3cmpNJBm/QO09aJA9+V3qNnbVhFl8clkMaJXxMLGOft1LWkkgb77Ak7fOVDaic9zsaPe6PiNIxnyZcS4ybfOyZvJv/45P6VpaovDXHutT5DUDx9puNjr0uu4dXZue1H4nue7b52tYfX0vS3LuyYp74iPXw3fB5zSaoquzMCIioawiIgIiICIiAiIgIiICIiAiIgIiIIfUmmKupa0bZS6C3AS6vbiA7SFx79t+8HbYtPQ/mBFFt4nUuGdyT4jyzGP/esU9jeb8ZileC36Guf9K1JFbFezo1RmPXD/AMbFq/Xa2UzsZEb+RB28281+asP1l8eUMh+DWb91H6y15FPStcnm2evXOEMh8oZD8Gs37qP1k8oZD8Gs37qP1lryJ0rXJ5nXrnCGQ+UMh+DWb91H6yeUMh+DWb91H6y15E6Vrk8zr1zhDIfKGQ/BrN+6j9ZQunuIdTVlrL1sRjMpfnxFt1C+yKt1rztAJjdue8AhbwvO/go/u249/wA+rX93GnStcnmdeucIWryhkPwazfuo/WTyhkPwazfuo/WWvInStcnmdeucIZD5QyH4NZv3UfrJ5QyH4NZv3UfrLXkTpWuTzOvXOEMh8oZD8Gs37qP1k8oZD8Gs37qP1lryJ0rXJ5nXrnCGSR2MtOdodMZh7zv0fHFGP0vkAUzitC5HMvbJn+yqUAQTjK7+0dN/szP225fnY3v22Li0lp0JE1lNO2inE8d6uvS7tcY3PhrWsaGtAa0DYADYAL5RFS0hERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAXnfwUf3bce/59Wv7uNeiF538FH923Hv8An1a/u40HohERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBed/BR/dtx7/n1a/u41x+Gtx24geD1ozCan0biMNlMY60+rlXZWCaUwFwaYHN7OVmzSRIHE79SwDbfr4U4OeHNxN01rHP19Pac09lcrrbOOvPqy1rBPjkwbGxkW042ZzBvR25PX0hvuA/XdERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBcVm1DSgfNYmjghZ1dJK4Na36SVyqncXGNk0HcY9oc11ioC0jcEeMxdFZbpiuuKZ75RM4jKb87
cH7Zx/vUfxTztwftnH+9R/FZ95Axns6p9Q34J5Axns6p9Q34LQ67Y5Z8nnO26P258f6aD524P2zj/AHqP4p524P2zj/eo/is+8gYz2dU+ob8E8gYz2dU+ob8E67Y5Z8jtuj9ufH+k1xOxekOKvD/PaSy+WxzqGWqvrvd4zG4xuPVkjQT8pjg1w/G0Lwh4Bng3TaN41Z7UmuBBj2aWfJTxhsyBkVyy7mYZ4S7btI2sB2cBsTI0g7tK9q+QMZ7OqfUN+CeQMZ7OqfUN+Cddscs+R23R+3Pj/TQfO3B+2cf71H8U87cH7Zx/vUfxWfeQMZ7OqfUN+CeQMZ7OqfUN+Cddscs+R23R+3Pj/TQfO3B+2cf71H8U87cH7Zx/vUfxWfeQMZ7OqfUN+CeQMZ7OqfUN+Cddscs+R23R+3Pj/TRq2pMTcnZDXylKeZ52bHHYY5zvoAKkVjsmLpU9R6XkgpwQSeVGDmjia07dlJ6wFsS3IqouUU3KN08XZ0XSI0q3rIjAiIobYiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICp/Fn9wtv8A4mp/iolcFT+LP7hbf/E1P8VErrP+Wn3wwr/TKHWR6C4u3dT8YNS4KxGxmAcyRuBsAAGw+pIIb3X17SyNA6nowlaDre1maejs1Np2l5QzzKcpoVS9jBJPynswXPIaBzbb7nuWF0uAWreH9Dh1kcTqLI6nvaaux9piJ2U4YxBY9C8WSCNj3HZ7njne7ctHe7Yry1EUzE5l89s00TTV05jM7I9e/Ee7L6an436o05wk4m5yGx4/m8XrGXC4eHsI9zH4xAxkIAb6R5Xv6kFx+fuVoyHHhtrjLw00/ipO0weo8XLfsTFgI3lidJTHN9yXCvY6b9fzKCxfCnVLs9Xr28V2WKfxIu6mnnNiJw8UbA413codueeXk9HbmG25ACjsR4Pef0bhrNmjEMnk6OsaN7FwCVjCzEV5eSOEOcQAWwz2TsT69gNzsbsW/XrubkxY3TjP32eW9o8nhC4CO5znFZ06d8cFDzoFIHGdsZOy+Xzc/J2nodpycm/3W3VdvD8ccRm8xma9fEZsYrEzWq9nPvqN8QbJXB7ZocHl/olpG5YASNgSsl4Z8BhoqSjprM8H8DqNlS64N1k+Wr9trGQvZLJG4GbtmtIby7EEt+UrJDo3VcPGSTNYXRrtNYp8t1+ZccvFJSz7DG5td3i4J5JnP5HOeWtIHMCX+vGaaNsQqqtWImYpnu35j77/AFhedKcZq+r9MXc9U0pqeGhDVju1u3oN58hFICWGu1kjuckDflPKQC0kDcLq0PCB087Falt5ihl9LT6erx2r1DM1QywIZNxE9gY57Xhzmlo5XE8w2IBWU1uGPEUac1pS03g7Wg8LcpVhU02/OMm/7SLHPaFWVjnCtHJDzRgbtHM4Hlb6orMcIbdGlxEvO0DW0ZpvJ6Yiggr+WasL47decyNfNLzFjHkva4O5nNPZem4F2ynoUZ3s4s2JmdvfHfHs9vv4tlxfhDYSxdz1fMYbN6TdhcazK2/LleON3YPcWsLGske55cWkANB6jlOztgZXSnGKjqXUVfB3MDntMZK3A+zSiztNsIuRs25zGWvcN28zSWO5XAHfbvXniphLXGqDWuls1Pak4kZTAVpK12++jJRNWtabIyPlpyyBgfM7d3MSSHEjo3ZbBwg0PVx+fF+fgzh+H9ytWIbkqs1SWR8rtmubF2IJDC3m9Jxae4cqiqiimJYXbNqimePv9kbs74zni1W7/r/S/wCVGf3Uq1hZPd/1/pf8qM/upVrC7Wj/APHo+Pzel/Cf+LHvkREVrsiIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgKn8Wf3C2/+Jqf4qJXBRuosBV1Ph58bcMgrzFjiYX8jwWuD2kH1bFoVlqqKa6ap3RMImMxMKSikvsU0Pa+b9+PwT7FND2vm/fj8Fy+z6f3I8JeS7Fu88eaNRSX2KaHtfN+/H4
J9imh7Xzfvx+Cdn0/uR4Sdi3eePNGopL7FND2vm/fj8E+xTQ9r5v34/BOz6f3I8JOxbvPHmjVwXaVfJVJatuCK1WlaWSQzMD2Pae8Fp6EfSpn7FND2vm/fj8E+xTQ9r5v34/BOz6f3PKTsW7zx5qnpvRWntGxzR4DA4zBsmPNK3G0464kPzuDGjf86mlJfYpoe18378fgn2KaHtfN+/H4KeoRO+55Smfwa9M5muPNXbv+v9L/AJUZ/dSrWFT6PDDG0slTum9k7UlSXtomWbZewO2I3I269HFXBb9NEWrdNuJzjLv6Fo86LZ1dU52iIiN4REQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREH/2Q==",
      "text/plain": [
       "<IPython.core.display.Image object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from IPython.display import Image, display\n",
    "\n",
    "try:\n",
    "    display(Image(bound_llm.get_graph().draw_mermaid_png()))\n",
    "except Exception:\n",
    "    pass"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "5d072c9c-9404-4338-88c6-b3e136969aca",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "[{'text': 'Here is a summary of the key points from the conversation:', 'type': 'text'}, {'id': 'toolu_01A5ZtzQJtDbBELQjon2nsz5', 'input': {'insightful_quotes': [{'quote': \"When it's done right, a beef can push the genre forward and make artists level up.\", 'speaker': 'Xu', 'analysis': 'This suggests that a healthy rivalry between artists can motivate them to create better and more competitive work, which can ultimately benefit the music genre as a whole.'}, {'quote': \"Honestly, I think it'll stay a hot topic for the fans, but unless someone drops a straight-up diss track, it's not gonna escalate.\", 'speaker': 'Laura', 'analysis': 'Laura believes that while the Drake vs. Kendrick beef is a topic of interest for fans, it is unlikely to significantly escalate unless one of the artists directly confronts the other with a diss track.'}], 'key_moments': [{'topic': 'Drake vs. Kendrick beef', 'happy_moments': [{'quote': \"Definitely was Kendrick's 'Control' verse that kicked it off.\", 'description': \"The group agrees that Kendrick's 'Control' verse was the catalyst that started the Drake vs. Kendrick beef.\", 'expressed_preference': {'content': \"The Drake vs. Kendrick beef started with Kendrick's 'Control' verse\", 'sources': \"Pete's statement\"}}, {'quote': \"When it's done right, a beef can push the genre forward and make artists level up.\", 'description': 'Xu believes that a healthy rivalry between artists can motivate them to create better and more competitive work, which can ultimately benefit the music genre.', 'expressed_preference': {'content': 'Artist beefs can be good for the genre if done right', 'sources': \"Xu's statement\"}}], 'tense_moments': [{'quote': 'eh', 'description': 'Laura seemed uncertain or unenthused about the idea that the Drake vs. Kendrick beef could be good for hip-hop.', 'expressed_preference': {'content': 'Laura is not convinced that the Drake vs. 
Kendrick beef is good for hip-hop', 'sources': \"Laura's response\"}}], 'sad_moments': [], 'background_info': [{'factoid': {'content': 'Drake never went after Kendrick directly, just some subtle jabs here and there', 'sources': \"Laura's statement\"}, 'professions': [], 'why': 'Provides context on how the beef unfolded between the two artists'}, {'factoid': {'content': \"Drake knows how to make a hit that gets everyone hyped, that's his thing\", 'sources': \"Laura's statement\"}, 'professions': [], 'why': \"Gives background on Drake's musical style and appeal\"}, {'factoid': {'content': 'Kendrick is a beast on the mic when it comes to straight-up bars', 'sources': \"Pete's statement\"}, 'professions': [], 'why': \"Provides background on Kendrick's lyrical abilities\"}], 'moments_summary': \"The group discussed the ongoing Drake vs. Kendrick beef, with some believing it could be good for hip-hop if done right by pushing the artists to create better music, while others were more skeptical. They agreed the beef started with Kendrick's 'Control' verse, and provided background on the artists' different musical styles and strengths.\"}]}, 'name': 'TranscriptSummary', 'type': 'tool_use'}]\n",
      "Tool Calls:\n",
      "  TranscriptSummary (toolu_014PZKzxwNVqsjQmUq88acrU)\n",
      " Call ID: toolu_014PZKzxwNVqsjQmUq88acrU\n",
      "  Args:\n",
      "    insightful_quotes: [{'quote': {'sources': \"Xu's statement\", 'content': \"When it's done right, a beef can push the genre forward and make artists level up.\"}, 'speaker': 'Xu', 'analysis': 'This suggests that a healthy rivalry between artists can motivate them to create better and more competitive work, which can ultimately benefit the music genre as a whole.'}, {'quote': {'sources': \"Laura's statement\", 'content': \"Honestly, I think it'll stay a hot topic for the fans, but unless someone drops a straight-up diss track, it's not gonna escalate.\"}, 'speaker': 'Laura', 'analysis': 'Laura believes that while the Drake vs. Kendrick beef is a topic of interest for fans, it is unlikely to significantly escalate unless one of the artists directly confronts the other with a diss track.'}]\n",
      "    key_moments: [{'topic': 'Drake vs. Kendrick beef', 'happy_moments': [{'quote': \"Definitely was Kendrick's 'Control' verse that kicked it off.\", 'description': \"The group agrees that Kendrick's 'Control' verse was the catalyst that started the Drake vs. Kendrick beef.\", 'expressed_preference': {'content': \"The Drake vs. Kendrick beef started with Kendrick's 'Control' verse\", 'sources': \"Pete's statement\"}}, {'quote': \"When it's done right, a beef can push the genre forward and make artists level up.\", 'description': 'Xu believes that a healthy rivalry between artists can motivate them to create better and more competitive work, which can ultimately benefit the music genre.', 'expressed_preference': {'content': 'Artist beefs can be good for the genre if done right', 'sources': \"Xu's statement\"}}], 'tense_moments': [{'quote': 'eh', 'description': 'Laura seemed uncertain or unenthused about the idea that the Drake vs. Kendrick beef could be good for hip-hop.', 'expressed_preference': {'content': 'Laura is not convinced that the Drake vs. Kendrick beef is good for hip-hop', 'sources': \"Laura's response\"}}], 'sad_moments': [], 'background_info': [{'factoid': {'content': 'Drake never went after Kendrick directly, just some subtle jabs here and there', 'sources': \"Laura's statement\"}, 'professions': [], 'why': 'Provides context on how the beef unfolded between the two artists'}, {'factoid': {'content': \"Drake knows how to make a hit that gets everyone hyped, that's his thing\", 'sources': \"Laura's statement\"}, 'professions': [], 'why': \"Gives background on Drake's musical style and appeal\"}, {'factoid': {'content': 'Kendrick is a beast on the mic when it comes to straight-up bars', 'sources': \"Pete's statement\"}, 'professions': [], 'why': \"Provides background on Kendrick's lyrical abilities\"}], 'moments_summary': \"The group discussed the ongoing Drake vs. 
Kendrick beef, with some believing it could be good for hip-hop if done right by pushing the artists to create better music, while others were more skeptical. They agreed the beef started with Kendrick's 'Control' verse, and provided background on the artists' different musical styles and strengths.\"}]\n",
      "    metadata: {'title': 'Conversation Summary', 'location': {'sources': 'The transcript provided', 'content': 'Virtual meeting'}, 'duration': '15 minutes'}\n",
      "    participants: [{'name': {'sources': 'The transcript', 'content': 'Pete'}, 'role': 'Participant', 'age': None, 'background_details': []}, {'name': {'sources': 'The transcript', 'content': 'Xu'}, 'role': 'Participant', 'age': None, 'background_details': []}, {'name': {'sources': 'The transcript', 'content': 'Laura'}, 'role': 'Participant', 'age': None, 'background_details': []}]\n",
      "    overall_summary: The conversation discussed the ongoing beef between rappers Drake and Kendrick Lamar, with the participants sharing their thoughts on how the rivalry has impacted the hip-hop genre. Some believed that a healthy beef can push artists to create better music and raise the level of competition, while others were more skeptical about the potential benefits. The group also provided background information on the artists' musical styles and the origins of the beef.\n",
      "    next_steps: ['Further discuss the potential impact of artist rivalries on the hip-hop genre', 'Explore how these beefs could be leveraged to drive innovation and creativity in the music industry', 'Investigate other examples of high-profile artist feuds and their long-term effects']\n",
      "    other_stuff: []\n"
     ]
    }
   ],
   "source": [
    "chain = prompt | bound_llm\n",
    "results = chain.invoke(\n",
    "    {\n",
    "        \"messages\": [\n",
    "            (\n",
    "                \"user\",\n",
    "                f\"Extract the summary from the following conversation:\\n\\n<convo>\\n{formatted}\\n</convo>\",\n",
    "            ),\n",
    "        ]\n",
    "    },\n",
    ")\n",
    "results.pretty_print()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7b0f3844-076e-4a5b-9951-89116746238f",
   "metadata": {},
   "source": [
    "#### And it works!\n",
    "\n",
    "Retries are an easy way to reduce function calling failures. While retrying may become unnecessary with more powerful LLMs, data validation is important to control how LLMs interact with the rest of your software stack.\n",
    "\n",
    "If you notice high retry rates (using an observability tool like LangSmith), you can set up a rule to send the failure cases to a dataset alongside the corrected values and then automatically program those into your prompts or schemas (or use them as few-shots to have semantically relevant demonstrations)."
   ]
  },
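  {
   "cell_type": "markdown",
   "id": "f3a9c1d2-7b4e-4d21-9c5a-1e8f0b6d2a73",
   "metadata": {},
   "source": [
    "As a minimal sketch of what step (3) of the retry loop boils down to, independent of any particular graph: validate the tool-call arguments against the schema, and on failure return the error text to feed back to the LLM. The `Person` schema and `validate_or_error` helper below are hypothetical names for illustration only; in this notebook, validation runs against the extraction schema bound to the LLM, and the error text is wrapped in a `ToolMessage` before re-prompting:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9d2e4f60-3c8a-4b15-8e7d-5a1f2c9b0e46",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pydantic import BaseModel, ValidationError\n",
    "\n",
    "\n",
    "class Person(BaseModel):\n",
    "    name: str\n",
    "    age: int\n",
    "\n",
    "\n",
    "def validate_or_error(args: dict):\n",
    "    \"\"\"Validate tool-call args; on failure, return the error text to re-prompt with.\"\"\"\n",
    "    try:\n",
    "        return Person(**args), None\n",
    "    except ValidationError as e:\n",
    "        # In the graph, this text becomes a ToolMessage so the LLM can retry.\n",
    "        return None, f\"Fix your mistakes:\\n{e}\"\n",
    "\n",
    "\n",
    "parsed, error = validate_or_error({\"name\": \"Ada\", \"age\": \"twelve\"})\n",
    "print(error)"
   ]
  },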
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0ae295b1-da58-4cc9-834b-70e1466f8695",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
