{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "92e8cf57-f30d-4417-92d6-7e240c08005d",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install --quiet neo4j langchain-community langchain-core langchain-openai langgraph"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "2be1a8a0-20a4-47b6-a3c0-79faf9917fa7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from operator import add\n",
    "\n",
    "import re\n",
    "import ast\n",
    "import getpass\n",
    "from typing import List, Dict, Literal, Annotated\n",
    "from typing_extensions import TypedDict\n",
    "from IPython.display import Image, display\n",
    "from langgraph.graph import StateGraph, START, END\n",
    "from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_community.graphs import Neo4jGraph\n",
    "from langchain_community.vectorstores import Neo4jVector\n",
    "\n",
    "from pydantic import BaseModel, Field"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "39290339-ed6c-40ab-a856-f429ea0eda32",
   "metadata": {},
   "source": [
    "# Implementing GraphReader with Neo4j and LangGraph\n",
    "\n",
    "Large Language Models (LLMs) are great at traditional NLP tasks like summarization and sentiment analysis, but stronger models also demonstrate promising reasoning abilities. LLM reasoning is often understood as the ability to tackle complex problems by formulating a plan, executing it, and assessing progress at each step. Based on this evaluation, they can adapt by revising the plan or taking alternative actions. Agents are becoming an increasingly compelling approach to answering complex questions in RAG applications.\n",
    "\n",
    "In this blog post, we’ll explore the implementation of the GraphReader agent. This agent is designed to retrieve information from a structured knowledge graph that follows a predefined schema. Unlike the typical graphs you might see in presentations, this one is closer to a document or lexical graph, containing documents, their chunks, and relevant metadata in the form of atomic facts.\n",
    "\n",
    "![image](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*aws90bkQ8PPNYFFIbXdMpA.png)\n",
    "\n",
    "The image above illustrates a knowledge graph, beginning at the top with a document node labeled Joan of Arc. This document is broken down into text chunks, represented by numbered circular nodes (0, 1, 2, 3), which are connected sequentially through NEXT relationships, indicating the order in which the chunks appear in the document. Below the text chunks, the graph further breaks down into atomic facts, where specific statements about the content are represented. Finally, at the bottom level of the graph, we see the key elements, represented as circular nodes with topics like historical icons, Dane, French nation, and France. These elements act as metadata, linking the facts to the broader themes and concepts relevant to the document.\n",
    "\n",
    "Once we have constructed the knowledge graph, we will follow the implementation provided in the GraphReader paper.\n",
    "\n",
    "![image](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*PokbiJwPnZ6Fndfo9tPwbg.png)\n",
    "\n",
    "The agent exploration process involves initializing the agent with a rational plan and selecting initial nodes to start the search in a graph. The agent explores these nodes by first gathering atomic facts, then reading relevant text chunks, and updating its notebook. The agent can decide to explore more chunks, neighboring nodes, or terminate based on gathered information. When the agent decides to terminate, the answer reasoning step is executed to generate the final answer.\n",
    "\n",
    "In this blog post, we will implement the GraphReader paper using Neo4j as the storage layer and LangChain in combination with LangGraph to define the agent and its flow."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fb835be5-c1d1-422f-8ff8-7be3f5205395",
   "metadata": {},
   "source": [
    "**Make sure to run the GraphReader import notebook first!**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "f815dc99-6b4c-4033-8f31-9b98678b3625",
   "metadata": {},
   "outputs": [],
   "source": [
    "os.environ[\"NEO4J_URI\"] = \"bolt://localhost:7687\"\n",
    "os.environ[\"NEO4J_USERNAME\"] = \"neo4j\"\n",
    "os.environ[\"NEO4J_PASSWORD\"] = \"password\"\n",
    "\n",
    "neo4j_graph = Neo4jGraph(refresh_schema=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "56d9f604-4e51-471a-b94d-b0e9c12427fa",
   "metadata": {},
   "outputs": [
    {
     "name": "stdin",
     "output_type": "stream",
     "text": [
      "OpenAI API Key: ········\n"
     ]
    }
   ],
   "source": [
    "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")\n",
    "# alternatively: \"gpt-4-0125-preview\"\n",
    "model = ChatOpenAI(model=\"gpt-4-turbo\", temperature=0.2)\n",
    "embeddings = OpenAIEmbeddings(model=\"text-embedding-3-small\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fcd65269-2823-4593-92ed-66a32730bfa9",
   "metadata": {},
   "source": [
    "# GraphReader Agent\n",
    "We’re ready to implement GraphReader, a graph-based agent system. The agent starts with a couple of predefined steps, followed by steps in which it traverses the graph autonomously, deciding for itself which steps to take next and how to traverse the graph.\n",
    "\n",
    "![image](https://miro.medium.com/v2/resize:fit:740/format:webp/1*brUZD1mkh9tDg8AwMEmxTw.png)\n",
    "\n",
    "The process begins with a rational planning stage, after which the agent makes an initial selection of nodes (key elements) to work with. Next, the agent checks atomic facts linked to the selected key elements. Since all these steps are predefined, they are visualized with solid lines.\n",
    "\n",
    "Depending on the outcome of the atomic fact check, the flow proceeds to either read relevant text chunks or explore the neighbors of the initial key elements in search of more relevant information. Here, the next step is conditional and based on the results of an LLM and is, therefore, visualized with a dotted line.\n",
    "\n",
    "In the chunk check stage, the LLM reads and evaluates whether the information gathered from the current text chunk is sufficient. Based on this evaluation, the LLM has a few options. It can decide to read additional text chunks if the information seems incomplete or unclear. Alternatively, the LLM may choose to explore neighboring key elements, looking for more context or related information that the initial selection might not have captured. If, however, the LLM determines that enough relevant information has been gathered, it will proceed directly to the answer reasoning step. At this point, the LLM generates the final answer based on the collected information.\n",
    "\n",
    "Throughout this process, the agent dynamically navigates the flow based on the outcomes of the conditional checks, making decisions on whether to repeat steps or continue forward depending on the specific situation. This provides flexibility in handling different inputs while maintaining a structured progression through the steps.\n",
    "\n",
    "Now, we’ll go over the steps and implement them using LangGraph abstraction. You can learn more about LangGraph through LangChain’s academy course.\n",
    "\n",
    "## LangGraph state\n",
    "To build a LangGraph implementation, we start by defining a state passed along the steps in the flow."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "a4a393cb-945d-4047-b57e-5d66e6ac0c17",
   "metadata": {},
   "outputs": [],
   "source": [
    "class InputState(TypedDict):\n",
    "    question: str\n",
    "\n",
    "class OutputState(TypedDict):\n",
    "    answer: str\n",
    "    analysis: str\n",
    "    previous_actions: List[str]\n",
    "\n",
    "class OverallState(TypedDict):\n",
    "    question: str\n",
    "    rational_plan: str\n",
    "    notebook: str\n",
    "    previous_actions: Annotated[List[str], add]\n",
    "    check_atomic_facts_queue: List[str]\n",
    "    check_chunks_queue: List[str]\n",
    "    neighbor_check_queue: List[str]\n",
    "    chosen_action: str"
   ]
  },
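  {
   "cell_type": "markdown",
   "id": "b1c2d3e4-rdcr-4d01-8c2f-111111111111",
   "metadata": {},
   "source": [
    "The `Annotated[List[str], add]` annotation on `previous_actions` is worth a quick look. Below is a minimal sketch, outside of LangGraph, of what such a reducer does: `operator.add` concatenates lists, so values a node returns for this key are appended to the existing state rather than overwriting it. The values here are illustrative, not taken from an actual agent run."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b1c2d3e4-rdcr-4d01-8c2f-222222222222",
   "metadata": {},
   "outputs": [],
   "source": [
    "from operator import add\n",
    "\n",
    "# How a reducer-annotated key is merged: instead of overwriting,\n",
    "# operator.add concatenates the existing list with the node's update.\n",
    "existing = [\"rational_plan\"]\n",
    "update = [\"initial_node_selection\"]\n",
    "print(add(existing, update))  # ['rational_plan', 'initial_node_selection']"
   ]
  },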
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "f0482101-455a-4247-8295-aeaaec376a31",
   "metadata": {},
   "outputs": [],
   "source": [
    "def parse_function(input_str):\n",
    "    # Regular expression to capture the function name and arguments\n",
    "    pattern = r'(\\w+)(?:\\((.*)\\))?'\n",
    "    \n",
    "    match = re.match(pattern, input_str)\n",
    "    if match:\n",
    "        function_name = match.group(1)  # Extract the function name\n",
    "        raw_arguments = match.group(2)  # Extract the arguments as a string        \n",
    "        # If there are arguments, attempt to parse them\n",
    "        arguments = []\n",
    "        if raw_arguments:\n",
    "            try:\n",
    "                # Use ast.literal_eval to safely evaluate and convert the arguments\n",
    "                parsed_args = ast.literal_eval(f'({raw_arguments})')  # Wrap in tuple parentheses\n",
    "                # Ensure it's always treated as a tuple even with a single argument\n",
    "                arguments = list(parsed_args) if isinstance(parsed_args, tuple) else [parsed_args]\n",
    "            except (ValueError, SyntaxError):\n",
    "                # In case of failure to parse, return the raw argument string\n",
    "                arguments = [raw_arguments.strip()]\n",
    "        \n",
    "\n",
    "        return {\n",
    "            'function_name': function_name,\n",
    "            'arguments': arguments\n",
    "        }\n",
    "    else:\n",
    "        return None"
   ]
  },
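  {
   "cell_type": "markdown",
   "id": "b1c2d3e4-prsf-4d01-8c2f-333333333333",
   "metadata": {},
   "source": [
    "A quick sanity check of `parse_function` on the two action formats the agent will emit later (the chunk IDs are made up for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b1c2d3e4-prsf-4d01-8c2f-444444444444",
   "metadata": {},
   "outputs": [],
   "source": [
    "# An action with arguments: ast.literal_eval turns the raw string into Python values\n",
    "print(parse_function(\"read_chunk([5, 6])\"))\n",
    "# {'function_name': 'read_chunk', 'arguments': [[5, 6]]}\n",
    "\n",
    "# An action without arguments\n",
    "print(parse_function(\"stop_and_read_neighbor()\"))\n",
    "# {'function_name': 'stop_and_read_neighbor', 'arguments': []}"
   ]
  },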
  {
   "cell_type": "markdown",
   "id": "e5843878-dd36-4901-b085-fb5d8be69bdd",
   "metadata": {},
   "source": [
    "For more advanced use cases, multiple separate states can be used. In our implementation, we have separate input and output states, which define the input and output of the LangGraph, and a separate overall state, which is passed between steps.\n",
    "\n",
    "By default, a state key is overwritten when a node returns a new value for it. However, you can define other merge operations. For example, for previous_actions we use the add operator so that new values are appended to the existing list instead of overwriting it.\n",
    "\n",
    "The agent begins by maintaining a notebook to record supporting facts, which are eventually used to derive the final answer. Other states will be explained as we go along.\n",
    "\n",
    "Let’s move on to defining the nodes in the LangGraph.\n",
    "\n",
    "## Rational plan\n",
    "In the rational plan step, the agent breaks the question into smaller steps, identifies the key information required, and creates a logical plan. The logical plan allows the agent to handle complex multi-step questions.\n",
    "\n",
    "While the paper’s code is unavailable, all the prompts are provided in its appendix, so we can easily copy them.\n",
    "The authors don’t explicitly state whether the prompt is provided in the system or user message. For the most part, I have decided to put the instructions as a system message.\n",
    "\n",
    "The following code shows how to construct a chain using the above rational plan as the system message."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "cce402f5-3a3b-44e7-be41-a9ed8e8010ef",
   "metadata": {},
   "outputs": [],
   "source": [
    "rational_plan_system = \"\"\"As an intelligent assistant, your primary objective is to answer the question by gathering\n",
    "supporting facts from a given article. To facilitate this objective, the first step is to make\n",
    "a rational plan based on the question. This plan should outline the step-by-step process to\n",
    "resolve the question and specify the key information required to formulate a comprehensive answer.\n",
    "Example:\n",
    "#####\n",
    "User: Who had a longer tennis career, Danny or Alice?\n",
    "Assistant: In order to answer this question, we first need to find the length of Danny’s\n",
    "and Alice’s tennis careers, such as the start and retirement of their careers, and then compare the\n",
    "two.\n",
    "#####\n",
    "Please strictly follow the above format. Let’s begin.\"\"\"\n",
    "\n",
    "rational_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\n",
    "            \"system\",\n",
    "            rational_plan_system,\n",
    "        ),\n",
    "        (\n",
    "            \"human\",\n",
    "            (\n",
    "                \"{question}\"\n",
    "            ),\n",
    "        ),\n",
    "    ]\n",
    ")\n",
    "\n",
    "rational_chain = rational_prompt | model | StrOutputParser()\n",
    "\n",
    "def rational_plan_node(state: InputState) -> OverallState:\n",
    "    rational_plan = rational_chain.invoke({\"question\": state.get(\"question\")})\n",
    "    print(\"-\" * 20)\n",
    "    print(f\"Step: rational_plan\")\n",
    "    print(f\"Rational plan: {rational_plan}\")\n",
    "    return {\n",
    "        \"rational_plan\": rational_plan,\n",
    "        \"previous_actions\": [\"rational_plan\"],\n",
    "    }"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0eb952ad-81c0-4103-9db9-f7ce59f3090c",
   "metadata": {},
   "source": [
    "The function starts by invoking the LLM chain, which produces the rational plan. We do a little printing for debugging and then update the state as the function’s output. I like the simplicity of this approach.\n",
    "\n",
    "## Initial node selection\n",
    "In the next step, we select the initial nodes based on the question and rational plan. The prompt starts by giving the LLM some context about the overall agent system, followed by the task instructions. The idea is to have the LLM select the top 10 most relevant nodes and score them. The authors simply put all the key elements from the database in the prompt for an LLM to select from. However, I think that approach doesn’t really scale. Therefore, we will create and use a vector index to retrieve a list of input nodes for the prompt."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2acdee6c-acaa-4e94-9998-4ec311ca38dd",
   "metadata": {},
   "outputs": [],
   "source": [
    "neo4j_vector = Neo4jVector.from_existing_graph(\n",
    "    embedding=embeddings,\n",
    "    index_name=\"keyelements\",\n",
    "    node_label=\"KeyElement\",\n",
    "    text_node_properties=[\"id\"],\n",
    "    embedding_node_property=\"embedding\",\n",
    "    retrieval_query=\"RETURN node.id AS text, score, {} AS metadata\"\n",
    ")\n",
    "\n",
    "def get_potential_nodes(question: str) -> List[str]:\n",
    "    data = neo4j_vector.similarity_search(question, k=50)\n",
    "    return [el.page_content for el in data]\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a5b26445-09e0-4c5f-91ed-97aa8aacae38",
   "metadata": {},
   "source": [
    "The from_existing_graph method pulls the defined text_node_properties from the graph and calculates embeddings where they are missing. Here, we simply embed the id property of KeyElement nodes.\n",
    "\n",
    "Now let’s define the chain. We’ll first copy the prompt."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "58e34514-616a-416b-8907-c4b4abe2e102",
   "metadata": {},
   "outputs": [],
   "source": [
    "initial_node_system = \"\"\"\n",
    "As an intelligent assistant, your primary objective is to answer questions based on information\n",
    "contained within a text. To facilitate this objective, a graph has been created from the text,\n",
    "comprising the following elements:\n",
    "1. Text Chunks: Chunks of the original text.\n",
    "2. Atomic Facts: Smallest, indivisible truths extracted from text chunks.\n",
    "3. Nodes: Key elements in the text (noun, verb, or adjective) that correlate with several atomic\n",
    "facts derived from different text chunks.\n",
    "Your current task is to check a list of nodes, with the objective of selecting the most relevant initial nodes from the graph to efficiently answer the question. You are given the question, the\n",
    "rational plan, and a list of node key elements. These initial nodes are crucial because they are the\n",
    "starting point for searching for relevant information.\n",
    "Requirements:\n",
    "#####\n",
    "1. Once you have selected a starting node, assess its relevance to the potential answer by assigning\n",
    "a score between 0 and 100. A score of 100 implies a high likelihood of relevance to the answer,\n",
    "whereas a score of 0 suggests minimal relevance.\n",
    "2. Present each chosen starting node in a separate line, accompanied by its relevance score. Format\n",
    "each line as follows: Node: [Key Element of Node], Score: [Relevance Score].\n",
    "3. Please select at least 10 starting nodes, ensuring they are non-repetitive and diverse.\n",
    "4. In the user’s input, each line constitutes a node. When selecting the starting node, please make\n",
    "your choice from those provided, and refrain from fabricating your own. The nodes you output\n",
    "must correspond exactly to the nodes given by the user, with identical wording.\n",
    "Finally, I emphasize again that you need to select the starting node from the given Nodes, and\n",
    "it must be consistent with the words of the node you selected. Please strictly follow the above\n",
    "format. Let’s begin.\n",
    "\"\"\"\n",
    "\n",
    "initial_node_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\n",
    "            \"system\",\n",
    "            initial_node_system,\n",
    "        ),\n",
    "        (\n",
    "            \"human\",\n",
    "            (\n",
    "                \"\"\"Question: {question}\n",
    "Plan: {rational_plan}\n",
    "Nodes: {nodes}\"\"\"\n",
    "            ),\n",
    "        ),\n",
    "    ]\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "58db91e3-4f32-43ab-91f7-8f84b77e5b96",
   "metadata": {},
   "source": [
    "Again, we put most of the instructions in the system message. Since we have multiple inputs, we can define them in the human message. However, we need a more structured output this time. Instead of writing a parsing function that takes in text and outputs a JSON, we can simply use the with_structured_output method to define the desired output structure."
   ]
  },
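  {
   "cell_type": "markdown",
   "id": "b1c2d3e4-pyd1-4d01-8c2f-555555555555",
   "metadata": {},
   "source": [
    "To see why the field descriptions matter, here is a toy Pydantic model (hypothetical, not the one used by the agent, and assuming Pydantic v2’s `model_json_schema`) showing that each `Field` description ends up in the generated JSON schema, which `with_structured_output` passes to the LLM as part of the tool definition:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b1c2d3e4-pyd1-4d01-8c2f-666666666666",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pydantic import BaseModel, Field\n",
    "\n",
    "# Toy model for illustration only\n",
    "class ToyNode(BaseModel):\n",
    "    key_element: str = Field(description=\"Name of a relevant node\")\n",
    "    score: int = Field(description=\"Relevance score between 0 and 100\")\n",
    "\n",
    "# The descriptions are part of the generated JSON schema\n",
    "print(ToyNode.model_json_schema()[\"properties\"][\"score\"][\"description\"])\n",
    "# Relevance score between 0 and 100"
   ]
  },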
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1375fb1c-087f-4598-85a7-6ef080aa3c09",
   "metadata": {},
   "outputs": [],
   "source": [
    "class Node(BaseModel):\n",
    "    key_element: str = Field(description=\"\"\"Key element or name of a relevant node\"\"\")\n",
    "    score: int = Field(description=\"\"\"Relevance to the potential answer by assigning\n",
    "a score between 0 and 100. A score of 100 implies a high likelihood of relevance to the answer,\n",
    "whereas a score of 0 suggests minimal relevance.\"\"\")\n",
    "\n",
    "class InitialNodes(BaseModel):\n",
    "    initial_nodes: List[Node] = Field(description=\"List of relevant nodes to the question and plan\")\n",
    "\n",
    "initial_nodes_chain = initial_node_prompt | model.with_structured_output(InitialNodes)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4947cb81-bbf9-4ce5-9fc5-78b99cd6ff52",
   "metadata": {},
   "source": [
    "We want to output a list of nodes containing the key element and the score. We can easily define the output using a Pydantic model. Additionally, it is vital to add a description to each of the fields, so we can guide the LLM as much as possible.\n",
    "\n",
    "The last thing in this step is to define the node as a function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f98c28e9-b5cb-4fdc-ba6e-401b5642e221",
   "metadata": {},
   "outputs": [],
   "source": [
    "def initial_node_selection(state: OverallState) -> OverallState:\n",
    "    potential_nodes = get_potential_nodes(state.get(\"question\"))\n",
    "    initial_nodes = initial_nodes_chain.invoke(\n",
    "        {\n",
    "            \"question\": state.get(\"question\"),\n",
    "            \"rational_plan\": state.get(\"rational_plan\"),\n",
    "            \"nodes\": potential_nodes,\n",
    "        }\n",
    "    )\n",
    "    # paper uses 5 initial nodes\n",
    "    check_atomic_facts_queue = [\n",
    "        el.key_element\n",
    "        for el in sorted(\n",
    "            initial_nodes.initial_nodes,\n",
    "            key=lambda node: node.score,\n",
    "            reverse=True,\n",
    "        )\n",
    "    ][:5]\n",
    "    return {\n",
    "        \"check_atomic_facts_queue\": check_atomic_facts_queue,\n",
    "        \"previous_actions\": [\"initial_node_selection\"],\n",
    "    }"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7caf07c1-efd7-42ca-b2f2-b470c3e39659",
   "metadata": {},
   "source": [
    "In the initial node selection, we start by getting a list of potential nodes using vector similarity search based on the input question. An alternative would be to use the rational plan instead. The LLM is prompted to output the 10 most relevant nodes. However, the authors say that we should use only 5 initial nodes. Therefore, we simply order the nodes by their score and take the top 5. We then update the check_atomic_facts_queue with the selected initial key elements.\n",
    "\n",
    "## Atomic fact check\n",
    "In this step, we take the initial key elements and inspect the linked atomic facts. All prompts start by giving the LLM some context, followed by task instructions. The LLM is instructed to read the atomic facts and decide whether to read the linked text chunks or, if the atomic facts are irrelevant, to search for more information by exploring the neighbors. The last bit of the prompt contains the output instructions. We will use the structured output method again to avoid manually parsing and structuring the output.\n",
    "\n",
    "Since the chains are very similar in their implementation, differing only in their prompts, we’ll avoid showing every definition in this blog post. However, we’ll look at the LangGraph node definitions to better understand the flow."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "fb60c98b-4a26-4cc2-bb4b-a5802f73e5e5",
   "metadata": {},
   "outputs": [],
   "source": [
    "atomic_fact_check_system = \"\"\"As an intelligent assistant, your primary objective is to answer questions based on information\n",
    "contained within a text. To facilitate this objective, a graph has been created from the text,\n",
    "comprising the following elements:\n",
    "1. Text Chunks: Chunks of the original text.\n",
    "2. Atomic Facts: Smallest, indivisible truths extracted from text chunks.\n",
    "3. Nodes: Key elements in the text (noun, verb, or adjective) that correlate with several atomic\n",
    "facts derived from different text chunks.\n",
    "Your current task is to check a node and its associated atomic facts, with the objective of\n",
    "determining whether to proceed with reviewing the text chunk corresponding to these atomic facts.\n",
    "Given the question, the rational plan, previous actions, notebook content, and the current node’s\n",
    "atomic facts and their corresponding chunk IDs, you have the following Action Options:\n",
    "#####\n",
    "1. read_chunk(List[ID]): Choose this action if you believe that a text chunk linked to an atomic\n",
    "fact may hold the necessary information to answer the question. This will allow you to access\n",
    "more complete and detailed information.\n",
    "2. stop_and_read_neighbor(): Choose this action if you ascertain that all text chunks lack valuable\n",
    "information.\n",
    "#####\n",
    "Strategy:\n",
    "#####\n",
    "1. Reflect on previous actions and prevent redundant revisiting nodes or chunks.\n",
    "2. You can choose to read multiple text chunks at the same time.\n",
    "3. Atomic facts only cover part of the information in the text chunk, so even if you feel that the\n",
    "atomic facts are slightly relevant to the question, please try to read the text chunk to get more\n",
    "complete information.\n",
    "#####\n",
    "Finally, it is emphasized again that even if the atomic fact is only slightly relevant to the\n",
    "question, you should still look at the text chunk to avoid missing information. You should only\n",
    "choose stop_and_read_neighbor() when you are very sure that the given text chunk is irrelevant to\n",
    "the question. Please strictly follow the above format. Let’s begin.\n",
    "\"\"\"\n",
    "\n",
    "class AtomicFactOutput(BaseModel):\n",
    "    updated_notebook: str = Field(description=\"\"\"First, combine your current notebook with new insights and findings about\n",
    "the question from current atomic facts, creating a more complete version of the notebook that\n",
    "contains more valid information.\"\"\")\n",
    "    rational_next_action: str = Field(description=\"\"\"Based on the given question, the rational plan, previous actions, and\n",
    "notebook content, analyze how to choose the next action.\"\"\")\n",
    "    chosen_action: str = Field(description=\"\"\"1. read_chunk(List[ID]): Choose this action if you believe that a text chunk linked to an atomic\n",
    "fact may hold the necessary information to answer the question. This will allow you to access\n",
    "more complete and detailed information.\n",
    "2. stop_and_read_neighbor(): Choose this action if you ascertain that all text chunks lack valuable\n",
    "information.\"\"\")\n",
    "\n",
    "atomic_fact_check_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\n",
    "            \"system\",\n",
    "            atomic_fact_check_system,\n",
    "        ),\n",
    "        (\n",
    "            \"human\",\n",
    "            (\n",
    "                \"\"\"Question: {question}\n",
    "Plan: {rational_plan}\n",
    "Previous actions: {previous_actions}\n",
    "Notebook: {notebook}\n",
    "Atomic facts: {atomic_facts}\"\"\"\n",
    "            ),\n",
    "        ),\n",
    "    ]\n",
    ")\n",
    "\n",
    "atomic_fact_chain = atomic_fact_check_prompt | model.with_structured_output(AtomicFactOutput)\n",
    "\n",
    "def get_atomic_facts(key_elements: List[str]) -> List[Dict[str, str]]:\n",
    "    data = neo4j_graph.query(\"\"\"\n",
    "    MATCH (k:KeyElement)<-[:HAS_KEY_ELEMENT]-(fact)<-[:HAS_ATOMIC_FACT]-(chunk)\n",
    "    WHERE k.id IN $key_elements\n",
    "    RETURN distinct chunk.id AS chunk_id, fact.text AS text\n",
    "    \"\"\", params={\"key_elements\": key_elements})\n",
    "    return data\n",
    "\n",
    "def get_neighbors_by_key_element(key_elements):\n",
    "    print(f\"Key elements: {key_elements}\")\n",
    "    data = neo4j_graph.query(\"\"\"\n",
    "    MATCH (k:KeyElement)<-[:HAS_KEY_ELEMENT]-()-[:HAS_KEY_ELEMENT]->(neighbor)\n",
    "    WHERE k.id IN $key_elements AND NOT neighbor.id IN $key_elements\n",
    "    WITH neighbor, count(*) AS count\n",
    "    ORDER BY count DESC LIMIT 50\n",
    "    RETURN collect(neighbor.id) AS possible_candidates\n",
    "    \"\"\", params={\"key_elements\":key_elements})\n",
    "    # the query returns a single row holding the candidate list; unpack it\n",
    "    return data[0][\"possible_candidates\"] if data else []\n",
    "\n",
    "def atomic_fact_check(state: OverallState) -> OverallState:\n",
    "    atomic_facts = get_atomic_facts(state.get(\"check_atomic_facts_queue\"))\n",
    "    print(\"-\" * 20)\n",
    "    print(f\"Step: atomic_fact_check\")\n",
    "    print(\n",
    "        f\"Reading atomic facts about: {state.get('check_atomic_facts_queue')}\"\n",
    "    )\n",
    "    atomic_facts_results = atomic_fact_chain.invoke(\n",
    "        {\n",
    "            \"question\": state.get(\"question\"),\n",
    "            \"rational_plan\": state.get(\"rational_plan\"),\n",
    "            \"notebook\": state.get(\"notebook\"),\n",
    "            \"previous_actions\": state.get(\"previous_actions\"),\n",
    "            \"atomic_facts\": atomic_facts,\n",
    "        }\n",
    "    )\n",
    "\n",
    "    notebook = atomic_facts_results.updated_notebook\n",
    "    print(\n",
    "        f\"Rational for next action after atomic check: {atomic_facts_results.rational_next_action}\"\n",
    "    )\n",
    "    chosen_action = parse_function(atomic_facts_results.chosen_action)\n",
    "    print(f\"Chosen action: {chosen_action}\")\n",
    "    response = {\n",
    "        \"notebook\": notebook,\n",
    "        \"chosen_action\": chosen_action.get(\"function_name\"),\n",
    "        \"check_atomic_facts_queue\": [],\n",
    "        \"previous_actions\": [\n",
    "            f\"atomic_fact_check({state.get('check_atomic_facts_queue')})\"\n",
    "        ],\n",
    "    }\n",
    "    if chosen_action.get(\"function_name\") == \"stop_and_read_neighbor\":\n",
    "        neighbors = get_neighbors_by_key_element(\n",
    "            state.get(\"check_atomic_facts_queue\")\n",
    "        )\n",
    "        response[\"neighbor_check_queue\"] = neighbors\n",
    "    elif chosen_action.get(\"function_name\") == \"read_chunk\":\n",
    "        response[\"check_chunks_queue\"] = chosen_action.get(\"arguments\")[0]\n",
    "    return response\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3cb3812c-5c2c-4955-a4ea-1da7fc2868e5",
   "metadata": {},
   "source": [
    "The atomic fact check node starts by invoking the LLM to evaluate the atomic facts of the selected nodes. Since we are using with_structured_output, we can parse the updated notebook and the chosen action output in a straightforward manner. If the selected action is to get additional information by inspecting the neighbors, we use a function to find those neighbors and append them to the neighbor_check_queue. Otherwise, we append the selected chunks to the check_chunks_queue. We update the overall state with the new notebook, the queues, and the chosen action.\n",
    "\n",
    "## Text chunk check\n",
    "As you might imagine from the name of the LangGraph node, in this step the LLM reads the selected text chunk and decides on the best next step based on the provided information. My gut feeling is that sometimes relevant information sits at the start or the end of a text chunk, and parts of the information might be missing due to the chunking process. Therefore, the authors decided to give the LLM the option to read the previous or next chunk. If the LLM decides it has enough information, it can move on to the final step. Otherwise, it has the option to search for more details using the search_more function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "02237ea5-0567-4e9d-8e76-de942e942f61",
   "metadata": {},
   "outputs": [],
   "source": [
    "chunk_read_system_prompt = \"\"\"As an intelligent assistant, your primary objective is to answer questions based on information\n",
    "within a text. To facilitate this objective, a graph has been created from the text, comprising the\n",
    "following elements:\n",
    "1. Text Chunks: Segments of the original text.\n",
    "2. Atomic Facts: Smallest, indivisible truths extracted from text chunks.\n",
    "3. Nodes: Key elements in the text (noun, verb, or adjective) that correlate with several atomic\n",
    "facts derived from different text chunks.\n",
    "Your current task is to assess a specific text chunk and determine whether the available information\n",
    "suffices to answer the question. Given the question, rational plan, previous actions, notebook\n",
    "content, and the current text chunk, you have the following Action Options:\n",
    "#####\n",
    "1. search_more(): Choose this action if you think that the essential information necessary to\n",
    "answer the question is still lacking.\n",
    "2. read_previous_chunk(): Choose this action if you feel that the previous text chunk contains\n",
    "valuable information for answering the question.\n",
    "3. read_subsequent_chunk(): Choose this action if you feel that the subsequent text chunk contains\n",
    "valuable information for answering the question.\n",
    "4. termination(): Choose this action if you believe that the information you have currently obtained\n",
    "is enough to answer the question. This will allow you to summarize the gathered information and\n",
    "provide a final answer.\n",
    "#####\n",
    "Strategy:\n",
    "#####\n",
    "1. Reflect on previous actions and prevent redundant revisiting of nodes or chunks.\n",
    "2. You can only choose one action.\n",
    "#####\n",
    "Please strictly follow the above format. Let’s begin\n",
    "\"\"\"\n",
    "\n",
    "class ChunkOutput(BaseModel):\n",
    "    updated_notebook: str = Field(description=\"\"\"First, combine your previous notes with new insights and findings about the\n",
    "question from current text chunks, creating a more complete version of the notebook that contains\n",
    "more valid information.\"\"\")\n",
    "    rational_next_move: str = Field(description=\"\"\"Based on the given question, rational plan, previous actions, and\n",
    "notebook content, analyze how to choose the next action.\"\"\")\n",
    "    chosen_action: str = Field(description=\"\"\"1. search_more(): Choose this action if you think that the essential information necessary to\n",
    "answer the question is still lacking.\n",
    "2. read_previous_chunk(): Choose this action if you feel that the previous text chunk contains\n",
    "valuable information for answering the question.\n",
    "3. read_subsequent_chunk(): Choose this action if you feel that the subsequent text chunk contains\n",
    "valuable information for answering the question.\n",
    "4. termination(): Choose this action if you believe that the information you have currently obtained\n",
    "is enough to answer the question. This will allow you to summarize the gathered information and\n",
    "provide a final answer.\"\"\")\n",
    "\n",
    "chunk_read_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\n",
    "            \"system\",\n",
    "            chunk_read_system_prompt,\n",
    "        ),\n",
    "        (\n",
    "            \"human\",\n",
    "            (\n",
    "                \"\"\"Question: {question}\n",
    "Plan: {rational_plan}\n",
    "Previous actions: {previous_actions}\n",
    "Notebook: {notebook}\n",
    "Chunk: {chunk}\"\"\"\n",
    "            ),\n",
    "        ),\n",
    "    ]\n",
    ")\n",
    "\n",
    "chunk_read_chain = chunk_read_prompt | model.with_structured_output(ChunkOutput)\n",
    "\n",
    "def get_subsequent_chunk_id(chunk_id):\n",
    "    # Pass the chunk id as a query parameter and unwrap the returned row\n",
    "    data = neo4j_graph.query(\"\"\"\n",
    "    MATCH (c:Chunk)-[:NEXT]->(next)\n",
    "    WHERE c.id = $id\n",
    "    RETURN next.id AS next\n",
    "    \"\"\", params={\"id\": chunk_id})\n",
    "    return data[0][\"next\"] if data else None\n",
    "\n",
    "def get_previous_chunk_id(chunk_id):\n",
    "    # Pass the chunk id as a query parameter and unwrap the returned row\n",
    "    data = neo4j_graph.query(\"\"\"\n",
    "    MATCH (c:Chunk)<-[:NEXT]-(previous)\n",
    "    WHERE c.id = $id\n",
    "    RETURN previous.id AS previous\n",
    "    \"\"\", params={\"id\": chunk_id})\n",
    "    return data[0][\"previous\"] if data else None\n",
    "\n",
    "def get_chunk(chunk_id: str) -> List[Dict[str, str]]:\n",
    "    data = neo4j_graph.query(\"\"\"\n",
    "    MATCH (c:Chunk)\n",
    "    WHERE c.id = $chunk_id\n",
    "    RETURN c.id AS chunk_id, c.text AS text\n",
    "    \"\"\", params={\"chunk_id\": chunk_id})\n",
    "    return data\n",
    "\n",
    "def chunk_check(state: OverallState) -> OverallState:\n",
    "    check_chunks_queue = state.get(\"check_chunks_queue\")\n",
    "    chunk_id = check_chunks_queue.pop()\n",
    "    print(\"-\" * 20)\n",
    "    print(f\"Step: read chunk({chunk_id})\")\n",
    "\n",
    "    chunks_text = get_chunk(chunk_id)\n",
    "    read_chunk_results = chunk_read_chain.invoke(\n",
    "        {\n",
    "            \"question\": state.get(\"question\"),\n",
    "            \"rational_plan\": state.get(\"rational_plan\"),\n",
    "            \"notebook\": state.get(\"notebook\"),\n",
    "            \"previous_actions\": state.get(\"previous_actions\"),\n",
    "            \"chunk\": chunks_text,\n",
    "        }\n",
    "    )\n",
    "\n",
    "    notebook = read_chunk_results.updated_notebook\n",
    "    print(\n",
    "        f\"Rationale for next action after reading chunks: {read_chunk_results.rational_next_move}\"\n",
    "    )\n",
    "    chosen_action = parse_function(read_chunk_results.chosen_action)\n",
    "    print(f\"Chosen action: {chosen_action}\")\n",
    "    response = {\n",
    "        \"notebook\": notebook,\n",
    "        \"chosen_action\": chosen_action.get(\"function_name\"),\n",
    "        \"previous_actions\": [f\"read_chunks({chunk_id})\"],\n",
    "    }\n",
    "    if chosen_action.get(\"function_name\") == \"read_subsequent_chunk\":\n",
    "        subsequent_id = get_subsequent_chunk_id(chunk_id)\n",
    "        if subsequent_id:  # the current chunk may be the last one\n",
    "            check_chunks_queue.append(subsequent_id)\n",
    "    elif chosen_action.get(\"function_name\") == \"read_previous_chunk\":\n",
    "        previous_id = get_previous_chunk_id(chunk_id)\n",
    "        if previous_id:  # the current chunk may be the first one\n",
    "            check_chunks_queue.append(previous_id)\n",
    "    elif chosen_action.get(\"function_name\") == \"search_more\":\n",
    "        # Go over to next chunk\n",
    "        # Else explore neighbors\n",
    "        if not check_chunks_queue:\n",
    "            response[\"chosen_action\"] = \"search_neighbor\"\n",
    "            # Get neighbors/use vector similarity\n",
    "            print(f\"Neighbor rationale: {read_chunk_results.rational_next_move}\")\n",
    "            neighbors = get_potential_nodes(\n",
    "                read_chunk_results.rational_next_move\n",
    "            )\n",
    "            response[\"neighbor_check_queue\"] = neighbors\n",
    "\n",
    "    response[\"check_chunks_queue\"] = check_chunks_queue\n",
    "    return response"
   ]
  },
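  {
   "cell_type": "markdown",
   "id": "f3a9c2d1-7b4e-4c8a-9d2f-1e5a6b7c8d90",
   "metadata": {},
   "source": [
    "Both the chunk check above and the neighbor selection below rely on parse_function (defined earlier in the notebook) to turn the LLM's chosen_action string, such as read_subsequent_chunk(), into a function name and an argument list. As a minimal, illustrative sketch of what such a parser has to do (the regex and the returned dictionary shape here are assumptions, not the notebook's exact implementation):\n",
    "\n",
    "```python\n",
    "import re\n",
    "\n",
    "def parse_function_sketch(action: str) -> dict:\n",
    "    # Split \"name(arg1, arg2)\" into the name and a list of string arguments\n",
    "    match = re.match(r\"(\\w+)(?:\\((.*)\\))?\", action.strip())\n",
    "    if not match:\n",
    "        return {\"function_name\": action, \"arguments\": []}\n",
    "    name, raw_args = match.group(1), match.group(2)\n",
    "    args = [arg.strip() for arg in raw_args.split(\",\")] if raw_args else []\n",
    "    return {\"function_name\": name, \"arguments\": args}\n",
    "\n",
    "parse_function_sketch(\"read_neighbor_node(Danny)\")\n",
    "# {'function_name': 'read_neighbor_node', 'arguments': ['Danny']}\n",
    "```"
   ]
  },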
  {
   "cell_type": "markdown",
   "id": "b35ae4c1-e192-47f7-9d11-7b28f5e8b3fe",
   "metadata": {},
   "source": [
    "We start by popping a chunk ID from the queue and retrieving its text from the graph. Using the retrieved text and additional information from the overall state of the LangGraph system, we invoke the LLM chain. If the LLM decides it wants to read previous or subsequent chunks, we append their IDs to the queue. On the other hand, if the LLM chooses to search for more information, we have two options. If there are any other chunks to read in the queue, we move to reading them. Otherwise, we can use the vector search to get more relevant key elements and repeat the process by reading their atomic facts and so on.\n",
    "\n",
    "The paper is somewhat ambiguous about the search_more function. On the one hand, it states that search_more can only read other chunks in the queue. On the other hand, in the example in the appendix, the function clearly explores the neighbors.\n",
    "\n",
    "To clarify, I emailed the authors, and they confirmed that the search_more function first tries to go through additional chunks in the queue. If none are present, it moves on to exploring the neighbors. Since the paper doesn’t explicitly define how to explore the neighbors, we again use the vector similarity search to find potential nodes.\n",
    "\n",
    "## Neighbor selection\n",
    "When the LLM decides to explore the neighbors, we have helper functions to find potential key elements to explore. However, we don’t explore all of them. Instead, the LLM decides which of them, if any, are worth exploring. If none are worth exploring, it can terminate the flow and move on to the answer reasoning step.\n",
    "\n",
    "The code is:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "26942568-7ae9-426b-9237-3f53bd76e0a5",
   "metadata": {},
   "outputs": [],
   "source": [
    "neighbor_select_system_prompt = \"\"\"\n",
    "As an intelligent assistant, your primary objective is to answer questions based on information\n",
    "within a text. To facilitate this objective, a graph has been created from the text, comprising the\n",
    "following elements:\n",
    "1. Text Chunks: Segments of the original text.\n",
    "2. Atomic Facts: Smallest, indivisible truths extracted from text chunks.\n",
    "3. Nodes: Key elements in the text (noun, verb, or adjective) that correlate with several atomic\n",
    "facts derived from different text chunks.\n",
    "Your current task is to assess all neighboring nodes of the current node, with the objective of determining whether to proceed to the next neighboring node. Given the question, rational\n",
    "plan, previous actions, notebook content, and the neighbors of the current node, you have the\n",
    "following Action Options:\n",
    "#####\n",
    "1. read_neighbor_node(key element of node): Choose this action if you believe that any of the\n",
    "neighboring nodes may contain information relevant to the question. Note that you should focus\n",
    "on one neighbor node at a time.\n",
    "2. termination(): Choose this action if you believe that none of the neighboring nodes possess\n",
    "information that could answer the question.\n",
    "#####\n",
    "Strategy:\n",
    "#####\n",
    "1. Reflect on previous actions and prevent redundant revisiting of nodes or chunks.\n",
    "2. You can only choose one action. This means that you can choose to read only one neighbor\n",
    "node or choose to terminate.\n",
    "#####\n",
    "Please strictly follow the above format. Let’s begin.\n",
    "\"\"\"\n",
    "\n",
    "class NeighborOutput(BaseModel):\n",
    "    rational_next_move: str = Field(description=\"\"\"Based on the given question, rational plan, previous actions, and\n",
    "notebook content, analyze how to choose the next action.\"\"\")\n",
    "    chosen_action: str = Field(description=\"\"\"You have the following Action Options:\n",
    "1. read_neighbor_node(key element of node): Choose this action if you believe that any of the\n",
    "neighboring nodes may contain information relevant to the question. Note that you should focus\n",
    "on one neighbor node at a time.\n",
    "2. termination(): Choose this action if you believe that none of the neighboring nodes possess\n",
    "information that could answer the question.\"\"\")\n",
    "\n",
    "neighbor_select_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\n",
    "            \"system\",\n",
    "            neighbor_select_system_prompt,\n",
    "        ),\n",
    "        (\n",
    "            \"human\",\n",
    "            (\n",
    "                \"\"\"Question: {question}\n",
    "Plan: {rational_plan}\n",
    "Previous actions: {previous_actions}\n",
    "Notebook: {notebook}\n",
    "Neighbor nodes: {nodes}\"\"\"\n",
    "            ),\n",
    "        ),\n",
    "    ]\n",
    ")\n",
    "\n",
    "neighbor_select_chain = neighbor_select_prompt | model.with_structured_output(NeighborOutput)\n",
    "\n",
    "def neighbor_select(state: OverallState) -> OverallState:\n",
    "    print(\"-\" * 20)\n",
    "    print(\"Step: neighbor select\")\n",
    "    print(f\"Possible candidates: {state.get('neighbor_check_queue')}\")\n",
    "    neighbor_select_results = neighbor_select_chain.invoke(\n",
    "        {\n",
    "            \"question\": state.get(\"question\"),\n",
    "            \"rational_plan\": state.get(\"rational_plan\"),\n",
    "            \"notebook\": state.get(\"notebook\"),\n",
    "            \"nodes\": state.get(\"neighbor_check_queue\"),\n",
    "            \"previous_actions\": state.get(\"previous_actions\"),\n",
    "        }\n",
    "    )\n",
    "    print(\n",
    "        f\"Rationale for next action after selecting neighbor: {neighbor_select_results.rational_next_move}\"\n",
    "    )\n",
    "    chosen_action = parse_function(neighbor_select_results.chosen_action)\n",
    "    print(f\"Chosen action: {chosen_action}\")\n",
    "    # Empty neighbor select queue\n",
    "    response = {\n",
    "        \"chosen_action\": chosen_action.get(\"function_name\"),\n",
    "        \"neighbor_check_queue\": [],\n",
    "        \"previous_actions\": [\n",
    "            f\"neighbor_select({chosen_action.get('arguments', [''])[0] if chosen_action.get('arguments', ['']) else ''})\"\n",
    "        ],\n",
    "    }\n",
    "    if chosen_action.get(\"function_name\") == \"read_neighbor_node\":\n",
    "        response[\"check_atomic_facts_queue\"] = [\n",
    "            chosen_action.get(\"arguments\")[0]\n",
    "        ]\n",
    "    return response"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5d90b890-7611-4ef1-b55f-e2361e1a3e6e",
   "metadata": {},
   "source": [
    "Here, we execute the LLM chain and parse the results. If the chosen action is to explore a neighbor, we add it to the check_atomic_facts_queue.\n",
    "\n",
    "## Answer reasoning\n",
    "The last step in our flow is to ask the LLM to construct the final answer based on the information collected in the notebook. This node’s implementation is fairly straightforward, as you can see from the code:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "71098d92-8abc-4851-b491-bd36db1e8f71",
   "metadata": {},
   "outputs": [],
   "source": [
    "answer_reasoning_system_prompt = \"\"\"\n",
    "As an intelligent assistant, your primary objective is to answer questions based on information\n",
    "within a text. To facilitate this objective, a graph has been created from the text, comprising the\n",
    "following elements:\n",
    "1. Text Chunks: Segments of the original text.\n",
    "2. Atomic Facts: Smallest, indivisible truths extracted from text chunks.\n",
    "3. Nodes: Key elements in the text (noun, verb, or adjective) that correlate with several atomic\n",
    "facts derived from different text chunks.\n",
    "You have now explored multiple paths from various starting nodes on this graph, recording key information for each path in a notebook.\n",
    "Your task now is to analyze these memories and reason to answer the question.\n",
    "Strategy:\n",
    "#####\n",
    "1. You should first analyze each notebook content before providing a final answer.\n",
    "2. During the analysis, consider complementary information from other notes and employ a\n",
    "majority voting strategy to resolve any inconsistencies.\n",
    "3. When generating the final answer, ensure that you take into account all available information.\n",
    "#####\n",
    "Example:\n",
    "#####\n",
    "User:\n",
    "Question: Who had a longer tennis career, Danny or Alice?\n",
    "Notebook of different exploration paths:\n",
    "1. We only know that Danny’s tennis career started in 1972 and ended in 1990, but we don’t know\n",
    "the length of Alice’s career.\n",
    "2. ......\n",
    "Assistant:\n",
    "Analyze:\n",
    "The summary of search path 1 points out that Danny’s tennis career is 1990-1972=18 years.\n",
    "Although it does not indicate the length of Alice’s career, the summary of search path 2 finds this\n",
    "information, that is, the length of Alice’s tennis career is 15 years. Then we can get the final\n",
    "answer, that is, Danny’s tennis career is longer than Alice’s.\n",
    "Final answer:\n",
    "Danny’s tennis career is longer than Alice’s.\n",
    "#####\n",
    "Please strictly follow the above format. Let’s begin\n",
    "\"\"\"\n",
    "\n",
    "answer_reasoning_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\n",
    "            \"system\",\n",
    "            answer_reasoning_system_prompt,\n",
    "        ),\n",
    "        (\n",
    "            \"human\",\n",
    "            (\n",
    "                \"\"\"Question: {question}\n",
    "Notebook: {notebook}\"\"\"\n",
    "            ),\n",
    "        ),\n",
    "    ]\n",
    ")\n",
    "\n",
    "class AnswerReasonOutput(BaseModel):\n",
    "    analyze: str = Field(description=\"\"\"You should first analyze each notebook content before providing a final answer.\n",
    "    During the analysis, consider complementary information from other notes and employ a\n",
    "majority voting strategy to resolve any inconsistencies.\"\"\")\n",
    "    final_answer: str = Field(description=\"\"\"When generating the final answer, ensure that you take into account all available information.\"\"\")\n",
    "\n",
    "answer_reasoning_chain = answer_reasoning_prompt | model.with_structured_output(AnswerReasonOutput)\n",
    "\n",
    "def answer_reasoning(state: OverallState) -> OutputState:\n",
    "    print(\"-\" * 20)\n",
    "    print(\"Step: Answer Reasoning\")\n",
    "    final_answer = answer_reasoning_chain.invoke(\n",
    "        {\"question\": state.get(\"question\"), \"notebook\": state.get(\"notebook\")}\n",
    "    )\n",
    "    return {\n",
    "        \"answer\": final_answer.final_answer,\n",
    "        \"analysis\": final_answer.analyze,\n",
    "        \"previous_actions\": [\"answer_reasoning\"],\n",
    "    }\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b5dc1d6c-65a6-4036-b904-4b02d5ffe724",
   "metadata": {},
   "source": [
    "We simply pass the original question and the notebook with the collected information to the chain and ask it to formulate the final answer, along with an explanation in the analysis part.\n",
    "\n",
    "## LangGraph flow definition\n",
    "The only thing left is to define the LangGraph flow and how it should traverse between the nodes. I am quite fond of the simple approach the LangChain team has chosen."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "c7b57214-a6db-4bf9-8616-4f1dd3ceac11",
   "metadata": {},
   "outputs": [],
   "source": [
    "def atomic_fact_condition(\n",
    "    state: OverallState,\n",
    ") -> Literal[\"neighbor_select\", \"chunk_check\"]:\n",
    "    if state.get(\"chosen_action\") == \"stop_and_read_neighbor\":\n",
    "        return \"neighbor_select\"\n",
    "    elif state.get(\"chosen_action\") == \"read_chunk\":\n",
    "        return \"chunk_check\"\n",
    "    # Fail loudly instead of implicitly returning None to the router\n",
    "    raise ValueError(f\"Unexpected chosen action: {state.get('chosen_action')}\")\n",
    "\n",
    "def chunk_condition(\n",
    "    state: OverallState,\n",
    ") -> Literal[\"answer_reasoning\", \"chunk_check\", \"neighbor_select\"]:\n",
    "    if state.get(\"chosen_action\") == \"termination\":\n",
    "        return \"answer_reasoning\"\n",
    "    elif state.get(\"chosen_action\") in [\"read_subsequent_chunk\", \"read_previous_chunk\", \"search_more\"]:\n",
    "        return \"chunk_check\"\n",
    "    elif state.get(\"chosen_action\") == \"search_neighbor\":\n",
    "        return \"neighbor_select\"\n",
    "    raise ValueError(f\"Unexpected chosen action: {state.get('chosen_action')}\")\n",
    "\n",
    "def neighbor_condition(\n",
    "    state: OverallState,\n",
    ") -> Literal[\"answer_reasoning\", \"atomic_fact_check\"]:\n",
    "    if state.get(\"chosen_action\") == \"termination\":\n",
    "        return \"answer_reasoning\"\n",
    "    elif state.get(\"chosen_action\") == \"read_neighbor_node\":\n",
    "        return \"atomic_fact_check\"\n",
    "    raise ValueError(f\"Unexpected chosen action: {state.get('chosen_action')}\")"
   ]
  },
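  {
   "cell_type": "markdown",
   "id": "8c4d2e1f-5a6b-4c7d-8e9f-0a1b2c3d4e5f",
   "metadata": {},
   "source": [
    "Each condition function returns the name of the next node to visit, and LangGraph's add_conditional_edges uses the Literal return annotation to know the possible targets. As a self-contained toy example of this routing pattern (the state, node functions, and action values here are simplified stand-ins, not the GraphReader state):\n",
    "\n",
    "```python\n",
    "from typing import Literal\n",
    "from typing_extensions import TypedDict\n",
    "from langgraph.graph import StateGraph, START, END\n",
    "\n",
    "class DemoState(TypedDict):\n",
    "    chosen_action: str\n",
    "    visited: list\n",
    "\n",
    "def demo_chunk_check(state: DemoState) -> DemoState:\n",
    "    # Pretend the LLM decided it has enough information\n",
    "    return {\"chosen_action\": \"termination\", \"visited\": state[\"visited\"] + [\"chunk_check\"]}\n",
    "\n",
    "def demo_answer_reasoning(state: DemoState) -> DemoState:\n",
    "    return {\"visited\": state[\"visited\"] + [\"answer_reasoning\"]}\n",
    "\n",
    "def demo_chunk_condition(state: DemoState) -> Literal[\"answer_reasoning\", \"chunk_check\"]:\n",
    "    # The returned string is the name of the node to route to\n",
    "    if state[\"chosen_action\"] == \"termination\":\n",
    "        return \"answer_reasoning\"\n",
    "    return \"chunk_check\"\n",
    "\n",
    "builder = StateGraph(DemoState)\n",
    "builder.add_node(\"chunk_check\", demo_chunk_check)\n",
    "builder.add_node(\"answer_reasoning\", demo_answer_reasoning)\n",
    "builder.add_edge(START, \"chunk_check\")\n",
    "builder.add_conditional_edges(\"chunk_check\", demo_chunk_condition)\n",
    "builder.add_edge(\"answer_reasoning\", END)\n",
    "demo_graph = builder.compile()\n",
    "demo_graph.invoke({\"chosen_action\": \"\", \"visited\": []})[\"visited\"]\n",
    "# ['chunk_check', 'answer_reasoning']\n",
    "```"
   ]
  },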
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "1e37d7ba-2847-42c8-9f68-20cc517334d2",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/jpeg": "/9j/4AAQSkZJRgABAQAAAQABAAD/4gHYSUNDX1BST0ZJTEUAAQEAAAHIAAAAAAQwAABtbnRyUkdCIFhZWiAH4AABAAEAAAAAAABhY3NwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAA9tYAAQAAAADTLQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAlkZXNjAAAA8AAAACRyWFlaAAABFAAAABRnWFlaAAABKAAAABRiWFlaAAABPAAAABR3dHB0AAABUAAAABRyVFJDAAABZAAAAChnVFJDAAABZAAAAChiVFJDAAABZAAAAChjcHJ0AAABjAAAADxtbHVjAAAAAAAAAAEAAAAMZW5VUwAAAAgAAAAcAHMAUgBHAEJYWVogAAAAAAAAb6IAADj1AAADkFhZWiAAAAAAAABimQAAt4UAABjaWFlaIAAAAAAAACSgAAAPhAAAts9YWVogAAAAAAAA9tYAAQAAAADTLXBhcmEAAAAAAAQAAAACZmYAAPKnAAANWQAAE9AAAApbAAAAAAAAAABtbHVjAAAAAAAAAAEAAAAMZW5VUwAAACAAAAAcAEcAbwBvAGcAbABlACAASQBuAGMALgAgADIAMAAxADb/2wBDAAMCAgMCAgMDAwMEAwMEBQgFBQQEBQoHBwYIDAoMDAsKCwsNDhIQDQ4RDgsLEBYQERMUFRUVDA8XGBYUGBIUFRT/2wBDAQMEBAUEBQkFBQkUDQsNFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBT/wAARCALZATMDASIAAhEBAxEB/8QAHQABAAIDAQEBAQAAAAAAAAAAAAUGBAcIAwIBCf/EAFoQAAEEAQIDAgkECwsKBQMFAAEAAgMEBQYRBxIhEzEUFRciQVFWlNMIVNHSFiMyNkJSVWF0kpUkMzdxdYGTsbKz1CU0NWJjcnORtMEJJkOhw0RTgkaio6Th/8QAGgEBAQEBAQEBAAAAAAAAAAAAAAECAwUEBv/EADYRAQABAgIGCAQGAgMAAAAAAAABAhEDEhQhUVKR0QQxQWFicZKhEzOx0gUiMoHB8CNCU8Lh/9oADAMBAAIRAxEAPwD+qaIiAiIgIiICIiAiLznnjrQyTTSNihjaXvke4Na1oG5JJ7gEHosO5mKGPdy2r1as71TStYf/AHKgoqtzWTG2bUtnGYV+zoKURdDYsN/Gmd90wHptG3lIH3Z3JYzNqaH07RYGwYPHs6bF3gzC49d+pI3PXr1X0ZKKNVc6+7mtojre/wBlWE/LFD3pn0p9lWE/LFD3pn0r9+xbC/kih7sz6E+xbC/kih7sz6E/w9/sup+fZVhPyxQ96Z9KfZVhPyxQ96Z9K/fsWwv5Ioe7M+hPsWwv5Ioe7M+hP8Pf7Gp+fZVhPyxQ96Z9KfZVhPyxQ96Z9K/fsWwv5Ioe7M+hPsWwv5Ioe7M+hP8AD3+xqBqnCuOwy9An1Cyz6VIxTRzxtkie2SN3c5h3B/nUd9i2FII8UUNj0P7mZ9CjpuHuGjkdPi4DgLp7rWJ2gJPrcwDkk/ie1w/5BLYM9sx/f7tTUsqKEw+XstuuxWVa1uQYztI7EbeWK3GDsXMG52cNxzNJ6bjbcEFTa5VUzRNpQREWAREQEREBERAREQEREBERAREQEREBERAREQEREBERAVY1ltkbuCwjtjBftGSy0/hQxMMhb/O8RAjuLS4H1GzqsakHguq9LXXb9kZZ6TiBvymSPmaT6gTEG/xuHrX0YH67908bTb3WOtZ0RUW5x54Z463PUt8RNJ1bUEjopYJs3WY+N7Ts5rml+4IIIIPcvnRelrny346xxKvaMx+A1BlrOOngrZHJ0abH0qMs0YkY2V5eHfcOaSWscBv1IXsflCcLGkg8S9HgjvBz1X4i1dqzT2f1pxdwWq+
H+nGUIpL9GSfXmMz0DqWXxbQDPFPWY7eY7c7GHldts0h7R0AWjgxxxznELNa7qZbSGWoVsHmLlSvcZDB2QjhbFywODZ3vdYPO53mt5CCNnA9FL6X+UFi9QZu1h7+mtTaUykeOlytern6LIHXK0ZAkdEWyOBLS5m7HFrhzDcd+1Pw2kOImmLPFzTmKxHg0epr2RzGF1dHehEVWeeqxsbHwk9qHMlYPODSNiD6FRNDcEtS4vXGl8xW4ZDS8dfAZLE5a7NmK9q7dtzQsLbErg8mRhfEWhxcX7y7ljWjdBbta/KyszcB8pr/R2jM+6sKlexSv5apAys7tJGsduzwgPPJuQSByk7Fpe3qt8aVztjUmDr5Czhcjp6aUuDsflRF4RHs4gc3ZSSM67bjZx6Eb7HotLXeD2pMv8iyhw8FaGnquLTdSoas8zSwWYWxuMZe0lvVzOXmBI6777K8Y/jjg8TShZxAt4jhrnpG9oMLnM9S7cxdwlBbJsWlweAf9U93cg2QioJ+UDwuEbXniTpAMcS0O8e1diRtuN+0/OP8AmFYtK6603rqvPY03qHFahggeGSy4q7FZbG4jcBxY4gHb0FBicQ9qWnnZpuwnwrxkGvO/RjP34dPxojI3+cH0KzqtcSd5NC5qqzftr1d1CIBvMe0m+1M6f7zwrI1oY0NHcBsF9FWvBp85/hex+oiL50EREBERAREQEREBERAREQEREBERAREQEREBERAREQFgZvDwZ7FzUpy5jJOVzZIzs+N7XBzHtPoc1zWuB9YCz0ViZpmJjrgQWH1A42G4vLdnVzLR3NBbFaAHWSEnvHrZuXM7juOVzpV2OqucSa0JJ6kmMdV55TEUs3UNXIVYrlckO7OZgcA4dzh6iO8EdR6FB/YHHCOWnnM5SiA2EbLzpQ0fm7UPP/v07u5d/wDHXrvafb+/t+7WqU/4tqfNYP6MfQvdjGxsDWNDWjoA0bAKsfYRP7U57+ni+En2ET+1Oe/p4vhJ8PD3/aS0bVpRVb7CJ/anPf08XwlU9B47KaivauiuapzIZi81JQr9lNED2QgheOb7WfO5pHerpt0T4eHv+0lo2tqrxlqQTu5pYY5HbbbvYCVXPsIn9qc9/TxfCT7CJ/anPf08Xwk+Hh7/ALSWjasHi2p81g/ox9C/J5qWFpzWZn16NWMc8sshbGxo9bidgP51AjRE4PXVGecO7YzxfDXtS0Ji61uO1Y8JytuIh0c2SsvsdmR3FjXHlYfztAKZMKOuq/lHNNTzqRv1ZlKmSlidFiKTjLSZI0tfYlLS3tnNPc0Bx5QepJ5umzVZkRc6683lHUTIiIuaCIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiIC17wmIOV4i7En/AMzzb7/otX8/0LYS17wm38a8Re7755u4D5rV79v+6DYSIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAtecJRtluI3nA/+aJu4d37lq962GtecJNvG3Efb2om36bf/SVUGw0REBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERARYGbzNfA46S5ZD3NaWsbHE3mfI9xAaxo9JJIHq69SBuVWDqDV0p52YvD12nqI5Lsr3D+MiIDf8Ai/5nvXfDwK8SLx1d82Wy7IqR481h8xwfvU3w08eaw+Y4P3qb4a66LXtjjBZd0VI8eaw+Y4P3qb4aePNYfMcH71N8NNFr2xxgsu6KkePNYfMcH71N8NPHmsPmOD96m+Gmi17Y4wWXdFSPHmsPmOD96m+GnjzWHzHB+9TfDTRa9scYLM7ijq7IaB4fZ7UeMwjtRXMXWdaGMZY7B07G7F+z+V2xDOZwGx3I29O65R+SF8sifjLxWzWmqOhJqcGXuWM3byPjESNoxCCONoLRC3n3fGxu+4/fPzbHp92a1e9pa6hg3NI2INmbY/8A8a1HwK4Az8Acxq/
I4Clh5JtQ3TYIlnlHgsI3La7D2e/KHOcd/T5v4qaLXtjjBZ0oipHjzWHzHB+9TfDTx5rD5jg/epvhpote2OMFl3RUjx5rD5jg/epvhp481h8xwfvU3w00WvbHGCy7oqR481h8xwfvU3w08eaw+Y4P3qb4aaLXtjjBZd0VI8eaw+Y4P3qb4aePNYfMcH71N8NNFr2xxgsu6KkePNYfMcH71N8NZmN1XfgvVqucpVqotP7KC1TndJGZNtwx4c1pYTseU9QSNtwS0HM9GxIi+qf3gstaIi+VBERAREQEREBERAREQEREBERAREQVDiOf3PgR6Dl4Nx/M4rMWHxH/AHjAfyvB/U9Zi9Oj5NP7rPUIiIgiwcpnMfhDTGQuwUzcsNqVhPIGmaZwJbGzf7pxDXHYddgT6Fj29VYujqXH6fmslmXvwTWa1fsnnnjiLBI7mA5RsZGdCQTv032KglkRFQRFD6e1didVyZePFW/Cn4m8/G3R2b2dlYY1rnM84Dm2D2ndu469/eoJhEWNk8lWw2Nt5C5J2NSrC+eaTlLuVjWlzjsASdgD0HVUZKLBwOcpanweOzGMn8JxuQrR2603I5naRSND2O5XAEbtIOxAPrCzkBERAREQEUPkdXYnE6kw+Bt2+yy2XZPJRr9m89q2ENdKeYDlbyh7fuiN9+m6mFAUBrQluOxxHf44xo7vXdhB/wDYqfVf1t/o3HfyxjP+ugXbB+ZT5wsdcNhoiLx0EREBERAREQEREBERAREQEREBERBUOI/7xgP5Xg/qesxYfEf94wH8rwf1PWYvTo+TR+6z1NT/ACqMrlMJwI1JcwuSs4fJsfTZBeqSGOSIutwtJBBB7nEEekEg9CqxxPwP2Ot0zonD5TWub1FnLNm9EI9US0y9sUcYmfNaIc6KIFzCI4m7cz+jdt1ufVukcTrrAWcJnKnh2MsOjdLB2j4+Yse2RnnMIcNnMaeh9HqUdrjhjpviOMec/j3Wpce90lWxBZmrTQlw2eGyRPa8Bw2BbvsdhuDssTF0cr2GZPiJw54Uw6qy2Vfk8bxHnwMlqrlZY5XMY60xrnSxdnzyNEbGibla77ojbndvsjidlcpwx4p0bOGyWXvR1tCZi03F3MjPPBYmqNg7B7o3OIdIeZwLz5zubqStkjgFoFukLel2adhhwNm6MiacM8sYisjl2kic14dCfMB+1lvXc97jvM4/hrpzGZDDXoaDzbw9GXHUpprM0ro68pYZGHneefcxs85+56d/UqRTI0Tp6bM6Hl4Lahi1rnNS3NaWYq2XpZG8Z61hs9OSd00EP3MIjexu3ZgDldsd1X+G+R1BiuHfA/W0ur9RZTK5/NVsVkocjkpJqs9eVs7duxPmhzezYRJtzkg8znbrf2keA2g9CZ9mawen46eQibIyu91iaVlVr/u2wRve5kId6RG1vTp3KQp8JNJ0NOabwMGK7PE6dtx3sXX8JlPg80fNyO5i/mdtzu6OJHXqO5TLI5xs6z1Cdd6Z1tpu5qQaUy2s2YR0+Z1AZa9yKSxJBI2HH9nyxxtc13I/na/7WCQd91t75PJ/ylxbHpGubv8A09VSs3ycOHVjKS5B+nB4S+4MizluWGsr2RIJe2hYJOWF5eAS6MNJ677gneRyfDqTE5jK5/RIxOC1JmJIjk7mRqz2obLGNIB7Fk8QEn3Pn95A2O/QixEwIP5UuWyGC4C6svYrIWsVfiih7K5SmdFNETYjaS17eoOxI/nWvteV73DbXU+msfqLNZzEah0jmbV2hmcg+66rJXjj7OdjpCXMD+0ewtBDdx0A2Vr4o8MeJPErh1n9MZDUGk7UWRgZG1kGJs0/OErH7ukNmbYbNd0DDvuOoV00ZwV0XoC7kbmFwrYrmQiFezZt2Jrcr4R/6XNM95Ef+oCG/mSYmZGlDdnzfCzgppHDP1BPqC9pivdjr4TOHDwiCOtXa+WxYa1z9g57A1rWncuO42UPojUuqeIFDgriMzqbMVJL13UOOys
2NvmKW4yo6RsQfKwNJO0Td5Ghrj5xBaXFbyk+Ttw+fhMTiW4B0FLEvmfRbXv2YZK4lO8jGyMkDxG7Ybx78nQDboFI6e4K6K0naxM+HwcePdibNq3QjhnlEVaSwzkm5I+blDXAfc7coJJABJKmWRXvk85LIzY/W+Gv5O5l49P6ot4qnayExmsGu2OGVjXyO855b2pbzOJJAG5Xz8pTI5OhpHTkWKy97CT3tUYmhJbx8vZytimtMjeAdiDu1x6EEesFWC7ofL6emuyaDtYXBuyt6XJZU5ejYv8AhFh7Y2c7OWzH2fmxgEDcHpsB13+IdDZvU7YoNfXMHnadS3XyNKPE4+zQdFZhkEkcjnG1JzgOAPLsB067jotWm1hozXurdR8JbHFPT2F1HlJqkNbAzUr+XtvuzYs3bT61iRskpc4tDWh4DiQ135ui/eKOrs/8m/Magpaf1BmdSwS6Ot5dsGfuuvyUrUM8UTLAc/dwY4TOJZ9yTH0A6ronJcNdM5jI527fxEN2fOUo8dke3c57LFeMvLGFhPKNu1f1AB69T0G0ZpPghojRUOUixeCj2ylfwS4+9PLcfNBsR2JfO97uz2J8wHl69ymWRo3UWlp+FHFPhzmYNSZ3Wt7xDn7hdl77rMc8sdWF/NC3ujDyduVmzdg3YdNz88HMTxZ1RW0JreHK9tBlHVr+VntarltVrVWVu80cdDwRscD2gnlDHgtLNi53Urc+kPk86A0HnMfl8Jg31Mhj45YaksmQszCCOQAPYxskjmhpAGzdth6AF7aa4B6C0fqVmew2AbQyEUkk0IitT+DwPkBD3RwF5ijJDnAlrB3lSKZGwFX9bf6Nx38sYz/roFYFX9bf6Nx38sYz/roF9WD8ynzhY64bDREXjoIiICIiAiIgIiICIiAiIgIiICIiCocR/wB4wH8rwf1PWYszU+DOfxghjlEFmGWOxBK4EtbIxwI5gCN2nqCN+4lVl2R1FCeSTSNuV46F9W7WdGT/AKpfIxxH8bR/EvTwpivDimJi8X65iPq11wmkUJ42z/sbk/eqfx08bZ/2NyfvVP466fD8UeqnmWTaKE8bZ/2NyfvVP46eNs/7G5P3qn8dPh+KPVTzLJtFCeNs/wCxuT96p/HTxtn/AGNyfvVP46fD8UeqnmWTaKE8bZ/2NyfvVP46eNs/7G5P3qn8dPh+KPVTzLJtFCeNs/7G5P3qn8dRuH1vkM9Nk4qOlMpM/G2zStDt6rezmDGPLesw382Rh3G469/enw/FHqp5lltRQnjbP+xuT96p/HTxtn/Y3J+9U/jp8PxR6qeZZNooTxtn/Y3J+9U/jp42z/sbk/eqfx0+H4o9VPMsm0UJ42z/ALG5P3qn8dPG2f8AY3J+9U/jp8PxR6qeZZNooTxtn/Y3J+9U/jp42z/sbk/eqfx0+H4o9VPMsm1X9bf6Nx38sYz/AK6BevjbP+xuT96p/HXvWxOW1HdpnIY92Hx9WZll0cszJJppGEOY37W4ta0OG5JJJ5QAOpI1TbDqiuqYtHfE/SSItN13REXjMiIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAtf8KBtlOIfTbfU03o2/+lrfmH/f+P1bAWveEzeXK8RehG+p5j1G2/7lq/8ANBsJERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAWvOEpBy3EbY7/APmibfpt/wDS1f8AmthrX3Cfm8a8ROYuI+yebbmHTbwWt3fm/wD9QbBREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEReFy7Xx1WSzbniq1ohzPmmeGMYPWSegViJmbQPdFVvKno72pxB/OLsZ/7p5VNHe1GJ98j+ld9GxtyeEtZZ2LSiq3lU0d7UYn3yP6U8qmjvajE++R/SmjY25PCTLOxaUVW8qmjvajE++R/SnlU0d7UYn3yP6U0bG3J4SZZ2LSiq3lU0d7
UYn3yP6U8qmjvajE++R/SmjY25PCTLOxaUVW8qmjvajE++R/SnlU0d7UYn3yP6U0bG3J4SZZ2JrOZ/GaYxU+TzORqYnGwbdtcvTthhj3cGjme4gDckAbnvIC1LwT4n6LzWqNbY7G6twV/I5DUc81SpVyUMktlgqQEvjY15L2gMf1A28x3qKseutQ8POIejszprL6ixE+NylZ9WZptxnYOHRw3PeDsR+cBcV/IR4J4jhXxU1ZqjV2ZxtebCzS4vDPlssa2xzbh9lm56tLCGg7bHnd6k0bG3J4SZZ2P6NIqt5VNHe1GJ98j+lPKpo72oxPvkf0po2NuTwkyzsWlFVvKpo72oxPvkf0p5VNHe1GJ98j+lNGxtyeEmWdi0oqt5VNHe1GJ98j+lPKpo72oxPvkf0po2NuTwkyzsWlFVvKpo72oxPvkf0p5VNHe1GJ98j+lNGxtyeEmWdi0oqt5VNHe1GJ98j+lPKpo72oxPvkf0po2NuTwkyzsWlFB4nXGnc9abWx2cx96y4Ethgsse92w3OzQdzsO/wBSnFyqoqom1UWlnqERFgEREBERAREQEREBERAREQFSdSEZHXFChYHa1a1J1xsLurTKZA1ryO4loB237ubfvV2VHy/8JbP5I/8AmX2dF/XM90tQlURF3ZEREBERAREQEREBERAREQEREBERAREQEREGDmsXBl8bNXsN3BHMx46OjeOrXtI6tc0gEEEEEAgqZ0blJc5pDB5Kc809yhBYkOwG7nxtceg7upWDN+9P/wB0pwx/g20n/JNT+5Ys4uvB8p+sTya7FmREXnMiIiAiIgIiICIiAiIgIiICo+X/AIS2fyR/8yvCo+X/AIS2fyR/8y+zov6p8pajtSq1dxU1Pdw3EXhhjRWuNxuTy8kTrdLKmv8AbW1pntjmg7J3bRFrXHbnbs5rT12W0VROIuhL+rtU8PslTmrRQaezLsjabO5wc+M1Z4do9mkF3NK07EgbA9fQessqJhflKZTIUMLnLeiTR0nkc6dPnJjKskmjn8KfWZJ2PZjeIyNAJLg4EnzSACahxK4sav0/p3jndw9exj87g8hQrtM+Z8JrwQSRRls1eN1cCIuY9hdH53nPcebzRvbq3AjPw8GcNpF1zGnJUtUtzkkolk7EwDLOucoPJvz9m4DbYDm6b7dVkaw4CZXVtfjRWdkalSPWjqUmOlbzPdA6CrDH9ubygAGSL8Eu80+vosfmsMnXus9S4+7wsjzmDdh35XUbaloYXULi2GTs5nRRv3rN8Ihexj3Oaez2c1g3PevHK/KNl0zxUx+ks7gcfRrZHJjF1bEGoK9i9zPJEMslJoD2RvIHncxI5m7gKTz2h9ba7paBsZ8YCllcDqaHL22Y6xPJA+uyvPHtGXxhxkLpQdiANgfO9evYfk563oQ47HVpdKPp4vVjNTtys3b+MMoRbM3JYdybRuDHlvODJvyMGzRvsm/YNkcCszkMvf4ntv3rN1tPWVyrWFiZ0gghbBWLY2bk8rAXOIaOnU+tWTi3xCbwq4eZjVT8e/KMxzY3mpHKI3SB0rGbBxBG45t+vfttuO9VHAULPBHP63yOYm8N0xqLNeNKXivG3Ll6GxJC1sscscEb9ox2ILX+s7HbcLD4m5ejx74fZ3RWmzkauXyEMbopc1gsjQrAMmje7mllrhoOzTsOpJ9C1e0d4yX/ACgJdL5TUNLXWnPsYfi8E/Ucbqt9t4T1GP5HtOzGcsrXFg5BzAl42cV8YXj7kKeZxVTXOkXaLqZihYyGOuHItuAtgi7aWOdrWNMUgi3fsOcENcObcL74ocDJuJ+tcpat24K+DyOj7enHlpcbEc8tiKVkrW7cpa0Rk9Xb77DbbqoSXgrrTiTlcMeJF/BtxmFxt2jAzAOmdJdltVjWfPL2rQI9onP2Y3m85+/N0AU/MIPJcYtYa01Pwivs01d0npTN58PrWzlgZr9Y07DmMsV2AcgeOWQNLnjzBvsdl6n5bOnnZNtiOvi
ZdMOvigLbdR1fGZBl7Ltxj/3zs+br91z8nncmyzcHwi4nOm4Z47PXtLWcNonIxzNu032GW7sMdWWvGXRuZyMfs9u7Q4gncgjbYynDDhXrvhW2jpWi/SuR0PSuPfXvXGTjJsqOkdJ2BYG9m57eYtEnOOgBLVPzDeK1Tp/jZf1NxI1HgKem4WYPT1x1LJZazlY47EBbCJe18ELOYwncNa/m69+2wO0o/jvpWN7mmDU27TsdtJZYj/mKypGqOEWo+JvErB6itQ6dxmDqWDMzLUWWYczboPhc00p2OYAGu5/O3d026MB3WpnYIrT/AMtHBZ3O4VoqYpuCzN+KhTnh1FVmyTXSv5IpJqDfPjYXFu/nOc0OBc0ddvXS/G+Phxwu1VqDUd2bJzfZtlcRj4715sYc43ZGQw9tK7lijY1pO5IaxjDsOgCsHCThzxB4bw4PS9qXSmT0jh94IcoY5hk5qzWkQsdHyiNr2+YC8PIIb9zud1B3/k76msYLO46DL4mKWrq92sdNW5IpH8s75ZJJILcfcWfbXsDmHch2+wI2OfzD7wvywsDJjdTyZepS8PwlSG42DTeZhzMNxs0ohjjjljDdpTKWtLHtbtztO5B3EhxF4l8S8TwY17m5tGVtKZXHYmS3RnGajucp5XF7iBDsJI2jm5SHNcdhzdSRnag4Y6x4ncO87hNUv05p7JzSVrGLsaebLO2vPBK2Zr5XStZzgvYzzQ0bDfqSdxm3NI8QuIug9W6Z1vJpnGw5bEy4+CbAPsTObLIxzTK/tWt2b1BDBueh84q6xXdS691ZX0/wslz+FdjX5XUNGrYmw2onAjm2MRk3rDto5PtnPF5u3KBzHfp75L5S9mnBldRQ6Oms8OsVk3Yy5qPxgxsoLJhDLPHV5CXwskJBdzg7NJDSAsqzw415qrSmh6WoXaer5LTuosfkZH42xO6KarXZs4jniBEriSQ37nbbzlXMt8n/AFpZ0xnOHNTKYOLhzl8pLckuP7bxnXrTWPCJqzIw3s3bvL2iQvGzXfckhT8wnNU/KIzOCt6/kpaGOTw2iJ2tyl7xsyJ74fB4p3PhiMZLnta927HFo2aNnkuLW7ppXIsjSr2oHc8E8bZY3etrhuD/AMitP5TgxmruA44UIrNBsmuGyjGl0jw2Hmx8dYdt5nm+ewnzebzdvT0W1dN46XD6dxdCZzHTVasUD3Rklpc1gaSN9um4W4v2jOm/en/7pThj/BtpP+San9yxJv3p/wDulOGP8G2k/wCSan9yxMX5M+cfSWuxZkRF5zIiIgIiICIiAiIgIiICIiAqPl/4S2fyR/8AMrwqTqYNxmtqOQskRVLFJ1MTO6MbKJA5rSe4FwJ23I3Ldu8hfZ0X9cx3S1CTRO9F3ZEREBERAREQEREBERAREQEREBERAREQEREHxN+9P/3SnDH+DbSf8k1P7liw81la+Ix8s87uu3LHG3q+V56NYxo6ucSQAACSSFNaNxUuC0hg8bOOWanRgrvG++zmRtaeo7+oWcbVg6+2fpE82uxMIiLzmRERAREQEREBERAREQEREBeVqpBeryV7MMdiCQcr4pWhzXD1EHoV6orE21wKueFmjCfvSwn7Oh+qvzyWaM9ksJ+z4vqq0ou+kY2/PGWs07VW8lmjPZLCfs+L6qeSzRnslhP2fF9VWlE0jG354yZp2qt5LNGeyWE/Z8X1U8lmjPZLCfs+L6qtKJpGNvzxkzTtVbyWaM9ksJ+z4vqp5LNGeyWE/Z8X1VaUTSMbfnjJmnaq3ks0Z7JYT9nxfVTyWaM9ksJ+z4vqq0omkY2/PGTNO1VvJZoz2Swn7Pi+qqNwx4d6WvZLXjbWncVabX1FLDA2WnE8QxitXIY3oeVu7nHbp1cenVbiWvuE5JyvETc7ganmA7+n7lrev/smkY2/PGTNO1MeSzRnslhP2fF9VPJZoz2Swn7Pi+qrSiaRjb88ZM07VW8lmjPZLCfs+L6qeSzRnslhP2fF9VWlE0jG354
yZp2qt5LNGeyWE/Z8X1U8lmjPZLCfs+L6qtKJpGNvzxkzTtVbyWaM9ksJ+z4vqp5LNGeyWE/Z8X1VaUTSMbfnjJmnaq3ks0Z7JYT9nxfVTyWaM9ksJ+z4vqq0omkY2/PGTNO1C4nROnsBYFjGYLG4+cbgS1akcbhv0PUAHqppEXKququb1TeUvcREWEEREBERAREQEREBERAREQEREBERAREQEREBERAREQFr3hMCMrxF3Zyb6nmIPXzv3LV69f8At6lsJa84StLcrxGPK5u+qJjufT+5avUINhoiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiIC17wmbtleIvQDfU8x6b/Na3fv8A9lYeINnUVPROasaSip2NSQ1Xy0IMhG6SCWVo3DHBr2HztuUbOGxIK4z+Q38o3idxk4q6nx+QwmBpaeNiXLZmxBVsMmisOjZDHFE50zmjcxNJDgTs1/Xu2Du1ERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQYWby0OBw17JWA90FSB872xjdzg1pJAHpJ22A9aqDq+o8k3t7GorGKlfs41cbBXdFF/q80sT3PI6Au6bkbhrQeUSfFL+DnUf6DL/ZK9l6OBEU4ee0TMzPXET1W2+bXVF0J4nzvtpmPd6P8Ahk8T5320zHu9H/DKbRds/hj0xyLoTxPnfbTMe70f8MnifO+2mY93o/4ZTaJn8MemORdCeJ877aZj3ej/AIZPE+d9tMx7vR/wym0TP4Y9Mci6E8T5320zHu9H/DJ4nzvtpmPd6P8AhlNomfwx6Y5F0J4nzvtpmPd6P+GVY0Zwbq8PbedtaezuSxc+cuuyGQfFBTJnnd3uO8B2Hfs1uzRudh1K2EiZ/DHpjkXQnifO+2mY93o/4ZPE+d9tMx7vR/wym0TP4Y9Mci6E8T5320zHu9H/AAyeJ877aZj3ej/hlNomfwx6Y5F0J4nzvtpmPd6P+GTxPnfbTMe70f8ADKbRM/hj0xyLoTxPnfbTMe70f8MnifO+2mY93o/4ZTaJn8MemORdCeJ877aZj3ej/hln4LMZHHZqvicna8ZMtskfWuOjbHIHM2Lo3hoDT0O4cAO4gj0nMULe+/jSf/Es/wBw5NWJE0zEdUzqiI6omeyCJuvqIi8hkREQEREBERAREQEREBERAREQEREFW4pfwc6j/QZf7JXsvHil/BzqP9Bl/sley9HC+THnP0hrsEVf4hZ6vpXQWpM1bqy3quOxtm3LVgcWyTMjic5zGkdxIBAPo3XK3BfDUsBxr0tQr/YvTxurtL3Z7+n9PzzTRlm8DovCXSyvE0mz5AJAxhIEg6hJm0susNH6wxWvNPV83hbBtY2w+VkUxjczmMcjo3dHAEecx3eFMrh/SmP0/pP5Jokwgp4K/dzUdDVdzGuEN2Kg3LPil7Qt89obE/l3Pc1x7gvrjJRwGim8UcFw8fDW0y7QRv5GpjLBkq17wtNbXkGxIZK+Iy77bFwY1x371jPquO3lE5vUHiW7iK/izIX/ABja8F7alB2kdXzHP7SY7jkj8zl5uvnOaNuq54fwN0M/5SFfT79PwSYS5o+S/ax75JHQ2bTLccbbErS7aSUNkeO0du7d2++/VVrQ1dmW0z8mTN3h4ZmYc1bxzMhOeefwdkF1oYXnqRtFH+qFc3YOwkXMHycsPoTUuExWr9V2aVnipJlbDL9jIXy27XvCeRgqtYXgta1oa1sQGxAB2O+66fWom8XFNw3E+lqHXOV01jcVlLjcVJ4PezDIWCjBY7NsnYF5eHOfyvYTysIHMASFclyZo3E6R4U6L+URqePSsE4oZy/QdBS3rySVfBaruwEjesbOeRzi4fc7lw6hUmuc1wmz3EHHaQt6frZCfh7Yy7cXo98zoa9lkzGtl2klfzyiOR5DwGcwDTy+vGa3WO47tyHHU57dh/Z14I3SyP2J5WtG5Ow69wWFpjUdDWG
m8XncXK6fGZOrHcqyuYWF8UjQ5hLT1G4I6HqtHaT4ccIZdAXTpZuLzOUzGnJhLJ4b4TayMZY0vkmYXkyO5+Tdzhu0u26b7Kc+SLi9K43gVpF+nIMbBatYmlPlDQ5A+Sz2LWvdLy9efma4EnruCtRM3G6FXNfa7ocOcDHlslDZnrSXatENqta5/PPMyFh2c5o5Q6QE9d9t9ge5a/8AlU5SfGcNaLXXZ8Xg7ecx1PO360hifXx0k7Wzu7RvVgIIaXDuDiqBxk0Nw4wXCO/i9Esx1aG5nMCL8GHvEkNdfiDHnleSwuHPs8bE7b7nbombDqJFxzxYpWOCl7i3iOHjJsHRk0jjcs6tTdI5teR16aCzYjbzbtd4O0uJaQSWA77jde2O0Fd0LiM/qfh/rDSUuQg0tfsR4jSNWdr8iTFvDYkbJcn53MkALX8vMS4tJPMpm7LDqrV+paui9J5rUN6OaWliaU9+eOuAZHRxRue4NBIBcQ07bkDf0hZOEy0OfwtDJ12vZBdrx2Y2ygBwa9ocAQCRvsfWVzBHpXhZj/k86jzembmPvanyOh8g+XIHIma9e3pl0z5QXkvcHfdbjzCSPN7lJfJldfj1xP8AZ1Wii1jb0/RsYJ8UvaQMw4jY0wQ7gbSMl6zfjF8ZHmgJm1jppQt77+NJ/wDEs/3DlNKFvffxpP8A4ln+4cvoo/28qvpKwvqIi8hBERAREQEREBERAREQEREBERAREQVbil/BzqP9Bl/sley8uKI34c6k/NQlJJ7gOU9V6r0cL5Mec/SGux+PY2RjmuaHNcNi0jcEKu4bhtpHTs1eXE6WwuMlrzPsQvp4+GF0Ur2lj3tLWjZzmktJHUgkHorGirKDr6E01TyGTvQaexUF7KNLL9mOlE2S2094lcG7vB9IduvChw20ji8FdwlLS2FqYa7v4VjoMdCyvY37+eMN5Xfzgqxolhh+Jcf42blfAa3jRsBqtu9i3thCXBxjD9ubkLgDy77bgFYdbR2ApV8bXr4PGwQYyV09GKKpG1tSRwcHPiAGzHEPfuW7E8zvWVMIgrd3hzpm3mpc63A4qHUbmkMzbcfA65G7bYOEjmE7j0b7jp3KAHDbVQIJ4samI9RoYnr/AP0lsNEsI+vp7FVIsjFBjKcMeRlfPdZHXY0WpHNDXPlAHnuLWtaS7ckAD0KPwfDzSumX1nYfTOHxLqvaGB1GhFCYu0AEnJytHLzcrd9u/Yb9ysCJYQGC4faW0tkbOQw2msRiL9nft7VChFBLLudzzOa0F3X1qGv8K6lZ8kmkb54fzWZDLel0/jKLXXnegzGWB/MW7u2I2PnnfdXhEtAqWntE5LHm5HnNXZPV9KzCYXUstTotiAPedoa8ZduNxs4kbHuXvjuGOjsPjpMfQ0ng6NCSaOw+rWxsMcTpY3B0by0NALmuAIPeCAQrMiWGEcJjjlZcn4BV8ZTV21ZLnYt7Z8Ic5zYy/bcsBc4hpO27ifSo7TegtMaOmszYDTmJwctk7zyY2jFXdL1384saOb+dTyIKxW4X6Np2MjPX0lgoJ8lG+G9LHjYWutMeNntlIbu8OBIIduD6VLHTeJNnGWTi6RsYtjo6Evg7Oeo1zQ1zYjtuwFoDSG7bgAKRRLAoW99/Gk/+JZ/uHKaUNdG+uNKbeiSyT/F2Dh/3C60f7eVX0lYXxEReQgiIgIiICIiAiIgIiICIiAiIgIiIPK3VhvVZq1iNssEzHRyRuG4c0jYg/wAYKqD9K6iogQ47M0ZarOkfjGpJJM1voDntlHPsNhuQD06kk7q6Iu2Hi14eqnmt7KR4g1h+U8H7hN8ZPEGsPyng/cJvjK7ou2lYmyOELdSPEGsPyng/cJvjJ4g1h+U8H7hN8ZXdE0rE2RwgupHiDWH5TwfuE3xlx7rX/wAQt+E1lJpbT2Dkz+XZedjnR2cZNQ+3B/Jyhj5TIDzdNnsa4ddwD0XcuTzXg9gUqUbL2S3ic+qJQ10UL5OUzP8AU0BshH4xYQO
u+2pNR/JK0fqbi5pTiTbfO7VOGtutW7BDeXJbBxhEjRsAYXGMMcNz2cYY7m2Y5jSsTZHCC6618FrV8Ebpshgopi0F7G0pnBrtuoB7Ub/x7BffiDWH5TwfuE3xld0TSsTZHCC7XMdDXEmdnpC1gPBIq7JTaFeQuMjnOHZ9n2242DQeY9DzbDfY7Z/iDWH5TwfuE3xlK6RibPYzeT8HxzJLl57BYx8vameOLaJpld+OCxwLR9ztt37qxJpWJsjhBdSPEGsPyng/cJvjJ4g1h+U8H7hN8ZXdE0rE2RwgupHiDWH5TwfuE3xk8Qaw/KeD9wm+MruiaVibI4QXUjxBrD8p4P3Cb4yeINYflPB+4TfGV3RNKxNkcILteZDEa+rOa+tPgLkLWPdI0V5mSlwbuxrAZeVxcennOaB37lYOLyGoMlcq4+TL4XH5mek2/wCKblGRlpkRdyElonIPK8hri0kAlvXzm77RWJlsTSzuMs47I1YrtGzGYpq8zQ5kjT3ggppWJsjhBdU/EGsPyng/cJvjKVwGl56V45HK3WZDIhhiiMMJhhgYSCQxhc48x2G7iTvsNg0bg/lvCZbFxXpsFkTNM+KCOtjsq7mqQmM7OIe1vagvZ0Jc54BDXBv3Qf6T6xq4yxNHmIJcJELsVKtauOZ2Nt8g+1mNzXHbd3mbPDTzDYA8zS7FXSMSqMur9oiEun0RF8yCIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAo/MZCxRZXjq05rc9mUQtdGwFkO4JMkm7m+Y3bqAdz0ABJUgq3pmvFlMnf1DLWqCzOTSq2q1kz9pUje4sO/3LS57nuIb6OQOJLRsEph8SMXWaJJ33rrmNbPfnZG2awWjYOfyNa319AABv0CkERAXlZsR1K8s8r2RRRML3vkcGtaANyST3D869VX9fukbofPNhOKE8lKWKIZx5bRL3NLWicjr2ZJAcB1IJAQfugaD8bozDQzV8dVsurMmsR4gk1O2eOeUxE9XML3OIcep33Pep9ecEEdaGOGFjYoo2hjGMGwa0DYAD1L0QEREBERAREQEREBfhAIII3B9BX6iCvx6Rjxs7ZMNalw7JMg/IXIImtkitueNpGuDweTmOz94y0843O/M4O+Kup7OOFOvqGm2hbmbYe6zTL56TGxHfd8xY3sy5mz9ngAEOaHO5dzY1+OaHtLXAOaRsQRuCEH5HI2WNr2OD2OAc1zTuCD3EFfSrk+Bs6fgfNpxjGtgqR1q2De8Q0QGP3HJysJidyFzRt5vRm7fNUtjczTy7rjKswkkp2HVbEZBa+KQAO5XNPUbtcxw9bXtcNw4EhmoiICIiAiIgIiICIiAiIgIiICIiAiIgjdS5N2G07k77JacElatJKyTITdjXa4NJaZH/gs323d6BuvrT2KiwWCx+OgrVqUVWuyFtemzlhj5WgcrB6Gj0KN1+BJpaeB0WKnbZmr1nQ5t21WQSTxxljvW4h2zW/hPLR6VYkBERAVc1+3tNNOiIxLhNbpwlmbcW1n89mJvL+eQ77Rt9MhYPSrGq7rhokx2OjdHiZQ7KUjyZh20fm2GOBj9cwLeaMfjhqCxIiICIiAiIgIiICIiAiIgIiICgc/JJjsvh78bcrZa6XwGSrRDXw7SkbTTNI3AYWDzmkEc533HdPLQnHf5UnC3hln6emtV6mymIzNW7TuPr0KVkEx87XhzpBEWSQ7fdta4kgOaBv0Qb7RV/QOvcFxO0lj9T6auuyGDvh7q1p0EkBkDXuYTySNa4Dma7vA37xuCCrAgIiICIiAiIgIiICIiAiIgIiICIiCu63Y6XH46IRYqcOylIlmYP2vZthj+aMemZvLzR/67Wn0KxKu6zj7VuEb2WKlHjSu4jKu222JO8PrmG27R/GrEgIiICrutGl8eGaI8TLvlKxIyx2A2dvvD65htuz86sSrmsgHOwIIw5/yrD/pc7Hud/m/+3/F//JBY0REBERAREQEREFey+tauMuyU4Kd3K2o
tu2joxBwiJG4DnOc1ocRseXffYgkAEb4HlEl9lc9+pX+Mo3Q7jLgnyu6yS3bj3u9JcbMm5U+vVnCwsOZomm9u+WptGpheUSX2Vz36lf4yeUSX2Vz36lf4yzUUy4W57zzLxsYXlEl9lc9+pX+MnlEl9lc9+pX+Ms1Ey4W57zzLxsYXlEl9lc9+pX+MuZvlj8BpflKyaVv43AZTGZjG2RXt2bDIB2lFx3eBtKd3sO5aDsDzu3IXUyJlwtz3nmXjYr+mtQ1dIadxuDxWjs5WxuOrx1a8IZX82NjQ1o/fup2Hf6VJ+USX2Vz36lf4yzUTLhbnvPMvGxheUSX2Vz36lf4yeUSX2Vz36lf4yzUTLhbnvPMvGxheUSX2Vz36lf4yeUSX2Vz36lf4yzUTLhbnvPMvGxhDiI/fztL51jfS4xwHb+YSkn+YKx4jL1c5SbapyGSIktIc0tcxw6FrmnYtcD0II3CiVH6PeW6x1REOkZZUmI9by17Sf+TGj+YLnXh0VUVVUxaY5xH8nWuaIi89kREQEREBERAREQV3WMfayYD7Ti5uXKQu/wApu5S3Zr/Og9cw/BHq5lYlXdYRGWXT+0GMm5cpE7/KTtizzX+dB65h6B6i5WJAREQFXdYAmXAbNwzv8qRf6X7x5r/83/2/4v5uZWJV3WDC+XAbRYqXbKRE+NDsW+a/zoP9t+L+bmQWJERAREQEREBERBrvQf3uN/Srf/UyKwqvaD+9xv6Vb/6mRWFexjfMq85WeuRFpzTXEDiBxI1Dl7emK+nKGkMVmZcQTlmzyXL3YScliWMxuDIm8weGBwcTy7nlBWrMNr7WfDDH8XNV46rg7Wk8Tra5LkK1rtvDp2EwNlMTmkMZytII5g7mO/3PTf55qR1si09a4xZmDTvG6+2tQM2iHWRjmmN/LL2ePjst7bz/ADvPeQeXl83b09VAR8XeImpszn6Gna+ma4w+nsbm3yZGGw8zSWYZXmFrWSDYExHZ5J5enmv33FzQOgEXH+p/lFY/T/EbTGvX1WNv6i4eQeLMXLMGtfbsXGOZG6Q7BrGkkuedgGtJ9QXWOAblGYSi3Ny1JsuIW+FvoRujgMu3ndm1znODd+7ckpFVxnotYcQuIWpY+IOH0JoutiznbePmy9u/mhI+tUqskbENo43NdI9737AczQA0nqtf61Gvjxw4ZR136cbq52AzLbE8jJ3UGN7ar57YwRI4kBo5S8bFx847dUyOj0XNmQ+VHma2ldP03YynX1tkMtksPZMdO5fpV3UJCyxM2Gu108jSez5WjbbtPOcA3c+1D5SGq5cH4vdpuKbVl3OwYPE2p6N3G4652sTpTYMdhgma2Jscge0cxJaNnbO3EzQOjUXNPHLUGf0VhOHmZ4j3sGIsfrerYNrCQTxx9g2naJ5o5HPdz77gBpO/TbqdltzgxrPL8RtC1dU5SKjVrZdxt42rSJc6Gm796Ez+Yh0pHV3KGhpPLtu0k2KrzYXpRukPv41R/wACl/VMpJRukPv41R/wKX9Uy6T8rE8v+0NR1SuqIi8pkREQEREBERAREQV3V8BnlwG1TH2uTKRPJvv5TF5r/Pi9co9A9RcrEq7rGv4RJgD4HRt9nlIX73peQxea/wC2RfjSDfoPSCVYkBERAVc1jH2kun/tOLm5crE7/KTuUs8x/nQeuYegeouVjVd1fEZZ9PbQYyflykbj4ydsWAMk86D1yj0D1FyCxIiICIiAiIgIiINd6D+9xv6Vb/6mRWFV7Qf3uN/Srf8A1MisK9jG+ZV5ys9ctSY/gxqPSupcrY0nrt2E05lcm7LWsNPiY7TmTyODp+xmc8dm2QgktLXbFxLdl55n5P3jbhxxJ0p4+7L7McpZyXhfgfN4H2vZ+Zydp9s27Pv3bvv3DZbfRcMsI0vrb5P+X1BY1/DhNanT+H1rAW5Km7FtsyMmNYVy+KQyN5WuY1nM0tJOx5XMJ3E9pLg19i+Z1Ff8ceE+N8HjsN2fg3J2XgsczO0
35zzc/bb8vTbl7zv02UiZYGldN/JixeNbhq+YvxZ6hR0YNHTVpKXZ9uzna50wdzuLCQNuUb7b783RTWDy+pOGOBxumbWndTa/lx9dsIz9GOjC2wwbhgc2a21xe1vKHO2AcQXenYbQRLRHUNQ5nRGY4kZrF62w0mV4Zatxkc2M/wAr1K11tum8se5kkUU7mlvO0FpEjXAtduNiFM4vhblGa00pqfNan8dZPC46/Qmf4A2AWjZlieHANdtGGCINDdnEjYl24JOxUS0DSM3ybJIYYruK1VLitUUdR5PP43Lx0WyMgF2Rzpq0kLn7SsLXBpPM0ktBHL3Kb1DwczOsdH0aed1pNY1VjMozMYzUFPHRVxTnYC1rWwbuDo+VzwWvcS4PO57ttpomWBqa9wc1DqmLTZ1ZrODO2MLqGvnYzHhWVoiIoZIxC1gkcRu6Tn53OcQW9B6rNw54deTifUNalkO1wF++7IUcWYOXxc6Tzp42P5iHRukLntbyt5S5w6gja5oloBRukPv41R/wKX9UyklG6Q+/jVH/AAKX9Uy6T8rE8v8AtDUdUrqiIvKZEREBERAREQEREFc1nAJjgSa+OsdnlYHjxhJydmdnDmi9co380encqxqu60h7WHDO8Fx9ns8rWf8A5Rk5BH5+3PH65Rv5o9J6KxICIiAq7q6ETXNNAwY2blyjHb5F/K9m0MvnQD8KX1D8UvPoViVc1VEJsvpMGtjrHLlHP5r0nLJFtUsefXH4Uu+w2/EdIfQgsaIiAiKDv6zxNCSWFth1+1FZiqS1cdE61NDJJ1YJGRhxjG3nFz9mgdSQOqCcRV8XtQ5GYCDHV8TDDkTFK7IPEz7FRo/fImxO2aXno3nO7QN3N381flbR4fPTs5TKZDL2qlqW1A+SbsI2F/QMMUXIx7WDo3tA4jv3LuqDIl1fimXadOKybdm52/YtqRumaTD++Bz2gtZsfN88jziG9/RYlbIaizUVWWHGRYGvYqSOkGSkEtutOekbTFETG4D7pxEvqaPSRN47GU8PSip0KkFGpECI69aMRxs3O52aAAOpJWSg1hh3y8PseMTnH2LDo5JJI8nFUe6OyHvc/ciNpEbgXEFp9QI3BWV9n+D+cz+5zfUWxUXoaTRVrrpm/nb+JavEtdfZ/g/nM/uc31E+z/B/OZ/c5vqLYqJpGFuTxj7U1NdfZ/g/nM/uc31F5N4ladfakrNvvdZjY2R8Iqzc7WuLg1xHJuASxwB9PKfUVes5lxhKHhPgdy+4yRxNr0YTLI5z3hg6dAGgu3LnENaASSACV+YPFy4ukG2rIv35Dz2bnYMiMz/91o6Bo2a0EkhrWgucdyWkYW5PGPtNSiWeJ+mqcteOxkTBJYf2cLJK8rTK/YnlaC3qdgTsPUvf7P8AB/OZ/c5vqLgrVGR446g+WLorPa603kKWPxOeosipUJG2qOLhsTsiYx8sTnRxvkBaCXlrnczemxav6dJpGFuTxj7TU119n+D+cz+5zfUT7P8AB/OZ/c5vqLYqJpGFuTxj7TU119n+D+cz+5zfUXnDxJ07YdK2K8+V0T+zkDKsxLHbA8p8zodiDt+cLZKicrip3zR3sdO+C5B2kng4eGQXHGPlayfzHHYERnnaA8cgAJaXNc0jC3J4x9pqVD7P8H85n9zm+on2f4P5zP7nN9RXbFZePJscwtFe9C2M2qbntdJXc9gcGu5SQe/vBIOx2JWemkYW5PGPtNTXY19hXHZs1l7vQ1lGdxP8QDNypvRmNsC7lsxZhfV8YGJsMEo2kbFG0hrnj8EuLnnlPUDl32O4FpRc68emaZpoptfvv37IL7BERfGgiIgIiICIiAiIgruuITLjaDm1cdbMeUov5ck/kYweExgvYf8A7rQd2D0v5R6VYlXdfV/CNNP2q4+2YrVSwI8pJyQNMdmOQPLvQ5vLzN/12tViQEREBV3PxGfU+l2+CY+w2KeeftbTwJ65ED2c8DfS49oWOPoa93rWfc1DUp5Onjt
p57loyBjK8D5Gs5GB7u0eByxjYtALyNy5oG5KpxoZrVGs8a/KMxmKjhwdkT16djtcjUmnka1pZNsC1hZG7ctaPPj6OcAguWZ1FjsBXnmu2Wx9hC6w+KNpkmMYIBc2NoL3dSBs0EkkDvKjbuWz2SiyMGGxbaM0bIHVL+Y28HnLyDJ9qY7tRyN9Dwzd3Tu3cpHFaaxuGfHLWqtNtlWOmbs7nTWpIY/uGyTvJkk23J3e4kkkkkklSaCvXtHR5p+RZl8jdyVG1NFLHQMghiriPbZjTEGue1zvOcJHOB7tuUbKbr1IKnadhDHD2rzI/s2hvO897jt3k+teyICIiAiIgIiICIoXVmTnx+LbHRsY+DKXJW1aTclM6OOSQ7kgcvnOcGNe4Nb1PL3gbkBj4qEZ3OS5ixV7NlJ0lTHTR3u1jnhcIzJKY2+Y13O1zBvzODWnq3tHNVgke2JjnvcGMaCXOcdgB6ysfFYqng8ZUx2OqxUqFSJkFetAwMjijaA1rGtHQAAAAKI4g5TxTo7JytysmEnljFWvkYqwsvrTzOEUT2xHcPIe9mwPT19N0HzoK2ctp2PMDIx5SDLvdkK1mKqawNaQ81dpaQHEth7Npc7qSCdh0aLGvwDYAb7/AJ1+oCIiAiIgjcniX2porVWxJTuROa4vi5QLDG820Um7Xbs3ce7qNyQRud/TDZN2VpMllrSUbQ2E9OZ7HyQP2B5XFjnN32IPQncEH0rOUDnKJx1o52jHWhssaxuRkNR0s1qpHzu7NvZ+eXMMj3MGzurntDfthICeRY+OyFbLY+repzNsVLUTZoZmHdsjHAFrh+YggrIQEREBERAREQEREBERBX+IFJ2R0NnoI6uPuzGlK6KvlXFlSSQMJYJnDq1nMBuR3Dc+hTsUrZ4mSMcHse0Oa5p3BB9IPpXxcqQ36k9axGyaCZjo5I5Bu17SNiCPSCCofQUk0misGLLcey3HTihnjxMhkqMlY0Ne2Jx68gc0gb9QBseqCeUBLJc1HYLKdqTHY2vPBI29VkjkddDSXSRAEODGbhjS77o/bAAzZrz76vyVnE6ZyNmjPj6+REJZTflpjFV8Id5sIkcOoaXuaOnU77DqQs7F4yphcbUx9CrDSo1YmQQVq0YjiijaAGsY0dGtAAAA7gEHxiMLj8BUNXG0oKFd0j5nR14wwOke4vkedu9znOc5zj1JJJJJUTpYx5TI5jOMGJsRW5WVql/HbvkmqwggNmkPRxbM+1sG+aA71ly9tTZF57HDUb7sdmcjHJ4NYbUNgQtZt2khH3I2DgAXnl5nMGzt+UzFWrDRrRV60MdevE0MjiiaGsY0DYAAdAAPQg9UREBERAREQEREBERAVcuTjIa8x9ITY2VmPpyXpa0sfPcilkPZQSxnuYwsFthPe7fYdA7exqu4Kwbuq9TyC5QssrSV6Jirx7T13NhExjmf6SRYa9re4NeD+EUFiVe1nbFari4/Gs+JfYydSNkkEHaum2lDzCeh5WyNY5hd6A4qwquauvNqX9MQnKz419rKiFkcMPaC2RBM8wvP4DSGF3N62AelBY0REBERAREQEREFf09PJVzOaxM0uRtOikbejsXIQIhHO55EUUg6P5HMeNj5zWuYD0LSbAq7kCa+u8PJ/lmQWKVmAsrjmxzCHRPD5/xZOhaw+kGQH0KxICIiAixsjdbjsfatvaXMgidKQO8hoJ/7LXlDTlbU+OqZPONfkL9qJkz+aZ/ZRlzQeSNm+zWjfYdNz3kkkk/ThYMYkTVVNo48ls2Yi115PdPfkyP9d/0p5PdPfkyP9d/0rvo+Fvzwj7jU2Ki115PdPfkyP9d/0p5PdPfkyP8AXf8ASmj4W/PCPuNTYqLXXk909+TI/wBd/wBKeT3T35Mj/Xf9KaPhb88I+41NiqvaP7Oq3L46MYiLwLISjwbEnbsWybTN7Zn4ErhLzu9B5w78JVvye6e/Jkf67/pXwzhtpqOSSRuJha+QgvcHOBdsNhud+vRNHwt
+eEfcanGHy/8AR2uqHH7S79BXsxVn1rDWidVxluSFlm7TlBie8NcATGHRPa533BG+423XTfE2nnuGnyL9TUdT6gk1NnqWmZ6t3KzNHNO97Cw9dhzbB/IHOHM4NDnbuJKuUnDLS8s8U78NA+aHfs5HFxczcbHY79N/Tslrhlpe9XfBZw0FiB42dFKXOa7+ME7FNHwt+eEfcanMf/h5ay456oksWdXtu5zQM9bkr5jP2X+FMka57mmAu3M7XF7muLu4Nbs8cgY7uZa5bw704xoa3FxtaBsAHu2A/wCa/fJ7p78mR/rv+lNHwt+eEfcamxUWuvJ7p78mR/rv+lPJ7p78mR/rv+lNHwt+eEfcamxUWuvJ7p78mR/rv+lPJ7p78mR/rv8ApTR8LfnhH3GpsVFrLKYOvpHE3cvg2uo3aUL7DWNlf2U3I0uMcjSSC1222+2433BBAWyKthtutFOzcMkYHjfv2I3XDFwYw4iqmbxP7cyYeqIi+ZBERAVc0dY8Mfn5hdpXWuyszAaUXIY+RrIzHIfwpGlhBd/EPQrGq5oS027ir0zblK8Dlb8fa0IeyY0stSxmNw9MjCwse70ua4+lBY1XNU5AU83pCA5OfHm3lHwiCKDtG3dqVqTsXu/9No5O05vxomt/CVjVd1LkDTz2koBlZaHheRkiNaOt2rbwFOw/snO2+1Acvac3pMQb+EgsSIiAiIgIiICIiCu6g6ao0sf8tHexO3bH/wCaf5u872/9Tp5n+0LFYlW9Rlo1PpMGTLsJtz7Nof5q/wDc0vS1/qelv+0DFZEBERBF6q+9jMfoc39gqvaZ+9zFfokX9gKw6q+9jMfoc39gqvaZ+9zFfokX9gL0cH5M+f8ADXYkkRa60t8obh9rXJYqjhtQC3NlQfAZHU7EUNhwaXOjZK+MMMgAO8fNzjYggEK3hlsVFr+9x80FjdUP0/Z1DFHkY7LaUjuwmNaOwSAIX2AzsmybkDkLwdzttusLH8a8QzPcQ3ZXN46pgdKurRTOkq2q9is9weH9qZWBkgc5o7Mw83MPWS3eXgbNRaO118qLAVuHep87pC4y7ewEuPbdZlcfZrR12WLbIj2gkbGQ7kL3Ab9PNcQQesnqb5ROAs8M9e57RuRgyeY0zjJbzqd6rPDsRG50bnRvEb3Ru5Ts5p2Ox2KZoG3kWvuHXHPSHEe2zGYzMxy5xlRluWlJXmrucwgbyRCVre0jBO3MwuHUdeq+tM8etB6x1HHgsPqGK5kZjK2uOwlZDaMe/aCCZzBHNy7Ens3O2AJ9CXgX9FSNFcaNH8Rci6lpzJzZSZrZS98dCw2KMxv5HtfK6MMa8HbzCQ4gggEEFWnOZyhprDXctlLcVHG0oXWLFmZ2zIo2jdzifzAK3iRnIqJi+OOiMvpTJakizfYYXHFrbNm7VnqlhcByAMlY1zubmby8oPNuAN91U9cfKPw8fDLUeoNGXa+SyuHlpsmo5OpPA+IT2I4w58MgjkALXOLXdASPTsQpeBudFTdccYNJcObtannsq6tdsRGdlWtVmtTCIHYyOZCx7ms36c7gG7g9ei8NQcb9D6ZwuGytzUEElPNN58b4DHJbkuNA3Loo4Wve8AEbkN2G/XZW8C8otOam+U5pvAas0TjYYb2QxmpKlm6MhVxtyZ0TI9gwCJkLnOLnFwcOhYGguADmlbjSJieoQ+s/vPzv6BP/AHblbsL/AKGofo8f9kKo6z+8/O/oE/8AduVuwv8Aoah+jx/2Qs4/yafOfpC9jNREXnoIiICrugrfh2Aml8YVcn/lHIN7enD2UY5bkzezLfS9m3I534TmOd6VYlXdBWjc08+Q3qeRPh95nb0IuziHLblbybfjs25HH0ua4+lBYlXNS3xU1DpKDxvJj/Cr8sfgjK/aC/tUnf2Tnf8AphvL2vN6TEG/hKxqu6lyHgeodJweN5Mf4Xfli8EZW7QX9qk7+yc//wBMN5e05vSYg38JBYkREBERAREQEREFd1FIWal
0q3my45rUw2oAGqf3NKf3V6mfi/7TkViVd1C0nUulSH5doFqYltAE1XfueTpa9TPS3/aBisSAiIgi9VfexmP0Ob+wVXtM/e5iv0SL+wFYdVfexmP0Ob+wVXtM/e5iv0SL+wF6OD8mfP8AhrsSS5Y0pozPVeA/AChJgsjDkcXqWjYu1n1JGy1Ix4SHvlbtuxoDxuXbDzh611OiTF2XHmh+FtKjRscP9daY4jZO7LmJ2yWMbkL/AIkuQS2nSssksmbAwAPDntIDuZpPKSVOcWdFZq/leNmQrYnKyS+Eacv4l9bHyWG2p62zujBsZmNcB2gZu4DfYEgA9TqI1ZpPE64wFvCZym2/i7QaJYHPczm5XBzTzNIIIcAQQQQQs5dQ5Iy5s8Q9L8T7Ute/PxBzEmnpbOnIsDcpuhpV70YZIyOdgkmB+3udJtsA3bYAbm8/KB0fnM3rDiHLjMNfux3uGdjHwy1qz3tns+EyFsLXAbOk2duG9+x32W5tB8JdKcNJLkunsWalm4GtsWrFma1PI1u/K0yzPe/lG52bvsNz0VvTLq1jmTLYrM8atW6Lgxum87phum8RkoL2WzdF1MCWxSNZkEJPWXz3c7nN3aOzGxJIWDpynntU4LgroiLRObwGS0ZkKVrMX79Iw0oGVK74niGf7mYzOcAOzLujiXbLqlFco1T8l7A3NN8F8VSyGOnxd4XcjJLXtQuhlHPenc1zmuAPVpaQT3gj0bK48Ssfjstw/wBQ08viLWfxk9GWOxjKLOexZYWndkY3Hnn0bEdduoWfqbSeF1pizjc/iaeax5eJDVvwNmjLh3HlcCNwoPT3BnQWksvBlcJo3BYnJwcwiuUsfFFKzmaWu2c1oI3BIP5iVbWiw5py+ntfaz0NYrso6vzGlNL6kxeTxceUjfjc9dpsjkFqFpBje50TnsdG88rnFpG5IBUzqzh7Q1jwv11lNKaW13JqI1addh1bPcfZuQxW47LooGWpXO83s3egbudsN9yur0Wcg5Y1TiJoeMGV1lmMDxCsae1Rh8e+g/S0t6vZpSxNfz1rdetI17Se0DgXjZpLxuCSs/HaZZwT4g6T1Ri9F6jtaOl0s/FMp1YJMjkMTZktutv7aMOfIQ/tC1zml2zowD02K6YRXKND8Qc7kJNbcKOIUek9SS4inHlK12lBjnS36vhEcYic+uwucATCd9t9uZu+y3pXmFmvFKGPjEjQ7kkbyubuN9iPQfzL0RaiLCH1n95+d/QJ/wC7crdhf9DUP0eP+yFUdZ/efnf0Cf8Au3K3YX/Q1D9Hj/shZx/k0+c/SF7GaiIvPQREQFXdBWDa086Q28fd/d95va4yPkh6W5Ry7fjt25Xn0va8+lWJV3QM/hOnTJ4Vj7m926O1xkfJD0tSjl2/HbtyvPpeHn0oLEq7qXI+B6h0nB43fjvC78sXgja3ai/tUnf2Rft9qDeTteb0mIN/CViVd1RkfAc5pGM5d+NbbyclfwVtbtRfPgdl4hc7/wBIDs+15/SYQ38NBYkREBERAREQEREFa1LK2PVOkWG/eql9ucCvWjLobP7mlPLMR0a0bcwJ/Ca0elWVV7UFswak0vEJ8hEJ7UzTHUiDoJNq8h2ncerWjbdpHe8NHpVhQEREEfqGvJbwGTgiaXyy1pWMaPSSwgBVfSc7LOlsPLG4OY+nCQR/uBXhVi/w/pWrctitdyOLdM4ySR0bJZG5x73chBaCT1PKBudyepJX2YOJRFM0V6u1Y6rPRFh+TlntDnfemfUTycs9oc770z6i758Le9pW0bWYiw/Jyz2hzvvTPqJ5OWe0Od96Z9RM+Fve0lo2sxFh+TlntDnfemfUTycs9oc770z6iZ8Le9pLRtZiLD8nLPaHO+9M+oo/UWhmY3T+Tt/ZXmaHg9WWXwqSVsrYeVhPOWBm7gNt9h37bJnwt72ktG1OIojD6CZdxNKx9lGatdrAyTt2TNY2TdoPMGlm7Qe/Y926y/Jyz2hzvvTPqJnwt72
ktG1mIsPycs9oc770z6ieTlntDnfemfUTPhb3tJaNrMRYfk5Z7Q533pn1E8nLPaHO+9M+omfC3vaS0bWYiw/Jyz2hzvvTPqJ5OWe0Od96Z9RM+Fve0lo2sDXErYdGZ17zsPAZx6ySWEAAekk9AFdMZC+tjakUg2fHCxrh6iGgFQeP0DSp24bFi5kMo+FwkiZesc7GPHc7kAAJHeCQdjsR1AKsy4Y+JTVTFFGu2tJ2CIi+NBERAVd0BP4TpiKXwjG2uexaIlxLOSuR4RJ3D8Ydzj6Xhx9KsSrvD6Z1nRuLndZxtwzRmUWMQ3lqyBzi4OjHqII6+k7n0oLEq7rDIeLJMBK7LSYqJ+Uigexlbthb52vY2Bx2+1gvcx3P6CwDuJViVd1/fOJ0rayPjSbDxUZILc9uCt4Q7sI5mPlZyekPja9hI6tDyR1AQWJERAREQEREBERBXc/aMOp9Lwi1fg7aefeGtFzQzbQPO0zvwWjvHrcAFYlXM5Z5NX6ZgF67XMnhL/BoIuaGwGxjpK78Hl5gW+sqxoCIiAiIgIiICIiAiIgLDzJkGHvGG0aM3YScloRdr2LuU7P5Pwtj15fTtssxEEVpTJxZrS2HyEF8ZWG3ThsMvti7IWWuYHCUM/B5gebl9G+ylVXdB5E3tPtglzHj29j5paFy6angpfPE8tfvFsA3uHUeaRs5vQhWJAREQEREBERAREQEREBERBj5G5FjsfatzTRV4YInSvmmdysY1oJLnH0AAbkqN0VXkqaOwUMr6UszKMDZJMbF2VZ7uzbzOiZ+CwnctHoBC8NfWhDpa3XFqhUsXyzH13ZKLtYHSzOEbGOj/D3LtuXuPp6bqfYxsTGsY0MY0bNa0bAD1BB9Lwv1n3KNivHPJVfLG6Ns8W3PGSCA5u/pHeF7oghdGZQZfTFCczz2pWsME01mDsJHyxuMchdH+Cedjug6ercKaVfxT5MdqnK0JH5S1HbDcjDNYjDqtccrYnV45B1Gzmdpyu6/bncpIHKywICIiAiIgIiIK5krYGvsDTF+3C99C7Y8Dji3gnax9ZpdI/8ABc0yt5R6ed5/BVjVdNsy8QW1m37TW18WZJKIg/c7jJKAyQyfjjsntDR6HEn0KxICIiAiIgIiICIiAiIgIiIIKxJZxGo2TPkv3aGS7Ou2vFA2SKlI0PJkLh57WvHK07hzQWtPm7kmdXlbqQX6s1azDHYrTMdHLDK0OZIwjYtcD0IIJBBUFHLY0rIyCVk1zDOfWqU/B4prNiAu3YTMd3Oczm7M9p15edxfs1hegsSL5Y9sjGua4Oa4bhwO4IX0gIiICIiAiIgIiICIoW9m3WLj8dijHbuQzxxXS2Vo8CY9pfzu3B3dygbM2J3ewkBp3QY8s7c9qplevcYa+GcXXar6RdzTvY0w8szvNaWtLnFrd3efHuWg7OsSxMTjhicbXptsWLfZN5TPblMssh9LnOPeSfVsB3AAABZaAiIghNV4+axShv04J7eSxj3W6tWC34P4S8Rvb2T3HzS1weRs8coPK7oWhwlalpl2tHPGRyvG+wcHbH0jdpI3B6dCe5eyrEroNE5F0zjQx2nrsvnhkD2Pbfll+7c4bs5ZS7qSG7P6kuMnmhZ0REBERAREQV7CWX3dW6keLtyWCsa1LwOaDkghkEZlc+N34Zc2eMOPcCwDvBVhVe0LK+7gjkH2MjO3IWJrcbcnF2UsMb3ksjDPwWtbygA9dup6kqwoCIiAiIgIiICIiAiIgIqbY1dlspJI7A0qUlGN7ohbvTPb2zmu5XFjGtPmbggOJG+24BaQ4+XjrWPzbB/0031V9cdFxO20futl3RUjx1rH5tg/6ab6qeOtY/NsH/TTfVV0WvbHFbJV2Jl0uySfC1zLj447ViXDVmN7SxO89oOxe97WMcXc45XENJk3JbsSdc4j5WGh9Qcda3C7HWJZ8w+jJYmsPaY44bLQ13gZBG/bBnaF4O3IWhh3fzNZaruS1rapzwxDD05JI3M
bZglkMkRI2D2h8bm7jvHM0jp1BHRciwf+Hzk8NxCq6zwut7VfO17wyQtZCY25ZJ+fnL5HmNpeXHfmJ6nc+tNFr2xxLO+EVI8dax+bYP8Appvqp461j82wf9NN9VNFr2xxLLuipHjrWPzbB/0031U8dax+bYP+mm+qmi17Y4ll3RUjx1rH5tg/6ab6qeOtY/NsH/TTfVTRa9scSy7qjcbOLGN4I8Mc7rLJxGzDjoeaKo2QMdZmcQ2OIEg7cziNzsdhudjtsvrx1rH5tg/6ab6q078pbgbqv5SenMVgchmqGCxVO0bc0NLnebL+XlZuXN6coL/R+F+ZNFr2xxLNncPuLWM4+aPo5jRN2QYS/XmitZEOay3jbHI3aHsnNe3tml/NueZg5QRzhwJ2JUpxUYRHE0gAAFz3F737NDQXOJJcdgBuSSdu9c1/Jy+TpY+TNTzUOm5YL8mY7A25Mlbe8bxdpyljWRNDf3x2/fvsPUtyeOtY/NsH/TTfVTRa9scSy7oqR461j82wf9NN9VPHWsfm2D/ppvqpote2OJZd0VI8dax+bYP+mm+qs3EaqvsyEFHOU69WS0S2tZpzOkikeAXFjg5oLHbAkd4Ox6g7A5q6NXEX1T+6WWpfjm8zSDvsRt0Oy/UXyogcM+fCzx4a261ZhYxoq5S9Yie+0TzkxEDlcXsazfctO7diXOdz7TyxcjjKuWrtht14rLGSMmYJo2vDJGOD45AHAgOa5rXA+ggH0LAw121Xmbisk+W3fji7Q32UzDBO3mIGxBc0PA23buCerg0N6AJlERAUFrWxJDpq3DCMmJrhZRjmw8QfZrumeIhM3fzWiPn7Qud0aGEkHbYzqr16GTJ6yx8Tq+QjrY2F10Wo5uzrSyvD4hE5o6yEN53bHo0lh6nlICcrQCrXihD3yCNgYHyOLnO2G25J6k/nXqiICIiAiIgIiICIiAiIg1zw5O+g8AfSacZP8fKFY1XOHH3haf8A0KL+yFY17OP82vzn6rPXIiIuKCIiAiIgIiICIiAiIgIiiZtVYuDVNbTj7JbmbNSS9FW7N55oWPax7ufblGzntGxO/XoOhQSyIiAoLVHSzp4+kZev1/WU6oLVP+cae/lev/WV1wv1tU9bYCIi8dkWHlcVWzNTwa0wvjEkcrS1xaWvY8PY4EekOa0/zepZiIIbHZSetdhxWUeH5GVs00U0ELmwyxNk2A5uoEgY6MuaSNyXFoIaeWZWHlsXXzeNs0LXa9hYjMbnQTPhkbuPumSMIexw7w5pDmkAgggFYUGTnx+QhoZJwkfblkbTnhjdyvY1gdyyHbZsm3P6dnCMkbE8oCZ7lXdG03dhkcrPQmx17LW32JoZrYsbNaBDEWlpLWB0UUb+RvQOe7vcXOP3rYPtYF+OirQXX5ORlF1exaNcPikO05DgeYubD2rwG9Tyd4+6EzTqQ4+pBVrRMgrwMbFFEwbNY0DYAD0AAbIPZERAREQEREBERAREQEREGueHH3haf/Qov7IVjVc4cfeFp/8AQov7IVjXs4/za/Ofqs9cuQrOs9QnXemdbabuakGlMtrNmEdPmdQGWvciksSQSNhx/Z8scbXNdyP52v8AtYJB33TLZHUFHQHEHX0er9RHLae1zaq0ajslJ4E2q3JsjNd8H3MjCyRwHPuWjlDS0ABbzm+Thw6sZSXIP04PCX3BkWctyw1leyJBL20LBJywvLwCXRhpPXfcE7zdnhJpO3prNaflxXPiMzfflL1fwmUdtZfMJnP5g/mbvI0O2aQOm223RfJllGpLMeY0px0lfrfLanGI1DlHVNO3sXlXNxbWvrlraU9ZpBjlBbI9soB5nAecNiDQcFxq1ngX4Sndu2rVDhlPJU1zcsF8j7kT7LqteUu73lsINpx6k7A966Pj4I6Kj1v9l3iXnz3hBtieW1M+Js5bymVsLnmNr+XpzBoP51NzaEwE8GpIZMXA6PUe/jZpB/de8LYDzdf/ALbGt6bek95JNyyOWXa
r1/qanw6xkF282XiDPltSSQSZ2XGSNrt5HVKcNlsUr4mtgex5YxoJIPUbu3mdS4jiZprTOlsJm9TXMS3Ja7o1KdjH5mS7cioSQSdrBLZdDEZfPa4tL2Hbdu/NyhdAav4V6V13gKOFzeHit4+g5j6bWPfDJVcwcrXRSRua+MgdN2uHRY9Dg5pDGYfD4uviSylick3L1GutTPcy2ObaZzy8ukPnu35y4HfqO5TLIsGmtPw6Xw0GMr2r1yKEuImyVyW3O7mcXedLI5znbb7Dc9AAB0C1FxKx+R1V8obSOm26jzWGwk+nchbuVcRfkq+EOZPXazdzCC0gv35mkO23G+ziDf8AUWP4gT5aV+Bz2mqOMIb2cGRwtizM07Dm3kZbjaeu+2zRsNh17196f0VYOZp6j1O/HZLVdSvNRgv4uvNUiZWkdG90fZPmkBJdG08xO/Tpt133OvUOeLEXEnijq7iBDgL1uqNNZN2FxvLq6bHeCCOGNzJpq4qyizzlxeXSvIcN2gDYkuMuZ1fkoM+cVez0erdG6agt5y5jdQnH4mpc7B028cHZONpztiS14DeUNG7SSt66u4B6D11qCXN5nANsZOaNsViaG1PXFpjfuWzNje1swA6ASB3Tp3L11VwN0PrbOyZjNYGO7dmhZXsb2JWRWY2b8jZomvDJeXc7c7XbehYyyNOa5zmoKOU0zxC1Pk9Qw8PpsPjpZZNM5E1m4y29wdJLarjbt4X87G7+dygHzfSo/U2Tz+e0jxl4gv1pm8Jl9H5XIVcRj6d0xUYWU2NdEyWv9zMZz1JkBO0jQ3bYLct35OfDzJTYqS3gHWfFlavTrxzX7LojDAd4WSMMnLKGHqO0Dlk6i4B6B1ZqaTP5XT0VvJSviknJnmZDZfHt2bpoWvEcpbsNi9ru4epMsjR+Vs57Wr+Nubl1TqTCT4DG08liqFDJSQQU53YiKd27AfPaXjrG7dnVx5d3ErOzGqs1Nq7xtHm8pA7L8J7uZlpx3pRWitt8G5ZooubljeOd3nNAPU9Vv6Xhzp2eTVcj8fzP1TG2LMHt5P3U0QdgB915n2scvmcvr7+qxZOE+lZXwPdit3QYSTTkZ8Il83Hv5OeH7v09mzz/ALrp911KuWRpDSE+oNL5ngpkK+ps5m7essRY8Z1sxkHz15phjvCY3sjPmxEPby7sDd2uO+56qu8FMxmtWa44Z3Y9S6tzGaab82tsdkJ7DKNGdsEjGs7PZsUZbM7kZG3cOA5iCWhw6aj4c6ehfpV8eOAfpaN0WHJmkPgzTAYCPuvP+1nl8/m9ff1Wk+GnyeNW6O1zgsgJsRp3D4yaR80WDzWVsNvRGN7GwGrZkMMLAXNd5pcQWDl2UtMSOkVBap/zjT38r1/6yp1QWqf8409/K9f+sr6sL9cNU9bYCIi8dkREQF4XqUGSpWKlqJs1axG6KWN3c9jhsQf4wSvdEFYo4DJR6lhdcGOuYXH12jGzyCV+QjlLAyTne5xB80O88ec7tNiByl0lnREBERAREQEREBERAREQEREGueHH3haf/Qov7IVjULHiczpOLwGjinZnHRk+DPgsRxyxsJ3EbxIQDy7kBwPUAdN+94z1D7H3/fKvxV7VdsSua6aotM364j6y1MXm6aRQvjPUPsff98q/FTxnqH2Pv++VfirGTxR6o5lk0ihfGeofY+/75V+KnjPUPsff98q/FTJ4o9UcyyaRQvjPUPsff98q/FTxnqH2Pv8AvlX4qZPFHqjmWTSKF8Z6h9j7/vlX4qjdS61yWkdP5LN5XSuQrY3HV32rMws1nlkbGlzjytkJOwB6Abpk8UeqOZZbEUL4z1D7H3/fKvxU8Z6h9j7/AL5V+KmTxR6o5lk0ihfGeofY+/75V+KnjPUPsff98q/FTJ4o9UcyyaRQvjPUPsff98q/FTxnqH2Pv++Vfipk8UeqOZZNIoXxnqH2Pv8AvlX4qeM9Q+x9/wB8q/FTJ4o9UcyyaUFqn/ONPfyvX/rK+/G
eofY+/wC+VfirJxuHymdyVK1lKAxVOjL4RHXdO2WaaXlc0c3Ju1rW82/RxJIH3Ib51i2H+aqY1d8T9JIi2tdERF47IiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgKgfKABdwP14AOYnCWwBtvv8AanejY7/8j/EVf1r35QrBJwK1+08xDsHcHmt5j+9O7h6Sg2EiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgLXvyhnBvAniAT0Awdw/ch3/ou9B7/4lsJa/wDlBc/kM192ZeH+JLfKYxu4HsnbbfnQbAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERARQOT19prC2HV72fxlSy3feCW2wSD1+bvv/AOywfKxo32nxfvLfpXeOj41UXiieEraVsWoflN670vguE2s8LltR4nGZW7g7Xg9C5eiinnDo3tBZG57S7cggbd5BG6uXlY0b7T4v3lv0rlf/AMQTR+l+NXC6tltPZbH39Wafl7SvBXma6WzXeQJYgB1JB5Xgf6rturlrRsfcnhJadjr3TOr8DrWhJe09m8dnqUcphfZxluOzG2QAEsLmEgOAc07d+zh61LrnT5KOO0XwH4JYPTcupcWMrKDeyZFpvW1IBzjv/BAazp+Jutv+VjRvtPi/eW/SmjY+5PCS07FsRVPysaN9p8X7y36VKYbWOB1FIY8VmsfkZB1MdW0yRw/jAO4WasDFoi9VExHlJaUwiIuCCIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIg+ZJGxRue9wYxoLnOcdgAO8krQeueItvWs769GxNT0+N2sbE4xyXB+O9wIIYfQzpuDu7fflbf+N+UfQ0M+rG4tdk7EdEkfiO3dIP52Me3/APJaYA2Gw7l+t/Buh0VUz0iuLze0cydUPOvVhqR9nBEyGMfgxtDR/wAgvREX6tkRVbVXEGrpnJ18ZFjclnMrNEbHgOKhbJJHCDy9o8uc1rW79BudyQdgdioeTjZhXx4MUcflspazAtCvTq1QJmSV3NbNFI17m9m5pd+EQ3zT17t+M42HTMxM/wB/so2CiobuM2CbpWHNdhkHSzXXYyPEtr73XW2uLXQCMHbmHKSfO5dhvusXhjrrJax1dreG5XuUKmPnqR1cfkIGRTVw6uHPDuXfm3du4Hmd0I2Oynx8OaqaYm9+VxsZeU9OGyWmWJr3MO7Xkec0+sHvB/OF6ou6thcN+Jdqher4bN2X2qk7mxVL0zi6SOQnZscjj1cHHYBx67nY77gjci5UtV2268kLiQHtLdx3j84/OF0ZoDNy6j0VhcjOQ6zPVY6YjuMgGz//ANwK/G/jPQ6MKYx8OLX1T5tdcXT6Ii/MgiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIg17xyoPs6JZbYCRj7kNp4b+JuY3n+INkLj+ZpWm11Daqw3qs1axG2aCZhjkjeN2vaRsQR6iCueNaaMs6AskSl82EJ/c99w3EY32Ecp9Dh0Acejunp3C/X/gvSqMk9HqnXe8d/cTrhQslxK0jhr01LIaqwlG5CeWSvZyMMcjDtvs5rnAjv9KxvK7oUf/rTT37Vg+urOa0Ep5zFG8u68xaDuvzwGt83i/UC/STGJfVMcP8A1hpbWej62rNdVta4zTuI4mYK3jhj3122a7zBJHK9wlifIeRwPM5rgCCCPT3KdxmhLNPVnD6/R01U09j6FXJG9TpSR9nVlmbDygbcvOSWO3LR6OvoW0WRtjaGsaGNHoaNgv1cY6NTeap65mJ7Ou8T59m1Wijw81Vh8gNRUcUy9fx2rMjk4cY+1HH4XUsxmPma/cta8A8wDtu477emd0jlH6T1XrLUGs20tG185YqGlHkslX3kEVdrHDmD9twR1H5+m46rbC+ZIY5gBIxrwO7mG6kdGiib0T3917W
8+rvFV8rmhdt/s009t/KsH11n4TXmmdS3DUxGosTlbQYZDBSvRTPDQQC7la4nbcjr+cKX8Cr/ADeL9QI5lakx0xbFA1o86QgNAH5yu8RiX1zHD/0fVmxHUrSzyu5YomF7neoAbldD8OcRNgtC4OlZYYrUdVhmjd3skcOZzT/ESR/MtX8OeHs+p7lbKZGB8OFheyaKOVuxuOHVp2PURggHr91/u777zX5T8a6VRiTGBRN7a589nNvqiwiIvy6CIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAvl7GyMcx7Q9jhsWuG4I9RX0iCmXeD2j70zpThY6zjvuKM0lVpJ7ztE5o3/OsXyG6O+Y3v2vc+Kr6i+yOm9Kpi0YtXGea3naoXkN0d8xvfte58VPIbo75je/a9z4qvqLWndL/5avVPMvO1QvIbo75je/a9z4qeQ3R3zG9+17nxVfUTTul/8tXqnmXnaoXkN0d8xvfte58VSWH4V6UwVllithYX2IzzMmtufZew+trpC4tP5wrWixV0zpNcZasSqY85LyIiL5EEREBERAREQEREBERAREQEREBERB//2Q==",
      "text/plain": [
       "<IPython.core.display.Image object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "langgraph = StateGraph(OverallState, input=InputState, output=OutputState)\n",
    "langgraph.add_node(rational_plan_node)\n",
    "langgraph.add_node(initial_node_selection)\n",
    "langgraph.add_node(atomic_fact_check)\n",
    "langgraph.add_node(chunk_check)\n",
    "langgraph.add_node(answer_reasoning)\n",
    "langgraph.add_node(neighbor_select)\n",
    "\n",
    "langgraph.add_edge(START, \"rational_plan_node\")\n",
    "langgraph.add_edge(\"rational_plan_node\", \"initial_node_selection\")\n",
    "langgraph.add_edge(\"initial_node_selection\", \"atomic_fact_check\")\n",
    "langgraph.add_conditional_edges(\n",
    "    \"atomic_fact_check\",\n",
    "    atomic_fact_condition,\n",
    ")\n",
    "langgraph.add_conditional_edges(\n",
    "    \"chunk_check\",\n",
    "    chunk_condition,\n",
    ")\n",
    "langgraph.add_conditional_edges(\n",
    "    \"neighbor_select\",\n",
    "    neighbor_condition,\n",
    ")\n",
    "langgraph.add_edge(\"answer_reasoning\", END)\n",
    "\n",
    "langgraph = langgraph.compile()\n",
    "\n",
    "# View\n",
    "display(Image(langgraph.get_graph().draw_mermaid_png()))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6a1c197e-4892-4398-98ed-4a9918a5258a",
   "metadata": {},
   "source": [
    "We begin by defining the state graph object, which specifies the information passed along the graph in LangGraph. Each node is added with the add_node method. Normal edges, where one step always follows the other, are added with the add_edge method. If the traversal depends on previous actions, we use add_conditional_edges and pass in a function that selects the next node.\n",
    "\n",
    "## Evaluation"
   ]
  },
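  {
   "cell_type": "markdown",
   "id": "c3f1d2a4-8b2e-4f1a-9c7d-1a2b3c4d5e6f",
   "metadata": {},
   "source": [
    "To make the conditional routing concrete, here is a minimal, hypothetical sketch of what a condition function such as `atomic_fact_condition` might look like. The state key `chosen_action` and the node names mirror the ones used in this notebook, but this standalone version is illustrative, not the exact implementation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c3f1d2a4-8b2e-4f1a-9c7d-1a2b3c4d5e70",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical sketch of a conditional-edge function. LangGraph calls it with\n",
    "# the current state and routes execution to the node whose name it returns.\n",
    "def atomic_fact_condition_sketch(state: dict) -> str:\n",
    "    chosen_action = state[\"chosen_action\"][\"function_name\"]\n",
    "    if chosen_action == \"stop_and_read_neighbor\":\n",
    "        return \"neighbor_select\"\n",
    "    # read_chunk (and similar) actions continue to the chunk-reading node\n",
    "    return \"chunk_check\"\n",
    "\n",
    "# Example routing decision based on the agent's chosen action:\n",
    "atomic_fact_condition_sketch({\"chosen_action\": {\"function_name\": \"read_chunk\"}})"
   ]
  },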
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "e22c45f8-1f41-4e0a-a921-9b3f51b7adc6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--------------------\n",
      "Step: rational_plan\n",
      "Rational plan: To answer this question, we first need to identify the specific battles Joan of Arc participated in during her military career. Then, we need to determine the outcomes of these battles to see if any were lost by her forces.\n",
      "--------------------\n",
      "Step: atomic_fact_check\n",
      "Reading atomic facts about: ['Joan of Arc', 'defeats', 'siege of Orléans', 'siege of Compiègne', 'Loire Campaign']\n",
      "Rational for next action after atomic check: Read the text chunk to confirm the details of the battles Joan of Arc lost and gather more information on other battles she might have participated in.\n",
      "Chosen action: {'function_name': 'read_chunk', 'arguments': [['b1422cb348f645771b8ab54fc29c3272']]}\n",
      "--------------------\n",
      "Step: read chunk(b1422cb348f645771b8ab54fc29c3272)\n",
      "Rational for next action after reading chunks: The current text chunk confirms Joan of Arc's participation in the unsuccessful siege of Paris and the failed siege of La Charité, both of which were defeats. This information directly answers the question about whether Joan of Arc lost any battles. No further information is needed to answer the question.\n",
      "Chosen action: {'function_name': 'termination', 'arguments': []}\n",
      "--------------------\n",
      "Step: Answer Reasoning\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'answer': 'Yes, Joan of Arc did lose battles, specifically the sieges of Paris and La Charité.',\n",
       " 'analysis': 'The notebook content indicates that Joan of Arc participated in two specific military engagements that were not successful: the siege of Paris in September 1429 and the siege of La Charité in November of the same year. Both of these sieges ended in defeat, which directly implies that Joan of Arc did experience losses in battles.',\n",
       " 'previous_actions': ['rational_plan',\n",
       "  'initial_node_selection',\n",
       "  \"atomic_fact_check(['Joan of Arc', 'defeats', 'siege of Orléans', 'siege of Compiègne', 'Loire Campaign'])\",\n",
       "  'read_chunks(b1422cb348f645771b8ab54fc29c3272)',\n",
       "  'answer_reasoning']}"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "langgraph.invoke({\"question\":\"Did Joan of Arc lose any battles?\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "6f41c1ab-7973-4e20-888a-2be5c94475a4",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--------------------\n",
      "Step: rational_plan\n",
      "Rational plan: To answer the question about the current weather in Spain, we need to access real-time weather data or forecasts from a reliable meteorological source. This would include information on temperature, precipitation, and other relevant weather conditions across different regions of Spain.\n",
      "--------------------\n",
      "Step: atomic_fact_check\n",
      "Reading atomic facts about: ['Spain']\n",
      "Rational for next action after atomic check: Since there are no atomic facts provided, we should move to another node that might contain relevant information about the current weather in Spain.\n",
      "Chosen action: {'function_name': 'stop_and_read_neighbor', 'arguments': []}\n",
      "Key elements: ['Spain']\n",
      "--------------------\n",
      "Step: neighbor select\n",
      "Possible candidates: [{'possible_candidates': []}]\n",
      "Rational for next action after selecting neighbor: Since there are no possible candidates in the neighbor nodes that could provide information about the current weather in Spain, and the notebook confirms no relevant atomic facts have been found, it is appropriate to terminate the search.\n",
      "Chosen action: {'function_name': 'termination', 'arguments': []}\n",
      "--------------------\n",
      "Step: Answer Reasoning\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'answer': 'The information about the weather in Spain is not available in the provided data.',\n",
       " 'analysis': 'The notebook content indicates that there are no relevant atomic facts provided regarding the weather in Spain.',\n",
       " 'previous_actions': ['rational_plan',\n",
       "  'initial_node_selection',\n",
       "  \"atomic_fact_check(['Spain'])\",\n",
       "  'neighbor_select()',\n",
       "  'answer_reasoning']}"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "langgraph.invoke({\"question\":\"What is the weather in Spain?\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "21900334-d1dc-4761-a576-61c4569f920e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--------------------\n",
      "Step: rational_plan\n",
      "Rational plan: In order to answer this question, we first need to identify the cities Joan of Arc visited during her early life. Next, we need to list the cities where she won battles. Finally, we will compare these two lists to see if there are any cities that appear on both lists, indicating that she visited them in her early life and also won battles there later.\n",
      "--------------------\n",
      "Step: atomic_fact_check\n",
      "Reading atomic facts about: ['Joan of Arc', 'siege of Orléans', 'siege of Compiègne', 'Loire Campaign', 'Rouen']\n",
      "Rational for next action after atomic check: To find more about the cities Joan of Arc visited in her early life and where she won battles, we should read more about her early life and military campaigns.\n",
      "Chosen action: {'function_name': 'read_chunk', 'arguments': [['b1422cb348f645771b8ab54fc29c3272']]}\n",
      "--------------------\n",
      "Step: read chunk(b1422cb348f645771b8ab54fc29c3272)\n",
      "Rational for next action after reading chunks: We have identified several cities Joan of Arc visited during her early life and where she participated in military actions. However, we still need to confirm if any of these cities were visited by her before she became involved in battles there. We should continue to search for more specific information about her early visits to these cities before her military involvement.\n",
      "Chosen action: {'function_name': 'search_more', 'arguments': []}\n",
      "Neighbor rational: We have identified several cities Joan of Arc visited during her early life and where she participated in military actions. However, we still need to confirm if any of these cities were visited by her before she became involved in battles there. We should continue to search for more specific information about her early visits to these cities before her military involvement.\n",
      "--------------------\n",
      "Step: neighbor select\n",
      "Possible candidates: ['Joan of Arc', 'Trial of Joan of Arc', 'The Passion of Joan of Arc', 'French national heroine', \"Joan's trial\", 'siege of Orléans', 'Joan', 'Loire Campaign', \"Joan's rehabilitation trial\", 'siege of Compiègne', 'French army', 'French military leader', 'Rouen', 'Burgundian forces', 'Burgundian troops', 'Charles VII', 'Archbishop of Paris', 'French allies', \"Hundred Years' War\", 'Saint Catherine', 'Saint Margaret', 'Normandy', 'Rouen prison', 'Compiègne', 'northeast France', 'Burgundians', 'French nationalists', 'Paris', 'Bishop Pierre Cauchon', 'French morale', 'early 1430', 'French Revolution', 'Jean Bréhal', 'France', 'unsuccessful siege', 'French nation', 'French silent historical film', 'canonization', 'historical icons', '15th century', 'English allies', 'failed siege', 'patron saints', 'military leader', 'medieval architecture', 'siege', 'patron saint', 'Renée Jeanne Falconetti', 'French', 'September 1429']\n",
      "Rational for next action after selecting neighbor: To identify the cities where Joan of Arc won battles, we need to explore nodes related to her military campaigns and battles. The node 'Loire Campaign' is directly related to her military activities and could provide information about battles she won.\n",
      "Chosen action: {'function_name': 'read_neighbor_node', 'arguments': ['Loire Campaign']}\n",
      "--------------------\n",
      "Step: atomic_fact_check\n",
      "Reading atomic facts about: ['Loire Campaign']\n",
      "Rational for next action after atomic check: To answer the question, we need to identify if any of the cities mentioned (Orléans, Paris, La Charité, Compiègne, Patay, Reims) were visited by Joan in her early life and also where she later won battles. We should continue to explore nodes related to these cities or Joan's early life.\n",
      "Chosen action: {'function_name': 'stop_and_read_neighbor', 'arguments': []}\n",
      "Key elements: ['Loire Campaign']\n",
      "--------------------\n",
      "Step: neighbor select\n",
      "Possible candidates: [{'possible_candidates': ['Joan of Arc', 'coronation', 'Charles VII', 'English', 'French', 'aggressive pursuit', 'victory', 'Patay', 'Reims']}]\n",
      "Rational for next action after selecting neighbor: To answer the question, we need to identify cities Joan of Arc visited in her early life and compare them to where she won battles. We have information about her early life visits and battle victories. The neighbor node 'Patay' could potentially provide more specific details about the battle she won there, which is relevant to the question.\n",
      "Chosen action: {'function_name': 'read_neighbor_node', 'arguments': ['Patay']}\n",
      "--------------------\n",
      "Step: atomic_fact_check\n",
      "Reading atomic facts about: ['Patay']\n",
      "Rational for next action after atomic check: Since the atomic facts about the Loire Campaign and Patay have been reviewed and added to the notebook, we should now focus on identifying other cities Joan of Arc visited during her early life and where she won battles to compare them.\n",
      "Chosen action: {'function_name': 'stop_and_read_neighbor', 'arguments': []}\n",
      "Key elements: ['Patay']\n",
      "--------------------\n",
      "Step: neighbor select\n",
      "Possible candidates: [{'possible_candidates': ['Joan of Arc', 'coronation', 'Charles VII', 'English', 'French', 'aggressive pursuit', 'Loire Campaign', 'victory', 'Reims']}]\n",
      "Rational for next action after selecting neighbor: To determine if Joan of Arc visited any cities in her early life where she later won battles, we need to explore more about her early life and the cities she visited. The neighbor node 'Joan of Arc' seems promising as it might contain more specific information about her early life and the cities she visited during that period.\n",
      "Chosen action: {'function_name': 'read_neighbor_node', 'arguments': ['Joan of Arc']}\n",
      "--------------------\n",
      "Step: atomic_fact_check\n",
      "Reading atomic facts about: ['Joan of Arc']\n",
      "Rational for next action after atomic check: read_chunk(['3083850e9b139392cac70ed8831d3224', '6df8a5fb56909eebbdceb33906cdee85', '3385858910bd9b857986df94ef9d6ef6'])\n",
      "Chosen action: {'function_name': 'read_chunk', 'arguments': [['3083850e9b139392cac70ed8831d3224', '6df8a5fb56909eebbdceb33906cdee85', '3385858910bd9b857986df94ef9d6ef6']]}\n",
      "--------------------\n",
      "Step: read chunk(3385858910bd9b857986df94ef9d6ef6)\n",
      "Rational for next action after reading chunks: The current text chunk provides information about Joan of Arc's capture, trial, and execution, but does not contribute to answering the question about cities she visited in her early life and where she won battles later. We need to continue searching for more specific information regarding her early life visits and battle victories.\n",
      "Chosen action: {'function_name': 'search_more', 'arguments': []}\n",
      "--------------------\n",
      "Step: read chunk(6df8a5fb56909eebbdceb33906cdee85)\n",
      "Rational for next action after reading chunks: The current chunk does not provide additional information about the cities Joan of Arc visited in her early life or where she won battles. To answer the question, we need to identify more cities she visited early in her life and where she won battles. We should continue searching for relevant information.\n",
      "Chosen action: {'function_name': 'search_more', 'arguments': []}\n",
      "--------------------\n",
      "Step: read chunk(3083850e9b139392cac70ed8831d3224)\n",
      "Rational for next action after reading chunks: The current chunk provides information about Joan of Arc's capture, trial, and execution, but does not provide new details about the cities she visited early in life or where she won battles. To answer the question, we need to find more information about the cities she visited during her early life and compare them to where she won battles.\n",
      "Chosen action: {'function_name': 'search_more', 'arguments': []}\n",
      "Neighbor rational: The current chunk provides information about Joan of Arc's capture, trial, and execution, but does not provide new details about the cities she visited early in life or where she won battles. To answer the question, we need to find more information about the cities she visited during her early life and compare them to where she won battles.\n",
      "--------------------\n",
      "Step: neighbor select\n",
      "Possible candidates: ['Trial of Joan of Arc', 'Joan of Arc', 'The Passion of Joan of Arc', \"Joan's trial\", 'French national heroine', 'siege of Orléans', \"Joan's rehabilitation trial\", 'siege of Compiègne', 'Joan', 'Loire Campaign', 'Rouen prison', 'French military leader', 'Charles VII', \"Hundred Years' War\", 'Rouen', 'French army', 'French Revolution', 'French morale', 'Normandy', 'Saint Catherine', 'Burgundian forces', 'Compiègne', 'Archbishop of Paris', 'French allies', 'Burgundian troops', 'French nationalists', 'northeast France', 'unsuccessful siege', 'burned at the stake', 'failed siege', 'trial transcripts', 'French silent historical film', 'Burgundians', 'Bishop Pierre Cauchon', 'French nation', 'Saint Margaret', 'Jean Bréhal', 'early 1430', 'inquisitorial court', 'Paris', 'pro-English church court', 'siege', '15th century', 'France', 'canonization', \"court's faith\", 'historical icons', 'trial', 'French', 'history']\n",
      "Rational for next action after selecting neighbor: To find out if Joan of Arc visited any cities in her early life where she later won battles, we need to explore nodes related to her early life and battles. The node 'northeast France' might provide information about her early life since it mentions the region where she was born.\n",
      "Chosen action: {'function_name': 'read_neighbor_node', 'arguments': ['northeast France']}\n",
      "--------------------\n",
      "Step: atomic_fact_check\n",
      "Reading atomic facts about: ['northeast France']\n",
      "Rational for next action after atomic check: The atomic fact confirms Joan of Arc's birthplace in Domrémy, northeast France, which is already noted in the notebook. This information does not provide new insights into the cities she visited in her early life or where she won battles later. Therefore, it might be more productive to explore other nodes or text chunks that could contain relevant information about her early visits or battle victories.\n",
      "Chosen action: {'function_name': 'stop_and_read_neighbor', 'arguments': []}\n",
      "Key elements: ['northeast France']\n",
      "--------------------\n",
      "Step: neighbor select\n",
      "Possible candidates: [{'possible_candidates': ['Joan of Arc', 'birth', 'propertied peasant family', 'Domrémy']}]\n",
      "Rational for next action after selecting neighbor: Since we have already explored the node 'Joan of Arc' extensively and gathered information about her early life and battles, revisiting this node would be redundant. The other neighbors like 'birth', 'propertied peasant family', and 'Domrémy' are unlikely to provide additional relevant information about battles she won later in those cities. Therefore, it seems appropriate to conclude the search as we have gathered sufficient information to answer the question.\n",
      "Chosen action: {'function_name': 'termination', 'arguments': []}\n",
      "--------------------\n",
      "Step: Answer Reasoning\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'answer': \"Yes, Joan of Arc did visit a city in her early life where she won a battle later. She visited Orléans and subsequently played a key role in lifting the siege of the city during the Hundred Years' War.\",\n",
       " 'analysis': \"The notebook mentions that Joan of Arc visited Orléans during the Hundred Years' War and played a significant role in lifting the siege there. This indicates that she visited Orléans in her early life and later won a battle in the same city. There is no information about her visiting Paris, La Charité, Compiègne, Patay, Reims, or Rouen in her early life before engaging in battles there.\",\n",
       " 'previous_actions': ['rational_plan',\n",
       "  'initial_node_selection',\n",
       "  \"atomic_fact_check(['Joan of Arc', 'siege of Orléans', 'siege of Compiègne', 'Loire Campaign', 'Rouen'])\",\n",
       "  'read_chunks(b1422cb348f645771b8ab54fc29c3272)',\n",
       "  'neighbor_select(Loire Campaign)',\n",
       "  \"atomic_fact_check(['Loire Campaign'])\",\n",
       "  'neighbor_select(Patay)',\n",
       "  \"atomic_fact_check(['Patay'])\",\n",
       "  'neighbor_select(Joan of Arc)',\n",
       "  \"atomic_fact_check(['Joan of Arc'])\",\n",
       "  'read_chunks(3385858910bd9b857986df94ef9d6ef6)',\n",
       "  'read_chunks(6df8a5fb56909eebbdceb33906cdee85)',\n",
       "  'read_chunks(3083850e9b139392cac70ed8831d3224)',\n",
       "  'neighbor_select(northeast France)',\n",
       "  \"atomic_fact_check(['northeast France'])\",\n",
       "  'neighbor_select()',\n",
       "  'answer_reasoning']}"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "langgraph.invoke({\"question\":\"Did Joan of Arc visit any cities in early life where she won battles later?\"})"
   ]
}
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
