{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a href=\"https://colab.research.google.com/github/milvus-io/bootcamp/blob/master/bootcamp/RAG/advanced_rag/langgraph-graphrag-agent-local.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "! pip install -U langchain_community arxiv tiktoken langchainhub pymilvus langchain langgraph tavily-python sentence-transformers langchain-milvus langchain-ollama langchain-huggingface beautifulsoup4 langchain-experimental neo4j json-repair langchain-openai"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# LangGraph GraphRAG agent with Llama 3.1 and GPT-4o\n",
    "\n",
    "\n",
    "Let's build an advanced RAG pipeline with a GraphRAG agent that combines Llama 3.1 and GPT-4o, using Ollama to run Llama 3.1 locally. The idea is to use GPT-4o for the advanced tasks, such as generating the Neo4j queries, and Llama 3.1 for everything else.\n",
    "\n",
    "## Ideas\n",
    "\n",
    "We'll combine ideas from three RAG papers into a RAG agent:\n",
    "\n",
    "- **Routing:** Adaptive RAG ([paper](https://arxiv.org/abs/2403.14403)). Route questions to different retrieval approaches.\n",
    "- **Fallback:** Corrective RAG ([paper](https://arxiv.org/pdf/2401.15884.pdf)). Fall back to web search when the retrieved documents are not relevant to the query.\n",
    "- **Self-correction:** Self-RAG ([paper](https://arxiv.org/abs/2310.11511)). Fix answers that contain hallucinations or fail to address the question.\n",
    "\n",
    "![langgraph_adaptive_rag.png](imgs/RAG_Agent_langGraph.png)\n",
    "\n",
    "Note that this will incorporate [a few general ideas for agents](https://www.deeplearning.ai/the-batch/how-agents-can-improve-llm-performance/):\n",
    "\n",
    "- **Reflection**: The self-correction mechanism is a form of reflection, where the LangGraph agent reflects on its retrieval and generations\n",
    "- **Planning**: The control flow laid out in the graph is a form of planning \n",
    "- **Tool use**: Specific nodes in the control flow (e.g., web search) will use tools\n",
    "\n",
    "## Local models\n",
    "\n",
    "### LLM\n",
    "\n",
    "Use [Ollama](https://ollama.ai/) to run [llama3.1](https://ollama.ai/library/llama3.1) locally:\n",
    "\n",
    "```\n",
    "ollama pull llama3.1\n",
    "```\n",
    "\n",
    "### Env Variables\n",
    "The following variables must be defined in a `.env` file or exported in the environment before starting:\n",
    "\n",
    "Required:\n",
    "```\n",
    "OPENAI_API_KEY=sk-...\n",
    "TAVILY_API_KEY=tvly-...\n",
    "NEO4J_URI=...\n",
    "NEO4J_USERNAME=...\n",
    "NEO4J_PASSWORD=...\n",
    "```\n",
    "\n",
    "### Search\n",
    "\n",
    "Uses [Tavily](https://tavily.com/#api)"
   ]
  },
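  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before wiring these ideas into LangGraph, the control flow can be sketched as plain Python decision functions. This is a minimal, hypothetical illustration only: the keyword checks below stand in for the LLM-based router and grader that the agent will actually use.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch only: hard-coded heuristics stand in for LLM graders.\n",
    "def route_question(question: str) -> str:\n",
    "    # Adaptive-RAG routing: send relationship-style questions to GraphRAG,\n",
    "    # everything else to the vector store.\n",
    "    graph_keywords = (\"related\", \"connection\", \"authored\", \"discusses\")\n",
    "    if any(k in question.lower() for k in graph_keywords):\n",
    "        return \"graphrag\"\n",
    "    return \"vectorstore\"\n",
    "\n",
    "\n",
    "def grade_documents(question: str, docs: list) -> str:\n",
    "    # Corrective-RAG fallback: if no retrieved doc mentions the question's\n",
    "    # terms, fall back to web search instead of generating an answer.\n",
    "    terms = question.lower().split()\n",
    "    relevant = [d for d in docs if any(t in d.lower() for t in terms)]\n",
    "    return \"generate\" if relevant else \"web_search\"\n",
    "\n",
    "\n",
    "print(route_question(\"Which papers are related to prompt engineering?\"))\n",
    "print(grade_documents(\"milvus\", [\"Milvus is a vector database.\"]))"
   ]
  },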
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 55,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from dotenv import load_dotenv\n",
    "import os\n",
    "\n",
    "load_dotenv()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.globals import set_verbose, set_debug\n",
    "\n",
    "set_debug(False)\n",
    "set_verbose(False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "metadata": {},
   "outputs": [],
   "source": [
    "### LLM\n",
    "\n",
    "local_llm = \"llama3.1\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "import arxiv\n",
    "\n",
    "search_query = \"agent OR 'large language model' OR 'prompt engineering'\"\n",
    "max_results = 20\n",
    "\n",
    "client = arxiv.Client()\n",
    "search = arxiv.Search(\n",
    "    query=search_query, max_results=max_results, sort_by=arxiv.SortCriterion.Relevance\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Number of papers: 20\n",
      "Number of chunks: 20\n"
     ]
     }
   ],
   "source": [
    "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
    "from langchain_milvus import Milvus\n",
    "from langchain_huggingface import HuggingFaceEmbeddings\n",
    "\n",
    "\n",
    "docs = []\n",
    "for result in client.results(search):\n",
    "    docs.append(\n",
    "        {\"title\": result.title, \"summary\": result.summary, \"url\": result.entry_id}\n",
    "    )\n",
    "\n",
    "text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(\n",
    "    chunk_size=500, chunk_overlap=50\n",
    ")\n",
    "doc_splits = text_splitter.create_documents(\n",
    "    [doc[\"summary\"] for doc in docs], metadatas=docs\n",
    ")\n",
    "\n",
    "print(f\"Number of papers: {len(docs)}\")\n",
    "print(f\"Number of chunks: {len(doc_splits)}\")\n",
    "\n",
    "\n",
    "# Add to Milvus\n",
    "vectorstore = Milvus.from_documents(\n",
    "    documents=doc_splits,\n",
    "    collection_name=\"rag_milvus\",\n",
    "    embedding=HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-MiniLM-L6-v2\"),\n",
    "    connection_args={\"uri\": \"./milvus_ingest_v2.db\"},\n",
    ")\n",
    "retriever = vectorstore.as_retriever()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_ollama import ChatOllama\n",
    "\n",
    "llm = ChatOllama(model=local_llm, format=\"json\", temperature=0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Graph documents: 20\n",
      "Nodes from 1st graph doc:[Node(id='Prompt Design And Engineering', type='Topic'), Node(id='Large Language Models', type='Topic'), Node(id='Chain-Of-Thought', type='Topic'), Node(id='Reflection', type='Topic'), Node(id='Llm-Based Agents', type='Topic'), Node(id='Tools For Prompt Engineers', type='Topic'), Node(id='This Paper', type='Paper', properties={'summary': 'This paper introduces core concepts, advanced techniques like Chain-of-Thought and Reflection, and the principles behind building LLM-based agents. It also provides a survey of tools for prompt engineers.'})]\n",
      "Relationships from 1st graph doc:[Relationship(source=Node(id='This Paper', type='Paper'), target=Node(id='Prompt Design And Engineering', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='This Paper', type='Paper'), target=Node(id='Large Language Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='This Paper', type='Paper'), target=Node(id='Chain-Of-Thought', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='This Paper', type='Paper'), target=Node(id='Reflection', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='This Paper', type='Paper'), target=Node(id='Llm-Based Agents', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='This Paper', type='Paper'), target=Node(id='Tools For Prompt Engineers', type='Topic'), type='DISCUSSES')]\n"
     ]
    }
   ],
   "source": [
    "# GraphRAG setup: extract a knowledge graph from the paper chunks with GPT-4o\n",
    "from langchain_community.graphs import Neo4jGraph\n",
    "from langchain_experimental.graph_transformers import LLMGraphTransformer\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "graph = Neo4jGraph()\n",
    "\n",
    "graph_llm = ChatOpenAI(temperature=0, model_name=\"gpt-4o\")\n",
    "\n",
    "graph_transformer = LLMGraphTransformer(\n",
    "    llm=graph_llm,\n",
    "    allowed_nodes=[\"Paper\", \"Author\", \"Topic\"],\n",
    "    node_properties=[\"title\", \"summary\", \"url\"],\n",
    "    allowed_relationships=[\"AUTHORED\", \"DISCUSSES\", \"RELATED_TO\"],\n",
    ")\n",
    "\n",
    "graph_documents = graph_transformer.convert_to_graph_documents(doc_splits)\n",
    "\n",
    "graph.add_graph_documents(graph_documents)\n",
    "\n",
    "print(f\"Graph documents: {len(graph_documents)}\")\n",
    "print(f\"Nodes from 1st graph doc:{graph_documents[0].nodes}\")\n",
    "print(f\"Relationships from 1st graph doc:{graph_documents[0].relationships}\")"
   ]
  },
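  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check on the ingested graph, you can query Neo4j directly. The Cypher below is a hypothetical example built from the `allowed_nodes` and `allowed_relationships` configured above; run it with `graph.query(cypher)` against the live Neo4j instance.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical sanity-check query over the schema configured above\n",
    "# (Paper/Topic nodes, DISCUSSES relationships).\n",
    "cypher = (\n",
    "    \"MATCH (p:Paper)-[:DISCUSSES]->(t:Topic) \"\n",
    "    \"RETURN p.id AS paper, collect(t.id) AS topics \"\n",
    "    \"LIMIT 5\"\n",
    ")\n",
    "# results = graph.query(cypher)  # requires the Neo4j connection above\n",
    "print(cypher)"
   ]
  },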
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Document 0:\n",
      "  Nodes: [Node(id='Prompt Design And Engineering', type='Topic'), Node(id='Large Language Models', type='Topic'), Node(id='Chain-Of-Thought', type='Topic'), Node(id='Reflection', type='Topic'), Node(id='Llm-Based Agents', type='Topic'), Node(id='Tools For Prompt Engineers', type='Topic'), Node(id='This Paper', type='Paper', properties={'summary': 'This paper introduces core concepts, advanced techniques like Chain-of-Thought and Reflection, and the principles behind building LLM-based agents. It also provides a survey of tools for prompt engineers.'})]\n",
      "  Relationships: [Relationship(source=Node(id='This Paper', type='Paper'), target=Node(id='Prompt Design And Engineering', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='This Paper', type='Paper'), target=Node(id='Large Language Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='This Paper', type='Paper'), target=Node(id='Chain-Of-Thought', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='This Paper', type='Paper'), target=Node(id='Reflection', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='This Paper', type='Paper'), target=Node(id='Llm-Based Agents', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='This Paper', type='Paper'), target=Node(id='Tools For Prompt Engineers', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 1:\n",
      "  Nodes: [Node(id='Unlocking Reasoning Capability Of Large Language Models', type='Paper', properties={'title': 'Unlocking Reasoning Capability of Large Language Models', 'summary': 'This paper discusses two methods to enhance reasoning in large language models: prompt engineering and multi-agent discussion. It theoretically justifies multi-agent discussion from the symmetry of agents and reports empirical results on the interplay of prompts and discussion mechanisms. It also proposes a scalable discussion mechanism based on conquer and merge.'}), Node(id='Prompt Engineering', type='Topic'), Node(id='Multi-Agent Discussion', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Unlocking Reasoning Capability Of Large Language Models', type='Paper'), target=Node(id='Prompt Engineering', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Unlocking Reasoning Capability Of Large Language Models', type='Paper'), target=Node(id='Multi-Agent Discussion', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 2:\n",
      "  Nodes: [Node(id='Simulation Of Human Interactions Using Llms', type='Paper', properties={'title': 'Simulation of Human Interactions using LLMs', 'summary': 'The research investigates simulations of human interactions using large language models (LLMs). It presents two simulations of believable proxies of human behavior: a two-agent negotiation and a six-agent murder mystery game.'}), Node(id='Agent-Based Modeling', type='Topic'), Node(id='Large Language Models', type='Topic'), Node(id='Chatgpt', type='Topic'), Node(id='Prompt Engineering', type='Topic'), Node(id='Park Et Al. (2023)', type='Paper')]\n",
      "  Relationships: [Relationship(source=Node(id='Simulation Of Human Interactions Using Llms', type='Paper'), target=Node(id='Agent-Based Modeling', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Simulation Of Human Interactions Using Llms', type='Paper'), target=Node(id='Large Language Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Simulation Of Human Interactions Using Llms', type='Paper'), target=Node(id='Chatgpt', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Simulation Of Human Interactions Using Llms', type='Paper'), target=Node(id='Prompt Engineering', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Simulation Of Human Interactions Using Llms', type='Paper'), target=Node(id='Park Et Al. (2023)', type='Paper'), type='RELATED_TO')]\n",
      "---\n",
      "Document 3:\n",
      "  Nodes: [Node(id='Ai Community', type='Topic'), Node(id='Artificial General Intelligence', type='Topic'), Node(id='Language Agents', type='Topic'), Node(id='Large Language Models', type='Topic'), Node(id='Agent Symbolic Learning', type='Topic'), Node(id='Paper On Agent Symbolic Learning', type='Paper', properties={'summary': 'Introduces agent symbolic learning, a framework enabling language agents to optimize themselves in a data-centric way using symbolic optimizers.'}), Node(id='Self-Evolving Agents', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Ai Community', type='Topic'), target=Node(id='Artificial General Intelligence', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Language Agents', type='Topic'), target=Node(id='Artificial General Intelligence', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Language Agents', type='Topic'), target=Node(id='Large Language Models', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Agent Symbolic Learning', type='Topic'), target=Node(id='Language Agents', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Paper On Agent Symbolic Learning', type='Paper'), target=Node(id='Agent Symbolic Learning', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Agent Symbolic Learning', type='Topic'), target=Node(id='Self-Evolving Agents', type='Topic'), type='RELATED_TO')]\n",
      "---\n",
      "Document 4:\n",
      "  Nodes: [Node(id='Reprompt', type='Paper', properties={'title': 'RePrompt', 'summary': \"RePrompt is a novel method that uses 'gradient descent' to optimize step-by-step instructions in the prompt of LLM agents based on chat history from interactions with LLM agents. It aims to improve performance in specific domains by optimizing the prompt.\"}), Node(id='Large Language Models', type='Topic'), Node(id='Automatic Prompt Engineering', type='Topic'), Node(id='Pddl Generation', type='Topic'), Node(id='Travel Planning', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Reprompt', type='Paper'), target=Node(id='Large Language Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Reprompt', type='Paper'), target=Node(id='Automatic Prompt Engineering', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Reprompt', type='Paper'), target=Node(id='Pddl Generation', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Reprompt', type='Paper'), target=Node(id='Travel Planning', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 5:\n",
      "  Nodes: [Node(id='Traditional Base Station Siting Methods', type='Topic'), Node(id='Large Language Models', type='Topic'), Node(id='Prompt Engineering', type='Topic'), Node(id='Agent Engineering', type='Topic'), Node(id='Network Optimization', type='Topic'), Node(id='Artificial Intelligence As A Service', type='Topic'), Node(id='Llm-Empowered Bss Optimization Framework', type='Topic'), Node(id='Prompt-Optimized Llm', type='Topic'), Node(id='Human-In-The-Loop Llm', type='Topic'), Node(id='Llm-Empowered Autonomous Bss Agent', type='Topic'), Node(id='Cooperative Multiple Llm-Based Autonomous Bss Agents', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Traditional Base Station Siting Methods', type='Topic'), target=Node(id='Network Optimization', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Large Language Models', type='Topic'), target=Node(id='Network Optimization', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Prompt Engineering', type='Topic'), target=Node(id='Network Optimization', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Agent Engineering', type='Topic'), target=Node(id='Network Optimization', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Llm-Empowered Bss Optimization Framework', type='Topic'), target=Node(id='Network Optimization', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Artificial Intelligence As A Service', type='Topic'), target=Node(id='Llm-Empowered Bss Optimization Framework', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Prompt-Optimized Llm', type='Topic'), target=Node(id='Llm-Empowered Bss Optimization Framework', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Human-In-The-Loop Llm', type='Topic'), target=Node(id='Llm-Empowered Bss Optimization Framework', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Llm-Empowered Autonomous Bss Agent', type='Topic'), target=Node(id='Llm-Empowered Bss Optimization Framework', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Cooperative Multiple Llm-Based Autonomous Bss Agents', type='Topic'), target=Node(id='Llm-Empowered Bss Optimization Framework', type='Topic'), type='RELATED_TO')]\n",
      "---\n",
      "Document 6:\n",
      "  Nodes: [Node(id='Large Language Models', type='Topic'), Node(id='Prompt Engineering', type='Topic'), Node(id='Hallucinations', type='Topic'), Node(id='Tool-Calling Agents', type='Topic'), Node(id='Paper On Prompting Strategies', type='Paper', properties={'summary': 'This paper provides a comprehensive empirical evaluation of different prompting strategies and frameworks aimed at reducing hallucinations in LLMs. Various prompting techniques are applied to a broad set of benchmark datasets to assess the accuracy and hallucination rate of each method. Additionally, the paper investigates the influence of tool-calling agents on hallucination rates in the same benchmarks.'})]\n",
      "  Relationships: [Relationship(source=Node(id='Paper On Prompting Strategies', type='Paper'), target=Node(id='Large Language Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Paper On Prompting Strategies', type='Paper'), target=Node(id='Prompt Engineering', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Paper On Prompting Strategies', type='Paper'), target=Node(id='Hallucinations', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Paper On Prompting Strategies', type='Paper'), target=Node(id='Tool-Calling Agents', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Large Language Models', type='Topic'), target=Node(id='Hallucinations', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Prompt Engineering', type='Topic'), target=Node(id='Hallucinations', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Tool-Calling Agents', type='Topic'), target=Node(id='Hallucinations', type='Topic'), type='RELATED_TO')]\n",
      "---\n",
      "Document 7:\n",
      "  Nodes: [Node(id='Instruction-Following Agents', type='Topic'), Node(id='Language Grounding', type='Topic'), Node(id='Pretrained Vision-Language Models', type='Topic'), Node(id='Embodied Agents', type='Topic'), Node(id='Model Distillation', type='Topic'), Node(id='Hindsight Experience Replay', type='Topic'), Node(id='3D Rendered Environment', type='Topic'), Node(id='Fewshot Prompting', type='Topic'), Node(id='Abstract Category Membership', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Instruction-Following Agents', type='Topic'), target=Node(id='Language Grounding', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Language Grounding', type='Topic'), target=Node(id='Pretrained Vision-Language Models', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Pretrained Vision-Language Models', type='Topic'), target=Node(id='Embodied Agents', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Model Distillation', type='Topic'), target=Node(id='Pretrained Vision-Language Models', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Hindsight Experience Replay', type='Topic'), target=Node(id='Pretrained Vision-Language Models', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='3D Rendered Environment', type='Topic'), target=Node(id='Embodied Agents', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Fewshot Prompting', type='Topic'), target=Node(id='Abstract Category Membership', type='Topic'), type='RELATED_TO')]\n",
      "---\n",
      "Document 8:\n",
      "  Nodes: [Node(id='Pre-Trained And Frozen Large Language Models', type='Topic'), Node(id='Scene Rearrangement Instructions', type='Topic'), Node(id=\"Robot'S Visuomotor Functions\", type='Topic'), Node(id='Open-Domain Natural Language', type='Topic'), Node(id=\"User'S Idiosyncratic Procedures\", type='Topic'), Node(id='Prompt Engineering', type='Topic'), Node(id='Helper', type='Paper', properties={'title': 'HELPER: An Embodied Agent with External Memory for Human-Robot Dialogue Parsing', 'summary': 'HELPER is an embodied agent equipped with an external memory of language-program pairs that parses free-form human-robot dialogue into action programs through retrieval-augmented LLM prompting.', 'url': 'https://helper-agent-llm.github.io'}), Node(id='Teach Benchmark', type='Topic'), Node(id='Execution From Dialog History (Edh)', type='Topic'), Node(id='Trajectory From Dialogue (Tfd)', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Pre-Trained And Frozen Large Language Models', type='Topic'), target=Node(id='Scene Rearrangement Instructions', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Pre-Trained And Frozen Large Language Models', type='Topic'), target=Node(id=\"Robot'S Visuomotor Functions\", type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Open-Domain Natural Language', type='Topic'), target=Node(id=\"User'S Idiosyncratic Procedures\", type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Helper', type='Paper'), target=Node(id='Teach Benchmark', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Helper', type='Paper'), target=Node(id='Execution From Dialog History (Edh)', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Helper', type='Paper'), target=Node(id='Trajectory From Dialogue (Tfd)', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 9:\n",
      "  Nodes: [Node(id='Promptagent', type='Paper', properties={'title': 'PromptAgent: Autonomous Expert-Level Prompt Optimization', 'summary': 'PromptAgent is an optimization method that autonomously crafts prompts equivalent in quality to those handcrafted by experts. It views prompt optimization as a strategic planning problem and employs a principled planning algorithm, rooted in Monte Carlo tree search, to navigate the expert-level prompt space. It applies to 12 tasks spanning three practical domains: BIG-Bench Hard (BBH), domain-specific, and general NLP tasks, significantly outperforming strong baselines.'}), Node(id='Big-Bench Hard (Bbh)', type='Topic'), Node(id='Domain-Specific Nlp Tasks', type='Topic'), Node(id='General Nlp Tasks', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Promptagent', type='Paper'), target=Node(id='Big-Bench Hard (Bbh)', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Promptagent', type='Paper'), target=Node(id='Domain-Specific Nlp Tasks', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Promptagent', type='Paper'), target=Node(id='General Nlp Tasks', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 10:\n",
      "  Nodes: [Node(id='Recent Advancements In Large Language Models (Llms) And Prompt Engineering', type='Topic'), Node(id='Chatbot Customization', type='Topic'), Node(id='Prompt Evaluation', type='Topic'), Node(id='Awesum', type='Paper', properties={'title': 'Awesum: A Visual Analytics System for Prompt Evaluation in Text Summarization'}), Node(id='Feature-Oriented Workflow', type='Topic'), Node(id='Text Summarization', type='Topic'), Node(id='Summary Characteristics', type='Topic'), Node(id='Prompt Comparator', type='Topic'), Node(id='Dimensionality Reduction Techniques', type='Topic'), Node(id='Feature-Oriented Evaluation Of Llm Prompts', type='Topic'), Node(id='Human-Agent Interaction', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Recent Advancements In Large Language Models (Llms) And Prompt Engineering', type='Topic'), target=Node(id='Chatbot Customization', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Recent Advancements In Large Language Models (Llms) And Prompt Engineering', type='Topic'), target=Node(id='Prompt Evaluation', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Awesum', type='Paper'), target=Node(id='Prompt Evaluation', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Awesum', type='Paper'), target=Node(id='Feature-Oriented Workflow', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Feature-Oriented Workflow', type='Topic'), target=Node(id='Text Summarization', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Feature-Oriented Workflow', type='Topic'), target=Node(id='Summary Characteristics', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Awesum', type='Paper'), target=Node(id='Prompt Comparator', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Comparator', type='Topic'), target=Node(id='Dimensionality Reduction Techniques', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Feature-Oriented Evaluation Of Llm Prompts', type='Topic'), target=Node(id='Human-Agent Interaction', type='Topic'), type='RELATED_TO')]\n",
      "---\n",
      "Document 11:\n",
      "  Nodes: [Node(id='Leveraging_Multiple_Llm_Agents', type='Topic'), Node(id='Adaptive_Team-Building_Paradigm', type='Topic'), Node(id='Captain_Agent', type='Topic'), Node(id='Paper_1', type='Paper', properties={'title': 'Leveraging multiple large language model (LLM) agents', 'summary': 'Leveraging multiple large language model (LLM) agents has shown to be a promising approach for tackling complex tasks, while the effective design of multiple agents for a particular application remains an art. It is thus intriguing to answer a critical question: Given a task, how can we build a team of LLM agents to solve it effectively? Our new adaptive team-building paradigm offers a flexible solution, realized through a novel agent design named Captain Agent. It dynamically forms and manages teams for each step of a task-solving process, utilizing nested group conversations and reflection to ensure diverse expertise and prevent stereotypical outputs, allowing for a flexible yet structured approach to problem-solving. A comprehensive evaluation across six real-world scenarios demonstrates that Captain Agent significantly outperforms existing multi-agent methods with 21.94% improvement in average accuracy, providing outstanding performance without requiring task-specific prompt engineering. Our exploration of different backbone LLM and cost analysis further shows that Captain Agent can improve the conversation quality of weak LLM and achieve competitive performance with extremely low cost, which illuminates the application of multi-agent systems.'})]\n",
      "  Relationships: [Relationship(source=Node(id='Paper_1', type='Paper'), target=Node(id='Leveraging_Multiple_Llm_Agents', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Paper_1', type='Paper'), target=Node(id='Adaptive_Team-Building_Paradigm', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Paper_1', type='Paper'), target=Node(id='Captain_Agent', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Adaptive_Team-Building_Paradigm', type='Topic'), target=Node(id='Captain_Agent', type='Topic'), type='RELATED_TO')]\n",
      "---\n",
      "Document 12:\n",
      "  Nodes: [Node(id='Drama Engine', type='Paper', properties={'title': 'Drama Engine', 'summary': 'A novel framework for agentic interaction with large language models designed for narrative purposes, adapting multi-agent system principles to create dynamic, context-aware companions.'}), Node(id='Multi-Agent Workflows', type='Topic'), Node(id='Dynamic Prompt Assembly', type='Topic'), Node(id='Model-Agnostic Design', type='Topic'), Node(id='Companion Development', type='Topic'), Node(id='Mood Systems', type='Topic'), Node(id='Automatic Context Summarising', type='Topic'), Node(id='Multi-Agent Chats', type='Topic'), Node(id='Virtual Co-Workers For Creative Writing', type='Topic'), Node(id=\"System'S Architecture\", type='Topic'), Node(id='Prompt Assembly Process', type='Topic'), Node(id='Delegation Mechanisms', type='Topic'), Node(id='Moderation Techniques', type='Topic'), Node(id='Ethical Considerations', type='Topic'), Node(id='Future Extensions', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Multi-Agent Workflows', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Dynamic Prompt Assembly', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Model-Agnostic Design', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Companion Development', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Mood Systems', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Automatic Context Summarising', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Multi-Agent Chats', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Virtual Co-Workers For Creative Writing', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id=\"System'S Architecture\", type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Prompt Assembly Process', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Delegation Mechanisms', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Moderation Techniques', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Ethical Considerations', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Future Extensions', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 13:\n",
      "  Nodes: [Node(id='Prompt Engineering', type='Topic'), Node(id='Vision-Language Models', type='Topic'), Node(id='Multimodal-To-Text Generation Models', type='Topic'), Node(id='Image-Text Matching Models', type='Topic'), Node(id='Text-To-Image Generation Models', type='Topic'), Node(id='Natural Language Processing', type='Topic'), Node(id='Vision-Language Modeling', type='Topic'), Node(id='Flamingo', type='Topic'), Node(id='Clip', type='Topic'), Node(id='Stable Diffusion', type='Topic'), Node(id='Prompt Engineering On Vision-Language Models', type='Paper', properties={'title': 'Prompt Engineering on Vision-Language Models'})]\n",
      "  Relationships: [Relationship(source=Node(id='Prompt Engineering', type='Topic'), target=Node(id='Vision-Language Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Engineering', type='Topic'), target=Node(id='Natural Language Processing', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Engineering', type='Topic'), target=Node(id='Vision-Language Modeling', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Engineering On Vision-Language Models', type='Paper'), target=Node(id='Prompt Engineering', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Engineering On Vision-Language Models', type='Paper'), target=Node(id='Multimodal-To-Text Generation Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Engineering On Vision-Language Models', type='Paper'), target=Node(id='Image-Text Matching Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Engineering On Vision-Language Models', type='Paper'), target=Node(id='Text-To-Image Generation Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Multimodal-To-Text Generation Models', type='Topic'), target=Node(id='Flamingo', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Image-Text Matching Models', type='Topic'), target=Node(id='Clip', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Text-To-Image Generation Models', type='Topic'), target=Node(id='Stable Diffusion', type='Topic'), type='RELATED_TO')]\n",
      "---\n",
      "Document 14:\n",
      "  Nodes: [Node(id='Recent Trends In Llms', type='Topic'), Node(id='Large Language Models As Autonomous Agents', type='Topic'), Node(id='Guidance, Navigation, And Control In Space', type='Topic'), Node(id='Kerbal Space Program Differential Games', type='Topic'), Node(id='Llm-Based Solution For Kspdg', type='Paper', properties={'title': 'LLM-based solution for KSPDG', 'summary': 'A pure LLM-based solution for the Kerbal Space Program Differential Games challenge, leveraging prompt engineering, few-shot prompting, and fine-tuning techniques to create an effective LLM-based agent that ranked 2nd in the competition.', 'url': 'https://github.com/ARCLab-MIT/kspdg'})]\n",
      "  Relationships: [Relationship(source=Node(id='Recent Trends In Llms', type='Topic'), target=Node(id='Large Language Models As Autonomous Agents', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Large Language Models As Autonomous Agents', type='Topic'), target=Node(id='Guidance, Navigation, And Control In Space', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Llm-Based Solution For Kspdg', type='Paper'), target=Node(id='Kerbal Space Program Differential Games', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 15:\n",
      "  Nodes: [Node(id='Prompt Stealing Attacks', type='Paper', properties={'title': 'Prompt Stealing Attacks', 'summary': 'The paper proposes a novel attack against large language models (LLMs) named prompt stealing attacks, which aim to steal well-designed prompts based on generated answers. The attack consists of two primary modules: the parameter extractor and the prompt reconstruction. The parameter extractor identifies the type of prompts and predicts roles or contexts used, while the prompt reconstructor generates reversed prompts similar to the original ones. The study highlights the security issues in LLMs and adds a new dimension to prompt engineering.'})]\n",
      "  Relationships: [Relationship(source=Node(id='Prompt Stealing Attacks', type='Paper'), target=Node(id='Prompt Engineering', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Stealing Attacks', type='Paper'), target=Node(id='Security Issues On Llms', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 16:\n",
      "  Nodes: [Node(id='Promptset', type='Paper', properties={'title': 'PromptSet', 'summary': 'A novel dataset with more than 61,000 unique developer prompts used in open source Python programs, introducing the notion of a static linter for prompts.'}), Node(id='Large Language Models', type='Topic'), Node(id='Prompting', type='Topic'), Node(id='Static Linter', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Promptset', type='Paper'), target=Node(id='Large Language Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Promptset', type='Paper'), target=Node(id='Prompting', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Promptset', type='Paper'), target=Node(id='Static Linter', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 17:\n",
      "  Nodes: [Node(id='Interaction With Large Language Models', type='Topic'), Node(id='Prompt', type='Topic'), Node(id='Prompt Design', type='Topic'), Node(id='Prompt Engineering', type='Topic'), Node(id='Prompt Editing Behavior', type='Topic'), Node(id='Prompt Engineering Practices', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Interaction With Large Language Models', type='Topic'), target=Node(id='Prompt', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt', type='Topic'), target=Node(id='Prompt Design', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Prompt Design', type='Topic'), target=Node(id='Prompt Engineering', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Prompt Engineering', type='Topic'), target=Node(id='Prompt Editing Behavior', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Prompt Engineering', type='Topic'), target=Node(id='Prompt Engineering Practices', type='Topic'), type='RELATED_TO')]\n",
      "---\n",
      "Document 18:\n",
      "  Nodes: [Node(id='Large Language Models', type='Topic'), Node(id='Security Tasks Automation', type='Topic'), Node(id='Security Operation Centers', type='Topic'), Node(id='Software Pentesting', type='Topic'), Node(id='Software Security Vulnerabilities', type='Topic'), Node(id='Owasp Benchmark Project 1.2', type='Topic'), Node(id='Sonarqube', type='Topic'), Node(id=\"Google'S Gemini-Pro\", type='Topic'), Node(id=\"Openai'S Gpt-3.5-Turbo\", type='Topic'), Node(id=\"Openai'S Gpt-4-Turbo\", type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Large Language Models', type='Topic'), target=Node(id='Security Tasks Automation', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Security Tasks Automation', type='Topic'), target=Node(id='Security Operation Centers', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Large Language Models', type='Topic'), target=Node(id='Software Pentesting', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Software Pentesting', type='Topic'), target=Node(id='Software Security Vulnerabilities', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Owasp Benchmark Project 1.2', type='Topic'), target=Node(id='Software Security Vulnerabilities', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Sonarqube', type='Topic'), target=Node(id='Software Security Vulnerabilities', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id=\"Google'S Gemini-Pro\", type='Topic'), target=Node(id='Large Language Models', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id=\"Openai'S Gpt-3.5-Turbo\", type='Topic'), target=Node(id='Large Language Models', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id=\"Openai'S Gpt-4-Turbo\", type='Topic'), target=Node(id='Large Language Models', type='Topic'), type='RELATED_TO')]\n",
      "---\n",
      "Document 19:\n",
      "  Nodes: [Node(id='Visual Navigation Using Large Language Models', type='Paper', properties={'summary': 'Recent efforts to enable visual navigation using large language models have mainly focused on developing complex prompt systems. These systems incorporate instructions, observations, and history into massive text prompts, which are then combined with pre-trained large language models to facilitate visual navigation. In contrast, our approach aims to fine-tune large language models for visual navigation without extensive prompt engineering. Our design involves a simple text prompt, current observations, and a history collector model that gathers information from previous observations as input. For output, our design provides a probability distribution of possible actions that the agent can take during navigation. We train our model using human demonstrations and collision signals from the Habitat-Matterport 3D Dataset (HM3D). Experimental results demonstrate that our method outperforms state-of-the-art behavior cloning methods and effectively reduces collision rates.'}), Node(id='Large Language Models', type='Topic'), Node(id='Visual Navigation', type='Topic'), Node(id='Prompt Engineering', type='Topic'), Node(id='Habitat-Matterport 3D Dataset (Hm3D)', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Visual Navigation Using Large Language Models', type='Paper'), target=Node(id='Large Language Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Visual Navigation Using Large Language Models', type='Paper'), target=Node(id='Visual Navigation', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Visual Navigation Using Large Language Models', type='Paper'), target=Node(id='Prompt Engineering', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Visual Navigation Using Large Language Models', type='Paper'), target=Node(id='Habitat-Matterport 3D Dataset (Hm3D)', type='Topic'), type='DISCUSSES')]\n",
      "---\n"
     ]
    }
   ],
   "source": [
    "# After converting to graph documents\n",
    "for i, doc in enumerate(graph_documents):\n",
    "    print(f\"Document {i}:\")\n",
    "    print(f\"  Nodes: {doc.nodes}\")\n",
    "    print(f\"  Relationships: {doc.relationships}\")\n",
    "    print(\"---\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Is our answer relevant to the question asked: {'score': 'yes'}\n"
     ]
    }
   ],
   "source": [
    "### Retrieval Grader\n",
    "\n",
    "from langchain.prompts import PromptTemplate\n",
    "from langchain_ollama import ChatOllama\n",
    "from langchain_core.output_parsers import JsonOutputParser\n",
    "\n",
    "# LLM\n",
    "llm = ChatOllama(model=local_llm, format=\"json\", temperature=0)\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    template=\"\"\"You are a grader assessing relevance \n",
    "    of a retrieved document to a user question. If the document contains keywords related to the user question, \n",
    "    grade it as relevant. It does not need to be a stringent test. The goal is to filter out erroneous retrievals. \n",
    "    \n",
    "    Give a binary score 'yes' or 'no' to indicate whether the document is relevant to the question.\n",
    "    Provide the binary score as a JSON with a single key 'score' and no preamble or explanation.\n",
    "     \n",
    "    Here is the retrieved document: \n",
    "    {document}\n",
    "    \n",
    "    Here is the user question: \n",
    "    {question}\n",
    "    \"\"\",\n",
    "    input_variables=[\"question\", \"document\"],\n",
    ")\n",
    "\n",
    "retrieval_grader = prompt | llm | JsonOutputParser()\n",
    "question = \"Do we have articles that talk about Prompt Engineering?\"\n",
    "docs = retriever.invoke(question)\n",
    "doc_txt = docs[1].page_content\n",
    "print(\n",
    "    f'Is our answer relevant to the question asked: {retrieval_grader.invoke({\"question\": question, \"document\": doc_txt})}'\n",
    ")"
   ]
  },
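  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal, self-contained sketch (not a LangChain API): if the local\n",
    "# model ever wraps its JSON verdict in extra text, a defensive parser\n",
    "# keeps the grading step from crashing. `parse_grade` is our own\n",
    "# hypothetical helper name.\n",
    "import json\n",
    "import re\n",
    "\n",
    "\n",
    "def parse_grade(raw: str, default: str = \"no\") -> str:\n",
    "    \"\"\"Extract a 'yes'/'no' score from a possibly noisy grader reply.\"\"\"\n",
    "    try:\n",
    "        return json.loads(raw).get(\"score\", default)\n",
    "    except (json.JSONDecodeError, AttributeError):\n",
    "        match = re.search(r'\"score\"\\s*:\\s*\"(yes|no)\"', raw)\n",
    "        return match.group(1) if match else default\n",
    "\n",
    "\n",
    "print(parse_grade('{\"score\": \"yes\"}'))  # yes\n",
    "print(parse_grade('Sure! {\"score\": \"no\"}'))  # no"
   ]
  },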
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Yes, we have articles that talk about Prompt Engineering. The context includes papers discussing core concepts, advanced techniques, and principles behind building Large Language Model (LLM)-based agents, as well as a survey of tools for prompt engineers. Additionally, there are studies on prompt engineering technologies to improve the quality of model outputs and proposed attacks against LLMs that aim to steal well-designed prompts.\n"
     ]
    }
   ],
   "source": [
    "### Generate\n",
    "\n",
    "from langchain.prompts import PromptTemplate\n",
    "from langchain import hub\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    template=\"\"\"You are an assistant for question-answering tasks. \n",
    "    Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. \n",
    "    Use three sentences maximum and keep the answer concise:\n",
    "    Question: {question} \n",
    "    Context: {context} \n",
    "    Answer: \n",
    "    \"\"\",\n",
    "    input_variables=[\"question\", \"context\"],\n",
    ")\n",
    "\n",
    "llm = ChatOllama(model=local_llm, temperature=0)\n",
    "\n",
    "\n",
    "def format_docs(docs):\n",
    "    return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
    "\n",
    "\n",
    "rag_chain = prompt | llm | StrOutputParser()\n",
    "\n",
    "question = \"Do we have articles that talk about Prompt Engineering?\"\n",
    "docs = retriever.invoke(question)\n",
    "generation = rag_chain.invoke({\"context\": format_docs(docs), \"question\": question})\n",
    "print(generation)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Generated Cypher:\n",
      "\u001b[32;1m\u001b[1;3mcypher\n",
      "MATCH (p:Paper)\n",
      "WHERE toLower(p.title) CONTAINS toLower(\"Multi-Agent\")\n",
      "RETURN p.title, p.id\n",
      "\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "{'query': 'What paper talks about Multi-Agent?', 'result': [{'p.title': 'Multi-Agent Assistant Code Generation (AgentCoder)', 'p.id': 'Multi-Agent Assistant Code Generation (Agentcoder)'}, {'p.title': 'Framework for Automatically Generating Process Models with Multi-Agent Orchestration (MAO)', 'p.id': 'This Article'}, {'p.title': 'Collaborative Multi-Agent, Multi-Reasoning-Path (CoMM) Prompting Framework', 'p.id': 'This Work'}], 'intermediate_steps': [{'query': 'cypher\\nMATCH (p:Paper)\\nWHERE toLower(p.title) CONTAINS toLower(\"Multi-Agent\")\\nRETURN p.title, p.id\\n'}]}\n"
     ]
    }
   ],
   "source": [
    "### Graph Generate\n",
    "\n",
    "from langchain.prompts import PromptTemplate\n",
    "from langchain.chains import GraphCypherQAChain\n",
    "from langchain_ollama import ChatOllama\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "\n",
    "cypher_prompt = PromptTemplate(\n",
    "    template=\"\"\"You are an expert at generating Cypher queries for Neo4j.\n",
    "    Use the following schema to generate a Cypher query that answers the given question.\n",
    "    Make the query flexible by using case-insensitive matching and partial string matching where appropriate.\n",
    "    Focus on searching paper titles as they contain the most relevant information.\n",
    "    \n",
    "    Schema:\n",
    "    {schema}\n",
    "    \n",
    "    Question: {question}\n",
    "    \n",
    "    Cypher Query:\"\"\",\n",
    "    input_variables=[\"schema\", \"question\"],\n",
    ")\n",
    "\n",
    "\n",
    "qa_prompt = PromptTemplate(\n",
    "    template=\"\"\"You are an assistant for question-answering tasks. \n",
    "    Use the following Cypher query results to answer the question. If you don't know the answer, just say that you don't know. \n",
    "    Use three sentences maximum and keep the answer concise. If topic information is not available, focus on the paper titles.\n",
    "    \n",
    "    Question: {question} \n",
    "    Cypher Query: {query}\n",
    "    Query Results: {context} \n",
    "    \n",
    "    Answer:\"\"\",\n",
    "    input_variables=[\"question\", \"query\", \"context\"],\n",
    ")\n",
    "\n",
    "llm = ChatOpenAI(model=\"gpt-4o\", temperature=0)\n",
    "\n",
    "graph_rag_chain = GraphCypherQAChain.from_llm(\n",
    "    cypher_llm=llm,\n",
    "    qa_llm=llm,\n",
    "    validate_cypher=True,\n",
    "    graph=graph,\n",
    "    verbose=True,\n",
    "    return_intermediate_steps=True,\n",
    "    return_direct=True,  # return raw Cypher results; qa_prompt is bypassed\n",
    "    cypher_prompt=cypher_prompt,\n",
    "    qa_prompt=qa_prompt,\n",
    ")\n",
    "\n",
    "question = \"What paper talks about Multi-Agent?\"\n",
    "generation = graph_rag_chain.invoke({\"query\": question})\n",
    "print(generation)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [],
   "source": [
    "### Composite Vector + Graph Generations\n",
    "\n",
    "from langchain.prompts import PromptTemplate\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    template=\"\"\"You are an assistant for question-answering tasks. \n",
    "    Use the following pieces of retrieved context from a vector store and a graph database to answer the question. If you don't know the answer, just say that you don't know. \n",
    "    Use three sentences maximum and keep the answer concise:\n",
    "    Question: {question} \n",
    "    Vector Context: {context} \n",
    "    Graph Context: {graph_context}\n",
    "    Answer: \n",
    "    \"\"\",\n",
    "    input_variables=[\"question\", \"context\", \"graph_context\"],\n",
    ")\n",
    "\n",
    "llm = ChatOllama(model=local_llm, temperature=0)\n",
    "\n",
    "# Example input data\n",
    "question = \"What papers talk about Multi-Agent?\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Document(metadata={'pk': 453568971862704139, 'summary': 'Leveraging multiple large language model (LLM) agents has shown to be a\\npromising approach for tackling complex tasks, while the effective design of\\nmultiple agents for a particular application remains an art. It is thus\\nintriguing to answer a critical question: Given a task, how can we build a team\\nof LLM agents to solve it effectively? Our new adaptive team-building paradigm\\noffers a flexible solution, realized through a novel agent design named Captain\\nAgent. It dynamically forms and manages teams for each step of a task-solving\\nprocess, utilizing nested group conversations and reflection to ensure diverse\\nexpertise and prevent stereotypical outputs, allowing for a flexible yet\\nstructured approach to problem-solving. A comprehensive evaluation across six\\nreal-world scenarios demonstrates that Captain Agent significantly outperforms\\nexisting multi-agent methods with 21.94% improvement in average accuracy,\\nproviding outstanding performance without requiring task-specific prompt\\nengineering. Our exploration of different backbone LLM and cost analysis\\nfurther shows that Captain Agent can improve the conversation quality of weak\\nLLM and achieve competitive performance with extremely low cost, which\\nilluminates the application of multi-agent systems.', 'title': 'Adaptive In-conversation Team Building for Language Model Agents', 'url': 'http://arxiv.org/abs/2405.19425v2'}, page_content='Leveraging multiple large language model (LLM) agents has shown to be a\\npromising approach for tackling complex tasks, while the effective design of\\nmultiple agents for a particular application remains an art. It is thus\\nintriguing to answer a critical question: Given a task, how can we build a team\\nof LLM agents to solve it effectively? Our new adaptive team-building paradigm\\noffers a flexible solution, realized through a novel agent design named Captain\\nAgent. It dynamically forms and manages teams for each step of a task-solving\\nprocess, utilizing nested group conversations and reflection to ensure diverse\\nexpertise and prevent stereotypical outputs, allowing for a flexible yet\\nstructured approach to problem-solving. A comprehensive evaluation across six\\nreal-world scenarios demonstrates that Captain Agent significantly outperforms\\nexisting multi-agent methods with 21.94% improvement in average accuracy,\\nproviding outstanding performance without requiring task-specific prompt\\nengineering. Our exploration of different backbone LLM and cost analysis\\nfurther shows that Captain Agent can improve the conversation quality of weak\\nLLM and achieve competitive performance with extremely low cost, which\\nilluminates the application of multi-agent systems.'), Document(metadata={'pk': 453568971862704140, 'summary': \"This technical report presents the Drama Engine, a novel framework for\\nagentic interaction with large language models designed for narrative purposes.\\nThe framework adapts multi-agent system principles to create dynamic,\\ncontext-aware companions that can develop over time and interact with users and\\neach other. Key features include multi-agent workflows with delegation, dynamic\\nprompt assembly, and model-agnostic design. The Drama Engine introduces unique\\nelements such as companion development, mood systems, and automatic context\\nsummarising. It is implemented in TypeScript. The framework's applications\\ninclude multi-agent chats and virtual co-workers for creative writing. The\\npaper discusses the system's architecture, prompt assembly process, delegation\\nmechanisms, and moderation techniques, as well as potential ethical\\nconsiderations and future extensions.\", 'title': 'Drama Engine: A Framework for Narrative Agents', 'url': 'http://arxiv.org/abs/2408.11574v1'}, page_content=\"This technical report presents the Drama Engine, a novel framework for\\nagentic interaction with large language models designed for narrative purposes.\\nThe framework adapts multi-agent system principles to create dynamic,\\ncontext-aware companions that can develop over time and interact with users and\\neach other. Key features include multi-agent workflows with delegation, dynamic\\nprompt assembly, and model-agnostic design. The Drama Engine introduces unique\\nelements such as companion development, mood systems, and automatic context\\nsummarising. It is implemented in TypeScript. The framework's applications\\ninclude multi-agent chats and virtual co-workers for creative writing. The\\npaper discusses the system's architecture, prompt assembly process, delegation\\nmechanisms, and moderation techniques, as well as potential ethical\\nconsiderations and future extensions.\"), Document(metadata={'pk': 453568971862704130, 'summary': 'The final frontier for simulation is the accurate representation of complex,\\nreal-world social systems. While agent-based modeling (ABM) seeks to study the\\nbehavior and interactions of agents within a larger system, it is unable to\\nfaithfully capture the full complexity of human-driven behavior. Large language\\nmodels (LLMs), like ChatGPT, have emerged as a potential solution to this\\nbottleneck by enabling researchers to explore human-driven interactions in\\npreviously unimaginable ways. Our research investigates simulations of human\\ninteractions using LLMs. Through prompt engineering, inspired by Park et al.\\n(2023), we present two simulations of believable proxies of human behavior: a\\ntwo-agent negotiation and a six-agent murder mystery game.', 'title': 'Exploring the Intersection of Large Language Models and Agent-Based Modeling via Prompt Engineering', 'url': 'http://arxiv.org/abs/2308.07411v1'}, page_content='The final frontier for simulation is the accurate representation of complex,\\nreal-world social systems. While agent-based modeling (ABM) seeks to study the\\nbehavior and interactions of agents within a larger system, it is unable to\\nfaithfully capture the full complexity of human-driven behavior. Large language\\nmodels (LLMs), like ChatGPT, have emerged as a potential solution to this\\nbottleneck by enabling researchers to explore human-driven interactions in\\npreviously unimaginable ways. Our research investigates simulations of human\\ninteractions using LLMs. Through prompt engineering, inspired by Park et al.\\n(2023), we present two simulations of believable proxies of human behavior: a\\ntwo-agent negotiation and a six-agent murder mystery game.'), Document(metadata={'pk': 453568971862704129, 'summary': 'Two ways has been discussed to unlock the reasoning capability of a large\\nlanguage model. The first one is prompt engineering and the second one is to\\ncombine the multiple inferences of large language models, or the multi-agent\\ndiscussion. Theoretically, this paper justifies the multi-agent discussion\\nmechanisms from the symmetry of agents. Empirically, this paper reports the\\nempirical results of the interplay of prompts and discussion mechanisms,\\nrevealing the empirical state-of-the-art performance of complex multi-agent\\nmechanisms can be approached by carefully developed prompt engineering. This\\npaper also proposes a scalable discussion mechanism based on conquer and merge,\\nproviding a simple multi-agent discussion solution with simple prompts but\\nstate-of-the-art performance.', 'title': 'On the Discussion of Large Language Models: Symmetry of Agents and Interplay with Prompts', 'url': 'http://arxiv.org/abs/2311.07076v1'}, page_content='Two ways has been discussed to unlock the reasoning capability of a large\\nlanguage model. The first one is prompt engineering and the second one is to\\ncombine the multiple inferences of large language models, or the multi-agent\\ndiscussion. Theoretically, this paper justifies the multi-agent discussion\\nmechanisms from the symmetry of agents. Empirically, this paper reports the\\nempirical results of the interplay of prompts and discussion mechanisms,\\nrevealing the empirical state-of-the-art performance of complex multi-agent\\nmechanisms can be approached by carefully developed prompt engineering. This\\npaper also proposes a scalable discussion mechanism based on conquer and merge,\\nproviding a simple multi-agent discussion solution with simple prompts but\\nstate-of-the-art performance.')]\n"
     ]
    }
   ],
   "source": [
    "# Get vector + graph answers\n",
    "docs = retriever.invoke(question)\n",
    "\n",
    "print(docs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The papers that talk about Multi-Agent are:\n",
      "\n",
      "* \"Adaptive In-conversation Team Building for Language Model Agents\"\n",
      "* \"Drama Engine: A Framework for Narrative Agents\"\n",
      "* \"Exploring the Intersection of Large Language Models and Agent-Based Modeling via Prompt Engineering\"\n",
      "* \"On the Discussion of Large Language Models: Symmetry of Agents and Interplay with Prompts\"\n",
      "\n",
      "These papers discuss various aspects of multi-agent systems, including team building, narrative agents, simulations of human interactions, and discussion mechanisms.\n"
     ]
    }
   ],
   "source": [
    "vector_context = rag_chain.invoke({\"context\": docs, \"question\": question})\n",
    "\n",
    "print(vector_context)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
      "Generated Cypher:\n",
      "\u001b[32;1m\u001b[1;3mcypher\n",
      "MATCH (p:Paper)\n",
      "WHERE toLower(p.title) CONTAINS toLower(\"multi-agent\")\n",
      "RETURN p.title, p.id\n",
      "\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "{'query': 'What paper talk about Multi-Agent?', 'result': [{'p.title': 'Multi-Agent Assistant Code Generation (AgentCoder)', 'p.id': 'Multi-Agent Assistant Code Generation (Agentcoder)'}, {'p.title': 'Framework for Automatically Generating Process Models with Multi-Agent Orchestration (MAO)', 'p.id': 'This Article'}, {'p.title': 'Collaborative Multi-Agent, Multi-Reasoning-Path (CoMM) Prompting Framework', 'p.id': 'This Work'}], 'intermediate_steps': [{'query': 'cypher\\nMATCH (p:Paper)\\nWHERE toLower(p.title) CONTAINS toLower(\"multi-agent\")\\nRETURN p.title, p.id\\n'}]}\n"
     ]
    }
   ],
   "source": [
    "graph_context = graph_rag_chain.invoke({\"query\": question})\n",
    "\n",
    "print(graph_context)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The papers that talk about Multi-Agent are:\n",
      "\n",
      "* \"Adaptive In-conversation Team Building for Language Model Agents\"\n",
      "* \"Drama Engine: A Framework for Narrative Agents\"\n",
      "* \"Exploring the Intersection of Large Language Models and Agent-Based Modeling via Prompt Engineering\"\n",
      "* \"On the Discussion of Large Language Models: Symmetry of Agents and Interplay with Prompts\"\n",
      "\n",
      "These papers discuss various aspects of multi-agent systems, including team building, narrative agents, simulations of human interactions, and discussion mechanisms.\n"
     ]
    }
   ],
   "source": [
    "composite_chain = prompt | llm | StrOutputParser()\n",
    "answer = composite_chain.invoke(\n",
    "    {\"question\": question, \"context\": vector_context, \"graph_context\": graph_context}\n",
    ")\n",
    "\n",
    "print(answer)"
   ]
  },
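  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A small self-contained sketch (our own helper, not part of the chain\n",
    "# above): the composite prompt lets the LLM fuse both contexts, but the\n",
    "# paper titles can also be merged deterministically, deduplicating\n",
    "# case-insensitively while preserving order.\n",
    "def merge_titles(graph_rows, vector_titles):\n",
    "    seen, merged = set(), []\n",
    "    titles = [row.get(\"p.title\", \"\") for row in graph_rows] + list(vector_titles)\n",
    "    for title in titles:\n",
    "        key = title.strip().lower()\n",
    "        if key and key not in seen:\n",
    "            seen.add(key)\n",
    "            merged.append(title)\n",
    "    return merged\n",
    "\n",
    "\n",
    "example_rows = [{\"p.title\": \"Multi-Agent Assistant Code Generation (AgentCoder)\"}]\n",
    "example_vec = [\"Drama Engine: A Framework for Narrative Agents\"]\n",
    "print(merge_titles(example_rows, example_vec))"
   ]
  },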
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'score': 'yes'}"
      ]
     },
     "execution_count": 48,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "### Hallucination Grader\n",
    "\n",
    "llm = ChatOllama(model=local_llm, format=\"json\", temperature=0)\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    template=\"\"\"You are a grader assessing whether \n",
    "    an answer is grounded in / supported by a set of facts. Give a binary score 'yes' or 'no' to indicate \n",
    "    whether the answer is grounded in / supported by a set of facts. Provide the binary score as a JSON with a \n",
    "    single key 'score' and no preamble or explanation.\n",
    "    \n",
    "    Here are the facts:\n",
    "    {documents} \n",
    "\n",
    "    Here is the answer: \n",
    "    {generation}\n",
    "    \"\"\",\n",
    "    input_variables=[\"generation\", \"documents\"],\n",
    ")\n",
    "\n",
    "hallucination_grader = prompt | llm | JsonOutputParser()\n",
    "hallucination_grader.invoke({\"documents\": docs, \"generation\": generation})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'score': 'yes'}"
      ]
     },
     "execution_count": 49,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "### Answer Grader\n",
    "\n",
    "llm = ChatOllama(model=local_llm, format=\"json\", temperature=0)\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    template=\"\"\"You are a grader assessing whether an \n",
    "    answer is useful to resolve a question. Give a binary score 'yes' or 'no' to indicate whether the answer is \n",
    "    useful to resolve a question. Provide the binary score as a JSON with a single key 'score' and no preamble or explanation.\n",
    "     \n",
    "    Here is the answer:\n",
    "    {generation} \n",
    "\n",
    "    Here is the question: {question}\n",
    "    \"\"\",\n",
    "    input_variables=[\"generation\", \"question\"],\n",
    ")\n",
    "\n",
    "answer_grader = prompt | llm | JsonOutputParser()\n",
    "answer_grader.invoke({\"question\": question, \"generation\": generation})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'datasource': 'vectorstore'}\n"
     ]
    }
   ],
   "source": [
    "### Router\n",
    "\n",
    "from langchain.prompts import PromptTemplate\n",
    "from langchain_community.chat_models import ChatOllama\n",
    "from langchain_core.output_parsers import JsonOutputParser\n",
    "\n",
    "llm = ChatOllama(model=local_llm, format=\"json\", temperature=0)\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    template=\"\"\"You are an expert at routing a user question to the most appropriate data source. \n",
    "    You have three options:\n",
    "    1. 'vectorstore': Use for questions about LLM agents, prompt engineering, and adversarial attacks.\n",
    "    2. 'graphrag': Use for questions that involve relationships between entities, such as authors, papers, and topics, or when the question requires understanding connections between concepts.\n",
    "    3. 'web_search': Use for all other questions or when current information is needed.\n",
    "\n",
    "    You do not need to be stringent with the keywords in the question related to these topics. \n",
    "    Choose the most appropriate option based on the nature of the question.\n",
    "\n",
    "    Return a JSON with a single key 'datasource' and no preamble or explanation. \n",
    "    The value should be one of: 'vectorstore', 'graphrag', or 'web_search'.\n",
    "    \n",
    "    Question to route: \n",
    "    {question}\"\"\",\n",
    "    input_variables=[\"question\"],\n",
    ")\n",
    "\n",
    "question_router = prompt | llm | JsonOutputParser()\n",
    "question = \"llm agent memory\"\n",
    "docs = retriever.get_relevant_documents(question)\n",
    "doc_txt = docs[1].page_content\n",
    "print(question_router.invoke({\"question\": question}))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [],
   "source": [
    "### Search\n",
    "\n",
    "from langchain_community.tools.tavily_search import TavilySearchResults\n",
    "\n",
    "web_search_tool = TavilySearchResults(k=3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We'll implement these as a control flow in LangGraph."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing_extensions import TypedDict\n",
    "from typing import List\n",
    "\n",
    "### State\n",
    "class GraphState(TypedDict):\n",
    "    \"\"\"\n",
    "    Represents the state of our graph.\n",
    "\n",
    "    Attributes:\n",
    "        question: question\n",
    "        generation: LLM generation\n",
    "        web_search: whether to add search\n",
    "        documents: list of documents\n",
    "        graph_context: results from graph search\n",
    "    \"\"\"\n",
    "\n",
    "    question: str\n",
    "    generation: str\n",
    "    web_search: str\n",
    "    documents: List[str]\n",
    "    graph_context: str\n",
    "\n",
    "\n",
    "from langchain.schema import Document\n",
    "\n",
    "### Nodes\n",
    "def retrieve(state):\n",
    "    \"\"\"\n",
    "    Retrieve documents from vectorstore\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        state (dict): New key added to state, documents, that contains retrieved documents\n",
    "    \"\"\"\n",
    "    print(\"---RETRIEVE---\")\n",
    "    question = state[\"question\"]\n",
    "\n",
    "    # Retrieval\n",
    "    documents = retriever.invoke(question)\n",
    "    return {\"documents\": documents, \"question\": question}\n",
    "\n",
    "\n",
    "def generate(state):\n",
    "    \"\"\"\n",
    "    Generate answer using RAG on retrieved documents and graph context\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        state (dict): New key added to state, generation, that contains LLM generation\n",
    "    \"\"\"\n",
    "    print(\"---GENERATE---\")\n",
    "    question = state[\"question\"]\n",
    "    documents = state.get(\"documents\", [])\n",
    "    graph_context = state.get(\"graph_context\", \"\")\n",
    "\n",
    "    # Composite RAG generation\n",
    "    generation = composite_chain.invoke(\n",
    "        {\"question\": question, \"context\": documents, \"graph_context\": graph_context}\n",
    "    )\n",
    "    return {\n",
    "        \"documents\": documents,\n",
    "        \"question\": question,\n",
    "        \"generation\": generation,\n",
    "        \"graph_context\": graph_context,\n",
    "    }\n",
    "\n",
    "\n",
    "def grade_documents(state):\n",
    "    \"\"\"\n",
    "    Determines whether the retrieved documents are relevant to the question\n",
    "    If any document is not relevant, we will set a flag to run web search\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        state (dict): Filtered out irrelevant documents and updated web_search state\n",
    "    \"\"\"\n",
    "\n",
    "    print(\"---CHECK DOCUMENT RELEVANCE TO QUESTION---\")\n",
    "    question = state[\"question\"]\n",
    "    documents = state[\"documents\"]\n",
    "\n",
    "    # Score each doc\n",
    "    filtered_docs = []\n",
    "    web_search = \"No\"\n",
    "    for d in documents:\n",
    "        score = retrieval_grader.invoke(\n",
    "            {\"question\": question, \"document\": d.page_content}\n",
    "        )\n",
    "        grade = score[\"score\"]\n",
    "        # Document relevant\n",
    "        if grade.lower() == \"yes\":\n",
    "            print(\"---GRADE: DOCUMENT RELEVANT---\")\n",
    "            filtered_docs.append(d)\n",
    "        # Document not relevant\n",
    "        else:\n",
    "            print(\"---GRADE: DOCUMENT NOT RELEVANT---\")\n",
    "            # We do not include the document in filtered_docs\n",
    "            # We set a flag to indicate that we want to run web search\n",
    "            web_search = \"Yes\"\n",
    "            continue\n",
    "    return {\"documents\": filtered_docs, \"question\": question, \"web_search\": web_search}\n",
    "\n",
    "\n",
    "def web_search(state):\n",
    "    \"\"\"\n",
    "    Web search based on the question\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        state (dict): Appended web results to documents\n",
    "    \"\"\"\n",
    "\n",
    "    print(\"---WEB SEARCH---\")\n",
    "    question = state[\"question\"]\n",
    "    documents = state.get(\"documents\", [])  # Use get() with a default empty list\n",
    "\n",
    "    # Web search\n",
    "    docs = web_search_tool.invoke({\"query\": question})\n",
    "    web_results = \"\\n\".join([d[\"content\"] for d in docs])\n",
    "    web_results = Document(page_content=web_results)\n",
    "    documents.append(web_results)\n",
    "\n",
    "    return {\"documents\": documents, \"question\": question}\n",
    "\n",
    "\n",
    "### Conditional edge\n",
    "def route_question(state):\n",
    "    print(\"---ROUTE QUESTION---\")\n",
    "    question = state[\"question\"]\n",
    "    print(question)\n",
    "    source = question_router.invoke({\"question\": question})\n",
    "    print(source)\n",
    "    print(source[\"datasource\"])\n",
    "\n",
    "    if source[\"datasource\"] == \"graphrag\":\n",
    "        print(\"---TRYING GRAPH SEARCH---\")\n",
    "        graph_result = graph_search({\"question\": question})\n",
    "        if graph_result[\"graph_context\"] != \"No results found in the graph database.\":\n",
    "            return \"graphrag\"\n",
    "        else:\n",
    "            print(\"---NO RESULTS IN GRAPH, FALLING BACK TO VECTORSTORE---\")\n",
    "            return \"retrieve\"\n",
    "    elif source[\"datasource\"] == \"vectorstore\":\n",
    "        print(\"---ROUTE QUESTION TO VECTORSTORE RAG---\")\n",
    "        return \"retrieve\"\n",
    "    elif source[\"datasource\"] == \"web_search\":\n",
    "        print(\"---ROUTE QUESTION TO WEB SEARCH---\")\n",
    "        return \"websearch\"\n",
    "\n",
    "\n",
    "def decide_to_generate(state):\n",
    "    \"\"\"\n",
    "    Determines whether to generate an answer, or add web search\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        str: Binary decision for next node to call\n",
    "    \"\"\"\n",
    "\n",
    "    print(\"---ASSESS GRADED DOCUMENTS---\")\n",
    "    question = state[\"question\"]\n",
    "    web_search = state[\"web_search\"]\n",
    "    filtered_documents = state[\"documents\"]\n",
    "\n",
    "    if web_search == \"Yes\":\n",
    "        # All documents have been filtered check_relevance\n",
    "        # We will re-generate a new query\n",
    "        print(\n",
    "            \"---DECISION: ALL DOCUMENTS ARE NOT RELEVANT TO QUESTION, INCLUDE WEB SEARCH---\"\n",
    "        )\n",
    "        return \"websearch\"\n",
    "    else:\n",
    "        # We have relevant documents, so generate answer\n",
    "        print(\"---DECISION: GENERATE---\")\n",
    "        return \"generate\"\n",
    "\n",
    "\n",
    "def graph_search(state):\n",
    "    \"\"\"\n",
    "    Perform GraphRAG search using Neo4j\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        state (dict): Updated state with graph search results\n",
    "    \"\"\"\n",
    "    print(\"---GRAPH SEARCH---\")\n",
    "    question = state[\"question\"]\n",
    "\n",
    "    # Use the graph_rag_chain to perform the search\n",
    "    result = graph_rag_chain.invoke({\"query\": question})\n",
    "\n",
    "    # Extract the relevant information from the result\n",
    "    # Adjust this based on what graph_rag_chain returns\n",
    "    graph_context = result.get(\"result\", \"\")\n",
    "\n",
    "    # You might want to combine this with existing documents or keep it separate\n",
    "    return {\"graph_context\": graph_context, \"question\": question}\n",
    "\n",
    "\n",
    "### Conditional edge\n",
    "def grade_generation_v_documents_and_question(state):\n",
    "    \"\"\"\n",
    "    Determines whether the generation is grounded in the document and answers question.\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        str: Decision for next node to call\n",
    "    \"\"\"\n",
    "\n",
    "    print(\"---CHECK HALLUCINATIONS---\")\n",
    "    question = state[\"question\"]\n",
    "    documents = state[\"documents\"]\n",
    "    generation = state[\"generation\"]\n",
    "\n",
    "    score = hallucination_grader.invoke(\n",
    "        {\"documents\": documents, \"generation\": generation}\n",
    "    )\n",
    "    grade = grade = score.get(\"score\", \"\").lower()\n",
    "\n",
    "    # Check hallucination\n",
    "    if grade == \"yes\":\n",
    "        print(\"---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---\")\n",
    "        # Check question-answering\n",
    "        print(\"---GRADE GENERATION vs QUESTION---\")\n",
    "        score = answer_grader.invoke({\"question\": question, \"generation\": generation})\n",
    "        grade = score[\"score\"]\n",
    "        if grade == \"yes\":\n",
    "            print(\"---DECISION: GENERATION ADDRESSES QUESTION---\")\n",
    "            return \"useful\"\n",
    "        else:\n",
    "            print(\"---DECISION: GENERATION DOES NOT ADDRESS QUESTION---\")\n",
    "            return \"not useful\"\n",
    "    else:\n",
    "        print(\"---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\")\n",
    "        return \"not supported\"\n",
    "\n",
    "\n",
    "from langgraph.graph import END, StateGraph\n",
    "\n",
    "workflow = StateGraph(GraphState)\n",
    "\n",
    "# Define the nodes\n",
    "workflow.add_node(\"websearch\", web_search)  # web search\n",
    "workflow.add_node(\"retrieve\", retrieve)  # retrieve\n",
    "workflow.add_node(\"grade_documents\", grade_documents)  # grade documents\n",
    "workflow.add_node(\"generate\", generate)  # generatae\n",
    "workflow.add_node(\"graphrag\", graph_search)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Graph Build"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Set conditional entry point\n",
    "workflow.set_conditional_entry_point(\n",
    "    route_question,\n",
    "    {\n",
    "        \"websearch\": \"websearch\",\n",
    "        \"retrieve\": \"retrieve\",\n",
    "        \"graphrag\": \"graphrag\",\n",
    "    },\n",
    ")\n",
    "\n",
    "# Add edges\n",
    "workflow.add_edge(\"retrieve\", \"grade_documents\")\n",
    "workflow.add_edge(\"graphrag\", \"generate\")\n",
    "workflow.add_conditional_edges(\n",
    "    \"grade_documents\",\n",
    "    decide_to_generate,\n",
    "    {\n",
    "        \"websearch\": \"websearch\",\n",
    "        \"generate\": \"generate\",\n",
    "    },\n",
    ")\n",
    "workflow.add_edge(\"websearch\", \"generate\")\n",
    "workflow.add_conditional_edges(\n",
    "    \"generate\",\n",
    "    grade_generation_v_documents_and_question,\n",
    "    {\n",
    "        \"not supported\": \"generate\",\n",
    "        \"useful\": END,\n",
    "        \"not useful\": \"websearch\",\n",
    "    },\n",
    ")"
   ]
  },
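  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once the workflow is compiled (next cell), the wiring can be sanity-checked by rendering the graph topology. This is an optional sketch using LangGraph's `get_graph()` helper; the available rendering methods may vary by langgraph version:\n",
    "\n",
    "```python\n",
    "# app = workflow.compile()\n",
    "# print(app.get_graph().draw_mermaid())  # Mermaid source for the routing graph\n",
    "```"
   ]
  },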
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "---ROUTE QUESTION---\n",
      "What are the types of Prompt Engineering?\n",
      "{'datasource': 'vectorstore'}\n",
      "vectorstore\n",
      "---ROUTE QUESTION TO VECTORSTORE RAG---\n",
      "---RETRIEVE---\n",
      "'Finished running: retrieve:'\n",
      "---CHECK DOCUMENT RELEVANCE TO QUESTION---\n",
      "---GRADE: DOCUMENT RELEVANT---\n",
      "---GRADE: DOCUMENT RELEVANT---\n",
      "---GRADE: DOCUMENT RELEVANT---\n",
      "---GRADE: DOCUMENT RELEVANT---\n",
      "---ASSESS GRADED DOCUMENTS---\n",
      "---DECISION: GENERATE---\n",
      "'Finished running: grade_documents:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n",
      "'Finished running: generate:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\n"
     ]
    },
    {
     "ename": "KeyboardInterrupt",
     "evalue": "",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mKeyboardInterrupt\u001b[0m                         Traceback (most recent call last)",
      "Cell \u001b[0;32mIn[54], line 8\u001b[0m\n\u001b[1;32m      5\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mpprint\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m pprint\n\u001b[1;32m      7\u001b[0m inputs \u001b[38;5;241m=\u001b[39m {\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mquestion\u001b[39m\u001b[38;5;124m\"\u001b[39m: \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mWhat are the types of Prompt Engineering?\u001b[39m\u001b[38;5;124m\"\u001b[39m}\n\u001b[0;32m----> 8\u001b[0m \u001b[38;5;28;43;01mfor\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43moutput\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;129;43;01min\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mapp\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mstream\u001b[49m\u001b[43m(\u001b[49m\u001b[43minputs\u001b[49m\u001b[43m)\u001b[49m\u001b[43m:\u001b[49m\n\u001b[1;32m      9\u001b[0m \u001b[43m    \u001b[49m\u001b[38;5;28;43;01mfor\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mkey\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mvalue\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;129;43;01min\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43moutput\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mitems\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[43m:\u001b[49m\n\u001b[1;32m     10\u001b[0m \u001b[43m        \u001b[49m\u001b[43mpprint\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43mf\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mFinished running: \u001b[39;49m\u001b[38;5;132;43;01m{\u001b[39;49;00m\u001b[43mkey\u001b[49m\u001b[38;5;132;43;01m}\u001b[39;49;00m\u001b[38;5;124;43m:\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/Library/Caches/pypoetry/virtualenvs/milvus-bootcamp-rag-MiiP0ihC-py3.11/lib/python3.11/site-packages/langgraph/pregel/__init__.py:1221\u001b[0m, in \u001b[0;36mPregel.stream\u001b[0;34m(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)\u001b[0m\n\u001b[1;32m   1210\u001b[0m \u001b[38;5;66;03m# Similarly to Bulk Synchronous Parallel / Pregel model\u001b[39;00m\n\u001b[1;32m   1211\u001b[0m \u001b[38;5;66;03m# computation proceeds in steps, while there are channel updates\u001b[39;00m\n\u001b[1;32m   1212\u001b[0m \u001b[38;5;66;03m# channel updates from step N are only visible in step N+1\u001b[39;00m\n\u001b[1;32m   1213\u001b[0m \u001b[38;5;66;03m# channels are guaranteed to be immutable for the duration of the step,\u001b[39;00m\n\u001b[1;32m   1214\u001b[0m \u001b[38;5;66;03m# with channel updates applied only at the transition between steps\u001b[39;00m\n\u001b[1;32m   1215\u001b[0m \u001b[38;5;28;01mwhile\u001b[39;00m loop\u001b[38;5;241m.\u001b[39mtick(\n\u001b[1;32m   1216\u001b[0m     input_keys\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39minput_channels,\n\u001b[1;32m   1217\u001b[0m     interrupt_before\u001b[38;5;241m=\u001b[39minterrupt_before,\n\u001b[1;32m   1218\u001b[0m     interrupt_after\u001b[38;5;241m=\u001b[39minterrupt_after,\n\u001b[1;32m   1219\u001b[0m     manager\u001b[38;5;241m=\u001b[39mrun_manager,\n\u001b[1;32m   1220\u001b[0m ):\n\u001b[0;32m-> 1221\u001b[0m \u001b[43m    \u001b[49m\u001b[38;5;28;43;01mfor\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43m_\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;129;43;01min\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mrunner\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mtick\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m   1222\u001b[0m \u001b[43m        
\u001b[49m\u001b[43mloop\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mtasks\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mvalues\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1223\u001b[0m \u001b[43m        \u001b[49m\u001b[43mtimeout\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mstep_timeout\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1224\u001b[0m \u001b[43m        \u001b[49m\u001b[43mretry_policy\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mretry_policy\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m   1225\u001b[0m \u001b[43m    \u001b[49m\u001b[43m)\u001b[49m\u001b[43m:\u001b[49m\n\u001b[1;32m   1226\u001b[0m \u001b[43m        \u001b[49m\u001b[38;5;66;43;03m# emit output\u001b[39;49;00m\n\u001b[1;32m   1227\u001b[0m \u001b[43m        \u001b[49m\u001b[38;5;28;43;01mfor\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mo\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;129;43;01min\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43moutput\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[43m:\u001b[49m\n\u001b[1;32m   1228\u001b[0m \u001b[43m            \u001b[49m\u001b[38;5;28;43;01myield\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mo\u001b[49m\n",
      "File \u001b[0;32m~/Library/Caches/pypoetry/virtualenvs/milvus-bootcamp-rag-MiiP0ihC-py3.11/lib/python3.11/site-packages/langgraph/pregel/runner.py:58\u001b[0m, in \u001b[0;36mPregelRunner.tick\u001b[0;34m(self, tasks, timeout, retry_policy)\u001b[0m\n\u001b[1;32m     56\u001b[0m end_time \u001b[38;5;241m=\u001b[39m timeout \u001b[38;5;241m+\u001b[39m time\u001b[38;5;241m.\u001b[39mmonotonic() \u001b[38;5;28;01mif\u001b[39;00m timeout \u001b[38;5;28;01melse\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m\n\u001b[1;32m     57\u001b[0m \u001b[38;5;28;01mwhile\u001b[39;00m futures:\n\u001b[0;32m---> 58\u001b[0m     done, _ \u001b[38;5;241m=\u001b[39m \u001b[43mconcurrent\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfutures\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mwait\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m     59\u001b[0m \u001b[43m        \u001b[49m\u001b[43mfutures\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m     60\u001b[0m \u001b[43m        \u001b[49m\u001b[43mreturn_when\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mconcurrent\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfutures\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mFIRST_COMPLETED\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m     61\u001b[0m \u001b[43m        \u001b[49m\u001b[43mtimeout\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mmax\u001b[39;49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m0\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mend_time\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m-\u001b[39;49m\u001b[43m \u001b[49m\u001b[43mtime\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mmonotonic\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[43m)\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mif\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mend_time\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01melse\u001b[39;49;00m\u001b[43m 
\u001b[49m\u001b[38;5;28;43;01mNone\u001b[39;49;00m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m     62\u001b[0m \u001b[43m    \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     63\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m done:\n\u001b[1;32m     64\u001b[0m         \u001b[38;5;28;01mbreak\u001b[39;00m  \u001b[38;5;66;03m# timed out\u001b[39;00m\n",
      "File \u001b[0;32m~/.pyenv/versions/3.11.8/lib/python3.11/concurrent/futures/_base.py:305\u001b[0m, in \u001b[0;36mwait\u001b[0;34m(fs, timeout, return_when)\u001b[0m\n\u001b[1;32m    301\u001b[0m         \u001b[38;5;28;01mreturn\u001b[39;00m DoneAndNotDoneFutures(done, not_done)\n\u001b[1;32m    303\u001b[0m     waiter \u001b[38;5;241m=\u001b[39m _create_and_install_waiters(fs, return_when)\n\u001b[0;32m--> 305\u001b[0m \u001b[43mwaiter\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mevent\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mwait\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtimeout\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    306\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m f \u001b[38;5;129;01min\u001b[39;00m fs:\n\u001b[1;32m    307\u001b[0m     \u001b[38;5;28;01mwith\u001b[39;00m f\u001b[38;5;241m.\u001b[39m_condition:\n",
      "File \u001b[0;32m~/.pyenv/versions/3.11.8/lib/python3.11/threading.py:629\u001b[0m, in \u001b[0;36mEvent.wait\u001b[0;34m(self, timeout)\u001b[0m\n\u001b[1;32m    627\u001b[0m signaled \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_flag\n\u001b[1;32m    628\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m signaled:\n\u001b[0;32m--> 629\u001b[0m     signaled \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_cond\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mwait\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtimeout\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    630\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m signaled\n",
      "File \u001b[0;32m~/.pyenv/versions/3.11.8/lib/python3.11/threading.py:327\u001b[0m, in \u001b[0;36mCondition.wait\u001b[0;34m(self, timeout)\u001b[0m\n\u001b[1;32m    325\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:    \u001b[38;5;66;03m# restore state no matter what (e.g., KeyboardInterrupt)\u001b[39;00m\n\u001b[1;32m    326\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m timeout \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[0;32m--> 327\u001b[0m         \u001b[43mwaiter\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43macquire\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m    328\u001b[0m         gotit \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mTrue\u001b[39;00m\n\u001b[1;32m    329\u001b[0m     \u001b[38;5;28;01melse\u001b[39;00m:\n",
      "\u001b[0;31mKeyboardInterrupt\u001b[0m: "
     ]
    }
   ],
   "source": [
    "# Compile\n",
    "app = workflow.compile()\n",
    "\n",
    "# Test\n",
    "from pprint import pprint\n",
    "\n",
    "inputs = {\"question\": \"What are the types of Prompt Engineering?\"}\n",
    "for output in app.stream(inputs):\n",
    "    for key, value in output.items():\n",
    "        pprint(f\"Finished running: {key}:\")\n",
    "pprint(value[\"generation\"])"
   ]
  },
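  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The run above got stuck: the local hallucination grader kept returning 'no', and the `\"not supported\" -> \"generate\"` edge loops back to `generate` with no limit, so the cell had to be interrupted. A common remedy is to cap retries in the state. The sketch below is illustrative; it assumes a hypothetical `retries` counter added to `GraphState` and incremented in `generate`:\n",
    "\n",
    "```python\n",
    "MAX_RETRIES = 3  # assumed budget, tune as needed\n",
    "\n",
    "def grade_generation_with_cap(state):\n",
    "    # Once the budget is spent, stop regenerating and fall back to web search\n",
    "    if state.get(\"retries\", 0) >= MAX_RETRIES:\n",
    "        return \"not useful\"\n",
    "    return grade_generation_v_documents_and_question(state)\n",
    "```"
   ]
  },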
  {
   "cell_type": "code",
   "execution_count": 58,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "---ROUTE QUESTION---\n",
      "Did Emmanuel Macron visit Germany recently?\n",
      "{'datasource': 'web_search'}\n",
      "web_search\n",
      "---ROUTE QUESTION TO WEB SEARCH---\n",
      "---WEB SEARCH---\n",
      "'Finished running: websearch:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---\n",
      "---GRADE GENERATION vs QUESTION---\n",
      "---DECISION: GENERATION ADDRESSES QUESTION---\n",
      "'Finished running: generate:'\n",
      "('Yes, Emmanuel Macron visited Germany recently. He arrived on May 26 for the '\n",
      " 'first state visit by a French president in 24 years. The trip was meant to '\n",
      " 'ease recent tensions and show unity between France and Germany.')\n"
     ]
    }
   ],
   "source": [
    "# Compile\n",
    "app = workflow.compile()\n",
    "\n",
    "# Test\n",
    "from pprint import pprint\n",
    "\n",
    "inputs = {\"question\": \"Did Emmanuel Macron visit Germany recently?\"}\n",
    "for output in app.stream(inputs):\n",
    "    for key, value in output.items():\n",
    "        pprint(f\"Finished running: {key}:\")\n",
    "pprint(value[\"generation\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "---ROUTE QUESTION---\n",
      "Which paper talks about Collaborative Multi-Agent?\n",
      "{'datasource': 'graphrag'}\n",
      "graphrag\n",
      "---TRYING GRAPH SEARCH---\n",
      "---GRAPH SEARCH---\n",
      "\n",
      "\n",
      "\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
      "Generated Cypher:\n",
      "\u001b[32;1m\u001b[1;3mcypher\n",
      "MATCH (p:Paper)\n",
      "WHERE toLower(p.title) CONTAINS toLower(\"Collaborative Multi-Agent\")\n",
      "RETURN p.title, p.id\n",
      "\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "---GRAPH SEARCH---\n",
      "\n",
      "\n",
      "\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
      "Generated Cypher:\n",
      "\u001b[32;1m\u001b[1;3mcypher\n",
      "MATCH (p:Paper)\n",
      "WHERE toLower(p.title) CONTAINS toLower(\"Collaborative Multi-Agent\")\n",
      "RETURN p.title, p.id\n",
      "\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "'Finished running: graphrag:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---\n",
      "---GRADE GENERATION vs QUESTION---\n",
      "---DECISION: GENERATION ADDRESSES QUESTION---\n",
      "'Finished running: generate:'\n",
      "('The paper \"Collaborative Multi-Agent, Multi-Reasoning-Path (CoMM) Prompting '\n",
      " 'Framework\" discusses Collaborative Multi-Agent.')\n"
     ]
    }
   ],
   "source": [
    "# Test\n",
    "from pprint import pprint\n",
    "\n",
    "inputs = {\"question\": \"Which paper talks about Collaborative Multi-Agent?\"}\n",
    "for output in app.stream(inputs):\n",
    "    for key, value in output.items():\n",
    "        pprint(f\"Finished running: {key}:\")\n",
    "pprint(value[\"generation\"])"
   ]
  },
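  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each item yielded by `app.stream` is a dict mapping the node that just finished to the updated graph state; the loops above overwrite `value` on every step, so `value[\"generation\"]` holds the final answer once streaming ends. Below is a minimal, self-contained sketch of that consumption pattern, with a stub generator standing in for the compiled graph (the stub and its strings are illustrative only, not part of the agent):\n",
    "\n",
    "```python\n",
    "# Stub that mimics app.stream(): yields {node_name: state} dicts in run order\n",
    "def fake_stream(inputs):\n",
    "    state = dict(inputs)\n",
    "    state[\"documents\"] = [\"retrieved doc about prompt engineering\"]\n",
    "    yield {\"retrieve\": state}\n",
    "    state = dict(state)\n",
    "    state[\"generation\"] = \"Types include zero-shot, few-shot, and chain-of-thought.\"\n",
    "    yield {\"generate\": state}\n",
    "\n",
    "value = None\n",
    "for output in fake_stream({\"question\": \"What are the types of Prompt Engineering?\"}):\n",
    "    for key, value in output.items():\n",
    "        print(f\"Finished running: {key}\")\n",
    "\n",
    "# After the loop, value is the state emitted by the last node\n",
    "answer = value[\"generation\"]\n",
    "print(answer)\n",
    "```"
   ]
  },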
  {
   "attachments": {
    "646a66c4-a22e-4740-95e3-ae7f2c2599df.png": {
     "image/png": "iVBORw0KGgoAAAANSUhEUgAAAQgAAAEICAYAAACj9mr/AAAAAXNSR0IArs4c6QAAAERlWElmTU0AKgAAAAgAAYdpAAQAAAABAAAAGgAAAAAAA6ABAAMAAAABAAEAAKACAAQAAAABAAABCKADAAQAAAABAAABCAAAAACxih4WAAAMDklEQVR4Ae3dwa5dqxFFUTvK//+yY732u5qNCgI2I81UgGLU1dKWOHJ+//n7n1/+Q4AAgX8R+M+//Hf+KwIECPwjICD8IRAg8KOAgPiRRoEAAQHhb4AAgR8FBMSPNAoECAgIfwMECPwoICB+pFEgQEBA+BsgQOBHAQHxI40CAQICwt8AAQI/CgiIH2kUCBAQEP4GCBD4UUBA/EijQICAgPA3QIDAjwIC4kcaBQIE/jsl+P3793SLo9dP/7mM8tm9f/VXw6n+p/tPz6/1q/ur81fXaz51vi+IElIn8LCAgHh4+K5OoAQERAmpE3hYQEA8PHxXJ1ACAqKE1Ak8LCAgHh6+qxMoAQFRQuoEHhYY/w6i7KbvsLX/tD59B6/10/vX/nX/Wl/91fo6f7r/6vXT/mv96vp0PtWfL4gSUifwsICAeHj4rk6gBARECakTeFhAQDw8fFcnUAICooTUCTwsICAeHr6rEygBAVFC6gQeFlj+O4iyXf2OW+/o1d+0vvp+1d/0/tP1df/af7q+fKb16m+6f/lM96/1viBKSJ3AwwIC4uHhuzqBEhAQJaRO4GEBAfHw8F2dQAkIiBJSJ/CwgIB4ePiuTqAEBEQJqRN4WGD77yBut6936tPfyau/ul/Nr9bX+bW/+loBXxBrfe1O4GoBAXH1+DRPYK2AgFjra3cCVwsIiKvHp3kCawUExFpfuxO4WkBAXD0+zRNYKyAg1vrancDVAn4HMRxfvePX7wCGx+fy6q82qPV1v1pf59f+tV59JuALYuZnNYFPCwiIT4/X5QjMBATEzM9qAp8WEBCfHq/LEZgJCIiZn9UEPi0gID49XpcjMBMQEDM/qwl8WmD77yC+/s5dvwOo+0/X7/7rnd6v7l/3q/NrfdVX71/nr677glgtbH8CFwsIiIuHp3UCqwUExGph+xO4WEBAXDw8rRNYLSAgVgvbn8DFAgLi4uFpncBqAQGxWtj+BC4WWP47iOk79sW2/7Re7+Tl8/X1Nd/p/Wv/qtd8av3tdV8Qt09Q/wQWCgiIhbi2JnC7gIC4fYL6J7BQQEAsxLU1gdsFBMTtE9Q/gYUCAmIhrq0J3C4gIG6foP4JLBQY/w6i3qkX9n7F1vWOfrtf9V/3ryFO19f+1X+t/3rdF8TXJ+x+BAYCAmKAZymBrwsIiK9P2P0IDAQExADPUgJfFxAQX5+w+xEYCAiIAZ6lBL4uICC+PmH3IzAQ+P33HfjPYP2v6Tt1HV/71/q6W+1f61efv3r/ut+0Xv2f7v/1+9f9fEGUkDqBhwUExMPDd3UCJSAgSkidwMMCAuLh4bs6gRIQECWkTuBhAQHx8PBdnUAJCIgSUifwsMD2fw9i+g4+nd3qd/ppf+VT/U/Pr/2rvzq/9q/1Vd/dX50/vf/q/X1B1F+YOoGHBQTEw8N3dQIlICBKSJ3AwwIC4uHhuzqBEhAQJaRO4GEBAfHw8F2dQAkIiBJSJ/CwwPh3EGVX77S1flq//fzpO3n5rd6//Ov8Wl/3O70+vV/5Te/vC2IqaD2BDwsIiA8P19UITAUExFTQegIfFhAQHx6uqxGYCgiIqaD1BD4sICA+PFxXIzAVEBBTQesJfFhg/P+Lsdpm9zvx7vOnvtP+p+fXO331V+urv+n+tb7Or/5X71/9Vd0XRAmpE3hYQEA8PHxXJ1ACAqKE1Ak8LCAgHh6+qxMoAQFRQuoEHhYQEA8P39UJlICAKCF1Ag8LLP/3IMq23oFXvyPX+dX/7vq0//Kt+9X5Va/za331V/Xav/pbvX/1V/Vp/74gasLqBB4WEBAPD9/VCZSAg
CghdQIPCwiIh4fv6gRKQECUkDqBhwUExMPDd3UCJSAgSkidwMMC438Pot5hV9tO33mn/dX9q79aP+2v1ld/tX51fbVP3b/Or/WrfVbv7wtitbD9CVwsICAuHp7WCawWEBCrhe1P4GIBAXHx8LROYLWAgFgtbH8CFwsIiIuHp3UCqwUExGph+xO4WGD5vwdx+jtxvXNPZ1v7T31W71/3r/NrfdXLp86v9dPza331V+un9en9fUFMJ2A9gQ8LCIgPD9fVCEwFBMRU0HoCHxYQEB8erqsRmAoIiKmg9QQ+LCAgPjxcVyMwFRAQU0HrCXxYYPnvIOoduN5pa/10NtPza/20v+n60/3qftX/6f51v+p/9/19QdQE1Qk8LCAgHh6+qxMoAQFRQuoEHhYQEA8P39UJlICAKCF1Ag8LCIiHh+/qBEpAQJSQOoGHBca/g6h33NW2dX69I1e99p/er86v/U/vb3X/q/1q//KvevnsrvuC2D0B5xM4WEBAHDwcrRHYLSAgdk/A+QQOFhAQBw9HawR2CwiI3RNwPoGDBQTEwcPRGoHdAgJi9wScT+BggfHvIHbfbfpOXf1P96/1df70HX31+dP96/5VL5/qr+p1ftVr/+q/6tP9q39fECWkTuBhAQHx8PBdnUAJCIgSUifwsICAeHj4rk6gBARECakTeFhAQDw8fFcnUAICooTUCTwsMP4dxPQddrp+Ors6f7p/ra937lo/re8+f9r/dH51/9X71/2n59f+VfcFUULqBB4WEBAPD9/VCZSAgCghdQIPCwiIh4fv6gRKQECUkDqBhwUExMPDd3UCJSAgSkidwMMC499B1Dty2U7X1/5Vr/PrHbrqdX7Va//qv+rT82v97fWp3+339wVx+wT1T2ChgIBYiGtrArcLCIjbJ6h/AgsFBMRCXFsTuF1AQNw+Qf0TWCggIBbi2prA7QIC4vYJ6p/AQoHff995/0z2n77TT9dX77V/rX+9PvzzSL6aT51f67OBxf+Daf+1fnH7v3xBrBa2P4GLBQTExcPTOoHVAgJitbD9CVwsICAuHp7WCawWEBCrhe1P4GIBAXHx8LROYLWAgFgtbH8CFwuMfwdx8d3/L63f/g4/Raj7T9/xa//qf3p+7V/91fm1vs6vep1f631BlJA6gYcFBMTDw3d1AiUgIEpIncDDAgLi4eG7OoESEBAlpE7gYQEB8fDwXZ1ACQiIElIn8LDA+P8XY/U77u7ZTN+Rq//a/3bf6r/uX/Xav/xrfZ1f+1e99t/dny+ImqA6gYcFBMTDw3d1AiUgIEpIncDDAgLi4eG7OoESEBAlpE7gYQEB8fDwXZ1ACQiIElIn8LDA+HcQZVfvvLV+db3emev8ul/tX/U6v+q1/7T/On+6f62v86u+ev/yr/52131B7J6A8wkcLCAgDh6O1gjsFhAQuyfgfAIHCwiIg4ejNQK7BQTE7gk4n8DBAgLi4OFojcBuAQGxewLOJ3CwwPLfQdTdV78Tr37nrvvV+avvX/1Vfdp/3a/2r/6qXufX+ml9er/d/fuCmP4FWE/gwwIC4sPDdTUCUwEBMRW0nsCHBQTEh4fragSmAgJiKmg9gQ8LCIgPD9fVCEwFBMRU0HoCHxbY/juI2213v1PXO/vq/qbnT/ur8+vvq86v/Wt91au/3XVfELsn4HwCBwsIiIOHozUCuwUExO4JOJ/AwQIC4uDhaI3AbgEBsXsCzidwsICAOHg4WiOwW0BA7J6A8wkcLOB3EMPh1Dt5bb/6nbz6q/N3ry+/ab3uN91/ur76m86v+vMFUULqBB4WEBAPD9/VCZSAgCghdQIPCwiIh4fv6gRKQECUkDqBhwUExMPDd3UCJSAgSkidwMMC238HUe+8p8+m3qGr/+n9V58/3X+6vvxW7z+dz3R93W+6f/n6gighdQIPCwiIh4fv6gRKQECUkDqBhwUExMPDd3UCJSAgSkidwMMCAuLh4bs6gRIQECWkTuBhgeW/g6h33K/b1
zv1ap/p+dP1Nd/av9av9lu9f91vd90XxO4JOJ/AwQIC4uDhaI3AbgEBsXsCzidwsICAOHg4WiOwW0BA7J6A8wkcLCAgDh6O1gjsFhAQuyfgfAIHC/z++w795+D+tEaAwEYBXxAb8R1N4HQBAXH6hPRHYKOAgNiI72gCpwsIiNMnpD8CGwUExEZ8RxM4XUBAnD4h/RHYKCAgNuI7msDpAgLi9Anpj8BGAQGxEd/RBE4XEBCnT0h/BDYKCIiN+I4mcLqAgDh9QvojsFFAQGzEdzSB0wUExOkT0h+BjQICYiO+owmcLvA/prAiSKAHhH8AAAAASUVORK5CYII="
    },
    "d90b545b-7fc3-4d01-a952-d4db9bea5453.png": {
     "image/png": "iVBORw0KGgoAAAANSUhEUgAAAOgAAADoCAYAAADlqah4AAAAAXNSR0IArs4c6QAAAERlWElmTU0AKgAAAAgAAYdpAAQAAAABAAAAGgAAAAAAA6ABAAMAAAABAAEAAKACAAQAAAABAAAA6KADAAQAAAABAAAA6AAAAAB0OSBrAAAJkElEQVR4Ae3cUa4dtw4EwHcfsv8tO0YWoDbQYaiZKf/SIqXSacyHYP/8+v3nf/4QIHClwP+v3JVNESDwj4CA+iEQuFhAQC++HFsjIKB+AwQuFhDQiy/H1ggIqN8AgYsFBPTiy7E1AgLqN0DgYgEBvfhybI2AgPoNELhYQEAvvhxbIyCgfgMELhYQ0Isvx9YICKjfAIGLBQT04suxNQJ/tQQ/Pz9ti6vXt/9cNvm0/RPe9vx2f2n97fX2fn1Bb79h+/u0gIB++vod/nYBAb39huzv0wIC+unrd/jbBQT09huyv08LCOinr9/hbxcQ0NtvyP4+LVC/gya99h0o9W/r6Z0w9W/XT/dP/tP7T/PT+dv1qX9bn/bzBW1vyHoCgwICOoirNYFWQEBbQesJDAoI6CCu1gRaAQFtBa0nMCggoIO4WhNoBQS0FbSewKDA+Dto2vv0O9L0O1rqn86X6skvzU/rU326f5qf6q1f6r99fl/QdEPqBBYFBHQR32gCSUBAk5A6gUUBAV3EN5pAEhDQJKROYFFAQBfxjSaQBAQ0CakTWBRYfwddPPu/Mrp9h0vvbKl/qreHbPun87X7e/t6X9C337DzPVpAQB99fTb/dgEBffsNO9+jBQT00ddn828XENC337DzPVpAQB99fTb/dgEBffsNO9+jBbyDltfXvvOld8bUP61Px0v90/p2fur/9bov6Nd/Ac5/tYCAXn09Nvd1AQH9+i/A+a8WENCrr8fmvi4goF//BTj/1QICevX12NzXBQT0678A579aYP0dtH2H29ZN74DT55vun8637T99/u3z+YJu34D5BA4CAnrAUSKwLSCg2zdgPoGDgIAecJQIbAsI6PYNmE/gICCgBxwlAtsCArp9A+YTOAiMv4Pe/o52sHlEKfmmd8Lp9S1i2l/b//b1vqC335D9fVpAQD99/Q5/u4CA3n5D9vdpAQH99PU7/O0CAnr7DdnfpwUE9NPX7/C3Cwjo7Tdkf58W+Pn9Tvbr0wKXHz69A7bXl/q3PO3+2vlPX+8L+vQbtP9XCwjoq6/X4Z4uIKBPv0H7f7WAgL76eh3u6QIC+vQbtP9XCwjoq6/X4Z4uIKBPv0H7f7VA/Q46/Y7W6qd3uNv3357/9vXt/dy+vvX3BW0FrScwKCCgg7haE2gFBLQVtJ7AoICADuJqTaAVENBW0HoCgwICOoirNYFWQEBbQesJDAqM/7+46Z1q8Gx/1Hp6f+mdNc1P6//okIe/lOYflv5TSvtr+6f5qb49P+0v1X1Bk5A6gUUBAV3EN5pAEhDQJKROYFFAQBfxjSaQBAQ0CakTWBQQ0EV8owkkAQFNQuoEFgXG30HTO1k6+/Q7Vtpfmp/Wp/Olepqf1qf9pXrqn/aX+qf1aX7qn9ZPz2/7+4KmG1QnsCggoIv4RhNIAgKahNQJLAoI6CK+0QSSgIAmIXUCiwICuohvNIEkIKBJSJ3AokD9/+KmvbfvVKl/qrfvUGn/qX9an/bf1tv9tevb/bfr0/5T//b+2vm+oOmG1AksCgjoIr7RBJKAgCYhdQKLAgK6iG80gSQgoElIncCigIAu4htNIAkIaBJSJ7AoMP7vQafP1r4zpXeutn86/3T/9nxpfXu+1D/5pPWp3u4/rW/rvqCtoPUEBgUEdBBXawKtgIC2gtYTGBQQ0EFcrQm0AgLaClpPYFBAQAdxtSbQCghoK2g9gUGB+h20fWdKZ2vfwab7p/2l+a1fOz/tL/Wf3n/qP72/5DNd9wWdFtafQCEgoAWepQSmBQR0Wlh/AoWAgBZ4lhKYFhDQaWH9CRQCA
lrgWUpgWkBAp4X1J1AI1O+gaXZ6p0rrp+vb72zbPun8yT/tP/VP69P81D+tT/On+6f9+YImIXUCiwICuohvNIEkIKBJSJ3AooCALuIbTSAJCGgSUiewKCCgi/hGE0gCApqE1AksCoy/g6azTb8zpf7T72Dp/NP16fMl3/Z8af9t/3b99P58Qdsbsp7AoICADuJqTaAVENBW0HoCgwICOoirNYFWQEBbQesJDAoI6CCu1gRaAQFtBa0nMCiw/g7anq19h0vr0zvX9vrkl/aX1rfnT/3b/aX+2/tP89P+fUGTkDqBRQEBXcQ3mkASENAkpE5gUUBAF/GNJpAEBDQJqRNYFBDQRXyjCSQBAU1C6gQWBep30Padp12/aPdHo9M7Xzp/uz5tsp3f9k/rUz35pPXp/Gn9dN0XdFpYfwKFgIAWeJYSmBYQ0Glh/QkUAgJa4FlKYFpAQKeF9SdQCAhogWcpgWkBAZ0W1p9AIVC/g7bvUMXe/5Ol0+9kyS/Nv319uqR2/6l/W0/7S/3T/aX1vqBJSJ3AooCALuIbTSAJCGgSUiewKCCgi/hGE0gCApqE1AksCgjoIr7RBJKAgCYhdQKLAvU7aNp7+w6U+rf16Xeu1D/5tOtbn+n16fzT81P/tL90P6l/qvuCJiF1AosCArqIbzSBJCCgSUidwKKAgC7iG00gCQhoElInsCggoIv4RhNIAgKahNQJLAqMv4Oms02/I6V3rLS/VG/3n9a3+0/9t8+X5rf19vxpfns/qb8vaBJSJ7AoIKCL+EYTSAICmoTUCSwKCOgivtEEkoCAJiF1AosCArqIbzSBJCCgSUidwKLA+jvo4tn/k9HtO1l6x2v7t+sTYtp/Wp/2t90/zU/7T+f3BU1C6gQWBQR0Ed9oAklAQJOQOoFFAQFdxDeaQBIQ0CSkTmBRQEAX8Y0mkAQENAmpE1gU8A46jJ/eydL49I6W+qf1aX7qn9ZPz0/90/5Tve2ffFLdFzQJqRNYFBDQRXyjCSQBAU1C6gQWBQR0Ed9oAklAQJOQOoFFAQFdxDeaQBIQ0CSkTmBRYP0dNL0zLdr8K6Onz9f2T++ALULqn/af6ql/2n/qn9ZP131Bp4X1J1AICGiBZymBaQEBnRbWn0AhIKAFnqUEpgUEdFpYfwKFgIAWeJYSmBYQ0Glh/QkUAuPvoO07VXG2Ryy93ad9J0znS/XpS9yen87nC5qE1AksCgjoIr7RBJKAgCYhdQKLAgK6iG80gSQgoElIncCigIAu4htNIAkIaBJSJ7Ao8PP7nevX4nyjCRA4CPiCHnCUCGwLCOj2DZhP4CAgoAccJQLbAgK6fQPmEzgICOgBR4nAtoCAbt+A+QQOAgJ6wFEisC0goNs3YD6Bg4CAHnCUCGwLCOj2DZhP4CAgoAccJQLbAgK6fQPmEzgICOgBR4nAtoCAbt+A+QQOAgJ6wFEisC3wN9NKg/9M3GiyAAAAAElFTkSuQmCC"
    }
   },
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Give us a star on GitHub ⭐️\n",
    "\n",
    "![image.png](attachment:d90b545b-7fc3-4d01-a952-d4db9bea5453.png)\n",
    "\n",
    "# Add me on LinkedIn\n",
    "![image.png](attachment:646a66c4-a22e-4740-95e3-ae7f2c2599df.png)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
