{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a href=\"https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/use_cases/agents/langchain/langgraph-rag-agent-local.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "! pip install -U langchain_community arxiv tiktoken langchainhub pymilvus langchain langgraph tavily-python sentence-transformers langchain-milvus langchain-ollama langchain-huggingface beautifulsoup4 langchain-experimental neo4j json-repair langchain-openai"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# LangGraph GraphRAG agent with Llama 3.1 and GPT-4o\n",
     "\n",
     "\n",
     "Let's build an advanced RAG system with a GraphRAG agent that runs a combination of Llama 3.1 and GPT-4o, serving Llama 3.1 locally via Ollama. The idea is to use GPT-4o for the harder tasks, like generating the Neo4j Cypher query, and Llama 3.1 for everything else.\n",
    "\n",
    "## Ideas\n",
    "\n",
    "We'll combine ideas from three RAG papers into a RAG agent:\n",
    "\n",
     "- **Routing:** Adaptive RAG ([paper](https://arxiv.org/abs/2403.14403)). Route questions to different retrieval approaches\n",
     "- **Fallback:** Corrective RAG ([paper](https://arxiv.org/pdf/2401.15884.pdf)). Fall back to web search if the retrieved docs are not relevant to the query\n",
     "- **Self-correction:** Self-RAG ([paper](https://arxiv.org/abs/2310.11511)). Fix answers that contain hallucinations or don't address the question\n",
    "\n",
    "![langgraph_adaptive_rag.png](imgs/RAG_Agent_langGraph.png)\n",
    "\n",
     "Note that this will incorporate [a few general ideas for agents](https://www.deeplearning.ai/the-batch/how-agents-can-improve-llm-performance/):\n",
    "\n",
    "- **Reflection**: The self-correction mechanism is a form of reflection, where the LangGraph agent reflects on its retrieval and generations\n",
    "- **Planning**: The control flow laid out in the graph is a form of planning \n",
    "- **Tool use**: Specific nodes in the control flow (e.g., web search) will use tools\n",
    "\n",
    "## Local models\n",
    "\n",
    "### LLM\n",
    "\n",
     "Use [Ollama](https://ollama.ai/) and [llama3.1](https://ollama.com/library/llama3.1):\n",
     "\n",
     "```\n",
     "ollama pull llama3.1\n",
    "```\n",
    "\n",
     "### Env Variables\n",
     "\n",
     "Variables to set in a `.env` file (loaded below via `load_dotenv`) or exported in the environment before starting:\n",
    "\n",
    "Required:\n",
    "```\n",
    "OPENAI_API_KEY=sk-...\n",
    "TAVILY_API_KEY=tvly-...\n",
    "NEO4J_URI=...\n",
    "NEO4J_USERNAME=...\n",
    "NEO4J_PASSWORD=...\n",
    "```\n",
    "\n",
    "### Search\n",
    "\n",
    "Uses [Tavily](https://tavily.com/#api)"
   ]
  },
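  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before wiring the real graph up with LangGraph, the control flow above (route, grade, fall back, self-correct) can be sketched framework-free. The function names and keyword heuristics below are illustrative placeholders only, not the actual LLM-backed graders we build later:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Framework-free sketch of the agent's control flow (illustrative heuristics only;\n",
    "# the real agent replaces each function with an LLM-backed node).\n",
    "def route_question(question: str) -> str:\n",
    "    # Adaptive RAG: indexed topics go to the vectorstore, everything else to web search\n",
    "    indexed_topics = [\"agent\", \"prompt\", \"llm\"]\n",
    "    return \"vectorstore\" if any(t in question.lower() for t in indexed_topics) else \"web_search\"\n",
    "\n",
    "\n",
    "def grade_documents(question: str, docs: list) -> str:\n",
    "    # Corrective RAG: fall back to web search when no doc shares terms with the question\n",
    "    terms = question.lower().split()\n",
    "    relevant = [d for d in docs if any(t in d.lower() for t in terms)]\n",
    "    return \"generate\" if relevant else \"web_search\"\n",
    "\n",
    "\n",
    "def grade_generation(answer: str, docs: list) -> str:\n",
    "    # Self-RAG: accept only answers that overlap with the retrieved context\n",
    "    grounded = any(w in d.lower() for d in docs for w in answer.lower().split())\n",
    "    return \"useful\" if grounded else \"retry\"\n",
    "\n",
    "\n",
    "docs = [\"LLM agents combine prompt engineering with tool use.\"]\n",
    "print(route_question(\"What are LLM agents?\"))          # vectorstore\n",
    "print(grade_documents(\"What are LLM agents?\", docs))   # generate\n",
    "print(route_question(\"Weather in Paris today?\"))       # web_search\n"
   ]
  },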
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from dotenv import load_dotenv\n",
    "import os\n",
    "\n",
    "load_dotenv()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.globals import set_verbose, set_debug\n",
    "\n",
    "set_debug(False)\n",
    "set_verbose(False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "### LLM\n",
    "\n",
    "local_llm = \"llama3.1\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import arxiv\n",
    "\n",
    "search_query = \"agent OR 'large language model' OR 'prompt engineering'\"\n",
    "max_results = 20\n",
    "\n",
    "# Fetch papers from arXiv\n",
    "client = arxiv.Client()\n",
    "search = arxiv.Search(\n",
    "    query=search_query, max_results=max_results, sort_by=arxiv.SortCriterion.Relevance\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Number of papers: 20\n",
      "Number of chunks: 20\n"
     ]
     }
   ],
   "source": [
    "### Milvus Lite Vectorstore\n",
    "\n",
     "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
     "from langchain_milvus import Milvus\n",
     "from langchain_huggingface import HuggingFaceEmbeddings\n",
    "\n",
    "\n",
    "docs = []\n",
    "for result in client.results(search):\n",
    "    docs.append(\n",
    "        {\"title\": result.title, \"summary\": result.summary, \"url\": result.entry_id}\n",
    "    )\n",
    "\n",
    "text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(\n",
    "    chunk_size=500, chunk_overlap=50\n",
    ")\n",
    "doc_splits = text_splitter.create_documents(\n",
    "    [doc[\"summary\"] for doc in docs], metadatas=docs\n",
    ")\n",
    "\n",
    "print(f\"Number of papers: {len(docs)}\")\n",
    "print(f\"Number of chunks: {len(doc_splits)}\")\n",
    "\n",
    "\n",
    "# Add to Milvus\n",
    "vectorstore = Milvus.from_documents(\n",
    "    documents=doc_splits,\n",
    "    collection_name=\"rag_milvus\",\n",
     "    embedding=HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-mpnet-base-v2\"),\n",
    "    connection_args={\"uri\": \"./milvus_ingest.db\"},\n",
    ")\n",
    "retriever = vectorstore.as_retriever()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
     "from langchain_ollama import ChatOllama\n",
    "\n",
    "llm = ChatOllama(model=local_llm, format=\"json\", temperature=0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Graph documents: 20\n",
      "Nodes from 1st graph doc:[Node(id='Prompt Design And Engineering Has Rapidly Become Essential For Maximizing The Potential Of Large Language Models', type='Paper', properties={'title': 'Prompt design and engineering has rapidly become essential for maximizing the potential of large language models'}), Node(id='Core Concepts', type='Topic'), Node(id='Advanced Techniques Like Chain-Of-Thought And Reflection', type='Topic'), Node(id='Principles Behind Building Llm-Based Agents', type='Topic'), Node(id='Survey Of Tools For Prompt Engineers', type='Topic')]\n",
      "Relationships from 1st graph doc:[Relationship(source=Node(id='Prompt Design And Engineering Has Rapidly Become Essential For Maximizing The Potential Of Large Language Models', type='Paper'), target=Node(id='Core Concepts', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Design And Engineering Has Rapidly Become Essential For Maximizing The Potential Of Large Language Models', type='Paper'), target=Node(id='Advanced Techniques Like Chain-Of-Thought And Reflection', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Design And Engineering Has Rapidly Become Essential For Maximizing The Potential Of Large Language Models', type='Paper'), target=Node(id='Principles Behind Building Llm-Based Agents', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Design And Engineering Has Rapidly Become Essential For Maximizing The Potential Of Large Language Models', type='Paper'), target=Node(id='Survey Of Tools For Prompt Engineers', type='Topic'), type='DISCUSSES')]\n"
     ]
    }
   ],
   "source": [
    "# GraphRAG Setup\n",
    "from langchain_community.graphs import Neo4jGraph\n",
    "from langchain_experimental.graph_transformers import LLMGraphTransformer\n",
    "from langchain_core.documents import Document\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_ollama import ChatOllama\n",
    "\n",
    "graph = Neo4jGraph()\n",
    "\n",
     "graph_llm = ChatOpenAI(temperature=0, model=\"gpt-4o\")\n",
    "\n",
    "graph_transformer = LLMGraphTransformer(\n",
    "    llm=graph_llm,\n",
    "    allowed_nodes=[\"Paper\", \"Author\", \"Topic\"],\n",
    "    node_properties=[\"title\", \"summary\", \"url\"],\n",
    "    allowed_relationships=[\"AUTHORED\", \"DISCUSSES\", \"RELATED_TO\"],\n",
    ")\n",
    "\n",
    "graph_documents = graph_transformer.convert_to_graph_documents(doc_splits)\n",
    "\n",
    "graph.add_graph_documents(graph_documents)\n",
    "\n",
    "print(f\"Graph documents: {len(graph_documents)}\")\n",
    "print(f\"Nodes from 1st graph doc:{graph_documents[0].nodes}\")\n",
    "print(f\"Relationships from 1st graph doc:{graph_documents[0].relationships}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Document 0:\n",
      "  Nodes: [Node(id='Prompt Design And Engineering Has Rapidly Become Essential For Maximizing The Potential Of Large Language Models', type='Paper', properties={'title': 'Prompt design and engineering has rapidly become essential for maximizing the potential of large language models'}), Node(id='Core Concepts', type='Topic'), Node(id='Advanced Techniques Like Chain-Of-Thought And Reflection', type='Topic'), Node(id='Principles Behind Building Llm-Based Agents', type='Topic'), Node(id='Survey Of Tools For Prompt Engineers', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Prompt Design And Engineering Has Rapidly Become Essential For Maximizing The Potential Of Large Language Models', type='Paper'), target=Node(id='Core Concepts', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Design And Engineering Has Rapidly Become Essential For Maximizing The Potential Of Large Language Models', type='Paper'), target=Node(id='Advanced Techniques Like Chain-Of-Thought And Reflection', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Design And Engineering Has Rapidly Become Essential For Maximizing The Potential Of Large Language Models', type='Paper'), target=Node(id='Principles Behind Building Llm-Based Agents', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Design And Engineering Has Rapidly Become Essential For Maximizing The Potential Of Large Language Models', type='Paper'), target=Node(id='Survey Of Tools For Prompt Engineers', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 1:\n",
      "  Nodes: [Node(id='Two Ways To Unlock The Reasoning Capability Of A Large Language Model', type='Paper', properties={'title': 'Two ways to unlock the reasoning capability of a large language model'}), Node(id='Prompt Engineering', type='Topic'), Node(id='Multi-Agent Discussion', type='Topic'), Node(id='Scalable Discussion Mechanism Based On Conquer And Merge', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Two Ways To Unlock The Reasoning Capability Of A Large Language Model', type='Paper'), target=Node(id='Prompt Engineering', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Two Ways To Unlock The Reasoning Capability Of A Large Language Model', type='Paper'), target=Node(id='Multi-Agent Discussion', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Two Ways To Unlock The Reasoning Capability Of A Large Language Model', type='Paper'), target=Node(id='Scalable Discussion Mechanism Based On Conquer And Merge', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 2:\n",
      "  Nodes: [Node(id='Final_Frontier_Simulation', type='Topic', properties={'title': 'The final frontier for simulation'}), Node(id='Agent_Based_Modeling', type='Topic', properties={'title': 'Agent-based modeling'}), Node(id='Large_Language_Models', type='Topic', properties={'title': 'Large language models'}), Node(id='Chatgpt', type='Topic', properties={'title': 'ChatGPT'}), Node(id='Our_Research', type='Paper', properties={'title': 'Our research', 'summary': 'Investigates simulations of human interactions using LLMs. Presents two simulations: a two-agent negotiation and a six-agent murder mystery game.'}), Node(id='Park_Et_Al_2023', type='Paper', properties={'title': 'Park et al. (2023)'})]\n",
      "  Relationships: [Relationship(source=Node(id='Final_Frontier_Simulation', type='Topic'), target=Node(id='Agent_Based_Modeling', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Final_Frontier_Simulation', type='Topic'), target=Node(id='Large_Language_Models', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Large_Language_Models', type='Topic'), target=Node(id='Chatgpt', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Our_Research', type='Paper'), target=Node(id='Large_Language_Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Our_Research', type='Paper'), target=Node(id='Park_Et_Al_2023', type='Paper'), type='RELATED_TO')]\n",
      "---\n",
      "Document 3:\n",
      "  Nodes: [Node(id='Ai Community', type='Topic'), Node(id='Artificial General Intelligence', type='Topic'), Node(id='Language Agents', type='Topic'), Node(id='Large Language Models', type='Topic'), Node(id='Agent Symbolic Learning', type='Topic'), Node(id='Paper On Agent Symbolic Learning', type='Paper', properties={'title': 'Agent Symbolic Learning: A Systematic Framework for Self-Evolving Language Agents'})]\n",
      "  Relationships: [Relationship(source=Node(id='Ai Community', type='Topic'), target=Node(id='Artificial General Intelligence', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Language Agents', type='Topic'), target=Node(id='Artificial General Intelligence', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Language Agents', type='Topic'), target=Node(id='Large Language Models', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Agent Symbolic Learning', type='Topic'), target=Node(id='Language Agents', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Paper On Agent Symbolic Learning', type='Paper'), target=Node(id='Agent Symbolic Learning', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 4:\n",
      "  Nodes: [Node(id='Reprompt', type='Paper', properties={'title': 'RePrompt: Gradient Descent for Optimizing Step-by-Step Instructions in LLM Agents'}), Node(id='Large Language Models', type='Topic'), Node(id='Code Generation', type='Topic'), Node(id='Travel Planning', type='Topic'), Node(id='Robot Controls', type='Topic'), Node(id='Llm Agents', type='Topic'), Node(id='Automatic Prompt Engineering', type='Topic'), Node(id='Pddl Generation', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Reprompt', type='Paper'), target=Node(id='Large Language Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Reprompt', type='Paper'), target=Node(id='Code Generation', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Reprompt', type='Paper'), target=Node(id='Travel Planning', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Reprompt', type='Paper'), target=Node(id='Robot Controls', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Reprompt', type='Paper'), target=Node(id='Llm Agents', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Reprompt', type='Paper'), target=Node(id='Automatic Prompt Engineering', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Reprompt', type='Paper'), target=Node(id='Pddl Generation', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 5:\n",
      "  Nodes: [Node(id='Traditional Base Station Siting Methods', type='Topic'), Node(id='Drive Testing', type='Topic'), Node(id='User Feedback', type='Topic'), Node(id='Large Language Models', type='Topic'), Node(id='Prompt Engineering', type='Topic'), Node(id='Agent Engineering', type='Topic'), Node(id='Network Optimization', type='Topic'), Node(id='Well-Crafted Prompts', type='Topic'), Node(id='Autonomous Agents', type='Topic'), Node(id='Machine Language Based Llms', type='Topic'), Node(id='Human Users', type='Topic'), Node(id='Natural Language', type='Topic'), Node(id='Artificial Intelligence As A Service', type='Topic'), Node(id='Ai For More Ease', type='Topic'), Node(id='Llm-Empowered Bss Optimization Framework', type='Topic'), Node(id='Prompt-Optimized Llm', type='Topic'), Node(id='Human-In-The-Loop Llm', type='Topic'), Node(id='Llm-Empowered Autonomous Bss Agent', type='Topic'), Node(id='Cooperative Multiple Llm-Based Autonomous Bss Agents', type='Topic'), Node(id='Prompt-Assisted Llms', type='Topic'), Node(id='Llm-Based Agents', type='Topic'), Node(id='Paper On Llm-Empowered Bss Optimization', type='Paper', properties={'title': 'LLM-empowered BSS optimization framework'})]\n",
      "  Relationships: [Relationship(source=Node(id='Traditional Base Station Siting Methods', type='Topic'), target=Node(id='Drive Testing', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Traditional Base Station Siting Methods', type='Topic'), target=Node(id='User Feedback', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Large Language Models', type='Topic'), target=Node(id='Prompt Engineering', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Large Language Models', type='Topic'), target=Node(id='Agent Engineering', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Large Language Models', type='Topic'), target=Node(id='Network Optimization', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Well-Crafted Prompts', type='Topic'), target=Node(id='Large Language Models', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Autonomous Agents', type='Topic'), target=Node(id='Large Language Models', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Machine Language Based Llms', type='Topic'), target=Node(id='Human Users', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Machine Language Based Llms', type='Topic'), target=Node(id='Natural Language', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Artificial Intelligence As A Service', type='Topic'), target=Node(id='Ai For More Ease', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Llm-Empowered Bss Optimization Framework', type='Topic'), target=Node(id='Prompt-Optimized Llm', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Llm-Empowered Bss Optimization Framework', type='Topic'), target=Node(id='Human-In-The-Loop Llm', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Llm-Empowered Bss Optimization Framework', type='Topic'), target=Node(id='Llm-Empowered Autonomous Bss Agent', type='Topic'), type='RELATED_TO'), 
Relationship(source=Node(id='Llm-Empowered Bss Optimization Framework', type='Topic'), target=Node(id='Cooperative Multiple Llm-Based Autonomous Bss Agents', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Prompt-Assisted Llms', type='Topic'), target=Node(id='Llm-Empowered Bss Optimization Framework', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Llm-Based Agents', type='Topic'), target=Node(id='Llm-Empowered Bss Optimization Framework', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Paper On Llm-Empowered Bss Optimization', type='Paper'), target=Node(id='Llm-Empowered Bss Optimization Framework', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 6:\n",
      "  Nodes: [Node(id='Instruction-Following Agents', type='Topic'), Node(id='Language', type='Topic'), Node(id='Observation', type='Topic'), Node(id='Action Spaces', type='Topic'), Node(id='Domain-Specific Engineering', type='Topic'), Node(id='Human Interaction Data', type='Topic'), Node(id='Pretrained Vision-Language Models (Vlms)', type='Topic'), Node(id='Embodied Agents', type='Topic'), Node(id='Model Distillation', type='Topic'), Node(id='Hindsight Experience Replay (Her)', type='Topic'), Node(id='Simple Prompting', type='Topic'), Node(id='Fewshot Prompting', type='Topic'), Node(id='Abstract Category Membership', type='Topic'), Node(id='Internet-Scale Vlms', type='Topic'), Node(id='Task-Relevant Groundings', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Instruction-Following Agents', type='Topic'), target=Node(id='Language', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Instruction-Following Agents', type='Topic'), target=Node(id='Observation', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Instruction-Following Agents', type='Topic'), target=Node(id='Action Spaces', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Language', type='Topic'), target=Node(id='Observation', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Language', type='Topic'), target=Node(id='Action Spaces', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Language', type='Topic'), target=Node(id='Domain-Specific Engineering', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Language', type='Topic'), target=Node(id='Human Interaction Data', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Pretrained Vision-Language Models (Vlms)', type='Topic'), target=Node(id='Embodied Agents', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Model Distillation', type='Topic'), target=Node(id='Pretrained Vision-Language Models (Vlms)', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Hindsight Experience Replay (Her)', type='Topic'), target=Node(id='Pretrained Vision-Language Models (Vlms)', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Simple Prompting', type='Topic'), target=Node(id='Supervision Signal', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Fewshot Prompting', type='Topic'), target=Node(id='Abstract Category Membership', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Internet-Scale Vlms', type='Topic'), target=Node(id='Task-Relevant Groundings', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Internet-Scale Vlms', type='Topic'), target=Node(id='Embodied Agents', type='Topic'), 
type='RELATED_TO')]\n",
      "---\n",
      "Document 7:\n",
      "  Nodes: [Node(id='Helper', type='Paper', properties={'title': 'HELPER: An Embodied Agent with External Memory for Parsing Human-Robot Dialogue into Action Programs', 'summary': 'HELPER is an embodied agent equipped with an external memory of language-program pairs that parses free-form human-robot dialogue into action programs through retrieval-augmented LLM prompting. It sets a new state-of-the-art in the TEACh benchmark in both Execution from Dialog History (EDH) and Trajectory from Dialogue (TfD).', 'url': 'https://helper-agent-llm.github.io'}), Node(id='Teach', type='Topic'), Node(id='Execution From Dialog History', type='Topic'), Node(id='Trajectory From Dialogue', type='Topic'), Node(id='Pre-Trained And Frozen Large Language Models', type='Topic'), Node(id=\"Robot'S Visuomotor Functions\", type='Topic'), Node(id='Retrieval-Augmented Llm Prompting', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Helper', type='Paper'), target=Node(id='Teach', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Helper', type='Paper'), target=Node(id='Execution From Dialog History', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Helper', type='Paper'), target=Node(id='Trajectory From Dialogue', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Helper', type='Paper'), target=Node(id='Pre-Trained And Frozen Large Language Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Helper', type='Paper'), target=Node(id=\"Robot'S Visuomotor Functions\", type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Helper', type='Paper'), target=Node(id='Retrieval-Augmented Llm Prompting', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 8:\n",
      "  Nodes: [Node(id='Promptagent', type='Paper', properties={'title': 'PromptAgent: Autonomous Expert-Level Prompt Optimization'}), Node(id='Prompt Optimization', type='Topic'), Node(id='Monte Carlo Tree Search', type='Topic'), Node(id='Big-Bench Hard (Bbh)', type='Topic'), Node(id='Chain-Of-Thought', type='Topic'), Node(id='Nlp Tasks', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Promptagent', type='Paper'), target=Node(id='Prompt Optimization', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Promptagent', type='Paper'), target=Node(id='Monte Carlo Tree Search', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Promptagent', type='Paper'), target=Node(id='Big-Bench Hard (Bbh)', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Promptagent', type='Paper'), target=Node(id='Chain-Of-Thought', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Promptagent', type='Paper'), target=Node(id='Nlp Tasks', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 9:\n",
      "  Nodes: [Node(id='Promptwizard', type='Paper', properties={'title': 'PromptWizard: A Novel Framework for Prompt Optimization', 'summary': 'PromptWizard leverages LLMs to iteratively synthesize and refine prompts tailored to specific tasks, optimizing both prompt instructions and in-context examples to maximize model performance. It incorporates negative examples and a critic to enhance instructions and examples with detailed reasoning steps, offering computational efficiency, adaptability, and effectiveness with smaller LLMs.'}), Node(id='Large Language Models', type='Topic'), Node(id='Prompting', type='Topic'), Node(id='Prompt Engineering', type='Topic'), Node(id='Automated Solutions', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Promptwizard', type='Paper'), target=Node(id='Large Language Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Promptwizard', type='Paper'), target=Node(id='Prompting', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Promptwizard', type='Paper'), target=Node(id='Prompt Engineering', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Promptwizard', type='Paper'), target=Node(id='Automated Solutions', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 10:\n",
      "  Nodes: [Node(id='Recent Advancements In Large Language Models (Llms) And Prompt Engineering', type='Topic'), Node(id='Chatbot Customization', type='Topic'), Node(id='Programming Skills', type='Topic'), Node(id='Prompt Evaluation', type='Topic'), Node(id='Dataset Scale', type='Topic'), Node(id='Our Study', type='Paper', properties={'summary': 'Based on a comprehensive literature review and pilot study, summarized five critical challenges in prompt evaluation and introduced a feature-oriented workflow for systematic prompt evaluation.'}), Node(id='Text Summarization', type='Topic'), Node(id='Summary Characteristics', type='Topic'), Node(id='Awesum', type='Paper', properties={'summary': 'A visual analytics system that facilitates identifying optimal prompt refinements for text summarization through interactive visualizations, featuring a novel Prompt Comparator design.'}), Node(id='Prompt Comparator', type='Topic'), Node(id='Bubbleset-Inspired Design', type='Topic'), Node(id='Dimensionality Reduction Techniques', type='Topic'), Node(id='Practitioners From Various Domains', type='Topic'), Node(id='Non-Technical People', type='Topic'), Node(id='Nlg', type='Topic'), Node(id='Image-Generation Tasks', type='Topic'), Node(id='Feature-Oriented Evaluation Of Llm Prompts', type='Topic'), Node(id='Human-Agent Interaction', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Recent Advancements In Large Language Models (Llms) And Prompt Engineering', type='Topic'), target=Node(id='Chatbot Customization', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Chatbot Customization', type='Topic'), target=Node(id='Programming Skills', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Prompt Evaluation', type='Topic'), target=Node(id='Dataset Scale', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Our Study', type='Paper'), target=Node(id='Prompt Evaluation', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Our Study', type='Paper'), target=Node(id='Text Summarization', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Text Summarization', type='Topic'), target=Node(id='Summary Characteristics', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Awesum', type='Paper'), target=Node(id='Text Summarization', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Awesum', type='Paper'), target=Node(id='Prompt Comparator', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Comparator', type='Topic'), target=Node(id='Bubbleset-Inspired Design', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Bubbleset-Inspired Design', type='Topic'), target=Node(id='Dimensionality Reduction Techniques', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Awesum', type='Paper'), target=Node(id='Practitioners From Various Domains', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Our Study', type='Paper'), target=Node(id='Non-Technical People', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Our Study', type='Paper'), target=Node(id='Nlg', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Our Study', type='Paper'), target=Node(id='Image-Generation Tasks', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Our Study', 
type='Paper'), target=Node(id='Feature-Oriented Evaluation Of Llm Prompts', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Our Study', type='Paper'), target=Node(id='Human-Agent Interaction', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 11:\n",
      "  Nodes: [Node(id='Drama Engine', type='Paper', properties={'title': 'Drama Engine', 'summary': \"This technical report presents the Drama Engine, a novel framework for agentic interaction with large language models designed for narrative purposes. The framework adapts multi-agent system principles to create dynamic, context-aware companions that can develop over time and interact with users and each other. Key features include multi-agent workflows with delegation, dynamic prompt assembly, and model-agnostic design. The Drama Engine introduces unique elements such as companion development, mood systems, and automatic context summarising. It is implemented in TypeScript. The framework's applications include multi-agent chats and virtual co-workers for creative writing. The paper discusses the system's architecture, prompt assembly process, delegation mechanisms, and moderation techniques, as well as potential ethical considerations and future extensions.\"})]\n",
      "  Relationships: [Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Multi-Agent Workflows', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Dynamic Prompt Assembly', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Model-Agnostic Design', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Companion Development', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Mood Systems', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Automatic Context Summarising', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Multi-Agent Chats', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Virtual Co-Workers For Creative Writing', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id=\"System'S Architecture\", type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Prompt Assembly Process', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Delegation Mechanisms', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Moderation Techniques', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Ethical Considerations', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Drama Engine', type='Paper'), target=Node(id='Future Extensions', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 12:\n",
      "  Nodes: [Node(id='Prompt Engineering', type='Topic', properties={'summary': 'A technique that involves augmenting a large pre-trained model with task-specific hints, known as prompts, to adapt the model to new tasks.'}), Node(id='Natural Language Processing', type='Topic'), Node(id='Vision-Language Modeling', type='Topic'), Node(id='Multimodal-To-Text Generation Models', type='Topic'), Node(id='Flamingo', type='Topic'), Node(id='Image-Text Matching Models', type='Topic'), Node(id='Clip', type='Topic'), Node(id='Text-To-Image Generation Models', type='Topic'), Node(id='Stable Diffusion', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Prompt Engineering', type='Topic'), target=Node(id='Natural Language Processing', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Engineering', type='Topic'), target=Node(id='Vision-Language Modeling', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Engineering', type='Topic'), target=Node(id='Multimodal-To-Text Generation Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Multimodal-To-Text Generation Models', type='Topic'), target=Node(id='Flamingo', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Prompt Engineering', type='Topic'), target=Node(id='Image-Text Matching Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Image-Text Matching Models', type='Topic'), target=Node(id='Clip', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Prompt Engineering', type='Topic'), target=Node(id='Text-To-Image Generation Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Text-To-Image Generation Models', type='Topic'), target=Node(id='Stable Diffusion', type='Topic'), type='RELATED_TO')]\n",
      "---\n",
      "Document 13:\n",
      "  Nodes: [Node(id='Prompt Optimization', type='Topic'), Node(id='Large Language Model', type='Topic'), Node(id='Promst', type='Topic', properties={'summary': 'A new LLM-driven discrete prompt optimization framework that incorporates human-designed feedback rules to automatically offer direct suggestions for improvement.', 'url': 'https://github.com/yongchao98/PROMST'}), Node(id='Paper On Promst', type='Paper', properties={'title': 'PROMST: A New LLM-driven Discrete Prompt Optimization Framework', 'summary': 'PROMST incorporates human-designed feedback rules to automatically offer direct suggestions for improvement and uses a learned heuristic model to predict prompt performance, significantly outperforming other methods across 11 multi-step tasks.', 'url': 'https://yongchao98.github.io/MIT-REALM-PROMST/'})]\n",
      "  Relationships: [Relationship(source=Node(id='Prompt Optimization', type='Topic'), target=Node(id='Large Language Model', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Promst', type='Topic'), target=Node(id='Large Language Model', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Paper On Promst', type='Paper'), target=Node(id='Promst', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 14:\n",
      "  Nodes: [Node(id='Prompt Stealing Attacks', type='Paper', properties={'title': 'Prompt Stealing Attacks', 'summary': 'This paper proposes a novel attack against LLMs, named prompt stealing attacks, which aims to steal well-designed prompts based on the generated answers. The attack contains two primary modules: the parameter extractor and the prompt reconstruction.'}), Node(id='Prompt Engineering', type='Topic'), Node(id='Large Language Models', type='Topic'), Node(id='Chatgpt', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Prompt Stealing Attacks', type='Paper'), target=Node(id='Prompt Engineering', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Stealing Attacks', type='Paper'), target=Node(id='Large Language Models', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Prompt Stealing Attacks', type='Paper'), target=Node(id='Chatgpt', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 15:\n",
      "  Nodes: [Node(id='Recent Trends In Llms As Autonomous Agents', type='Topic'), Node(id='Guidance, Navigation, And Control In Space', type='Topic'), Node(id='Kerbal Space Program Differential Games (Kspdg) Challenge', type='Topic'), Node(id='Llm-Based Solution For Kspdg', type='Topic'), Node(id='Https://Github.Com/Arclab-Mit/Kspdg', type='Topic', properties={'url': 'https://github.com/ARCLab-MIT/kspdg'})]\n",
      "  Relationships: [Relationship(source=Node(id='Recent Trends In Llms As Autonomous Agents', type='Topic'), target=Node(id='Guidance, Navigation, And Control In Space', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Llm-Based Solution For Kspdg', type='Topic'), target=Node(id='Kerbal Space Program Differential Games (Kspdg) Challenge', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Llm-Based Solution For Kspdg', type='Topic'), target=Node(id='Https://Github.Com/Arclab-Mit/Kspdg', type='Topic'), type='RELATED_TO')]\n",
      "---\n",
      "Document 16:\n",
      "  Nodes: [Node(id='The Rise Of Capabilities Expressed By Large Language Models', type='Paper', properties={'title': 'The rise of capabilities expressed by large language models'}), Node(id='Promptset', type='Paper', properties={'title': 'PromptSet', 'summary': 'A novel dataset with more than 61,000 unique developer prompts used in open source Python programs.', 'url': 'https://huggingface.co/datasets/pisterlabs/promptset'}), Node(id='Huggingface', type='Topic'), Node(id='Github', type='Topic'), Node(id='Pisterlabs/Promptset', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Promptset', type='Paper'), target=Node(id='The Rise Of Capabilities Expressed By Large Language Models', type='Paper'), type='RELATED_TO'), Relationship(source=Node(id='Promptset', type='Paper'), target=Node(id='Huggingface', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Promptset', type='Paper'), target=Node(id='Github', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Promptset', type='Paper'), target=Node(id='Pisterlabs/Promptset', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 17:\n",
      "  Nodes: [Node(id='Interaction With Large Language Models (Llms)', type='Paper', properties={'title': 'Interaction with Large Language Models (LLMs)', 'summary': 'Interaction with Large Language Models (LLMs) is primarily carried out via prompting. A prompt is a natural language instruction designed to elicit certain behaviour or output from a model. In theory, natural language prompts enable non-experts to interact with and leverage LLMs. However, for complex tasks and tasks with specific requirements, prompt design is not trivial. Creating effective prompts requires skill and knowledge, as well as significant iteration in order to determine model behavior, and guide the model to accomplish a particular goal. We hypothesize that the way in which users iterate on their prompts can provide insight into how they think prompting and models work, as well as the kinds of support needed for more efficient prompt engineering. To better understand prompt engineering practices, we analyzed sessions of prompt editing behavior, categorizing the parts of prompts users iterated on and the types of changes they made. We discuss design implications and future directions based on these prompt engineering practices.'})]\n",
      "  Relationships: [Relationship(source=Node(id='Interaction With Large Language Models (Llms)', type='Paper'), target=Node(id='Prompting', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Interaction With Large Language Models (Llms)', type='Paper'), target=Node(id='Prompt Design', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Interaction With Large Language Models (Llms)', type='Paper'), target=Node(id='Prompt Engineering', type='Topic'), type='DISCUSSES')]\n",
      "---\n",
      "Document 18:\n",
      "  Nodes: [Node(id='Large Language Models', type='Topic'), Node(id='Security Tasks', type='Topic'), Node(id='Security Operation Centers', type='Topic'), Node(id='Software Pentesting', type='Topic'), Node(id='Software Security Vulnerabilities', type='Topic'), Node(id='Source Code', type='Topic'), Node(id='Ai Agent', type='Topic'), Node(id='Human Operators', type='Topic'), Node(id='Prompt Engineering', type='Topic'), Node(id='Owasp Benchmark Project 1.2', type='Topic'), Node(id='Sonarqube', type='Topic'), Node(id=\"Google'S Gemini-Pro\", type='Topic'), Node(id=\"Openai'S Gpt-3.5-Turbo\", type='Topic'), Node(id=\"Openai'S Gpt-4-Turbo\", type='Topic'), Node(id='Chat Completion Api', type='Topic'), Node(id='Assistant Api', type='Topic')]\n",
      "  Relationships: [Relationship(source=Node(id='Large Language Models', type='Topic'), target=Node(id='Security Tasks', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Security Tasks', type='Topic'), target=Node(id='Security Operation Centers', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Large Language Models', type='Topic'), target=Node(id='Software Pentesting', type='Topic'), type='DISCUSSES'), Relationship(source=Node(id='Software Pentesting', type='Topic'), target=Node(id='Software Security Vulnerabilities', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Software Security Vulnerabilities', type='Topic'), target=Node(id='Source Code', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Ai Agent', type='Topic'), target=Node(id='Software Pentesting', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Ai Agent', type='Topic'), target=Node(id='Human Operators', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Ai Agent', type='Topic'), target=Node(id='Prompt Engineering', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Owasp Benchmark Project 1.2', type='Topic'), target=Node(id='Software Security Vulnerabilities', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id='Sonarqube', type='Topic'), target=Node(id='Software Security Vulnerabilities', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id=\"Google'S Gemini-Pro\", type='Topic'), target=Node(id='Ai Agent', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id=\"Openai'S Gpt-3.5-Turbo\", type='Topic'), target=Node(id='Ai Agent', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id=\"Openai'S Gpt-4-Turbo\", type='Topic'), target=Node(id='Ai Agent', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id=\"Openai'S Gpt-4-Turbo\", type='Topic'), target=Node(id='Chat Completion Api', type='Topic'), type='RELATED_TO'), Relationship(source=Node(id=\"Openai'S 
Gpt-4-Turbo\", type='Topic'), target=Node(id='Assistant Api', type='Topic'), type='RELATED_TO')]\n",
      "---\n",
      "Document 19:\n",
      "  Nodes: [Node(id='Recent Efforts To Enable Visual Navigation Using Large Language Models', type='Paper', properties={'summary': 'Recent efforts to enable visual navigation using large language models have mainly focused on developing complex prompt systems. These systems incorporate instructions, observations, and history into massive text prompts, which are then combined with pre-trained large language models to facilitate visual navigation.'}), Node(id='Our Approach', type='Paper', properties={'summary': 'Our approach aims to fine-tune large language models for visual navigation without extensive prompt engineering. Our design involves a simple text prompt, current observations, and a history collector model that gathers information from previous observations as input. For output, our design provides a probability distribution of possible actions that the agent can take during navigation. We train our model using human demonstrations and collision signals from the Habitat-Matterport 3D Dataset (HM3D). Experimental results demonstrate that our method outperforms state-of-the-art behavior cloning methods and effectively reduces collision rates.'})]\n",
      "  Relationships: [Relationship(source=Node(id='Recent Efforts To Enable Visual Navigation Using Large Language Models', type='Paper'), target=Node(id='Our Approach', type='Paper'), type='RELATED_TO')]\n",
      "---\n"
     ]
    }
   ],
   "source": [
    "# After converting to graph documents\n",
    "for i, doc in enumerate(graph_documents):\n",
    "    print(f\"Document {i}:\")\n",
    "    print(f\"  Nodes: {doc.nodes}\")\n",
    "    print(f\"  Relationships: {doc.relationships}\")\n",
    "    print(\"---\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Is our answer relevant to the question asked: {'score': 'yes'}\n"
     ]
    }
   ],
   "source": [
    "### Retrieval Grader\n",
    "\n",
    "from langchain.prompts import PromptTemplate\n",
    "from langchain_ollama import ChatOllama\n",
    "from langchain_core.output_parsers import JsonOutputParser\n",
    "\n",
    "# LLM\n",
    "llm = ChatOllama(model=local_llm, format=\"json\", temperature=0)\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    template=\"\"\"You are a grader assessing relevance \n",
    "    of a retrieved document to a user question. If the document contains keywords related to the user question, \n",
    "    grade it as relevant. It does not need to be a stringent test. The goal is to filter out erroneous retrievals. \n",
    "    \n",
    "    Give a binary 'yes' or 'no' score to indicate whether the document is relevant to the question.\n",
    "    Provide the binary score as JSON with a single key 'score' and no preamble or explanation.\n",
    "     \n",
    "    Here is the retrieved document: \n",
    "    {document}\n",
    "    \n",
    "    Here is the user question: \n",
    "    {question}\n",
    "    \"\"\",\n",
    "    input_variables=[\"question\", \"document\"],\n",
    ")\n",
    "\n",
    "retrieval_grader = prompt | llm | JsonOutputParser()\n",
    "question = \"Do we have articles that talk about Prompt Engineering?\"\n",
    "docs = retriever.invoke(question)\n",
    "doc_txt = docs[1].page_content\n",
    "print(\n",
    "    f'Is our answer relevant to the question asked: {retrieval_grader.invoke({\"question\": question, \"document\": doc_txt})}'\n",
    ")"
   ]
  },
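  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch (assumes `retrieval_grader`, `question`, and `docs` from the\n",
    "# cell above): grade every retrieved document, not just one, and keep only the\n",
    "# relevant ones -- the same filtering the agent's fallback logic relies on.\n",
    "relevant_docs = [\n",
    "    d\n",
    "    for d in docs\n",
    "    if retrieval_grader.invoke({\"question\": question, \"document\": d.page_content})[\n",
    "        \"score\"\n",
    "    ]\n",
    "    == \"yes\"\n",
    "]\n",
    "print(f\"{len(relevant_docs)} of {len(docs)} documents graded relevant\")"
   ]
  },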
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The study found that the prompt engineering intervention had a positive impact on undergraduate students' AI self-efficacy, AI knowledge, and proficiency in creating effective prompts. The findings suggest that prompt engineering education is important for specific higher education use cases and can facilitate students' effective navigation and leverage of large language models (LLMs) to support their coursework.\n"
     ]
    }
   ],
   "source": [
    "### Generate\n",
    "\n",
    "from langchain.prompts import PromptTemplate\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "\n",
    "# Prompt\n",
    "prompt = PromptTemplate(\n",
    "    template=\"\"\"You are an assistant for question-answering tasks. \n",
    "    Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. \n",
    "    Use three sentences maximum and keep the answer concise:\n",
    "    Question: {question} \n",
    "    Context: {context} \n",
    "    Answer: \n",
    "    \"\"\",\n",
    "    input_variables=[\"question\", \"context\"],\n",
    ")\n",
    "\n",
    "llm = ChatOllama(model=local_llm, temperature=0)\n",
    "\n",
    "\n",
    "# Post-processing\n",
    "def format_docs(docs):\n",
    "    return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
    "\n",
    "\n",
    "# Chain\n",
    "rag_chain = prompt | llm | StrOutputParser()\n",
    "\n",
    "# Run\n",
    "question = \"Do we have articles that talk about Prompt Engineering?\"\n",
    "docs = retriever.invoke(question)\n",
    "generation = rag_chain.invoke({\"context\": format_docs(docs), \"question\": question})\n",
    "print(generation)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
      "Generated Cypher:\n",
      "\u001b[32;1m\u001b[1;3mcypher\n",
      "MATCH (p:Paper)\n",
      "WHERE toLower(p.title) CONTAINS toLower(\"Multi-Agent\")\n",
      "RETURN p.title AS PaperTitle, p.summary AS Summary, p.url AS URL\n",
      "\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "{'query': 'What paper talks about Multi-Agent?', 'result': [{'PaperTitle': 'Multi-Agent Assistant Code Generation (AgentCoder)', 'Summary': None, 'URL': None}, {'PaperTitle': 'Framework for Automatically Generating Process Models with Multi-Agent Orchestration (MAO)', 'Summary': None, 'URL': None}, {'PaperTitle': 'Collaborative Multi-Agent, Multi-Reasoning-Path (CoMM) Prompting Framework', 'Summary': 'In this work, we aim to push the upper bound of the reasoning capability of LLMs by proposing a collaborative multi-agent, multi-reasoning-path (CoMM) prompting framework. Specifically, we prompt LLMs to play different roles in a problem-solving team, and encourage different role-play agents to collaboratively solve the target task. In particular, we discover that applying different reasoning paths for different roles is an effective strategy to implement few-shot prompting approaches in the multi-agent scenarios. Empirical results demonstrate the effectiveness of the proposed methods on two college-level science problems over competitive baselines. Our further analysis shows the necessity of prompting LLMs to play different roles or experts independently.', 'URL': 'https://github.com/amazon-science/comm-prompt'}], 'intermediate_steps': [{'query': 'cypher\\nMATCH (p:Paper)\\nWHERE toLower(p.title) CONTAINS toLower(\"Multi-Agent\")\\nRETURN p.title AS PaperTitle, p.summary AS Summary, p.url AS URL\\n'}]}\n"
     ]
    }
   ],
   "source": [
    "### Graph Generate\n",
    "\n",
    "from langchain.prompts import PromptTemplate\n",
    "from langchain.chains import GraphCypherQAChain\n",
    "from langchain_ollama import ChatOllama\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "\n",
    "cypher_prompt = PromptTemplate(\n",
    "    template=\"\"\"You are an expert at generating Cypher queries for Neo4j.\n",
    "    Use the following schema to generate a Cypher query that answers the given question.\n",
    "    Make the query flexible by using case-insensitive matching and partial string matching where appropriate.\n",
    "    Focus on searching paper titles as they contain the most relevant information.\n",
    "    \n",
    "    Schema:\n",
    "    {schema}\n",
    "    \n",
    "    Question: {question}\n",
    "    \n",
    "    Cypher Query:\"\"\",\n",
    "    input_variables=[\"schema\", \"question\"],\n",
    ")\n",
    "\n",
    "\n",
    "# QA prompt\n",
    "qa_prompt = PromptTemplate(\n",
    "    template=\"\"\"You are an assistant for question-answering tasks. \n",
    "    Use the following Cypher query results to answer the question. If you don't know the answer, just say that you don't know. \n",
    "    Use three sentences maximum and keep the answer concise. If topic information is not available, focus on the paper titles.\n",
    "    \n",
    "    Question: {question} \n",
    "    Cypher Query: {query}\n",
    "    Query Results: {context} \n",
    "    \n",
    "    Answer:\"\"\",\n",
    "    input_variables=[\"question\", \"query\", \"context\"],\n",
    ")\n",
    "\n",
    "llm = ChatOpenAI(model=\"gpt-4o\", temperature=0)\n",
    "\n",
    "# Chain\n",
    "graph_rag_chain = GraphCypherQAChain.from_llm(\n",
    "    cypher_llm=llm,\n",
    "    qa_llm=llm,\n",
    "    validate_cypher=True,\n",
    "    graph=graph,\n",
    "    verbose=True,\n",
    "    return_intermediate_steps=True,\n",
    "    return_direct=True,\n",
    "    cypher_prompt=cypher_prompt,\n",
    "    qa_prompt=qa_prompt,\n",
    ")\n",
    "\n",
    "# Run\n",
    "question = \"What paper talks about Multi-Agent?\"\n",
    "generation = graph_rag_chain.invoke({\"query\": question})\n",
    "print(generation)"
   ]
  },
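  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch (assumes `graph_rag_chain` from the cell above): because the\n",
    "# chain was built with return_direct=True, the 'result' key holds the raw Cypher\n",
    "# rows, so the paper titles can be pulled out directly instead of printing the\n",
    "# whole dict.\n",
    "result = graph_rag_chain.invoke({\"query\": \"What paper talks about Multi-Agent?\"})\n",
    "titles = [row[\"PaperTitle\"] for row in result[\"result\"]]\n",
    "for t in titles:\n",
    "    print(t)"
   ]
  },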
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [],
   "source": [
    "### Composite Vector + Graph Generations\n",
    "\n",
    "from langchain.prompts import PromptTemplate\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain.chains.base import Chain\n",
    "\n",
    "\n",
    "# Prompt\n",
    "prompt = PromptTemplate(\n",
    "    template=\"\"\"You are an assistant for question-answering tasks. \n",
    "    Use the following pieces of retrieved context from a vector store and a graph database to answer the question. If you don't know the answer, just say that you don't know. \n",
    "    Use three sentences maximum and keep the answer concise:\n",
    "    Question: {question} \n",
    "    Vector Context: {context} \n",
    "    Graph Context: {graph_context}\n",
    "    Answer: \n",
    "    \"\"\",\n",
    "    input_variables=[\"question\", \"context\", \"graph_context\"],\n",
    ")\n",
    "\n",
    "llm = ChatOllama(model=local_llm, temperature=0)\n",
    "\n",
    "# Example input data\n",
    "# question = \"What techniques are used for Multi-Agent?\"\n",
    "question = \"What paper talks about Multi-Agent?\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Document(metadata={'pk': 452500211585777722, 'summary': 'Leveraging multiple large language model (LLM) agents has shown to be a\\npromising approach for tackling complex tasks, while the effective design of\\nmultiple agents for a particular application remains an art. It is thus\\nintriguing to answer a critical question: Given a task, how can we build a team\\nof LLM agents to solve it effectively? Our new adaptive team-building paradigm\\noffers a flexible solution, realized through a novel agent design named Captain\\nAgent. It dynamically forms and manages teams for each step of a task-solving\\nprocess, utilizing nested group conversations and reflection to ensure diverse\\nexpertise and prevent stereotypical outputs. It allows for a flexible yet\\nstructured approach to problem-solving and can help reduce redundancy and\\nenhance output diversity. A comprehensive evaluation across six real-world\\nscenarios demonstrates that Captain Agent significantly outperforms existing\\nmulti-agent methods with 21.94% improvement in average accuracy, providing\\noutstanding performance without requiring task-specific prompt engineering.', 'title': 'Adaptive In-conversation Team Building for Language Model Agents', 'url': 'http://arxiv.org/abs/2405.19425v1'}, page_content='Leveraging multiple large language model (LLM) agents has shown to be a\\npromising approach for tackling complex tasks, while the effective design of\\nmultiple agents for a particular application remains an art. It is thus\\nintriguing to answer a critical question: Given a task, how can we build a team\\nof LLM agents to solve it effectively? Our new adaptive team-building paradigm\\noffers a flexible solution, realized through a novel agent design named Captain\\nAgent. It dynamically forms and manages teams for each step of a task-solving\\nprocess, utilizing nested group conversations and reflection to ensure diverse\\nexpertise and prevent stereotypical outputs. 
It allows for a flexible yet\\nstructured approach to problem-solving and can help reduce redundancy and\\nenhance output diversity. A comprehensive evaluation across six real-world\\nscenarios demonstrates that Captain Agent significantly outperforms existing\\nmulti-agent methods with 21.94% improvement in average accuracy, providing\\noutstanding performance without requiring task-specific prompt engineering.'), Document(metadata={'pk': 452500478732271642, 'summary': 'Leveraging multiple large language model (LLM) agents has shown to be a\\npromising approach for tackling complex tasks, while the effective design of\\nmultiple agents for a particular application remains an art. It is thus\\nintriguing to answer a critical question: Given a task, how can we build a team\\nof LLM agents to solve it effectively? Our new adaptive team-building paradigm\\noffers a flexible solution, realized through a novel agent design named Captain\\nAgent. It dynamically forms and manages teams for each step of a task-solving\\nprocess, utilizing nested group conversations and reflection to ensure diverse\\nexpertise and prevent stereotypical outputs. It allows for a flexible yet\\nstructured approach to problem-solving and can help reduce redundancy and\\nenhance output diversity. A comprehensive evaluation across six real-world\\nscenarios demonstrates that Captain Agent significantly outperforms existing\\nmulti-agent methods with 21.94% improvement in average accuracy, providing\\noutstanding performance without requiring task-specific prompt engineering.', 'title': 'Adaptive In-conversation Team Building for Language Model Agents', 'url': 'http://arxiv.org/abs/2405.19425v1'}, page_content='Leveraging multiple large language model (LLM) agents has shown to be a\\npromising approach for tackling complex tasks, while the effective design of\\nmultiple agents for a particular application remains an art. 
It is thus\\nintriguing to answer a critical question: Given a task, how can we build a team\\nof LLM agents to solve it effectively? Our new adaptive team-building paradigm\\noffers a flexible solution, realized through a novel agent design named Captain\\nAgent. It dynamically forms and manages teams for each step of a task-solving\\nprocess, utilizing nested group conversations and reflection to ensure diverse\\nexpertise and prevent stereotypical outputs. It allows for a flexible yet\\nstructured approach to problem-solving and can help reduce redundancy and\\nenhance output diversity. A comprehensive evaluation across six real-world\\nscenarios demonstrates that Captain Agent significantly outperforms existing\\nmulti-agent methods with 21.94% improvement in average accuracy, providing\\noutstanding performance without requiring task-specific prompt engineering.'), Document(metadata={'pk': 452500826504036378, 'summary': 'Leveraging multiple large language model (LLM) agents has shown to be a\\npromising approach for tackling complex tasks, while the effective design of\\nmultiple agents for a particular application remains an art. It is thus\\nintriguing to answer a critical question: Given a task, how can we build a team\\nof LLM agents to solve it effectively? Our new adaptive team-building paradigm\\noffers a flexible solution, realized through a novel agent design named Captain\\nAgent. It dynamically forms and manages teams for each step of a task-solving\\nprocess, utilizing nested group conversations and reflection to ensure diverse\\nexpertise and prevent stereotypical outputs. It allows for a flexible yet\\nstructured approach to problem-solving and can help reduce redundancy and\\nenhance output diversity. 
A comprehensive evaluation across six real-world\\nscenarios demonstrates that Captain Agent significantly outperforms existing\\nmulti-agent methods with 21.94% improvement in average accuracy, providing\\noutstanding performance without requiring task-specific prompt engineering.', 'title': 'Adaptive In-conversation Team Building for Language Model Agents', 'url': 'http://arxiv.org/abs/2405.19425v1'}, page_content='Leveraging multiple large language model (LLM) agents has shown to be a\\npromising approach for tackling complex tasks, while the effective design of\\nmultiple agents for a particular application remains an art. It is thus\\nintriguing to answer a critical question: Given a task, how can we build a team\\nof LLM agents to solve it effectively? Our new adaptive team-building paradigm\\noffers a flexible solution, realized through a novel agent design named Captain\\nAgent. It dynamically forms and manages teams for each step of a task-solving\\nprocess, utilizing nested group conversations and reflection to ensure diverse\\nexpertise and prevent stereotypical outputs. It allows for a flexible yet\\nstructured approach to problem-solving and can help reduce redundancy and\\nenhance output diversity. A comprehensive evaluation across six real-world\\nscenarios demonstrates that Captain Agent significantly outperforms existing\\nmulti-agent methods with 21.94% improvement in average accuracy, providing\\noutstanding performance without requiring task-specific prompt engineering.'), Document(metadata={'pk': 452500886655860762, 'summary': 'Leveraging multiple large language model (LLM) agents has shown to be a\\npromising approach for tackling complex tasks, while the effective design of\\nmultiple agents for a particular application remains an art. It is thus\\nintriguing to answer a critical question: Given a task, how can we build a team\\nof LLM agents to solve it effectively? 
Our new adaptive team-building paradigm\\noffers a flexible solution, realized through a novel agent design named Captain\\nAgent. It dynamically forms and manages teams for each step of a task-solving\\nprocess, utilizing nested group conversations and reflection to ensure diverse\\nexpertise and prevent stereotypical outputs. It allows for a flexible yet\\nstructured approach to problem-solving and can help reduce redundancy and\\nenhance output diversity. A comprehensive evaluation across six real-world\\nscenarios demonstrates that Captain Agent significantly outperforms existing\\nmulti-agent methods with 21.94% improvement in average accuracy, providing\\noutstanding performance without requiring task-specific prompt engineering.', 'title': 'Adaptive In-conversation Team Building for Language Model Agents', 'url': 'http://arxiv.org/abs/2405.19425v1'}, page_content='Leveraging multiple large language model (LLM) agents has shown to be a\\npromising approach for tackling complex tasks, while the effective design of\\nmultiple agents for a particular application remains an art. It is thus\\nintriguing to answer a critical question: Given a task, how can we build a team\\nof LLM agents to solve it effectively? Our new adaptive team-building paradigm\\noffers a flexible solution, realized through a novel agent design named Captain\\nAgent. It dynamically forms and manages teams for each step of a task-solving\\nprocess, utilizing nested group conversations and reflection to ensure diverse\\nexpertise and prevent stereotypical outputs. It allows for a flexible yet\\nstructured approach to problem-solving and can help reduce redundancy and\\nenhance output diversity. A comprehensive evaluation across six real-world\\nscenarios demonstrates that Captain Agent significantly outperforms existing\\nmulti-agent methods with 21.94% improvement in average accuracy, providing\\noutstanding performance without requiring task-specific prompt engineering.')]\n"
     ]
    }
   ],
   "source": [
    "# Retrieve documents from the vector store\n",
    "docs = retriever.invoke(question)\n",
    "\n",
    "print(docs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The paper discusses \"Adaptive In-conversation Team Building for Language Model Agents\" and talks about Multi-Agent. It presents a new adaptive team-building paradigm that offers a flexible solution for building teams of LLM agents to solve complex tasks effectively. The approach, called Captain Agent, dynamically forms and manages teams for each step of the task-solving process, utilizing nested group conversations and reflection to ensure diverse expertise and prevent stereotypical outputs.\n"
     ]
    }
   ],
   "source": [
    "vector_context = rag_chain.invoke({\"context\": docs, \"question\": question})\n",
    "\n",
    "print(vector_context)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
      "Generated Cypher:\n",
      "\u001b[32;1m\u001b[1;3mcypher\n",
      "MATCH (p:Paper)\n",
      "WHERE toLower(p.title) CONTAINS toLower(\"multi-agent\")\n",
      "RETURN p.title AS PaperTitle, p.summary AS Summary, p.url AS URL\n",
      "\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "{'query': 'What paper talk about Multi-Agent?', 'result': [{'PaperTitle': 'Multi-Agent Assistant Code Generation (AgentCoder)', 'Summary': None, 'URL': None}, {'PaperTitle': 'Framework for Automatically Generating Process Models with Multi-Agent Orchestration (MAO)', 'Summary': None, 'URL': None}, {'PaperTitle': 'Collaborative Multi-Agent, Multi-Reasoning-Path (CoMM) Prompting Framework', 'Summary': 'In this work, we aim to push the upper bound of the reasoning capability of LLMs by proposing a collaborative multi-agent, multi-reasoning-path (CoMM) prompting framework. Specifically, we prompt LLMs to play different roles in a problem-solving team, and encourage different role-play agents to collaboratively solve the target task. In particular, we discover that applying different reasoning paths for different roles is an effective strategy to implement few-shot prompting approaches in the multi-agent scenarios. Empirical results demonstrate the effectiveness of the proposed methods on two college-level science problems over competitive baselines. Our further analysis shows the necessity of prompting LLMs to play different roles or experts independently.', 'URL': 'https://github.com/amazon-science/comm-prompt'}], 'intermediate_steps': [{'query': 'cypher\\nMATCH (p:Paper)\\nWHERE toLower(p.title) CONTAINS toLower(\"multi-agent\")\\nRETURN p.title AS PaperTitle, p.summary AS Summary, p.url AS URL\\n'}]}\n"
     ]
    }
   ],
   "source": [
    "graph_context = graph_rag_chain.invoke({\"query\": question})\n",
    "\n",
    "print(graph_context)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The paper \"Collaborative Multi-Agent, Multi-Reasoning-Path (CoMM) Prompting Framework\" talks about Multi-Agent. It proposes a framework that prompts LLMs to play different roles in a problem-solving team and encourages different role-play agents to collaboratively solve the target task. The paper presents empirical results demonstrating the effectiveness of the proposed methods on two college-level science problems.\n"
     ]
    }
   ],
   "source": [
    "# Run the chain\n",
    "composite_chain = prompt | llm | StrOutputParser()\n",
    "answer = composite_chain.invoke(\n",
    "    {\"question\": question, \"context\": vector_context, \"graph_context\": graph_context}\n",
    ")\n",
    "\n",
    "print(answer)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'query': 'What paper talks about Multi-Agent?',\n",
       " 'result': [{'PaperTitle': 'Adaptive In-conversation Team Building for Language Model Agents',\n",
       "   'Summary': 'Leveraging multiple large language model (LLM) agents has shown to be a promising approach for tackling complex tasks, while the effective design of multiple agents for a particular application remains an art. It is thus intriguing to answer a critical question: Given a task, how can we build a team of LLM agents to solve it effectively? Our new adaptive team-building paradigm offers a flexible solution, realized through a novel agent design named Captain Agent.',\n",
       "   'URL': 'http://arxiv.org/abs/2405.19425v1'},\n",
       "  {'PaperTitle': 'Collaborative Multi-Agent, Multi-Reasoning-Path (CoMM) Prompting Framework',\n",
       "   'Summary': 'In this work, we aim to push the upper bound of the reasoning capability of LLMs by proposing a collaborative multi-agent, multi-reasoning-path (CoMM) prompting framework.',\n",
       "   'URL': 'https://github.com/amazon-science/comm-prompt'}],\n",
       " 'intermediate_steps': [{'query': 'cypher\\nMATCH (p:Paper)\\nWHERE toLower(p.title) CONTAINS toLower(\"Multi-Agent\")\\nRETURN p.title AS PaperTitle, p.summary AS Summary, p.url AS URL\\n'}]}"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "### Hallucination Grader\n",
    "\n",
    "# LLM\n",
    "llm = ChatOllama(model=local_llm, format=\"json\", temperature=0)\n",
    "\n",
    "# Prompt\n",
    "prompt = PromptTemplate(\n",
    "    template=\"\"\"You are a grader assessing whether \n",
    "    an answer is grounded in / supported by a set of facts. Give a binary score 'yes' or 'no' to indicate \n",
    "    whether the answer is grounded in / supported by a set of facts. Provide the binary score as a JSON with a \n",
    "    single key 'score' and no preamble or explanation.\n",
    "    \n",
    "    Here are the facts:\n",
    "    {documents} \n",
    "\n",
    "    Here is the answer: \n",
    "    {generation}\n",
    "    \"\"\",\n",
    "    input_variables=[\"generation\", \"documents\"],\n",
    ")\n",
    "\n",
    "hallucination_grader = prompt | llm | JsonOutputParser()\n",
    "hallucination_grader.invoke({\"documents\": docs, \"generation\": generation})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'score': 'yes'}"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "### Answer Grader\n",
    "\n",
    "# LLM\n",
    "llm = ChatOllama(model=local_llm, format=\"json\", temperature=0)\n",
    "\n",
    "# Prompt\n",
    "prompt = PromptTemplate(\n",
    "    template=\"\"\"You are a grader assessing whether an \n",
    "    answer is useful to resolve a question. Give a binary score 'yes' or 'no' to indicate whether the answer is \n",
    "    useful to resolve a question. Provide the binary score as a JSON with a single key 'score' and no preamble or explanation.\n",
    "     \n",
    "    Here is the answer:\n",
    "    {generation} \n",
    "\n",
    "    Here is the question: {question}\n",
    "    \"\"\",\n",
    "    input_variables=[\"generation\", \"question\"],\n",
    ")\n",
    "\n",
    "answer_grader = prompt | llm | JsonOutputParser()\n",
    "answer_grader.invoke({\"question\": question, \"generation\": generation})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'datasource': 'vectorstore'}\n"
     ]
    }
   ],
   "source": [
    "### Router\n",
    "\n",
    "from langchain.prompts import PromptTemplate\n",
    "from langchain_community.chat_models import ChatOllama\n",
    "from langchain_core.output_parsers import JsonOutputParser\n",
    "\n",
    "# LLM\n",
    "llm = ChatOllama(model=local_llm, format=\"json\", temperature=0)\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    template=\"\"\"You are an expert at routing a user question to the most appropriate data source. \n",
    "    You have three options:\n",
    "    1. 'vectorstore': Use for questions about LLM agents, prompt engineering, and adversarial attacks.\n",
    "    2. 'graphrag': Use for questions that involve relationships between entities, such as authors, papers, and topics, or when the question requires understanding connections between concepts.\n",
    "    3. 'web_search': Use for all other questions or when current information is needed.\n",
    "\n",
    "    You do not need to be stringent with the keywords in the question related to these topics. \n",
    "    Choose the most appropriate option based on the nature of the question.\n",
    "\n",
    "    Return a JSON with a single key 'datasource' and no preamble or explanation. \n",
    "    The value should be one of: 'vectorstore', 'graphrag', or 'web_search'.\n",
    "    \n",
    "    Question to route: \n",
    "    {question}\"\"\",\n",
    "    input_variables=[\"question\"],\n",
    ")\n",
    "\n",
    "question_router = prompt | llm | JsonOutputParser()\n",
    "question = \"llm agent memory\"\n",
    "docs = retriever.invoke(question)\n",
    "doc_txt = docs[1].page_content\n",
    "print(question_router.invoke({\"question\": question}))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "### Search\n",
    "\n",
    "from langchain_community.tools.tavily_search import TavilySearchResults\n",
    "\n",
    "web_search_tool = TavilySearchResults(max_results=3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We'll implement these as a control flow in LangGraph."
   ]
  },
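  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before wiring the real chains into LangGraph, it can help to see the branching logic in isolation. The cell below is a toy, plain-Python sketch of the same control flow (route → retrieve → grade → optional web-search fallback → generate); the `route` and `answer` functions and their canned data are illustrative stand-ins, not the actual LLM-backed chains:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy sketch of the agent's control flow (no LLMs involved)\n",
    "def route(question):\n",
    "    # Stand-in for question_router: the real version is an LLM returning JSON\n",
    "    return \"vectorstore\" if \"agent\" in question.lower() else \"web_search\"\n",
    "\n",
    "\n",
    "def answer(question):\n",
    "    source = route(question)\n",
    "    if source == \"vectorstore\":\n",
    "        docs = [\"doc about llm agents\"]  # stand-in for retriever.invoke\n",
    "        relevant = [d for d in docs if \"agent\" in d]  # stand-in for retrieval_grader\n",
    "        if relevant:\n",
    "            return f\"generated from {len(relevant)} docs\"\n",
    "        source = \"web_search\"  # corrective fallback when nothing is relevant\n",
    "    return \"generated from web results\"\n",
    "\n",
    "\n",
    "print(answer(\"llm agent memory\"))\n",
    "print(answer(\"latest news\"))"
   ]
  },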
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing_extensions import TypedDict\n",
    "from typing import List\n",
    "\n",
    "### State\n",
    "\n",
    "\n",
    "class GraphState(TypedDict):\n",
    "    \"\"\"\n",
    "    Represents the state of our graph.\n",
    "\n",
    "    Attributes:\n",
    "        question: user question\n",
    "        generation: LLM generation\n",
    "        web_search: whether to add web search ('Yes'/'No')\n",
    "        documents: list of documents\n",
    "        graph_context: results from graph search\n",
    "    \"\"\"\n",
    "\n",
    "    question: str\n",
    "    generation: str\n",
    "    web_search: str\n",
    "    documents: List[str]\n",
    "    graph_context: str\n",
    "\n",
    "\n",
    "from langchain.schema import Document\n",
    "\n",
    "### Nodes\n",
    "\n",
    "\n",
    "def retrieve(state):\n",
    "    \"\"\"\n",
    "    Retrieve documents from vectorstore\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        state (dict): New key added to state, documents, that contains retrieved documents\n",
    "    \"\"\"\n",
    "    print(\"---RETRIEVE---\")\n",
    "    question = state[\"question\"]\n",
    "\n",
    "    # Retrieval\n",
    "    documents = retriever.invoke(question)\n",
    "    return {\"documents\": documents, \"question\": question}\n",
    "\n",
    "\n",
    "def generate(state):\n",
    "    \"\"\"\n",
    "    Generate answer using RAG on retrieved documents and graph context\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        state (dict): New key added to state, generation, that contains LLM generation\n",
    "    \"\"\"\n",
    "    print(\"---GENERATE---\")\n",
    "    question = state[\"question\"]\n",
    "    documents = state.get(\"documents\", [])\n",
    "    graph_context = state.get(\"graph_context\", \"\")\n",
    "\n",
    "    # Composite RAG generation\n",
    "    generation = composite_chain.invoke(\n",
    "        {\"question\": question, \"context\": documents, \"graph_context\": graph_context}\n",
    "    )\n",
    "    return {\n",
    "        \"documents\": documents,\n",
    "        \"question\": question,\n",
    "        \"generation\": generation,\n",
    "        \"graph_context\": graph_context,\n",
    "    }\n",
    "\n",
    "\n",
    "def grade_documents(state):\n",
    "    \"\"\"\n",
    "    Determines whether the retrieved documents are relevant to the question\n",
    "    If any document is not relevant, we will set a flag to run web search\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        state (dict): Filtered out irrelevant documents and updated web_search state\n",
    "    \"\"\"\n",
    "\n",
    "    print(\"---CHECK DOCUMENT RELEVANCE TO QUESTION---\")\n",
    "    question = state[\"question\"]\n",
    "    documents = state[\"documents\"]\n",
    "\n",
    "    # Score each doc\n",
    "    filtered_docs = []\n",
    "    web_search = \"No\"\n",
    "    for d in documents:\n",
    "        score = retrieval_grader.invoke(\n",
    "            {\"question\": question, \"document\": d.page_content}\n",
    "        )\n",
    "        grade = score[\"score\"]\n",
    "        # Document relevant\n",
    "        if grade.lower() == \"yes\":\n",
    "            print(\"---GRADE: DOCUMENT RELEVANT---\")\n",
    "            filtered_docs.append(d)\n",
    "        # Document not relevant\n",
    "        else:\n",
    "            print(\"---GRADE: DOCUMENT NOT RELEVANT---\")\n",
    "            # We do not include the document in filtered_docs\n",
    "            # We set a flag to indicate that we want to run web search\n",
    "            web_search = \"Yes\"\n",
    "            continue\n",
    "    return {\"documents\": filtered_docs, \"question\": question, \"web_search\": web_search}\n",
    "\n",
    "\n",
    "def web_search(state):\n",
    "    \"\"\"\n",
    "    Web search based on the question\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        state (dict): Appended web results to documents\n",
    "    \"\"\"\n",
    "\n",
    "    print(\"---WEB SEARCH---\")\n",
    "    question = state[\"question\"]\n",
    "    documents = state.get(\"documents\", [])  # Use get() with a default empty list\n",
    "\n",
    "    # Web search\n",
    "    docs = web_search_tool.invoke({\"query\": question})\n",
    "    web_results = \"\\n\".join([d[\"content\"] for d in docs])\n",
    "    web_results = Document(page_content=web_results)\n",
    "    documents.append(web_results)\n",
    "\n",
    "    return {\"documents\": documents, \"question\": question}\n",
    "\n",
    "\n",
    "### Conditional edge\n",
    "\n",
    "\n",
    "def route_question(state):\n",
    "    \"\"\"\n",
    "    Route the question to web search, graph search, or vectorstore RAG\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        str: Name of the next node to call\n",
    "    \"\"\"\n",
    "    print(\"---ROUTE QUESTION---\")\n",
    "    question = state[\"question\"]\n",
    "    print(question)\n",
    "    source = question_router.invoke({\"question\": question})\n",
    "    print(source)\n",
    "    print(source[\"datasource\"])\n",
    "\n",
    "    if source[\"datasource\"] == \"graphrag\":\n",
    "        print(\"---TRYING GRAPH SEARCH---\")\n",
    "        # Probe the graph first; note this runs the graph chain here and\n",
    "        # again in the graphrag node, so the query executes twice\n",
    "        graph_result = graph_search({\"question\": question})\n",
    "        if graph_result[\"graph_context\"] != \"No results found in the graph database.\":\n",
    "            return \"graphrag\"\n",
    "        else:\n",
    "            print(\"---NO RESULTS IN GRAPH, FALLING BACK TO VECTORSTORE---\")\n",
    "            return \"retrieve\"\n",
    "    elif source[\"datasource\"] == \"vectorstore\":\n",
    "        print(\"---ROUTE QUESTION TO VECTORSTORE RAG---\")\n",
    "        return \"retrieve\"\n",
    "    elif source[\"datasource\"] == \"web_search\":\n",
    "        print(\"---ROUTE QUESTION TO WEB SEARCH---\")\n",
    "        return \"websearch\"\n",
    "    else:\n",
    "        # Fallback: unexpected router output defaults to web search\n",
    "        print(\"---UNEXPECTED DATASOURCE, DEFAULTING TO WEB SEARCH---\")\n",
    "        return \"websearch\"\n",
    "\n",
    "\n",
    "def decide_to_generate(state):\n",
    "    \"\"\"\n",
    "    Determines whether to generate an answer, or add web search\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        str: Binary decision for next node to call\n",
    "    \"\"\"\n",
    "\n",
    "    print(\"---ASSESS GRADED DOCUMENTS---\")\n",
    "    question = state[\"question\"]\n",
    "    web_search = state[\"web_search\"]\n",
    "    filtered_documents = state[\"documents\"]\n",
    "\n",
    "    if web_search == \"Yes\":\n",
    "        # At least one document was graded not relevant,\n",
    "        # so we supplement retrieval with a web search\n",
    "        print(\n",
    "            \"---DECISION: ALL DOCUMENTS ARE NOT RELEVANT TO QUESTION, INCLUDE WEB SEARCH---\"\n",
    "        )\n",
    "        return \"websearch\"\n",
    "    else:\n",
    "        # We have relevant documents, so generate answer\n",
    "        print(\"---DECISION: GENERATE---\")\n",
    "        return \"generate\"\n",
    "\n",
    "\n",
    "def graph_search(state):\n",
    "    \"\"\"\n",
    "    Perform GraphRAG search using Neo4j\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        state (dict): Updated state with graph search results\n",
    "    \"\"\"\n",
    "    print(\"---GRAPH SEARCH---\")\n",
    "    question = state[\"question\"]\n",
    "\n",
    "    # Use the graph_rag_chain to perform the search\n",
    "    result = graph_rag_chain.invoke({\"query\": question})\n",
    "\n",
    "    # Extract the relevant information from the result\n",
    "    # Adjust this based on what graph_rag_chain returns\n",
    "    graph_context = result.get(\"result\", \"\")\n",
    "\n",
    "    # You might want to combine this with existing documents or keep it separate\n",
    "    return {\"graph_context\": graph_context, \"question\": question}\n",
    "\n",
    "\n",
    "### Conditional edge\n",
    "\n",
    "\n",
    "def grade_generation_v_documents_and_question(state):\n",
    "    \"\"\"\n",
    "    Determines whether the generation is grounded in the document and answers question.\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        str: Decision for next node to call\n",
    "    \"\"\"\n",
    "\n",
    "    print(\"---CHECK HALLUCINATIONS---\")\n",
    "    question = state[\"question\"]\n",
    "    documents = state[\"documents\"]\n",
    "    generation = state[\"generation\"]\n",
    "\n",
    "    score = hallucination_grader.invoke(\n",
    "        {\"documents\": documents, \"generation\": generation}\n",
    "    )\n",
    "    grade = score.get(\"score\", \"\").lower()\n",
    "\n",
    "    # Check hallucination\n",
    "    if grade == \"yes\":\n",
    "        print(\"---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---\")\n",
    "        # Check question-answering\n",
    "        print(\"---GRADE GENERATION vs QUESTION---\")\n",
    "        score = answer_grader.invoke({\"question\": question, \"generation\": generation})\n",
    "        grade = score.get(\"score\", \"\").lower()\n",
    "        if grade == \"yes\":\n",
    "            print(\"---DECISION: GENERATION ADDRESSES QUESTION---\")\n",
    "            return \"useful\"\n",
    "        else:\n",
    "            print(\"---DECISION: GENERATION DOES NOT ADDRESS QUESTION---\")\n",
    "            return \"not useful\"\n",
    "    else:\n",
    "        print(\"---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\")\n",
    "        return \"not supported\"\n",
    "\n",
    "\n",
    "from langgraph.graph import END, StateGraph\n",
    "\n",
    "workflow = StateGraph(GraphState)\n",
    "\n",
    "# Define the nodes\n",
    "workflow.add_node(\"websearch\", web_search)  # web search\n",
    "workflow.add_node(\"retrieve\", retrieve)  # retrieve\n",
    "workflow.add_node(\"grade_documents\", grade_documents)  # grade documents\n",
    "workflow.add_node(\"generate\", generate)  # generate\n",
    "workflow.add_node(\"graphrag\", graph_search)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Graph Build"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Set conditional entry point\n",
    "workflow.set_conditional_entry_point(\n",
    "    route_question,\n",
    "    {\n",
    "        \"websearch\": \"websearch\",\n",
    "        \"retrieve\": \"retrieve\",\n",
    "        \"graphrag\": \"graphrag\",\n",
    "    },\n",
    ")\n",
    "\n",
    "# Add edges\n",
    "workflow.add_edge(\"retrieve\", \"grade_documents\")\n",
    "workflow.add_edge(\"graphrag\", \"generate\")\n",
    "workflow.add_conditional_edges(\n",
    "    \"grade_documents\",\n",
    "    decide_to_generate,\n",
    "    {\n",
    "        \"websearch\": \"websearch\",\n",
    "        \"generate\": \"generate\",\n",
    "    },\n",
    ")\n",
    "workflow.add_edge(\"websearch\", \"generate\")\n",
    "workflow.add_conditional_edges(\n",
    "    \"generate\",\n",
    "    grade_generation_v_documents_and_question,\n",
    "    {\n",
    "        \"not supported\": \"generate\",\n",
    "        \"useful\": END,\n",
    "        \"not useful\": \"websearch\",\n",
    "    },\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "---ROUTE QUESTION---\n",
      "What are the types of Prompt Engineering?\n",
      "{'datasource': 'vectorstore'}\n",
      "vectorstore\n",
      "---ROUTE QUESTION TO VECTORSTORE RAG---\n",
      "---RETRIEVE---\n",
      "'Finished running: retrieve:'\n",
      "---CHECK DOCUMENT RELEVANCE TO QUESTION---\n",
      "---GRADE: DOCUMENT NOT RELEVANT---\n",
      "---GRADE: DOCUMENT NOT RELEVANT---\n",
      "---GRADE: DOCUMENT NOT RELEVANT---\n",
      "---GRADE: DOCUMENT NOT RELEVANT---\n",
      "---ASSESS GRADED DOCUMENTS---\n",
      "---DECISION: ALL DOCUMENTS ARE NOT RELEVANT TO QUESTION, INCLUDE WEB SEARCH---\n",
      "'Finished running: grade_documents:'\n",
      "---WEB SEARCH---\n",
      "'Finished running: websearch:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---\n",
      "---GRADE GENERATION vs QUESTION---\n",
      "---DECISION: GENERATION ADDRESSES QUESTION---\n",
      "'Finished running: generate:'\n",
      "('There are several types of Prompt Engineering, including Zero-shot '\n",
      " 'prompting, which involves asking a model to perform a task without any '\n",
      " 'examples or prior training on that specific task. Other common techniques '\n",
      " 'include Guided Prompting and Instructed Prompting, where the model is '\n",
      " 'provided with guidance or instructions to improve its performance. '\n",
      " \"Additionally, there's also the use of forecasting patterns and question \"\n",
      " 'refinement patterns to refine prompts and get better results from generative '\n",
      " 'AI models.')\n"
     ]
    }
   ],
   "source": [
    "# Compile\n",
    "app = workflow.compile()\n",
    "\n",
    "# Test\n",
    "from pprint import pprint\n",
    "\n",
    "inputs = {\"question\": \"What are the types of Prompt Engineering?\"}\n",
    "for output in app.stream(inputs):\n",
    "    for key, value in output.items():\n",
    "        pprint(f\"Finished running: {key}:\")\n",
    "pprint(value[\"generation\"])"
   ]
  },
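  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check on the wiring, we can render the compiled graph. This is optional and assumes your installed LangGraph version exposes the Mermaid helpers (`draw_mermaid` returns the diagram source as plain text, so no image-rendering service is needed):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Print the Mermaid source for the compiled workflow\n",
    "print(app.get_graph().draw_mermaid())"
   ]
  },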
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Trace: \n",
    "\n",
    "https://smith.langchain.com/public/8d449b67-6bc4-4ecf-9153-759cd21df24f/r"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "---ROUTE QUESTION---\n",
      "Did Emmanuel Macron visit Germany recently?\n",
      "{'datasource': 'web_search'}\n",
      "web_search\n",
      "---ROUTE QUESTION TO WEB SEARCH---\n",
      "---WEB SEARCH---\n",
      "'Finished running: websearch:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---\n",
      "---GRADE GENERATION vs QUESTION---\n",
      "---DECISION: GENERATION ADDRESSES QUESTION---\n",
      "'Finished running: generate:'\n",
      "('Yes, Emmanuel Macron visited Germany recently. He made a state visit on May '\n",
      " '26 for three days, which was his first state visit to Germany in 24 years. '\n",
      " 'The visit aimed to ease recent tensions and emphasize strong ties between '\n",
      " 'the two countries.')\n"
     ]
    }
   ],
   "source": [
    "# Compile\n",
    "app = workflow.compile()\n",
    "\n",
    "# Test\n",
    "from pprint import pprint\n",
    "\n",
    "inputs = {\"question\": \"Did Emmanuel Macron visit Germany recently?\"}\n",
    "for output in app.stream(inputs):\n",
    "    for key, value in output.items():\n",
    "        pprint(f\"Finished running: {key}:\")\n",
    "pprint(value[\"generation\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "---ROUTE QUESTION---\n",
      "What paper talk about Multi-Agent?\n",
      "{'datasource': 'graphrag'}\n",
      "graphrag\n",
      "---TRYING GRAPH SEARCH---\n",
      "---GRAPH SEARCH---\n",
      "\n",
      "\n",
      "\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
      "Generated Cypher:\n",
      "\u001b[32;1m\u001b[1;3mcypher\n",
      "MATCH (p:Paper)\n",
      "WHERE toLower(p.title) CONTAINS toLower(\"multi-agent\")\n",
      "RETURN p.title AS PaperTitle, p.summary AS Summary, p.url AS URL\n",
      "\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "---GRAPH SEARCH---\n",
      "\n",
      "\n",
      "\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
      "Generated Cypher:\n",
      "\u001b[32;1m\u001b[1;3mcypher\n",
      "MATCH (p:Paper)\n",
      "WHERE toLower(p.title) CONTAINS toLower(\"Multi-Agent\")\n",
      "RETURN p.title AS PaperTitle, p.summary AS Summary, p.url AS URL\n",
      "\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n",
      "'Finished running: graphrag:'\n",
      "---GENERATE---\n",
      "---CHECK HALLUCINATIONS---\n",
      "---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---\n",
      "---GRADE GENERATION vs QUESTION---\n",
      "---DECISION: GENERATION ADDRESSES QUESTION---\n",
      "'Finished running: generate:'\n",
      "('The papers that talk about Multi-Agent are \"Collaborative Multi-Agent, '\n",
      " 'Multi-Reasoning-Path (CoMM) Prompting Framework\" and possibly others like '\n",
      " '\"Multi-Agent Assistant Code Generation (AgentCoder)\" and \"Framework for '\n",
      " 'Automatically Generating Process Models with Multi-Agent Orchestration '\n",
      " '(MAO)\".')\n"
     ]
    }
   ],
   "source": [
    "# Test\n",
    "from pprint import pprint\n",
    "\n",
    "inputs = {\"question\": \"What paper talk about Multi-Agent?\"}\n",
    "for output in app.stream(inputs):\n",
    "    for key, value in output.items():\n",
    "        pprint(f\"Finished running: {key}:\")\n",
    "pprint(value[\"generation\"])"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
