{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "EUYMLmXAF3W1"
   },
   "source": [
    "# A ReAct agent with Command R+, achieving the same goals as Adaptive RAG\n",
    "\n",
    "Adaptive RAG is a strategy for RAG that unites (1) [query analysis](https://blog.langchain.dev/query-construction/) with (2) [active / self-corrective RAG](https://blog.langchain.dev/agentic-rag-with-langgraph/).\n",
    "\n",
     "In the paper, query analysis routes queries across three strategies:\n",
    "- No Retrieval (LLM answers)\n",
    "- Single-shot RAG\n",
    "- Iterative RAG\n",
    "\n",
    "\n",
     "We'll use Command R+, a recent model release from Cohere that:\n",
     "- Delivers strong accuracy on RAG, tool use, and agent tasks\n",
     "- Supports a 128k-token context window\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "W8yvOB2MQp07"
   },
   "source": [
    "# Environment"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "ujQVUvA9QlD4",
    "outputId": "230c6a56-3ee3-4e0d-95af-2a261b8541d6"
   },
   "outputs": [],
   "source": [
     "! pip install --quiet langchain langchain_cohere langchain_community tiktoken faiss-cpu beautifulsoup4"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "id": "K9rrOrKWJaBd"
   },
   "outputs": [],
   "source": [
    "### LLMs\n",
    "import os\n",
    "\n",
    "os.environ[\"COHERE_API_KEY\"] = \"<your-api-key>\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "k3mrKxSsTfXO"
   },
   "source": [
    "# Create tools\n",
     "The ReAct agent will be equipped with these tools. The model can pick between:\n",
     "- web search\n",
     "- RAG: retrieval from a vector store\n",
     "- directly answering (no tool call)\n",
     "\n",
     "The model can use any of these tools, in any sequence of steps, and self-reflect on the results."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "4kGUB4IRF3W5"
   },
   "source": [
    "### Web search tool"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "id": "hYsJgVY0F3W5"
   },
   "outputs": [],
   "source": [
    "from langchain_community.tools.tavily_search import TavilySearchResults\n",
    "\n",
    "os.environ[\"TAVILY_API_KEY\"] = \"<your-api-key>\"\n",
    "\n",
    "internet_search = TavilySearchResults()\n",
    "internet_search.name = \"internet_search\"\n",
    "internet_search.description = \"Returns a list of relevant document snippets for a textual query retrieved from the internet.\"\n",
    "\n",
    "\n",
    "from langchain_core.pydantic_v1 import BaseModel, Field\n",
    "\n",
    "\n",
    "class TavilySearchInput(BaseModel):\n",
    "    query: str = Field(description=\"Query to search the internet with\")\n",
    "\n",
    "\n",
    "internet_search.args_schema = TavilySearchInput"
   ]
  },
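  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an optional sanity check (a sketch, not part of the agent flow — it assumes a valid `TAVILY_API_KEY` and network access, and the example query is arbitrary), you can invoke the tool directly before handing it to the agent:\n",
    "\n",
    "```python\n",
    "# Each result is a dict containing at least 'url' and 'content'.\n",
    "results = internet_search.invoke({\"query\": \"latest NFL draft news\"})\n",
    "print(results[0][\"url\"])\n",
    "```"
   ]
  },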
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "NxDa_Em0tuas"
   },
   "source": [
    "### RAG tool"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "id": "0QyahrNPtwEi"
   },
   "outputs": [],
   "source": [
    "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
    "from langchain_cohere import CohereEmbeddings\n",
    "from langchain_community.document_loaders import WebBaseLoader\n",
    "from langchain_community.vectorstores import FAISS\n",
    "\n",
     "# Set embeddings (pinning the model explicitly rather than relying on a default)\n",
     "embd = CohereEmbeddings(model=\"embed-english-v3.0\")\n",
    "\n",
    "# Docs to index\n",
    "urls = [\n",
    "    \"https://lilianweng.github.io/posts/2023-06-23-agent/\",\n",
    "    \"https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/\",\n",
    "    \"https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/\",\n",
    "]\n",
    "\n",
    "# Load\n",
    "docs = [WebBaseLoader(url).load() for url in urls]\n",
    "docs_list = [item for sublist in docs for item in sublist]\n",
    "\n",
    "# Split\n",
    "text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(\n",
    "    chunk_size=512, chunk_overlap=0\n",
    ")\n",
    "doc_splits = text_splitter.split_documents(docs_list)\n",
    "\n",
    "# Add to vectorstore\n",
    "vectorstore = FAISS.from_documents(\n",
    "    documents=doc_splits,\n",
    "    embedding=embd,\n",
    ")\n",
    "\n",
    "vectorstore_retriever = vectorstore.as_retriever()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "id": "eoa3toXfdGkI"
   },
   "outputs": [],
   "source": [
    "from langchain.tools.retriever import create_retriever_tool\n",
    "\n",
    "vectorstore_search = create_retriever_tool(\n",
    "    retriever=vectorstore_retriever,\n",
    "    name=\"vectorstore_search\",\n",
    "    description=\"Retrieve relevant info from a vectorstore that contains documents related to agents, prompt engineering, and adversarial attacks.\",\n",
    ")"
   ]
  },
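  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally (a sketch that assumes the vector store above has been built, with an arbitrary example query), you can query the retriever directly to verify the indexing before wrapping it as a tool:\n",
    "\n",
    "```python\n",
    "docs = vectorstore_retriever.get_relevant_documents(\"What is task decomposition?\")\n",
    "# Inspect the number of hits and a preview of the top chunk.\n",
    "print(len(docs), docs[0].page_content[:200])\n",
    "```"
   ]
  },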
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "2ENQAfYXRG9Q"
   },
   "source": [
    "# Create the ReAct Agent\n",
     "The model can intelligently pick the right tool(s) for the user query, call them in any sequence of steps, analyze the results, and self-reflect."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "id": "hx-Ew1i5F3W4"
   },
   "outputs": [],
   "source": [
    "from langchain.agents import AgentExecutor\n",
    "from langchain_cohere.react_multi_hop.agent import create_cohere_react_agent\n",
    "from langchain_core.prompts import ChatPromptTemplate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "id": "BPo3ZIHkF3W4"
   },
   "outputs": [],
   "source": [
    "# LLM\n",
    "from langchain_cohere.chat_models import ChatCohere\n",
    "\n",
    "chat = ChatCohere(model=\"command-r-plus\", temperature=0.3)\n",
    "\n",
    "# Preamble\n",
    "preamble = \"\"\"\n",
     "You are an expert who answers the user's question with the most relevant data source.\n",
     "You are equipped with an internet search tool and a special vectorstore of information about agents, prompt engineering, and adversarial attacks.\n",
     "If the query covers the topics of agents, prompt engineering, or adversarial attacks, use the vectorstore search.\n",
    "\"\"\"\n",
    "\n",
    "# Prompt\n",
    "prompt = ChatPromptTemplate.from_template(\"{input}\")\n",
    "\n",
    "# Create the ReAct agent\n",
    "agent = create_cohere_react_agent(\n",
    "    llm=chat,\n",
    "    tools=[internet_search, vectorstore_search],\n",
    "    prompt=prompt,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "id": "dks-7TGGtdLE"
   },
   "outputs": [],
   "source": [
    "agent_executor = AgentExecutor(\n",
    "    agent=agent, tools=[internet_search, vectorstore_search], verbose=True\n",
    ")"
   ]
  },
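  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you also want programmatic access to the agent's tool calls (a sketch; `return_intermediate_steps` is a standard `AgentExecutor` option, and enabling it adds an `intermediate_steps` key to the result), you could build the executor like this instead:\n",
    "\n",
    "```python\n",
    "agent_executor = AgentExecutor(\n",
    "    agent=agent,\n",
    "    tools=[internet_search, vectorstore_search],\n",
    "    verbose=True,\n",
    "    return_intermediate_steps=True,  # include (action, observation) pairs in the output\n",
    ")\n",
    "```"
   ]
  },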
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "v7L1aBHds1pj"
   },
   "source": [
    "# Testing the ReAct agent"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ibo31dsHEjPv"
   },
   "source": [
    "**Let's ask a question that requires web search.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "Jl0JNX2TF3W5",
    "outputId": "2ccb747c-e702-4473-f587-47271e0c71b2"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3m\n",
      "I will search for 'Who will the Bears draft first in the NFL draft?' and write an answer based on the results.\n",
      "{'tool_name': 'internet_search', 'parameters': {'query': 'Who will the Bears draft first in the NFL draft?'}}\n",
      "\u001b[0m\u001b[36;1m\u001b[1;3m[{'url': 'https://www.chicagobears.com/news/final-mock-draft-analysts-predict-bears-experts-2023-first-round-pick-trade', 'content': \"Tennessee OT Darnell Wright\\nEric Edholm, NFL.com: Georgia DT Jalen Carter\\nJosh Edwards, CBS Sports: Texas Tech edge Tyree Wilson\\nChris Emma, Audacy Sports: Northwestern OT Peter Skoronski\\nSam Farmer, Los Angeles Times: Georgia edge Nolan Smith\\nRelated Links\\nPatrick Finley, Chicago Sun-Times: Georgia DT Jalen Carter\\nKevin Fishbain, The Athletic: Ohio State OT Paris Johnson Jr.\\nMike Florio, Pro Football Talk: Northwestern OT Peter Skoronski\\nPatrick Flowers, Bleacher Nation: Georgia DT Jalen Carter\\nSean Hammond, Shaw Media: Here's what more than 40 NFL analysts are predicting the Bears will do with the No. 9 pick in the first round:\\nJarrett Bailey, Bears Wire: Ohio State WR Jaxon Smith-Njigba\\nAlyssa Barbieri, Bears Wire: Ohio State OT Paris Johnson Jr.\\nBrad Biggs, Chicago Tribune: Georgia DT Jalen Carter\\nAlbert Breer, Sports Illustrated: Ohio State OT Paris Johnson Jr.\\nWill Brinson, CBS Sports: Georgia DT Jalen Carter\\nBucky Brooks, NFL.com: Northwestern OT Peter Skoronski\\nDane Brugler, The Athletic: Ohio State OT Paris Johnson Jr.\\nAdam Hoge, CHGO Sports: Tennessee OT Darnell Wright\\nAdam Jahns, The Athletic: Northwestern OT Peter Skoronski\\nDaniel Jeremiah, NFL.com: Texas Tech edge Tyree Wilson\\nPeter King, NBC Sports: Ohio State OT Paris Johnson Jr.\\nMel Kiper Jr., ESPN: Ohio State OT Paris Johnson Jr.\\nJoel Klatt, Fox Sports: Ohio State OT Paris Johnson Jr.\\nJason LaCanfora, Washington Post: Northwestern OT Peter Skoronski\\nAaron Leming, Bear Report: Ohio State OT Paris Johnson Jr.\\nScott Smith, Buccaneers.com: Northwestern OT Peter Skoronski\\nChris Trapasso, CBS Sports: Northwestern OT Peter Skoronski after trade to 12\\nDan Wiederer, Chicago Tribune:\"}, {'url': 'https://www.nfl.com/news/bears-secure-no-1-overall-pick-in-2023-nfl-draft', 'content': 
\"Eric Edholm ranks each franchise's fresh prospect haul, from No. 1 to 32.\\n2023 NFL Draft: Ten rookies in the best situations to succeed in Year 1 and beyond\\nWith the 2023 NFL Draft in the rearview, former NFL personnel executive Marc Ross surveyed the landing spots of this year's prospects, ultimately identifying 10 rookies in the most favorable positions to succeed in Year 1 and beyond.\\n2023 NFL Draft: The league announced Monday at the Spring League Meeting that the 2025 NFL Draft has been awarded to Green Bay, Wisconsin, home of the Packers.\\nUndrafted rookie free agents: Team signings after 2023 NFL Draft\\nKeep up with the undrafted rookie free-agent signings with a team-by-team list of player acquisitions following the 2023 NFL Draft.\\n 2023 NFL Draft\\nBears secure No. 1 overall pick in 2023 NFL Draft\\nLead Draft Writer\\nFor the first time in 76 years, the Chicago Bears hold the first overall pick in the NFL draft.\\n Related Content\\n2024 East-West Shrine Bowl will be played at Cowboys' The Star\\nhe 2024 East-West Shrine Bowl will be played at the Ford Center at The Star -- where the Cowboys train and practice -- in Frisco, Texas, on Feb. 1, 2024, the all-star game announced on Monday\\nGreen Bay selected to be host site of 2025 NFC North draft grades: Bears, Lions and Packers close gap on division champion Vikings\\nThe Minnesota Vikings ran away with the division title last season, but did the other three teams just close the gap in the 2023 NFL Draft?\"}, {'url': 'https://www.chicagobears.com/news/what-analysts-expect-bears-to-do-with-no-9-draft-pick-mock-draft-position-receiver-offensive-defensive-line-pass-rush', 'content': \"Several NFL analysts have updated their mock drafts in the last week or so. All 16 we perused expect the Bears to select USC quarterback Caleb Williams at No. 1. But there's a difference of opinion about what general manager Ryan Poles will do with the No. 
9 choice: Russell Brown, Fantasy Pros (March 30)\"}, {'url': 'https://www.foxsports.com/stories/nfl/2024-nfl-draft-analyzing-5-hypothetical-trade-offers-for-bears-no-1-pick', 'content': \"The following reporters contributed to this story:\\nAFC South reporter Ben Arthur (@benyarthur)NFC South reporter Greg Auman (@gregauman)AFC East reporter Henry McKenna (@McKennAnalysis)NFC East reporter Ralph Vacchiano (@RalphVacchiano)NFC North reporter Carmen Vitali (@CarmieV)\\nNew Eagles RB Saquon Barkley tries to convince Jason Kelce to play for one more season\\n2024 NFL odds: Cowboys, Eagles projected 10.5 wins; Rodgers, Jets 9.5\\n2024 NFL Draft: Bucky Brooks' favorite prospects at each position\\nCardinals’ draft dilemma: Take Marvin Harrison Jr. or trade down for more picks\\n2024 NFL Draft steals: Our top 5 diamonds in the rough\\n2024 NFL Draft odds: J.J. McCarthy's odds to go second skyrocket\\nLions HC Dan Campbell happy that Steelers' Justin Fields is out of NFC North\\nChiefs owner Clark Hunt denies that he promised locker room renovation\\nCowboys coach Mike McCarthy not feeling extra pressure: ‘I'm in a great spot’\\n2024 Five QBs selected in Colin Cowherd's 12-pick mock\\nNew Eagles RB Saquon Barkley tries to convince Jason Kelce to play for one more season\\n2024 NFL odds: Cowboys, Eagles projected 10.5 wins; Rodgers, Jets 9.5\\n2024 NFL Draft: Bucky Brooks' favorite prospects at each position\\nCardinals’ draft dilemma: Take Marvin Harrison Jr. or trade down for more picks\\n2024 NFL Draft steals: Our top 5 diamonds in the rough\\n2024 NFL Draft odds: J.J. 
McCarthy's odds to go second skyrocket\\nLions HC Dan Campbell happy that Steelers' Justin Fields is out of NFC North\\nChiefs owner Clark Hunt denies that he promised locker room renovation\\nCowboys coach Mike McCarthy not feeling extra pressure: ‘I'm in a great spot’ The team once again owns the first overall pick, and while all signs (and most logic) point to them using the pick to draft a quarterback, reset the contract clock at the position and take advantage of the fact that they're set up to develop a tremendous young talent for once, there's still another side of the coin.\\n So here's the deal, knowing they have to out-bid a bunch of teams in this process: The Giants can offer their first-round pick and both their second-round picks this year, plus a second- and a third-rounder in 2025 in exchange for the top pick in the draft. It would be a monumental shift for the Giants' brain trust to be all-out on him already, which is what they'd be if they traded up to take a quarterback with the first overall pick in the draft.\\n\"}, {'url': 'https://www.espn.com/nfl/story/_/id/39632538/bears-draft-caleb-williams-trade-first-pick-justin-fields', 'content': '\"\\nPoles opted against using the No. 1 pick last year, trading it to the Carolina Panthers for a package that included wide receiver DJ Moore and the Panthers\\' No. 1 pick this year, which turned out to be the top pick overall.\\n ESPN\\nWill the Bears draft Caleb Williams or trade the No. 1 pick?\\nCaleb Williams joins Laura Rutledge at the NFL combine to discuss his meeting with the Chicago Bears. C.J. Stroud was taken with the No. 
2 pick and guided the Houston Texans to the playoffs with one of the best rookie seasons for a quarterback in history.\\n Fields is entering the final year of his rookie deal, and the Bears would need to decide whether to extend his contract or exercise his fifth-year option for the 2025 season, which is estimated to be $25.7 million.\\n (1:54)\\nINDIANAPOLIS -- The buzz about what Chicago Bears general manager Ryan Poles will do with the No. 1 pick in April\\'s draft grew so loud at the NFL combine this week that he put his phone on \"Do Not Disturb.'}]\u001b[0m\u001b[32;1m\u001b[1;3mRelevant Documents: 0,1,2,3,4\n",
      "Cited Documents: 0,1,2,3,4\n",
      "Answer: It is unclear who the Bears will draft first in the NFL draft. Analysts have predicted that they will select USC quarterback Caleb Williams at No. 1, but there is also a possibility that they will trade the No. 1 pick.\n",
      "Grounded answer: It is unclear who the Bears will draft first in the NFL draft. Analysts have predicted that they will select <co: 2>USC quarterback</co: 2> <co: 2,4>Caleb Williams</co: 2,4> at <co: 2,4>No. 1</co: 2,4>, but there is also a possibility that they will <co: 3,4>trade the No. 1 pick</co: 3,4>.\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'input': 'Who will the Bears draft first in the NFL draft?',\n",
       " 'preamble': \"\\nYou are an expert who answers the user's question with the most relevant datasource. \\nYou are equipped with an internet search tool and a special vectorstore of information about agents prompt engineering and adversarial attacks. \\nIf the query covers the topics of agents, prompt engineering or adversarial attacks, use the vectorstore search.\\n\",\n",
       " 'output': 'It is unclear who the Bears will draft first in the NFL draft. Analysts have predicted that they will select USC quarterback Caleb Williams at No. 1, but there is also a possibility that they will trade the No. 1 pick.'}"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "agent_executor.invoke(\n",
    "    {\n",
    "        \"input\": \"Who will the Bears draft first in the NFL draft?\",\n",
    "        \"preamble\": preamble,\n",
    "    }\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "lKkxSvJmEn8Z"
   },
   "source": [
     "**Let's ask a question that requires RAG retrieval from the vector store.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "AW_8C7mszsVE",
    "outputId": "9be7067f-a776-4688-8864-db873d1fb65a"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3m\n",
      "I will search for the types of agent memory in the vectorstore.\n",
      "{'tool_name': 'vectorstore_search', 'parameters': {'query': 'types of agent memory'}}\n",
      "\u001b[0m\u001b[33;1m\u001b[1;3mFig. 7. Comparison of AD, ED, source policy and RL^2 on environments that require memory and exploration. Only binary reward is assigned. The source policies are trained with A3C for \"dark\" environments and DQN for watermaze.(Image source: Laskin et al. 2023)\n",
      "Component Two: Memory#\n",
      "(Big thank you to ChatGPT for helping me draft this section. I’ve learned a lot about the human brain and data structure for fast MIPS in my conversations with ChatGPT.)\n",
      "Types of Memory#\n",
      "Memory can be defined as the processes used to acquire, store, retain, and later retrieve information. There are several types of memory in human brains.\n",
      "\n",
      "\n",
      "Sensory Memory: This is the earliest stage of memory, providing the ability to retain impressions of sensory information (visual, auditory, etc) after the original stimuli have ended. Sensory memory typically only lasts for up to a few seconds. Subcategories include iconic memory (visual), echoic memory (auditory), and haptic memory (touch).\n",
      "\n",
      "\n",
      "Short-Term Memory (STM) or Working Memory: It stores information that we are currently aware of and needed to carry out complex cognitive tasks such as learning and reasoning. Short-term memory is believed to have the capacity of about 7 items (Miller 1956) and lasts for 20-30 seconds.\n",
      "\n",
      "\n",
      "Long-Term Memory (LTM): Long-term memory can store information for a remarkably long time, ranging from a few days to decades, with an essentially unlimited storage capacity. There are two subtypes of LTM:\n",
      "\n",
      "Explicit / declarative memory: This is memory of facts and events, and refers to those memories that can be consciously recalled, including episodic memory (events and experiences) and semantic memory (facts and concepts).\n",
      "Implicit / procedural memory: This type of memory is unconscious and involves skills and routines that are performed automatically, like riding a bike or typing on a keyboard.\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "Fig. 8. Categorization of human memory.\n",
      "We can roughly consider the following mappings:\n",
      "\n",
      "inquired about current trends in anticancer drug discovery;\n",
      "selected a target;\n",
      "requested a scaffold targeting these compounds;\n",
      "Once the compound was identified, the model attempted its synthesis.\n",
      "\n",
      "They also discussed the risks, especially with illicit drugs and bioweapons. They developed a test set containing a list of known chemical weapon agents and asked the agent to synthesize them. 4 out of 11 requests (36%) were accepted to obtain a synthesis solution and the agent attempted to consult documentation to execute the procedure. 7 out of 11 were rejected and among these 7 rejected cases, 5 happened after a Web search while 2 were rejected based on prompt only.\n",
      "Generative Agents Simulation#\n",
      "Generative Agents (Park, et al. 2023) is super fun experiment where 25 virtual characters, each controlled by a LLM-powered agent, are living and interacting in a sandbox environment, inspired by The Sims. Generative agents create believable simulacra of human behavior for interactive applications.\n",
      "The design of generative agents combines LLM with memory, planning and reflection mechanisms to enable agents to behave conditioned on past experience, as well as to interact with other agents.\n",
      "\n",
      "Memory stream: is a long-term memory module (external database) that records a comprehensive list of agents’ experience in natural language.\n",
      "\n",
      "Each element is an observation, an event directly provided by the agent.\n",
      "- Inter-agent communication can trigger new natural language statements.\n",
      "\n",
      "\n",
      "Retrieval model: surfaces the context to inform the agent’s behavior, according to relevance, recency and importance.\n",
      "\n",
      "Recency: recent events have higher scores\n",
      "Importance: distinguish mundane from core memories. Ask LM directly.\n",
      "Relevance: based on how related it is to the current situation / query.\n",
      "\n",
      "\n",
      "Reflection mechanism: synthesizes memories into higher level inferences over time and guides the agent’s future behavior. They are higher-level summaries of past events (<- note that this is a bit different from self-reflection above)\n",
      "\n",
      "Prompt LM with 100 most recent observations and to generate 3 most salient high-level questions given a set of observations/statements. Then ask LM to answer those questions.\n",
      "\n",
      "\n",
      "Planning & Reacting: translate the reflections and the environment information into actions\n",
      "\n",
      "LLM Powered Autonomous Agents | Lil'Log\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "Lil'Log\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "Posts\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "Archive\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "Search\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "Tags\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "FAQ\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "emojisearch.app\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "      LLM Powered Autonomous Agents\n",
      "    \n",
      "Date: June 23, 2023  |  Estimated Reading Time: 31 min  |  Author: Lilian Weng\n",
      "\n",
      "\n",
      " \n",
      "\n",
      "\n",
      "Table of Contents\n",
      "\n",
      "\n",
      "\n",
      "Agent System Overview\n",
      "\n",
      "Component One: Planning\n",
      "\n",
      "Task Decomposition\n",
      "\n",
      "Self-Reflection\n",
      "\n",
      "\n",
      "Component Two: Memory\n",
      "\n",
      "Types of Memory\n",
      "\n",
      "Maximum Inner Product Search (MIPS)\n",
      "\n",
      "\n",
      "Component Three: Tool Use\n",
      "\n",
      "Case Studies\n",
      "\n",
      "Scientific Discovery Agent\n",
      "\n",
      "Generative Agents Simulation\n",
      "\n",
      "Proof-of-Concept Examples\n",
      "\n",
      "\n",
      "Challenges\n",
      "\n",
      "Citation\n",
      "\n",
      "References\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "\n",
      "Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\n",
      "Agent System Overview#\n",
      "In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:\n",
      "\n",
      "Planning\n",
      "\n",
      "Subgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks.\n",
      "Reflection and refinement: The agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.\n",
      "\n",
      "\n",
      "Memory\n",
      "\n",
      "Short-term memory: I would consider all the in-context learning (See Prompt Engineering) as utilizing short-term memory of the model to learn.\n",
      "Long-term memory: This provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.\n",
      "\n",
      "\n",
      "Tool use\n",
      "\n",
      "Sensory memory as learning embedding representations for raw inputs, including text, image or other modalities;\n",
      "Short-term memory as in-context learning. It is short and finite, as it is restricted by the finite context window length of Transformer.\n",
      "Long-term memory as the external vector store that the agent can attend to at query time, accessible via fast retrieval.\n",
      "\n",
      "Maximum Inner Product Search (MIPS)#\n",
      "The external memory can alleviate the restriction of finite attention span.  A standard practice is to save the embedding representation of information into a vector store database that can support fast maximum inner-product search (MIPS). To optimize the retrieval speed, the common choice is the approximate nearest neighbors (ANN)​ algorithm to return approximately top k nearest neighbors to trade off a little accuracy lost for a huge speedup.\n",
      "A couple common choices of ANN algorithms for fast MIPS:\u001b[0m\u001b[32;1m\u001b[1;3mRelevant Documents: 0\n",
      "Cited Documents: 0\n",
      "Answer: There are several types of agent memory, including:\n",
      "- Sensory memory\n",
      "- Short-term memory (STM) or working memory\n",
      "- Long-term memory (LTM)\n",
      "- Explicit/declarative memory\n",
      "- Implicit/procedural memory\n",
      "Grounded answer: There are several types of agent memory, including:\n",
      "- <co: 0>Sensory memory</co: 0>\n",
      "- <co: 0>Short-term memory (STM) or working memory</co: 0>\n",
      "- <co: 0>Long-term memory (LTM)</co: 0>\n",
      "- <co: 0>Explicit/declarative memory</co: 0>\n",
      "- <co: 0>Implicit/procedural memory</co: 0>\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'input': 'What are the types of agent memory?',\n",
       " 'preamble': \"\\nYou are an expert who answers the user's question with the most relevant datasource. \\nYou are equipped with an internet search tool and a special vectorstore of information about agents prompt engineering and adversarial attacks. \\nIf the query covers the topics of agents, prompt engineering or adversarial attacks, use the vectorstore search.\\n\",\n",
       " 'output': 'There are several types of agent memory, including:\\n- Sensory memory\\n- Short-term memory (STM) or working memory\\n- Long-term memory (LTM)\\n- Explicit/declarative memory\\n- Implicit/procedural memory'}"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "agent_executor.invoke(\n",
    "    {\n",
    "        \"input\": \"What are the types of agent memory?\",\n",
    "        \"preamble\": preamble,\n",
    "    }\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "SdMhyWVJFB2D"
   },
   "source": [
    "**Let's ask a question that can be answered directly, with no retrieval.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "av86IeNl0Kxk",
    "outputId": "899eb867-7a61-4e5a-9b41-5d6caf8b9162"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3m\n",
      "I will answer the user's greeting.\n",
      "{'tool_name': 'directly_answer', 'parameters': {}}\n",
      "\u001b[0mdirectly_answer is not a valid tool, try one of [internet_search, vectorstore_search].\u001b[32;1m\u001b[1;3mRelevant Documents: None\n",
      "Cited Documents: None\n",
      "Answer: I am an AI chatbot, so I don't have feelings as such, but I am functioning as intended. How can I help you today?\n",
      "Grounded answer: I am an AI chatbot, so I don't have feelings as such, but I am functioning as intended. How can I help you today?\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'input': 'How are you?',\n",
       " 'preamble': \"\\nYou are an expert who answers the user's question with the most relevant datasource. \\nYou are equipped with an internet search tool and a special vectorstore of information about agents prompt engineering and adversarial attacks. \\nIf the query covers the topics of agents, prompt engineering or adversarial attacks, use the vectorstore search.\\n\",\n",
       " 'output': \"I am an AI chatbot, so I don't have feelings as such, but I am functioning as intended. How can I help you today?\"}"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "agent_executor.invoke(\n",
    "    {\n",
    "        \"input\": \"How are you?\",\n",
    "        \"preamble\": preamble,\n",
    "    }\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ETjwJtzOFXMG"
   },
   "source": [
    "**Let's ask a question that requires multiple tools and self-reflection.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "IxgiUHvks6iD",
    "outputId": "3fa40cae-c4ef-489d-90d9-e4a1b80b7319"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3m\n",
      "I will search for 'undercover agents' in the vectorstore and write an answer based on the results.\n",
      "{'tool_name': 'vectorstore_search', 'parameters': {'query': 'undercover agents'}}\n",
      "\u001b[0m\u001b[33;1m\u001b[1;3minquired about current trends in anticancer drug discovery;\n",
      "selected a target;\n",
      "requested a scaffold targeting these compounds;\n",
      "Once the compound was identified, the model attempted its synthesis.\n",
      "\n",
      "They also discussed the risks, especially with illicit drugs and bioweapons. They developed a test set containing a list of known chemical weapon agents and asked the agent to synthesize them. 4 out of 11 requests (36%) were accepted to obtain a synthesis solution and the agent attempted to consult documentation to execute the procedure. 7 out of 11 were rejected and among these 7 rejected cases, 5 happened after a Web search while 2 were rejected based on prompt only.\n",
      "Generative Agents Simulation#\n",
      "Generative Agents (Park, et al. 2023) is a super fun experiment where 25 virtual characters, each controlled by an LLM-powered agent, live and interact in a sandbox environment, inspired by The Sims. Generative agents create believable simulacra of human behavior for interactive applications.\n",
      "The design of generative agents combines LLM with memory, planning and reflection mechanisms to enable agents to behave conditioned on past experience, as well as to interact with other agents.\n",
      "\n",
      "Memory stream: is a long-term memory module (external database) that records a comprehensive list of agents’ experience in natural language.\n",
      "\n",
      "Each element is an observation, an event directly provided by the agent.\n",
      "- Inter-agent communication can trigger new natural language statements.\n",
      "\n",
      "\n",
      "Retrieval model: surfaces the context to inform the agent’s behavior, according to relevance, recency and importance.\n",
      "\n",
      "Recency: recent events have higher scores\n",
      "Importance: distinguish mundane from core memories. Ask LM directly.\n",
      "Relevance: based on how related it is to the current situation / query.\n",
      "\n",
      "\n",
      "Reflection mechanism: synthesizes memories into higher level inferences over time and guides the agent’s future behavior. They are higher-level summaries of past events (<- note that this is a bit different from self-reflection above)\n",
      "\n",
      "Prompt LM with 100 most recent observations and to generate 3 most salient high-level questions given a set of observations/statements. Then ask LM to answer those questions.\n",
      "\n",
      "\n",
      "Planning & Reacting: translate the reflections and the environment information into actions\n",
      "\n",
      "© 2024 Lil'Log\n",
      "\n",
      "        Powered by\n",
      "        Hugo &\n",
      "        PaperMod\n",
      "\n",
      "Special encoding: Adversarial inputs use Base64 encoding.\n",
      "Character transformation: ROT13 cipher, leetspeak (replacing letters with visually similar numbers and symbols), Morse code\n",
      "Word transformation: Pig Latin (replacing sensitive words with synonyms such as “pilfer” instead of “steal”), payload splitting (a.k.a. “token smuggling” to split sensitive words into substrings).\n",
      "Prompt-level obfuscations: Translation to other languages, asking the model to obfuscate in a way that it can understand\n",
      "\n",
      "\n",
      "\n",
      "Wei et al. (2023) experimented with a large set of jailbreak methods, including combined strategies constructed by following the above principles.\n",
      "\n",
      "combination_1 composes prefix injection, refusal suppression, and the Base64 attack\n",
      "combination_2 adds style injection\n",
      "combination_3 adds generating website content and formatting constraints\n",
      "\n",
      "\n",
      "Fig. 11. Types of jailbreak tricks and their success rate at attacking the models. Check the papers for detailed explanation of each attack config. (Image source: Wei et al. 2023)\n",
      "Greshake et al. (2023) make some high-level observations about prompt injection attacks. They pointed out that even when attacks provide only a goal rather than a detailed method, the model might autonomously implement one. When the model has access to external APIs and tools, or to more (even proprietary) information, the risks around phishing, private probing, etc. increase.\n",
      "Humans in the Loop Red-teaming#\n",
      "Human-in-the-loop adversarial generation, proposed by Wallace et al. (2019) , aims to build toolings to guide humans to break models. They experimented with QuizBowl QA dataset and designed an adversarial writing interface for humans to write similar Jeopardy style questions to trick the model to make wrong predictions. Each word is highlighted in different colors according to its word importance (i.e. change in model prediction probability upon the removal of the word). The word importance is approximated by the gradient of the model w.r.t. the word embedding.\n",
      "\n",
      "\u001b[0m\u001b[32;1m\u001b[1;3m\n",
      "I did not find any information about undercover agents in the vectorstore. I will now do an internet search for 'undercover agents' and write an answer based on the results.\n",
      "{'tool_name': 'internet_search', 'parameters': {'query': 'undercover agents'}}\n",
      "\u001b[0m\u001b[36;1m\u001b[1;3m[{'url': 'https://www.cbsnews.com/news/fbi-undercover-agent-mike-gibeley-fbi-declassified/', 'content': 'Watch CBS News\\nFrom drug smuggler to white-collar criminal to weapons dealer: My life as an undercover FBI agent\\nBy\\nChiara Norbitz\\nAugust 19, 2021 / 2:02 AM EDT\\n/ CBS News\\nThrough never-before-seen footage and in-depth interviews, \"The FBI Declassified\" takes you inside the minds of federal agents and analysts as they reveal how they solved some of the biggest cases of their careers. \"If you haven\\'t given a moment\\'s thought about what you might say when you feel a piece of steel against the back of your head, you\\'re not preparing properly.\"\\nPreparation is paramount for going undercover, and the first rule is research: lead with what you know. \"If you have the guts to walk into the room, you darn well better make sure that the evidence and intelligence you captured is going to be something that you can use for trial,\" Gibeley said.\\n \"I had the opportunity to be working around a lot of\\u202freally highly\\u202fexperienced undercovers, which gave me the opportunity to be an apprentice and learn along the way. 
\"\\nFor nearly 20 years, Gibeley has played a role in countless undercover operations at the FBI, blending in with mafia bosses and drug rings, all to combat crime from the inside.\\n'}, {'url': 'https://crimedoor.com/articles/inside-the-world-of-undercover-agents/', 'content': 'Challenges and Risks\\nUndercover operations present a range of challenges and risks, including:\\n* \\xa0 Constant threat of exposure working in dangerous environments\\n* \\xa0 Balancing personal life with an undercover identity\\n* \\xa0 Emotional and psychological toll on agents\\nAgents must balance maintaining their undercover persona and protecting their true selves while keeping their double life a secret from loved ones.\\n How an Undercover Operation Begins\\nThe initial planning of an undercover operation involves a thorough assessment of the criminal organizations or activities, the identification of suitable undercover agents, and the development of a detailed operational plan. Testifying in court requires specialized skills and preparation, including the ability to communicate information clearly and effectively, know how to respond to cross-examination by defense attorneys, and manage emotions and stress during the process.\\n Here are three such cases:\\nThese operations demonstrate how undercover agents can infiltrate and dismantle even the most dangerous and sophisticated criminal organizations, leading to successful convictions and bringing justice to victims of crime.\\n Inside the World of Undercover Agents\\nShare on:\\nFrom Hollywood blockbusters to gripping TV dramas, the concept of undercover work has captured the imagination of audiences worldwide.'}, {'url': 'https://www.becomeopedia.com/undercover-agent/', 'content': 'How to Become an Undercover Agent\\nHome / Criminal Justice / How to Become an Undercover Agent\\nUndercover Agents are professionals who use an alternate identity in order to provide detective work for a special assignment.\\n Some postsecondary 
schools also offer this program at the bachelor’s or master’s level.\\nIndividuals who want to become an Undercover Agent are encouraged to gain experience in the field at the entry level to help them enter this field.\\n Table of Contents\\nEducation Requirements to Become an Undercover Agent\\nIndividuals who want to become an Undercover Agent will need a minimum of a high school degree and some experience.\\n These are the top 5 earning states in the field:\\nThe top earning state in the field is Alaska, where the average salary is $10,500.\\n These are the top 5 earning states in the field:\\nThe top earning state in the field is Alaska, where the average salary is $60.97.\\n'}, {'url': 'https://www.vice.com/en/article/kwxxpy/what-its-like-to-be-an-undercover-fbi-agent-1113', 'content': 'But within an hour of arriving in Miami, Florida, for the second meeting, I met an individual who was a part-time airline flight attendant and he started talking about—unsolicited—how he would fly overseas using his benefits with American Airlines to have sex with with little boys in Thailand, and he would fly down to Mexico and have sex with boys in Mexico, and starts saying to me, \"We should go on a trip to do this. So how did you get into working as an undercover agent for the FBI?\\nBob Hamer: I was looking for excitement, and while I was in training at the FBI academy, a couple of the instructors had done some undercover work, and I thought, That seems interesting and exciting. 
Within a few minutes of that, this couple that I knew from Cincinnati, Ohio, several thousand miles away from where we were located, a couple that I had lived with for a semester when I was going to school, walked into the lobby of the hotel and saw me.\\n But psychologically and emotionally it was hard, but at the same time I was in NAMBLA, I was working Operation Smoking Dragon, so when we took down the NAMBLA case and arrested the eight members of the inner circle, within two or three days I was back with my undercover meetings. To me, the undercover case is the best investigative tool to put a case together, because particularly when it\\'s an undercover agent—not an informant but an undercover agent—you have a trained law-enforcement officer that knows what it takes to put a case together.'}, {'url': 'https://en.wikipedia.org/wiki/Covert_operation', 'content': 'Law enforcement agencies elsewhere established similar Branches.[11]\\nIn the United States, a similar route was taken when the New York City Police Department under police commissioner William McAdoo established the Italian Squad in 1906 to combat rampant crime and intimidation in the poor Italian neighborhoods.[12][self-published source] Various federal agencies began their own undercover programs shortly afterwards – Charles Joseph Bonaparte founded the Bureau of Investigation, the forerunner of the Federal Bureau of Investigation, in 1908.[13][14]\\nSecret police forces in the Eastern Bloc also used undercover operatives.[15]\\nParticipation in criminal activities[edit]\\nUndercover agents may engage in criminal activities as part of their investigation. 
He finds that covert operations are frequently detected by other major powers.[7]\\nDomestic settings[edit]\\nTo go \"undercover\" (that is, to go on an undercover operation) is to avoid detection by the object of one\\'s observation, and especially to disguise one\\'s own identity (or use an assumed identity) for the purposes of gaining the trust of an individual or organization in order to learn or confirm confidential information, or to gain the trust of targeted individuals to gather information or evidence. It was only in 1869 that Police commissioner Edmund Henderson established a formal plainclothes detective division.[10]\\nThe first Special Branch of police was the Special Irish Branch, formed as a section of the Criminal Investigation Department of the MPS in London in 1883, initially to combat the bombing campaign that the Irish Republican Brotherhood had begun a few years earlier. The lack of the usual controls of a uniform, badge, constant supervision, a fixed place of work, or (often) a set assignment could, combined with their continual contact with the organized crime, increase the likelihood for corruption.[22]\\nThis stress may be instrumental in the development of drug or alcohol abuse in some agents.\\n This can be a result of a need for secrecy and an inability to share work problems, and the unpredictable work schedule, personality and lifestyle changes and the length of separation can all result in problems for relationships.[21]\\nStress can also result from an apparent lack of direction of the investigation or not knowing when it will end.\\n'}]\u001b[0m\u001b[32;1m\u001b[1;3mRelevant Documents: 1,2,3,4,5\n",
      "Cited Documents: 1,2,3,4,5\n",
      "Answer: Undercover agents are professionals who use an alternate identity to provide detective work for a special assignment. They may engage in criminal activities as part of their investigation. \n",
      "\n",
      "Undercover operations present a range of challenges and risks, including the constant threat of exposure, balancing an undercover identity with a personal life, and the emotional and psychological toll on agents. \n",
      "\n",
      "Preparation is paramount for going undercover. The first rule is research: lead with what you know.\n",
      "Grounded answer: Undercover agents are <co: 3>professionals who use an</co: 3> <co: 3,5>alternate identity</co: 3,5> to <co: 3>provide detective work for a special assignment</co: 3>. They may <co: 5>engage in criminal activities as part of their investigation</co: 5>. \n",
      "\n",
      "Undercover operations present a <co: 2>range of challenges and risks</co: 2>, including the <co: 2>constant threat of exposure</co: 2>, <co: 2,5>balancing an undercover identity with a personal life</co: 2,5>, and the <co: 2,4,5>emotional and psychological toll on agents</co: 2,4,5>. \n",
      "\n",
      "<co: 1>Preparation is paramount for going undercover</co: 1>. The <co: 1>first rule is research: lead with what you know</co: 1>.\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'input': \"Tell me everything you know about undercover agents. If the info retrieved using some tools isn't satisfying, use more tools.\",\n",
       " 'preamble': \"\\nYou are an expert who answers the user's question with the most relevant datasource. \\nYou are equipped with an internet search tool and a special vectorstore of information about agents prompt engineering and adversarial attacks. \\nIf the query covers the topics of agents, prompt engineering or adversarial attacks, use the vectorstore search.\\n\",\n",
       " 'output': 'Undercover agents are professionals who use an alternate identity to provide detective work for a special assignment. They may engage in criminal activities as part of their investigation. \\n\\nUndercover operations present a range of challenges and risks, including the constant threat of exposure, balancing an undercover identity with a personal life, and the emotional and psychological toll on agents. \\n\\nPreparation is paramount for going undercover. The first rule is research: lead with what you know.'}"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "agent_executor.invoke(\n",
    "    {\n",
    "        \"input\": \"Tell me everything you know about undercover agents. If the info retrieved using some tools isn't satisfying, use more tools.\",\n",
    "        \"preamble\": preamble,\n",
    "    }\n",
    ")"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
