{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "b7973eef-7e5e-4ca4-844a-1d016f5a638b",
   "metadata": {},
   "source": [
    "# Building an Agent around a Query Pipeline\n",
    "\n",
    "In this cookbook we show you how to build an agent around a query pipeline.\n",
    "\n",
    "Agents offer the ability to do complex, sequential reasoning on top of any query DAG that you have setup. Conceptually this is also one of the ways you can add a \"loop\" to the graph.\n",
    "\n",
    "We show you two examples of agents you can implement:\n",
    "- a full ReAct agent that can do tool picking\n",
    "- a \"simple\" agent that adds a retry layer around a text-to-sql query engine."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "763a442a-cfcd-4e63-9121-e3a45dc3acff",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core import SQLDatabase\n",
    "from sqlalchemy import (\n",
    "    create_engine,\n",
    "    MetaData,\n",
    "    Table,\n",
    "    Column,\n",
    "    String,\n",
    "    Integer,\n",
    "    select,\n",
    "    column,\n",
    ")\n",
    "\n",
    "engine = create_engine(\"sqlite:///chinook.db\")\n",
    "sql_database = SQLDatabase(engine)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0aacdcad-f0c1-40f4-b319-6c4cf3b309c7",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.query_pipeline import QueryPipeline"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "122dce22-6d3a-4d4a-a265-a6a3d3f90d26",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "### Setup Data\n",
    "\n",
    "We use the chinook database as sample data. [Source](https://www.sqlitetutorial.net/sqlite-sample-database/)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cce0a58f",
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install llama-index-llms-openai"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "926b79ba-1868-46ca-bb7e-2bd3c907773c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n",
      "                                 Dload  Upload   Total   Spent    Left  Speed\n",
      "100  298k  100  298k    0     0  2327k      0 --:--:-- --:--:-- --:--:-- 2387k\n",
      "curl: (6) Could not resolve host: .\n",
      "Archive:  ./chinook.zip\n",
      "  inflating: chinook.db              \n"
     ]
    }
   ],
   "source": [
    "!curl \"https://www.sqlitetutorial.net/wp-content/uploads/2018/03/chinook.zip\" -O ./chinook.zip\n",
    "!unzip ./chinook.zip"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "419f97cf-63c1-456b-babd-b07d9ce4b937",
   "metadata": {},
   "source": [
    "### Setup Observability\n",
    "\n",
    "We setup Arize Phoenix for observability."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5fb303b7-2fb1-496b-aff8-57844cdc519b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# define global callback setting\n",
    "from llama_index.core.settings import Settings\n",
    "from llama_index.core.callbacks import CallbackManager\n",
    "\n",
    "callback_manager = CallbackManager()\n",
    "Settings.callback_manager = callback_manager"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "754f11d3-f053-46f7-acb4-ae8ee7d3fe07",
   "metadata": {},
   "outputs": [],
   "source": [
    "# setup Arize Phoenix for logging/observability\n",
    "import phoenix as px\n",
    "import llama_index.core\n",
    "\n",
    "px.launch_app()\n",
    "llama_index.core.set_global_handler(\"arize_phoenix\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fbbc38d8-2a78-4457-888d-cfe8960a9c6b",
   "metadata": {},
   "source": [
    "## Setup Text-to-SQL Query Engine / Tool\n",
    "\n",
    "Now we setup a simple text-to-SQL tool: given a query, translate text to SQL, execute against database, and get back a result. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d72eb79e-05b4-4260-b086-e837b01cce77",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.query_engine import NLSQLTableQueryEngine\n",
    "from llama_index.core.tools import QueryEngineTool\n",
    "\n",
    "sql_query_engine = NLSQLTableQueryEngine(\n",
    "    sql_database=sql_database,\n",
    "    tables=[\"albums\", \"tracks\", \"artists\"],\n",
    "    verbose=True,\n",
    ")\n",
    "sql_tool = QueryEngineTool.from_defaults(\n",
    "    query_engine=sql_query_engine,\n",
    "    name=\"sql_tool\",\n",
    "    description=(\n",
    "        \"Useful for translating a natural language query into a SQL query\"\n",
    "    ),\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "af9d8d26-65c8-4788-a0ad-8d2448b23254",
   "metadata": {},
   "source": [
    "## Setup ReAct Agent Pipeline\n",
    "\n",
    "We now setup a ReAct pipeline for a single step using our Query Pipeline syntax. This is a multi-part process that does the following:\n",
    "1. Takes in agent inputs\n",
    "2. Calls ReAct prompt using LLM to generate next action/tool (or returns a response).\n",
    "3. If tool/action is selected, call tool pipeline to execute tool + collect response.\n",
    "4. If response is generated, get response.\n",
    "\n",
    "Throughout this we'll use a variety of agent-specific query components. Unlike normal query pipelines, these are specifically designed for query pipelines that are used in a `QueryPipelineAgentWorker`:\n",
    "- An `AgentInputComponent` that allows you to convert the agent inputs (Task, state dictionary) into a set of inputs for the query pipeline.\n",
    "- An `AgentFnComponent`: a general processor that allows you to take in the current Task, state, as well as any arbitrary inputs, and returns an output. In this cookbook we define a function component to format the ReAct prompt. However, you can put this anywhere.\n",
    "- [Not used in this notebook] An `CustomAgentComponent`: similar to `AgentFnComponent`, you can implement `_run_component` to define your own logic, with access to Task and state. It is more verbose but more flexible than `AgentFnComponent` (e.g. you can define init variables, and callbacks are in the base class).\n",
    "\n",
    "Note that any function passed into `AgentFnComponent` and `AgentInputComponent` MUST include `task` and `state` as input variables, as these are inputs passed from the agent. \n",
    "\n",
    "Note that the output of an agentic query pipeline MUST be `Tuple[AgentChatResponse, bool]`. You'll see this below."
   ]
  },
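  {
   "cell_type": "markdown",
   "id": "3f6d2c5e-1b2a-4f5e-9c8d-0a1b2c3d4e5f",
   "metadata": {},
   "source": [
    "To make the output contract concrete, here is a minimal sketch (with hypothetical names) of a final agent component that satisfies the required `Tuple[AgentChatResponse, bool]` shape; the `process_agent_response` component defined later in this notebook does the same thing:\n",
    "\n",
    "```python\n",
    "from typing import Any, Dict, Tuple\n",
    "from llama_index.core.agent import Task, AgentChatResponse\n",
    "from llama_index.core.query_pipeline import AgentFnComponent\n",
    "\n",
    "\n",
    "def final_step_fn(\n",
    "    task: Task, state: Dict[str, Any], response_str: str\n",
    ") -> Tuple[AgentChatResponse, bool]:\n",
    "    # `task` and `state` are required parameters for any AgentFnComponent fn\n",
    "    return AgentChatResponse(response_str), True  # (response, is_done)\n",
    "\n",
    "\n",
    "final_step = AgentFnComponent(fn=final_step_fn)\n",
    "```"
   ]
  },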
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9884b8c0-0ec3-4b14-9538-db9f5be63466",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.query_pipeline import QueryPipeline as QP\n",
    "\n",
    "qp = QP(verbose=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a024201f-a39b-4e23-a567-9737026dd771",
   "metadata": {},
   "source": [
    "### Define Agent Input Component\n",
    "\n",
    "Here we define the agent input component, called at the beginning of every agent step. Besides passing along the input, we also do initialization/state modification."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9b1a9252-57df-47cc-9fcf-84d87b1e0102",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.agent.react.types import (\n",
    "    ActionReasoningStep,\n",
    "    ObservationReasoningStep,\n",
    "    ResponseReasoningStep,\n",
    ")\n",
    "from llama_index.core.agent import Task, AgentChatResponse\n",
    "from llama_index.core.query_pipeline import (\n",
    "    AgentInputComponent,\n",
    "    AgentFnComponent,\n",
    "    CustomAgentComponent,\n",
    "    QueryComponent,\n",
    "    ToolRunnerComponent,\n",
    ")\n",
    "from llama_index.core.llms import MessageRole\n",
    "from typing import Dict, Any, Optional, Tuple, List, cast\n",
    "\n",
    "\n",
    "## Agent Input Component\n",
    "## This is the component that produces agent inputs to the rest of the components\n",
    "## Can also put initialization logic here.\n",
    "def agent_input_fn(task: Task, state: Dict[str, Any]) -> Dict[str, Any]:\n",
    "    \"\"\"Agent input function.\n",
    "\n",
    "    Returns:\n",
    "        A Dictionary of output keys and values. If you are specifying\n",
    "        src_key when defining links between this component and other\n",
    "        components, make sure the src_key matches the specified output_key.\n",
    "\n",
    "    \"\"\"\n",
    "    # initialize current_reasoning\n",
    "    if \"current_reasoning\" not in state:\n",
    "        state[\"current_reasoning\"] = []\n",
    "    reasoning_step = ObservationReasoningStep(observation=task.input)\n",
    "    state[\"current_reasoning\"].append(reasoning_step)\n",
    "    return {\"input\": task.input}\n",
    "\n",
    "\n",
    "agent_input_component = AgentInputComponent(fn=agent_input_fn)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dd914690-ee4d-4b3b-bb95-9cce0994b9c1",
   "metadata": {},
   "source": [
    "### Define Agent Prompt\n",
    "\n",
    "Here we define the agent component that generates a ReAct prompt, and after the output is generated from the LLM, parses into a structured object."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8a1957da-bc2b-4186-abf0-11148a23dba7",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.agent import ReActChatFormatter\n",
    "from llama_index.core.query_pipeline import InputComponent, Link\n",
    "from llama_index.core.llms import ChatMessage\n",
    "from llama_index.core.tools import BaseTool\n",
    "\n",
    "\n",
    "## define prompt function\n",
    "def react_prompt_fn(\n",
    "    task: Task, state: Dict[str, Any], input: str, tools: List[BaseTool]\n",
    ") -> List[ChatMessage]:\n",
    "    # Add input to reasoning\n",
    "    chat_formatter = ReActChatFormatter()\n",
    "    return chat_formatter.format(\n",
    "        tools,\n",
    "        chat_history=task.memory.get() + state[\"memory\"].get_all(),\n",
    "        current_reasoning=state[\"current_reasoning\"],\n",
    "    )\n",
    "\n",
    "\n",
    "react_prompt_component = AgentFnComponent(\n",
    "    fn=react_prompt_fn, partial_dict={\"tools\": [sql_tool]}\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7acd7088-7804-48c5-8c7f-0c5c112d27f0",
   "metadata": {},
   "source": [
    "### Define Agent Output Parser + Tool Pipeline\n",
    "\n",
    "Once the LLM gives an output, we have a decision tree:\n",
    "1. If an answer is given, then we're done. Process the output\n",
    "2. If an action is given, we need to execute the specified tool with the specified args, and then process the output.\n",
    "\n",
    "Tool calling can be done via the `ToolRunnerComponent` module. This is a simple wrapper module that takes in a list of tools, and can be \"executed\" with the specified tool name (every tool has a name) and tool action.\n",
    "\n",
    "We implement this overall module `OutputAgentComponent` that subclasses `CustomAgentComponent`.\n",
    "\n",
    "Note: we also implement `sub_query_components` to pass through higher-level callback managers to the tool runner submodule."
   ]
  },
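  {
   "cell_type": "markdown",
   "id": "8e7a6b5c-4d3e-42f1-a0b9-c8d7e6f5a4b3",
   "metadata": {},
   "source": [
    "As a sketch (reusing the `sql_tool` defined above), `ToolRunnerComponent` can also be invoked standalone; `run_component` returns a dictionary whose `output` key holds the `ToolOutput`:\n",
    "\n",
    "```python\n",
    "from llama_index.core.query_pipeline import ToolRunnerComponent\n",
    "\n",
    "tool_runner = ToolRunnerComponent([sql_tool])\n",
    "tool_output = tool_runner.run_component(\n",
    "    tool_name=\"sql_tool\",\n",
    "    tool_input={\"input\": \"How many albums are there?\"},\n",
    ")\n",
    "print(str(tool_output[\"output\"]))\n",
    "```"
   ]
  },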
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "702e3070-40df-491b-b468-d19ba98edc6b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import Set, Optional\n",
    "from llama_index.core.agent.react.output_parser import ReActOutputParser\n",
    "from llama_index.core.llms import ChatResponse\n",
    "from llama_index.core.agent.types import Task\n",
    "\n",
    "\n",
    "def parse_react_output_fn(\n",
    "    task: Task, state: Dict[str, Any], chat_response: ChatResponse\n",
    "):\n",
    "    \"\"\"Parse ReAct output into a reasoning step.\"\"\"\n",
    "    output_parser = ReActOutputParser()\n",
    "    reasoning_step = output_parser.parse(chat_response.message.content)\n",
    "    return {\"done\": reasoning_step.is_done, \"reasoning_step\": reasoning_step}\n",
    "\n",
    "\n",
    "parse_react_output = AgentFnComponent(fn=parse_react_output_fn)\n",
    "\n",
    "\n",
    "def run_tool_fn(\n",
    "    task: Task, state: Dict[str, Any], reasoning_step: ActionReasoningStep\n",
    "):\n",
    "    \"\"\"Run tool and process tool output.\"\"\"\n",
    "    tool_runner_component = ToolRunnerComponent(\n",
    "        [sql_tool], callback_manager=task.callback_manager\n",
    "    )\n",
    "    tool_output = tool_runner_component.run_component(\n",
    "        tool_name=reasoning_step.action,\n",
    "        tool_input=reasoning_step.action_input,\n",
    "    )\n",
    "    observation_step = ObservationReasoningStep(observation=str(tool_output))\n",
    "    state[\"current_reasoning\"].append(observation_step)\n",
    "    # TODO: get output\n",
    "\n",
    "    return {\"response_str\": observation_step.get_content(), \"is_done\": False}\n",
    "\n",
    "\n",
    "run_tool = AgentFnComponent(fn=run_tool_fn)\n",
    "\n",
    "\n",
    "def process_response_fn(\n",
    "    task: Task, state: Dict[str, Any], response_step: ResponseReasoningStep\n",
    "):\n",
    "    \"\"\"Process response.\"\"\"\n",
    "    state[\"current_reasoning\"].append(response_step)\n",
    "    response_str = response_step.response\n",
    "    # Now that we're done with this step, put into memory\n",
    "    state[\"memory\"].put(ChatMessage(content=task.input, role=MessageRole.USER))\n",
    "    state[\"memory\"].put(\n",
    "        ChatMessage(content=response_str, role=MessageRole.ASSISTANT)\n",
    "    )\n",
    "\n",
    "    return {\"response_str\": response_str, \"is_done\": True}\n",
    "\n",
    "\n",
    "process_response = AgentFnComponent(fn=process_response_fn)\n",
    "\n",
    "\n",
    "def process_agent_response_fn(\n",
    "    task: Task, state: Dict[str, Any], response_dict: dict\n",
    "):\n",
    "    \"\"\"Process agent response.\"\"\"\n",
    "    return (\n",
    "        AgentChatResponse(response_dict[\"response_str\"]),\n",
    "        response_dict[\"is_done\"],\n",
    "    )\n",
    "\n",
    "\n",
    "process_agent_response = AgentFnComponent(fn=process_agent_response_fn)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "03c1382b-f9ef-42a6-b117-91d7b2a14921",
   "metadata": {},
   "source": [
    "### Stitch together Agent Query Pipeline\n",
    "\n",
    "We can now stitch together the top-level agent pipeline: agent_input -> react_prompt -> llm -> react_output.\n",
    "\n",
    "The last component is the if-else component that calls sub-components."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "219ec2ed-474e-450e-866c-15772055b8b1",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.query_pipeline import QueryPipeline as QP\n",
    "from llama_index.llms.openai import OpenAI\n",
    "\n",
    "qp.add_modules(\n",
    "    {\n",
    "        \"agent_input\": agent_input_component,\n",
    "        \"react_prompt\": react_prompt_component,\n",
    "        \"llm\": OpenAI(model=\"gpt-4-1106-preview\"),\n",
    "        \"react_output_parser\": parse_react_output,\n",
    "        \"run_tool\": run_tool,\n",
    "        \"process_response\": process_response,\n",
    "        \"process_agent_response\": process_agent_response,\n",
    "    }\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "be90408b-0893-4cab-a7c7-32206979eecd",
   "metadata": {},
   "outputs": [],
   "source": [
    "# link input to react prompt to parsed out response (either tool action/input or observation)\n",
    "qp.add_chain([\"agent_input\", \"react_prompt\", \"llm\", \"react_output_parser\"])\n",
    "\n",
    "# add conditional link from react output to tool call (if not done)\n",
    "qp.add_link(\n",
    "    \"react_output_parser\",\n",
    "    \"run_tool\",\n",
    "    condition_fn=lambda x: not x[\"done\"],\n",
    "    input_fn=lambda x: x[\"reasoning_step\"],\n",
    ")\n",
    "# add conditional link from react output to final response processing (if done)\n",
    "qp.add_link(\n",
    "    \"react_output_parser\",\n",
    "    \"process_response\",\n",
    "    condition_fn=lambda x: x[\"done\"],\n",
    "    input_fn=lambda x: x[\"reasoning_step\"],\n",
    ")\n",
    "\n",
    "# whether response processing or tool output processing, add link to final agent response\n",
    "qp.add_link(\"process_response\", \"process_agent_response\")\n",
    "qp.add_link(\"run_tool\", \"process_agent_response\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "207ff8b0-efc3-482d-8224-f3b82c9a1c2c",
   "metadata": {},
   "source": [
    "### Visualize Query Pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cd5e739b-ef99-44dd-91a3-a495fa11e4f7",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "agent_dag.html\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "\n",
       "        <iframe\n",
       "            width=\"100%\"\n",
       "            height=\"600px\"\n",
       "            src=\"agent_dag.html\"\n",
       "            frameborder=\"0\"\n",
       "            allowfullscreen\n",
       "            \n",
       "        ></iframe>\n",
       "        "
      ],
      "text/plain": [
       "<IPython.lib.display.IFrame at 0x297838880>"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from pyvis.network import Network\n",
    "\n",
    "net = Network(notebook=True, cdn_resources=\"in_line\", directed=True)\n",
    "net.from_nx(qp.clean_dag)\n",
    "net.show(\"agent_dag.html\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f7785354-1fe0-49b4-8b48-08ce51853a4e",
   "metadata": {},
   "source": [
    "### Setup Agent Worker around Text-to-SQL Query Pipeline\n",
    "\n",
    "This is our way to setup an agent around a text-to-SQL Query Pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d01ea94b-3d3f-4b26-b71a-eea3d1d1ec0d",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.agent import QueryPipelineAgentWorker, AgentRunner\n",
    "from llama_index.core.callbacks import CallbackManager\n",
    "\n",
    "agent_worker = QueryPipelineAgentWorker(qp)\n",
    "agent = AgentRunner(\n",
    "    agent_worker, callback_manager=CallbackManager([]), verbose=True\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "373bd661-4a6b-4c5b-b35b-0082921cc3b9",
   "metadata": {},
   "source": [
    "### Run the Agent\n",
    "\n",
    "Let's try the agent on some sample queries."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6ea55110-98fb-4acf-b2f8-eee2b15c087b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# start task\n",
    "task = agent.create_task(\n",
    "    \"What are some tracks from the artist AC/DC? Limit it to 3\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7f35e816-e76b-43ba-8eb4-ddda8ee97a0b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "> Running step edb9926c-7290-4b8d-ac80-1421432a0ea6. Step input: What are some tracks from the artist AC/DC? Limit it to 3\n",
      "\u001b[1;3;38;2;155;135;227m> Running module agent_input with input: \n",
      "state: {'sources': [], 'memory': ChatMemoryBuffer(token_limit=3000, tokenizer_fn=functools.partial(<bound method Encoding.encode of <Encoding 'cl100k_base'>>, allowed_special='all'), chat_store=SimpleChatSto...\n",
      "task: task_id='b9b747a7-880f-4e91-9eed-b64574cbb6d0' input='What are some tracks from the artist AC/DC? Limit it to 3' memory=ChatMemoryBuffer(token_limit=3000, tokenizer_fn=functools.partial(<bound method ...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module react_prompt with input: \n",
      "input: What are some tracks from the artist AC/DC? Limit it to 3\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module llm with input: \n",
      "messages: [ChatMessage(role=<MessageRole.SYSTEM: 'system'>, content='\\nYou are designed to help with a variety of tasks, from answering questions     to providing summaries to other types of analyses.\\n\\n## Too...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module react_output_parser with input: \n",
      "chat_response: assistant: Thought: I need to use a tool to help me answer the question.\n",
      "Action: sql_tool\n",
      "Action Input: {\"input\": \"Select track_name from tracks where artist_name = 'AC/DC' limit 3\"}\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module run_tool with input: \n",
      "reasoning_step: thought='I need to use a tool to help me answer the question.' action='sql_tool' action_input={'input': \"Select track_name from tracks where artist_name = 'AC/DC' limit 3\"}\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module process_agent_response with input: \n",
      "response_dict: {'response_str': 'Observation: {\\'output\\': ToolOutput(content=\\'The top 3 tracks by AC/DC are \"For Those About To Rock (We Salute You)\", \"Put The Finger On You\", and \"Let\\\\\\'s Get It Up\".\\', tool_nam...\n",
      "\n",
      "\u001b[0m"
     ]
    }
   ],
   "source": [
    "step_output = agent.run_step(task.task_id)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2a13e449-db6c-43a3-99b7-70d23687e8b8",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "> Running step 37e2312b-540b-4c79-9261-15318d4796d9. Step input: None\n",
      "\u001b[1;3;38;2;155;135;227m> Running module agent_input with input: \n",
      "state: {'sources': [], 'memory': ChatMemoryBuffer(token_limit=3000, tokenizer_fn=functools.partial(<bound method Encoding.encode of <Encoding 'cl100k_base'>>, allowed_special='all'), chat_store=SimpleChatSto...\n",
      "task: task_id='b9b747a7-880f-4e91-9eed-b64574cbb6d0' input='What are some tracks from the artist AC/DC? Limit it to 3' memory=ChatMemoryBuffer(token_limit=3000, tokenizer_fn=functools.partial(<bound method ...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module react_prompt with input: \n",
      "input: What are some tracks from the artist AC/DC? Limit it to 3\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module llm with input: \n",
      "messages: [ChatMessage(role=<MessageRole.SYSTEM: 'system'>, content='\\nYou are designed to help with a variety of tasks, from answering questions     to providing summaries to other types of analyses.\\n\\n## Too...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module react_output_parser with input: \n",
      "chat_response: assistant: Thought: The user has repeated the request, possibly due to not noticing the previous response. I will provide the information again.\n",
      "\n",
      "Answer: The top 3 tracks by AC/DC are \"For Those About...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module process_response with input: \n",
      "response_step: thought='The user has repeated the request, possibly due to not noticing the previous response. I will provide the information again.' response='The top 3 tracks by AC/DC are \"For Those About To Rock ...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module process_agent_response with input: \n",
      "response_dict: {'response_str': 'The top 3 tracks by AC/DC are \"For Those About To Rock (We Salute You)\", \"Put The Finger On You\", and \"Let\\'s Get It Up\".', 'is_done': True}\n",
      "\n",
      "\u001b[0m"
     ]
    }
   ],
   "source": [
    "step_output = agent.run_step(task.task_id)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "22dffeec-651e-43e7-8929-827566c8f6da",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "step_output.is_last"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "44372a37-0b1e-4d90-8f42-9816546cc66e",
   "metadata": {},
   "outputs": [],
   "source": [
    "response = agent.finalize_response(task.task_id)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7ba842bd-cc86-422b-8c17-2ff87c1962b6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The top 3 tracks by AC/DC are \"For Those About To Rock (We Salute You)\", \"Put The Finger On You\", and \"Let's Get It Up\".\n"
     ]
    }
   ],
   "source": [
    "print(str(response))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c6ef125b-ccd7-4937-820a-e3b0a8b1f825",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "> Running step 781d6e78-5bfe-4330-b8fc-3242deb6f64a. Step input: What are some tracks from the artist AC/DC? Limit it to 3\n",
      "\u001b[1;3;38;2;155;135;227m> Running module agent_input with input: \n",
      "state: {'sources': [], 'memory': ChatMemoryBuffer(token_limit=3000, tokenizer_fn=functools.partial(<bound method Encoding.encode of <Encoding 'cl100k_base'>>, allowed_special='all'), chat_store=SimpleChatSto...\n",
      "task: task_id='c09dd358-19e8-4fcc-8b82-326783ba4af2' input='What are some tracks from the artist AC/DC? Limit it to 3' memory=ChatMemoryBuffer(token_limit=3000, tokenizer_fn=functools.partial(<bound method ...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module react_prompt with input: \n",
      "input: What are some tracks from the artist AC/DC? Limit it to 3\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module llm with input: \n",
      "messages: [ChatMessage(role=<MessageRole.SYSTEM: 'system'>, content='\\nYou are designed to help with a variety of tasks, from answering questions     to providing summaries to other types of analyses.\\n\\n## Too...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module react_output_parser with input: \n",
      "chat_response: assistant: Thought: I need to use a tool to help me answer the question.\n",
      "Action: sql_tool\n",
      "Action Input: {\"input\": \"SELECT track_name FROM tracks WHERE artist_name = 'AC/DC' LIMIT 3\"}\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module run_tool with input: \n",
      "reasoning_step: thought='I need to use a tool to help me answer the question.' action='sql_tool' action_input={'input': \"SELECT track_name FROM tracks WHERE artist_name = 'AC/DC' LIMIT 3\"}\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module process_agent_response with input: \n",
      "response_dict: {'response_str': 'Observation: {\\'output\\': ToolOutput(content=\\'The top three tracks by AC/DC are \"For Those About To Rock (We Salute You)\", \"Put The Finger On You\", and \"Let\\\\\\'s Get It Up\".\\', tool...\n",
      "\n",
      "\u001b[0m> Running step a65d44a6-7a98-49ec-86ce-eb4b3bcd6a48. Step input: None\n",
      "\u001b[1;3;38;2;155;135;227m> Running module agent_input with input: \n",
      "state: {'sources': [], 'memory': ChatMemoryBuffer(token_limit=3000, tokenizer_fn=functools.partial(<bound method Encoding.encode of <Encoding 'cl100k_base'>>, allowed_special='all'), chat_store=SimpleChatSto...\n",
      "task: task_id='c09dd358-19e8-4fcc-8b82-326783ba4af2' input='What are some tracks from the artist AC/DC? Limit it to 3' memory=ChatMemoryBuffer(token_limit=3000, tokenizer_fn=functools.partial(<bound method ...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module react_prompt with input: \n",
      "input: What are some tracks from the artist AC/DC? Limit it to 3\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module llm with input: \n",
      "messages: [ChatMessage(role=<MessageRole.SYSTEM: 'system'>, content='\\nYou are designed to help with a variety of tasks, from answering questions     to providing summaries to other types of analyses.\\n\\n## Too...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module react_output_parser with input: \n",
      "chat_response: assistant: Thought: The user has repeated the request for tracks from the artist AC/DC, limited to 3, despite having already received an answer. It's possible that the user did not see the previous re...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module process_response with input: \n",
      "response_step: thought=\"The user has repeated the request for tracks from the artist AC/DC, limited to 3, despite having already received an answer. It's possible that the user did not see the previous response, or ...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module process_agent_response with input: \n",
      "response_dict: {'response_str': 'The top three tracks by AC/DC are \"For Those About To Rock (We Salute You)\", \"Put The Finger On You\", and \"Let\\'s Get It Up\".', 'is_done': True}\n",
      "\n",
      "\u001b[0m"
     ]
    }
   ],
   "source": [
    "# run this e2e\n",
    "agent.reset()\n",
    "response = agent.chat(\n",
    "    \"What are some tracks from the artist AC/DC? Limit it to 3\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "de05be63-319f-48e3-bc16-6155b23b13a0",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The top three tracks by AC/DC are \"For Those About To Rock (We Salute You)\", \"Put The Finger On You\", and \"Let's Get It Up\".\n"
     ]
    }
   ],
   "source": [
    "print(str(response))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e8f8458f-4fab-420f-bb80-214e1f11c0a8",
   "metadata": {},
   "source": [
    "## Setup Simple Retry Agent Pipeline for Text-to-SQL \n",
    "\n",
    "Instead of the full ReAct pipeline that does tool picking, let's try a much simpler agent pipeline that only does text-to-SQL, with retry-logic.\n",
    "\n",
    "We try a simple text-based \"retry\" prompt where given the user input and previous conversation history, can generate a modified query that outputs the right result."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7adc8aa9-5b72-4525-8bad-052c5df28d6a",
   "metadata": {},
   "source": [
    "### Define Core Modules\n",
    "\n",
    "- agent input\n",
    "- retry prompt\n",
    "- output processor (including a validation prompt)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5b62f3c5-6c4d-4532-ab34-fafe9abb77a1",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.llms.openai import OpenAI\n",
    "\n",
    "# llm = OpenAI(model=\"gpt-3.5-turbo\")\n",
    "llm = OpenAI(model=\"gpt-4-1106-preview\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "06dd3c71-66d3-4a7d-ba03-abecd61268d0",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.agent import Task, AgentChatResponse\n",
    "from typing import Dict, Any\n",
    "from llama_index.core.query_pipeline import (\n",
    "    AgentInputComponent,\n",
    "    AgentFnComponent,\n",
    ")\n",
    "\n",
    "\n",
    "def agent_input_fn(task: Task, state: Dict[str, Any]) -> Dict:\n",
    "    \"\"\"Agent input function.\"\"\"\n",
    "    # initialize conversation history and retry count\n",
    "    if \"convo_history\" not in state:\n",
    "        state[\"convo_history\"] = []\n",
    "        state[\"count\"] = 0\n",
    "    state[\"convo_history\"].append(f\"User: {task.input}\")\n",
    "    convo_history_str = \"\\n\".join(state[\"convo_history\"]) or \"None\"\n",
    "    return {\"input\": task.input, \"convo_history\": convo_history_str}\n",
    "\n",
    "\n",
    "agent_input_component = AgentInputComponent(fn=agent_input_fn)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f18356fa-df3b-4fbf-ab5b-c2708b0bfaa4",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core import PromptTemplate\n",
    "\n",
    "retry_prompt_str = \"\"\"\\\n",
    "You are trying to generate a proper natural language query given a user input.\n",
    "\n",
    "This query will then be interpreted by a downstream text-to-SQL agent which\n",
    "will convert the query to a SQL statement. If the agent triggers an error,\n",
    "then that will be reflected in the current conversation history (see below).\n",
    "\n",
    "If the conversation history is None, use the user input. If it's not None,\n",
    "generate a new natural language query that avoids the problems of the previous SQL query.\n",
    "\n",
    "Input: {input}\n",
    "Convo history (failed attempts): \n",
    "{convo_history}\n",
    "\n",
    "New input: \"\"\"\n",
    "retry_prompt = PromptTemplate(retry_prompt_str)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f14e034c-b5b3-4f7d-b840-b7f2f4926d79",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core import Response\n",
    "from llama_index.core.query_pipeline import QueryPipeline as QP\n",
    "from typing import Tuple\n",
    "\n",
    "validate_prompt_str = \"\"\"\\\n",
    "Given the user query, validate whether the inferred SQL query and the response from executing it are correct and answer the query.\n",
    "\n",
    "Answer with YES or NO.\n",
    "\n",
    "Query: {input}\n",
    "Inferred SQL query: {sql_query}\n",
    "SQL Response: {sql_response}\n",
    "\n",
    "Result: \"\"\"\n",
    "validate_prompt = PromptTemplate(validate_prompt_str)\n",
    "\n",
    "MAX_ITER = 3\n",
    "\n",
    "\n",
    "def agent_output_fn(\n",
    "    task: Task, state: Dict[str, Any], output: Response\n",
    ") -> Tuple[AgentChatResponse, bool]:\n",
    "    \"\"\"Agent output component.\"\"\"\n",
    "    print(f\"> Inferred SQL Query: {output.metadata['sql_query']}\")\n",
    "    print(f\"> SQL Response: {str(output)}\")\n",
    "    state[\"convo_history\"].append(\n",
    "        f\"Assistant (inferred SQL query): {output.metadata['sql_query']}\"\n",
    "    )\n",
    "    state[\"convo_history\"].append(f\"Assistant (response): {str(output)}\")\n",
    "\n",
    "    # run a mini validation chain on the inferred SQL query and its response\n",
    "    validate_prompt_partial = validate_prompt.as_query_component(\n",
    "        partial={\n",
    "            \"sql_query\": output.metadata[\"sql_query\"],\n",
    "            \"sql_response\": str(output),\n",
    "        }\n",
    "    )\n",
    "    qp = QP(chain=[validate_prompt_partial, llm])\n",
    "    validate_output = qp.run(input=task.input)\n",
    "\n",
    "    state[\"count\"] += 1\n",
    "    is_done = False\n",
    "    if state[\"count\"] >= MAX_ITER:\n",
    "        is_done = True\n",
    "    if \"YES\" in validate_output.message.content:\n",
    "        is_done = True\n",
    "\n",
    "    return AgentChatResponse(response=str(output)), is_done\n",
    "\n",
    "\n",
    "agent_output_component = AgentFnComponent(fn=agent_output_fn)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "44015fad-a37d-48e7-90d0-77d62ff4ab83",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.query_pipeline import (\n",
    "    QueryPipeline as QP,\n",
    "    Link,\n",
    "    InputComponent,\n",
    ")\n",
    "\n",
    "qp = QP(\n",
    "    modules={\n",
    "        \"input\": agent_input_component,\n",
    "        \"retry_prompt\": retry_prompt,\n",
    "        \"llm\": llm,\n",
    "        \"sql_query_engine\": sql_query_engine,\n",
    "        \"output_component\": agent_output_component,\n",
    "    },\n",
    "    verbose=True,\n",
    ")\n",
    "qp.add_link(\"input\", \"retry_prompt\", src_key=\"input\", dest_key=\"input\")\n",
    "qp.add_link(\n",
    "    \"input\", \"retry_prompt\", src_key=\"convo_history\", dest_key=\"convo_history\"\n",
    ")\n",
    "qp.add_chain([\"retry_prompt\", \"llm\", \"sql_query_engine\", \"output_component\"])"
   ]
  },
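  {
   "cell_type": "markdown",
   "id": "qp-wiring-sketch-md",
   "metadata": {},
   "source": [
    "To make the wiring concrete, here is a plain-Python sketch of the same DAG: the agent input feeds the retry prompt through two links (`input` and `convo_history`), and the remaining modules run as a linear chain. The function arguments are hypothetical stand-ins for the pipeline modules, not LlamaIndex APIs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "qp-wiring-sketch-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "def run_pipeline_sketch(\n",
    "    agent_input, retry_prompt, llm, sql_engine, output_component\n",
    "):\n",
    "    \"\"\"Plain-Python rendition of the DAG wired with add_link/add_chain.\"\"\"\n",
    "    # \"input\" -> \"retry_prompt\" via two links: input and convo_history\n",
    "    inp = agent_input()\n",
    "    prompt = retry_prompt(inp[\"input\"], inp[\"convo_history\"])\n",
    "    # \"retry_prompt\" -> \"llm\" -> \"sql_query_engine\" -> \"output_component\"\n",
    "    return output_component(sql_engine(llm(prompt)))"
   ]
  },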
  {
   "cell_type": "markdown",
   "id": "64185c9e-b90f-4acb-b498-64a1bb4f8751",
   "metadata": {},
   "source": [
    "### Visualize Query Pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "142dd638-578c-42c0-aa06-395052ca210a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "agent_dag.html\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "\n",
       "        <iframe\n",
       "            width=\"100%\"\n",
       "            height=\"600px\"\n",
       "            src=\"agent_dag.html\"\n",
       "            frameborder=\"0\"\n",
       "            allowfullscreen\n",
       "            \n",
       "        ></iframe>\n",
       "        "
      ],
      "text/plain": [
       "<IPython.lib.display.IFrame at 0x2b28a9780>"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from pyvis.network import Network\n",
    "\n",
    "net = Network(notebook=True, cdn_resources=\"in_line\", directed=True)\n",
    "net.from_nx(qp.dag)\n",
    "net.show(\"agent_dag.html\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ab09a855-ef74-457c-8aa6-2292adf12278",
   "metadata": {},
   "source": [
    "### Define Agent Worker"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3764e8c-78a3-41cd-97f4-8c6c1642ff1e",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.agent import QueryPipelineAgentWorker, AgentRunner\n",
    "from llama_index.core.callbacks import CallbackManager\n",
    "\n",
    "# callback manager is passed from query pipeline to agent worker/agent\n",
    "agent_worker = QueryPipelineAgentWorker(qp)\n",
    "agent = AgentRunner(\n",
    "    agent_worker, callback_manager=CallbackManager(), verbose=False\n",
    ")"
   ]
  },
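  {
   "cell_type": "markdown",
   "id": "retry-loop-sketch-md",
   "metadata": {},
   "source": [
    "Before running the agent, it helps to see the control flow it implements. The sketch below is a framework-free rendition of the same retry loop: `generate_sql` and `validate` are hypothetical stand-ins for the text-to-SQL engine and the validation prompt, not LlamaIndex APIs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "retry-loop-sketch-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "def run_with_retries(question, generate_sql, validate, max_iter=3):\n",
    "    \"\"\"Re-run text-to-SQL until validation passes or max_iter is reached.\"\"\"\n",
    "    convo_history = [f\"User: {question}\"]\n",
    "    response = None\n",
    "    for count in range(1, max_iter + 1):\n",
    "        sql, response = generate_sql(question, \"\\n\".join(convo_history))\n",
    "        # failed attempts stay in the history so the retry prompt can see them\n",
    "        convo_history.append(f\"Assistant (inferred SQL query): {sql}\")\n",
    "        convo_history.append(f\"Assistant (response): {response}\")\n",
    "        if validate(question, sql, response):\n",
    "            break\n",
    "    return response"
   ]
  },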
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0d9a327a-e9ee-481e-b427-701245cdd2f1",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[1;3;38;2;155;135;227m> Running module input with input: \n",
      "state: {'sources': [], 'memory': ChatMemoryBuffer(token_limit=3000, tokenizer_fn=functools.partial(<bound method Encoding.encode of <Encoding 'cl100k_base'>>, allowed_special='all'), chat_store=SimpleChatSto...\n",
      "task: task_id='2d8a63de-7410-4422-98f3-f0ca41884f58' input=\"How many albums did the artist who wrote 'Restless and Wild' release? (answer should be non-zero)?\" memory=ChatMemoryBuffer(token_limit=3000, toke...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module retry_prompt with input: \n",
      "input: How many albums did the artist who wrote 'Restless and Wild' release? (answer should be non-zero)?\n",
      "convo_history: User: How many albums did the artist who wrote 'Restless and Wild' release? (answer should be non-zero)?\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module llm with input: \n",
      "messages: You are trying to generate a proper natural language query given a user input.\n",
      "\n",
      "This query will then be interpreted by a downstream text-to-SQL agent which\n",
      "will convert the query to a SQL statement. I...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module sql_query_engine with input: \n",
      "input: assistant: Given the conversation history, it seems that the previous attempt to generate a SQL query from the user's question may have resulted in an error. To avoid the same problem, we need to reph...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module output_component with input: \n",
      "output: I'm sorry, but there seems to be an error in the SQL query. The query you provided is invalid SQL. Please double-check the syntax and try again.\n",
      "\n",
      "\u001b[0m> Inferred SQL Query: SELECT COUNT(albums.AlbumId) \n",
      "FROM albums \n",
      "JOIN tracks ON albums.AlbumId = tracks.AlbumId \n",
      "WHERE tracks.Name = 'Restless and Wild' \n",
      "AND albums.ArtistId = tracks.Composer \n",
      "AND COUNT(albums.AlbumId) > 0\n",
      "> SQL Response: I'm sorry, but there seems to be an error in the SQL query. The query you provided is invalid SQL. Please double-check the syntax and try again.\n",
      "\u001b[1;3;38;2;155;135;227m> Running module input with input: \n",
      "state: {'sources': [], 'memory': ChatMemoryBuffer(token_limit=3000, tokenizer_fn=functools.partial(<bound method Encoding.encode of <Encoding 'cl100k_base'>>, allowed_special='all'), chat_store=SimpleChatSto...\n",
      "task: task_id='2d8a63de-7410-4422-98f3-f0ca41884f58' input=\"How many albums did the artist who wrote 'Restless and Wild' release? (answer should be non-zero)?\" memory=ChatMemoryBuffer(token_limit=3000, toke...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module retry_prompt with input: \n",
      "input: How many albums did the artist who wrote 'Restless and Wild' release? (answer should be non-zero)?\n",
      "convo_history: User: How many albums did the artist who wrote 'Restless and Wild' release? (answer should be non-zero)?\n",
      "Assistant (inferred SQL query): SELECT COUNT(albums.AlbumId) \n",
      "FROM albums \n",
      "JOIN tracks ON album...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module llm with input: \n",
      "messages: You are trying to generate a proper natural language query given a user input.\n",
      "\n",
      "This query will then be interpreted by a downstream text-to-SQL agent which\n",
      "will convert the query to a SQL statement. I...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module sql_query_engine with input: \n",
      "input: assistant: Given the previous error, it seems that the SQL query was incorrect because it attempted to use an aggregate function (`COUNT`) in the `WHERE` clause, which is not allowed. Additionally, th...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module output_component with input: \n",
      "output: The number of albums released by the artist who composed the track 'Restless and Wild' is 1.\n",
      "\n",
      "\u001b[0m> Inferred SQL Query: SELECT COUNT(DISTINCT albums.AlbumId) AS NumAlbums\n",
      "FROM tracks\n",
      "JOIN albums ON tracks.AlbumId = albums.AlbumId\n",
      "JOIN artists ON albums.ArtistId = artists.ArtistId\n",
      "WHERE tracks.Name = 'Restless and Wild'\n",
      "GROUP BY artists.ArtistId\n",
      "HAVING NumAlbums > 0;\n",
      "> SQL Response: The number of albums released by the artist who composed the track 'Restless and Wild' is 1.\n",
      "\u001b[1;3;38;2;155;135;227m> Running module input with input: \n",
      "state: {'sources': [], 'memory': ChatMemoryBuffer(token_limit=3000, tokenizer_fn=functools.partial(<bound method Encoding.encode of <Encoding 'cl100k_base'>>, allowed_special='all'), chat_store=SimpleChatSto...\n",
      "task: task_id='2d8a63de-7410-4422-98f3-f0ca41884f58' input=\"How many albums did the artist who wrote 'Restless and Wild' release? (answer should be non-zero)?\" memory=ChatMemoryBuffer(token_limit=3000, toke...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module retry_prompt with input: \n",
      "input: How many albums did the artist who wrote 'Restless and Wild' release? (answer should be non-zero)?\n",
      "convo_history: User: How many albums did the artist who wrote 'Restless and Wild' release? (answer should be non-zero)?\n",
      "Assistant (inferred SQL query): SELECT COUNT(albums.AlbumId) \n",
      "FROM albums \n",
      "JOIN tracks ON album...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module llm with input: \n",
      "messages: You are trying to generate a proper natural language query given a user input.\n",
      "\n",
      "This query will then be interpreted by a downstream text-to-SQL agent which\n",
      "will convert the query to a SQL statement. I...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module sql_query_engine with input: \n",
      "input: assistant: Given the conversation history, it seems that the previous SQL query was successful in retrieving the number of albums released by the artist who composed the track 'Restless and Wild'. How...\n",
      "\n",
      "\u001b[0m\u001b[1;3;38;2;155;135;227m> Running module output_component with input: \n",
      "output: I apologize, but there seems to be an error in the SQL query provided. Please double-check the syntax and try again.\n",
      "\n",
      "\u001b[0m> Inferred SQL Query: SELECT COUNT(DISTINCT albums.AlbumId) AS TotalAlbums\n",
      "FROM albums\n",
      "JOIN artists ON albums.ArtistId = artists.ArtistId\n",
      "WHERE artists.ArtistId = (\n",
      "    SELECT artists.ArtistId\n",
      "    FROM tracks\n",
      "    JOIN albums ON tracks.AlbumId = albums.AlbumId\n",
      "    JOIN artists ON albums.ArtistId = artists.ArtistId\n",
      "    WHERE tracks.Name = 'Restless and Wild'\n",
      "    LIMIT 1\n",
      ")\n",
      "AND TotalAlbums > 0;\n",
      "> SQL Response: I apologize, but there seems to be an error in the SQL query provided. Please double-check the syntax and try again.\n",
      "I apologize, but there seems to be an error in the SQL query provided. Please double-check the syntax and try again.\n"
     ]
    }
   ],
   "source": [
    "response = agent.chat(\n",
    "    \"How many albums did the artist who wrote 'Restless and Wild' release? (answer should be non-zero)?\"\n",
    ")\n",
    "print(str(response))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "llama_index_v3",
   "language": "python",
   "name": "llama_index_v3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
