{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Advanced RAG with LlamaParse\n",
    "\n",
    "<a href=\"https://colab.research.google.com/github/run-llama/llama_parse/blob/main/examples/parse/demo_advanced.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
    "\n",
    "This notebook is a complete walkthrough for using LlamaParse with advanced indexing/retrieval techniques in LlamaIndex over Apple's 2021 10-K filing.\n",
    "\n",
    "This allows us to ask sophisticated questions that aren't possible with \"naive\" parsing/indexing techniques.\n",
    "\n",
    "Status:\n",
    "| Last Executed | Version | State      |\n",
    "|---------------|---------|------------|\n",
    "| Aug-18-2025   | 0.6.61  | Maintained |"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install llama-cloud-services \"llama-index>=0.13.2,<0.14.0\" \"llama-index-embeddings-huggingface>=0.6.0,<0.7.0\" torchvision \"sentence-transformers<5.0\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!wget \"https://s2.q4cdn.com/470004039/files/doc_financials/2021/q4/_10-K-2021-(As-Filed).pdf\" -O apple_2021_10k.pdf"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Set up API keys for OpenAI and LlamaCloud (used by LlamaParse)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "# API access to llama-cloud\n",
    "os.environ[\"LLAMA_CLOUD_API_KEY\"] = \"llx-...\"\n",
    "\n",
    "# Using OpenAI API for embeddings/llms\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"sk-...\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.llms.openai import OpenAI\n",
    "from llama_index.embeddings.openai import OpenAIEmbedding\n",
    "from llama_index.core import Settings\n",
    "\n",
    "embed_model = OpenAIEmbedding(model_name=\"text-embedding-3-small\")\n",
    "llm = OpenAI(model=\"gpt-5-mini\")\n",
    "\n",
    "Settings.llm = llm\n",
    "Settings.embed_model = embed_model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Using the `LlamaParse` PDF Reader\n",
    "\n",
    "We compare three different retrieval/query engine strategies:\n",
    "1. A baseline using the default parsing from `SimpleDirectoryReader`\n",
    "2. Using raw markdown text as nodes for building the index, with a simple query engine for generating results\n",
    "3. Using markdown + page screenshots to help retrieve the proper nodes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Started parsing the file under job_id f347cb97-dfe2-4677-991a-5ceba6d9fc6a\n"
     ]
    }
   ],
   "source": [
    "from llama_cloud_services import LlamaParse\n",
    "\n",
    "result = await LlamaParse(\n",
    "    # The parsing mode\n",
    "    parse_mode=\"parse_page_with_agent\",\n",
    "    # The model to use\n",
    "    model=\"openai-gpt-4-1-mini\",\n",
    "    # Whether to use high resolution OCR (Slower)\n",
    "    high_res_ocr=True,\n",
    "    # Adaptive long table. LlamaParse will try to detect long tables across pages\n",
    "    adaptive_long_table=True,\n",
    "    outlined_table_extraction=True,\n",
    "    output_tables_as_HTML=True,\n",
    "    # Whether to take a screenshot of the page, needed for screenshot-retrieval\n",
    "    take_screenshot=True,\n",
    ").aparse(\"./apple_2021_10k.pdf\")\n",
    "\n",
    "markdown_nodes = await result.aget_markdown_nodes(split_by_page=True)\n",
    "screenshot_image_nodes = await result.aget_image_nodes(\n",
    "    include_screenshot_images=True,\n",
    "    include_object_images=False,\n",
    "    image_download_dir=\"./images\",\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core import SimpleDirectoryReader\n",
    "\n",
    "baseline_documents = SimpleDirectoryReader(\n",
    "    input_files=[\"apple_2021_10k.pdf\"]\n",
    ").load_data()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup Baseline Index\n",
    "\n",
    "For comparison, we set up a naive RAG pipeline with default parsing and standard chunking, indexing, and retrieval."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-08-18 20:53:51,246 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-08-18 20:53:52,143 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
     ]
    }
   ],
   "source": [
    "from llama_index.core import VectorStoreIndex\n",
    "\n",
    "baseline_index = VectorStoreIndex.from_documents(baseline_documents)\n",
    "baseline_query_engine = baseline_index.as_query_engine(similarity_top_k=3)"
   ]
  },
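  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally, any of these indexes can be persisted to disk to avoid re-embedding on subsequent runs. A minimal sketch, assuming a local `./baseline_storage` directory (an arbitrary path):\n",
    "\n",
    "```python\n",
    "from llama_index.core import StorageContext, load_index_from_storage\n",
    "\n",
    "# save the in-memory baseline index to disk\n",
    "baseline_index.storage_context.persist(persist_dir=\"./baseline_storage\")\n",
    "\n",
    "# later: reload it without recomputing embeddings\n",
    "storage_context = StorageContext.from_defaults(persist_dir=\"./baseline_storage\")\n",
    "baseline_index = load_index_from_storage(storage_context)\n",
    "```"
   ]
  },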
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Set Up Our LlamaParse Indexes\n",
    "\n",
    "Using both the markdown and screenshot images, we can build two different indexes.\n",
    "\n",
    "1. An index over just the markdown documents\n",
    "2. A custom index that uses the markdown + screenshot images to help with response quality."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-08-18 20:53:53,070 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
     ]
    }
   ],
   "source": [
    "from llama_index.core import VectorStoreIndex\n",
    "\n",
    "markdown_index = VectorStoreIndex(nodes=markdown_nodes)\n",
    "markdown_query_engine = markdown_index.as_query_engine(similarity_top_k=3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/loganmarkewich/llama_parse/py/.venv/lib/python3.12/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
      "  from .autonotebook import tqdm as notebook_tqdm\n",
      "2025-08-18 20:53:55,230 - INFO - Load pretrained SentenceTransformer: llamaindex/vdr-2b-multi-v1\n",
      "Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.\n",
      "2025-08-18 20:54:05,369 - INFO - 2 prompts are loaded, with the keys: ['query', 'text']\n",
      "Generating embeddings:   0%|          | 0/82 [00:00<?, ?it/s]2025-08-18 20:54:06,599 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "Generating embeddings: 100%|██████████| 82/82 [00:01<00:00, 61.24it/s]\n",
      "Generating image embeddings: 100%|██████████| 82/82 [26:06<00:00, 19.11s/it]\n"
     ]
    }
   ],
   "source": [
    "from llama_index.core.indices import MultiModalVectorStoreIndex\n",
    "from llama_index.embeddings.huggingface import HuggingFaceEmbedding\n",
    "from llama_index.core import Settings\n",
    "\n",
    "# could also use other API-based multimodal models like voyageai or jinaai\n",
    "# Note: this may take quite a while if running on CPU!\n",
    "image_embed_model = HuggingFaceEmbedding(\n",
    "    model_name=\"llamaindex/vdr-2b-multi-v1\",\n",
    "    embed_batch_size=2,\n",
    "    trust_remote_code=True,\n",
    "    cache_folder=\"./hf_cache\",\n",
    "    device=\"cpu\",  # set to \"cuda\" if you have a GPU or remove to auto-detect\n",
    ")\n",
    "\n",
    "multi_modal_index = MultiModalVectorStoreIndex(\n",
    "    nodes=[*markdown_nodes, *screenshot_image_nodes],\n",
    "    embed_model=Settings.embed_model,\n",
    "    image_embed_model=image_embed_model,\n",
    "    show_progress=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Below, we will create a custom query engine that does a few things:\n",
    "1. Retrieves both image nodes and text nodes\n",
    "2. Combines them into two lists -- one where the image and text come from the same page, and one with texts alone\n",
    "3. Uses a Jinja-based `RichPromptTemplate` to automatically format the retrieved content into a list of multimodal chat messages\n",
    "4. Sends our messages to the LLM and returns a result\n"
   ]
  },
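  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The pairing logic in step 2 can be sketched with plain dictionaries (toy data standing in for the real `NodeWithScore` objects):\n",
    "\n",
    "```python\n",
    "# toy retrieval results keyed by (page_number, file_name)\n",
    "text_results = [\n",
    "    {\"page_number\": 1, \"file_name\": \"a.pdf\", \"text\": \"page 1 text\"},\n",
    "    {\"page_number\": 3, \"file_name\": \"a.pdf\", \"text\": \"page 3 text\"},\n",
    "]\n",
    "image_results = [\n",
    "    {\"page_number\": 1, \"file_name\": \"a.pdf\", \"image_path\": \"page_1.jpg\"},\n",
    "    {\"page_number\": 2, \"file_name\": \"a.pdf\", \"image_path\": \"page_2.jpg\"},\n",
    "]\n",
    "\n",
    "text_keys = {(t[\"page_number\"], t[\"file_name\"]): t for t in text_results}\n",
    "\n",
    "images_and_texts = []\n",
    "for img in image_results:\n",
    "    key = (img[\"page_number\"], img[\"file_name\"])\n",
    "    if key in text_keys:\n",
    "        # image and text from the same page -- a strong retrieval signal\n",
    "        images_and_texts.append((img[\"image_path\"], text_keys.pop(key)[\"text\"]))\n",
    "\n",
    "# unmatched texts are kept as a text-only fallback\n",
    "texts = [t[\"text\"] for t in text_keys.values()]\n",
    "\n",
    "print(images_and_texts)  # [('page_1.jpg', 'page 1 text')]\n",
    "print(texts)  # ['page 3 text']\n",
    "```"
   ]
  },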
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.async_utils import asyncio_run\n",
    "from llama_index.core.llms import LLM\n",
    "from llama_index.core.query_engine import CustomQueryEngine\n",
    "from llama_index.core.prompts import RichPromptTemplate\n",
    "from llama_index.core.response import Response\n",
    "from llama_index.core.schema import NodeWithScore\n",
    "from llama_index.core import Settings\n",
    "\n",
    "TEXT_IMAGE_PROMPT_TEMPLATE = RichPromptTemplate(\n",
    "    \"\"\"\n",
    "<context>\n",
    "Here is some retrieved content from a knowledge base:\n",
    "{% for image_path, text in images_and_texts %}\n",
    "<page>\n",
    "<text>{{ text }}</text>\n",
    "<image>{{ image_path | image }}</image>\n",
    "</page>\n",
    "{% endfor %}\n",
    "{% for text in texts %}\n",
    "<page>\n",
    "<text>{{ text }}</text>\n",
    "</page>\n",
    "{% endfor %}\n",
    "</context>\n",
    "\n",
    "Using the context, answer the following question:\n",
    "<query>{{ query_str }}</query>\n",
    "\"\"\"\n",
    ")\n",
    "\n",
    "\n",
    "class SimpleMultiModalQueryEngine(CustomQueryEngine):\n",
    "    def __init__(\n",
    "        self,\n",
    "        index: MultiModalVectorStoreIndex,\n",
    "        image_top_k: int = 4,\n",
    "        text_top_k: int = 4,\n",
    "        llm: LLM | None = None,\n",
    "        **kwargs\n",
    "    ):\n",
    "        super().__init__(**kwargs)\n",
    "        self._retriever = index.as_retriever(\n",
    "            similarity_top_k=text_top_k, image_similarity_top_k=image_top_k\n",
    "        )\n",
    "        self._llm = llm or Settings.llm\n",
    "\n",
    "    def _match_images_and_texts(\n",
    "        self, text_results: list[NodeWithScore], image_results: list[NodeWithScore]\n",
    "    ) -> tuple[list[NodeWithScore], list[NodeWithScore]]:\n",
    "        # combine results, prioritize images and texts\n",
    "        # if both an image and matching text was retrieved, that is a strong indicator\n",
    "        images_and_texts = []\n",
    "        text_keys = {\n",
    "            (x.metadata[\"page_number\"], x.metadata[\"file_name\"]): x\n",
    "            for x in text_results\n",
    "        }\n",
    "        for image_result in image_results:\n",
    "            key = (\n",
    "                image_result.metadata[\"page_number\"],\n",
    "                image_result.metadata[\"file_name\"],\n",
    "            )\n",
    "            # add matching text to results if available\n",
    "            if key in text_keys:\n",
    "                text_result = text_keys[key]\n",
    "                images_and_texts.append(\n",
    "                    (image_result.node.image_path, text_result.node.text)\n",
    "                )\n",
    "\n",
    "                # remove from list\n",
    "                text_keys.pop(key)\n",
    "\n",
    "        # get the remaining texts as a fallback\n",
    "        texts = [result.node.text for result in text_keys.values()]\n",
    "\n",
    "        return images_and_texts, texts\n",
    "\n",
    "    def custom_query(self, query_str: str) -> Response:\n",
    "        # wrap the async method to avoid code duplication\n",
    "        # asyncio_run is a slightly safer asyncio.run() call\n",
    "        return asyncio_run(self.acustom_query(query_str))\n",
    "\n",
    "    async def acustom_query(self, query_str: str) -> Response:\n",
    "        text_results = await self._retriever.atext_retrieve(query_str)\n",
    "        image_results = await self._retriever.atext_to_image_retrieve(query_str)\n",
    "\n",
    "        images_and_texts, texts = self._match_images_and_texts(\n",
    "            text_results, image_results\n",
    "        )\n",
    "        messages = TEXT_IMAGE_PROMPT_TEMPLATE.format_messages(\n",
    "            images_and_texts=images_and_texts, texts=texts, query_str=str(query_str)\n",
    "        )\n",
    "\n",
    "        response = await self._llm.achat(messages)\n",
    "\n",
    "        return Response(\n",
    "            response.message.content, source_nodes=[*text_results, *image_results]\n",
    "        )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "multimodal_query_engine = SimpleMultiModalQueryEngine(\n",
    "    index=multi_modal_index,\n",
    "    image_top_k=3,\n",
    "    text_top_k=3,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Try out the Query Engines and Compare!\n",
    "\n",
    "Now with our three query engines assembled, we can compare each approach with a rough \"vibes-based\" evaluation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-08-18 21:20:29,006 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-08-18 21:20:38,721 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "***********Baseline Query Engine***********\n",
      "The total fair value of marketable securities in 2020 was $153,814 million (approximately $153.8 billion).\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-08-18 21:20:39,233 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-08-18 21:20:48,185 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "***********Markdown Query Engine***********\n",
      "The total fair value was $191,830 million (approximately $191.83 billion).\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-08-18 21:20:48,515 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-08-18 21:21:09,275 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "***********MultiModal Query Engine***********\n",
      "The table shows:\n",
      "\n",
      "- Total fair value (cash, cash equivalents and marketable securities) in 2020: $191,830 million (≈ $191.83 billion).  \n",
      "- Total marketable securities (current + non‑current) in 2020: $52,927 + $100,887 = $153,814 million (≈ $153.81 billion).\n"
     ]
    }
   ],
   "source": [
    "query = \"What were the total fair value of marketable securities in 2020\"\n",
    "\n",
    "response_1 = await baseline_query_engine.aquery(query)\n",
    "print(\"\\n***********Baseline Query Engine***********\")\n",
    "print(response_1)\n",
    "\n",
    "response_2 = await markdown_query_engine.aquery(query)\n",
    "print(\"\\n***********Markdown Query Engine***********\")\n",
    "print(response_2)\n",
    "\n",
    "response_3 = await multimodal_query_engine.aquery(query)\n",
    "print(\"\\n***********MultiModal Query Engine***********\")\n",
    "print(response_3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As we can see, the multimodal and markdown query engines retrieve the correct content, while the baseline query engine struggles to find the correct total value."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can also inspect the source nodes to see which pages were retrieved. Here is the correct page for the total fair value of marketable securities in 2020:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'images/page_42.jpg'"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "response_3.source_nodes[4].node.image_path"
   ]
  },
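  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since the parse cell used `image_download_dir=\"./images\"`, the screenshot is available locally and can be rendered inline (a sketch; the node index follows from the example above):\n",
    "\n",
    "```python\n",
    "from IPython.display import Image, display\n",
    "\n",
    "# render the retrieved page screenshot in the notebook\n",
    "display(Image(filename=response_3.source_nodes[4].node.image_path))\n",
    "```"
   ]
  },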
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's try a few more queries to see how the query engines perform."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-08-18 21:35:33,281 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-08-18 21:35:40,959 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "***********Baseline Query Engine***********\n",
      "- Second quarter 2021 fixed-rate notes (2026–2061): effective interest rates 0.75%–2.81%\n",
      "- Fourth quarter 2021 fixed-rate notes (2028–2061): effective interest rates 1.43%–2.86%\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-08-18 21:35:41,285 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-08-18 21:35:49,132 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "***********Markdown Query Engine***********\n",
      "- Floating-rate notes (2022): 0.48% – 0.63%\n",
      "- Fixed-rate 0.000% – 4.650% notes (2022 – 2060): 0.03% – 4.78%\n",
      "- Second-quarter 2021 fixed-rate notes (0.700% – 2.800%, 2026 – 2061): 0.75% – 2.81%\n",
      "- Fourth-quarter 2021 fixed-rate notes (1.400% – 2.850%, 2028 – 2061): 1.43% – 2.86%\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-08-18 21:35:49,411 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-08-18 21:36:06,767 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "***********MultiModal Query Engine***********\n",
      "The effective interest rate ranges reported for the 2021 debt issuances were:\n",
      "\n",
      "- Floating‑rate notes (2022): 0.48% – 0.63%  \n",
      "- Fixed‑rate 0.000% – 4.650% notes (2022–2060): 0.03% – 4.78%  \n",
      "- Q2 2021 fixed‑rate notes (0.700% – 2.800%, maturities 2026–2061): 0.75% – 2.81%  \n",
      "- Q4 2021 fixed‑rate notes (1.400% – 2.850%, maturities 2028–2061): 1.43% – 2.86%\n"
     ]
    }
   ],
   "source": [
    "query = \"What were the effective interest rates of all debt issuances in 2021\"\n",
    "\n",
    "response_1 = await baseline_query_engine.aquery(query)\n",
    "print(\"\\n***********Baseline Query Engine***********\")\n",
    "print(response_1)\n",
    "\n",
    "response_2 = await markdown_query_engine.aquery(query)\n",
    "print(\"\\n***********Markdown Query Engine***********\")\n",
    "print(response_2)\n",
    "\n",
    "response_3 = await multimodal_query_engine.aquery(query)\n",
    "print(\"\\n***********MultiModal Query Engine***********\")\n",
    "print(response_3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "***********Baseline Query Engine***********\n",
      "The federal deferred tax amounts for the years 2019 to 2021 are as follows (in millions):\n",
      "\n",
      "- **2019**: $(2,939)\n",
      "- **2020**: $(3,619)\n",
      "- **2021**: $(7,176)\n",
      "\n",
      "These figures represent the deferred tax expense for each respective year.\n",
      "\n",
      "***********Markdown Query Engine***********\n",
      "As of September 25, 2021, the total deferred tax assets and liabilities for the years 2021 and 2020 are as follows:\n",
      "\n",
      "**Deferred Tax Assets:**\n",
      "- 2021: $25,176 million\n",
      "- 2020: $19,336 million\n",
      "\n",
      "**Deferred Tax Liabilities:**\n",
      "- 2021: $7,200 million\n",
      "- 2020: $10,138 million\n",
      "\n",
      "**Net Deferred Tax Assets:**\n",
      "- 2021: $13,073 million\n",
      "- 2020: $8,157 million\n",
      "\n",
      "The information for 2019 is not provided in the context.\n",
      "\n",
      "***********MultiModal Query Engine***********\n",
      "The federal deferred tax assets and liabilities for the years 2019 to 2021 are as follows:\n",
      "\n",
      "### Deferred Tax Assets (in millions):\n",
      "- **2021**: $25,176\n",
      "- **2020**: $19,336\n",
      "- **2019**: Not specified in the provided content.\n",
      "\n",
      "### Deferred Tax Liabilities (in millions):\n",
      "- **2021**: $7,200\n",
      "- **2020**: $10,138\n",
      "- **2019**: Not specified in the provided content.\n",
      "\n",
      "### Net Deferred Tax Assets (in millions):\n",
      "- **2021**: $13,073\n",
      "- **2020**: $8,157\n",
      "- **2019**: Not specified in the provided content.\n",
      "\n",
      "The significant components of deferred tax assets and liabilities reflect the effects of tax credits and temporary differences between financial statement carrying amounts and their respective tax bases.\n"
     ]
    }
   ],
   "source": [
    "query = \"federal deferred tax in 2019-2021\"\n",
    "\n",
    "response_1 = await baseline_query_engine.aquery(query)\n",
    "print(\"\\n***********Baseline Query Engine***********\")\n",
    "print(response_1)\n",
    "\n",
    "response_2 = await markdown_query_engine.aquery(query)\n",
    "print(\"\\n***********Markdown Query Engine***********\")\n",
    "print(response_2)\n",
    "\n",
    "response_3 = await multimodal_query_engine.aquery(query)\n",
    "print(\"\\n***********MultiModal Query Engine***********\")\n",
    "print(response_3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-08-18 21:36:07,790 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-08-18 21:36:14,197 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "***********Baseline Query Engine***********\n",
      "State current tax (in millions):\n",
      "- 2019: +$475 million\n",
      "- 2020: +$455 million\n",
      "- 2021: +$1,620 million\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-08-18 21:36:14,584 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-08-18 21:36:22,084 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "***********Markdown Query Engine***********\n",
      "2019 — Current state taxes: $475 million (change vs prior year: n/a)  \n",
      "2020 — Current state taxes: $455 million (change vs 2019: −$20 million)  \n",
      "2021 — Current state taxes: $1,620 million (change vs 2020: +$1,165 million)\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-08-18 21:36:22,441 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "2025-08-18 21:36:33,498 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "***********MultiModal Query Engine***********\n",
      "The current state tax amounts (in millions) per the Note 5 table are:\n",
      "\n",
      "- 2019: $475\n",
      "- 2020: $455  (−$20 vs 2019; −4.2%)\n",
      "- 2021: $1,620 (+$1,165 vs 2020; +256.0%)\n",
      "\n",
      "All amounts are in millions of dollars.\n"
     ]
    }
   ],
   "source": [
    "query = \"current state taxes per year in 2019-2021 (include +/-)\"\n",
    "\n",
    "response_1 = await baseline_query_engine.aquery(query)\n",
    "print(\"\\n***********Baseline Query Engine***********\")\n",
    "print(response_1)\n",
    "\n",
    "response_2 = await markdown_query_engine.aquery(query)\n",
    "print(\"\\n***********Markdown Query Engine***********\")\n",
    "print(response_2)\n",
    "\n",
    "response_3 = await multimodal_query_engine.aquery(query)\n",
    "print(\"\\n***********MultiModal Query Engine***********\")\n",
    "print(response_3)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
