{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Building a RAG Pipeline over IKEA Product Instruction Manuals\n",
    "\n",
    "<a href=\"https://colab.research.google.com/github/run-llama/llama_cloud_services/blob/main/examples/parse/multimodal/product_manual_rag.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This cookbook shows how to use LlamaParse and OpenAI's multimodal models to query IKEA instruction manual PDFs, which consist mostly of images and diagrams showing how to assemble each product.\n",
    "\n",
    "LlamaParse and multimodal LLMs can interpret these diagrams and translate them into textual instructions, making confusing visual steps easier to understand. Textual instructions are also helpful for users who are visually impaired.\n",
    "\n",
    "Status:\n",
    "| Last Executed | Version | State      |\n",
    "|---------------|---------|------------|\n",
    "| Aug-20-2025   | 0.6.61  | Maintained |"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Install and Setup\n",
    "\n",
    "Install LlamaIndex, download the data, and configure the API keys."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install \"llama-index>=0.13.0,<0.14.0\" llama-cloud-services"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!wget https://github.com/user-attachments/files/16461058/data.zip -O data.zip\n",
    "!unzip -o data.zip\n",
    "!rm data.zip"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Set up your OpenAI and LlamaCloud keys."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"sk-...\"\n",
    "os.environ[\"LLAMA_CLOUD_API_KEY\"] = \"llx-...\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Code Implementation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Configure LlamaParse and collect the PDF files to parse."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_cloud_services import LlamaParse\n",
    "\n",
    "parser = LlamaParse(\n",
    "    parse_mode=\"parse_page_with_agent\",\n",
    "    model=\"openai-gpt-4-1-mini\",\n",
    "    high_res_ocr=True,\n",
    "    outlined_table_extraction=True,\n",
    "    output_tables_as_HTML=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "DATA_DIR = \"data\"\n",
    "\n",
    "\n",
    "def get_data_files(data_dir=DATA_DIR) -> list[str]:\n",
    "    files = []\n",
    "    for f in os.listdir(data_dir):\n",
    "        fname = os.path.join(data_dir, f)\n",
    "        if os.path.isfile(fname):\n",
    "            files.append(fname)\n",
    "    return files\n",
    "\n",
    "\n",
    "files = get_data_files()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Parse the files, pair each page's markdown node with its page screenshot, and save the screenshots to the `data_images` directory."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Getting job results:   0%|          | 0/5 [00:00<?, ?it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Started parsing the file under job_id 0d3de1c0-e4c6-4cca-9e85-b738b301119a\n",
      "Started parsing the file under job_id 48ef73aa-fe6b-4e67-a4c0-ebe5d1fc532c\n",
      "Started parsing the file under job_id 71cdf344-d4c1-40ca-812c-3ada19aeca5a\n",
      "Started parsing the file under job_id 747a4847-7971-4e3b-87c5-6ce93a05c260\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Getting job results:  20%|██        | 1/5 [00:14<00:58, 14.62s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Started parsing the file under job_id a2a9fd6a-fa25-4410-8ccc-9da7d38e1590\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Getting job results: 100%|██████████| 5/5 [00:38<00:00,  7.78s/it]\n"
     ]
    }
   ],
   "source": [
    "results = await parser.aparse(files)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "all_text_nodes = []\n",
    "\n",
    "for result in results:\n",
    "    text_nodes = result.get_markdown_nodes(split_by_page=True)\n",
    "    image_nodes = await result.aget_image_nodes(\n",
    "        include_object_images=False,\n",
    "        include_screenshot_images=True,\n",
    "        image_download_dir=\"./data_images\",\n",
    "    )\n",
    "\n",
    "    # markdown and screenshot nodes are both per-page and in the same order, so zip pairs them\n",
    "    for text_node, image_node in zip(text_nodes, image_nodes):\n",
    "        text_node.metadata[\"image_path\"] = image_node.image_path\n",
    "        all_text_nodes.append(text_node)"
   ]
  },
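  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an optional sanity check, confirm how many page nodes were collected and that each node carries an `image_path` pointing at its saved page screenshot."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity check: every page node should reference a saved page screenshot.\n",
    "print(f\"Collected {len(all_text_nodes)} page nodes\")\n",
    "print(all_text_nodes[0].metadata[\"image_path\"])\n",
    "print(all_text_nodes[0].get_content()[:300])"
   ]
  },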
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Index the documents."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core import (\n",
    "    VectorStoreIndex,\n",
    "    Settings,\n",
    ")\n",
    "from llama_index.embeddings.openai import OpenAIEmbedding\n",
    "from llama_index.llms.openai import OpenAI\n",
    "\n",
    "embed_model = OpenAIEmbedding(model=\"text-embedding-3-large\")\n",
    "llm = OpenAI(\"gpt-5-mini\")\n",
    "\n",
    "Settings.llm = llm\n",
    "Settings.embed_model = embed_model\n",
    "\n",
    "index = VectorStoreIndex(nodes=all_text_nodes)"
   ]
  },
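  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally, persist the index to disk so later runs can reload it instead of re-parsing and re-embedding the PDFs. This sketch uses LlamaIndex's default local storage; adjust `persist_dir` as needed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: save the index so subsequent runs can skip parsing and embedding.\n",
    "index.storage_context.persist(persist_dir=\"./storage\")"
   ]
  },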
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Create a custom query engine that uses OpenAI for multimodal response generation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.query_engine import CustomQueryEngine\n",
    "from llama_index.core.retrievers import BaseRetriever\n",
    "from llama_index.core.schema import MetadataMode\n",
    "from llama_index.core.base.response.schema import Response\n",
    "from llama_index.core.llms import ChatMessage, TextBlock, ImageBlock\n",
    "\n",
    "\n",
    "qa_prompt_block_text = \"\"\"\\\n",
    "Below is the parsed text from the product manual pages.\n",
    "\n",
    "---------------------\n",
    "{context_str}\n",
    "---------------------\n",
    "\"\"\"\n",
    "\n",
    "image_prefix_block = TextBlock(text=\"And here are the corresponding images per page\\n\")\n",
    "\n",
    "image_suffix = \"\"\"\\\n",
    "Given the context information and not prior knowledge, answer the query. Explain whether your answer came\n",
    "from the parsed markdown, the page images, or both; note any discrepancies between them; and give your reasoning for the final answer.\n",
    "\n",
    "Query: {query_str}\n",
    "Answer: \"\"\"\n",
    "\n",
    "\n",
    "class MultimodalQueryEngine(CustomQueryEngine):\n",
    "    \"\"\"Custom multimodal Query Engine.\n",
    "\n",
    "    Takes in a retriever to retrieve a set of document nodes and respond using an LLM + retrieved text/images.\n",
    "\n",
    "    \"\"\"\n",
    "\n",
    "    retriever: BaseRetriever\n",
    "    llm: OpenAI\n",
    "\n",
    "    def custom_query(self, query_str: str):\n",
    "        # retrieve text nodes\n",
    "        nodes = self.retriever.retrieve(query_str)\n",
    "        # collect an ImageBlock for each retrieved node that has a page screenshot\n",
    "        image_blocks = [\n",
    "            ImageBlock(path=n.metadata[\"image_path\"])\n",
    "            for n in nodes\n",
    "            if n.metadata.get(\"image_path\")\n",
    "        ]\n",
    "\n",
    "        # create context string from text nodes, dump into the prompt\n",
    "        context_str = \"\\n\\n\".join(\n",
    "            [r.get_content(metadata_mode=MetadataMode.LLM) for r in nodes]\n",
    "        )\n",
    "\n",
    "        formatted_msg = ChatMessage(\n",
    "            role=\"user\",\n",
    "            blocks=[\n",
    "                TextBlock(text=qa_prompt_block_text.format(context_str=context_str)),\n",
    "                image_prefix_block,\n",
    "                *image_blocks,\n",
    "                TextBlock(text=image_suffix.format(query_str=query_str)),\n",
    "            ],\n",
    "        )\n",
    "\n",
    "        # synthesize an answer from formatted text and images\n",
    "        llm_response = self.llm.chat([formatted_msg])\n",
    "\n",
    "        return Response(\n",
    "            response=str(llm_response.message.content),\n",
    "            source_nodes=nodes,\n",
    "        )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Create a query engine instance."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "query_engine = MultimodalQueryEngine(\n",
    "    retriever=index.as_retriever(similarity_top_k=3),\n",
    "    llm=llm,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Example Queries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "Answer (parts included in the UPPSPEL kit)\n",
       "\n",
       "I read the parts inventory diagram (image of the parts page). The parsed slide text only mentioned caster wheels and clips in the assembly steps, so the full parts list came from the image. The image is clear but some small part numbers are tiny; below I list the parts, quantities and the part numbers that are visible.\n",
       "\n",
       "- 2x long screws (107603)  \n",
       "- 6x large screws/dowels (100214)  \n",
       "- 5x cam screws / binding-post screws (118331)  \n",
       "- 12x threaded connector dowels / cross dowels (100498)  \n",
       "- 4x cylindrical spacers (106986)  \n",
       "- 2x ribbed wooden dowels (101350)  \n",
       "- 4x small screws (100413)  \n",
       "- 4x hex/Allen-head screws (100181)  \n",
       "- 2x wall plugs (111322)  \n",
       "- 2x short screws (109067)  \n",
       "- 12x small wood screws (109560)  \n",
       "- 17x cam lock nuts (102534)  \n",
       "- 4x oval/cover caps (135049 / FRE001)  \n",
       "- 2x metal brackets / wall-mount plates (128985)  \n",
       "- 4x mushroom-shaped plastic pegs / feet (128409 / 128303)  \n",
       "- 1x small Allen key (100001)  \n",
       "- 2x larger Allen keys (108490)  \n",
       "- 2x round shallow plastic bowls (123602 / 123603)  \n",
       "- 2x round deeper plastic bowls (126873 / FRE002)\n",
       "\n",
       "Notes / discrepancies:\n",
       "- The parsed text (markdown) included only partial info (mentions of caster wheels and clips) and did not contain the full inventory. The complete inventory above was taken from the parts-diagram image.  \n",
       "- Some part numbers on the image are very small and I transcribed them as best as they appear; a few numbers may be slightly off due to image resolution."
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from IPython.display import display, Markdown\n",
    "\n",
    "response = query_engine.query(\"What parts are included in the Uppspel?\")\n",
    "display(Markdown(str(response)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "Answer: According to the parsed page text, the Tuffing is depicted as a bunk bed — a simple metal‑frame bunk with safety rails on the top bunk and a ladder in the middle (IKEA logo at the bottom right).\n",
       "\n",
       "Where I got this:\n",
       "- Primary source for the description: the parsed markdown/alt‑text for page 1, which explicitly describes the bunk bed.\n",
       "\n",
       "Discrepancies / notes:\n",
       "- The actual image shown in the attached files (the large drawing with the big FREDDE title) is a different IKEA product (a desk with raised shelves), not the bunk bed described in the parsed text. Page 18’s parsed text shows a person fitting a fabric/mesh over a rectangular frame, and page 37 is a blank/credits page. Because the visual files and the parsed descriptions conflict, I relied on the parsed markdown description for the answer but there is uncertainty — the raw image content does not match that description."
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "response = query_engine.query(\"What does the Tuffing look like?\")\n",
    "display(Markdown(str(response)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "Step 4: Use 4x screws (part numbers 118331 and 112996) to attach the two panels as shown. Insert the screws into the indicated holes and tighten with a screwdriver.\n",
       "\n",
       "Source and notes:\n",
       "- This answer comes from the parsed text for page 6 (the raw parsed instructions).\n",
       "- The accompanying image for page 6, however, shows a close-up of inserting/rotating a cylindrical cam/dowel (labelled 106986), which doesn't visually match the parsed text's described screws/part numbers. Because you asked me to use only the provided context, I reported the parsed-text instruction as step 4 and noted the image/text discrepancy above."
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "response = query_engine.query(\"What is step 4 of assembling the Nordli?\")\n",
    "display(Markdown(str(response)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "Answer: Call IKEA for help (use the phone number on the manual or contact your local IKEA store).\n",
       "\n",
       "Source & reasoning: I read the parsed page text and inspected the image. Both show a confused person with a question mark, then a second panel of a person on the phone holding the instructions with an IKEA store in the background — indicating you should call IKEA. The three parsed variants (smagora, tuffing, uppspel) and the raw image all agree on this instruction, so there are no meaningful discrepancies."
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "response = query_engine.query(\n",
    "    \"What should I do if I'm confused with reading the manual?\"\n",
    ")\n",
    "display(Markdown(str(response)))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
