{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "f4b2c37d-3b5a-47aa-95b9-d28e0bc83f77",
   "metadata": {},
   "source": [
    "# Dynamic Section Retrieval with LlamaParse\n",
    "\n",
    "<a href=\"https://colab.research.google.com/github/run-llama/llama_cloud_services/blob/main/examples/parse/advanced_rag/dynamic_section_retrieval.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
    "\n",
    "This notebook showcases a concept called \"dynamic section retrieval\".\n",
    "\n",
    "A common problem with naive RAG is that documents are hierarchically organized into sections, but standard chunking and retrieval operate on fragments of those sections, so retrieved chunks often miss relevant surrounding context.\n",
    "\n",
    "Dynamic section retrieval attaches each chunk's contiguous section as metadata and uses it at retrieval time, avoiding the problem of retrieving section fragments:\n",
    "1. First, tag each chunk of a long document with the section it corresponds to, through structured extraction.\n",
    "2. Then do two-pass retrieval: after the initial semantic search, dynamically pull in each matching chunk's entire section through metadata filtering.\n",
    "\n",
    "![](dynamic_section_retrieval_img.png)\n",
    "\n",
    "This addresses the common chunking failure mode of retrieving chunks that are only fragments of the section you actually need.\n",
    "\n",
    "Status:\n",
    "| Last Executed | Version | State      |\n",
    "|---------------|---------|------------|\n",
    "| Aug-19-2025   | 0.6.61  | Maintained |"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2e4f707a-c7b5-473f-b4a6-881e2245e82d",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "Install core packages and download relevant files. Here we load some popular ICLR 2024 papers."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9aa458bc-bc8d-46fe-9a57-021dd8d9e525",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install \"llama-index>=0.13.0,<0.14.0\" \"llama-index-vector-stores-chroma>=0.5.1,<0.6.0\"\n",
    "!pip install llama-cloud-services"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "79821400-caaf-42f1-99d8-74c184c19e29",
   "metadata": {},
   "outputs": [],
   "source": [
    "# NOTE: uncomment more papers if you want to do research over a larger subset of docs\n",
    "\n",
    "urls = [\n",
    "    # \"https://openreview.net/pdf?id=VtmBAGCN7o\",\n",
    "    # \"https://openreview.net/pdf?id=6PmJoRfdaK\",\n",
    "    # \"https://openreview.net/pdf?id=LzPWWPAdY4\",\n",
    "    \"https://openreview.net/pdf?id=VTF8yNQM66\",\n",
    "    \"https://openreview.net/pdf?id=hSyW5go0v8\",\n",
    "    # \"https://openreview.net/pdf?id=9WD9KwssyT\",\n",
    "    # \"https://openreview.net/pdf?id=yV6fD7LYkF\",\n",
    "    # \"https://openreview.net/pdf?id=hnrB5YHoYu\",\n",
    "    # \"https://openreview.net/pdf?id=WbWtOYIzIK\",\n",
    "    \"https://openreview.net/pdf?id=c5pwL0Soay\",\n",
    "    # \"https://openreview.net/pdf?id=TpD2aG1h0D\",\n",
    "]\n",
    "\n",
    "papers = [\n",
    "    # \"metagpt.pdf\",\n",
    "    # \"longlora.pdf\",\n",
    "    # \"loftq.pdf\",\n",
    "    \"swebench.pdf\",\n",
    "    \"selfrag.pdf\",\n",
    "    # \"zipformer.pdf\",\n",
    "    # \"values.pdf\",\n",
    "    # \"finetune_fair_diffusion.pdf\",\n",
    "    # \"knowledge_card.pdf\",\n",
    "    \"metra.pdf\",\n",
    "    # \"vr_mcl.pdf\",\n",
    "]\n",
    "\n",
    "data_dir = \"iclr_docs\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "80137d15-f22b-47eb-adce-ac295ced7e71",
   "metadata": {},
   "outputs": [],
   "source": [
    "!mkdir -p \"{data_dir}\"\n",
    "for url, paper in zip(urls, papers):\n",
    "    !wget \"{url}\" -O \"{data_dir}/{paper}\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "974ce0a5-931a-4c1f-b8f3-af670c08eb0f",
   "metadata": {},
   "source": [
    "#### Define LLM and Embedding Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "75a05e99-56e2-4db9-baae-f9401100dcc3",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core import Settings\n",
    "from llama_index.llms.openai import OpenAI\n",
    "from llama_index.embeddings.openai import OpenAIEmbedding\n",
    "\n",
    "embed_model = OpenAIEmbedding(model=\"text-embedding-3-large\", api_key=\"sk-...\")\n",
    "llm = OpenAI(model=\"gpt-5-mini\", api_key=\"sk-...\")\n",
    "\n",
    "Settings.embed_model = embed_model\n",
    "Settings.llm = llm"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2f16859f-c69e-4edf-acb6-0a5a0784275a",
   "metadata": {},
   "source": [
    "#### Parse Documents"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d6cd2cc9-673f-4f53-81fb-cc990950d345",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_cloud_services import LlamaParse\n",
    "\n",
    "parser = LlamaParse(\n",
    "    parse_mode=\"parse_page_with_agent\",\n",
    "    model=\"openai-gpt-4-1-mini\",\n",
    "    high_res_ocr=True,\n",
    "    adaptive_long_table=True,\n",
    "    outlined_table_extraction=True,\n",
    "    output_tables_as_HTML=True,\n",
    "    api_key=\"llx-...\",\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f9d6f0e8-323e-4786-a4a8-e393441ecd61",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Getting job results:   0%|          | 0/3 [00:00<?, ?it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Started parsing the file under job_id d8f0df2d-5b55-4e4f-bbe9-81cf4b8a4782\n",
      "Started parsing the file under job_id 6aef247f-f548-43f5-9ddb-cf8ba8373130\n",
      "Started parsing the file under job_id 5c1c4baf-fa43-4ed4-b671-16c45f99461c\n",
      "..."
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Getting job results:  67%|██████▋   | 2/3 [01:40<00:46, 46.97s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "....."
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Getting job results: 100%|██████████| 3/3 [05:49<00:00, 116.59s/it]\n"
     ]
    }
   ],
   "source": [
    "from pathlib import Path\n",
    "\n",
    "paths_to_parse = []\n",
    "for paper_path in papers:\n",
    "    full_paper_path = str(Path(data_dir) / paper_path)\n",
    "    paths_to_parse.append(full_paper_path)\n",
    "\n",
    "\n",
    "results = await parser.aparse(paths_to_parse)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2d52878b-aabf-418e-a4c7-9903a77dd8c8",
   "metadata": {},
   "source": [
    "#### Get Text Nodes\n",
    "\n",
    "Using each result object, we can create a list of text nodes with metadata attached."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "346fe5ef-171e-4a54-9084-7a7805103a13",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.schema import TextNode\n",
    "\n",
    "\n",
    "# build one text node per parsed page, with page/paper metadata attached\n",
    "def get_text_nodes(result):\n",
    "    \"\"\"Create one TextNode per page of a parse result, with metadata.\"\"\"\n",
    "    nodes = []\n",
    "\n",
    "    md_texts = [page.md for page in result.pages]\n",
    "\n",
    "    for idx, md_text in enumerate(md_texts):\n",
    "        chunk_metadata = {\n",
    "            \"page_num\": idx + 1,\n",
    "            \"paper_path\": result.file_name,\n",
    "        }\n",
    "        node = TextNode(\n",
    "            text=md_text,\n",
    "            metadata=chunk_metadata,\n",
    "        )\n",
    "        nodes.append(node)\n",
    "\n",
    "    return nodes"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f591669c-5a8e-491d-9cef-0b754abbf26f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# this will combine all nodes from all papers into a single list\n",
    "all_text_nodes = []\n",
    "text_nodes_dict = {}\n",
    "for result in results:\n",
    "    text_nodes = get_text_nodes(result)\n",
    "    all_text_nodes.extend(text_nodes)\n",
    "    text_nodes_dict[result.file_name] = text_nodes"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2e8fb9df",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "106\n"
     ]
    }
   ],
   "source": [
    "print(len(all_text_nodes))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3b25f253-3aa0-4689-be6e-d0c722b8b48c",
   "metadata": {},
   "source": [
    "## Add Section Metadata\n",
    "\n",
    "The first step is to extract a map of all sections from the text of each document. We create a workflow that extracts any section headings present on each page and merges them into a combined list. We then run a reflection step to review and correct the extracted sections.\n",
    "\n",
    "Once we have a map of all the sections and the page numbers they start at, we can add the appropriate section ID as metadata to each chunk."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d8fdb689-cc94-4da2-ba12-9267e8ee8623",
   "metadata": {},
   "source": [
    "#### Define Section Schema to Extract Into\n",
    "\n",
    "Here we define the output schema into which we extract the section metadata. This gives us a full table of contents for each document."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "66358783-1d7f-489d-a85b-35bcb9620912",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pydantic import BaseModel, Field\n",
    "from typing import List, Optional\n",
    "\n",
    "\n",
    "class SectionOutput(BaseModel):\n",
    "    \"\"\"The metadata for a given section. Includes the section name, title, page that it starts on, and more.\"\"\"\n",
    "\n",
    "    section_name: str = Field(\n",
    "        ..., description=\"The current section number (e.g. section_name='3.2')\"\n",
    "    )\n",
    "    section_title: str = Field(\n",
    "        ...,\n",
    "        description=\"The current section title associated with the number (e.g. section_title='Experimental Results')\",\n",
    "    )\n",
    "\n",
    "    start_page_number: int = Field(..., description=\"The start page number.\")\n",
    "    is_subsection: bool = Field(\n",
    "        ...,\n",
    "        description=\"True if it's a subsection (e.g. Section 3.2). False if it's not a subsection (e.g. Section 3)\",\n",
    "    )\n",
    "    description: Optional[str] = Field(\n",
    "        None,\n",
    "        description=\"The extracted line from the source text that indicates this is a relevant section.\",\n",
    "    )\n",
    "\n",
    "    def get_section_id(self):\n",
    "        \"\"\"Get section id.\"\"\"\n",
    "        return f\"{self.section_name}: {self.section_title}\"\n",
    "\n",
    "\n",
    "class SectionsOutput(BaseModel):\n",
    "    \"\"\"A list of all sections.\"\"\"\n",
    "\n",
    "    sections: List[SectionOutput]\n",
    "\n",
    "\n",
    "class ValidSections(BaseModel):\n",
    "    \"\"\"A list of indexes, each corresponding to a valid section.\"\"\"\n",
    "\n",
    "    valid_indexes: List[int] = Field(\n",
    "        ...,\n",
    "        description=\"List of valid section indexes. Do NOT include sections to remove.\",\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bff90c77-f92e-4c5e-a441-70f81adb68fb",
   "metadata": {},
   "source": [
    "#### Extract into Section Outputs\n",
    "\n",
    "Use LlamaIndex structured output capabilities to iterate through each page and extract relevant section metadata. Note: some pages may yield no section metadata (no section begins on that page)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dcfcd3a6-4739-4624-a6ed-678e41119575",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.llms.openai import OpenAI\n",
    "from llama_index.core.prompts import ChatPromptTemplate, ChatMessage\n",
    "from llama_index.core.llms import LLM\n",
    "from llama_index.core.async_utils import run_jobs, asyncio_run\n",
    "import json\n",
    "\n",
    "\n",
    "async def aget_sections(\n",
    "    doc_text: str, llm: Optional[LLM] = None\n",
    ") -> List[SectionOutput]:\n",
    "    \"\"\"Get extracted sections from a provided text.\"\"\"\n",
    "\n",
    "    system_prompt = \"\"\"\\\n",
    "    You are an AI document assistant tasked with extracting out section metadata from a document text. \n",
    "    \n",
    "- You should ONLY extract out metadata if the document text contains the beginning of a section.\n",
    "- The metadata schema is listed below - you should extract out the section_name, section_title, start page number, description.\n",
    "- A valid section MUST begin with a hashtag (#) and have a number (e.g. \"1 Introduction\" or \"Section 1 Introduction\"). \\\n",
    "Note: Not all hashtag (#) lines are valid sections. \n",
    "\n",
    "- You can extract out multiple section metadata if there are multiple sections on the page. \n",
    "- If there are no sections that begin in this document text, do NOT extract out any sections. \n",
    "- A valid section MUST be clearly delineated in the document text. Do NOT extract out a section if it is mentioned, \\\n",
    "but is not actually the start of a section in the document text.\n",
    "- A Figure or Table does NOT count as a section.\n",
    "    \n",
    "    The user will give the document text below.\n",
    "    \n",
    "    \"\"\"\n",
    "    llm = llm or OpenAI(model=\"gpt-5-mini\", api_key=\"sk-...\")\n",
    "    sllm = llm.as_structured_llm(SectionsOutput)\n",
    "\n",
    "    messages = [\n",
    "        ChatMessage(content=system_prompt, role=\"system\"),\n",
    "        ChatMessage(content=f\"Document text: {doc_text}\", role=\"user\"),\n",
    "    ]\n",
    "    result = await sllm.achat(messages)\n",
    "    return result.raw.sections\n",
    "\n",
    "\n",
    "async def arefine_sections(\n",
    "    sections: List[SectionOutput], llm: Optional[LLM] = None\n",
    ") -> List[SectionOutput]:\n",
    "    \"\"\"Refine sections based on extracted text.\"\"\"\n",
    "\n",
    "    system_prompt = \"\"\"\\\n",
    "    You are an AI review assistant tasked with reviewing and correcting another agent's work in extracting sections from a document.\n",
    "\n",
    "    Below is the list of sections with indexes. The sections may be incorrect in the following manner:\n",
    "    - There may be false positive sections - some sections may be wrongly extracted - you can tell by the sequential order of the rest of the sections\n",
    "    - Some sections may be incorrectly marked as subsections and vice-versa\n",
    "    - You can use the description which contains extracted text from the source document to see if it actually qualifies as a section.\n",
    "\n",
    "    Given this, return the list of indexes that are valid. Do NOT include the indexes to be removed.\n",
    "    \n",
    "    \"\"\"\n",
    "    llm = llm or OpenAI(model=\"gpt-5-mini\", api_key=\"sk-...\")\n",
    "    sllm = llm.as_structured_llm(ValidSections)\n",
    "\n",
    "    section_texts = \"\\n\".join(\n",
    "        [f\"{idx}: {json.dumps(s.model_dump())}\" for idx, s in enumerate(sections)]\n",
    "    )\n",
    "\n",
    "    messages = [\n",
    "        ChatMessage(content=system_prompt, role=\"system\"),\n",
    "        ChatMessage(content=f\"Sections in text:\\n\\n{section_texts}\", role=\"user\"),\n",
    "    ]\n",
    "\n",
    "    result = await sllm.achat(messages)\n",
    "    valid_indexes = result.raw.valid_indexes\n",
    "\n",
    "    new_sections = [s for idx, s in enumerate(sections) if idx in valid_indexes]\n",
    "    return new_sections\n",
    "\n",
    "\n",
    "async def acreate_sections(text_nodes_dict):\n",
    "    sections_dict = {}\n",
    "    for paper_path, text_nodes in text_nodes_dict.items():\n",
    "        all_sections = []\n",
    "\n",
    "        tasks = [aget_sections(n.get_content(metadata_mode=\"all\")) for n in text_nodes]\n",
    "\n",
    "        async_results = await run_jobs(tasks, workers=8, show_progress=True)\n",
    "        all_sections = [s for r in async_results for s in r]\n",
    "\n",
    "        all_sections = await arefine_sections(all_sections)\n",
    "        sections_dict[paper_path] = all_sections\n",
    "    return sections_dict"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6e360a5c-29bd-4d86-9a21-f46013bab39a",
   "metadata": {},
   "outputs": [],
   "source": [
    "sections_dict = asyncio_run(acreate_sections(text_nodes_dict))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d930f0e5-5295-46b0-b54b-e2da4fb25fe5",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[SectionOutput(section_name='1', section_title='Introduction', start_page_number=1, is_subsection=False, description='## 1 Introduction'),\n",
       " SectionOutput(section_name='2.2', section_title='TASK FORMULATION', start_page_number=3, is_subsection=True, description='## 2.2 TASK FORMULATION'),\n",
       " SectionOutput(section_name='2.3', section_title='FEATURES OF SWE-BENCH', start_page_number=3, is_subsection=True, description='## 2.3 FEATURES OF SWE-BENCH'),\n",
       " SectionOutput(section_name='3', section_title='SWE-LLAMA: FINE-TUNING CODELLAMA FOR SWE-BENCH', start_page_number=3, is_subsection=False, description='## 3 SWE-LLAMA: FINE-TUNING CODELLAMA FOR SWE-BENCH'),\n",
       " SectionOutput(section_name='4', section_title='EXPERIMENTAL SETUP', start_page_number=4, is_subsection=False, description='# 4 EXPERIMENTAL SETUP'),\n",
       " SectionOutput(section_name='4.1', section_title='RETRIEVAL-BASED APPROACH', start_page_number=4, is_subsection=True, description='## 4.1 RETRIEVAL-BASED APPROACH'),\n",
       " SectionOutput(section_name='4.2', section_title='INPUT FORMAT', start_page_number=5, is_subsection=True, description='## 4.2 INPUT FORMAT'),\n",
       " SectionOutput(section_name='4.3', section_title='MODELS', start_page_number=5, is_subsection=True, description='## 4.3 MODELS'),\n",
       " SectionOutput(section_name='5', section_title='RESULTS', start_page_number=5, is_subsection=False, description='# 5 RESULTS'),\n",
       " SectionOutput(section_name='5.1', section_title='A QUALITATIVE ANALYSIS OF SWE-LLAMA GENERATIONS', start_page_number=8, is_subsection=True, description='# 5.1 A QUALITATIVE ANALYSIS OF SWE-LLAMA GENERATIONS'),\n",
       " SectionOutput(section_name='6', section_title='RELATED WORK', start_page_number=8, is_subsection=False, description='# 6 RELATED WORK'),\n",
       " SectionOutput(section_name='8', section_title='ETHICS STATEMENT', start_page_number=10, is_subsection=False, description='# 8 ETHICS STATEMENT'),\n",
       " SectionOutput(section_name='9', section_title='REPRODUCIBILITY STATEMENT', start_page_number=10, is_subsection=False, description='# 9 REPRODUCIBILITY STATEMENT'),\n",
       " SectionOutput(section_name='10', section_title='ACKNOWLEDGEMENTS', start_page_number=10, is_subsection=False, description='# 10 ACKNOWLEDGEMENTS'),\n",
       " SectionOutput(section_name='A.1', section_title='HIGH LEVEL OVERVIEW', start_page_number=15, is_subsection=True, description='### A.1 HIGH LEVEL OVERVIEW'),\n",
       " SectionOutput(section_name='A.2', section_title='CONSTRUCTION PROCESS', start_page_number=16, is_subsection=True, description='## A.2 CONSTRUCTION PROCESS'),\n",
       " SectionOutput(section_name='A.3', section_title='EXECUTION-BASED VALIDATION', start_page_number=18, is_subsection=True, description='### A.3 EXECUTION-BASED VALIDATION'),\n",
       " SectionOutput(section_name='A.4', section_title='EVALUATION PROCEDURE', start_page_number=19, is_subsection=True, description='## A.4 EVALUATION PROCEDURE'),\n",
       " SectionOutput(section_name='A.5', section_title='EVALUATION TEST SET CHARACTERIZATION', start_page_number=20, is_subsection=True, description='## A.5 EVALUATION TEST SET CHARACTERIZATION'),\n",
       " SectionOutput(section_name='A.6', section_title='DEVELOPMENT SET CHARACTERIZATION', start_page_number=23, is_subsection=True, description='## A.6 DEVELOPMENT SET CHARACTERIZATION'),\n",
       " SectionOutput(section_name='B.1', section_title='TRAINING DETAILS', start_page_number=24, is_subsection=True, description='## B.1 TRAINING DETAILS'),\n",
       " SectionOutput(section_name='C.1', section_title='RESULTS WITH “ORACLE” RETRIEVAL', start_page_number=24, is_subsection=True, description='## C.1 RESULTS WITH “ORACLE” RETRIEVAL'),\n",
       " SectionOutput(section_name='C.2', section_title='EVALUATION TEST SET', start_page_number=24, is_subsection=True, description='## C.2 EVALUATION TEST SET'),\n",
       " SectionOutput(section_name='C.3', section_title='GPT-4 EVALUATION SUBSET RESULTS', start_page_number=24, is_subsection=True, description='## C.3 GPT-4 EVALUATION SUBSET RESULTS'),\n",
       " SectionOutput(section_name='C.4', section_title='EXTENDED TEMPORAL ANALYSIS', start_page_number=25, is_subsection=True, description='## C.4 EXTENDED TEMPORAL ANALYSIS'),\n",
       " SectionOutput(section_name='C.5', section_title='F2P, P2P RATE ANALYSIS', start_page_number=25, is_subsection=True, description='## C.5 F2P, P2P RATE ANALYSIS'),\n",
       " SectionOutput(section_name='C.7', section_title='SOFTWARE ENGINEERING METRICS', start_page_number=27, is_subsection=True, description='## C.7 SOFTWARE ENGINEERING METRICS'),\n",
       " SectionOutput(section_name='D.1', section_title='RETRIEVAL DETAILS', start_page_number=28, is_subsection=True, description='## D.1 RETRIEVAL DETAILS'),\n",
       " SectionOutput(section_name='D.2', section_title='INFERENCE SETTINGS', start_page_number=29, is_subsection=True, description='## D.2 INFERENCE SETTINGS'),\n",
       " SectionOutput(section_name='D.3', section_title='PROMPT TEMPLATE EXAMPLE', start_page_number=29, is_subsection=True, description='## D.3 PROMPT TEMPLATE EXAMPLE')]"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sections_dict[\"iclr_docs/swebench.pdf\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8c6f237b-df2a-4d9e-91bf-b0bbb88ef183",
   "metadata": {},
   "outputs": [],
   "source": [
    "# [Optional] SAVE\n",
    "import pickle\n",
    "\n",
    "pickle.dump(sections_dict, open(\"sections_dict.pkl\", \"wb\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7497b614-250e-4f3e-8940-b361996a00b6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# [Optional] LOAD\n",
    "sections_dict = pickle.load(open(\"sections_dict.pkl\", \"rb\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "28b01141-d9c1-424c-937a-8707867180b1",
   "metadata": {},
   "source": [
    "#### Annotate each chunk with the section metadata\n",
    "\n",
    "In the section above we've extracted a TOC of all sections/subsections and their start page numbers. Given this, we can do one forward pass through the chunks and annotate each with the section it corresponds to (i.e. the section/subsection with the highest start page number that is less than or equal to the chunk's page number)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "38133abe-7800-424a-9259-71df6d154d31",
   "metadata": {},
   "outputs": [],
   "source": [
    "def annotate_chunks_with_sections(chunks, sections):\n",
    "    main_sections = [s for s in sections if not s.is_subsection]\n",
    "    # subsections include the main sections too (some sections have no subsections etc.)\n",
    "    sub_sections = sections\n",
    "\n",
    "    main_section_idx, sub_section_idx = 0, 0\n",
    "    for idx, c in enumerate(chunks):\n",
    "        cur_page = c.metadata[\"page_num\"]\n",
    "        while (\n",
    "            main_section_idx + 1 < len(main_sections)\n",
    "            and main_sections[main_section_idx + 1].start_page_number <= cur_page\n",
    "        ):\n",
    "            main_section_idx += 1\n",
    "        while (\n",
    "            sub_section_idx + 1 < len(sub_sections)\n",
    "            and sub_sections[sub_section_idx + 1].start_page_number <= cur_page\n",
    "        ):\n",
    "            sub_section_idx += 1\n",
    "\n",
    "        cur_main_section = main_sections[main_section_idx]\n",
    "        cur_sub_section = sub_sections[sub_section_idx]\n",
    "\n",
    "        c.metadata[\"section_id\"] = cur_main_section.get_section_id()\n",
    "        c.metadata[\"sub_section_id\"] = cur_sub_section.get_section_id()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d125b0c0-acb0-4f56-9ef3-f06d452ae3cd",
   "metadata": {},
   "outputs": [],
   "source": [
    "for paper_path, text_nodes in text_nodes_dict.items():\n",
    "    sections = sections_dict[paper_path]\n",
    "    annotate_chunks_with_sections(text_nodes, sections)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b1ab80d2-cac4-417d-aaac-7ea9dfed49f7",
   "metadata": {},
   "source": [
    "You can choose to save these nodes if you'd like."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2272ae05-89f6-46a9-9b9f-915e15908128",
   "metadata": {},
   "outputs": [],
   "source": [
    "# SAVE\n",
    "import pickle\n",
    "\n",
    "pickle.dump(text_nodes_dict, open(\"iclr_text_nodes.pkl\", \"wb\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8ebf0173-af45-4fae-aca4-2ceb266f8357",
   "metadata": {},
   "source": [
    "**LOAD**: If you've already saved nodes, run the below cell to load from an existing file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c1e5425b-4872-47b3-86f5-f6a068788a2b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# LOAD\n",
    "import pickle\n",
    "\n",
    "text_nodes_dict = pickle.load(open(\"iclr_text_nodes.pkl\", \"rb\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "642e90f1-1d32-4925-b37d-2af8a0ca9712",
   "metadata": {},
   "outputs": [],
   "source": [
    "all_text_nodes = []\n",
    "for paper_path, text_nodes in text_nodes_dict.items():\n",
    "    all_text_nodes.extend(text_nodes)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8d7b566b-5ec1-4e49-b4d4-e863af2aabc6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "106"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(all_text_nodes)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d03a4de3-39ce-40c3-b37b-b6bbc597ddb1",
   "metadata": {},
   "source": [
    "### Build Indexes\n",
    "\n",
    "Once the text nodes are ready, we feed them into our vector store index abstraction, which indexes them into a Chroma vector store (of course, you should definitely check out our 40+ other vector store integrations!)\n",
    "\n",
    "Besides vector indexing, we **also** keep a mapping of each paper path to its text nodes. This allows us to perform document-level retrieval - retrieving all chunks belonging to a given document."
   ]
  },
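  {
   "cell_type": "markdown",
   "id": "doc-level-retrieval-sketch",
   "metadata": {},
   "source": [
    "As a rough sketch (not part of the pipeline above, and `get_document_nodes` is a hypothetical name), document-level retrieval against the `text_nodes_dict` mapping built earlier could look like:\n",
    "\n",
    "```python\n",
    "def get_document_nodes(text_nodes_dict, paper_path):\n",
    "    # Hypothetical helper: return every chunk for a single paper,\n",
    "    # sorted by page number, for whole-document context assembly.\n",
    "    nodes = text_nodes_dict.get(paper_path, [])\n",
    "    return sorted(nodes, key=lambda n: n.metadata['page_num'])\n",
    "```"
   ]
  },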
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "add64e3e-12df-4d5a-beba-b3018325e15b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.vector_stores.chroma import ChromaVectorStore\n",
    "from llama_index.core import VectorStoreIndex\n",
    "\n",
    "persist_dir = \"chroma_storage\"\n",
    "\n",
    "vector_store = ChromaVectorStore.from_params(\n",
    "    collection_name=\"text_nodes\", persist_dir=persist_dir\n",
    ")\n",
    "index = VectorStoreIndex.from_vector_store(vector_store)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1e46583a-6c6b-4a5e-b78a-d06721ae7d1c",
   "metadata": {},
   "source": [
    "**NOTE**: Don't run the block below if you've already inserted the nodes. Only run if it's your first time!!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9777f302-699a-4417-99b8-2be4e7cd60f5",
   "metadata": {},
   "outputs": [],
   "source": [
    "index.insert_nodes(all_text_nodes)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d46f14ff-45b1-41f4-84e2-a6e5d6637809",
   "metadata": {},
   "source": [
    "## Set Up Dynamic, Section-Level Retrieval\n",
    "\n",
    "We now set up a retriever that returns an entire contiguous section of a document instead of an isolated chunk. This preserves the full context of the section.\n",
    "\n",
    "- Step 1: Do chunk-level retrieval to find the relevant chunks.\n",
    "- Step 2: For each chunk, identify the section it corresponds to.\n",
    "- Step 3: Do a second retrieval pass using metadata filters to fetch every node of that section, returned in contiguous page order.\n",
    "- Step 4: Feed the contiguous sections into the LLM."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "652cb067-da39-42cb-a303-faa346f72e13",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.llms.openai import OpenAI\n",
    "\n",
    "llm = OpenAI(model=\"gpt-5-mini\", api_key=\"sk-...\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "253f0c57-f5b4-4dbd-a0a0-62a42bd5bbdc",
   "metadata": {},
   "outputs": [],
   "source": [
    "chunk_retriever = index.as_retriever(similarity_top_k=3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b0a564cb-bfdb-48a5-9d67-10390c3a6c28",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.vector_stores.types import (\n",
    "    VectorStoreInfo,\n",
    "    VectorStoreQuerySpec,\n",
    "    MetadataInfo,\n",
    "    MetadataFilters,\n",
    "    FilterCondition,\n",
    ")\n",
    "from llama_index.core.schema import NodeWithScore\n",
    "from typing import List\n",
    "\n",
    "\n",
    "def section_retrieve(query: str, verbose: bool = False) -> List[NodeWithScore]:\n",
    "    \"\"\"Retrieve sections.\"\"\"\n",
    "    if verbose:\n",
    "        print(\">> Identifying the right sections to retrieve\")\n",
    "    chunk_nodes = chunk_retriever.retrieve(query)\n",
    "\n",
    "    all_section_nodes = {}\n",
    "    for node in chunk_nodes:\n",
    "        section_id = node.node.metadata[\"section_id\"]\n",
    "        if verbose:\n",
    "            print(f\">> Retrieving section: {section_id}\")\n",
    "        filters = MetadataFilters.from_dicts(\n",
    "            [\n",
    "                {\"key\": \"section_id\", \"value\": section_id, \"operator\": \"==\"},\n",
    "                {\n",
    "                    \"key\": \"paper_path\",\n",
    "                    \"value\": node.node.metadata[\"paper_path\"],\n",
    "                    \"operator\": \"==\",\n",
    "                },\n",
    "            ],\n",
    "            condition=FilterCondition.AND,\n",
    "        )\n",
    "\n",
    "        section_nodes_raw = index.vector_store.get_nodes(filters=filters)\n",
    "        section_nodes = [NodeWithScore(node=n) for n in section_nodes_raw]\n",
    "        # order nodes by page number so each section reads contiguously\n",
    "        section_nodes_sorted = sorted(\n",
    "            section_nodes, key=lambda x: x.metadata[\"page_num\"]\n",
    "        )\n",
    "\n",
    "        all_section_nodes.update({n.id_: n for n in section_nodes_sorted})\n",
    "    return list(all_section_nodes.values())"
   ]
  },
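  {
   "cell_type": "markdown",
   "id": "c3a9f1e2-1b2d-4c5e-9f10-2e3a4b5c6d7e",
   "metadata": {},
   "source": [
    "The section-expansion step above boils down to: for each section hit by chunk search, pull in all chunks tagged with that `section_id`, dedupe by node id, and keep them ordered by page number. A minimal pure-Python sketch of that core logic — plain dicts stand in for LlamaIndex nodes, and `mock_store` and its fields are illustrative, not part of the library API:\n",
    "\n",
    "```python\n",
    "# Mock 'vector store': every chunk of the document, tagged with its section.\n",
    "mock_store = [\n",
    "    {'id': 'n1', 'section_id': '5: EXPERIMENTS', 'page_num': 7},\n",
    "    {'id': 'n2', 'section_id': '5: EXPERIMENTS', 'page_num': 6},\n",
    "    {'id': 'n3', 'section_id': '6: Conclusion', 'page_num': 9},\n",
    "]\n",
    "\n",
    "\n",
    "def expand_sections(hit_section_ids):\n",
    "    collected = {}\n",
    "    for section_id in hit_section_ids:\n",
    "        chunks = [c for c in mock_store if c['section_id'] == section_id]\n",
    "        for chunk in sorted(chunks, key=lambda c: c['page_num']):\n",
    "            collected[chunk['id']] = chunk  # dedupe repeated hits by node id\n",
    "    return list(collected.values())\n",
    "\n",
    "\n",
    "# Two chunk hits in the same section expand that section exactly once.\n",
    "print([n['page_num'] for n in expand_sections(['5: EXPERIMENTS', '5: EXPERIMENTS'])])\n",
    "# [6, 7]\n",
    "```"
   ]
  },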
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "509de5ae-4d51-4b39-b67e-698cb84acd73",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      ">> Identifying the right sections to retrieve\n",
      ">> Retrieving section: 6: Conclusion\n",
      ">> Retrieving section: 5: EXPERIMENTS\n",
      ">> Retrieving section: 5: EXPERIMENTS\n"
     ]
    }
   ],
   "source": [
    "nodes = section_retrieve(\n",
    "    \"Give me details of all additional experimental results in the Metra paper\",\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "db64a838-5f19-46e0-b874-859a125f8dcd",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'page_num': 9, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '6: Conclusion', 'sub_section_id': '6: Conclusion'}\n",
      "{'page_num': 10, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '6: Conclusion', 'sub_section_id': '6: Conclusion'}\n",
      "{'page_num': 11, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '6: Conclusion', 'sub_section_id': '6: Conclusion'}\n",
      "{'page_num': 12, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '6: Conclusion', 'sub_section_id': '6: Conclusion'}\n",
      "{'page_num': 13, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '6: Conclusion', 'sub_section_id': '6: Conclusion'}\n",
      "{'page_num': 14, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '6: Conclusion', 'sub_section_id': '6: Conclusion'}\n",
      "{'page_num': 15, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '6: Conclusion', 'sub_section_id': '6: Conclusion'}\n",
      "{'page_num': 16, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '6: Conclusion', 'sub_section_id': '6: Conclusion'}\n",
      "{'page_num': 17, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '6: Conclusion', 'sub_section_id': 'C.1: Universality of Inner Product Decomposition'}\n",
      "{'page_num': 18, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '6: Conclusion', 'sub_section_id': 'C.2: Lipschitz Constraint under the Temporal Distance Metric'}\n",
      "{'page_num': 19, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '6: Conclusion', 'sub_section_id': 'C.2: Lipschitz Constraint under the Temporal Distance Metric'}\n",
      "{'page_num': 20, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '6: Conclusion', 'sub_section_id': 'E.2: DADS'}\n",
      "{'page_num': 21, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '6: Conclusion', 'sub_section_id': 'F.1: FULL QUALITATIVE RESULTS'}\n",
      "{'page_num': 22, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '6: Conclusion', 'sub_section_id': 'F.4: ADDITIONAL BASELINES'}\n",
      "{'page_num': 23, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '6: Conclusion', 'sub_section_id': 'G.1: Environments'}\n",
      "{'page_num': 24, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '6: Conclusion', 'sub_section_id': 'G.2: IMPLEMENTATION DETAILS'}\n",
      "{'page_num': 25, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '6: Conclusion', 'sub_section_id': 'G.2: IMPLEMENTATION DETAILS'}\n",
      "{'page_num': 6, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '5: EXPERIMENTS', 'sub_section_id': '5: EXPERIMENTS'}\n",
      "{'page_num': 7, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '5: EXPERIMENTS', 'sub_section_id': '5.2: QUALITATIVE COMPARISON'}\n",
      "{'page_num': 8, 'paper_path': 'iclr_docs/metra.pdf', 'section_id': '5: EXPERIMENTS', 'sub_section_id': '5.3: Quantitative Comparison'}\n"
     ]
    }
   ],
   "source": [
    "for n in nodes:\n",
    "    print(n.node.metadata)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d67303e6-ec65-499b-85bb-8189d220b466",
   "metadata": {},
   "source": [
    "### Try out Section-Level Retrieval as a Full RAG Pipeline\n",
    "\n",
    "Now that we've defined the retriever, we can plug the retrieved results into an LLM to create a full RAG pipeline!\n",
    "\n",
    "The response synthesizer takes care of packing the retrieved context into the LLM prompt while respecting context-window limits."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bb382809-d38e-4f03-bf26-6e1bf0d98df6",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.query_engine import CustomQueryEngine\n",
    "from llama_index.core.response_synthesizers import TreeSummarize, BaseSynthesizer\n",
    "\n",
    "\n",
    "class SectionRetrieverRAGEngine(CustomQueryEngine):\n",
    "    \"\"\"RAG Query Engine.\"\"\"\n",
    "\n",
    "    synthesizer: BaseSynthesizer\n",
    "    verbose: bool = True\n",
    "\n",
    "    def __init__(self, **kwargs):\n",
    "        # `llm` is the model defined earlier in the notebook\n",
    "        super().__init__(synthesizer=TreeSummarize(llm=llm), **kwargs)\n",
    "\n",
    "    def custom_query(self, query_str: str):\n",
    "        nodes = section_retrieve(query_str, verbose=self.verbose)\n",
    "        response_obj = self.synthesizer.synthesize(query_str, nodes)\n",
    "        return response_obj"
   ]
  },
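  {
   "cell_type": "markdown",
   "id": "e7d2c4b6-5a3f-4e8d-b1c9-0f1a2b3c4d5e",
   "metadata": {},
   "source": [
    "`TreeSummarize` synthesizes answers bottom-up: it summarizes groups of chunks, then summarizes those summaries, until a single response remains — which is what lets whole retrieved sections exceed a single prompt window. A toy sketch of that recursive merge pattern, with a stub `combine` function standing in for the LLM call (names here are illustrative, not the library's internals):\n",
    "\n",
    "```python\n",
    "def tree_summarize(texts, combine, fanout=2):\n",
    "    # Merge groups of `fanout` texts repeatedly until one summary remains.\n",
    "    while len(texts) > 1:\n",
    "        texts = [combine(texts[i : i + fanout]) for i in range(0, len(texts), fanout)]\n",
    "    return texts[0]\n",
    "\n",
    "\n",
    "# Stub 'LLM' that just joins its inputs; a real one would summarize them.\n",
    "print(tree_summarize(['a', 'b', 'c', 'd', 'e'], lambda g: ' + '.join(g)))\n",
    "# a + b + c + d + e\n",
    "```"
   ]
  },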
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "426d9426-a145-4f50-ad37-4dd82b5c7ae8",
   "metadata": {},
   "outputs": [],
   "source": [
    "query_engine = SectionRetrieverRAGEngine()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d1ec3f98-7181-4850-8b37-1e0aa751bf54",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      ">> Identifying the right sections to retrieve\n",
      ">> Retrieving section: 5: RESULTS\n",
      ">> Retrieving section: 3: SWE-LLAMA: FINE-TUNING CODELLAMA FOR SWE-BENCH\n",
      ">> Retrieving section: 4: EXPERIMENTAL SETUP\n",
      "Key findings about how difficulty correlates with context length\n",
      "\n",
      "- Performance falls as total input/context size grows. As the amount of code and other context provided to models increases, their ability to localize and produce correct edits drops noticeably (this behavior was observed across multiple models, e.g., Claude 2 and others).\n",
      "\n",
      "- Extra (irrelevant) context distracts models. When models are given a lot of code that is unrelated to the actual edit, they frequently struggle to find the problematic lines that need changing. This sensitivity includes the relative location of the target code within the larger context.\n",
      "\n",
      "- Increasing retriever recall doesn't fix it. Expanding retrieval windows (to include more files and therefore raise oracle recall) can actually hurt end-to-end performance because models become less effective at pinpointing the needed edits amid the extra material.\n",
      "\n",
      "- Collapsing context around the true edits helps. An ablation that collapses retrieved files to only the lines actually modified in the reference patch (±15 lines) improved results — for example, one model’s resolved rate rose from 4.8% to 5.9%, and another increased from ~1.3% to 3.4% — showing that concentrating context on the most relevant snippets makes the task easier.\n",
      "\n",
      "- Finetuned models are sensitive to context-distribution shifts. Models fine-tuned on tightly scoped (oracle) contexts performed worse when given BM25-retrieved context that contained many irrelevant files, indicating that training with one style of context can reduce robustness to different retrieval outputs.\n",
      "\n",
      "Implications\n",
      "- Better retrieval or context-compression methods (e.g., more precise retrieval, collapsing to edited regions, or preprocessing to highlight likely relevant locations) are likely more useful than simply increasing context size.\n",
      "- Robust model behavior requires not just larger windows but mechanisms for localization and filtering of relevant code within long contexts.\n"
     ]
    }
   ],
   "source": [
    "response = query_engine.query(\n",
    "    \"Tell me more about how difficulty correlates with context length in SWEBench\"\n",
    ")\n",
    "print(str(response))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "483f5615-ab58-4bc7-968b-7a9e116756e1",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      ">> Identifying the right sections to retrieve\n",
      ">> Retrieving section: 10: ACKNOWLEDGEMENTS\n",
      ">> Retrieving section: 1: Introduction\n",
      ">> Retrieving section: 3: SWE-LLAMA: FINE-TUNING CODELLAMA FOR SWE-BENCH\n",
      "High-level summary\n",
      "- SWE-bench is a repository-scale, execution-validated benchmark of real GitHub issues paired with merged pull-request solutions. Each task gives a snapshot of a real codebase plus an issue description; the model must produce a patch that, when applied, makes the repository pass the tests that verify the issue was addressed.\n",
      "- The benchmark emphasizes realistic, hard software-engineering problems: large codebases, multi-file edits, long issue descriptions, and unit tests used for automatic verification.\n",
      "\n",
      "Data sources and collection\n",
      "- Candidate PRs are sourced from popular Python projects (selected from highly downloaded PyPI packages and mapped to their GitHub repositories). Repositories are filtered to ensure permissible licenses.\n",
      "- Pull requests are collected via the GitHub API and then filtered automatically.\n",
      "\n",
      "Task-instance selection criteria\n",
      "A PR becomes a candidate task only if it satisfies all of:\n",
      "- Status = merged (the PR was accepted).\n",
      "- The PR resolves one or more GitHub issues (detected via links like “fixes #N” in title/body/commits).\n",
      "- The PR introduces or edits test files (file paths containing test-related keywords).\n",
      "Only candidates that pass execution-based validation are kept.\n",
      "\n",
      "Task-instance components\n",
      "Each task instance encodes:\n",
      "- Codebase reference C: repo owner/name and the base commit (mirrored repositories are created so code can be retrieved reproducibly).\n",
      "- Problem statement P: aggregated issue titles and descriptions and any issue/PR comments up to the PR’s first commit (no post-solution comments that would leak the fix).\n",
      "- Tests T: the tests introduced/edited by the PR (extracted from the PR diff and stored as a .patch).\n",
      "- Solution δ (gold patch): the PR’s code changes excluding test edits (stored as a .patch).\n",
      "- Metadata fields: base_commit, created_at, instance_id, issue_numbers, repo, pull_number, version, env_install_commit, hints_text (collected comments), and cached test result mappings like FAIL_TO_PASS and PASS_TO_PASS.\n",
      "\n",
      "Execution-based validation (quality control)\n",
      "- Virtual execution contexts are created per repository release version (manual inspection of README/contributing to determine Python version, dependencies, install commands). Conda environments are used.\n",
      "- For each candidate instance the pipeline:\n",
      "  1. Checks out the base commit.\n",
      "  2. Installs the codebase in the corresponding env.\n",
      "  3. Applies the test patch T and runs tests (log_pre).\n",
      "  4. Applies the solution patch δ and runs tests again (log_post).\n",
      "- Candidates are discarded if any step fails (checkout, install, apply patch, test run).\n",
      "- Instances are retained only if at least one test changes from fail → pass (a true FAIL_TO_PASS) and if there are no trivial issues (e.g., ImportError or AttributeError in log_pre that indicate missing dependency/name issues).\n",
      "- Instances whose tests exercise newly created functions/classes (i.e., tests requiring names introduced by δ) are excluded because they would be impossible to solve from the problem statement alone.\n",
      "\n",
      "Task-instance format and artifacts\n",
      "- Finalized instances are saved in a single JSON file (task metadata and patch contents are included as patch-format strings).\n",
      "- For each instance the validation engine caches parsed test-to-status mappings for log_pre/log_post and creates ground-truth lists: FAIL_TO_PASS, PASS_TO_PASS (used during evaluation to check both that the fix was implemented and that prior behavior is preserved).\n",
      "- Mirrors of original repositories are created and stored to preserve exact base commits and enable reproducible checkout.\n",
      "\n",
      "Evaluation procedure (how models are scored)\n",
      "- Model input: problem statement P and the codebase C (usually limited by retrieval/long-context strategy). The model must generate a single .patch (a git/unified-diff style patch).\n",
      "- Per predicted patch the evaluation harness:\n",
      "  1. Resets repo to base commit.\n",
      "  2. Activates the executable context for the instance version.\n",
      "  3. Installs the codebase.\n",
      "  4. Applies the test patch T.\n",
      "  5. Attempts to apply the predicted patch \\hat{δ}. If applying fails, an automatic \"patch-fix\" step tries to repair the patch (e.g., strip extraneous context lines and recalculate headers); if it still fails the prediction is scored as failure.\n",
      "  6. Runs the repository’s test command to generate log_{\\hat{δ}}.\n",
      "  7. Parses log_{\\hat{δ}} into a test-to-status mapping using repository-specific parsers.\n",
      "  8. Declares the task solved only if all tests listed in FAIL_TO_PASS and PASS_TO_PASS have status = pass in log_{\\hat{δ}}.\n",
      "- The principal metric is % Resolved: fraction of task instances fully solved (all required tests pass).\n",
      "\n",
      "Patch-fixing and robustness\n",
      "- If a generated patch does not apply, the harness attempts an automated repair (e.g., removing context lines, fixing header offsets) before giving up. Applied-but-broken patches that then fail tests are classified according to pass/fail patterns (Resolved, Breaking Resolved, Partially Resolved, Work-in-Progress, No-Op, Regression) to provide finer-grained analysis.\n",
      "\n",
      "Dataset scale and characterization\n",
      "- Raw crawl: ~93k PRs across selected repositories; after conversion/filters and execution validation the final evaluation set contains 2,294 task instances.\n",
      "- Instances come from 12 widely used Python repositories with varied sizes and purposes (e.g., scikit-learn, Django, matplotlib, requests, pytest, sympy, astropy, etc.).\n",
      "- Typical instance properties: long problem descriptions (median ~140 words), large repositories (median ~thousands of files and hundreds of thousands of lines), and reference edits that usually touch ~1–2 files, edit a few functions, and modify a few dozen lines on average.\n",
      "- Tests: each instance has at least one FAIL_TO_PASS; many instances include many PASS_TO_PASS tests for regression protection (median tens to hundreds of pass-to-pass tests).\n",
      "\n",
      "Development set, train set, and extensions\n",
      "- A smaller development set (~225 instances, >10% of the main set) is provided for tuning and debugging.\n",
      "- A separate SWE-bench-train dataset (19k non-testing task instances from many repos) was prepared for fine-tuning models; fine-tuned models were released (SWE-Llama 7B and 13B) to study open-model performance on long contexts.\n",
      "- The collection pipeline and mirror strategy were designed to be easily extendable so the benchmark can be updated continuously with new PRs and support additional languages or repos.\n",
      "\n",
      "Reproducibility and release commitments\n",
      "- The codebase used to collect, validate, and evaluate task instances is organized and documented; mirrors and the JSON of task instances are provided so others can reproduce experiments.\n",
      "- Execution contexts, validation logs, and ground-truth test mappings are cached to avoid re-running expensive validation at evaluation time.\n",
      "- Plans include open-sourcing the task instances, collection/evaluation infrastructure, training data used for fine-tuning, and model weights along with documentation.\n",
      "\n",
      "Design decisions and safeguards\n",
      "- Using merged PRs that added tests provides a strong ground-truth signal that the PR truly solved the issue and allowed for reproducible verification.\n",
      "- Excluding instances with trivial dependency/name errors or tests that require newly-introduced symbol names ensures tasks are solvable from the given P + C without hidden knowledge.\n",
      "- Mirroring repositories preserves commit history and avoids breakage from later upstream edits.\n",
      "\n",
      "What solving a task means (concrete criterion)\n",
      "- A generated patch must apply and, after applying the repository’s tests, every test that the validation flagged as verifying the issue (FAIL_TO_PASS) must now pass, and all tests that previously passed but were intended to remain passing (PASS_TO_PASS) must still pass. Only then is the task counted as solved.\n",
      "\n",
      "Utility and intended uses\n",
      "- The benchmark measures model ability to: localize defects, reason across a large codebase, produce multi-line and multi-file edits in patch format, and use execution feedback (tests) as verification.\n",
      "- It is intended both as a hard evaluation for current models and as a development target for models and systems that perform repository-scale code edits, retrieval from large codebases, iterative editing with execution feedback, or agent-style multi-step repair.\n",
      "\n",
      "Limitations to be aware of\n",
      "- The benchmark focuses on repositories with permissive licenses and decent test coverage (popular projects), so it emphasizes bug fixes and features that were covered by tests and merged in those projects.\n",
      "- Some tasks that require creating new symbol names first introduced in the solution are excluded because they would not be solvable from the baseline inputs.\n",
      "- Execution environments are created per release version (manual aspects exist), and some instances are discarded when installation or environment setup cannot be reliably reproduced.\n",
      "\n",
      "Overall, SWE-bench provides a large, execution-validated, reproducible suite of real-world repository-scale code-editing tasks that require understanding long contexts and producing correct patch-format edits verified by the project’s own tests.\n"
     ]
    }
   ],
   "source": [
    "response = query_engine.query(\n",
    "    \"Give me a full overview of the benchmark details in SWE Bench\"\n",
    ")\n",
    "print(str(response))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "62b11a23-df6a-4d83-b35c-691bb4d125c0",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      ">> Identifying the right sections to retrieve\n",
      ">> Retrieving section: 6: Conclusion\n",
      ">> Retrieving section: 5: EXPERIMENTS\n",
      ">> Retrieving section: 5: EXPERIMENTS\n",
      "Here are the additional experimental results and analyses reported.\n",
      "\n",
      "1) Full qualitative results (complete skill behaviors, 8 seeds)\n",
      "- Environments: state-based Ant and HalfCheetah; pixel-based Quadruped and Humanoid.\n",
      "- Skill parameterizations used in these visualizations: 2-D continuous skills for Ant and Humanoid, 4-D continuous skills for Quadruped, 16 discrete skills for HalfCheetah.\n",
      "- Main finding: across 8 random seeds METRA consistently discovers diverse locomotion behaviors (radial/x-y coverage, different locomotion modes) regardless of seed. The paper shows multiple sample trajectories per seed to illustrate robustness and diversity.\n",
      "\n",
      "2) Latent-space visualization\n",
      "- Setup: METRA trained with 2-D continuous latent space on Ant (state inputs) and Humanoid (pixel inputs).\n",
      "- Observation: the learned representation φ(s) captures the agent’s x-y coordinates in the 2-D latent space in both Ant and Humanoid. The learned φ trajectories align with the x-y trajectories, indicating METRA finds the temporally most spread-out manifold (x-y plane) even from pixels.\n",
      "- Note: with higher-dimensional or discrete latent spaces, METRA captures more diverse, non-linear behaviors beyond simple locomotion.\n",
      "\n",
      "3) Ablation: effect of latent-space size on learned skills\n",
      "- Latent-space sizes tested: 1-D, 2-D, 4-D continuous; discrete sets of sizes {2}, {4}, {8}, {16}, {24}.\n",
      "- Environments: Ant and HalfCheetah.\n",
      "- Result: skill diversity increases as the capacity (dimensionality / cardinality) of Z grows.\n",
      "  - 1-D: simple linear/one-dimensional coverage\n",
      "  - 2-D: radial coverage / 2-D spread\n",
      "  - 4-D: more complex radial / richer behaviors\n",
      "  - Discrete increases produce progressively more distinct discrete behaviors (more segments, more diverse skill classes)\n",
      "- Conclusion: METRA maximizes state coverage under latent capacity, so increasing Z’s capacity yields more diverse discovered behaviors.\n",
      "\n",
      "4) Additional baseline: DGPO comparison (discrete-skill comparison; 4 seeds)\n",
      "- Experimental setup: DIAYN, DGPO, and METRA were trained with 16 discrete skills for 10,000 epochs (≈16M environment steps).\n",
      "- Metrics reported: policy state coverage and total state coverage (means ± std).\n",
      "- Results (Table reproduced):\n",
      "  - HalfCheetah (policy state coverage)\n",
      "    - DIAYN: 6.75 ± 2.22\n",
      "    - DGPO: 6.75 ± 2.06\n",
      "    - METRA: 186.75 ± 16.21\n",
      "  - HalfCheetah (total state coverage)\n",
      "    - DIAYN: 19.50 ± 3.87\n",
      "    - DGPO: 22.25 ± 5.85\n",
      "    - METRA: 177.75 ± 17.10\n",
      "  - Ant (policy state coverage)\n",
      "    - DIAYN: 11.25 ± 5.44\n",
      "    - DGPO: 7.00 ± 3.83\n",
      "    - METRA: 1387.75 ± 77.38\n",
      "  - Ant (total state coverage)\n",
      "    - DIAYN: 107.75 ± 17.00\n",
      "    - DGPO: 121.50 ± 4.36\n",
      "    - METRA: 6313.25 ± 747.92\n",
      "- Interpretation given: DGPO (which maximizes a metric-agnostic KL-style objective in discrete Z) still produces limited state coverage similar to DIAYN, whereas METRA (a metric-aware Wasserstein formulation) achieves substantially greater coverage in these locomotion environments.\n",
      "\n",
      "5) Skill examples / qualitative descriptions by latent size\n",
      "- A tabulated description shows how skills change qualitatively with latent-size choices (examples):\n",
      "  - Ant (continuous Z):\n",
      "    - 1-D: linearly increasing coverage\n",
      "    - 2-D: radial coverage with 2-D spread\n",
      "    - 4-D: more complex radial coverage\n",
      "  - Ant / HalfCheetah (discrete Z):\n",
      "    - Discrete 2 / 4 / 8 / 16 / 24 skills: progressively more segments and more diverse behaviors, with 24 discrete skills showing the highest diversity.\n",
      "- The paper notes that with discrete Z METRA can discover qualitatively distinct behaviors such as flips or static postures (in addition to locomotion) when capacity is sufficient.\n",
      "\n",
      "6) Details on coverage metrics, datasets, and protocol used in these additional results\n",
      "- Policy state coverage: computed by sampling 48 deterministic trajectories using 48 randomly sampled skills at each evaluation epoch (used for skill-discovery method policy coverage plots).\n",
      "- Queue state coverage: computed from most recent 100,000 training trajectories (used for some comparisons).\n",
      "- Total state coverage: computed from the entire set of training trajectories up to the current epoch (used as a generous metric for pure-exploration baselines).\n",
      "- For locomotion coverage counting: x-y bins of 1×1 are counted for Ant, Quadruped, Humanoid; x bins for HalfCheetah. Kitchen uses task success counts for pre-defined subtasks.\n",
      "- Seeds: most qualitative and skill-discovery comparisons use 8 seeds; the DGPO comparison reported used 4 seeds.\n",
      "\n",
      "7) Additional notes and takeaways from the extra experiments\n",
      "- METRA’s learned φ(s) is effective for zero-shot goal selection because φ preserves temporal distances; the latent difference φ(g) − φ(s) gives a direction in Z to reach a goal.\n",
      "- Increasing latent capacity helps but requires choosing continuous vs. discrete Z appropriately for the desired types of behaviors.\n",
      "- The DGPO comparison further supports that metric-aware objectives (METRA) lead to substantially higher state coverage than metric-agnostic mutual-information/KL-style objectives.\n",
      "\n",
      "If you want, I can extract and present the specific numeric tables and captions (e.g., the full Table 1 numbers above) in CSV or another concise format, or summarize the visual findings into representative example trajectories for each latent-size setting.\n"
     ]
    }
   ],
   "source": [
    "response = query_engine.query(\n",
    "    \"Give me details of all additional experimental results in the Metra paper\"\n",
    ")\n",
    "print(str(response))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
