{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a href=\"https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/cookbooks/rerank_llamaparsed_pdfs.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
    "\n",
    "# Reranking top pages from PDFs using LlamaParse and ZeroEntropy\n",
    "\n",
    "In this guide, we'll build a simple workflow that parses PDF documents into text with LlamaParse, then indexes, queries, and reranks the parsed pages with ZeroEntropy.\n",
    "\n",
    "---\n",
    "\n",
    "### Pre-requisites\n",
    "- Python 3.8+\n",
    "- `zeroentropy` client\n",
    "- `llama_cloud_services` client\n",
    "- A ZeroEntropy API key ([Get yours here](https://dashboard.zeroentropy.dev))\n",
    "- A LlamaParse API key ([Get yours here](https://docs.cloud.llamaindex.ai/api_key))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### What You'll Learn\n",
    "- How to use LlamaParse to accurately convert PDF documents into plain text\n",
    "- How to use ZeroEntropy to semantically index and query the parsed documents\n",
    "- How to rerank your results using [ZeroEntropy's reranker zerank-1](https://www.zeroentropy.dev/blog/announcing-zeroentropys-first-reranker) to boost accuracy"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Setting up your ZeroEntropy Client and LlamaParse Client\n",
    "\n",
    "First, install dependencies:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install zeroentropy python-dotenv llama_cloud_services requests"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now set your API keys and initialize the clients:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Get your API keys from the ZeroEntropy and LlamaParse dashboards:\n",
    "# https://dashboard.zeroentropy.dev/\n",
    "# https://docs.cloud.llamaindex.ai/api_key\n",
    "# Paste them below, or store them in a .env file loaded via python-dotenv.\n",
    "import os\n",
    "from dotenv import load_dotenv\n",
    "\n",
    "load_dotenv()\n",
    "ZEROENTROPY_API_KEY = os.getenv(\"ZEROENTROPY_API_KEY\", \"your_api_key_here\")\n",
    "LLAMAPARSE_API_KEY = os.getenv(\"LLAMAPARSE_API_KEY\", \"your_api_key_here\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from zeroentropy import AsyncZeroEntropy, ConflictError\n",
    "from llama_cloud_services import LlamaParse\n",
    "import os\n",
    "\n",
    "# We initialize the AsyncZeroEntropy client in order to parse multiple documents in parallel\n",
    "# If you want to parse a single document, you can use the synchronous client instead\n",
    "zclient = AsyncZeroEntropy(api_key=ZEROENTROPY_API_KEY)\n",
    "\n",
    "# We initialize the llama_parse client to parse the PDF documents into text\n",
    "llamaParser = LlamaParse(\n",
    "    api_key=LLAMAPARSE_API_KEY,\n",
    "    num_workers=1,  # number of parallel API calls when multiple files are passed\n",
    "    result_type=\"text\",\n",
    "    verbose=True,\n",
    "    language=\"en\",  # optionally define a language, default=en\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Adding a collection to the ZeroEntropy client"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "collection_name = \"my_collection\"\n",
    "\n",
    "# Guard against re-runs: adding a collection that already exists raises ConflictError\n",
    "try:\n",
    "    await zclient.collections.add(collection_name=collection_name)\n",
    "except ConflictError:\n",
    "    print(f\"Collection '{collection_name}' already exists; reusing it.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now define a function to download and extract PDF files from Dropbox directly to memory:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading zip file from: https://www.dropbox.com/scl/fi/oi6kf91gz8h76d2wt57mb/example_docs.zip?rlkey=mf21tvyb65tyrjkr1t2szt226&dl=1\n",
      "Loaded example_docs/S-P-Global-2024-Annual-Report.pdf (2434264 bytes)\n",
      "Loaded example_docs/annual-report-sg-en-spy.pdf (603698 bytes)\n",
      "Loaded example_docs/dashboard-sp-500-factor.pdf (1717787 bytes)\n",
      "Successfully loaded 3 PDF files into memory\n"
     ]
    }
   ],
   "source": [
    "import requests\n",
    "import zipfile\n",
    "import asyncio\n",
    "import io\n",
    "from typing import List, Tuple\n",
    "\n",
    "\n",
    "def download_and_extract_dropbox_zip_to_memory(\n",
    "    url: str,\n",
    ") -> List[Tuple[str, bytes]]:\n",
    "    \"\"\"Download and extract a zip file from Dropbox URL directly to memory.\n",
    "\n",
    "    Returns:\n",
    "        List of tuples containing (filename, file_content_bytes)\n",
    "    \"\"\"\n",
    "    try:\n",
    "        # Download the zip file\n",
    "        print(f\"Downloading zip file from: {url}\")\n",
    "        response = requests.get(url, stream=True)\n",
    "        response.raise_for_status()\n",
    "\n",
    "        # Read zip content into memory\n",
    "        zip_content = io.BytesIO()\n",
    "        for chunk in response.iter_content(chunk_size=8192):\n",
    "            zip_content.write(chunk)\n",
    "        zip_content.seek(0)\n",
    "\n",
    "        # Extract files from zip in memory\n",
    "        files_in_memory = []\n",
    "        with zipfile.ZipFile(zip_content, \"r\") as zip_ref:\n",
    "            for file_info in zip_ref.infolist():\n",
    "                if (\n",
    "                    not file_info.is_dir()\n",
    "                    and file_info.filename.lower().endswith(\".pdf\")\n",
    "                ):\n",
    "                    file_content = zip_ref.read(file_info.filename)\n",
    "                    files_in_memory.append((file_info.filename, file_content))\n",
    "                    print(\n",
    "                        f\"Loaded {file_info.filename} ({len(file_content)} bytes)\"\n",
    "                    )\n",
    "\n",
    "        print(\n",
    "            f\"Successfully loaded {len(files_in_memory)} PDF files into memory\"\n",
    "        )\n",
    "        return files_in_memory\n",
    "\n",
    "    except Exception as e:\n",
    "        print(f\"Error downloading/extracting zip file: {e}\")\n",
    "        raise\n",
    "\n",
    "\n",
    "# Download and extract files from Dropbox directly to memory\n",
    "dropbox_url = \"https://www.dropbox.com/scl/fi/oi6kf91gz8h76d2wt57mb/example_docs.zip?rlkey=mf21tvyb65tyrjkr1t2szt226&dl=1\"\n",
    "files_in_memory = download_and_extract_dropbox_zip_to_memory(dropbox_url)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Parsing PDFs using LlamaParse"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now parse the in-memory PDF files with LlamaParse:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Parsing 3 PDF files...\n",
      "Started parsing the file under job_id a1324745-c58b-4a24-b757-c6a6a58e57cd\n",
      "Started parsing the file under job_id 326b947e-9d95-4dc3-aeaf-440b9cc03016\n",
      "Started parsing the file under job_id b8534aa0-ed69-4079-a720-1b2471066c6f\n",
      "............Successfully parsed 3 documents\n"
     ]
    }
   ],
   "source": [
    "# Create file-like objects for LlamaParse\n",
    "file_objects = []\n",
    "file_names = []\n",
    "\n",
    "for filename, file_content in files_in_memory:\n",
    "    # Create a file-like object from bytes\n",
    "    file_obj = io.BytesIO(file_content)\n",
    "    file_obj.name = filename  # Set the name attribute for LlamaParse\n",
    "    file_objects.append(file_obj)\n",
    "    file_names.append(filename)\n",
    "\n",
    "# Parse all PDF files concurrently using LlamaParse\n",
    "# Pass extra_info with each file's name so LlamaParse can identify the in-memory byte streams\n",
    "print(f\"Parsing {len(file_objects)} PDF files...\")\n",
    "\n",
    "# Use async parsing to avoid nested event loop issues\n",
    "text_data = await asyncio.gather(\n",
    "    *[\n",
    "        llamaParser.aparse(file_obj, extra_info={\"file_name\": name})\n",
    "        for file_obj, name in zip(file_objects, file_names)\n",
    "    ]\n",
    ")\n",
    "print(f\"Successfully parsed {len(text_data)} documents\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Organizing your documents\n",
    "\n",
    "Once parsed, we collect each document as a list of its page texts."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Organized 3 documents with pages\n",
      "First document has 104 pages\n"
     ]
    }
   ],
   "source": [
    "# docs[i] is the list of page texts for document i\n",
    "docs = [[page.text for page in doc.pages] for doc in text_data]\n",
    "\n",
    "print(f\"Organized {len(docs)} documents with pages\")\n",
    "if docs:\n",
    "    print(f\"First document has {len(docs[0])} pages\")"
   ]
  },
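  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, you can preview the start of the first parsed page (any page works equally well):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Preview the first 300 characters of the first page of the first document\n",
    "# to confirm the parse produced readable text\n",
    "if docs and docs[0]:\n",
    "    print(docs[0][0][:300])"
   ]
  },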
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Querying with ZeroEntropy\n",
    "We'll now define functions to upload the documents as text pages asynchronously."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import asyncio\n",
    "from tqdm.asyncio import tqdm\n",
    "\n",
    "sem = asyncio.Semaphore(16)\n",
    "\n",
    "\n",
    "async def add_document_with_pages(\n",
    "    collection_name: str, filename: str, pages: list, doc_index: int\n",
    "):\n",
    "    \"\"\"Add a single document with multiple pages to the collection.\"\"\"\n",
    "    async with sem:  # Limit concurrent operations\n",
    "        for retry in range(3):  # Retry logic\n",
    "            try:\n",
    "                response = await zclient.documents.add(\n",
    "                    collection_name=collection_name,\n",
    "                    path=filename,  # Use the actual filename as path\n",
    "                    content={\n",
    "                        \"type\": \"text-pages\",\n",
    "                        \"pages\": pages,  # Send list of strings directly\n",
    "                    },\n",
    "                )\n",
    "                return response\n",
    "            except ConflictError:\n",
    "                print(\n",
    "                    f\"Document '{filename}' already exists in collection '{collection_name}'\"\n",
    "                )\n",
    "                break\n",
    "            except Exception as e:\n",
    "                if retry == 2:  # Last retry\n",
    "                    print(f\"Failed to add document '{filename}': {e}\")\n",
    "                    return None\n",
    "                await asyncio.sleep(0.1 * (retry + 1))  # Linear backoff before retrying\n",
    "\n",
    "\n",
    "async def upload_documents_async(\n",
    "    docs: list, file_names: list, collection_name: str\n",
    "):\n",
    "    \"\"\"\n",
    "    Upload documents asynchronously to ZeroEntropy collection.\n",
    "\n",
    "    Args:\n",
    "        docs: 2D array where docs[i] contains the list of pages (strings) for document i\n",
    "        file_names: Array where file_names[i] contains the path for document i\n",
    "        collection_name: Name of the collection to add documents to\n",
    "    \"\"\"\n",
    "\n",
    "    # Validate input arrays have same length\n",
    "    if len(docs) != len(file_names):\n",
    "        raise ValueError(\"docs and file_names must have the same length\")\n",
    "\n",
    "    # Print starting message\n",
    "    print(f\"Starting upload of {len(docs)} documents...\")\n",
    "\n",
    "    # Create tasks for all documents\n",
    "    tasks = [\n",
    "        add_document_with_pages(collection_name, file_names[i], docs[i], i)\n",
    "        for i in range(len(docs))\n",
    "    ]\n",
    "\n",
    "    # Execute all tasks concurrently with progress bar\n",
    "    results = await tqdm.gather(*tasks, desc=\"Uploading Documents\")\n",
    "\n",
    "    # Count successful uploads\n",
    "    successful = sum(1 for result in results if result is not None)\n",
    "    print(f\"Successfully uploaded {successful}/{len(docs)} documents\")\n",
    "\n",
    "    return results"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Querying documents with ZeroEntropy\n",
    "First, upload the documents to the collection:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Starting upload of 3 documents...\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Uploading Documents: 100%|██████████| 3/3 [00:00<00:00,  3.42it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Successfully uploaded 3/3 documents\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[DocumentAddResponse(message='Success!'),\n",
       " DocumentAddResponse(message='Success!'),\n",
       " DocumentAddResponse(message='Success!')]"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "await upload_documents_async(docs, file_names, collection_name)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Query the collection for the top 5 most relevant pages:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "response = await zclient.queries.top_pages(\n",
    "    collection_name=collection_name,\n",
    "    query=\"What are the top 100 stocks in the S&P 500?\",\n",
    "    k=5,\n",
    ")"
   ]
  },
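  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can inspect the initial retrieval order before reranking; each result exposes `path`, `page_index`, and `score`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Print the pre-rerank retrieval order for later comparison with the reranked list\n",
    "print(\"Initial top_pages results:\")\n",
    "for i, result in enumerate(response.results, 1):\n",
    "    print(\n",
    "        f\"Rank {i}: {result.path} (Page {result.page_index}) - Score: {result.score:.4f}\"\n",
    "    )"
   ]
  },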
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's define a function to rerank the pages in the response:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "async def rerank_top_pages_with_metadata(\n",
    "    query: str, top_pages_response, collection_name: str\n",
    "):\n",
    "    \"\"\"\n",
    "    Rerank the results from a top_pages query and return re-ordered list with metadata.\n",
    "\n",
    "    Args:\n",
    "        query: The query string to use for reranking\n",
    "        top_pages_response: The response object from zclient.queries.top_pages()\n",
    "        collection_name: Name of the collection to fetch page content from\n",
    "\n",
    "    Returns:\n",
    "        List of dicts with 'path', 'page_index', and 'rerank_score' in reranked order\n",
    "    \"\"\"\n",
    "\n",
    "    # Fetch page content and store metadata for each result\n",
    "    documents = []\n",
    "    metadata = []\n",
    "\n",
    "    for result in top_pages_response.results:\n",
    "        # Fetch the actual page content\n",
    "        page_info = await zclient.documents.get_page_info(\n",
    "            collection_name=collection_name,\n",
    "            path=result.path,\n",
    "            page_index=result.page_index,\n",
    "            include_content=True,\n",
    "        )\n",
    "\n",
    "        # Get the page content, falling back to a placeholder so the\n",
    "        # documents and metadata lists stay aligned with the results\n",
    "        page_content = (page_info.page.content or \"\").strip()\n",
    "        documents.append(page_content or \"No content available\")\n",
    "        metadata.append(\n",
    "            {\n",
    "                \"path\": result.path,\n",
    "                \"page_index\": result.page_index,\n",
    "                \"original_score\": result.score,\n",
    "            }\n",
    "        )\n",
    "\n",
    "    if not documents:\n",
    "        raise ValueError(\"No documents found to rerank\")\n",
    "\n",
    "    # Perform reranking\n",
    "    rerank_response = await zclient.models.rerank(\n",
    "        model=\"zerank-1\", query=query, documents=documents\n",
    "    )\n",
    "\n",
    "    # Create re-ordered list with metadata\n",
    "    reranked_results = []\n",
    "    for rerank_result in rerank_response.results:\n",
    "        original_metadata = metadata[rerank_result.index]\n",
    "        reranked_results.append(\n",
    "            {\n",
    "                \"path\": original_metadata[\"path\"],\n",
    "                \"page_index\": original_metadata[\"page_index\"],\n",
    "                \"rerank_score\": rerank_result.relevance_score,\n",
    "            }\n",
    "        )\n",
    "\n",
    "    return reranked_results"
   ]
  },
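  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The reranker can also be called directly on ad-hoc strings. Here's a minimal sketch using the same `zerank-1` model (the query and passages below are made-up examples):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Score a few ad-hoc passages against a query; each result carries an\n",
    "# `index` pointing back into the original `documents` list\n",
    "demo = await zclient.models.rerank(\n",
    "    model=\"zerank-1\",\n",
    "    query=\"Which passage mentions dividends?\",\n",
    "    documents=[\n",
    "        \"The index fund distributes dividends quarterly.\",\n",
    "        \"Parsing PDFs preserves page boundaries.\",\n",
    "    ],\n",
    ")\n",
    "for r in demo.results:\n",
    "    print(r.index, round(r.relevance_score, 4))"
   ]
  },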
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Run the function and see the results!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Reranked Results with Metadata:\n",
      "Rank 1: example_docs/dashboard-sp-500-factor.pdf (Page 9) - Score: 0.8472\n",
      "Rank 2: example_docs/dashboard-sp-500-factor.pdf (Page 12) - Score: 0.8311\n",
      "Rank 3: example_docs/dashboard-sp-500-factor.pdf (Page 8) - Score: 0.7941\n",
      "Rank 4: example_docs/dashboard-sp-500-factor.pdf (Page 2) - Score: 0.4571\n",
      "Rank 5: example_docs/dashboard-sp-500-factor.pdf (Page 4) - Score: 0.4511\n"
     ]
    }
   ],
   "source": [
    "reranked_results = await rerank_top_pages_with_metadata(\n",
    "    query=\"What are the top 100 stocks in the S&P 500?\",\n",
    "    top_pages_response=response,\n",
    "    collection_name=collection_name,\n",
    ")\n",
    "\n",
    "# Display results\n",
    "print(\"Reranked Results with Metadata:\")\n",
    "for i, result in enumerate(reranked_results, 1):\n",
    "    print(\n",
    "        f\"Rank {i}: {result['path']} (Page {result['page_index']}) - Score: {result['rerank_score']:.4f}\"\n",
    "    )"
   ]
  },
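  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally, clean up the collection when you're done. The delete call below is left commented out so \"Run All\" stays non-destructive (it assumes the SDK's collection-delete endpoint; check the ZeroEntropy API docs for the exact surface):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional cleanup: remove the collection and every document indexed in it\n",
    "# (assumed SDK call; uncomment to run)\n",
    "# await zclient.collections.delete(collection_name=collection_name)"
   ]
  },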
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### ✅ That's It!\n",
    "\n",
    "You've now built a working semantic search engine that processes PDF files entirely in memory using ZeroEntropy and LlamaParse — no local file storage required!"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "myenv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
