{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "ef6fd896",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# PDF Loader\n",
    "\n",
    "- Author: [Yejin Park](https://github.com/ppakyeah)\n",
    "- Peer Review : [Yun Eun](https://github.com/yuneun92), [MinJi Kang](https://www.linkedin.com/in/minji-kang-995b32230/)\n",
    "- Author: [Yejin Park](https://github.com/ppakyeah)\n",
    "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n",
    "\n",
    "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/06-DocumentLoader/02-PDFLoader.ipynb)\n",
    "[![Open in GitHub](https://img.shields.io/badge/Open%20in%20GitHub-181717?style=flat-square&logo=github&logoColor=white)](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/06-DocumentLoader/02-PDFLoader.ipynb)\n",
    "\n",
    "## Overview\n",
    "This tutorial covers various PDF processing methods using LangChain and popular PDF libraries.\n",
    "\n",
    "PDF processing is essential for extracting and analyzing text data from PDF documents.\n",
    "\n",
    "In this tutorial, we will explore different PDF loaders and their capabilities while working with LangChain's document processing framework.\n",
    "\n",
    "### Table of Contents\n",
    "\n",
    "- [Overview](#overview)\n",
    "- [Environment Setup](#environment-setup)\n",
    "- [How to load PDFs](#how-to-load-pdfs)\n",
    "- [PyPDF](#pypdf)\n",
    "- [PyMuPDF](#pymupdf)\n",
    "- [Unstructured](#unstructured)\n",
    "- [PyPDFium2](#pypdfium2)\n",
    "- [PDFMiner](#pdfminer)\n",
    "- [PDFPlumber](#pdfplumber)\n",
    "\n",
    "### References\n",
    "\n",
    "- [LangChain: How to load PDFs](https://python.langchain.com/docs/how_to/document_loader_pdf/)\n",
    "----"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "Sym3XiL3i2X8",
   "metadata": {
    "id": "Sym3XiL3i2X8"
   },
   "source": [
    "## Environment Setup\n",
    "\n",
    "Set up the environment. You may refer to [Environment Setup](https://wikidocs.net/257836) for more details.\n",
    "\n",
    "**[Note]**\n",
    "- ```langchain-opentutorial``` is a package that provides a set of easy-to-use environment setup, useful functions and utilities for tutorials.\n",
    "- You can checkout the [```langchain-opentutorial```](https://github.com/LangChain-OpenTutorial/langchain-opentutorial-pypi) for more details."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "YYOErkDKi2X8",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:31:56.036017Z",
     "start_time": "2025-01-02T12:31:54.026466Z"
    },
    "id": "YYOErkDKi2X8"
   },
   "outputs": [],
   "source": [
    "%%capture --no-stderr\n",
    "%pip install langchain-opentutorial"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "WuCBOIMGi2X9",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:34:03.966683Z",
     "start_time": "2025-01-02T12:31:56.040547Z"
    },
    "id": "WuCBOIMGi2X9"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "[notice] A new release of pip available: 22.3.1 -> 24.3.1\n",
      "[notice] To update, run: pip install --upgrade pip\n"
     ]
    }
   ],
   "source": [
    "# Install required packages\n",
    "from langchain_opentutorial import package\n",
    "\n",
    "package.install(\n",
    "    [\n",
    "        \"langchain_community\",\n",
    "        \"langchain_text_splitters\",\n",
    "        \"pypdf\",\n",
    "        \"rapidocr-onnxruntime\",\n",
    "        \"pymupdf\",\n",
    "        \"unstructured[pdf]\"\n",
    "    ],\n",
    "    verbose=False,\n",
    "    upgrade=False,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "XuYDrHMCi2X9",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:34:03.973674Z",
     "start_time": "2025-01-02T12:34:03.967009Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "XuYDrHMCi2X9",
    "outputId": "af2bbe8c-f137-413b-bc21-064d918fa530"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Environment variables have been set successfully.\n"
     ]
    }
   ],
   "source": [
    "# Set environment variables\n",
    "from langchain_opentutorial import set_env\n",
    "\n",
    "set_env(\n",
    "    {\n",
    "        \"OPENAI_API_KEY\": \"\",\n",
    "        \"LANGCHAIN_API_KEY\": \"\",\n",
    "        \"LANGCHAIN_TRACING_V2\": \"true\",\n",
    "        \"LANGCHAIN_ENDPOINT\": \"https://api.smith.langchain.com\",\n",
    "        \"LANGCHAIN_PROJECT\": \"PDFLoader\",\n",
    "    }\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "mLc6owsBi2X-",
   "metadata": {
    "id": "mLc6owsBi2X-"
   },
   "source": [
    "## How to load PDFs\n",
    "\n",
    "[Portable Document Format (PDF)](https://en.wikipedia.org/wiki/PDF), a file format standardized by ISO 32000, was developed by Adobe in 1992 for presenting documents, which include text formatting and images in a way that is independent of application software, hardware, and operating systems.\n",
    "\n",
    "This guide covers how to load a PDF document into the LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) format. This format will be used downstream.\n",
    "\n",
    "LangChain integrates with a variety of PDF parsers. Some are simple and relatively low-level, while others support OCR and image processing or perform advanced document layout analysis.\n",
    "\n",
    "The right choice depends on your application.\n",
    "\n",
    "\n",
    "We will demonstrate these approaches on a [sample file](https://github.com/langchain-ai/langchain/blob/master/libs/community/tests/integration_tests/examples/layout-parser-paper.pdf).\n",
    "Download the sample file and copy it to your data folder."
   ]
  },
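  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The download step can also be scripted with the Python standard library. A minimal sketch follows; the raw-file URL is inferred from the GitHub link above, so verify it before relying on it.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import urllib.request\n",
    "\n",
    "# Assumed raw URL of the sample file linked above\n",
    "url = (\n",
    "    \"https://raw.githubusercontent.com/langchain-ai/langchain/master/\"\n",
    "    \"libs/community/tests/integration_tests/examples/layout-parser-paper.pdf\"\n",
    ")\n",
    "\n",
    "# Create the data folder if needed, then fetch the file into it\n",
    "os.makedirs(\"./data\", exist_ok=True)\n",
    "urllib.request.urlretrieve(url, \"./data/layout-parser-paper.pdf\")"
   ]
  },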
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "7c18fcef",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:34:03.977437Z",
     "start_time": "2025-01-02T12:34:03.975350Z"
    },
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "FILE_PATH = \"./data/layout-parser-paper.pdf\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "_efkSboXi2X_",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:34:03.987274Z",
     "start_time": "2025-01-02T12:34:03.985104Z"
    },
    "id": "_efkSboXi2X_"
   },
   "outputs": [],
   "source": [
    "def show_metadata(docs):\n",
    "    if docs:\n",
    "        print(\"[metadata]\")\n",
    "        print(list(docs[0].metadata.keys()))\n",
    "        print(\"\\n[examples]\")\n",
    "        max_key_length = max(len(k) for k in docs[0].metadata.keys())\n",
    "        for k, v in docs[0].metadata.items():\n",
    "            print(f\"{k:<{max_key_length}} : {v}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "QmmzBFFMi2X_",
   "metadata": {
    "id": "QmmzBFFMi2X_"
   },
   "source": [
    "## PyPDF\n",
    "\n",
    "\n",
    "[PyPDF](https://github.com/py-pdf/pypdf) is one of the most widely used Python libraries for PDF processing.\n",
    "\n",
    "Here we use PyPDF to load the PDF as an list of Document objects\n",
    "\n",
    "LangChain's [```PyPDFLoader```](\n",
    "https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFLoader.html) integrates with PyPDF to parse PDF documents into LangChain Document objects.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "fLukm7mdi2X_",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:34:10.198931Z",
     "start_time": "2025-01-02T12:34:03.992128Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "fLukm7mdi2X_",
    "outputId": "efdeb1c5-583b-4147-84c9-79bef48f6196"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "LayoutParser: A Uniﬁed Toolkit for DL-Based DIA 11\n",
      "focuses on precision, eﬃciency, and robustness. The target documents may have\n",
      "complicated structures, and may require training multiple layout detection models\n",
      "to achieve the optimal accuracy. Light-weight pipelines are built for relatively\n",
      "simple d\n"
     ]
    }
   ],
   "source": [
    "from langchain_community.document_loaders import PyPDFLoader\n",
    "\n",
    "# Initialize the PDF loader\n",
    "loader = PyPDFLoader(FILE_PATH)\n",
    "\n",
    "# Load data into Document objects\n",
    "docs = loader.load()\n",
    "\n",
    "# Print the contents of the document\n",
    "print(docs[10].page_content[:300])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "453f2103",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:34:10.202043Z",
     "start_time": "2025-01-02T12:34:10.196722Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "453f2103",
    "outputId": "0b668f5f-148b-4a43-ba90-569870f4e422"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[metadata]\n",
      "['source', 'page']\n",
      "\n",
      "[examples]\n",
      "source : ./data/layout-parser-paper.pdf\n",
      "page   : 0\n"
     ]
    }
   ],
   "source": [
    "# output metadata\n",
    "show_metadata(docs)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "Vkc4IhxI_Xfx",
   "metadata": {
    "id": "Vkc4IhxI_Xfx"
   },
   "source": [
    "The ```load_and_split()``` method allows customizing how documents are chunked by passing a text splitter object, making it more flexible for different use cases."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "DY5g0LBw_hs3",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:34:10.566671Z",
     "start_time": "2025-01-02T12:34:10.203696Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "DY5g0LBw_hs3",
    "outputId": "7accba66-c2c5-4e66-b1e9-5e7caddf607f"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "LayoutParser: A Uniﬁed Toolkit for Deep\n",
      "Learning Based Document Image Analysis\n",
      "Zejiang Shen1 (\u0000 ), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\n",
      "Lee4, Jacob Carlson3, and Weining Li5\n"
     ]
    }
   ],
   "source": [
    "from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
    "\n",
    "# Load Documents and split into chunks. Chunks are returned as Documents.\n",
    "text_splitter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=200)\n",
    "docs = loader.load_and_split(text_splitter=text_splitter)\n",
    "print(docs[0].page_content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "96845496",
   "metadata": {
    "id": "96845496"
   },
   "source": [
    "### PyPDF(OCR)\n",
    "\n",
    "Some PDFs contain text images within scanned documents or pictures. You can also use the ```rapidocr-onnxruntime``` package to extract text from images."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "b5334000",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:35:43.954729Z",
     "start_time": "2025-01-02T12:34:10.567649Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "b5334000",
    "outputId": "7da75ba3-8ee9-4fb9-ad14-a8b87694e187"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "LayoutParser: A Uniﬁed Toolkit for DL-Based DIA 5\n",
      "Table 1: Current layout detection models in the LayoutParser model zoo\n",
      "Dataset Base Model1 Large ModelNotes\n",
      "PubLayNet [38] F / M M Layouts of modern scientiﬁc documents\n",
      "PRImA [3] M - Layouts of scanned modern magazines and scientiﬁc reports\n",
      "Newspaper\n"
     ]
    }
   ],
   "source": [
    "# Initialize PDF loader, enable image extraction option\n",
    "loader = PyPDFLoader(FILE_PATH, extract_images=True)\n",
    "\n",
    "# load PDF page\n",
    "docs = loader.load()\n",
    "\n",
    "# access page content\n",
    "print(docs[4].page_content[:300])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "0fe6caa9",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:35:43.958839Z",
     "start_time": "2025-01-02T12:35:43.953814Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "0fe6caa9",
    "outputId": "bf7fbd16-dbd6-4ff7-eba0-0c472ff68c39"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[metadata]\n",
      "['source', 'page']\n",
      "\n",
      "[examples]\n",
      "source : ./data/layout-parser-paper.pdf\n",
      "page   : 0\n"
     ]
    }
   ],
   "source": [
    "show_metadata(docs)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "IqAkH1vaeCLj",
   "metadata": {
    "id": "IqAkH1vaeCLj"
   },
   "source": [
    "### PyPDF Directory\n",
    "\n",
    "Import all PDF documents from directory."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "lA-e-hPweCLj",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:35:44.495238Z",
     "start_time": "2025-01-02T12:35:43.959963Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "lA-e-hPweCLj",
    "outputId": "257b978c-3c81-4e2a-950d-bb1466655b02"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "16\n"
     ]
    }
   ],
   "source": [
    "from langchain_community.document_loaders import PyPDFDirectoryLoader\n",
    "\n",
    "# directory path\n",
    "loader = PyPDFDirectoryLoader(\"./data/\")\n",
    "\n",
    "# load documents\n",
    "docs = loader.load()\n",
    "\n",
    "# print the number of documents\n",
    "docs_len = len(docs)\n",
    "print(docs_len)\n",
    "\n",
    "# get document from a directory\n",
    "document = docs[docs_len - 1]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "iCxqumC4eCLk",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:35:44.499097Z",
     "start_time": "2025-01-02T12:35:44.495745Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "iCxqumC4eCLk",
    "outputId": "3e8ed27c-59ca-4f4b-9e2c-1285e2498a08"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "16 Z. Shen et al.\n",
      "[23] Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z.,\n",
      "Desmaison, A., Antiga, L., Lerer, A.: Automatic diﬀerentiation in pytorch (2017)\n",
      "[24] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen,\n",
      "T., Lin, Z., Gimelshein, N., An\n"
     ]
    }
   ],
   "source": [
    "# print the contents of the document\n",
    "print(document.page_content[:300])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "ZXuC6hY7eCLk",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:35:44.505186Z",
     "start_time": "2025-01-02T12:35:44.499778Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "ZXuC6hY7eCLk",
    "outputId": "0dbc653a-096d-4b65-89a7-873e47b5a73b"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'source': 'data/layout-parser-paper.pdf', 'page': 15}\n"
     ]
    }
   ],
   "source": [
    "print(document.metadata)"
   ]
  },
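  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```PyPDFDirectoryLoader``` also accepts a ```glob``` pattern and a ```recursive``` flag to control which files are picked up. The sketch below assumes the same ```./data/``` directory as above; adjust the pattern to your directory layout.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_community.document_loaders import PyPDFDirectoryLoader\n",
    "\n",
    "# Match only PDFs whose names contain \"layout\", searching subdirectories as well\n",
    "loader = PyPDFDirectoryLoader(\"./data/\", glob=\"**/*layout*.pdf\", recursive=True)\n",
    "docs = loader.load()\n",
    "\n",
    "print(len(docs))"
   ]
  },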
  {
   "cell_type": "markdown",
   "id": "7a191d5c",
   "metadata": {
    "id": "7a191d5c"
   },
   "source": [
    "## PyMuPDF\n",
    "\n",
    "[PyMuPDF](https://github.com/pymupdf/PyMuPDF) is speed optimized and includes detailed metadata about the PDF and its pages. It returns one document per page.\n",
    "\n",
    "LangChain's [```PyMuPDFLoader```](\n",
    "https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyMuPDFLoader.html) integrates with PyMuPDF to parse PDF documents into LangChain Document objects."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "47e7a947",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:35:47.616063Z",
     "start_time": "2025-01-02T12:35:44.510395Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "47e7a947",
    "outputId": "831c7da6-f718-4f51-80aa-792ad9a14dae"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "LayoutParser: A Uniﬁed Toolkit for DL-Based DIA\n",
      "11\n",
      "focuses on precision, eﬃciency, and robustness. The target documents may have\n",
      "complicated structures, and may require training multiple layout detection models\n",
      "to achieve the optimal accuracy. Light-weight pipelines are built for relatively\n",
      "simple d\n"
     ]
    }
   ],
   "source": [
    "from langchain_community.document_loaders import PyMuPDFLoader\n",
    "\n",
    "# create an instance of the PyMuPDF loader\n",
    "loader = PyMuPDFLoader(FILE_PATH)\n",
    "\n",
    "# load the document\n",
    "docs = loader.load()\n",
    "\n",
    "# print the contents of the document\n",
    "print(docs[10].page_content[:300])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "bbca8760",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:35:47.619069Z",
     "start_time": "2025-01-02T12:35:47.616724Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "bbca8760",
    "outputId": "dd8598e0-1c48-4425-d745-e301dfbb6245"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[metadata]\n",
      "['source', 'file_path', 'page', 'total_pages', 'format', 'title', 'author', 'subject', 'keywords', 'creator', 'producer', 'creationDate', 'modDate', 'trapped']\n",
      "\n",
      "[examples]\n",
      "source       : ./data/layout-parser-paper.pdf\n",
      "file_path    : ./data/layout-parser-paper.pdf\n",
      "page         : 0\n",
      "total_pages  : 16\n",
      "format       : PDF 1.5\n",
      "title        : \n",
      "author       : \n",
      "subject      : \n",
      "keywords     : \n",
      "creator      : LaTeX with hyperref\n",
      "producer     : pdfTeX-1.40.21\n",
      "creationDate : D:20210622012710Z\n",
      "modDate      : D:20210622012710Z\n",
      "trapped      : \n"
     ]
    }
   ],
   "source": [
    "show_metadata(docs)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c89745c9",
   "metadata": {
    "id": "c89745c9"
   },
   "source": [
    "## Unstructured\n",
    "\n",
    "[Unstructured](https://docs.unstructured.io/welcome) is a powerful library designed to handle various unstructured and semi-structured document formats. It excels at automatically identifying and categorizing different components within documents.\n",
    "Currently supports loading text files, PowerPoints, HTML, PDFs, images, and more.\n",
    "\n",
    "LangChain's [```UnstructuredPDFLoader```](\n",
    "https://python.langchain.com/api_reference/unstructured/document_loaders/langchain_unstructured.document_loaders.UnstructuredLoader.html) integrates with Unstructured to parse PDF documents into LangChain Document objects.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "40cb362e",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:38:22.957614Z",
     "start_time": "2025-01-02T12:35:47.621864Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "40cb362e",
    "outputId": "66d20311-5955-4f94-cbd3-a109989a9854"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Matplotlib is building the font cache; this may take a moment.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1 2 0 2\n",
      "\n",
      "n u J\n",
      "\n",
      "1 2\n",
      "\n",
      "]\n",
      "\n",
      "V C . s c [\n",
      "\n",
      "2 v 8 4 3 5 1 . 3 0 1 2 : v i X r a\n",
      "\n",
      "LayoutParser: A Uniﬁed Toolkit for Deep Learning Based Document Image Analysis\n",
      "\n",
      "Zejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain Lee4, Jacob Carlson3, and Weining Li5\n",
      "\n",
      "1 Allen Institute for AI s\n"
     ]
    }
   ],
   "source": [
    "from langchain_community.document_loaders import UnstructuredPDFLoader\n",
    "\n",
    "# create an instance of UnstructuredPDFLoader\n",
    "loader = UnstructuredPDFLoader(FILE_PATH)\n",
    "\n",
    "# load the data\n",
    "docs = loader.load()\n",
    "\n",
    "# print the contents of the document\n",
    "print(docs[0].page_content[:300])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "926b929e",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:38:22.958318Z",
     "start_time": "2025-01-02T12:38:22.949154Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "926b929e",
    "outputId": "062b5bd6-2082-4b48-fdee-8e658ae1566d"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[metadata]\n",
      "['source']\n",
      "\n",
      "[examples]\n",
      "source : ./data/layout-parser-paper.pdf\n"
     ]
    }
   ],
   "source": [
    "show_metadata(docs)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "N2QiACIAlq13",
   "metadata": {
    "id": "N2QiACIAlq13"
   },
   "source": [
    "Internally, unstructured creates different \"**elements**\" for each chunk of text. By default, these are combined, but can be easily separated by specifying ```mode=\"elements\"```."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "f6f97007",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:38:23.986342Z",
     "start_time": "2025-01-02T12:38:22.957979Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "f6f97007",
    "outputId": "354fe062-75da-4ebc-e83e-6a6636dd64a1"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1 2 0 2\n"
     ]
    }
   ],
   "source": [
    "# Create an instance of UnstructuredPDFLoader (mode=\"elements”)\n",
    "loader = UnstructuredPDFLoader(FILE_PATH, mode=\"elements\")\n",
    "\n",
    "# load the data\n",
    "docs = loader.load()\n",
    "\n",
    "# print the contents of the document\n",
    "print(docs[0].page_content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5mXq_Yx2i2YD",
   "metadata": {
    "id": "5mXq_Yx2i2YD"
   },
   "source": [
    "See the full set of element types for this particular article."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "EkW_QQp4i2YD",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:38:23.994814Z",
     "start_time": "2025-01-02T12:38:23.985486Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "EkW_QQp4i2YD",
    "outputId": "1117c6a6-3980-4657-a6aa-8ec6f32e8833"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'ListItem', 'NarrativeText', 'Title', 'UncategorizedText'}"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "set(doc.metadata[\"category\"] for doc in docs) # extract data categories"
   ]
  },
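  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The ```category``` metadata makes it easy to filter elements. For example, a small sketch that keeps only the title elements from the ```docs``` loaded above:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Keep only the elements categorized as titles\n",
    "titles = [doc for doc in docs if doc.metadata[\"category\"] == \"Title\"]\n",
    "\n",
    "# Print the first few titles\n",
    "for t in titles[:5]:\n",
    "    print(t.page_content)"
   ]
  },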
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "7ec0c096",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:38:24.002149Z",
     "start_time": "2025-01-02T12:38:23.997448Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "7ec0c096",
    "outputId": "c77f4ad7-710d-4aaa-912e-1a97cf610f59"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[metadata]\n",
      "['source', 'coordinates', 'file_directory', 'filename', 'languages', 'last_modified', 'page_number', 'filetype', 'category', 'element_id']\n",
      "\n",
      "[examples]\n",
      "source         : ./data/layout-parser-paper.pdf\n",
      "coordinates    : {'points': ((16.34, 213.36), (16.34, 253.36), (36.34, 253.36), (36.34, 213.36)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}\n",
      "file_directory : ./data\n",
      "filename       : layout-parser-paper.pdf\n",
      "languages      : ['eng']\n",
      "last_modified  : 2025-01-02T18:23:25\n",
      "page_number    : 1\n",
      "filetype       : application/pdf\n",
      "category       : UncategorizedText\n",
      "element_id     : d3ce55f220dfb75891b4394a18bcb973\n"
     ]
    }
   ],
   "source": [
    "show_metadata(docs)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f5097872",
   "metadata": {
    "id": "f5097872"
   },
   "source": [
    "## PyPDFium2\n",
    "\n",
    "LangChain's [```PyPDFium2Loader```](\n",
    "https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFium2Loader.html) integrates with [PyPDFium2](https://github.com/pypdfium2-team/pypdfium2) to parse PDF documents into LangChain Document objects."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "18c84bf5",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:38:24.765674Z",
     "start_time": "2025-01-02T12:38:24.003606Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "18c84bf5",
    "outputId": "1a7e93d9-b067-47d4-aa21-93426088a458"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "LayoutParser: A Unified Toolkit for DL-Based DIA 11\r\n",
      "focuses on precision, efficiency, and robustness. The target documents may have\r\n",
      "complicated structures, and may require training multiple layout detection models\r\n",
      "to achieve the optimal accuracy. Light-weight pipelines are built for relatively\r\n",
      "s\n"
     ]
    }
   ],
   "source": [
    "from langchain_community.document_loaders import PyPDFium2Loader\n",
    "\n",
    "# create an instance of the PyPDFium2 loader\n",
    "loader = PyPDFium2Loader(FILE_PATH)\n",
    "\n",
    "# load data\n",
    "docs = loader.load()\n",
    "\n",
    "# print the contents of the document\n",
    "print(docs[10].page_content[:300])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "98d80f7a",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "**Note**: When using ```PyPDFium2Loader```, you may notice warning messages related to ```get_text_range()```. These warnings are part of the library's internal operations and do not affect the PDF processing\n",
    "functionality. You can safely proceed with the tutorial despite these warnings, as they are\n",
    "a normal part of the development environment and do not impact the learning objectives."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "d4cd8966",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:38:24.769632Z",
     "start_time": "2025-01-02T12:38:24.764497Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "d4cd8966",
    "outputId": "b42dc031-e8dc-4184-c2c7-2dbbe3b7585d"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[metadata]\n",
      "['source', 'page']\n",
      "\n",
      "[examples]\n",
      "source : ./data/layout-parser-paper.pdf\n",
      "page   : 0\n"
     ]
    }
   ],
   "source": [
    "show_metadata(docs)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5d2b2a6a",
   "metadata": {
    "id": "5d2b2a6a"
   },
   "source": [
    "## PDFMiner\n",
    "[PDFMiner](https://github.com/pdfminer/pdfminer.six) is a specialized Python library focused on text extraction and layout analysis from PDF documents.\n",
    "\n",
    "LangChain's [```PDFMinerLoader```](\n",
    "https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PDFMinerLoader.html) integrates with PDFMiner to parse PDF documents into LangChain Document objects.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "5feac159",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:38:25.688660Z",
     "start_time": "2025-01-02T12:38:24.770005Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "5feac159",
    "outputId": "4dc81805-2085-49fe-ba86-1ac68ba65c08"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1\n",
      "2\n",
      "0\n",
      "2\n",
      "\n",
      "n\n",
      "u\n",
      "J\n",
      "\n",
      "1\n",
      "2\n",
      "\n",
      "]\n",
      "\n",
      "V\n",
      "C\n",
      ".\n",
      "s\n",
      "c\n",
      "[\n",
      "\n",
      "2\n",
      "v\n",
      "8\n",
      "4\n",
      "3\n",
      "5\n",
      "1\n",
      ".\n",
      "3\n",
      "0\n",
      "1\n",
      "2\n",
      ":\n",
      "v\n",
      "i\n",
      "X\n",
      "r\n",
      "a\n",
      "\n",
      "LayoutParser: A Uniﬁed Toolkit for Deep\n",
      "Learning Based Document Image Analysis\n",
      "\n",
      "Zejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\n",
      "Lee4, Jacob Carlson3, and Weining Li5\n",
      "\n",
      "1 Allen Institute for AI\n",
      "s\n"
     ]
    }
   ],
   "source": [
    "from langchain_community.document_loaders import PDFMinerLoader\n",
    "\n",
    "# Create a PDFMiner loader instance\n",
    "loader = PDFMinerLoader(FILE_PATH)\n",
    "\n",
    "# load data\n",
    "docs = loader.load()\n",
    "\n",
    "# print the contents of the document\n",
    "print(docs[0].page_content[:300])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "65a85f23",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:38:25.693111Z",
     "start_time": "2025-01-02T12:38:25.688967Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "65a85f23",
    "outputId": "d63d24bf-4b5f-4a0a-fc78-c33379c48fa2"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[metadata]\n",
      "['source']\n",
      "\n",
      "[examples]\n",
      "source : ./data/layout-parser-paper.pdf\n"
     ]
    }
   ],
   "source": [
    "show_metadata(docs)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "j91wStryi2YE",
   "metadata": {
    "id": "j91wStryi2YE"
   },
   "source": [
    "### Using PDFMiner to generate HTML text\n",
    "\n",
    "This method parses the output HTML with [```BeautifulSoup```](https://www.crummy.com/software/BeautifulSoup/) to recover richer structural information, such as font sizes, page numbers, and PDF headers/footers, which can help you split the text into semantic sections."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "d299c2be",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:38:26.632033Z",
     "start_time": "2025-01-02T12:38:25.694448Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "d299c2be",
    "outputId": "4871dccc-590f-450b-979f-d301b439d6d0"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<html><head>\n",
      "<meta http-equiv=\"Content-Type\" content=\"text/html\">\n",
      "</head><body>\n",
      "<span style=\"position:absolute; border: gray 1px solid; left:0px; top:50px; width:612px; height:792px;\"></span>\n",
      "<div style=\"position:absolute; top:50px;\"><a name=\"1\">Page 1</a></div>\n",
      "<div style=\"position:absolute; border\n"
     ]
    }
   ],
   "source": [
    "from langchain_community.document_loaders import PDFMinerPDFasHTMLLoader\n",
    "\n",
    "# create an instance of PDFMinerPDFasHTMLLoader\n",
    "loader = PDFMinerPDFasHTMLLoader(FILE_PATH)\n",
    "\n",
    "# load the document\n",
    "docs = loader.load()\n",
    "\n",
    "# print the contents of the document\n",
    "print(docs[0].page_content[:300])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "0aacfd09",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:38:26.639217Z",
     "start_time": "2025-01-02T12:38:26.633178Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "0aacfd09",
    "outputId": "3ccd237c-1b40-4d04-d7fe-54bed399f031"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[metadata]\n",
      "['source']\n",
      "\n",
      "[examples]\n",
      "source : ./data/layout-parser-paper.pdf\n"
     ]
    }
   ],
   "source": [
    "show_metadata(docs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "df728c9d",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:38:27.009831Z",
     "start_time": "2025-01-02T12:38:26.639587Z"
    },
    "id": "df728c9d"
   },
   "outputs": [],
   "source": [
    "from bs4 import BeautifulSoup\n",
    "\n",
    "soup = BeautifulSoup(docs[0].page_content, \"html.parser\") # initialize HTML parser\n",
    "content = soup.find_all(\"div\") # search for all div tags"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "15d75111",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:38:27.021797Z",
     "start_time": "2025-01-02T12:38:27.019140Z"
    },
    "id": "15d75111"
   },
   "outputs": [],
   "source": [
    "import re\n",
    "\n",
    "cur_fs = None\n",
    "cur_text = \"\"\n",
    "snippets = []  # collect all snippets of the same font size\n",
    "for c in content:\n",
    "    sp = c.find(\"span\")\n",
    "    if not sp:\n",
    "        continue\n",
    "    st = sp.get(\"style\")\n",
    "    if not st:\n",
    "        continue\n",
    "    fs = re.findall(r\"font-size:(\\d+)px\", st)  # use a raw string so \\d is a valid regex escape\n",
    "    if not fs:\n",
    "        continue\n",
    "    fs = int(fs[0])\n",
    "    if not cur_fs:\n",
    "        cur_fs = fs\n",
    "    if fs == cur_fs:\n",
    "        cur_text += c.text\n",
    "    else:\n",
    "        snippets.append((cur_text, cur_fs))\n",
    "        cur_fs = fs\n",
    "        cur_text = c.text\n",
    "snippets.append((cur_text, cur_fs))\n",
    "# Note: one could also deduplicate snippets here, since a PDF's header/footer repeats on every page and otherwise shows up as duplicate text"
   ]
  },
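  {
   "cell_type": "markdown",
   "id": "dedup-snippets-md",
   "metadata": {},
   "source": [
    "As noted above, repeated header/footer text can be filtered out. A minimal sketch of one such strategy, not part of the original pipeline, is to keep only the first occurrence of each snippet's text (this assumes headers/footers repeat verbatim):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dedup-snippets",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical sketch: drop snippets whose text has already been seen,\n",
    "# which removes headers/footers that repeat verbatim across pages.\n",
    "seen = set()\n",
    "deduped = []\n",
    "for text, fs in snippets:\n",
    "    key = text.strip()\n",
    "    if key and key not in seen:\n",
    "        seen.add(key)\n",
    "        deduped.append((text, fs))"
   ]
  },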
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "8061d2e5",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:38:27.045619Z",
     "start_time": "2025-01-02T12:38:27.029184Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "8061d2e5",
    "outputId": "07232863-8448-4026-fc90-0007f921d7e9"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "page_content='Recently, various DL models and datasets have been developed for layout analysis\n",
      "tasks. The dhSegment [22] utilizes fully convolutional networks [20] for segmen-\n",
      "tation tasks on historical documents. Object detection-based methods like Faster\n",
      "R-CNN [28] and Mask R-CNN [12] are used for identifying document elements [38]\n",
      "and detecting tables [30, 26]. Most recently, Graph Neural Networks [29] have also\n",
      "been used in table detection [27]. However, these models are usually implemented\n",
      "individually and there is no uniﬁed framework to load and use such models.\n",
      "There has been a surge of interest in creating open-source tools for document\n",
      "image processing: a search of document image analysis in Github leads to 5M\n",
      "relevant code pieces 6; yet most of them rely on traditional rule-based methods\n",
      "or provide limited functionalities. The closest prior research to our work is the\n",
      "OCR-D project7, which also tries to build a complete toolkit for DIA. However,\n",
      "similar to the platform developed by Neudecker et al. [21], it is designed for\n",
      "analyzing historical documents, and provides no supports for recent DL models.\n",
      "The DocumentLayoutAnalysis project8 focuses on processing born-digital PDF\n",
      "documents via analyzing the stored PDF data. Repositories like DeepLayout9\n",
      "and Detectron2-PubLayNet10 are individual deep learning models trained on\n",
      "layout analysis datasets without support for the full DIA pipeline. The Document\n",
      "Analysis and Exploitation (DAE) platform [15] and the DeepDIVA project [2]\n",
      "aim to improve the reproducibility of DIA methods (or DL models), yet they\n",
      "are not actively maintained. OCR engines like Tesseract [14], easyOCR11 and\n",
      "paddleOCR12 usually do not come with comprehensive functionalities for other\n",
      "DIA tasks like layout analysis.\n",
      "Recent years have also seen numerous eﬀorts to create libraries for promoting\n",
      "reproducibility and reusability in the ﬁeld of DL. Libraries like Dectectron2 [35],\n",
      "6 The number shown is obtained by specifying the search type as ‘code’.\n",
      "7 https://ocr-d.de/en/about\n",
      "8 https://github.com/BobLd/DocumentLayoutAnalysis\n",
      "9 https://github.com/leonlulu/DeepLayout\n",
      "10 https://github.com/hpanwar08/detectron2\n",
      "11 https://github.com/JaidedAI/EasyOCR\n",
      "12 https://github.com/PaddlePaddle/PaddleOCR\n",
      "4\n",
      "Z. Shen et al.\n",
      "Fig. 1: The overall architecture of LayoutParser. For an input document image,\n",
      "the core LayoutParser library provides a set of oﬀ-the-shelf tools for layout\n",
      "detection, OCR, visualization, and storage, backed by a carefully designed layout\n",
      "data structure. LayoutParser also supports high level customization via eﬃcient\n",
      "layout annotation and model training functions. These improve model accuracy\n",
      "on the target samples. The community platform enables the easy sharing of DIA\n",
      "models and whole digitization pipelines to promote reusability and reproducibility.\n",
      "A collection of detailed documentation, tutorials and exemplar projects make\n",
      "LayoutParser easy to learn and use.\n",
      "AllenNLP [8] and transformers [34] have provided the community with complete\n",
      "DL-based support for developing and deploying models for general computer\n",
      "vision and natural language processing problems. LayoutParser, on the other\n",
      "hand, specializes speciﬁcally in DIA tasks. LayoutParser is also equipped with a\n",
      "community platform inspired by established model hubs such as Torch Hub [23]\n",
      "and TensorFlow Hub [1]. It enables the sharing of pretrained models as well as\n",
      "full document processing pipelines that are unique to DIA tasks.\n",
      "There have been a variety of document data collections to facilitate the\n",
      "development of DL models. Some examples include PRImA [3](magazine layouts),\n",
      "PubLayNet [38](academic paper layouts), Table Bank [18](tables in academic\n",
      "papers), Newspaper Navigator Dataset [16, 17](newspaper ﬁgure layouts) and\n",
      "HJDataset [31](historical Japanese document layouts). A spectrum of models\n",
      "trained on these datasets are currently available in the LayoutParser model zoo\n",
      "to support diﬀerent use cases.\n",
      "' metadata={'heading': '2 Related Work\\n', 'content_font': 9, 'heading_font': 11, 'source': './data/layout-parser-paper.pdf'}\n"
     ]
    }
   ],
   "source": [
    "from langchain_core.documents import Document\n",
    "\n",
    "cur_idx = -1\n",
    "semantic_snippets = []\n",
    "# Assumption: headings have higher font size than their respective content\n",
    "for s in snippets:\n",
    "    # if current snippet's font size > previous section's heading => it is a new heading\n",
    "    if (\n",
    "        not semantic_snippets\n",
    "        or s[1] > semantic_snippets[cur_idx].metadata[\"heading_font\"]\n",
    "    ):\n",
    "        metadata = {\"heading\": s[0], \"content_font\": 0, \"heading_font\": s[1]}\n",
    "        metadata.update(docs[0].metadata)\n",
    "        semantic_snippets.append(Document(page_content=\"\", metadata=metadata))\n",
    "        cur_idx += 1\n",
    "        continue\n",
    "\n",
    "    # if current snippet's font size <= previous section's content font => it belongs to the same section\n",
    "    if (\n",
    "        not semantic_snippets[cur_idx].metadata[\"content_font\"]\n",
    "        or s[1] <= semantic_snippets[cur_idx].metadata[\"content_font\"]\n",
    "    ):\n",
    "        semantic_snippets[cur_idx].page_content += s[0]\n",
    "        semantic_snippets[cur_idx].metadata[\"content_font\"] = max(\n",
    "            s[1], semantic_snippets[cur_idx].metadata[\"content_font\"]\n",
    "        )\n",
    "        continue\n",
    "\n",
    "    # if current snippet's font size > previous section's content but less than its heading => start a new section\n",
    "    metadata = {\"heading\": s[0], \"content_font\": 0, \"heading_font\": s[1]}\n",
    "    metadata.update(docs[0].metadata)\n",
    "    semantic_snippets.append(Document(page_content=\"\", metadata=metadata))\n",
    "    cur_idx += 1\n",
    "\n",
    "print(semantic_snippets[4])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "zknp67O4UaK6",
   "metadata": {
    "id": "zknp67O4UaK6"
   },
   "source": [
    "## PDFPlumber\n",
    "[PDFPlumber](https://github.com/jsvine/pdfplumber) is a PDF parsing library that excels at extracting text and tables from PDFs.\n",
    "\n",
    "LangChain's [```PDFPlumberLoader```](\n",
    "https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PDFPlumberLoader.html) integrates with PDFPlumber to parse PDF documents into LangChain Document objects.\n",
    "\n",
    "Like PyMuPDF, it returns one ```Document``` per page and includes detailed metadata about the PDF and its pages."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "e97bd7f1",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:38:29.054649Z",
     "start_time": "2025-01-02T12:38:27.035849Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "e97bd7f1",
    "outputId": "01062b2d-558e-4652-e006-d1afd64230af"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "LayoutParser: A Unified Toolkit for DL-Based DIA 11\n",
      "focuses on precision, efficiency, and robustness. The target documents may have\n",
      "complicatedstructures,andmayrequiretrainingmultiplelayoutdetectionmodels\n",
      "to achieve the optimal accuracy. Light-weight pipelines are built for relatively\n",
      "simple documen\n"
     ]
    }
   ],
   "source": [
    "from langchain_community.document_loaders import PDFPlumberLoader\n",
    "\n",
    "# create a PDF document loader instance\n",
    "loader = PDFPlumberLoader(FILE_PATH)\n",
    "\n",
    "# load the document\n",
    "docs = loader.load()\n",
    "\n",
    "# print the content of the eleventh page (index 10)\n",
    "print(docs[10].page_content[:300])"
   ]
  },
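  {
   "cell_type": "markdown",
   "id": "pdfplumber-tables-md",
   "metadata": {},
   "source": [
    "```PDFPlumberLoader``` returns page text, but the table extraction PDFPlumber is known for is available by using the ```pdfplumber``` package directly. A minimal sketch (the page index here is an arbitrary choice for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "pdfplumber-tables",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pdfplumber\n",
    "\n",
    "# open the PDF directly with pdfplumber\n",
    "with pdfplumber.open(FILE_PATH) as pdf:\n",
    "    page = pdf.pages[4]  # arbitrary page; pick one that contains a table\n",
    "    tables = page.extract_tables()  # each table is a list of rows (lists of cell strings)\n",
    "    print(f\"found {len(tables)} table(s) on page {page.page_number}\")"
   ]
  },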
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "e250ac42",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-01-02T12:38:29.127022Z",
     "start_time": "2025-01-02T12:38:29.077119Z"
    },
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "e250ac42",
    "outputId": "98a36632-918a-4703-ef7f-ef83b4474f92"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[metadata]\n",
      "['source', 'file_path', 'page', 'total_pages', 'Author', 'CreationDate', 'Creator', 'Keywords', 'ModDate', 'PTEX.Fullbanner', 'Producer', 'Subject', 'Title', 'Trapped']\n",
      "\n",
      "[examples]\n",
      "source          : ./data/layout-parser-paper.pdf\n",
      "file_path       : ./data/layout-parser-paper.pdf\n",
      "page            : 0\n",
      "total_pages     : 16\n",
      "Author          : \n",
      "CreationDate    : D:20210622012710Z\n",
      "Creator         : LaTeX with hyperref\n",
      "Keywords        : \n",
      "ModDate         : D:20210622012710Z\n",
      "PTEX.Fullbanner : This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live 2020) kpathsea version 6.3.2\n",
      "Producer        : pdfTeX-1.40.21\n",
      "Subject         : \n",
      "Title           : \n",
      "Trapped         : False\n"
     ]
    }
   ],
   "source": [
    "show_metadata(docs)"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.8-final"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
