{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "841e533d-ebb3-406d-9da7-b19e2c5f5866",
   "metadata": {
    "id": "841e533d-ebb3-406d-9da7-b19e2c5f5866"
   },
   "source": [
    "# PDF Document ingestion and chunking using Data Prep Kit\n",
    "\n",
    "This notebook introduces two DPK transforms that are based on Docling.\n",
    "\n",
    "Here is the workflow:\n",
    "\n",
    "- docling2parquet: Extract text from PDF documents\n",
    "- doc_chunk: Break the documents into chunks\n",
    "\n",
    "<b>Pre-requisite</b>: You need to download one or more PDF files for testing. The current release of the notebook was tested with [this file](https://github.com/user-attachments/files/20354534/opea_project_github_io_latest_introduction_index_html.pdf) provided by @ezelanza in [this issue](https://github.com/data-prep-kit/data-prep-kit/issues/1288). We assume the test PDF(s) are downloaded into the notebook folder."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b15976e3",
   "metadata": {
    "id": "b15976e3"
   },
   "source": [
    "## How to run this notebook\n",
    "\n",
     "If you have Python 3.11 or higher on your machine, you can download the notebook and run it locally in a virtual environment set up as follows:\n",
    "\n",
    "```\n",
    "python -m venv venv\n",
    "source venv/bin/activate\n",
    "pip install jupyterlab\n",
    "jupyter lab docling.ipynb\n",
    "```\n",
    "\n",
    "For more advanced setup, please see setup [guide](https://github.com/data-prep-kit/data-prep-kit/blob/dev/doc/quick-start/quick-start.md).\n",
    "\n",
     "An earlier version of this notebook was tested successfully on Google Colab. However, ongoing changes in the Colab environment could introduce unexpected behavior or breakage. If you wish to try the Colab environment, click on [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/data-prep-kit/data-prep-kit/blob/dev/recipes/DPK-sequence/docling.ipynb).\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c05f3e71",
   "metadata": {},
   "source": [
     "## Step-1: Install DPK and a couple of helper packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2ebee270",
   "metadata": {},
   "outputs": [],
   "source": [
     "# NOTE: the standard release install is commented out below; this notebook\n",
     "# temporarily installs DPK from a fork branch (numpy-dependencies) instead.\n",
     "#%%capture cap --no-stderr\n",
     "#%pip install \"data-prep-toolkit-transforms[docling2parquet,doc_chunk]\"\n",
     "%pip install \"data-prep-toolkit @ git+https://github.com/touma-I/data-prep-kit-pkg@numpy-dependencies#subdirectory=data-processing-lib\"\n",
     "%pip install \"data-prep-toolkit-transforms[docling2parquet,doc_chunk] @ git+https://github.com/touma-I/data-prep-kit-pkg@numpy-dependencies#subdirectory=transforms\"\n"
   ]
  },
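  {
   "cell_type": "markdown",
   "id": "f0a1b2c3-d4e5-4f60-8a9b-000000000001",
   "metadata": {},
   "source": [
    "### 1.1: Download a test PDF (optional)\n",
    "\n",
    "If you have not already placed a PDF in the notebook folder, the cell below fetches the sample file mentioned in the prerequisite note. The target filename `sample.pdf` is arbitrary; any `.pdf` file in this folder will be picked up by the next step."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0a1b2c3-d4e5-4f60-8a9b-000000000002",
   "metadata": {},
   "outputs": [],
   "source": [
    "import urllib.request\n",
    "\n",
    "# Sample PDF referenced in the prerequisite note above;\n",
    "# 'sample.pdf' is an arbitrary local filename\n",
    "url = \"https://github.com/user-attachments/files/20354534/opea_project_github_io_latest_introduction_index_html.pdf\"\n",
    "urllib.request.urlretrieve(url, \"sample.pdf\")"
   ]
  },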
  {
   "cell_type": "markdown",
   "id": "2449e5c7-078c-4ad6-a2f6-21d39d4da3fb",
   "metadata": {
    "id": "2449e5c7-078c-4ad6-a2f6-21d39d4da3fb"
   },
   "source": [
     "## Step-2: Extract Data from PDF (docling2parquet)\n",
     "\n",
     "In this step we read the PDF files and extract their text.\n",
     "\n",
     "[Docling2Parquet documentation](https://github.com/data-prep-kit/data-prep-kit/blob/dev/transforms/language/docling2parquet/README.md)\n",
     "\n",
     "The transform uses the [Docling package](https://github.com/DS4SD/docling).\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b0cd8ebd-bf71-42d6-a397-8df0c7b66a26",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "b0cd8ebd-bf71-42d6-a397-8df0c7b66a26",
    "outputId": "c3f18efd-2f3d-4ab8-88e3-6976f09ed5ac"
   },
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "from dpk_docling2parquet import Docling2Parquet\n",
    "from dpk_docling2parquet import docling2parquet_contents_types\n",
    "\n",
     "result = Docling2Parquet(input_folder=\".\",\n",
     "                         output_folder=\"docling2parquet\",\n",
     "                         data_files_to_use=['.pdf'],\n",
     "                         docling2parquet_contents_type=docling2parquet_contents_types.MARKDOWN,  # extract as markdown\n",
     "                         ).transform()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5ca790e0",
   "metadata": {
    "id": "5ca790e0"
   },
   "source": [
    "### 2.1: Inspect Generated parquet\n",
    "\n",
    "Here we should see one entry per input file processed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "60feb712-ba31-42c2-b0a3-75e394ff8538",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import glob\n",
    "\n",
    "df = pd.concat(\n",
    "    (pd.read_parquet(parquet_file)\n",
    "    for parquet_file in glob.glob(\"docling2parquet/*.parquet\")),\n",
    "    ignore_index=True\n",
    ")\n",
    "df"
   ]
  },
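  {
   "cell_type": "markdown",
   "id": "f0a1b2c3-d4e5-4f60-8a9b-000000000003",
   "metadata": {},
   "source": [
    "### 2.2: Preview the extracted text\n",
    "\n",
    "As a quick sanity check, the cell below prints the beginning of the extracted markdown for the first document. This assumes the extracted text lives in a column named `contents`; if your DPK version uses a different column name, check `df.columns` above and adjust."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0a1b2c3-d4e5-4f60-8a9b-000000000004",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Preview the first document's extracted markdown\n",
    "# (the 'contents' column name is an assumption; verify with df.columns)\n",
    "print(df['contents'].iloc[0][:500])"
   ]
  },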
  {
   "cell_type": "markdown",
   "id": "403ef191-d039-4dc7-bc41-af34156ac9ee",
   "metadata": {
    "id": "5ca790e0"
   },
   "source": [
     "## Step-3: Break the document into chunks\n",
     "\n",
     "[doc_chunk Documentation](https://github.com/data-prep-kit/data-prep-kit/blob/dev/transforms/language/doc_chunk/README.md)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "215f2122-b298-4b54-8948-c35404651c5d",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "from dpk_doc_chunk import DocChunk\n",
    "\n",
     "result = DocChunk(input_folder=\"docling2parquet\",\n",
     "                  output_folder=\"doc_chunk\",\n",
     "                  doc_chunk_chunking_type=\"li_markdown\",\n",
     "                  doc_chunk_chunk_size_tokens=128,  # default 128\n",
     "                  doc_chunk_chunk_overlap_tokens=30  # default 30\n",
     "                  ).transform()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9c30bd86-fbd4-40fe-bb25-2295808ee3c9",
   "metadata": {
    "id": "5ca790e0"
   },
   "source": [
     "### 3.1: Inspect Generated parquet\n",
     "\n",
     "Here we should see one entry per chunk, so typically multiple rows per input file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "14928622-6830-4b88-81c5-36e8dc77ffb1",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import glob\n",
    "\n",
    "df = pd.concat(\n",
    "    (pd.read_parquet(parquet_file)\n",
    "    for parquet_file in glob.glob(\"doc_chunk/*.parquet\")),\n",
    "    ignore_index=True\n",
    ")\n",
    "df"
   ]
  },
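  {
   "cell_type": "markdown",
   "id": "f0a1b2c3-d4e5-4f60-8a9b-000000000005",
   "metadata": {},
   "source": [
    "### 3.2: Chunk statistics\n",
    "\n",
    "The cell below counts the chunks produced per source file and summarizes the chunk-text lengths. The `filename` and `contents` column names are assumptions; verify them with `df.columns` above and adjust if your DPK version differs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0a1b2c3-d4e5-4f60-8a9b-000000000006",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Chunks per source file and chunk-length distribution\n",
    "# (the 'filename' and 'contents' column names are assumptions; verify with df.columns)\n",
    "print(df.groupby('filename').size())\n",
    "df['contents'].str.len().describe()"
   ]
  },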
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7a5185f5-ebe8-4f37-bf99-cb3db806f3e0",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
