{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "841e533d-ebb3-406d-9da7-b19e2c5f5866",
   "metadata": {},
   "source": [
    "<div style=\"background-color: #04D7FD; padding: 20px; text-align: left;\">\n",
    "    <h1 style=\"color: #000000; font-size: 36px; margin: 0;\">Data Processing for RAG with Data Prep Kit (RAY)</h1>\n",
    "    \n",
    "</div>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b15976e3",
   "metadata": {},
   "source": [
    "## Before Running the notebook\n",
    "\n",
    "Please complete [setting up the Python dev environment](./setup-python-dev-env.md)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "053ecf08-5f62-4b99-9347-8a0955843d21",
   "metadata": {},
   "source": [
    "## Overview\n",
    "\n",
    "This notebook will process PDF documents as part of a RAG pipeline.\n",
    "\n",
    "![](media/rag-overview-2.png)\n",
    "\n",
    "This notebook performs steps 1, 2 and 3 of the RAG pipeline.\n",
    "\n",
    "Here are the processing steps:\n",
    "\n",
    "- **pdf2parquet**: Extract text (in markdown format) from PDFs and store it as parquet files\n",
    "- **Exact dedup**: Filter out documents with identical content\n",
    "- **Chunk documents**: Split the documents into 'meaningful sections' (paragraphs, sentences, etc.)\n",
    "- **Text encoder**: Convert chunks into vectors using embedding models"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e8b10be1",
   "metadata": {},
   "source": [
    "## Step-1: Configuration"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "b58e8a2e",
   "metadata": {},
   "outputs": [],
   "source": [
    "## setup path to utils folder\n",
    "import sys\n",
    "sys.path.append('../utils')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "33345487",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Ray configuration: CPUs=0.5,   memory=2 GB,  workers=2\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "from my_config import MY_CONFIG\n",
    "\n",
    "## RAY CONFIGURATION\n",
    "num_cpus_available =  os.cpu_count()\n",
    "# print (num_cpus_available)\n",
    "# MY_CONFIG.RAY_NUM_CPUS = num_cpus_available // 2  ## use half the available cores for processing\n",
    "MY_CONFIG.RAY_NUM_CPUS =  0.5\n",
    "MY_CONFIG.RAY_MEMORY_GB = 2  # GB\n",
    "# MY_CONFIG.RAY_RUNTIME_WORKERS = num_cpus_available // 3\n",
    "MY_CONFIG.RAY_RUNTIME_WORKERS = 2\n",
    "\n",
    "print (f\"Ray configuration: CPUs={MY_CONFIG.RAY_NUM_CPUS},   memory={MY_CONFIG.RAY_MEMORY_GB} GB,  workers={MY_CONFIG.RAY_RUNTIME_WORKERS}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "40c58856",
   "metadata": {},
   "source": [
    "## Step-2:  Data\n",
    "\n",
    "We will use white papers about LLMs:\n",
    "\n",
    "- [Granite Code Models](https://arxiv.org/abs/2405.04324)\n",
    "- [Attention is all you need](https://arxiv.org/abs/1706.03762)\n",
    "\n",
    "You can, of course, substitute your own data below."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6bce5939",
   "metadata": {},
   "source": [
    "### 2.1 - Download data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "1bfde6eb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ Cleared input directory\n",
      "\n",
      "input/attention.pdf (2.22 MB) downloaded successfully.\n",
      "\n",
      "input/granite.pdf (1.27 MB) downloaded successfully.\n",
      "\n",
      "input/granite2.pdf (1.27 MB) downloaded successfully.\n"
     ]
    }
   ],
   "source": [
    "import os, sys\n",
    "import shutil\n",
    "from file_utils import download_file\n",
    "\n",
    "shutil.rmtree(MY_CONFIG.INPUT_DATA_DIR, ignore_errors=True)\n",
    "os.makedirs(MY_CONFIG.INPUT_DATA_DIR, exist_ok=True)\n",
    "print (\"✅ Cleared input directory\")\n",
    " \n",
    "download_file (url = 'https://arxiv.org/pdf/1706.03762', local_file = os.path.join(MY_CONFIG.INPUT_DATA_DIR, 'attention.pdf' ))\n",
    "download_file (url = 'https://arxiv.org/pdf/2405.04324', local_file = os.path.join(MY_CONFIG.INPUT_DATA_DIR, 'granite.pdf' ))\n",
    "download_file (url = 'https://arxiv.org/pdf/2405.04324', local_file = os.path.join(MY_CONFIG.INPUT_DATA_DIR, 'granite2.pdf' )) # duplicate\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "72510ae6-48b0-4b88-9e13-a623281c3a63",
   "metadata": {},
   "source": [
    "### 2.2  - Set input/output path variables for the pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "60ac8bee-0960-4309-b225-d7a211b14262",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ Cleared output directory\n"
     ]
    }
   ],
   "source": [
    "import os, sys\n",
    "import shutil\n",
    "\n",
    "if not os.path.exists(MY_CONFIG.INPUT_DATA_DIR ):\n",
    "    raise Exception (f\"❌ Input folder MY_CONFIG.INPUT_DATA_DIR = '{MY_CONFIG.INPUT_DATA_DIR}' not found\")\n",
    "\n",
    "output_parquet_dir = os.path.join (MY_CONFIG.OUTPUT_FOLDER, '01_parquet_out')\n",
    "output_exact_dedupe_dir = os.path.join (MY_CONFIG.OUTPUT_FOLDER, '02_dedupe_out')\n",
    "output_chunk_dir = os.path.join (MY_CONFIG.OUTPUT_FOLDER, '03_chunk_out')\n",
    "output_embeddings_dir = os.path.join (MY_CONFIG.OUTPUT_FOLDER, '04_embeddings_out')\n",
    "\n",
    "\n",
    "## clear output folder\n",
    "shutil.rmtree(MY_CONFIG.OUTPUT_FOLDER, ignore_errors=True)\n",
    "os.makedirs(MY_CONFIG.OUTPUT_FOLDER, exist_ok=True)\n",
    "\n",
    "print (\"✅ Cleared output directory\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2449e5c7-078c-4ad6-a2f6-21d39d4da3fb",
   "metadata": {},
   "source": [
    "## Step-3: pdf2parquet -  Convert data from PDF to Parquet\n",
    "\n",
    "This step reads the input folder containing the PDF files and ingests them into a parquet table using the [Docling package](https://github.com/DS4SD/docling).\n",
    "The documents are converted into a JSON format, which makes them easy to chunk in later steps.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9bb15f02-ab5c-4525-a536-cfa1fd2ba70b",
   "metadata": {},
   "source": [
    "### 3.1 -  Execute "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "d940a56a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🏃🏼 STAGE-1: Processing input='input' --> output='output/01_parquet_out'\n",
      "\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "16:19:05 INFO - docling2parquet parameters are : {'batch_size': -1, 'artifacts_path': None, 'contents_type': <docling2parquet_contents_types.MARKDOWN: 'text/markdown'>, 'do_table_structure': True, 'do_ocr': True, 'ocr_engine': <docling2parquet_ocr_engine.EASYOCR: 'easyocr'>, 'bitmap_area_threshold': 0.05, 'pdf_backend': <docling2parquet_pdf_backend.DLPARSE_V2: 'dlparse_v2'>, 'double_precision': 8}\n",
      "2025-10-03 16:19:05,948 - INFO - docling2parquet parameters are : {'batch_size': -1, 'artifacts_path': None, 'contents_type': <docling2parquet_contents_types.MARKDOWN: 'text/markdown'>, 'do_table_structure': True, 'do_ocr': True, 'ocr_engine': <docling2parquet_ocr_engine.EASYOCR: 'easyocr'>, 'bitmap_area_threshold': 0.05, 'pdf_backend': <docling2parquet_pdf_backend.DLPARSE_V2: 'dlparse_v2'>, 'double_precision': 8}\n",
      "16:19:05 INFO - pipeline id pipeline_id\n",
      "2025-10-03 16:19:05,952 - INFO - pipeline id pipeline_id\n",
      "16:19:05 INFO - code location {'github': 'UNDEFINED', 'build-date': 'UNDEFINED', 'commit_hash': 'UNDEFINED', 'path': 'UNDEFINED'}\n",
      "2025-10-03 16:19:05,953 - INFO - code location {'github': 'UNDEFINED', 'build-date': 'UNDEFINED', 'commit_hash': 'UNDEFINED', 'path': 'UNDEFINED'}\n",
      "16:19:05 INFO - number of workers 1 worker options {'num_cpus': 1, 'memory': 2147483648, 'max_restarts': -1}\n",
      "2025-10-03 16:19:05,955 - INFO - number of workers 1 worker options {'num_cpus': 1, 'memory': 2147483648, 'max_restarts': -1}\n",
      "16:19:05 INFO - actor creation delay 0\n",
      "2025-10-03 16:19:05,956 - INFO - actor creation delay 0\n",
      "16:19:05 INFO - job details {'job category': 'preprocessing', 'job name': 'docling2parquet', 'job type': 'ray', 'job id': 'job_id'}\n",
      "2025-10-03 16:19:05,957 - INFO - job details {'job category': 'preprocessing', 'job name': 'docling2parquet', 'job type': 'ray', 'job id': 'job_id'}\n",
      "16:19:05 INFO - data factory data_ max_files -1, n_sample -1\n",
      "2025-10-03 16:19:05,958 - INFO - data factory data_ max_files -1, n_sample -1\n",
      "16:19:05 INFO - data factory data_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.pdf'], files to checkpoint ['.parquet']\n",
      "2025-10-03 16:19:05,959 - INFO - data factory data_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.pdf'], files to checkpoint ['.parquet']\n",
      "16:19:05 INFO - data factory data_ Data Access:  DataAccessLocal\n",
      "2025-10-03 16:19:05,960 - INFO - data factory data_ Data Access:  DataAccessLocal\n",
      "16:19:05 INFO - Running locally\n",
      "2025-10-03 16:19:05,961 - INFO - Running locally\n",
      "2025-10-03 16:19:10,356\tINFO worker.py:1777 -- Started a local Ray instance. View the dashboard at \u001b[1m\u001b[32mhttp://127.0.0.1:8265 \u001b[39m\u001b[22m\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 16:19:15 INFO - orchestrator started at 2025-10-03 16:19:15\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 2025-10-03 16:19:15,548 - INFO - orchestrator started at 2025-10-03 16:19:15\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 16:19:15 INFO - Number of files is 3, source profile {'max_file_size': 2.112621307373047, 'min_file_size': 1.2146415710449219, 'total_file_size': 4.541904449462891}\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 2025-10-03 16:19:15,554 - INFO - Number of files is 3, source profile {'max_file_size': 2.112621307373047, 'min_file_size': 1.2146415710449219, 'total_file_size': 4.541904449462891}\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 16:19:15 INFO - Cluster resources: {'cpus': 12, 'gpus': 0, 'memory': 29.393569946289062, 'object_store': 2.0}\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 2025-10-03 16:19:15,554 - INFO - Cluster resources: {'cpus': 12, 'gpus': 0, 'memory': 29.393569946289062, 'object_store': 2.0}\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 16:19:15 INFO - Number of workers - 1 with {'num_cpus': 1, 'memory': 2147483648, 'max_restarts': -1} each\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 2025-10-03 16:19:15,555 - INFO - Number of workers - 1 with {'num_cpus': 1, 'memory': 2147483648, 'max_restarts': -1} each\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 16:19:19 INFO - Initializing models\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:19:19,962 - INFO - Initializing models\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:19:19,966 - INFO - Initializing pipeline for StandardPdfPipeline with options hash 62defe454d0b28b0ad913c31325b2492\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:19:19,973 - INFO - Loading plugin 'docling_defaults'\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:19:19,974 - INFO - Registered picture descriptions: ['vlm', 'api']\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:19:19,980 - INFO - Loading plugin 'docling_defaults'\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:19:19,982 - INFO - Registered ocr engines: ['easyocr', 'ocrmac', 'rapidocr', 'tesserocr', 'tesseract']\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:19:20,091 - INFO - Accelerator device: 'mps'\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:19:22,240 - INFO - Accelerator device: 'mps'\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:19:23,105 - INFO - Accelerator device: 'mps'\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:19:24,147 - INFO - detected formats: [<InputFormat.PDF: 'pdf'>]\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:19:24,175 - INFO - Going to convert document batch...\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:19:24,175 - INFO - Processing document attention.pdf\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 16:19:40 INFO - Completed 1 files in 0.27 min\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 2025-10-03 16:19:40,367 - INFO - Completed 1 files in 0.27 min\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:19:40,333 - INFO - Finished converting document attention.pdf in 16.19 sec.\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:19:40,368 - INFO - detected formats: [<InputFormat.PDF: 'pdf'>]\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:19:40,369 - INFO - Going to convert document batch...\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:19:40,369 - INFO - Processing document granite.pdf\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 16:20:41 INFO - Completed 2 files in 1.295 min\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 2025-10-03 16:20:41,873 - INFO - Completed 2 files in 1.295 min\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 16:20:41 INFO - Completed 2 files (66.667%)  in 1.295 min. Waiting for completion\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 2025-10-03 16:20:41,874 - INFO - Completed 2 files (66.667%)  in 1.295 min. Waiting for completion\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:20:41,829 - INFO - Finished converting document granite.pdf in 61.46 sec.\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:20:41,874 - INFO - detected formats: [<InputFormat.PDF: 'pdf'>]\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:20:41,876 - INFO - Going to convert document batch...\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:20:41,876 - INFO - Processing document granite2.pdf\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 16:21:41 INFO - Completed processing 3 files in 2.297 min\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 2025-10-03 16:21:41,966 - INFO - Completed processing 3 files in 2.297 min\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 16:21:41 INFO - done flushing in 0.001 sec\n",
      "\u001b[36m(orchestrate pid=39800)\u001b[0m 2025-10-03 16:21:41,966 - INFO - done flushing in 0.001 sec\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m 2025-10-03 16:21:41,922 - INFO - Finished converting document granite2.pdf in 60.05 sec.\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m /Users/shahrokhdaijavad/miniforge3/envs/data-prep-kit-rag/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown\n",
      "\u001b[36m(RayTransformFileProcessor pid=39855)\u001b[0m   warnings.warn('resource_tracker: There appear to be %d '\n",
      "16:21:51 INFO - Completed execution in 2.767 min, execution result 0\n",
      "2025-10-03 16:21:51,976 - INFO - Completed execution in 2.767 min, execution result 0\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ Stage:1 completed successfully\n",
      "CPU times: user 361 ms, sys: 357 ms, total: 718 ms\n",
      "Wall time: 2min 47s\n"
     ]
    }
   ],
   "source": [
    "%%time \n",
    "\n",
    "from dpk_docling2parquet.ray import Docling2Parquet\n",
    "from data_processing.utils import GB\n",
    "from dpk_docling2parquet import docling2parquet_contents_types\n",
    "\n",
    "STAGE = 1 \n",
    "print (f\"🏃🏼 STAGE-{STAGE}: Processing input='{MY_CONFIG.INPUT_DATA_DIR}' --> output='{output_parquet_dir}'\\n\", flush=True)\n",
    "\n",
    "result = Docling2Parquet(input_folder= MY_CONFIG.INPUT_DATA_DIR,\n",
    "                    output_folder= output_parquet_dir, \n",
    "                    data_files_to_use=['.pdf'],\n",
    "                    docling2parquet_contents_type=docling2parquet_contents_types.MARKDOWN,\n",
    "                    \n",
    "                    \n",
    "                    ## runtime options\n",
    "                    # run_locally= True,\n",
    "                    # num_cpus= MY_CONFIG.RAY_NUM_CPUS,\n",
    "                    # memory= MY_CONFIG.RAY_MEMORY_GB * GB,\n",
    "                    # runtime_num_workers = MY_CONFIG.RAY_RUNTIME_WORKERS,\n",
    "\n",
    "                    ## debug\n",
    "                    run_locally= True,\n",
    "                    num_cpus=  1, \n",
    "                    memory= MY_CONFIG.RAY_MEMORY_GB * GB, \n",
    "                    runtime_num_workers = 1, ## Note: has to be one for this particular job, to prevent race condition when downloading models!\n",
    "               ).transform()\n",
    "\n",
    "if result == 0:\n",
    "    print (f\"✅ Stage:{STAGE} completed successfully\")\n",
    "else:\n",
    "    raise Exception (f\"❌ Stage:{STAGE}  failed\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5ca790e0",
   "metadata": {},
   "source": [
    "### 3.2 - Inspect Generated output\n",
    "\n",
    "Here we should see one entry per input file processed"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "fe59563d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Successfully read 3 parquet files with 3 total rows\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>filename</th>\n",
       "      <th>contents</th>\n",
       "      <th>num_pages</th>\n",
       "      <th>num_tables</th>\n",
       "      <th>num_doc_elements</th>\n",
       "      <th>document_id</th>\n",
       "      <th>document_hash</th>\n",
       "      <th>ext</th>\n",
       "      <th>hash</th>\n",
       "      <th>size</th>\n",
       "      <th>date_acquired</th>\n",
       "      <th>document_convert_time</th>\n",
       "      <th>source_filename</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>granite.pdf</td>\n",
       "      <td>## Granite Code Models: A Family of Open Found...</td>\n",
       "      <td>28</td>\n",
       "      <td>17</td>\n",
       "      <td>485</td>\n",
       "      <td>65999e4c-b0c3-4fc2-8a68-315d97f61bb7</td>\n",
       "      <td>3127757990743433032</td>\n",
       "      <td>pdf</td>\n",
       "      <td>58342470e7d666dca0be87a15fb0552f949a5632606fe1...</td>\n",
       "      <td>121131</td>\n",
       "      <td>2025-10-03T16:20:41.866150</td>\n",
       "      <td>61.461330</td>\n",
       "      <td>granite.pdf</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>granite2.pdf</td>\n",
       "      <td>## Granite Code Models: A Family of Open Found...</td>\n",
       "      <td>28</td>\n",
       "      <td>17</td>\n",
       "      <td>485</td>\n",
       "      <td>07de58e0-1718-41e9-8074-99f7cb136ba0</td>\n",
       "      <td>3127757990743433032</td>\n",
       "      <td>pdf</td>\n",
       "      <td>58342470e7d666dca0be87a15fb0552f949a5632606fe1...</td>\n",
       "      <td>121131</td>\n",
       "      <td>2025-10-03T16:21:41.959755</td>\n",
       "      <td>60.047916</td>\n",
       "      <td>granite2.pdf</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>attention.pdf</td>\n",
       "      <td>Provided proper attribution is provided, Googl...</td>\n",
       "      <td>15</td>\n",
       "      <td>4</td>\n",
       "      <td>513</td>\n",
       "      <td>3c534d48-9bde-4247-bfa6-8ae0c6f6e0aa</td>\n",
       "      <td>2949302674760005271</td>\n",
       "      <td>pdf</td>\n",
       "      <td>214960a61e817387f01087f0b0b323cf1ebd8035fffcab...</td>\n",
       "      <td>48981</td>\n",
       "      <td>2025-10-03T16:19:40.360972</td>\n",
       "      <td>16.186138</td>\n",
       "      <td>attention.pdf</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "        filename                                           contents  \\\n",
       "0    granite.pdf  ## Granite Code Models: A Family of Open Found...   \n",
       "1   granite2.pdf  ## Granite Code Models: A Family of Open Found...   \n",
       "2  attention.pdf  Provided proper attribution is provided, Googl...   \n",
       "\n",
       "   num_pages  num_tables  num_doc_elements  \\\n",
       "0         28          17               485   \n",
       "1         28          17               485   \n",
       "2         15           4               513   \n",
       "\n",
       "                            document_id        document_hash  ext  \\\n",
       "0  65999e4c-b0c3-4fc2-8a68-315d97f61bb7  3127757990743433032  pdf   \n",
       "1  07de58e0-1718-41e9-8074-99f7cb136ba0  3127757990743433032  pdf   \n",
       "2  3c534d48-9bde-4247-bfa6-8ae0c6f6e0aa  2949302674760005271  pdf   \n",
       "\n",
       "                                                hash    size  \\\n",
       "0  58342470e7d666dca0be87a15fb0552f949a5632606fe1...  121131   \n",
       "1  58342470e7d666dca0be87a15fb0552f949a5632606fe1...  121131   \n",
       "2  214960a61e817387f01087f0b0b323cf1ebd8035fffcab...   48981   \n",
       "\n",
       "                date_acquired  document_convert_time source_filename  \n",
       "0  2025-10-03T16:20:41.866150              61.461330     granite.pdf  \n",
       "1  2025-10-03T16:21:41.959755              60.047916    granite2.pdf  \n",
       "2  2025-10-03T16:19:40.360972              16.186138   attention.pdf  "
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from file_utils import read_parquet_files_as_df\n",
    "\n",
    "output_df = read_parquet_files_as_df(output_parquet_dir)\n",
    "# print (\"Output dimensions (rows x columns)= \", output_df.shape)\n",
    "output_df.head(5)\n",
    "## To display certain columns\n",
    "# output_df[['column1', 'column2', 'column3']].head(5)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8c54f1d7",
   "metadata": {},
   "source": [
    "## Step-4: Eliminate Duplicate Documents\n",
    "\n",
    "We have two duplicate documents here: `granite.pdf` and `granite2.pdf`.\n",
    "\n",
    "Note how the `hash` for these documents is the same.\n",
    "\n",
    "We are going to perform **de-dupe**:\n",
    "\n",
    "A SHA256 hash is computed over the content of each document, and records with identical hashes are de-duplicated.\n",
    "\n",
    "[Dedupe transform documentation](https://github.com/data-prep-kit/data-prep-kit/blob/dev/transforms/universal/ededup/README.md)"
   ]
  },
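  {
   "cell_type": "markdown",
   "id": "a7f3c9e1-dedupe-sketch",
   "metadata": {},
   "source": [
    "The core idea of exact dedupe can be sketched in plain Python. This is a minimal illustration using only `hashlib` (not the actual Data Prep Kit implementation, which runs as a distributed Ray transform):\n",
    "\n",
    "```python\n",
    "import hashlib\n",
    "\n",
    "def exact_dedupe(docs):\n",
    "    \"\"\"Keep the first document for each unique SHA256 content hash.\"\"\"\n",
    "    seen = set()\n",
    "    kept = []\n",
    "    for doc in docs:\n",
    "        h = hashlib.sha256(doc.encode('utf-8')).hexdigest()\n",
    "        if h not in seen:\n",
    "            seen.add(h)\n",
    "            kept.append(doc)\n",
    "    return kept\n",
    "\n",
    "# granite.pdf and granite2.pdf have identical contents, so one of them is dropped\n",
    "print(len(exact_dedupe(['granite text', 'granite text', 'attention text'])))  # 2\n",
    "```"
   ]
  },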
  {
   "cell_type": "markdown",
   "id": "5133e8b7",
   "metadata": {},
   "source": [
    "### 4.1 - Execute "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "60014643",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🏃🏼 STAGE-2: Processing input='output/01_parquet_out' --> output='output/02_dedupe_out'\n",
      "\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "16:22:00 INFO - exact dedup params are {'doc_column': 'contents', 'doc_id_column': 'document_id', 'use_snapshot': False, 'snapshot_directory': None, 'hash_cpu': 0.5, 'num_hashes': 2}\n",
      "2025-10-03 16:22:00,031 - INFO - exact dedup params are {'doc_column': 'contents', 'doc_id_column': 'document_id', 'use_snapshot': False, 'snapshot_directory': None, 'hash_cpu': 0.5, 'num_hashes': 2}\n",
      "16:22:00 INFO - pipeline id pipeline_id\n",
      "2025-10-03 16:22:00,032 - INFO - pipeline id pipeline_id\n",
      "16:22:00 INFO - code location {'github': 'UNDEFINED', 'build-date': 'UNDEFINED', 'commit_hash': 'UNDEFINED', 'path': 'UNDEFINED'}\n",
      "2025-10-03 16:22:00,033 - INFO - code location {'github': 'UNDEFINED', 'build-date': 'UNDEFINED', 'commit_hash': 'UNDEFINED', 'path': 'UNDEFINED'}\n",
      "16:22:00 INFO - number of workers 2 worker options {'num_cpus': 0.5, 'memory': 2147483648, 'max_restarts': -1}\n",
      "2025-10-03 16:22:00,035 - INFO - number of workers 2 worker options {'num_cpus': 0.5, 'memory': 2147483648, 'max_restarts': -1}\n",
      "16:22:00 INFO - actor creation delay 0\n",
      "2025-10-03 16:22:00,036 - INFO - actor creation delay 0\n",
      "16:22:00 INFO - job details {'job category': 'preprocessing', 'job name': 'ededup', 'job type': 'ray', 'job id': 'job_id'}\n",
      "2025-10-03 16:22:00,036 - INFO - job details {'job category': 'preprocessing', 'job name': 'ededup', 'job type': 'ray', 'job id': 'job_id'}\n",
      "16:22:00 INFO - data factory data_ max_files -1, n_sample -1\n",
      "2025-10-03 16:22:00,037 - INFO - data factory data_ max_files -1, n_sample -1\n",
      "16:22:00 INFO - data factory data_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.parquet'], files to checkpoint ['.parquet']\n",
      "2025-10-03 16:22:00,038 - INFO - data factory data_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.parquet'], files to checkpoint ['.parquet']\n",
      "16:22:00 INFO - data factory data_ Data Access:  DataAccessLocal\n",
      "2025-10-03 16:22:00,038 - INFO - data factory data_ Data Access:  DataAccessLocal\n",
      "16:22:00 INFO - Running locally\n",
      "2025-10-03 16:22:00,039 - INFO - Running locally\n",
      "2025-10-03 16:22:01,650\tINFO worker.py:1777 -- Started a local Ray instance. View the dashboard at \u001b[1m\u001b[32mhttp://127.0.0.1:8265 \u001b[39m\u001b[22m\n",
      "\u001b[36m(orchestrate pid=40921)\u001b[0m 16:22:06 INFO - orchestrator started at 2025-10-03 16:22:06\n",
      "\u001b[36m(orchestrate pid=40921)\u001b[0m 16:22:06 INFO - Number of files is 3, source profile {'max_file_size': 0.04417991638183594, 'min_file_size': 0.020964622497558594, 'total_file_size': 0.10931110382080078}\n",
      "\u001b[36m(orchestrate pid=40921)\u001b[0m 16:22:06 INFO - Cluster resources: {'cpus': 12, 'gpus': 0, 'memory': 32.05584564246237, 'object_store': 2.0}\n",
      "\u001b[36m(orchestrate pid=40921)\u001b[0m 16:22:06 INFO - Number of workers - 2 with {'num_cpus': 0.5, 'memory': 2147483648, 'max_restarts': -1} each\n",
      "\u001b[36m(orchestrate pid=40921)\u001b[0m 16:22:11 INFO - Completed 1 files in 0.001 min\n",
      "\u001b[36m(orchestrate pid=40921)\u001b[0m 16:22:11 INFO - Completed 1 files (33.333%)  in 0.001 min. Waiting for completion\n",
      "\u001b[36m(orchestrate pid=40921)\u001b[0m 16:22:11 INFO - Completed processing 3 files in 0.001 min\n",
      "\u001b[36m(orchestrate pid=40921)\u001b[0m 16:22:11 INFO - done flushing in 0.001 sec\n",
      "16:22:21 INFO - Completed execution in 0.354 min, execution result 0\n",
      "2025-10-03 16:22:21,267 - INFO - Completed execution in 0.354 min, execution result 0\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ Stage:2 completed successfully\n",
      "CPU times: user 90.7 ms, sys: 156 ms, total: 247 ms\n",
      "Wall time: 22.6 s\n"
     ]
    }
   ],
   "source": [
    "%%time \n",
    "\n",
    "from dpk_ededup.ray.transform import Ededup\n",
    "\n",
    "STAGE = 2\n",
    "print (f\"🏃🏼 STAGE-{STAGE}: Processing input='{output_parquet_dir}' --> output='{output_exact_dedupe_dir}'\\n\", flush=True)\n",
    "\n",
    "result = Ededup(input_folder=output_parquet_dir,\n",
    "                output_folder=output_exact_dedupe_dir,\n",
    "                ededup_hash_cpu= 0.5,\n",
    "                ededup_num_hashes= 2,\n",
    "                ededup_doc_column=\"contents\",\n",
    "                ededup_doc_id_column=\"document_id\",\n",
    "                \n",
    "                ## runtime options\n",
    "                run_locally= True,\n",
    "                num_cpus= MY_CONFIG.RAY_NUM_CPUS,\n",
    "                memory= MY_CONFIG.RAY_MEMORY_GB * GB,\n",
    "                runtime_num_workers = MY_CONFIG.RAY_RUNTIME_WORKERS,\n",
    "    ).transform()\n",
    "\n",
    "if result == 0:\n",
    "    print (f\"✅ Stage:{STAGE} completed successfully\")\n",
    "else:\n",
    "    raise Exception (f\"❌ Stage:{STAGE}  failed\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a15d456a",
   "metadata": {},
   "source": [
    "### 4.2 - Inspect Generated output\n",
    "\n",
    "We should see 2 documents: `attention.pdf` and `granite.pdf`. The duplicate `granite2.pdf` has been filtered out!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "0d93c248",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Successfully read 3 parquet files with 3 total rows\n",
      "Successfully read 2 parquet files with 2 total rows\n",
      "Input files before exact dedupe : 3\n",
      "Output files after exact dedupe : 2\n",
      "Duplicate files removed :   1\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>filename</th>\n",
       "      <th>contents</th>\n",
       "      <th>num_pages</th>\n",
       "      <th>num_tables</th>\n",
       "      <th>num_doc_elements</th>\n",
       "      <th>document_id</th>\n",
       "      <th>document_hash</th>\n",
       "      <th>ext</th>\n",
       "      <th>hash</th>\n",
       "      <th>size</th>\n",
       "      <th>date_acquired</th>\n",
       "      <th>document_convert_time</th>\n",
       "      <th>source_filename</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>attention.pdf</td>\n",
       "      <td>Provided proper attribution is provided, Googl...</td>\n",
       "      <td>15</td>\n",
       "      <td>4</td>\n",
       "      <td>513</td>\n",
       "      <td>3c534d48-9bde-4247-bfa6-8ae0c6f6e0aa</td>\n",
       "      <td>2949302674760005271</td>\n",
       "      <td>pdf</td>\n",
       "      <td>214960a61e817387f01087f0b0b323cf1ebd8035fffcab...</td>\n",
       "      <td>48981</td>\n",
       "      <td>2025-10-03T16:19:40.360972</td>\n",
       "      <td>16.186138</td>\n",
       "      <td>attention.pdf</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>granite.pdf</td>\n",
       "      <td>## Granite Code Models: A Family of Open Found...</td>\n",
       "      <td>28</td>\n",
       "      <td>17</td>\n",
       "      <td>485</td>\n",
       "      <td>65999e4c-b0c3-4fc2-8a68-315d97f61bb7</td>\n",
       "      <td>3127757990743433032</td>\n",
       "      <td>pdf</td>\n",
       "      <td>58342470e7d666dca0be87a15fb0552f949a5632606fe1...</td>\n",
       "      <td>121131</td>\n",
       "      <td>2025-10-03T16:20:41.866150</td>\n",
       "      <td>61.461330</td>\n",
       "      <td>granite.pdf</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "        filename                                           contents  \\\n",
       "1  attention.pdf  Provided proper attribution is provided, Googl...   \n",
       "0    granite.pdf  ## Granite Code Models: A Family of Open Found...   \n",
       "\n",
       "   num_pages  num_tables  num_doc_elements  \\\n",
       "1         15           4               513   \n",
       "0         28          17               485   \n",
       "\n",
       "                            document_id        document_hash  ext  \\\n",
       "1  3c534d48-9bde-4247-bfa6-8ae0c6f6e0aa  2949302674760005271  pdf   \n",
       "0  65999e4c-b0c3-4fc2-8a68-315d97f61bb7  3127757990743433032  pdf   \n",
       "\n",
       "                                                hash    size  \\\n",
       "1  214960a61e817387f01087f0b0b323cf1ebd8035fffcab...   48981   \n",
       "0  58342470e7d666dca0be87a15fb0552f949a5632606fe1...  121131   \n",
       "\n",
       "                date_acquired  document_convert_time source_filename  \n",
       "1  2025-10-03T16:19:40.360972              16.186138   attention.pdf  \n",
       "0  2025-10-03T16:20:41.866150              61.461330     granite.pdf  "
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from file_utils import read_parquet_files_as_df\n",
    "\n",
    "input_df = read_parquet_files_as_df(output_parquet_dir)\n",
    "output_df = read_parquet_files_as_df(output_exact_dedupe_dir)\n",
    "\n",
    "# print (\"Input data dimensions (rows x columns)= \", input_df.shape)\n",
    "# print (\"Output data dimensions (rows x columns)= \", output_df.shape)\n",
    "print (f\"Input files before exact dedupe : {input_df.shape[0]:,}\")\n",
    "print (f\"Output files after exact dedupe : {output_df.shape[0]:,}\")\n",
    "print (\"Duplicate files removed :  \", (input_df.shape[0] - output_df.shape[0]))\n",
    "\n",
    "output_df.sample(min(3, output_df.shape[0]))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "72274586",
   "metadata": {},
   "source": [
    "##  Step-5: Doc chunks\n",
    "\n",
    "Split the documents in chunks.\n",
    "\n",
    "[Chunking transform documentation](https://github.com/data-prep-kit/data-prep-kit/blob/dev/transforms/language/doc_chunk/README.md)\n",
    "\n",
    "**Experiment with chunking size to find the setting that works best for your documents**"
   ]
  },
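  {
   "cell_type": "markdown",
   "id": "doc-chunk-sketch-md",
   "metadata": {},
   "source": [
    "**How markdown chunking works (sketch):** the `li_markdown` chunking type splits the extracted markdown along its structure (headings, paragraphs), so each chunk is a coherent section. A naive illustration on a made-up document (not the transform's actual splitter, which also applies token limits and overlap):\n",
    "\n",
    "```python\n",
    "import re\n",
    "\n",
    "markdown_text = (\n",
    "    \"## Introduction\\n\\nTransformers use attention.\\n\\n\"\n",
    "    \"## Method\\n\\nScaled dot-product attention.\\n\"\n",
    ")\n",
    "\n",
    "# One chunk per '## ' section (zero-width split at heading starts)\n",
    "chunks = [c.strip() for c in re.split(r\"(?m)^(?=## )\", markdown_text) if c.strip()]\n",
    "for chunk in chunks:\n",
    "    print(chunk.splitlines()[0], \"->\", len(chunk), \"chars\")\n",
    "```"
   ]
  },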
  {
   "cell_type": "markdown",
   "id": "369f2cd1",
   "metadata": {},
   "source": [
    "### 5.1 -  Execute "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "f1fbdbca",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🏃🏼 STAGE-3: Processing input='output/02_dedupe_out' --> output='output/03_chunk_out'\n",
      "\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "16:22:30 INFO - doc_chunk parameters are : {'chunking_type': 'li_markdown', 'content_column_name': 'contents', 'doc_id_column_name': 'document_id', 'output_chunk_column_name': 'contents', 'output_source_doc_id_column_name': 'source_document_id', 'output_jsonpath_column_name': 'doc_jsonpath', 'output_pageno_column_name': 'page_number', 'output_bbox_column_name': 'bbox', 'chunk_size_tokens': 128, 'chunk_overlap_tokens': 30, 'dl_min_chunk_len': None}\n",
      "2025-10-03 16:22:30,511 - INFO - doc_chunk parameters are : {'chunking_type': 'li_markdown', 'content_column_name': 'contents', 'doc_id_column_name': 'document_id', 'output_chunk_column_name': 'contents', 'output_source_doc_id_column_name': 'source_document_id', 'output_jsonpath_column_name': 'doc_jsonpath', 'output_pageno_column_name': 'page_number', 'output_bbox_column_name': 'bbox', 'chunk_size_tokens': 128, 'chunk_overlap_tokens': 30, 'dl_min_chunk_len': None}\n",
      "16:22:30 INFO - pipeline id pipeline_id\n",
      "2025-10-03 16:22:30,512 - INFO - pipeline id pipeline_id\n",
      "16:22:30 INFO - code location {'github': 'UNDEFINED', 'build-date': 'UNDEFINED', 'commit_hash': 'UNDEFINED', 'path': 'UNDEFINED'}\n",
      "2025-10-03 16:22:30,512 - INFO - code location {'github': 'UNDEFINED', 'build-date': 'UNDEFINED', 'commit_hash': 'UNDEFINED', 'path': 'UNDEFINED'}\n",
      "16:22:30 INFO - number of workers 2 worker options {'num_cpus': 0.5, 'memory': 2147483648, 'max_restarts': -1}\n",
      "2025-10-03 16:22:30,512 - INFO - number of workers 2 worker options {'num_cpus': 0.5, 'memory': 2147483648, 'max_restarts': -1}\n",
      "16:22:30 INFO - actor creation delay 0\n",
      "2025-10-03 16:22:30,513 - INFO - actor creation delay 0\n",
      "16:22:30 INFO - job details {'job category': 'preprocessing', 'job name': 'doc_chunk', 'job type': 'ray', 'job id': 'job_id'}\n",
      "2025-10-03 16:22:30,513 - INFO - job details {'job category': 'preprocessing', 'job name': 'doc_chunk', 'job type': 'ray', 'job id': 'job_id'}\n",
      "16:22:30 INFO - data factory data_ max_files -1, n_sample -1\n",
      "2025-10-03 16:22:30,514 - INFO - data factory data_ max_files -1, n_sample -1\n",
      "16:22:30 INFO - data factory data_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.parquet'], files to checkpoint ['.parquet']\n",
      "2025-10-03 16:22:30,514 - INFO - data factory data_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.parquet'], files to checkpoint ['.parquet']\n",
      "16:22:30 INFO - data factory data_ Data Access:  DataAccessLocal\n",
      "2025-10-03 16:22:30,514 - INFO - data factory data_ Data Access:  DataAccessLocal\n",
      "16:22:30 INFO - Running locally\n",
      "2025-10-03 16:22:30,515 - INFO - Running locally\n",
      "2025-10-03 16:22:31,911\tINFO worker.py:1777 -- Started a local Ray instance. View the dashboard at \u001b[1m\u001b[32mhttp://127.0.0.1:8265 \u001b[39m\u001b[22m\n",
      "\u001b[36m(orchestrate pid=41128)\u001b[0m 16:22:37 INFO - orchestrator started at 2025-10-03 16:22:37\n",
      "\u001b[36m(orchestrate pid=41128)\u001b[0m 16:22:37 INFO - Number of files is 3, source profile {'max_file_size': 0.04416656494140625, 'min_file_size': 0.0028314590454101562, 'total_file_size': 0.067962646484375}\n",
      "\u001b[36m(orchestrate pid=41128)\u001b[0m 16:22:37 INFO - Cluster resources: {'cpus': 12, 'gpus': 0, 'memory': 31.943524170666933, 'object_store': 2.0}\n",
      "\u001b[36m(orchestrate pid=41128)\u001b[0m 16:22:37 INFO - Number of workers - 2 with {'num_cpus': 0.5, 'memory': 2147483648, 'max_restarts': -1} each\n",
      "\u001b[36m(orchestrate pid=41128)\u001b[0m 16:22:42 INFO - Completed 1 files in 0.001 min\n",
      "\u001b[36m(orchestrate pid=41128)\u001b[0m 16:22:42 INFO - Completed 1 files (33.333%)  in 0.001 min. Waiting for completion\n",
      "\u001b[36m(orchestrate pid=41128)\u001b[0m 16:22:42 INFO - Completed processing 3 files in 0.001 min\n",
      "\u001b[36m(orchestrate pid=41128)\u001b[0m 16:22:42 INFO - done flushing in 0.001 sec\n",
      "\u001b[36m(RayTransformFileProcessor pid=41156)\u001b[0m 16:22:42 WARNING - table is empty, skipping processing\n",
      "16:22:52 INFO - Completed execution in 0.372 min, execution result 0\n",
      "2025-10-03 16:22:52,834 - INFO - Completed execution in 0.372 min, execution result 0\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ Stage:3 completed successfully\n",
      "CPU times: user 529 ms, sys: 284 ms, total: 813 ms\n",
      "Wall time: 24.4 s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "\n",
    "from dpk_doc_chunk.ray.transform import DocChunk\n",
    "from data_processing.utils import GB\n",
    "\n",
    "STAGE = 3\n",
    "print (f\"🏃🏼 STAGE-{STAGE}: Processing input='{output_exact_dedupe_dir}' --> output='{output_chunk_dir}'\\n\", flush=True)\n",
    "\n",
    "result = DocChunk(input_folder=output_exact_dedupe_dir,\n",
    "                        output_folder=output_chunk_dir,\n",
    "                        doc_chunk_chunking_type= \"li_markdown\",\n",
    "\n",
    "                        ## runtime options\n",
    "                        run_locally= True,\n",
    "                        num_cpus= MY_CONFIG.RAY_NUM_CPUS,\n",
    "                        memory= MY_CONFIG.RAY_MEMORY_GB * GB,\n",
    "                        runtime_num_workers = MY_CONFIG.RAY_RUNTIME_WORKERS,\n",
    "                        ).transform()\n",
    "\n",
    "if result == 0:\n",
    "    print (f\"✅ Stage:{STAGE} completed successfully\")\n",
    "else:\n",
    "    raise Exception (f\"❌ Stage:{STAGE}  failed\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "213afdf6",
   "metadata": {},
   "source": [
    "### 5.2 - Inspect Generated output\n",
    "\n",
    "We would see documents are split into many chunks"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "d8138d43",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Successfully read 2 parquet files with 2 total rows\n",
      "Successfully read 2 parquet files with 61 total rows\n",
      "Files processed : 2\n",
      "Chunks created : 61\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>filename</th>\n",
       "      <th>num_pages</th>\n",
       "      <th>num_tables</th>\n",
       "      <th>num_doc_elements</th>\n",
       "      <th>document_hash</th>\n",
       "      <th>ext</th>\n",
       "      <th>hash</th>\n",
       "      <th>size</th>\n",
       "      <th>date_acquired</th>\n",
       "      <th>document_convert_time</th>\n",
       "      <th>source_filename</th>\n",
       "      <th>source_document_id</th>\n",
       "      <th>contents</th>\n",
       "      <th>document_id</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>32</th>\n",
       "      <td>granite.pdf</td>\n",
       "      <td>28</td>\n",
       "      <td>17</td>\n",
       "      <td>485</td>\n",
       "      <td>3127757990743433032</td>\n",
       "      <td>pdf</td>\n",
       "      <td>58342470e7d666dca0be87a15fb0552f949a5632606fe1...</td>\n",
       "      <td>121131</td>\n",
       "      <td>2025-10-03T16:20:41.866150</td>\n",
       "      <td>61.461330</td>\n",
       "      <td>granite.pdf</td>\n",
       "      <td>65999e4c-b0c3-4fc2-8a68-315d97f61bb7</td>\n",
       "      <td>## A Programming Languages\\n\\nABAP, Ada, Agda,...</td>\n",
       "      <td>5863a2b263e38396db7ecd7357befa898f6295c182e801...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>25</th>\n",
       "      <td>granite.pdf</td>\n",
       "      <td>28</td>\n",
       "      <td>17</td>\n",
       "      <td>485</td>\n",
       "      <td>3127757990743433032</td>\n",
       "      <td>pdf</td>\n",
       "      <td>58342470e7d666dca0be87a15fb0552f949a5632606fe1...</td>\n",
       "      <td>121131</td>\n",
       "      <td>2025-10-03T16:20:41.866150</td>\n",
       "      <td>61.461330</td>\n",
       "      <td>granite.pdf</td>\n",
       "      <td>65999e4c-b0c3-4fc2-8a68-315d97f61bb7</td>\n",
       "      <td>## 6.4 Code Reasoning, Understanding and Execu...</td>\n",
       "      <td>1c7f5e76a2aaad73f5f03549b065016b0703239538839d...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>42</th>\n",
       "      <td>attention.pdf</td>\n",
       "      <td>15</td>\n",
       "      <td>4</td>\n",
       "      <td>513</td>\n",
       "      <td>2949302674760005271</td>\n",
       "      <td>pdf</td>\n",
       "      <td>214960a61e817387f01087f0b0b323cf1ebd8035fffcab...</td>\n",
       "      <td>48981</td>\n",
       "      <td>2025-10-03T16:19:40.360972</td>\n",
       "      <td>16.186138</td>\n",
       "      <td>attention.pdf</td>\n",
       "      <td>3c534d48-9bde-4247-bfa6-8ae0c6f6e0aa</td>\n",
       "      <td>## 3.2.1 Scaled Dot-Product Attention\\n\\nWe ca...</td>\n",
       "      <td>372f585509e5743de4e6603483cad7dfa3bd5bbdfa131c...</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "         filename  num_pages  num_tables  num_doc_elements  \\\n",
       "32    granite.pdf         28          17               485   \n",
       "25    granite.pdf         28          17               485   \n",
       "42  attention.pdf         15           4               513   \n",
       "\n",
       "          document_hash  ext  \\\n",
       "32  3127757990743433032  pdf   \n",
       "25  3127757990743433032  pdf   \n",
       "42  2949302674760005271  pdf   \n",
       "\n",
       "                                                 hash    size  \\\n",
       "32  58342470e7d666dca0be87a15fb0552f949a5632606fe1...  121131   \n",
       "25  58342470e7d666dca0be87a15fb0552f949a5632606fe1...  121131   \n",
       "42  214960a61e817387f01087f0b0b323cf1ebd8035fffcab...   48981   \n",
       "\n",
       "                 date_acquired  document_convert_time source_filename  \\\n",
       "32  2025-10-03T16:20:41.866150              61.461330     granite.pdf   \n",
       "25  2025-10-03T16:20:41.866150              61.461330     granite.pdf   \n",
       "42  2025-10-03T16:19:40.360972              16.186138   attention.pdf   \n",
       "\n",
       "                      source_document_id  \\\n",
       "32  65999e4c-b0c3-4fc2-8a68-315d97f61bb7   \n",
       "25  65999e4c-b0c3-4fc2-8a68-315d97f61bb7   \n",
       "42  3c534d48-9bde-4247-bfa6-8ae0c6f6e0aa   \n",
       "\n",
       "                                             contents  \\\n",
       "32  ## A Programming Languages\\n\\nABAP, Ada, Agda,...   \n",
       "25  ## 6.4 Code Reasoning, Understanding and Execu...   \n",
       "42  ## 3.2.1 Scaled Dot-Product Attention\\n\\nWe ca...   \n",
       "\n",
       "                                          document_id  \n",
       "32  5863a2b263e38396db7ecd7357befa898f6295c182e801...  \n",
       "25  1c7f5e76a2aaad73f5f03549b065016b0703239538839d...  \n",
       "42  372f585509e5743de4e6603483cad7dfa3bd5bbdfa131c...  "
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from file_utils import read_parquet_files_as_df\n",
    "\n",
    "input_df = read_parquet_files_as_df(output_exact_dedupe_dir)  ## for debug purposes\n",
    "output_df = read_parquet_files_as_df(output_chunk_dir)\n",
    "\n",
    "print (f\"Files processed : {input_df.shape[0]:,}\")\n",
    "print (f\"Chunks created : {output_df.shape[0]:,}\")\n",
    "\n",
    "# print (\"Input data dimensions (rows x columns)= \", input_df.shape)\n",
    "# print (\"Output data dimensions (rows x columns)= \", output_df.shape)\n",
    "\n",
    "output_df.sample(min(3, output_df.shape[0]))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5370950a-2a3a-4143-8218-f9b4808099ba",
   "metadata": {},
   "source": [
    "## Step-6:   Calculate Embeddings for Chunks\n",
    "\n",
    "we will calculate embeddings for each chunk using an open source embedding model\n",
    "\n",
    "[Embeddings / Text Encoder documentation](https://github.com/data-prep-kit/data-prep-kit/blob/dev/transforms/language/text_encoder/README.md)"
   ]
  },
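  {
   "cell_type": "markdown",
   "id": "embeddings-similarity-sketch-md",
   "metadata": {},
   "source": [
    "**Why embeddings (sketch):** once each chunk is a vector, a query can be encoded the same way and chunks ranked by cosine similarity. A toy illustration with made-up 4-dimensional vectors (real embedding models produce vectors with hundreds of dimensions):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "chunk_vecs = np.array([\n",
    "    [0.9, 0.1, 0.0, 0.1],  # pretend: chunk about attention\n",
    "    [0.1, 0.9, 0.1, 0.0],  # pretend: chunk about code models\n",
    "])\n",
    "query_vec = np.array([0.8, 0.2, 0.1, 0.0])  # pretend query: 'how does attention work?'\n",
    "\n",
    "def normalize(v):\n",
    "    return v / np.linalg.norm(v, axis=-1, keepdims=True)\n",
    "\n",
    "# Cosine similarity = dot product of L2-normalized vectors\n",
    "scores = normalize(chunk_vecs) @ normalize(query_vec)\n",
    "best = int(np.argmax(scores))\n",
    "print(\"best chunk:\", best)\n",
    "```"
   ]
  },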
  {
   "cell_type": "markdown",
   "id": "1e6a88f8",
   "metadata": {},
   "source": [
    "### 6.1 - Execute"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "76132f76",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🏃🏼 STAGE-4: Processing input='output/03_chunk_out' --> output='output/04_embeddings_out'\n",
      "\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "16:22:59 INFO - text_encoder parameters are : {'content_column_name': 'contents', 'output_embeddings_column_name': 'embeddings', 'model_name': 'ibm-granite/granite-embedding-30m-english'}\n",
      "2025-10-03 16:22:59,371 - INFO - text_encoder parameters are : {'content_column_name': 'contents', 'output_embeddings_column_name': 'embeddings', 'model_name': 'ibm-granite/granite-embedding-30m-english'}\n",
      "16:22:59 INFO - pipeline id pipeline_id\n",
      "2025-10-03 16:22:59,372 - INFO - pipeline id pipeline_id\n",
      "16:22:59 INFO - code location {'github': 'UNDEFINED', 'build-date': 'UNDEFINED', 'commit_hash': 'UNDEFINED', 'path': 'UNDEFINED'}\n",
      "2025-10-03 16:22:59,372 - INFO - code location {'github': 'UNDEFINED', 'build-date': 'UNDEFINED', 'commit_hash': 'UNDEFINED', 'path': 'UNDEFINED'}\n",
      "16:22:59 INFO - number of workers 2 worker options {'num_cpus': 0.5, 'memory': 2147483648, 'max_restarts': -1}\n",
      "2025-10-03 16:22:59,373 - INFO - number of workers 2 worker options {'num_cpus': 0.5, 'memory': 2147483648, 'max_restarts': -1}\n",
      "16:22:59 INFO - actor creation delay 0\n",
      "2025-10-03 16:22:59,373 - INFO - actor creation delay 0\n",
      "16:22:59 INFO - job details {'job category': 'preprocessing', 'job name': 'text_encoder', 'job type': 'ray', 'job id': 'job_id'}\n",
      "2025-10-03 16:22:59,373 - INFO - job details {'job category': 'preprocessing', 'job name': 'text_encoder', 'job type': 'ray', 'job id': 'job_id'}\n",
      "16:22:59 INFO - data factory data_ max_files -1, n_sample -1\n",
      "2025-10-03 16:22:59,374 - INFO - data factory data_ max_files -1, n_sample -1\n",
      "16:22:59 INFO - data factory data_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.parquet'], files to checkpoint ['.parquet']\n",
      "2025-10-03 16:22:59,374 - INFO - data factory data_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.parquet'], files to checkpoint ['.parquet']\n",
      "16:22:59 INFO - data factory data_ Data Access:  DataAccessLocal\n",
      "2025-10-03 16:22:59,375 - INFO - data factory data_ Data Access:  DataAccessLocal\n",
      "16:22:59 INFO - Running locally\n",
      "2025-10-03 16:22:59,375 - INFO - Running locally\n",
      "2025-10-03 16:23:00,890\tINFO worker.py:1777 -- Started a local Ray instance. View the dashboard at \u001b[1m\u001b[32mhttp://127.0.0.1:8265 \u001b[39m\u001b[22m\n",
      "\u001b[36m(orchestrate pid=41340)\u001b[0m 16:23:06 INFO - orchestrator started at 2025-10-03 16:23:06\n",
      "\u001b[36m(orchestrate pid=41340)\u001b[0m 16:23:06 INFO - Number of files is 2, source profile {'max_file_size': 0.046176910400390625, 'min_file_size': 0.02873516082763672, 'total_file_size': 0.07491207122802734}\n",
      "\u001b[36m(orchestrate pid=41340)\u001b[0m 16:23:06 INFO - Cluster resources: {'cpus': 12, 'gpus': 0, 'memory': 31.93815460242331, 'object_store': 2.0}\n",
      "\u001b[36m(orchestrate pid=41340)\u001b[0m 16:23:06 INFO - Number of workers - 2 with {'num_cpus': 0.5, 'memory': 2147483648, 'max_restarts': -1} each\n",
      "\u001b[36m(orchestrate pid=41340)\u001b[0m 16:23:13 INFO - Completed 0 files (0.0%)  in 0.0 min. Waiting for completion\n",
      "\u001b[36m(orchestrate pid=41340)\u001b[0m 16:23:16 INFO - Completed processing 2 files in 0.044 min\n",
      "\u001b[36m(orchestrate pid=41340)\u001b[0m 16:23:16 INFO - done flushing in 0.001 sec\n",
      "\u001b[36m(RayTransformFileProcessor pid=41369)\u001b[0m /Users/shahrokhdaijavad/miniforge3/envs/data-prep-kit-rag/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown\n",
      "\u001b[36m(RayTransformFileProcessor pid=41369)\u001b[0m   warnings.warn('resource_tracker: There appear to be %d '\n",
      "16:23:26 INFO - Completed execution in 0.449 min, execution result 0\n",
      "2025-10-03 16:23:26,291 - INFO - Completed execution in 0.449 min, execution result 0\n",
      "\u001b[36m(RayTransformFileProcessor pid=41370)\u001b[0m /Users/shahrokhdaijavad/miniforge3/envs/data-prep-kit-rag/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown\n",
      "\u001b[36m(RayTransformFileProcessor pid=41370)\u001b[0m   warnings.warn('resource_tracker: There appear to be %d '\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ Stage:4 completed successfully\n",
      "CPU times: user 197 ms, sys: 202 ms, total: 400 ms\n",
      "Wall time: 28.6 s\n"
     ]
    }
   ],
   "source": [
    "%%time \n",
    "\n",
    "from dpk_text_encoder.ray.transform import TextEncoder\n",
    "from data_processing.utils import GB\n",
    "\n",
    "STAGE  = 4\n",
    "print (f\"🏃🏼 STAGE-{STAGE}: Processing input='{output_chunk_dir}' --> output='{output_embeddings_dir}'\\n\", flush=True)\n",
    "\n",
    "result = TextEncoder(input_folder= output_chunk_dir, \n",
    "               output_folder= output_embeddings_dir, \n",
    "               text_encoder_model_name = MY_CONFIG.EMBEDDING_MODEL,\n",
    "               \n",
    "               ## runtime options\n",
    "               run_locally= True,\n",
    "               num_cpus= MY_CONFIG.RAY_NUM_CPUS,\n",
    "               memory= MY_CONFIG.RAY_MEMORY_GB * GB,\n",
    "               runtime_num_workers = MY_CONFIG.RAY_RUNTIME_WORKERS,\n",
    "               ).transform()\n",
    "\n",
    "if result == 0:\n",
    "    print (f\"✅ Stage:{STAGE} completed successfully\")\n",
    "else:\n",
    "    raise Exception (f\"❌ Stage:{STAGE}  failed\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b734852c",
   "metadata": {},
   "source": [
    "### 6.2 - Inspect Generated output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "7b1c1d09",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Successfully read 2 parquet files with 61 total rows\n",
      "Successfully read 2 parquet files with 61 total rows\n",
      "Input data dimensions (rows x columns)=  (61, 14)\n",
      "Output data dimensions (rows x columns)=  (61, 15)\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>filename</th>\n",
       "      <th>num_pages</th>\n",
       "      <th>num_tables</th>\n",
       "      <th>num_doc_elements</th>\n",
       "      <th>document_hash</th>\n",
       "      <th>ext</th>\n",
       "      <th>hash</th>\n",
       "      <th>size</th>\n",
       "      <th>date_acquired</th>\n",
       "      <th>document_convert_time</th>\n",
       "      <th>source_filename</th>\n",
       "      <th>source_document_id</th>\n",
       "      <th>contents</th>\n",
       "      <th>document_id</th>\n",
       "      <th>embeddings</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>granite.pdf</td>\n",
       "      <td>28</td>\n",
       "      <td>17</td>\n",
       "      <td>485</td>\n",
       "      <td>3127757990743433032</td>\n",
       "      <td>pdf</td>\n",
       "      <td>58342470e7d666dca0be87a15fb0552f949a5632606fe1...</td>\n",
       "      <td>121131</td>\n",
       "      <td>2025-10-03T16:20:41.866150</td>\n",
       "      <td>61.461330</td>\n",
       "      <td>granite.pdf</td>\n",
       "      <td>65999e4c-b0c3-4fc2-8a68-315d97f61bb7</td>\n",
       "      <td>## 2.1 Data Crawling and Filtering\\n\\nThe pret...</td>\n",
       "      <td>64f536c6e279db81ccda1ba14e35ca553e4250de346176...</td>\n",
       "      <td>[0.049523477, -0.015281111, 0.0320079, 0.04259...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>53</th>\n",
       "      <td>attention.pdf</td>\n",
       "      <td>15</td>\n",
       "      <td>4</td>\n",
       "      <td>513</td>\n",
       "      <td>2949302674760005271</td>\n",
       "      <td>pdf</td>\n",
       "      <td>214960a61e817387f01087f0b0b323cf1ebd8035fffcab...</td>\n",
       "      <td>48981</td>\n",
       "      <td>2025-10-03T16:19:40.360972</td>\n",
       "      <td>16.186138</td>\n",
       "      <td>attention.pdf</td>\n",
       "      <td>3c534d48-9bde-4247-bfa6-8ae0c6f6e0aa</td>\n",
       "      <td>## 5.4 Regularization\\n\\nWe employ three types...</td>\n",
       "      <td>da0d3c0141f14f51de889a843a3d2b70e60b680da1190c...</td>\n",
       "      <td>[-0.0034285104, -1.0524575e-05, 0.031760532, -...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>34</th>\n",
       "      <td>attention.pdf</td>\n",
       "      <td>15</td>\n",
       "      <td>4</td>\n",
       "      <td>513</td>\n",
       "      <td>2949302674760005271</td>\n",
       "      <td>pdf</td>\n",
       "      <td>214960a61e817387f01087f0b0b323cf1ebd8035fffcab...</td>\n",
       "      <td>48981</td>\n",
       "      <td>2025-10-03T16:19:40.360972</td>\n",
       "      <td>16.186138</td>\n",
       "      <td>attention.pdf</td>\n",
       "      <td>3c534d48-9bde-4247-bfa6-8ae0c6f6e0aa</td>\n",
       "      <td>## Attention Is All You Need\\n\\nAshish Vaswani...</td>\n",
       "      <td>573d1e20f1f71713f4f318a4abdffbb5109437d4fc02a2...</td>\n",
       "      <td>[-0.018462718, 0.053133022, -0.02388625, 0.042...</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "         filename  num_pages  num_tables  num_doc_elements  \\\n",
       "4     granite.pdf         28          17               485   \n",
       "53  attention.pdf         15           4               513   \n",
       "34  attention.pdf         15           4               513   \n",
       "\n",
       "          document_hash  ext  \\\n",
       "4   3127757990743433032  pdf   \n",
       "53  2949302674760005271  pdf   \n",
       "34  2949302674760005271  pdf   \n",
       "\n",
       "                                                 hash    size  \\\n",
       "4   58342470e7d666dca0be87a15fb0552f949a5632606fe1...  121131   \n",
       "53  214960a61e817387f01087f0b0b323cf1ebd8035fffcab...   48981   \n",
       "34  214960a61e817387f01087f0b0b323cf1ebd8035fffcab...   48981   \n",
       "\n",
       "                 date_acquired  document_convert_time source_filename  \\\n",
       "4   2025-10-03T16:20:41.866150              61.461330     granite.pdf   \n",
       "53  2025-10-03T16:19:40.360972              16.186138   attention.pdf   \n",
       "34  2025-10-03T16:19:40.360972              16.186138   attention.pdf   \n",
       "\n",
       "                      source_document_id  \\\n",
       "4   65999e4c-b0c3-4fc2-8a68-315d97f61bb7   \n",
       "53  3c534d48-9bde-4247-bfa6-8ae0c6f6e0aa   \n",
       "34  3c534d48-9bde-4247-bfa6-8ae0c6f6e0aa   \n",
       "\n",
       "                                             contents  \\\n",
       "4   ## 2.1 Data Crawling and Filtering\\n\\nThe pret...   \n",
       "53  ## 5.4 Regularization\\n\\nWe employ three types...   \n",
       "34  ## Attention Is All You Need\\n\\nAshish Vaswani...   \n",
       "\n",
       "                                          document_id  \\\n",
       "4   64f536c6e279db81ccda1ba14e35ca553e4250de346176...   \n",
       "53  da0d3c0141f14f51de889a843a3d2b70e60b680da1190c...   \n",
       "34  573d1e20f1f71713f4f318a4abdffbb5109437d4fc02a2...   \n",
       "\n",
       "                                           embeddings  \n",
       "4   [0.049523477, -0.015281111, 0.0320079, 0.04259...  \n",
       "53  [-0.0034285104, -1.0524575e-05, 0.031760532, -...  \n",
       "34  [-0.018462718, 0.053133022, -0.02388625, 0.042...  "
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from file_utils import read_parquet_files_as_df\n",
    "\n",
    "input_df = read_parquet_files_as_df(output_chunk_dir)\n",
    "output_df = read_parquet_files_as_df(output_embeddings_dir)\n",
    "\n",
    "print (\"Input data dimensions (rows x columns)= \", input_df.shape)\n",
    "print (\"Output data dimensions (rows x columns)= \", output_df.shape)\n",
    "\n",
    "output_df.sample(min(3, output_df.shape[0]))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8b80bc44",
   "metadata": {},
   "source": [
    "## Step-7: Copy output to final output dir"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "16dee3b8-31dc-4168-8adb-f2a0a0b5e207",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ Copied output from 'output/04_embeddings_out' --> 'output/output_final'\n"
     ]
    }
   ],
   "source": [
    "import shutil\n",
    "\n",
    "shutil.rmtree(MY_CONFIG.OUTPUT_FOLDER_FINAL, ignore_errors=True)\n",
    "shutil.copytree(src=output_embeddings_dir, dst=MY_CONFIG.OUTPUT_FOLDER_FINAL)\n",
    "\n",
    "print (f\"✅ Copied output from '{output_embeddings_dir}' --> '{MY_CONFIG.OUTPUT_FOLDER_FINAL}'\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
