{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "cda1b68e",
   "metadata": {},
   "source": [
    "# Processing PDFs using Data Prep Kit (Ray version)\n",
    "\n",
    " [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/data-prep-kit/data-prep-kit/blob/dev/examples/pdf-processing-1/pdf_processing_1_ray.ipynb)\n",
    "\n",
    "This notebook introduces Data Prep Kit (DPK) and showcases some of its capabilities.\n",
    "\n",
    "Here is the workflow:\n",
    "\n",
    "- docling2parquet: extract text from PDF documents\n",
    "- docid: compute document hashes and ids\n",
    "- exact dedupe: filter out identical documents\n",
    "- fuzzy dedupe: filter out 'near duplicates'\n",
    "- document quality: score documents for quality\n",
    "\n",
    "![](https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev//examples/pdf-processing-1/images/data-prep-kit-3-workflow.png)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "97e10909",
   "metadata": {},
   "source": [
    "## How to run this notebook\n",
    "\n",
    "Two options:\n",
    "\n",
    "- **Option 1 - Google Colab:** easiest option; no setup required. Click the badge to open this notebook in Google Colab.  [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/data-prep-kit/data-prep-kit/blob/dev/examples/pdf-processing-1/pdf_processing_1_ray.ipynb)\n",
    "- **Option 2 - Local python dev environment:**  Setup using this [guide](../../../README.md#-getting-started)\n",
    "\n",
    "The notebook works in both environments."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f5fd8d1e",
   "metadata": {},
   "source": [
    "## Step-1: Figure out Runtime Environment\n",
    "\n",
    "### 1.1 - Determine runtime\n",
    "\n",
    "Determine whether we are running on Google Colab or in a local Python environment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1ac451e4",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "if os.getenv(\"COLAB_RELEASE_TAG\"):\n",
    "   print(\"Running in Colab\")\n",
    "   RUNNING_IN_COLAB = True\n",
    "else:\n",
    "   print(\"NOT in Colab\")\n",
    "   RUNNING_IN_COLAB = False"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e192811d",
   "metadata": {},
   "outputs": [],
   "source": [
    "## Download any code files we may need\n",
    "\n",
    "if RUNNING_IN_COLAB:\n",
    "    !wget -O 'file_utils.py'   'https://raw.githubusercontent.com/sujee/data-prep-kit/examples-refactor-pdf-1a--1129/examples/utils/file_utils.py'"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "39a8157d",
   "metadata": {},
   "source": [
    "### 1.2 - Install dependencies if running on Google Colab"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "42dea41d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# %%capture\n",
    "\n",
    "import os\n",
    "\n",
    "if RUNNING_IN_COLAB:\n",
    "  ## setup a sandbox env to avoid conflicts with colab libraries\n",
    "  !pip install -q condacolab\n",
    "  import condacolab\n",
    "  condacolab.install()\n",
    "  !conda create -n my_env python=3.11 -y\n",
    "  !conda activate my_env\n",
    "  ## to install everything, use 'data-prep-toolkit-transforms[ray, all]==1.1.1.dev0'\n",
    "  !pip install  --default-timeout=100  \\\n",
    "        'data-prep-toolkit-transforms[ray, all]==1.1.1.dev0' \\\n",
    "        humanfriendly\n",
    "  ## terminate the current kernel, so we restart the runtime\n",
    "  os.kill(os.getpid(), 9)\n",
    "  ## restart the session"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1130a993",
   "metadata": {},
   "source": [
    "### 1.3 - Restart Runtime\n",
    "\n",
    "After installing dependencies, be sure to <font color=\"red\">restart the runtime</font> so the newly installed libraries are loaded.\n",
    "\n",
    "You can do this by going to **`Runtime --> Restart Session`**\n",
    "\n",
    "Then continue with the next step (no need to re-run the earlier cells)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ffd02da8",
   "metadata": {},
   "source": [
    "## Step-2: Configuration & Utils"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d616f843",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "if os.getenv(\"COLAB_RELEASE_TAG\"):\n",
    "   print(\"Running in Colab\")\n",
    "   RUNNING_IN_COLAB = True\n",
    "else:\n",
    "   print(\"NOT in Colab\")\n",
    "   RUNNING_IN_COLAB = False"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "589ebdfa",
   "metadata": {},
   "outputs": [],
   "source": [
    "## setup path to utils folder\n",
    "import sys\n",
    "sys.path.append('../utils')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a4ef6ec5",
   "metadata": {},
   "source": [
    "### 2.2 - Setup input/output directories"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bb9baf41",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os, sys\n",
    "import shutil\n",
    "\n",
    "if RUNNING_IN_COLAB:\n",
    "    input_dir = \"input\"\n",
    "    os.makedirs(input_dir, exist_ok=True)\n",
    "else:\n",
    "    input_dir = \"../data-files/pdf-processing-1/\"\n",
    "\n",
    "output_dir = \"output\"\n",
    "\n",
    "output_docling2pq_dir = os.path.join (output_dir, '01_docling2pq_out')\n",
    "output_docid_dir = os.path.join (output_dir, '02_docid_out')\n",
    "output_exact_dedupe_dir = os.path.join (output_dir, '03_exact_dedupe_out')\n",
    "output_fuzzy_dedupe_dir = os.path.join (output_dir, '04_fuzzy_dedupe_out')\n",
    "output_doc_quality_dir = os.path.join (output_dir, '05_doc_quality_out')\n",
    "output_final_dir = os.path.join (output_dir, 'output_final')\n",
    "\n",
    "## clear output folder\n",
    "shutil.rmtree(output_dir, ignore_errors=True)\n",
    "os.makedirs(output_dir, exist_ok=True)\n",
    "print (\"✅ Cleared output directory\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4f250a34",
   "metadata": {},
   "source": [
    "### 2.3 - Runtime Configuration"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cd7b41a7",
   "metadata": {},
   "outputs": [],
   "source": [
    "from data_processing.utils import GB\n",
    "\n",
    "CONFIG_RAY_NUM_CPUS = 1 # CPUs per worker\n",
    "CONFIG_RAY_MEMORY = 2 * GB  # memory per worker\n",
    "CONFIG_RAY_RUNTIME_WORKERS = 4"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "70689bcc",
   "metadata": {},
   "source": [
    "## Step-3: Inspect the Data\n",
    "\n",
    "We will use simple PDFs.  The files are [here](https://github.com/data-prep-kit/data-prep-kit/tree/dev/examples/data-files/pdf-processing-1/)\n",
    "\n",
    "- [earth.pdf](https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/earth.pdf) and its exact duplicate [earth-copy.pdf](https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/earth-copy.pdf)\n",
    "- [earth2.pdf](https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/earth2.pdf) - nearly identical to earth.pdf (ONE word difference!)\n",
    "- [mars.pdf](https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/mars.pdf)\n",
    "- [spam.pdf](https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/spam.pdf) - contains spammy content\n",
    "- [lorem-ipsum.pdf](https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/lorem-ipsum.pdf) - contains 'lorem ipsum' placeholder text\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0c626dc5",
   "metadata": {},
   "source": [
    "### 3.1 - Download Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "81bd9856",
   "metadata": {},
   "outputs": [],
   "source": [
    "from file_utils import download_file\n",
    "\n",
    "if RUNNING_IN_COLAB:\n",
    "\n",
    "    download_file ('https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/earth.pdf', os.path.join(input_dir, 'earth.pdf'))\n",
    "\n",
    "    download_file ('https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/earth-copy.pdf', os.path.join(input_dir, 'earth-copy.pdf'))\n",
    "\n",
    "    download_file ('https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/earth2.pdf', os.path.join(input_dir, 'earth2.pdf'))\n",
    "\n",
    "    download_file ('https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/mars.pdf', os.path.join(input_dir, 'mars.pdf'))\n",
    "\n",
    "    download_file ('https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/spam.pdf', os.path.join(input_dir, 'spam.pdf'))\n",
    "\n",
    "    download_file ('https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/lorem-ipsum.pdf', os.path.join(input_dir, 'lorem-ipsum.pdf'))\n",
    "else:\n",
    "    print ('Using input files from : ', input_dir)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2e6ebec9",
   "metadata": {},
   "source": [
    "## Step-4: Extract Data from PDF (docling2parquet)\n",
    "\n",
    "In this step, we will read PDF files and extract their text.\n",
    "\n",
    "[Docling2Parquet documentation](https://github.com/data-prep-kit/data-prep-kit/blob/dev/transforms/language/docling2parquet/README.md)\n",
    "\n",
    "We use the [Docling package](https://github.com/DS4SD/docling).\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e69f1ba5",
   "metadata": {},
   "source": [
    "### 4.1 - Execute"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2ee389cb",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "from dpk_docling2parquet.ray.transform import Docling2Parquet\n",
    "from dpk_docling2parquet.transform import docling2parquet_contents_types\n",
    "\n",
    "STAGE = 1\n",
    "print (f\"🏃🏼 STAGE-{STAGE}: Processing input='{input_dir}' --> output='{output_docling2pq_dir}'\\n\", flush=True)\n",
    "\n",
    "\n",
    "result =  Docling2Parquet(input_folder= input_dir,\n",
    "                    output_folder= output_docling2pq_dir,\n",
    "                    data_files_to_use=['.pdf'],\n",
    "                    docling2parquet_contents_type=docling2parquet_contents_types.MARKDOWN,   # markdown\n",
    "                    \n",
    "                    # runtime config\n",
    "                    run_locally= True,\n",
    "                    num_cpus= CONFIG_RAY_NUM_CPUS,\n",
    "                    memory= CONFIG_RAY_MEMORY,\n",
    "                    runtime_num_workers = CONFIG_RAY_RUNTIME_WORKERS,\n",
    "                    ).transform()\n",
    "\n",
    "if result == 0:\n",
    "    print (f\"✅ Stage:{STAGE} completed successfully\")\n",
    "else:\n",
    "    raise Exception (f\"❌ Stage:{STAGE}  failed\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8bcd1454",
   "metadata": {},
   "source": [
    "### 4.2 - Inspect Generated output\n",
    "\n",
    "Here we should see one entry per input file processed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1425c4a9",
   "metadata": {},
   "outputs": [],
   "source": [
    "from file_utils import read_parquet_files_as_df\n",
    "\n",
    "print (\"Displaying contents of : \", output_docling2pq_dir)\n",
    "output_df = read_parquet_files_as_df(output_docling2pq_dir)\n",
    "# print (\"Output dimensions (rows x columns)= \", output_df.shape)\n",
    "output_df.head(10)\n",
    "\n",
    "## To display only certain columns:\n",
    "# output_df[['column1', 'column2', 'column3']].head(5)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8685bcab",
   "metadata": {},
   "source": [
    "\n",
    "### 4.3 - Understand the output\n",
    "\n",
    "Here are some interesting attributes to note:\n",
    "\n",
    "- **filename** : original filename\n",
    "- **contents** : extracted text\n",
    "- **document_id** : unique id (UUID) assigned to this document\n",
    "- **document_hash** : hash of the document\n",
    "- **hash** : hash of the `contents` column\n",
    "- **pdf_convert_time** : time (in seconds) taken to convert this PDF\n",
    "\n",
    "**Note: the hash values are identical for the duplicate documents.**\n",
    "\n",
    "Let's inspect the **contents** column."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9d1d65a6",
   "metadata": {},
   "outputs": [],
   "source": [
    "print (output_df[output_df['filename'] == 'earth.pdf'].iloc[0,]['contents'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ff4edaa2",
   "metadata": {},
   "outputs": [],
   "source": [
    "print (output_df[output_df['filename'] == 'spam.pdf'].iloc[0,]['contents'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "62faa3cc",
   "metadata": {},
   "outputs": [],
   "source": [
    "print (output_df[output_df['filename'] == 'lorem-ipsum.pdf'].iloc[0,]['contents'])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0d1a66e0",
   "metadata": {},
   "source": [
    "## Step-5:  Create DOC ID for Documents\n",
    "\n",
    "This transform annotates documents with document \"ids\". It supports the following transformations of the original data:\n",
    "\n",
    " - Adding document hash: this enables the addition of a document hash-based id to the data. The hash is calculated with `hashlib.sha256(doc.encode(\"utf-8\")).hexdigest()`. To enable this annotation, set **hash_column** to the name of the column, where you want to store it.\n",
    " - Adding integer document id: this allows the addition of an integer document id to the data that is unique across all rows in all tables provided to the transform() method. To enable this annotation, set **int_id_column** to the name of the column, where you want to store it.\n",
    "\n",
    "**This step is a pre-requisite for fuzzy dedup** in the pipeline.\n",
    "\n",
    "[DocID documentation](https://github.com/data-prep-kit/data-prep-kit/tree/dev/transforms/universal/doc_id)"
   ]
  },
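  {
   "cell_type": "markdown",
   "id": "3fa92b1c",
   "metadata": {},
   "source": [
    "Before running the transform, here is a minimal sketch of how the hash-based id is computed, using the same `hashlib.sha256(doc.encode(\"utf-8\")).hexdigest()` call described above.  The sample strings below are made up for illustration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7c4e8d2a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import hashlib\n",
    "\n",
    "doc_a = \"Our solar system is a vast and fascinating expanse\"\n",
    "doc_b = \"Our solar system is a vast and fascinating expanse\"   # exact copy of doc_a\n",
    "doc_c = \"The solar system is a vast and fascinating expanse\"   # ONE word differs\n",
    "\n",
    "hash_a = hashlib.sha256(doc_a.encode(\"utf-8\")).hexdigest()\n",
    "hash_b = hashlib.sha256(doc_b.encode(\"utf-8\")).hexdigest()\n",
    "hash_c = hashlib.sha256(doc_c.encode(\"utf-8\")).hexdigest()\n",
    "\n",
    "## identical text produces identical hashes; changing even one word changes the hash completely\n",
    "print(\"doc_a / doc_b hashes match:\", hash_a == hash_b)   # True\n",
    "print(\"doc_a / doc_c hashes match:\", hash_a == hash_c)   # False"
   ]
  },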
  {
   "cell_type": "markdown",
   "id": "6d6b640e",
   "metadata": {},
   "source": [
    "### 5.1 - Execute"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a0abf2df",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "from dpk_doc_id.ray.transform import DocID\n",
    "\n",
    "STAGE = 2\n",
    "print (f\"🏃🏼 STAGE-{STAGE}: Processing input='{output_docling2pq_dir}' --> output='{output_docid_dir}'\\n\", flush=True)\n",
    "\n",
    "result = DocID(input_folder= output_docling2pq_dir,\n",
    "                output_folder= output_docid_dir,\n",
    "                doc_id_doc_column= \"contents\",\n",
    "                doc_id_hash_column= \"doc_hash\",\n",
    "                # doc_id_int_column= \"doc_id_int\",\n",
    "                doc_id_int_column= \"int_id_column\",\n",
    "                \n",
    "                # runtime config\n",
    "                run_locally= True,\n",
    "                num_cpus= CONFIG_RAY_NUM_CPUS,\n",
    "                memory= CONFIG_RAY_MEMORY,\n",
    "                runtime_num_workers = CONFIG_RAY_RUNTIME_WORKERS,\n",
    "                ).transform()\n",
    "        \n",
    "if result == 0:\n",
    "    print (f\"✅ Stage:{STAGE} completed successfully\")\n",
    "else:\n",
    "    raise Exception (f\"❌ Stage:{STAGE}  failed\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f4757220",
   "metadata": {},
   "source": [
    "### 5.2 - Inspect Generated output\n",
    "\n",
    "You will see new columns: **doc_hash** and **int_id_column**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "27344d73",
   "metadata": {},
   "outputs": [],
   "source": [
    "from file_utils import read_parquet_files_as_df\n",
    "print (\"Displaying contents of : \", output_docid_dir)\n",
    "output_df = read_parquet_files_as_df(output_docid_dir)\n",
    "output_df.head(10)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "95967b4d",
   "metadata": {},
   "source": [
    "## Step-6: Eliminate Duplicate Documents\n",
    "\n",
    "We have 2 exact duplicates: **earth.pdf** and **earth-copy.pdf**\n",
    "\n",
    "Note how the **doc_hash** for these documents is the same.\n",
    "\n",
    "[Exact dedupe information](https://github.com/data-prep-kit/data-prep-kit/tree/dev/transforms/universal/ededup)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0fde98b4",
   "metadata": {},
   "source": [
    "### 6.1 - Execute"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d1520c5c",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "from dpk_ededup.ray.transform import Ededup\n",
    "\n",
    "STAGE = 3\n",
    "print (f\"🏃🏼 STAGE-{STAGE}: Processing input='{output_docid_dir}' --> output='{output_exact_dedupe_dir}'\\n\", flush=True)\n",
    "\n",
    "result = Ededup(input_folder=output_docid_dir,\n",
    "                output_folder=output_exact_dedupe_dir,\n",
    "                ededup_doc_column=\"contents\",\n",
    "                ededup_doc_id_column=\"doc_hash\",\n",
    "                ededup_num_hashes= 2,\n",
    "                \n",
    "                # runtime config\n",
    "                run_locally= True,\n",
    "                num_cpus= CONFIG_RAY_NUM_CPUS,\n",
    "                memory= CONFIG_RAY_MEMORY,\n",
    "                runtime_num_workers = CONFIG_RAY_RUNTIME_WORKERS,\n",
    "                ).transform()\n",
    "\n",
    "if result == 0:\n",
    "    print (f\"✅ Stage:{STAGE} completed successfully\")\n",
    "else:\n",
    "    raise Exception (f\"❌ Stage:{STAGE}  failed\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d9889d82",
   "metadata": {},
   "source": [
    "### 6.2 - Inspect Generated output\n",
    "\n",
    "You will see that one of **earth.pdf** or **earth-copy.pdf** has been eliminated."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e638f727",
   "metadata": {},
   "outputs": [],
   "source": [
    "from file_utils import read_parquet_files_as_df\n",
    "input_df = read_parquet_files_as_df(output_docid_dir)\n",
    "output_df = read_parquet_files_as_df(output_exact_dedupe_dir)\n",
    "\n",
    "# print (\"Input data dimensions (rows x columns)= \", input_df.shape)\n",
    "# print (\"Output data dimensions (rows x columns)= \", output_df.shape)\n",
    "print (f\"Input files before exact dedupe : {input_df.shape[0]:,}\")\n",
    "print (f\"Output files after exact dedupe : {output_df.shape[0]:,}\")\n",
    "print (\"Duplicate files removed :  \", (input_df.shape[0] - output_df.shape[0]))\n",
    "\n",
    "print (\"Displaying contents of : \", output_exact_dedupe_dir)\n",
    "output_df.head(10)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b0318d44",
   "metadata": {},
   "source": [
    "## Step-7: Fuzzy Dedupe\n",
    "\n",
    "In the previous step, we removed **exact duplicates** (identical documents).\n",
    "\n",
    "Fuzzy dedupe can further filter out documents that are **not exactly identical, but nearly identical**.\n",
    "\n",
    "Here is a simple example:\n",
    "\n",
    "`Our solar system is a vast and fascinating expanse`\n",
    "\n",
    "`The solar system is a vast and fascinating expanse`\n",
    "\n",
    "Only one word is different `Our` vs `The`.\n",
    "\n",
    "Imagine two documents with one extra blank line.  For our purposes they are the same.\n",
    "\n",
    "[Fuzzy dedupe documentation](https://github.com/data-prep-kit/data-prep-kit/tree/dev/transforms/universal/fdedup)\n",
    "\n",
    "### Tweaking fuzzy matches\n",
    "\n",
    "**`jaccard_similarity_threshold`** is the parameter used to tune how similar two documents must be to count as duplicates.  Its value is between 0 and 1.0.  Values close to 1.0 mean stricter checking (fewer documents qualify as duplicates); a lower threshold means more lenient matching (more documents qualify).\n",
    "\n",
    "Adjust this value to find what works best for your documents."
   ]
  },
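  {
   "cell_type": "markdown",
   "id": "8e1f4a6b",
   "metadata": {},
   "source": [
    "As a rough intuition for how the threshold behaves, here is a minimal word-level Jaccard similarity sketch (for illustration only; the actual transform uses minhash banding rather than this direct computation):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5d9c2e7f",
   "metadata": {},
   "outputs": [],
   "source": [
    "def jaccard_similarity(text_a, text_b):\n",
    "    ## Jaccard similarity = |intersection| / |union| of the two word sets\n",
    "    words_a = set(text_a.lower().split())\n",
    "    words_b = set(text_b.lower().split())\n",
    "    return len(words_a & words_b) / len(words_a | words_b)\n",
    "\n",
    "sim = jaccard_similarity(\n",
    "    \"Our solar system is a vast and fascinating expanse\",\n",
    "    \"The solar system is a vast and fascinating expanse\")\n",
    "\n",
    "## 8 shared words out of 10 unique words total\n",
    "print(f\"Jaccard similarity = {sim:.2f}\")   # 0.80"
   ]
  },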
  {
   "cell_type": "markdown",
   "id": "dcb65830",
   "metadata": {},
   "source": [
    "### 7.1 - Execute"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b51c348f",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "from dpk_fdedup.ray.transform import Fdedup\n",
    "\n",
    "STAGE = 4\n",
    "print (f\"🏃🏼 STAGE-{STAGE}: Processing input='{output_exact_dedupe_dir}' --> output='{output_fuzzy_dedupe_dir}'\\n\", flush=True)\n",
    "\n",
    "result = Fdedup(input_folder=output_exact_dedupe_dir,\n",
    "                output_folder=output_fuzzy_dedupe_dir,\n",
    "                contents_column= \"contents\",\n",
    "                # document_id_column= \"doc_id\",\n",
    "                document_id_column= \"int_id_column\",\n",
    "                num_permutations= 112,\n",
    "                num_bands= 14,\n",
    "                num_minhashes_per_band= 8,\n",
    "                jaccard_similarity_threshold = 0.8, # between 0 - 1.  higher means more strict checking\n",
    "                operation_mode=\"filter_duplicates\",\n",
    "                # operation_mode=\"annotate\",\n",
    "                \n",
    "                # runtime config\n",
    "                run_locally= True,\n",
    "                ).transform()\n",
    "\n",
    "if result == 0:\n",
    "    print (f\"✅ Stage:{STAGE} completed successfully\")\n",
    "else:\n",
    "    raise Exception (f\"❌ Stage:{STAGE}  failed (result={result})\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1f2b4cbc",
   "metadata": {},
   "source": [
    "### 7.2 - Inspect Output\n",
    "\n",
    "FuzzyDedupe writes the documents that survive filtering to the **output/04_fuzzy_dedupe_out/cleaned** folder.\n",
    "\n",
    "You will notice only one **earth.pdf** made it through!  So fuzzy dedupe filtered out the nearly identical document."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7bd62f93",
   "metadata": {},
   "outputs": [],
   "source": [
    "from file_utils import read_parquet_files_as_df\n",
    "input_df = read_parquet_files_as_df(output_exact_dedupe_dir)\n",
    "output_df = read_parquet_files_as_df(os.path.join(output_fuzzy_dedupe_dir, \"cleaned\"))\n",
    "\n",
    "# print (\"Input data dimensions (rows x columns)= \", input_df.shape)\n",
    "# print (\"Output data dimensions (rows x columns)= \", output_df.shape)\n",
    "print (f\"Input files before fuzzy dedupe : {input_df.shape[0]:,}\")\n",
    "print (f\"Output files after fuzzy dedupe : {output_df.shape[0]:,}\")\n",
    "print (\"Near duplicate files removed :  \", (input_df.shape[0] - output_df.shape[0]))\n",
    "\n",
    "print (\"Displaying contents of : \", output_fuzzy_dedupe_dir)\n",
    "output_df.head(10)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a7320fb9",
   "metadata": {},
   "source": [
    "## Step-8: Document Quality\n",
    "\n",
    "This handy transform scores documents across many metrics.\n",
    "\n",
    "Here we will look at the 'bad words' metric.\n",
    "\n",
    "[Document quality documentation](https://github.com/data-prep-kit/data-prep-kit/tree/dev/transforms/language/doc_quality)\n",
    "\n",
    "By default it uses [bad words collection](https://github.com/data-prep-kit/data-prep-kit/tree/dev/transforms/language/doc_quality/dpk_doc_quality/ldnoobw).  You can supply a custom file by passing an argument `bad_word_filepath=/path/to/badwords_file`"
   ]
  },
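  {
   "cell_type": "markdown",
   "id": "2b7d9f3e",
   "metadata": {},
   "source": [
    "As a rough illustration of the kind of word-list check involved (a simplified sketch for intuition only, not the transform's actual implementation):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9a4c1b5d",
   "metadata": {},
   "outputs": [],
   "source": [
    "def contains_bad_word(text, bad_words):\n",
    "    ## flag a document if any whitespace-separated token is in the bad-word list\n",
    "    return any(word in bad_words for word in text.lower().split())\n",
    "\n",
    "BAD_WORDS = {\"lottery\", \"winner\"}   # toy list for illustration only\n",
    "\n",
    "print(contains_bad_word(\"Congratulations! You are a lottery winner\", BAD_WORDS))   # True\n",
    "print(contains_bad_word(\"Mars is the fourth planet from the Sun\", BAD_WORDS))   # False"
   ]
  },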
  {
   "cell_type": "markdown",
   "id": "299ee9ec",
   "metadata": {},
   "source": [
    "### 8.1 - Execute"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d7f781f3",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "from dpk_doc_quality.ray.transform import DocQuality\n",
    "\n",
    "STAGE = 5\n",
    "output_fuzzy_dedupe_cleaned_dir = os.path.join(output_fuzzy_dedupe_dir, \"cleaned\")\n",
    "print (f\"🏃🏼 STAGE-{STAGE}: Processing input='{output_fuzzy_dedupe_cleaned_dir}' --> output='{output_doc_quality_dir}'\\n\", flush=True)\n",
    "\n",
    "result = DocQuality(input_folder=output_fuzzy_dedupe_cleaned_dir,\n",
    "                    output_folder= output_doc_quality_dir,\n",
    "                    docq_text_lang = \"en\",\n",
    "                    docq_doc_content_column =\"contents\",\n",
    "                    \n",
    "                    # runtime config\n",
    "                    run_locally= True,\n",
    "                    num_cpus= CONFIG_RAY_NUM_CPUS,\n",
    "                    memory= CONFIG_RAY_MEMORY,\n",
    "                    runtime_num_workers = CONFIG_RAY_RUNTIME_WORKERS,\n",
    "                    ).transform()\n",
    "\n",
    "if result == 0:\n",
    "    print (f\"✅ Stage:{STAGE} completed successfully\")\n",
    "else:\n",
    "    raise Exception (f\"❌ Stage:{STAGE}  failed (result={result})\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "610eb298",
   "metadata": {},
   "source": [
    "### 8.2 - Inspect the Output\n",
    "\n",
    "We will see several new columns starting with the name **docq_**.\n",
    "\n",
    "Look at the column **docq_contain_bad_word**; it flags documents containing 'bad words'.\n",
    "\n",
    "Also inspect the column **docq_lorem_ipsum_ratio**; it flags documents containing 'lorem ipsum' placeholder text.\n",
    "\n",
    "For more information see : [Doc Quality documentation](https://github.com/data-prep-kit/data-prep-kit/tree/dev/transforms/language/doc_quality)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "91eef1e5",
   "metadata": {},
   "outputs": [],
   "source": [
    "from file_utils import read_parquet_files_as_df\n",
    "output_df = read_parquet_files_as_df(output_doc_quality_dir)\n",
    "print (\"Displaying contents of : \", output_doc_quality_dir)\n",
    "output_df.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8646465c",
   "metadata": {},
   "source": [
    "### 8.3 - Filtering 'quality' documents\n",
    "\n",
    "From the output above we see that **spam.pdf** is flagged for containing bad words (**docq_contain_bad_word=True**).\n",
    "\n",
    "Also, **lorem-ipsum.pdf** is flagged for placeholder content, **lorem ipsum** (**docq_lorem_ipsum_ratio > 0**).\n",
    "\n",
    "We are going to filter both of them out."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a95a4449",
   "metadata": {},
   "outputs": [],
   "source": [
    "from file_utils import read_parquet_files_as_df\n",
    "all_docs_df = read_parquet_files_as_df(output_doc_quality_dir)\n",
    "\n",
    "# remove documents with badwords\n",
    "clean_docs_df = all_docs_df[all_docs_df['docq_contain_bad_word'] == False]\n",
    "\n",
    "# also filter out 'lorem ipsum' text\n",
    "clean_docs_df = clean_docs_df[clean_docs_df['docq_lorem_ipsum_ratio'] == 0]\n",
    "\n",
    "clean_docs_df.head(10)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f41ddd63",
   "metadata": {},
   "source": [
    "## Step-9: Copy output to final output dir"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ff92e5a3",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import shutil\n",
    "\n",
    "shutil.rmtree(output_final_dir, ignore_errors=True)\n",
    "os.makedirs(output_final_dir, exist_ok=True)\n",
    "\n",
    "output_final_dir_parquet = os.path.join (output_final_dir, 'pq')\n",
    "os.makedirs(output_final_dir_parquet, exist_ok=True)\n",
    "\n",
    "output_final_dir_markdown = os.path.join (output_final_dir, 'markdown')\n",
    "os.makedirs(output_final_dir_markdown, exist_ok=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2e9870d5",
   "metadata": {},
   "outputs": [],
   "source": [
    "## save parquet\n",
    "\n",
    "clean_docs_df.to_parquet(os.path.join(output_final_dir_parquet, \"clean_docs.parquet\"))\n",
    "print (f\"✅ Saved CLEAN parquet output to '{output_final_dir_parquet}'\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b04ac724",
   "metadata": {},
   "outputs": [],
   "source": [
    "## save markdown text\n",
    "\n",
    "for index, row in clean_docs_df.iterrows():\n",
    "    output_file_name = os.path.join (output_final_dir_markdown, row['filename'] + '.md')\n",
    "    with open(output_file_name, 'w') as output_file:\n",
    "        output_file.write(row['contents'])\n",
    "\n",
    "print (f\"✅ Saved CLEAN markdown output to '{output_final_dir_markdown}'\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c3c0f793-94b0-4981-a0eb-2a47116bb24a",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
