{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "841e533d-ebb3-406d-9da7-b19e2c5f5866",
   "metadata": {
    "id": "841e533d-ebb3-406d-9da7-b19e2c5f5866"
   },
   "source": [
    "# Processing PDFs using Data Prep Kit\n",
    "\n",
     "This notebook introduces DPK and showcases some of its capabilities.\n",
    "\n",
    "Here is the workflow:\n",
    "\n",
     "- docling2parquet: extract text from PDF documents\n",
     "- doc_id: compute document hashes\n",
     "- exact dedupe: filter out identical documents\n",
     "- fuzzy dedupe: filter out near-duplicates\n",
     "- document quality: score documents for quality\n",
    "\n",
    "![](https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/pdf-processing-1/images/data-prep-kit-3-workflow.png)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b15976e3",
   "metadata": {
    "id": "b15976e3"
   },
   "source": [
    "## How to run this notebook\n",
    "\n",
     "If you have Python 3.11 or higher on your machine, you can download this notebook and run it locally in a Python virtual environment, set up as follows:\n",
    "\n",
    "```\n",
    "python -m venv venv\n",
    "source venv/bin/activate\n",
    "pip install jupyterlab\n",
    "jupyter lab pdf_processing_python.ipynb\n",
    "```\n",
    "\n",
     "For a more advanced setup, please see the setup [guide](../../doc/quick-start/quick-start.md).\n",
    "\n",
     "An earlier version of this notebook was tested successfully on Google Colab. However, continuous changes in the Colab environment could introduce unexpected behavior or breakage. If you wish to try the Colab environment, click [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/data-prep-kit/data-prep-kit/blob/dev/recipes/DPK-sequence/pdf_processing_python.ipynb).\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c05f3e71",
   "metadata": {},
   "source": [
     "## Step-1: Install DPK and helper packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2ebee270",
   "metadata": {},
   "outputs": [],
   "source": [
    "#%%capture cap --no-stderr\n",
    "#%pip install 'data-prep-toolkit-transforms[language]' tqdm humanfriendly\n",
    "%pip install \"data-prep-toolkit @ git+https://github.com/touma-I/data-prep-kit-pkg@numpy-dependencies#subdirectory=data-processing-lib\"\n",
    "%pip install \"data-prep-toolkit-transforms[language] @ git+https://github.com/touma-I/data-prep-kit-pkg@numpy-dependencies#subdirectory=transforms\"\n",
    "%pip install tqdm humanfriendly\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e8b10be1",
   "metadata": {
    "id": "e8b10be1"
   },
   "source": [
     "## Step-2: Set up input/output directories"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "60ac8bee-0960-4309-b225-d7a211b14262",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "60ac8bee-0960-4309-b225-d7a211b14262",
    "outputId": "2a44fe91-9b87-49ec-b385-58bdcd2ba9af"
   },
   "outputs": [],
   "source": [
    "%%capture cap --no-stderr\n",
    "import os, sys\n",
    "import urllib.request\n",
    "import shutil\n",
    "import pandas as pd\n",
    "import glob\n",
    "\n",
     "os.makedirs(\"tmp/input\", exist_ok=True)\n",
    "urllib.request.urlretrieve(\"https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/earth.pdf\", \"tmp/input/earth.pdf\")\n",
    "urllib.request.urlretrieve(\"https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/earth-copy.pdf\", \"tmp/input/earth-copy.pdf\")\n",
    "urllib.request.urlretrieve(\"https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/earth2.pdf\", \"tmp/input/earth2.pdf\")\n",
    "urllib.request.urlretrieve(\"https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/mars.pdf\", \"tmp/input/mars.pdf\")\n",
    "urllib.request.urlretrieve(\"https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/spam.pdf\", \"tmp/input/spam.pdf\")\n",
    "urllib.request.urlretrieve(\"https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/lorem-ipsum.pdf\", \"tmp/input/lorem-ipsum.pdf\")\n",
    "\n",
    "input_dir = \"tmp/input\"\n",
    "output_dir = \"output\"\n",
    "\n",
    "\n",
     "output_docling2pq_dir = os.path.join(output_dir, '01_docling2pq_out')\n",
     "output_docid_dir = os.path.join(output_dir, '02_docid_out')\n",
     "output_exact_dedupe_dir = os.path.join(output_dir, '03_exact_dedupe_out')\n",
     "output_fuzzy_dedupe_dir = os.path.join(output_dir, '04_fuzzy_dedupe_out')\n",
     "output_doc_quality_dir = os.path.join(output_dir, '05_doc_quality_out')\n",
     "output_final_dir = os.path.join(output_dir, 'output_final')\n",
     "\n",
     "## clear output folder\n",
     "shutil.rmtree(output_dir, ignore_errors=True)\n",
     "os.makedirs(output_dir, exist_ok=True)\n",
     "print(\"✅ Cleared output directory\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dc1972c3",
   "metadata": {
    "id": "dc1972c3"
   },
   "source": [
    "## Step-3: Inspect the Data\n",
    "\n",
    "We will use simple PDFs.  The files are [here](https://github.com/data-prep-kit/data-prep-kit/tree/dev/examples/data-files/pdf-processing-1/)\n",
    "\n",
    "- [earth.pdf](https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/earth.pdf) and exact duplicate [earth-copy.pdf](https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/earth-copy.pdf)\n",
     "- [earth2.pdf](https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/earth2.pdf) - nearly identical to earth.pdf (only ONE word is different!)\n",
     "- [mars.pdf](https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/mars.pdf)\n",
     "- [spam.pdf](https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/spam.pdf) - contains spam-like content\n",
     "- [lorem-ipsum.pdf](https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/pdf-processing-1/lorem-ipsum.pdf) - contains 'lorem ipsum' placeholder text\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2449e5c7-078c-4ad6-a2f6-21d39d4da3fb",
   "metadata": {
    "id": "2449e5c7-078c-4ad6-a2f6-21d39d4da3fb"
   },
   "source": [
    "## Step-4: Extract Data from PDF (docling2parquet)\n",
    "\n",
     "In this step, we read the PDF files and extract their text.\n",
    "\n",
    "[Docling2Parquet documentation](https://github.com/data-prep-kit/data-prep-kit/blob/dev/transforms/language/docling2parquet/README.md)\n",
    "\n",
    "We use the [Docling package](https://github.com/DS4SD/docling).\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9bb15f02-ab5c-4525-a536-cfa1fd2ba70b",
   "metadata": {
    "id": "9bb15f02-ab5c-4525-a536-cfa1fd2ba70b"
   },
   "source": [
    "### 4.1 - Execute"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b0cd8ebd-bf71-42d6-a397-8df0c7b66a26",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "b0cd8ebd-bf71-42d6-a397-8df0c7b66a26",
    "outputId": "c3f18efd-2f3d-4ab8-88e3-6976f09ed5ac"
   },
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "from dpk_docling2parquet import Docling2Parquet\n",
    "from dpk_docling2parquet import docling2parquet_contents_types\n",
    "\n",
    "STAGE = 1\n",
    "print (f\"🏃🏼 STAGE-{STAGE}: Processing input='{input_dir}' --> output='{output_docling2pq_dir}'\\n\", flush=True)\n",
    "\n",
    "result = Docling2Parquet(input_folder= input_dir,\n",
    "                    output_folder= output_docling2pq_dir,\n",
    "                    data_files_to_use=['.pdf'],\n",
    "                    docling2parquet_contents_type=docling2parquet_contents_types.MARKDOWN,   # markdown\n",
    "                    ).transform()\n",
    "\n",
    "if result == 0:\n",
    "    print (f\"✅ Stage:{STAGE} completed successfully\")\n",
    "else:\n",
    "    raise Exception (f\"❌ Stage:{STAGE}  failed\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5ca790e0",
   "metadata": {
    "id": "5ca790e0"
   },
   "source": [
    "### 4.2 - Inspect Generated output\n",
    "\n",
    "Here we should see one entry per input file processed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "60feb712-ba31-42c2-b0a3-75e394ff8538",
   "metadata": {},
   "outputs": [],
   "source": [
    "output_df = pd.concat(\n",
    "    (pd.read_parquet(parquet_file)\n",
    "    for parquet_file in glob.glob(f\"{output_docling2pq_dir}/*.parquet\")),\n",
    "    ignore_index=True\n",
    ")\n",
    "output_df.head(10)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e5058a21",
   "metadata": {
    "id": "e5058a21"
   },
   "source": [
    "\n",
    "### 4.3 - Understand the output\n",
    "\n",
    "Here are some interesting attributes to note:\n",
    "\n",
     "- **filename**: original filename\n",
     "- **contents**: extracted text\n",
     "- **document_id**: unique id (UUID) assigned to this document\n",
     "- **document_hash**: hash of the document\n",
     "- **hash**: hash of the `contents` column\n",
     "- **pdf_convert_time**: time (in seconds) to convert this PDF\n",
    "\n",
     "**Note: the hash values are identical for the duplicate documents.**\n",
    "\n",
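     "A quick way to confirm this from the dataframe loaded above is to group by the hash column:\n",
     "\n",
     "```python\n",
     "dupes = output_df.groupby('hash')['filename'].apply(list)\n",
     "print(dupes[dupes.str.len() > 1])  # hashes shared by more than one file\n",
     "```\n",
     "\n",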
    "Let's inspect the **contents** column."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f870e624",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "f870e624",
    "outputId": "698561bb-676e-42d8-c37d-74d161f8a2ce"
   },
   "outputs": [],
   "source": [
     "print(output_df[output_df['filename'] == 'earth.pdf'].iloc[0]['contents'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e1a10c2d",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "e1a10c2d",
    "outputId": "a38e6f9a-8ae4-4643-ba6c-e9123a4f1322"
   },
   "outputs": [],
   "source": [
     "print(output_df[output_df['filename'] == 'spam.pdf'].iloc[0]['contents'])\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b37dd994",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "b37dd994",
    "outputId": "e6611294-d2f9-4dd1-e6d5-d07c5dc2614b"
   },
   "outputs": [],
   "source": [
     "print(output_df[output_df['filename'] == 'lorem-ipsum.pdf'].iloc[0]['contents'])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7fc86d5b",
   "metadata": {
    "id": "7fc86d5b"
   },
   "source": [
    "## Step-5:  Create DOC ID for Documents\n",
    "\n",
    "This transform annotates documents with document \"ids\". It supports the following transformations of the original data:\n",
    "\n",
     " - Adding a document hash: this adds a hash-based id to the data. The hash is calculated with `hashlib.sha256(doc.encode(\"utf-8\")).hexdigest()`. To enable this annotation, set **hash_column** to the name of the column where you want to store it.\n",
     " - Adding an integer document id: this adds an integer id that is unique across all rows in all tables provided to the transform() method. To enable this annotation, set **int_id_column** to the name of the column where you want to store it.\n",
    "\n",
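     "As a standard-library sketch, the hash-based id described above amounts to:\n",
     "\n",
     "```python\n",
     "import hashlib\n",
     "\n",
     "doc = \"Our solar system is a vast and fascinating expanse\"\n",
     "doc_hash = hashlib.sha256(doc.encode(\"utf-8\")).hexdigest()\n",
     "print(doc_hash)  # 64-character hex digest; identical text always yields the same hash\n",
     "```\n",
     "\n",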
    "**This step is a pre-requisite for fuzzy dedup** in the pipeline.\n",
    "\n",
    "[DocID documentation](https://github.com/data-prep-kit/data-prep-kit/tree/dev/transforms/universal/doc_id)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f516a253",
   "metadata": {
    "id": "f516a253"
   },
   "source": [
    "### 5.1 - Execute"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cee20521",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "cee20521",
    "outputId": "4845cd77-ee87-490f-e2ff-2c3fc2d6e33d"
   },
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "from dpk_doc_id import DocID\n",
    "\n",
    "STAGE = 2\n",
    "print (f\"🏃🏼 STAGE-{STAGE}: Processing input='{output_docling2pq_dir}' --> output='{output_docid_dir}'\\n\", flush=True)\n",
    "\n",
    "result = DocID(input_folder= output_docling2pq_dir,\n",
    "                output_folder= output_docid_dir,\n",
    "                doc_id_doc_column= \"contents\",\n",
    "                doc_id_hash_column= \"doc_hash\",\n",
    "                # doc_id_int_column= \"doc_id_int\",\n",
    "                doc_id_int_column= \"int_id_column\",\n",
    "                #doc_id_start_id= 5\n",
    "                ).transform()\n",
    "\n",
    "if result == 0:\n",
    "    print (f\"✅ Stage:{STAGE} completed successfully\")\n",
    "else:\n",
    "    raise Exception (f\"❌ Stage:{STAGE}  failed\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4bd6f382",
   "metadata": {
    "id": "4bd6f382"
   },
   "source": [
    "### 5.2 - Inspect Generated output\n",
    "\n",
     "You should see two new columns: **doc_hash** and **int_id_column**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3d4aba9",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 675
    },
    "id": "f3d4aba9",
    "outputId": "37cd94a3-d77e-47db-805b-a3dc82c28052"
   },
   "outputs": [],
   "source": [
    "docid_df = pd.concat(\n",
    "    (pd.read_parquet(parquet_file)\n",
    "    for parquet_file in glob.glob(f\"{output_docid_dir}/*.parquet\")),\n",
    "    ignore_index=True\n",
    ")\n",
    "docid_df.head(10)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c55f8d3f",
   "metadata": {
    "id": "c55f8d3f"
   },
   "source": [
    "## Step-6: Eliminate Duplicate Documents\n",
    "\n",
    "We have 2 exact duplicates: **earth.pdf** , **earth-copy.pdf**\n",
    "\n",
     "Note how the **doc_hash** values for these documents are identical.\n",
    "\n",
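     "Conceptually, exact dedupe keeps only the first document seen for each hash. A minimal sketch of the idea (not the transform's actual implementation):\n",
     "\n",
     "```python\n",
     "seen = set()\n",
     "unique_docs = []\n",
     "for doc_hash, name in [('h1', 'earth.pdf'), ('h1', 'earth-copy.pdf'), ('h2', 'mars.pdf')]:\n",
     "    if doc_hash not in seen:\n",
     "        seen.add(doc_hash)\n",
     "        unique_docs.append(name)\n",
     "print(unique_docs)  # ['earth.pdf', 'mars.pdf']\n",
     "```\n",
     "\n",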
    "[Exact dedupe information](https://github.com/data-prep-kit/data-prep-kit/tree/dev/transforms/universal/ededup)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6f5ef1f7",
   "metadata": {
    "id": "6f5ef1f7"
   },
   "source": [
    "### 6.1 - Execute"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "90eddb4c",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "90eddb4c",
    "outputId": "0e3c4856-8bad-4581-b6ac-b975417c552d"
   },
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "from dpk_ededup import Ededup\n",
    "\n",
    "STAGE = 3\n",
    "print (f\"🏃🏼 STAGE-{STAGE}: Processing input='{output_docid_dir}' --> output='{output_exact_dedupe_dir}'\\n\", flush=True)\n",
    "\n",
    "result = Ededup(input_folder=output_docid_dir,\n",
    "                output_folder=output_exact_dedupe_dir,\n",
    "                ededup_doc_column=\"contents\",\n",
    "                ededup_doc_id_column=\"doc_hash\"\n",
    "                ).transform()\n",
    "\n",
    "if result == 0:\n",
    "    print (f\"✅ Stage:{STAGE} completed successfully\")\n",
    "else:\n",
    "    raise Exception (f\"❌ Stage:{STAGE}  failed\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f4aacf09",
   "metadata": {
    "id": "f4aacf09"
   },
   "source": [
    "### 6.2 - Inspect Generated output\n",
    "\n",
     "You will see that one of **earth.pdf** / **earth-copy.pdf** has been eliminated."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "060528f9-3eb0-4604-83ff-39a99aa2dcce",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 675
    },
    "id": "f3d4aba9",
    "outputId": "37cd94a3-d77e-47db-805b-a3dc82c28052"
   },
   "outputs": [],
   "source": [
    "dedup_df = pd.concat(\n",
    "    (pd.read_parquet(parquet_file)\n",
    "    for parquet_file in glob.glob(f\"{output_exact_dedupe_dir}/*.parquet\")),\n",
    "    ignore_index=True\n",
    ")\n",
    "dedup_df.head(10)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1887b26d",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 647
    },
    "id": "1887b26d",
    "outputId": "8bc34f38-639e-4475-bf40-0dc0236f4a97"
   },
   "outputs": [],
   "source": [
     "print(\"Duplicate files removed: \", docid_df.shape[0] - dedup_df.shape[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "76ea34e2",
   "metadata": {
    "id": "76ea34e2"
   },
   "source": [
    "## Step-7: Fuzzy Dedupe\n",
    "\n",
     "In the previous step, we removed **exact duplicates (identical documents)**.\n",
     "\n",
     "Fuzzy dedupe can further filter out documents that are **not exactly identical, but nearly so**.\n",
    "\n",
    "Here is a simple example:\n",
    "\n",
    "`Our solar system is a vast and fascinating expanse`\n",
    "\n",
    "`The solar system is a vast and fascinating expanse`\n",
    "\n",
     "Only one word differs: `Our` vs `The`.\n",
    "\n",
    "Imagine two documents with one extra blank line.  For our purposes they are the same.\n",
    "\n",
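     "To build intuition, here is the word-set Jaccard similarity of the two example sentences above (an illustrative calculation only; the transform itself estimates similarity with MinHash):\n",
     "\n",
     "```python\n",
     "a = set(\"Our solar system is a vast and fascinating expanse\".lower().split())\n",
     "b = set(\"The solar system is a vast and fascinating expanse\".lower().split())\n",
     "print(len(a & b) / len(a | b))  # 8 shared words / 10 distinct words = 0.8\n",
     "```\n",
     "\n",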
    "[Fuzzy dedupe documentation](https://github.com/data-prep-kit/data-prep-kit/tree/dev/transforms/universal/fdedup)\n",
    "\n",
    "### Tweaking fuzzy matches\n",
    "\n",
     "**`jaccard_similarity_threshold`** is the parameter used to tune how similar documents must be to count as duplicates. Its value is between 0 and 1.0. Values close to 1.0 mean stricter checking (fewer documents will qualify); a lower threshold means more lenient matching (more documents will qualify).\n",
    "\n",
     "Adjust this value to find what works best for your documents."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "79a37713",
   "metadata": {
    "id": "79a37713"
   },
   "source": [
    "### 7.1 - Execute"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "37430b60",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "37430b60",
    "outputId": "d630d247-3abb-4eb4-8caa-ddd2f2380149"
   },
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "from dpk_fdedup import Fdedup\n",
    "\n",
    "STAGE = 4\n",
    "print (f\"🏃🏼 STAGE-{STAGE}: Processing input='{output_exact_dedupe_dir}' --> output='{output_fuzzy_dedupe_dir}'\\n\", flush=True)\n",
    "\n",
    "result = Fdedup(input_folder=output_exact_dedupe_dir,\n",
    "                output_folder=output_fuzzy_dedupe_dir,\n",
    "                contents_column= \"contents\",\n",
    "                # document_id_column= \"doc_id\",\n",
    "                document_id_column= \"int_id_column\",\n",
    "                num_permutations= 112,\n",
    "                num_bands= 14,\n",
    "                num_minhashes_per_band= 8,\n",
    "                jaccard_similarity_threshold = 0.8, # between 0 - 1.  higher means more strict checking\n",
    "                operation_mode=\"filter_duplicates\",\n",
    "                # operation_mode=\"annotate\",\n",
    "                ).transform()\n",
    "if result == 0:\n",
    "    print (f\"✅ Stage:{STAGE} completed successfully\")\n",
    "else:\n",
    "    raise Exception (f\"❌ Stage:{STAGE}  failed (result={result})\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b2c83592",
   "metadata": {
    "id": "b2c83592"
   },
   "source": [
    "### 7.2 - Inspect Output\n",
    "\n",
     "Fuzzy dedupe writes the surviving documents to the **output/04_fuzzy_dedupe_out/cleaned** folder.\n",
     "\n",
     "You will notice only one earth document made it through! Fuzzy dedupe filtered out the nearly identical **earth2.pdf**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "90ad5bfa-cdc5-4232-93c8-63a648ba471a",
   "metadata": {},
   "outputs": [],
   "source": [
    "fdedup_df = pd.concat(\n",
    "    (pd.read_parquet(parquet_file)\n",
    "    for parquet_file in glob.glob(f\"{output_fuzzy_dedupe_dir}/cleaned/*.parquet\")),\n",
    "    ignore_index=True\n",
    ")\n",
    "fdedup_df.head(10)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "83199b65-dda7-48fd-8aed-7292438258b0",
   "metadata": {},
   "outputs": [],
   "source": [
     "print(\"Near-duplicate files removed: \", dedup_df.shape[0] - fdedup_df.shape[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3e0598a0",
   "metadata": {
    "id": "3e0598a0"
   },
   "source": [
    "## Step-8: Document Quality\n",
    "\n",
     "This transform scores documents across many quality metrics.\n",
     "\n",
     "Here we will look at the 'bad words' metric.\n",
    "\n",
    "[Document quality documentation](https://github.com/data-prep-kit/data-prep-kit/tree/dev/transforms/language/doc_quality)\n",
    "\n",
     "By default it uses the built-in [bad words collection](https://github.com/data-prep-kit/data-prep-kit/tree/dev/transforms/language/doc_quality/dpk_doc_quality/ldnoobw). You can supply a custom file by passing the argument `bad_word_filepath=/path/to/badwords_file`."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1949c2c4",
   "metadata": {
    "id": "1949c2c4"
   },
   "source": [
    "### 8.1 - Execute"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b485f598",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "b485f598",
    "outputId": "8b58bb29-b9e4-4986-8543-578619878eda"
   },
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "from dpk_doc_quality import DocQuality\n",
    "\n",
    "STAGE = 5\n",
    "output_fuzzy_dedupe_cleaned_dir = os.path.join(output_fuzzy_dedupe_dir, \"cleaned\")\n",
    "print (f\"🏃🏼 STAGE-{STAGE}: Processing input='{output_fuzzy_dedupe_cleaned_dir}' --> output='{output_doc_quality_dir}'\\n\", flush=True)\n",
    "\n",
    "result = DocQuality(input_folder=output_fuzzy_dedupe_cleaned_dir,\n",
    "                    output_folder= output_doc_quality_dir,\n",
    "                    docq_text_lang = \"en\",\n",
    "                    docq_doc_content_column =\"contents\",\n",
    "                    ).transform()\n",
    "\n",
    "if result == 0:\n",
    "    print (f\"✅ Stage:{STAGE} completed successfully\")\n",
    "else:\n",
    "    raise Exception (f\"❌ Stage:{STAGE}  failed (result={result})\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eccefd3e",
   "metadata": {
    "id": "eccefd3e"
   },
   "source": [
    "### 8.2 - Inspect the Output\n",
    "\n",
     "We will see several new columns whose names start with **docq_**.\n",
     "\n",
     "Look at the column **docq_contain_bad_word**; it flags documents containing 'bad words'.\n",
     "\n",
     "Also inspect the column **docq_lorem_ipsum_ratio**; it flags documents containing 'lorem ipsum' text.\n",
    "\n",
    "For more information see : [Doc Quality documentation](https://github.com/data-prep-kit/data-prep-kit/tree/dev/transforms/language/doc_quality)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1f3225f8",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 503
    },
    "id": "1f3225f8",
    "outputId": "fec84677-ab4b-4b76-86e2-63f7681c6b97"
   },
   "outputs": [],
   "source": [
    "docquality_df = pd.concat(\n",
    "    (pd.read_parquet(parquet_file)\n",
    "    for parquet_file in glob.glob(f\"{output_doc_quality_dir}/*.parquet\")),\n",
    "    ignore_index=True\n",
    ")\n",
    "docquality_df.head(10)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "02fa3bd2",
   "metadata": {
    "id": "02fa3bd2"
   },
   "source": [
    "### 8.3 - Filtering 'quality' documents\n",
    "\n",
     "From the output above we see that **spam.pdf** is flagged for containing bad words (**docq_contain_bad_word=True**).\n",
     "\n",
     "Also, **lorem-ipsum.pdf** is flagged for containing the placeholder text **lorem ipsum** (**docq_lorem_ipsum_ratio > 0**).\n",
     "\n",
     "We are going to filter them both out."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5dac1c70",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 318
    },
    "id": "5dac1c70",
    "outputId": "9f5dca99-6416-405e-d570-92119eb32882"
   },
   "outputs": [],
   "source": [
    "# remove documents with badwords\n",
    "clean_docs_df = docquality_df[docquality_df['docq_contain_bad_word'] == False]\n",
    "\n",
    "# also filter out 'lorem ipsum' text\n",
    "clean_docs_df = clean_docs_df[clean_docs_df['docq_lorem_ipsum_ratio'] == 0]\n",
    "\n",
    "clean_docs_df.head(10)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f5e12630-be6b-4188-a925-77117155617b",
   "metadata": {
    "id": "f5e12630-be6b-4188-a925-77117155617b"
   },
   "source": [
     "## Step-9: Save the final output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "16dee3b8-31dc-4168-8adb-f2a0a0b5e207",
   "metadata": {
    "id": "16dee3b8-31dc-4168-8adb-f2a0a0b5e207"
   },
   "outputs": [],
   "source": [
     "import os, shutil\n",
     "\n",
     "shutil.rmtree(output_final_dir, ignore_errors=True)\n",
     "os.makedirs(output_final_dir, exist_ok=True)\n",
     "\n",
     "output_final_dir_parquet = os.path.join(output_final_dir, 'pq')\n",
     "os.makedirs(output_final_dir_parquet, exist_ok=True)\n",
     "\n",
     "output_final_dir_markdown = os.path.join(output_final_dir, 'markdown')\n",
     "os.makedirs(output_final_dir_markdown, exist_ok=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e06ce4f2",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "e06ce4f2",
    "outputId": "a1763c52-867a-435f-b5cb-035bfdee3c69"
   },
   "outputs": [],
   "source": [
    "## save parquet\n",
    "\n",
    "clean_docs_df.to_parquet(os.path.join(output_final_dir_parquet, \"clean_docs.parquet\"))\n",
    "print (f\"✅ Saved CLEAN parquet output to '{output_final_dir_parquet}'\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1e175302",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "1e175302",
    "outputId": "2ec4bd53-9bf9-42cc-8b0a-59e741948b46"
   },
   "outputs": [],
   "source": [
    "## save markdown text\n",
    "\n",
    "for index, row in clean_docs_df.iterrows():\n",
     "    output_file_name = os.path.join(output_final_dir_markdown, row['filename'] + '.md')\n",
    "    with open(output_file_name, 'w') as output_file:\n",
    "        output_file.write(row['contents'])\n",
    "\n",
    "print (f\"✅ Saved CLEAN markdown output to '{output_final_dir_markdown}'\")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "14928622-6830-4b88-81c5-36e8dc77ffb1",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
