{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "6f4266e5-047d-4e49-a79a-85c99759e1c6",
   "metadata": {},
   "source": [
    "## PII+Image Redactor Example Notebook"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "12bab93e-70ee-4956-821a-2080db9c11a3",
   "metadata": {},
   "source": [
    "\n",
     "**Author**: Shahrokh Daijavad  \n",
     "**Email**: shahrokh@us.ibm.com\n",
    "\n",
    "### Summary\n",
    "\n",
     "In the [PII recipe](../PII/Run_your_first_PII_redactor_transform.ipynb), we showed how the basic PII (Personally Identifiable Information) transform identifies and redacts sensitive information in text data, such as  \n",
     "names, email addresses, phone numbers, addresses, and financial details (e.g., credit card numbers and crypto addresses). Here, we show one of the multimedia transforms in DPK, which blurs faces in images embedded in a document, as an additional PII-redaction step beyond text.\n",
    "\n",
    " **Workflow Overview**\n",
    "\n",
    "- **Extracting and Converting Text and Image:** The content of a hypothetical invoice, originally in PDF format, is processed using the docling2parquet transform to extract both the text and image and convert them into a structured Parquet file, enabling easier handling and downstream processing by other DPK transforms. \n",
    "\n",
    "- **Redacting Sensitive Text Information:** The generated Parquet file serves as the input for the dpk_pii_redactor transform. This step scans the invoice data for personally identifiable information (PII) and applies masking techniques to redact any sensitive content, ensuring data privacy and compliance.\n",
    "\n",
    "- **Redacting Image using a face-blurring technique:** The generated output Parquet file from the previous stage serves as the input for the images/people transform. This step scans the input file, detects the face in the image, and blurs the face for additional data privacy. \n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "26f701c8",
   "metadata": {},
   "source": [
    "## How to run this notebook\n",
    "\n",
     "If you have Python 3.11 or higher on your machine, you can download the notebook and run it locally in a Python virtual environment set up as follows:\n",
    "\n",
    "```\n",
    "python -m venv venv\n",
    "source venv/bin/activate\n",
    "pip install jupyterlab\n",
    "jupyter lab PII_Image_redactor.ipynb\n",
    "```\n",
    "\n",
     "For a more advanced setup, please see the setup [guide](https://github.com/data-prep-kit/data-prep-kit/blob/dev/doc/quick-start/quick-start.md).\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "10a3ed52-68aa-45bb-a2d3-1b08e5ad2ab2",
   "metadata": {},
   "source": [
    "### Pre-req: Install data-prep-kit toolkit"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fd7131c9-af67-4ac5-9d3d-a90ab7d11898",
   "metadata": {},
   "outputs": [],
   "source": [
     "# Install only the docling2parquet and pii_redactor transforms from data-prep-toolkit\n",
     "%pip install \"data-prep-toolkit-transforms[docling2parquet,pii_redactor]==1.1.7.dev3\"\n",
     "%pip install pandas\n",
    "import pandas as pd\n",
    "import os"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2d6f9a32-6ae4-4bf2-b939-70b579f18185",
   "metadata": {},
   "source": [
    "## Step 1: Configuration"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e34249a4-cf6e-4879-a8ed-3092653874a8",
   "metadata": {},
   "source": [
    "### Download Data and set input and output directories\n",
     "#### We will place the downloaded input file(s) in the `tmp/input` directory. For our use case, we use a typical invoice file, `Invoiceplusimage.pdf`, which contains an image and will undergo processing. The output of each transform run is written to a separate sub-directory under the output directory, named `files_<transform_name>`, making it easy to verify each transform's output. This concludes the setup section."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2186d9f-f09f-4831-be8b-ab04bfe63e41",
   "metadata": {},
   "outputs": [],
   "source": [
    "import urllib.request\n",
    "import shutil\n",
    "shutil.os.makedirs(\"tmp/input\", exist_ok=True)\n",
    "urllib.request.urlretrieve(\"https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/recipes/input-data/PII-image/Invoiceplusimage.pdf\", \"tmp/input/Invoiceplusimage.pdf\")\n",
    "\n",
    "input_dir = \"tmp/input\"\n",
    "output_dir = \"output\"\n",
     "output_docling2pq_dir = os.path.join(output_dir, 'files_docling2parquet')\n",
     "output_piiredactor_dir = os.path.join(output_dir, 'files_piiredacted')\n",
     "output_people_dir = os.path.join(output_dir, 'files_people')"
   ]
  },
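   {
    "cell_type": "markdown",
    "id": "c1a2b3d4-1111-4abc-9def-0123456789ab",
    "metadata": {},
    "source": [
     "Before moving on, we can run a quick optional sanity check (not part of the original workflow) to confirm that the PDF was downloaded into the input directory:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "c1a2b3d4-2222-4abc-9def-0123456789ab",
    "metadata": {},
    "outputs": [],
    "source": [
     "# Optional sanity check: confirm the invoice PDF was downloaded\n",
     "pdf_path = os.path.join(input_dir, \"Invoiceplusimage.pdf\")\n",
     "print(pdf_path, \"exists:\", os.path.exists(pdf_path))"
    ]
   },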
  {
   "cell_type": "markdown",
   "id": "85554c81",
   "metadata": {},
   "source": [
    "## Display the input PDF file"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9f0cec5d",
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import IFrame\n",
    "IFrame(src=f\"{input_dir}/Invoiceplusimage.pdf\", width=600, height=800)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "348d002f-1458-48c7-987c-ba45de4cb2f2",
   "metadata": {},
   "source": [
     "## Step 2: Invoke Docling2Parquet transform to process PDF files"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "93845ab6-0338-455e-9a84-c588326f7711",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
     "%%time\n",
     "\n",
    "from dpk_docling2parquet import Docling2Parquet\n",
    "from data_processing.utils import GB\n",
    "from dpk_docling2parquet import docling2parquet_contents_types\n",
    "\n",
    "STAGE = 1\n",
    "print (f\"🏃🏼 STAGE-{STAGE}: Processing input='{input_dir}' --> output='{output_docling2pq_dir}'\\n\", flush=True)\n",
    "\n",
    "Docling2Parquet(input_folder= input_dir,\n",
    "               output_folder= output_docling2pq_dir,\n",
    "               data_files_to_use=['.pdf'],\n",
    "               docling2parquet_contents_type=docling2parquet_contents_types.MARKDOWN,\n",
    "               docling2parquet_generate_picture_images=True,\n",
    "               docling2parquet_pipeline=\"vlm\").transform()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "07643473-5ca0-4392-bd1f-b140c551c0eb",
   "metadata": {},
   "source": [
     "## Step 3: Invoke the PII Redactor transform"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "24d847b4",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "from dpk_pii_redactor import PIIRedactor\n",
    "\n",
    "STAGE = 2\n",
    "print (f\"🏃🏼 STAGE-{STAGE}: Processing input='{output_docling2pq_dir}' --> output='{output_piiredactor_dir}'\\n\", flush=True)\n",
    "PIIRedactor(input_folder=output_docling2pq_dir,\n",
    "            output_folder= output_piiredactor_dir,\n",
    "            pii_redactor_entities = [\"PERSON\", \"EMAIL_ADDRESS\",\"ORGANIZATION\",\"PHONE_NUMBER\", \"LOCATION\",\"CRYPTO\"],\n",
    "            pii_redactor_operator = \"replace\",\n",
    "            pii_redactor_transformed_contents = \"title\").transform()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "92d77a3e-37fe-40a5-92e8-6e8ec509ac73",
   "metadata": {},
   "source": [
     "## Step 4: Display the output in a readable format with masked PII information"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ca040517-6c90-4f87-9eef-a618f77a1a6b",
   "metadata": {},
   "outputs": [],
   "source": [
     "data = pd.read_parquet(os.path.join(output_piiredactor_dir, 'Invoiceplusimage.parquet'))\n",
    "print(data[\"title\"][0])\n",
    "print(data[\"detected_pii\"][0])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "20b81f51",
   "metadata": {},
   "source": [
     "## Step 5: Invoke People transform to blur the face in the invoice image"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7c777d7b",
   "metadata": {},
   "source": [
     "#### At this point, we need to download the YOLO face-detection model that the `people` transform uses from Hugging Face."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a0e7fe03",
   "metadata": {},
   "outputs": [],
   "source": [
    "shutil.os.makedirs(\"models\", exist_ok=True)\n",
    "!curl -L --output models/yolov8m_200e.pt https://huggingface.co/ZiqianLiu/yolov8_face/resolve/4b1db35121179d189754a3bf0b4a86aa44c03eef/yolov8m_200e.pt"
   ]
  },
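   {
    "cell_type": "markdown",
    "id": "d4e5f6a7-3333-4abc-9def-0123456789ab",
    "metadata": {},
    "source": [
     "As an optional sanity check (assuming the `curl` download above completed), we can verify that the model file exists and is non-empty before running the transform:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "d4e5f6a7-4444-4abc-9def-0123456789ab",
    "metadata": {},
    "outputs": [],
    "source": [
     "# Optional sanity check: the YOLO model file should exist and be non-empty\n",
     "model_path = \"models/yolov8m_200e.pt\"\n",
     "if os.path.exists(model_path) and os.path.getsize(model_path) > 0:\n",
     "    print(model_path, \"downloaded,\", os.path.getsize(model_path), \"bytes\")\n",
     "else:\n",
     "    print(model_path, \"is missing or empty; re-run the download cell above\")"
    ]
   },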
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1f683560",
   "metadata": {},
   "outputs": [],
   "source": [
     "# Install only the people transform from data-prep-toolkit\n",
    "%pip install \"data-prep-toolkit-transforms[people]==1.1.7.dev3\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "73035a54",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "from dpk_people import People\n",
    "\n",
    "STAGE = 3\n",
    "print (f\"🏃🏼 STAGE-{STAGE}: Processing input='{output_piiredactor_dir}' --> output='{output_people_dir}'\\n\", flush=True)\n",
    "\n",
    "People(input_folder=output_piiredactor_dir,\n",
    "    output_folder=output_people_dir,\n",
    "    people_model_path=\"models/yolov8m_200e.pt\",\n",
    ").transform()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "505dc077",
   "metadata": {},
   "outputs": [],
   "source": [
    "import glob\n",
     "glob.glob(f\"{output_people_dir}/*\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "704623ee",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pyarrow.parquet as pq\n",
    "import pandas as pd\n",
    "\n",
     "# Read the Parquet file into a pandas DataFrame\n",
     "df = pq.read_table(os.path.join(output_people_dir, 'Invoiceplusimage.parquet')).to_pandas()\n",
    "df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f69704f",
   "metadata": {},
   "outputs": [],
   "source": [
    "import io\n",
    "from IPython.display import display\n",
    "import PIL.Image as Image\n",
     "\n",
    "image = Image.open(io.BytesIO(df.iloc[0]['blurred_images'][0]))\n",
    "display(image)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
