{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "df8caa07",
   "metadata": {},
   "source": [
    "## **Demo on building data prep pipeline for fine tuning text data**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "54bfdd2a",
   "metadata": {},
   "source": [
    "**Authors**: Pooja Holkar, Aisha Darga\n",
    "\n",
    "**email**: poholkar@in.ibm.com,aisdarg1@in.ibm.com\n",
    "\n",
    "\n",
    "<a href=\"https://colab.research.google.com/github/data-prep-kit/data-prep-kit/blob/dev/examples/notebooks/fine tuning/language/fine-tune-language.ipynb\">\n",
    "  <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
    "</a>\n",
    "\n",
    "\n",
    "This demo notebook shows how to use IBM's Data Prep Kit to build a data preparation pipeline for fine-tuning or extended pre-training on legal contracts. We will walk through the data preparation steps needed to process raw legal documents (contracts), tokenize them, and prepare them for fine-tuning with any large language model.\n",
    "\n",
    "Organizations working with legal contracts, such as Master Service Agreements (MSAs), often face challenges in managing, reviewing, and fine-tuning legal documents at scale. With a large repository of contracts, there is a pressing need for a streamlined process to analyze, extract, and refine critical clauses and terms to ensure compliance, clarity, and adaptability.\n",
    "\n",
    "The data preparation steps demonstrated in this notebook include:\n",
    "\n",
    "- **Conversion of PDF to Parquet**\n",
    "- **Identification of Hate, Abuse and Profanity (HAP)**\n",
    "- **Identification of Personally Identifiable Information (PII)**\n",
    "- **De-duplication of Data**\n",
    "- **Document Chunking**\n",
    "- **Document Quality Assessment**\n",
    "- **Tokenization of the Data**\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cd7bdd02",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "Install the data-prep-toolkit and datasets libraries. This notebook requires at least 8 CPUs.\n",
    "To run on Google Colab, it is recommended to change the runtime to TPU to get the required number of CPUs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "88c7ab09",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture\n",
    "!pip install 'data-prep-toolkit-transforms[language]==1.1.1.dev0'\n",
    "import pyarrow.parquet as pq\n",
    "import pandas as pd"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a351a8db",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Enable nested asynchronous I/O in the notebook, as some transforms use coroutines to speed up acquisition and downloads\n",
    "import nest_asyncio\n",
    "nest_asyncio.apply()\n",
    "\n",
    "import os"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bd6fee40",
   "metadata": {},
   "source": [
    "### Download Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5f359b0d",
   "metadata": {},
   "outputs": [],
   "source": [
    "import urllib.request\n",
    "import os\n",
    "os.makedirs(\"input-data\", exist_ok=True)\n",
    "urllib.request.urlretrieve(\"https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/fine-tuning/language/MSA-DPK-1.pdf\", \"input-data/MSA-DPK-1.pdf\")\n",
    "urllib.request.urlretrieve(\"https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/fine-tuning/language/MSA-2.pdf\", \"input-data/MSA-2.pdf\")\n",
    "urllib.request.urlretrieve(\"https://raw.githubusercontent.com/data-prep-kit/data-prep-kit/dev/examples/data-files/fine-tuning/language/MSA-3.pdf\", \"input-data/MSA-3.pdf\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bde4d2e0",
   "metadata": {},
   "outputs": [],
   "source": [
    "# create parameters\n",
    "input_folder = os.path.join(\"input-data\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "77f8978e",
   "metadata": {},
   "source": [
    "##### We will place the downloaded files in the `input-data` folder. For our use case, we have three MSA contracts that will undergo processing. The output for each transform run will be generated in separate folders, with folder names following the format `files-<transform_name>`, making it easy to verify the respective transform outputs. This concludes the setup section."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "504d8369",
   "metadata": {},
   "source": [
    "## Data Preparation Steps\n",
    "\n",
    "We now walk through the data preparation steps that transform the raw legal-contract data into a tokenized format after cleaning. We use the [Parquet data format](https://parquet.apache.org/) for all our operations. This helps to efficiently scale the data for actual production runs, beyond the demo.\n",
    "\n",
    "1. Pdf2Parquet: Read the PDF contracts from `input-data` and convert them into Parquet format.\n",
    "2. HAP: Flag hate, abuse, and profanity in the text.\n",
    "3. PIIRedactor: Remove sensitive information.\n",
    "4. Ededup: Remove exact duplicates.\n",
    "5. DocChunk: Chunk large legal documents into smaller, coherent sections.\n",
    "6. DocQuality: Assess completeness, consistency, and relevance.\n",
    "7. Tokenization: Tokenize the data for model fine-tuning.\n",
    "\n",
    "The data processing pipeline is organized such that the output of the previous transform is used as input to the next one."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1431213f",
   "metadata": {},
   "source": [
    "## 1. PDF Data to Parquet\n",
    "\n",
    "This is the first component of the pipeline. It ingests the legal contracts from `input-data` and converts them into\n",
    "Parquet files for consumption by the next steps in this data processing pipeline.\n",
    "\n",
    "\n",
    "The output of this stage of the pipeline is written to `files-pdf2parquet`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f6aa0b24",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture\n",
    "from dpk_docling2parquet.transform_python import Docling2Parquet\n",
    "Docling2Parquet(input_folder= input_folder,\n",
    "               output_folder= \"files-pdf2parquet\",\n",
    "               data_files_to_use=['.pdf'],\n",
    "               docling2parquet_contents_type='text/markdown').transform()"
   ]
  },
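  {
   "cell_type": "markdown",
   "id": "a1f20c01",
   "metadata": {},
   "source": [
    "The converted output can be checked by reading one of the generated Parquet files. A minimal sketch, assuming the output file name mirrors the source PDF name:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f20c02",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inspect one converted file (file name assumed to mirror the source PDF)\n",
    "data_pdf2pq = pd.read_parquet('files-pdf2parquet/MSA-DPK-1.parquet')\n",
    "data_pdf2pq.head()"
   ]
  },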
  {
   "cell_type": "markdown",
   "id": "55e05d71",
   "metadata": {},
   "source": [
    "## 2. Identification of HAP content\n",
    "\n",
    "The identification of HAP ensures that the data used to train models is free from harmful or inappropriate content that could introduce bias into large language model (LLM) outputs.\n",
    "\n",
    "Hate, Abuse, or Profanity (HAP) detection, while not typically relevant in legal contract analysis, is becoming crucial for training legal language models. For our proof of concept, we incorporate HAP detection to showcase its value in managing user-generated data effectively."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f5d1d1a4",
   "metadata": {},
   "outputs": [],
   "source": [
    "from dpk_hap.transform_python import HAP\n",
    "# create parameters\n",
    "HAP(input_folder=\"files-pdf2parquet\",\n",
    "        output_folder=\"files-hapoutput\",\n",
    "        model_name_or_path= 'ibm-granite/granite-guardian-hap-38m',\n",
    "        annotation_column= \"hap_score\",\n",
    "        doc_text_column= \"contents\",\n",
    "        inference_engine= \"CPU\",\n",
    "        max_length= 512,\n",
    "        batch_size= 128,\n",
    "        ).transform()"
   ]
  },
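  {
   "cell_type": "markdown",
   "id": "a1f20c03",
   "metadata": {},
   "source": [
    "The HAP transform annotates each document with the `hap_score` column configured above. A minimal sketch to inspect the scores, assuming the output file name mirrors the input:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f20c04",
   "metadata": {},
   "outputs": [],
   "source": [
    "# hap_score is the annotation column configured in the HAP transform above\n",
    "data_hap = pd.read_parquet('files-hapoutput/MSA-DPK-1.parquet')\n",
    "data_hap['hap_score'].head()"
   ]
  },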
  {
   "cell_type": "markdown",
   "id": "9439d0aa",
   "metadata": {},
   "source": [
    "## 3. PII Redactor Transform\n",
    "This transform redacts Personally Identifiable Information (PII) from the input data: the Parquet files generated in `files-pdf2parquet` in step 1.\n",
    "\n",
    "The transform leverages the Microsoft Presidio SDK for PII detection and uses the Flair recognizer for entity recognition.\n",
    "\n",
    "The transform detects the following PII entities in legal contracts:\n",
    "\n",
    "- **PERSON:** Names of individuals\n",
    "- **EMAIL_ADDRESS:** Email addresses\n",
    "- **ORGANIZATION:** Names of organizations\n",
    "- **PHONE_NUMBER:** Phone numbers\n",
    "- **LOCATION:** Addresses\n",
    "\n",
    "All the redacted output is written to `files-piiredacted`. The redaction technique used is `replace` (the default), which replaces detected PII with a placeholder. The alternative technique, `redact`, removes the detected PII from the text entirely."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "da2998bc",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture\n",
    "from dpk_pii_redactor.transform_python import PIIRedactor\n",
    "PIIRedactor(input_folder='files-pdf2parquet',\n",
    "            output_folder= 'files-piiredacted',\n",
    "            pii_redactor_entities = [\"PERSON\", \"EMAIL_ADDRESS\",\"ORGANIZATION\",\"PHONE_NUMBER\", \"LOCATION\"],\n",
    "            pii_redactor_operator = \"replace\",\n",
    "            pii_redactor_transformed_contents = \"title\").transform()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ecacbbb1",
   "metadata": {},
   "source": [
    "The redacted output data, including the PII being redacted, can be viewed by inspecting one of the generated files within the `files-piiredacted` folder, such as `MSA-DPK-1.parquet`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "de6ec3db",
   "metadata": {},
   "outputs": [],
   "source": [
    "data_pii = pd.read_parquet('files-piiredacted/MSA-DPK-1.parquet')\n",
    "print(data_pii[\"title\"][0])\n",
    "print(data_pii[\"detected_pii\"][0])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "83f36f19",
   "metadata": {},
   "source": [
    "#### The function below reads all the parquet files in the folder at once."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c6f231b7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import glob\n",
    "\n",
    "def read_parquet_files_as_df(parquet_dir):\n",
    "    parquet_files = glob.glob(f'{parquet_dir}/*.parquet')\n",
    "    # read each parquet file into a DataFrame and keep the non-empty ones\n",
    "    dfs = [pd.read_parquet(f) for f in parquet_files]\n",
    "    dfs = [df for df in dfs if not df.empty]  # filter out empty dataframes\n",
    "    # concatenate all DataFrames into a single DataFrame\n",
    "    if dfs:\n",
    "        return pd.concat(dfs, ignore_index=True)\n",
    "    return pd.DataFrame()  # return an empty DataFrame if nothing was read"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9fe59817",
   "metadata": {},
   "source": [
    "## 4. Exact Deduplication\n",
    "\n",
    "This step finds exact duplicates in the `contents` column and removes them. This is done by computing a SHA-256 hash of each document's contents and removing records with identical hashes.\n",
    "\n",
    "The transform-specific params for exact deduplication are: <br/>\n",
    " \n",
    " _ededup_doc_column_ - name of the column to be checked for duplicates <br/>\n",
    " _ededup_doc_id_column_ - name of the column containing the document id <br/>\n",
    "\n",
    " The output of this stage of the pipeline is written to `files-ededup`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "28f856f7",
   "metadata": {},
   "outputs": [],
   "source": [
    "from dpk_ededup.transform_python import Ededup\n",
    "Ededup(input_folder=\"files-piiredacted\",\n",
    "    output_folder=\"files-ededup\",\n",
    "    ededup_doc_column=\"contents\",\n",
    "    ededup_doc_id_column=\"document_id\").transform()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "caebe09b",
   "metadata": {},
   "source": [
    "##### The deduplicated output data can be verified for the three files generated in the `files-ededup` folder, as shown in the example below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e5de9071",
   "metadata": {},
   "outputs": [],
   "source": [
    "data_dedup = read_parquet_files_as_df('files-ededup')\n",
    "print (\"Displaying contents of : \", 'files-ededup')\n",
    "data_dedup.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aba7ad5c",
   "metadata": {},
   "source": [
    "## 5. Document Chunking\n",
    "\n",
    "This transform chunks documents into smaller pieces. It supports multiple chunker modules (see the `doc_chunk_chunking_type` parameter).\n",
    "\n",
    "The output of this stage of the pipeline is written to `files-doc-chunk`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "67d2711f",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture\n",
    "from dpk_doc_chunk.transform_python import DocChunk\n",
    "DocChunk(input_folder='files-ededup',\n",
    "        output_folder='files-doc-chunk',\n",
    "        doc_chunk_chunking_type= \"li_markdown\").transform()"
   ]
  },
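  {
   "cell_type": "markdown",
   "id": "a1f20c05",
   "metadata": {},
   "source": [
    "The chunked output can be inspected with the helper defined earlier; each input document is now split across multiple rows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f20c06",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Each row is now one chunk rather than one whole document\n",
    "data_chunk = read_parquet_files_as_df('files-doc-chunk')\n",
    "print('Number of chunks:', len(data_chunk))\n",
    "data_chunk.head()"
   ]
  },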
  {
   "cell_type": "markdown",
   "id": "d00dc987",
   "metadata": {},
   "source": [
    "## 6. Doc Quality\n",
    "\n",
    "This step evaluates the completeness, consistency, and relevance of documents to ensure high-quality input for downstream processing and model training.\n",
    "\n",
    "- `docq_text_lang` - specifies the language of the text content. By default, \"en\" is used.\n",
    "- `docq_doc_content_column` - specifies the column name that contains the document text. By default, \"contents\" is used.\n",
    "\n",
    "The output from this stage of the pipeline will be saved in the `files-doc-quality` folder."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b330f6c2",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture\n",
    "from dpk_doc_quality.transform_python import DocQuality\n",
    "DocQuality(input_folder='files-doc-chunk',\n",
    "            output_folder= 'files-doc-quality',\n",
    "            docq_text_lang = \"en\",\n",
    "            docq_doc_content_column =\"contents\").transform()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5ec16ab4",
   "metadata": {},
   "source": [
    "##### We will see several new columns starting with the prefix `docq_`.\n",
    "\n",
    "The document quality output can be verified for the three files generated in the `files-doc-quality` folder, as shown in the example below. We will look at the `docq_contain_bad_word` metric and filter out any documents that contain bad words."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0af9d2ff",
   "metadata": {},
   "outputs": [],
   "source": [
    "docq_df = read_parquet_files_as_df('files-doc-quality')\n",
    "print (\"Displaying contents of : \", 'files-doc-quality')\n",
    "docq_df.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1e03316d",
   "metadata": {},
   "source": [
    "Based on the document quality results, since no documents are flagged with `docq_lorem_ipsum_ratio > 0` or `docq_contain_bad_word = True`, our document quality is good. Therefore, there is no need to filter out any documents."
   ]
  },
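  {
   "cell_type": "markdown",
   "id": "a1f20c07",
   "metadata": {},
   "source": [
    "Had any documents been flagged, they could be dropped with a pandas mask. A minimal sketch, assuming the quality columns produced above (`docq_contain_bad_word` as a boolean flag, `docq_lorem_ipsum_ratio` as a numeric ratio):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f20c08",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Keep only chunks not flagged by the quality metrics\n",
    "# (assumes docq_contain_bad_word is boolean and docq_lorem_ipsum_ratio is numeric)\n",
    "mask = (~docq_df['docq_contain_bad_word']) & (docq_df['docq_lorem_ipsum_ratio'] == 0)\n",
    "clean_df = docq_df[mask]\n",
    "print(f\"Kept {len(clean_df)} of {len(docq_df)} chunks\")"
   ]
  },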
  {
   "cell_type": "markdown",
   "id": "19161743",
   "metadata": {},
   "source": [
    "## 7. Tokenization\n",
    "\n",
    "Next, we tokenize the data to be used for fine-tuning."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f829f68",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture\n",
    "from dpk_tokenization.transform_python import Tokenization\n",
    "Tokenization(input_folder= \"files-doc-quality\",\n",
    "        output_folder= \"files-tokenization\",\n",
    "        tkn_tokenizer=  \"hf-internal-testing/llama-tokenizer\",\n",
    "        tkn_chunk_size= 20_000).transform()"
   ]
  },
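  {
   "cell_type": "markdown",
   "id": "a1f20c09",
   "metadata": {},
   "source": [
    "The tokenized output can be inspected the same way as the earlier stages (the exact output columns depend on the Tokenization transform):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f20c10",
   "metadata": {},
   "outputs": [],
   "source": [
    "tokens_df = read_parquet_files_as_df('files-tokenization')\n",
    "print('Displaying contents of : ', 'files-tokenization')\n",
    "tokens_df.head()"
   ]
  },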
  {
   "cell_type": "markdown",
   "id": "db0f2f64",
   "metadata": {},
   "source": [
    "This concludes the preprocessing steps necessary to prepare legal documents for fine-tuning. The processed data includes tokenized, high-quality text ready for legal domain tasks. The final output files can be found in the `files-tokenization` folder, ensuring they are optimized for downstream model training."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "34a417a7",
   "metadata": {},
   "source": [
    "The process to fine-tune a model using the processed data can be followed [here](https://github.com/ibm-granite-community/granite-snack-cookbook/blob/main/recipes/Fine_Tuning/Finetuning_Granite_Pirate_Style.ipynb)."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
