{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "installation-note",
   "metadata": {},
   "source": [
    "# Blocklist Transform Example\n",
    "\n",
     "This notebook demonstrates how to use the blocklist transform to annotate documents whose source URLs match blocklisted domains.\n",
    "\n",
    "## Overview\n",
    "\n",
    "The blocklist transform identifies documents from blocklisted domains by:\n",
    "1. Reading domain blocklists from specified files\n",
    "2. Extracting domains from document URLs\n",
    "3. Adding an annotation column indicating which documents are from blocklisted domains\n",
    "\n",
    "This is useful for data quality control and content filtering in data preparation pipelines."
   ]
  },
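  {
   "cell_type": "markdown",
   "id": "concept-sketch",
   "metadata": {},
   "source": [
    "The matching idea behind these steps can be sketched in plain Python. This is an illustrative approximation only (the `blocked` set and the `blocked_domain` helper are hypothetical); the actual transform loads its blocklist into a trie for efficient lookup:\n",
    "\n",
    "```python\n",
    "from urllib.parse import urlparse\n",
    "\n",
    "blocked = {'poker', 'poker.fr'}  # hypothetical blocklist entries\n",
    "\n",
    "def blocked_domain(url):\n",
    "    # Extract the host, then check it and each parent domain against the blocklist\n",
    "    host = urlparse(url).netloc\n",
    "    parts = host.split('.')\n",
    "    for i in range(len(parts)):\n",
    "        candidate = '.'.join(parts[i:])\n",
    "        if candidate in blocked:\n",
    "            return candidate\n",
    "    return ''  # empty annotation: the document is not blocklisted\n",
    "\n",
    "blocked_domain('https://poker.fr/page')  # 'poker.fr'\n",
    "blocked_domain('https://abc.efg.com')    # ''\n",
    "```"
   ]
  },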
  {
   "cell_type": "markdown",
   "id": "pip-install-note",
   "metadata": {},
   "source": [
    "## Installation\n",
    "\n",
     "**Note:** Adapt the pip install below to the release level you are targeting.\n",
     "\n",
     "Alternatively, the venv running Jupyter Lab can be pre-configured with a requirements file that pins the right release.\n",
    "\n",
    "**Example for transform developers working from git clone:**\n",
    "```bash\n",
    "make venv\n",
    "source venv/bin/activate\n",
    "pip install jupyterlab\n",
    "venv/bin/jupyter lab\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "install-dependencies",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture\n",
     "# This cell is kept here for reference only.\n",
     "# Users and application developers should use the right tag for the latest release from PyPI.\n",
    "%pip install \"data-prep-toolkit-transforms[blocklist]==1.1.5\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "import-note",
   "metadata": {},
   "source": [
    "## Import Required Modules"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "import-modules",
   "metadata": {},
   "outputs": [],
   "source": [
    "from dpk_blocklist.runtime import Blocklist"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "parameters-note",
   "metadata": {},
   "source": [
    "## Configure and Run the Transform\n",
    "\n",
    "### Parameters\n",
    "\n",
    "The blocklist transform accepts the following key parameters:\n",
    "\n",
    "* **input_folder**: Path to the input parquet files\n",
    "* **output_folder**: Path where annotated parquet files will be written\n",
    "* **blocklist_blocked_domain_list_path**: Path to directory containing domain blocklist files (files matching pattern `domains*`)\n",
    "* **blocklist_annotation_column_name**: Name of the column to add with blocklist annotations (default: `\"blocklisted\"`)\n",
    "* **blocklist_source_url_column_name**: Name of the column containing source URLs (default: `\"title\"`)\n",
    "\n",
    "For a full list of parameters, please see the [README](./README.md).\n",
    "\n",
    "### Example: Basic Blocklist Annotation\n",
    "\n",
     "This example shows how to annotate documents using a small blocklist of gambling-related domains:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "run-transform",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "21:29:56 INFO - data factory blocklist_ Missing local configuration\n",
      "21:29:56 INFO - data factory blocklist_ max_files -1, n_sample -1\n",
      "21:29:56 INFO - data factory blocklist_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.parquet'], files to checkpoint ['.parquet']\n",
      "21:29:56 INFO - data factory blocklist_ Data Access:  DataAccessLocal\n",
      "21:29:56 INFO - pipeline id pipeline_id\n",
      "21:29:56 INFO - code location {'github': 'UNDEFINED', 'build-date': 'UNDEFINED', 'commit_hash': 'UNDEFINED', 'path': 'UNDEFINED'}\n",
      "21:29:56 INFO - data factory data_ max_files -1, n_sample -1\n",
      "21:29:56 INFO - data factory data_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.parquet'], files to checkpoint ['.parquet']\n",
      "21:29:56 INFO - data factory data_ Data Access:  DataAccessLocal\n",
      "21:29:56 INFO - orchestrator blocklist started at 2025-10-15 21:29:56\n",
      "21:29:56 INFO - Number of files is 1, source profile {'max_file_size': 0.0007181167602539062, 'min_file_size': 0.0007181167602539062, 'total_file_size': 0.0007181167602539062}\n",
      "21:29:56 INFO - Blocked domain list found locally from test-data/domains/arjel\n",
      "21:29:56 INFO - Added 3 domains to domain list\n",
      "21:29:56 INFO - Loading trie with 3 items.\n",
      "21:29:56 INFO - Completed 1 files (100.0%) in 0.0 min\n",
      "21:29:56 INFO - Done processing 1 files, waiting for flush() completion.\n",
      "21:29:56 INFO - done flushing in 0.0 sec\n",
      "21:29:56 INFO - Completed execution in 0.0 min, execution result 0\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "0"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "Blocklist(\n",
    "    input_folder=\"test-data/input\",\n",
    "    output_folder=\"output\",\n",
    "    blocklist_blocked_domain_list_path=\"test-data/domains/arjel\",\n",
    "    blocklist_annotation_column_name=\"blocklisted\",\n",
    "    blocklist_source_url_column_name=\"title\"\n",
    ").transform()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "output-note",
   "metadata": {},
   "source": [
    "## Verify the Output\n",
    "\n",
    "The output folder will contain the annotated parquet files. Let's check what was created:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "list-output",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['output\\\\metadata.json', 'output\\\\test1.parquet']"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import glob\n",
    "glob.glob(\"output/*\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "inspect-output",
   "metadata": {},
   "source": [
    "## Inspect the Results\n",
    "\n",
    "Let's read the output parquet file and examine the annotations:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "read-output",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Output table has 7 rows and 2 columns\n",
      "\n",
      "Columns: ['title', 'blocklisted']\n",
      "\n",
      "Sample data:\n",
      "                               title    blocklisted\n",
      "0                      https://poker          poker\n",
      "1                   https://poker.fr       poker.fr\n",
      "2              https://poker.foo.bar  poker.foo.bar\n",
      "3                https://abc.efg.com               \n",
      "4   http://asdf.qwer.com/welcome.htm               \n",
      "5  http://aasdf.qwer.com/welcome.htm               \n",
      "6         https://zxcv.xxx/index.asp               \n",
      "\n",
      "=== Blocklist Statistics ===\n",
      "Total documents: 7\n",
      "Blocklisted documents: 3\n",
      "Clean documents: 4\n",
      "Blocklist rate: 42.9%\n",
      "\n",
      "=== Blocklisted Domains Found ===\n",
      "                   title    blocklisted\n",
      "0          https://poker          poker\n",
      "1       https://poker.fr       poker.fr\n",
      "2  https://poker.foo.bar  poker.foo.bar\n"
     ]
    }
   ],
   "source": [
    "import pyarrow.parquet as pq\n",
    "import pandas as pd\n",
    "\n",
    "# Read the output parquet file\n",
    "output_files = glob.glob(\"output/*.parquet\")\n",
    "if output_files:\n",
    "    table = pq.read_table(output_files[0])\n",
    "    df = table.to_pandas()\n",
    "    print(f\"\\nOutput table has {len(df)} rows and {len(df.columns)} columns\")\n",
    "    print(f\"\\nColumns: {list(df.columns)}\")\n",
     "    print(\"\\nSample data:\")\n",
    "    print(df.head(10))\n",
    "    \n",
    "    # Show blocklist statistics\n",
    "    blocklisted_count = (df['blocklisted'] != '').sum()\n",
    "    total_count = len(df)\n",
     "    print(\"\\n=== Blocklist Statistics ===\")\n",
     "    print(f\"Total documents: {total_count}\")\n",
     "    print(f\"Blocklisted documents: {blocklisted_count}\")\n",
     "    print(f\"Clean documents: {total_count - blocklisted_count}\")\n",
     "    print(f\"Blocklist rate: {blocklisted_count/total_count*100:.1f}%\")\n",
     "    \n",
     "    if blocklisted_count > 0:\n",
     "        print(\"\\n=== Blocklisted Domains Found ===\")\n",
    "        print(df[df['blocklisted'] != ''][['title', 'blocklisted']])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "metadata-note",
   "metadata": {},
   "source": [
    "## Check Transform Metadata\n",
    "\n",
    "The transform also produces a metadata.json file with processing statistics:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "read-metadata",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Transform Metadata:\n",
      "{\n",
      "  \"pipeline\": \"pipeline_id\",\n",
      "  \"job details\": {\n",
      "    \"job category\": \"preprocessing\",\n",
      "    \"job name\": \"blocklist\",\n",
      "    \"job type\": \"pure python\",\n",
      "    \"job id\": \"job_id\",\n",
      "    \"start_time\": \"2025-10-15 21:29:56\",\n",
      "    \"end_time\": \"2025-10-15 21:29:56\",\n",
      "    \"status\": \"success\"\n",
      "  },\n",
      "  \"code\": {\n",
      "    \"github\": \"UNDEFINED\",\n",
      "    \"build-date\": \"UNDEFINED\",\n",
      "    \"commit_hash\": \"UNDEFINED\",\n",
      "    \"path\": \"UNDEFINED\"\n",
      "  },\n",
      "  \"job_input_params\": {\n",
      "    \"blocked_domain_list_path\": \"test-data/domains/arjel\",\n",
      "    \"annotation_column_name\": \"blocklisted\",\n",
      "    \"source_url_column_name\": \"title\",\n",
      "    \"checkpointing\": false,\n",
      "    \"max_files\": -1,\n",
      "    \"random_samples\": -1,\n",
      "    \"files_to_use\": [\n",
      "      \".parquet\"\n",
      "    ],\n",
      "    \"num_processors\": 0\n",
      "  },\n",
      "  \"execution_stats\": {\n",
      "    \"cpus\": 6.0,\n",
      "    \"gpus\": 0,\n",
      "    \"memory\": 20.16,\n",
      "    \"object_store\": 0,\n",
      "    \"execution time, min\": 0.0\n",
      "  },\n",
      "  \"job_output_stats\": {\n",
      "    \"source_files\": 1,\n",
      "    \"source_size\": 753,\n",
      "    \"result_files\": 1,\n",
      "    \"result_size\": 1107,\n",
      "    \"processing_time\": 0.004,\n",
      "    \"total_docs_count\": 7,\n",
      "    \"block_listed_docs_count\": 3,\n",
      "    \"source_doc_count\": 7,\n",
      "    \"result_doc_count\": 7\n",
      "  },\n",
      "  \"source\": {\n",
      "    \"name\": \"c:\\\\VSC_NEW\\\\data-prep-kit\\\\transforms\\\\universal\\\\blocklist\\\\test-data\\\\input\",\n",
      "    \"type\": \"path\"\n",
      "  },\n",
      "  \"target\": {\n",
      "    \"name\": \"c:\\\\VSC_NEW\\\\data-prep-kit\\\\transforms\\\\universal\\\\blocklist\\\\output\",\n",
      "    \"type\": \"path\"\n",
      "  }\n",
      "}\n"
     ]
    }
   ],
   "source": [
    "import json\n",
    "\n",
    "metadata_file = \"output/metadata.json\"\n",
    "try:\n",
    "    with open(metadata_file, 'r') as f:\n",
    "        metadata = json.load(f)\n",
    "    print(\"Transform Metadata:\")\n",
    "    print(json.dumps(metadata, indent=2))\n",
    "except FileNotFoundError:\n",
    "    print(f\"Metadata file not found at {metadata_file}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "custom-example",
   "metadata": {},
   "source": [
    "## Example: Using Multiple Blocklist Sources\n",
    "\n",
    "You can combine multiple domain blocklists by placing multiple `domains*` files in the same directory:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "multi-blocklist",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "21:30:38 INFO - data factory blocklist_ Missing local configuration\n",
      "21:30:38 INFO - data factory blocklist_ max_files -1, n_sample -1\n",
      "21:30:38 INFO - data factory blocklist_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.parquet'], files to checkpoint ['.parquet']\n",
      "21:30:38 INFO - data factory blocklist_ Data Access:  DataAccessLocal\n",
      "21:30:38 INFO - pipeline id pipeline_id\n",
      "21:30:38 INFO - code location {'github': 'UNDEFINED', 'build-date': 'UNDEFINED', 'commit_hash': 'UNDEFINED', 'path': 'UNDEFINED'}\n",
      "21:30:38 INFO - data factory data_ max_files -1, n_sample -1\n",
      "21:30:38 INFO - data factory data_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.parquet'], files to checkpoint ['.parquet']\n",
      "21:30:38 INFO - data factory data_ Data Access:  DataAccessLocal\n",
      "21:30:38 INFO - orchestrator blocklist started at 2025-10-15 21:30:38\n",
      "21:30:38 INFO - Number of files is 1, source profile {'max_file_size': 0.0007181167602539062, 'min_file_size': 0.0007181167602539062, 'total_file_size': 0.0007181167602539062}\n",
      "21:30:38 INFO - Blocked domain list found locally from test-data/domains/gambling\n",
      "21:30:38 INFO - Added 12 domains to domain list\n",
      "21:30:38 INFO - Loading trie with 12 items.\n",
      "21:30:38 INFO - Completed 1 files (100.0%) in 0.0 min\n",
      "21:30:38 INFO - Done processing 1 files, waiting for flush() completion.\n",
      "21:30:38 INFO - done flushing in 0.0 sec\n",
      "21:30:38 INFO - Completed execution in 0.001 min, execution result 0\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "0"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# This example would use all domain files in test-data/domains/gambling/\n",
    "# which includes domains, domains.9309, and domains.24733\n",
    "\n",
    "Blocklist(\n",
    "    input_folder=\"test-data/input\",\n",
    "    output_folder=\"output-gambling\",\n",
    "    blocklist_blocked_domain_list_path=\"test-data/domains/gambling\",\n",
    "    blocklist_annotation_column_name=\"gambling_domain\",\n",
    "    blocklist_source_url_column_name=\"title\"\n",
    ").transform()"
   ]
  },
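  {
   "cell_type": "markdown",
   "id": "multi-blocklist-sketch",
   "metadata": {},
   "source": [
    "The file-discovery behavior can be approximated as follows. This is an illustrative sketch (the `load_domains` helper is hypothetical); the transform performs the equivalent merge internally when it builds its domain trie:\n",
    "\n",
    "```python\n",
    "import glob\n",
    "import os\n",
    "\n",
    "def load_domains(dir_path):\n",
    "    # Collect and de-duplicate domains across every file matching 'domains*'\n",
    "    domains = set()\n",
    "    for path in sorted(glob.glob(os.path.join(dir_path, 'domains*'))):\n",
    "        with open(path) as f:\n",
    "            domains.update(line.strip() for line in f if line.strip())\n",
    "    return domains\n",
    "```"
   ]
  },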
  {
   "cell_type": "markdown",
   "id": "custom-columns",
   "metadata": {},
   "source": [
    "## Example: Custom Column Names\n",
    "\n",
     "You can customize both the source URL column and the annotation column names. The folder paths below are placeholders, so running this cell as-is logs \"No input files to process\":"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "custom-column-example",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "21:31:08 INFO - data factory blocklist_ Missing local configuration\n",
      "21:31:08 INFO - data factory blocklist_ max_files -1, n_sample -1\n",
      "21:31:08 INFO - data factory blocklist_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.parquet'], files to checkpoint ['.parquet']\n",
      "21:31:08 INFO - data factory blocklist_ Data Access:  DataAccessLocal\n",
      "21:31:08 INFO - pipeline id pipeline_id\n",
      "21:31:08 INFO - code location {'github': 'UNDEFINED', 'build-date': 'UNDEFINED', 'commit_hash': 'UNDEFINED', 'path': 'UNDEFINED'}\n",
      "21:31:08 INFO - data factory data_ max_files -1, n_sample -1\n",
      "21:31:08 INFO - data factory data_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.parquet'], files to checkpoint ['.parquet']\n",
      "21:31:08 INFO - data factory data_ Data Access:  DataAccessLocal\n",
      "21:31:08 INFO - orchestrator blocklist started at 2025-10-15 21:31:08\n",
      "21:31:08 ERROR - No input files to process - exiting\n",
      "21:31:08 INFO - Completed execution in 0.0 min, execution result 0\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "0"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# If your input data has URLs in a column named 'url' instead of 'title',\n",
    "# and you want the annotation in a column named 'blocked_domain':\n",
    "\n",
    "Blocklist(\n",
    "    input_folder=\"your-input-folder\",\n",
    "    output_folder=\"your-output-folder\",\n",
    "    blocklist_blocked_domain_list_path=\"path-to-domains\",\n",
    "    blocklist_annotation_column_name=\"blocked_domain\",\n",
    "    blocklist_source_url_column_name=\"url\"\n",
    ").transform()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "workflow-integration",
   "metadata": {},
   "source": [
    "## Integration with Other Transforms\n",
    "\n",
    "The blocklist transform is often used in combination with the filter transform to remove blocklisted documents:\n",
    "\n",
    "```python\n",
    "# Step 1: Annotate with blocklist\n",
    "Blocklist(\n",
    "    input_folder=\"input\",\n",
    "    output_folder=\"annotated\",\n",
    "    blocklist_blocked_domain_list_path=\"domains\"\n",
    ").transform()\n",
    "\n",
    "# Step 2: Filter out blocklisted documents\n",
    "from dpk_filter.runtime import Filter\n",
    "Filter(\n",
    "    input_folder=\"annotated\",\n",
    "    output_folder=\"filtered\",\n",
    "    filter_criteria_list=[\"blocklisted = ''\"],  # Keep only non-blocklisted\n",
    "    filter_columns_to_drop=[\"blocklisted\"]       # Remove the annotation column\n",
    ").transform()\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "conclusion",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "This notebook demonstrated:\n",
    "- Installing and importing the blocklist transform\n",
    "- Running the transform with basic parameters\n",
    "- Inspecting the annotated output\n",
    "- Using custom column names and multiple blocklists\n",
    "- Integrating with other transforms\n",
    "\n",
    "For more information, see:\n",
    "- [Blocklist Transform README](./README.md)\n",
    "- [Data Prep Kit Documentation](https://github.com/data-prep-kit/data-prep-kit)\n",
    "- [Transform Project Conventions](../../README.md)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
