{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "installation-note",
   "metadata": {},
   "source": [
    "# Blocklist Transform Example (Ray)\n",
    "\n",
    "This notebook demonstrates how to use the blocklist transform with Ray runtime to annotate documents based on blocklisted domains.\n",
    "\n",
    "## Overview\n",
    "\n",
    "The blocklist transform identifies documents from blocklisted domains by:\n",
    "1. Reading domain blocklists from specified files\n",
    "2. Extracting domains from document URLs\n",
    "3. Adding an annotation column indicating which documents are from blocklisted domains\n",
    "\n",
    "This Ray version allows for distributed processing of large datasets."
   ]
  },
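  {
   "cell_type": "markdown",
   "id": "overview-sketch",
   "metadata": {},
   "source": [
    "The annotation logic above can be pictured with a simplified sketch. The real transform loads the blocklist into a trie for fast lookup; the `blocked` set, the `annotate` helper, and the exact matching rules here are illustrative assumptions, not the transform's actual implementation:\n",
    "\n",
    "```python\n",
    "from urllib.parse import urlparse\n",
    "\n",
    "# Hypothetical blocklist, for illustration only\n",
    "blocked = {\"poker.example\", \"casino.example\"}\n",
    "\n",
    "def annotate(url: str) -> str:\n",
    "    # Extract the host and check it and each parent domain\n",
    "    # against the blocklist; return the match, or \"\" if clean.\n",
    "    host = urlparse(url).netloc\n",
    "    parts = host.split(\".\")\n",
    "    for i in range(len(parts)):\n",
    "        candidate = \".\".join(parts[i:])\n",
    "        if candidate in blocked:\n",
    "            return candidate\n",
    "    return \"\"\n",
    "\n",
    "annotate(\"https://sub.poker.example/page\")  # -> \"poker.example\"\n",
    "annotate(\"https://clean.example/page\")      # -> \"\"\n",
    "```"
   ]
  },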
  {
   "cell_type": "markdown",
   "id": "pip-install-note",
   "metadata": {},
   "source": [
    "## Installation\n",
    "\n",
     "**Note:** Adapt the pip install below to the appropriate release level.\n",
     "\n",
     "Alternatively, the venv running JupyterLab can be pre-configured with a requirements file that pins the right release.\n",
    "\n",
    "**Example for transform developers working from git clone:**\n",
    "```bash\n",
    "make venv\n",
    "source venv/bin/activate\n",
    "pip install jupyterlab\n",
    "venv/bin/jupyter lab\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "install-dependencies",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture\n",
     "## This cell is provided as a reference only.\n",
     "# Users and application developers should install the release tag from PyPI that is appropriate for their environment.\n",
    "%pip install \"data-prep-toolkit-transforms[ray,blocklist]==1.1.5\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "import-note",
   "metadata": {},
   "source": [
    "## Import Required Modules"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "import-modules",
   "metadata": {},
   "outputs": [],
   "source": [
    "from dpk_blocklist.ray.runtime import Blocklist\n",
    "from data_processing.utils import GB"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "parameters-note",
   "metadata": {},
   "source": [
    "## Configure and Run the Transform\n",
    "\n",
    "### Parameters\n",
    "\n",
    "The blocklist transform accepts the following key parameters:\n",
    "\n",
    "* **input_folder**: Path to the input parquet files\n",
    "* **output_folder**: Path where annotated parquet files will be written\n",
    "* **blocklist_blocked_domain_list_path**: Path to directory containing domain blocklist files (files matching pattern `domains*`)\n",
    "* **blocklist_annotation_column_name**: Name of the column to add with blocklist annotations (default: `\"blocklisted\"`)\n",
    "* **blocklist_source_url_column_name**: Name of the column containing source URLs (default: `\"title\"`)\n",
    "\n",
    "**Ray-specific parameters:**\n",
    "\n",
     "* **run_locally**: Set to `True` to run Ray locally, or `False` to connect to an existing Ray cluster\n",
    "* **num_cpus**: Number of CPUs to allocate per worker (default: 0.8)\n",
    "* **memory**: Amount of memory to allocate per worker (e.g., `2 * GB`)\n",
    "* **runtime_num_workers**: Number of Ray workers to spawn (default: determined automatically)\n",
    "\n",
    "For a full list of parameters, please see the [README](./README.md).\n",
    "\n",
    "### Example: Basic Blocklist Annotation with Ray\n",
    "\n",
     "This example shows how to annotate documents with Ray, using a small test blocklist of gambling domains (the `arjel` file under `test-data/domains`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "run-transform",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "17:33:40 INFO - data factory blocklist_ Missing local configuration\n",
      "17:33:40 INFO - data factory blocklist_ max_files -1, n_sample -1\n",
      "17:33:40 INFO - data factory blocklist_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.parquet'], files to checkpoint ['.parquet']\n",
      "17:33:40 INFO - data factory blocklist_ Data Access:  DataAccessLocal\n",
      "17:33:40 INFO - pipeline id pipeline_id\n",
      "17:33:40 INFO - code location {'github': 'UNDEFINED', 'build-date': 'UNDEFINED', 'commit_hash': 'UNDEFINED', 'path': 'UNDEFINED'}\n",
      "17:33:40 INFO - number of workers 2 worker options {'num_cpus': 0.8, 'memory': 2147483648, 'max_restarts': -1}\n",
      "17:33:40 INFO - actor creation delay 0\n",
      "17:33:40 INFO - job details {'job category': 'preprocessing', 'job name': 'blocklist', 'job type': 'ray', 'job id': 'job_id'}\n",
      "17:33:40 INFO - data factory data_ max_files -1, n_sample -1\n",
      "17:33:40 INFO - data factory data_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.parquet'], files to checkpoint ['.parquet']\n",
      "17:33:40 INFO - data factory data_ Data Access:  DataAccessLocal\n",
      "17:33:40 INFO - Running locally\n",
      "2025-10-16 17:33:45,051\tINFO worker.py:1777 -- Started a local Ray instance. View the dashboard at \u001b[1m\u001b[32mhttp://127.0.0.1:8265 \u001b[39m\u001b[22m\n",
      "\u001b[36m(orchestrate pid=47500)\u001b[0m 17:33:57 INFO - orchestrator started at 2025-10-16 17:33:57\n",
      "\u001b[36m(orchestrate pid=47500)\u001b[0m 17:33:57 INFO - Number of files is 1, source profile {'max_file_size': 0.0007181167602539062, 'min_file_size': 0.0007181167602539062, 'total_file_size': 0.0007181167602539062}\n",
      "\u001b[36m(orchestrate pid=47500)\u001b[0m 17:33:57 INFO - Cluster resources: {'cpus': 8, 'gpus': 1, 'memory': 4.26153030525893, 'object_store': 2.130765151232481}\n",
      "\u001b[36m(orchestrate pid=47500)\u001b[0m 17:33:57 INFO - Number of workers - 2 with {'num_cpus': 0.8, 'memory': 2147483648, 'max_restarts': -1} each\n",
      "\u001b[36m(RayTransformFileProcessor pid=49296)\u001b[0m 17:34:08 INFO - Blocked domain list found locally from test-data/domains/arjel\n",
      "\u001b[36m(RayTransformFileProcessor pid=49296)\u001b[0m 17:34:08 INFO - Added 3 domains to domain list\n",
      "\u001b[36m(RayTransformFileProcessor pid=49296)\u001b[0m 17:34:08 INFO - Loading trie with 3 items.\n",
      "\u001b[36m(orchestrate pid=47500)\u001b[0m 17:34:09 INFO - Completed 0 files (0.0%)  in 0.0 min. Waiting for completion\n",
      "\u001b[36m(orchestrate pid=47500)\u001b[0m 17:34:09 INFO - Completed processing 1 files in 0.006 min\n",
      "\u001b[36m(orchestrate pid=47500)\u001b[0m 17:34:09 INFO - done flushing in 0.003 sec\n",
      "17:34:19 INFO - Completed execution in 0.651 min, execution result 0\n",
      "\u001b[36m(RayTransformFileProcessor pid=49316)\u001b[0m 17:34:08 INFO - Blocked domain list found locally from test-data/domains/arjel\n",
      "\u001b[36m(RayTransformFileProcessor pid=49316)\u001b[0m 17:34:08 INFO - Added 3 domains to domain list\n",
      "\u001b[36m(RayTransformFileProcessor pid=49316)\u001b[0m 17:34:08 INFO - Loading trie with 3 items.\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "0"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "Blocklist(\n",
    "    input_folder=\"test-data/input\",\n",
    "    output_folder=\"output\",\n",
    "    run_locally=True,\n",
    "    num_cpus=0.8,\n",
    "    memory=2 * GB,\n",
    "    runtime_num_workers=2,\n",
    "    blocklist_blocked_domain_list_path=\"test-data/domains/arjel\",\n",
    "    blocklist_annotation_column_name=\"blocklisted\",\n",
    "    blocklist_source_url_column_name=\"title\"\n",
    ").transform()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "output-note",
   "metadata": {},
   "source": [
    "## Verify the Output\n",
    "\n",
    "The output folder will contain the annotated parquet files. Let's check what was created:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "list-output",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['output\\\\metadata.json', 'output\\\\test1.parquet']"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import glob\n",
    "glob.glob(\"output/*\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "inspect-output",
   "metadata": {},
   "source": [
    "## Inspect the Results\n",
    "\n",
    "Let's read the output parquet file and examine the annotations:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "read-output",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Output table has 7 rows and 2 columns\n",
      "\n",
      "Columns: ['title', 'blocklisted']\n",
      "\n",
      "Sample data:\n",
      "                               title    blocklisted\n",
      "0                      https://poker          poker\n",
      "1                   https://poker.fr       poker.fr\n",
      "2              https://poker.foo.bar  poker.foo.bar\n",
      "3                https://abc.efg.com               \n",
      "4   http://asdf.qwer.com/welcome.htm               \n",
      "5  http://aasdf.qwer.com/welcome.htm               \n",
      "6         https://zxcv.xxx/index.asp               \n",
      "\n",
      "=== Blocklist Statistics ===\n",
      "Total documents: 7\n",
      "Blocklisted documents: 3\n",
      "Clean documents: 4\n",
      "Blocklist rate: 42.9%\n",
      "\n",
      "=== Blocklisted Domains Found ===\n",
      "                   title    blocklisted\n",
      "0          https://poker          poker\n",
      "1       https://poker.fr       poker.fr\n",
      "2  https://poker.foo.bar  poker.foo.bar\n"
     ]
    }
   ],
   "source": [
    "import pyarrow.parquet as pq\n",
    "import pandas as pd\n",
    "\n",
    "# Read the output parquet file\n",
    "output_files = glob.glob(\"output/*.parquet\")\n",
    "if output_files:\n",
    "    table = pq.read_table(output_files[0])\n",
    "    df = table.to_pandas()\n",
    "    print(f\"\\nOutput table has {len(df)} rows and {len(df.columns)} columns\")\n",
    "    print(f\"\\nColumns: {list(df.columns)}\")\n",
     "    print(\"\\nSample data:\")\n",
    "    print(df.head(10))\n",
    "    \n",
    "    # Show blocklist statistics\n",
    "    blocklisted_count = (df['blocklisted'] != '').sum()\n",
    "    total_count = len(df)\n",
     "    print(\"\\n=== Blocklist Statistics ===\")\n",
    "    print(f\"Total documents: {total_count}\")\n",
    "    print(f\"Blocklisted documents: {blocklisted_count}\")\n",
    "    print(f\"Clean documents: {total_count - blocklisted_count}\")\n",
    "    print(f\"Blocklist rate: {blocklisted_count/total_count*100:.1f}%\")\n",
    "    \n",
    "    if blocklisted_count > 0:\n",
     "        print(\"\\n=== Blocklisted Domains Found ===\")\n",
    "        print(df[df['blocklisted'] != ''][['title', 'blocklisted']])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "metadata-note",
   "metadata": {},
   "source": [
    "## Check Transform Metadata\n",
    "\n",
     "The transform also produces a `metadata.json` file with processing statistics:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "read-metadata",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Transform Metadata:\n",
      "{\n",
      "  \"pipeline\": \"pipeline_id\",\n",
      "  \"job details\": {\n",
      "    \"job category\": \"preprocessing\",\n",
      "    \"job name\": \"blocklist\",\n",
      "    \"job type\": \"ray\",\n",
      "    \"job id\": \"job_id\",\n",
      "    \"start_time\": \"2025-10-16 17:33:57\",\n",
      "    \"end_time\": \"2025-10-16 17:34:09\",\n",
      "    \"status\": \"success\"\n",
      "  },\n",
      "  \"code\": {\n",
      "    \"github\": \"UNDEFINED\",\n",
      "    \"build-date\": \"UNDEFINED\",\n",
      "    \"commit_hash\": \"UNDEFINED\",\n",
      "    \"path\": \"UNDEFINED\"\n",
      "  },\n",
      "  \"job_input_params\": {\n",
      "    \"blocked_domain_list_path\": \"test-data/domains/arjel\",\n",
      "    \"annotation_column_name\": \"blocklisted\",\n",
      "    \"source_url_column_name\": \"title\",\n",
      "    \"checkpointing\": false,\n",
      "    \"max_files\": -1,\n",
      "    \"random_samples\": -1,\n",
      "    \"files_to_use\": [\n",
      "      \".parquet\"\n",
      "    ],\n",
      "    \"number of workers\": 2,\n",
      "    \"worker options\": {\n",
      "      \"num_cpus\": 0.8,\n",
      "      \"memory\": 2147483648,\n",
      "      \"max_restarts\": -1\n",
      "    },\n",
      "    \"actor creation delay\": 0\n",
      "  },\n",
      "  \"execution_stats\": {\n",
      "    \"cpus\": 8,\n",
      "    \"gpus\": 1,\n",
      "    \"memory\": 4.26153030525893,\n",
      "    \"object_store\": 2.130765151232481,\n",
      "    \"execution time, min\": 0.204\n",
      "  },\n",
      "  \"job_output_stats\": {\n",
      "    \"source_files\": 1,\n",
      "    \"source_size\": 753,\n",
      "    \"result_files\": 1,\n",
      "    \"result_size\": 1107,\n",
      "    \"processing_time\": 0.368,\n",
      "    \"total_docs_count\": 7,\n",
      "    \"block_listed_docs_count\": 3,\n",
      "    \"source_doc_count\": 7,\n",
      "    \"result_doc_count\": 7\n",
      "  },\n",
      "  \"source\": {\n",
      "    \"name\": \"c:\\\\VSC_NEW\\\\data-prep-kit\\\\transforms\\\\universal\\\\blocklist\\\\test-data\\\\input\",\n",
      "    \"type\": \"path\"\n",
      "  },\n",
      "  \"target\": {\n",
      "    \"name\": \"c:\\\\VSC_NEW\\\\data-prep-kit\\\\transforms\\\\universal\\\\blocklist\\\\output\",\n",
      "    \"type\": \"path\"\n",
      "  }\n",
      "}\n"
     ]
    }
   ],
   "source": [
    "import json\n",
    "\n",
    "metadata_file = \"output/metadata.json\"\n",
    "try:\n",
    "    with open(metadata_file, 'r') as f:\n",
    "        metadata = json.load(f)\n",
    "    print(\"Transform Metadata:\")\n",
    "    print(json.dumps(metadata, indent=2))\n",
    "except FileNotFoundError:\n",
    "    print(f\"Metadata file not found at {metadata_file}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "multi-worker-example",
   "metadata": {},
   "source": [
    "## Example: Using Multiple Workers\n",
    "\n",
     "For larger datasets, you can increase the number of workers and adjust the resources allocated to each. The `large-dataset` paths below are placeholders; point them at your own data, or the run will simply report that there are no input files to process (as the recorded output shows):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "multi-worker-transform",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "17:35:12 INFO - data factory blocklist_ Missing local configuration\n",
      "17:35:12 INFO - data factory blocklist_ max_files -1, n_sample -1\n",
      "17:35:12 INFO - data factory blocklist_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.parquet'], files to checkpoint ['.parquet']\n",
      "17:35:12 INFO - data factory blocklist_ Data Access:  DataAccessLocal\n",
      "17:35:12 INFO - pipeline id pipeline_id\n",
      "17:35:12 INFO - code location {'github': 'UNDEFINED', 'build-date': 'UNDEFINED', 'commit_hash': 'UNDEFINED', 'path': 'UNDEFINED'}\n",
      "17:35:12 INFO - number of workers 8 worker options {'num_cpus': 1.0, 'memory': 4294967296, 'max_restarts': -1}\n",
      "17:35:12 INFO - actor creation delay 0\n",
      "17:35:12 INFO - job details {'job category': 'preprocessing', 'job name': 'blocklist', 'job type': 'ray', 'job id': 'job_id'}\n",
      "17:35:12 INFO - data factory data_ max_files -1, n_sample -1\n",
      "17:35:12 INFO - data factory data_ Not using data sets, checkpointing False, max files -1, random samples -1, files to use ['.parquet'], files to checkpoint ['.parquet']\n",
      "17:35:12 INFO - data factory data_ Data Access:  DataAccessLocal\n",
      "17:35:12 INFO - Running locally\n",
      "2025-10-16 17:35:15,972\tINFO worker.py:1777 -- Started a local Ray instance. View the dashboard at \u001b[1m\u001b[32mhttp://127.0.0.1:8265 \u001b[39m\u001b[22m\n",
      "\u001b[36m(orchestrate pid=49388)\u001b[0m 17:35:28 INFO - orchestrator started at 2025-10-16 17:35:28\n",
      "\u001b[36m(orchestrate pid=49388)\u001b[0m 17:35:28 ERROR - No input files to process - exiting\n",
      "17:35:38 INFO - Completed execution in 0.435 min, execution result 0\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "0"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Example with more workers for larger datasets\n",
    "Blocklist(\n",
    "    input_folder=\"large-dataset/input\",\n",
    "    output_folder=\"large-dataset/output\",\n",
    "    run_locally=True,\n",
    "    num_cpus=1.0,\n",
    "    memory=4 * GB,\n",
    "    runtime_num_workers=8,\n",
    "    blocklist_blocked_domain_list_path=\"test-data/domains/gambling\",\n",
    "    blocklist_annotation_column_name=\"gambling_domain\",\n",
    "    blocklist_source_url_column_name=\"title\"\n",
    ").transform()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cluster-example",
   "metadata": {},
   "source": [
    "## Example: Running on Existing Ray Cluster\n",
    "\n",
    "To use an existing Ray cluster, set `run_locally=False`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cluster-transform",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Example using existing Ray cluster\n",
    "import ray\n",
    "ray.init(address='auto')  # Connect to existing cluster\n",
    "\n",
    "Blocklist(\n",
    "    input_folder=\"s3://my-bucket/input\",\n",
    "    output_folder=\"s3://my-bucket/output\",\n",
    "    run_locally=False,\n",
    "    num_cpus=2.0,\n",
    "    memory=8 * GB,\n",
    "    runtime_num_workers=16,\n",
    "    blocklist_blocked_domain_list_path=\"s3://my-bucket/domains\",\n",
    "    blocklist_annotation_column_name=\"blocklisted\",\n",
    "    blocklist_source_url_column_name=\"url\"\n",
    ").transform()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "workflow-integration",
   "metadata": {},
   "source": [
    "## Integration with Other Transforms\n",
    "\n",
    "The blocklist transform is often used in combination with the filter transform to remove blocklisted documents:\n",
    "\n",
    "```python\n",
    "# Step 1: Annotate with blocklist\n",
    "from dpk_blocklist.ray.runtime import Blocklist\n",
    "Blocklist(\n",
    "    input_folder=\"input\",\n",
    "    output_folder=\"annotated\",\n",
    "    run_locally=True,\n",
    "    blocklist_blocked_domain_list_path=\"domains\"\n",
    ").transform()\n",
    "\n",
    "# Step 2: Filter out blocklisted documents\n",
    "from dpk_filter.ray.runtime import Filter\n",
    "Filter(\n",
    "    input_folder=\"annotated\",\n",
    "    output_folder=\"filtered\",\n",
    "    run_locally=True,\n",
    "    filter_criteria_list=[\"blocklisted = ''\"],  # Keep only non-blocklisted\n",
    "    filter_columns_to_drop=[\"blocklisted\"]       # Remove the annotation column\n",
    ").transform()\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "conclusion",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "This notebook demonstrated:\n",
    "- Installing and importing the blocklist transform with Ray support\n",
    "- Running the transform with Ray runtime parameters\n",
    "- Configuring worker resources and parallelism\n",
    "- Inspecting the annotated output\n",
    "- Using Ray locally vs. on existing clusters\n",
    "- Integrating with other transforms\n",
    "\n",
    "For more information, see:\n",
    "- [Blocklist Transform README](./README.md)\n",
    "- [Data Prep Kit Documentation](https://github.com/data-prep-kit/data-prep-kit)\n",
    "- [Ray Documentation](https://docs.ray.io/)\n",
    "- [Transform Project Conventions](../../README.md)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
