{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Distributed Data Classification with NeMo Curator's `PromptTaskComplexityClassifier`\n",
    "\n",
    "This notebook demonstrates the use of NeMo Curator's `PromptTaskComplexityClassifier`. The [prompt task and complexity classifier](https://huggingface.co/nvidia/prompt-task-and-complexity-classifier) is a multi-headed model that classifies English text prompts across task types and complexity dimensions. These annotations are useful for data blending during foundation model training. Refer to the model's Hugging Face page for more information about the classifier, including its output labels.\n",
    "\n",
    "This tutorial requires at least 1 NVIDIA GPU with:\n",
    "  - Volta™ or higher (compute capability 7.0+)\n",
    "  - CUDA 12.x\n",
    "\n",
    "Before running this notebook, refer to the [Installation Guide](https://docs.nvidia.com/nemo/curator/latest/admin/installation.html#admin-installation) for instructions on how to install NeMo Curator. Be sure to use an installation method that includes GPU dependencies.\n",
    "\n",
    "For more information about the classifiers, refer to our [Distributed Data Classification](https://docs.nvidia.com/nemo/curator/latest/curate-text/process-data/quality-assessment/distributed-classifier.html) documentation page."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Silence Curator logs via Loguru\n",
    "import os\n",
    "\n",
    "os.environ[\"LOGURU_LEVEL\"] = \"ERROR\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following imports are required for this tutorial:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "from nemo_curator.core.client import RayClient\n",
    "from nemo_curator.pipeline import Pipeline\n",
    "from nemo_curator.stages.text.classifiers import PromptTaskComplexityClassifier\n",
    "from nemo_curator.stages.text.io.reader.jsonl import JsonlReader\n",
    "from nemo_curator.stages.text.io.writer.jsonl import JsonlWriter"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To run a pipeline in NeMo Curator, we must start a Ray cluster. This can be done manually (see the [Ray documentation](https://docs.ray.io/en/latest/ray-core/starting-ray.html)) or with Curator's `RayClient`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "try:\n",
    "    ray_client = RayClient()\n",
    "    ray_client.start()\n",
    "except Exception as e:\n",
    "    msg = f\"Error initializing Ray client: {e}\"\n",
    "    raise RuntimeError(msg) from e"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Initialize Read, Classification, and Write Stages\n",
    "\n",
    "Processing steps in NeMo Curator are called stages. For this tutorial, we will initialize three stages: a JSONL file reader, the prompt task and complexity classification stage, and a JSONL file writer.\n",
    "\n",
    "First, let's create a sample JSONL file to use:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "input_file_path = \"./input_data_dir\"\n",
    "\n",
    "# Create sample dataset for the tutorial\n",
    "text = [\n",
    "    \"Write a mystery set in a small town where an everyday object goes missing, causing a ripple of curiosity and suspicion. Follow the investigation and reveal the surprising truth behind the disappearance.\",\n",
    "    \"Prompt: Write a Python script that uses a for loop.\",\n",
    "    \"Q: What is the capital of Japan?\\nA: The capital of Japan is Tokyo.\",\n",
    "    \"User: Hi, how are you today?\\nBot: I'm doing great, thanks for asking! How about you?\",\n",
    "    \"Input: 'This movie was absolutely amazing, I loved the characters and the story!'\\nTask: Sentiment classification.\\nOutput: Positive.\",\n",
    "]\n",
    "df = pd.DataFrame({\"text\": text})\n",
    "\n",
    "try:\n",
    "    os.makedirs(input_file_path, exist_ok=True)\n",
    "    df.to_json(input_file_path + \"/data.jsonl\", orient=\"records\", lines=True)\n",
    "except Exception as e:\n",
    "    msg = f\"Error creating input file: {e}\"\n",
    "    raise RuntimeError(msg) from e"
   ]
  },
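  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each line of the resulting `data.jsonl` file is a standalone JSON object, along the lines of:\n",
    "\n",
    "```json\n",
    "{\"text\": \"Prompt: Write a Python script that uses a for loop.\"}\n",
    "```"
   ]
  },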
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can define the reader stage with:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Read existing directory of JSONL files\n",
    "read_stage = JsonlReader(input_file_path, files_per_partition=1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Under the hood, the classifier stage is split into a tokenizer stage and a model inference stage. Tokenization runs on the CPU, while model inference runs on the GPU. This means that behind the scenes, the `PromptTaskComplexityClassifier` stage is actually two stages (some parameters and details omitted for brevity; refer to the documentation for more information):\n",
    "\n",
    "```python\n",
    "class TokenizerStage:\n",
    "    self.resources = Resources(cpus=1)\n",
    "    self.model_identifier = \"nvidia/prompt-task-and-complexity-classifier\"\n",
    "    self.text_field = \"text\"\n",
    "    self.padding_side = \"right\"\n",
    "    ...\n",
    "class ModelStage:\n",
    "    self.resources = Resources(cpus=1, gpus=1)\n",
    "    self.model_identifier = \"nvidia/prompt-task-and-complexity-classifier\"\n",
    "    self.model_inference_batch_size = 256\n",
    "    ...\n",
    "```\n",
    "\n",
    "See the [API Reference](https://docs.nvidia.com/nemo/curator/latest/apidocs/stages/stages.text.classifiers.prompt_task_complexity.html#stages.text.classifiers.prompt_task_complexity.PromptTaskComplexityClassifier) for more information about the `PromptTaskComplexityClassifier` class."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Initialize the prompt task and complexity classifier\n",
    "classifier_stage = PromptTaskComplexityClassifier()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, we can define a stage for writing the results:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Write results to a directory\n",
    "output_file_path = \"./prompt_task_complexity_classifier_results\"\n",
    "\n",
    "# Use mode=\"overwrite\" to overwrite the output directory if it already exists\n",
    "# This helps to ensure that the correct output is written\n",
    "write_stage = JsonlWriter(output_file_path, mode=\"overwrite\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Initialize Pipeline\n",
    "\n",
    "In NeMo Curator, we use pipelines to run distributed data workflows using Ray. Pipelines handle resource allocation and autoscaling to improve performance and minimize GPU idle time.\n",
    "\n",
    "For the distributed data classifiers, we achieve speedups by running model inference in parallel across all available GPUs, while other stages, such as I/O, tokenization, and filtering, run across all available CPUs. This is possible because Curator pipelines are composable: each stage in a pipeline runs independently and with its own specified hardware resources."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Pipeline(name='classifier_pipeline', stages=[jsonl_reader(JsonlReader), prompt_task_and_complexity_classifier_classifier(PromptTaskComplexityClassifier), jsonl_writer(JsonlWriter)])"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "classifier_pipeline = Pipeline(name=\"classifier_pipeline\", description=\"Run a classifier pipeline\")\n",
    "\n",
    "# Add stages to the pipeline\n",
    "classifier_pipeline.add_stage(read_stage)\n",
    "classifier_pipeline.add_stage(classifier_stage)\n",
    "classifier_pipeline.add_stage(write_stage)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Composability is also what allows a classifier to sit between pre-processing and post-processing stages. Typical text pre-processing add-ons include text normalization (lowercasing, URL/email removal, Unicode cleanup) and language identification and filtering (to keep only target languages). A full pipeline may look something like:\n",
    "\n",
    "```python\n",
    "pipeline = Pipeline(name=\"full_pipeline\")\n",
    "pipeline.add_stage(read_stage)                # reader (JSONL/S3/etc.)\n",
    "pipeline.add_stage(lang_id_stage)             # optional: language filter\n",
    "pipeline.add_stage(classifier_stage)          # classifier\n",
    "pipeline.add_stage(write_stage)               # writer (JSONL/Parquet)\n",
    "```\n",
    "\n",
    "# Run the Classifier\n",
    "\n",
    "Let's run the full classifier pipeline:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run the pipeline\n",
    "result = classifier_pipeline.run()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since the pipeline ran to completion and the results were written to JSONL files, we can shut down the Ray cluster with:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "try:\n",
    "    ray_client.stop()\n",
    "except Exception as e:  # noqa: BLE001\n",
    "    print(f\"Error stopping Ray client: {e}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Inspect the Output\n",
    "\n",
    "The write stage returns a list of written files. We can read the output file as a Pandas DataFrame for inspection."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>text</th>\n",
       "      <th>prompt_complexity_score</th>\n",
       "      <th>task_type_1</th>\n",
       "      <th>task_type_2</th>\n",
       "      <th>task_type_prob</th>\n",
       "      <th>creativity_scope</th>\n",
       "      <th>reasoning</th>\n",
       "      <th>contextual_knowledge</th>\n",
       "      <th>number_of_few_shots</th>\n",
       "      <th>domain_knowledge</th>\n",
       "      <th>no_label_reason</th>\n",
       "      <th>constraint_ct</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>Write a mystery set in a small town where an e...</td>\n",
       "      <td>0.47155</td>\n",
       "      <td>Text Generation</td>\n",
       "      <td>NA</td>\n",
       "      <td>0.866</td>\n",
       "      <td>0.8668</td>\n",
       "      <td>0.0564</td>\n",
       "      <td>0.0478</td>\n",
       "      <td>0.0000</td>\n",
       "      <td>0.2257</td>\n",
       "      <td>0</td>\n",
       "      <td>0.7855</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>Prompt: Write a Python script that uses a for ...</td>\n",
       "      <td>0.27830</td>\n",
       "      <td>Code Generation</td>\n",
       "      <td>Text Generation</td>\n",
       "      <td>0.766</td>\n",
       "      <td>0.0827</td>\n",
       "      <td>0.0632</td>\n",
       "      <td>0.0558</td>\n",
       "      <td>0.0000</td>\n",
       "      <td>0.9803</td>\n",
       "      <td>0</td>\n",
       "      <td>0.5581</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>Q: What is the capital of Japan?\\nA: The capit...</td>\n",
       "      <td>0.08542</td>\n",
       "      <td>Open QA</td>\n",
       "      <td>NA</td>\n",
       "      <td>0.957</td>\n",
       "      <td>0.0080</td>\n",
       "      <td>0.0045</td>\n",
       "      <td>0.0080</td>\n",
       "      <td>0.5775</td>\n",
       "      <td>0.3356</td>\n",
       "      <td>0</td>\n",
       "      <td>0.0125</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>User: Hi, how are you today?\\nBot: I'm doing g...</td>\n",
       "      <td>0.07782</td>\n",
       "      <td>Chatbot</td>\n",
       "      <td>Open QA</td>\n",
       "      <td>0.653</td>\n",
       "      <td>0.0341</td>\n",
       "      <td>0.0077</td>\n",
       "      <td>0.3731</td>\n",
       "      <td>0.3492</td>\n",
       "      <td>0.1762</td>\n",
       "      <td>0</td>\n",
       "      <td>0.0094</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>Input: 'This movie was absolutely amazing, I l...</td>\n",
       "      <td>0.07524</td>\n",
       "      <td>Classification</td>\n",
       "      <td>Open QA</td>\n",
       "      <td>0.548</td>\n",
       "      <td>0.0099</td>\n",
       "      <td>0.0177</td>\n",
       "      <td>0.2870</td>\n",
       "      <td>0.0000</td>\n",
       "      <td>0.1376</td>\n",
       "      <td>0</td>\n",
       "      <td>0.2157</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                                text  prompt_complexity_score  \\\n",
       "0  Write a mystery set in a small town where an e...                  0.47155   \n",
       "1  Prompt: Write a Python script that uses a for ...                  0.27830   \n",
       "2  Q: What is the capital of Japan?\\nA: The capit...                  0.08542   \n",
       "3  User: Hi, how are you today?\\nBot: I'm doing g...                  0.07782   \n",
       "4  Input: 'This movie was absolutely amazing, I l...                  0.07524   \n",
       "\n",
       "       task_type_1      task_type_2  task_type_prob  creativity_scope  \\\n",
       "0  Text Generation               NA           0.866            0.8668   \n",
       "1  Code Generation  Text Generation           0.766            0.0827   \n",
       "2          Open QA               NA           0.957            0.0080   \n",
       "3          Chatbot          Open QA           0.653            0.0341   \n",
       "4   Classification          Open QA           0.548            0.0099   \n",
       "\n",
       "   reasoning  contextual_knowledge  number_of_few_shots  domain_knowledge  \\\n",
       "0     0.0564                0.0478               0.0000            0.2257   \n",
       "1     0.0632                0.0558               0.0000            0.9803   \n",
       "2     0.0045                0.0080               0.5775            0.3356   \n",
       "3     0.0077                0.3731               0.3492            0.1762   \n",
       "4     0.0177                0.2870               0.0000            0.1376   \n",
       "\n",
       "   no_label_reason  constraint_ct  \n",
       "0                0         0.7855  \n",
       "1                0         0.5581  \n",
       "2                0         0.0125  \n",
       "3                0         0.0094  \n",
       "4                0         0.2157  "
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# For simplicity, we take the first written file from the writer stage\n",
    "# In real pipelines, the writer may return multiple files (shards) or objects\n",
    "result_file = result[0].data[0]\n",
    "\n",
    "result_df = pd.read_json(result_file, lines=True)\n",
    "result_df.head()"
   ]
  },
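  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because the output is a regular Pandas DataFrame, downstream filtering is straightforward. As a minimal, self-contained sketch (the rows and the 0.25 threshold below are illustrative values, not real classifier output or a recommended cutoff), we could keep only the prompts scored as relatively complex:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Illustrative rows mirroring the classifier's output columns\n",
    "result_df = pd.DataFrame(\n",
    "    {\n",
    "        \"text\": [\"Write a mystery set in a small town...\", \"Q: What is the capital of Japan?\"],\n",
    "        \"prompt_complexity_score\": [0.47155, 0.08542],\n",
    "        \"task_type_1\": [\"Text Generation\", \"Open QA\"],\n",
    "    }\n",
    ")\n",
    "\n",
    "# Keep rows whose complexity score exceeds an example threshold\n",
    "complex_prompts = result_df[result_df[\"prompt_complexity_score\"] > 0.25]\n",
    "```\n",
    "\n",
    "The same pattern applies to any of the output columns shown above, for example grouping by `task_type_1` to inspect the distribution of task types."
   ]
  },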
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can see that the predictions were generated as expected. For more information about the ranges of the output values for the prompt task and complexity classifier, please refer to the [Hugging Face page](https://huggingface.co/nvidia/prompt-task-and-complexity-classifier)."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
