{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "34a9066b-fa01-4bbf-921d-2bc3880de7f9",
   "metadata": {},
   "source": [
    "# Synthetic Data Generation\n",
    "\n",
    "## 1. Introduction\n",
    "\n",
    "Synthetic Data Generation (SDG) is the process of creating data using statistical simulations or AI models. Visit [Synthetic Data Generation with Language Models: A Practical Guide](https://medium.com/p/0ff98eb226a1) to learn more about SDG and the implementation of this data generator.\n",
    "\n",
    "### Why Synthetic Data?\n",
    "- Scalability\n",
    "- Privacy preservation\n",
    "- Ability to simulate hard-to-capture scenarios\n",
    "\n",
    "### Limitations of Synthetic Data\n",
     "- Lack of real-world authenticity\n",
     "- Overfitting and bias\n",
    "\n",
    "### Overcoming Limitations\n",
    "- Hybrid approach\n",
    "- Validation on real data\n",
    "- Regularization\n",
    "- Diversification"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2b2898c1-5383-4f10-8c3e-3b6ab0a32649",
   "metadata": {},
   "source": [
    "## 2. Implementation\n",
    "\n",
    "### Login with Hugging Face Token\n",
    "\n",
    "- If you plan on using a gated model such as [Llama 3.2](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct), make sure to submit an access request through their model page.\n",
     "- Once your access request is approved, paste your Hugging Face read token, uncheck the option to add the token as a git credential, and log in."
   ]
  },
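  {
   "cell_type": "markdown",
   "id": "a1e2c3d4-5f60-4718-9a2b-3c4d5e6f7a80",
   "metadata": {},
   "source": [
    "Alternatively, the token can be stored in a `.env` file next to this notebook so that the login prompt is skipped. The file only needs a single line; the value below is a placeholder, not a real token:\n",
    "\n",
    "```shell\n",
    "HF_TOKEN=your_token_here\n",
    "```"
   ]
  },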
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "f16aad82-ff53-4785-8a1c-62b48e86371a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "35cb8e0f784142e1b41ef0ca3dd86887",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "VBox(children=(HTML(value='<center> <img\\nsrc=https://huggingface.co/front/assets/huggingface_logo-noborder.sv…"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "import os\n",
    "from dotenv import load_dotenv\n",
    "from huggingface_hub import login\n",
    "\n",
    "def read_token() -> None:\n",
    "    \"\"\"\n",
    "    Logs into Hugging Face using a token stored in a '.env' file under the key `HF_TOKEN`.\n",
    "    If the '.env' file is missing or `HF_TOKEN` is not provided, the user will be prompted to log in.\n",
    "    \"\"\"\n",
    "    load_dotenv()\n",
    "    token = os.getenv(\"HF_TOKEN\")\n",
    "    login(token)\n",
    "\n",
    "read_token()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3cc6e17d-fa47-4987-9b44-3c1ad57b235a",
   "metadata": {},
   "source": [
    "### Synthetic Data Generation Function\n",
    "\n",
     "- This data generator uses a language model to create labeled datasets for the labels and categories that you define in a config file.\n",
     "- For each sample, a label and a category are randomly selected from the config file, and the language model is instructed to generate synthetic data based on the chosen category and label.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "7924acfa-b971-4870-82d4-80122babf66e",
   "metadata": {},
   "outputs": [],
   "source": [
    "import random\n",
    "from datetime import datetime\n",
    "import pandas as pd\n",
    "from transformers import pipeline\n",
    "import re\n",
    "from typing import Dict, List, Tuple\n",
    "\n",
    "def parse_string(input_string: str) -> Tuple[str, str]:\n",
    "    \"\"\"\n",
    "    Parses a string containing `OUTPUT:` and `REASONING:` sections and extracts their values.\n",
    "\n",
    "    Args:\n",
    "        input_string (str): The input string containing `OUTPUT:` and `REASONING:` labels.\n",
    "\n",
    "    Returns:\n",
    "        Tuple[str, str]: A tuple containing two strings:\n",
    "                         - The content following `OUTPUT:`.\n",
    "                         - The content following `REASONING:`.\n",
    "\n",
    "    Raises:\n",
    "        ValueError: If the input string does not match the expected format with both `OUTPUT:` and `REASONING:` sections.\n",
    "\n",
    "    Note:\n",
    "        - The function is case-sensitive and assumes `OUTPUT:` and `REASONING:` are correctly capitalized.\n",
    "    \"\"\"\n",
    "    # Use regular expressions to extract OUTPUT and REASONING\n",
    "    match = re.search(r\"OUTPUT:\\s*(.+?)\\s*REASONING:\\s*(.+)\", input_string, re.DOTALL)\n",
    "\n",
    "    if not match:\n",
    "        raise ValueError(\n",
    "            \"The generated response is not in the expected 'OUTPUT:... REASONING:...' format.\"\n",
    "        )\n",
    "\n",
    "    # Extract the matched groups: output and reasoning\n",
    "    output = match.group(1).strip()\n",
    "    reasoning = match.group(2).strip()\n",
    "\n",
    "    return output, reasoning\n",
    "\n",
    "def sdg(\n",
    "    sample_size: int,\n",
    "    labels: List[str],\n",
    "    label_descriptions: str,\n",
     "    categories_types: Dict[str, List[str]],\n",
    "    use_case: str,\n",
    "    prompt_examples: str,\n",
    "    model: str,\n",
    "    max_new_tokens: int,\n",
    "    batch_size: int,\n",
    "    output_dir: str,\n",
    "    save_reasoning: bool,\n",
    ") -> None:\n",
    "    \"\"\"\n",
    "    Generates synthetic data based on specified categories and labels.\n",
    "\n",
    "    Args:\n",
    "        sample_size (int): The number of synthetic data samples to generate.\n",
    "        labels (List[str]): The labels used to classify the synthetic data.\n",
    "        label_descriptions (str): A description of the meaning of each label.\n",
     "        categories_types (Dict[str, List[str]]): A mapping from each category to its list of types, used for data generation and diversification.\n",
    "        use_case (str): The use case of the synthetic data to provide context for the language model.\n",
    "        prompt_examples (str): The examples used in the Few-Shot or Chain-of-Thought prompting.\n",
    "        model (str): The large language model used for generating the synthetic data.\n",
    "        max_new_tokens (int): The maximum number of new tokens to generate for each sample.\n",
    "        batch_size (int): The number of samples per batch to append to the output file.\n",
    "        output_dir (str): The directory path where the output file will be saved.\n",
    "        save_reasoning (bool): Whether to save the reasoning or explanation behind the generated data.\n",
    "    \"\"\"\n",
    "\n",
    "    categories = list(categories_types.keys())\n",
    "\n",
    "    # Generate filename with current date and time\n",
    "    timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n",
    "    os.makedirs(output_dir, exist_ok=True)\n",
    "    output_path = os.path.join(output_dir, f\"{timestamp}.csv\")\n",
    "\n",
    "    # If sample_size is not divisible by batch_size, an extra batch is added\n",
    "    num_batches = (sample_size + batch_size - 1) // batch_size\n",
    "\n",
    "    print(\n",
    "        f\"\\U0001f680  Synthetic data will be appended to {output_path} in {num_batches} batch(es).\"\n",
    "    )\n",
    "\n",
     "    # Load the text-generation pipeline once, outside the loop, so the\n",
     "    # model is not re-initialized for every sample\n",
     "    generator = pipeline(\"text-generation\", model=model)\n",
     "\n",
     "    for batch in range(num_batches):\n",
     "        # Calculate the start and end indices for the current batch\n",
     "        start = batch * batch_size\n",
     "        end = min(start + batch_size, sample_size)\n",
     "\n",
     "        # Store results of the current batch\n",
     "        batch_data = []\n",
     "\n",
     "        # Assign random labels to the current batch\n",
     "        batch_random_labels = random.choices(labels, k=batch_size)\n",
     "\n",
     "        # Assign random categories to the current batch\n",
     "        batch_random_categories = random.choices(categories, k=batch_size)\n",
     "\n",
     "        for i in range(start, end):\n",
     "            # Assign a random type to the ith category\n",
     "            random_type = random.choice(\n",
     "                categories_types[batch_random_categories[i - start]]\n",
     "            )\n",
     "            prompt = f\"\"\"You should create synthetic data for specified labels and categories. \n",
     "            This is especially useful for {use_case}.\n",
     "\n",
     "            *Label Descriptions*\n",
     "            {label_descriptions}\n",
     "\n",
     "            *Examples*\n",
     "            {prompt_examples}\n",
     "\n",
     "            ####################\n",
     "\n",
     "            Generate one output for the classification below.\n",
     "            You may use the examples I have provided as a guide, but you cannot simply modify or rewrite them.\n",
     "            Only return the OUTPUT and REASONING. \n",
     "            Do not return the LABEL, CATEGORY, or TYPE.\n",
     "\n",
     "            LABEL: {batch_random_labels[i - start]}\n",
     "            CATEGORY: {batch_random_categories[i - start]}\n",
     "            TYPE: {random_type}\n",
     "            OUTPUT:\n",
     "            REASONING:\n",
     "            \"\"\"\n",
     "            messages = [\n",
     "                {\n",
     "                    \"role\": \"system\",\n",
     "                    \"content\": f\"You are a helpful assistant designed to generate synthetic data for {use_case} with labels {labels} in categories {categories}.\",\n",
     "                },\n",
     "                {\"role\": \"user\", \"content\": prompt},\n",
     "            ]\n",
    "            result = generator(messages, max_new_tokens=max_new_tokens)[0][\n",
    "                \"generated_text\"\n",
    "            ][-1][\"content\"]\n",
    "\n",
    "            # Uncomment to see the raw outputs\n",
    "            # print(result)\n",
    "\n",
    "            text, reasoning = parse_string(result)\n",
    "\n",
    "            entry = {\n",
    "                \"text\": text,\n",
    "                \"label\": batch_random_labels[i - start],\n",
    "                \"model\": model,\n",
    "            }\n",
    "\n",
    "            if save_reasoning:\n",
    "                entry[\"reasoning\"] = reasoning\n",
    "\n",
    "            batch_data.append(entry)\n",
    "\n",
    "        # Convert the batch results to a DataFrame\n",
    "        batch_df = pd.DataFrame(batch_data)\n",
    "\n",
    "        # Append the DataFrame to the CSV file\n",
    "        if batch == 0:\n",
    "            # If it's the first batch, write headers\n",
    "            batch_df.to_csv(output_path, mode=\"w\", index=False)\n",
    "        else:\n",
    "            # For subsequent batches, append without headers\n",
    "            batch_df.to_csv(output_path, mode=\"a\", header=False, index=False)\n",
    "        print(f\"\\U000026a1  Saved batch number {batch + 1}/{num_batches}\")\n",
    "\n",
     "    print(\"\\n\\nGenerated Data:\")\n",
    "    df = pd.read_csv(output_path)\n",
    "    display(df.head())"
   ]
  },
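  {
   "cell_type": "markdown",
   "id": "b7f1d2c3-4a50-4b61-8c72-9d8e0f1a2b3c",
   "metadata": {},
   "source": [
    "As a quick sanity check, the regular expression used by `parse_string` can be exercised on a hand-written string in the expected format (the sample text below is illustrative only, not actual model output):\n",
    "\n",
    "```python\n",
    "import re\n",
    "\n",
    "sample = \"OUTPUT: Thank you for waiting today. REASONING: The response is courteous.\"\n",
    "match = re.search(r\"OUTPUT:\\s*(.+?)\\s*REASONING:\\s*(.+)\", sample, re.DOTALL)\n",
    "print(match.group(1).strip())  # Thank you for waiting today.\n",
    "print(match.group(2).strip())  # The response is courteous.\n",
    "```"
   ]
  },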
  {
   "cell_type": "markdown",
   "id": "8ffc03f8-74fd-4bb6-b5d9-8a7447e46d34",
   "metadata": {},
   "source": [
    "## 3. Execution\n",
    "\n",
    "- Load the Polite Guard configurations, or provide a config file for your use case.\n",
    "- Set the parameter values. Sample size is set to 4 for a quick demonstration."
   ]
  },
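  {
   "cell_type": "markdown",
   "id": "c9e2f3a4-6b71-4c82-9d93-0e1f2a3b4c5d",
   "metadata": {},
   "source": [
    "A custom config file is a plain Python module defining the attributes that `sdg` reads. The sketch below shows the expected attribute names with illustrative values only; see `./config/polite-guard-config.py` for the full Polite Guard configuration:\n",
    "\n",
    "```python\n",
    "# Hypothetical config module, e.g. ./config/my-config.py\n",
    "labels = [\"polite\", \"impolite\"]\n",
    "label_descriptions = \"polite: courteous and respectful; impolite: rude or dismissive.\"\n",
    "# Each category maps to a list of types used to diversify the samples\n",
    "categories_types = {\n",
    "    \"customer service\": [\"returns\", \"billing\"],\n",
    "    \"travel\": [\"bookings\", \"cancellations\"],\n",
    "}\n",
    "use_case = \"chatbot interactions\"\n",
    "prompt_examples = \"\"\"LABEL: polite\n",
    "CATEGORY: customer service\n",
    "TYPE: returns\n",
    "OUTPUT: Thank you for reaching out! I'd be happy to help with your return.\n",
    "REASONING: The response is friendly and offers help.\"\"\"\n",
    "```"
   ]
  },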
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "e5bf721a-45fe-4a3f-94a4-3473a9cdccac",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🚀  Synthetic data will be appended to ./output/20250507_163808.csv in 1 batch(es).\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "d302cbc645ed4a628d81067214499012",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Device set to use cpu\n",
      "Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "64feb04247fb482683419361ac08aa4d",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Device set to use cpu\n",
      "Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "68cd1a3fab3b485a81ab0468a517c3a6",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Device set to use cpu\n",
      "Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "4b8fae4e997e4ae6b3efb0db97c2871f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Device set to use cpu\n",
      "Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "⚡  Saved batch number 1/1\n",
      "\n",
      "\n",
      "Generated Data:\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>text</th>\n",
       "      <th>label</th>\n",
       "      <th>model</th>\n",
       "      <th>reasoning</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>I'll check what options we have for you, and I...</td>\n",
       "      <td>somewhat polite</td>\n",
       "      <td>meta-llama/Llama-3.2-3B-Instruct</td>\n",
       "      <td>This text would be classified as \"somewhat pol...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>I'm glad you're interested in joining our team...</td>\n",
       "      <td>polite</td>\n",
       "      <td>meta-llama/Llama-3.2-3B-Instruct</td>\n",
       "      <td>This text is polite because it expresses enthu...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>Your credit score will be calculated based on ...</td>\n",
       "      <td>neutral</td>\n",
       "      <td>meta-llama/Llama-3.2-3B-Instruct</td>\n",
       "      <td>This text would be classified as \"neutral.\" Th...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>You're wasting your time with this play. It's ...</td>\n",
       "      <td>impolite</td>\n",
       "      <td>meta-llama/Llama-3.2-3B-Instruct</td>\n",
       "      <td>This statement is impolite because it directly...</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                                text            label  \\\n",
       "0  I'll check what options we have for you, and I...  somewhat polite   \n",
       "1  I'm glad you're interested in joining our team...           polite   \n",
       "2  Your credit score will be calculated based on ...          neutral   \n",
       "3  You're wasting your time with this play. It's ...         impolite   \n",
       "\n",
       "                              model  \\\n",
       "0  meta-llama/Llama-3.2-3B-Instruct   \n",
       "1  meta-llama/Llama-3.2-3B-Instruct   \n",
       "2  meta-llama/Llama-3.2-3B-Instruct   \n",
       "3  meta-llama/Llama-3.2-3B-Instruct   \n",
       "\n",
       "                                           reasoning  \n",
       "0  This text would be classified as \"somewhat pol...  \n",
       "1  This text is polite because it expresses enthu...  \n",
       "2  This text would be classified as \"neutral.\" Th...  \n",
       "3  This statement is impolite because it directly...  "
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "import importlib.util\n",
    "import sys\n",
    "\n",
    "# Dynamically load the configuration module\n",
    "config_path = \"./config/polite-guard-config.py\"\n",
    "if not os.path.exists(config_path):\n",
    "    print(f\"Error: Configuration file not found at {config_path}\")\n",
    "    sys.exit(1)\n",
    "\n",
    "try:\n",
    "    spec = importlib.util.spec_from_file_location(\"config_module\", config_path)\n",
    "    if spec is None or spec.loader is None:\n",
    "        raise ImportError(f\"Could not load spec for module at {config_path}\")\n",
    "    sdg_config = importlib.util.module_from_spec(spec)\n",
    "    spec.loader.exec_module(sdg_config)\n",
    "except Exception as e:\n",
    "    print(f\"Error loading configuration from {config_path}: {e}\")\n",
    "    sys.exit(1)\n",
    "\n",
    "sdg(\n",
    "    sample_size=4,\n",
    "    labels=sdg_config.labels,\n",
    "    label_descriptions=sdg_config.label_descriptions,\n",
    "    categories_types=sdg_config.categories_types,\n",
    "    use_case=sdg_config.use_case,\n",
    "    prompt_examples=sdg_config.prompt_examples,\n",
    "    model=\"meta-llama/Llama-3.2-3B-Instruct\",\n",
    "    max_new_tokens=256,\n",
    "    batch_size=4,\n",
    "    output_dir=\"./output\",\n",
     "    save_reasoning=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5732f142-8ae2-47cd-8846-f7621901928e",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "ai-project",
   "language": "python",
   "name": "ai-project"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
