{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "#Notebook showing how to execute annotation with SDG with a custom annotation yaml"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Annotation with SDG"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Importing the necessary libraries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/ec2-user/subsetenv/lib64/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
      "  from .autonotebook import tqdm as notebook_tqdm\n"
     ]
    }
   ],
   "source": [
    "# First Party\n",
    "from instructlab.sdg.pipeline import Pipeline, PipelineContext\n",
    "# Third Party\n",
    "from datasets import load_dataset\n",
    "from openai import OpenAI\n",
    "import yaml\n",
    "import os"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Serve LLM through ilab serve command"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Run the following shell command to serve the Mixtral-8x7B-Instruct-v0.1 model on port 8000 (by default). The mixtral model is quite large and may take a while to be served through vLLM.\n",
    "\n",
    "*Note*: You can serve any other desired model by changing the model-path argument. The rest of this notebook will work seamlessly with any other model as long we can wrap the served model in an OpenAI client"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`ilab serve --model-path ~/.cache/instructlab/models/mistralai/Mixtral-8x7B-Instruct-v0.1/`"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Wrap the served model in an OpenAI client"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "client = OpenAI(\n",
    "    base_url=\"http://localhost:8000/v1\",  # Your model endpoint\n",
    "    api_key=\"dummy-key\"  # vLLM doesn't check the key, but one is required\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Make sure the model is served before running the next cell, and that the following cell returns the correct model id"
   ]
  },
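   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "Optionally, probe the server first. This is a minimal health-check sketch using only the Python standard library; it assumes the standard OpenAI-compatible `/v1/models` endpoint that vLLM serves (adjust the URL if your server runs elsewhere)."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Health-check sketch: the endpoint path assumes the standard\n",
     "# OpenAI-compatible /models route that vLLM exposes.\n",
     "import urllib.error\n",
     "import urllib.request\n",
     "\n",
     "def server_is_up(base_url=\"http://localhost:8000/v1\", timeout=5):\n",
     "    \"\"\"Return True if the /models endpoint responds with HTTP 200.\"\"\"\n",
     "    try:\n",
     "        with urllib.request.urlopen(base_url + \"/models\", timeout=timeout) as resp:\n",
     "            return resp.status == 200\n",
     "    except (urllib.error.URLError, OSError):\n",
     "        return False\n",
     "\n",
     "print(server_is_up())"
    ]
   },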
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'/home/ec2-user/.cache/instructlab/models/mistralai/Mixtral-8x7B-Instruct-v0.1'"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "models = client.models.list()\n",
    "teacher_model = models.data[0].id\n",
    "teacher_model #make sure this is the correct model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Preparing classification dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### In this exercise, we will use the Yahoo Answers Topics dataset from HuggingFace.\n",
    "\n",
    "#### Here, we will use a small portion of the training set to demonstrate the prompt engineering process, and the rest of the set will be assumed to be unlabeled.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Steps to follow:\n",
    "1. EDA of the dataset\n",
    "2. Create In-Context Learning examples\n",
    "3. Iterate on the annotation pipeline (the components of the prompt) to improve the quality of the annotation\n",
    "4. Merge the ICL examples, unlabeled samples and the components of the prompt into a final input dataset for annotation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Importing classification dataset from HuggingFace"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "DatasetDict({\n",
      "    train: Dataset({\n",
      "        features: ['text', 'label'],\n",
      "        num_rows: 120000\n",
      "    })\n",
      "    test: Dataset({\n",
      "        features: ['text', 'label'],\n",
      "        num_rows: 7600\n",
      "    })\n",
      "})\n"
     ]
    }
   ],
   "source": [
    "# Importing classification dataset from HuggingFace\n",
    "dataset = load_dataset(\"fancyzhx/ag_news\")\n",
    "print(dataset) # print details of the dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let us use a portion of the dataset for this example, since the dataset is quite large (1.4 million samples)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Dataset({\n",
      "    features: ['text', 'label'],\n",
      "    num_rows: 500\n",
      "})\n"
     ]
    }
   ],
   "source": [
    "#randomly select 500 samples\n",
    "dataset = dataset['train'].shuffle(seed=42).select(range(500))\n",
    "print(dataset) # print details of the dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## EDA of the dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "=== Dataset Overview ===\n",
      "Total number of samples: 500\n",
      "\n",
      "Feature Information:\n",
      "<class 'pandas.core.frame.DataFrame'>\n",
      "RangeIndex: 500 entries, 0 to 499\n",
      "Data columns (total 2 columns):\n",
      " #   Column  Non-Null Count  Dtype \n",
      "---  ------  --------------  ----- \n",
      " 0   text    500 non-null    object\n",
      " 1   label   500 non-null    int64 \n",
      "dtypes: int64(1), object(1)\n",
      "memory usage: 7.9+ KB\n",
      "None\n",
      "\n",
      "=== Class Distribution ===\n",
      "Sci/Tech: 137 samples (27.4%)\n",
      "Sports: 135 samples (27.0%)\n",
      "World: 114 samples (22.8%)\n",
      "Business: 114 samples (22.8%)\n",
      "\n",
      "=== Text Length Statistics ===\n",
      "count    500.000000\n",
      "mean     231.006000\n",
      "std       64.919265\n",
      "min      107.000000\n",
      "25%      191.000000\n",
      "50%      226.500000\n",
      "75%      263.000000\n",
      "max      801.000000\n",
      "Name: text_length, dtype: float64\n",
      "\n",
      "=== Average Text Length by Class ===\n",
      "World:\n",
      "  Mean length: 232.8 characters\n",
      "  Min length: 130 characters\n",
      "  Max length: 488 characters\n",
      "Sports:\n",
      "  Mean length: 224.5 characters\n",
      "  Min length: 116 characters\n",
      "  Max length: 801 characters\n",
      "Business:\n",
      "  Mean length: 238.3 characters\n",
      "  Min length: 124 characters\n",
      "  Max length: 476 characters\n",
      "Sci/Tech:\n",
      "  Mean length: 229.9 characters\n",
      "  Min length: 107 characters\n",
      "  Max length: 491 characters\n",
      "\n",
      "=== Word Count Statistics ===\n",
      "count    500.000000\n",
      "mean      37.174000\n",
      "std       10.339707\n",
      "min       14.000000\n",
      "25%       31.000000\n",
      "50%       36.000000\n",
      "75%       42.000000\n",
      "max      126.000000\n",
      "Name: word_count, dtype: float64\n",
      "\n",
      "=== Average Word Count by Class ===\n",
      "World:\n",
      "  Mean words: 37.2\n",
      "  Min words: 18\n",
      "  Max words: 81\n",
      "Sports:\n",
      "  Mean words: 37.7\n",
      "  Min words: 20\n",
      "  Max words: 126\n",
      "Business:\n",
      "  Mean words: 37.5\n",
      "  Min words: 16\n",
      "  Max words: 68\n",
      "Sci/Tech:\n",
      "  Mean words: 36.4\n",
      "  Min words: 14\n",
      "  Max words: 82\n",
      "\n",
      "=== Most Common Words ===\n",
      "Top 20 most frequent words:\n",
      "that: 120\n",
      "with: 107\n",
      "said: 72\n",
      "from: 65\n",
      "after: 64\n",
      "will: 61\n",
      "over: 60\n",
      "have: 57\n",
      "more: 44\n",
      "this: 37\n",
      "first: 37\n",
      "than: 36\n",
      "been: 35\n",
      "about: 35\n",
      "against: 31\n",
      "they: 30\n",
      "company: 30\n",
      "their: 30\n",
      "last: 29\n",
      "says: 27\n",
      "\n",
      "=== Sample Text from Each Class ===\n",
      "\n",
      "WORLD:\n",
      "Bangladesh paralysed by strikes Opposition activists have brought many towns and cities in Bangladesh to a halt, the day after 18 people died in explosions at a political rally.\n",
      "\n",
      "SPORTS:\n",
      "Desiring Stability Redskins coach Joe Gibbs expects few major personnel changes in the offseason and wants to instill a culture of stability in Washington.\n",
      "\n",
      "BUSINESS:\n",
      "Economy builds steam in KC Fed district The economy continued to strengthen in September and early October in the Great Plains and Rocky Mountain regions covered by the Tenth Federal Reserve District,...\n",
      "\n",
      "SCI/TECH:\n",
      "U2 pitches for Apple New iTunes ads airing during baseball games Tuesday will feature the advertising-shy Irish rockers.\n",
      "\n",
      "=== Basic Statistics Summary ===\n",
      "Number of unique documents: 500\n",
      "Average words per document: 37.2\n",
      "Median words per document: 36.0\n",
      "Most common class: Sci/Tech (137 samples)\n",
      "Least common class: Business (114 samples)\n"
     ]
    }
   ],
   "source": [
    "# After loading the dataset, add these EDA steps\n",
    "import pandas as pd\n",
    "from collections import Counter\n",
    "import statistics\n",
    "\n",
    "# Convert Dataset to pandas DataFrame for easier analysis\n",
    "df = pd.DataFrame(dataset)\n",
    "labels = dataset.features['label'].names\n",
    "# 1. Basic Dataset Information\n",
    "print(\"\\n=== Dataset Overview ===\")\n",
    "print(f\"Total number of samples: {len(df)}\")\n",
    "print(\"\\nFeature Information:\")\n",
    "print(df.info())\n",
    "\n",
    "# 2. Class Distribution\n",
    "print(\"\\n=== Class Distribution ===\")\n",
    "class_dist = df['label'].map(lambda x: labels[x]).value_counts()\n",
    "for class_name, count in class_dist.items():\n",
    "    percentage = (count/len(df)) * 100\n",
    "    print(f\"{class_name}: {count} samples ({percentage:.1f}%)\")\n",
    "\n",
    "# 3. Text Length Analysis\n",
    "df['text_length'] = df['text'].str.len()\n",
    "print(\"\\n=== Text Length Statistics ===\")\n",
    "print(df['text_length'].describe())\n",
    "\n",
    "# 4. Text Length by Class\n",
    "print(\"\\n=== Average Text Length by Class ===\")\n",
    "for label_idx, label_name in enumerate(labels):\n",
    "    class_texts = df[df['label'] == label_idx]['text_length']\n",
    "    print(f\"{label_name}:\")\n",
    "    print(f\"  Mean length: {class_texts.mean():.1f} characters\")\n",
    "    print(f\"  Min length: {class_texts.min()} characters\")\n",
    "    print(f\"  Max length: {class_texts.max()} characters\")\n",
    "\n",
    "# 5. Word Count Analysis\n",
    "df['word_count'] = df['text'].str.split().str.len()\n",
    "print(\"\\n=== Word Count Statistics ===\")\n",
    "print(df['word_count'].describe())\n",
    "\n",
    "# 6. Word Count by Class\n",
    "print(\"\\n=== Average Word Count by Class ===\")\n",
    "for label_idx, label_name in enumerate(labels):\n",
    "    class_words = df[df['label'] == label_idx]['word_count']\n",
    "    print(f\"{label_name}:\")\n",
    "    print(f\"  Mean words: {class_words.mean():.1f}\")\n",
    "    print(f\"  Min words: {class_words.min()}\")\n",
    "    print(f\"  Max words: {class_words.max()}\")\n",
    "\n",
    "# 7. Most Common Words\n",
    "def get_words(text):\n",
    "    words = text.lower().split()\n",
    "    return [word for word in words if word.isalnum() and len(word) > 3]  # Simple filtering\n",
    "\n",
    "all_words = []\n",
    "for text in df['text']:\n",
    "    all_words.extend(get_words(text))\n",
    "\n",
    "print(\"\\n=== Most Common Words ===\")\n",
    "word_freq = Counter(all_words).most_common(20)\n",
    "print(\"Top 20 most frequent words:\")\n",
    "for word, count in word_freq:\n",
    "    print(f\"{word}: {count}\")\n",
    "\n",
    "# 8. Sample texts from each class\n",
    "print(\"\\n=== Sample Text from Each Class ===\")\n",
    "for label_idx, label_name in enumerate(labels):\n",
    "    sample_text = df[df['label'] == label_idx]['text'].iloc[0]\n",
    "    print(f\"\\n{label_name.upper()}:\")\n",
    "    print(sample_text[:200] + \"...\" if len(sample_text) > 200 else sample_text)\n",
    "\n",
    "# 9. Basic Statistics Summary\n",
    "print(\"\\n=== Basic Statistics Summary ===\")\n",
    "print(f\"Number of unique documents: {df['text'].nunique()}\")\n",
    "print(f\"Average words per document: {df['word_count'].mean():.1f}\")\n",
    "print(f\"Median words per document: {df['word_count'].median()}\")\n",
    "print(f\"Most common class: {class_dist.index[0]} ({class_dist.iloc[0]} samples)\")\n",
    "print(f\"Least common class: {class_dist.index[-1]} ({class_dist.iloc[-1]} samples)\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Creating In-Context-Learning examples (Few-Shot Examples) for the Prompt"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let us first select 3 examples from the dataset to be used as In-Context-Learning examples (Few-Shot Examples) for the prompt, and 20 examples to be used as validation examples for prompt engineering. The rest, we will save for labeling by the annotation pipeline.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ICL examples: 3\n",
      "Validation examples: 30\n",
      "Unlabeled examples: 467\n"
     ]
    }
   ],
   "source": [
    "K = 3 #number of ICL examples\n",
    "N = 30 #number of validation examples to be used for prompt engineering\n",
    "icl_samples = dataset.select(range(K))\n",
    "validation_samples = dataset.select(range(K, K+N))\n",
    "unlabeled_samples = dataset.select(range(K+N, len(dataset)))\n",
    "\n",
    "print(f\"ICL examples: {len(icl_samples)}\")\n",
    "print(f\"Validation examples: {len(validation_samples)}\")\n",
    "print(f\"Unlabeled examples: {len(unlabeled_samples)}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      " Bangladesh paralysed by strikes Opposition activists have brought many towns and cities in Bangladesh to a halt, the day after 18 people died in explosions at a political rally. \n",
      "Label:  World\n",
      "\n",
      " Desiring Stability Redskins coach Joe Gibbs expects few major personnel changes in the offseason and wants to instill a culture of stability in Washington. \n",
      "Label:  Sports\n",
      "\n",
      " Will Putin #39;s Power Play Make Russia Safer? Outwardly, Russia has not changed since the barrage of terrorist attacks that culminated in the school massacre in Beslan on Sept. \n",
      "Label:  World\n"
     ]
    }
   ],
   "source": [
    "for sample in icl_samples:\n",
    "    print(\"\\n\", sample['text'], \"\\nLabel: \", labels[int(sample['label'])])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "These look like good examples to be used as few shot examples for the prompt. We will save this for now"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prompt Engineering"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this section, we will iterate on the prompt to improve the quality of the annotation.\n",
    "\n",
    "- We will start with a basic prompt and then iterate on it based on the performance on the validation examples.\n",
    "- We will cover 3 iterations of prompt engineering:\n",
    "    - Basic prompt\n",
    "    - Prompt with ICL examples and structured principles and system prompt\n",
    "    - Prompt with improved ICL examples"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Create annotation config YAML with 6 prompt components:\n",
    "- System prompt (the overall instruction for the task. empty for now)\n",
    "- Introduction (a brief introduction to the task)\n",
    "- Principles (the principles that guide the task, empty for now)\n",
    "- Examples (empty for now)\n",
    "- Generation (the query for annotation, along with any prefix or suffix instructions)\n",
    "- Start tags and end tags (empty)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Note the templating pattern for the introduction and generation components. These will be used to inject the values for the task description and the query for annotation respectively, dynamically. We need to make sure that these keys are present in the input dataset, when we call `pipeline.generate()`. The keys used in this example are `simple_task_description` and `text`. You can use any other keys that you want to inject into the prompt, but these should be present in the input dataset."
   ]
  },
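   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "To make the templating concrete, here is an illustrative stand-in for the pipeline's internal rendering (the actual SDG implementation may differ): each `{{key}}` placeholder is replaced with the corresponding value from the input row."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Illustrative stand-in for the pipeline's template rendering (assumption:\n",
     "# the real implementation may differ); shows how {{key}} maps to dataset keys.\n",
     "def render(template, row):\n",
     "    # Replace each {{key}} placeholder with the row's value\n",
     "    for key, value in row.items():\n",
     "        template = template.replace(\"{{\" + key + \"}}\", str(value))\n",
     "    return template\n",
     "\n",
     "row = {\"simple_task_description\": \"Annotation\", \"text\": \"Example query\"}\n",
     "print(render(\"Task Description: {{simple_task_description}}\", row))\n",
     "print(render(\"Here is the query for annotation:\\n{{text}}\", row))"
    ]
   },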
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create annotation config YAML\n",
    "simple_annotation_config = {\n",
    "    \"system\": None,\n",
    "    \"introduction\": \"Task Description: {{simple_task_description}}\",\n",
    "    \"principles\": None,\n",
    "    \"examples\": None,\n",
    "    \"generation\": \"Here is the query for annotation:\\n{{text}}\",\n",
    "    \"start_tags\": [\"\"],\n",
    "    \"end_tags\": [\"\"]\n",
    "}\n",
    "\n",
    "# Write to YAML file\n",
    "with open('simple_annotation_config.yaml', 'w') as f:\n",
    "    yaml.dump(simple_annotation_config, f, default_flow_style=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Dataset({\n",
       "    features: ['text', 'label', 'simple_task_description'],\n",
       "    num_rows: 30\n",
       "})"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "#Let's create 'simple_task_description' key in the validation_samples dataset and populate it with the task description.\n",
    "simple_task_description = \"Annotation\"\n",
    "validation_samples = validation_samples.map(lambda x: {\"simple_task_description\": simple_task_description})\n",
    "validation_samples"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Create annotation yaml configuration to leverage guided decoding, and include the labels under the 'guided choice' key like so. We are going to make this point to the simple_annotation_config.yaml file that we just created above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create YAML configuration\n",
    "yaml_config = {\n",
    "    \"version\": \"1.0\",\n",
    "    \"blocks\": [\n",
    "        {\n",
    "            \"name\": \"annotation\",\n",
    "            \"type\": \"LLMBlock\",\n",
    "            \"config\": {\n",
    "                \"config_path\": \"simple_annotation_config.yaml\",\n",
    "                \"model_id\": \"mistralai/Mixtral-8x7B-Instruct-v0.1\",\n",
    "                \"output_cols\": [\"output\"],\n",
    "                \"gen_kwargs\": {\n",
    "                    \"max_tokens\": 20,\n",
    "                    \"temperature\": 0,\n",
    "                    \"extra_body\": {\n",
    "                        \"guided_decoding_backend\": \"xgrammar\", #use xgrammar backend for guided decoding, explicitly, and only xgrammar with no fallback on error\n",
    "                        \"guided_choice\": labels  # This will use your labels list\n",
    "                    }\n",
    "                }\n",
    "            },\n",
    "            \"drop_duplicates\": [\"text\"]\n",
    "        }\n",
    "    ]\n",
    "}\n",
    "\n",
    "# Write to YAML file\n",
    "with open('annotation_pipeline.yaml', 'w') as f: #this is the file that will be used to create the annotation pipeline\n",
    "    yaml.dump(yaml_config, f, default_flow_style=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Initialize pipeline context and annotation pipeline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "ctx = PipelineContext(client=client, model_family=\"mixtral\", model_id=teacher_model)\n",
    "# constructing the path with the 'annotation' directory explicitly\n",
    "current_dir = os.path.dirname(os.path.abspath(''))\n",
    "pipeline_yaml = os.path.join(current_dir, \"annotation\", \"annotation_pipeline.yaml\")\n",
    "annotation_pipe = Pipeline.from_file(ctx, pipeline_yaml)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Main Driver Code"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "gen_data = annotation_pipe.generate(validation_samples)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Check output features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'text': Value(dtype='string', id=None),\n",
       " 'label': Value(dtype='int64', id=None),\n",
       " 'simple_task_description': Value(dtype='string', id=None),\n",
       " 'output': Value(dtype='string', id=None)}"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "gen_data.features"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Print generated samples with true and predicted labels"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "text:  U2 pitches for Apple New iTunes ads airing during baseball games Tuesday will feature the advertising-shy Irish rockers. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  S African TV in beheading blunder Public broadcaster SABC apologises after news bulletin shows footage of American beheaded in Iraq. \n",
      "true label:  World \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  A Cosmic Storm: When Galaxy Clusters Collide Astronomers have found what they are calling the perfect cosmic storm, a galaxy cluster pile-up so powerful its energy output is second only to the Big Bang. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Sci/Tech\n",
      "\n",
      "text:  West sets deadline for Iran to freeze uranium enrichment Four western countries set the scene yesterday for a showdown with Iran by demanding that it freeze its uranium enrichment activities immediately. \n",
      "true label:  World \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Computer Assoc. Cuts 800 Jobs Worldwide (AP) AP - Computer Associates International Inc. announced a restructuring plan Wednesday that would reduce its work force by 800 people worldwide, saving the business software maker  #36;70 million annually once the plan is fully implemented. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  CA Opens Utility Pricing for Mainframes Keeping its promise to migrate toward more flexible pricing for its software, Computer Associates (Quote, Chart) has unleashed Measured Workload Pricing for its mainframe management products. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Business\n",
      "\n",
      "text:  Economy builds steam in KC Fed district The economy continued to strengthen in September and early October in the Great Plains and Rocky Mountain regions covered by the Tenth Federal Reserve District, the Federal Reserve Bank of Kansas City said Wednesday. \n",
      "true label:  Business \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Mutombo says he #39;s being traded to Rockets; will back up, mentor &lt;b&gt;...&lt;/b&gt; Dikembe Mutombo, 38, has agreed to a sign-and-trade deal that will send him from the Chicago Bulls to Houston in exchange for Eric Piatkowski, Adrian Griffin and Mike Wilks, the Houston Chronicle reports. \n",
      "true label:  Sports \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  RBC Centura CEO steps down RALEIGH, NC - The head of RBC Centura Bank has stepped down, and his successor will run the bank out of Raleigh rather than Rocky Mount, where the bank is based. \n",
      "true label:  Business \n",
      "predicted label:  Business\n",
      "\n",
      "text:  Plans put on hold for pro SCO website Infoworld reports that SCO #39;s briefly planned foray into Groklaw territory announced last month has been put on the back burner. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Paris Marks Liberation Mindful of Collaboration With solemn commemorations, a ceremonial flag-raising at the Eiffel Tower and columns of 1940s-era tanks and army jeeps, Parisians on Wednesday marked the 60th anniversary  \n",
      "true label:  World \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Oracle acquisition of PeopleSoft leads flurry of deals NEW YORK (CBS.MW) -- US stocks closed higher Monday, with the Dow Jones Industrial Average ending at its best level in more than nine months amid better-than-expected economic data and merger-related optimism. \n",
      "true label:  Business \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  They #146;re in the wrong ATHENS -- Matt Emmons was focusing on staying calm. He should have been focusing on the right target. \n",
      "true label:  Sports \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Mularkey Sticking With Bledsoe As Bills QB (AP) AP - Mike Mularkey has a message to those clamoring for rookie quarterback J.P. Losman to replace Drew Bledsoe as Buffalo's starter. Not yet. \n",
      "true label:  Sports \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Greek membership of eurozone not in doubt BRUSSELS - Greece #39;s membership of the eurozone is not in doubt despite a damaging review of its budget data stretching back five years, a European Commission official said Monday. \n",
      "true label:  Business \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Some fear it's a passport to identity theft It's December 2005 and you're all set for Christmas in Vienna. You have your most fashionable cold-weather gear, right down to the red maple leaves embroidered on your jacket and backpack, to conceal your American citizenship from hostile denizens of Europe. \n",
      "true label:  Business \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  U.S. Plans Crackdown on Piracy, Counterfeiting  WASHINGTON (Reuters) - The United States is cracking down  on the growing trade in counterfeit and pirated goods that  costs U.S. businesses hundreds of billions of dollar annually,  U.S. government and industry officials said on Monday. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Business\n",
      "\n",
      "text:  Cisco switch products target small business Company aggressively addresses smaller businesses with products that reduce the cost and complexity of operating a Cisco network. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Icahn pushes harder to stop Mylan #39;s King acquisition PITTSBURGH Carl Icahn, the largest shareholder of Mylan Laboratories, is now threatening to push for new company directors to stop the generic drug maker #39;s four (B) billion-dollar takeover bid of King Pharmaceuticals. \n",
      "true label:  Business \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Quantum Snags LTO Rival Certance After preaching the superiority of its DLT tape format for years, Quantum has up and bought Certance, a key maker of the rival LTO format. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Business\n",
      "\n",
      "text:  Black to Sue Hollinger Committee for C\\$1.1 Bln Fallen press baron Conrad Black will sue Hollinger International Inc. #39;s special committee and others for C\\$1.1 billion (\\$870 million) over a report that accused him of looting the company of millions of dollars. \n",
      "true label:  Business \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Milosevic trial suspended The trial of Slobodan Milosevic has been suspended in Hague to allow the defense lawyers assigned to the former Yugoslav president more time to prepare their case. \n",
      "true label:  World \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Tremor shakes Athens, no damage ATHENS - An earthquake rattled Athens on Tuesday, including the Olympic venues in the Greek capital. The Athens Geodynamic Institute said that there was a tremor, but few other details were known, such as the magnitude or epicentre of the quake. \n",
      "true label:  Sports \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Bush Web Site Bars Overseas Visitors (washingtonpost.com) washingtonpost.com - The Bush-Cheney reelection campaign has barred people outside the United States from viewing its Web site following an electronic attack that took down the campaign's Internet address for six hours last week, according\\\\to computer security experts. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Business\n",
      "\n",
      "text:  Merger could affect Nextel Partners The proposed \\$35 billion merger of Sprint Corp. and Nextel Communications could mean changes for Kirkland-based Nextel Partners Inc. \n",
      "true label:  Business \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  TEXANS STAT CENTER After finally winning consecutive games, the Texans are a team in search of a challenge.  quot;I knew that was going to come up, quot; Texans coach Dom Capers said with a laugh. \n",
      "true label:  Sports \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  About-face for Heels Rashad McCants wasn #39;t thinking about last year at Kentucky. Jawad Williams said last year was last year. And Sean May was thinking more about his last game than North Carolina #39;s Jan. 3 defeat at Rupp Arena. \n",
      "true label:  Sports \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Web site pictures Photo iPod for holidays A Mac news site says iPods that display digital photos may be under Christmas trees this year. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Shanahan says he intends to honour his deal with Broncos Trying to defuse rumours he might be leaving soon, Denver Broncos coach Mike Shanahan said Thursday night he intends to honour the final four years of his contract. \n",
      "true label:  Sports \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Iran Says U.S. Lacks Options on Its Atomic Program (Reuters) Reuters - Washington has hit a dead-end over\\Iran's nuclear dossier, lacking enough proof to demand U.N.\\sanctions and too bogged down in Iraq for a military strike,\\President Mohammad Khatami said Saturday. \n",
      "true label:  World \n",
      "predicted label:  Sports\n"
     ]
    }
   ],
   "source": [
    "for sample in gen_data:\n",
    "    print(\"\\ntext: \", sample['text'], \"\\ntrue label: \", labels[int(sample['label'])], \"\\npredicted label: \", sample['output'])\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can see that the predicted labels are not very good. Let's check the performance of the pipeline on the validation examples.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Accuracy: 30.00%\n"
     ]
    }
   ],
   "source": [
    "#accuracy metrics\n",
    "from sklearn.metrics import accuracy_score\n",
    "\n",
    "# Calculate basic accuracy\n",
    "true_labels = list(map(lambda x: labels[int(x['label'])], gen_data))\n",
    "pred_labels = list(map(lambda x: x['output'], gen_data))\n",
    "accuracy = accuracy_score(true_labels, pred_labels)\n",
    "print(f\"Accuracy: {accuracy:.2%}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['Sci/Tech', 'World', 'Sci/Tech', 'World', 'Sci/Tech', 'Sci/Tech', 'Business', 'Sports', 'Business', 'Sci/Tech', 'World', 'Business', 'Sports', 'Sports', 'Business', 'Business', 'Sci/Tech', 'Sci/Tech', 'Business', 'Sci/Tech', 'Business', 'World', 'Sports', 'Sci/Tech', 'Business', 'Sports', 'Sports', 'Sci/Tech', 'Sports', 'World']\n",
      "['Sports', 'Sports', 'Sci/Tech', 'Sports', 'Sports', 'Business', 'Sports', 'Sports', 'Business', 'Sports', 'Sports', 'Sports', 'Sports', 'Sports', 'Sports', 'Sports', 'Business', 'Sports', 'Sports', 'Business', 'Sports', 'Sports', 'Sports', 'Business', 'Sports', 'Sports', 'Sports', 'Sports', 'Sports', 'Sports']\n"
     ]
    }
   ],
   "source": [
    "print(true_labels)\n",
    "print(pred_labels)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "Let's iterate on the prompt to improve the quality of the annotation.\n",
    "We will create a new annotation pipeline config that uses:\n",
    "\n",
    "- ICL examples to show the model what good annotations look like\n",
    "- principles to guide the annotation\n",
    "- a system prompt to provide overall instructions for the task"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's start by creating the principles and the system prompt for annotation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "task_description = \"annotate the following text with the appropriate category based on the context of the text.\"\n",
    "\n",
    "principles = \"\"\"Important guidelines for classification:\n",
    "- Focus on the main topic, not peripheral mentions\n",
    "- Look for specific keywords that indicate the category\n",
    "- Choose the most specific applicable category\n",
    "- Be consistent with similar types of questions\"\"\"\n",
    "\n",
    "system_prompt = \"\"\"\n",
    "You are an expert in annotation. You will be given a text and you need to annotate it with the appropriate category based on the context of the text.\n",
    "\"\"\"\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we will prepare the dataset by adding the ICL examples, the principles, and the system prompt to each row."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Prepare your dataset with all template variables\n",
    "validation_samples = validation_samples.map(lambda x: {\n",
    "    \"simple_task_description\": task_description,\n",
    "    \"principles\": principles,\n",
    "    \"system_prompt\": system_prompt,\n",
    "    \"questions_and_answers\": [\n",
    "        {\n",
    "            \"question\": icl_samples[0][\"text\"],\n",
    "            \"answer\": labels[int(icl_samples[0][\"label\"])]\n",
    "        },\n",
    "        {\n",
    "            \"question\": icl_samples[1][\"text\"],\n",
    "            \"answer\": labels[int(icl_samples[1][\"label\"])]\n",
    "        },\n",
    "        {\n",
    "            \"question\": icl_samples[2][\"text\"],\n",
    "            \"answer\": labels[int(icl_samples[2][\"label\"])]\n",
    "        }\n",
    "    ]\n",
    "})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Check the features of the dataset to make sure everything needed for the template is present."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'text': Value(dtype='string', id=None),\n",
       " 'label': ClassLabel(names=['World', 'Sports', 'Business', 'Sci/Tech'], id=None),\n",
       " 'simple_task_description': Value(dtype='string', id=None),\n",
       " 'principles': Value(dtype='string', id=None),\n",
       " 'system_prompt': Value(dtype='string', id=None),\n",
       " 'questions_and_answers': [{'answer': Value(dtype='string', id=None),\n",
       "   'question': Value(dtype='string', id=None)}]}"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "validation_samples.features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "detailed_annotation_config = {\n",
    "    \"system\": \"{{system_prompt}}\",\n",
    "    \"introduction\": \"Task Description: {{ simple_task_description }}\",\n",
    "    \"principles\": \"{{ principles }}\",\n",
    "    \"examples\": \"\"\"To better assist you with this task, here are some examples:\n",
    "{% if questions_and_answers is defined %}\n",
    "{% for sample in questions_and_answers %}\n",
    "[Start of Question]\n",
    "{{ sample.question }}\n",
    "[End of Question]\n",
    "\n",
    "[Start of Output]\n",
    "{{ sample.answer }}\n",
    "[End of Output]\n",
    "{% endfor %}\n",
    "{% else %}\n",
    "[Start of Question]\n",
    "{{ seed_question }}\n",
    "[End of Question]\n",
    "\n",
    "[Start of Output]\n",
    "{{ seed_response }}\n",
    "[End of Output]\n",
    "{% endif %}\"\"\",\n",
    "    \"generation\": \"\"\"Here is the query for annotation:\n",
    "  [Start of Question]\n",
    "  {{text}}\n",
    "  [End of Question]\"\"\",\n",
    "    \"start_tags\": [\"\"],\n",
    "    \"end_tags\": [\"\"]\n",
    "}\n",
    "\n",
    "with open('detailed_annotation_config.yaml', 'w') as f:\n",
    "    yaml.dump(detailed_annotation_config, f, default_flow_style=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Make sure that the `annotation_pipeline.yaml` file is pointing to the `detailed_annotation_config.yaml` file now."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [],
   "source": [
    "annotation_pipeline_yaml = yaml.safe_load(open('annotation_pipeline.yaml'))\n",
    "annotation_pipeline_yaml['blocks'][0]['config']['config_path'] = 'detailed_annotation_config.yaml'\n",
    "with open('annotation_pipeline.yaml', 'w') as f:\n",
    "    yaml.dump(annotation_pipeline_yaml, f, default_flow_style=False)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Run the pipeline again with the new configs, and check the results."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "text:  U2 pitches for Apple New iTunes ads airing during baseball games Tuesday will feature the advertising-shy Irish rockers. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  S African TV in beheading blunder Public broadcaster SABC apologises after news bulletin shows footage of American beheaded in Iraq. \n",
      "true label:  World \n",
      "predicted label:  World\n",
      "\n",
      "text:  A Cosmic Storm: When Galaxy Clusters Collide Astronomers have found what they are calling the perfect cosmic storm, a galaxy cluster pile-up so powerful its energy output is second only to the Big Bang. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Sci/Tech\n",
      "\n",
      "text:  West sets deadline for Iran to freeze uranium enrichment Four western countries set the scene yesterday for a showdown with Iran by demanding that it freeze its uranium enrichment activities immediately. \n",
      "true label:  World \n",
      "predicted label:  World\n",
      "\n",
      "text:  Computer Assoc. Cuts 800 Jobs Worldwide (AP) AP - Computer Associates International Inc. announced a restructuring plan Wednesday that would reduce its work force by 800 people worldwide, saving the business software maker  #36;70 million annually once the plan is fully implemented. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Business\n",
      "\n",
      "text:  CA Opens Utility Pricing for Mainframes Keeping its promise to migrate toward more flexible pricing for its software, Computer Associates (Quote, Chart) has unleashed Measured Workload Pricing for its mainframe management products. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Business\n",
      "\n",
      "text:  Economy builds steam in KC Fed district The economy continued to strengthen in September and early October in the Great Plains and Rocky Mountain regions covered by the Tenth Federal Reserve District, the Federal Reserve Bank of Kansas City said Wednesday. \n",
      "true label:  Business \n",
      "predicted label:  Business\n",
      "\n",
      "text:  Mutombo says he #39;s being traded to Rockets; will back up, mentor &lt;b&gt;...&lt;/b&gt; Dikembe Mutombo, 38, has agreed to a sign-and-trade deal that will send him from the Chicago Bulls to Houston in exchange for Eric Piatkowski, Adrian Griffin and Mike Wilks, the Houston Chronicle reports. \n",
      "true label:  Sports \n",
      "predicted label:  Business\n",
      "\n",
      "text:  RBC Centura CEO steps down RALEIGH, NC - The head of RBC Centura Bank has stepped down, and his successor will run the bank out of Raleigh rather than Rocky Mount, where the bank is based. \n",
      "true label:  Business \n",
      "predicted label:  Business\n",
      "\n",
      "text:  Plans put on hold for pro SCO website Infoworld reports that SCO #39;s briefly planned foray into Groklaw territory announced last month has been put on the back burner. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Business\n",
      "\n",
      "text:  Paris Marks Liberation Mindful of Collaboration With solemn commemorations, a ceremonial flag-raising at the Eiffel Tower and columns of 1940s-era tanks and army jeeps, Parisians on Wednesday marked the 60th anniversary  \n",
      "true label:  World \n",
      "predicted label:  World\n",
      "\n",
      "text:  Oracle acquisition of PeopleSoft leads flurry of deals NEW YORK (CBS.MW) -- US stocks closed higher Monday, with the Dow Jones Industrial Average ending at its best level in more than nine months amid better-than-expected economic data and merger-related optimism. \n",
      "true label:  Business \n",
      "predicted label:  Business\n",
      "\n",
      "text:  They #146;re in the wrong ATHENS -- Matt Emmons was focusing on staying calm. He should have been focusing on the right target. \n",
      "true label:  Sports \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Mularkey Sticking With Bledsoe As Bills QB (AP) AP - Mike Mularkey has a message to those clamoring for rookie quarterback J.P. Losman to replace Drew Bledsoe as Buffalo's starter. Not yet. \n",
      "true label:  Sports \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Greek membership of eurozone not in doubt BRUSSELS - Greece #39;s membership of the eurozone is not in doubt despite a damaging review of its budget data stretching back five years, a European Commission official said Monday. \n",
      "true label:  Business \n",
      "predicted label:  World\n",
      "\n",
      "text:  Some fear it's a passport to identity theft It's December 2005 and you're all set for Christmas in Vienna. You have your most fashionable cold-weather gear, right down to the red maple leaves embroidered on your jacket and backpack, to conceal your American citizenship from hostile denizens of Europe. \n",
      "true label:  Business \n",
      "predicted label:  World\n",
      "\n",
      "text:  U.S. Plans Crackdown on Piracy, Counterfeiting  WASHINGTON (Reuters) - The United States is cracking down  on the growing trade in counterfeit and pirated goods that  costs U.S. businesses hundreds of billions of dollar annually,  U.S. government and industry officials said on Monday. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Business\n",
      "\n",
      "text:  Cisco switch products target small business Company aggressively addresses smaller businesses with products that reduce the cost and complexity of operating a Cisco network. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Business\n",
      "\n",
      "text:  Icahn pushes harder to stop Mylan #39;s King acquisition PITTSBURGH Carl Icahn, the largest shareholder of Mylan Laboratories, is now threatening to push for new company directors to stop the generic drug maker #39;s four (B) billion-dollar takeover bid of King Pharmaceuticals. \n",
      "true label:  Business \n",
      "predicted label:  Business\n",
      "\n",
      "text:  Quantum Snags LTO Rival Certance After preaching the superiority of its DLT tape format for years, Quantum has up and bought Certance, a key maker of the rival LTO format. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Business\n",
      "\n",
      "text:  Black to Sue Hollinger Committee for C\\$1.1 Bln Fallen press baron Conrad Black will sue Hollinger International Inc. #39;s special committee and others for C\\$1.1 billion (\\$870 million) over a report that accused him of looting the company of millions of dollars. \n",
      "true label:  Business \n",
      "predicted label:  Business\n",
      "\n",
      "text:  Milosevic trial suspended The trial of Slobodan Milosevic has been suspended in Hague to allow the defense lawyers assigned to the former Yugoslav president more time to prepare their case. \n",
      "true label:  World \n",
      "predicted label:  World\n",
      "\n",
      "text:  Tremor shakes Athens, no damage ATHENS - An earthquake rattled Athens on Tuesday, including the Olympic venues in the Greek capital. The Athens Geodynamic Institute said that there was a tremor, but few other details were known, such as the magnitude or epicentre of the quake. \n",
      "true label:  Sports \n",
      "predicted label:  Sci/Tech\n",
      "\n",
      "text:  Bush Web Site Bars Overseas Visitors (washingtonpost.com) washingtonpost.com - The Bush-Cheney reelection campaign has barred people outside the United States from viewing its Web site following an electronic attack that took down the campaign's Internet address for six hours last week, according\\\\to computer security experts. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  World\n",
      "\n",
      "text:  Merger could affect Nextel Partners The proposed \\$35 billion merger of Sprint Corp. and Nextel Communications could mean changes for Kirkland-based Nextel Partners Inc. \n",
      "true label:  Business \n",
      "predicted label:  Business\n",
      "\n",
      "text:  TEXANS STAT CENTER After finally winning consecutive games, the Texans are a team in search of a challenge.  quot;I knew that was going to come up, quot; Texans coach Dom Capers said with a laugh. \n",
      "true label:  Sports \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  About-face for Heels Rashad McCants wasn #39;t thinking about last year at Kentucky. Jawad Williams said last year was last year. And Sean May was thinking more about his last game than North Carolina #39;s Jan. 3 defeat at Rupp Arena. \n",
      "true label:  Sports \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Web site pictures Photo iPod for holidays A Mac news site says iPods that display digital photos may be under Christmas trees this year. \n",
      "true label:  Sci/Tech \n",
      "predicted label:  Sci/Tech\n",
      "\n",
      "text:  Shanahan says he intends to honour his deal with Broncos Trying to defuse rumours he might be leaving soon, Denver Broncos coach Mike Shanahan said Thursday night he intends to honour the final four years of his contract. \n",
      "true label:  Sports \n",
      "predicted label:  Sports\n",
      "\n",
      "text:  Iran Says U.S. Lacks Options on Its Atomic Program (Reuters) Reuters - Washington has hit a dead-end over\\Iran's nuclear dossier, lacking enough proof to demand U.N.\\sanctions and too bogged down in Iraq for a military strike,\\President Mohammad Khatami said Saturday. \n",
      "true label:  World \n",
      "predicted label:  World\n"
     ]
    }
   ],
   "source": [
    "ctx = PipelineContext(client=client, model_family=\"mixtral\", model_id=teacher_model)\n",
    "# constructing the path with the 'annotation' directory explicitly\n",
    "current_dir = os.path.dirname(os.path.abspath(''))\n",
    "pipeline_yaml = os.path.join(current_dir, \"annotation\", \"annotation_pipeline.yaml\")\n",
    "\n",
    "annotation_pipe = Pipeline.from_file(ctx, pipeline_yaml)\n",
    "\n",
    "gen_data = annotation_pipe.generate(validation_samples)\n",
    "for sample in gen_data:\n",
    "    print(\"\\ntext: \", sample['text'], \"\\ntrue label: \", labels[int(sample['label'])], \"\\npredicted label: \", sample['output'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Accuracy: 60.00%\n"
     ]
    }
   ],
   "source": [
    "# Calculate basic accuracy\n",
    "true_labels = list(map(lambda x: labels[int(x['label'])], gen_data))\n",
    "pred_labels = list(map(lambda x: x['output'], gen_data))\n",
    "accuracy = accuracy_score(true_labels, pred_labels)\n",
    "print(f\"Accuracy: {accuracy:.2%}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Improve the quality of the ICL examples"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Accuracy has improved, but not enough. Let's improve the quality of the ICL examples by providing at least one example for each label, so the model sees every category it is expected to predict."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Number of ICL examples: 4\n",
      "Labels covered: ['Business', 'Sci/Tech', 'Sports', 'World']\n",
      "Missing labels: set()\n",
      "Original unlabeled samples: 467\n",
      "Remaining unlabeled samples: 463\n",
      "\n",
      "Selected examples:\n",
      "\n",
      "Category: Sports\n",
      "Text: Expectations Low for Georgia Basketball (AP) AP - Georgia is likely to have a dismal season on the basketball court....\n",
      "\n",
      "Category: Business\n",
      "Text: Company: Ameritrade Hldg Corp New With presidential election-related uncertainty safely in the past, retail investors have returned to the US stock market with gusto, and they #39;re likely to stay engaged, several trading companies said Friday....\n",
      "\n",
      "Category: World\n",
      "Text: For Arafat, Oslo Remained Symbol of Hope (AP) AP - Norway's capital could not have been further removed from the chaos and bloodshed of the Middle East. Yet it was as a result of top-secret meetings here that two veteran warriors decided it was time to talk peace....\n",
      "\n",
      "Category: Sci/Tech\n",
      "Text: Red Hat exec takes Sun to task on open source A top Red Hat executive has attacked the open-source credentials of its sometime business partner Sun Microsystems. In a Web log posting Thursday, Michael Tiemann, Red Hat #39;s vice president of open-source affairs ...\n"
     ]
    }
   ],
   "source": [
    "# First, get one example for each label\n",
    "icl_examples = []\n",
    "seen_labels = set()\n",
    "used_indices = set()  # Keep track of which indices we've used\n",
    "\n",
    "# Iterate through the dataset until we have an example for each label\n",
    "for idx, sample in enumerate(unlabeled_samples):\n",
    "    label_idx = int(sample['label'])\n",
    "    label_name = labels[label_idx]\n",
    "\n",
    "    # If we haven't seen this label yet, add it to our examples\n",
    "    if label_name not in seen_labels:\n",
    "        icl_examples.append({\n",
    "            \"question\": sample[\"text\"],\n",
    "            \"answer\": label_name\n",
    "        })\n",
    "        seen_labels.add(label_name)\n",
    "        used_indices.add(idx)\n",
    "\n",
    "    # Break if we have all labels\n",
    "    if len(seen_labels) == len(labels):\n",
    "        break\n",
    "\n",
    "# Remove the used examples from unlabeled_samples by selecting only the\n",
    "# unused indices, which sidesteps any index shifting from row deletion\n",
    "remaining_indices = [i for i in range(len(unlabeled_samples)) if i not in used_indices]\n",
    "unlabeled_samples = unlabeled_samples.select(remaining_indices)\n",
    "\n",
    "# Verify the results\n",
    "print(\"Number of ICL examples:\", len(icl_examples))\n",
    "print(\"Labels covered:\", sorted(list(seen_labels)))\n",
    "print(\"Missing labels:\", set(labels) - seen_labels)\n",
    "print(\"Original unlabeled samples:\", len(unlabeled_samples) + len(used_indices))\n",
    "print(\"Remaining unlabeled samples:\", len(unlabeled_samples))\n",
    "\n",
    "# Add these examples to your validation dataset\n",
    "validation_samples = validation_samples.map(lambda x: {\n",
    "    \"questions_and_answers\": icl_examples\n",
    "})\n",
    "\n",
    "# Print examples to verify quality\n",
    "print(\"\\nSelected examples:\")\n",
    "for example in icl_examples:\n",
    "    print(f\"\\nCategory: {example['answer']}\")\n",
    "    print(f\"Text: {example['question']}...\")  # The full text is printed; the trailing ellipsis is cosmetic"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's try the annotation pipeline with the new validation dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Accuracy: 73.33%\n"
     ]
    }
   ],
   "source": [
    "gen_data = annotation_pipe.generate(validation_samples)\n",
    "# Calculate basic accuracy\n",
    "true_labels = list(map(lambda x: labels[int(x['label'])], gen_data))\n",
    "pred_labels = list(map(lambda x: x['output'], gen_data))\n",
    "accuracy = accuracy_score(true_labels, pred_labels)\n",
    "print(f\"Accuracy: {accuracy:.2%}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we have a *much* better accuracy. We can now merge all of the components of the prompt and the ICL examples into a final input dataset for annotation, on the `unlabeled_samples` dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [],
   "source": [
    "#merge all the components of the prompt and the ICL examples into a final input dataset for annotation\n",
    "unlabeled_samples = unlabeled_samples.map(lambda x: {\n",
    "    \"system_prompt\": system_prompt,\n",
    "    \"simple_task_description\": task_description,\n",
    "    \"principles\": principles,\n",
    "    \"questions_and_answers\": icl_examples,\n",
    "})\n",
    "unlabeled_samples\n",
    "\n",
    "# Make sure the features are correct and are the same as the validation_samples features\n",
    "assert unlabeled_samples.features == validation_samples.features, \"Features are not the same\"\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Annotating the unlabeled dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 90,
   "metadata": {},
   "outputs": [],
   "source": [
    "gen_data = annotation_pipe.generate(unlabeled_samples)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Saving the results to JSONL format"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 91,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Map: 100%|██████████| 463/463 [00:00<00:00, 18593.16 examples/s]\n",
      "Creating json from Arrow format: 100%|██████████| 1/1 [00:00<00:00, 130.71ba/s]\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "882918"
      ]
     },
     "execution_count": 91,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# First rename the columns\n",
    "gen_data = gen_data.rename_column('label', 'true_label')\n",
    "gen_data = gen_data.rename_column('output', 'predicted_label')\n",
    "\n",
    "# Convert numeric labels to string labels if needed\n",
    "gen_data = gen_data.map(lambda x: {'true_label': labels[int(x['true_label'])]})\n",
    "\n",
    "# Save to JSONL format\n",
    "gen_data.to_json('annotation_results.jsonl', lines=True, orient='records')"
   ]
  },
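  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The saved JSONL can later be reloaded as a Hugging Face dataset with `load_dataset`. A minimal sketch, using a small stand-in file so it runs on its own (the demo rows and file name are illustrative; the same call works on `annotation_results.jsonl`):\n",
    "\n",
    "```python\n",
    "import json\n",
    "from datasets import load_dataset\n",
    "\n",
    "# Tiny stand-in with the same columns as the saved results\n",
    "rows = [\n",
    "    {'text': 'Stocks rallied on merger news', 'true_label': 'Business', 'predicted_label': 'Business'},\n",
    "    {'text': 'New iPod displays digital photos', 'true_label': 'Sci/Tech', 'predicted_label': 'Sci/Tech'}\n",
    "]\n",
    "with open('annotation_results_demo.jsonl', 'w') as f:\n",
    "    for row in rows:\n",
    "        f.write(json.dumps(row) + '\\n')\n",
    "\n",
    "# Reload the JSONL as a Dataset split\n",
    "reloaded = load_dataset('json', data_files='annotation_results_demo.jsonl', split='train')\n",
    "print(len(reloaded), reloaded.column_names)\n",
    "```\n"
   ]
  },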
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Calculate metrics"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 92,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Accuracy: 78.62%\n",
      "\n",
      "Per-class Metrics:\n",
      "Class\t\tPrecision\tRecall\t\tF1\t\tSupport\n",
      "----------------------------------------------------------------------\n",
      "World       \t0.91\t\t0.67\t\t0.77\t\t106\n",
      "Sports      \t0.87\t\t0.96\t\t0.91\t\t126\n",
      "Business    \t0.77\t\t0.66\t\t0.71\t\t105\n",
      "Sci/Tech    \t0.66\t\t0.82\t\t0.73\t\t126\n",
      "\n",
      "Overall Metrics:\n",
      "Macro Avg:\t0.80\t\t0.78\t\t0.78\n",
      "Weighted Avg:\t0.80\t\t0.79\t\t0.78\n",
      "\n",
      "Detailed Classification Report:\n",
      "              precision    recall  f1-score   support\n",
      "\n",
      "    Business       0.77      0.66      0.71       105\n",
      "    Sci/Tech       0.66      0.82      0.73       126\n",
      "      Sports       0.87      0.96      0.91       126\n",
      "       World       0.91      0.67      0.77       106\n",
      "\n",
      "    accuracy                           0.79       463\n",
      "   macro avg       0.80      0.78      0.78       463\n",
      "weighted avg       0.80      0.79      0.78       463\n",
      "\n",
      "\n",
      "Confusion Matrix:\n",
      "Labels: ['World', 'Sports', 'Business', 'Sci/Tech']\n",
      "[[ 71  10   6  19]\n",
      " [  1 121   4   0]\n",
      " [  0   2  69  34]\n",
      " [  6   6  11 103]]\n"
     ]
    }
   ],
   "source": [
    "# Import necessary libraries\n",
    "from sklearn.metrics import accuracy_score, precision_recall_fscore_support\n",
    "from sklearn.metrics import confusion_matrix, classification_report\n",
    "import json\n",
    "\n",
    "# Load predictions and true labels\n",
    "true_labels = []\n",
    "pred_labels = []\n",
    "\n",
    "with open('annotation_results.jsonl', 'r') as f:\n",
    "    for line in f:\n",
    "        data = json.loads(line)\n",
    "        true_labels.append(data['true_label'])\n",
    "        pred_labels.append(data['predicted_label'])\n",
    "\n",
    "# Calculate basic accuracy\n",
    "accuracy = accuracy_score(true_labels, pred_labels)\n",
    "print(f\"Accuracy: {accuracy:.2%}\")\n",
    "\n",
    "# Calculate precision, recall, and F1 score for each class\n",
    "precision, recall, f1, support = precision_recall_fscore_support(true_labels, pred_labels, average=None, labels=labels)\n",
    "\n",
    "# Print metrics for each class\n",
    "print(\"\\nPer-class Metrics:\")\n",
    "print(\"Class\\t\\tPrecision\\tRecall\\t\\tF1\\t\\tSupport\")\n",
    "print(\"-\" * 70)\n",
    "for i, label in enumerate(labels):\n",
    "    print(f\"{label:<12}\\t{precision[i]:.2f}\\t\\t{recall[i]:.2f}\\t\\t{f1[i]:.2f}\\t\\t{support[i]}\")\n",
    "\n",
    "# Calculate and print macro and weighted averages\n",
    "macro_precision, macro_recall, macro_f1, _ = precision_recall_fscore_support(true_labels, pred_labels, average='macro')\n",
    "weighted_precision, weighted_recall, weighted_f1, _ = precision_recall_fscore_support(true_labels, pred_labels, average='weighted')\n",
    "\n",
    "print(\"\\nOverall Metrics:\")\n",
    "print(f\"Macro Avg:\\t{macro_precision:.2f}\\t\\t{macro_recall:.2f}\\t\\t{macro_f1:.2f}\")\n",
    "print(f\"Weighted Avg:\\t{weighted_precision:.2f}\\t\\t{weighted_recall:.2f}\\t\\t{weighted_f1:.2f}\")\n",
    "\n",
    "# Print detailed classification report\n",
    "print(\"\\nDetailed Classification Report:\")\n",
    "print(classification_report(true_labels, pred_labels))\n",
    "\n",
    "# Create and print confusion matrix\n",
    "cm = confusion_matrix(true_labels, pred_labels, labels=labels)\n",
    "print(\"\\nConfusion Matrix:\")\n",
    "print(\"Labels:\", labels)\n",
    "print(cm)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The results are much better now compared to when we started. Looking at the confusion matrix, the model still struggles to separate the Business and Sci/Tech categories, and our prompting strategy can be further refined to help it distinguish them. Some strategies include:\n",
    "\n",
    "- Adding more specific principles for the Business and Sci/Tech categories\n",
    "- Adding more detailed examples for the Business and Sci/Tech categories\n",
    "- Adding hard examples (examples that are close to the decision boundary) to the ICL examples\n",
    "- Making the system prompt more specific about the Business and Sci/Tech categories\n",
    "- Adding a description for each of the categories in the generation or introduction component of the prompt\n"
   ]
  },
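  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an illustration of the last strategy above, category definitions could be folded into the `introduction` component of the prompt config. This is a hedged sketch: the `category_descriptions` dict and its wording are hypothetical, not part of SDG's API.\n",
    "\n",
    "```python\n",
    "# Hypothetical per-category definitions to help disambiguate Business vs Sci/Tech\n",
    "category_descriptions = {\n",
    "    'World': 'International news, diplomacy, and conflict',\n",
    "    'Sports': 'Athletic competitions, teams, players, and coaches',\n",
    "    'Business': 'Markets, earnings, mergers, and corporate finance',\n",
    "    'Sci/Tech': 'Science, software, hardware, and technology companies'\n",
    "}\n",
    "\n",
    "# Render the definitions as a bulleted block for the prompt\n",
    "description_block = '\\n'.join(f'- {name}: {desc}' for name, desc in category_descriptions.items())\n",
    "\n",
    "# Fold the block into the introduction component (Jinja placeholder left intact)\n",
    "introduction = (\n",
    "    'Task Description: {{ simple_task_description }}\\n'\n",
    "    'Category definitions:\\n' + description_block\n",
    ")\n",
    "print(introduction)\n",
    "```\n",
    "\n",
    "The resulting `introduction` string could then replace the one in `detailed_annotation_config` before re-running the pipeline."
   ]
  },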
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Conclusion"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this exercise:\n",
    "- We demonstrated how to build and use custom, composable pipelines with SDG\n",
    "- We demonstrated how to use SDG to annotate a dataset with a custom annotation pipeline\n",
    "- We demonstrated the basics of prompt engineering to improve the quality of the annotation\n",
    "- We learned the importance of using a diverse set of ICL examples that covers every label\n",
    "- We learned how to use task-specific metrics to understand how well the prompt performs with the model on the specific task"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "subsetenv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
