{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Skythought Scoring: Unified APIs for data curation, training and evaluation\n",
    "\n",
    "This notebook provides a quick overview of the `Scorer` API in Skythought. A `Scorer` is a lightweight class that scores a model's response for a given task. Skythought provides a set of pre-defined scorers for verifiable domains (math, coding, etc.), making it easy to apply consistent scoring across curation, training, and evaluation."
   ]
  },
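  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before diving in, here is the shape of the interface. The sketch below is a minimal, standalone illustration: the `Scorer` stand-in and the `ExactMatchScorer` are hypothetical, assuming the real base class requires a `score(row) -> dict` method and makes instances callable. In Skythought itself you would subclass `Scorer` from `skythought.evals.scoring` instead.\n",
    "\n",
    "```python\n",
    "# Stand-in for the Scorer base class (an assumption for illustration).\n",
    "class Scorer:\n",
    "    SCORE_COLUMN = \"score\"\n",
    "\n",
    "    def __call__(self, row: dict) -> dict:\n",
    "        return self.score(row)\n",
    "\n",
    "    def score(self, row: dict) -> dict:\n",
    "        raise NotImplementedError\n",
    "\n",
    "\n",
    "# A toy scorer: the response passes if it exactly matches the answer.\n",
    "class ExactMatchScorer(Scorer):\n",
    "    SCORE_COLUMN = \"exact_match_score\"\n",
    "\n",
    "    def __init__(self, response_column: str, answer_column: str):\n",
    "        self.response_column = response_column\n",
    "        self.answer_column = answer_column\n",
    "\n",
    "    def score(self, row: dict) -> dict:\n",
    "        passed = row[self.response_column].strip() == row[self.answer_column].strip()\n",
    "        return {self.SCORE_COLUMN: passed}\n",
    "\n",
    "\n",
    "scorer = ExactMatchScorer(\"response\", \"answer\")\n",
    "print(scorer({\"response\": \"42\", \"answer\": \"42\"}))  # {'exact_match_score': True}\n",
    "```\n",
    "\n",
    "Because a scorer returns a plain dict keyed by `SCORE_COLUMN`, it can be dropped into dataset `map` calls or reward functions without glue code."
   ]
  },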
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Installation and Setup\n",
    "\n",
    "First, make sure you've installed the latest changes from source:\n",
    "\n",
    "#### Installing from source\n",
    "\n",
    "\n",
    "```shell\n",
    "# Clone the repository\n",
    "git clone https://github.com/NovaSky-AI/SkyThought.git\n",
    "cd SkyThought\n",
    "\n",
    "# Create and activate a virtual environment (using uv here)\n",
    "uv venv --python 3.10\n",
    "source .venv/bin/activate\n",
    "\n",
    "# Install the package in editable mode\n",
    "uv pip install -e .\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Example Usage during Data Curation\n",
    "\n",
    "Here's an example recipe for data curation:\n",
    "\n",
    "1. Create a dataset combining the \"hard\" subset of NuminaMath-CoT and the GSM8K dataset.\n",
    "2. Perform rejection sampling with the base model:\n",
    "    a. Obtain a response for each sample and filter out the incorrect responses.\n",
    "    b. For scoring, combine two functions: a correctness check for math responses (here, `MathEqualScorer`) and a format scorer that verifies the model is adhering to instructions.\n",
    "\n",
    "\n",
    "```python\n",
    "import ray\n",
    "from ray.data.llm import build_llm_processor, vLLMEngineProcessorConfig\n",
    "from datasets import load_dataset\n",
    "from skythought.evals.scoring import Scorer, MathEqualScorer\n",
    "import re\n",
    "import os \n",
    "\n",
    "SYSTEM_PROMPT = \"Think step-by-step and provide the final answer in \\\\boxed{}\"\n",
    "MAX_TOKENS = 2048 \n",
    "\n",
    "class FormatScorer(Scorer):\n",
    "    SCORE_COLUMN = \"format_score\"\n",
    "    def __init__(self, response_column):\n",
    "        self.response_column = response_column\n",
    "\n",
    "    def score(self, row):\n",
    "        # use re.DOTALL so the patterns match across newlines\n",
    "        pat1 = r\"<think>(.*)</think>\"\n",
    "        pat2 = r\"\\\\boxed\\{(.*)\\}\"\n",
    "        text = row[self.response_column]\n",
    "        match1 = re.search(pat1, text, re.DOTALL)\n",
    "        match2 = re.search(pat2, text, re.DOTALL)\n",
    "        # pass only if both patterns are found\n",
    "        passed = bool(match1 and match2)\n",
    "        return {self.SCORE_COLUMN: passed}\n",
    "\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "\n",
    "    # limit the number of samples per dataset for testing\n",
    "    num_samples = 20\n",
    "\n",
    "    save_dir = \"my_results_dir\"\n",
    "    \n",
    "    numina_hf = load_dataset(\"AI-MO/NuminaMath-CoT\", split=\"train\")\n",
    "    gsm8k_hf = load_dataset(\"openai/gsm8k\", \"main\", split=\"train\")\n",
    "    \n",
    "    # filter hard problems and rename to match GSM8K's format\n",
    "    ds1 = ray.data.from_huggingface(numina_hf) \\\n",
    "        .filter(expr=\"source == 'hard'\")\\\n",
    "        .rename_columns({\"problem\": \"question\", \"solution\": \"answer\"}) \\\n",
    "        .drop_columns([\"source\"]).limit(num_samples)\n",
    "\n",
    "    ds2 = ray.data.from_huggingface(gsm8k_hf).limit(num_samples)\n",
    "\n",
    "    ds = ds1.union(ds2)\n",
    "\n",
    "    llm = build_llm_processor(\n",
    "        vLLMEngineProcessorConfig(\n",
    "            model=\"meta-llama/Meta-Llama-3.1-8B-Instruct\",\n",
    "            engine_kwargs=dict(\n",
    "                tensor_parallel_size=2\n",
    "            ),\n",
    "            batch_size=64,\n",
    "            concurrency=2,\n",
    "        ),\n",
    "        preprocess=lambda row: dict(\n",
    "            messages=[\n",
    "                {\"role\": \"system\", \"content\": SYSTEM_PROMPT},\n",
    "                {\"role\": \"user\", \"content\": row[\"question\"]},\n",
    "            ],\n",
    "            sampling_params=dict(\n",
    "                temperature=0,\n",
    "                max_tokens=MAX_TOKENS,\n",
    "            ),\n",
    "        )\n",
    "    )\n",
    "    # generates responses and saves them in the \"generated_text\" column\n",
    "    ds = llm(ds)\n",
    "\n",
    "    ds = ds.map(\n",
    "        MathEqualScorer,\n",
    "        fn_constructor_kwargs=dict(\n",
    "            response_column=\"generated_text\", answer_column=\"answer\"\n",
    "        ),\n",
    "        concurrency=5\n",
    "    )\n",
    "\n",
    "    ds = ds.map(\n",
    "        FormatScorer,\n",
    "        fn_constructor_kwargs=dict(\n",
    "            response_column=\"generated_text\"\n",
    "        ),\n",
    "        concurrency=5\n",
    "    )\n",
    "\n",
    "    ds = ds.filter(expr=\"math_equal_score and format_score\")\n",
    "    \n",
    "    ds.write_parquet(os.path.abspath(save_dir))\n",
    "\n",
    "```"
   ]
  },
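  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For intuition about the correctness check above, here is a simplified, hypothetical stand-in for what a scorer like `MathEqualScorer` does: extract the final `\\boxed{}` answer from the response and compare it against the reference. This sketch uses plain string equality; the real scorer checks mathematical equivalence.\n",
    "\n",
    "```python\n",
    "import re\n",
    "\n",
    "def extract_boxed(text):\n",
    "    # return the contents of the last \\\\boxed{...} in the text, if any\n",
    "    matches = re.findall(r\"\\\\boxed\\{([^}]*)\\}\", text)\n",
    "    return matches[-1] if matches else None\n",
    "\n",
    "def math_equal(response, answer):\n",
    "    # simplified: literal comparison of extracted answers; the real\n",
    "    # MathEqualScorer checks mathematical equivalence instead\n",
    "    pred = extract_boxed(response)\n",
    "    ref = extract_boxed(answer) or answer.strip()\n",
    "    return pred is not None and pred.strip() == ref.strip()\n",
    "\n",
    "print(math_equal(\"The result is \\\\boxed{42}\", \"\\\\boxed{42}\"))  # True\n",
    "print(math_equal(\"I think it's 42\", \"\\\\boxed{42}\"))  # False\n",
    "```"
   ]
  },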
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Example Usage During Training\n",
    "\n",
    "Below is an example of creating a custom scorer for training on the dataset used in TULU-3's RLVR stage (a mix of GSM8K, IFEval, and MATH)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```python\n",
    "...\n",
    "from skythought.evals.scoring import MathVerifyScorer, GSM8KScorer, IFEvalScorer, Scorer\n",
    "\n",
    "# Custom Scoring function for a mix of GSM8K, MATH and IFEval \n",
    "class MyScorer(Scorer):\n",
    "    SCORE_COLUMN = \"score\"\n",
    "\n",
    "    def __init__(self, source_column, response_column, output_column):\n",
    "        self.source_column = source_column\n",
    "        self.response_column = response_column\n",
    "        self.output_column = output_column\n",
    "        self.gsm8k = GSM8KScorer(response_column, output_column)\n",
    "        self.ifeval = IFEvalScorer(response_column, output_column)\n",
    "        self.math = MathVerifyScorer(response_column, output_column)\n",
    "\n",
    "    def score(self, row):\n",
    "        # dispatch to the right scorer based on the sample's source dataset\n",
    "        source = row[self.source_column]\n",
    "        if source == \"gsm8k\":\n",
    "            return {self.SCORE_COLUMN: self.gsm8k(row)}\n",
    "        elif source == \"math\":\n",
    "            return {self.SCORE_COLUMN: self.math(row)}\n",
    "        elif source == \"ifeval\":\n",
    "            return {self.SCORE_COLUMN: self.ifeval(row)}\n",
    "        else:\n",
    "            raise ValueError(f\"Unknown source: {source}\")\n",
    "\n",
    "def main(args):\n",
    "    dataset_args, training_args = parse_args(args)\n",
    "    ...\n",
    "    train_dataset = prepare_dataset(train_dataset, tokenizer)\n",
    "    eval_dataset = prepare_dataset(eval_dataset, tokenizer)\n",
    "    # assume the trainer provides inputs as a single dict; if not, you can customize the scorer's interface\n",
    "    # use `.score` or the `__call__` interface to get the scores\n",
    "    reward_function = MyScorer(\"id\", \"response\", \"ground_truth\")\n",
    "```"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
