{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "7PXuEuAJtsT3"
   },
   "source": [
    "# Prerequisites"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ymlHxsGvtxfI"
   },
   "source": [
    "## Uninstall default Colab dependencies\n",
    "\n",
    "Here, we uninstall preinstalled Colab packages that cause version conflicts with the rLLM, VERL, and vLLM dependencies."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "DabAkH75hlTn"
   },
   "outputs": [],
   "source": [
    "!pip uninstall -y fastai albumentations albucore dopamine-rl bigframes \\\n",
    "  opencv-python opencv-python-headless spacy torchvision"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Abj2Gax0Zy6n"
   },
   "outputs": [],
   "source": [
    "%pip uninstall -y torch torchvision torchaudio numpy || true\n",
    "%pip uninstall -y gcsfs fsspec\n",
    "%pip uninstall -y opencv-python opencv-contrib-python opencv-python-headless thinc spacy\n",
    "# vLLM's Python dependencies are reinstalled below with compatible versions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "bmcb9Y3_8-M4"
   },
   "outputs": [],
   "source": [
    "!pip uninstall -y gymnasium browsergym-core browsergym"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "nO5wEKmiaJsf"
   },
   "source": [
    "## Installing\n",
    "Now we install the dependencies required to train our solver-judge workflow!\n",
    "- Colab may prompt you to restart the session. Make sure to do so before running the subsequent cells."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "yQhtMzeX4N1e"
   },
   "outputs": [],
   "source": [
    "!pip install --no-cache-dir \"vllm==0.8.5.post1\" \"torch==2.6.0\" \"torchvision==0.21.0\" \"torchaudio==2.6.0\" \"tensordict==0.6.2\" torchdata"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "4NS0kCLv4SjB"
   },
   "outputs": [],
   "source": [
    "!pip install \"transformers[hf_xet]>=4.51.0\" accelerate datasets peft hf-transfer \\\n",
    "    \"numpy<2.0.0\" \"pyarrow>=15.0.0\" pandas \\\n",
    "    ray[default] codetiming hydra-core pylatexenc qwen-vl-utils wandb dill pybind11 liger-kernel mathruler \\\n",
    "    pytest py-spy pyext pre-commit ruff tensorboard\n",
    "\n",
    "!pip install \"nvidia-ml-py>=12.560.30\" \"fastapi[standard]>=0.115.0\" \"optree>=0.13.0\" \"pydantic>=2.9\" \"grpcio>=1.62.1\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "gd5MMPhd4gGx"
   },
   "outputs": [],
   "source": [
    "!wget -q https://github.com/Dao-AILab/flash-attention/releases/download/v2.8.3/flash_attn-2.8.3+cu12torch2.4cxx11abiFALSE-cp312-cp312-linux_x86_64.whl\n",
    "!pip install -q --no-cache-dir flash_attn-2.8.3+cu12torch2.4cxx11abiFALSE-cp312-cp312-linux_x86_64.whl"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "SKNgvWer6cIA"
   },
   "outputs": [],
   "source": [
    "!wget -q https://github.com/flashinfer-ai/flashinfer/releases/download/v0.2.2.post1/flashinfer_python-0.2.2.post1+cu124torch2.6-cp38-abi3-linux_x86_64.whl\n",
    "!pip install -q --no-cache-dir flashinfer_python-0.2.2.post1+cu124torch2.6-cp38-abi3-linux_x86_64.whl"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Eovqk4aw6iMS"
   },
   "outputs": [],
   "source": [
    "!pip install opencv-python\n",
    "!pip install opencv-fixer && \\\n",
    "    python -c \"from opencv_fixer import AutoFix; AutoFix()\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true,
    "id": "CupFPg1dif_n"
   },
   "outputs": [],
   "source": [
    "%cd /content\n",
    "!git clone --recurse-submodules https://github.com/rllm-org/rllm.git src\n",
    "%cd /content/src\n",
    "!git switch v0.2\n",
    "!git submodule update --init --recursive\n",
    "\n",
    "# Use the VERL that ships inside the repo\n",
    "%pip install -q -e ./verl\n",
    "# Install rLLM itself\n",
    "%pip install -q -e ."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "q1wKdmTG0IBu"
   },
   "source": [
    "# Train Solver and Judge Workflow\n",
    "\n",
    "rLLM provides an agent workflow engine for training different workflows with reinforcement learning. You do not have to interact with the engine directly; here we will go over how to use `AgentTrainer` with your own workflow logic."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "w1b4bLEs-rKn"
   },
   "source": [
    "## Solver and Judge definition\n",
    "\n",
    "Here, we'll define a custom workflow, `SolverJudgeWorkflow`, for this tutorial.\n",
    "\n",
    "---\n",
    "\n",
    "### Solver Class\n",
    "The `Solver` class generates n candidate solutions to the input problem in parallel. It returns a list of n trajectories (without rewards).\n",
    "\n",
    "### Judge Class\n",
    "The `Judge` class selects the best solution from among the candidates generated by the solver. It returns a single trajectory (without a reward) containing the selected solution.\n",
    "\n",
    "**Note:** Both classes query the model using the `RolloutEngine`.\n"
   ]
  },
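  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before the full workflow code, here is a minimal, self-contained sketch of the `<answer>...</answer>` parsing convention it relies on (the helper name `parse_answer` is just for this illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import re\n",
    "\n",
    "\n",
    "def parse_answer(response: str) -> str:\n",
    "    # Extract the text between <answer> and </answer> tags, case-insensitively,\n",
    "    # allowing the answer to span multiple lines (re.DOTALL).\n",
    "    match = re.search(r\"<answer>(.*?)</answer>\", response, re.IGNORECASE | re.DOTALL)\n",
    "    if match:\n",
    "        return f\"<answer>{match.group(1).strip()}</answer>\"\n",
    "    return \"No solution found\"\n",
    "\n",
    "\n",
    "print(parse_answer(\"Let me think... <answer> (44 + 19) + 35 </answer>\"))  # <answer>(44 + 19) + 35</answer>\n",
    "print(parse_answer(\"no tags here\"))  # No solution found"
   ]
  },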
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Y1RlXXBacE0w"
   },
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "os.environ[\"VLLM_USE_V1\"] = \"1\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "mTSYuuXPjmE2"
   },
   "outputs": [],
   "source": [
    "import asyncio\n",
    "import re\n",
    "\n",
    "from rllm.agents.agent import Episode, Step, Trajectory\n",
    "from rllm.engine import ModelOutput, RolloutEngine\n",
    "from rllm.rewards.reward_fn import RewardFunction\n",
    "from rllm.workflows.workflow import Workflow\n",
    "\n",
    "\n",
    "class Solver:\n",
    "    def __init__(self, rollout_engine: RolloutEngine, **kwargs):\n",
    "        self.rollout_engine = rollout_engine\n",
    "\n",
    "    async def generate_solution(self, problem: str) -> Trajectory:\n",
    "        messages = [{\"role\": \"user\", \"content\": f\"{problem}. Output the final answer within <answer>...</answer>\"}]\n",
    "        output: ModelOutput = await self.rollout_engine.get_model_response(messages)\n",
    "        return Trajectory(\n",
    "            name=\"solver\",\n",
    "            steps=[\n",
    "                Step(\n",
    "                    chat_completions=messages + [{\"role\": \"assistant\", \"content\": output.content, \"reasoning\": output.reasoning}],\n",
    "                    thought=output.reasoning,\n",
    "                    action=self._parse_solver_response(output.content),\n",
    "                    model_output=output,\n",
    "                )\n",
    "            ],\n",
    "        )\n",
    "\n",
    "    async def generate_solutions(self, problem: str, n_solutions: int = 2) -> list[Trajectory]:\n",
    "        tasks = [asyncio.create_task(self.generate_solution(problem)) for _ in range(n_solutions)]\n",
    "        return await asyncio.gather(*tasks)\n",
    "\n",
    "    def _parse_solver_response(self, response: str) -> str:\n",
    "        answer_match = re.search(r\"<answer>(.*?)</answer>\", response, re.IGNORECASE | re.DOTALL)\n",
    "        if answer_match:\n",
    "            return f\"<answer>{answer_match.group(1).strip()}</answer>\"\n",
    "        else:\n",
    "            return \"No solution found\"\n",
    "\n",
    "\n",
    "class Judge:\n",
    "    def __init__(self, rollout_engine: RolloutEngine, **kwargs):\n",
    "        self.rollout_engine = rollout_engine\n",
    "\n",
    "    async def judge_solutions(self, problem: str, solutions: list[str]) -> Trajectory:\n",
    "        messages = [{\"role\": \"user\", \"content\": self._create_judge_prompt(problem, solutions)}]\n",
    "        output: ModelOutput = await self.rollout_engine.get_model_response(messages)\n",
    "        return Trajectory(\n",
    "            name=\"judge\",\n",
    "            steps=[\n",
    "                Step(\n",
    "                    chat_completions=messages + [{\"role\": \"assistant\", \"content\": output.content, \"reasoning\": output.reasoning}],\n",
    "                    thought=output.reasoning,\n",
    "                    action=self._parse_judge_response(output.content, solutions),\n",
    "                    model_output=output,\n",
    "                )\n",
    "            ],\n",
    "        )\n",
    "\n",
    "    def _parse_judge_response(self, response: str, solutions: list[str]) -> str:\n",
    "        answer_match = re.search(r\"<answer>(.*?)</answer>\", response, re.IGNORECASE | re.DOTALL)\n",
    "        if answer_match:\n",
    "            answer_text = answer_match.group(1).strip()\n",
    "            try:\n",
    "                solution_index = int(answer_text)\n",
    "                return solutions[solution_index - 1]\n",
    "            except (ValueError, IndexError):\n",
    "                return \"\"\n",
    "        return \"\"\n",
    "\n",
    "    def _create_judge_prompt(self, problem: str, solutions: list[str]) -> str:\n",
    "        \"\"\"Create a prompt for the judge to evaluate solutions.\"\"\"\n",
    "        prompt = f\"\"\"You are an expert verifier. Given a countdown problem and multiple solution attempts, select a correct solution.\n",
    "Problem:\n",
    "{problem}\n",
    "Solutions to evaluate:\n",
    "\"\"\"\n",
    "        for i, solution in enumerate(solutions, 1):\n",
    "            prompt += f\"\\nSolution {i}:\\n{solution}\\n\"\n",
    "\n",
    "        prompt += \"\"\"\n",
    "A correct solution must satisfy the following criteria:\n",
    "1. The solution uses only the given numbers.\n",
    "2. Each number is used exactly once.\n",
    "3. Only basic arithmetic operations (+, -, *, /) are used.\n",
    "4. The calculation results in the target number.\n",
    "5. The final answer is clearly marked within <answer>...</answer> tags.\n",
    "Output the index of your selected solution within <answer>...</answer> tags, e.g., <answer>1</answer> for the first solution, <answer>2</answer> for the second solution, etc. If multiple solutions are correct, output the index of the first correct solution.\"\"\"\n",
    "        return prompt\n",
    "\n",
    "\n",
    "class SolverJudgeWorkflow(Workflow):\n",
    "    def __init__(self, rollout_engine: RolloutEngine, n_solutions: int = 2, reward_function: RewardFunction = None, **kwargs):\n",
    "        super().__init__(rollout_engine, **kwargs)\n",
    "        self.n_solutions = n_solutions\n",
    "        self.reward_function = reward_function\n",
    "        self.solver = Solver(rollout_engine)\n",
    "        self.judge = Judge(rollout_engine)\n",
    "\n",
    "    async def run(self, task: dict, uid: str, **kwargs) -> Episode:\n",
    "        self.reset(task, uid)\n",
    "        problem = task[\"question\"]\n",
    "\n",
    "        # Step 1: Solver generates multiple solutions in parallel\n",
    "        solver_trajectories = await self.solver.generate_solutions(problem, self.n_solutions)\n",
    "\n",
    "        # Assign rewards to solver trajectories\n",
    "        solutions = []\n",
    "        for traj in solver_trajectories:\n",
    "            solution = traj.steps[0].action\n",
    "            solutions.append(solution)\n",
    "            reward = self.reward_function(task, solution).reward\n",
    "            traj.steps[0].reward = reward\n",
    "\n",
    "        # Step 2: Judge selects the best solution\n",
    "        judge_trajectory = await self.judge.judge_solutions(problem, solutions)\n",
    "        selected_solution = judge_trajectory.steps[0].action\n",
    "\n",
    "        # Evaluate the selected solution\n",
    "        reward_result = self.reward_function(task, selected_solution)\n",
    "        judge_trajectory.steps[0].reward = reward_result.reward\n",
    "        is_correct = reward_result.is_correct\n",
    "\n",
    "        # Compute metrics\n",
    "        solver_acc = sum(traj.steps[0].reward for traj in solver_trajectories) / len(solver_trajectories)\n",
    "        judge_acc = int(is_correct)\n",
    "\n",
    "        # Step 3: Return episode with multiple trajectories\n",
    "        return Episode(\n",
    "            id=uid,\n",
    "            task=task,\n",
    "            trajectories=[*solver_trajectories, judge_trajectory],\n",
    "            is_correct=is_correct,\n",
    "            metrics={\"solver_acc\": solver_acc, \"judge_acc\": judge_acc},\n",
    "        )"
   ]
  },
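  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the judge's criteria concrete, here is an illustrative, self-contained checker for a countdown answer (the name `is_valid_countdown_solution` is just for this sketch; it is not the reward function rLLM uses):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import ast\n",
    "import operator\n",
    "import re\n",
    "\n",
    "# Map AST operator nodes to the four allowed arithmetic operations.\n",
    "_OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul, ast.Div: operator.truediv}\n",
    "\n",
    "\n",
    "def _eval_node(node):\n",
    "    # Recursively evaluate an AST restricted to numbers and + - * /.\n",
    "    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):\n",
    "        return node.value\n",
    "    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:\n",
    "        return _OPS[type(node.op)](_eval_node(node.left), _eval_node(node.right))\n",
    "    raise ValueError(\"disallowed expression\")\n",
    "\n",
    "\n",
    "def is_valid_countdown_solution(expr: str, nums: list[int], target: int) -> bool:\n",
    "    # Criteria 1-2: the numbers appearing in expr match the given multiset exactly.\n",
    "    if sorted(int(n) for n in re.findall(r\"\\d+\", expr)) != sorted(nums):\n",
    "        return False\n",
    "    try:\n",
    "        # Criteria 3-4: only basic arithmetic is allowed, and the result must hit the target.\n",
    "        value = _eval_node(ast.parse(expr, mode=\"eval\").body)\n",
    "    except (ValueError, SyntaxError, ZeroDivisionError):\n",
    "        return False\n",
    "    return abs(value - target) < 1e-6\n",
    "\n",
    "\n",
    "print(is_valid_countdown_solution(\"(44 + 19) + 35\", [44, 19, 35], 98))  # True\n",
    "print(is_valid_countdown_solution(\"44 + 19\", [44, 19, 35], 98))  # False"
   ]
  },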
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "X64aEG4YbNK3"
   },
   "source": [
    "## Dataset Creation\n",
    "\n",
    "We fetch the countdown task dataset from Hugging Face and register the splits with rLLM's `DatasetRegistry`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "_U2hWKr8aZn4"
   },
   "outputs": [],
   "source": [
    "import random\n",
    "\n",
    "from datasets import load_dataset\n",
    "\n",
    "from rllm.data.dataset import DatasetRegistry\n",
    "\n",
    "\n",
    "def prepare_countdown_data():\n",
    "    \"\"\"\n",
    "    Prepare the countdown task dataset from HuggingFace.\n",
    "    Take 1024 examples as test set, remaining as training set.\n",
    "    Also create stage 2 and stage 3 training sets (50k examples each in a full run).\n",
    "    \"\"\"\n",
    "    # Load the countdown dataset\n",
    "    dataset = load_dataset(\"Jiayi-Pan/Countdown-Tasks-3to4\", split=\"train\")\n",
    "\n",
    "    # Split dataset: 1024 examples for test, rest for training\n",
    "    test_size = 1024\n",
    "    total_size = len(dataset)\n",
    "\n",
    "    # Create train/test split\n",
    "    test_dataset = dataset.select(range(test_size))\n",
    "    train_dataset = dataset.select(range(test_size, total_size))\n",
    "\n",
    "    def preprocess_fn(example, idx):\n",
    "        \"\"\"\n",
    "        Convert countdown task format to math problem format.\n",
    "        Example: target=98, nums=[44, 19, 35] becomes a math word problem.\n",
    "        \"\"\"\n",
    "        target = example[\"target\"]\n",
    "        nums = example[\"nums\"]\n",
    "\n",
    "        # Format as a math problem\n",
    "        nums_str = \", \".join(map(str, nums))\n",
    "        question = f\"Using the numbers {nums_str}, find a way to reach the target number {target}. You can use basic arithmetic operations (+, -, *, /) and each number can only be used once. Show your step-by-step calculation and output the final answer within <answer>...</answer>, for example <answer> (1 + 2) / 3 </answer>.\"\n",
    "\n",
    "        return {\n",
    "            \"question\": question,\n",
    "            \"ground_truth\": str(target),\n",
    "            \"data_source\": \"countdown\",\n",
    "            \"target\": target,\n",
    "            \"nums\": nums,\n",
    "        }\n",
    "\n",
    "    # Apply preprocessing\n",
    "    train_dataset = train_dataset.map(preprocess_fn, with_indices=True)\n",
    "    test_dataset = test_dataset.map(preprocess_fn, with_indices=True)\n",
    "\n",
    "    # Create stage 2 and stage 3 training datasets\n",
    "    train_size = len(train_dataset)\n",
    "    stage_size = 5  # tiny for this demo; use 50000 for a full run\n",
    "\n",
    "    # Ensure we have enough data for both stages\n",
    "    if train_size < 2 * stage_size:\n",
    "        print(f\"Warning: Training set has only {train_size} examples, but need {2 * stage_size} for both stages\")\n",
    "        stage_size = min(stage_size, train_size // 2)\n",
    "\n",
    "    # Shuffle and select indices for stage 2 and stage 3\n",
    "    all_indices = list(range(train_size))\n",
    "    random.shuffle(all_indices)\n",
    "\n",
    "    stage2_indices = all_indices[:stage_size]\n",
    "    stage3_indices = all_indices[stage_size : 2 * stage_size]\n",
    "\n",
    "    # Create stage datasets\n",
    "    stage2_dataset = train_dataset.select(stage2_indices)\n",
    "    stage3_dataset = train_dataset.select(stage3_indices)\n",
    "\n",
    "    # Register datasets\n",
    "    train_dataset = DatasetRegistry.register_dataset(\"countdown\", train_dataset, \"train\")\n",
    "    test_dataset = DatasetRegistry.register_dataset(\"countdown\", test_dataset, \"test\")\n",
    "    stage2_dataset = DatasetRegistry.register_dataset(\"countdown\", stage2_dataset, \"stage2_train\")\n",
    "    stage3_dataset = DatasetRegistry.register_dataset(\"countdown\", stage3_dataset, \"stage3_train\")\n",
    "\n",
    "    print(f\"Train dataset size: {len(train_dataset)}\")\n",
    "    print(f\"Test dataset size: {len(test_dataset)}\")\n",
    "    print(f\"Stage 2 train dataset size: {len(stage2_dataset)}\")\n",
    "    print(f\"Stage 3 train dataset size: {len(stage3_dataset)}\")\n",
    "\n",
    "    return train_dataset, test_dataset, stage2_dataset, stage3_dataset\n",
    "\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    train_dataset, test_dataset, stage2_dataset, stage3_dataset = prepare_countdown_data()\n",
    "    print(\"Train dataset path:\", train_dataset.get_data_path())\n",
    "    print(\"Test dataset path:\", test_dataset.get_data_path())\n",
    "    print(\"Stage 2 train dataset path:\", stage2_dataset.get_data_path())\n",
    "    print(\"Stage 3 train dataset path:\", stage3_dataset.get_data_path())\n",
    "\n",
    "    # Print a sample\n",
    "    print(\"\\nSample train example:\")\n",
    "    print(train_dataset[0])\n",
    "    print(\"\\nSample stage 2 train example:\")\n",
    "    print(stage2_dataset[0])\n",
    "    print(\"\\nSample stage 3 train example:\")\n",
    "    print(stage3_dataset[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "z154PbvcbaiU"
   },
   "source": [
    "## Training configuration\n",
    "In this section, we configure the trainer: the model, batch size, Wandb API key for logging, and the rollout engine.\n",
    "We use Hydra to compose the base `agent_ppo_trainer` config, then use OmegaConf to merge in our overrides, including the PPO-specific settings.\n",
    "\n",
    "For now, LoRA is disabled; it can be enabled by setting `lora_rank` to a positive number."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "XJoyk-Wibe0x"
   },
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "os.chdir(\"/content/src\")\n",
    "os.environ[\"WANDB_API_KEY\"] = \"YOUR WANDB API KEY!!!\"\n",
    "\n",
    "\n",
    "from rllm.data.dataset import DatasetRegistry\n",
    "from omegaconf import OmegaConf\n",
    "from rllm.trainer.agent_trainer import AgentTrainer\n",
    "from rllm.rewards.countdown_reward import countdown_reward_fn\n",
    "from hydra import compose, initialize_config_module\n",
    "from hydra.core.global_hydra import GlobalHydra\n",
    "import torch\n",
    "\n",
    "\n",
    "# Detect available GPUs and CPUs\n",
    "num_gpus = torch.cuda.device_count()\n",
    "num_cpus = os.cpu_count() or 8\n",
    "print(f\"Detected {num_gpus} GPUs and {num_cpus} CPUs\")\n",
    "\n",
    "# Scale configuration based on available hardware\n",
    "is_single_gpu = num_gpus == 1\n",
    "batch_size = 1 if is_single_gpu else (64 if num_gpus >= 8 else 16)\n",
    "n_parallel = 1 if is_single_gpu else (128 if num_gpus >= 8 else 16)\n",
    "\n",
    "\n",
    "with initialize_config_module(version_base=None, config_module=\"rllm.trainer.config\"):\n",
    "    base_config = compose(config_name=\"agent_ppo_trainer\")\n",
    "\n",
    "overrides = OmegaConf.create(\n",
    "    {\n",
    "        \"data\": {\n",
    "            \"train_batch_size\": batch_size,\n",
    "            \"max_prompt_length\": 1024,\n",
    "            \"max_response_length\": 1024,\n",
    "            \"dataloader_num_workers\": 0,\n",
    "        },\n",
    "        \"actor_rollout_ref\": {\n",
    "            \"model\": {\n",
    "                \"path\": \"Qwen/Qwen3-0.6B\",\n",
    "                \"enable_gradient_checkpointing\": True,\n",
    "                \"lora_rank\": 0,  # Set to positive value to enable LoRA\n",
    "                \"lora_alpha\": 2,\n",
    "                \"use_remove_padding\": True,\n",
    "            },\n",
    "            \"actor\": {\n",
    "                \"optim\": {\"lr\": 1e-6},\n",
    "                \"loss_agg_mode\": \"seq-mean-token-mean\",\n",
    "                \"use_dynamic_bsz\": True,\n",
    "                \"ppo_max_token_len_per_gpu\": 32768,\n",
    "                \"ppo_mini_batch_size\": batch_size,\n",
    "                \"use_kl_loss\": False,\n",
    "                \"kl_loss_coef\": 0.001,\n",
    "                \"kl_loss_type\": \"low_var_kl\",\n",
    "                \"entropy_coeff\": 0.0,\n",
    "                \"clip_ratio_low\": 0.2,\n",
    "                \"clip_ratio_high\": 0.28,\n",
    "                \"ulysses_sequence_parallel_size\": 1,\n",
    "                \"fsdp_config\": {\n",
    "                    \"param_offload\": is_single_gpu,\n",
    "                    \"optimizer_offload\": is_single_gpu,\n",
    "                },\n",
    "            },\n",
    "            \"rollout\": {\n",
    "                \"name\": \"vllm\",\n",
    "                \"mode\": \"async\",\n",
    "                \"enforce_eager\": False,\n",
    "                \"temperature\": 0.6,\n",
    "                \"gpu_memory_utilization\": 0.5,\n",
    "                \"tensor_model_parallel_size\": 1,\n",
    "                \"n\": 1,\n",
    "                \"val_kwargs\": {\n",
    "                    \"n\": 1,\n",
    "                    \"temperature\": 0.6,\n",
    "                    \"top_p\": 0.95,\n",
    "                },\n",
    "                \"load_format\": \"auto\",\n",
    "            },\n",
    "            \"ref\": {\n",
    "                \"fsdp_config\": {\n",
    "                    \"param_offload\": is_single_gpu,\n",
    "                },\n",
    "            },\n",
    "            \"hybrid_engine\": True,\n",
    "        },\n",
    "        \"algorithm\": {\n",
    "            \"adv_estimator\": \"grpo\",\n",
    "        },\n",
    "        \"rllm\": {\n",
    "            \"workflow\": {\n",
    "                \"use_workflow\": True,\n",
    "                \"n_parallel_tasks\": n_parallel,\n",
    "                \"retry_limit\": 1,\n",
    "            },\n",
    "            \"stepwise_advantage\": {\n",
    "                \"enable\": True,\n",
    "                \"mode\": \"per_step\",\n",
    "            },\n",
    "            \"compact_filtering\": {\n",
    "                \"enable\": True,\n",
    "                \"mask_max_prompt_length_exceeded\": True,\n",
    "                \"mask_max_response_length_exceeded\": True,\n",
    "                \"mask_max_turns_exceeded\": False,\n",
    "                \"mask_timeout\": True,\n",
    "            },\n",
    "            \"rejection_sample\": {\n",
    "                \"enable\": False,\n",
    "                \"multiplier\": 1.0,\n",
    "            },\n",
    "        },\n",
    "        \"trainer\": {\n",
    "            \"critic_warmup\": 0,\n",
    "            \"project_name\": \"solver-judge-workflow\",\n",
    "            \"experiment_name\": \"countdown-solver-judge\",\n",
    "            \"total_epochs\": 1,\n",
    "            \"n_gpus_per_node\": num_gpus if num_gpus > 0 else 1,\n",
    "            \"nnodes\": 1,\n",
    "            \"logger\": [\"console\", \"wandb\"],  # drop \"wandb\" if you don't have an API key\n",
    "            \"val_before_train\": True,\n",
    "            \"test_freq\": 5,\n",
    "            \"save_freq\": 1000,\n",
    "            \"default_hdfs_dir\": None,\n",
    "        },\n",
    "    }\n",
    ")\n",
    "\n",
    "\n",
    "train_config = OmegaConf.merge(base_config, overrides)\n",
    "\n",
    "\n",
    "# Load datasets\n",
    "train_dataset = DatasetRegistry.load_dataset(\"countdown\", \"train\")\n",
    "test_dataset = DatasetRegistry.load_dataset(\"countdown\", \"test\")\n",
    "\n",
    "# Create trainer\n",
    "trainer = AgentTrainer(\n",
    "    workflow_class=SolverJudgeWorkflow,\n",
    "    workflow_args={\n",
    "        \"n_solutions\": 2,\n",
    "        \"reward_function\": countdown_reward_fn,\n",
    "    },\n",
    "    config=train_config,\n",
    "    train_dataset=train_dataset,\n",
    "    val_dataset=test_dataset,\n",
    ")\n",
    "\n",
    "print(\"Trainer ready!\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "-UtTZ9pAdX6E"
   },
   "outputs": [],
   "source": [
    "trainer.train()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "9AXofFUXdy6q"
   },
   "outputs": [],
   "source": [
    "!pip show vllm | grep Version"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Bs6G2y9BWeRm"
   },
   "source": [
    "# Train Visualization\n",
    "\n",
    "Since we logged our training metrics to Wandb, we can use the following code to plot the results. Make sure to replace `wandb_run` with the path of the actual run created by the training above (in the form \"entity/project/run_id\")."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "n7dLKzvYWf3Z"
   },
   "outputs": [],
   "source": [
    "# Install wandb if not already installed\n",
    "!pip install -q wandb\n",
    "\n",
    "import wandb\n",
    "import matplotlib.pyplot as plt\n",
    "import pandas as pd\n",
    "\n",
    "# Login to wandb (will prompt for API key if not logged in)\n",
    "wandb.login()\n",
    "\n",
    "# Initialize wandb API\n",
    "api = wandb.Api()\n",
    "\n",
    "# Fetch the specific run\n",
    "wandb_run = \"YOUR WANDB RUN\"\n",
    "run = api.run(wandb_run)\n",
    "\n",
    "# Get run history (metrics over time)\n",
    "history = run.history()\n",
    "\n",
    "# Print available columns\n",
    "print(\"Available metrics:\")\n",
    "print(history.columns.tolist())\n",
    "print(f\"\\nTotal steps: {len(history)}\")\n",
    "\n",
    "# Create visualizations for all numeric columns\n",
    "numeric_cols = history.select_dtypes(include=[\"float64\", \"int64\"]).columns.tolist()\n",
    "# Remove _step and _timestamp columns\n",
    "numeric_cols = [col for col in numeric_cols if not col.startswith(\"_\")]\n",
    "\n",
    "if numeric_cols:\n",
    "    # Calculate number of subplots needed\n",
    "    n_metrics = len(numeric_cols)\n",
    "    n_cols = 2\n",
    "    n_rows = (n_metrics + n_cols - 1) // n_cols\n",
    "\n",
    "    fig, axes = plt.subplots(n_rows, n_cols, figsize=(15, 5 * n_rows))\n",
    "    axes = axes.flatten() if n_metrics > 1 else [axes]\n",
    "\n",
    "    for idx, metric in enumerate(numeric_cols):\n",
    "        ax = axes[idx]\n",
    "        ax.plot(history[metric], linewidth=2)\n",
    "        ax.set_xlabel(\"Step\")\n",
    "        ax.set_ylabel(metric)\n",
    "        ax.set_title(f\"{metric} over time\")\n",
    "        ax.grid(True, alpha=0.3)\n",
    "\n",
    "    # Hide unused subplots\n",
    "    for idx in range(n_metrics, len(axes)):\n",
    "        axes[idx].axis(\"off\")\n",
    "\n",
    "    plt.tight_layout()\n",
    "    plt.show()\n",
    "else:\n",
    "    print(\"No numeric metrics found to plot\")\n",
    "\n",
    "# Print summary statistics\n",
    "print(\"\\nRun Summary:\")\n",
    "for key, value in run.summary.items():\n",
    "    print(f\"{key}: {value}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "2RUq3KZtax3d"
   },
   "source": [
    "# Inference\n",
    "We can also run the solver-judge workflow for inference against a vLLM server."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "COVewCK6-nim"
   },
   "source": [
    "## vLLM inference"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "qG419XszigZ3"
   },
   "outputs": [],
   "source": [
    "import time\n",
    "import requests\n",
    "\n",
    "# Configuration\n",
    "MODEL_NAME = \"Qwen/Qwen2.5-0.5B-Instruct\"\n",
    "PORT = 30000\n",
    "\n",
    "\n",
    "def is_server_running():\n",
    "    try:\n",
    "        response = requests.get(f\"http://localhost:{PORT}/v1/models\", timeout=2)\n",
    "        return response.status_code == 200\n",
    "    except requests.exceptions.RequestException:\n",
    "        return False\n",
    "\n",
    "\n",
    "# Start or check server\n",
    "if is_server_running():\n",
    "    print(f\"Server already running on port {PORT}\")\n",
    "else:\n",
    "    print(f\"Starting vLLM with {MODEL_NAME}...\")\n",
    "\n",
    "    # Start vLLM server in background\n",
    "    !nohup python -m vllm.entrypoints.openai.api_server \\\n",
    "        --model {MODEL_NAME} \\\n",
    "        --port {PORT} \\\n",
    "        --max-model-len 4096 \\\n",
    "        > /dev/null 2>&1 &\n",
    "\n",
    "    print(\"Server starting in background\")\n",
    "\n",
    "# Save config\n",
    "SERVER_CONFIG = {\"model_name\": MODEL_NAME, \"base_url\": f\"http://localhost:{PORT}/v1\", \"port\": PORT}\n",
    "\n",
    "print(f\"\\nServer URL: {SERVER_CONFIG['base_url']}\")"
   ]
  },
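  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once the server is up, we can query it through vLLM's OpenAI-compatible chat completions route. This is a minimal sketch; the prompt and the `build_chat_request` helper are just for illustration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "\n",
    "PORT = 30000\n",
    "MODEL_NAME = \"Qwen/Qwen2.5-0.5B-Instruct\"\n",
    "\n",
    "\n",
    "def build_chat_request(prompt: str) -> dict:\n",
    "    # Standard OpenAI-compatible chat-completions payload, as served by vLLM.\n",
    "    return {\n",
    "        \"model\": MODEL_NAME,\n",
    "        \"messages\": [{\"role\": \"user\", \"content\": prompt}],\n",
    "        \"temperature\": 0.6,\n",
    "        \"max_tokens\": 512,\n",
    "    }\n",
    "\n",
    "\n",
    "payload = build_chat_request(\"Using the numbers 44, 19, 35, reach the target 98. Output the final answer within <answer>...</answer>\")\n",
    "try:\n",
    "    response = requests.post(f\"http://localhost:{PORT}/v1/chat/completions\", json=payload, timeout=120)\n",
    "    print(response.json()[\"choices\"][0][\"message\"][\"content\"])\n",
    "except requests.exceptions.RequestException:\n",
    "    print(\"Server not reachable; run the vLLM server cell above first\")"
   ]
  },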
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "8hWtWdM8s9ei"
   },
   "source": [
    "# Misc"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "MkErEhObbALr"
   },
   "source": [
    "If training fails with an error about insufficient GPUs, run the following cell to shut down the Ray instance, then restart the trainer above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "bIrokMCJa7fH"
   },
   "outputs": [],
   "source": [
    "import ray\n",
    "\n",
    "ray.shutdown()  # reset Ray if it was previously initialized"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
