{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "## 步骤1: 安装必要的依赖包\n",
    "\n",
    "首先，我们需要安装Unsloth和vLLM。这些是进行高效模型微调所必需的工具：\n",
    "\n",
    "- **Unsloth**: 一个专门用于快速微调大语言模型的库，支持LoRA和QLoRA\n",
    "- **vLLM**: 一个高性能的大语言模型推理引擎\n",
    "\n",
    "注意：`--no-deps`参数用于避免依赖冲突，确保安装指定版本的包。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# pip install --no-deps unsloth vllm==0.8.5.post1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 步骤2: 加载预训练模型和分词器\n",
    "\n",
    "在这一步中，我们将：\n",
    "\n",
    "1. **导入必要的库**: FastModel是Unsloth提供的快速模型加载接口\n",
    "2. **设置参数**: 定义最大序列长度，这影响模型能处理的文本长度\n",
    "3. **加载模型**: 从本地路径加载预训练的Gemma3-4B模型\n",
    "4. **配置精度**: 使用16位精度进行训练，平衡效果和显存使用\n",
    "\n",
    "**重要参数说明**：\n",
    "- `max_seq_length=1024`: 模型能处理的最大token数量\n",
    "- `load_in_4bit=False, load_in_8bit=False`: 不使用量化加载，保持全精度\n",
    "- `full_finetuning=False`: 使用LoRA而不是全参数微调"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==((====))==  Unsloth 2025.6.11: Fast Gemma3 patching. Transformers: 4.52.4. vLLM: 0.8.5.post1.\n",
      "   \\\\   /|    NVIDIA A800-SXM4-80GB. Num GPUs = 1. Max memory: 79.325 GB. Platform: Linux.\n",
      "O^O/ \\_/ \\    Torch: 2.6.0+cu124. CUDA: 8.0. CUDA Toolkit: 12.4. Triton: 3.2.0\n",
      "\\        /    Bfloat16 = TRUE. FA [Xformers = None. FA2 = True]\n",
      " \"-____-\"     Free license: http://github.com/unslothai/unsloth\n",
      "Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!\n",
      "Unsloth: QLoRA and full finetuning all not selected. Switching to 16bit LoRA.\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "36ab76cf579349e6823ea88e6136316d",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.\n"
     ]
    }
   ],
   "source": [
    "# 导入必要的库\n",
    "from unsloth import FastModel  # Unsloth的快速模型加载接口\n",
    "import torch  # PyTorch深度学习框架\n",
    "\n",
    "# 设置最大序列长度\n",
    "# 这个参数决定了模型能处理的最大文本长度（以token为单位）\n",
    "max_seq_length = 1024\n",
    "\n",
    "# 加载预训练模型和分词器\n",
    "# 这里我们加载Gemma3-4B的指令微调版本\n",
    "model, tokenizer = FastModel.from_pretrained(\n",
    "    model_name = \"google/gemma-3-4b-it\",  # 模型路径\n",
    "    max_seq_length = max_seq_length,  # 最大序列长度\n",
    "    load_in_4bit = False,   # 不使用4位量化，保持精度\n",
    "    load_in_8bit = False,   # 不使用8位量化，保持精度\n",
    "    full_finetuning = False,  # 使用LoRA微调，不进行全参数微调\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "## 步骤3: 配置LoRA（Low-Rank Adaptation）\n",
    "\n",
    "LoRA是一种高效的微调技术，它不直接修改原始模型权重，而是添加低秩适应层。这样可以：\n",
    "\n",
    "1. **节省显存**: 只训练少量新增参数\n",
    "2. **保持原模型**: 原始权重不变，易于管理\n",
    "3. **提高效率**: 训练速度更快\n",
    "\n",
    "**关键参数解释**：\n",
    "- `r=8`: LoRA的秩，控制适应层的大小。越大精度越高但可能过拟合\n",
    "- `lora_alpha=8`: LoRA的缩放因子，通常设置为等于或略大于r\n",
    "- `lora_dropout=0`: Dropout率，这里设为0\n",
    "- `finetune_attention_modules=True`: 微调注意力模块，对GRPO特别重要\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Unsloth: Making `model.base_model.model.model.language_model` require gradients\n"
     ]
    }
   ],
   "source": [
    "# 配置LoRA（Low-Rank Adaptation）参数\n",
    "# 将基础模型转换为PEFT（Parameter Efficient Fine-Tuning）模型\n",
    "model = FastModel.get_peft_model(\n",
    "    model,\n",
    "    # 层级配置：决定哪些层参与微调\n",
    "    finetune_vision_layers     = False, # 关闭视觉层微调（仅文本任务）\n",
    "    finetune_language_layers   = True,  # 开启语言层微调（必须）\n",
    "    finetune_attention_modules = True,  # 开启注意力模块微调（对GRPO很重要）\n",
    "    finetune_mlp_modules       = True,  # 开启MLP模块微调（建议保持开启）\n",
    "\n",
    "    # LoRA核心参数\n",
    "    r = 8,              # LoRA的秩：控制适应层大小，值越大精度越高但可能过拟合\n",
    "    lora_alpha = 8,     # LoRA的缩放因子：建议设置为r的值或略大\n",
    "    lora_dropout = 0,   # LoRA的dropout率：防止过拟合，这里设为0\n",
    "    bias = \"none\",      # 偏置项设置：不训练偏置项\n",
    "    random_state = 3407, # 随机种子：确保结果可复现\n",
    ")"
   ]
  },
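  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check on what `r = 8` buys, a LoRA adapter on one weight matrix of shape `(d_out, d_in)` adds `r * d_in + d_out * r` trainable parameters for its two low-rank factors. A minimal sketch with an illustrative 2560x2560 projection (the dimensions here are assumptions, not read from the model):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def lora_param_count(d_in, d_out, r):\n",
    "    # LoRA factorizes the weight update as B @ A,\n",
    "    # with A of shape (r, d_in) and B of shape (d_out, r)\n",
    "    return r * d_in + d_out * r\n",
    "\n",
    "# Illustrative 2560x2560 projection with rank 8 (assumed shape)\n",
    "print(lora_param_count(2560, 2560, 8))  # 40960"
   ]
  },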
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "## 步骤4: 加载和探索GSM8K数据集\n",
    "\n",
    "GSM8K是一个包含小学数学问题的数据集，非常适合测试模型的推理能力。我们将：\n",
    "\n",
    "1. **加载数据集**: 从本地路径加载GSM8K训练数据\n",
    "2. **探索数据结构**: 查看问题和答案的格式\n",
    "3. **理解答案格式**: GSM8K的答案包含推理过程和最终答案（用####标记）\n",
    "\n",
    "让我们先看看数据集的基本信息和样例：\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 加载GSM8K数据集\n",
    "from datasets import load_dataset\n",
    "\n",
    "# 从本地路径加载GSM8K数据集的训练集\n",
    "# GSM8K是一个包含小学数学推理问题的数据集\n",
    "dataset = load_dataset(\"openai/gsm8k\", \"main\", split = \"train\")\n",
    "\n",
    "# 查看数据集基本信息\n",
    "print(f\"数据集大小: {len(dataset)} 条记录\")\n",
    "print(f\"数据集特征: {dataset.features}\")\n",
    "dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?'"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 查看第一个样例的问题\n",
    "# 这是一个典型的小学数学问题\n",
    "print(\"问题示例:\")\n",
    "print(dataset[0][\"question\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Natalia sold 48/2 = <<48/2=24>>24 clips in May.\\nNatalia sold 48+24 = <<48+24=72>>72 clips altogether in April and May.\\n#### 72'"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 查看第一个样例的答案\n",
    "# 注意答案格式：包含推理过程和最终答案（####后面是最终答案）\n",
    "print(\"答案示例:\")\n",
    "print(dataset[0][\"answer\"])\n",
    "print(\"\\n可以看到：\")\n",
    "print(\"1. 答案包含详细的推理步骤\")\n",
    "print(\"2. 最终答案在####符号后面\")\n",
    "print(\"3. 这种格式有助于模型学习推理过程\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'72'"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 定义函数提取最终答案\n",
    "# GSM8K数据集中，最终答案位于####符号之后\n",
    "def extract_hash_answer(text):\n",
    "    \"\"\"\n",
    "    从GSM8K答案中提取最终数值答案\n",
    "    \n",
    "    Args:\n",
    "        text (str): 包含推理过程和最终答案的完整文本\n",
    "    \n",
    "    Returns:\n",
    "        str or None: 提取的最终答案，如果没有####标记则返回None\n",
    "    \"\"\"\n",
    "    if \"####\" not in text: \n",
    "        return None\n",
    "    # 分割文本，取####后面的部分并去除空格\n",
    "    return text.split(\"####\")[1].strip()\n",
    "\n",
    "# 测试提取函数\n",
    "final_answer = extract_hash_answer(dataset[0][\"answer\"])\n",
    "print(f\"提取的最终答案: {final_answer}\")\n",
    "\n",
    "# 验证提取结果\n",
    "final_answer"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "## 步骤5: 设计输出格式和系统提示词\n",
    "\n",
    "为了让模型学会按照特定格式输出答案，我们需要：\n",
    "\n",
    "1. **定义格式标记**: 设置开始和结束标记来界定不同部分\n",
    "2. **创建系统提示词**: 指导模型如何结构化输出\n",
    "3. **确保格式一致性**: 训练过程中检查格式合规性\n",
    "\n",
    "我们设计的格式包含两个部分：\n",
    "- **推理过程**: 放在`<start_working_out>`和`<end_working_out>`之间\n",
    "- **最终答案**: 放在`<SOLUTION>`和`</SOLUTION>`之间\n",
    "\n",
    "这种结构化输出有助于：\n",
    "- 评估模型的推理质量\n",
    "- 方便提取最终答案\n",
    "- 提高输出的可读性\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'You are given a problem.\\nThink about the problem and provide your working out.\\nPlace it between <start_working_out> and <end_working_out>.\\nThen, provide your solution between <SOLUTION></SOLUTION>'"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 定义输出格式的标记符号\n",
    "# 这些标记帮助我们识别和评估模型输出的不同部分\n",
    "reasoning_start = \"<start_working_out>\"  # 推理过程开始标记\n",
    "reasoning_end   = \"<end_working_out>\"    # 推理过程结束标记\n",
    "solution_start = \"<SOLUTION>\"            # 最终答案开始标记\n",
    "solution_end = \"</SOLUTION>\"             # 最终答案结束标记\n",
    "\n",
    "# 创建系统提示词\n",
    "# 这个提示词指导模型按照我们期望的格式输出答案\n",
    "system_prompt = f\"\"\"You are given a problem.\n",
    "Think about the problem and provide your working out.\n",
    "Place it between {reasoning_start} and {reasoning_end}.\n",
    "Then, provide your solution between {solution_start}{solution_end}\"\"\"\n",
    "\n",
    "print(\"系统提示词内容:\")\n",
    "print(system_prompt)\n",
    "print(\"\\n这个提示词告诉模型:\")\n",
    "print(\"1. 需要思考问题\")\n",
    "print(\"2. 将推理过程放在指定标记之间\")\n",
    "print(\"3. 将最终答案放在SOLUTION标记之间\")\n",
    "\n",
    "system_prompt"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "## 步骤6: 转换数据集格式\n",
    "\n",
    "现在我们需要将原始的GSM8K数据转换为适合GRPO训练的格式：\n",
    "\n",
    "1. **创建对话格式**: 将每个问题转换为系统消息+用户消息的对话格式\n",
    "2. **提取标准答案**: 使用之前定义的函数提取最终答案\n",
    "3. **构建训练样本**: 每个样本包含提示（prompt）和标准答案（answer）\n",
    "\n",
    "转换后的格式：\n",
    "- `prompt`: 包含系统消息和用户问题的对话列表\n",
    "- `answer`: 提取的数值答案，用于评估模型输出的正确性\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "eec34d39908e4e48a57db4c1a1ada812",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Map:   0%|          | 0/7473 [00:00<?, ? examples/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "{'question': 'Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?',\n",
       " 'answer': '72',\n",
       " 'prompt': [{'content': 'You are given a problem.\\nThink about the problem and provide your working out.\\nPlace it between <start_working_out> and <end_working_out>.\\nThen, provide your solution between <SOLUTION></SOLUTION>',\n",
       "   'role': 'system'},\n",
       "  {'content': 'Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?',\n",
       "   'role': 'user'}]}"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 转换数据集格式\n",
    "# 将原始数据转换为对话格式，便于模型训练\n",
    "dataset = dataset.map(lambda x: {\n",
    "    # 构建对话prompt，包含系统提示和用户问题\n",
    "    \"prompt\" : [\n",
    "        {\"role\": \"system\", \"content\": system_prompt},  # 系统消息：指导输出格式\n",
    "        {\"role\": \"user\",   \"content\": x[\"question\"]},  # 用户消息：具体的数学问题\n",
    "    ],\n",
    "    # 提取标准答案，用于后续的奖励计算\n",
    "    \"answer\": extract_hash_answer(x[\"answer\"]),\n",
    "})\n",
    "\n",
    "print(\"转换后的数据格式示例:\")\n",
    "print(\"1. prompt包含系统提示和用户问题\")\n",
    "print(\"2. answer是提取的数值答案\")\n",
    "print(f\"3. 数据集大小保持不变: {len(dataset)} 条\")\n",
    "\n",
    "# 查看转换后的第一个样本\n",
    "dataset[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "## 步骤7: 设计奖励函数系统\n",
    "\n",
    "GRPO的核心是通过奖励函数来指导模型学习。我们将设计4个奖励函数来评估模型输出的不同方面：\n",
    "\n",
    "1. **格式完全匹配** (`match_format_exactly`): 检查输出是否严格遵循格式\n",
    "2. **格式近似匹配** (`match_format_approximately`): 检查格式标记的出现情况\n",
    "3. **答案正确性** (`check_answer`): 验证提取的答案是否正确\n",
    "4. **数值提取** (`check_numbers`): 检查是否能正确提取数值\n",
    "\n",
    "### 7.1 首先定义正则表达式来匹配期望的格式\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 导入正则表达式库\n",
    "import re\n",
    "\n",
    "# 定义正则表达式来匹配期望的输出格式\n",
    "# 这个正则表达式确保模型输出包含所有必需的标记并按正确顺序排列\n",
    "match_format = re.compile(\n",
    "    rf\"^[\\s]{{0,}}\"      # 开头可以有任意数量的空白字符\n",
    "    rf\"{reasoning_start}.+?{reasoning_end}.*?\"  # 推理过程部分（非贪婪匹配）\n",
    "    rf\"{solution_start}(.+?){solution_end}\"     # 解决方案部分（捕获组获取答案）\n",
    "    rf\"[\\s]{{0,}}$\",     # 结尾可以有任意数量的空白字符\n",
    "    flags = re.MULTILINE | re.DOTALL  # 多行模式，.匹配换行符\n",
    ")\n",
    "\n",
    "print(\"正则表达式说明:\")\n",
    "print(\"1. 匹配从<start_working_out>到<end_working_out>的推理过程\")\n",
    "print(\"2. 匹配从<SOLUTION>到</SOLUTION>的最终答案\")\n",
    "print(\"3. 捕获SOLUTION标记内的内容作为答案\")\n",
    "print(\"4. 允许前后有空白字符\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<re.Match object; span=(0, 71), match='<start_working_out>Let me think!<end_working_out>>"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 测试正则表达式\n",
    "# 用一个简单的示例来验证正则表达式是否能正确匹配格式\n",
    "test_text = \"<start_working_out>Let me think!<end_working_out>\" + \"<SOLUTION>2</SOLUTION>\"\n",
    "\n",
    "match_result = match_format.search(test_text)\n",
    "\n",
    "print(\"测试文本:\", test_text)\n",
    "print(\"匹配结果:\", match_result)\n",
    "if match_result:\n",
    "    print(\"提取的答案:\", match_result.group(1))\n",
    "    print(\"✓ 正则表达式工作正常\")\n",
    "else:\n",
    "    print(\"✗ 正则表达式匹配失败\")\n",
    "\n",
    "match_result"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "### 7.2 奖励函数1: 精确格式匹配\n",
    "\n",
    "这个函数检查模型输出是否完全符合我们定义的格式要求。如果格式完全正确，给予最高奖励（3.0分）。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "def match_format_exactly(completions, **kwargs):\n",
    "    \"\"\"\n",
    "    奖励函数1: 检查输出是否严格遵循指定格式\n",
    "    \n",
    "    Args:\n",
    "        completions: 模型生成的完成文本列表\n",
    "        **kwargs: 其他参数（未使用）\n",
    "    \n",
    "    Returns:\n",
    "        list: 每个完成文本的奖励分数列表\n",
    "    \"\"\"\n",
    "    scores = []\n",
    "    for completion in completions:\n",
    "        score = 0\n",
    "        response = completion[0][\"content\"]\n",
    "        \n",
    "        # 如果输出完全匹配期望格式，给予高分奖励\n",
    "        if match_format.search(response) is not None: \n",
    "            score += 3.0\n",
    "            \n",
    "        scores.append(score)\n",
    "    \n",
    "    return scores\n",
    "\n",
    "print(\"奖励函数1说明:\")\n",
    "print(\"- 检查输出是否包含完整的推理过程和解决方案格式\")\n",
    "print(\"- 格式正确: +3.0分\")\n",
    "print(\"- 格式不正确: 0分\")"
   ]
  },
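  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sanity check of the rule above, using hand-written toy completions (assumed data, not real model outputs): rebuild the format regex locally and award 3.0 only when it matches."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import re\n",
    "\n",
    "# Toy sanity check (assumed data): rebuild the format regex and score\n",
    "# two hand-written completions the same way match_format_exactly does\n",
    "reasoning_start, reasoning_end = \"<start_working_out>\", \"<end_working_out>\"\n",
    "solution_start, solution_end = \"<SOLUTION>\", \"</SOLUTION>\"\n",
    "pattern = re.compile(\n",
    "    rf\"^[\\s]{{0,}}{reasoning_start}.+?{reasoning_end}.*?\"\n",
    "    rf\"{solution_start}(.+?){solution_end}[\\s]{{0,}}$\",\n",
    "    flags = re.MULTILINE | re.DOTALL,\n",
    ")\n",
    "\n",
    "completions = [\n",
    "    [{\"content\": f\"{reasoning_start}48/2 = 24, 48+24 = 72{reasoning_end}{solution_start}72{solution_end}\"}],\n",
    "    [{\"content\": \"The answer is 72.\"}],  # missing markers -> no reward\n",
    "]\n",
    "scores = [3.0 if pattern.search(c[0][\"content\"]) else 0.0 for c in completions]\n",
    "print(scores)  # [3.0, 0.0]"
   ]
  },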
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "### 7.3 奖励函数2: 近似格式匹配\n",
    "\n",
    "这个函数检查格式标记的出现次数，鼓励模型使用正确的标记，但对标记过多进行惩罚。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "def match_format_approximately(completions, **kwargs):\n",
    "    \"\"\"\n",
    "    奖励函数2: 检查格式标记的出现次数\n",
    "    \n",
    "    这个函数更宽松，检查各个格式标记是否恰好出现1次。\n",
    "    如果某个标记出现1次，获得奖励；如果出现0次或多次，会被惩罚。\n",
    "    \n",
    "    Args:\n",
    "        completions: 模型生成的完成文本列表\n",
    "        **kwargs: 其他参数（未使用）\n",
    "    \n",
    "    Returns:\n",
    "        list: 每个完成文本的奖励分数列表\n",
    "    \"\"\"\n",
    "    scores = []\n",
    "    for completion in completions:\n",
    "        score = 0\n",
    "        response = completion[0][\"content\"]\n",
    "        \n",
    "        # 检查每个标记的出现次数，理想情况下每个标记应该恰好出现1次\n",
    "        score += 0.5 if response.count(reasoning_start) == 1 else -0.5\n",
    "        score += 0.5 if response.count(reasoning_end)   == 1 else -0.5\n",
    "        score += 0.5 if response.count(solution_start)  == 1 else -0.5\n",
    "        score += 0.5 if response.count(solution_end)    == 1 else -0.5\n",
    "        \n",
    "        scores.append(score)\n",
    "    return scores\n",
    "\n",
    "print(\"奖励函数2说明:\")\n",
    "print(\"- 检查每个格式标记的出现次数\")\n",
    "print(\"- 每个标记出现1次: +0.5分\")\n",
    "print(\"- 每个标记出现0次或多次: -0.5分\")\n",
    "print(\"- 总分范围: -2.0 到 +2.0\")"
   ]
  },
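  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A toy sanity check of the counting rule, again with hand-written strings (assumed data): +0.5 per marker that appears exactly once, -0.5 otherwise."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hand-written examples for the counting rule: +0.5 per marker seen\n",
    "# exactly once, -0.5 otherwise\n",
    "tags = [\"<start_working_out>\", \"<end_working_out>\", \"<SOLUTION>\", \"</SOLUTION>\"]\n",
    "\n",
    "def approx_score(text):\n",
    "    return sum(0.5 if text.count(t) == 1 else -0.5 for t in tags)\n",
    "\n",
    "good = \"<start_working_out>half of 48 is 24<end_working_out><SOLUTION>72</SOLUTION>\"\n",
    "bad  = \"<SOLUTION>72</SOLUTION><SOLUTION>72</SOLUTION>\"  # markers missing and duplicated\n",
    "print(approx_score(good), approx_score(bad))  # 2.0 -2.0"
   ]
  },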
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "### 7.4 奖励函数3: 答案正确性检查\n",
    "\n",
    "这是最重要的奖励函数，它评估模型是否给出了正确的答案。它包含多层评估机制：\n",
    "\n",
    "1. **完全匹配**: 答案完全正确 (+3.0分)\n",
    "2. **去空格匹配**: 忽略空格后匹配 (+1.5分)  \n",
    "3. **数值接近**: 答案在正确范围内 (+0.5或+0.25分)\n",
    "4. **错误惩罚**: 答案错误会被扣分\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "def check_answer(prompts, completions, answer, **kwargs):\n",
    "    \"\"\"\n",
    "    奖励函数3: 检查答案的正确性\n",
    "    \n",
    "    这个函数实现了多层次的答案评估机制，从严格匹配到近似匹配。\n",
    "    \n",
    "    Args:\n",
    "        prompts: 输入提示列表\n",
    "        completions: 模型生成的完成文本列表\n",
    "        answer: 标准答案列表\n",
    "        **kwargs: 其他参数\n",
    "    \n",
    "    Returns:\n",
    "        list: 每个完成文本的奖励分数列表\n",
    "    \"\"\"\n",
    "    question = prompts[0][-1][\"content\"]\n",
    "    responses = [completion[0][\"content\"] for completion in completions]\n",
    "\n",
    "    # 从模型输出中提取答案\n",
    "    extracted_responses = [\n",
    "        guess.group(1)\n",
    "        if (guess := match_format.search(r)) is not None else None \\\n",
    "        for r in responses\n",
    "    ]\n",
    "\n",
    "    scores = []\n",
    "    for guess, true_answer in zip(extracted_responses, answer):\n",
    "        score = 0\n",
    "        \n",
    "        # 如果无法提取答案，得分为0\n",
    "        if guess is None:\n",
    "            scores.append(0)\n",
    "            continue\n",
    "            \n",
    "        # 完全匹配：最高奖励\n",
    "        if guess == true_answer:\n",
    "            score += 3.0\n",
    "        # 去除空格后匹配：高奖励\n",
    "        elif guess.strip() == true_answer.strip():\n",
    "            score += 1.5\n",
    "        else:\n",
    "            # 数值接近性检查：对于数值答案，允许一定误差\n",
    "            try:\n",
    "                ratio = float(guess) / float(true_answer)\n",
    "                if   ratio >= 0.9 and ratio <= 1.1:    # 10%误差内\n",
    "                    score += 0.5\n",
    "                elif ratio >= 0.8 and ratio <= 1.2:    # 20%误差内\n",
    "                    score += 0.25\n",
    "                else: \n",
    "                    score -= 1.0 # 错误答案惩罚\n",
    "            except:\n",
    "                score -= 0.5 # 无法转换为数值的惩罚\n",
    "                \n",
    "        scores.append(score)\n",
    "    return scores\n",
    "\n",
    "print(\"奖励函数3说明:\")\n",
    "print(\"- 完全匹配: +3.0分\")\n",
    "print(\"- 去空格匹配: +1.5分\")\n",
    "print(\"- 10%误差内: +0.5分\")\n",
    "print(\"- 20%误差内: +0.25分\")\n",
    "print(\"- 错误答案: -1.0分\")\n",
    "print(\"- 无法解析: -0.5分\")"
   ]
  },
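  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the tiers concrete, here is a toy re-implementation of the scoring logic (an assumed helper, not part of the training code) applied to a few illustrative guesses against a reference answer of `\"100\"`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy re-implementation (an assumed helper, not part of the training code)\n",
    "# of the scoring tiers in check_answer, applied to illustrative guesses\n",
    "def answer_score(guess, truth):\n",
    "    if guess is None:\n",
    "        return 0.0\n",
    "    if guess == truth:\n",
    "        return 3.0                   # exact match\n",
    "    if guess.strip() == truth.strip():\n",
    "        return 1.5                   # match after stripping whitespace\n",
    "    try:\n",
    "        ratio = float(guess) / float(truth)\n",
    "        if 0.9 <= ratio <= 1.1:\n",
    "            return 0.5               # within 10%\n",
    "        if 0.8 <= ratio <= 1.2:\n",
    "            return 0.25              # within 20%\n",
    "        return -1.0                  # wrong answer\n",
    "    except Exception:\n",
    "        return -0.5                  # not a number\n",
    "\n",
    "print([answer_score(g, \"100\") for g in [\"100\", \" 100 \", \"105\", \"85\", \"7\", \"abc\"]])\n",
    "# [3.0, 1.5, 0.5, 0.25, -1.0, -0.5]"
   ]
  },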
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "### 7.5 奖励函数4: 数值提取检查\n",
    "\n",
    "这个函数专门检查模型是否能在SOLUTION标记内输出有效的数值。它使用一个更简单的正则表达式来提取数字。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['0.34']"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 定义用于提取数字的正则表达式\n",
    "# 这个正则表达式专门用于从SOLUTION标记中提取数值\n",
    "match_numbers = re.compile(\n",
    "    rf\"{solution_start}.*?([\\d\\.]{{1,}})\",  # 匹配SOLUTION标记内的数字（包括小数）\n",
    "    flags = re.MULTILINE | re.DOTALL        # 多行模式\n",
    ")\n",
    "\n",
    "# 测试数字提取功能\n",
    "test_solution = \"<SOLUTION>  0.34  </SOLUTION>\"\n",
    "extracted_numbers = match_numbers.findall(test_solution)\n",
    "\n",
    "print(f\"测试文本: {test_solution}\")\n",
    "print(f\"提取的数字: {extracted_numbers}\")\n",
    "print(\"✓ 数字提取正则表达式工作正常\" if extracted_numbers else \"✗ 数字提取失败\")\n",
    "\n",
    "extracted_numbers"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "def check_numbers(prompts, completions, answer, **kwargs):\n",
    "    \"\"\"\n",
    "    奖励函数4: 检查数值提取能力\n",
    "    \n",
    "    这个函数专门检查模型是否能在SOLUTION标记内输出有效的数值，\n",
    "    并与标准答案进行精确数值比较。\n",
    "    \n",
    "    Args:\n",
    "        prompts: 输入提示列表\n",
    "        completions: 模型生成的完成文本列表\n",
    "        answer: 标准答案列表\n",
    "        **kwargs: 其他参数\n",
    "    \n",
    "    Returns:\n",
    "        list: 每个完成文本的奖励分数列表\n",
    "    \"\"\"\n",
    "    question = prompts[0][-1][\"content\"]\n",
    "    responses = [completion[0][\"content\"] for completion in completions]\n",
    "\n",
    "    # 使用数字提取正则表达式从响应中提取数值\n",
    "    extracted_responses = [\n",
    "        guess.group(1)\n",
    "        if (guess := match_numbers.search(r)) is not None else None \\\n",
    "        for r in responses\n",
    "    ]\n",
    "\n",
    "    scores = []\n",
    "    \n",
    "    # 打印调试信息（训练时会显示）\n",
    "    print('*'*20, f\"Question:\\n{question}\", \n",
    "          f\"\\nAnswer:\\n{answer[0]}\", \n",
    "          f\"\\nResponse:\\n{responses[0]}\", \n",
    "          f\"\\nExtracted:\\n{extracted_responses[0]}\")\n",
    "    \n",
    "    for guess, true_answer in zip(extracted_responses, answer):\n",
    "        # 如果无法提取数字，得分为0\n",
    "        if guess is None:\n",
    "            scores.append(0)\n",
    "            continue\n",
    "            \n",
    "        # 尝试将提取的答案和标准答案转换为数值进行比较\n",
    "        try:\n",
    "            true_answer_num = float(true_answer.strip())\n",
    "            guess_num = float(guess.strip())\n",
    "            # 数值完全匹配时给予奖励，否则为0\n",
    "            scores.append(1.5 if guess_num == true_answer_num else 0.0)\n",
    "        except:\n",
    "            # 转换失败时得分为0\n",
    "            scores.append(0)\n",
    "            continue\n",
    "            \n",
    "    return scores\n",
    "\n",
    "print(\"奖励函数4说明:\")\n",
    "print(\"- 专门检查SOLUTION标记内的数值提取\")\n",
    "print(\"- 数值完全匹配: +1.5分\")\n",
    "print(\"- 无法提取数值或不匹配: 0分\")\n",
    "print(\"- 用于确保模型输出包含有效数字\")"
   ]
  },
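  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A few edge cases for the number-extraction pattern, with hand-written strings (assumed examples): the regex grabs the first run of digits/dots after the `<SOLUTION>` marker, and returns nothing when the marker is absent."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import re\n",
    "\n",
    "# Toy check (assumed examples) of the number-extraction pattern:\n",
    "# it grabs the first run of digits/dots after the <SOLUTION> marker\n",
    "pattern = re.compile(r\"<SOLUTION>.*?([\\d\\.]{1,})\", flags = re.MULTILINE | re.DOTALL)\n",
    "\n",
    "samples = [\n",
    "    \"<SOLUTION>72</SOLUTION>\",\n",
    "    \"<SOLUTION>The answer is 3.5</SOLUTION>\",\n",
    "    \"no markers, just 42\",\n",
    "]\n",
    "extracted = [m.group(1) if (m := pattern.search(s)) else None for s in samples]\n",
    "print(extracted)  # ['72', '3.5', None]"
   ]
  },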
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "## 步骤8: 配置GRPO训练参数\n",
    "\n",
    "现在我们配置GRPO训练的各种参数。这些参数控制训练过程的方方面面：\n",
    "\n",
    "### 关键参数说明：\n",
    "\n",
    "**学习率相关**:\n",
    "- `learning_rate=5e-6`: 较小的学习率，确保稳定训练\n",
    "- `warmup_ratio=0.1`: 学习率预热，前10%步骤逐渐增加学习率\n",
    "\n",
    "**批次和生成**:\n",
    "- `per_device_train_batch_size=1`: 每个设备的批次大小\n",
    "- `num_generations=4`: 每个提示生成4个候选答案进行比较\n",
    "\n",
    "**序列长度**:\n",
    "- `max_prompt_length=256`: 输入提示的最大长度\n",
    "- `max_completion_length`: 输出完成文本的最大长度\n",
    "\n",
    "**训练控制**:\n",
    "- `max_steps=50`: 快速演示训练（实际训练建议更多步骤）\n",
    "- `report_to=\"swanlab\"`: 使用SwanLab进行可视化监控\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Unsloth: We now expect `per_device_train_batch_size` to be a multiple of `num_generations`.\n",
      "We will change the batch size of 1 to the `num_generations` of 4\n"
     ]
    }
   ],
   "source": [
    "# 设置提示词的最大长度\n",
    "max_prompt_length = 256\n",
    "\n",
    "# 导入GRPO相关的配置和训练器\n",
    "from trl import GRPOConfig, GRPOTrainer\n",
    "\n",
    "# 创建GRPO训练配置\n",
    "training_args = GRPOConfig(\n",
    "    # 优化器参数\n",
    "    learning_rate = 5e-6,           # 学习率：GRPO通常使用较小的学习率\n",
    "    adam_beta1 = 0.9,               # Adam优化器的beta1参数\n",
    "    adam_beta2 = 0.99,              # Adam优化器的beta2参数\n",
    "    weight_decay = 0.1,             # 权重衰减，防止过拟合\n",
    "    optim = \"adamw_torch_fused\",    # 使用融合的AdamW优化器，更高效\n",
    "    \n",
    "    # 学习率调度\n",
    "    warmup_ratio = 0.1,             # 学习率预热比例\n",
    "    lr_scheduler_type = \"cosine\",   # 余弦学习率调度\n",
    "    \n",
    "    # 训练批次设置\n",
    "    per_device_train_batch_size = 1,        # 每个设备的批次大小\n",
    "    gradient_accumulation_steps = 1,        # 梯度累积步数（可以增加到4获得更平滑的训练）\n",
    "    num_generations = 4,                    # 每个提示生成的候选数量（显存不足时可减少）\n",
    "    \n",
    "    # 序列长度控制\n",
    "    max_prompt_length = max_prompt_length,                      # 提示的最大长度\n",
    "    max_completion_length = max_seq_length - max_prompt_length, # 完成文本的最大长度\n",
    "    \n",
    "    # 训练控制\n",
    "    max_steps = 50,                 # 最大训练步数（演示用，实际训练建议更多）\n",
    "    # num_train_epochs = 1,         # 或者使用训练轮数而非步数\n",
    "    save_steps = 50,                # 保存模型的步数间隔\n",
    "    max_grad_norm = 0.1,            # 梯度裁剪阈值\n",
    "    \n",
    "    # 日志和监控\n",
    "    logging_steps = 1,              # 日志记录间隔\n",
    "    report_to = \"swanlab\",          # 实验跟踪工具（也可以使用\"wandb\"）\n",
    "    output_dir = \"outputs\",         # 输出目录\n",
    ")\n",
    "\n",
    "print(\"GRPO训练配置已设置完成!\")\n",
    "print(f\"- 最大训练步数: {training_args.max_steps}\")\n",
    "print(f\"- 每步生成候选数: {training_args.num_generations}\")\n",
    "print(f\"- 学习率: {training_args.learning_rate}\")\n",
    "print(f\"- 使用SwanLab进行可视化监控\")"
   ]
  },
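  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick arithmetic check on the two derived quantities, using this notebook's values: the completion budget is whatever the sequence length leaves after the prompt, and, as the warning printed above notes, Unsloth raises the per-device batch size to a multiple of `num_generations`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Quick arithmetic check of the derived quantities under this notebook's settings\n",
    "max_seq_length = 1024\n",
    "max_prompt_length = 256\n",
    "num_generations = 4\n",
    "\n",
    "# Completion budget: whatever the sequence length leaves after the prompt\n",
    "max_completion_length = max_seq_length - max_prompt_length\n",
    "print(max_completion_length)  # 768\n",
    "\n",
    "# As the warning printed above notes, Unsloth bumps the per-device batch\n",
    "# size (1 here) up to num_generations, so each optimizer step scores\n",
    "# 4 sampled completions per prompt\n",
    "effective_batch = num_generations\n",
    "print(effective_batch)  # 4"
   ]
  },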
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "## 步骤9: 执行GRPO训练\n",
    "\n",
    "现在我们创建GRPO训练器并开始训练过程。训练器会：\n",
    "\n",
    "1. **初始化训练器**: 设置模型、奖励函数和训练参数\n",
    "2. **开始训练**: 循环执行以下步骤：\n",
    "   - 生成多个候选答案\n",
    "   - 使用奖励函数评估每个候选\n",
    "   - 根据奖励信号更新模型参数\n",
    "3. **监控训练**: 通过SwanLab实时查看训练进度\n",
    "\n",
    "### 训练过程中会看到：\n",
    "- 训练进度条和损失值\n",
    "- 每个奖励函数的得分统计\n",
    "- SwanLab的可视化界面链接\n",
    "- 样例问题和模型输出\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Detected kernel version 5.4.143, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "==((====))==  Unsloth - 2x faster free finetuning | Num GPUs used = 1\n",
      "   \\\\   /|    Num examples = 7,473 | Num Epochs = 1 | Total steps = 50\n",
      "O^O/ \\_/ \\    Batch size per device = 4 | Gradient accumulation steps = 1\n",
      "\\        /    Data Parallel GPUs = 1 | Total batch size (4 x 1 x 1) = 4\n",
      " \"-____-\"     Trainable parameters = 14,901,248 of 4,314,980,720 (0.35% trained)\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[1m\u001b[34mswanlab\u001b[0m\u001b[0m: Tracking run with swanlab version 0.6.4                                   \n",
      "\u001b[1m\u001b[34mswanlab\u001b[0m\u001b[0m: Run data will be saved locally in \u001b[35m\u001b[1m/opt/tiger/test0/swanlog/run-20250701_195941-0e8cd89d\u001b[0m\u001b[0m\n",
      "\u001b[1m\u001b[34mswanlab\u001b[0m\u001b[0m: 👋 Hi \u001b[1m\u001b[39mtwosugar\u001b[0m\u001b[0m, welcome to swanlab!\n",
      "\u001b[1m\u001b[34mswanlab\u001b[0m\u001b[0m: Syncing run \u001b[33moutputs\u001b[0m to the cloud\n",
      "\u001b[1m\u001b[34mswanlab\u001b[0m\u001b[0m: 🏠 View project at \u001b[34m\u001b[4mhttps://swanlab.cn/@twosugar/test0\u001b[0m\u001b[0m\n",
      "\u001b[1m\u001b[34mswanlab\u001b[0m\u001b[0m: 🚀 View run at \u001b[34m\u001b[4mhttps://swanlab.cn/@twosugar/test0/runs/cmax5v7at0zpzpqk94cbg\u001b[0m\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "\n",
       "<!DOCTYPE html>\n",
       "<html lang=\"en\">\n",
       "<head>\n",
       "    <meta charset=\"UTF-8\">\n",
       "    <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n",
       "    <title>Show Iframe</title>\n",
       "    \n",
       "        <script>\n",
       "            function showIframe() {\n",
       "                var iframeHtml = '<iframe src=\"https://swanlab.cn/@twosugar/test0/runs/cmax5v7at0zpzpqk94cbg\" width=100% height=\"600\" frameborder=\"no\"></iframe>';\n",
       "                document.getElementById('iframeContainer').innerHTML = iframeHtml;\n",
       "            }\n",
       "        </script>\n",
       "        \n",
       "</head>\n",
       "<body>\n",
       "    <style>\n",
       "        .interactive-button {\n",
       "            display: flex;\n",
       "            align-items: center;\n",
       "            height: 36px;\n",
       "            border: 0px;\n",
       "            background-color: #2c8f63;\n",
       "            color: white;\n",
       "            padding: 10px 20px;\n",
       "            transition: background-color 0.3s, transform 0.2s;\n",
       "        }\n",
       "\n",
       "        .interactive-button:hover {\n",
       "            background-color: #5cab87;\n",
       "            cursor: pointer;\n",
       "        }\n",
       "\n",
       "        .interactive-button:active { background-color: #217952; transform: scale(0.96); } </style> <br> <button \n",
       "        onclick=\"showIframe()\" class=\"interactive-button\"> <svg style=\"height: 16px; margin-right: 8px;\" viewBox=\"0 0 \n",
       "        46 46\" fill=\"none\"> <path d=\"M10.8439 21.1974C10.6414 21.2854 10.4477 21.3925 10.2655 21.5173L10.2069 \n",
       "        21.5652C10.1839 21.58 10.1625 21.5969 10.1429 21.6159C6.29135 24.6118 4.22831 29.4416 5.32646 34.282C5.94656 \n",
       "        37.0577 7.50461 39.5348 9.73801 41.2958C11.9714 43.0568 14.7436 43.994 17.5874 43.9495H18.0219C19.8864 \n",
       "        43.8697 21.7087 43.3694 23.3526 42.486C24.9964 41.6026 26.4193 40.3589 27.5147 38.848C28.61 37.3371 29.3496 \n",
       "        35.598 29.678 33.761C30.0065 31.9239 29.9153 30.0363 29.4112 28.2395C28.9181 26.4723 27.8919 24.8437 26.9937 \n",
       "        23.2551C25.4158 20.4653 23.8343 17.6764 22.2492 14.8884C21.7801 14.0647 21.3057 13.2465 20.8419 \n",
       "        12.4228C20.2315 11.3353 19.2746 10.1519 19.224 8.86183C19.1733 7.57176 20.2235 6.32701 21.5082 \n",
       "        6.07912C23.9284 5.61801 25.0639 8.24078 25.0693 8.23812C25.363 8.94035 25.9123 9.50489 26.6063 \n",
       "        9.81764C27.3002 10.1304 28.087 10.168 28.8077 9.92298C29.5283 9.67791 30.1291 9.1684 30.4885 8.49743C30.8479 \n",
       "        7.82646 30.9392 7.04405 30.7439 6.30835C30.1514 4.37314 28.9133 2.69953 27.2363 1.56656C25.7615 0.511704 \n",
       "        23.9847 -0.0372109 22.1719 0.00195984C20.9049 0.00893199 19.6532 0.27989 18.4967 0.797557C17.3402 1.31522 \n",
       "        16.3043 2.06823 15.4551 3.00856C14.49 4.08707 13.7984 5.38193 13.4389 6.78385C13.0794 8.18576 13.0624 9.6536 \n",
       "        13.3894 11.0635C13.52 11.593 13.6984 12.1095 13.9225 12.6067C14.5595 14.0514 15.4951 15.3681 16.284 \n",
       "        16.7355C17.2525 18.4147 18.2209 20.0948 19.1893 21.7758C20.1578 23.4568 21.1351 25.1449 22.1213 \n",
       "        26.8401C22.9209 28.2421 23.7925 29.4682 23.8805 31.1528C23.9175 32.0513 23.7682 32.9479 23.4419 \n",
       "        33.7859C23.1156 34.6239 22.6194 35.3854 21.9845 36.0223C21.3496 36.6592 20.5897 37.1578 19.7527 \n",
       "        37.4868C18.9157 37.8157 18.0196 37.9678 17.121 37.9336C14.0024 37.7923 11.6488 35.4814 11.1744 32.4588C10.58 \n",
       "        28.6419 13.552 26.5469 13.552 26.5469C14.1782 26.1785 14.6497 25.5955 14.8791 24.906C15.1084 24.2166 15.0801 \n",
       "        23.4673 14.7993 22.7971C14.5186 22.127 14.0044 21.5813 13.3521 21.2611C12.6998 20.941 11.9536 20.8682 11.2517 \n",
       "        21.0561C11.1174 21.0939 10.9856 21.1402 10.8572 21.1947\" fill=\"white\" /> <path d=\"M42.8101 31.5968C42.8109 \n",
       "        30.5198 42.7218 29.4445 42.5435 28.3823C42.2663 26.7069 41.7464 25.0808 41.0002 23.5552C40.5524 22.6463 \n",
       "        39.9874 21.7374 39.1024 21.2417C38.6593 20.9919 38.1589 20.8617 37.6502 20.8639C37.1416 20.8661 36.6423 \n",
       "        21.0006 36.2013 21.2541C35.7604 21.5077 35.393 21.8716 35.1352 22.3101C34.8775 22.7485 34.7382 23.2466 \n",
       "        34.7312 23.7552C34.7072 24.8773 35.3149 25.8875 35.768 26.9217C36.5212 28.6453 36.8623 30.5208 36.7642 \n",
       "        32.3993C36.6661 34.2777 36.1315 36.1075 35.2029 37.7433C35.146 37.8404 35.0952 37.941 35.051 38.0445C34.8623 \n",
       "        38.4842 34.7635 38.9573 34.7605 39.4358C34.7802 40.1222 35.0356 40.7808 35.4835 41.3011C35.9315 41.8214 \n",
       "        36.5449 42.1717 37.2207 42.2932C38.8759 42.589 40.1899 41.347 40.8856 39.9609C42.1643 37.3589 42.823 34.4961 \n",
       "        42.8101 31.5968Z\" fill=\"white\" /> <path d=\"M28.2309 11.8938C28.1761 11.9043 28.1218 11.9176 28.0683 \n",
       "        11.9338C27.9593 11.9642 27.8611 12.0249 27.7851 12.1088C27.7091 12.1928 27.6584 12.2965 27.6389 \n",
       "        12.408C27.6193 12.5195 27.6318 12.6343 27.6748 12.7391C27.7178 12.8438 27.7895 12.9343 27.8818 \n",
       "        12.9999C29.2375 14.0252 30.3809 15.3043 31.2482 16.7662C31.4838 17.1677 31.6888 17.5865 31.8612 \n",
       "        18.0189C32.0052 18.3921 32.1971 18.8799 32.6822 18.8532C33.0607 18.8346 33.2153 18.512 33.3192 \n",
       "        18.1895C33.8137 16.5125 33.9678 14.7534 33.7723 13.0159C33.6331 12.0693 33.4155 11.1359 33.122 \n",
       "        10.2252C33.0775 10.0047 32.9744 9.80029 32.8235 9.6335C32.7273 9.54627 32.6054 9.49262 32.4761 9.4806C32.3468 \n",
       "        9.46859 32.2171 9.49886 32.1065 9.56687C32.0016 9.65188 31.9115 9.75365 31.8399 9.86806C31.3956 10.4658 \n",
       "        30.825 10.9581 30.1687 11.3101C29.8377 11.4861 29.4893 11.6272 29.1292 11.7312C28.828 11.8192 28.5215 11.8325 \n",
       "        28.2309 11.8938Z\" fill=\"white\" /> </svg> Display SwanLab Board </button> <br> <div \n",
       "        id=\"iframeContainer\"></div> </body> </html>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "******************** Question:\n",
      "A concert ticket costs $40. Mr. Benson bought 12 tickets and received a 5% discount for every ticket bought that exceeds 10. How much did Mr. Benson pay in all? \n",
      "Answer:\n",
      "476 \n",
      "Response:\n",
      " <start_working_out> \n",
      "\n",
      " <end_working_out> \n",
      "\n",
      " <SOLUTION> \n",
      "\n",
      " **1.** Calculate the number of tickets bought that exceed 10.  Mr. Benson bought 12 tickets, so 12 - 10 = 2 tickets exceed 10.\n",
      "\n",
      " **2.** Calculate the discount per ticket bought that exceeds 10. The discount is 5% of $40.  0.05 * $40 = $2.  However, the discount is calculated *only* for those tickets exceeding 10.  Therefore, the discount per ticket is $2.\n",
      "\n",
      " **3.**  Calculate the total discount.  Mr. Benson bought 2 tickets that exceed 10, so the total discount is 2 * $2 = $4.\n",
      "\n",
      " **4.**  Calculate the total cost before discount. Mr. Benson bought 12 tickets at $40 each, so the total cost is 12 * $40 = $480.\n",
      "\n",
      " **5.**  Calculate the total cost after discount.  The total cost after discount is $480 - $4 = $476.\n",
      "\n",
      " **6.**  Mr. Benson bought 12 tickets and received a 5% discount for every ticket bought that exceeds 10. The number of tickets bought that exceed 10 is 2. Therefore, the discount is 2 * $2 = $4. Mr. Benson paid in all $480 - $4 = $476.\n",
      "\n",
      " **7.** The problem states that Mr. Benson bought 12 tickets and received a 5% discount for every ticket bought that exceeds 10. The discount is given for every ticket bought that exceeds 10. So, the discount is given for 2 tickets (12-10 = 2). The discount is 2 * $2 = $4. Mr. Benson paid in all $480 - $4 = $476.\n",
      "\n",
      " **8.** The problem is to calculate how much Mr. Benson paid in all. Mr. Benson bought 12 tickets at $40 each, so the cost is $480. He received a 5% discount for every ticket bought that exceeds 10. There are 2 tickets that exceed 10 (12-10 = 2). The discount is 2 * 5% = 10%. The discount is $480 * 10% = $480 * 0.10 = $48. The total cost is $480 - $48 = $432.\n",
      "\n",
      " **9.**  The problem states Mr. Benson bought 12 tickets and received a 5% discount for every ticket bought that exceeds 10.  Tickets bought that exceed 10 is 2.  5% of the price is $40 is $2.  The total discount is 2 * $2 = $4.  The amount paid is $480 - $4 = $476.\n",
      "\n",
      "  **10.** Re-reading the problem:  Mr. Benson bought 12 tickets at $40 each.  A 5% discount is given for every ticket bought that exceeds 10. This means for every ticket *beyond* 10, there is a 5% discount.  So only 2 tickets are eligible for the discount.  The discount is 2 * 5% = 10%.  The total discount is $480 * 0.10 = $48.  Therefore the final \n",
      "Extracted:\n",
      "1.\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='16' max='50' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [16/50 20:14 < 49:10, 0.01 it/s, Epoch 0.00/1]\n",
       "    </div>\n",
       "    <table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       " <tr style=\"text-align: left;\">\n",
       "      <th>Step</th>\n",
       "      <th>Training Loss</th>\n",
       "      <th>reward</th>\n",
       "      <th>reward_std</th>\n",
       "      <th>completions / mean_length</th>\n",
       "      <th>completions / min_length</th>\n",
       "      <th>completions / max_length</th>\n",
       "      <th>completions / clipped_ratio</th>\n",
       "      <th>completions / mean_terminated_length</th>\n",
       "      <th>completions / min_terminated_length</th>\n",
       "      <th>completions / max_terminated_length</th>\n",
       "      <th>kl</th>\n",
       "      <th>rewards / match_format_exactly / mean</th>\n",
       "      <th>rewards / match_format_exactly / std</th>\n",
       "      <th>rewards / match_format_approximately / mean</th>\n",
       "      <th>rewards / match_format_approximately / std</th>\n",
       "      <th>rewards / check_answer / mean</th>\n",
       "      <th>rewards / check_answer / std</th>\n",
       "      <th>rewards / check_numbers / mean</th>\n",
       "      <th>rewards / check_numbers / std</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <td>1</td>\n",
       "      <td>-0.000000</td>\n",
       "      <td>0.500000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.500000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>2</td>\n",
       "      <td>-0.000000</td>\n",
       "      <td>-0.625000</td>\n",
       "      <td>1.108678</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.750000</td>\n",
       "      <td>1.500000</td>\n",
       "      <td>-1.250000</td>\n",
       "      <td>0.957427</td>\n",
       "      <td>-0.125000</td>\n",
       "      <td>0.250000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>3</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.625000</td>\n",
       "      <td>2.625992</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000177</td>\n",
       "      <td>0.750000</td>\n",
       "      <td>1.500000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>1.414214</td>\n",
       "      <td>-0.125000</td>\n",
       "      <td>0.250000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>4</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>3.625000</td>\n",
       "      <td>3.944933</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000620</td>\n",
       "      <td>0.750000</td>\n",
       "      <td>1.500000</td>\n",
       "      <td>1.750000</td>\n",
       "      <td>0.500000</td>\n",
       "      <td>0.750000</td>\n",
       "      <td>1.500000</td>\n",
       "      <td>0.375000</td>\n",
       "      <td>0.750000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>5</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>3.750000</td>\n",
       "      <td>2.020726</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.001447</td>\n",
       "      <td>1.500000</td>\n",
       "      <td>1.732051</td>\n",
       "      <td>2.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.250000</td>\n",
       "      <td>0.288675</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>6</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>5.000000</td>\n",
       "      <td>1.683251</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000681</td>\n",
       "      <td>2.250000</td>\n",
       "      <td>1.500000</td>\n",
       "      <td>1.750000</td>\n",
       "      <td>0.500000</td>\n",
       "      <td>-0.125000</td>\n",
       "      <td>0.478714</td>\n",
       "      <td>1.125000</td>\n",
       "      <td>0.750000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>7</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>1.154701</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000422</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>1.154701</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>8</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.250000</td>\n",
       "      <td>1.500000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000513</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.250000</td>\n",
       "      <td>1.500000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>9</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>1.375000</td>\n",
       "      <td>2.688711</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000646</td>\n",
       "      <td>0.750000</td>\n",
       "      <td>1.500000</td>\n",
       "      <td>0.750000</td>\n",
       "      <td>1.892969</td>\n",
       "      <td>-0.125000</td>\n",
       "      <td>0.250000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>10</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>1.750000</td>\n",
       "      <td>2.020726</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000739</td>\n",
       "      <td>2.250000</td>\n",
       "      <td>1.500000</td>\n",
       "      <td>-0.500000</td>\n",
       "      <td>1.914854</td>\n",
       "      <td>-0.375000</td>\n",
       "      <td>0.250000</td>\n",
       "      <td>0.375000</td>\n",
       "      <td>0.750000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>11</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>4.187500</td>\n",
       "      <td>1.179248</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.001038</td>\n",
       "      <td>2.250000</td>\n",
       "      <td>1.500000</td>\n",
       "      <td>1.750000</td>\n",
       "      <td>0.500000</td>\n",
       "      <td>-0.187500</td>\n",
       "      <td>0.375000</td>\n",
       "      <td>0.375000</td>\n",
       "      <td>0.750000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>12</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.875000</td>\n",
       "      <td>2.719528</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000231</td>\n",
       "      <td>0.750000</td>\n",
       "      <td>1.500000</td>\n",
       "      <td>0.250000</td>\n",
       "      <td>1.707825</td>\n",
       "      <td>-0.125000</td>\n",
       "      <td>0.250000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>13</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>2.187500</td>\n",
       "      <td>3.091487</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.001241</td>\n",
       "      <td>0.750000</td>\n",
       "      <td>1.500000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>2.000000</td>\n",
       "      <td>0.062500</td>\n",
       "      <td>0.125000</td>\n",
       "      <td>0.375000</td>\n",
       "      <td>0.750000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>14</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>1.250000</td>\n",
       "      <td>2.598076</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>768.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000328</td>\n",
       "      <td>0.750000</td>\n",
       "      <td>1.500000</td>\n",
       "      <td>0.250000</td>\n",
       "      <td>1.500000</td>\n",
       "      <td>-0.125000</td>\n",
       "      <td>0.250000</td>\n",
       "      <td>0.375000</td>\n",
       "      <td>0.750000</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table><p>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "******************** Question:\n",
      "Jane is trying to decide whether to buy a house or a trailer. A house costs $480,000 and a trailer costs $120,000. Each loan will be paid in monthly installments over 20 years. How much more is the monthly payment on the house compared to the trailer? \n",
      "Answer:\n",
      "1500 \n",
      "Response:\n",
      "Here's the working out, placed between <start_working_out> and <end_working_out>.\n",
      "\n",
      "<start_working_out>\n",
      "\n",
      "Jane is trying to decide whether to buy a house or a trailer. A house costs $480,000 and a trailer costs $120,000. Each loan will be paid in monthly installments over 20 years. How much more is the monthly payment on the house compared to the trailer?\n",
      "\n",
      "1.  Calculate the monthly payment for the house.\n",
      "    *   Loan amount: $480,000\n",
      "    *   Number of months: 20 years * 12 months/year = 240 months\n",
      "    *   Monthly payment: $480,000 / 240 months = $2,000/month\n",
      "\n",
      "2.  Calculate the monthly payment for the trailer.\n",
      "    *   Loan amount: $120,000\n",
      "    *   Number of months: 20 years * 12 months/year = 240 months\n",
      "    *   Monthly payment: $120,000 / 240 months = $500/month\n",
      "\n",
      "3.  Find the difference between the house and trailer payment.\n",
      "    *   Difference: $2,000 - $500 = $1,500\n",
      "\n",
      "</start_working_out>\n",
      "\n",
      "<end_working_out>\n",
      "\n",
      "</end_working_out>\n",
      "\n",
      "<SOLUTION>\n",
      "\n",
      "$1,500\n",
      "\n",
      "</SOLUTION>model \n",
      "Extracted:\n",
      "1\n",
      "Unsloth: Will smartly offload gradients to save VRAM!\n",
      "******************** Question:\n",
      "Janet pays $40/hour for 3 hours per week of clarinet lessons and $28/hour for 5 hours a week of piano lessons. How much more does she spend on piano lessons than clarinet lessons in a year? \n",
      "Answer:\n",
      "1040 \n",
      "Response:\n",
      "Here's the breakdown of the problem and solution:\n",
      "\n",
      "<start_working_out>\n",
      "\n",
      "**1. Clarinet Lesson Cost:**\n",
      "\n",
      "*   **Annual Hours:** 3 hours/week * 52 weeks/year = 156 hours/year\n",
      "*   **Annual Cost:** 156 hours/year * $28/hour = $4008\n",
      "\n",
      "**2. Piano Lesson Cost:**\n",
      "\n",
      "*   **Annual Hours:** 5 hours/week * 52 weeks/year = 260 hours/year\n",
      "*   **Annual Cost:** 260 hours/year * $40/hour = $10400\n",
      "\n",
      "**3. Cost Difference:**\n",
      "\n",
      "*   **Difference:** $10400 - $4008 = $6392\n",
      "\n",
      "**4. Solution**\n",
      "\n",
      "<end_working_out>\n",
      "\n",
      "<SOLUTION>\n",
      "$6392\n",
      "</SOLUTION> \n",
      "Extracted:\n",
      "6392\n",
      "******************** Question:\n",
      "Sabrina is collecting herbs to make a poultice for her grandmother. She needs twice as many basil leaves as sage leaves and 5 fewer sage leaves than verbena leaves. If she needs 12 basil leaves, how many leaves total does she need? \n",
      "Answer:\n",
      "29 \n",
      "Response:\n",
      "Here's the working out:\n",
      "\n",
      "<start_working_out>\n",
      "\n",
      "Sabrina needs 12 basil leaves. \n",
      "The number of basil leaves is twice the number of sage leaves, so 12 = 2 * sage leaves leaves. \n",
      "Therefore, sage leaves need = 12 / 2 = 6 leaves.\n",
      "Sage leaves need 5 fewer leaves than verbena leaves, so verbena leaves need = 6 + 5 = 11 leaves.\n",
      "Verbena leaves need 5 fewer leaves than verbena leaves, so she needs = 11 + 5 = 16 leaves.\n",
      "Total leaves = basil leaves + sage leaves + verbena leaves + sage leaves= 12+6+11+5=32 leaves.\n",
      " Total leaves = 12 + 6 + 11 + 5 = 32 leaves.\n",
      "</start_working_out>\n",
      "\n",
      "<end_working_out>\n",
      "\n",
      "</end_working_out>\n",
      "\n",
      "<SOLUTION>\n",
      "\n",
      "32\n",
      "</SOLUTION>Categorization: 32.\n",
      "\n",
      " \n",
      "Extracted:\n",
      "32\n",
      "******************** Question:\n",
      "Over the past five years, on July 4th, the high temperature for Washington, DC has been: 90 degrees in 2020, 90 degrees in 2019, 90 degrees in 2018, 79 degrees in 2017 and 71 degrees in 2016. What is the average temperature for July 4th in Washington, DC over the past 5 years? \n",
      "Answer:\n",
      "84 \n",
      "Response:\n",
      "<start_working_out>\n",
      "\n",
      "Okay, let's work this out. We need to calculate the average temperature for July 4th in Washington, DC, over the past 5 years. \n",
      "\n",
      "Here's the breakdown of the temperatures:\n",
      "\n",
      "*   2020: 90 degrees\n",
      "*   2019: 90 degrees\n",
      "*   2018: 90 degrees\n",
      "*   2017: 79 degrees\n",
      "*   2016: 71 degrees\n",
      "\n",
      "Now let's calculate the average:\n",
      "\n",
      "1.  Add all the temperatures: 90 + 90 + 90 + 79 + 71 = 399\n",
      "2.  Divide the sum by the number of temperatures (5): 399 / 5 = 79.8\n",
      "\n",
      "Therefore, the average temperature for July 4th in Washington, DC over the past 5 years is 79.8 degrees.\n",
      "\n",
      "<end_working_out>\n",
      "\n",
      "<SOLUTION>79.8</SOLUTION> \n",
      "Extracted:\n",
      "79.8\n",
      "******************** Question:\n",
      "Rene can finish reading 30 pages in 60 minutes. Lulu can read 27 pages in 60 minutes and Cherry can read 25 pages in 60 minutes. If they have been reading for 240 minutes now, how many pages have they finished reading in total? \n",
      "Answer:\n",
      "328 \n",
      "Response:\n",
      "Here's the breakdown of the problem, the solution and the final answer.\n",
      "\n",
      "<start_working_out>\n",
      "\n",
      "**Problem Breakdown:**\n",
      "\n",
      "1.  **Individual Rates:** We are given the individual pages read rates per 60 minutes interval.\n",
      "\n",
      "2.  **Combined Rates (Intervals):** The example gives how many pages are read in 60 minutes. Since we are provided the time interval, we can calculate the combined read rate.\n",
      "\n",
      "3.  **Time Interval**: We are provided with the time interval of 240 minutes. We have to convert this to hours and minutes. Also the given time interval that is thet 240 minutes is not provided and has to be calculated for.\n",
      "\n",
      "4.  **Calculate Total Pages Read:** Combine the values to calculate the total pages read for 240 minutes.\n",
      "\n",
      "**Solution:**\n",
      "\n",
      "1.  **Calculate 240 minutes interval** :\n",
      "    240 minutes = 4 hours.\n",
      "\n",
      "2.  **Calculate the total pages read:**\n",
      "    *   **Rene**: Reads 30 pages in 60 minutes. Therefore reads 30 pages / 60 min * 240 min = 120 pages.\n",
      "    *   **Lulu**: Reads 27 pages in 60 minutes. Therefore reads 27 pages / 60 min * 240 min = 108 pages.\n",
      "    *   **Cherry**: Reads 25 pages in 60 minutes. Therefore reads 25 pages / 60 min * 240 min = 120 pages.\n",
      "\n",
      "    **Total Pages Read:** 120 + 108 + 120 = 348 pages.\n",
      "\n",
      "3.  **Answer**\n",
      "    *   **Total Pages Read** = 348 pages.\n",
      "\n",
      "</start_working_out>\n",
      "\n",
      "<end_working_out>\n",
      "\n",
      "<SOLUTION>\n",
      "348\n",
      "</SOLUTION>\n",
      "\n",
      "</end_working_out>\n",
      " \n",
      "Extracted:\n",
      "348\n",
      "******************** Question:\n",
      "Martin rings the small bell 4 times more than 1/3 as often as the big bell. If he rings both of them a combined total of 52 times, how many times does he ring the big bell? \n",
      "Answer:\n",
      "36 \n",
      "Response:\n",
      " <start_working_out>\n",
      "\n",
      "Martin rings the small bell 4 times more than 1/3 as often as the big bell. If he rings both of them a combined total of 52 times, how many times does he ring the big bell?\n",
      "\n",
      "<br>\n",
      "\n",
      "<br>\n",
      "\n",
      "<br>\n",
      "\n",
      "< \n",
      "Extracted:\n",
      "None\n",
      "******************** Question:\n",
      "Bert fills out the daily crossword puzzle in the newspaper every day. He uses up a pencil to fill out the puzzles every two weeks. On average, it takes him 1050 words to use up a pencil. How many words are in each crossword puzzle on average? \n",
      "Answer:\n",
      "75 \n",
      "Response:\n",
      "Here's the solution to the crossword puzzle problem:\n",
      "\n",
      "<start_working_out>\n",
      "\n",
      "<SOLUTION>\n",
      "\n",
      "... [repeated <SOLUTION> lines truncated] ...\n",
      " \n",
      "Extracted:\n",
      "None\n",
      "******************** Question:\n",
      "Matt can make a batch of a dozen cookies using 2 pounds of flour.  He uses 4 bags of flour each weighing 5 pounds.  If Jim eats 15 cookies how many cookies are left? \n",
      "Answer:\n",
      "105 \n",
      "Response:\n",
      "Okay, let's tackle this problem step-by-step.\n",
      "\n",
      "<start_working_out>\n",
      "\n",
      "**1. Understand the problem:****\n",
      "*   **Matt:** Uses 2 pounds of flour for 1 dozen cookies (12).\n",
      "*   **Jim:** Eats 15 cookies.\n",
      "*   **We need to know:** How many cookies are left?\n",
      "\n",
      "**2. Calculate the amount of flour used by Jim:**\n",
      "*   **1 dozen cookies:** = 12 cookies\n",
      "*   **15 cookies:** = 15 cookies/ 12 cookies/ 1 dozen = 1.25 dozens\n",
      "*   **Flour required:** 1.25 * 2 = 2.5 pounds\n",
      "\n",
      "**3. Find the number of cookies left**\n",
      "*   **Number of cookies left:** =12 -15 =-3.\n",
      "*   **Since number can't be negative** We assume 0 cookies.\n",
      "\n",
      "... [steps 4-10 repeat the same two lines; truncated] ...\n",
      "\n",
      "</end_working_out>\n",
      "\n",
      "<SOLUTION>\n",
      "\n",
      "There are 0 cookies left.\n",
      "\n",
      "</SOLUTION> \n",
      "Extracted:\n",
      "0\n",
      "******************** Question:\n",
      "James decides to build a tin house by collecting 500 tins in a week. On the first day, he collects 50 tins. On the second day, he manages to collect 3 times that number. On the third day, he collects 50 tins fewer than the number he collected on the second day. If he collects an equal number of tins on the remaining days of the week, what's the number of tins he collected each day for the rest of the week? \n",
      "Answer:\n",
      "50 \n",
      "Response:\n",
      "Here's the solution to the James tin house problem, including the working out and the solution between the <start_working_out> and <end_working_out> tags.\n",
      "\n",
      "<start_working_out>\n",
      "\n",
      "First, let's calculate the number of tins James collected on each day:\n",
      "\n",
      "*   **Day 1:** 50 tins\n",
      "*   **Day 2:** 3 * 50 = 150 tins\n",
      "*   **Day 3:** 150 - 50 = 100 tins\n",
      "*   **Days 4-7:** Let 'x' be the number of tins collected each of these days.\n",
      "\n",
      "Total number of days in a week is 7.\n",
      "James collected 50 tins on day 1.\n",
      "James collected 50 tins on day 3.\n",
      "So, James collected a total of 50+50=100 tins on days 1 and 3.\n",
      "So, James collected 500-100=400 tins on days 2,4,5,6,7.\n",
      "Since James collected ‘x’ number of tins on each of days 2,4,5,6,7.\n",
      "So, 5x=400.\n",
      "Therefore, x=80.\n",
      "So, James collected 80 tins each on day 2,4,5,6,7.\n",
      "\n",
      "</start_working_out>\n",
      "\n",
      "<end_working_out>\n",
      "\n",
      "**Solution:**\n",
      "\n",
      "James collected 80 tins each on days 2, 4, 5, 6, and 7.\n",
      "\n",
      "</end_working_out>\n",
      "\n",
      "<SOLUTION></SOLUTION> \n",
      "Extracted:\n",
      "None\n",
      "******************** Question:\n",
      "A jar of jellybeans has 14 blue jellybeans, 26 purple jellybeans and 40 orange jellybeans. If there are 200 jellybeans in the jar, how many are there of the red color? \n",
      "Answer:\n",
      "120 \n",
      "Response:\n",
      "Here's the breakdown of the problem and solution:\n",
      "\n",
      "<start_working_out>\n",
      "\n",
      "The problem states that there is 14 blue jellybeans, 26 purple jellybeans and 40 orange jellybeans. Also, there are 200 jellybeans in the jar total and we need to find out how many are red. We know that they are for a total of 200 jellybeans and already know how many blue, purple and orange there are. So the equation will be:\n",
      "\n",
      "14 (blue) + 26 (purple) + 40 (orange) +  x (red) = 200 (total)\n",
      "\n",
      "Where x is the red jellybeans. Let’s solve for x:\n",
      "\n",
      "14 + 26 + 40 + x = 200\n",
      "\n",
      "100 + x = 200\n",
      "\n",
      "x = 200 - 100\n",
      "\n",
      "x = 100\n",
      "\n",
      "So, there are 100 red jellybeans.\n",
      "\n",
      "<end_working_out>\n",
      "\n",
      "<SOLUTION>100</SOLUTION> \n",
      "Extracted:\n",
      "100\n",
      "******************** Question:\n",
      "Five adults and two children go to see a movie and buy $12 worth of concessions. The total cost of their trip is $76. If each child's ticket is $7, how much, in dollars, are the adult tickets? \n",
      "Answer:\n",
      "10 \n",
      "Response:\n",
      "Here's the working out:\n",
      "\n",
      "<start_working_out>\n",
      "\n",
      "</start_working_out>\n",
      "\n",
      "Here’s the solution:\n",
      "\n",
      "<SOLUTION>\n",
      "\n",
      "</SOLUTION>\n",
      " \n",
      "Extracted:\n",
      "None\n",
      "******************** Question:\n",
      "Janelle had 26 green marbles. Then she bought 6 bags of blue marbles. There were 10 marbles in each bag.  She created a gift of 6 green marbles and 8 blue marbles and gave it to a friend. How many marbles does Janelle have now? \n",
      "Answer:\n",
      "72 \n",
      "Response:\n",
      "Janelle started with 26 green marbles. She also bought 6 bags of blue marbles, and in each bag there were 10 marbles. The total number of blue marbles is 6 * 10 = 60. \n",
      "\n",
      "The total number of marbles Janelle has is 26 + 60 = 86. \n",
      "\n",
      "Janelle created a gift of 6 green marbles and 8 blue marbles.\n",
      "\n",
      "The number of green marbles left is 26 - 6 = 20.\n",
      "The number of blue marbles left is 60 - 8 = 52.\n",
      "\n",
      "Total number of marbles Janelle has is 20 + 52 = 72. \n",
      "\n",
      "The solution is between <SOLUTION></SOLUTION>\n",
      "\n",
      "Janelle started with 26 green marbles. She also bought 6 bags of blue marbles, and in each bag there were 10 marbles. The total number of blue marbles is 6 * 10 = 60. \n",
      "\n",
      "The total number of marbles Janelle has is 26 + 60 = 86. \n",
      "\n",
      "Janelle created a gift of 6 green marbles and 8 blue marbles.\n",
      "\n",
      "The number of green marbles left is 26 - 6 = 20.\n",
      "The number of blue marbles left is 60 - 8 = 52.\n",
      "\n",
      "Total number of marbles Janelle has is 20 + 52 = 72. \n",
      "\n",
      "The answer is 72. <SOLUTION></SOLUTION> \n",
      "Extracted:\n",
      "26\n",
      "******************** Question:\n",
      "Every day in a week, Siena bookmarks 30 website pages from the research she does on her browser. If Siena has 400 bookmarked pages on her bookmarks library now, how many pages will she have in her bookmarks library at the end of March? \n",
      "Answer:\n",
      "1330 \n",
      "Response:\n",
      "Here's a breakdown of the solution, as requested:\n",
      "\n",
      "<start_working_out>\n",
      "\n",
      "**1. Determine the number of days from the beginning of March to the end of March:** March has 31 days.\n",
      "\n",
      "**2. Determine the number of days from the beginning of March to the end of March** which means the # of days from the beginning of March to the end of March is 31 days (since March has 31 days).\n",
      "\n",
      "**3. Calculate the total number of booksmarked pages that will be on her bookmarks library at the end of March:** \n",
      "*   She will have on her bookmark library. 400 bookmarked pages.\n",
      "*   On a given day. Sien will have 30 new bookmarked pages.\n",
      "*   During the month of March. Sien will have 30 new bookmarked pages.\n",
      "*   So at the end of March. Sien will have 400 + 30 = 430 bookmarked pages.\n",
      "*   She will have on her bookmark library. 400 pages.\n",
      "*   She will have on her bookmark library. 30 pages.\n",
      "... [repeated '30 pages' lines truncated] ...\n",
      "Extracted:\n",
      "None\n",
      "******************** Question:\n",
      "An old pirate wants to leave his treasure on an island. He has 3500 gold coins. He wants to spread this equally across 5 chests. Additionally, he will put a total of 500 silver coins and twice as many bronze coins as silver, all distributed equally across the chests. How many coins in total will be in each chest? \n",
      "Answer:\n",
      "1000 \n",
      "Response:\n",
      "Here's the breakdown of the problem and the solution:\n",
      "\n",
      "<start_working_out>\n",
      "\n",
      "**1. Calculate the Gold Coins per Chest:**\n",
      "\n",
      "*   Gold Coins: 3500\n",
      "*   Number of Chests: 5\n",
      "*   Gold Coins per Chest: 3500 / 5 = 700\n",
      "\n",
      "**2. Calculate the Silver Coins per Chest:**\n",
      "\n",
      "*   Silver Coins: 500\n",
      "*   Number of Chests: 5\n",
      "*   Silver Coins per Chest: 500 / 5 = 100\n",
      "\n",
      "**3. Calculate the Bronze Coins per Chest:**\n",
      "\n",
      "*   Bronze Coins: 500 * 2 = 1000\n",
      "*   Number of Chests: 5\n",
      "*   Bronze Coins per Chest: 1000 / 5 = 200\n",
      "\n",
      "**4. Calculate the Total Coins per Chest:**\n",
      "\n",
      "*   Gold Coins per Chest: 700\n",
      "*   Silver Coins per Chest: 100\n",
      "*   Bronze Coins per Chest: 200\n",
      "*   Total Coins per Chest: 700 + 100 + 200 = 1000\n",
      "\n",
      "<end_working_out>\n",
      "\n",
      "<SOLUTION></SOLUTION>\n",
      "\n",
      "Each chest will contain 1000 coins in total.\n",
      " \n",
      "Extracted:\n",
      "1000\n"
     ]
    }
   ],
   "source": [
    "# Create the GRPO trainer\n",
    "# The trainer ties together the model, reward functions, training arguments, and dataset\n",
    "trainer = GRPOTrainer(\n",
    "    model = model,                  # model to train\n",
    "    processing_class = tokenizer,   # tokenizer (used for text processing)\n",
    "    \n",
    "    # Reward function list: these functions score the quality of model outputs\n",
    "    reward_funcs = [\n",
    "        match_format_exactly,           # reward 1: strict format match\n",
    "        match_format_approximately,     # reward 2: approximate format match\n",
    "        check_answer,                   # reward 3: answer correctness\n",
    "        check_numbers,                  # reward 4: numeric extraction\n",
    "    ],\n",
    "    \n",
    "    args = training_args,           # training configuration\n",
    "    train_dataset = dataset,        # training dataset\n",
    ")\n",
    "\n",
    "print(\"GRPO trainer created!\")\n",
    "print(\"Reward functions included:\")\n",
    "print(\"1. match_format_exactly - checks the full format\")\n",
    "print(\"2. match_format_approximately - checks tag usage\")\n",
    "print(\"3. check_answer - checks answer correctness\")\n",
    "print(\"4. check_numbers - checks numeric extraction\")\n",
    "print(\"\\nStarting training...\")\n",
    "\n",
    "# Start GRPO training\n",
    "# Note: training prints verbose debug info, including questions, answers, and model outputs\n",
    "trainer.train()"
   ]
  },
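  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "The \"Extracted: None\" lines in the training log above come from pulling the answer out of the `<SOLUTION>...</SOLUTION>` tags. As a minimal sketch (a hypothetical helper for illustration, not the notebook's own `check_answer`/`check_numbers` implementation), this extraction can be done with a regex that returns `None` when the tags are missing or empty:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch of solution extraction (hypothetical helper; the real reward\n",
    "# functions defined earlier may differ). Empty or missing tags yield None,\n",
    "# which is what the training log prints as \"Extracted: None\".\n",
    "import re\n",
    "\n",
    "def extract_solution(text):\n",
    "    m = re.search(r\"<SOLUTION>(.*?)</SOLUTION>\", text, flags=re.DOTALL)\n",
    "    if m is None:\n",
    "        return None\n",
    "    content = m.group(1).strip()\n",
    "    return content if content else None\n",
    "\n",
    "print(extract_solution(\"<SOLUTION>100</SOLUTION>\"))  # 100\n",
    "print(extract_solution(\"<SOLUTION></SOLUTION>\"))     # None\n"
   ]
  },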
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "## Step 10: Test the Trained Model\n",
    "\n",
    "With training complete, let's check whether the model has learned to answer in the format we expect. We will:\n",
    "\n",
    "1. **Build a test message**: use the system prompt plus a new math question\n",
    "2. **Generate an answer**: have the fine-tuned model answer the question\n",
    "3. **Inspect the output**: check whether the model follows the format we defined\n",
    "\n",
    "This test helps us verify the effect of GRPO training.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Build the test messages\n",
    "# Use the same system prompt as in training, but ask a new question\n",
    "messages = [\n",
    "    {\"role\": \"system\", \"content\": system_prompt},  # same format instructions\n",
    "    {\"role\": \"user\",   \"content\": \"What is the sqrt of 101?\"},  # a new math question\n",
    "]\n",
    "\n",
    "# Convert the messages into the model's input format\n",
    "text = tokenizer.apply_chat_template(\n",
    "    messages,\n",
    "    add_generation_prompt = True,  # add the generation prompt so the model starts answering\n",
    "    tokenize = False,              # keep as text for now, no tokenization yet\n",
    ")\n",
    "\n",
    "print(\"Test question: What is the sqrt of 101?\")\n",
    "print(\"Expected output format:\")\n",
    "print(\"- reasoning wrapped in <start_working_out> ... <end_working_out>\")\n",
    "print(\"- final answer wrapped in <SOLUTION> ... </SOLUTION>\")\n",
    "print(\"\\nModel output:\")\n",
    "\n",
    "# Import the text streamer to display generation in real time\n",
    "from transformers import TextStreamer\n",
    "\n",
    "# Generate the answer\n",
    "_ = model.generate(\n",
    "    **tokenizer(text, return_tensors = \"pt\").to(\"cuda\"),  # tensorize the input and move it to the GPU\n",
    "    max_new_tokens = 64,       # cap the output length (increase as needed)\n",
    "    \n",
    "    # Generation parameters recommended for Gemma-3\n",
    "    temperature = 1.0,         # controls randomness of the output\n",
    "    top_p = 0.95,              # nucleus sampling parameter\n",
    "    top_k = 64,                # top-k sampling parameter\n",
    "    \n",
    "    # Real-time output stream\n",
    "    streamer = TextStreamer(tokenizer, skip_prompt = True),  # skip the prompt, show only generated text\n",
    ")\n",
    "\n",
    "# <start_working_out>\n",
    "# We want to find the square root of 101, which is written as √101.\n",
    "\n",
    "# Since 101 is a prime number, its only factors are 1 and 101. Therefore, its square root is not an integer."
   ]
  },
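  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "As a quick plain-Python reference point (no model required), we can compute the value the model's `<SOLUTION>` answer should approximate:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Plain-Python sanity check (no model needed): the reference value for the\n",
    "# test question \"What is the sqrt of 101?\".\n",
    "import math\n",
    "\n",
    "ref = math.sqrt(101)\n",
    "print(f\"sqrt(101) = {ref:.4f}\")  # 10.0499\n"
   ]
  },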
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "## Step 11: Save the Trained Model\n",
    "\n",
    "Once training is done and the results are verified, we need to save the model for later use. There are several options:\n",
    "\n",
    "1. **Save the LoRA adapter**: save only the trained LoRA weights (small files, recommended)\n",
    "2. **Save the full model**: merge the LoRA weights into the base model and save\n",
    "3. **Save in GGUF format**: save as quantized GGUF, convenient for deployment\n",
    "\n",
    "### Saving the LoRA Adapter (Recommended)\n",
    "\n",
    "This saves only the LoRA weights added during training. The files are small (typically tens of MB), and at load time they must be paired with the original base model.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Save the LoRA adapter (recommended)\n",
    "# This saves only the LoRA weights added during training, so the files are small\n",
    "print(\"Saving the LoRA adapter...\")\n",
    "\n",
    "model.save_pretrained(\"gemma-3\")      # save the model (including LoRA weights)\n",
    "tokenizer.save_pretrained(\"gemma-3\")  # save the tokenizer\n",
    "\n",
    "print(\"✓ LoRA adapter and tokenizer saved to the 'gemma-3' directory\")\n",
    "print(\"Saved files:\")\n",
    "print(\"- adapter_config.json: LoRA configuration\")\n",
    "print(\"- adapter_model.safetensors: LoRA weights\")\n",
    "print(\"- tokenizer files\")\n",
    "print(\"\\nHow to use:\")\n",
    "print(\"1. Load the original Gemma3-4B model first\")\n",
    "print(\"2. Then load this LoRA adapter\")\n",
    "print(\"3. Together they give you the fine-tuned model\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "### Optional: Save the Full Model\n",
    "\n",
    "If you want to merge the LoRA weights into the base model and save a complete model, use the following code:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: save the full fine-tuned model\n",
    "# Merge the LoRA weights into the base model and write out a complete model\n",
    "if False:  # set to True to save\n",
    "    print(\"Saving the full fine-tuned model...\")\n",
    "    model.save_pretrained_merged(\"gemma-3-finetune\", tokenizer)\n",
    "    print(\"✓ Full model saved to the 'gemma-3-finetune' directory\")\n",
    "    print(\"Note: the full model is large (several GB), but it can be used without the original model\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "### Optional: Save in GGUF Format\n",
    "\n",
    "GGUF is an optimized model format with quantization support, well suited for deployment in resource-constrained environments:\n"
   ]
  },
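  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "raw"
    }
   },
   "source": [
    "To get a feel for the quantization trade-off, here is a back-of-envelope size estimate (assuming roughly 4e9 parameters for Gemma3-4B; actual GGUF files also contain metadata, so real sizes differ somewhat):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Back-of-envelope model size estimate (assumption: ~4e9 parameters for\n",
    "# Gemma3-4B; real GGUF files include metadata, so actual sizes differ a bit).\n",
    "params = 4e9\n",
    "print(f\"F16/BF16: ~{params * 2 / 1e9:.0f} GB\")  # 2 bytes per weight\n",
    "print(f\"Q8_0:     ~{params * 1 / 1e9:.0f} GB\")  # ~1 byte per weight\n"
   ]
  },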
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: save in GGUF format\n",
    "# GGUF supports quantization, giving smaller files and faster inference\n",
    "if False:  # set to True to save\n",
    "    print(\"Saving the model in GGUF format...\")\n",
    "    model.save_pretrained_gguf(\n",
    "        \"gemma-3-finetune\",\n",
    "        quantization_type = \"Q8_0\",  # quantization type: currently Q8_0, BF16, and F16 are supported\n",
    "    )\n",
    "    print(\"✓ GGUF model saved\")\n",
    "    print(\"Highlights:\")\n",
    "    print(\"- smaller files (via quantization)\")\n",
    "    print(\"- faster inference\")\n",
    "    print(\"- suitable for edge-device deployment\")\n",
    "    print(\"- loadable with tools such as llama.cpp\")"
   ]
  }
 ],
 "metadata": {
  "fileId": "280337d0-7469-42d6-a465-e0acc4f9fe07",
  "filePath": "/opt/tiger/test0/gemma.ipynb",
  "kernelspec": {
   "display_name": "uni",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
