{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "b9847ea9-bb12-407e-b6a5-fd7f1bc07edf",
   "metadata": {},
   "source": [
    "## Data Preparation"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0177bc62-3760-4bd8-8d9a-1629dc15a382",
   "metadata": {},
   "source": [
    "Install dependencies"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "f66afbaa-f970-413a-bc76-35ea22f7f01e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# !pip install --upgrade huggingface_hub\n",
    "# !pip install transformers\n",
    "# !pip install accelerate\n",
    "# !pip install datasets\n",
    "# !pip install modelscope\n",
    "# !pip install peft\n",
    "# !pip install deepspeed"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "75f926d8-1c02-4411-82b9-6e93848a57fa",
   "metadata": {},
   "source": [
     "## Download the Model and Fine-Tuning Data\n",
     "\n",
     "Within mainland China, models can be downloaded from **modelscope** or **HF-mirror**.\n",
     "\n",
     "**modelscope command to download the model:**\n",
     "\n",
     "`modelscope download --model Qwen/Qwen2.5-0.5B --local_dir /root/autodl-tmp/model`\n",
     "\n",
     "---\n",
     "\n",
     "**huggingface command to download the dataset:**\n",
     "\n",
     "First point Hugging Face at the mirror, e.g. `export HF_ENDPOINT=https://hf-mirror.com` in the shell, or `os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'` in Python\n",
     "\n",
     "\n",
     "`huggingface-cli download --repo-type dataset --resume-download BelleGroup/train_3.5M_CN --local-dir /root/autodl-tmp/dataset/sft_data`\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e78fce7e-0e8f-4a63-af77-1db9b4666b26",
   "metadata": {},
   "source": [
    "## Import Required Dependencies"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "f78a7e09-6fb6-4381-bef7-6773c57aaa93",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import json\n",
    "from tqdm import tqdm\n",
    "\n",
     "# For converting the data into torch tensors\n",
    "import torch\n",
    "from torch.utils.data import Dataset\n",
    "\n",
     "# For loading the base model\n",
    "from transformers import AutoModelForCausalLM, AutoTokenizer"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "52f6b1d6-5f98-4860-bdf8-04d14f8ff223",
   "metadata": {},
   "source": [
     "Common issues:\n",
     "- \"Model type not recognized\" error: the installed transformers version is too old to support the newest models. Try upgrading transformers, or edit the model's config.json to declare an older, compatible model type"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cc7cf069-577c-446d-a708-d9df65d220de",
   "metadata": {},
   "source": [
     "## Load the Tokenizer\n",
     "\n",
     "For the data-processing stage we mainly need the **Tokenizer**, which converts a piece of natural-language text into the token IDs the model can understand"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "f848e81b-a8ed-41c6-881a-c9eca0f22d97",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/root/miniconda3/lib/python3.8/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory\n",
      "  warn(f\"Failed to load image Python extension: {e}\")\n"
     ]
    }
   ],
   "source": [
     "# Local path to the model\n",
    "model_path = \"/root/autodl-tmp/qwen2_05b\"\n",
    "\n",
     "# Load the tokenizer and the model\n",
    "tokenizer = AutoTokenizer.from_pretrained(model_path)\n",
    "\n",
    "model = AutoModelForCausalLM.from_pretrained(model_path)"
   ]
  },
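  {
   "cell_type": "markdown",
   "id": "0f3a9c21-5b1e-4d2a-9c77-3e8d41a0b1aa",
   "metadata": {},
   "source": [
    "As a quick sanity check of the tokenizer just loaded, we can round-trip a sentence through it (a minimal sketch; the exact IDs depend on the Qwen2.5 vocabulary, so none are shown here):\n",
    "\n",
    "```python\n",
    "text = \"你好，Qwen!\"\n",
    "ids = tokenizer(text).input_ids  # text -> token IDs\n",
    "print(ids)\n",
    "print(tokenizer.decode(ids))     # token IDs -> round-tripped text\n",
    "```"
   ]
  },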
  {
   "cell_type": "markdown",
   "id": "a76c39c2-f41e-499d-a6a2-1b5508f7aaea",
   "metadata": {},
   "source": [
     "## Load the Training Data\n",
     "\n",
     "Just pass the local path of the data file"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "21bac68e-aa80-4c00-be6f-e962850c483e",
   "metadata": {},
   "outputs": [],
   "source": [
    "from datasets import load_dataset\n",
    "\n",
    "ds = load_dataset('json', data_files='/root/autodl-tmp/dataset/BelleGroup_3_5M_CN/train_3.5M_CN.json')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4c660142-bfc9-408d-abeb-ed67fb301ede",
   "metadata": {},
   "source": [
     "**The first thing to do with new data: inspect its structure**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "10d54798-453b-4025-824d-d9fec80b2188",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "DatasetDict({\n",
      "    train: Dataset({\n",
      "        features: ['conversations', 'id'],\n",
      "        num_rows: 3606402\n",
      "    })\n",
      "})\n"
     ]
    }
   ],
   "source": [
    "print(ds)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d28f7ded-4faf-49db-a2a7-96fd5f0a989a",
   "metadata": {},
   "source": [
     "**The dataset contains a single key named `train`, whose `Dataset` has two features: `conversations` and `id`.**\n",
     "\n",
     "So let's look at a sample from the `train` split"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "bf45c15e-b6e6-416a-b90d-c98ca974b311",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'conversations': [{'from': 'human',\n",
       "   'value': '针对健身房的新手，设计一套适合他们的健身器械使用指南，包括安全应用、正确姿势等方面。'},\n",
       "  {'from': 'assistant',\n",
       "   'value': '健身器械使用指南\\n1. 开始前，请先进行热身运动。这会帮助你的身体适应运动，并减少受伤的风险。\\n2. 在使用健身器械前，确保你已经了解了其使用方法。请阅读说明书或咨询教练以获得正确的使用技巧。\\n3. 谨防过度使用或过度挑战你的身体。 如果你觉得有些动作太难或太重，请添加锻炼计划，以逐步提高动作难度。\\n4. 使用合适的装备。 确保你拥有合适的运动鞋和舒适的运动服。 不要在裸露的脚或短裤上进行重量训练。\\n5. 在健身器械上使用安全装置。 这些通常用于保护你的身体免受不当操作造成的损伤。 例如，重量训练中，你需要使用杠铃和负重时，一定要使用卡子来防止重量滑落。\\n6. 注意正确的姿势。 如果你的姿势是错误的，那么你的身体很容易被伤害到，你也可能无法获得最佳的锻炼效果。 至关重要的是，保持直立的身体，保持头部和颈部的稳定，并使用合适的重量。\\n7. 保持合理的呼吸方式。 无论何时进行训练，都必须保持正常呼吸。 当你需要用力时，呼气； 当你放松时，吸气。\\n8. 安全存放器械。 在使用健身器械后，你需要把它们归还给适当的位置，以便其他人可以使用它们。\\n总之，健身器械的正确使用是关键之一，如果不健康和不安全，它们将无法帮助您达到您所需的健康成果。 选择适当的训练计划，并为训练提供足够的时间，以备逐渐适应新方法。 对于任何问题，请向教练咨询。'}],\n",
       " 'id': '66182880'}"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "ds['train'][0]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e210c62c-a6b5-4d8d-becb-8b5a19117675",
   "metadata": {},
   "source": [
     "---\n",
     "\n",
     "**We can see that `conversations` is a list of dicts, each consisting of two keys: `from` and `value`.**\n",
     "\n",
     "**Now that we have a clear picture of the dataset's structure, we can start processing it.**\n",
     "\n",
     "---"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d68566f7-4858-4f81-8751-ab3960a1a534",
   "metadata": {},
   "source": [
     "## Defining the Special Tokens\n",
     "\n",
     "In LLM fine-tuning, defining the special tokens is a basic but critical step: these tokens help the model understand the structure of the input and the division of roles.\n",
     "\n",
     "### 1. Basic special tokens  \n",
     "- **BOS (Beginning Of Sequence)**:  \n",
     "  Defined in the code as `im_start = tokenizer(\"<|im_start|>\").input_ids`. `<|im_start|>` is the start marker of this chat format; the `tokenizer` converts it into `input_ids` the model understands, signalling that \"content starts here\", e.g. at the opening of a dialogue or the start of a task. \n",
     "  \n",
     "- **EOS (End Of Sequence)**:  \n",
     "  Correspondingly, `im_end = tokenizer(\"<|im_end|>\").input_ids`. `<|im_end|>` is the end marker; its `input_ids` let the model recognize that \"this segment ends here\", e.g. when one dialogue turn closes or a single result terminates.  \n",
     "\n",
     "- **PAD (Padding)**:  \n",
     "  `IGNORE_TOKEN_ID = tokenizer.pad_token_id`. During fine-tuning, short sequences are padded so all inputs share the same length. `pad_token_id` is the tokenizer's built-in padding ID; the model ignores the semantics of these filler tokens and focuses on the real content.  \n",
     "\n",
     "- **Newline**:  \n",
     "  `nl_tokens = tokenizer('\\n').input_ids` converts the newline character `\\n` into `input_ids`, so the model can follow line breaks and keep the structure of formatted text (code snippets, multi-paragraph copy, etc.).  \n",
     "\n",
     "### 2. Role identifiers  \n",
     "Fine-tuning usually involves multi-role interaction (for example system, user and assistant). In the code:  \n",
     "```python\n",
     "_system = tokenizer('system').input_ids + nl_tokens  \n",
     "_user = tokenizer('human').input_ids + nl_tokens  \n",
     "_assistant = tokenizer('assistant').input_ids + nl_tokens  \n",
     "```  \n",
     "- The `tokenizer` first converts \"system\", \"human\" and \"assistant\" into `input_ids`, then appends the newline `nl_tokens`. This lets the model clearly separate the speakers: `_system` carries the preset system instruction, `_user` the user's question, and `_assistant` the model's reply, giving the training data an explicit role structure from which the model can learn the logic of dialogue.  \n",
     "\n",
     "These special tokens are, in effect, a set of \"semantic signals\" designed for the model: they let it parse input structure and role interaction precisely during fine-tuning, and they lay the groundwork for adapting an LLM to specific tasks such as dialogue or text generation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "bd61fe0a-f6dc-4352-8f5b-ea16e092f0ba",
   "metadata": {},
   "outputs": [],
   "source": [
     "# Special token definitions\n",
    "\n",
    "# BOS\n",
    "im_start = tokenizer(\"<|im_start|>\").input_ids\n",
    "\n",
    "# EOS\n",
    "im_end = tokenizer(\"<|im_end|>\").input_ids\n",
    "\n",
    "# PAD\n",
    "IGNORE_TOKEN_ID = tokenizer.pad_token_id\n",
    "\n",
     "# Newline\n",
    "nl_tokens = tokenizer('\\n').input_ids\n",
    "\n",
     "# Role identifiers\n",
    "_system = tokenizer('system').input_ids + nl_tokens\n",
    "_user = tokenizer('human').input_ids + nl_tokens\n",
    "_assistant = tokenizer('assistant').input_ids + nl_tokens"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7264e831-3a84-499b-81e0-8d1cb7671000",
   "metadata": {},
   "source": [
     "**The marker `<|im_start|>` was fixed before the model was trained; it is not an arbitrary string. As shown below, if we change a single character of the BOS marker, the tokenizer splits it into a concatenation of several ordinary tokens instead of recognizing one special token**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "206cd1e7-285b-4ce6-ab1f-620ca8e51022",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "input ids for <|im_start|>: [151644]\n",
       "input ids for <|im-start|>: [27, 91, 318, 18935, 91, 29]\n"
     ]
    }
   ],
   "source": [
    "im_start = tokenizer(\"<|im_start|>\").input_ids\n",
    "im_start_wrong = tokenizer(\"<|im-start|>\").input_ids\n",
    "\n",
     "print(f'input ids for <|im_start|>: {im_start}')\n",
     "print(f'input ids for <|im-start|>: {im_start_wrong}')"
   ]
  },
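  {
   "cell_type": "markdown",
   "id": "7d2e6f90-14ab-4c3e-8b55-9a0c2d4e6f10",
   "metadata": {},
   "source": [
    "To confirm that a marker really is registered as a single special token, we can also query the tokenizer directly (a sketch using standard Hugging Face tokenizer attributes):\n",
    "\n",
    "```python\n",
    "# A registered special token maps to exactly one ID\n",
    "print(tokenizer.convert_tokens_to_ids(\"<|im_start|>\"))\n",
    "\n",
    "# List every special token the tokenizer knows about\n",
    "print(tokenizer.all_special_tokens)\n",
    "```"
   ]
  },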
  {
   "cell_type": "markdown",
   "id": "427c48f7-90f6-4cca-bca2-cf83f018a5b2",
   "metadata": {},
   "source": [
     "## Processing the Dataset\n",
     "\n",
     "We convert the data into the ShareGPT format; for details see 《大模型微调数据处理全流程.md》"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "739dbf84-f020-4329-b3cf-595c98418c22",
   "metadata": {},
   "outputs": [],
   "source": [
    "sources = ds[\"train\"]\n",
    "\n",
     "sources = sources.select(range(100))  # take the first 100 samples"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "b9fec6b4-dcfb-4f8f-bf98-2bddec74ee67",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Successfully converted 100 records\n",
      "{\n",
      "  \"system\": \"你是一个有帮助的AI助手。\",\n",
      "  \"messages\": [\n",
      "    {\n",
      "      \"from\": \"human\",\n",
      "      \"value\": \"针对健身房的新手，设计一套适合他们的健身器械使用指南，包括安全应用、正确姿势等方面。\"\n",
      "    },\n",
      "    {\n",
      "      \"from\": \"gpt\",\n",
      "      \"value\": \"健身器械使用指南\\n1. 开始前，请先进行热身运动。这会帮助你的身体适应运动，并减少受伤的风险。\\n2. 在使用健身器械前，确保你已经了解了其使用方法。请阅读说明书或咨询教练以获得正确的使用技巧。\\n3. 谨防过度使用或过度挑战你的身体。 如果你觉得有些动作太难或太重，请添加锻炼计划，以逐步提高动作难度。\\n4. 使用合适的装备。 确保你拥有合适的运动鞋和舒适的运动服。 不要在裸露的脚或短裤上进行重量训练。\\n5. 在健身器械上使用安全装置。 这些通常用于保护你的身体免受不当操作造成的损伤。 例如，重量训练中，你需要使用杠铃和负重时，一定要使用卡子来防止重量滑落。\\n6. 注意正确的姿势。 如果你的姿势是错误的，那么你的身体很容易被伤害到，你也可能无法获得最佳的锻炼效果。 至关重要的是，保持直立的身体，保持头部和颈部的稳定，并使用合适的重量。\\n7. 保持合理的呼吸方式。 无论何时进行训练，都必须保持正常呼吸。 当你需要用力时，呼气； 当你放松时，吸气。\\n8. 安全存放器械。 在使用健身器械后，你需要把它们归还给适当的位置，以便其他人可以使用它们。\\n总之，健身器械的正确使用是关键之一，如果不健康和不安全，它们将无法帮助您达到您所需的健康成果。 选择适当的训练计划，并为训练提供足够的时间，以备逐渐适应新方法。 对于任何问题，请向教练咨询。\"\n",
      "    }\n",
      "  ]\n",
      "}\n"
     ]
    }
   ],
   "source": [
     "import json\n",
     "\n",
     "def convert_to_sharegpt_format(data, system_prompt=\"你是一个有帮助的AI助手。\"):\n",
     "    \"\"\"\n",
     "    Convert multi-turn conversation data into the ShareGPT format.\n",
     "    \n",
     "    Args:\n",
     "        data: list of raw records, each containing a `conversations` field\n",
     "        system_prompt: system prompt, defaults to a generic assistant description\n",
     "    \"\"\"\n",
     "    sharegpt_data = []\n",
     "    \n",
     "    for item in data:\n",
     "        # Extract the conversation list\n",
     "        conversations = item.get(\"conversations\", [])\n",
     "        \n",
     "        # Skip records without conversations\n",
     "        if not conversations:\n",
     "            continue\n",
     "        \n",
     "        # Build the ShareGPT record\n",
     "        sharegpt_item = {\n",
     "            \"system\": system_prompt,\n",
     "            \"messages\": []\n",
     "        }\n",
     "        \n",
     "        # Convert each turn\n",
     "        for conv in conversations:\n",
     "            role = conv.get(\"from\")\n",
     "            content = conv.get(\"value\", \"\").strip()\n",
     "            \n",
     "            # Skip invalid turns\n",
     "            if not role or not content:\n",
     "                continue\n",
     "            \n",
     "            # Rename the role (assistant -> gpt)\n",
     "            sharegpt_role = \"gpt\" if role == \"assistant\" else role\n",
     "            \n",
     "            # Append to the messages list\n",
     "            sharegpt_item[\"messages\"].append({\n",
     "                \"from\": sharegpt_role,\n",
     "                \"value\": content\n",
     "            })\n",
     "        \n",
     "        # Keep only valid conversations (at least one human message)\n",
     "        if any(msg[\"from\"] == \"human\" for msg in sharegpt_item[\"messages\"]):\n",
     "            sharegpt_data.append(sharegpt_item)\n",
     "    \n",
     "    return sharegpt_data\n",
     "\n",
     "\n",
     "# Convert the data\n",
     "sharegpt_data = convert_to_sharegpt_format(sources)\n",
     "\n",
     "# Show the result\n",
     "if sharegpt_data:\n",
     "    print(f\"Successfully converted {len(sharegpt_data)} records\")\n",
     "    # Print the first example\n",
     "    print(json.dumps(sharegpt_data[0], ensure_ascii=False, indent=2))\n",
     "else:\n",
     "    print(\"Conversion failed: no valid conversations found\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "3ec73811-a164-48ef-9f35-d8c7dcc68907",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Structured text with special markers:\n",
      "\n",
      "\n",
      "<|im_start|>system\n",
      "你是一个有帮助的AI助手。\n",
      "<|im_start|>human\n",
      "给定一段文本和关键词列表，删除文本中包含所有给定关键词的子字符串。\n",
      "文本：\"这是一个测试句子，目的是看看模型是否可以正确地从这个句子中删除关键词。\"\\n关键词列表：[‘测试’，‘模型’]\n",
      "<|im_start|>assistant\n",
      "删除包含所有给定关键词的子字符串后，文本变为：\"这是一个句子，目的是看看是否可以正确地从这个句子中删除关键词。\"<|im_end|>\n",
      "<|im_start|>human\n",
      "好的。现在请你将这个文本中的所有的逗号都替换成空格。\n",
      "<|im_start|>assistant\n",
      "好的，请稍等一下，现在我会将文本中的所有逗号替换为空格。处理后文本为：\"这是一个句子 目的是看看是否可以正确地从这个句子中删除关键词。\"。处理结果如何？<|im_end|>\n"
     ]
    }
   ],
   "source": [
     "from typing import List, Dict\n",
     "\n",
     "def convert_sharegpt_to_formatted_text(sharegpt_data: List[Dict]):\n",
     "    \"\"\"\n",
     "    Concatenate ShareGPT-format data into structured text sequences.\n",
     "    \n",
     "    Args:\n",
     "        sharegpt_data: list of ShareGPT-format records\n",
     "    \"\"\"\n",
     "    formatted_texts = []\n",
     "    \n",
     "    # Define the special markers\n",
     "    im_start = \"<|im_start|>\"\n",
     "    im_end = \"<|im_end|>\"\n",
     "    nl_tokens = \"\\n\"\n",
     "    _system = f\"system{nl_tokens}\"\n",
     "    _user = f\"human{nl_tokens}\"\n",
     "    _assistant = f\"assistant{nl_tokens}\"\n",
     "    \n",
     "    for item in sharegpt_data:\n",
     "        system_prompt = item.get(\"system\", \"\")\n",
     "        messages = item.get(\"messages\", [])\n",
     "        \n",
     "        if not messages:\n",
     "            continue\n",
     "            \n",
     "        # Concatenate the structured text\n",
     "        formatted_text = \"\"\n",
     "        \n",
     "        # Add the system prompt\n",
     "        if system_prompt:\n",
     "            formatted_text += f\"{im_start}{_system}{system_prompt}{nl_tokens}\"\n",
     "        \n",
     "        # Concatenate the dialogue turns\n",
     "        for msg in messages:\n",
     "            role = msg.get(\"from\")\n",
     "            content = msg.get(\"value\", \"\").strip()\n",
     "            \n",
     "            if not role or not content:\n",
     "                continue\n",
     "                \n",
     "            # Pick the marker for this role\n",
     "            if role == \"human\":\n",
     "                role_identifier = f\"{im_start}{_user}\"\n",
     "            elif role == \"gpt\":\n",
     "                role_identifier = f\"{im_start}{_assistant}\"\n",
     "            else:\n",
     "                role_identifier = f\"{im_start}{role}{nl_tokens}\"\n",
     "            \n",
     "            # Concatenate role and content\n",
     "            formatted_text += f\"{role_identifier}{content}\"\n",
     "            \n",
     "            # Add the end marker after assistant replies\n",
     "            if role == \"gpt\":\n",
     "                formatted_text += f\"{im_end}{nl_tokens}\"\n",
     "            else:\n",
     "                formatted_text += f\"{nl_tokens}\"\n",
     "        \n",
     "        formatted_texts.append(formatted_text.strip())\n",
     "    \n",
     "    return formatted_texts\n",
     "\n",
     "\n",
     "# Build the structured texts\n",
     "formatted_text = convert_sharegpt_to_formatted_text(sharegpt_data)\n",
     "print(\"Structured text with special markers:\\n\\n\")\n",
     "print(formatted_text[1])\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "6c4a3773-05dc-4ce4-a8ed-562a3676b5bc",
   "metadata": {},
   "outputs": [],
   "source": [
     "import torch\n",
     "from typing import List, Dict\n",
     "\n",
     "\n",
     "def process_texts_to_dataset(\n",
     "    text_list: List[str],\n",
     "    tokenizer,\n",
     "    max_length: int\n",
     ") -> Dict[str, torch.Tensor]:\n",
     "    \"\"\"\n",
     "    Convert a list of structured texts into model inputs (input_ids / labels / attention_mask).\n",
     "    \n",
     "    Args:\n",
     "        text_list: structured texts (each already marked with <|im_start|>/<|im_end|>)\n",
     "        tokenizer: a pretrained tokenizer instance\n",
     "        max_length: maximum sequence length (truncation & padding target)\n",
     "    \"\"\"\n",
     "    # Handle a missing pad_token (fall back to eos_token)\n",
     "    if tokenizer.pad_token is None:\n",
     "        tokenizer.pad_token = tokenizer.eos_token\n",
     "    \n",
     "    all_input_ids = []\n",
     "    all_labels = []\n",
     "    all_attention_mask = []\n",
     "    \n",
     "    # Markers delimiting assistant content\n",
     "    ASSISTANT_START = \"<|im_start|>assistant\\n\"\n",
     "    ASSISTANT_END = \"<|im_end|>\"\n",
     "    \n",
     "    # Token IDs of the markers\n",
     "    start_tokens = tokenizer.encode(ASSISTANT_START, add_special_tokens=False)\n",
     "    end_token = tokenizer.encode(ASSISTANT_END, add_special_tokens=False)[0]\n",
     "    \n",
     "    for text in text_list:\n",
     "        # 1. Tokenize the text and truncate\n",
     "        tokens = tokenizer.encode(text, max_length=max_length, truncation=True)\n",
     "        input_ids = tokens.copy()\n",
     "        \n",
     "        # 2. Build the attention_mask\n",
     "        attention_mask = [1] * len(input_ids)\n",
     "        \n",
     "        # 3. Initialize labels (all -100; only assistant content is kept later)\n",
     "        labels = [-100] * len(input_ids)\n",
     "        \n",
     "        # 4. Locate assistant content and fill in labels\n",
     "        i = 0\n",
     "        while i < len(input_ids) - len(start_tokens):\n",
     "            # Detect the start of an assistant block\n",
     "            if input_ids[i:i+len(start_tokens)] == start_tokens:\n",
     "                start_pos = i + len(start_tokens)\n",
     "                i = start_pos  # skip the marker\n",
     "                \n",
     "                # Find the end of the assistant block\n",
     "                end_pos = len(input_ids)\n",
     "                for j in range(start_pos, len(input_ids)):\n",
     "                    if input_ids[j] == end_token:\n",
     "                        end_pos = j\n",
     "                        break\n",
     "                \n",
     "                # Mark the assistant content as valid labels\n",
     "                if start_pos < end_pos:\n",
     "                    labels[start_pos:end_pos] = input_ids[start_pos:end_pos]\n",
     "                i = end_pos + 1  # skip the end marker\n",
     "            else:\n",
     "                i += 1\n",
     "        \n",
     "        # 5. Filter out invalid samples (no assistant content)\n",
     "        if all(l == -100 for l in labels):\n",
     "            continue\n",
     "        \n",
     "        # 6. Save the processed sample\n",
     "        all_input_ids.append(input_ids)\n",
     "        all_labels.append(labels)\n",
     "        all_attention_mask.append(attention_mask)\n",
     "    \n",
     "    # 7. Pad the batch to a uniform length\n",
     "    if not all_input_ids:\n",
     "        raise ValueError(\"No valid assistant content in the input; cannot build the dataset\")\n",
     "    \n",
     "    max_len = max(len(ids) for ids in all_input_ids)\n",
     "    pad_id = tokenizer.pad_token_id\n",
     "    \n",
     "    # Initialize the tensors\n",
     "    input_ids_tensor = torch.full((len(all_input_ids), max_len), pad_id, dtype=torch.long)\n",
     "    labels_tensor = torch.full((len(all_labels), max_len), -100, dtype=torch.long)\n",
     "    attention_mask_tensor = torch.zeros((len(all_attention_mask), max_len), dtype=torch.long)\n",
     "    \n",
     "    # Fill the tensors\n",
     "    for i, (ids, labs, mask) in enumerate(zip(all_input_ids, all_labels, all_attention_mask)):\n",
     "        input_ids_tensor[i, :len(ids)] = torch.tensor(ids)\n",
     "        labels_tensor[i, :len(labs)] = torch.tensor(labs)\n",
     "        attention_mask_tensor[i, :len(mask)] = torch.tensor(mask)\n",
     "    \n",
     "    return {\n",
     "        \"input_ids\": input_ids_tensor,\n",
     "        \"labels\": labels_tensor,\n",
     "        \"attention_mask\": attention_mask_tensor\n",
     "    }\n",
     "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "101289a0-6de5-408c-97a7-2c00cd33a30a",
   "metadata": {},
   "outputs": [],
   "source": [
    "x = process_texts_to_dataset(formatted_text, tokenizer, 1024)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "76cb41ce-dc87-4749-ace6-05d794478a48",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Original text (with special markers):\n",
      "\n",
      "<|im_start|>system\n",
      "你是一个有帮助的AI助手。\n",
      "<|im_start|>human\n",
      "针对健身房的新手，设计一套适合他们的健身器械使用指南，包括安全应用、正确姿势等方面。\n",
      "<|im_start|>assistant\n",
      "健身器械使用指南\n",
      "1. 开始前，请先进行热身运动。这会帮助你的身体适应运动，并减少受伤的风险。\n",
      "2. 在使用健身器械前，确保你已经了解了其使用方法。请阅读说明书或咨询教练以获得正确的使用技巧。\n",
      "3. 谨防过度使用或过度挑战你的身体。 如果你觉得有些动作太难或太重，请添加锻炼计划，以逐步提高动作难度。\n",
      "4. 使用合适的装备。 确保你拥有合适的运动鞋和舒适的运动服。 不要在裸露的脚或短裤上进行重量训练。\n",
      "5. 在健身器械上使用安全装置。 这些通常用于保护你的身体免受不当操作造成的损伤。 例如，重量训练中，你需要使用杠铃和负重时，一定要使用卡子来防止重量滑落。\n",
      "6. 注意正确的姿势。 如果你的姿势是错误的，那么你的身体很容易被伤害到，你也可能无法获得最佳的锻炼效果。 至关重要的是，保持直立的身体，保持头部和颈部的稳定，并使用合适的重量。\n",
      "7. 保持合理的呼吸方式。 无论何时进行训练，都必须保持正常呼吸。 当你需要用力时，呼气； 当你放松时，吸气。\n",
      "8. 安全存放器械。 在使用健身器械后，你需要把它们归还给适当的位置，以便其他人可以使用它们。\n",
      "总之，健身器械的正确使用是关键之一，如果不健康和不安全，它们将无法帮助您达到您所需的健康成果。 选择适当的训练计划，并为训练提供足够的时间，以备逐渐适应新方法。 对于任何问题，请向教练咨询。<|im_end|>\n",
      "========================================================================================================================================================================================================\n",
      "\n",
       "Assistant reply (the part the model must predict):\n",
      "\n",
      "健身器械使用指南\n",
      "1. 开始前，请先进行热身运动。这会帮助你的身体适应运动，并减少受伤的风险。\n",
      "2. 在使用健身器械前，确保你已经了解了其使用方法。请阅读说明书或咨询教练以获得正确的使用技巧。\n",
      "3. ���防过度使用或过度挑战你的身体。 如果你觉得有些动作太难或太重，请添加锻炼计划，以逐步提高动作难度。\n",
      "4. 使用合适的装备。 ���保你拥有合适的运动鞋和舒适的运动服。 不要在裸露的脚或短裤上进行重量训练。\n",
      "5. 在健身器械上使用安全装置。 ��些通常用于保护你的身体免受不当操作造成的损伤。 例如，重量训练中，你需要使用杠铃和负重时，一定要使用卡子来防止重量滑落。\n",
      "6. 注意正确的姿势。 如果你的姿势是错误的，那么你的身体很容易被伤害到，你也可能无法获得最佳的锻炼效果。 ��关重要的是，保持直立的身体，保持头部和颈部的稳定，并使用合适的重量。\n",
      "7. 保持合理的呼吸方式。 无论何时进行训练，都必须保持正常呼吸。 当你需要用力时，呼气； 当你放松时，吸气。\n",
      "8. ��全存放器械。 在使用健身器械后，你需要把它们归还给适当的位置，以便其他人可以使用它们。\n",
      "总之，健身器械的正确使用是关键之一，如果不健康和不安全，它们将无法帮助您达到您所需的健康成果。 选择适当的训练计划，并为训练提供足够的时间，以备逐渐适应新方法。 对于任何问题，请向教练咨询。\n",
      "========================================================================================================================================================================================================\n",
      "\n",
       "input_ids (after filtering pads):\n",
      "\n",
      "[151644, 8948, 198, 56568, 101909, 18830, 100364, 9370, 15469, 110498, 8997, 151644, 25312, 198, 101092, 116620, 104267, 44934, 3837, 70500, 104551, 100231, 104056, 101420, 103278, 37029, 105866, 3837, 100630, 99464, 99892, 5373, 88991, 109078, 102159, 8997, 151644, 77091, 198, 101420, 103278, 37029, 105866, 198, 16, 13, 81947, 26606, 24562, 37945, 60726, 71817, 99259, 95256, 101079, 1773, 43288, 36993, 100364, 103929, 101099, 104117, 101079, 90395, 101940, 105497, 106066, 8997, 17, 13, 73562, 37029, 101420, 103278, 24562, 3837, 103944, 56568, 99461, 99794, 34187, 41146, 37029, 39907, 1773, 14880, 101113, 112645, 57191, 100703, 102562, 23031, 100350, 105045, 37029, 102118, 8997, 18, 13, 8908, 108, 101, 99287, 105831, 37029, 57191, 105831, 104036, 103929, 101099, 1773, 81263, 110043, 101895, 102196, 99222, 99349, 57191, 99222, 29258, 37945, 42855, 104904, 101039, 3837, 23031, 104137, 100627, 102196, 104529, 8997, 19, 13, 85658, 106873, 101076, 1773, 10236, 94, 106, 32463, 56568, 103926, 106873, 101079, 102097, 33108, 111941, 101079, 43209, 1773, 86009, 107215, 107714, 99760, 9370, 100037, 57191, 99534, 102693, 17447, 71817, 106102, 104034, 8997, 20, 13, 73562, 101420, 103278, 17447, 37029, 99464, 104919, 1773, 32181, 247, 97084, 102119, 100751, 100153, 103929, 101099, 99506, 99204, 108486, 40090, 105998, 106722, 1773, 220, 77557, 3837, 106102, 104034, 15946, 3837, 112735, 37029, 103178, 106595, 33108, 99393, 29258, 13343, 3837, 103962, 37029, 99603, 44729, 36407, 104431, 106102, 100243, 99297, 8997, 21, 13, 97161, 105045, 109078, 1773, 81263, 103929, 109078, 20412, 32100, 9370, 3837, 100624, 103929, 101099, 104892, 99250, 104229, 26939, 3837, 107411, 87267, 101068, 100350, 102179, 9370, 104904, 101062, 1773, 58464, 111, 29256, 99335, 100146, 3837, 100662, 73145, 79095, 106214, 3837, 100662, 107200, 33108, 117591, 9370, 100407, 90395, 37029, 106873, 106102, 8997, 22, 13, 220, 100662, 105630, 102357, 75768, 1773, 220, 100783, 107995, 71817, 104034, 3837, 71268, 
100645, 100662, 100416, 102357, 1773, 84897, 112735, 108312, 13343, 3837, 99592, 99180, 24968, 84897, 56568, 105270, 13343, 3837, 99544, 99180, 8997, 23, 13, 41479, 231, 35987, 110732, 103278, 1773, 73562, 37029, 101420, 103278, 33447, 3837, 112735, 99360, 104017, 100040, 97706, 89012, 102618, 104473, 3837, 105920, 106857, 73670, 37029, 104017, 8997, 106279, 3837, 101420, 103278, 9370, 88991, 37029, 20412, 99936, 100653, 3837, 108338, 99722, 33108, 16530, 99464, 3837, 104017, 44063, 101068, 100364, 87026, 100366, 87026, 109988, 99722, 100735, 1773, 220, 50404, 109776, 104034, 101039, 90395, 17714, 104034, 99553, 101447, 101975, 3837, 23031, 56278, 104052, 104117, 16628, 39907, 1773, 69162, 34204, 99885, 86119, 37945, 69041, 102562, 100703, 1773, 151645]\n",
      "========================================================================================================================================================================================================\n",
      "\n",
       "labels (after filtering pads; -100 = ignored):\n",
      "\n",
      "[-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 101420, 103278, 37029, 105866, 198, 16, 13, 81947, 26606, 24562, 37945, 60726, 71817, 99259, 95256, 101079, 1773, 43288, 36993, 100364, 103929, 101099, 104117, 101079, 90395, 101940, 105497, 106066, 8997, 17, 13, 73562, 37029, 101420, 103278, 24562, 3837, 103944, 56568, 99461, 99794, 34187, 41146, 37029, 39907, 1773, 14880, 101113, 112645, 57191, 100703, 102562, 23031, 100350, 105045, 37029, 102118, 8997, 18, 13, 8908, 108, 101, 99287, 105831, 37029, 57191, 105831, 104036, 103929, 101099, 1773, 81263, 110043, 101895, 102196, 99222, 99349, 57191, 99222, 29258, 37945, 42855, 104904, 101039, 3837, 23031, 104137, 100627, 102196, 104529, 8997, 19, 13, 85658, 106873, 101076, 1773, 10236, 94, 106, 32463, 56568, 103926, 106873, 101079, 102097, 33108, 111941, 101079, 43209, 1773, 86009, 107215, 107714, 99760, 9370, 100037, 57191, 99534, 102693, 17447, 71817, 106102, 104034, 8997, 20, 13, 73562, 101420, 103278, 17447, 37029, 99464, 104919, 1773, 32181, 247, 97084, 102119, 100751, 100153, 103929, 101099, 99506, 99204, 108486, 40090, 105998, 106722, 1773, 220, 77557, 3837, 106102, 104034, 15946, 3837, 112735, 37029, 103178, 106595, 33108, 99393, 29258, 13343, 3837, 103962, 37029, 99603, 44729, 36407, 104431, 106102, 100243, 99297, 8997, 21, 13, 97161, 105045, 109078, 1773, 81263, 103929, 109078, 20412, 32100, 9370, 3837, 100624, 103929, 101099, 104892, 99250, 104229, 26939, 3837, 107411, 87267, 101068, 100350, 102179, 9370, 104904, 101062, 1773, 58464, 111, 29256, 99335, 100146, 3837, 100662, 73145, 79095, 106214, 3837, 100662, 107200, 33108, 117591, 9370, 100407, 90395, 37029, 106873, 106102, 8997, 22, 13, 220, 100662, 105630, 102357, 75768, 1773, 220, 100783, 107995, 71817, 104034, 3837, 71268, 100645, 100662, 100416, 102357, 1773, 
84897, 112735, 108312, 13343, 3837, 99592, 99180, 24968, 84897, 56568, 105270, 13343, 3837, 99544, 99180, 8997, 23, 13, 41479, 231, 35987, 110732, 103278, 1773, 73562, 37029, 101420, 103278, 33447, 3837, 112735, 99360, 104017, 100040, 97706, 89012, 102618, 104473, 3837, 105920, 106857, 73670, 37029, 104017, 8997, 106279, 3837, 101420, 103278, 9370, 88991, 37029, 20412, 99936, 100653, 3837, 108338, 99722, 33108, 16530, 99464, 3837, 104017, 44063, 101068, 100364, 87026, 100366, 87026, 109988, 99722, 100735, 1773, 220, 50404, 109776, 104034, 101039, 90395, 17714, 104034, 99553, 101447, 101975, 3837, 23031, 56278, 104052, 104117, 16628, 39907, 1773, 69162, 34204, 99885, 86119, 37945, 69041, 102562, 100703, 1773, -100]\n"
     ]
    }
   ],
   "source": [
     "# x is the output of process_texts_to_dataset\n",
     "sample_idx = 0  # pick the first sample\n",
     "\n",
     "# Extract the sample\n",
     "input_ids = x[\"input_ids\"][sample_idx].tolist()\n",
     "labels = x[\"labels\"][sample_idx].tolist()\n",
     "attention_mask = x[\"attention_mask\"][sample_idx].tolist()\n",
     "\n",
     "# Filter out pad tokens (pad_token_id depends on the model)\n",
     "pad_token_id = tokenizer.pad_token_id or tokenizer.eos_token_id\n",
     "valid_indices = [i for i, idx in enumerate(input_ids) if idx != pad_token_id]\n",
     "valid_input_ids = [input_ids[i] for i in valid_indices]\n",
     "valid_labels = [labels[i] for i in valid_indices]\n",
     "\n",
     "# Decode input_ids back into text\n",
     "original_text = tokenizer.decode(valid_input_ids, skip_special_tokens=False)\n",
     "\n",
     "# Extract the text behind the valid labels (labels != -100)\n",
     "assistant_text_parts = []\n",
     "for i, (token_id, label) in enumerate(zip(valid_input_ids, valid_labels)):\n",
     "    if label != -100:\n",
     "        assistant_text_parts.append(tokenizer.decode([token_id], skip_special_tokens=False))\n",
     "assistant_text = ''.join(assistant_text_parts)\n",
     "\n",
     "# Show the results\n",
     "print(\"Original text (with special markers):\\n\")\n",
     "print(original_text)\n",
     "\n",
     "print(\"==\"*100)\n",
     "\n",
     "print(\"\\nAssistant reply (the part the model must predict):\\n\")\n",
     "print(assistant_text)\n",
     "\n",
     "print(\"==\"*100)\n",
     "\n",
     "print(\"\\ninput_ids (after filtering pads):\\n\")\n",
     "print(valid_input_ids)\n",
     "\n",
     "print(\"==\"*100)\n",
     "\n",
     "print(\"\\nlabels (after filtering pads; -100 = ignored):\\n\")\n",
     "print(valid_labels)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "adfa4102-b476-435e-8e4a-dfc60c62fa6d",
   "metadata": {},
   "source": [
    "## Assembling the Dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "8dbde62b-e51a-4f76-a218-12f45143b61c",
   "metadata": {},
   "outputs": [],
   "source": [
    "def preprocess(dataset, tokenizer, max_length):\n",
    "    sharegpt_data = convert_to_sharegpt_format(dataset)\n",
    "    formatted_texts = convert_sharegpt_to_formatted_text(sharegpt_data)\n",
    "    result = process_texts_to_dataset(formatted_texts, tokenizer, max_length)\n",
    "    return result\n",
    "    \n",
    "\n",
    "\n",
    "ds = load_dataset('json', data_files='/root/autodl-tmp/dataset/BelleGroup_3_5M_CN/train_3.5M_CN.json')\n",
    "ds = ds['train'].select(range(100))\n",
    "\n",
    "result = preprocess(ds, tokenizer, 1024)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "1ae8977e-4895-4cd6-8d3a-3010157f2483",
   "metadata": {},
   "outputs": [],
   "source": [
    "class SupervisedDataset(Dataset):\n",
    "\n",
    "    def __init__(self, dataset, tokenizer, max_len: int):\n",
    "        super(SupervisedDataset, self).__init__()\n",
    "\n",
     "        data_dict = preprocess(dataset, tokenizer, max_len)\n",
    "\n",
    "        self.input_ids = data_dict[\"input_ids\"]\n",
    "        self.labels = data_dict[\"labels\"]\n",
    "        self.attention_mask = data_dict[\"attention_mask\"]\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.input_ids)\n",
    "\n",
    "    def __getitem__(self, i) -> Dict[str, torch.Tensor]:\n",
    "        return dict(\n",
    "            input_ids=self.input_ids[i],\n",
    "            labels=self.labels[i],\n",
    "            attention_mask=self.attention_mask[i],\n",
    "        )"
   ]
  },
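  {
   "cell_type": "markdown",
   "id": "f3a1c9d0-1111-4aaa-9bbb-000000000001",
   "metadata": {},
   "source": [
    "A minimal sketch of the `-100` convention used in `labels` above: `torch.nn.CrossEntropyLoss` defaults to `ignore_index=-100`, so masked positions (prompt and padding tokens) contribute nothing to the loss. The shapes and label values below are made up purely for illustration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3a1c9d0-1111-4aaa-9bbb-000000000002",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# Toy example: 5 token positions over a vocabulary of 10\n",
    "logits = torch.randn(5, 10)\n",
    "labels = torch.tensor([-100, -100, 3, 7, -100])  # only positions 2 and 3 are supervised\n",
    "\n",
    "loss_fn = torch.nn.CrossEntropyLoss(ignore_index=-100)\n",
    "loss = loss_fn(logits, labels)\n",
    "\n",
    "# Identical to averaging the loss over the unmasked rows alone\n",
    "manual = loss_fn(logits[2:4], labels[2:4])\n",
    "print(torch.allclose(loss, manual))  # True"
   ]
  },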
  {
   "cell_type": "markdown",
   "id": "7800e63a-c329-43ed-85c0-637b57d9ed95",
   "metadata": {},
   "source": [
    "# Start Training the Model!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "6d534918-80d9-4441-ae85-9dbb85d5d6d8",
   "metadata": {},
   "outputs": [],
   "source": [
    "from datasets import load_dataset\n",
    "from transformers import (\n",
    "    AutoModelForCausalLM,\n",
    "    AutoTokenizer,\n",
    "    TrainingArguments,\n",
    "    Trainer,\n",
    "    DataCollatorForLanguageModeling\n",
    ")\n",
    "from peft import LoraConfig, get_peft_model\n",
    "import torch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "2ccc1d38-9617-4b75-b651-200637529cf8",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "trainable params: 1,081,344 || all params: 495,114,112 || trainable%: 0.2184\n"
     ]
    }
   ],
   "source": [
    "# Configure LoRA\n",
    "lora_config = LoraConfig(\n",
    "    r=16,                  # LoRA rank (dimension of the low-rank update)\n",
    "    lora_alpha=32,         # scaling factor alpha\n",
    "    target_modules=[\"q_proj\", \"v_proj\"],  # modules to apply LoRA to\n",
    "    lora_dropout=0.05,     # dropout probability on the LoRA layers\n",
    "    bias=\"none\",           # do not train bias terms\n",
    "    task_type=\"CAUSAL_LM\"  # task type\n",
    ")\n",
    "\n",
    "# Wrap the base model with the LoRA adapters\n",
    "model = get_peft_model(model, lora_config)\n",
    "model.print_trainable_parameters()  # report trainable parameter counts"
   ]
  },
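  {
   "cell_type": "markdown",
   "id": "f3a1c9d0-2222-4aaa-9bbb-000000000003",
   "metadata": {},
   "source": [
    "The reported 1,081,344 trainable parameters can be reproduced by hand: for each targeted linear layer with input dimension `in` and output dimension `out`, LoRA adds matrices `A` (`r x in`) and `B` (`out x r`), i.e. `r * (in + out)` parameters. The shape assumptions below (hidden size 896, 24 layers, 14 query heads and 2 key/value heads with head dim 64) are taken from the published Qwen2.5-0.5B config."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3a1c9d0-2222-4aaa-9bbb-000000000004",
   "metadata": {},
   "outputs": [],
   "source": [
    "r = 16            # LoRA rank, as configured above\n",
    "hidden = 896      # Qwen2.5-0.5B hidden size (input dim of q_proj and v_proj)\n",
    "q_out = 14 * 64   # q_proj output dim: 14 attention heads x head_dim 64\n",
    "v_out = 2 * 64    # v_proj output dim: 2 key/value heads x head_dim 64 (GQA)\n",
    "layers = 24       # number of transformer layers\n",
    "\n",
    "per_layer = r * (hidden + q_out) + r * (hidden + v_out)\n",
    "total = layers * per_layer\n",
    "print(total)  # 1081344, matching print_trainable_parameters() above"
   ]
  },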
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "075a8ee1-e587-492c-9b2f-4283a4d6673a",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n",
      "To disable this warning, you can either:\n",
      "\t- Avoid using `tokenizers` before the fork if possible\n",
      "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[2025-06-29 16:34:05,860] [INFO] [real_accelerator.py:254:get_accelerator] Setting ds_accelerator to cuda (auto detect)\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/root/miniconda3/compiler_compat/ld: cannot find -laio\n",
      "collect2: error: ld returned 1 exit status\n",
      "/root/miniconda3/compiler_compat/ld: cannot find -lcufile\n",
      "collect2: error: ld returned 1 exit status\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[2025-06-29 16:34:07,661] [INFO] [logging.py:107:log_dist] [Rank -1] [TorchCheckpointEngine] Initialized with serialization = False\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='75' max='75' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [75/75 00:13, Epoch 3/3]\n",
       "    </div>\n",
       "    <table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       " <tr style=\"text-align: left;\">\n",
       "      <th>Step</th>\n",
       "      <th>Training Loss</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "  </tbody>\n",
       "</table><p>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "TrainOutput(global_step=75, training_loss=1.8169960530598959, metrics={'train_runtime': 13.7557, 'train_samples_per_second': 21.809, 'train_steps_per_second': 5.452, 'total_flos': 330835466649600.0, 'train_loss': 1.8169960530598959, 'epoch': 3.0})"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "train_dataset = SupervisedDataset(ds, tokenizer, max_len=512)\n",
    "\n",
    "# Training arguments (save_steps/save_total_limit are inert since save_strategy=\"no\")\n",
    "training_args = TrainingArguments(\n",
    "    output_dir=\"./results\",\n",
    "    overwrite_output_dir=True,\n",
    "    num_train_epochs=3,\n",
    "    per_device_train_batch_size=4,\n",
    "    save_steps=10_000,\n",
    "    save_total_limit=2,\n",
    "    prediction_loss_only=True,\n",
    "    logging_steps=100,\n",
    "    save_strategy=\"no\",  # disable automatic checkpointing\n",
    ")\n",
    "\n",
    "# Build the Trainer\n",
    "trainer = Trainer(\n",
    "    model=model,\n",
    "    args=training_args,\n",
    "    train_dataset=train_dataset,\n",
    ")\n",
    "\n",
    "# Start training\n",
    "trainer.train()\n"
   ]
  },
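  {
   "cell_type": "markdown",
   "id": "f3a1c9d0-3333-4aaa-9bbb-000000000005",
   "metadata": {},
   "source": [
    "After training you would normally persist the result. A minimal sketch, with placeholder paths: on a PEFT model, `save_pretrained` writes only the small adapter weights, while `merge_and_unload()` folds the low-rank updates into the base weights so the model can later be loaded with plain `transformers`, without `peft`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3a1c9d0-3333-4aaa-9bbb-000000000006",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Save just the LoRA adapter (a few MB); the paths are placeholders\n",
    "model.save_pretrained(\"/root/autodl-tmp/lora_adapter\")\n",
    "tokenizer.save_pretrained(\"/root/autodl-tmp/lora_adapter\")\n",
    "\n",
    "# Optionally merge the adapter into the base weights for peft-free inference\n",
    "merged_model = model.merge_and_unload()\n",
    "merged_model.save_pretrained(\"/root/autodl-tmp/qwen2.5-0.5b-sft-merged\")"
   ]
  },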
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "eeea750f-5435-42f8-b964-b645532b907c",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Setting `pad_token_id` to `eos_token_id`:None for open-end generation.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Once upon a time, there was an old man who loved to collect coins. He had a special box where he kept all his coins. One day, he decided to count his coins and found that he had 123 coins in total. He then decided to give 50 coins to his favorite nephew, Carl, and received 35 more coins from his friend, Linda. How many coins did the old man have left after giving some to Carl and receiving some from Linda?\n",
      "\n",
      "To determine how many coins\n"
     ]
    }
   ],
   "source": [
    "# After training, run inference directly with the in-memory model\n",
    "model.eval()  # switch to evaluation mode\n",
    "\n",
    "# Prepare the input\n",
    "prompt = \"Once upon a time\"\n",
    "inputs = tokenizer(prompt, return_tensors=\"pt\").to(model.device)\n",
    "\n",
    "# Generate text\n",
    "with torch.no_grad():  # gradients are not needed at inference time\n",
    "    outputs = model.generate(\n",
    "        **inputs,\n",
    "        max_new_tokens=100,\n",
    "        temperature=0.7,\n",
    "        do_sample=True\n",
    "    )\n",
    "\n",
    "# Decode the generated tokens\n",
    "generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)\n",
    "print(generated_text)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
