{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "d4ecfa5e-ea56-4a8f-a6f5-f5d38c6c0dff",
   "metadata": {},
   "source": [
    "# LLaMA-Factory项目定位"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dad169ff-9cc0-4bce-a604-735d9dbbef2b",
   "metadata": {},
   "source": [
    "开源大模型如 DeepSeek、LLaMA、Qwen 等主要使用通用数据训练而来，在不同下游使用场景和垂直领域上的效果有待进一步提升，由此衍生出微调训练的需求，涵盖预训练（pt）、指令微调（sft）、基于人类反馈的对齐（RLHF）等全链路。但大模型训练对显存和算力的要求较高，同时也需要下游开发者对大模型技术有一定了解，存在一定门槛。\n",
    "\n",
    "LLaMA-Factory 项目的目标是整合主流的各种高效训练微调技术，适配市场主流开源模型，形成一个功能丰富、适配性好的训练框架。项目提供了多个高层次抽象的调用接口，包含多阶段训练、推理测试、benchmark 评测、API Server 等，使开发者开箱即用。同时借鉴 Stable Diffusion WebUI 的设计，本项目提供了基于 Gradio 的网页版工作台，方便初学者迅速上手操作，训练出自己的第一个模型。\n",
    "\n",
    "Qwen3技术文档也推荐使用LLaMA-Factory进行模型训练  \n",
    "https://qwen.readthedocs.io/en/latest/training/llama_factory.html\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cc7d8826-5e37-47c9-962e-ccd88d5abb91",
   "metadata": {},
   "source": [
    "# LLaMA-Factory训练方法"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cd8d8be0-6c59-42f5-9441-c0069c728bb5",
   "metadata": {},
   "source": [
    "通过了解 LLaMA-Factory 可以做什么，同时了解大模型训练会涉及到的各个阶段。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4baf5f17-c68f-492b-b63a-fcdba25d7546",
   "metadata": {},
   "source": [
    "## Pre-training"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bc612672-9846-42c7-926b-1c83e1e873e8",
   "metadata": {},
   "source": [
    "大语言模型通过在大型通用数据集上以无监督学习的方式进行预训练，来学习语言的表征、初始化模型权重、学习概率分布。我们期望在预训练后模型能够处理大量、多种类的数据集，进而可以通过监督学习的方式微调模型，使其适应特定的任务。\n",
    "\n",
    "预训练时，请将 stage 设置为 pt ，并确保使用的数据集符合 预训练数据集 格式 。\n",
    "\n",
    "下面提供预训练的配置示例："
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f30fc14d-2045-4e0a-b36a-e611a9664a77",
   "metadata": {},
   "source": [
    "\n",
    "```yaml\n",
    "### model\n",
    "model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct\n",
    "\n",
    "### method\n",
    "stage: pt\n",
    "do_train: true\n",
    "finetuning_type: lora\n",
    "lora_target: all\n",
    "\n",
    "### dataset\n",
    "dataset: c4_demo\n",
    "cutoff_len: 1024\n",
    "max_samples: 1000\n",
    "overwrite_cache: true\n",
    "preprocessing_num_workers: 16\n",
    "\n",
    "### output\n",
    "output_dir: saves/llama3-8b/lora/pretrain\n",
    "logging_steps: 10\n",
    "save_steps: 500\n",
    "plot_loss: true\n",
    "overwrite_output_dir: true\n",
    "\n",
    "### train\n",
    "per_device_train_batch_size: 1\n",
    "gradient_accumulation_steps: 8\n",
    "learning_rate: 1.0e-4\n",
    "num_train_epochs: 3.0\n",
    "lr_scheduler_type: cosine\n",
    "warmup_ratio: 0.1\n",
    "bf16: true\n",
    "ddp_timeout: 180000000\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f29d4c42-9b53-48b5-94e0-48d029e3a686",
   "metadata": {},
   "source": [
    "样例数据集：预训练样例数据集 c4_demo\n",
    "\n",
    "预训练数据集的具体格式说明见下文「LLaMA-Factory训练数据集」一节。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eab63935-e70e-4681-abec-fc8f0d36507c",
   "metadata": {},
   "source": [
    "## Post-training"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c77bf884-53e5-41b1-9da4-4c125a847b23",
   "metadata": {},
   "source": [
    "在预训练结束后，模型的参数得到初始化，模型能够理解语义、语法以及识别上下文关系，在处理一般性任务时有着不错的表现。 尽管模型涌现出的零样本学习，少样本学习的特性使其能在一定程度上完成特定任务， 但仅通过提示（prompt）并不一定能使其表现令人满意。因此，我们需要后训练(post-training)来使得模型在特定任务上也有足够好的表现。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a612629b-ab65-48da-81e9-82ce8a25e538",
   "metadata": {},
   "source": [
    "### Supervised Fine-Tuning"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9d3ec6e7-edaa-4074-b225-b04bd4df2fbe",
   "metadata": {},
   "source": [
    "Supervised Fine-Tuning(监督微调)是一种在预训练模型上使用小规模有标签数据集进行训练的方法。 相比于预训练一个全新的模型，对已有的预训练模型进行监督微调是更快速更节省成本的途径。\n",
    "\n",
    "监督微调时，请将 stage 设置为 sft 。 下面提供监督微调的配置示例：\n",
    "```yaml\n",
    "...\n",
    "stage: sft\n",
    "finetuning_type: lora\n",
    "...\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c6a01f9d-1d22-41a2-93dc-dc4741149d60",
   "metadata": {},
   "source": [
    "### RLHF\n",
    "由于在监督微调中语言模型学习的数据来自互联网，所以模型可能无法很好地遵循用户指令，甚至可能输出非法、暴力的内容，因此我们需要将模型行为与用户需求对齐(alignment)。 通过 RLHF(Reinforcement Learning from Human Feedback) 方法，我们可以通过人类反馈来进一步微调模型，使得模型能够更好更安全地遵循用户指令。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3d94193b-3c76-476c-8a79-b1a00303c387",
   "metadata": {},
   "source": [
    "#### Reward model\n",
    "但是，获取真实的人类数据是十分耗时且昂贵的。一个自然的想法是我们可以训练一个奖励模型（reward model）来代替人类对语言模型的输出进行评价。 为了训练这个奖励模型，我们需要让奖励模型获知人类偏好，而这通常通过输入经过人类标注的偏好数据集来实现。 在偏好数据集中，数据由三部分组成：输入、好的回答、坏的回答。奖励模型在偏好数据集上训练，从而可以更符合人类偏好地评价语言模型的输出。\n",
    "\n",
    "在训练奖励模型时，请将 stage 设置为 rm ，确保使用的数据集符合 偏好数据集 格式并且指定奖励模型的保存路径。 以下提供一个示例：\n",
    "```yaml\n",
    "...\n",
    "stage: rm\n",
    "dataset: dpo_en_demo\n",
    "...\n",
    "output_dir: saves/llama3-8b/lora/reward\n",
    "...\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "907d528c-947a-404a-b7ef-377c1c269f30",
   "metadata": {},
   "source": [
    "#### PPO\n",
    "在训练完奖励模型之后，我们可以开始进行模型的强化学习部分。与监督学习不同，在强化学习中我们没有标注好的数据。语言模型接受 prompt 作为输入，其输出作为奖励模型的输入。奖励模型评价语言模型的输出，并将评价返回给语言模型。确保两个模型都能良好运行是一个具有挑战性的任务。\n",
    "\n",
    "一种实现方式是使用**近端策略优化（PPO，Proximal Policy Optimization）**。  \n",
    "其主要思想是：我们既希望语言模型的输出能够尽可能地获得奖励模型的高评价，又不希望语言模型的变化过于“激进”。 通过这种方法，我们可以使得模型在学习趋近人类偏好的同时不过多地丢失其原有的解决问题的能力。\n",
    "\n",
    "在使用 PPO 进行强化学习时，请将 stage 设置为 ppo，并且指定所使用奖励模型的路径。 下面是一个示例：\n",
    "```yaml\n",
    "...\n",
    "stage: ppo\n",
    "reward_model: saves/llama3-8b/lora/reward\n",
    "...\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "793f3981-ff71-4dd2-8aff-e33a85c5319b",
   "metadata": {},
   "source": [
    "### DPO\n",
    "既然同时保证语言模型与奖励模型的良好运行具有挑战性，一种想法是丢弃奖励模型，直接基于人类偏好数据训练语言模型，这就是 DPO(Direct Preference Optimization)，它大大简化了训练过程。\n",
    "\n",
    "在使用 DPO 时，请将 stage 设置为 dpo，确保使用的数据集符合 偏好数据集 格式并且设置偏好优化相关参数。 以下是一个示例：\n",
    "```yaml\n",
    "...\n",
    "### method\n",
    "stage: dpo\n",
    "pref_beta: 0.1\n",
    "pref_loss: sigmoid  # choices: [sigmoid (dpo), orpo, simpo]\n",
    "dataset: dpo_en_demo\n",
    "...\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e3916fcd-c977-46b2-990d-b55a0f943a3b",
   "metadata": {},
   "source": [
    "### KTO\n",
    "KTO(Kahneman-Tversky Optimization) 的出现是为了解决成对偏好数据难以获得的问题。KTO 使用了一种新的损失函数，只需二元标注数据，即只需标注回答的好坏即可训练，并可取得与 DPO 相似甚至更好的效果。\n",
    "\n",
    "在使用 KTO 时，请将 stage 设置为 kto ，设置偏好优化相关参数并使用 KTO 数据集。\n",
    "\n",
    "以下是一个示例：\n",
    "```yaml\n",
    "model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct\n",
    "...\n",
    "stage: kto\n",
    "pref_beta: 0.1\n",
    "...\n",
    "dataset: kto_en_demo\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "518e8b7b-577f-45ef-bb8e-9df988259ba2",
   "metadata": {},
   "source": [
    "# LLaMA-Factory训练数据集"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5059ed0e-39a8-492d-8bd9-25a7d2856e64",
   "metadata": {},
   "source": [
    "数据集及数据集配置文件路径： `LLaMA-Factory/data/`"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dc9568d5-932f-4b77-a49f-8fb988be32fb",
   "metadata": {},
   "source": [
    "## dataset_info.json "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4ce7e3b5-2a0f-43aa-9e0e-72c3b3d0cdcd",
   "metadata": {},
   "source": [
    "dataset_info.json 包含了所有经过预处理的 本地数据集 以及 在线数据集。如果您希望使用自定义数据集，请 务必 在 dataset_info.json 文件中添加对数据集及其内容的定义。\n",
    "```json\n",
    "{\n",
    "  \"identity\": {\n",
    "    \"file_name\": \"identity.json\"\n",
    "  },\n",
    "...\n",
    "...\n",
    "\n",
    "  \"alpaca_en_demo\": {\n",
    "    \"file_name\": \"alpaca_en_demo.json\"\n",
    "  }\n",
    "}\n",
    "```"
   ]
  },
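  {
   "cell_type": "markdown",
   "id": "9a1f2b3c-4d5e-4f60-8a70-1b2c3d4e5f60",
   "metadata": {},
   "source": [
    "下面给出一个用 Python 标准库 json 维护 dataset_info.json 的小示例（示意代码，数据集名 my_dataset 为假设名称，实际键名与文件名请按需替换）：\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "# 示意：dataset_info.json 的部分内容（仅含两个已注册数据集）\n",
    "info = {\n",
    "    'identity': {'file_name': 'identity.json'},\n",
    "    'alpaca_en_demo': {'file_name': 'alpaca_en_demo.json'}\n",
    "}\n",
    "\n",
    "# 注册一个新的本地数据集（名称与文件名均为示例）\n",
    "info['my_dataset'] = {\n",
    "    'file_name': 'my_dataset.json',\n",
    "    'columns': {'prompt': 'instruction', 'query': 'input', 'response': 'output'}\n",
    "}\n",
    "\n",
    "# 序列化后即可写回 dataset_info.json\n",
    "text = json.dumps(info, ensure_ascii=False, indent=2)\n",
    "print('my_dataset' in json.loads(text))  # True\n",
    "```"
   ]
  },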
  {
   "cell_type": "markdown",
   "id": "b0961127-177d-4feb-9657-279a251c0d04",
   "metadata": {},
   "source": [
    "## 数据集格式"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bb2a22f7-64b4-4df0-8670-09310d116d65",
   "metadata": {},
   "source": [
    "目前我们支持 Alpaca 格式和 ShareGPT 格式的数据集。下面以常用的 Alpaca 格式为例进行讲解。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "99780e36-7d9f-48f2-afbe-e27a8b5da9a3",
   "metadata": {},
   "source": [
    "### 预训练数据集\n",
    "样例数据集： 预训练样例数据集\n",
    "\n",
    "大语言模型通过学习未被标记的文本进行预训练，从而学习语言的表征。通常，预训练数据集从互联网上获得，因为互联网上提供了大量的不同领域的文本信息，有助于提升模型的泛化能力。 预训练数据集文本描述格式如下：\n",
    "```JSON\n",
    "[\n",
    "  {\"text\": \"document\"},\n",
    "  {\"text\": \"document\"}\n",
    "]\n",
    "```\n",
    "在预训练时，只有 text 列中的 内容 （即document）会用于模型学习。\n",
    "\n",
    "对于上述格式的数据， dataset_info.json 中的 数据集描述 应为：\n",
    "```JSON\n",
    "\"数据集名称\": {\n",
    "  \"file_name\": \"data.json\",\n",
    "  \"columns\": {\n",
    "    \"prompt\": \"text\"\n",
    "  }\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b5a36bd1-ef87-4a64-8a83-8bf4be30134f",
   "metadata": {},
   "source": [
    "### 指令监督微调数据集"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f285cc54-ae9e-4825-8a51-02d434819f22",
   "metadata": {},
   "source": [
    "指令监督微调(Instruction Tuning)通过让模型学习详细的指令以及对应的回答来优化模型在特定指令下的表现。\n",
    "\n",
    "instruction 列对应的内容为人类指令， input 列对应的内容为人类输入， output 列对应的内容为模型回答。  \n",
    "下面是取自 alpaca_zh_demo.json 的一个例子：\n",
    "```json\n",
    "{\n",
    "  \"instruction\": \"计算这些物品的总费用。 \",\n",
    "  \"input\": \"输入：汽车 - $3000，衣服 - $100，书 - $20。\",\n",
    "  \"output\": \"汽车、衣服和书的总费用为 $3000 + $100 + $20 = $3120。\"\n",
    "},\n",
    "```\n",
    "\n",
    "在进行指令监督微调时， instruction 列对应的内容会与 input 列对应的内容拼接后作为最终的人类输入，即人类输入为 instruction\\ninput。而 output 列对应的内容为模型回答。 在上面的例子中，人类的最终输入是：\n",
    "\n",
    "```\n",
    "计算这些物品的总费用。\n",
    "输入：汽车 - $3000，衣服 - $100，书 - $20。\n",
    "```\n",
    "\n",
    "模型的回答是：\n",
    "\n",
    "```\n",
    "汽车、衣服和书的总费用为 $3000 + $100 + $20 = $3120。\n",
    "```\n",
    "\n",
    "如果指定， system 列对应的内容将被作为系统提示词。\n",
    "\n",
    "history 列是由多个字符串二元组构成的列表，分别代表历史消息中每轮对话的指令和回答。注意在指令监督微调时，历史消息中的回答内容也会被用于模型学习。\n",
    "\n",
    "指令监督微调数据集 格式要求 如下：\n",
    "```json\n",
    "[\n",
    "  {\n",
    "    \"instruction\": \"人类指令（必填）\",\n",
    "    \"input\": \"人类输入（选填）\",\n",
    "    \"output\": \"模型回答（必填）\",\n",
    "    \"system\": \"系统提示词（选填）\",\n",
    "    \"history\": [\n",
    "      [\"第一轮指令（选填）\", \"第一轮回答（选填）\"],\n",
    "      [\"第二轮指令（选填）\", \"第二轮回答（选填）\"]\n",
    "    ]\n",
    "  }\n",
    "]\n",
    "```\n",
    "下面提供一个 alpaca 格式 多轮 对话的例子，对于单轮对话只需省略 history 列即可。\n",
    "```json\n",
    "[\n",
    "  {\n",
    "    \"instruction\": \"今天的天气怎么样？\",\n",
    "    \"input\": \"\",\n",
    "    \"output\": \"今天的天气不错，是晴天。\",\n",
    "    \"history\": [\n",
    "      [\n",
    "        \"今天会下雨吗？\",\n",
    "        \"今天不会下雨，是个好天气。\"\n",
    "      ],\n",
    "      [\n",
    "        \"今天适合出去玩吗？\",\n",
    "        \"非常适合，空气质量很好。\"\n",
    "      ]\n",
    "    ]\n",
    "  }\n",
    "]\n",
    "```\n",
    "对于上述格式的数据， dataset_info.json 中的 数据集描述 应为：\n",
    "```json\n",
    "\"数据集名称\": {\n",
    "  \"file_name\": \"data.json\",\n",
    "  \"columns\": {\n",
    "    \"prompt\": \"instruction\",\n",
    "    \"query\": \"input\",\n",
    "    \"response\": \"output\",\n",
    "    \"system\": \"system\",\n",
    "    \"history\": \"history\"\n",
    "  }\n",
    "}\n",
    "```"
   ]
  },
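  {
   "cell_type": "markdown",
   "id": "1c2d3e4f-5a6b-4c7d-8e9f-0a1b2c3d4e5f",
   "metadata": {},
   "source": [
    "上面的拼接规则可以用几行 Python 示意（草图代码，并非 LLaMA-Factory 的实际实现）：\n",
    "\n",
    "```python\n",
    "# 示意：按 Alpaca 格式把 instruction 与 input 拼接成最终的人类输入\n",
    "example = {\n",
    "    'instruction': '计算这些物品的总费用。',\n",
    "    'input': '输入：汽车 - $3000，衣服 - $100，书 - $20。',\n",
    "    'output': '汽车、衣服和书的总费用为 $3000 + $100 + $20 = $3120。'\n",
    "}\n",
    "\n",
    "newline = chr(10)  # 换行符\n",
    "if example['input']:\n",
    "    prompt = example['instruction'] + newline + example['input']\n",
    "else:\n",
    "    prompt = example['instruction']\n",
    "\n",
    "print(prompt)\n",
    "```"
   ]
  },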
  {
   "cell_type": "markdown",
   "id": "c6822aa7-46f0-44b0-a62b-777677384541",
   "metadata": {},
   "source": [
    "### 偏好数据集\n",
    "偏好数据集用于 **Reward model奖励模型训练、DPO 训练和 ORPO 训练**。对于系统指令和人类输入，偏好数据集给出了一个更优的回答和一个更差的回答。\n",
    "\n",
    "一些研究表明，通过让模型学习“什么更好”可以使模型更加符合人类偏好，甚至使参数相对较少的模型表现优于参数更多的模型。\n",
    "\n",
    "偏好数据集需要在 chosen 列中提供更优的回答，并在 rejected 列中提供更差的回答，在一轮问答中其格式如下：\n",
    "```json\n",
    "[\n",
    "  {\n",
    "    \"instruction\": \"人类指令（必填）\",\n",
    "    \"input\": \"人类输入（选填）\",\n",
    "    \"chosen\": \"优质回答（必填）\",\n",
    "    \"rejected\": \"劣质回答（必填）\"\n",
    "  }\n",
    "]\n",
    "```\n",
    "对于上述格式的数据，dataset_info.json 中的 数据集描述 应为：\n",
    "```json\n",
    "\"数据集名称\": {\n",
    "  \"file_name\": \"data.json\",\n",
    "  \"ranking\": true,\n",
    "  \"columns\": {\n",
    "    \"prompt\": \"instruction\",\n",
    "    \"query\": \"input\",\n",
    "    \"chosen\": \"chosen\",\n",
    "    \"rejected\": \"rejected\"\n",
    "  }\n",
    "}\n",
    "```"
   ]
  },
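  {
   "cell_type": "markdown",
   "id": "2d3e4f5a-6b7c-4d8e-9f0a-1b2c3d4e5f6a",
   "metadata": {},
   "source": [
    "偏好数据的构造可以用一小段 Python 示意：由（指令, 优质回答, 劣质回答）三元组批量生成上述格式的记录（示意脚本，数据内容为编造的示例）：\n",
    "\n",
    "```python\n",
    "# 示意：由三元组批量构造偏好数据集记录\n",
    "triples = [\n",
    "    ('写一句问候语', '你好，很高兴见到你！', '不知道'),\n",
    "]\n",
    "\n",
    "records = [\n",
    "    {'instruction': ins, 'input': '', 'chosen': good, 'rejected': bad}\n",
    "    for ins, good, bad in triples\n",
    "]\n",
    "\n",
    "print(len(records))  # 1\n",
    "```"
   ]
  },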
  {
   "cell_type": "markdown",
   "id": "09960191-ecb3-4bf4-aabb-9083d6ed9c15",
   "metadata": {},
   "source": [
    "### KTO 数据集\n",
    "KTO数据集与偏好数据集类似，但不同于给出一个更优的回答和一个更差的回答，KTO数据集对每一轮问答只给出一个 true/false 的 label。 除了 instruction 以及 input 组成的人类最终输入和模型回答 output ，KTO 数据集还需要额外添加一个 kto_tag 列（true/false）来表示人类的反馈。\n",
    "\n",
    "在一轮问答中其格式如下：\n",
    "```json\n",
    "[\n",
    "  {\n",
    "    \"instruction\": \"人类指令（必填）\",\n",
    "    \"input\": \"人类输入（选填）\",\n",
    "    \"output\": \"模型回答（必填）\",\n",
    "    \"kto_tag\": \"人类反馈 [true/false]（必填）\"\n",
    "  }\n",
    "]\n",
    "```\n",
    "对于上述格式的数据， dataset_info.json 中的 数据集描述 应为：\n",
    "```json\n",
    "\"数据集名称\": {\n",
    "  \"file_name\": \"data.json\",\n",
    "  \"columns\": {\n",
    "    \"prompt\": \"instruction\",\n",
    "    \"query\": \"input\",\n",
    "    \"response\": \"output\",\n",
    "    \"kto_tag\": \"kto_tag\"\n",
    "  }\n",
    "}\n",
    "```"
   ]
  },
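  {
   "cell_type": "markdown",
   "id": "3e4f5a6b-7c8d-4e9f-a0b1-2c3d4e5f6a7b",
   "metadata": {},
   "source": [
    "偏好数据与 KTO 数据的关系也可以用代码说明：一条偏好记录可以拆成两条带 kto_tag 的二元标注记录（示意转换，并非官方提供的脚本）：\n",
    "\n",
    "```python\n",
    "# 示意：把一条偏好数据拆成两条 KTO 二元标注数据\n",
    "pref = {\n",
    "    'instruction': '写一句问候语',\n",
    "    'input': '',\n",
    "    'chosen': '你好，很高兴见到你！',\n",
    "    'rejected': '不知道'\n",
    "}\n",
    "\n",
    "kto_records = [\n",
    "    {'instruction': pref['instruction'], 'input': pref['input'],\n",
    "     'output': pref['chosen'], 'kto_tag': True},\n",
    "    {'instruction': pref['instruction'], 'input': pref['input'],\n",
    "     'output': pref['rejected'], 'kto_tag': False},\n",
    "]\n",
    "\n",
    "print([r['kto_tag'] for r in kto_records])  # [True, False]\n",
    "```"
   ]
  },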
  {
   "cell_type": "markdown",
   "id": "bb8afc3d-094e-4c21-8829-4d99c9777ce5",
   "metadata": {},
   "source": [
    "# LLaMA-Factory：监督微调实战"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4c612344-5184-4214-90c6-14843f7896c2",
   "metadata": {},
   "source": [
    "## 目标\n",
    "\n",
    "以 Qwen/DeepSeek 模型和 Ubuntu 22.04 + 4 × NVIDIA Tesla T4 16GB 环境、LoRA + sft 训练阶段为例，帮助开发者迅速浏览和实践本项目涉及的若干常见功能，包括\n",
    "\n",
    "1. 原始模型直接推理\n",
    "2. 自定义数据集构建\n",
    "3. 基于LoRA的sft指令微调\n",
    "4. LoRA模型合并导出\n",
    "5. 微调后模型问答效果验证\n",
    "6. 一站式webui board的使用"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3608f27f-aea7-461c-9c93-5b8f6b8c5050",
   "metadata": {},
   "source": [
    "## 硬件环境校验\n",
    "\n",
    "显卡驱动和 CUDA 的安装网络教程很多，不在本教程范围以内。\n",
    "使用以下命令做最简单的校验：\n",
    "\n",
    "```shell\n",
    "nvidia-smi\n",
    "```\n",
    "\n",
    "显示GPU当前状态和配置信息\n",
    "\n",
    "```shell\n",
    "(env_sft) root@2c61cb3f8af3:/workspace# nvidia-smi \n",
    "Tue Aug 19 03:17:43 2025            \n",
    "+---------------------------------------------------------------------------------------+\n",
    "| NVIDIA-SMI 535.183.01             Driver Version: 535.183.01   CUDA Version: 12.2     |\n",
    "|-----------------------------------------+----------------------+----------------------+\n",
    "| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |\n",
    "| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |\n",
    "|                                         |                      |               MIG M. |\n",
    "|=========================================+======================+======================|\n",
    "|   0  Tesla T4                       Off | 00000000:17:00.0 Off |                    0 |\n",
    "| N/A   39C    P8              11W /  70W |      7MiB / 15360MiB |      0%      Default |\n",
    "|                                         |                      |                  N/A |\n",
    "+-----------------------------------------+----------------------+----------------------+\n",
    "|   1  Tesla T4                       Off | 00000000:31:00.0 Off |                    0 |\n",
    "| N/A   39C    P8              11W /  70W |      7MiB / 15360MiB |      0%      Default |\n",
    "|                                         |                      |                  N/A |\n",
    "+-----------------------------------------+----------------------+----------------------+\n",
    "|   2  Tesla T4                       Off | 00000000:98:00.0 Off |                    0 |\n",
    "| N/A   42C    P8              12W /  70W |      7MiB / 15360MiB |      0%      Default |\n",
    "|                                         |                      |                  N/A |\n",
    "+-----------------------------------------+----------------------+----------------------+\n",
    "|   3  Tesla T4                       Off | 00000000:B1:00.0 Off |                    0 |\n",
    "| N/A   42C    P8              11W /  70W |      7MiB / 15360MiB |      0%      Default |\n",
    "|                                         |                      |                  N/A |\n",
    "+-----------------------------------------+----------------------+----------------------+\n",
    "                                                                                         \n",
    "+---------------------------------------------------------------------------------------+\n",
    "| Processes:                                                                            |\n",
    "|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |\n",
    "|        ID   ID                                                             Usage      |\n",
    "|=======================================================================================|\n",
    "|    0   N/A  N/A    100664      G   /usr/lib/xorg/Xorg                            4MiB |\n",
    "|    1   N/A  N/A    100664      G   /usr/lib/xorg/Xorg                            4MiB |\n",
    "|    2   N/A  N/A    100664      G   /usr/lib/xorg/Xorg                            4MiB |\n",
    "|    3   N/A  N/A    100664      G   /usr/lib/xorg/Xorg                            4MiB |\n",
    "+---------------------------------------------------------------------------------------+\n",
    "```\n",
    "\n",
    "\n",
    "\n",
    "那多大的模型、什么训练方式需要多大的 GPU 显存，可参考 [LLaMA-Factory Hardware Requirement](https://github.com/hiyouga/LLaMA-Factory?tab=readme-ov-file#hardware-requirement)\n",
    "建议选择比较主流的入门级别大模型 7B和8B版本。\n",
    "\n",
    "| 方法                            | 精度 | 7B    | 14B   | 30B   | 70B    | `x`B    |\n",
    "| ------------------------------- | ---- | ----- | ----- | ----- | ------ | ------- |\n",
    "| Full (`bf16` or `fp16`)         | 32   | 120GB | 240GB | 600GB | 1200GB | `18x`GB |\n",
    "| Full (`pure_bf16`)              | 16   | 60GB  | 120GB | 300GB | 600GB  | `8x`GB  |\n",
    "| Freeze/LoRA/GaLore/APOLLO/BAdam | 16   | 16GB  | 32GB  | 64GB  | 160GB  | `2x`GB  |\n",
    "| QLoRA                           | 8    | 10GB  | 20GB  | 40GB  | 80GB   | `x`GB   |\n",
    "| QLoRA                           | 4    | 6GB   | 12GB  | 24GB  | 48GB   | `x/2`GB |\n",
    "| QLoRA                           | 2    | 4GB   | 8GB   | 16GB  | 24GB   | `x/4`GB |\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f6b0ba55-cee7-4bd9-999e-d98dfe9af4df",
   "metadata": {},
   "source": [
    "## CUDA 和 PyTorch 环境校验\n",
    "\n",
    "安装方法：\n",
    "\n",
    "```shell\n",
    "git clone https://bgithub.xyz/hiyouga/LLaMA-Factory.git\n",
    "conda create -n env_sft python=3.10\n",
    "conda activate env_sft\n",
    "cd LLaMA-Factory\n",
    "pip install -e '.[torch,metrics]' # .[torch,metrics] 会安装核心库及扩展依赖\n",
    "```\n",
    "\n",
    "上述的安装命令完成了如下几件事\n",
    "\n",
    "1. 新建一个LLaMA-Factory 使用的python环境（可选）\n",
    "2. 安装LLaMA-Factory 所需要的第三方基础库（requirements.txt包含的库）\n",
    "3. 安装评估指标所需要的库，包含nltk, jieba, rouge-chinese\n",
    "4. 安装LLaMA-Factory本身，然后在系统中生成一个命令 llamafactory-cli（具体用法见下方教程）"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "452e8364-863c-420d-aac6-6ef4c6703756",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'2.8.0+cu128'"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 校验1\n",
    "\n",
    "import torch\n",
    "torch.cuda.current_device()\n",
    "torch.cuda.get_device_name(0)\n",
    "torch.__version__"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6ace636c-0b56-4aa3-adc3-1bcf7a6f8a31",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "校验2\n",
    "\n",
    "同时对本库的基础安装做一下校验：输入以下命令，若能输出训练相关的参数说明则安装成功，否则说明库还没有安装成功。\n",
    "```shell\n",
    "llamafactory-cli train -h\n",
    "```"
   ]
  },
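  {
   "cell_type": "markdown",
   "id": "4f5a6b7c-8d9e-4f0a-b1c2-3d4e5f6a7b8c",
   "metadata": {},
   "source": [
    "也可以用 Python 标准库做一个简单的命令可用性检查（示意脚本：shutil.which 在 PATH 中找不到命令时返回 None）：\n",
    "\n",
    "```python\n",
    "import shutil\n",
    "\n",
    "# 检查 llamafactory-cli 是否已在 PATH 中（安装成功后应能找到）\n",
    "cli_path = shutil.which('llamafactory-cli')\n",
    "if cli_path:\n",
    "    print('llamafactory-cli 已安装:', cli_path)\n",
    "else:\n",
    "    print('未找到 llamafactory-cli，请检查上文的安装步骤')\n",
    "```"
   ]
  },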
  {
   "cell_type": "markdown",
   "id": "2f2955f0-8f17-4272-b08b-096355e56036",
   "metadata": {},
   "source": [
    "## 模型准备"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "101fce88-ad6d-4530-b070-0e49cce0e84f",
   "metadata": {},
   "source": [
    "### 模型下载\n",
    "\n",
    "(如果是实训环境，该步骤无需操作，已下载好)\n",
    "\n",
    "从modelscope下载模型\n",
    "\n",
    "以Qwen3-4B为例,下载到 `/workspace/models/` 目录下\n",
    "\n",
    "```shell\n",
    "modelscope download --model Qwen/Qwen3-4B --cache_dir /workspace/models/\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "142fe955-fec2-4ee2-9479-321449f03cc4",
   "metadata": {},
   "source": [
    "### 可用性校验\n",
    "\n",
    "使用 vLLM 进行测试，\n",
    "可以复用之前已经安装好的 env_vllm 环境；若尚未安装，可按以下命令新建：\n",
    "```shell\n",
    "conda create -n env_vllm python=3.12\n",
    "conda activate env_vllm\n",
    "pip install vllm \n",
    "```\n",
    "\n",
    "启动模型，在终端执行\n",
    "\n",
    "```shell\n",
    "CUDA_VISIBLE_DEVICES=0 \\\n",
    "vllm serve /workspace/models/Qwen/Qwen3-4B \\\n",
    "--port 8082 \\\n",
    "--max-model-len 16384 \\\n",
    "--tensor-parallel-size 1 \\\n",
    "--trust-remote-code \\\n",
    "--served-model-name my_qwen3_4b \\\n",
    "--dtype=half \\\n",
    "--enable-auto-tool-choice \\\n",
    "--tool-call-parser hermes \\\n",
    "--reasoning-parser deepseek_r1 \\\n",
    "--gpu-memory-utilization 0.8 \\\n",
    "--api-key token-abc123\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "77d0155c-36a6-4e1d-b5f6-af53086a30fe",
   "metadata": {},
   "source": [
    "测试问答效果"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "07de7639-7811-42c1-ab0f-6f76fe434ee8",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'\\n\\n我是通义千问，是阿里巴巴集团旗下的通义实验室研发的大型语言模型。我能够帮助您回答问题、创作文本、进行对话等。您可以向我提出任何问题或请求，我会尽力提供帮助。如果您有任何具体的问题或需要 assistance，欢迎随时告诉我！'"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from openai import OpenAI\n",
    "client = OpenAI(\n",
    "    base_url=\"http://localhost:8082/v1\",\n",
    "    api_key=\"token-abc123\",\n",
    ")\n",
    "prompt = '你是谁？/no_think'\n",
    "messages = [{\"role\":\"user\", \"content\":prompt}]\n",
    "response = client.chat.completions.create(\n",
    "    model = 'my_qwen3_4b',\n",
    "    messages = messages,\n",
    "    temperature=0.95\n",
    ")\n",
    "\n",
    "response.choices[0].message.content"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9dc13338-390f-4e47-a518-5d66fe30aa35",
   "metadata": {},
   "source": [
    "回答结果：  \n",
    "```bash\n",
    "'\\n\\n我是通义千问，是阿里巴巴集团旗下的通义实验室研发的大型语言模型。我能够帮助您回答问题、创作文本、进行对话等。您可以向我提出任何问题或请求，我会尽力提供帮助。如果您有任何具体的问题或需要 assistance，欢迎随时告诉我！'\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "1a6cd899-6234-406c-ad2f-30cedc5534ab",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'\\n\\n根据你提供的关键词，可以整理出以下这款裤子的描述：\\n\\n---\\n\\n**裤子类型：** 牛仔裤  \\n**版型特点：** 显瘦  \\n**材质：** 牛仔布  \\n**颜色：** 深蓝色  \\n**裤腰型：** 高腰  \\n\\n---\\n\\n如果你需要这段描述用于商品标题、详情页或广告文案，可以稍作润色，例如：\\n\\n**高腰显瘦深蓝色牛仔裤 | 修身版型 | 牛仔布材质 | 永远时尚的百搭选择**\\n\\n或者更详细一些：\\n\\n**高腰显瘦深蓝色牛仔裤，采用优质牛仔布面料，版型修身显瘦，深蓝色经典百搭，适合多种场合穿着，是衣橱中的必备单品。**\\n\\n如需我帮你生成更完整的商品描述或搭配建议，也可以告诉我！'"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from openai import OpenAI\n",
    "client = OpenAI(\n",
    "    base_url=\"http://localhost:8082/v1\",\n",
    "    api_key=\"token-abc123\",\n",
    ")\n",
    "prompt = '类型#裤*版型#显瘦*材质#牛仔布*颜色#深蓝色*裤腰型#高腰/no_think'\n",
    "messages = [{\"role\":\"user\", \"content\":prompt}]\n",
    "response = client.chat.completions.create(\n",
    "    model = 'my_qwen3_4b',\n",
    "    messages = messages,\n",
    "    temperature=0.95\n",
    ")\n",
    "response.choices[0].message.content"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dfc1a43d-c428-466f-86a6-3aaddbf80865",
   "metadata": {},
   "source": [
    "回答结果：  \n",
    "```bash\n",
    "'\\n\\n根据你提供的关键词，可以整理出以下这款裤子的描述：\\n\\n---\\n\\n**裤子类型：** 牛仔裤  \\n**版型特点：** 显瘦  \\n**材质：** 牛仔布  \\n**颜色：** 深蓝色  \\n**裤腰型：** 高腰  \\n\\n---\\n\\n如果你需要这段描述用于商品标题、详情页或广告文案，可以稍作润色，例如：\\n\\n**高腰显瘦深蓝色牛仔裤 | 修身版型 | 牛仔布材质 | 永远时尚的百搭选择**\\n\\n或者更详细一些：\\n\\n**高腰显瘦深蓝色牛仔裤，采用优质牛仔布面料，版型修身显瘦，深蓝色经典百搭，适合多种场合穿着，是衣橱中的必备单品。**\\n\\n如需我帮你生成更完整的商品描述或搭配建议，也可以告诉我！'\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d3264654-3b7b-4d6f-b303-c786858db073",
   "metadata": {},
   "source": [
    "## SFT数据集构建"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f0546588-ecbf-4bbc-ba84-25f5d1e9380f",
   "metadata": {},
   "source": [
    "### identity.json 数据集\n",
    "\n",
    "系统自带的identity.json数据集(已默认在data/dataset_info.json 注册为identity)，对应文件已经在data目录下，我们通过操作系统的文本编辑器的替换功能，可以替换其中的NAME 和 AUTHOR ，换成我们需要的内容。如果是linux系统，可以使用**sed** 完成快速替换。\n",
    "\n",
    "在 `/workspace/LLaMA-Factory` 目录下执行： \n",
    "比如助手的名称修改为**商品文案生成助手**， 由 LLaMA Factory开发：\n",
    "\n",
    "```shell\n",
    "sed -i 's/{{name}}/商品文案生成助手/g'  data/identity.json \n",
    "sed -i 's/{{author}}/LLaMA Factory/g'  data/identity.json \n",
    "```\n",
    "\n",
    "替换前\n",
    "\n",
    "```json\n",
    "{\n",
    "  \"instruction\": \"Who are you?\",\n",
    "  \"input\": \"\",\n",
    "  \"output\": \"Hello! I am {{name}}, an AI assistant developed by {{author}}. How can I assist you today?\"\n",
    "}\n",
    "```\n",
    "\n",
    "替换后\n",
    "\n",
    "```json\n",
    "{\n",
    "  \"instruction\": \"Who are you?\",\n",
    "  \"input\": \"\",\n",
    "  \"output\": \"I am 商品文案生成助手, an AI assistant developed by LLaMA Factory. How can I assist you today?\"\n",
    "}\n",
    "```\n"
   ]
  },
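  {
   "cell_type": "markdown",
   "id": "5a6b7c8d-9e0f-4a1b-8c2d-4e5f6a7b8c9d",
   "metadata": {},
   "source": [
    "如果不方便使用 sed（例如在 Windows 上），可以用等价的 Python 片段完成占位符替换（示意：仅演示单条字符串的替换，不修改真实文件）：\n",
    "\n",
    "```python\n",
    "# 示意：identity.json 中占位符替换的 Python 等价写法\n",
    "template = 'Hello! I am {{name}}, an AI assistant developed by {{author}}. How can I assist you today?'\n",
    "result = template.replace('{{name}}', '商品文案生成助手').replace('{{author}}', 'LLaMA Factory')\n",
    "print(result)\n",
    "```"
   ]
  },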
  {
   "cell_type": "markdown",
   "id": "5d00d0e3-a24a-47ca-9f42-67b28374b9f9",
   "metadata": {},
   "source": [
    "### AdvertiseGen数据集\n",
    "\n",
    "一个商品文案生成数据集AdvertiseGen，原始链接为 [AdvertiseGen](https://cloud.tsinghua.edu.cn/f/b3f119a008264b1cabd1/?dl=1)\n",
    "\n",
    "原始格式如下，训练目标是输入content （也就是prompt）, 输出 summary （对应response）\n",
    "\n",
    "```json\n",
    "{\n",
    "    \"content\": \"类型#裤*版型#宽松*风格#性感*图案#线条*裤型#阔腿裤\", \n",
    "    \"summary\": \"宽松的阔腿裤这两年真的吸粉不少，明星时尚达人的心头爱。毕竟好穿时尚，谁都能穿出腿长2米的效果宽松的裤腿，当然是遮肉小能手啊。上身随性自然不拘束，面料亲肤舒适贴身体验感棒棒哒。系带部分增加设计看点，还让单品的设计感更强。腿部线条若隐若现的，性感撩人。颜色敲温柔的，与裤子本身所呈现的风格有点反差萌。\"\n",
    "}\n",
    "```\n",
    "\n",
    "数据集路径： `/workspace/DataSet/sft/AdvertiseGen`\n",
    "\n",
    "复制该数据集到 `/workspace/LLaMA-Factory/data/` 目录下 \n",
    "\n",
    "```shell\n",
    "(env_sft) root@2c61cb3f8af3:/workspace# \n",
    "cp -r  /workspace/DataSet/sft/AdvertiseGen /workspace/LLaMA-Factory/data/\n",
    "```\n",
    "\n",
    "修改 data/dataset_info.json 新增内容完成注册，该注册同时完成了 3 件事\n",
    "\n",
    "- 自定义数据集的名称为adgen_local，后续训练的时候就使用这个名称来找到该数据集\n",
    "\n",
    "- 指定了数据集具体文件位置\n",
    "\n",
    "- 定义了原数据集的输入输出和我们所需要的格式之间的映射关系\n",
    "\n",
    "  ```json\n",
    "    \"adgen_local\": {\n",
    "      \"file_name\": \"AdvertiseGen/train.json\",\n",
    "      \"columns\": {\n",
    "        \"prompt\": \"content\",\n",
    "        \"response\": \"summary\"\n",
    "      }\n",
    "    }\n",
    "  ```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "53be6652-0cf6-4438-9fed-57163e2fd687",
   "metadata": {},
   "source": [
    "如果想要转换成标准Alpaca格式，可以执行以下脚本"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "65619a72-cdcb-465b-96cf-983bfa3ff236",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "数据转换完成，已保存到 /workspace/DataSet/sft/AdvertiseGen/train_alpaca.json\n",
      "共转换了 114599 条数据\n"
     ]
    }
   ],
   "source": [
    "import json\n",
    "\n",
    "def convert_dataset(source_file, target_file):\n",
    "    \"\"\"\n",
    "    将原始数据集(JSON Lines格式)转换为指定格式\n",
    "    \n",
    "    参数:\n",
    "        source_file (str): 原始数据文件路径\n",
    "        target_file (str): 转换后保存的文件路径\n",
    "    \"\"\"\n",
    "    try:\n",
    "        converted_data = []\n",
    "        \n",
    "        # 读取JSON Lines格式的文件\n",
    "        with open(source_file, 'r', encoding='utf-8') as f:\n",
    "            for line in f:\n",
    "                line = line.strip()\n",
    "                if line:  # 跳过空行\n",
    "                    item = json.loads(line)\n",
    "                    converted_item = {\n",
    "                        \"instruction\": item[\"content\"],\n",
    "                        \"input\": \"\",\n",
    "                        \"output\": item[\"summary\"]\n",
    "                    }\n",
    "                    converted_data.append(converted_item)\n",
    "        \n",
    "        # 保存转换后的数据\n",
    "        with open(target_file, 'w', encoding='utf-8') as f:\n",
    "            json.dump(converted_data, f, ensure_ascii=False, indent=2)\n",
    "            \n",
    "        print(f\"数据转换完成，已保存到 {target_file}\")\n",
    "        print(f\"共转换了 {len(converted_data)} 条数据\")\n",
    "        \n",
    "    except FileNotFoundError:\n",
    "        print(f\"错误：找不到文件 {source_file}\")\n",
    "    except json.JSONDecodeError as e:\n",
    "        print(f\"错误：文件 {source_file} 包含无效的JSON格式: {e}\")\n",
    "    except KeyError as e:\n",
    "        print(f\"错误：原始数据缺少必要的键 {e}\")\n",
    "    except Exception as e:\n",
    "        print(f\"发生未知错误: {e}\")\n",
    "\n",
    "# 使用\n",
    "convert_dataset(\"/workspace/DataSet/sft/AdvertiseGen/train.json\", \"/workspace/DataSet/sft/AdvertiseGen/train_alpaca.json\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0981acf8-3b60-4b85-9f0f-ae8df18b97ca",
   "metadata": {},
   "source": [
    "在 `dataset_info.json` 中进行注册，这样注册条目就精简很多："
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f6216154-1944-4255-8fc1-7313e8b77a39",
   "metadata": {},
   "source": [
    "```json\n",
    "    \"adgen_local\": {\n",
    "      \"file_name\": \"AdvertiseGen/train_alpaca.json\"\n",
    "    }\n",
    "```"
   ]
  },
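  {
   "cell_type": "markdown",
   "id": "6b7c8d9e-0f1a-4b2c-8d3e-5f6a7b8c9d0e",
   "metadata": {},
   "source": [
    "注册完成后，可以用一小段 Python 校验 Alpaca 格式记录是否包含必需字段（示意脚本，直接在内存中构造样例，不读取真实文件）：\n",
    "\n",
    "```python\n",
    "# 示意：检查每条记录是否包含 Alpaca 格式的必填字段\n",
    "required = ['instruction', 'output']\n",
    "sample = [\n",
    "    {'instruction': '测试指令', 'input': '', 'output': '测试回答'},\n",
    "]\n",
    "\n",
    "ok = all(all(key in item for key in required) for item in sample)\n",
    "print(ok)  # True\n",
    "```"
   ]
  },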
  {
   "cell_type": "markdown",
   "id": "5790d1a8-a5cb-47c4-99ae-2e7fc78f3aa6",
   "metadata": {},
   "source": [
    "### 补充：自主构建\n",
    "\n",
    "1. 开源工具： [ConardLi/easy-dataset](https://bgithub.xyz/ConardLi/easy-dataset)\n",
    "\n",
    "2. 写python脚本csv2json: \n",
    "\n",
    "```python\n",
    "import csv\n",
    "import json\n",
    "\n",
    "def convert_csv_to_json(csv_file_path, json_file_path):\n",
    "    \"\"\"\n",
    "    将CSV文件转换为JSON文件。\n",
    "\n",
    "    参数:\n",
    "        csv_file_path(str): 输入的CSV文件路径。\n",
    "        json_file_path(str): 输出的JSON文件路径。\n",
    "    \n",
    "    此函数首先读取一个CSV文件，将其内容转换为JSON格式，然后写入到一个JSON文件中。\n",
    "    CSV文件的内容会被转换成一个包含多个字典的列表，每个字典对应CSV文件中的一行。\n",
    "    \"\"\"\n",
    "    \n",
    "    # 读取CSV文件\n",
    "    with open(csv_file_path, mode='r', encoding='utf-8') as file:\n",
    "        reader = csv.DictReader(file)\n",
    "        rows = list(reader)\n",
    "    \n",
    "    # 转换为JSON格式\n",
    "    data = []\n",
    "    for row in rows:\n",
    "        entry = {\n",
    "            'instruction': row['instruction'],\n",
    "            'input': row['input'],\n",
    "            'output': row['output']\n",
    "        }\n",
    "        data.append(entry)\n",
    "    \n",
    "    # 写入JSON文件\n",
    "    with open(json_file_path, mode='w', encoding='utf-8') as file:\n",
    "        json.dump(data, file, ensure_ascii=False, indent=2)\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    # 指定CSV和JSON文件路径\n",
    "    csv_file_path = 'input.csv'\n",
    "    json_file_path = 'output.json'\n",
    "    # 调用函数\n",
    "    convert_csv_to_json(csv_file_path, json_file_path)\n",
    "\n",
    "```\n",
    "\n",
    "3. 写python脚本csv2json-CoT: \n",
    "\n",
    "```python\n",
    "import csv\n",
    "import json\n",
    "\n",
    "def convert_csv_to_json(csv_file_path, json_file_path):\n",
    "    # 读取CSV文件（注意：此处编码为 gbk，请按实际 CSV 文件的编码调整）\n",
    "    with open(csv_file_path, mode='r', encoding='gbk') as file:\n",
    "        reader = csv.DictReader(file)\n",
    "        rows = list(reader)\n",
    "\n",
    "    # 数据转换逻辑（新增CoT列处理）\n",
    "    data = []\n",
    "    for row in rows:\n",
    "        entry = {\n",
    "            'instruction': row['instruction'],\n",
    "            'input': row['input'],\n",
    "            # 核心改动：用f-string拼接CoT和output内容\n",
    "            'output': f\"<think>{row['CoT']}</think>{row['output']}\"\n",
    "        }\n",
    "        data.append(entry)\n",
    "    \n",
    "    # 写入JSON文件（此部分逻辑不变）\n",
    "    with open(json_file_path, mode='w', encoding='utf-8') as file:\n",
    "        json.dump(data, file, ensure_ascii=False, indent=2)\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    csv_file_path = 'input-CoT.csv'\n",
    "    json_file_path = 'output-CoT.json'\n",
    "    convert_csv_to_json(csv_file_path, json_file_path)\n",
    "```"
   ]
  },
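  {
   "cell_type": "markdown",
   "id": "3f1a2b4c-0002-4e10-9a21-5a0c1d2e3f41",
   "metadata": {},
   "source": [
    "上述转换逻辑可以先用一段内存中的 CSV 快速自测（示意代码，字段名与上文脚本保持一致）：\n",
    "\n",
    "```python\n",
    "import csv\n",
    "import io\n",
    "\n",
    "def csv_text_to_alpaca(csv_text):\n",
    "    \"\"\"把 CSV 文本解析为 Alpaca 格式的字典列表（instruction/input/output 三列）。\"\"\"\n",
    "    reader = csv.DictReader(io.StringIO(csv_text))\n",
    "    return [{'instruction': r['instruction'],\n",
    "             'input': r['input'],\n",
    "             'output': r['output']} for r in reader]\n",
    "\n",
    "sample = 'instruction,input,output\\n你是谁？,,我是商品文案助手\\n'\n",
    "print(csv_text_to_alpaca(sample))\n",
    "```"
   ]
  },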
  {
   "cell_type": "markdown",
   "id": "d297ac9d-405b-4a3a-8c4a-a76f6b5a46bf",
   "metadata": {},
   "source": [
    "**推理数据集样例**\n",
    "\n",
    "\n",
    "\n",
    "```json\n",
    "{\n",
    "\t\"instruction\": \"阴阳的概念是什么？\", \n",
    "\t\"input\": \"\", \n",
    "\t\"output\": \"<think>首先，我们需要理解阴阳的基本定义。阴阳是中国古代哲学中的一个基本概念，用来描述宇宙间事物的对立统一关系。阴阳不仅存在于自然界中，也体现在社会生活和人体健康等方面。阴阳的概念强调事物的对立性和统一性，即任何事物都可以分为阴阳两个方面，这两个方面既相互对立又相互依存。通过深入分析，我们可以发现阴阳的概念贯穿于中国传统文化和哲学的各个方面，是理解中国传统文化的重要钥匙。</think> 阴阳的概念是中国古代哲学中用来描述宇宙间事物对立统一关系的基本概念。它认为宇宙间的事物尽管种类繁多，但如果按照相反的属性划分，则可以分为对立的两类，即阴阳。阴阳不仅存在于自然界中，如白天为阳，黑夜为阴，也体现在社会生活和人体健康等方面。阴阳的概念强调事物的对立性和统一性，即任何事物都可以分为阴阳两个方面，这两个方面既相互对立又相互依存。\", \n",
    "\t\"repo_name\": \"\", \n",
    "\t\"prompt_tokens_len\": \"\", \n",
    "\t\"reasoning_content_tokens_len\": \"\", \n",
    "\t\"content_tokens_len\": \"\", \n",
    "\t\"score\": \"\"\n",
    "},\n",
    "```"
   ]
  },
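  {
   "cell_type": "markdown",
   "id": "3f1a2b4c-0003-4e10-9a21-5a0c1d2e3f42",
   "metadata": {},
   "source": [
    "按上面样例的约定，`<think>...</think>` 内是推理过程，其后是最终回答。解析时可以用一个小函数把两者拆开（示意代码，函数名为自拟）：\n",
    "\n",
    "```python\n",
    "import re\n",
    "\n",
    "def split_think(output):\n",
    "    \"\"\"把 '<think>推理</think>答案' 形式的 output 拆成 (推理, 答案)。\"\"\"\n",
    "    m = re.match(r'<think>(.*?)</think>\\s*(.*)', output, flags=re.S)\n",
    "    if m is None:\n",
    "        return '', output  # 没有推理段时，原样作为答案返回\n",
    "    return m.group(1), m.group(2)\n",
    "\n",
    "reasoning, answer = split_think('<think>先定义概念</think> 阴阳是中国古代哲学概念。')\n",
    "print(reasoning)  # 先定义概念\n",
    "```"
   ]
  },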
  {
   "cell_type": "markdown",
   "id": "7b175cd6-8f6a-4662-96f4-5d2862f136cb",
   "metadata": {},
   "source": [
    "## 基于LoRA的sft指令微调"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "54802ce3-218d-4af0-8a6f-f053eaf8dd92",
   "metadata": {},
   "source": [
    "#### LoRA（Low-Rank Adaptation）微调的实现原理"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "33b03591-bf9f-4dc7-898e-157db11a2b91",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "1. 背景\n",
    "\n",
    "在大模型（如 GPT、BERT、LLaMA 等）微调时，如果直接对全部参数进行更新，开销非常大（显存、计算、存储）。\n",
    "LoRA 的核心思想是：**冻结大模型的原始权重，只在部分层插入低秩矩阵进行训练，从而高效地适配下游任务**。\n",
    "\n",
    "---\n",
    "\n",
    "2. 核心思想\n",
    "\n",
    "以一个 **线性层** 为例：\n",
    "假设原始权重矩阵是\n",
    "\n",
    "$$\n",
    "W_0 \\in \\mathbb{R}^{d \\times k}\n",
    "$$\n",
    "\n",
    "在微调时，常规做法是训练一个全尺寸的 $\\Delta W$，更新后变成：\n",
    "\n",
    "$$\n",
    "W = W_0 + \\Delta W\n",
    "$$\n",
    "\n",
    "但 LoRA 认为：$\\Delta W$ 的变化通常存在**低秩性质**（冗余很大）。\n",
    "于是它将 $\\Delta W$ 分解为：\n",
    "\n",
    "$$\n",
    "\\Delta W = BA\n",
    "$$\n",
    "\n",
    "其中：\n",
    "\n",
    "* $B \\in \\mathbb{R}^{d \\times r}$\n",
    "* $A \\in \\mathbb{R}^{r \\times k}$\n",
    "* $r \\ll \\min(d, k)$（通常 r=4, 8, 16 等）\n",
    "\n",
    "这样 $\\Delta W$ 的参数量从 **$d \\times k$** 降到 **$r \\times (d + k)$**，大幅减少。\n",
    "\n",
    "---\n",
    "\n",
    "3. 前向传播过程\n",
    "\n",
    "微调时，权重更新为：\n",
    "\n",
    "$$\n",
    "h = W_0 x + \\Delta W x = W_0 x + BAx\n",
    "$$\n",
    "\n",
    "其中：\n",
    "\n",
    "* $W_0$：冻结，不更新\n",
    "* $BA$：可训练的低秩矩阵\n",
    "\n",
    "只训练 $A$ 和 $B$，梯度更新非常小。\n",
    "\n",
    "---\n",
    "\n",
    "4. 优点\n",
    "\n",
    "**参数效率高**\n",
    "\n",
    "   * 只需要训练少量参数（通常 <1%）。\n",
    "   * 例如，GPT-3（175B 参数），LoRA 只需几百万可训练参数。\n",
    "\n",
    "**显存占用低**\n",
    "\n",
    "   * 不需要保存全量梯度和优化器状态。\n",
    "\n",
    "**模块化**\n",
    "\n",
    "   * 训练好的 LoRA 适配器可以单独保存、快速加载，不影响原模型。\n",
    "   * 可以一个基座模型加载多个不同任务的 LoRA 权重（类似插件）。\n",
    "\n",
    "---\n",
    "\n",
    "5. 实际应用位置\n",
    "\n",
    "LoRA 一般插入到 **Transformer 的注意力层（Q, V 投影矩阵）**，因为这些位置最敏感，对任务迁移效果最好。\n",
    "有些实现也会插入到 **FFN（前馈层）**，但代价更大。\n",
    "\n",
    "---\n",
    "\n",
    "6. 简单类比\n",
    "\n",
    "可以理解为：\n",
    "\n",
    "* 原模型 $W_0$ = “大脑的核心记忆”，冻结不动\n",
    "* LoRA $BA$ = “小贴纸便签”，只在需要时补充少量信息\n",
    "* 好处是：不用重写整个大脑，只要贴几个便签就能适配新任务\n",
    "\n",
    "---\n",
    "\n",
    "7. 可视化对比参数量\n",
    "\n",
    "| 方法            | 训练参数量 (比例) |\n",
    "| ------------- | ---------- |\n",
    "| 全量微调          | 100%       |\n",
    "| Adapter       | \\~10%      |\n",
    "| Prefix Tuning | \\~1–3%     |\n",
    "| **LoRA**      | **0.1–1%** |\n",
    "\n",
    "---\n",
    "\n",
    "\n"
   ]
  },
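  {
   "cell_type": "markdown",
   "id": "3f1a2b4c-0004-4e10-9a21-5a0c1d2e3f43",
   "metadata": {},
   "source": [
    "上面的前向公式和参数量对比可以用几行 NumPy 验证（示意代码，d、k、r 为任取的小尺寸）：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "d, k, r = 64, 32, 8           # 原权重尺寸 d×k，LoRA 秩 r\n",
    "rng = np.random.default_rng(0)\n",
    "\n",
    "W0 = rng.normal(size=(d, k))  # 冻结的原始权重 W0\n",
    "B = np.zeros((d, r))          # B 初始化为 0，保证训练起点 ΔW = 0\n",
    "A = rng.normal(size=(r, k))   # A 随机初始化\n",
    "x = rng.normal(size=(k,))\n",
    "\n",
    "h = W0 @ x + B @ (A @ x)      # h = W0·x + BA·x\n",
    "\n",
    "full = d * k                  # 全量 ΔW 的参数量\n",
    "lora = r * (d + k)            # LoRA 的参数量（B 和 A 之和）\n",
    "print(full, lora)             # 2048 768\n",
    "```"
   ]
  },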
  {
   "cell_type": "markdown",
   "id": "c48c118a-0c9d-4545-9138-9d55292a7b08",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "在准备好数据集之后，我们就可以开始训练了。目标是让基座模型（本文以 Qwen3-4B 为例）学会我们定义的“你是谁”（identity），同时学会我们期望的商品文案生成风格。\n",
    "\n",
    "注意：微调过程中观察显存占用情况： \n",
    "\n",
    "```shell\n",
    "watch -n 1 nvidia-smi  # 每秒刷新一次（默认2秒，-n可调节间隔）\n",
    "watch -n 1 -d nvidia-smi  # 高亮显示变化的数值（适合观察动态波动） \n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5cff0eca-9439-4426-93ad-46fe4f00112d",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "### 命令行形式（可略过）\n",
    "\n",
    "这里先展示命令行版本的训练方式，从命令行参数更容易理解训练的相关原理。\n",
    "\n",
    "本脚本参数改编自[LLaMA-Factory/examples/train_lora/llama3_lora_sft.yaml](https://bgithub.xyz/hiyouga/LLaMA-Factory/blob/main/examples/train_lora/llama3_lora_sft.yaml)\n",
    "\n",
    "|          **场景**          |        推荐配置文件        |                          优势                           |\n",
    "| :------------------------: | :------------------------: | :-----------------------------------------------------: |\n",
    "|    单卡/单节点快速验证     |   `llama3_lora_sft.yaml`   |              配置简单，适合调试小规模任务               |\n",
    "|    多卡/多节点显存优化     | `llama3_lora_sft_ds3.yaml` | 通过 DeepSpeed ZeRO-3 分片显存，支持超大模型（如 70B+） |\n",
    "| 弹性分布式训练与自动化调参 | `llama3_lora_sft_ray.yaml` |     资源利用率高，适合云原生环境和大规模超参数搜索      |\n",
    "\n",
    "**Qwen3-4B**\n",
    "\n",
    "命令行：\n",
    "\n",
    "根据`llama3_lora_sft.yaml` \n",
    "\n",
    "```shell\n",
    "export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True  # 需 export 才能被下一条命令读取；使 PyTorch 的内存分配器更灵活地管理内存，减少碎片化影响\n",
    "CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \\\n",
    "    --stage sft \\\n",
    "    --do_train \\\n",
    "    --model_name_or_path /workspace/models/Qwen/Qwen3-4B \\\n",
    "    --dataset identity,adgen_local \\\n",
    "    --dataset_dir ./data \\\n",
    "    --template qwen3 \\\n",
    "    --finetuning_type lora \\\n",
    "    --output_dir /workspace/LLaMA-Factory/saves/Qwen/Qwen3-4B/lora/sft-adgen \\\n",
    "    --overwrite_cache \\\n",
    "    --overwrite_output_dir \\\n",
    "    --cutoff_len 1024 \\\n",
    "    --preprocessing_num_workers 16 \\\n",
    "    --per_device_train_batch_size 2 \\\n",
    "    --per_device_eval_batch_size 1 \\\n",
    "    --gradient_accumulation_steps 8 \\\n",
    "    --lr_scheduler_type cosine \\\n",
    "    --logging_steps 50 \\\n",
    "    --warmup_steps 20 \\\n",
    "    --save_steps 100 \\\n",
    "    --eval_steps 50 \\\n",
    "    --evaluation_strategy steps \\\n",
    "    --load_best_model_at_end \\\n",
    "    --learning_rate 5e-5 \\\n",
    "    --num_train_epochs 5.0 \\\n",
    "    --max_samples 1000 \\\n",
    "    --val_size 0.1 \\\n",
    "    --plot_loss \\\n",
    "    --fp16\n",
    "```\n",
    "\n",
    "假如出现报错： \n",
    "\n",
    "```shell\n",
    "[rank2]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 136.00 MiB. GPU 2 has a total capacity of 14.58 GiB of which 30.81 MiB is free. Including non-PyTorch memory, this process has 14.54 GiB memory in use. Of the allocated memory 13.79 GiB is allocated by PyTorch, and 565.19 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n",
    "```\n",
    "\n",
    "推荐引入 DeepSpeed，使用 ZeRO-2/ZeRO-3 对优化器状态、梯度和参数进行分片，降低单卡显存占用。\n",
    "\n",
    "```shell\n",
    "# 安装\n",
    "pip3 install deepspeed==0.16.9\n",
    "```\n",
    "\n",
    "ZeRO 是 DeepSpeed 提出的一种“零冗余优化器”（Zero Redundancy Optimizer），旨在通过将模型状态（优化器参数、梯度、参数权重）在多 GPU 上进行分片，从而显著减少冗余内存占用，提高训练大模型（包括上百亿乃至千亿参数）的能力与效率。\n",
    "\n",
    "ZeRO 各 Stage（**Stage 0 / Stage 2 / Stage 3**）的选择建议：\n",
    "\n",
    "- 小于 1B 参数 → ZeRO-0 就够了（简单稳定）。\n",
    "- 1B–4B 参数 → ZeRO-2，显存压力主要来自优化器状态。\n",
    "- 大于 4B 参数 或 单卡放不下参数 → ZeRO-3 是唯一选择。"
   ]
  },
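  {
   "cell_type": "markdown",
   "id": "3f1a2b4c-0005-4e10-9a21-5a0c1d2e3f44",
   "metadata": {},
   "source": [
    "作为参考，一个极简的 ZeRO-2 配置示意如下（仅列出关键字段，\"auto\" 表示由训练框架填充；实际训练请直接使用仓库自带的 `examples/deepspeed/ds_z2_config.json`）：\n",
    "\n",
    "```json\n",
    "{\n",
    "  \"train_batch_size\": \"auto\",\n",
    "  \"train_micro_batch_size_per_gpu\": \"auto\",\n",
    "  \"gradient_accumulation_steps\": \"auto\",\n",
    "  \"bf16\": { \"enabled\": \"auto\" },\n",
    "  \"zero_optimization\": {\n",
    "    \"stage\": 2,\n",
    "    \"overlap_comm\": true,\n",
    "    \"contiguous_gradients\": true\n",
    "  }\n",
    "}\n",
    "```"
   ]
  },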
  {
   "cell_type": "markdown",
   "id": "de6893bb-0a13-45d7-8732-2edfce021a7d",
   "metadata": {},
   "source": [
    "根据`llama3_lora_sft_ds3.yaml`，编写执行指令如下\n",
    "\n",
    "```shell\n",
    "export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True\n",
    "CUDA_VISIBLE_DEVICES=0,1,2,3 llamafactory-cli train \\\n",
    "    --stage sft \\\n",
    "    --do_train \\\n",
    "    --deepspeed examples/deepspeed/ds_z2_config.json \\\n",
    "    --model_name_or_path /workspace/models/Qwen/Qwen3-4B \\\n",
    "    --dataset identity,adgen_local \\\n",
    "    --dataset_dir ./data \\\n",
    "    --template qwen3 \\\n",
    "    --finetuning_type lora \\\n",
    "    --output_dir /workspace/LLaMA-Factory/saves/Qwen/Qwen3-4B/lora/sft-adgen \\\n",
    "    --overwrite_cache \\\n",
    "    --overwrite_output_dir \\\n",
    "    --cutoff_len 1024 \\\n",
    "    --preprocessing_num_workers 16 \\\n",
    "    --per_device_train_batch_size 2 \\\n",
    "    --per_device_eval_batch_size 1 \\\n",
    "    --gradient_accumulation_steps 8 \\\n",
    "    --lr_scheduler_type cosine \\\n",
    "    --logging_steps 50 \\\n",
    "    --warmup_steps 20 \\\n",
    "    --save_steps 100 \\\n",
    "    --eval_steps 50 \\\n",
    "    --evaluation_strategy steps \\\n",
    "    --load_best_model_at_end \\\n",
    "    --learning_rate 5e-5 \\\n",
    "    --num_train_epochs 5.0 \\\n",
    "    --max_samples 1000 \\\n",
    "    --val_size 0.1 \\\n",
    "    --plot_loss \\\n",
    "    --fp16\n",
    "```\n",
    "\n",
    "\n",
    "\n",
    "执行过程\n",
    "\n",
    "```\n",
    "[2025-03-27 16:32:46,037] [WARNING] [stage3.py:2139:step] 1 pytorch allocator cache flushes since last step. this happens when there is high memory pressure and is detrimental to performance. if this is happening frequently consider adjusting settings to reduce memory consumption. If you are unable to make the cache flushes go away consider adding get_accelerator().empty_cache() calls in your training loop to ensure that all ranks flush their caches at the same time\n",
    "  3%|███▏                                                                                                                     | 2/75 [01:36<57:38, 47.38s/it]\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c6779c50-9e6c-434b-8edd-e9cf385884e5",
   "metadata": {},
   "source": [
    "### 脚本方式（推荐）\n",
    "\n",
    "根据脚本:[LLaMA-Factory/examples/train_lora/llama3_lora_sft.yaml](https://bgithub.xyz/hiyouga/LLaMA-Factory/blob/main/examples/train_lora/llama3_lora_sft.yaml)进行编写\n",
    "\n",
    "#### 理解参考脚本\n",
    "\n",
    "`llama3_lora_sft_ds3.yaml`\n",
    "\n",
    "```yaml\n",
    "### model\n",
    "model_name_or_path: meta-llama/Llama-4-Scout-17B-16E-Instruct   # 预训练底模的权重名称/路径（HF Hub 或本地路径）\n",
    "trust_remote_code: true                                         # 允许加载模型仓库里自定义的代码；便于兼容特殊模型实现，但有安全风险\n",
    "\n",
    "### method\n",
    "stage: sft                                                       # 训练阶段：SFT（监督微调）\n",
    "do_train: true                                                   # 运行训练流程\n",
    "finetuning_type: lora                                            # 微调方式：LoRA/PEFT 参数高效微调\n",
    "lora_rank: 8                                                     # LoRA 低秩分解的秩（r）；越大可塑性越强、显存与计算稍增\n",
    "lora_target: all                                                 # 应用 LoRA 的目标层；all = 框架定义的所有线性层（更“重”，更灵活）\n",
    "deepspeed: examples/deepspeed/ds_z3_config.json                  # 使用 DeepSpeed 配置（这里是 ZeRO-3），内存/显存优化\n",
    "# choices: [ds_z0_config.json, ds_z2_config.json, ds_z3_config.json]\n",
    "\n",
    "### dataset\n",
    "dataset: mllm_demo,identity,alpaca_en_demo                      # 多数据集混合；通常默认等概率/等量采样（取决于框架）\n",
    "template: llama4                                                 # Prompt/Chat 模板名字（决定指令与回复的打包格式）\n",
    "cutoff_len: 2048                                                 # 每条样本的最大 token 长度（超出会被截断）\n",
    "max_samples: 1000                                                # 最多采 1000 条样本（便于快速实验/控制训练规模）\n",
    "overwrite_cache: true                                            # 预处理缓存可重写（变更模板或分词后建议开启）\n",
    "preprocessing_num_workers: 16                                    # 数据预处理的 CPU 线程数（tokenize 等）\n",
    "dataloader_num_workers: 4                                        # PyTorch DataLoader 的工作进程数（取决于 CPU/IO）\n",
    "\n",
    "### output\n",
    "output_dir: saves/llama4-8b/lora/sft                             # 输出目录（权重、日志、曲线等）\n",
    "logging_steps: 10                                                # 每 10 step 记录一次日志\n",
    "save_steps: 500                                                  # 每 500 step 保存一次 checkpoint\n",
    "plot_loss: true                                                  # 训练结束后绘制 loss 曲线图\n",
    "overwrite_output_dir: true                                       # 若目录已存在则覆盖（小心覆盖历史结果）\n",
    "save_only_model: false                                           # 保存除模型外的训练状态（优化器/调度器等）以便断点续训\n",
    "report_to: none                                                  # 训练日志上报：none / wandb / tensorboard / swanlab / mlflow\n",
    "\n",
    "### train\n",
    "per_device_train_batch_size: 1                                   # 单卡的“微批”大小（tokens*长度一起受显存约束）\n",
    "gradient_accumulation_steps: 2                                   # 梯度累积步数；等效于放大总 batch 而不增显存\n",
    "learning_rate: 1.0e-4                                           # 初始学习率（LoRA 场景 1e-4 常见）\n",
    "num_train_epochs: 3.0                                            # 训练轮数\n",
    "lr_scheduler_type: cosine                                        # 学习率调度器：余弦退火\n",
    "warmup_ratio: 0.1                                                # 预热比例（总步数的 10% 用于 warmup）\n",
    "bf16: true                                                       # 使用 bfloat16 训练（A100/4090 等支持；兼顾稳定与速度）\n",
    "ddp_timeout: 180000000                                           # DDP 初始化超时（非常大，避免慢机器超时）\n",
    "resume_from_checkpoint: null                                     # 断点续训路径（null=从头训练）\n",
    "\n",
    "### eval\n",
    "# eval_dataset: alpaca_en_demo                                   # 验证集名称（如启用评估）\n",
    "# val_size: 0.1                                                  # 从训练数据中切 10% 做验证（或使用独立 eval_dataset）\n",
    "# per_device_eval_batch_size: 1                                  # 单卡验证批量\n",
    "# eval_strategy: steps                                           # 评估触发策略：steps / epoch / no\n",
    "# eval_steps: 500                                                # 每 500 step 评估一次\n",
    "\n",
    "\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0e02e837-bc37-4042-a88b-b8d52b2711fe",
   "metadata": {},
   "source": [
    "参数说明"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b75683fa-5f05-4bb2-b6dd-2e0494dbe105",
   "metadata": {},
   "source": [
    "参数说明可参考项目官方说明： https://llamafactory.readthedocs.io/zh-cn/latest/advanced/arguments.html\n",
    "\n",
    "**核心参数解析与调整建议**\n",
    "\n",
    "**1. 模型与方法配置**\n",
    "\n",
    "- **`model_name_or_path`**: 指定预训练模型路径（如`meta-llama/Meta-Llama-3-8B-Instruct`），需确保路径正确。\n",
    "- `finetuning_type: lora`: 使用LoRA（低秩适配）微调，节省显存且效果接近全参数微调。\n",
    "  - **`lora_rank: 8`**: LoRA的秩（参数量），通常设为8-64，秩越小计算开销越低，但可能影响效果。\n",
    "  - **`lora_target: all`**: 对所有层微调，也可指定特定层（如`q_proj,v_proj`）以节省资源。\n",
    "- **`deepspeed`**: 使用DeepSpeed配置（如`ds_z3_config.json`）优化多卡训练，显存不足时可选`ds_z2`。\n",
    "\n",
    "**2. 数据与模板**\n",
    "\n",
    "- **`dataset`**: 数据集名称（如`identity,alpaca_en_demo`），需确保已注册到`dataset_info.json`。\n",
    "- **`template: llama3`**: 匹配Llama3的对话模板，若为其他模型需调整（如`default`或自定义）。\n",
    "- **`cutoff_len: 2048`**: 截断长度，根据任务需求调整（Llama3支持8192，但更长会占用更多显存）。\n",
    "- **`max_samples: 1000`**: 限制训练样本数，数据量大时可减少以加速实验。\n",
    "\n",
    "**3. 训练优化**\n",
    "\n",
    "- **`per_device_train_batch_size: 1`**: 单卡批次大小，显存不足时可设为1，通过`gradient_accumulation_steps`（如2）模拟更大批次。\n",
    "- **`learning_rate: 1e-4`**: LoRA学习率通常高于全参数微调（如5e-5），可尝试1e-4~5e-5。\n",
    "- **`num_train_epochs: 3.0`**: SFT任务通常3-10轮，数据量少时可增加轮次。\n",
    "- **`bf16: true`**: 使用BF16混合精度训练，A100/H100显卡推荐开启。\n",
    "- **`lr_scheduler_type: cosine`**: 学习率余弦衰减，适合长周期训练；小数据集可用`linear`或`constant_with_warmup`。\n",
    "\n",
    "**4. 日志与保存**\n",
    "\n",
    "- **`logging_steps: 10`**: 每10步记录日志，监控Loss波动情况。\n",
    "- **`plot_loss: true`**: 绘制损失曲线，若曲线快速下降后波动可能过拟合，需调整学习率或正则化。\n",
    "- **`save_steps: 500`**: 每500步保存检查点，频繁保存会占用存储。\n",
    "\n",
    "**灵活调整策略**\n",
    "\n",
    "1. **显存优化**\n",
    "   - 显存不足时：降低`batch_size`、启用梯度检查点（`gradient_checkpointing`）、使用4-bit量化（添加`--quantization_bit 4`）。\n",
    "   - 多卡训练：调整`deepspeed`阶段（如`stage2`分片优化器状态）。\n",
    "2. **防止过拟合**\n",
    "   - 增加`lora_dropout`（如0.1）或`weight_decay`（如0.01）。\n",
    "   - 减少`lora_alpha`（默认16，可设为`2*lora_rank`）。\n",
    "3. **效果提升**\n",
    "   - 扩展数据量：调整`max_samples`或增加数据集多样性。\n",
    "   - 微调特定层：如仅微调注意力层（`lora_target: q_proj,v_proj`）。\n",
    "4. **实验效率**\n",
    "   - 快速验证：用少量数据（`max_samples=100`）和1轮训练测试流程。\n",
    "   - 恢复训练：设置`resume_from_checkpoint`为检查点路径。\n",
    "\n",
    "**关键参数总结表**\n",
    "\n",
    "| 参数类别     | 参数名                  | 建议调整范围/值       | 作用说明                                       |\n",
    "| ------------ | ----------------------- | --------------------- | ---------------------------------------------- |\n",
    "| **LoRA配置** | `lora_rank`             | 8-64                  | 控制低秩矩阵参数量，影响微调效果与计算开销     |\n",
    "|              | `lora_alpha`            | 16（或`2*lora_rank`） | 缩放低秩矩阵贡献，值越大效果可能越强但易过拟合 |\n",
    "| **训练优化** | `learning_rate`         | 1e-4 ~ 5e-5           | LoRA学习率，需根据数据量调整                   |\n",
    "|              | `gradient_accumulation` | 2-8                   | 模拟更大批次，缓解显存压力                     |\n",
    "| **数据控制** | `cutoff_len`            | 1024-8192             | 根据任务需求平衡上下文长度与显存               |\n",
    "|              | `max_samples`           | 100-10000             | 限制训练数据量，加速实验                       |\n"
   ]
  },
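  {
   "cell_type": "markdown",
   "id": "3f1a2b4c-0006-4e10-9a21-5a0c1d2e3f45",
   "metadata": {},
   "source": [
    "上面多次提到“通过梯度累积模拟更大批次”，等效总批次的换算可以用一行算式核对（示意，num_gpus 按实际卡数代入）：\n",
    "\n",
    "```python\n",
    "# 等效总批次 = 单卡批次 × 梯度累积步数 × GPU 数\n",
    "per_device_train_batch_size = 1\n",
    "gradient_accumulation_steps = 8\n",
    "num_gpus = 1  # 假设单卡\n",
    "\n",
    "effective_batch = per_device_train_batch_size * gradient_accumulation_steps * num_gpus\n",
    "print(effective_batch)  # 8\n",
    "```"
   ]
  },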
  {
   "cell_type": "markdown",
   "id": "c37f6ea9-a890-43b5-8ec6-a762aec5557d",
   "metadata": {},
   "source": [
    "#### 微调脚本制作"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d2bae2a2-6d31-4977-ac76-9a418df1e294",
   "metadata": {},
   "source": [
    "单卡： `qwen3_4b_lora_sft.yaml`"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "15e68052-a4a4-41f2-b1ad-537e6e7be6dd",
   "metadata": {},
   "source": [
    "进入  `/workspace/LLaMA-Factory/examples/train_lora` \n",
    "这里预存了一系列脚本可供参考"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0d879a28-d3df-47f9-b2a0-5f6560ac6bfa",
   "metadata": {},
   "source": [
    "创建脚本 `qwen3_4b_lora_sft.yaml`"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "13dbeb76-b888-49d1-a00e-233f0ddf7361",
   "metadata": {},
   "source": [
    "\n",
    "```yaml\n",
    "### model\n",
    "model_name_or_path: /workspace/models/Qwen/Qwen3-4B\n",
    "trust_remote_code: true\n",
    "\n",
    "### method\n",
    "stage: sft\n",
    "do_train: true\n",
    "finetuning_type: lora\n",
    "lora_rank: 8\n",
    "lora_target: all\n",
    "\n",
    "### dataset\n",
    "dataset: identity,adgen_local\n",
    "template: qwen3\n",
    "cutoff_len: 2048\n",
    "max_samples: 1000\n",
    "overwrite_cache: true\n",
    "preprocessing_num_workers: 16\n",
    "dataloader_num_workers: 4\n",
    "\n",
    "### output\n",
    "output_dir: saves/qwen3-4b/lora/sft-adgen\n",
    "logging_steps: 10\n",
    "save_steps: 500\n",
    "plot_loss: true\n",
    "overwrite_output_dir: true\n",
    "save_only_model: false\n",
    "report_to: none  # choices: [none, wandb, tensorboard, swanlab, mlflow]\n",
    "\n",
    "### train\n",
    "per_device_train_batch_size: 1\n",
    "gradient_accumulation_steps: 8\n",
    "learning_rate: 1.0e-4\n",
    "num_train_epochs: 3.0\n",
    "lr_scheduler_type: cosine\n",
    "warmup_ratio: 0.1\n",
    "bf16: true\n",
    "ddp_timeout: 180000000\n",
    "resume_from_checkpoint: null\n",
    "\n",
    "### eval\n",
    "# eval_dataset: alpaca_en_demo\n",
    "# val_size: 0.1\n",
    "# per_device_eval_batch_size: 1\n",
    "# eval_strategy: steps\n",
    "# eval_steps: 500\n",
    "```\n",
    "\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "07f6b2f5-45c9-4be3-9de7-8cdbac690a17",
   "metadata": {},
   "source": [
    "补充：  \n",
    "如果是多卡，推荐使用 DeepSpeed。  \n",
    "创建脚本 `qwen3_4b_lora_sft_ds.yaml`  \n",
    "需要先安装 deepspeed"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c95d3f3b-f595-4b62-ac46-f8162486f57b",
   "metadata": {},
   "source": [
    "```yaml\n",
    "### model\n",
    "model_name_or_path: /workspace/models/Qwen/Qwen3-4B\n",
    "trust_remote_code: true\n",
    "\n",
    "### method\n",
    "stage: sft\n",
    "do_train: true\n",
    "finetuning_type: lora\n",
    "lora_rank: 8\n",
    "lora_target: all\n",
    "deepspeed: examples/deepspeed/ds_z3_config.json  # choices: [ds_z0_config.json, ds_z2_config.json, ds_z3_config.json]\n",
    "\n",
    "### dataset\n",
    "dataset: identity,adgen_local\n",
    "template: qwen3\n",
    "cutoff_len: 2048\n",
    "max_samples: 1000\n",
    "overwrite_cache: true\n",
    "preprocessing_num_workers: 16\n",
    "dataloader_num_workers: 4\n",
    "\n",
    "### output\n",
    "output_dir: saves/qwen3-4b/lora/sft-adgen\n",
    "logging_steps: 10\n",
    "save_steps: 500\n",
    "plot_loss: true\n",
    "overwrite_output_dir: true\n",
    "save_only_model: false\n",
    "report_to: none  # choices: [none, wandb, tensorboard, swanlab, mlflow]\n",
    "\n",
    "### train\n",
    "per_device_train_batch_size: 1\n",
    "gradient_accumulation_steps: 2\n",
    "learning_rate: 1.0e-4\n",
    "num_train_epochs: 3.0\n",
    "lr_scheduler_type: cosine\n",
    "warmup_ratio: 0.1\n",
    "bf16: true\n",
    "ddp_timeout: 180000000\n",
    "resume_from_checkpoint: null\n",
    "\n",
    "### eval\n",
    "# eval_dataset: alpaca_en_demo\n",
    "# val_size: 0.1\n",
    "# per_device_eval_batch_size: 1\n",
    "# eval_strategy: steps\n",
    "# eval_steps: 500\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "79178e10-c084-4cf7-8176-d313922bcdf3",
   "metadata": {},
   "source": [
    "\n",
    "#### 执行脚本\n",
    "\n",
    "```shell\n",
    "(env_sft) root@2c61cb3f8af3:/workspace/learn-llm-sft-easily/LLaMA-Factory# \n",
    "PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/train_lora/qwen3_4b_lora_sft.yaml \n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "57808462-8fa7-42b5-9320-e018f4397c75",
   "metadata": {},
   "source": [
    "参数解释\n",
    "\n",
    "关于参数的完整列表和解释可以通过如下命令来获取\n",
    "\n",
    "```text\n",
    "llamafactory-cli train -h\n",
    "```\n",
    "\n",
    "这里对部分关键参数做一下解释，`model_name_or_path` 和 `template` 上文已解释：\n",
    "\n",
    "| 参数名称                    | 参数说明                                                     |\n",
    "| --------------------------- | ------------------------------------------------------------ |\n",
    "| stage                       | 当前训练的阶段，枚举值，有 \"pt\"、\"sft\"、\"rm\"、\"ppo\" 等，代表训练的不同阶段；这里做有监督指令微调，所以是 sft |\n",
    "| do_train                    | 是否是训练模式                                               |\n",
    "| dataset                     | 使用的数据集列表，所有数据集都需要按上文在 dataset_info.json 里注册，多个数据集用\",\"分隔 |\n",
    "| dataset_dir                 | 数据集所在目录，这里是 data，也就是项目自带的data目录        |\n",
    "| finetuning_type             | 微调训练的类型，枚举值，有\"lora\",\"full\",\"freeze\"等，这里使用lora |\n",
    "| output_dir                  | 训练结果保存的位置                                           |\n",
    "| cutoff_len                  | 训练数据集的长度截断                                         |\n",
    "| per_device_train_batch_size | 每个设备上的batch size，最小是1，如果GPU 显存够大，可以适当增加 |\n",
    "| fp16                        | 使用半精度混合精度训练                                       |\n",
    "| max_samples                 | 每个数据集采样多少数据                                       |\n",
    "| val_size                    | 随机从数据集中抽取多少比例的数据作为验证集                   |\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b59f5022-0213-41aa-84c4-dc5da918721e",
   "metadata": {},
   "source": [
    "#### 微调过程\n",
    "\n",
    "```shell\n",
    "[INFO|trainer.py:2409] 2025-06-21 14:15:43,647 >> ***** Running training *****\n",
    "[INFO|trainer.py:2410] 2025-06-21 14:15:43,647 >>   Num examples = 1,091\n",
    "[INFO|trainer.py:2411] 2025-06-21 14:15:43,647 >>   Num Epochs = 3\n",
    "[INFO|trainer.py:2412] 2025-06-21 14:15:43,647 >>   Instantaneous batch size per device = 1\n",
    "[INFO|trainer.py:2415] 2025-06-21 14:15:43,647 >>   Total train batch size (w. parallel, distributed & accumulation) = 8\n",
    "[INFO|trainer.py:2416] 2025-06-21 14:15:43,647 >>   Gradient Accumulation steps = 2\n",
    "[INFO|trainer.py:2417] 2025-06-21 14:15:43,647 >>   Total optimization steps = 411\n",
    "[INFO|trainer.py:2418] 2025-06-21 14:15:43,654 >>   Number of trainable parameters = 16,515,072\n",
    "{'loss': 5.0027, 'grad_norm': 5.833753946176686, 'learning_rate': 2.1428571428571428e-05, 'epoch': 0.07}                                                                           \n",
    "{'loss': 4.2822, 'grad_norm': 2.6275760251509643, 'learning_rate': 4.523809523809524e-05, 'epoch': 0.15}                                                                           \n",
    "{'loss': 3.3337, 'grad_norm': 0.9703707562214435, 'learning_rate': 6.904761904761905e-05, 'epoch': 0.22}                                                                           \n",
    "{'loss': 3.0015, 'grad_norm': 0.9205104397406072, 'learning_rate': 9.285714285714286e-05, 'epoch': 0.29}                                                                           \n",
    "{'loss': 2.9012, 'grad_norm': 1.0176007901971018, 'learning_rate': 9.991123238414455e-05, 'epoch': 0.37}                                                                           \n",
    "{'loss': 2.9264, 'grad_norm': 1.0040465910118002, 'learning_rate': 9.947721081499068e-05, 'epoch': 0.44}                                                                           \n",
    "{'loss': 2.8365, 'grad_norm': 1.1419273277255915, 'learning_rate': 9.868477119388896e-05, 'epoch': 0.51}                                                                           \n",
    "{'loss': 2.7664, 'grad_norm': 1.078742903142043, 'learning_rate': 9.753965403572703e-05, 'epoch': 0.59}                                                                            \n",
    "{'loss': 2.7045, 'grad_norm': 0.9879826464613051, 'learning_rate': 9.605015468808651e-05, 'epoch': 0.66}                                                                           \n",
    "{'loss': 2.7684, 'grad_norm': 1.2054490406644007, 'learning_rate': 9.422706323888397e-05, 'epoch': 0.73}                                                                           \n",
    "{'loss': 2.6814, 'grad_norm': 1.1112793610864422, 'learning_rate': 9.208358635185373e-05, 'epoch': 0.81}                                                                           \n",
    "{'loss': 2.7342, 'grad_norm': 1.235240126788552, 'learning_rate': 8.963525159610465e-05, 'epoch': 0.88}                                                                            \n",
    "{'loss': 2.7398, 'grad_norm': 1.0976515178831519, 'learning_rate': 8.689979496279746e-05, 'epoch': 0.95}                                                                           \n",
    "{'loss': 2.7138, 'grad_norm': 1.1446414892782357, 'learning_rate': 8.389703238378339e-05, 'epoch': 1.02}                                                                           \n",
    "{'loss': 2.582, 'grad_norm': 1.2235500800327508, 'learning_rate': 8.064871618293646e-05, 'epoch': 1.1}                                                                             \n",
    "{'loss': 2.5825, 'grad_norm': 1.412679848545306, 'learning_rate': 7.717837750006106e-05, 'epoch': 1.17}                                                                            \n",
    "{'loss': 2.5803, 'grad_norm': 1.2398186171842849, 'learning_rate': 7.351115582887211e-05, 'epoch': 1.24}                                                                           \n",
    "{'loss': 2.6016, 'grad_norm': 1.1979579122377413, 'learning_rate': 6.967361690389258e-05, 'epoch': 1.32}                                                                           \n",
    "{'loss': 2.5891, 'grad_norm': 1.246123178138684, 'learning_rate': 6.569356025551454e-05, 'epoch': 1.39}                                                                            \n",
    "{'loss': 2.5743, 'grad_norm': 1.3414694365033863, 'learning_rate': 6.159981782731474e-05, 'epoch': 1.46}                                                                           \n",
    "{'loss': 2.546, 'grad_norm': 1.6533095948493972, 'learning_rate': 5.742204511446203e-05, 'epoch': 1.53}                                                                            \n",
    "{'loss': 2.5527, 'grad_norm': 1.5129355819119006, 'learning_rate': 5.319050633623142e-05, 'epoch': 1.61}                                                                           \n",
    "{'loss': 2.562, 'grad_norm': 1.5858800367113297, 'learning_rate': 4.893585519885764e-05, 'epoch': 1.68}                                                                            \n",
    "{'loss': 2.6205, 'grad_norm': 1.6228104769783118, 'learning_rate': 4.468891283690454e-05, 'epoch': 1.75}                                                                           \n",
    "{'loss': 2.587, 'grad_norm': 1.6435387377594144, 'learning_rate': 4.0480444541766576e-05, 'epoch': 1.83}                                                                           \n",
    "{'loss': 2.5742, 'grad_norm': 1.8327322820018979, 'learning_rate': 3.634093689470371e-05, 'epoch': 1.9}                                                                            \n",
    "{'loss': 2.5985, 'grad_norm': 1.5735425057929655, 'learning_rate': 3.2300376918881624e-05, 'epoch': 1.97}                                                                          \n",
    "{'loss': 2.5674, 'grad_norm': 1.6264015395716007, 'learning_rate': 2.8388034850262646e-05, 'epoch': 2.04}                                                                          \n",
    "{'loss': 2.4941, 'grad_norm': 1.6342855582056282, 'learning_rate': 2.4632252100977566e-05, 'epoch': 2.12}                                                                          \n",
    "{'loss': 2.4765, 'grad_norm': 1.775757585454628, 'learning_rate': 2.106023595119358e-05, 'epoch': 2.19}                                                                            \n",
    "{'loss': 2.4938, 'grad_norm': 1.8801417919999908, 'learning_rate': 1.7697862456752273e-05, 'epoch': 2.26}                                                                          \n",
    "{'loss': 2.4026, 'grad_norm': 1.805836836875837, 'learning_rate': 1.4569489000334436e-05, 'epoch': 2.34}                                                                           \n",
    "{'loss': 2.314, 'grad_norm': 1.9221780337919974, 'learning_rate': 1.1697777844051105e-05, 'epoch': 2.41}                                                                           \n",
    "{'loss': 2.3734, 'grad_norm': 1.9015073501977864, 'learning_rate': 9.103531961664118e-06, 'epoch': 2.48}                                                                           \n",
    "{'loss': 2.3468, 'grad_norm': 1.9663808144178434, 'learning_rate': 6.8055443396842945e-06, 'epoch': 2.56}                                                                          \n",
    "{'loss': 2.4596, 'grad_norm': 2.0739838202672556, 'learning_rate': 4.820461839026047e-06, 'epoch': 2.63}                                                                           \n",
    "{'loss': 2.4194, 'grad_norm': 1.8000235117136885, 'learning_rate': 3.162664603418608e-06, 'epoch': 2.7}                                                                            \n",
    "{'loss': 2.5315, 'grad_norm': 1.713198964943577, 'learning_rate': 1.8441618881519184e-06, 'epoch': 2.78}                                                                           \n",
    "{'loss': 2.5054, 'grad_norm': 1.8859713850375126, 'learning_rate': 8.745050637844532e-07, 'epoch': 2.85}                                                                           \n",
    "{'loss': 2.4813, 'grad_norm': 1.7709167803586203, 'learning_rate': 2.6071842502326527e-07, 'epoch': 2.92}                                                                          \n",
    "{'loss': 2.4142, 'grad_norm': 1.8519605184678747, 'learning_rate': 7.248306003865279e-09, 'epoch': 3.0}                                                                            \n",
    "100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 411/411 [39:19<00:00,  5.05s/it]\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4dbd3215-c083-4e78-a5b4-49c43741b8b5",
   "metadata": {},
   "source": [
    "##### 开始阶段"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "74a3e73a-ae69-4d7a-8c2b-96ae2d3e1884",
   "metadata": {},
   "source": [
    "\n",
    "```shell\n",
    "[INFO|2025-08-19 03:05:10] llamafactory.model.model_utils.checkpointing:143 >> Gradient checkpointing enabled.\n",
    "[INFO|2025-08-19 03:05:10] llamafactory.model.model_utils.attention:143 >> Using torch SDPA for faster training and inference.\n",
    "[INFO|2025-08-19 03:05:10] llamafactory.model.adapter:143 >> Upcasting trainable params to float32.\n",
    "[INFO|2025-08-19 03:05:10] llamafactory.model.adapter:143 >> Fine-tuning method: LoRA\n",
    "[INFO|2025-08-19 03:05:10] llamafactory.model.model_utils.misc:143 >> Found linear modules: up_proj,q_proj,k_proj,o_proj,down_proj,gate_proj,v_proj\n",
    "[INFO|2025-08-19 03:05:12] llamafactory.model.loader:143 >> trainable params: 16,515,072 || all params: 4,038,983,168 || trainable%: 0.4089\n",
    "[INFO|trainer.py:757] 2025-08-19 03:05:12,404 >> Using auto half precision backend\n",
    "[INFO|trainer.py:2433] 2025-08-19 03:05:12,850 >> ***** Running training *****\n",
    "[INFO|trainer.py:2434] 2025-08-19 03:05:12,850 >>   Num examples = 1,091\n",
    "[INFO|trainer.py:2435] 2025-08-19 03:05:12,850 >>   Num Epochs = 3\n",
    "[INFO|trainer.py:2436] 2025-08-19 03:05:12,850 >>   Instantaneous batch size per device = 1\n",
    "[INFO|trainer.py:2439] 2025-08-19 03:05:12,851 >>   Total train batch size (w. parallel, distributed & accumulation) = 8\n",
    "[INFO|trainer.py:2440] 2025-08-19 03:05:12,851 >>   Gradient Accumulation steps = 8\n",
    "[INFO|trainer.py:2441] 2025-08-19 03:05:12,851 >>   Total optimization steps = 411\n",
    "[INFO|trainer.py:2442] 2025-08-19 03:05:12,856 >>   Number of trainable parameters = 16,515,072\n",
    "{'loss': 4.9641, 'grad_norm': 5.727753639221191, 'learning_rate': 2.1428571428571428e-05, 'epoch': 0.07}                                                                 \n",
    "{'loss': 4.2433, 'grad_norm': 2.5929203033447266, 'learning_rate': 4.523809523809524e-05, 'epoch': 0.15}                                                                 \n",
    "{'loss': 3.3218, 'grad_norm': 0.9618009328842163, 'learning_rate': 6.904761904761905e-05, 'epoch': 0.22}                                                                                                   \n",
    "{'loss': 3.0061, 'grad_norm': 0.9214785695075989, 'learning_rate': 9.285714285714286e-05, 'epoch': 0.29}                                                                                                   \n",
    " 10%|████████████████▋                                                                                                                                                  | 42/411 [10:32<1:33:55, 15.27s/it]\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b43a189c-41a0-46ef-9141-b3a224fbeb72",
   "metadata": {},
   "source": [
    "##### 结束阶段"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "60fe214f-96b1-413f-9e52-83f6f1f4d5b4",
   "metadata": {},
   "source": [
    "```shell\n",
    "                                                                                                                         \n",
    "{'loss': 2.4794, 'grad_norm': 2.0868048667907715, 'learning_rate': 4.820461839026047e-06, 'epoch': 2.63}                                                                                                                      \n",
    "{'loss': 2.4289, 'grad_norm': 1.794489860534668, 'learning_rate': 3.162664603418608e-06, 'epoch': 2.7}                                                                                                                        \n",
    "{'loss': 2.5398, 'grad_norm': 1.7356864213943481, 'learning_rate': 1.8441618881519184e-06, 'epoch': 2.78}                                                                                                                            \n",
    "{'loss': 2.5234, 'grad_norm': 1.9091511964797974, 'learning_rate': 8.745050637844532e-07, 'epoch': 2.85}                                                                                                                             \n",
    "{'loss': 2.5072, 'grad_norm': 1.77633535861969, 'learning_rate': 2.6071842502326527e-07, 'epoch': 2.92}                                                                                                                              \n",
    "{'loss': 2.4394, 'grad_norm': 1.845839500427246, 'learning_rate': 7.248306003865279e-09, 'epoch': 3.0}                                                                                                                               \n",
    "100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 411/411 [1:43:47<00:00, 12.55s/it]\n",
    "\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "100891a4-707d-44f4-a7bd-97561f563641",
   "metadata": {},
   "source": [
    "**过程观察**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6cc0464d-8b18-421b-86fd-082db32470e0",
   "metadata": {},
   "source": [
    "> ### **1. 训练基础配置**\n",
    "> - **训练样本数**：`1,091` 条数据。\n",
    "> - **训练轮次（Epochs）**：`3` 轮完整遍历数据集。\n",
    "> - **单设备批量大小**：每个设备（如 GPU）的瞬时批量大小是 `1`（`per device = 1`）。\n",
    "> - **等效总批量大小**：通过梯度累积（Gradient Accumulation steps = 8），等效批量大小为 `8`。\n",
    ">   - 计算方式：`batch_size_per_device * gradient_accumulation_steps * num_devices = 1 * 8 * 1 = 8`（本次为单卡训练）。\n",
    "> - **总优化步数（Steps）**：`411` 步，即模型权重将更新 411 次。\n",
    "> ------\n",
    ">\n",
    "> ### **2. LoRA 可训练参数量**\n",
    "> - 可训练参数：`16,515,072`（约 1650 万），仅占全部参数的约 0.41%。\n",
    ">   - LoRA 冻结原始大模型参数，仅训练低秩矩阵，显著减少计算量。这里的参数量是 LoRA 适配器的参数，而非全量模型参数。\n",
    "> ------\n",
    ">\n",
    "> ### **3. 训练过程指标**\n",
    "> 日志中每步输出的字典包含以下关键指标：\n",
    "> - `loss`：训练损失（交叉熵损失），反映模型预测与真实标签的差异。\n",
    ">   - 初始损失较高（`5.0027`），随着训练逐渐下降（最低 `2.314`），表明模型在学习。\n",
    ">   - 后期损失波动（如 `2.5` 左右）可能因学习率调整或接近收敛。\n",
    "> - `grad_norm`：梯度范数，衡量参数更新的幅度。\n",
    ">   - 初始较大（`5.83`），后期稳定在 `1.0~2.0` 之间，说明优化过程相对稳定。\n",
    "> - `learning_rate`：动态调整的学习率。\n",
    ">   - 初始为 `2.14e-5`，逐渐上升至峰值 `9.99e-5`（warmup 阶段），之后按计划下降（如余弦退火）。\n",
    "> - `epoch`：当前训练进度（小数表示，如 `0.07` 表示第 1 轮的 7%）。\n",
    "> ------\n",
    ">\n",
    "> ### **4. 训练耗时**\n",
    "> - 总时间：39 分 19 秒完成 411 步，平均每步 5.05 秒。\n",
    ">   - 耗时受硬件（如 GPU 型号）、批量大小、梯度累积等因素影响。\n",
    "> ------\n",
    ">\n",
    "> ### **5. 关键现象分析**\n",
    "> - 学习率调度：采用了 **warmup + 衰减策略**（线性 warmup 到峰值后余弦下降），这是训练大模型的常见技巧。\n",
    "> - 损失下降趋势：初期快速下降，后期平缓，符合预期；未出现剧烈震荡，说明超参（如学习率、批量大小）设置合理。\n",
    "> - LoRA 效率：仅训练少量参数（约 1650 万），适合资源有限的场景，同时能保持模型性能。\n",
    "> ------\n",
    ">\n",
    "> ### **6. 可能的问题与建议**\n",
    "> - 后期损失仍有波动，若需进一步提升性能，可尝试：\n",
    ">   - 增加 LoRA 的 `rank`（扩大低秩矩阵维度，以计算量换取表现）。\n",
    ">   - 调整学习率调度（如延长 warmup 或降低初始学习率）。\n",
    "> - 等效批量 `8` 偏小，增大梯度累积步数或使用更多设备可提升训练稳定性。"
   ]
  },
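  {
   "cell_type": "markdown",
   "id": "e3f1c2a0-1a2b-4c3d-8e4f-aabb00000001",
   "metadata": {},
   "source": [
    "上面的等效批量和总步数可以手动推算验证（一个最小示意，假设单卡训练，数值均取自上方日志）：\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "num_examples = 1091     # Num examples（来自日志）\n",
    "per_device_batch = 1    # Instantaneous batch size per device\n",
    "grad_accum = 8          # Gradient Accumulation steps\n",
    "num_devices = 1         # 假设单卡\n",
    "epochs = 3\n",
    "\n",
    "effective_batch = per_device_batch * grad_accum * num_devices  # 8\n",
    "steps_per_epoch = math.ceil(num_examples / effective_batch)    # 137\n",
    "total_steps = steps_per_epoch * epochs                         # 411，与日志一致\n",
    "print(effective_batch, total_steps)\n",
    "```"
   ]
  },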
  {
   "cell_type": "markdown",
   "id": "4fe0528d-8fd8-4482-a373-27b4894d8706",
   "metadata": {},
   "source": [
    "#### 微调结果"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5eb3ed56-4884-43e1-b468-700a7029a13c",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "\n",
    "\n",
    "日志打印\n",
    "\n",
    "定时输出训练日志，包含当前loss，训练进度等\n",
    "\n",
    "```\n",
    "[INFO|tokenization_utils_base.py:2393] 2025-08-19 07:34:30,475 >> chat template saved in saves/qwen3-4b/lora/sft-adgen/chat_template.jinja\n",
    "[INFO|tokenization_utils_base.py:2562] 2025-08-19 07:34:30,476 >> tokenizer config file saved in saves/qwen3-4b/lora/sft-adgen/tokenizer_config.json\n",
    "[INFO|tokenization_utils_base.py:2571] 2025-08-19 07:34:30,476 >> Special tokens file saved in saves/qwen3-4b/lora/sft-adgen/special_tokens_map.json\n",
    "***** train metrics *****\n",
    "  epoch                    =        3.0\n",
    "  total_flos               =  8522312GF\n",
    "  train_loss               =     2.7358\n",
    "  train_runtime            = 1:43:48.33\n",
    "  train_samples_per_second =      0.526\n",
    "  train_steps_per_second   =      0.066\n",
    "Figure saved at: saves/qwen3-4b/lora/sft-adgen/training_loss.png\n",
    "[WARNING|2025-08-19 07:34:31] llamafactory.extras.ploting:148 >> No metric eval_loss to plot.\n",
    "[WARNING|2025-08-19 07:34:31] llamafactory.extras.ploting:148 >> No metric eval_accuracy to plot.\n",
    "[INFO|modelcard.py:456] 2025-08-19 07:34:31,201 >> Dropping the following result as it does not have all the necessary fields:\n",
    "{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}\n",
    "```\n",
    "\n",
    "\n"
   ]
  },
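  {
   "cell_type": "markdown",
   "id": "e3f1c2a0-1a2b-4c3d-8e4f-aabb00000002",
   "metadata": {},
   "source": [
    "上面的吞吐指标可以用日志中的数字交叉验证（train_runtime 为 1:43:48.33，样本数与步数取自训练日志）：\n",
    "\n",
    "```python\n",
    "runtime_s = 1 * 3600 + 43 * 60 + 48.33    # train_runtime = 1:43:48.33\n",
    "samples = 1091 * 3                        # 3 个 epoch 遍历的样本总数\n",
    "steps = 411                               # 总优化步数\n",
    "\n",
    "samples_per_second = samples / runtime_s  # ≈ 0.526，与日志一致\n",
    "steps_per_second = steps / runtime_s      # ≈ 0.066，与日志一致\n",
    "print(round(samples_per_second, 3), round(steps_per_second, 3))\n",
    "```"
   ]
  },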
  {
   "cell_type": "markdown",
   "id": "49cfb8e0-61b4-4efc-8f1b-1cb50bf9abdd",
   "metadata": {},
   "source": [
    "#### 微调结果文件\n",
    "\n",
    "```shell\n",
    "(env_sft) root@2c61cb3f8af3:/workspace/learn-llm-sft-easily/LLaMA-Factory/saves/qwen3-4b/lora/sft-adgen# ll \n",
    "total 80200\n",
    "drwxr-xr-x 3 root root     4096 Aug 19 07:34 ./\n",
    "drwxr-xr-x 3 root root     4096 Aug 19 05:31 ../\n",
    "-rw-r--r-- 1 root root     1324 Aug 19 07:34 README.md\n",
    "-rw-r--r-- 1 root root      862 Aug 19 07:34 adapter_config.json\n",
    "-rw-r--r-- 1 root root 66126768 Aug 19 07:34 adapter_model.safetensors\n",
    "-rw-r--r-- 1 root root      707 Aug 19 07:34 added_tokens.json\n",
    "-rw-r--r-- 1 root root      203 Aug 19 07:34 all_results.json\n",
    "-rw-r--r-- 1 root root     4168 Aug 19 07:34 chat_template.jinja\n",
    "drwxr-xr-x 2 root root     4096 Aug 19 07:34 checkpoint-411/\n",
    "-rw-r--r-- 1 root root  1671853 Aug 19 07:34 merges.txt\n",
    "-rw-r--r-- 1 root root      613 Aug 19 07:34 special_tokens_map.json\n",
    "-rw-r--r-- 1 root root 11422654 Aug 19 07:34 tokenizer.json\n",
    "-rw-r--r-- 1 root root     5431 Aug 19 07:34 tokenizer_config.json\n",
    "-rw-r--r-- 1 root root      203 Aug 19 07:34 train_results.json\n",
    "-rw-r--r-- 1 root root     8062 Aug 19 07:34 trainer_log.jsonl\n",
    "-rw-r--r-- 1 root root     8076 Aug 19 07:34 trainer_state.json\n",
    "-rw-r--r-- 1 root root     6161 Aug 19 07:34 training_args.bin\n",
    "-rw-r--r-- 1 root root    34405 Aug 19 07:34 training_loss.png\n",
    "-rw-r--r-- 1 root root  2776833 Aug 19 07:34 vocab.json\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "be41b04e-c733-4be9-a9c3-95111e06aa5b",
   "metadata": {},
   "source": [
    "---\n",
    "**文件说明清单**\n",
    "\n",
    "1. **adapter_model.safetensors**  \n",
    "   • **用途**：LoRA适配器的微调模型权重文件  \n",
    "   • **内容**：以安全张量格式（safetensors）存储的轻量级模型参数增量  \n",
    "   • **重要性**：核心文件，用于推理或继续训练\n",
    "\n",
    "2. **adapter_config.json**  \n",
    "   • **用途**：LoRA适配器的配置参数  \n",
    "   • **关键字段**：`r`（秩维度）、`target_modules`（目标层）、`bias`（偏置类型）  \n",
    "   • **示例**（一个 `r=8`、仅作用于 `q_proj`/`v_proj` 的配置；JSON 本身不支持注释，实际内容以本次训练生成的文件为准）：  \n",
    "     ```json\n",
    "     {\"r\": 8, \"lora_alpha\": 16, \"target_modules\": [\"q_proj\",\"v_proj\"]}\n",
    "     ```\n",
    "\n",
    "3. **checkpoint-411**（目录）  \n",
    "   • **用途**：第 411 步（即最后一步）的训练检查点  \n",
    "   • **包含文件**：  \n",
    "     ◦ `adapter_model.safetensors`（LoRA 适配器权重，而非完整模型权重）  \n",
    "     ◦ `optimizer.pt`（优化器状态）  \n",
    "     ◦ `scheduler.pt`（学习率调度器状态）  \n",
    "   • **说明**：可用于断点续训或模型回滚\n",
    "\n",
    "4. **training_loss.png**  \n",
    "   • **用途**：训练损失曲线可视化  \n",
    "   • **解读要点**：  \n",
    "     ◦ X轴：训练步数（steps）  \n",
    "     ◦ Y轴：损失值（loss）  \n",
    "     ◦ 典型特征：观察是否收敛/过拟合\n",
    "\n",
    "5. **tokenizer_config.json**  \n",
    "   • **用途**：分词器的核心配置  \n",
    "   • **关键参数**：`pad_token`、`bos_token`、`eos_token`  \n",
    "   • **特殊说明**：需与预训练模型的分词器配置保持一致\n",
    "\n",
    "6. **trainer_state.json**  \n",
    "   • **用途**：训练过程的完整状态记录  \n",
    "   • **关键信息**：  \n",
    "     ◦ `epoch`：当前训练轮次  \n",
    "     ◦ `step`：全局训练步数  \n",
    "     ◦ `log_history`：各阶段评估指标\n",
    "\n",
    "7. **all_results.json**  \n",
    "   • **用途**：最终训练/评估结果汇总  \n",
    "   • **说明**：本次训练未配置验证集（日志中有 `No metric eval_loss to plot` 的提示），因此只包含 `train_loss`、`train_runtime` 等训练指标；若配置了评估，还会包含 `eval_loss` 等  \n",
    "   • **示例数据**（对应上文训练日志）：  \n",
    "     ```json\n",
    "     {\"epoch\": 3.0, \"train_loss\": 2.7358, \"train_samples_per_second\": 0.526}\n",
    "     ```\n",
    "\n",
    "8. **README.md**  \n",
    "   • **用途**：自动生成的模型卡片（model card），记录基座模型、训练方法等基本信息\n",
    "   \n",
    "9. **特殊文件说明**  \n",
    "   | 文件                      | 说明                                                 |\n",
    "   | ------------------------- | ---------------------------------------------------- |\n",
    "   | `special_tokens_map.json` | 定义特殊标记（如`<|im_start|>`）的映射关系           |\n",
    "   | `trainer_log.jsonl`       | 按行存储的实时训练日志（JSON Lines格式）             |\n",
    "   | `training_args.bin`       | 序列化保存的训练超参数（如batch_size/learning_rate） |\n"
   ]
  },
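  {
   "cell_type": "markdown",
   "id": "e3f1c2a0-1a2b-4c3d-8e4f-aabb00000003",
   "metadata": {},
   "source": [
    "`trainer_log.jsonl` 是按行存储的 JSON（JSON Lines）格式，可以逐行解析用于自定义的损失曲线分析。下面是一个最小示例，用一行示例日志代替真实文件，字段名以实际文件内容为准：\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "# 真实使用时可逐行读取 saves/qwen3-4b/lora/sft-adgen/trainer_log.jsonl\n",
    "sample_line = '{\"current_steps\": 10, \"total_steps\": 411, \"loss\": 4.9641, \"epoch\": 0.07}'\n",
    "record = json.loads(sample_line)\n",
    "print(record['loss'], record['current_steps'])\n",
    "```"
   ]
  },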
  {
   "cell_type": "markdown",
   "id": "441d731a-f0f8-43b0-b564-d4bd3e411185",
   "metadata": {},
   "source": [
    "#### LoRA模型合并导出\n",
    "\n",
    "如果想把训练得到的 LoRA 和原始的大模型进行融合，输出一个完整的模型文件，可以使用如下命令。合并后的模型可以像原始模型一样自由地应用到其他下游环节，也可以继续作为基座模型用于下一轮训练。\n",
    "\n",
    "本脚本参数改编自 [LLaMA-Factory/examples/merge_lora/llama3_lora_sft.yaml at main · hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory/blob/main/examples/merge_lora/llama3_lora_sft.yaml)\n",
    "\n",
    "```shell\n",
    "llamafactory-cli export \\\n",
    "    --model_name_or_path /workspace/models/Qwen/Qwen3-4B \\\n",
    "    --adapter_name_or_path /workspace/LLaMA-Factory/saves/qwen3-4b/lora/sft-adgen  \\\n",
    "    --template qwen3 \\\n",
    "    --finetuning_type lora \\\n",
    "    --export_dir /workspace/LLaMA-Factory/saves/qwen3-4b/lora/megred-model-path-adgen \\\n",
    "    --export_size 2 \\\n",
    "    --export_device cpu \\\n",
    "    --export_legacy_format False\n",
    "```\n",
    "\n",
    "结果\n",
    "\n",
    "```shell\n",
    "(env_sft) root@2c61cb3f8af3:/workspace/LLaMA-Factory/saves/qwen3-4b/lora/megred-model-path-adgen# ll\n",
    "total 7872036\n",
    "drwxr-xr-x 2 root root       4096 Aug 19 08:26 ./\n",
    "drwxr-xr-x 4 root root       4096 Aug 19 08:26 ../\n",
    "-rw-r--r-- 1 root root        381 Aug 19 08:26 Modelfile\n",
    "-rw-r--r-- 1 root root        707 Aug 19 08:26 added_tokens.json\n",
    "-rw-r--r-- 1 root root       4168 Aug 19 08:26 chat_template.jinja\n",
    "-rw-r--r-- 1 root root       1542 Aug 19 08:26 config.json\n",
    "-rw-r--r-- 1 root root        214 Aug 19 08:26 generation_config.json\n",
    "-rw-r--r-- 1 root root    1671853 Aug 19 08:26 merges.txt\n",
    "-rw-r--r-- 1 root root 1989089792 Aug 19 08:26 model-00001-of-00005.safetensors\n",
    "-rw-r--r-- 1 root root 1968810960 Aug 19 08:26 model-00002-of-00005.safetensors\n",
    "-rw-r--r-- 1 root root 1968821480 Aug 19 08:26 model-00003-of-00005.safetensors\n",
    "-rw-r--r-- 1 root root 1968821480 Aug 19 08:26 model-00004-of-00005.safetensors\n",
    "-rw-r--r-- 1 root root  149438120 Aug 19 08:26 model-00005-of-00005.safetensors\n",
    "-rw-r--r-- 1 root root      32855 Aug 19 08:26 model.safetensors.index.json\n",
    "-rw-r--r-- 1 root root        613 Aug 19 08:26 special_tokens_map.json\n",
    "-rw-r--r-- 1 root root   11422654 Aug 19 08:26 tokenizer.json\n",
    "-rw-r--r-- 1 root root       5430 Aug 19 08:26 tokenizer_config.json\n",
    "-rw-r--r-- 1 root root    2776833 Aug 19 08:26 vocab.json\n",
    "```"
   ]
  },
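  {
   "cell_type": "markdown",
   "id": "e3f1c2a0-1a2b-4c3d-8e4f-aabb00000004",
   "metadata": {},
   "source": [
    "分片文件的总大小可以和参数量对上：4,022,468,096 个参数按 bfloat16 每参数 2 字节计算约 8.04 GB。下面做一个粗略验证（safetensors 文件头会带来少量额外字节）：\n",
    "\n",
    "```python\n",
    "# 五个分片的字节数，来自上面的 ll 输出\n",
    "shard_bytes = [1989089792, 1968810960, 1968821480, 1968821480, 149438120]\n",
    "total_bytes = sum(shard_bytes)\n",
    "\n",
    "params = 4_022_468_096             # 日志中的 all params\n",
    "expected = params * 2              # bf16 每参数 2 字节\n",
    "overhead = total_bytes - expected  # safetensors 头部等少量开销\n",
    "print(total_bytes, overhead)\n",
    "```"
   ]
  },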
  {
   "cell_type": "markdown",
   "id": "b7adea21-5b1b-449e-8e0d-1d74892f63dd",
   "metadata": {},
   "source": [
    "日志打印\n",
    "\n",
    "\n",
    "\n",
    "```shell\n",
    "\n",
    "[INFO|2025-08-19 08:26:02] llamafactory.model.model_utils.attention:143 >> Using torch SDPA for faster training and inference.\n",
    "[INFO|2025-08-19 08:26:35] llamafactory.model.adapter:143 >> Merged 1 adapter(s).\n",
    "[INFO|2025-08-19 08:26:35] llamafactory.model.adapter:143 >> Loaded adapter(s): /workspace/learn-llm-sft-easily/LLaMA-Factory/saves/qwen3-4b/lora/sft-adgen\n",
    "[INFO|2025-08-19 08:26:35] llamafactory.model.loader:143 >> all params: 4,022,468,096\n",
    "[INFO|2025-08-19 08:26:35] llamafactory.train.tuner:143 >> Convert model dtype to: torch.bfloat16.\n",
    "[INFO|configuration_utils.py:478] 2025-08-19 08:26:35,528 >> Configuration saved in /workspace/learn-llm-sft-easily/LLaMA-Factory/saves/qwen3-4b/lora/megred-model-path-adgen/config.json\n",
    "[INFO|configuration_utils.py:869] 2025-08-19 08:26:35,529 >> Configuration saved in /workspace/learn-llm-sft-easily/LLaMA-Factory/saves/qwen3-4b/lora/megred-model-path-adgen/generation_config.json\n",
    "[INFO|modeling_utils.py:4180] 2025-08-19 08:26:50,138 >> The model is bigger than the maximum size per checkpoint (2GB) and is going to be split in 5 checkpoint shards. You can find where each parameters has been saved in the index located at /workspace/learn-llm-sft-easily/LLaMA-Factory/saves/qwen3-4b/lora/megred-model-path-adgen/model.safetensors.index.json.\n",
    "[INFO|tokenization_utils_base.py:2393] 2025-08-19 08:26:50,139 >> chat template saved in /workspace/learn-llm-sft-easily/LLaMA-Factory/saves/qwen3-4b/lora/megred-model-path-adgen/chat_template.jinja\n",
    "[INFO|tokenization_utils_base.py:2562] 2025-08-19 08:26:50,140 >> tokenizer config file saved in /workspace/learn-llm-sft-easily/LLaMA-Factory/saves/qwen3-4b/lora/megred-model-path-adgen/tokenizer_config.json\n",
    "[INFO|tokenization_utils_base.py:2571] 2025-08-19 08:26:50,140 >> Special tokens file saved in /workspace/learn-llm-sft-easily/LLaMA-Factory/saves/qwen3-4b/lora/megred-model-path-adgen/special_tokens_map.json\n",
    "[INFO|2025-08-19 08:26:50] llamafactory.train.tuner:143 >> Ollama modelfile saved in /workspace/learn-llm-sft-easily/LLaMA-Factory/saves/qwen3-4b/lora/megred-model-path-adgen/Modelfile\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8137a79d-88c5-403f-9435-30203835dc50",
   "metadata": {},
   "source": [
    "## 微调后模型推理\n",
    "\n",
    "### 启动推理服务\n",
    "```shell\n",
    "PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True \\\n",
    "CUDA_VISIBLE_DEVICES=1 \\\n",
    "vllm serve /workspace/LLaMA-Factory/saves/qwen3-4b/lora/megred-model-path-adgen \\\n",
    "--port 8082 \\\n",
    "--max-model-len 16384 \\\n",
    "--tensor-parallel-size 1 \\\n",
    "--trust-remote-code \\\n",
    "--served-model-name my_sft_model \\\n",
    "--dtype=half \\\n",
    "--api-key token-abc123 \\\n",
    "--gpu-memory-utilization 0.8 \n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "600e891d-96c2-4837-aa51-02aab1aae7c4",
   "metadata": {},
   "source": [
    "### 调用验证\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "43989f80-d0bf-4c18-b851-853d191f5ae1",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'<think>\\n\\n</think>\\n\\n我叫 商品文案生成助手，由 LLaMA Factory 开发。我的任务是为用户提供帮助和解决方案，以满足他们的需求。'"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from openai import OpenAI\n",
    "client = OpenAI(\n",
    "    base_url=\"http://localhost:8082/v1\",\n",
    "    api_key=\"token-abc123\",\n",
    ")\n",
    "\n",
    "\n",
    "prompt = '你是谁？/no_think'\n",
    "messages = [{\"role\":\"user\", \"content\":prompt}]\n",
    "response = client.chat.completions.create(\n",
    "    model = 'my_sft_model',\n",
    "    messages = messages,\n",
    "    temperature=0.95\n",
    ")\n",
    "\n",
    "response.choices[0].message.content"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fb3f80d2-6b4a-4675-8709-aca796898fb3",
   "metadata": {},
   "source": [
    "回答结果：  \n",
    "```bash\n",
    "'<think>\\n\\n</think>\\n\\n我叫 商品文案生成助手，由 LLaMA Factory 开发。我的任务是为用户提供帮助和解决方案，以满足他们的需求。'\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "b567bf34-aae8-4cc2-89a3-975faf6e9841",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'<think>\\n\\n</think>\\n\\n<UNK>牛仔裤裤身采用深蓝色，上身显肤色白皙，显高显瘦。高腰的版型设计，让腿部看起来更加修长。裤身两侧的口袋，既增加了裤子的美观度，还能装下许多物品。后背的插袋，装饰了牛仔裤，增添了时尚感。'"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from openai import OpenAI\n",
    "client = OpenAI(\n",
    "    base_url=\"http://localhost:8082/v1\",\n",
    "    api_key=\"token-abc123\",\n",
    ")\n",
    "\n",
    "\n",
    "prompt = '类型#裤*版型#显瘦*材质#牛仔布*颜色#深蓝色*裤腰型#高腰'\n",
    "messages = [{\"role\":\"user\", \"content\":prompt}]\n",
    "response = client.chat.completions.create(\n",
    "    model = 'my_sft_model',\n",
    "    messages = messages,\n",
    "    temperature=0.95\n",
    ")\n",
    "\n",
    "response.choices[0].message.content"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "44fe2035-8a3f-4951-853c-54b6e8c25af4",
   "metadata": {},
   "source": [
    "回答结果：  \n",
    "```bash\n",
    "'<think>\\n\\n</think>\\n\\n这款牛仔裤精选优质牛仔面料，手感细腻舒适，亲肤透气。深蓝色，更衬肤色白皙。高腰的设计，凸显腰线，打造精致的纤细腰身，让你显高显瘦。两侧插袋的设计，为牛仔裤增添俏皮感。版型修身，修饰腿部线条。'\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1581fcf5-e931-454b-8cfe-6ddb52bda34d",
   "metadata": {},
   "source": [
    "## 一站式webui board的使用\n",
    "\n",
    "到这里，恭喜你完成了 LLaMA-Factory 训练框架的基础使用。那还有什么内容是没有介绍的呢？还有很多！这里介绍一个在提升交互体验上有重要作用的功能：支持模型训练全链路的一站式 WebUI board。一个好的产品离不开好的交互，Stable Diffusion 大放异彩的重要原因除了强大的内容输出效果，就是它有一个好用的 WebUI。这个 board 将训练大模型的主要链路和操作整合在一个页面中，所有参数都可以可视化地编辑和操作。\n",
    "\n",
    "通过以下命令启动\n",
    "\n",
    "注意：目前webui版本只支持单机单卡和单机多卡，如果是多机多卡请使用命令行版本\n",
    "\n",
    "```shell\n",
    "CUDA_VISIBLE_DEVICES=0,1,2,3 llamafactory-cli webui\n",
    "```\n",
    "\n",
    "如果要开启 gradio的share功能，或者修改端口号\n",
    "\n",
    "```shell\n",
    "CUDA_VISIBLE_DEVICES=0,1,2,3 GRADIO_SHARE=1 GRADIO_SERVER_PORT=7860 llamafactory-cli webui\n",
    "```\n",
    "\n",
    "如图所示，上述多个功能模块都通过不同的 tab 进行了整合，提供了一站式的操作体验。\n",
    "\n",
    "![img](https://pic4.zhimg.com/v2-a1de61e1483e65fc7b237de43bb437fd_1440w.jpg)\n",
    "\n",
    "当各种参数配置好后，在train页面，可以通过预览命令功能，将训练脚本导出，用于支持多gpu训练\n",
    "\n",
    "![img](https://pic3.zhimg.com/v2-f0f30aba4c6280a4c54aa599f41fa292_1440w.jpg)\n",
    "\n",
    "点击开始按钮，即可开始训练，网页端和服务器端会同步输出相关的日志结果。\n",
    "\n",
    "![img](https://pic2.zhimg.com/v2-3696353c7c0eea5081314ab75b257b29_1440w.jpg)\n",
    "\n",
    "训练完毕后，点击“刷新适配器”，即可找到该模型历史上使用 webui 训练的 LoRA 模型文件；后续再训练或者执行 chat 的时候，即会将此 LoRA 一起加载。\n",
    "\n",
    "![img](https://picx.zhimg.com/v2-2278e5aef3e17b8584bbc2838368a85d_1440w.jpg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "629805de-b48f-4c0e-9bc2-f3d54cb0352c",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (env_sft)",
   "language": "python",
   "name": "env_sft"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
