{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "a617005e",
   "metadata": {},
   "source": [
     "## Import Dependencies\n",
     "\n",
     "Import the required libraries, chiefly mindnlp, transformers, and peft. transformers provides the standard Transformer model architectures, and peft implements the LoRA fine-tuning workflow."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "06cb221f",
   "metadata": {},
   "outputs": [],
   "source": [
    "import mindnlp\n",
    "import mindspore\n",
    "from mindnlp import core\n",
    "from datasets import Dataset\n",
    "import pandas as pd\n",
    "from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForSeq2Seq, TrainingArguments, Trainer, GenerationConfig\n",
    "from peft import LoraConfig, TaskType, get_peft_model, PeftModel"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "585597d7",
   "metadata": {},
   "source": [
     "## Data Format Conversion\n",
     "\n",
     "pd.read_json() reads the JSON file into a pandas DataFrame, and Dataset.from_pandas() converts the DataFrame into a Hugging Face Dataset object."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "64234c61",
   "metadata": {},
   "outputs": [],
   "source": [
     "# Load the data and convert its format\n",
    "df = pd.read_json('./old_fashion2.0.json')\n",
    "ds = Dataset.from_pandas(df)\n",
    "ds[:3]"
   ]
  },
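   {
    "cell_type": "markdown",
    "id": "d41a2f10",
    "metadata": {},
    "source": [
     "The loader above expects each JSON record to carry instruction, input, and output fields (the field names process_func relies on below). As a minimal sketch with hypothetical sample rows, an equivalent in-memory dataset can be built like this:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "d41a2f11",
    "metadata": {},
    "outputs": [],
    "source": [
     "# Hypothetical sample rows mirroring the expected JSON schema\n",
     "sample_rows = [\n",
     "    {\"instruction\": \"你是谁？\", \"input\": \"\", \"output\": \"我是甄嬛。\"},\n",
     "    {\"instruction\": \"介绍一下你的家世\", \"input\": \"\", \"output\": \"家父是大理寺少卿甄远道。\"},\n",
     "]\n",
     "demo_ds = Dataset.from_pandas(pd.DataFrame(sample_rows))\n",
     "print(demo_ds.column_names)  # ['instruction', 'input', 'output']\n",
     "print(len(demo_ds))          # 2"
    ]
   },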
  {
   "cell_type": "markdown",
   "id": "2015af1c",
   "metadata": {},
   "source": [
     "## Instantiate the Tokenizer\n",
     "\n",
     "Instantiate the DeepSeek-R1-Distill-Qwen-1.5B tokenizer, which is used in the data-preprocessing step that follows."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "eff97fa3",
   "metadata": {},
   "outputs": [],
   "source": [
     "# Instantiate the tokenizer\n",
    "tokenizer = AutoTokenizer.from_pretrained('deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B', use_fast=False, trust_remote_code=True)\n",
    "tokenizer"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "59c1dbe3",
   "metadata": {},
   "source": [
     "## Data Preprocessing\n",
     "\n",
     "process_func formats each user instruction (instruction), input (input), and expected model reply (output) into training-ready inputs for the language model. It performs the following steps:\n",
     "1. Chat-template construction: organize the conversation in the format system: [instruction], User: [input], Assistant: [reply]\n",
     "2. Tokenization and encoding: convert the text into token-id sequences the model understands, using the tokenizer instantiated above\n",
     "3. Length control: truncate over-long sequences\n",
     "4. Mask and label generation: mark which tokens are valid and which part the model should learn to predict\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "08a1e40c",
   "metadata": {},
   "outputs": [],
   "source": [
    "def process_func(example):\n",
     "    MAX_LENGTH = 384    # the tokenizer may split a single Chinese character into several tokens, so allow extra headroom to keep the data intact\n",
     "    input_ids, attention_mask, labels = [], [], []\n",
     "    instruction = tokenizer(f\"<|im_start|>system\\n现在你要扮演皇帝身边的女人--甄嬛<|im_end|>\\n<|im_start|>user\\n{example['instruction'] + example['input']}<|im_end|>\\n<|im_start|>assistant\\n\", add_special_tokens=False)  # add_special_tokens=False: do not prepend special tokens\n",
     "    response = tokenizer(f\"{example['output']}\", add_special_tokens=False)\n",
     "    input_ids = instruction[\"input_ids\"] + response[\"input_ids\"] + [tokenizer.pad_token_id]\n",
     "    attention_mask = instruction[\"attention_mask\"] + response[\"attention_mask\"] + [1]  # the trailing eos token should also be attended to, so append 1\n",
     "    labels = [-100] * len(instruction[\"input_ids\"]) + response[\"input_ids\"] + [tokenizer.pad_token_id]\n",
     "    if len(input_ids) > MAX_LENGTH:  # truncate over-long sequences\n",
    "        input_ids = input_ids[:MAX_LENGTH]\n",
    "        attention_mask = attention_mask[:MAX_LENGTH]\n",
    "        labels = labels[:MAX_LENGTH]\n",
    "    return {\n",
    "        \"input_ids\": input_ids,\n",
    "        \"attention_mask\": attention_mask,\n",
    "        \"labels\": labels\n",
    "    }\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "afa0d564",
   "metadata": {},
   "outputs": [],
   "source": [
    "tokenized_id = ds.map(process_func, remove_columns=ds.column_names)\n",
    "tokenized_id"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5d86d031",
   "metadata": {},
   "outputs": [],
   "source": [
    "tokenizer.decode(tokenized_id[0]['input_ids'])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "63af5616",
   "metadata": {},
   "source": [
     "## Model Loading\n",
     "\n",
     "Before configuring LoRA, the base model must be loaded; this experiment uses DeepSeek-R1-Distill-Qwen-1.5B. Loading downloads the model weights from a mirror site, which takes a while, so please be patient."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "80ad776d",
   "metadata": {},
   "outputs": [],
   "source": [
     "# Load the base model\n",
     "model = AutoModelForCausalLM.from_pretrained('deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B', ms_dtype=mindspore.bfloat16, device_map=0)\n",
     "\n",
     "# This method must be called when gradient checkpointing is enabled\n",
    "model.enable_input_require_grads()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c3da2575",
   "metadata": {},
   "source": [
     "## Inference Before Fine-Tuning\n",
     "\n",
     "Run inference to check the base model's behavior."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2ade8cd2",
   "metadata": {},
   "outputs": [],
   "source": [
    "# host to device\n",
    "model = model.npu()\n",
    "\n",
    "prompt = \"你是谁？\"\n",
    "inputs = tokenizer.apply_chat_template([{\"role\": \"system\", \"content\": \"现在你要扮演一位温雅谦和、辞气含蓄的古风小生，言辞清雅而不失分寸\"},{\"role\": \"user\", \"content\": prompt}],\n",
    "                                       add_generation_prompt=True,\n",
    "                                       tokenize=True,\n",
    "                                       return_tensors=\"ms\",\n",
    "                                       return_dict=True\n",
     "                                       ).to(model.device)\n",
    "\n",
    "\n",
    "gen_kwargs = {\"max_length\": 2500, \"do_sample\": True, \"top_k\": 1}\n",
    "with core.no_grad():\n",
    "    outputs = model.generate(**inputs, **gen_kwargs)\n",
    "    outputs = outputs[:, inputs['input_ids'].shape[1]:]\n",
    "    print(tokenizer.decode(outputs[0], skip_special_tokens=True))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "82e5603b",
   "metadata": {},
   "source": [
     "## Configure LoRA Parameters\n",
     "\n",
     "LoRA (Low-Rank Adaptation) is a technique for fine-tuning large models efficiently. Its core idea is to freeze the original model parameters and inject trainable low-rank matrices into selected layers to carry the parameter updates, saving compute and memory.\n",
     "The key parameters: r controls the rank of the low-rank matrices, lora_alpha is the scaling factor, and\n",
     "target_modules specifies which layers receive LoRA adapters.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d8876f7a",
   "metadata": {},
   "outputs": [],
   "source": [
     "# Configure LoRA\n",
     "config = LoraConfig(\n",
     "    task_type=TaskType.CAUSAL_LM, \n",
     "    target_modules=[\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\", \"gate_proj\", \"up_proj\", \"down_proj\"],\n",
     "    inference_mode=False,  # training mode\n",
     "    r=8,  # LoRA rank\n",
     "    lora_alpha=32,  # LoRA alpha scaling factor; see the LoRA paper for details\n",
     "    lora_dropout=0.1  # dropout rate\n",
     ")\n",
     "config\n"
   ]
  },
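   {
    "cell_type": "markdown",
    "id": "e52b3c20",
    "metadata": {},
    "source": [
     "To make r and lora_alpha concrete, here is a small numeric sketch (plain NumPy, hypothetical dimensions) of the update LoRA adds to a frozen weight matrix W: delta_W = (lora_alpha / r) * B @ A, where B and A together hold far fewer parameters than W."
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "e52b3c21",
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "\n",
     "# Hypothetical sizes; the projection layers in the real model are much larger\n",
     "d_out, d_in, r, alpha = 64, 64, 8, 32\n",
     "\n",
     "A = np.random.randn(r, d_in)   # A starts random in LoRA\n",
     "B = np.zeros((d_out, r))       # B starts at zero, so delta_W is zero before training\n",
     "delta_W = (alpha / r) * B @ A  # the low-rank update added onto the frozen weight\n",
     "\n",
     "full_params = d_out * d_in        # parameters in the full weight: 4096\n",
     "lora_params = r * (d_in + d_out)  # trainable LoRA parameters: 1024\n",
     "print(full_params, lora_params, delta_W.any())"
    ]
   },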
  {
   "cell_type": "markdown",
   "id": "10e8df3a",
   "metadata": {},
   "source": [
     "Compare the model structure before and after adding LoRA."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "323d5294",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\"Model without LoRA:\\n\",model)\n",
     "# Attach the LoRA modules to the model according to the config above\n",
     "model = get_peft_model(model, config)\n",
     "print('='*50)\n",
     "print(\"Model with LoRA:\\n\",model)\n",
     "# Print the proportion of trainable parameters\n",
    "model.print_trainable_parameters()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ae05aa3f",
   "metadata": {},
   "source": [
     "## Model Training\n",
     "\n",
     "This step configures the training arguments and instantiates a Trainer to launch the training run.\n",
     "The configuration includes num_train_epochs (number of training epochs), learning_rate, per_device_train_batch_size (samples per device per batch), and so on. Note that intermediate checkpoints are written under the directory given by output_dir.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dd411cb8",
   "metadata": {},
   "outputs": [],
   "source": [
     "# Define the training hyperparameters\n",
     "args = TrainingArguments(\n",
     "    output_dir=\"./output_1.5bf/Qwen2.5_instruct_lora\",\n",
     "    per_device_train_batch_size=4,\n",
     "    gradient_accumulation_steps=5,\n",
     "    logging_steps=10,\n",
     "    num_train_epochs=3,\n",
     "    save_steps=100,\n",
     "    # After swapping datasets the initial training loss rose; likely causes: the old set had many duplicates, and the sample count dropped from 1000 to 600\n",
     "    # Mitigation attempted: adjust the learning rate from 1e-1 to 5e-5 (little visible effect)\n",
     "    learning_rate=5e-5,  # a learning rate set too high here may also explain the catastrophic forgetting seen in one later run\n",
     "    # weight decay to curb overfitting\n",
     "    weight_decay=0.01,\n",
     "    # gradient clipping\n",
     "    max_grad_norm=1.0,\n",
    "    save_on_each_node=True,\n",
    ")"
   ]
  },
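   {
    "cell_type": "markdown",
    "id": "f63c4d30",
    "metadata": {},
    "source": [
     "With these arguments, the number of optimizer steps follows from the effective batch size per_device_train_batch_size * gradient_accumulation_steps (times the device count). A minimal sketch, assuming a single device and a hypothetical dataset of 584 samples (substitute len(tokenized_id) for a real run); note that the Trainer rounds partial accumulation windows up, so the observed step count can slightly exceed the naive division:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "id": "f63c4d31",
    "metadata": {},
    "outputs": [],
    "source": [
     "import math\n",
     "\n",
     "num_samples = 584  # hypothetical; use len(tokenized_id) in practice\n",
     "per_device_batch = 4\n",
     "grad_accum = 5\n",
     "epochs = 3\n",
     "\n",
     "effective_batch = per_device_batch * grad_accum           # 20 samples per optimizer step\n",
     "steps_per_epoch = math.ceil(num_samples / effective_batch)\n",
     "total_steps = steps_per_epoch * epochs\n",
     "print(steps_per_epoch, total_steps)  # 30 90"
    ]
   },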
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": [
     "'''\n",
     "# The original dataset was bulk-generated by AI in one pass and contained heavy duplication and obvious semantic problems; such entries should be edited or deleted\n",
     "# After several dataset updates and learning-rate changes, training loss stayed around 4.6 and the inference output did not change\n",
     "# Suspecting the model was not learning, I ran the checks below\n",
     "# Check the trainable parameters\n",
     "trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)\n",
     "print(f\"Number of trainable parameters: {trainable_params}\")\n",
     "\n",
     "# Check the LoRA adapter status\n",
     "print(\"LoRA adapter status:\")\n",
     "for name, param in model.named_parameters():\n",
     "    if param.requires_grad:\n",
     "        print(f\"  {name}: trainable\")\n",
     "    else:\n",
     "        print(f\"  {name}: frozen\")\n",
     "\n",
     "# Compare a parameter before and after training\n",
     "before_training = next(model.parameters()).clone().detach()\n",
     "\n",
     "# Train for a few steps\n",
     "trainer.train()\n",
     "\n",
     "after_training = next(model.parameters()).clone().detach()\n",
     "change = core.abs(after_training - before_training).mean()\n",
     "print(f\"Mean parameter change: {change.item()}\")\n",
     "'''\n",
     "# Fix: re-run all the preceding cells in order without skipping any (the problem was likely stale state from skipped cells)"
   ],
   "id": "a0a8758624346514"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": [
     "'''# Before the dataset swap training ran for 150 steps, and one run after the swap had 110 steps,\n",
     "# but one test run had only 90 steps; I expected 2900/(5*4)=145 steps\n",
     "# (2900 is the number of lines in the dataset file, 5 is the number of lines per record, and 4 is the number of samples per training batch)\n",
     "import torch\n",
     "# Check whether training ran to completion\n",
     "from transformers import TrainerState\n",
     "\n",
     "# If resuming from a checkpoint, inspect the trainer state\n",
     "if hasattr(trainer, 'state'):\n",
     "    print(f\"Current step: {trainer.state.global_step}\")\n",
     "    print(f\"Current epoch: {trainer.state.epoch}\")\n",
     "    print(f\"Total training steps: {trainer.state.max_steps}\")\n",
     "\n",
     "# Or compute it manually\n",
     "total_samples = len(tokenized_id)\n",
     "batch_size = args.per_device_train_batch_size * (torch.cuda.device_count() if torch.cuda.is_available() else 1)\n",
     "grad_accum = args.gradient_accumulation_steps\n",
     "\n",
     "steps_per_epoch = total_samples / (batch_size * grad_accum)\n",
     "total_steps = steps_per_epoch * args.num_train_epochs\n",
     "\n",
     "print(f\"Steps per epoch: {steps_per_epoch:.1f}\")\n",
     "print(f\"Total steps required: {total_steps:.1f}\")\n",
     "# Output: Current step: 90\n",
     "# Current epoch: 3.0\n",
     "# Total training steps: 90\n",
     "# Steps per epoch: 29.2\n",
     "# Total steps required: 87.6\n",
     "'''"
   ],
   "id": "2b49302abba491c0"
  },
  {
   "cell_type": "markdown",
   "id": "667eea8e",
   "metadata": {},
   "source": [
     "Finally, instantiate the Trainer, pass in the arguments defined above, and launch the training run."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ce19594e",
   "metadata": {},
   "outputs": [],
   "source": [
    "trainer = Trainer(\n",
    "    model=model,\n",
    "    args=args,\n",
    "    train_dataset=tokenized_id,\n",
    "    data_collator=DataCollatorForSeq2Seq(tokenizer=tokenizer, padding=True),\n",
    ")\n",
    "\n",
    "trainer.train()\n",
     "# Before the dataset change, training loss was about 0.006 by step 100 (far too many duplicates)\n",
     "# After data cleaning, training loss fell steadily from 5.8005 at step 10 to 3.6181 at step 110, when training ended\n",
     "# The loss never dropped below 1, possibly because the dataset is small, but its steady decrease shows the model did learn"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aa068d67",
   "metadata": {},
   "source": [
     "## Inference After Fine-Tuning\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f8fa5869",
   "metadata": {},
   "outputs": [],
   "source": [
     "model_path = 'deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B'\n",
     "lora_path = './output_1.5bf/Qwen2.5_instruct_lora/checkpoint-561' # change this to the checkpoint path of your LoRA output\n",
     "\n",
     "# Load the tokenizer\n",
     "tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)\n",
     "\n",
     "# Load the model\n",
     "model = AutoModelForCausalLM.from_pretrained(model_path, ms_dtype=mindspore.bfloat16, trust_remote_code=True).eval()\n",
     "\n",
     "# Load the LoRA weights\n",
     "model = PeftModel.from_pretrained(model, model_id=lora_path)\n",
    "\n",
    "# host to device\n",
    "model = model.npu()\n",
    "\n",
    "prompt = \"你是谁？\"\n",
     "# Later I also varied the prompt and the Chinese system description passed to tokenizer.apply_chat_template, trying to get better results\n",
    "inputs = tokenizer.apply_chat_template([{\"role\": \"system\", \"content\": \"现在你要扮演一位温雅谦和、辞气含蓄的古风小生，言辞清雅而不失分寸\"},{\"role\": \"user\", \"content\": prompt}],\n",
    "                                       add_generation_prompt=True,\n",
    "                                       tokenize=True,\n",
    "                                       return_tensors=\"ms\",\n",
    "                                       return_dict=True\n",
     "                                       ).to(model.device)\n",
    "\n",
    "\n",
    "gen_kwargs = {\"max_length\": 2500, \"do_sample\": True, \"top_k\": 1}\n",
    "with core.no_grad():\n",
    "    outputs = model.generate(**inputs, **gen_kwargs)\n",
    "    outputs = outputs[:, inputs['input_ids'].shape[1]:]\n",
    "print(tokenizer.decode(outputs[0], skip_special_tokens=True))\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "py39",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "name": "python",
   "version": "3.9.21"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
