{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "d0a9f456-40f8-4024-90fb-8fdca6dd26df",
   "metadata": {},
   "source": [
    "# 使用 Unsloth 对 DeepSeek-R1-Distill-Qwen-1.5B 模型进行 QLoRA 微调"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2d175db1-7a15-4372-bbaa-6af6cc451dcf",
   "metadata": {},
   "source": [
    "## 1. 环境准备与库导入\n",
    "`transformers`：用于加载模型和分词器；\n",
    "\n",
    "`unsloth`：用于高效微调；\n",
    "\n",
    "`trl`：提供了 `SFTTrainer`；\n",
    "\n",
    "`datasets`：用于处理数据。\n",
    "\n",
    "在运行此 Notebook 之前，请确保已安装所有依赖包：pip install -r requirements.txt\n",
    "\n",
    "datasets==3.6.0\n",
    "\n",
    "pandas==2.3.1\n",
    "\n",
    "peft==0.17.0\n",
    "\n",
    "timm==1.0.19\n",
    "\n",
    "torch==2.7.1\n",
    "\n",
    "torchvision==0.22.1\n",
    "\n",
    "transformers==4.55.2\n",
    "\n",
    "trl==0.21.0\n",
    "\n",
    "unsloth==2025.8.5\n",
    "\n",
    "unsloth_zoo==2025.8.4"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "fbd8f766-dc43-4562-b055-e52011d46879",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/miniconda3/envs/distill/lib/python3.13/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
      "  from .autonotebook import tqdm as notebook_tqdm\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🦥 Unsloth Zoo will now patch everything to make training faster!\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "from unsloth import FastLanguageModel\n",
    "from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, GenerationConfig, DataCollatorForSeq2Seq\n",
    "from datasets import Dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "47392a81-668f-4d85-8b0d-031d1054233c",
   "metadata": {},
   "source": [
    "### 2. 加载预训练模型和分词器 (Tokenizer)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "3c979ea6-ed4e-405c-9a57-ce642a79b0ae",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==((====))==  Unsloth 2025.8.5: Fast Qwen2 patching. Transformers: 4.55.2.\n",
      "   \\\\   /|    Tesla T4. Num GPUs = 1. Max memory: 15.575 GB. Platform: Linux.\n",
      "O^O/ \\_/ \\    Torch: 2.7.1+cu126. CUDA: 7.5. CUDA Toolkit: 12.6. Triton: 3.3.1\n",
      "\\        /    Bfloat16 = FALSE. FA [Xformers = 0.0.31.post1. FA2 = False]\n",
      " \"-____-\"     Free license: http://github.com/unslothai/unsloth\n",
      "Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!\n"
     ]
    }
   ],
   "source": [
    "# 定义模型和一些基本参数\n",
    "max_seq_length = 8192\n",
    "dtype = None # None 表示自动选择 (Float16 a T4, V100, BFloat16 a Ampere)\n",
    "load_in_4bit = True # 使用 4bit 量化加载\n",
    "\n",
    "# 模型标识符：正在使用的模型\n",
    "# model_name = \"unsloth/DeepSeek-R1-Distill-Qwen-1.5B\" \n",
    "model_name = \"unsloth/DeepSeek-R1-Distill-Qwen-1.5B-unsloth-bnb-4bit\" \n",
    "\n",
    "# 这一步会返回一个经过 Unsloth 优化的模型和一个分词器\n",
    "model, tokenizer = FastLanguageModel.from_pretrained(\n",
    "    model_name = model_name,\n",
    "    max_seq_length = max_seq_length,\n",
    "    dtype = dtype,\n",
    "    load_in_4bit = load_in_4bit,\n",
    ")"
   ]
  },
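  {
   "cell_type": "markdown",
   "id": "dtype-autoselect-sketch",
   "metadata": {},
   "source": [
    "The `dtype = None` auto-selection mentioned above can be sketched as follows (a simplified, hypothetical helper, not Unsloth's actual implementation):\n",
    "\n",
    "```python\n",
    "# BFloat16 requires an Ampere-or-newer GPU (compute capability >= 8.0);\n",
    "# older cards such as the Tesla T4 (7.5) fall back to Float16.\n",
    "def auto_dtype(compute_capability):\n",
    "    major, minor = compute_capability\n",
    "    return \"bfloat16\" if (major, minor) >= (8, 0) else \"float16\"\n",
    "\n",
    "print(auto_dtype((7, 5)))  # Tesla T4 -> float16\n",
    "print(auto_dtype((8, 0)))  # A100 -> bfloat16\n",
    "```\n",
    "\n",
    "The loading banner above confirms this on the Tesla T4: `Bfloat16 = FALSE`."
   ]
  },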
  {
   "cell_type": "markdown",
   "id": "2154350d-8029-4a71-8274-7f1c474f3d37",
   "metadata": {},
   "source": [
    "### 3. 微调前推理测试"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "9253320a-f685-425f-9c0e-72b4298c2b83",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 模型推理的 Prompt 模板\n",
    "inference_prompt = \"\"\"\n",
    "以下是一条描述任务的指令，并配有一个提供进一步上下文的输入。\n",
    "请撰写一份恰当的回复，以完成该请求。\n",
    "在回答之前，请仔细思考该问题，并构建一个分步的思考过程，以确保回应的逻辑严谨和内容准确。\n",
    "\n",
    "### Instruction:\n",
    "你是一位医学专家，在临床推理、诊断学和治疗规划方面拥有深厚的专业知识。\n",
    "请回答以下医学问题。\n",
    "\n",
    "### Question:\n",
    "{}\n",
    "\n",
    "### Response:\n",
    "<think>{}\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "810ba19c-4b39-475f-97c2-1e36b06e9918",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "<think>\n",
      "好的，我现在需要仔细分析这位28岁的男性的症状，然后结合临床推理和专业知识来制定一个治疗方案。首先，症状包括头晕、脖子疼和恶心，这些都是常见的神经系统的常见问题，可能与脑力活动有关。\n",
      "\n",
      "首先，头晕可能与脑力活动有关，比如长时间的工作可能导致脑力过度劳累，导致头晕。脖子疼可能是由于紧张的肌肉活动，可能与运动神经相关。恶心通常是消化系统的问题，但也有其他原因，比如胃肠道疾病。\n",
      "\n",
      "考虑到工作压力和长时间的脑力活动，恶心和头晕可能与长期的工作压力和疲劳有关。此外，脖子疼可能与长期的运动或紧张的活动有关，比如在编程中频繁使用手指等。\n",
      "\n",
      "接下来，我需要考虑是否有其他可能的疾病或原因。例如，是否有脑力运动损伤？或者是否有运动神经损伤？或者是否有长期的疲劳或压力征？\n",
      "\n",
      "根据临床推理，如果长期的工作压力和脑力活动导致的疲劳，可能与脑力运动损伤有关。因此，考虑药物治疗，比如低剂量的抗抑郁药，如阿米替林，以缓解长期的疲劳和压力。同时，可以考虑补充维生素C，因为维生素C有助于缓解恶心和头晕，促进消化。\n",
      "\n",
      "另外，如果疼痛感存在，可能需要进一步评估，比如疼痛日记，看是否有疼痛表现，然后考虑药物治疗。如果疼痛减轻，可能需要更深入的评估，比如CT scan或MRI，以确定是否有神经损伤。\n",
      "\n",
      "综上所述，治疗方案应包括低剂量阿米替林、补充维生素C，以及如果疼痛存在，考虑疼痛日记和进一步的评估。如果疼痛减轻，可能需要更详细的检查，如CT或MRI，以确定是否有神经损伤。\n",
      "</think>\n",
      "\n",
      "男，28岁，程序员，最近一周每天工作到半夜，感觉头晕、脖子疼，有时候还恶心。\n",
      "\n",
      "### 分步思考过程：\n",
      "\n",
      "1. **症状分析**：\n",
      "   - **头晕**：可能与长期的工作压力和脑力活动有关。\n",
      "   - **脖子疼**：可能与紧张的肌肉活动或运动神经相关。\n",
      "   - **恶心**：可能是消化系统问题，也可能与胃肠道疾病有关。\n",
      "\n",
      "2. **临床推理**：\n",
      "   - 这些症状可能与长期的工作压力或脑力活动有关，可能涉及脑力运动损伤。\n",
      "   - 长期的工作可能导致疲劳和神经损伤。\n",
      "\n",
      "3. **药物推荐**：\n",
      "   - **低剂量阿米替林**：缓解长期的疲劳和压力，促进神经退行性病变的缓解。\n",
      "   - **补充维生素C**：缓解恶心和头晕，促进消化。\n",
      "\n",
      "4. **进一步评估**：\n",
      "   - 如果有疼痛，可能需要疼痛日记评估。\n",
      "   - 如果疼痛减轻，可能需要进一步检查如CT或MRI以确定是否有神经损伤。\n",
      "\n",
      "### 结论：\n",
      "\n",
      "建议采用低剂量阿米替林和补充维生素C作为治疗方案。如果疼痛存在，应评估疼痛日记并考虑进一步药物治疗。如果疼痛减轻，可能需要更详细的检查如CT或MRI以确定是否有神经损伤。\n"
     ]
    }
   ],
   "source": [
    "FastLanguageModel.for_inference(model)\n",
    "\n",
    "question = \"男，28岁，程序员，最近一周每天工作到半夜，感觉头晕、脖子疼，有时候还恶心。\"\n",
    "\n",
    "inputs = tokenizer([inference_prompt.format(question, \"\")], return_tensors=\"pt\").to(\"cuda\")\n",
    "attention_mask = inputs.input_ids.ne(tokenizer.pad_token_id).long().to(\"cuda\")\n",
    "\n",
    "outputs = model.generate(\n",
    "    input_ids=inputs.input_ids,\n",
    "    attention_mask=inputs.attention_mask,\n",
    "    max_new_tokens=1200,\n",
    "    use_cache=True,\n",
    ")\n",
    "\n",
    "response = tokenizer.batch_decode(outputs, skip_special_tokens=True)\n",
    "print(response[0].split(\"### Response:\")[1])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "16bf959a-dc29-4f04-9bf1-8b4fd6929946",
   "metadata": {},
   "source": [
    "### 4. 下载和格式化训练数据集\n",
    "\n",
    "医学推理数据集：https://huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT/viewer/zh"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "5c33b560-4521-4044-8a35-d08876d69a15",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 模型训练的 Prompt 模板\n",
    "train_prompt = \"\"\"\n",
    "以下是一条描述任务的指令，并配有一个提供进一步上下文的输入。\n",
    "请撰写一份恰当的回复，以完成该请求。\n",
    "在回答之前，请仔细思考该问题，并构建一个分步的思考过程，以确保回应的逻辑严谨和内容准确。\n",
    "\n",
    "### Instruction:\n",
    "你是一位医学专家，在临床推理、诊断学和治疗规划方面拥有深厚的专业知识。\n",
    "请回答以下医学问题。\n",
    "\n",
    "### Question:\n",
    "{}\n",
    "\n",
    "### Response:\n",
    "<think>\n",
    "{}\n",
    "</think>\n",
    "{}\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "c231b334-d1f6-4ad6-a1e4-a86c9bf74afe",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Generating train split: 100%|██████████| 20171/20171 [00:00<00:00, 33880.52 examples/s]\n",
      "Map: 100%|██████████| 20171/20171 [00:00<00:00, 34940.83 examples/s]\n"
     ]
    }
   ],
   "source": [
    "from datasets import load_dataset\n",
    "\n",
    "# 添加 EOS Token\n",
    "EOS_TOKEN = tokenizer.eos_token\n",
    "\n",
    "def formatting_prompts_func(examples):\n",
    "    inputs = examples[\"Question\"]\n",
    "    cots = examples[\"Complex_CoT\"]\n",
    "    outputs = examples[\"Response\"]\n",
    "    texts = []\n",
    "    for input, cot, output in zip(inputs, cots, outputs):\n",
    "        # 将 EOS Token 添加到样本最后\n",
    "        text = train_prompt.format(input, cot, output) + EOS_TOKEN\n",
    "        texts.append(text)\n",
    "    return { \"text\" : texts, }\n",
    "\n",
    "dataset = load_dataset(\"FreedomIntelligence/medical-o1-reasoning-SFT\", \"zh\", split = \"train\")\n",
    "dataset = dataset.map(formatting_prompts_func, batched = True)"
   ]
  },
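  {
   "cell_type": "markdown",
   "id": "eos-append-toy-sketch",
   "metadata": {},
   "source": [
    "As a toy illustration of why the EOS token is appended (hypothetical strings, not the real dataset or tokenizer): without a terminator on every training sample, the model never learns where a response should stop. The formatting step boils down to:\n",
    "\n",
    "```python\n",
    "# Toy version of formatting_prompts_func with a stand-in template and EOS token\n",
    "EOS_TOKEN = \"<EOS>\"  # stands in for tokenizer.eos_token\n",
    "template = \"### Question:\\n{}\\n\\n### Response:\\n<think>\\n{}\\n</think>\\n{}\"\n",
    "\n",
    "def format_batch(questions, cots, responses):\n",
    "    # Fill the template, then append EOS so each sample has an explicit end\n",
    "    return [template.format(q, c, r) + EOS_TOKEN\n",
    "            for q, c, r in zip(questions, cots, responses)]\n",
    "\n",
    "texts = format_batch([\"toy question\"], [\"toy reasoning\"], [\"toy answer\"])\n",
    "print(texts[0].endswith(EOS_TOKEN))  # True\n",
    "```"
   ]
  },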
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "1b81bfb9-8217-4449-b633-d42427148e8a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'\\n以下是一条描述任务的指令，并配有一个提供进一步上下文的输入。\\n请撰写一份恰当的回复，以完成该请求。\\n在回答之前，请仔细思考该问题，并构建一个分步的思考过程，以确保回应的逻辑严谨和内容准确。\\n\\n### Instruction:\\n你是一位医学专家，在临床推理、诊断学和治疗规划方面拥有深厚的专业知识。\\n请回答以下医学问题。\\n\\n### Question:\\n根据描述，一个1岁的孩子在夏季头皮出现多处小结节，长期不愈合，且现在疮大如梅，溃破流脓，口不收敛，头皮下有空洞，患处皮肤增厚。这种病症在中医中诊断为什么病？\\n\\n### Response:\\n<think>\\n这个小孩子在夏天头皮上长了些小结节，一直都没好，后来变成了脓包，流了好多脓。想想夏天那么热，可能和湿热有关。才一岁的小孩，免疫力本来就不强，夏天的湿热没准就侵袭了身体。\\n\\n用中医的角度来看，出现小结节、再加上长期不愈合，这些症状让我想到了头疮。小孩子最容易得这些皮肤病，主要因为湿热在体表郁结。\\n\\n但再看看，头皮下还有空洞，这可能不止是简单的头疮。看起来病情挺严重的，也许是脓肿没治好。这样的情况中医中有时候叫做禿疮或者湿疮，也可能是另一种情况。\\n\\n等一下，头皮上的空洞和皮肤增厚更像是疾病已经深入到头皮下，这是不是说明有可能是流注或瘰疬？这些名字常描述头部或颈部的严重感染，特别是有化脓不愈合，又形成通道或空洞的情况。\\n\\n仔细想想，我怎么感觉这些症状更贴近瘰疬的表现？尤其考虑到孩子的年纪和夏天发生的季节性因素，湿热可能是主因，但可能也有火毒或者痰湿造成的滞留。\\n\\n回到基本的症状描述上看，这种长期不愈合又复杂的状况，如果结合中医更偏重的病名，是不是有可能是涉及更深层次的感染？\\n\\n再考虑一下，这应该不是单纯的瘰疬，得仔细分析头皮增厚并出现空洞这样的严重症状。中医里头，这样的表现可能更符合‘蚀疮’或‘头疽’。这些病名通常描述头部严重感染后的溃烂和组织坏死。\\n\\n看看季节和孩子的体质，夏天又湿又热，外邪很容易侵入头部，对孩子这么弱的免疫系统简直就是挑战。头疽这个病名听起来真是切合，因为它描述的感染严重，溃烂到出现空洞。\\n\\n不过，仔细琢磨后发现，还有个病名似乎更为合适，叫做‘蝼蛄疖’，这病在中医里专指像这种严重感染并伴有深部空洞的情况。它也涵盖了化脓和皮肤增厚这些症状。\\n\\n哦，该不会是夏季湿热，导致湿毒入侵，孩子的体质不能御，其病情发展成这样的感染？综合分析后我觉得‘蝼蛄疖’这个病名真是相当符合。\\n</think>\\n从中医的角度来看，你所描述的症状符合“蝼蛄疖”的病症。这种病症通常发生在头皮，表现为多处结节，溃破流脓，形成空洞，患处皮肤增厚且长期不愈合。湿热较重的夏季更容易导致这种病症的发展，特别是在免疫力较弱的儿童身上。建议结合中医的清热解毒、祛湿消肿的治疗方法进行处理，并配合专业的医疗建议进行详细诊断和治疗。\\n<｜end▁of▁sentence｜>'"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dataset[0][\"text\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "240bede5-b068-49c9-a6a3-17bbbffda7c9",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "\n",
       "以下是一条描述任务的指令，并配有一个提供进一步上下文的输入。\n",
       "请撰写一份恰当的回复，以完成该请求。\n",
       "在回答之前，请仔细思考该问题，并构建一个分步的思考过程，以确保回应的逻辑严谨和内容准确。\n",
       "\n",
       "### Instruction:\n",
       "你是一位医学专家，在临床推理、诊断学和治疗规划方面拥有深厚的专业知识。\n",
       "请回答以下医学问题。\n",
       "\n",
       "### Question:\n",
       "根据描述，一个1岁的孩子在夏季头皮出现多处小结节，长期不愈合，且现在疮大如梅，溃破流脓，口不收敛，头皮下有空洞，患处皮肤增厚。这种病症在中医中诊断为什么病？\n",
       "\n",
       "### Response:\n",
       "<think>\n",
       "这个小孩子在夏天头皮上长了些小结节，一直都没好，后来变成了脓包，流了好多脓。想想夏天那么热，可能和湿热有关。才一岁的小孩，免疫力本来就不强，夏天的湿热没准就侵袭了身体。\n",
       "\n",
       "用中医的角度来看，出现小结节、再加上长期不愈合，这些症状让我想到了头疮。小孩子最容易得这些皮肤病，主要因为湿热在体表郁结。\n",
       "\n",
       "但再看看，头皮下还有空洞，这可能不止是简单的头疮。看起来病情挺严重的，也许是脓肿没治好。这样的情况中医中有时候叫做禿疮或者湿疮，也可能是另一种情况。\n",
       "\n",
       "等一下，头皮上的空洞和皮肤增厚更像是疾病已经深入到头皮下，这是不是说明有可能是流注或瘰疬？这些名字常描述头部或颈部的严重感染，特别是有化脓不愈合，又形成通道或空洞的情况。\n",
       "\n",
       "仔细想想，我怎么感觉这些症状更贴近瘰疬的表现？尤其考虑到孩子的年纪和夏天发生的季节性因素，湿热可能是主因，但可能也有火毒或者痰湿造成的滞留。\n",
       "\n",
       "回到基本的症状描述上看，这种长期不愈合又复杂的状况，如果结合中医更偏重的病名，是不是有可能是涉及更深层次的感染？\n",
       "\n",
       "再考虑一下，这应该不是单纯的瘰疬，得仔细分析头皮增厚并出现空洞这样的严重症状。中医里头，这样的表现可能更符合‘蚀疮’或‘头疽’。这些病名通常描述头部严重感染后的溃烂和组织坏死。\n",
       "\n",
       "看看季节和孩子的体质，夏天又湿又热，外邪很容易侵入头部，对孩子这么弱的免疫系统简直就是挑战。头疽这个病名听起来真是切合，因为它描述的感染严重，溃烂到出现空洞。\n",
       "\n",
       "不过，仔细琢磨后发现，还有个病名似乎更为合适，叫做‘蝼蛄疖’，这病在中医里专指像这种严重感染并伴有深部空洞的情况。它也涵盖了化脓和皮肤增厚这些症状。\n",
       "\n",
       "哦，该不会是夏季湿热，导致湿毒入侵，孩子的体质不能御，其病情发展成这样的感染？综合分析后我觉得‘蝼蛄疖’这个病名真是相当符合。\n",
       "</think>\n",
       "从中医的角度来看，你所描述的症状符合“蝼蛄疖”的病症。这种病症通常发生在头皮，表现为多处结节，溃破流脓，形成空洞，患处皮肤增厚且长期不愈合。湿热较重的夏季更容易导致这种病症的发展，特别是在免疫力较弱的儿童身上。建议结合中医的清热解毒、祛湿消肿的治疗方法进行处理，并配合专业的医疗建议进行详细诊断和治疗。\n",
       "<｜end▁of▁sentence｜>"
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from IPython.display import display, Markdown\n",
    "\n",
    "display(Markdown(dataset[0][\"text\"]))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "67b1bc5e-4963-4d0e-9905-4e8303efd327",
   "metadata": {},
   "source": [
    "### 5. 使用 Unsloth 添加 LoRA 适配器\n",
    "这是使用 `unsloth` 的核心步骤。我们调用 `FastLanguageModel.get_peft_model`，它会非常高效地为模型注入 LoRA 模块。\n",
    "\n",
    "- `r`: LoRA 的秩 (rank)，是控制模型复杂度和参数量的关键超参数。\n",
    "- `target_modules`: 指定要在哪些线性层（如注意力机制的 q, k, v, o 投影层）上应用 LoRA。\n",
    "- `lora_alpha`: LoRA 的缩放因子，通常设置为 `r` 的两倍或与 `r` 相同。\n",
    "- `use_gradient_checkpointing`: 一种节省显存的技术，对于训练大模型至关重要。"
   ]
  },
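  {
   "cell_type": "markdown",
   "id": "lora-param-count-sketch",
   "metadata": {},
   "source": [
    "To make the role of `r` concrete, here is a back-of-the-envelope count of the trainable parameters LoRA adds. This is a sketch assuming `r = 16`, the q/k/v/o + gate/up/down target modules, and the Qwen2 layer shapes of this 1.5B model (hidden size 1536, 256-wide k/v projections, MLP width 8960, 28 decoder layers), as shown in the module printout below:\n",
    "\n",
    "```python\n",
    "r = 16\n",
    "shapes = {\n",
    "    \"q_proj\": (1536, 1536),\n",
    "    \"k_proj\": (1536, 256),\n",
    "    \"v_proj\": (1536, 256),\n",
    "    \"o_proj\": (1536, 1536),\n",
    "    \"gate_proj\": (1536, 8960),\n",
    "    \"up_proj\": (1536, 8960),\n",
    "    \"down_proj\": (8960, 1536),\n",
    "}\n",
    "# Each adapter adds lora_A (in_features x r) and lora_B (r x out_features)\n",
    "per_layer = sum(r * (i + o) for i, o in shapes.values())\n",
    "total = 28 * per_layer\n",
    "print(per_layer, total)  # 659456 18464768, i.e. ~18.5M trainable parameters\n",
    "```\n",
    "\n",
    "Doubling `r` doubles this count, while the base model's ~1.5B weights stay frozen."
   ]
  },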
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "3e00a115-2787-4416-ba92-b5f61830ec14",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Unsloth 2025.8.5 patched 28 layers with 28 QKV layers, 28 O layers and 28 MLP layers.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "PeftModelForCausalLM(\n",
      "  (base_model): LoraModel(\n",
      "    (model): Qwen2ForCausalLM(\n",
      "      (model): Qwen2Model(\n",
      "        (embed_tokens): Embedding(151936, 1536, padding_idx=151654)\n",
      "        (layers): ModuleList(\n",
      "          (0): Qwen2DecoderLayer(\n",
      "            (self_attn): Qwen2Attention(\n",
      "              (q_proj): lora.Linear(\n",
      "                (base_layer): Linear(in_features=1536, out_features=1536, bias=True)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=1536, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (k_proj): lora.Linear(\n",
      "                (base_layer): Linear(in_features=1536, out_features=256, bias=True)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=256, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (v_proj): lora.Linear(\n",
      "                (base_layer): Linear(in_features=1536, out_features=256, bias=True)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=256, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (o_proj): lora.Linear(\n",
      "                (base_layer): Linear(in_features=1536, out_features=1536, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=1536, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (rotary_emb): LlamaRotaryEmbedding()\n",
      "            )\n",
      "            (mlp): Qwen2MLP(\n",
      "              (gate_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=8960, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=8960, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (up_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=8960, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=8960, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (down_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=8960, out_features=1536, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=8960, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=1536, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (act_fn): SiLU()\n",
      "            )\n",
      "            (input_layernorm): Qwen2RMSNorm((1536,), eps=1e-06)\n",
      "            (post_attention_layernorm): Qwen2RMSNorm((1536,), eps=1e-06)\n",
      "          )\n",
      "          (1-2): 2 x Qwen2DecoderLayer(\n",
      "            (self_attn): Qwen2Attention(\n",
      "              (q_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=1536, bias=True)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=1536, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (k_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=256, bias=True)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=256, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (v_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=256, bias=True)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=256, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (o_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=1536, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=1536, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (rotary_emb): LlamaRotaryEmbedding()\n",
      "            )\n",
      "            (mlp): Qwen2MLP(\n",
      "              (gate_proj): lora.Linear(\n",
      "                (base_layer): Linear(in_features=1536, out_features=8960, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=8960, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (up_proj): lora.Linear(\n",
      "                (base_layer): Linear(in_features=1536, out_features=8960, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=8960, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (down_proj): lora.Linear(\n",
      "                (base_layer): Linear(in_features=8960, out_features=1536, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=8960, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=1536, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (act_fn): SiLU()\n",
      "            )\n",
      "            (input_layernorm): Qwen2RMSNorm((1536,), eps=1e-06)\n",
      "            (post_attention_layernorm): Qwen2RMSNorm((1536,), eps=1e-06)\n",
      "          )\n",
      "          (3-25): 23 x Qwen2DecoderLayer(\n",
      "            (self_attn): Qwen2Attention(\n",
      "              (q_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=1536, bias=True)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=1536, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (k_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=256, bias=True)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=256, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (v_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=256, bias=True)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=256, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (o_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=1536, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=1536, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (rotary_emb): LlamaRotaryEmbedding()\n",
      "            )\n",
      "            (mlp): Qwen2MLP(\n",
      "              (gate_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=8960, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=8960, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (up_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=8960, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=8960, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (down_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=8960, out_features=1536, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=8960, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=1536, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (act_fn): SiLU()\n",
      "            )\n",
      "            (input_layernorm): Qwen2RMSNorm((1536,), eps=1e-06)\n",
      "            (post_attention_layernorm): Qwen2RMSNorm((1536,), eps=1e-06)\n",
      "          )\n",
      "          (26): Qwen2DecoderLayer(\n",
      "            (self_attn): Qwen2Attention(\n",
      "              (q_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=1536, bias=True)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=1536, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (k_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=256, bias=True)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=256, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (v_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=256, bias=True)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=256, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (o_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=1536, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=1536, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (rotary_emb): LlamaRotaryEmbedding()\n",
      "            )\n",
      "            (mlp): Qwen2MLP(\n",
      "              (gate_proj): lora.Linear(\n",
      "                (base_layer): Linear(in_features=1536, out_features=8960, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=8960, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (up_proj): lora.Linear(\n",
      "                (base_layer): Linear(in_features=1536, out_features=8960, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=8960, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (down_proj): lora.Linear(\n",
      "                (base_layer): Linear(in_features=8960, out_features=1536, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=8960, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=1536, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (act_fn): SiLU()\n",
      "            )\n",
      "            (input_layernorm): Qwen2RMSNorm((1536,), eps=1e-06)\n",
      "            (post_attention_layernorm): Qwen2RMSNorm((1536,), eps=1e-06)\n",
      "          )\n",
      "          (27): Qwen2DecoderLayer(\n",
      "            (self_attn): Qwen2Attention(\n",
      "              (q_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=1536, bias=True)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=1536, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (k_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=256, bias=True)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=256, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (v_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=256, bias=True)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=256, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (o_proj): lora.Linear(\n",
      "                (base_layer): Linear(in_features=1536, out_features=1536, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=1536, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (rotary_emb): LlamaRotaryEmbedding()\n",
      "            )\n",
      "            (mlp): Qwen2MLP(\n",
      "              (gate_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=8960, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=8960, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (up_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=1536, out_features=8960, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=1536, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=8960, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (down_proj): lora.Linear4bit(\n",
      "                (base_layer): Linear4bit(in_features=8960, out_features=1536, bias=False)\n",
      "                (lora_dropout): ModuleDict(\n",
      "                  (default): Identity()\n",
      "                )\n",
      "                (lora_A): ModuleDict(\n",
      "                  (default): Linear(in_features=8960, out_features=16, bias=False)\n",
      "                )\n",
      "                (lora_B): ModuleDict(\n",
      "                  (default): Linear(in_features=16, out_features=1536, bias=False)\n",
      "                )\n",
      "                (lora_embedding_A): ParameterDict()\n",
      "                (lora_embedding_B): ParameterDict()\n",
      "                (lora_magnitude_vector): ModuleDict()\n",
      "              )\n",
      "              (act_fn): SiLU()\n",
      "            )\n",
      "            (input_layernorm): Qwen2RMSNorm((1536,), eps=1e-06)\n",
      "            (post_attention_layernorm): Qwen2RMSNorm((1536,), eps=1e-06)\n",
      "          )\n",
      "        )\n",
      "        (norm): Qwen2RMSNorm((1536,), eps=1e-06)\n",
      "        (rotary_emb): LlamaRotaryEmbedding()\n",
      "      )\n",
      "      (lm_head): Linear(in_features=1536, out_features=151936, bias=False)\n",
      "    )\n",
      "  )\n",
      ")\n"
     ]
    }
   ],
   "source": [
     "# Because the `model` object was created by Unsloth, it already carries all required attributes\n",
    "model = FastLanguageModel.get_peft_model(\n",
    "    model,\n",
    "    r=16,\n",
    "    target_modules=[\n",
    "      \"q_proj\",\n",
    "      \"k_proj\",\n",
    "      \"v_proj\",\n",
    "      \"o_proj\",\n",
    "      \"gate_proj\",\n",
    "      \"up_proj\",\n",
    "      \"down_proj\",\n",
    "    ],\n",
    "    lora_alpha=16,\n",
    "    lora_dropout=0,\n",
    "    bias=\"none\",\n",
    "    use_gradient_checkpointing=\"unsloth\",\n",
    "    random_state=1432,\n",
    "    use_rslora=False,\n",
    "    loftq_config=None,\n",
    ")\n",
     "# Inspect the model structure to confirm the LoRA adapters were added\n",
    "print(model)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a6782427-78f1-4c33-b8e9-d4c813ec211d",
   "metadata": {},
   "source": [
     "### 6. Configure the SFTTrainer\n",
     "`SFTTrainer` (Supervised Fine-tuning Trainer) is a trainer specialized for instruction fine-tuning. We configure `SFTConfig` to specify all the training hyperparameters, such as the batch size, learning rate, and optimizer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "7ebfbe25-adb7-4cb2-a17f-3d5fb0fdcaa3",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Unsloth: Tokenizing [\"text\"] (num_proc=2): 100%|██████████| 20171/20171 [00:15<00:00, 1314.10 examples/s]\n"
     ]
    }
   ],
   "source": [
    "from trl import SFTConfig, SFTTrainer\n",
    "\n",
    "trainer = SFTTrainer(\n",
    "    model = model,\n",
    "    tokenizer = tokenizer,\n",
    "    train_dataset = dataset,\n",
    "    dataset_text_field = \"text\",\n",
    "    max_seq_length = max_seq_length,\n",
    "    packing = False, # Can make training 5x faster for short sequences.\n",
    "    args = SFTConfig(\n",
    "        per_device_train_batch_size = 64,\n",
    "        gradient_accumulation_steps = 2,\n",
    "        warmup_steps = 5,\n",
    "        # num_train_epochs = 1, # Set this for 1 full training run.\n",
    "        max_steps = 60,\n",
    "        learning_rate = 2e-4,\n",
    "        logging_steps = 1,\n",
    "        optim = \"adamw_8bit\",\n",
    "        weight_decay = 0.01,\n",
    "        lr_scheduler_type = \"linear\",\n",
    "        seed = 1432,\n",
    "        output_dir = \"outputs\",\n",
    "        report_to = \"none\", # Use this for WandB etc\n",
    "    ),\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7b5918ca-60f6-4483-93d2-f794749dd95a",
   "metadata": {},
   "source": [
     "### 7. Start Training\n",
     "With everything in place, call `trainer.train()` to start fine-tuning. When training finishes, it returns an object containing training statistics such as the training loss."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "4141e6ff-2ef8-4ca9-9e5c-c1a90145b2a7",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "==((====))==  Unsloth - 2x faster free finetuning | Num GPUs used = 1\n",
      "   \\\\   /|    Num examples = 20,171 | Num Epochs = 1 | Total steps = 60\n",
      "O^O/ \\_/ \\    Batch size per device = 64 | Gradient accumulation steps = 2\n",
      "\\        /    Data Parallel GPUs = 1 | Total batch size (64 x 2 x 1) = 128\n",
      " \"-____-\"     Trainable parameters = 18,464,768 of 1,795,552,768 (1.03% trained)\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Unsloth: Will smartly offload gradients to save VRAM!\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='60' max='60' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [60/60 1:07:47, Epoch 0/1]\n",
       "    </div>\n",
       "    <table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       " <tr style=\"text-align: left;\">\n",
       "      <th>Step</th>\n",
       "      <th>Training Loss</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <td>1</td>\n",
       "      <td>3.170800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>2</td>\n",
       "      <td>3.179300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>3</td>\n",
       "      <td>3.084100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>4</td>\n",
       "      <td>3.149200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>5</td>\n",
       "      <td>3.106100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>6</td>\n",
       "      <td>2.953200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>7</td>\n",
       "      <td>2.937900</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>8</td>\n",
       "      <td>2.910600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>9</td>\n",
       "      <td>2.873800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>10</td>\n",
       "      <td>2.769900</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>11</td>\n",
       "      <td>2.771100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>12</td>\n",
       "      <td>2.678200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>13</td>\n",
       "      <td>2.666400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>14</td>\n",
       "      <td>2.596700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>15</td>\n",
       "      <td>2.634200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>16</td>\n",
       "      <td>2.587000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>17</td>\n",
       "      <td>2.507400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>18</td>\n",
       "      <td>2.499400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>19</td>\n",
       "      <td>2.431600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>20</td>\n",
       "      <td>2.431800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>21</td>\n",
       "      <td>2.444600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>22</td>\n",
       "      <td>2.392800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>23</td>\n",
       "      <td>2.383000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>24</td>\n",
       "      <td>2.356900</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>25</td>\n",
       "      <td>2.372600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>26</td>\n",
       "      <td>2.331700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>27</td>\n",
       "      <td>2.305800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>28</td>\n",
       "      <td>2.326100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>29</td>\n",
       "      <td>2.322300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>30</td>\n",
       "      <td>2.316700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>31</td>\n",
       "      <td>2.306200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>32</td>\n",
       "      <td>2.365400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>33</td>\n",
       "      <td>2.284600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>34</td>\n",
       "      <td>2.290300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>35</td>\n",
       "      <td>2.318500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>36</td>\n",
       "      <td>2.276700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>37</td>\n",
       "      <td>2.316800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>38</td>\n",
       "      <td>2.314000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>39</td>\n",
       "      <td>2.301300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>40</td>\n",
       "      <td>2.237300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>41</td>\n",
       "      <td>2.288000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>42</td>\n",
       "      <td>2.282300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>43</td>\n",
       "      <td>2.268400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>44</td>\n",
       "      <td>2.246500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>45</td>\n",
       "      <td>2.263300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>46</td>\n",
       "      <td>2.252900</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>47</td>\n",
       "      <td>2.281900</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>48</td>\n",
       "      <td>2.254200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>49</td>\n",
       "      <td>2.303200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>50</td>\n",
       "      <td>2.274700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>51</td>\n",
       "      <td>2.249100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>52</td>\n",
       "      <td>2.304400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>53</td>\n",
       "      <td>2.285200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>54</td>\n",
       "      <td>2.237800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>55</td>\n",
       "      <td>2.264800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>56</td>\n",
       "      <td>2.295200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>57</td>\n",
       "      <td>2.236700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>58</td>\n",
       "      <td>2.240000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>59</td>\n",
       "      <td>2.232300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>60</td>\n",
       "      <td>2.244000</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table><p>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "TrainOutput(global_step=60, training_loss=2.4634541591008503, metrics={'train_runtime': 4138.1254, 'train_samples_per_second': 1.856, 'train_steps_per_second': 0.014, 'total_flos': 7.163728311484416e+16, 'train_loss': 2.4634541591008503})\n"
     ]
    }
   ],
   "source": [
    "trainer_stats = trainer.train()\n",
    "\n",
     "# Print the training statistics\n",
    "print(trainer_stats)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f21e2460-2976-4db4-bb22-fe73aaa899c4",
   "metadata": {},
   "source": [
     "### 8. Save the Fine-tuned Model (LoRA)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "e437a440-0cd3-4ffe-9af7-7c18992a8379",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "('qwen-1.5b_lora_model/tokenizer_config.json',\n",
       " 'qwen-1.5b_lora_model/special_tokens_map.json',\n",
       " 'qwen-1.5b_lora_model/chat_template.jinja',\n",
       " 'qwen-1.5b_lora_model/tokenizer.json')"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Save the LoRA adapter and tokenizer separately\n",
    "model.save_pretrained(\"qwen-1.5b_lora_model\")\n",
    "tokenizer.save_pretrained(\"qwen-1.5b_lora_model\")\n",
     "# Or save the merged (base + LoRA) model in one step\n",
    "# model.save_pretrained_merged(\"qwen-1.5b_lora_model\", tokenizer, save_method=\"merged_16bit\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "524ae51e-4c33-4c29-8316-480789d46b98",
   "metadata": {},
   "source": [
     "### 9. Test Generation After Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "6303eaf3-fa8b-4646-b840-87e06827c330",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "<think>\n",
      "患者患有急性阑尾炎，已经发病5天了，这让他感到症状越来越明显。腹痛稍微减轻了，但还是有发热的情况，这些症状让我有点担心。特别是右下腹出现了一处压痛的包块，这让我有点担心自己是不是真的得有阑尾炎了。现在，我想知道，这个时候应该怎么做才能及时发现问题，避免延误治疗。\n",
      "\n",
      "首先，我需要了解阑尾炎的症状。阑尾炎通常表现为腹痛、发热、腹痛减轻，或者是腹痛加重、发热加重。这些症状可以帮助我们判断是否真的有阑尾炎。而右下腹出现的压痛包块，看起来很像是阑尾炎的早期症状。\n",
      "\n",
      "接下来，我需要考虑一些初步的检查，比如腹痛检查和发热检查。腹痛检查可以帮助我们确定是否有感染，发热检查可以排除感染的可能性。如果发现问题，我们还需要做一些影像学检查，比如CT或MRI，来确认诊断。\n",
      "\n",
      "然后，我想到，对于阑尾炎，及时进行影像学检查是最重要的一步。因为这种情况下，影像学检查可以更准确地诊断出具体的炎症，比如肠炎、肠原发炎或阑尾炎。如果发现有炎症，我们就要立即进行治疗。\n",
      "\n",
      "在紧急情况下，比如急性阑尾炎，我们可能需要立即进行紧急治疗。这时，我们就可以考虑使用抗生素来缓解症状，但同时也要注意避免使用过强的药物，以免引发药物性感染，特别是对于有肠原发炎的患者。\n",
      "\n",
      "此外，由于患者已经发病5天，我们还需要特别注意患者的状态。如果有腹痛、发热、腹痛减轻和压痛包块等症状，我们可能会考虑使用抗生素来缓解症状，但如果患者没有其他症状，我们可能需要立即停药，但要特别注意不要过度使用。\n",
      "\n",
      "最后，我想到，紧急情况下，患者应该立即进行紧急治疗。比如，抗生素、抗组胺药物、抗炎药物等，这些药物可以帮助减轻症状，但需要特别注意患者的情况。\n",
      "\n",
      "综上所述，我需要建议患者在急性阑尾炎发作时，立即进行紧急治疗，并根据具体情况调整治疗方案。同时，影像学检查也是关键一步，可以帮助我们准确诊断，确保治疗的正确性。\n",
      "</think>\n",
      "在急性阑尾炎发作时，立即进行紧急治疗是必要的。由于患者已经发病5天，症状已经变得明显，且右下腹出现压痛包块，这提示可能有阑尾炎的早期症状。因此，立即进行紧急治疗是必要的。\n",
      "\n",
      "紧急治疗可以包括抗生素、抗组胺药物、抗炎药物等，这些药物可以帮助减轻症状。然而，患者的情况需要根据实际情况来调整治疗方案。如果患者没有其他症状，我们可能需要立即停药，但要特别注意不要过度使用药物。\n",
      "\n",
      "在紧急情况下，影像学检查也是关键一步。通过CT或MRI等影像学检查，可以更准确地诊断出具体的炎症，如肠炎、肠原发炎或阑尾炎，从而确保治疗的正确性。\n",
      "\n",
      "总之，急性阑尾炎发作时，立即进行紧急治疗，并结合影像学检查，是处理这种病态的重要步骤。\n",
      "\n"
     ]
    }
   ],
   "source": [
    "FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
    "\n",
     "question = \"一个患有急性阑尾炎的病人已经发病5天，腹痛稍有减轻但仍然发热，在体检时发现右下腹有压痛的包块，此时应如何处理？\"  # Question (note: no trailing comma, which would turn this into a tuple)\n",
    "inputs = tokenizer([inference_prompt.format(question, \"\")], return_tensors=\"pt\").to(\"cuda\")\n",
    "\n",
    "outputs = model.generate(\n",
    "    input_ids=inputs.input_ids,\n",
    "    attention_mask=inputs.attention_mask,\n",
    "    max_new_tokens=1000,\n",
    "    use_cache=True,\n",
    ")\n",
    "\n",
    "output = tokenizer.batch_decode(outputs, skip_special_tokens=True)\n",
    "print(output[0].split(\"### Response:\")[1])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "a2fa16fe-a70b-4e38-af97-2a6efa68a23a",
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_response(question: str, model, tokenizer, inference_prompt: str, max_new_tokens: int = 1024) -> str:\n",
     "    \"\"\"\n",
     "    Generate a response for a given medical question using the specified model and tokenizer.\n",
     "\n",
     "    Args:\n",
     "        question (str): The medical question for the model to answer.\n",
     "        model: The loaded Unsloth/Hugging Face model.\n",
     "        tokenizer: The corresponding tokenizer.\n",
     "        inference_prompt (str): The template string used to format the input.\n",
     "        max_new_tokens (int, optional): Maximum number of tokens to generate. Defaults to 1024.\n",
     "\n",
     "    Returns:\n",
     "        str: The generated response text, with the prompt portion removed.\n",
     "    \"\"\"\n",
     "    # 1. Format the input with the template\n",
     "    prompt = inference_prompt.format(\n",
     "        question,  # Fill in the question\n",
     "        \"\",       # Leave empty so the model generates the CoT and the response\n",
     "    )\n",
     "\n",
     "    # 2. Tokenize the formatted prompt and move it to the model's device\n",
     "    inputs = tokenizer([prompt], return_tensors=\"pt\").to(model.device)\n",
     "\n",
     "    # 3. Generate the output\n",
     "    # use_cache=True speeds up decoding\n",
     "    outputs = model.generate(\n",
     "        input_ids=inputs.input_ids,\n",
     "        attention_mask=inputs.attention_mask,\n",
     "        max_new_tokens=max_new_tokens,\n",
     "        use_cache=True,\n",
     "    )\n",
     "\n",
     "    # 4. Decode the generated tokens into text\n",
     "    # skip_special_tokens=True removes special tokens such as the EOS token\n",
     "    decoded_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]\n",
     "\n",
     "    # 5. Split the string and return only the part after \"### Response:\"\n",
     "    # .strip() removes any leading/trailing whitespace\n",
     "    response_part = decoded_output.split(\"### Response:\")\n",
     "    if len(response_part) > 1:\n",
     "        return response_part[1].strip()\n",
     "    else:\n",
     "        # If the model did not emit the \"### Response:\" marker, return the full output for debugging\n",
     "        return decoded_output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "b5494a71-e0ac-4a38-b895-4f6dc6d5d324",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================== Model answer ====================\n",
      "<think>\n",
      "Oh, this 60-year-old male patient presented with right-sided chest pain, and the X-ray shows disappearance of the right costophrenic angle, which makes me think of possible tuberculosis. Tuberculosis often leads to pleural effusion, the so-called tuberculous pleural effusion. This makes me think that to understand the nature of the effusion, some specific tests may be needed.\n",
      "\n",
      "First, I recall that the nature of a pleural effusion is tied to certain laboratory tests. For example, tuberculosis patients often develop pleural effusion because of the disease, and this fluid is called a tuberculous pleural effusion. Then I think that to find out whether this fluid is liquid or solid, some special tests are needed.\n",
      "\n",
      "I think of urinalysis. If uric acid and urinary protein are found in the patient's urine, these markers are usually used to judge whether urinary uric acid is elevated, thereby ruling out uremia. Elevated uric acid is usually due to uremia or uremia-related problems, but it could also be caused by tuberculosis.\n",
      "\n",
      "However, urinalysis may not fully reveal the nature of the effusion, because it only offers a few markers and cannot directly tell liquid from solid. Next, I think of a complete blood count. It can tell us the plasma calcium and sodium levels and the types and counts of blood cells, which provides some information about the state of the blood.\n",
      "\n",
      "Thinking further, the nature of the effusion may be related to blood calcium and sodium levels. If they are elevated, this may be related to the effusion, since these markers are usually tied to the state of the blood. This could be an important clue.\n",
      "\n",
      "Then I think of testing blood calcium and sodium. If the complete blood count shows them somewhat elevated, this may hint at the nature of the effusion, since these markers are usually related to the state of the plasma.\n",
      "\n",
      "But on reflection, if the patient has tuberculosis, the effusion is more likely liquid rather than solid. So in this case, if the effusion is liquid, we may need further tests to confirm it. For example, blood calcium and sodium tests could be considered, which may help tell liquid from solid.\n",
      "\n",
      "Putting these thoughts together, I feel that both the complete blood count and the blood calcium and sodium tests are useful for understanding the nature of the effusion; in particular, the calcium and sodium tests can help us determine its nature and thus diagnose more accurately.\n",
      "\n",
      "Oh, right, one more important point: urinalysis. Finding uric acid and urinary protein in the urine is a common finding in tuberculous pleural effusion. Urinalysis can help rule out uremia, but it cannot fully confirm this, because uremia may be related to tuberculosis.\n",
      "\n",
      "Therefore, when assessing the nature of the effusion, the most direct tests are blood calcium and sodium, because these markers provide information about the state of the plasma and thus help us judge whether the effusion is liquid or solid.\n",
      "</think>\n",
      "When assessing the nature of a pleural effusion, the complete blood count and the blood calcium and sodium tests are the most direct methods. These markers tell us the plasma calcium and sodium levels and the types and counts of blood cells, and thereby whether the effusion is liquid or solid. In tuberculosis patients the effusion usually appears in liquid form, so the calcium and sodium tests can help us understand its nature more accurately. In addition, urinalysis sometimes also offers clues, but usually needs to be combined with other tests to determine the nature of the effusion. Therefore, the blood calcium and sodium tests are the most effective way to assess the nature of a pleural effusion.\n"
     ]
    }
   ],
   "source": [
    "my_question = \"For a 60-year-old male patient with right-sided chest pain whose X-ray shows disappearance of the right costophrenic angle, diagnosed with pulmonary tuberculosis with right-sided pleural effusion, which laboratory test would be most helpful for understanding the nature of the pleural fluid?\"\n",
    "\n",
    "response = generate_response(my_question, model, tokenizer, inference_prompt)\n",
    "print(\"==================== Model answer ====================\")\n",
    "print(response)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "cf817807-1f2b-45d1-914b-37c399018b99",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================== Model answer ====================\n",
      "<think>\n",
      "This 28-year-old male patient works as a programmer and stays up late for long stretches, leading to symptoms such as dizziness and nausea, which sounds like a fairly serious health problem.\n",
      "\n",
      "First, what comes to mind is a cerebrovascular problem, such as the possibility of stroke. Staying up late for long periods may cause insufficient blood supply to the meninges, which in turn triggers cerebral ischemia. In that case cerebrovascular injury commonly presents with dizziness and nausea.\n",
      "\n",
      "However, it could also be a more serious problem, such as strain on the heart. With long working hours, the heart may need more energy to keep up. An overloaded heart can lead to cardiac insufficiency and insufficient myocardial blood supply, which may also produce dizziness and nausea.\n",
      "\n",
      "Thinking further, there are other possibilities, such as liver function problems. Abnormal liver function may overload the liver, which in turn affects cardiac function and triggers similar symptoms.\n",
      "\n",
      "Taken together, all three possibilities exist. Dizziness and nausea are common manifestations of cardiovascular injury, so the most likely cause is a stroke from cerebrovascular injury.\n",
      "\n",
      "Still, other causes are possible, such as abnormal cardiac function, though that also needs more tests to confirm. So it is advisable to check cardiac function first and see whether there are signs of cardiac dysfunction.\n",
      "\n",
      "Also, if cardiac function turns out normal, other causes may need to be considered, such as cerebrovascular injury. Therefore, a cardiac function check first is advised, to look for other abnormal findings.\n",
      "\n",
      "Overall, dizziness and nausea may be manifestations of cerebrovascular injury or of cardiac dysfunction. A cardiac function check first is recommended to confirm whether there are other problems.\n",
      "</think>\n",
      "Based on the patient's symptoms and clinical picture, the dizziness and nausea may be manifestations of cerebrovascular injury (such as stroke). However, cardiac dysfunction is also possible. It is recommended to check cardiac function first to confirm whether it is abnormal and to rule out other potential problems from an overloaded heart. If cardiac function is normal, the possibility of cerebrovascular injury may need further consideration.\n"
     ]
    }
   ],
   "source": [
    "my_question = \"For a 28-year-old male patient who works as a programmer, stays up late year-round, and recently suddenly felt dizzy and even a little nauseous, what diseases could this be?\"\n",
    "\n",
    "response = generate_response(my_question, model, tokenizer, inference_prompt)\n",
    "print(\"==================== Model answer ====================\")\n",
    "print(response)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.13.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
