{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# deepseek-r1微调实战"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 一、环境配置及DeepSeek-R1-Distill-Llama-8B下载"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "# python -m venv ds\n",
    "# pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 或者其他版本\n",
    "# pip install unsloth\n",
    "# pip install wandb\n",
    "# pip install modelscope\n",
    "# mkdir DeepSeek-R1-Distill-Llama-8B\n",
    "# modelscope download --model deepseek-ai/DeepSeek-R1-Distill-Llama-8B --local_dir ./DeepSeek-R1-Distill-Llama-8B"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 二、模型加载"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2.6.0+cu124\n",
      "True\n"
     ]
    }
   ],
   "source": [
    "print(torch.__version__)\n",
    "print(torch.cuda.is_available())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.\n",
      "🦥 Unsloth Zoo will now patch everything to make training faster!\n"
     ]
    }
   ],
   "source": [
    "from unsloth import FastLanguageModel"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "DeepSeek-R1-Distill-Llama-8B 推理时至少需要约16GB RAM + 8GB 显存\n",
    "\n",
    "命令行nvidia-smi 输入后查看memory-usage 可检查自己显存剩余 "
   ]
  },
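  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick way to check free VRAM from inside Python (an optional addition, using `torch.cuda.mem_get_info`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: report free/total VRAM before loading the model\n",
    "if torch.cuda.is_available():\n",
    "    free_b, total_b = torch.cuda.mem_get_info()\n",
    "    print(f\"VRAM free: {free_b / 1e9:.1f} GB / total: {total_b / 1e9:.1f} GB\")"
   ]
  },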
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "#模型一些参数配置\n",
    "max_seq_length = 2048 #序列最长限制\n",
    "dtype = None \n",
    "load_in_4bit = False\n",
    "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==((====))==  Unsloth 2025.2.9: Fast Llama patching. Transformers: 4.48.3.\n",
      "   \\\\   /|    GPU: NVIDIA GeForce RTX 4090 D. Max memory: 23.643 GB. Platform: Linux.\n",
      "O^O/ \\_/ \\    Torch: 2.6.0+cu124. CUDA: 8.9. CUDA Toolkit: 12.4. Triton: 3.2.0\n",
      "\\        /    Bfloat16 = TRUE. FA [Xformers = 0.0.29.post3. FA2 = False]\n",
      " \"-____-\"     Free Apache license: http://github.com/unslothai/unsloth\n",
      "Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "ab69e69ec2be40909d7d91d5277c84d9",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "autodl-fs/DeepSeek-R1-Distill-Llama-8B does not have a padding token! Will use pad_token = <|finetune_right_pad_id|>.\n"
     ]
    }
   ],
   "source": [
    "#DeepSeek-R1-Distill-Llama-8B \n",
    "model, tokenizer = FastLanguageModel.from_pretrained(\n",
    "    model_name = \"autodl-fs/DeepSeek-R1-Distill-Llama-8B\",\n",
    "    max_seq_length = max_seq_length,\n",
    "    dtype = dtype,\n",
    "    load_in_4bit = load_in_4bit,\n",
    "    device_map={\"\": device},  # 将所有参数加载到指定设备\n",
    ")\n",
    "\n",
    "EOS_TOKEN = tokenizer.eos_token"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "# print(model)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "# print(tokenizer)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "调整模型为推理模式"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "# FastLanguageModel.for_inference(model) "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "测试问答推理功能"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# #提问\n",
    "# question = \"\"\n",
    "# #借助分词器，将输入的问题转化为标记索引：\n",
    "# inputs = tokenizer([question], return_tensors=\"pt\").to(\"cuda\")\n",
    "# print(inputs)\n",
    "# #输入模型进行推理\n",
    "# outputs = model.generate(\n",
    "#     input_ids=inputs.input_ids,\n",
    "#     max_new_tokens=1200,\n",
    "#     use_cache=True,\n",
    "# )\n",
    "# #得到回答也是token索引\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# #将回答的token索引串转换为token串\n",
    "# response = tokenizer.batch_decode(outputs)\n",
    "# print(response[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 三、medical-o1-reasoning-CoT数据集处理"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import os\n",
    "from datasets import Dataset\n",
    "# import wandb"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "设置训练问答模板"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_prompt_style = \"\"\"\n",
    "    ### 医学推理任务提示:\n",
    "    你是一位经验丰富的医学专家，擅长通过逻辑推理解决复杂的医学问题。现在，你将面对一个医学推理问题，需要根据已知信息进行分析，并给出合理的推断和解释。\n",
    "\n",
    "\n",
    "    ### 问题：\n",
    "    {}\n",
    "\n",
    "    ### 回答：\n",
    "    <think>\n",
    "    {}\n",
    "    </think>\n",
    "    <answer>\n",
    "    {}\n",
    "    </answer>\n",
    "    \"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "def formatting_prompts_func(examples):\n",
    "    inputs = examples[\"Question\"]\n",
    "    cots = examples[\"Complex_CoT\"]\n",
    "    outputs = examples[\"Response\"]\n",
    "    texts = []\n",
    "    for input, cot, output in zip(inputs, cots, outputs):\n",
    "        text = train_prompt_style.format(input, cot, output) + EOS_TOKEN\n",
    "        texts.append(text)\n",
    "    return {\n",
    "        \"text\": texts,\n",
    "    }"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "#把数据处理和读取数据封装成一个函数\n",
    "def get_train_data(file_path):\n",
    "    with open(file_path, 'r', encoding='utf-8') as file:\n",
    "        file_content = file.read()  # 读取文件内容\n",
    "        train_data = json.loads(file_content)  # 加载 JSON 文件内容\n",
    "        \n",
    "    # 将列表转换为字典格式\n",
    "    data_dict = {key: [item[key] for item in train_data] for key in train_data[0].keys()}\n",
    "    # 使用 Dataset.from_dict() 创建 Dataset 对象\n",
    "    train_data = Dataset.from_dict(data_dict)\n",
    "\n",
    "    train_data = train_data.map(formatting_prompts_func, batched = True,)\n",
    "    return train_data\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "3feb02bf57ed402593e487b5f531bfe4",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Map:   0%|          | 0/24772 [00:00<?, ? examples/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "train_data = get_train_data(\"autodl-fs/medical_o1_sft_Chinese.json\")"
   ]
  },
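  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an optional sanity check (not part of the original run), print one formatted example to confirm the template, CoT, and EOS token were assembled as intended:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inspect the first formatted training sample (truncated for readability)\n",
    "print(train_data.column_names)\n",
    "print(train_data[0][\"text\"][:500])"
   ]
  },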
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 四、模型微调"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "from trl import SFTTrainer\n",
    "from transformers import TrainingArguments\n",
    "from unsloth import is_bfloat16_supported"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "模型转为微调模式"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Unsloth 2025.2.9 patched 32 layers with 32 QKV layers, 32 O layers and 32 MLP layers.\n"
     ]
    }
   ],
   "source": [
    "model = FastLanguageModel.get_peft_model(\n",
    "    model,\n",
    "    r=16,  \n",
    "    target_modules=[\n",
    "        \"q_proj\",\n",
    "        \"k_proj\",\n",
    "        \"v_proj\",\n",
    "        \"o_proj\",\n",
    "        \"gate_proj\",\n",
    "        \"up_proj\",\n",
    "        \"down_proj\",\n",
    "    ],\n",
    "    lora_alpha=16,\n",
    "    lora_dropout=0,  \n",
    "    bias=\"none\",  \n",
    "    use_gradient_checkpointing=\"unsloth\",  # True or \"unsloth\" for very long context\n",
    "    random_state=1290,\n",
    "    use_rslora=False,  \n",
    "    loftq_config=None,\n",
    ")"
   ]
  },
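  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To confirm how small the trainable fraction is, PEFT-wrapped models expose `print_trainable_parameters` (an optional check; with r=16 the adapters train only a small percentage of the 8B weights):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Show trainable vs. total parameter counts for the LoRA-wrapped model\n",
    "model.print_trainable_parameters()"
   ]
  },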
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "定义训练函数各个超参数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "5adace32d8ae41f5af2dee8b731f1928",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Map (num_proc=2):   0%|          | 0/24772 [00:00<?, ? examples/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "trainer = SFTTrainer(\n",
    "    model=model,\n",
    "    tokenizer=tokenizer,\n",
    "    train_dataset=train_data,\n",
    "    dataset_text_field=\"text\",\n",
    "    max_seq_length=max_seq_length,\n",
    "    dataset_num_proc=2,\n",
    "    args=TrainingArguments(\n",
    "        per_device_train_batch_size=2,\n",
    "        num_train_epochs=3,\n",
    "        gradient_accumulation_steps=4,\n",
    "        # Use num_train_epochs = 1, warmup_ratio for full training runs!\n",
    "        warmup_steps=4,\n",
    "        learning_rate=2e-4,\n",
    "        fp16=not is_bfloat16_supported(),\n",
    "        bf16=is_bfloat16_supported(),\n",
    "        logging_steps=10,\n",
    "        optim=\"adamw_8bit\",\n",
    "        weight_decay=0.01,\n",
    "        lr_scheduler_type=\"linear\",\n",
    "        seed=1291,\n",
    "        output_dir=\"autodl-fs/medical\",\n",
    "    ),\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "训练过程损失上传到wandb"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "wandb.login(key=\"03df53261308ddd21901480d7befd1ad4e4de221\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "trainer_stats = trainer.train()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 五、保存模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model_save = \"autodl-fs/Ds_Llama8B_medical\"\n",
    "model.save_pretrained(model_save) "
   ]
  },
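  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Unsloth can also merge the adapter into the base weights for standalone deployment; a sketch, with the output directory name chosen here purely for illustration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optionally merge the LoRA weights into the base model (16-bit) for deployment\n",
    "# model.save_pretrained_merged(\"autodl-fs/Ds_Llama8B_medical_merged\", tokenizer, save_method=\"merged_16bit\")"
   ]
  },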
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "加载训练好的模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "from peft import PeftModel"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==((====))==  Unsloth 2025.2.9: Fast Llama patching. Transformers: 4.48.3.\n",
      "   \\\\   /|    GPU: NVIDIA GeForce RTX 4090 D. Max memory: 23.643 GB. Platform: Linux.\n",
      "O^O/ \\_/ \\    Torch: 2.6.0+cu124. CUDA: 8.9. CUDA Toolkit: 12.4. Triton: 3.2.0\n",
      "\\        /    Bfloat16 = TRUE. FA [Xformers = 0.0.29.post3. FA2 = False]\n",
      " \"-____-\"     Free Apache license: http://github.com/unslothai/unsloth\n",
      "Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "f0700aa1e85e4f779a0aa1f8f1fba3c4",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "./autodl-fs/DeepSeek-R1-Distill-Llama-8B does not have a padding token! Will use pad_token = <|finetune_right_pad_id|>.\n"
     ]
    }
   ],
   "source": [
    "#DeepSeek-R1-Distill-Llama-8B 微调后更适用于复杂医学问题的推理\n",
    "base_model, tokenizer = FastLanguageModel.from_pretrained(\n",
    "    model_name = \"./autodl-fs/DeepSeek-R1-Distill-Llama-8B\",\n",
    "    max_seq_length = max_seq_length,\n",
    "    dtype = dtype,\n",
    "    load_in_4bit = load_in_4bit,\n",
    "    device_map={\"\": device},  # 将所有参数加载到指定设备\n",
    ")\n",
    "\n",
    "\n",
    "# 加载 LoRA 适配器，加入微调后的变化\n",
    "model = PeftModel.from_pretrained(\n",
    "    base_model,\n",
    "    \"./autodl-fs/\",\n",
    "    adapter_name=\"lora_adapter\"\n",
    ")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 六、评估问答功能"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "由于没有进行RLHF，模型自由发挥空间不大，本部分采用BLEU评估相似度 和 gpt4打分（0-5分）"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "import re\n",
    "import math\n",
    "import collections\n",
    "import jieba"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "用BLEU作为评估指标，对比微调后模型回答和标准答案之间的差异"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "def bleu(pred_text, label_text, k):  \n",
    "    \"\"\"计算BLEU\"\"\"\n",
    "    pred_tokens = list(jieba.cut(pred_text))\n",
    "    label_tokens = list(jieba.cut(label_text))\n",
    "    len_pred, len_label = len(pred_tokens), len(label_tokens)\n",
    "    score = math.exp(min(0, 1 - len_label / len_pred))\n",
    "    for n in range(1, k + 1):\n",
    "        num_matches, label_subs = 0, collections.defaultdict(int)\n",
    "        for i in range(len_label - n + 1):\n",
    "            label_subs[' '.join(label_tokens[i: i + n])] += 1\n",
    "        for i in range(len_pred - n + 1):\n",
    "            if label_subs[' '.join(pred_tokens[i: i + n])] > 0:\n",
    "                num_matches += 1\n",
    "                label_subs[' '.join(pred_tokens[i: i + n])] -= 1\n",
    "        score *= math.pow(num_matches / (len_pred - n + 1), math.pow(0.5, n))\n",
    "    return score"
   ]
  },
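  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A toy illustration of the metric (added as an example): an identical pair scores 1.0, while shorter or divergent predictions are penalized by the brevity and n-gram terms:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# BLEU sanity check on toy strings\n",
    "same = bleu(\"患者需要补益气血\", \"患者需要补益气血\", k=2)\n",
    "diff = bleu(\"患者需要休息\", \"患者需要补益气血\", k=2)\n",
    "print(f\"identical: {same:.3f}  different: {diff:.3f}\")  # identical pair gives 1.000"
   ]
  },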
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "切分推理过程和解答"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_thk_ans(text):\n",
    "    pattern1 = r\"<think>\\s*(.*?)\\s*</think>\"\n",
    "    pattern2 = r\"<answer>\\s*(.*?)\\s*</answer>\"\n",
    "    match = re.search(pattern1, text, re.DOTALL)\n",
    "    #提取think\n",
    "    if match:\n",
    "        thk = match.group(1).strip()\n",
    "    else:\n",
    "        thk = None  \n",
    "    #提取answer\n",
    "    match = re.search(pattern2, text, re.DOTALL)\n",
    "    if match:\n",
    "        ans = match.group(1).strip()\n",
    "    else:\n",
    "        ans = None  \n",
    "        \n",
    "    return thk, ans"
   ]
  },
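  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal usage example of the extraction helper (the strings here are illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Missing tags yield None instead of raising\n",
    "demo = \"<think>\\n推理过程\\n</think>\\n<answer>\\n最终答案\\n</answer>\"\n",
    "print(get_thk_ans(demo))            # ('推理过程', '最终答案')\n",
    "print(get_thk_ans(\"no tags here\"))  # (None, None)"
   ]
  },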
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "（1）"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "<think>\n",
      "一位23岁的女性患者在烤瓷冠修复后，发现瓷层的颜色缺乏层次感。我觉得这可能是由于瓷层的调色不够准确或者应用时操作不合理。制作烤瓷冠的时候，好像需要用不同厚度和色泽的瓷粉分层，这样才能模拟出天然牙的颜色层次。我猜如果在这个过程中操作不对，可能颜色会显得单一。\n",
      "\n",
      "另外，烧制过程中温度和时间控制不足也可能有影响。我记得如果温度或时间不够，瓷粉可能融合得不够充分，这样颜色层次感就没有了。再想想，是不是金属基底也有点影响？非常可能。如果基底遮色不够，或者颜色深的金属，没有包裹好，那势必会透过瓷层，影响整体色调。\n",
      "\n",
      "所以，调整瓷粉的色调和分层结构十分重要。如果没有调整好，那色泽就会单一，没有自然牙的层次感。上瓷技术也得过关。得具备一定的技术水平才行，知道怎样堆叠瓷粉和控制烧制过程。看来如果瓷粉没有恰当地叠加，好像就很难有自然的渐变效果。\n",
      "\n",
      "我觉得颜色缺乏层次最可能的原因还是在于瓷粉调色不当和层次结构应用不足。这样一来，表现出来的颜色就显得平平，不像自然牙那样有层次。不过，也可能之前想的其他因素也有些影响，总之需要从多个角度考虑。\n",
      "\n",
      "其实，在瓷层应用上，合理的层次调整真的很关键。上瓷时，要根据解剖学形态和色泽来调整瓷层，如果没有做到，颜色可能就会显得太过统一而缺乏变化。\n",
      "\n",
      "最后，上瓷的技术也不能忽视。在上瓷过程中，瓷层的堆叠方式和各层交界的处理一定要妥当。要是瓷层移行不够自然，可能真的会显得颜色变化不明显，缺乏自然的层次感。\n",
      "\n",
      "综合考虑，应该是上瓷时，各层瓷粉的过渡没有处理好，导致了瓷层颜色缺乏层次。。\n",
      "</think>\n",
      "<answer>\n",
      "导致烤瓷冠颜色缺乏层次感的最常见原因一般是由于瓷粉调色不当和层次构成应用不足。在制作烤瓷冠的进程中，需要运用不同厚度和色泽的瓷粉来分层，这样才能仿制天然牙的颜色层次。若调色不当或者分层不合理，就可能导致最终的烤瓷冠颜色显得单一，缺乏渐变效果。此外，烧制过程中温度和时间的控制不当，或者金属基底颜色的透出等因素，也可能影响最终的色泽效果。为了达到理想的层次感，上瓷的技术水平和细致的操作也是必不可少的。\n",
      "</answer>\n",
      "\n"
     ]
    }
   ],
   "source": [
    "question = \"一位23岁的女性患者在进行烤瓷冠修复后，发现瓷层的颜色缺乏层次感。造成这种现象的最常见原因是什么？\"\n",
    "\n",
    "FastLanguageModel.for_inference(model)  # Unsloth has 2x faster inference!\n",
    "inputs = tokenizer([prompt_style.format(question, \"\")], return_tensors=\"pt\").to(\"cuda\")\n",
    "\n",
    "outputs = model.generate(\n",
    "    input_ids=inputs.input_ids,\n",
    "    attention_mask=inputs.attention_mask,\n",
    "    max_new_tokens=1200,\n",
    "    use_cache=True,\n",
    ")\n",
    "response = tokenizer.batch_decode(outputs)\n",
    "text = response[0].split(\"### 回答：\")[1]\n",
    "print(text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tnk_bleu 0.897\n",
      "ans_bleu 0.889\n"
     ]
    }
   ],
   "source": [
    "thk, ans = get_thk_ans(text)\n",
    "thk_label = \"\"\"\n",
    "一位23岁的女性患者在烤瓷冠修复后，发现瓷层的颜色缺乏层次感。我想这可能是由于瓷层的调色不够准确或者应用时操作不当。制作烤瓷冠的时候，好像需要用不同厚度和色泽的瓷粉分层，这样才能模拟出天然牙的颜色层次。我猜如果在这个过程中操作不对，可能颜色会显得单一。\\n\\n哦，另外，烧制过程中温度和时间控制不足也可能有影响。我记得如果温度或时间不够，瓷粉可能融合得不好，这样颜色层次感就没有了。再想想，是不是金属基底也有点影响？嗯，有可能哦。如果基底遮色不够，或者颜色深的金属，没有包裹好，那势必会透过瓷层，影响整体色调。\\n\\n所以说，调整瓷粉的色调和分层结构十分重要。我想如果没有调整好，那色泽就会单调，没有自然牙的层次感。让我看看，上瓷技术也得过关。得具备一定的技术水平才行，知道怎样堆叠瓷粉和控制烧制过程。看来如果瓷粉没有恰当地叠加，好像就很难有自然的渐变效果。\\n\\n想了这么多，我觉得颜色缺乏层次最可能的原因还是在于瓷粉调色不当和层次结构应用不足。这样一来，表现出来的颜色就显得平平，不像自然牙那样有层次。不过，也可能之前想的其他因素也有些影响，总之需要从多个角度考虑。\\n\\n其实，在瓷层应用上，合理的层次调整真的很关键。上瓷时，要根据解剖学形态和色泽来调整瓷层，如果没有做到，颜色可能就会显得太过统一而缺乏变化。\\n\\n最后，上瓷的技术也不能忽视。在上瓷过程中，瓷层的堆叠方式和各层交界的处理一定要妥当。要是瓷层移行不够自然，可能真的会显得颜色变化不明显，缺乏自然的层次感。\\n\\n综合考虑，应该是上瓷时，各层瓷粉的过渡没有处理好，导致了瓷层颜色缺乏层次。嗯，我觉得这个原因最为常见，也没跑了。\n",
    "\"\"\"\n",
    "ans_label=\"导致烤瓷冠颜色缺乏层次感的最常见原因通常是由于瓷粉调色不当和层次结构应用不足。在制作烤瓷冠的过程中，需要运用不同厚度和色泽的瓷粉来分层，这样才能模仿天然牙的颜色层次。如果调色不准确或者分层不合理，就可能导致最终的烤瓷冠颜色显得单一，缺乏自然的渐变效果。此外，烧制过程中温度和时间的控制不当，或者金属基底颜色的透出等因素，也可能影响最终的色泽效果。为了达到理想的层次感，上瓷的技术水平和细致的操作也是必不可少的。\"\n",
    "\n",
    "print(f'tnk_bleu {bleu(thk, thk_label, k=4):.3f}')\n",
    "print(f'ans_bleu {bleu(ans, ans_label, k=4):.3f}')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "评分： 4/5\n",
    "\n",
    "优点：\n",
    "内容全面：回答涵盖了烤瓷冠颜色缺乏层次感的主要原因，包括瓷粉调色不当、层次结构应用不足、烧制过程控制问题以及金属基底的影响。这些因素都被合理地提及并分析。\n",
    "逻辑清晰：从瓷粉调色和层次结构入手，逐步分析到烧制过程和金属基底的影响，最后总结出关键因素，逻辑连贯，条理清晰。\n",
    "语言表达流畅：回答语言自然，用词准确，没有明显的语法错误或表述不清的地方。\n",
    "专业性较强：回答中涉及了烤瓷冠制作的专业知识，如瓷粉分层、烧制温度、金属基底遮色等，显示出一定的专业性。\n",
    "\n",
    "可改进的地方（扣分原因） \n",
    "缺乏重点突出：虽然列举了多个原因，但没有明确指出哪个是最常见的原因。题目问的是“最常见原因”，回答中虽然有提及瓷粉调色和层次结构是关键，但不够突出。\n",
    "重复性内容较多：在分析过程中，有些内容有重复，比如多次提到瓷粉分层的重要性，可以进一步精简和整合。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "（2）"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "<think>\n",
      "这个患者的情况看起来有点复杂。首先，他的主要症状是积块坚硬和疼痛加剧，面色萎黄、肌肉瘦削、饮食减少。接着，舌质呈淡紫，苔暗而脉弦细，这些都提示了血瘀的可能。\n",
      "\n",
      "从这些症状来看，首先想到的是活血化瘀的治疗方法。常用的方剂有血府逐瘀汤，它对于胸胁刺痛、面色不华、舌暗这些症状特别有效，可以考虑这个。\n",
      "\n",
      "并且，进一步仔细分析，患者的面色萎黄和饮食减退也暗示气血不足，不仅仅是血瘀。这就让我想到四物汤之类的方剂来增强这方面的调理。\n",
      "\n",
      "此外，积块的问题似乎更复杂，可能需要针对性的治疗，想到膈下逐瘀汤，这个方剂专门针对血瘀引起的积块症状。所以，或许膈下逐瘀汤是一个更全面的选择？\n",
      "\n",
      "可是，再考虑完这些，感觉可能还是不够全面。额，应该再仔细确认一下，毕竟如何融合各个症状的治疗是个关键问题。考虑到气血不足，还需要一些如八珍汤来补血，哦，甚至再加上当归补血汤，这样也许能更贴近患者的需求。\n",
      "\n",
      "不过话说回来，患者的症状似乎更严重，可能需要改良的治疗方案。或许，八珍汤和化积丸一起使用能同时解决气血不足和消除积块的问题。最终用八珍汤合化 积丸可能就能达到好的效果，进行这样的调整应该能更有针对性地改善患者的整体症状。以上就是 较好的解决方案。\n",
      "</think>\n",
      "<answer>\n",
      "我建议使用八珍汤合化积丸进行治疗。该组合可有效针对患者的气血不足和积块坚硬的问题。八珍汤有补益气血的作用，能够改善面色萎黄、肌肉瘦削的症状，同时增强患者的体质。而化积丸则用于消除积块，缓解因血瘀而导致的疼痛问题。通过两剂药的共同作用，相信可以全面治疗患者的症状。\n",
      "</answer>\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "question = \"对于患有积块坚硬、疼痛逐渐加剧，面色萎黄、肌肉瘦削、饮食大减、舌质淡紫、苔暗、脉弦细的患者，建议用哪种方剂进行治疗？\"\n",
    "\n",
    "FastLanguageModel.for_inference(model)  # Unsloth has 2x faster inference!\n",
    "inputs = tokenizer([prompt_style.format(question, \"\")], return_tensors=\"pt\").to(\"cuda\")\n",
    "\n",
    "outputs = model.generate(\n",
    "    input_ids=inputs.input_ids,\n",
    "    attention_mask=inputs.attention_mask,\n",
    "    max_new_tokens=1200,\n",
    "    use_cache=True,\n",
    ")\n",
    "response = tokenizer.batch_decode(outputs)\n",
    "text = response[0].split(\"### 回答：\")[1]\n",
    "print(text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tnk_bleu 0.854\n",
      "ans_bleu 0.632\n"
     ]
    }
   ],
   "source": [
    "thk, ans = get_thk_ans(text)\n",
    "thk_label= \"\"\"\n",
    "这个患者的情况看起来有点复杂。首先，他的主要症状是积块坚硬和疼痛加剧，还有面色萎黄、肌肉瘦削、饮食减少。接着，舌质呈淡紫，苔暗而脉弦细，这些都是提示有血瘀可能性的标志。\\n\\n那么从这些症状来看，首先想到的是活血化瘀的治疗方法。哦，对了，常用的方剂有血府逐瘀汤，它对于胸胁刺痛、面色不华、舌暗这些症状特别有效，也许可以考虑这个。\\n\\n不过，进一步仔细分析，患者的面色萎黄和饮食减退也可能暗示气血不足，而且不仅仅是血瘀。于是，是否需要补益气血呢？这就让我想到四物汤之类的方剂来增强这方面的调理。\\n\\n此外，积块的问题似乎更复杂，可能需要针对性的治疗，想到膈下逐瘀汤，这个方剂专门针对血瘀引起的积块症状。所以，或许膈下逐瘀汤是一个更全面的选择？\\n\\n可是，再考虑完这些，感觉可能还是不够全面。额，应该再仔细确认一下，毕竟如何融合各个症状的治疗是个关键问题。考虑到气血不足，还需要一些如八珍汤来补血，哦，甚至再加上当归补血汤，这样也许能更贴近患者的需求。\\n\\n唉，不过话说回来，患者的症状似乎更严重，可能需要改良的治疗方案。或许，八珍汤和化积丸一起使用能同时解决气血不足和消除积块的问题。是的，最终用八珍汤合化积丸可能就能达到好的效果，进行这样的调整应该能更有针对性地改善患者的整体症状。经过一番反复推敲和考虑，看起来这是最合适的方案。\n",
    "\"\"\"\n",
    "ans_label=\"\"\"\n",
    "对于这位患者，我建议使用八珍汤合化积丸进行治疗。这种组合可以有效针对患者的气血不足和积块坚硬的问题。八珍汤具有补益气血的作用，有助于改善面色萎黄、肌肉瘦削的症状，同时增强患者的整体体质。而化积丸则专门用于消除积块，缓解因血瘀而导致的疼痛问题。通过这两种方剂的联合应用，可以更全面地改善患者的症状，达到更好的治疗效果。\n",
    "\"\"\"\n",
    "\n",
    "print(f'tnk_bleu {bleu(thk, thk_label, k=4):.3f}')\n",
    "print(f'ans_bleu {bleu(ans, ans_label, k=4):.3f}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "评分：4/5\n",
    "\n",
    "优点：\n",
    "症状分析较为全面：你对患者的症状进行了较为细致的分析，尤其是对积块坚硬、疼痛加剧、面色萎黄、舌质淡紫等关键症状的解读，能够准确地联想到气血不足和血瘀的问题。\n",
    "方剂选择有一定合理性：你选择了八珍汤合化积丸，这个组合在理论上确实可以同时解决气血不足和积块的问题。八珍汤补益气血，化积丸消积化瘀，两者合用有一定的针对性。\n",
    "逻辑连贯：从症状分析到方剂选择，思路较为清晰，逻辑连贯，能够体现出对中医辨证论治的基本理解。\n",
    "\n",
    "不足之处：\n",
    "方剂选择不够精准：根据患者的具体症状（积块坚硬、疼痛加剧，面色萎黄、舌质淡紫、脉弦细等），更符合正虚瘀结型积聚的表现。虽然八珍汤合化积丸有一定合理性，但可能不是最佳选择。根据中医内科学的相关知识，对于此类患者，膈下逐瘀汤或鳖甲煎丸可能更为对症。\n",
    "缺乏对症状的精准辨证：虽然提到了气血不足和血瘀，但没有明确指出患者的“正虚瘀结”本质，也没有提到脉象（脉弦细）的意义，这在中医辨证中是重要的。\n",
    "方剂组合的复杂性：八珍汤和化积丸的组合虽然在理论上可以解决问题，但实际应用中可能会因为药物成分的复杂性而影响疗效。例如，化积丸本身就有补益成分，与八珍汤的补益作用可能存在重复。"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
