{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# DeepSeek-R1 Fine-Tuning in Practice"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 1. Environment Setup and Downloading DeepSeek-R1-Distill-Llama-8B"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "# python -m venv ds\n",
    "# pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118  # or another CUDA build\n",
    "# pip install unsloth\n",
    "# pip install wandb\n",
    "# pip install modelscope\n",
    "# mkdir DeepSeek-R1-Distill-Llama-8B\n",
    "# modelscope download --model deepseek-ai/DeepSeek-R1-Distill-Llama-8B --local_dir ./DeepSeek-R1-Distill-Llama-8B"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 2. Loading the Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2.6.0+cu124\n",
      "True\n"
     ]
    }
   ],
   "source": [
    "print(torch.__version__)\n",
    "print(torch.cuda.is_available())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.\n",
      "🦥 Unsloth Zoo will now patch everything to make training faster!\n"
     ]
    }
   ],
   "source": [
    "from unsloth import FastLanguageModel"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "DeepSeek-R1-Distill-Llama-8B needs roughly 16 GB of system RAM plus 8 GB of GPU memory for inference.\n",
    "\n",
    "Run `nvidia-smi` and check the Memory-Usage column to see how much GPU memory is free."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Basic model configuration\n",
    "max_seq_length = 2048  # maximum sequence length\n",
    "dtype = None  # None lets Unsloth pick automatically (bfloat16 on supported GPUs)\n",
    "load_in_4bit = False  # set True to load a 4-bit quantized model and save VRAM\n",
    "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==((====))==  Unsloth 2025.2.9: Fast Llama patching. Transformers: 4.48.3.\n",
      "   \\\\   /|    GPU: NVIDIA GeForce RTX 4090 D. Max memory: 23.643 GB. Platform: Linux.\n",
      "O^O/ \\_/ \\    Torch: 2.6.0+cu124. CUDA: 8.9. CUDA Toolkit: 12.4. Triton: 3.2.0\n",
      "\\        /    Bfloat16 = TRUE. FA [Xformers = 0.0.29.post3. FA2 = False]\n",
      " \"-____-\"     Free Apache license: http://github.com/unslothai/unsloth\n",
      "Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "0c5eeee50f094dd49910e9a550d02ac9",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "autodl-fs/DeepSeek-R1-Distill-Llama-8B does not have a padding token! Will use pad_token = <|finetune_right_pad_id|>.\n"
     ]
    }
   ],
   "source": [
    "# DeepSeek-R1-Distill-Llama-8B\n",
    "model, tokenizer = FastLanguageModel.from_pretrained(\n",
    "    model_name = \"autodl-fs/DeepSeek-R1-Distill-Llama-8B\",\n",
    "    max_seq_length = max_seq_length,\n",
    "    dtype = dtype,\n",
    "    load_in_4bit = load_in_4bit,\n",
    "    device_map={\"\": device},  # load all parameters onto the chosen device\n",
    ")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "# print(model)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "# print(tokenizer)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Switch the model to inference mode"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "# FastLanguageModel.for_inference(model) "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Test question-answering inference"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# # Ask a question\n",
    "# question = \"2. (5 分) 已知复数 $z=\\\\frac{\\\\sqrt{3}+i}{(1-\\\\sqrt{3} i)^{2}}, \\\\bar{z}$ 是 $z$ 的共轭复数, 则 $z\\\\cdot\\\\bar{z}=(\\\\quad)$\\nA. $\\\\frac{1}{4}$\\nB. $\\\\frac{1}{2}$\\nC. 1\\nD. 2\\n\"\n",
    "# # Tokenize the question into token indices\n",
    "# inputs = tokenizer([question], return_tensors=\"pt\").to(\"cuda\")\n",
    "# print(inputs)\n",
    "# # Run the model to generate an answer\n",
    "# outputs = model.generate(\n",
    "#     input_ids=inputs.input_ids,\n",
    "#     max_new_tokens=1200,\n",
    "#     use_cache=True,\n",
    "# )\n",
    "# # The generated output is also a sequence of token indices\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# # Decode the answer token indices back into text\n",
    "# response = tokenizer.batch_decode(outputs)\n",
    "# print(response[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 3. GAOKAO Dataset Processing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import os\n",
    "from datasets import Dataset\n",
    "# import wandb"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Define the training prompt template"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_prompt_style = \"\"\"\n",
    "    ### 提示:\n",
    "    你是一个对于解答高考题目有丰富经验的专家，现在有人问你关于{}。\n",
    "    请回答下面的问题，在回答问题之前请给出逐步的推理过程。\n",
    "\n",
    "    ### 问题：\n",
    "    {}\n",
    "\n",
    "    ### 回答：\n",
    "    <think>\n",
    "    {}\n",
    "    </think>\n",
    "    <answer>\n",
    "    {}\n",
    "    </answer>\n",
    "    \"\"\""
   ]
  },
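  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check of the template (toy values, purely hypothetical): `str.format` fills the four `{}` slots in order with keywords, question, chain of thought, and answer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Fill the template with toy values to confirm the slot order (hypothetical example)\n",
    "demo = train_prompt_style.format(\"Demo_MCQs\", \"1 + 1 = ?\", \"Add the two ones to get 2.\", \"2\")\n",
    "print(demo)"
   ]
  },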
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "def formatting_prompts_func(data):\n",
    "    # Fill the prompt template for each example and append the EOS token,\n",
    "    # so the fine-tuned model learns to stop generating.\n",
    "    EOS_TOKEN = tokenizer.eos_token\n",
    "    keywords = data[\"keywords\"]\n",
    "    inputs = data[\"Question\"]\n",
    "    cots = data[\"Complex_CoT\"]\n",
    "    outputs = data[\"Response\"]\n",
    "    texts = []\n",
    "    for k, i, c, o in zip(keywords, inputs, cots, outputs):\n",
    "        text = train_prompt_style.format(k, i, c, o) + EOS_TOKEN\n",
    "        texts.append(text)\n",
    "    return {\n",
    "        \"text\": texts,\n",
    "    }"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "def read_GAOKAO_data(root_folder):\n",
    "    # Walk every file under root_folder and return raw training data with\n",
    "    # four fields: 'keywords', 'Question', 'Complex_CoT', 'Response'.\n",
    "\n",
    "    train_data = []\n",
    "    for foldername, _, filenames in os.walk(root_folder):\n",
    "        for filename in filenames:\n",
    "            if filename.endswith('.json'):  # only process JSON files\n",
    "                file_path = os.path.join(foldername, filename)  # full file path\n",
    "                with open(file_path, 'r', encoding='utf-8') as file:\n",
    "                    data_dict = json.load(file)  # parse the JSON file\n",
    "                    k = data_dict[\"keywords\"]\n",
    "                    examples = data_dict[\"example\"]\n",
    "                    for example in examples:\n",
    "                        q = example[\"question\"]\n",
    "\n",
    "                        # 'answer' is a list; join it into a single string\n",
    "                        ans = example[\"answer\"]\n",
    "                        ans = \", \".join(ans)\n",
    "\n",
    "                        cot = example[\"analysis\"]\n",
    "                        tmp_dict = {\"keywords\": k,\n",
    "                                    \"Question\": q,\n",
    "                                    \"Complex_CoT\": cot,\n",
    "                                    \"Response\": ans\n",
    "                                    }\n",
    "                        train_data.append(tmp_dict)\n",
    "\n",
    "    return train_data"
   ]
  },
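  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`read_GAOKAO_data` assumes each JSON file carries a top-level `keywords` string and an `example` list whose items hold `question`, `answer` (a list), and `analysis`. A minimal synthetic file illustrating that layout (hypothetical data, not from the real GAOKAO dump):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Write one tiny file in the assumed schema and parse it back (illustration only)\n",
    "sample = {\n",
    "    \"keywords\": \"Demo_MCQs\",\n",
    "    \"example\": [\n",
    "        {\"question\": \"1 + 1 = ?\", \"answer\": [\"2\"], \"analysis\": \"Add the two ones.\"}\n",
    "    ],\n",
    "}\n",
    "os.makedirs(\"GAOKAO_demo\", exist_ok=True)\n",
    "with open(\"GAOKAO_demo/sample.json\", \"w\", encoding=\"utf-8\") as f:\n",
    "    json.dump(sample, f, ensure_ascii=False)\n",
    "print(read_GAOKAO_data(\"GAOKAO_demo\"))"
   ]
  },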
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "【解答】 答案 A． was/were  doing，表示过去的某个时间点或时间段正在做某事\n",
      "，根据句意，我没有读完简爱，我昨天一天一直在写家庭作业． 故选 A． \n",
      "【点评】\n",
      "\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "201e13748f6748a092ea47209679fe1d",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Map:   0%|          | 0/4047 [00:00<?, ? examples/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Quick functionality test\n",
    "\n",
    "root_folder = \"GAOKAO\"  # replace with your root folder path\n",
    "train_data = read_GAOKAO_data(root_folder)\n",
    "print(train_data[0][\"Complex_CoT\"])\n",
    "\n",
    "# Convert the plain list to a Hugging Face Dataset to use its processing utilities\n",
    "data_dict = {key: [item[key] for item in train_data] for key in train_data[0].keys()}\n",
    "# Build a Dataset object with Dataset.from_dict()\n",
    "train_data = Dataset.from_dict(data_dict)\n",
    "\n",
    "train_data = train_data.map(formatting_prompts_func, batched = True,)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "    ### 提示:\n",
      "    你是一个对于解答高考题目有丰富经验的专家，现在有人问你关于2010-2022_Political_Science_MCQs。\n",
      "    请回答下面的问题，在回答问题之前请给出逐步的推理过程。\n",
      "\n",
      "    ### 问题：\n",
      "    1．（ 3分）按照中国一东盟自由贸易协议， 成员国 90%的贸易商品实行零关税 。\n",
      "如果以前一件 10人民币元的 M商品出口到某东盟成员国 N国的关税为 5%，\n",
      "本外币间的汇率为 l：8.2010年该商品实行零关税， 中国生产 M商品的劳动\n",
      "生产率提高 25%，其他条件不变 ，则一件 M商品在实行零关税之前和之后出\n",
      "口到 N国的价格用 N国货币单位表示分别为（ 　　） \n",
      "A．80，84 B．84，80 C．84.64  D．84，100\n",
      "\n",
      "\n",
      "    ### 回答：\n",
      "    <think>\n",
      "    C正确，实行零关税前， 因为汇率为 1：8，关税为 5%，所以 M商品用\n",
      "N国货币表示价格为（ 10×8）×（1+5%）=84．实行零关税后，因为劳动生\n",
      "产率（社会劳动生产率 ）提高 25%，且零关税 ，所以价格为 （10/1.25）×8=64\n",
      "．故答案为 C； \n",
      "ABD均不正确，故排除。  \n",
      "故选： C。\n",
      "\n",
      "    </think>\n",
      "    <answer>\n",
      "    C\n",
      "    </answer>\n",
      "    <｜end▁of▁sentence｜>\n"
     ]
    }
   ],
   "source": [
    "print(train_data[0][\"text\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Wrap reading and preprocessing into a single function\n",
    "def get_train_data(root_folder):\n",
    "    train_data = read_GAOKAO_data(root_folder)\n",
    "\n",
    "    # Convert the list of dicts to a Hugging Face Dataset,\n",
    "    # so its built-in processing utilities are available\n",
    "    data_dict = {key: [item[key] for item in train_data] for key in train_data[0].keys()}\n",
    "    train_data = Dataset.from_dict(data_dict)\n",
    "\n",
    "    train_data = train_data.map(formatting_prompts_func, batched = True,)\n",
    "    return train_data\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "448cd7dee3774a86af14298cec6511fd",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Map:   0%|          | 0/4047 [00:00<?, ? examples/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "train_data = get_train_data(\"GAOKAO\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 4. Model Fine-Tuning"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "from trl import SFTTrainer\n",
    "from transformers import TrainingArguments\n",
    "from unsloth import is_bfloat16_supported"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Switch the model to fine-tuning mode (attach LoRA adapters)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Unsloth 2025.2.9 patched 32 layers with 32 QKV layers, 32 O layers and 32 MLP layers.\n"
     ]
    }
   ],
   "source": [
    "model = FastLanguageModel.get_peft_model(\n",
    "    model,\n",
    "    r=16,  # LoRA rank\n",
    "    target_modules=[\n",
    "        \"q_proj\",\n",
    "        \"k_proj\",\n",
    "        \"v_proj\",\n",
    "        \"o_proj\",\n",
    "        \"gate_proj\",\n",
    "        \"up_proj\",\n",
    "        \"down_proj\",\n",
    "    ],\n",
    "    lora_alpha=16,  # LoRA scaling factor\n",
    "    lora_dropout=0,\n",
    "    bias=\"none\",\n",
    "    use_gradient_checkpointing=\"unsloth\",  # True or \"unsloth\" for very long context\n",
    "    random_state=1290,\n",
    "    use_rslora=False,\n",
    "    loftq_config=None,\n",
    ")"
   ]
  },
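  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally verify how small the LoRA update is; `print_trainable_parameters()` comes from the underlying PEFT wrapper (assuming the Unsloth-wrapped model exposes it, as PEFT models normally do)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Report trainable vs. total parameter counts of the LoRA-wrapped model\n",
    "model.print_trainable_parameters()"
   ]
  },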
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Define the training hyperparameters"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "3ff779fc02454acab2c0c8e363ac1b2c",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Map (num_proc=2):   0%|          | 0/4047 [00:00<?, ? examples/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "trainer = SFTTrainer(\n",
    "    model=model,\n",
    "    tokenizer=tokenizer,\n",
    "    train_dataset=train_data,\n",
    "    dataset_text_field=\"text\",\n",
    "    max_seq_length=max_seq_length,\n",
    "    dataset_num_proc=2,\n",
    "    args=TrainingArguments(\n",
    "        per_device_train_batch_size=2,\n",
    "        num_train_epochs=6,\n",
    "        gradient_accumulation_steps=4,\n",
    "        warmup_steps=4,\n",
    "        learning_rate=2e-4,\n",
    "        fp16=not is_bfloat16_supported(),\n",
    "        bf16=is_bfloat16_supported(),\n",
    "        logging_steps=10,\n",
    "        optim=\"adamw_8bit\",\n",
    "        weight_decay=0.01,\n",
    "        lr_scheduler_type=\"linear\",\n",
    "        seed=1291,\n",
    "        output_dir=\"autodl-fs/outputs\",\n",
    "    ),\n",
    ")"
   ]
  },
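  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The total step count reported by the trainer follows from the settings above: 4047 examples with a per-device batch of 2 give 2024 batches per epoch, gradient accumulation of 4 turns those into 506 optimizer steps per epoch, and 6 epochs yield 3036 steps. A quick check:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "# batches per epoch -> optimizer steps per epoch -> total steps\n",
    "steps = math.ceil(math.ceil(4047 / 2) / 4) * 6\n",
    "print(steps)  # 3036"
   ]
  },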
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Log the training loss to Weights & Biases (wandb)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import wandb\n",
    "\n",
    "# Never hard-code your API key in a notebook; read it from the environment instead\n",
    "wandb.login(key=os.environ.get(\"WANDB_API_KEY\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "==((====))==  Unsloth - 2x faster free finetuning | Num GPUs = 1\n",
      "   \\\\   /|    Num examples = 4,047 | Num Epochs = 6\n",
      "O^O/ \\_/ \\    Batch size per device = 2 | Gradient Accumulation steps = 4\n",
      "\\        /    Total batch size = 8 | Total steps = 3,036\n",
      " \"-____-\"     Number of trainable parameters = 41,943,040\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='3036' max='3036' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [3036/3036 3:00:29, Epoch 6/6]\n",
       "    </div>\n",
       "    <table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       " <tr style=\"text-align: left;\">\n",
       "      <th>Step</th>\n",
       "      <th>Training Loss</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <td>10</td>\n",
       "      <td>2.305600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>20</td>\n",
       "      <td>1.679100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>30</td>\n",
       "      <td>1.319000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>40</td>\n",
       "      <td>1.288900</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>50</td>\n",
       "      <td>1.329400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>60</td>\n",
       "      <td>1.144500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>70</td>\n",
       "      <td>1.258300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>80</td>\n",
       "      <td>1.288900</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>90</td>\n",
       "      <td>1.250200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>100</td>\n",
       "      <td>1.152300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>110</td>\n",
       "      <td>1.128600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>120</td>\n",
       "      <td>1.064200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>130</td>\n",
       "      <td>1.361600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>140</td>\n",
       "      <td>0.992300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>150</td>\n",
       "      <td>1.082600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>160</td>\n",
       "      <td>1.025100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>170</td>\n",
       "      <td>1.147300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>180</td>\n",
       "      <td>0.938600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>190</td>\n",
       "      <td>1.140100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>200</td>\n",
       "      <td>1.080200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>210</td>\n",
       "      <td>1.156600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>220</td>\n",
       "      <td>1.314500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>230</td>\n",
       "      <td>1.127200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>240</td>\n",
       "      <td>1.168600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>250</td>\n",
       "      <td>1.049500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>260</td>\n",
       "      <td>1.044500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>270</td>\n",
       "      <td>1.062800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>280</td>\n",
       "      <td>1.019800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>290</td>\n",
       "      <td>1.044500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>300</td>\n",
       "      <td>0.928300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>310</td>\n",
       "      <td>1.122600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>320</td>\n",
       "      <td>1.091800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>330</td>\n",
       "      <td>1.079600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>340</td>\n",
       "      <td>1.173700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>350</td>\n",
       "      <td>1.116900</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>360</td>\n",
       "      <td>0.967200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>370</td>\n",
       "      <td>1.165000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>380</td>\n",
       "      <td>1.307400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>390</td>\n",
       "      <td>1.118200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>400</td>\n",
       "      <td>0.987100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>410</td>\n",
       "      <td>0.958900</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>420</td>\n",
       "      <td>1.095100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>430</td>\n",
       "      <td>1.017300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>440</td>\n",
       "      <td>0.964600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>450</td>\n",
       "      <td>0.961700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>460</td>\n",
       "      <td>1.063800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>470</td>\n",
       "      <td>0.992300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>480</td>\n",
       "      <td>1.024000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>490</td>\n",
       "      <td>1.127600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>500</td>\n",
       "      <td>1.175800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>510</td>\n",
       "      <td>1.041800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>520</td>\n",
       "      <td>0.898600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>530</td>\n",
       "      <td>0.819600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>540</td>\n",
       "      <td>0.888900</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>550</td>\n",
       "      <td>0.890300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>560</td>\n",
       "      <td>0.743000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>570</td>\n",
       "      <td>0.827300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>580</td>\n",
       "      <td>0.786600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>590</td>\n",
       "      <td>0.942200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>600</td>\n",
       "      <td>0.970300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>610</td>\n",
       "      <td>0.947900</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>620</td>\n",
       "      <td>0.975100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>630</td>\n",
       "      <td>0.963300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>640</td>\n",
       "      <td>0.858100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>650</td>\n",
       "      <td>0.824200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>660</td>\n",
       "      <td>0.877100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>670</td>\n",
       "      <td>0.815100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>680</td>\n",
       "      <td>0.821400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>690</td>\n",
       "      <td>0.902000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>700</td>\n",
       "      <td>0.897700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>710</td>\n",
       "      <td>0.841700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>720</td>\n",
       "      <td>0.823600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>730</td>\n",
       "      <td>0.929700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>740</td>\n",
       "      <td>0.940500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>750</td>\n",
       "      <td>1.058700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>760</td>\n",
       "      <td>0.875300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>770</td>\n",
       "      <td>0.906200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>780</td>\n",
       "      <td>0.914800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>790</td>\n",
       "      <td>0.795100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>800</td>\n",
       "      <td>0.826400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>810</td>\n",
       "      <td>0.819100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>820</td>\n",
       "      <td>0.750000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>830</td>\n",
       "      <td>0.806200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>840</td>\n",
       "      <td>0.987400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>850</td>\n",
       "      <td>0.851300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>860</td>\n",
       "      <td>0.892700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>870</td>\n",
       "      <td>0.821500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>880</td>\n",
       "      <td>0.977300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>890</td>\n",
       "      <td>0.813000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>900</td>\n",
       "      <td>0.750400</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>910</td>\n",
       "      <td>0.702700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>920</td>\n",
       "      <td>0.890100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>930</td>\n",
       "      <td>0.767200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>940</td>\n",
       "      <td>0.722800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>950</td>\n",
       "      <td>0.705300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>960</td>\n",
       "      <td>0.706100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>970</td>\n",
       "      <td>0.870100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>980</td>\n",
       "      <td>0.932700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>990</td>\n",
       "      <td>0.874100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1000</td>\n",
       "      <td>0.802000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1010</td>\n",
       "      <td>0.760600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1020</td>\n",
       "      <td>0.670700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1030</td>\n",
       "      <td>0.569000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1040</td>\n",
       "      <td>0.589500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1050</td>\n",
       "      <td>0.632800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1060</td>\n",
       "      <td>0.625100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1070</td>\n",
       "      <td>0.527700</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1080</td>\n",
       "      <td>0.604800</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1090</td>\n",
       "      <td>0.691300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1100</td>\n",
       "      <td>0.494600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1110</td>\n",
       "      <td>0.672000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1120</td>\n",
       "      <td>0.579000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1130</td>\n",
       "      <td>0.565300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1140</td>\n",
       "      <td>0.697900</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1150</td>\n",
       "      <td>0.677300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1160</td>\n",
       "      <td>0.608500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1170</td>\n",
       "      <td>0.627100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1180</td>\n",
       "      <td>0.549500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1190</td>\n",
       "      <td>0.661100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1200</td>\n",
       "      <td>0.596500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1210</td>\n",
       "      <td>0.546600</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1220</td>\n",
       "      <td>0.587000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1230</td>\n",
       "      <td>0.490500</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1240</td>\n",
       "      <td>0.621300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>1250</td>\n",
       "      <td>0.647200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>…</td>\n",
       "      <td>…</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>3010</td>\n",
       "      <td>0.179100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>3020</td>\n",
       "      <td>0.153100</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>3030</td>\n",
       "      <td>0.139600</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table><p>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "trainer_stats = trainer.train()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 5. Saving the Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "model_save = \"autodl-fs/Ds_Llama8B_GAOKAO\"\n",
    "# For a PEFT model this saves only the LoRA adapter weights, not the base model\n",
    "model.save_pretrained(model_save)\n",
    "tokenizer.save_pretrained(model_save)  # keep the tokenizer with the checkpoint"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Load the fine-tuned model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "from peft import PeftModel"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==((====))==  Unsloth 2025.2.9: Fast Llama patching. Transformers: 4.48.3.\n",
      "   \\\\   /|    GPU: NVIDIA GeForce RTX 4090 D. Max memory: 23.643 GB. Platform: Linux.\n",
      "O^O/ \\_/ \\    Torch: 2.6.0+cu124. CUDA: 8.9. CUDA Toolkit: 12.4. Triton: 3.2.0\n",
      "\\        /    Bfloat16 = TRUE. FA [Xformers = 0.0.29.post3. FA2 = False]\n",
      " \"-____-\"     Free Apache license: http://github.com/unslothai/unsloth\n",
      "Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "f0700aa1e85e4f779a0aa1f8f1fba3c4",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "./autodl-fs/DeepSeek-R1-Distill-Llama-8B does not have a padding token! Will use pad_token = <|finetune_right_pad_id|>.\n"
     ]
    }
   ],
   "source": [
    "# After fine-tuning, DeepSeek-R1-Distill-Llama-8B is better suited to Chinese Gaokao Q&A\n",
    "base_model, tokenizer = FastLanguageModel.from_pretrained(\n",
    "    model_name = \"./autodl-fs/DeepSeek-R1-Distill-Llama-8B\",\n",
    "    max_seq_length = max_seq_length,\n",
    "    dtype = dtype,\n",
    "    load_in_4bit = load_in_4bit,\n",
    "    device_map={\"\": device},  # load all parameters onto the specified device\n",
    ")\n",
    "\n",
    "\n",
    "# Load the LoRA adapter to apply the fine-tuned weight deltas\n",
    "model = PeftModel.from_pretrained(\n",
    "    base_model,\n",
    "    \"./autodl-fs/adapter\",\n",
    "    adapter_name=\"lora_adapter\"\n",
    ")\n"
   ]
  },
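  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally, once no further adapter swapping is needed, the LoRA weights can be folded into the base model for inference-only use. The commented sketch below uses PEFT's `merge_and_unload`; whether this is worthwhile depends on your deployment setup:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: merge the LoRA adapter into the base weights (PEFT API).\n",
    "# After this call `model` behaves like a plain transformers model and the\n",
    "# adapter can no longer be detached, so only do this for inference-only use.\n",
    "# model = model.merge_and_unload()"
   ]
  },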
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 6. Evaluating the Q&A Capability"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since no RLHF was performed, the model has little room for free-form generation. This section evaluates the answers in two ways: BLEU similarity against the reference answers, and a rubric prompt that asks GPT-4 to score each response (0-5)."
   ]
  },
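  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The exact GPT-4 scoring prompt is not shown in this notebook; the cell below is a minimal sketch of what such a 0-5 rubric prompt could look like (the wording, field names, and helper name are assumptions, not the prompt actually used):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical 0-5 rubric prompt for GPT-4 grading; adjust the wording as needed.\n",
    "SCORING_PROMPT = (\n",
    "    \"You are a strict grader for Chinese Gaokao answers.\\n\"\n",
    "    \"Score the candidate answer from 0 to 5 against the reference answer,\\n\"\n",
    "    \"judging accuracy, logic, scientific validity, completeness, and clarity.\\n\"\n",
    "    \"Reply with the score followed by a short rationale.\\n\\n\"\n",
    "    \"Question: {question}\\n\"\n",
    "    \"Reference answer: {reference}\\n\"\n",
    "    \"Candidate answer: {candidate}\\n\"\n",
    ")\n",
    "\n",
    "def build_scoring_prompt(question, reference, candidate):\n",
    "    \"\"\"Fill the rubric template; the result is sent to GPT-4 as the user message.\"\"\"\n",
    "    return SCORING_PROMPT.format(question=question, reference=reference, candidate=candidate)"
   ]
  },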
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import re\n",
    "import math\n",
    "import collections"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Use BLEU as the evaluation metric to compare the fine-tuned model's answers against the reference answers."
   ]
  },
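  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The variant implemented below scores a prediction of length $l_{pred}$ against a reference of length $l_{label}$ as\n",
    "\n",
    "$$\\mathrm{BLEU} = \\exp\\Big(\\min\\big(0,\\, 1 - \\tfrac{l_{label}}{l_{pred}}\\big)\\Big)\\prod_{n=1}^{k} p_n^{1/2^n},$$\n",
    "\n",
    "where $p_n$ is the clipped $n$-gram precision. The exponential factor is a brevity penalty for short predictions, and the $1/2^n$ exponents weight longer $n$-gram matches more heavily."
   ]
  },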
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def bleu(pred_text, label_text, k):\n",
    "    \"\"\"BLEU with a brevity penalty and geometrically weighted n-gram precisions (n = 1..k).\"\"\"\n",
    "    # Tokenize into runs of CJK characters, alphanumeric runs, or single punctuation marks\n",
    "    pred_tokens = re.findall(r'[\\u4e00-\\u9fa5]+|[a-zA-Z0-9]+|[^a-zA-Z0-9\\s]', pred_text)\n",
    "    label_tokens = re.findall(r'[\\u4e00-\\u9fa5]+|[a-zA-Z0-9]+|[^a-zA-Z0-9\\s]', label_text)\n",
    "    len_pred, len_label = len(pred_tokens), len(label_tokens)\n",
    "    if len_pred == 0:\n",
    "        return 0.0\n",
    "    # Brevity penalty: penalize predictions shorter than the reference\n",
    "    score = math.exp(min(0, 1 - len_label / len_pred))\n",
    "    for n in range(1, k + 1):\n",
    "        if len_pred - n + 1 <= 0:  # prediction shorter than n tokens: precision undefined\n",
    "            return 0.0\n",
    "        num_matches, label_subs = 0, collections.defaultdict(int)\n",
    "        # Count each n-gram in the reference\n",
    "        for i in range(len_label - n + 1):\n",
    "            label_subs[' '.join(label_tokens[i: i + n])] += 1\n",
    "        # Clipped matching of predicted n-grams against the reference counts\n",
    "        for i in range(len_pred - n + 1):\n",
    "            if label_subs[' '.join(pred_tokens[i: i + n])] > 0:\n",
    "                num_matches += 1\n",
    "                label_subs[' '.join(pred_tokens[i: i + n])] -= 1\n",
    "        score *= math.pow(num_matches / (len_pred - n + 1), math.pow(0.5, n))\n",
    "    return score"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Split the model's response into its `<think>` reasoning and `<answer>` segments"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_thk_ans(text):\n",
    "    pattern1 = r\"<think>\\s*(.*?)\\s*</think>\"\n",
    "    pattern2 = r\"<answer>\\s*(.*?)\\s*</answer>\"\n",
    "    match = re.search(pattern1, text, re.DOTALL)\n",
    "    # extract the <think> block\n",
    "    if match:\n",
    "        thk = match.group(1).strip()\n",
    "    else:\n",
    "        thk = None  \n",
    "    # extract the <answer> block\n",
    "    match = re.search(pattern2, text, re.DOTALL)\n",
    "    if match:\n",
    "        ans = match.group(1).strip()\n",
    "    else:\n",
    "        ans = None  \n",
    "        \n",
    "    return thk, ans"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(1) Math Q&A"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "    <think>\n",
      "    解: $\\because y=\\frac{x}{x+2}$,\n",
      "\n",
      "$\\therefore y^{\\prime}=\\frac{2}{(x+2)^{2}}$\n",
      "\n",
      "所以 $\\mathrm{k}=\\left.\\mathrm{y}^{\\prime}\\right|_{\\mathrm{x}=-1}=2$, 得切线的斜率为 2 , 所以 $\\mathrm{k}=2$;\n",
      "\n",
      "所以曲线 $y=f(x)$ 在点 $(-1,-1)$ 处的切线方程为:\n",
      "\n",
      "$y+1=2 \\times(x+1)$ ，即 $y=2 x+1$.\n",
      "\n",
      "故选: A.\n",
      "    </think>\n",
      "    <answer>\n",
      "    A\n",
      "    </answer>\n",
      "\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "question = \"3. (5 分) 曲线 $y=\\\\frac{x}{x+2}$ 在点 $(-1,-1)$ 处的切线方程为（ $）$\\nA. $y=2 x+1$\\nB. $y=2 x-1$\\nC. $y=-2 x-3$\\nD. $y=-2 x-2$\\n\"\n",
    "questype = \"2010-2022_Math_I_MCQs\"\n",
    "\n",
    "FastLanguageModel.for_inference(model)  # Unsloth has 2x faster inference!\n",
    "inputs = tokenizer([prompt_style.format(questype ,question, \"\")], return_tensors=\"pt\").to(\"cuda\")\n",
    "\n",
    "outputs = model.generate(\n",
    "    input_ids=inputs.input_ids,\n",
    "    attention_mask=inputs.attention_mask,\n",
    "    max_new_tokens=1200,\n",
    "    use_cache=True,\n",
    ")\n",
    "response = tokenizer.batch_decode(outputs)\n",
    "text = response[0].split(\"### 回答：\")[1]\n",
    "\n",
    "print(text)"
   ]
  },
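  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The inference cells split the decoded output with `response[0].split(\"### 回答：\")[1]`, which raises an `IndexError` whenever the marker is missing from the generation. A small defensive helper (a sketch; the name is illustrative) avoids that failure mode:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def extract_response(decoded, marker=\"### 回答：\"):\n",
    "    \"\"\"Return the text after the response marker, or the full text if the marker is absent.\"\"\"\n",
    "    _, sep, tail = decoded.partition(marker)\n",
    "    return tail if sep else decoded"
   ]
  },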
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tnk_bleu 1.000\n",
      "ans_bleu 1.000\n"
     ]
    }
   ],
   "source": [
    "thk, ans = get_thk_ans(text)\n",
    "thk_label = \"\"\"解: $\\\\because y=\\\\frac{x}{x+2}$, \n",
    "$\\\\therefore y^{\\\\prime}=\\\\frac{2}{(x+2)^{2}}$ \n",
    "所以 $\\\\mathrm{k}=\\\\left.\\\\mathrm{y}^{\\\\prime}\\\\right|_{\\\\mathrm{x}=-1}=2$, 得切线的斜率为 2 , 所以 $\\\\mathrm{k}=2$; \n",
    "所以曲线 $y=f(x)$ 在点 $(-1,-1)$ 处的切线方程为: \n",
    "$y+1=2 \\\\times(x+1)$ ，即 $y=2 x+1$. \n",
    "故选: A. \"\"\"\n",
    "ans_label=\"A\"\n",
    "\n",
    "print(f'tnk_bleu {bleu(thk, thk_label, k=4):.3f}')\n",
    "print(f'ans_bleu {bleu(ans, ans_label, k=1):.3f}')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Score: **5/5**\n",
    "\n",
    "**Rationale**\n",
    "\n",
    "The answer is accurate and detailed. It correctly computes the derivative of \\( y = \\frac{x}{x+2} \\) as \\( y' = \\frac{2}{(x+2)^2} \\), correctly obtains the slope \\( k = 2 \\) at \\((-1, -1)\\), then applies the point-slope form \\( y + 1 = 2(x + 1) \\) to reach the tangent line \\( y = 2x + 1 \\) and selects option A. The reasoning is clear, the steps are complete, and there are no errors, so the answer earns full marks."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(2) Physics Q&A"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "    <think>\n",
      "     解: A、任何物体都有保持原来运动状态的性质, 叫着惯性, 所以物体 抵抗运动状态变化的性质是惯性, 故 $\\mathrm{A}$ 正确;\n",
      "\n",
      "B、没有力作用，物体可以做匀速直线运动，故 B 错误;\n",
      "\n",
      "C、惯性是任何物体保持原来运动状态的性质, 行星在圆周轨道上会改变运动状态, 故 C 错误;\n",
      "\n",
      "D、运动的物体在不受力时，将保持匀速直线运动，故 D 正确;\n",
      "\n",
      "故选：AD。\n",
      "\n",
      "    </think>\n",
      "    <answer>\n",
      "    AD\n",
      "    </answer>\n",
      "    <｜end▁of▁sentence｜>\n"
     ]
    }
   ],
   "source": [
    "question = \"1. (3 分) 伽利略根据小球在斜面上运动的实验和理想实验, 提出了惯性的概 念, 从而奠定了牛顿力学的基础. 早期物理学家关于惯性有下列说法, 其中 正确的是（） A. 物体抵抗运动状态变化的性质是惯性 B. 没有力作用, 物体只能处于静止状态 C. 行星在圆周轨道上保持匀速率运动的性质是惯性 D. 运动物体如果没有受到力的作用, 将继续以同一速度沿同一直线运动\"\n",
    "questype = \"2010-2022_Physics_MCQs\"\n",
    "\n",
    "FastLanguageModel.for_inference(model)  # Unsloth has 2x faster inference!\n",
    "inputs = tokenizer([prompt_style.format(questype ,question, \"\")], return_tensors=\"pt\").to(\"cuda\")\n",
    "\n",
    "outputs = model.generate(\n",
    "    input_ids=inputs.input_ids,\n",
    "    attention_mask=inputs.attention_mask,\n",
    "    max_new_tokens=1200,\n",
    "    use_cache=True,\n",
    ")\n",
    "response = tokenizer.batch_decode(outputs)\n",
    "text = response[0].split(\"### 回答：\")[1]\n",
    "print(text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tnk_bleu 0.944\n",
      "ans_bleu 1.000\n"
     ]
    }
   ],
   "source": [
    "thk, ans = get_thk_ans(text)\n",
    "thk_label= \"\"\"解: A、任何物体都有保持原来运动状态的性质, 叫着惯性, 所以物体 抵抗运动状态变化的性质是惯性, 故 $\\\\mathrm{A}$ 正确;\n",
    "B、没有力作用，物体可以做匀速直线运动，故 B 错误; \n",
    "C、惯性是保持原来运动状态的性质, 圆周运动速度是改变的, 故 C 错误;\n",
    "D、运动的物体在不受力时，将保持匀速直线运动，故 D 正确; \n",
    "故选：AD。\"\"\"\n",
    "ans_label=\"AD\"\n",
    "\n",
    "print(f'tnk_bleu {bleu(thk, thk_label, k=4):.3f}')\n",
    "print(f'ans_bleu {bleu(ans, ans_label, k=1):.3f}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Score: **5/5**\n",
    "\n",
    "**Rationale**\n",
    "1. **Accuracy**: Fully correct. Options A and D accurately describe inertia, and the explanations of why B and C are wrong are clear.\n",
    "2. **Logic**: Each option is analyzed in turn with sound justification.\n",
    "3. **Scientific validity**: The analysis matches the physics definition of inertia (Newton's first law).\n",
    "4. **Completeness**: All options are covered and the correct final answer (AD) is given.\n",
    "5. **Clarity**: The language is concise and easy to follow.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(3) Chinese Language Q&A"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "    <think>\n",
      "     【解答】A．振振有词：理直气壮的样子。形容自以为理由很充分，说个不休。\n",
      "含贬义。用于学生上课表现不符合语境，感情色彩不对； B．浩如烟海：形容典籍、图书等极为丰富。现多形容海如烟海，书如海海如\n",
      "烟； C．电光火石：指闪电的光，燧石的火。比喻事物瞬息即逝。现也比喻时间、机会\n",
      "不能再来。用于张经理的话符合语境；  \n",
      "D．平分秋色：比喻双方各得一半，不分高低，表示平局。也指各得半成果。\n",
      "用于竞争中，符合语境。  \n",
      "故选： C。\n",
      "\n",
      "    </think>\n",
      "    <answer>\n",
      "    C\n",
      "    </answer>\n",
      "    <｜end▁of▁sentence｜>\n"
     ]
    }
   ],
   "source": [
    "question = \"7．（ 3分）下列各句中，加点的成语使用恰当的一项是（ 　　） A．他性格比较内向 ，平时沉默寡言 ，但是一到课堂上就变得振振有词 ，滔滔 不绝，所以他的课很受学生欢迎。 B．泰山几千年来都是文人墨客们向往的圣地 ，在浩如烟海的中华典籍中 ，留 下了众多颂扬泰山的诗词文章。 C．张经理语重心长的一席话 ，如电光火石 ，让小余心头淤积的阴霾顿时消散 ，再次燃气争创销售佳绩的激情。 D．迅速崛起的快递行业 ，经过几年的激烈竞争 ，大部分企业都已经转行或倒 闭了，市场上只剩他们几家平分秋色。 \"\n",
    "questype = \"2010-2022_Chinese_Lang_and_Usage_MCQs\"\n",
    "\n",
    "FastLanguageModel.for_inference(model)  # Unsloth has 2x faster inference!\n",
    "inputs = tokenizer([prompt_style.format(questype ,question, \"\")], return_tensors=\"pt\").to(\"cuda\")\n",
    "\n",
    "outputs = model.generate(\n",
    "    input_ids=inputs.input_ids,\n",
    "    attention_mask=inputs.attention_mask,\n",
    "    max_new_tokens=1200,\n",
    "    use_cache=True,\n",
    ")\n",
    "response = tokenizer.batch_decode(outputs)\n",
    "text = response[0].split(\"### 回答：\")[1]\n",
    "print(text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tnk_bleu 0.645\n",
      "ans_bleu 1.000\n"
     ]
    }
   ],
   "source": [
    "thk, ans = get_thk_ans(text)\n",
    "thk_label= \"\"\"A．振振有词：理直气壮的样子。形容自以为理由很充分，说个不休。 含贬义。用于学生上课表现不符合语境，感情色彩不对。 \n",
    "B．浩如烟海：形容典籍、图书等极为丰富。 C．电光火石：指闪电的光，燧石的火。比喻事物瞬息即逝。现多形容事物像闪 电和石火一样一瞬间就消逝。亦比喻行动迅速，出手先制。此处属于望文生义。 \n",
    "D．平分秋色：比喻双方各得一半，不分高低，表示平局。此处对象不是双方 ， 不合语境。\"\"\"\n",
    "ans_label=\"C\"\n",
    "\n",
    "print(f'tnk_bleu {bleu(thk, thk_label, k=4):.3f}')\n",
    "print(f'ans_bleu {bleu(ans, ans_label, k=1):.3f}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Score: **2/5**\n",
    "\n",
    "**Rationale:**\n",
    "1. **Accuracy**:\n",
    "   - The analysis of option A is correct: it notes that “振振有词” is pejorative and does not suit a description of a student's classroom performance.\n",
    "   - The analysis of option B is incomplete and garbled (“现多形容海如烟海，书如海海如烟”), and fails to explain the meaning of “浩如烟海” accurately.\n",
    "   - The analysis of option C is wrong: “电光火石” likens something to a flash of lightning or a spark from flint, i.e. gone in an instant, and does not suit describing the effect of someone's words.\n",
    "   - The analysis of option D is partially correct, but it does not point out why “平分秋色” fails in context (after the shakeout only a few companies remain, which is not an even two-way split).\n",
    "\n",
    "2. **Logic**:\n",
    "   - The reasoning is weak overall, especially the confused and inaccurate explanations of options B and C.\n",
    "   - The analysis of option D does not engage with the context in any depth.\n",
    "\n",
    "3. **Scientific validity**:\n",
    "   - The idiom glosses are imprecise; in particular the usage of “电光火石” and “平分秋色” does not match the given context.\n",
    "   - The explanation of option B lacks clarity and accuracy.\n",
    "\n",
    "4. **Completeness**:\n",
    "   - Although every option is addressed, several analyses are incomplete or wrong.\n",
    "   - The final answer (C) is wrong; the correct answer is B.\n",
    "\n",
    "5. **Clarity**:\n",
    "   - The wording is hard to follow, especially the explanation of option B.\n",
    "\n",
    "---\n",
    "\n",
    "**Suggested improvements:**\n",
    "1. **Option B**:\n",
    "   - Explain the meaning of “浩如烟海” clearly and note that it fits the context, e.g.: “浩如烟海: describes an extremely rich body of classics and books. Here it describes the vast number of Chinese classics, which fits the context.”\n",
    "2. **Option C**:\n",
    "   - It should point out that “电光火石” does not suit"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "（4）历史问答"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "    <think>\n",
      "     王安石认为 ，人的生理和心理方面的活动和人的形体联系在一起 ，突出\n",
      "了“形”的存在； B、C、D三项均突出了 “天地 ”或“万物 ”这一 “形”的存在。 A\n",
      "项是宋明 “心学 ”的观点，“心学 ”认为天地万物都在心中 ，这明显与王安石的思\n",
      "想相对立。故A项正确。  \n",
      "故选： A。\n",
      "\n",
      "    </think>\n",
      "    <answer>\n",
      "    A\n",
      "    </answer>\n",
      "    <｜end▁of▁sentence｜>\n"
     ]
    }
   ],
   "source": [
    "question = \"3．（ 4分）王安石提出 “形者，有生之本 ”，与之相对立的观点是（ 　　） A．“心外无物 ” B．“天地为万物之本 ” C．“夫形于天地之间者，物也 ” D．“舍天地则无以为道 ” \"\n",
    "questype = \"2010-2022_History_MCQs\"\n",
    "\n",
    "FastLanguageModel.for_inference(model)  # Unsloth has 2x faster inference!\n",
    "inputs = tokenizer([prompt_style.format(questype ,question, \"\")], return_tensors=\"pt\").to(\"cuda\")\n",
    "\n",
    "outputs = model.generate(\n",
    "    input_ids=inputs.input_ids,\n",
    "    attention_mask=inputs.attention_mask,\n",
    "    max_new_tokens=1200,\n",
    "    use_cache=True,\n",
    ")\n",
    "response = tokenizer.batch_decode(outputs)\n",
    "text = response[0].split(\"### 回答：\")[1]\n",
    "print(text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tnk_bleu 0.923\n",
      "ans_bleu 1.000\n"
     ]
    }
   ],
   "source": [
    "thk, ans = get_thk_ans(text)\n",
    "thk_label= \"\"\"王安石认为 ，人的生理和心理方面的活动和人的形体联系在一起 ，突出 了“形”的存在； B、C、D三项均突出了 “天地 ”或“万物 ”这一 “形”的存在。\n",
    "A 项是宋明 “心学 ”的观点，“心学 ”认为天地万物都在心中 ，这明显与王安石的思 想相对立。 故选： A。\"\"\"\n",
    "ans_label=\"A\"\n",
    "\n",
    "print(f'tnk_bleu {bleu(thk, thk_label, k=4):.3f}')\n",
    "print(f'ans_bleu {bleu(ans, ans_label, k=1):.3f}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "评分：**5/5**\n",
    "\n",
    " **评分理由：**\n",
    "1. **准确性**：\n",
    "   - 回答准确指出了王安石的观点（“形者，有生之本”）与“心外无物”（心学观点）的对立关系。\n",
    "   - 对选项B、C、D的分析正确，指出它们均强调“天地”或“万物”的存在，与王安石的观点并不对立。\n",
    "   - 最终答案（A）正确。\n",
    "\n",
    "2. **逻辑性**：\n",
    "   - 回答逻辑清晰，逐条分析了每个选项与王安石观点的关系。\n",
    "   - 对“心学”观点的解释简洁明了，突出了其与王安石思想的对立。\n",
    "\n",
    "3. **科学性**：\n",
    "   - 回答基于对王安石哲学思想和宋明心学的准确理解，符合历史与哲学背景。\n",
    "   - 对“形”与“心”的对立关系分析准确。\n",
    "\n",
    "4. **完整性**：\n",
    "   - 回答涵盖了所有选项，并给出了最终正确答案（A）。\n",
    "   - 对每个选项的分析都紧扣题目要求，没有遗漏或冗余。\n",
    "\n",
    "5. **表达清晰性**：\n",
    "   - 语言简洁明了，逻辑清晰，易于理解。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "（5）政治问答"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "    <think>\n",
      "     A：美国频频发起针对中国新能源产品的反倾销与反补贴调查，限制中国产 品进口，是为了阻碍中国的新能源产业发展，求之于美国的利益，故A符合题 意；\n",
      "\n",
      "B：太阳能电池板、风力利用级风塔属于资本密集型产业，而不是劳动密集型产 业，故B排除；\n",
      "\n",
      "C：减少中国新能源产品进口，会造成贸易逆差减少，但不是美国发起保护性贸易 进口措施的目的，故C排除；\n",
      "\n",
      "D：美国的新能源产业有亟需要中国的相关产品，而不是不需要，故D排除；\n",
      "\n",
      "故选：A。\n",
      "\n",
      "    </think>\n",
      "    <answer>\n",
      "    A\n",
      "    </answer>\n",
      "    <｜end▁of▁sentence｜>\n"
     ]
    }
   ],
   "source": [
    "question = \"4．（4分）2011年11月，美国发起了针对从中国进口的太阳能电池板的反倾销 与反补贴调查： 2012年1月，美国宣布对从中国进口的风力发电设备 ﹣﹣应 用级风塔发起反倾销与反补贴调查。美国频频发起针对中国新能源产品的反 倾销与反补贴调查，限制中国产品进口，主要是因为（ 　　） A．美国欲以贸易保护措施扶持国内新能源产业发展 B．新能源产业是劳动密集型产业，美国需要其提供就业岗位 C．美国需要通过减少中国新能源产品进口才能缩小与中国的贸易逆差 D．美国的新能源产业能过剩，不需要从中国大量进口相关产品 \"\n",
    "questype = \"2010-2022_Political_Science_MCQs\"\n",
    "\n",
    "FastLanguageModel.for_inference(model)  # Unsloth has 2x faster inference!\n",
    "inputs = tokenizer([prompt_style.format(questype ,question, \"\")], return_tensors=\"pt\").to(\"cuda\")\n",
    "\n",
    "outputs = model.generate(\n",
    "    input_ids=inputs.input_ids,\n",
    "    attention_mask=inputs.attention_mask,\n",
    "    max_new_tokens=1200,\n",
    "    use_cache=True,\n",
    ")\n",
    "response = tokenizer.batch_decode(outputs)\n",
    "text = response[0].split(\"### 回答：\")[1]\n",
    "print(text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 69,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tnk_bleu 0.562\n",
      "ans_bleu 1.000\n"
     ]
    }
   ],
   "source": [
    "thk, ans = get_thk_ans(text)\n",
    "thk_label= \"\"\"美国频频发起针对中国新能源产品的反倾销与反补贴调查 ，限制中国产 品进口，主要是因为美国经济不断下滑 ，为了发展其相关产业而采取的措施 ， 故A正确。\n",
    "太阳能电池板、风力利用级风塔属于资本密集型产业，而不是劳动密集型产 业， 故B错误。 减少中国新能源产品进口，会造成贸易逆差减少，但不是美国发起保护性贸易 进口措施的目的，故C排除。 新能源产业属于新兴产业，不存在产能过剩， 故D错误。 故选： A。\"\"\"\n",
    "ans_label=\"A\"\n",
    "\n",
    "print(f'tnk_bleu {bleu(thk, thk_label, k=4):.3f}')\n",
    "print(f'ans_bleu {bleu(ans, ans_label, k=1):.3f}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "评分：**4/5**\n",
    "\n",
    "**评分理由：**\n",
    "1. **准确性**：\n",
    "   - 回答正确指出了选项A是主要原因，符合题意。\n",
    "   - 对选项B、C、D的分析基本正确，但部分表述不够严谨。\n",
    "\n",
    "2. **逻辑性**：\n",
    "   - 回答逻辑清晰，逐条分析了每个选项的合理性。\n",
    "   - 对选项B、C、D的排除理由基本合理。\n",
    "\n",
    "3. **科学性**：\n",
    "   - 回答基于对国际贸易保护措施的理解，符合经济学常识。\n",
    "   - 对“资本密集型产业”与“劳动密集型产业”的区分正确。\n",
    "\n",
    "4. **完整性**：\n",
    "   - 回答涵盖了所有选项，并给出了最终正确答案（A）。\n",
    "   - 对每个选项的分析都紧扣题目要求，没有遗漏。\n",
    "\n",
    "5. **表达清晰性**：\n",
    "   - 语言简洁明了，逻辑清晰，易于理解。\n",
    "\n",
    "---\n",
    "\n",
    " **改进建议：**\n",
    "1. **选项A**：\n",
    "   - 可以进一步补充美国通过贸易保护措施扶持国内新能源产业的具体动机。例如：\n",
    "     - “美国通过反倾销与反补贴调查，限制中国新能源产品进口，旨在保护国内新能源产业免受竞争压力，促进其发展。”\n",
    "2. **选项B**：\n",
    "   - 可以更详细地解释为什么太阳能电池板和风力发电设备属于资本密集型产业。例如：\n",
    "     - “太阳能电池板和风力发电设备的生产需要大量资金投入和技术支持，属于资本密集型产业，而非劳动密集型产业。”\n",
    "3. **选项C**：\n",
    "   - 可以更清晰地说明贸易逆差与贸易保护措施的关系。例如：\n",
    "     - “虽然减少中国新能源产品进口可能缩小贸易逆差，但这并非美国发起贸易保护措施的主要目的。”\n",
    "4. **选项D**：\n",
    "   - 可以更准确地说明美国新能源产业的需求情况。例如：\n",
    "     - “美国新能源产业仍需进口部分中国产品以满足需求，因此‘不需要从中国大量进口相关产品’的说法不准确。”"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "（6）英语问答"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 72,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "<think>\n",
      "【解析】本文为科普文类说明文，介绍了北极熊的生存现状。\n",
      "【61题详解】\n",
      "考查同位语从句。根据句子结构分析可知，主句为there be句型，且结构完整，空格后为同位语从句，解释说明中心词evidence的内容，故填that。\n",
      "【62题详解】\n",
      "考查副词用法。根据句意和结构分析可知，此处用副词poorly修饰谓语动词has been studied，意为“研究很少”。故填poorly。\n",
      "【63题详解】\n",
      "考查介词用法。此处tracking polar bear populations作Modern methods的定语，用of 连接，“methods of doing sth.”,意为“…的方法”，构成固定结构。或者意为“对于跟踪北极熊的方法”用for。故填of/for。\n",
      "【64题详解】\n",
      "考查非谓语动词。主系表结构之后，常用不定式作原因或目的状语，句意：跟踪北极熊的现代方法只是在二十世纪八十年代以来开始采用，并且在如此大区域内持续采用是昂贵的，故此处用to perform。\n",
      "【65题详解】\n",
      "考查时态。根据上下文语境，尤其是时间状语in recent years可知，主句用现在完成时态，故填have reported。\n",
      "【66题详解】\n",
      "考查名词。根据其前不定冠词和其后的同位语从句可知，空格处为名词形式，故填belief。\n",
      "【67题详解】\n",
      "考查非谓语动词。根据其前介词by可知，此处用动名词主动形式，故填noting。\n",
      "【68题详解】\n",
      "考查形容词比较级。根据其后than they actually are可知，此处为形容词的比较级，故填higher。\n",
      "【69题详解】\n",
      "考查定冠词。此处为特指，意为“出于19个已知的北极熊亚种群”，故填the。\n",
      "【70题详解】\n",
      "考查主谓一致。根据three are declining，此处数词six作主语，代指前文中的“polar bear subpopulations”，故用复数谓语，一般现在时，故填are。\n",
      "    </think>\n",
      "    <answer>\n",
      "    【答案】61. that 62. poorly 63. of/for 64. to perform 65. have reported 66. belief 67. noting 68. higher 69. the 70. are \n",
      "    </answer>\n",
      "\n"
     ]
    }
   ],
   "source": [
    "question = \"阅读下面短文，在空白处填入1个适当的单词或括号内单词的正确形式。 The polar bear is found in the Arctic Circle and some big land masses as far south as Newfoundland. While they are rare north of 88°,there is evidence ___61___ they range all the way across the Arctic, and as far south as James Bay in Canada. It is difficult to figure out a global population of polar bears as much of the range has been ___62___ (poor) studied; however, biologists calculate that there are about 20,000-25,000 polar bears worldwide. Modem methods ___63___ tracking polar bear populations have been employed only since the mid-1980s, and are expensive ___64___ (perform) consistently over a large area. In recent years some Inuit people in Nunayut ___65___ (report) increases in bear sightings around human settlements, leading to a ___66___ (believe) that populations are increasing. Scientists have responded by ___67___ (note) that hungry bears may be congregating(聚集) around human settlements, leading to the illusion(错觉) that populations are ___68___ (high) than they actually are. Of ___69___ nineteen recognized polar bear subpopulations, three are declining, six ___70___ (be) stable, one is increasing, and nine lack enough data. \"\n",
    "questype = \"2014-2022_English_Language_Cloze_Passage\"\n",
    "\n",
    "FastLanguageModel.for_inference(model)  # Unsloth has 2x faster inference!\n",
    "inputs = tokenizer([prompt_style.format(questype ,question, \"\")], return_tensors=\"pt\").to(\"cuda\")\n",
    "\n",
    "outputs = model.generate(\n",
    "    input_ids=inputs.input_ids,\n",
    "    attention_mask=inputs.attention_mask,\n",
    "    max_new_tokens=1200,\n",
    "    use_cache=True,\n",
    ")\n",
    "response = tokenizer.batch_decode(outputs)\n",
    "text = response[0].split(\"### 回答：\")[1]\n",
    "print(text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tnk_bleu 0.987\n",
      "ans_bleu 1.000\n"
     ]
    }
   ],
   "source": [
    "thk, ans = get_thk_ans(text)\n",
    "thk_label= \"\"\"【解析】 本文为科普文类说明文，介绍了北极熊的生存现状。 \n",
    "【61题详解】 考查同位语从句。根据句子结构分析可知，主句为there be句型，且结构完整，空格后为同位语从句，解释说明中心词evidence的内容，故填that。 \n",
    "【62题详解】 考查副词用法。根据句意和结构分析可知，此处用副词poorly修饰谓语动词has been studied，意为“研究很少”。故填poorly。 \n",
    "【63题详解】 考查介词用法。此处tracking polar bear populations作Modern methods的定语，用of 连接，“methods of doing sth.”,意为“…的方法”，构成固定结构。或者意为“对于跟踪北极熊的方法”用for。故填of/for。\n",
    "【64题详解】 考查非谓语动词。主系表结构之后，常用不定式作原因或目的状语，句意：跟踪北极熊的现代方法只是在二十世纪八十年代以来开始采用，并且在如此大区域内持续采用是昂贵的，故此处用to perform。 \n",
    "【65题详解】 考查时态。根据上下文语境，尤其是时间状语in recent years可知，主句用现在完成时态，故填have reported。 \n",
    "【66题详解】 考查名词。根据其前不定冠词和其后的同位语从句可知，空格处为名词形式，故填belief。\n",
    "【67题详解】 考查非谓语动词。根据其前介词by可知，此处用动名词主动形式，故填noting。\n",
    "【68题详解】 考查形容词比较级。根据其后than they actually are可知，此处为形容词的比较级，故填higher。 \n",
    "【69题详解】 考查定冠词。此处为特指，意为“在已知的19个北极熊亚种群中”，故填the。\n",
    "【70题详解】 考查主谓一致。根据three are declining，此处数词six作主语，代指前文中的“polar bear subpopulations”，故用复数谓语，一般现在时，故填are。 \"\"\"\n",
    "ans_label=\" 【答案】61. that 62. poorly 63. of/for 64. to perform 65. have reported 66. belief 67. noting 68. higher 69. the 70. are \"\n",
    "\n",
    "print(f'tnk_bleu {bleu(thk, thk_label, k=4):.3f}')\n",
    "print(f'ans_bleu {bleu(ans, ans_label, k=1):.3f}')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "评分：**5/5**\n",
    "\n",
    "**评分理由：**\n",
    "1. **准确性**：\n",
    "   - 每个空格的答案均正确，且解析详细、准确。\n",
    "   - 对语法点（如同位语从句、副词用法、介词用法、非谓语动词、时态、名词、形容词比较级、定冠词、主谓一致）的解释清晰且符合语法规则。\n",
    "\n",
    "2. **逻辑性**：\n",
    "   - 解析逻辑清晰，逐题分析，结合上下文语境和语法规则，得出正确答案。\n",
    "   - 对每个空格的分析都紧扣题目要求，没有遗漏或冗余。\n",
    "\n",
    "3. **科学性**：\n",
    "   - 解析基于对英语语法的准确理解，符合语言学规则。\n",
    "   - 对固定搭配（如“methods of doing sth.”）和语法结构（如“there is evidence that”）的解释准确。\n",
    "\n",
    "4. **完整性**：\n",
    "   - 解析涵盖了所有空格，并给出了详细的解释。\n",
    "   - 对每个空格的分析都完整且具体。\n",
    "\n",
    "5. **表达清晰性**：\n",
    "   - 语言简洁明了，逻辑清晰，易于理解。\n",
    "   - 解析中使用了专业术语（如同位语从句、非谓语动词等），但解释通俗易懂。\n",
    "\n",
    "---\n",
    "\n",
    "**改进建议：**\n",
    "1. **63题**：\n",
    "   - 可以进一步说明“methods of doing sth.”和“methods for doing sth.”的区别，以增强解析的深度。例如：\n",
    "     - “methods of doing sth.”强调方法本身，而“methods for doing sth.”强调方法的用途。\n",
    "2. **65题**：\n",
    "   - 可以补充说明“in recent years”与现在完成时的搭配关系，以帮助读者更好地理解时态选择。例如：\n",
    "     - “时间状语‘in recent years’通常与现在完成时连用，表示从过去某一时刻持续到现在的动作或状态。”\n"
   ]
  }
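  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-question checks above can be wrapped in a loop that runs every case and averages the two BLEU scores. This is a sketch only: `eval_cases` is a hypothetical list, and the loop reuses the notebook's own `model`, `tokenizer`, `prompt_style`, `get_thk_ans`, and `bleu`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical batch evaluation over (questype, question, thk_label, ans_label) tuples.\n",
    "eval_cases = []  # fill with the cases evaluated above\n",
    "\n",
    "thk_scores, ans_scores = [], []\n",
    "for questype, question, thk_label, ans_label in eval_cases:\n",
    "    inputs = tokenizer([prompt_style.format(questype, question, \"\")], return_tensors=\"pt\").to(\"cuda\")\n",
    "    outputs = model.generate(input_ids=inputs.input_ids,\n",
    "                             attention_mask=inputs.attention_mask,\n",
    "                             max_new_tokens=1200, use_cache=True)\n",
    "    text = tokenizer.batch_decode(outputs)[0].split(\"### 回答：\")[1]\n",
    "    thk, ans = get_thk_ans(text)\n",
    "    thk_scores.append(bleu(thk, thk_label, k=4))\n",
    "    ans_scores.append(bleu(ans, ans_label, k=1))\n",
    "\n",
    "if eval_cases:\n",
    "    print(f\"mean thk_bleu {sum(thk_scores)/len(thk_scores):.3f}\")\n",
    "    print(f\"mean ans_bleu {sum(ans_scores)/len(ans_scores):.3f}\")"
   ]
  }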
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
