{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### XTuner (a model fine-tuning toolkit)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Environment Requirements\n",
    "\n",
    "* Python == 3.10\n",
    "\n",
    "#### Documentation\n",
    "\n",
    "https://xtuner.readthedocs.io/zh-cn/latest/"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Installation\n",
    "\n",
    "* From PyPI:<br>\n",
    "  pip install -U 'xtuner[deepspeed]'\n",
    "\n",
    "* From source (recommended):<br>\n",
    "  git clone https://github.com/InternLM/xtuner.git<br>\n",
    "  cd xtuner<br>\n",
    "  pip install -e '.[all]'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Supported Models\n",
    "\n",
    "1. The config folder contains configuration files for every supported model.\n",
    "2. Find the file matching your model and copy it into your model folder.\n",
    "3. Edit the copied file to match your actual setup:\n",
    "    * data_files: change to the path of your own dataset\n",
    "    * batch_size: training batch size\n",
    "    * max_epochs: number of training epochs\n",
    "    * save_steps: checkpoint-saving interval\n",
    "    * evaluation_inputs: paste in your validation prompts\n",
    "4. Dataset format: a JSON file whose top level is a list of samples; each\n",
    "   sample's \"conversation\" list holds one or more input/output turns:\n",
    "\n",
    "    [\n",
    "        {\n",
    "            \"conversation\": [\n",
    "                {\n",
    "                    \"input\": \"input text\",\n",
    "                    \"output\": \"output text\"\n",
    "                }\n",
    "            ]\n",
    "        },\n",
    "        {\n",
    "            \"conversation\": [\n",
    "                {\n",
    "                    \"input\": \"input text\",\n",
    "                    \"output\": \"output text\"\n",
    "                },\n",
    "                {\n",
    "                    \"input\": \"input text\",\n",
    "                    \"output\": \"output text\"\n",
    "                }\n",
    "            ]\n",
    "        }\n",
    "    ]\n",
    "\n",
    "5. Part 2 (the model settings) usually stays as-is; the parts that may need\n",
    "   changing are covered in step 6.\n",
    "6. In the quantization_config dict: if you are only using LoRA (no quantization),\n",
    "   set it to None. In the lora dict, r is the rank of the low-rank matrices, and\n",
    "   lora_alpha is typically 2 * r (relevant when doing QLoRA).\n",
    "7. In Part 3, in the alpaca_en dict, set:<br>\n",
    "   dataset = dict(type=load_dataset, path=\"json\", data_files=data_files)<br>\n",
    "   dataset_map_fn = None"
   ]
  },
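  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough sketch (every path and value below is a placeholder assumption, not taken from a real config), the edited fields from steps 3, 6, and 7 might look like this inside the copied config file:\n",
    "\n",
    "```python\n",
    "# Step 3: data and training schedule\n",
    "data_files = ['./data/train.json']  # path to your own dataset\n",
    "batch_size = 1                      # training batch size\n",
    "max_epochs = 3                      # number of training epochs\n",
    "save_steps = 500                    # checkpoint-saving interval\n",
    "evaluation_inputs = ['example prompt 1', 'example prompt 2']\n",
    "\n",
    "# Step 6: plain LoRA instead of QLoRA\n",
    "quantization_config = None\n",
    "lora = dict(r=64, lora_alpha=128)   # lora_alpha ~ 2 * r; other keys unchanged\n",
    "\n",
    "# Step 7: load the local JSON dataset directly\n",
    "alpaca_en = dict(\n",
    "    # ...other keys unchanged...\n",
    "    dataset=dict(type=load_dataset, path='json', data_files=data_files),\n",
    "    dataset_map_fn=None)\n",
    "```"
   ]
  },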
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Common Pitfalls\n",
    "\n",
    "* Missing libGL\n",
    "\n",
    "  Install it via apt-get (e.g. apt-get install libgl1-mesa-glx)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Training\n",
    "\n",
    "xtuner train ${config file}.py --work-dir ${output dir}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Weight Files\n",
    "\n",
    "Training produces .pth files (PyTorch weights),\n",
    "which must be converted to Hugging Face format.\n",
    "\n",
    "#### Weight Conversion\n",
    "\n",
    "xtuner convert pth_to_hf ${config file} ${pth file} ${save path}\n",
    "\n",
    "This yields an adapter model (the low-rank LoRA matrices for the base model).\n",
    "\n",
    "#### Model Merging\n",
    "\n",
    "xtuner convert merge ${base model path} ${adapter path} ${save path}\n"
   ]
  },
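  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Putting the steps together, an end-to-end run might look like the following (every path, file name, and checkpoint name below is a placeholder, not from a real run):\n",
    "\n",
    "```shell\n",
    "# 1. Train with the edited config; checkpoints go to ./work_dirs/my_run\n",
    "xtuner train my_config.py --work-dir ./work_dirs/my_run\n",
    "\n",
    "# 2. Convert a .pth checkpoint to a Hugging Face adapter\n",
    "xtuner convert pth_to_hf my_config.py ./work_dirs/my_run/iter_500.pth ./hf_adapter\n",
    "\n",
    "# 3. Merge the adapter into the base model\n",
    "xtuner convert merge ./base_model ./hf_adapter ./merged_model\n",
    "```"
   ]
  },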
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Chatting with the Model (not recommended; results are poor)\n",
    "\n",
    "xtuner chat ${model path}\n",
    "\n",
    "Type RESET to clear the chat history"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
