{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### LLaMA-Factory (a one-stop model toolkit)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Getting started with LLaMA-Factory\n",
    "\n",
    "1. Install the dependencies\n",
    "\n",
    "        git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git\n",
    "        cd LLaMA-Factory\n",
    "        pip install -e .\n",
    "\n",
    "2. Launch the web UI\n",
    "\n",
    "        llamafactory-cli webui\n",
    "\n",
    "3. Merge the model with a config file\n",
    "\n",
    "        llamafactory-cli export ./merge_model.yaml  # merge-model config file\n"
   ]
  },
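  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the merge config consumed by `llamafactory-cli export`, assuming a LoRA adapter is being merged into its base model. The keys follow the example configs shipped with LLaMA-Factory; the model name and paths below are placeholders, not values from this document:\n",
    "\n",
    "```yaml\n",
    "### model: base model plus the LoRA adapter to merge into it\n",
    "model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct\n",
    "adapter_name_or_path: saves/llama3-8b/lora/sft\n",
    "template: llama3\n",
    "finetuning_type: lora\n",
    "\n",
    "### export: where and how to write the merged weights\n",
    "export_dir: models/llama3_lora_sft\n",
    "export_size: 2\n",
    "export_device: cpu\n",
    "export_legacy_format: false\n",
    "```\n"
   ]
  },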
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### Environment requirements (important!)\n",
    "\n",
    "* High-end combo: lean toward newer versions and ignore the official requirements; it runs fine (recommended).\n",
    "\n",
    "    Version combo: CUDA 12.4 / PyTorch 2.5.1 / Python 3.10 / auto-gptq 0.7.1 / vllm 0.6.5\n",
    "\n",
    "    Install:\n",
    "\n",
    "            pip install auto-gptq==0.7.1  # be sure to pin the version\n",
    "\n",
    "            pip install vllm==0.6.5  # note: installing vllm automatically upgrades CUDA 12.1 to 12.4 and PyTorch 2.1.0 to 2.5.1; this is expected, ignore it\n",
    "\n",
    "* Conservative combo: strictly the officially supported stable versions. Likely more stable, but possibly lower performance.\n",
    "\n",
    "    Version combo: CUDA 12.1 / PyTorch 2.1.0 / Python 3.10 / auto-gptq 0.5.1 / vllm 0.5.4\n",
    "\n",
    "    Install:\n",
    "\n",
    "            pip install auto-gptq==0.5.1  # be sure to pin the version\n",
    "\n",
    "            pip install vllm==0.5.4  # note: installing vllm reinstalls CUDA 12.1 and PyTorch; this is expected, ignore it\n",
    "\n",
    "Note: after installation, check the PyTorch and CUDA versions, e.g.:\n",
    "\n",
    "    import torch\n",
    "    print(torch.__version__)    # PyTorch version\n",
    "    print(torch.version.cuda)   # CUDA version PyTorch was built with\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Quantization tools (common ways to speed up a model: pruning, distillation, quantization)\n",
    "\n",
    "        pip install auto-gptq\n",
    "        pip install -e '.[vllm]'\n",
    "\n",
    "#### Some auto-gptq pitfalls\n",
    "\n",
    "https://blog.csdn.net/qq_42755230/article/details/144427660\n",
    "\n"
   ]
  },
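  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the quantization idea concrete, here is a pure-Python sketch of symmetric int8 quantization. This is an illustration only, not auto-gptq's actual algorithm (GPTQ quantizes per group and minimizes layer output error):\n",
    "\n",
    "```python\n",
    "# Symmetric int8 quantization: map the largest |weight| to 127,\n",
    "# store integer codes plus one float scale per tensor.\n",
    "def quantize_int8(weights):\n",
    "    scale = max(abs(w) for w in weights) / 127.0\n",
    "    q = [round(w / scale) for w in weights]   # int codes in [-127, 127]\n",
    "    return q, scale\n",
    "\n",
    "def dequantize(q, scale):\n",
    "    return [x * scale for x in q]\n",
    "\n",
    "w = [0.12, -0.5, 0.33, 0.9]\n",
    "q, s = quantize_int8(w)\n",
    "err = max(abs(a - b) for a, b in zip(w, dequantize(q, s)))\n",
    "print(q, err)   # small reconstruction error, 4x smaller storage than fp32\n",
    "```\n"
   ]
  },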
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### LoRA\n",
    "\n",
    "    Uses low-rank matrices to represent the weight update, greatly reducing the number of trainable parameters.\n",
    "\n",
    "#### QLoRA\n",
    "\n",
    "    LoRA with the base model quantized, with only a small impact on model accuracy.\n",
    "    When LoRA is run with 8-bit or 4-bit quantization, it is best to raise the LoRA rank;\n",
    "    the scaling factor is typically set to twice the rank.\n",
    "\n",
    "#### Traditional quantization happens only at deployment time\n"
   ]
  },
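  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick back-of-the-envelope sketch of why LoRA cuts the trainable parameter count, using the rule of thumb above (scaling factor = twice the rank). The layer size 4096 and rank 16 are example values, not from this document:\n",
    "\n",
    "```python\n",
    "# LoRA replaces the full d x k weight update with low-rank factors\n",
    "# B (d x r) and A (r x k); the update B @ A is scaled by alpha / r.\n",
    "d, k, r = 4096, 4096, 16\n",
    "alpha = 2 * r                      # scaling factor = 2x the rank\n",
    "\n",
    "full_update_params = d * k         # training the full delta-W\n",
    "lora_params = r * (d + k)          # training only A and B\n",
    "print(lora_params / full_update_params)   # ~0.8% of the full update\n",
    "```\n"
   ]
  },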
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Model evaluation\n",
    "\n",
    "    1. Subjective evaluation\n",
    "\n",
    "        Ask a set of representative questions and judge the model by its answers.\n",
    "\n",
    "    2. Objective evaluation\n",
    "\n",
    "        The data is usually split into train, val, and test sets:\n",
    "        train: used for training\n",
    "        val: used for validation\n",
    "        test: usually used by the client to evaluate the final model\n",
    "\n",
    "        2.1. Metric descriptions\n",
    "\n",
    "            predict_bleu-4 (precision-oriented): measures generated-text quality; range [0, 1]\n",
    "            rouge-1 (recall-oriented): ignores order; measures the overlap between generated tokens and reference tokens\n",
    "            rouge-l: order-aware; measures the match between generated tokens and reference tokens via the longest common subsequence\n",
    "\n",
    "\n"
   ]
  },
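  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the rouge-1 recall idea described above: count how many reference unigrams also appear in the generated text, ignoring order. Real evaluations use a proper ROUGE library; the sentences here are made-up examples:\n",
    "\n",
    "```python\n",
    "from collections import Counter\n",
    "\n",
    "def rouge1_recall(generated, reference):\n",
    "    gen = Counter(generated.split())\n",
    "    ref = Counter(reference.split())\n",
    "    overlap = sum(min(n, gen[tok]) for tok, n in ref.items())\n",
    "    return overlap / sum(ref.values())   # recall over reference tokens\n",
    "\n",
    "score = rouge1_recall('the cat sat on the mat', 'the cat is on the mat')\n",
    "print(score)   # 5 of the 6 reference unigrams matched\n",
    "```\n"
   ]
  },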
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Accuracy, precision, recall, and F1: the concepts\n",
    "\n",
    "https://blog.csdn.net/lhxez6868/article/details/108150777"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "py310",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "name": "python",
   "version": "3.10.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
