{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### llama.cpp (model conversion tool)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "#### Installation\n",
    "\n",
    "```bash\n",
    "git clone https://github.com/ggerganov/llama.cpp.git\n",
    "pip install -r llama.cpp/requirements.txt\n",
    "```\n",
    "\n",
    "#### Running the conversion\n",
    "\n",
    "Note: recent llama.cpp versions rename this script to `convert_hf_to_gguf.py` (underscores instead of hyphens).\n",
    "\n",
    "1. Without quantization\n",
    "\n",
    "```bash\n",
    "python llama.cpp/convert-hf-to-gguf.py ./Meta-Llama-3-8B-Instruct --outtype f16 --verbose --outfile Meta-Llama-3-8B-Instruct-gguf.gguf\n",
    "```\n",
    "\n",
    "2. With quantization (q8_0)\n",
    "\n",
    "```bash\n",
    "python llama.cpp/convert-hf-to-gguf.py ./Meta-Llama-3-8B-Instruct --outtype q8_0 --verbose --outfile Meta-Llama-3-8B-Instruct-gguf_q8_0.gguf\n",
    "```\n",
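    "\n",
    "The K-quant types (q2_k, q3_k_*, q4_k_*) are not emitted by the convert script itself, whose `--outtype` accepts only a handful of values (typically f32, f16, q8_0, and in newer versions bf16); they are usually produced afterward from an f16 GGUF with llama.cpp's quantize tool. A sketch, assuming the project has been built (the binary is `llama-quantize` in recent versions, `quantize` in older ones):\n",
    "\n",
    "```bash\n",
    "# Re-quantize the f16 GGUF from step 1 down to q4_k_m (paths assume the layout above)\n",
    "./llama.cpp/llama-quantize Meta-Llama-3-8B-Instruct-gguf.gguf Meta-Llama-3-8B-Instruct-gguf_q4_k_m.gguf q4_k_m\n",
    "```\n",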
    "\n",
    "#### What the options mean\n",
    "\n",
    "`--outtype` sets the output precision. Commonly seen GGUF quantization types:\n",
    "\n",
    "- q2_k: uses higher precision for selected tensors, base precision for the rest\n",
    "- q3_k_s, q3_k_m, q3_k_l: 3-bit K-quants with small/medium/large precision mixes\n",
    "- q4_0: the original quantization scheme, 4-bit precision\n",
    "- q4_1, q4_k_s, q4_k_m: different trade-offs between accuracy and inference speed, for balancing resource use\n",
    "- q8_0: 8-bit quantization, near-lossless and widely used\n",
    "- f16 and f32: no quantization"
   ]
  }
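,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough sanity check on the file sizes these conversions produce, the bits-per-weight of the simple block formats can be turned into a size estimate. A minimal sketch (the K-quant formats use mixed per-layer layouts and are omitted):\n",
    "\n",
    "```python\n",
    "# Rough GGUF size estimate: parameters * bits-per-weight / 8 bytes.\n",
    "# Bits per weight for the simple block formats (32 weights per block):\n",
    "#   q4_0: 32 x 4-bit weights + one f16 scale -> (32*4 + 16)/32 = 4.5\n",
    "#   q4_1: adds an f16 minimum per block      -> (32*4 + 32)/32 = 5.0\n",
    "#   q8_0: 32 x 8-bit weights + one f16 scale -> (32*8 + 16)/32 = 8.5\n",
    "BITS_PER_WEIGHT = {'f32': 32.0, 'f16': 16.0, 'q8_0': 8.5, 'q4_1': 5.0, 'q4_0': 4.5}\n",
    "\n",
    "def gguf_size_gib(n_params: float, outtype: str) -> float:\n",
    "    '''Approximate model file size in GiB for a given output type.'''\n",
    "    return n_params * BITS_PER_WEIGHT[outtype] / 8 / 2**30\n",
    "\n",
    "# An 8B-parameter model (e.g. Llama-3-8B) at different precisions:\n",
    "for t in ('f16', 'q8_0', 'q4_0'):\n",
    "    print(f'{t}: ~{gguf_size_gib(8e9, t):.1f} GiB')\n",
    "```"
   ]
  }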
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
