{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "fe98a9e5-4646-47d0-9445-19bf22ef3a54",
   "metadata": {},
   "source": [
     "# <center>Ch3 QLoRA Fine-Tuning Explained</center>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6f40a5fa",
   "metadata": {},
   "source": [
     "# Background"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8edc497d",
   "metadata": {},
   "source": [
     "### Why QLoRA?\n",
     "\n",
     "1. **Motivation**:\n",
     "   - As large language models (LLMs) keep growing, fine-tuning them demands enormous compute and GPU memory.\n",
     "   - For very large models, conventional fine-tuning is prohibitively expensive in both memory and compute.\n",
     "\n",
     "2. **Limitations of existing methods**:\n",
     "   - **LoRA**: reduces the number of trainable parameters via low-rank decomposition, but still has to load the full model weights into GPU memory.\n",
     "\n",
     "   Consider a concrete scenario:\n",
     "```\n",
     "Suppose you want to fine-tune a 65B-parameter model:\n",
     "\n",
     "   Full fine-tuning: 500GB+ of GPU memory\n",
     "   LoRA:             still ~130GB of GPU memory (the original model must be loaded)\n",
     "   QLoRA:            only ~48GB of GPU memory\n",
     "```\n",
     "The core problem:\n",
     "- LoRA reduces the trainable parameters,\n",
     "- but the original model weights must still be fully loaded onto the GPU,\n",
     "- and for very large models, loading alone already consumes too much memory."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cc0d934a",
   "metadata": {},
   "source": [
     "# 1. Comparing Fine-Tuning Approaches"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "de075258",
   "metadata": {},
   "source": [
    "<div align=center><img src=\"https://typora-photo1220.oss-cn-beijing.aliyuncs.com/DataAnalysis/muyan/image-20241125222513938.png\" width=100%></div>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "24e56cea",
   "metadata": {},
   "source": [
     "### 1. Full Finetuning\n",
     "- Base Model: 16-bit base model\n",
     "- No adapters\n",
     "- Optimizer State: 32-bit optimizer state stored for every parameter\n",
     "- Characteristics:\n",
     "  - All model parameters are updated\n",
     "  - Highest memory consumption\n",
     "  - Blue arrows show the parameter-update flow\n",
     "  - Green arrows show the gradient flow"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7e55a641",
   "metadata": {},
   "source": [
     "### 2. LoRA (Low-Rank Adapters)\n",
     "- Base Model: 16-bit base model\n",
     "- Adapters: 16-bit low-rank adapters added\n",
     "- Characteristics:\n",
     "  - Base model parameters stay frozen\n",
     "  - Only the adapter parameters are trained\n",
     "  - Each layer gets its own adapter\n",
     "  - Blue arrows show the parameter-update flow\n",
     "  - Green arrows show the gradient flow"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0d32fd6",
   "metadata": {},
   "source": [
     "### 3. QLoRA (Quantized LoRA)\n",
     "- Base Model: 4-bit quantized base model\n",
     "- Adapters: low-rank adapters added\n",
     "- Innovations:\n",
     "  - The base model is quantized to 4-bit\n",
     "  - A paged optimizer keeps optimizer state in CPU memory\n",
     "  - Pink arrows show optimizer state being paged between CPU and GPU\n",
     "  - Blue arrows show the parameter-update flow\n",
     "  - Green arrows show the gradient flow"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d5bcb2c7",
   "metadata": {},
   "source": [
     "### Key differences:\n",
     "\n",
     "| Aspect             | Full Finetuning          | LoRA                     | QLoRA                                  |\n",
     "|--------------------|--------------------------|--------------------------|----------------------------------------|\n",
     "| Memory efficiency  | Highest consumption      | Moderate consumption     | Lowest consumption                     |\n",
     "| Parameter updates  | All parameters           | Adapter parameters only  | Adapter parameters only, plus paging   |\n",
     "| Model precision    | 16-bit                   | 16-bit                   | 4-bit base model + 16-bit adapters     |\n",
     "| Optimizer state    | Entirely on GPU          | Entirely on GPU          | Paged between CPU and GPU              |\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4dc0fdb7",
   "metadata": {},
   "source": [
     "### 4. How QLoRA Relates to LoRA\n",
     "\n",
     "- **LoRA**: reduces the parameter count through low-rank decomposition; well suited to efficient fine-tuning of small and mid-sized models.\n",
     "- **QLoRA**: combines LoRA's low-rank decomposition with 4-bit quantization; especially suited to fine-tuning very large models on consumer hardware.\n",
     "\n",
     "In short:\n",
     "```\n",
     "QLoRA = LoRA + quantization\n",
     "- LoRA reduces the trainable parameters\n",
     "- Quantization compresses the original model weights\n",
     "```"
   ]
  },
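  {
   "cell_type": "markdown",
   "id": "1a2b3c4d",
   "metadata": {},
   "source": [
    "To make the relationship concrete, here is a toy sketch of the LoRA forward pass in plain Python: a frozen weight matrix `W` is augmented by a trainable low-rank product `B @ A`, whose rank `r` is much smaller than the matrix dimensions. All names and numbers below are illustrative, not taken from any real model.\n",
    "\n",
    "```python\n",
    "def matmul(m, v):\n",
    "    # multiply a matrix (list of rows) by a vector\n",
    "    return [sum(a * b for a, b in zip(row, v)) for row in m]\n",
    "\n",
    "W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 base weight (4 params)\n",
    "A = [[0.5, 0.5]]               # trainable down-projection, rank r = 1\n",
    "B = [[0.25], [0.5]]            # trainable up-projection\n",
    "\n",
    "def lora_forward(x):\n",
    "    base = matmul(W, x)                  # frozen path\n",
    "    low_rank = matmul(B, matmul(A, x))   # trainable path: B @ (A @ x)\n",
    "    return [b + l for b, l in zip(base, low_rank)]\n",
    "\n",
    "print(lora_forward([1.0, 1.0]))  # [1.25, 1.5]\n",
    "```\n",
    "\n",
    "Only `A` and `B` would receive gradients; `W` stays fixed. QLoRA keeps exactly this setup and additionally stores `W` in 4-bit."
   ]
  },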
  {
   "cell_type": "markdown",
   "id": "f8fbf70c",
   "metadata": {},
   "source": [
     "### What does the Q stand for?\n",
     "\n",
     "- **Q** stands for **Quantization**. QLoRA's core innovation is combining quantization with LoRA to optimize memory and compute efficiency.\n",
     "- Model weights are converted from a high-precision format (e.g., FP16) to a low-precision one (e.g., 4-bit)\n",
     "- This drastically reduces resource usage\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "53dd3d11",
   "metadata": {},
   "source": [
     "### How LoRA works\n",
     "```\n",
     "Original model (kept in FP16)\n",
     "|    weight matrix A (16-bit)    |\n",
     "|    weight matrix B (16-bit)    |    +    low-rank matrices (trainable)\n",
     "|    weight matrix C (16-bit)    |             ↑\n",
     "|             ...                |        only this part is trained\n",
     "```\n",
     "\n",
     "### How QLoRA works\n",
     "```\n",
     "Original model (quantized to 4-bit)\n",
     "|    weight matrix A (4-bit)     |\n",
     "|    weight matrix B (4-bit)     |    +    low-rank matrices (trainable)\n",
     "|    weight matrix C (4-bit)     |             ↑\n",
     "|             ...                |        only this part is trained\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "40e4b19c",
   "metadata": {},
   "source": [
     "Training process:\n",
     "- Both LoRA and QLoRA train only the low-rank matrices\n",
     "- The original model weights are frozen in both cases\n",
     "- The only difference is the storage format of the original model\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d4471759",
   "metadata": {},
   "source": [
     "### Benefits of QLoRA\n",
     "\n",
     "1. **Better memory efficiency**:\n",
     "   - 4-bit quantization dramatically lowers GPU memory usage.\n",
     "   - A 65B-parameter model can be fine-tuned on a single 48GB GPU, and a 33B model on a 24GB consumer GPU.\n",
     "   - Less dependence on high-end hardware, so more researchers and developers can fine-tune large models.\n",
     "   - The quantized model also has a much smaller footprint for large-scale deployment.\n",
     "```\n",
     "   The same 65B model:\n",
     "   FP16:            130GB\n",
     "   4-bit quantized: 32.5GB\n",
     "```\n",
     "2. **Preserved model quality**:\n",
     "- The novel NF4 quantization format\n",
     "- Almost no loss in model performance\n",
     "3. **Lower hardware barrier**:\n",
     "- Large models can be fine-tuned on a single consumer GPU\n",
     "- More researchers can take part in large-model development"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "341c41fd",
   "metadata": {},
   "source": [
     "### Drawbacks of QLoRA\n",
     "\n",
     "1. **Precision loss**:\n",
     "   - Quantization can introduce some loss of accuracy, although NF4 is designed to minimize this effect.\n",
     "\n",
     "2. **Added complexity**:\n",
     "   - Extra quantization and dequantization steps make the implementation more complex.\n",
     "\n",
     "3. **Training stability**:\n",
     "   - Hyperparameters must be tuned carefully to keep training stable."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "943f9cf0",
   "metadata": {},
   "source": [
     "### 5. Practical Application Scenarios\n",
     "\n",
     "Good fits:\n",
     "```\n",
     "1. Individual developers fine-tuning large models\n",
     "2. Research groups with limited budgets\n",
     "3. Deploying large models on edge devices\n",
     "```\n",
     "\n",
     "Poor fits:\n",
     "```\n",
     "1. Settings with abundant compute resources\n",
     "2. Settings with extremely tight inference-latency requirements\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ddb794aa",
   "metadata": {},
   "source": [
     "## Learning QLoRA"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "505b8a71",
   "metadata": {},
   "source": [
     "### 1. Quantization Basics\n",
     "- The basic principle of quantization\n",
     "- Characteristics of the different precision formats (FP32/FP16/INT8/INT4)\n",
     "- Quantization error and rounding strategies\n",
     "- Quantization calibration\n",
     "\n",
     "### 2. QLoRA Quantization Techniques\n",
     "NF4 (4-bit NormalFloat):\n",
     "- Non-uniform quantization levels\n",
     "- Better preserves the weight distribution\n",
     "\n",
     "Double quantization:\n",
     "- Quantizes the quantization constants themselves\n",
     "- Saves additional memory\n",
     "\n",
     "### 3. Memory Management\n",
     "Paged optimizer:\n",
     "- Uses CPU memory intelligently\n",
     "- Effectively extends GPU memory capacity\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bc68f225",
   "metadata": {},
   "source": [
     "# 2. Fundamental Concepts"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d92a5348",
   "metadata": {},
   "source": [
     "## 2.1 What Is Quantization?"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d71bdb52",
   "metadata": {},
   "source": [
     "Quantization converts high-precision values (such as floating-point numbers) into low-precision ones (such as integers) to reduce storage and compute requirements. It can significantly lower a model's memory footprint and computational cost, and in some cases also speed up inference.  \n",
     "Note: quantization only goes from higher precision to lower precision:  \n",
     "FP32 -> FP16 -> 8-bit -> 4-bit  \n",
     "It cannot be reversed; the discarded precision is lost."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9011442f",
   "metadata": {},
   "source": [
     "## 2.2 Understanding Precision"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "190456ba",
   "metadata": {},
   "source": [
     "### 1. FP32 (32-bit floating point)\n",
     "\n",
     "Layout:\n",
     "[1 sign bit][8 exponent bits][23 mantissa bits]\n",
     "\n",
     "Range:\n",
     "- Smallest (normal) magnitude: ±1.175494351 × 10^-38\n",
     "- Largest magnitude: ±3.402823466 × 10^38\n",
     "- Precision: about 7 decimal digits\n",
     "\n",
     "Example:\n",
     "3.14159265359 -> 0x40490FDB"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bb2eb4c2",
   "metadata": {},
   "source": [
     "A bit-level walkthrough of this number:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "350a02be",
   "metadata": {},
   "source": [
     "**Sign bit**\n",
     "\n",
     "```\n",
     "The sign takes 1 bit:\n",
     "0 -> positive\n",
     "1 -> negative\n",
     "\n",
     "For example:\n",
     "3.14159  -> 0|...\n",
     "-3.14159 -> 1|...\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "823b006a",
   "metadata": {},
   "source": [
     "**Exponent bits**\n",
     "\n",
     "```\n",
     "# The FP32 exponent field is 8 bits, with raw values 0-255\n",
     "# A biased representation is used (bias = 127):\n",
     "actual exponent = raw exponent value - 127 (the bias)\n",
     "\n",
     "For example:\n",
     "raw exponent = 128 (binary 10000000)\n",
     "actual exponent = 128 - 127 = 1\n",
     "\n",
     "This makes negative exponents representable:\n",
     "raw exponent = 126\n",
     "actual exponent = 126 - 127 = -1\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f0ede921",
   "metadata": {},
   "source": [
     "**Mantissa bits**\n",
     "\n",
     "```\n",
     "# The FP32 mantissa field is 23 bits\n",
     "# An implicit leading 1. is assumed\n",
     "binary:       10010010000111111011011\n",
     "actual value: 1.10010010000111111011011\n",
     "            = 1 + 2^(-1) + 2^(-4) + 2^(-7) + ...\n",
     "            ≈ 1.5707963...\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c79572e5",
   "metadata": {},
   "source": [
     "**Putting it together**\n",
     "\n",
     "```\n",
     "value = (-1)^sign × 2^(actual exponent) × mantissa\n",
     "\n",
     "For 3.14159:\n",
     "sign = 0            -> (-1)^0 = 1\n",
     "actual exponent = 1 -> 2^1 = 2\n",
     "mantissa ≈ 1.5707963...\n",
     "\n",
     "value = 1 × 2 × 1.5707963...\n",
     "      = 3.14159...\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7be72737",
   "metadata": {},
   "source": [
     "**What 0x40490FDB means**\n",
     "```\n",
     "0x40490FDB is hexadecimal notation:\n",
     "0x marks a hexadecimal literal\n",
     "4049 0FDB is the actual value\n",
     "\n",
     "Converted to binary:\n",
     "0x4    0    4    9    0    F    D    B\n",
     "0100 0000 0100 1001 0000 1111 1101 1011\n",
     "\n",
     "Interpreting these 32 bits per the IEEE 754 standard:\n",
     "[0|10000000|10010010000111111011011]\n",
     " ↑    ↑           ↑\n",
     " |    |           |\n",
     "sign  exponent    mantissa\n",
     "(1 bit) (8 bits)  (23 bits)\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9ee900d5",
   "metadata": {},
   "source": [
     "**Step-by-step calculation**\n",
     "\n",
     "```\n",
     "1. Sign bit (1 bit):\n",
     "0 -> positive\n",
     "\n",
     "2. Exponent bits (8 bits):\n",
     "10000000 -> 128 (decimal)\n",
     "actual exponent = 128 - 127 (bias) = 1\n",
     "\n",
     "3. Mantissa bits (23 bits):\n",
     "10010010000111111011011\n",
     "1.10010010000111111011011 (binary)\n",
     "= 1.5707963...\n",
     "\n",
     "4. Final value:\n",
     "value = (-1)^sign × 2^(actual exponent) × mantissa\n",
     "= 1 × 2^1 × 1.5707963...\n",
     "= 3.14159265...\n",
     "```"
   ]
  },
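  {
   "cell_type": "markdown",
   "id": "2b3c4d5e",
   "metadata": {},
   "source": [
    "The manual decoding above can be double-checked with a few lines of Python, using only the standard `struct` module:\n",
    "\n",
    "```python\n",
    "import struct\n",
    "\n",
    "def fp32_from_bits(bits):\n",
    "    # reinterpret a 32-bit pattern as an IEEE 754 single-precision float\n",
    "    return struct.unpack(\">f\", bits.to_bytes(4, \"big\"))[0]\n",
    "\n",
    "def fp32_fields(bits):\n",
    "    sign = bits >> 31                          # 1 sign bit\n",
    "    exponent = ((bits >> 23) & 0xFF) - 127     # 8 exponent bits, bias 127 removed\n",
    "    mantissa = 1 + (bits & 0x7FFFFF) / 2**23   # 23 mantissa bits, implicit leading 1\n",
    "    return sign, exponent, mantissa\n",
    "\n",
    "print(fp32_from_bits(0x40490FDB))  # 3.1415927410125732\n",
    "print(fp32_fields(0x40490FDB))     # sign=0, exponent=1, mantissa≈1.5707964\n",
    "```\n",
    "\n",
    "Note the decoded value is 3.1415927..., not 3.14159265359: FP32 holds only about 7 decimal digits of the original number."
   ]
  },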
  {
   "cell_type": "markdown",
   "id": "d9f1f7d2",
   "metadata": {},
   "source": [
     "### 2. BF16 (Brain Floating Point)\n",
     "\n",
     "Layout:\n",
     "[1 sign bit][8 exponent bits][7 mantissa bits]\n",
     "\n",
     "Characteristics:\n",
     "- Keeps the same exponent range as FP32\n",
     "- Trades some precision for half the storage\n",
     "- Widely used in deep-learning training\n",
     "\n",
     "Example:\n",
     "3.14159265359 -> 0x4049\n",
     "Precision is about 2-3 decimal digits"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b3ca17a5",
   "metadata": {},
   "source": [
     "### 3. FP16 (16-bit floating point)\n",
     "\n",
     "Layout:\n",
     "[1 sign bit][5 exponent bits][10 mantissa bits]\n",
     "\n",
     "Range:\n",
     "- Smallest (normal) magnitude: ±6.10352 × 10^-5\n",
     "- Largest magnitude: ±65504\n",
     "- Precision: about 3-4 decimal digits\n",
     "\n",
     "Example:\n",
     "3.14159265359 -> 0x4248"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "caffa695",
   "metadata": {},
   "source": [
     "### 4. INT8 (8-bit integer)\n",
     "\n",
     "Layout:\n",
     "8-bit integer value\n",
     "\n",
     "Range:\n",
     "signed:   [-128, 127]\n",
     "unsigned: [0, 255]\n",
     "\n",
     "Quantization example:\n",
     "original value: 3.14159265359\n",
     "quantized:      25 (under some assumed mapping onto the 0-255 range)\n",
     "\n",
     "Extra values that must be stored:\n",
     "- scale (scaling factor)\n",
     "- zero_point (zero offset)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ca72e5b5",
   "metadata": {},
   "source": [
     "### 5. INT4 (4-bit integer)\n",
     "\n",
     "Layout:\n",
     "4-bit integer value\n",
     "\n",
     "Range:\n",
     "signed:   [-8, 7]\n",
     "unsigned: [0, 15]\n",
     "\n",
     "QLoRA's NF4 format:\n",
     "a special set of non-uniform quantization levels, e.g. (simplified grid used throughout this chapter; the real NF4 codebook defines 16 levels):\n",
     "[-1, -0.7, -0.3, -0.1, 0, 0.1, 0.3, 0.7, 1]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fdb2a43b",
   "metadata": {},
   "source": [
     "### Precision comparison\n",
     "\n",
     "```\n",
     "original value: 3.14159265359\n",
     "\n",
     "FP32: 3.14159265359  # full precision\n",
     "BF16: 3.141          # ~3 digits\n",
     "FP16: 3.1416         # ~4 digits\n",
     "INT8: 3.14           # ~2 digits\n",
     "INT4: 3.0            # ~1 digit\n",
     "\n",
     "# bits needed to store the same number\n",
     "FP32: 32 bits -> 0x40490FDB\n",
     "BF16: 16 bits -> 0x4049\n",
     "FP16: 16 bits -> 0x4248\n",
     "INT8: 8 bits  -> 25\n",
     "INT4: 4 bits  -> 7\n",
     "```"
   ]
  },
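  {
   "cell_type": "markdown",
   "id": "3c4d5e6f",
   "metadata": {},
   "source": [
    "The 16-bit rows can be verified directly. Python's `struct` module supports IEEE 754 half precision via the `\"e\"` format, and BF16 can be emulated by keeping only the top 16 bits of the FP32 pattern (a sketch; real BF16 conversion typically rounds rather than truncates):\n",
    "\n",
    "```python\n",
    "import struct\n",
    "\n",
    "pi = 3.14159265359\n",
    "\n",
    "# round-trip through IEEE 754 half precision (FP16)\n",
    "fp16 = struct.unpack(\"e\", struct.pack(\"e\", pi))[0]\n",
    "print(fp16)  # 3.140625 -- only a few significant digits survive\n",
    "\n",
    "# emulate BF16 by truncating the FP32 bit pattern to its top 16 bits\n",
    "fp32_bits = struct.unpack(\">I\", struct.pack(\">f\", pi))[0]\n",
    "bf16 = struct.unpack(\">f\", ((fp32_bits >> 16) << 16).to_bytes(4, \"big\"))[0]\n",
    "print(bf16)  # 3.140625 -- the same nearest value here, by coincidence\n",
    "```\n"
   ]
  },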
  {
   "cell_type": "markdown",
   "id": "1a604d11",
   "metadata": {},
   "source": [
     "### Which format to use\n",
     "\n",
     "```\n",
     "# Scientific computing, high precision needed\n",
     "Use FP32\n",
     "\n",
     "# Model training\n",
     "Use BF16/FP16\n",
     "- BF16: more stable training (wider range)\n",
     "- FP16: slightly better precision\n",
     "\n",
     "# Model inference\n",
     "Use INT8/INT4\n",
     "- INT8: standard deployment\n",
     "- INT4: resource-constrained settings\n",
     "\n",
     "# QLoRA training\n",
     "Base model: INT4 (NF4)\n",
     "LoRA layers: FP16\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6eaadfbf",
   "metadata": {},
   "source": [
     "## 2.3 Quantization Error and Rounding"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a7d04327",
   "metadata": {},
   "source": [
     "### 1. What is quantization error?\n",
     "\n",
     "```python\n",
     "# Suppose we have an FP16 weight value\n",
     "original = 0.37\n",
     "\n",
     "# With 4-bit quantization, only a small set of values is representable\n",
     "# (the simplified 9-level grid used throughout this chapter)\n",
     "levels = [-1.0, -0.7, -0.3, -0.1, 0, 0.1, 0.3, 0.7, 1.0]\n",
     "\n",
     "# 0.37 must be mapped to the nearest available value\n",
     "quantized = 0.3\n",
     "\n",
     "# Quantization error\n",
     "error = original - quantized\n",
     "#     = 0.37 - 0.3\n",
     "#     = 0.07  # this is the quantization error\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a95eb53d",
   "metadata": {},
   "source": [
     "### 2. Why does quantization error matter?\n",
     "\n",
     "```\n",
     "In a neural network:\n",
     "1. The error on any single weight may be small\n",
     "   weight_error = 0.07\n",
     "\n",
     "2. But errors accumulate during the forward pass\n",
     "   layer1_error = weight_error * input_value\n",
     "   layer2_error = layer1_error * next_weight\n",
     "   ...\n",
     "\n",
     "3. Accumulated error can lead to:\n",
     "- degraded model performance\n",
     "- unstable training\n",
     "- biased predictions\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0657fc41",
   "metadata": {},
   "source": [
     "### Rounding"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "390c49fc",
   "metadata": {},
   "source": [
     "### 1. How does QLoRA handle rounding?\n",
     "\n",
     "```python\n",
     "import random\n",
     "\n",
     "# Conventional rounding (round-to-nearest)\n",
     "def nearest_rounding(value, quantized_values):\n",
     "    return min(quantized_values, key=lambda q: abs(q - value))\n",
     "\n",
     "# For example:\n",
     "# value = 0.37\n",
     "# nearest = 0.3  # because 0.3 is the closest available level\n",
     "\n",
     "# Stochastic rounding\n",
     "def stochastic_rounding(value, lower, upper):\n",
     "    distance_to_lower = value - lower\n",
     "\n",
     "    # the distances determine the rounding probabilities\n",
     "    prob_upper = distance_to_lower / (upper - lower)\n",
     "    prob_lower = 1 - prob_upper\n",
     "\n",
     "    # randomly pick one of the two neighboring levels\n",
     "    return random.choices(\n",
     "        [lower, upper],\n",
     "        weights=[prob_lower, prob_upper]\n",
     "    )[0]\n",
     "\n",
     "# For example:\n",
     "# value = 0.37, lower = 0.3, upper = 0.7\n",
     "# may round to 0.3 or 0.7, with probabilities based on distance\n",
     "```"
   ]
  },
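  {
   "cell_type": "markdown",
   "id": "4d5e6f70",
   "metadata": {},
   "source": [
    "The key property of stochastic rounding is that it is unbiased: averaged over many draws, the rounded values recover the original. A minimal, self-contained demonstration (the parameters are illustrative):\n",
    "\n",
    "```python\n",
    "import random\n",
    "\n",
    "def stochastic_round(value, lower, upper):\n",
    "    # round up with probability proportional to closeness to the upper level\n",
    "    p_upper = (value - lower) / (upper - lower)\n",
    "    return upper if random.random() < p_upper else lower\n",
    "\n",
    "random.seed(0)  # reproducible demo\n",
    "samples = [stochastic_round(0.37, 0.3, 0.7) for _ in range(100_000)]\n",
    "mean = sum(samples) / len(samples)\n",
    "print(mean)  # close to 0.37, whereas round-to-nearest always gives 0.3\n",
    "```\n"
   ]
  },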
  {
   "cell_type": "markdown",
   "id": "580943fd",
   "metadata": {},
   "source": [
     "### 2. Why this rounding scheme?\n",
     "\n",
     "```python\n",
     "# Consider the following case\n",
     "original_weights = [0.37, 0.34, 0.36, 0.35]\n",
     "\n",
     "# With round-to-nearest\n",
     "nearest_result = [0.3, 0.3, 0.3, 0.3]  # every value collapses to 0.3\n",
     "# Problem: the original distribution information is lost entirely\n",
     "\n",
     "# With stochastic rounding\n",
     "stochastic_result = [0.3, 0.3, 0.7, 0.3]  # one possible outcome\n",
     "# Advantages:\n",
     "# 1. The original distribution is preserved in a statistical sense\n",
     "# 2. Systematic bias is avoided\n",
     "# 3. It helps training stability\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "85e4e4b0",
   "metadata": {},
   "source": [
     "Advantages of this design:\n",
     "1. Preserved statistics: stochastic rounding helps maintain the weight distribution\n",
     "2. No accumulating bias: systematic error is reduced\n",
     "3. Training stability: better gradient behavior\n",
     "4. Memory efficiency: model quality is maintained under extreme compression\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "11e44989",
   "metadata": {},
   "source": [
     "## 2.4 Quantization Calibration"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cff13a81",
   "metadata": {},
   "source": [
     "### 1. What is quantization calibration?\n",
     "Calibration means analyzing the data before compressing it, in order to choose the most suitable compression scheme.\n",
     "\n",
     "### 2. Why is calibration needed?\n",
     "\n",
     "```python\n",
     "# Suppose one layer of a network has these weights\n",
     "weights = [\n",
     "    0.001,  # small weights near 0\n",
     "    0.002,\n",
     "    0.003,\n",
     "    0.3,    # medium-sized weights\n",
     "    0.5,\n",
     "    0.7,\n",
     "    0.95,   # large weights near 1\n",
     "    0.98\n",
     "]\n",
     "\n",
     "# 4-bit quantization without calibration:\n",
     "# naively split the 0-1 range into 16 equal steps\n",
     "naive_result = [\n",
     "    0.0,    # the small values all collapse to 0\n",
     "    0.0,\n",
     "    0.0,\n",
     "    0.3,\n",
     "    0.5,\n",
     "    0.7,\n",
     "    0.9,    # fine distinctions among the large values are lost too\n",
     "    0.9\n",
     "]\n",
     "\n",
     "# 4-bit quantization with calibration:\n",
     "# quantization levels are placed according to the weight distribution\n",
     "calibrated_result = [\n",
     "    0.001,  # small values preserved\n",
     "    0.002,\n",
     "    0.003,\n",
     "    0.3,\n",
     "    0.5,\n",
     "    0.7,\n",
     "    0.95,   # distinctions among the large values preserved\n",
     "    0.98\n",
     "]\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "08478474",
   "metadata": {},
   "source": [
     "### 3. A sketch of the calibration process\n",
     "\n",
     "```python\n",
     "# Pseudocode sketch; the helper functions are illustrative placeholders\n",
     "\n",
     "# 1. Inspect the distribution of the data\n",
     "def analyze_weights(model):\n",
     "    weights = model.get_weights()\n",
     "    return {\n",
     "        \"min\": min(weights),\n",
     "        \"max\": max(weights),\n",
     "        \"mean\": mean(weights),\n",
     "        \"main_range\": find_main_range(weights)\n",
     "    }\n",
     "\n",
     "# 2. Adjust the quantization scheme to the distribution\n",
     "def calibrate_quantization(analysis):\n",
     "    if mostly_concentrated_near_zero(analysis):\n",
     "        return use_dense_zero_quantization()\n",
     "    elif widely_spread(analysis):\n",
     "        return use_uniform_quantization()\n",
     "    else:\n",
     "        return use_custom_quantization()\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3c374bda",
   "metadata": {},
   "source": [
     "### 4. What calibration buys you\n",
     "\n",
     "```\n",
     "Benefits of calibration:\n",
     "\n",
     "1. Important information is preserved\n",
     "original:     [0.001, 0.002, 0.003]\n",
     "uncalibrated: [0.0, 0.0, 0.0]        # information lost\n",
     "calibrated:   [0.001, 0.002, 0.003]  # differences kept\n",
     "\n",
     "2. Quantization error is reduced\n",
     "original:     [0.98, 0.99, 1.0]\n",
     "uncalibrated: [1.0, 1.0, 1.0]    # large error\n",
     "calibrated:   [0.98, 0.99, 1.0]  # small error\n",
     "\n",
     "3. Model accuracy improves\n",
     "- more faithful weight representation\n",
     "- better model performance\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c5126124",
   "metadata": {},
   "source": [
     "# 3. QLoRA Concepts"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f4c9a4dd",
   "metadata": {},
   "source": [
     "## 3.1 NF4 Quantization"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1e4bce27",
   "metadata": {},
   "source": [
     "### 1. NF4 basics\n",
     "\n",
     "```python\n",
     "# Conventional 4-bit integer quantization (INT4)\n",
     "INT4_POINTS = [-8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7]\n",
     "# Characteristic: uniform, linear spacing\n",
     "\n",
     "# QLoRA's NF4 quantization (simplified 9-level grid; real NF4 has 16 levels)\n",
     "NF4_POINTS = [\n",
     "    -1.0, -0.7, -0.3, -0.1,   # negative region\n",
     "    0,                        # zero\n",
     "    0.1, 0.3, 0.7, 1.0        # positive region\n",
     "]\n",
     "# Characteristic: non-uniform spacing, optimized for normally distributed weights:\n",
     "# 1. more levels where values are common (near 0)\n",
     "# 2. fewer levels where values are rare (far from 0)\n",
     "```"
   ]
  },
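  {
   "cell_type": "markdown",
   "id": "5e6f7081",
   "metadata": {},
   "source": [
    "A minimal sketch of how such a grid is used: nearest-point lookup onto the simplified 9-level list above (the real NF4 codebook in `bitsandbytes` has 16 levels and is applied per block, but the idea is the same):\n",
    "\n",
    "```python\n",
    "NF4_POINTS = [-1.0, -0.7, -0.3, -0.1, 0.0, 0.1, 0.3, 0.7, 1.0]\n",
    "\n",
    "def quantize_nf4(w):\n",
    "    # snap a normalized weight in [-1, 1] to the nearest grid point\n",
    "    return min(NF4_POINTS, key=lambda q: abs(q - w))\n",
    "\n",
    "print([quantize_nf4(w) for w in [0.06, 0.22, 0.8, -0.03, -0.6]])\n",
    "# [0.1, 0.3, 0.7, 0.0, -0.7]\n",
    "```\n"
   ]
  },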
  {
   "cell_type": "markdown",
   "id": "236af6db",
   "metadata": {},
   "source": [
     "### 2. Why NF4?\n",
     "\n",
     "```\n",
     "Neural-network weights are typically normally distributed:\n",
     "most weights cluster around 0\n",
     "|         ***          |\n",
     "|       *******        |\n",
     "|     ***********      |\n",
     "|  ******************  |\n",
     "  -1 ------0------ +1\n",
     "\n",
     "Problems with plain INT4:\n",
     "1. Uniform spacing is a poor fit for a normal distribution\n",
     "2. Not enough precision in the central region (near 0)\n",
     "3. Too many levels wasted on rare large values\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "33888b1b",
   "metadata": {},
   "source": [
     "### 3. How NF4 is designed\n",
     "\n",
     "```python\n",
     "# Illustrative sketch (not the actual bitsandbytes implementation);\n",
     "# find_nearest stands in for a nearest-point lookup\n",
     "class NF4Quantization:\n",
     "    def __init__(self):\n",
     "        # 1. Non-uniform placement of quantization levels\n",
     "        self.quant_points = {\n",
     "            \"near_zero\": [-0.1, 0, 0.1],     # dense sampling\n",
     "            \"middle\": [-0.3, 0.3],           # medium spacing\n",
     "            \"far\": [-1.0, -0.7, 0.7, 1.0]    # coarse spacing\n",
     "        }\n",
     "\n",
     "    def quantize(self, weight):\n",
     "        # 2. Magnitude-based level selection\n",
     "        if abs(weight) < 0.15:\n",
     "            # fine quantization for small values\n",
     "            return find_nearest(weight, self.quant_points[\"near_zero\"])\n",
     "        elif abs(weight) < 0.5:\n",
     "            # medium precision for mid-sized values\n",
     "            return find_nearest(weight, self.quant_points[\"middle\"])\n",
     "        else:\n",
     "            # coarse quantization for large values\n",
     "            return find_nearest(weight, self.quant_points[\"far\"])\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "febaafef",
   "metadata": {},
   "source": [
     "\n",
     "### 4. NF4 vs. conventional quantization\n",
     "\n",
     "```python\n",
     "# Suppose we have these weights\n",
     "weights = [0.05, 0.2, 0.28, 0.8, -0.03, -0.6, -0.9]\n",
     "\n",
     "# INT4 quantization (uniform levels)\n",
     "def int4_quantize(weights):\n",
     "    quantized = [\n",
     "        0.0,   # 0.05  -> 0\n",
     "        0.25,  # 0.2   -> 0.25\n",
     "        0.25,  # 0.28  -> 0.25\n",
     "        0.75,  # 0.8   -> 0.75\n",
     "        0.0,   # -0.03 -> 0\n",
     "        -0.5,  # -0.6  -> -0.5\n",
     "        -1.0   # -0.9  -> -1.0\n",
     "    ]\n",
     "    return quantized\n",
     "\n",
     "# NF4 quantization (non-uniform levels)\n",
     "def nf4_quantize(weights):\n",
     "    quantized = [\n",
     "        0.1,   # 0.05  -> 0.1 (more precise)\n",
     "        0.3,   # 0.2   -> 0.3\n",
     "        0.3,   # 0.28  -> 0.3\n",
     "        0.7,   # 0.8   -> 0.7\n",
     "        0.0,   # -0.03 -> 0 (more precise)\n",
     "        -0.7,  # -0.6  -> -0.7\n",
     "        -1.0   # -0.9  -> -1.0\n",
     "    ]\n",
     "    return quantized\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1eb8cec9",
   "metadata": {},
   "source": [
     "### 5. Advantages of NF4\n",
     "\n",
     "```\n",
     "\"precision\": {\n",
     "    \"near zero\": \"denser quantization levels\",\n",
     "    \"large values\": \"more sensible spacing\"\n",
     "},\n",
     "\"memory efficiency\": {\n",
     "    \"bit width\": \"still 4-bit\",\n",
     "    \"range\": \"optimized for the [-1, 1] interval\"\n",
     "},\n",
     "\"training stability\": {\n",
     "    \"gradient flow\": \"better numerical stability\",\n",
     "    \"rounding error\": \"smaller accumulated error\"\n",
     "}\n",
     "```\n",
     "\n",
     "NF4's core innovations:\n",
     "1. Non-uniform placement of quantization levels\n",
     "2. Optimized for the normal distribution of neural-network weights\n",
     "3. Better quantization quality at the same bit width (4-bit)\n",
     "4. A particularly good fit for the weight distributions of large language models"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "152bc9c3",
   "metadata": {},
   "source": [
     "## 3.2 Double Quantization"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "34791326",
   "metadata": {},
   "source": [
     "### 1. Start with single quantization\n",
     "\n",
     "```python\n",
     "# Suppose we have a block of original weights\n",
     "original_weights = [2.5, -1.8, 0.4, -0.2, 0.1]\n",
     "\n",
     "# Single quantization, step by step\n",
     "def single_quantization():\n",
     "    # 1. Compute the quantization constant (absmax scale)\n",
     "    scale = max(abs(w) for w in original_weights)  # = 2.5\n",
     "\n",
     "    # 2. Normalize into [-1, 1]\n",
     "    normalized = [\n",
     "        2.5 / 2.5,   # =  1.0\n",
     "        -1.8 / 2.5,  # = -0.72\n",
     "        0.4 / 2.5,   # =  0.16\n",
     "        -0.2 / 2.5,  # = -0.08\n",
     "        0.1 / 2.5    # =  0.04\n",
     "    ]\n",
     "\n",
     "    # 3. Quantize with NF4\n",
     "    quantized = [\n",
     "        1.0,    #  1.0  ->  1.0\n",
     "        -0.7,   # -0.72 -> -0.7\n",
     "        0.1,    #  0.16 ->  0.1\n",
     "        -0.1,   # -0.08 -> -0.1\n",
     "        0.0     #  0.04 ->  0.0\n",
     "    ]\n",
     "\n",
     "    # What must be stored:\n",
     "    # - the quantized weights (4-bit each)\n",
     "    # - the scale (FP16, 16 bits)\n",
     "```"
   ]
  },
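  {
   "cell_type": "markdown",
   "id": "6f708192",
   "metadata": {},
   "source": [
    "The walk-through above can be turned into runnable code (still using the simplified 9-level grid and one absmax scale per block):\n",
    "\n",
    "```python\n",
    "NF4_POINTS = [-1.0, -0.7, -0.3, -0.1, 0.0, 0.1, 0.3, 0.7, 1.0]\n",
    "\n",
    "def quantize_block(weights):\n",
    "    scale = max(abs(w) for w in weights)  # absmax quantization constant\n",
    "    codes = [min(NF4_POINTS, key=lambda q: abs(q - w / scale)) for w in weights]\n",
    "    return codes, scale\n",
    "\n",
    "def dequantize_block(codes, scale):\n",
    "    # recover approximate weights; the quantization error remains\n",
    "    return [c * scale for c in codes]\n",
    "\n",
    "codes, scale = quantize_block([2.5, -1.8, 0.4, -0.2, 0.1])\n",
    "print(codes)  # [1.0, -0.7, 0.1, -0.1, 0.0], matching the example above\n",
    "```\n"
   ]
  },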
  {
   "cell_type": "markdown",
   "id": "feb3c0f3",
   "metadata": {},
   "source": [
     "### 2. The problem double quantization solves\n",
     "\n",
     "```python\n",
     "# Problem: the scales take up noticeable space\n",
     "def memory_analysis():\n",
     "    # For each block of weights:\n",
     "    weight_memory = \"4-bit\"   # per quantized weight\n",
     "    scale_memory = \"16-bit\"   # per quantization constant (scale)\n",
     "\n",
     "    # A single 16-bit scale is 4x the size of one 4-bit weight, and in\n",
     "    # practice one scale is stored per small block of weights, so across\n",
     "    # a large model these constants add up.\n",
     "\n",
     "    # For example:\n",
     "    # weights: [1.0, -0.7, 0.1, -0.1, 0.0] -> 4 bits each\n",
     "    # scale:   2.5 -> 16 bits\n",
     "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4868e476",
   "metadata": {},
   "source": [
     "### 3. The double-quantization solution\n",
     "\n",
     "```python\n",
     "def double_quantization():\n",
     "    # First quantization: quantize the weights to 4-bit\n",
     "    # 1. Compute the scale\n",
     "    scale = 2.5\n",
     "\n",
     "    # 2. Quantize the weights\n",
     "    quantized_weights = [1.0, -0.7, 0.1, -0.1, 0.0]  # stored as 4-bit codes\n",
     "\n",
     "    # Second quantization: quantize the scale itself to 8-bit\n",
     "    # 3. Quantize the scale value (quantize_to_8bit: an illustrative helper)\n",
     "    quantized_scale = quantize_to_8bit(2.5)\n",
     "\n",
     "    # Final storage:\n",
     "    # - quantized weights (4-bit)\n",
     "    # - quantized scale (8-bit instead of the original 16-bit)\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2c371524",
   "metadata": {},
   "source": [
     "### 4. A concrete example\n",
     "\n",
     "```python\n",
     "def full_double_quantization_example():\n",
     "    # Original weights\n",
     "    weights = [2.5, -1.8, 0.4]\n",
     "\n",
     "    # 1. Compute the scale\n",
     "    scale = max(abs(w) for w in weights)  # = 2.5\n",
     "\n",
     "    # 2. First quantization (the weights)\n",
     "    normalized = [w / scale for w in weights]\n",
     "    # normalized = [1.0, -0.72, 0.16]\n",
     "\n",
     "    # Snap to the NF4 levels\n",
     "    quantized_weights = [1.0, -0.7, 0.1]  # stored as 4-bit codes\n",
     "\n",
     "    # 3. Second quantization (the scale)\n",
     "    # A simple linear mapping to 8-bit (illustrative only)\n",
     "    scale_max = 4.0  # assumed maximum scale across blocks\n",
     "    quantized_scale = round(scale / scale_max * 255)  # stored as 8-bit\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6e093231",
   "metadata": {},
   "source": [
     "### 5. How the two quantization passes differ\n",
     "\n",
     "```\n",
     "The two passes use different schemes:\n",
     "    \"first pass (NF4)\": {\n",
     "        \"method\": \"map to predefined quantization levels\",\n",
     "        \"levels\": \"non-uniformly spaced\",\n",
     "        \"bits\": \"4-bit\",\n",
     "        \"goal\": \"preserve the weight distribution\",\n",
     "        \"reason 1\": \"weight distributions need special treatment\",\n",
     "        \"reason 2\": \"model quality is sensitive to weight precision\"\n",
     "    },\n",
     "\n",
     "    \"second pass (linear)\": {\n",
     "        \"method\": \"simple linear mapping\",\n",
     "        \"levels\": \"uniformly spaced\",\n",
     "        \"bits\": \"8-bit\",\n",
     "        \"goal\": \"simply compress the stored constants\",\n",
     "        \"reason 1\": \"a scale is a single number with no special distribution\",\n",
     "        \"reason 2\": \"8-bit precision is enough for a scale\",\n",
     "        \"reason 3\": \"simple to implement and fast to compute\"\n",
     "    }\n",
     "\n",
     "They also take different amounts of space:\n",
     "# Single quantization\n",
     "    single_quant = {\n",
     "        \"weights\": \"1000 x 4-bit = 4000 bits\",\n",
     "        \"scale\": \"1 x 16-bit = 16 bits\",\n",
     "        \"total\": \"4016 bits\"\n",
     "    }\n",
     "# Double quantization\n",
     "    double_quant = {\n",
     "        \"weights\": \"1000 x 4-bit = 4000 bits\",\n",
     "        \"scale\": \"1 x 8-bit = 8 bits\",\n",
     "        \"total\": \"4008 bits\"\n",
     "    }\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "10916007",
   "metadata": {},
   "source": [
     "### Summary\n",
     "1. Double quantization means \"quantizing twice\":\n",
     "   - first pass: quantize the weights to 4-bit\n",
     "   - second pass: quantize the quantization constants (scales) to 8-bit\n",
     "2. Main purpose:\n",
     "   - reduce the memory taken by the quantization constants\n",
     "   - each 16-bit scale becomes 8-bit\n",
     "3. Net effect:\n",
     "   - weights stored in 4-bit\n",
     "   - quantization constants stored in 8-bit\n",
     "   - further memory savings on top of weight quantization\n",
     "\n"
   ]
  },
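  {
   "cell_type": "markdown",
   "id": "708192a3",
   "metadata": {},
   "source": [
    "The bit counts in the summary can be checked with one-line arithmetic. The sketch below also shows the more realistic per-block case (assuming one scale per 64-weight block, similar to the QLoRA setting), which is where the savings really add up:\n",
    "\n",
    "```python\n",
    "def total_bits(n_weights, weight_bits=4, scale_bits=16, block_size=None):\n",
    "    # one scale per block; block_size=None means a single shared scale\n",
    "    n_scales = 1 if block_size is None else -(-n_weights // block_size)  # ceil\n",
    "    return n_weights * weight_bits + n_scales * scale_bits\n",
    "\n",
    "# The chapter's toy example: 1000 weights, one shared scale\n",
    "print(total_bits(1000, scale_bits=16))  # 4016 bits (single quantization)\n",
    "print(total_bits(1000, scale_bits=8))   # 4008 bits (double quantization)\n",
    "\n",
    "# With one scale per 64-weight block, the scales matter far more\n",
    "print(total_bits(64_000, scale_bits=16, block_size=64))  # 272000 bits\n",
    "print(total_bits(64_000, scale_bits=8, block_size=64))   # 264000 bits\n",
    "```\n"
   ]
  },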
  {
   "cell_type": "markdown",
   "id": "b513790b",
   "metadata": {},
   "source": [
     "## 3.3 Paged Optimizers"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ba431e31",
   "metadata": {},
   "source": [
     "### 1. The traditional approach\n",
     "    1. Model parameters live on the GPU\n",
     "    2. Optimizer state lives on the GPU too\n",
     "\n",
     "    Problem:\n",
     "    - The Adam optimizer stores state amounting to twice the parameter memory\n",
     "    - For large models, GPU memory runs out\n",
     "    # For example:\n",
     "    model_params = \"1GB of parameters\"\n",
     "    adam_states = \"2GB of optimizer state\"  # must stay resident on the GPU\n",
     "    total_gpu_memory = \"3GB\"    # total footprint"
   ]
  },
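  {
   "cell_type": "markdown",
   "id": "8192a3b4",
   "metadata": {},
   "source": [
    "The claim that Adam's state is roughly twice the parameter memory can be made concrete with quick arithmetic (FP32 states assumed, which is the usual default):\n",
    "\n",
    "```python\n",
    "def adam_state_gb(n_params, bytes_per_value=4, n_state_tensors=2):\n",
    "    # Adam keeps a momentum and a variance value (FP32 each) per parameter\n",
    "    return n_params * bytes_per_value * n_state_tensors / 1024**3\n",
    "\n",
    "print(round(adam_state_gb(7e9), 1))   # 52.2 GB of state for a 7B model\n",
    "print(round(adam_state_gb(65e9), 1))  # 484.3 GB for a 65B model\n",
    "```\n"
   ]
  },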
  {
   "cell_type": "markdown",
   "id": "d2b04988",
   "metadata": {},
   "source": [
     "### 2. The core idea of a paged optimizer\n",
     "```python\n",
     "    \"\"\"\n",
     "    Core idea:\n",
     "    1. Optimizer state lives mostly in CPU memory (RAM)\n",
     "    2. Only the part currently needed is paged onto the GPU\n",
     "    3. Once used, it is paged back to the CPU, freeing GPU memory\n",
     "    \"\"\"\n",
     "\n",
     "# For example, when processing one network layer:\n",
     "    # 1. Page that layer's optimizer state from CPU to GPU\n",
     "    # 2. Update that layer's parameters\n",
     "    # 3. Page the updated state back to the CPU\n",
     "    # GPU memory is freed\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9d039de7",
   "metadata": {},
   "source": [
     "### 3. A concrete example\n",
     "\n",
     "```python\n",
     "# Suppose we have a large model\n",
     "model = {\n",
     "    \"layer1\": \"1GB of parameters\",\n",
     "    \"layer2\": \"1GB of parameters\",\n",
     "    \"layer3\": \"1GB of parameters\"\n",
     "}\n",
     "\n",
     "# Traditional optimizer\n",
     "    \"\"\"All optimizer state resides on the GPU\"\"\"\n",
     "    gpu_memory = {\n",
     "        \"layer1_state\": \"2GB\",\n",
     "        \"layer2_state\": \"2GB\",\n",
     "        \"layer3_state\": \"2GB\"\n",
     "    }\n",
     "    # 6GB of GPU memory needed in total\n",
     "\n",
     "# Paged optimizer\n",
     "    # While processing layer1\n",
     "    gpu_memory = {\n",
     "        \"layer1_state\": \"2GB\"  # only the current layer's state is loaded\n",
     "    }\n",
     "    # released when done\n",
     "\n",
     "    # While processing layer2\n",
     "    gpu_memory = {\n",
     "        \"layer2_state\": \"2GB\"  # swapped in for the second layer\n",
     "    }\n",
     "    # released when done\n",
     "\n",
     "    # While processing layer3\n",
     "    gpu_memory = {\n",
     "        \"layer3_state\": \"2GB\"  # swapped in for the third layer\n",
     "    }\n",
     "    # released when done\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f4bf283a",
   "metadata": {},
   "source": [
     "\n",
     "### 4. The effect in practice\n",
     "\n",
     "```\n",
     "Memory-usage comparison:\n",
     "    \"traditional optimizer\": {\n",
     "        \"model parameters\": \"1GB\",\n",
     "        \"optimizer state\": \"2GB\",\n",
     "        \"total GPU memory\": \"3GB\",\n",
     "        \"behavior\": \"state stays resident\"\n",
     "    }\n",
     "    \"paged optimizer\": {\n",
     "        \"model parameters\": \"1GB\",\n",
     "        \"optimizer state\": \"0.1GB\",  # only the part being processed\n",
     "        \"total GPU memory\": \"1.1GB\",\n",
     "        \"behavior\": \"dynamically paged\"\n",
     "    }\n",
     "```\n",
     "\n",
     "Put simply:\n",
     "1. A traditional optimizer spreads every book out on the desk at once.\n",
     "2. A paged optimizer instead:\n",
     "   - keeps all the books on a shelf (CPU memory)\n",
     "   - puts only the book being read on the desk (GPU memory)\n",
     "   - returns each book to the shelf when done, then fetches the next\n",
     "   - so the desk (GPU memory) never fills up"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "63733c92",
   "metadata": {},
   "source": [
     "# 4. Summary"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "560832d8",
   "metadata": {},
   "source": [
    "<div align=center><img src=\"https://typora-photo1220.oss-cn-beijing.aliyuncs.com/DataAnalysis/muyan/image-20250312163022619.png\" width=100%></div>"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "fufan_chat_api",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
