{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. Model Compression"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1.1 Pruning\n",
    "Key idea:\n",
    "\n",
    "Pruning removes unimportant weights to reduce a model's complexity and storage needs. Common approaches include random (unstructured) pruning and importance-based pruning."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.utils.prune as prune\n",
    "\n",
    "# A simple convolutional model (assumes 1x28x28 inputs, so conv1's output is 16x26x26)\n",
    "class SimpleCNN(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(SimpleCNN, self).__init__()\n",
    "        self.conv1 = nn.Conv2d(1, 16, 3)\n",
    "        self.relu = nn.ReLU()\n",
    "        self.fc = nn.Linear(16 * 26 * 26, 10)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = self.relu(self.conv1(x))\n",
    "        x = x.view(x.size(0), -1)\n",
    "        x = self.fc(x)\n",
    "        return x\n",
    "\n",
    "model = SimpleCNN()\n",
    "\n",
    "# Randomly prune 30% of conv1's weights (unstructured pruning)\n",
    "prune.random_unstructured(model.conv1, name=\"weight\", amount=0.3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What to adapt:\n",
    "\n",
    "Model definition: replace SimpleCNN with your own model class.\n",
    "Pruned layer: in prune.random_unstructured(model.conv1, ...), make sure the target is an appropriate layer of your model."
   ]
  },
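  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After pruning, the layer carries a weight_orig parameter and a weight_mask buffer. A quick sanity check (a minimal sketch reusing the same nn.Conv2d(1, 16, 3) layer as above) is to measure the resulting sparsity and then make the pruning permanent with prune.remove:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.utils.prune as prune\n",
    "\n",
    "conv = nn.Conv2d(1, 16, 3)\n",
    "prune.random_unstructured(conv, name=\"weight\", amount=0.3)\n",
    "\n",
    "# fraction of weights that were zeroed out (should be close to 0.3)\n",
    "sparsity = float((conv.weight == 0).sum()) / conv.weight.nelement()\n",
    "print(f\"Sparsity: {sparsity:.2f}\")\n",
    "\n",
    "# fold the mask into the weight tensor and drop weight_orig / weight_mask\n",
    "prune.remove(conv, \"weight\")"
   ]
  },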
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1.2 Low-Rank Decomposition\n",
    "Key idea:\n",
    "\n",
    "Factor a large weight matrix into the product of smaller matrices to reduce the number of parameters."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# Suppose we have a weight matrix\n",
    "weight_matrix = torch.randn(100, 100)\n",
    "\n",
    "# Low-rank decomposition via SVD\n",
    "# (torch.linalg.svd replaces the deprecated torch.svd; note vh is V transposed)\n",
    "u, s, vh = torch.linalg.svd(weight_matrix, full_matrices=False)\n",
    "\n",
    "# Keep only the top-k singular values (e.g. the first 10)\n",
    "k = 10\n",
    "u_k = u[:, :k]\n",
    "s_k = torch.diag(s[:k])\n",
    "vh_k = vh[:k, :]\n",
    "\n",
    "# Rank-k approximation of the original matrix\n",
    "reduced_weight_matrix = u_k @ s_k @ vh_k"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What to adapt:\n",
    "\n",
    "Weight matrix: weight_matrix is randomly generated here; in practice, use a weight matrix from your own model.\n",
    "Chosen rank: adjust k to control how many singular values are kept."
   ]
  },
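  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same idea can be applied to an actual layer. Below is a hypothetical low_rank_linear helper (not part of PyTorch) that replaces an nn.Linear with two smaller linear layers whose product approximates the original weight matrix:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "def low_rank_linear(linear, k):\n",
    "    # factor W (out x in) into U_k @ S_k @ Vh_k, realized as two layers\n",
    "    u, s, vh = torch.linalg.svd(linear.weight.data, full_matrices=False)\n",
    "    first = nn.Linear(linear.in_features, k, bias=False)\n",
    "    second = nn.Linear(k, linear.out_features, bias=True)\n",
    "    first.weight.data = torch.diag(s[:k]) @ vh[:k, :]  # shape (k, in)\n",
    "    second.weight.data = u[:, :k]                      # shape (out, k)\n",
    "    second.bias.data = linear.bias.data.clone()\n",
    "    return nn.Sequential(first, second)\n",
    "\n",
    "layer = nn.Linear(100, 100)              # 10100 parameters\n",
    "compressed = low_rank_linear(layer, 10)  # 2100 parameters"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With k equal to the full rank the factorization reproduces the original layer exactly; smaller k trades accuracy for fewer parameters."
   ]
  },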
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1.3 Knowledge Distillation\n",
    "Key idea:\n",
    "\n",
    "Train a small student model to mimic a large teacher model's soft outputs, so the student retains much of the teacher's accuracy at a fraction of the size."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "\n",
    "# Minimal placeholder teacher and student; replace with your real architectures\n",
    "class TeacherModel(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.fc = nn.Linear(784, 10)\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.fc(x)\n",
    "\n",
    "class StudentModel(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.fc = nn.Linear(784, 10)\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.fc(x)\n",
    "\n",
    "def train_student(teacher_model, student_model, data_loader, num_epochs, T=2.0):\n",
    "    criterion = nn.KLDivLoss(reduction='batchmean')\n",
    "    optimizer = optim.Adam(student_model.parameters())\n",
    "\n",
    "    teacher_model.eval()  # the teacher stays in evaluation mode\n",
    "\n",
    "    for epoch in range(num_epochs):\n",
    "        for data, target in data_loader:\n",
    "            optimizer.zero_grad()\n",
    "            with torch.no_grad():\n",
    "                teacher_output = teacher_model(data)  # teacher logits, no gradient\n",
    "            student_output = student_model(data)      # student logits\n",
    "\n",
    "            # KL divergence between temperature-softened distributions;\n",
    "            # the T**2 factor keeps gradient magnitudes comparable across temperatures\n",
    "            loss = criterion(torch.log_softmax(student_output / T, dim=1),\n",
    "                             torch.softmax(teacher_output / T, dim=1)) * T ** 2\n",
    "            loss.backward()\n",
    "            optimizer.step()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What to adapt:\n",
    "\n",
    "Model definitions: implement TeacherModel and StudentModel to match your actual architectures.\n",
    "Data loader: provide a suitable data_loader.\n"
   ]
  },
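  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal end-to-end sketch of a single distillation step, using throwaway models (the shapes here are arbitrary): the KL loss between temperature-softened outputs produces gradients only for the student."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "torch.manual_seed(0)\n",
    "teacher = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))\n",
    "student = nn.Linear(4, 3)\n",
    "teacher.eval()\n",
    "\n",
    "x = torch.randn(8, 4)\n",
    "T = 2.0\n",
    "with torch.no_grad():\n",
    "    soft_targets = torch.softmax(teacher(x) / T, dim=1)  # teacher's soft labels\n",
    "\n",
    "loss = nn.KLDivLoss(reduction='batchmean')(\n",
    "    torch.log_softmax(student(x) / T, dim=1), soft_targets) * T ** 2\n",
    "loss.backward()  # gradients flow into the student only"
   ]
  },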
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1.4 Weight Sharing\n",
    "Key idea:\n",
    "\n",
    "Reduce model size by reusing the same weights in multiple places."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class SharedWeightModel(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(SharedWeightModel, self).__init__()\n",
    "        self.shared_weights = nn.Parameter(torch.randn(10, 10))\n",
    "\n",
    "    def forward(self, x):\n",
    "        # the same parameter is reused by two consecutive layers,\n",
    "        # so the model stores one 10x10 matrix instead of two\n",
    "        x = torch.relu(x @ self.shared_weights)\n",
    "        return x @ self.shared_weights\n",
    "\n",
    "model = SharedWeightModel()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What to adapt:\n",
    "\n",
    "Weight shape: adjust the shape and use of shared_weights to fit your network.\n"
   ]
  },
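  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick comparison of parameter counts (a minimal sketch with made-up 10x10 shapes): storing one shared matrix and reusing it twice halves the parameters relative to two independent layers."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# two independent 10x10 layers vs. one shared matrix used twice\n",
    "independent = nn.Sequential(nn.Linear(10, 10, bias=False),\n",
    "                            nn.Linear(10, 10, bias=False))\n",
    "shared = nn.Parameter(torch.randn(10, 10))\n",
    "\n",
    "n_independent = sum(p.numel() for p in independent.parameters())  # 200\n",
    "n_shared = shared.numel()                                         # 100\n",
    "print(n_independent, n_shared)"
   ]
  },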
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "PyTorch model quantization workflow"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "                              +--------------------------+  \n",
    "                              |  Load pretrained model   |  \n",
    "                              +------------+-------------+  \n",
    "                                           |  \n",
    "                                           v  \n",
    "                              +--------------------------+  \n",
    "                              |  Choose a quantization   |  \n",
    "                              |  method:                 |  \n",
    "                              |  1. Dynamic quantization |  \n",
    "                              |  2. Static quantization  |  \n",
    "                              |  3. Quantization-aware   |  \n",
    "                              |     training (QAT)       |  \n",
    "                              +------------+-------------+  \n",
    "                                           |  \n",
    "             +-----------------------------+-----------------------------+  \n",
    "             |                             |                             |  \n",
    "             v                             v                             v  \n",
    "+--------------------------+  +--------------------------+  +--------------------------+  \n",
    "|  Dynamic quantization    |  |  Static quantization     |  |  QAT                     |  \n",
    "|  - quantize_dynamic()    |  |  - prepare()             |  |  - prepare_qat()         |  \n",
    "|  - nn.Linear, nn.LSTM    |  |  - calibration passes    |  |  - train the model       |  \n",
    "|  - model ready to use    |  |  - convert()             |  |  - convert() after train |  \n",
    "+------------+-------------+  +------------+-------------+  +------------+-------------+  \n",
    "             |                             |                             |  \n",
    "             v                             v                             v  \n",
    "+--------------------------+  +--------------------------+  +--------------------------+  \n",
    "|  Test quantized model    |  |  Test quantized model    |  |  Test quantized model    |  \n",
    "|  - feed input data       |  |  - feed input data       |  |  - feed input data       |  \n",
    "|  - check outputs         |  |  - check outputs         |  |  - check outputs         |  \n",
    "+--------------------------+  +--------------------------+  +--------------------------+"
   ]
  },
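  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Of the three branches above, quantization-aware training is the only one not sketched in code later in this notebook. A minimal eager-mode QAT sketch (toy layer sizes, with the training loop reduced to a single forward pass) looks like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class QATModel(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.quant = torch.quantization.QuantStub()\n",
    "        self.fc1 = nn.Linear(4, 8)\n",
    "        self.relu = nn.ReLU()\n",
    "        self.fc2 = nn.Linear(8, 2)\n",
    "        self.dequant = torch.quantization.DeQuantStub()\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = self.quant(x)\n",
    "        x = self.fc2(self.relu(self.fc1(x)))\n",
    "        return self.dequant(x)\n",
    "\n",
    "model = QATModel()\n",
    "model.train()\n",
    "model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')\n",
    "torch.quantization.prepare_qat(model, inplace=True)\n",
    "\n",
    "model(torch.randn(8, 4))  # stand-in for the real training loop\n",
    "\n",
    "model.eval()\n",
    "torch.quantization.convert(model, inplace=True)\n",
    "out = model(torch.randn(1, 4))"
   ]
  },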
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "TensorFlow model quantization workflow"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "                              +--------------------------+  \n",
    "                              |  Load pretrained model   |  \n",
    "                              +------------+-------------+  \n",
    "                                           |  \n",
    "                                           v  \n",
    "                              +--------------------------+  \n",
    "                              |  Choose a quantization   |  \n",
    "                              |  method:                 |  \n",
    "                              |  1. Weight quantization  |  \n",
    "                              |  2. Activation           |  \n",
    "                              |     quantization         |  \n",
    "                              |  3. Full-integer         |  \n",
    "                              |     quantization         |  \n",
    "                              |  4. Quantization-aware   |  \n",
    "                              |     training (QAT)       |  \n",
    "                              +------------+-------------+  \n",
    "                                           |  \n",
    "             +-----------------------------+-----------------------------+  \n",
    "             |                             |                             |  \n",
    "             v                             v                             v  \n",
    "+--------------------------+  +--------------------------+  +--------------------------+  \n",
    "|  Weight quantization     |  |  Activation quantization |  |  QAT                     |  \n",
    "|  - TFLiteConverter       |  |  - activation statistics |  |  - insert fake-quant ops |  \n",
    "|  - set optimizations     |  |  - representative data   |  |  - train the model       |  \n",
    "|  - convert to TFLite     |  |  - apply conversion      |  |  - simulate quantization |  \n",
    "+------------+-------------+  +------------+-------------+  +------------+-------------+  \n",
    "             |                             |                             |  \n",
    "             v                             v                             v  \n",
    "+--------------------------+  +--------------------------+  +--------------------------+  \n",
    "|  Test quantized model    |  |  Test quantized model    |  |  Test quantized model    |  \n",
    "|  - feed input data       |  |  - feed input data       |  |  - feed input data       |  \n",
    "|  - check outputs         |  |  - check outputs         |  |  - check outputs         |  \n",
    "+------------+-------------+  +------------+-------------+  +------------+-------------+  \n",
    "                                           |  \n",
    "                                           v  \n",
    "                              +--------------------------+  \n",
    "                              |  Evaluate performance    |  \n",
    "                              |  and optimize            |  \n",
    "                              +--------------------------+"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2. Model Quantization"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2.1 Dynamic Quantization\n",
    "Key idea:\n",
    "\n",
    "Weights are converted to 8-bit integers ahead of time and activations are quantized on the fly at inference, reducing the memory footprint and speeding up CPU inference."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# Assumes `model` is an already trained PyTorch model\n",
    "model.eval()  # switch to evaluation mode\n",
    "quantized_model = torch.quantization.quantize_dynamic(\n",
    "    model, {torch.nn.Linear}, dtype=torch.qint8\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What to adapt:\n",
    "\n",
    "Model: make sure `model` is the PyTorch model you want to quantize.\n",
    "Layer types: extend {torch.nn.Linear} with other supported layers such as torch.nn.LSTM; dynamic quantization targets linear and recurrent layers, while convolutions require static quantization."
   ]
  },
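  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A self-contained version of the fragment above, using a throwaway model so it runs end to end; the dynamically quantized model accepts ordinary float tensors:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))\n",
    "model.eval()\n",
    "\n",
    "quantized = torch.quantization.quantize_dynamic(\n",
    "    model, {nn.Linear}, dtype=torch.qint8\n",
    ")\n",
    "out = quantized(torch.randn(2, 128))  # same interface as the float model"
   ]
  },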
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2.2 Static Quantization\n",
    "Key idea:\n",
    "\n",
    "Collect the ranges of activation values on calibration data before deployment, then quantize both weights and activations statically."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.quantization\n",
    "\n",
    "# Assumes `model` is an already trained model wrapped with QuantStub/DeQuantStub\n",
    "model.eval()\n",
    "model.qconfig = torch.quantization.get_default_qconfig('fbgemm')  # quantization config\n",
    "torch.quantization.prepare(model, inplace=True)  # insert observers\n",
    "\n",
    "# Calibrate by running representative data through the model\n",
    "# (actual data loading and forward passes go here)\n",
    "\n",
    "torch.quantization.convert(model, inplace=True)  # convert to a quantized model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What to adapt:\n",
    "\n",
    "Quantization config: adjust model.qconfig as needed.\n",
    "Calibration data: run representative inputs through the model between torch.quantization.prepare and torch.quantization.convert so the observers can record activation ranges."
   ]
  },
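  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Eager-mode static quantization also needs QuantStub/DeQuantStub markers around the float part of the model, which the fragment above omits. A runnable sketch with a toy layer and random calibration data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class QuantReadyModel(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.quant = torch.quantization.QuantStub()\n",
    "        self.fc = nn.Linear(8, 4)\n",
    "        self.dequant = torch.quantization.DeQuantStub()\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.dequant(self.fc(self.quant(x)))\n",
    "\n",
    "model = QuantReadyModel().eval()\n",
    "model.qconfig = torch.quantization.get_default_qconfig('fbgemm')\n",
    "torch.quantization.prepare(model, inplace=True)\n",
    "\n",
    "for _ in range(4):  # calibration passes with representative data\n",
    "    model(torch.randn(2, 8))\n",
    "\n",
    "torch.quantization.convert(model, inplace=True)\n",
    "out = model(torch.randn(2, 8))"
   ]
  },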
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2.3 Full-Integer Quantization\n",
    "Key idea:\n",
    "\n",
    "Quantize all weights and activations to a low-precision integer format, which can greatly reduce memory use."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch.quantization\n",
    "\n",
    "# Assumes `model` is your trained PyTorch model\n",
    "model.eval()\n",
    "model.qconfig = torch.quantization.get_default_qconfig('fbgemm')\n",
    "\n",
    "# Prepare, calibrate, and convert\n",
    "torch.quantization.prepare(model, inplace=True)\n",
    "# run calibration data through the model here\n",
    "torch.quantization.convert(model, inplace=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What to adapt:\n",
    "\n",
    "As above, set an appropriate model.qconfig and provide valid calibration data."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2.4 Mixed Precision Training\n",
    "Key idea:\n",
    "\n",
    "Combine 16-bit and 32-bit floating point to speed up training and make better use of compute resources."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch.cuda.amp import GradScaler, autocast\n",
    "\n",
    "model.train()\n",
    "scaler = GradScaler()\n",
    "\n",
    "for data, target in data_loader:\n",
    "    optimizer.zero_grad()\n",
    "    with autocast():  # the forward pass runs in float16 where safe\n",
    "        output = model(data)\n",
    "        loss = criterion(output, target)\n",
    "\n",
    "    # scaled backward pass to avoid float16 gradient underflow\n",
    "    scaler.scale(loss).backward()\n",
    "    scaler.step(optimizer)\n",
    "    scaler.update()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What to adapt:\n",
    "\n",
    "Data loading: provide a suitable data_loader.\n",
    "Optimizer and loss: define an appropriate optimizer and criterion."
   ]
  },
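  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A device-independent taste of autocast (here on CPU with bfloat16, since the CUDA float16 path above needs a GPU): inside the region, eligible ops run in the lower precision while the parameters stay float32."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "model = nn.Linear(8, 4)\n",
    "x = torch.randn(2, 8)\n",
    "\n",
    "with torch.autocast(device_type='cpu', dtype=torch.bfloat16):\n",
    "    out = model(x)  # the matmul runs in bfloat16\n",
    "\n",
    "print(out.dtype, model.weight.dtype)  # low-precision output, float32 weights"
   ]
  },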
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model Accuracy: 1.00\n",
      "Feature Importance:\n",
      "Feature: f0, Score: 9.0\n",
      "Feature: f1, Score: 15.0\n",
      "Feature: f2, Score: 59.0\n",
      "Feature: f3, Score: 31.0\n"
     ]
    }
   ],
   "source": [
    "# Import the required libraries\n",
    "import xgboost as xgb\n",
    "from sklearn.datasets import load_iris\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.metrics import accuracy_score\n",
    "\n",
    "# 1. Load the Iris dataset\n",
    "iris = load_iris()\n",
    "X, y = iris.data, iris.target\n",
    "\n",
    "# 2. Split into training and test sets (75% train, 25% test)\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)\n",
    "\n",
    "# 3. Create DMatrix objects\n",
    "# XGBoost's DMatrix data structure is optimized for memory use and speed\n",
    "dtrain = xgb.DMatrix(X_train, label=y_train)  # training data\n",
    "dtest = xgb.DMatrix(X_test, label=y_test)     # test data\n",
    "\n",
    "# 4. Set the training parameters\n",
    "params = {\n",
    "    'objective': 'multi:softmax',  # multi-class classification\n",
    "    'num_class': 3,                # number of classes\n",
    "    'eta': 0.3,                    # learning rate\n",
    "    'max_depth': 6,                # maximum tree depth\n",
    "    'eval_metric': 'mlogloss',     # multi-class log loss\n",
    "}\n",
    "\n",
    "# 5. Train the model (num_boost_round sets the number of trees)\n",
    "num_boost_round = 10\n",
    "model = xgb.train(params, dtrain, num_boost_round)\n",
    "\n",
    "# 6. Predict on the test set\n",
    "y_pred = model.predict(dtest)\n",
    "\n",
    "# 7. Evaluate: compute and print the accuracy\n",
    "accuracy = accuracy_score(y_test, y_pred)\n",
    "print(f\"Model Accuracy: {accuracy:.2f}\")\n",
    "\n",
    "# 8. Feature importance: print the importance score of each feature\n",
    "importance = model.get_score(importance_type='weight')\n",
    "print(\"Feature Importance:\")\n",
    "for feature, score in importance.items():\n",
    "    print(f\"Feature: {feature}, Score: {score}\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "pytorchgpu",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.20"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
