{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "0370ba6f",
   "metadata": {},
   "source": [
    "# Multi-Strategy Pruning Experiments for Transformer Models\n",
    "\n",
    "This notebook demonstrates several pruning strategies applied to a Transformer model (such as BERT):\n",
    "- Unstructured magnitude pruning\n",
    "- Attention-head pruning\n",
    "- Gradient-saliency pruning (SNIP)\n",
    "- PyTorch's built-in L1 unstructured pruning\n",
    "- Post-pruning fine-tuning and performance comparison\n",
    "- Joint evaluation of sparsity, speed, and accuracy\n",
    "- An example of combining pruning with dynamic quantization\n",
    "\n",
    "> Note: before running, make sure `transformers`, `torch`, `matplotlib`, and `pandas` are installed. If anything is missing, run `%pip install transformers torch matplotlib pandas`."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4e69e0bd",
   "metadata": {},
   "source": [
    "## 1. Environment and Imports\n",
    "Import the required libraries and detect the device (CPU/GPU)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cedd1600",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os, math, time, json, random\n",
    "import numpy as np\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "from torch.utils.data import Dataset, DataLoader\n",
    "\n",
    "try:\n",
    "    import transformers\n",
    "    from transformers import AutoTokenizer, AutoModelForSequenceClassification\n",
    "except Exception as e:\n",
    "    transformers = None\n",
    "    print('transformers is not installed; install it first: pip install transformers')\n",
    "\n",
    "try:\n",
    "    import pandas as pd\n",
    "except Exception:\n",
    "    pd = None\n",
    "\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "seed = int(os.getenv('SEED', '42'))\n",
    "random.seed(seed); np.random.seed(seed); torch.manual_seed(seed)\n",
    "\n",
    "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
    "print('Using device:', device)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "26c4e88b",
   "metadata": {},
   "source": [
    "## 2. Load a Pretrained Transformer Model and Dataset\n",
    "To keep the notebook runnable anywhere, we build a small synthetic binary-classification dataset and wrap it in DataLoaders."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5652eac9",
   "metadata": {},
   "outputs": [],
   "source": [
    "assert transformers is not None, \"transformers must be installed to run this example\"\n",
    "MODEL_NAME = os.getenv('MODEL_NAME', 'bert-base-uncased')\n",
    "num_labels = 2\n",
    "\n",
    "tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)\n",
    "model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=num_labels).to(device)\n",
    "model.eval()\n",
    "\n",
    "class TinyTextDataset(Dataset):\n",
    "    def __init__(self, tokenizer, size=128, max_len=64):\n",
    "        pos_templates = [\n",
    "            \"I love this product\", \"This is fantastic\", \"What a great experience\",\n",
    "            \"Absolutely wonderful\", \"I enjoyed it\", \"Highly recommended\"\n",
    "        ]\n",
    "        neg_templates = [\n",
    "            \"I hate this\", \"This is terrible\", \"What a bad experience\",\n",
    "            \"Absolutely awful\", \"I regret it\", \"Not recommended\"\n",
    "        ]\n",
    "        texts, labels = [], []\n",
    "        for _ in range(size//2):\n",
    "            texts.append(random.choice(pos_templates)); labels.append(1)\n",
    "            texts.append(random.choice(neg_templates)); labels.append(0)\n",
    "        enc = tokenizer(texts, padding=True, truncation=True, max_length=max_len, return_tensors='pt')\n",
    "        self.input_ids = enc['input_ids']\n",
    "        self.attention_mask = enc['attention_mask']\n",
    "        self.labels = torch.tensor(labels)\n",
    "    def __len__(self): return self.input_ids.size(0)\n",
    "    def __getitem__(self, idx):\n",
    "        return {\n",
    "            'input_ids': self.input_ids[idx],\n",
    "            'attention_mask': self.attention_mask[idx],\n",
    "            'labels': self.labels[idx]\n",
    "        }\n",
    "\n",
    "train_ds = TinyTextDataset(tokenizer, size=128)\n",
    "val_ds = TinyTextDataset(tokenizer, size=128)\n",
    "train_loader = DataLoader(train_ds, batch_size=16, shuffle=True)\n",
    "val_loader = DataLoader(val_ds, batch_size=32)\n",
    "\n",
    "print('Dataset ready:', len(train_ds), len(val_ds))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "47a6bf8e",
   "metadata": {},
   "source": [
    "## 3. Evaluation Metrics\n",
    "Implement an evaluation function that computes accuracy and loss, and measures inference time and throughput (samples/sec)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cd1cbb33",
   "metadata": {},
   "outputs": [],
   "source": [
    "@torch.no_grad()\n",
    "def evaluate(dataloader, model, device=device, amp=False):\n",
    "    \"\"\"Compute accuracy, mean loss, wall-clock time, and throughput (samples/sec).\"\"\"\n",
    "    model.eval()\n",
    "    total, correct, losses = 0, 0, []\n",
    "    start = time.time()\n",
    "    # torch.autocast is a no-op when enabled=False, so this is safe on both CPU and GPU\n",
    "    with torch.autocast(device_type=device.type, enabled=amp):\n",
    "        for batch in dataloader:\n",
    "            input_ids = batch['input_ids'].to(device)\n",
    "            attention_mask = batch['attention_mask'].to(device)\n",
    "            labels = batch['labels'].to(device)\n",
    "            outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)\n",
    "            loss = outputs.loss\n",
    "            logits = outputs.logits\n",
    "            preds = logits.argmax(-1)\n",
    "            correct += (preds == labels).sum().item()\n",
    "            total += labels.size(0)\n",
    "            losses.append(loss.item())\n",
    "    elapsed = time.time() - start\n",
    "    acc = correct / max(1, total)\n",
    "    avg_loss = float(np.mean(losses)) if losses else 0.0\n",
    "    throughput = total / max(1e-6, elapsed)\n",
    "    return {\"acc\": acc, \"loss\": avg_loss, \"time_sec\": elapsed, \"throughput\": throughput}\n",
    "\n",
    "# Rough parameter count and approximate model size\n",
    "def count_parameters(model):\n",
    "    return sum(p.numel() for p in model.parameters())\n",
    "\n",
    "def estimate_model_size_bytes(model, dtype_bytes=4):\n",
    "    # Rough estimate: parameter count * bytes per parameter (float32 = 4)\n",
    "    return count_parameters(model) * dtype_bytes\n",
    "\n",
    "print('Evaluation helpers defined.')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f14212e9",
   "metadata": {},
   "source": [
    "## 4. Baseline Evaluation and Inference Timing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "68743eff",
   "metadata": {},
   "outputs": [],
   "source": [
    "base_metrics = evaluate(val_loader, model)\n",
    "params = count_parameters(model)\n",
    "size_mb = estimate_model_size_bytes(model)/1024/1024\n",
    "print('Baseline:', base_metrics)\n",
    "print(f'Params: {params:,} (~{size_mb:.2f} MB @fp32)')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1e4352a5",
   "metadata": {},
   "source": [
    "## 5. Weight-Sparsity Statistics\n",
    "Count nonzero parameters and the sparsity ratio, for the whole model and per parameter tensor."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7840cc09",
   "metadata": {},
   "outputs": [],
   "source": [
    "def count_nonzero_and_sparsity(model: torch.nn.Module):\n",
    "    total_params, total_nonzero = 0, 0\n",
    "    per_module = []\n",
    "    for name, param in model.named_parameters():\n",
    "        if param is None:\n",
    "            continue\n",
    "        numel = param.numel()\n",
    "        # Count nonzero entries; sparsity = 1 - nonzero/numel. Note that while a\n",
    "        # prune mask is still attached, named_parameters() yields the dense\n",
    "        # 'weight_orig' rather than the masked weight, so fold the mask in with\n",
    "        # prune.remove() (as done in Section 9) before measuring.\n",
    "        nz = int(torch.count_nonzero(param).item())\n",
    "        total_params += numel\n",
    "        total_nonzero += nz\n",
    "        sparsity = 1.0 - (nz / max(1, numel))\n",
    "        per_module.append({\n",
    "            'name': name,\n",
    "            'numel': int(numel),\n",
    "            'nonzero': int(nz),\n",
    "            'sparsity': float(sparsity)\n",
    "        })\n",
    "    overall = {\n",
    "        'total_params': int(total_params),\n",
    "        'total_nonzero': int(total_nonzero),\n",
    "        'overall_sparsity': float(1.0 - total_nonzero / max(1, total_params))\n",
    "    }\n",
    "    return overall, per_module\n",
    "\n",
    "overall0, per0 = count_nonzero_and_sparsity(model)\n",
    "print('Baseline sparsity:', overall0)"
   ]
  },
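  {
   "cell_type": "markdown",
   "id": "a7f3c2d1",
   "metadata": {},
   "source": [
    "Zeroing entries of a dense tensor does not reduce memory by itself; converting to a sparse layout does. Below is a minimal standalone sketch using a small random tensor as a stand-in for a pruned weight matrix (not the model's actual weights):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b8e4d3f2",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# Stand-in for a pruned weight matrix: zero the smallest 90% by magnitude\n",
    "w = torch.randn(512, 512)\n",
    "w[w.abs() < w.abs().quantile(0.9)] = 0.0\n",
    "\n",
    "dense_bytes = w.numel() * w.element_size()\n",
    "sw = w.to_sparse()  # COO layout: stores only the nonzero values plus their (int64) indices\n",
    "sparse_bytes = (sw.values().numel() * sw.values().element_size()\n",
    "                + sw.indices().numel() * sw.indices().element_size())\n",
    "print(f'dense: {dense_bytes/1024:.1f} KiB, sparse COO: {sparse_bytes/1024:.1f} KiB')"
   ]
  },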
  {
   "cell_type": "markdown",
   "id": "917a759d",
   "metadata": {},
   "source": [
    "## 6. Attention-Head Pruning\n",
    "Prune a few attention heads from selected layers, then re-evaluate."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "71c30d65",
   "metadata": {},
   "outputs": [],
   "source": [
    "def prune_some_heads(model, config=None):\n",
    "    \"\"\"\n",
    "    Prune the given attention heads from the given layers.\n",
    "    config: {layer_idx: set(head_idx)}\n",
    "    \"\"\"\n",
    "    if config is None:  # avoid a mutable default argument\n",
    "        config = {0: {0, 1}, 1: {0}}\n",
    "    # BERT layout: model.bert.encoder.layer[i].attention\n",
    "    pruned = []\n",
    "    if not hasattr(model, 'bert'):\n",
    "        print('Model has no `bert` attribute; skipping the head-pruning example.')\n",
    "        return pruned\n",
    "    for li, heads in config.items():\n",
    "        try:\n",
    "            attn = model.bert.encoder.layer[li].attention\n",
    "            # transformers' BertAttention supports prune_heads\n",
    "            attn.prune_heads(heads)\n",
    "            pruned.append((li, sorted(list(heads))))\n",
    "        except Exception as e:\n",
    "            print(f'Head pruning failed for layer {li}: {e}')\n",
    "    return pruned\n",
    "\n",
    "print('Pruning attention heads...')\n",
    "pruned_heads = prune_some_heads(model, {0: {0,1}, 1: {0}})\n",
    "print('已剪枝的heads:', pruned_heads)\n",
    "\n",
    "# Re-evaluate\n",
    "head_metrics = evaluate(val_loader, model)\n",
    "overall_head, _ = count_nonzero_and_sparsity(model)\n",
    "print('After head pruning metrics:', head_metrics)\n",
    "print('After head pruning sparsity:', overall_head)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1e77901d",
   "metadata": {},
   "source": [
    "## 7. Unstructured Magnitude Pruning\n",
    "Rank weights by absolute value and zero out the smallest fraction (here: 10% globally)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e04f7f0c",
   "metadata": {},
   "outputs": [],
   "source": [
    "def magnitude_global_prune_(model: nn.Module, amount: float = 0.10, modules_filter=(nn.Linear,)):\n",
    "    \"\"\"Global magnitude pruning over the given module types: zero out the `amount` fraction with smallest |w|.\"\"\"\n",
    "    # Collect all target weight tensors\n",
    "    params = []\n",
    "    for module in model.modules():\n",
    "        if isinstance(module, modules_filter) and hasattr(module, 'weight'):\n",
    "            params.append(module.weight.data.view(-1))\n",
    "    if not params:\n",
    "        print('No prunable linear-layer weights found.')\n",
    "        return 0\n",
    "    weights = torch.cat(params)\n",
    "    k = int(amount * weights.numel())\n",
    "    if k <= 0:\n",
    "        return 0\n",
    "    threshold = weights.abs().kthvalue(k).values.item()\n",
    "    # Apply the threshold\n",
    "    pruned = 0\n",
    "    for module in model.modules():\n",
    "        if isinstance(module, modules_filter) and hasattr(module, 'weight'):\n",
    "            w = module.weight.data\n",
    "            mask = w.abs() >= threshold\n",
    "            pruned += int((~mask).sum().item())\n",
    "            w.mul_(mask)\n",
    "    return pruned\n",
    "\n",
    "print('Applying global magnitude pruning (10%)...')\n",
    "num_pruned = magnitude_global_prune_(model, amount=0.10, modules_filter=(nn.Linear,))\n",
    "print('Parameters pruned:', num_pruned)\n",
    "\n",
    "mag_metrics = evaluate(val_loader, model)\n",
    "overall_mag, _ = count_nonzero_and_sparsity(model)\n",
    "print('After magnitude pruning metrics:', mag_metrics)\n",
    "print('After magnitude pruning sparsity:', overall_mag)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7299faec",
   "metadata": {},
   "source": [
    "## 8. SNIP Gradient-Saliency Pruning (Single Batch)\n",
    "Score each weight by $S_i = |\\partial L / \\partial w_i \\cdot w_i|$ and prune the lowest-scoring fraction."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4b1e6c6e",
   "metadata": {},
   "outputs": [],
   "source": [
    "def snip_prune_(model: nn.Module, dataloader: DataLoader, amount: float = 0.05, modules_filter=(nn.Linear,)):\n",
    "    model.train()\n",
    "    for p in model.parameters():\n",
    "        if p.grad is not None:\n",
    "            p.grad = None\n",
    "    # Take a single batch\n",
    "    batch = next(iter(dataloader))\n",
    "    input_ids = batch['input_ids'].to(device)\n",
    "    attention_mask = batch['attention_mask'].to(device)\n",
    "    labels = batch['labels'].to(device)\n",
    "\n",
    "    outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)\n",
    "    loss = outputs.loss\n",
    "    loss.backward()\n",
    "\n",
    "    # Collect score = |grad * weight|\n",
    "    scores = []\n",
    "    params_refs = []\n",
    "    for module in model.modules():\n",
    "        if isinstance(module, modules_filter) and hasattr(module, 'weight') and module.weight.requires_grad:\n",
    "            w = module.weight\n",
    "            if w.grad is None:\n",
    "                continue\n",
    "            s = (w.grad * w).abs().detach().flatten()\n",
    "            scores.append(s)\n",
    "            params_refs.append(w)\n",
    "    if not scores:\n",
    "        print('SNIP found no prunable weights, or no gradients were available.')\n",
    "        return 0\n",
    "\n",
    "    all_scores = torch.cat(scores)\n",
    "    k = int(amount * all_scores.numel())\n",
    "    if k <= 0:\n",
    "        return 0\n",
    "    threshold = all_scores.kthvalue(k).values.item()\n",
    "\n",
    "    pruned = 0\n",
    "    for w, s in zip(params_refs, scores):\n",
    "        s2 = s.view_as(w)\n",
    "        mask = s2 >= threshold\n",
    "        pruned += int((~mask).sum().item())\n",
    "        with torch.no_grad():\n",
    "            w.mul_(mask)\n",
    "    model.zero_grad(set_to_none=True)  # drop the saliency gradients\n",
    "    return pruned\n",
    "\n",
    "print('Applying SNIP pruning (5%)...')\n",
    "snip_pruned = snip_prune_(model, train_loader, amount=0.05, modules_filter=(nn.Linear,))\n",
    "print('Parameters pruned by SNIP:', snip_pruned)\n",
    "\n",
    "snip_metrics = evaluate(val_loader, model)\n",
    "overall_snip, _ = count_nonzero_and_sparsity(model)\n",
    "print('After SNIP pruning metrics:', snip_metrics)\n",
    "print('After SNIP pruning sparsity:', overall_snip)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b857858c",
   "metadata": {},
   "source": [
    "## 9. PyTorch `nn.utils.prune` (L1 Unstructured)\n",
    "Apply global L1 unstructured pruning to every Linear layer, then remove the reparametrization to make the pruned weights permanent."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "050800e5",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch.nn.utils.prune as prune\n",
    "\n",
    "def apply_l1_unstructured(model: nn.Module, amount: float = 0.05, modules_filter=(nn.Linear,)):\n",
    "    params_to_prune = []\n",
    "    for module in model.modules():\n",
    "        if isinstance(module, modules_filter) and hasattr(module, 'weight'):\n",
    "            params_to_prune.append((module, 'weight'))\n",
    "    if not params_to_prune:\n",
    "        print('No Linear layers found for L1 pruning.')\n",
    "        return\n",
    "    prune.global_unstructured(\n",
    "        params_to_prune,\n",
    "        pruning_method=prune.L1Unstructured,\n",
    "        amount=amount\n",
    "    )\n",
    "    print(f'Applied L1 unstructured pruning, amount={amount}')\n",
    "\n",
    "print('Applying L1 unstructured pruning (5%)...')\n",
    "apply_l1_unstructured(model, amount=0.05, modules_filter=(nn.Linear,))\n",
    "\n",
    "# Remove the reparametrization to bake the pruning mask into the weights\n",
    "for module in model.modules():\n",
    "    if isinstance(module, nn.Linear) and hasattr(module, 'weight_orig'):\n",
    "        try:\n",
    "            prune.remove(module, 'weight')\n",
    "        except Exception:\n",
    "            pass\n",
    "\n",
    "l1_metrics = evaluate(val_loader, model)\n",
    "overall_l1, _ = count_nonzero_and_sparsity(model)\n",
    "print('After L1 prune metrics:', l1_metrics)\n",
    "print('After L1 prune sparsity:', overall_l1)"
   ]
  },
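  {
   "cell_type": "markdown",
   "id": "c9f5e6a3",
   "metadata": {},
   "source": [
    "The introduction promises post-pruning fine-tuning. A minimal sketch of mask-frozen fine-tuning: record which weights are currently zero, then zero the matching gradient entries after every backward pass so pruned weights stay pruned. The helper `finetune_with_frozen_masks` below is illustrative (it is not a transformers API) and is demonstrated on a tiny standalone model; applying it to the pruned BERT model would need a small adapter for the dict-style batches produced by `train_loader`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d0a6f7b4",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "def finetune_with_frozen_masks(model, batches, steps=8, lr=1e-2):\n",
    "    \"\"\"Train only the surviving (nonzero) weights; pruned entries stay at zero.\"\"\"\n",
    "    masks = {n: (p != 0).float() for n, p in model.named_parameters()}\n",
    "    opt = torch.optim.SGD(model.parameters(), lr=lr)\n",
    "    loss_fn = nn.CrossEntropyLoss()\n",
    "    model.train()\n",
    "    for step in range(steps):\n",
    "        x, y = batches[step % len(batches)]\n",
    "        opt.zero_grad()\n",
    "        loss_fn(model(x), y).backward()\n",
    "        with torch.no_grad():\n",
    "            for n, p in model.named_parameters():\n",
    "                if p.grad is not None:\n",
    "                    p.grad.mul_(masks[n])  # pruned entries get zero gradient\n",
    "        opt.step()\n",
    "    return model\n",
    "\n",
    "# Standalone demo on a tiny magnitude-pruned linear classifier\n",
    "torch.manual_seed(0)\n",
    "tiny = nn.Linear(8, 2)\n",
    "with torch.no_grad():\n",
    "    tiny.weight[tiny.weight.abs() < 0.2] = 0.0  # crude magnitude pruning\n",
    "zeros_before = int((tiny.weight == 0).sum())\n",
    "data = [(torch.randn(4, 8), torch.randint(0, 2, (4,))) for _ in range(4)]\n",
    "finetune_with_frozen_masks(tiny, data, steps=8, lr=1e-2)\n",
    "zeros_after = int((tiny.weight == 0).sum())\n",
    "print('zero weights before/after fine-tuning:', zeros_before, zeros_after)"
   ]
  },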
  {
   "cell_type": "markdown",
   "id": "41c3ad31",
   "metadata": {},
   "source": [
    "## 10. Sparsity and Model-Size Report; Saving the Pruned Model\n",
    "Save the state_dict and report its size on disk. Note that zeros are still stored densely, so unstructured pruning alone does not shrink the file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f2707c24",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Final evaluation and save\n",
    "final_metrics = evaluate(val_loader, model)\n",
    "overall_final, per_final = count_nonzero_and_sparsity(model)\n",
    "print('Final metrics:', final_metrics)\n",
    "print('Final sparsity:', overall_final)\n",
    "\n",
    "save_dir = 'pruned_bert'\n",
    "os.makedirs(save_dir, exist_ok=True)\n",
    "state_path = os.path.join(save_dir, 'pytorch_model.bin')\n",
    "torch.save(model.state_dict(), state_path)\n",
    "file_size_mb = os.path.getsize(state_path)/1024/1024\n",
    "print(f'Saved to {state_path} ({file_size_mb:.2f} MB)')"
   ]
  },
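  {
   "cell_type": "markdown",
   "id": "e1b7a8c5",
   "metadata": {},
   "source": [
    "The introduction also promises combining pruning with dynamic quantization. A minimal standalone sketch using `torch.ao.quantization.quantize_dynamic` (PyTorch >= 1.10), which replaces `nn.Linear` modules with int8 dynamically quantized equivalents; dynamic quantization runs on CPU. The same call applies to the pruned model above, e.g. `torch.ao.quantization.quantize_dynamic(model.cpu(), {nn.Linear}, dtype=torch.qint8)`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f2c8b9d6",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# Standalone demo model (a stand-in for the pruned BERT model above)\n",
    "demo = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 2))\n",
    "qdemo = torch.ao.quantization.quantize_dynamic(demo, {nn.Linear}, dtype=torch.qint8)\n",
    "\n",
    "x = torch.randn(4, 16)\n",
    "print('quantized module type:', type(qdemo[0]).__name__)\n",
    "print('output shape:', tuple(qdemo(x).shape))"
   ]
  },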
  {
   "cell_type": "markdown",
   "id": "e9d90536",
   "metadata": {},
   "source": [
    "## 11. Next Steps and Optional Extensions\n",
    "- Structured pruning: remove whole columns/channels for real inference speedups (requires kernel/library support)\n",
    "- Movement Pruning / Diff Pruning: adapter-style sparse or low-rank parameter updates learned during fine-tuning\n",
    "- Post-pruning fine-tuning: freeze the masks, train only the surviving weights, and watch accuracy recover\n",
    "- Dynamic/static quantization: combine with pruning for further compression and speedup\n",
    "- Sparse inference runtimes: explore sparse-acceleration support in torch.sparse, ONNX Runtime, and TensorRT"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
