{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "3ebb4e3f",
   "metadata": {},
   "source": [
    "---\n",
    "- Why: once the forward pass through the network is finished, the intermediate feature maps and gradients are released\n",
    "- Strategy (macro): have forward return not only the model's predictions but also the feature maps and other information we need\n",
    "- Mechanism (micro): hook programming"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9f59316d",
   "metadata": {},
   "source": [
    "## hook\n",
    "1. What hooks do\n",
    "- extract or modify the gradient of a Tensor;\n",
    "- capture the output and gradients of an nn.Module (without modifying them)\n",
    "2. How: three functions\n",
    "- Tensor.register_hook(hook_fn),\n",
    "- nn.Module.register_forward_hook(hook_fn),\n",
    "- nn.Module.register_backward_hook(hook_fn)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f2116af4",
   "metadata": {},
   "source": [
    "### Tensor.register_hook(hook_fn)\n",
    "- Why: PyTorch only keeps the gradients of leaf nodes; all other gradients are released after the backward pass\n",
    "- Example 1: without a hook, the gradients of intermediate nodes cannot be obtained"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "fb3fd776",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "fbeaab98",
   "metadata": {},
   "outputs": [],
   "source": [
    "# -------- leaf nodes ---------\n",
    "a = torch.tensor([1., 2.], requires_grad=True)\n",
    "b = torch.tensor([3., 4.], requires_grad=True)\n",
    "d = torch.tensor([2.], requires_grad=True)\n",
    "\n",
    "# -------- intermediate nodes --------\n",
    "c = a + b\n",
    "e = c * d\n",
    "o = e.sum()\n",
    "\n",
    "# -------- backward pass --------\n",
    "o.backward(retain_graph=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "c0ef8bc9",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "a.grad:tensor([2., 2.])\n",
      "b.grad:tensor([2., 2.])\n",
      "c.grad:None\n",
      "d.grad:tensor([10.])\n",
      "e.grad:None\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\Anaconda3\\lib\\site-packages\\torch\\_tensor.py:1013: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  aten\\src\\ATen/core/TensorBody.h:417.)\n",
      "  return self._grad\n"
     ]
    }
   ],
   "source": [
    "print(f'a.grad:{a.grad}')\n",
    "print(f'b.grad:{b.grad}')\n",
    "print(f'c.grad:{c.grad}')\n",
    "print(f'd.grad:{d.grad}')\n",
    "print(f'e.grad:{e.grad}')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "097dfa71",
   "metadata": {},
   "source": [
    "- Example 2: with a hook"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "7336aa30",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<torch.utils.hooks.RemovableHandle at 0x221ebf25130>"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def hook_fn(grad):\n",
    "    print(grad)\n",
    "\n",
    "e.register_hook(hook_fn)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "93d4dc3a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([1., 1.])\n"
     ]
    }
   ],
   "source": [
    "o.backward(retain_graph=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d62f4f80",
   "metadata": {},
   "source": [
    "`The function passed to register_hook can have any name; its job is to process the gradient. If it returns a value, that value replaces the gradient downstream.`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "333253ca",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([1., 1.])\n",
      "a.grad:tensor([8., 8.])\n",
      "b.grad:tensor([8., 8.])\n",
      "c.grad:None\n",
      "d.grad:tensor([30.])\n",
      "e.grad:None\n"
     ]
    }
   ],
   "source": [
    "grad_list = []              # list used to save gradients\n",
    "def double_grad_hook(grad):\n",
    "    grad_list.append(grad)\n",
    "    return 2*grad           # the returned value replaces the gradient downstream\n",
    "\n",
    "c.register_hook(double_grad_hook)\n",
    "o.backward()                    # still prints e's gradient via the hook registered earlier\n",
    "print(f'a.grad:{a.grad}')       # leaf gradients accumulate across backward calls\n",
    "print(f'b.grad:{b.grad}')\n",
    "print(f'c.grad:{c.grad}')\n",
    "print(f'd.grad:{d.grad}')\n",
    "print(f'e.grad:{e.grad}')      # but e.grad itself is still not populated"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3b032132",
   "metadata": {},
   "source": [
    "- Summary\n",
    ">To obtain the gradient of a non-leaf Tensor, before the backward pass we must:\n",
    "1) define a hook function (any name) with a single argument grad, the Tensor's gradient, describing what to do with it;\n",
    "2) register it on the target Tensor via Tensor.register_hook(hook);\n",
    "3) run the backward pass."
   ]
  },
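  {
   "cell_type": "markdown",
   "id": "a1f9e2c3",
   "metadata": {},
   "source": [
    "The three steps above can be sketched as a self-contained example (a minimal sketch on a fresh graph; the names `save_grad` and `saved` are illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2e8d3f4",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "x = torch.tensor([1., 2.], requires_grad=True)\n",
    "y = x * 3                            # non-leaf tensor whose gradient we want\n",
    "z = y.sum()\n",
    "\n",
    "saved = {}\n",
    "def save_grad(grad):                 # 1) hook: its single argument is the gradient\n",
    "    saved['y'] = grad\n",
    "\n",
    "handle = y.register_hook(save_grad)  # 2) register on the target Tensor\n",
    "z.backward()                         # 3) the backward pass triggers the hook\n",
    "print(saved['y'])                    # tensor([1., 1.])\n",
    "handle.remove()                      # the returned handle can detach the hook"
   ]
  },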
  {
   "cell_type": "markdown",
   "id": "362f5e50",
   "metadata": {},
   "source": [
    "### nn.Module.register_forward_hook(hook_fn) and nn.Module.register_backward_hook(hook_fn)\n",
    "These operate on nn.Module instances, e.g. nn.Conv2d, nn.Linear, nn.MaxPool2d, nn.AvgPool2d, nn.ReLU, or an nn.Sequential.\n",
    "- A model's intermediate modules can also be viewed as intermediate (non-leaf) nodes: their outputs are feature maps or activations, and their gradients are released automatically during backpropagation, so hooks are needed to capture them.\n",
    "- register_forward_hook captures the forward-pass output, i.e. the feature maps or activations;\n",
    "- register_backward_hook captures the backward-pass output, i.e. the gradients."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "cc56758e",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<torch.utils.hooks.RemovableHandle at 0x1d1ec50e250>"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# -----1. Build the network\n",
    "class Net(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(Net, self).__init__()\n",
    "        self.conv1 = nn.Conv2d(3,6,3,1,1)\n",
    "        self.relu1 = nn.ReLU()\n",
    "        self.pool1 = nn.MaxPool2d(2,2)\n",
    "        self.conv2 = nn.Conv2d(6,9,3,1,1)\n",
    "        self.relu2 = nn.ReLU()\n",
    "        self.pool2 = nn.MaxPool2d(2,2)\n",
    "        self.fc1 = nn.Linear(8*8*9, 120)\n",
    "        self.relu3 = nn.ReLU()\n",
    "        self.fc2 = nn.Linear(120,10)\n",
    "\n",
    "    def forward(self, x):\n",
    "        out = self.pool1(self.relu1(self.conv1(x)))\n",
    "        out = self.pool2(self.relu2(self.conv2(out)))\n",
    "        out = out.view(out.shape[0], -1)\n",
    "        out = self.relu3(self.fc1(out))\n",
    "        out = self.fc2(out)\n",
    "\n",
    "        return out\n",
    "\n",
    "loss_func = nn.CrossEntropyLoss()\n",
    "\n",
    "# -----2. Define the forward and backward hook functions\n",
    "def backward_hook(module, grad_in, grad_out):\n",
    "    grad_block['grad_in'] = grad_in\n",
    "    grad_block['grad_out'] = grad_out\n",
    "\n",
    "\n",
    "def forward_hook(module, inp, outp):\n",
    "    fmap_block['input'] = inp\n",
    "    fmap_block['output'] = outp\n",
    "\n",
    "# -----3. Generate input data and a label\n",
    "label = torch.empty(1, dtype=torch.long).random_(3)\n",
    "input_img = torch.randn(1, 3, 32, 32).requires_grad_()\n",
    "\n",
    "fmap_block = dict()        # saves the feature maps\n",
    "grad_block = dict()        # saves the gradients\n",
    "\n",
    "net = Net()\n",
    "\n",
    "net.conv2.register_forward_hook(forward_hook)\n",
    "net.conv2.register_backward_hook(backward_hook)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "ae94c4cb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([1, 6, 16, 16])\n"
     ]
    }
   ],
   "source": [
    "outs = net(input_img)\n",
    "loss = loss_func(outs, label)\n",
    "print(fmap_block['input'][0].shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "ced1cad4",
   "metadata": {},
   "outputs": [],
   "source": [
    "loss.backward()\n",
    "\n",
    "# First correspondence: gradient w.r.t. the module input vs. the input feature map\n",
    "print(grad_block['grad_in'][0].shape)\n",
    "print(fmap_block['input'][0].shape)\n",
    "# Second correspondence: gradient w.r.t. the weights vs. the weight tensor\n",
    "print(grad_block['grad_in'][1].shape)\n",
    "print(net.conv2.weight.shape)\n",
    "# Third correspondence: gradient w.r.t. the bias vs. the bias tensor\n",
    "print(grad_block['grad_in'][2].shape)\n",
    "print(net.conv2.bias.shape)"
   ]
  },
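  {
   "cell_type": "markdown",
   "id": "c3d7a4b5",
   "metadata": {},
   "source": [
    "Note: `register_backward_hook` is deprecated in recent PyTorch releases, partly because the order of `grad_in` is module-dependent (for `nn.Conv2d` it mixes the input, weight, and bias gradients, as the correspondences above show). The replacement, `register_full_backward_hook`, passes gradients w.r.t. the module's inputs and outputs only. A minimal sketch (the names `full_bw_hook` and `store` are illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4e8b5c6",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "conv = nn.Conv2d(3, 6, 3, 1, 1)\n",
    "store = {}\n",
    "\n",
    "def full_bw_hook(module, grad_input, grad_output):\n",
    "    # grad_input: gradients w.r.t. the module inputs only (not its parameters)\n",
    "    store['grad_in'] = grad_input\n",
    "    store['grad_out'] = grad_output\n",
    "\n",
    "handle = conv.register_full_backward_hook(full_bw_hook)\n",
    "x = torch.randn(1, 3, 32, 32, requires_grad=True)\n",
    "conv(x).sum().backward()\n",
    "print(store['grad_in'][0].shape)   # torch.Size([1, 3, 32, 32])\n",
    "print(store['grad_out'][0].shape)  # torch.Size([1, 6, 32, 32])\n",
    "handle.remove()"
   ]
  },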
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "47d25f24",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
