{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 1. Residual Blocks"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image-20240811194409933](https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20240811194409933.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 2. Advantages of Residual Blocks"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "- Mitigating the degradation problem in deep networks\n",
     "  - Degradation: in theory, adding layers to a deep neural network should let the model fit more complex functions, but in practice deeper networks can show *higher* training error. As information passes through many layers it can be lost, making the model hard to train and hurting performance.\n",
     "  - Why residual blocks help: even if the branch output $F(x)$ loses some information, adding the original input back in means the block's output still retains the original information.\n",
     "- Alleviating the vanishing-gradient problem in deep networks\n",
     "  - Vanishing gradients: as gradients propagate backward through the layers, small gradients are multiplied together and shrink further; in very deep networks the gradient approaches 0 and the network updates very slowly.\n",
     "  - Why residual blocks help: the forward pass of a residual block is $y = F(x) + x$, so backpropagation gives $\\frac{\\partial L}{\\partial x} = \\frac{\\partial L}{\\partial y}\\cdot(1 + \\frac{\\partial F(x)}{\\partial x})$. Even if the gradient through the weighted branch $\\frac{\\partial F(x)}{\\partial x}$ is very small, the constant 1 guarantees that the total gradient preserves at least the gradient flowing down from the layer above."
   ]
  },
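  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The gradient identity above can be checked numerically. The sketch below (an illustration added to these notes, not a full network) uses a scalar residual function $F(x) = wx$ with a tiny hypothetical weight $w$, so that $\\frac{\\partial F}{\\partial x} = w$ is nearly zero, and verifies that the total gradient through $y = F(x) + x$ still stays close to 1."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Scalar residual block: y = F(x) + x with F(x) = w * x.\n",
    "w = 1e-6  # tiny weight, so dF/dx = w is nearly zero (a 'vanishing' branch)\n",
    "\n",
    "def residual_forward(x):\n",
    "    return w * x + x  # y = F(x) + x\n",
    "\n",
    "# Central finite-difference estimate of dy/dx.\n",
    "x, eps = 2.0, 1e-4\n",
    "grad = (residual_forward(x + eps) - residual_forward(x - eps)) / (2 * eps)\n",
    "print(grad)  # ~= 1 + w: the skip connection keeps the gradient near 1"
   ]
  },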
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 3. ResNet Architecture"
   ]
  },
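  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A single residual block's forward pass can be sketched in NumPy (an illustrative sketch: the function and weight names are made up, and the 1x1-convolution shortcut that real ResNets use when input and output shapes differ is omitted):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def relu(z):\n",
    "    return np.maximum(z, 0.0)\n",
    "\n",
    "def residual_block(x, W1, W2):\n",
    "    \"\"\"y = relu(F(x) + x), where F is two linear layers with a ReLU between.\"\"\"\n",
    "    out = relu(W1 @ x)    # first weighted layer\n",
    "    out = W2 @ out        # second weighted layer (activation comes after the add)\n",
    "    return relu(out + x)  # add the skip connection, then activate\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "d = 4\n",
    "x = rng.standard_normal(d)\n",
    "W1 = rng.standard_normal((d, d)) * 0.1\n",
    "W2 = rng.standard_normal((d, d)) * 0.1\n",
    "print(residual_block(x, W1, W2))\n",
    "\n",
    "# With zero weights, F(x) = 0 and the block reduces to relu(x),\n",
    "# i.e. the input is preserved through the skip connection:\n",
    "print(np.allclose(residual_block(x, np.zeros((d, d)), np.zeros((d, d))), relu(x)))  # True"
   ]
  },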
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<img src=\"https://zyc-learning-1309954661.cos.ap-nanjing.myqcloud.com/machine-learning-pic/image-20240812131716903.png\" alt=\"image-20240812131716903\" style=\"zoom: 50%;\" />"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
