{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "bfee91c9-9a31-4f6e-8661-2a2b405ce262",
   "metadata": {},
   "source": [
    "\n",
    "## I. Network Structure and Notation\n",
    "\n",
    "### 1. Layers and Nodes\n",
    "- **Input layer**: 2 nodes with input values $x_1, x_2$ (no activation function)\n",
    "- **Hidden layer**: 2 nodes with pre-activation inputs $a_1^h, a_2^h$ and outputs $h_1, h_2$ (activation $\\sigma$)\n",
    "- **Output layer**: 2 nodes with pre-activation inputs $a_1^o, a_2^o$ and outputs $y_1, y_2$ (activation $\\sigma$)\n",
    "\n",
    "### 2. Parameters\n",
    "- **Input layer → hidden layer**:\n",
    "  - Weights: $w_{11}^h$ ($x_1 \\to h_1$), $w_{12}^h$ ($x_2 \\to h_1$), $w_{21}^h$ ($x_1 \\to h_2$), $w_{22}^h$ ($x_2 \\to h_2$)\n",
    "  - Biases: $b_1^h, b_2^h$\n",
    "\n",
    "- **Hidden layer → output layer**:\n",
    "  - Weights: $w_{11}^o$ ($h_1 \\to y_1$), $w_{12}^o$ ($h_2 \\to y_1$), $w_{21}^o$ ($h_1 \\to y_2$), $w_{22}^o$ ($h_2 \\to y_2$)\n",
    "  - Biases: $b_1^o, b_2^o$\n",
    "\n",
    "### 3. Core Functions\n",
    "- **Activation function**: $\\sigma(t) = \\frac{1}{1+e^{-t}}$, with derivative $\\sigma'(t) = \\sigma(t)(1-\\sigma(t))$\n",
    "- **Loss function** (single-sample MSE): $L = \\frac{1}{2}[(y_1 - \\hat{y}_1)^2 + (y_2 - \\hat{y}_2)^2]$, where $\\hat{y}_1, \\hat{y}_2$ are the ground-truth labels\n",
    "\n",
    "## II. Forward Propagation\n",
    "\n",
    "First, lay out the forward computation; it is the foundation for the backward derivation:\n",
    "\n",
    "- **Hidden layer inputs**:\n",
    "  - $a_1^h = w_{11}^h x_1 + w_{12}^h x_2 + b_1^h$\n",
    "  - $a_2^h = w_{21}^h x_1 + w_{22}^h x_2 + b_2^h$\n",
    "\n",
    "- **Hidden layer outputs**:\n",
    "  - $h_1 = \\sigma(a_1^h)$\n",
    "  - $h_2 = \\sigma(a_2^h)$\n",
    "\n",
    "- **Output layer inputs**:\n",
    "  - $a_1^o = w_{11}^o h_1 + w_{12}^o h_2 + b_1^o$\n",
    "  - $a_2^o = w_{21}^o h_1 + w_{22}^o h_2 + b_2^o$\n",
    "\n",
    "- **Output layer outputs**:\n",
    "  - $y_1 = \\sigma(a_1^o)$\n",
    "  - $y_2 = \\sigma(a_2^o)$\n",
    "\n",
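    "The forward pass above can be sketched in a few lines of Python; the input, weight, and bias values below are illustrative assumptions, not values given in the text:\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def sigmoid(t):\n",
    "    return 1.0 / (1.0 + math.exp(-t))\n",
    "\n",
    "# Illustrative values (assumed for this example)\n",
    "x1, x2 = 0.5, 0.1\n",
    "w11_h, w12_h, w21_h, w22_h = 0.15, 0.20, 0.25, 0.30\n",
    "b1_h, b2_h = 0.35, 0.35\n",
    "w11_o, w12_o, w21_o, w22_o = 0.40, 0.45, 0.50, 0.55\n",
    "b1_o, b2_o = 0.60, 0.60\n",
    "\n",
    "# Hidden layer: pre-activations a_j^h, then outputs h_j\n",
    "a1_h = w11_h * x1 + w12_h * x2 + b1_h\n",
    "a2_h = w21_h * x1 + w22_h * x2 + b2_h\n",
    "h1, h2 = sigmoid(a1_h), sigmoid(a2_h)\n",
    "\n",
    "# Output layer: pre-activations a_i^o, then outputs y_i\n",
    "a1_o = w11_o * h1 + w12_o * h2 + b1_o\n",
    "a2_o = w21_o * h1 + w22_o * h2 + b2_o\n",
    "y1, y2 = sigmoid(a1_o), sigmoid(a2_o)\n",
    "```\n",
    "\n",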
    "---\n",
    "\n",
    "## III. Backpropagation Derivation (Single Sample)\n",
    "\n",
    "The key to backpropagation is computing \"error terms\" (partial derivatives of the loss with respect to each layer's pre-activation inputs), then using those error terms to obtain the parameter gradients.\n",
    "\n",
    "### 1. Output Layer Error Terms $\\delta^o$\n",
    "\n",
    "**Definition**: $\\delta_i^o = \\frac{\\partial L}{\\partial a_i^o}$ ($i=1,2$, one per output node)\n",
    "\n",
    "- **Chain-rule decomposition**: $\\delta_i^o = \\frac{\\partial L}{\\partial y_i} \\cdot \\frac{\\partial y_i}{\\partial a_i^o}$\n",
    "\n",
    "- **Step 1**: compute $\\frac{\\partial L}{\\partial y_i}$  \n",
    "  From the loss function:\n",
    "  - $\\frac{\\partial L}{\\partial y_1} = y_1 - \\hat{y}_1$\n",
    "  - $\\frac{\\partial L}{\\partial y_2} = y_2 - \\hat{y}_2$\n",
    "\n",
    "- **Step 2**: compute $\\frac{\\partial y_i}{\\partial a_i^o}$  \n",
    "  From the derivative of the activation function:\n",
    "  - $\\frac{\\partial y_i}{\\partial a_i^o} = \\sigma'(a_i^o) = y_i(1 - y_i)$\n",
    "\n",
    "- **Final output-layer error terms**:\n",
    "  - $\\delta_1^o = (y_1 - \\hat{y}_1) \\cdot y_1(1 - y_1)$\n",
    "  - $\\delta_2^o = (y_2 - \\hat{y}_2) \\cdot y_2(1 - y_2)$\n",
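    "\n",
    "Plugging assumed numbers into the two formulas above gives a quick self-contained check (the outputs $y_i$ and labels $\\hat{y}_i$ below are made-up values):\n",
    "\n",
    "```python\n",
    "# Assumed network outputs and ground-truth labels (illustrative only)\n",
    "y1, y2 = 0.75, 0.77\n",
    "y1_hat, y2_hat = 0.01, 0.99\n",
    "\n",
    "# delta_i^o = (y_i - yhat_i) * y_i * (1 - y_i)\n",
    "delta1_o = (y1 - y1_hat) * y1 * (1 - y1)\n",
    "delta2_o = (y2 - y2_hat) * y2 * (1 - y2)\n",
    "```\n",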
    "\n",
    "### 2. Hidden Layer Error Terms $\\delta^h$\n",
    "\n",
    "**Definition**: $\\delta_j^h = \\frac{\\partial L}{\\partial a_j^h}$ ($j=1,2$, one per hidden node)\n",
    "\n",
    "- **Chain-rule decomposition** ($h_j$ feeds both output nodes, so both paths contribute):\n",
    "  $\\delta_j^h = \\left( \\frac{\\partial L}{\\partial a_1^o} \\cdot \\frac{\\partial a_1^o}{\\partial h_j} + \\frac{\\partial L}{\\partial a_2^o} \\cdot \\frac{\\partial a_2^o}{\\partial h_j} \\right) \\cdot \\frac{\\partial h_j}{\\partial a_j^h}$\n",
    "\n",
    "- **Step 1**: substitute the known quantities\n",
    "  - $\\frac{\\partial L}{\\partial a_1^o} = \\delta_1^o$\n",
    "  - $\\frac{\\partial L}{\\partial a_2^o} = \\delta_2^o$\n",
    "  - $\\frac{\\partial h_j}{\\partial a_j^h} = \\sigma'(a_j^h) = h_j(1 - h_j)$\n",
    "\n",
    "- **Step 2**: compute $\\frac{\\partial a_i^o}{\\partial h_j}$ (partials of the output-layer inputs with respect to the hidden-layer outputs)\n",
    "  - $\\frac{\\partial a_1^o}{\\partial h_1} = w_{11}^o$\n",
    "  - $\\frac{\\partial a_1^o}{\\partial h_2} = w_{12}^o$\n",
    "  - $\\frac{\\partial a_2^o}{\\partial h_1} = w_{21}^o$\n",
    "  - $\\frac{\\partial a_2^o}{\\partial h_2} = w_{22}^o$\n",
    "\n",
    "- **Final hidden-layer error terms**:\n",
    "  - $\\delta_1^h = \\left( \\delta_1^o \\cdot w_{11}^o + \\delta_2^o \\cdot w_{21}^o \\right) \\cdot h_1(1 - h_1)$\n",
    "  - $\\delta_2^h = \\left( \\delta_1^o \\cdot w_{12}^o + \\delta_2^o \\cdot w_{22}^o \\right) \\cdot h_2(1 - h_2)$\n",
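    "\n",
    "The same kind of numeric check for the hidden-layer formulas, with assumed upstream error terms, weights, and hidden outputs:\n",
    "\n",
    "```python\n",
    "# Assumed values (illustrative only)\n",
    "delta1_o, delta2_o = 0.139, -0.039\n",
    "w11_o, w12_o, w21_o, w22_o = 0.40, 0.45, 0.50, 0.55\n",
    "h1, h2 = 0.59, 0.60\n",
    "\n",
    "# delta_j^h sums the contributions flowing back from both output nodes\n",
    "delta1_h = (delta1_o * w11_o + delta2_o * w21_o) * h1 * (1 - h1)\n",
    "delta2_h = (delta1_o * w12_o + delta2_o * w22_o) * h2 * (1 - h2)\n",
    "```\n",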
    "\n",
    "### 3. Parameter Gradients (Weights and Biases)\n",
    "\n",
    "A gradient is the partial derivative of the loss with respect to a parameter, used in the subsequent update step $\\theta \\leftarrow \\theta - \\eta \\cdot \\frac{\\partial L}{\\partial \\theta}$, where $\\eta$ is the learning rate.\n",
    "\n",
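    "As a minimal illustration of this update step (the parameter value, gradient, and learning rate below are assumed, not derived in the text):\n",
    "\n",
    "```python\n",
    "eta = 0.5             # assumed learning rate\n",
    "w = 0.40              # current parameter value\n",
    "grad = 0.082          # assumed gradient dL/dw\n",
    "w = w - eta * grad    # gradient-descent step: theta <- theta - eta * dL/dtheta\n",
    "```\n",
    "\n",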
    "#### (1) Hidden Layer → Output Layer Gradients\n",
    "\n",
    "- **Weight gradients**: $\\frac{\\partial L}{\\partial w_{ij}^o} = \\delta_i^o \\cdot h_j$ ($i$ indexes the output-layer node, $j$ the hidden-layer node)\n",
    "  - $\\frac{\\partial L}{\\partial w_{11}^o} = \\delta_1^o \\cdot h_1$\n",
    "  - $\\frac{\\partial L}{\\partial w_{12}^o} = \\delta_1^o \\cdot h_2$\n",
    "  - $\\frac{\\partial L}{\\partial w_{21}^o} = \\delta_2^o \\cdot h_1$\n",
    "  - $\\frac{\\partial L}{\\partial w_{22}^o} = \\delta_2^o \\cdot h_2$\n",
    "\n",
    "- **Bias gradients**: $\\frac{\\partial L}{\\partial b_i^o} = \\delta_i^o$ (each bias multiplies a constant input of 1, so its local derivative is 1)\n",
    "  - $\\frac{\\partial L}{\\partial b_1^o} = \\delta_1^o$\n",
    "  - $\\frac{\\partial L}{\\partial b_2^o} = \\delta_2^o$\n",
    "\n",
    "#### (2) Input Layer → Hidden Layer Gradients\n",
    "\n",
    "- **Weight gradients**: $\\frac{\\partial L}{\\partial w_{ji}^h} = \\delta_j^h \\cdot x_i$ ($j$ indexes the hidden-layer node, $i$ the input-layer node)\n",
    "  - $\\frac{\\partial L}{\\partial w_{11}^h} = \\delta_1^h \\cdot x_1$\n",
    "  - $\\frac{\\partial L}{\\partial w_{12}^h} = \\delta_1^h \\cdot x_2$\n",
    "  - $\\frac{\\partial L}{\\partial w_{21}^h} = \\delta_2^h \\cdot x_1$\n",
    "  - $\\frac{\\partial L}{\\partial w_{22}^h} = \\delta_2^h \\cdot x_2$\n",
    "\n",
    "- **Bias gradients**: $\\frac{\\partial L}{\\partial b_j^h} = \\delta_j^h$\n",
    "  - $\\frac{\\partial L}{\\partial b_1^h} = \\delta_1^h$\n",
    "  - $\\frac{\\partial L}{\\partial b_2^h} = \\delta_2^h$\n",
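    "\n",
    "Putting the whole derivation together: the sketch below (with assumed sample values and initial parameters) runs one forward and backward pass, then verifies two of the analytic gradients against central finite differences:\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def sigmoid(t):\n",
    "    return 1.0 / (1.0 + math.exp(-t))\n",
    "\n",
    "def forward(p, x1, x2):\n",
    "    # Forward pass exactly as in Section II\n",
    "    h1 = sigmoid(p['w11_h'] * x1 + p['w12_h'] * x2 + p['b1_h'])\n",
    "    h2 = sigmoid(p['w21_h'] * x1 + p['w22_h'] * x2 + p['b2_h'])\n",
    "    y1 = sigmoid(p['w11_o'] * h1 + p['w12_o'] * h2 + p['b1_o'])\n",
    "    y2 = sigmoid(p['w21_o'] * h1 + p['w22_o'] * h2 + p['b2_o'])\n",
    "    return h1, h2, y1, y2\n",
    "\n",
    "def loss(p, x1, x2, t1, t2):\n",
    "    # Single-sample MSE with targets t1, t2\n",
    "    _, _, y1, y2 = forward(p, x1, x2)\n",
    "    return 0.5 * ((y1 - t1) ** 2 + (y2 - t2) ** 2)\n",
    "\n",
    "# Assumed sample and initial parameters (illustrative only)\n",
    "x1, x2, t1, t2 = 0.5, 0.1, 0.01, 0.99\n",
    "p = {'w11_h': 0.15, 'w12_h': 0.20, 'w21_h': 0.25, 'w22_h': 0.30,\n",
    "     'b1_h': 0.35, 'b2_h': 0.35,\n",
    "     'w11_o': 0.40, 'w12_o': 0.45, 'w21_o': 0.50, 'w22_o': 0.55,\n",
    "     'b1_o': 0.60, 'b2_o': 0.60}\n",
    "\n",
    "# Backward pass, following the error terms derived above\n",
    "h1, h2, y1, y2 = forward(p, x1, x2)\n",
    "d1_o = (y1 - t1) * y1 * (1 - y1)\n",
    "d2_o = (y2 - t2) * y2 * (1 - y2)\n",
    "d1_h = (d1_o * p['w11_o'] + d2_o * p['w21_o']) * h1 * (1 - h1)\n",
    "grad_w11_o = d1_o * h1      # dL/dw11_o = delta1_o * h1\n",
    "grad_w11_h = d1_h * x1      # dL/dw11_h = delta1_h * x1\n",
    "\n",
    "# Central finite-difference check of the analytic gradients\n",
    "eps = 1e-6\n",
    "for name, g in [('w11_o', grad_w11_o), ('w11_h', grad_w11_h)]:\n",
    "    p_hi = dict(p); p_hi[name] = p[name] + eps\n",
    "    p_lo = dict(p); p_lo[name] = p[name] - eps\n",
    "    numeric = (loss(p_hi, x1, x2, t1, t2) - loss(p_lo, x1, x2, t1, t2)) / (2 * eps)\n",
    "    assert abs(numeric - g) < 1e-8  # analytic and numeric gradients agree\n",
    "```\n",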
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "77e838d7-f205-45ed-89e7-f66b282b2e04",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.13.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
