{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "3ded2cf6",
   "metadata": {},
   "source": [
     "## Softmax & the Cross-Entropy Cost Function\n",
     "\n",
     "Softmax is often used as the output layer of neural networks for classification tasks. A key step in backpropagation is differentiation, and working through the derivation here gives a deeper understanding of backpropagation and invites further thought about how gradients propagate."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "37ed277a",
   "metadata": {},
   "source": [
     "### 1. The softmax function\n",
     "\n",
     "The softmax (**soft** maximum) function typically serves as the output layer of a neural network for classification tasks. Its outputs can be read as the probabilities of choosing each class: for a task with three classes, softmax converts the relative magnitudes of the three scores into three class probabilities that sum to 1.\n",
     "\n",
     "To go further, let us use an example to contrast \"softmax\" with \"hardmax\"."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "575493bf",
   "metadata": {},
   "source": [
     "For hardmax, asking for the maximum returns exactly one value, the largest, in an all-or-nothing fashion. In practice this is often unsuitable; what we usually want is **a probability (also called a confidence) for each option. Softmax no longer singles out one unique maximum: it assigns every output class a probability expressing how likely the input is to belong to that class.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "8e770ef6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "5\n"
     ]
    }
   ],
   "source": [
     "# hardmax: simply take the single largest value\n",
     "import numpy as np\n",
     "\n",
     "a = np.array([1, 2, 3, 4, 5])\n",
     "a_max = np.max(a)\n",
     "print(a_max)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4637c82d",
   "metadata": {},
   "source": [
     "Softmax is defined as follows\n",
     "\n",
     "(it can be read as the share of class $i$ among all classes, after mapping each score through the exponential):\n",
     "\n",
     "$$\n",
     "S_i = \\frac{e^{z_i}}{\\sum_k e^{z_k}}\n",
     "$$\n",
     "\n",
     "* $S_i$ is the class-probability output of softmax\n",
     "* $z_k$ is the output of the $k$-th neuron"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "519791de",
   "metadata": {},
   "source": [
     "Consider the following example:\n",
     "\n",
     "![softmax_demo](images/softmax_demo.png)\n",
     "\n",
     "This shows why we map through the exponential: a raw output can be negative, so summing the raw scores directly would give meaningless results, whereas $e^z$ is always positive, and the closer $z$ is to $-\\infty$ the smaller the corresponding confidence. Softmax thus maps raw outputs such as $[3, 1, -3]$ into values in $(0, 1)$.\n"
   ]
  },
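  {
   "cell_type": "markdown",
   "id": "a3f1c2d9",
   "metadata": {},
   "source": [
    "The mapping just described can be sketched in NumPy. This is a minimal illustration; the max-subtraction before exponentiating is a standard numerical-stability trick, not part of the definition, and it does not change the result because the common factor cancels in the ratio:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def softmax(z):\n",
    "    # Shift by the max before exponentiating to avoid overflow;\n",
    "    # the shift cancels in the ratio, so the output is unchanged.\n",
    "    e = np.exp(z - np.max(z))\n",
    "    return e / e.sum()\n",
    "\n",
    "s = softmax(np.array([3.0, 1.0, -3.0]))\n",
    "print(s)        # each entry lies in (0, 1)\n",
    "print(s.sum())  # the entries sum to 1\n",
    "```"
   ]
  },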
  {
   "cell_type": "markdown",
   "id": "014e5972",
   "metadata": {},
   "source": [
     "**For the output of a neural network, the picture becomes the following:**\n",
     "\n",
     "![softmax_neuron](images/softmax_neuron.png)\n",
     "\n",
     "The output of each neuron can be written as\n",
     "\n",
     "$$\n",
     "z_i = \\sum_{j} w_{ij} x_{j} + b_i\n",
     "$$\n",
     "\n",
     "After introducing the softmax function, the output becomes\n",
     "\n",
     "$$\n",
     "a_i = \\frac{e^{z_i}}{\\sum_k e^{z_k}}\n",
     "$$\n",
     "\n",
     "where $a_i$ is the $i$-th output of softmax."
   ]
  },
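  {
   "cell_type": "markdown",
   "id": "b7e4d1f2",
   "metadata": {},
   "source": [
    "Chaining the two formulas above gives a linear layer followed by softmax. A small sketch; the shapes, the seed, and the values of `W`, `b`, and `x` are illustrative choices, not taken from the text:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "x = np.array([0.5, -1.0, 2.0])   # inputs x_j\n",
    "W = rng.standard_normal((4, 3))  # weights w_ij, 4 classes from 3 inputs\n",
    "b = rng.standard_normal(4)       # one bias per output neuron\n",
    "\n",
    "z = W @ x + b                    # z_i = sum_j w_ij x_j + b_i\n",
    "e = np.exp(z - z.max())\n",
    "a = e / e.sum()                  # a_i = e^(z_i) / sum_k e^(z_k)\n",
    "print(a.sum())                   # 1.0 up to floating-point error\n",
    "```"
   ]
  },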
  {
   "cell_type": "markdown",
   "id": "d4ff92f1",
   "metadata": {},
   "source": [
     "### 2. The cross-entropy loss function\n",
     "\n",
     "When we design a neural network, we would like its learning to resemble human learning: **when a person finds they have made a larger mistake, they correct it more forcefully (think of practicing free throws).** In practice, however, **if we train a network with the quadratic cost, the observed behavior is the opposite: the larger the error, the smaller the parameter updates may be, and the slower the training.**\n",
     "\n",
     ">Take binary classification with a single neuron as an example and run two experiments (sigmoid is the activation commonly used in such networks, and it is used here too): feed in the same sample x = 1.0, whose true label is y = 0; initialize the parameters randomly in each experiment, so the first forward pass produces different outputs and hence different errors:\n",
     "![cross_entropy_loss_1](images/cross_entropy_loss_1.png)\n",
     "Experiment 1: the initial output is 0.82\n",
     "![cross_entropy_loss_2](images/cross_entropy_loss_2.png)\n",
     "Experiment 2: the initial output is 0.98\n",
     "\n",
     ">In experiment 1 the random initialization gives an initial output of 0.82 (the true value for this sample is 0); after 300 training iterations the output drops from 0.82 to 0.09, approaching the true value. In experiment 2 the initial output is 0.98, and after the same 300 iterations it only drops to 0.20.\n",
     "\n",
     ">![cross_entropy_loss_sigmod.png](images/cross_entropy_loss_sigmod.png)\n",
     "As the sigmoid curve shows, the gradient at the initial output of experiment 2 (0.98) is clearly smaller than at the initial output of experiment 1 (0.82), so the parameters in experiment 2 descend more slowly. This is why a larger initial cost (error) leads to slower training, contrary to what we want: unlike a person, the network does not correct bigger mistakes with bigger steps and so does not learn faster.\n",
     "\n",
     "\n",
     "Put simply: with the sigmoid activation and true labels of 0 or 1, **when the initial error is large, the sigmoid output sits close to the wrong side** (if the true value is 0, a badly wrong output is close to 1), **and in that saturated region the gradient is small, so the larger error paradoxically makes the iterations slower.**\n",
     "\n",
     "\n",
     "To fix this we introduce the **[cross-entropy function](https://blog.csdn.net/u014313009/article/details/51043064)**, which takes the form\n",
     "\n",
     "$$\n",
     "C = - \\sum_i y_i \\ln a_i\n",
     "$$\n",
     "\n",
     "where $y_i$ is the true label."
   ]
  },
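  {
   "cell_type": "markdown",
   "id": "c9d2e5a8",
   "metadata": {},
   "source": [
    "The slow-learning argument can be checked numerically. Assuming the quadratic cost $C = (a - y)^2 / 2$ for a sigmoid neuron, the gradient with respect to $z$ is $(a - y) \\, a (1 - a)$, while for the binary cross-entropy cost it is simply $a - y$. A small sketch comparing the two at the initial outputs from the experiments above:\n",
    "\n",
    "```python\n",
    "# Gradients of the two costs w.r.t. z for a sigmoid neuron, target y = 0\n",
    "y = 0.0\n",
    "for a in (0.82, 0.98):  # the two initial outputs from the experiments\n",
    "    grad_quadratic = (a - y) * a * (1 - a)  # shrinks as sigmoid saturates\n",
    "    grad_cross_entropy = a - y              # proportional to the error\n",
    "    print(a, grad_quadratic, grad_cross_entropy)\n",
    "```\n",
    "\n",
    "Even though the error at a = 0.98 is larger, the quadratic-cost gradient there is smaller than at a = 0.82, while the cross-entropy gradient grows with the error."
   ]
  },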
  {
   "cell_type": "markdown",
   "id": "7134a100",
   "metadata": {},
   "source": [
     "### 3. Detailed derivation of the cross-entropy gradient\n",
     "\n",
     "First, let us be clear about what we are computing: the gradient of the loss with respect to the neuron outputs $z_i$, i.e.\n",
     "\n",
     "$$\n",
     "\\frac{\\partial C}{\\partial z_i}\n",
     "$$\n",
     "\n",
     "By the chain rule for composite functions:\n",
     "\n",
     "$$\n",
     "\\frac{\\partial C}{\\partial z_i} = \\sum_j \\frac{\\partial C}{\\partial a_j} \\frac{\\partial a_j}{\\partial z_i}\n",
     "$$\n",
     "\n",
     "One might ask why $a_j$ appears here rather than just $a_i$. Look again at the softmax formula: because its denominator contains the outputs of all neurons, every $a_j$ with $j \\ne i$ also depends on $z_i$, so all of the $a_j$ must enter the computation, and the derivative below splits into the cases $i = j$ and $i \\ne j$.\n",
    "\n",
     "### 3.1 The partial derivative with respect to $a_j$\n",
     "\n",
     "$$\n",
     "\\frac{\\partial C}{\\partial a_j} = \\frac{\\partial (-\\sum_k y_k \\ln a_k)}{\\partial a_j} = - \\frac{y_j}{a_j}\n",
     "$$\n",
     "\n",
     "since only the $k = j$ term of the sum depends on $a_j$.\n",
    "\n",
     "### 3.2 The partial derivative with respect to $z_i$\n",
     "\n",
     "If $i = j$:\n",
     "\n",
     "\\begin{eqnarray}\n",
     "\\frac{\\partial a_i}{\\partial z_i} & = & \\frac{\\partial (\\frac{e^{z_i}}{\\sum_k e^{z_k}})}{\\partial z_i} \\\\\n",
     "  & = & \\frac{e^{z_i} \\sum_k e^{z_k} - (e^{z_i})^2}{(\\sum_k e^{z_k})^2} \\\\\n",
     "  & = & (\\frac{e^{z_i}}{\\sum_k e^{z_k}} ) (1 - \\frac{e^{z_i}}{\\sum_k e^{z_k}} ) \\\\\n",
     "  & = & a_i (1 - a_i)\n",
     "\\end{eqnarray}\n",
     "\n",
     "If $i \\ne j$:\n",
     "\\begin{eqnarray}\n",
     "\\frac{\\partial a_j}{\\partial z_i} & = & \\frac{\\partial (\\frac{e^{z_j}}{\\sum_k e^{z_k}})}{\\partial z_i} \\\\\n",
     "  & = &  \\frac{0 \\cdot \\sum_k e^{z_k} - e^{z_j} \\cdot e^{z_i} }{(\\sum_k e^{z_k})^2} \\\\\n",
     "  & = & - \\frac{e^{z_j}}{\\sum_k e^{z_k}} \\cdot \\frac{e^{z_i}}{\\sum_k e^{z_k}} \\\\\n",
     "  & = & -a_j a_i\n",
     "\\end{eqnarray}\n",
     "\n",
     "Both cases use the quotient rule, for $u$ and $v$ both functions of the variable:\n",
     "$$\n",
     "(\\frac{u}{v})' = \\frac{u'v - uv'}{v^2}\n",
     "$$\n",
    "\n",
     "### 3.3 Putting it together\n",
     "\n",
     "\\begin{eqnarray}\n",
     "\\frac{\\partial C}{\\partial z_i} & = & \\sum_j (-\\frac{y_j}{a_j}) \\frac{\\partial a_j}{\\partial z_i} \\\\\n",
     "  & = & - \\frac{y_i}{a_i} a_i ( 1 - a_i) + \\sum_{j \\ne i} \\frac{y_j}{a_j} a_j a_i \\\\\n",
     "  & = & -y_i + y_i a_i + \\sum_{j \\ne i} y_j a_i \\\\\n",
     "  & = & -y_i + a_i \\sum_{j} y_j \\\\\n",
     "  & = & a_i - y_i\n",
     "\\end{eqnarray}\n",
     "\n",
     "where the last step uses the fact that the one-hot labels satisfy $\\sum_j y_j = 1$.\n",
    "\n",
     "The gradient with respect to the weights follows by one more chain-rule step:\n",
     "$$\n",
     "\\frac{\\partial C}{\\partial w_{ij}} = (a_i - y_i) x_j\n",
     "$$\n",
     "\n",
     "where\n",
     "$$\n",
     "z_i = \\sum_{j} w_{ij} x_{j} + b_i\n",
     "$$\n",
     "\n",
     "For comparison, the update equation for the quadratic cost is\n",
     "\n",
     "$$\n",
     "\\delta_i = a_i (1-a_i) (y_i - a_i)\n",
     "$$\n",
     "\n",
     "$$\n",
     "w_{ij} \\leftarrow w_{ij} + \\eta \\delta_i x_{j}\n",
     "$$"
   ]
  },
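  {
   "cell_type": "markdown",
   "id": "d5f8a7b3",
   "metadata": {},
   "source": [
    "The closed form $\\partial C / \\partial z_i = a_i - y_i$ derived above can be verified with a finite-difference check. This sketch uses an arbitrary score vector and a one-hot target:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def softmax(z):\n",
    "    e = np.exp(z - z.max())\n",
    "    return e / e.sum()\n",
    "\n",
    "def cross_entropy(z, y):\n",
    "    return -np.sum(y * np.log(softmax(z)))\n",
    "\n",
    "z = np.array([3.0, 1.0, -3.0])   # arbitrary scores\n",
    "y = np.array([0.0, 1.0, 0.0])    # one-hot true label\n",
    "\n",
    "analytic = softmax(z) - y        # the closed form a - y\n",
    "\n",
    "# central-difference numerical gradient of C w.r.t. each z_i\n",
    "eps = 1e-6\n",
    "numeric = np.zeros_like(z)\n",
    "for i in range(z.size):\n",
    "    zp = z.copy()\n",
    "    zp[i] += eps\n",
    "    zm = z.copy()\n",
    "    zm[i] -= eps\n",
    "    numeric[i] = (cross_entropy(zp, y) - cross_entropy(zm, y)) / (2 * eps)\n",
    "\n",
    "print(np.max(np.abs(analytic - numeric)))  # agreement up to O(eps^2)\n",
    "```"
   ]
  },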
  {
   "cell_type": "markdown",
   "id": "19b54b59",
   "metadata": {},
   "source": [
    "## References\n",
    "\n",
     "* Softmax & cross-entropy\n",
     "  * [A detailed look at the softmax function](https://zhuanlan.zhihu.com/p/105722023)\n",
     "  * [The cross-entropy cost function: purpose and derivation](https://blog.csdn.net/u014313009/article/details/51043064)\n",
     "  * [A worked, step-by-step softmax example with its derivatives](https://www.jianshu.com/p/ffa51250ba2e)\n",
     "  * [An easy-to-follow derivation of the softmax cross-entropy gradient](https://www.jianshu.com/p/c02a1fbffad6)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
