{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# softmax\n",
    "\n",
    "$$\n",
    "\\mathrm{softmax}(x)_i = \\frac{e^{x_i}}{\\sum_{j=1}^{m} e^{x_j}}\n",
    "$$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[1.38389653e-87 3.72007598e-44 1.00000000e+00]\n",
      "1.0\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "def softmax(x):\n",
    "    # naive softmax: exponentiate, then normalize so the outputs sum to 1\n",
    "    ex = np.exp(x)\n",
    "    return ex / np.sum(ex)\n",
    "\n",
    "a = softmax(np.array([100, 200, 300]))\n",
    "print(a)\n",
    "print(np.sum(a))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Optimization\n",
    "In practice, to prevent overflow, we subtract the maximum element from every value before exponentiating; the result is unchanged because the common factor $e^{-\\max x}$ cancels in the ratio.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[1.38389653e-87 3.72007598e-44 1.00000000e+00]\n",
      "1.0\n"
     ]
    }
   ],
   "source": [
    "def optimize_softmax(x):\n",
    "    # shifting by the max makes every exponent <= 0, avoiding overflow\n",
    "    return softmax(x - np.max(x))\n",
    "\n",
    "a = optimize_softmax(np.array([100, 200, 300]))\n",
    "print(a)\n",
    "print(np.sum(a))"
   ]
  },
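  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick check of why the subtraction matters (the test values below are made up; `np.exp` overflows for `float64` inputs somewhere around 710): the naive `softmax` returns `nan` for large inputs, while `optimize_softmax` stays finite."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "big = np.array([1000.0, 2000.0, 3000.0])\n",
    "\n",
    "naive = softmax(big)            # exp(3000) overflows to inf, and inf / inf = nan\n",
    "stable = optimize_softmax(big)  # shifted inputs are <= 0, so exp stays in range\n",
    "print(naive)\n",
    "print(stable, np.sum(stable))"
   ]
  },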
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Purpose\n",
    "With this function we can turn a vector of raw scores into multiple outputs that sum to 1, which lets us treat them as probabilities when making decisions.<br>\n",
    "The `MNIST` classifier also uses the `softmax` function for its output layer."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Cross-entropy\n",
    "We usually use cross-entropy as the loss function:\n",
    "$$\n",
    "Loss = -\\sum_i t_i \\ln y_i\n",
    "$$"
   ]
  },
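  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal numeric sketch (the target class and probabilities below are arbitrary): with a one-hot target $t$, every term of the sum vanishes except the one for the true class, so the loss is just the negative log of that class's predicted probability."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "t = np.array([0.0, 1.0, 0.0])  # one-hot target: class 1 is correct\n",
    "y = np.array([0.1, 0.7, 0.2])  # predicted probabilities, sum to 1\n",
    "\n",
    "loss = -np.sum(t * np.log(y))  # cross-entropy\n",
    "print(loss, -np.log(y[1]))     # identical: only the true class contributes"
   ]
  },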
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Derivative\n",
    "Writing $s_i = e^{x_i}\\,(\\sum_{k=1}^{m} e^{x_k})^{-1}$, we differentiate with the product rule:\n",
    "$$\n",
    "f(x) = g(x)h(x) \\Rightarrow f'(x) = g'(x)h(x) + g(x)h'(x)\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The numerator of `softmax`, $e^{x_i}$, depends only on $x_i$; its derivative with respect to any other variable is necessarily zero.<br>\n",
    "## $i = j$\n",
    "$$\n",
    "\\frac{\\partial s_i}{\\partial x_j} = \\frac{e^{x_i}}{\\sum_{k=1}^{m} e^{x_k}} - \\frac{e^{x_i}\\,e^{x_j}}{(\\sum_{k=1}^{m} e^{x_k})^2} = s_i(1 - s_j) = s_i(1 - s_i)\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## $i \\neq j$\n",
    "As above, but now $e^{x_i}$ does not depend on $x_j$, so only the second term survives:\n",
    "$$\n",
    "\\frac{\\partial s_i}{\\partial x_j} = -\\frac{e^{x_i}\\,e^{x_j}}{(\\sum_{k=1}^{m} e^{x_k})^2} = -s_i s_j\n",
    "$$"
   ]
  },
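  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two cases combine into the Jacobian $\\partial s_i / \\partial x_j = s_i(\\delta_{ij} - s_j)$. A sketch that checks the analytic Jacobian against a central finite difference (the test point is arbitrary, and `softmax_jacobian` is a helper defined here, not part of the earlier cells):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def softmax_jacobian(x):\n",
    "    s = optimize_softmax(x)\n",
    "    # J[i, j] = s_i * (delta_ij - s_j)\n",
    "    return np.diag(s) - np.outer(s, s)\n",
    "\n",
    "x = np.array([1.0, 2.0, 0.5])\n",
    "J = softmax_jacobian(x)\n",
    "\n",
    "# numerical check, one input dimension at a time\n",
    "eps = 1e-6\n",
    "J_num = np.zeros((3, 3))\n",
    "for j in range(3):\n",
    "    d = np.zeros(3)\n",
    "    d[j] = eps\n",
    "    J_num[:, j] = (optimize_softmax(x + d) - optimize_softmax(x - d)) / (2 * eps)\n",
    "\n",
    "print(np.max(np.abs(J - J_num)))  # should be tiny"
   ]
  },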
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Derivative of the cross-entropy\n",
    "Since the target $t$ in the loss is one-hot, with $i$ denoting the true class,\n",
    "$$\n",
    "t_j = \\left\\{\\begin{matrix}\n",
    "1 & i = j \\\\\n",
    "0 & i \\neq j\n",
    "\\end{matrix}\\right. \n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "the sum collapses to a single term, the one for the true class:\n",
    "$$\n",
    "Loss_i = -\\ln y_i\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Writing $y_i = s_i$ (the softmax output) and applying the chain rule with the $i = j$ result above:\n",
    "$$\n",
    "\\frac{\\partial Loss}{\\partial x_i} = -\\frac{1}{s_i}\\, s_i(1 - s_i) = s_i - 1\n",
    "$$"
   ]
  },
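  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Chaining the two derivations, the gradient with respect to the whole logit vector is simply $s - t$: copy the softmax output and subtract 1 at the true class. A numeric check (the logits and label below are chosen arbitrarily, and `loss` is a helper defined here):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def loss(x, label):\n",
    "    return -np.log(optimize_softmax(x)[label])\n",
    "\n",
    "x = np.array([0.5, 1.5, -0.5])\n",
    "label = 1\n",
    "\n",
    "grad = optimize_softmax(x)  # analytic gradient: s - t\n",
    "grad[label] -= 1.0\n",
    "\n",
    "# numerical check via central differences\n",
    "eps = 1e-6\n",
    "grad_num = np.zeros(3)\n",
    "for j in range(3):\n",
    "    d = np.zeros(3)\n",
    "    d[j] = eps\n",
    "    grad_num[j] = (loss(x + d, label) - loss(x - d, label)) / (2 * eps)\n",
    "\n",
    "print(np.max(np.abs(grad - grad_num)))  # should be tiny"
   ]
  },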
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
