{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Loss Functions in Neural Networks\n",
    "A loss function quantifies the gap between a model's predictions and the ground truth, and guides the optimization of parameters during training. The choice of loss function strongly affects both training speed and final performance: different loss functions produce different gradients and therefore different optimization behavior."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### I. Types of Loss Functions and Their Applications\n",
    "1. Mean squared error (MSE): the average of the squared differences between predicted and true values. It is suited to regression problems such as house-price or temperature prediction.  \n",
    "   $$loss(y,\\hat y)= \\frac{1}{2}(\\hat y - y)^2 $$    \n",
    "where $\\hat y$ is the network's output and $y$ is the true value.\n",
    "\n",
    "Note: the loss function is defined per sample. If a sample is a vector, the true value is also a vector, and the loss is computed over the vector's components. For example, in handwritten-digit recognition, $y_k$ and $t_k$ are vectors with 10 elements, as below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "y = np.array([0.1,0.05,0.6,0.0,0.05,0.1,0.0,0.1,0.0,0.0])\n",
    "t = np.array([0,0,1,0,0,0,0,0,0,0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The array indices correspond, starting from the first element, to the digits \"0\", \"1\", \"2\", and so on. Here, the network output y comes from the softmax function. Since softmax outputs can be interpreted as probabilities, the example above says the probability of \"0\" is 0.1, the probability of \"1\" is 0.05, the probability of \"2\" is 0.6, and so on. t is the supervised (target) data, with the correct label set to 1 and all others set to 0. Here the entry for label \"2\" is 1, meaning the correct answer is \"2\". This encoding, in which the correct label is 1 and every other label is 0, is called one-hot representation.  \n",
    "\n",
    "A Python implementation of mean squared error:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "def mean_squared_error(y,t):\n",
    "    # elementwise squared differences, summed over all components\n",
    "    return 0.5*np.sum((y-t)**2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.09750000000000003"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# the correct answer is \"2\"\n",
    "t = np.array([0,0,1,0,0,0,0,0,0,0])\n",
    "# a prediction in which \"2\" has the highest probability\n",
    "y = np.array([0.1, 0.05, 0.6, 0.0, 0.05, 0.1, 0.0, 0.1, 0.0, 0.0])\n",
    "\n",
    "mean_squared_error(y,t)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.5975"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# the correct answer is \"2\"\n",
    "t = np.array([0,0,1,0,0,0,0,0,0,0])\n",
    "# a prediction in which \"7\" has the highest probability\n",
    "y = np.array([0.1, 0.05, 0.1, 0.0, 0.05, 0.1, 0.0, 0.6, 0.0, 0.0])\n",
    "mean_squared_error(y,t)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As these examples show, the closer the prediction is to the true value, the smaller the mean squared error."
   ]
  },
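  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check (a minimal sketch reusing the mean_squared_error function defined above), we can sweep the probability p assigned to the correct class \"2\", spreading the remaining probability mass uniformly over the other nine classes. The loss should shrink as p approaches 1:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# sweep the probability assigned to the correct class \"2\"\n",
    "t = np.array([0,0,1,0,0,0,0,0,0,0])\n",
    "for p in [0.2, 0.4, 0.6, 0.8, 1.0]:\n",
    "    y = np.full(10, (1-p)/9) # spread the remaining mass uniformly\n",
    "    y[2] = p\n",
    "    print(p, mean_squared_error(y, t))"
   ]
  },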
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2. Cross-entropy error: a measure of the gap between the true distribution and the predicted distribution. It is commonly used as the loss function for classification problems.  \n",
    "$$E = -\\sum_k t_k \\log y_k$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here $y_k$ is the network's output and $t_k$ is the correct label. Typically only the index of the correct label in $t_k$ is 1 and all others are 0 (one-hot representation), so the sum above effectively computes only the natural logarithm of the output at the correct label.  \n",
    "A Python implementation of cross-entropy error:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "def cross_entropy_error(y,t): # y is usually a softmax output, with values in the range 0-1\n",
    "    delta = 1e-7 # avoid log(0) = -inf\n",
    "    return -np.sum(t*np.log(y+delta))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.510825457099338"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# the correct answer is \"2\"\n",
    "t = np.array([0,0,1,0,0,0,0,0,0,0])\n",
    "# a prediction in which \"2\" has the highest probability\n",
    "y = np.array([0.1, 0.05, 0.6, 0.0, 0.05, 0.1, 0.0, 0.1, 0.0, 0.0])\n",
    "\n",
    "cross_entropy_error(y,t)"
   ]
  },
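  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because t is one-hot, the sum inside cross_entropy_error keeps only the term at the correct label, so the result should match $-\\log$ of the output at that index. A quick sanity check, reusing the y and t from the cell above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "k = np.argmax(t) # index of the correct label\n",
    "print(-np.log(y[k] + 1e-7)) # should match cross_entropy_error(y, t)\n",
    "print(cross_entropy_error(y, t))"
   ]
  },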
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "2.302584092994546"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# the correct answer is \"2\"\n",
    "t = np.array([0,0,1,0,0,0,0,0,0,0])\n",
    "# the output at the correct label is only 0.1, so the cross-entropy error is large\n",
    "y = np.array([0.1, 0.05, 0.1, 0.0, 0.05, 0.1, 0.0, 0.6, 0.0, 0.0])\n",
    "cross_entropy_error(y,t)"
   ]
  }
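  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The cross_entropy_error above handles one sample at a time. A common extension averages the loss over a mini-batch; the helper below is an illustrative sketch (the name batch_cross_entropy_error and its batching behavior are assumptions, not from the text), assuming t is given in one-hot form:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def batch_cross_entropy_error(y, t): # illustrative batched variant (an assumption, not from the text)\n",
    "    # promote a single sample to a batch of one\n",
    "    if y.ndim == 1:\n",
    "        y = y.reshape(1, -1)\n",
    "        t = t.reshape(1, -1)\n",
    "    batch_size = y.shape[0]\n",
    "    delta = 1e-7 # avoid log(0)\n",
    "    # sum over all entries, then average over the batch\n",
    "    return -np.sum(t * np.log(y + delta)) / batch_size\n",
    "\n",
    "# a batch of two identical samples gives the same value as one sample\n",
    "batch_cross_entropy_error(np.array([y, y]), np.array([t, t]))"
   ]
  }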
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "da",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
