{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* 通过$softmax$运算获取一个向量并将其映射为概率\n",
    "* $softmax$回归适用于分类问题，它使用了$softmax$运算中输出类别的概率分布\n",
    "* 交叉熵是一个两个概率分布之间差异的很好的度量，它测量给定模型编码数据所需的比特数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "os.environ['KMP_DUPLICATE_LIB_OK'] = \"TRUE\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 3.4.1 分类问题"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "独热编码是一个向量，它的分量和类别一样多，类别对应的分量设置为1，其他分量设置为0。，例如，假设有3个类别，则独热编码可以表示为：\n",
    "$y \\in \\left \\{(1,0,0), (0,1,0), (0,0,1) \\right \\}$"
   ]
  },
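  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration (a minimal pure-Python sketch, not code from the text), the one-hot vectors for 3 classes can be built as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Build the one-hot encoding for each of 3 classes (illustrative sketch)\n",
    "num_classes = 3\n",
    "one_hot = [[1 if j == i else 0 for j in range(num_classes)] for i in range(num_classes)]\n",
    "print(one_hot)  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]"
   ]
  },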
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 3.4.2 网络结构"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "为了解决线性模型的分类问题，我们需要和输出一样多的仿射函数，每个输出对应于他自己的仿射函数，比如我们有4个特征和三个可能的输出类别，我们将需要12个标量来表示权重，3个标量来表示偏置："
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "$$ \\begin{matrix}\n",
    "o_1 = x_1 w_{11} + x_2 w_{12} + x_3 w_{13} + x_4 w_{14} + b_1 \\\\\n",
    "o_2 = x_1 w_{21} + x_2 w_{22} + x_3 w_{23} + x_4 w_{24} + b_2 \\\\\n",
    "o_3 = x_1 w_{31} + x_2 w_{32} + x_3 w_{33} + x_4 w_{34} + b_3 \\\\\n",
    "\\end{matrix}\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "向量表示：$o = Wx + b$"
   ]
  },
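  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The three affine functions above can be written out in plain Python (a minimal sketch with made-up numbers; in practice this is a single matrix-vector product in a tensor library):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 4 features, 3 outputs: o_i = sum_j w_ij * x_j + b_i (toy values for illustration)\n",
    "x = [1.0, 2.0, 3.0, 4.0]\n",
    "W = [[0.1, 0.2, 0.3, 0.4],  # weights for o_1\n",
    "     [0.5, 0.6, 0.7, 0.8],  # weights for o_2\n",
    "     [0.9, 1.0, 1.1, 1.2]]  # weights for o_3\n",
    "b = [0.1, 0.2, 0.3]\n",
    "o = [sum(w * xj for w, xj in zip(row, x)) + bi for row, bi in zip(W, b)]\n",
    "print(o)  # 12 weights and 3 biases produce 3 outputs"
   ]
  },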
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 3.4.3 全连接层的参数开销"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "对于任何具有$d$个输入和$q$个输出的全连接层，参数开销为 $O(dq)$。将$d$个输入转换为$q$个输出的成本可以减少到$O(dq/n)$"
   ]
  },
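  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example (hypothetical sizes), a fully connected layer from $d = 784$ inputs to $q = 10$ outputs needs $d \\times q$ weights plus $q$ biases:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "d, q = 784, 10  # hypothetical input and output dimensions\n",
    "num_params = d * q + q  # O(dq) weights plus q biases\n",
    "print(num_params)  # 7850"
   ]
  },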
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 3.4.4 $softmax$ 运算"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$softmax$函数能够将未规范化的预测变换为非负数并且总和为1，同时让模型保持可导的性质。我们首先对每个未规范化的预测求幂，为了确保最终输出的概率值总和为1，我们再让每个求幂后的结果除以结果的总和："
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\\hat y = softmax(o),其中 \\hat {y_j} = \\frac {\\exp(o_j)} {\\sum_k \\exp(o_k)}$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "对于所有的$j$总有$0 \\le \\hat {y_j} \\le 1$,因此$\\hat y$可以视为一个正确的概率分布。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\\underset {j}{argmax} \\hat {y_j} = \\underset {j}{argmax} o_j$$"
   ]
  },
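  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal pure-Python $softmax$ (subtracting the maximum before exponentiating, a standard numerical-stability trick not shown in the formula above) confirms both properties: the output is a probability distribution, and the $argmax$ is unchanged:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "def softmax(o):\n",
    "    # Subtract the max before exponentiating for numerical stability\n",
    "    m = max(o)\n",
    "    exps = [math.exp(v - m) for v in o]\n",
    "    total = sum(exps)\n",
    "    return [e / total for e in exps]\n",
    "\n",
    "o = [1.0, 2.0, 0.5]\n",
    "y_hat = softmax(o)\n",
    "print(y_hat, sum(y_hat))\n",
    "assert abs(sum(y_hat) - 1.0) < 1e-12\n",
    "assert y_hat.index(max(y_hat)) == o.index(max(o))  # argmax preserved"
   ]
  },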
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 3.4.5 小批量样本的向量化"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "假设我们读取了一个批量的样本$X$，其中特征维度为$d$，批量大小为$n$,此外，假设我们在输出中有$q$个类别，那么小批量样本的特征为$X\\in\\mathbb{R}^{n\\times d}$，权重为$W\\in\\mathbb{R}^{d\\times q}$,偏置项为$b\\in\\mathbb{R}^{1\\times q}$.$softmax$回归的向量计算表达式："
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\\begin{matrix}\n",
    "O = XW + b \\\\\n",
    "\\hat Y = softmax(O) \\\\\n",
    "\\end{matrix}$$"
   ]
  },
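  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sketch of the minibatch computation with toy sizes $n=2$, $d=4$, $q=3$ (illustrative values; real code would use a tensor library for $O = XW + b$):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "def softmax(row):\n",
    "    m = max(row)\n",
    "    exps = [math.exp(v - m) for v in row]\n",
    "    s = sum(exps)\n",
    "    return [e / s for e in exps]\n",
    "\n",
    "n, d, q = 2, 4, 3  # batch size, feature dimension, number of classes (toy sizes)\n",
    "X = [[1.0, 0.0, 2.0, 1.0],\n",
    "     [0.5, 1.5, 0.0, 2.0]]\n",
    "W = [[0.1, 0.2, 0.3],\n",
    "     [0.4, 0.5, 0.6],\n",
    "     [0.7, 0.8, 0.9],\n",
    "     [1.0, 1.1, 1.2]]\n",
    "b = [0.1, 0.1, 0.1]\n",
    "# O = XW + b, then softmax is applied to each row of O\n",
    "O = [[sum(X[i][k] * W[k][j] for k in range(d)) + b[j] for j in range(q)] for i in range(n)]\n",
    "Y_hat = [softmax(row) for row in O]\n",
    "print(Y_hat)\n",
    "assert all(abs(sum(row) - 1.0) < 1e-12 for row in Y_hat)"
   ]
  },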
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 3.4.6 损失函数"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "我们需要一个损失函数来度量预测的效果。我们将使用极大似然估计，这与在线性回归中的方法相同"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 对数似然"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$softmax$函数给出了一个向量$\\hat y$,将其视为“对给定任意输入$x$的每个类的条件概率”的函数。假设整个数据集$\\left \\{X,Y\\right \\}$由$n$个样本组成,其中索引为$i$的样本由特征向量$x^{(i)}$,独热标签向量$y^{(i)}$组成.我们可以将估计值与实际值进行比较："
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$P(Y|X) = \\prod_{i=1}^{n} P(Y^{(i)}|X^{(i)})$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "根据极大似然估计，我们最大化$P(X|Y)$，相当于最小化负对数似然："
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$-\\log P(Y|X) = \\sum_{i=1}^n -\\log P(y^{(i)}|x^{(i)}) = \\sum_{i=1}^n l(y^{(i)}, \\hat y^{(i)}))$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "其中，对于任何标签$y$和模型预测$\\hat y$,损失函数：\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$l(y,\\hat y) = - \\sum_{j = 1}^q y_j \\log \\hat y_j$$"
   ]
  },
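  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With a one-hot label the sum collapses to the negative log-probability of the true class; a minimal sketch (hypothetical numbers):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "def cross_entropy(y, y_hat):\n",
    "    # l(y, y_hat) = -sum_j y_j * log(y_hat_j); skip terms where y_j = 0\n",
    "    return -sum(yj * math.log(p) for yj, p in zip(y, y_hat) if yj > 0)\n",
    "\n",
    "y = [0, 1, 0]            # one-hot label: the true class is class 1\n",
    "y_hat = [0.2, 0.7, 0.1]  # hypothetical predicted probabilities\n",
    "loss = cross_entropy(y, y_hat)\n",
    "print(loss)  # equals -log(0.7)\n",
    "assert abs(loss - (-math.log(0.7))) < 1e-12"
   ]
  },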
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## $softmax$及其函数"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$l(y,\\hat y) = - \\sum_{j = 1 }^{q} y_{j} \\log \\frac{\\exp(o_j)}{\\sum_{k = 1}^{q} \\exp(o_k)} = \\sum_{j = 1}^{q} y_{j} \\log \\sum_{k = 1}^{q} \\exp(o_k) - \\sum_{j = 1}^{q} y_{j} o_j = \\log \\sum_{k = 1}^{q} \\exp(o_k) - \\sum_{j = 1}^{q} y_{j} o_j$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "考虑相对于任何未规范化的预测$o_j$的导数："
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\\partial_{o_j}l(y,\\hat{y}) = \\frac {\\exp(o_j)}{\\sum_{k=1}^q \\exp(o_k)} - y_j = softmax(o)_j - y_j$$"
   ]
  },
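  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The derivative $softmax(o)_j - y_j$ can be checked numerically with a central finite difference (a self-contained sketch with toy values):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "def softmax(o):\n",
    "    m = max(o)\n",
    "    exps = [math.exp(v - m) for v in o]\n",
    "    s = sum(exps)\n",
    "    return [e / s for e in exps]\n",
    "\n",
    "def loss(o, y):\n",
    "    # Cross-entropy of the softmax probabilities against a one-hot label\n",
    "    return -sum(yj * math.log(p) for yj, p in zip(y, softmax(o)) if yj > 0)\n",
    "\n",
    "o = [1.0, 2.0, 0.5]\n",
    "y = [0, 1, 0]\n",
    "j, eps = 0, 1e-6\n",
    "o_plus = o[:]\n",
    "o_plus[j] += eps\n",
    "o_minus = o[:]\n",
    "o_minus[j] -= eps\n",
    "# Central finite-difference estimate of the partial derivative w.r.t. o_j\n",
    "numeric = (loss(o_plus, y) - loss(o_minus, y)) / (2 * eps)\n",
    "analytic = softmax(o)[j] - y[j]  # softmax(o)_j - y_j\n",
    "print(numeric, analytic)\n",
    "assert abs(numeric - analytic) < 1e-5"
   ]
  },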
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 交叉熵损失"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "考虑整个结果分布的情况，即观察到的不仅仅是一个结果，对于标签$y$，我们可以使用与以前相同的表达形式。唯一的区别时，我们现在用一个概率向量表示。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "我们使用$l(y,\\hat y) = - \\sum_{j = 1}^q y_j \\log \\hat y_j$来定义损失，它是所有标签分布的预期损失值。称为交叉熵损失\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 3.4.7 信息论基础"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "信息论涉及编码、解码、发送以及尽可能地处理信息或数据"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 熵"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "信息论的核心思想时量化数据中的信息内容。在信息论中，该数值被称为分布$P$的熵:\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$H(P) = \\sum_j -P(j)\\log P(j)$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "信息论的基本定理之一指出，为了对分布$P$中随机抽取的数据进行编码，我们至少需要$H(P)$“纳特”对其进行编码。纳特相当于比特，对数的底为$e$而不时2。因此一个纳特约等于1.44比特。"
   ]
  },
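  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sketch of the entropy of a toy distribution in nats, and the conversion to bits (one nat is $1/\\log 2 \\approx 1.44$ bits):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "def entropy(P):\n",
    "    # H(P) = sum_j -P(j) log P(j), in nats (natural log)\n",
    "    return -sum(p * math.log(p) for p in P if p > 0)\n",
    "\n",
    "P = [0.5, 0.25, 0.25]  # toy distribution\n",
    "H_nats = entropy(P)\n",
    "H_bits = H_nats / math.log(2)  # convert nats to bits\n",
    "print(H_nats, H_bits)  # H_bits is 1.5 for this distribution\n",
    "assert abs(H_bits - 1.5) < 1e-12"
   ]
  },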
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 信息量"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$\\log \\frac{1}{P(j)} = - log P(j)$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 重新审视交叉熵"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "交叉熵从$P$到$Q$，记为$H(P,Q)$，当$P = Q$时，交叉熵达到最小值。从$P$到$Q$的交叉熵为$H(P,P) = H(P)$"
   ]
  },
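  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Using $H(P,Q) = \\sum_j -P(j)\\log Q(j)$, a small sketch (toy distributions) shows that the cross-entropy from $P$ to $Q$ exceeds $H(P)$ when $Q \\ne P$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "def cross_entropy(P, Q):\n",
    "    # H(P, Q) = sum_j -P(j) log Q(j)\n",
    "    return -sum(p * math.log(q) for p, q in zip(P, Q) if p > 0)\n",
    "\n",
    "P = [0.5, 0.25, 0.25]\n",
    "Q = [0.4, 0.4, 0.2]\n",
    "# H(P, P) = H(P), and H(P, Q) >= H(P, P) for any Q\n",
    "print(cross_entropy(P, P), cross_entropy(P, Q))\n",
    "assert cross_entropy(P, Q) > cross_entropy(P, P)"
   ]
  },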
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 3.4.8 模型预测和评估"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "如果预测与实际类别一直，则预测是正确的，在接下来的实验中，我们将使用精度来评估模型的性能。精度等于正确预测数与预测总数的比。"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "test",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
