{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "import torch\n",
    "from torch import nn\n",
    "from torch.nn import functional as F\n",
    "from d2l import torch as d2l"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "batch_size, num_steps = 32, 35\n",
    "train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### One-Hot Encoding"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Map each index to a distinct unit vector: suppose the vocabulary contains N different tokens (i.e. len(vocab)), so token indices\n",
    "range from 0 to N − 1. If a token's index is the integer i, we create a length-N vector of all zeros and set the element at\n",
    "position i to 1. This vector is the one-hot vector of the original token. The one-hot vectors for indices 0 and 2 look like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
       "         0, 0, 0, 0],\n",
       "        [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n",
       "         0, 0, 0, 0]])"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "F.one_hot(torch.tensor([0, 2]), len(vocab))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each minibatch we sample is a two-dimensional tensor of shape (batch size, number of time steps). <br>The one_hot function turns such a\n",
    "minibatch into a three-dimensional tensor whose last dimension equals the vocabulary size (len(vocab)). <br>We often transpose the input\n",
    "so that the output has shape **(number of time steps, batch size, vocabulary size)**. This makes it convenient to step through the\n",
    "outermost dimension and update the hidden state of the minibatch one time step at a time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([5, 2, 28])"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Create a 1-D tensor with values 0..9 and reshape it into a (2, 5) tensor\n",
    "X = torch.arange(10).reshape(2, 5)\n",
    "# One-hot encode the transpose of X with a vocabulary size of 28, then inspect the shape\n",
    "F.one_hot(X.T, 28).shape\n",
    "# The result is torch.Size([5, 2, 28]), i.e. (time steps, batch size, vocab size)"
   ]
  },
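  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With the axes in (time steps, batch size, vocab size) order, iterating over the outermost dimension yields one time step at a time. The cell below is a minimal sketch of that access pattern (the variable name inputs is illustrative, not from d2l):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Re-create X and step through the outermost (time-step) dimension;\n",
    "# each slice has shape (batch size, vocab size) = (2, 28)\n",
    "X = torch.arange(10).reshape(2, 5)\n",
    "inputs = F.one_hot(X.T, 28)\n",
    "[x.shape for x in inputs]"
   ]
  },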
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Gradient Clipping"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If the L2 norm of the gradient g exceeds a threshold θ, the gradient is rescaled:\n",
    "\n",
    "$$\\mathbf{g} \\leftarrow \\min\\left(1, \\frac{\\theta}{\\|\\mathbf{g}\\|}\\right)\\mathbf{g}$$\n",
    "\n",
    "that is, clipped gradient = original gradient × θ/‖g‖ whenever ‖g‖ > θ."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, take the gradient (3, 4) with threshold θ = 4 and the L2 norm: √(3² + 4²) = 5, which exceeds θ = 4.<br>\n",
    "So we rescale: (3, 4) × θ/5 = (3, 4) × 0.8 = (2.4, 3.2), and √(2.4² + 3.2²) = 4, which meets the threshold."
   ]
  },
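  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The arithmetic above can be checked directly in torch (a small sanity check, not part of the d2l code):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "g = torch.tensor([3.0, 4.0])\n",
    "theta = 4.0\n",
    "norm = torch.norm(g)      # sqrt(3^2 + 4^2) = 5\n",
    "if norm > theta:\n",
    "    g = g * theta / norm  # scale down onto the threshold: (2.4, 3.2)\n",
    "g, torch.norm(g)          # the new norm equals theta = 4\n"
   ]
  },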
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Gradient clipping\n",
    "def grad_clipping(net, theta):\n",
    "    \"\"\"Clip the gradient.\"\"\"\n",
    "    if isinstance(net, nn.Module):\n",
    "        params = [p for p in net.parameters() if p.requires_grad]\n",
    "    else:\n",
    "        params = net.params\n",
    "    # L2 norm over all parameters, treated as one flattened vector\n",
    "    norm = torch.sqrt(sum(torch.sum(p.grad ** 2) for p in params))\n",
    "    # If the L2 norm exceeds the threshold, rescale the gradients in place\n",
    "    if norm > theta:\n",
    "        for param in params:\n",
    "            param.grad[:] *= theta / norm"
   ]
  },
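  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To exercise grad_clipping, we can use a tiny linear model with artificially large gradients (the model and numbers are illustrative, not from the book): after clipping, the global L2 norm of the gradients equals the threshold."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "net = nn.Linear(2, 1)\n",
    "# A loss scaled by 100 produces gradients whose norm far exceeds 1\n",
    "loss = (net(torch.ones(1, 2)) * 100).sum()\n",
    "loss.backward()\n",
    "grad_clipping(net, 1.0)\n",
    "# The global gradient norm is now clipped down to the threshold of 1.0\n",
    "torch.sqrt(sum(torch.sum(p.grad ** 2) for p in net.parameters()))"
   ]
  },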
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "NLP",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.21"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
