{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Assignment 2: PyTorch Basics + Linear Models"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Problem 1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(a) Generate a data matrix `x` of size $[n \\times p] = [200 \\times 50]$, filled with draws from the normal distribution N(0, 2). Set the random seed to 123456."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "n = 200\n",
    "p = 50\n",
    "\n",
    "torch.manual_seed(123456)\n",
    "\n",
    "# N(0, 2): interpreting 2 as the variance, scale standard normal draws by sqrt(2)\n",
    "x = torch.randn(n, p) * 2 ** 0.5\n",
    "\n",
    "# Sanity-check the shape of x for easier debugging\n",
    "assert x.shape == (200, 50), \"x has the wrong shape\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(b) Generate a vector `beta` of length $p$ whose elements follow the uniform distribution Uniform(-1, 1)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Uniform(-1, 1): scale and shift Uniform(0, 1) draws\n",
    "beta = 2 * torch.rand(p) - 1\n",
    "\n",
    "# Sanity-check the length of beta for easier debugging\n",
    "assert beta.shape == (50,), \"beta has the wrong length\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(c) Generate a vector `eps` of length $n$ whose elements are independent $N(0, 0.1)$ draws."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [],
   "source": [
    "# torch.normal takes the standard deviation as its second argument\n",
    "eps = torch.normal(0, 0.1, (n,))\n",
    "\n",
    "# Sanity-check the length of eps for easier debugging\n",
    "assert eps.shape == (200,), \"eps has the wrong length\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(d) Create a vector `y` that mathematically equals $y=X\\beta+\\epsilon$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [],
   "source": [
    "y = torch.matmul(x, beta) + eps\n",
    "\n",
    "# Sanity-check the length of y for easier debugging\n",
    "assert y.shape == (200,), \"y has the wrong length\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(e) Regression problem: given data `x` and `y`, estimate `beta`. Using MSE as the loss, write a Python function `loss_fn_reg(bhat, x, y)` that returns the objective value for an arbitrary $\\hat{\\beta}$. Implement it with basic matrix and vector operations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [],
   "source": [
    "def loss_fn_reg(bhat, x, y):\n",
    "    # Mean squared error between y and the linear prediction x @ bhat\n",
    "    y_pred = torch.matmul(x, bhat)\n",
    "    mse = torch.mean((y - y_pred) ** 2)\n",
    "    return mse"
   ]
  },
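  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference (a side note, not required by the assignment), the loss computed above can be written compactly in matrix form as\n",
    "\n",
    "$$\n",
    "\\mathrm{MSE}(\\hat{\\beta}) = \\frac{1}{n}\\lVert y - X\\hat{\\beta}\\rVert_2^2.\n",
    "$$"
   ]
  },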
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(f) PyTorch also provides an MSE loss function; see [its documentation](https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html#torch.nn.MSELoss). The usage is to first create a loss-function object and then pass $\\hat{y}$ and $y$ as arguments. Use this approach to compute the loss for the $\\hat{\\beta}$ given below, and compare with the result of your own function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor(72.8337)\n",
      "tensor(72.8337)\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "bhat = torch.ones(p)\n",
    "yhat = torch.matmul(x, bhat)\n",
    "\n",
    "mse_reg = nn.MSELoss()\n",
    "loss1 = mse_reg(yhat, y)\n",
    "\n",
    "loss2 = loss_fn_reg(bhat, x, y)\n",
    "\n",
    "print(loss1)\n",
    "print(loss2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[Explanation] loss1 equals loss2 because the custom function and PyTorch's MSELoss class use the same formula for the mean squared error; with identical inputs, the outputs are therefore identical."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(g) Using PyTorch's automatic differentiation, compute the gradient of the MSE loss at $\\hat{\\beta}=\\mathbf{1}_p$, where $\\mathbf{1}_p$ is a vector of all ones."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Gradient= tensor([ 0.6231,  2.1012,  5.5648,  0.3384,  2.9264,  5.0942, -0.6677,  5.1865,\n",
      "         1.8099,  2.4094,  2.1397,  1.5629,  2.3993,  3.1158,  2.3100, -1.0086,\n",
      "         3.4097,  3.1163,  4.7258,  0.9961,  3.4152,  4.3153,  2.9035,  0.7121,\n",
      "         2.3734, -0.3858, -0.4984,  1.8598,  3.0906, -0.8195,  3.9510,  0.5043,\n",
      "         2.2943,  1.5133,  1.4616, -0.1208,  1.1346,  4.9672,  2.1844,  2.0699,\n",
      "         3.4521, -0.1082,  1.8532,  1.6736,  4.4128,  2.7239,  6.3797, -1.0205,\n",
      "         4.0564,  3.4194])\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "bhat = torch.ones(p, requires_grad=True)\n",
    "yhat = torch.matmul(x, bhat)\n",
    "\n",
    "mse_reg = nn.MSELoss()\n",
    "loss = mse_reg(yhat, y)\n",
    "loss.backward()\n",
    "\n",
    "print(\"Gradient=\", bhat.grad)\n"
   ]
  },
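  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check on the autograd result (a reference derivation, not part of the assignment), the MSE averaged over $n$ samples has the closed-form gradient\n",
    "\n",
    "$$\n",
    "\\nabla_{\\hat{\\beta}}\\,\\mathrm{MSE} = \\frac{2}{n} X^\\top (X\\hat{\\beta} - y),\n",
    "$$\n",
    "\n",
    "which `bhat.grad` should match at $\\hat{\\beta}=\\mathbf{1}_p$."
   ]
  },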
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Problem 2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(a) Create variables `x` and `beta` similarly to Problem 1, but with different `n` and `p`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.manual_seed(123456)\n",
    "n = 150\n",
    "p = 6\n",
    "x = torch.randn(n, p)\n",
    "beta = 2 * torch.rand(p) - 1\n",
    "\n",
    "# Sanity-check the shapes of x and beta for easier debugging\n",
    "assert x.shape == (n, p), \"x has the wrong shape\"\n",
    "assert beta.shape == (p,), \"beta has the wrong shape\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(b) Define a function `sigmoid(x)`, where `x` is a Tensor and $\\mathrm{sigmoid}(x)=e^x/(1+e^x)$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {},
   "outputs": [],
   "source": [
    "def sigmoid(x):\n",
    "    # Numerically stabler equivalent of exp(x) / (1 + exp(x))\n",
    "    return 1 / (1 + torch.exp(-x))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(c) Generate random values of $Y$ according to the distribution $Y|X\\sim Bernoulli(\\mathrm{sigmoid}(X\\beta))$. Hint: following the approach in Section 1.4, first compute the **vector** of Bernoulli parameters, then draw the random values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "metadata": {},
   "outputs": [],
   "source": [
    "xb = torch.matmul(x, beta)\n",
    "rho = sigmoid(xb)  # vector of Bernoulli parameters\n",
    "y = torch.bernoulli(rho)\n",
    "\n",
    "# Sanity-check the shape of y for easier debugging\n",
    "assert y.shape == (n,), \"y has the wrong shape\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(d) The log density of the $Bernoulli(\\rho)$ distribution is $\\log p(y;\\rho)=y\\log \\rho + (1-y) \\cdot \\log(1-\\rho)$. From this, derive the log-likelihood for a given $\\hat{\\beta}$ and write a loss function `loss_fn_logistic(bhat, x, y)` that returns the **negative** log-likelihood."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[Explanation]\n",
    "### Log-likelihood\n",
    "\n",
    "Given $\\hat{\\beta}$, with $\\hat{\\rho}_i = \\mathrm{sigmoid}(x_i^\\top \\hat{\\beta})$, the log-likelihood is\n",
    "\n",
    "$$\n",
    "\\log p(y; \\hat{\\rho}) = \\sum_{i=1}^{n} \\left[ y_i \\log \\hat{\\rho}_i + (1 - y_i) \\log(1 - \\hat{\\rho}_i) \\right].\n",
    "$$\n",
    "\n",
    "The loss function below returns the negative log-likelihood averaged over the $n$ samples, matching the default reduction of `nn.BCELoss`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "metadata": {},
   "outputs": [],
   "source": [
    "def loss_fn_logistic(bhat, x, y):\n",
    "    # Negative mean Bernoulli log-likelihood (binary cross-entropy)\n",
    "    xb = torch.matmul(x, bhat)\n",
    "    pro = sigmoid(xb)\n",
    "    log_likelihood = y * torch.log(pro) + (1 - y) * torch.log(1 - pro)\n",
    "    loss = -torch.mean(log_likelihood)\n",
    "    return loss"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(e) PyTorch also provides the BCELoss loss function; see [its documentation](https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html). The usage is to first create a loss-function object and then pass $\\hat{\\rho}$ and $y$ as arguments. Use this approach to compute the loss for the $\\hat{\\beta}$ given below, and compare with the result of your own function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 58,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor(0.9764)\n",
      "tensor(0.9764)\n"
     ]
    }
   ],
   "source": [
    "bhat = torch.ones(p)\n",
    "rhohat = torch.sigmoid(torch.matmul(x, bhat))\n",
    "\n",
    "bce_logistic = nn.BCELoss()\n",
    "loss1 = bce_logistic(rhohat, y)\n",
    "\n",
    "loss2 = loss_fn_logistic(bhat, x, y)\n",
    "\n",
    "print(loss1)\n",
    "print(loss2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[Explanation] loss1 equals loss2 because the custom function and PyTorch's BCELoss class use the same binary cross-entropy formula; with identical inputs, the outputs are therefore identical."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(f) Using PyTorch's automatic differentiation, compute the gradient of the loss above at $\\hat{\\beta}=\\mathbf{0}_p$, where $\\mathbf{0}_p$ is a vector of all zeros."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "bhat = torch.zeros(p, requires_grad=True)\n",
    "\n",
    "loss = loss_fn_logistic(bhat, x, y)\n",
    "loss.backward()\n",
    "# Print the gradient stored on bhat (not beta, which has no grad)\n",
    "print(\"gradient=\", bhat.grad)\n"
   ]
  },
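  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check (a reference derivation, not part of the assignment), the averaged negative log-likelihood has gradient\n",
    "\n",
    "$$\n",
    "\\nabla_{\\hat{\\beta}} = \\frac{1}{n} X^\\top (\\hat{\\rho} - y),\n",
    "$$\n",
    "\n",
    "and at $\\hat{\\beta}=\\mathbf{0}_p$ every $\\hat{\\rho}_i = \\mathrm{sigmoid}(0) = 1/2$, so the gradient reduces to $\\frac{1}{n} X^\\top (\\tfrac{1}{2}\\mathbf{1}_n - y)$."
   ]
  },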
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Problem 3"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(a) Data for a multi-class problem usually consist of a data matrix $X$ and a label vector $l$, where the labels are integers. To compute the loss, we first convert $l$ into 0-1 multinomial data, i.e. a one-hot encoding. Run and study the code below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([1, 2, 2, 1, 0, 3, 3, 3, 3, 0, 3, 0, 0, 2, 2, 0, 3, 0, 3, 3])\n",
      "torch.Size([200, 4])\n",
      "tensor([[0, 1, 0, 0],\n",
      "        [0, 0, 1, 0],\n",
      "        [0, 0, 1, 0],\n",
      "        [0, 1, 0, 0],\n",
      "        [1, 0, 0, 0],\n",
      "        [0, 0, 0, 1],\n",
      "        [0, 0, 0, 1],\n",
      "        [0, 0, 0, 1],\n",
      "        [0, 0, 0, 1],\n",
      "        [1, 0, 0, 0]])\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "np.random.seed(123456)\n",
    "torch.manual_seed(123456)\n",
    "n = 200  # sample size\n",
    "p = 10   # number of variables\n",
    "k = 4    # number of classes\n",
    "x = torch.randn(n, p)\n",
    "l = torch.tensor(np.random.choice(range(k), size=n, replace=True), dtype=int)\n",
    "print(l[:20])\n",
    "\n",
    "y = torch.nn.functional.one_hot(l, num_classes=k)\n",
    "print(y.shape)\n",
    "print(y[:10])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(b) Create a matrix `W` of size $k \\times p$, filled with N(0, 1) values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {},
   "outputs": [],
   "source": [
    "W = torch.randn(k, p)\n",
    "\n",
    "# Sanity-check the shape of W for easier debugging\n",
    "assert W.shape == (k, p), \"W has the wrong shape\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(c) Next, compute the probability predictions for $Y$: each observation $Y_i$ corresponds to a probability vector $p_i$ of the same length, with $p_i=\\mathrm{Softmax}(Wx_i)$. First compute $Wx_i$, where $x_i$ is the $i$-th observation. Since $X$ stacks the $x_i$ by rows, in matrix form this is $U=XW'$, where the $i$-th row of $U$ equals $Wx_i$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {},
   "outputs": [],
   "source": [
    "U = torch.matmul(x, W.T)\n",
    "\n",
    "# Sanity-check the shape of U for easier debugging\n",
    "assert U.shape == (n, k), \"U has the wrong shape\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let us first test $\\mathrm{Softmax}(Wx_{100})$ and check that its elements sum to 1. In the code, `dim=0` means the Softmax is taken along the first index direction; since `U[99]` is a vector, the first index direction is the vector itself."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([0.1958, 0.0477, 0.6024, 0.1541])"
      ]
     },
     "execution_count": 64,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "torch.softmax(U[99], dim=0)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To apply the Softmax to every row of $U$, we can call `torch.softmax` on the whole matrix `U` with `dim=1`, meaning the Softmax is taken along the second index direction, i.e. over each row of $U$. The principle is similar to the axis-wise reductions in Section 1.3. Complete this computation to obtain the matrix $P$, whose $i$-th row is $p_i$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "metadata": {},
   "outputs": [],
   "source": [
    "P = torch.softmax(U, dim=1)\n",
    "\n",
    "# Sanity-check the shape of P for easier debugging\n",
    "assert P.shape == (n, k), \"P has the wrong shape\""
   ]
  },
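  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the row-wise Softmax computed above is\n",
    "\n",
    "$$\n",
    "P_{ij} = \\frac{e^{U_{ij}}}{\\sum_{m=1}^{k} e^{U_{im}}},\n",
    "$$\n",
    "\n",
    "so each row of $P$ is a probability vector summing to 1."
   ]
  },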
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(d) From the matrices `y` and `P`, the log-likelihood follows directly from the formula. Putting the above steps together, write a loss function `loss_fn_softmax(w, x, y)` that returns the **negative** log-likelihood."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch.nn.functional as F\n",
    "\n",
    "def loss_fn_softmax(w, x, y):\n",
    "    # Compute the logits from the arguments rather than relying on the global U\n",
    "    u = torch.matmul(x, w.T)\n",
    "    probs = F.softmax(u, dim=1)\n",
    "    # Negative mean multinomial log-likelihood\n",
    "    loss = -torch.mean(torch.sum(y * torch.log(probs), dim=1))\n",
    "    return loss"
   ]
  },
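  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A side note on why this loss matches the cross-entropy formulation: because `y` is one-hot, only the term for the true label $l_i$ survives in each inner sum, so\n",
    "\n",
    "$$\n",
    "-\\frac{1}{n}\\sum_{i=1}^{n}\\sum_{j=1}^{k} y_{ij} \\log P_{ij} = -\\frac{1}{n}\\sum_{i=1}^{n} \\log P_{i, l_i},\n",
    "$$\n",
    "\n",
    "which is exactly what `nn.CrossEntropyLoss` computes from the logits $U$ and labels $l$."
   ]
  },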
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(e) PyTorch also provides the CrossEntropyLoss loss function; see [its documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html). The usage is to first create a loss-function object and then pass $U$ and $l$ as arguments (note that $U$ is the matrix **before** the Softmax, and $l$ holds the **raw** labels). Use this approach to compute the loss and compare with the result of your own function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor(3.3032)\n",
      "tensor(3.3032)\n"
     ]
    }
   ],
   "source": [
    "ce_softmax = nn.CrossEntropyLoss()\n",
    "loss1 = ce_softmax(U, l)\n",
    "\n",
    "loss2 = loss_fn_softmax(W, x, y)\n",
    "\n",
    "print(loss1)\n",
    "print(loss2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[Explanation] loss1 equals loss2 because the custom function and PyTorch's CrossEntropyLoss class compute the same quantity; with identical inputs, the outputs are therefore identical."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(f) Using PyTorch's automatic differentiation, compute the gradient of the loss above at $W=O$, where $O$ is a matrix of all zeros."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[-0.0198,  0.0474,  0.0067, -0.0427, -0.0012, -0.0092,  0.0167,  0.0030,\n",
      "          0.0800, -0.0152],\n",
      "        [ 0.0250, -0.0273, -0.0283,  0.0377,  0.0333, -0.0031,  0.0130,  0.0072,\n",
      "         -0.0125,  0.0173],\n",
      "        [-0.0236, -0.0355,  0.0060,  0.0342, -0.0175,  0.0324, -0.0054,  0.0260,\n",
      "          0.0312, -0.0129],\n",
      "        [ 0.0183,  0.0154,  0.0155, -0.0292, -0.0146, -0.0201, -0.0243, -0.0363,\n",
      "         -0.0988,  0.0108]])\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "W = torch.zeros(k, p, requires_grad=True)  \n",
    "U = torch.matmul(x, W.T)\n",
    "\n",
    "ce_softmax = nn.CrossEntropyLoss()\n",
    "loss = ce_softmax(U, l)\n",
    "\n",
    "loss.backward() \n",
    "\n",
    "print(W.grad)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
