{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Homework 3: Feedforward Neural Networks"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Numerically Stable Algorithms"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When implementing activation functions or loss functions, extreme input values come up frequently. Without proper handling, they can produce `NaN` or other anomalous results and break the program. This section practices several numerically stable computation techniques."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Problem 1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(a) Consider the sigmoid function $$\\sigma(x)=\\frac{e^x}{1+e^x}$$\n",
    "\n",
    "Using PyTorch, write a function `sigmoid(x)` that takes a Tensor `x` and returns the value of the sigmoid function at `x`. Calling `torch.sigmoid()` directly is not allowed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import torch\n",
    "\n",
    "def sigmoid(x):\n",
    "    return 1 / (1 + torch.exp(-x))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A simple test:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([0.0000e+00, 0.0000e+00, 4.5398e-05, 5.0000e-01, 9.9995e-01, 1.0000e+00,\n",
      "        1.0000e+00])\n",
      "tensor([0.0000e+00, 0.0000e+00, 4.5398e-05, 5.0000e-01, 9.9995e-01, 1.0000e+00,\n",
      "        1.0000e+00])\n"
     ]
    }
   ],
   "source": [
    "x = torch.tensor([-1000.0, -100.0, -10.0, 0.0, 10.0, 100.0, 1000.0])\n",
    "\n",
    "# PyTorch built-in function\n",
    "print(torch.sigmoid(x))\n",
    "\n",
    "# The function implemented above\n",
    "print(sigmoid(x))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(b) If anomalous values appear, think about the possible cause. (Hint: what is the true range of the sigmoid function? What are the ranges of the numerator and the denominator? Can the sigmoid expression be rewritten into an equivalent form?) Then try implementing the sigmoid function again. If everything looks fine, you may skip this part."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Possible cause: for x = -1000.0, torch.exp(-x) = exp(1000.0) overflows\n",
    "# float32 to inf, so 1 / (1 + inf) evaluates to 0.  The value happens to be\n",
    "# correct, but the intermediate inf is fragile (e.g. it yields NaN gradients\n",
    "# under autograd).  For x = 1000.0, torch.exp(-x) = exp(-1000.0) underflows\n",
    "# to 0, so 1 / (1 + 0) = 1.  The fix: branch on the sign of x so that the\n",
    "# branch selected for each element only ever exponentiates a non-positive\n",
    "# number.  (Note: torch.where still evaluates both branches elementwise;\n",
    "# the discarded values may overflow but are ignored.)\n",
    "\n",
    "def sigmoid(x):\n",
    "    return torch.where(x >= 0,\n",
    "                       1 / (1 + torch.exp(-x)),\n",
    "                       torch.exp(x) / (1 + torch.exp(x)))\n"
   ]
  },
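  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (a sketch, not part of the original assignment), the stabilized `sigmoid` should now match `torch.sigmoid` on the same extreme inputs, since the branch selected for each element only ever exponentiates a non-positive number:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.tensor([-1000.0, -100.0, -10.0, 0.0, 10.0, 100.0, 1000.0])\n",
    "\n",
    "# The two lines below should print identical tensors, with no NaN entries\n",
    "print(torch.sigmoid(x))\n",
    "print(sigmoid(x))"
   ]
  },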
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Problem 2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(a) Consider the tanh function $$\\tanh(x)=\\frac{e^x-e^{-x}}{e^x+e^{-x}}$$\n",
    "\n",
    "Using PyTorch, write a function `tanh(x)` that takes a Tensor `x` and returns the value of the tanh function at `x`. Calling `torch.tanh()` directly is not allowed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import torch\n",
    "def tanh(x):\n",
    "    return (torch.exp(x) - torch.exp(-x)) / (torch.exp(x) + torch.exp(-x))\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A simple test:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([-1.0000, -1.0000, -1.0000, -0.7616,  0.0000,  0.7616,  1.0000,  1.0000,\n",
      "         1.0000])\n",
      "tensor([    nan,     nan, -1.0000, -0.7616,  0.0000,  0.7616,  1.0000,     nan,\n",
      "            nan])\n"
     ]
    }
   ],
   "source": [
    "x = torch.tensor([-1000.0, -100.0, -10.0, -1.0, 0.0, 1.0, 10.0, 100.0, 1000.0])\n",
    "\n",
    "# PyTorch built-in function\n",
    "print(torch.tanh(x))\n",
    "\n",
    "# The function implemented above\n",
    "print(tanh(x))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(b) If anomalous values appear, think about the possible cause. Then try implementing the tanh function again. If everything looks fine, you may skip this part."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Possible cause: for x = 100 or 1000, torch.exp(x) overflows float32\n",
    "# (max ~3.4e38) to inf, so the expression becomes (inf - 0) / (inf + 0)\n",
    "# = inf / inf = NaN; symmetrically, exp(-x) overflows for large negative x.\n",
    "# Rewriting tanh(x) = 2*sigmoid(2x) - 1 = 2 / (1 + e^(-2x)) - 1 avoids the\n",
    "# inf/inf form: the denominator can at worst become inf, and 2 / inf = 0.\n",
    "\n",
    "def tanh(x):\n",
    "    return 2 / (1 + torch.exp(-2 * x)) - 1\n"
   ]
  },
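  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick re-check of the stabilized `tanh` (a sketch beyond what the assignment asks for): the NaN entries previously seen at x = ±100 and ±1000 should now be gone:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.tensor([-1000.0, -100.0, -10.0, -1.0, 0.0, 1.0, 10.0, 100.0, 1000.0])\n",
    "\n",
    "# Both lines should print the same values, with no NaN entries\n",
    "print(torch.tanh(x))\n",
    "print(tanh(x))"
   ]
  },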
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Problem 3"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(a) Consider the softplus function $$\\mathrm{softplus}(x)=\\log(1+e^x)$$\n",
    "\n",
    "Using PyTorch, write a function `softplus(x)` that takes a Tensor `x` and returns the value of the softplus function at `x`. Calling `torch.nn.functional.softplus()` directly is not allowed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "def softplus(x):\n",
    "    return torch.log(1 + torch.exp(x))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A simple test:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([0.0000e+00, 3.7835e-44, 4.5399e-05, 6.9315e-01, 1.0000e+01, 1.0000e+02,\n",
      "        1.0000e+03])\n",
      "tensor([0.0000e+00, 0.0000e+00, 4.5418e-05, 6.9315e-01, 1.0000e+01,        inf,\n",
      "               inf])\n"
     ]
    }
   ],
   "source": [
    "x = torch.tensor([-1000.0, -100.0, -10.0, 0.0, 10.0, 100.0, 1000.0])\n",
    "\n",
    "# PyTorch built-in function\n",
    "print(torch.nn.functional.softplus(x))\n",
    "\n",
    "# The function implemented above\n",
    "print(softplus(x))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(b) If anomalous values appear, think about the possible cause. Then try implementing the softplus function again. If everything looks fine, you may skip this part."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Possible cause: exp(100) ≈ 2.7e43 already exceeds the float32 maximum\n",
    "# (~3.4e38) and overflows to inf, as does exp(1000), so log(1 + exp(x))\n",
    "# returns inf for large x.  But for large x, log(1 + e^x) ≈ x, so we can\n",
    "# return x directly there; elsewhere torch.log1p(torch.exp(x)) remains\n",
    "# accurate even when exp(x) is tiny.\n",
    "\n",
    "def softplus(x):\n",
    "    return torch.where(x > 20, x, torch.log1p(torch.exp(x)))\n"
   ]
  },
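  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Re-running the test on the stabilized `softplus` (a sketch beyond what the assignment asks for) should eliminate the `inf` entries; for x > 20 the approximation log(1+e^x) ≈ x is already exact to float32 precision:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.tensor([-1000.0, -100.0, -10.0, 0.0, 10.0, 100.0, 1000.0])\n",
    "\n",
    "# Both lines should agree, with no inf entries\n",
    "print(torch.nn.functional.softplus(x))\n",
    "print(softplus(x))"
   ]
  },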
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Problem 4"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In Problem 2 of Homework 2, to compute the loss function we first computed $\\hat{\\rho}=\\mathrm{sigmoid}(X\\beta)$ and then evaluated $l(y,\\hat{\\rho})=-y\\log \\hat{\\rho} - (1-y) \\cdot \\log(1-\\hat{\\rho})$. When $\\hat{\\rho}$ is very close to 0 or 1, this can trigger a $\\log(0)$ error. Based on the results of Problems 1 and 3, think about whether a more numerically stable algorithm exists, and rewrite the loss function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "def loss_fn_logistic(bhat, x, y):\n",
    "    # Work directly with the logits z = x @ bhat.  Using\n",
    "    # log(sigmoid(z)) = -softplus(-z) and log(1 - sigmoid(z)) = -softplus(z),\n",
    "    # the per-sample loss simplifies to softplus(z) - y * z, which the\n",
    "    # stable softplus from Problem 3 evaluates without ever taking log(0).\n",
    "    z = torch.matmul(x, bhat)\n",
    "    return torch.mean(softplus(z) - y * z)\n"
   ]
  },
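  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check (a sketch with made-up random data; the names `xc`, `bc`, `yc` are introduced here only for this check), `loss_fn_logistic` should agree, up to numerical tolerance, with PyTorch's own numerically stable `torch.nn.functional.binary_cross_entropy_with_logits`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Random data for the check only\n",
    "torch.manual_seed(0)\n",
    "xc = torch.randn(50, 3)\n",
    "bc = torch.randn(3)\n",
    "yc = (torch.rand(50) > 0.5).float()\n",
    "\n",
    "ours = loss_fn_logistic(bc, xc, yc)\n",
    "ref = torch.nn.functional.binary_cross_entropy_with_logits(torch.matmul(xc, bc), yc)\n",
    "print(ours.item(), ref.item())  # the two values should match closely"
   ]
  },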
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Feedforward Neural Networks"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Problem 5"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Using modular programming (see the implementation in the lecture notebook `lec5-fnn.ipynb`), build a logistic regression model (including an intercept term) on the simulated data below, and estimate the regression coefficients with automatic differentiation and gradient descent. The model must be built with PyTorch's `nn.Linear` module."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[ -8.0600,  -9.6046],\n",
      "        [ -0.1087, -11.9803],\n",
      "        [-10.0556,  -0.2010],\n",
      "        [-12.1487,  10.7105],\n",
      "        [  5.6814,   9.5508]])\n",
      "tensor([0., 0., 0., 1., 1.])\n"
     ]
    }
   ],
   "source": [
    "# From https://gist.github.com/45deg/e731d9e7f478de134def5668324c44c5\n",
    "import math\n",
    "import numpy as np\n",
    "import torch\n",
    "\n",
    "def gen_data(n):\n",
    "    theta = np.sqrt(np.random.rand(n)) * 2 * math.pi\n",
    "\n",
    "    r_a = 2 * theta + math.pi\n",
    "    data_a = np.array([np.cos(theta) * r_a, np.sin(theta) * r_a]).T\n",
    "    x_a = data_a + np.random.randn(n, 2)\n",
    "    \n",
    "    r_b = -2 * theta - math.pi\n",
    "    data_b = np.array([np.cos(theta) * r_b, np.sin(theta) * r_b]).T\n",
    "    x_b = data_b + np.random.randn(n, 2)\n",
    "    \n",
    "    res_a = np.append(x_a, np.zeros((n, 1)), axis=1)\n",
    "    res_b = np.append(x_b, np.ones((n, 1)), axis=1)\n",
    "    \n",
    "    res = np.append(res_a, res_b, axis=0)\n",
    "    np.random.shuffle(res)\n",
    "    \n",
    "    x = torch.tensor(res[:, :2], dtype=torch.float)\n",
    "    y = torch.tensor(res[:, 2], dtype=torch.float)\n",
    "    return x, y\n",
    "\n",
    "np.random.seed(123456)\n",
    "torch.random.manual_seed(123456)\n",
    "n = 1000\n",
    "x, y = gen_data(n)\n",
    "print(x[:5])\n",
    "print(y[:5])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import seaborn as sns\n",
    "\n",
    "dat = pd.DataFrame(x.numpy(), columns=[\"x1\", \"x2\"])\n",
    "dat[\"y\"] = y.numpy()\n",
    "sns.scatterplot(data=dat, x=\"x1\", y=\"x2\", hue=\"y\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# gen_data is defined in the data-generation cell above; re-seeding keeps\n",
    "# both the data and the weight initialization reproducible.\n",
    "np.random.seed(123456)\n",
    "torch.manual_seed(123456)\n",
    "x, y = gen_data(1000)\n",
    "\n",
    "model = nn.Linear(2, 1)\n",
    "\n",
    "# Note: nn.BCEWithLogitsLoss on the raw logits would be the numerically\n",
    "# stable choice (cf. Problem 4); BCELoss on the probabilities is used here.\n",
    "loss_fn = nn.BCELoss()\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=0.1)\n",
    "\n",
    "for epoch in range(10):\n",
    "    logits = model(x).squeeze()  # (N,)\n",
    "    probs = torch.sigmoid(logits)\n",
    "    loss = loss_fn(probs, y)\n",
    "\n",
    "    optimizer.zero_grad()\n",
    "    loss.backward()\n",
    "    optimizer.step()\n",
    "\n",
    "    print(f\"Epoch {epoch}: Loss = {loss.item():.4f}\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After training, use the fitted model to predict on the test data below (probability > 0.5 is classified as 1, otherwise 0) and compute the classification accuracy."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "np.random.seed(654321)\n",
    "torch.random.manual_seed(654321)\n",
    "\n",
    "ntest = 200\n",
    "xtest, ytest = gen_data(ntest)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Test accuracy: 67.50%\n"
     ]
    }
   ],
   "source": [
    "# Reuse the test data (xtest, ytest) generated in the previous cell\n",
    "\n",
    "# Model prediction (probability > 0.5 -> class 1)\n",
    "with torch.no_grad(): \n",
    "    logits = model(xtest).squeeze()\n",
    "    probs = torch.sigmoid(logits)\n",
    "    preds = (probs > 0.5).float() \n",
    "\n",
    "accuracy = (preds == ytest).float().mean().item()\n",
    "print(f\"Test accuracy: {accuracy * 100:.2f}%\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Problem 6"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Modify the linear model from Problem 5 into a two-layer feedforward neural network with 32 hidden units and the ReLU activation function. Retrain the model (feel free to try different learning rates and numbers of iterations), predict on the test set, and compute the classification accuracy (the target is >90%)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 0: Loss = 1.0332\n",
      "Epoch 20: Loss = 0.4758\n",
      "Epoch 40: Loss = 0.4541\n",
      "Epoch 60: Loss = 0.4216\n",
      "Epoch 80: Loss = 0.3768\n",
      "Epoch 100: Loss = 0.3193\n",
      "Epoch 120: Loss = 0.2598\n",
      "Epoch 140: Loss = 0.2061\n",
      "Epoch 160: Loss = 0.1613\n",
      "Epoch 180: Loss = 0.1255\n",
      "\n",
      "Test accuracy: 97.75%\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import numpy as np\n",
    "\n",
    "# gen_data is the same function defined earlier and is reused here;\n",
    "# re-seeding keeps the data and the weight initialization reproducible.\n",
    "np.random.seed(123456)\n",
    "torch.manual_seed(123456)\n",
    "x, y = gen_data(1000)\n",
    "\n",
    "class MLP(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.hidden = nn.Linear(2, 32)\n",
    "        self.output = nn.Linear(32, 1)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = F.relu(self.hidden(x))\n",
    "        x = torch.sigmoid(self.output(x))\n",
    "        return x\n",
    "\n",
    "model = MLP()\n",
    "loss_fn = nn.BCELoss()\n",
    "optimizer = torch.optim.Adam(model.parameters(), lr=0.01)\n",
    "\n",
    "n_epoch = 200\n",
    "for epoch in range(n_epoch):\n",
    "    y_pred = model(x).squeeze()\n",
    "    loss = loss_fn(y_pred, y)\n",
    "    \n",
    "    optimizer.zero_grad()\n",
    "    loss.backward()\n",
    "    optimizer.step()\n",
    "\n",
    "    if epoch % 20 == 0:\n",
    "        print(f\"Epoch {epoch}: Loss = {loss.item():.4f}\")\n",
    "\n",
    "# Reuse the test data (xtest, ytest) generated in Problem 5.\n",
    "\n",
    "with torch.no_grad():\n",
    "    y_pred_test = model(xtest).squeeze()\n",
    "    preds = (y_pred_test > 0.5).float()\n",
    "    acc = (preds == ytest).float().mean().item()\n",
    "\n",
    "print(f\"\\nTest accuracy: {acc * 100:.2f}%\")\n"
   ]
   }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
