{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "636a370d-071d-433e-a160-9c901430cef8",
   "metadata": {},
   "source": [
    "Experiment: Building and Training a Multilayer Perceptron (MLP)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f93655ec-01d8-43a1-8193-b56b89c42bf8",
   "metadata": {},
   "source": [
    "Objectives:\n",
    "1. Understand the basic structure and working principle of the multilayer perceptron (MLP).\n",
    "2. Learn to build and train an MLP model with the PyTorch framework.\n",
    "3. Master parameter initialization, forward propagation, loss computation, backpropagation, and parameter updates in PyTorch.\n",
    "4. Explore how different optimization algorithms and learning rates affect training."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ab4eea23-7717-4fa6-b2bd-7afc0bb3ca68",
   "metadata": {},
   "source": [
    "Environment:\n",
    "Python 3.11.4\n",
    "PyTorch 2.4.0\n",
    "VS Code 1.93.1\n",
    "GPU acceleration"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "86f99226-a4de-4616-bdac-15d242f39d62",
   "metadata": {},
   "source": [
    "Procedure\n",
    "1. Background study:\n",
    "Read about the basic concepts and structure of the multilayer perceptron (MLP).\n",
    "Learn how Tensors and the automatic differentiation (autograd) mechanism work in PyTorch. (Note: Variable was merged into Tensor in PyTorch 0.4 and is deprecated.)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dabf962c-33a6-4c10-933c-c1a508fe44e7",
   "metadata": {},
   "source": [
    "2. Environment setup:\n",
    "Install and configure a Python environment.\n",
    "Install PyTorch and related libraries."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a1626a28-a015-4884-9b27-a93c4d0e8ce5",
   "metadata": {},
   "source": [
    "3. Model construction:\n",
    "Build a two-layer MLP model with PyTorch's nn.Module.\n",
    "Understand and implement the model's forward pass."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "bdc38925-2f5d-4bb1-923e-6f5e969a2605",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "# Variable is deprecated since PyTorch 0.4; plain Tensors support autograd\n",
    "# directly, so this import is kept only for reference.\n",
    "from torch.autograd import Variable"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "06aa3940-3648-4bdc-863b-ef1ec7b4e428",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The model must subclass nn.Module\n",
    "class TwoLayerNet(nn.Module):\n",
    "    def __init__(self, input_size, hidden_size, output_size):\n",
    "        \"\"\"\n",
    "        Use nn.Sequential where possible when building models;\n",
    "        it keeps the structure clear.\n",
    "        \"\"\"\n",
    "        super(TwoLayerNet, self).__init__()\n",
    "        self.twolayernet = nn.Sequential(\n",
    "            nn.Linear(input_size, hidden_size),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(hidden_size, output_size),\n",
    "        )\n",
    "\n",
    "    def forward(self, x):\n",
    "        \"\"\"\n",
    "        forward receives an input Tensor and returns an output Tensor.\n",
    "        \"\"\"\n",
    "        y_pred = self.twolayernet(x)\n",
    "        return y_pred"
   ]
  },
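  {
   "cell_type": "markdown",
   "id": "3f2a9c41-5b77-4c10-9d2e-7a1b6e0f4d21",
   "metadata": {},
   "source": [
    "As a quick sanity check (an addition, not part of the original experiment), the forward pass of a two-layer MLP like the one above can be verified with small, hypothetical sizes — 8 inputs, 4 hidden units, 2 outputs:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8c1d2e3f-4a5b-4c6d-8e9f-0a1b2c3d4e5f",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# Minimal self-contained shape check for a two-layer MLP\n",
    "mlp = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))\n",
    "out = mlp(torch.randn(3, 8))  # batch of 3 samples\n",
    "print(out.shape)  # torch.Size([3, 2])"
   ]
  },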
  {
   "cell_type": "markdown",
   "id": "8ea6a241-ab98-4755-b10a-e1133a1c0026",
   "metadata": {},
   "source": [
    "4. Parameter initialization:\n",
    "Initialize the model parameters, including weights and biases; here nn.Linear's default initialization is used."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "b122d2c1-1e39-4485-be24-6f5668ca403b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# M is the number of samples; input_size, hidden_size, and output_size\n",
    "# are the sizes of the input, hidden, and output layers\n",
    "M, input_size, hidden_size, output_size = 64, 1000, 100, 10\n",
    "\n",
    "# Generate random tensors as sample data. requires_grad defaults to False,\n",
    "# so no gradients are computed for x and y during backpropagation.\n",
    "# (Variable wrappers are no longer needed since PyTorch 0.4.)\n",
    "x = torch.randn(M, input_size)\n",
    "y = torch.randn(M, output_size)\n",
    "\n",
    "model = TwoLayerNet(input_size, hidden_size, output_size)"
   ]
  },
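  {
   "cell_type": "markdown",
   "id": "a7b3c5d9-1e2f-4a6b-8c0d-3e5f7a9b1c2d",
   "metadata": {},
   "source": [
    "The model above relies on nn.Linear's default initialization. If explicit initialization is needed, torch.nn.init can be applied to a layer's weights and biases — a minimal sketch, separate from the experiment's model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b8c4d6e0-2f3a-4b7c-9d1e-4f6a8b0c2d3e",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# Explicitly re-initialize a linear layer: Xavier-uniform weights, zero biases\n",
    "layer = nn.Linear(1000, 100)\n",
    "nn.init.xavier_uniform_(layer.weight)\n",
    "nn.init.zeros_(layer.bias)\n",
    "print(layer.bias.abs().sum().item())  # 0.0"
   ]
  },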
  {
   "cell_type": "markdown",
   "id": "968b07e0-8ec0-46dd-855d-9f2a3b300d56",
   "metadata": {},
   "source": [
    "5. Loss function:\n",
    "Define mean squared error (MSE) as the loss function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "e4296b72-c9e4-4843-93a4-6f1f40ab99b2",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define the loss function; reduction='sum' sums squared errors over the batch\n",
    "loss_fn = nn.MSELoss(reduction='sum')\n",
    "\n",
    "## Hyperparameters ##\n",
    "learning_rate = 1e-4\n",
    "EPOCH = 300"
   ]
  },
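  {
   "cell_type": "markdown",
   "id": "c9d5e7f1-3a4b-4c8d-0e2f-5a7b9c1d3e4f",
   "metadata": {},
   "source": [
    "With reduction='sum', nn.MSELoss sums the squared errors over all elements instead of averaging them (the default 'mean'), so the printed loss scales with batch size. A small illustration, using arbitrary numbers:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d0e6f8a2-4b5c-4d9e-1f3a-6b8c0d2e4f5a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# 'sum' adds up squared errors; 'mean' (the default) averages them\n",
    "pred = torch.tensor([1.0, 2.0, 3.0])\n",
    "target = torch.tensor([1.0, 2.0, 5.0])\n",
    "print(nn.MSELoss(reduction='sum')(pred, target).item())   # (0 + 0 + 4) = 4.0\n",
    "print(nn.MSELoss(reduction='mean')(pred, target).item())  # 4 / 3 ≈ 1.333"
   ]
  },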
  {
   "cell_type": "markdown",
   "id": "97a28a58-c8c1-4adf-a4a6-241a4c78307f",
   "metadata": {},
   "source": [
    "6. Optimizer:\n",
    "Use stochastic gradient descent (SGD) as the optimization algorithm."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "5b313673-b6e6-47cc-8abd-c999a585c84c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The optim package defines optimizers that update model parameters\n",
    "# automatically; here we use stochastic gradient descent (SGD).\n",
    "# The first argument tells the optimizer which parameters to update,\n",
    "# and the second is the learning rate.\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)"
   ]
  },
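  {
   "cell_type": "markdown",
   "id": "e1f7a9b3-5c6d-4e0f-2a4b-7c9d1e3f5a6b",
   "metadata": {},
   "source": [
    "Under the hood, one optimizer.step() for plain SGD just performs p ← p − lr · grad for every parameter. A single-parameter illustration with hypothetical numbers, not part of the experiment itself:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f2a8b0c4-6d7e-4f1a-3b5c-8d0e2f4a6b7c",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# One SGD step on a single parameter: w <- w - lr * dL/dw\n",
    "w = torch.tensor([2.0], requires_grad=True)\n",
    "opt = torch.optim.SGD([w], lr=0.1)\n",
    "\n",
    "loss = (w ** 2).sum()  # L = w^2, so dL/dw = 2w = 4.0\n",
    "loss.backward()\n",
    "opt.step()             # w <- 2.0 - 0.1 * 4.0\n",
    "print(w.item())        # ≈ 1.6"
   ]
  },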
  {
   "cell_type": "markdown",
   "id": "0de08370-8ca0-462b-9327-0c6800e6d771",
   "metadata": {},
   "source": [
    "7. Training and evaluation:\n",
    "Implement the training loop: forward pass, loss computation, backpropagation, and parameter update.\n",
    "Record and print the loss during training.\n",
    "Analyze how the loss evolves over the course of training."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "246ff3bc-3547-4a26-a205-a81204049978",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.0007909746491350234\n",
      "0.00020309886895120144\n",
      "5.3305484470911324e-05\n",
      "1.4196016309142578e-05\n",
      "3.8190732993825804e-06\n",
      "1.034470983540814e-06\n"
     ]
    }
   ],
   "source": [
    "## Training loop ##\n",
    "for t in range(EPOCH):\n",
    "\n",
    "    # Forward pass\n",
    "    y_pred = model(x)\n",
    "\n",
    "    # Compute the loss\n",
    "    loss = loss_fn(y_pred, y)\n",
    "\n",
    "    # Print the loss every 50 epochs\n",
    "    if (t+1) % 50 == 0:\n",
    "        print(loss.item())\n",
    "\n",
    "    # Clear accumulated gradients before the backward pass\n",
    "    optimizer.zero_grad()\n",
    "\n",
    "    # Compute gradients\n",
    "    loss.backward()\n",
    "\n",
    "    # Update the parameters\n",
    "    optimizer.step()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0ffb6828-2ebe-467f-ac8e-545dff01a8fb",
   "metadata": {},
   "source": [
    "8. Analysis:\n",
    "Learning rate: if the learning rate is too large, training may overshoot the optimum and fail to converge; if it is too small, convergence becomes very slow and training may stall near a poor local minimum.\n",
    "Optimizer: different optimizers (e.g., SGD, Adam, RMSprop) affect training speed and final performance differently; the best choice for a given task is usually found experimentally.\n",
    "Loss function: choosing a suitable loss function is crucial, and different losses fit different tasks and data distributions. For example, mean squared error (MSE) suits regression, while cross-entropy loss (CrossEntropyLoss) suits classification."
   ]
  },
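  {
   "cell_type": "markdown",
   "id": "a3b9c1d5-7e8f-4a2b-4c6d-9e1f3a5b7c8d",
   "metadata": {},
   "source": [
    "To make the exploration above concrete, the same loop can be rerun with different optimizers on shared data. The sketch below is self-contained and uses smaller, arbitrarily chosen layer sizes than the experiment; it illustrates the comparison rather than serving as a definitive benchmark:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b4c0d2e6-8f9a-4b3c-5d7e-0f2a4b6c8d9e",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "torch.manual_seed(0)\n",
    "\n",
    "# Shared random regression task (sizes chosen arbitrarily for speed)\n",
    "x = torch.randn(64, 100)\n",
    "y = torch.randn(64, 10)\n",
    "\n",
    "def train(optimizer_cls, lr, epochs=300):\n",
    "    model = nn.Sequential(nn.Linear(100, 50), nn.ReLU(), nn.Linear(50, 10))\n",
    "    loss_fn = nn.MSELoss(reduction='sum')\n",
    "    optimizer = optimizer_cls(model.parameters(), lr=lr)\n",
    "    for _ in range(epochs):\n",
    "        loss = loss_fn(model(x), y)\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "    return loss.item()\n",
    "\n",
    "# Typical learning rates differ by optimizer; these values are illustrative\n",
    "print('SGD :', train(torch.optim.SGD, lr=1e-4))\n",
    "print('Adam:', train(torch.optim.Adam, lr=1e-3))"
   ]
  },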
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ee4971bc-c0da-40aa-872d-fc01e3db3c5e",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
