{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# FashionMNIST Image Classification - Network Design and Automated Hyperparameter Tuning\n",
    "\n",
    "## Task\n",
    "\n",
    "Your task in this assignment is to design your own CNN model and use it to classify the images in FashionMNIST. While designing the model, you need to make sensible choices about the network depth, the number of convolution kernels in each layer, activation functions, pooling operations, regularization methods, and so on.\n",
    "\n",
    "Once the model is designed, finding good hyperparameters by hand is tedious, so an automated tuning tool is needed.\n",
    "\n",
    "## Evaluation\n",
    "\n",
    "Requirements:\n",
    "- To save time, training is limited to at most 20 epochs\n",
    "- An oversized model is pointless, so the model is limited to at most 5M parameters; you can use the `count_parameters` function to count a model's parameters\n",
    "- You must use a hyperparameter optimization tool to tune automatically, and show the tuning process and results. [optuna](https://optuna.org/) is recommended, but you may also use another tool, or a hand-written `HyperparameterTuner`\n",
    "\n",
    "\n",
    "Grading criteria:\n",
    "- A test-set accuracy (best value over the run) of 90% or above earns 90 points; the remaining 10 points are awarded by ranking\n",
    "- If accuracy is below 90%, points are awarded based on code completeness and accuracy\n",
    "\n",
    "Notes:\n",
    "- You may freely modify the data loading function `get_dataloaders`, e.g. to add extra data augmentation\n",
    "- You may freely modify the code in the `Hyperparameter Search` and `Training with the Best Hyperparameters` sections, e.g. to use a different optimizer or learning rate scheduler\n",
    "- In general, you should not need to modify the `train_epoch` and `eval_model` functions\n",
    "- You may not use pretrained weights; every training run must start from scratch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "import torchvision\n",
    "import torchvision.transforms as transforms\n",
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "import optuna\n",
    "import pandas as pd\n",
    "from torch.utils.data import DataLoader\n",
    "from tqdm import tqdm\n",
    "\n",
    "# Device selection\n",
    "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
    "device"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# Data loading function\n",
    "def get_dataloaders(batch_size):\n",
    "    train_trans = transforms.Compose(\n",
    "        [\n",
    "            # transforms.RandomHorizontalFlip(),\n",
    "            transforms.ToTensor(),\n",
    "            transforms.Normalize((0.5,), (0.5,)),\n",
    "        ]\n",
    "    )\n",
    "    test_trans = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])\n",
    "    trainset = torchvision.datasets.FashionMNIST(root='./data', train=True, download=True, transform=train_trans)\n",
    "    testset = torchvision.datasets.FashionMNIST(root='./data', train=False, download=True, transform=test_trans)\n",
    "    trainloader = DataLoader(trainset, batch_size=batch_size, shuffle=True)\n",
    "    testloader = DataLoader(testset, batch_size=batch_size, shuffle=False)\n",
    "    return trainloader, testloader"
   ]
  },
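  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you add extra augmentation in `get_dataloaders`, note that some transforms operate on PIL images and others on tensors. As a rough sketch (these particular transforms are one possible choice, not part of the template), tensor-level augmentation can be composed after `ToTensor`:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import torchvision.transforms as transforms\n",
    "\n",
    "# A possible augmented training transform (a sketch, not the required setup).\n",
    "# RandomErasing operates on tensors only, so it must come after ToTensor.\n",
    "aug_train_trans = transforms.Compose([\n",
    "    transforms.ToTensor(),                 # HxWxC uint8 -> CxHxW float in [0, 1]\n",
    "    transforms.RandomHorizontalFlip(),     # tensor-compatible flip\n",
    "    transforms.RandomErasing(p=0.25),      # random occlusion\n",
    "    transforms.Normalize((0.5,), (0.5,)),\n",
    "])\n",
    "\n",
    "# Shape check on a dummy 28x28 grayscale image\n",
    "dummy = (np.random.rand(28, 28, 1) * 255).astype(np.uint8)\n",
    "out = aug_train_trans(dummy)\n",
    "print(out.shape)  # torch.Size([1, 28, 28])\n",
    "```"
   ]
  },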
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## CNN Network Design\n",
    "\n",
    "There is no single correct network design, but there are useful rules of thumb. You can draw on classic architectures such as VGG, ResNet, and DenseNet, or on more recent work such as MobileNet and RegNet.\n",
    "\n",
    "A smart shortcut is to consult the Torchvision model documentation, [Models and pre-trained weights — Torchvision 0.17 documentation](https://pytorch.org/vision/stable/models.html#classification): the models listed there have been thoroughly validated and make good references, and the \"Table of all available classification weights\" lists each model's parameter count, which helps you filter out models that are too large.\n",
    "\n",
    "Below is a reference architecture. It looks rather plain, but with suitable data augmentation and tuning it too can reach 90% accuracy.\n",
    "\n",
    "| Layer | Type        | Input Shape               | Output Shape              |\n",
    "| ----- | ----------- | ------------------------- | ------------------------- |\n",
    "| 1     | Conv2D      | (Batch Size, 1, 28, 28)   | (Batch Size, 32, 28, 28)  |\n",
    "| 2     | ReLU        | (Batch Size, 32, 28, 28)  | (Batch Size, 32, 28, 28)  |\n",
    "| 3     | MaxPool2D   | (Batch Size, 32, 28, 28)  | (Batch Size, 32, 14, 14)  |\n",
    "| 4     | Conv2D      | (Batch Size, 32, 14, 14)  | (Batch Size, 64, 14, 14)  |\n",
    "| 5     | ReLU        | (Batch Size, 64, 14, 14)  | (Batch Size, 64, 14, 14)  |\n",
    "| 6     | MaxPool2D   | (Batch Size, 64, 14, 14)  | (Batch Size, 64, 7, 7)    |\n",
    "| 7     | Flatten     | (Batch Size, 64, 7, 7)    | (Batch Size, 64\\*7\\*7)      |\n",
    "| 8     | Linear      | (Batch Size, 64\\*7\\*7)      | (Batch Size, 128)         |\n",
    "| 9     | ReLU        | (Batch Size, 128)         | (Batch Size, 128)         |\n",
    "| 10    | Dropout     | (Batch Size, 128)         | (Batch Size, 128)         |\n",
    "| 11    | Linear      | (Batch Size, 128)         | (Batch Size, 10)          |\n",
    "\n"
   ]
  },
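  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The table above can be sketched in PyTorch as follows. This assumes 3x3 convolutions with padding 1, which the table's shapes permit but do not mandate; treat it as a starting point rather than the intended answer.\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class ReferenceCNN(nn.Module):\n",
    "    \"\"\"Sketch of the reference table: two conv blocks plus a small MLP head.\"\"\"\n",
    "    def __init__(self, dropout=0.5):\n",
    "        super().__init__()\n",
    "        self.features = nn.Sequential(\n",
    "            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # (1, 28, 28) -> (32, 28, 28)\n",
    "            nn.ReLU(),\n",
    "            nn.MaxPool2d(2),                              # -> (32, 14, 14)\n",
    "            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # -> (64, 14, 14)\n",
    "            nn.ReLU(),\n",
    "            nn.MaxPool2d(2),                              # -> (64, 7, 7)\n",
    "        )\n",
    "        self.classifier = nn.Sequential(\n",
    "            nn.Flatten(),\n",
    "            nn.Linear(64 * 7 * 7, 128),\n",
    "            nn.ReLU(),\n",
    "            nn.Dropout(dropout),\n",
    "            nn.Linear(128, 10),\n",
    "        )\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.classifier(self.features(x))\n",
    "\n",
    "# Shape and parameter-count sanity check\n",
    "net = ReferenceCNN()\n",
    "out = net(torch.zeros(2, 1, 28, 28))\n",
    "n_params = sum(p.numel() for p in net.parameters())\n",
    "print(out.shape, n_params)  # torch.Size([2, 10]) 421642\n",
    "```"
   ]
  },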
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# CNN model\n",
    "class CNN(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        # TODO\n",
    "\n",
    "    def forward(self, x):\n",
    "        # TODO\n",
    "        \n",
    "        return x\n",
    "    \n",
    "def count_parameters(model):\n",
    "    total = sum([param.nelement() for param in model.parameters()])\n",
    "    print(\"Number of parameters: %.2fM\" % (total / 1e6))\n",
    "    return total\n",
    "\n",
    "model_test = CNN()\n",
    "total = count_parameters(model_test)"
   ]
  },
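  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For intuition about the 5M budget, the reference table's parameter count can be worked out by hand (assuming 3x3 kernels; only the Conv2D and Linear layers carry parameters):\n",
    "\n",
    "```python\n",
    "def conv2d_params(c_in, c_out, k):\n",
    "    # weights (c_out * c_in * k * k) plus one bias per output channel\n",
    "    return c_out * c_in * k * k + c_out\n",
    "\n",
    "def linear_params(n_in, n_out):\n",
    "    # weight matrix plus one bias per output unit\n",
    "    return n_in * n_out + n_out\n",
    "\n",
    "total = (\n",
    "    conv2d_params(1, 32, 3)           # layer 1\n",
    "    + conv2d_params(32, 64, 3)        # layer 4\n",
    "    + linear_params(64 * 7 * 7, 128)  # layer 8\n",
    "    + linear_params(128, 10)          # layer 11\n",
    ")\n",
    "print(total)  # 421642, i.e. about 0.42M -- far below the 5M limit\n",
    "```"
   ]
  },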
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "##################################################\n",
    "# Do not modify the code in this cell\n",
    "##################################################\n",
    "\n",
    "# Accumulator for evaluation metrics\n",
    "class Accumulator:\n",
    "    def __init__(self, n):\n",
    "        self.data = [0.0] * n\n",
    "\n",
    "    def add(self, *args):\n",
    "        self.data = [a + float(b) for a, b in zip(self.data, args)]\n",
    "\n",
    "    def reset(self):\n",
    "        self.data = [0.0] * len(self.data)\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        return self.data[idx]\n",
    "\n",
    "# Compute classification accuracy\n",
    "def accuracy(y_hat, y_true):\n",
    "    y_pred = y_hat.argmax(dim=1)\n",
    "    return (y_pred == y_true).float().mean().item()\n",
    "\n",
    "\n",
    "\n",
    "# Train for one epoch\n",
    "def train_epoch(net, train_iter, loss_fn, optimizer):\n",
    "    net.train()\n",
    "    device = next(net.parameters()).device\n",
    "    metrics = Accumulator(3)\n",
    "    for X, y in train_iter:\n",
    "        X, y = X.to(device), y.to(device)\n",
    "        optimizer.zero_grad()\n",
    "        y_hat = net(X)\n",
    "        loss = loss_fn(y_hat, y)\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "        metrics.add(loss.item() * len(y), accuracy(y_hat, y) * len(y), len(y))\n",
    "    return metrics[0] / metrics[2], metrics[1] / metrics[2]\n",
    "\n",
    "# Evaluation function\n",
    "@torch.no_grad()\n",
    "def eval_model(net, test_iter, loss_fn):\n",
    "    net.eval()\n",
    "    device = next(net.parameters()).device\n",
    "    metrics = Accumulator(3)\n",
    "    for X, y in test_iter:\n",
    "        X, y = X.to(device), y.to(device)\n",
    "        y_hat = net(X)\n",
    "        loss = loss_fn(y_hat, y)\n",
    "        metrics.add(loss.item() * len(y), accuracy(y_hat, y) * len(y), len(y))\n",
    "    return metrics[0] / metrics[2], metrics[1] / metrics[2]\n"
   ]
  },
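  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see how `train_epoch` and `eval_model` turn per-batch statistics into epoch averages, here is a small standalone illustration (the `Accumulator` class is restated so the snippet is self-contained). Each batch contributes `loss * n`, `accuracy * n`, and `n`, so dividing at the end gives sample-weighted means even when the last batch is smaller:\n",
    "\n",
    "```python\n",
    "class Accumulator:\n",
    "    \"\"\"Running sums over n metrics (same as the class above).\"\"\"\n",
    "    def __init__(self, n):\n",
    "        self.data = [0.0] * n\n",
    "\n",
    "    def add(self, *args):\n",
    "        self.data = [a + float(b) for a, b in zip(self.data, args)]\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        return self.data[idx]\n",
    "\n",
    "metrics = Accumulator(3)\n",
    "# batch of 64 samples: mean loss 0.50, accuracy 0.90\n",
    "metrics.add(0.50 * 64, 0.90 * 64, 64)\n",
    "# final, smaller batch of 32 samples: mean loss 0.70, accuracy 0.84\n",
    "metrics.add(0.70 * 32, 0.84 * 32, 32)\n",
    "\n",
    "epoch_loss = metrics[0] / metrics[2]  # sample-weighted, not a plain mean of batch means\n",
    "epoch_acc = metrics[1] / metrics[2]\n",
    "print(round(epoch_loss, 4), round(epoch_acc, 4))  # 0.5667 0.88\n",
    "```"
   ]
  },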
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Hyperparameter Search\n",
    "\n",
    "Below is a rough framework for automated tuning with Optuna; you can also swap in a different auto-tuning library.\n",
    "Ranked by ease of use, the recommendation is [Optuna](https://optuna.org/) >> [Ax](https://ax.dev/) > [Ray Tune](https://docs.ray.io/en/master/tune/index.html)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# # Minimal Optuna example\n",
    "# import optuna\n",
    "# import math\n",
    "\n",
    "# # Objective function\n",
    "# def objective(trial):\n",
    "#     x = trial.suggest_float(\"x\", 0, 2 * math.pi)\n",
    "#     y = trial.suggest_categorical(\"y\", [1, 3, 5, 7, 9])\n",
    "\n",
    "#     value = math.sin(x) + math.log(y + 1)\n",
    "#     return value\n",
    "\n",
    "# # Run the optimization\n",
    "# study = optuna.create_study(direction=\"maximize\")\n",
    "# study.optimize(objective, n_trials=10)\n",
    "\n",
    "# # Print the results\n",
    "# print(\"Best value:\", study.best_value)\n",
    "# print(\"Best parameters:\", study.best_params)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# Optuna hyperparameter optimization\n",
    "def objective(trial):\n",
    "    # TODO\n",
    "    \n",
    "    # (choose the search parameters and their ranges yourself)\n",
    "    \n",
    "    # trainloader, testloader = \n",
    "    # model = \n",
    "    # loss_fn = \n",
    "    # optimizer = \n",
    "    \n",
    "    best_test_acc = 0\n",
    "    for epoch in tqdm(range(epochs)):\n",
    "        train_epoch(model, trainloader, loss_fn, optimizer)\n",
    "        _, test_acc = eval_model(model, testloader, loss_fn)\n",
    "        best_test_acc = max(best_test_acc, test_acc)\n",
    "    \n",
    "    return best_test_acc\n",
    "\n",
    "# Run Optuna\n",
    "study = optuna.create_study(direction='maximize')\n",
    "study.optimize(objective, n_trials=10)\n",
    "\n",
    "# Save the results to a CSV file\n",
    "results_df = pd.DataFrame([{**trial.params, \"test_acc\": trial.value} for trial in study.trials])\n",
    "results_df.to_csv(\"optuna_results.csv\", index=False)\n",
    "\n",
    "# Print the best parameters and result\n",
    "best_trial = study.best_trial\n",
    "print(f\"Best parameters: {best_trial.params}\")\n",
    "print(f\"Best test accuracy: {best_trial.value}\")"
   ]
  },
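  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The notes above mention a hand-written `HyperparameterTuner` as an acceptable alternative to Optuna. A minimal random-search version might look like this (a sketch; the toy objective stands in for a real train-and-evaluate run that would build the dataloaders, model, and optimizer from `config` and return the best test accuracy):\n",
    "\n",
    "```python\n",
    "import random\n",
    "\n",
    "class HyperparameterTuner:\n",
    "    \"\"\"Minimal random search: sample configs, keep the best (maximization).\"\"\"\n",
    "    def __init__(self, search_space, seed=0):\n",
    "        # search_space maps a name to a (low, high) range or a list of choices\n",
    "        self.search_space = search_space\n",
    "        self.rng = random.Random(seed)\n",
    "        self.history = []  # list of (config, value) pairs\n",
    "\n",
    "    def _sample(self):\n",
    "        config = {}\n",
    "        for name, space in self.search_space.items():\n",
    "            if isinstance(space, tuple):\n",
    "                config[name] = self.rng.uniform(*space)\n",
    "            else:\n",
    "                config[name] = self.rng.choice(space)\n",
    "        return config\n",
    "\n",
    "    def optimize(self, objective, n_trials=10):\n",
    "        for _ in range(n_trials):\n",
    "            config = self._sample()\n",
    "            self.history.append((config, objective(config)))\n",
    "        return max(self.history, key=lambda t: t[1])\n",
    "\n",
    "# Toy objective standing in for \"train a model, return its best test accuracy\"\n",
    "def toy_objective(config):\n",
    "    return -(config[\"lr\"] - 0.003) ** 2\n",
    "\n",
    "tuner = HyperparameterTuner({\"lr\": (1e-4, 1e-2), \"batch_size\": [64, 128, 256]})\n",
    "best_config, best_value = tuner.optimize(toy_objective, n_trials=20)\n",
    "```"
   ]
  },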
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Training with the Best Hyperparameters\n",
    "\n",
    "Retrain the model with the best hyperparameters found above, then plot the loss and accuracy curves over training."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Train with the best hyperparameters\n",
    "def train_best_model():\n",
    "    # TODO\n",
    "\n",
    "    # trainloader, testloader = \n",
    "    # model = \n",
    "    # loss_fn = \n",
    "    # optimizer = \n",
    "\n",
    "    train_losses, test_losses, train_accs, test_accs = [], [], [], []\n",
    "    for epoch in range(epochs):\n",
    "        train_loss, train_acc = train_epoch(model, trainloader, loss_fn, optimizer)\n",
    "        test_loss, test_acc = eval_model(model, testloader, loss_fn)\n",
    "        train_losses.append(train_loss)\n",
    "        test_losses.append(test_loss)\n",
    "        train_accs.append(train_acc)\n",
    "        test_accs.append(test_acc)\n",
    "        print(f\"Epoch {epoch+1}: Train Loss {train_loss:.4f}, Train Acc {train_acc:.4f}, Test Loss {test_loss:.4f}, Test Acc {test_acc:.4f}\")\n",
    "    return train_losses, test_losses, train_accs, test_accs\n",
    "\n",
    "# Plot learning curves\n",
    "def plot_learning_curves(train_losses, test_losses, train_accs, test_accs):\n",
    "    fig, axs = plt.subplots(1, 2, figsize=(12, 4))\n",
    "    axs[0].plot(train_losses, label='Train Loss')\n",
    "    axs[0].plot(test_losses, label='Test Loss')\n",
    "    axs[0].set_xlabel('Epoch')\n",
    "    axs[0].set_ylabel('Loss')\n",
    "    axs[0].legend()\n",
    "    axs[1].plot(train_accs, label='Train Accuracy')\n",
    "    axs[1].plot(test_accs, label='Test Accuracy')\n",
    "    axs[1].axhline(y=0.9, color='b', linestyle='--')  # reference line at y = 0.9\n",
    "    axs[1].set_xlabel('Epoch')\n",
    "    axs[1].set_ylabel('Accuracy')\n",
    "    axs[1].legend()\n",
    "    plt.show()\n",
    "\n",
    "# Train the final model\n",
    "train_losses, test_losses, train_accs, test_accs = train_best_model()\n",
    "plot_learning_curves(train_losses, test_losses, train_accs, test_accs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "print(\"Best test accuracy: \", max(test_accs))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
