{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Organizing the dataset\n",
     "\n",
     "Place the dataset under the `datasets/raw/` directory and create two subfolders, `y` and `n`:\n",
     "\n",
     "- `datasets/raw/y/` holds images where the skill is ready\n",
     "- `datasets/raw/n/` holds images where the skill is not ready\n",
     "\n",
     "Input images must be 64x64x3; file names can be anything."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This code defines a `SkillDataset` class for organizing the skill dataset and instantiates it as `dataset`.\n",
     "- `pathlib.Path` is a class for manipulating filesystem paths. `Path(\"datasets/raw\")` creates a `Path` object pointing at the `datasets/raw` directory.\n",
     "- `default_loader` is a function that loads an image: it takes a path, opens the image at that path, and returns it converted to RGB.\n",
     "\n",
     "`SkillDataset` inherits from `Dataset`. Its constructor `__init__` takes a `path` argument, the directory containing the dataset.\n",
     "- `self.y` is a list of paths to all \"skill ready\" images; `self.n` is a list of paths to all \"skill not ready\" images.\n",
     "- `self.transform` is a `torchvision.transforms` pipeline that preprocesses each image: Gaussian blur, random posterization, random sharpening, random auto-contrast, and conversion to a tensor.\n",
     "- `self.loader` is the image-loading function.\n",
     "- `self.data` is a list caching the preprocessed result for every image in the dataset.\n",
     "- `__len__` returns the dataset length: the oversampled count of \"skill ready\" images plus the count of \"skill not ready\" images.\n",
     "- `get` returns the image and label at a given index. If the index is below the oversampled count of \"skill ready\" images, the label is 1; otherwise the label is 0.\n",
     "- `__getitem__` returns the cached image and label at a given index."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "from torch.utils.data import Dataset, DataLoader\n",
    "from torchvision import transforms\n",
    "from PIL import Image\n",
    "\n",
    "dataset_path = Path(\"datasets/raw\")\n",
    "\n",
    "def default_loader(path):\n",
    "    return Image.open(path).convert('RGB')\n",
    "\n",
    "class SkillDataset(Dataset):\n",
    "    def __init__(self, path: Path) -> None:\n",
    "        super().__init__()\n",
    "        self.y = list((path / 'y').glob('**/*'))\n",
    "        self.n = list((path / 'n').glob('**/*'))\n",
    "        self.transform = transforms.Compose([\n",
     "            transforms.GaussianBlur(3, sigma=(0.1, 2.0)), # Gaussian blur, 3x3 kernel; sigma drawn uniformly from [0.1, 2.0]\n",
     "            transforms.RandomPosterize(3),                # randomly posterize to 3 bits per channel\n",
     "            transforms.RandomAdjustSharpness(3),          # randomly sharpen with sharpness factor 3\n",
     "            transforms.RandomAutocontrast(),              # randomly apply auto-contrast\n",
     "            transforms.ToTensor(),                        # convert the image to a PyTorch tensor\n",
    "        ])\n",
    "        self.loader = default_loader\n",
     "        # Skill icons are small, so just load them all into memory up front\n",
    "        self.data = [ self.get(i) for i in range(len(self))]\n",
    "    \n",
    "    def __len__(self):\n",
    "        return self.len_y() + self.len_n()\n",
    "    \n",
    "    def len_y(self):\n",
     "        # 5x oversampling of y (each pass re-applies the random augmentations)\n",
    "        return len(self.y) * 5\n",
    "    \n",
    "    def len_n(self):\n",
    "        return len(self.n)\n",
    "\n",
    "    def get(self, index):\n",
    "        if index < self.len_y():\n",
    "            if index % 1000 == 0:\n",
    "                print(f'load y: {index} / {self.len_y()}')\n",
    "            path = self.y[index % len(self.y)]\n",
    "            label = 1\n",
    "        else:\n",
    "            if index % 1000 == 0:\n",
    "                print(f'load n: {index - self.len_y()} / {self.len_n()}')\n",
    "            path = self.n[(index - self.len_y()) % len(self.n)]\n",
    "            label = 0\n",
    "        image = self.loader(path)\n",
    "        image = self.transform(image)\n",
    "        return image, label\n",
    "    \n",
    "    def __getitem__(self, index):\n",
    "        return self.data[index]\n",
    "\n",
    "dataset = SkillDataset(dataset_path)"
   ]
  },
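  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The 5x oversampling in `get` reduces to plain index arithmetic. Below is a standalone sketch with made-up list sizes (3 `y` files, 10 `n` files, not the real dataset) showing how an index maps to a source list and label:\n",
    "\n",
    "```python\n",
    "# Hypothetical sizes: 3 'ready' images oversampled 5x, 10 'not ready' images\n",
    "len_y_files, len_n_files = 3, 10\n",
    "len_y = len_y_files * 5          # oversampled length: 15\n",
    "total = len_y + len_n_files      # dataset length: 25\n",
    "\n",
    "def locate(index):\n",
    "    # Mirrors SkillDataset.get: indices below len_y belong to class 1\n",
    "    if index < len_y:\n",
    "        return ('y', index % len_y_files, 1)\n",
    "    return ('n', (index - len_y) % len_n_files, 0)\n",
    "\n",
    "print(locate(7))   # ('y', 1, 1)\n",
    "print(locate(20))  # ('n', 5, 0)\n",
    "```\n",
    "\n",
    "Since every oversampled index re-runs the random transforms, the five copies of each `y` image come out slightly different."
   ]
  },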
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This snippet handles the data-splitting and loading part of the `PyTorch` pipeline.\n",
     "\n",
     "- The dataset is split into a training set (80%) and a test set (20%).\n",
     "- `DataLoader` then batches each split, with batch sizes of 4096 for training and 2048 for testing; the training set is shuffled, the test set is not.\n",
     "- `num_workers` is the number of worker subprocesses used for data loading; 0 means data is loaded in the main process.\n",
     "\n",
     "If you run out of GPU memory, reduce the batch sizes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "train_size = int(0.8 * len(dataset))\n",
    "test_size = len(dataset) - train_size\n",
    "train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size])\n",
    "\n",
    "train_loader = DataLoader(train_dataset, batch_size=4096, shuffle=True, num_workers=0)\n",
    "test_loader = DataLoader(test_dataset, batch_size=2048, shuffle=False, num_workers=0)"
   ]
  },
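  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that `int(0.8 * len(dataset))` floors, so the test set picks up the remainder and the two sizes always sum to the dataset length. A quick check with a hypothetical length:\n",
    "\n",
    "```python\n",
    "n = 1013                      # hypothetical dataset length\n",
    "train_size = int(0.8 * n)     # floor(810.4) = 810\n",
    "test_size = n - train_size    # 203\n",
    "print(train_size, test_size)  # 810 203\n",
    "```"
   ]
  },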
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This code implements a simplified GoogLeNet-style model containing two Inception modules.\n",
     "\n",
     "The Inception module is the key building block of GoogLeNet: it applies convolutions of different kernel sizes in parallel and concatenates the resulting feature maps, improving both accuracy and efficiency.\n",
     "\n",
     "GoogLeNet is a classic deep neural network used for tasks such as image classification, object detection, and semantic segmentation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "class InceptionA(torch.nn.Module):\n",
    "    def __init__(self, in_ch) -> None:\n",
    "        super().__init__()\n",
    "        self.branch_1x1 = torch.nn.Conv2d(in_ch, 16, kernel_size=1)\n",
    "\n",
    "        self.branch_5x5_1 = torch.nn.Conv2d(in_ch, 16, kernel_size=1)\n",
    "        self.branch_5x5_2 = torch.nn.Conv2d(16, 24, kernel_size=5, padding=2)\n",
    "\n",
    "        self.branch_3x3_1 = torch.nn.Conv2d(in_ch, 16, kernel_size=1)\n",
    "        self.branch_3x3_2 = torch.nn.Conv2d(16, 24, kernel_size=3, padding=1)\n",
    "        self.branch_3x3_3 = torch.nn.Conv2d(24, 24, kernel_size=3, padding=1)\n",
    "\n",
    "        self.branch_pool = torch.nn.Conv2d(in_ch, 24, kernel_size=1)\n",
    "\n",
    "    def forward(self, x):\n",
    "        branch_1x1 = self.branch_1x1(x)\n",
    "\n",
    "        branch_5x5 = self.branch_5x5_1(x)\n",
    "        branch_5x5 = self.branch_5x5_2(branch_5x5)\n",
    "\n",
    "        branch_3x3 = self.branch_3x3_1(x)\n",
    "        branch_3x3 = self.branch_3x3_2(branch_3x3)\n",
    "        branch_3x3 = self.branch_3x3_3(branch_3x3)\n",
    "\n",
    "        branch_pool = torch.nn.functional.avg_pool2d(x, kernel_size=3, stride=1, padding=1)\n",
    "        branch_pool = self.branch_pool(branch_pool)\n",
    "\n",
    "        outputs = [branch_1x1, branch_5x5, branch_3x3, branch_pool] # 16 + 24 + 24 + 24\n",
    "        return torch.cat(outputs, 1)\n",
    "\n",
    "\n",
    "class GoogleNet(torch.nn.Module):\n",
    "    def __init__(self, channels) -> None:\n",
    "        super().__init__()\n",
    "        self.conv1 = torch.nn.Conv2d(channels, 10, kernel_size=5)\n",
    "        self.conv2 = torch.nn.Conv2d(88, 20, kernel_size=5)\n",
    "        self.incep1 = InceptionA(10)\n",
    "        self.incep2 = InceptionA(20)\n",
    "        self.mp = torch.nn.MaxPool2d(2)\n",
     "        self.fc = torch.nn.Linear(14872, 2)  # 88 channels * 13 * 13 after the second Inception block\n",
    "\n",
    "    def forward(self, x):\n",
    "        in_size = x.size(0)\n",
    "        x = torch.nn.functional.relu(self.mp(self.conv1(x)))\n",
    "        x = self.incep1(x)\n",
    "        x = torch.nn.functional.relu(self.mp(self.conv2(x)))\n",
    "        x = self.incep2(x)\n",
    "        x = x.view(in_size, -1)\n",
    "        x = self.fc(x)\n",
    "        return x\n"
   ]
  },
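  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `14872` in the final `Linear` layer can be derived by tracing the spatial size of a 64x64 input through the network. `InceptionA` preserves the spatial size and always outputs 16 + 24 + 24 + 24 = 88 channels:\n",
    "\n",
    "```python\n",
    "size = 64\n",
    "size -= 4     # conv1: 5x5 kernel, no padding -> 60\n",
    "size //= 2    # 2x2 max pool -> 30; incep1 keeps 30x30, 88 channels\n",
    "size -= 4     # conv2: 5x5 kernel, no padding -> 26\n",
    "size //= 2    # 2x2 max pool -> 13; incep2 keeps 13x13, 88 channels\n",
    "flat = 88 * size * size\n",
    "print(flat)   # 14872\n",
    "```"
   ]
  },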
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This code uses PyTorch automatic mixed precision (AMP) to speed up training.\n",
     "\n",
     "AMP runs eligible operations in `float16` instead of `float32`, roughly halving GPU memory use, which allows a larger `batch size` and faster training. Because `float16` gradients can underflow, `GradScaler` scales the loss up before backpropagation and unscales the gradients before the optimizer step, keeping the values numerically stable.\n",
     "\n",
     "The code also checks whether CUDA is available; if not, training falls back to the CPU."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch.cuda.amp import autocast, GradScaler\n",
    "\n",
    "use_cuda = torch.cuda.is_available()\n",
    "device = torch.device(\"cuda\" if use_cuda else \"cpu\")\n",
    "if not use_cuda:\n",
    "    print(\"WARNING: CPU will be used for training.\")\n",
    "\n",
    "model = GoogleNet(3).to(device)\n",
    "\n",
    "criterion = torch.nn.CrossEntropyLoss().to(device)\n",
    "optimizer = torch.optim.Adam(model.parameters(), lr=0.001)\n",
    "scaler = GradScaler()"
   ]
  },
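  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Why the loss scaling matters can be seen without any GPU: a gradient that is representable in `float32` may round to zero in `float16`. This sketch emulates half precision with the standard library's `struct` module (the `2 ** 16` scale factor is illustrative; the real `GradScaler` adjusts its scale dynamically):\n",
    "\n",
    "```python\n",
    "import struct\n",
    "\n",
    "def to_f16(x):\n",
    "    # Round-trip a float through IEEE 754 half precision\n",
    "    return struct.unpack('<e', struct.pack('<e', x))[0]\n",
    "\n",
    "tiny = 1e-8                      # a gradient this small underflows in float16\n",
    "print(to_f16(tiny))              # 0.0 -> the update is silently lost\n",
    "\n",
    "scale = 2.0 ** 16                # GradScaler-style loss scale\n",
    "survived = to_f16(tiny * scale)  # ~6.55e-4 is representable\n",
    "print(survived / scale)          # ~1e-8, recovered after unscaling\n",
    "```"
   ]
  },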
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This code defines the training and testing functions.\n",
     "\n",
     "`train` trains the model for one epoch. It puts the model in training mode, iterates over the training set, and moves each batch of data and labels to the device. `optimizer.zero_grad()` clears the previous gradients; the forward pass and loss computation run inside `autocast()` with mixed precision; `scaler.scale(loss).backward()` backpropagates the scaled loss; `scaler.step()` performs the optimizer step and `scaler.update()` adjusts the scale factor for the next batch. Finally it prints the last batch's loss and the elapsed time.\n",
     "\n",
     "`test` evaluates the model. It puts the model in evaluation mode, iterates over the test set, moves data and labels to the device, computes the output via `model(data)`, and accumulates the test loss and the number of correct predictions. It prints the test loss and accuracy and returns both values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "start_time = time.time()\n",
    "def train(epoch):\n",
    "    global start_time\n",
    "    model.train()\n",
    "    for batch_idx, (data, target) in enumerate(train_loader):\n",
    "        data, target = data.to(device), target.to(device)\n",
    "        optimizer.zero_grad()\n",
    "\n",
    "        with autocast():\n",
    "            output = model(data)\n",
    "            loss = criterion(output, target)\n",
    "\n",
    "        scaler.scale(loss).backward()\n",
    "        scaler.step(optimizer)\n",
    "        scaler.update()\n",
    "        \n",
    "    cur_time = time.time()\n",
    "    cost = cur_time - start_time\n",
    "    print(f'Train Epoch: {epoch}, Loss: {loss.item():.8f}, cost: {cost:.2f} s')\n",
    "    start_time = cur_time\n",
    "            \n",
    "def test():\n",
    "    model.eval()\n",
    "    test_loss = 0.0\n",
    "    correct = 0\n",
    "    with torch.no_grad():\n",
    "        for data, target in test_loader:\n",
    "            data, target = data.to(device), target.to(device)\n",
    "            with autocast():\n",
    "                output = model(data)\n",
     "                test_loss += criterion(output, target).item() * data.size(0)  # sum per-sample losses, since CrossEntropyLoss averages over the batch\n",
    "            pred = output.argmax(dim=1, keepdim=True)\n",
    "            correct += pred.eq(target.view_as(pred)).sum().item()\n",
    "\n",
    "    test_loss /= len(test_loader.dataset)\n",
    "    acc = 100. * correct / len(test_loader.dataset)\n",
    "\n",
    "    print(f'=== Test: Loss: {test_loss:.8f}, Acc: {acc:.4f} ===')\n",
    "    return test_loss, acc"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This code is the main training loop.\n",
     "\n",
     "Each epoch (up to 1000) first calls `train` to train the model. Starting from epoch 50, the model is evaluated on the test set every `test_interval` epochs (default 1) via `test`, a checkpoint is saved, and the best model so far is tracked.\n",
     "\n",
     "If the current test loss is higher than the best loss, the model is considered not improved; if more than 100 epochs have passed since the best epoch, training stops early. If the current test loss is lower than the best loss, the current model is saved as the new best and the best epoch, loss, and accuracy are updated.\n",
     "\n",
     "After training, the best model is stored under the `checkpoints` directory as `GoogleNet_best.pt`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "if use_cuda:\n",
    "    torch.cuda.empty_cache()\n",
    "\n",
    "output = Path('checkpoints')\n",
    "output.mkdir(exist_ok=True, parents=True)\n",
    "\n",
    "model_name = model.__class__.__name__\n",
    "best_model_path = output / f'{model_name}_best.pt'\n",
    "\n",
    "best_epoch = 0\n",
    "best_loss = 100.0\n",
    "best_acc = 0.0\n",
    "default_interval = 1\n",
    "\n",
    "def pipeline(start_epoch = 0, test_interval = default_interval):\n",
    "    global best_epoch, best_loss, best_acc\n",
    "\n",
    "    for epoch in range(start_epoch, 1000):\n",
    "        train(epoch)\n",
    "        if epoch % test_interval != 0 or epoch < 50:\n",
    "            continue\n",
    "        \n",
    "        loss, acc = test()\n",
    "        print(f'=== Pre best is {best_epoch}, Loss: {best_loss:.8f}, Acc: {best_acc:.4f} ===')\n",
    "        torch.save(model, output / f'{model_name}_{epoch}.pt')\n",
    "        if loss > best_loss:\n",
    "            if epoch - best_epoch > 100:\n",
    "                print('No improvement for a long time, Early stop!')\n",
    "                break\n",
    "            else:\n",
    "                continue\n",
    "        best_epoch = epoch\n",
    "        best_loss = loss\n",
    "        best_acc = acc\n",
    "        print(f'====== New best is {best_epoch}, Loss: {best_loss:.8f}, Acc: {best_acc:.4f} ======')\n",
    "        torch.save(model, best_model_path)\n",
    "\n",
    "pipeline()"
   ]
  },
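  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The early-stopping bookkeeping in `pipeline` can be simulated with a hypothetical loss sequence, no training needed. This sketch mirrors the improvement/stop rules (ignoring the warm-up before epoch 50): update the best on improvement, stop once more than `patience` epochs pass without one:\n",
    "\n",
    "```python\n",
    "def simulate(losses, patience=100):\n",
    "    # losses[i] plays the role of the test loss at epoch i\n",
    "    best_epoch, best_loss = 0, float('inf')\n",
    "    for epoch, loss in enumerate(losses):\n",
    "        if loss > best_loss:\n",
    "            if epoch - best_epoch > patience:\n",
    "                return best_epoch, best_loss  # early stop\n",
    "            continue\n",
    "        best_epoch, best_loss = epoch, loss\n",
    "    return best_epoch, best_loss\n",
    "\n",
    "# Loss improves until epoch 2, then plateaus\n",
    "print(simulate([1.0, 0.5, 0.3] + [0.4] * 200))  # (2, 0.3)\n",
    "```"
   ]
  },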
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This code loads the best model from disk and evaluates it on the test set, returning the test loss and accuracy."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model = torch.load(best_model_path)\n",
    "test()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This code exports the trained `PyTorch` model to the `ONNX` format so it can be deployed and run in other frameworks or on other hardware. The main steps:\n",
     "\n",
     "1. Load the trained model with `torch.load`; `map_location=torch.device(\"cpu\")` loads the model parameters onto the CPU.\n",
     "2. Create a dummy input with `torch.randn` to fix the model's input shape.\n",
     "3. Export the model with `torch.onnx.export`, passing the model, the dummy input, the output file path, and the input and output names.\n",
     "\n",
     "Note that when exporting an `ONNX` model, the input/output names, shapes, and order matter; they must match what the downstream inference code expects."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch.onnx\n",
    "from pathlib import Path\n",
    "\n",
    "\n",
    "def convert_onnx(path: Path):\n",
    "    model = torch.load(path, map_location=torch.device(\"cpu\"))\n",
    "    model.eval()\n",
    "    dummy_input = torch.randn(1, 3, 64, 64)\n",
    "    torch.onnx.export(\n",
    "        model,\n",
    "        dummy_input,\n",
    "        path.with_suffix(\".onnx\"),\n",
    "        input_names=[\"input\"],\n",
    "        output_names=[\"output\"],\n",
    "    )\n",
    "\n",
    "\n",
    "convert_onnx(best_model_path)\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This code checks whether the `ONNX` model is well-formed. If the check fails, an exception is raised with details about the problem."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import onnx\n",
    "\n",
    "onnx.checker.check_model(str(best_model_path.with_suffix(\".onnx\")))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Below is some data-cleaning code you can explore on your own"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Since this code is commented out and the cleaning is manual, it cannot be run as-is; you need to pick and clean some samples by hand.\n",
     "\n",
     "The goal is to randomly select a number of \"positive\" and \"negative\" image samples from the raw dataset to build a cleaned dataset. The raw dataset is assumed to consist of two subdirectories, `y` and `n`, holding the positive and negative samples respectively.\n",
     "\n",
     "The approach: decide how many samples to pick (`clean_set_size`), randomly select that many from each subdirectory, and move them into the target directories (`clean_y` or `clean_n`) as the cleaned dataset. You then need to inspect each sample manually and verify that it belongs to its class."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Random sample selection; please clean the results manually\n",
    "\n",
    "# from pathlib import Path\n",
    "# import random\n",
    "\n",
    "# clean_set_size = 1000\n",
    "# raw_path = Path(\"datasets/raw\")\n",
    "# positive_set = random.sample(list((raw_path / \"y\").glob(\"**/*\")), clean_set_size)\n",
    "# negative_set = random.sample(list((raw_path / \"n\").glob(\"**/*\")), clean_set_size)\n",
    "\n",
    "# clean = Path(\"datasets/clean/\")\n",
    "# clean_y = clean / \"y\"\n",
    "# clean_y.mkdir(parents=True, exist_ok=True)\n",
    "# clean_n = clean / \"n\"\n",
    "# clean_n.mkdir(parents=True, exist_ok=True)\n",
    "# for path in positive_set:\n",
    "#     path.rename(clean_y / path.name)\n",
    "# for path in negative_set:\n",
    "#     path.rename(clean_n / path.name)\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Similar to `SkillDataset` above, this defines a dataset class named `SkillRawDataset`, but without augmentation or oversampling; it also adds a `get_path` method to recover a sample's file path."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# class SkillRawDataset(Dataset):\n",
    "#     def __init__(self) -> None:\n",
    "#         super().__init__()\n",
    "#         self.y = list(Path('datasets/raw/y/').glob('**/*'))\n",
    "#         self.n = list(Path('datasets/raw/n/').glob('**/*'))\n",
    "#         self.transform = transforms.ToTensor()\n",
    "#         self.loader = default_loader\n",
    "#         self.data = [ self.get(i) for i in range(len(self))]\n",
    "    \n",
    "#     def __len__(self):\n",
    "#         return len(self.y) + len(self.n)\n",
    "    \n",
    "#     def get(self, index):\n",
    "#         # print(f'load: {self.count} / {len(self)}')\n",
    "#         if index < len(self.y):\n",
    "#             path = self.y[index]\n",
    "#             label = 1\n",
    "#         else:\n",
    "#             path = self.n[index - len(self.y)]\n",
    "#             label = 0\n",
    "#         image = self.loader(path)\n",
    "#         image = self.transform(image)\n",
    "#         return image, label\n",
    "    \n",
    "#     def __getitem__(self, index):\n",
    "#         return self.data[index]\n",
    "    \n",
    "#     def get_path(self, index):\n",
    "#         if index < len(self.y):\n",
    "#             path = self.y[index]\n",
    "#             label = 1\n",
    "#         else:\n",
    "#             path = self.n[index - len(self.y)]\n",
    "#             label = 0\n",
    "#         return path, label\n",
    "\n",
    "# raw_data_set = SkillRawDataset()\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This code cleans the dataset using the trained model:\n",
     "\n",
     "- `SkillRawDataset` wraps the raw dataset as a `Dataset`; its `__init__` loads every image path under the `y` and `n` directories, converts them to tensors and labels, and stores them in `self.data`. `get` returns the data and label at a given index.\n",
     "- `raw_loader` is a `DataLoader` over `SkillRawDataset` with a batch size of 1.\n",
     "- `clear` iterates over every sample in `raw_loader`, runs a forward pass with the current model, and moves any misclassified image into `datasets/maybe_error/1` or `datasets/maybe_error/0` (according to its label) for manual review."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# raw_loader = DataLoader(raw_data_set, batch_size=1, shuffle=False, num_workers=0)\n",
    "# import os\n",
    "# import shutil\n",
    "# def clear():\n",
    "#     model.eval()\n",
    "#     test_loss = 0\n",
    "#     Path('datasets/maybe_error/1').mkdir(parents=True, exist_ok=True)\n",
    "#     Path('datasets/maybe_error/0').mkdir(parents=True, exist_ok=True)\n",
    "#     with torch.no_grad():\n",
    "#         for batch_idx, (data, target) in enumerate(raw_loader):\n",
    "#             data, target = data.cuda(), target.cuda()\n",
    "#             output = model(data)\n",
    "#             loss = criterion(output, target).item() # sum up batch loss\n",
    "#             test_loss += loss\n",
    "#             pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability\n",
    "#             correct = pred.eq(target.view_as(pred)).sum().item()\n",
    "#             if not correct:\n",
    "#                 tup = raw_data_set.get_path(batch_idx)\n",
    "#                 print(tup)\n",
    "#                 os.rename(tup[0], Path('datasets/maybe_error/') / str(tup[1]) / tup[0].name)                \n",
    "                \n",
    "\n",
    "\n",
    "# print(len(raw_data_set))\n",
    "# clear()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "chat",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.9"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
