{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Semi-Supervised or Self-Supervised Learning\n",
    "\n",
    "For data that lacks Label2 and Label3 annotations, consider semi-supervised or self-supervised strategies. These are not always effective; intuitively, they are more commonly applied in CV research. In NLP, data augmentation is less straightforward; one common route is to create augmented views of the text, for example by masking tokens, which can then be used for contrastive learning.\n",
    "\n",
    "These methods try to learn features from partially labeled data through some proxy (pretext) task.\n",
    "\n",
    "**Semi-supervised learning**: If you can find a small dataset annotated with Label2 and Label3 that is related to your current task, or even a public but unrelated dataset, you can use it as auxiliary data to train model C so that it generalizes to your own dataset.\n",
    "\n",
    "The question is: what is the internal logic of this kind of non-supervised learning?\n",
    "\n",
    "> I have a point of confusion. As I understand it, architectures like ResNet only support supervised learning. Is that right? If not, please show me with concrete code examples why a single simple network architecture can support several different learning paradigms. In my view, a network's architecture is directly tied to how its data is organized; it seems impossible to learn anything without y labels.\n",
    "\n",
    "Although deep convolutional networks such as ResNet were designed primarily for **supervised learning**, they are not limited to it. The network architecture by itself does not determine which learning paradigm it can serve. Rather, the choice of paradigm usually comes down to how you organize the data and define the loss function.\n",
    "\n",
    "Let's look at how several different learning paradigms can be implemented on the same network architecture:"
   ]
  },
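  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of the semi-supervised idea above, here is pseudo-labeling, one common variant: train on whatever labeled data you have, let the model predict on the unlabeled pool, and keep only the confident predictions as extra training data. The `confidence_threshold` value and the single filtering pass are illustrative assumptions, not a fixed recipe."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "from torchvision.models import resnet18\n",
    "\n",
    "model = resnet18(weights=None)\n",
    "model.fc = nn.Linear(model.fc.in_features, 10)\n",
    "\n",
    "# Unlabeled pool; in practice, your samples without Label2/Label3\n",
    "unlabeled = torch.randn(8, 3, 224, 224)\n",
    "\n",
    "# Pseudo-labeling: predict on the unlabeled data, keep confident samples\n",
    "model.eval()\n",
    "with torch.no_grad():\n",
    "    probs = torch.softmax(model(unlabeled), dim=1)\n",
    "    confidence, pseudo_labels = probs.max(dim=1)\n",
    "\n",
    "confidence_threshold = 0.8  # illustrative choice\n",
    "mask = confidence > confidence_threshold\n",
    "trusted_inputs = unlabeled[mask]\n",
    "trusted_labels = pseudo_labels[mask]\n",
    "# trusted_inputs/trusted_labels can now join the supervised training loop"
   ]
  },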
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Supervised Learning\n",
    "\n",
    "This is the typical use case for ResNet. Given input \\( X \\) and labels \\( y \\), a forward pass produces predictions \\( \\hat{y} \\); the loss is computed against the true labels \\( y \\), and backpropagation updates the weights. Example code:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "from torchvision.models import resnet18\n",
    "\n",
    "# Create a ResNet18 network\n",
    "model = resnet18(weights=None)\n",
    "# Replace the final classification layer (assuming a 10-class problem)\n",
    "model.fc = nn.Linear(model.fc.in_features, 10)\n",
    "\n",
    "# Inputs and labels\n",
    "inputs = torch.randn(16, 3, 224, 224)\n",
    "labels = torch.randint(0, 10, (16,))\n",
    "\n",
    "# Define the loss and optimizer\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "optimizer = optim.Adam(model.parameters(), lr=0.001)\n",
    "\n",
    "# Forward pass\n",
    "outputs = model(inputs)\n",
    "loss = criterion(outputs, labels)\n",
    "\n",
    "# Backward pass\n",
    "optimizer.zero_grad()\n",
    "loss.backward()\n",
    "optimizer.step()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this example, ResNet18 takes the inputs and learns from the classification labels."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Self-Supervised Learning\n",
    "\n",
    "Self-supervised learning does not rely on external labels. Instead, it designs a pretext task that generates pseudo-labels for training. Common self-supervised pretext tasks include predicting image rotations, solving jigsaw puzzles, and so on."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Self-supervised pretext task: image rotation prediction\n",
    "# We design a task where the model predicts the rotation angle of an image\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "from torchvision.models import resnet18\n",
    "import random\n",
    "\n",
    "# Define ResNet18\n",
    "model = resnet18(weights=None)\n",
    "# Change the final output to predict 4 rotation classes [0, 90, 180, 270]\n",
    "model.fc = nn.Linear(model.fc.in_features, 4)\n",
    "\n",
    "# Self-supervised data generation: randomly rotate the images\n",
    "def rotate_image(inputs):\n",
    "    angles = [0, 90, 180, 270]\n",
    "    rotated_images = []\n",
    "    labels = []\n",
    "    for img in inputs:\n",
    "        angle = random.choice(angles)\n",
    "        rotated_img = torch.rot90(img, k=angle // 90, dims=[1, 2])\n",
    "        rotated_images.append(rotated_img)\n",
    "        labels.append(angles.index(angle))\n",
    "    return torch.stack(rotated_images), torch.tensor(labels)\n",
    "\n",
    "# Take input images and generate rotated versions plus pseudo-labels\n",
    "inputs = torch.randn(16, 3, 224, 224)\n",
    "rotated_inputs, pseudo_labels = rotate_image(inputs)\n",
    "\n",
    "# Define the loss and optimizer\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "optimizer = optim.Adam(model.parameters(), lr=0.001)\n",
    "\n",
    "# Forward pass\n",
    "outputs = model(rotated_inputs)\n",
    "loss = criterion(outputs, pseudo_labels)\n",
    "\n",
    "# Backward pass\n",
    "optimizer.zero_grad()\n",
    "loss.backward()\n",
    "optimizer.step()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this example there are no explicit labels \\( y \\); pseudo-labels are generated by rotating the images, and the same ResNet architecture can be trained on them."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Unsupervised Learning\n",
    "\n",
    "ResNet can also be used for unsupervised tasks, for example reconstructing the input with an **autoencoder**, or learning useful features without labels via contrastive learning.\n",
    "\n",
    "Here is a simple implementation of contrastive learning:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "from torchvision.models import resnet18\n",
    "import torch.nn.functional as F\n",
    "\n",
    "# Define ResNet18\n",
    "model = resnet18(weights=None)\n",
    "\n",
    "# Remove the final classification layer, keeping only the feature extractor\n",
    "model = nn.Sequential(*list(model.children())[:-1])\n",
    "\n",
    "# A simple contrastive loss in the SimCLR/NT-Xent style: the two augmented\n",
    "# views of the same image form a positive pair; every other image in the\n",
    "# batch acts as a negative, so no similarity labels are needed\n",
    "class ContrastiveLoss(nn.Module):\n",
    "    def __init__(self, temperature=0.5):\n",
    "        super().__init__()\n",
    "        self.temperature = temperature\n",
    "\n",
    "    def forward(self, feature1, feature2):\n",
    "        # Flatten (N, C, 1, 1) features and normalize so the dot product\n",
    "        # is cosine similarity\n",
    "        z1 = F.normalize(feature1.flatten(1), dim=1)\n",
    "        z2 = F.normalize(feature2.flatten(1), dim=1)\n",
    "        # Similarity of every view-1 feature against every view-2 feature\n",
    "        logits = z1 @ z2.t() / self.temperature\n",
    "        # The diagonal entries are the positive pairs\n",
    "        targets = torch.arange(z1.size(0))\n",
    "        return F.cross_entropy(logits, targets)\n",
    "\n",
    "# Define the loss and optimizer\n",
    "criterion = ContrastiveLoss()\n",
    "optimizer = optim.Adam(model.parameters(), lr=0.001)\n",
    "\n",
    "# Two augmented views of the same batch (identity here as a placeholder;\n",
    "# in practice use random crops, color jitter, etc.)\n",
    "inputs = torch.randn(16, 3, 224, 224)\n",
    "augment1 = inputs\n",
    "augment2 = inputs\n",
    "\n",
    "# Forward pass through the shared encoder\n",
    "feature1 = model(augment1)\n",
    "feature2 = model(augment2)\n",
    "\n",
    "# Compute the contrastive loss\n",
    "loss = criterion(feature1, feature2)\n",
    "\n",
    "# Backward pass\n",
    "optimizer.zero_grad()\n",
    "loss.backward()\n",
    "optimizer.step()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With contrastive learning, we can learn from augmented data without any explicit labels."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Conclusion\n",
    "\n",
    "The role of the ResNet architecture itself is to extract features; what it learns depends on your task definition, data organization, and loss function. Whether the paradigm is supervised, self-supervised, or unsupervised, it is realized by adjusting the loss function and the way the data is organized, while the network architecture itself is largely generic."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Runnable sketch of the pseudocode idea: an encoder returning three\n",
    "# feature views per scene, and a contrast module comparing two of them\n",
    "class Scene(nn.Module):\n",
    "    def forward(self, x):\n",
    "        return x[:, :32], x[:, 32:64], x[:, 64:]  # views A, B, C\n",
    "\n",
    "class Contrast(nn.Module):\n",
    "    def forward(self, scene_1, scene_2):\n",
    "        return torch.cosine_similarity(scene_1, scene_2, dim=-1).mean()\n",
    "\n",
    "scene_model, contrast = Scene(), Contrast()\n",
    "for x in [torch.randn(4, 96) for _ in range(3)]:  # each scene in n\n",
    "    A, B, C = scene_model(x)\n",
    "    result = contrast(A, B)"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
