{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Assignment 3: Self-Supervised Learning"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Task Description\n",
    "- Task: Classify the CIFAR-10 dataset using self-supervised learning.\n",
    "- Main Goal: Learn how self-supervised learning works and how to implement it.\n",
    "\n",
    "- TODO:\n",
    "    - Implement classification without self-supervised learning.\n",
    "    - Implement classification with [Context Encoder](https://arxiv.org/abs/1604.07379).\n",
    "    - Implement classification with [Rotation Prediction](https://arxiv.org/abs/1803.07728).\n",
    "    - Implement classification with [SimCLR](https://arxiv.org/abs/2002.05709)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Import Packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "scrolled": true,
    "tags": []
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "c:\\ProgramData\\anaconda3\\envs\\paddle_cuda\\lib\\site-packages\\paddle\\utils\\cpp_extension\\extension_utils.py:711: UserWarning: No ccache found. Please be aware that recompiling all source files may be required. You can download and install ccache from: https://github.com/ccache/ccache/blob/master/doc/INSTALL.md\n",
      "  warnings.warn(warning_message)\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "from tqdm import tqdm\n",
    "\n",
    "\n",
    "import paddle\n",
    "from paddle.io import DataLoader\n",
    "import paddle.nn as nn\n",
    "from paddle.optimizer import AdamW\n",
    "import paddle.vision.transforms as T\n",
    "from paddle.vision.datasets import Cifar10"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Original image:\n",
       "[[[[1. 2. 3.]\n",
       "   [4. 5. 6.]\n",
       "   [7. 8. 9.]]]]\n",
       "After transposing:\n",
       "[[[[1. 4. 7.]\n",
       "   [2. 5. 8.]\n",
       "   [3. 6. 9.]]]]\n",
       "After transposing and flipping:\n",
       "[[[[3. 6. 9.]\n",
       "   [2. 5. 8.]\n",
       "   [1. 4. 7.]]]]\n"
      ]
     }
    ],
    "source": [
     "# Create a simple image tensor in NCHW layout\n",
     "images = paddle.to_tensor([[[[1, 2, 3], [4, 5, 6], [7, 8, 9]]]], dtype='float32')\n",
     "\n",
     "# Apply the transforms: transpose the H/W axes, then additionally flip\n",
     "t_images = paddle.transpose(images, perm=[0, 1, 3, 2])\n",
     "transformed_images = paddle.transpose(images, perm=[0, 1, 3, 2]).flip(axis=2)\n",
     "\n",
     "# Print the results\n",
     "print(\"Original image:\")\n",
     "print(images.numpy())\n",
     "print(\"After transposing:\")\n",
     "print(t_images.numpy())\n",
     "print(\"After transposing and flipping:\")\n",
     "print(transformed_images.numpy())\n"
   ]
  },
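  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The demo above shows that transposing the H/W axes and then flipping along the new H axis is the same as a 90-degree counterclockwise rotation. A quick NumPy check of that identity (a sketch; `np.rot90` is used as the reference):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Check that transpose + flip equals a 90-degree CCW rotation (np.rot90)\n",
    "x = np.arange(1, 10, dtype=\"float32\").reshape(1, 1, 3, 3)  # NCHW layout\n",
    "\n",
    "# Swap the H and W axes, then flip along the (new) H axis\n",
    "rotated = np.transpose(x, (0, 1, 3, 2))[:, :, ::-1, :]\n",
    "\n",
    "# np.rot90 with k=1 rotates counterclockwise in the (H, W) plane\n",
    "reference = np.rot90(x, k=1, axes=(2, 3))\n",
    "\n",
    "assert np.array_equal(rotated, reference)\n",
    "print(\"transpose + flip == 90-degree CCW rotation\")"
   ]
  },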
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Dataset and Dataloader\n",
     "- The original dataset is [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html).\n",
     "- We use the [Paddle API](https://www.paddlepaddle.org.cn/documentation/docs/zh/2.6/api/index_cn.html) to download the dataset and create the dataloaders."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "img_size = 32\n",
    "img_channels = 3\n",
    "batch_size = 128"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "scrolled": true,
    "tags": []
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Cache file C:\\Users\\Administrator\\.cache\\paddle\\dataset\\cifar\\cifar-10-python.tar.gz not found, downloading https://dataset.bj.bcebos.com/cifar/cifar-10-python.tar.gz \n",
      "Begin to download\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "item 41626/41626 [============================>.] - ETA: 0.00034049660698473567s - 673us/item"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "Download finished\n"
     ]
    }
   ],
   "source": [
    "transform = T.Compose(\n",
    "    [\n",
    "        T.Resize((img_size, img_size)),\n",
    "        T.ToTensor(),\n",
    "        T.Normalize(\n",
    "            mean=[0.4914, 0.4822, 0.4465],\n",
    "            std=[0.2023, 0.1994, 0.2010],\n",
    "            to_rgb=True,\n",
    "        ),\n",
    "    ]\n",
    ")\n",
    "\n",
    "cifar10_train = Cifar10(mode=\"train\", transform=transform)\n",
    "cifar10_test = Cifar10(mode=\"test\", transform=transform)\n",
    "\n",
    "train_loader = DataLoader(cifar10_train, batch_size=batch_size, shuffle=True)\n",
    "test_loader = DataLoader(cifar10_test, batch_size=batch_size, shuffle=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Model\n",
    "- Encoder: CNN to extract features from images\n",
    "- Decoder: CNN to reconstruct images\n",
    "- Classifier: Linear classifier"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "num_classes = 10\n",
    "hidden_dim = 256"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true,
    "tags": []
   },
   "outputs": [],
   "source": [
    "class Encoder(nn.Layer):\n",
    "    def __init__(self, in_dim, out_dim):\n",
    "        super().__init__()\n",
    "        self.layers = nn.Sequential(\n",
    "            nn.Conv2D(in_dim, 64, kernel_size=4, stride=2, padding=1),\n",
    "            nn.BatchNorm2D(64),\n",
    "            nn.ReLU(),\n",
    "            nn.Conv2D(64, 128, kernel_size=4, stride=2, padding=1),\n",
    "            nn.BatchNorm2D(128),\n",
    "            nn.ReLU(),\n",
    "            nn.Conv2D(128, 256, kernel_size=4, stride=2, padding=1),\n",
    "            nn.BatchNorm2D(256),\n",
    "            nn.ReLU(),\n",
    "            nn.Conv2D(256, 512, kernel_size=4, stride=2, padding=1),\n",
    "            nn.BatchNorm2D(512),\n",
    "            nn.ReLU(),\n",
    "            nn.Conv2D(512, out_dim, kernel_size=4, stride=2, padding=1),\n",
    "        )\n",
    "\n",
    "    def forward(self, images):\n",
    "        out = self.layers(images)\n",
    "        return out\n",
    "    \n",
    "class Decoder(nn.Layer):\n",
    "    def __init__(self, in_dim, out_dim):\n",
    "        super().__init__()\n",
    "        self.layers = nn.Sequential(\n",
    "            nn.BatchNorm2D(in_dim),\n",
    "            nn.ReLU(),\n",
    "            nn.Conv2DTranspose(in_dim, 512, kernel_size=4, stride=2, padding=1),\n",
    "            nn.BatchNorm2D(512),\n",
    "            nn.ReLU(),\n",
    "            nn.Conv2DTranspose(512, 256, kernel_size=4, stride=2, padding=1),\n",
    "            nn.BatchNorm2D(256),\n",
    "            nn.ReLU(),\n",
    "            nn.Conv2DTranspose(256, 128, kernel_size=4, stride=2, padding=1),\n",
    "            nn.BatchNorm2D(128),\n",
    "            nn.ReLU(),\n",
    "            nn.Conv2DTranspose(128, 64, kernel_size=4, stride=2, padding=1),\n",
    "            nn.BatchNorm2D(64),\n",
    "            nn.ReLU(),\n",
    "            nn.Conv2DTranspose(64, out_dim, kernel_size=4, stride=2, padding=1),\n",
    "            nn.Tanh(),\n",
    "        )\n",
    "\n",
    "    def forward(self, latent):\n",
    "        out = self.layers(latent)\n",
    "        return out\n",
    "\n",
    "class Classifier(nn.Layer):\n",
    "    def __init__(self, in_dim, out_dim):\n",
    "        super().__init__()\n",
    "        self.layers = nn.Sequential(\n",
    "            nn.Flatten(),\n",
    "            nn.BatchNorm1D(in_dim),\n",
    "            nn.Linear(in_dim, out_dim)\n",
    "        )\n",
    "\n",
    "    def forward(self, latent):\n",
    "        out = self.layers(latent)\n",
    "        return out"
   ]
  },
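  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With five stride-2 convolutions, the encoder halves the 32x32 input five times, so the latent is `(N, hidden_dim, 1, 1)` and the `Flatten` inside `Classifier` yields exactly `hidden_dim` features. A quick sanity check with the standard convolution output-size formula (kernel 4, stride 2, padding 1, as used above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Output size of a conv layer: floor((n + 2p - k) / s) + 1\n",
    "def conv_out(n, k=4, s=2, p=1):\n",
    "    return (n + 2 * p - k) // s + 1\n",
    "\n",
    "size = img_size  # 32\n",
    "for layer in range(5):\n",
    "    size = conv_out(size)\n",
    "    print(f\"after conv {layer + 1}: {size}x{size}\")\n",
    "# 32 -> 16 -> 8 -> 4 -> 2 -> 1"
   ]
  },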
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Validation\n",
    "- Use accuracy@1 as the metric to evaluate the model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "def validation(model, data_loader):\n",
    "    model.eval()\n",
    "    correct = 0\n",
    "    total = 0\n",
    "    with paddle.no_grad():\n",
    "        for batch in data_loader:\n",
    "            images, labels = batch\n",
    "            logits = model(images)\n",
    "            _, predicted = paddle.topk(logits, k=1)\n",
    "            total += labels.shape[0]\n",
     "            correct += int((predicted.flatten() == labels.flatten()).sum())\n",
    "    accuracy = correct / total\n",
    "    model.train()\n",
    "    return accuracy"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "###########################################################################\n",
    "# TODO:\n",
     "#   Finish PureClassifier with the Encoder and Classifier defined above.\n",
     "#   The `forward` function returns the predicted logits.\n",
    "###########################################################################\n",
    "class PureClassifier(nn.Layer):\n",
    "    def __init__(self, in_dim, hidden_dim, out_dim):\n",
    "        super().__init__()\n",
    "        pass\n",
    "\n",
    "    def forward(self, images):\n",
    "        pass"
   ]
  },
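  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One possible way to complete `PureClassifier` (a sketch for reference, not the only valid answer) is to chain the `Encoder` and `Classifier` defined above; since the encoder output for a 32x32 input is `(N, hidden_dim, 1, 1)`, `Classifier(hidden_dim, out_dim)` lines up after `Flatten`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A possible reference sketch -- compare against your own implementation\n",
    "class PureClassifierSketch(nn.Layer):\n",
    "    def __init__(self, in_dim, hidden_dim, out_dim):\n",
    "        super().__init__()\n",
    "        self.encoder = Encoder(in_dim, hidden_dim)\n",
    "        self.classifier = Classifier(hidden_dim, out_dim)\n",
    "\n",
    "    def forward(self, images):\n",
    "        latent = self.encoder(images)     # (N, hidden_dim, 1, 1)\n",
    "        logits = self.classifier(latent)  # (N, out_dim)\n",
    "        return logits"
   ]
  },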
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Classification without Self-Supervised Learning\n",
    "- Train a linear classifier on top of the encoder features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "epochs = 4\n",
    "lr = 1e-4\n",
    "\n",
    "model = PureClassifier(img_channels, hidden_dim, num_classes)\n",
    "model.train()\n",
    "\n",
    "optimizer = AdamW(\n",
    "    parameters=model.parameters(),\n",
    "    learning_rate=lr,\n",
    ")\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "\n",
    "best_accuracy = 0.0\n",
    "for epoch in range(epochs):\n",
    "    for batch_id, (images, labels) in enumerate(tqdm(train_loader)):\n",
    "        optimizer.clear_grad()\n",
    "        logits = model(images)\n",
    "        loss = criterion(logits, labels)\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "        if batch_id % 300 == 0:\n",
    "            print(f\"Epoch {epoch}, Batch {batch_id}, Loss: {loss.numpy()}\")\n",
    "    accuracy = validation(model, test_loader)\n",
    "    print(f\"Epoch {epoch}, Validation Accuracy: {accuracy:.4f}\")\n",
    "    if accuracy > best_accuracy:\n",
    "        best_accuracy = accuracy\n",
    "        paddle.save(model.state_dict(), \"PureClassifier.pdparams\")\n",
    "        print(f\"Model saved with accuracy: {best_accuracy:.4f}\")\n",
     "print(f\"Best Validation Accuracy: {best_accuracy:.4f}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Classification with [Context Encoder](https://arxiv.org/abs/1604.07379)\n",
    "- Use the encoder to extract features from the images.\n",
    "- Use the decoder to reconstruct the images.\n",
    "- Train a linear classifier on top of the encoder features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "class ContextEncoder(nn.Layer):\n",
    "    def __init__(self, in_dim, hidden_dim, num_classes):\n",
    "        super().__init__()\n",
    "        self.encoder = Encoder(in_dim, hidden_dim)\n",
    "        self.decoder = Decoder(hidden_dim, in_dim)\n",
    "        self.classifier = Classifier(hidden_dim, num_classes)\n",
    "\n",
    "    def forward(self, images, task=\"classification\"):\n",
    "        if task == \"reconstruction\":\n",
    "            latent = self.encoder(images)\n",
    "            out = self.decoder(latent)\n",
    "            return out\n",
    "        elif task == \"classification\":\n",
    "            latent = self.encoder(images)\n",
    "            logits = self.classifier(latent)\n",
    "            return logits\n",
    "        else:\n",
    "            raise ValueError(f\"Unknown task: {task}\")\n",
    "\n",
    "    ###########################################################################\n",
    "    # TODO:\n",
    "    #   Finish mask_images below.\n",
     "    #   This function masks a random portion of each image;\n",
     "    #   `mask_ratio` controls the fraction of the image that is masked.\n",
     "    #   Return: (masked images, mask)\n",
    "    ###########################################################################\n",
    "    def mask_images(self, images, mask_ratio=0.3):\n",
    "        pass"
   ]
  },
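  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A possible sketch of `mask_images` (one design among several): zero out a single random square patch whose area is roughly `mask_ratio` of the image, and return both the masked images and the binary mask. In the training cell below, the reconstruction loss can then be restricted to the masked region, e.g. `reconstruction_criterion(reconstruction * (1 - mask), images * (1 - mask))` with `reconstruction = model(masked_images, task=\"reconstruction\")`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Possible sketch: mask one random square patch per batch (not per image)\n",
    "def mask_images_sketch(images, mask_ratio=0.3):\n",
    "    n, c, h, w = images.shape\n",
    "    # Side length of a square patch covering ~mask_ratio of the area\n",
    "    side = int(h * mask_ratio ** 0.5)\n",
    "    top = np.random.randint(0, h - side + 1)\n",
    "    left = np.random.randint(0, w - side + 1)\n",
    "    mask = paddle.ones_like(images)\n",
    "    mask[:, :, top:top + side, left:left + side] = 0\n",
    "    masked_images = images * mask\n",
    "    return masked_images, mask"
   ]
  },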
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true,
    "tags": []
   },
   "outputs": [],
   "source": [
    "epochs = 4\n",
    "lr = 1e-4\n",
    "mask_ratio = 0.3\n",
    "\n",
    "model = ContextEncoder(img_channels, hidden_dim, num_classes)\n",
    "model.train()\n",
    "\n",
    "# self-supervised pretraining with reconstruction\n",
    "reconstruction_optimizer = AdamW(\n",
    "    parameters=model.parameters(),\n",
    "    learning_rate=lr,\n",
    ")\n",
    "reconstruction_criterion = nn.MSELoss()\n",
    "for epoch in range(epochs):\n",
    "    for batch_id, (images, labels) in enumerate(tqdm(train_loader)):\n",
    "        reconstruction_optimizer.clear_grad()\n",
    "        masked_images, mask = model.mask_images(images, mask_ratio)\n",
    "        ###########################################################################\n",
    "        # TODO:\n",
    "        #   Compute reconstruction_loss.\n",
    "        ###########################################################################\n",
    "        reconstruction_loss.backward()\n",
    "        reconstruction_optimizer.step()\n",
    "        if batch_id % 300 == 0:\n",
    "            print(f\"Epoch {epoch}, Batch {batch_id}, Loss: {reconstruction_loss.numpy()}\")\n",
    "print(\"Self-supervised pretraining completed with reconstruction\")\n",
    "paddle.save(model.state_dict(), \"ContextEncoder.pdparams\")\n",
    "\n",
    "# fine-tuning with classification task\n",
    "classification_optimizer = AdamW(\n",
    "    parameters=model.parameters(),\n",
    "    learning_rate=lr,\n",
    ")\n",
    "classification_criterion = nn.CrossEntropyLoss()\n",
    "best_accuracy = 0.0\n",
    "for epoch in range(epochs):\n",
    "    for batch_id, (images, labels) in enumerate(tqdm(train_loader)):\n",
    "        classification_optimizer.clear_grad()\n",
    "        logits = model(images, task=\"classification\")\n",
    "        loss = classification_criterion(logits, labels)\n",
    "        loss.backward()\n",
    "        classification_optimizer.step()\n",
    "        if batch_id % 300 == 0:\n",
    "            print(f\"Epoch {epoch}, Batch {batch_id}, Loss: {loss.numpy()}\")\n",
    "    accuracy = validation(model, test_loader)\n",
    "    print(f\"Epoch {epoch}, Validation Accuracy: {accuracy:.4f}\")\n",
    "    if accuracy > best_accuracy:\n",
    "        best_accuracy = accuracy\n",
    "        paddle.save(model.state_dict(), \"ContextEncoder_classifier.pdparams\")\n",
    "        print(f\"Model saved with accuracy: {best_accuracy:.4f}\")\n",
     "print(f\"Best Validation Accuracy: {best_accuracy:.4f}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Classification with [Rotation Prediction](https://arxiv.org/abs/1803.07728)\n",
    "- Use the encoder to extract features from the images.\n",
    "- Use the rotation prediction task to train the encoder.\n",
    "- Train a linear classifier on top of the encoder features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "class RotationPrediction(nn.Layer):\n",
    "    def __init__(self, in_dim, hidden_dim, num_classes, rotation_classes):\n",
    "        super().__init__()\n",
    "        self.encoder = Encoder(in_dim, hidden_dim)\n",
    "        self.rotation_classifier = Classifier(hidden_dim, rotation_classes)\n",
    "        self.classifier = Classifier(hidden_dim, num_classes)\n",
    "\n",
    "    def forward(self, images, task=\"classification\"):\n",
    "        if task == \"rotation_prediction\":\n",
    "            latent = self.encoder(images)\n",
    "            out = self.rotation_classifier(latent)\n",
    "            return out\n",
    "        elif task == \"classification\":\n",
    "            latent = self.encoder(images)\n",
    "            logits = self.classifier(latent)\n",
    "            return logits\n",
    "        else:\n",
    "            raise ValueError(f\"Unknown task: {task}\")\n",
    "\n",
    "    ###########################################################################\n",
    "    # TODO:\n",
     "    #   Finish rotate_images below.\n",
     "    #   This function rotates the images by `angle` degrees counterclockwise.\n",
    "    #   Return: rotated images\n",
    "    ###########################################################################\n",
    "    def rotate_images(self, images, angle=0):\n",
    "        pass"
   ]
  },
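  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A possible sketch of `rotate_images`, reusing the transpose-plus-flip identity from the demo near the top of the notebook (assumes `angle` is a multiple of 90). In the training cell below, the rotation loss can then be computed as `loss = rotation_criterion(model(rotated_images, task=\"rotation_prediction\"), rotation_labels)`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Possible sketch: rotate by multiples of 90 degrees via transpose + flip\n",
    "def rotate_images_sketch(images, angle=0):\n",
    "    rotated = images\n",
    "    for _ in range((angle // 90) % 4):\n",
    "        # One 90-degree CCW rotation in the (H, W) plane\n",
    "        rotated = paddle.transpose(rotated, perm=[0, 1, 3, 2]).flip(axis=2)\n",
    "    return rotated"
   ]
  },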
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true,
    "tags": []
   },
   "outputs": [],
   "source": [
    "epochs = 4\n",
    "lr = 1e-4\n",
    "angles = [0, 90, 180, 270]\n",
    "\n",
    "model = RotationPrediction(img_channels, hidden_dim, num_classes, len(angles))\n",
    "model.train()\n",
    "\n",
    "# self-supervised pretraining with rotation prediction\n",
    "rotation_optimizer = AdamW(\n",
    "    parameters=model.parameters(),\n",
    "    learning_rate=lr,\n",
    ")\n",
    "rotation_criterion = nn.CrossEntropyLoss()\n",
    "for epoch in range(epochs):\n",
    "    for batch_id, (images, labels) in enumerate(tqdm(train_loader)):\n",
    "        rotation_optimizer.clear_grad()\n",
    "        rotated_images = []\n",
    "        rotation_labels = []\n",
    "        for angle_idx, angle in enumerate(angles):\n",
    "            rotated_image = model.rotate_images(images, angle)\n",
    "            rotated_images.append(rotated_image)\n",
    "            rotation_labels.append(paddle.full_like(labels, angle_idx))\n",
    "        rotated_images = paddle.concat(rotated_images, axis=0)\n",
    "        rotation_labels = paddle.concat(rotation_labels, axis=0)\n",
    "        ###########################################################################\n",
    "        # TODO:\n",
    "        #   Compute rotation prediction loss.\n",
    "        ###########################################################################\n",
    "        loss.backward()\n",
    "        rotation_optimizer.step()\n",
    "        if batch_id % 300 == 0:\n",
    "            print(f\"Epoch {epoch}, Batch {batch_id}, Loss: {loss.numpy()}\")\n",
    "print(\"Self-supervised pretraining completed with rotation prediction\")\n",
    "paddle.save(model.state_dict(), \"RotationPrediction.pdparams\")\n",
    "\n",
    "# fine-tuning with classification task\n",
    "classification_optimizer = AdamW(\n",
    "    parameters=model.parameters(),\n",
    "    learning_rate=lr,\n",
    ")\n",
    "classification_criterion = nn.CrossEntropyLoss()\n",
    "best_accuracy = 0.0\n",
    "for epoch in range(epochs):\n",
    "    for batch_id, (images, labels) in enumerate(tqdm(train_loader)):\n",
    "        classification_optimizer.clear_grad()\n",
    "        logits = model(images, task=\"classification\")\n",
    "        loss = classification_criterion(logits, labels)\n",
    "        loss.backward()\n",
    "        classification_optimizer.step()\n",
    "        if batch_id % 300 == 0:\n",
    "            print(f\"Epoch {epoch}, Batch {batch_id}, Loss: {loss.numpy()}\")\n",
    "    accuracy = validation(model, test_loader)\n",
    "    print(f\"Epoch {epoch}, Validation Accuracy: {accuracy:.4f}\")\n",
    "    if accuracy > best_accuracy:\n",
    "        best_accuracy = accuracy\n",
    "        paddle.save(model.state_dict(), \"RotationPrediction_classifier.pdparams\")\n",
    "        print(f\"Model saved with accuracy: {best_accuracy:.4f}\")\n",
     "print(f\"Best Validation Accuracy: {best_accuracy:.4f}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Classification with [SimCLR](https://arxiv.org/abs/2002.05709)\n",
    "- Use the encoder to extract features from the images.\n",
    "- Use the SimCLR task to train the encoder.\n",
    "- Train a linear classifier on top of the encoder features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "class SimCLR(nn.Layer):\n",
    "    def __init__(self, in_dim, hidden_dim, num_classes):\n",
    "        super().__init__()\n",
    "        self.encoder = Encoder(in_dim, hidden_dim)\n",
    "        self.projector = Classifier(hidden_dim, hidden_dim)\n",
    "        self.classifier = Classifier(hidden_dim, num_classes)\n",
    "\n",
    "    def forward(self, images, task=\"classification\"):\n",
    "        if task == \"SimCLR\":\n",
    "            latent = self.encoder(images)\n",
    "            out = self.projector(latent)\n",
    "            return out\n",
    "        elif task == \"classification\":\n",
    "            latent = self.encoder(images)\n",
    "            logits = self.classifier(latent)\n",
    "            return logits\n",
    "        else:\n",
    "            raise ValueError(f\"Unknown task: {task}\")\n",
    "\n",
    "    def augment_images(self, images):\n",
    "        images = T.RandomCrop(size=(img_size, img_size))(images)\n",
    "        images = T.RandomHorizontalFlip()(images)\n",
    "        images = T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.2)(\n",
    "            images\n",
    "        )\n",
    "        return images\n",
    "\n",
    "    ###########################################################################\n",
    "    # TODO:\n",
    "    #   Finish contrastive_loss below.\n",
     "    #   This function computes the contrastive loss between the original\n",
     "    #   and augmented views.\n",
     "    # HINT:\n",
     "    #   You can refer to the SimCLR paper (https://arxiv.org/abs/2002.05709) for details.\n",
    "    ###########################################################################\n",
    "    def contrastive_loss(self, features, augmented_features, temperature=0.1):\n",
    "        pass"
   ]
  },
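  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A possible sketch of the NT-Xent loss from the SimCLR paper (its Eq. 1), assuming `features` and `augmented_features` are the projector outputs for the two views of the same batch; after concatenating the 2N rows, the positive pair for row `i` is row `(i + N) % 2N`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Possible NT-Xent sketch: 2N rows; positives are the (i, i + N) pairs\n",
    "def contrastive_loss_sketch(features, augmented_features, temperature=0.1):\n",
    "    n = features.shape[0]\n",
    "    z = paddle.concat([features, augmented_features], axis=0)  # (2N, D)\n",
    "    z = nn.functional.normalize(z, axis=1)\n",
    "    sim = paddle.matmul(z, z, transpose_y=True) / temperature  # (2N, 2N)\n",
    "    # Remove self-similarity so it never competes with the positives\n",
    "    sim = sim - paddle.eye(2 * n) * 1e9\n",
    "    # The positive for row i is row (i + n) % (2 * n)\n",
    "    labels = paddle.concat([paddle.arange(n, 2 * n), paddle.arange(0, n)])\n",
    "    return nn.functional.cross_entropy(sim, labels)"
   ]
  },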
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "epochs = 4\n",
    "lr = 1e-4\n",
    "\n",
    "model = SimCLR(img_channels, hidden_dim, num_classes)\n",
    "model.train()\n",
    "\n",
    "# self-supervised pretraining with SimCLR\n",
    "simclr_optimizer = AdamW(\n",
    "    parameters=model.parameters(),\n",
    "    learning_rate=lr,\n",
    ")\n",
    "for epoch in range(epochs):\n",
    "    for batch_id, (images, labels) in enumerate(tqdm(train_loader)):\n",
    "        simclr_optimizer.clear_grad()\n",
    "        augmented_images = model.augment_images(images)\n",
    "        augmented_features = model(augmented_images, task=\"SimCLR\")\n",
    "        features = model(images, task=\"SimCLR\")\n",
    "        loss = model.contrastive_loss(features, augmented_features)\n",
    "        loss.backward()\n",
    "        simclr_optimizer.step()\n",
    "        if batch_id % 300 == 0:\n",
    "            print(f\"Epoch {epoch}, Batch {batch_id}, Loss: {loss.numpy()}\")\n",
    "print(\"Self-supervised pretraining completed with SimCLR\")\n",
    "paddle.save(model.state_dict(), \"SimCLR.pdparams\")\n",
    "\n",
    "# fine-tuning with classification task\n",
    "classification_optimizer = AdamW(\n",
    "    parameters=model.parameters(),\n",
    "    learning_rate=lr,\n",
    ")\n",
    "classification_criterion = nn.CrossEntropyLoss()\n",
    "best_accuracy = 0.0\n",
    "for epoch in range(epochs):\n",
    "    for batch_id, (images, labels) in enumerate(tqdm(train_loader)):\n",
    "        classification_optimizer.clear_grad()\n",
    "        logits = model(images, task=\"classification\")\n",
    "        loss = classification_criterion(logits, labels)\n",
    "        loss.backward()\n",
    "        classification_optimizer.step()\n",
    "        if batch_id % 300 == 0:\n",
    "            print(f\"Epoch {epoch}, Batch {batch_id}, Loss: {loss.numpy()}\")\n",
    "    accuracy = validation(model, test_loader)\n",
    "    print(f\"Epoch {epoch}, Validation Accuracy: {accuracy:.4f}\")\n",
    "    if accuracy > best_accuracy:\n",
    "        best_accuracy = accuracy\n",
    "        paddle.save(model.state_dict(), \"SimCLR_classifier.pdparams\")\n",
    "        print(f\"Model saved with accuracy: {best_accuracy:.4f}\")\n",
     "print(f\"Best Validation Accuracy: {best_accuracy:.4f}\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "paddle_cuda",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
