{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Improved Techniques For Consistency Training\n",
    "\n",
     "[![arXiv](https://img.shields.io/badge/arXiv-2310.14189-b31b1b.svg)](https://arxiv.org/abs/2310.14189) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Kinyugo/consistency_models/blob/main/notebooks/improved_techniques_for_consistency_training_example.ipynb) [![GitHub Repo stars](https://img.shields.io/github/stars/Kinyugo/consistency_models?style=social) ](https://github.com/Kinyugo/consistency_models)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 📖 Introduction\n",
    "\n",
    "[Consistency Models](https://arxiv.org/abs/2303.01469) are a new family of generative models that achieve high sample quality without adversarial training. They support fast one-step generation by design, while still allowing for few-step sampling to trade compute for sample quality. They also support zero-shot data editing, like image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks.\n",
    "\n",
     "The original formulation of consistency models achieved good results; however, it was still sub-optimal compared to diffusion models. In a follow-up paper, the authors revisit the theory and present training techniques that further narrow the gap between consistency models and other generative models.\n",
    "\n",
    "### Contributions \n",
    "\n",
     "- Eliminating the Exponential Moving Average (EMA) from the teacher model, i.e. teacher model = student model\n",
    "- Replacing LPIPS with Pseudo-Huber loss \n",
    "- Log-normal noise schedule\n",
    "- Improved timestep discretization schedule \n",
     "- Improved loss weighting\n",
     "\n",
     "### Definition\n",
    "\n",
    "Given a diffusion trajectory $x_{\\sigma \\in \\left[\\sigma_{min}, \\sigma_{max}\\right]}$, we define a consistency function $f : \\left(x_{\\sigma}, \\sigma\\right) \\rightarrow x_{\\sigma_{min}}$.\n",
    "\n",
     "We can then train a consistency model $f_{\\theta}\\left(., . \\right)$ to approximate the consistency function. A property of the consistency function is that $f : \\left(x_{\\sigma_{min}}, \\sigma_{min} \\right) \\rightarrow x_{\\sigma_{min}}$, i.e. it is the identity at $\\sigma_{min}$. To achieve this we parameterize the consistency model using skip connections as in [[2]](#2):\n",
    "\n",
    "$$\n",
    "f_{\\theta}\\left(x_{\\sigma}, \\sigma \\right) = c_{skip}\\left(\\sigma \\right)x_{\\sigma} + c_{out}\\left(\\sigma \\right)F_{\\theta}\\left(x_{\\sigma}, \\sigma \\right)\n",
    "$$\n",
    "\n",
    "where $c_{skip}\\left(\\sigma_{min} \\right) = 1$ and $c_{out}\\left(\\sigma_{min} \\right) = 0$ and $F_{\\theta}\\left(.,.\\right)$ is the neural network.\n",
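     "\n",
     "Concretely, [[2]](#2), following the parameterization of [[3]](#3), uses\n",
     "\n",
     "$$\n",
     "c_{skip}\\left(\\sigma \\right) = \\frac{\\sigma_{data}^2}{\\left(\\sigma - \\sigma_{min}\\right)^2 + \\sigma_{data}^2}, \\qquad c_{out}\\left(\\sigma \\right) = \\frac{\\sigma_{data}\\left(\\sigma - \\sigma_{min}\\right)}{\\sqrt{\\sigma^2 + \\sigma_{data}^2}}\n",
     "$$\n",
     "\n",
     "where $\\sigma_{data}$ is the standard deviation of the data distribution; both boundary conditions are then satisfied at $\\sigma = \\sigma_{min}$.\n",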
    "\n",
    "### Algorithms \n",
    "\n",
    "#### Training \n",
    "```python\n",
    "for current_training_step in range(total_training_steps):\n",
    "    data = data_distribution()\n",
    "\n",
    "    num_timesteps = improved_timesteps_schedule(current_training_step, total_training_steps, initial_timesteps, final_timesteps)\n",
    "    sigmas = karras_schedule(num_timesteps, sigma_min, sigma_max)\n",
    "    timesteps = lognormal_distribution(batch_size, sigmas, mean, std)\n",
    "    noise = standard_gaussian_noise()\n",
    "\n",
    "    current_sigmas = sigmas[timesteps]\n",
    "    next_sigmas = sigmas[timesteps + 1]\n",
    "\n",
    "    current_noisy_data = data + current_sigmas * noise\n",
     "    next_noisy_data = data + next_sigmas * noise\n",
    "\n",
    "    prediction = (skip_scaling(next_sigmas, sigma_data, sigma_min) * next_noisy_data \n",
    "                + output_scaling(next_sigmas, sigma_data, sigma_min) * model(next_noisy_data, next_sigmas))\n",
    "\n",
    "    with no_grad():\n",
    "        target = (skip_scaling(current_sigmas, sigma_data, sigma_min) * current_noisy_data \n",
    "                + output_scaling(current_sigmas, sigma_data, sigma_min) * model(current_noisy_data, current_sigmas))\n",
    "\n",
    "    loss_weights = improved_loss_weighting(sigmas)[timesteps]\n",
     "    loss = mean(loss_weights * pseudo_huber_loss(prediction, target))\n",
    "\n",
    "    loss.backward()\n",
    "```\n",
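     "\n",
     "The Pseudo-Huber loss and the improved loss weighting used above are defined in [[1]](#1) as\n",
     "\n",
     "$$\n",
     "d\\left(x, y\\right) = \\sqrt{\\lVert x - y \\rVert_2^2 + c^2} - c, \\qquad \\lambda\\left(\\sigma_i\\right) = \\frac{1}{\\sigma_{i + 1} - \\sigma_i}\n",
     "$$\n",
     "\n",
     "where $c > 0$ is a small constant (the paper suggests $c = 0.00054\\sqrt{d}$ for data dimensionality $d$). The weighting assigns larger weights to smaller noise levels, where adjacent sigmas lie closer together.\n",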
    "\n",
    "#### Sampling\n",
    "\n",
     "Starting from an initial random noise $\\hat{x}_{\\sigma_{max}} \\sim \\mathcal{N}(0, \\sigma_{max}^2I)$, the consistency model can be used to sample a point in a single step: $\\hat{x}_{\\sigma_{min}} = f_{\\theta}(\\hat{x}_{\\sigma_{max}}, \\sigma_{max})$. For iterative refinement, the following algorithm can be used:\n",
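     "\n",
     "At each refinement step, the previous estimate is first re-noised to the noise level $\\sigma$ before being denoised again:\n",
     "\n",
     "$$\n",
     "\\hat{x}_{\\sigma} = \\hat{x}_{\\sigma_{min}} + \\sqrt{\\sigma^2 - \\sigma_{min}^2} \\, z, \\qquad z \\sim \\mathcal{N}\\left(0, I\\right)\n",
     "$$\n",
     "\n",
     "This corresponds to the `noisy_sample` computation below.\n",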
    "\n",
    "```python\n",
    "# Generate an initial sample from the initial random noise\n",
    "sample = (skip_scaling(sigma_max, sigma_data, sigma_min) * x_sigma_max\n",
    "                + output_scaling(sigma_max, sigma_data, sigma_min) * model(x_sigma_max, sigma_max))\n",
    "sample = clamp?(sample)\n",
    "\n",
    "for sigma in sigmas:\n",
    "    noise = standard_gaussian_noise()\n",
    "    noisy_sample = sample + square_root(square(sigma) - square(sigma_min)) * noise\n",
    "    sample = (skip_scaling(sigma, sigma_data, sigma_min) * noisy_sample\n",
    "                + output_scaling(sigma, sigma_data, sigma_min) * model(noisy_sample, sigma))\n",
    "    sample = clamp?(sample)\n",
    "```\n",
    "\n",
     "where `model` $= F_{\\theta}\\left(.,.\\right)$ is the neural network and\n",
    "`clamp?` is a function that optionally clips values to a given range.\n",
    "\n",
    "### References\n",
    "\n",
    "<a id=\"1\">[1]</a> Song, Y., &amp; Dhariwal, P. (2023, October 22). Improved techniques for training consistency models. arXiv.org. https://arxiv.org/abs/2310.14189 \n",
    "\n",
    "<a id=\"2\">[2]</a> Song, Y., Dhariwal, P., Chen, M., &amp; Sutskever, I. (2023, May 31). Consistency models. arXiv.org. https://arxiv.org/abs/2303.01469 \n",
    "\n",
    "<a id=\"3\">[3]</a> Karras, T., Aittala, M., Aila, T., &amp; Laine, S. (2022, October 11). Elucidating the design space of diffusion-based Generative Models. arXiv.org. https://arxiv.org/abs/2206.00364 "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 🛠️ Setup\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### GPU Check"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!nvidia-smi"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Packages Installation\n",
    "\n",
     "> **NOTE:** If using Colab, restart the runtime after installing the packages."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install -q lightning gdown torchmetrics einops torchinfo --no-cache --upgrade\n",
    "%pip install -q -e git+https://github.com/Kinyugo/consistency_models.git#egg=consistency_models"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Imports"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import os\n",
    "from dataclasses import asdict, dataclass\n",
    "from typing import Any, Callable, List, Optional, Tuple, Union\n",
    "\n",
    "import torch\n",
    "from einops import rearrange\n",
    "from einops.layers.torch import Rearrange\n",
    "from lightning import LightningDataModule, LightningModule, Trainer, seed_everything\n",
    "from lightning.pytorch.callbacks import LearningRateMonitor\n",
    "from lightning.pytorch.loggers import TensorBoardLogger\n",
    "from matplotlib import pyplot as plt\n",
    "from torch import Tensor, nn\n",
    "from torch.nn import functional as F\n",
    "from torch.utils.data import DataLoader\n",
    "from torchinfo import summary\n",
    "from torchvision import transforms as T\n",
    "from torchvision.datasets import ImageFolder\n",
    "from torchvision.utils import make_grid\n",
    "\n",
    "from consistency_models import (\n",
    "    ConsistencySamplingAndEditing,\n",
    "    ImprovedConsistencyTraining,\n",
    "    pseudo_huber_loss,\n",
    ")\n",
    "from consistency_models.utils import update_ema_model_"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 🧠  Implementation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Downloading and Extraction"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!gdown 1FnzQLDPs-IlTTEr14YyENKjTYqZfn8mS  && tar -xf butterflies256.tar.gz # Butterflies Dataset\n",
    "# !gdown 1m1QrNnKJy7hEzUQusyD3th_La775QKUV && tar -xf abstract_art.tar.gz  # Abstract Art Dataset\n",
    "# !gdown 1VJow74U3H7KG_HOiP1WWo6LoqoE3azJj && tar -xf anime_faces.tar.gz # Anime Faces"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### DataModule"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "@dataclass\n",
    "class ImageDataModuleConfig:\n",
    "    data_dir: str = \"butterflies256\"\n",
    "    image_size: Tuple[int, int] = (32, 32)\n",
    "    batch_size: int = 32\n",
    "    num_workers: int = 8\n",
    "    pin_memory: bool = True\n",
    "    persistent_workers: bool = True\n",
    "\n",
    "\n",
    "class ImageDataModule(LightningDataModule):\n",
    "    def __init__(self, config: ImageDataModuleConfig) -> None:\n",
    "        super().__init__()\n",
    "\n",
    "        self.config = config\n",
    "\n",
     "    def setup(self, stage: Optional[str] = None) -> None:\n",
    "        transform = T.Compose(\n",
    "            [\n",
    "                T.Resize(self.config.image_size),\n",
    "                T.RandomHorizontalFlip(),\n",
    "                T.ToTensor(),\n",
    "                T.Lambda(lambda x: (x * 2) - 1),\n",
    "            ]\n",
    "        )\n",
    "        self.dataset = ImageFolder(self.config.data_dir, transform=transform)\n",
    "\n",
    "    def train_dataloader(self) -> DataLoader:\n",
    "        return DataLoader(\n",
    "            self.dataset,\n",
    "            batch_size=self.config.batch_size,\n",
    "            shuffle=True,\n",
    "            num_workers=self.config.num_workers,\n",
    "            pin_memory=self.config.pin_memory,\n",
    "            persistent_workers=self.config.persistent_workers,\n",
    "        )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Modules"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def GroupNorm(channels: int) -> nn.GroupNorm:\n",
    "    return nn.GroupNorm(num_groups=min(32, channels // 4), num_channels=channels)\n",
    "\n",
    "\n",
    "class SelfAttention(nn.Module):\n",
    "    def __init__(\n",
    "        self,\n",
    "        in_channels: int,\n",
    "        out_channels: int,\n",
    "        n_heads: int = 8,\n",
    "        dropout: float = 0.3,\n",
    "    ) -> None:\n",
    "        super().__init__()\n",
    "\n",
    "        self.dropout = dropout\n",
    "\n",
    "        self.qkv_projection = nn.Sequential(\n",
    "            GroupNorm(in_channels),\n",
    "            nn.Conv2d(in_channels, 3 * in_channels, kernel_size=1, bias=False),\n",
    "            Rearrange(\"b (i h d) x y -> i b h (x y) d\", i=3, h=n_heads),\n",
    "        )\n",
    "        self.output_projection = nn.Sequential(\n",
    "            Rearrange(\"b h l d -> b l (h d)\"),\n",
    "            nn.Linear(in_channels, out_channels, bias=False),\n",
    "            Rearrange(\"b l d -> b d l\"),\n",
    "            GroupNorm(out_channels),\n",
    "            nn.Dropout1d(dropout),\n",
    "        )\n",
    "        self.residual_projection = nn.Conv2d(in_channels, out_channels, kernel_size=1)\n",
    "\n",
    "    def forward(self, x: Tensor) -> Tensor:\n",
    "        q, k, v = self.qkv_projection(x).unbind(dim=0)\n",
    "\n",
    "        output = F.scaled_dot_product_attention(\n",
    "            q, k, v, dropout_p=self.dropout if self.training else 0.0, is_causal=False\n",
    "        )\n",
    "        output = self.output_projection(output)\n",
    "        output = rearrange(output, \"b c (x y) -> b c x y\", x=x.shape[-2], y=x.shape[-1])\n",
    "\n",
    "        return output + self.residual_projection(x)\n",
    "\n",
    "\n",
    "class UNetBlock(nn.Module):\n",
    "    def __init__(\n",
    "        self,\n",
    "        in_channels: int,\n",
    "        out_channels: int,\n",
    "        noise_level_channels: int,\n",
    "        dropout: float = 0.3,\n",
    "    ) -> None:\n",
    "        super().__init__()\n",
    "\n",
    "        self.input_projection = nn.Sequential(\n",
    "            GroupNorm(in_channels),\n",
    "            nn.SiLU(),\n",
    "            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=\"same\"),\n",
    "            nn.Dropout2d(dropout),\n",
    "        )\n",
    "        self.noise_level_projection = nn.Sequential(\n",
    "            nn.SiLU(),\n",
    "            nn.Conv2d(noise_level_channels, out_channels, kernel_size=1),\n",
    "        )\n",
    "        self.output_projection = nn.Sequential(\n",
    "            GroupNorm(out_channels),\n",
    "            nn.SiLU(),\n",
    "            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=\"same\"),\n",
    "            nn.Dropout2d(dropout),\n",
    "        )\n",
    "        self.residual_projection = nn.Conv2d(in_channels, out_channels, kernel_size=1)\n",
    "\n",
    "    def forward(self, x: Tensor, noise_level: Tensor) -> Tensor:\n",
    "        h = self.input_projection(x)\n",
    "        h = h + self.noise_level_projection(noise_level)\n",
    "\n",
    "        return self.output_projection(h) + self.residual_projection(x)\n",
    "\n",
    "\n",
    "class UNetBlockWithSelfAttention(nn.Module):\n",
    "    def __init__(\n",
    "        self,\n",
    "        in_channels: int,\n",
    "        out_channels: int,\n",
    "        noise_level_channels: int,\n",
    "        n_heads: int = 8,\n",
    "        dropout: float = 0.3,\n",
    "    ) -> None:\n",
    "        super().__init__()\n",
    "\n",
    "        self.unet_block = UNetBlock(\n",
    "            in_channels, out_channels, noise_level_channels, dropout\n",
    "        )\n",
    "        self.self_attention = SelfAttention(\n",
    "            out_channels, out_channels, n_heads, dropout\n",
    "        )\n",
    "\n",
    "    def forward(self, x: Tensor, noise_level: Tensor) -> Tensor:\n",
    "        return self.self_attention(self.unet_block(x, noise_level))\n",
    "\n",
    "\n",
    "class Downsample(nn.Module):\n",
    "    def __init__(self, channels: int) -> None:\n",
    "        super().__init__()\n",
    "\n",
    "        self.projection = nn.Sequential(\n",
    "            Rearrange(\"b c (h ph) (w pw) -> b (c ph pw) h w\", ph=2, pw=2),\n",
    "            nn.Conv2d(4 * channels, channels, kernel_size=1),\n",
    "        )\n",
    "\n",
    "    def forward(self, x: Tensor) -> Tensor:\n",
    "        return self.projection(x)\n",
    "\n",
    "\n",
    "class Upsample(nn.Module):\n",
    "    def __init__(self, channels: int) -> None:\n",
    "        super().__init__()\n",
    "\n",
    "        self.projection = nn.Sequential(\n",
    "            nn.Upsample(scale_factor=2.0, mode=\"nearest\"),\n",
    "            nn.Conv2d(channels, channels, kernel_size=3, padding=\"same\"),\n",
    "        )\n",
    "\n",
    "    def forward(self, x: Tensor) -> Tensor:\n",
    "        return self.projection(x)\n",
    "\n",
    "\n",
    "class NoiseLevelEmbedding(nn.Module):\n",
    "    def __init__(self, channels: int, scale: float = 0.02) -> None:\n",
    "        super().__init__()\n",
    "\n",
    "        self.W = nn.Parameter(torch.randn(channels // 2) * scale, requires_grad=False)\n",
    "\n",
    "        self.projection = nn.Sequential(\n",
    "            nn.Linear(channels, 4 * channels),\n",
    "            nn.SiLU(),\n",
    "            nn.Linear(4 * channels, channels),\n",
    "            Rearrange(\"b c -> b c () ()\"),\n",
    "        )\n",
    "\n",
    "    def forward(self, x: Tensor) -> Tensor:\n",
    "        h = x[:, None] * self.W[None, :] * 2 * torch.pi\n",
    "        h = torch.cat([torch.sin(h), torch.cos(h)], dim=-1)\n",
    "\n",
    "        return self.projection(h)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### UNet\n",
    "\n",
     "Our UNet is inspired by Stable Diffusion and Imagen. It is not the same UNet as in the paper."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "@dataclass\n",
    "class UNetConfig:\n",
    "    channels: int = 3\n",
    "    noise_level_channels: int = 256\n",
    "    noise_level_scale: float = 0.02\n",
    "    n_heads: int = 8\n",
    "    top_blocks_channels: Tuple[int, ...] = (128, 128)\n",
    "    top_blocks_n_blocks_per_resolution: Tuple[int, ...] = (2, 2)\n",
    "    top_blocks_has_resampling: Tuple[bool, ...] = (True, True)\n",
    "    top_blocks_dropout: Tuple[float, ...] = (0.0, 0.0)\n",
    "    mid_blocks_channels: Tuple[int, ...] = (256, 512)\n",
    "    mid_blocks_n_blocks_per_resolution: Tuple[int, ...] = (4, 4)\n",
    "    mid_blocks_has_resampling: Tuple[bool, ...] = (True, False)\n",
    "    mid_blocks_dropout: Tuple[float, ...] = (0.0, 0.3)\n",
    "\n",
    "\n",
    "class UNet(nn.Module):\n",
    "    def __init__(self, config: UNetConfig) -> None:\n",
    "        super().__init__()\n",
    "\n",
    "        self.config = config\n",
    "\n",
    "        self.input_projection = nn.Conv2d(\n",
    "            config.channels,\n",
    "            config.top_blocks_channels[0],\n",
    "            kernel_size=3,\n",
    "            padding=\"same\",\n",
    "        )\n",
    "        self.noise_level_embedding = NoiseLevelEmbedding(\n",
    "            config.noise_level_channels, config.noise_level_scale\n",
    "        )\n",
    "        self.top_encoder_blocks = self._make_encoder_blocks(\n",
    "            self.config.top_blocks_channels + self.config.mid_blocks_channels[:1],\n",
    "            self.config.top_blocks_n_blocks_per_resolution,\n",
    "            self.config.top_blocks_has_resampling,\n",
    "            self.config.top_blocks_dropout,\n",
    "            self._make_top_block,\n",
    "        )\n",
    "        self.mid_encoder_blocks = self._make_encoder_blocks(\n",
    "            self.config.mid_blocks_channels + self.config.mid_blocks_channels[-1:],\n",
    "            self.config.mid_blocks_n_blocks_per_resolution,\n",
    "            self.config.mid_blocks_has_resampling,\n",
    "            self.config.mid_blocks_dropout,\n",
    "            self._make_mid_block,\n",
    "        )\n",
    "        self.mid_decoder_blocks = self._make_decoder_blocks(\n",
    "            self.config.mid_blocks_channels + self.config.mid_blocks_channels[-1:],\n",
    "            self.config.mid_blocks_n_blocks_per_resolution,\n",
    "            self.config.mid_blocks_has_resampling,\n",
    "            self.config.mid_blocks_dropout,\n",
    "            self._make_mid_block,\n",
    "        )\n",
    "        self.top_decoder_blocks = self._make_decoder_blocks(\n",
    "            self.config.top_blocks_channels + self.config.mid_blocks_channels[:1],\n",
    "            self.config.top_blocks_n_blocks_per_resolution,\n",
    "            self.config.top_blocks_has_resampling,\n",
    "            self.config.top_blocks_dropout,\n",
    "            self._make_top_block,\n",
    "        )\n",
    "        self.output_projection = nn.Conv2d(\n",
    "            config.top_blocks_channels[0],\n",
    "            config.channels,\n",
    "            kernel_size=3,\n",
    "            padding=\"same\",\n",
    "        )\n",
    "\n",
    "    def forward(self, x: Tensor, noise_level: Tensor) -> Tensor:\n",
    "        h = self.input_projection(x)\n",
    "        noise_level = self.noise_level_embedding(noise_level)\n",
    "\n",
    "        top_encoder_embeddings = []\n",
    "        for block in self.top_encoder_blocks:\n",
    "            if isinstance(block, UNetBlock):\n",
    "                h = block(h, noise_level)\n",
    "                top_encoder_embeddings.append(h)\n",
    "            else:\n",
    "                h = block(h)\n",
    "\n",
    "        mid_encoder_embeddings = []\n",
    "        for block in self.mid_encoder_blocks:\n",
    "            if isinstance(block, UNetBlockWithSelfAttention):\n",
    "                h = block(h, noise_level)\n",
    "                mid_encoder_embeddings.append(h)\n",
    "            else:\n",
    "                h = block(h)\n",
    "\n",
    "        for block in self.mid_decoder_blocks:\n",
    "            if isinstance(block, UNetBlockWithSelfAttention):\n",
    "                h = torch.cat((h, mid_encoder_embeddings.pop()), dim=1)\n",
    "                h = block(h, noise_level)\n",
    "            else:\n",
    "                h = block(h)\n",
    "\n",
    "        for block in self.top_decoder_blocks:\n",
    "            if isinstance(block, UNetBlock):\n",
    "                h = torch.cat((h, top_encoder_embeddings.pop()), dim=1)\n",
    "                h = block(h, noise_level)\n",
    "            else:\n",
    "                h = block(h)\n",
    "\n",
    "        return self.output_projection(h)\n",
    "\n",
    "    def _make_encoder_blocks(\n",
    "        self,\n",
    "        channels: Tuple[int, ...],\n",
    "        n_blocks_per_resolution: Tuple[int, ...],\n",
    "        has_resampling: Tuple[bool, ...],\n",
    "        dropout: Tuple[float, ...],\n",
    "        block_fn: Callable[[], nn.Module],\n",
    "    ) -> nn.ModuleList:\n",
    "        blocks = nn.ModuleList()\n",
    "\n",
    "        channel_pairs = list(zip(channels[:-1], channels[1:]))\n",
    "        for idx, (in_channels, out_channels) in enumerate(channel_pairs):\n",
    "            for _ in range(n_blocks_per_resolution[idx]):\n",
    "                blocks.append(block_fn(in_channels, out_channels, dropout[idx]))\n",
    "                in_channels = out_channels\n",
    "\n",
    "            if has_resampling[idx]:\n",
    "                blocks.append(Downsample(out_channels))\n",
    "\n",
    "        return blocks\n",
    "\n",
    "    def _make_decoder_blocks(\n",
    "        self,\n",
    "        channels: Tuple[int, ...],\n",
    "        n_blocks_per_resolution: Tuple[int, ...],\n",
    "        has_resampling: Tuple[bool, ...],\n",
    "        dropout: Tuple[float, ...],\n",
    "        block_fn: Callable[[], nn.Module],\n",
    "    ) -> nn.ModuleList:\n",
    "        blocks = nn.ModuleList()\n",
    "\n",
    "        channel_pairs = list(zip(channels[:-1], channels[1:]))[::-1]\n",
    "        for idx, (out_channels, in_channels) in enumerate(channel_pairs):\n",
    "            if has_resampling[::-1][idx]:\n",
    "                blocks.append(Upsample(in_channels))\n",
    "\n",
    "            inner_blocks = []\n",
    "            for _ in range(n_blocks_per_resolution[::-1][idx]):\n",
    "                inner_blocks.append(\n",
    "                    block_fn(in_channels * 2, out_channels, dropout[::-1][idx])\n",
    "                )\n",
    "                out_channels = in_channels\n",
    "            blocks.extend(inner_blocks[::-1])\n",
    "\n",
    "        return blocks\n",
    "\n",
    "    def _make_top_block(\n",
    "        self, in_channels: int, out_channels: int, dropout: float\n",
    "    ) -> UNetBlock:\n",
    "        return UNetBlock(\n",
    "            in_channels,\n",
    "            out_channels,\n",
    "            self.config.noise_level_channels,\n",
    "            dropout,\n",
    "        )\n",
    "\n",
    "    def _make_mid_block(\n",
    "        self,\n",
    "        in_channels: int,\n",
    "        out_channels: int,\n",
    "        dropout: float,\n",
    "    ) -> UNetBlockWithSelfAttention:\n",
    "        return UNetBlockWithSelfAttention(\n",
    "            in_channels,\n",
    "            out_channels,\n",
    "            self.config.noise_level_channels,\n",
    "            self.config.n_heads,\n",
    "            dropout,\n",
    "        )\n",
    "\n",
    "    def save_pretrained(self, pretrained_path: str) -> None:\n",
    "        os.makedirs(pretrained_path, exist_ok=True)\n",
    "\n",
    "        with open(os.path.join(pretrained_path, \"config.json\"), mode=\"w\") as f:\n",
    "            json.dump(asdict(self.config), f)\n",
    "\n",
    "        torch.save(self.state_dict(), os.path.join(pretrained_path, \"model.pt\"))\n",
    "\n",
    "    @classmethod\n",
    "    def from_pretrained(cls, pretrained_path: str) -> \"UNet\":\n",
    "        with open(os.path.join(pretrained_path, \"config.json\"), mode=\"r\") as f:\n",
    "            config_dict = json.load(f)\n",
    "        config = UNetConfig(**config_dict)\n",
    "\n",
    "        model = cls(config)\n",
    "\n",
    "        state_dict = torch.load(\n",
    "            os.path.join(pretrained_path, \"model.pt\"), map_location=torch.device(\"cpu\")\n",
    "        )\n",
    "        model.load_state_dict(state_dict)\n",
    "\n",
    "        return model\n",
    "\n",
    "\n",
    "summary(UNet(UNetConfig()), input_size=((1, 3, 32, 32), (1,)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### LitUNet"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "@dataclass\n",
    "class LitImprovedConsistencyModelConfig:\n",
    "    ema_decay_rate: float = 0.99993\n",
    "    lr: float = 1e-4\n",
    "    betas: Tuple[float, float] = (0.9, 0.995)\n",
    "    lr_scheduler_start_factor: float = 1e-5\n",
    "    lr_scheduler_iters: int = 10_000\n",
    "    sample_every_n_steps: int = 10_000\n",
    "    num_samples: int = 8\n",
     "    sampling_sigmas: Tuple[Tuple[float, ...], ...] = (\n",
    "        (80,),\n",
    "        (80.0, 0.661),\n",
    "        (80.0, 24.4, 5.84, 0.9, 0.661),\n",
    "    )\n",
    "\n",
    "\n",
    "class LitImprovedConsistencyModel(LightningModule):\n",
    "    def __init__(\n",
    "        self,\n",
    "        consistency_training: ImprovedConsistencyTraining,\n",
    "        consistency_sampling: ConsistencySamplingAndEditing,\n",
    "        model: UNet,\n",
    "        ema_model: UNet,\n",
    "        config: LitImprovedConsistencyModelConfig,\n",
    "    ) -> None:\n",
    "        super().__init__()\n",
    "\n",
    "        self.consistency_training = consistency_training\n",
    "        self.consistency_sampling = consistency_sampling\n",
    "        self.model = model\n",
    "        self.ema_model = ema_model\n",
    "        self.config = config\n",
    "\n",
    "        # Freeze the EMA model and set it to eval mode\n",
    "        for param in self.ema_model.parameters():\n",
    "            param.requires_grad = False\n",
    "        self.ema_model = self.ema_model.eval()\n",
    "\n",
    "    def training_step(self, batch: Union[Tensor, List[Tensor]], batch_idx: int) -> None:\n",
    "        if isinstance(batch, list):\n",
    "            batch = batch[0]\n",
    "\n",
    "        output = self.consistency_training(\n",
    "            self.model, batch, self.global_step, self.trainer.max_steps\n",
    "        )\n",
    "\n",
    "        loss = (\n",
    "            pseudo_huber_loss(output.predicted, output.target) * output.loss_weights\n",
    "        ).mean()\n",
    "\n",
    "        self.log_dict({\"train_loss\": loss, \"num_timesteps\": output.num_timesteps})\n",
    "\n",
    "        return loss\n",
    "\n",
    "    def on_train_batch_end(\n",
    "        self, outputs: Any, batch: Union[Tensor, List[Tensor]], batch_idx: int\n",
    "    ) -> None:\n",
    "        update_ema_model_(self.model, self.ema_model, self.config.ema_decay_rate)\n",
    "\n",
    "        if (\n",
    "            (self.global_step + 1) % self.config.sample_every_n_steps == 0\n",
    "        ) or self.global_step == 0:\n",
    "            self.__sample_and_log_samples(batch)\n",
    "\n",
    "    def configure_optimizers(self):\n",
    "        opt = torch.optim.Adam(\n",
    "            self.model.parameters(), lr=self.config.lr, betas=self.config.betas\n",
    "        )\n",
    "        sched = torch.optim.lr_scheduler.LinearLR(\n",
    "            opt,\n",
    "            start_factor=self.config.lr_scheduler_start_factor,\n",
    "            total_iters=self.config.lr_scheduler_iters,\n",
    "        )\n",
    "        sched = {\"scheduler\": sched, \"interval\": \"step\", \"frequency\": 1}\n",
    "\n",
    "        return [opt], [sched]\n",
    "\n",
    "    @torch.no_grad()\n",
    "    def __sample_and_log_samples(self, batch: Union[Tensor, List[Tensor]]) -> None:\n",
    "        if isinstance(batch, list):\n",
    "            batch = batch[0]\n",
    "\n",
    "        # Ensure the number of samples does not exceed the batch size\n",
    "        num_samples = min(self.config.num_samples, batch.shape[0])\n",
    "        noise = torch.randn_like(batch[:num_samples])\n",
    "\n",
    "        # Log ground truth samples\n",
    "        self.__log_images(\n",
     "            batch[:num_samples].detach().clone(), \"ground_truth\", self.global_step\n",
    "        )\n",
    "\n",
    "        for sigmas in self.config.sampling_sigmas:\n",
    "            samples = self.consistency_sampling(\n",
    "                self.ema_model, noise, sigmas, clip_denoised=True, verbose=True\n",
    "            )\n",
    "            samples = samples.clamp(min=-1.0, max=1.0)\n",
    "\n",
    "            # Generated samples\n",
    "            self.__log_images(\n",
    "                samples,\n",
    "                f\"generated_samples-sigmas={sigmas}\",\n",
    "                self.global_step,\n",
    "            )\n",
    "\n",
    "    @torch.no_grad()\n",
    "    def __log_images(self, images: Tensor, title: str, global_step: int) -> None:\n",
    "        images = images.detach().float()\n",
    "\n",
    "        grid = make_grid(\n",
    "            images.clamp(-1.0, 1.0), value_range=(-1.0, 1.0), normalize=True\n",
    "        )\n",
    "        self.logger.experiment.add_image(title, grid, global_step)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 🚀 Training"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Training Loop"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "@dataclass\n",
    "class TrainingConfig:\n",
    "    image_dm_config: ImageDataModuleConfig\n",
    "    unet_config: UNetConfig\n",
    "    consistency_training: ImprovedConsistencyTraining\n",
    "    consistency_sampling: ConsistencySamplingAndEditing\n",
    "    lit_icm_config: LitImprovedConsistencyModelConfig\n",
    "    trainer: Trainer\n",
    "    seed: int = 42\n",
    "    model_ckpt_path: str = \"checkpoints/icm\"\n",
    "    resume_ckpt_path: Optional[str] = None\n",
    "\n",
    "\n",
    "def run_training(config: TrainingConfig) -> None:\n",
    "    # Set seed\n",
    "    seed_everything(config.seed)\n",
    "\n",
    "    # Create data module\n",
    "    dm = ImageDataModule(config.image_dm_config)\n",
    "\n",
    "    # Create model and its EMA\n",
    "    model = UNet(config.unet_config)\n",
    "    ema_model = UNet(config.unet_config)\n",
    "    ema_model.load_state_dict(model.state_dict())\n",
    "\n",
    "    # Create lightning module\n",
    "    lit_icm = LitImprovedConsistencyModel(\n",
    "        config.consistency_training,\n",
    "        config.consistency_sampling,\n",
    "        model,\n",
    "        ema_model,\n",
    "        config.lit_icm_config,\n",
    "    )\n",
    "\n",
    "    # Run training\n",
    "    config.trainer.fit(lit_icm, dm, ckpt_path=config.resume_ckpt_path)\n",
    "\n",
    "    # Save model\n",
    "    lit_icm.model.save_pretrained(config.model_ckpt_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Run Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%load_ext tensorboard\n",
    "%tensorboard --logdir=lightning_logs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> **NOTE:** While the paper suggests a final_timesteps value of `1280`, it's important to note that during my experimentation on a smaller dataset with approximately 10,000 training steps, I discovered that setting final_timesteps to `11` yielded superior results. This value corresponds to the number of timesteps that would be obtained if we were to train for the same number of steps as mentioned in the paper (400,000 to 800,000). However, it's crucial to emphasize that this particular setting has not been extensively explored or experimented with, so it may require further fine-tuning and adjustment to suit your specific needs."
   ]
  },
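  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check on the value `11`, the paper's discretization curriculum can be evaluated directly. This is a sketch assuming the paper's schedule $N(k) = \\min(s_0 \\cdot 2^{\\lfloor k / K' \\rfloor}, s_1) + 1$ with $s_0 = 10$ and $s_1 = 1280$; whether this repo's `final_timesteps` includes the `+1` is an assumption here.\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "\n",
    "def timesteps_at_step(k: int, total_steps: int, s0: int = 10, s1: int = 1280) -> int:\n",
    "    # K' spaces the doublings from s0 to s1 evenly over training.\n",
    "    k_prime = math.floor(total_steps / (math.log2(s1 / s0) + 1))\n",
    "    return min(s0 * 2 ** math.floor(k / k_prime), s1) + 1\n",
    "\n",
    "\n",
    "# At step 10,000 of a 400,000-step run the curriculum is still in its first\n",
    "# stage: 10 * 2**0 + 1 = 11 timesteps.\n",
    "timesteps_at_step(10_000, 400_000)\n",
    "```"
   ]
  },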
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "training_config = TrainingConfig(\n",
    "    image_dm_config=ImageDataModuleConfig(\"butterflies256\"),\n",
    "    unet_config=UNetConfig(),\n",
    "    consistency_training=ImprovedConsistencyTraining(final_timesteps=11),\n",
    "    consistency_sampling=ConsistencySamplingAndEditing(),\n",
    "    lit_icm_config=LitImprovedConsistencyModelConfig(\n",
    "        sample_every_n_steps=1000, lr_scheduler_iters=1000\n",
    "    ),\n",
    "    trainer=Trainer(\n",
    "        max_steps=10_000,\n",
    "        precision=\"16-mixed\",\n",
    "        log_every_n_steps=10,\n",
    "        logger=TensorBoardLogger(\".\", name=\"logs\", version=\"icm\"),\n",
    "        callbacks=[LearningRateMonitor(logging_interval=\"step\")],\n",
    "    ),\n",
    ")\n",
    "run_training(training_config)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 🎲 Sampling & Zero-shot Editing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "seed_everything(42)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Utils"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def plot_images(images: Tensor, cols: int = 4) -> None:\n",
    "    rows = max(images.shape[0] // cols, 1)\n",
    "    fig, axs = plt.subplots(rows, cols)\n",
    "    axs = axs.flatten()\n",
    "    for i, image in enumerate(images):\n",
    "        axs[i].imshow(image.permute(1, 2, 0).numpy() / 2 + 0.5)\n",
    "        axs[i].set_axis_off()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Checkpoint Loading"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n",
    "dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32\n",
    "\n",
    "unet = UNet.from_pretrained(\"checkpoints/icm\").eval().to(device=device, dtype=dtype)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Load Sample Batch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dm = ImageDataModule(ImageDataModuleConfig(\"butterflies256\", batch_size=4))\n",
    "dm.setup()\n",
    "\n",
    "batch, _ = next(iter(dm.train_dataloader()))\n",
    "batch = batch.to(device=device, dtype=dtype)\n",
    "\n",
    "plot_images(batch.float().cpu())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Experiments"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Sampling"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "consistency_sampling_and_editing = ConsistencySamplingAndEditing()\n",
    "\n",
    "with torch.no_grad():\n",
    "    samples = consistency_sampling_and_editing(\n",
    "        unet,\n",
    "        torch.randn((4, 3, 32, 32), device=device, dtype=dtype),\n",
    "        sigmas=[80.0],  # Use more steps for better samples e.g 2-5\n",
    "        clip_denoised=True,\n",
    "        verbose=True,\n",
    "    )\n",
    "\n",
    "plot_images(samples.float().cpu())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Inpainting"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "random_erasing = T.RandomErasing(p=1.0, scale=(0.2, 0.5), ratio=(0.5, 0.5))\n",
    "masked_batch = random_erasing(batch)\n",
    "mask = torch.logical_not(batch == masked_batch)\n",
    "\n",
    "plot_images(masked_batch.float().cpu())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "with torch.no_grad():\n",
    "    inpainted_batch = consistency_sampling_and_editing(\n",
    "        unet,\n",
    "        masked_batch,\n",
    "        sigmas=[5.23, 2.25],\n",
    "        mask=mask.to(dtype=dtype),\n",
    "        clip_denoised=True,\n",
    "        verbose=True,\n",
    "    )\n",
    "\n",
    "plot_images(torch.cat((masked_batch, inpainted_batch), dim=0).float().cpu())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Interpolation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "batch_a = batch.clone()\n",
    "batch_b = torch.flip(batch, dims=(0,))\n",
    "\n",
    "plot_images(torch.cat((batch_a, batch_b), dim=0).float().cpu())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "with torch.no_grad():\n",
    "    interpolated_batch = consistency_sampling_and_editing.interpolate(\n",
    "        unet,\n",
    "        batch_a,\n",
    "        batch_b,\n",
    "        ab_ratio=0.5,\n",
    "        sigmas=[5.23, 2.25],\n",
    "        clip_denoised=True,\n",
    "        verbose=True,\n",
    "    )\n",
    "\n",
    "plot_images(torch.cat((batch_a, batch_b, interpolated_batch), dim=0).float().cpu())"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "torch",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
