{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "0b25724f",
   "metadata": {},
   "source": [
    "## Table of Contents\n",
    "1. Basics\n",
    "   1. How image generation works\n",
    "   2. Programming implementation\n",
    "   3. Exercises\n",
    "2. Advanced\n",
    "   1. Text-to-image\n",
    "   2. Image-to-image\n",
    "   3. ControlNet\n",
    "   4. LoRA fine-tuning\n",
    "   5. DiT\n",
    "3. Project -- EasyPhoto\n",
    "   1. EasyPhoto is a WebUI plugin for generating AI portraits; its code can be used to train a personal digital avatar. It supports image generation for any ID: given only a few training photos of a subject (ID), it first trains a LoRA model that captures the ID information, and with that LoRA model any designated region of a template image can be replaced"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0a83a030",
   "metadata": {},
   "source": [
    "- [Prerequisites](./前置内容.ipynb)\n",
    "   1. The UNet network\n",
    "   2. Attention"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1d0a0538",
   "metadata": {},
   "source": [
    "## Basics\n",
    "\n",
    "### How image generation works\n",
    "#### 1. DDPM theory\n",
    "1. The DDPM (Denoising Diffusion Probabilistic Model) consists of two processes: a forward noising process followed by a reverse denoising process\n",
    "   1. Noising: random Gaussian noise is added to the data step by step until it becomes pure Gaussian noise; this is generally done during training\n",
    "   2. Denoising: randomly sampled pure Gaussian noise is **gradually** denoised to recover realistic data; this is generally done at inference (generation) time\n",
    "2. The generated data is random and uncontrolled\n",
    "#### 2. Forward process -- adding noise\n",
    "1. Gaussian noise is repeatedly added to the original data $X_{0}$ until it becomes pure Gaussian noise $X_{t}$\n",
    "2. Formulas:\n",
    "   1. Step-by-step noising from $X_{t-1}$ to $X_{t}$: $X_{t}=\\sqrt{\\alpha_{t}}X_{t-1}+\\sqrt{1-\\alpha_{t}}\\epsilon_{t-1}$\n",
    "   2. One-shot noising from $X_{0}$ to $X_{t}$: $X_{t}=\\sqrt{\\bar{\\alpha _{t}}}X_{0}+\\sqrt{1-\\bar{\\alpha _{t}}}\\epsilon$\n",
    "   3. Notation\n",
    "      1. $\\alpha_{t}$ is a hyperparameter close to 1; in the paper it decreases from 0.9999 to 0.98, with $\\alpha_{t}=1-\\beta_{t}$ \n",
    "      2. $\\bar{\\alpha _{t}}=\\prod\\limits_{i=1}^{t}\\alpha_{i}$\n",
    "      3. $\\epsilon\\sim{N(0,I)}$ is Gaussian noise\n",
    "   4. Implementation:\n",
    "      ```python\n",
    "      betas = torch.linspace(start=0.0001, end=0.02, steps=1000)\n",
    "      alphas = 1 - betas\n",
    "      alphas_cum = torch.cumprod(alphas, 0) \n",
    "      alphas_cum_s = torch.sqrt(alphas_cum) \n",
    "      alphas_cum_sm = torch.sqrt(1 - alphas_cum)\n",
    "      noise = torch.randn_like(x)  # x is the image tensor\n",
    "      x_t = alphas_cum_s[t] * x + alphas_cum_sm[t] * noise\n",
    "      ```\n"
   ]
  },
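  {
   "cell_type": "markdown",
   "id": "a1f06c01",
   "metadata": {},
   "source": [
    "A quick sanity check on the schedule above (a minimal sketch; the shapes and the choice of t are illustrative): $\\bar{\\alpha}_{t}$ decays monotonically from about 1 toward 0, and because the squared signal and noise coefficients sum to 1, one-shot noising of unit-variance data keeps roughly unit variance.\n",
    "```python\n",
    "import torch\n",
    "\n",
    "torch.manual_seed(0)\n",
    "\n",
    "# noise schedule from the DDPM paper: beta from 1e-4 to 0.02 over T=1000 steps\n",
    "betas = torch.linspace(start=0.0001, end=0.02, steps=1000)\n",
    "alphas = 1 - betas\n",
    "alphas_cum = torch.cumprod(alphas, 0)\n",
    "\n",
    "# alpha_bar decays monotonically from ~1 toward ~0\n",
    "assert alphas_cum[0] > 0.999 and alphas_cum[-1] < 0.001\n",
    "assert torch.all(alphas_cum[1:] < alphas_cum[:-1])\n",
    "\n",
    "# one-shot noising of unit-variance data keeps roughly unit variance,\n",
    "# because alpha_bar_t + (1 - alpha_bar_t) = 1\n",
    "x = torch.randn(10000)   # stand-in for normalized image pixels\n",
    "t = 500\n",
    "x_t = torch.sqrt(alphas_cum[t]) * x + torch.sqrt(1 - alphas_cum[t]) * torch.randn_like(x)\n",
    "print(float(x_t.var()))  # close to 1.0\n",
    "```"
   ]
  },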
  {
   "cell_type": "markdown",
   "id": "0cae4875",
   "metadata": {},
   "source": [
    "#### 3. Reverse process -- removing noise\n",
    "1. The forward process provides samples and their noise; in the reverse process the model learns to estimate that noise\n",
    "2. Denoising: by predicting the noise $\\epsilon$, pure Gaussian noise is **gradually** restored to the original data, going from $X_{t}$ to $X_{t-1}$, one step at a time, until $X_{0}$\n",
    "3. Formula\n",
    "   1. $X_{t-1}= \\frac{1}{\\sqrt{\\alpha_{t}}} \\left( X_{t}-\\frac{1-\\alpha_{t}}{\\sqrt{1-\\overline{\\alpha}_{t}}} \\epsilon_{\\theta}(X_{t},t)\\right)+\\sigma_{t}Z$\n",
    "   2. Notation\n",
    "      1. $\\epsilon_{\\theta}$ is the noise-estimation function (usually a neural network) used to approximate the true noise; $\\theta$ are the model's trainable parameters\n",
    "      2. $\\sigma_{t}Z$ is a fresh random noise term injected at each sampling step, with $Z\\sim{N(0,I)}$\n",
    "4. The true noise added in the forward process is unavailable during denoising, so the key to DDPM is training a model $\\epsilon_{\\theta}(X_{t},t)$ that estimates the noise from the noisy data $X_{t}$ and the timestep t\n",
    "5. Implementation\n",
    "      ```python\n",
    "      betas = torch.linspace(start=0.0001, end=0.02, steps=1000)  \n",
    "      alphas = 1 - betas  \n",
    "      alphas_cum = torch.cumprod(alphas, 0)\n",
    "      alphas_cum_prev = torch.cat((torch.tensor([1.0]), alphas_cum[:-1]), 0)\n",
    "      posterior_variance = betas * (1 - alphas_cum_prev) / (1 - alphas_cum) \n",
    "      ``` \n",
    "\n"
   ]
  },
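  {
   "cell_type": "markdown",
   "id": "b2e17d02",
   "metadata": {},
   "source": [
    "The reverse formula above can be exercised in isolation (a minimal sketch; `fake_eps` and the toy shapes stand in for the trained $\\epsilon_{\\theta}$ and real images):\n",
    "```python\n",
    "import torch\n",
    "\n",
    "torch.manual_seed(0)\n",
    "T = 1000\n",
    "betas = torch.linspace(0.0001, 0.02, T)\n",
    "alphas = 1 - betas\n",
    "alphas_cum = torch.cumprod(alphas, 0)\n",
    "alphas_cum_prev = torch.cat((torch.tensor([1.0]), alphas_cum[:-1]), 0)\n",
    "posterior_variance = betas * (1 - alphas_cum_prev) / (1 - alphas_cum)\n",
    "\n",
    "def reverse_step(x_t, t, eps):\n",
    "    # mean of the reverse formula: subtract the scaled noise estimate, rescale by 1/sqrt(alpha_t)\n",
    "    mean = (x_t - (1 - alphas[t]) / torch.sqrt(1 - alphas_cum[t]) * eps) / torch.sqrt(alphas[t])\n",
    "    if t == 0:\n",
    "        return mean   # no extra noise is injected at the final step\n",
    "    return mean + torch.sqrt(posterior_variance[t]) * torch.randn_like(x_t)\n",
    "\n",
    "x_t = torch.randn(4, 1, 8, 8)      # pure Gaussian input\n",
    "fake_eps = torch.randn_like(x_t)   # stand-in for the model's noise prediction\n",
    "x_prev = reverse_step(x_t, T - 1, fake_eps)\n",
    "print(x_prev.shape)                # torch.Size([4, 1, 8, 8])\n",
    "```"
   ]
  },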
  {
   "cell_type": "markdown",
   "id": "0ddcacee",
   "metadata": {},
   "source": [
    "#### 4. The timestep t\n",
    "1. The timestep t models a perturbation that strengthens over time. Each step applies one round of noise, gradually destroying the structure of the data from its initial state. A smaller t corresponds to weaker noise, a larger t to stronger noise.\n",
    "2. The role of t: it helps both generation and sampling\n",
    "   1. The UNet in DDPM shares its parameters across all steps, so how can it produce different outputs for different inputs and finally turn pure random noise into meaningful data? Early in the reverse process the UNet should sketch rough object outlines, and as the diffusion proceeds toward a realistic image it should learn high-frequency detail. Because the parameters are shared, a time embedding is needed to tell the model which step it is at and whether a coarse or a fine output is expected.\n",
    "   2. So that the model knows where the data sits in the diffusion process, t must be encoded into a format the model can consume. As with the positional encoding in the Transformer, a combination of sine and cosine functions of t yields a rich time encoding. These encodings are fed into the model to indicate the current noise level\n",
    "3. A random timestep is drawn at every training iteration\n",
    "   1. The loss decreases during training, and its changes shrink toward the end. If t increased monotonically, the model would over-focus on early timesteps (where the loss is large) and neglect the information in later ones. And if every pass swept t from 1 to N in order, the UNet would instead start fitting t itself\n",
    "   2. Random timesteps also help the model avoid local optima during training and improve its robustness. Training with a fixed schedule tends to get stuck in local optima because the model only attends to specific patterns of change while ignoring other possible ones; random steps let it escape them."
   ]
  },
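  {
   "cell_type": "markdown",
   "id": "c3d28e03",
   "metadata": {},
   "source": [
    "The sin/cos time encoding described above can be sketched directly (a minimal sketch; `emb_size=8` and the sample timesteps are arbitrary, and the frequency formula mirrors the Transformer positional encoding):\n",
    "```python\n",
    "import math\n",
    "import torch\n",
    "\n",
    "def time_encoding(t, emb_size):\n",
    "    # frequencies decay geometrically from 1 down to 1/10000\n",
    "    half = emb_size // 2\n",
    "    freqs = torch.exp(torch.arange(half) * (-math.log(10000) / (half - 1)))\n",
    "    args = t.view(-1, 1) * freqs.view(1, -1)            # (batch, half)\n",
    "    return torch.cat((args.sin(), args.cos()), dim=-1)  # (batch, emb_size)\n",
    "\n",
    "t = torch.tensor([0, 10, 999])\n",
    "emb = time_encoding(t, 8)\n",
    "print(emb.shape)   # torch.Size([3, 8])\n",
    "# distinct timesteps get distinct encodings\n",
    "assert not torch.allclose(emb[0], emb[2])\n",
    "```"
   ]
  },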
  {
   "cell_type": "markdown",
   "id": "b458a7d3",
   "metadata": {},
   "source": [
    "#### 5. Training\n",
    "1. DDPM uses a UNet as the noise-estimation model; it takes $X_{t}$ and t as input and outputs the Gaussian noise contained in $X_{t}$, walking step by step from pure noise back to real data\n",
    "   1. The amount of noise differs at different times, so the noise depends on t, which must therefore be an input\n",
    "   2. t has to be converted into an embedding\n",
    "2. MSE can be used as the loss function\n",
    "   1. $Loss=\\left \\| \\epsilon-\\epsilon_{\\theta}(X_{t},t) \\right \\|^2=\\left \\| \\epsilon-\\epsilon_{\\theta}(\\sqrt{\\overline{\\alpha}_{t}}X_{0}+\\sqrt{1-\\overline{\\alpha}_{t}}\\epsilon,t)\\right \\|^2$\n",
    "3. Training idea\n",
    "   1. First draw a random t for every sample in the batch, i.e. choose which step-t noise each sample will fit\n",
    "   2. Feed the noisy data, t, and the network a prediction of the noise, and fit that prediction to the actual noise\n",
    "4. Training procedure\n",
    "   1. Repeat the following steps\n",
    "   2. $\\qquad X_{0}\\sim{q(X_{0})}$\n",
    "   3. $\\qquad t\\sim{Uniform(\\{1,...,T\\})}$\n",
    "   4. $\\qquad \\epsilon\\sim{N(0,I)}$\n",
    "   5. $\\qquad \\text{Take a gradient-descent step on } \\nabla_{\\theta} \\left \\| \\epsilon-\\epsilon_{\\theta}(\\sqrt{\\overline{\\alpha}_{t}}X_{0}+\\sqrt{1-\\overline{\\alpha}_{t}}\\epsilon,t)\\right \\|^2 $\n",
    "   6. Until the model converges\n",
    "5. Coding steps\n",
    "   1. Pick an image $X_{0}$ from the dataset\n",
    "   2. Generate the noise for a random timestep t and add it to the data via the diffusion formula\n",
    "   3. Convert the random t into an embedding\n",
    "   4. Feed the embedding and the noisy data into the model, output the predicted noise, and run gradient descent to update the model\n",
    "   5. Repeat until the model converges"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "73906f68",
   "metadata": {},
   "source": [
    "#### 6. Inference -- generating data\n",
    "1. Training only needs to fit $\\epsilon_{\\theta}(X_{t},t)$; at inference, data is generated with the **reverse-process formula**\n",
    "2. Using the trained model, data is generated from pure Gaussian noise step by step via the **reverse-process formula**\n",
    "3. Sampling procedure\n",
    "   1. $X_{T}\\sim{N(0,I)}$\n",
    "   2. `for t = T,...,1 do`\n",
    "   3. $\\qquad z\\sim{N(0,I)}$ `if t>1, else z=0`\n",
    "   4. $\\qquad X_{t-1}=\\frac{1}{\\sqrt{\\alpha_{t}}}\\left( X_{t}-\\frac{1-\\alpha_{t}}{\\sqrt{1-\\overline{\\alpha}_{t}}} \\epsilon_{\\theta}(X_{t},t)\\right)+\\sigma_{t}z$\n",
    "   5. end for\n",
    "   6. return $X_{0}$\n",
    "4. Coding steps\n",
    "   1. Feed the pure Gaussian noise $X_{t}$ and the embedding of the corresponding t into the model to predict the noise\n",
    "   2. Compute $X_{t-1}$ from the formula\n",
    "   3. Repeat the two steps above until $X_{0}$ is produced"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5a92799b",
   "metadata": {},
   "source": [
    "### Programming implementation\n",
    "1. The data comes from an image dataset (MNIST below)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4e293dbd",
   "metadata": {},
   "source": [
    "#### Configuration"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "54da83f0",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "import math \n",
    "from torch.utils.data import DataLoader\n",
    "import os \n",
    "import torchvision\n",
    "from torchvision import transforms \n",
    "import matplotlib.pyplot as plt "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "797292ff",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Configuration\n",
    "IMG_SIZE=48   # image size\n",
    "T=1000   # max number of noising steps\n",
    "DEVICE = \"cuda\" if torch.cuda.is_available() else \"cpu\" # training device"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "781ca1fb",
   "metadata": {},
   "source": [
    "#### Dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b993bf4f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Dataset\n",
    "\n",
    "# PIL image -> tensor\n",
    "pil_2_tensor = transforms.Compose([\n",
    "    transforms.Resize((IMG_SIZE,IMG_SIZE)), # resize all PIL images to the same size\n",
    "    transforms.ToTensor()                   # PIL image to tensor, (H,W,C) -> (C,H,W), pixel values in [0,1]\n",
    "])\n",
    "\n",
    "# tensor -> PIL image\n",
    "tensor_2_pil = transforms.Compose([\n",
    "    transforms.Lambda(lambda t : t*255),   # restore the pixel range\n",
    "    transforms.Lambda(lambda t: t.type(torch.uint8)),    # round pixel values to integers\n",
    "    transforms.ToPILImage()                # tensor back to PIL image, (C,H,W) -> (H,W,C)\n",
    "])\n",
    "\n",
    "train_dataset = torchvision.datasets.MNIST('.',  train=True,download=True,transform=pil_2_tensor)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "df18dbe3",
   "metadata": {},
   "source": [
    "#### Time embedding"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3b09d3be",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Positional embedding for the timestep\n",
    "# half sin, half cos\n",
    "class TimePositionEmbedding(nn.Module):\n",
    "    def __init__(self, emb_size):\n",
    "        super().__init__()\n",
    "        \n",
    "        self.half_emb_size = emb_size // 2\n",
    "        half_emb = torch.exp(torch.arange(self.half_emb_size)*(-1*math.log(10000)/(self.half_emb_size-1)))\n",
    "        self.register_buffer('half_emb',half_emb)\n",
    "        \n",
    "    def forward(self,t):\n",
    "        t = t.view(t.shape[0],1)\n",
    "        half_emb = self.half_emb.unsqueeze(0).expand(t.shape[0],self.half_emb_size)\n",
    "        half_emb_t = half_emb * t\n",
    "        return torch.cat((half_emb_t.sin(),half_emb_t.cos()),-1)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "281c4fd2",
   "metadata": {},
   "source": [
    "#### Forward noising"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cfbf5610",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Forward noising\n",
    "\n",
    "# diffusion schedule parameters\n",
    "betas = torch.linspace(0.0001,0.02,T)\n",
    "alphas = 1 - betas\n",
    "alphas_cumprod = torch.cumprod(alphas,-1) # cumulative product of alpha_t, shape (T,)\n",
    "alphas_cumprod_prev = torch.cat((torch.tensor([1.0]),alphas_cumprod[:-1]),-1)\n",
    "variance = (1-alphas) * (1-alphas_cumprod_prev) / (1-alphas_cumprod)\n",
    "\n",
    "def forward_diffusion(x, t): # (batch,channel,width,height), (batch_size,)\n",
    "    # generate step-t Gaussian noise for each image   (batch,channel,width,height)\n",
    "    noise_t = torch.randn_like(x)\n",
    "    alphas_cumprod_t = alphas_cumprod.to(DEVICE)[t].view(x.shape[0],1,1,1)\n",
    "    # produce the step-t noisy image directly from the closed-form formula\n",
    "    x = torch.sqrt(alphas_cumprod_t) * x + \\\n",
    "        torch.sqrt(1-alphas_cumprod_t) * noise_t\n",
    "    return x, noise_t"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2c165c1f",
   "metadata": {},
   "source": [
    "#### UNet model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4b5afe03",
   "metadata": {},
   "outputs": [],
   "source": [
    "# UNet model\n",
    "\n",
    "class ConvBlock(nn.Module):\n",
    "    def __init__(self, in_channels,out_channels,time_emb_size):\n",
    "        super().__init__()\n",
    "        self.seq1 = nn.Sequential(\n",
    "            nn.Conv2d(in_channels,out_channels,3,1,1),   # change channels, keep spatial size\n",
    "            nn.BatchNorm2d(out_channels),\n",
    "            nn.ReLU()\n",
    "        )\n",
    "        \n",
    "        self.time_emb_linear = nn.Linear(time_emb_size, out_channels) # project the time emb to channel width and add it to every pixel\n",
    "        self.relu = nn.ReLU()\n",
    "        \n",
    "        self.seq2 = nn.Sequential(\n",
    "            nn.Conv2d(out_channels,out_channels,3,1,1),   # keep channels, keep spatial size\n",
    "            nn.BatchNorm2d(out_channels),\n",
    "            nn.ReLU()\n",
    "        )\n",
    "        \n",
    "    def forward(self,x,t_emb):\n",
    "        x = self.seq1(x)\n",
    "        t_emb = self.relu(self.time_emb_linear(t_emb)).view(x.size(0),x.size(1),1,1)  # (batch_size,out_channel,1,1) \n",
    "        return self.seq2(x + t_emb)\n",
    "\n",
    "class UNet(nn.Module):\n",
    "    def __init__(self,img_channel,channels=[64, 128, 256, 512, 1024],time_emb_size=256):\n",
    "        super().__init__()\n",
    "        channels=[img_channel]+channels\n",
    "        \n",
    "        self.time_emb = nn.Sequential(\n",
    "            TimePositionEmbedding(time_emb_size),\n",
    "            nn.Linear(time_emb_size,time_emb_size),\n",
    "            nn.ReLU()\n",
    "        )\n",
    "\n",
    "        # each encoder conv block doubles the channel count\n",
    "        self.enc_convs = nn.ModuleList([ConvBlock(channels[i],channels[i+1],time_emb_size)  for i in range(len(channels)-1) ])\n",
    "       \n",
    "        self.maxpools = nn.ModuleList()\n",
    "        self.deconvs = nn.ModuleList()\n",
    "        self.dec_convs = nn.ModuleList()\n",
    "        for i in range(len(channels)-2):\n",
    "            # halve the spatial size right after each encoder conv, except the last one\n",
    "            self.maxpools.append(nn.MaxPool2d(2,2,0))\n",
    "            # before each decoder conv, double the spatial size and halve the channels\n",
    "            self.deconvs.append(nn.ConvTranspose2d(channels[-i-1],channels[-i-2],2,2))\n",
    "            # each decoder conv block halves the channel count\n",
    "            self.dec_convs.append(ConvBlock(channels[-i-1],channels[-i-2],time_emb_size))\n",
    "        \n",
    "        # restore the channel count, keep the size\n",
    "        self.output = nn.Conv2d(channels[1],img_channel,1,1,0)\n",
    "    \n",
    "    def forward(self,x,t):\n",
    "        t_emb = self.time_emb(t)\n",
    "        \n",
    "        # encoder\n",
    "        residual = []\n",
    "        for i, conv in enumerate(self.enc_convs):\n",
    "            x = conv(x,t_emb)\n",
    "            if i != len(self.enc_convs)-1:\n",
    "                residual.append(x)\n",
    "                x = self.maxpools[i](x)\n",
    "        \n",
    "        # decoder\n",
    "        for i, deconv in enumerate(self.deconvs):\n",
    "            x = self.dec_convs[i](\n",
    "                torch.cat((residual.pop(-1),deconv(x)),1),\n",
    "                t_emb\n",
    "            ) # skip connections concatenated along the channel dim\n",
    "            \n",
    "        return self.output(x)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f3aff2d4",
   "metadata": {},
   "source": [
    "#### Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d40d3888",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Training\n",
    "\n",
    "EPOCH=300\n",
    "BATCH_SIZE=400\n",
    "dataloader=DataLoader(train_dataset,batch_size=BATCH_SIZE,num_workers=4,persistent_workers=True,shuffle=True)   # data loader\n",
    "\n",
    "lr = 0.0001\n",
    "\n",
    "try:\n",
    "    model=torch.load('model2.pt')\n",
    "except FileNotFoundError:\n",
    "    model=UNet(1).to(DEVICE)   # noise-prediction model\n",
    "\n",
    "\n",
    "optimizer = torch.optim.Adam(model.parameters(),lr)\n",
    "loss = nn.L1Loss()\n",
    "\n",
    "for epoch in range(EPOCH):\n",
    "    last_loss=0\n",
    "    for batch_x,batch_cls in dataloader:\n",
    "        # map pixel values to [-1,1] to match the Gaussian distribution\n",
    "        batch_x=batch_x.to(DEVICE)*2-1\n",
    "        # draw a random timestep t for each image\n",
    "        batch_t=torch.randint(0,T,(batch_x.size(0),)).to(DEVICE)\n",
    "        # build the step-t noisy images and their noise\n",
    "        batch_x_t,batch_noise_t=forward_diffusion(batch_x,batch_t)\n",
    "        # predict the step-t noise\n",
    "        batch_predict_t=model(batch_x_t,batch_t)\n",
    "        # compute the loss\n",
    "        l=loss(batch_predict_t,batch_noise_t)\n",
    "        # update the parameters\n",
    "        optimizer.zero_grad()\n",
    "        l.backward()\n",
    "        optimizer.step()\n",
    "        last_loss=l.item()\n",
    "\n",
    "    print('epoch:{} loss={}'.format(epoch,last_loss))\n",
    "    torch.save(model,'model2.pt.tmp')\n",
    "    os.replace('model2.pt.tmp','model2.pt')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2d5df660",
   "metadata": {},
   "source": [
    "#### Denoising"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "570e2de0",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Denoising\n",
    "def backward_denoise(model,x_t):\n",
    "    \n",
    "    steps = [x_t]\n",
    "    \n",
    "    global alphas, alphas_cumprod, variance\n",
    "    \n",
    "    model = model.to(DEVICE)\n",
    "    x_t = x_t.to(DEVICE)\n",
    "    alphas = alphas.to(DEVICE)\n",
    "    alphas_cumprod = alphas_cumprod.to(DEVICE)\n",
    "    variance = variance.to(DEVICE)\n",
    "    # because of the BN layers, eval mode is needed so inference does not follow the batch statistics; during training, by contrast, the model should see as many batch distributions as possible\n",
    "    model.eval()\n",
    "    with torch.no_grad():\n",
    "        for t in range(T-1,-1,-1):\n",
    "            batch_t = torch.full((x_t.size(0),),t).to(DEVICE) # [999,999,....]\n",
    "            # predict the noise at step t\n",
    "            noise_t = model(x_t,batch_t)\n",
    "            # build the step t-1 image\n",
    "            shape = (x_t.size(0),1,1,1)\n",
    "            mean_t = 1 / torch.sqrt(alphas[batch_t].view(*shape)) * \\\n",
    "                (\n",
    "                    x_t - \n",
    "                    (1-alphas[batch_t].view(*shape))/torch.sqrt(1-alphas_cumprod[batch_t].view(*shape))* noise_t                \n",
    "                )\n",
    "            if t != 0:\n",
    "                x_t = mean_t + torch.randn_like(x_t) * \\\n",
    "                    torch.sqrt(variance[batch_t].view(*shape))\n",
    "            else:\n",
    "                x_t = mean_t\n",
    "            \n",
    "            x_t = torch.clamp(x_t,-1.0,1.0).detach()\n",
    "            steps.append(x_t)\n",
    "    return steps"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "abb2b4f8",
   "metadata": {},
   "source": [
    "#### Generating images"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bfd3bf3f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inference -- generating images\n",
    "\n",
    "# load the model\n",
    "model=torch.load('model2.pt')\n",
    "\n",
    "# generate the noise images\n",
    "batch_size=10\n",
    "batch_x_t=torch.randn(size=(batch_size,1,IMG_SIZE,IMG_SIZE))  # (10,1,48,48)\n",
    "# denoise step by step to recover the images\n",
    "steps=backward_denoise(model,batch_x_t)\n",
    "# number of snapshots to draw\n",
    "num_imgs=20\n",
    "# plot the restoration process\n",
    "plt.figure(figsize=(15,15))\n",
    "for b in range(batch_size):\n",
    "    for i in range(0,num_imgs):\n",
    "        idx=int(T/num_imgs)*(i+1)\n",
    "        # map pixel values back to [0,1]\n",
    "        final_img=(steps[idx][b].to('cpu')+1)/2\n",
    "        # tensor back to PIL image\n",
    "        final_img=tensor_2_pil(final_img)\n",
    "        plt.subplot(batch_size,num_imgs,b*num_imgs+i+1)\n",
    "        plt.imshow(final_img)\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "725552f9",
   "metadata": {},
   "source": [
    "\n",
    "## Advanced\n",
    "\n",
    "1. SD (Stable Diffusion) builds on the diffusion model to generate controlled (specified) images, i.e. text-to-image and image-to-image, and also supports control methods such as ControlNet to customize the generated image\n",
    "2. It uses the DDIM sampler and runs the diffusion in a latent space\n",
    "3. It consists of four main parts:\n",
    "   1. the Sampler;\n",
    "   2. the Variational Autoencoder (VAE);\n",
    "   3. the UNet backbone, i.e. the noise predictor;\n",
    "   4. the CLIP text encoder (CLIPEmbedder)"
   ]
  },
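  {
   "cell_type": "markdown",
   "id": "d4e39f04",
   "metadata": {},
   "source": [
    "The four components cooperate roughly as below. This is a hedged structural sketch only: every class here is a hypothetical placeholder, not the real Stable Diffusion code.\n",
    "```python\n",
    "# Hypothetical skeleton of how the four SD parts cooperate at inference time.\n",
    "class TextEncoder:    # CLIP: prompt -> text embedding\n",
    "    def encode(self, prompt): return [0.0] * 4\n",
    "\n",
    "class UNetDenoiser:   # noise predictor conditioned on the text embedding and t\n",
    "    def predict_noise(self, latent, t, text_emb): return [v * 0.0 for v in latent]\n",
    "\n",
    "class Sampler:        # e.g. DDIM: applies the reverse update rule\n",
    "    def step(self, latent, noise, t): return [l - n for l, n in zip(latent, noise)]\n",
    "\n",
    "class VAE:            # decodes latents back to pixel space\n",
    "    def decode(self, latent): return latent\n",
    "\n",
    "def generate(prompt, steps=3):\n",
    "    text_emb = TextEncoder().encode(prompt)\n",
    "    latent = [1.0, 1.0, 1.0, 1.0]      # stands in for random latent noise\n",
    "    unet, sampler = UNetDenoiser(), Sampler()\n",
    "    for t in range(steps, 0, -1):      # iterative denoising in latent space\n",
    "        noise = unet.predict_noise(latent, t, text_emb)\n",
    "        latent = sampler.step(latent, noise, t)\n",
    "    return VAE().decode(latent)        # only the VAE touches pixel space\n",
    "\n",
    "img = generate('a cat')\n",
    "print(img)   # [1.0, 1.0, 1.0, 1.0] with these placeholder components\n",
    "```"
   ]
  },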
  {
   "cell_type": "markdown",
   "id": "ffb26efe",
   "metadata": {},
   "source": [
    "### 1. Text-to-image\n",
    "1. Turning text into a Diffusion input: a Text Encoder (the CLIP model) produces the text embedding, which is fed into the Diffusion model together with the random-noise embedding and t\n",
    "2. Using the text embedding inside the UNet: an Attention layer is inserted between the ResNet blocks of the UNet, with the text embedding feeding one side of the attention"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "527aff70",
   "metadata": {},
   "source": [
    "#### Attention"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b4f4abba",
   "metadata": {},
   "outputs": [],
   "source": [
    "class CrossAttention(nn.Module):\n",
    "    def __init__(self, channels, qsize, vsize, fsize, cls_emb_size):\n",
    "        super().__init__()\n",
    "        self.w_q = nn.Linear(channels, qsize)\n",
    "        self.w_k = nn.Linear(cls_emb_size, qsize)\n",
    "        self.w_v = nn.Linear(cls_emb_size, vsize)\n",
    "        self.softmax = nn.Softmax(dim = -1)\n",
    "        self.z_linear = nn.Linear(vsize, channels)\n",
    "        self.norm1 = nn.LayerNorm(channels)\n",
    "        \n",
    "        self.feedforward = nn.Sequential(\n",
    "            nn.Linear(channels, fsize),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(fsize, channels)\n",
    "        )\n",
    "        self.norm2 = nn.LayerNorm(channels)\n",
    "\n",
    "    def forward(self, x, cls_emb): # cls_emb:(batch_size,cls_emb_size)\n",
    "        # (batch_size,channel,width,height) => (batch_size,width,height,channel)\n",
    "        x = x.permute(0,2,3,1)\n",
    "\n",
    "        # pixels are the Query  (batch_size,width,height,qsize)\n",
    "        Q = self.w_q(x)\n",
    "        Q = Q.view(Q.shape[0], Q.shape[1]*Q.shape[2], Q.shape[3])\n",
    "\n",
    "        # the guiding class provides the Key and the Value\n",
    "        K = self.w_k(cls_emb)\n",
    "        K = K.view(K.shape[0], K.shape[1], 1)\n",
    "\n",
    "        V = self.w_v(cls_emb)\n",
    "        V = V.view(V.shape[0],1, V.shape[1])\n",
    "        \n",
    "        # attention score matrix Q*K\n",
    "        # attn: (batch_size,width*height,1)\n",
    "\n",
    "        attn = self.softmax(torch.matmul(Q, K) / math.sqrt(Q.shape[2]))\n",
    "        \n",
    "        # output of the attention layer\n",
    "        Z = self.z_linear(torch.matmul(attn, V))\n",
    "        Z = Z.view(*x.shape)\n",
    "\n",
    "        Z = self.norm1(Z + x)\n",
    "        out = self.feedforward(Z)\n",
    "        out = self.norm2(out + Z).permute(0,3,1,2)\n",
    "\n",
    "        return out"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "69d6920c",
   "metadata": {},
   "source": [
    "#### UNet model with class guidance"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3d01578f",
   "metadata": {},
   "outputs": [],
   "source": [
    "class ConvBlockCls(ConvBlock):\n",
    "    def __init__(self, in_channels, out_channels, time_emb_size,qsize,vsize,fsize,cls_emb_size):\n",
    "        super().__init__(in_channels, out_channels, time_emb_size)\n",
    "        # pixels act as the Query attending to the class ID, injecting class information without changing the image shape or channel count\n",
    "        self.crossattn = CrossAttention(out_channels,qsize,vsize,fsize,cls_emb_size)\n",
    "\n",
    "    def forward(self, x, t_emb, cls_emb):\n",
    "        x = super().forward(x, t_emb)\n",
    "        out = self.crossattn(x, cls_emb)\n",
    "        return out\n",
    "\n",
    "class UNetCls(UNet):\n",
    "    def __init__(self, img_channel, channels=[64, 128, 256, 512, 1024], time_emb_size=256,qsize=16,vsize=16,fsize=32,cls_emb_size=32):\n",
    "        super().__init__(img_channel, channels, time_emb_size)\n",
    "        channels = [img_channel] + channels\n",
    "        # turn the guiding class id into an embedding\n",
    "        self.cls_emb=nn.Embedding(10,cls_emb_size)\n",
    "        self.enc_convs = nn.ModuleList([\n",
    "            ConvBlockCls(channels[i],\n",
    "                         channels[i+1],\n",
    "                         time_emb_size,\n",
    "                         qsize,\n",
    "                         vsize,\n",
    "                         fsize,\n",
    "                         cls_emb_size\n",
    "                        ) \n",
    "            for i in range(len(channels)-1)\n",
    "        ])\n",
    "        \n",
    "        self.dec_convs = nn.ModuleList([\n",
    "            ConvBlockCls(channels[-i-1],\n",
    "                         channels[-i-2],\n",
    "                         time_emb_size,\n",
    "                         qsize,\n",
    "                         vsize,\n",
    "                         fsize,\n",
    "                         cls_emb_size\n",
    "                        ) \n",
    "            for i in range(len(channels)-1)\n",
    "        ])\n",
    "        \n",
    "        \n",
    "    def forward(self, x, t,cls): # cls is the guiding condition (the image class ID)\n",
    "        \n",
    "        # time embedding\n",
    "        t_emb=self.time_emb(t)\n",
    "        # class embedding\n",
    "        cls_emb=self.cls_emb(cls)\n",
    "        \n",
    "        # encoder\n",
    "        residual=[]\n",
    "        for i,conv in enumerate(self.enc_convs):\n",
    "            x = conv(x,t_emb,cls_emb)\n",
    "            if i != len(self.enc_convs)-1:\n",
    "                residual.append(x)\n",
    "                x = self.maxpools[i](x)\n",
    "            \n",
    "        # decoder\n",
    "        for i,deconv in enumerate(self.deconvs):\n",
    "            x = deconv(x)\n",
    "            residual_x = residual.pop(-1)\n",
    "            x = self.dec_convs[i](torch.cat((residual_x,x),dim=1),t_emb,cls_emb)    # skip connections concatenated along the channel dim\n",
    "        return self.output(x) # restore the channel count"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "35c0e3f4",
   "metadata": {},
   "source": [
    "#### Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ff699ae5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Training\n",
    "\n",
    "try:\n",
    "    model=torch.load('model-cls.pt')\n",
    "except FileNotFoundError:\n",
    "    model=UNetCls(1).to(DEVICE)   # noise-prediction model\n",
    "\n",
    "optimizer = torch.optim.Adam(model.parameters(),lr)   # the optimizer must track this model's parameters\n",
    "loss = nn.L1Loss()\n",
    "\n",
    "model.train()\n",
    "n_iter=0\n",
    "for epoch in range(EPOCH):\n",
    "    last_loss=0\n",
    "    for batch_x,batch_cls in dataloader:\n",
    "        # map pixel values to [-1,1] to match the Gaussian distribution\n",
    "        batch_x=batch_x.to(DEVICE)*2-1\n",
    "        # guiding class IDs\n",
    "        batch_cls=batch_cls.to(DEVICE)\n",
    "        # draw a random timestep t for each image\n",
    "        batch_t=torch.randint(0,T,(batch_x.size(0),)).to(DEVICE)\n",
    "        # build the step-t noisy images and their noise\n",
    "        batch_x_t,batch_noise_t=forward_diffusion(batch_x,batch_t)\n",
    "        # predict the step-t noise\n",
    "        batch_predict_t=model(batch_x_t,batch_t,batch_cls)\n",
    "        # compute the loss\n",
    "        l=loss(batch_predict_t,batch_noise_t)\n",
    "        # update the parameters\n",
    "        optimizer.zero_grad()\n",
    "        l.backward()\n",
    "        optimizer.step()\n",
    "        last_loss=l.item()\n",
    "\n",
    "    print('epoch:{} loss={}'.format(epoch,last_loss))\n",
    "    torch.save(model,'model-cls.pt.tmp')\n",
    "    os.replace('model-cls.pt.tmp','model-cls.pt')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "982f6cf7",
   "metadata": {},
   "source": [
    "### 2. LoRA fine-tuning\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0f30191e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# LoRA\n",
    "\n",
    "LORA_ALPHA=1    # the LoRA alpha weight\n",
    "LORA_R=8    # the LoRA rank\n",
    "\n",
    "class LoraLayer(nn.Module):\n",
    "    def __init__(self, raw_linear, in_features, out_features, r, alpha):\n",
    "        super().__init__()\n",
    "\n",
    "        self.r = r\n",
    "        self.alpha = alpha\n",
    "        self.lora_a = nn.Parameter(torch.empty((in_features, r)))\n",
    "        self.lora_b = nn.Parameter(torch.zeros((r, out_features)))\n",
    "\n",
    "        nn.init.kaiming_uniform_(self.lora_a, a=math.sqrt(5))\n",
    "\n",
    "        self.raw_linear = raw_linear\n",
    "\n",
    "    def forward(self, x): # x:(batch_size,in_features)\n",
    "        raw_out = self.raw_linear(x)\n",
    "        lora = torch.matmul(self.lora_a, self.lora_b) * self.alpha / self.r\n",
    "        lora_out = torch.matmul(x, lora)\n",
    "        return raw_out + lora_out \n",
    "\n",
    "def inject_lora(model, name, layer):\n",
    "    name_cols = name.split('.')\n",
    "\n",
    "    # walk down to the module that owns this linear layer\n",
    "    children = name_cols[:-1]\n",
    "    cur_layer = model\n",
    "    for child in children:\n",
    "        cur_layer = getattr(cur_layer, child)\n",
    "\n",
    "    # wrap the original linear layer in a LoRA layer\n",
    "    lora_layer = LoraLayer(layer, layer.in_features, layer.out_features, LORA_R, LORA_ALPHA)\n",
    "    setattr(cur_layer, name_cols[-1], lora_layer)\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7cb3664e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# LoRA fine-tuning\n",
    "from torch.utils.tensorboard import SummaryWriter\n",
    "\n",
    "EPOCH=200\n",
    "BATCH_SIZE=400\n",
    "\n",
    "if __name__=='__main__':\n",
    "    # the class-guided model (the only one with cross-attention linears)\n",
    "    model=torch.load('model-cls.pt')\n",
    "\n",
    "    # inject LoRA into the nn.Linear layers\n",
    "    for name,layer in model.named_modules():\n",
    "        name_cols=name.split('.')\n",
    "        # keep only the linear weights used by the cross attention\n",
    "        filter_names=['w_q','w_k','w_v']\n",
    "        if any(n in name_cols for n in filter_names) and isinstance(layer,nn.Linear):\n",
    "            inject_lora(model,name,layer)\n",
    "    \n",
    "    # load existing LoRA weights, if any\n",
    "    try:\n",
    "        restore_lora_state=torch.load('lora.pt')\n",
    "        model.load_state_dict(restore_lora_state,strict=False)\n",
    "    except FileNotFoundError:\n",
    "        pass \n",
    "\n",
    "    model=model.to(DEVICE)\n",
    "\n",
    "    # freeze all non-LoRA parameters\n",
    "    for name,param in model.named_parameters():\n",
    "        if name.split('.')[-1] not in ['lora_a','lora_b']:  # no gradients outside the LoRA part\n",
    "            param.requires_grad=False\n",
    "        else:\n",
    "            param.requires_grad=True\n",
    "\n",
    "    dataloader=DataLoader(train_dataset,batch_size=BATCH_SIZE,num_workers=4,persistent_workers=True,shuffle=True)   # data loader\n",
    "\n",
    "    optimizer=torch.optim.Adam(filter(lambda x: x.requires_grad,model.parameters()),lr=0.001) # the optimizer only updates the LoRA parameters\n",
    "    loss_fn=nn.L1Loss() # loss function (mean absolute error)\n",
    "\n",
    "    print(model)\n",
    "\n",
    "    writer = SummaryWriter()\n",
    "    model.train()\n",
    "    n_iter=0\n",
    "    for epoch in range(EPOCH):\n",
    "        last_loss=0\n",
    "        for batch_x,batch_cls in dataloader:\n",
    "            # map pixel values to [-1,1] to match the Gaussian distribution\n",
    "            batch_x=batch_x.to(DEVICE)*2-1\n",
    "            # guiding class IDs\n",
    "            batch_cls=batch_cls.to(DEVICE)\n",
    "            # draw a random timestep t for each image\n",
    "            batch_t=torch.randint(0,T,(batch_x.size(0),)).to(DEVICE)\n",
    "            # build the step-t noisy images and their noise\n",
    "            batch_x_t,batch_noise_t=forward_diffusion(batch_x,batch_t)\n",
    "            # predict the step-t noise\n",
    "            batch_predict_t=model(batch_x_t,batch_t,batch_cls)\n",
    "            # compute the loss\n",
    "            loss=loss_fn(batch_predict_t,batch_noise_t)\n",
    "            # update the parameters\n",
    "            optimizer.zero_grad()\n",
    "            loss.backward()\n",
    "            optimizer.step()\n",
    "            last_loss=loss.item()\n",
    "            writer.add_scalar('Loss/train', last_loss, n_iter)\n",
    "            n_iter+=1\n",
    "        print('epoch:{} loss={}'.format(epoch,last_loss))\n",
    "\n",
    "        # save the trained LoRA weights\n",
    "        lora_state={}\n",
    "        for name,param in model.named_parameters():\n",
    "            name_cols=name.split('.')\n",
    "            filter_names=['lora_a','lora_b']\n",
    "            if any(n==name_cols[-1] for n in filter_names):\n",
    "                lora_state[name]=param\n",
    "        torch.save(lora_state,'lora.pt.tmp')\n",
    "        os.replace('lora.pt.tmp','lora.pt')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4d4865fc",
   "metadata": {},
   "source": [
    "## Advanced: LoRA fine-tuning\n",
    "1. SD is pretrained on the very large LAION-5B dataset, so direct fine-tuning is expensive; several lightweight alternatives exist (LoRA, Textual Inversion)"
   ]
  },
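  {
   "cell_type": "markdown",
   "id": "e5f40a05",
   "metadata": {},
   "source": [
    "The saving can be quantified (a sketch; the rank r=8 matches LORA_R in the code above, and the 1024-dimensional layer size is just an example):\n",
    "```latex\n",
    "W' = W + \\frac{\\alpha}{r} A B, \\qquad A \\in \\mathbb{R}^{d \\times r},\\; B \\in \\mathbb{R}^{r \\times k}\n",
    "% trainable parameters: r(d+k) instead of dk\n",
    "% e.g. d = k = 1024, r = 8: 8(1024+1024) = 16384 \\text{ vs } 1024 \\cdot 1024 = 1048576 \\text{ (about 1.6\\%)}\n",
    "```"
   ]
  },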
  {
   "cell_type": "markdown",
   "id": "760bd775",
   "metadata": {},
   "source": [
    "## Advanced: DiT -- Diffusion Transformers"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "pytorch",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.21"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
