Upload 7 files

- README.md +339 -0
- gitattributes.txt +39 -0
- gitignore.txt +17 -0
- inference.py +86 -0
- requirements.txt +19 -0
- train_stage1.py +249 -0
- train_stage2.py +207 -0
README.md
ADDED
---
license: apache-2.0
tags:
- Image-to-Image
---

## DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior

[Paper](https://arxiv.org/abs/2308.15070) | [Project Page](https://0x3f3f3f3fun.github.io/projects/diffbir/)

[Open in OpenXLab](https://openxlab.org.cn/apps/detail/linxinqi/DiffBIR-official) | [Open in Colab](https://colab.research.google.com/github/camenduru/DiffBIR-colab/blob/main/DiffBIR_colab.ipynb)

[Xinqi Lin](https://0x3f3f3f3fun.github.io/)<sup>1,\*</sup>, [Jingwen He](https://github.com/hejingwenhejingwen)<sup>2,\*</sup>, [Ziyan Chen](https://orcid.org/0000-0001-6277-5635)<sup>2</sup>, [Zhaoyang Lyu](https://scholar.google.com.tw/citations?user=gkXFhbwAAAAJ&hl=en)<sup>2</sup>, [Ben Fei](https://scholar.google.com/citations?user=skQROj8AAAAJ&hl=zh-CN&oi=ao)<sup>2</sup>, [Bo Dai](http://daibo.info/)<sup>2</sup>, [Wanli Ouyang](https://wlouyang.github.io/)<sup>2</sup>, [Yu Qiao](http://mmlab.siat.ac.cn/yuqiao)<sup>2</sup>, [Chao Dong](http://xpixel.group/2010/01/20/chaodong.html)<sup>1,2</sup>

<sup>1</sup>Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences<br><sup>2</sup>Shanghai AI Laboratory



:star: If DiffBIR is helpful for you, please help star this repo. Thanks! :hugs:

## Table Of Contents

- [Visual Results On Real-world Images](#visual_results)
- [Update](#update)
- [TODO](#todo)
- [Installation](#installation)
- [Pretrained Models](#pretrained_models)
- [Quick Start (gradio demo)](#quick_start)
- [Inference](#inference)
- [Train](#train)
## <a name="visual_results"></a>Visual Results On Real-world Images

<!-- <details close>
<summary>General Image Restoration</summary> -->
### General Image Restoration



<!-- <summary>Face Image Restoration</summary> -->
### Face Image Restoration



Both the face and the background are enhanced by DiffBIR.

<!-- </details> -->

## <a name="update"></a>Update

- **2023.09.19**: ✅ Add support for Apple Silicon! Check [installation_xOS.md](assets/docs/installation_xOS.md) to work with **CPU/CUDA/MPS** devices!
- **2023.09.14**: ✅ Integrate a patch-based sampling strategy ([mixture-of-diffusers](https://github.com/albarji/mixture-of-diffusers)). [**Try it!**](#general_image_inference) Here is an [example](https://imgsli.com/MjA2MDA1) with a resolution of 2396 x 1596. GPU memory usage will continue to be optimized, and we look forward to your pull requests!
- **2023.09.14**: ✅ Add support for background upsamplers (DiffBIR/[RealESRGAN](https://github.com/xinntao/Real-ESRGAN)) in face enhancement! :rocket: [**Try it!**](#unaligned_face_inference)
- **2023.09.13**: :rocket: Provide an online demo (DiffBIR-official) on [OpenXLab](https://openxlab.org.cn/apps/detail/linxinqi/DiffBIR-official), which integrates both the general model and the face model. Please have a try! [camenduru](https://github.com/camenduru) has also implemented an online demo; thanks for his work. :hugs:
- **2023.09.12**: ✅ Upload inference code for latent image guidance and release the [real47](inputs/real47) testset.
- **2023.09.08**: ✅ Add support for restoring unaligned faces.
- **2023.09.06**: :rocket: Update the [colab demo](https://colab.research.google.com/github/camenduru/DiffBIR-colab/blob/main/DiffBIR_colab.ipynb). Thanks to [camenduru](https://github.com/camenduru)! :hugs:
- **2023.08.30**: This repo is released.
<!-- - [**History Updates** >]() -->

## <a name="todo"></a>TODO

- [x] Release code and pretrained models :computer:.
- [x] Update links to paper and project page :link:.
- [x] Release real47 testset :minidisc:.
- [ ] Provide webui and reduce the memory usage of DiffBIR :fire::fire::fire:.
- [ ] Provide HuggingFace demo :notebook::fire::fire::fire:.
- [x] Add a patch-based sampling schedule :mag:.
- [x] Upload inference code for latent image guidance :page_facing_up:.
- [ ] Improve the performance :superhero:.
- [x] Support MPS acceleration for macOS users.

## <a name="installation"></a>Installation
<!-- - **Python** >= 3.9
- **CUDA** >= 11.3
- **PyTorch** >= 1.12.1
- **xformers** == 0.0.16 -->

```shell
# clone this repo
git clone https://github.com/XPixelGroup/DiffBIR.git
cd DiffBIR

# create an environment with python >= 3.9
conda create -n diffbir python=3.9
conda activate diffbir
pip install -r requirements.txt
```

Note that these installation steps are only for **Linux**. If you are working on a different platform, please check [xOS Installation](assets/docs/installation_xOS.md).

<!-- ```shell
# clone this repo
git clone https://github.com/XPixelGroup/DiffBIR.git
cd DiffBIR

# create a conda environment with python >= 3.9
conda create -n diffbir python=3.9
conda activate diffbir

conda install pytorch==1.12.1 torchvision==0.13.1 cudatoolkit=11.3 -c pytorch
conda install xformers==0.0.16 -c xformers

# other dependencies
pip install -r requirements.txt
``` -->

## <a name="pretrained_models"></a>Pretrained Models

| Model Name | Description | HuggingFace | BaiduNetdisk | OpenXLab |
| :--------- | :---------- | :---------- | :----------- | :------- |
| general_swinir_v1.ckpt | Stage1 model (SwinIR) for general image restoration. | [download](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_swinir_v1.ckpt) | [download](https://pan.baidu.com/s/1uvSvJgcoL_Knj0h22-9TvA?pwd=v3v6) (pwd: v3v6) | [download](https://download.openxlab.org.cn/models/linxinqi/DiffBIR/weight//diffbir_general_swinir_v1) |
| general_full_v1.ckpt | Full model for general image restoration. "Full" means it contains both the stage1 and stage2 models. | [download](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_full_v1.ckpt) | [download](https://pan.baidu.com/s/1gLvW1nvkJStdVAKROqaYaA?pwd=86zi) (pwd: 86zi) | [download](https://download.openxlab.org.cn/models/linxinqi/DiffBIR/weight//diffbir_general_full_v1) |
| face_swinir_v1.ckpt | Stage1 model (SwinIR) for face restoration. | [download](https://huggingface.co/lxq007/DiffBIR/resolve/main/face_swinir_v1.ckpt) | [download](https://pan.baidu.com/s/1cnBBC8437BJiM3q6suaK8g?pwd=xk5u) (pwd: xk5u) | [download](https://download.openxlab.org.cn/models/linxinqi/DiffBIR/weight//diffbir_face_swinir_v1) |
| face_full_v1.ckpt | Full model for face restoration. | [download](https://huggingface.co/lxq007/DiffBIR/resolve/main/face_full_v1.ckpt) | [download](https://pan.baidu.com/s/1pc04xvQybkynRfzK5Y8K0Q?pwd=ov8i) (pwd: ov8i) | [download](https://download.openxlab.org.cn/models/linxinqi/DiffBIR/weight//diffbir_face_full_v1) |

## <a name="quick_start"></a>Quick Start

Download [general_full_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_full_v1.ckpt) and [general_swinir_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_swinir_v1.ckpt) to `weights/`, then run the following command to interact with the gradio website.

```shell
python gradio_diffbir.py \
--ckpt weights/general_full_v1.ckpt \
--config configs/model/cldm.yaml \
--reload_swinir \
--swinir_ckpt weights/general_swinir_v1.ckpt \
--device cuda
```

<img width="887" alt="5" src="https://github.com/open-mmlab/mmdetection/assets/95841578/36afc84f-61d9-4514-88c8-40eaec557e44">

## <a name="inference"></a>Inference
|
| 134 |
+
|
| 135 |
+
### Full Pipeline (Remove Degradations & Refine Details)
|
| 136 |
+
|
| 137 |
+
<a name="general_image_inference"></a>
|
| 138 |
+
#### General Image
|
| 139 |
+
|
| 140 |
+
Download [general_full_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_full_v1.ckpt) and [general_swinir_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_swinir_v1.ckpt) to `weights/` and run the following command.
|
| 141 |
+
|
| 142 |
+
```shell
|
| 143 |
+
python inference.py \
|
| 144 |
+
--input inputs/demo/general \
|
| 145 |
+
--config configs/model/cldm.yaml \
|
| 146 |
+
--ckpt weights/general_full_v1.ckpt \
|
| 147 |
+
--reload_swinir --swinir_ckpt weights/general_swinir_v1.ckpt \
|
| 148 |
+
--steps 50 \
|
| 149 |
+
--sr_scale 4 \
|
| 150 |
+
--color_fix_type wavelet \
|
| 151 |
+
--output results/demo/general \
|
| 152 |
+
--device cuda [--tiled --tile_size 512 --tile_stride 256]
|
| 153 |
+
```
|
| 154 |
+
|
| 155 |
+
Remove the brackets to enable tiled sampling. If you are confused about where the `reload_swinir` option came from, please refer to the [degradation details](#degradation-details).
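
Tiled sampling keeps GPU memory bounded by denoising overlapping windows and blending them. The sketch below illustrates the idea only: the window layout mirrors the `--tile_size`/`--tile_stride` flags above, and the Gaussian blending weights follow the mixture-of-diffusers approach, not this repository's exact code.

```python
import numpy as np

def tile_coords(h, w, tile_size=512, tile_stride=256):
    """Top-left corners of overlapping tiles covering an h x w plane.

    Assumes h, w >= tile_size; a final tile is pinned to the border so the
    whole plane is covered even when the stride does not divide evenly.
    """
    rows = list(range(0, h - tile_size + 1, tile_stride))
    if rows[-1] != h - tile_size:
        rows.append(h - tile_size)
    cols = list(range(0, w - tile_size + 1, tile_stride))
    if cols[-1] != w - tile_size:
        cols.append(w - tile_size)
    return [(r, c) for r in rows for c in cols]

def gaussian_weight(tile_size=512, sigma=0.3):
    """Per-pixel blending weights that fade out toward the tile borders."""
    xs = np.linspace(-1, 1, tile_size)
    g = np.exp(-(xs ** 2) / (2 * sigma ** 2))
    return np.outer(g, g)

# Each denoising step runs on every tile; predictions in overlapping regions
# are averaged with these weights, which hides the seams between tiles.
```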

#### Face Image
<!-- Download [face_full_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/face_full_v1.ckpt) to `weights/` and run the following command. -->
The [face_full_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/face_full_v1.ckpt) will be downloaded from HuggingFace automatically.

```shell
# for aligned face inputs
python inference_face.py \
--input inputs/demo/face/aligned \
--sr_scale 1 \
--output results/demo/face/aligned \
--has_aligned \
--device cuda
```

<a name="unaligned_face_inference"></a>

```shell
# for unaligned face inputs
python inference_face.py \
--input inputs/demo/face/whole_img \
--sr_scale 2 \
--output results/demo/face/whole_img \
--bg_upsampler DiffBIR \
--device cuda
```

### Latent Image Guidance (Quality-fidelity trade-off)

Latent image guidance is used to achieve a trade-off between quality and fidelity. It is disabled by default, since we prefer quality over fidelity. Here is an example:

```shell
python inference.py \
--input inputs/demo/general \
--config configs/model/cldm.yaml \
--ckpt weights/general_full_v1.ckpt \
--reload_swinir --swinir_ckpt weights/general_swinir_v1.ckpt \
--steps 50 \
--sr_scale 4 \
--color_fix_type wavelet \
--output results/demo/general \
--device cuda \
--use_guidance --g_scale 400 --g_t_start 200
```

You will see that the results become smoother; the sketch below illustrates why.
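
In spirit, `--use_guidance` works as follows (the real implementation lives in this repository's sampler and differs in details): below the starting timestep, the current latent is nudged along the gradient of an MSE between the model's predicted clean latent and the latent of the stage1 (SwinIR) output, scaled by `--g_scale`.

```python
import torch
import torch.nn.functional as F

def guided_update(z_t, t, ref_z0, predict_z0, g_scale):
    """One latent-guidance step (illustrative only).

    predict_z0: a differentiable callable mapping (z_t, t) to the model's
    estimate of the clean latent; ref_z0: latent of the stage1 output that
    the sample should stay faithful to.
    """
    z_t = z_t.detach().requires_grad_(True)
    pred_z0 = predict_z0(z_t, t)
    loss = F.mse_loss(pred_z0, ref_z0, reduction="sum")
    grad = torch.autograd.grad(loss, z_t)[0]
    # Larger g_scale pushes harder toward the reference: more fidelity,
    # fewer hallucinated details.
    return (z_t - g_scale * grad).detach()
```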

### Only Stage1 Model (Remove Degradations)

Download [general_swinir_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_swinir_v1.ckpt) or [face_swinir_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/face_swinir_v1.ckpt) for general or face images respectively, and run the following command.

```shell
python scripts/inference_stage1.py \
--config configs/model/swinir.yaml \
--ckpt [swinir_ckpt_path] \
--input [lq_dir] \
--sr_scale 1 --image_size 512 \
--output [output_dir_path]
```

### Only Stage2 Model (Refine Details)

Since the proposed two-stage pipeline is very flexible, you can use other models to remove degradations instead of SwinIR, and then leverage Stable Diffusion to refine the details.

```shell
# step1: Use other models to remove degradations and save the results in [img_dir_path].

# step2: Refine the details of the step1 outputs.
python inference.py \
--config configs/model/cldm.yaml \
--ckpt [full_ckpt_path] \
--steps 50 --sr_scale 1 \
--input [img_dir_path] \
--color_fix_type wavelet \
--output [output_dir_path] \
--disable_preprocess_model \
--device cuda
```

## <a name="train"></a>Train

### Degradation Details

For general image restoration, we first train both the stage1 and stage2 models under the CodeFormer degradation to enhance the generative capacity of the stage2 model. To improve its ability to remove degradations, we train another stage1 model under the Real-ESRGAN degradation and use that one during inference.

For face image restoration, we adopt the degradation model used in [DifFace](https://github.com/zsyOAOA/DifFace/blob/master/configs/training/swinir_ffhq512.yaml) for training and directly use the SwinIR model released by them as our stage1 model.
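
Both degradation settings synthesize low-quality training inputs from HQ images. As a rough schematic only (Real-ESRGAN randomizes the parameters and applies a second-order version of this chain; this is not the repository's exact pipeline):

```python
import numpy as np
import cv2

def degrade(img, scale=4, blur_sigma=2.0, noise_sigma=5.0, jpeg_q=60):
    """Schematic synthetic degradation: blur -> downsample -> noise -> JPEG.

    img: uint8 HxWx3 image. All parameter values here are illustrative.
    """
    img = cv2.GaussianBlur(img, (0, 0), blur_sigma)
    h, w = img.shape[:2]
    img = cv2.resize(img, (w // scale, h // scale), interpolation=cv2.INTER_AREA)
    img = np.clip(img + np.random.normal(0, noise_sigma, img.shape), 0, 255).astype(np.uint8)
    ok, enc = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, jpeg_q])
    return cv2.imdecode(enc, cv2.IMREAD_COLOR)
```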

### Data Preparation

1. Generate file lists for the training set and validation set.

```shell
python scripts/make_file_list.py \
--img_folder [hq_dir_path] \
--val_size [validation_set_size] \
--save_folder [save_dir_path] \
--follow_links
```

This script collects all image files in `img_folder` and automatically splits them into a training set and a validation set. You will get two file lists in `save_folder`; each line of a file list contains the absolute path of an image file:

```
save_folder
├── train.list # training file list
└── val.list   # validation file list
```
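
For clarity, the following is a minimal re-implementation of what this step produces; `scripts/make_file_list.py` is the authoritative version, and the extension filter here is an assumption.

```python
import os
import random

def make_file_list(img_folder, val_size, save_folder,
                   exts=(".png", ".jpg", ".jpeg")):
    # Collect absolute paths of all images, following symlinks (--follow_links).
    paths = []
    for root, _, files in os.walk(img_folder, followlinks=True):
        for name in files:
            if name.lower().endswith(exts):
                paths.append(os.path.abspath(os.path.join(root, name)))
    random.shuffle(paths)
    os.makedirs(save_folder, exist_ok=True)
    # The first val_size images form the validation split, the rest the training split.
    with open(os.path.join(save_folder, "val.list"), "w") as f:
        f.write("\n".join(paths[:val_size]) + "\n")
    with open(os.path.join(save_folder, "train.list"), "w") as f:
        f.write("\n".join(paths[val_size:]) + "\n")
```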

2. Configure the training set and validation set.

For general image restoration, fill in the following configuration files with appropriate values.

- [training set](configs/dataset/general_deg_codeformer_train.yaml) and [validation set](configs/dataset/general_deg_codeformer_val.yaml) for **CodeFormer** degradation.
- [training set](configs/dataset/general_deg_realesrgan_train.yaml) and [validation set](configs/dataset/general_deg_realesrgan_val.yaml) for **Real-ESRGAN** degradation.

For face image restoration, fill in the face [training set](configs/dataset/face_train.yaml) and [validation set](configs/dataset/face_val.yaml) configuration files with appropriate values.

### Train Stage1 Model

1. Configure training-related information.

Fill in the [training configuration file](configs/train_swinir.yaml) with appropriate values.

2. Start training.

```shell
python train.py --config [training_config_path]
```

:bulb: The SwinIR checkpoints will be used when training the stage2 model.

### Train Stage2 Model

1. Download the pretrained [Stable Diffusion v2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1-base) to provide generative capabilities.

```shell
wget https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.ckpt --no-check-certificate
```

2. Create the initial model weights.

```shell
python scripts/make_stage2_init_weight.py \
--cldm_config configs/model/cldm.yaml \
--sd_weight [sd_v2.1_ckpt_path] \
--swinir_weight [swinir_ckpt_path] \
--output [init_weight_output_path]
```

You will see some [outputs](assets/init_weight_outputs.txt) that show how the weights were initialized.
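
Conceptually, this initialization copies pretrained Stable Diffusion weights wherever names and shapes match, while tensors with no SD counterpart (such as extra condition-input channels) start from zeros or fresh initialization. The sketch below illustrates that idea; it is not the repository's script.

```python
import torch

def init_controlnet_from_unet(controlnet_sd, unet_sd):
    """Fill a ControlNet state dict from a pretrained UNet state dict (sketch)."""
    init_with_new_zero, init_with_scratch = [], []
    for name, tensor in controlnet_sd.items():
        if name in unet_sd and unet_sd[name].shape == tensor.shape:
            controlnet_sd[name] = unet_sd[name].clone()
        elif name in unet_sd:
            # Same layer but a different shape (e.g. extra input channels):
            # keep the pretrained slice and zero the newly added part.
            new = torch.zeros_like(tensor)
            src = unet_sd[name]
            new[tuple(slice(0, s) for s in src.shape)] = src
            controlnet_sd[name] = new
            init_with_new_zero.append(name)
        else:
            init_with_scratch.append(name)
    return init_with_new_zero, init_with_scratch
```

This mirrors the messages printed by `train_stage2.py` below ("weights initialized with newly added zeros" / "weights initialized from scratch").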

3. Configure training-related information.

Fill in the [training configuration file](configs/train_cldm.yaml) with appropriate values.

4. Start training.

```shell
python train.py --config [training_config_path]
```

## Citation

Please cite us if our work is useful for your research.

```
@article{2023diffbir,
  author  = {Xinqi Lin and Jingwen He and Ziyan Chen and Zhaoyang Lyu and Ben Fei and Bo Dai and Wanli Ouyang and Yu Qiao and Chao Dong},
  title   = {DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior},
  journal = {arXiv preprint arXiv:2308.15070},
  year    = {2023},
}
```

## License

This project is released under the [Apache 2.0 license](LICENSE).

## Acknowledgement

This project is based on [ControlNet](https://github.com/lllyasviel/ControlNet) and [BasicSR](https://github.com/XPixelGroup/BasicSR). Thanks for their awesome work.

## Contact

If you have any questions, please feel free to contact me at linxinqi@tju.edu.cn.
gitattributes.txt
ADDED
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
diffbir_face_full_v1 filter=lfs diff=lfs merge=lfs -text
diffbir_face_swinir_v1 filter=lfs diff=lfs merge=lfs -text
diffbir_general_full_v1 filter=lfs diff=lfs merge=lfs -text
diffbir_general_swinir_v1 filter=lfs diff=lfs merge=lfs -text
gitignore.txt
ADDED
__pycache__
*.ckpt
*.pth
/data
/exps
*.sh
!install_env.sh
/weights
/temp
/results
.ipynb_checkpoints/
/TODO.txt
/deprecated
/temp_scripts
/.vscode
/runs
/tests
inference.py
ADDED
from argparse import ArgumentParser, Namespace

import torch

from accelerate.utils import set_seed
from utils.inference import (
    V1InferenceLoop,
    BSRInferenceLoop, BFRInferenceLoop, BIDInferenceLoop, UnAlignedBFRInferenceLoop
)


def check_device(device: str) -> str:
    # Fall back to CPU when the requested accelerator is unavailable.
    if device == "cuda":
        if not torch.cuda.is_available():
            print("CUDA not available because the current PyTorch install was not "
                  "built with CUDA enabled.")
            device = "cpu"
    elif device == "mps":
        if not torch.backends.mps.is_available():
            if not torch.backends.mps.is_built():
                print("MPS not available because the current PyTorch install was not "
                      "built with MPS enabled.")
            else:
                print("MPS not available because the current MacOS version is not 12.3+ "
                      "and/or you do not have an MPS-enabled device on this machine.")
            device = "cpu"
    print(f"using device {device}")
    return device


def parse_args() -> Namespace:
    parser = ArgumentParser()
    ### model parameters
    parser.add_argument("--task", type=str, required=True, choices=["sr", "dn", "fr", "fr_bg"])
    parser.add_argument("--upscale", type=float, required=True)
    parser.add_argument("--version", type=str, default="v2", choices=["v1", "v2"])
    ### sampling parameters
    parser.add_argument("--steps", type=int, default=50)
    parser.add_argument("--better_start", action="store_true")
    parser.add_argument("--tiled", action="store_true")
    parser.add_argument("--tile_size", type=int, default=512)
    parser.add_argument("--tile_stride", type=int, default=256)
    parser.add_argument("--pos_prompt", type=str, default="")
    parser.add_argument("--neg_prompt", type=str, default="low quality, blurry, low-resolution, noisy, unsharp, weird textures")
    parser.add_argument("--cfg_scale", type=float, default=4.0)
    ### input parameters
    parser.add_argument("--input", type=str, required=True)
    parser.add_argument("--n_samples", type=int, default=1)
    ### guidance parameters
    parser.add_argument("--guidance", action="store_true")
    parser.add_argument("--g_loss", type=str, default="w_mse", choices=["mse", "w_mse"])
    parser.add_argument("--g_scale", type=float, default=0.0)
    parser.add_argument("--g_start", type=int, default=1001)
    parser.add_argument("--g_stop", type=int, default=-1)
    parser.add_argument("--g_space", type=str, default="latent")
    parser.add_argument("--g_repeat", type=int, default=1)
    ### output parameters
    parser.add_argument("--output", type=str, required=True)
    ### common parameters
    parser.add_argument("--seed", type=int, default=231)
    parser.add_argument("--device", type=str, default="cuda", choices=["cpu", "cuda", "mps"])

    return parser.parse_args()


def main():
    args = parse_args()
    args.device = check_device(args.device)
    set_seed(args.seed)
    if args.version == "v1":
        V1InferenceLoop(args).run()
    else:
        supported_tasks = {
            "sr": BSRInferenceLoop,
            "dn": BIDInferenceLoop,
            "fr": BFRInferenceLoop,
            "fr_bg": UnAlignedBFRInferenceLoop
        }
        supported_tasks[args.task](args).run()
    print("done!")


if __name__ == "__main__":
    main()
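
The parser above defines the full v2 interface. As a usage sketch (the supported entry point is the CLI; this `Namespace` simply mirrors the defaults in `parse_args()` and is not a documented API):

```python
from argparse import Namespace

# Equivalent to:
#   python inference.py --task sr --upscale 4 --input inputs/demo/general --output results/demo/general
args = Namespace(
    task="sr", upscale=4.0, version="v2",
    steps=50, better_start=False, tiled=False, tile_size=512, tile_stride=256,
    pos_prompt="",
    neg_prompt="low quality, blurry, low-resolution, noisy, unsharp, weird textures",
    cfg_scale=4.0,
    input="inputs/demo/general", n_samples=1,
    guidance=False, g_loss="w_mse", g_scale=0.0, g_start=1001, g_stop=-1,
    g_space="latent", g_repeat=1,
    output="results/demo/general", seed=231, device="cuda",
)
# main() would dispatch --task sr to BSRInferenceLoop(args).run()
```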
requirements.txt
ADDED
--extra-index-url https://download.pytorch.org/whl/cu118
torch==2.2.2+cu118
torchvision==0.17.2+cu118
torchaudio==2.2.2+cu118
omegaconf==2.3.0
accelerate==0.28.0
einops==0.7.0
opencv_python==4.9.0.80
scipy==1.12.0
ftfy==6.2.0
regex==2023.12.25
python-dateutil==2.9.0.post0
timm==0.9.16
pytorch-lightning==2.2.1 # only for loading pretrained sd weight
tensorboard==2.16.2 # for tensorboard event visualization
protobuf==4.25.3 # for tensorboard
lpips==0.1.4
facexlib==0.3.0
gradio==4.25.0
train_stage1.py
ADDED
import os
from argparse import ArgumentParser
import warnings

from omegaconf import OmegaConf
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from torchvision.utils import make_grid
from accelerate import Accelerator
from accelerate.utils import set_seed
from einops import rearrange
from tqdm import tqdm
import lpips

from model import SwinIR
from utils.common import instantiate_from_config


# https://github.com/XPixelGroup/BasicSR/blob/033cd6896d898fdd3dcda32e3102a792efa1b8f4/basicsr/utils/color_util.py#L186
def rgb2ycbcr_pt(img, y_only=False):
    """Convert RGB images to YCbCr images (PyTorch version).

    It implements the ITU-R BT.601 conversion for standard-definition television. See more details in
    https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion.

    Args:
        img (Tensor): Images with shape (n, 3, h, w), the range [0, 1], float, RGB format.
        y_only (bool): Whether to only return Y channel. Default: False.

    Returns:
        (Tensor): converted images with the shape (n, 3/1, h, w), the range [0, 1], float.
    """
    if y_only:
        weight = torch.tensor([[65.481], [128.553], [24.966]]).to(img)
        out_img = torch.matmul(img.permute(0, 2, 3, 1), weight).permute(0, 3, 1, 2) + 16.0
    else:
        weight = torch.tensor([[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], [24.966, 112.0, -18.214]]).to(img)
        bias = torch.tensor([16, 128, 128]).view(1, 3, 1, 1).to(img)
        out_img = torch.matmul(img.permute(0, 2, 3, 1), weight).permute(0, 3, 1, 2) + bias

    out_img = out_img / 255.
    return out_img


# https://github.com/XPixelGroup/BasicSR/blob/033cd6896d898fdd3dcda32e3102a792efa1b8f4/basicsr/metrics/psnr_ssim.py#L52
def calculate_psnr_pt(img, img2, crop_border, test_y_channel=False):
    """Calculate PSNR (Peak Signal-to-Noise Ratio) (PyTorch version).

    Reference: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio

    Args:
        img (Tensor): Images with range [0, 1], shape (n, 3/1, h, w).
        img2 (Tensor): Images with range [0, 1], shape (n, 3/1, h, w).
        crop_border (int): Cropped pixels in each edge of an image. These pixels are not involved in the calculation.
        test_y_channel (bool): Test on Y channel of YCbCr. Default: False.

    Returns:
        float: PSNR result.
    """

    assert img.shape == img2.shape, (f'Image shapes are different: {img.shape}, {img2.shape}.')

    if crop_border != 0:
        img = img[:, :, crop_border:-crop_border, crop_border:-crop_border]
        img2 = img2[:, :, crop_border:-crop_border, crop_border:-crop_border]

    if test_y_channel:
        img = rgb2ycbcr_pt(img, y_only=True)
        img2 = rgb2ycbcr_pt(img2, y_only=True)

    img = img.to(torch.float64)
    img2 = img2.to(torch.float64)

    mse = torch.mean((img - img2)**2, dim=[1, 2, 3])
    return 10. * torch.log10(1. / (mse + 1e-8))
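

# Quick sanity check (illustrative comment only, not part of the training flow):
#   x = torch.rand(1, 3, 8, 8)
#   calculate_psnr_pt(x, x, crop_border=0)                       # ~80 dB (capped by the 1e-8 eps)
#   calculate_psnr_pt(x, (x + 0.1).clamp(0, 1), crop_border=0)   # ~20 dB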


def main(args) -> None:
    # Setup accelerator:
    accelerator = Accelerator(split_batches=True)
    set_seed(231)
    device = accelerator.device
    cfg = OmegaConf.load(args.config)

    # Setup an experiment folder:
    if accelerator.is_local_main_process:
        exp_dir = cfg.train.exp_dir
        os.makedirs(exp_dir, exist_ok=True)
        ckpt_dir = os.path.join(exp_dir, "checkpoints")
        os.makedirs(ckpt_dir, exist_ok=True)
        print(f"Experiment directory created at {exp_dir}")

    # Create model:
    swinir: SwinIR = instantiate_from_config(cfg.model.swinir)
    if cfg.train.resume:
        swinir.load_state_dict(torch.load(cfg.train.resume, map_location="cpu"), strict=True)
        if accelerator.is_local_main_process:
            print(f"strictly load weight from checkpoint: {cfg.train.resume}")
    else:
        if accelerator.is_local_main_process:
            print("initialize from scratch")

    # Setup optimizer:
    opt = torch.optim.AdamW(
        swinir.parameters(), lr=cfg.train.learning_rate,
        weight_decay=0
    )

    # Setup data:
    dataset = instantiate_from_config(cfg.dataset.train)
    loader = DataLoader(
        dataset=dataset, batch_size=cfg.train.batch_size,
        num_workers=cfg.train.num_workers,
        shuffle=True, drop_last=True
    )
    val_dataset = instantiate_from_config(cfg.dataset.val)
    val_loader = DataLoader(
        dataset=val_dataset, batch_size=cfg.train.batch_size,
        num_workers=cfg.train.num_workers,
        shuffle=False, drop_last=False
    )
    if accelerator.is_local_main_process:
        print(f"Dataset contains {len(dataset):,} images from {dataset.file_list}")

    # Prepare models for training:
    swinir.train().to(device)
    swinir, opt, loader, val_loader = accelerator.prepare(swinir, opt, loader, val_loader)
    pure_swinir = accelerator.unwrap_model(swinir)

    # Variables for monitoring/logging purposes:
    global_step = 0
    max_steps = cfg.train.train_steps
    step_loss = []
    epoch = 0
    epoch_loss = []
    with warnings.catch_warnings():
        # avoid warnings from lpips internal
        warnings.simplefilter("ignore")
        lpips_model = lpips.LPIPS(net="alex", verbose=accelerator.is_local_main_process).eval().to(device)
    if accelerator.is_local_main_process:
        writer = SummaryWriter(exp_dir)
        print(f"Training for {max_steps} steps...")

    while global_step < max_steps:
        pbar = tqdm(iterable=None, disable=not accelerator.is_local_main_process, unit="batch", total=len(loader))
        for gt, lq, _ in loader:
            gt = rearrange((gt + 1) / 2, "b h w c -> b c h w").contiguous().float().to(device)
            lq = rearrange(lq, "b h w c -> b c h w").contiguous().float().to(device)
            pred = swinir(lq)
            loss = F.mse_loss(input=pred, target=gt, reduction="sum")

            opt.zero_grad()
            accelerator.backward(loss)
            opt.step()
            accelerator.wait_for_everyone()

            global_step += 1
            step_loss.append(loss.item())
            epoch_loss.append(loss.item())
            pbar.update(1)
            pbar.set_description(f"Epoch: {epoch:04d}, Global Step: {global_step:07d}, Loss: {loss.item():.6f}")

            # Log loss values:
            if global_step % cfg.train.log_every == 0:
                # Gather values from all processes
                avg_loss = accelerator.gather(torch.tensor(step_loss, device=device).unsqueeze(0)).mean().item()
                step_loss.clear()
                if accelerator.is_local_main_process:
                    writer.add_scalar("train/loss_step", avg_loss, global_step)

            # Save checkpoint:
            if global_step % cfg.train.ckpt_every == 0:
                if accelerator.is_local_main_process:
                    checkpoint = pure_swinir.state_dict()
                    ckpt_path = f"{ckpt_dir}/{global_step:07d}.pt"
                    torch.save(checkpoint, ckpt_path)

            if global_step % cfg.train.image_every == 0 or global_step == 1:
                swinir.eval()
                N = 12
                log_gt, log_lq = gt[:N], lq[:N]
                with torch.no_grad():
                    log_pred = swinir(log_lq)
                if accelerator.is_local_main_process:
                    for tag, image in [
                        ("image/pred", log_pred),
                        ("image/gt", log_gt),
                        ("image/lq", log_lq),
                    ]:
                        writer.add_image(tag, make_grid(image, nrow=4), global_step)
                swinir.train()

            # Evaluate model:
            if global_step % cfg.train.val_every == 0:
                swinir.eval()
                val_loss = []
                val_lpips = []
                val_psnr = []
                val_pbar = tqdm(iterable=None, disable=not accelerator.is_local_main_process, unit="batch",
                                total=len(val_loader), leave=False, desc="Validation")
                # TODO: use accelerator.gather_for_metrics for more precise metric calculation?
                for val_gt, val_lq, _ in val_loader:
                    val_gt = rearrange((val_gt + 1) / 2, "b h w c -> b c h w").contiguous().float().to(device)
                    val_lq = rearrange(val_lq, "b h w c -> b c h w").contiguous().float().to(device)
                    with torch.no_grad():
                        # forward
                        val_pred = swinir(val_lq)
                        # compute metrics (loss, lpips, psnr)
                        val_loss.append(F.mse_loss(input=val_pred, target=val_gt, reduction="sum").item())
                        val_lpips.append(lpips_model(val_pred, val_gt, normalize=True).mean().item())
                        val_psnr.append(calculate_psnr_pt(val_pred, val_gt, crop_border=0).mean().item())
                    val_pbar.update(1)
                val_pbar.close()
                avg_val_loss = accelerator.gather(torch.tensor(val_loss, device=device).unsqueeze(0)).mean().item()
                avg_val_lpips = accelerator.gather(torch.tensor(val_lpips, device=device).unsqueeze(0)).mean().item()
                avg_val_psnr = accelerator.gather(torch.tensor(val_psnr, device=device).unsqueeze(0)).mean().item()
                if accelerator.is_local_main_process:
                    for tag, val in [
                        ("val/loss", avg_val_loss),
                        ("val/lpips", avg_val_lpips),
                        ("val/psnr", avg_val_psnr)
                    ]:
                        writer.add_scalar(tag, val, global_step)
                swinir.train()

            accelerator.wait_for_everyone()

            if global_step == max_steps:
                break

        pbar.close()
        epoch += 1
        avg_epoch_loss = accelerator.gather(torch.tensor(epoch_loss, device=device).unsqueeze(0)).mean().item()
        epoch_loss.clear()
        if accelerator.is_local_main_process:
            writer.add_scalar("train/loss_epoch", avg_epoch_loss, global_step)

    if accelerator.is_local_main_process:
        print("done!")
        writer.close()


if __name__ == "__main__":
    parser = ArgumentParser()
    parser.add_argument("--config", type=str, required=True)
    args = parser.parse_args()
    main(args)
train_stage2.py
ADDED
import os
from argparse import ArgumentParser

from omegaconf import OmegaConf
import torch
from torch.utils.data import DataLoader
from torchvision.utils import make_grid
from accelerate import Accelerator
from accelerate.utils import set_seed
from einops import rearrange
from tqdm import tqdm
from torch.utils.tensorboard import SummaryWriter
from PIL import Image, ImageDraw, ImageFont
import numpy as np

from model import ControlLDM, SwinIR, Diffusion
from utils.common import instantiate_from_config
from utils.sampler import SpacedSampler


def log_txt_as_img(wh, xc):
    # wh: a tuple of (width, height)
    # xc: a list of captions to plot
    b = len(xc)
    txts = list()
    for bi in range(b):
        txt = Image.new("RGB", wh, color="white")
        draw = ImageDraw.Draw(txt)
        # font = ImageFont.truetype('font/DejaVuSans.ttf', size=size)
        font = ImageFont.load_default()
        nc = int(40 * (wh[0] / 256))
        lines = "\n".join(xc[bi][start:start + nc] for start in range(0, len(xc[bi]), nc))

        try:
            draw.text((0, 0), lines, fill="black", font=font)
        except UnicodeEncodeError:
            print("Can't encode string for logging. Skipping.")

        txt = np.array(txt).transpose(2, 0, 1) / 127.5 - 1.0
        txts.append(txt)
    txts = np.stack(txts)
    txts = torch.tensor(txts)
    return txts


def main(args) -> None:
    # Setup accelerator:
    accelerator = Accelerator(split_batches=True)
    set_seed(231)
    device = accelerator.device
    cfg = OmegaConf.load(args.config)

    # Setup an experiment folder:
    if accelerator.is_local_main_process:
        exp_dir = cfg.train.exp_dir
        os.makedirs(exp_dir, exist_ok=True)
        ckpt_dir = os.path.join(exp_dir, "checkpoints")
        os.makedirs(ckpt_dir, exist_ok=True)
        print(f"Experiment directory created at {exp_dir}")

    # Create model:
    cldm: ControlLDM = instantiate_from_config(cfg.model.cldm)
    sd = torch.load(cfg.train.sd_path, map_location="cpu")["state_dict"]
    unused = cldm.load_pretrained_sd(sd)
    if accelerator.is_local_main_process:
        print(f"strictly load pretrained SD weight from {cfg.train.sd_path}\n"
              f"unused weights: {unused}")

    if cfg.train.resume:
        cldm.load_controlnet_from_ckpt(torch.load(cfg.train.resume, map_location="cpu"))
        if accelerator.is_local_main_process:
            print(f"strictly load controlnet weight from checkpoint: {cfg.train.resume}")
    else:
        init_with_new_zero, init_with_scratch = cldm.load_controlnet_from_unet()
        if accelerator.is_local_main_process:
            print(f"strictly load controlnet weight from pretrained SD\n"
                  f"weights initialized with newly added zeros: {init_with_new_zero}\n"
                  f"weights initialized from scratch: {init_with_scratch}")

    swinir: SwinIR = instantiate_from_config(cfg.model.swinir)
    sd = {
        (k[len("module."):] if k.startswith("module.") else k): v
        for k, v in torch.load(cfg.train.swinir_path, map_location="cpu").items()
    }
    swinir.load_state_dict(sd, strict=True)
    for p in swinir.parameters():
        p.requires_grad = False
    if accelerator.is_local_main_process:
        print(f"load SwinIR from {cfg.train.swinir_path}")

    diffusion: Diffusion = instantiate_from_config(cfg.model.diffusion)

    # Setup optimizer:
    opt = torch.optim.AdamW(cldm.controlnet.parameters(), lr=cfg.train.learning_rate)

    # Setup data:
    dataset = instantiate_from_config(cfg.dataset.train)
    loader = DataLoader(
        dataset=dataset, batch_size=cfg.train.batch_size,
        num_workers=cfg.train.num_workers,
        shuffle=True, drop_last=True
    )
    if accelerator.is_local_main_process:
        print(f"Dataset contains {len(dataset):,} images from {dataset.file_list}")

    # Prepare models for training:
    cldm.train().to(device)
    swinir.eval().to(device)
    diffusion.to(device)
    cldm, opt, loader = accelerator.prepare(cldm, opt, loader)
    pure_cldm: ControlLDM = accelerator.unwrap_model(cldm)

    # Variables for monitoring/logging purposes:
    global_step = 0
    max_steps = cfg.train.train_steps
    step_loss = []
    epoch = 0
    epoch_loss = []
    sampler = SpacedSampler(diffusion.betas)
    if accelerator.is_local_main_process:
        writer = SummaryWriter(exp_dir)
        print(f"Training for {max_steps} steps...")

    while global_step < max_steps:
        pbar = tqdm(iterable=None, disable=not accelerator.is_local_main_process, unit="batch", total=len(loader))
        for gt, lq, prompt in loader:
            gt = rearrange(gt, "b h w c -> b c h w").contiguous().float().to(device)
            lq = rearrange(lq, "b h w c -> b c h w").contiguous().float().to(device)
            with torch.no_grad():
                z_0 = pure_cldm.vae_encode(gt)
                clean = swinir(lq)
                cond = pure_cldm.prepare_condition(clean, prompt)
            t = torch.randint(0, diffusion.num_timesteps, (z_0.shape[0],), device=device)

            loss = diffusion.p_losses(cldm, z_0, t, cond)
            opt.zero_grad()
            accelerator.backward(loss)
            opt.step()

            accelerator.wait_for_everyone()

            global_step += 1
            step_loss.append(loss.item())
            epoch_loss.append(loss.item())
            pbar.update(1)
            pbar.set_description(f"Epoch: {epoch:04d}, Global Step: {global_step:07d}, Loss: {loss.item():.6f}")

            # Log loss values:
            if global_step % cfg.train.log_every == 0 and global_step > 0:
                # Gather values from all processes
                avg_loss = accelerator.gather(torch.tensor(step_loss, device=device).unsqueeze(0)).mean().item()
                step_loss.clear()
                if accelerator.is_local_main_process:
                    writer.add_scalar("loss/loss_simple_step", avg_loss, global_step)

            # Save checkpoint:
            if global_step % cfg.train.ckpt_every == 0 and global_step > 0:
                if accelerator.is_local_main_process:
                    checkpoint = pure_cldm.controlnet.state_dict()
                    ckpt_path = f"{ckpt_dir}/{global_step:07d}.pt"
                    torch.save(checkpoint, ckpt_path)

            if global_step % cfg.train.image_every == 0 or global_step == 1:
                N = 12
                log_clean = clean[:N]
                log_cond = {k: v[:N] for k, v in cond.items()}
                log_gt, log_lq = gt[:N], lq[:N]
                log_prompt = prompt[:N]
                cldm.eval()
                with torch.no_grad():
                    z = sampler.sample(
                        model=cldm, device=device, steps=50, batch_size=len(log_gt), x_size=z_0.shape[1:],
                        cond=log_cond, uncond=None, cfg_scale=1.0, x_T=None,
                        progress=accelerator.is_local_main_process, progress_leave=False
                    )
                    if accelerator.is_local_main_process:
                        for tag, image in [
                            ("image/samples", (pure_cldm.vae_decode(z) + 1) / 2),
                            ("image/gt", (log_gt + 1) / 2),
                            ("image/lq", log_lq),
                            ("image/condition", log_clean),
                            ("image/condition_decoded", (pure_cldm.vae_decode(log_cond["c_img"]) + 1) / 2),
                            ("image/prompt", (log_txt_as_img((512, 512), log_prompt) + 1) / 2)
                        ]:
                            writer.add_image(tag, make_grid(image, nrow=4), global_step)
                cldm.train()
            accelerator.wait_for_everyone()
            if global_step == max_steps:
                break

        pbar.close()
        epoch += 1
        avg_epoch_loss = accelerator.gather(torch.tensor(epoch_loss, device=device).unsqueeze(0)).mean().item()
        epoch_loss.clear()
        if accelerator.is_local_main_process:
            writer.add_scalar("loss/loss_simple_epoch", avg_epoch_loss, global_step)

    if accelerator.is_local_main_process:
        print("done!")
        writer.close()


if __name__ == "__main__":
    parser = ArgumentParser()
    parser.add_argument("--config", type=str, required=True)
    args = parser.parse_args()
    main(args)