timbrooks committed
Commit 2afcb7e · Parent: 658ad5e

Add InstructPix2Pix

Former-commit-id: 3626d699482f2419961432bff2e1763ccf55f6e7
LICENSE ADDED
Copyright 2023 Timothy Brooks, Aleksander Holynski, Alexei A. Efros

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Portions of code and models (such as pretrained checkpoints, which are fine-tuned starting from released Stable Diffusion checkpoints) are derived from the Stable Diffusion codebase (https://github.com/CompVis/stable-diffusion). Further restrictions may apply. Please consult the Stable Diffusion license `stable_diffusion/LICENSE`. Modified code is denoted as such in comments at the start of each file.
README.md ADDED
# InstructPix2Pix: Learning to Follow Image Editing Instructions
### [Project Page](https://www.timothybrooks.com/instruct-pix2pix/) | [Paper](https://arxiv.org/abs/2211.09800) | [Data](http://instruct-pix2pix.eecs.berkeley.edu/)
PyTorch implementation of InstructPix2Pix, an instruction-based image editing model, based on the original [CompVis/stable_diffusion](https://github.com/CompVis/stable-diffusion) repo. <br>

[InstructPix2Pix: Learning to Follow Image Editing Instructions](https://www.timothybrooks.com/instruct-pix2pix/) <br>
[Tim Brooks](https://www.timothybrooks.com/)\*,
[Aleksander Holynski](https://holynski.org/)\*,
[Alexei A. Efros](https://people.eecs.berkeley.edu/~efros/) <br>
UC Berkeley <br>
\*denotes equal contribution

<img src='https://instruct-pix2pix.timothybrooks.com/teaser.jpg'/>

## TL;DR: quickstart

To set up a conda environment, download a pretrained model, and edit an image:
```
conda env create -f environment.yaml
conda activate ip2p
bash scripts/download_checkpoints.sh
python edit_cli.py --input imgs/example.jpg --output imgs/output.jpg --edit "turn him into a cyborg"

# Optionally, you can specify parameters:
# python edit_cli.py --steps 100 --resolution 512 --seed 0 --cfg-text 7.5 --cfg-image 1.2 --input imgs/example.jpg --output imgs/output.jpg --edit "turn him into a cyborg"
```
26

## Setup

Install all dependencies with:
```
conda env create -f environment.yaml
```

Download the pretrained models by running:
```
bash scripts/download_checkpoints.sh
```

## Generated Dataset

Our image editing model is trained on a generated dataset of 454,445 examples. Each example contains (1) an input image, (2) an editing instruction, and (3) an output edited image. We provide two versions of the dataset: one in which each pair of edited images is generated 100 times and the best examples are chosen based on CLIP metrics (Section 3.1.2 of the paper) (`clip-filtered-dataset`), and one in which examples are chosen at random (`random-sample-dataset`).

For the released version of this dataset, we've additionally filtered prompts and images for NSFW content. After NSFW filtering, the GPT-3-generated dataset contains 451,990 examples. The final image-pair datasets contain:

| | # of image editing examples | Dataset size |
|--|-----------------------------|--------------|
| `random-sample-dataset` | 451,990 | 727 GB |
| `clip-filtered-dataset` | 313,010 | 436 GB |

To download one of these datasets, along with the entire NSFW-filtered text data, run the following command with the appropriate dataset name:

```
bash scripts/download_data.sh clip-filtered-dataset
```
55

## Training InstructPix2Pix

Before training, modify `configs/instruct-pix2pix/default.yaml` to point to the dataset in the right location, and download the Stable Diffusion checkpoint from which to finetune. Then launch training with:

```
python stable_diffusion/main.py --name default --base configs/train.yaml --train --gpus 0,1,2,3,4,5,6,7
```
64

## Creating your own dataset

Our generated dataset of paired images and editing instructions is made in two phases: First, we use GPT-3 to generate text triplets: (a) a caption describing an image, (b) an edit instruction, (c) a caption describing the image after the edit. Then, we turn pairs of captions (before/after the edit) into pairs of images using Stable Diffusion and Prompt-to-Prompt.

### (1) Generate a dataset of captions and instructions

We provide our generated dataset of captions and edit instructions [here](https://instruct-pix2pix.eecs.berkeley.edu/gpt-generated-prompts.jsonl). If you plan to use our captions+instructions, skip to step (2). Otherwise, if you would like to create your own text dataset, please follow steps (1.1-1.3) below. Note that generating very large datasets using GPT-3 can be expensive.
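Each line of the prompts file is a JSON object. Judging from the fields written by `dataset_creation/generate_txt_dataset.py` in this commit, a generated line carries `caption`, `edit`, `output`, and `url` keys; the values in the sketch below are made up for illustration:

```python
import json

# Hypothetical example line (made-up values); field names follow what
# dataset_creation/generate_txt_dataset.py writes per example.
line = json.dumps({
    "caption": "a photo of a dog in a park",
    "edit": "make it a watercolor painting",
    "output": "a watercolor painting of a dog in a park",
    "url": "https://example.com/image.jpg",
})

# Parsing a line back gives one text triplet plus the source image URL.
example = json.loads(line)
print(sorted(example.keys()))
```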
73

#### (1.1) Manually write a dataset of instructions and captions

The first step of the process is fine-tuning GPT-3. To do this, we made a dataset of 700 examples broadly covering the edits that we might want our model to be able to perform. Our examples are available [here](https://instruct-pix2pix.eecs.berkeley.edu/human_written_examples.jsonl). These should be diverse and cover a wide range of possible captions and types of edits. Ideally, they should avoid duplication or significant overlap of captions and instructions. It is also important to be mindful of the limitations of Stable Diffusion and Prompt-to-Prompt when writing these examples, such as their inability to perform large spatial transformations (e.g., moving the camera, zooming in, swapping object locations).

Input prompts should closely match the distribution of input prompts used to generate the larger dataset. We sampled the 700 input prompts from the LAION Improved Aesthetics 6.5+ dataset and also use this dataset for generating examples. We found this dataset is quite noisy (many of the captions are overly long and contain irrelevant text). For this reason, we also considered the MSCOCO and LAION-COCO datasets, but ultimately chose LAION Improved Aesthetics 6.5+ due to its diversity of content, proper nouns, and artistic mediums. If you choose to use another dataset or combination of datasets as input to GPT-3 when generating examples, we recommend sampling the input prompts from the same distribution when manually writing training examples.

#### (1.2) Finetune GPT-3

The next step is to finetune a large language model to generate an edit instruction and edited caption from a new input caption. We use GPT-3 Davinci via the OpenAI API, although other language models could be used.

To prepare training data for GPT-3, you must set up an OpenAI developer account to access the needed APIs. Run the `dataset_creation/prepare_for_gpt.py` script, which forms the prompts into the correct format by concatenating instructions and captions and adding delimiters and stop sequences.

```bash
python dataset_creation/prepare_for_gpt.py prompts/human_written_examples.jsonl prompts/human_written_examples_for_gpt.jsonl
```

Next, finetune GPT-3 via the OpenAI CLI. We provide an example below, although please refer to the official OpenAI documentation, as best practices may change. We trained the Davinci model for a single epoch. You could experiment with smaller, less expensive GPT-3 variants or with open-source language models, although this may hurt performance.

```bash
openai api fine_tunes.create -t prompts/human_written_examples_for_gpt.jsonl -m davinci --n_epochs 1 --suffix "instruct-pix2pix"
```

You can test out the finetuned GPT-3 model by launching the provided Gradio app:

```bash
python prompt_app.py OPENAI_MODEL_NAME
```

#### (1.3) Generate a large dataset of captions and instructions

We now use the finetuned GPT-3 model to generate a large dataset. Our dataset cost thousands of dollars to create. See `dataset_creation/generate_txt_dataset.py` for the script which generates these examples. We recommend first generating a small number of examples and checking that the results look as desired before gradually increasing the scale.

```bash
python dataset_creation/generate_txt_dataset.py OPENAI_MODEL_NAME
```

If you are generating at a very large scale (e.g., 100K+ examples), it will be notably faster to generate the dataset with multiple processes running in parallel. This can be accomplished by setting `--partitions=N` to a higher number and running multiple processes, setting each `--partition` to the corresponding value.

```bash
python dataset_creation/generate_txt_dataset.py OPENAI_MODEL_NAME --partitions=10 --partition=0
```
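Under the hood, the generation scripts divide work by splitting the (shuffled) example indices with `np.array_split`, so the partitions are disjoint and together cover every example. A minimal sketch of that split (the helper name here is ours, for illustration):

```python
import numpy as np

def partition_indices(n_items: int, n_partitions: int, partition: int):
    """Indices of the examples that one partition is responsible for.

    Mirrors the np.array_split call used in
    dataset_creation/generate_txt_dataset.py.
    """
    return np.array_split(np.arange(n_items), n_partitions)[partition]

# Splitting 10 examples across 3 workers: every index is covered exactly once.
parts = [partition_indices(10, 3, p) for p in range(3)]
```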
115

### (2) Turn paired captions into paired images

The next step is to turn pairs of text captions into pairs of images. For this, we need to copy a pretrained Stable Diffusion model checkpoint to `stable_diffusion/models/ldm/stable-diffusion-v1/`. For our model, we used [checkpoint v1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.ckpt), but other versions may also work. It is also necessary to download a checkpoint for the Stable Diffusion autoencoder. We used the [new autoencoder](https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt), which should be put in the same directory. Once all checkpoints have been downloaded, we can generate the dataset with the following command:

```
python dataset_creation/generate_img_dataset.py data/instruct-pix2pix-dataset-000 data/gpt_generated_prompts.jsonl
```

This command operates on a single GPU (typically a V100 or A100). To parallelize over many GPUs/machines, set `--n-partitions` to the total number of parallel jobs and `--partition` to the index of each job.

```
python dataset_creation/generate_img_dataset.py data/instruct-pix2pix-dataset-000 data/gpt_generated_prompts.jsonl --n-partitions 100 --partition 0
```

The default parameters match those of our dataset, although in practice you can use a smaller number of steps (e.g., `--steps=25`) to generate high-quality data faster. By default, we generate 100 samples per prompt and use CLIP filtering to keep at most 4 per prompt. You can experiment with fewer samples by setting `--n-samples`. The command below turns off CLIP filtering entirely and is therefore faster:

```
python dataset_creation/generate_img_dataset.py data/instruct-pix2pix-dataset-000 data/gpt_generated_prompts.jsonl --n-samples 4 --clip-threshold 0 --clip-dir-threshold 0 --clip-img-threshold 0 --n-partitions 100 --partition 0
```
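The CLIP filtering step described above is implemented at the end of `dataset_creation/generate_img_dataset.py`: samples must clear all similarity thresholds, and the survivors with the highest directional CLIP similarity are kept. A simplified standalone version of that logic (function name and toy inputs are ours):

```python
def clip_filter(results, clip_threshold=0.2, clip_dir_threshold=0.2,
                clip_img_threshold=0.7, max_out_samples=4):
    """Keep the seeds of the best samples for one prompt.

    `results` maps a seed to its CLIP scores: text-image similarity for each
    image (clip_sim_0, clip_sim_1), image-image similarity (clip_sim_image),
    and directional similarity of the edit (clip_sim_dir).
    """
    kept = [
        (r["clip_sim_dir"], seed)
        for seed, r in results.items()
        if r["clip_sim_image"] >= clip_img_threshold
        and r["clip_sim_dir"] >= clip_dir_threshold
        and r["clip_sim_0"] >= clip_threshold
        and r["clip_sim_1"] >= clip_threshold
    ]
    # Rank surviving samples by directional similarity, best first.
    kept.sort(reverse=True)
    return [seed for _, seed in kept[:max_out_samples]]
```

Setting all three thresholds to 0 (as in the command above) keeps every sample, which is why disabling filtering is faster: no generated pair is wasted.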
135

After generating all of the dataset examples, run the following command to create a list of the examples. This is needed for the dataset object to efficiently sample examples without iterating over the entire dataset directory at the start of each training run.

```
python dataset_creation/prepare_dataset.py data/instruct-pix2pix-dataset-000
```

## Comments

- Our codebase is based on the [Stable Diffusion codebase](https://github.com/CompVis/stable-diffusion).

## BibTeX

```
@article{brooks2022instructpix2pix,
  title={InstructPix2Pix: Learning to Follow Image Editing Instructions},
  author={Brooks, Tim and Holynski, Aleksander and Efros, Alexei A},
  journal={arXiv preprint arXiv:2211.09800},
  year={2022}
}
```
configs/generate.yaml ADDED
# File modified by authors of InstructPix2Pix from original (https://github.com/CompVis/stable-diffusion).
# See more details in LICENSE.

model:
  base_learning_rate: 1.0e-04
  target: stable_diffusion.ldm.models.diffusion.ddpm_edit.LatentDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: edited
    cond_stage_key: edit
    # image_size: 64
    # image_size: 32
    image_size: 16
    channels: 4
    cond_stage_trainable: false # Note: different from the one we trained before
    conditioning_key: hybrid
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: true
    load_ema: true

    scheduler_config: # 10000 warmup steps
      target: stable_diffusion.ldm.lr_scheduler.LambdaLinearScheduler
      params:
        warm_up_steps: [ 0 ]
        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
        f_start: [ 1.e-6 ]
        f_max: [ 1. ]
        f_min: [ 1. ]

    unet_config:
      target: stable_diffusion.ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 8
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: stable_diffusion.ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
          - 1
          - 2
          - 4
          - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: stable_diffusion.ldm.modules.encoders.modules.FrozenCLIPEmbedder

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 128
    num_workers: 1
    wrap: false
    validation:
      target: edit_dataset.EditDataset
      params:
        path: /shared/holynski/laion-aesthetics-6.5_edit-model=davinci-laion700-1epoch_samples=10000/laion-aesthetics-6.5_edit-model=davinci-laion700-1epoch_samples=10000
        cache_dir: /shared/timbrooks/image-edit-data/caches
        cache_name: davinci10k
        split: val
        min_text_sim: 0.2
        min_image_sim: 0.75
        min_direction_sim: 0.2
        max_samples_per_prompt: 1
        min_resize_res: 512
        max_resize_res: 512
        crop_res: 512
        output_as_edit: False
        real_input: True
configs/train.yaml ADDED
# File modified by authors of InstructPix2Pix from original (https://github.com/CompVis/stable-diffusion).
# See more details in LICENSE.

model:
  base_learning_rate: 1.0e-04
  target: stable_diffusion.ldm.models.diffusion.ddpm_edit.LatentDiffusion
  params:
    ckpt_path: stable_diffusion/models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: edited
    cond_stage_key: edit
    image_size: 32
    channels: 4
    cond_stage_trainable: false # Note: different from the one we trained before
    conditioning_key: hybrid
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: true
    load_ema: false

    scheduler_config: # 10000 warmup steps
      target: stable_diffusion.ldm.lr_scheduler.LambdaLinearScheduler
      params:
        warm_up_steps: [ 0 ]
        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
        f_start: [ 1.e-6 ]
        f_max: [ 1. ]
        f_min: [ 1. ]

    unet_config:
      target: stable_diffusion.ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 8
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: stable_diffusion.ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
          - 1
          - 2
          - 4
          - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: stable_diffusion.ldm.modules.encoders.modules.FrozenCLIPEmbedder

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 32
    num_workers: 2
    train:
      target: edit_dataset.EditDataset
      params:
        path: /home/timbrooks/instruct-pix2pix-datasets/20-20-75
        split: train
        min_resize_res: 256
        max_resize_res: 256
        crop_res: 256
        flip_prob: 0.5
    validation:
      target: edit_dataset.EditDataset
      params:
        path: /home/timbrooks/instruct-pix2pix-datasets/20-20-75
        split: val
        min_resize_res: 256
        max_resize_res: 256
        crop_res: 256

lightning:
  callbacks:
    image_logger:
      target: main.ImageLogger
      params:
        batch_frequency: 2000
        max_images: 2
        increase_log_steps: False

  trainer:
    max_epochs: 2000
    benchmark: True
    accumulate_grad_batches: 4
    check_val_every_n_epoch: 4
dataset_creation/generate_img_dataset.py ADDED
import argparse
import json
from pathlib import Path

import k_diffusion
import numpy as np
import torch
import torch.nn as nn
from einops import rearrange, repeat
from omegaconf import OmegaConf
from PIL import Image
from pytorch_lightning import seed_everything
from tqdm import tqdm

from stable_diffusion.ldm.modules.attention import CrossAttention
from stable_diffusion.ldm.util import instantiate_from_config
from metrics.clip_similarity import ClipSimilarity


################################################################################
# Modified K-diffusion Euler ancestral sampler with prompt-to-prompt.
# https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/sampling.py


def append_dims(x, target_dims):
    """Appends dimensions to the end of a tensor until it has target_dims dimensions."""
    dims_to_append = target_dims - x.ndim
    if dims_to_append < 0:
        raise ValueError(f"input has {x.ndim} dims but target_dims is {target_dims}, which is less")
    return x[(...,) + (None,) * dims_to_append]


def to_d(x, sigma, denoised):
    """Converts a denoiser output to a Karras ODE derivative."""
    return (x - denoised) / append_dims(sigma, x.ndim)


def get_ancestral_step(sigma_from, sigma_to):
    """Calculates the noise level (sigma_down) to step down to and the amount
    of noise to add (sigma_up) when doing an ancestral sampling step."""
    sigma_up = min(sigma_to, (sigma_to**2 * (sigma_from**2 - sigma_to**2) / sigma_from**2) ** 0.5)
    sigma_down = (sigma_to**2 - sigma_up**2) ** 0.5
    return sigma_down, sigma_up


def sample_euler_ancestral(model, x, sigmas, prompt2prompt_threshold=0.0, **extra_args):
    """Ancestral sampling with Euler method steps."""
    s_in = x.new_ones([x.shape[0]])
    for i in range(len(sigmas) - 1):
        prompt_to_prompt = prompt2prompt_threshold > i / (len(sigmas) - 2)
        for m in model.modules():
            if isinstance(m, CrossAttention):
                m.prompt_to_prompt = prompt_to_prompt
        denoised = model(x, sigmas[i] * s_in, **extra_args)
        sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1])
        d = to_d(x, sigmas[i], denoised)
        # Euler method
        dt = sigma_down - sigmas[i]
        x = x + d * dt
        if sigmas[i + 1] > 0:
            # Make noise the same across all samples in batch.
            x = x + torch.randn_like(x[:1]) * sigma_up
    return x


################################################################################


def load_model_from_config(config, ckpt, vae_ckpt=None, verbose=False):
    print(f"Loading model from {ckpt}")
    pl_sd = torch.load(ckpt, map_location="cpu")
    if "global_step" in pl_sd:
        print(f"Global Step: {pl_sd['global_step']}")
    sd = pl_sd["state_dict"]
    if vae_ckpt is not None:
        print(f"Loading VAE from {vae_ckpt}")
        vae_sd = torch.load(vae_ckpt, map_location="cpu")["state_dict"]
        sd = {
            k: vae_sd[k[len("first_stage_model.") :]] if k.startswith("first_stage_model.") else v
            for k, v in sd.items()
        }
    model = instantiate_from_config(config.model)
    m, u = model.load_state_dict(sd, strict=False)
    if len(m) > 0 and verbose:
        print("missing keys:")
        print(m)
    if len(u) > 0 and verbose:
        print("unexpected keys:")
        print(u)
    return model


class CFGDenoiser(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.inner_model = model

    def forward(self, x, sigma, uncond, cond, cfg_scale):
        x_in = torch.cat([x] * 2)
        sigma_in = torch.cat([sigma] * 2)
        cond_in = torch.cat([uncond, cond])
        uncond, cond = self.inner_model(x_in, sigma_in, cond=cond_in).chunk(2)
        return uncond + (cond - uncond) * cfg_scale


def to_pil(image: torch.Tensor) -> Image.Image:
    image = 255.0 * rearrange(image.cpu().numpy(), "c h w -> h w c")
    image = Image.fromarray(image.astype(np.uint8))
    return image


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "out_dir",
        type=str,
        help="Path to output dataset directory.",
    )
    parser.add_argument(
        "prompts_file",
        type=str,
        help="Path to prompts .jsonl file.",
    )
    parser.add_argument(
        "--steps",
        type=int,
        default=100,
        help="Number of sampling steps.",
    )
    parser.add_argument(
        "--n-samples",
        type=int,
        default=100,
        help="Number of samples to generate per prompt (before CLIP filtering).",
    )
    parser.add_argument(
        "--max-out-samples",
        type=int,
        default=4,
        help="Max number of output samples to save per prompt (after CLIP filtering).",
    )
    parser.add_argument(
        "--n-partitions",
        type=int,
        default=1,
        help="Number of total partitions.",
    )
    parser.add_argument(
        "--partition",
        type=int,
        default=0,
        help="Partition index.",
    )
    parser.add_argument(
        "--min-p2p",
        type=float,
        default=0.1,
        help="Min prompt2prompt threshold (portion of denoising for which to fix self attention maps).",
    )
    parser.add_argument(
        "--max-p2p",
        type=float,
        default=0.9,
        help="Max prompt2prompt threshold (portion of denoising for which to fix self attention maps).",
    )
    parser.add_argument(
        "--min-cfg",
        type=float,
        default=7.5,
        help="Min classifier free guidance scale.",
    )
    parser.add_argument(
        "--max-cfg",
        type=float,
        default=15,
        help="Max classifier free guidance scale.",
    )
    parser.add_argument(
        "--clip-threshold",
        type=float,
        default=0.2,
        help="CLIP threshold for text-image similarity of each image.",
    )
    parser.add_argument(
        "--clip-dir-threshold",
        type=float,
        default=0.2,
        help="Directional CLIP threshold for similarity of change between pairs of text and pairs of images.",
    )
    parser.add_argument(
        "--clip-img-threshold",
        type=float,
        default=0.7,
        help="CLIP threshold for image-image similarity.",
    )
    opt = parser.parse_args()

    global_seed = torch.randint(1 << 32, ()).item()
    print(f"Global seed: {global_seed}")
    seed_everything(global_seed)

    model = load_model_from_config(
        OmegaConf.load("configs/stable-diffusion/v1-inference.yaml"),
        ckpt="models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt",
        vae_ckpt="models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt",
    )
    model.cuda().eval()
    model_wrap = k_diffusion.external.CompVisDenoiser(model)

    clip_similarity = ClipSimilarity().cuda()

    out_dir = Path(opt.out_dir)
    out_dir.mkdir(exist_ok=True, parents=True)

    with open(opt.prompts_file) as fp:
        prompts = [json.loads(line) for line in fp]

    print(f"Partition index {opt.partition} ({opt.partition + 1} / {opt.n_partitions})")
    prompts = np.array_split(list(enumerate(prompts)), opt.n_partitions)[opt.partition]

    with torch.no_grad(), torch.autocast("cuda"), model.ema_scope():
        uncond = model.get_learned_conditioning(2 * [""])
        sigmas = model_wrap.get_sigmas(opt.steps)

        for i, prompt in tqdm(prompts, desc="Prompts"):
            prompt_dir = out_dir.joinpath(f"{i:07d}")
            prompt_dir.mkdir(exist_ok=True)

            with open(prompt_dir.joinpath("prompt.json"), "w") as fp:
                json.dump(prompt, fp)

            cond = model.get_learned_conditioning([prompt["input"], prompt["output"]])
            results = {}

            with tqdm(total=opt.n_samples, desc="Samples") as progress_bar:

                while len(results) < opt.n_samples:
                    seed = torch.randint(1 << 32, ()).item()
                    if seed in results:
                        continue
                    torch.manual_seed(seed)

                    x = torch.randn(1, 4, 512 // 8, 512 // 8, device="cuda") * sigmas[0]
                    x = repeat(x, "1 ... -> n ...", n=2)

                    model_wrap_cfg = CFGDenoiser(model_wrap)
                    p2p_threshold = opt.min_p2p + torch.rand(()).item() * (opt.max_p2p - opt.min_p2p)
                    cfg_scale = opt.min_cfg + torch.rand(()).item() * (opt.max_cfg - opt.min_cfg)
                    extra_args = {"cond": cond, "uncond": uncond, "cfg_scale": cfg_scale}
                    samples_ddim = sample_euler_ancestral(model_wrap_cfg, x, sigmas, p2p_threshold, **extra_args)
                    x_samples_ddim = model.decode_first_stage(samples_ddim)
                    x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)

                    x0 = x_samples_ddim[0]
                    x1 = x_samples_ddim[1]

                    clip_sim_0, clip_sim_1, clip_sim_dir, clip_sim_image = clip_similarity(
                        x0[None], x1[None], [prompt["input"]], [prompt["output"]]
                    )

                    results[seed] = dict(
                        image_0=to_pil(x0),
                        image_1=to_pil(x1),
                        p2p_threshold=p2p_threshold,
                        cfg_scale=cfg_scale,
                        clip_sim_0=clip_sim_0[0].item(),
                        clip_sim_1=clip_sim_1[0].item(),
                        clip_sim_dir=clip_sim_dir[0].item(),
                        clip_sim_image=clip_sim_image[0].item(),
                    )

                    progress_bar.update()

            # CLIP filter to get best samples for each prompt.
            metadata = [
                (result["clip_sim_dir"], seed)
                for seed, result in results.items()
                if result["clip_sim_image"] >= opt.clip_img_threshold
                and result["clip_sim_dir"] >= opt.clip_dir_threshold
                and result["clip_sim_0"] >= opt.clip_threshold
                and result["clip_sim_1"] >= opt.clip_threshold
            ]
            metadata.sort(reverse=True)
            for _, seed in metadata[: opt.max_out_samples]:
                result = results[seed]
                image_0 = result.pop("image_0")
                image_1 = result.pop("image_1")
                image_0.save(prompt_dir.joinpath(f"{seed}_0.jpg"), quality=100)
                image_1.save(prompt_dir.joinpath(f"{seed}_1.jpg"), quality=100)
                with open(prompt_dir.joinpath("metadata.jsonl"), "a") as fp:
                    fp.write(f"{json.dumps(dict(seed=seed, **result))}\n")

    print("Done.")


if __name__ == "__main__":
    main()
dataset_creation/generate_txt_dataset.py ADDED
from __future__ import annotations

import json
import time
from argparse import ArgumentParser
from pathlib import Path
from typing import Optional

import datasets
import numpy as np
import openai
from tqdm.auto import tqdm


DELIMITER_0 = "\n##\n"
DELIMITER_1 = "\n%%\n"
STOP = "\nEND"


def generate(
    openai_model: str,
    caption: str,
    num_retries: int = 3,
    max_tokens: int = 256,
    temperature: float = 0.7,
    top_p: float = 1.0,
    frequency_penalty: float = 0.1,
    presence_penalty: float = 0.0,
    sleep_on_error: float = 1.0,
) -> Optional[tuple[str, str]]:
    for _ in range(1 + num_retries):
        try:
            response = openai.Completion.create(
                model=openai_model,
                prompt=caption + DELIMITER_0,
                temperature=temperature,
                max_tokens=max_tokens,
                top_p=top_p,
                frequency_penalty=frequency_penalty,
                presence_penalty=presence_penalty,
                stop=[STOP],
            )
        except Exception as e:
            print(e)
            time.sleep(sleep_on_error)
            continue
        output = response["choices"][0]["text"].split(DELIMITER_1)
        if len(output) == 2:
            instruction, edited_caption = output
            results = openai.Moderation.create([instruction, edited_caption])["results"]
            if results[0]["flagged"] or results[1]["flagged"]:
                continue
            if caption.strip().strip(".!?").lower() != edited_caption.strip().strip(".!?").lower():
                return instruction, edited_caption


def main(openai_model: str, num_samples: int, num_partitions: int, partition: int, seed: int):
    dataset = datasets.load_dataset("ChristophSchuhmann/improved_aesthetics_6.5plus", split="train")
    # Other datasets we considered that may be worth trying:
    # dataset = datasets.load_dataset("ChristophSchuhmann/MS_COCO_2017_URL_TEXT", split="train")
    # dataset = datasets.load_dataset("laion/laion-coco", split="train")

    np.random.seed(seed)
    permutation = np.array_split(np.random.permutation(len(dataset)), num_partitions)[partition]
    dataset = dataset[permutation]
    captions = dataset["TEXT"]
    urls = dataset["URL"]
    output_path = f"prompts/dataset=laion-aesthetics-6.5_model={openai_model}_samples={num_samples}_partition={partition}.jsonl"  # fmt: skip
    print(f"Prompt file path: {output_path}")

    count = 0
    caption_set = set()
    url_set = set()

    if Path(output_path).exists():
        with open(output_path, "r") as f:
            for line in tqdm(f, desc="Resuming from existing prompts"):
                prompt = json.loads(line)
                if prompt["caption"] not in caption_set and prompt["url"] not in url_set:
                    caption_set.add(prompt["caption"])
                    url_set.add(prompt["url"])
                    count += 1

    with open(output_path, "a") as fp:
        with tqdm(total=num_samples - count, desc="Generating instructions and edited captions") as progress_bar:
            for caption, url in zip(captions, urls):
                if caption in caption_set or url in url_set:
                    continue
                if openai.Moderation.create(caption)["results"][0]["flagged"]:
                    continue
                edit_output = generate(openai_model, caption)
                if edit_output is not None:
                    edit, output = edit_output
                    fp.write(f"{json.dumps(dict(caption=caption, edit=edit, output=output, url=url))}\n")
                    count += 1
                    progress_bar.update()
                caption_set.add(caption)
                url_set.add(url)
                if count == num_samples:
                    break
103
+ if __name__ == "__main__":
104
+ parser = ArgumentParser()
105
+ parser.add_argument("openai-api-key", type=str)
106
+ parser.add_argument("openai-model", type=str)
107
+ parser.add_argument("--num-samples", default=10000, type=int)
108
+ parser.add_argument("--num-partitions", default=1, type=int)
109
+ parser.add_argument("--partition", default=0, type=int)
110
+ parser.add_argument("--seed", default=0, type=int)
111
+ args = parser.parse_args()
112
+ openai.api_key = args.openai_api_key
113
+ main(args.openai_model, args.num_samples, args.num_partitions, args.partition, args.seed)
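A minimal sketch (not part of the repo) of the partitioning scheme `main()` uses above: a seeded permutation of dataset indices is split into `num_partitions` disjoint chunks, so parallel jobs, one per `--partition`, never generate prompts for the same example twice. The sizes here are illustrative.

```python
import numpy as np

# A seeded permutation split into disjoint chunks, as in main().
np.random.seed(0)
num_examples, num_partitions = 10, 3
parts = np.array_split(np.random.permutation(num_examples), num_partitions)

# Every index lands in exactly one partition, so the chunks jointly cover
# the dataset without overlap.
all_indices = np.concatenate(parts)
assert sorted(all_indices.tolist()) == list(range(num_examples))
assert len(parts) == num_partitions
```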
dataset_creation/prepare_dataset.py ADDED
@@ -0,0 +1,29 @@
+ import json
+ from argparse import ArgumentParser
+ from pathlib import Path
+
+ from tqdm.auto import tqdm
+
+
+ def main():
+     parser = ArgumentParser()
+     parser.add_argument("dataset_dir")
+     args = parser.parse_args()
+     dataset_dir = Path(args.dataset_dir)
+
+     seeds = []
+     with tqdm(desc="Listing dataset image seeds") as progress_bar:
+         for prompt_dir in dataset_dir.iterdir():
+             if prompt_dir.is_dir():
+                 prompt_seeds = [image_path.name.split("_")[0] for image_path in sorted(prompt_dir.glob("*_0.jpg"))]
+                 if len(prompt_seeds) > 0:
+                     seeds.append((prompt_dir.name, prompt_seeds))
+             progress_bar.update()
+     seeds.sort()
+
+     with open(dataset_dir.joinpath("seeds.json"), "w") as f:
+         json.dump(seeds, f)
+
+
+ if __name__ == "__main__":
+     main()
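A quick sketch of what `prepare_dataset.py` writes to `seeds.json`: one `(prompt_dir_name, [seed, ...])` entry per prompt directory, where each seed names an image pair `{seed}_0.jpg` / `{seed}_1.jpg`. The directory name and seed values below are hypothetical.

```python
import json

# Hypothetical listing of one prompt directory.
filenames = ["31337_0.jpg", "31337_1.jpg", "8_0.jpg", "8_1.jpg"]

# Same extraction rule as prepare_dataset.py: keep the "*_0.jpg" files and
# take everything before the underscore as the seed.
prompt_seeds = sorted(name.split("_")[0] for name in filenames if name.endswith("_0.jpg"))
entry = ("prompt-000000", prompt_seeds)

# JSON serializes the tuple as a list; seeds stay strings.
serialized = json.dumps([entry])
assert json.loads(serialized) == [["prompt-000000", ["31337", "8"]]]
```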
dataset_creation/prepare_for_gpt.py ADDED
@@ -0,0 +1,25 @@
+ import json
+ from argparse import ArgumentParser
+
+ from .generate_txt_dataset import DELIMITER_0, DELIMITER_1, STOP
+
+
+ def main(input_path: str, output_path: str):
+     with open(input_path) as f:
+         prompts = [json.loads(l) for l in f]
+
+     with open(output_path, "w") as f:
+         for prompt in prompts:
+             prompt_for_gpt = {
+                 "prompt": f"{prompt['input']}{DELIMITER_0}",
+                 "completion": f"{prompt['edit']}{DELIMITER_1}{prompt['output']}{STOP}",
+             }
+             f.write(f"{json.dumps(prompt_for_gpt)}\n")
+
+
+ if __name__ == "__main__":
+     parser = ArgumentParser()
+     parser.add_argument("input_path", type=str)
+     parser.add_argument("output_path", type=str)
+     args = parser.parse_args()
+     main(args.input_path, args.output_path)
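A round-trip sketch of the serialization shared by the two scripts above: `prepare_for_gpt.py` joins the edit instruction and the edited caption with `DELIMITER_1` and terminates the completion with `STOP`, while `generate()` in `generate_txt_dataset.py` splits the model's completion on `DELIMITER_1`. The captions below are made up for illustration.

```python
# Delimiters copied from generate_txt_dataset.py.
DELIMITER_1 = "\n%%\n"
STOP = "\nEND"

# Hypothetical training triplet.
edit = "have her ride a dragon"
output = "photograph of a girl riding a dragon"

# What prepare_for_gpt.py writes as the fine-tuning completion.
completion = f"{edit}{DELIMITER_1}{output}{STOP}"

# The API stops generation at STOP, so only the text before it is returned;
# generate() then splits that text on DELIMITER_1.
returned = completion[: completion.index(STOP)]
instruction, edited_caption = returned.split(DELIMITER_1)
assert (instruction, edited_caption) == (edit, output)
```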
edit_app.py ADDED
@@ -0,0 +1,269 @@
+ from __future__ import annotations
+
+ import math
+ import random
+ import sys
+ from argparse import ArgumentParser
+
+ import einops
+ import gradio as gr
+ import k_diffusion as K
+ import numpy as np
+ import torch
+ import torch.nn as nn
+ from einops import rearrange
+ from omegaconf import OmegaConf
+ from PIL import Image, ImageOps
+ from torch import autocast
+
+ sys.path.append("./stable_diffusion")
+
+ from stable_diffusion.ldm.util import instantiate_from_config
+
+
+ help_text = """
+ If you're not getting what you want, there may be a few reasons:
+ 1. Is the image not changing enough? Your Image CFG weight may be too high. This value dictates how similar the output should be to the input. It's possible your edit requires larger changes from the original image, and your Image CFG weight isn't allowing that. Alternatively, your Text CFG weight may be too low. This value dictates how much to listen to the text instruction. The default Image CFG of 1.5 and Text CFG of 7.5 are a good starting point, but aren't necessarily optimal for each edit. Try:
+ * Decreasing the Image CFG weight, or
+ * Increasing the Text CFG weight
+ 2. Conversely, is the image changing too much, such that the details in the original image aren't preserved? Try:
+ * Increasing the Image CFG weight, or
+ * Decreasing the Text CFG weight
+ 3. Try generating results with different random seeds by setting "Randomize Seed" and running generation multiple times. You can also try setting "Randomize CFG" to sample new Text CFG and Image CFG values each time.
+ 4. Rephrasing the instruction sometimes improves results (e.g., "turn him into a dog" vs. "make him a dog" vs. "as a dog").
+ 5. Increasing the number of steps sometimes improves results.
+ 6. Do faces look weird? The Stable Diffusion autoencoder has a hard time with faces that are small in the image. Try:
+ * Cropping the image so the face takes up a larger portion of the frame.
+ """
+
+
+ example_instructions = [
+     "Make it a picasso painting",
+     "as if it were by modigliani",
+     "convert to a bronze statue",
+     "Turn it into an anime.",
+     "have it look like a graphic novel",
+     "make him gain weight",
+     "what would he look like bald?",
+     "Have him smile",
+     "Put him in a cocktail party.",
+     "move him at the beach.",
+     "add dramatic lighting",
+     "Convert to black and white",
+     "What if it were snowing?",
+     "Give him a leather jacket",
+     "Turn him into a cyborg!",
+     "make him wear a beanie",
+ ]
+
+
+ class CFGDenoiser(nn.Module):
+     def __init__(self, model):
+         super().__init__()
+         self.inner_model = model
+
+     def forward(self, z, sigma, cond, uncond, text_cfg_scale, image_cfg_scale):
+         cfg_z = einops.repeat(z, "1 ... -> n ...", n=3)
+         cfg_sigma = einops.repeat(sigma, "1 ... -> n ...", n=3)
+         cfg_cond = {
+             "c_crossattn": [torch.cat([cond["c_crossattn"][0], uncond["c_crossattn"][0], uncond["c_crossattn"][0]])],
+             "c_concat": [torch.cat([cond["c_concat"][0], cond["c_concat"][0], uncond["c_concat"][0]])],
+         }
+         out_cond, out_img_cond, out_uncond = self.inner_model(cfg_z, cfg_sigma, cond=cfg_cond).chunk(3)
+         return out_uncond + text_cfg_scale * (out_cond - out_img_cond) + image_cfg_scale * (out_img_cond - out_uncond)
+
+
+ def load_model_from_config(config, ckpt, vae_ckpt=None, verbose=False, cached=False):
+     print(f"Cache: {cached}")
+     print(f"Loading model from {ckpt}")
+     pl_sd = torch.load(ckpt, map_location="cpu")
+     if "global_step" in pl_sd:
+         print(f"Global Step: {pl_sd['global_step']}")
+     sd = pl_sd["state_dict"]
+     if vae_ckpt is not None:
+         print(f"Loading VAE from {vae_ckpt}")
+         vae_sd = torch.load(vae_ckpt, map_location="cpu")["state_dict"]
+         sd = {
+             k: vae_sd[k[len("first_stage_model.") :]] if k.startswith("first_stage_model.") else v
+             for k, v in sd.items()
+         }
+     model = instantiate_from_config(config.model, cached=cached)
+     m, u = model.load_state_dict(sd, strict=False)
+     if len(m) > 0 and verbose:
+         print("missing keys:")
+         print(m)
+     if len(u) > 0 and verbose:
+         print("unexpected keys:")
+         print(u)
+     return model
+
+
+ def main():
+     parser = ArgumentParser()
+     parser.add_argument("--resolution", default=512, type=int)
+     parser.add_argument("--config", default="configs/instruct-pix2pix/generate.yaml", type=str)
+     parser.add_argument("--ckpt", default="checkpoints/instruct-pix2pix-00-20000.ckpt", type=str)
+     parser.add_argument("--vae-ckpt", default=None, type=str)
+     args = parser.parse_args()
+
+     config = OmegaConf.load(args.config)
+     model = load_model_from_config(config, args.ckpt, args.vae_ckpt)
+     model.eval().cuda()
+     model_wrap = K.external.CompVisDenoiser(model)
+     model_wrap_cfg = CFGDenoiser(model_wrap)
+     null_token = model.get_learned_conditioning([""])
+     example_image = Image.open("imgs/example.jpg").convert("RGB")
+
+     def load_example(
+         steps: int,
+         randomize_seed: bool,
+         seed: int,
+         randomize_cfg: bool,
+         text_cfg_scale: float,
+         image_cfg_scale: float,
+     ):
+         example_instruction = random.choice(example_instructions)
+         return [example_image, example_instruction] + generate(
+             example_image,
+             example_instruction,
+             steps,
+             randomize_seed,
+             seed,
+             randomize_cfg,
+             text_cfg_scale,
+             image_cfg_scale,
+         )
+
+     def generate(
+         input_image: Image.Image,
+         instruction: str,
+         steps: int,
+         randomize_seed: bool,
+         seed: int,
+         randomize_cfg: bool,
+         text_cfg_scale: float,
+         image_cfg_scale: float,
+     ):
+         seed = random.randint(0, 100000) if randomize_seed else seed
+         text_cfg_scale = round(random.uniform(6.0, 9.0), ndigits=2) if randomize_cfg else text_cfg_scale
+         image_cfg_scale = round(random.uniform(1.2, 1.8), ndigits=2) if randomize_cfg else image_cfg_scale
+
+         width, height = input_image.size
+         factor = args.resolution / max(width, height)
+         factor = math.ceil(min(width, height) * factor / 64) * 64 / min(width, height)
+         width = int((width * factor) // 64) * 64
+         height = int((height * factor) // 64) * 64
+         input_image = ImageOps.fit(input_image, (width, height), method=Image.Resampling.LANCZOS)
+
+         if instruction == "":
+             return [seed, text_cfg_scale, image_cfg_scale, input_image]
+
+         with torch.no_grad(), autocast("cuda"), model.ema_scope():
+             cond = {}
+             cond["c_crossattn"] = [model.get_learned_conditioning([instruction])]
+             input_image = 2 * torch.tensor(np.array(input_image)).float() / 255 - 1
+             input_image = rearrange(input_image, "h w c -> 1 c h w").to(model.device)
+             cond["c_concat"] = [model.encode_first_stage(input_image).mode()]
+
+             uncond = {}
+             uncond["c_crossattn"] = [null_token]
+             uncond["c_concat"] = [torch.zeros_like(cond["c_concat"][0])]
+
+             sigmas = model_wrap.get_sigmas(steps)
+
+             extra_args = {
+                 "cond": cond,
+                 "uncond": uncond,
+                 "text_cfg_scale": text_cfg_scale,
+                 "image_cfg_scale": image_cfg_scale,
+             }
+             torch.manual_seed(seed)
+             z = torch.randn_like(cond["c_concat"][0]) * sigmas[0]
+             z = K.sampling.sample_euler_ancestral(model_wrap_cfg, z, sigmas, extra_args=extra_args)
+             x = model.decode_first_stage(z)
+             x = torch.clamp((x + 1.0) / 2.0, min=0.0, max=1.0)
+             x = 255.0 * rearrange(x, "1 c h w -> h w c")
+             edited_image = Image.fromarray(x.type(torch.uint8).cpu().numpy())
+
+         return [seed, text_cfg_scale, image_cfg_scale, edited_image]
+
+     def reset():
+         return [50, "Randomize Seed", random.randint(0, 100000), "Fix CFG", 7.5, 1.5, None]
+
+     with gr.Blocks(css="footer {visibility: hidden}") as demo:
+         with gr.Row():
+             with gr.Column(scale=1, min_width=100):
+                 generate_button = gr.Button("Generate")
+             with gr.Column(scale=1, min_width=100):
+                 load_button = gr.Button("Load Example")
+             with gr.Column(scale=1, min_width=100):
+                 reset_button = gr.Button("Reset")
+             with gr.Column(scale=3):
+                 instruction = gr.Textbox(lines=1, label="Edit Instruction", interactive=True)
+
+         with gr.Row():
+             input_image = gr.Image(label="Input Image", type="pil", interactive=True)
+             edited_image = gr.Image(label="Edited Image", type="pil", interactive=False)
+             input_image.style(height=512, width=512)
+             edited_image.style(height=512, width=512)
+
+         with gr.Row():
+             steps = gr.Number(value=50, precision=0, label="Steps", interactive=True)
+             randomize_seed = gr.Radio(
+                 ["Fix Seed", "Randomize Seed"],
+                 value="Randomize Seed",
+                 type="index",
+                 show_label=False,
+                 interactive=True,
+             )
+             seed = gr.Number(value=random.randint(0, 100000), precision=0, label="Seed", interactive=True)
+             randomize_cfg = gr.Radio(
+                 ["Fix CFG", "Randomize CFG"],
+                 value="Fix CFG",
+                 type="index",
+                 show_label=False,
+                 interactive=True,
+             )
+             text_cfg_scale = gr.Number(value=7.5, label="Text CFG", interactive=True)
+             image_cfg_scale = gr.Number(value=1.5, label="Image CFG", interactive=True)
+
+         gr.Markdown(help_text)
+
+         load_button.click(
+             fn=load_example,
+             inputs=[
+                 steps,
+                 randomize_seed,
+                 seed,
+                 randomize_cfg,
+                 text_cfg_scale,
+                 image_cfg_scale,
+             ],
+             outputs=[input_image, instruction, seed, text_cfg_scale, image_cfg_scale, edited_image],
+         )
+         generate_button.click(
+             fn=generate,
+             inputs=[
+                 input_image,
+                 instruction,
+                 steps,
+                 randomize_seed,
+                 seed,
+                 randomize_cfg,
+                 text_cfg_scale,
+                 image_cfg_scale,
+             ],
+             outputs=[seed, text_cfg_scale, image_cfg_scale, edited_image],
+         )
+         reset_button.click(
+             fn=reset,
+             inputs=[],
+             outputs=[steps, randomize_seed, seed, randomize_cfg, text_cfg_scale, image_cfg_scale, edited_image],
+         )
+
+     demo.queue(concurrency_count=1)
+     demo.launch(share=True)
+
+
+ if __name__ == "__main__":
+     main()
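A numeric sketch (not part of the repo) of the two-scale classifier-free guidance combination in `CFGDenoiser.forward`: the final prediction extrapolates from the fully unconditional output toward the image-conditioned output (scaled by Image CFG) and from the image-conditioned output toward the fully conditioned output (scaled by Text CFG). The arrays stand in for denoiser outputs.

```python
import numpy as np

# Stand-ins for the three chunked denoiser outputs.
rng = np.random.default_rng(0)
out_cond, out_img_cond, out_uncond = rng.normal(size=(3, 4))

def combine(text_cfg_scale, image_cfg_scale):
    # Same expression as the return statement in CFGDenoiser.forward.
    return (
        out_uncond
        + text_cfg_scale * (out_cond - out_img_cond)
        + image_cfg_scale * (out_img_cond - out_uncond)
    )

# With both scales at 1.0 the formula collapses to the conditional output.
assert np.allclose(combine(1.0, 1.0), out_cond)
# With Text CFG at 0.0 and Image CFG at 1.0, the instruction is ignored and
# only the image conditioning remains.
assert np.allclose(combine(0.0, 1.0), out_img_cond)
```

Raising either scale above 1.0 pushes the sample further along the corresponding guidance direction, which is why the help text recommends trading the two weights off against each other.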
edit_cli.py ADDED
@@ -0,0 +1,128 @@
+ from __future__ import annotations
+
+ import math
+ import random
+ import sys
+ from argparse import ArgumentParser
+
+ import einops
+ import k_diffusion as K
+ import numpy as np
+ import torch
+ import torch.nn as nn
+ from einops import rearrange
+ from omegaconf import OmegaConf
+ from PIL import Image, ImageOps
+ from torch import autocast
+
+ sys.path.append("./stable_diffusion")
+
+ from stable_diffusion.ldm.util import instantiate_from_config
+
+
+ class CFGDenoiser(nn.Module):
+     def __init__(self, model):
+         super().__init__()
+         self.inner_model = model
+
+     def forward(self, z, sigma, cond, uncond, text_cfg_scale, image_cfg_scale):
+         cfg_z = einops.repeat(z, "1 ... -> n ...", n=3)
+         cfg_sigma = einops.repeat(sigma, "1 ... -> n ...", n=3)
+         cfg_cond = {
+             "c_crossattn": [torch.cat([cond["c_crossattn"][0], uncond["c_crossattn"][0], uncond["c_crossattn"][0]])],
+             "c_concat": [torch.cat([cond["c_concat"][0], cond["c_concat"][0], uncond["c_concat"][0]])],
+         }
+         out_cond, out_img_cond, out_uncond = self.inner_model(cfg_z, cfg_sigma, cond=cfg_cond).chunk(3)
+         return out_uncond + text_cfg_scale * (out_cond - out_img_cond) + image_cfg_scale * (out_img_cond - out_uncond)
+
+
+ def load_model_from_config(config, ckpt, vae_ckpt=None, verbose=False):
+     print(f"Loading model from {ckpt}")
+     pl_sd = torch.load(ckpt, map_location="cpu")
+     if "global_step" in pl_sd:
+         print(f"Global Step: {pl_sd['global_step']}")
+     sd = pl_sd["state_dict"]
+     if vae_ckpt is not None:
+         print(f"Loading VAE from {vae_ckpt}")
+         vae_sd = torch.load(vae_ckpt, map_location="cpu")["state_dict"]
+         sd = {
+             k: vae_sd[k[len("first_stage_model.") :]] if k.startswith("first_stage_model.") else v
+             for k, v in sd.items()
+         }
+     model = instantiate_from_config(config.model)
+     m, u = model.load_state_dict(sd, strict=False)
+     if len(m) > 0 and verbose:
+         print("missing keys:")
+         print(m)
+     if len(u) > 0 and verbose:
+         print("unexpected keys:")
+         print(u)
+     return model
+
+
+ def main():
+     parser = ArgumentParser()
+     parser.add_argument("--resolution", default=512, type=int)
+     parser.add_argument("--steps", default=100, type=int)
+     parser.add_argument("--config", default="configs/generate.yaml", type=str)
+     parser.add_argument("--ckpt", default="checkpoints/instruct-pix2pix-00-20000.ckpt", type=str)
+     parser.add_argument("--vae-ckpt", default=None, type=str)
+     parser.add_argument("--input", required=True, type=str)
+     parser.add_argument("--output", required=True, type=str)
+     parser.add_argument("--edit", required=True, type=str)
+     parser.add_argument("--cfg-text", default=7.5, type=float)
+     parser.add_argument("--cfg-image", default=1.2, type=float)
+     parser.add_argument("--seed", type=int)
+     args = parser.parse_args()
+
+     config = OmegaConf.load(args.config)
+     model = load_model_from_config(config, args.ckpt, args.vae_ckpt)
+     model.eval().cuda()
+     model_wrap = K.external.CompVisDenoiser(model)
+     model_wrap_cfg = CFGDenoiser(model_wrap)
+     null_token = model.get_learned_conditioning([""])
+
+     seed = random.randint(0, 100000) if args.seed is None else args.seed
+     input_image = Image.open(args.input).convert("RGB")
+     width, height = input_image.size
+     factor = args.resolution / max(width, height)
+     factor = math.ceil(min(width, height) * factor / 64) * 64 / min(width, height)
+     width = int((width * factor) // 64) * 64
+     height = int((height * factor) // 64) * 64
+     input_image = ImageOps.fit(input_image, (width, height), method=Image.Resampling.LANCZOS)
+
+     if args.edit == "":
+         input_image.save(args.output)
+         return
+
+     with torch.no_grad(), autocast("cuda"), model.ema_scope():
+         cond = {}
+         cond["c_crossattn"] = [model.get_learned_conditioning([args.edit])]
+         input_image = 2 * torch.tensor(np.array(input_image)).float() / 255 - 1
+         input_image = rearrange(input_image, "h w c -> 1 c h w").to(model.device)
+         cond["c_concat"] = [model.encode_first_stage(input_image).mode()]
+
+         uncond = {}
+         uncond["c_crossattn"] = [null_token]
+         uncond["c_concat"] = [torch.zeros_like(cond["c_concat"][0])]
+
+         sigmas = model_wrap.get_sigmas(args.steps)
+
+         extra_args = {
+             "cond": cond,
+             "uncond": uncond,
+             "text_cfg_scale": args.cfg_text,
+             "image_cfg_scale": args.cfg_image,
+         }
+         torch.manual_seed(seed)
+         z = torch.randn_like(cond["c_concat"][0]) * sigmas[0]
+         z = K.sampling.sample_euler_ancestral(model_wrap_cfg, z, sigmas, extra_args=extra_args)
+         x = model.decode_first_stage(z)
+         x = torch.clamp((x + 1.0) / 2.0, min=0.0, max=1.0)
+         x = 255.0 * rearrange(x, "1 c h w -> h w c")
+         edited_image = Image.fromarray(x.type(torch.uint8).cpu().numpy())
+     edited_image.save(args.output)
+
+
+ if __name__ == "__main__":
+     main()
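A sketch of the resizing rule shared by `edit_cli.py` and `edit_app.py`: scale the longer side toward `--resolution`, then round so both sides are multiples of 64, which the model's latent-space downsampling requires. The helper name `target_size` is made up for this illustration; the arithmetic is copied from the scripts.

```python
import math

def target_size(width, height, resolution=512):
    # Scale toward `resolution` on the longer side, then snap both sides
    # down to multiples of 64 (as in edit_cli.py / edit_app.py).
    factor = resolution / max(width, height)
    factor = math.ceil(min(width, height) * factor / 64) * 64 / min(width, height)
    return int((width * factor) // 64) * 64, int((height * factor) // 64) * 64

# A 2:1 landscape image maps exactly to 512x256.
assert target_size(1024, 512) == (512, 256)

# Arbitrary sizes still come out as multiples of 64.
w, h = target_size(1280, 720)
assert w % 64 == 0 and h % 64 == 0
```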
edit_dataset.py ADDED
@@ -0,0 +1,72 @@
+ from __future__ import annotations
+
+ import json
+ import math
+ from pathlib import Path
+ from typing import Any
+
+ import numpy as np
+ import torch
+ import torchvision
+ from einops import rearrange
+ from PIL import Image
+ from torch.utils.data import Dataset
+
+
+ class EditDataset(Dataset):
+     def __init__(
+         self,
+         path: str,
+         split: str = "train",
+         splits: tuple[float, float, float] = (0.9, 0.05, 0.05),
+         min_resize_res: int = 256,
+         max_resize_res: int = 256,
+         crop_res: int = 256,
+         flip_prob: float = 0.0,
+     ):
+         assert split in ("train", "val", "test")
+         assert sum(splits) == 1
+         self.path = path
+         self.min_resize_res = min_resize_res
+         self.max_resize_res = max_resize_res
+         self.crop_res = crop_res
+         self.flip_prob = flip_prob
+
+         with open(Path(self.path, "seeds.json")) as f:
+             self.seeds = json.load(f)
+
+         split_0, split_1 = {
+             "train": (0.0, splits[0]),
+             "val": (splits[0], splits[0] + splits[1]),
+             "test": (splits[0] + splits[1], 1.0),
+         }[split]
+
+         idx_0 = math.floor(split_0 * len(self.seeds))
+         idx_1 = math.floor(split_1 * len(self.seeds))
+         self.seeds = self.seeds[idx_0:idx_1]
+
+     def __len__(self) -> int:
+         return len(self.seeds)
+
+     def __getitem__(self, i: int) -> dict[str, Any]:
+         name, seeds = self.seeds[i]
+         prompt_dir = Path(self.path, name)
+         seed = seeds[torch.randint(0, len(seeds), ()).item()]
+         with open(prompt_dir.joinpath("prompt.json")) as fp:
+             prompt = json.load(fp)["edit"]
+
+         image_0 = Image.open(prompt_dir.joinpath(f"{seed}_0.jpg"))
+         image_1 = Image.open(prompt_dir.joinpath(f"{seed}_1.jpg"))
+
+         resize_res = torch.randint(self.min_resize_res, self.max_resize_res + 1, ()).item()
+         image_0 = image_0.resize((resize_res, resize_res), Image.Resampling.LANCZOS)
+         image_1 = image_1.resize((resize_res, resize_res), Image.Resampling.LANCZOS)
+
+         image_0 = rearrange(2 * torch.tensor(np.array(image_0)).float() / 255 - 1, "h w c -> c h w")
+         image_1 = rearrange(2 * torch.tensor(np.array(image_1)).float() / 255 - 1, "h w c -> c h w")
+
+         crop = torchvision.transforms.RandomCrop(self.crop_res)
+         flip = torchvision.transforms.RandomHorizontalFlip(float(self.flip_prob))
+         image_0, image_1 = flip(crop(torch.cat((image_0, image_1)))).chunk(2)
+
+         return dict(edited=image_1, edit=dict(c_concat=image_0, c_crossattn=prompt))
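A NumPy sketch (independent of the repo's torchvision code) of the paired augmentation trick in `EditDataset.__getitem__`: the input and edited images are concatenated along the channel axis so a single random crop (and flip) is applied identically to both, then split back. Here a fixed crop offset stands in for the random one.

```python
import numpy as np

# Stand-ins for a (channels, height, width) image pair; the "edited" image
# differs from the input by a constant so alignment is easy to verify.
rng = np.random.default_rng(0)
image_0 = rng.normal(size=(3, 8, 8))
image_1 = image_0 + 1.0

# Stack along channels, crop once, split back -- the crop window is shared.
stacked = np.concatenate((image_0, image_1), axis=0)  # (6, 8, 8)
top, left, crop = 2, 3, 4
cropped = stacked[:, top : top + crop, left : left + crop]
crop_0, crop_1 = np.split(cropped, 2, axis=0)

# The pair stays pixel-aligned: the edited crop differs by exactly the edit.
assert crop_0.shape == (3, 4, 4)
assert np.allclose(crop_1 - crop_0, 1.0)
```

Cropping the two images independently would break the pixel correspondence the editing objective relies on.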
environment.yaml ADDED
@@ -0,0 +1,37 @@
+ # File modified by authors of InstructPix2Pix from original (https://github.com/CompVis/stable-diffusion).
+ # See more details in LICENSE.
+
+ name: ip2p
+ channels:
+   - pytorch
+   - defaults
+ dependencies:
+   - python=3.8.5
+   - pip=20.3
+   - cudatoolkit=11.3
+   - pytorch=1.11.0
+   - torchvision=0.12.0
+   - numpy=1.19.2
+   - pip:
+     - albumentations==0.4.3
+     - diffusers
+     - opencv-python==4.1.2.30
+     - pudb==2019.2
+     - invisible-watermark
+     - imageio==2.9.0
+     - imageio-ffmpeg==0.4.2
+     - pytorch-lightning==1.4.2
+     - omegaconf==2.1.1
+     - test-tube>=0.7.5
+     - streamlit>=0.73.1
+     - einops==0.3.0
+     - torch-fidelity==0.3.0
+     - transformers==4.19.2
+     - torchmetrics==0.6.0
+     - kornia==0.6
+     - -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
+     - -e git+https://github.com/openai/CLIP.git@main#egg=clip
+     - openai
+     - gradio
+     - seaborn
+     - git+https://github.com/crowsonkb/k-diffusion.git
imgs/example.jpg ADDED
main.py ADDED
@@ -0,0 +1,797 @@