Anonymous941 committed on
Commit fc0edcf
1 Parent(s): a8811be

please work
README.md CHANGED
@@ -1,13 +1,246 @@
1
- ---
2
- title: Testa
3
- emoji:
4
- colorFrom: red
5
- colorTo: yellow
6
- sdk: gradio
7
- sdk_version: 3.19.1
8
- app_file: app.py
9
- pinned: false
10
- license: agpl-3.0
11
- ---
12
-
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
1
+ # Stable Diffusion text-to-image fine-tuning
2
+
3
+ The `train_text_to_image.py` script shows how to fine-tune the Stable Diffusion model on your own dataset.
4
+
5
+ ___Note___:
6
+
7
+ ___This script is experimental. It fine-tunes the whole model, and the model often overfits and runs into issues like catastrophic forgetting. It's recommended to try different hyperparameters to get the best results on your dataset.___
8
+
9
+
10
+ ## Running locally with PyTorch
11
+ ### Installing the dependencies
12
+
13
+ Before running the scripts, make sure to install the library's training dependencies:
14
+
15
+ **Important**
16
+
17
+ To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
18
+ ```bash
19
+ git clone https://github.com/huggingface/diffusers
20
+ cd diffusers
21
+ pip install .
22
+ ```
23
+
24
+ Then cd into the example folder and run:
25
+ ```bash
26
+ pip install -r requirements.txt
27
+ ```
28
+
29
+ And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:
30
+
31
+ ```bash
32
+ accelerate config
33
+ ```
34
+
35
+ ### Pokemon example
36
+
37
+ You need to accept the model license before downloading or using the weights. In this example we'll use model version `v1-4`, so you'll need to visit [its card](https://huggingface.co/CompVis/stable-diffusion-v1-4), read the license and tick the checkbox if you agree.
38
+
39
+ You have to be a registered user on the 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).
40
+
41
+ Run the following command to authenticate your token:
42
+
43
+ ```bash
44
+ huggingface-cli login
45
+ ```
46
+
47
+ If you have already cloned the repo, then you won't need to go through these steps.
48
+
49
+ <br>
50
+
51
+ #### Hardware
52
+ With `gradient_checkpointing` and `mixed_precision` it should be possible to fine-tune the model on a single 24GB GPU. For a higher `batch_size` and faster training, it's better to use GPUs with more than 30GB of memory.
53
+
54
+ **___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___**
55
+
56
+ ```bash
57
+ export MODEL_NAME="CompVis/stable-diffusion-v1-4"
58
+ export dataset_name="lambdalabs/pokemon-blip-captions"
59
+
60
+ accelerate launch --mixed_precision="fp16" train_text_to_image.py \
61
+ --pretrained_model_name_or_path=$MODEL_NAME \
62
+ --dataset_name=$dataset_name \
63
+ --use_ema \
64
+ --resolution=512 --center_crop --random_flip \
65
+ --train_batch_size=1 \
66
+ --gradient_accumulation_steps=4 \
67
+ --gradient_checkpointing \
68
+ --max_train_steps=15000 \
69
+ --learning_rate=1e-05 \
70
+ --max_grad_norm=1 \
71
+ --lr_scheduler="constant" --lr_warmup_steps=0 \
72
+ --output_dir="sd-pokemon-model"
73
+ ```
74
+
75
+
76
+ To run on your own training files, prepare the dataset according to the format required by `datasets`. You can find the instructions for how to do that in this [document](https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder-with-metadata).
77
+ If you wish to use custom loading logic, you should modify the script; we have left pointers for that in the training script.
78
+
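+ For illustration, a minimal `imagefolder` layout with captions might look like the following (the file names are hypothetical; `metadata.jsonl` supplies the caption column, `text` by default):
+
+ ```bash
+ # train_dir/
+ # ├── metadata.jsonl   # one JSON object per line, e.g. {"file_name": "0001.png", "text": "a blue pokemon with wings"}
+ # ├── 0001.png
+ # └── 0002.png
+ ```
+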
79
+ ```bash
80
+ export MODEL_NAME="CompVis/stable-diffusion-v1-4"
81
+ export TRAIN_DIR="path_to_your_dataset"
82
+
83
+ accelerate launch --mixed_precision="fp16" train_text_to_image.py \
84
+ --pretrained_model_name_or_path=$MODEL_NAME \
85
+ --train_data_dir=$TRAIN_DIR \
86
+ --use_ema \
87
+ --resolution=512 --center_crop --random_flip \
88
+ --train_batch_size=1 \
89
+ --gradient_accumulation_steps=4 \
90
+ --gradient_checkpointing \
91
+ --max_train_steps=15000 \
92
+ --learning_rate=1e-05 \
93
+ --max_grad_norm=1 \
94
+ --lr_scheduler="constant" --lr_warmup_steps=0 \
95
+ --output_dir="sd-pokemon-model"
96
+ ```
97
+
98
+
99
+ Once training is finished, the model will be saved in the `output_dir` specified in the command. In this example it's `sd-pokemon-model`. To load the fine-tuned model for inference, just pass that path to `StableDiffusionPipeline`:
100
+
101
+
102
+ ```python
103
+ import torch
+ from diffusers import StableDiffusionPipeline
104
+
105
+ model_path = "path_to_saved_model"
106
+ pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16)
107
+ pipe.to("cuda")
108
+
109
+ image = pipe(prompt="yoda").images[0]
110
+ image.save("yoda-pokemon.png")
111
+ ```
112
+
113
+ ## Training with LoRA
114
+
115
+ Low-Rank Adaptation of Large Language Models was first introduced by Microsoft in [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685) by *Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen*.
116
+
117
+ In a nutshell, LoRA allows adapting pretrained models by adding pairs of rank-decomposition matrices to existing weights and **only** training those newly added weights (a minimal sketch follows the list below). This has a couple of advantages:
118
+
119
+ - Previous pretrained weights are kept frozen so that the model is not prone to [catastrophic forgetting](https://www.pnas.org/doi/10.1073/pnas.1611835114).
120
+ - Rank-decomposition matrices have significantly fewer parameters than the original model, which means that the trained LoRA weights are easily portable.
121
+ - LoRA attention layers allow you to control the extent to which the model is adapted toward new training images via a `scale` parameter.
122
+
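+ To make the rank-decomposition idea concrete, here is a minimal sketch (not the actual diffusers implementation) of a frozen linear layer augmented with a trainable low-rank update:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class LoRALinear(nn.Module):
+     """Frozen base layer plus a trainable low-rank update: W x + scale * B (A x)."""
+
+     def __init__(self, base: nn.Linear, rank: int = 4, scale: float = 1.0):
+         super().__init__()
+         self.base = base
+         for p in self.base.parameters():
+             p.requires_grad_(False)  # pretrained weights stay frozen
+         # Only these two small matrices are trained; together they have
+         # rank * (in_features + out_features) parameters.
+         self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
+         self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
+         self.scale = scale  # how strongly the adaptation is applied
+
+     def forward(self, x):
+         return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)
+ ```
+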
123
+ [cloneofsimo](https://github.com/cloneofsimo) was the first to try out LoRA training for Stable Diffusion in the popular [lora](https://github.com/cloneofsimo/lora) GitHub repository.
124
+
125
+ With LoRA, it's possible to fine-tune Stable Diffusion on a custom image-caption pair dataset
126
+ on consumer GPUs like the Tesla T4 or Tesla V100.
127
+
128
+ ### Training
129
+
130
+ First, you need to set up your development environment as explained in the [installation section](#installing-the-dependencies). Make sure to set the `MODEL_NAME` and `DATASET_NAME` environment variables. Here, we will use [Stable Diffusion v1-4](https://hf.co/CompVis/stable-diffusion-v1-4) and the [Pokemon dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions).
131
+
132
+ **___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___**
133
+
134
+ **___Note: It is quite useful to monitor the training progress by regularly generating sample images during training. [Weights and Biases](https://docs.wandb.ai/quickstart) is a nice solution for easily viewing the generated images during training. All you need to do is run `pip install wandb` before training to automatically log images.___**
135
+
136
+ ```bash
137
+ export MODEL_NAME="CompVis/stable-diffusion-v1-4"
138
+ export DATASET_NAME="lambdalabs/pokemon-blip-captions"
139
+ ```
140
+
141
+ For this example we want to directly store the trained LoRA embeddings on the Hub, so
142
+ we need to be logged in and add the `--push_to_hub` flag.
143
+
144
+ ```bash
145
+ huggingface-cli login
146
+ ```
147
+
148
+ Now we can start training!
149
+
150
+ ```bash
151
+ accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \
152
+ --pretrained_model_name_or_path=$MODEL_NAME \
153
+ --dataset_name=$DATASET_NAME --caption_column="text" \
154
+ --resolution=512 --random_flip \
155
+ --train_batch_size=1 \
156
+ --num_train_epochs=100 --checkpointing_steps=5000 \
157
+ --learning_rate=1e-04 --lr_scheduler="constant" --lr_warmup_steps=0 \
158
+ --seed=42 \
159
+ --output_dir="sd-pokemon-model-lora" \
160
+ --validation_prompt="cute dragon creature" --report_to="wandb"
161
+ ```
162
+
163
+ The above command will also run inference as fine-tuning progresses and log the results to Weights and Biases.
164
+
165
+ **___Note: When using LoRA, we can use a much higher learning rate compared to non-LoRA fine-tuning. Here we use *1e-4* instead of the usual *1e-5*. Also, by using LoRA, it's possible to run `train_text_to_image_lora.py` on consumer GPUs like the T4 or V100.___**
166
+
167
+ The final LoRA embedding weights have been uploaded to [sayakpaul/sd-model-finetuned-lora-t4](https://huggingface.co/sayakpaul/sd-model-finetuned-lora-t4). **___Note: [The final weights](https://huggingface.co/sayakpaul/sd-model-finetuned-lora-t4/blob/main/pytorch_lora_weights.bin) are only 3 MB in size, which is orders of magnitude smaller than the original model.___**
168
+
169
+ You can check some inference samples that were logged during the course of the fine-tuning process [here](https://wandb.ai/sayakpaul/text2image-fine-tune/runs/q4lc0xsw).
170
+
171
+ ### Inference
172
+
173
+ Once you have trained a model using the above command, inference can be done simply with the `StableDiffusionPipeline` after loading the trained LoRA weights. You
174
+ need to pass the `output_dir` for loading the LoRA weights, which in this case is `sd-pokemon-model-lora`.
175
+
176
+ ```python
177
+ from diffusers import StableDiffusionPipeline
178
+ import torch
179
+
180
+ model_path = "sayakpaul/sd-model-finetuned-lora-t4"
181
+ pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
182
+ pipe.unet.load_attn_procs(model_path)
183
+ pipe.to("cuda")
184
+
185
+ prompt = "A pokemon with green eyes and red legs."
186
+ image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
187
+ image.save("pokemon.png")
188
+ ```
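+
+ The `scale` mentioned earlier controls how strongly the LoRA weights are applied on top of the frozen base weights. Depending on your diffusers version, it can be passed at inference time through `cross_attention_kwargs`; a hedged sketch:
+
+ ```python
+ # scale ~ 0.0 behaves like the original model, scale ~ 1.0 like the fully adapted one
+ image = pipe(
+     prompt,
+     num_inference_steps=30,
+     guidance_scale=7.5,
+     cross_attention_kwargs={"scale": 0.5},
+ ).images[0]
+ image.save("pokemon-half-lora.png")
+ ```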
189
+
190
+ ## Training with Flax/JAX
191
+
192
+ For faster training on TPUs and GPUs, you can leverage the Flax training example. Follow the instructions above to get the model and dataset before running the script.
193
+
194
+ **___Note: The Flax example doesn't yet support features like gradient checkpointing or gradient accumulation, so to use Flax for faster training you will need a GPU with more than 30GB of memory or a TPU v3.___**
195
+
196
+
197
+ Before running the scripts, make sure to install the library's training dependencies:
198
+
199
+ ```bash
200
+ pip install -U -r requirements_flax.txt
201
+ ```
202
+
203
+ ```bash
204
+ export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
205
+ export dataset_name="lambdalabs/pokemon-blip-captions"
206
+
207
+ python train_text_to_image_flax.py \
208
+ --pretrained_model_name_or_path=$MODEL_NAME \
209
+ --dataset_name=$dataset_name \
210
+ --resolution=512 --center_crop --random_flip \
211
+ --train_batch_size=1 \
212
+ --mixed_precision="fp16" \
213
+ --max_train_steps=15000 \
214
+ --learning_rate=1e-05 \
215
+ --max_grad_norm=1 \
216
+ --output_dir="sd-pokemon-model"
217
+ ```
218
+
219
+ To run on your own training files, prepare the dataset according to the format required by `datasets`. You can find the instructions for how to do that in this [document](https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder-with-metadata).
220
+ If you wish to use custom loading logic, you should modify the script; we have left pointers for that in the training script.
221
+
222
+ ```bash
223
+ export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
224
+ export TRAIN_DIR="path_to_your_dataset"
225
+
226
+ python train_text_to_image_flax.py \
227
+ --pretrained_model_name_or_path=$MODEL_NAME \
228
+ --train_data_dir=$TRAIN_DIR \
229
+ --resolution=512 --center_crop --random_flip \
230
+ --train_batch_size=1 \
231
+ --mixed_precision="fp16" \
232
+ --max_train_steps=15000 \
233
+ --learning_rate=1e-05 \
234
+ --max_grad_norm=1 \
235
+ --output_dir="sd-pokemon-model"
236
+ ```
237
+
238
+ ### Training with xFormers
239
+
240
+ You can enable memory efficient attention by [installing xFormers](https://huggingface.co/docs/diffusers/main/en/optimization/xformers) and passing the `--enable_xformers_memory_efficient_attention` argument to the script.
241
+
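+ As a sketch, the flag is simply appended to the PyTorch fine-tuning command shown earlier (assuming xFormers is already installed):
+
+ ```bash
+ accelerate launch --mixed_precision="fp16" train_text_to_image.py \
+   --pretrained_model_name_or_path=$MODEL_NAME \
+   --dataset_name=$dataset_name \
+   --enable_xformers_memory_efficient_attention \
+   --output_dir="sd-pokemon-model"
+ ```
+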
242
+ xFormers training is not available for Flax/JAX.
243
+
244
+ **Note**:
245
+
246
+ According to [this issue](https://github.com/huggingface/diffusers/issues/2234#issuecomment-1416931212), xFormers `v0.0.16` cannot be used for training on some GPUs. If you observe that problem, please install a development version as indicated in that comment.
requirements.txt ADDED
@@ -0,0 +1,7 @@
1
+ accelerate
2
+ torchvision
3
+ transformers>=4.25.1
4
+ datasets
5
+ ftfy
6
+ tensorboard
7
+ Jinja2
requirements_flax.txt ADDED
@@ -0,0 +1,9 @@
1
+ transformers>=4.25.1
2
+ datasets
3
+ flax
4
+ optax
5
+ torch
6
+ torchvision
7
+ ftfy
8
+ tensorboard
9
+ Jinja2
train_text_to_image.py ADDED
@@ -0,0 +1,788 @@
1
+ #!/usr/bin/env python
2
+ # coding=utf-8
3
+ # Copyright 2022 The HuggingFace Inc. team. All rights reserved.
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
15
+
16
+ import argparse
17
+ import logging
18
+ import math
19
+ import os
20
+ import random
21
+ from pathlib import Path
22
+ from typing import Optional
23
+
24
+ import accelerate
25
+ import datasets
26
+ import numpy as np
27
+ import torch
28
+ import torch.nn.functional as F
29
+ import torch.utils.checkpoint
30
+ import transformers
31
+ from accelerate import Accelerator
32
+ from accelerate.logging import get_logger
33
+ from accelerate.utils import ProjectConfiguration, set_seed
34
+ from datasets import load_dataset
35
+ from huggingface_hub import HfFolder, Repository, create_repo, whoami
36
+ from packaging import version
37
+ from torchvision import transforms
38
+ from tqdm.auto import tqdm
39
+ from transformers import CLIPTextModel, CLIPTokenizer
40
+
41
+ import diffusers
42
+ from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel
43
+ from diffusers.optimization import get_scheduler
44
+ from diffusers.training_utils import EMAModel
45
+ from diffusers.utils import check_min_version, deprecate
46
+ from diffusers.utils.import_utils import is_xformers_available
47
+
48
+
49
+ # Will error if the minimal version of diffusers is not installed. Remove at your own risks.
50
+ check_min_version("0.14.0.dev0")
51
+
52
+ logger = get_logger(__name__, log_level="INFO")
53
+
54
+
55
+ def parse_args():
56
+ parser = argparse.ArgumentParser(description="Simple example of a training script.")
57
+ parser.add_argument(
58
+ "--pretrained_model_name_or_path",
59
+ type=str,
60
+ default=None,
61
+ required=True,
62
+ help="Path to pretrained model or model identifier from huggingface.co/models.",
63
+ )
64
+ parser.add_argument(
65
+ "--revision",
66
+ type=str,
67
+ default=None,
68
+ required=False,
69
+ help="Revision of pretrained model identifier from huggingface.co/models.",
70
+ )
71
+ parser.add_argument(
72
+ "--dataset_name",
73
+ type=str,
74
+ default=None,
75
+ help=(
76
+ "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private,"
77
+ " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem,"
78
+ " or to a folder containing files that 🤗 Datasets can understand."
79
+ ),
80
+ )
81
+ parser.add_argument(
82
+ "--dataset_config_name",
83
+ type=str,
84
+ default=None,
85
+ help="The config of the Dataset, leave as None if there's only one config.",
86
+ )
87
+ parser.add_argument(
88
+ "--train_data_dir",
89
+ type=str,
90
+ default=None,
91
+ help=(
92
+ "A folder containing the training data. Folder contents must follow the structure described in"
93
+ " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file"
94
+ " must exist to provide the captions for the images. Ignored if `dataset_name` is specified."
95
+ ),
96
+ )
97
+ parser.add_argument(
98
+ "--image_column", type=str, default="image", help="The column of the dataset containing an image."
99
+ )
100
+ parser.add_argument(
101
+ "--caption_column",
102
+ type=str,
103
+ default="text",
104
+ help="The column of the dataset containing a caption or a list of captions.",
105
+ )
106
+ parser.add_argument(
107
+ "--max_train_samples",
108
+ type=int,
109
+ default=None,
110
+ help=(
111
+ "For debugging purposes or quicker training, truncate the number of training examples to this "
112
+ "value if set."
113
+ ),
114
+ )
115
+ parser.add_argument(
116
+ "--output_dir",
117
+ type=str,
118
+ default="sd-model-finetuned",
119
+ help="The output directory where the model predictions and checkpoints will be written.",
120
+ )
121
+ parser.add_argument(
122
+ "--cache_dir",
123
+ type=str,
124
+ default=None,
125
+ help="The directory where the downloaded models and datasets will be stored.",
126
+ )
127
+ parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
128
+ parser.add_argument(
129
+ "--resolution",
130
+ type=int,
131
+ default=512,
132
+ help=(
133
+ "The resolution for input images, all the images in the train/validation dataset will be resized to this"
134
+ " resolution"
135
+ ),
136
+ )
137
+ parser.add_argument(
138
+ "--center_crop",
139
+ default=False,
140
+ action="store_true",
141
+ help=(
142
+ "Whether to center crop the input images to the resolution. If not set, the images will be randomly"
143
+ " cropped. The images will be resized to the resolution first before cropping."
144
+ ),
145
+ )
146
+ parser.add_argument(
147
+ "--random_flip",
148
+ action="store_true",
149
+ help="whether to randomly flip images horizontally",
150
+ )
151
+ parser.add_argument(
152
+ "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader."
153
+ )
154
+ parser.add_argument("--num_train_epochs", type=int, default=100)
155
+ parser.add_argument(
156
+ "--max_train_steps",
157
+ type=int,
158
+ default=None,
159
+ help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
160
+ )
161
+ parser.add_argument(
162
+ "--gradient_accumulation_steps",
163
+ type=int,
164
+ default=1,
165
+ help="Number of updates steps to accumulate before performing a backward/update pass.",
166
+ )
167
+ parser.add_argument(
168
+ "--gradient_checkpointing",
169
+ action="store_true",
170
+ help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
171
+ )
172
+ parser.add_argument(
173
+ "--learning_rate",
174
+ type=float,
175
+ default=1e-4,
176
+ help="Initial learning rate (after the potential warmup period) to use.",
177
+ )
178
+ parser.add_argument(
179
+ "--scale_lr",
180
+ action="store_true",
181
+ default=False,
182
+ help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
183
+ )
184
+ parser.add_argument(
185
+ "--lr_scheduler",
186
+ type=str,
187
+ default="constant",
188
+ help=(
189
+ 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
190
+ ' "constant", "constant_with_warmup"]'
191
+ ),
192
+ )
193
+ parser.add_argument(
194
+ "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
195
+ )
196
+ parser.add_argument(
197
+ "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
198
+ )
199
+ parser.add_argument(
200
+ "--allow_tf32",
201
+ action="store_true",
202
+ help=(
203
+ "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see"
204
+ " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices"
205
+ ),
206
+ )
207
+ parser.add_argument("--use_ema", action="store_true", help="Whether to use EMA model.")
208
+ parser.add_argument(
209
+ "--non_ema_revision",
210
+ type=str,
211
+ default=None,
212
+ required=False,
213
+ help=(
214
+ "Revision of pretrained non-ema model identifier. Must be a branch, tag or git identifier of the local or"
215
+ " remote repository specified with --pretrained_model_name_or_path."
216
+ ),
217
+ )
218
+ parser.add_argument(
219
+ "--dataloader_num_workers",
220
+ type=int,
221
+ default=0,
222
+ help=(
223
+ "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process."
224
+ ),
225
+ )
226
+ parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
227
+ parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
228
+ parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
229
+ parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
230
+ parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
231
+ parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
232
+ parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
233
+ parser.add_argument(
234
+ "--hub_model_id",
235
+ type=str,
236
+ default=None,
237
+ help="The name of the repository to keep in sync with the local `output_dir`.",
238
+ )
239
+ parser.add_argument(
240
+ "--logging_dir",
241
+ type=str,
242
+ default="logs",
243
+ help=(
244
+ "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
245
+ " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
246
+ ),
247
+ )
248
+ parser.add_argument(
249
+ "--mixed_precision",
250
+ type=str,
251
+ default=None,
252
+ choices=["no", "fp16", "bf16"],
253
+ help=(
254
+ "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >="
255
+ " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the"
256
+ " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config."
257
+ ),
258
+ )
259
+ parser.add_argument(
260
+ "--report_to",
261
+ type=str,
262
+ default="tensorboard",
263
+ help=(
264
+ 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`'
265
+ ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.'
266
+ ),
267
+ )
268
+ parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
269
+ parser.add_argument(
270
+ "--checkpointing_steps",
271
+ type=int,
272
+ default=500,
273
+ help=(
274
+ "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming"
275
+ " training using `--resume_from_checkpoint`."
276
+ ),
277
+ )
278
+ parser.add_argument(
279
+ "--checkpoints_total_limit",
280
+ type=int,
281
+ default=None,
282
+ help=(
283
+ "Max number of checkpoints to store. Passed as `total_limit` to the `Accelerator` `ProjectConfiguration`."
284
+ " See Accelerator::save_state https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.save_state"
285
+ " for more docs"
286
+ ),
287
+ )
288
+ parser.add_argument(
289
+ "--resume_from_checkpoint",
290
+ type=str,
291
+ default=None,
292
+ help=(
293
+ "Whether training should be resumed from a previous checkpoint. Use a path saved by"
294
+ ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.'
295
+ ),
296
+ )
297
+ parser.add_argument(
298
+ "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers."
299
+ )
300
+
301
+ args = parser.parse_args()
302
+ env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
303
+ if env_local_rank != -1 and env_local_rank != args.local_rank:
304
+ args.local_rank = env_local_rank
305
+
306
+ # Sanity checks
307
+ if args.dataset_name is None and args.train_data_dir is None:
308
+ raise ValueError("Need either a dataset name or a training folder.")
309
+
310
+ # default to using the same revision for the non-ema model if not specified
311
+ if args.non_ema_revision is None:
312
+ args.non_ema_revision = args.revision
313
+
314
+ return args
315
+
316
+
317
+ def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
318
+ if token is None:
319
+ token = HfFolder.get_token()
320
+ if organization is None:
321
+ username = whoami(token)["name"]
322
+ return f"{username}/{model_id}"
323
+ else:
324
+ return f"{organization}/{model_id}"
325
+
326
+
327
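+ # Default (image column, caption column) names for known Hub datasets.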
+ dataset_name_mapping = {
328
+ "lambdalabs/pokemon-blip-captions": ("image", "text"),
329
+ }
330
+
331
+
332
+ def main():
333
+ args = parse_args()
334
+
335
+ if args.non_ema_revision is not None:
336
+ deprecate(
337
+ "non_ema_revision!=None",
338
+ "0.15.0",
339
+ message=(
340
+ "Downloading 'non_ema' weights from revision branches of the Hub is deprecated. Please make sure to"
341
+ " use `--variant=non_ema` instead."
342
+ ),
343
+ )
344
+ logging_dir = os.path.join(args.output_dir, args.logging_dir)
345
+
346
+ accelerator_project_config = ProjectConfiguration(total_limit=args.checkpoints_total_limit)
347
+
348
+ accelerator = Accelerator(
349
+ gradient_accumulation_steps=args.gradient_accumulation_steps,
350
+ mixed_precision=args.mixed_precision,
351
+ log_with=args.report_to,
352
+ logging_dir=logging_dir,
353
+ project_config=accelerator_project_config,
354
+ )
355
+
356
+ # Make one log on every process with the configuration for debugging.
357
+ logging.basicConfig(
358
+ format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
359
+ datefmt="%m/%d/%Y %H:%M:%S",
360
+ level=logging.INFO,
361
+ )
362
+ logger.info(accelerator.state, main_process_only=False)
363
+ if accelerator.is_local_main_process:
364
+ datasets.utils.logging.set_verbosity_warning()
365
+ transformers.utils.logging.set_verbosity_warning()
366
+ diffusers.utils.logging.set_verbosity_info()
367
+ else:
368
+ datasets.utils.logging.set_verbosity_error()
369
+ transformers.utils.logging.set_verbosity_error()
370
+ diffusers.utils.logging.set_verbosity_error()
371
+
372
+ # If passed along, set the training seed now.
373
+ if args.seed is not None:
374
+ set_seed(args.seed)
375
+
376
+ # Handle the repository creation
377
+ if accelerator.is_main_process:
378
+ if args.push_to_hub:
379
+ if args.hub_model_id is None:
380
+ repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token)
381
+ else:
382
+ repo_name = args.hub_model_id
383
+ create_repo(repo_name, exist_ok=True, token=args.hub_token)
384
+ repo = Repository(args.output_dir, clone_from=repo_name, token=args.hub_token)
385
+
386
+ with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
387
+ if "step_*" not in gitignore:
388
+ gitignore.write("step_*\n")
389
+ if "epoch_*" not in gitignore:
390
+ gitignore.write("epoch_*\n")
391
+ elif args.output_dir is not None:
392
+ os.makedirs(args.output_dir, exist_ok=True)
393
+
394
+ # Load scheduler, tokenizer and models.
395
+ noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
396
+ tokenizer = CLIPTokenizer.from_pretrained(
397
+ args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision
398
+ )
399
+ text_encoder = CLIPTextModel.from_pretrained(
400
+ args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision
401
+ )
402
+ vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision)
403
+ unet = UNet2DConditionModel.from_pretrained(
404
+ args.pretrained_model_name_or_path, subfolder="unet", revision=args.non_ema_revision
405
+ )
406
+
407
+ # Freeze vae and text_encoder
408
+ vae.requires_grad_(False)
409
+ text_encoder.requires_grad_(False)
410
+
411
+ # Create EMA for the unet.
412
+ if args.use_ema:
413
+ ema_unet = UNet2DConditionModel.from_pretrained(
414
+ args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision
415
+ )
416
+ ema_unet = EMAModel(ema_unet.parameters(), model_cls=UNet2DConditionModel, model_config=ema_unet.config)
417
+
418
+ if args.enable_xformers_memory_efficient_attention:
419
+ if is_xformers_available():
420
+ import xformers
421
+
422
+ xformers_version = version.parse(xformers.__version__)
423
+ if xformers_version == version.parse("0.0.16"):
424
+ logger.warn(
425
+ "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details."
426
+ )
427
+ unet.enable_xformers_memory_efficient_attention()
428
+ else:
429
+ raise ValueError("xformers is not available. Make sure it is installed correctly")
430
+
431
+ # `accelerate` 0.16.0 will have better support for customized saving
432
+ if version.parse(accelerate.__version__) >= version.parse("0.16.0"):
433
+ # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format
434
+ def save_model_hook(models, weights, output_dir):
435
+ if args.use_ema:
436
+ ema_unet.save_pretrained(os.path.join(output_dir, "unet_ema"))
437
+
438
+ for i, model in enumerate(models):
439
+ model.save_pretrained(os.path.join(output_dir, "unet"))
440
+
441
+ # make sure to pop weight so that corresponding model is not saved again
442
+ weights.pop()
443
+
444
+ def load_model_hook(models, input_dir):
445
+ if args.use_ema:
446
+ load_model = EMAModel.from_pretrained(os.path.join(input_dir, "unet_ema"), UNet2DConditionModel)
447
+ ema_unet.load_state_dict(load_model.state_dict())
448
+ ema_unet.to(accelerator.device)
449
+ del load_model
450
+
451
+ for i in range(len(models)):
452
+ # pop models so that they are not loaded again
453
+ model = models.pop()
454
+
455
+ # load diffusers style into model
456
+ load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet")
457
+ model.register_to_config(**load_model.config)
458
+
459
+ model.load_state_dict(load_model.state_dict())
460
+ del load_model
461
+
462
+ accelerator.register_save_state_pre_hook(save_model_hook)
463
+ accelerator.register_load_state_pre_hook(load_model_hook)
464
+
465
+ if args.gradient_checkpointing:
466
+ unet.enable_gradient_checkpointing()
467
+
468
+ # Enable TF32 for faster training on Ampere GPUs,
469
+ # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices
470
+ if args.allow_tf32:
471
+ torch.backends.cuda.matmul.allow_tf32 = True
472
+
473
+ if args.scale_lr:
474
+ args.learning_rate = (
475
+ args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
476
+ )
477
+
478
+ # Initialize the optimizer
479
+ if args.use_8bit_adam:
480
+ try:
481
+ import bitsandbytes as bnb
482
+ except ImportError:
483
+ raise ImportError(
484
+ "Please install bitsandbytes to use 8-bit Adam. You can do so by running `pip install bitsandbytes`"
485
+ )
486
+
487
+ optimizer_cls = bnb.optim.AdamW8bit
488
+ else:
489
+ optimizer_cls = torch.optim.AdamW
490
+
491
+ optimizer = optimizer_cls(
492
+ unet.parameters(),
493
+ lr=args.learning_rate,
494
+ betas=(args.adam_beta1, args.adam_beta2),
495
+ weight_decay=args.adam_weight_decay,
496
+ eps=args.adam_epsilon,
497
+ )
498
+
499
+ # Get the datasets: you can either provide your own training and evaluation files (see below)
500
+ # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub).
501
+
502
+ # In distributed training, the load_dataset function guarantees that only one local process can concurrently
503
+ # download the dataset.
504
+ if args.dataset_name is not None:
505
+ # Downloading and loading a dataset from the hub.
506
+ dataset = load_dataset(
507
+ args.dataset_name,
508
+ args.dataset_config_name,
509
+ cache_dir=args.cache_dir,
510
+ )
511
+ else:
512
+ data_files = {}
513
+ if args.train_data_dir is not None:
514
+ data_files["train"] = os.path.join(args.train_data_dir, "**")
515
+ dataset = load_dataset(
516
+ "imagefolder",
517
+ data_files=data_files,
518
+ cache_dir=args.cache_dir,
519
+ )
520
+ # See more about loading custom images at
521
+ # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder
522
+
523
+ # Preprocessing the datasets.
524
+ # We need to tokenize inputs and targets.
525
+ column_names = dataset["train"].column_names
526
+
527
+ # 6. Get the column names for input/target.
528
+ dataset_columns = dataset_name_mapping.get(args.dataset_name, None)
529
+ if args.image_column is None:
530
+ image_column = dataset_columns[0] if dataset_columns is not None else column_names[0]
531
+ else:
532
+ image_column = args.image_column
533
+ if image_column not in column_names:
534
+ raise ValueError(
535
+ f"--image_column' value '{args.image_column}' needs to be one of: {', '.join(column_names)}"
536
+ )
537
+ if args.caption_column is None:
538
+ caption_column = dataset_columns[1] if dataset_columns is not None else column_names[1]
539
+ else:
540
+ caption_column = args.caption_column
541
+ if caption_column not in column_names:
542
+ raise ValueError(
543
+ f"--caption_column' value '{args.caption_column}' needs to be one of: {', '.join(column_names)}"
544
+ )
545
+
546
+ # Preprocessing the datasets.
547
+ # We need to tokenize input captions and transform the images.
548
+ def tokenize_captions(examples, is_train=True):
549
+ captions = []
550
+ for caption in examples[caption_column]:
551
+ if isinstance(caption, str):
552
+ captions.append(caption)
553
+ elif isinstance(caption, (list, np.ndarray)):
554
+ # take a random caption if there are multiple
555
+ captions.append(random.choice(caption) if is_train else caption[0])
556
+ else:
557
+ raise ValueError(
558
+ f"Caption column `{caption_column}` should contain either strings or lists of strings."
559
+ )
560
+ inputs = tokenizer(
561
+ captions, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt"
562
+ )
563
+ return inputs.input_ids
564
+
565
+ # Preprocessing the datasets.
566
+ train_transforms = transforms.Compose(
567
+ [
568
+ transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
569
+ transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution),
570
+ transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x),
571
+ transforms.ToTensor(),
572
+ transforms.Normalize([0.5], [0.5]),
573
+ ]
574
+ )
575
+
576
+ def preprocess_train(examples):
577
+ images = [image.convert("RGB") for image in examples[image_column]]
578
+ examples["pixel_values"] = [train_transforms(image) for image in images]
579
+ examples["input_ids"] = tokenize_captions(examples)
580
+ return examples
581
+
582
+ with accelerator.main_process_first():
583
+ if args.max_train_samples is not None:
584
+ dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples))
585
+ # Set the training transforms
586
+ train_dataset = dataset["train"].with_transform(preprocess_train)
587
+
588
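+ # Collate examples into batched tensors: stacked pixel values and tokenized caption ids.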
+ def collate_fn(examples):
589
+ pixel_values = torch.stack([example["pixel_values"] for example in examples])
590
+ pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
591
+ input_ids = torch.stack([example["input_ids"] for example in examples])
592
+ return {"pixel_values": pixel_values, "input_ids": input_ids}
593
+
594
+ # DataLoaders creation:
595
+ train_dataloader = torch.utils.data.DataLoader(
596
+ train_dataset,
597
+ shuffle=True,
598
+ collate_fn=collate_fn,
599
+ batch_size=args.train_batch_size,
600
+ num_workers=args.dataloader_num_workers,
601
+ )
602
+
603
+ # Scheduler and math around the number of training steps.
604
+ overrode_max_train_steps = False
605
+ num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
606
+ if args.max_train_steps is None:
607
+ args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
608
+ overrode_max_train_steps = True
609
+
610
+ lr_scheduler = get_scheduler(
611
+ args.lr_scheduler,
612
+ optimizer=optimizer,
613
+ num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps,
614
+ num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
615
+ )
616
+
617
+ # Prepare everything with our `accelerator`.
618
+ unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
619
+ unet, optimizer, train_dataloader, lr_scheduler
620
+ )
621
+
622
+ if args.use_ema:
623
+ ema_unet.to(accelerator.device)
624
+
625
+ # For mixed precision training we cast the text_encoder and vae weights to half-precision
626
+ # as these models are only used for inference, keeping weights in full precision is not required.
627
+ weight_dtype = torch.float32
628
+ if accelerator.mixed_precision == "fp16":
629
+ weight_dtype = torch.float16
630
+ elif accelerator.mixed_precision == "bf16":
631
+ weight_dtype = torch.bfloat16
632
+
633
+ # Move text_encoder and vae to GPU and cast to weight_dtype
634
+ text_encoder.to(accelerator.device, dtype=weight_dtype)
635
+ vae.to(accelerator.device, dtype=weight_dtype)
636
+
637
+ # We need to recalculate our total training steps as the size of the training dataloader may have changed.
638
+ num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
639
+ if overrode_max_train_steps:
640
+ args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
641
+ # Afterwards we recalculate our number of training epochs
642
+ args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
643
+
644
+ # We need to initialize the trackers we use, and also store our configuration.
645
+ # The trackers initialize automatically on the main process.
646
+ if accelerator.is_main_process:
647
+ accelerator.init_trackers("text2image-fine-tune", config=vars(args))
648
+
649
+ # Train!
650
+ total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
651
+
652
+ logger.info("***** Running training *****")
653
+ logger.info(f" Num examples = {len(train_dataset)}")
654
+ logger.info(f" Num Epochs = {args.num_train_epochs}")
655
+ logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
656
+ logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
657
+ logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
658
+ logger.info(f" Total optimization steps = {args.max_train_steps}")
659
+ global_step = 0
660
+ first_epoch = 0
661
+
662
+ # Potentially load in the weights and states from a previous save
663
+ if args.resume_from_checkpoint:
664
+ if args.resume_from_checkpoint != "latest":
665
+ path = os.path.basename(args.resume_from_checkpoint)
666
+ else:
667
+ # Get the most recent checkpoint
668
+ dirs = os.listdir(args.output_dir)
669
+ dirs = [d for d in dirs if d.startswith("checkpoint")]
670
+ dirs = sorted(dirs, key=lambda x: int(x.split("-")[1]))
671
+ path = dirs[-1] if len(dirs) > 0 else None
672
+
673
+ if path is None:
674
+ accelerator.print(
675
+ f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run."
676
+ )
677
+ args.resume_from_checkpoint = None
678
+ else:
679
+ accelerator.print(f"Resuming from checkpoint {path}")
680
+ accelerator.load_state(os.path.join(args.output_dir, path))
681
+ global_step = int(path.split("-")[1])
682
+
683
+ resume_global_step = global_step * args.gradient_accumulation_steps
684
+ first_epoch = global_step // num_update_steps_per_epoch
685
+ resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps)
686
+
687
+ # Only show the progress bar once on each machine.
688
+ progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process)
689
+ progress_bar.set_description("Steps")
690
+
691
+ for epoch in range(first_epoch, args.num_train_epochs):
692
+ unet.train()
693
+ train_loss = 0.0
694
+ for step, batch in enumerate(train_dataloader):
695
+ # Skip steps until we reach the resumed step
696
+ if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step:
697
+ if step % args.gradient_accumulation_steps == 0:
698
+ progress_bar.update(1)
699
+ continue
700
+
701
+ with accelerator.accumulate(unet):
702
+ # Convert images to latent space
703
+ latents = vae.encode(batch["pixel_values"].to(weight_dtype)).latent_dist.sample()
704
+ latents = latents * vae.config.scaling_factor
705
+
706
+ # Sample noise that we'll add to the latents
707
+ noise = torch.randn_like(latents)
708
+ bsz = latents.shape[0]
709
+ # Sample a random timestep for each image
710
+ timesteps = torch.randint(0, noise_scheduler.num_train_timesteps, (bsz,), device=latents.device)
711
+ timesteps = timesteps.long()
712
+
713
+ # Add noise to the latents according to the noise magnitude at each timestep
714
+ # (this is the forward diffusion process)
715
+ noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
716
+
717
+ # Get the text embedding for conditioning
718
+ encoder_hidden_states = text_encoder(batch["input_ids"])[0]
719
+
720
+ # Get the target for loss depending on the prediction type
721
+ if noise_scheduler.config.prediction_type == "epsilon":
722
+ target = noise
723
+ elif noise_scheduler.config.prediction_type == "v_prediction":
724
+ target = noise_scheduler.get_velocity(latents, noise, timesteps)
725
+ else:
726
+ raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
727
+
728
+ # Predict the noise residual and compute loss
729
+ model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
730
+ loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
731
+
732
+ # Gather the losses across all processes for logging (if we use distributed training).
733
+ avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean()
734
+ train_loss += avg_loss.item() / args.gradient_accumulation_steps
735
+
736
+ # Backpropagate
737
+ accelerator.backward(loss)
738
+ if accelerator.sync_gradients:
739
+ accelerator.clip_grad_norm_(unet.parameters(), args.max_grad_norm)
740
+ optimizer.step()
741
+ lr_scheduler.step()
742
+ optimizer.zero_grad()
743
+
744
+ # Checks if the accelerator has performed an optimization step behind the scenes
745
+ if accelerator.sync_gradients:
746
+ if args.use_ema:
747
+ ema_unet.step(unet.parameters())
748
+ progress_bar.update(1)
749
+ global_step += 1
750
+ accelerator.log({"train_loss": train_loss}, step=global_step)
751
+ train_loss = 0.0
752
+
753
+ if global_step % args.checkpointing_steps == 0:
754
+ if accelerator.is_main_process:
755
+ save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
756
+ accelerator.save_state(save_path)
757
+ logger.info(f"Saved state to {save_path}")
758
+
759
+ logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
760
+ progress_bar.set_postfix(**logs)
761
+
762
+ if global_step >= args.max_train_steps:
763
+ break
764
+
765
+ # Create the pipeline using the trained modules and save it.
766
+ accelerator.wait_for_everyone()
767
+ if accelerator.is_main_process:
768
+ unet = accelerator.unwrap_model(unet)
769
+ if args.use_ema:
770
+ ema_unet.copy_to(unet.parameters())
771
+
772
+ pipeline = StableDiffusionPipeline.from_pretrained(
773
+ args.pretrained_model_name_or_path,
774
+ text_encoder=text_encoder,
775
+ vae=vae,
776
+ unet=unet,
777
+ revision=args.revision,
778
+ )
779
+ pipeline.save_pretrained(args.output_dir)
780
+
781
+ if args.push_to_hub:
782
+ repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True)
783
+
784
+ accelerator.end_training()
785
+
786
+
787
+ if __name__ == "__main__":
788
+ main()
train_text_to_image_flax.py ADDED
@@ -0,0 +1,579 @@
1
+ import argparse
2
+ import logging
3
+ import math
4
+ import os
5
+ import random
6
+ from pathlib import Path
7
+ from typing import Optional
8
+
9
+ import jax
10
+ import jax.numpy as jnp
11
+ import numpy as np
12
+ import optax
13
+ import torch
14
+ import torch.utils.checkpoint
15
+ import transformers
16
+ from datasets import load_dataset
17
+ from flax import jax_utils
18
+ from flax.training import train_state
19
+ from flax.training.common_utils import shard
20
+ from huggingface_hub import HfFolder, Repository, create_repo, whoami
21
+ from torchvision import transforms
22
+ from tqdm.auto import tqdm
23
+ from transformers import CLIPFeatureExtractor, CLIPTokenizer, FlaxCLIPTextModel, set_seed
24
+
25
+ from diffusers import (
26
+ FlaxAutoencoderKL,
27
+ FlaxDDPMScheduler,
28
+ FlaxPNDMScheduler,
29
+ FlaxStableDiffusionPipeline,
30
+ FlaxUNet2DConditionModel,
31
+ )
32
+ from diffusers.pipelines.stable_diffusion import FlaxStableDiffusionSafetyChecker
33
+ from diffusers.utils import check_min_version
34
+
35
+
36
+ # Will error if the minimal version of diffusers is not installed. Remove at your own risks.
37
+ check_min_version("0.14.0.dev0")
38
+
39
+ logger = logging.getLogger(__name__)
40
+
41
+
42
+ def parse_args():
43
+ parser = argparse.ArgumentParser(description="Simple example of a training script.")
44
+ parser.add_argument(
45
+ "--pretrained_model_name_or_path",
46
+ type=str,
47
+ default=None,
48
+ required=True,
49
+ help="Path to pretrained model or model identifier from huggingface.co/models.",
50
+ )
51
+ parser.add_argument(
52
+ "--dataset_name",
53
+ type=str,
54
+ default=None,
55
+ help=(
56
+ "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private,"
57
+ " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem,"
58
+ " or to a folder containing files that 🤗 Datasets can understand."
59
+ ),
60
+ )
61
+ parser.add_argument(
62
+ "--dataset_config_name",
63
+ type=str,
64
+ default=None,
65
+ help="The config of the Dataset, leave as None if there's only one config.",
66
+ )
67
+ parser.add_argument(
68
+ "--train_data_dir",
69
+ type=str,
70
+ default=None,
71
+ help=(
72
+ "A folder containing the training data. Folder contents must follow the structure described in"
73
+ " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file"
74
+ " must exist to provide the captions for the images. Ignored if `dataset_name` is specified."
75
+ ),
76
+ )
77
+ parser.add_argument(
78
+ "--image_column", type=str, default="image", help="The column of the dataset containing an image."
79
+ )
80
+ parser.add_argument(
81
+ "--caption_column",
82
+ type=str,
83
+ default="text",
84
+ help="The column of the dataset containing a caption or a list of captions.",
85
+ )
86
+ parser.add_argument(
87
+ "--max_train_samples",
88
+ type=int,
89
+ default=None,
90
+ help=(
91
+ "For debugging purposes or quicker training, truncate the number of training examples to this "
92
+ "value if set."
93
+ ),
94
+ )
95
+ parser.add_argument(
96
+ "--output_dir",
97
+ type=str,
98
+ default="sd-model-finetuned",
99
+ help="The output directory where the model predictions and checkpoints will be written.",
100
+ )
101
+ parser.add_argument(
102
+ "--cache_dir",
103
+ type=str,
104
+ default=None,
105
+ help="The directory where the downloaded models and datasets will be stored.",
106
+ )
107
+ parser.add_argument("--seed", type=int, default=0, help="A seed for reproducible training.")
108
+ parser.add_argument(
109
+ "--resolution",
110
+ type=int,
111
+ default=512,
112
+ help=(
113
+ "The resolution for input images, all the images in the train/validation dataset will be resized to this"
114
+ " resolution"
115
+ ),
116
+ )
117
+ parser.add_argument(
118
+ "--center_crop",
119
+ default=False,
120
+ action="store_true",
121
+ help=(
122
+ "Whether to center crop the input images to the resolution. If not set, the images will be randomly"
123
+ " cropped. The images will be resized to the resolution first before cropping."
124
+ ),
125
+ )
126
+ parser.add_argument(
127
+ "--random_flip",
128
+ action="store_true",
129
+ help="whether to randomly flip images horizontally",
130
+ )
131
+ parser.add_argument(
132
+ "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader."
133
+ )
134
+ parser.add_argument("--num_train_epochs", type=int, default=100)
135
+ parser.add_argument(
136
+ "--max_train_steps",
137
+ type=int,
138
+ default=None,
139
+ help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
140
+ )
141
+ parser.add_argument(
142
+ "--learning_rate",
143
+ type=float,
144
+ default=1e-4,
145
+ help="Initial learning rate (after the potential warmup period) to use.",
146
+ )
147
+ parser.add_argument(
148
+ "--scale_lr",
149
+ action="store_true",
150
+ default=False,
151
+ help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
152
+ )
153
+ parser.add_argument(
154
+ "--lr_scheduler",
155
+ type=str,
156
+ default="constant",
157
+ help=(
158
+ 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
159
+ ' "constant", "constant_with_warmup"]'
160
+ ),
161
+ )
162
+ parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
163
+ parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
164
+ parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
165
+ parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
166
+ parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
167
+ parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
168
+ parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
169
+ parser.add_argument(
170
+ "--hub_model_id",
171
+ type=str,
172
+ default=None,
173
+ help="The name of the repository to keep in sync with the local `output_dir`.",
174
+ )
175
+ parser.add_argument(
176
+ "--logging_dir",
177
+ type=str,
178
+ default="logs",
179
+ help=(
180
+ "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
181
+ " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
182
+ ),
183
+ )
184
+ parser.add_argument(
185
+ "--report_to",
186
+ type=str,
187
+ default="tensorboard",
188
+ help=(
189
+ 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`'
190
+ ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.'
191
+ ),
192
+ )
193
+ parser.add_argument(
194
+ "--mixed_precision",
195
+ type=str,
196
+ default="no",
197
+ choices=["no", "fp16", "bf16"],
198
+ help=(
199
+ "Whether to use mixed precision. Choose"
200
+ "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
201
+ "and an Nvidia Ampere GPU."
202
+ ),
203
+ )
204
+ parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
205
+
206
+ args = parser.parse_args()
207
+ env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
208
+ if env_local_rank != -1 and env_local_rank != args.local_rank:
209
+ args.local_rank = env_local_rank
210
+
211
+ # Sanity checks
212
+ if args.dataset_name is None and args.train_data_dir is None:
213
+ raise ValueError("Need either a dataset name or a training folder.")
214
+
215
+ return args
216
+
217
+
218
+ def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
219
+ if token is None:
220
+ token = HfFolder.get_token()
221
+ if organization is None:
222
+ username = whoami(token)["name"]
223
+ return f"{username}/{model_id}"
224
+ else:
225
+ return f"{organization}/{model_id}"
226
+
227
+
228
+ dataset_name_mapping = {
229
+ "lambdalabs/pokemon-blip-captions": ("image", "text"),
230
+ }
231
+
232
+
233
+ def get_params_to_save(params):
234
+ return jax.device_get(jax.tree_util.tree_map(lambda x: x[0], params))
235
+
236
+
237
+ def main():
238
+ args = parse_args()
239
+
240
+ logging.basicConfig(
241
+ format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
242
+ datefmt="%m/%d/%Y %H:%M:%S",
243
+ level=logging.INFO,
244
+ )
245
+ # Setup logging, we only want one process per machine to log things on the screen.
246
+ logger.setLevel(logging.INFO if jax.process_index() == 0 else logging.ERROR)
247
+ if jax.process_index() == 0:
248
+ transformers.utils.logging.set_verbosity_info()
249
+ else:
250
+ transformers.utils.logging.set_verbosity_error()
251
+
252
+ if args.seed is not None:
253
+ set_seed(args.seed)
254
+
255
+ # Handle the repository creation
256
+ if jax.process_index() == 0:
257
+ if args.push_to_hub:
258
+ if args.hub_model_id is None:
259
+ repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token)
260
+ else:
261
+ repo_name = args.hub_model_id
262
+ create_repo(repo_name, exist_ok=True, token=args.hub_token)
263
+ repo = Repository(args.output_dir, clone_from=repo_name, token=args.hub_token)
264
+
265
+ with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
266
+ if "step_*" not in gitignore:
267
+ gitignore.write("step_*\n")
268
+ if "epoch_*" not in gitignore:
269
+ gitignore.write("epoch_*\n")
270
+ elif args.output_dir is not None:
271
+ os.makedirs(args.output_dir, exist_ok=True)
272
+
273
+ # Get the datasets: you can either provide your own training and evaluation files (see below)
274
+ # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub).
275
+
276
+ # In distributed training, the load_dataset function guarantees that only one local process can concurrently
277
+ # download the dataset.
278
+ if args.dataset_name is not None:
279
+ # Downloading and loading a dataset from the hub.
280
+ dataset = load_dataset(
281
+ args.dataset_name,
282
+ args.dataset_config_name,
283
+ cache_dir=args.cache_dir,
284
+ )
285
+ else:
286
+ data_files = {}
287
+ if args.train_data_dir is not None:
288
+ data_files["train"] = os.path.join(args.train_data_dir, "**")
289
+ dataset = load_dataset(
290
+ "imagefolder",
291
+ data_files=data_files,
292
+ cache_dir=args.cache_dir,
293
+ )
294
+ # See more about loading custom images at
295
+ # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder
296
+
297
+ # Preprocessing the datasets.
298
+ # We need to tokenize inputs and targets.
299
+ column_names = dataset["train"].column_names
300
+
301
+ # 6. Get the column names for input/target.
302
+ dataset_columns = dataset_name_mapping.get(args.dataset_name, None)
303
+ if args.image_column is None:
304
+ image_column = dataset_columns[0] if dataset_columns is not None else column_names[0]
305
+ else:
306
+ image_column = args.image_column
307
+ if image_column not in column_names:
308
+ raise ValueError(
309
+ f"--image_column' value '{args.image_column}' needs to be one of: {', '.join(column_names)}"
310
+ )
311
+ if args.caption_column is None:
312
+ caption_column = dataset_columns[1] if dataset_columns is not None else column_names[1]
313
+ else:
314
+ caption_column = args.caption_column
315
+ if caption_column not in column_names:
316
+ raise ValueError(
317
+ f"--caption_column' value '{args.caption_column}' needs to be one of: {', '.join(column_names)}"
318
+ )
319
+
320
+ # Preprocessing the datasets.
321
+ # We need to tokenize input captions and transform the images.
322
+ def tokenize_captions(examples, is_train=True):
323
+ captions = []
324
+ for caption in examples[caption_column]:
325
+ if isinstance(caption, str):
326
+ captions.append(caption)
327
+ elif isinstance(caption, (list, np.ndarray)):
328
+ # take a random caption if there are multiple
329
+ captions.append(random.choice(caption) if is_train else caption[0])
330
+ else:
331
+ raise ValueError(
332
+ f"Caption column `{caption_column}` should contain either strings or lists of strings."
333
+ )
334
+ inputs = tokenizer(captions, max_length=tokenizer.model_max_length, padding="do_not_pad", truncation=True)
335
+ input_ids = inputs.input_ids
336
+ return input_ids
337
+
338
+ train_transforms = transforms.Compose(
339
+ [
340
+ transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
341
+ transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution),
342
+ transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x),
343
+ transforms.ToTensor(),
344
+ transforms.Normalize([0.5], [0.5]),
345
+ ]
346
+ )
347
+
348
+ def preprocess_train(examples):
349
+ images = [image.convert("RGB") for image in examples[image_column]]
350
+ examples["pixel_values"] = [train_transforms(image) for image in images]
351
+ examples["input_ids"] = tokenize_captions(examples)
352
+
353
+ return examples
354
+
355
+ if jax.process_index() == 0:
356
+ if args.max_train_samples is not None:
357
+ dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples))
358
+ # Set the training transforms
359
+ train_dataset = dataset["train"].with_transform(preprocess_train)
360
+
361
+ def collate_fn(examples):
362
+ pixel_values = torch.stack([example["pixel_values"] for example in examples])
363
+ pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
364
+ input_ids = [example["input_ids"] for example in examples]
365
+
366
+ padded_tokens = tokenizer.pad(
367
+ {"input_ids": input_ids}, padding="max_length", max_length=tokenizer.model_max_length, return_tensors="pt"
368
+ )
369
+ batch = {
370
+ "pixel_values": pixel_values,
371
+ "input_ids": padded_tokens.input_ids,
372
+ }
373
+ batch = {k: v.numpy() for k, v in batch.items()}
374
+
375
+ return batch
376
+
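+ # Each host feeds all of its local devices from a single dataloader, so the per-host
+ # batch size below is train_batch_size * jax.local_device_count(); shard() later splits
+ # each batch across the local devices.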
377
+ total_train_batch_size = args.train_batch_size * jax.local_device_count()
378
+ train_dataloader = torch.utils.data.DataLoader(
379
+ train_dataset, shuffle=True, collate_fn=collate_fn, batch_size=total_train_batch_size, drop_last=True
380
+ )
381
+
382
+ weight_dtype = jnp.float32
383
+ if args.mixed_precision == "fp16":
384
+ weight_dtype = jnp.float16
385
+ elif args.mixed_precision == "bf16":
386
+ weight_dtype = jnp.bfloat16
387
+
388
+ # Load models and create wrapper for stable diffusion
389
+ tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")
390
+ text_encoder = FlaxCLIPTextModel.from_pretrained(
391
+ args.pretrained_model_name_or_path, subfolder="text_encoder", dtype=weight_dtype
392
+ )
393
+ vae, vae_params = FlaxAutoencoderKL.from_pretrained(
394
+ args.pretrained_model_name_or_path, subfolder="vae", dtype=weight_dtype
395
+ )
396
+ unet, unet_params = FlaxUNet2DConditionModel.from_pretrained(
397
+ args.pretrained_model_name_or_path, subfolder="unet", dtype=weight_dtype
398
+ )
399
+
400
+ # Optimization
401
+ if args.scale_lr:
402
+ args.learning_rate = args.learning_rate * total_train_batch_size
403
+
404
+ constant_scheduler = optax.constant_schedule(args.learning_rate)
405
+
406
+ adamw = optax.adamw(
407
+ learning_rate=constant_scheduler,
408
+ b1=args.adam_beta1,
409
+ b2=args.adam_beta2,
410
+ eps=args.adam_epsilon,
411
+ weight_decay=args.adam_weight_decay,
412
+ )
413
+
414
+ optimizer = optax.chain(
415
+ optax.clip_by_global_norm(args.max_grad_norm),
416
+ adamw,
417
+ )
418
+
419
+ state = train_state.TrainState.create(apply_fn=unet.__call__, params=unet_params, tx=optimizer)
420
+
421
+ noise_scheduler = FlaxDDPMScheduler(
422
+ beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000
423
+ )
424
+ noise_scheduler_state = noise_scheduler.create_state()
425
+
426
+ # Initialize our training
427
+ rng = jax.random.PRNGKey(args.seed)
428
+ train_rngs = jax.random.split(rng, jax.local_device_count())
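+ # One independent PRNG key per local device; every pmapped train step returns a fresh key.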
429
+
430
+ def train_step(state, text_encoder_params, vae_params, batch, train_rng):
431
+ dropout_rng, sample_rng, new_train_rng = jax.random.split(train_rng, 3)
432
+
433
+ def compute_loss(params):
434
+ # Convert images to latent space
435
+ vae_outputs = vae.apply(
436
+ {"params": vae_params}, batch["pixel_values"], deterministic=True, method=vae.encode
437
+ )
438
+ latents = vae_outputs.latent_dist.sample(sample_rng)
439
+ # (NHWC) -> (NCHW)
440
+ latents = jnp.transpose(latents, (0, 3, 1, 2))
441
+ latents = latents * vae.config.scaling_factor
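+ # scaling_factor (0.18215 for the SD v1 VAE) rescales the latents to roughly unit
+ # variance, the scale the diffusion UNet expects.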
442
+
443
+ # Sample noise that we'll add to the latents
444
+ noise_rng, timestep_rng = jax.random.split(sample_rng)
445
+ noise = jax.random.normal(noise_rng, latents.shape)
446
+ # Sample a random timestep for each image
447
+ bsz = latents.shape[0]
448
+ timesteps = jax.random.randint(
449
+ timestep_rng,
450
+ (bsz,),
451
+ 0,
452
+ noise_scheduler.config.num_train_timesteps,
453
+ )
454
+
455
+ # Add noise to the latents according to the noise magnitude at each timestep
456
+ # (this is the forward diffusion process)
457
+ noisy_latents = noise_scheduler.add_noise(noise_scheduler_state, latents, noise, timesteps)
458
+
459
+ # Get the text embedding for conditioning
460
+ encoder_hidden_states = text_encoder(
461
+ batch["input_ids"],
462
+ params=text_encoder_params,
463
+ train=False,
464
+ )[0]
465
+
466
+ # Predict the noise residual and compute loss
467
+ model_pred = unet.apply(
468
+ {"params": params}, noisy_latents, timesteps, encoder_hidden_states, train=True
469
+ ).sample
470
+
471
+ # Get the target for loss depending on the prediction type
472
+ if noise_scheduler.config.prediction_type == "epsilon":
473
+ target = noise
474
+ elif noise_scheduler.config.prediction_type == "v_prediction":
475
+ target = noise_scheduler.get_velocity(noise_scheduler_state, latents, noise, timesteps)
476
+ else:
477
+ raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
478
+
479
+ loss = (target - model_pred) ** 2
480
+ loss = loss.mean()
481
+
482
+ return loss
483
+
484
+ grad_fn = jax.value_and_grad(compute_loss)
485
+ loss, grad = grad_fn(state.params)
486
+ grad = jax.lax.pmean(grad, "batch")
487
+
488
+ new_state = state.apply_gradients(grads=grad)
489
+
490
+ metrics = {"loss": loss}
491
+ metrics = jax.lax.pmean(metrics, axis_name="batch")
492
+
493
+ return new_state, metrics, new_train_rng
494
+
495
+ # Create parallel version of the train step
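+ # donate_argnums=(0,) lets XLA reuse the old train-state buffers for the updated state,
+ # which lowers peak device memory.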
496
+ p_train_step = jax.pmap(train_step, "batch", donate_argnums=(0,))
497
+
498
+ # Replicate the train state on each device
499
+ state = jax_utils.replicate(state)
500
+ text_encoder_params = jax_utils.replicate(text_encoder.params)
501
+ vae_params = jax_utils.replicate(vae_params)
502
+
503
+ # Train!
504
+ num_update_steps_per_epoch = math.ceil(len(train_dataloader))
505
+
506
+ # Scheduler and math around the number of training steps.
507
+ if args.max_train_steps is None:
508
+ args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
509
+
510
+ args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
511
+
512
+ logger.info("***** Running training *****")
513
+ logger.info(f" Num examples = {len(train_dataset)}")
514
+ logger.info(f" Num Epochs = {args.num_train_epochs}")
515
+ logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
516
+ logger.info(f" Total train batch size (w. parallel & distributed) = {total_train_batch_size}")
517
+ logger.info(f" Total optimization steps = {args.max_train_steps}")
518
+
519
+ global_step = 0
520
+
521
+ epochs = tqdm(range(args.num_train_epochs), desc="Epoch ... ", position=0)
522
+ for epoch in epochs:
523
+ # ======================== Training ================================
524
+
525
+ train_metrics = []
526
+
527
+ steps_per_epoch = len(train_dataset) // total_train_batch_size
528
+ train_step_progress_bar = tqdm(total=steps_per_epoch, desc="Training...", position=1, leave=False)
529
+ # train
530
+ for batch in train_dataloader:
531
+ batch = shard(batch)
532
+ state, train_metric, train_rngs = p_train_step(state, text_encoder_params, vae_params, batch, train_rngs)
533
+ train_metrics.append(train_metric)
534
+
535
+ train_step_progress_bar.update(1)
536
+
537
+ global_step += 1
538
+ if global_step >= args.max_train_steps:
539
+ break
540
+
541
+ train_metric = jax_utils.unreplicate(train_metric)
542
+
543
+ train_step_progress_bar.close()
544
+ epochs.write(f"Epoch... ({epoch + 1}/{args.num_train_epochs} | Loss: {train_metric['loss']})")
545
+
546
+ # Create the pipeline using the trained modules and save it.
547
+ if jax.process_index() == 0:
548
+ scheduler = FlaxPNDMScheduler(
549
+ beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", skip_prk_steps=True
550
+ )
551
+ safety_checker = FlaxStableDiffusionSafetyChecker.from_pretrained(
552
+ "CompVis/stable-diffusion-safety-checker", from_pt=True
553
+ )
554
+ pipeline = FlaxStableDiffusionPipeline(
555
+ text_encoder=text_encoder,
556
+ vae=vae,
557
+ unet=unet,
558
+ tokenizer=tokenizer,
559
+ scheduler=scheduler,
560
+ safety_checker=safety_checker,
561
+ feature_extractor=CLIPFeatureExtractor.from_pretrained("openai/clip-vit-base-patch32"),
562
+ )
563
+
564
+ pipeline.save_pretrained(
565
+ args.output_dir,
566
+ params={
567
+ "text_encoder": get_params_to_save(text_encoder_params),
568
+ "vae": get_params_to_save(vae_params),
569
+ "unet": get_params_to_save(state.params),
570
+ "safety_checker": safety_checker.params,
571
+ },
572
+ )
573
+
574
+ if args.push_to_hub:
575
+ repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True)
576
+
577
+
578
+ if __name__ == "__main__":
579
+ main()
train_text_to_image_lora.py ADDED
@@ -0,0 +1,872 @@
1
+ # coding=utf-8
2
+ # Copyright 2023 The HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """Fine-tuning script for Stable Diffusion for text2image with support for LoRA."""
16
+
17
+ import argparse
18
+ import logging
19
+ import math
20
+ import os
21
+ import random
22
+ from pathlib import Path
23
+ from typing import Optional
24
+
25
+ import datasets
26
+ import numpy as np
27
+ import torch
28
+ import torch.nn.functional as F
29
+ import torch.utils.checkpoint
30
+ import transformers
31
+ from accelerate import Accelerator
32
+ from accelerate.logging import get_logger
33
+ from accelerate.utils import ProjectConfiguration, set_seed
34
+ from datasets import load_dataset
35
+ from huggingface_hub import HfFolder, Repository, create_repo, whoami
36
+ from packaging import version
37
+ from torchvision import transforms
38
+ from tqdm.auto import tqdm
39
+ from transformers import CLIPTextModel, CLIPTokenizer
40
+
41
+ import diffusers
42
+ from diffusers import AutoencoderKL, DDPMScheduler, DiffusionPipeline, UNet2DConditionModel
43
+ from diffusers.loaders import AttnProcsLayers
44
+ from diffusers.models.cross_attention import LoRACrossAttnProcessor
45
+ from diffusers.optimization import get_scheduler
46
+ from diffusers.utils import check_min_version, is_wandb_available
47
+ from diffusers.utils.import_utils import is_xformers_available
48
+
49
+
50
+ # Will error if the minimal version of diffusers is not installed. Remove at your own risk.
51
+ check_min_version("0.14.0.dev0")
52
+
53
+ logger = get_logger(__name__, log_level="INFO")
54
+
55
+
56
+ def save_model_card(repo_name, images=None, base_model=None, dataset_name=None, repo_folder=None):
57
+ img_str = ""
58
+ for i, image in enumerate(images):
59
+ image.save(os.path.join(repo_folder, f"image_{i}.png"))
60
+ img_str += f"![img_{i}](./image_{i}.png)\n"
61
+
62
+ yaml = f"""
63
+ ---
64
+ license: creativeml-openrail-m
65
+ base_model: {base_model}
66
+ tags:
67
+ - stable-diffusion
68
+ - stable-diffusion-diffusers
69
+ - text-to-image
70
+ - diffusers
71
+ - lora
72
+ inference: true
73
+ ---
74
+ """
75
+ model_card = f"""
76
+ # LoRA text2image fine-tuning - {repo_name}
77
+ These are LoRA adaptation weights for {base_model}. The weights were fine-tuned on the {dataset_name} dataset. You can find some example images below. \n
78
+ {img_str}
79
+ """
80
+ with open(os.path.join(repo_folder, "README.md"), "w") as f:
81
+ f.write(yaml + model_card)
82
+
83
+
84
+ def parse_args():
85
+ parser = argparse.ArgumentParser(description="Simple example of a training script.")
86
+ parser.add_argument(
87
+ "--pretrained_model_name_or_path",
88
+ type=str,
89
+ default=None,
90
+ required=True,
91
+ help="Path to pretrained model or model identifier from huggingface.co/models.",
92
+ )
93
+ parser.add_argument(
94
+ "--revision",
95
+ type=str,
96
+ default=None,
97
+ required=False,
98
+ help="Revision of pretrained model identifier from huggingface.co/models.",
99
+ )
100
+ parser.add_argument(
101
+ "--dataset_name",
102
+ type=str,
103
+ default=None,
104
+ help=(
105
+ "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private,"
106
+ " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem,"
107
+ " or to a folder containing files that 🤗 Datasets can understand."
108
+ ),
109
+ )
110
+ parser.add_argument(
111
+ "--dataset_config_name",
112
+ type=str,
113
+ default=None,
114
+ help="The config of the Dataset, leave as None if there's only one config.",
115
+ )
116
+ parser.add_argument(
117
+ "--train_data_dir",
118
+ type=str,
119
+ default=None,
120
+ help=(
121
+ "A folder containing the training data. Folder contents must follow the structure described in"
122
+ " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file"
123
+ " must exist to provide the captions for the images. Ignored if `dataset_name` is specified."
124
+ ),
125
+ )
126
+ parser.add_argument(
127
+ "--image_column", type=str, default="image", help="The column of the dataset containing an image."
128
+ )
129
+ parser.add_argument(
130
+ "--caption_column",
131
+ type=str,
132
+ default="text",
133
+ help="The column of the dataset containing a caption or a list of captions.",
134
+ )
135
+ parser.add_argument(
136
+ "--validation_prompt", type=str, default=None, help="A prompt that is sampled during training for inference."
137
+ )
138
+ parser.add_argument(
139
+ "--num_validation_images",
140
+ type=int,
141
+ default=4,
142
+ help="Number of images that should be generated during validation with `validation_prompt`.",
143
+ )
144
+ parser.add_argument(
145
+ "--validation_epochs",
146
+ type=int,
147
+ default=1,
148
+ help=(
149
+ "Run fine-tuning validation every X epochs. The validation process consists of running the prompt"
150
+ " `args.validation_prompt` multiple times: `args.num_validation_images`."
151
+ ),
152
+ )
153
+ parser.add_argument(
154
+ "--max_train_samples",
155
+ type=int,
156
+ default=None,
157
+ help=(
158
+ "For debugging purposes or quicker training, truncate the number of training examples to this "
159
+ "value if set."
160
+ ),
161
+ )
162
+ parser.add_argument(
163
+ "--output_dir",
164
+ type=str,
165
+ default="sd-model-finetuned-lora",
166
+ help="The output directory where the model predictions and checkpoints will be written.",
167
+ )
168
+ parser.add_argument(
169
+ "--cache_dir",
170
+ type=str,
171
+ default=None,
172
+ help="The directory where the downloaded models and datasets will be stored.",
173
+ )
174
+ parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
175
+ parser.add_argument(
176
+ "--resolution",
177
+ type=int,
178
+ default=512,
179
+ help=(
180
+ "The resolution for input images, all the images in the train/validation dataset will be resized to this"
181
+ " resolution"
182
+ ),
183
+ )
184
+ parser.add_argument(
185
+ "--center_crop",
186
+ default=False,
187
+ action="store_true",
188
+ help=(
189
+ "Whether to center crop the input images to the resolution. If not set, the images will be randomly"
190
+ " cropped. The images will be resized to the resolution first before cropping."
191
+ ),
192
+ )
193
+ parser.add_argument(
194
+ "--random_flip",
195
+ action="store_true",
196
+ help="whether to randomly flip images horizontally",
197
+ )
198
+ parser.add_argument(
199
+ "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader."
200
+ )
201
+ parser.add_argument("--num_train_epochs", type=int, default=100)
202
+ parser.add_argument(
203
+ "--max_train_steps",
204
+ type=int,
205
+ default=None,
206
+ help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
207
+ )
208
+ parser.add_argument(
209
+ "--gradient_accumulation_steps",
210
+ type=int,
211
+ default=1,
212
+ help="Number of updates steps to accumulate before performing a backward/update pass.",
213
+ )
214
+ parser.add_argument(
215
+ "--gradient_checkpointing",
216
+ action="store_true",
217
+ help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
218
+ )
219
+ parser.add_argument(
220
+ "--learning_rate",
221
+ type=float,
222
+ default=1e-4,
223
+ help="Initial learning rate (after the potential warmup period) to use.",
224
+ )
225
+ parser.add_argument(
226
+ "--scale_lr",
227
+ action="store_true",
228
+ default=False,
229
+ help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
230
+ )
231
+ parser.add_argument(
232
+ "--lr_scheduler",
233
+ type=str,
234
+ default="constant",
235
+ help=(
236
+ 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
237
+ ' "constant", "constant_with_warmup"]'
238
+ ),
239
+ )
240
+ parser.add_argument(
241
+ "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
242
+ )
243
+ parser.add_argument(
244
+ "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
245
+ )
246
+ parser.add_argument(
247
+ "--allow_tf32",
248
+ action="store_true",
249
+ help=(
250
+ "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see"
251
+ " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices"
252
+ ),
253
+ )
254
+ parser.add_argument(
255
+ "--dataloader_num_workers",
256
+ type=int,
257
+ default=0,
258
+ help=(
259
+ "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process."
260
+ ),
261
+ )
262
+ parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
263
+ parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
264
+ parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
265
+ parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
266
+ parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
267
+ parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
268
+ parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
269
+ parser.add_argument(
270
+ "--hub_model_id",
271
+ type=str,
272
+ default=None,
273
+ help="The name of the repository to keep in sync with the local `output_dir`.",
274
+ )
275
+ parser.add_argument(
276
+ "--logging_dir",
277
+ type=str,
278
+ default="logs",
279
+ help=(
280
+ "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
281
+ " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
282
+ ),
283
+ )
284
+ parser.add_argument(
285
+ "--mixed_precision",
286
+ type=str,
287
+ default=None,
288
+ choices=["no", "fp16", "bf16"],
289
+ help=(
290
+ "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >="
291
+ " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the"
292
+ " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config."
293
+ ),
294
+ )
295
+ parser.add_argument(
296
+ "--report_to",
297
+ type=str,
298
+ default="tensorboard",
299
+ help=(
300
+ 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`'
301
+ ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.'
302
+ ),
303
+ )
304
+ parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
305
+ parser.add_argument(
306
+ "--checkpointing_steps",
307
+ type=int,
308
+ default=500,
309
+ help=(
310
+ "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming"
311
+ " training using `--resume_from_checkpoint`."
312
+ ),
313
+ )
314
+ parser.add_argument(
315
+ "--checkpoints_total_limit",
316
+ type=int,
317
+ default=None,
318
+ help=(
319
+ "Max number of checkpoints to store. Passed as `total_limit` to the `Accelerator` `ProjectConfiguration`."
320
+ " See Accelerator::save_state https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.save_state"
321
+ " for more docs"
322
+ ),
323
+ )
324
+ parser.add_argument(
325
+ "--resume_from_checkpoint",
326
+ type=str,
327
+ default=None,
328
+ help=(
329
+ "Whether training should be resumed from a previous checkpoint. Use a path saved by"
330
+ ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.'
331
+ ),
332
+ )
333
+ parser.add_argument(
334
+ "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers."
335
+ )
336
+
337
+ args = parser.parse_args()
338
+ env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
339
+ if env_local_rank != -1 and env_local_rank != args.local_rank:
340
+ args.local_rank = env_local_rank
341
+
342
+ # Sanity checks
343
+ if args.dataset_name is None and args.train_data_dir is None:
344
+ raise ValueError("Need either a dataset name or a training folder.")
345
+
346
+ return args
347
+
348
+
349
+ def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
350
+ if token is None:
351
+ token = HfFolder.get_token()
352
+ if organization is None:
353
+ username = whoami(token)["name"]
354
+ return f"{username}/{model_id}"
355
+ else:
356
+ return f"{organization}/{model_id}"
357
+
358
+
359
+ DATASET_NAME_MAPPING = {
360
+ "lambdalabs/pokemon-blip-captions": ("image", "text"),
361
+ }
362
+
363
+
364
+ def main():
365
+ args = parse_args()
366
+ logging_dir = os.path.join(args.output_dir, args.logging_dir)
367
+
368
+ accelerator_project_config = ProjectConfiguration(total_limit=args.checkpoints_total_limit)
369
+
370
+ accelerator = Accelerator(
371
+ gradient_accumulation_steps=args.gradient_accumulation_steps,
372
+ mixed_precision=args.mixed_precision,
373
+ log_with=args.report_to,
374
+ logging_dir=logging_dir,
375
+ project_config=accelerator_project_config,
376
+ )
377
+ if args.report_to == "wandb":
378
+ if not is_wandb_available():
379
+ raise ImportError("Make sure to install wandb if you want to use it for logging during training.")
380
+ import wandb
381
+
382
+ # Make one log on every process with the configuration for debugging.
383
+ logging.basicConfig(
384
+ format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
385
+ datefmt="%m/%d/%Y %H:%M:%S",
386
+ level=logging.INFO,
387
+ )
388
+ logger.info(accelerator.state, main_process_only=False)
389
+ if accelerator.is_local_main_process:
390
+ datasets.utils.logging.set_verbosity_warning()
391
+ transformers.utils.logging.set_verbosity_warning()
392
+ diffusers.utils.logging.set_verbosity_info()
393
+ else:
394
+ datasets.utils.logging.set_verbosity_error()
395
+ transformers.utils.logging.set_verbosity_error()
396
+ diffusers.utils.logging.set_verbosity_error()
397
+
398
+ # If passed along, set the training seed now.
399
+ if args.seed is not None:
400
+ set_seed(args.seed)
401
+
402
+ # Handle the repository creation
403
+ if accelerator.is_main_process:
404
+ if args.push_to_hub:
405
+ if args.hub_model_id is None:
406
+ repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token)
407
+ else:
408
+ repo_name = args.hub_model_id
409
+ repo_name = create_repo(repo_name, exist_ok=True)
410
+ repo = Repository(args.output_dir, clone_from=repo_name)
411
+
412
+ with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
413
+ if "step_*" not in gitignore:
414
+ gitignore.write("step_*\n")
415
+ if "epoch_*" not in gitignore:
416
+ gitignore.write("epoch_*\n")
417
+ elif args.output_dir is not None:
418
+ os.makedirs(args.output_dir, exist_ok=True)
419
+
420
+ # Load scheduler, tokenizer and models.
421
+ noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
422
+ tokenizer = CLIPTokenizer.from_pretrained(
423
+ args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision
424
+ )
425
+ text_encoder = CLIPTextModel.from_pretrained(
426
+ args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision
427
+ )
428
+ vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision)
429
+ unet = UNet2DConditionModel.from_pretrained(
430
+ args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision
431
+ )
432
+ # freeze parameters of models to save more memory
433
+ unet.requires_grad_(False)
434
+ vae.requires_grad_(False)
435
+
436
+ text_encoder.requires_grad_(False)
437
+
438
+ # For mixed precision training we cast the text_encoder and vae weights to half-precision
439
+ # as these models are only used for inference, keeping weights in full precision is not required.
440
+ weight_dtype = torch.float32
441
+ if accelerator.mixed_precision == "fp16":
442
+ weight_dtype = torch.float16
443
+ elif accelerator.mixed_precision == "bf16":
444
+ weight_dtype = torch.bfloat16
445
+
446
+ # Move unet, vae and text_encoder to device and cast to weight_dtype
447
+ unet.to(accelerator.device, dtype=weight_dtype)
448
+ vae.to(accelerator.device, dtype=weight_dtype)
449
+ text_encoder.to(accelerator.device, dtype=weight_dtype)
450
+
451
+ # now we will add new LoRA weights to the attention layers
452
+ # It's important to realize here how many attention weights will be added and of which sizes
453
+ # The sizes of the attention layers consist only of two different variables:
454
+ # 1) - the "hidden_size", which is increased according to `unet.config.block_out_channels`.
455
+ # 2) - the "cross attention size", which is set to `unet.config.cross_attention_dim`.
456
+
457
+ # Let's first see how many attention processors we will have to set.
458
+ # For Stable Diffusion, it should be equal to:
459
+ # - down blocks (2x attention layers) * (2x transformer layers) * (3x down blocks) = 12
460
+ # - mid blocks (2x attention layers) * (1x transformer layers) * (1x mid blocks) = 2
461
+ # - up blocks (2x attention layers) * (3x transformer layers) * (3x up blocks) = 18
462
+ # => 32 layers
463
+
464
+ # Set correct lora layers
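+ # Processor keys look like e.g. "down_blocks.0.attentions.0.transformer_blocks.0.attn1.processor";
+ # attn1 is self-attention (no cross-attention dim), attn2 attends to the text embeddings.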
465
+ lora_attn_procs = {}
466
+ for name in unet.attn_processors.keys():
467
+ cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
468
+ if name.startswith("mid_block"):
469
+ hidden_size = unet.config.block_out_channels[-1]
470
+ elif name.startswith("up_blocks"):
471
+ block_id = int(name[len("up_blocks.")])
472
+ hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
473
+ elif name.startswith("down_blocks"):
474
+ block_id = int(name[len("down_blocks.")])
475
+ hidden_size = unet.config.block_out_channels[block_id]
476
+
477
+ lora_attn_procs[name] = LoRACrossAttnProcessor(
478
+ hidden_size=hidden_size, cross_attention_dim=cross_attention_dim
479
+ )
480
+
481
+ unet.set_attn_processor(lora_attn_procs)
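+ # Only these LoRA processor weights are trainable; the base UNet stays frozen.
+ # In this diffusers version LoRACrossAttnProcessor defaults to rank 4.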
482
+
483
+ if args.enable_xformers_memory_efficient_attention:
484
+ if is_xformers_available():
485
+ import xformers
486
+
487
+ xformers_version = version.parse(xformers.__version__)
488
+ if xformers_version == version.parse("0.0.16"):
489
+ logger.warning(
490
+ "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details."
491
+ )
492
+ unet.enable_xformers_memory_efficient_attention()
493
+ else:
494
+ raise ValueError("xformers is not available. Make sure it is installed correctly")
495
+
496
+ lora_layers = AttnProcsLayers(unet.attn_processors)
497
+
498
+ # Enable TF32 for faster training on Ampere GPUs,
499
+ # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices
500
+ if args.allow_tf32:
501
+ torch.backends.cuda.matmul.allow_tf32 = True
502
+
503
+ if args.scale_lr:
504
+ args.learning_rate = (
505
+ args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
506
+ )
507
+
508
+ # Initialize the optimizer
509
+ if args.use_8bit_adam:
510
+ try:
511
+ import bitsandbytes as bnb
512
+ except ImportError:
513
+ raise ImportError(
514
+ "Please install bitsandbytes to use 8-bit Adam. You can do so by running `pip install bitsandbytes`"
515
+ )
516
+
517
+ optimizer_cls = bnb.optim.AdamW8bit
518
+ else:
519
+ optimizer_cls = torch.optim.AdamW
520
+
521
+ optimizer = optimizer_cls(
522
+ lora_layers.parameters(),
523
+ lr=args.learning_rate,
524
+ betas=(args.adam_beta1, args.adam_beta2),
525
+ weight_decay=args.adam_weight_decay,
526
+ eps=args.adam_epsilon,
527
+ )
528
+
529
+ # Get the datasets: you can either provide your own training and evaluation files (see below)
530
+ # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub).
531
+
532
+ # In distributed training, the load_dataset function guarantees that only one local process can concurrently
533
+ # download the dataset.
534
+ if args.dataset_name is not None:
535
+ # Downloading and loading a dataset from the hub.
536
+ dataset = load_dataset(
537
+ args.dataset_name,
538
+ args.dataset_config_name,
539
+ cache_dir=args.cache_dir,
540
+ )
541
+ else:
542
+ data_files = {}
543
+ if args.train_data_dir is not None:
544
+ data_files["train"] = os.path.join(args.train_data_dir, "**")
545
+ dataset = load_dataset(
546
+ "imagefolder",
547
+ data_files=data_files,
548
+ cache_dir=args.cache_dir,
549
+ )
550
+ # See more about loading custom images at
551
+ # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder
552
+
553
+ # Preprocessing the datasets.
554
+ # We need to tokenize inputs and targets.
555
+ column_names = dataset["train"].column_names
556
+
557
+ # 6. Get the column names for input/target.
558
+ dataset_columns = DATASET_NAME_MAPPING.get(args.dataset_name, None)
559
+ if args.image_column is None:
560
+ image_column = dataset_columns[0] if dataset_columns is not None else column_names[0]
561
+ else:
562
+ image_column = args.image_column
563
+ if image_column not in column_names:
564
+ raise ValueError(
565
+ f"--image_column' value '{args.image_column}' needs to be one of: {', '.join(column_names)}"
566
+ )
567
+ if args.caption_column is None:
568
+ caption_column = dataset_columns[1] if dataset_columns is not None else column_names[1]
569
+ else:
570
+ caption_column = args.caption_column
571
+ if caption_column not in column_names:
572
+ raise ValueError(
573
+ f"--caption_column' value '{args.caption_column}' needs to be one of: {', '.join(column_names)}"
574
+ )
575
+
576
+ # Preprocessing the datasets.
577
+ # We need to tokenize input captions and transform the images.
578
+ def tokenize_captions(examples, is_train=True):
579
+ captions = []
580
+ for caption in examples[caption_column]:
581
+ if isinstance(caption, str):
582
+ captions.append(caption)
583
+ elif isinstance(caption, (list, np.ndarray)):
584
+ # take a random caption if there are multiple
585
+ captions.append(random.choice(caption) if is_train else caption[0])
586
+ else:
587
+ raise ValueError(
588
+ f"Caption column `{caption_column}` should contain either strings or lists of strings."
589
+ )
590
+ inputs = tokenizer(
591
+ captions, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt"
592
+ )
593
+ return inputs.input_ids
594
+
595
+ # Preprocessing the datasets.
596
+ train_transforms = transforms.Compose(
597
+ [
598
+ transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
599
+ transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution),
600
+ transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x),
601
+ transforms.ToTensor(),
602
+ transforms.Normalize([0.5], [0.5]),
603
+ ]
604
+ )
605
+
606
+ def preprocess_train(examples):
607
+ images = [image.convert("RGB") for image in examples[image_column]]
608
+ examples["pixel_values"] = [train_transforms(image) for image in images]
609
+ examples["input_ids"] = tokenize_captions(examples)
610
+ return examples
611
+
612
+ with accelerator.main_process_first():
613
+ if args.max_train_samples is not None:
614
+ dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples))
615
+ # Set the training transforms
616
+ train_dataset = dataset["train"].with_transform(preprocess_train)
617
+
618
+ def collate_fn(examples):
619
+ pixel_values = torch.stack([example["pixel_values"] for example in examples])
620
+ pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
621
+ input_ids = torch.stack([example["input_ids"] for example in examples])
622
+ return {"pixel_values": pixel_values, "input_ids": input_ids}
623
+
624
+ # DataLoaders creation:
625
+ train_dataloader = torch.utils.data.DataLoader(
626
+ train_dataset,
627
+ shuffle=True,
628
+ collate_fn=collate_fn,
629
+ batch_size=args.train_batch_size,
630
+ num_workers=args.dataloader_num_workers,
631
+ )
632
+
633
+ # Scheduler and math around the number of training steps.
634
+ overrode_max_train_steps = False
635
+ num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
636
+ if args.max_train_steps is None:
637
+ args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
638
+ overrode_max_train_steps = True
639
+
640
+ lr_scheduler = get_scheduler(
641
+ args.lr_scheduler,
642
+ optimizer=optimizer,
643
+ num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps,
644
+ num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
645
+ )
646
+
647
+ # Prepare everything with our `accelerator`.
648
+ lora_layers, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
649
+ lora_layers, optimizer, train_dataloader, lr_scheduler
650
+ )
651
+
652
+ # We need to recalculate our total training steps as the size of the training dataloader may have changed.
653
+ num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
654
+ if overrode_max_train_steps:
655
+ args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
656
+ # Afterwards we recalculate our number of training epochs
657
+ args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
658
+
659
+ # We need to initialize the trackers we use, and also store our configuration.
660
+ # The trackers are initialized automatically on the main process.
661
+ if accelerator.is_main_process:
662
+ accelerator.init_trackers("text2image-fine-tune", config=vars(args))
663
+
664
+ # Train!
665
+ total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
666
+
667
+ logger.info("***** Running training *****")
668
+ logger.info(f" Num examples = {len(train_dataset)}")
669
+ logger.info(f" Num Epochs = {args.num_train_epochs}")
670
+ logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
671
+ logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
672
+ logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
673
+ logger.info(f" Total optimization steps = {args.max_train_steps}")
674
+ global_step = 0
675
+ first_epoch = 0
676
+
677
+ # Potentially load in the weights and states from a previous save
678
+ if args.resume_from_checkpoint:
679
+ if args.resume_from_checkpoint != "latest":
680
+ path = os.path.basename(args.resume_from_checkpoint)
681
+ else:
682
+ # Get the most recent checkpoint
683
+ dirs = os.listdir(args.output_dir)
684
+ dirs = [d for d in dirs if d.startswith("checkpoint")]
685
+ dirs = sorted(dirs, key=lambda x: int(x.split("-")[1]))
686
+ path = dirs[-1] if len(dirs) > 0 else None
687
+
688
+ if path is None:
689
+ accelerator.print(
690
+ f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run."
691
+ )
692
+ args.resume_from_checkpoint = None
693
+ else:
694
+ accelerator.print(f"Resuming from checkpoint {path}")
695
+ accelerator.load_state(os.path.join(args.output_dir, path))
696
+ global_step = int(path.split("-")[1])
697
+
698
+ resume_global_step = global_step * args.gradient_accumulation_steps
699
+ first_epoch = global_step // num_update_steps_per_epoch
700
+ resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps)
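+ # resume_step counts dataloader micro-batches already consumed in the resumed epoch,
+ # since each optimizer step spans gradient_accumulation_steps micro-batches.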
701
+
702
+ # Only show the progress bar once on each machine.
703
+ progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process)
704
+ progress_bar.set_description("Steps")
705
+
706
+ for epoch in range(first_epoch, args.num_train_epochs):
707
+ unet.train()
708
+ train_loss = 0.0
709
+ for step, batch in enumerate(train_dataloader):
710
+ # Skip steps until we reach the resumed step
711
+ if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step:
712
+ if step % args.gradient_accumulation_steps == 0:
713
+ progress_bar.update(1)
714
+ continue
715
+
716
+ with accelerator.accumulate(unet):
717
+ # Convert images to latent space
718
+ latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample()
719
+ latents = latents * vae.config.scaling_factor
720
+
721
+ # Sample noise that we'll add to the latents
722
+ noise = torch.randn_like(latents)
723
+ bsz = latents.shape[0]
724
+ # Sample a random timestep for each image
725
+ timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device)
726
+ timesteps = timesteps.long()
727
+
728
+ # Add noise to the latents according to the noise magnitude at each timestep
729
+ # (this is the forward diffusion process)
730
+ noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
731
+
732
+ # Get the text embedding for conditioning
733
+ encoder_hidden_states = text_encoder(batch["input_ids"])[0]
734
+
735
+ # Get the target for loss depending on the prediction type
736
+ if noise_scheduler.config.prediction_type == "epsilon":
737
+ target = noise
738
+ elif noise_scheduler.config.prediction_type == "v_prediction":
739
+ target = noise_scheduler.get_velocity(latents, noise, timesteps)
740
+ else:
741
+ raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
742
+
743
+ # Predict the noise residual and compute loss
744
+ model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
745
+ loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
746
+
747
+ # Gather the losses across all processes for logging (if we use distributed training).
748
+ avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean()
749
+ train_loss += avg_loss.item() / args.gradient_accumulation_steps
750
+
751
+ # Backpropagate
752
+ accelerator.backward(loss)
753
+ if accelerator.sync_gradients:
754
+ params_to_clip = lora_layers.parameters()
755
+ accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
756
+ optimizer.step()
757
+ lr_scheduler.step()
758
+ optimizer.zero_grad()
759
+
760
+ # Checks if the accelerator has performed an optimization step behind the scenes
761
+ if accelerator.sync_gradients:
762
+ progress_bar.update(1)
763
+ global_step += 1
764
+ accelerator.log({"train_loss": train_loss}, step=global_step)
765
+ train_loss = 0.0
766
+
767
+ if global_step % args.checkpointing_steps == 0:
768
+ if accelerator.is_main_process:
769
+ save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
770
+ accelerator.save_state(save_path)
771
+ logger.info(f"Saved state to {save_path}")
772
+
773
+ logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
774
+ progress_bar.set_postfix(**logs)
775
+
776
+ if global_step >= args.max_train_steps:
777
+ break
778
+
779
+ if accelerator.is_main_process:
780
+ if args.validation_prompt is not None and epoch % args.validation_epochs == 0:
781
+ logger.info(
782
+ f"Running validation... \n Generating {args.num_validation_images} images with prompt:"
783
+ f" {args.validation_prompt}."
784
+ )
785
+ # create pipeline
786
+ pipeline = DiffusionPipeline.from_pretrained(
787
+ args.pretrained_model_name_or_path,
788
+ unet=accelerator.unwrap_model(unet),
789
+ revision=args.revision,
790
+ torch_dtype=weight_dtype,
791
+ )
792
+ pipeline = pipeline.to(accelerator.device)
793
+ pipeline.set_progress_bar_config(disable=True)
794
+
795
+ # run inference
796
+ generator = torch.Generator(device=accelerator.device).manual_seed(args.seed)
797
+ images = []
798
+ for _ in range(args.num_validation_images):
799
+ images.append(
800
+ pipeline(args.validation_prompt, num_inference_steps=30, generator=generator).images[0]
801
+ )
802
+
803
+ if accelerator.is_main_process:
804
+ for tracker in accelerator.trackers:
805
+ if tracker.name == "tensorboard":
806
+ np_images = np.stack([np.asarray(img) for img in images])
807
+ tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC")
808
+ if tracker.name == "wandb":
809
+ tracker.log(
810
+ {
811
+ "validation": [
812
+ wandb.Image(image, caption=f"{i}: {args.validation_prompt}")
813
+ for i, image in enumerate(images)
814
+ ]
815
+ }
816
+ )
817
+
818
+ del pipeline
819
+ torch.cuda.empty_cache()
820
+
821
+ # Save the lora layers
822
+ accelerator.wait_for_everyone()
823
+ if accelerator.is_main_process:
824
+ unet = unet.to(torch.float32)
825
+ unet.save_attn_procs(args.output_dir)
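+ # Only the LoRA attention-processor weights are written here (a small file); they are
+ # reloaded below with pipeline.unet.load_attn_procs(args.output_dir).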
826
+
827
+ if args.push_to_hub:
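+ # Note: `images` comes from the last validation loop, so pushing to the Hub assumes
+ # `--validation_prompt` was set and validation actually ran.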
828
+ save_model_card(
829
+ repo_name,
830
+ images=images,
831
+ base_model=args.pretrained_model_name_or_path,
832
+ dataset_name=args.dataset_name,
833
+ repo_folder=args.output_dir,
834
+ )
835
+ repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True)
836
+
837
+ # Final inference
838
+ # Load previous pipeline
839
+ pipeline = DiffusionPipeline.from_pretrained(
840
+ args.pretrained_model_name_or_path, revision=args.revision, torch_dtype=weight_dtype
841
+ )
842
+ pipeline = pipeline.to(accelerator.device)
843
+
844
+ # load attention processors
845
+ pipeline.unet.load_attn_procs(args.output_dir)
846
+
847
+ # run inference
848
+ generator = torch.Generator(device=accelerator.device).manual_seed(args.seed)
849
+ images = []
850
+ for _ in range(args.num_validation_images):
851
+ images.append(pipeline(args.validation_prompt, num_inference_steps=30, generator=generator).images[0])
852
+
853
+ if accelerator.is_main_process:
854
+ for tracker in accelerator.trackers:
855
+ if tracker.name == "tensorboard":
856
+ np_images = np.stack([np.asarray(img) for img in images])
857
+ tracker.writer.add_images("test", np_images, epoch, dataformats="NHWC")
858
+ if tracker.name == "wandb":
859
+ tracker.log(
860
+ {
861
+ "test": [
862
+ wandb.Image(image, caption=f"{i}: {args.validation_prompt}")
863
+ for i, image in enumerate(images)
864
+ ]
865
+ }
866
+ )
867
+
868
+ accelerator.end_training()
869
+
870
+
871
+ if __name__ == "__main__":
872
+ main()