source (stringclasses, 273 values) | url (stringlengths, 47-172) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1-512) | chunk_id (stringlengths, 5-9)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#custom-diffusion
|
.md
|
This guide will explore the [train_custom_diffusion.py](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion/train_custom_diffusion.py) script to help you become more familiar with it, and how you can adapt it for your own use-case.
Before running the script, make sure you install the library from source:
```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```
|
36_1_2
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#custom-diffusion
|
.md
|
```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```
Navigate to the example folder with the training script and install the required dependencies:
```bash
cd examples/custom_diffusion
pip install -r requirements.txt
pip install clip-retrieval
```
<Tip>
|
36_1_3
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#custom-diffusion
|
.md
|
```bash
cd examples/custom_diffusion
pip install -r requirements.txt
pip install clip-retrieval
```
<Tip>
🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.
</Tip>
Initialize an 🤗 Accelerate environment:
```bash
accelerate config
```
|
36_1_4
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#custom-diffusion
|
.md
|
</Tip>
Initialize an 🤗 Accelerate environment:
```bash
accelerate config
```
To set up a default 🤗 Accelerate environment without choosing any configurations:
```bash
accelerate config default
```
Or if your environment doesn't support an interactive shell, like a notebook, you can use:
```py
from accelerate.utils import write_basic_config
|
36_1_5
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#custom-diffusion
|
.md
|
write_basic_config()
```
Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.
<Tip>
|
36_1_6
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#custom-diffusion
|
.md
|
<Tip>
The following sections highlight parts of the training script that are important for understanding how to modify it, but they don't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion/train_custom_diffusion.py) and let us know if you have any questions or concerns.
</Tip>
|
36_1_7
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#script-parameters
|
.md
|
The training script contains all the parameters to help you customize your training run. These are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/custom_diffusion/train_custom_diffusion.py#L319) function. The function comes with default values, but you can also set your own values in the training command if you'd like.
For example, to change the resolution of the input image:
```bash
|
36_2_0
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#script-parameters
|
.md
|
For example, to change the resolution of the input image:
```bash
accelerate launch train_custom_diffusion.py \
--resolution=256
```
Many of the basic parameters are described in the [DreamBooth](dreambooth#script-parameters) training guide, so this guide focuses on the parameters unique to Custom Diffusion:
|
36_2_1
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#script-parameters
|
.md
|
- `--freeze_model`: freezes the key and value parameters in the cross-attention layer; the default is `crossattn_kv`, but you can set it to `crossattn` to train all the parameters in the cross-attention layer
- `--concepts_list`: to learn multiple concepts, provide a path to a JSON file containing the concepts
- `--modifier_token`: a special word used to represent the learned concept
- `--initializer_token`: a special word used to initialize the embeddings of the `modifier_token`
|
36_2_2
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#prior-preservation-loss
|
.md
|
Prior preservation loss is a method that uses a model's own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions.
Many of the parameters for prior preservation loss are described in the [DreamBooth](dreambooth#prior-preservation-loss) training guide.
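If it helps to see the idea in code, here is a minimal sketch of how a prior preservation loss is typically combined with the instance loss (the function and variable names are illustrative, not taken from the training script):
```py
import torch
import torch.nn.functional as F

def combined_loss(model_pred, target, prior_loss_weight=1.0):
    # The batch stacks instance samples and class (prior) samples, so split it in half
    model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
    target, target_prior = torch.chunk(target, 2, dim=0)

    # Instance loss on your own images, prior loss on the model's class samples
    instance_loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
    prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean")

    # The prior loss is scaled by --prior_loss_weight before being added
    return instance_loss + prior_loss_weight * prior_loss
```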
|
36_3_0
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#regularization
|
.md
|
Custom Diffusion trains on the target images together with a small set of real images to prevent overfitting. As you can imagine, it can be easy to overfit when you're only training on a few images! Download 200 real images with `clip_retrieval`. The `class_prompt` should be the same category as the target images. These images are stored in `class_data_dir`.
```bash
python retrieve.py --class_prompt cat --class_data_dir real_reg/samples_cat --num_class_images 200
```
|
36_4_0
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#regularization
|
.md
|
```bash
python retrieve.py --class_prompt cat --class_data_dir real_reg/samples_cat --num_class_images 200
```
To enable regularization, add the following parameters:
- `--with_prior_preservation`: whether to use prior preservation loss
- `--prior_loss_weight`: controls the influence of the prior preservation loss on the model
- `--real_prior`: whether to use a small set of real images to prevent overfitting
```bash
accelerate launch train_custom_diffusion.py \
--with_prior_preservation \
|
36_4_1
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#regularization
|
.md
|
```bash
accelerate launch train_custom_diffusion.py \
--with_prior_preservation \
--prior_loss_weight=1.0 \
--class_data_dir="./real_reg/samples_cat" \
--class_prompt="cat" \
--real_prior=True \
```
|
36_4_2
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#training-script
|
.md
|
<Tip>
A lot of the code in the Custom Diffusion training script is similar to the [DreamBooth](dreambooth#training-script) script. This guide instead focuses on the code that is relevant to Custom Diffusion.
</Tip>
The Custom Diffusion training script has two dataset classes:
|
36_5_0
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#training-script
|
.md
|
</Tip>
The Custom Diffusion training script has two dataset classes:
- [`CustomDiffusionDataset`](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/custom_diffusion/train_custom_diffusion.py#L165): preprocesses the images, class images, and prompts for training
|
36_5_1
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#training-script
|
.md
|
- [`PromptDataset`](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/custom_diffusion/train_custom_diffusion.py#L148): prepares the prompts for generating class images
|
36_5_2
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#training-script
|
.md
|
Next, the `modifier_token` is [added to the tokenizer](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/custom_diffusion/train_custom_diffusion.py#L811), converted to token ids, and the token embeddings are resized to account for the new `modifier_token`. Then the `modifier_token` embeddings are initialized with the embeddings of the `initializer_token`. All parameters in the text encoder are frozen, except for the token embeddings since this is what the model
|
36_5_3
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#training-script
|
.md
|
All parameters in the text encoder are frozen, except for the token embeddings since this is what the model is trying to learn to associate with the concepts.
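As a rough sketch of the token handling described above (illustrative code, not the script's exact implementation; the model id and initializer word are just examples):
```py
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "CompVis/stable-diffusion-v1-4"  # example base model
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")

# Add the modifier token and make room for it in the embedding matrix
tokenizer.add_tokens(["<new1>"])
modifier_token_id = tokenizer.convert_tokens_to_ids("<new1>")
text_encoder.resize_token_embeddings(len(tokenizer))

# Initialize the new token's embedding from the initializer token's embedding
initializer_token_id = tokenizer.encode("cat", add_special_tokens=False)[0]  # stands in for --initializer_token
token_embeds = text_encoder.get_input_embeddings().weight.data
token_embeds[modifier_token_id] = token_embeds[initializer_token_id]
```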
|
36_5_4
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#training-script
|
.md
|
```py
params_to_freeze = itertools.chain(
text_encoder.text_model.encoder.parameters(),
text_encoder.text_model.final_layer_norm.parameters(),
text_encoder.text_model.embeddings.position_embedding.parameters(),
)
freeze_params(params_to_freeze)
```
|
36_5_5
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#training-script
|
.md
|
)
freeze_params(params_to_freeze)
```
Now you'll need to add the [Custom Diffusion weights](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/custom_diffusion/train_custom_diffusion.py#L911C3-L911C3) to the attention layers. This is a really important step for getting the shape and size of the attention weights correct, and for setting the appropriate number of attention processors in each UNet block.
```py
st = unet.state_dict()
|
36_5_6
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#training-script
|
.md
|
```py
st = unet.state_dict()
for name, _ in unet.attn_processors.items():
cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
if name.startswith("mid_block"):
hidden_size = unet.config.block_out_channels[-1]
elif name.startswith("up_blocks"):
block_id = int(name[len("up_blocks.")])
hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
elif name.startswith("down_blocks"):
block_id = int(name[len("down_blocks.")])
|
36_5_7
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#training-script
|
.md
|
elif name.startswith("down_blocks"):
block_id = int(name[len("down_blocks.")])
hidden_size = unet.config.block_out_channels[block_id]
layer_name = name.split(".processor")[0]
weights = {
"to_k_custom_diffusion.weight": st[layer_name + ".to_k.weight"],
"to_v_custom_diffusion.weight": st[layer_name + ".to_v.weight"],
}
if train_q_out:
weights["to_q_custom_diffusion.weight"] = st[layer_name + ".to_q.weight"]
weights["to_out_custom_diffusion.0.weight"] = st[layer_name + ".to_out.0.weight"]
|
36_5_8
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#training-script
|
.md
|
weights["to_out_custom_diffusion.0.weight"] = st[layer_name + ".to_out.0.weight"]
weights["to_out_custom_diffusion.0.bias"] = st[layer_name + ".to_out.0.bias"]
if cross_attention_dim is not None:
custom_diffusion_attn_procs[name] = attention_class(
train_kv=train_kv,
train_q_out=train_q_out,
hidden_size=hidden_size,
cross_attention_dim=cross_attention_dim,
).to(unet.device)
custom_diffusion_attn_procs[name].load_state_dict(weights)
else:
custom_diffusion_attn_procs[name] = attention_class(
train_kv=False,
|
36_5_9
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#training-script
|
.md
|
else:
custom_diffusion_attn_procs[name] = attention_class(
train_kv=False,
train_q_out=False,
hidden_size=hidden_size,
cross_attention_dim=cross_attention_dim,
)
del st
unet.set_attn_processor(custom_diffusion_attn_procs)
custom_diffusion_layers = AttnProcsLayers(unet.attn_processors)
```
|
36_5_10
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#training-script
|
.md
|
unet.set_attn_processor(custom_diffusion_attn_procs)
custom_diffusion_layers = AttnProcsLayers(unet.attn_processors)
```
The [optimizer](https://github.com/huggingface/diffusers/blob/84cd9e8d01adb47f046b1ee449fc76a0c32dc4e2/examples/custom_diffusion/train_custom_diffusion.py#L982) is initialized to update the cross-attention layer parameters:
```py
optimizer = optimizer_class(
itertools.chain(text_encoder.get_input_embeddings().parameters(), custom_diffusion_layers.parameters())
|
36_5_11
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#training-script
|
.md
|
itertools.chain(text_encoder.get_input_embeddings().parameters(), custom_diffusion_layers.parameters())
if args.modifier_token is not None
else custom_diffusion_layers.parameters(),
lr=args.learning_rate,
betas=(args.adam_beta1, args.adam_beta2),
weight_decay=args.adam_weight_decay,
eps=args.adam_epsilon,
)
```
|
36_5_12
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#training-script
|
.md
|
betas=(args.adam_beta1, args.adam_beta2),
weight_decay=args.adam_weight_decay,
eps=args.adam_epsilon,
)
```
In the [training loop](https://github.com/huggingface/diffusers/blob/84cd9e8d01adb47f046b1ee449fc76a0c32dc4e2/examples/custom_diffusion/train_custom_diffusion.py#L1048), it is important to only update the embeddings for the concept you're trying to learn. This means setting the gradients of all the other token embeddings to zero:
```py
if args.modifier_token is not None:
|
36_5_13
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#training-script
|
.md
|
```py
if args.modifier_token is not None:
if accelerator.num_processes > 1:
grads_text_encoder = text_encoder.module.get_input_embeddings().weight.grad
else:
grads_text_encoder = text_encoder.get_input_embeddings().weight.grad
index_grads_to_zero = torch.arange(len(tokenizer)) != modifier_token_id[0]
for i in range(1, len(modifier_token_id)):
index_grads_to_zero = index_grads_to_zero & (
torch.arange(len(tokenizer)) != modifier_token_id[i]
)
|
36_5_14
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#training-script
|
.md
|
index_grads_to_zero = index_grads_to_zero & (
torch.arange(len(tokenizer)) != modifier_token_id[i]
)
grads_text_encoder.data[index_grads_to_zero, :] = grads_text_encoder.data[
index_grads_to_zero, :
].fill_(0)
```
|
36_5_15
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#launch-the-script
|
.md
|
Once you've made all your changes or you're okay with the default configuration, you're ready to launch the training script! 🚀
In this guide, you'll download and use these example [cat images](https://www.cs.cmu.edu/~custom-diffusion/assets/data.zip). You can also create and use your own dataset if you want (see the [Create a dataset for training](create_dataset) guide).
|
36_6_0
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#launch-the-script
|
.md
|
Set the environment variable `MODEL_NAME` to a model id on the Hub or a path to a local model, `INSTANCE_DIR` to the path where you just downloaded the cat images to, and `OUTPUT_DIR` to where you want to save the model. You'll use `<new1>` as the special word to tie the newly learned embeddings to. The script creates and saves model checkpoints and a pytorch_custom_diffusion_weights.bin file to your repository.
|
36_6_1
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#launch-the-script
|
.md
|
To monitor training progress with Weights and Biases, add the `--report_to=wandb` parameter to the training command and specify a validation prompt with `--validation_prompt`. This is useful for debugging and saving intermediate results.
<Tip>
If you're training on human faces, the Custom Diffusion team has found the following parameters to work well:
- `--learning_rate=5e-6`
- `--max_train_steps` can be anywhere between 1000 and 2000
- `--freeze_model=crossattn`
|
36_6_2
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#launch-the-script
|
.md
|
- `--learning_rate=5e-6`
- `--max_train_steps` can be anywhere between 1000 and 2000
- `--freeze_model=crossattn`
- use at least 15-20 images to train with
</Tip>
<hfoptions id="training-inference">
<hfoption id="single concept">
```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export OUTPUT_DIR="path-to-save-model"
export INSTANCE_DIR="./data/cat"
|
36_6_3
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#launch-the-script
|
.md
|
accelerate launch train_custom_diffusion.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--output_dir=$OUTPUT_DIR \
--class_data_dir=./real_reg/samples_cat/ \
--with_prior_preservation \
--real_prior \
--prior_loss_weight=1.0 \
--class_prompt="cat" \
--num_class_images=200 \
--instance_prompt="photo of a <new1> cat" \
--resolution=512 \
--train_batch_size=2 \
--learning_rate=1e-5 \
--lr_warmup_steps=0 \
--max_train_steps=250 \
--scale_lr \
--hflip \
|
36_6_4
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#launch-the-script
|
.md
|
--train_batch_size=2 \
--learning_rate=1e-5 \
--lr_warmup_steps=0 \
--max_train_steps=250 \
--scale_lr \
--hflip \
--modifier_token "<new1>" \
--validation_prompt="<new1> cat sitting in a bucket" \
--report_to="wandb" \
--push_to_hub
```
</hfoption>
<hfoption id="multiple concepts">
|
36_6_5
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#launch-the-script
|
.md
|
--report_to="wandb" \
--push_to_hub
```
</hfoption>
<hfoption id="multiple concepts">
Custom Diffusion can also learn multiple concepts if you provide a [JSON](https://github.com/adobe-research/custom-diffusion/blob/main/assets/concept_list.json) file with some details about each concept it should learn.
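As a rough sketch, such a JSON file is a list of per-concept entries; the keys below mirror the linked example file, but double-check them against that file before training:
```py
import json

# Hypothetical two-concept list matching the linked example's structure
concepts_list = [
    {
        "instance_prompt": "photo of a <new1> cat",
        "class_prompt": "cat",
        "instance_data_dir": "./data/cat",
        "class_data_dir": "./real_reg/samples_cat/",
    },
    {
        "instance_prompt": "photo of a <new2> wooden pot",
        "class_prompt": "wooden pot",
        "instance_data_dir": "./data/wooden_pot",
        "class_data_dir": "./real_reg/samples_wooden_pot/",
    },
]

with open("concept_list.json", "w") as f:
    json.dump(concepts_list, f, indent=4)
```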
Run clip-retrieval to collect some real images to use for regularization:
```bash
pip install clip-retrieval
python retrieve.py --class_prompt {} --class_data_dir {} --num_class_images 200
```
|
36_6_6
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#launch-the-script
|
.md
|
```bash
pip install clip-retrieval
python retrieve.py --class_prompt {} --class_data_dir {} --num_class_images 200
```
Then you can launch the script:
```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export OUTPUT_DIR="path-to-save-model"
|
36_6_7
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#launch-the-script
|
.md
|
accelerate launch train_custom_diffusion.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--output_dir=$OUTPUT_DIR \
--concepts_list=./concept_list.json \
--with_prior_preservation \
--real_prior \
--prior_loss_weight=1.0 \
--resolution=512 \
--train_batch_size=2 \
--learning_rate=1e-5 \
--lr_warmup_steps=0 \
--max_train_steps=500 \
--num_class_images=200 \
--scale_lr \
--hflip \
--modifier_token "<new1>+<new2>" \
--push_to_hub
```
</hfoption>
</hfoptions>
|
36_6_8
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#launch-the-script
|
.md
|
--scale_lr \
--hflip \
--modifier_token "<new1>+<new2>" \
--push_to_hub
```
</hfoption>
</hfoptions>
Once training is finished, you can use your new Custom Diffusion model for inference.
<hfoptions id="training-inference">
<hfoption id="single concept">
```py
import torch
from diffusers import DiffusionPipeline
|
36_6_9
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#launch-the-script
|
.md
|
pipeline = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16,
).to("cuda")
pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin")
pipeline.load_textual_inversion("path-to-save-model", weight_name="<new1>.bin")
|
36_6_10
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#launch-the-script
|
.md
|
image = pipeline(
"<new1> cat sitting in a bucket",
num_inference_steps=100,
guidance_scale=6.0,
eta=1.0,
).images[0]
image.save("cat.png")
```
</hfoption>
<hfoption id="multiple concepts">
```py
import torch
from huggingface_hub.repocard import RepoCard
from diffusers import DiffusionPipeline
|
36_6_11
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#launch-the-script
|
.md
|
model_id = "sayakpaul/custom-diffusion-cat-wooden-pot"
card = RepoCard.load(model_id)
base_model_id = card.data.to_dict()["base_model"]

pipeline = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16).to("cuda")
pipeline.unet.load_attn_procs(model_id, weight_name="pytorch_custom_diffusion_weights.bin")
pipeline.load_textual_inversion(model_id, weight_name="<new1>.bin")
pipeline.load_textual_inversion(model_id, weight_name="<new2>.bin")
|
36_6_12
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#launch-the-script
|
.md
|
image = pipeline(
"the <new1> cat sculpture in the style of a <new2> wooden pot",
num_inference_steps=100,
guidance_scale=6.0,
eta=1.0,
).images[0]
image.save("multi-subject.png")
```
</hfoption>
</hfoptions>
|
36_6_13
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/custom_diffusion.md
|
https://huggingface.co/docs/diffusers/en/training/custom_diffusion/#next-steps
|
.md
|
Congratulations on training a model with Custom Diffusion! 🎉 To learn more:
- Read the [Multi-Concept Customization of Text-to-Image Diffusion](https://www.cs.cmu.edu/~custom-diffusion/) blog post to learn more details about the experimental results from the Custom Diffusion team.
|
36_7_0
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/adapt_a_model.md
|
https://huggingface.co/docs/diffusers/en/training/adapt_a_model/#adapt-a-model-to-a-new-task
|
.md
|
Many diffusion systems share the same components, allowing you to adapt a pretrained model for one task to an entirely different task.
This guide will show you how to adapt a pretrained text-to-image model for inpainting by initializing and modifying the architecture of a pretrained [`UNet2DConditionModel`].
|
37_0_0
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/adapt_a_model.md
|
https://huggingface.co/docs/diffusers/en/training/adapt_a_model/#configure-unet2dconditionmodel-parameters
|
.md
|
A [`UNet2DConditionModel`] by default accepts 4 channels in the [input sample](https://huggingface.co/docs/diffusers/v0.16.0/en/api/models#diffusers.UNet2DConditionModel.in_channels). For example, load a pretrained text-to-image model like [`stable-diffusion-v1-5/stable-diffusion-v1-5`](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) and take a look at the number of `in_channels`:
```py
from diffusers import StableDiffusionPipeline
|
37_1_0
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/adapt_a_model.md
|
https://huggingface.co/docs/diffusers/en/training/adapt_a_model/#configure-unet2dconditionmodel-parameters
|
.md
|
pipeline = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", use_safetensors=True)
pipeline.unet.config["in_channels"]
4
```
Inpainting requires 9 channels in the input sample. You can check this value in a pretrained inpainting model like [`runwayml/stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting):
```py
from diffusers import StableDiffusionPipeline
|
37_1_1
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/adapt_a_model.md
|
https://huggingface.co/docs/diffusers/en/training/adapt_a_model/#configure-unet2dconditionmodel-parameters
|
.md
|
pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", use_safetensors=True)
pipeline.unet.config["in_channels"]
9
```
To adapt your text-to-image model for inpainting, you'll need to change the number of `in_channels` from 4 to 9.
|
37_1_2
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/adapt_a_model.md
|
https://huggingface.co/docs/diffusers/en/training/adapt_a_model/#configure-unet2dconditionmodel-parameters
|
.md
|
9
```
To adapt your text-to-image model for inpainting, you'll need to change the number of `in_channels` from 4 to 9.
Initialize a [`UNet2DConditionModel`] with the pretrained text-to-image model weights, and change `in_channels` to 9. Changing the number of `in_channels` means you need to set `ignore_mismatched_sizes=True` and `low_cpu_mem_usage=False` to avoid a size mismatch error because the shape is different now.
```py
from diffusers import UNet2DConditionModel
|
37_1_3
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/adapt_a_model.md
|
https://huggingface.co/docs/diffusers/en/training/adapt_a_model/#configure-unet2dconditionmodel-parameters
|
.md
|
model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
unet = UNet2DConditionModel.from_pretrained(
model_id,
subfolder="unet",
in_channels=9,
low_cpu_mem_usage=False,
ignore_mismatched_sizes=True,
use_safetensors=True,
)
```
|
37_1_4
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/adapt_a_model.md
|
https://huggingface.co/docs/diffusers/en/training/adapt_a_model/#configure-unet2dconditionmodel-parameters
|
.md
|
model_id,
subfolder="unet",
in_channels=9,
low_cpu_mem_usage=False,
ignore_mismatched_sizes=True,
use_safetensors=True,
)
```
The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the input channel weights (`conv_in.weight`) of the `unet` are randomly initialized. It is important to finetune the model for inpainting because otherwise the model returns noise.
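For instance, continuing from the `unet` created above, you can confirm which layer changed (the 320 output channels shown below assume this checkpoint's default first block width):
```py
# Only the first convolution changed shape, so only its weights are randomly initialized
print(unet.config.in_channels)    # 9
print(unet.conv_in.weight.shape)  # e.g. torch.Size([320, 9, 3, 3]) for this checkpoint
```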
|
37_1_5
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/
|
.md
|
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
38_0_0
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/
|
.md
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
|
38_0_1
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#cogvideox
|
.md
|
CogVideoX is a text-to-video generation model focused on creating more coherent videos aligned with a prompt. It achieves this using several methods.
- a 3D variational autoencoder that compresses videos spatially and temporally, improving compression rate and video accuracy.
- an expert transformer block to help align text and video, and a 3D full attention module for capturing and creating spatially and temporally accurate videos.
|
38_1_0
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#cogvideox
|
.md
|
Evaluation along the video-instruction dimensions found that CogVideoX performs well on consistent theme, dynamic information, consistent background, object information, smooth motion, color, scene, appearance style, and temporal style, but it cannot achieve good results with human action, spatial relationship, and multiple objects.
Finetuning with Diffusers can help make up for these poor results.
|
38_1_1
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#data-preparation
|
.md
|
The training script accepts data in two formats.
The first format is suited for small-scale training, and the second format uses a CSV format, which is more appropriate for streaming data for large-scale training. In the future, Diffusers will support the `<Video>` tag.
|
38_2_0
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#small-format
|
.md
|
This format uses two files: one file contains line-separated prompts and another contains line-separated paths to video data (the paths to the video files must be relative to the path you pass when specifying `--instance_data_root`). Let's take a look at an example to understand this better!
Assume you've specified `--instance_data_root` as `/dataset`, and that this directory contains the files: `prompts.txt` and `videos.txt`.
The `prompts.txt` file should contain line-separated prompts:
```
|
38_3_0
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#small-format
|
.md
|
The `prompts.txt` file should contain line-separated prompts:
```
A black and white animated sequence featuring a rabbit, named Rabbity Ribfried, and an anthropomorphic goat in a musical, playful environment, showcasing their evolving interaction.
|
38_3_1
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#small-format
|
.md
|
A black and white animated sequence on a ship's deck features a bulldog character, named Bully Bulldoger, showcasing exaggerated facial expressions and body language. The character progresses from confident to focused, then to strained and distressed, displaying a range of emotions as it navigates challenges. The ship's interior remains static in the background, with minimalistic details such as a bell and open door. The character's dynamic movements and changing expressions drive the narrative, with no
|
38_3_2
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#small-format
|
.md
|
details such as a bell and open door. The character's dynamic movements and changing expressions drive the narrative, with no camera movement to distract from its evolving reactions and physical gestures.
|
38_3_3
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#small-format
|
.md
|
...
```
The `videos.txt` file should contain line-separated paths to video files. Note that the path should be _relative_ to the `--instance_data_root` directory.
```
videos/00000.mp4
videos/00001.mp4
...
```
Overall, this is what your dataset would look like if you ran the `tree` command on the dataset root directory:
```
/dataset
├── prompts.txt
├── videos.txt
├── videos
    ├── videos/00000.mp4
    ├── videos/00001.mp4
    ├── ...
```
|
38_3_4
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#small-format
|
.md
|
```
/dataset
├── prompts.txt
├── videos.txt
├── videos
    ├── videos/00000.mp4
    ├── videos/00001.mp4
    ├── ...
```
When using this format, the `--caption_column` must be `prompts.txt` and `--video_column` must be `videos.txt`.
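Conceptually, the training script pairs line *i* of `prompts.txt` with line *i* of `videos.txt`. A minimal sketch of that pairing (using the example layout above) could look like this:
```python
from pathlib import Path

instance_data_root = Path("/dataset")  # value passed to --instance_data_root

prompts = (instance_data_root / "prompts.txt").read_text().strip().splitlines()
video_paths = (instance_data_root / "videos.txt").read_text().strip().splitlines()
assert len(prompts) == len(video_paths), "prompts.txt and videos.txt must have the same number of lines"

# Paths in videos.txt are relative to --instance_data_root
samples = [(prompt, instance_data_root / rel_path) for prompt, rel_path in zip(prompts, video_paths)]
print(samples[0])
```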
|
38_3_5
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#stream-format
|
.md
|
You could use a single CSV file. For the sake of this example, assume you have a `metadata.csv` file. The expected format is:
```
<CAPTION_COLUMN>,<PATH_TO_VIDEO_COLUMN>
"""A black and white animated sequence featuring a rabbit, named Rabbity Ribfried, and an anthropomorphic goat in a musical, playful environment, showcasing their evolving interaction.""","""00000.mp4"""
|
38_4_0
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#stream-format
|
.md
|
"""A black and white animated sequence on a ship's deck features a bulldog character, named Bully Bulldoger, showcasing exaggerated facial expressions and body language. The character progresses from confident to focused, then to strained and distressed, displaying a range of emotions as it navigates challenges. The ship's interior remains static in the background, with minimalistic details such as a bell and open door. The character's dynamic movements and changing expressions drive the narrative, with no
|
38_4_1
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#stream-format
|
.md
|
details such as a bell and open door. The character's dynamic movements and changing expressions drive the narrative, with no camera movement to distract from its evolving reactions and physical gestures.""","""00001.mp4"""
|
38_4_2
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#stream-format
|
.md
|
...
```
In this case, the `--instance_data_root` should be the location where the videos are stored and `--dataset_name` should be either a path to a local folder or a [`~datasets.load_dataset`] compatible dataset hosted on the Hub. Assuming you have videos of Minecraft gameplay at `https://huggingface.co/datasets/my-awesome-username/minecraft-videos`, you would have to specify `my-awesome-username/minecraft-videos`.
|
38_4_3
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#stream-format
|
.md
|
When using this format, the `--caption_column` must be `<CAPTION_COLUMN>` and `--video_column` must be `<PATH_TO_VIDEO_COLUMN>`.
You are not strictly restricted to the CSV format. Any format works as long as the `load_dataset` method supports the file format for loading a basic `<PATH_TO_VIDEO_COLUMN>` and `<CAPTION_COLUMN>`. The reason for going through these dataset organization gymnastics for loading video data is that `load_dataset` does not fully support all kinds of video formats.
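For example, a CSV in the layout above can be loaded with `load_dataset` directly; here `caption` and `video` stand in for whatever header names your `metadata.csv` actually uses:
```python
from datasets import load_dataset

dataset = load_dataset("csv", data_files="metadata.csv", split="train")
print(dataset.column_names)                        # e.g. ["caption", "video"]
print(dataset[0]["caption"], dataset[0]["video"])
```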
> [!NOTE]
|
38_4_4
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#stream-format
|
.md
|
> CogVideoX works best with long and descriptive LLM-augmented prompts for video generation. We recommend pre-processing your videos by first generating a summary using a VLM and then augmenting the prompts with an LLM. To generate the above captions, we use [MiniCPM-V-2_6](https://huggingface.co/openbmb/MiniCPM-V-2_6) and [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct). A very barebones and no-frills example for this is available
|
38_4_5
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#stream-format
|
.md
|
A very barebones and no-frills example for this is available [here](https://gist.github.com/a-r-r-o-w/4dee20250e82f4e44690a02351324a4a). The official recommendation for augmenting prompts is [ChatGLM](https://huggingface.co/THUDM?search_models=chatglm) and a length of 50-100 words is considered good.
|
38_4_6
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#stream-format
|
.md
|
> [!NOTE]
> It is expected that your dataset is already pre-processed. If not, some basic pre-processing can be done by playing with the following parameters:
> `--height`, `--width`, `--fps`, `--max_num_frames`, `--skip_frames_start` and `--skip_frames_end`.
> Presently, all videos in your dataset should contain the same number of video frames when using a training batch size > 1.
<!-- TODO: Implement frame packing in future to address above issue. -->
|
38_4_7
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
You need to set up your development environment by installing the necessary requirements. The following packages are required:
- Torch 2.0 or above based on the training features you are utilizing (might require latest or nightly versions for quantized/deepspeed training)
- `pip install diffusers transformers accelerate peft huggingface_hub` for all things modeling and training related
- `pip install datasets decord` for loading video training data
|
38_5_0
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
- `pip install datasets decord` for loading video training data
- `pip install bitsandbytes` for using 8-bit Adam or AdamW optimizers for memory-optimized training
- `pip install wandb` optionally for monitoring training logs
- `pip install deepspeed` optionally for [DeepSpeed](https://github.com/microsoft/DeepSpeed) training
- `pip install prodigyopt` optionally if you would like to use the Prodigy optimizer for training
|
38_5_1
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
- `pip install prodigyopt` optionally if you would like to use the Prodigy optimizer for training
To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
Before running the script, make sure you install the library from source:
```bash
|
38_5_2
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
Before running the script, make sure you install the library from source:
```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```
Then navigate to the example folder containing the training script and install the required dependencies for the script you're using:
- PyTorch
```bash
cd examples/cogvideo
pip install -r requirements.txt
```
And initialize an [🤗 Accelerate](https://github.com/huggingface/accelerate/) environment with:
```bash
accelerate config
|
38_5_3
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
And initialize an [🤗 Accelerate](https://github.com/huggingface/accelerate/) environment with:
```bash
accelerate config
```
Or, for a default accelerate configuration without answering questions about your environment:
```bash
accelerate config default
```
Or, if your environment doesn't support an interactive shell (e.g., a notebook):
```python
from accelerate.utils import write_basic_config
write_basic_config()
```
|
38_5_4
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
```python
from accelerate.utils import write_basic_config
write_basic_config()
```
When running `accelerate config`, if you use torch.compile, there can be dramatic speedups. The PEFT library is used as a backend for LoRA training, so make sure to have `peft>=0.6.0` installed in your environment.
If you would like to push your model to the Hub after training is completed with a neat model card, make sure you're logged in:
```bash
huggingface-cli login
|
38_5_5
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
# Alternatively, you could upload your model manually using:
# huggingface-cli upload my-cool-account-name/my-cool-lora-name /path/to/awesome/lora
```
Make sure your data is prepared as described in [Data Preparation](#data-preparation). When ready, you can begin training!
|
38_5_6
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
Make sure your data is prepared as described in [Data Preparation](#data-preparation). When ready, you can begin training!
Assuming you are training on 50 videos of a similar concept, we have found 1500-2000 steps to work well. The official recommendation, however, is 100 videos with a total of 4000 steps. Assuming you are training on a single GPU with a `--train_batch_size` of `1`:
- 1500 steps on 50 videos would correspond to `30` training epochs
|
38_5_7
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
- 1500 steps on 50 videos would correspond to `30` training epochs
- 4000 steps on 100 videos would correspond to `40` training epochs
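As a quick sanity check of this conversion (with `--train_batch_size 1` on a single GPU):
```python
def num_epochs(train_steps, num_videos, train_batch_size=1):
    # One epoch is one pass over all videos, so steps per epoch = num_videos / batch size
    return train_steps / (num_videos / train_batch_size)

print(num_epochs(1500, 50))   # 30.0
print(num_epochs(4000, 100))  # 40.0
```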
```bash
#!/bin/bash
|
38_5_8
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
GPU_IDS="0"
|
38_5_9
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
accelerate launch --gpu_ids $GPU_IDS examples/cogvideo/train_cogvideox_lora.py \
--pretrained_model_name_or_path THUDM/CogVideoX-2b \
--cache_dir <CACHE_DIR> \
--instance_data_root <PATH_TO_WHERE_VIDEO_FILES_ARE_STORED> \
--dataset_name my-awesome-name/my-awesome-dataset \
--caption_column <CAPTION_COLUMN> \
--video_column <PATH_TO_VIDEO_COLUMN> \
--id_token <ID_TOKEN> \
|
38_5_10
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
--validation_prompt "<ID_TOKEN> Spiderman swinging over buildings:::A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. The
|
38_5_11
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
tall bamboo, casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical atmosphere of this unique musical performance" \
|
38_5_12
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
--validation_prompt_separator ::: \
--num_validation_videos 1 \
--validation_epochs 10 \
--seed 42 \
--rank 64 \
--lora_alpha 64 \
--mixed_precision fp16 \
--output_dir /raid/aryan/cogvideox-lora \
--height 480 --width 720 --fps 8 --max_num_frames 49 --skip_frames_start 0 --skip_frames_end 0 \
--train_batch_size 1 \
--num_train_epochs 30 \
--checkpointing_steps 1000 \
--gradient_accumulation_steps 1 \
--learning_rate 1e-3 \
--lr_scheduler cosine_with_restarts \
--lr_warmup_steps 200 \
--lr_num_cycles 1 \
|
38_5_13
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
--learning_rate 1e-3 \
--lr_scheduler cosine_with_restarts \
--lr_warmup_steps 200 \
--lr_num_cycles 1 \
--enable_slicing \
--enable_tiling \
--optimizer Adam \
--adam_beta1 0.9 \
--adam_beta2 0.95 \
--max_grad_norm 1.0 \
--report_to wandb
```
To better track our training experiments, we're using the following flags in the command above:
* `--report_to wandb` will ensure the training runs are tracked on Weights and Biases. To use it, be sure to install `wandb` with `pip install wandb`.
|
38_5_14
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
* `validation_prompt` and `validation_epochs` to allow the script to do a few validation inference runs. This allows us to qualitatively check if the training is progressing as expected.
|
38_5_15
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
Setting the `<ID_TOKEN>` is not necessary. From some limited experimentation, we found it works better (as it resembles [Dreambooth](https://huggingface.co/docs/diffusers/en/training/dreambooth) training) than without. When provided, the `<ID_TOKEN>` is appended to the beginning of each prompt. So, if your `<ID_TOKEN>` was `"DISNEY"` and your prompt was `"Spiderman swinging over buildings"`, the effective prompt used in training would be `"DISNEY Spiderman swinging over buildings"`. When not provided, you
|
38_5_16
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
the effective prompt used in training would be `"DISNEY Spiderman swinging over buildings"`. When not provided, you would either be training without any additional token or could augment your dataset to apply the token where you wish before starting the training.
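In code, the effect of `--id_token` on each training prompt amounts to a simple prefix (illustrative sketch, not the script's exact code):
```python
def apply_id_token(prompt, id_token=None):
    # When an identifier token is provided, it is prepended to every training prompt
    return f"{id_token} {prompt}" if id_token else prompt

print(apply_id_token("Spiderman swinging over buildings", "DISNEY"))
# DISNEY Spiderman swinging over buildings
```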
|
38_5_17
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
> [!NOTE]
> You can pass `--use_8bit_adam` to reduce the memory requirements of training.
> [!IMPORTANT]
> The following settings have been tested at the time of adding CogVideoX LoRA training support:
> - Our testing was primarily done on CogVideoX-2b. We will work on CogVideoX-5b and CogVideoX-5b-I2V soon
|
38_5_18
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
> - One dataset comprised of 70 training videos of resolutions `200 x 480 x 720` (F x H x W). From this, by using frame skipping in data preprocessing, we created two smaller 49-frame and 16-frame datasets for faster experimentation and because the maximum limit recommended by the CogVideoX team is 49 frames. Out of the 70 videos, we created three groups of 10, 25 and 50 videos. All videos were similar in nature of the concept being trained.
> - 25+ videos worked best for training new concepts and styles.
|
38_5_19
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
> - 25+ videos worked best for training new concepts and styles.
> - We found that it is better to train with an identifier token that can be specified as `--id_token`. This is similar to Dreambooth-like training but normal finetuning without such a token works too.
> - Trained concept seemed to work decently well when combined with completely unrelated prompts. We expect even better results if CogVideoX-5B is finetuned.
|
38_5_20
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
> - The original repository uses a `lora_alpha` of `1`. We found this not suitable in many runs, possibly due to differences in modeling backends and training settings. Our recommendation is to set `lora_alpha` to either `rank` or `rank // 2`.
|
38_5_21
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
> - If you're training on data whose captions generate bad results with the original model, a `rank` of 64 and above is good and also the recommendation by the team behind CogVideoX. If the generations are already moderately good on your training captions, a `rank` of 16/32 should work. We found that setting the rank too low, say `4`, is not ideal and doesn't produce promising results.
|
38_5_22
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
> - The authors of CogVideoX recommend 4000 training steps and 100 training videos overall to achieve the best result. While that might yield the best results, we found from our limited experimentation that 2000 steps and 25 videos could also be sufficient.
|
38_5_23
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
> - When using the Prodigy optimizer for training, one can follow the recommendations from [this](https://huggingface.co/blog/sdxl_lora_advanced_script) blog. Prodigy tends to overfit quickly. From my very limited testing, I found a learning rate of `0.5` to be suitable in addition to `--prodigy_use_bias_correction`, `--prodigy_safeguard_warmup` and `--prodigy_decouple`.
|
38_5_24
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#training
|
.md
|
> - The recommended learning rate by the CogVideoX authors and from our experimentation with Adam/AdamW is between `1e-3` and `1e-4` for a dataset of 25+ videos.
>
> Note that our testing is not exhaustive due to limited time for exploration. Our recommendation would be to play around with the different knobs and dials to find the best settings for your data.
<!-- TODO: Test finetuning with CogVideoX-5b and CogVideoX-5b-I2V and update scripts accordingly -->
|
38_5_25
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#inference
|
.md
|
Once you have trained a LoRA model, inference can be done by simply loading the LoRA weights into the `CogVideoXPipeline`.
```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video
|
38_6_0
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#inference
|
.md
|
pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)
# pipe.load_lora_weights("/path/to/lora/weights", adapter_name="cogvideox-lora") # Or,
pipe.load_lora_weights("my-awesome-hf-username/my-awesome-lora-name", adapter_name="cogvideox-lora") # If loading from the HF Hub
pipe.to("cuda")
# Assuming lora_alpha=32 and rank=64 for training. If different, set accordingly
pipe.set_adapters(["cogvideox-lora"], [32 / 64])
|
38_6_1
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#inference
|
.md
|
prompt = "A vast, shimmering ocean flows gracefully under a twilight sky, its waves undulating in a mesmerizing dance of blues and greens. The surface glints with the last rays of the setting sun, casting golden highlights that ripple across the water. Seagulls soar above, their cries blending with the gentle roar of the waves. The horizon stretches infinitely, where the ocean meets the sky in a seamless blend of hues. Close-ups reveal the intricate patterns of the waves, capturing the fluidity and dynamic
|
38_6_2
|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/cogvideox.md
|
https://huggingface.co/docs/diffusers/en/training/cogvideox/#inference
|
.md
|
the sky in a seamless blend of hues. Close-ups reveal the intricate patterns of the waves, capturing the fluidity and dynamic beauty of the sea in motion."
|
38_6_3
|