Diffusers documentation

InstructPix2Pix

InstructPix2Pix is a method to fine-tune text-conditioned diffusion models such that they can follow an edit instruction for an input image. Models fine-tuned using this method take the following as inputs:

[Figure: example inputs, an input image and an edit instruction]

The output is an “edited” image that reflects the edit instruction applied on the input image:

[Figure: example output, the edited image]

The train_instruct_pix2pix.py script (you can find it here) shows how to implement the training procedure and adapt it for Stable Diffusion.

Disclaimer: Even though train_instruct_pix2pix.py implements the InstructPix2Pix training procedure faithfully to the original implementation, we have only tested it on a small-scale dataset, which can impact the end results. For better results, we recommend longer training runs with a larger dataset. Here you can find a large dataset for InstructPix2Pix training.

Running locally with PyTorch

Installing the dependencies

Before running the scripts, make sure to install the library’s training dependencies:

Important

To make sure you can successfully run the latest versions of the example scripts, we highly recommend installing from source and keeping the installation up to date, since we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .

Then cd into the example folder:

cd examples/instruct_pix2pix

Now run:

pip install -r requirements.txt

And initialize an 🤗Accelerate environment with:

accelerate config

Or, for a default accelerate configuration without answering questions about your environment:

accelerate config default

Or, if your environment doesn’t support an interactive shell (e.g., a notebook):

from accelerate.utils import write_basic_config

write_basic_config()

Toy example

As mentioned before, we’ll use a small toy dataset for training. The dataset is a smaller version of the original dataset used in the InstructPix2Pix paper. To use your own dataset, take a look at the Create a dataset for training guide.
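If you want to take a quick look at the toy dataset before training, the short sketch below (assuming the 🤗 Datasets library is installed) loads it and prints its size and column names; if your own dataset uses different column names, check the training script’s arguments for how to map them.

from datasets import load_dataset

# Load the toy dataset used in this example and inspect its structure.
dataset = load_dataset("fusing/instructpix2pix-1000-samples", split="train")
print(dataset)  # number of rows and column names
print(dataset[0].keys())  # fields of a single example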

Specify the MODEL_NAME environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the pretrained_model_name_or_path argument. You’ll also need to specify the dataset name in DATASET_ID:

export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export DATASET_ID="fusing/instructpix2pix-1000-samples"

Now, we can launch training. The script saves all the components (feature_extractor, scheduler, text_encoder, unet, etc.) in a subfolder of your repository.

accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \
    --pretrained_model_name_or_path=$MODEL_NAME \
    --dataset_name=$DATASET_ID \
    --enable_xformers_memory_efficient_attention \
    --resolution=256 --random_flip \
    --train_batch_size=4 --gradient_accumulation_steps=4 --gradient_checkpointing \
    --max_train_steps=15000 \
    --checkpointing_steps=5000 --checkpoints_total_limit=1 \
    --learning_rate=5e-05 --max_grad_norm=1 --lr_warmup_steps=0 \
    --conditioning_dropout_prob=0.05 \
    --mixed_precision=fp16 \
    --seed=42 \
    --push_to_hub

Additionally, we support performing validation inference to monitor training progress with Weights and Biases. You can enable this feature with report_to="wandb":

accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \
    --pretrained_model_name_or_path=$MODEL_NAME \
    --dataset_name=$DATASET_ID \
    --enable_xformers_memory_efficient_attention \
    --resolution=256 --random_flip \
    --train_batch_size=4 --gradient_accumulation_steps=4 --gradient_checkpointing \
    --max_train_steps=15000 \
    --checkpointing_steps=5000 --checkpoints_total_limit=1 \
    --learning_rate=5e-05 --max_grad_norm=1 --lr_warmup_steps=0 \
    --conditioning_dropout_prob=0.05 \
    --mixed_precision=fp16 \
    --val_image_url="https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" \
    --validation_prompt="make the mountains snowy" \
    --seed=42 \
    --report_to=wandb \
    --push_to_hub

We recommend this type of validation as it can be useful for model debugging. Note that you need wandb installed to use this. You can install wandb by running pip install wandb.

Here, you can find an example training run that includes some validation samples and the training hyperparameters.

Note: In the original paper, the authors observed that even when the model is trained with an image resolution of 256x256, it generalizes well to bigger resolutions such as 512x512. This is likely because of the larger dataset they used during training.

Training with multiple GPUs

accelerate allows for seamless multi-GPU training. Follow the instructions here for running distributed training with accelerate. Here is an example command:

accelerate launch --mixed_precision="fp16" --multi_gpu train_instruct_pix2pix.py \
 --pretrained_model_name_or_path=runwayml/stable-diffusion-v1-5 \
 --dataset_name=sayakpaul/instructpix2pix-1000-samples \
 --use_ema \
 --enable_xformers_memory_efficient_attention \
 --resolution=512 --random_flip \
 --train_batch_size=4 --gradient_accumulation_steps=4 --gradient_checkpointing \
 --max_train_steps=15000 \
 --checkpointing_steps=5000 --checkpoints_total_limit=1 \
 --learning_rate=5e-05 --lr_warmup_steps=0 \
 --conditioning_dropout_prob=0.05 \
 --mixed_precision=fp16 \
 --seed=42 \
 --push_to_hub

Inference

Once training is complete, we can perform inference:

import requests
import torch
from PIL import Image, ImageOps
from diffusers import StableDiffusionInstructPix2PixPipeline

model_id = "your_model_id"  # <- replace this
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16, use_safetensors=True
).to("cuda")
generator = torch.Generator("cuda").manual_seed(0)

url = "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/test_pix2pix_4.png"


def download_image(url):
    # Download the image, fix its orientation from the EXIF metadata, and convert it to RGB.
    image = Image.open(requests.get(url, stream=True).raw)
    image = ImageOps.exif_transpose(image)
    image = image.convert("RGB")
    return image


image = download_image(url)
prompt = "wipe out the lake"
num_inference_steps = 20
image_guidance_scale = 1.5
guidance_scale = 10

edited_image = pipe(
    prompt,
    image=image,
    num_inference_steps=num_inference_steps,
    image_guidance_scale=image_guidance_scale,
    guidance_scale=guidance_scale,
    generator=generator,
).images[0]
edited_image.save("edited_image.png")

An example model repo obtained using this training script can be found here: sayakpaul/instruct-pix2pix.

We encourage you to play with the following three parameters to control speed and quality during inference:

  • num_inference_steps
  • image_guidance_scale
  • guidance_scale

Particularly, image_guidance_scale and guidance_scale can have a profound impact on the generated (“edited”) image (see here for an example).
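For instance, a small sweep like the one sketched below (reusing pipe, image, and prompt from the inference example above; the specific values are illustrative rather than recommendations) makes it easy to compare settings side by side.

import torch

# Compare a few (image_guidance_scale, guidance_scale) combinations with a fixed seed.
# Assumes `pipe`, `image`, and `prompt` are defined as in the inference example above;
# the values below are illustrative, not tuned recommendations.
for image_guidance_scale in (1.0, 1.5, 2.0):
    for guidance_scale in (5.0, 7.5, 10.0):
        generator = torch.Generator("cuda").manual_seed(0)
        edited = pipe(
            prompt,
            image=image,
            num_inference_steps=20,
            image_guidance_scale=image_guidance_scale,
            guidance_scale=guidance_scale,
            generator=generator,
        ).images[0]
        edited.save(f"edited_ig{image_guidance_scale}_g{guidance_scale}.png")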

If you’re looking for some interesting ways to use the InstructPix2Pix training methodology, we welcome you to check out this blog post: Instruction-tuning Stable Diffusion with InstructPix2Pix.

Stable Diffusion XL

Training with Stable Diffusion XL is also supported via the train_instruct_pix2pix_sdxl.py script. Please refer to the docs here.
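As a quick orientation for inference with the SDXL variant, here is a minimal sketch using the StableDiffusionXLInstructPix2PixPipeline class; the model id is a placeholder for whatever checkpoint your SDXL training run produces, and the parameter values are illustrative.

import torch
from diffusers import StableDiffusionXLInstructPix2PixPipeline
from diffusers.utils import load_image

# "your_sdxl_model_id" is a placeholder; replace it with the repository produced
# by train_instruct_pix2pix_sdxl.py.
pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained(
    "your_sdxl_model_id", torch_dtype=torch.float16
).to("cuda")

image = load_image(
    "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/test_pix2pix_4.png"
)
edited_image = pipe(
    "wipe out the lake",
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,
    guidance_scale=7.5,
).images[0]
edited_image.save("edited_image_sdxl.png")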