Text-guided image-to-image generation

The StableDiffusionImg2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images.

Before you begin, make sure you have all the necessary libraries installed:

# uncomment to install the necessary libraries in Colab
#!pip install diffusers transformers ftfy accelerate

Get started by creating a StableDiffusionImg2ImgPipeline with a pretrained Stable Diffusion model like nitrosocke/Ghibli-Diffusion.

import torch
import requests
from PIL import Image
from io import BytesIO
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "nitrosocke/Ghibli-Diffusion", torch_dtype=torch.float16, use_safetensors=True
).to(device)
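
If you're running on a GPU with limited memory, you can optionally enable attention slicing on the pipeline; it trades a bit of speed for a lower peak memory footprint (this step isn't required for the rest of the guide):

# optional: reduce peak VRAM usage at a small cost in speed
pipe.enable_attention_slicing()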

Download and preprocess an initial image so you can pass it to the pipeline:

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image.thumbnail((768, 768))
init_image
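
As an aside, the diffusers.utils.load_image helper fetches a URL or local path and converts the image to RGB in one call, so the loading step above could equivalently be written as:

from diffusers.utils import load_image

# load_image downloads the file and converts it to an RGB PIL image
init_image = load_image(url)
init_image.thumbnail((768, 768))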

💡 strength is a value between 0.0 and 1.0 that controls how much noise is added to the input image. Values approaching 1.0 allow for more variation, but also produce images that are less semantically consistent with the input.

Define the prompt (since this checkpoint is finetuned on Ghibli-style art, you need to prefix the prompt with the ghibli style tokens) and run the pipeline:

prompt = "ghibli style, a fantasy landscape with castles"
generator = torch.Generator(device=device).manual_seed(1024)
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
image
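
The pipeline returns standard PIL images, so you can save the result like any other image (the filename here is just an example):

image.save("fantasy_landscape.png")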

You can also try experimenting with a different scheduler to see how that affects the output:

from diffusers import LMSDiscreteScheduler

lms = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.scheduler = lms
generator = torch.Generator(device=device).manual_seed(1024)
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
image
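
The same from_config pattern works for swapping in any other compatible scheduler. For example, here is a sketch using EulerDiscreteScheduler (the choice of scheduler is arbitrary):

from diffusers import EulerDiscreteScheduler

# reuse the existing scheduler's config so the swap is a one-liner
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
generator = torch.Generator(device=device).manual_seed(1024)
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
image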

Try generating images with different values for strength. You'll notice that lower values produce images that are more similar to the original image. Feel free to also switch between schedulers, as shown above, and see how that affects the output.
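
For a quick side-by-side comparison, you can sweep strength yourself and paste the results into a single strip using plain PIL; the strength values and output filename below are arbitrary choices:

# generate one image per strength value, reusing the same seed for a fair comparison
strengths = [0.3, 0.5, 0.75]
images = []
for s in strengths:
    generator = torch.Generator(device=device).manual_seed(1024)
    images.append(
        pipe(prompt=prompt, image=init_image, strength=s, guidance_scale=7.5, generator=generator).images[0]
    )

# paste the outputs side by side into one image
grid = Image.new("RGB", (sum(im.width for im in images), max(im.height for im in images)))
x = 0
for im in images:
    grid.paste(im, (x, 0))
    x += im.width
grid.save("strength_comparison.png")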