The Stable Diffusion model was created by researchers and engineers from CompVis, Stability AI, Runway, and LAION. The StableDiffusionImg2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images using Stable Diffusion.
The original codebase can be found here: CompVis/stable-diffusion
StableDiffusionImg2ImgPipeline is compatible with all Stable Diffusion checkpoints for Text-to-Image.
The pipeline uses the diffusion-denoising mechanism proposed in SDEdit (SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations, by Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon).
class diffusers.StableDiffusionImg2ImgPipeline< source >
( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True )
- vae (AutoencoderKL) — Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder (CLIPTextModel) — Frozen text-encoder. Stable Diffusion uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
- tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
- unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
- scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
- safety_checker (StableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for details.
- feature_extractor (CLIPFeatureExtractor) — Model that extracts features from generated images to be used as inputs for the safety_checker.
Pipeline for text-guided image-to-image generation using Stable Diffusion.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
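In most cases the pipeline is loaded with from_pretrained, which assembles all components at once. As a rough sketch only, the constructor can also be called with components loaded individually; the runwayml/stable-diffusion-v1-5 repository and its standard subfolder layout are assumed here:

# Sketch: assembling the pipeline from individually loaded components.
# Assumes the standard subfolder layout of runwayml/stable-diffusion-v1-5.
from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer

from diffusers import AutoencoderKL, PNDMScheduler, StableDiffusionImg2ImgPipeline, UNet2DConditionModel
from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker

model_id = "runwayml/stable-diffusion-v1-5"

pipe = StableDiffusionImg2ImgPipeline(
    vae=AutoencoderKL.from_pretrained(model_id, subfolder="vae"),
    text_encoder=CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder"),
    tokenizer=CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer"),
    unet=UNet2DConditionModel.from_pretrained(model_id, subfolder="unet"),
    scheduler=PNDMScheduler.from_pretrained(model_id, subfolder="scheduler"),
    safety_checker=StableDiffusionSafetyChecker.from_pretrained(model_id, subfolder="safety_checker"),
    feature_extractor=CLIPFeatureExtractor.from_pretrained(model_id, subfolder="feature_extractor"),
)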
__call__< source >
prompt: typing.Union[str, typing.List[str]] = None
image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None
strength: float = 0.8
num_inference_steps: typing.Optional[int] = 50
guidance_scale: typing.Optional[float] = 7.5
negative_prompt: typing.Union[str, typing.List[str], NoneType] = None
num_images_per_prompt: typing.Optional[int] = 1
eta: typing.Optional[float] = 0.0
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
prompt_embeds: typing.Optional[torch.FloatTensor] = None
negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None
callback_steps: int = 1
- prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
- image (torch.FloatTensor or PIL.Image.Image) — Image, or tensor representing an image batch, that will be used as the starting point for the process.
- strength (float, optional, defaults to 0.8) — Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image will be used as a starting point, adding more noise to it the larger the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, the added noise will be maximum and the denoising process will run for the full number of iterations specified in num_inference_steps. A value of 1 therefore essentially ignores image (see the sketch after the usage example below).
- num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. This parameter will be modulated by strength.
- guidance_scale (float, optional, defaults to 7.5) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages generating images that are closely linked to the text prompt, usually at the expense of lower image quality.
- negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
- num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
- eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler; will be ignored for others.
- generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.
- prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
- negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
- output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.
- return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
- callback (Callable, optional) — A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
- callback_steps (int, optional, defaults to 1) — The frequency at which the callback function will be called. If not specified, the callback will be called at every step.
Returns: StableDiffusionPipelineOutput or tuple — StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.
Function invoked when calling the pipeline for generation.
import requests
import torch
from PIL import Image
from io import BytesIO

from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda"
model_id_or_path = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe = pipe.to(device)

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((768, 512))

prompt = "A fantasy landscape, trending on artstation"

images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
images[0].save("fantasy_landscape.png")
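The interplay between strength and num_inference_steps determines how many denoising steps actually run on the noised input image. The following is an illustration only, mirroring (not replacing) the pipeline's internal timestep selection:

# Illustration only: how strength scales the number of denoising steps actually run.
def effective_steps(num_inference_steps: int, strength: float) -> int:
    # Noise is added up to the `strength` fraction of the schedule,
    # and the pipeline then denoises over the remaining steps.
    return min(int(num_inference_steps * strength), num_inference_steps)

print(effective_steps(50, 0.75))  # 37 denoising steps
print(effective_steps(50, 1.0))   # 50 steps: maximum noise, the input image content is essentially ignored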
enable_attention_slicing< source >
( slice_size: typing.Union[str, int, NoneType] = 'auto' )
- slice_size (str or int, optional, defaults to "auto") — When "auto", halves the input to the attention heads, so attention will be computed in two steps. If "max", maximum amount of memory will be saved by running only one slice at a time. If a number is provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim must be a multiple of slice_size.
Enable sliced attention computation.
When this option is enabled, the attention module will split the input tensor in slices, to compute attention in several steps. This is useful to save some memory in exchange for a small speed decrease.
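For example (a minimal usage sketch, reusing the pipe, prompt and init_image objects from the example above):

pipe.enable_attention_slicing()          # "auto": compute attention in two steps
# pipe.enable_attention_slicing("max")   # save the most memory by running one slice at a time
image = pipe(prompt=prompt, image=init_image, strength=0.75).images[0]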
disable_attention_slicing< source >
Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go back to computing attention in one step.
enable_xformers_memory_efficient_attention< source >
( attention_op: typing.Optional[typing.Callable] = None )
- attention_op (Callable, optional) — Override the default None operator for use as the op argument to the memory_efficient_attention() function of xFormers.
Enable memory efficient attention as implemented in xformers.
When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference time. Speed up at training time is not guaranteed.
Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention is used.
import torch
from diffusers import DiffusionPipeline
from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
# Workaround for not accepting attention shape using VAE for Flash Attention
pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
disable_xformers_memory_efficient_attention< source >
Disable memory efficient attention as implemented in xformers.
enable_model_cpu_offload< source >
( gpu_id = 0 )
Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet.
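A minimal usage sketch (requires accelerate to be installed; reuses init_image from the example above, and pipe.to("cuda") should not be called when offloading is enabled):

import torch

from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # whole models are moved to the GPU only while they run
image = pipe(prompt="A fantasy landscape", image=init_image, strength=0.75).images[0]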
enable_sequential_cpu_offload< source >
( gpu_id = 0 )
Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
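Usage mirrors enable_model_cpu_offload (a sketch under the same assumptions); expect lower memory usage and slower inference, since offloading happens per submodule:

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.enable_sequential_cpu_offload()  # submodules are loaded to the GPU one forward call at a time
image = pipe(prompt="A fantasy landscape", image=init_image, strength=0.75).images[0]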