The Stable Diffusion model can also infer depth from an image using MiDaS. This allows you to pass a text prompt and an initial image to condition the generation of new images, as well as a depth_map to preserve the image structure.
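If you already have a depth prediction, you can pass it directly instead of letting the pipeline estimate one. Below is a minimal sketch, assuming depth_map is a (batch, height, width) float tensor; the random tensor and blank image are stand-ins for a real MiDaS/DPT prediction and a real photo:

import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

init_image = Image.new("RGB", (512, 512))  # stand-in; use a real input image
depth_map = torch.rand(1, 512, 512)  # stand-in for a real depth prediction

# Passing depth_map skips the pipeline's built-in depth estimator
image = pipe(prompt="a fantasy landscape", image=init_image, depth_map=depth_map, strength=0.7).images[0]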
Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations!
( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers depth_estimator: DPTForDepthEstimation feature_extractor: DPTFeatureExtractor )
Parameters

vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) — Frozen text-encoder.
tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize text.
unet (UNet2DConditionModel) — A UNet2DConditionModel to denoise the encoded image latents.
scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
depth_estimator (DPTForDepthEstimation) — A DPTForDepthEstimation model to predict depth from the input image.
feature_extractor (DPTFeatureExtractor) — A DPTFeatureExtractor to prepare images for the depth estimator.

Pipeline for text-guided depth-based image-to-image generation using Stable Diffusion.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
The pipeline also inherits the following loading methods:
( prompt: typing.Union[str, typing.List[str]] = None image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.FloatTensor]] = None depth_map: typing.Optional[torch.FloatTensor] = None strength: float = 0.8 num_inference_steps: typing.Optional[int] = 50 guidance_scale: typing.Optional[float] = 7.5 negative_prompt: typing.Union[typing.List[str], str, NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: typing.Optional[float] = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None clip_skip: typing.Optional[int] = None ) → StableDiffusionPipelineOutput or tuple
Parameters

prompt (str or List[str], optional) — The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — Image or tensor representing an image batch to be used as the starting point. Can accept image latents as image only if depth_map is not None.
depth_map (torch.FloatTensor, optional) — Depth prediction to be used as additional conditioning for the image generation process. If not defined, it automatically predicts the depth with self.depth_estimator.
strength (float, optional, defaults to 0.8) — Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a starting point, and more noise is added the higher the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1 essentially ignores image.
num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. This parameter is modulated by strength.
guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic.
prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image or np.array.
return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
callback (Callable, optional) — A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called. If not specified, the callback is called at every step.
cross_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means the output of the pre-final layer is used for computing the prompt embeddings.

Returns

StableDiffusionPipelineOutput or tuple

If return_dict is True, StableDiffusionPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images.
The call function to the pipeline for generation.
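The strength and num_inference_steps arguments interact: only the final strength fraction of the schedule is actually run. A hedged sketch of this truncation, mirroring the timestep logic commonly used by image-to-image pipelines (exact rounding may differ between versions):

>>> def effective_steps(num_inference_steps, strength):
...     # Only the last `strength` fraction of the schedule is denoised
...     return min(int(num_inference_steps * strength), num_inference_steps)

>>> effective_steps(50, 0.8)
40
>>> effective_steps(50, 1.0)
50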
Examples:
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from diffusers import StableDiffusionDepth2ImgPipeline
>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
... "stabilityai/stable-diffusion-2-depth",
... torch_dtype=torch.float16,
... )
>>> pipe.to("cuda")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> init_image = Image.open(requests.get(url, stream=True).raw)
>>> prompt = "two tigers"
>>> n_prompt = "bad, deformed, ugly, bad anatomy"
>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
( slice_size: typing.Union[str, int, NoneType] = 'auto' )
Parameters

slice_size (str or int, optional, defaults to "auto") — When "auto", halves the input to the attention heads, so attention is computed in two steps. If "max", maximum memory is saved by running only one slice at a time. If a number is provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim must be a multiple of slice_size.

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor into slices to compute attention in several steps. For more than one attention head, the computation is performed sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.

⚠️ Don't enable attention slicing if you're already using scaled_dot_product_attention (SDPA) from PyTorch 2.0 or xFormers. These attention computations are already very memory efficient, so you won't need to enable this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!
Examples:
>>> import torch
>>> from diffusers import StableDiffusionPipeline
>>> pipe = StableDiffusionPipeline.from_pretrained(
... "runwayml/stable-diffusion-v1-5",
... torch_dtype=torch.float16,
... use_safetensors=True,
... )
>>> pipe = pipe.to("cuda")
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
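An explicit slice size can also be passed for the tightest memory budget (a small usage sketch; as described above, "max" runs one slice at a time):

>>> pipe.enable_attention_slicing("max")
>>> image = pipe(prompt).images[0]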
Disable sliced attention computation. If enable_attention_slicing
was previously called, attention is
computed in one step.
( attention_op: typing.Optional[typing.Callable] = None )
Parameters

attention_op (Callable, optional) — Override the default None operator for use as the op argument to the memory_efficient_attention() function of xFormers.

Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed up during training is not guaranteed.

⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedence.
Examples:
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp
>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
Disable memory efficient attention from xFormers.
( pretrained_model_name_or_path: typing.Union[str, typing.List[str], typing.Dict[str, torch.Tensor], typing.List[typing.Dict[str, torch.Tensor]]] token: typing.Union[str, typing.List[str], NoneType] = None tokenizer: typing.Optional[ForwardRef('PreTrainedTokenizer')] = None text_encoder: typing.Optional[ForwardRef('PreTrainedModel')] = None **kwargs )
Parameters

pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — Can be either one of the following or a list of them:
- A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a pretrained model hosted on the Hub.
- A path to a directory (for example ./my_text_inversion_directory/) containing the textual inversion weights.
- A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights.

token (str or List[str], optional) — Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a list, then token must also be a list of equal length.
tokenizer (CLIPTokenizer, optional) — A CLIPTokenizer to tokenize text. If not specified, the function takes self.tokenizer.
text_encoder (CLIPTextModel, optional) — Frozen text-encoder. If not specified, the function takes self.text_encoder.
weight_name (str, optional) — Name of a custom weight file. This should be used when the saved textual inversion file is in 🤗 Diffusers format but was saved under a specific weight name such as text_inv.bin, or when the file is in the Automatic1111 format.
cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won't be downloaded from the Hub.
use_auth_token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.
mirror (str, optional) — Mirror source to resolve accessibility issues if you're downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.

Load textual inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and Automatic1111 formats are supported).
Example:
To load a textual inversion embedding vector in 🤗 Diffusers format:
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
prompt = "A <cat-toy> backpack"
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")
To load a textual inversion embedding vector in Automatic1111 format, make sure to download the vector first (for example from Civitai) and then load it locally:
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")
prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")
( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]] adapter_name = None **kwargs )
Parameters

pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — See lora_state_dict().
adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
kwargs (dict, optional) — See lora_state_dict().

Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and self.text_encoder.

All kwargs are forwarded to self.lora_state_dict.

See lora_state_dict() for more details on how the state dict is loaded.

See load_lora_into_unet() for more details on how the state dict is loaded into self.unet.

See load_lora_into_text_encoder() for more details on how the state dict is loaded into self.text_encoder.
( save_directory: typing.Union[str, os.PathLike] unet_lora_layers: typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None text_encoder_lora_layers: typing.Dict[str, torch.nn.modules.module.Module] = None is_main_process: bool = True weight_name: str = None save_function: typing.Callable = None safe_serialization: bool = True )
Parameters

save_directory (str or os.PathLike) — Directory to save LoRA parameters to. Will be created if it doesn't exist.
unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the unet.
text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers.
is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training when you need to call this function on all processes; in that case, set is_main_process=True only on the main process to avoid race conditions.
save_function (Callable) — The function to use to save the state dictionary. Useful during distributed training when you need to replace torch.save with another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.
safe_serialization (bool, optional, defaults to True) — Whether to save the model using safetensors or the traditional PyTorch way with pickle.

Save the LoRA parameters corresponding to the UNet and text encoder.
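A hedged sketch of saving LoRA layers; the tiny state dicts below are stand-ins for the ones produced by a real LoRA training loop:

import torch
from diffusers import StableDiffusionPipeline

# Stand-in LoRA state dicts; in practice these come from your training loop
unet_lora_layers = {"dummy.lora.up.weight": torch.zeros(4, 4)}
text_encoder_lora_layers = {"dummy.lora.down.weight": torch.zeros(4, 4)}

StableDiffusionPipeline.save_lora_weights(
    save_directory="./my-lora",
    unet_lora_layers=unet_lora_layers,
    text_encoder_lora_layers=text_encoder_lora_layers,
    safe_serialization=True,
)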
( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None )
Parameters

prompt (str or List[str], optional) — The prompt to be encoded.
device (torch.device) — The torch device.
num_images_per_prompt (int) — The number of images that should be generated per prompt.
do_classifier_free_guidance (bool) — Whether to use classifier-free guidance or not.
negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
lora_scale (float, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means the output of the pre-final layer is used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.
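A sketch of calling encode_prompt directly to precompute embeddings and reuse them across calls, assuming pipe is the depth2img pipeline from the earlier example and that this diffusers version returns the (prompt_embeds, negative_prompt_embeds) pair:

import torch
from PIL import Image

init_image = Image.new("RGB", (512, 512))  # stand-in; use a real input image

prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "two tigers",
    device=torch.device("cuda"),
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="bad anatomy",
)

# Reuse the precomputed embeddings instead of re-encoding the prompt
image = pipe(
    image=init_image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
).images[0]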
( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] )
Parameters

images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size or a NumPy array of shape (batch_size, height, width, num_channels).
nsfw_content_detected (List[bool]) — List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content, or None if safety checking could not be performed.

Output class for Stable Diffusion pipelines.
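A short sketch of reading the output fields, assuming pipe and init_image from the examples above:

output = pipe(prompt="two tigers", image=init_image)  # return_dict=True is the default
image = output.images[0]
flagged = output.nsfw_content_detected  # None if safety checking was not performed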