The depth-guided Stable Diffusion model was created by researchers and engineers from CompVis, Stability AI, and LAION, as part of Stable Diffusion 2.0. It uses MiDaS to infer depth from an input image.
StableDiffusionDepth2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images, as well as a depth_map to preserve the image's structure.
The original codebase can be found here:
Available checkpoints are: stabilityai/stable-diffusion-2-depth
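The pipeline infers a depth map automatically when depth_map is not supplied, but you can also pass one explicitly to preserve the structure of the initial image. Below is a minimal sketch, not taken from this document: the Intel/dpt-hybrid-midas checkpoint and the (batch, height, width) layout assumed for depth_map are assumptions, not documented behavior.
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import DPTFeatureExtractor, DPTForDepthEstimation
>>> from diffusers import StableDiffusionDepth2ImgPipeline

>>> # Hypothetical external depth estimator; "Intel/dpt-hybrid-midas" is an assumption.
>>> feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-hybrid-midas")
>>> depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> init_image = Image.open(requests.get(url, stream=True).raw)

>>> with torch.no_grad():
...     inputs = feature_extractor(images=init_image, return_tensors="pt")
...     depth_map = depth_estimator(**inputs).predicted_depth  # assumed shape: (batch, height, width)

>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
... ).to("cuda")
>>> image = pipe(
...     prompt="two tigers",
...     image=init_image,
...     depth_map=depth_map.to("cuda", torch.float16),  # match the pipeline's device/dtype
... ).images[0]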
class diffusers.StableDiffusionDepth2ImgPipeline
( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers depth_estimator: DPTForDepthEstimation feature_extractor: DPTFeatureExtractor )
Parameters
text_encoder (CLIPTextModel) — Frozen text-encoder. Stable Diffusion uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
scheduler (KarrasDiffusionSchedulers) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
Pipeline for text-guided image to image generation using Stable Diffusion.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
__call__
(
prompt: typing.Union[str, typing.List[str]] = None
image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None
depth_map: typing.Optional[torch.FloatTensor] = None
strength: float = 0.8
num_inference_steps: typing.Optional[int] = 50
guidance_scale: typing.Optional[float] = 7.5
negative_prompt: typing.Union[str, typing.List[str], NoneType] = None
num_images_per_prompt: typing.Optional[int] = 1
eta: typing.Optional[float] = 0.0
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
prompt_embeds: typing.Optional[torch.FloatTensor] = None
negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None
callback_steps: int = 1
)
→
StableDiffusionPipelineOutput or tuple
Parameters
prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
image (torch.FloatTensor or PIL.Image.Image) — Image, or tensor representing an image batch, that will be used as the starting point for the process.
strength (float, optional, defaults to 0.8) — Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image will be used as a starting point, adding more noise to it the larger the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise will be maximum and the denoising process will run for the full number of iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores image. (See the sketch below for how strength and num_inference_steps interact.)
num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. This parameter is modulated by strength.
guidance_scale (float, optional, defaults to 7.5) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages generating images that are closely linked to the text prompt, usually at the expense of lower image quality.
negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler and is ignored for other schedulers.
generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.
prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.
return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
callback (Callable, optional) — A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) — The frequency at which the callback function will be called. If not specified, the callback will be called at every step.
Returns
StableDiffusionPipelineOutput or tuple
StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.
Function invoked when calling the pipeline for generation.
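As noted above, strength controls how many of the num_inference_steps denoising steps are actually run. The snippet below is a rough sketch of that relationship, based on how diffusers image-to-image pipelines typically pick their starting timestep; the exact formula is an assumption, not a quote of this pipeline's code.
>>> # Hypothetical sketch of the strength / num_inference_steps interaction.
>>> num_inference_steps = 50
>>> strength = 0.7
>>> effective_steps = min(int(num_inference_steps * strength), num_inference_steps)
>>> skipped_steps = num_inference_steps - effective_steps
>>> effective_steps, skipped_steps
(35, 15)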
Examples:
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from diffusers import StableDiffusionDepth2ImgPipeline
>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
... "stabilityai/stable-diffusion-2-depth",
... torch_dtype=torch.float16,
... )
>>> pipe.to("cuda")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> init_image = Image.open(requests.get(url, stream=True).raw)
>>> prompt = "two tigers"
>>> n_prompt = "bad, deformed, ugly, bad anatomy"
>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
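To make the example above reproducible, a torch.Generator can be passed through the generator argument (a minimal sketch reusing the objects defined above):
>>> generator = torch.Generator(device="cuda").manual_seed(0)
>>> image = pipe(
...     prompt=prompt,
...     image=init_image,
...     negative_prompt=n_prompt,
...     strength=0.7,
...     generator=generator,
... ).images[0]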
enable_attention_slicing
( slice_size: typing.Union[str, int, NoneType] = 'auto' )
Parameters
slice_size (str or int, optional, defaults to "auto") — When "auto", halves the input to the attention heads, so attention will be computed in two steps. If "max", maximum amount of memory will be saved by running only one slice at a time. If a number is provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim must be a multiple of slice_size.
Enable sliced attention computation.
When this option is enabled, the attention module will split the input tensor in slices, to compute attention in several steps. This is useful to save some memory in exchange for a small speed decrease.
Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go back to computing attention in one step.
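A minimal usage sketch, assuming pipe is the StableDiffusionDepth2ImgPipeline loaded in the example above:
>>> pipe.enable_attention_slicing()  # "auto": compute attention in two steps
>>> image = pipe(prompt=prompt, image=init_image, strength=0.7).images[0]
>>> pipe.disable_attention_slicing()  # back to computing attention in one step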
enable_xformers_memory_efficient_attention
( attention_op: typing.Optional[typing.Callable] = None )
Parameters
attention_op (Callable, optional) — Override the default None operator for use as the op argument to the memory_efficient_attention() function of xFormers.
Enable memory efficient attention as implemented in xformers.
When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference time. Speed up at training time is not guaranteed.
Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention is used.
Examples:
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp
>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
Disable memory efficient attention as implemented in xformers.
enable_sequential_cpu_offload
Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet, text_encoder, vae and safety checker have their state dicts saved to CPU and are then moved to torch.device('meta'), being loaded onto the GPU only when their specific submodule's forward method is called.
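A minimal sketch of enabling sequential CPU offload; the pipeline is not moved to the GPU manually here, since the offloading hooks are assumed to handle device placement themselves:
>>> import torch
>>> from diffusers import StableDiffusionDepth2ImgPipeline

>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
... )
>>> pipe.enable_sequential_cpu_offload()  # submodules move to the GPU only for their forward pass
>>> image = pipe(prompt="two tigers", image=init_image).images[0]  # init_image as in the example above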