Latent upscaler
The Stable Diffusion latent upscaler model was created by Katherine Crowson in collaboration with Stability AI. It is used to enhance the output image resolution by a factor of 2 (see this demo notebook for a demonstration of the original implementation).
Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations!
StableDiffusionLatentUpscalePipeline
class diffusers.StableDiffusionLatentUpscalePipeline
< source >( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: EulerDiscreteScheduler )
Parameters
- vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder (CLIPTextModel) — Frozen text-encoder (clip-vit-large-patch14).
- tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize text.
- unet (UNet2DConditionModel) — A UNet2DConditionModel to denoise the encoded image latents.
- scheduler (SchedulerMixin) — An EulerDiscreteScheduler to be used in combination with unet to denoise the encoded image latents.
Pipeline for upscaling Stable Diffusion output image resolution by a factor of 2.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
The pipeline also inherits the following loading methods:
- from_single_file() for loading .ckpt files
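A minimal sketch of single-file loading (the checkpoint path below is hypothetical; substitute a local .ckpt or .safetensors file of this model):

>>> import torch
>>> from diffusers import StableDiffusionLatentUpscalePipeline

>>> # "./sd-x2-latent-upscaler.safetensors" is a hypothetical local path;
>>> # point this at your own single-file checkpoint
>>> upscaler = StableDiffusionLatentUpscalePipeline.from_single_file(
...     "./sd-x2-latent-upscaler.safetensors", torch_dtype=torch.float16
... )
>>> upscaler.to("cuda")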
__call__
< source >( prompt: Union[str, List[str]] image: Union[torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray]] = None num_inference_steps: int = 75 guidance_scale: float = 9.0 negative_prompt: Optional[Union[str, List[str]]] = None generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None latents: Optional[torch.FloatTensor] = None output_type: Optional[str] = 'pil' return_dict: bool = True callback: Optional[Callable] = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple
Parameters
- prompt (str or List[str]) — The prompt or prompts to guide image upscaling.
- image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — Image or tensor representing an image batch to be upscaled. If it's a tensor, it can be either a latent output from a Stable Diffusion model or an image tensor in the range [-1, 1]. It is considered a latent if image.shape[1] is 4; otherwise, it is considered to be an image representation and encoded using this pipeline's vae encoder.
- num_inference_steps (int, optional, defaults to 75) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- guidance_scale (float, optional, defaults to 9.0) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
- negative_prompt (str or List[str], optional) — The prompt or prompts to guide what not to include in image generation. Ignored when not using guidance (guidance_scale < 1).
- generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic.
- latents (torch.FloatTensor, optional) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.
- output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image or np.array.
- return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
- callback (Callable, optional) — A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
- callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called. If not specified, the callback is called at every step.
Returns
StableDiffusionPipelineOutput or tuple
If return_dict is True, StableDiffusionPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images.
The call function to the pipeline for generation.
Examples:
>>> from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline
>>> import torch
>>> pipeline = StableDiffusionPipeline.from_pretrained(
... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
... )
>>> pipeline.to("cuda")
>>> model_id = "stabilityai/sd-x2-latent-upscaler"
>>> upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16)
>>> upscaler.to("cuda")
>>> prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic"
>>> generator = torch.manual_seed(33)
>>> low_res_latents = pipeline(prompt, generator=generator, output_type="latent").images
>>> with torch.no_grad():
... image = pipeline.decode_latents(low_res_latents)
>>> image = pipeline.numpy_to_pil(image)[0]
>>> image.save("../images/a1.png")
>>> upscaled_image = upscaler(
... prompt=prompt,
... image=low_res_latents,
... num_inference_steps=20,
... guidance_scale=0,
... generator=generator,
... ).images[0]
>>> upscaled_image.save("../images/a2.png")
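The image argument does not have to be a latent: a plain image is encoded with the pipeline's VAE before upscaling. A minimal sketch, reusing the upscaler, prompt, and generator from the example above; "low_res.png" is a hypothetical local file:

>>> from PIL import Image

>>> # "low_res.png" is a hypothetical local file; any RGB image works
>>> low_res_img = Image.open("low_res.png").convert("RGB")
>>> upscaled = upscaler(
...     prompt=prompt,
...     image=low_res_img,  # a PIL image is encoded by the VAE rather than treated as a latent
...     num_inference_steps=20,
...     guidance_scale=0,
...     generator=generator,
... ).images[0]
>>> upscaled.save("../images/a3.png")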
enable_sequential_cpu_offload
< source >( gpu_id: Optional[int] = None device: Union[torch.device, str] = 'cuda' )
Parameters
- gpu_id (int, optional) — The ID of the accelerator that shall be used in inference. If not specified, it will default to 0.
- device (torch.device or str, optional, defaults to "cuda") — The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will default to "cuda".
Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU and then moved to torch.device('meta'), and loaded to GPU only when their specific submodule has its forward method called. Offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
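A minimal usage sketch: call enable_sequential_cpu_offload() instead of moving the pipeline to the GPU with .to("cuda"):

>>> import torch
>>> from diffusers import StableDiffusionLatentUpscalePipeline

>>> upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
...     "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
... )
>>> # Replaces upscaler.to("cuda"); submodules are moved to the GPU one at
>>> # a time when their forward methods are called
>>> upscaler.enable_sequential_cpu_offload()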
enable_attention_slicing
< source >( slice_size: Optional[Union[str, int]] = 'auto' )
Parameters
- slice_size (str or int, optional, defaults to "auto") — When "auto", halves the input to the attention heads, so attention will be computed in two steps. If "max", maximum amount of memory will be saved by running only one slice at a time. If a number is provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim must be a multiple of slice_size.
Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in several steps. For more than one attention head, the computation is performed sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.
⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch 2.0 or xFormers. These attention computations are already very memory efficient, so you won’t need to enable this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!
Examples:
>>> import torch
>>> from diffusers import StableDiffusionPipeline
>>> pipe = StableDiffusionPipeline.from_pretrained(
... "runwayml/stable-diffusion-v1-5",
... torch_dtype=torch.float16,
... use_safetensors=True,
... )
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
disable_attention_slicing
< source >( )
Disable sliced attention computation. If enable_attention_slicing was previously called, attention is computed in one step.
enable_xformers_memory_efficient_attention
< source >( attention_op: Optional[Callable] = None )
Parameters
- attention_op (Callable, optional) — Override the default None operator for use as the op argument to the memory_efficient_attention() function of xFormers.
Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed up during training is not guaranteed.
⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedence.
Examples:
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp
>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
disable_xformers_memory_efficient_attention
< source >( )
Disable memory efficient attention from xFormers.
StableDiffusionPipelineOutput
class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput
< source >( images: Union[List[PIL.Image.Image], np.ndarray] nsfw_content_detected: Optional[List[bool]] )
Parameters
- images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).
- nsfw_content_detected (List[bool]) — List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content, or None if safety checking could not be performed.
Output class for Stable Diffusion pipelines.
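A small sketch of consuming this output (variable names are illustrative; assumes the upscaler, prompt, and low_res_latents from the example further above):

>>> out = upscaler(prompt=prompt, image=low_res_latents)  # return_dict=True by default
>>> first_image = out.images[0]
>>> # With return_dict=False, a plain tuple is returned instead; its first
>>> # element is the list of generated images
>>> images = upscaler(prompt=prompt, image=low_res_latents, return_dict=False)[0]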