Diffusers documentation (v0.14.0)

Stable Diffusion pipelines

Stable Diffusion is a text-to-image latent diffusion model created by researchers and engineers from CompVis, Stability AI, and LAION. It is trained on 512x512 images from a subset of the LAION-5B dataset and uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M-parameter UNet and 123M-parameter text encoder, the model is relatively lightweight and can run on consumer GPUs.

Stable Diffusion builds on latent diffusion, which was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. You can learn more about it in the dedicated latent diffusion pipeline that is part of 🤗 Diffusers.

For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, please refer to the official launch announcement post and this section of our own blog post.

Tips:

  • To tweak your prompt while keeping a specific result you liked, you can generate and reuse your own latents, as demonstrated in the accompanying Colab notebook.
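
The mechanism behind that notebook can be sketched without running the full pipeline: Stable Diffusion starts from a Gaussian latent tensor of shape (batch, 4, height/8, width/8), and fixing the seed that produces it lets you regenerate the same starting noise while varying only the prompt. A minimal sketch with NumPy (the real pipeline expects a torch tensor passed through its `latents` call argument; the helper name and seeding logic here are illustrative assumptions):

```python
import numpy as np

def make_latents(seed: int, batch: int = 1, height: int = 512, width: int = 512):
    """Draw the initial Gaussian latents Stable Diffusion starts from.

    The VAE downsamples by a factor of 8 and the UNet works on 4 latent
    channels, so a 512x512 image corresponds to a (batch, 4, 64, 64) tensor.
    """
    rng = np.random.default_rng(seed)
    return rng.standard_normal((batch, 4, height // 8, width // 8))

# The same seed always yields the same latents, so re-running the pipeline
# with these latents and a tweaked prompt changes only the text conditioning,
# not the starting noise.
a = make_latents(seed=42)
b = make_latents(seed=42)
assert np.array_equal(a, b)
assert a.shape == (1, 4, 64, 64)
```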

Overview:

| Pipeline | Tasks | Colab | Demo |
|---|---|---|---|
| StableDiffusionPipeline | Text-to-Image Generation | Open In Colab | 🤗 Stable Diffusion |
| StableDiffusionImg2ImgPipeline | Image-to-Image Text-Guided Generation | Open In Colab | 🤗 Diffuse the Rest |
| StableDiffusionInpaintPipeline | Text-Guided Image Inpainting (Experimental) | Open In Colab | Coming soon |
| StableDiffusionDepth2ImgPipeline | Depth-to-Image Text-Guided Generation (Experimental) | | Coming soon |
| StableDiffusionImageVariationPipeline | Image Variation Generation (Experimental) | | 🤗 Stable Diffusion Image Variations |
| StableDiffusionUpscalePipeline | Text-Guided Image Super-Resolution (Experimental) | | Coming soon |
| StableDiffusionLatentUpscalePipeline | Text-Guided Image Super-Resolution (Experimental) | | Coming soon |
| StableDiffusionInstructPix2PixPipeline | Text-Based Image Editing (Experimental) | | InstructPix2Pix: Learning to Follow Image Editing Instructions |
| StableDiffusionAttendAndExcitePipeline | Text-to-Image Generation (Experimental) | | Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models |
| StableDiffusionPix2PixZeroPipeline | Text-Based Image Editing (Experimental) | | Zero-shot Image-to-Image Translation |

Tips

How to load and use different schedulers.

The Stable Diffusion pipeline uses the PNDMScheduler by default, but 🤗 Diffusers provides many other schedulers that can be used with it, such as DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, and EulerAncestralDiscreteScheduler. To use a different scheduler, you can either change it via the ConfigMixin.from_config() method or pass a scheduler argument to the pipeline's from_pretrained() method. For example, to use the EulerDiscreteScheduler, you can do the following:

>>> from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

>>> pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)

>>> # or
>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
>>> pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler)
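
The reason `from_config(pipeline.scheduler.config)` is the recommended route is that the old scheduler's saved configuration (number of training timesteps, beta schedule, and so on) is carried over to the new scheduler, so only the sampling algorithm changes. A minimal pure-Python sketch of that mechanism, using hypothetical stand-in classes rather than the real Diffusers ones:

```python
class SchedulerBase:
    """Toy stand-in for ConfigMixin: remembers constructor kwargs as `config`."""

    def __init__(self, num_train_timesteps=1000, beta_start=0.0001, beta_end=0.02):
        self.config = {
            "num_train_timesteps": num_train_timesteps,
            "beta_start": beta_start,
            "beta_end": beta_end,
        }

    @classmethod
    def from_config(cls, config):
        # Re-instantiate *this* class from another scheduler's saved config,
        # so shared hyperparameters survive the swap.
        return cls(**config)

class ToyPNDM(SchedulerBase): ...
class ToyEuler(SchedulerBase): ...

pndm = ToyPNDM(num_train_timesteps=1000, beta_end=0.012)
euler = ToyEuler.from_config(pndm.config)  # swap the algorithm, keep hyperparameters
assert euler.config == pndm.config
assert isinstance(euler, ToyEuler)
```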

How to cover all use cases with a single or multiple pipelines

If you want to cover all of these use cases without loading the model weights more than once, you can either:

  • Make use of the Stable Diffusion Mega Pipeline or
  • Make use of the components functionality to instantiate all components in the most memory-efficient way:
>>> from diffusers import (
...     StableDiffusionPipeline,
...     StableDiffusionImg2ImgPipeline,
...     StableDiffusionInpaintPipeline,
... )

>>> text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components)
>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components)

>>> # now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline
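
What makes the `components` route memory-efficient is that all three pipelines end up holding references to the same underlying models rather than three copies. A small sketch of that sharing pattern, using hypothetical stand-in classes rather than the real Diffusers ones:

```python
class ToyModel:
    """Stand-in for a heavy component such as the UNet or text encoder."""
    def __init__(self, name):
        self.name = name

class ToyPipeline:
    """Stand-in pipeline: stores components and exposes them as a dict."""
    def __init__(self, unet, text_encoder, vae):
        self.unet, self.text_encoder, self.vae = unet, text_encoder, vae

    @property
    def components(self):
        return {"unet": self.unet, "text_encoder": self.text_encoder, "vae": self.vae}

text2img = ToyPipeline(ToyModel("unet"), ToyModel("clip"), ToyModel("vae"))
img2img = ToyPipeline(**text2img.components)
inpaint = ToyPipeline(**text2img.components)

# All three pipelines point at the *same* objects: no extra copies in memory.
assert img2img.unet is text2img.unet
assert inpaint.vae is text2img.vae
```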

StableDiffusionPipelineOutput

class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput


( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray], nsfw_content_detected: typing.Optional[typing.List[bool]] )

Parameters

  • images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size, or a NumPy array of shape (batch_size, height, width, num_channels), representing the denoised images produced by the diffusion pipeline.
  • nsfw_content_detected (List[bool]) — List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” (nsfw) content, or None if safety checking could not be performed.

Output class for Stable Diffusion pipelines.
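
In practice you usually want to keep only the images whose safety flag is False. A sketch of that filtering, with a hypothetical stand-in for the output class and plain strings in place of PIL images:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ToyPipelineOutput:
    """Stand-in mirroring StableDiffusionPipelineOutput's two fields."""
    images: List[str]                            # real pipelines return PIL images
    nsfw_content_detected: Optional[List[bool]]  # None if safety checking was skipped

def safe_images(output: ToyPipelineOutput) -> List[str]:
    # When the safety checker was disabled the flag list is None: keep everything.
    if output.nsfw_content_detected is None:
        return output.images
    return [img for img, flagged in zip(output.images, output.nsfw_content_detected)
            if not flagged]

out = ToyPipelineOutput(images=["img0", "img1", "img2"],
                        nsfw_content_detected=[False, True, False])
assert safe_images(out) == ["img0", "img2"]
```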