🧨 Diffusers

🤗 Diffusers provides pretrained vision diffusion models and serves as a modular toolbox for both inference and training.

More precisely, 🤗 Diffusers offers:

  • State-of-the-art diffusion pipelines that can run inference with just a couple of lines of code (see Using Diffusers, or the sketch after this list). Have a look at Pipelines for an overview of all supported pipelines and their corresponding papers.
  • Various noise schedulers that can be used interchangeably to trade off generation speed against output quality (see the scheduler sketch below). For more information, see Schedulers.
  • Multiple types of models, such as UNet, that can be used as building blocks in an end-to-end diffusion system. See Models for more details.
  • Training examples that show how to train diffusion models for the most popular tasks. For more information, see Training.
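
For example, a full text-to-image pipeline can be loaded and run in a few lines. The snippet below is a minimal sketch, assuming a CUDA-capable GPU and the runwayml/stable-diffusion-v1-5 checkpoint (any other text-to-image checkpoint on the Hub works the same way):

```python
from diffusers import DiffusionPipeline

# Download a pretrained text-to-image pipeline from the Hub
# (assumes the runwayml/stable-diffusion-v1-5 checkpoint).
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipeline = pipeline.to("cuda")  # assumes a CUDA-capable GPU

# Run inference: a text prompt in, a PIL image out
image = pipeline("An astronaut riding a horse on Mars").images[0]
image.save("astronaut.png")
```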

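Schedulers can likewise be swapped on an existing pipeline without touching the rest of the system. As a sketch (again assuming the runwayml/stable-diffusion-v1-5 checkpoint and a CUDA GPU), replacing the checkpoint's default scheduler with DPMSolverMultistepScheduler typically gives good results with far fewer denoising steps:

```python
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Swap the checkpoint's default scheduler for DPM-Solver++ built from the same
# config, trading the choice of sampler for far fewer inference steps.
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
pipeline = pipeline.to("cuda")  # assumes a CUDA-capable GPU

image = pipeline("A photo of a cat", num_inference_steps=20).images[0]
image.save("cat.png")
```
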
🧨 Diffusers Pipelines

The following table summarizes all officially supported pipelines, their corresponding papers, and, where available, a Colab notebook to try them out directly.

| Pipeline | Paper | Tasks | Colab |
|---|---|---|---|
| alt_diffusion | AltDiffusion | Image-to-Image Text-Guided Generation | |
| audio_diffusion | Audio Diffusion | Unconditional Audio Generation | Open In Colab |
| cycle_diffusion | Cycle Diffusion | Image-to-Image Text-Guided Generation | |
| dance_diffusion | Dance Diffusion | Unconditional Audio Generation | |
| ddpm | Denoising Diffusion Probabilistic Models | Unconditional Image Generation | |
| ddim | Denoising Diffusion Implicit Models | Unconditional Image Generation | |
| latent_diffusion | High-Resolution Image Synthesis with Latent Diffusion Models | Text-to-Image Generation | |
| latent_diffusion | High-Resolution Image Synthesis with Latent Diffusion Models | Super Resolution Image-to-Image | |
| latent_diffusion_uncond | High-Resolution Image Synthesis with Latent Diffusion Models | Unconditional Image Generation | |
| paint_by_example | Paint by Example: Exemplar-based Image Editing with Diffusion Models | Image-Guided Image Inpainting | |
| pndm | Pseudo Numerical Methods for Diffusion Models on Manifolds | Unconditional Image Generation | |
| score_sde_ve | Score-Based Generative Modeling through Stochastic Differential Equations | Unconditional Image Generation | |
| score_sde_vp | Score-Based Generative Modeling through Stochastic Differential Equations | Unconditional Image Generation | |
| stable_diffusion | Stable Diffusion | Text-to-Image Generation | Open In Colab |
| stable_diffusion | Stable Diffusion | Image-to-Image Text-Guided Generation | Open In Colab |
| stable_diffusion | Stable Diffusion | Text-Guided Image Inpainting | Open In Colab |
| stable_diffusion_2 | Stable Diffusion 2 | Text-to-Image Generation | |
| stable_diffusion_2 | Stable Diffusion 2 | Text-Guided Image Inpainting | |
| stable_diffusion_2 | Stable Diffusion 2 | Text-Guided Super Resolution Image-to-Image | |
| stable_diffusion_safe | Safe Stable Diffusion | Text-Guided Generation | Open In Colab |
| stochastic_karras_ve | Elucidating the Design Space of Diffusion-Based Generative Models | Unconditional Image Generation | |
| versatile_diffusion | Versatile Diffusion: Text, Images and Variations All in One Diffusion Model | Text-to-Image Generation | |
| versatile_diffusion | Versatile Diffusion: Text, Images and Variations All in One Diffusion Model | Image Variations Generation | |
| versatile_diffusion | Versatile Diffusion: Text, Images and Variations All in One Diffusion Model | Dual Image and Text Guided Generation | |
| vq_diffusion | Vector Quantized Diffusion Model for Text-to-Image Synthesis | Text-to-Image Generation | |

Note: Pipelines are simple examples that show how to work with the diffusion systems described in the corresponding papers.