Quicktour

Get up and running with 🧨 Diffusers quickly! Whether you’re a developer or an everyday user, this quick tour will help you get started and show you how to use DiffusionPipeline for inference.

Before you begin, make sure you have all the necessary libraries installed:

pip install --upgrade diffusers accelerate transformers
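
You can quickly verify the installation by printing the library versions:

>>> import diffusers, transformers, accelerate
>>> print(diffusers.__version__, transformers.__version__, accelerate.__version__)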

DiffusionPipeline

The DiffusionPipeline is the easiest way to use a pre-trained diffusion system for inference. You can use the DiffusionPipeline out-of-the-box for many tasks across different modalities. Take a look at the table below for some supported tasks:

Task | Description | Pipeline
Unconditional Image Generation | generate an image from Gaussian noise | unconditional_image_generation
Text-Guided Image Generation | generate an image given a text prompt | conditional_image_generation
Text-Guided Image-to-Image Translation | adapt an image guided by a text prompt | img2img
Text-Guided Image Inpainting | fill the masked part of an image given the image, the mask, and a text prompt | inpaint
Text-Guided Depth-to-Image Translation | adapt parts of an image guided by a text prompt while preserving structure via depth estimation | depth2img
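
Each task maps to a dedicated pipeline class. As a minimal sketch of unconditional image generation (google/ddpm-cat-256 is just one example checkpoint):

>>> from diffusers import DDPMPipeline

>>> # generate an image from pure Gaussian noise; no prompt is needed
>>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256")
>>> image = ddpm().images[0]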

For more detailed information on how the diffusion pipelines work for each task, take a look at the Using Diffusers section.

As an example, start by creating an instance of DiffusionPipeline and specify which pipeline checkpoint you would like to download. You can use the DiffusionPipeline with any Diffusers checkpoint. In this guide, you'll use the DiffusionPipeline for text-to-image generation with Stable Diffusion.

Before running the Stable Diffusion model, carefully read its license. This is required because of the model's improved image generation capabilities and the potentially harmful content it could produce. Head over to your Stable Diffusion checkpoint of choice, e.g. runwayml/stable-diffusion-v1-5, and read the license.

You can load the model as follows:

>>> from diffusers import DiffusionPipeline

>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on a GPU. You can move the pipeline object to a GPU, just like you would with any PyTorch module.

>>> pipeline.to("cuda")
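
If you are unsure whether a GPU is available, a small sketch that falls back to CPU:

>>> import torch

>>> # use the GPU when present, otherwise stay on the CPU (much slower)
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> pipeline = pipeline.to(device)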

Now you can use the pipeline on your text prompt:

>>> image = pipeline("An image of a squirrel in Picasso style").images[0]

By default, the output is wrapped in a PIL.Image object.

You can save the image by simply calling:

>>> image.save("image_of_squirrel_painting.png")
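
The pipeline call also accepts optional arguments, such as a seeded torch.Generator for reproducible results, the number of denoising steps, and the guidance scale; the values below are only illustrative:

>>> import torch

>>> # fixing the seed makes repeated calls return the same image
>>> generator = torch.Generator("cuda").manual_seed(0)
>>> image = pipeline(
...     "An image of a squirrel in Picasso style",
...     generator=generator,
...     num_inference_steps=50,
...     guidance_scale=7.5,
... ).images[0]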

Note: You can also use the pipeline locally by downloading the weights via:

git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5

and then loading the saved weights into the pipeline.

>>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5")

Running the pipeline is then identical to the code above, as it's the same model architecture.

>>> pipeline.to("cuda")
>>> image = pipeline("An image of a squirrel in Picasso style").images[0]
>>> image.save("image_of_squirrel_painting.png")

Diffusion systems can be used with multiple different schedulers, each with its own pros and cons. By default, Stable Diffusion runs with the PNDMScheduler, but it's very simple to use a different one. For example, if you would rather use the EulerDiscreteScheduler, you can swap it in as follows:

>>> from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

>>> pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

>>> # change scheduler to Euler
>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
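
To see the effect of a scheduler, you can render the same seeded prompt with two different schedulers and compare the results; a minimal sketch:

>>> import torch
>>> from diffusers import PNDMScheduler, EulerDiscreteScheduler

>>> pipeline = pipeline.to("cuda")
>>> prompt = "An image of a squirrel in Picasso style"
>>> for scheduler_cls in (PNDMScheduler, EulerDiscreteScheduler):
...     # swap the scheduler in place, reusing the pipeline's existing config
...     pipeline.scheduler = scheduler_cls.from_config(pipeline.scheduler.config)
...     generator = torch.Generator("cuda").manual_seed(0)
...     image = pipeline(prompt, generator=generator).images[0]
...     image.save(f"squirrel_{scheduler_cls.__name__}.png")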

For more detailed information on how to switch between schedulers, please refer to the Using Schedulers guide.

Stability AI's Stable Diffusion is an impressive image generation model that can do much more than generate images from text. We have dedicated a whole documentation page just for Stable Diffusion here.

If you want to learn how to optimize Stable Diffusion to use less memory, run at higher inference speeds, run on specific hardware such as Mac, or run with ONNX Runtime, take a look at our optimization pages.
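
For example, loading the weights in half precision and enabling attention slicing noticeably reduces GPU memory usage; a short sketch:

>>> import torch
>>> from diffusers import DiffusionPipeline

>>> pipeline = DiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
... ).to("cuda")

>>> # compute attention in slices to trade a little speed for lower peak memory
>>> pipeline.enable_attention_slicing()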

If you want to fine-tune or train your own diffusion model, take a look at the training section.

Finally, please be considerate when distributing generated images publicly 🤗.