Unconditional image generation

Unconditional image generation is a relatively straightforward task. The model generates images without any additional context, such as text or another image, that resemble the data it was trained on.

The DiffusionPipeline is the easiest way to use a pre-trained diffusion system for inference.

Start by creating an instance of DiffusionPipeline and specifying which pipeline checkpoint you would like to download. You can use any of the 🧨 Diffusers checkpoints from the Hub (the checkpoint you’ll use here generates images of butterflies).

💡 Want to train your own unconditional image generation model? Take a look at the training guide to learn how to generate your own images.

In this guide, you’ll use DiffusionPipeline for unconditional image generation with DDPM:

>>> from diffusers import DiffusionPipeline

>>> generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128")

The DiffusionPipeline downloads and caches all of the modeling and scheduling components needed for inference. Because generating an image requires many iterative denoising steps, we strongly recommend running the pipeline on a GPU. You can move the generator object to a GPU, just like you would in PyTorch:

>>> generator.to("cuda")

Now you can use the generator to generate an image:

>>> image = generator().images[0]

By default, the output image is wrapped in a PIL.Image object.

You can save the image by calling:

>>> image.save("generated_image.png")

Feel free to play around with the num_inference_steps parameter to see how it affects the image quality!
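
For example, here is a minimal sketch (reusing the butterflies checkpoint above; the step count is just an illustrative value) that trades some quality for speed by lowering num_inference_steps:

>>> from diffusers import DiffusionPipeline

>>> generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128")
>>> generator.to("cuda")

>>> # fewer denoising steps run faster, but the samples may look noisier
>>> image = generator(num_inference_steps=50).images[0]
>>> image.save("generated_image_50_steps.png")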