Diffusers documentation

Unconditional image generation


Unconditional image generation produces images that look like random samples from the model's training data, because the denoising process is not guided by any additional context such as text or an image.

To get started, use the DiffusionPipeline to load the anton-l/ddpm-butterflies-128 checkpoint to generate images of butterflies. The DiffusionPipeline downloads and caches all the model components required to generate an image.

from diffusers import DiffusionPipeline

generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda")
image = generator().images[0]
image

Want to generate images of something else? Take a look at the training guide to learn how to train a model to generate your own images.

The output image is a PIL.Image.Image object that can be saved:

image.save("generated_image.png")

You can also experiment with the num_inference_steps parameter, which controls the number of denoising steps. More denoising steps typically produce higher-quality images but take longer to generate. Feel free to play around with this parameter to see how it affects image quality.

image = generator(num_inference_steps=100).images[0]
image

