Controlling generation of diffusion models

Controlling outputs generated by diffusion models has long been pursued by the community and is now an active research topic. In many popular diffusion models, subtle changes to the inputs, both images and text prompts, can drastically change the outputs. In an ideal world, we want to be able to control how semantics are preserved and changed.

Most examples of preserving semantics reduce to being able to accurately map a change in input to a change in output. For example, adding an adjective to a subject in a prompt should preserve the entire image and only modify that subject, and an image variation of a particular subject should preserve the subject's pose.

Additionally, there are qualities of generated images that we would like to influence beyond semantic preservation. For example, we generally want our outputs to be of good quality, adhere to a particular style, or be realistic.

We will document some of the techniques diffusers supports to control the generation of diffusion models. Much of this is cutting-edge research and can be quite nuanced. If something needs clarifying or you have a suggestion, don't hesitate to open a discussion on the forum or a GitHub issue.

We provide a high-level explanation of how generation can be controlled as well as a brief summary of the technical details. For more in-depth explanations of the techniques, the original papers linked from the pipelines are always the best resources.

Which technique to choose depends on the use case. In many cases, these techniques can be combined. For example, one can combine Textual Inversion with SEGA to provide more semantic guidance to the outputs generated using Textual Inversion.

Unless otherwise mentioned, these are techniques that work with existing models and don’t require their own weights.

  1. Instruct Pix2Pix
  2. Pix2Pix Zero
  3. Attend and Excite
  4. Semantic guidance
  5. Self-attention guidance
  6. Depth2image
  7. DreamBooth
  8. Textual Inversion

Instruct Pix2Pix

Paper

Instruct Pix2Pix is fine-tuned from Stable Diffusion to support editing input images. It takes an image and a prompt describing an edit as input, and it outputs the edited image. Instruct Pix2Pix has been explicitly trained to work well with InstructGPT-like prompts.

See here for more information on how to use it.
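
For a concrete sense of the workflow, a minimal sketch using the StableDiffusionInstructPix2PixPipeline with the timbrooks/instruct-pix2pix checkpoint might look as follows (the input image URL is a placeholder, and the parameter values are only illustrative):

```python
import requests
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

# Load the Instruct Pix2Pix weights (here, the timbrooks/instruct-pix2pix checkpoint).
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

# Placeholder URL: any RGB image you want to edit works here.
url = "https://path/to/your/image.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# The prompt is phrased as an edit instruction rather than a scene description.
edited = pipe(
    "turn the sky into a sunset",
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,  # how closely to stay to the input image
).images[0]
edited.save("edited.png")
```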

Pix2Pix Zero

Paper

Pix2Pix Zero allows modifying an image so that one concept is changed into another while preserving the overall image semantics.

The denoising process is guided from one conceptual embedding towards another conceptual embedding. The intermediate latents are optimized during the denoising process to push the attention maps towards reference attention maps. The reference attention maps are from the denoising process of the input image and are used to encourage semantic preservation.

Pix2Pix Zero can be used to edit both synthetic and real images.

  • To edit synthetic images, one first generates an image from a caption. Next, one generates captions for the concept of the caption that shall be edited and for the new target concept (e.g. with a model like Flan-T5). Then, “mean” prompt embeddings for both the source and target concepts are created via the text encoder. Finally, the pix2pix-zero algorithm is used to edit the synthetic image.
  • To edit a real image, one first generates an image caption using a model like BLIP. Then one applies DDIM inversion on the prompt and image to generate “inverse” latents. As before, “mean” prompt embeddings for both the source and target concepts are created, and finally the pix2pix-zero algorithm, in combination with the “inverse” latents, is used to edit the image.

Pix2Pix Zero is the first model that allows “zero-shot” image editing. This means that the model can edit an image in less than a minute on a consumer GPU, as shown here.

See here for more information on how to use it.
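
As a rough sketch of the synthetic-image workflow, assuming the StableDiffusionPix2PixZeroPipeline API (the checkpoint and argument names such as source_embeds, target_embeds, and cross_attention_guidance_amount reflect one version of the library and may differ), the hand-written captions below stand in for ones generated with a model like Flan-T5:

```python
import torch
from diffusers import DDIMScheduler, StableDiffusionPix2PixZeroPipeline

pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")


@torch.no_grad()
def mean_prompt_embeds(captions):
    # "Mean" prompt embeddings: average the text-encoder embeddings of several
    # captions that all mention the concept.
    inputs = pipe.tokenizer(
        captions,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    )
    embeds = pipe.text_encoder(inputs.input_ids.to("cuda"))[0]
    return embeds.mean(dim=0, keepdim=True)


# Captions describing the source concept (cat) and the target concept (dog).
source_embeds = mean_prompt_embeds(["a photo of a cat", "a cat sitting on a sofa"])
target_embeds = mean_prompt_embeds(["a photo of a dog", "a dog sitting on a sofa"])

# Edit the synthetic image generated from the prompt so that the cat becomes a dog.
image = pipe(
    "a photo of a cat sitting on a sofa",
    source_embeds=source_embeds,
    target_embeds=target_embeds,
    num_inference_steps=50,
    cross_attention_guidance_amount=0.15,
).images[0]
image.save("cat_to_dog.png")
```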

Attend and Excite

Paper

Attend and Excite allows subjects in the prompt to be faithfully represented in the final image.

A set of token indices is given as input, corresponding to the subjects in the prompt that need to be present in the image. During denoising, each token index is ensured to reach at least a minimum attention value for at least one patch of the image. The intermediate latents are iteratively optimized during denoising to strengthen the attention of the most neglected subject token until the attention threshold is passed for all subject tokens.

See here for more information on how to use it.
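
To illustrate, a minimal sketch with the StableDiffusionAttendAndExcitePipeline might look like the following (the checkpoint and the token_indices argument name are assumptions based on one version of the library):

```python
import torch
from diffusers import StableDiffusionAttendAndExcitePipeline

pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt = "a cat and a frog"
# Token indices of "cat" (2) and "frog" (5) in the tokenized prompt
# ([startoftext, a, cat, and, a, frog, endoftext]); both subjects must
# reach the attention threshold during denoising.
image = pipe(
    prompt,
    token_indices=[2, 5],
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("cat_and_frog.png")
```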

Semantic guidance

Paper

SEGA allows applying one or more concepts to an image or removing them from it. The strength of each concept can also be controlled. For example, the smile concept can be used to incrementally increase or decrease the smile of a portrait.

Similar to how classifier-free guidance provides guidance via empty prompt inputs, SEGA provides guidance via conceptual prompts. Multiple of these conceptual prompts can be applied simultaneously. Each conceptual prompt can either add or remove its concept depending on whether the guidance is applied positively or negatively.

See here for more information on how to use it.
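
As an illustration, a sketch assuming the SemanticStableDiffusionPipeline API might look like this (the editing_prompt and reverse_editing_direction argument names and the values below are indicative, not prescriptive):

```python
import torch
from diffusers import SemanticStableDiffusionPipeline

pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

out = pipe(
    prompt="a photo of the face of a woman",
    num_inference_steps=50,
    guidance_scale=7.5,
    # Two conceptual prompts applied simultaneously: add a smile, remove glasses.
    editing_prompt=["smiling, smile", "glasses, wearing glasses"],
    reverse_editing_direction=[False, True],  # positive vs. negative guidance
    edit_guidance_scale=[4.0, 5.0],
    edit_warmup_steps=[10, 10],
)
out.images[0].save("edited_face.png")
```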

Self-attention guidance

Paper

Self-attention guidance (SAG) improves the general quality of images.

SAG provides guidance from predictions that are not conditioned on high-frequency details towards fully conditioned images. The high-frequency details are extracted from the UNet self-attention maps.

See here for more information on how to use it.
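
A minimal sketch with the StableDiffusionSAGPipeline might look as follows (the checkpoint and the sag_scale value are illustrative):

```python
import torch
from diffusers import StableDiffusionSAGPipeline

pipe = StableDiffusionSAGPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# sag_scale controls the strength of self-attention guidance; setting it to 0
# falls back to ordinary classifier-free guidance.
image = pipe(
    "a photo of an astronaut riding a horse on the moon",
    sag_scale=0.75,
    guidance_scale=7.5,
).images[0]
image.save("astronaut_sag.png")
```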

Depth2image

Paper

Depth2image is fine-tuned from Stable Diffusion to better preserve semantics for text-guided image variation.

It conditions on a monocular depth estimate of the original image.

See here for more information on how to use it.
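
As a quick sketch, using the StableDiffusionDepth2ImgPipeline with the stabilityai/stable-diffusion-2-depth weights might look like the following (the input image URL is a placeholder):

```python
import requests
import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

# Placeholder URL: any RGB image works as the source for the variation.
url = "https://path/to/your/image.png"
init_image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# A monocular depth map of init_image is estimated internally and used as
# conditioning, so the layout of the original image is preserved while the
# prompt changes its content.
image = pipe(
    prompt="two tigers",
    image=init_image,
    negative_prompt="bad, deformed, ugly",
    strength=0.7,
).images[0]
image.save("tigers.png")
```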

Fine-tuning methods

In addition to pre-trained models, diffusers provides training scripts for fine-tuning models on user-provided data.

DreamBooth

DreamBooth fine-tunes a model to teach it about a new subject. For example, a few pictures of a person can be used to generate images of that person in different styles.

See here for more information on how to use it.
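
Once a model has been fine-tuned with the DreamBooth training script, it is loaded like any other Stable Diffusion checkpoint. In the sketch below, path/to/dreambooth-model and the sks identifier token are placeholders for your own training output:

```python
import torch
from diffusers import StableDiffusionPipeline

# "path/to/dreambooth-model" is a placeholder for a checkpoint produced by the
# DreamBooth training script; "sks" stands in for the rare identifier token
# chosen for the new subject during fine-tuning.
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/dreambooth-model", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks person as a watercolor painting").images[0]
image.save("dreambooth_sample.png")
```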

Textual Inversion

Textual Inversion fine-tunes a model to teach it about a new concept. For example, a few pictures of a style of artwork can be used to generate images in that style.

See here for more information on how to use it.
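
As an illustration of how a learned embedding can be injected into an existing pipeline, the sketch below manually registers a concept token with the tokenizer and text encoder (sd-concepts-library/cat-toy is used as an example of a published Textual Inversion embedding; any learned_embeds.bin produced by the training script works the same way):

```python
import torch
from diffusers import StableDiffusionPipeline
from huggingface_hub import hf_hub_download

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Download a learned embedding; the file maps a placeholder token (e.g. "<cat-toy>")
# to its trained embedding vector.
embeds_path = hf_hub_download("sd-concepts-library/cat-toy", "learned_embeds.bin")
learned_embeds = torch.load(embeds_path, map_location="cpu")
token, embedding = next(iter(learned_embeds.items()))

# Register the new token and copy its learned embedding into the text encoder.
pipe.tokenizer.add_tokens(token)
pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
token_id = pipe.tokenizer.convert_tokens_to_ids(token)
weights = pipe.text_encoder.get_input_embeddings().weight
weights.data[token_id] = embedding.to(dtype=weights.dtype, device=weights.device)

image = pipe(f"a photo of a {token} on a beach").images[0]
image.save("textual_inversion_sample.png")
```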