Self-Attention Guidance (SAG) was proposed in [Improving Sample Quality of Diffusion Models Using Self-Attention Guidance](https://arxiv.org/abs/2210.00939) by Susung Hong et al.
The abstract of the paper is the following:
Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement.
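To make the guidance step described above concrete, below is a minimal sketch of how a SAG-style update could combine the model's noise prediction on the original latents with its prediction on latents whose attended regions have been blurred. The function name and the assumption that both predictions are already computed are illustrative; this is not the pipeline's internal implementation.

```python
import torch

def sag_guided_noise(
    eps_orig: torch.Tensor,      # noise prediction on the original latents
    eps_degraded: torch.Tensor,  # prediction on latents whose attended regions were blurred
    sag_scale: float,
) -> torch.Tensor:
    # Push the prediction away from the adversarially blurred branch,
    # mirroring the guidance combination described in the paper.
    return eps_orig + sag_scale * (eps_orig - eps_degraded)
```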
Resources:
| Pipeline | Tasks | Demo |
|---|---|---|
| StableDiffusionSAGPipeline | Text-to-Image Generation | 🤗 Space |
```python
import torch
from diffusers import StableDiffusionSAGPipeline
from accelerate.utils import set_seed

pipe = StableDiffusionSAGPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

seed = 8978
# A minimal (near-empty) prompt: SAG improves sample quality even without a
# meaningful text condition.
prompt = "."
guidance_scale = 7.5
num_images_per_prompt = 1
sag_scale = 1.0

set_seed(seed)
images = pipe(
    prompt,
    num_images_per_prompt=num_images_per_prompt,
    guidance_scale=guidance_scale,
    sag_scale=sag_scale,
).images
images[0].save("example.png")
```
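Seeding can also be scoped to a single call by passing a `torch.Generator` through the pipeline's `generator` argument instead of using accelerate's global `set_seed`; the prompt below is illustrative.

```python
import torch
from diffusers import StableDiffusionSAGPipeline

pipe = StableDiffusionSAGPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# A per-call generator keeps the seed local to this generation.
generator = torch.Generator(device="cuda").manual_seed(8978)
image = pipe(
    "a photo of a corgi wearing sunglasses",
    guidance_scale=7.5,
    sag_scale=1.0,
    generator=generator,
).images[0]
image.save("example_generator.png")
```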
( vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, scheduler: KarrasDiffusionSchedulers, safety_checker: StableDiffusionSafetyChecker, feature_extractor: CLIPImageProcessor, requires_safety_checker: bool = True )
Parameters
- vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder (CLIPTextModel) — Frozen text-encoder. Stable Diffusion uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
- tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
- unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
- scheduler (KarrasDiffusionSchedulers) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
- safety_checker (StableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for details.
- feature_extractor (CLIPImageProcessor) — Model that extracts features from generated images to be used as inputs for the safety_checker.
Pipeline for text-to-image generation using Stable Diffusion.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.).
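For example, the inherited save/load and device-placement helpers can be used as sketched below; the local path is a placeholder.

```python
import torch
from diffusers import StableDiffusionSAGPipeline

pipe = StableDiffusionSAGPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)

# Save a local copy and reload it later without re-downloading.
pipe.save_pretrained("./sd-v1-4-sag")  # placeholder path
pipe = StableDiffusionSAGPipeline.from_pretrained("./sd-v1-4-sag", torch_dtype=torch.float16)

# Move the whole pipeline to a particular device.
pipe = pipe.to("cuda")
```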
(
prompt: typing.Union[str, typing.List[str]] = None
height: typing.Optional[int] = None
width: typing.Optional[int] = None
num_inference_steps: int = 50
guidance_scale: float = 7.5
sag_scale: float = 0.75
negative_prompt: typing.Union[str, typing.List[str], NoneType] = None
num_images_per_prompt: typing.Optional[int] = 1
eta: float = 0.0
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
latents: typing.Optional[torch.FloatTensor] = None
prompt_embeds: typing.Optional[torch.FloatTensor] = None
negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None
callback_steps: typing.Optional[int] = 1
cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
)
→
StableDiffusionPipelineOutput or tuple
Parameters
- prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
- height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image.
- width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image.
- num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- guidance_scale (float, optional, defaults to 7.5) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images closely linked to the text prompt, usually at the expense of lower image quality.
- sag_scale (float, optional, defaults to 0.75) — SAG scale as defined in Improving Sample Quality of Diffusion Models Using Self-Attention Guidance (https://arxiv.org/abs/2210.00939). sag_scale is defined as s_s of equation (24) of the SAG paper. Typically chosen between [0, 1.0] for better quality.
- negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
- num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
- eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler; ignored for other schedulers.
- generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.
- latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
- prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
- negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
- output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.
- return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
- callback (Callable, optional) — A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor); see the sketch after the examples below.
- callback_steps (int, optional, defaults to 1) — The frequency at which the callback function will be called. If not specified, the callback will be called at every step.
- cross_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.cross_attention.
Returns
StableDiffusionPipelineOutput or tuple
A StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.
Function invoked when calling the pipeline for generation.
Examples:
```python
>>> import torch
>>> from diffusers import StableDiffusionSAGPipeline

>>> pipe = StableDiffusionSAGPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt, sag_scale=0.75).images[0]
```
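Building on the example above (reusing the same pipe), here is a hedged sketch of the callback mechanism. log_progress is a hypothetical helper; its signature follows the callback parameter documented above.

```python
import torch

def log_progress(step: int, timestep: int, latents: torch.FloatTensor):
    # Invoked every `callback_steps` denoising steps with the current latents.
    print(f"step={step} timestep={timestep} latents_shape={tuple(latents.shape)}")

image = pipe(
    "a photo of an astronaut riding a horse on mars",
    sag_scale=0.75,
    callback=log_progress,
    callback_steps=10,
).images[0]
```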
disable_vae_slicing() — Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to computing decoding in one step.
enable_sequential_cpu_offload() — Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, text_encoder, vae and safety_checker have their state dicts saved to CPU, are moved to torch.device('meta'), and are loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
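As a rough usage sketch (requires accelerate to be installed); the prompt is illustrative:

```python
import torch
from diffusers import StableDiffusionSAGPipeline

pipe = StableDiffusionSAGPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
# Submodules stay on CPU and are moved to GPU one at a time during the forward pass,
# so there is no need to call pipe.to("cuda") first.
pipe.enable_sequential_cpu_offload()
image = pipe("a photo of an astronaut riding a horse on mars", sag_scale=0.75).images[0]
```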
enable_vae_slicing() — Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
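A minimal sketch of toggling sliced decoding around a larger batch, reusing a pipe created as in the examples above:

```python
# Decode the VAE output slice by slice to reduce peak memory for large batches.
pipe.enable_vae_slicing()
images = pipe(["a photo of an astronaut riding a horse on mars"] * 4, sag_scale=0.75).images

# Revert to single-step decoding.
pipe.disable_vae_slicing()
```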