Semantic Guidance
Semantic Guidance for Diffusion Models was proposed in SEGA: Instructing Diffusion using Semantic Dimensions and provides strong semantic control over image generation. Small changes to the text prompt usually result in entirely different output images. SEGA instead enables a variety of changes to the image that can be controlled easily and intuitively while staying true to the original image composition.
The abstract of the paper is the following:
Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user’s intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA’s effectiveness on a variety of tasks and provide evidence for its versatility and flexibility.
Overview:
| Pipeline | Tasks | Colab | Demo |
|---|---|---|---|
| pipeline_semantic_stable_diffusion.py | Text-to-Image Generation | Coming Soon | |
Tips
- The Semantic Guidance pipeline can be used with any Stable Diffusion checkpoint.
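For instance, the pipeline can be loaded from any checkpoint that follows the standard Stable Diffusion layout. A minimal sketch (the checkpoint name below is just an illustrative choice):

```py
import torch
from diffusers import SemanticStableDiffusionPipeline

# Any Stable Diffusion checkpoint works; "CompVis/stable-diffusion-v1-4"
# is just an illustrative alternative to the v1-5 checkpoint used below.
pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
```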
Run Semantic Guidance
The SemanticStableDiffusionPipeline interface provides several additional parameters to influence the image generation. Example usage may look like this:
```py
import torch
from diffusers import SemanticStableDiffusionPipeline

pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

out = pipe(
    prompt="a photo of the face of a woman",
    num_images_per_prompt=1,
    guidance_scale=7,
    editing_prompt=[
        "smiling, smile",  # Concepts to apply
        "glasses, wearing glasses",
        "curls, wavy hair, curly hair",
        "beard, full beard, mustache",
    ],
    reverse_editing_direction=[False, False, False, False],  # Direction of guidance, i.e. increase all concepts
    edit_warmup_steps=[10, 10, 10, 10],  # Warmup period for each concept
    edit_guidance_scale=[4, 5, 5, 5.4],  # Guidance scale for each concept
    edit_threshold=[
        0.99,
        0.975,
        0.925,
        0.96,
    ],  # Threshold for each concept; the percentile of the latent space that is discarded, i.e. threshold=0.99 uses only 1% of the latent dimensions
    edit_momentum_scale=0.3,  # Momentum scale that will be added to the latent guidance
    edit_mom_beta=0.6,  # Momentum beta
    edit_weights=[1, 1, 1, 1],  # Weights of the individual concepts against each other
)
image = out.images[0]
```
For more examples check the Colab notebook.
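Since reverse_editing_direction flips the guidance for a concept, the same interface can also be used to suppress a concept instead of adding it. A minimal sketch reusing the pipe from above (the prompt and values are illustrative):

```py
out = pipe(
    prompt="a photo of the face of a woman, smiling",
    editing_prompt=["smiling, smile"],  # concept to edit
    reverse_editing_direction=[True],   # steer away from the concept instead of towards it
    edit_warmup_steps=[10],
    edit_guidance_scale=[5],
    edit_threshold=[0.95],
)
image = out.images[0]
```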
SemanticStableDiffusionPipelineOutput
class diffusers.pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput
( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] )
Parameters
- images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size or a numpy array of shape (batch_size, height, width, num_channels). The PIL images or numpy array are the denoised images of the diffusion pipeline.
- nsfw_content_detected (List[bool]) — List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” (nsfw) content, or None if safety checking could not be performed.
Output class for Stable Diffusion pipelines.
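A minimal sketch of consuming this output:

```py
out = pipe(prompt="a photo of the face of a woman")
image = out.images[0]                    # first denoised PIL image
if out.nsfw_content_detected is not None:
    print(out.nsfw_content_detected[0])  # safety flag for that image
```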
SemanticStableDiffusionPipeline
class diffusers.SemanticStableDiffusionPipeline
( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True )
Parameters
- vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder (CLIPTextModel) — Frozen text-encoder. Stable Diffusion uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
- tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
- unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
- scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
- safety_checker (StableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for details.
- feature_extractor (CLIPImageProcessor) — Model that extracts features from generated images to be used as inputs for the safety_checker.
Pipeline for text-to-image generation with latent editing.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, or running on a particular device).
This model builds on the implementation of StableDiffusionPipeline.
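For example, the generic DiffusionPipeline methods apply as usual. A minimal sketch (the local path is illustrative):

```py
pipe = pipe.to("cuda")                   # run on a particular device
pipe.save_pretrained("./sega-pipeline")  # save all pipeline components locally
```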
__call__
(
prompt: typing.Union[str, typing.List[str]]
height: typing.Optional[int] = None
width: typing.Optional[int] = None
num_inference_steps: int = 50
guidance_scale: float = 7.5
negative_prompt: typing.Union[str, typing.List[str], NoneType] = None
num_images_per_prompt: int = 1
eta: float = 0.0
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
latents: typing.Optional[torch.FloatTensor] = None
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None
callback_steps: int = 1
editing_prompt: typing.Union[str, typing.List[str], NoneType] = None
editing_prompt_embeddings: typing.Optional[torch.Tensor] = None
reverse_editing_direction: typing.Union[bool, typing.List[bool], NoneType] = False
edit_guidance_scale: typing.Union[float, typing.List[float], NoneType] = 5
edit_warmup_steps: typing.Union[int, typing.List[int], NoneType] = 10
edit_cooldown_steps: typing.Union[int, typing.List[int], NoneType] = None
edit_threshold: typing.Union[float, typing.List[float], NoneType] = 0.9
edit_momentum_scale: typing.Optional[float] = 0.1
edit_mom_beta: typing.Optional[float] = 0.4
edit_weights: typing.Optional[typing.List[float]] = None
sem_guidance: typing.Optional[typing.List[torch.Tensor]] = None
)
→ SemanticStableDiffusionPipelineOutput or tuple
Parameters
- prompt (str or List[str]) — The prompt or prompts to guide the image generation.
- height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image.
- width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image.
- num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- guidance_scale (float, optional, defaults to 7.5) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages generating images that are closely linked to the text prompt, usually at the expense of lower image quality.
- negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
- num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
- eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to DDIMScheduler and is ignored for other schedulers.
- generator (torch.Generator, optional) — One or a list of torch generator(s) to make generation deterministic.
- latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
- output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.
- return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
- callback (Callable, optional) — A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
- callback_steps (int, optional, defaults to 1) — The frequency at which the callback function will be called. If not specified, the callback will be called at every step.
- editing_prompt (str or List[str], optional) — The prompt or prompts to use for semantic guidance. Semantic guidance is disabled by setting editing_prompt = None. The guidance direction of each prompt should be specified via reverse_editing_direction.
- editing_prompt_embeddings (torch.Tensor, optional) — Pre-computed embeddings to use for semantic guidance. The guidance direction of the embeddings should be specified via reverse_editing_direction.
- reverse_editing_direction (bool or List[bool], optional, defaults to False) — Whether the corresponding prompt in editing_prompt should be increased or decreased.
- edit_guidance_scale (float or List[float], optional, defaults to 5) — Guidance scale for semantic guidance. If provided as a list, values should correspond to editing_prompt. edit_guidance_scale is defined as s_e of equation 6 of the SEGA paper.
- edit_warmup_steps (float or List[float], optional, defaults to 10) — Number of diffusion steps (for each prompt) for which semantic guidance will not be applied. Momentum is still calculated for those steps and applied once all warmup periods are over. edit_warmup_steps is defined as delta (δ) of the SEGA paper.
- edit_cooldown_steps (float or List[float], optional, defaults to None) — Number of diffusion steps (for each prompt) after which semantic guidance will no longer be applied.
- edit_threshold (float or List[float], optional, defaults to 0.9) — Threshold of semantic guidance; the percentile of latent dimensions that is discarded, so that only the most relevant dimensions are steered (see the sketch at the end of this section).
- edit_momentum_scale (float, optional, defaults to 0.1) — Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0, momentum is disabled. Momentum is already built up during warmup, i.e. for diffusion steps smaller than edit_warmup_steps. Momentum is only added to the latent guidance once all warmup periods are finished. edit_momentum_scale is defined as s_m of equation 7 of the SEGA paper.
- edit_mom_beta (float, optional, defaults to 0.4) — Defines how semantic guidance momentum builds up. edit_mom_beta indicates how much of the previous momentum is kept. Momentum is already built up during warmup, i.e. for diffusion steps smaller than edit_warmup_steps. edit_mom_beta is defined as beta_m (β) of equation 8 of the SEGA paper.
- edit_weights (List[float], optional, defaults to None) — Indicates how much each individual concept should influence the overall guidance. If no weights are provided, all concepts are applied equally. edit_weights is defined as g_i of equation 9 of the SEGA paper.
- sem_guidance (List[torch.Tensor], optional) — List of pre-generated guidance vectors to be applied at generation. The length of the list has to correspond to num_inference_steps.
Returns
SemanticStableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.
Function invoked when calling the pipeline for generation.
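To make the interplay of edit_guidance_scale, edit_threshold, edit_momentum_scale, and edit_mom_beta more concrete, here is a simplified, illustrative sketch of a single semantic-guidance update for one concept. This is not the pipeline's actual implementation; shapes and details are reduced to the essentials described above:

```py
import torch


def sega_guidance_step(
    noise_uncond,                 # unconditioned noise estimate
    noise_concept,                # noise estimate conditioned on the editing prompt
    momentum,                     # accumulated momentum tensor (same shape)
    edit_guidance_scale=5.0,      # s_e, eq. 6
    edit_threshold=0.9,           # percentile of latent dimensions to discard
    edit_momentum_scale=0.1,      # s_m, eq. 7
    edit_mom_beta=0.4,            # beta_m, eq. 8
    reverse_editing_direction=False,
):
    # Guidance direction: difference of the two noise estimates, scaled by s_e.
    direction = edit_guidance_scale * (noise_concept - noise_uncond)
    if reverse_editing_direction:
        direction = -direction  # steer away from the concept instead

    # Keep only the latent dimensions above the threshold percentile by magnitude,
    # e.g. edit_threshold=0.99 keeps only the top 1% of dimensions.
    cutoff = torch.quantile(direction.abs().float(), edit_threshold)
    guidance = torch.where(direction.abs() >= cutoff, direction, torch.zeros_like(direction))

    # Add the accumulated momentum, scaled by s_m, ...
    guidance = guidance + edit_momentum_scale * momentum
    # ... and update the momentum, with beta_m controlling how much history is kept.
    momentum = edit_mom_beta * momentum + (1 - edit_mom_beta) * guidance
    return guidance, momentum
```

During edit_warmup_steps the resulting guidance would not yet be applied to the latents, but the momentum keeps accumulating, which is why warmup and momentum interact as described in the parameter list above.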