Text-to-Image Generation with ControlNet Conditioning
ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang and Maneesh Agrawala.
Using the pretrained models, we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details.
The abstract of the paper is the following:
We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on personal devices. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications.
This model was contributed by the amazing community contributor takuma104 ❤️.
In the following we give a simple example of how to use a ControlNet checkpoint with Diffusers for inference. The inference process is the same for every ControlNet checkpoint:
- Take an image and run it through a pre-conditioning processor.
- Run the pre-processed image through the StableDiffusionControlNetPipeline.
Let’s have a look at a simple example using the Canny Edge ControlNet.
```python
from diffusers import StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Let's load the popular vermeer image
image = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
```
Next, we process the image to get the canny image. This is step 1: running the pre-conditioning processor. The pre-conditioning processor is different for every ControlNet. Please see the model cards of the official checkpoints for more information about other models.
First, we need to install opencv:
```bash
pip install opencv-contrib-python
```
Next, let’s also install all required Hugging Face libraries:
```bash
pip install diffusers transformers git+https://github.com/huggingface/accelerate.git
```
Then we can retrieve the canny edges of the image.
```python
import cv2
from PIL import Image
import numpy as np

image = np.array(image)

low_threshold = 100
high_threshold = 200

image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)
```
Let’s take a look at the processed image.
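If you are following along locally, a quick way to inspect the control image is to save it to disk (the filename here is just an example):

```python
# Save the canny control image so it can be inspected (hypothetical filename)
canny_image.save("vermeer_canny.png")
```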
Now, we load the official Stable Diffusion 1.5 model as well as the ControlNet for canny edges.
```python
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
import torch

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
```
To speed things up and reduce memory consumption, let's enable model offloading and use the fast UniPCMultistepScheduler.
```python
from diffusers import UniPCMultistepScheduler

pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# this command loads the individual model components on GPU on-demand.
pipe.enable_model_cpu_offload()
```
Finally, we can run the pipeline:
```python
generator = torch.manual_seed(0)
out_image = pipe(
    "disco dancer with colorful lights", num_inference_steps=20, generator=generator, image=canny_image
).images
```
This should take only around 3 to 4 seconds on a GPU (depending on hardware) and yields an image that keeps the composition of the canny edges.
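To inspect the result locally, save the first image in the returned list (the filename is just an example):

```python
# out_image is a list of PIL images; save the generated result
out_image[0].save("controlnet_disco_dancer.png")  # hypothetical filename
```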
Note: To see how to run all other ControlNet checkpoints, please have a look at the ControlNet with Stable Diffusion 1.5 section below.
ControlNet requires a control image in addition to the text-to-image prompt. Each pretrained model is trained using a different conditioning method that requires different images for conditioning the generated outputs. For example, Canny edge conditioning requires the control image to be the output of a Canny filter, while depth conditioning requires the control image to be a depth map. See the overview and image examples below to know more.
All checkpoints can be found under the authors’ namespace lllyasviel.
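For instance, the depth checkpoint expects a depth-map control image. As a minimal sketch, a depth map can be produced with the transformers depth-estimation pipeline (which checkpoint that pipeline downloads by default is an implementation detail of transformers) and then paired with lllyasviel/sd-controlnet-depth:

```python
from transformers import pipeline
from diffusers.utils import load_image
import numpy as np
from PIL import Image

image = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)

# Estimate depth; the "depth" key holds a single-channel PIL image
depth_estimator = pipeline("depth-estimation")
depth = np.array(depth_estimator(image)["depth"])

# Replicate the channel so the control image is 3-channel, as the pipeline expects
depth_image = Image.fromarray(np.concatenate([depth[:, :, None]] * 3, axis=2))
```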
ControlNet with Stable Diffusion 1.5
| Model Name | Training Conditioning | Control Image Overview |
|---|---|---|
| lllyasviel/sd-controlnet-canny | Trained with canny edge detection | A monochrome image with white edges on a black background. |
| lllyasviel/sd-controlnet-depth | Trained with Midas depth estimation | A grayscale image with black representing deep areas and white representing shallow areas. |
| lllyasviel/sd-controlnet-hed | Trained with HED edge detection (soft edge) | A monochrome image with white soft edges on a black background. |
| lllyasviel/sd-controlnet-mlsd | Trained with M-LSD line detection | A monochrome image composed only of white straight lines on a black background. |
| lllyasviel/sd-controlnet-normal | Trained with normal map | A normal mapped image. |
| lllyasviel/sd-controlnet-openpose | Trained with OpenPose bone image | An OpenPose bone image. |
| lllyasviel/sd-controlnet-scribble | Trained with human scribbles | A hand-drawn monochrome image with white outlines on a black background. |
| lllyasviel/sd-controlnet-seg | Trained with semantic segmentation | An image following ADE20K's segmentation protocol. |

(The control image and generated image examples from the original table are reproduced on each checkpoint's model card.)
class diffusers.StableDiffusionControlNetPipeline
( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: typing.Union[diffusers.models.controlnet.ControlNetModel, typing.List[diffusers.models.controlnet.ControlNetModel], typing.Tuple[diffusers.models.controlnet.ControlNetModel], diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_controlnet.MultiControlNetModel] scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True )
- vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder (CLIPTextModel) — Frozen text-encoder. Stable Diffusion uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
- tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
- unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
- controlnet (ControlNetModel or List[ControlNetModel]) — Provides additional conditioning to the unet during the denoising process. If you set multiple ControlNets as a list, the outputs from each ControlNet are added together to create one combined additional conditioning (see the multi-ControlNet sketch below).
- scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
- safety_checker (StableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for details.
- feature_extractor (CLIPFeatureExtractor) — Model that extracts features from generated images to be used as inputs for the safety_checker.
Pipeline for text-to-image generation using Stable Diffusion with ControlNet guidance.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
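Because controlnet accepts a list, several conditionings can be combined in one call. A minimal sketch, assuming a pose_image control image has been prepared beforehand alongside the canny_image from the example above:

```python
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
import torch

# The outputs of the two ControlNets are added together into one conditioning
controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# One control image per ControlNet; the scales weight each ControlNet's output
image = pipe(
    "disco dancer with colorful lights",
    image=[canny_image, pose_image],  # pose_image is assumed to be prepared beforehand
    controlnet_conditioning_scale=[1.0, 0.8],
    num_inference_steps=20,
).images[0]
```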
__call__
prompt: typing.Union[str, typing.List[str]] = None
image: typing.Union[torch.FloatTensor, PIL.Image.Image, typing.List[torch.FloatTensor], typing.List[PIL.Image.Image]] = None
height: typing.Optional[int] = None
width: typing.Optional[int] = None
num_inference_steps: int = 50
guidance_scale: float = 7.5
negative_prompt: typing.Union[str, typing.List[str], NoneType] = None
num_images_per_prompt: typing.Optional[int] = 1
eta: float = 0.0
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
latents: typing.Optional[torch.FloatTensor] = None
prompt_embeds: typing.Optional[torch.FloatTensor] = None
negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None
callback_steps: int = 1
cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
controlnet_conditioning_scale: typing.Union[float, typing.List[float]] = 1.0
- prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
- image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], or List[List[PIL.Image.Image]]) — The ControlNet input condition. ControlNet uses this input condition to generate guidance for the unet. If the type is specified as torch.FloatTensor, it is passed to the ControlNet as is. PIL.Image.Image can also be accepted as an image. The dimensions of the output image default to image's dimensions. If height and/or width are passed, image is resized according to them. If multiple ControlNets are specified in init, images must be passed as a list such that each element of the list can be correctly batched for input to a single ControlNet.
- height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image.
- width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image.
- num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- guidance_scale (float, optional, defaults to 7.5) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages generating images that are closely linked to the text prompt, usually at the expense of lower image quality.
- negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
- num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
- eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler, will be ignored for others.
- generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.
- latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
- prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
- negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
- output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.
- return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
- callback (Callable, optional) — A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
- callback_steps (int, optional, defaults to 1) — The frequency at which the callback function will be called. If not specified, the callback will be called at every step.
- cross_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.cross_attention.
- controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added to the residual in the original unet. If multiple ControlNets are specified in init, you can set the corresponding scale as a list.

Returns: StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.
Function invoked when calling the pipeline for generation.
```python
# !pip install opencv-python transformers accelerate
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image
import numpy as np
import torch

import cv2
from PIL import Image

# download an image
image = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
image = np.array(image)

# get canny image
image = cv2.Canny(image, 100, 200)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)

# load control net and stable diffusion v1-5
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)

# speed up diffusion process with faster scheduler and memory optimization
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# remove following line if xformers is not installed
pipe.enable_xformers_memory_efficient_attention()

pipe.enable_model_cpu_offload()

# generate image
generator = torch.manual_seed(0)
image = pipe(
    "futuristic-looking woman", num_inference_steps=20, generator=generator, image=canny_image
).images[0]
```
enable_attention_slicing

( slice_size: typing.Union[str, int, NoneType] = 'auto' )

- slice_size (str or int, optional, defaults to "auto") — When "auto", halves the input to the attention heads, so attention will be computed in two steps. If "max", maximum memory will be saved by running only one slice at a time. If a number is provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim must be a multiple of slice_size.
Enable sliced attention computation.
When this option is enabled, the attention module will split the input tensor in slices, to compute attention in several steps. This is useful to save some memory in exchange for a small speed decrease.
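A minimal usage sketch with the pipe from the example above (the explicit slice size is just an illustration):

```python
# Compute attention in two steps instead of one to lower peak memory
pipe.enable_attention_slicing()

# ...or pick an explicit number of slices; attention_head_dim must be a
# multiple of this value
# pipe.enable_attention_slicing(slice_size=4)
```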
disable_attention_slicing

Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go back to computing attention in one step.
enable_vae_slicing
Enable sliced VAE decoding.
When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
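A minimal usage sketch, reusing the pipe and canny_image from the example above:

```python
# Decode latents slice-by-slice so larger batches fit in memory
pipe.enable_vae_slicing()
images = pipe(
    ["disco dancer with colorful lights"] * 4,  # batch of 4 prompts
    image=canny_image,
    num_inference_steps=20,
).images
```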
disable_vae_slicing

Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to computing decoding in one step.
enable_xformers_memory_efficient_attention

( attention_op: typing.Optional[typing.Callable] = None )

- attention_op (Callable, optional) — Override the default None operator for use as the op argument to the memory_efficient_attention() function of xFormers.
Enable memory efficient attention as implemented in xformers.
When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference time. Speed up at training time is not guaranteed.
Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention is used.
```python
import torch
from diffusers import DiffusionPipeline
from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
# Workaround for not accepting attention shape using VAE for Flash Attention
pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
```
disable_xformers_memory_efficient_attention
Disable memory efficient attention as implemented in xformers.
enable_model_cpu_offload

( gpu_id = 0 )

Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet.
enable_sequential_cpu_offload

( gpu_id = 0 )

Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, text_encoder, vae, controlnet, and safety checker have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
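A minimal sketch contrasting the two offloading strategies (enable only one of them on a given pipeline):

```python
# Highest memory savings, slowest: submodules are moved to the GPU one at a time
pipe.enable_sequential_cpu_offload()

# Alternative: whole models are moved on demand; faster, smaller savings
# pipe.enable_model_cpu_offload()
```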