
SD3ControlNetModel

SD3ControlNetModel is an implementation of ControlNet for Stable Diffusion 3.

The ControlNet model was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection.

The abstract from the paper is:

We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, e.g., edges, depth, segmentation, human pose, etc., with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.

Loading from the original format

By default, SD3ControlNetModel should be loaded with from_pretrained().

from diffusers import StableDiffusion3ControlNetPipeline
from diffusers.models import SD3ControlNetModel, SD3MultiControlNetModel

controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny")
pipe = StableDiffusion3ControlNetPipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", controlnet=controlnet)
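The loaded pipeline is then run like any other Stable Diffusion 3 pipeline, with the conditioning image passed through control_image. A minimal generation sketch, assuming a CUDA device and float16 weights; the local Canny edge map canny.png, the prompt, and the conditioning scale are illustrative values:

import torch
from diffusers import StableDiffusion3ControlNetPipeline
from diffusers.models import SD3ControlNetModel
from diffusers.utils import load_image

controlnet = SD3ControlNetModel.from_pretrained(
    "InstantX/SD3-Controlnet-Canny", torch_dtype=torch.float16
)
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# the control image is a Canny edge map; the path and prompt are illustrative
control_image = load_image("canny.png")
image = pipe(
    prompt="a futuristic city at night, neon lights",
    control_image=control_image,
    controlnet_conditioning_scale=0.7,
    num_inference_steps=28,
).images[0]
image.save("output.png")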

SD3ControlNetModel

class diffusers.SD3ControlNetModel

( sample_size: int = 128 patch_size: int = 2 in_channels: int = 16 num_layers: int = 18 attention_head_dim: int = 64 num_attention_heads: int = 18 joint_attention_dim: int = 4096 caption_projection_dim: int = 1152 pooled_projection_dim: int = 2048 out_channels: int = 16 pos_embed_max_size: int = 96 )
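As a sketch of direct instantiation, the constructor arguments above can also be passed explicitly. Every value below mirrors the defaults except num_layers, which is reduced purely for illustration to build a smaller, randomly initialized ControlNet:

from diffusers.models import SD3ControlNetModel

# randomly initialized SD3 ControlNet; all values mirror the defaults above
# except num_layers, which is reduced here as an illustrative choice
controlnet = SD3ControlNetModel(
    sample_size=128,
    patch_size=2,
    in_channels=16,
    num_layers=6,
    attention_head_dim=64,
    num_attention_heads=18,
    joint_attention_dim=4096,
    caption_projection_dim=1152,
    pooled_projection_dim=2048,
    out_channels=16,
    pos_embed_max_size=96,
)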

enable_forward_chunking

( chunk_size: Optional[int] = None dim: int = 0 )

Parameters

  • chunk_size (int, optional) — The chunk size of the feed-forward layers. If not specified, the feed-forward layer is run individually over each tensor of dim=dim.
  • dim (int, optional, defaults to 0) — The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) or dim=1 (sequence length).

Enables feed-forward chunking, so that the feed-forward layers are computed in chunks of chunk_size along dim, reducing peak memory usage.
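A minimal sketch; the chunk size and dimension below are illustrative choices, not required values:

from diffusers.models import SD3ControlNetModel

controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny")
# compute the feed-forward layers in chunks along the sequence dimension
# (dim=1) with an illustrative chunk size of 1 to lower peak memory
controlnet.enable_forward_chunking(chunk_size=1, dim=1)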

forward

( hidden_states: FloatTensor controlnet_cond: Tensor conditioning_scale: float = 1.0 encoder_hidden_states: FloatTensor = None pooled_projections: FloatTensor = None timestep: LongTensor = None joint_attention_kwargs: Optional[Dict[str, Any]] = None return_dict: bool = True )

Parameters

  • hidden_states (torch.FloatTensor of shape (batch size, channel, height, width)) — Input hidden_states.
  • controlnet_cond (torch.Tensor) — The conditional input tensor of shape (batch_size, sequence_length, hidden_size).
  • conditioning_scale (float, defaults to 1.0) — The scale factor for ControlNet outputs.
  • encoder_hidden_states (torch.FloatTensor of shape (batch size, sequence_len, embed_dims)) — Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
  • pooled_projections (torch.FloatTensor of shape (batch_size, projection_dim)) — Embeddings projected from the embeddings of input conditions.
  • timestep (torch.LongTensor) — Used to indicate the denoising step.
  • joint_attention_kwargs (dict, optional) — A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.

The SD3ControlNetModel forward method.
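In practice the method is called for you by StableDiffusion3ControlNetPipeline, which passes the noisy latents as hidden_states and the prepared control-image latents as controlnet_cond. A standalone sketch with random tensors, assuming a 1024x1024 resolution (128x128 latents with 16 channels) and an illustrative prompt sequence length of 154:

import torch
from diffusers.models import SD3ControlNetModel

controlnet = SD3ControlNetModel.from_pretrained(
    "InstantX/SD3-Controlnet-Canny", torch_dtype=torch.float16
).to("cuda")

batch = 1
# 128x128 latents with 16 channels correspond to 1024x1024 images; the
# prompt sequence length of 154 is an illustrative value
hidden_states = torch.randn(batch, 16, 128, 128, dtype=torch.float16, device="cuda")
controlnet_cond = torch.randn_like(hidden_states)
encoder_hidden_states = torch.randn(batch, 154, 4096, dtype=torch.float16, device="cuda")
pooled_projections = torch.randn(batch, 2048, dtype=torch.float16, device="cuda")
timestep = torch.tensor([999], device="cuda")

with torch.no_grad():
    output = controlnet(
        hidden_states=hidden_states,
        controlnet_cond=controlnet_cond,
        conditioning_scale=1.0,
        encoder_hidden_states=encoder_hidden_states,
        pooled_projections=pooled_projections,
        timestep=timestep,
    )
# output.controlnet_block_samples is a tuple of residual tensors that the
# SD3 transformer adds to the outputs of its transformer blocks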

fuse_qkv_projections

( )

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused.

This API is 🧪 experimental.

set_attn_processor

( processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]] )

Parameters

  • processor (dict of AttentionProcessor or only AttentionProcessor) — The instantiated processor class or a dictionary of processor classes that will be set as the processor for all Attention layers.

    If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.
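A minimal sketch that sets a single processor instance for every attention layer; JointAttnProcessor2_0 is the processor already used by the SD3 joint-attention blocks, so the call below only illustrates the API:

from diffusers.models import SD3ControlNetModel
from diffusers.models.attention_processor import JointAttnProcessor2_0

controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny")
# a single instance is applied to every Attention layer; pass a dict keyed
# by processor path instead to target individual layers
controlnet.set_attn_processor(JointAttnProcessor2_0())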

unfuse_qkv_projections

( )

Disables the fused QKV projection if enabled.

This API is 🧪 experimental.
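A sketch of the fuse/unfuse round trip around inference; since both APIs are experimental, behavior may change between releases:

from diffusers.models import SD3ControlNetModel

controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny")
# fuse query/key/value projections into a single matmul per attention module
controlnet.fuse_qkv_projections()
# ... run inference with the fused projections ...
# restore the original, unfused projection matrices
controlnet.unfuse_qkv_projections()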

SD3ControlNetOutput

class diffusers.models.controlnet_sd3.SD3ControlNetOutput

( controlnet_block_samples: Tuple[torch.Tensor] )
