Diffusers documentation

SD3 Transformer Model

The Transformer model introduced in Stable Diffusion 3. Its novelty lies in the MMDiT (Multimodal Diffusion Transformer) block.

SD3Transformer2DModel

class diffusers.SD3Transformer2DModel

( sample_size: int = 128, patch_size: int = 2, in_channels: int = 16, num_layers: int = 18, attention_head_dim: int = 64, num_attention_heads: int = 18, joint_attention_dim: int = 4096, caption_projection_dim: int = 1152, pooled_projection_dim: int = 2048, out_channels: int = 16, pos_embed_max_size: int = 96 )

Parameters

  • sample_size (int) — The width of the latent images. This is fixed during training since it is used to learn a number of position embeddings.
  • patch_size (int) — Patch size to turn the input data into small patches.
  • in_channels (int, optional, defaults to 16) — The number of channels in the input.
  • num_layers (int, optional, defaults to 18) — The number of layers of Transformer blocks to use.
  • attention_head_dim (int, optional, defaults to 64) — The number of channels in each head.
  • num_attention_heads (int, optional, defaults to 18) — The number of heads to use for multi-head attention.
  • joint_attention_dim (int, optional, defaults to 4096) — The number of encoder_hidden_states dimensions to use.
  • caption_projection_dim (int) — Number of dimensions to use when projecting the encoder_hidden_states.
  • pooled_projection_dim (int) — Number of dimensions to use when projecting the pooled_projections.
  • out_channels (int, defaults to 16) — Number of output channels.
  • pos_embed_max_size (int, defaults to 96) — The maximum height/width, in patches, of the learned positional embedding grid; inputs are cropped from this grid.

The Transformer model introduced in Stable Diffusion 3.

Reference: https://arxiv.org/abs/2403.03206
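
The snippet below is a minimal sketch of loading only the transformer component from a pretrained SD3 checkpoint; it assumes you have access to the gated stabilityai/stable-diffusion-3-medium-diffusers repository on the Hub.

```python
import torch

from diffusers import SD3Transformer2DModel

# Load only the transformer of an SD3 pipeline (assumes access to the gated
# stabilityai/stable-diffusion-3-medium-diffusers checkpoint).
transformer = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    subfolder="transformer",
    torch_dtype=torch.float16,
)

print(transformer.config)
```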

enable_forward_chunking

( chunk_size: Optional[int] = None, dim: int = 0 )

Parameters

  • chunk_size (int, optional) — The chunk size of the feed-forward layers. If not specified, the feed-forward layer runs individually over each tensor of dim=dim.
  • dim (int, optional, defaults to 0) — The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) or dim=1 (sequence length).

Sets the transformer blocks to compute their feed-forward layers in chunks, trading extra compute time for lower peak memory.
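
A minimal usage sketch, assuming transformer is an instantiated SD3Transformer2DModel (for example loaded as shown above):

```python
# Chunk the feed-forward computation to lower peak memory usage.
# dim=0 chunks over the batch dimension, dim=1 over the sequence dimension.
transformer.enable_forward_chunking(chunk_size=1, dim=0)
```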

forward

( hidden_states: FloatTensor, encoder_hidden_states: FloatTensor = None, pooled_projections: FloatTensor = None, timestep: LongTensor = None, block_controlnet_hidden_states: List = None, joint_attention_kwargs: Optional = None, return_dict: bool = True )

Parameters

  • hidden_states (torch.FloatTensor of shape (batch size, channel, height, width)) — Input hidden_states.
  • encoder_hidden_states (torch.FloatTensor of shape (batch size, sequence_len, embed_dims)) — Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
  • pooled_projections (torch.FloatTensor of shape (batch_size, projection_dim)) — Embeddings projected from the embeddings of input conditions.
  • timestep (torch.LongTensor) — Used to indicate the denoising step.
  • block_controlnet_hidden_states (list of torch.Tensor) — A list of tensors that, if specified, are added to the residuals of transformer blocks.
  • joint_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.

The SD3Transformer2DModel forward method.
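
A minimal sketch of calling forward directly on random tensors. The configuration below is deliberately tiny and purely illustrative (pretrained SD3 checkpoints are far larger); tensor shapes follow the parameter descriptions above, and caption_projection_dim is kept equal to num_attention_heads * attention_head_dim as in the default configuration.

```python
import torch

from diffusers import SD3Transformer2DModel

# Deliberately tiny, illustrative configuration -- not a pretrained SD3 model.
transformer = SD3Transformer2DModel(
    sample_size=32,
    patch_size=2,
    in_channels=16,
    num_layers=1,
    attention_head_dim=64,
    num_attention_heads=4,
    joint_attention_dim=4096,
    caption_projection_dim=256,  # equals num_attention_heads * attention_head_dim
    pooled_projection_dim=2048,
    out_channels=16,
)

batch = 1
hidden_states = torch.randn(batch, 16, 32, 32)        # latent input
encoder_hidden_states = torch.randn(batch, 77, 4096)  # prompt token embeddings
pooled_projections = torch.randn(batch, 2048)         # pooled prompt embeddings
timestep = torch.tensor([999], dtype=torch.long)      # denoising step

with torch.no_grad():
    output = transformer(
        hidden_states=hidden_states,
        encoder_hidden_states=encoder_hidden_states,
        pooled_projections=pooled_projections,
        timestep=timestep,
    )

print(output.sample.shape)  # torch.Size([1, 16, 32, 32])
```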

fuse_qkv_projections

( )

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused.

This API is 🧪 experimental.
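
A minimal sketch of toggling the fused projections around inference, assuming transformer is an instantiated SD3Transformer2DModel:

```python
# Fuse the query/key/value projection matrices (experimental).
transformer.fuse_qkv_projections()

# ... run inference ...

# Restore the original, unfused projections.
transformer.unfuse_qkv_projections()
```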

set_attn_processor

( processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]] )

Parameters

  • processor (dict of AttentionProcessor or only AttentionProcessor) — The instantiated processor class or a dictionary of processor classes that will be set as the processor for all Attention layers.

    If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.
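
A minimal sketch, assuming transformer is an instantiated SD3Transformer2DModel. JointAttnProcessor2_0 is the attention processor SD3 uses by default in diffusers; substitute your own AttentionProcessor subclass as needed.

```python
from diffusers.models.attention_processor import JointAttnProcessor2_0

# Apply one processor instance to every attention layer.
transformer.set_attn_processor(JointAttnProcessor2_0())

# Or pass a dict keyed by attention-layer path
# (see transformer.attn_processors for the expected keys).
processors = {name: JointAttnProcessor2_0() for name in transformer.attn_processors}
transformer.set_attn_processor(processors)
```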

unfuse_qkv_projections

( )

Disables the fused QKV projection if enabled.

This API is 🧪 experimental.
