Transformer3D

The Transformer2D model extended for video-like data.

Transformer3DModel

class diffusers.Transformer3DModel


( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: typing.Optional[int] = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: int = 1280 activation_fn: str = 'geglu' )

Parameters

  • num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention.
  • attention_head_dim (int, optional, defaults to 88) — The number of channels in each head.
  • in_channels (int, optional) — Pass if the input is continuous. The number of channels in the input and output.
  • num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use.
  • dropout (float, optional, defaults to 0.0) — The dropout probability to use.
  • norm_num_groups (int, optional, defaults to 32) — The number of groups for the group norm.
  • cross_attention_dim (int, optional, defaults to 1280) — The number of encoder_hidden_states dimensions to use.
  • activation_fn (str, optional, defaults to "geglu") — Activation function to be used in feed-forward.

Transformer model for video-like data.

When the input is continuous: first, project the input (the embedding) and reshape it to (batch, height * width, channels). Then apply the 3D Transformer blocks. Finally, reshape the output back into a video.
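The reshape round-trip described above can be sketched with plain array operations (a minimal NumPy illustration of the shape bookkeeping only; the real model works on torch tensors and applies learned projections and attention in between):

```python
import numpy as np

# Hypothetical video batch: (batch, channels, num_frames, height, width)
b, c, f, h, w = 2, 4, 8, 16, 16
video = np.random.randn(b, c, f, h, w)

# Fold frames into the batch dimension: (b * f, c, h, w)
frames = video.transpose(0, 2, 1, 3, 4).reshape(b * f, c, h, w)

# Flatten the spatial dims into a token sequence for attention: (b * f, h * w, c)
tokens = frames.transpose(0, 2, 3, 1).reshape(b * f, h * w, c)

# ... the Transformer blocks would operate on `tokens` here ...

# Reshape back into video form: (b, c, f, h, w)
out = tokens.reshape(b * f, h, w, c).transpose(0, 3, 1, 2)
out = out.reshape(b, f, c, h, w).transpose(0, 2, 1, 3, 4)

assert out.shape == video.shape
```

Because no blocks are applied in this sketch, the round-trip reproduces the input exactly, which is a useful sanity check when reimplementing the reshapes.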

forward


( hidden_states encoder_hidden_states = None timestep = None return_dict: bool = True ) Transformer3DModelOutput or tuple

Parameters

  • hidden_states (torch.FloatTensor of shape (batch size, channel, num_frames, height, width)) — Input hidden_states.
  • encoder_hidden_states (torch.FloatTensor of shape (batch size, encoder_hidden_states dim), optional) — Conditional embeddings for the cross-attention layer. If not given, cross-attention defaults to self-attention.
  • timestep (torch.long, optional) — Optional timestep applied as an embedding in the AdaLayerNorm layers. Used to indicate the denoising step.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a Transformer3DModelOutput instead of a plain tuple.

Returns

Transformer3DModelOutput or tuple

Transformer3DModelOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
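The return_dict convention above can be illustrated with a stand-in (a hedged sketch using a plain dataclass instead of the real diffusers classes; the forward body is a no-op placeholder, not the actual model logic):

```python
from dataclasses import dataclass

# Stand-in for diffusers' Transformer3DModelOutput (illustrative only)
@dataclass
class FakeOutput:
    sample: list

def fake_forward(hidden_states, return_dict=True):
    sample = hidden_states  # the real model would run the Transformer blocks here
    if not return_dict:
        # tuple form: the sample tensor is the first element
        return (sample,)
    return FakeOutput(sample=sample)

# return_dict=True: access the result via the .sample attribute
out = fake_forward([1, 2, 3])
print(out.sample)

# return_dict=False: unpack the first element of the tuple
(sample,) = fake_forward([1, 2, 3], return_dict=False)
```

Both calling styles yield the same tensor; the dataclass form is simply more self-documenting.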

Transformer3DModelOutput

class diffusers.models.transformer_3d.Transformer3DModelOutput


( sample: FloatTensor )

Parameters

  • sample (torch.FloatTensor of shape (batch_size, num_channels, num_frames, height, width)) — The hidden states output conditioned on encoder_hidden_states input.