Diffusers documentation

Transformer Temporal

You are viewing the documentation for v0.25.0. A newer version, v0.32.2, is available.

A Transformer model for video-like data.

TransformerTemporalModel

class diffusers.models.TransformerTemporalModel

( num_attention_heads: int = 16, attention_head_dim: int = 88, in_channels: Optional = None, out_channels: Optional = None, num_layers: int = 1, dropout: float = 0.0, norm_num_groups: int = 32, cross_attention_dim: Optional = None, attention_bias: bool = False, sample_size: Optional = None, activation_fn: str = 'geglu', norm_elementwise_affine: bool = True, double_self_attention: bool = True, positional_embeddings: Optional = None, num_positional_embeddings: Optional = None )

Parameters

  • num_attention_heads (int, optional, defaults to 16) β€” The number of heads to use for multi-head attention.
  • attention_head_dim (int, optional, defaults to 88) β€” The number of channels in each head.
  • in_channels (int, optional) β€” The number of channels in the input and output (specify if the input is continuous).
  • num_layers (int, optional, defaults to 1) β€” The number of layers of Transformer blocks to use.
  • dropout (float, optional, defaults to 0.0) β€” The dropout probability to use.
  • cross_attention_dim (int, optional) β€” The number of encoder_hidden_states dimensions to use.
  • attention_bias (bool, optional) β€” Configure if the TransformerBlock attention should contain a bias parameter.
  • sample_size (int, optional) β€” The width of the latent images (specify if the input is discrete). This is fixed during training since it is used to learn a number of position embeddings.
  • activation_fn (str, optional, defaults to "geglu") β€” Activation function to use in feed-forward. See diffusers.models.activations.get_activation for supported activation functions.
  • norm_elementwise_affine (bool, optional) β€” Configure if the TransformerBlock should use learnable elementwise affine parameters for normalization.
  • double_self_attention (bool, optional) — Configure if each TransformerBlock should contain two self-attention layers.
  • positional_embeddings (str, optional) — The type of positional embeddings to apply to the sequence input before passing it to the model.
  • num_positional_embeddings (int, optional) — The maximum length of the sequence over which to apply positional embeddings.

A Transformer model for video-like data.

forward

( hidden_states: FloatTensor, encoder_hidden_states: Optional = None, timestep: Optional = None, class_labels: LongTensor = None, num_frames: int = 1, cross_attention_kwargs: Optional = None, return_dict: bool = True ) β†’ TransformerTemporalModelOutput or tuple

Parameters

  • hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) β€” Input hidden_states.
  • encoder_hidden_states (torch.FloatTensor of shape (batch size, encoder_hidden_states dim), optional) β€” Conditional embeddings for the cross-attention layer. If not given, cross-attention defaults to self-attention.
  • timestep (torch.LongTensor, optional) β€” Optional timestep to be applied as an embedding in AdaLayerNorm, used to indicate the denoising step.
  • class_labels (torch.LongTensor of shape (batch size, num classes), optional) β€” Optional class labels to be applied as an embedding in AdaLayerNormZero, used to condition on class labels.
  • num_frames (int, optional, defaults to 1) β€” The number of frames to be processed per batch. This is used to reshape the hidden states.
  • cross_attention_kwargs (dict, optional) β€” A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
  • return_dict (bool, optional, defaults to True) β€” Whether or not to return a TransformerTemporalModelOutput instead of a plain tuple.

If return_dict is True, a TransformerTemporalModelOutput is returned; otherwise, a tuple is returned where the first element is the sample tensor.

The TransformerTemporal forward method.

TransformerTemporalModelOutput

class diffusers.models.transformer_temporal.TransformerTemporalModelOutput

( sample: FloatTensor )

Parameters

  • sample (torch.FloatTensor of shape (batch_size x num_frames, num_channels, height, width)) β€” The hidden states output conditioned on encoder_hidden_states input.

The output of TransformerTemporalModel.