The Transformer2D model extended for video-like data.
( num_attention_heads: int = 16, attention_head_dim: int = 88, in_channels: typing.Optional[int] = None, num_layers: int = 1, dropout: float = 0.0, norm_num_groups: int = 32, cross_attention_dim: int = 1280, activation_fn: str = 'geglu' )
Parameters
num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention.
num_attention_heads (attention_head_dim) (int, optional, defaults to 88) — The number of channels in each head.
in_channels (int, optional) — Pass if the input is continuous. The number of channels in the input and output.
num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use.
dropout (float, optional, defaults to 0.0) — The dropout probability to use.
norm_num_groups (int, optional, defaults to 32) — The number of norm groups for the group norm.
cross_attention_dim (int, optional, defaults to 1280) — The number of encoder_hidden_states dimensions to use.
activation_fn (str, optional, defaults to "geglu") — Activation function to be used in the feed-forward layers.

Transformer model for video-like data.
When the input is continuous: first, project the input (aka embedding) and reshape it to (b, h * w, c). Then apply the sparse 3D transformer action. Finally, reshape back to video, as sketched below.
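Example: a minimal sketch that instantiates the model with the documented defaults and mimics the continuous-input flow above in plain torch. The import path, the in_channels value, and the exact internal projection/reshape are assumptions and may differ across versions.

```python
import torch

from diffusers.models import Transformer3DModel  # hypothetical import path

model = Transformer3DModel(
    num_attention_heads=16,
    attention_head_dim=88,
    in_channels=320,  # assumed value; pass when the input is continuous
    num_layers=1,
    dropout=0.0,
    norm_num_groups=32,
    cross_attention_dim=1280,
    activation_fn="geglu",
)

# Sketch of the documented continuous-input flow (internal details assumed):
b, c, f, h, w = 1, 320, 8, 32, 32
x = torch.randn(b, c, f, h, w)
# Fold frames into the batch dim and flatten spatial dims to (b, h * w, c) tokens per frame
x = x.permute(0, 2, 3, 4, 1).reshape(b * f, h * w, c)
# ... transformer blocks operate on these token sequences ...
# Reshape back to video
x = x.reshape(b, f, h, w, c).permute(0, 4, 1, 2, 3)
```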
( hidden_states, encoder_hidden_states = None, timestep = None, return_dict: bool = True ) → Transformer3DModelOutput or tuple
Parameters
hidden_states (torch.FloatTensor of shape (batch size, channel, num_frames, height, width)) — Input hidden_states.
encoder_hidden_states (torch.LongTensor of shape (batch size, encoder_hidden_states dim), optional) — Conditional embeddings for the cross-attention layer. If not given, cross-attention defaults to self-attention.
timestep (torch.long, optional) — Optional timestep to be applied as an embedding in AdaLayerNorm. Used to indicate the denoising step.
return_dict (bool, optional, defaults to True) — Whether or not to return a Transformer3DModelOutput instead of a plain tuple.

Returns
Transformer3DModelOutput or tuple

A Transformer3DModelOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
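Example: a hedged sketch of a forward call, reusing model from the instantiation sketch above. The concrete tensor sizes (8 frames, a sequence length of 77 for the conditional embeddings) are illustrative assumptions.

```python
import torch

hidden_states = torch.randn(1, 320, 8, 32, 32)    # (batch, channel, num_frames, height, width)
encoder_hidden_states = torch.randn(1, 77, 1280)  # assumed float embeddings sized to cross_attention_dim
timestep = torch.tensor(10, dtype=torch.long)     # denoising step for the AdaLayerNorm embedding

out = model(
    hidden_states,
    encoder_hidden_states=encoder_hidden_states,
    timestep=timestep,
    return_dict=True,
)
sample = out.sample
# With return_dict=False, a plain tuple is returned; its first element is the sample tensor:
# sample = model(hidden_states, return_dict=False)[0]
```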
Transformer3DModelOutput

( sample: FloatTensor )
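The output type wraps a single tensor. A minimal stand-in sketch is shown below; the real class is typically a diffusers BaseOutput subclass, and the shape noted in the comment is an assumption.

```python
from dataclasses import dataclass

import torch


@dataclass
class Transformer3DModelOutput:
    # The transformed hidden states; assumed to keep the input layout
    # (batch, channel, num_frames, height, width).
    sample: torch.FloatTensor
```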