Diffusers documentation

PixArtTransformer2DModel

A Transformer model for image-like data from PixArt-Alpha and PixArt-Sigma.

PixArtTransformer2DModel

class diffusers.PixArtTransformer2DModel

( num_attention_heads: int = 16 attention_head_dim: int = 72 in_channels: int = 4 out_channels: Optional[int] = 8 num_layers: int = 28 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: Optional[int] = 1152 attention_bias: bool = True sample_size: int = 128 patch_size: int = 2 activation_fn: str = 'gelu-approximate' num_embeds_ada_norm: Optional[int] = 1000 upcast_attention: bool = False norm_type: str = 'ada_norm_single' norm_elementwise_affine: bool = False norm_eps: float = 1e-06 interpolation_scale: Optional[int] = None use_additional_conditions: Optional[bool] = None caption_channels: Optional[int] = None attention_type: Optional[str] = 'default' )

Parameters

  • num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention.
  • attention_head_dim (int, optional, defaults to 72) — The number of channels in each head.
  • in_channels (int, defaults to 4) — The number of channels in the input.
  • out_channels (int, optional) — The number of channels in the output. Specify this parameter if the output channel number differs from the input.
  • num_layers (int, optional, defaults to 28) — The number of layers of Transformer blocks to use.
  • dropout (float, optional, defaults to 0.0) — The dropout probability to use within the Transformer blocks.
  • norm_num_groups (int, optional, defaults to 32) — Number of groups for group normalization within Transformer blocks.
  • cross_attention_dim (int, optional) — The dimensionality for cross-attention layers, typically matching the encoder’s hidden dimension.
  • attention_bias (bool, optional, defaults to True) — Configure if the Transformer blocks’ attention should contain a bias parameter.
  • sample_size (int, defaults to 128) — The width of the latent images. This parameter is fixed during training.
  • patch_size (int, defaults to 2) — Size of the patches the model processes, relevant for architectures working on non-sequential data.
  • activation_fn (str, optional, defaults to “gelu-approximate”) — Activation function to use in feed-forward networks within Transformer blocks.
  • num_embeds_ada_norm (int, optional, defaults to 1000) — Number of embeddings for AdaLayerNorm, fixed during training and affects the maximum denoising steps during inference.
  • upcast_attention (bool, optional, defaults to False) — If true, upcasts the attention computation to float32 for improved numerical stability.
  • norm_type (str, optional, defaults to “ada_norm_single”) — Specifies the type of normalization used; only ‘ada_norm_single’ is supported.
  • norm_elementwise_affine (bool, optional, defaults to False) — If true, enables element-wise affine parameters in the normalization layers.
  • norm_eps (float, optional, defaults to 1e-6) — A small constant added to the denominator in normalization layers to prevent division by zero.
  • interpolation_scale (int, optional) — Scale factor to use when interpolating the position embeddings.
  • use_additional_conditions (bool, optional) — Whether additional conditions (resolution and aspect ratio embeddings) are used as inputs.
  • attention_type (str, optional, defaults to “default”) — Kind of attention mechanism to be used.
  • caption_channels (int, optional, defaults to None) — Number of channels to use for projecting the caption embeddings.
  • use_linear_projection (bool, optional, defaults to False) — Deprecated argument. Will be removed in a future version.
  • num_vector_embeds (bool, optional, defaults to False) — Deprecated argument. Will be removed in a future version.

A 2D Transformer model as introduced in the PixArt family of models (PixArt-Alpha: https://arxiv.org/abs/2310.00426, PixArt-Sigma: https://arxiv.org/abs/2403.04692).
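
The transformer can be loaded on its own, without the rest of a PixArt pipeline. The snippet below is a minimal sketch; it assumes the PixArt-alpha/PixArt-XL-2-1024-MS checkpoint with a transformer subfolder is available on the Hub.

```python
import torch

from diffusers import PixArtTransformer2DModel

# Load only the transformer component of a PixArt-Alpha checkpoint
# (repository id and subfolder layout are assumptions).
transformer = PixArtTransformer2DModel.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    subfolder="transformer",
    torch_dtype=torch.float16,
)
print(transformer.config.sample_size)  # width of the latent images the model was trained on
```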

forward

( hidden_states: torch.Tensor encoder_hidden_states: Optional[torch.Tensor] = None timestep: Optional[torch.LongTensor] = None added_cond_kwargs: Dict[str, torch.Tensor] = None cross_attention_kwargs: Dict[str, Any] = None attention_mask: Optional[torch.Tensor] = None encoder_attention_mask: Optional[torch.Tensor] = None return_dict: bool = True )

Parameters

  • hidden_states (torch.FloatTensor of shape (batch size, channel, height, width)) — Input hidden_states.
  • encoder_hidden_states (torch.FloatTensor of shape (batch size, sequence len, embed dims), optional) — Conditional embeddings for cross attention layer. If not given, cross-attention defaults to self-attention.
  • timestep (torch.LongTensor, optional) — Used to indicate the denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm.
  • added_cond_kwargs (Dict[str, Any], optional) — Additional conditions to be used as inputs.
  • cross_attention_kwargs (Dict[str, Any], optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
  • attention_mask (torch.Tensor, optional) — An attention mask of shape (batch, key_tokens) applied to encoder_hidden_states. A value of 1 keeps the token; a value of 0 discards it. The mask is converted into a bias that adds large negative values to the attention scores of the “discard” tokens.
  • encoder_attention_mask (torch.Tensor, optional) — Cross-attention mask applied to encoder_hidden_states. Two formats are supported:

    • Mask (batch, sequence_length) True = keep, False = discard.
    • Bias (batch, 1, sequence_length) 0 = keep, -10000 = discard.

    If ndim == 2: will be interpreted as a mask, then converted into a bias consistent with the format above. This bias will be added to the cross-attention scores.

  • return_dict (bool, optional, defaults to True) — Whether or not to return a Transformer2DModelOutput instead of a plain tuple.

The PixArtTransformer2DModel forward method.
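
forward is normally called for you by a PixArt pipeline during denoising, but it can also be invoked directly. The sketch below uses a tiny, randomly initialized configuration chosen purely for illustration (the hyperparameters are assumptions, not a trained checkpoint), with random tensors standing in for the latents and caption embeddings.

```python
import torch

from diffusers import PixArtTransformer2DModel

# Tiny, randomly initialized config for illustration only (assumption, not a
# real PixArt configuration). cross_attention_dim matches the inner dimension
# (num_attention_heads * attention_head_dim = 16).
model = PixArtTransformer2DModel(
    num_attention_heads=2,
    attention_head_dim=8,
    in_channels=4,
    out_channels=8,
    num_layers=2,
    cross_attention_dim=16,
    caption_channels=32,
    sample_size=8,
    patch_size=2,
)

latents = torch.randn(1, 4, 8, 8)        # (batch, in_channels, height, width)
caption_embeds = torch.randn(1, 12, 32)  # (batch, sequence len, caption_channels)
timestep = torch.tensor([500])           # current denoising step

with torch.no_grad():
    out = model(
        hidden_states=latents,
        encoder_hidden_states=caption_embeds,
        timestep=timestep,
    )

print(out.sample.shape)  # torch.Size([1, 8, 8, 8]) -> (batch, out_channels, height, width)
```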

fuse_qkv_projections

( )

Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused.

This API is 🧪 experimental.
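
A minimal usage sketch, assuming transformer is an already loaded PixArtTransformer2DModel (for example, from the loading snippet above):

```python
# Assumes `transformer` is an already loaded PixArtTransformer2DModel.
transformer.fuse_qkv_projections()

# ... run inference with the fused projections ...

# unfuse_qkv_projections() reverts the fusion when it is no longer wanted.
transformer.unfuse_qkv_projections()
```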

set_attn_processor

( processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]] )

Parameters

  • processor (dict of AttentionProcessor or only AttentionProcessor) — The instantiated processor class or a dictionary of processor classes that will be set as the processor for all Attention layers.

    If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.
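
For example, the PyTorch 2 scaled dot-product attention processor can be applied to every attention layer (a sketch; it assumes a transformer loaded as above and PyTorch 2.x):

```python
from diffusers.models.attention_processor import AttnProcessor2_0

# Passing a single processor instance applies it to all attention layers.
transformer.set_attn_processor(AttnProcessor2_0())
```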

unfuse_qkv_projections

( )

Disables the fused QKV projection if enabled.

This API is 🧪 experimental.
