Attention Processor
An attention processor is a class for applying different types of attention mechanisms.
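Processors can be swapped on any model that exposes set_attn_processor. A minimal sketch (the checkpoint id is only an example) of setting the PyTorch 2.0 scaled dot-product processor on the UNet of a pipeline:

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor2_0

# Example checkpoint; any model exposing `set_attn_processor` works the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Replace every attention processor in the UNet with the PyTorch 2.0
# scaled dot-product attention processor.
pipe.unet.set_attn_processor(AttnProcessor2_0())

# Inspect the processors that are currently set (a dict keyed by layer name).
print(pipe.unet.attn_processors)
```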
AttnProcessor
Default processor for performing attention-related computations.
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0).
Processor for performing attention-related computations with extra learnable key and value matrices for the text encoder.
Processor for performing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0), with extra learnable key and value matrices for the text encoder.
Processor for implementing flash attention using torch_npu. torch_npu supports only the fp16 and bf16 data types; if fp32 is used, F.scaled_dot_product_attention is used for computation instead, and the acceleration on NPU is not significant.
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). It uses fused projection layers. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused.
This API is currently 🧪 experimental and can change in the future.
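A minimal sketch of the fused path, assuming a recent diffusers release where pipelines expose fuse_qkv_projections (the checkpoint id is only an example):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Fusing the QKV projections switches the attention processors to
# FusedAttnProcessor2_0 under the hood.
pipe.fuse_qkv_projections()
image = pipe("a photo of an astronaut riding a horse").images[0]

# Restore the unfused projection layers if needed.
pipe.unfuse_qkv_projections()
```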
Allegro
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). This is used in the Allegro model. It applies a normalization layer and rotary embedding on the query and key vectors.
AuraFlow
Attention processor typically used in processing the AuraFlow model.
Attention processor typically used in processing the AuraFlow model, with fused projections.
CogVideoX
Processor for implementing scaled dot-product attention for the CogVideoX model. It applies a rotary embedding on query and key vectors, but does not include spatial normalization.
Processor for implementing scaled dot-product attention for the CogVideoX model. It applies a rotary embedding on query and key vectors, but does not include spatial normalization.
CrossFrameAttnProcessor
class diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.CrossFrameAttnProcessor
< source >( batch_size = 2 )
Cross-frame attention processor. Each frame attends to the first frame.
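A minimal sketch of wiring the cross-frame processor into a Stable Diffusion UNet, roughly as the text-to-video-zero pipeline does internally (the checkpoint id is only an example):

```python
from diffusers import StableDiffusionPipeline
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import (
    CrossFrameAttnProcessor,
)

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Every frame attends to the first frame; batch_size=2 accounts for the
# batch being doubled by classifier-free guidance.
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
```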
Custom Diffusion
class diffusers.models.attention_processor.CustomDiffusionAttnProcessor
< source >( train_kv: bool = True train_q_out: bool = True hidden_size: typing.Optional[int] = None cross_attention_dim: typing.Optional[int] = None out_bias: bool = True dropout: float = 0.0 )
Parameters
- train_kv (bool, defaults to True) — Whether to newly train the key and value matrices corresponding to the text features.
- train_q_out (bool, defaults to True) — Whether to newly train query matrices corresponding to the latent image features.
- hidden_size (int, optional, defaults to None) — The hidden size of the attention layer.
- cross_attention_dim (int, optional, defaults to None) — The number of channels in the encoder_hidden_states.
- out_bias (bool, defaults to True) — Whether to include the bias parameter in train_q_out.
- dropout (float, optional, defaults to 0.0) — The dropout probability to use.
Processor for implementing attention for the Custom Diffusion method.
class diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0
< source >( train_kv: bool = True train_q_out: bool = True hidden_size: typing.Optional[int] = None cross_attention_dim: typing.Optional[int] = None out_bias: bool = True dropout: float = 0.0 )
Parameters
- train_kv (bool, defaults to True) — Whether to newly train the key and value matrices corresponding to the text features.
- train_q_out (bool, defaults to True) — Whether to newly train query matrices corresponding to the latent image features.
- hidden_size (int, optional, defaults to None) — The hidden size of the attention layer.
- cross_attention_dim (int, optional, defaults to None) — The number of channels in the encoder_hidden_states.
- out_bias (bool, defaults to True) — Whether to include the bias parameter in train_q_out.
- dropout (float, optional, defaults to 0.0) — The dropout probability to use.
Processor for implementing attention for the Custom Diffusion method using PyTorch 2.0’s memory-efficient scaled dot-product attention.
class diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor
< source >( train_kv: bool = True train_q_out: bool = False hidden_size: typing.Optional[int] = None cross_attention_dim: typing.Optional[int] = None out_bias: bool = True dropout: float = 0.0 attention_op: typing.Optional[typing.Callable] = None )
Parameters
- train_kv (bool, defaults to True) — Whether to newly train the key and value matrices corresponding to the text features.
- train_q_out (bool, defaults to False) — Whether to newly train query matrices corresponding to the latent image features.
- hidden_size (int, optional, defaults to None) — The hidden size of the attention layer.
- cross_attention_dim (int, optional, defaults to None) — The number of channels in the encoder_hidden_states.
- out_bias (bool, defaults to True) — Whether to include the bias parameter in train_q_out.
- dropout (float, optional, defaults to 0.0) — The dropout probability to use.
- attention_op (Callable, optional, defaults to None) — The base operator to use as the attention operator. It is recommended to set to None and allow xFormers to choose the best operator.
Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method.
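A sketch of how these processors are typically installed per attention layer, loosely following the Custom Diffusion training example (the checkpoint id and the layer-name heuristics are assumptions based on the standard UNet naming):

```python
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import CustomDiffusionAttnProcessor2_0

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

attn_procs = {}
for name in unet.attn_processors:
    # Self-attention layers ("attn1") receive no encoder_hidden_states.
    cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
    if name.startswith("mid_block"):
        hidden_size = unet.config.block_out_channels[-1]
    elif name.startswith("up_blocks"):
        block_id = int(name[len("up_blocks.")])
        hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
    else:  # down_blocks
        block_id = int(name[len("down_blocks.")])
        hidden_size = unet.config.block_out_channels[block_id]

    # Only cross-attention layers get newly trained key/value projections.
    attn_procs[name] = CustomDiffusionAttnProcessor2_0(
        train_kv=cross_attention_dim is not None,
        train_q_out=False,
        hidden_size=hidden_size,
        cross_attention_dim=cross_attention_dim,
    )

unet.set_attn_processor(attn_procs)
```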
Flux
Attention processor typically used in processing SD3-like self-attention projections.
Attention processor typically used in processing SD3-like self-attention projections.
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0).
Hunyuan
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). This is used in the HunyuanDiT model. It applies a normalization layer and rotary embedding on the query and key vectors.
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0) with fused projection layers. This is used in the HunyuanDiT model. It applies a normalization layer and rotary embedding on the query and key vectors.
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). This is used in the HunyuanDiT model. It applies a normalization layer and rotary embedding on the query and key vectors. This variant of the processor employs Perturbed Attention Guidance.
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). This is used in the HunyuanDiT model. It applies a normalization layer and rotary embedding on the query and key vectors. This variant of the processor employs Perturbed Attention Guidance.
IdentitySelfAttnProcessor2_0
Processor for implementing PAG using scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). PAG reference: https://arxiv.org/abs/2403.17377
Processor for implementing PAG using scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). PAG reference: https://arxiv.org/abs/2403.17377
IP-Adapter
class diffusers.models.attention_processor.IPAdapterAttnProcessor
< source >( hidden_size cross_attention_dim = None num_tokens = (4,) scale = 1.0 )
Parameters
- hidden_size (int) — The hidden size of the attention layer.
- cross_attention_dim (int) — The number of channels in the encoder_hidden_states.
- num_tokens (int, Tuple[int] or List[int], defaults to (4,)) — The context length of the image features.
- scale (float or List[float], defaults to 1.0) — The weight scale of the image prompt.
Attention processor for Multiple IP-Adapters.
class diffusers.models.attention_processor.IPAdapterAttnProcessor2_0
< source >( hidden_size cross_attention_dim = None num_tokens = (4,) scale = 1.0 )
Parameters
- hidden_size (int) — The hidden size of the attention layer.
- cross_attention_dim (int) — The number of channels in the encoder_hidden_states.
- num_tokens (int, Tuple[int] or List[int], defaults to (4,)) — The context length of the image features.
- scale (float or List[float], defaults to 1.0) — The weight scale of the image prompt.
Attention processor for IP-Adapter for PyTorch 2.0.
class diffusers.models.attention_processor.SD3IPAdapterJointAttnProcessor2_0
< source >( hidden_size: int ip_hidden_states_dim: int head_dim: int timesteps_emb_dim: int = 1280 scale: float = 0.5 )
Parameters
- hidden_size (int) — The number of hidden channels.
- ip_hidden_states_dim (int) — The image feature dimension.
- head_dim (int) — The number of head channels.
- timesteps_emb_dim (int, defaults to 1280) — The number of input channels for the timestep embedding.
- scale (float, defaults to 0.5) — IP-Adapter scale.
Attention processor for IP-Adapter, typically used in processing SD3-like self-attention projections, with additional image-based information and timestep embeddings.
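In practice these processors are rarely constructed by hand; loading an IP-Adapter through the pipeline installs them automatically. A minimal sketch (the repository, subfolder, and weight names are only examples):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Loading an IP-Adapter replaces the UNet's cross-attention processors with
# IPAdapterAttnProcessor2_0 (or IPAdapterAttnProcessor on PyTorch < 2.0).
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)

# Adjust the weight scale of the image prompt afterwards.
pipe.set_ip_adapter_scale(0.6)
```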
JointAttnProcessor2_0
Attention processor typically used in processing SD3-like self-attention projections.
Attention processor typically used in processing SD3-like self-attention projections.
Attention processor typically used in processing SD3-like self-attention projections.
Attention processor typically used in processing SD3-like self-attention projections.
LoRA
Processor for implementing attention with LoRA.
Processor for implementing attention with LoRA (enabled by default if you’re using PyTorch 2.0).
Processor for implementing attention with LoRA with extra learnable key and value matrices for the text encoder.
Processor for implementing attention with LoRA using xFormers.
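In recent diffusers releases, LoRA weights are usually attached through the loader API rather than by setting these processors by hand. A minimal sketch (the repository id and weight name are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Placeholder LoRA checkpoint; any compatible LoRA works the same way.
pipe.load_lora_weights(
    "some-user/some-lora", weight_name="pytorch_lora_weights.safetensors"
)

# Scale the LoRA contribution at inference time.
image = pipe(
    "a photo of an astronaut riding a horse",
    cross_attention_kwargs={"scale": 0.7},
).images[0]
```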
Lumina-T2X
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). This is used in the LuminaNextDiT model. It applies a normalization layer and rotary embedding on the query and key vectors.
Mochi
Attention processor used in Mochi.
Attention processor used in Mochi VAE.
Sana
Processor for implementing scaled dot-product linear attention.
Processor for implementing multiscale quadratic attention.
Processor for implementing scaled dot-product linear attention.
Processor for implementing scaled dot-product linear attention.
Stable Audio
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). This is used in the Stable Audio model. It applies rotary embedding on the query and key vectors, and supports MHA, GQA, and MQA.
SlicedAttnProcessor
class diffusers.models.attention_processor.SlicedAttnProcessor
< source >( slice_size: int )
Processor for implementing sliced attention.
class diffusers.models.attention_processor.SlicedAttnAddedKVProcessor
< source >( slice_size )
Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder.
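A minimal sketch of enabling sliced attention on a pipeline UNet (the slice size and checkpoint id are only examples):

```python
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import SlicedAttnProcessor

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Compute attention in slices to trade speed for lower peak memory.
pipe.unet.set_attn_processor(SlicedAttnProcessor(slice_size=2))

# The pipeline-level helper achieves the same effect:
# pipe.enable_attention_slicing(slice_size=2)
```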
XFormersAttnProcessor
class diffusers.models.attention_processor.XFormersAttnProcessor
< source >( attention_op: typing.Optional[typing.Callable] = None )
Parameters
- attention_op (Callable, optional, defaults to None) — The base operator to use as the attention operator. It is recommended to set to None and allow xFormers to choose the best operator.
Processor for implementing memory efficient attention using xFormers.
class diffusers.models.attention_processor.XFormersAttnAddedKVProcessor
< source >( attention_op: typing.Optional[typing.Callable] = None )
Parameters
- attention_op (Callable, optional, defaults to None) — The base operator to use as the attention operator. It is recommended to set to None and allow xFormers to choose the best operator.
Processor for implementing memory efficient attention using xFormers.
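A minimal sketch, assuming the xformers package is installed (the checkpoint id is only an example):

```python
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import XFormersAttnProcessor

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Set the processor directly on the UNet ...
pipe.unet.set_attn_processor(XFormersAttnProcessor())

# ... or use the pipeline helper, which enables xFormers attention on all
# supported sub-modules:
pipe.enable_xformers_memory_efficient_attention()
```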
XLAFlashAttnProcessor2_0
class diffusers.models.attention_processor.XLAFlashAttnProcessor2_0
< source >( partition_spec: typing.Optional[typing.Tuple[typing.Optional[str], ...]] = None )
Processor for implementing scaled dot-product attention with the Pallas flash attention kernel when using torch_xla.
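A minimal sketch, assuming a TPU environment with torch_xla installed (the checkpoint id is only an example):

```python
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import XLAFlashAttnProcessor2_0

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Route attention through the Pallas flash-attention kernel on XLA devices.
pipe.unet.set_attn_processor(XLAFlashAttnProcessor2_0())
```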