An attention processor is a class for applying different types of attention mechanisms.
AttnProcessor
Default processor for performing attention-related computations.
AttnProcessor2_0
Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0).
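Processors are swapped onto a model with set_attn_processor(). A minimal sketch, assuming a Stable Diffusion v1.5 UNet checkpoint (the model id is only an example):

```py
import torch
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor, AttnProcessor2_0

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
)

# Use PyTorch 2.0 scaled dot-product attention in every attention block ...
unet.set_attn_processor(AttnProcessor2_0())

# ... or fall back to the plain default processor.
unet.set_attn_processor(AttnProcessor())
```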
LoRAAttnProcessor
( hidden_size, cross_attention_dim = None, rank = 4 )
Processor for implementing the LoRA attention mechanism.
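As a sketch of typical usage (adapted from the LoRA training examples; the checkpoint and rank are illustrative), one LoRAAttnProcessor is created per attention module, with hidden_size read from the UNet config:

```py
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import LoRAAttnProcessor

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

lora_attn_procs = {}
for name in unet.attn_processors.keys():
    # Self-attention ("attn1") layers are not conditioned on text features.
    cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
    if name.startswith("mid_block"):
        hidden_size = unet.config.block_out_channels[-1]
    elif name.startswith("up_blocks"):
        block_id = int(name[len("up_blocks.")])
        hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
    else:  # down_blocks
        block_id = int(name[len("down_blocks.")])
        hidden_size = unet.config.block_out_channels[block_id]
    lora_attn_procs[name] = LoRAAttnProcessor(
        hidden_size=hidden_size, cross_attention_dim=cross_attention_dim, rank=4
    )

unet.set_attn_processor(lora_attn_procs)
```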
CustomDiffusionAttnProcessor
( train_kv = True, train_q_out = True, hidden_size = None, cross_attention_dim = None, out_bias = True, dropout = 0.0 )
Parameters
train_kv (bool, defaults to True) — Whether to newly train the key and value matrices corresponding to the text features.
train_q_out (bool, defaults to True) — Whether to newly train query matrices corresponding to the latent image features.
hidden_size (int, optional, defaults to None) — The hidden size of the attention layer.
cross_attention_dim (int, optional, defaults to None) — The number of channels in the encoder_hidden_states.
out_bias (bool, defaults to True) — Whether to include the bias parameter in train_q_out.
dropout (float, optional, defaults to 0.0) — The dropout probability to use.
Processor for implementing attention for the Custom Diffusion method.
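A rough sketch of how these are typically assigned, modeled on the Custom Diffusion training example (checkpoint and settings are illustrative): new key/value projections are trained only in cross-attention layers, while self-attention layers keep frozen weights.

```py
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import CustomDiffusionAttnProcessor

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

attn_procs = {}
for name in unet.attn_processors.keys():
    # Only cross-attention ("attn2") layers attend to text features.
    cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
    if name.startswith("mid_block"):
        hidden_size = unet.config.block_out_channels[-1]
    elif name.startswith("up_blocks"):
        hidden_size = list(reversed(unet.config.block_out_channels))[int(name[len("up_blocks.")])]
    else:  # down_blocks
        hidden_size = unet.config.block_out_channels[int(name[len("down_blocks.")])]
    attn_procs[name] = CustomDiffusionAttnProcessor(
        train_kv=cross_attention_dim is not None,  # train new K/V only where text is attended to
        train_q_out=False,
        hidden_size=hidden_size,
        cross_attention_dim=cross_attention_dim,
    )

unet.set_attn_processor(attn_procs)
```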
AttnAddedKVProcessor
Processor for performing attention-related computations with extra learnable key and value matrices for the text encoder.
AttnAddedKVProcessor2_0
Processor for performing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0), with extra learnable key and value matrices for the text encoder.
LoRAAttnAddedKVProcessor
( hidden_size, cross_attention_dim = None, rank = 4 )
Processor for implementing the LoRA attention mechanism with extra learnable key and value matrices for the text encoder.
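The added-KV processors are intended for attention blocks that carry additional key/value projections for the text features. A minimal construction sketch, assuming an Attention block built with added_kv_proj_dim (all dimensions are illustrative and the block is only constructed, not run):

```py
from diffusers.models.attention_processor import Attention, AttnAddedKVProcessor2_0

attn = Attention(
    query_dim=320,
    cross_attention_dim=768,
    added_kv_proj_dim=768,            # extra learnable K/V projections for the text features
    cross_attention_norm="layer_norm",
    processor=AttnAddedKVProcessor2_0(),
)
```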
XFormersAttnProcessor
( attention_op: typing.Optional[typing.Callable] = None )
Parameters
attention_op (Callable, optional, defaults to None) — The base operator to use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator.
Processor for implementing memory efficient attention using xFormers.
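A minimal sketch, assuming xFormers is installed and using a Stable Diffusion pipeline as an example:

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.models.attention_processor import XFormersAttnProcessor

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Let the pipeline install the xFormers processor for you ...
pipe.enable_xformers_memory_efficient_attention()

# ... or set it on the UNet explicitly; attention_op=None lets xFormers pick the operator.
pipe.unet.set_attn_processor(XFormersAttnProcessor(attention_op=None))
```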
LoRAXFormersAttnProcessor
( hidden_size, cross_attention_dim, rank = 4, attention_op: typing.Optional[typing.Callable] = None )
Parameters
hidden_size (int, optional) — The hidden size of the attention layer.
cross_attention_dim (int, optional) — The number of channels in the encoder_hidden_states.
rank (int, defaults to 4) — The dimension of the LoRA update matrices.
attention_op (Callable, optional, defaults to None) — The base operator to use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator.
Processor for implementing the LoRA attention mechanism with memory efficient attention using xFormers.
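Construction mirrors LoRAAttnProcessor, with an optional xFormers operator on top. A short sketch with illustrative dimensions for a Stable Diffusion v1 cross-attention layer:

```py
from diffusers.models.attention_processor import LoRAXFormersAttnProcessor

# Illustrative values: hidden_size and cross_attention_dim depend on the attention layer.
proc = LoRAXFormersAttnProcessor(
    hidden_size=320,
    cross_attention_dim=768,
    rank=4,
    attention_op=None,  # let xFormers choose the best operator
)
```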
CustomDiffusionXFormersAttnProcessor
( train_kv = True, train_q_out = False, hidden_size = None, cross_attention_dim = None, out_bias = True, dropout = 0.0, attention_op: typing.Optional[typing.Callable] = None )
Parameters
train_kv (bool, defaults to True) — Whether to newly train the key and value matrices corresponding to the text features.
train_q_out (bool, defaults to False) — Whether to newly train query matrices corresponding to the latent image features.
hidden_size (int, optional, defaults to None) — The hidden size of the attention layer.
cross_attention_dim (int, optional, defaults to None) — The number of channels in the encoder_hidden_states.
out_bias (bool, defaults to True) — Whether to include the bias parameter in train_q_out.
dropout (float, optional, defaults to 0.0) — The dropout probability to use.
attention_op (Callable, optional, defaults to None) — The base operator to use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator.
Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method.
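A construction sketch with illustrative values; when xFormers is available, it can stand in for CustomDiffusionAttnProcessor in the assignment loop shown earlier:

```py
from diffusers.models.attention_processor import CustomDiffusionXFormersAttnProcessor

proc = CustomDiffusionXFormersAttnProcessor(
    train_kv=True,            # train new key/value projections for the text features
    train_q_out=False,        # keep the query/output projections frozen
    hidden_size=320,          # illustrative; depends on the attention layer
    cross_attention_dim=768,
    attention_op=None,        # let xFormers choose the best operator
)
```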
SlicedAttnProcessor
( slice_size )
Processor for implementing sliced attention.
SlicedAttnAddedKVProcessor
( slice_size )
Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder.
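A minimal sketch of enabling sliced attention on a UNet (the checkpoint and slice size are illustrative); pipelines expose the same behavior through enable_attention_slicing():

```py
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import SlicedAttnProcessor

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

# Attention is computed in chunks of `slice_size` along the batch x heads dimension,
# trading some speed for lower peak memory.
unet.set_attn_processor(SlicedAttnProcessor(slice_size=2))
```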