Diffusers contains pretrained models for popular algorithms and modules for creating the next set of diffusion models.
The primary function of these models is to denoise an input sample by modeling the distribution $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$.
The models are built on the base class ModelMixin, a torch.nn.Module that provides basic functionality for saving and loading models, both locally and from the Hugging Face Hub.
Base class for all models.
ModelMixin takes care of storing the configuration of the models and handles methods for loading, downloading and saving models.
config_name (str) — A filename under which the model should be stored when calling save_pretrained().
Deactivates gradient checkpointing for the current model.
Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint activations”.
Disable memory efficient attention as implemented in xformers.
Activates gradient checkpointing for the current model.
Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint activations”.
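A minimal sketch of toggling gradient checkpointing on a loaded model; the checkpoint id is only an example, any diffusers-format UNet works:
>>> from diffusers import UNet2DConditionModel

>>> # example checkpoint; any repo containing a diffusers-format UNet works
>>> model = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
>>> model.enable_gradient_checkpointing()  # trade extra compute for lower activation memory during training
>>> model.disable_gradient_checkpointing()  # restore the default behaviour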
( attention_op: typing.Optional[typing.Callable] = None )
Parameters
attention_op (Callable, optional) — Override the default None operator for use as the op argument to the memory_efficient_attention() function of xFormers.
Enable memory efficient attention as implemented in xformers.
When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference time. Speed up at training time is not guaranteed.
Warning: When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedence.
Examples:
>>> import torch
>>> from diffusers import UNet2DConditionModel
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp
>>> model = UNet2DConditionModel.from_pretrained(
... "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16
... )
>>> model = model.to("cuda")
>>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
( pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike, optional) — Can be either:
A string, the repo id of a pretrained model hosted on huggingface.co, e.g. google/ddpm-celebahq-256.
A path to a directory containing model weights saved with ~ModelMixin.save_config, e.g., ./my_model_directory/.
cache_dir (Union[str, os.PathLike], optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
torch_dtype (str or torch.dtype, optional) — Override the default torch.dtype and load the model under this dtype. If "auto" is passed, the dtype will be automatically derived from the model’s weights.
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (i.e., do not try to download the model).
use_auth_token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running diffusers-cli login (stored in ~/.huggingface).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
from_flax (bool, optional, defaults to False) — Load the model weights from a Flax checkpoint save file.
subfolder (str, optional, defaults to "") — In case the relevant files are located inside a subfolder of the model repo (either remote on huggingface.co or downloaded locally), you can specify the folder name here.
mirror (str, optional) — Mirror source to accelerate downloads in China. If you are from China and have an accessibility problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. Please refer to the mirror site for more information.
device_map (str or Dict[str, Union[int, str, torch.device]], optional) — A map that specifies where each submodule should go. It doesn’t need to be refined to each parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the same device. To have Accelerate compute the most optimized device_map automatically, set device_map="auto". For more information about each option see designing a device map.
low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — Speed up model loading by not initializing the weights and only loading the pre-trained weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch, setting this argument to True will raise an error.
variant (str, optional) — If specified, load weights from a variant filename, e.g. pytorch_model.<variant>.bin. variant is ignored when using from_flax.
use_safetensors (bool, optional, defaults to None) — If set to None, the safetensors weights will be downloaded if they’re available and if the safetensors library is installed. If set to True, the model will be forcibly loaded from safetensors weights. If set to False, loading will not use safetensors.
Instantiate a pretrained pytorch model from a pre-trained model configuration.
The model is set in evaluation mode by default using model.eval()
(Dropout modules are deactivated). To train
the model, you should first set it back in training mode with model.train()
.
The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning task.
The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those weights are discarded.
It is required to be logged in (huggingface-cli login
) when you want to use private or gated
models.
Activate the special “offline-mode” to use this method in a firewalled environment.
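A short usage sketch; the checkpoint id, subfolder and dtype are examples only:
>>> import torch
>>> from diffusers import UNet2DConditionModel

>>> # example repo id; any diffusers-format checkpoint works
>>> unet = UNet2DConditionModel.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
... )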
(
only_trainable: bool = False
exclude_embeddings: bool = False
)
→
int
Get number of (optionally, trainable or non-embeddings) parameters in the module.
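For illustration, counting parameters on a loaded model (the checkpoint id is an example):
>>> from diffusers import UNet2DModel

>>> model = UNet2DModel.from_pretrained("google/ddpm-celebahq-256")
>>> model.num_parameters()  # total number of parameters
>>> model.num_parameters(only_trainable=True)  # skip parameters with requires_grad=False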
( save_directory: typing.Union[str, os.PathLike] is_main_process: bool = True save_function: typing.Callable = None safe_serialization: bool = False variant: typing.Optional[str] = None )
Parameters
save_directory (str or os.PathLike) — Directory to which to save. Will be created if it doesn’t exist.
is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful when in distributed training like TPUs and need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.
save_function (Callable) — The function to use to save the state dictionary. Useful on distributed training like TPUs when one needs to replace torch.save by another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.
safe_serialization (bool, optional, defaults to False) — Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle).
variant (str, optional) — If specified, weights are saved in the format pytorch_model.<variant>.bin.
Save a model and its configuration file to a directory, so that it can be re-loaded using the
[from_pretrained()](/docs/diffusers/pr_186/en/api/models#diffusers.ModelMixin.from_pretrained)
class method.
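A minimal save-and-reload sketch (the paths and checkpoint id are illustrative):
>>> from diffusers import UNet2DModel

>>> model = UNet2DModel.from_pretrained("google/ddpm-celebahq-256")
>>> model.save_pretrained("./my_model_directory")  # writes the config and weight files
>>> model = UNet2DModel.from_pretrained("./my_model_directory")  # reload from disk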
( sample: FloatTensor )
( sample_size: typing.Union[int, typing.Tuple[int, int], NoneType] = None in_channels: int = 3 out_channels: int = 3 center_input_sample: bool = False time_embedding_type: str = 'positional' freq_shift: int = 0 flip_sin_to_cos: bool = True down_block_types: typing.Tuple[str] = ('DownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D') up_block_types: typing.Tuple[str] = ('AttnUpBlock2D', 'AttnUpBlock2D', 'AttnUpBlock2D', 'UpBlock2D') block_out_channels: typing.Tuple[int] = (224, 448, 672, 896) layers_per_block: int = 2 mid_block_scale_factor: float = 1 downsample_padding: int = 1 act_fn: str = 'silu' attention_head_dim: typing.Optional[int] = 8 norm_num_groups: int = 32 norm_eps: float = 1e-05 resnet_time_scale_shift: str = 'default' add_attention: bool = True class_embed_type: typing.Optional[str] = None num_class_embeds: typing.Optional[int] = None )
Parameters
sample_size (int or Tuple[int, int], optional, defaults to None) — Height and width of input/output sample. Dimensions must be a multiple of 2 ** (len(block_out_channels) - 1).
in_channels (int, optional, defaults to 3) — Number of channels in the input image.
out_channels (int, optional, defaults to 3) — Number of channels in the output.
center_input_sample (bool, optional, defaults to False) — Whether to center the input sample.
time_embedding_type (str, optional, defaults to "positional") — Type of time embedding to use.
freq_shift (int, optional, defaults to 0) — Frequency shift for fourier time embedding.
flip_sin_to_cos (bool, optional, defaults to True) — Whether to flip sin to cos for fourier time embedding.
down_block_types (Tuple[str], optional, defaults to ("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D")) — Tuple of downsample block types.
mid_block_type (str, optional, defaults to "UNetMidBlock2D") — The mid block type. Choose from UNetMidBlock2D or UnCLIPUNetMidBlock2D.
up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D")) — Tuple of upsample block types.
block_out_channels (Tuple[int], optional, defaults to (224, 448, 672, 896)) — Tuple of block output channels.
layers_per_block (int, optional, defaults to 2) — The number of layers per block.
mid_block_scale_factor (float, optional, defaults to 1) — The scale factor for the mid block.
downsample_padding (int, optional, defaults to 1) — The padding for the downsample convolution.
act_fn (str, optional, defaults to "silu") — The activation function to use.
attention_head_dim (int, optional, defaults to 8) — The attention head dimension.
norm_num_groups (int, optional, defaults to 32) — The number of groups for the normalization.
norm_eps (float, optional, defaults to 1e-5) — The epsilon for the normalization.
resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config for resnet blocks, see ResnetBlock2D. Choose from default or scale_shift.
class_embed_type (str, optional, defaults to None) — The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, "timestep", or "identity".
num_class_embeds (int, optional, defaults to None) — Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing class conditioning with class_embed_type equal to None.
UNet2DModel is a 2D UNet model that takes in a noisy sample and a timestep and returns sample shaped output.
This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library implements for all models (such as downloading or saving).
(
sample: FloatTensor
timestep: typing.Union[torch.Tensor, float, int]
class_labels: typing.Optional[torch.Tensor] = None
return_dict: bool = True
)
→
UNet2DOutput or tuple
Parameters
sample (torch.FloatTensor) — (batch, channel, height, width) noisy inputs tensor.
timestep (torch.FloatTensor or float or int) — (batch) timesteps.
class_labels (torch.FloatTensor, optional, defaults to None) — Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings.
return_dict (bool, optional, defaults to True) — Whether or not to return a UNet2DOutput instead of a plain tuple.
Returns
UNet2DOutput or tuple
UNet2DOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
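A denoising forward-pass sketch; the checkpoint id is an example and the input shape follows the model config:
>>> import torch
>>> from diffusers import UNet2DModel

>>> model = UNet2DModel.from_pretrained("google/ddpm-celebahq-256")
>>> size = model.config.sample_size
>>> sample = torch.randn(1, model.config.in_channels, size, size)  # dummy noisy input
>>> with torch.no_grad():
...     output = model(sample, timestep=10)
>>> output.sample.shape  # same spatial shape as the input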
( sample: FloatTensor )
( sample_size: int = 65536 sample_rate: typing.Optional[int] = None in_channels: int = 2 out_channels: int = 2 extra_in_channels: int = 0 time_embedding_type: str = 'fourier' flip_sin_to_cos: bool = True use_timestep_embedding: bool = False freq_shift: float = 0.0 down_block_types: typing.Tuple[str] = ('DownBlock1DNoSkip', 'DownBlock1D', 'AttnDownBlock1D') up_block_types: typing.Tuple[str] = ('AttnUpBlock1D', 'UpBlock1D', 'UpBlock1DNoSkip') mid_block_type: typing.Tuple[str] = 'UNetMidBlock1D' out_block_type: str = None block_out_channels: typing.Tuple[int] = (32, 32, 64) act_fn: str = None norm_num_groups: int = 8 layers_per_block: int = 1 downsample_each_block: bool = False )
Parameters
sample_size (int, optional) — Default length of sample. Should be adaptable at runtime.
in_channels (int, optional, defaults to 2) — Number of channels in the input sample.
out_channels (int, optional, defaults to 2) — Number of channels in the output.
extra_in_channels (int, optional, defaults to 0) — Number of additional channels to be added to the input of the first down block. Useful for cases where the input data has more channels than what the model is initially designed for.
time_embedding_type (str, optional, defaults to "fourier") — Type of time embedding to use.
freq_shift (float, optional, defaults to 0.0) — Frequency shift for fourier time embedding.
flip_sin_to_cos (bool, optional, defaults to False) — Whether to flip sin to cos for fourier time embedding.
down_block_types (Tuple[str], optional, defaults to ("DownBlock1D", "DownBlock1DNoSkip", "AttnDownBlock1D")) — Tuple of downsample block types.
up_block_types (Tuple[str], optional, defaults to ("UpBlock1D", "UpBlock1DNoSkip", "AttnUpBlock1D")) — Tuple of upsample block types.
block_out_channels (Tuple[int], optional, defaults to (32, 32, 64)) — Tuple of block output channels.
mid_block_type (str, optional, defaults to "UNetMidBlock1D") — Block type for the middle of the UNet.
out_block_type (str, optional, defaults to None) — Optional output processing of the UNet.
act_fn (str, optional, defaults to None) — Optional activation function in UNet blocks.
norm_num_groups (int, optional, defaults to 8) — Group norm member count in UNet blocks.
layers_per_block (int, optional, defaults to 1) — Added number of layers in a UNet block.
downsample_each_block (bool, optional, defaults to False) — Experimental feature for using a UNet without upsampling.
UNet1DModel is a 1D UNet model that takes in a noisy sample and a timestep and returns sample shaped output.
This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library implements for all models (such as downloading or saving).
(
sample: FloatTensor
timestep: typing.Union[torch.Tensor, float, int]
return_dict: bool = True
)
→
UNet1DOutput or tuple
Parameters
sample (torch.FloatTensor) — (batch_size, num_channels, sample_size) noisy inputs tensor.
timestep (torch.FloatTensor or float or int) — (batch) timesteps.
return_dict (bool, optional, defaults to True) — Whether or not to return a UNet1DOutput instead of a plain tuple.
Returns
UNet1DOutput or tuple
UNet1DOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
( sample: FloatTensor )
( sample_size: typing.Optional[int] = None in_channels: int = 4 out_channels: int = 4 center_input_sample: bool = False flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: typing.Optional[str] = 'UNetMidBlock2DCrossAttn' up_block_types: typing.Tuple[str] = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) layers_per_block: typing.Union[int, typing.Tuple[int]] = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: typing.Optional[int] = 32 norm_eps: float = 1e-05 cross_attention_dim: typing.Union[int, typing.Tuple[int]] = 1280 encoder_hid_dim: typing.Optional[int] = None attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 dual_cross_attention: bool = False use_linear_projection: bool = False class_embed_type: typing.Optional[str] = None addition_embed_type: typing.Optional[str] = None num_class_embeds: typing.Optional[int] = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' resnet_skip_time_act: bool = False resnet_out_scale_factor: int = 1.0 time_embedding_type: str = 'positional' time_embedding_dim: typing.Optional[int] = None time_embedding_act_fn: typing.Optional[str] = None timestep_post_act: typing.Optional[str] = None time_cond_proj_dim: typing.Optional[int] = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: typing.Optional[int] = None class_embeddings_concat: bool = False mid_block_only_cross_attention: typing.Optional[bool] = None cross_attention_norm: typing.Optional[str] = None addition_embed_type_num_heads = 64 )
Parameters
int
or Tuple[int, int]
, optional, defaults to None
) —
Height and width of input/output sample.
int
, optional, defaults to 4) — The number of channels in the input sample.
int
, optional, defaults to 4) — The number of channels in the output.
bool
, optional, defaults to False
) — Whether to center the input sample.
bool
, optional, defaults to False
) —
Whether to flip the sin to cos in the time embedding.
int
, optional, defaults to 0) — The frequency shift to apply to the time embedding.
Tuple[str]
, optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")
) —
The tuple of downsample blocks to use.
str
, optional, defaults to "UNetMidBlock2DCrossAttn"
) —
The mid block type. Choose from UNetMidBlock2DCrossAttn
or UNetMidBlock2DSimpleCrossAttn
, will skip the
mid block layer if None
.
Tuple[str]
, optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)
) —
The tuple of upsample blocks to use.
bool
or Tuple[bool]
, optional, default to False
) —
Whether to include self-attention in the basic transformer blocks, see
BasicTransformerBlock
.
Tuple[int]
, optional, defaults to (320, 640, 1280, 1280)
) —
The tuple of output channels for each block.
int
, optional, defaults to 2) — The number of layers per block.
int
, optional, defaults to 1) — The padding to use for the downsampling convolution.
float
, optional, defaults to 1.0) — The scale factor to use for the mid block.
str
, optional, defaults to "silu"
) — The activation function to use.
int
, optional, defaults to 32) — The number of groups to use for the normalization.
If None
, it will skip the normalization and activation layers in post-processing
float
, optional, defaults to 1e-5) — The epsilon to use for the normalization.
int
or Tuple[int]
, optional, defaults to 1280) —
The dimension of the cross attention features.
int
, optional, defaults to None) —
If given, encoder_hidden_states
will be projected from this dimension to cross_attention_dim
.
int
, optional, defaults to 8) — The dimension of the attention heads.
str
, optional, defaults to "default"
) — Time scale shift config
for resnet blocks, see ResnetBlock2D
. Choose from default
or scale_shift
.
str
, optional, defaults to None) —
The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None
,
"timestep"
, "identity"
, "projection"
, or "simple_projection"
.
str
, optional, defaults to None) —
Configures an optional embedding which will be summed with the time embeddings. Choose from None
or
“text”. “text” will use the TextTimeEmbedding
layer.
int
, optional, defaults to None) —
Input dimension of the learnable embedding matrix to be projected to time_embed_dim
, when performing
class conditioning with class_embed_type
equal to None
.
str
, optional, default to positional
) —
The type of position embedding to use for timesteps. Choose from positional
or fourier
.
int
, optional, default to None
) —
An optional override for the dimension of the projected time embedding.
str
, optional, default to None
) —
Optional activation function to use once on the time embeddings before they are passed to the rest of the UNet. Choose from silu, mish, gelu, and swish.
timestep_post_act (str, optional, defaults to None) — The second activation function to use in timestep embedding. Choose from silu, mish and gelu.
int
, optional, default to None
) —
The dimension of cond_proj
layer in timestep embedding.
int
, optional, default to 3
) — The kernel size of conv_in
layer.
int
, optional, default to 3
) — The kernel size of conv_out
layer.
int
, optional) — The dimension of the class_labels
input when
using the “projection” class_embed_type
. Required when using the “projection” class_embed_type
.
bool
, optional, defaults to False
) — Whether to concatenate the time
embeddings with the class embeddings.
bool
, optional, defaults to None
) —
Whether to use cross attention with the mid block when using the UNetMidBlock2DSimpleCrossAttn
. If
only_cross_attention
is given as a single boolean and mid_block_only_cross_attention
is None, the
only_cross_attention
value will be used as the value for mid_block_only_cross_attention
. Else, it will
default to False
.
UNet2DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a timestep and returns sample shaped output.
This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library implements for all the models (such as downloading or saving, etc.)
(
sample: FloatTensor
timestep: typing.Union[torch.Tensor, float, int]
encoder_hidden_states: Tensor
class_labels: typing.Optional[torch.Tensor] = None
timestep_cond: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
down_block_additional_residuals: typing.Optional[typing.Tuple[torch.Tensor]] = None
mid_block_additional_residual: typing.Optional[torch.Tensor] = None
return_dict: bool = True
)
→
UNet2DConditionOutput or tuple
Parameters
torch.FloatTensor
) — (batch, channel, height, width) noisy inputs tensor
torch.FloatTensor
or float
or int
) — (batch) timesteps
torch.FloatTensor
) — (batch, sequence_length, feature_dim) encoder hidden states
bool
, optional, defaults to True
) —
Whether or not to return a models.unet_2d_condition.UNet2DConditionOutput instead of a plain tuple.
dict
, optional) —
A kwargs dictionary that if specified is passed along to the AttentionProcessor
as defined under
self.processor
in
diffusers.cross_attention.
Returns
UNet2DConditionOutput or tuple
UNet2DConditionOutput if return_dict
is True, otherwise a tuple
. When
returning a tuple, the first element is the sample tensor.
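An illustrative conditional forward pass; the checkpoint id, tensor shapes, and the sequence length 77 are assumptions for the sake of the example:
>>> import torch
>>> from diffusers import UNet2DConditionModel

>>> unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
>>> latents = torch.randn(1, unet.config.in_channels, 64, 64)  # dummy noisy latents
>>> text_embeddings = torch.randn(1, 77, unet.config.cross_attention_dim)  # dummy conditioning
>>> with torch.no_grad():
...     noise_pred = unet(latents, timestep=10, encoder_hidden_states=text_embeddings).sample
>>> noise_pred.shape  # same shape as the latents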
( slice_size )
Parameters
str
or int
or list(int)
, optional, defaults to "auto"
) —
When "auto"
, halves the input to the attention heads, so attention will be computed in two steps. If
"max"
, maximum amount of memory will be saved by running only one slice at a time. If a number is
provided, uses as many slices as attention_head_dim // slice_size
. In this case, attention_head_dim
must be a multiple of slice_size
.
Enable sliced attention computation.
When this option is enabled, the attention module will split the input tensor in slices, to compute attention in several steps. This is useful to save some memory in exchange for a small speed decrease.
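For example, on a loaded UNet2DConditionModel (the checkpoint id is an example):
>>> from diffusers import UNet2DConditionModel

>>> unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
>>> unet.set_attention_slice("auto")  # compute attention in two steps
>>> unet.set_attention_slice("max")  # one slice at a time, maximum memory savings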
( processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor]]] )
Parameters
dict
of AttentionProcessor
or AttentionProcessor
) —
The instantiated processor class or a dictionary of processor classes that will be set as the processor
of all Attention
layers.
If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.
Disables custom attention processors and sets the default attention implementation.
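A sketch of swapping attention processors; the checkpoint id is an example and the processor classes are those listed in the signature above:
>>> from diffusers import UNet2DConditionModel
>>> from diffusers.models.attention_processor import AttnProcessor2_0

>>> unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
>>> unet.set_attn_processor(AttnProcessor2_0())  # same processor for every Attention layer (requires PyTorch 2.0)
>>> unet.set_default_attn_processor()  # revert to the default implementation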
( sample: FloatTensor )
( sample_size: typing.Optional[int] = None in_channels: int = 4 out_channels: int = 4 down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'DownBlock3D') up_block_types: typing.Tuple[str] = ('UpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D') block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: typing.Optional[int] = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1024 attention_head_dim: typing.Union[int, typing.Tuple[int]] = 64 )
Parameters
int
or Tuple[int, int]
, optional, defaults to None
) —
Height and width of input/output sample.
int
, optional, defaults to 4) — The number of channels in the input sample.
int
, optional, defaults to 4) — The number of channels in the output.
Tuple[str]
, optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")
) —
The tuple of downsample blocks to use.
Tuple[str]
, optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)
) —
The tuple of upsample blocks to use.
Tuple[int]
, optional, defaults to (320, 640, 1280, 1280)
) —
The tuple of output channels for each block.
int
, optional, defaults to 2) — The number of layers per block.
int
, optional, defaults to 1) — The padding to use for the downsampling convolution.
float
, optional, defaults to 1.0) — The scale factor to use for the mid block.
str
, optional, defaults to "silu"
) — The activation function to use.
int
, optional, defaults to 32) — The number of groups to use for the normalization.
If None
, it will skip the normalization and activation layers in post-processing
float
, optional, defaults to 1e-5) — The epsilon to use for the normalization.
int
, optional, defaults to 1280) — The dimension of the cross attention features.
int
, optional, defaults to 8) — The dimension of the attention heads.
UNet3DConditionModel is a conditional 3D UNet model that takes in a noisy sample, conditional state, and a timestep and returns sample shaped output.
This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library implements for all the models (such as downloading or saving, etc.)
(
sample: FloatTensor
timestep: typing.Union[torch.Tensor, float, int]
encoder_hidden_states: Tensor
class_labels: typing.Optional[torch.Tensor] = None
timestep_cond: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
down_block_additional_residuals: typing.Optional[typing.Tuple[torch.Tensor]] = None
mid_block_additional_residual: typing.Optional[torch.Tensor] = None
return_dict: bool = True
)
→
~models.unet_2d_condition.UNet3DConditionOutput
or tuple
Parameters
torch.FloatTensor
) — (batch, num_frames, channel, height, width) noisy inputs tensor
torch.FloatTensor
or float
or int
) — (batch) timesteps
torch.FloatTensor
) — (batch, sequence_length, feature_dim) encoder hidden states
bool
, optional, defaults to True
) —
Whether or not to return a models.unet_2d_condition.UNet3DConditionOutput
instead of a plain tuple.
dict
, optional) —
A kwargs dictionary that if specified is passed along to the AttentionProcessor
as defined under
self.processor
in
diffusers.cross_attention.
Returns
~models.unet_2d_condition.UNet3DConditionOutput
or tuple
~models.unet_2d_condition.UNet3DConditionOutput
if return_dict
is True, otherwise a tuple
. When
returning a tuple, the first element is the sample tensor.
( slice_size )
Parameters
str
or int
or list(int)
, optional, defaults to "auto"
) —
When "auto"
, halves the input to the attention heads, so attention will be computed in two steps. If
"max"
, maximum amount of memory will be saved by running only one slice at a time. If a number is
provided, uses as many slices as attention_head_dim // slice_size
. In this case, attention_head_dim
must be a multiple of slice_size
.
Enable sliced attention computation.
When this option is enabled, the attention module will split the input tensor in slices, to compute attention in several steps. This is useful to save some memory in exchange for a small speed decrease.
( processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor]]] )
Parameters
dict
of AttentionProcessor
or AttentionProcessor
) —
The instantiated processor class or a dictionary of processor classes that will be set as the processor
of all Attention
layers.
If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.
Disables custom attention processors and sets the default attention implementation.
( sample: FloatTensor )
Output of decoding method.
( latents: FloatTensor )
Output of VQModel encoding method.
( in_channels: int = 3 out_channels: int = 3 down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) block_out_channels: typing.Tuple[int] = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 3 sample_size: int = 32 num_vq_embeddings: int = 256 norm_num_groups: int = 32 vq_embed_dim: typing.Optional[int] = None scaling_factor: float = 0.18215 )
Parameters
down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — Tuple of downsample block types.
up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — Tuple of upsample block types.
block_out_channels (Tuple[int], optional, defaults to (64,)) — Tuple of block output channels.
str
, optional, defaults to "silu"
) — The activation function to use.
int
, optional, defaults to 3
) — Number of channels in the latent space.
int
, optional, defaults to 32
) — TODO
int
, optional, defaults to 256
) — Number of codebook vectors in the VQ-VAE.
int
, optional) — Hidden dim of codebook vectors in the VQ-VAE.
float
, optional, defaults to 0.18215
) —
The component-wise standard deviation of the trained latent space computed using the first batch of the
training set. This is used to scale the latent space to have unit variance when training the diffusion
model. The latents are scaled with the formula z = z * scaling_factor
before being passed to the
diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z
. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image
Synthesis with Latent Diffusion Models paper.
VQ-VAE model from the paper Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray Kavukcuoglu.
This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library implements for all models (such as downloading or saving).
( sample: FloatTensor return_dict: bool = True )
( latent_dist: DiagonalGaussianDistribution )
Output of AutoencoderKL encoding method.
( in_channels: int = 3 out_channels: int = 3 down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) block_out_channels: typing.Tuple[int] = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 )
Parameters
down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — Tuple of downsample block types.
up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — Tuple of upsample block types.
block_out_channels (Tuple[int], optional, defaults to (64,)) — Tuple of block output channels.
str
, optional, defaults to "silu"
) — The activation function to use.
int
, optional, defaults to 4) — Number of channels in the latent space.
int
, optional, defaults to 32
) — TODO
float
, optional, defaults to 0.18215) —
The component-wise standard deviation of the trained latent space computed using the first batch of the
training set. This is used to scale the latent space to have unit variance when training the diffusion
model. The latents are scaled with the formula z = z * scaling_factor
before being passed to the
diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z
. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image
Synthesis with Latent Diffusion Models paper.
Variational Autoencoder (VAE) model with KL loss from the paper Auto-Encoding Variational Bayes by Diederik P. Kingma and Max Welling.
This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library implements for all models (such as downloading or saving).
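A minimal encode/decode round trip; the checkpoint id and image shape are examples, any AutoencoderKL checkpoint works:
>>> import torch
>>> from diffusers import AutoencoderKL

>>> vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
>>> image = torch.randn(1, 3, 512, 512)  # dummy image batch in place of real pixels
>>> with torch.no_grad():
...     latents = vae.encode(image).latent_dist.sample()
...     reconstruction = vae.decode(latents).sample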
Disable sliced VAE decoding. If enable_slicing
was previously invoked, this method will go back to computing
decoding in one step.
Disable tiled VAE decoding. If enable_vae_tiling
was previously invoked, this method will go back to
computing decoding in one step.
Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful to save a large amount of memory and to allow the processing of larger images.
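For example, on a loaded AutoencoderKL (checkpoint id is an example):
>>> from diffusers import AutoencoderKL

>>> vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
>>> vae.enable_slicing()  # decode one image of the batch at a time
>>> vae.enable_tiling()  # process large images in tiles
>>> vae.disable_slicing()  # back to single-step decoding
>>> vae.disable_tiling()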
( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True generator: typing.Optional[torch._C.Generator] = None )
( z: FloatTensor return_dict: bool = True )
Parameters
z (torch.FloatTensor) — Input batch of latent vectors.
return_dict (bool, optional, defaults to True) — Whether or not to return a DecoderOutput instead of a plain tuple.
Decode a batch of images using a tiled decoder.
( x: FloatTensor return_dict: bool = True )
Parameters
x (torch.FloatTensor) — Input batch of images.
return_dict (bool, optional, defaults to True) — Whether or not to return an AutoencoderKLOutput instead of a plain tuple.
Encode a batch of images using a tiled encoder.
( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: typing.Optional[int] = None out_channels: typing.Optional[int] = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: typing.Optional[int] = None attention_bias: bool = False sample_size: typing.Optional[int] = None num_vector_embeds: typing.Optional[int] = None patch_size: typing.Optional[int] = None activation_fn: str = 'geglu' num_embeds_ada_norm: typing.Optional[int] = None use_linear_projection: bool = False only_cross_attention: bool = False upcast_attention: bool = False norm_type: str = 'layer_norm' norm_elementwise_affine: bool = True )
Parameters
int
, optional, defaults to 16) — The number of heads to use for multi-head attention.
int
, optional, defaults to 88) — The number of channels in each head.
int
, optional) —
Pass if the input is continuous. The number of channels in the input and output.
int
, optional, defaults to 1) — The number of layers of Transformer blocks to use.
float
, optional, defaults to 0.0) — The dropout probability to use.
int
, optional) — The number of encoder_hidden_states dimensions to use.
int
, optional) — Pass if the input is discrete. The width of the latent images.
Note that this is fixed at training time as it is used for learning a number of position embeddings. See
ImagePositionalEmbeddings
.
int
, optional) —
Pass if the input is discrete. The number of classes of the vector embeddings of the latent pixels.
Includes the class for the masked latent pixel.
str
, optional, defaults to "geglu"
) — Activation function to be used in feed-forward.
int
, optional) — Pass if at least one of the norm_layers is AdaLayerNorm
.
The number of diffusion steps used during training. Note that this is fixed at training time as it is used
to learn a number of embeddings that are added to the hidden states. During inference, you can denoise for
up to but not more than steps than num_embeds_ada_norm
.
bool
, optional) —
Configure if the TransformerBlocks’ attention should contain a bias parameter.
Transformer model for image-like data. Takes either discrete (classes of vector embeddings) or continuous (actual embeddings) inputs.
When input is continuous: First, project the input (aka embedding) and reshape to b, t, d. Then apply standard transformer action. Finally, reshape to image.
When input is discrete: First, input (classes of latent pixels) is converted to embeddings and has positional
embeddings applied, see ImagePositionalEmbeddings
. Then apply standard transformer action. Finally, predict
classes of unnoised image.
Note that it is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised image do not contain a prediction for the masked pixel as the unnoised image cannot be masked.
(
hidden_states
encoder_hidden_states = None
timestep = None
class_labels = None
cross_attention_kwargs = None
return_dict: bool = True
)
→
Transformer2DModelOutput or tuple
Parameters
hidden_states (torch.LongTensor of shape (batch size, num latent pixels) when discrete, or torch.FloatTensor of shape (batch size, channel, height, width) when continuous) — Input hidden_states.
torch.FloatTensor
of shape (batch size, sequence len, embed dims)
, optional) —
Conditional embeddings for cross attention layer. If not given, cross-attention defaults to
self-attention.
torch.long
, optional) —
Optional timestep to be applied as an embedding in AdaLayerNorm’s. Used to indicate denoising step.
torch.LongTensor
of shape (batch size, num classes)
, optional) —
Optional class labels to be applied as an embedding in AdaLayerZeroNorm. Used to indicate class labels
conditioning.
bool
, optional, defaults to True
) —
Whether or not to return a Transformer2DModelOutput instead of a plain tuple.
Returns
Transformer2DModelOutput or tuple
Transformer2DModelOutput if return_dict
is True, otherwise a tuple
. When
returning a tuple, the first element is the sample tensor.
( sample: FloatTensor )
Parameters
torch.FloatTensor
of shape (batch_size, num_channels, height, width)
or (batch size, num_vector_embeds - 1, num_latent_pixels)
if Transformer2DModel is discrete) —
Hidden states conditioned on encoder_hidden_states
input. If discrete, returns probability distributions
for the unnoised latent pixels.
( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: typing.Optional[int] = None out_channels: typing.Optional[int] = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: typing.Optional[int] = None attention_bias: bool = False sample_size: typing.Optional[int] = None activation_fn: str = 'geglu' norm_elementwise_affine: bool = True double_self_attention: bool = True )
Parameters
int
, optional, defaults to 16) — The number of heads to use for multi-head attention.
int
, optional, defaults to 88) — The number of channels in each head.
int
, optional) —
Pass if the input is continuous. The number of channels in the input and output.
int
, optional, defaults to 1) — The number of layers of Transformer blocks to use.
float
, optional, defaults to 0.0) — The dropout probability to use.
int
, optional) — The number of encoder_hidden_states dimensions to use.
int
, optional) — Pass if the input is discrete. The width of the latent images.
Note that this is fixed at training time as it is used for learning a number of position embeddings. See
ImagePositionalEmbeddings
.
str
, optional, defaults to "geglu"
) — Activation function to be used in feed-forward.
bool
, optional) —
Configure if the TransformerBlocks’ attention should contain a bias parameter.
bool
, optional) —
Configure if each TransformerBlock should contain two self-attention layers
Transformer model for video-like data.
(
hidden_states
encoder_hidden_states = None
timestep = None
class_labels = None
num_frames = 1
cross_attention_kwargs = None
return_dict: bool = True
)
→
~models.transformer_2d.TransformerTemporalModelOutput
or tuple
Parameters
hidden_states (torch.LongTensor of shape (batch size, num latent pixels) when discrete, or torch.FloatTensor of shape (batch size, channel, height, width) when continuous) — Input hidden_states.
torch.LongTensor
of shape (batch size, encoder_hidden_states dim)
, optional) —
Conditional embeddings for cross attention layer. If not given, cross-attention defaults to
self-attention.
torch.long
, optional) —
Optional timestep to be applied as an embedding in AdaLayerNorm’s. Used to indicate denoising step.
torch.LongTensor
of shape (batch size, num classes)
, optional) —
Optional class labels to be applied as an embedding in AdaLayerZeroNorm. Used to indicate class labels
conditioning.
bool
, optional, defaults to True
) —
Whether or not to return a ~models.transformer_2d.TransformerTemporalModelOutput instead of a plain tuple.
Returns
~models.transformer_2d.TransformerTemporalModelOutput
or tuple
~models.transformer_2d.TransformerTemporalModelOutput
if return_dict
is True, otherwise a tuple
.
When returning a tuple, the first element is the sample tensor.
( sample: FloatTensor )
( num_attention_heads: int = 32 attention_head_dim: int = 64 num_layers: int = 20 embedding_dim: int = 768 num_embeddings = 77 additional_embeddings = 4 dropout: float = 0.0 )
Parameters
int
, optional, defaults to 32) — The number of heads to use for multi-head attention.
int
, optional, defaults to 64) — The number of channels in each head.
int
, optional, defaults to 20) — The number of layers of Transformer blocks to use.
int
, optional, defaults to 768) — The dimension of the CLIP embeddings. Note that CLIP
image embeddings and text embeddings are both the same dimension.
int
, optional, defaults to 77) — The max number of clip embeddings allowed. I.e. the
length of the prompt after it has been tokenized.
int
, optional, defaults to 4) — The number of additional tokens appended to the
projected hidden_states. The actual length of the used hidden_states is num_embeddings + additional_embeddings
.
float
, optional, defaults to 0.0) — The dropout probability to use.
The prior transformer from unCLIP is used to predict CLIP image embeddings from CLIP text embeddings. Note that the transformer predicts the image embeddings through a denoising diffusion process.
This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library implements for all the models (such as downloading or saving, etc.)
For more details, see the original paper: https://arxiv.org/abs/2204.06125
(
hidden_states
timestep: typing.Union[torch.Tensor, float, int]
proj_embedding: FloatTensor
encoder_hidden_states: FloatTensor
attention_mask: typing.Optional[torch.BoolTensor] = None
return_dict: bool = True
)
→
PriorTransformerOutput or tuple
Parameters
torch.FloatTensor
of shape (batch_size, embedding_dim)
) —
x_t, the currently predicted image embeddings.
torch.long
) —
Current denoising step.
torch.FloatTensor
of shape (batch_size, embedding_dim)
) —
Projected embedding vector the denoising process is conditioned on.
torch.FloatTensor
of shape (batch_size, num_embeddings, embedding_dim)
) —
Hidden states of the text embeddings the denoising process is conditioned on.
torch.BoolTensor
of shape (batch_size, num_embeddings)
) —
Text mask for the text embeddings.
bool
, optional, defaults to True
) —
Whether or not to return a models.prior_transformer.PriorTransformerOutput instead of a plain
tuple.
Returns
PriorTransformerOutput or tuple
PriorTransformerOutput if return_dict
is True, otherwise a tuple
. When
returning a tuple, the first element is the sample tensor.
( predicted_image_embedding: FloatTensor )
( down_block_res_samples: typing.Tuple[torch.Tensor] mid_block_res_sample: Tensor )
( in_channels: int = 4 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: typing.Optional[int] = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 use_linear_projection: bool = False class_embed_type: typing.Optional[str] = None num_class_embeds: typing.Optional[int] = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' projection_class_embeddings_input_dim: typing.Optional[int] = None controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: typing.Optional[typing.Tuple[int]] = (16, 32, 96, 256) global_pool_conditions: bool = False )
( unet: UNet2DConditionModel controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: typing.Optional[typing.Tuple[int]] = (16, 32, 96, 256) load_weights_from_unet: bool = True )
Instantiate a ControlNetModel from a UNet2DConditionModel.
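A short sketch; the UNet checkpoint id is an example:
>>> from diffusers import ControlNetModel, UNet2DConditionModel

>>> unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
>>> controlnet = ControlNetModel.from_unet(unet)  # copy the matching encoder weights from the UNet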
( slice_size )
Parameters
str
or int
or list(int)
, optional, defaults to "auto"
) —
When "auto"
, halves the input to the attention heads, so attention will be computed in two steps. If
"max"
, maximum amount of memory will be saved by running only one slice at a time. If a number is
provided, uses as many slices as attention_head_dim // slice_size
. In this case, attention_head_dim
must be a multiple of slice_size
.
Enable sliced attention computation.
When this option is enabled, the attention module will split the input tensor in slices, to compute attention in several steps. This is useful to save some memory in exchange for a small speed decrease.
( processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor]]] )
Parameters
dict
of AttentionProcessor
or AttentionProcessor
) —
The instantiated processor class or a dictionary of processor classes that will be set as the processor
of all Attention
layers.
If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.
Disables custom attention processors and sets the default attention implementation.
Base class for all flax models.
FlaxModelMixin takes care of storing the configuration of the models and handles methods for loading, downloading and saving models.
( pretrained_model_name_or_path: typing.Union[str, os.PathLike] dtype: dtype = <class 'jax.numpy.float32'> *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
A string, the repo id of a pretrained model hosted on huggingface.co, e.g. runwayml/stable-diffusion-v1-5.
A path to a directory containing model weights saved using save_pretrained(), e.g. ./my_model_directory/.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32
, jax.numpy.float16
(on GPUs) and
jax.numpy.bfloat16
(on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype
.
Note that this only specifies the dtype of the computation and does not influence the dtype of the model parameters. If you wish to change the dtype of the model parameters, see ~ModelMixin.to_fp16 and ~ModelMixin.to_bf16.
model_args (sequence of positional arguments, optional) — All remaining positional arguments are passed to the underlying model’s __init__ method.
cache_dir (Union[str, os.PathLike], optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (i.e., do not try to download the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file.
kwargs (remaining dictionary of keyword arguments, optional) — Can be used to update the configuration object and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_config()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate a pretrained flax model from a pre-trained model configuration.
The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning task.
The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those weights are discarded.
Examples:
>>> from diffusers import FlaxUNet2DConditionModel
>>> # Download model and configuration from huggingface.co and cache.
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable).
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("./test/saved_model/")
( save_directory: typing.Union[str, os.PathLike] params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] is_main_process: bool = True )
Parameters
save_directory (str or os.PathLike) — Directory to which to save. Will be created if it doesn’t exist.
params (Union[Dict, FrozenDict]) — A PyTree of model parameters.
is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful when in distributed training like TPUs and need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.
Save a model and its configuration file to a directory, so that it can be re-loaded using the
[from_pretrained()](/docs/diffusers/pr_186/en/api/models#diffusers.FlaxModelMixin.from_pretrained)
class method
( params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] mask: typing.Any = None )
Cast the floating-point params
to jax.numpy.bfloat16
. This returns a new params
tree and does not cast
the params
in place.
This method can be used on TPU to explicitly convert the model parameters to bfloat16 precision to do full half-precision training or to save weights in bfloat16 for inference in order to save memory and improve speed.
Examples:
>>> from diffusers import FlaxUNet2DConditionModel
>>> # load model
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> # By default, the model parameters will be in fp32 precision, to cast these to bfloat16 precision
>>> params = model.to_bf16(params)
>>> # If you don't want to cast certain parameters (for example layer norm bias and scale)
>>> # then pass the mask as follows
>>> from flax import traverse_util
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> flat_params = traverse_util.flatten_dict(params)
>>> mask = {
...     path: (path[-2:] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale"))
... for path in flat_params
... }
>>> mask = traverse_util.unflatten_dict(mask)
>>> params = model.to_bf16(params, mask)
( params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] mask: typing.Any = None )
Cast the floating-point params
to jax.numpy.float16
. This returns a new params
tree and does not cast the
params
in place.
This method can be used on GPU to explicitly convert the model parameters to float16 precision to do full half-precision training or to save weights in float16 for inference in order to save memory and improve speed.
Examples:
>>> from diffusers import FlaxUNet2DConditionModel
>>> # load model
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
>>> # By default, the model params are in fp32; cast them to float16
>>> params = model.to_fp16(params)
>>> # If you don't want to cast certain parameters (for example layer norm bias and scale)
>>> # then pass the mask as follows
>>> from flax import traverse_util
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
>>> flat_params = traverse_util.flatten_dict(params)
>>> mask = {
...     path: (path[-2:] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale"))
... for path in flat_params
... }
>>> mask = traverse_util.unflatten_dict(mask)
>>> params = model.to_fp16(params, mask)
( params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] mask: typing.Any = None )
Cast the floating-point params to jax.numpy.float32. This method can be used to explicitly convert the model parameters to fp32 precision. This returns a new params tree and does not cast the params in place.
Examples:
>>> from diffusers import FlaxUNet2DConditionModel
>>> # Download model and configuration from huggingface.co
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
>>> # By default, the model params are in fp32; to illustrate the use of this method,
>>> # we'll first cast to fp16 and then back to fp32
>>> params = model.to_fp16(params)
>>> # now cast back to fp32
>>> params = model.to_fp32(params)
( sample: ndarray )
Returns a new object replacing the specified fields with new values.
( sample_size: int = 32 in_channels: int = 4 out_channels: int = 4 down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') up_block_types: typing.Tuple[str] = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = <class 'jax.numpy.float32'> flip_sin_to_cos: bool = True freq_shift: int = 0 use_memory_efficient_attention: bool = False parent: typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType] = <flax.linen.module._Sentinel object at 0x7f7dbc6df490> name: str = None )
Parameters
sample_size (int, optional) —
The size of the input sample.
in_channels (int, optional, defaults to 4) —
The number of channels in the input sample.
out_channels (int, optional, defaults to 4) —
The number of channels in the output.
down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) —
The tuple of downsample blocks to use. The corresponding class names will be: "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D".
up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) —
The tuple of upsample blocks to use. The corresponding class names will be: "FlaxUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D".
block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) —
The tuple of output channels for each block.
layers_per_block (int, optional, defaults to 2) —
The number of layers per block.
attention_head_dim (int or Tuple[int], optional, defaults to 8) —
The dimension of the attention heads.
cross_attention_dim (int, optional, defaults to 1280) —
The dimension of the cross-attention features.
dropout (float, optional, defaults to 0.0) —
Dropout probability for down, up and bottleneck blocks.
flip_sin_to_cos (bool, optional, defaults to True) —
Whether to flip the sin to cos in the time embedding.
freq_shift (int, optional, defaults to 0) —
The frequency shift to apply to the time embedding.
use_memory_efficient_attention (bool, optional, defaults to False) —
Whether to enable memory-efficient attention as described in https://arxiv.org/abs/2112.05682.
FlaxUNet2DConditionModel is a conditional 2D UNet model that takes a noisy sample, a conditional state, and a timestep, and returns a sample-shaped output.
This model inherits from FlaxModelMixin. Check the superclass documentation for the generic methods the library implements for all models (such as downloading or saving).
Also, this model is a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as just-in-time (JIT) compilation, automatic differentiation, vectorization (vmap), and parallelization (pmap).
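As a rough usage sketch (the checkpoint, dummy input shapes, and the 768-dimensional text embeddings are illustrative assumptions for a Stable Diffusion v1 UNet), the module is applied functionally with its parameter PyTree:
>>> import jax.numpy as jnp
>>> from diffusers import FlaxUNet2DConditionModel

>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
>>> # Dummy inputs: one 4x64x64 latent, one timestep, and 77 conditioning tokens of width 768
>>> sample = jnp.zeros((1, 4, 64, 64), dtype=jnp.float32)
>>> timesteps = jnp.array([10], dtype=jnp.int32)
>>> encoder_hidden_states = jnp.zeros((1, 77, 768), dtype=jnp.float32)
>>> out = model.apply({"params": params}, sample, timesteps, encoder_hidden_states)
>>> out.sample.shape
(1, 4, 64, 64)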
( sample: ndarray )
Output of decoding method.
Returns a new object replacing the specified fields with new values.
( latent_dist: FlaxDiagonalGaussianDistribution )
Output of AutoencoderKL encoding method.
Returns a new object replacing the specified fields with new values.
( in_channels: int = 3 out_channels: int = 3 down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) block_out_channels: typing.Tuple[int] = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 dtype: dtype = <class 'jax.numpy.float32'> parent: typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType] = <flax.linen.module._Sentinel object at 0x7f7dbc6df490> name: str = None )
Parameters
in_channels (int, optional, defaults to 3) —
Number of channels in the input sample.
out_channels (int, optional, defaults to 3) —
Number of channels in the output.
down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) —
Tuple of downsample (encoder) block types.
up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) —
Tuple of upsample (decoder) block types.
block_out_channels (Tuple[int], optional, defaults to (64,)) —
Tuple containing the number of output channels for each block.
layers_per_block (int, optional, defaults to 1) —
Number of ResNet layers for each block.
act_fn (str, optional, defaults to "silu") —
Activation function to use.
latent_channels (int, optional, defaults to 4) —
Number of channels in the latent space.
norm_num_groups (int, optional, defaults to 32) —
Number of groups for group normalization.
sample_size (int, optional, defaults to 32) —
Sample input size.
dtype (jnp.dtype, optional, defaults to jnp.float32) —
The dtype of the parameters.
Flax implementation of a Variational Autoencoder (VAE) model with KL loss, from the paper Auto-Encoding Variational Bayes by Diederik P. Kingma and Max Welling.
This model is a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as just-in-time (JIT) compilation, automatic differentiation, vectorization (vmap), and parallelization (pmap).
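A rough round-trip sketch (the checkpoint, subfolder, and input shape are illustrative assumptions); encode and decode are invoked through apply with the method argument:
>>> import jax
>>> import jax.numpy as jnp
>>> from diffusers import FlaxAutoencoderKL

>>> vae, vae_params = FlaxAutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
>>> images = jnp.zeros((1, 3, 512, 512), dtype=jnp.float32)  # dummy image batch in NCHW layout
>>> # Encoding returns a FlaxAutoencoderKLOutput; sample from its latent_dist with a PRNG key
>>> posterior = vae.apply({"params": vae_params}, images, method=vae.encode)
>>> latents = posterior.latent_dist.sample(jax.random.PRNGKey(0))  # shape (1, 64, 64, 4)
>>> # Decoding returns a FlaxDecoderOutput with the reconstructed sample back in NCHW layout
>>> reconstruction = vae.apply({"params": vae_params}, latents, method=vae.decode).sample
>>> reconstruction.shape
(1, 3, 512, 512)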
( down_block_res_samples: ndarray mid_block_res_sample: ndarray )
Returns a new object replacing the specified fields with new values.
( sample_size: int = 32 in_channels: int = 4 down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = <class 'jax.numpy.float32'> flip_sin_to_cos: bool = True freq_shift: int = 0 controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: typing.Tuple[int] = (16, 32, 96, 256) parent: typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType] = <flax.linen.module._Sentinel object at 0x7f7dbc6df490> name: str = None )
Parameters
sample_size (int, optional) —
The size of the input sample.
in_channels (int, optional, defaults to 4) —
The number of channels in the input sample.
down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) —
The tuple of downsample blocks to use. The corresponding class names will be: "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D".
block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) —
The tuple of output channels for each block.
layers_per_block (int, optional, defaults to 2) —
The number of layers per block.
attention_head_dim (int or Tuple[int], optional, defaults to 8) —
The dimension of the attention heads.
cross_attention_dim (int, optional, defaults to 1280) —
The dimension of the cross-attention features.
dropout (float, optional, defaults to 0.0) —
Dropout probability for down, up and bottleneck blocks.
flip_sin_to_cos (bool, optional, defaults to True) —
Whether to flip the sin to cos in the time embedding.
freq_shift (int, optional, defaults to 0) —
The frequency shift to apply to the time embedding.
controlnet_conditioning_channel_order (str, optional, defaults to "rgb") —
The channel order of the conditioning image. Will be converted to "rgb" if it is "bgr".
conditioning_embedding_out_channels (tuple, optional, defaults to (16, 32, 96, 256)) —
The tuple of output channels for each block in the conditioning_embedding layer.
Quoting from https://arxiv.org/abs/2302.05543: “Stable Diffusion uses a pre-processing method similar to VQ-GAN [11] to convert the entire dataset of 512 × 512 images into smaller 64 × 64 “latent images” for stabilized training. This requires ControlNets to convert image-based conditions to 64 × 64 feature space to match the convolution size. We use a tiny network E(·) of four convolution layers with 4 × 4 kernels and 2 × 2 strides (activated by ReLU, channels are 16, 32, 64, 128, initialized with Gaussian weights, trained jointly with the full model) to encode image-space conditions … into feature maps …”
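A minimal Flax sketch of such an image-condition encoder is shown below; the kernel sizes, stride schedule, and activation are illustrative assumptions that achieve the 8× spatial reduction (512 × 512 to 64 × 64) and do not reproduce the exact layout used in diffusers, whose block widths follow conditioning_embedding_out_channels above.
import jax
import jax.numpy as jnp
import flax.linen as nn

class ConditioningEncoderSketch(nn.Module):
    """Illustrative encoder mapping an image-space condition to the 8x-downsampled latent resolution."""

    channels: tuple = (16, 32, 96, 256)  # mirrors the default conditioning_embedding_out_channels

    @nn.compact
    def __call__(self, x):
        # x: (batch, height, width, 3) conditioning image in NHWC layout
        x = nn.relu(nn.Conv(self.channels[0], kernel_size=(3, 3))(x))
        # three stride-2 convolutions give the 8x spatial reduction (512x512 -> 64x64)
        for ch in self.channels[1:]:
            x = nn.relu(nn.Conv(ch, kernel_size=(3, 3), strides=(2, 2))(x))
        return x

# quick shape check
module = ConditioningEncoderSketch()
variables = module.init(jax.random.PRNGKey(0), jnp.zeros((1, 512, 512, 3)))
print(module.apply(variables, jnp.zeros((1, 512, 512, 3))).shape)  # (1, 64, 64, 256)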
This model inherits from FlaxModelMixin. Check the superclass documentation for the generic methods the library implements for all models (such as downloading or saving).
Also, this model is a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as just-in-time (JIT) compilation, automatic differentiation, vectorization (vmap), and parallelization (pmap).
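A rough usage sketch follows (the lllyasviel/sd-controlnet-canny checkpoint, the from_pt conversion, and the dummy shapes are illustrative assumptions); with return_dict=False the call yields the down-block and mid-block residuals:
>>> import jax.numpy as jnp
>>> from diffusers import FlaxControlNetModel

>>> # from_pt=True converts PyTorch weights on the fly; a native Flax checkpoint would not need it
>>> controlnet, controlnet_params = FlaxControlNetModel.from_pretrained(
...     "lllyasviel/sd-controlnet-canny", from_pt=True, dtype=jnp.float32
... )
>>> latents = jnp.zeros((1, 4, 64, 64), dtype=jnp.float32)
>>> timesteps = jnp.array([10], dtype=jnp.int32)
>>> encoder_hidden_states = jnp.zeros((1, 77, 768), dtype=jnp.float32)
>>> cond_image = jnp.zeros((1, 3, 512, 512), dtype=jnp.float32)  # e.g. a Canny edge map
>>> down_res, mid_res = controlnet.apply(
...     {"params": controlnet_params},
...     latents,
...     timesteps,
...     encoder_hidden_states=encoder_hidden_states,
...     controlnet_cond=cond_image,
...     conditioning_scale=1.0,
...     return_dict=False,
... )
These residuals are then passed to FlaxUNet2DConditionModel (via its down_block_additional_residuals and mid_block_additional_residual arguments) inside the Flax Stable Diffusion ControlNet pipeline.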