The variational autoencoder (VAE) model with KL loss was introduced in Auto-Encoding Variational Bayes by Diederik P. Kingma and Max Welling. The model is used in đŸ¤— Diffusers to encode images into latents and to decode latent representations into images.
The abstract from the paper is:
How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.
By default the AutoencoderKL should be loaded with `from_pretrained()`, but it can also be loaded from the original format using `FromOriginalVAEMixin.from_single_file` as follows:
```python
from diffusers import AutoencoderKL

url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors"  # can also be a local file
model = AutoencoderKL.from_single_file(url)
```
```python
class AutoencoderKL(
    in_channels: int = 3,
    out_channels: int = 3,
    down_block_types: Tuple[str] = ("DownEncoderBlock2D",),
    up_block_types: Tuple[str] = ("UpDecoderBlock2D",),
    block_out_channels: Tuple[int] = (64,),
    layers_per_block: int = 1,
    act_fn: str = "silu",
    latent_channels: int = 4,
    norm_num_groups: int = 32,
    sample_size: int = 32,
    scaling_factor: float = 0.18215,
    force_upcast: bool = True,
)
```
Parameters

- **in_channels** (`int`, *optional*, defaults to 3) — Number of channels in the input image.
- **out_channels** (`int`, *optional*, defaults to 3) — Number of channels in the output.
- **down_block_types** (`Tuple[str]`, *optional*, defaults to `("DownEncoderBlock2D",)`) — Tuple of downsample block types.
- **up_block_types** (`Tuple[str]`, *optional*, defaults to `("UpDecoderBlock2D",)`) — Tuple of upsample block types.
- **block_out_channels** (`Tuple[int]`, *optional*, defaults to `(64,)`) — Tuple of block output channels.
- **layers_per_block** (`int`, *optional*, defaults to 1) — Number of ResNet layers for each block.
- **act_fn** (`str`, *optional*, defaults to `"silu"`) — The activation function to use.
- **latent_channels** (`int`, *optional*, defaults to 4) — Number of channels in the latent space.
- **norm_num_groups** (`int`, *optional*, defaults to 32) — The number of groups for normalization.
- **sample_size** (`int`, *optional*, defaults to 32) — Sample input size.
- **scaling_factor** (`float`, *optional*, defaults to 0.18215) — The component-wise standard deviation of the trained latent space, computed using the first batch of the training set. It is used to scale the latent space to have unit variance when training the diffusion model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the diffusion model. When decoding, the latents are scaled back to the original scale with the formula `z = 1 / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image Synthesis with Latent Diffusion Models paper.
- **force_upcast** (`bool`, *optional*, defaults to `True`) — If enabled, the VAE is forced to run in float32 for high-resolution pipelines such as SD-XL. The VAE can be fine-tuned or trained to a lower range without losing too much precision, in which case `force_upcast` can be set to `False` (see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix).

A VAE model with KL loss for encoding images into latents and decoding latent representations into images.
This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).
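The `scaling_factor` round trip described above can be sketched with plain tensors; the value 0.18215 is the default from the signature above, and the tensor shapes are purely illustrative:

```python
import torch

scaling_factor = 0.18215  # default from the AutoencoderKL signature

# Stand-in for latents from vae.encode(image).latent_dist.sample()
latents = torch.randn(1, 4, 32, 32)

# Scale to roughly unit variance before handing off to the diffusion model
scaled = latents * scaling_factor

# Undo the scaling before vae.decode(...)
restored = scaled / scaling_factor

assert torch.allclose(latents, restored, atol=1e-6)
```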
`disable_slicing()` — Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing decoding in one step.

`disable_tiling()` — Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing decoding in one step.

`enable_slicing()` — Enable sliced VAE decoding. When this option is enabled, the VAE splits the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

`enable_tiling()` — Enable tiled VAE decoding. When this option is enabled, the VAE splits the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and for processing larger images.
```python
forward(
    sample: torch.FloatTensor,
    sample_posterior: bool = False,
    return_dict: bool = True,
    generator: Optional[torch.Generator] = None,
)
```
```python
set_attn_processor(
    processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]],
    _remove_lora: bool = False,
)
```

Here `AttentionProcessor` stands for any of the processor classes defined in `diffusers.models.attention_processor` (e.g. `AttnProcessor`, `AttnProcessor2_0`, `XFormersAttnProcessor`, `SlicedAttnProcessor`, and the added-KV, Custom Diffusion, and LoRA variants).
Parameters

- **processor** (`dict` of `AttentionProcessor` or only `AttentionProcessor`) — The instantiated processor class, or a dictionary of processor classes that will be set as the processor for **all** `Attention` layers. If `processor` is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.
`set_default_attn_processor()` — Disables custom attention processors and sets the default attention implementation.
`tiled_decode(z: torch.FloatTensor, return_dict: bool = True)` → `DecoderOutput` or `tuple`

Parameters

- **z** (`torch.FloatTensor`) — Input batch of latent vectors.
- **return_dict** (`bool`, *optional*, defaults to `True`) — Whether or not to return a `DecoderOutput` instead of a plain tuple.

Returns: `DecoderOutput` or `tuple` — If `return_dict` is `True`, a `DecoderOutput` is returned, otherwise a plain `tuple` is returned.

Decode a batch of images using a tiled decoder.
`tiled_encode(x: torch.FloatTensor, return_dict: bool = True)` → `AutoencoderKLOutput` or `tuple`

Parameters

- **x** (`torch.FloatTensor`) — Input batch of images.
- **return_dict** (`bool`, *optional*, defaults to `True`) — Whether or not to return an `AutoencoderKLOutput` instead of a plain tuple.

Returns: `AutoencoderKLOutput` or `tuple` — If `return_dict` is `True`, an `AutoencoderKLOutput` is returned, otherwise a plain `tuple` is returned.

Encode a batch of images using a tiled encoder.
When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the output, but they should be much less noticeable.
`AutoencoderKLOutput(latent_dist: DiagonalGaussianDistribution)` — Output of the AutoencoderKL encoding method.
`DecoderOutput(sample: torch.FloatTensor)` — Output of the decoding method.
```python
class FlaxAutoencoderKL(
    in_channels: int = 3,
    out_channels: int = 3,
    down_block_types: Tuple[str] = ("DownEncoderBlock2D",),
    up_block_types: Tuple[str] = ("UpDecoderBlock2D",),
    block_out_channels: Tuple[int] = (64,),
    layers_per_block: int = 1,
    act_fn: str = "silu",
    latent_channels: int = 4,
    norm_num_groups: int = 32,
    sample_size: int = 32,
    scaling_factor: float = 0.18215,
    dtype: jnp.dtype = jnp.float32,
)
```
Parameters

- **in_channels** (`int`, *optional*, defaults to 3) — Number of channels in the input image.
- **out_channels** (`int`, *optional*, defaults to 3) — Number of channels in the output.
- **down_block_types** (`Tuple[str]`, *optional*, defaults to `("DownEncoderBlock2D",)`) — Tuple of downsample block types.
- **up_block_types** (`Tuple[str]`, *optional*, defaults to `("UpDecoderBlock2D",)`) — Tuple of upsample block types.
- **block_out_channels** (`Tuple[int]`, *optional*, defaults to `(64,)`) — Tuple of block output channels.
- **layers_per_block** (`int`, *optional*, defaults to 1) — Number of ResNet layers for each block.
- **act_fn** (`str`, *optional*, defaults to `"silu"`) — The activation function to use.
- **latent_channels** (`int`, *optional*, defaults to 4) — Number of channels in the latent space.
- **norm_num_groups** (`int`, *optional*, defaults to 32) — The number of groups for normalization.
- **sample_size** (`int`, *optional*, defaults to 32) — Sample input size.
- **scaling_factor** (`float`, *optional*, defaults to 0.18215) — The component-wise standard deviation of the trained latent space, computed using the first batch of the training set. It is used to scale the latent space to have unit variance when training the diffusion model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the diffusion model. When decoding, the latents are scaled back to the original scale with the formula `z = 1 / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image Synthesis with Latent Diffusion Models paper.
- **dtype** (`jnp.dtype`, *optional*, defaults to `jnp.float32`) — The `dtype` of the parameters.

Flax implementation of a VAE model with KL loss for decoding latent representations.
This model inherits from FlaxModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).

This model is also a Flax Linen `flax.linen.Module` subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its general usage and behavior.
Inherent JAX features such as just-in-time (JIT) compilation, automatic differentiation, vectorization, and parallelization are supported.
`FlaxAutoencoderKLOutput(latent_dist: FlaxDiagonalGaussianDistribution)` — Output of the AutoencoderKL encoding method.

`replace(**updates)` — Returns a new object replacing the specified fields with new values.
`FlaxDecoderOutput(sample: Array)` — Output of the decoding method.

`replace(**updates)` — Returns a new object replacing the specified fields with new values.