AutoencoderDC
The 2D autoencoder model used in SANA, introduced in DC-AE by Junyu Chen*, Han Cai*, Junsong Chen, Enze Xie, Shang Yang, Haotian Tang, Muyang Li, Yao Lu, and Song Han from MIT HAN Lab.
The abstract from the paper is:
We present Deep Compression Autoencoder (DC-AE), a new family of autoencoder models for accelerating high-resolution diffusion models. Existing autoencoder models have demonstrated impressive results at a moderate spatial compression ratio (e.g., 8x), but fail to maintain satisfactory reconstruction accuracy for high spatial compression ratios (e.g., 64x). We address this challenge by introducing two key techniques: (1) Residual Autoencoding, where we design our models to learn residuals based on the space-to-channel transformed features to alleviate the optimization difficulty of high spatial-compression autoencoders; (2) Decoupled High-Resolution Adaptation, an efficient decoupled three-phases training strategy for mitigating the generalization penalty of high spatial-compression autoencoders. With these designs, we improve the autoencoder’s spatial compression ratio up to 128 while maintaining the reconstruction quality. Applying our DC-AE to latent diffusion models, we achieve significant speedup without accuracy drop. For example, on ImageNet 512x512, our DC-AE provides 19.1x inference speedup and 17.9x training speedup on H100 GPU for UViT-H while achieving a better FID, compared with the widely used SD-VAE-f8 autoencoder. Our code is available at this https URL.
A number of DC-AE checkpoints from the mit-han-lab organization are released and supported in Diffusers.
This model was contributed by lawrence-cj.
Load a model in the Diffusers format with `from_pretrained()`.
```python
import torch
from diffusers import AutoencoderDC

ae = AutoencoderDC.from_pretrained("mit-han-lab/dc-ae-f32c32-sana-1.0-diffusers", torch_dtype=torch.float32).to("cuda")
```
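With the model loaded, a round trip through the autoencoder looks like the minimal sketch below. The random input is only a stand-in for a real preprocessed image tensor, and the latent shape assumes the f32c32 checkpoint (32x spatial compression, 32 latent channels); the `.latent` and `.sample` attributes follow the usual Diffusers output convention.

```python
import torch

# Placeholder input; in practice, pass a preprocessed image tensor scaled to [-1, 1].
x = torch.randn(1, 3, 512, 512, device="cuda")

with torch.no_grad():
    latent = ae.encode(x).latent               # (1, 32, 16, 16) for f32c32 at 512x512
    reconstruction = ae.decode(latent).sample  # (1, 3, 512, 512)
```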
Load a single-file checkpoint with `from_single_file()`.
```python
from diffusers import AutoencoderDC

ckpt_path = "https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.0/blob/main/model.safetensors"
model = AutoencoderDC.from_single_file(ckpt_path)
```
The AutoencoderDC model has `in` and `mix` single file checkpoint variants that have matching checkpoint keys but use different scaling factors. Diffusers cannot automatically infer the correct config to use from the checkpoint alone, so it defaults to configuring the model with the `mix` variant config file. To override the automatically determined config, pass the `config` argument to `from_single_file()` when loading an `in` variant checkpoint.
```python
from diffusers import AutoencoderDC

ckpt_path = "https://huggingface.co/mit-han-lab/dc-ae-f128c512-in-1.0/blob/main/model.safetensors"
model = AutoencoderDC.from_single_file(ckpt_path, config="mit-han-lab/dc-ae-f128c512-in-1.0-diffusers")
```
AutoencoderDC
class diffusers.AutoencoderDC
( in_channels: int = 3 latent_channels: int = 32 attention_head_dim: int = 32 encoder_block_types: typing.Union[str, typing.Tuple[str]] = 'ResBlock' decoder_block_types: typing.Union[str, typing.Tuple[str]] = 'ResBlock' encoder_block_out_channels: typing.Tuple[int, ...] = (128, 256, 512, 512, 1024, 1024) decoder_block_out_channels: typing.Tuple[int, ...] = (128, 256, 512, 512, 1024, 1024) encoder_layers_per_block: typing.Tuple[int] = (2, 2, 2, 3, 3, 3) decoder_layers_per_block: typing.Tuple[int] = (3, 3, 3, 3, 3, 3) encoder_qkv_multiscales: typing.Tuple[typing.Tuple[int, ...], ...] = ((), (), (), (5,), (5,), (5,)) decoder_qkv_multiscales: typing.Tuple[typing.Tuple[int, ...], ...] = ((), (), (), (5,), (5,), (5,)) upsample_block_type: str = 'pixel_shuffle' downsample_block_type: str = 'pixel_unshuffle' decoder_norm_types: typing.Union[str, typing.Tuple[str]] = 'rms_norm' decoder_act_fns: typing.Union[str, typing.Tuple[str]] = 'silu' scaling_factor: float = 1.0 )
Parameters
- in_channels (`int`, defaults to `3`) — The number of input channels in samples.
- latent_channels (`int`, defaults to `32`) — The number of channels in the latent space representation.
- encoder_block_types (`Union[str, Tuple[str]]`, defaults to `"ResBlock"`) — The type(s) of block to use in the encoder.
- decoder_block_types (`Union[str, Tuple[str]]`, defaults to `"ResBlock"`) — The type(s) of block to use in the decoder.
- encoder_block_out_channels (`Tuple[int, ...]`, defaults to `(128, 256, 512, 512, 1024, 1024)`) — The number of output channels for each block in the encoder.
- decoder_block_out_channels (`Tuple[int, ...]`, defaults to `(128, 256, 512, 512, 1024, 1024)`) — The number of output channels for each block in the decoder.
- encoder_layers_per_block (`Tuple[int]`, defaults to `(2, 2, 2, 3, 3, 3)`) — The number of layers per block in the encoder.
- decoder_layers_per_block (`Tuple[int]`, defaults to `(3, 3, 3, 3, 3, 3)`) — The number of layers per block in the decoder.
- encoder_qkv_multiscales (`Tuple[Tuple[int, ...], ...]`, defaults to `((), (), (), (5,), (5,), (5,))`) — Multi-scale configurations for the encoder's QKV (query-key-value) transformations.
- decoder_qkv_multiscales (`Tuple[Tuple[int, ...], ...]`, defaults to `((), (), (), (5,), (5,), (5,))`) — Multi-scale configurations for the decoder's QKV (query-key-value) transformations.
- upsample_block_type (`str`, defaults to `"pixel_shuffle"`) — The type of block to use for upsampling in the decoder.
- downsample_block_type (`str`, defaults to `"pixel_unshuffle"`) — The type of block to use for downsampling in the encoder.
- decoder_norm_types (`Union[str, Tuple[str]]`, defaults to `"rms_norm"`) — The normalization type(s) to use in the decoder.
- decoder_act_fns (`Union[str, Tuple[str]]`, defaults to `"silu"`) — The activation function(s) to use in the decoder.
- scaling_factor (`float`, defaults to `1.0`) — The multiplicative inverse of the root mean square of the latent features. This is used to scale the latent space to have unit variance when training the diffusion model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the diffusion model. When decoding, the latents are scaled back to the original scale with the formula `z = 1 / scaling_factor * z`.
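To make the scaling concrete, here is a minimal sketch of where the two formulas apply in a typical latent diffusion loop; it assumes a loaded model `ae` as above and reads the configured value from `ae.config.scaling_factor`.

```python
import torch

x = torch.randn(1, 3, 512, 512, device="cuda")  # placeholder image tensor

with torch.no_grad():
    # z = z * scaling_factor: normalize latents before the diffusion model sees them.
    latent = ae.encode(x).latent * ae.config.scaling_factor

    # ... the diffusion model trains or samples on `latent` here ...

    # z = 1 / scaling_factor * z: undo the scaling before decoding.
    image = ae.decode(latent / ae.config.scaling_factor).sample
```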
An autoencoder model introduced in DC-AE and used in SANA.
This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).
disable_slicing

Disable sliced AE decoding. If `enable_slicing` was previously enabled, this method will go back to computing decoding in one step.
disable_tiling

Disable tiled AE decoding. If `enable_tiling` was previously enabled, this method will go back to computing decoding in one step.
enable_slicing

Enable sliced AE decoding. When this option is enabled, the AE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
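Slicing is a simple toggle; a rough sketch, assuming a loaded model `ae` and a placeholder batch of latents:

```python
import torch

latents = torch.randn(4, 32, 16, 16, device="cuda")  # placeholder latent batch

ae.enable_slicing()                   # decode the batch one sample at a time
decoded = ae.decode(latents).sample
ae.disable_slicing()                  # restore single-step decoding
```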
enable_tiling
( tile_sample_min_height: typing.Optional[int] = None tile_sample_min_width: typing.Optional[int] = None tile_sample_stride_height: typing.Optional[float] = None tile_sample_stride_width: typing.Optional[float] = None )
Parameters
- tile_sample_min_height (`int`, optional) — The minimum height required for a sample to be separated into tiles across the height dimension.
- tile_sample_min_width (`int`, optional) — The minimum width required for a sample to be separated into tiles across the width dimension.
- tile_sample_stride_height (`int`, optional) — The minimum amount of overlap between two consecutive vertical tiles. This is to ensure that there are no tiling artifacts produced across the height dimension.
- tile_sample_stride_width (`int`, optional) — The stride between two consecutive horizontal tiles. This is to ensure that there are no tiling artifacts produced across the width dimension.
Enable tiled AE decoding. When this option is enabled, the AE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.
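The sketch below shows tiled decoding of a large input; the tile sizes and strides are illustrative values rather than tuned defaults, and a loaded model `ae` is assumed.

```python
import torch

# Illustrative tile sizes/strides; choose values suited to your model and memory budget.
ae.enable_tiling(
    tile_sample_min_height=512,
    tile_sample_min_width=512,
    tile_sample_stride_height=448,
    tile_sample_stride_width=448,
)

big_image = torch.randn(1, 3, 2048, 2048, device="cuda")  # placeholder high-resolution input
with torch.no_grad():
    reconstruction = ae.decode(ae.encode(big_image).latent).sample

ae.disable_tiling()
```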
DecoderOutput
class diffusers.models.autoencoders.vae.DecoderOutput
( sample: Tensor commit_loss: typing.Optional[torch.FloatTensor] = None )
Output of the decoding method.
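As with other Diffusers model outputs, the decoded tensor is exposed on the `sample` attribute, and passing `return_dict=False` returns a plain tuple instead; a brief sketch, assuming a latent from an earlier encode call:

```python
out = ae.decode(latent)                          # DecoderOutput by default
image = out.sample                               # decoded image tensor
(image,) = ae.decode(latent, return_dict=False)  # tuple variant
```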