Diffusers documentation
AutoencoderKLHunyuanVideo15
The 3D variational autoencoder (VAE) model with KL loss used in HunyuanVideo-1.5 by Tencent.
The model can be loaded with the following code snippet.
import torch
from diffusers import AutoencoderKLHunyuanVideo15

vae = AutoencoderKLHunyuanVideo15.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo-1.5-Diffusers-480p_t2v",
    subfolder="vae",
    torch_dtype=torch.float32,
)
# make sure to enable tiling to avoid OOM
vae.enable_tiling()

AutoencoderKLHunyuanVideo15
class diffusers.AutoencoderKLHunyuanVideo15
< source >( in_channels: int = 3 out_channels: int = 3 latent_channels: int = 32 block_out_channels: typing.Tuple[int] = (128, 256, 512, 1024, 1024) layers_per_block: int = 2 spatial_compression_ratio: int = 16 temporal_compression_ratio: int = 4 downsample_match_channel: bool = True upsample_match_channel: bool = True scaling_factor: float = 1.03682 )
A VAE model with KL loss for encoding videos into latents and decoding latent representations into videos. Used for HunyuanVideo-1.5.
This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).
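A hedged end-to-end sketch of the encode/decode roundtrip. It assumes the standard diffusers VAE API surface (encode(...).latent_dist, decode(...).sample) and illustrative shapes: with latent_channels=32, spatial_compression_ratio=16, and temporal_compression_ratio=4 from the configuration above, a 9-frame 128x128 clip should map to a 3-frame 8x8 latent under the usual causal 1 + (frames - 1) / 4 convention.

import torch
from diffusers import AutoencoderKLHunyuanVideo15

vae = AutoencoderKLHunyuanVideo15.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo-1.5-Diffusers-480p_t2v",
    subfolder="vae",
    torch_dtype=torch.float32,
)
vae.enable_tiling()

# Illustrative video batch: (batch, channels, frames, height, width).
video = torch.randn(1, 3, 9, 128, 128)

with torch.no_grad():
    # Assumed standard API: encode() returns an object exposing latent_dist.
    latents = vae.encode(video).latent_dist.sample()
    # Assumed standard API: decode() returns a DecoderOutput with .sample.
    reconstruction = vae.decode(latents).sample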
disable_slicing

< source >( )

Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_tiling

< source >( )

Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_slicing

< source >( )

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
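A minimal usage sketch for slicing, reusing the vae instance from the loading snippet at the top; the latent shape is illustrative, and decode(...).sample assumes the standard diffusers DecoderOutput return.

# Decode one sample at a time to bound peak memory for batched inputs.
vae.enable_slicing()
latents = torch.randn(2, 32, 3, 8, 8)  # illustrative (batch, channels, frames, height, width) latents
with torch.no_grad():
    videos = vae.decode(latents).sample
# Restore single-step decoding once memory pressure is gone.
vae.disable_slicing()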
enable_tiling
< source >( tile_sample_min_height: typing.Optional[int] = None tile_sample_min_width: typing.Optional[int] = None tile_latent_min_height: typing.Optional[int] = None tile_latent_min_width: typing.Optional[int] = None tile_overlap_factor: typing.Optional[float] = None )
Parameters
- tile_sample_min_height (int, optional) — The minimum height required for a sample to be separated into tiles across the height dimension.
- tile_sample_min_width (int, optional) — The minimum width required for a sample to be separated into tiles across the width dimension.
- tile_latent_min_height (int, optional) — The minimum height required for a latent to be separated into tiles across the height dimension.
- tile_latent_min_width (int, optional) — The minimum width required for a latent to be separated into tiles across the width dimension.
Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding and decoding in several steps. This is useful for saving a large amount of memory and for processing larger videos.
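A short sketch of tuning the tile thresholds on the vae loaded above; the values are illustrative, not recommended defaults, and unspecified arguments fall back to the model's internal defaults.

# Smaller tiles trade speed for lower peak memory; values are illustrative.
vae.enable_tiling(
    tile_sample_min_height=256,
    tile_sample_min_width=256,
)
# Revert to whole-frame processing when memory allows.
vae.disable_tiling()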
forward
< source >( sample: Tensor sample_posterior: bool = False return_dict: bool = True generator: typing.Optional[torch._C.Generator] = None )
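The forward entry carries no prose above, so as a hedged note: in this VAE family, forward typically encodes sample, draws latents from the posterior when sample_posterior=True (optionally seeded via generator), and decodes them back. A sketch under that assumption, with an illustrative input shape:

video = torch.randn(1, 3, 9, 128, 128)  # illustrative (batch, channels, frames, height, width)
with torch.no_grad():
    # Assumes forward returns a DecoderOutput when return_dict=True (the default).
    out = vae(video, sample_posterior=True, generator=torch.Generator().manual_seed(0))
reconstruction = out.sample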
tiled_decode
< source >( z: Tensor return_dict: bool = True ) → ~models.vae.DecoderOutput or tuple
Parameters
- z (torch.Tensor) — Input batch of latent vectors.
- return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.vae.DecoderOutput instead of a plain tuple.
Returns
~models.vae.DecoderOutput or tuple
If return_dict is True, a ~models.vae.DecoderOutput is returned, otherwise a plain tuple is returned.
Decode a batch of videos using a tiled decoder.
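A minimal sketch, assuming a (batch, latent_channels, frames, height, width) latent layout and the vae loaded above; the shape is illustrative.

latents = torch.randn(1, 32, 3, 8, 8)
with torch.no_grad():
    # return_dict=True (the default) yields a DecoderOutput with .sample.
    video = vae.tiled_decode(latents).sample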
tiled_encode
< source >( x: Tensor ) → torch.Tensor
Encode a batch of videos using a tiled encoder.
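Per the signature above, tiled_encode returns a raw torch.Tensor rather than a distribution wrapper; in practice it is usually reached through encode() after enable_tiling() has been called. A minimal direct-call sketch with an illustrative shape:

video = torch.randn(1, 3, 9, 128, 128)
with torch.no_grad():
    enc = vae.tiled_encode(video)  # raw encoder output tensor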
DecoderOutput
class diffusers.models.autoencoders.vae.DecoderOutput
< source >( sample: Tensor commit_loss: typing.Optional[torch.FloatTensor] = None )
Parameters
- sample (torch.Tensor) — The decoded output sample from the last layer of the model.
- commit_loss (torch.FloatTensor, optional) — The commitment loss, when the underlying autoencoder reports one.

Output of the decoding method.