|
# Tiny AutoEncoder

Tiny AutoEncoder for Stable Diffusion (TAESD) was introduced in [madebyollin/taesd](https://github.com/madebyollin/taesd) by Ollin Boer Bohan. It is a tiny distilled version of Stable Diffusion's VAE that can decode the latents in a `StableDiffusionPipeline` or `StableDiffusionXLPipeline` almost instantly.

To use with Stable Diffusion v2.1:

```python
import torch
from diffusers import DiffusionPipeline, AutoencoderTiny

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "slice of delicious New York-style berry cheesecake"
image = pipe(prompt, num_inference_steps=25).images[0]
image
```

To use with Stable Diffusion XL 1.0:
|
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderTiny

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "slice of delicious New York-style berry cheesecake"
image = pipe(prompt, num_inference_steps=25).images[0]
image
```
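Because TAESD decoding is nearly free, a common use is previewing intermediate latents while the pipeline is still sampling. The sketch below continues from the snippet above and is illustrative only: it assumes a diffusers version that supports the `callback_on_step_end` pipeline argument, and the `preview` callback name is hypothetical.

```python
import torch

# Hypothetical preview callback: decode the current latents with the tiny VAE
# at every step. With TAESD, scaling_factor is 1.0, so no unscaling is needed.
def preview(pipe, step, timestep, callback_kwargs):
    latents = callback_kwargs["latents"]
    with torch.no_grad():
        decoded = pipe.vae.decode(latents).sample
    # `decoded` can now be clamped/normalized and written out for inspection.
    return callback_kwargs

image = pipe(prompt, num_inference_steps=25, callback_on_step_end=preview).images[0]
```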
|
## AutoencoderTiny

### class diffusers.AutoencoderTiny

```python
class diffusers.AutoencoderTiny(
    in_channels: int = 3,
    out_channels: int = 3,
    encoder_block_out_channels: Tuple = (64, 64, 64, 64),
    decoder_block_out_channels: Tuple = (64, 64, 64, 64),
    act_fn: str = 'relu',
    latent_channels: int = 4,
    upsampling_scaling_factor: int = 2,
    num_encoder_blocks: Tuple = (1, 3, 3, 3),
    num_decoder_blocks: Tuple = (3, 3, 3, 1),
    latent_magnitude: int = 3,
    latent_shift: float = 0.5,
    force_upcast: bool = False,
    scaling_factor: float = 1.0,
)
```

**Parameters**

- **in_channels** (`int`, *optional*, defaults to 3) — Number of channels in the input image.
- **out_channels** (`int`, *optional*, defaults to 3) — Number of channels in the output.
- **encoder_block_out_channels** (`Tuple[int]`, *optional*, defaults to `(64, 64, 64, 64)`) — Number of output channels for each encoder block. The length of the tuple should equal the number of encoder blocks.
- **decoder_block_out_channels** (`Tuple[int]`, *optional*, defaults to `(64, 64, 64, 64)`) — Number of output channels for each decoder block. The length of the tuple should equal the number of decoder blocks.
- **act_fn** (`str`, *optional*, defaults to `"relu"`) — Activation function used throughout the model.
- **latent_channels** (`int`, *optional*, defaults to 4) — Number of channels in the latent representation. The latent space acts as a compressed representation of the input image.
- **upsampling_scaling_factor** (`int`, *optional*, defaults to 2) — Scaling factor for upsampling in the decoder. It determines the size of the output image during the upsampling process.
- **num_encoder_blocks** (`Tuple[int]`, *optional*, defaults to `(1, 3, 3, 3)`) — Number of encoder blocks at each stage of the encoding process. The length of the tuple should equal the number of stages in the encoder.
- **num_decoder_blocks** (`Tuple[int]`, *optional*, defaults to `(3, 3, 3, 1)`) — Number of decoder blocks at each stage of the decoding process. The length of the tuple should equal the number of stages in the decoder.
- **latent_magnitude** (`int`, *optional*, defaults to 3) — Magnitude of the latent representation. This parameter scales the latent representation values to control the extent of information preservation.
- **latent_shift** (`float`, *optional*, defaults to 0.5) — Shift applied to the latent representation. This parameter controls the center of the latent space.
- **scaling_factor** (`float`, *optional*, defaults to 1.0) — The component-wise standard deviation of the trained latent space, computed using the first batch of the training set. It is used to scale the latent space to unit variance when training the diffusion model: the latents are scaled with `z = z * scaling_factor` before being passed to the diffusion model and scaled back with `z = 1 / scaling_factor * z` when decoding (see the sketch after this list). For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) paper. For this autoencoder, however, no such scaling factor was used, hence the default value of 1.0.
- **force_upcast** (`bool`, *optional*, defaults to `False`) — If enabled, forces the VAE to run in `float32` for high-resolution pipelines, such as SDXL. A VAE can be fine-tuned/trained to a lower range without losing too much precision, in which case `force_upcast` can be set to `False` (see this fp16-friendly [AutoEncoder](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix)).
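To make the `scaling_factor` bookkeeping concrete, here is a minimal sketch of the two formulas above. The tensor shape is illustrative; for AutoencoderTiny the factor is 1.0, so both steps are no-ops (Stable Diffusion's full KL VAE uses 0.18215 instead).

```python
import torch

scaling_factor = 1.0  # AutoencoderTiny default; the full SD VAE uses 0.18215
z = torch.randn(1, 4, 64, 64)  # example latent batch (latent_channels = 4)

z_for_diffusion = z * scaling_factor               # applied after encoding
z_for_decoding = z_for_diffusion / scaling_factor  # applied before decoding
```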
|
A tiny distilled VAE model for encoding images into latents and decoding latent representations into images. `AutoencoderTiny` is a wrapper around the original implementation of `TAESD`.

This model inherits from `ModelMixin`. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).
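As a quick illustration of those inherited `ModelMixin` methods (the local directory name is arbitrary):

```python
from diffusers import AutoencoderTiny

vae = AutoencoderTiny.from_pretrained("madebyollin/taesd")
vae.save_pretrained("./taesd-local")                    # inherited from ModelMixin
vae = AutoencoderTiny.from_pretrained("./taesd-local")  # reload from disk
```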
|
#### disable_slicing

`( )`

Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing decoding in one step.
|
#### disable_tiling

`( )`

Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing decoding in one step.
|
#### enable_slicing

`( )`

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor into slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
|
#### enable_tiling

`( use_tiling: bool = True )`

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and for processing larger images.
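A short sketch of how these switches are used in practice, continuing the pipeline setup from the usage examples above:

```python
# Trade a little speed for lower peak memory during decoding.
pipe.vae.enable_slicing()  # decode the batch one sample at a time
pipe.vae.enable_tiling()   # decode/encode each sample tile by tile

# Revert to single-step decoding once memory pressure is gone.
pipe.vae.disable_slicing()
pipe.vae.disable_tiling()
```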
|
#### forward

`( sample: torch.FloatTensor, return_dict: bool = True )`

**Parameters**

- **sample** (`torch.FloatTensor`) — Input sample.
- **return_dict** (`bool`, *optional*, defaults to `True`) — Whether or not to return a `DecoderOutput` instead of a plain tuple.
|
#### scale_latents

`( x: torch.FloatTensor )`

Maps raw latents to `[0, 1]`.

#### unscale_latents

`( x: torch.FloatTensor )`

Maps `[0, 1]` back to raw latents.

## AutoencoderTinyOutput

### class diffusers.models.autoencoders.autoencoder_tiny.AutoencoderTinyOutput

`( latents: torch.Tensor )`

**Parameters**

- **latents** (`torch.Tensor`) — Encoded outputs of the `Encoder`.

Output of the `AutoencoderTiny` encoding method.
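To show how these pieces fit together, here is a hedged sketch of a full encode/decode round trip. The `.latents` and `.sample` attributes follow the output classes documented above; the affine form of `scale_latents`/`unscale_latents` shown in the comments is an assumption based on the `latent_magnitude` and `latent_shift` parameters, not a guaranteed implementation detail.

```python
import torch
from diffusers import AutoencoderTiny

vae = AutoencoderTiny.from_pretrained("madebyollin/taesd")

image = torch.rand(1, 3, 512, 512)   # placeholder image batch
latents = vae.encode(image).latents  # AutoencoderTinyOutput.latents

# scale_latents/unscale_latents presumably apply an affine map such as
#   scaled = clamp(x / (2 * latent_magnitude) + latent_shift, 0, 1)
# so raw latents can be stored as [0, 1] values (e.g. saved as an image).
packed = vae.scale_latents(latents)
unpacked = vae.unscale_latents(packed)

reconstruction = vae.decode(unpacked).sample  # DecoderOutput.sample
```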
|
|