Diffusers documentation

UVit2DModel

This documentation is for Diffusers v0.30.0. A newer version, v0.31.0, is available.


The U-ViT model is a vision transformer (ViT)-based UNet. It combines elements of a ViT (all inputs, including time, conditions, and noisy image patches, are treated as tokens) with elements of a UNet (long skip connections between the shallow and deep layers). These skip connections are important for predicting pixel-level features. An additional 3x3 convolutional block is applied before the final output to improve image quality.

The abstract from the paper is:

Currently, applying diffusion models in the pixel space of high resolution images is difficult. Instead, existing approaches focus on diffusion in lower dimensional spaces (latent diffusion), or have multiple super-resolution levels of generation referred to as cascades. The downside is that these approaches add additional complexity to the diffusion framework. This paper aims to improve denoising diffusion for high resolution images while keeping the model as simple as possible. The paper is centered around the research question: How can one train a standard denoising diffusion model on high resolution images, and still obtain performance comparable to these alternate approaches? The four main findings are: 1) the noise schedule should be adjusted for high resolution images, 2) it is sufficient to scale only a particular part of the architecture, 3) dropout should be added at specific locations in the architecture, and 4) downsampling is an effective strategy to avoid high resolution feature maps. Combining these simple yet effective techniques, we achieve state-of-the-art on image generation among diffusion models without sampling modifiers on ImageNet.

UVit2DModel

class diffusers.UVit2DModel


( hidden_size: int = 1024, use_bias: bool = False, hidden_dropout: float = 0.0, cond_embed_dim: int = 768, micro_cond_encode_dim: int = 256, micro_cond_embed_dim: int = 1280, encoder_hidden_size: int = 768, vocab_size: int = 8256, codebook_size: int = 8192, in_channels: int = 768, block_out_channels: int = 768, num_res_blocks: int = 3, downsample: bool = False, upsample: bool = False, block_num_heads: int = 12, num_hidden_layers: int = 22, num_attention_heads: int = 16, attention_dropout: float = 0.0, intermediate_size: int = 2816, layer_norm_eps: float = 1e-06, ln_elementwise_affine: bool = True, sample_size: int = 64 )

set_attn_processor


( processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]] )

Parameters

  • processor (dict of AttentionProcessor or a single AttentionProcessor) — The instantiated processor class, or a dictionary of processor classes, that will be set as the processor for all Attention layers.

    If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.

Sets the attention processor to use to compute attention.

set_default_attn_processor


( )

Disables custom attention processors and sets the default attention implementation.

UVit2DConvEmbed

class diffusers.models.unets.uvit_2d.UVit2DConvEmbed


( in_channels, block_out_channels, vocab_size, elementwise_affine, eps, bias )

UVitBlock

class diffusers.models.unets.uvit_2d.UVitBlock


( channels, num_res_blocks: int, hidden_size, hidden_dropout, ln_elementwise_affine, layer_norm_eps, use_bias, block_num_heads, attention_dropout, downsample: bool, upsample: bool )

ConvNextBlock

class diffusers.models.unets.uvit_2d.ConvNextBlock


( channels, layer_norm_eps, ln_elementwise_affine, use_bias, hidden_dropout, hidden_size, res_ffn_factor = 4 )

ConvMlmLayer

class diffusers.models.unets.uvit_2d.ConvMlmLayer


( block_out_channels: int, in_channels: int, use_bias: bool, ln_elementwise_affine: bool, layer_norm_eps: float, codebook_size: int )
