

CosmosTransformer3DModel

A Diffusion Transformer model for 3D video-like data was introduced in Cosmos World Foundation Model Platform for Physical AI by NVIDIA.

The model can be loaded with the following code snippet.

import torch
from diffusers import CosmosTransformer3DModel

transformer = CosmosTransformer3DModel.from_pretrained(
    "nvidia/Cosmos-1.0-Diffusion-7B-Text2World", subfolder="transformer", torch_dtype=torch.bfloat16
)
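
The loaded transformer can then be dropped into a Cosmos pipeline. A minimal sketch, assuming the companion CosmosTextToWorldPipeline and the standard layout of the checkpoint above (the remaining components are pulled from the same repository):

from diffusers import CosmosTextToWorldPipeline

pipe = CosmosTextToWorldPipeline.from_pretrained(
    "nvidia/Cosmos-1.0-Diffusion-7B-Text2World",
    transformer=transformer,  # reuse the module loaded above
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")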

CosmosTransformer3DModel

class diffusers.CosmosTransformer3DModel


( in_channels: int = 16 out_channels: int = 16 num_attention_heads: int = 32 attention_head_dim: int = 128 num_layers: int = 28 mlp_ratio: float = 4.0 text_embed_dim: int = 1024 adaln_lora_dim: int = 256 max_size: typing.Tuple[int, int, int] = (128, 240, 240) patch_size: typing.Tuple[int, int, int] = (1, 2, 2) rope_scale: typing.Tuple[float, float, float] = (2.0, 1.0, 1.0) concat_padding_mask: bool = True extra_pos_embed_type: typing.Optional[str] = 'learnable' )

Parameters

  • in_channels (int, defaults to 16) — The number of channels in the input.
  • out_channels (int, defaults to 16) — The number of channels in the output.
  • num_attention_heads (int, defaults to 32) — The number of heads to use for multi-head attention.
  • attention_head_dim (int, defaults to 128) — The number of channels in each attention head.
  • num_layers (int, defaults to 28) — The number of layers of transformer blocks to use.
  • mlp_ratio (float, defaults to 4.0) — The ratio of the hidden layer size to the input size in the feedforward network.
  • text_embed_dim (int, defaults to 1024) — Input dimension of text embeddings from the text encoder.
  • adaln_lora_dim (int, defaults to 256) — The hidden dimension of the Adaptive LayerNorm LoRA layer.
  • max_size (Tuple[int, int, int], defaults to (128, 240, 240)) — The maximum size of the input latent tensors in the temporal, height, and width dimensions.
  • patch_size (Tuple[int, int, int], defaults to (1, 2, 2)) — The patch size to use for patchifying the input latent tensors in the temporal, height, and width dimensions.
  • rope_scale (Tuple[float, float, float], defaults to (2.0, 1.0, 1.0)) — The scaling factor to use for RoPE in the temporal, height, and width dimensions.
  • concat_padding_mask (bool, defaults to True) — Whether to concatenate the padding mask to the input latent tensors.
  • extra_pos_embed_type (str, optional, defaults to learnable) — The type of extra positional embeddings to use. Can be one of None or learnable.

A Transformer model for video-like data used in Cosmos.
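
For quick shape or smoke tests, the configuration arguments documented above can be overridden to instantiate a much smaller model. A sketch with illustrative values (not a released configuration):

from diffusers import CosmosTransformer3DModel

tiny = CosmosTransformer3DModel(
    in_channels=4,
    out_channels=4,
    num_attention_heads=2,
    attention_head_dim=32,
    num_layers=2,
    text_embed_dim=32,
    max_size=(8, 16, 16),
)
print(tiny.config.patch_size)  # (1, 2, 2), the default patchification
print(f"{tiny.num_parameters():,} parameters")  # far below the 7B release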

Transformer2DModelOutput

class diffusers.models.modeling_outputs.Transformer2DModelOutput


( sample: torch.Tensor )

Parameters

  • sample (torch.Tensor of shape (batch_size, num_channels, height, width) or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
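
When called with return_dict=True (the usual diffusers convention), the Cosmos transformer's forward pass wraps its prediction in this class, with the predicted latents stored in sample. A purely illustrative construction with an arbitrary video-latent shape:

import torch
from diffusers.models.modeling_outputs import Transformer2DModelOutput

# (batch, channels, frames, height, width) — shape chosen only for illustration
output = Transformer2DModelOutput(sample=torch.randn(1, 16, 8, 60, 104))
print(output.sample.shape)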
