# HunyuanImageTransformer2DModel
A Diffusion Transformer model for HunyuanImage-2.1.
The model can be loaded with the following code snippet.

```python
import torch
from diffusers import HunyuanImageTransformer2DModel

transformer = HunyuanImageTransformer2DModel.from_pretrained(
    "hunyuanvideo-community/HunyuanImage-2.1-Diffusers",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)
```
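An explicitly loaded transformer can also be handed to the text-to-image pipeline. Below is a minimal sketch, assuming the accompanying `HunyuanImagePipeline` class and the same community checkpoint shown above; adjust to the checkpoint you actually use.

```python
import torch
from diffusers import HunyuanImagePipeline, HunyuanImageTransformer2DModel

# Load the transformer explicitly, then pass it to the pipeline so the other
# components come from the same checkpoint. HunyuanImagePipeline is assumed
# here to be the pipeline that accompanies this model.
transformer = HunyuanImageTransformer2DModel.from_pretrained(
    "hunyuanvideo-community/HunyuanImage-2.1-Diffusers",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)
pipe = HunyuanImagePipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanImage-2.1-Diffusers",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(prompt="a watercolor fox in a forest").images[0]
image.save("fox.png")
```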
## HunyuanImageTransformer2DModel

class diffusers.HunyuanImageTransformer2DModel

< source > ( in_channels: int = 64, out_channels: int = 64, num_attention_heads: int = 28, attention_head_dim: int = 128, num_layers: int = 20, num_single_layers: int = 40, num_refiner_layers: int = 2, mlp_ratio: float = 4.0, patch_size: Tuple[int, int] = (1, 1), qk_norm: str = 'rms_norm', guidance_embeds: bool = False, text_embed_dim: int = 3584, text_embed_2_dim: Optional[int] = None, rope_theta: float = 256.0, rope_axes_dim: Tuple[int, ...] = (64, 64), use_meanflow: bool = False )
Parameters
- in_channels (`int`, defaults to `64`) — The number of channels in the input.
- out_channels (`int`, defaults to `64`) — The number of channels in the output.
- num_attention_heads (`int`, defaults to `28`) — The number of heads to use for multi-head attention.
- attention_head_dim (`int`, defaults to `128`) — The number of channels in each head.
- num_layers (`int`, defaults to `20`) — The number of layers of dual-stream blocks to use.
- num_single_layers (`int`, defaults to `40`) — The number of layers of single-stream blocks to use.
- num_refiner_layers (`int`, defaults to `2`) — The number of layers of refiner blocks to use.
- mlp_ratio (`float`, defaults to `4.0`) — The ratio of the hidden layer size to the input size in the feedforward network.
- patch_size (`Tuple[int, int]`, defaults to `(1, 1)`) — The size of the spatial patches to use in the patch embedding layer.
- qk_norm (`str`, defaults to `"rms_norm"`) — The normalization to use for the query and key projections in the attention layers.
- guidance_embeds (`bool`, defaults to `False`) — Whether to use guidance embeddings in the model.
- text_embed_dim (`int`, defaults to `3584`) — Input dimension of text embeddings from the text encoder.
- text_embed_2_dim (`int`, *optional*, defaults to `None`) — Input dimension of text embeddings from the second text encoder, if one is used.
- rope_theta (`float`, defaults to `256.0`) — The value of theta to use in the RoPE layer.
- rope_axes_dim (`Tuple[int, ...]`, defaults to `(64, 64)`) — The dimensions of the axes to use in the RoPE layer.
- use_meanflow (`bool`, defaults to `False`) — Whether to condition the model on an additional MeanFlow timestep input.
The Transformer model used in HunyuanImage-2.1.
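To see how the configuration arguments fit together, here is a small sketch that builds a randomly initialized, shrunken variant. The reduced layer counts are illustrative only; the released checkpoint uses the defaults listed above.

```python
from diffusers import HunyuanImageTransformer2DModel

# Randomly initialized, shrunken variant for quick experiments; the reduced
# layer counts are illustrative, not the released configuration.
model = HunyuanImageTransformer2DModel(
    num_attention_heads=28,
    attention_head_dim=128,
    num_layers=1,
    num_single_layers=1,
    num_refiner_layers=1,
)

# The transformer's hidden width is num_attention_heads * attention_head_dim,
# which with the defaults is 28 * 128 = 3584, matching text_embed_dim.
print(f"{model.num_parameters() / 1e6:.1f}M parameters")
```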
## Transformer2DModelOutput
class diffusers.models.modeling_outputs.Transformer2DModelOutput
< source >( sample: torch.Tensor )
Parameters
- sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch_size, num_vector_embeds - 1, num_latent_pixels)` if Transformer2DModel is discrete) — The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability distributions for the unnoised latent pixels.
The output of Transformer2DModel.
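The output behaves like other Diffusers `BaseOutput` dataclasses: the prediction lives in the `sample` field, and tuple-style indexing works as well. A minimal sketch with a dummy tensor:

```python
import torch
from diffusers.models.modeling_outputs import Transformer2DModelOutput

# Construct the output container directly with a dummy tensor to show the
# access patterns; in practice it is returned by the transformer's forward
# pass when return_dict=True (the usual default in Diffusers models).
out = Transformer2DModelOutput(sample=torch.randn(1, 64, 128, 128))

print(out.sample.shape)       # torch.Size([1, 64, 128, 128])
print(out[0] is out.sample)   # BaseOutput also supports index access -> True
```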