
A tiny, randomly initialized pipeline for testing purposes, based on THUDM/CogView4-6B. It was created with the following script:

```python
from transformers import AutoTokenizer, GlmConfig, GlmModel
from diffusers import CogView4Transformer2DModel, FlowMatchEulerDiscreteScheduler, AutoencoderKL, CogView4Pipeline

# Reuse the real GLM tokenizer; only the model weights are shrunk.
tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-4-9b-chat", trust_remote_code=True)

# Tiny, randomly initialized GLM text encoder.
config = GlmConfig(hidden_size=32, intermediate_size=8, num_hidden_layers=2, num_attention_heads=4, head_dim=8)
text_encoder = GlmModel(config)

# Tiny CogView4 transformer; text_embed_dim matches the text encoder's hidden_size.
transformer_kwargs = {
    "patch_size": 2,
    "in_channels": 4,
    "num_layers": 2,
    "attention_head_dim": 4,
    "num_attention_heads": 4,
    "out_channels": 4,
    "text_embed_dim": 32,
    "time_embed_dim": 8,
    "condition_dim": 4,
}
transformer = CogView4Transformer2DModel(**transformer_kwargs)

# Tiny VAE; latent_channels matches the transformer's in_channels/out_channels.
vae_kwargs = {
    "block_out_channels": [32, 64],
    "in_channels": 3,
    "out_channels": 3,
    "down_block_types": ["DownEncoderBlock2D", "DownEncoderBlock2D"],
    "up_block_types": ["UpDecoderBlock2D", "UpDecoderBlock2D"],
    "latent_channels": 4,
    "sample_size": 128,
}
vae = AutoencoderKL(**vae_kwargs)

scheduler = FlowMatchEulerDiscreteScheduler()

# Assemble the full pipeline and serialize it to disk.
pipe = CogView4Pipeline(tokenizer=tokenizer, text_encoder=text_encoder, transformer=transformer, vae=vae, scheduler=scheduler)
pipe.save_pretrained("./dump-cogview4-dummy-pipe")
```