
GANime model trained on the Kimetsu no Yaiba dataset using TensorFlow.

The model internally uses a Hugging Face GPT2-large transformer to generate a video given its first and last frames.
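As a rough illustration of the sequence lengths involved (a sketch under assumptions, not the repository's actual code): the VQ-VAE codebook size in the config below (`num_embeddings: 50257`) matches GPT-2's vocabulary size, so each frame can be flattened into codebook indices and consumed by the transformer directly. Assuming the encoder halves the spatial resolution once per `channels_multiplier` entry (the real encoder may downsample one fewer time):

```python
# Back-of-the-envelope token counts for the transformer prompt.
# Assumption: one downsampling stage per channels_multiplier entry;
# the actual GANime encoder may differ.
resolution = 128                          # autoencoder_config.resolution
downsampling_stages = len([2, 4, 8, 8])   # channels_multiplier
latent_side = resolution // 2 ** downsampling_stages
tokens_per_frame = latent_side ** 2

# The first and last frames together form the conditioning prompt.
prompt_tokens = 2 * tokens_per_frame
print(latent_side, tokens_per_frame, prompt_tokens)  # 8 64 128
```

Under these assumptions the conditioning prompt stays well within GPT-2's context window, leaving room for the intermediate frames to be generated autoregressively.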

The following parameters were used for training:

```yaml
model:
  transformer_config:
    remaining_frames_method: "own_embeddings"
    transformer_type: "gpt2-large"
  first_stage_config:
    vqvae_config:
      beta: 0.25
      num_embeddings: 50257
      embedding_dim: 128
    autoencoder_config:
      z_channels: 512
      channels: 32
      channels_multiplier:
      - 2
      - 4
      - 8
      - 8
      num_res_blocks: 1
      attention_resolution:
      - 16
      resolution: 128
      dropout: 0.0
    discriminator_config:
      num_layers: 3
      filters: 64
    loss_config:
      discriminator:
        loss: "hinge"
        factor: 1.0
        iter_start: 16200
        weight: 0.3
      vqvae:
        codebook_weight: 1.0
        perceptual_weight: 4.0
      perceptual_loss: "vgg19"

train:
  batch_size: 64
  accumulation_size: 1
  n_epochs: 10000
  len_x_train: 28213
  warmup_epoch_percentage: 0.15
  lr_start: 1e-5
  lr_max: 2.5e-4
  perceptual_loss_weight: 1.0
  n_frames_before: 1
  stop_ground_truth_after_epoch: 1000
```
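The learning-rate warm-up implied by `lr_start`, `lr_max`, and `warmup_epoch_percentage` can be sketched as follows. This is a minimal linear-warm-up sketch; the repository's actual scheduler, including any post-warm-up decay, is an assumption here:

```python
# Hedged sketch of the warm-up schedule implied by the train parameters;
# the actual scheduler may differ (e.g. cosine decay after warm-up).
lr_start = 1e-5
lr_max = 2.5e-4
n_epochs = 10000
warmup_epochs = int(n_epochs * 0.15)  # warmup_epoch_percentage

def learning_rate(epoch):
    """Linear warm-up from lr_start to lr_max, then hold (assumed)."""
    if epoch < warmup_epochs:
        return lr_start + (lr_max - lr_start) * epoch / warmup_epochs
    return lr_max

print(learning_rate(0))     # 1e-05
print(learning_rate(1500))  # 0.00025, reached after 15% of the epochs
```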

Implementation and documentation can be found here.

