---
base_model: THUDM/CogVideoX-5b
library_name: diffusers
license: other
tags:
  - text-to-video
  - diffusers-training
  - diffusers
  - lora
  - cogvideox
  - cogvideox-diffusers
  - template:sd-lora
widget: []
---

# CogVideoX LoRA Finetune

## Model description

This is a LoRA fine-tune of the [THUDM/CogVideoX-5b](https://huggingface.co/THUDM/CogVideoX-5b) model.

The model was trained using [CogVideoX Factory](https://github.com/a-r-r-o-w/cogvideox-factory), a repository of memory-optimized training scripts for the CogVideoX family of models that builds on TorchAO and DeepSpeed. The scripts were adapted from the CogVideoX Diffusers trainer.

## Download model

Download the LoRA weights from the **Files & Versions** tab.
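If you prefer to fetch the weights programmatically, here is a minimal sketch using `huggingface_hub` (the repo id and file name are taken from the usage snippet below):

```py
from huggingface_hub import hf_hub_download

# Download only the LoRA weights file from the Hub; returns the local
# path of the cached file.
lora_path = hf_hub_download(
    repo_id="sayakpaul/optimizer_adamw_steps_1000_lr-schedule_cosine_with_restarts_learning-rate_1e-4",
    filename="pytorch_lora_weights.safetensors",
)
print(lora_path)
```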

## Usage

Requires the 🧨 Diffusers library installed, for example via `pip install -U diffusers transformers accelerate`.

```py
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights(
    "sayakpaul/optimizer_adamw_steps_1000_lr-schedule_cosine_with_restarts_learning-rate_1e-4",
    weight_name="pytorch_lora_weights.safetensors",
    adapter_name="cogvideox-lora",
)

# The adapter scale is determined by the values used during training.
# Here we assume `--lora_alpha` of 32 and `--rank` of 64, giving a scale of 32 / 64.
# The scale can be lowered or raised relative to the training value to weaken or
# amplify the LoRA's effect, up to a tolerance beyond which you may notice no
# effect at all, or numerical overflows.
pipe.set_adapters(["cogvideox-lora"], [32 / 64])

# "None" is the template placeholder; replace it with your own prompt.
video = pipe("None", guidance_scale=6, use_dynamic_cfg=True).frames[0]
export_to_video(video, "output.mp4", fps=8)
```
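If the 5B model does not fit in your GPU memory, Diffusers' standard memory optimizations also apply here. A minimal sketch, continuing from the `pipe` object above (use these in place of the `.to("cuda")` call):

```py
# Move submodules to the GPU only while they are needed, instead of
# keeping the whole pipeline resident.
pipe.enable_model_cpu_offload()

# Decode the video latents in tiles to reduce peak VAE memory.
pipe.vae.enable_tiling()
```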

For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in Diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
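As one example of the fusing workflow mentioned above, continuing from the `pipe` object in the usage snippet (the scale value mirrors the `set_adapters` call and is illustrative):

```py
# Merge the LoRA weights into the base model for slightly faster inference.
pipe.fuse_lora(lora_scale=32 / 64)

# ... run inference as usual ...

# Undo the merge and restore the original base weights.
pipe.unfuse_lora()
```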

## License

Please adhere to the licensing terms of the base model, [THUDM/CogVideoX-5b](https://huggingface.co/THUDM/CogVideoX-5b), as described in its license file and model card.

## Intended uses & limitations

### How to use

See the inference example in the [Usage](#usage) section above.

### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]

(The repository name suggests the run used the AdamW optimizer, 1,000 training steps, a cosine-with-restarts learning-rate schedule, and a 1e-4 learning rate.)