# Mochi-1 Preview LoRA Finetune
This is a LoRA fine-tune of the [genmo/mochi-1-preview](https://huggingface.co/genmo/mochi-1-preview) video generation model, trained on custom training data.
## Usage
```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

# Load the base Mochi-1 preview pipeline and apply the LoRA weights
pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("uglysonic3121/animationtest2.0")

# Reduce VRAM usage: offload idle components to CPU and decode the video in tiles
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()

video = pipe(
    prompt="your prompt here",
    guidance_scale=6.0,
    num_inference_steps=64,
    height=480,
    width=848,
    max_sequence_length=256,
).frames[0]

export_to_video(video, "output.mp4", fps=30)
```
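
If you want the LoRA to influence the output more or less strongly, you can fuse it into the base weights at a chosen scale. The snippet below is a minimal sketch assuming the Mochi pipeline's LoRA loader exposes `fuse_lora`; the `0.75` scale is an illustrative value, not a recommended setting.

```python
import torch
from diffusers import MochiPipeline

pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("uglysonic3121/animationtest2.0")

# Bake the LoRA into the base weights at reduced strength.
# lora_scale=0.75 is illustrative; 1.0 matches the default LoRA strength.
pipe.fuse_lora(lora_scale=0.75)
pipe.enable_model_cpu_offload()
```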
## Training details

Trained on Replicate with the [lucataco/mochi-1-lora-trainer](https://replicate.com/lucataco/mochi-1-lora-trainer) trainer.
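
For reference, a training run with that trainer can be launched programmatically via the Replicate Python client. The sketch below is illustrative only: the version hash is a placeholder and the `input` keys (`input_videos`, `steps`) are assumed, so check the trainer's page on Replicate for its actual input schema.

```python
import replicate

# Start a LoRA training job on Replicate (sketch, not the exact command used here).
# Replace the placeholder version hash and consult the trainer's page for the
# real input schema; the keys below are hypothetical.
training = replicate.trainings.create(
    version="lucataco/mochi-1-lora-trainer:<version-hash>",
    input={
        "input_videos": "https://example.com/training-clips.zip",  # hypothetical key
        "steps": 1000,                                             # hypothetical key
    },
    destination="your-username/your-mochi-lora",
)
print(training.status)
```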