AnimateDiff is a method for creating videos from pre-existing Stable Diffusion text-to-image models by plugging in a trained motion module.

This repository contains https://huggingface.co/guoyww/animatediff/blob/main/v3_sd15_mm.ckpt converted to the Hugging Face Diffusers format using the Diffusers conversion script (available at https://github.com/huggingface/diffusers/blob/main/scripts/convert_animatediff_motion_module_to_diffusers.py).
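For reference, here is a minimal sketch of how the conversion can be run (invoked through Python's `subprocess` here). The `--ckpt_path` and `--output_path` flag names are assumptions based on the script at the time of writing, so check its argument parser for the exact names:

```python
import subprocess

# Convert the original AnimateDiff motion module checkpoint into the
# Diffusers MotionAdapter format. The flag names below are assumptions;
# verify them against the conversion script's argparse definitions.
subprocess.run(
    [
        "python",
        "convert_animatediff_motion_module_to_diffusers.py",
        "--ckpt_path", "v3_sd15_mm.ckpt",
        "--output_path", "animatediff-motion-adapter-v1-5-3",
    ],
    check=True,
)
```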

The following example demonstrates how to use the motion module with an existing Stable Diffusion 1.5 text-to-image model:

```python
import torch
from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler
from diffusers.utils import export_to_gif

# Load the AnimateDiff v3 motion adapter
adapter = MotionAdapter.from_pretrained("Warvito/animatediff-motion-adapter-v1-5-3")

# Load a Stable Diffusion 1.5-based fine-tuned model
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter)

# Configure a DDIM scheduler with a linear beta schedule for the animation pipeline
scheduler = DDIMScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    beta_schedule="linear",
    clip_sample=False,
    timestep_spacing="linspace",
    steps_offset=1
)

pipe.scheduler = scheduler

# Enable memory savings (VAE slicing and model CPU offload)
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

output = pipe(
    prompt=(
        "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, "
        "orange sky, warm lighting, fishing boats, ocean waves seagulls, "
        "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, "
        "golden hour, coastal landscape, seaside scenery"
    ),
    negative_prompt="bad quality, worse quality",
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=25,
    generator=torch.Generator("cpu").manual_seed(42),
)
# The pipeline returns a batch of videos; take the first one
frames = output.frames[0]
export_to_gif(frames, "animation.gif")
```
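
If you prefer a video file over a GIF, `diffusers` also ships an `export_to_video` helper; a minimal sketch (depending on your `diffusers` version it may require an extra dependency such as `opencv-python` or `imageio`):

```python
from diffusers.utils import export_to_video

# Write the same frames out as an MP4 instead of a GIF
export_to_video(frames, "animation.mp4")
```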